In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.
Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.
As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine–they’d been obtained from the Los Angeles County coroner’s office–the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.
In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well–significantly better than the average student–even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student–a conclusion that was equally unfounded.
“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”
Confirmation bias is a fixed feature of human psychology. It is why we all embrace data that supports our beliefs and opinions and reject facts that contradict them. It is the most extensively researched of all forms of faulty reasoning.
We resist information that is contrary to what we have already settled on as our beliefs. People of incompatible views often do not hear or understand one another.
This is a primary source of intergenerational disagreements. It was a major reason for the “worship wars” of years past. And it is confirmation bias that renders the current debate over “online church” intractable.
Once our minds are made up, they’re made up. Only a tremendous shock unsettles them.
Loss aversion is a form of cognitive bias that judges the pain of potential loss to be twice as motivating as the pleasure of potential gain. People tend to prefer avoiding losses over collecting equivalent benefits. As the stakes grow, aversion grows stronger.3
We consulted one church that had stalled after enjoying steady growth for several years. Average Sunday attendance was more than 95% of capacity, and several people had to stand during worship services.
When we presented the obvious solution – go to two services – the church matriarch objected on the grounds that the sense of family would be lost. In her mind, losing contact with a few people she knew only distantly would be more painful than the pleasure of welcoming more people to the church and seeing it grow.
This feature of human psychology militates against change. The mind focuses on potential losses more than on potential gains, perceiving the pain of loss as twice as powerful as the pleasure of an equivalent gain.
So don’t be surprised when people resist the change you’re trying to make in that church. You’re interfering with their well-formed and mostly useful habits!