Are scientists less prone to motivated reasoning?


Image of a group of people in lab coats.
Do these people look prone to motivated reasoning?

A new study lays out a bit of a conundrum in its opening paragraphs. It notes that scientific progress depends on the ability to update what ideas are considered acceptable in light of new evidence. But science itself has produced no shortage of evidence that people are terrible at updating their beliefs and suffer from issues like confirmation bias and motivated reasoning. Since scientists are, in fact, people, the problems with updating beliefs should severely limit science’s ability to progress.

And there’s some indication that it does. Max Planck, for example, wrote that “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up.”

But a new study suggests it’s not as much of a problem as it might be. Taking advantage of a planned replication study, some scientists polled their peers before and after the results of the replication study came out. And most scientists seemed to update their beliefs without much trouble.

Before and after

The design of the new study is straightforward. The researchers behind it took advantage of a planned replication study—one that would redo some prominent experiments and see if they produced the same results. Prior to the results of the replication being announced, the researchers contacted about 1,100 people involved in psychology research. These participants were asked what they thought of the original results.

When the replication work was complete, some of the earlier experiments did replicate, providing greater confidence in the original results. Others failed, raising questions about whether the original results had been reliable. This should provide an opportunity for the research community to update its beliefs. To find out if it had, the researchers behind the new paper went back and found out what the same 1,100 people thought about the experiments in light of whether the experiments replicated.

In practical terms, the research team’s subjects were asked to read about the results of the studies being replicated and then judge whether the findings were likely to represent a “nontrivial” effect. Participants were also asked about whether they were confident in these earlier results or personally invested in them (such as might happen if they based their own research on the results). Half the participants were asked about the quality of the replication experiments and whether those doing the replication had succeeded in reproducing the conditions of the original experiments.

Once the replication was done, all the participants were once again asked to estimate whether the effect tested in the replication was likely to be nontrivial, as well as their confidence in the effect. They also rated the quality of the replication experiments.

This setup allowed the researchers behind the new study to judge whether the participants were updating their thinking in response to the new data. It also provides the opportunity for the researchers to look at some of the factors that influence motivated reasoning, like a personal interest in the outcome. And a participant who is engaged in motivated reasoning might dismiss the replication as being low-quality, which the researchers also asked about. So, overall, this seemed like a thorough study.
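The "updating" the study looks for is, in spirit, what a rational reasoner would do with new evidence. As a purely illustrative sketch (the paper does not model participants this way, and all the numbers below are assumptions, not values from the study), Bayes' rule shows how confidence in an original result should rise after a successful replication and fall after a failed one:

```python
# Hypothetical sketch of rational belief updating via Bayes' rule.
# The probabilities below are illustrative assumptions, not data from the paper.

def update_belief(prior, p_result_if_real, p_result_if_not):
    """Return posterior P(effect is real) after seeing a replication result."""
    numerator = prior * p_result_if_real
    denominator = numerator + (1 - prior) * p_result_if_not
    return numerator / denominator

# Suppose a participant starts out 70% confident the original effect is real.
prior = 0.70

# Assumption: a successful replication is much more likely if the effect is real.
after_success = update_belief(prior, p_result_if_real=0.80, p_result_if_not=0.20)

# A failed replication is much more likely if the effect is not real.
after_failure = update_belief(prior, p_result_if_real=0.20, p_result_if_not=0.80)

print(round(after_success, 2))  # confidence rises above the 0.70 prior
print(round(after_failure, 2))  # confidence falls below the 0.70 prior
```

Motivated reasoning, in these terms, would amount to refusing to lower the prior after a failed replication, or discounting the evidence itself by deciding the replication was low-quality.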

Applying the update

Overall, the participants came out of the study looking pretty good. When a replication succeeded, they were more likely to have confidence that the replicated experiment revealed a significant effect. When the replication failed, they adjusted their confidence in the opposite direction. In fact, the participants updated their beliefs more than they themselves expected they would.

They also showed little sign of motivated reasoning. There was little indication that researchers changed their opinions on the quality of the replication, even when the data called their earlier thoughts into question. Neither did they focus on differences between the original experiments and the replication. Personal interest in the results also didn't make any difference.

Being aware of possible sources of bias might protect people from motivated reasoning, but there was no sign of that here, either. The one thing that did seem to correlate with appropriate belief updates was a self-reported sense of intellectual humility.

So, overall, psychologists don't appear to suffer from the sort of cognitive biases that keep people from accurately incorporating new information. At least when it comes to science; it's very likely that they still do in other areas of their lives.

Some caveats

There are two big caveats. One is that the participants knew that their responses would be kept confidential, so they could afford to state opinions that might cause problems if made public. Thus, there could still be a gap between what the individual participants think in private and how the field as a whole responds to the differences in replication status.

The other caveat is that the participants knew they were taking part in a study on reproducibility. So, they might be expected to shade their answers so that they looked good to their fellow researchers. The main thing that argues against this is that the participants didn't change their opinions as much as you'd expect based on the magnitude of the difference between the original and replication results. In other words, the participants reacted cautiously to a failed replication, which is not something you'd expect from someone doing reputation management.

Even with these caveats, it's probably worth following up on these results. The sorts of behaviors that allow people to maintain beliefs despite contrary evidence are a major societal problem. If scientists can suspend them in some contexts, it would be useful to understand how they do it.

Nature Human Behaviour, 2021. DOI: 10.1038/s41562-021-01220-7


