Three Popular Psychology Studies That Didn’t Hold Up

A landmark study involving some 270 scientists from around the world has tried to replicate the findings of 100 recent studies from highly ranked psychology journals, and by one measure only 36 percent turned up the same results.

Led by Brian Nosek of the University of Virginia, the effort was part of the Reproducibility Project: Psychology. The work was carried out by a global team and published Thursday in the journal Science.

Nosek and his colleagues found that more than half of the 100 studies examined produced faulty or overstated findings. To address such problems, the researchers urge mandatory registration of research methods in advance, to prevent scientists from picking only the most favorable data for analysis, along with adequate sample sizes and wider reporting of studies that show null results, that is, those that do not support the hypothesis initially put forward.
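As a rough illustration of what "adequate sample sizes" means in practice, the sketch below runs a textbook power calculation for a two-group comparison. The effect sizes and thresholds are illustrative assumptions, not numbers from the Science paper, and the function name is ours.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample test
    (normal approximation): n ~= 2 * (z_alpha + z_power)^2 / d^2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value, two-sided test
    z_power = norm.ppf(power)           # quantile for the desired power
    return 2 * (z_alpha + z_power) ** 2 / effect_size ** 2

# A "medium" effect (d = 0.5) needs about 63 participants per group;
# a "small" effect (d = 0.2) needs about 392 per group.
for d in (0.5, 0.2):
    print(f"d = {d}: about {n_per_group(d):.0f} per group")
```

Under these assumptions, studies with a few dozen participants per group can only reliably detect fairly large effects, which is why small samples so often produce fragile results.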

That doesn’t mean all those unconfirmed studies were wrong, but it is a stark reminder that a single study rarely provides definitive answers. The findings call into question the validity of some scientific claims, but they also point to the difficulty of conducting effective replications and achieving reproducible results.

The Science paper shows that large-scale replication efforts of high quality are doable even in fields like psychology, where until recently there was no strong replication culture. However, Nosek was disappointed by the results.

The studies under scrutiny focused on basic research into how people think, remember, perceive their world, and interact with others.

There are several possible reasons why a replication of a previous study might fail. The first is that there may be slight differences in materials or methods that were not obvious in the published methodology. The second is that the replication failed by chance alone. Finally, the original study could simply be wrong: the results might have been a “false positive”, possibly as a result of researchers enthusiastically pursuing one line of inquiry and ignoring anything that may be inconsistent with it, rather than outright fraud.
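The "failed by chance alone" scenario is easy to underestimate. The small simulation below, a sketch using made-up numbers rather than data from the project, assumes a modest but perfectly real effect and shows how often a typically sized replication still misses statistical significance.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.4   # assumed real standardized effect (hypothetical)
N_PER_GROUP = 30    # an illustrative, commonly seen sample size
N_SIMS = 10_000

successes = 0
for _ in range(N_SIMS):
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    treated = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    # Count the replication as a "success" if it reaches p < 0.05.
    if ttest_ind(treated, control).pvalue < 0.05:
        successes += 1

# With d = 0.4 and n = 30 per group, only roughly a third of
# replications reach p < 0.05, even though the effect is real.
print(f"{successes / N_SIMS:.0%} of simulated replications 'succeeded'")
```

Under those assumptions, a single failed replication says little on its own, which echoes the point above that one study rarely provides definitive answers.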

Just yesterday we reported that a review of climate contrarian papers – journal articles that disagree with the consensus that climate change is likely caused by human activity – found that they were riddled with methodological errors, which would have made their findings impossible to replicate.

The study identified some evidence of factors that influence reproducibility, according to Anup Gampa, a Reproducibility Project team member and Ph.D. candidate at U.Va. Science, he said, is a unique way of gathering knowledge because it relies on reproducibility to build confidence in evidence and ideas. Statistical models have predicted that in most scientific fields and settings the majority of published research findings may be false.
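That striking claim rests on a simple base-rate argument, made famous by John Ioannidis's 2005 essay. The sketch below works through the arithmetic with illustrative numbers; the prior and power values are assumptions chosen for the example, not measurements.

```python
def positive_predictive_value(prior, power, alpha=0.05):
    """Probability that a statistically significant finding is true:
    PPV = (prior * power) / (prior * power + (1 - prior) * alpha)."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# If 10% of tested hypotheses are true and studies are well powered,
# most significant results are genuine ...
print(positive_predictive_value(prior=0.10, power=0.80))  # ~0.64

# ... but with the low power typical of small studies, most
# "discoveries" are false, even before any publication bias.
print(positive_predictive_value(prior=0.10, power=0.35))  # ~0.44
```

The second case shows how low statistical power alone can tip a field into a state where a significant result is more likely to be false than true.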

Marcus Munafo, a co-author of the study, said there was a huge disconnect between the scientific process and researchers’ need to grab attention to advance their careers. “To thrive in science, researchers need to earn publications, and some kinds of results are easier to publish than others, particularly ones that are novel and show unexpected or exciting new directions”, he said.

Johanna Cohoon, a project coordinator with the Charlottesville-based Center for Open Science, said: “The findings demonstrate that reproducing original results may be more difficult than is presently assumed, and that interventions may be needed to improve reproducibility.”
