Study that repeated 100 past psychology experiments finds most failed to replicate


Through replication, science moves forward in a self-correcting system. The researchers said the problem could be even worse in other fields, including cell biology, economics, neuroscience, clinical medicine, and animal research. Belen Fernandez-Castilla, a team member from Universidad Complutense de Madrid, added: “Scientists investigate things that are not yet understood, and initial observations may not be robust”.

But the authors and others stressed that the problem isn’t unique to psychology. This is a question that all journal policymakers should now be asking themselves. First, it’s possible that the original result identified an effect that wasn’t real. And even when a result can’t be replicated, that doesn’t mean there was nothing to the original findings; but, as with all studies that haven’t been independently verified and replicated, they need to be taken with a grain of salt.

Studies that were rated as more challenging to conduct were less likely to replicate, as were findings that were considered surprising. When grant money is scarce, it’s also hard to direct it towards replications. For instance, boring but accurate studies may never get published, or researchers may achieve intriguing results less by documenting true effects than by hitting the statistical jackpot: finding a significant result by sheer luck, or trying various analytical methods until something pans out.
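The “statistical jackpot” is easy to quantify. This short Python sketch (not from the study itself, just an illustration of the arithmetic) shows that at the conventional 5% significance threshold, running many tests of effects that are actually null makes at least one false positive quite likely:

```python
# Probability of at least one "significant" result (p < 0.05)
# when running k independent tests of effects that are all null.
alpha = 0.05

def false_positive_chance(k: int) -> float:
    """Chance that at least one of k null tests comes up significant."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:2d} tests -> {false_positive_chance(k):.0%} chance of a fluke")
# prints a 5%, 23%, and 64% chance of a fluke for 1, 5, and 20 tests
```

With 20 shots at the data, a researcher is more likely than not to find something “significant” even when nothing real is there.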

“If this occurs on a broad scale, then the published literature may become more attractive than the reality”.

The more blemished reality is that it’s impossible for all hunches to be correct. Today’s study culture is interesting, Wicherts added. The research team is concerned that psychology research may not have as solid a foundation as it claims.

Journals prefer to publish novel, cutting-edge research. “It makes it very easy to make parts or all of that data publicly available, to increase transparency and reproducibility”, says psychologist Brian Nosek, leader of the study and executive director of the Center for Open Science, which supports the Open Science Framework.

As with all experiments, there were complicated decisions to be made along the way.

JB: Does this mean scientists aren’t always doing a good job of writing detailed enough methodologies? Take, for instance, researchers concluding that people who exercise more are healthier than people who do not: without a detailed methodology, no one else can check or repeat that result. Committing to methods in advance can also cut down on p-hacking – adjusting the parameters of the analysis until you get a statistically significant p-value – and other intentional or inadvertent post hoc tinkering that may lead to false positives.
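To see why committing to methods in advance matters, here is a minimal simulation (my own sketch, not code from the study) of one common form of p-hacking: “peeking” at the data at several sample sizes and stopping as soon as any look appears significant. Even with no true effect, the false-positive rate climbs well above the nominal 5%:

```python
import math
import random

random.seed(0)

def z_test_p(xs, ys):
    """Two-sided p-value from a large-sample z-test for a mean difference."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def hacked_experiment(n=100, peeks=(50, 75, 100)):
    """Draw two groups from the SAME distribution (no true effect),
    but test at several interim sample sizes and declare success
    as soon as any peek shows p < 0.05."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]
    return any(z_test_p(xs[:k], ys[:k]) < 0.05 for k in peeks)

trials = 2000
rate = sum(hacked_experiment() for _ in range(trials)) / trials
print(f"False-positive rate with optional stopping: {rate:.1%}")
# well above the nominal 5%
```

Preregistering the sample size and analysis before collecting data removes exactly this freedom, which is why the inflated rate disappears when methods are fixed in advance.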

A landmark initiative to reproduce the findings of 100 prominent psychology papers has been successful in fewer than half of cases.

So what does this mean? “Before we began, we tried to define a protocol to follow so that we could be confident that every replication we did had a fair chance of success”. Only 36% turned up the same results. Some of the replications produced very different results, which could be a consequence of the highly variable nature of human subjects. There’s also the possibility that small changes in the methods used for an experiment could change the results in unforeseen ways.

How exact must a replication be? “I am disappointed, in the sense that I think we can do better”.

No clear answers, just hints. Replication is an important part of the scientific process, and recreating studies can reinforce patterns and help lead scientists to more solid conclusions.
