The field of psychology is going through a period of introspective turmoil. On the one hand, it has never been more popular. Its results lead to attention-grabbing headlines and fill books that sit happily on bestseller lists. On the other hand, some of its own practitioners are starting to ask themselves a difficult question: What proportion of the field’s findings are genuine and reliable insights into the human mind, and what proportion are red herrings produced by questionable research practices and, in rare cases, outright fraud?

This line of questioning stems from several sources: classic results that cannot easily be reproduced; studies documenting widespread dodgy practices that lead to false results; the publication of papers that claim the impossible, such as evidence for precognition; and the outing of several fraudulent scientists (with a new case emerging literally as I write this paragraph). To some, these signs augur a looming crisis of confidence for psychology. To others, these problems are unrepresentative, and are being used to damn a field that generally produces solid, reliable results.

The debates can get quite energetic, but one of the more calm-headed voices in them is Brian Nosek’s. A psychologist from the University of Virginia, Nosek has been quietly trying to turn the problems into solutions. “There hasn’t been anything new in all this recent hubbub,” he says. “We’ve been talking about these problems since the 60s, but where it stopped was people complaining. There have been a lot of people who have been frustrated at how science is operating but had no outlet for making it better.”

Nosek’s solution launches today—the Center for Open Science, a new laboratory in Charlottesville, Virginia. Unlike many new research centres, this one is less about doing great science than about making science greater. It will try to foster a new approach to research that will produce more reliable results.

Show your working

The Center’s values are epitomised in its signature project—the Open Science Framework. It’s a website that lets scientists store and share every aspect of their work, including facets that they often hide from one another, let alone from the public. Failed experiments, the minutiae of methods, the genesis of ideas… these are often omitted from published papers or left to languish in personal file drawers. That creates strong biases in the literature, and makes it harder for people to check and reproduce each other’s work.

The Open Science Framework allows scientists to easily document these invisible steps and make them freely available. It’s the embodiment of the school maxim: “Show your working.” “It doesn’t mean that everyone will look at everything I do, but the fact that they can changes two important things,” says Nosek. “First, there’s no file drawer; research doesn’t get lost. Second, I’m accountable because someone could discover what I’m doing.”

The result should be a fuller picture of science, from conception to publication. Nosek hopes that this will also help to put the emphasis on the process of science, and shift it away from the published, peer-reviewed paper. He says that the focus on papers has created “a mindset where the goal is to publish instead of to learn stuff”. This, in turn, creates a conflict between what is good for the researcher (Publish! Publish! Publish!) and what is good for science (Publish good stuff!). The result: lots of papers, and no clear idea about how many of them are reliable.

Check it out

Nosek has been working on that too. Alongside the OSF, he leads the Reproducibility Project—a team of 100-plus psychologists who are going to replicate every study published in 2008 in three major psychology journals. They will stick to the original experiments, work with the original authors where possible, and get a sense of what proportion of published work checks out a second time.

These projects snowballed to the point where four funding agencies contacted Nosek out of the blue. He eventually landed a four-year, $5.25 million grant from the Laura and John Arnold Foundation to embody his ideas within a physical location and hire software developers who will help to build the right online tools. The Center for Open Science was born.

“Trying to coordinate all these activities in one centre is an important step forward,” says Daniele Fanelli from the University of Edinburgh. “Ten years from now, we will be doing science in a very different way but, of course, it is hard to tell if this particular model is the right one. My guess is that some of the Center’s activities will work, while others will need revising. But only trial and error will show the way, so new ideas should be encouraged.”

A new breed of paper

Bobbie Spellman, a fellow psychologist at the University of Virginia, is going to provide a home for experiments like this. As editor of the journal Perspectives on Psychological Science, she is launching a new type of paper devoted to attempts by several labs to independently replicate an existing study.

The twist is that these papers will be reviewed before the experiments are done, rather than after everything has been completed. They will be judged solely on the importance of the question and the quality of the methods. If a proposal is accepted, the Center for Open Science will coordinate the actual research, and the results will be published no matter what they show. That eliminates the temptation to massage methods or data to get positive results, and removes the barrier to publishing negative ones.

It sounds like a win-win, and similar initiatives are afoot at other journals like Cortex. But Spellman says that the response to this new model has been mixed. Some psychologists have shown interest. Others are wary of a “gotcha mentality”, or think that replications are being over-valued to the detriment of fresh research; they want no part of it. Spellman thinks that’s a shame. “The way we don’t keep track of replication studies is a true detriment to our science,” she says. “I believe that scientists should be flattered when others want to replicate their research. It means they’re willing to expend scarce time and strained resources, and must find it interesting and important.”

Beyond psychology

So far, most of the people involved in the Center’s projects are psychologists, which probably reflects Nosek’s own connections, and the fact that psychology has been dealing with these issues head-on. “We need several such efforts, covering a wide range of scientific fields,” says John Ioannidis from Stanford University, who supports the new Center.

Ioannidis has been something of a figurehead for the movement to improve the integrity of science. His provocatively titled oeuvre includes papers such as “Why Most Published Research Findings Are False” and “Why Science is Not Necessarily Self-Correcting”. You can see his point, and not just in psychology. Whenever people have checked the reproducibility of results, the conclusions have been… disappointing. When a team at Bayer Healthcare asked their in-house scientists to check basic studies in cancer, heart disease and other conditions, only a quarter could be validated to the point where projects could continue. When the biotech firm Amgen tried to replicate 53 “landmark” studies in cancer, it managed to confirm just six of them.

“There seems to be some convergence of evidence that many scientific fields produce too much information that is not only unreliable but not even possible to check, repeat, reproduce, or validate in general,” says Ioannidis. “Openness and reproducibility should have been hallmarks of all research efforts but somehow we have not managed to safeguard them.” He thinks the Center for Open Science will help to do that.

Nosek agrees. He is looking to expand the Center’s work beyond psychology and has been talking to people from across the sciences. And even though people commonly tell him that their own discipline faces special challenges, Nosek sees many universal problems and few unique ones. “Every time I’ve gone to a meeting and people say, ‘Let me tell you about my problems’, it’s like looking in a mirror,” he says. “You use a beaker, I use a person. But there’s a lot that’s very translatable.”

Optimism?

Ferric Fang from the University of Washington, who has studied the growing problem of scientific retractions, is equally enthusiastic about the Center’s goals, but tempered in his expectations. “Brian is to be congratulated, [but] in some ways, this new initiative is reminiscent of the utopian communities of 19th-century America. They were idealistic experiments that ultimately failed to reform society at large,” he says. “[But] I hope that they are successful.” The reason for Fang’s scepticism? “I believe it will be very challenging to achieve the Center’s goals in the context of the current reward system of science,” he says.

But Nosek isn’t operating alone. Daniel Simons from the University of Illinois, who will help to edit papers under Spellman’s new publishing model, also sees change in the air. “I think the field is ready for it,” he says. “There is a groundswell of interest in improving methods and reporting practices in the field.” Other journals are publishing replications at a higher rate, and some are producing special issues to discuss these challenges. At a recent meeting of cognitive psychologists, two sessions devoted to methodological issues were standing-room only. “I’ve never seen a methods and stats session packed like that before,” says Simons.

This building interest makes Nosek optimistic. He has already amassed a thriving online community of some 450 members who are interested in these issues. “Once we had that, we realised that something much bigger was possible,” he says. “With a community effort, we could make some real headway on improving scientific practices.”

Correction: I’ve removed the reference to social psychology. I think it’s fair to say that that sub-field is facing more intense scrutiny than others for whatever reason, but some of the examples that followed were not specific to it. For the sake of clarity, I’ve taken it out entirely. Thanks to Jeff Sherman in the comments.
