Gambling on Reproducibility

The poor reproducibility dogging psychological science has been well documented in recent months. Now, an international team of researchers, some of whom were the first to sound the alarm about the field’s replication problems, suggests that a solution may lie in setting up prediction markets in which psychologists, acting as traders, place bets on whether a particular study’s findings will be replicated in future attempts.

“The results show that a collection of knowledgeable peers do have a good sense of what will replicate and what won’t,” Caltech behavioral economist Colin Camerer, who was not involved with the study, told Nature. “This information is in the judgments of peers but has never been collected and quantified until now.”

The study’s findings, published yesterday (November 9) in PNAS, further indicate that psychologists simply asked in isolation to predict which studies are likely to be successfully replicated perform no better than random chance would dictate. But add money and an ability to see where peers are placing their bets, and the quality of those predictions increases significantly. “There is some wisdom of crowds; people have some intuition about which results are true and which are not,” study coauthor Anna Dreber, a behavioral economist at the Stockholm School of Economics, told The Atlantic. “Which makes me wonder: What’s going on with peer review? If people know which results are really not likely to be real, why are they allowing them to be published?”

Dreber and her colleagues instructed 92 psychologists participating in University of Virginia psychologist Brian Nosek’s Reproducibility Project, which was designed to test the replicability of 100 psychology studies published in 2008, to place bets on whether 41 of those studies would successfully replicate. Each participant was given the equivalent of $100 and told to buy or sell contracts on individual studies based on their perception of each study’s reproducibility. Each participant could see the trading activity of all the others, as in stock-market trading. A trader could double their money if a study was reproduced successfully, but would lose the stake if it wasn’t. “If you believe the result will be replicated, you buy the contract, which increases the price,” Dreber told FiveThirtyEight. “If you don’t believe in a study, then you can short-sell it.”
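
Neither the article nor Dreber’s quotes spell out the market’s plumbing, but the setup she describes maps onto a standard binary prediction-market contract. What follows is a minimal sketch, assuming a contract that pays 1 if the study replicates and 0 if it does not, priced between 0 and 1, with a crude linear price-impact rule standing in for a real order book; the class names, the liquidity parameter, and the trading scenario are illustrative assumptions, not the study’s actual trading software.

```python
# Minimal sketch of a binary replication contract: the price lives in [0, 1]
# and doubles as the market's implied probability that the study replicates.
from dataclasses import dataclass, field

@dataclass
class Trader:
    name: str
    cash: float = 100.0  # each participant starts with the equivalent of $100
    positions: dict = field(default_factory=dict)  # study_id -> contracts held (negative = short)

@dataclass
class ReplicationMarket:
    study_id: str
    price: float = 0.5       # starting price; also the implied replication probability
    liquidity: float = 0.05  # assumed price impact per contract (stand-in for an order book)

    def trade(self, trader: Trader, quantity: int) -> None:
        """Buy (quantity > 0) or short-sell (quantity < 0) at the current price."""
        trader.cash -= quantity * self.price
        trader.positions[self.study_id] = trader.positions.get(self.study_id, 0) + quantity
        # Buying pushes the price up, selling pushes it down, clamped to [0, 1].
        self.price = min(1.0, max(0.0, self.price + self.liquidity * quantity))

    def settle(self, trader: Trader, replicated: bool) -> None:
        """Each contract pays 1 if the study replicated, 0 otherwise; shorts pay symmetrically."""
        payout = 1.0 if replicated else 0.0
        trader.cash += trader.positions.pop(self.study_id, 0) * payout

# A believer buys, a skeptic short-sells, and settlement rewards whoever was right.
market = ReplicationMarket("study_17")  # hypothetical study
alice, bob = Trader("Alice"), Trader("Bob")
market.trade(alice, 3)   # Alice expects replication, so she buys
market.trade(bob, -2)    # Bob doubts the result, so he short-sells
market.settle(alice, replicated=True)
market.settle(bob, replicated=True)
print(f"final price {market.price:.2f}; Alice ${alice.cash:.2f}, Bob ${bob.cash:.2f}")
```

Read this way, a contract trading above 0.50 amounts to a market prediction that the study will replicate, which is presumably how accuracy figures like the 71 percent below are tallied; buying at 0.50 and being paid 1.00 at settlement is also what lets a correct trader roughly double the money at stake.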

The markets correctly predicted 71 percent of the replication outcomes. When the same participants were instead surveyed individually about each study’s replication odds, they were right only 58 percent of the time.

So could a similar system be used in addition to peer review to vet studies in psychology and other disciplines? Perhaps, said Nosek. “We may be able to use prediction markets to be more efficient in deciding what needs to be replicated, and making estimates of uncertainty about studies for which replication is not likely or possible,” he told The Atlantic.

Others aren’t so sure. “It would be unwise to use prediction markets to draw conclusions about the replicability of individual studies,” psychology researcher Simine Vazire of the University of California, Davis, told Nature. “But I think they could provide useful information about moderators of replicability—what types of studies do prediction markets anticipate to be more likely to replicate?”
