Failure to reproduce results is ‘a normal part of how science works’

Results of a new study in the prestigious journal Science hit the front page of The New York Times late last month. The study found that of 100 papers published in three top psychology journals, more than 60 could not be replicated—that is, the findings could not be reproduced. A storm of speculation ensued, with references to everything from fraud and exaggerated scientific claims to questions about whether the social sciences are even “science” in the first place.

Fortunately, calmer and wiser voices prevailed, including those of psychologists themselves, who welcomed the rigorous evaluation of the field. In an op-ed piece for The New York Times, Lisa Feldman Barrett, University Distinguished Professor of Psychology at Northeastern, homed in on perhaps the most telling point the study revealed: the importance of context. Who (human, animal, molecule) experiences what, and under which conditions, shapes an experiment’s results.

For starters, the study, produced by a collaborative called the Reproducibility Project at the Center for Open Science, never said the results of the original experiments were wrong or even inflated; it said they did not hold up on a second try.

“But the failure to replicate is not a cause for alarm,” noted Barrett sagely. “[I]n fact, it is a normal part of how science works.”

Researchers from fields as diverse as molecular biology and electrical engineering know that. Indeed, Lloyd Sederer, medical director of the New York State Office of Mental Health, wondered in a U.S. News and World Report blog whether the Reproducibility Project’s results “would themselves stand the test of replication.”

Science, wrote Barrett, relies on trials and results, on validating those results, and, when validation fails, on finding clues to the next steps. She cited famous studies in genetics, physics, and psychology that failed to replicate to underline how a particular phenomenon may be true but “only under certain conditions.” If those conditions are altered, even slightly, the results will likely change. Remember the dress that sprang into meme-land across social media in February? Was it black and blue or white and gold? Well, it depended on conditions: in this case, on how each viewer’s brain interpreted the image’s ambiguous lighting.

“Science is not a body of facts that emerge, like an orderly string of light bulbs, to illuminate a linear path to universal truth,” Barrett concluded. “Rather, science (to paraphrase Henry Gee, an editor at Nature) is a method to quantify doubt about a hypothesis, and to find the contexts in which a phenomenon is likely. Failure to replicate is not a bug; it is a feature. It is what leads us along the path—the wonderfully twisty path—of scientific discovery.”
