The changing face of psychology

In 1959, an American researcher named Ted Sterling reported something
disturbing. Of 294 articles published across four major psychology journals,
286 had reported positive results – that is, a staggering 97% of published papers were
underpinned by statistically significant effects. Where, he wondered, were all
the negative results – the less exciting or less conclusive findings? Sterling labelled
this publication bias
a form of malpractice. After all, getting published in science should never depend on getting
the “right results”.

You might think that Sterling’s discovery would have led the psychologists
of 1959 to sit up and take notice. Groups would be assembled to combat
the problem, ensuring that the scientific record reflected a balanced sum of
the evidence. Journal policies would be changed, incentives realigned. 

Sadly, that never happened. Thirty-six years later, in 1995, Sterling
took another look at the literature and found
exactly the same problem
– negative results were still being censored. Fifteen
years after that, Daniele Fanelli from the University of Edinburgh confirmed
it yet again
. Publication bias had turned out to be the ultimate bad smell in the car – a
prime example of how irrational research practices can linger on and on.

Now, finally, the tide is turning. A
growing number of psychologists – particularly the younger generation – are fed
up with results that don’t replicate, journals that value story-telling over
truth, and an academic culture in which researchers treat data as their personal
property. Psychologists are realising that major scientific advances will require us to stamp
out malpractice, face our own weaknesses, and overcome the ego-driven ideals that
maintain the status quo.

Here are five key developments to watch in 2014.

1. Replication

The problem: The
best evidence for a genuine discovery is showing that independent scientists can replicate
it using the same method. If it replicates repeatedly then we can use it to build better
theories. If it doesn't then it belongs in the trash bin of history. This simple
logic underpins all science – without replication we’d still believe in phlogiston and faster-than-light
neutrinos.

In psychology, close replications of previous methods
are rarely attempted. Psychologists tend to see such work as boring, lacking in
intellectual prowess, and a waste of limited resources. Some of the most
prominent psychology journals even have explicit policies against publishing replications, instead offering readers a
diet of fast food: results that are novel, eye catching, and even counter-intuitive. Exciting results are fine provided they replicate. The problem is that nobody bothers to try, which litters the field with results of unknown (likely low) value.

How it’s changing:
The new generation of psychologists understands that independent replication is
crucial for real advancement and to earn wider credibility in science. A
beautiful example of this drive is the Many
Labs project
led by Brian Nosek
from the University of Virginia. Nosek and a team of 50 colleagues located in 36
labs worldwide sought to replicate 13 key findings in psychology, across a
sample of 6,344 participants. Ten of the effects replicated successfully.

Journals
are also beginning to respect the importance of replication. The prominent
outlet Perspectives
on Psychological Science
recently launched an initiative that specifically publishes direct replications of previous studies. Meanwhile, journals such as BMC Psychology and PLOS ONE officially disown the requirement for researchers to report novel, positive findings.

2. Open access

The problem: Strictly
speaking, most psychology research isn’t
really “published”
– it is printed within journals that expressly deny access to the public (unless
you are willing to pay for a personal subscription or spend £30+ on a single
article). Some might say this is no different to traditional book publishing, so what's the problem? But remember that the public being denied access to science is the very same public that already funds most
psychology research, including the subscription fees for universities. So why, you might ask, is
taxpayer-funded research invisible to the taxpayers that funded it? The answer is complicated enough to fill a 140-page government report, but the short version is that the government places the business interests of corporate publishers ahead
of the public interest in accessing science.

How it’s changing:
The open access movement
is growing in size and influence. Since April 2013, all research funded by UK
research councils, including psychology, must now be fully open
access
– freely viewable to the public. Charities such as
the Wellcome
Trust
have similar policies. These moves help alleviate the symptoms of closed access but don’t address the root cause, which is market dominance by traditional subscription publishers. Rather than requiring journals to make articles publicly
available, the research councils and charities are merely subsidising those
publishers, in some cases paying them extra for open access on top of their
existing subscription fees. What other business in society is paid twice for a
product that it didn’t produce in the first place? It remains a mystery who, other than the
publishers themselves, would call this bizarre set of circumstances a “solution”.

3. Open science

The problem: Data
sharing is crucial for science but rare in psychology. Even though ethical
guidelines require authors to share data when requested, such requests are usually
ignored or denied
, even when coming from other psychologists. Failing to publicly
share data makes it harder to do meta-analysis and easier for unscrupulous
researchers to get away with fraud. The most serious fraud cases, such as that of Diederik Stapel, would
have been caught years earlier if journals had required the raw data to be published alongside research articles.

How it’s changing:
Data sharing isn’t yet mandatory, but it is gradually becoming unacceptable for
psychologists not to share. Evidence shows that studies which share data tend to be more accurate and less likely to make statistical
errors
. Public repositories such as Figshare and the Open Science Framework
now make the act of sharing easy, and new journals including the Journal of
Open Psychology Data
have been launched specifically to give authors
a way of publishing their datasets.

Some existing journals are also introducing rewards to
encourage data sharing. Since 2014, authors who share data at the journal Psychological
Science
will earn an Open Data
badge
, printed at the top of the article. Coordinated data sharing
carries all kinds of other benefits too – for instance, it allows future
researchers to run meta-analyses on huge volumes of existing data, answering
questions that simply can’t be tackled with smaller datasets.

4. Bigger data

The problem: We’ve
known for decades that psychology research is statistically underpowered.
What this means is that even when genuine phenomena exist, most experiments
don’t have sufficiently large samples to detect them. The curse of low power cuts both ways: not only is an underpowered experiment likely to miss finding water
in the desert, it’s also more likely to lead us to a mirage.
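
To make this concrete, here is a minimal power simulation – a sketch with illustrative numbers (a modest true effect, 20 participants per group), not figures from any particular study. It counts how often a standard t-test detects an effect that genuinely exists:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.4        # a modest real difference (Cohen's d); illustrative
n_per_group = 20         # a typical small-sample psychology experiment
n_simulations = 10_000

detections = 0
for _ in range(n_simulations):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(control, treatment)
    detections += p < 0.05

print(f"Estimated power: {detections / n_simulations:.2f}")
# With these numbers, only about a quarter of experiments detect the
# real effect – the rest miss the water in the desert.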

How it’s changing:
Psychologists are beginning to develop innovative ways to acquire larger samples. An exciting approach is Internet testing, which enables easy data collection from
thousands of participants. One recent study managed to replicate
10 major effects in psychology using Amazon’s Mechanical Turk
. Psychologists
are also starting to work alongside organisations that already collect large
amounts of useful data (and no, I don’t mean GCHQ). A great example is collaborative research
with online gaming companies. Tom Stafford from the University of Sheffield
recently published an extraordinary study of learning patterns in over 850,000 people by working
with a game developer.

5. Limiting researcher “degrees of freedom”

The problem: In psychology,
discoveries tend to be statistical. This means that to test a
particular hypothesis, say, about motor actions, we might measure the
difference in reaction times or response accuracy between two experimental conditions. Because the
measurements contain noise (or “unexplained variability”), we rely on
statistical tests to provide us with a level of certainty in the outcome. This
is different to other sciences where discoveries are more black and white, like
finding a new rock layer or observing a supernova.
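
As a concrete sketch (with made-up numbers, not data from any real study), the statistical test behind a typical finding might look like the following: two noisy sets of reaction times compared with a t-test, which yields a graded level of certainty rather than a yes/no answer.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated reaction times (ms): condition B is 20 ms slower on average,
# but trial-to-trial noise of ~100 ms obscures the difference.
condition_a = rng.normal(500, 100, size=40)
condition_b = rng.normal(520, 100, size=40)

t_stat, p_value = stats.ttest_ind(condition_a, condition_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# The p-value quantifies certainty in the outcome; it is nothing like
# the black-and-white evidence of a new rock layer or a supernova.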

Whenever experiments rely on inferences from statistics,
researchers can exploit “degrees of freedom” in the analyses to produce
desirable outcomes. This might involve trying different ways of removing
statistical outliers, or trying several different statistical models, and then only reporting the approach that
“worked” best in producing attractive results. Just as buying all the tickets in a raffle guarantees a win,
exploiting researcher degrees of freedom can guarantee a false
discovery.
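
A toy simulation makes the raffle logic concrete. This is my own illustrative sketch, not an analysis from any published study: both conditions are generated with no real difference, yet shopping between three analysis options and reporting only the best one inflates the false-positive rate beyond the nominal 5%.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_simulations = 30, 10_000
false_positives = 0

for _ in range(n_simulations):
    # Two conditions with NO genuine difference between them
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    p_values = [stats.ttest_ind(a, b).pvalue]
    # Option 2: re-test after "flexible" outlier removal (trim beyond 2 SD)
    p_values.append(stats.ttest_ind(a[np.abs(a) < 2], b[np.abs(b) < 2]).pvalue)
    # Option 3: switch to a different statistical model (non-parametric test)
    p_values.append(stats.mannwhitneyu(a, b, alternative="two-sided").pvalue)
    # Report only whichever analysis "worked" best
    false_positives += min(p_values) < 0.05

print(f"False-positive rate: {false_positives / n_simulations:.2f}")
# Nominally 5%, but cherry-picking across analyses pushes it well above.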

We fall into this trap because of incentives and human nature. As
Sterling showed in 1959, psychology journals select which studies to publish not based
on the methods but on the results: getting published in the most prominent, career-making journals requires researchers to obtain novel,
positive, statistically significant effects. And because statistical significance
is an arbitrary threshold (p < .05), researchers have every incentive to tweak their analyses
until the results cross the line. These behaviours are common in psychology –
a recent survey led by Leslie John from Harvard University estimated
that at least 60% of psychologists selectively report analyses that “work”. In many cases such behaviour may even be unconscious.

How it’s changing:
The best cure for researcher degrees of freedom is to pre-register
the predictions and planned analyses of experiments before looking at the data.
This approach is now standard practice in medicine because it helps prevent the desires of the researcher from influencing the outcome. Among the basic life sciences, psychology is now leading
the way in advancing pre-registration. The journals Cortex,
Attention
Perception Psychophysics
, AIMS Neuroscience
and Experimental
Psychology
offer pre-registered articles in which peer review
happens before experiments are conducted. Not only does pre-registration rein
in researcher degrees of freedom, it also prevents journals from selecting which papers to publish based on the results.

Journals aren’t the only organisations embracing
pre-registration. The Open Science
Framework
invites psychologists to publish their protocols, and the 2013 Declaration of
Helsinki
now requires public pre-registration of all human research
“before recruitment of the first subject”.

We’ll continue to cover
these developments at HQ as they progress throughout 2014.
