Call for psychology researchers to use ‘new statistics’

The world's largest scientific psychology organisation is encouraging researchers submitting papers to its influential global journal, Psychological Science, to use a radically different statistical method when reporting their research findings.

The move by the Washington-based Association for Psychological Science, or APS, to adopt the new statistics follows the work of Emeritus Professor Geoff Cumming of La Trobe University in Melbourne.

In a statement released in the United States, the APS said it had made the decision to help boost the replicability of scientific studies and promote robust research practices across all areas of the discipline.

The association is making available an online tutorial, The New Statistics: Why and How, developed by Cumming, for its more than 20,000 members and for anyone else wishing to submit research to the journal.

Head of La Trobe's statistical cognition laboratory, Cumming is the author of Understanding The New Statistics, published by Routledge in the US in 2012.

He said most psychologists, as well as many researchers in biomedical science and other disciplines, had traditionally relied on a statistical technique called null-hypothesis significance testing.

When significance is not significant

This requires results to cross an arbitrary p < .05 threshold before a research outcome can be described as 'significant'. Cumming said significance testing gave an illusion of certainty but was actually extremely unreliable.

'It uses weird backward logic and bamboozles countless students every year in their introduction to statistics,' he said. Instead, the new statistics involves the use of effect sizes, estimation and meta-analysis.
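To make the contrast concrete, here is a minimal sketch, not drawn from Cumming's tutorial and using entirely hypothetical data, that analyses the same two-group comparison first with a significance test and then with estimation, reporting the effect size and its 95% confidence interval:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two groups of 30 participants each.
rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=30)
treatment = rng.normal(loc=108.0, scale=15.0, size=30)

# Null-hypothesis significance testing: a single yes/no verdict at p < .05.
t_stat, p_value = stats.ttest_ind(treatment, control)
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"NHST: p = {p_value:.3f} ({verdict})")

# Estimation ('new statistics'): report the size of the effect and its precision.
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))
df = len(treatment) + len(control) - 2          # simple pooled-df approximation
margin = stats.t.ppf(0.975, df) * se
print(f"Estimation: mean difference = {diff:.1f}, "
      f"95% CI [{diff - margin:.1f}, {diff + margin:.1f}]")
```

The first approach yields only a binary verdict; the second tells the reader how large the effect is and how precisely it has been estimated, which is the information a later meta-analysis can build on.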

The APS said the changes would help strengthen the overall integrity of scientific research, conveying benefits not only for the scientific community but also for the general public.

Cumming said the change was important because statistics and the way people understood and acted upon them could be a matter of life and death.

For example, in the 1970s, parents were advised to put newborn babies to sleep face-down on a sheepskin, even while evidence was gradually accumulating that back sleeping was much safer and greatly reduced the risk of cot death.

Meta-analysis, however, was not available then to integrate the scattered evidence, so the dangerous advice for face-down sleeping persisted. It's estimated that if meta-analysis had been available and used, and the resulting recommendation for back sleeping had been made, as many as 50,000 infant deaths in the developed world could have been prevented.

Although estimation was new to most researchers in psychology and biomedical science, it had long been widely used by physical scientists and engineers, Cumming said.

'It's a much more informative technique, and avoids the worst problems of significance testing.' Meta-analysis, a vital component of the new statistics, allows researchers to integrate findings over a number of related studies.
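As a rough illustration of how such integration works, here is a minimal sketch of standard textbook fixed-effect (inverse-variance) pooling; the study estimates are hypothetical and the method is a generic one rather than anything specific to Cumming's tutorial:

```python
import math

# Hypothetical studies: (mean difference, standard error) from related experiments.
studies = [(4.2, 3.1), (6.8, 2.4), (1.5, 4.0), (5.1, 2.8)]

# Each study is weighted by the inverse of its variance, so more precise
# studies count for more in the pooled estimate.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))       # SE of the pooled estimate

# Normal-approximation 95% confidence interval for the pooled effect.
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(f"Pooled effect = {pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

Note that the pooled confidence interval is narrower than any single study's: combining estimates gains precision, which is why missing studies matter so much.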

But meta-analysis can give reliable results only if all studies on a topic are available. Significance testing, however, distorts the published research record, because scientific journals are more likely to publish a significant result.

So studies that fail to obtain significance tend not to see the light of day, and therefore escape the attention of anyone conducting a meta-analysis.

Clearer conclusions

Cumming said meta-analysis was based on estimation, made statistical significance virtually irrelevant, and could allow clear conclusions to be drawn from a messy research literature. Describing a research result as 'significant' strongly suggested that it was certain, large and important, even though statistical significance implied none of those things.

'Now it is up to researchers to change their deeply entrenched habit of using statistical significance, and move forward to the much more informative new statistics,' he said.