Professional Practice

We can turn the replication crisis into a revolution

Sam Parsons (University of Oxford) reports from a workshop at the British Psychological Society's Annual Conference in Brighton.

10 May 2017

Psychology as a science should be self-correcting. While some may view the 'replication crisis' as a failing of our science, it may also be a valuable opportunity to adopt better research practices and move psychological research forward. In a session that should be mandatory for all researchers, Mark Andrews (Nottingham Trent University) and Daryl O'Connor (University of Leeds, and Chair of the Society's Research Board) led a workshop on why many published findings are likely to be false and how we might see the crisis as a revolutionary point in the history of psychology.
 
A wide range of issues contribute to the large number of findings that researchers have been unable to replicate. Alongside serious problems in the literature, such as p-hacking and HARKing (hypothesising after the results are known), there are lesser-known, more ‘innocent’ ways in which bias can be introduced. For example, optional stopping (“let’s collect some more data to see if that p = .07 becomes significant with a larger sample”) severely inflates the likelihood of a Type 1 error if it is not controlled for.
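
For readers who want to see that inflation concretely, here is a minimal simulation sketch in Python (my own illustration, not material from the workshop). It assumes a simple "peek once, then collect 20 more per group if p is nearly significant" rule; both groups are drawn from the same population, so every significant result is a false positive.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2017)
n_sims, n_initial, n_extra, alpha = 10_000, 20, 20, 0.05

false_positives = 0
for _ in range(n_sims):
    # Two groups from the same population: there is no true effect
    a = rng.normal(size=n_initial)
    b = rng.normal(size=n_initial)
    p = ttest_ind(a, b).pvalue
    # Optional stopping: if p is "nearly significant", collect more data and re-test
    if alpha <= p < 0.10:
        a = np.concatenate([a, rng.normal(size=n_extra)])
        b = np.concatenate([b, rng.normal(size=n_extra)])
        p = ttest_ind(a, b).pvalue
    false_positives += p < alpha

# The rate should be 5% under the null, but comes out noticeably higher
print(f"False positive rate: {false_positives / n_sims:.3f} (nominal alpha = {alpha})")

The exact amount of inflation depends on the stopping rule assumed here; the point is simply that uncorrected peeking at the data undermines the nominal 5% error rate.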
 
Perhaps the most pervasive element of the replication crisis is the incentive structure of our field. We are driven toward the ideal of publishing as much as possible, regardless of quality, and “sexy” studies offering exciting and novel results are highly rewarded. These studies are often severely underpowered or have significant methodological flaws; more importantly, they are given too much weight in our understanding before they have been thoroughly tested and replicated. In contrast, so-called “failed” replications (a particularly unfortunate term, which dismisses the important implications of not replicating an effect) and other important incremental work receive lower readership and fewer citations.
 
There is good news, however: we already have many of the tools needed to address these issues. One example is the pre-registration service offered by the Open Science Framework. Recording an analysis plan before conducting the analysis allows us to clearly distinguish between confirmatory (hypothesis-testing) and exploratory (hypothesis-generating) analyses. In turn, we can be much more confident that the potential for researcher bias has been reduced. Excitingly, an increasing number of journals are adopting pre-registration practices, offering the opportunity to submit papers before the results are known.

The workshop offered a range of researchers and practitioners the opportunity to discuss how we might move forward to improve research practices in psychology. Agreement emerged that we need to develop a scientific community that values and incentivises pre-registration, adequately powered studies, open data practices, and stronger research methods over publishing as many “sexy” results as possible.

To achieve this, we need more researchers to get involved and to embed these practices in undergraduate teaching. We can teach the next generation of researchers how to pre-register their research projects and to understand the statistical difference between hypothesis-testing and hypothesis-generating analyses. Working together, we can change methods training and incentive structures to promote research that is truly “sexy”: research that reports reproducible effects.

- Sam Parsons attended the Annual Conference on a postgraduate bursary from the Society.

Look out for much more coverage from the Conference here over the coming weeks, and in the July print edition.

Find much more on what has become known as the replication debate in our archive.