Reproducibility, replication and open science

A collection of our coverage on a period that has been variously described as either a 'disaster' or 'triumph' for Psychology.

In the May 2012 issue, we published a special set of contributions on replication [download the PDF or see here]. At that time, awareness of issues around the reproducibility of scientific findings was growing… the roots stretched back decades, but through high-profile papers such as John Ioannidis's 'Why most published research findings are false', special sections in publications such as Perspectives on Psychological Science, and controversial papers such as Daryl Bem's 'Feeling the future', a revolution was growing.

In the years since, we have repeatedly returned to the topic, and we collect some of that coverage here, in roughly chronological order:

The 2012 special, with opening contribution from Stuart J. Ritchie, Richard Wiseman and Christopher French

Rows erupt over replication attempts

Simone Schnall on her experiences

The genesis of the Open Science Framework, and Registered Replication Reports

Partially replicating Milgram's obedience studies

The Reproducibility Project reports on attempts to replicate 100 psychology studies. Was it a disaster or triumph for Psychology? It was by no means received uncritically.

Is science broken? A 2015 debate.

A letter on the multiple sources of the problem of reproducibility

2016 articles on our struggle between science and pseudoscience, and whether we are - to put it frankly - buried in bullshit

Reporting from a BPS debate, with video

Nothing to smile about… the failed replication attempts were totting up by this point

Curing the five 'diseases' stifling psychological research

The 'seven deadly sins' of Psychology, by Chris Chambers, reviewed; plus an extract

'There's this conspiracy of silence around how science really works' - we meet Marcus Munafo

Around this time, 2017, there was a lot of talk around a 'new age' for Psychology, of turning a crisis into a revolution, and of the cornerstones of a new culture

…But the following year, it seemed the mood had soured somewhat, with increasing talk of tone and respect in the debate. Some felt the real crisis in Psychology was one of exaggeration, even in terms of our progress in reforming our science. Some were taking a wry look at the characters involved in the debate. Those in the thick of it still felt it was an 'exciting time to be a psychologist', and Brian Nosek at our Annual Conference said 'it's not a crisis, it's a reformation'. The growing registered reports movement looked to make results 'a dead currency'.

In 2019 we reported on a European perspective, and on the screentime debate – in some ways, the replication / open science discussions in microcosm.

A few times, we have asked psychologists and those in the media about the challenges of writing / making programmes based on psychological findings during the replication crisis.

Over the years, there has perhaps been a growing appreciation that the incentive structure in academia is central to the issues faced around reproducibility and open science, as discussed here by Brian Nosek. Some continue to feel the 'brouhaha' is overstated.

Then Covid hit, and sound scientific foundations and evidence-based statements perhaps became more important than ever. We talked to Stuart Ritchie, one of the authors on that original 2012 special. It was a 'high stakes version of Groundhog Day', a 'jolt of transformation'.

In October 2020, we highlighted the importance of massive collaboration, creative roles in 'team science', interdisciplinary work, and researching with those you may not agree with. The following month we were looking at whether 'open science' is really 'bropen science'.

In 2021, Tom Loncar used the example of 'power posing' to examine the replication debate. We also looked for lessons in the history of open science.

No doubt we will return to the replication debate, and this page will become an evolving resource. We still think there are interesting angles to explore further, e.g. in this Twitter thread – is there something of a gleeful tone to the sharing of failed replication attempts, which may ignore key methodological differences between the studies, fail to acknowledge the involvement of the original researchers in these efforts to replicate their work, and imply that actual scientific misconduct is the primary explanation for a failure to replicate?
