From crisis to cornerstones of culture

Ella Rhodes speaks to psychologists leading the way in the replication revolution.

In August 2005 a paper by John Ioannidis, ‘Why most published research findings are false’, ignited a new debate across psychology and the biomedical sciences about the reliability and robustness of many published journal articles. This reproducibility ‘crisis’, while at first troubling for researchers in many areas of science, has taken a more positive turn in recent years, with psychology leading the way.

Many concerns emerged: the validity of antiquated methods of publishing; peer review; and the biases that occur in journals favouring the publication of novel, positive findings over null results or replication attempts. Within psychology specifically many classic findings failed to replicate, and concerns were raised around widespread research practices such as ‘p-hacking’ (trawling datasets for significance).

We spoke to some of the psychologists working to change the culture of academia. Professor of Biological Psychology Marcus Munafò (University of Bristol) emphasised that psychologists are uniquely placed to study the inherent cognitive biases and many other issues prevalent in all areas of academic publishing. ‘I think any focus on research integrity and fraud is missing the point,’ he said. ‘The vast majority of scientists are motivated to do good work, but however hard we train to be objective scientists, we’re still human. The reason psychologists have been at the forefront of the debate is not because psychology has a particular problem as a science but because many of the issues are issues of human behaviour.’

Psychologists, and psychology journals, have been instrumental in coming up with innovative ways to change publishing; the ‘Registered Reports’ model of publishing, where a research question and study protocols are pre-registered prior to data collection, has flourished in psychology and has now been taken up by 50 journals. Psychological Science has introduced open science badges for articles whose authors have shared data, materials and/or pre-registered their studies, and even some undergraduate psychology courses ask students to pre-register dissertation projects.

Munafò pointed out that since this debate came to the fore we’ve moved from a more negative stance into a phase of thinking about how to improve the way we do science. He said: ‘It feels like we’re in the middle of a rapidly evolving natural experiment where people are trying lots of different things with much of the focus on improving the quality of how we do science, and hopefully some of that will stick and some of that will make a difference.’ In fact, some of the results can already be seen: before Psychological Science introduced its badge for open data, rates of data sharing were below 5 per cent; just a few months after its introduction this shot up to more than 40 per cent.

As Editor-in-Chief of Nicotine and Tobacco Research, and thanks to a partnership with Cancer Research UK, Munafò is piloting a scheme to bring funding and Registered Reports together. The Registered Reports format allows journals to accept a study’s research question and methodology for publication in principle, before any data has been collected, and funding panels work in a very similar way. Munafò explained: ‘Logically it makes sense to say once you’re committed to funding research on the basis of a good research question and methodology, that we’ll also guarantee publication.

‘If applicants opt in to the pilot and they’re successful in their application to funding, the journal will take over and go through a second phase of peer review, but as part of the same process, try and fine tune the protocol and the methods, and offer in-principle acceptance of that protocol. Then the applicants have the funding to do the work, and guaranteed publication at the end of that process. Whatever their results they will be published almost immediately after their work is complete.’

Lecturer Katherine Button (University of Bath) instils good research practices in her students at all stages of teaching, and worked with colleagues from Cardiff and Exeter to allow a group of students to collaborate on, and pre-register, a third-year project. Eight students across the three universities pre-registered with the Open Science Framework before data collection began. Dr Button explained: ‘They each collected data following standardised procedures, and we held a mini-conference to discuss results and mutually agree the conclusions for the paper in preparation, on which all students will be co-authors. By working together, the students and we academics were able to conduct a rigorous piece of research that hopefully stands a good chance of being published regardless of the results, thus mutually satisfying both the need for increased methodological rigour and the career pressure to publish.’

How might these messages of doing better, more robust science, filter throughout the academic community? As well as teaching this approach at undergraduate level we need to have higher expectations of ourselves and others, Button said: ‘Expect to see that a study has been pre-registered, and if not there should be a reasonable explanation as to why not. Give more weight to results from studies which have followed best practices – i.e. sample size calculation, protocol and analysis plan pre-registered, data and material made open-access – and be more cautious of results from studies which have not. Be more aware of the role of chance in statistical analysis: with enough flexibility in analysis and reporting it is easy to find a “statistically significant” result. Be cautious.’

Similarly, Dr Pete Etchells, Senior Lecturer in Biological Psychology, has made changes at Bath Spa University, where students will soon pre-register their final-year projects. He said: ‘There’s been lots of discussion of how we can change things from the point of view of journals or grants or general culture in already established researchers, but there’s a lot we can do at the ground level. From the very first day undergraduate psychology students come in we can start teaching these things, as they should be, as normal standard practice.’

More broadly, is the culture among academics changing? In an exclusive extract from his new book The Seven Deadly Sins of Psychology on our website Chris Chambers (Cardiff University) recalls that his proposals for Registered Reports were met with accusations ‘of being “self-righteous,” “sanctimonious,” “fascists,” “a head prefect movement,” “Nazis,” “Stasi,” “crusaders” on a “witch hunt,” and worse.’ Etchells said he hoped we were moving away from the vitriol and anger thrown around at the start of the replication crisis. However, he admitted: ‘I find it constantly fascinating that psychologists seem to be so thin-skinned about some of this stuff, when replications and good methodology are the cornerstones of science. For some reason people take these discussions about replication very personally. I don’t know why that’s the case, it certainly doesn’t need to be, and I think people trying to promote open science practices and replication are doing it for the good of science, not to get at anybody.’

Many of these methodological debates take place on blogs and social media, with some of those whose work has been questioned accusing others of bullying. Etchells said having people with opposing opinions put in a room together to speak as adults may be a better approach (with a workshop at the Society’s Annual Conference perhaps a good example). He added: ‘People can get pretty snarky online, and that’s unhelpful. If you just talk to people in a sensible way, that’ll go a long way to helping allay people’s fears rather than firing off a snarky line on social media – with a caveat that I don’t think that that happens very often. A lot of people talk about online bullies, and I see very little evidence of that in this debate. I think we need to talk to each other more.’

- Further reading on Registered Reports and more. 



Professor of Biological Psychology Marcus Munafò states: ‘The vast majority of scientists are motivated to do good work’.

That is so, so wrong about psychologists. They are not motivated to do good work at all, merely that work which enhances their status among peers and maintains the publication rates set for them by a dysfunctional academic HR. That is why we are in the mess in which we currently find ourselves.

The two words missing from the vocabulary of many (not all) psychologists are "scientific integrity". None of the madcap pre-registration or other silly schemes put forward to make sure psychologists don't cheat gets to the heart of what's wrong with so many - that mindset which says 'scientific integrity' is an optional attribute. 

Remember those famous words from Feynman (1985):

“In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he’s the controller—and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.

Now it behooves me, of course, to tell you what they’re missing. . . . It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

. . . In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.”

Feynman, R.P. (1985). "Surely you're joking, Mr. Feynman!": Adventures of a curious character. New York: W. W. Norton & Co.


Look at those bits highlighted. Now recall how so many other psychologists (students and academics alike) are reporting their results in the manner of a PR consultant or marketing specialist.

It takes a certain kind of personal integrity and intellectual courage to openly present and explore the error in your results - and explain its consequences when discussing those results in relation to their proposed explanatory accuracy. 

Pre-registration doesn't even touch this form of intellectual dishonesty.

So, no, psychologists en masse have, through their inability to behave like scientists, earned themselves the title of the recent article:

Ferguson, C.J. (2015). "Everybody knows psychology is not a real science": Public perceptions of psychology and how we can improve our relationship with policymakers, the scientific community, and the general public. American Psychologist, 70, 6, 527-542.

Until many students and psychologists begin to understand what scientific integrity actually demands from anyone wishing to call themselves a 'scientist', we are stuck with these kinds of silly 'institutional' attempts to present a kind of "she'll be right" optimism.