
Is science broken?

Ella Rhodes reports from a debate at University College London.

18 March 2015

A lively debate was held at London’s Senate House yesterday with panellists from neuroscience and psychology discussing the question: is science broken? If so, how can we fix it? 

The discussion covered the replication crisis, concerns about statistical practice, and broader structural problems. The session began by considering the pre-registration of studies, with Chris Chambers (Cardiff University) explaining its potential. The standard system, whereby an academic completes research and then submits the findings to a journal, can lead to several types of bias, he said. As a member of a large team contributing to the development of registered reports in journals including Cortex, Chambers said the first question he asks of audiences is this: if your aim is to do good science, what part of a scientific study should be beyond your control? ‘The answer you typically get is results,’ Chambers reported. The next question is: in the interests of advancing your career, what part of a study is most important for publishing in high-impact journals? ‘Again, it’s the results.’

This leads Chambers to conclude that ‘The incentives that drive science and individual scientists are in opposition, and I think if we’re going to tackle this issue we should recognise the incentive problem we have.’ Chambers said this incentive issue can lead to publication bias within journals, significance chasing, hypothesising after results are known, and changing hypotheses to fit results. He also said there was no incentive to share data, and that the whole field was encumbered by a lack of statistical power. In the current incentive structure it makes more sense to publish a large number of acceptable papers than a small number based on studies with large samples and high power. In addition, Chambers said, academics often do not see replication as worthy of their time.
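
To give a sense of what ‘high power’ costs in practice, the sketch below (an illustration assumed here, not something presented at the debate) uses statsmodels to compute the per-group sample size a two-sample t-test needs to reach 80 per cent power at different expected effect sizes; the figures show why well-powered studies are expensive relative to many small ones.

```python
# Minimal sketch: per-group n required for 80% power in a two-sample t-test.
# Effect sizes (Cohen's d), alpha and power are illustrative defaults.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for effect_size in (0.8, 0.5, 0.2):            # large, medium, small effects
    n = solver.solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
    print(f"d = {effect_size}: about {n:.0f} participants per group")
# Roughly 26, 64 and 394 participants per group respectively, which is why
# underpowered studies are so much cheaper to run in volume.
```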

Chambers suggested that authors could adopt a philosophy in which what gives hypothesis testing its scientific value is the importance of the question being asked and the quality of the methodology, not the results it produces. He then went on to explain the process of registered reports. In this structure authors submit a stage one manuscript including an introduction, proposed methods, a detailed analysis plan and, if possible, pilot data. This goes to a stage one peer review in which reviewers address whether the hypotheses are well founded and whether the methods and analyses are feasible and detailed enough that someone else could reproduce the experiment directly. Is it a well-powered study with quality controls and manipulation checks included? If these requirements are met the journal offers in-principle acceptance, regardless of the study outcome.

Authors then conduct the research and submit a stage two manuscript, which includes the introduction and method from the original submission, results separated into two sections (the pre-registered analyses from the stage one manuscript, plus any additional analyses carried out after in-principle acceptance), and a discussion. This goes to stage two review, and if the authors have followed the pre-approved protocol and their conclusions are justified by the data, the manuscript is published.

Chambers then discussed 25 questions he is often asked about registered reports. These included the common concern of how one knows whether registered reports are suitable for a given field. He said any area in which at least some research is deductive and hypothesis-driven can potentially benefit if it suffers from problems such as publication bias, significance chasing, post-hoc hypothesising, low power, a lack of replication or a lack of data sharing. Although not all of these problems are solved by pre-registration, Chambers feels it can help to incentivise transparent practices across a number of different areas.

Following a break each of the panellists gave a brief talk around the central debate – is science broken? Dorothy Bishop, Professor of Developmental Neuropsychology at the University of Oxford, said she believed that science was broken, but had hope that it could be fixed. Professor Bishop said science has more of a problem now than it had in the past: we are now able to gather huge multivariate data sets and perform complicated statistics on them. She added: ‘It really comes down to the problem being that you have people presenting exploratory analyses as if they were hypothesis testing… I started to realise what a big issue this was when I realised I wasn’t believing a lot of literature I was reading.’

Bishop said her concerns began while she was looking into conducting EEG research. Reading the literature, she realised how many potential measurements one can take from EEG or ERP data. ‘I saw there was so much flexibility anyone who did anything with this method would find something.’ The use of four-way analysis of variance particularly concerned her, and as a result she carried out several ANOVAs on a large set of random numbers and found several apparent ‘effects’.

She said she was amazed to find that virtually 75 per cent of the runs she performed on her random data would come up with some effect. Bishop added: ‘In analysis of variance you are controlling for the number of levels at any one factor, but you’re not adjusting for the number of comparisons you are doing… If I made an a priori prediction that there was going to be a group by task interaction, only once in all of those 15 runs would I get a false positive. But if you’re not predicting in advance and hypothesising after looking at the data you’re going to find something that looks like an effect.’
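
Bishop’s demonstration is easy to reproduce in spirit. The sketch below (an assumed illustration, not her original code) runs repeated four-way ANOVAs on pure noise in a 2x2x2x2 between-subjects design and counts how often at least one of the 15 tested effects comes out ‘significant’ at p < .05; the cell size and number of simulations are arbitrary.

```python
# Minimal sketch: four-way ANOVA on random numbers, repeated many times.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n_per_cell, n_sims, alpha = 10, 300, 0.05

# Fixed 2x2x2x2 between-subjects design; only the outcome is re-randomised.
levels = [0, 1]
cells = [(a, b, c, d) for a in levels for b in levels for c in levels for d in levels]
design = pd.DataFrame(cells * n_per_cell, columns=["F1", "F2", "F3", "F4"])

runs_with_effect = 0
for _ in range(n_sims):
    design["y"] = rng.standard_normal(len(design))          # pure noise outcome
    fit = ols("y ~ C(F1) * C(F2) * C(F3) * C(F4)", data=design).fit()
    table = sm.stats.anova_lm(fit, typ=2)                   # 15 effects + residual
    if (table["PR(>F)"].drop("Residual") < alpha).any():    # any 'effect' at all?
        runs_with_effect += 1

print(f"Runs with at least one spurious 'effect': {runs_with_effect / n_sims:.0%}")
# With 15 tests at .05 roughly 1 - 0.95**15, about 54% of runs, show something;
# correlated repeated measures can push the figure higher still.
```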

Neuroskeptic, a neuroscience, psychology and psychiatry researcher and blogger, gave a personal perspective on problems with science, describing the events that led him to lose faith in research in his field. He said that as undergraduates people are taught to do statistics in a very particular way, but once a person begins PhD research things change vastly. After gathering some results for his PhD research, Neuroskeptic found he had one significant result out of seven tasks performed by his participants. He said: ‘I thought back to my undergraduate days and thought “what if you do a Bonferroni correction across all the tasks?”. I got the idea that I’d suggest this to my supervisor but I don’t think I ever did; I realised that just wasn’t how it was done. I was very surprised by this. I learned as an undergraduate that you do a Bonferroni correction if you have multiple tasks. I started to wonder: if we aren’t doing this, who else isn’t doing it? I began to lose faith in research in the field.’

Neuroskeptic said he wondered whether there was a good reason that multiple comparisons correction was not used. He added: ‘I still don’t think there’s a good reason we can’t do that. We have come to the tacit decision to accept methods which we would never teach undergraduates were a statistically good idea, but we decide that we’re happy to do them ourselves. That’s how I got on the road to blogging about these issues.’
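
The correction Neuroskeptic describes is simple to show in code. The sketch below uses invented p-values for seven tasks (his actual values were not reported) and compares the conventional .05 threshold with the Bonferroni-corrected one.

```python
# Minimal sketch of a Bonferroni correction across seven tasks.
# The p-values are invented purely for illustration.
p_values = [0.03, 0.21, 0.48, 0.07, 0.62, 0.11, 0.35]   # one 'hit' at .05
alpha = 0.05
bonferroni_alpha = alpha / len(p_values)                 # ≈ 0.0071

for i, p in enumerate(p_values, start=1):
    print(f"Task {i}: p = {p:.2f}  "
          f"uncorrected: {p < alpha}  Bonferroni: {p < bonferroni_alpha}")
# The lone p = .03 result clears the conventional threshold but not the
# corrected one, which is the kind of result Neuroskeptic describes.
```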

Sophie Scott, Professor of Cognitive Neuroscience (University College London), gave a more general talk about how people become involved in science and the legacy scientists leave. Scott said that she did not believe scientific change or progress was something you see in one or two papers, and that it was useful and humbling to consider where one’s own research would be in 100 years. She said it was useful to move away from thinking about the processing of single papers and to look at some of the bigger issues. She added: ‘Some of the assumptions we make in psychology are horrific. We all have unconscious biases but what we could do is look at how that’s influencing the science we do. Because it is, whether you think it is or not.’

Scott said that scientists tend to look at a scientific issue or question within a framework of what can be studied. ‘If I look into the study of language most of the research is on reading written words, there’s less on listening to speech because that’s hard, there’s less on speech production and least of all on writing because that’s even harder.’ On whether science was broken, she concluded: ‘I think it’s interesting to answer these questions but I’d be concerned if we focused too much on process, because that leads to you focusing on individual papers… the bigger picture will tell you where things are going. Rather than focusing on what’s wrong and what’s right, look at what’s going to last and what’s meaningful.’

Sam Schwarzkopf, Research Fellow in Experimental Psychology (University College London), bucked the trend of the event by suggesting that science was working better than ever before. He said that in the history of science there had always been irreproducible results, political obstacles, other academics scooping ideas and publication bias. Dr Schwarzkopf said we should be focusing on science that gives equal weight to exploration and replication, and that all the talk about pre-registration and replication looked at the symptoms rather than the root cause. He concluded: ‘Science is a process that’s constantly evolving and it’s better than it ever has been, it’s more open and more transparent, and there are ways of communicating science which weren’t even around ten years ago. That doesn’t mean science is perfect: we should be asking how can we make science even better than it is.’