An exciting time to be a psychologist?
With hints of hope and progress, but still with a healthy dose of scepticism, the British Psychological Society, Experimental Psychology Society and Association of Heads of Psychology Departments, in association with Wiley, hosted a second event at the Royal Society to discuss the state of replication and reproducibility in psychology. Two years on from the first such event, it is plain to see that change has been fast, unforgiving at times, and has sparked even more debate across the discipline.
Professor Andy Field (University of Sussex) posed the question of whether researchers should really analyse their own data – he answered quickly with a resounding no. Field said that while some researchers engage in questionable research practices on purpose to inflate their effects, many simply ‘know not what they do’. He pointed to p-hacking – a practice in which researchers selectively report data or try different statistical analyses until they produce a seemingly statistically significant result. While few researchers would admit to doing this, Field showed a graph comparing the distribution of p-values we would expect across the literature with the p-values actually reported. Perhaps unsurprisingly, there is a disproportionately high number of p-values just below 0.05 – the threshold for statistical significance across much of psychology.
Another of these questionable research practices is ‘forking paths’: similar to p-hacking, except that rather than deliberately analysing data until a significant result appears, researchers might spot an interesting effect in their data and test it post hoc, or decide only after seeing the data to exclude outliers. Field said using independent analysts would avoid many of these potential problems.
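The problem Field describes can be illustrated with a small simulation (a hypothetical sketch, not something presented at the event): even when there is no real effect, simply ‘peeking’ at null data – testing once, then collecting more participants and testing again – pushes the false-positive rate above the nominal 5 per cent. The sample sizes and number of looks below are arbitrary choices for illustration.

```python
import math
import random

def p_value_two_sided(mean, n, sd=1.0):
    """Two-sided p-value for a z-test of 'true mean is 0' with known sd."""
    z = abs(mean) * math.sqrt(n) / sd
    # Normal CDF via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def simulate(n_studies=10_000, n1=20, n2=40, peek=True, alpha=0.05, seed=1):
    """Fraction of purely null 'studies' that end up declared significant."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        # Data drawn from a null distribution: no real effect exists
        data = [rng.gauss(0, 1) for _ in range(n2)]
        first_mean = sum(data[:n1]) / n1
        if p_value_two_sided(first_mean, n1) < alpha:
            hits += 1
        elif peek:
            # Not significant? Collect more participants and test again
            full_mean = sum(data) / n2
            if p_value_two_sided(full_mean, n2) < alpha:
                hits += 1
    return hits / n_studies

print(simulate(peek=False))  # close to the nominal 0.05
print(simulate(peek=True))   # inflated above 0.05 by the second look
```

With a single, pre-planned test the false-positive rate sits at the advertised 5 per cent; adding just one extra ‘look’ at the data inflates it noticeably, which is why pre-registering the analysis plan (or handing the data to an independent analyst) matters.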
Nothing is new under the sun, as they say, and Professor Susan Fiske (Princeton University) was witness to another crisis in social psychology during her early career in the 1970s – many were worried then about the lack of replications and about the relevance of the research in general. Fiske said she had wandered into the current crisis naively, and after her own experience she has turned her attention to how psychology researchers communicate with each other via methods blogs.
In a turn of events she dubbed ‘Fiskegate’, Fiske was asked to write about the effects of new media on science and scientists. In her article, which was leaked before publication, she described ‘self-appointed data police’ giving out ferocious critiques, and colleagues leaving the field entirely because of ‘methodological terrorism’. Her comments drew a sharp response from methodological critics – many of whom post about their findings on blogs. It is also worth mentioning that one of Fiske’s PhD students was Amy Cuddy, whose work on power posing was openly critiqued in methods blogs recently.
Two of Fiske’s graduate students were given the task of finding out more about the online methods blogs which concerned her. They looked at 41 blogs, which averaged around one post per month. While we might expect these bloggers to be younger researchers, they tend to be slightly more established: the bloggers were 71 per cent male, 92 per cent white, and 74 per cent mid- to late-career researchers with established citation counts.
Fiske and her colleagues also looked at who the blog posts were about. It seemed bloggers mentioned specific ‘targets’ in around 11 per cent of posts, and those individuals tended to be male rather than female – contrary to the suggestion seen on social media recently that this is largely a case of male critics targeting female researchers. Fiske said she was hoping to bring data to bear on the controversies she had been involved with.
Since the replication crisis hit, the psychology community has begun to see some hope, and to look back at what’s been achieved with a great deal of pride. Professor Daryl O’Connor (University of Leeds), also chair of the British Psychological Society’s Research Board, said that since Brian Nosek’s 2015 paper – which revealed only 36 per cent of psychology studies replicated – psychology had seen some key changes that would be important for the whole of science. He pointed to the Center for Open Science and Open Science Framework set up by Nosek and colleagues. He also welcomed the surge in journals accepting registered reports, a campaign fronted by Professor Chris Chambers (Cardiff University). In an attempt to remove publication bias, researchers register their proposed hypotheses, methods and statistical analyses with a journal before collecting data and, if their plans are accepted, they are guaranteed publication once the experiment is complete, regardless of whether or not their results are significant. One BPS journal published by Wiley, the Journal of Neuropsychology, has recently begun offering this type of publication, with the aim of rolling it out to all 11 BPS journals eventually.
Of course, O’Connor said, all was not entirely rosy. While some see this era as something of a renaissance for psychology, many are uncomfortable with the tone of some of the online critiques, as mentioned by Fiske. A recent article in the Boston Globe sparked a debate about the debate itself, with several prominent psychologists (including Steven Pinker) calling for an end to ‘social media hate mobs’. O’Connor said this division reminded him of Brexit: people on both sides of the debate – those publishing critiques and those bearing the brunt of them – were all in their own personal echo chambers, not quite seeing the viewpoint of the other. He said that while there was a relatively small amount of bullying going on as part of this movement, when it does happen it can be hugely damaging to individuals. O’Connor concluded positively, saying that psychological science had truly been a trailblazer over the last two years. He praised those revolutionaries who have improved practice and propelled us forward: ‘It is an incredibly exciting time to be a psychologist,’ he said.
Professor Eric-Jan Wagenmakers (University of Amsterdam), an expert in Bayesian statistics, began his talk on radical transparency in statistical reporting with a dilemma: ‘Dr X has a favourite theory that she has worked on and published about previously. Dr X designs an experiment to test a prediction from her theory. Dr X collects the data, a painstaking and costly process. Part of her career and those of her students ride on the outcome. Now the data need to be analysed. If p < .05, the experiment is deemed a success; if p > .05, it is deemed a failure. Who is, without a shadow of a doubt, the most biased analyst in the entire galaxy, past, present, and future?’ Of course, the answer is Dr X, and herein lies the problem, Wagenmakers said. Data are analysed, as a matter of routine in psychology, by the world’s most biased analyst, who will usually have little statistical training, and with absolutely no supervision. When psychologists find a p-value of less than .05, Wagenmakers added, there is rarely any further doubt cast upon that finding.
Researchers set out to find the truth behind a given question, but they also hope to find data that will leave little room for doubt, develop a coherent theoretical framework and publish papers which make interesting claims. Wagenmakers said that, as well as perverse incentives within academic publishing, we are also faced with a deep fear of uncertainty – and both lead to publication bias, massaging of the data and HARKing (or ‘hypothesising after results are known’). There was a clear need, he added, to become more transparent about what we’re doing in psychology. If our statistical methods and data were less opaque, people might become more honest, and we might become more forgiving of imperfections or unclear results.
In an attempt to instil scientific rigour in undergraduate students and to increase replications in the field, Dr Katherine Button (University of Bath) has set up a project to help third-year psychology students collaborate on a replication study for their final-year dissertation project. Button said that third-year students’ projects are often under-resourced and carried out under heavy time constraints. Assessments of these projects also tend to focus on individual contributions and the novelty of an experiment or its findings.
Button realised that if she could have undergraduate students collaborate on a project to replicate an established research finding, while also pre-registering the study’s methods and proposed analyses, this would give students the best start in terms of methodological training and would also add an invaluable replication attempt to the literature.
Eight undergraduate students from the University of Bath, the University of Exeter and Cardiff University worked together on their project under the supervision of Button, Professor Chris Chambers and Dr Natalia Lawrence, attempting to replicate research by Chambers and Lawrence which explored training response inhibition to food. The supervisors came up with a research question and protocol in the year prior to the students’ involvement, and later asked the students to come up with their own hypotheses to test at the same time. Methods and statistical analysis plans were pre-registered on the Open Science Framework. Once the results are in, the full data set will also be published on the framework.
Button said it was, indeed, an exciting time to be a psychologist. While rigorous research takes more time and more resources, she said, funders and publishers were becoming more aware of the need for more rigorous and transparent research. Consortium studies, she said, were a way to tackle some of the problems with psychology, instilling best practice at the grassroots and changing psychology from the ground up.