‘A high stakes version of Groundhog Day’
As the reality of the coronavirus pandemic set in during March, we looked at the work of psychologists attempting to understand how the crisis is affecting us, and to inform our response to it. A few months later, hundreds of studies have been conducted or are in progress, examining everything from the spread of conspiracy theories to the characteristics that make people more likely to obey lockdown measures.
However, some researchers have raised the alarm. They’re worried that many of these rapid new studies are falling prey to methodological issues that could lead to false results and misleading advice. Of course, these aren’t new problems: the pandemic comes at the end of a decade in which the field’s methodological crises have been thrust firmly into the spotlight. But is the coronavirus pandemic causing researchers to fall back on bad habits – or could it lead to positive change for the field?
A methodological crisis…
The past decade has been a turbulent one for psychology. Researchers have come to realise that a lot of psychological research rests on rather shaky foundations. A pivotal 2015 study from the Open Science Collaboration, for instance, attempted to replicate the findings of 100 psychology studies published in three influential journals, finding a significant effect for just 36 of the 97 studies that had originally found a positive outcome. Other replication attempts have cast doubt on well-known findings that appear in many introductory textbooks.
And it’s not all about reproducibility: even when findings do hold up, they don’t necessarily generalise beyond the narrow group in which they were found. Most work focuses on participants who are Western and Educated and come from Industrialised, Rich and Democratic countries – an issue that has become known as psychology’s WEIRD problem.
Of course, none of this will be news if you follow psychology research (or read the Research Digest blog). And things have undoubtedly improved (Leif Nelson and Joseph Simmons, in a 2017 paper, concluded ‘the Middle Ages are behind us, and the Enlightenment is just around the corner’). Researchers are more aware than ever of the underlying causes of these problems. Increasingly, they are pre-registering their studies so that their methods and hypotheses are out in the open before they even begin collecting their data. Replication studies abound, weeding out those findings that fail to replicate. Large collaborations are popping up to study psychological phenomena with huge numbers of participants across multiple countries.
… meets a health crisis
Enter the Covid-19 pandemic. On the surface, psychology seems like it should have a lot to contribute in a crisis whose management relies on getting people to act in certain ways. But many researchers point out that the methodological issues that have come to the fore in the past ten years are all the more serious when it comes to an actual health emergency.
Some critics say that evidence from past psychological research is simply too flawed or opaque to inform decisions that involve life-or-death situations – an argument perhaps best exemplified by a recent preprint from Hans IJzerman from Université Grenoble Alpes and colleagues, which questions whether psychology is really ‘crisis ready’. The team explicitly call out the lack of generalisability and reproducibility in past work. How can we be sure that studies based on small, ‘WEIRD’ pools of participants in very specific circumstances apply to a broad population in the midst of a crisis? Andrew Przybylski, Director of Research at the Oxford Internet Institute and co-first-author on that preprint, likens the situation to ‘a high stakes version of Groundhog Day’. Just because we’re trying to figure out how to respond to a crisis doesn’t mean that all of those issues that have come to light in the past ten years have just gone away, he says.
Of course, others have expressed more optimism about the role of psychology. This preprint was partly a reaction to a review published in Nature Human Behaviour, in which Jay Van Bavel and colleagues outlined the ways that psychology could support the response to the coronavirus pandemic. While acknowledging that evidence is limited, the team make a number of suggestions based on past work, such as taking into account that people have an ‘optimism bias’, believing bad things are more likely to happen to others than themselves. And we have published dozens of perspectives from psychologists about how the field can help right now – some in this very issue. Still, the debate about whether or not the field is ‘crisis ready’ continues.
Letting our guard down
And for many, that sense of repeating past mistakes also permeates much of the rapid research that psychologists have produced since the crisis began. Writing at The 100% CI in late March, Anne Scheel from Eindhoven University of Technology expressed worries that new studies attempting to understand the pandemic are being rushed out with major flaws: they may have a small number of participants and be underpowered, for instance. And there’s still the question of generalisability. Yes, these studies are being conducted in the context of the pandemic so are at least more relevant to the current situation. But they’re still conducted in an artificial environment – usually online – and often consist of surveys that may not tap into how people think or behave in their daily life. And that WEIRD problem is not going anywhere.
In other words, in ‘crisis mode’, researchers may be falling for the same old pitfalls that psychology has been trying to move on from in the past decade. ‘[I]t feels as if we’ve put our guard down rather than up,’ Scheel writes. Three months after typing those words, does she feel like things have improved? ‘My long-story-short answer would be: I’ve rarely felt so grimly vindicated,’ says Scheel.
Meanwhile, new scales to tap into attitudes and behaviours related to coronavirus have cropped up – but here, too, many researchers have expressed scepticism about the assumptions and methodologies used to develop them. Take the ‘Fear of Covid-19 Scale’, for instance, a seven-item scale that has already been translated into many languages and which asks questions like ‘I cannot sleep because I am worried about getting coronavirus-19’. The implication is that higher scores on the scale are bad; indeed, the authors write that the scale will ‘be useful in providing valuable information on fear of COVID-19 so as to facilitate public health initiatives on allaying public’s fears’.
But another group (led by Craig Harper) found that people who scored higher on the scale were more likely to practise positive public health behaviours like social distancing. They suggest that rather than measuring some pathological ‘fear’, the scale may actually be tapping into adaptive negative emotions that help us respond to dangerous situations. Again, concerns about the validity of measures aren’t new (in fact, the same team recently made a similar argument that scales of ‘social media addiction’ are really tapping into normal, rather than pathological, social behaviour, as we reported in April). But these issues are arguably more problematic in a crisis situation.
None of this is to imply there aren’t many psychologists doing useful research right now – indeed, we shared some of the important work being done in areas like mental health in that May issue article. However, it’s clear that the crisis has highlighted – and potentially exacerbated – many of the field’s existing problems and limitations.
A way forward
But there have also been a number of suggestions for how to improve the research response. With calls for data sharing and collaboration, and the increased use of open science practices like Registered Reports, there are some bright spots, says Przybylski.
The preprint that claimed psychology is not crisis-ready had one unlikely proposal: draw from rocket science. NASA uses a system of ‘technology readiness levels’ to determine whether a new piece of technology is ready to be deployed. The first level indicates that researchers have reliably observed some principle that could help develop a technology; at later levels they have tested a piece of technology in an appropriate environment; and at the highest level they have successfully used the complete system in a mission.
The team writes that most psychological research findings haven’t even passed that first level: it’s still unclear whether many of the effects researchers have found are reliable, let alone whether they can be used in real-world interventions. So they suggest a psychological equivalent of NASA’s framework, called ‘Evidence Readiness Levels’, to guide the development and implementation of psychological research. Interestingly, they’re not the only ones who have suggested a NASA-inspired rating system. Another preprint from Kai Ruggeri and colleagues (including NASA chief scientist James Green) proposes a similar rating system that could allow decision makers to quickly assess the quality of evidence.
Others have argued that the actual infrastructure by which psychologists communicate and share information needs to be adapted to better suit the crisis. Along with several colleagues, Ulrike Hahn at Birkbeck, University of London has set up an initiative that aims to ‘reconfigure behavioural science for crisis knowledge management’. It’s a kind of meta-science project, says Hahn: ‘In a sense [it’s] trying to use extant tools to do something like build what the internet would be like, if the internet was still run by scientists, for scientists’. To that end, the team have set up subreddits where behavioural scientists can talk to each other and to policy makers and journalists, as well as a growing database of resources on psychology and the coronavirus. Although the project is still in a proof-of-concept stage, the team hope that these sorts of forums can facilitate the kind of speedy, transparent discussion and early evaluation of ideas and results that is necessary in a crisis, while also making information readily available to policy makers and the public.
Researchers have also highlighted the urgent need for more large-scale, collaborative studies. In an April paper in Lancet Psychiatry, for instance, a multidisciplinary team (including numerous psychologists, with Emily Holmes as lead author) outlined priorities for mental health research during the pandemic. Alongside various recommendations, the authors call for work to be conducted at scale with multiple research groups and networks, warning against ‘the current uncoordinated approach with a plethora of underpowered studies and surveys’. And at least some funding bodies and research councils are beginning to recognise the need to consolidate efforts in this way, notes Przybylski, who was also a co-author on this paper. Having large consortiums working together is ‘the kind of thing that I hope will eventually move psychology into the domain of “real science” along with physics and chemistry,’ he says.
That idea that improvements in our response to the pandemic could also lead to improvements in psychology more generally underlies many of these suggestions. Przybylski says that the notion of evidence readiness levels, for instance, ‘has been implicit in a lot of our thinking for a long time’, well before coronavirus emerged. Similarly, infrastructure that facilitates communication between researchers and builds an online community is always going to be helpful, says Hahn. ‘That kind of “ideal internet for science” won’t stop being useful once the crisis has passed.’
Of course, whether or not the crisis does end up leading to long-term change remains to be seen. Some studies have already been published following rapid review, but most of the larger scale research will only hit the journals months or years down the line. Will we then look back at the pandemic as a time when psychological research held a strong line of defence and response, or when we let our guard down and allowed poor practices to spread?
- Matthew Warren is editor of the British Psychological Society's Research Digest.
‘Authentic, genuine, we’re-all-in-this-together connectedness’
Several perspectives written for The Psychologist website during lockdown considered research during the crisis, and beyond. Read them via tinyurl.com/psychmagcorona – here, The Psychologist Editor Jon Sutton draws out a few themes.
The crisis prompted Liuba Papeo to reevaluate her life’s work. ‘I began questioning the very existence and relevance of my research. What I do, as a cognitive neuropsychologist who studies the relation between brain and behaviour, often feels far from the stuff of the real world. …With the explosion of the Covid-19 crisis, that feeling had become stronger than ever. Everybody suddenly turned to scientists for explanations and solutions, and I felt like I had nothing to say or to offer.’ However, although Papeo recognised that ‘we have to invest as much as we can now in Covid-related research’, she argued that ‘other lines should stay open and alive. We do not know exactly what the future will be like. We cannot know where those lines of research will take us.’
Exploring similar themes, Emma Smith admitted that ‘It felt vain and tone deaf to care about my PhD at all.’ Others noted that their research had become ‘harder, and more important’, with family researchers Bonamy Oliver and Alison Pike calling for ‘a suite of free and open source tools that enable robust understanding of the multiple aspects of contemporary family processes in these strangest of times.’
For the researchers on the Repeated Assessment of Mental health in Pandemics (RAMP) Study the crisis emphasised the importance of a multidisciplinary understanding, while for Taryn Talbott it was an opportunity to embrace technology such as Instant Messaging to conduct qualitative research remotely. Maddi Pownall thought in terms of collaboration. ‘The ability to survive academia is reliant on togetherness with other people. It’s the hum of excitement when meeting with a potential collaborator, the tipsy introductions at a conference wine reception, the room full of laughter from frustrated PhD students, the one-to-ones with struggling students. When you strip back all of this, as the pandemic has done so mercilessly, the bare bones of academia are exposed. It’s not as pretty a place to be.’ Pondering the scope for the pandemic to bring us closer together, Pownall hoped to sustain ‘a sense of authentic, genuine, we’re-all-in-this-together connectedness’.
Sarah Redsell, Lynn Laidlaw, Judit Varkonyi-Sepp and Sarah Hotham noted that the pandemic has ‘stimulated countless applied research projects from academics in all disciplines, including psychologists. However,’ they said, ‘not all research being conducted is high quality and some of it does not appear to include outcomes of value to patients or the public.’ The authors proceeded to make a strong case for Public and Patient Involvement and Engagement (PPIE), research being done ‘with’ and ‘by’, as opposed to ‘about’ or ‘for’, people who use services. ‘PPIE is a process interwoven with the entire research process from concept to dissemination,’ they wrote.
Describing the work of the Health Psychology Exchange in facilitating PPIE, Redsell and her co-authors reported: ‘Early on, health psychology investigators informally reported that the urgency of the Covid-19 pandemic meant they did not think they had time to include PPIE in their research activities. The process of involvement appeared mystical to some investigators, and misunderstandings about the extent of the role of PPIE in research generally led to it either being left out completely or bolted on once a near top copy of the research project had been developed by the academic team. Investigators who were interested in including PPIE in their proposals appeared put-off by a sense of needing to embrace it completely rather than pragmatically.
This contrasts with the perspective of public contributors who report that they are keen to help at any stage of research project development and are fully aware of the need for pragmatism, speed and rapid turnaround. It seems likely that the challenges the health psychologist investigators faced were similar to those of academics working in other disciplines. This suggests that involvement is perceived as the ‘icing on the cake’ of research, rather than valued by academics. This may be underpinned by paternalism in that investigators may not think public contributors can provide rapid involvement, despite not actually asking them.’
The piece concluded with a call for health psychologists to serve as the ‘flag bearers of behaviour change within the discipline by ensuring that public views and perspectives are included in health and social care research, both during and post-Covid-19… This shift would ensure that finite funding resources are supporting research that is truly valid and meaningful for the public.’