‘We are all involved in this false information universe’

Our editor Jon Sutton meets Tom Buchanan, Professor of Psychology at the University of Westminster, to talk misinformation.

12 January 2022

Your background is in personality and social psychology and, for the last 20 years or so, how people behave online. How did you come to focus in on false information and what motivates people to share it?

A lot of my earlier work was around deception in online communication, so there’s a historical thread there. But also round about 2016, a lot of things were happening in society: the election of Donald Trump, the Brexit referendum. There was an explosion of interest, certainly among psychologists, in the role that false information online had in those events. It’s not a new research area – political scientists and communications theorists have been working on false information, propaganda, misinformation, for many, many years. But more psychologists realised that we should be paying a bit more attention to this as a discipline. 

Also, like many people, I’ve encountered misinformation in personal settings, which raises curiosity and flags up the importance of thinking about this as a serious social issue. Around the time of the Brexit referendum, there was a lot of material being bandied about that wasn’t true. People who I knew personally started posting things on social media that really surprised me. Out of character based on what I knew about them, their backgrounds and so on. It really took me aback in some cases, and that was one of the things that spurred me to start investigating it a little bit more formally.

You mention information that ‘wasn’t true’: a clear judgement on what is therefore ‘misinformation’. Yet we know from the history of psychology itself that some things treated as conspiracy theories at the time turn out to be true… MK Ultra, things like that. How do you go about making that judgement, particularly for research purposes, on what you’re going to describe as misinformation?

There are big questions there about the nature of truth… but there’s a common distinction in the literature, between misinformation and disinformation. Disinformation is essentially material that is shared in the full knowledge that it is untrue: someone might make up a story and post it online for whatever reason – political gain, financial gain. But that’s deliberately telling lies. Misinformation is material that people share or spread in the belief that it’s true, even though it isn’t. They’re mistaken about it. The same story could be disinformation, or misinformation, depending on your perspective, and depending on who’s sharing it. So if I make up a lie, and I tell it to you, that’s disinformation. If you then tell it to someone else, because you thought it was true, then that’s misinformation.

Because of that ambiguity around the person’s motives, I prefer to talk about false information, which can be either disinformation or misinformation depending on the context. Other people have talked about ‘soft facts’, about ‘malinformation’, and of course the term ‘fake news’, which has become so politicised that it’s essentially useless as a technical definition. So yes, there are big issues around how we talk about information, but what I’m interested in at the moment is the person’s motivations in sharing the material, and whether or not they believe it to be true at the point where they share it. There is objective truth, in many cases, and then there are some facts and beliefs that are contested. Science is all about disproving hypotheses, so we should expect things that used to be considered true to no longer be considered true in the future. That only makes it more important to think about what people believe to be true, and why, as a motivator of their behaviour.

Do those lines get blurred in people’s own minds? Do people who have actively made up information come to the point where they believe even that to be true? And if people are confronted over the sharing of misinformation, do they hold their hands up or cling to those beliefs?

Thinking about disinformation, there’s an element of intent there, of malevolent behaviour. That is not something that you’re going to change easily by having a reasoned conversation. There’s a lot of research on debunking, ‘pre-bunking’ and inoculating people against misinformation, educating people about how to detect misinformation, and some work on the kind of conversations one can have with people. But I don’t think there’s a clear answer on how easy it is to change people’s beliefs. All this links back to classic work on the psychology of persuasion… it’s a complicated area.

Do you personally challenge people?

I try to avoid conflict in most areas of life. If something is clearly wrong, and I’m in a position to say something about it, then I often would; in other circumstances, it’s just not worth getting into the arguments. You may be challenging deeply held beliefs that people have. So they’re not going to react well to you publicly saying, ‘this is a load of nonsense, you shouldn’t be saying that’. 

The BBC have good guidance on how one could go about having those conversations. Some of the key points are: try not to get emotional. Don’t be dismissive. Don’t publicly shame people for their beliefs. Try to encourage questioning and critical thinking: ‘Why do you think that? What’s the evidence? What are the person’s qualifications for saying that?’

In a feature on our Research Digest, Emily Reynolds wrote about research led by Mike Yeomans at Harvard Business School, about being receptive and trying to bridge the gap – ‘I understand that…’, ‘so what you’re saying is…’ – trying to find any common ground to build on, and generally reacting in a calm, empathic way to the extent that you’re able to do that.

Yes, it’s sometimes easier to say that than actually to do it, especially in the context of family arguments, and so on.

I went to see Robin Ince recently, the comedian and writer. He works with Professor Brian Cox a lot, who apparently goes into a sort of meltdown if confronted by ‘Flat Earth’ views… it just does not compute. Robin said he would find it very difficult to have that kind of conversation with a Flat Earther because it runs so deep, and encompasses such a constellation of views. But one of the interesting things he said was that if you watch videos from Flat Earthers, often what you’ll notice is they start with, ‘So I’ve not been having a very good day…’, and then get on to ‘your government’s lying to you, the whole world is lying to you’.

Absolutely, the social context is very important. Some research suggests that belief in false material – whether it’s conspiracist ideation, or political misinformation – tends to be associated with feelings of powerlessness, and loss of control of one’s life. Some research points to that as an important underlying factor as to why people become particularly receptive to these kinds of beliefs. And as you mentioned, it’s fairly well documented in the domain of conspiracy theories that people who believe in one also tend to believe in others. People who have been active in the Coronavirus denial movement are now being more vocal around climate change misinformation [tinyurl.com/4vmt8zy5]. I’m interested in the underlying social and individual level factors that make people vulnerable to thinking about and sharing this type of material.

And I know that Stephan Lewandowsky has talked about that in terms of sense of control. With the pandemic, inevitably, more and more people feel like they’ve lost control of their lives and they’re afraid… so the pandemic triggers more conspiracy-based thinking. His suggestion to counter that is not trying to talk somebody out of their belief, but trying to make them feel good about being in charge of their lives. Their grip on those beliefs perhaps starts to loosen because they don’t need them anymore.

Yes, and another thing that comes out of research that he and his colleagues have been doing is teaching skills of critical thinking and questioning. All of these things are part of a bigger picture, but not the full solution.

That bigger picture encompasses the social context. So every time you see something in the news around politicians and trust, presumably you’re thinking, ‘here we go, this is going to have a knock-on impact on my research…’

Absolutely. If one observes the pronouncements of politicians, and also how politicians are portrayed in the media, it’s easy to become aware of how false information on social media is part of a much larger ecosystem. People talk about it as ‘generalised information disorder’ in society: as well as looking at what’s going on on social media, where my focus is, you’ve got to look at what’s happening in the mass media – broadcast media, newspapers, TV channels, in particular US cable TV channels. You’ve got to look at politicians and public figures themselves, who in many cases are both having false material spread about them and spreading false material themselves, on both international and domestic stages. It becomes a convoluted morass of all kinds of dishonesty going on all over the place. It’s quite easy to become quite angry and cynical about the whole thing, unfortunately.

Within that broader ecosystem, is there still room in your research to look at personality characteristics and more individual level factors? 

I think so. It’s known that there are links between personality and how people behave on social media in general, and there are specific hypotheses one might put forward about particular constructs that might make people more or less likely to engage with false information. I’ve found a fairly erratic pattern of results across a number of studies using various different methodologies. That makes me think that personality does have a role to play, but it’s perhaps a relatively weak role in terms of determining whether or not someone is going to share false information. It will interact with other variables as well. So for instance, there’s some evidence that lower agreeableness is associated with people’s reports on whether or not they would share false information. And lower conscientiousness also appears to be associated with sharing false material. Those are the most stable effects. But some more recent research we’ve been doing suggests that the association of those variables with other characteristics – such as maladaptive traits that lead one to rely more on heuristics, rather than careful reasoning – may be at the root there.

So there’s a lot of work to be done around unpacking exactly what the mechanisms are, what variables are important. We’ve got a focus on schizotypy as a potential mediating variable at the moment. 

Educational level, does that have much impact?

There’s a lot of debate about demographic characteristics and how they impact on sharing false information. Digital literacy is very important. Government reports will focus on raising digital literacy at population level, in order to try and reduce the impact of false information. But not everyone who shares false information is doing it because they’ve been tricked into believing something that’s not true. For those people who are sharing false information that they know is not true, increasing digital literacy is never going to be your solution. It’ll just enable them to do it more effectively, in some ways.

In my own research, things like education level and age have had inconsistent relationships with the likelihood of sharing false information. Part of the reason is that these things are bound up with other variables that may also have an influence – political orientation, for instance. Older people tend to become more conservative, and the majority of false information that’s out there is right wing in orientation. By no means all, but the vast majority of it. So if you’ve got a conservative political orientation, you’re more likely to engage with it.

It would be so much easier if we could say ‘people who share misinformation are disagreeable, shoddy, stupid individuals’… but presumably pretty much anybody can be at risk.

Absolutely. You’ve probably done it yourself, just by clicking on something on Twitter, or Facebook, that seemed plausible at the time, but with hindsight may not have been. You don’t have to actually share it in order to boost the signal, because of the way that social media platforms work. Even if you just spend time looking at it, your dwell time on the page is high. That causes the algorithms to kick in and show it to some other people. It’s increasing eyeballs on the screen, and that’s increasing advertising revenue. We are all part of the problem.

This is why I increasingly limit my time on Twitter to rating Christmas sandwiches.  

In terms of different channels, at least with social media the companies can take some steps: ‘do you want to read this article first?’, for example. But we’ve seen a bit more in the last year or so about the spread of misinformation within WhatsApp groups.

With closed communication channels like WhatsApp, it’s incredibly difficult to get a handle on what’s going on – even for the companies themselves, if they’re using end-to-end encryption. It’s a hidden world that nobody beyond the people in the groups has access to. Very, very difficult to research, or to do any kind of intervention or prevention measures.

Even on social media, though, you’ve mentioned Twitter playing with the interface to change people’s engagement. That’s not something that social media platforms are particularly incentivised to do… all of these interventions are necessary from a social responsibility point of view, but they will also hurt the bottom line in terms of how the platform works and how profitable it becomes. Not an easy answer.

So what next for you in terms of the research direction?

Understanding the role of individual differences in a bit more detail. We’re planning to talk with people who we know shared false information, and essentially ask them ‘why did you do this? What was the motivation?’ We’re hoping that will give us a bit more insight into the kinds of things that we need to be thinking about and measuring in terms of understanding why people share false material online.

It’s going to be so interesting to see how much insight people have, or are willing to share. I can see how you might become cynical and hardened… I’ve had to stop listening to BBC Radio 5 Live phone-ins, where people say things like ‘well, it’s just my personal choice at the end of the day’, as if that’s actually a legitimate way to end a debate. 

Positions become very entrenched, unfortunately. I certainly avoid reading the comment sections under articles. Well worth avoiding. I think I’m probably a little bit more aware of some of the processes and some of the media-level manipulations that are going on around seeding narratives and so on, but do I avoid it? Probably not.

Have you encountered misinformation from within our own profession, whether that’s about psychology or otherwise?

I could go off on a bit of a rant here about the whole of the psychology curriculum being built on misinformation. The replication crisis has demonstrated that many of the ‘facts’ that we’ve been teaching our students for decades may not actually reflect reality. As psychologists, our relationship with truth does bear some scrutiny… we all need to reflect on the amount of credence we can assign to some of the things that we teach. Also, some of the myths that are spread around psychology, particularly at the interface between psychology and the rest of society – some forms of psychological testing, for instance, I would put very firmly in the category of ‘soft facts’. 

So that’s certainly within the profession. The fact that we’re psychologists doesn’t mean that we’re not people like everyone else. We are entirely subject to the same processes, same influences, reasoning biases, as everybody else. We are all involved in this false information universe in exactly the same way.

It’s been a lesson for me from the past couple of years… I’ve received communications from psychologists with some really quite wild conspiracy theories, and my initial reaction was ‘surely we’re better than this, as evidence-based people of science?’ But in some ways, psychology is quite a questioning discipline, we pride ourselves on that critical approach, and perhaps sometimes that can tip over into a mode where we’re questioning whether anything is true, whether you can trust anybody. That could take someone down the rabbit hole.

Yes, into the era of alternative facts. But I may have painted a fairly bleak worldview – ‘we live in a world that’s a tissue of lies, deceit and deception’. That may be true. But there are people out there who are truthful and – it sounds corny – pursue the cause of truth. People doing a lot of work to try and correct false information, whether those be researchers or fact-checking organisations or people working within government organisations, and so on. We should recognise the hard work that a lot of people are doing in what is essentially a very hostile information environment.

Yes, even just in terms of the pandemic, I know there have been psychologists such as Stuart Ritchie setting up websites covering common Covid misinformation. Maybe it’s a question of signal boosting, giving more attention to the efforts to counter misinformation?

Absolutely. One of the findings from some of the research I’ve been doing is that an important factor that influences whether people share material is their level of familiarity with it. The more exposed you are to it, and the more you see it repeated, the more likely you are to share it. And the lesson there is don’t engage with stuff that you think is false even to argue with it, because that increases exposure on social media. But do engage with the stuff that you think is true, and is worth sharing. 

- You can also listen to Professor Buchanan on the latest episode of our Research Digest podcast PsychCrunch, kindly sponsored by Routledge Psychology.