From the Research Digest

February's selection from the Society's Research Digest blog.

Biological explanations lead to less empathy for patients

The idea that mental illness is related to brain abnormalities or other biological factors is popular among some patients; they say it demystifies their experiences and lends legitimacy to their symptoms. However, studies show that biological explanations can increase mental health stigma, encouraging the public perception that people with mental illness are essentially different, and that their problems are permanent. Now Matthew Lebowitz and Woo-young Ahn have published new evidence that suggests biological explanations of mental illness reduce the empathy that mental health professionals feel towards patients.

Over two hundred psychologists, psychiatrists and social workers were presented with vignettes of patients with conditions such as social phobia, depression or schizophrenia. Crucially, some of these vignettes were accompanied by purely biological explanations focused on factors like genes and brain chemistry, while other vignettes were accompanied by psychosocial explanations, such as a history of bullying or bereavement. Next, the mental health professionals reported their feelings by rating how well a range of adjectives – such as ‘sympathetic’, ‘troubled’ and ‘warm’ – described their current state.

Vignettes accompanied by biological explanations provoked less empathy from the clinicians, and this was true regardless of their specific profession. Both biological and psychosocial explanations triggered similar levels of distress, so the reduced empathy associated with biological explanations was not simply due to psychosocial explanations being more upsetting. The mental health professionals also rated the biological explanations as less clinically useful; biological explanation prompted them to have less faith in psychotherapy and more confidence in drug treatments.

Similar results were found in a follow-up study in which clinicians and social workers were presented with vignettes and explanations that reflected a combination of psychosocial and biological factors, but with one approach more dominant than the other. The idea was that this would better reflect real life. In this case, explanations dominated by biological factors prompted lower empathy from clinicians.

Lebowitz and Ahn suggest biological explanations provoke reduced empathy because they have a dehumanising effect (implying patients are ‘systems of interacting mechanisms’) and give the impression that problems are permanent. With biological approaches to mental illness gaining prominence in psychology and psychiatry, these are potentially worrying results. A silver lining is that both medically trained and non-medical clinicians and social workers in the study saw biological explanations as less clinically useful than psychosocial explanations.

A weakness of the research is the lack of a baseline no-explanation control condition – this means we can’t know for sure if psychosocial explanations increased empathy or if biological explanations reduced it. Also, as the researchers admitted, the vignettes and explanations were greatly simplified. Nonetheless, the findings may still give reason for concern. Lebowitz and Ahn suggest reductions in empathy may be avoided if clinicians understand that ‘even when biology plays an important etiological role, it is constantly interacting with other factors, and biological “abnormalities” do not create strict distinctions between members of society with and without mental disorders.’ cj

 

One in ten student research participants don’t make an effort
In The Clinical Neuropsychologist

It’s near the end of your university semester, you’re tired and now you’ve got to sit through 90 minutes of monotonous psychology tests to fulfil the requirements for your course. This is a familiar situation for psychology undergraduates, many of whom form the sample pools for thousands of psychology studies.

Concerns have been raised before that psychology findings are being skewed by the (lack of) effort students put into their performance as research participants. Last year, for example, researchers found that students who volunteer near the end of term perform worse on psychology tests than those who volunteer earlier.

Now Jonathan DeRight and Randall Jorgensen at Syracuse University have investigated student effort in 90 minutes of computerised neuropsychology tests designed to measure attention, memory, verbal ability and more. The session, which took place either in the morning or the afternoon, late in the spring semester, involved the students taking the same broad battery of tests twice, with a short gap in between. The students received course credits for their time.

To test whether the students were making a proper effort, the researchers embedded several measures – for example, performing worse than chance on a multiple-choice style verbal memory challenge was taken as a sign of low effort; so was performing more slowly on an easier version of a mental control task than on the more difficult version.
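The below-chance criterion works because even a cooperative participant who is guessing at random on a forced-choice test will score around chance; scores reliably below chance are hard to produce without deliberate underperformance. The digest doesn't give the tests' exact parameters, so the sketch below assumes a hypothetical 50-item two-alternative forced-choice task and derives the cutoff from the binomial distribution:

```python
import math

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p): chance-level guessing on n items."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n_items = 50  # hypothetical two-alternative forced-choice test length

# Largest score that a pure guesser would reach or undershoot
# less than 5 per cent of the time:
cutoff = max(k for k in range(n_items + 1) if binom_cdf(k, n_items) < 0.05)
```

With these assumptions, any score at or below the cutoff would arise from pure guessing less than 5 per cent of the time, so such scores flag probable low effort rather than genuine inability.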

Among the 77 healthy student participants who took part (average age 19; 36 women), the researchers identified 12 per cent who failed at least one of the embedded measures of effort during the first battery of neuropsychology tests; 11 per cent also failed one or more measures during the second battery. The vast majority of those who showed low effort had participated in the morning. In fact, focusing only on the morning participants, one in four displayed low effort.

Unsurprisingly, low effort also went hand in hand with poorer performance on the neuropsychology tests, especially one of the longest and dullest cognitive tests (the ‘continuous performance task’), and especially during the second battery. A consistent exception was a particularly complex version of a test of mental self-control (the Stroop task) – perhaps because the challenge of the task provoked more concentration, even from students who were mostly not trying hard.

The estimate from this study of the fraction of student research participants not making an effort is consistent with some prior studies, but not others (the latter research found less evidence of poor effort). Clearly more research is needed. DeRight and Jorgensen concluded that ‘healthy non-clinical samples cannot necessarily be assumed to have put forth adequate effort or valid responding’. They added: ‘Assessing for effort in this population is imperative, especially when the study is designed to provide meaningful results to be used in clinical practice.’ This last, important point is a reference to the fact that results from students are often used to establish estimates of ‘normal’ performance on neuropsychology tests, for comparison when investigating patients with brain damage or other problems. cj

 

After this training regime, people saw letters of the alphabet as being alive with colour  
In Scientific Reports

A training regime at the University of Sussex has successfully conditioned 14 people with no prior experience of synaesthesia – crossing of the senses (see also p.95 and p.106) – to experience coloured phenomena when seeing letters.

The regime took place over nine weeks, a half-hour session every workday together with extra homework. Again and again, the trainees were encouraged to treat the letter ‘r’ as red, or ‘e’ as green, with a similar process repeated on 13 letters in all. This was tested every session using tasks such as viewing a sequence of letters and selecting all the associated colours, or completing a timed reading task where letters were omitted and replaced with squares of the relevant colours (see picture right).

Tasks became progressively harder, and the group were financially incentivised to outperform their previous scores. No previous intervention has been as extensive as this one, as Daniel Bor and colleagues were seeking to go beyond learned colour-letter associations to try to produce a genuine subjective experience of synaesthesia.

After the training, the group became better at those ‘Stroop’ test trials where the trained colour of a presented letter matched the ink colour it was written in, and the task was to name the ink colour as fast as possible. This suggests that the training had gone deep enough to help them make rapid, non-reflective decisions.

The majority of participants also reported gaining a subjective experience of synaesthesia. By their own accounts, nine definitely experienced a coloured effect when seeing trained letters, which was mostly characterised as seeing the colour ‘in front of my mind’s eye’ (only two participants definitely didn’t have this experience). Naturally occurring synaesthetic effects can be stronger than this, with colours seen floating on the surface of the letter or number, but the reported experiences are nonetheless impressive.

In addition, participants got smarter, scoring an equivalent of 12 IQ points higher on a standard intelligence test administered pre- and post-training. We should draw no firm conclusions from this, as the causal mechanism may be other aspects of the training process not directly related to synaesthesia, such as the heavy load on working memory. Even so, achieving a 12-point increase in a normal- to high-functioning group is not something routinely delivered by psychology interventions.

Three months later, did the synaesthesia stick? Not so much. The effect on the Stroop task was maintained, suggesting learned associations were going strong, but participants reported a weakening or total dissipation of the coloured experience itself.

Nevertheless, this work questions whether synaesthesia is limited to a rare and genetically distinct group, and shows how learning and experience are likely to play an important part too. We already know that young synaesthetes experience a strengthening of their colour associations during early school years. Perhaps early pairings – seen on coloured alphabet jigsaws or fridge magnets – provide the associations that some people develop into an ever-present feature of their world. af

 

A child’s popularity is related to where the teacher seats them in the classroom
In the Journal of Experimental Child Psychology

Teacher training doesn’t usually include a module on how to arrange the seating of pupils. Perhaps it should – a new study by psychologists finds that where children are placed in the classroom is associated with how well-liked they are by their classmates.

Yvonne van den Berg and Antonius Cillessen studied 34 classrooms at 27 elementary schools in the Netherlands. The 336 participating pupils had an average age of 11, and 47 per cent of them were boys. In all classrooms, it was the school policy that the teachers dictated who sat where; seating arrangements were in groups or rows, or a mixture. Every pupil was asked to say how much they liked each of their classmates, and to rate their classmates’ popularity. They gave these ratings twice: four to six weeks into the first semester (August/September time), and then again at the beginning of the school’s second semester during the following spring.

A key finding was that children who were seated in the first semester near the boundaries of their classroom tended to be less liked by their peers at that time, and also six months later, as compared with children sitting nearer the centre of the class. Another related result was that children tended to rate those located nearer to them as more likeable and more popular (this helps explain the first result – children seated centrally tend to have more classmates closer to them). Meanwhile, children who were only (re)positioned at the boundaries of the class in the second semester did not receive lower likeability ratings at that time, presumably because their reputation had already been established by then.

Why should seating position have these associations with children’s perceptions of their peers? The researchers think two psychological mechanisms are pertinent. Social psychology research on race relations and prejudice finds that the more we interact with other people, the more positive our views of them tend to be. School pupils naturally interact and socialise more with the children located near to them, and so this interaction could encourage more positive perceptions. There is also a psychological phenomenon known as the ‘mere exposure effect’, which describes how familiarity with something or someone breeds more positive feelings towards them.

Van den Berg and Cillessen also conducted a second study with 158 more schoolchildren, in which they asked them to rate each other’s popularity, and also to say where they would position themselves and their classmates if they could choose. Perhaps unsurprisingly, children said they’d like to sit nearer to their peers who were more liked and more popular. The researchers said this provided an insight into what’s known as the ‘cycle of popularity’ – well-liked and popular children typically attract more social interactions with others, which then reinforces the popular perception that others have of them via the mechanisms mentioned earlier.

There are plenty of unknowns in this research. For example, we don’t know the reasoning behind the teachers’ decisions about where to place pupils in their class. Perhaps they placed more popular pupils more centrally? In fact, there are reasons to think this unlikely – past research has found teacher and pupil ratings of pupils’ social relationships are only weakly related.

Despite the unknowns, van den Berg and Cillessen said their results provided evidence for what’s been termed the ‘invisible hand of the teacher’ – the understudied ways that teacher decisions influence the ecology of the classroom. ‘Classroom seating arrangements may be hugely influential in children’s exposure to and interactions with other peers and, thus, in determining children’s social relationships with one another,’ the researchers concluded. They also highlighted that this new research builds on another recent study they conducted, which found that placing children closer to each other in the classroom improved pupils’ liking of each other and reduced problem behaviours in class. cj

 

Is being a worrier a sign of intelligence?
In Personality and Individual Differences

We usually see worry as a bad thing. It feels unpleasant, like a snake coiling in the pit of your stomach. And worriers are often considered weak links in a team – negative influences who lack confidence. But of course, anxiety has a useful function. It’s about anticipating and preparing for threats, and learning from past mistakes.

Increasingly, psychologists are recognising the strengths of anxious people. For example, there’s research showing that people more prone to anxiety are quicker to detect threats and better at lie detection. Now Alexander Penney and his colleagues have conducted a survey of over 100 students and they report that a tendency to worry goes hand in hand with higher intelligence.

Participants completed various measures, including one to distinguish trait anxiety from in-the-moment state anxiety. The key finding was that, after controlling for the influence of test anxiety and current mood, the students who reported a general habit of worrying more (e.g. they agreed with statements like ‘I am always worrying about something’) and/or ruminating more (e.g. they said they tended to think about their sadness, or to think ‘What am I doing to deserve this?’) also tended to score higher on the test of verbal intelligence, taken from the Wechsler Adult Intelligence Scale.

To take one specific statistical example, verbal intelligence correlated positively with worry proneness, with a statistically significant coefficient of 0.19 after controlling for test anxiety and mood. Together with the measures of rumination, mood and test anxiety, verbal intelligence explained an estimated 46 per cent of the variance in worry.
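That ‘after controlling for’ step is, in effect, a partial correlation: regress both variables on the covariates and correlate what is left over. As a rough illustration only – this is not the study’s actual analysis, and the data below are made up to stand in for the real measures – a sketch in Python:

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing out the covariates."""
    # Design matrix: intercept plus each covariate as a column.
    Z = np.column_stack([np.ones(len(x))] + list(covariates))
    beta_x = np.linalg.lstsq(Z, x, rcond=None)[0]
    beta_y = np.linalg.lstsq(Z, y, rcond=None)[0]
    # Correlate the residuals left after removing the covariates' influence.
    return float(np.corrcoef(x - Z @ beta_x, y - Z @ beta_y)[0, 1])

# Made-up data standing in for the study's measures (n = 200 students):
rng = np.random.default_rng(42)
mood = rng.normal(size=200)
test_anxiety = rng.normal(size=200)
verbal_iq = rng.normal(size=200)
worry = 0.2 * verbal_iq + 0.5 * mood + 0.5 * test_anxiety + rng.normal(size=200)

# Partial correlation of verbal IQ with worry, controlling for mood
# and test anxiety – analogous in form to the reported 0.19.
r = partial_corr(verbal_iq, worry, [mood, test_anxiety])
```

The ‘variance explained’ figure is the related R² statistic from regressing worry on all the predictors at once.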

Another result from the survey, not so promising for worriers, was that a tendency to dwell on past social events was negatively correlated with non-verbal intelligence (that is, those students who dwelt more on past events scored lower on non-verbal IQ).

Seeking to explain these two different and seemingly contradictory correlations, the researchers surmised that: ‘[M]ore verbally intelligent individuals are able to consider past and future events in greater detail, leading to more intense rumination and worry. Individuals with high non-verbal intelligence may be stronger at processing the non-verbal signals they interact with in the moment, leading to a decreased need to re-process past social encounters.’

Of course we must be careful not to over-interpret these preliminary results – it was a small, non-clinical sample after all, so it’s not clear how the findings would generalise to people with more extreme anxiety. However, it’s notable that a small 2012 study found a correlation between worry and intelligence in a sample diagnosed with generalised anxiety disorder. Penney and his colleagues concluded that ‘a worrying and ruminating mind is a more verbally intelligent mind; a socially ruminative mind, however, might be less able to process non-verbal information’. cj

The material in this section is taken from the Society’s Research Digest blog, and is written by its editor Dr Christian Jarrett and contributor Dr Alex Fradera. Visit the blog for full coverage including references and links, additional current reports, an archive, comment, social media and more.
