10 years of the Research Digest

We celebrate 10 years of the Society's Research Digest

03 September 2013

This month we celebrate 10 years of the Society’s Research Digest service. Digest editor Dr Christian Jarrett has selected some of his favourite Digest items from the past decade over the coming pages.

Also look out for our 10th birthday celebrations at Research Digest.

From 2013

Smiling fighters are more likely to lose

The day before mixed martial artists compete in the Ultimate Fighting Championships (UFC), they pose with each other in a staged face-off. A new study has analysed photographs taken at dozens of these pre-fight encounters and found that competitors who smile are more likely to lose the match the next day.

Michael Kraus and Teh-Way David Chen recruited four coders (blind to the aims of the study) to assess the presence of smiles, and smile intensity, in photographs taken of 152 fighters in 76 face-offs. Fighter smiles were mostly ‘non-Duchenne’, with little or no crinkling around the eyes. Data on the fights was then obtained from official UFC statistics. The researchers wanted to test the idea that in this context, smiles are an involuntary signal of submission and lack of aggression, just as teeth baring is in the animal kingdom.

Consistent with the researchers’ predictions, fighters who smiled more intensely prior to a fight were more likely to lose, to be knocked down in the clash, to be hit more times, and to be wrestled to the ground by their opponent (statistically speaking, the effect sizes here were small to medium). On the other hand, fighters with neutral facial expressions pre-match were more likely to excel and dominate in the fight the next day, including being more likely to win by knock-out or submission.

These associations between facial expression and fighting performance held even after controlling for betting behaviour by fans, which suggests a fighter’s smile reveals information about their lack of aggression beyond what is known by experts. Moreover, the psychological meaning of a pre-match smile appeared to be specific to that fight – no associations were found between pre-match smiles and performance in later, unrelated fights. Incidentally, smaller fighters smiled more often, consistent with the study’s main thesis, but smiling was still linked with poorer fight performance after factoring out the role of size (in other words, smiling was more than just an indicator of physical inferiority).

If fighters’ smiles are a sign of weakness, there’s a chance opponents may pick up on this cue, which could boost their own performance, possibly through increased confidence or aggression.

To test the plausibility that smiles are read this way, Kraus and Chen asked 178 online, non-expert participants to rate head-shots of the same fighter either smiling or pulling a neutral expression in a pre-match face-off. As expected, smiling fighters were rated by the non-expert participants as less physically dominant, and this was explained by smiling fighters being perceived as less aggressive and hostile.

Of course, the researchers are only speculating about what's going on inside the minds of the fighters pre-match. It’s even possible that some of them smile in an attempt to convey insouciance. If so, Kraus and Chen said ‘it is clear that this nonverbal behaviour had the opposite of the desired effect – fighters were more hostile and aggressive during the match toward their more intensely smiling opponents.’

From 2009

Do you do voodoo? (in Perspectives on Psychological Science)

They are beloved by prestigious journals and the popular press, but many recent social neuroscience studies are profoundly flawed, according to a devastating critique – Voodoo Correlations in Social Neuroscience.

The studies in question have tended to claim astonishingly high correlations between localised areas of brain activity and specific psychological measures. For example, in 2003, Naomi Eisenberger at the University of California and her colleagues published a paper purporting to show that levels of self-reported rejection correlated at r = .88 (1.0 would be a perfect correlation) with levels of activity in the anterior cingulate cortex.

According to Hal Pashler and his band of methodological whistle-blowers, if Eisenberger’s study and others like it were accurate, this ‘would be a milestone in understanding of brain–behaviour linkages, full of promise for potential diagnostic and therapeutic spin-offs.’ Unfortunately, Pashler’s group argue that the findings from many of these recent studies are virtually meaningless.

The suspicions of Pashler and his colleagues – Ed Vul (lead author), Christine Harris and Piotr Winkielman – were aroused when they realised that many of the cited levels of correlation in social neuroscience were impossibly high given the respective reliability of brain activity measures and measures of psychological factors, such as rejection. To investigate further they conducted a literature search and surveyed the authors of 54 studies claiming significant brain–behaviour correlations.

The search wasn’t exhaustive but was thought to be representative, with a slight bias towards higher-impact journals.

Pashler and his team found that 54 per cent of the studies had used a seriously biased method of analysis, a problem that probably also undermines the findings of fMRI studies in other fields of psychology.

These researchers had identified small areas of the brain (called voxels) whose activity varied according to the experimental condition of interest (e.g. being rejected or not), and had then focused on just those voxels that showed a correlation higher than a given threshold with the psychological measure of interest (e.g. feeling rejected). Finally, they had arrived at their published brain–behaviour correlation figures by taking the average correlation from among just this select group of voxels, or in some cases just one ‘peak voxel’. Pashler’s team contend that by following this procedure, it would have been nearly impossible for the studies not to find a significant brain–behaviour correlation.

By analogy with a purely behavioural experiment, imagine the author of a new psychometric measure claiming that his new test correlated with a target psychological construct, when actually he had arrived at his significant correlation only after he had first identified and analysed just those items that showed the correlation with the target construct.
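
To see why the procedure is so forgiving, consider a toy simulation (an illustration with made-up numbers, not taken from the paper): generate thousands of ‘voxels’ of pure noise with no true relationship to a behavioural score, keep only the voxels whose observed correlation clears a threshold, and average the correlations of that select group. The ‘reported’ figure comes out impressively high even though the true correlation is zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative numbers: 20 subjects, 10,000 voxels, selection threshold r > .6
n_subjects, n_voxels, threshold = 20, 10_000, 0.6

# Behavioural scores and voxel activity are independent noise: the true correlation is zero.
behaviour = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_voxels, n_subjects))

# Correlate every voxel with the behavioural measure across subjects.
correlations = np.array([np.corrcoef(v, behaviour)[0, 1] for v in voxels])

# The biased, 'non-independent' step: keep only voxels whose observed correlation
# clears the threshold, then report the average correlation among just those voxels.
selected = correlations[correlations > threshold]
print(f"voxels passing threshold: {selected.size}")
print(f"reported correlation: {selected.mean():.2f}")  # well above .6, despite a true r of 0
```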

Indeed, Pashler and his collaborators speculated that the editors and reviewers of mainstream psychology journals would routinely pick up on the kind of flaws seen in imaging-based social neuroscience, but that the novelty and complexity of this new field meant such mistakes have slipped through the net.

On a more positive note, Pashler’s team say there are ways to analyse social neuroscience data without bias and that it should be possible for the authors of many of the studies they’ve criticised to re-analyse their data. For example, one approach is to identify voxels of interest by region, before seeing if their activity levels correlate with a target psychological factor. An alternative approach is to use different sets of data to perform the different steps of analysis used previously – for example, by using one run in the scanner to identify those voxels that correlate with a psychological measure, and then using a second, independent run to assess how highly that subset of voxels correlates with the chosen measure. ‘We urge investigators whose results have been questioned here to perform such analyses and to correct the record by publishing follow-up errata that provide valid numbers,’ Pashler’s team said.
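
The split-data fix can be pictured by extending the same toy simulation (again just a sketch with assumed numbers, not the authors’ code): use one half of the subjects, standing in for a first scanner run, to select voxels, and the other, independent half to estimate the correlation. Under the null, the cross-validated estimate now hovers around zero instead of being inflated by the selection step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed, illustrative numbers: 40 subjects split into two independent halves of 20
n_subjects, n_voxels, threshold = 40, 10_000, 0.6

behaviour = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_voxels, n_subjects))  # pure noise again: true r = 0

first_half, second_half = np.arange(0, 20), np.arange(20, 40)

def corr_with_behaviour(data, subjects):
    """Correlation of each voxel with the behavioural score, using only the given subjects."""
    return np.array([np.corrcoef(v[subjects], behaviour[subjects])[0, 1] for v in data])

# Step 1: select voxels using the first half of the data only.
keep = corr_with_behaviour(voxels, first_half) > threshold

# Step 2: estimate the brain-behaviour correlation in the independent second half.
cross_validated = corr_with_behaviour(voxels[keep], second_half)
print(f"voxels selected in first half: {keep.sum()}")
print(f"cross-validated correlation: {cross_validated.mean():.2f}")  # close to zero
```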

Matthew Lieberman, a co-author on Eisenberger’s social rejection study, told us that he and his colleagues have drafted a robust reply to these methodological accusations, which will be published in Perspectives on Psychological Science alongside the Pashler paper. In particular he stressed that concerns over multiple comparisons in fMRI research are not new, are not specific to social neuroscience, and that the methodological approach of the Pashler group, done correctly, would lead to similar results to those already published.

‘There are numerous errors in their handling of the data that they reanalyzed,’ he argued. ‘While trying to recreate their [most damning] Figure 5, we went through and pulled all the correlations from all the papers. We found around 50 correlations that were clearly in the papers Pashler’s team reviewed but were not included in their analyses. Almost all of these overlooked correlations tend to work against their hypotheses.’

From 2003

Seeing racism in the brain (in Nature Neuroscience)

The more implicit racial bias a white person showed against black people, the more activated the cognitive/executive control regions of their brain became when they viewed photographs of black faces.

Jennifer Richeson (Dartmouth College, USA) and colleagues invited 15 white undergraduates to complete a version of the ‘implicit associations test’ (IAT; see Research in Brief, The Psychologist, 16, 429) designed to reveal underlying racial bias. Subjects were asked to press one of two keys when certain words appeared on a computer screen. A person with racial bias would be expected to respond more quickly when the same key was paired with the words ‘good’ and ‘white’ than when paired with the words ‘good’ and ‘black’. The same participants were then invited to chat with a black interviewer for five minutes about the college fraternity system and racial profiling.

Next they performed the Stroop test (see Research Digest Issue 5, item 6), a measure of cognitive control. Finally, within two weeks, the students underwent a brain scan while they viewed photographs of black and white faces.

The more racial bias subjects displayed during the IAT test, the poorer their subsequent performance on the Stroop test, and the more activated the executive control areas of their brain became when they viewed black faces.

The authors concluded their results suggest ‘individuals with high scores on subtle measures of racial bias may put forth additional effort to control their thoughts and behaviours in order to live up to their egalitarian, nonprejudiced values’.

From 2006

Why do we still believe in group brainstorming? (in the British Journal of Psychology)

So you need some fresh, innovative ideas. What do you do? Get a group of your best thinkers together to bounce ideas off each other? No, wrong answer. Time and again research has shown that people think of more new ideas on their own than they do in a group. The false belief that people are more creative in groups has been dubbed by psychologists the ‘illusion of group productivity’. But why does this illusion persist?

Bernard Nijstad and colleagues at the University of Amsterdam argue it’s because when we’re in a group, other people are talking, the pressure isn’t always on us and so we’re less aware of all the times that we fail to think of a new idea.

By contrast, when we’re working alone and we can’t think of anything, there’s no avoiding the fact that we’re failing.

To test their theory, they recruited hundreds of students and asked them, either on their own, or in differently sized groups, to think of as many ways as possible to boost tourism to Utrecht. Afterwards the students in groups reported feeling more satisfied with their performance, and feeling that they had experienced fewer failures to come up with new ideas, than did the students who had worked alone.

In a second study, Nijstad’s team found further support for their theory by showing that the illusion of group productivity could be undermined if different members of a group had to think of ideas for different projects. In this case, the students’ satisfaction with their performance and their sense of how much they had failed to think of new ideas resembled the experience of students who worked alone.

The researchers said ‘We suggest that working in a group may lead to a sense of continuous activity. This may provide group members with the idea that they are productive, because they feel that the group as a whole is making progress, even if they themselves are not contributing.’

Other possible reasons for why people continue to think they work better in groups include ‘memory confusion’, the idea that after working in groups people subsequently mistake other people’s ideas for their own, and ‘social comparison’, the idea that in groups people are able to see how difficult everyone else has found it to come up with ideas too.

From 2008

The boy who thought 9/11 was his fault (in Neurocase)

Researchers in London have documented the case of a 10-year-old boy with Tourette’s syndrome and obsessive compulsive symptoms, who believed the terror attacks of 9/11 occurred because he had failed to complete one of his daily rituals.

Mary Robertson and Andrea Cavanna claim this is the first-ever case reported in the literature of a person believing they were responsible for causing a major disaster of the proportion experienced in America in 2001.

The boy – described as ‘extremely pleasant and likeable’ and with good school grades – was first referred for consultation a year before 9/11 took place. As is characteristic of people with Tourette’s syndrome, the boy displayed several forms of uncontrollable tics, including excessive blinking and vocal outbursts, and he also showed obsessive tendencies and attentional problems.

Robertson next saw the boy two weeks after 9/11, at which point he was in a terrible state – ‘tortured’, as he put it, by his tics, and wracked with guilt, believing that 9/11 occurred because he had failed to walk on a particular white mark on a road.

This was just one of the many rituals the boy had developed during the course of the year. Others included so-called ‘dangerous touching’ rituals, including the need to feel the blade of knives to check their sharpness, and to put his hand in the steam of a kettle to check its heat.

Importantly, the researchers said the boy’s beliefs about 9/11 were distinct from the kind of delusions expressed by people with psychosis, and instead reflected an extreme form of the anxiety that people with obsessive compulsive disorder often experience when they fail to complete their rituals.

Fortunately, a mixture of drug treatments and reassurance (including explaining to the boy that his missed ritual actually occurred after 9/11, given the time difference between the USA and UK), led to him realising that he was not responsible for the attacks.

Robertson and Cavanna said this case study brings attention to the way our modern media – ‘immediate, realistic, and evocative’ – can lead to terrorist attacks and other disasters having harmful effects on vulnerable people miles away from the immediate environment of what happened. ‘Only time will reveal the many further psychosocial sequelae of 9/11, as well as the Madrid and London terrorist bombings,’ they said.

From 2005

Born already attached (in the British Journal of Psychology)

Psychologists are becoming increasingly aware of the importance of ‘antenatal attachment’ – the bond formed between a pregnant mother and her unborn child. For example, a mother’s degree of affection for her unborn child, and the amount of time she thinks about it, can predict the quality of the mother–child relationship once the baby is born. Pier Righetti (Conegliano Hospital, Italy) and colleagues investigated whether advances in ultrasound technology that provide enhanced fetal images would strengthen the attachment formed between parents and their unborn baby.

Fifty-six women at 19–23 weeks of pregnancy, and their partners, were split into two groups. Half attended a standard 2-D ultrasound appointment, and half underwent a state-of-the-art 4-D ultrasound, which provides superior imagery of the fetus, including showing its movements in real time. Before the ultrasound, and then two weeks after it, the parents completed self-report attachment questionnaires.

The strength of the mothers’ attachment to their unborn baby increased significantly over the two-week period, probably due in part to the fetuses starting to kick more during that time. However, the quality of the ultrasound made no difference to their strength of attachment, and the fathers’ attachment didn’t increase over the two weeks regardless of the ultrasound technology.

The authors point out that the improved ultrasound could have psychological benefits not tapped by the self-report questionnaires they used, and that their research should be repeated at other stages of pregnancy and with a greater number of couples.

From 2011

How walking through a doorway increases forgetting (in Quarterly Journal of Experimental Psychology)

Like information in a book, unfolding events are stored in human memory in successive chapters or episodes. One consequence is that information in the current episode is easier to recall than information in a previous episode. An obvious question, then, is how the mind divides experience up into these discrete episodes. A new study led by Gabriel Radvansky shows that the simple act of walking through a doorway creates a new memory episode, thereby making it more difficult to recall information pertaining to an experience in the room that has just been left behind.

Dozens of participants used computer keys to navigate through a virtual-reality environment presented on a TV screen. The virtual world contained 55 rooms, some large, some small. Small rooms contained one table; large rooms contained two: one at each end. When participants first encountered a table, there was an object on it that they picked up (once carried, objects could no longer be seen). At the next table, they deposited the object they were carrying at one end and picked up a new object at the other. And on the participants went. Frequent tests of memory came either on entering a new room through an open doorway, or after crossing halfway through a large room. An object was named on-screen and the participants had to recall if it was either the object they were currently carrying or the one they had just set down.

The key finding is that memory performance was poorer after travelling through an open doorway, compared with covering the same distance within the same room. ‘Walking through doorways serves as an event boundary, thereby initiating the updating of one’s event model [i.e. the creation of a new episode in memory]’ the researchers said.

But what if this result was only found because of the simplistic virtual-reality environment? In a second study, Radvansky and his collaborators created a real-life network of rooms with tables and objects. Participants passed through this real environment picking up and depositing objects as they went, and again their memory was tested occasionally for what they were carrying (hidden from view in a box) or had most recently deposited. The effect of doorways was replicated.

Participants were more likely to make memory errors after they’d passed through a doorway than after they’d travelled the same distance in a single room.

Another interpretation of the findings is that they have nothing to do with the boundary effect of a doorway, but more to do with the memory enhancing effect of context (the basic idea being that we find it easier to recall memories in the context that we first stored them). By this account, memory is superior when participants remain in the same room because that room is the same place that their memory for the objects was first encoded.

Radvansky and his team tested this possibility with a virtual-reality study in which memory was probed after passing through a doorway into a second room, passing through two doorways into a third unfamiliar room, or through two doorways back to the original room – the one where they’d first encountered the relevant objects. Performance was no better when back in the original room compared with being tested in the second room, thus undermining the idea that this is all about context effects on memory. Performance was worst of all when in the third, unfamiliar room, supporting the account based on new memory episodes being created on entering each new area.

These findings show how a physical feature of the environment can trigger a new memory episode. They concur with a study published earlier this year that focused on episode markers in memories for stories. Presented with a passage of narrative text, participants later found it more difficult to remember which sentence followed a target sentence, if the two were separated by an implied temporal boundary, such as ‘a while later...’. It’s as if information within a temporal episode was bound together, whereas a memory divide was placed between information spanning two episodes.

From 2004

Counting the cost of a numberless language (in Science Express Reports)

Can we think about concepts for which we have no words? Could a tribe without numbers count? Peter Gordon (Columbia University) studied a hunter-gatherer tribe in the Amazon – the Piraha – who have just two numerical words: ‘hoi’ with an accent and ‘hoi’ without, signifying ‘one’ and ‘two’.

The 200 or so members of the Piraha use a ‘one-to-many’ system of counting, in which quantities above two are simply referred to as ‘many’. The tribe have no currency of their own, but barter goods instead.

In a matching task, an experimenter lined up some batteries. A member of the tribe was then required to match the number in a line of their own. They performed well when there were two or three batteries but much less so for larger quantities.

In another task, a tribesperson watched as sweets were placed in a box whose lid had fish painted on it. When the box was hidden and then revealed together with a second box that had a different number of fish painted on its lid, the tribesperson could only identify the sweet-filled box when the number of fish to be compared between box lids was less than three; after that, performance was no better than chance.

‘These studies show that the Piraha’s impoverished counting system truly limits their ability to enumerate exact quantities when set sizes exceed two or three items’, the author said.

From 2012

Why do children hide by covering their eyes? (in Journal of Cognition and Development)

A cute mistake that young children make is to think that they can hide themselves by covering or closing their eyes. Why do they make this error?

A research team led by James Russell at the University of Cambridge has used a process of elimination to find out.

Testing children aged around three to four years, the researchers first asked them whether they could be seen if they were wearing an eye mask, and whether the researcher could see another adult, if that adult was wearing an eye mask. Nearly all the children felt that they were hidden when they were wearing the mask, and most thought the adult wearing a mask was hidden too.

Next, Russell and his colleagues established whether children think it’s the fact that a person’s eyes are hidden from other people’s view that renders them invisible, or if they think it’s being blinded that makes you invisible. To test this, a new group of young kids were quizzed about their ability to be seen when they were wearing goggles that were completely blacked out, meaning they couldn’t see and their eyes were hidden, versus when they were wearing a different pair that were covered in mirrored film, meaning they could see, but other people couldn’t see their eyes.

This test didn’t go quite to plan because out of the 37 participating children, only seven were able to grasp the idea that they could see out, but people couldn’t see their eyes. Of these seven, all bar one thought they were invisible regardless of which goggles they were wearing. In other words, it appears that the children’s feelings of invisibility come from the fact that their eyes are hidden, rather than from the fact that they cannot see.

Now things get a little complicated. In both studies so far, when the children thought they were invisible by virtue of their eyes being covered, they nonetheless agreed that their head and their body were visible. They seemed to be making a distinction between their ‘self’ that was hidden, and their body, which was still visible. Taken together with the fact that it was the concealment of the eyes that seemed to be the crucial factor for feeling hidden, the researchers wondered if their invisibility beliefs were based around the idea that there must be eye contact between two people – a meeting of gazes – for them to see each other (or at least, to see their ‘selves’).

This idea received support in a further study in which more children were asked if they could be seen if a researcher looked directly at them whilst they (the child) averted their gaze; or, contrarily, if the researcher with gaze averted was visible whilst the child looked directly at him or her. Many of the children felt they were hidden so long as they didn’t meet the gaze of the researcher; and they said the researcher was hidden if his or her gaze was averted whilst the child looked on.

‘...it would seem that children apply the principle of joint attention to the self and assume that for somebody to be perceived, experience must be shared and mutually known to be shared, as it is when two pairs of eyes meet,’ the researchers said.

Other explanations were ruled out with some puppet studies. For instance, the majority of a new group of children agreed it was reasonable for a puppet to hide by covering its eyes, which rules out the argument that children only hide this way because they are caught up in the heat of the moment.

The revelation that most young children think people can only see each other when their eyes meet raises some interesting questions for future research. For example, children with autism are known to engage in less sharing of attention with other people (following another person’s gaze), so perhaps they will be less concerned with the role of mutual gaze in working out who is visible. Another interesting avenue could be to explore the invisibility beliefs of children born blind.

From 2010

How to form a habit (in European Journal of Social Psychology)

Habits are those behaviours that have become automatic, triggered by a cue in the environment rather than by conscious will. Psychologists often want to assist people in breaking unhealthy habits, while helping them adopt healthy ones. Remarkably, although there are plenty of habit-formation theories, before now, no one had actually studied habits systematically as they are formed.

Phillippa Lally and her team recruited 96 undergraduates (mean age 27) and asked them to adopt a new health-related behaviour, to be repeated once a day for the next 84 days. The new behaviour had to be linked to a daily cue. Examples chosen by the participants included going for a 15-minute run before dinner; eating a piece of fruit with lunch; and doing 50 sit-ups after morning coffee. The participants also logged onto a website each day, to report whether they’d performed the behaviour on the previous day, and to fill out a self-report measure of the behaviour’s automaticity. Example items included ‘I do it automatically’, ‘I do it without thinking’ and ‘I’d find it hard not to do’.

Of the 82 participants who saw the study through to the end, the most common pattern of habit formation was for early repetitions of the chosen behaviour to produce the largest increases in its automaticity. Over time, further increases in automaticity dwindled until a plateau was reached beyond which extra repetitions made no difference to the automaticity.

The average time to reach maximum automaticity was 66 days, although this varied greatly between participants, from 18 days to a predicted 254 days (extrapolating the automaticity curves of participants whose scores were still rising when the study ended at 84 days). This is much longer than most previous estimates of the time taken to acquire a new habit – for example, a 1988 book claimed a behaviour is habitual once it has been performed at least twice a month, at least 10 times. In fact, even after 84 days, half of the participants had failed to achieve a high enough automaticity score for their new behaviour to be considered a habit.
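
The curve the researchers describe – steep early gains that flatten out at a plateau – is the kind of pattern typically captured by fitting an asymptotic function to each person’s daily automaticity scores and reading off how long it takes to get close to that plateau. Here is a minimal sketch of that idea on simulated data (the numbers and the noise are made up; this is not the authors’ own analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def automaticity(day, plateau, rate):
    """Exponential approach to an asymptote: rapid early gains that level off."""
    return plateau * (1 - np.exp(-rate * day))

# Simulated daily self-report scores for one hypothetical participant in an 84-day study.
rng = np.random.default_rng(0)
days = np.arange(1, 85)
scores = automaticity(days, plateau=35, rate=0.045) + rng.normal(0, 2, size=days.size)

# Fit the curve, then estimate how long it takes to reach 95% of the plateau.
(plateau_est, rate_est), _ = curve_fit(automaticity, days, scores, p0=(30, 0.05))
days_to_plateau = -np.log(0.05) / rate_est
print(f"estimated plateau: {plateau_est:.1f}")
print(f"days to reach 95% of the plateau: {days_to_plateau:.0f}")  # roughly 66 with these made-up values
```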

Unsurprisingly perhaps, more complex behaviours were found to take longer to become habits. Participants who had chosen an exercise behaviour took about one and a half times as long to reach their automaticity plateau compared with the participants who adopted new eating or drinking behaviours.

What about the effect of having a day off from the behaviour? Writing in 1890, William James said that a behaviour must be repeated without omission for it to become a habit. The new results found that a single missed day had little impact on later automaticity gains, suggesting James may have overestimated the effect of a missed repetition. However, there was some evidence that too many missed repeats of the behaviour, even if spread out over time, had a cumulative effect, reducing the maximum automaticity level that was ultimately reached.

It seems the message of this research for those seeking to establish a new habit is to repeat the behaviour every day if you can, be prepared for the long haul, and don't worry excessively if you miss a day or two.

This research has a serious shortcoming, acknowledged by the researchers, which is that it depended on participants’ ability to report the automaticity of their own behaviour. Also, the limited amount of data made it hard to draw clear conclusions about the need for consistency in building a habit. However, the study provides an exciting new approach for exploring habit formation.

From 2007

Humans can track scents like a dog (in Nature Neuroscience)

Bloodhounds are unlikely to be out of work just yet, but researchers have found humans can track a scent on the ground in the same way that dogs do. While humans are traditionally considered to have a poor sense of smell compared with many of their mammalian cousins, the new finding suggests this reputation may be unfair [see also ‘The exotic sensory capabilities of humans’, December 2012].

Jess Porter and colleagues first observed that 21 out of 32 participants were able to track a 10-metre trail of chocolate essential oil through an open field using their sense of smell alone. By contrast, none of them were able to track the scent when their nostrils were taped up.

Moreover, it seems this latent ability is ripe for improvement through practice.