
Digest: Burnt-out student participants and more

Children and written instructions; and who will benefit from CBT, in the latest from our Research Digest blog.

20 August 2014

How burnt-out students could be skewing research

In the Quarterly Journal of Experimental Psychology

It’s well known that psychology research relies too heavily on student volunteers. So many findings are assumed to apply to people in general, when they could be a quirk unique to undergraduates. Now Michael Nicholls and his colleagues have drawn attention to another problem with relying on student participants – those who volunteer late in their university term or semester lack motivation and tend to perform worse than those who volunteer early.

A little background about student research participants. Psychology students often volunteer for numerous studies throughout a semester [see tinyurl.com/qfs7ojr and tinyurl.com/prb7m6g]. Usually, they're compelled to do this at least once in return for course credits that count towards their degree. Other times they receive cash or other forms of compensation. When in the semester they opt to volunteer for course credit is usually at their discretion. To over-generalise, conscientious students tend to volunteer early in the semester, whereas less disciplined students leave it until the last minute, when time is short and deadlines are pressing.

Nicholls' team first recruited 40 student participants (18 men) at Flinders University during the third week of a 14-week semester. Half of them were first years who'd chosen to volunteer early in return for course credits. The other half of the participants, who hailed from various year groups, had chosen the option to receive $10 compensation. The challenge for both groups of students was the same – to perform 360 trials of a sustained attention task. On each trial they had to press a button as fast as possible if they saw any number between 1 and 9, except for the number 3, in which case they were to withhold responding.

At this early stage of the semester there was no difference in performance (based on speed and accuracy) between the students who volunteered for course credit and those who volunteered for money. There was also no difference in their motivation levels, as revealed in a questionnaire.

Later in the semester, between weeks 9 and 12, the researchers repeated the exercise with 20 more students who had enrolled for course credit and 20 more who had applied to participate in return for cash compensation. Now the researchers found a difference between the groups. Participants receiving financial payment outperformed those who had volunteered in return for course credit. The course-credit group also showed more variability in their performance than the course-credit participants tested at the start of the semester had done, and they reported lower motivation.

These results suggest that students who wait to volunteer for course credit until late in the semester lack motivation and their performance suffers as a result. Nicholls and his colleagues explained that their findings have serious implications for experimental design. ‘A lack of motivation and/or poorer performance may introduce noise into the data and obscure effects that may have been significant otherwise. Such effects become particularly problematic when experiments are conducted at different times of semester and the results are compared.’

One possible solution for researchers planning to compare findings across experiments conducted at different ends of a semester is to ensure that they test only paid participants. Unlike participants who are volunteering for course credit, those who are paid seem to show consistent performance and motivation across the semester.

 

Trusting to the letter

In the British Journal of Developmental Psychology

As adults, we’ve learned that simple text-based instructions are usually trustworthy. If a stranger tells us to take the next left for London, but we see a street sign that states the opposite, most of us would assume the stranger had made a mistake, and we’d follow the sign. Now researchers led by Kathleen Corriveau have investigated children’s trust in instructions delivered orally, versus those originating in written text. Their finding is that as soon as children have rudimentary reading skills, they trust written text over spoken instruction.

The research involved two differently coloured tubes leading to a cup beneath. One tube was always blocked. Dozens of children aged three to six had to decide in which tube to place a marble, in the hope it would reach the cup beneath, so that they would earn a sticker.

To help them, the children received instructions from two puppets. On each trial, one puppet simply spoke their instruction (e.g. ‘I say blue. Choose the blue tube’) whereas the other puppet opened an envelope in which was written the colour of the other tube (e.g. ‘This says red. Choose the red one’). The children didn’t get feedback on their performance until the end of the study, so they couldn’t use results to judge which puppet to trust.

Regardless of age, the children who couldn’t yet read were indiscriminate, trusting the purely oral advice and the puppet who read the text instruction about equally. By contrast, the children with some reading ability showed a clear preference for the puppet who read from the envelope, choosing the tube it recommended over 75 per cent of the time.

Two further studies cleared up some ambiguities. For instance, it was found that young readers prefer to trust a puppet who reads the instruction from text over oral advice from a puppet who gets its information from a whisper in the ear: the young readers weren’t simply swayed by the fact that the text puppet was drawing on a secondary source. Young readers also trusted instruction from written text over information conveyed by a coloured symbol. This shows they’re specifically trusting of written text, not just any form of permanent, external information.

Corriveau’s team said their results showed that once children learn to read, ‘they rapidly come to regard the written word as a particularly authoritative source of information about how to act in the world’. They added that in some ways this result is difficult to explain. Young readers are exposed to a good deal of fantasy and fiction in written form, so why should they be so trusting of written instruction? Perhaps they are used to seeing adults act on the basis of written information – such as maps, menus and recipes – but then again, pre-readers will also have had such experiences. Is there something special about the process of learning to read that leads children to perceive written instruction as authoritative?

 

Who will benefit from cognitive behavioural therapy?

In the Journal of Clinical Psychology

The rise of CBT has been welcomed by many as a safe, effective alternative to drug treatments for mental illness. However, there are also fears that CBT has crowded out other less structured, more time-consuming forms of psychotherapy.

The fact is, CBT doesn’t work for everyone. Precious resources could be better managed, and alternative approaches sensibly considered, if there were a way to predict in advance which patients are likely to benefit from CBT and which are not.

Jesse Renaud and her colleagues administered a 10-item scale – the Suitability for Short-term Cognitive Therapy scale, first devised in the 1990s – to patients who underwent CBT for depression or anxiety at the McGill University Health Centre between 2001 and 2011. The researchers focused their analysis on the 256 patients (88 men) who completed their course of therapy, which lasted an average of 19 sessions.

Renaud’s team looked for patterns of correlation among patients’ answers to the Suitability scale and found that the scale was really tapping two main factors – the patients’ capacity for participation in the CBT process, and their attitudes towards CBT.

The first factor includes a patient’s insight into thoughts that pop into their heads (so-called ‘automatic thoughts’); their ability to identify and distinguish their emotions; and their use of safety behaviours to cope with their problems (e.g. avoiding parties to cope with social anxiety). In other words, the researchers explained, this is the patient’s ‘ability to identify thoughts and feelings, and share them in a non-defensive, focused way’. The second ‘attitudes’ factor refers to, among other things, the patient’s optimism about the outcome of therapy, and their acceptance that they must take responsibility for change.

The higher patients scored on the first factor (their capacity for participation in CBT), the greater the reduction they tended to show in their illness symptoms, based on measures taken before and after the course of CBT. Attitudes towards therapy were not correlated with symptom reductions, but we should bear in mind that this may be because the research focused only on those patients who completed therapy. It may also be useful in future to measure how patients’ attitudes change during therapy.

There are other reasons for caution. The amount of variance in symptom change explained by both suitability factors combined was statistically significant, but tiny – just .07 per cent. In addition, the therapists who administered the therapy also recorded their patients’ improvements, so there was scope for bias. Finally, more research is needed on forms of mental illness other than depression and anxiety. Nonetheless, this study makes a constructive contribution to a neglected area. ‘Given that the patient’s capacity provides important information about whether or not a patient will derive benefit from CBT, clinicians who are concerned about limited resources and long wait lists are encouraged to undertake a suitability assessment prior to therapy,’ the researchers said, ‘to identify patients low in their General Capacity to Participate… and consider making referrals to alternative treatments.’

 

The material in this section is taken from the Society’s Research Digest blog at www.researchdigest.org.uk/blog, and this month is written by its editor Dr Christian Jarrett. Visit the blog for full coverage including references and links, additional current reports, an archive, comment and more.

Subscribe by RSS or e-mail at
www.researchdigest.org.uk/blog

Become a fan at
www.facebook.com/researchdigest


Follow the Digest editor at
www.twitter.com/researchdigest