The appliance of science

Roz Shafran, winner of the Society’s 2010 Award for Distinguished Contributions to Professional Psychology, looks at implementation science

One in four people will experience a mental health problem in any given year. Of those, the vast majority will not receive any psychological or pharmacological help. Even when psychological help is received, it frequently lacks a strong scientific basis. This article describes the extent of the problem in the dissemination and implementation of evidence-based psychological therapies and examines some of the solutions proposed.

I confess that five years ago, I had not heard of ‘implementation science’. If you put the phrase into Google, it returns 108,000 results: perhaps I should have heard of it before! Many of the returns refer to the eponymous journal.

Very quickly the meaning of the term becomes clear from papers entitled ‘Implementation science: Understanding the translation of evidence into practice’ (Tansella & Thornicroft, 2009) and video blogs such as ‘Implementation science – Putting good ideas into practice’ by a physician who describes himself as a closet academic.

The aims and scope of the journal Implementation Science make the term clearer still. Its focus is to publish research on the ‘scientific study of methods to promote the uptake of research findings into routine healthcare in both clinical and policy contexts’. (The impact factor of the open-access journal is 2.49 and reflects the high quality of some of the work it publishes – despite the fact that authors need to pay £1255 per article.)

Why would such a journal be needed, and what relevance does it have for psychologists? The journal justifies itself by claiming that across the field of biomedicine, research findings are not put into practice in routine care – and that this imposes an ‘invisible ceiling’ on the potential to enhance health outcomes.

Of particular interest to psychologists, implementation science focuses on the behaviour of healthcare professionals and organisations. A wide range of psychological theories of behaviour are used to help explain, and try to change, current clinical practice, including the social cognitive model, the theory of planned behaviour, the theory of reasoned action, and the health beliefs model. Tansella and Thornicroft (2009) note ruefully that such theories have been applied far more often to changing patients’ behaviour than to the behaviour of clinicians.

What is the problem?

While it is obviously encouraging that psychological theories of behaviour can be used to try to improve the translation of good-quality research into practice, there remain two separate, but connected, problems with the delivery of psychological therapies. The first is its availability – or lack of it. In the most recent Psychiatric Morbidity Survey in England (McManus et al., 2009), 90 per cent of people who met diagnostic criteria for common mental health problems were not receiving any psychological treatment at all. This failure to provide adequate care to people with mental health problems is not new: it was described in The Lancet as a ‘silent scandal’ in 2007 (Thornicroft, 2007).

The second problem is that even when psychological therapies are provided, they are frequently not research-based. In the Psychiatric Morbidity Survey, only 2 per cent of respondents were receiving cognitive or behaviour therapy, despite the strong empirical support for this intervention. This difficulty in translating research into clinical practice was highlighted in a controversial article entitled ‘Why do psychologists reject science?’. The article described a report by Timothy Baker and colleagues in the USA (Baker et al., 2008) and an accompanying commentary by Walter Mischel, which stated: ‘The disconnect between much of clinical practice and the advances in psychological science is an unconscionable embarrassment’ (Mischel, 2008). The report caused a significant amount of debate, being described on the internet as both a ‘naive polemic’ and an attempt by the empirically supported movement to ‘plant its flag’. It offered some solutions to the disconnect, suggesting that the key to change was a new system of accreditation – the Psychological Clinical Science Accreditation System (PCSAS) – designed to accredit clinical psychology training programmes that have a high-quality, science-centred education at their core.

Understanding the problem

There are multiple reasons why research is not having the desired impact on the ground. Many of those who would potentially benefit from psychological treatment are afraid to seek help owing to stigma. Eighty-seven per cent of people with mental health problems have been affected by stigma and discrimination (Time to Change, 2008), and this includes discrimination and stigmatisation from their GP. The lack of available therapists on the NHS is another reason psychological therapies have not been widely provided. The second problem, that research is not influencing the type of therapy being offered, is also multifaceted. Clinical psychologists are, it turns out, human. And humans are, as cognitive psychologists know all too well, subject to a variety of heuristic biases. Such biases influence all forms of judgement, including clinical judgement. It is not surprising that surveys of clinical psychologists in the USA find that personal experience with previous clients and colleagues’ advice, rather than clinical research, are used to guide clinical practice (Stewart & Chambless, 2008).

Much research is dismissed on the grounds that the clients in the research trials are not representative of clinical reality and their difficulties are much less complex. Investigations into the reasons for exclusion from psychotherapy trials, however, find that most patients are excluded as they are not severe enough to meet diagnostic criteria for inclusion or are in partial remission at the start of trials (Stirman et al., 2005). Systematic reviews conclude that effect sizes from research trials can reliably be replicated in community and routine settings (Öst, 2011).

Nevertheless, research trials typically focus on one primary problem, and NICE guidelines refer to one specific disorder, whereas the clinical reality is that a significant proportion of patients meet criteria for multiple mental health problems (Kessler et al., 2005). They often have physical health problems too. The research evidence says little about the best way to address such multiple co-occurring problems, and arguably this limits both the relevance of specific clinical guidance and its applicability on the ground.

Another important aspect of understanding the lack of implementation of research findings concerns training. Fairburn and Cooper (2011) highlight that methods of training have stagnated, that training in the specific interventions with research support is not widely available, and that the effectiveness of training methods is largely unknown because we have no adequate means of assessing competence.

What are the solutions?

While implementation scientists are hard at work trying to improve our understanding of the barriers to the implementation of research findings on the ground, and to develop appropriate interventions to increase their clinical impact, others have taken a different approach. Michael Barkham and colleagues question the relevance of this research to practitioners in routine clinical settings and suggest that practice-based evidence is as valid, and as important to changing clinical reality, as attempts to implement evidence-based practice (Barkham & Mellor-Clark, 2003). Others, such as Christopher Fairburn, Zafra Cooper and colleagues, have addressed the problem by ensuring that randomised controlled trials for eating disorders are as clinically relevant as possible, with few exclusion criteria and an intervention that is ‘transdiagnostic’ – that is, one that transcends specific diagnostic boundaries (Fairburn et al., 2009). Leading American researchers are working on transdiagnostic solutions for depression and anxiety (Ellard et al., 2010). Such an intervention would have the added advantage of partly addressing the training barrier, since therapists need only be trained in one intervention rather than having to learn a multitude of different ones. Changing the way training is conducted and evaluated has been suggested both in the UK (Fairburn & Cooper, 2011) and abroad (Baker et al., 2008). The UK government programme ‘Improving Access to Psychological Therapies’ should also go a long way towards ensuring that research-based interventions are given to the majority, rather than the minority, of those with depression and anxiety, thereby addressing both the availability of treatment and the provision of therapies with a sound research base.

The Third Sector is also likely to be involved in providing solutions. People who use the services know first-hand about the clinical reality, and are personally motivated to see improvements. A newly formed consortium of suicide charities – The Alliance of Suicide Charities (TASC UK) – is considering how best to improve systems dealing with suicide assessment and risk, as well as more general mental health services. The Charlie Waller Memorial Trust is one of these charities, with a range of projects that include education for schools, training for primary care workers, training of clinicians, and evaluation of the impact of that training via collaborations with the local NHS and universities. Charlie Waller suffered from depression and took his own life aged 27, and the Trust is committed to increasing awareness of depression alongside improving both the availability and the quality of therapy for those suffering from it.


I believe that I was honoured with the award for Distinguished Contributions to Professional Psychology from the BPS primarily because of my work with the Charlie Waller Institute of Evidence-Based Psychological Treatment, which is a unique collaboration between the Charlie Waller Memorial Trust, Berkshire Healthcare NHS Foundation Trust and the University of Reading. It aims to provide high-quality training in interventions with a research base, as well as to contribute to the development of the evidence base. In its first four years, over 40 leading clinical researchers from around the world have provided training to more than 1500 clinicians, with demonstrable improvement in their knowledge and skill as assessed via exam vignettes. I believe that the Charlie Waller Institute and other high-quality training/research organisations have a role to play within implementation science, and I am optimistic that the appliance of science in psychological therapies will lead the way in ensuring that research really does make a significant difference to people’s lives.

Roz Shafran occupies the Charlie Waller Chair in Evidence-Based Psychological Treatment, University of Reading


Baker, T.B., McFall, R.M. & Shoham, V. (2008). Current status and future prospects of clinical psychology: Toward a scientifically principled approach to mental and behavioral health care. Psychological Science in the Public Interest, 9, 67–103.

Barkham, M. & Mellor-Clark, J. (2003). Bridging evidence-based practice and practice-based evidence: Developing a rigorous and relevant knowledge for the psychological therapies. Clinical Psychology and Psychotherapy, 10, 319–327.

Ellard, K.K., Fairholme, C.P., Boisseau, C. et al. (2010). Unified protocol for the transdiagnostic treatment of emotional disorders: Protocol development and initial outcome data. Cognitive and Behavioral Practice, 17, 88–101.

Fairburn, C.G. & Cooper, Z. (2011). Therapist competence, therapist quality and therapist training. Behaviour Research and Therapy, 49, 373–378.

Fairburn, C.G., Cooper, Z., Doll, H.A. et al. (2009). Transdiagnostic cognitive behavioral therapy for patients with eating disorders: A two-site trial with 60-week follow-up. American Journal of Psychiatry, 166, 311–319.

Kessler, R.C., Chiu, W.T., Demler, O. et al. (2005). Prevalence, severity and comorbidity of 12-month DSM-IV disorders in the National Comorbidity Survey Replication. Archives of General Psychiatry, 62, 617–627.

McManus, S., Meltzer, H., Brugha, T. et al. (2009). Adult psychiatric morbidity in England, 2007: Results of a household survey. Health and Social Care Information Centre.

Mischel, W. (2008). Connecting clinical practice to scientific progress. Psychological Science in the Public Interest, 9, 1–2.

Öst, L. (2011). Progress in CBT: Lessons from empirical reviews. Presented at the BABCP Spring Conference.

Stewart, R.E. & Chambless, D.L. (2008). Treatment failures in private practice: How do psychologists proceed? Professional Psychology: Research and Practice, 39, 176–181.

Stirman, S.W., DeRubeis, R.J., Crits-Christoph, P. & Rothman, A. (2005). Can the randomized controlled trial literature generalize to nonrandomized patients? Journal of Consulting and Clinical Psychology, 73, 127–135.

Tansella, M. & Thornicroft, G. (2009). Implementation science: Understanding the translation of evidence into practice. British Journal of Psychiatry, 195, 283–285.

Time to Change (2008). Stigma shout: Service user and carer experiences of stigma and discrimination. London: Author.

Thornicroft, G. (2007). Most people with mental illness are not treated. The Lancet, 370(9590), 807–808.

