The enigma of testing
Committed to disseminating best practice, Professor John Rust is a passionate advocate for psychological assessment. But he is also not shy of articulating the challenges and controversies inherent in psychometrics, which he outlined to a captivated audience in December 2014 at the International Coaching Psychology Congress hosted by the British Psychological Society’s Special Group in Coaching Psychology. At the same event, we chaired an entire stream on psychometrics in coaching, a topic that, we noted, divides opinion amongst coaches and coaching psychologists: some are fervent advocates, others very cautious about ‘putting people in boxes’. Whether one buys into psychometrics or not, the fact remains that they are an important part of psychological practice. There is no better person to comment on their controversial history and future trends than John.
Tell us, John, for how long have you been working in the field of psychometrics?
Graduating from Birkbeck in psychology, statistics and computer science in 1970, I was looking for a PhD position and joined Hans Eysenck’s team at the London Institute of Psychiatry on a Medical Research Council scholarship. The twin register had just been established (it’s still going strong, as Robert Plomin will tell you), and hence it was natural for me to specialise in psychogenetics. My PhD thesis was on the inheritance of psychophysiological measurements – skin resistance, EEG-evoked potential, heart rate, et cetera. From a statistical modelling perspective, psychogenetics and psychometrics are very similar, so when in 1976 a Lectureship in Psychometrics was advertised at the London Institute of Education, I applied and took the plunge. I subsequently learned that the post was on a downward slope, previous occupants having been Cyril Burt and Philip Vernon! Psychometrics was going out of favour at that time, and my own supervisor’s reputation wasn’t much help either, as his controversial book on race and IQ had just been released.
What first sparked your interest?
It was the enigma. Psychological testing, then as now, is one of the most important fields of applied psychology, yet universities were beginning to exclude it from their syllabuses on the grounds that it was too controversial. This seemed ridiculous to me – if we were convinced that scientific racism was wrong, then we ought to have the courage to find the evidence to prove this, not ignore it. My belief that if IQ testing was going to take place anyway it should at least be done properly led me to projects with test publishers, standardising many of the leading IQ tests in the UK over the following years – for example the Wechsler Intelligence Scale for Children (the WISC-III and WISC-IV versions), the Wechsler Individual Achievement Test and the adult version of the Wechsler instrument; Raven’s Progressive Matrices, a nonverbal test of reasoning ability; and the British Ability Scales.
Explain to us, John, why is it important to keep standardising tests?
One of the main reasons we had to keep standardising tests over and over again, every 10 years, was that the norms were changing – in other words, the extent to which people ‘do well’ on average. IQ scores on all of these tests were going up at an average rate of about 0.4 IQ points per year. But the significance of this seemed to pass us all by until James Flynn, a professor of political studies – which makes him somewhat of an outsider to psychometrics – spotted the contradiction. If IQ scores could increase by 12 points over 30 years, how could we give any credence to the claim that an IQ difference of 10 points between US black and white populations could only be explained by genetics? Hence the ‘Flynn effect’ was born, and psychometrics emerged from the doldrums. James Flynn is most certainly my personal hero in the field of psychometrics – he single-handedly turned the tables on those arguing for race differences in IQ, and consequently made psychometrics respectable again among psychologists, social scientists and the general public alike.
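The arithmetic behind Flynn’s contradiction is simple enough to check for oneself. A minimal sketch in Python, using the approximate round figures quoted above (the rate and the gap are the interview’s illustrative numbers, not precise estimates):

```python
# Rough arithmetic behind the Flynn effect, using the approximate
# round figures quoted in the interview (illustrative only).
rate_per_year = 0.4   # average IQ-point gain per year on renormed tests
years = 30            # the period Flynn considered

gain = rate_per_year * years   # population-level gain over 30 years
group_gap = 10                 # the US black-white gap cited in the debate

# If the whole population can shift by more than the gap within a
# generation, the gap cannot be attributed to genetics alone.
print(gain)              # 12.0
print(gain > group_gap)  # True
```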
Your work has been all over the media. Tell us, what have you been up to recently?
For myself, very little. I’m in my 70s now. But I am very proud to lead a band of 15 gifted psychometricians, computer scientists, psychologists, software engineers and computational social scientists within the Cambridge Psychometrics Centre. Many of them have still to complete their PhDs, but have already had an impact on how the field is developing. Youyou Wu, for example, has first-authored a paper ‘Computers know you better than your friends’ in the Proceedings of the National Academy of Sciences that has been rated by Altmetric, the leading provider of information on online impact, as among the most influential social science papers of this year.
What in your view are the three key developments in psychometrics over the last 10 years?
Today psychometrics has expanded enormously. We are no longer simply analysing 100-item questionnaires or even 5000-item item-banks. We now have available databases containing the digital traces that people leave online – their Facebook ‘Likes’ and the words and images they use in social media status updates, searches, tweets, text messages and e-mails. While the mathematical matrices are now substantially bigger, the statistical challenges for analysis of data are fundamentally the same, and given the speed of computation today, size doesn’t matter anymore.
Computer adaptive testing (CAT) is also fundamentally changing the way we see psychometric assessment. The universal introduction of broadband has now freed it from timing constraints, but what had been holding it back was the absence of inexpensive computer software. With the introduction of open source resources, this is changing rapidly. Today CAT is no longer the preserve of the big test publishers and examination boards, and its availability to students and in less developed countries has seen an explosion of expertise and know-how in its implementation.
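None of the machinery of CAT appears in the interview itself, but its core idea – give each examinee the item most informative at their current ability estimate – can be sketched in a few lines. The Rasch (1PL) model, the toy item bank and the crude shrinking-step ability update below are all illustrative assumptions, not any publisher’s algorithm:

```python
import math

# Toy sketch of computer adaptive testing (CAT) under a Rasch (1PL) model.
# Items, responses and the ability update are illustrative assumptions.

def p_correct(theta, b):
    """Probability of a correct response given ability theta and difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, bank, used):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate - the most informative item under the Rasch model."""
    candidates = [i for i in range(len(bank)) if i not in used]
    return min(candidates, key=lambda i: abs(bank[i] - theta))

def run_cat(bank, answers, n_items=5, step=0.5):
    """Administer n_items adaptively; answers[i] simulates the response
    to item i. Ability is nudged up/down by a shrinking step size."""
    theta, used = 0.0, set()
    for k in range(n_items):
        i = next_item(theta, bank, used)
        used.add(i)
        theta += step / (k + 1) if answers[i] else -step / (k + 1)
    return theta

item_bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]        # difficulties (logits)
responses = [True, True, True, True, False, False, False]  # simulated examinee

print(round(run_cat(item_bank, responses), 3))
```

In practice, operational CAT systems use maximum-likelihood or Bayesian ability estimation rather than this fixed-step update, but the selection logic – always administer the item nearest the running estimate – is the same.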
Last but not least, I would include issues concerning privacy and ethics. We now have at our fingertips an enormous amount of knowledge about the traits, purposes and likely actions of individuals that was simply unthinkable only a few years ago – big data sends its compliments. The potential for both benefit and abuse is astonishing, and this should be receiving far more attention from both the public and politicians than it has.
How come the use of psychometrics remains controversial? For example, in some countries the unions or workers’ councils prohibit the use of psychometrics, arguing they do not relate to actual work outcomes.
Psychometrics is without doubt one of the fields of psychology to have had great social impact. All of us are tested now from the cradle to the grave, and much of this testing is high stakes; our jobs, status, health, finances and movements depend on it, so of course it’s going to be controversial. But it is also the elephant in the room – its influence is unacknowledged even when staring us in the face. Don’t believe me? Next time you are at an academic conference, go to the poster session and count the ratio of posters that include a psychometric test to those that don’t. Or try to argue that IQ testing has had no impact on society. What’s that? You can’t?
Talking about impact on society, there has been much talk and deep controversy about the use of testing for job seekers in the UK – witness the public reaction to the ‘My strengths’ assessment – a short strengths-based test for job seekers – or the demonstrations outside and inside the Streatham job centre. What are your views, is it right to ‘test’ people in this context?
Well, the scientist in me wants to stress the importance of any assessment being psychometrically sound; that is, it should be appropriately standardised and as reliable, valid and free from bias as possible. But another part of me screams out in protest, as this is a modern challenge not just for psychometricians but also for cognitive behaviour therapists and all other psychologists involved, either directly or indirectly, in the government’s ‘nudge’ programme. We know that these assessments are going to be carried out anyway, and the dilemma is whether we prefer this to be done under our own professional guidelines and the BPS code of conduct, or whether we simply disengage.
Here is one issue that we fear is not sufficiently addressed – psychometrics are important to pretty much all fields in psychology, but (in our view) neglected in psychological training, particularly at postgraduate level. What can we do to address this issue?
An intriguing question. The increasing dominance of neuroscience in undergraduate psychology degrees has led, in my view, to an odd situation where a great deal of the psychology that students learn at university is largely irrelevant to their future careers as applied psychologists, whichever field they aspire to – clinical, occupational, educational or one of the other applied divisions. Somehow, core areas of our discipline appear to be leaking away to behavioural economics, machine intelligence and big data analytics, and curriculum change within our university system seems completely unable to keep up with rapid developments in modern communication. Of course, what travels down neurons is important, but how much more significant is the digital trail we leave online – all our hopes, expectations and desires are there for inspection and analysis. If it is decided that the latter is ‘not really psychology’, then I think the discipline is in grave danger…
Now project forward – what is your prediction about how the field of psychometrics will evolve over the next 10 years?
We already know that computers can predict our behaviour better than other humans, so I expect assessment systems using machine intelligence will become increasingly popular, simply because they are better at making predictions. But such actuarial science does have its drawbacks, and I don’t think the public will take kindly to having decisions about health, employment or education being made on these grounds alone.
Tips for understanding and using psychometrics
1. Get yourself trained and do the Society’s test user qualifications. Psychological assessment is a core skill for all budding and practising psychologists.
2. Recognise that there is no such thing as a perfect test. The key is choosing the right test for any given context, and being clear that no test is error free.
3. Interpret test results appropriately; a good start is to ensure that your norm (benchmarking) group reflects the people you are testing.
4. Always share test results in an appropriate way with your candidates, and remember that the biggest limitations of tests are the human users!
5. Keep yourself up to date. Psychometrics are changing fast!
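Tip 3 can be made concrete with a small example. The sketch below (hypothetical norm data, Python’s standard library only) converts the same raw score to an IQ-style standard score against two different norm groups, showing how the choice of benchmark changes the interpretation:

```python
import statistics

# Why the norm group matters (tip 3): the same raw score maps to very
# different standard scores depending on the group it is benchmarked
# against. Both norm groups below are made-up illustrative data.

def standard_score(raw, norm_group, mean_out=100, sd_out=15):
    """Convert a raw score to an IQ-style standard score (mean 100,
    SD 15) relative to a norm group's mean and standard deviation."""
    z = (raw - statistics.mean(norm_group)) / statistics.stdev(norm_group)
    return mean_out + sd_out * z

old_norms = [38, 42, 45, 47, 50, 52, 55, 58, 61]  # hypothetical 1970s sample
new_norms = [48, 52, 55, 57, 60, 62, 65, 68, 71]  # same test, newer sample

print(round(standard_score(60, old_norms)))  # well above average on old norms
print(round(standard_score(60, new_norms)))  # merely average on newer norms
```

On the outdated norms the raw score of 60 looks exceptional; against the newer, higher-scoring norm group it is unremarkable – which is also the Flynn effect in miniature.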
- Professor John Rust is Director of the Psychometrics Centre at the University of Cambridge. His book Modern Psychometrics will be published in its fourth edition in 2016.
- Almuth McDowall is Course Director for the MSc in Human Resource Development and Consultancy at Birkbeck, University of London [email protected]
- Céline Rojon is lecturer in Human Resource Management at the University of Edinburgh [email protected]