
Does psychology face an exaggeration crisis?

Brian Hughes argues that we are prone to accentuating the positive, even when it comes to progress in improving our science.

06 September 2018

Not another article about the crisis in psychology, you might complain. Déjà vu all over again? You thought we reached peak crisis some time ago, didn’t you? We’re supposed to be all post-crisis now: obsessing about the consequences of fear-mongering, disturbed that terminal negativity will prove off-putting to wider audiences (including, worryingly, funding bodies).

Some people suggest that talk of crisis in psychology is overblown. However, my view is that the problem is not exaggerated at all. If anything, the exaggeration lies elsewhere – in psychologists’ proneness to accentuate the positive in their midst.

We overstate what we have achieved in our research. We overstate the impact, importance, and applicability of our findings. And we overstate our achievements with regard to the replication crisis itself: we congratulate ourselves for the occasional bout of self-flagellation, and exaggerate the extent to which we have successfully addressed our problems.

So, yes, at the risk of engendering reader habituation, here is yet another article about a crisis in psychology – the exaggeration crisis.

How do we know that exaggeration is endemic?

Psychology’s problem with exaggeration exhibits several symptoms, which I will tackle in turn.

Despite ample warnings, our field still lacks a replication culture. Recent high-profile replication attempts have been extremely important, but there is no sign that psychology as a whole has suddenly started to embrace replication. Fewer than 1 per cent of papers published in the top 100 journals relate to replications of previous research. Instead, statistical significance remains a de facto proxy for replicability – a virulent inflationary fallacy. We go on attributing unwarranted certainty to tentative statistics, ignoring the rampant false-positive rate. Simply put, despite many treatises on the flaws of NHST, the vast bulk of psychology research published today continues to exaggerate the implications of ‘p < .05’.

We continue to freely cite non-replicable research, including several so-called ‘classic’ studies that have become staples in our psychology textbooks. It is bad enough that most studies cited in textbooks have never been replicated, but it is worse that many of those where replication attempts have occurred – and whose findings have been revealed as unreliable – continue to be cited as though nothing has changed.

We stand by, largely without protest, as extravagant claims circulate widely in popular culture under the banner of psychology. Consider that the second most-viewed TED Talk of all time concerns the ropey concept of power posing; or witness the mainstream glorification of Jordan Peterson and his overreach-based mysticism. Rather than urge the public to approach such fads with caution, psychologists (and their professional bodies) often appear more concerned with finding ways to climb aboard the bandwagon.

Echoing Mats Alvesson’s essay in the August issue, on modern grandiosity, researchers today employ far more hyperbole when writing journal abstracts than they did four decades ago. In 1974, one in fifty abstracts employed complimentary descriptors (such as ‘innovative’) to summarise research. By 2014, according to Christiaan Vinkers and colleagues in a 2015 BMJ article, self-praise featured in one of every six, an increase of nearly 900 per cent. Ironically, this growth in self-promotion coincided almost exactly with the emergence of the very discourse that now frets publicly about file-drawer effects, underpowered study samples, and problems with research replicability. The spread of crisis talk has done little to engender obvious modesty in scientific researchers (it may even, perversely, have discouraged it).

Notwithstanding the Open Science movement, the file-drawer problem hasn’t gone away. The average psychology study is still feebly underpowered (see Smaldino and McElreath’s 2016 paper on the ‘natural selection of bad science’), a problem that appears to worsen the more it is scrutinised (average power has plummeted from around 50 per cent in the 1960s to around 25 per cent today). And yet virtually every published research paper reports a significant finding. Given that the power to detect real effects is mostly lacking, it follows that a great many reported findings must be false positives. In other words, the typical reported finding in psychology is an exaggeration of a true effect, or even of a null effect. This ‘winner’s curse’ reflects psychology’s incorrigible exaggeration impulse.
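To make the arithmetic behind that claim concrete, here is a minimal sketch (mine, not drawn from any of the papers cited) of how low power combines with selective publication of significant results. The 10 per cent prior probability of a true hypothesis is an assumed figure, chosen purely for illustration.

```python
# Illustrative sketch: low statistical power plus selective publication of
# significant results inflates the share of false positives in the literature.

alpha = 0.05        # conventional false-positive rate per test
power = 0.25        # roughly the average power cited for modern psychology studies
prior_true = 0.10   # assumed proportion of tested hypotheses that are actually true

# Among all significant results, the share reflecting genuinely true effects
# (the positive predictive value).
true_positives = power * prior_true
false_positives = alpha * (1 - prior_true)
ppv = true_positives / (true_positives + false_positives)

print(f"Significant findings that are true effects: {ppv:.0%}")
print(f"Significant findings that are false positives: {1 - ppv:.0%}")
```

On these assumptions, roughly two out of every three ‘significant’ findings would be false positives. The exact proportion depends on the prior, which nobody knows with precision, but the direction of the problem does not.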

It is true that psychology’s existential challenges have received conspicuous attention in recent years. However, it is reckless to claim we have dealt with these problems simply because we have discussed them. We cannot wish the crisis away. Yes, some technical solutions are beginning to appear (sporadically), but an obvious culture-shift has yet to take hold. The incentives in professional and academic psychology remain unchanged, and continue to reinforce the bad habits of the past.

Enablers of exaggeration in psychology

What drives psychology’s hype machine? Some excess undoubtedly results from attribution bias. People instinctively interpret ambiguity in self-flattering ways, attributing positive aspects of their work to merit and negative ones to chance. Psychologists are no exception. The result is a genuine belief that our insights are profound, our therapies outstanding, and our research more robust than is actually the case.

Some exaggeration emerges from a broader modern culture, described by Alvesson, that promotes unapologetic extravagance in language, attitude, and aspiration. Psychology is not alone in inflating its wares for modern audiences, or in refining image while neglecting substance. But the public gaze produced by popular interest in psychology’s subject matter certainly serves to exacerbate this tendency.

Psychology’s research pipeline is riddled with inflationary features. Journals continue to favour statistically significant findings, editorially institutionalising the file-drawer effect. Professionally, scientists and academics are judged on publication and citation volume (with some amorphous achievements relating to ‘impact’ and ‘reach’ thrown in), a system where the bigger the splash, the smoother the career progression. There is a clear imperative for research psychologists to blow their own trumpets. You could even say that those who don’t are behaving irrationally by choosing to undermine their self-interest.

Further inflation arises at the interface of academia, public relations, media churnalism, and secondary reporting. When university press officers convert abstracts into press releases, the process frequently involves cherry-picking of results, non-specialist re-writing, and a sanguine tolerance of error. These processes of ‘sharpening’ afflict all kinds of news reporting. The production of psychology news is presumably no exception.

What can we do about our exaggeration crisis?

To avail of a cliché, psychologists’ first step in solving their exaggeration problem is to acknowledge that they actually have an exaggeration problem. This is not as easy as it sounds. Exaggeration impulses are usually self-perpetuating. Optimism about their field leads many psychologists to adopt ‘nothing-to-see-here’ poker faces whenever the c-word is uttered, to liberally afford the benefit of the doubt to peers, and to dissuade others from panicking over the state of psychology.

Given that exaggeration is a behaviour shaped by reinforcement, it is important to attack the issue of incentives in a full-on way. Exaggeration is incentivised by editors’ attitudes, the widespread (ab)use of citation metrics, authorship conventions, and the attritional nature of peer-review systems. All of these can be addressed, if the will is there.

Many journal editors (along with associate editors and reviewers) have been at the forefront of promoting good practice in research and reproducibility. However, there remains a need to shift editorial culture across psychology as a whole. In short, editors require reculturation. Replication research – the hallmark of the scientific method, but a unicorn in psychological science – can only be considered a priority format for publication if editors identify it as such. The prioritising of novelty over repetition equates to a desire for sensationalism, which, as well as undermining reproducibility, slowly blights the very tone of what we publish.

Similarly, the policy of publishing statistically significant findings rather than null effects is as demeaning as it is distorting. The file-drawer effect has received much attention (although without altering the practice of journal editors, even after forty years). But the prioritising of significance by journal editors also feeds psychology’s exaggeration impulse. Psychologists are taught to be ashamed of having nothing exciting to say. Ideally, psychology journals should sign up to a doctrine of publication regardless of p, and a practice of peer-review that focuses on methodological rigour rather than findings.

Citation metrics need complete recalibration, or even abandonment. Valuing research on the basis of virality represents poor quality control. We all know that citation statistics do not reflect the quality of the research that is cited. In this regard, so-called altmetrics face similar problems. The number of times a paper is tweeted is effectively an alternative version of how often it is cited, but with even less connection to the notion of peer-review. Far better to dispense with person-level metrics altogether. A researcher’s h-index should be seen as no more relevant than their star sign.

Finally, if the problem is individualism, then a radical set of solutions would involve removing individuals from the picture. For example, authorship of research could be completely de-personalised: there is no absolute need in science for author names to be published alongside findings. The provenance of outputs could be tracked using study ID numbers, or information about the location where the research was conducted. There need not be a focus on highlighted individuals, and the resultant carving up of authorship credit in Lennon-McCartney terms as if bartering a divorce settlement.

Alternatively, why not dispense with pre-publication peer-review altogether? In the digital age, the cost of printing no longer requires us to filter out lesser-valued submissions. Moreover, online publication facilitates organic post-publication review, in the form of commenting systems. This would remove the accolade of publication, essentially devaluing the currency and dampening the hysteria of wealth. Research would receive attention on the basis of its inherent quality, and the merit of claims would be determined by collective consensual opinion.

Indeed, why publish ‘articles’ in psychology journals at all? Why not move to the production and dissemination of open-access datasets and the formation of scientific consensus over time by expert-network crowdsourcing? If any metrics were to be involved, perhaps they could focus on the degree to which individuals (or institutions) contribute to the collective effort, with promotions (or rankings) determined on that basis.  

Talking towards a bold new world

It is important to acknowledge that human factors underpin the so-called crisis in psychology. Insofar as the crisis revolves around false claims to truth, support for the unsupportable, and achievements that are not always what they seem, it is apparent that it stems from exaggeration.

In recent years we have seen much discussion about reproducibility in psychology and many welcome initiatives to deal with the resultant problems. However, it is worth bearing in mind that the success of these initiatives depends on the spirit with which they are taken up.

A bold new world will be of little consequence without a commitment to the pursuit of truth. New systems won’t amount to much unless there is a determination to make them work. Our inherent proneness to exaggeration is both individual and collective. But as psychologists, we are perhaps best placed to explore, understand, and address what is going on. It would be especially ironic, in my view, were psychologists to neglect the human factors underpinning scientific crises.

So long as exaggeration in psychology is rewarded, it will continue to be prevalent. This just might include a tendency to exaggerate the degree to which our replication crisis is being successfully addressed, and to pat ourselves collectively on the back for all the good work we are doing.

Dare I say, it would be dangerous to exaggerate the progress we are making. It is not yet time to stop talking about the crisis in psychology.

- Brian Hughes is Professor of Psychology at NUI Galway. His latest book, Psychology in Crisis, is published by Palgrave (2018).
[email protected]

See also our extract from his previous book, Rethinking Psychology.