Overconfident defenders of the Dunning-Kruger effect
Our recent Psychologist article on the Dunning-Kruger effect (March issue) was not entirely endorsed by David Dunning in his reply (April issue). We have noted elsewhere that the right-of-reply format tends to produce protracted and personalised debates that drift away from the data (Chambers, McIntosh & Della Sala, 2021), so we will try to be brief and constructive here.
The Dunning-Kruger effect is the common finding that poor performers overestimate their abilities more than good performers do. Dunning’s ‘dual-burden’ account of the effect is that poor performers lack the metacognitive insight to be aware of their incompetence. The main target of our article was not the work of Dunning and collaborators, much of which we admire, but the divisive meme that popular discourse has made of it: that stupid people are too stupid to know they are stupid.
Dunning’s first discontent is that we argued ‘that the effect has no real existence but is a mere statistical artefact’. We actually argued that the effect is strong, but so much shaped by statistical artefacts that it provides no real evidence for the dual-burden account. The difference depends on whether the Dunning-Kruger effect is defined as the classic pattern of overestimation in low performers, or as the dual-burden account of this pattern. It is hard to know which Dunning prefers, because he seems to switch between definitions in his reply. He initially identifies the effect with his theoretical account, chastising us for ‘conflating the idea behind the Dunning-Kruger effect with its concrete measurement’; but he subsequently identifies the effect with the pattern itself, insisting that the effect is real because ‘the pattern of self-misjudgements remains regardless of what may be producing it’. We are at least consistent, and consistent with wider usage, in identifying the Dunning-Kruger effect with the pattern of self-misjudgements (see e.g. Britannica and Wikipedia). We maintain that this pattern is driven by statistical artefacts, and not by metacognitive differences between good and poor performers.
One major artefact is regression to the mean, which will be most extreme if researchers double-dip the data, by using the same measure of performance to index ability and to benchmark self-estimation. The contribution of double-dipping to the classic pattern is confirmed by the fact that the pattern is attenuated when double-dipping is avoided, or adjustments are made for variability in the measurement of performance (Ehrlinger et al., 2008; Feld et al., 2017; Kruger & Dunning, 2002; McIntosh et al., 2019). Dunning notes that the pattern is not eliminated by such steps, so double-dipping cannot be the whole story. We agree, but this only underlines our point about the pervasiveness of regression to the mean, which occurs between any two measures that are less than perfectly correlated. Our example of height and weight was chosen advisedly, to show that imperfect correlations always imply regressive relationships, even when precise measurement is possible (with no double-dipping). The regressive relationship between ability and self-estimates tells us only that self-estimates are imperfectly related to ability; it does not tell us why, nor imply any special lack of insight amongst the poorest performers.
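The point can be demonstrated with a minimal simulation, offered here as an illustration of the statistical argument rather than a reconstruction of any of the cited studies. All parameters (the sample size, and a correlation of 0.5 between ability and self-estimate) are our own illustrative assumptions. Self-estimates are generated with no metacognitive differences between good and poor performers, yet the classic pattern emerges: the bottom quartile ‘overestimates’ and the top quartile ‘underestimates’, purely through regression to the mean.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Ability and self-estimate are imperfectly correlated (r ~ 0.5).
# Crucially, the noise on self-estimates is identical for everyone:
# no group lacks 'insight' more than any other.
ability = rng.normal(0, 1, n)
self_estimate = 0.5 * ability + rng.normal(0, np.sqrt(1 - 0.5**2), n)

# Convert both to percentile ranks, as in the classic plots.
perf_pct = 100 * ability.argsort().argsort() / (n - 1)
est_pct = 100 * self_estimate.argsort().argsort() / (n - 1)

# Split into quartiles by measured performance and compare means.
quartile = np.minimum((perf_pct // 25).astype(int), 3)
for q in range(4):
    mask = quartile == q
    print(f"Q{q + 1}: mean performance {perf_pct[mask].mean():5.1f}, "
          f"mean self-estimate {est_pct[mask].mean():5.1f}")
```

The printout shows the signature Dunning-Kruger wedge: mean self-estimates sit closer to the 50th percentile than mean performance in every quartile, so the lowest performers appear most ‘overconfident’, despite the simulation containing no metacognitive mechanism at all.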
So, we agree with Dunning that, in order to find out whether poor performers are metacognitively different from good performers, we need other research strategies; but some of the studies he cites have shortcomings that make their conclusions mere tautologies. For instance, to find that the people least able to distinguish fake news from real news are the most likely to share fake news stories (Lyons et al., 2021) is more or less to measure the same thing twice, like showing that the slowest runners are the least likely to do well in running races. To find that the people who endorse autism myths have the least knowledge of autism (Motta et al., 2018) is actually to measure the same thing twice, if the test of knowledge includes some of those myths. Studies in this general area seem particularly prone to such logical circularities, which undermine their claims scientifically, but are rarely trumpeted in the media fanfare that follows.
More promising is the study by Jansen et al. (2021), which was based upon tasks from Kruger and Dunning’s (1999) original paper. This large-scale online study did not assess metacognition directly, but compared models of the data that either did or did not include the assumption that poor performers had less metacognitive insight than good performers. This assumption allowed for a slightly better fit to the data. Dunning takes this to vindicate the dual-burden account, but the proposed metacognitive differences accounted for only a small fraction of the classic pattern at the extreme ends of the ability range. When Jansen et al. plotted their data by ability quartiles, the differences between models with and without metacognitive differences were invisible. Far from vindicating metacognitive differences as a substantive source of the Dunning-Kruger effect, these data are consistent with the view that the signature pattern is overwhelmingly driven by statistical regression.
Some incompetent people may seem grossly overconfident, but this is mostly a statistical truism, not a metacognitive counterpart of incompetence; and there are also poor performers who humbly admit their limitations, and high performers hubristic in their arrogance. Indeed, people in general are rather bad at estimating themselves using simple rating scales. But self-knowledge is a multi-faceted topic, and we agree with Dunning that there may be cases in which genuine overconfidence can be traced to psychological causes, for instance if a person is using the wrong rule (e.g. for calculating compound interest), thinking it to be the right one (Williams et al., 2013). We also expect that, if metacognition is measured by appropriate methods, the poorest performers may often be the least able to distinguish their successes and failures, though this would not mean that they are overconfident or deluded. Such studies would take us beyond the reductive meme of the Dunning-Kruger effect, to a more nuanced examination of the complex topic of self-estimation.
Robert D. McIntosh
Sergio Della Sala
Human Cognitive Neuroscience, Psychology, University of Edinburgh
Chambers, C., McIntosh, R.D. & Della Sala, S. (2021). Is ‘right-of-reply’ right for science? Cortex, 142, A1.
Ehrlinger, J., Johnson, K., Banner, M. et al. (2008). Why the unskilled are unaware: Further explorations of (lack of) self-insight among the incompetent. Organizational Behavior and Human Decision Processes, 105(1), 98-121.
Feld, J., Sauermann, J. & de Grip, A. (2017). Estimating the relationship between skill and overconfidence. Journal of Behavioral and Experimental Economics, 68, 18-24.
Kruger, J. & Dunning, D. (2002). Unskilled and unaware—But why? A reply to Krueger and Mueller. Journal of Personality and Social Psychology, 82(2), 189-192.
Lyons, B.A., Montgomery, J.M., Guess, A.M. et al. (2021). Overconfidence in news judgments is associated with false news susceptibility. Proceedings of the National Academy of Sciences, 118(23), e2019527118.
McIntosh, R.D., Fowler, E.A., Lyu, T. & Della Sala, S. (2019). Wise up: Clarifying the role of metacognition in the Dunning-Kruger effect. Journal of Experimental Psychology: General, 148(11), 1882.
Motta, M., Callaghan, T. & Sylvester, S. (2018). Knowing less but presuming more: Dunning-Kruger effects and the endorsement of anti-vaccine policy attitudes. Social Science & Medicine, 211(C), 274-281.
Williams, E.F., Dunning, D. & Kruger, J. (2013). The hobgoblin of consistency: Algorithmic judgment strategies underlie inflated self-assessments of performance. Journal of Personality and Social Psychology, 104(6), 976–994.