Looking Back: How it all began
It is now more than 40 years since Graham Hitch and I published our paper proposing to extend the earlier concept of short-term memory (STM) into a more complex and ambitious working memory model (Baddeley & Hitch, 1974). We suggested replacing one STM system with three inter-related subsystems, emphasising our assumption that its function was to hold information while working on it; a memory system that helps us to think. Our original model remains at the centre of the current version (Baddeley, 2012). The multicomponent model did not, of course, emerge fully formed. Hence I was intrigued by the invitation to reflect on how it all began, and to supplement an earlier account, published elsewhere, of the way in which the model subsequently developed (Baddeley & Hitch, 2007).
My first job involved working at the MRC Applied Psychological Unit in Cambridge on the design of postcodes. My supervisor was Conrad, now best known for his discovery of the importance of acoustic similarity in verbal short-term memory. He showed that errors in recalling strings of letters tended to be similar in sound to correct items (e.g. b for v) and that sequences of similar-sounding items (e.g. b g v t c) were harder to remember than dissimilar ones (f k w j q). My task, however, was concerned with long-term memory for postal codes. I applied the recently developed field of information theory to verbal long-term memory (LTM) and was able to generate memorable codes for every UK post town based on the letter structure of English. By this time, however, the Post Office had already settled on the current system, so they were never used.
My work linking language structure to memory did, however, lead to my first short-term memory experiment, demonstrating to three eminent Harvard professors, George Miller, Jerry Bruner and Leo Postman, that their paper showing an influence of language structure on perception was in fact based on memory (Baddeley, 1964a). Their paper was also criticised by an up-and-coming young Canadian psychologist, Endel Tulving. I pointed out that he was wrong too (Baddeley, 1964b); I seem to have been rather a pugnacious young man! I was then switched to working on finding an improved method for measuring the quality of telephone links. The idea was that the negative impact of a noisy signal would be exaggerated if the message had to be held in memory, hence providing a more sensitive measure of the link. I speculated that the measure might be even more sensitive if the items to be remembered were similar in sound – Conrad's acoustic similarity effect – and proceeded to test this.
By this time there was a very active controversy concerning whether it was necessary to assume more than one memory system, with Conrad's work being cited as evidence for a temporary acoustic system, in contrast to the more stable system in long-term memory. However, Conrad had not tested other kinds of similarity, so I decided I would use words rather than letters and compare acoustic similarity with similarity of meaning. My experimental setup was rather basic: a room full of volunteers, a noise source that could be switched on or off, and myself reading out sequences of five words. The results were clear: a big effect of phonological similarity, a small but significant effect of meaning, and no effect of noise level over and above what could be accounted for by mishearing. My bosses Broadbent and Conrad agreed that this was an interesting result, theoretically if not practically, and I was encouraged to go ahead and explore it further. The telephone project was passed on to Patrick Rabbitt, who had just arrived at the Unit (using a more sensitive method he was able to show a small effect of noise). Working with my friend Harold Dale, I went on to demonstrate that the pattern changed dramatically under standard long-term memory conditions, finding meaning to be all-important and sound relatively unimportant. On the basis of these results I began to conclude that there were two separate memory storage systems: a short-term system relying on an acoustic code and a long-term system based on meaning.
This view rapidly proved too simple; semantic effects can occur dramatically in standard STM tasks such as memory span, where span for unrelated words is around five but for meaningful sentences nearer 15. Furthermore, we must have acoustic/phonological long-term memory; otherwise how could we learn the sound of new words? The increasingly influential neuropsychological evidence was also inconsistent with my simple view. Despite earlier claims that amnesic patients showed semantic encoding deficits (Cermak & Reale, 1978), the alcoholic Korsakoff patients on whom this conclusion was based subsequently proved to have subtle frontal lobe damage. Patients with a dense but pure amnesia showed no semantic encoding difficulties (Baddeley & Warrington, 1970).
I was not alone in my enthusiasm for exploring the field of short-term memory. Demonstrations of apparently clear differences between long- and short-term memory led to the generation of a large number of experimental paradigms and many models. One book, for example, had 13 chapters each with a different model. One model, however, became dominant, so much so as to be named the ‘modal model’. This model was proposed by Atkinson and Shiffrin (1968), who also claimed it to be a working memory model which, in addition to providing short-term storage, was capable of such complex activities as selecting strategies, controlling input to LTM, guiding retrieval and much else. The model was also expressed mathematically, although the examples provided were limited to the learning of meaningless verbal items.
By the end of the 1960s, however, problems with the modal model were starting to emerge. The assumption that material held in the short-term store would automatically transfer to LTM, with duration in store linked to amount learned, proved unjustified. Failure to address the issue of type of material and method of encoding created even more problems. These came to a head with a paper by Craik and Lockhart (1972), who introduced the concept of levels of processing, whereby learning depended on what was done with the material rather than how long it was held in STM. For example, processing a word in terms of its visual appearance led to poor retention, making a phonological judgement about it improved retention, but processing it semantically and relating it to existing knowledge was by far the most effective. Problems also came from neuropsychology, where patients with grossly impaired verbal STM were studied by Shallice and Warrington (1970); according to the modal model, defective STM should lead to grossly impaired LTM, which it did not. Furthermore, if the system acted as a working memory, such patients should have massive problems in their daily lives. They didn't. At this point many investigators into STM moved on to other, more recently developing areas, such as levels of processing and semantic memory.
It was at this point, with me at the age of 37, that my head of department gently suggested I should perhaps consider seeking my first research grant. I applied to work on the link between long- and short-term memory, asking for a postdoc and a research assistant. The committee decided it was too expensive but happily cut the research assistant, not the postdoc, Graham Hitch. I had known Graham as a master's student, converting from a Cambridge physics degree to experimental psychology. He had just finished a PhD under Donald Broadbent on STM, and proved (and still proves) to be an ideal colleague and collaborator.
It seemed an inauspicious time to be entering the field of STM, given its problems and the fact that we did not have access to patients with the STM deficits that were so theoretically important. Happily, we hit on the idea of turning our students into ‘patients’, not by removing chunks of their left hemisphere, but by keeping it occupied in remembering strings of digits, while performing the various tasks that were assumed to depend upon short-term/working memory. The longer the digit sequence, we argued, the more STM capacity should be used up and the greater the disruption.
What we found was more complex, and in the long run more interesting than this. Concurrent digit load slowed performance down, but had an effect that was far from catastrophic. Hence in one study people solved simple reasoning tasks while holding from zero to eight digits, showing a nice linear increase in reasoning time, but performing at a consistent five per cent error rate regardless of load. We showed similar effects in studies of verbal memory and of prose comprehension. Our attempt to account for this pattern of results resulted in three proposed components that still form the core of the multicomponent model, namely an attentionally limited control system (the central executive), a system for holding sequences of acoustic/speech-based information (the phonological loop) and its visual equivalent (the visuo-spatial sketchpad).
At this point I received an invitation to submit a chapter to an influential annual series entitled Recent Advances in Learning and Motivation. (It is interesting to reflect that if this happened today, our head of department would strongly advise against publishing in a volume that would not be eligible for the REF and would not register on the SCI citation count. Instead we would have had to publish a series of separate papers, in each case struggling with sceptical referees concerned at our excessive speculation.) We hesitated; the model was clearly not yet complete (it still isn't!), but it seemed too good an opportunity to miss, and Baddeley and Hitch (1974) duly appeared. We would have been amazed – indeed, I am still amazed! – that it would still be cited four decades later.
- Alan Baddeley is Professor of Psychology at the University of York
Atkinson, R.C. & Shiffrin, R.M. (1968). Human memory: A proposed system and its control processes. In K.W. Spence & J.T. Spence (Eds.) The psychology of learning and motivation: Advances in research and theory. (Vol. 2, pp.89–195). New York: Academic Press.
Baddeley, A.D. (1964a). Immediate memory and the ‘Perception’ of letter sequences. Quarterly Journal of Experimental Psychology, 16, 364–367.
Baddeley, A.D. (1964b). The redundancy of letter-sequences and space-information. American Journal of Psychology, 77, 322.
Baddeley, A. (2012). Working memory: Theories, models, and controversies. Annual Review of Psychology, 63, 1–29.
Baddeley, A.D. & Hitch, G.J. (1974). Working memory. In G.H. Bower (Ed.) Recent advances in learning and motivation (Vol. 8, pp.47–89). New York: Academic Press.
Baddeley, A. & Hitch, G. (2007). Working memory: Past, present… and future? In N. Osaka, R.H. Logie & M. D'Esposito (Eds.) The cognitive neuroscience of working memory (pp.1–20). Oxford: Oxford University Press.
Baddeley, A.D. & Warrington, E.K. (1970). Amnesia and the distinction between long- and short-term memory. Journal of Verbal Learning and Verbal Behavior, 9, 176–189.
Cermak, L.S. & Reale, L. (1978). Depth of processing and retention of words by alcoholic Korsakoff patients. Journal of Experimental Psychology: Human Learning & Memory, 4, 165–174.
Craik, F.I.M. & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning & Verbal Behavior, 11, 671–684.
Shallice, T. & Warrington, E.K. (1970). Independent functioning of verbal memory stores: A neuropsychological study. Quarterly Journal of Experimental Psychology, 22, 261–273.