
Keeping it simple

Christopher Peterson and Nansook Park on the lasting impact of minimally sufficient research

09 May 2010

Simplicity is the ultimate sophistication.
Leonardo da Vinci

How can psychology be improved? A special issue of Perspectives on Psychological Science, published by the Association for Psychological Science, invited opinions from a variety of psychologists, including us (Diener, 2009). Our advice was to keep it simple (Peterson, 2009). We offered this advice not because simplicity is a virtue, although it is (Comte-Sponville, 2001). Rather, the evidence of history is clear that the research studies with the greatest impact in psychology are breathtakingly simple in terms of the questions posed, the methods and designs used, the statistics brought to bear on the data, and the take-home messages.

Simple does not mean simplistic. Nor does it mean easy or quick. Rather, simple means elegant, clear and accessible, not just to other researchers but to the general public. No one’s eyes glaze over when hearing about a high-impact study. No one feels stupid. No one asks, ‘And your point is?’ Psychology is the quintessential human science, and we believe that it must speak not only about but to its subject matter: people. Psychology is about everyone, not just those of us who work in ivory towers with internet access. The general public is interested in what psychologists have to say, so long as it is understandable and interesting. We would not tell those who do quantum mechanics to keep it simple, but psychology – at its best – is cut from a different cloth.

We have long believed that psychologists should ‘give it away’ (Miller, 1969). But how can we give away what we have learned if no one can understand it? If psychology falls short of this goal, the general public will turn to other sources, and we as psychologists will have no one to blame other than ourselves for our irrelevance.

Our thinking on this matter was inspired by one of the recommendations in the 1999 report by the American Psychological Association’s Task Force on Statistical Inference:
The enormous variety of modern quantitative methods leaves researchers with the nontrivial task of matching analysis and design to the research question. Although complex designs and state-of-the-art methods are sometimes necessary to address research questions effectively, simpler… can provide elegant and sufficient answers to important questions. Do not choose an analytic method to impress your readers or to deflect criticism. If the assumptions and strength of a simpler method are reasonable for your data and research problem, use it. Occam’s razor applies to methods as well as to theories (Wilkinson & The Task Force on Statistical Inference, 1999, p.598).

This recommendation appeared under the heading ‘minimally sufficient analysis’, and we generalised this label to call for ‘minimally sufficient research’.

Psychology today has become very complicated. Consider fMRI (functional magnetic resonance imaging), IRT (item-response theory), SEM (structural equation modeling), and a host of other acronymed research tools and statistical techniques. There are many legitimate uses of such strategies, and they have benefits when appropriate given the questions of interest to researchers. However, we suspect that in some cases, they have become little more than a hard-to-earn entry card to the academy.

Our own story about the needless complexity of psychological research concerns a simple cross-sectional study we did that found support for the contention that A influenced C through its effect on B. The original analyses showed that the product–moment correlation between A and C was significant, and that it shrank essentially to zero when B was held constant in a partial correlation. But the reviewers were not happy and suggested that structural equation modelling (SEM) was a better way to make the point, never mind that our measures of A, B, and C were each unifactorial and had excellent and comparable internal consistency.
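To make this concrete, here is a minimal sketch in Python with invented data; the variables A, B, and C, the numbers, and the helper function are illustrative only and are not taken from our study.

```python
# Illustrative sketch (invented data): the zero-order correlation between A and C,
# and the partial correlation between A and C with B held constant.
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=n)
B = 0.8 * A + rng.normal(scale=0.6, size=n)   # B depends on A
C = 0.8 * B + rng.normal(scale=0.6, size=n)   # C depends on B, not directly on A

def partial_corr(x, y, z):
    """Correlation between x and y with z partialled out."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

print("r(A, C)     =", round(np.corrcoef(A, C)[0, 1], 2))   # sizeable
print("r(A, C | B) =", round(partial_corr(A, C, B), 2))     # shrinks toward zero
```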

We set about to make this revision, which required enrolling in a lengthy faculty workshop on the topic, buying the relevant statistical software, learning how to use it, and finally doing the suggested analysis. Six months later, after numerous consultations with local SEM experts, we finally learned that A influenced C through its effect on B! The paper was duly published (Peterson & Vaidya, 2001), and it has been rarely cited except by ourselves. We do not blame the use of SEM for the paper’s lack of impact, but we do conclude that SEM did not turn this particular sow’s ear into a silk purse, and that illustrates the thesis of the present contribution.

What are the motives of psychologists who do not keep it simple? No doubt they are diverse. In some cases, no blame attaches to individual researchers. As we just illustrated, sometimes researchers are just doing what needs to be done to have their work published. Complexity apparently impresses journal reviewers and editors. In the current academic culture, complex research designs or analyses create admiration and respect, even when unnecessary given the purpose of a study, which of course should be to answer a basic question about the human condition. Graduate students in particular have learned this misleading lesson well. When we talk to them at conferences about their work, they often regale us with procedural and statistical details of their research but rarely frame them in terms of what they hope to learn.

Our original article urging minimally sufficient research generated a surprising number of appreciative e-mail messages from other researchers, some quite well known, who applauded the recommendation and related their own stories about how the publication process led them in regrettably esoteric directions.

In other cases, researchers may be defensive about the status of psychological science and want to make their studies look like natural science rather than social science by making things as complicated as possible. We like to frame the venerable distinction between natural sciences and social sciences (like psychology) not as one between hard science versus soft science, but rather as a distinction between hard science versus really hard science. Psychologists should never be defensive about the studies they do.

Finally, it is fun to master a difficult research or statistical technique (cf. White, 1959). We are as guilty of showing off our competence as any researcher, even if doing so gets in the way of telling the best story about the study we have done.

The evidence of history
The most important studies in psychology are not just simple. Otherwise, sixth-grade science fair projects would have lasting impacts. Rather, the most important studies in psychology are just simple enough to make a really interesting point. That is why these studies become and stay important. They are not important because they had complex designs. They are not important because they used maximally sufficient analyses. They are not important because they were reported in a 50+ page article or used the multiple-study format that has become obligatory in so many ‘premier’ psychology journals.

They are important because they are interesting, and because needless methodological and statistical complexity did not obscure the interesting points they made. Studies are important when they show other researchers what is possible and how to do it, not because they make research daunting. In short, an important study exemplifies the principle of minimally sufficient research.

Let us be specific with some examples of important studies and how they embody the simplicity we are extolling. Those of you who are teachers or recent students are probably familiar with the ancillary text Forty Studies That Changed Psychology (Hock, 2006). The studies described in this book and its previous editions range across the fields of psychology and were chosen because of their impact as shown by ongoing coverage in introductory psychology textbooks. There are other ways to identify the most important studies in psychology – like brute-force citation counts (see Kessler et al., 1994, for the highest-impact study from the most-cited researcher in the social sciences today – research that was difficult to do but easy to understand). But no one would argue that the studies in this book are not among the most important ever done in our field.

The common thread is appropriate simplicity, in design and statistical analysis. Indeed, some were case studies and used no inferential statistics whatsoever (e.g. Freud’s studies of patients with hysteria; Watson and Rayner’s conditioning of Little Albert; LaPiere’s travelling investigation of attitudes and actions; Skinner’s superstitious pigeons; Harlow’s forlorn monkeys; and Rosenhan’s multiple case study of being sane in insane places). Even the original Milgram study of obedience in effect was a case study, given that he did not assign research participants to different conditions. (Of course, Milgram, 1974, conducted subsequent studies that were true experiments, in that they systematically varied the parameters of his obedience paradigm, but it was the original demonstration of obedience that has had such a huge impact on psychology.)

Other studies of note were experiments, but always very simple ones with but one or a few dependent variables: Asch’s inquiry into conformity; Calhoun’s study of the effects of crowding among rats; Festinger and Carlsmith’s laboratory test of cognitive dissonance theory; Bandura and colleagues’ Bobo doll study of modelling; Wolpe’s investigation of systematic desensitisation as a treatment of fears; Seligman and Maier’s demonstration of learned helplessness in dogs; Langer and Rodin’s field experiment with nursing home residents; Latané and Darley’s investigation of unresponsive bystanders; Rosenthal and Jacobson’s study of teacher expectations; and so on. Results of these experiments were invariably analysed with one-way analyses of variance (ANOVA) and pairwise comparisons. In each case, we see a minimally sufficient design and a minimally sufficient analysis. And maximum impact.
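To show what such a minimally sufficient analysis looks like in practice, here is a minimal sketch with invented groups and data – a one-way ANOVA followed by pairwise comparisons; none of the numbers come from the studies just mentioned.

```python
# Illustrative sketch (invented data): one-way ANOVA across three groups,
# followed by pairwise t-tests between each pair of groups.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {
    "control":   rng.normal(loc=5.0, scale=1.0, size=30),
    "treatment": rng.normal(loc=6.0, scale=1.0, size=30),
    "placebo":   rng.normal(loc=5.2, scale=1.0, size=30),
}

f, p = stats.f_oneway(*groups.values())
print(f"One-way ANOVA: F = {f:.2f}, p = {p:.4f}")

for (name1, x1), (name2, x2) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(x1, x2)
    print(f"{name1} vs {name2}: t = {t:.2f}, p = {p:.4f}")
```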

Other high-impact studies used a correlational design – like Friedman and Rosenman’s study of the Type A coronary-prone behaviour pattern and Holmes and Rahe’s study of stressful life events and disease. In these cases, results were analysed with simple measures of association.

Still other high-impact studies introduced a novel method or measure, like Piaget’s méthode clinique (interviewing his own children about what they were thinking while performing various cognitive tasks); Morgan and Murray’s Thematic Apperception Test; Rorschach’s inkblots; Kohlberg’s moral dilemmas; and Rotter’s locus of control measure. What is important about these methods and measures is that they were simple enough for other researchers to use and interesting enough that other researchers wanted to use them and obviously did so.

The Smith and Glass (1977) meta-analysis of psychotherapy outcome studies deserves special mention. All clinicians and clients should mutter a daily thank you for this study because it answered the question ‘Does psychotherapy work?’ with an emphatic yes. This answer was possible because Smith and Glass introduced to psychologists a new statistical method for aggregating research results. Does this study contradict our argument that researchers should keep it simple? Not at all, because the statistical technique was introduced without fanfare and was explained in lucid fashion. The question of interest to Smith and Glass demanded a new way of looking at data, and their version of meta-analysis followed. And please note that their paper was only nine pages long! Would their study be published today? Would any of the papers mentioned here be published today? These are rhetorical questions without consensual answers, but they are still worth considering.

In many cases, the research strategies used in classic studies have been criticised by subsequent researchers. Still, the importance of the original studies remains, and the new and improved versions of these strategies are their unmistakable descendants.

We acknowledge that some of the classic studies have proven difficult to replicate and that some have been misrepresented in textbooks, invariably by making the results simpler than they really were. ‘Dumbing down’ the results of psychological research to make them more appealing to the general public, to students, or even to ourselves brings its own set of problems. Giving psychology away in a responsible way also requires a minimally sufficient approach, in this case to communication. In our own lectures about particular studies, we always mention research design compromises, effect sizes, and likely alternative explanations.

The sceptic might object that of course these classic studies in psychology embodied simple approaches because that was all that existed decades ago. If Piaget, Festinger, or Milgram had structural equation modelling programs on desktop computers or fMRI laboratories down the hall, then they would have used these strategies, and our arguments here would be specious. We have no definitive rebuttal, although we disagree. Regardless, our other arguments remain valid. A researcher’s questions should dictate his or her methods and analyses, not vice versa. We should not do research in a particular way just because we can. We should not use a statistical technique just because we understand the software and want to show off our mastery. We should not study introductory psychology students simply because they are available and easily recruited. Surely, you readers have noticed the diversity of the samples used in the high-impact studies in our field of psychology.

Do less and think more
We are not opposed to new research methods and analytic techniques; they can be valuable when needed and dictated by the questions researchers want to answer. But we are concerned here with the current academic culture, which automatically translates complexity into significance. This trend can lead the entire field of psychology to overlook what really matters.

Appreciative inquiry is an organisational change strategy that tells group members to examine what they do well and then to do more of it (Cooperrider & Srivastva, 1987). In the present case, the group is psychologists, and what many of us do is research. The lesson of history is that what we do well is often very stark – less is more – and if we were to do less (and think more), psychology would be improved.

Christopher Peterson is at the Department of Psychology, University of Michigan [email protected]

Nansook Park is at the Department of Psychology, University of Michigan

References

Comte-Sponville, A. (2001). A small treatise on the great virtues (C. Temerson, Trans.). New York: Metropolitan Books.
Cooperrider, D. & Srivastva, S. (1987). Appreciative inquiry in organizational life. Research in Organizational Change and Development, 1, 129–169.
Diener, E. (2009). Introduction to the special issue: Improving psychological science. Perspectives on Psychological Science, 4, 1.
Hock, R.R. (2006). Forty studies that changed psychology: Explorations into the history of psychological research (5th edn). Upper Saddle River, NJ: Pearson Prentice Hall.
Kessler, R.C., McGonagle, K.A., Zhao, S., et al. (1994). Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: Results from the National Comorbidity Survey. Archives of General Psychiatry, 51, 8–19.
Milgram, S. (1974). Obedience to authority. New York: Harper & Row.
Miller, G.A. (1969). Psychology as a means of promoting human welfare. American Psychologist, 24, 1063–1075.
Peterson, C. (2009). Minimally sufficient research. Perspectives on Psychological Science, 4, 7–9.
Peterson, C., & Vaidya, R.S. (2001). Explanatory style, expectations, and depressive symptoms. Personality
and Individual Differences, 31, 1217–1223.
Smith, M.L. & Glass, G.V. (1977). Meta-analysis of psychotherapy outcome studies. American Psychologist, 32, 752–760.
White, R.W. (1959). Motivation reconsidered: The concept of competence. Psychological Review, 66, 297–333.
Wilkinson, L. & The Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.