How can we improve psychological science?

Christian Jarrett on a recent special issue from the US, and an invitation to come up with your own suggestions

In a recent navel-gazing special issue of the journal Perspectives on Psychological Science (open access), our psychologist cousins across the Atlantic mused on the ways the science and practice of psychology could be improved. There are 26 articles in all, focusing on how psychology research is conducted, reviewed and published, as well as papers on ethics, the teaching of psychology, and the application of psychology in the real world. Should we conduct more psychological studies into the ethics of human research? Should peer review be ditched for a superior system? Read on for a round-up of these and other blue-sky suggestions.

Dean Simonton of the University of California opened the issue with the optimistic view that through its study of science, psychology could help itself in at least three ways – by identifying those individuals most likely to excel in the discipline, by improving the productivity of psychological scientists and by evaluating the progress of psychology.

Other contributors made more specific recommendations. Christopher Peterson at the University of Michigan said we should keep things as simple as possible, rather than concocting complicated experimental designs just for the sake of it. Recalling many of the classic experiments in psychology – such as those by Asch, Milgram and Harlow – he noted that they all share a simplicity in design and statistical analysis. ‘The lesson of history is that what we do well is often very stark – less is more – and if we were to do less (and think more), psychology would be improved.’

David Barlow and Matthew Nock (Boston and Harvard Universities) called for a return to more idiographic research – that is, the study of single individuals rather than experimental groups – arguing that the single case-study approach can complement the findings from randomised controlled trials. Jerome Kagan at Harvard further advocated the importance of drawing on more than one source of evidence when testing hypotheses – verbal self-report, behavioural and biological data. ‘Every phenomenon of interest lies behind a thick curtain punctuated with a large number of tiny holes,’ he explained. ‘The view from each small opening in the curtain, analogous to the information provided by one method, does not permit a complete comprehension of the whole event.’

Other contributors lamented the decline of specific practices in psychology. Robert Cialdini at Arizona State University would like to see a return to more field research. ‘As we have moved increasingly into the laboratory and away from the study of behaviour,’ he said, ‘I believe we have been eroding the public’s perception of the relevance of our findings to their daily activities.’

Edwin Locke (University of Maryland), meanwhile, argued that it’s time to bring introspection – the act of reflecting on one’s own mental processes – out of the closet, after an unofficial ban lasting over 100 years. ‘The anti-introspection bias discourages psychologists themselves from introspecting, and not only because colleagues would probably frown upon it,’ he said. ‘Virtually no top journal would consider introspective reports to be publishable. Yet, introspection could provide valuable raw material for building theories, especially if psychologists worked together to stimulate one another’s thinking.’

John Cacioppo and Jean Decety (University of Chicago) would like to see psychological science branch out into neighbouring territories: they argued that psychologists are uniquely placed to study the functional organisation of the human brain, alongside their traditional study of mind and behaviour. Anticipating the training implications of such a development, the pair suggested ‘a stronger grounding in mathematics, computational modelling, biology, and the neurosciences’ may be needed.

According to Joan Sieber (California State University), research practices in psychology could be improved by expanding the empirical study of human research ethics – what she dubs ‘Evidence-Based Ethical Problem Solving (EBEPS)’. For instance, one’s intuition might interpret the distress shown by PTSD patients following interview by a researcher as a clear sign that to question such people is unethical. However, research using the Reactions of Research Participants Questionnaire has shown that PTSD patients actually find research interviews beneficial. Her article gives further examples of the kinds of issues being tackled by this approach.

Stephen Ceci (Cornell University) and Maggie Bruck (Johns Hopkins University) complained that the bureaucracy and total independence of local ethics committees – known in the US as institutional review boards (IRBs) – have caused ‘delays, missed deadlines, and reduced productivity’ for their students and themselves. They suggested introducing an appeals process for researchers, and evaluating the competence of ethics committee members. In a similar vein, Susan Fiske (Princeton University) called for local ethics boards not only to police unethical research but also to actively promote beneficial research. Following the introduction of such a complementary role ‘researchers will cooperate more actively with the IRB, viewing it as a collaborative guide rather than an intransigent roadblock,’ she said.

Anthony Greenwald (University of Washington) drew attention to risks of conflicts of interest in psychology research that to date are almost entirely unaddressed by professional bodies. For example, citing his own experience as a journal editor, he outlined how easy it would be for an editor, via the tactical selection of reviewers, to undermine a submitted article that challenged his or her own theoretical position. Reviewers, expert witnesses and researchers’ own confirmation biases are all likely to be prone to conflicts of interest that professional bodies should do more to address, Greenwald argued.

In separate contributions, Denise Park (University of Texas at Dallas) and Shelley Taylor (University of California) provided converging arguments for how to improve the impact factor of psychology journals. Both believe that the journal Psychological Science – the discipline’s leading primary research journal in terms of impact factor – provides a template that should be emulated by others, with its short, snappy articles and its rapid review process, including the use of ‘triaging’ (the prompt rejection of unsuitable manuscripts prior to peer review). Park advised placing supplementary methodology online, as is done by the prestige general science journals Science and Nature, while Taylor argued against the value of papers that contain multiple variants of the same experimental design, as is found, for example, in the Journal of Personality and Social Psychology. This practice, she says, makes the ‘review process tediously long and the articles tediously dull. Other journals are increasingly adopting a word limit between 2500 and 6000 words, and psychological journals might profitably do the same.’

Peer review has served science well since the 18th century, but Jerry Suls (University of Iowa) and René Martin (Center for Research in the Implementation of Innovative Strategies in Practice) think it’s time psychology considered alternatives, and that psychologists are perfectly equipped to investigate the available options. These alternatives include an open (rather than anonymous) peer review system; a hybrid system of public online discussion of a paper prior to formal peer review; the highlighting of high-impact papers as gauged by the online interest and comments they attract (a procedure performed by PLoS); the adversarial model, in which reviewers play the role of prosecutors pointing out a paper’s weaknesses with authors given the chance to formulate a rebuttal; and the arXiv model, which is a self-policed online repository used for the rapid transmission of new scientific ideas prior to formal peer-reviewed publication.

‘Psychology has not yet fully exploited the opportunities offered by the Internet and other technologies for the dissemination of scientific findings or those offered by alternative practices evolving in other disciplines,’ Suls and Martin wrote; ‘in fact, psychology appears to be in the rear guard in this respect.’

Seth Schwartz (University of Miami) and Byron Zamboanga (Smith College) also made a number of recommendations for how to improve the peer review system, which they consider in its current form to be inefficient and unfair. Among their suggestions are an appeals process for authors unhappy with the rejection of their papers; for editors to avoid playing a merely passive role, in which they cede total responsibility to their chosen reviewers; and for the review panel for papers involving advanced statistics to always include at least one expert statistician.

An even more specific aspect of peer review – the question of whether reviewers should know whose work they are reviewing – was tackled by Nora Newcombe (Temple University) and Mark Bouton (University of Vermont). The pair questioned whether unmasked reviews really are prone to bias; they highlighted research showing that masking author identities actually does little to help reduce bias (unsurprising given it’s often rather easy for experts to guess whose work they are reviewing); and they concluded that there may actually be costs associated with a masked review process – for example, novice researchers not recognised as such could miss out on tailored advice.

On the specific topic of how journal citations are gauged, Martin Safer (Catholic University of America) and Rong Tang (Simmons College) asked 49 psychologists to rate the importance of citations in one of their own articles and to provide their reasons for including them. Based on the findings, Safer and Tang argued that not all citations are equal and that databases like PsycINFO should therefore provide additional citation ‘meta-data’ for papers, including not just how many times they’ve been cited by other individual papers, but how many times per paper, where in those papers, and how many are self-citations.

In one of the more light-hearted offerings, David Trafimow and Stephen Rice of New Mexico State University produced fictional reviews of classic science papers to support their contention that psychologists are too harsh when reviewing each other’s work. William Harvey’s paper on blood flow and Einstein’s paper on relativity are but two examples from the annals of classic science that they believe would be rejected by a typical, contemporary psychologist reviewer: that is, one inclined to evaluate subjectively the importance of new ideas, who places too much emphasis on data at the expense of theory, and who expects new research to be connected to previous literature. ‘Above all,’ the authors warn, ‘do not be the next person to squelch a potentially great work because of ill-considered criticisms, even if the criticisms are standard in the field.’

Responding to Trafimow and Rice’s paper, Raymond Nickerson (Tufts University) argued, somewhat ironically, that they may have been too harsh in their appraisal of the peer review process in psychology. Nickerson countered that many ground-breaking scientific ideas were in fact roundly rejected by scientists of the day (Francis Bacon, for example, referred to Copernicus’s heliocentric theory as a fiction), and he described small-scale research of his own which found researchers to be generally happy with the way the peer-review system works.

M. Lynne Cooper (University of Missouri) welcomed the criticisms of peer review raised in Trafimow and Rice’s paper but felt they had failed to offer suggestions for what would constitute good review practice. Cooper outlined six principles of good reviewing, including tact and fairness, and offered formal reviewer training and reducing reviewer burden (through more editorial triaging and the allocation of fewer reviewers per paper) as ways to improve the peer review process.

Cognitive research on how to improve student learning doesn’t always translate to the real world. David Daniel (James Madison University) and Debra Poole (Central Michigan University) gave several examples of this, including the fact that signalling devices in textbooks, such as margin inserts, actually lead some students to skim unsignalled material, thus impairing their exam performance; and the fact that e-books might appear universally beneficial as a study aid in the lab, whereas in real life, students vary in how much they are tempted to e-mail and web browse while reading an e-book, with obvious implications for their study outcomes. Daniel and Poole advocated a new empirical approach – pedagogical ecology – that will reveal ‘how the fundamental mental architecture that supports learning interacts with other aspects of individuality and environments to produce meaningful differences in human performance’.

Ludy Benjamin Jr. (Texas A&M University), a contributor to The Psychologist’s own recently installed ‘Looking back’ section, and David Baker (University of Akron) called for the history of psychology to become a compulsory element of doctoral courses in psychology, arguing that such a move would provide a vital antidote to the fragmentation of the discipline into ever more specialisms. ‘As psychologists, we share a connection, and that connection is found in our shared history,’ they wrote. ‘We owe it to our students and our discipline that a framework exists that causes us to see beyond the narrowness of our daily endeavours.’

Psychology textbooks are prohibitively expensive for many students in developing nations. David Myers (Hope College) championed the idea of e-books, which publishers could offer to needy countries at a discount or free, and which would also have the benefit of being locally customisable, so making Western books more relevant to foreign audiences. ‘I have broached the idea of using the Internet to deliver state-of-the-art, interactive, low-cost, locally adapted content to students who cannot afford books with our colleagues in South Africa, with my introductory text publisher, with my fellow committee members on the International Science Committee of the Association for Psychological Science’s new Fund for the Advancement of Psychological Science, and with the Rockefeller Foundation’s program officer for educational information technology,’ Myers wrote. ‘There may be roadblocks to come, but so far there is enthusiasm all around.’

Social issues
Psychology is failing to meet its potential to help address social problems. So argued Gregory Walton and Carol Dweck, who believe the discipline is uniquely placed to help, offering as it does rigorous methodology combined with insight into psychological processes. The pair pointed to psychology’s identification of ‘stereotype threat’, in the context of group differences in performance, as one example, and to decision framing, in the context of organ donation, as another – the idea that a requirement to opt out of organ donation conveys to people that opting in is the favoured choice.

‘[S]ometimes in the general clamour of the public discourse, psychological issues and solutions are lost. With a sustained emphasis from researchers and journal editors, psychologists can begin to illuminate the psychological dimension of other seemingly intractable social problems. By exploring these social problems, psychologists may identify novel psychological phenomena, join interdisciplinary teams of problem solvers, and display the strength and unique contributions of our field,’ they wrote. Picking up on this theme, Sumie Okazaki (New York University) said it was thanks to psychology that we know plenty about the racial prejudices of white people, conscious and non-conscious. However, she said we know far less about the impact this racism has on the mental and physical health of ethnic minorities. Psychology could expand our understanding of this area, she argued, by studying the potentially harmful effects of ambiguous situations that might not even be perceived as racist, as well as by using experience sampling techniques and mixed methods (i.e. qualitative and quantitative) to study the cumulative effects of subtle or covert racism on minorities.

In the final item, Felicia Huppert of Cambridge University – the only British contributor – argued that greater societal benefit could arise if psychology targeted whole populations rather than focusing on individuals. She pointed to research showing that rates of mental disorder are lower in populations with lower average levels of psychological distress. The implication is that improving population-wide well-being will help reduce rates of clinically significant disorder.

Huppert explained the principle vividly in the context of alcohol abuse: ‘...a small change in drinking culture resulting in most people having one or two fewer drinks each week will do more to reduce problem drinking than will trying to persuade just the problem drinkers to change their habits.’ So far research in this area is observational and Huppert argued that it’s now time to begin testing whether small improvements in population means really can help reduce psychological problems, as well as boosting the numbers of people functioning extremely well. Universal parenting programmes and media interventions are obvious candidate interventions for initial research of this kind.
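The arithmetic behind this population-shift argument can be illustrated with a toy calculation (a hypothetical sketch: it assumes distress scores are roughly normally distributed, and the mean, spread and clinical cutoff chosen here are illustrative numbers, not figures from Huppert’s paper):

```python
from statistics import NormalDist

# Hypothetical population distribution of distress scores
population = NormalDist(mu=50, sigma=10)
cutoff = 70  # arbitrary threshold for 'clinically significant' distress

# Proportion above the cutoff before a population-wide improvement
before = 1 - population.cdf(cutoff)

# The whole population improves by just 2 points on average
shifted = NormalDist(mu=48, sigma=10)
after = 1 - shifted.cdf(cutoff)

print(f"above cutoff before: {before:.2%}")  # about 2.3% of the population
print(f"above cutoff after:  {after:.2%}")   # about 1.4% - a sizeable relative drop
```

Because the ‘clinical’ cases sit in the tail of the distribution, even a small shift in the population mean produces a disproportionately large fall in the number of people above the cutoff – the essence of Huppert’s case for population-level interventions.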

Over to you
What do you think of the suggestions made here to improve psychological science? The loudest cry was surely for psychology to take a critical look at the peer review process and to modernise journals. Our Society journals have made a number of incremental improvements over recent years, but post statutory regulation, perhaps it will be time for a more radical approach?

You will have your own thoughts about the many issues that weren't raised by contributors to this special issue. Notwithstanding the contribution of Ludy Benjamin, Jr and David Baker, I felt the most glaring omission was in relation to doctoral training in psychology. My own experience was of a completely unregulated system – a lottery of sorts, where one student emerged fully equipped to begin their own research career, whilst another emerged ill-prepared, having served their doctoral years as little more than an underpaid research assistant. Surely there is scope to improve this system?

I will leave the final words to Ed Diener of the University of Illinois at Urbana-Champaign, who acted as editor to the special issue. ‘I do not agree with all of the proposals made in the issue, but I do believe many of them are excellent and would improve our field if they are implemented,’ he said. ‘Some of the proposals require more discussion and study, but my hope is that these articles will stimulate discussion and lead to improvements in how we practice psychological science.’

Dr Christian Jarrett is The Psychologist’s staff journalist
[email protected]

How do you think the science and practice of psychology could be improved? Send your ideas to the editor, Dr Jon Sutton, on [email protected].
