Deception in psychological research – a necessary evil?

Allan J. Kimmel offers recommendations in a controversial area.

26 August 2011

Half a century ago, social psychologist Stanley Milgram initiated his ingenious series of experiments on obedience to authority in the psychology laboratories at Yale University (1960–1964) – research that continues to resonate to this day, both within and outside the field. Among the general public, the most disconcerting aspect of the research, which involved the bogus delivery of electric shocks to a hapless victim under the guise of a learning experiment, is what it revealed about ourselves: that people are capable of inflicting extreme, potentially deadly punishment on innocent victims if compelled to do so by an authority figure.

The implications of the findings for understanding apparently incomprehensible atrocities ranging from the Holocaust to Abu Ghraib have kept the research salient in our collective consciousness across five decades, and likely will continue to do so as new horrors emerge (Burger, 2009). Within the behavioural sciences, some researchers have raised anew the possibility that the obedience research findings were more a function of artefacts associated with the experimental situation than reflective of certain unpleasant truths about human nature (e.g. Orne & Holland, 1968; Patten, 1977). For example, Reicher and Haslam (2011) have posited a social identity explanation for the obedience results, arguing that participants complied because of their identification with the scientific authority figure (also see Haslam & Reicher, 2007). However, that debate notwithstanding, the lasting legacy of Milgram’s experiments may well be less about their results than the deceptive means by which they were obtained.

At the time of the obedience research, deception had not yet become a common fixture in psychological research laboratories, although it certainly was being employed by other researchers. Around the same time as Milgram’s research, investigators concocted a variety of elaborate research deceptions in order to furnish university students with discrepant information about their sexuality, including one manipulation that led heterosexual males to believe that they had become sexually aroused by a series of photographs depicting other men (Bergin, 1962; Bramel, 1962, 1963).

In other research, alcoholic volunteers were led to believe that they were participating in an experiment to test a possible treatment for alcoholism, but were instead injected with a drug that caused a terrifying, albeit temporary, respiratory paralysis, leading many of the participants to believe that they were dying (Campbell et al., 1964). The use of deceptive procedures seemed to grow exponentially from that point forward, yet Milgram’s project, perhaps more than any other, aroused concerns about the ethicality of using deception to satisfy research objectives and to a great extent gave impetus to the development of internal standards regulating the use of deception within the discipline of psychology (Benjamin & Simpson, 2009).

From commonplace to controversial

As far back as 1954, social psychologist W. Edgar Vinacke took issue with psychology experiments in which research participants were deceived and sometimes exposed to ‘painful, embarrassing, or worse, experiences’. Few, if any, psychologists were ready to deal with Vinacke’s concerns at the time, probably because the use of deceptive procedures by psychologists was not particularly widespread. Further, this was the dawn of an increasingly fruitful period for scientific psychology. An experimental research tradition had emerged that many psychologists hoped would rival progress in the more established physical sciences. A decade later, however, Vinacke’s questions about the ‘proper balance between the interests of science and the thoughtful treatment of the persons who, innocently, supply the data’ (p.155) were raised anew by critics within the discipline, such as American social psychologists Diana Baumrind (1964) and Herbert Kelman (1967, p.2), who lamented the growing frequency with which deceptive procedures were being used and how firmly they had become a part of psychology’s research modus operandi, deftly embedded into studies like a game ‘often played with great skill and virtuosity’.

Perhaps because of the central attention it received, the obedience research arguably provided a tipping point for critics of deception. It was widely claimed that:

 - Milgram had subjected participants to extreme levels of stress and guilt as a result of their believing that they had harmed innocent victims, and that he should have terminated the experiment at the first indications of discomfort on the part of the participants;

 - his deceptive scenario served to increase the suspicions of future research participants about investigators and the research process, thereby exhausting the pool of naive participants; and

 - his approach reduced the public’s trust in psychological research and harmed the image of the discipline, thereby jeopardising community and financial support for the research enterprise as well as public trust in expert authorities.

These points reflect the moral, methodological, and disciplinary criticisms, respectively, that are typically levelled against the use of research deception.

Although most defenders of research deception tend to acknowledge these sorts of potential drawbacks, they argue that deception is an essential component of the behavioural scientist’s research arsenal, emphasising the theoretical or social advances one may anticipate from the research and the avoidance of the misleading findings that might result had participants not been deceived. Deception, it is argued, is a necessary evil, often required to provide the ‘technical illusions’ that increase the impact of a laboratory or field setting, such that the experimental situation becomes more realistic and the effects of participants’ motives and role-playing behaviour are reduced.

The ensuing debate over deception and other ethical issues involving the treatment of human participants (such as coercion, exposure to psychological harm, invasion of privacy, and the like) contributed in large part to the codification of ethical standards, which have been substantially strengthened over the years to the point that it has become increasingly difficult to carry out any more Milgram-type experiments (Blass, 2009). Public condemnation of some of the more egregious cases of research deception in the biomedical field, such as the Tuskegee syphilis study (a long-term, non-therapeutic experiment in which syphilitic participants were actively deceived about their true medical condition), ultimately led to the enactment of human research regulations and the emergence of ethical review boards in North America and Europe. Prior to federal regulation, few university departments of medicine and probably no departments of social and behavioural science required any type of committee review. Today, ethical review boards are commonplace in most research-oriented institutions.

In short, the ethical pendulum has swung from one extreme to the other for psychology researchers contemplating the use of deceptive procedures, so much so that it can be said that contemporary researchers are subjected to a higher level of professional ethical accountability than is the case for other professionals who supposedly serve as society’s guardians of human rights – such as lawyers, politicians, and journalists – who routinely engage in various forms of deception (Rosnow, 1997). As a result, deceptive research procedures are now subject to rigorous scrutiny both within and outside the discipline: their use must be justified by the methodological objectives of the research investigation; their potential for harm must be determined and addressed; and their application generally must conform to professional guidelines, legal strictures, and review board oversight.

One might think that these developments would have led to a significant reduction of deception in psychological research and an eventual resolution to the ethical debates it provoked, yet this hardly is the case on either count. Deception continues to find its way into research designs: my content analyses of the frequency of deception in leading social psychology journals revealed its continued use within a significant number of studies of human behaviour (Kimmel, 2001, 2004). This includes a modest increase to 40 per cent of studies utilising active deception (i.e. deception by commission, as when a researcher blatantly misleads the participant about some aspect of the investigation) and up to 35 per cent of studies employing passive deception (i.e. deception by omission, as when the researcher purposely withholds relevant information from the participant). These results indicate that although psychologists are using deceptive practices less than in earlier periods (estimates peaked at nearly 70 per cent in 1975), deception remains a rather common practice, at least in some areas of psychological research.

The prevalence of deception also appears to be increasing in applied areas of behavioural research that have evolved out of the root discipline of psychology, such as consumer research. A content analysis of leading marketing and consumer behaviour research journals published from 1975 to 2007 revealed a steady increase in rates of reported deception from 43 per cent to 80 per cent for the coded investigations (Kimmel, 2001, 2004; Smith et al., 2009). Although a majority of the coded studies employed mild forms of deception (e.g. 70 per cent during the 2006–07 period), deceptions that posed greater risks to participants (i.e. ‘severe deceptions’) were observed in a further 11 per cent of the coded investigations.

Psychologists are more likely than investigators in related fields, such as marketing and organisational research, to employ severe deceptions that bear on the fundamental beliefs and values of research participants, which to some extent explains why deception has long been such a controversial issue in psychology. However, despite the potential harmful effects of deception on participants and the moral incertitude regarding its acceptability in science, it can be argued that overregulation of deception poses a significant threat to scientific progress. For example, there are fears that governments have begun to exceed their bounds by implementing increasingly stringent policies to control human research. Similarly, the expanded influence of external review has brought with it a growing concern that review boards are overstepping their intended role in an overzealous effort to force behavioural and social research into a biomedical mould, thereby making it increasingly difficult for many researchers to proceed with their investigations. As deception continues to be employed in research, these threats are likely to grow stronger.

Despite the growing prevalence of institutional review, various limitations to this form of ethical regulation have been noted, particularly in terms of what constitutes acceptable use of research deception. Typically, review committees offer little specific guidance on deception a priori (feedback on rejected research protocols tends to refer only in general terms to problematic use of deception or insufficient informed consent), and researchers depend on the judgements of individual review board members, who bring varying personal norms and sensitivities to the assessment of costs and benefits (Kimmel, 1991; Rosnow, 1997). Review boards can also maintain inconsistent standards across time and institutions, such that a proposal approved without modification at one institution may require substantial changes, or be rejected outright, by a review board at another (e.g. Ceci et al., 1985; Rosnow et al., 1993). The external review process further raises the possibility that investigations will be delayed or unfairly judged, as proposals are evaluated by persons who lack an awareness of research problems outside their own particular disciplines.

In contrast to psychology, researchers in economics have taken a more straightforward approach to deception: experimental economists have adopted a de facto prohibition of its use in research. This prohibition rests largely on concerns that deception contaminates subject pools and undermines any guarantee that participants will really believe what they have been told about the research environment, and on the desire to establish a more trusting relationship between researcher and participant (Bonetti, 1998). Despite considerable debate, supporters of the policy have argued that most economics research can be conducted without deception, through the development of alternative procedures and guarantees of participant anonymity (e.g. Bardsley, 2000).

Beyond ‘to deceive or not’

For a scientific discipline oriented towards benevolent objectives associated with an understanding of behaviour and social and mental processes, it is somewhat difficult to fathom that ‘deception’, ‘control’, ‘manipulation’, and ‘confederate’ – terms replete with pejorative connotations – have come to occupy a central position in the psychologist’s scientific toolbox. In common understanding, deceit refers to an intentional effort to mislead people; it is thus a way of making people act against their will, and it is seen as the most common reason for distrust (Bok, 1992). Nonetheless, a close scrutiny of the use of deceptive procedures by psychologists reveals that in the majority of cases the deceptions are innocuous (e.g. persons are informed they are participating in a learning experiment as opposed to one in which their memory will be tested) and rarely (if ever) reach the level of those employed by Milgram (who, it must be remembered, took various precautions to identify and reduce any adverse effects, despite operating during an era in which specific ethical guidance and controls were essentially non-existent). In essence, today’s deception is comparable to the kinds of lies that typically are viewed as permissible in everyday life, such as white lies, lies to certain kinds of people (children, the dying), and lies to avoid greater harms. Previous studies have shown that participants are accepting of milder forms of deception (e.g. Christensen, 1988; Wilson & Donnerstein, 1976); non-harmful research deception has been shown to be morally justifiable from the perspective of ethical theory (Kimmel et al., 2011; Smith et al., 2009); and it cannot be denied that psychological knowledge has been significantly advanced in part by investigations in which the use of deception was a critical component.

Given these points, I believe that the question of whether or not deception should be considered an acceptable element of a research protocol is no longer a legitimate one. In the spirit of reframing and advancing subsequent considerations of research deception, I offer the following reflections and recommendations.

‘No deception’ is an admirable but unattainable goal

The current structure of governmental regulation and professional guidelines in most industrialised countries does not prohibit the use of deception for psychological research purposes (Kimmel, 2007). Unlike in economic research, it seems doubtful that forbidding deception entirely would meet with similar success in a field like psychology, where the range of research questions is broader and more likely to arouse self-relevant concerns and participant role playing. Further, within psychology studies, some deceptions, such as non-intentional ones (e.g. those that arise from participant misunderstanding or absence of full disclosure), cannot be entirely avoided. This suggests that while full disclosure of all information that may affect an individual’s willingness to participate in a study is a worthy ideal, it is not a realistic possibility. Researchers are likely to vary in their judgements about what constitutes a ‘full’ disclosure of pertinent information about an investigation. Moreover, information provided to participants, such as that involving complex experimental research procedures, may not be fully understood, and researchers themselves may lack (and be in a poor position to establish) an accurate understanding of participant preferences, reactions and participation motives. Additionally, certain participant groups (e.g. young children and the mentally impaired) have cognitive limitations that seriously curtail the extent to which fully informed consent can be obtained. Thus, to some extent, it can be said that all psychological research is deceptive in some respects.

Use it wisely as a last resort

These points notwithstanding, given its capacity for harmful consequences, researchers must ensure that intentional deception (e.g. the withholding of information to obtain participation, concealment and staged manipulations in field settings, and deceptive instructions and confederate manipulations in laboratory research) is used as a last resort, not as a first resort; reaching for it first reflects, in my view, both moral and methodological laziness on the part of the researcher.

This recommendation is directly opposed to the ‘fun and games’ attitude of earlier periods in the history of the discipline when the use of deception was largely taken for granted by many psychologists who, in their attempts to create increasingly elaborate deceptions, compounded deception upon deception in a game of ‘can you top this?’ (Ring, 1967). Indicative of this tendency is an extreme case in which researchers employed 18 deceptions and three additional manipulations in a single experimental study of cognitive dissonance (Kiesler et al., 1968). By contrast, in the contemporary ethical and regulatory landscape, researchers need to adopt an approach that involves the stripping away of levels of deception until what is left is the bare minimum required for assuring methodological rigour and the elimination of demand characteristics that could give rise to hypothesis guessing or role playing by participants motivated by a desire to do the right and/or ‘good’ thing (or, for that matter, the wrong and/or ‘bad’ thing). This determination in certain cases will require pre-testing, using an approach akin to that of quasi-control subjects (Rosenthal & Rosnow, 2008). For example, participants could be asked to reflect on what is happening during a study and to describe how they think they might be affected by the procedure. If no demand characteristics are detected, the researcher would develop a less deceptive manipulation and have the participants once again reflect on the study. If they remain unaware of the demands of the study, the researcher could then use this lower level of deception to carry out the intended investigation.
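To make the logic of this stripping-away procedure explicit, the following minimal sketch (in Python, purely for illustration) captures the iterative pretest loop described above. The function names, the ordering of manipulations, and the shows_demand_characteristics() check are hypothetical stand-ins for real quasi-control sessions with participants, not an established protocol or library.

```python
# A minimal sketch of the iterative 'stripping away' pretest described above.
# Everything here is a hypothetical illustration: shows_demand_characteristics()
# stands in for a quasi-control session in which participants reflect on the
# study and the researcher judges whether they have guessed its demands.

def minimal_deception_level(levels, shows_demand_characteristics):
    """Return the mildest deception level at which quasi-control
    participants remain unaware of the study's demands.

    `levels` is assumed to be ordered from most to least deceptive.
    """
    chosen = levels[0]                          # begin with the full deception
    for weaker in levels[1:]:                   # strip away one level at a time
        if shows_demand_characteristics(weaker):
            break                               # weaker version leaks the hypothesis
        chosen = weaker                         # participants stay naive; adopt it
    return chosen

# Hypothetical usage: three versions of a manipulation, strongest first.
levels = ['full cover story', 'partial cover story', 'mere omission']
leaky = {'mere omission'}                       # pretend pretests flagged this one
print(minimal_deception_level(levels, lambda level: level in leaky))
# -> partial cover story
```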

The difficulties inherent in predicting the potential harmfulness of a procedure have long been acknowledged as a major drawback to the utilitarian, cost-benefit approach at the heart of psychology’s extant ethics codes, including the fact that the prediction must be made by the very person who has a vested interest in a favourable decision. Thus, psychologists need to develop their own knowledge base and norms about when deception is, or is not, necessary and unlikely to give rise to harm; procedures that truly constitute examples of minimal-risk research; and methods for determining participant vulnerabilities so that at-risk persons are excluded from the research.

Research alternatives can obviate the need for deception

The recommendation that deception be used as a last resort suggests that researchers must first rule out all alternative procedures as unfeasible. Unfortunately, there is no indication of the extent to which researchers routinely engage in such a pre-deception analysis, nor does it appear that documentation to that effect is required by ethical review boards. Yet these are activities that should be incorporated within the research planning and review process as required elements. During the early days of the deception debate, researchers attempted to gauge the utility of role playing (i.e. participants are told what the study is about and are then asked to play a role as if they were participating in the actual study) and simulations (i.e. conditions are created that mimic the natural environment and participants are asked to pretend or act as if the mock situation were real) as more transparent, viable alternatives to deception procedures (e.g. Geller, 1978). Although these alternatives have met with mixed results in replicating the findings of traditional experimental approaches, they can be useful research techniques in certain situations and represent efficient aids to theory development, hypothesis generation, and, as suggested above, pretest evaluations as to the potential impact on participants of deceptive procedures (Cooper, 1976).

Researchers are not without the skills and creativity necessary to conduct research that is both ethical and valid. For example, as an alternative to negative mood manipulations that have aroused ethical concerns, such as those involving the presentation of false feedback to participants concerning their skills or intelligence (e.g. Hill & Ward, 1989), participants could instead be asked to write an essay describing one of the sadder experiences in their lives. This way, the negative mood would be invoked, but not by deception (Kimmel et al., 2011).

Returning to the Milgram obedience research, recent years have seen some innovative approaches to conducting replications in ways that reduce the ethical concerns aroused by the original investigations. In his partial replication of the Milgram obedience studies, Burger (2009) incorporated several safeguards to reduce the potential for harm entailed by the deceptive research protocol. Based on his observation that the 150-volt level of Milgram’s (1963) procedure enabled accurate estimates of whether or not participants would remain obedient to the end of the research paradigm (e.g. 79 per cent of Milgram’s participants who continued past that ‘point of no return’ continued all the way to the end of the shock generator’s range), Burger employed a ‘150-volt solution’; that is, the study was stopped seconds after participants decided what to do at that critical juncture. This modification of the original procedure did not represent an alternative to deception, but it substantially reduced the risk of harm by eliminating the likelihood that participants would be exposed to the intense stress levels experienced by many of Milgram’s participants. It may be conjectured that any alternative to the original deception procedure would have undermined the intent of the replication, which in part was to determine whether obedience levels in the current era are similar to those obtained by Milgram nearly five decades earlier (Burger, 2009; see also Reicher & Haslam, 2011, for another view on the rationale for such a replication).

Among the other safeguards included in the replication to further ensure the welfare of participants were a two-step screening process for identifying and excluding vulnerable participants; repeated assurances to participants that they could withdraw from the study and still receive the monetary incentive; immediate feedback to participants that no shocks were received by the learner; and the choice of a clinical psychologist to run the experiments, with instructions to stop the procedure as soon as any signs of adverse effects became apparent. Similar safeguards were employed by Reicher and Haslam (2006), along with an onsite ethics committee review, in a reappraisal of the Stanford prison experiment (Haney et al., 1973).

Prior to running the study, Burger might also have conducted pilot tests to gauge representative participants’ reactions to a description of the research procedure, and actual participants might have been forewarned about the possibility of deception (assuming this could be done without unduly arousing suspicions about the legitimacy of the shock apparatus) or have been asked to agree to participate fully knowing that certain procedural details would not be revealed until the end of the research experience. An alternative approach, which would avoid the requirement for a confederate, would have been to conduct a role-play scenario, with participants assuming the role of learner or teacher (see Orne & Holland, 1968; Patten, 1977). Whether or not the original obedience research would have been viewed as sufficiently sound in a methodological sense or have generated as much attention had Milgram instead employed one or more of these non-deceptive alternatives – assuming the research would have been published at all – is certainly open to debate.

An ingenious non-deceptive alternative to the real-life obedience paradigm utilised by both Milgram and Burger would be to carry out the experiments in a computerised virtual environment, an approach that has been found to replicate the obedience findings while circumventing the ethical problems associated with deception (Slater et al., 2006). The virtual-reality option represents a promising direction for researchers in their search for viable alternatives to deception methodologies. As technologies continue to advance, it may very well be that researchers will have even more intriguing options for non-deceptive research in the future, to a point at which ethically questionable deceptions need not be used at all.

Conclusion

Deception in research continues to arouse an enormous amount of interest and concern both within the discipline of psychology and among the general public. Deception represents an important research tool for psychologists and serves as an essential means for overcoming the potential validity threats associated with the investigation of conscious human beings. Yet, for good reasons, it is an approach in need of a careful balance between methodological and ethical considerations.

My recommendations are unlikely to have much impact within the scientific community without a shift in the mindset of not only researchers, but also reviewers and journal editors. Researchers will have to expend some additional effort and resources in the design of their studies, and reviewers and editors must adjust their perceptions of what constitutes good and worthwhile research, while acknowledging that some topics will not be investigated as thoroughly as is ideal. For example, the recommendation that researchers employ non-deceptive procedures as alternatives to deceptive ones (as in the case of negative mood manipulations) would be undermined by journal editors beholden to multiple-method research who ask for both (along with evidence of replicability), regardless of the validity of the non-deceptive procedures.

We also need a reconsideration of the presumed greater ethical suitability of much non-deceptive research, which often requires participants to engage in time-consuming, monotonous, and uninteresting tasks, offering them dubious educational (or other) benefits. To what extent can we conclude that a non-deceptive investigation that is viewed by participants as a trivial and boring waste of their time is more acceptable than an engaging deceptive one? In fact, some studies have shown that participants in deception experiments, compared with those in non-deception experiments, are not only accepting of various forms of deception but also report having enjoyed the experiments more and having received greater educational benefits from them (e.g. Aguinis & Henle, 2001; Christensen, 1988).

To be sure, the days during which deception was used more out of convention than necessity and accepted without comment are long past. Confronted by an increasingly daunting array of ethical guidelines, governmental regulations, and institutional review, investigators are now compelled to weigh methodological and ethical requirements and to choose whether and how to incorporate deception within their research designs. Most behavioural scientists, when caught up in situations involving conflicting values concerning whether or not to use deception, are willing to weigh and measure their sins, judging some to be larger than others. It is in this vein that I believe that any call for the prohibition of deception, as is the case in economics, would be short-sighted. What is needed instead is a careful evaluation of the circumstances under which it can be employed in the most acceptable manner in psychological research.

- Allan J. Kimmel is a social psychologist and Professor of Marketing at ESCP Europe, Paris