Disengaging morality from robotic war
Military drones have become the preferred weapons system in counterterrorism and mark a step on the route toward the robotisation of warfare. Drones were originally designed for extended surveillance of combatants and their military operations. The United States weaponised them and deployed them widely in a variety of countries, including Afghanistan, Iraq, Pakistan, Yemen, Somalia, Libya and Syria. A host of other countries rapidly adopted them. China is now marketing weaponised drones internationally in an expanding global market, competing with the United States in aerial capabilities and lethality.
The growing automatisation of weapon systems may usher in a new era in warfare in which drones are programmed to detect and kill suspected terrorists without human guidance. In a recent milestone, a drone took off from an aircraft carrier, flew to a designated location, and then returned to the carrier autonomously. This semi-autonomous capability is a large step toward further automatisation.
Here, I address the moral dimension of drone warfare and the growing broader issue of robotic war, and I do so through the perspective of social cognitive theory of moral agency. Inflicting death and destruction is ordinarily restrained by moral self-sanctions and international laws of human rights. In my 2016 book Moral Disengagement (you can read a chapter on The Psychologist website) I identify eight psychosocial mechanisms by which people selectively disengage moral self-sanctions from injustice, inhumanities and pernicious behaviour.
Mechanisms of moral disengagement
Figure 1 [see PDF] presents schematically the eight mechanisms and the locus at which moral self-sanctions are disengaged.
At the behaviour locus, people sanctify harmful means by investing them with worthy social and moral purposes. Righteous ends are used to justify harmful means. Harmful conduct is also rendered benign or even altruistic through advantageous comparison. The belief that one’s harmful actions will prevent more human suffering than they cause makes the behaviour look altruistic. Euphemistic language, in its sanitising and convoluted forms, cloaks harmful behaviour in innocuous terms and strips the humanity from it. These three mechanisms, operating at the behaviour locus, are especially powerful because they serve a dual function. They engage morality in the harmful mission but disengage morality in its execution.
At the agency locus, people evade personal accountability for harmful conduct by displacing responsibility to others and by dispersing it widely so that no one bears responsibility. This absolves them of blame for the harm they cause.
At the outcome locus, perpetrators disregard, minimise, distort, or even dispute the injurious effects of their actions. As long as harmful effects are out of sight and out of mind there is no moral issue to contend with because no perceived harm has been done. At the victim locus, perpetrators exclude those they maltreat from their category of humanity by divesting them of human qualities or attributing animalistic or demonic qualities to them. Rendering their victims subhuman weakens moral qualms over treating them harshly. Additional moral disengagement at the victim locus involves blaming the victims for bringing maltreatment on themselves or attributing it to compelling circumstances. In this mode of self-exoneration, perpetrators view themselves as victims forced to behave injuriously by wrongdoers’ offensive behaviour or by force of circumstances.
The principal justification for the drone counterterrorism campaign is that terrorist groups pose continuing imminent threats to the national security of the United States.

There are several types of analyses through which we can address the role of moral disengagement in drone warfare. Media shape public consciousness, so on The Psychologist website you can find my analysis of insights from the entertainment industry on the moral ramifications of drone warfare as portrayed in the movie Eye in the Sky. In this article, I look at the eight mechanisms and how they are enlisted in the development and military deployment of drones. Extensive use of this weapons system permits a detailed analysis of the suspension of moral self-sanctions in fighting a war semi-autonomously. Finally, I consider the broader issue of how the technology of artificial intelligence and robotics will alter the human role in future wars.

There is no absolute prohibition against the use of military force. Nations have a right to defend themselves against attacks that threaten the safety of their people. However, international human rights law provides a set of principles for a justifiable defensive war. It stipulates that military action be used for a just cause and with right intention, rather than for vengeance or as a pretext to gain control of resources or geopolitical advantage. It must be a last resort, taken only after non-violent means have been exhausted. Campaigns must be limited to the level of force needed to eradicate the threat, and counterstrikes conducted in ways that minimise civilian casualties.
Fighting terrorists with military force presents a daunting challenge – it is not a conventional war against a foreign state. Rather, it is an armed campaign against loosely interconnected networks of combatants operating surreptitiously through widely dispersed affiliates. Some operate safely in weak or failed states that lack dependable military forces to eliminate them. Other states provide safe havens for terrorists where they plot attacks beyond their borders. Drones are well suited for this type of warfare. However, terrorists have learned to elude drone attacks by hiding among civilians, dispersing their military operations rather than conducting them in command posts and moving personnel in single vehicles rather than in convoys.
Invoking the right of self-defence, the Obama administration further justified drone strikes in cases where nation states are unable or unwilling to eradicate terrorist threats to the American homeland. Targeted killings are a major legal concern in drone warfare, especially when they kill civilians in foreign countries that have not consented to drone strikes. Critics contend that individual targeted killings are assassinations, not defensive war (Cohn, 2015), and that counterterrorism campaigns violate international human rights law. In an abdication of moral and constitutional responsibility, Congress evaded its authority to declare war, which would have provided legitimacy for the spreading drone warfare. To circumvent congressional inaction, Obama adopted an earlier statutory authorisation of military force that granted presidents broad authority to take military action against al-Qaeda wherever it may be.
The image of drones as precise, accurate weapons inflicting few civilian casualties does not match how drone warfare is actually fought. People can readily end up on suspected terrorist watchlists because of a ‘broad definition of what constitutes terrorism and a low threshold for designating someone as a terrorist’ (Scahill & Devereaux, 2016). In countries with deficient ground intelligence and questionable metadata, it is difficult to identify targeted persons and their activities. Under these conditions, targeting is based on cell-phone tracking. As a former drone operator said, ‘We’re not going after people – we are going after their cell phones, in the hopes that the person on the other end of the missile is the bad guy’ (Scahill & Greenwald, 2016). Pinpoint technical accuracy combined with faulty intelligence can be disastrous for civilians, and the extent of unidentified innocent deaths goes uncounted. Civilian deaths by drone strikes in war zones are likely to be higher still, because combat on battlefields is more indiscriminate and combatants hide among civilians; yet these deaths are never reported.
‘Signature strikes’, shrouded by an opaque euphemism, are targeted killings of unknown individuals who exhibit behaviour often associated with militants. This is death by profiling. Profiles are notoriously weak predictors, usually biased toward false-positive errors. Because an individual’s identity is unknown and the profile, the behavioural fit, or both may be faulty, signature strikes carry a high risk of killing the wrong people. Indeed, from time to time, the administration acknowledges deadly mistakes with condolence payments.
How behaviour is viewed depends on what it is called. Much of drone warfare is cast in the language of sports events. The synthesised information that places suspected terrorists on a watchlist is called a ‘baseball card’. Triumphs in drone strikes are called ‘touchdowns’ and ‘jackpots’. Strikes targeting known terrorists are ‘personality strikes’; in ‘signature strikes’, people of unknown identity are targeted based on profile matching. Drone sensor operators are called ‘stick monkeys’. Weaponised drones are called ‘uninhabited aerial vehicles’. Military insiders reject the drone label and instead call them ‘birds’. One would not know from the sanitised, playful language that drone operators are carrying out kill missions.
When it comes to the toll on innocent lives, the language takes a serious exonerative form. All slain bystanders are often categorised as ‘enemies killed in action’ until proven otherwise posthumously. Turning nearby people killed by drone strikes into ‘enemies’ makes it easier to kill them. However, under this inclusive designation the civilian death toll cannot be accurately determined.
Harmful behaviour is also coloured by what it is compared against. In utilitarian comparison, one’s harmful actions are construed as minor compared to the human suffering they prevent. Creative comparison can make drone warfare appear the lesser of two evils through talk of precision targeting, sparing the lives of pilots and ground forces, and the cost-effectiveness of the method. It is widely believed that these conditions, which lessen the horrors of war, make it easier to go to war. In this regard, a good share of the American public is probably unaware even of the countries in which the US is conducting undeclared drone wars, let alone the effects of these military campaigns.
Basing his opinion on fewer civilian and military casualties, drone advocate Kenneth Anderson construes the drone as a humanitarian weapon. It is ‘immoral’, he argued in a 2013 piece, not to develop and use them. In this line of moral reasoning, which makes the use of drones morally obligatory, the humanitarian status of a weapon is judged by its comparative civilian death rate. Yet the Navy SEALs killed bin Laden but spared the women and children, without losing a single soldier. A drone strike would have inflicted extensive casualties not only on the families in the compound but also on those in the adjacent blast area. In this comparison, the drone forfeits its humanitarian status. And as discussed earlier, the way drone strikes are actually conducted challenges the claim of pinpoint precision and superior accuracy in identifying suspected militants.
Displacement and diffusion of responsibility
In another set of mechanisms of moral disengagement, individuals absolve themselves of responsibility for ‘collateral damage’ from drone strikes through displacement and diffusion of responsibility. Moral disengagement is not solely in the individual’s mind. Much of it is built into the hierarchical structure of social systems. In drone warfare, the disavowal of responsibility for civilian casualties occurs mainly within multiple chains of command, in which lower officials absolve themselves of accountability by shifting decisions to the upper echelon. Group decision making is another way of distancing oneself from untoward consequences. The group becomes the faceless agent, with no one feeling personally responsible. Implementing drone warfare is also structurally diffused by dividing destructive activities into small segments that look benign in isolation.
The drone counterterrorism program is shrouded in secrecy. However, frequent references to decisions being ‘staffed up’ indicate extensive bureaucratic displacement of responsibility up the hierarchical social structure. What is lacking is detailed information on the psychosocial dynamics governing decision-making processes. Without adequate transparency and governmental oversight, it is difficult to have informed public debates on the legality, geopolitical repercussions and morality of drone warfare.
In response to a lawsuit filed by the American Civil Liberties Union under the Freedom of Information Act, the Obama administration lifted some of the secrecy by declassifying the rules and procedures for targeting suspected terrorists (see Charlie Savage’s piece in the New York Times last year). According to the published rules, after lawyers in national security agencies approve the domestic and international lawfulness of drone strikes, the CIA and Defense Department nominate the suspected terrorist for targeted killing. Among the guidelines, the strike should be directed at a clearly verified high-value terrorist who is an imminent threat to national security; there is near certainty that the terrorist is present in the strike site; capture is not feasible and alternatives are unavailable; and no civilians will be injured or killed. The president’s approval is required when a radicalised US citizen is targeted and when top-ranked national security officials disagree on whether a designated terrorist should be killed.
The guidelines are presented in a legalistic and bureaucratic manner, devoid of information on how officials conduct drone warfare and manage its moral aspects. Leaked documents and accounts of former officials reveal lenient criteria for placement on the watchlist, and uncertainty both about who is being targeted and about the toll of innocent lives taken (Scahill, 2016). In these accounts, actual drone warfare falls considerably short of the official stringent guidelines. The adherence gap suggests that the general policy guidelines serve more as justifications for drone warfare than as moral guides in the conduct of a just war. Moreover, the codified principles and operational procedures can be circumvented because the definitions of ‘imminent’, ‘necessary’, ‘unfeasible’ and ‘militant’, to mention just a few, are malleable. There would be few drone missions if the rule that no civilians will be injured or killed were taken seriously. It is not enough to provide general policy guidelines; their operational value should be evaluated in terms of behavioural compliance with them.
Judging the harmfulness of given policies and practices is a major battleground in moral disengagement. There is no moral issue if given practices are judged to be harmless. The effects of the drone counterterrorism program are vigorously debated. The Obama administration argues that it eliminates imminent threats by killing militants with only a few civilian casualties. In contrast, critics contend that drone warfare not only takes a toll on innocent lives but fuels anti-American sentiment, turns Muslims against the United States, breeds new sources of terrorism, some based on revenge for killing family members and friends, and subjects the public to chronic stress through unpredictability of where or when missiles may be fired by drones hovering overhead. It is a task for future research to verify the diverse alleged effects.
Proponents of drone warfare not only minimise civilian casualties but also tout its benefits. They cite the success in destroying the core structure of al-Qaeda and the Taliban by repeated drone strikes in tribal regions along the Afghanistan/Pakistan border. Targeted killings of key terrorism operators are publicised as further evidence of the precision and effectiveness of drone warfare.
In a comprehensive analysis of diverse sources of evidence, Dear (2013) found that targeted killings of the top leaders of terrorist organisations disrupt their capability in the short term, but the organisations then evolve into more dangerous terrorist threats. They decentralise their operations; their leadership becomes younger, more radical, violent and indiscriminate; and they expand their reach, adding affiliates in other countries. For example, ISIS evolved from a weakened al-Qaeda. Beheading the Hydra, as Dear shows, can beget greater terrorist threats rather than national tranquillity. Indeed, after 15 years of costly US warfare in the Middle East, the US public feels less safe from terrorist attacks (Gallup, 2015). With the worldwide proliferation of weaponised drones, will the introduction of this new mode of warfare make people feel safer, or subject them to a new threat hovering menacingly overhead? When armed drones fall into the hands of despotic rulers they will, in all likelihood, be used to spy on their citizens and to cripple their political foes.
In the final form of moral disengagement, enemies are viewed as subhuman beings or dangerously deranged ones. Expunging any sense of their humanity abolishes moral restraints. Because the drone program is conducted in secrecy, information on drone pilots’ attitudes, values and management of their intrapsychic life is lacking. However, their behavioural record is highly informative. It underscores the extraordinary flexibility of moral self-regulation of conduct. Drone pilots switch their moral control off and on in daily shifts between their working hours and their off-duty family and social lives. Drone warfare does not require brawn: female as well as male officers conduct remote-controlled killing missions. Indeed, one female pilot was so adept at flying the armed Predator drones that she was chosen to fly top-secret CIA missions. Women’s adoption of a killer role is a product of the evolution of military technology, not biological evolution.
Although they are physically isolated from harrowing combat experiences, nearly half of drone operators experience high levels of stress (Bumiller, 2011). The US Air Force attributed the emotional distress to heavy workload demands. A more likely explanation is that drone pilots see close-up the people they are killing and the resulting carnage. Here is a drone pilot reassuring himself that he made the right decisions: ‘There was good reason for killing the people that I did.’ However, the drone strikes continued to intrude on his mental life, ‘I go through it in my head over and over and over… But you never forget about it, it never just fades away’ (Bumiller, 2011). Morally based anguish is distinguished from stress disorders arising from battle experiences. The recurrent self-chastisement is called ‘moral injury’.
Abrupt daily shifts between moral engagement and disengagement are undoubtedly another major stressor. Having to turn one’s morality off and on, day in and day out, between lethal air strikes and prosocial home life makes it difficult to maintain a sense of moral integrity. To complicate matters, the actionable ground intelligence guiding air strikes is not always trustworthy. In keeping with the selectivity of moral disengagement, drone pilots were not distraught over killing combatants whose deaths they regarded as morally justified. The killing is remote, but the carnage is visible in gruesome detail to the drone operators. It was the video close-ups of women, children and other civilians killed in air strikes that haunted them. Faced with these diverse stressors, drone pilots are quitting in large numbers, raising deep concern in military circles over combat capabilities. The disparagement of drone pilots by US Air Force pilots as conducting a ‘coward war’ does not aid recruitment.
Growing robotisation of modern warfare
The evolution of the weaponised drone was not just an incremental change in existing weapon systems. Rather, it ushered in a new mode of warfare rooted in technological advances in artificial intelligence and robotics. The field of artificial intelligence is divided between alternative visions. The replacers seek to supplant human intelligence through automation; the augmenters seek to extend the intellectual capabilities of humans. In his book Machines of Loving Grace, Markoff (2015) highlights the inherent paradox in which ‘the same technologies that extend the intellectual power of humans can be automated to displace them as well’.
Technological developments change the relationship between humans and machines. The systems that are being designed and built, by whatever values animate the creators, determine which aspects of human life are automated and which remain under personal control. These choices are reflected in the growing robotisation of military operations. While weapon systems can be robotised, human intrapersonal life, in its affective and moral manifestations, does not lend itself to mechanisation. As shown in the moral analysis of weaponised drones, moral factors have important bearing on the legitimisation and implementation of this mode of warfare.
Ethics lag far behind military technologies. The legality and morality of robotised warfare should be judged by existing international laws of war and rules of engagement. The novel ethical and legal challenges centre on autonomous weapon systems which can select and destroy targets on their own. Machines cannot be held responsible for their activities. There is room for error in distinguishing between civilians and combatants involving subtle contextual differences that exceed the weapon’s level of programming. Under conditions of ambiguity and uncertainty, autonomous armed robots will misjudge situations and go astray with disastrous consequences. In tragic mishaps, who bears responsibility? To add an element of moral agency it is often proposed that humans should still make the kill decisions.
To add further complexity, artificial intelligence is vulnerable to hacking. The use of autonomous weapon systems would usher in a new realm of warfare between robot hackers, who could immobilise the weapons or redirect them against their own dispatchers. Christof Heyns, a UN expert on extrajudicial issues, argues that the production and use of autonomous armed robots should be banned, saying, ‘War without reflection is mechanical slaughter’ (Cumming-Bruce, 2013).
Albert Bandura is David Starr Jordan Professor of Social Science in Psychology/ Emeritus at Stanford University
In evaluating the aftermath of the Iraq war in my book on moral disengagement, I noted that the US is conducting undeclared drone wars against terrorists in seven countries, with Congress irresponsibly evading its duty as the authoriser of war. On the martial front, the military is investing heavily in developing robotic weapon systems that diminish or eliminate human input. These developments, which make it easier to go to war, raise concerns over stripping humanity from the horrors of war. This article examines what robotisation portends for the nature and morality of future warfare.
Anderson, K. (2013). The case for drones. Commentary. Retrieved from www.commentarymagazine.com/articles/the-case-for-drones
Bandura, A. (2016). Moral disengagement: How people do harm and live with themselves. New York: Worth Publishers.
Bumiller, E. (2011, 19 December). Air Force drone operators report high levels of stress. The New York Times, p.A8.
Cumming-Bruce, N. (2013, 31 May). U.N. expert calls for halt on robots for military. The New York Times, p.A9.
Cohn, M. (Ed.) (2015). Drones and targeted killing: Legal, moral, and geopolitical issues. Northampton, MA: Olive Branch Press.
Dear, K.P. (2013). Beheading the Hydra? Does killing terrorist or insurgent leaders work? Defence Studies, 13(3).
Gallup (2015). Terrorism in the United States. Retrieved from www.gallup.com/poll/4909/terrorism-united-states.aspx
Markoff, J. (2015). Machines of loving grace. New York: HarperCollins Publishers.
Savage, C. (2016, 6 August). U.S. releases drone strike ‘Playbook’ for targeting terrorism suspects. The New York Times. p.10.
Scahill, J. (Ed.) (2016). The assassination complex: Inside the government’s secret drone warfare program. New York: Simon & Schuster.
Scahill, J. & Greenwald, G. (2016). Death by metadata. In J. Scahill (Ed.), The assassination complex: Inside the government’s secret drone warfare program (pp.96–106). New York: Simon & Schuster.
Scahill, J. & Devereaux, R. (2016). Death and the watchlist. In J. Scahill (Ed.), The assassination complex: Inside the government’s secret drone warfare program (pp.18–34). New York: Simon & Schuster.