What do psychologists think about machines that think?

Ella Rhodes reports on contributions to the annual Edge.org question.

27 January 2015

The potential for bridging the empathy gap between humans, flawed notions of a robot-ruled dystopia and an end to the drudgery of everyday life were among the ideas that emerged from psychologists who answered this year’s Edge.org question: ‘What do you think about machines that think?’

Scores of psychologists contributed their thoughts. Molly Crockett, Associate Professor at the University of Oxford’s Department of Experimental Psychology, asked whether thinking machines could be used to bridge the empathy gap between human individuals. Crockett said the empathy gap is most acute in moral dilemmas, writing: ‘Utilitarian ethics stipulates that the basic criterion of morality is maximising the greatest good for the greatest number – a calculus that requires the ability to compare welfare, or “utility” across individuals.’ But the empathy gap makes interpersonal utility comparisons difficult, if not impossible. Perhaps, Crockett adds, thinking machines could be up to the job, quantifying preferences and translating them into a ‘common currency’ that can be compared across individuals.
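As a purely illustrative sketch (nothing like this appears in Crockett’s piece), one naive way to cash out the ‘common currency’ idea in code is to rescale each person’s private preference ratings onto a shared 0–1 scale before summing them; every name and number below is invented for the example.

    # Toy sketch only: one hypothetical reading of Crockett's 'common currency'.
    # Each person rates options on their own private scale; we rescale each
    # person's ratings to a shared 0-1 scale, then pick the option with the
    # greatest summed (utilitarian) welfare. All data here is made up.

    def normalise(ratings):
        """Map one person's raw ratings onto a common 0-1 scale."""
        lo, hi = min(ratings.values()), max(ratings.values())
        span = (hi - lo) or 1  # avoid division by zero for indifferent raters
        return {option: (r - lo) / span for option, r in ratings.items()}

    def utilitarian_choice(people):
        """Return the option maximising total normalised utility."""
        totals = {}
        for ratings in people.values():
            for option, u in normalise(ratings).items():
                totals[option] = totals.get(option, 0.0) + u
        return max(totals, key=totals.get)

    # Two people rate three policies on incomparable private scales.
    people = {
        'alice': {'policy_a': 9, 'policy_b': 2, 'policy_c': 8},
        'bob':   {'policy_a': 1, 'policy_b': 3, 'policy_c': 3},
    }
    print(utilitarian_choice(people))  # -> 'policy_c' with these toy numbers

Even this toy version shows where the hard philosophical work lies: the rescaling step simply assumes that one person’s top preference counts the same as another’s, which is precisely the interpersonal comparison Crockett notes is so difficult.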

Fears of computers running amok are a waste of emotional energy, according to Steven Pinker, author and Harvard Professor. He writes that human-level AI is still 15 to 25 years away and that it is bizarre to think robotics experts will not build safeguards against harm into the machines they are creating. He asks why an intelligent system would want to disable its own safeguards, writing: ‘AI dystopias project a parochial alpha-male psychology onto the concept of intelligence… It’s telling that many of our techno-prophets don’t entertain the possibility that artificial intelligence will develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilisation.’

Similarly, Michael Shermer, psychologist and founding publisher of Skeptic magazine, warns against assuming that intelligent machines will bring either a horrifying dystopia or an idealistic utopia. Such prophecies, he argues, rest on a flawed analogy between human nature and computer nature: emotions were built into humans by evolution, but they will not be built into machines, so fears that machines will become evil are unfounded.

Other psychologists considered the possibility of a future where machines do our thinking for us. Athena Vouloumanos (New York University) writes that the kind of thinking machines do will define future human societies. She predicts that once machines start thinking properly, mundane tasks such as cleaning and food shopping will be the first to disappear, and that eventually machines may do our work and create our art for us. Although this conjures a dystopian image of humans becoming ‘zombie consumers in a machine-run world’, a cheerier possibility is that we may have more time to spend with our families, or to learn new skills simply for the joy of it.

Arnold Trehub (University of Massachusetts) argues that machines cannot think at all. He writes: ‘No machine has a point of view; that is, a unique perspective on the worldly referents of its internal symbolic logic.’ Humans, he argues, judge the output of ‘thinking machines’ and supply our own referents for the symbolic structures they produce.

Will thinking machines ever develop a sense of self? This is the question posed by Professor of Psychology Jessica L. Tracy (University of British Columbia) and Kristin Laurin, Assistant Professor of Organisational Behaviour (Stanford Graduate School of Business). They ask whether machines will be subject to the same evolutionary forces that made the human sense of self adaptive: the need to get along with others and to attain status. They start from the assumption that machines will one day control their own access to the resources they need, such as electricity and internet bandwidth, and that the machines that survive in such an environment will be those programmed to increase their own efficiency or productivity. Those that learn to form alliances in the competition for limited resources will be most effective.

They suggest that, unlike humans, machines will be able to access each other’s inner thoughts: ‘There’s no reason that one machine reading another’s hardware and software wouldn’t come to know, in exactly the self-knowing sense, what it means to be that other machine… When machines literally share minds, any self they have would necessarily become collective.’ Self-awareness in machines, they suggest, could be adaptive, leading them to feel empathy and motivating them to protect rather than harm human beings – a species ‘several orders of magnitude less intelligent than them’.

Read all 186 responses, including the thoughts of Susan Blackmore, Nicholas Humphrey, Christopher Chabris, Alison Gopnik, Martin Seligman and many more.

Do you have thoughts on machines that think? Email us on [email protected].