Eroding the uncanny valley
We’re all familiar with the ‘creepy doll’ horror movie trope. For many people, seeing those dead glass eyes is an instantaneous one-way ticket to spooksville. If you’ve ever found yourself squirming during one of these movies, congratulations – you’ve experienced the uncanny valley.
The uncanny valley, however, isn’t just a horror movie device. In fact, it’s so pervasive outside of the silver screen that it poses problems out in the real world. Nowhere is this more of an issue than in the field of robotics.
The uncanny valley was first described by robotics professor Masahiro Mori in 1970. He had observed that, as a robot became more human-like in appearance, people's feelings towards it shifted gradually towards empathy and positivity. That is, until a hard-to-pin-down point where it begins to look a little too human. His classic diagram of this phenomenon (which is more illustrative than empirical) shows a sharp plummet in viewer receptiveness to robots that appear 'almost' human, with onlookers experiencing seemingly inexplicable feelings of revulsion.
The bottom of this emotional dip is generally illustrated with labels such as ‘corpse’ or ‘zombie’, but in an increasingly technological era, we can often find examples of robots that occupy that dip. Though technology marches forward, becoming more impressive and capable by the year, many efforts to create a human-like robot are dampened by the responses of outsiders who feel uneasy about the machines’ appearances. For many, human-like designs fall into that uneasy space of seeming ‘almost human, but not human enough’ – the familiar, yet unfamiliar.
Though psychologists and robotics engineers may seem like strange bedfellows, these two fields are beginning to turn Professor Mori's conceptualisation of the uncanny valley into something more operationalised, more easily understood in the context of existing cognitive theories... and maybe even something avoidable.
Dr Stephanie Lay’s work takes a closer look at the role faces play in producing uncanny feelings. Throughout her PhD, she conducted a number of studies investigating the role of face processing, emotions and other factors that might produce uncanny feelings in onlookers. She places particular emphasis on eeriness for creating the uncanny valley – much in the way that Freud highlighted the ‘unhomely’ as pivotal to producing uncanny feelings.
In the first year of her PhD, Dr Lay took an exploratory look at how people described uncanny faces. Participants were presented with five images of faces, previously established as uncanny, and asked to provide a detailed description that would help someone pick the entity shown out in a crowd. While analysing these descriptions, it became clear to Dr Lay that the eyes were grabbing a lot of attention; more so than any other feature.
With eyes being so often referred to as the window to the soul, it’s perhaps easy to imagine why this particular feature seeming a little ‘off’ might raise the uncanny alarm. As such, it seemed the perfect entry point for further investigation.
‘I had also observed that many of the eerier near-human agents in my own research were those with exaggerated or distorted eyes,’ Dr Lay states on her website. ‘I wondered if I would be able to demonstrate this experimentally if incongruent expressions were presented in the eye region… would very happy, angry, disgusted, frightened or sad faces with “dead” eyes really be rated as the most eerie?’
To get a closer look at how emotions conveyed by the eyes might produce these eerie, uncanny feelings in onlookers, Dr Lay presented participants with a series of composite faces. These faces showed either congruous emotions in the upper and lower facial features (happy eyes and a happy mouth, for example), or incongruous ones (such as angry eyes and a happy mouth). If the uncanny valley arises, even in part, from an issue with faces conveying emotions atypically, faces with two different expressions in their features should be rated by participants as being more eerie than faces showing one consistent emotion.
Sure enough, the incongruous expressions hit that eerie space. Though previous research had highlighted neutral eyes as producing the eeriest expressions, Dr Lay observed that stimuli with extremely happy mouths with fearful or angry eyes were most unnerving to participants. ‘Artificial faces that can’t show believable or trustworthy expressions were definitely uncanny,’ she told me. ‘That’s not [an expression] that people often face in real life, but when they do, it’s indicative of a threat or a reason to be suspicious.’
Together, these results indicate that robotic faces that veer too far towards mimicking humans may leave us trying to read intentions from facial emotions that just aren’t up to the job of conveying them. Instead of finding something familiar and trustworthy, we’re confronted with faces that just aren’t quite right, and leave us instinctually uneasy, wondering what threat may be lurking that led to that expression.
Roboticists are well aware of this issue and, rather than trying to conquer it, seek to avoid it altogether by stylising their designs, often favouring non-realistic features and sleek panels that could never be mistaken as human. The robot assistant Pepper (from SoftBank Robotics) is a particularly influential example of this. Instead of leaning into realistic human features, its design remains blocky and abstract, with simplified eyes and a touchscreen on its chest – just in case the rest wasn't enough of an indicator it isn't human.
However, with technology advancing, more complex designs that can handle a more advanced range of tasks are needed. As such, roboticists are looking towards mimicking biological motions to give their robots brand new capabilities. Designing nimble robots that can navigate tough terrains, open doors and more makes these machines suitable for a range of applications that previous types of robot could only dream of. But this has led to a new uncanny challenge – one that many of us may not have anticipated…
We're all familiar with describing unsettling faces as uncanny, but how many of us found ourselves reaching for that word when we saw BigDog? Boston Dynamics' first attempt at a quadruped robot, capable of carrying supplies for troops during combat, prompted uneasy uproar when videos of the prototype surfaced online. As this model's design became more advanced (evolving into the well-known Spot model), its movements became more refined and dog-like, and as it started performing complex tasks like opening and walking through doors, discomfort intensified. Public commentary continues in BigDog's video comment sections. 'Incredible and Scary!' '[It] is alive, I swear.' The comment that may get closest to the uncanny issue, however, reads 'The way it crawls back onto its feet... oh my lord... it reacts too realistically for my comfort.'
Though no faces are involved, this too – says Dr Burcu Ayşen Ürgen of Bilkent University – is likely the fault of the uncanny valley. ‘I think the key point here is whether some non-human agent violates your expectations or not, and this is what we think underlies the uncanny valley experience. In [Spot’s] case, you see a completely metallic robot, and based on this appearance, you may not expect that it could do the extra-ordinary things it does, like jumping over the boxes without falling. So, in other words, when it does those amazing movements, your expectations are possibly violated. At least, mine are really violated.’
Through a series of experiments, Dr Ürgen and her colleagues found reliable electrophysiological signals suggesting that a mismatch between what we expect robots to do and what they actually do seems to instigate uncanny feelings.
One of her EEG studies, published in Neuropsychologia in 2018, took a look at participants' brain activity while they viewed video clips, filmed from the waist up, of three agents: a human, a mechanical humanoid robot, and a realistic android robot that sat between the two in appearance. The mechanical and realistic robots were actually the same machine, shown with or without its human-like skin. Extra jarringly, the human in these videos was also the model for the android's appearance.
Analyses focused on a potential called the N400 – a relatively large spike in neural activity that occurs around 400 milliseconds after seeing something that violates your expectations. The relationship between this potential and expectation violations is well established. After its discovery in the context of linguistics in 1980 by Dr Marta Kutas (who also collaborated on this study), further experiments showed that the N400 was a reliable electrophysiological indicator of unexpected outcomes across many different domains.
When participants watched these videos, the research team observed large N400 spikes during the in-between android's movements (though not when it was static). No such expectation-violation-indicating N400 appeared when either the mechanical robot or the human performed the same movements. Taken together, this means it was the android's incongruent movements, not its overall form, that set it apart.
This, the team believes, strongly supports the prediction violation theory of the uncanny valley effect. When we see something human-looking moving in non-human ways, the N400 appears strongly as our brains flag it as a cause for concern and unease. And what’s more, though the robots used in this study all looked human, it’s not just things that look uncannily human that can violate our expectations.
Boston Dynamics' BigDog is a great illustration of this lesser-known extent of the uncanny valley effect. When videos of this huge, black quadruped marching through wilderness surfaced in 2004, the internet was, to say the least, extremely weirded out. The same kind of movement expectation violation identified in this study can be found when watching BigDog, despite its non-human appearance. We expect its robotic legs to move the way we've always seen robots move, so our brains find their biological-like motion jarring. When viewers saw it march and sway across difficult terrain with almost deer-like movement patterns, many (judging from the social media uproar) seemingly felt that uncanny type of discomfort.
In future, Dr Ürgen hopes electrophysiological signals may be of use in empirically measuring uncanny feelings produced by different types of robot designs. ‘Our N400 study is just the beginning. This type of approach can be used in future research to better characterise what makes us feel uncomfortable with almost-human agents, and hopefully give feedback to roboticists who design and produce these agents.’
The uncanny valley may pose some hurdles for the wider adoption of robotic helpers now, but whether this phenomenon will persist into the future is not so certain. In the years since BigDog's debut, public opinion has begun to shift. Videos of robots with biological motion dancing and doing parkour, complete with out-takes, work hard to give us a friendlier, more comfortable impression of this advanced technology and the movements it's capable of. And that may just be eroding the uncanny valley.
‘If our prior knowledge with robots changes, our expectations will change as well,’ Dr Ürgen shared. ‘So, if we get more familiar with robots over long periods of time, it is possible that we may begin to move past the valley.’
This is great news for roboticists looking towards wider adoption of robotic assistants, but Dr Lay is unconvinced that such complete acceptance is truly imminent. ‘It’s going to take a while for us to calibrate to have these near-human agents in our lives,’ says Dr Lay, who emphasises the role trust will play in this process of acceptance. ‘For near-human agents to be acceptable, we need to have seamless interactions where we don’t fear or doubt our near-human agents, and we’re still a way off from that.’
Psychologists, however, don’t just have to wait patiently for that change to happen. By using our skills to untangle the factors that maintain the existence of the uncanny valley, we can develop tools and approaches to overcome it and, perhaps one day in the not-so-distant future, eliminate it entirely.
BOX: Uncanny voices
Think the uncanny only relates to visuals? Think again! Research shows that mismatches between expectations and reality can create uncanny feelings in the auditory domain too.
For example, one study published in 2011, led by Wade Mitchell at Indiana University, took a closer look at uncanniness caused by voices with 48 US-based undergraduate participants. Each participant viewed four 14-second videos of robot or human figures reciting neutral phrases with one of two types of voice – human, or synthetic. These videos played on loop while the volunteers completed several Likert questions rating the agent’s humanness, eeriness, and interpersonal warmth.
Analyses revealed that the participants found videos with mismatched agents and voices – i.e. a robot with a human voice, or a human with a synthetic voice – to be significantly more eerie than those that matched. Once again, expectations as to what kind of voice each agent would produce were violated, creating uncanny feelings in viewers. So take note, roboticist readers: in order to avoid the uncanny valley of voices (at least until we're more familiar with robotic assistant technology), picking a voice that matches your agent's appearance is your best bet.
Funnily enough, the agent rated as warmest in this study wasn’t the human – it was the robot. As the authors state, this can probably be ‘attributed to its cuteness, relative to the seriousness of the ex-Marine human actor’. Maybe humanoid robots won’t have to be perfect and cuddly to gain wider acceptance after all!
- Emma L. Barratt is a cognitive scientist and science communicator who writes for our Research Digest.