The future of transport?

The psychology of self-driving vehicles, with Stephen Skippon and Nick Reed.

03 July 2017

Imagine a society where all road travel was in self-driving vehicles. It might look very different to today: with many fewer deaths and injuries, the disappearance of some social practices, and the emergence of new ones. Back in 2005 Frank Geels pointed out that transitions like this are never purely technological in nature: they are ‘socio-technical’. The ways people are affected by, respond to, and attribute meaning to technologies matter as much as the technologies themselves. As psychologists, we at the Transport Research Laboratory (TRL) in Wokingham have a slightly different take on Geels’ concept and see such transitions as ‘psychosocial-technical’. Neither the dynamics of a transition nor its consequences can be understood without exploring the psychology of the people affected.

TRL, a former government research institution now owned by the non-profit-distributing Transport Research Foundation, began researching automation of driving in the 1960s. Along with other transport researchers in the UK and abroad, our Transport Safety and Behaviour Group is actively involved in exploring some of the psychological implications of the transition from human-driven to automated vehicles.

A central practice
Driving is a central social practice in many societies. It’s both a functional means to engage in modern life and a means to create and reflect personal identity. Researchers and policy makers who refer to ‘car-dependent’ societies are implicitly also discussing driving-dependent societies.

Driving offers many benefits, and few who drive are willing to give it up, yet it has huge costs for societies. Mass car use degrades local and global environments, and, uniquely among means of travel, it causes large numbers of deaths (1,730 in the UK in 2015) and serious injuries (22,144 in 2015) with distressing and traumatic consequences that ripple out to people beyond those immediately involved in each crash.

The transition to self-driving vehicles could change all that. Human error is the sole cause of most crashes. Full automation is potentially much safer, since AVs don’t get tired, distracted or impaired by alcohol and drugs, don’t get angry, and don’t choose to take inappropriate risks. Their widespread use could also potentially reduce congestion, improve air quality, and cut damaging carbon emissions. They could bring personal benefits too – while many enjoy the experience of driving, there are many who see the attraction of being able to hand over the driving task and occupy themselves with other activities, much as rail travellers do. AVs could even enable people currently unable to drive to have the benefits of independent, point-to-point personal travel for the first time.

We’ll explore some of the psychological dimensions of this transition by considering three scenarios. First, automation of driving on multi-lane highways is just around the corner, in the form of ‘auto-pilot’ functions that can be selected by human drivers on these roads. We’ll compare psychological theories of how human drivers drive on these roads to the ways in which AVs are being designed to do it. Second, we’ll outline the extra complexities of automating driving in urban centres. The GATEway project led by TRL (www.gateway-project.org.uk) is one of three projects commissioned in 2014 by the UK government through innovation agency Innovate UK, with support funding from commercial partners, to research the potential for integration of AVs into society. In the project, we intend to trial fully self-driving shuttle minibuses in a public, non-road urban environment to investigate their interactions with pedestrians, cyclists, etc. Third, we’ll consider the so-called ‘moral algorithm’ problem – how would an AV ‘decide’ in an emergency between two alternative behaviours, each of which could cause harm to humans?

Motorway driving
Human driving can be understood in terms of self-regulation processes in which behaviour is controlled, through feedback mechanisms, in the pursuit of target reference states or ‘goals’. A feedback mechanism features a mental comparator that compares the perceived state of the world with the reference state, detects discrepancies, and activates behaviours to reduce them.

Ray Fuller’s risk allostasis theory (RAT) elaborates on this to propose that task difficulty, experienced as ‘feelings of risk’, is self-regulated in driving. ‘Allostasis’ means that there is not a single target level of feelings of risk, but rather a target range of levels. If feelings of risk are too strong, or not strong enough, driving behaviours such as speed selection are adjusted to return them to the target range. RAT also includes a secondary mechanism in which dispositions to comply with speed limits, and influences such as enforcement measures, combine with the output of the feedback loop to determine the speed the driver adopts.
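
To make the feedback idea concrete, here is a minimal sketch of a RAT-style control loop. It is our own toy illustration, not Fuller’s formal model: the risk function, target range and step size are all invented for the example.

```python
# A minimal sketch of a RAT-style feedback loop. This is a toy
# illustration, not Fuller's formal model: the risk function, target
# range and step size are all invented for the example.

def felt_risk(speed_mph: float) -> float:
    """Toy mapping from speed to subjective 'feelings of risk' (0 to 1)."""
    return min(1.0, speed_mph / 100.0)

def regulate_speed(speed_mph: float,
                   risk_range: tuple[float, float] = (0.5, 0.7),
                   step: float = 2.0) -> float:
    """One pass of the comparator: nudge speed back towards the target range."""
    low, high = risk_range
    risk = felt_risk(speed_mph)
    if risk > high:           # feelings of risk too strong: slow down
        return speed_mph - step
    if risk < low:            # not strong enough: speed up
        return speed_mph + step
    return speed_mph          # within the target range: no adjustment

speed = 90.0
for _ in range(15):           # repeated comparisons settle inside the range
    speed = regulate_speed(speed)
print(speed)                  # 70.0 with these toy numbers
```

The essential point is that there is no single ‘correct’ speed: any speed whose associated feelings of risk fall inside the target range leaves behaviour unchanged.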

RAT may not be the whole answer. In Heikki Summala’s (2007) multiple comfort zone model, drivers seek to keep several control variables within ‘comfort zones’: safety, trip progress, rule compliance (traffic laws, social norms), smooth operation and performance, and pleasure of driving. A particular behaviour such as speed selection is the result of the combined influences of these multiple goals, some complementary, some competing and conflicting with each other. For instance, safety and rule-following goals might exert complementary restraining influences on speed, while the goal to experience pleasure from driving might exert a conflicting influence.
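
Summala’s model can be sketched in a similar way. In this toy illustration (ours, not Summala’s formal model), each goal contributes an acceptable band of speeds and the driver ‘satisfices’ by picking any speed inside the overlap of all the bands; the numbers are invented.

```python
# A toy sketch of Summala-style satisficing (ours, not his formal model).
# Each goal contributes an acceptable band of speeds; the driver picks
# any speed inside the overlap of all bands. Numbers are invented.

goal_bands_mph = {
    'safety':         (0, 75),    # restrains speed
    'rule_following': (0, 70),    # speed-limit compliance
    'trip_progress':  (55, 120),  # make reasonable progress
    'pleasure':       (65, 120),  # enjoyment of brisk driving
}

def satisficing_speed(bands: dict[str, tuple[float, float]]) -> float | None:
    """Return a speed inside every goal's comfort zone, if one exists."""
    low = max(lo for lo, _ in bands.values())
    high = min(hi for _, hi in bands.values())
    if low > high:
        return None               # zones conflict: some goal must give way
    return (low + high) / 2       # any value in [low, high] satisfices

print(satisficing_speed(goal_bands_mph))  # 67.5 with these toy numbers
```

When the bands fail to overlap, the goals conflict and something has to give – which is exactly where restraining goals like safety can lose out to competing ones like pleasure.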

Evidence is also emerging that symbolic goals to signal aspects of personal identity are relevant. TRL research in our DigiCar driving simulator has found that driving styles correlate significantly with the five-factor personality traits agreeableness, conscientiousness and neuroticism. We have also found that driving style signals these personality traits to others, and we have observed drivers changing driving styles in response to the gender of a previously unknown passenger. It seems likely that at least some of the reason for risky driving styles and rule violations is the opportunity they afford to signal aspects of personal identity to others.

AVs, by contrast, are controlled by software. For less complex driving environments, that software need not be too complex, at least at a high level, where control is exercised by a ‘finite state machine’ that represents the various possible behavioural states available to the vehicle (see Özgüner et al., 2011). For adaptive cruise control, available in cars today, the system remains in one state, ‘cruise’, until a switching condition is detected – such as the presence of a slower-moving vehicle ahead. It then switches to another state, ‘slow down’, which is maintained until speed is matched to the leading vehicle and following distance is appropriate for that speed. Then it switches to a third state, ‘follow’, which is maintained until some further switching condition is encountered. Adaptive cruise control cancels if the driver manually brakes or accelerates. When more aspects of driving are automated, different general driving situations (or ‘metastates’) each have their own finite state machines, and in complex situations there may be hierarchies of state machines, each representing relevant families of situation-appropriate behaviours.
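
The adaptive cruise control example can be expressed directly as a small finite state machine. The sketch below is illustrative only: the states follow the description above, but the gap thresholds and switching logic are our own simplified assumptions, not any manufacturer’s implementation.

```python
# An illustrative finite state machine for adaptive cruise control.
# States follow the description in the text; the gap thresholds and
# switching logic are simplified assumptions, not a real implementation.
from enum import Enum, auto

class AccState(Enum):
    CRUISE = auto()     # hold the driver's set speed
    SLOW_DOWN = auto()  # reduce speed towards a slower lead vehicle
    FOLLOW = auto()     # hold a steady gap behind the lead vehicle

def next_state(state: AccState, gap_s: float | None,
               speed_matched: bool) -> AccState:
    """gap_s is the time gap to the lead vehicle in seconds (None if clear)."""
    if gap_s is None:
        return AccState.CRUISE                   # road ahead clear
    if state is AccState.CRUISE and gap_s < 2.0:
        return AccState.SLOW_DOWN                # slower vehicle detected
    if state is AccState.SLOW_DOWN and speed_matched and gap_s >= 1.5:
        return AccState.FOLLOW                   # speed matched at a safe gap
    if state is AccState.FOLLOW and gap_s < 1.5:
        return AccState.SLOW_DOWN                # gap closing: back off again
    return state                                 # no switching condition met

state = AccState.CRUISE
state = next_state(state, gap_s=1.8, speed_matched=False)  # -> SLOW_DOWN
state = next_state(state, gap_s=1.9, speed_matched=True)   # -> FOLLOW
```

Real systems add further states and conditions (including cancelling when the driver brakes), but the structure – discrete states plus explicitly enumerated switching conditions – is the same.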

On a motorway the range of potential behavioural states for a vehicle is limited and readily represented in software. The software also needs to contain representations of all possible situations (specific configurations of sensor inputs) that require a change from one state to another. The set needs to include all the potential switching conditions in which the safety of occupants and/or other road users requires a response – such as the presence ahead of stationary objects, slower-moving vehicles, or vehicles changing lanes ahead. For motorways this is still a relatively small, manageable set, which is why automation of motorway driving is already within reach.

So who’s best when it comes to this task? Human driving performance is readily impaired by tiredness, mood, alcohol, drugs and various distractions like mobile phone calls and texting. We suggest that it’s also influenced by symbolic goals that lead some to adopt risky driving styles. On UK motorways, around half of human drivers break the speed limit, and ‘tailgating’ – following dangerously close to the vehicle ahead – is commonplace. AVs won’t have any of those limitations, so it’s easy to see how a transition to fully automated driving on motorways could save many lives. There are likely to be substantial reductions in congestion and emissions, too. We’re confident AVs will soon be winning this one hands down.

Urban driving: it’s complicated…
It’s not so easy to automate driving in urban areas. To a roboticist these are ‘unstructured’ environments that have not been specifically designed for automated operation and contain unpredictable elements – particularly people, doing unpredictable things in unpredictable ways. Although the range of behavioural states a vehicle can adopt is still quite limited, no matter how many switching conditions are considered there’s always the possibility of something new happening. A few days ago in a North Wales village, while being tailgated, one of us encountered the life-sized figure of a person made entirely of silver balloons slowly drifting across the road ahead.

When human drivers encounter situations they have never previously experienced, they respond with whatever behaviours are most consistent with their currently active goals. The unusual balloon situation was easy enough for a human driver to handle by reference to the active goal of remaining safe: having recognised that the figure wasn’t a person, the driver could best meet that goal by continuing forwards without slowing.

An automated system, however, can only respond to a novel situation to the extent that its sensory inputs fit closely enough to one of its switching conditions to trigger a change of state. The problem is that the range of possible situations in an unstructured environment like an urban centre can be huge, and the possibility of a silver-balloon situation is always there. To tackle this, designers of self-driving vehicles are now pursuing approaches based on machine learning: algorithms that improve their performance with experience. In machine learning, the AV is presented with a training set of environmental conditions (sensor inputs), and the correct responses to each. The AV gradually builds its own associative model of what combinations of sensor inputs determine switching from one behavioural state to another. Potentially the model can be continuously refined as further experience is gained. Much of the present effort in AV software development consists of the acquisition of very large training datasets from vehicles operating in real-world road environments. For the future, there is the potential for every AV to ‘learn’ from the collective experiences of all.
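
The flavour of this approach can be conveyed with a deliberately tiny supervised-learning example. Everything here is invented for illustration – real training sets contain millions of richly annotated sensor frames, not six hand-made rows – but the principle is the one described above: labelled examples in, an associative model of switching decisions out.

```python
# A deliberately tiny supervised-learning illustration. Features, labels
# and rows are all invented; real AV training sets contain millions of
# richly annotated sensor frames.
from sklearn.tree import DecisionTreeClassifier

# Each row: [obstacle_distance_m, obstacle_speed_mps, obstacle_is_person]
X = [
    [30.0, 0.0, 1],   # pedestrian ahead, stationary
    [40.0, 0.5, 1],   # pedestrian drifting into the road
    [60.0, 8.0, 0],   # slower vehicle ahead
    [45.0, 0.1, 0],   # stationary object that is not a person
    [25.0, 0.3, 1],   # pedestrian close by
    [55.0, 0.2, 0],   # distant stationary object
]
y = ['stop', 'slow_down', 'follow', 'continue', 'stop', 'continue']

# The model builds its own associative rules linking sensor inputs to
# behavioural states, rather than having them enumerated by hand.
model = DecisionTreeClassifier(random_state=0).fit(X, y)

# A novel input is handled only as well as it resembles the training data:
print(model.predict([[42.0, 0.3, 0]]))  # some answer is always returned
```

The final line illustrates the limitation discussed next: faced with inputs unlike anything in its training data, the model still returns an answer, but with no guarantee that it is a sensible one.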

In machine learning, humans are still indirectly influential in defining how the system makes decisions, by specifying the appropriate actions in the various training conditions. Without this human feedback, a self-driving car cannot ‘learn’ in a meaningful way. Further, it may be that the training appropriate for one driving environment (in the USA, say) may not be appropriate for another (Germany, India, Japan, UK) where road networks, traffic laws and driving cultures are different. Humans may need to provide different training datasets and guidance on appropriate decision-making for each distinct environment.

Nevertheless there is always the possibility of something happening that an AV can’t fit reliably to anything in its database. AVs could be programmed to err on the side of caution in such cases – for instance, slowing down – but it may still seem that the ability of an AV to respond to novel situations in urban driving will be more limited than that of a human driver.

So what justifies the claim that a transition to AVs could reduce or even put an end to crashes in urban driving? First, as discussed earlier, the performance of an AV does not degrade through tiredness, or the influence of drugs or alcohol: it operates the whole time at peak alertness and with peak response times. Second, its attention is always fully on the driving task: it’s never distracted by a conversation, texting, or anger at a recent aggressive encounter. Third, it has no attentional blind-spots. Fourth, its driving ‘style’ is consistent, exactly as programmed, and predictable by other road users. For these reasons, an AV is much less prone to lapses, and not at all prone to deliberate violations in its driving. These are potentially enormous advantages from a safety perspective.

Can we entrust moral choices to AVs?
Our final scenario is the so-called ‘trolley problem’ – an emergency situation where an AV must choose between two courses of action, each of which would cause harm to or the death of one or more humans. This is one of the most contentious topics in discussion of AVs. How could an AV make a morally appropriate choice?

From a psychological perspective, an obvious challenge to the trolley problem is to ask, how often do human drivers face it? If you drive, ask yourself whether you’ve ever experienced it: very few drivers ever will in a lifetime of driving. Is it something a particular AV realistically will ever face?

If your answer to that question is yes, then consider next how human drivers might respond to this emergency. Some drivers may not even notice it, if their attention is distracted by a phone call, bored children in the back seat or any of the other sources of driver distraction. Others may notice, but be unable to make any choice in the time available: for instance if their reaction times are extended through fatigue, alcohol or drugs. Among those who are able to make a meaningful decision in time to act, there is still no guarantee that they would make a choice that was generally considered morally appropriate within their culture. Social psychology has shown us that people often carry explicit or implicit prejudices, valuing some people more highly than others based on race, gender, religion, age, etc. People are also influenced by recent experience: might a furious row with a partner just before setting off lead someone to implicitly devalue the life of someone else who shares visual characteristics with the partner? Then there’s self-interest. It can be argued that human drivers often make ‘immoral’ choices: speeding and tailgating, for instance, implicitly involve choices to put self-interest ahead of the safety of others (remember that risky driving correlates with low trait agreeableness). Emerging evidence already suggests that AV users would want them to prioritise the safety of the vehicle’s occupants over that of people outside.

Therefore, in thinking about how AVs would tackle the trolley problem, it’s a mistake to assume that they would necessarily be worse at it than human drivers.

How then would an AV respond to an emergency situation that involved the trolley problem? It would switch between behavioural states if a switching condition were met. Those switching conditions will have been programmed, directly or indirectly, by humans. Even in machine learning, where the software builds its own switching model through experience, it is ultimately told the correct behavioural response to each situation in its training set by a human. It’s up to human trainers and programmers to make the moral choices in advance and to provide sufficient learning experiences to build an appropriate model. The AV software will implement that model of human choices if it ever encounters a genuine trolley problem.

There are risks in this. For instance, knowing that AV users would prefer an AV that prioritises their safety over that of others outside, might manufacturers be tempted to bias the programming or training of AVs in that direction? Would that fit with or contradict socially accepted moral values? However, there are some potential benefits. For instance, the perceptual systems of AVs can be made agnostic to visual details like skin colour, gender, age or body shape that might activate prejudicial responses from some humans. Importantly, there is the potential for societies to exercise control, insisting on standards and regulations that require the implementation of choices that reflect what is currently socially acceptable. That might deliver outcomes more socially acceptable than the choices many human drivers, with all their issues, would make.

Where next for psychology and AVs?
Current psychological interest focuses on the transition from human to autonomous driving, during which both types of control will coexist. They already do: adaptive cruise control has been available for some years. Greater autonomy is being gradually introduced by vehicle manufacturers: Tesla’s Model S has an ‘Autopilot’ automated driving function for highway driving, such that the driver need not normally interact with steering or pedal controls.

So far this autonomy is only partial. The human driver remains responsible, and (according to the Tesla owner’s manual) must ‘stay alert, drive safely, ensure the vehicle stays in the traveling lane, and be in control of the vehicle at all times’. How realistic is it to expect human drivers to do this? How long will it take a human driver who’s doing something else to respond to an alert, re-orient themselves to the driving situation, and decide what to do? What happens as automated systems get better and the frequency with which humans need to intervene falls? Will drivers eventually become so de-skilled through inexperience that their interventions are unsafe? These are all topics of immediate research interest for transport psychologists, and driving simulators like DigiCar provide us with safe virtual-reality environments in which to study them.

During any transition, human drivers will share the roads with AVs. The ways that they respond to AVs in this mixed environment could potentially have major impacts. Some human drivers may adapt their driving behaviours in relation to AVs – for example feeling safe to change lanes into the path of an AV. If that became common there could be significant adverse effects on traffic flow dynamics and road safety. Potentially, highways authorities will need to introduce specific measures and structural features to manage adverse impacts of some interaction styles.

The presence of AVs will also influence how pedestrians and cyclists interact with vehicles. In busy urban areas with slow-moving traffic, it is not uncommon for non-verbal communication to take place between pedestrians and drivers. In the absence of human drivers, AVs may need to adopt new forms of communication to indicate their intent to pedestrians. Similarly, pedestrian behaviours may adapt to automated vehicles in the knowledge that, within the capabilities of its braking performance, an automated vehicle will certainly stop for a pedestrian in the roadway. We intend to investigate such interactions in the GATEway project.

In his 2014 book The 4th Revolution: How the Infosphere Is Reshaping Human Reality, Luciano Floridi points out that successful automation typically involves ‘enveloping’ the system in an environment adapted to its particular strengths and weaknesses (ability to rapidly process large amounts of data, but inability to perform semantic, meaning-related tasks): ‘If driverless vehicles can move around with decreasing trouble…this is…because the “about” they need to negotiate has become increasingly suitable for light AI [artificial intelligence] and its limited capacities.’ Enveloping AVs in a suitable environment could involve simplifying their perceptual tasks, for instance by adding inexpensive radio frequency ID tags to other vehicles, road signs and fixed roadside objects. It might also involve adapting road traffic laws to their capabilities; and ultimately, the adaptation of human road users’ responses to them. A transition to full autonomy may be enabled more readily if progress in AV machine learning is accompanied by such adaptations to the operating environment.

Taken together, these developments may lead to significant changes in the way we are able to achieve mobility in future. However, none of them are simply technological developments. All involve interaction with people, and understanding of those interactions is just as important as engineering the technologies. Our in-depth understanding of the psychology underlying human interactions with AVs will be critical to their ultimate integration into society.

Stephen Skippon is a Principal Human Factors Researcher

Nick Reed is Academy Director at TRL

Illustration: Ciaran Murphy

References

Carver, C. S., & Scheier, M. F. (1998). On the Self-Regulation of Behavior. Cambridge, England: Cambridge University Press.

Flach, P. (2012). Machine Learning: The Art and Science of Algorithms that Make Sense of Data. Cambridge, England: Cambridge University Press.

Floridi, L. (2014). The 4th Revolution: How the Infosphere Is Reshaping Human Reality. Oxford, England: Oxford University Press.

Fuller, R. (2011). Driver Control Theory. In B.E. Porter (Ed.), Handbook of Traffic Psychology. Amsterdam, Netherlands: Elsevier.

Geels, F. (2005). Technological Transitions and System Innovations: A co-evolutionary and socio-technical analysis. Cheltenham, England: Edward Elgar Publishing.

Özgüner, Ü., Acarman, T., & Redmill, K. (2011). Autonomous Ground Vehicles. Norwood, MA: Artech.

Summala, H. (2007). Towards Understanding Motivational and Emotional Factors in Driver Behaviour: Comfort Through Satisficing. In P.C. Cacciabue (Ed.), Modelling Driver Behaviour in Automotive Environments: Critical Issues in Driver Interactions with Intelligent Transport Systems. London, England: Springer-Verlag London Ltd.

Tesla (2016). Model S Owner’s Manual. Retrieved May 2016 from https://www.teslamotors.com/sites/default/files/Model-S-Owners-Manual.pdf