Driverless vehicle problems

Two letters from our October edition.

08 September 2017

It was good to see evidence that psychologists are seriously involving themselves with the application of developments in artificial intelligence and robotics to personal and commercial transport systems (‘The future of transport’ by Stephen Skippon and Nick Reed, August 2017). The consequent technological changes are likely to bring about some of the largest-scale social and economic upheavals seen for decades or even centuries. By now, though, most of us are probably familiar with the usual litany of benefits associated with driverless vehicles: they don’t need sleep or food, get fatigued, make mobile phone calls or daydream of muscled hunks and bikini babes.

We get the point: vehicles controlled by computers don’t suffer from most of the weaknesses and deficiencies that cause human driver error. But then the narrative seems to come to an end rather abruptly. Do driverless vehicles have no weaknesses or problems associated with them at all? Or do they, instead of having human-type failings, have computer-type failings? We all know how often and in what manner computers fail: they fail catastrophically, stopping completely and without warning, something human drivers hardly ever do. Their sensors and other input devices get scratched or dirty and stop working, batteries fail or catch fire, electrical connections corrode or break, and software becomes corrupted or gets hacked.

Most of these problems are soluble: back-up power systems, duplicate control circuits, key component failure protocols, and so on. But such problems are real and serious, especially during the technological transition phase when human drivers share the roads and must cope with the consequences of driverless vehicle failure. During the development phase (i.e. at present), if a robot vehicle fails, the technicians probably just fix it and carry on. But are they recording driverless vehicle failure data? Do they know what types of failure occur and how often they happen? Do they consider what the consequences might be in real environments? Can they tell us how long driverless vehicles will last, and whether we will discover that their working lives have come to an end only when, without warning, they pack up completely, as we find with computers?
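By way of illustration only, and not drawn from the letter or from any real manufacturer's practice, here is a minimal sketch of the kind of failure record such data-gathering might involve. Every field, category and function name is hypothetical; the point is simply that logging each failure's type, mileage and severity would let developers answer the questions posed above about frequency and working life.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical failure categories of the sort the letter lists:
# fouled sensors, battery faults, corroded connections, corrupted software.
FAILURE_TYPES = {"sensor", "power", "wiring", "software", "unknown"}

@dataclass
class FailureEvent:
    """One recorded failure of a test vehicle (illustrative fields only)."""
    vehicle_id: str
    failure_type: str            # one of FAILURE_TYPES
    occurred_at: datetime
    odometer_km: float           # mileage at failure, for working-life estimates
    total_loss_of_control: bool  # did it 'just stop working completely'?

def summarise(events: list[FailureEvent]) -> Counter:
    """Tally how often each failure type occurs across the test fleet."""
    return Counter(e.failure_type for e in events)

# Example: two logged failures and their tally.
log = [
    FailureEvent("AV-001", "sensor", datetime.now(timezone.utc), 12_400.0, False),
    FailureEvent("AV-002", "software", datetime.now(timezone.utc), 48_900.0, True),
]
print(summarise(log))  # Counter({'sensor': 1, 'software': 1})
```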

Dr Roger Lindsay
Ulverston, Cumbria

I was delighted to see the article by Stephen Skippon and Nick Reed on the psychology of self-driving vehicles. If memory serves me well, there have been very few articles on driving published by the BPS. [Editor's note: Of course I would disagree with this!]

It strikes me that there are (at least) two different kinds of vehicles one might consider here. One is a vehicle where the ‘driving’ is done by someone else – e.g. a 10-seater for travelling between different places – and another is a vehicle of the sort discussed by Skippon and Reed, where one’s own car switches into fully automatic mode. The former seems much more acceptable than the latter. In the latter case I wonder about the problem of ‘habit interference’. My own car, for instance, has the indicator controls on the right of the steering wheel, and the head- and side-light controls on the left. My partner’s car has the reverse. And we both make mistakes when driving the other’s car.

This small detail suggests, as do Skippon and Reed, that a great deal of attention will have to be paid to the controls of a car that can switch, or be switched, into automated mode. Indeed, some drivers might be too frightened to allow it! A colleague of mine has a car with automatic parking capabilities, but, he confesses, he has ‘never dared use it’!

James Hartley
Keele University