
'The reality of technology never matches the promise'

Robert Hoffman's keynote talk, 'The Automation Age: A Challenge or an Opportunity?', delivered at the British Psychological Society's Division of Occupational Psychology Annual Conference 2018, focused on myths about the automation of work. Here, Andrew Clements (University of Bedfordshire) poses some questions.

16 February 2018

In his keynote, Dr Hoffman noted that automation performs well at fixed and specific tasks, whereas humans can perform well at dynamic tasks that require adaptation and flexibility. To that extent, automation does not replace entire jobs. Indeed, automation creates new and technically demanding jobs – someone has to build and fix the machines – even as some jobs may be lost. Further, new technology always creates problems, such as new forms of errors. Dr Hoffman’s talk focused on the relative capabilities and limitations of machines and humans, identifying the need for machines and humans to complement each other. I wanted to find out more about trends and hype in automation.

What key trends in automation over the next five or ten years should we be interested in?

I can’t really speak to people’s interests, because those are determined by material that is presented to them, as well as their own particular qualities. I can say that there are some trends that are likely to continue, that people might stay aware of, and perhaps be cautious about. One is a tendency for the technology to be hyped. The reality of the technology never matches the promise. Prognostication about technology, the grand things that are going to happen, almost always comes wrapped in terminology that is quite misleading, and often scientifically vacuous… claims that AI will take over the world or eliminate numerous jobs, all because a specialised machine won a chess match. 

The prognostications about AI taking over our lives… these often come from people with no background in computer science, and even if they did have a background in computer science, I’d be sceptical because computer scientists are just as likely to raise my eyebrows. My main message would be to remind people to think critically about what they’re being fed about the technology.

For example, there was a prediction that by 2045 we will be connecting our brains wirelessly to the internet…

A lot of people talk about that. To some extent that’s happening now – cochlear implants for example – but some people are taking that concept some steps into the realm of science fiction. That’s not to say that won’t happen, but I am dubious. If it does happen it perhaps will not be in the way we imagine now. I’m also concerned about the ethical implications, because if you hotwire the brain, there’s the potential for the human to become a slave to the machine. This is of course one of the enduring themes of science fiction, so there’s no surprise those two things dovetail. I’m sceptical, and cautious… but in addition to that, as is manifest in much of the hype, there’s a tendency for pundits to prognosticate… and the first thing I look for is [the pundit’s] scientific credibility. If they do have scientific credibility, then I’ll at least give it a listen.

But it’s remarkably easy for people to forget the lesson that’s been learned over and over and over again, and forgotten again, which is that predicting things is hard. Especially when it comes to human activity, it’s really hard to predict, because causation is complex and indeterminate. There are certain circumstances in which some individuals can do a pretty decent job of predicting things that are otherwise nebulous, dynamic, and hard to predict, and we have some idea of what those conditions are. One, of course, involves expertise, and the other involves lots and lots of practice, with feedback, at doing precisely that sort of task. And that’s hard to come by. We know that it’s possible. If humans were completely unable to predict the future, we’d probably still be living in trees.

Moving beyond the pundits, what do you think may be more likely to happen?

The technologies will get better, I hope, and more powerful, certainly, and that will probably be a good thing if they’re also understandable and usable. They have to be designed such that people can understand them quickly, use them effectively, know when to trust them and when not to trust them, how to trust them, how to use them, how to rely upon them – and how to interact with them in such a way as to make the computer better, with the technology designed in such a way as to help the human be a better human.

Thinking about the human-machine interaction, are there things that we should be doing now that we aren’t already, whether as psychologists or as a society?

Well, again critical thinking comes to mind. Just one example is neural nets. Those computational systems have nothing to do with nervous systems… well, let me rephrase. The only thing they have to do with nervous systems is that the people who named them, and who are building them, use the word ‘neurons’. There are elements and units within the layers in a neural net… they have nothing to do with actual neurons, and how neurons work. It’s just a metaphor. And I think it’s important to be aware of that and we should hold the feet of computer scientists to the fire, to stop using trendy jargon and misleading metaphorical terminology. It’s deeply misleading, and may lead to another ‘AI Winter’ in which the funding goes away because the technology was over-sold.
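[To make the metaphor point concrete, here is a minimal illustrative sketch – not drawn from Dr Hoffman's talk – of what a single 'unit' in a neural-net layer actually is: a weighted sum of its inputs passed through a simple squashing function. The weights and inputs below are arbitrary numbers chosen purely for illustration.]

```python
import math

def unit(inputs, weights, bias):
    """One 'neuron' in an artificial neural net: a weighted sum plus a bias,
    passed through a sigmoid squashing function. Arithmetic, not biology."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Example with three inputs and arbitrary weights, for illustration only.
print(unit([0.5, 0.1, 0.9], weights=[0.4, -0.2, 0.7], bias=0.1))
```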

We should look toward the technology as it gets more capable and powerful, to be learnable, usable, useful, all of those things, to help people become better people, at the same time that the people help the machines become better machines. I think that’s all quite possible, but not if we allow the trendy jargon and the hype to rule the day.

- Dr Robert R. Hoffman is a world leader in cognitive systems engineering and Human-Centered Computing. He is a Senior Member of the Association for the Advancement of Artificial Intelligence, a Senior Member of the Institute of Electrical and Electronics Engineers, a Fellow of the Association for Psychological Science, a Fellow of the Human Factors and Ergonomics Society, and a Fulbright Scholar.