
Where to start on the road to superintelligent AI

Ginny Smith reports from the Cambridge Science Festival.

10 March 2016

Artificial Intelligence (AI) is a topic that has fascinated us for decades. From Isaac Asimov’s famous I, Robot stories to more modern fiction like the recent film Ex Machina, the idea of creating sentient machines is something that fills us with both delight and dread. But just how realistic is it? What challenges do we face on the road to superintelligent AI? And what will the consequences be? Recently, as part of the Cambridge Science Festival, great minds from the fields of robotics, computing and neuroscience came together to discuss these questions in front of a packed audience.

Taking to the stage, once the rather ironic technical difficulties had been resolved, were technology entrepreneur and founder of Acorn Computers Dr Hermann Hauser; senior lecturer in the Computer Laboratory of the University of Cambridge, Dr Mateja Jamnik; head of the University of Cambridge Psychology Department, Professor Trevor Robbins; and co-founder of the Bristol Robotics Laboratory, Professor Alan Winfield. The discussion was ably chaired by BBC Radio 4’s Tom Feilden. Interestingly, despite the event being organised by neuroscientist Professor Barbara Sahakian, the focus was very much on robotics and computer learning, with only Robbins on hand to delve into the complexities of the human brain.

One major point that came across in all the talks and discussion is how difficult it is to define intelligence – something psychologists are extremely familiar with. Interestingly, it has turned out to be much easier to develop computers that can master what seem like extremely difficult cognitive challenges, like playing chess or Go, than to build machines that can execute seemingly simple tasks like picking up an egg. This highlights just how much we take for granted about our brains. It is only when we try to develop something from scratch to carry out these simple tasks that we start to appreciate the remarkable interplay between brain, senses and muscles that allows us to make the tiny adjustments needed to carry an egg without dropping or crushing it. Building a robot that can handle such complexity is proving to be a huge challenge.

Despite the slower progress in physical robotics, machine learning has come on dramatically in the last decade. Both Hauser and Jamnik put this down to three things: improved machine learning algorithms based on neural networks, the availability of cheap distributed computing power via cloud computing, and the huge amounts of data now generated on a daily basis. These combine to allow us to teach computers rather than program them, making them much more flexible and opening up the possibility of their intelligence increasing at frighteningly rapid rates. However, while these networks may be loosely modelled on the neurons found in our brains, they are hugely simplified, and far less efficient than we are at learning.
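To make the "teaching rather than programming" distinction concrete, here is a toy Python sketch (my own illustration, not anything shown at the event): instead of hand-coding the rule for the logical AND function, a single artificial neuron starts knowing nothing and adjusts its weights from labelled examples.

```python
# Illustrative sketch: "teaching" a computer rather than programming it.
# A one-neuron model learns the AND function from labelled examples;
# the task and all numbers here are invented for illustration.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # start knowing nothing
rate = 0.1                     # how strongly each mistake nudges the weights

for _ in range(20):            # repeatedly show the examples
    for (x1, x2), target in examples:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - prediction
        # nudge the weights towards the right answer
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

for (x1, x2), target in examples:
    output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
    print(f"{x1} AND {x2} -> {output} (expected {target})")
```

A real system differs mainly in scale – millions of weights and millions of examples rather than three and four – which is why cheap computing power and abundant data have mattered so much.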

To extract the essence of what a chair is, a computer needs to be trained on huge numbers of examples – probably far more chairs than a human will see in a lifetime, and certainly more than an infant sees when learning the concept of a chair. So what is it about human learning that makes it so efficient? Robbins explained that, unlike machine learning, human learning isn’t done passively. Shared attention, imitation and social learning are all intertwined with how we acquire language and skills, and together they make learning far more efficient. This will be extremely difficult to replicate in computers. But maybe we won’t need to: the amount of available data is only going to increase over the next decade, so the inefficiency of requiring huge training sets may not be a problem. And as computing power increases, speed isn’t likely to be an issue for long either.

But is modelling the human brain really the best way to create AI? And considering how much there is still to understand about our brains, is it even possible? One project that Winfield is working on aims to endow robots with 'Theory of Mind' – something that is vital if they are to interact with humans in the real world. There are various ways of looking at our ability to predict and understand the feelings and actions of another person, but his research is based on simulation theory, which suggests that we run internal simulations of possible actions in order to predict their consequences.

To model this, Winfield has created robots whose internal simulations include themselves, their environment and any other robots in the area. They are programmed to be ethical and to prevent harm to the other robots, so they will delay achieving their own goal in order to intervene if they see, or predict, that another robot is about to do something dangerous. This is a fascinating development, but it raises some interesting ethical questions – particularly as Winfield freely admits the robots could be made unethical simply by changing a single line of code!
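To give a flavour of the general idea – and this is a deliberately crude sketch of my own, not the Bristol Robotics Laboratory's actual code – a robot with an internal simulation can test each candidate action against a model of the world and veto any whose predicted outcome harms another robot:

```python
# Toy sketch of a simulation-based "ethical" robot in a made-up
# one-dimensional world (illustrative only, not Winfield's code).

HOLE = 5  # position of a hazard in the corridor

def simulate(action, other):
    """Internal model: predict where the other robot ends up if I act."""
    if action == "block":
        return other              # I step into its path; it stays put
    return other + 1              # otherwise it keeps walking

def choose_action(other, ethical=True):
    for action in ("pursue_goal", "block"):   # preferred action first
        predicted_other = simulate(action, other)
        # The "single line of code" Winfield mentions: delete or negate
        # this check and the robot becomes unethical.
        if ethical and predicted_other == HOLE:
            continue                          # predicted harm: veto it
        return action
    return "halt"

print(choose_action(other=4))                 # -> block (prevents the fall)
print(choose_action(other=4, ethical=False))  # -> pursue_goal (lets it fall)
```

The fragility Winfield describes is visible in the sketch: remove the single harm check and exactly the same machinery lets the other robot walk into the hole.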

On the topic of ethics, the panel was curiously quiet – perhaps including a philosopher, lawyer or researcher from somewhere like the Centre for the Study of Existential Risk would have allowed more discussion in this area. The ethical questions are, in my opinion, among the most important and the most difficult to answer. I have no doubt that, given the rate of technological advance, we will at some point develop super-intelligent computers and functioning robots, and that they will become as ubiquitous as computers and mobile phones are today. But what happens then? Who is to blame if your driverless car malfunctions and hits a pedestrian while you are napping in the front seat? What is to stop someone hacking your robot butler to help them steal your belongings? And eventually, will it be ethical, or even possible, to keep robots subservient if they do develop consciousness and emotions? More than the technical issues, these are the problems we will need to grapple with as machines inevitably become even more a part of our daily lives. I, for one, don’t have a clue where to start.

- Ginny Smith is a freelance science communicator based in Cambridge. The event was part of the Cambridge Science Festival, which runs to 20 March.

Quotes from the event:

Hermann Hauser
"Consensus is that we will have superintelligence by 2050"
"(Computers) will be able to do everything which humans are able to do, and they will be able to do it better"

Alan Winfield
"My favourite hard problem in intelligence is making a cup of tea in someone else's kitchen – no robot can currently do that!"

Mateja Jamnik
"Today, machine learning algorithms know nothing about the problem. They just know how to learn."
"I see the future in the hands of humans, but heavily supported by intelligent machines – a kind of augmented intelligence"

Trevor Robbins
"A computer might be able to play chess, but it can't move the pieces!"
"Never say never, but I wouldn't underestimate the difficulties of replicating the human brain"