The victory of a computer over one of the world’s strongest players of the game of Go has been hailed by many as a landmark event in artificial intelligence. But why? After all, computers have beaten us at games before, most notably in 1997 when the computer Deep Blue triumphed over chess grandmaster Garry Kasparov.

We can get a hint of why the Go victory is important, however, by looking at the difference between the companies behind these game-playing computers. Deep Blue was the product of IBM, which at the time was largely a hardware company. But the software that beat Go player Lee Sedol – AlphaGo – was created by DeepMind, a UK-based branch of Google that specialises in machine learning.


AlphaGo’s success wasn’t down to so-called “Moore’s law”, the observation that the number of transistors on a chip – and with it, roughly, computing power – doubles about every two years. Computers are still nowhere near powerful enough to calculate all the possible moves in Go, which is a far harder task than in chess. Instead, DeepMind’s work was based on carefully deploying new machine-learning methods and integrating them with more standard game-playing algorithms. Using vast amounts of data, AlphaGo has learnt how to focus its resources where they are most needed, and how to do a better job with those resources.
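To get a feel for the scale of the problem, here is a rough back-of-the-envelope comparison in Python. The branching factors and game lengths below are commonly quoted approximations, not exact figures:

    # Commonly quoted approximations: chess offers about 35 legal moves per
    # position over about 80 plies; Go offers about 250 moves over about 150.
    chess_tree = 35 ** 80
    go_tree = 250 ** 150
    print(f"chess game tree: roughly 10^{len(str(chess_tree)) - 1}")
    print(f"go game tree:    roughly 10^{len(str(go_tree)) - 1}")

This gives a game tree of roughly 10^123 positions for chess against roughly 10^359 for Go – far beyond anything brute force could ever explore.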

Deep learning

This is roughly how it works. At any point in the game, the algorithm has to consider the game tree, the theoretical diagram describing every possible move and countermove to any depth, in order to work out the best move to make next. This tree is far too large for any computer to search exhaustively, so various methods exist for making a reasonable decision based on just a section of it.
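As an illustration, here is a minimal sketch in Python of the general idea of searching only part of the tree. It is not DeepMind’s code (AlphaGo itself used a form of Monte Carlo tree search), and the evaluate function passed in is a hypothetical stand-in for whatever judges a position without searching deeper:

    def negamax(state, depth, legal_moves, apply_move, evaluate):
        # Return the best achievable score for the player to move.
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state)  # cut the tree off and judge the position
        best = float("-inf")
        for move in moves:
            child = apply_move(state, move)
            # The opponent's best outcome is our worst, hence the negation.
            best = max(best, -negamax(child, depth - 1,
                                      legal_moves, apply_move, evaluate))
        return best

The quality of the decision then depends almost entirely on how good that evaluation is – and that is where machine learning comes in.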

The use of neural networks allowed AlphaGo to assess a particular move without exploring its consequences too deeply. Neural networks are a class of learning algorithms that can be trained by being shown many examples of the required behaviour. In this case they were trained on examples drawn from millions of past matches, as well as on millions of games AlphaGo played against itself.
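One way to picture the role such a network plays is the following simplified sketch, in which value_net is a hypothetical stand-in for a trained network rather than AlphaGo’s real interface:

    def promising_moves(state, legal_moves, apply_move, value_net, top_k=3):
        # Score each candidate position with the learned network, keeping
        # only the most promising moves for deeper exploration.
        scored = [(value_net(apply_move(state, move)), move)
                  for move in legal_moves(state)]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [move for _, move in scored[:top_k]]

Instead of exploring every branch, the search spends its effort on the handful of moves the network already considers good.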


The finer details may be of interest only to a few experts, but the take-home message is important for everybody. Most of the machine-learning elements of AlphaGo’s software were rather general-purpose modules, meaning they weren’t designed specifically to play Go. In fact, some of them closely resemble the computer tools currently used for analysing images, and others are like the reinforcement-learning tools found in various game-playing programs.

Self-driving cars will bring AI to our streets. Ed and Eddie/Flickr, CC BY-SA

This means that we can expect more applications like this, in which machine-learning elements are combined in other ways or embedded into other types of software to give them a new advantage. That might mean more intelligent self-driving cars, or smarter digital personal assistants.

The past few years have been an exciting time for artificial intelligence, and there is more to come. But we should also consider for a moment the many concerns being voiced about its future. These relate not to AlphaGo specifically but to the general field of data-driven AI.


This kind of technology does not pose a threat to the existence of our species, but it does pose some challenges to our everyday lives. There is, of course, concern about artificial intelligence making human workers redundant. But we should also consider how our autonomy can be affected if we allow data-driven machines to make decisions that affect us, and look more closely at the technology that makes this possible.

Risk, not threat

Through the internet, we rely on a rather unified global data infrastructure for much of our communication and many of our transactions: from payments and purchases to access to news, education and entertainment. The artificial intelligence that powers this infrastructure means that it isn’t a passive medium but is instead looking back at us.

It is constantly trying to learn from our actions, to infer our intentions, and – when appropriate – to gently nudge us in some direction. Sometimes this is to encourage us to buy something, but work has also been done on technologies that try to persuade us to change our attitudes or to alter our emotions. This is the dominant business model of artificial intelligence on the internet: to make us click.


Having lots of very intelligent agents competing for our attention, and perhaps even for influence over our behaviour, is a scenario that should be carefully investigated. This goes beyond the basic web surveillance we have learnt about in the past few years and covers the possibility of machines inferring, predicting and perhaps even shaping our behaviour. Would we want to engage in conversation with a digital assistant in the future when the tool is as smart as AlphaGo – and is programmed to pursue its own goals?

If there is any risk coming from artificial intelligence today, it does not involve killer robots. It comes from our willingness to embed data-driven AI at the very centre of the internet we depend on so heavily, before we have fully understood its consequences for our privacy and autonomy. There won’t be robots chasing us down the street any time soon, but there might well be lots of online agents, each pursuing its own goals, observing us, inferring our aims and using that information to steer us. Not quite an existential threat, but still something to consider before it is too late.

Nello Cristianini, Professor of artificial intelligence, University of Bristol

This article was first published on The Conversation.