The perception of what artificial intelligence was capable of began to change when chess grandmaster and world champion Garry Kasparov lost to Deep Blue, IBM’s chess-playing program, in 1997. Deep Blue, it was felt, had breached the domain of a cerebral activity considered the exclusive realm of human intellect. This was not because of something technologically new: in the end, chess was felled by the brute force of faster computers and clever heuristics. But if chess is considered the game of kings, then the East Asian board game Go is the game of emperors.

Significantly more complex, requiring even more strategic thinking, and featuring an intricate interweaving of tactical and strategic components, it posed an even greater challenge to artificial intelligence. Go relies much more on pattern recognition and on the subtle evaluation of the general positions of the playing pieces. With roughly an order of magnitude more possible moves per turn than chess, any algorithm trying to evaluate all possible future moves was expected to fail.


Until the early 2000s, programs playing Go progressed slowly and could be beaten by amateurs. But this changed in 2006, with the introduction of two new techniques. The first was Monte Carlo tree search, an algorithm that, rather than attempting to examine all possible future moves, tests a sparse selection of them and combines their values in a sophisticated way to arrive at a better estimate of a move’s quality. The second was the (re)discovery of deep networks, a contemporary incarnation of neural networks that had been experimented with since the 1960s, but which now ran on cheaper, more powerful hardware and had huge amounts of data with which to train the learning algorithms.
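To give a flavour of the first of these techniques, the sketch below shows a minimal Monte Carlo tree search in Python. It is applied to a toy take-away game rather than to Go, and every name, constant and simplification in it is my own illustrative choice rather than part of any real Go engine:

```python
# A toy illustration of Monte Carlo tree search with the UCT selection rule.
# The game here is single-pile Nim (take 1-3 stones; whoever takes the last
# stone wins), chosen only to keep the example short -- it is not how a Go
# engine is built.
import math
import random


def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]


class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones        # stones left after `move` was played
        self.parent = parent
        self.move = move            # the move that led to this node
        self.children = []
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0             # from the viewpoint of the player who just moved

    def uct_child(self, c=1.4):
        # Balance exploiting good moves against exploring rarely tried ones.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))


def mcts(root_stones, iterations=3000):
    root = Node(root_stones)
    for _ in range(iterations):
        node, stones = root, root_stones

        # 1. Selection: descend through fully expanded nodes using UCT.
        while not node.untried and node.children:
            node = node.uct_child()
            stones = node.stones

        # 2. Expansion: add one previously untried move to the tree.
        if node.untried:
            move = node.untried.pop()
            stones -= move
            node.children.append(Node(stones, parent=node, move=move))
            node = node.children[-1]

        # 3. Simulation: play random moves to the end of the game.
        opponents_turn = True       # the node's player has just moved
        while stones > 0:
            stones -= random.choice(legal_moves(stones))
            opponents_turn = not opponents_turn
        # Whoever took the last stone won: if it is now the opponent's turn,
        # the winning move was made by the node's player.
        node_player_won = opponents_turn

        # 4. Backpropagation: credit the result up the tree, flipping the
        # perspective at every level.
        while node is not None:
            node.visits += 1
            node.wins += 1.0 if node_player_won else 0.0
            node_player_won = not node_player_won
            node = node.parent

    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda ch: ch.visits).move


if __name__ == "__main__":
    # From 10 stones the winning strategy is to leave a multiple of four,
    # so with enough iterations the search usually recommends taking 2.
    print(mcts(10))
```

The same select-expand-simulate-backpropagate loop is what the strongest Go programs scale up, with deep networks eventually guiding both the choice of moves to explore and the evaluation of positions.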

The combination of these techniques saw a drastic improvement in Go-playing programs, and ultimately Google DeepMind’s AlphaGo program beat Go world champion Lee Sedol in March 2016. Now that Go has fallen, where do we go from here?

The future of AI is in physical form

Following Kasparov’s defeat in 1997, scientists decided that the next challenge for AI was not to conquer another cerebral game. Rather, it needed to be physically embodied in the real world: football.


Football is easy for humans to pick up, but to have a humanoid robot running around a field on two legs, seeing and taking control of the ball, communicating under pressure with teammates, and doing all this mostly without falling over, was considered completely out of the question in 1997. Only a handful of laboratories were able to design a walking humanoid robot. Led by Hiroaki Kitano and Manuela Veloso, researchers that year set the ambitious goal of fielding, by 2050, a team of humanoid robots able to play a game of football against the world champion team according to FIFA rules, and win. And so the RoboCup competition was born.

The RoboCup tournament reached its 20th year in Leipzig this year. Its goal has always been to improve and challenge the capacity of artificial intelligence and robotics, not in the abstract but in the much more challenging form of physical robots that act and interact with others in real time. In the years since, many other organisations have recognised how such competitions boost technological progress.

The first RoboCup featured only wheeled robots and simulated 2D football leagues, but leagues for Sony’s four-legged AIBO robot dogs were soon introduced and, since 2003, humanoid leagues. In the beginning the humanoids’ game was quite limited: very shaky robots attempted quivering steps, and kicking the ball almost invariably caused the robot to fall. In recent years their ability has significantly improved, and many labs now boast five or six-a-side humanoid robot teams.

No ordinary ballgame

In order to push competitors on towards the goal of a real football match by 2050, the conditions are made harder every year. Last year the green carpet was replaced by artificial turf, and the goalposts and the ball were coloured white. This makes it harder for the robots to maintain stability and to recognise the goals and the ball. So if the robots seem less capable this year than the year before, it’s because the goalposts are moving.


The tasks involved in playing football, although much more intuitive to humans than chess or Go, are a major challenge for robots. Technical problems of hitherto unimaginable complexity have to be solved: timing a kick while running, identifying the ball against a glaring sun, running on wet grass, providing the robot with enough energy for 45 minutes’ play, and ensuring that the materials that go into constructing a robot do not disintegrate during a forceful game. Other problems to be solved will define important aspects of our life with robots in the future: when a robot collides with a human player, who can take how much damage? If humans commit fouls, may a robot foul back?

RoboCup offers up in miniature the problems we face as we head towards intelligent robots interacting with humans. It is not in the cerebral boardgames of chess or Go, but here on the pitch in the physical game of football that the frontline of life with intelligent robots is being carved out.

Daniel Polani, Professor of Artificial Intelligence, University of Hertfordshire.

This article first appeared on The Conversation.