Rise of the Machines
The fascination with machines that can outperform us has been around for a long time.
In 1770 the entrepreneur Wolfgang von Kempelen created what appeared to be a chess-playing machine: a large box upon which a chess board sat, the pieces moving in response to a human opponent, seemingly driven by mysterious machinery hidden in the box beneath the board. This was the famous Mechanical Turk, the world’s first self-proclaimed chess-playing machine. The Turk was in fact a mechanical illusion: a human chess master hidden inside operated the machine. Even so, it set in motion the quest to build a genuine chess-playing machine.
In 1948 Alan Turing, the man largely responsible for cracking the Enigma machine in WWII, began writing a chess program for a computer that did not yet exist. By 1950 the programme was completed and dubbed the Turochamp. In 1952 he tried to implement it on a Ferranti Mark 1, but the computer lacked the power to execute the programme. Chess was to become the test for artificial intelligence: could a computer play it?
From the early 1960s chess-playing programmes started to appear, but their strength was limited and they were no match for the best players.
In 1968, after hearing researchers predict that a computer would defeat the world chess champion within ten years, David Levy made a famous bet with four AI experts, ultimately totalling £1,250, that no computer program would win a chess match against him within ten years. In 1973, he wrote:
Clearly, I shall win my … bet in 1978, and I would still win if the period were to be extended for another ten years. Prompted by the lack of conceptual progress over more than two decades, I am tempted to speculate that a computer program will not gain the title of International Master before the turn of the century and that the idea of an electronic world champion belongs only in the pages of a science fiction book.
The final match Levy needed to win the bet was played in August and September 1978 at the Canadian National Exhibition, against Chess 4.7, the successor to Chess 4.5. Although Levy won 4.5-1.5, he lost a game and observed: ‘My opponent was very, very much stronger than I had thought possible when I started the bet. Now nothing would surprise me very much.’
Eighteen years later the World Chess Champion, Garry Kasparov, lost a game to Deep Blue, and although he recovered and won the match, the following year he lost perhaps the most famous match in the history of the game to an updated version of the program.
When Kasparov’s successor, Vladimir Kramnik, lost a match against Deep Fritz in 2006, it was clear that the best machines were now vastly superior to their human opponents.
The current world champion, Magnus Carlsen, has a rating of 2857, but he would be no match for the strongest engines: Houdini (3277), Stockfish (3318) and Komodo (3340).
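The size of that gap can be made concrete with the standard Elo expected-score formula (a sketch for illustration; engine ratings come from engine-only rating lists and are not strictly comparable with human FIDE ratings):

```python
# Standard Elo expected score for player A against player B.
def expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Carlsen (2857) against Stockfish at its list rating of 3318:
print(round(expected_score(2857, 3318), 3))  # roughly 0.066
```

A 461-point gap gives Carlsen an expected score of about 0.066 points per game, i.e. roughly one draw every seven or eight games and almost no wins.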
Chess computers have their own world championship, but in draughts the program Chinook won the right to play in the human World Championship by finishing second to Marion Tinsley in the US Nationals in 1990. Tinsley, world champion from 1955 to 1958 and from 1975 to 1991 (he withdrew from championship play between 1958 and 1975), lost only seven games in his 45-year career. When he resigned his title in protest, the American Checkers Federation and the English Draughts Association created a new title, the Man vs. Machine World Championship, which Tinsley won with four wins to Chinook’s two, with 33 draws.
In a rematch in 1994, Chinook was declared the Man-Machine World Champion when Tinsley withdrew after six drawn games due to pancreatic cancer.
In 1995, Chinook defended its man-machine title against Don Lafferty in a 32-game match, winning 1-0 with 31 draws. After the match, Jonathan Schaeffer decided not to let Chinook compete any more, but instead to try to solve the game. At the time Chinook was rated at 2814.
There is a perception that Chinook has ‘solved’ draughts, but that is not correct. Chinook is unbeatable in 28 of the 156 sound 3-move ballots that are used to determine the opening moves of a game, but theoretically vulnerable in the remaining 128.
In a match between Chinook and a genuine oracle (one armed with the 24-piece, perfect-play databases), Chinook could therefore lose.
Despite its simple rules, Go is vastly more complex than chess, with more possible positions than there are atoms in the visible universe. Compared to chess, Go has a larger board with more scope for play, longer games, and, on average, many more alternatives to consider per move.
In 1965 mathematician I. J. Good wrote:
‘Go on a computer? – In order to programme a computer to play a reasonable game of Go, rather than merely a legal game – it is necessary to formalise the principles of good strategy, or to design a learning programme. The principles are more qualitative and mysterious than in chess, and depend more on judgment. So I think it will be even more difficult to programme a computer to play a reasonable game of Go than of chess.’
However, in March this year AlphaGo, a program developed by Google DeepMind in London, became the first computer Go program to beat a 9-dan professional human player without handicaps on a full-sized 19×19 board, defeating Lee Sedol 4-1 in a five-game match.
The situation in bridge is somewhat different.
In his book, Bridge, My Way, published in 1992, Zia Mahmood describes how he offered a £1,000,000 bet that no four-person team of his choosing would be beaten by a computer.
A few years later the program GIB, the brainchild of American computer scientist Matthew Ginsberg, proved capable of expert declarer play in numerous tests, and in 1996 Zia withdrew his bet. Two years later, GIB became the world champion in computer bridge, and went on to finish 12th in a 1998 Par Contest in declarer play involving 34 of the world’s best humans.
However, in 1999 Zia beat seven computer programs, including GIB, in an individual round robin match staged in London.
The publicity generated by that match stimulated interest and has resulted in the development of stronger bridge-playing programs, such as the ten-time World Computer Champion Jack and Wbridge5. A series of articles published in 2005 and 2006 in the Dutch bridge magazine IMP describes matches between Jack and seven top Dutch pairs, including a Bermuda Bowl winner and two European champions. Over 196 boards Jack defeated three of the seven pairs (including the European champions), and overall the program lost by only a small margin, 359-385 IMPs.
Irrespective of the results of computers against humans in tournament bridge, computer bridge has already changed one aspect of the game.
Commercially available double-dummy programs such as Deep Finesse can solve bridge problems, typically within a second. These days, editors of books and magazines no longer rely solely on humans to analyse bridge problems before publication, and every set of deals from tournament play shows the result of the program’s analysis.
Nevertheless, in comparison with chess, draughts and Go, computer bridge has a long way to go, and human players, such as the stars competing in the 1st Yeh Online Bridge World Cup, still reign supreme. The question of whether bridge-playing programs will reach world-class level in the foreseeable future is not easy to answer.