Jonathan De Vita studied computer science at Lancaster University, gaining a BSc (Hons) degree. As part of his studies, he researched the field of AI and coding using specialised programming languages. This article will look at computer chess, exploring the many attempts by computer scientists to create chess-playing machines throughout history.
Encompassing both hardware and software, computer chess enables chess players to practise in the absence of human opponents. Beyond entertainment, computer chess also offers opportunities for analysis and training. Computer chess applications hosted on supercomputers operate at the highest echelons of the game, playing at the level of a chess grandmaster. Computer chess is also available on smartphones and as standalone chess-playing machines, with GNU Chess, Leela Chess Zero, Fruit, Stockfish and other free, open-source applications available across various platforms today.
Whether implemented in hardware or software, computer chess applications rely on different strategies from those used by human players. Chess engines choose their moves by building trees of candidate move sequences, then searching and evaluating those trees with heuristic methods. Because modern computers can process hundreds of thousands of nodes per second, this approach has proved incredibly effective.
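The core of the approach can be illustrated with a short sketch. The code below is a minimal, illustrative negamax search with alpha-beta pruning and a bare material-count evaluation, not the implementation used by any particular engine; it assumes the third-party python-chess package for board representation and move generation, and real engines add far richer evaluation, move ordering and search extensions.

```python
import chess

# Nominal piece values in centipawns; the king is excluded from material counts.
PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}


def evaluate(board: chess.Board) -> int:
    """Material balance from the point of view of the side to move."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score


def negamax(board: chess.Board, depth: int, alpha: int, beta: int) -> int:
    """Search the move tree to the given depth, pruning branches that
    cannot affect the final choice (alpha-beta pruning)."""
    if board.is_checkmate():
        return -100_000                      # the side to move has been mated
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -1_000_000
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1, -beta, -alpha)
        board.pop()
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                    # the opponent already has a better option elsewhere
            break
    return best


def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Return the legal move with the highest search score."""
    best, best_score = None, -1_000_000
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1, -1_000_000, 1_000_000)
        board.pop()
        if score > best_score:
            best, best_score = move, score
    return best


if __name__ == "__main__":
    print(best_move(chess.Board()))          # prints the chosen move from the starting position
```

Even this toy searcher shows why raw node throughput matters: each extra ply of depth multiplies the number of positions to evaluate by roughly the number of legal moves available.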
The first chess programs arose in the 1950s, relatively early in the vacuum-tube computer era. These early programs were very rudimentary, performing so poorly that even a novice human player could beat them. By 1997, however, chess engines running on specialised hardware and supercomputers had become far more sophisticated and were eventually able to defeat even the best human players. By 2006, desktop PC programs had achieved the same capabilities, leading McGill University Professor of Computer Science Monty Newborn to declare: ‘the science has been done.’ Yet, owing to the game’s vast number of potential variations, solving chess remains beyond the reach of modern computers.
Technically, chess is a turn-based, two-player, adversarial board game. Like draughts, noughts-and-crosses, Go and Connect Four, it is a game of perfect information rather than a game of chance.
The world’s first working chess program was written at Los Alamos in 1956 for the MANIAC I computer. Its resources were so limited that play was confined to a six-by-six board with no bishops, a variant jokingly dubbed ‘anti-clerical chess’. Despite its rudimentary nature, the program made history by beating a human opponent for the first time.
In the late 1950s, Arthur Samuel’s checkers (draughts) program demonstrated a self-improving learning strategy that was conceptually similar to some recent machine learning advances. Since the advent of microcomputers, a growing number of programmers have turned their attention to computer chess, developing transposition tables and other powerful computational techniques.
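To give a flavour of one such technique, the sketch below shows a simplified transposition table: a cache, keyed by a Zobrist hash of the position, that lets an engine reuse the result of a position it has already searched when the same position is reached again via a different move order. The piece encoding and replacement policy shown here are illustrative assumptions rather than any specific engine’s scheme.

```python
import random

random.seed(42)  # fixed seed so the keys are reproducible in this sketch

# Zobrist hashing: one random 64-bit number per (piece type, colour, square)
# combination, plus one for the side to move. XOR-ing together the numbers
# for the pieces on the board yields a near-unique key for the position.
ZOBRIST_PIECE = [[[random.getrandbits(64) for _ in range(64)]
                  for _ in range(2)]
                 for _ in range(6)]
ZOBRIST_SIDE_TO_MOVE = random.getrandbits(64)


def zobrist_hash(piece_map: dict, white_to_move: bool) -> int:
    """piece_map maps a square index (0-63) to (piece type 0-5, colour 0-1)."""
    key = ZOBRIST_SIDE_TO_MOVE if white_to_move else 0
    for square, (piece_type, colour) in piece_map.items():
        key ^= ZOBRIST_PIECE[piece_type][colour][square]
    return key


class TranspositionTable:
    """Caches (depth, score) per position key so a transposition does not
    have to be searched again to the same or a shallower depth."""

    def __init__(self) -> None:
        self._entries = {}

    def probe(self, key: int, depth: int):
        entry = self._entries.get(key)
        if entry is not None and entry[0] >= depth:
            return entry[1]          # the stored result is deep enough to reuse
        return None

    def store(self, key: int, depth: int, score: int) -> None:
        self._entries[key] = (depth, score)
```

In the search sketch shown earlier, the engine would probe the table before looping over moves and store its result just before returning. Production engines also record the bound type (exact, lower or upper) and the best move found, and update the hash incrementally as moves are made and unmade.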
In 1997, Deep Blue, a dedicated chess-playing machine, made history as the first computer to defeat a reigning world champion in a match, beating Garry Kasparov. A decade later, in 2007, the team behind the Chinook program ‘solved’ the game of checkers by proving that perfect play by both opponents results in a draw.
Like other sophisticated computer programs, such as operating systems and language compilers, computer chess programs are incredibly complex. Subtle interactions between components spread across the entire program can produce unfortunate and unforeseen behaviours, leaving opponents and audiences alike wondering whether the computer’s curious move choice is a bug or a yet-to-be-appreciated act of genius.
By their nature, chess programs must be both fast and exact, demanding the accuracy of a compiler and the low-latency responsiveness of an operating system. Achieving both at once is a tall order for even the most talented programmer. Nevertheless, AI expert Andrew Lea FBCS contends that every serious programmer should write a chess engine, mini operating system, compiler or program of similar complexity at least once in their career. After all, as he points out, such a project provides valuable experience in project management, advanced algorithms, complex data structures, insightful heuristics, accurate programming and the other techniques necessary to become a master programmer.