Chess Hero to Chess Zero: The Incredible Story of the Search for Intelligence in the World's Smartest Game

The greatest irony of the current crop of neural-net-based chess engines like AlphaZero and Leela Chess Zero is that their rise to fame was never supposed to be hardware dependent, or linked to the size of a distributed peer-to-peer computing network. It was supposed to be the elevation of a fundamentally better understanding of how our brains work, translated into silicon, breathing life into the most insignificant of hardware configurations and turning them all into gods. It was not a race to zero.

Chess has always been the perfect theatre for AI exploration. Games like noughts and crosses and draughts seem trivial in comparison. Go has a much larger game tree, but it is highly homogenized, with a high degree of symmetry and with every counter being equivalent: think of it as a massive noughts and crosses board, an astronomically complex game tree but with nothing like the variety of patterns and plays, not as rich and varied as chess. Go always seemed on the cusp of being unsolvable, impervious to any form of brute force and canned heuristics. Chess lay on the cusp between solvability by knowledge and solvability by brute force, so it was always going to be a favourite of the early AI scientists and engineers.

Chess is the touchstone of human intelligence because its variety is a good allegory for life. The chess pieces are individuals, yet they act as a unit, like a family, a group, or a country, each working for the good of the whole.

Credit: Lawrence Cole - Adversarial Search (game playing search)

I suppose one ought to say something about two giants of early AI, Claude Shannon and Alan Turing. While Shannon, an electronic and electrical engineer, was creating information theory, estimating the Shannon number (the game-tree complexity of chess) and experimenting with an electromechanical mouse in a maze, and Turing was designing the first chess program on paper, the world was a very different place than it is now. AI was more focused on defining the problem space and executing human-like actions within it. It was the golden age of Functionalism, the concept that intelligence can be completely encapsulated and defined by a series of discrete actions equivalent to the mental and functional states of whatever intellectual task or game is being undertaken.

I was an Electronic Engineering undergraduate, long after the end of the AI winter but some time before the summer of AI, at London's Imperial College, the same august institution where Matthew Lai, the brains behind DeepMind's AlphaZero, studied and published his paper on Giraffe, a chess engine based on a deep learning algorithm.

At Imperial, AI was a hot subject, and undergraduates from Elec Eng, Computer Science, Physics and even Chem Eng and Mech Eng would pile into a lecture hall at the merest mention of a lecture on any AI-related subject. It was at one such lecture that I first heard Professor Igor Aleksander, emeritus professor of Neural Systems Engineering in the Department of Electrical and Electronic Engineering, talking about Qualia. Professor Aleksander had a wry smile on his face as he discussed the delusion of AI without an understanding of Qualia. He talked about how qualia evaded definition and substantiation, and how it was simply ignored by some AI experts.

If only it were that simple. Professor Aleksander, who was also my professor at the time, was a neural network engineering pioneer who created the world's first independent pattern recognition neural network. He was a trailblazer for AI in the 1980s and continued to be until his retirement in 2002. I remember that in that lecture he spoke of the imperative that this exotic quality of true AI systems called Qualia be taken seriously: "Do mind Qualia." I mind Qualia, and have minded it since long before Demis Hassabis and his crew redefined AI as a general intelligence algorithm in a non-cooperative game space using reinforcement learning. I am certain that is not what real AI is, and this is confirmed by DeepMind's lack of progress on general intelligence in real-world co-operative game spaces, as seen in the AI learning environment for the card game Hanabi.

Credit: Jeremy Wainwright from his Last Lecture.

Functionalism is the battleground for Artificial Intelligence. Essentially, Functionalism + Qualia = Real AI.

Credit: Wikibooks - Consciousness Studies

Does a chess engine or robot feel conscious of the brilliant attack it is launching?

What is AlphaZero really doing when it sacrifices pieces for those bewilderingly complex positions it always seems to end up winning? Is it actually applying real AI, or, as Professor Aleksander would ask, is it exhibiting Qualia? Or is it simply following pre-qualified self-play with its tower of residual blocks feeding dual policy and value outputs? The difference between the two is like night on Earth and day on a planet a trillion light years away.
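As a point of reference for what "dual policy and value outputs" means mechanically, here is a minimal sketch, not DeepMind's code, of the kind of architecture the AlphaZero paper describes: a tower of residual blocks feeding a policy head and a value head. The input plane count, channel width, block depth and move count below are illustrative placeholders rather than the published configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection

class PolicyValueNet(nn.Module):
    # in_planes / channels / blocks / n_moves are illustrative, not the published sizes.
    def __init__(self, in_planes=18, channels=64, blocks=4, n_moves=4672):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_planes, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU())
        self.tower = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        # Policy head: logits over a fixed move encoding.
        self.policy = nn.Sequential(
            nn.Conv2d(channels, 2, 1), nn.Flatten(),
            nn.Linear(2 * 8 * 8, n_moves))
        # Value head: a scalar in [-1, 1] estimating the game outcome.
        self.value = nn.Sequential(
            nn.Conv2d(channels, 1, 1), nn.Flatten(),
            nn.Linear(8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh())

    def forward(self, board_planes):
        features = self.tower(self.stem(board_planes))
        return self.policy(features), self.value(features)

# Shape check on a dummy batch of one board.
policy_logits, value = PolicyValueNet()(torch.zeros(1, 18, 8, 8))
```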

Credit: VentureBeat.com - DeepMind’s AlphaZero beats state-of-the-art chess and shogi game engines

So we come back from Zero, through 360 degrees, to Zero again, one revolution that changes the state of Qualia. A human grandmaster uses a highly refined neural net to decipher the complexity on the chess board; a brute-force engine augmented with chess heuristics, like Stockfish, evaluates millions of positions. AlphaZero lies somewhere in the middle and appears to be vastly superior to both the human and the conventional chess engines. How would AlphaZero fare against a tablebase? Could it infer the perfect sequences of moves in the Nalimov and Lomonosov tables? But wait: isn't AlphaZero basically self-playing through the chess game tree and then applying a self-built evaluation function to make its play match that game tree? What is the point of this? Why don't we just solve the chess game tree and be done with it all?
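To make the tablebase comparison concrete, here is a hedged sketch using the python-chess library. The "./syzygy" directory is a placeholder for wherever the downloaded Syzygy table files actually live; without them the probe calls will fail. A tablebase returns a proven win/draw/loss verdict and a distance-to-zeroing count, the ground truth against which any learned evaluation function would have to be measured.

```python
import chess
import chess.syzygy

# A 3-man endgame (K+R vs K), well within any published tablebase set.
board = chess.Board("8/8/8/8/8/4k3/8/R3K3 w - - 0 1")

with chess.syzygy.open_tablebase("./syzygy") as tablebase:  # placeholder path
    wdl = tablebase.probe_wdl(board)  # 2 = win, 0 = draw, -2 = loss, for the side to move
    dtz = tablebase.probe_dtz(board)  # distance to a zeroing move under the 50-move rule
    print(f"Tablebase verdict: wdl={wdl}, dtz={dtz}")
```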

Credit: Tuomas Sandholm - Algorithms for solving sequential (zero-sum) games

Let us all be completely intellectually honest about this. DeepMind have produced a solution to the problem of the chess game tree (and of zero-sum game trees in general) that can approximate the end-node solution from any initial position. But what is this? Is it intelligence at all, or just autonomous statistical analysis and inference over decision trees, with action selection by MCTS (Monte Carlo Tree Search)? Isn't it just functionalism for the sake of functionalism? Can Boston Dynamics likewise argue that their robots are conscious, or even have augmented artificial intelligence, because they have been programmed to leap about in light forest undergrowth like mindless synthetic humans and artificial dogs, avoiding trees and climbing slopes to the delight of their enterprising but functionalist creators?
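For readers who have not met it, the MCTS loop itself is mechanically simple: select, expand, simulate, backpropagate. The sketch below is a generic illustration of plain UCT rather than DeepMind's implementation; to stay self-contained it plays the toy game of Nim (take one to three stones, taking the last stone wins) instead of chess, and where AlphaZero would consult its neural net this version simply plays random moves.

```python
import math
import random

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player  # state: stones left, player to move
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def untried_moves(self):
        tried = {child.move for child in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

    def uct_child(self, c=1.4):
        # Standard UCT: exploit average win rate, explore under-visited children.
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    # Random playout to the end; the player who takes the last stone wins.
    if stones == 0:
        return 1 - player  # the previous player already took the last stone
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player
        player = 1 - player

def mcts_move(root_stones, root_player, iterations=3000):
    root = Node(root_stones, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes by UCT.
        while not node.untried_moves() and node.children:
            node = node.uct_child()
        # 2. Expansion: add one unexplored child, if the node is not terminal.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, 1 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new node.
        winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit each node to the player who moved into it.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

# With 10 stones the winning strategy is to leave a multiple of 4, i.e. take 2.
print(mcts_move(10, 0))
```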

I wrote a paper in which I hypothesized that a perfect chess machine based on a 32-man tablebase, playing perfect moves, would play "imperfectly" when handed a losing non-initial position: it would regard every losing move as equally losing, and, not understanding the human struggle with error, it would never play a challenging losing move to prolong the game and perhaps create chances of overturning the result to a draw or better.
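That hypothesis can be made concrete with another hedged python-chess sketch (again with a placeholder "./syzygy" directory and the Syzygy files assumed to be present). To a pure win/draw/loss oracle every move in a lost position carries the same value; only an added tiebreak, here sorting by the distance-to-zeroing (DTZ) count, recovers the human-flavoured notion of the losing move that resists longest.

```python
import chess
import chess.syzygy

def rank_losing_moves(board, tablebase):
    scored = []
    for move in board.legal_moves:
        board.push(move)
        wdl = -tablebase.probe_wdl(board)  # flip sign back to the mover's perspective
        dtz = abs(tablebase.probe_dtz(board))
        board.pop()
        scored.append((move, wdl, dtz))
    # To the oracle every move with wdl == -2 is simply "lost";
    # sorting by dtz is the extra, human-flavoured notion of resistance.
    return sorted(scored, key=lambda t: (t[1], t[2]), reverse=True)

# Bare king against king and queen: every legal black move is losing.
lost_position = chess.Board("8/8/8/8/3k4/8/8/4K2Q b - - 0 1")

with chess.syzygy.open_tablebase("./syzygy") as tablebase:  # placeholder path
    for move, wdl, dtz in rank_losing_moves(lost_position, tablebase):
        print(move.uci(), wdl, dtz)
```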

That is the essence of the paradox at the heart of AI. We build domain-specific autonomous expert systems and assume that a general-purpose, generally intelligent, real AI system will simply be a scaled-up version of them. It is not, and it never will be. Why? Because the world is about the struggle with error, and it is the awareness of error, and of the game-tree sub-optimal solution, that sets real AI apart from AI by any means.

ChessHot
United States