![](http://files.turbosquid.com/Preview/Content_on_9_14_2004_15_44_09/martian.jpg3f7c99f3-763a-4711-807d-fb1a22d4765dLarge.jpg)
An uneasy situation, to be sure. But what if, on top of having to teach this visitor far from home how to play checkers from scratch, and eventually make him an expert player, the only information you were allowed to give him was where the game pieces could sit on the board and how they were allowed to move? After a series of games, you would tell him how many he had won, but not which ones.
Could you do it?
David B. Fogel faced this very challenge, albeit without the Martians and ray guns. Instead, Fogel aimed to have a computer learn the game of checkers in this same way. We all know how dense computers are -- they certainly won't do anything other than exactly what you tell them -- so the task remains a daunting one.
In Blondie24: Playing at the Edge of AI, Fogel describes how he used evolutionary computing algorithms to have a computer teach itself checkers. In the process, he manages to give a good introduction and general review of the state of artificial intelligence, to provide interesting insight into what it truly means for software to act intelligently, and to detail how he and his colleague Kumar Chellapilla managed to evolve a neural network that could play checkers competitively with the experts.
If you have studied AI at all in the past, then you probably remember spending a lot of time on subjects like decision trees, expert systems, and search tree algorithms such as A*. Indeed, topics like these seem to be at the forefront of the field, and capturing human knowledge and reusing it in an effective way constitutes, for many, the essence of artificial intelligence.
But does the use of our own knowledge really demonstrate intelligence? As Fogel points out, not really. These kinds of programs are simply acting like we do. They make no contribution to understanding what intelligence really is, nor how it develops in the first place.
Fogel offers an alternative. He believes that mimicking what nature has done so well is the key to having software learn for itself; that is, we should be focusing on evolution.
And that's exactly what Fogel uses to create a checkers program that can win against very high-ranking players. He sets up a neural network with a somewhat arbitrary configuration, using the piece placement on the squares of a checkerboard as inputs. Several networks are created with random weights between neurons; this becomes the first generation in the evolutionary process.
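To make that concrete, here's a rough sketch in Python of what such a setup might look like. None of the specifics come from the book -- the hidden-layer size, the population size, and the weight distribution are all placeholders -- but it illustrates the idea of a population of randomly weighted networks that take the 32 playable squares as inputs and spit out a single score.

```python
import numpy as np

BOARD_SQUARES = 32      # playable squares on an 8x8 checkerboard
HIDDEN = 40             # hidden-layer size: an arbitrary choice for illustration
POPULATION = 20         # networks per generation: also an assumed value

def random_network(rng):
    """One candidate evaluator, with weights drawn from a small random distribution."""
    return {
        "w1": rng.normal(0.0, 0.5, size=(HIDDEN, BOARD_SQUARES)),
        "b1": rng.normal(0.0, 0.5, size=HIDDEN),
        "w2": rng.normal(0.0, 0.5, size=HIDDEN),
        "b2": rng.normal(0.0, 0.5, size=()),
    }

def evaluate(net, board):
    """Score a board (a length-32 vector of piece codes) as a number in (-1, 1)."""
    hidden = np.tanh(net["w1"] @ board + net["b1"])
    return float(np.tanh(net["w2"] @ hidden + net["b2"]))

rng = np.random.default_rng(0)
first_generation = [random_network(rng) for _ in range(POPULATION)]
```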
The networks play against each other, knowing only the most basic rules: where pieces are allowed to move from particular squares and when the game is over. They don't know whether they win or lose any individual game; instead, they are assigned a final score for the series of games they have played. Then the half of the generation with the lowest scores is thrown out. The better half is copied and slightly mutated at random. This new set of neural networks becomes the next generation, and the process begins again.
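Continuing the sketch above, a single generational step might look something like this. The `play_series` helper is hypothetical -- it stands in for however the games are actually played and scored -- and the numbers are placeholders; the point is the score-select-copy-mutate cycle itself.

```python
import copy
import numpy as np

def next_generation(population, play_series, rng, games_each=5, sigma=0.05):
    """One select-copy-mutate cycle over a population of weight dictionaries."""
    # 1. Each network earns a total score over a series of games against
    #    randomly chosen opponents; it never sees individual win/loss results.
    scores = [0.0] * len(population)
    for i, net in enumerate(population):
        for _ in range(games_each):
            opponent = population[rng.integers(len(population))]
            scores[i] += play_series(net, opponent)

    # 2. Keep the better-scoring half, discard the rest.
    ranked = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
    survivors = [population[i] for i in ranked[: len(population) // 2]]

    # 3. Copy each survivor and nudge every weight with small Gaussian noise.
    children = []
    for parent in survivors:
        child = copy.deepcopy(parent)
        for key, value in child.items():
            child[key] = value + rng.normal(0.0, sigma, size=np.shape(value))
        children.append(child)

    # Survivors plus their mutated copies form the next generation.
    return survivors + children
```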
I'm definitely simplifying things here (for one thing, the neural networks don't play checkers, but rather evaluate a particular board configuration), but suffice it to say that these neural networks, after many generations of evolution, could actually play checkers very well. This is quite remarkable if you think about it -- just from random mutation and survival of the fittest, we start with something more or less random that knows nothing about checkers, and eventually get something that can beat some of the best. Nowhere have we instilled our own human expertise or knowledge into the program, the way the creators of Chinook did, or those of Deep Blue in the case of chess.
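If you're wondering how an evaluator-only network manages to "play" at all, the usual trick is to wrap it in a game-tree search that uses the network to score positions a few moves ahead. Here's a bare-bones sketch of that idea; `legal_moves` and `apply_move` are hypothetical stand-ins for the actual checkers rules, and nothing here is meant to reproduce Fogel's search exactly.

```python
def choose_move(net, board, player, legal_moves, apply_move, evaluate, depth=4):
    """Pick the move whose resulting position scores best for `player` under minimax.

    `player` is assumed to be +1 or -1, so the opponent is simply -player.
    """

    def minimax(b, d, maximizing):
        moves = legal_moves(b, player if maximizing else -player)
        if d == 0 or not moves:
            return evaluate(net, b)          # the network scores the leaf position
        values = [minimax(apply_move(b, m), d - 1, not maximizing) for m in moves]
        return max(values) if maximizing else min(values)

    return max(legal_moves(board, player),
               key=lambda m: minimax(apply_move(board, m), depth - 1, False))
```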
If you are the least bit curious about the details of how Fogel and Chellapilla achieved this, I highly recommend reading his book. I couldn't help feeling at least as enthralled with the progress of the artificial checkers player as I have with any fictional protagonist. Fogel does a wonderful job of providing the necessary background on the field, and he repeats the key points just often enough that the reader doesn't get lost.
More importantly, I hope you can share in the excitement surrounding the possibilities of evolutionary computing. Though it's been around for decades, I'm not sure that it's been used to anywhere near its potential. As hardware speeds up and solutions to complicated problems can be evolved more efficiently, we might find ourselves able to create something closer and closer to true intelligence.
If you ask me, this sounds a bit less frustrating (or dangerous) than trying to do the same with our extraterrestrial companions mentioned earlier...