Thursday, September 27, 2007

Blondie24: Playing at the Edge of AI

Imagine this scenario: You have recently befriended a visitor from Mars. Martians have no knowledge of human culture, let alone any experience with our board games. But somehow your new pal has heard of checkers and desperately wants to learn how to play. So much so, in fact, that you question his "friendship" as he demands success while waving around his ray gun menacingly.

An uneasy situation, to be sure. But what if, in addition to teaching the visitor far from home how to play checkers from scratch (and eventually making him an expert player), the only information you were allowed to give him was where the game pieces could sit on the board and how they were allowed to move? After a series of games, you would tell him how many games he had won, but not which ones.

Could you do it?

David B. Fogel faced this very same challenge, albeit without the Martians and ray guns. Instead, Fogel aimed to have a computer learn the game of checkers in this very same way. We all know how dense computers are -- they certainly won't do anything other than exactly what you tell them -- so the task is daunting nonetheless.

In Blondie24: Playing at the Edge of AI, Fogel describes how he used evolutionary computing algorithms to have a computer teach itself checkers. In the process, he manages to give a good introduction and general review of the state of artificial intelligence, to provide interesting insight into what it truly means for software to act intelligent, and to detail how he and his colleague Kumar Chellapilla managed to evolve a neural network that could play checkers competitively with the experts.

If you have studied AI at all in the past, then you probably remember spending a lot of time on subjects like decision trees, expert systems, and search tree algorithms such as A*. Indeed, topics like these seem to be at the forefront of the field, and capturing human knowledge and reusing it in an effective way constitutes, for many, the essence of artificial intelligence.

But does the use of our own knowledge really demonstrate intelligence? As Fogel points out, not really. These kinds of programs are simply acting like we do. They make no contribution to understanding what intelligence really is, nor how it develops in the first place.

Fogel offers an alternative. He believes that mimicking what nature has done so well is the key to having software learn for itself; that is, we should be focusing on evolution.

And that's exactly what Fogel uses to create a checkers program that can win against highly ranked players. He sets up a neural network with a somewhat arbitrary configuration, using the placement of pieces on the squares of a checkerboard as its inputs. Several networks are created with random weights between neurons; these become the first generation in the evolutionary process.
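
To make the input encoding concrete, here's a minimal sketch of one way to do it -- my own illustration, not Fogel's actual code. Checkers is played on 32 squares, and each square can be mapped to a number whose sign says which side owns the piece and whose magnitude says whether it's a regular checker or a king. (In the book, the king's value was itself a parameter that evolved; the 1.5 below is just a placeholder.)

```python
# One way to turn a checkers position into 32 network inputs.
# Sign = ownership (+ for "us", - for the opponent); magnitude = piece type.
# Fogel let the king value evolve; 1.5 here is only a placeholder.

KING = 1.5

ENCODING = {
    ".": 0.0,    # empty square
    "m": 1.0,    # our man
    "k": KING,   # our king
    "M": -1.0,   # opponent's man
    "K": -KING,  # opponent's king
}

def encode(board):
    """Map a 32-character board string onto the network's input vector."""
    assert len(board) == 32
    return [ENCODING[square] for square in board]

# A toy position: mostly empty, a few pieces on each side.
inputs = encode("mmmm" + "m..." + "...." + "..k." +
                "...." + ".M.." + "MMM." + "MMMM")
print(inputs)
```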

The networks play against each other, knowing only the most basic rules of how pieces can move from particular squares and when the game is over. They aren't told when they win or lose a game; instead, each is assigned a final score for the series of games it has played. The half of the generation with the lowest scores is then thrown out, and the better half is copied with slight random mutations. This new set of neural networks becomes the next generation, and the process begins again.
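
Here's a minimal sketch of that generational loop -- again my own illustration, not Fogel's code. The actual checkers playing is stubbed out with random scores so the sketch stays runnable, and the population size, mutation scale, and flat-weight-vector "networks" are simplifications I made up:

```python
import random

POP_SIZE = 16        # illustrative; even, so the halves split cleanly
N_WEIGHTS = 64       # stand-in for a real network's weight count

def make_network():
    """A 'network' here is just a flat vector of random weights."""
    return [random.gauss(0.0, 0.5) for _ in range(N_WEIGHTS)]

def mutate(weights, scale=0.05):
    """Copy a parent, nudging every weight by a little Gaussian noise."""
    return [w + random.gauss(0.0, scale) for w in weights]

def play_tournament(population):
    """Stub: assign each network a score for its series of games.

    In the real system, each network would play the others, earning
    points per game without ever being told which games it won.
    Random scores keep this sketch runnable.
    """
    return [random.uniform(0.0, 10.0) for _ in population]

population = [make_network() for _ in range(POP_SIZE)]

for generation in range(20):
    scores = play_tournament(population)
    # Rank the networks by total score and discard the weaker half...
    ranked = [net for _, net in sorted(zip(scores, population),
                                       key=lambda pair: pair[0],
                                       reverse=True)]
    survivors = ranked[:POP_SIZE // 2]
    # ...then refill the population with mutated copies of the survivors.
    population = survivors + [mutate(net) for net in survivors]
```

The key property is in those last two lines: nothing about checkers strategy ever enters the loop -- only scores, selection, and mutation.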

I'm definitely simplifying things here (for one thing, the neural networks don't play checkers, but rather evaluate a particular board configuration), but suffice it to say that these neural networks, after many generations of evolution, could actually play checkers very well. This is quite remarkable if you think about it -- just from random mutation and survival of the fittest, we start with something more or less random that knows nothing about checkers, and eventually get something that can beat some of the best. Nowhere have we instilled our own human expertise or knowledge into the program, as the creators of Chinook did for checkers, or those of Deep Blue in the case of chess.
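
Since the networks are evaluators rather than players, the move selection itself comes from ordinary lookahead: the program generates the legal successor positions, has the network score each one, and plays toward the best score (as I understand it from the book, the network sat at the leaves of a standard minimax search). Here's a one-ply sketch of the idea, with a trivial material counter standing in for the evolved network:

```python
# The evolved network only answers "how good does this board look?"
# The move choice comes from ordinary lookahead. This one-ply version
# uses a trivial material counter in place of the real evolved network.

def evaluate(board):
    """Stand-in evaluator: positive scores favor us."""
    values = {".": 0.0, "m": 1.0, "k": 1.5, "M": -1.0, "K": -1.5}
    return sum(values[square] for square in board)

def choose_move(successors):
    """Pick the move whose resulting position the evaluator likes best.
    (Fogel searched several plies deep; one ply shows the idea.)"""
    return max(successors, key=lambda pair: evaluate(pair[1]))

# Toy example: two candidate moves, the second keeps more material.
candidates = [
    ("a5-b6", "mmm." + "." * 27 + "M"),
    ("a5-b4", "mmmm" + "." * 27 + "M"),
]
best_move, _ = choose_move(candidates)
print(best_move)   # -> "a5-b4"
```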

If you are the least bit curious about the details of how Fogel and Chellapilla achieved this, I highly recommend reading the book. I couldn't help feeling at least as enthralled with the progress of the artificial checkers player as I have been with any fictional protagonist. Fogel does a wonderful job of providing the necessary background on the field, and repeats the key points just often enough that the reader doesn't get lost.

More importantly, I hope you can share in the excitement surrounding the possibilities of evolutionary computing. Though it's been around for decades, I'm not sure it's been used to anywhere near its potential. As hardware speeds up and complicated problems can be evolved more efficiently, we may find ourselves able to create something closer and closer to true intelligence.

If you ask me, this sounds a bit less frustrating (or dangerous) than trying to do the same with our extraterrestrial companions mentioned earlier...

9 comments:

Christian Muise said...

1st thought - slippery slope...

There are entire branches of AI that tackle problems humans haven't instilled the knowledge for. In fact, most of AI is directed at the goal of having the program do things that humans haven't told it how to do. The exception would be the most straightforward expert systems...

But consider the complex ones that crunch entailments and infer knowledge that humans haven't been able to figure out... a prime example is theorem-proving/reasoning AIs... in the ideal case it's an expert system that has predefined knowledge of known theorems, mathematics, etc., and knows how they can be used. You then ask it something like ?equal(P,NP) and wait for the answer ;)

Unreasonable? Probably for the given query, but it's already been done and has shown/verified some mathematical hypotheses. But then we get into the issue of proving algorithm correctness, which is something the Software Engineering research group deals with - I avoid it at all costs ;).

Gail Carmichael said...

I suppose I didn't convey in my post the idea that you laid out -- that we want to eventually have computers figure things out that we don't already know -- but really, that's what all this kind of research is about, right? If we can get a computer to play checkers without telling it anything except the basic rules, who's to say we can't give a computer some basic rules of logic and have it figure out how to solve proofs? Keep in mind, also, that this book was written in 2001. I'm sure much has been done since then to further these ideas. That's what makes the whole idea exciting to me.

So I'm not sure why you say it's a slippery slope...?

Gail Carmichael said...

I guess what you are saying is that more of AI is dedicated to stuff like what this book is about, rather than to expert systems, etc. I'd believe it for sure, though the media doesn't seem to care (think of all the articles that have come out about computers winning at chess, etc. - how many of those programs learned on their own?). You have to admit that, until recently, this has not been the case. Think of the attention systems like Deep Blue got. I bet a lot has happened since the publishing of that book.

Christian Muise said...

The slippery slope is getting caught up in the magic of evolutionary algorithms to the point that everything seems in reach ;). It's very easy to do, and the problem is that the real roadblocks in that path of AI aren't as easily seen / explained compared to other areas.

Certainly there have been proof-building AIs for quite some time (since before 2001). They're built around complex inference that respects known proof techniques and works on a knowledge base of mathematical facts.

It's very easy to come up with new mathematical equivalences (even non-intuitive ones), but the trick (so far unsolved) is to come up with new ones (unguided) that actually have an impact on what we do.

When a proof-solving AI stumbles upon a 3-page equivalence relation that no one cares about, it's hard to get excited ;). The biggest problem with these approaches (probably any proof-generating approach) is how you would express concepts like "induction" or "exchange method" or any other strong proof technique in any sort of expressive logic.

Gail Carmichael said...

"The slippery slope is getting caught up in the magic of evolutionary algorithms to the point that everything seems in reach"

Ok, got ya... Didn't mean to imply it.

And I think that proof building may not have been a very good example. Taken literally, you could say Prolog can do it - give it some facts and try to derive something. As you say, nobody needs to get excited about that. Also as you say, it's finding solutions to problems we didn't know we were looking for that's interesting, but not so easy to do. There's an awful lot I wouldn't know how to evolve even if I had the chance. To me it is certainly not the be-all and end-all for finding solutions to all problems.

Very cool area though, and I'm interested in seeing what this kind of learning (where the programs "discover" their own features of the problem) actually can do that we didn't expect.

Christian Muise said...

Not so much that it can't produce unexpected results - that's easy. It's even easy to produce mathematical proofs that no human has seen or realized, and that in most cases no human completely understands (even though they may be sound).

The problem is if that mess of a proof is simply useless to us - i.e., you look at it, work through what it's inferred, and say, "So what?"

The latter is most common in proof-building systems thus far. Queries for a proof (?equals(P,NP)) are hard because they require such a huge knowledge base of mathematics and inference rules - quite possibly some that don't yet exist.

Restricting the domain of mathematics to a manageable size just causes all queries to be trivial for humans (and thus pointless for us to ask computers :p).

Gail Carmichael said...

Right, it can produce unexpected results, BUT something like Prolog only starts with what we give it, so in a sense it's not creating anything new, just finding new uses for knowledge we already had. Are there other methods you can think of that don't start with knowledge we give them but that can do similar things? I'd definitely be interested in learning more about those.

I was thinking more about what kinds of problems you could approach the same way the author did checkers. For example, how can you teach a robot program to learn volleyball by only telling it the rules? Somehow that seems to be on a whole new level above board games. I wonder if the same approach could possibly work, at least for some aspects of the game...

Christian Muise said...

I think you're mixing up evolutionary AI with adaptive AI. Anything that learns (its memory changes over time based on whatever input/choices it makes) is adaptive AI. Prolog is just a language, not a type of AI (ANNs can be programmed in Prolog).

Anything always starts with what we give it, and it's rare that AI systems will not "learn" during the course of whatever they're doing. What you're probably keying in on from their approach is the hands-off "unassisted learning" that they did... a good question is how much we actually need to steer the AI for it to be successful.

Jason M. actually did research into trying to teach an AI tic-tac-toe without even telling it the rules... just saying "Bad AI" at the end of the game... forget what his results were though...

Gail Carmichael said...

[For the outside reader, you may have noticed that Haz and I know each other. In fact, we did our undergrad together. You may also have noticed that Haz is far more knowledgeable in the field than I am -- he's our AI guy, and I've taken just one class and read this book. So needless to say, I'm using this comment banter as a chance to learn and discuss :)]

"I think you're mixing up evolutionary AI with that of adaptive AI"

This is possible; I haven't been making any distinction in my mind at all.

"Prolog is just a language"

So true, and I probably held on to that example too long. It was just my way of expressing the idea of inputting what we know and using easily programmable logic deductions to get from one place to another without any actual "learning." I kind of saw, possibly unfairly, a lot of the machine learning techniques as only slightly more advanced.

About the tic-tac-toe thing, the authors of this book did the same thing, using the same sort of technique as they did with checkers. I wonder if Jason did that too - set up a neural network with arbitrary neurons, use it to evaluate moves, and evolve it to get better...?
