
Re: computer-go: Insight of a human



On Fri, 1 Sep 2000, Vincent Diepeveen wrote:

> Genetic algorithms & neural networks are basically doing random things.

I can't see this at all for neural networks - could you clarify what in
your opinion is so random about them?  I agree for genetic algorithms;
I use them as learning methods of last resort, when I can see no way to
obtain a gradient.
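To make the "last resort" point concrete, here is a minimal sketch (toy
objectives of my own invention, nothing from any actual go program):
gradient descent where a gradient exists, and a simple mutate-and-keep
evolutionary loop where it doesn't.

```python
import random

random.seed(1)

# Differentiable case: minimise f(w) = (w - 3)^2 using its exact gradient.
w = 0.0
for _ in range(100):
    grad = 2.0 * (w - 3.0)   # df/dw, available in closed form
    w -= 0.1 * grad          # plain gradient descent step
# w has converged very close to the optimum at 3.

# Non-differentiable case: a piecewise-constant loss whose gradient is
# zero almost everywhere, so gradient methods get no signal at all.
def step_loss(v):
    return abs(round(v) - 7)

# (1+1)-style evolutionary search: mutate, keep if not worse.
v, best = 0.0, step_loss(0.0)
for _ in range(1000):
    cand = v + random.gauss(0.0, 1.0)
    if step_loss(cand) <= best:
        v, best = cand, step_loss(cand)
# best has been driven down to the minimum.
```

The first loop exploits the gradient and converges in a few dozen steps;
the second has to blunder its way downhill by random mutation, which is
exactly why I reach for it only when nothing differentiable is on offer.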

> Learning is random in them. Their behaviour is real weak in game playing,
> except some very simplistic games where the complete domain can be tested.

Hmmm... the world's best backgammon program is TD-Gammon, a neural network.
Neural-network techniques have also been applied to chess, with very
respectable results.  Over the past decade, the state of the art has gone a
bit beyond the primitive algorithms and tic-tac-toe examples found in most
textbooks.

The central problem in any computer go program seems to me to be one of
representation: if we knew how to encode the high-level concepts well,
we'd be a lot further along.  Good representation should make
generalization easy.
Given a good representation that is (or can be made) parametric and
differentiable (in other words: a neural network), I see no reason why
one shouldn't get good mileage out of training it on a few truckloads of
game records.  The information is there in the data, albeit implicitly
- and a good representation should be the key to extracting it.
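To illustrate the shape of the idea, here is a minimal sketch using a toy
3x3 "board" with an invented expert rule (always play the first empty
point) standing in for real game records: the smallest parametric,
differentiable representation imaginable, trained by gradient descent on
position-to-move pairs.

```python
import math
import random

random.seed(0)
BOARD = 9  # a 3x3 toy board, flattened

def make_record():
    # Synthetic "game record": a random position plus the expert's reply.
    # Toy convention: the expert always plays the first empty point.
    pos = [random.choice((0, 1, -1)) for _ in range(BOARD)]
    if 0 not in pos:
        pos[0] = 0
    move = pos.index(0)
    # Represent the position by its empty-point indicators.
    feats = [1.0 if v == 0 else 0.0 for v in pos]
    return feats, move

records = [make_record() for _ in range(400)]

# One linear layer with a softmax over the board points -- parametric
# and differentiable, so game records become a gradient signal.
W = [[0.0] * BOARD for _ in range(BOARD)]  # W[move][feature]

def predict(feats):
    z = [sum(W[m][f] * feats[f] for f in range(BOARD)) for m in range(BOARD)]
    mx = max(z)
    e = [math.exp(v - mx) for v in z]
    total = sum(e)
    return [v / total for v in e]

# Stochastic gradient descent on the cross-entropy loss.
lr = 0.5
for epoch in range(50):
    for feats, move in records:
        p = predict(feats)
        for m in range(BOARD):
            err = p[m] - (1.0 if m == move else 0.0)
            for f in range(BOARD):
                W[m][f] -= lr * err * feats[f]

accuracy = sum(
    1 for x, y in records
    if max(range(BOARD), key=lambda m: predict(x)[m]) == y
) / len(records)
```

The point is only that once the representation is parametric and
differentiable, a pile of game records turns directly into gradients; a
representation adequate for real go would of course need vastly more
structure than a single linear layer.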

The problem with many machine learning approaches to go (my own past
attempts included) is that they start with a tabula rasa and expect the
learning algorithm to do the entire job.  This is about as realistic as
expecting a monkey to learn to play go, and should not be construed as
proof that these techniques cannot be very useful in their proper place.

- nic