
Re: computer-go: Neural Nets: suggesting and evaluating



On Thursday, August 7, 2003, at 08:07 AM, Markus Enzenberger wrote:

> On Thursday 07 August 2003 05:29, Darren Cook wrote:
> > I'd be really interested to see the results of such a
> > program. I believe gnugo can be set to output tactical
> > search results from an input position, so making the
> > training data may not actually be that hard.
>
> NeuroGo uses tactical search results as an input, although it
> reads only ladders (with 3 liberties allowed at the target
> block until depth 2), and this is indeed an important input
> feature.

> As Nici pointed out, it slows down processing the position,
> but reduces the number of games necessary for training.
> Unfortunately I cannot precalculate the input, since I found
> learning from self-played games by far superior to learning
> from master games.
Really? My sense from the literature was that most people had found the opposite. We're running some experiments on this now, so we should be able to add another data point to this question. I expect that the best results will come from watching master games to get a rough idea of what moves are good (e.g., don't play on the first line) and then tuning with self-play.
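
For what it's worth, here is the rough shape of the recipe I have in mind, as a sketch only: the linear softmax policy, the REINFORCE-style self-play update, and all of the names below are illustrative choices, not anyone's actual training code.

import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_MOVES = 81 * 5, 81    # e.g. 9x9 board, 5 features per point
W = 0.01 * rng.standard_normal((N_FEATURES, N_MOVES))

def move_probs(x):
    """Softmax move distribution for a position's feature vector x."""
    z = x @ W
    z = z - z.max()                  # for numerical stability
    p = np.exp(z)
    return p / p.sum()

def policy_gradient(x, move):
    """Gradient of -log p(move | x) with respect to W."""
    p = move_probs(x)
    g = np.outer(x, p)
    g[:, move] -= x
    return g

# Phase 1: supervised pretraining on (position, master move) pairs,
# i.e., a cross-entropy step toward the move the master actually played.
def pretrain_step(x, master_move, lr=0.1):
    global W
    W -= lr * policy_gradient(x, master_move)

# Phase 2: self-play fine-tuning, REINFORCE-style: push the winner's
# moves up and the loser's moves down. z is +1 or -1 for the mover.
def selfplay_step(x, played_move, z, lr=0.01):
    global W
    W -= lr * z * policy_gradient(x, played_move)

x = rng.random(N_FEATURES)              # stand-in feature vector
pretrain_step(x, master_move=40)        # one labeled master position
selfplay_step(x, played_move=40, z=+1)  # one self-play move by the winner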

Our current input features are, for each point on the board:

Is it occupied by a black stone?

Is it occupied by a white stone?

Is it unconditionally alive (1.0), unconditionally dead (inside an enemy eye, 0.0), or neither (0.5), according to Benson's algorithm?

How many liberties does the chain at this point have? (This is squashed into the range [0, 1] by the function 1 - (1/x).)

How many stones are in the chain at this point? (Also squashed.)

Can the chain at this point be captured in a ladder? (This is a narrow search in which the attacker fills the chain's liberties and the defender extends into them; the group escapes the ladder if it ever reaches three liberties.) A rough sketch of how features like these might be computed follows the list.
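
To make the list concrete, here is a minimal Python sketch of the per-point machinery, including the 1 - 1/x squashing. The board encoding and names are simplified stand-ins, not our actual code, and the Benson feature is omitted for brevity.

EMPTY, BLACK, WHITE = 0, 1, 2
SIZE = 9

def neighbors(p):
    """On-board orthogonal neighbors of point p = (row, col)."""
    r, c = p
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]

def chain_and_liberties(board, p):
    """Flood-fill the chain containing p; return (stones, liberties)."""
    color = board[p]
    stones, libs, frontier = {p}, set(), [p]
    while frontier:
        q = frontier.pop()
        for n in neighbors(q):
            if board[n] == EMPTY:
                libs.add(n)
            elif board[n] == color and n not in stones:
                stones.add(n)
                frontier.append(n)
    return stones, libs

def squash(x):
    """Map a count in [1, inf) into [0, 1) via 1 - 1/x, as described above."""
    return 1.0 - 1.0 / x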
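Continuing the sketch, a narrow ladder read in the same spirit. The flat three-liberty cutoff here differs from NeuroGo's depth-2 rule, and defender escapes by capturing attacker stones in atari are not modeled.

def ladder_captured(board, target, attacker_to_move=True, depth=0,
                    max_depth=64):
    """Narrow ladder read on the chain at target: the attacker only fills
    the target's liberties, and the defender only extends into them."""
    _, libs = chain_and_liberties(board, target)
    if not libs:
        return True                        # target already captured
    if len(libs) >= 3 or depth >= max_depth:
        return False                       # escaped (or read abandoned)
    defender = board[target]
    attacker = BLACK if defender == WHITE else WHITE
    if attacker_to_move:
        for lib in libs:                   # attacker tries each liberty
            b2 = dict(board)
            b2[lib] = attacker
            _, tlibs = chain_and_liberties(b2, target)
            if not tlibs:
                return True                # this move captures the target
            _, alibs = chain_and_liberties(b2, lib)
            if not alibs:
                continue                   # suicide: illegal for the attacker
            if ladder_captured(b2, target, False, depth + 1, max_depth):
                return True
        return False
    for lib in libs:                       # defender extends into a liberty
        b2 = dict(board)
        b2[lib] = defender
        _, dlibs = chain_and_liberties(b2, target)
        if not dlibs:
            continue                       # suicide: illegal for the defender
        if not ladder_captured(b2, target, True, depth + 1, max_depth):
            return False                   # defender found an escape
    return True                            # every defense is still captured

def features_at(board, p):
    """The per-point features above, minus the Benson term."""
    if board[p] == EMPTY:
        return [0.0, 0.0, 0.0, 0.0, 0.0]
    stones, libs = chain_and_liberties(board, p)
    return [float(board[p] == BLACK),
            float(board[p] == WHITE),
            squash(len(libs)),
            squash(len(stones)),
            float(ladder_captured(board, p))]

# Example: a white corner stone in atari is read as ladder-captured.
board = {(r, c): EMPTY for r in range(SIZE) for c in range(SIZE)}
board[(0, 0)] = WHITE
board[(0, 1)] = BLACK
print(features_at(board, (0, 0)))   # [0.0, 1.0, 0.0, 0.0, 1.0]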

Jim Levenick and I have had a lot of discussion on the idea of preprocessing. My inclination was to have many fairly primitive features like those above and ask the machine learning algorithm to combine them in complicated ways. Jim leaned toward much more powerful features combined in a simpler way. My idea is easier to write, but his will probably work better in the long run. :-)

Peter Drake
Assistant Professor of Computer Science
Lewis & Clark College
http://www.lclark.edu/~drake/