Re: computer-go: Machine Learning and Go
Thore,
I agree that the graph representation you describe ("CFG") is absolutely the
right way to represent the board for neural net applications (though it's not
completely new - see Markus Enzenberger's report at
http://home.t-online.de/home/markus.enzenberger/neurogo.html).
I've been wrestling with how to use this kind of representation for neural nets
for a couple of years, and haven't yet come up with a satisfying solution to the
problem of mapping an arbitrary planar graph onto a fixed set of inputs. I
wasn't satisfied with Markus' solution, but I'm not quite sure yours captures the
full power of the representation either.
If I understand what you did correctly, you call each possible linear subgraph
(one without branches or loops) rooted at the node of interest a feature, and
use the number of "matches" of each feature as an input. However, it seems
like this might eliminate some patterns of interest and conflate
others, for example those where the spatial structure of a field of empty points
is critical, and those where distinguishing distinct from partially overlapping
paths is critical.
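
To check that I'm reading it right, here is a rough sketch of the extraction as
I understand it (Python, purely illustrative - the function name, the
adjacency-dict graph representation, and the 'B'/'W'/'o' labels are my own
guesses, not your actual scheme): enumerate every branch-free simple path
rooted at the point of interest up to some maximum length, key each feature by
its sequence of node labels, and count the matches.

from collections import Counter

def linear_features(adj, label, root, max_len):
    """Count branch-free (simple-path) features rooted at 'root'.

    adj     : dict node -> list of adjacent nodes (the board graph)
    label   : dict node -> label, e.g. 'B', 'W', 'o' (empty)
    root    : the node of interest
    max_len : maximum path length (in nodes) to enumerate

    Returns a Counter mapping label sequences (tuples) to match counts.
    This is only my guess at the scheme, not the paper's code.
    """
    counts = Counter()

    def extend(path):
        counts[tuple(label[n] for n in path)] += 1
        if len(path) == max_len:
            return                      # truncated, not a natural end
        for nxt in adj[path[-1]]:
            if nxt not in path:         # branch-free: no loops, no revisits
                extend(path + [nxt])

    extend([root])
    return counts
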
Also, if a feature subgraph ends arbitrarily (because you've truncated the
length rather than because there are no more adjacent nodes), you can't
distinguish eyes from dame, i.e.
B-o vs. B-o-W
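
For instance (reusing my sketch above, under the same made-up representation),
with the length truncated at two nodes, an eye and a dame next to the same
black stone produce identical feature counts:

# Eye: black stone B1 adjacent to a single empty point with no other
# neighbours.
adj_eye  = {'B1': ['e'], 'e': ['B1']}
lab_eye  = {'B1': 'B', 'e': 'o'}

# Dame: the empty point also touches a white stone, but the path is
# truncated before it gets there.
adj_dame = {'B1': ['e'], 'e': ['B1', 'W1'], 'W1': ['e']}
lab_dame = {'B1': 'B', 'e': 'o', 'W1': 'W'}

print(linear_features(adj_eye,  lab_eye,  'B1', max_len=2))
print(linear_features(adj_dame, lab_dame, 'B1', max_len=2))

Both calls print Counter({('B',): 1, ('B', 'o'): 1}), so the two positions are
indistinguishable to the learner.
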
Curious about your thoughts on this.
On a tangent, I've also been thinking about using this representation for
pattern matching in conventional programs. It maintains all the advantages for
this use - it facilitates generalization and factors out arbitrary distinctions,
dramatically reducing the number of patterns needed to represent key shapes. It
also allows representation of a whole class of very important patterns that
can't be represented with classic fixed size patterns (see
http://cns.nyu.edu/~mechner/compgo/acg/ for Tim Klinger's and my previous swing
at this problem). Unfortunately I haven't been able to come up with a good
pattern-matching algorithm for it. Again, because you can't count on the number
of adjacent nodes a (non-empty) node will have, and because you don't in
general care about the order of the adjacent nodes, the only matching
algorithms I've been able to think of are basically search (which seems too
expensive).
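
For concreteness, by "basically search" I mean something along the lines of the
backtracking matcher below (Python again, a sketch under my own assumptions
about the graph representation, not a real proposal): try every assignment of
pattern nodes to board nodes that respects labels and adjacency, with no
ordering constraint on the neighbours - essentially subgraph-isomorphism
search, which is where the cost comes from.

def match_pattern(p_adj, p_label, b_adj, b_label, anchor_p, anchor_b):
    """Backtracking search for an embedding of a pattern graph into the
    board graph, anchored at one node pair.

    p_adj/p_label : pattern adjacency and labels (pattern assumed connected)
    b_adj/b_label : board adjacency and labels
    Returns a dict pattern-node -> board-node, or None.
    Illustrative only: no wildcards, no symmetry handling, and in the
    worst case it degenerates into exponential search.
    """
    def consistent(assign, p, b):
        if b_label[b] != p_label[p] or b in assign.values():
            return False
        # Every already-assigned pattern neighbour of p must map to a
        # board neighbour of b; neighbour order is irrelevant.
        return all(assign[q] in b_adj[b] for q in p_adj[p] if q in assign)

    def extend(assign):
        if len(assign) == len(p_adj):
            return assign
        # Pick an unassigned pattern node adjacent to the partial match.
        p = next(q for n in assign for q in p_adj[n] if q not in assign)
        # Candidate board nodes: neighbours of the images of p's
        # already-assigned pattern neighbours.
        candidates = {bn for n, b in assign.items() if p in p_adj[n]
                      for bn in b_adj[b]}
        for b in candidates:
            if consistent(assign, p, b):
                result = extend({**assign, p: b})
                if result:
                    return result
        return None

    if b_label[anchor_b] != p_label[anchor_p]:
        return None
    return extend({anchor_p: anchor_b})
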
Any thoughts on this from anyone?
Regards,
-David Mechner
http://cns.nyu.edu/~mechner/
> I thought you might be interested in this because we use a new approach of
> converting a graph-based representation into feature vectors that enable us
> to learn some aspects of good and bad "shape". These methods are then used to
> learn good Tsume-Go moves and to play on a 9x9 board after learning from game
> records. I would be delighted to receive some comments and suggestions.