Re: computer-go: Machine Learning and Go
On Fri, 26 May 2000, David Mechner wrote:
>I agree that the graph representation you describe ("CFG") is absolutely the
>right way to represent the board for neural net applications (though it's not
>completely new - see Markus Enzenberger's report at
>http://home.t-online.de/home/markus.enzenberger/neurogo.html).
>
>I've been wrestling with how to use this kind of representation for neural nets
>for a couple of years, and hadn't yet come up with a satisfying solution to the
>problem of mapping an arbitrary planar graph onto a fixed set of inputs. I
>wasn't satisfied with Markus' solution but I'm not quite sure yours captures the
>full power of the representation either.
Maybe it is possible to find an even better way to represent Go positions
for a machine learning approach.
There were some details in NeuroGo's architecture that were not satisfying.
Why represent a Go position as a graph with arbitrary geometry
if the Go board used as input has a fixed geometry?
Only when processing the position is it useful to join nodes and treat
them as larger units (e.g. strings).
It might also be useful to represent relationships not only by
the discrete values adjacent/not adjacent (or higher-level relationships);
instead, continuous values in the range [0..1] could be used, for example.
Two years ago I was looking for improvements to NeuroGo's architecture,
so I modeled the ideas above using two kinds of neurons:
one representing intersections on the board
and one representing the edges between adjacent intersections.
I defined the connectivity value along a path as
the product of the edge neurons' activities along that path.
Propagation was performed as if the layers were fully connected,
but with an additional multiplication by the connectivity value
of the strongest connection between the intersection neurons involved.
It was even possible to use a backprop-derived learning algorithm
(similar to Sigma-Pi units).
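To make the idea concrete, here is a minimal sketch in Python (not
NeuroGo's actual code; all names are made up for illustration). Edge
neurons hold activities in [0..1]; the connectivity between two
intersections is the maximum over all paths of the product of the edge
activities along the path, and each fully-connected weight is scaled by
that connectivity:

```python
import itertools

SIZE = 3  # toy 3x3 board for illustration

def neighbors(p):
    """4-adjacent intersections on a SIZE x SIZE board."""
    r, c = p
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
            yield (r + dr, c + dc)

def strongest_connectivity(edge_act, src):
    """Connectivity of the strongest connection from src to every
    intersection: the maximum over paths of the product of edge-neuron
    activities, found by repeated relaxation."""
    best = {src: 1.0}
    frontier = [src]
    while frontier:
        p = frontier.pop()
        for q in neighbors(p):
            v = best[p] * edge_act[frozenset((p, q))]
            if v > best.get(q, 0.0):
                best[q] = v
                frontier.append(q)
    return best

def propagate(x, w, edge_act):
    """Propagation as if the layers were fully connected, but with each
    weight additionally multiplied by the connectivity of the strongest
    connection between the two intersection neurons."""
    points = list(itertools.product(range(SIZE), repeat=2))
    y = {}
    for p in points:
        conn = strongest_connectivity(edge_act, p)
        y[p] = sum(w[(p, q)] * conn.get(q, 0.0) * x[q] for q in points)
    return y
```

With all edge activities at 1.0 this reduces to an ordinary fully
connected layer; as edge activities drop toward 0, distant (or cut-off)
intersections contribute less and less to each sum.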
I never found time to write up the new architecture in detail,
but the later versions of NeuroGo are based on it.
It turned out that it achieved about the same playing strength
as the earlier versions, but not significantly better.
I still considered it superior because of the static architecture
and the fewer pre-processed inputs needed.
- Markus
--
Markus Enzenberger | http://home.t-online.de/home/markus.enzenberger