
Re: Introduction



Matt Gokey wrote:

> Some of this has been tried in the chess domain and has failed there
> as well I think.

I don't think enough effort was put into that line of research to say
that it failed; it was less successful (in terms of playing strength)
than work on search, and was abandoned.

> It doesn't seem possible to handle the subtle variations in position
> using techniques like this in many complex games (Chess, Checkers,
> Othello, Go, etc.).  The only good way to truly "read a position"
> seems to be to play it out to a reasonable depth.  How can one
> predict future positional characteristics accurately using only the
> current position?  I don't think it's possible.  Anyone disagree?

I disagree strongly, and there's a simple proof: the existence of
human experts. We predict future positional characteristics accurately
using only the current position. Of course human experts do read
things out, but not in any sort of exhaustive way that can be compared
to the search done by chess programs. Human players' ability to play
speed-go demonstrates, I think, that a great deal of our skill lies
outside of our ability to search.

> and I would argue that we can not develop AI powerful enough to
> handle a complex domain like Go using Go.
> . . .
> Hard coding the knowledge - via patterns, rules, special purpose
> algorithms, etc.  seems to be working to a degree, but I have read
> over and over again that the programs don't really seem to
> understand the positions and so make blunders that cause them to
> lose games.  As the size of the hard coded knowledge set increases,
> the difficulty in keeping the consistency and integrity of the set
> as a whole, I have a feeling, will increase exponentially.  In
> short, I'm of the opinion we can't likely create a good (dan level)
> Go playing program by modeling (AI or not) our perception (probably
> flawed) of human playing techniques.  What does everyone think?

I disagree - proof is forthcoming :) The research that Tim Klinger and
I have been doing attempts to address the shortcomings of existing
programs not by abandoning explicit representation or modeling of human
expertise, but by making it more complete. Existing go programs are
impressive in many ways, but a close comparison of what human players
think about with what the programs actually represent shows that many
things are represented inadequately, or not at all.

Our concentration has been on life and death analysis, and we expect,
by the end of the summer, to have results showing that programs can do
quite good general life and death analysis by explicitly representing
the knowledge and *logic* that human experts use when reading out such
problems.
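
To give a concrete (and purely illustrative) flavor of what
"explicitly representing the knowledge and logic" can mean, here is a
rough sketch - not our actual representation - in which the expert
knowledge lives in a candidate-move generator that proposes only the
handful of moves a strong player would consider, and a small AND/OR
search does the verification. The status(), candidates(), and play()
callbacks are hypothetical placeholders that a real program would
have to fill in.

# Rough, hypothetical sketch (not the representation described above):
# expert knowledge supplies the candidate moves; a small AND/OR search
# verifies the life-and-death verdict.

def solve_life(pos, defender_to_move, status, candidates, play, depth=12):
    """Return True if the defending group can live with best play.

    status(pos)               -> 'alive', 'dead', or 'unknown'
    candidates(pos, defender) -> the few moves an expert would consider here
    play(pos, move, defender) -> the position after that move
    """
    verdict = status(pos)
    if verdict != 'unknown':
        return verdict == 'alive'
    if depth == 0:
        return False   # unresolved within the reading horizon: assume the worst
    moves = list(candidates(pos, defender_to_move))
    if not moves:
        # no expert-suggested move: pass and let the other side try
        return solve_life(pos, not defender_to_move, status, candidates,
                          play, depth - 1)
    results = (solve_life(play(pos, m, defender_to_move),
                          not defender_to_move, status, candidates,
                          play, depth - 1)
               for m in moves)
    # OR node when the defender moves (one way to live is enough);
    # AND node when the attacker moves (every attempt to kill must fail).
    return any(results) if defender_to_move else all(results)

The point is that the knowledge, not brute force, decides which moves
are worth reading, so the trees stay small enough to read out to a
definite answer.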


In any case, there are lots of areas in computer go ripe for research,
and I think machine learning (which is where it sounds like you're
headed) is a really fascinating area.

One suggestion: I don't think today's ML techniques are capable of
learning the go function from scratch. So if you don't have the
expertise, resources, inclination, or temerity to reinvent the
multitude of wheels needed to support a full traditional go program,
I'd advise restricting the scope of your research: pick a limited
task to start - move ordering, low-liberty tactics, territory
assessment, eye assessment, group strength assessment, pattern
learning, whatever - and try to use your favorite technique to
accomplish that.
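
For instance - and this is only an illustrative sketch, with a
made-up encoding and toy data, not a recipe - take eye assessment as
the limited task: encode each point's 3x3 neighborhood as binary
features and train something as simple as a perceptron to label the
center point as a real eye or not. Everything named below (the
encoding, the training loop, the examples) is hypothetical.

# Hypothetical sketch of one "limited task": learn to label a point as
# a real eye from its 3x3 neighborhood, using a plain perceptron.
import random

def features(neigh):
    """One-hot encode a 9-character neighborhood string
    ('.' empty, 'X' friendly, 'O' enemy) into 27 binary features."""
    mapping = {'.': (1, 0, 0), 'X': (0, 1, 0), 'O': (0, 0, 1)}
    out = []
    for c in neigh:
        out.extend(mapping[c])
    return out

def train_perceptron(examples, epochs=50, lr=0.1):
    """examples: list of (neighborhood string, label in {0, 1})."""
    w, b = [0.0] * 27, 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for neigh, label in examples:
            x = features(neigh)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def classify(neigh, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, features(neigh))) + b > 0 else 0

# Two toy cases: an empty point ringed by friendly stones (an eye), and
# the same shape with an enemy stone on one corner (often a false eye).
toy = [("XXXX.XXXX", 1), ("XXXX.XXXO", 0)]
w, b = train_perceptron(toy)
print(classify("XXXX.XXXX", w, b), classify("XXXX.XXXO", w, b))   # 1 0

Even a toy like this gives you a well-defined target, labelled data
you can generate from game records or a rule-based oracle, and a
result you can measure - which I think is the real advantage of
starting small.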

-David

-- 
David A. Mechner            Center for Neural Science
mechner@xxxxxxxxxxxxxxxxx         4 Washington Place, New York, NY 10003
212.998.3580                http://www.cns.nyu.edu/~mechner/