RE: [computer-go] Pattern matching - example play
> -----Original Message-----
> From: Frank de Groot [mailto:frank@xxxxxxxxxxxxxxxxx]
> ----- Original Message -----
> From: "Don Dailey" <drd@xxxxxxxxxxxxxxxxx>
> Subject: Re: [computer-go] Pattern matching - example play
> > I guess what I'm saying is that good players mainly get their
> > knowledge from explanations, by a teacher, by being taught.
>
> Has that been demonstrated?
> It seems counter-intuitive to me.
> I would think that Go players mainly learn from playing games against
> stronger players.
I reached dan-level mainly by studying books and pro games, and played a
tournament about every 3 or 4 months.
Trying to understand moves that you do not understand at all is the key
to becoming a better player.
A book (or teacher) provides good principles and explanations of how to
play in a particular position. But for a beginner go player, this
verbalized knowledge does not generalize from single examples. The real
learning takes place when the beginner tries to mimic the new insights,
either by playing games against stronger players or by trying to
understand a pro game. Finally, the best lessons for me came from
analyzing games I lost in tournaments that I had tried very hard to win.
So what is my point? Whether you have a teacher, a book, or a game, that
is only the starting point for learning. The learning takes place when
you try to use new concepts, or try to mimic the shapes played by better
players and see the consequences. No go program I know of is truly
capable of doing this.
My own program uses a mix of go knowledge, in terms of algorithms and
hand-made patterns, as well as a crude "on the spot pattern harvest"
approach in the fuseki/joseki stage that is similar to Frank's (it is
based on just 5000 pro games + 1000 hand-tuned "punish bad moves"
games). The program is too slow to analyze things by itself.
Everything it does is mere imitation (or pure confusion when it cannot
imitate something). The only surprising plays that emerge are that the
program sometimes seems to understand the concepts "play away from
strength" and overconcentration, without having a single line of code
for them. This just follows from having an evaluation function that can
at least evaluate some positions correctly. I believe that a lot of go
principles should not be forced into the program; they should follow
from simpler and more basic knowledge.
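For readers unfamiliar with the idea, here is a minimal sketch of what
an "on the spot pattern harvest" approach might look like: hash the
local stone configuration around each move in a collection of game
records and count how often each point was played in each context. The
function names, board encoding, and neighbourhood radius are my own
illustrative assumptions, not the program described above.

```python
# Sketch of pattern harvesting from game records (assumed design,
# not the author's actual implementation).
from collections import defaultdict

EMPTY, BLACK, WHITE, EDGE = 0, 1, 2, 3

def local_context(board, size, x, y, radius=2):
    """Return the (2*radius+1)^2 neighbourhood around (x, y) as a key."""
    cells = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < size and 0 <= ny < size:
                cells.append(board[ny][nx])
            else:
                cells.append(EDGE)  # off-board is its own "colour"
    return tuple(cells)

def harvest(games, size=19):
    """Count moves per local context across a collection of games.

    `games` is a list of move lists; each move is (colour, x, y).
    Returns a mapping: context -> {point played: count}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for moves in games:
        board = [[EMPTY] * size for _ in range(size)]
        for colour, x, y in moves:
            key = local_context(board, size, x, y)  # context BEFORE the move
            counts[key][(x, y)] += 1
            board[y][x] = colour
    return counts
```

At play time, such a program would look up the harvested contexts
around candidate points and prefer moves that professionals played
often in the same local shape.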
So mixing all of these approaches, explicitly programmed knowledge and
harvested patterns (which have been discussed here as if they were
opposed to each other), ends up in a program that plays nice moves most
of the time but also blunders horribly quite often. If I kept the
current approach and polished it full-time for some years, it might be
possible to go from ~15-18 kyu to perhaps 5 kyu (very optimistic), but
after that I guess it would be impossible to improve it. Something is
missing, or the entire approach is flawed.
The idea of letting a programmer add a lot of go knowledge (in code or
patterns) is flawed, because most knowledge is either too vague to be
programmed or so extremely specific that no mortal would be able to
specify it without contradictions and bugs.
Perfect shape prediction is also fine, but it will not solve the problem
of evaluating the whole board. Ideal move generation is certainly
possible, but without good full-board evaluation in a deep search it is
of limited use. Perhaps the key here is lazy evaluation. Do the chess
programmers here have a good idea how that could be used in go?
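For context, lazy evaluation in chess engines usually means computing
the cheap evaluation terms first and skipping the expensive ones
whenever the cheap estimate is so far outside the alpha-beta window
that the correction cannot matter. A rough sketch of how that idea
might carry over to go follows; cheap_terms, expensive_terms, and the
margin are placeholders I made up, not a real go evaluator.

```python
# Sketch of lazy evaluation inside an alpha-beta search (assumed
# structure; the terms and margin are illustrative placeholders).
LAZY_MARGIN = 30  # max plausible contribution of the expensive terms

def cheap_terms(position):
    # Placeholder: e.g. captured-stone balance, simple influence sums.
    return position["cheap"]

def expensive_terms(position):
    # Placeholder: e.g. life-and-death estimates, territory counting.
    return position["expensive"]

def evaluate(position, alpha, beta):
    score = cheap_terms(position)
    # If even a generous correction cannot bring the score back inside
    # the (alpha, beta) window, skip the expensive analysis entirely.
    if score + LAZY_MARGIN <= alpha or score - LAZY_MARGIN >= beta:
        return score
    return score + expensive_terms(position)
```

The open question for go is whether any useful evaluation terms are
cheap enough, relative to the expensive ones, for this cutoff to pay
off as it does in chess.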
--
Magnus Persson
Center for Adaptive Behavior and Cognition
Tel: +49-(30)-82406-350
Cell phone: +49 163 6639868
_______________________________________________
computer-go mailing list
computer-go@xxxxxxxxxxxxxxxxx
http://www.computer-go.org/mailman/listinfo/computer-go/