
Re: computer-go:life and death



At 05:12 PM 5/28/2001 -0400, you wrote:

>

> I don't think that what current programs do is what I am loosely calling
> "analysis."  They may get many problems right but my point is that they
> lack the facility to represent and make inferences from the logical
> relationships between strategic goals of interest in a way that is
> independent of any specific domain knowledge -- something akin to the
> "goal graphs" that I discuss in my thesis.

I think you are right that programs don't do analysis in a domain-independent
way as you describe, although I think Bruce Wilcox's original program
tried to. The strong go programs analyze positions in a very
domain-specific way. But within that domain-specific framework, go programs do
make inferences about the position. Many Faces understands that the strength
of a group depends on the relative liberty counts of nearby groups, etc.
It doesn't do any general reasoning with that knowledge. The relationships
and implications are all hard-coded and go-specific.

But these programs are not getting these problems right by accident. They
are doing goal-directed reading until the terminal positions are stable. Bruce's
program could discover new goals during the reading, and then use those goals
later during the same exploration. This is described in his papers.

Many Faces understands goals during reading such as gaining eyes, attacking neighbors,
and running away, and applies whichever goals are appropriate during move generation.
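That kind of goal-driven move generation can be sketched as follows. This is a toy illustration, not Many Faces' or Wilcox's actual code; the goal functions and the position fields they read are invented for the example:

```python
# Toy sketch of goal-directed move generation: each active goal is a
# function from a position to candidate moves, and the generator merges
# the proposals, keeping the first occurrence of each move.

def gain_eyes(position):
    # hypothetical: points that would enlarge the group's eye space
    return position.get("eye_points", [])

def attack_neighbor(position):
    # hypothetical: liberties of the weakest adjacent enemy group
    return position.get("enemy_liberties", [])

def run_away(position):
    # hypothetical: escape routes toward the center
    return position.get("escape_points", [])

def generate_moves(position, goals):
    seen, moves = set(), []
    for goal in goals:
        for move in goal(position):
            if move not in seen:       # de-duplicate, preserving order
                seen.add(move)
                moves.append(move)
    return moves

position = {"eye_points": ["C2"],
            "enemy_liberties": ["D4", "C2"],
            "escape_points": ["E5"]}
print(generate_moves(position, [gain_eyes, attack_neighbor, run_away]))
# ['C2', 'D4', 'E5']
```

A move like C2 that serves two goals at once is proposed only once, but a real generator would presumably score it higher for that reason.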


> For example, even in the problems that our program ultimately failed to
> solve because of missing or incorrect domain knowledge, the "analysis" or
> structure that represented the problem at hand -- the fact that
> successfully capturing one block would allow saving another, for example --
> was still correct. My conclusion that current programs do not do this kind
> of analysis comes from the lack of any published literature to the
> contrary, my discussions with you and other authors of strong programs, as
> well as the general conversation on this list.

Capturing a block to save another is pretty basic, and all programs can generate
moves based on this knowledge. But usually it is hard-coded in a move generator,
not expressed in a "rule" or "pattern" language.

I think I understand now that you are distinguishing between knowledge that is coded
as patterns/rules and knowledge that is coded in C, and between reasoning done by
some kind of inference engine and reasoning that is hard-coded in the move
generation algorithms. Your approach is domain-independent, but it does not work as
well for go as the hard-coded approach, since it is much slower.

For myself, whether the rules are encoded as patterns/rules or in C source code is just syntax :)
I see either approach as doing the same kind of "analysis".

Unfortunately, the strong programs are proprietary, and have not published their
algorithms. The academic efforts have all been quite weak, because a thesis
project is too short to create a strong program.


> I think our disagreement lies in our interpretation of the word
> "understand."  The motivation for the point I made above is that I believe
> that some sort of explicit representation of the problem logic is
> important for programs to solve harder problems (ones that no program can
> solve now) and current programs -- as far as I know -- lack such a
> representation.

I understand :) But I think Thomas Wolf has proven with his program that an explicit
representation of the logic is not necessary to solve hard life and death problems.

And again, I don't see a real difference between representing the problem logic in 700
"pattern rules" and in 3000 lines of C code. The logic is the same in either case,
and is just as explicit. Although the C code may not be as clear to someone other
than the original author :)

If you define C code as implicit knowledge, and patterns or
goals or rules as explicit knowledge, then I agree with you that strong programs
don't use much explicit knowledge in reasoning about life and death. This is
because it is too slow, and therefore leads to weak play.



> So, every strong program can solve 90% of Kano II using the following
> definition of correctness?
>
> A life and death problem is *correctly solved* by a program iff the
> program finds a tree of moves representing the program's moves and the
> player's counter-moves, recursively, which leads to positions that are
> statically solvable.  Such a tree is in effect a "proof" of the
> correctness of the result under the assumption that the moves considered
> at each ply include one that does indeed work when such a move exists.

Yes. Wolf's program constructs exactly such a tree, which he calls the proof tree.
Many Faces' life and death reading also constructs such a tree, although the
algorithm is best-first rather than recursive. Every program that does life and death
reading constructs such a tree, and I think all of the strong programs do life and death reading now.
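The definition above amounts to an AND/OR search: the attacker needs some move after which every defender reply still leads to success, and the tree of those moves is the proof. Here is a minimal sketch over an explicit toy game tree (dicts of move -> subtree, with boolean leaves standing in for statically solvable positions); it is an illustration of the idea, not Wolf's or Many Faces' actual code:

```python
# AND/OR proof-tree search: an OR node needs one working move,
# an AND node must refute every reply.

def prove(tree, attacker_to_move):
    """Return a proof tree, or None if the attacker cannot succeed.

    A proof tree maps the attacker's chosen move to, for each defender
    reply, the sub-proof that refutes it; {} marks a statically
    solved leaf."""
    if isinstance(tree, bool):                 # statically solvable position
        return {} if tree else None
    if attacker_to_move:
        for move, subtree in tree.items():     # OR node: one move must work
            sub = prove(subtree, False)
            if sub is not None:
                return {move: sub}
        return None
    proof = {}
    for move, subtree in tree.items():         # AND node: answer every reply
        sub = prove(subtree, True)
        if sub is None:
            return None
        proof[move] = sub
    return proof

# Attacker's move "a" wins against both defender replies; "b" fails.
game = {"a": {"x": True, "y": {"a2": True}}, "b": {"x": False}}
print(prove(game, True))
# {'a': {'x': {}, 'y': {'a2': {}}}}
```

The returned structure is exactly the "tree of moves and counter-moves leading to statically solvable positions" in the definition, which is why it serves as a proof.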

I will be happy to share SGF files of some of the problems in volume 2 if you like, so you
can see whether they meet your definition. Currently Many Faces gets the wrong answer on
only four problems in that book when playing under game conditions. But I've tuned
against that problem set, so this is not a fair assessment of its strength at life and death
reading. Against an untuned set of slightly more difficult problems, it gets between 75 and
80 percent correct.

Also, since I test under game conditions, sometimes it analyzes the problem correctly, but
still plays a different move that it thinks is better. For example, one of the problems it gets
wrong in volume 2 is a ko where the program has to ignore two ko threats to win it. The program
correctly reads this result, and decides that it is better to attack some other weak stones in the problem
than to start such a disadvantageous ko. And I think it is right.

I've attached the analysis for the problem in your thesis that your program missed as
an example. Since the analysis is under game conditions, it doesn't just find the
first move that works, but looks at every reasonable move at the first ply, since in
a game these problems often have more than one answer, and the program must find
them all. You can see that the move generator scored the correct first move much
higher than the others (a 44% chance of success, compared to 8% or 6%).

After the first move, the static life analyzer is quite confident (92%) that the group
has been killed, but since there is time left for this search, it continues to look at
possibilities near the center.

In total it generates 74 moves, and evaluates 38 positions, in about 2.5 seconds.
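A best-first reading loop of this sort can be sketched with a priority queue keyed by the move generator's confidence, like the percentages in the attached analysis: always expand the most promising position, and keep going while the budget allows. This is a simplified illustration with invented interfaces, not Many Faces' actual algorithm:

```python
import heapq

def best_first(root, expand, score, budget):
    """Expand up to `budget` positions, most promising first.

    `expand(pos)` returns child positions; `score(pos)` is a
    confidence in [0, 1] from the (hypothetical) move generator."""
    frontier = [(-score(root), 0, root)]   # max-heap via negated score
    counter = 1                            # tie-breaker for equal scores
    visited = []
    while frontier and len(visited) < budget:
        _, _, pos = heapq.heappop(frontier)
        visited.append(pos)
        for child in expand(pos):
            heapq.heappush(frontier, (-score(child), counter, child))
            counter += 1
    return visited

# Toy tree echoing the attached analysis: the 44% move gets explored
# before the 8% alternatives ever come off the queue.
children = {"root": ["lc", "if", "ih"], "lc": ["hf", "ih2"]}
conf = {"root": 1.0, "lc": 0.44, "if": 0.08, "ih": 0.08,
        "hf": 0.5, "ih2": 0.5}
order = best_first("root", lambda p: children.get(p, []), conf.get, budget=4)
print(order)
# ['root', 'lc', 'hf', 'ih2']
```

Stopping on a position budget (or a clock, as under game conditions) is what lets the search "continue to look at possibilities" after the answer already seems settled, rather than returning at the first success as a recursive solver would.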


> I don't know of any such results except (I think) for Willmott's thesis and
> the subsequent paper by Willmott, Richardson, Bundy, and Levine (mentioned
> in an earlier post).

Unfortunately, the strong programs aren't publishing much :( I would publish more, but it
is too time-consuming.


Tim
David Fotland
(;
GM[1]FF[4]VW[]AP[Many Faces of Go:10.0]
SZ[13]
HA[0]
ST[1]
DT[2001-05-28]
KM[0.0]
RU[Japanese]
AB[lb][kc][jb][jd][je][he][gf][gg][gh][hi][ii][ji][ki][kh][lh]
[lg]
AW[mc][ld][me][lf][mg][ke][jf][kg][ig][jh][hg]
C[Success: 100% confidence.  Looked at 38 positions.
74 generated nodes.]

(;B[lc]
C[Success: 100% chance of success. 44% generated confidence.  0% \
best failing child. Trial 0. 26 moves in this tree.]

(;W[hf]
C[Failure: 0% chance of success. 50% generated confidence.  0% \
best failing child. Trial 0. 6 moves in this tree.]
;B[ih]
C[Success: 100% chance of success. 82% generated confidence.  0% \
best failing child. Trial 0. 5 moves in this tree.]

(;W[jg]
C[Leaf position, Failure: 0% confidence. 44% generated confidence.]
)
(;W[kd]
C[Leaf position, Failure: 0% confidence. 38% generated confidence.]
)
(;W[md]
C[Leaf position, Failure: 0% confidence. 30% generated confidence.]
)
(;W[if]
C[Leaf position, Failure: 0% confidence. 22% generated confidence.]
)
(;W[mh]
C[Leaf position, Failure: 0% confidence. 20% generated confidence.]
))
(;W[ih]
C[Failure: 0% chance of success. 50% generated confidence.  0% \
best failing child. Trial 1. 14 moves in this tree.]

(;B[hf]
C[Failure: 0% chance of success. 76% generated confidence.  0% \
best failing child. Trial 1. 1 moves in this tree.]
;W[if]
C[Leaf position, Success: 100% confidence. 94% generated confidence.]
)
(;B[if]
C[Success: 100% chance of success. 74% generated confidence.  0% \
best failing child. Trial 3. 11 moves in this tree.]

(;W[kd]
C[Leaf position, Failure: 0% confidence. 36% generated confidence.]
)
(;W[jg]
C[Leaf position, Failure: 0% confidence. 22% generated confidence.]
)
(;W[mb]
C[Leaf position, Failure: 0% confidence. 12% generated confidence.]
)
(;W[mh]
C[Leaf position, Failure: 0% confidence. 12% generated confidence.]
)
(;W[hf]
C[Failure: 0% chance of success. 10% generated confidence.  0% \
best failing child. Trial 15. 6 moves in this tree.]
;B[ie]
C[Success: 100% chance of success. 80% generated confidence.  0% \
best failing child. Trial 15. 5 moves in this tree.]

(;W[kd]
C[Leaf position, Failure: 0% confidence. 64% generated confidence.]
)
(;W[jg]
C[Leaf position, Failure: 0% confidence. 60% generated confidence.]
)
(;W[mh]
C[Leaf position, Failure: 0% confidence. 42% generated confidence.]
)
(;W[md]
C[Leaf position, Failure: 0% confidence. 34% generated confidence.]
)
(;W[kf]
C[Leaf position, Failure: 0% confidence. 32% generated confidence.]
))))
(;W[if]
C[Failure: 0% chance of success. 48% generated confidence.  0% \
best failing child. Trial 4. 1 moves in this tree.]
;B[ih]
C[Leaf position, Success: 99% confidence. 76% generated confidence.]
)
(;W[mb]
C[Leaf position, Failure: 0% confidence. 12% generated confidence.]
)
(;W[mh]
C[Leaf position, Failure: 0% confidence. 12% generated confidence.]
))
(;B[if]
C[Failure: 0% chance of success. 8% generated confidence.  0% best \
failing child. Trial 6. 1 moves in this tree.]
;W[lc]
C[Leaf position, Success: 100% confidence. 92% generated confidence.]
)
(;B[ih]
C[Failure: 0% chance of success. 8% generated confidence.  0% best \
failing child. Trial 2. 1 moves in this tree.]
;W[lc]
C[Leaf position, Success: 100% confidence. 92% generated confidence.]
)
(;B[mb]
C[Failure: 0% chance of success. 6% generated confidence.  0% best \
failing child. Trial 7. 1 moves in this tree.]
;W[lc]
C[Leaf position, Success: 100% confidence. 92% generated confidence.]
)
(;B[hf]
C[Failure: 0% chance of success. 6% generated confidence.  0% best \
failing child. Trial 8. 1 moves in this tree.]
;W[lc]
C[Leaf position, Success: 100% confidence. 92% generated confidence.]
)
(;B[mh]
C[Failure: 2% chance of success. 6% generated confidence.  1% best \
failing child. Trial 9. 2 moves in this tree.]
;W[lc]
C[Success: 86% chance of success. 92% generated confidence.  8% \
best failing child. Trial 9. 1 moves in this tree.]
;B[hf]
C[Leaf position, Failure: 2% confidence. 8% generated confidence.]
))