
Re: computer-go: A problem with understanding lookahead




> > You obviously are not familiar  with the computer chess world, because
> > "counting  material" is a  horrible evaluation  function and  does not
> > lead to "sophisticated positional judgement" in modern programs
> 
> The qualification of 'horrible' is rather relative of course. There's no Go
> program that has an evaluation that comes even close to the precision of
> counting material in Chess, and I don't think there will be for quite some
> time.
> 
>     Mark

This is apples to oranges, but let's try:

I keep hearing on this group that Go programs can't search; they are
primarily evaluation-based.  That means most of their strength is due
to evaluation.  Go programs don't play great, but they play at least
like advanced beginners.  Correct?

Turn off a chess program's evaluation except for material counting and
watch people laugh hilariously at every move.  We will even allow a
1-ply search with capture quiescence so that obvious tactical mistakes
are not made.  This program would be quickly overtaken by even the
weakest tournament players.

I think your statement is probably incorrect: Go programs play a
"passable game", but a chess program like this would have a dead lost
game within 3 moves.  You could even add a few ply of search to my
example and this would still be true.

I have often wondered how deep you would have to search to make a
tactics-only program play well.  Eventually it would come to a point
where it was "forced" into playing good moves so that it wouldn't
lose.  Of course, a program like this might do fairly well against
weaker players because it would be wildly opportunistic: it wouldn't
let you get away with any tactical nonsense and would pounce on any
tactical errors.

But that is not what we are talking about here; we are talking about
comparing only the evaluations.

I DO believe evaluation is probably a little easier in chess, but I'm
not even sure about that.  It's not the evaluator that makes Go so
much harder than chess; believe me, it's incredibly difficult to do
good evaluation in chess too.  It's the out-of-control branching
factor.

I think just about the whole world was brainwashed by the Kasparov vs.
Deep Blue match, which led most people to believe chess was basically
"solved" by computers and there were no obstacles left to overcome.
But this isn't even close to being true, and we are far, far away from
knowing how to evaluate chess positions soundly.  The only thing we
have going for us is that deep searches have great power to simulate
added knowledge.

Branching-factor-wise, chess is roughly like 6x6 or 7x7 Go.  Could a
Go program be designed that plays 6x6 Go fairly well?  Probably.  I
think it might even be almost possible to exhaustively search a 5x5
board.
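The rough arithmetic behind that comparison can be spelled out.  The
branching factors below are ballpark assumptions (about 35 for chess
is a commonly quoted average; for Go I just use the early-game count
of roughly one move per empty point), not measurements:

```python
# Uniform game-tree size under an assumed average branching factor.

def tree_size(branching, depth):
    """Number of leaf nodes in a uniform tree of the given depth."""
    return branching ** depth

# Assumed average branching factors (ballpark figures only).
for name, b in [("chess", 35), ("6x6 go", 36),
                ("7x7 go", 49), ("19x19 go", 250)]:
    print(f"{name:8s} b={b:3d}  8-ply leaves ~ {tree_size(b, 8):.2e}")
```

At 8 ply, chess and 6x6 Go come out almost identical, while 19x19 Go
is larger by a factor of roughly (250/35)^8, which is why the same
search techniques that work in chess hit a wall on the full board.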

There is a wonderful book out about checkers called "One Jump Ahead"
which is highly entertaining and will open your eyes.  Checkers is to
chess roughly what chess is to Go: much simpler, much lower branching
factor, a little easier to evaluate, and so on.  Chinook, the checkers
program the book is about, is the current world champion, above both
humans and other computers, and yet its author, Jonathan Schaeffer,
has no misconceptions about the evaluation function.  He seems to
consider evaluation the big weak point of his program and knows it
does not compare to good human judgement.  Even in checkers, where
losing a man is far more of a liability than losing a pawn is in
chess, evaluation is critical.

Don