
Re: computer-go: perfect players



Hi Chris, 

I'm glad you are interested in the experiments I did.  I have
considered doing a paper on this, or perhaps putting it on a web page
with all the data; maybe I will do that later.

So here is a little more detail  about how I conducted the experiment.
I tried to be as stringent (anal) as  possible about the methodology I
used.

The evaluation function was simple: I counted a pawn as 70 points and
a king as 100 (or the equivalent).  This is usually the tactically
sound thing to do.  I tried other values and ran thousands of games to
find something reasonably optimal.  It actually didn't make much
difference what values I used, as long as a king was worth more than a
pawn but not more than twice a pawn.  Exactly twice seemed to be
a little inferior.  If I had positional terms in the program it
probably would have been an important choice, but in this case it
wasn't.
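A minimal sketch of a material-only evaluation like the one described above.  The board representation (a dict mapping squares to piece letters, uppercase for White, lowercase for Black) and all names here are my own illustration, not Don's actual code:

```python
# Material-only evaluation: pawn = 70, king = 100, no positional terms.
# Board representation (square -> piece letter) is an assumption for
# illustration; the original program's data structures are unknown.

PAWN_VALUE = 70
KING_VALUE = 100   # worth more than a pawn, but less than two pawns

PIECE_VALUES = {'P': PAWN_VALUE, 'K': KING_VALUE,    # White pieces
                'p': -PAWN_VALUE, 'k': -KING_VALUE}  # Black pieces

def evaluate(board, side_to_move):
    """Return the material balance from side_to_move's point of view."""
    score = sum(PIECE_VALUES[piece] for piece in board.values())
    return score if side_to_move == 'white' else -score
```

The point of the king-between-one-and-two-pawns constraint is that the search never trades a king for a single pawn, but will happily trade a king for two pawns.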

To get randomness, I always scrambled the root move list.  I never
reordered the move list, even though this hurts alpha-beta cutoffs a
little, because I wanted a truly random choice among equally strong
moves.  I used iterative deepening, which is a major help in
collecting killer moves (to improve alpha-beta cutoffs), and I also
used a hash table.  I could not have achieved these nice depths
without the hash table.  The speed was crippled a little more by my
choice to be very conservative in the hash table implementation: I
would not use scores from the hash table unless the depths matched.
This means the hash table doesn't actually change the result of a
search; it only provides a big speedup.
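Two of those details can be sketched in code: the root move list is shuffled once (and never re-sorted) so ties among equal-scoring moves break randomly, and the hash-table probe is conservative, trusting a stored score only when its search depth matches exactly.  Everything here (function names, the entry layout, the negamax convention) is my own assumption, not the original implementation:

```python
import random

def search_root(position, moves, depth, alphabeta, tt):
    """Pick a root move; ties among equal moves are broken by the shuffle."""
    random.shuffle(moves)          # random tie-breaking among equal moves
    best_move, best_score = None, float('-inf')
    for move in moves:             # fixed order: never re-sorted afterwards
        # Negamax convention: child score is negated, window is (-beta, -alpha).
        score = -alphabeta(position.play(move), depth - 1,
                           float('-inf'), -best_score, tt)
        if score > best_score:
            best_score, best_move = score, move
    return best_move, best_score

def tt_probe(tt, key, depth):
    """Conservative probe: only trust a score searched to this exact depth."""
    entry = tt.get(key)
    if entry is not None and entry['depth'] == depth:
        return entry['score']
    return None                    # depth mismatch: search the node normally
```

With this probe rule the table can only return scores the search would have computed anyway, which is why it speeds things up without changing any result.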

I  also did  a  study to  see  how often  there  were repeated  games.
Occasionally I did get repeated games, but it was quite rare.

Also, I imposed a modified set of rules.  To avoid really long games
that were dead draws, I declared the game a draw if it reached a
given ply count.  I used a pretty liberal limit so that actual wins
didn't get turned into draws too often.  In practice, even a 1-ply
search would eventually stumble into a win if one was possible.
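The adjudication rule amounts to a simple game loop.  This is a sketch under my own assumptions (the ply limit, engine interface, and position API are illustrative; the original limit isn't stated):

```python
def play_game(engines, start_position, max_plies=200):
    """Play one game; adjudicate a draw if the ply limit is reached.

    engines[0] moves on even plies, engines[1] on odd plies.  An engine
    returning None means the side to move has no legal moves and loses.
    The max_plies=200 default is an assumed 'liberal' limit.
    """
    position = start_position
    for ply in range(max_plies):
        move = engines[ply % 2](position)
        if move is None:                    # no legal move: side to move loses
            return 'win for player %d' % ((ply + 1) % 2)
        position = position.play(move)
    return 'draw'                           # ply limit reached: adjudicated draw
```

The limit only matters for dead-drawn shuffling; a position with a forced win gets converted well before a liberal limit is reached.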

Another important change was that I didn't allow multiple jumps.
This simplified the coding and made things faster.  I didn't care that
I wasn't being 100% true to the spirit of real checkers, because this
was a study in game-playing theory, not in checkers.  I believe that
this version of checkers is even simpler than the multiple-jump game,
and I wanted a very simple but non-trivial game.

I intend to take this further, but haven't had much time.  Please
understand, I'm not trying to create a great 6x6 checkers program; I
could probably approach perfection if I wanted to.  What I'm trying to
do is better understand the whole nature of game-playing theory, and a
highly simplified domain is a wonderful tool for that.  In this
first step I just wanted empirical evidence, by comparing against a simpler
game, that Chess and Go are harder games than we imagine them to be.
There is, in my opinion, a kind of unspoken belief that great chess
players play close to perfect chess, and I was hoping to demonstrate
that this is not very likely to be true.

The next step, if I go further with this toy domain, is to introduce
more evaluation terms and observe the effects.  I do expect a little
evaluation to make a big improvement, but can 5 ply beat 20 ply with
some help from a decent evaluation?  I would like to know.

Don




   From: "Fant, Chris" <chris.fant@xxxxxxxxxxxxxxxxx>
   Date: Thu, 3 May 2001 15:52:25 -0400 
   MIME-Version: 1.0
   X-Mailer: Internet Mail Service (5.5.2650.21)
   Sender: owner-computer-go@xxxxxxxxxxxxxxxxx
   Precedence: bulk
   Reply-To: computer-go@xxxxxxxxxxxxxxxxx
   Content-Type: text/plain;
	   charset="iso-8859-1"
   Content-Length: 1503

   They could if the people they play against are also very close to
   perfection.  And what I meant by "very close" is very close relative to how
   close 9-dan Go players are.  But judging from what you said about your 6x6
   checkers experiment (which was very interesting), humans aren't particularly
   close in any of the games that we play.  I guess that's why we like them.  By
   the way, I also agree with your "human arrogance theory."

   But one thing I didn't understand about your 6x6 checkers games is, what
   "statistical noise" are you talking about?  Did you introduce some
   randomness at the beginning of each game?  Like each player makes one or two
   random moves before actually looking ahead?  If so, it seems like you would
   need more than 200 games to discern a 4% advantage of one player over the
   other.  I assume that you alternated who went first, so that is only 100
   chances to be dealt a decent opening.  Like I said, I find that project
   very interesting, for some reason.  If you have any more info about it you
   would like to share, or if you have a web page about it, I'd like to check
   it out.  Thanks.
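   A quick back-of-the-envelope check supports the sample-size concern above
   (this arithmetic is my own illustration, not from the original thread):

```python
import math

def std_error(n_games, p=0.5):
    """Standard error of an observed win rate over n_games (binomial)."""
    return math.sqrt(p * (1 - p) / n_games)

se_200 = std_error(200)                # ~0.035, i.e. 3.5 percentage points
sigmas = 0.04 / se_200                 # a 4% edge is only ~1.1 standard errors
games_needed = (2 * 0.5 / 0.04) ** 2   # ~625 games for a ~2-sigma detection
```

   So at 200 games a 4% edge sits inside the noise; something like 600+ games
   would be needed to resolve it at roughly the two-sigma level.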


   -----Original Message-----
   From: Don Dailey [mailto:drd@xxxxxxxxxxxxxxxxx]
   Sent: Thursday, May 03, 2001 12:31 PM
   To: computer-go@xxxxxxxxxxxxxxxxx
   Cc: computer-go@xxxxxxxxxxxxxxxxx; computer-go@xxxxxxxxxxxxxxxxx
   Subject: Re: computer-go: perfect players



   And one other thing too: the very best Chess players still lose LOTS of
   games.  They couldn't be very close to perfection.

   Don