
Re: [computer-go] Pattern matching - rectification & update



Your site is impressive.  

I  don't really  believe  in  pro-move prediction  systems,  but I  do
believe a strong go program would probably play the same move as a pro
more often than a weak program would.

It's too bad that there is not  an easy way to quantify the quality of
a move.   For instance,  there may be  many cases where  the predicted
move is perfectly valid and move choice is more a matter of style than
anything else.  

What you would like to have (but  probably can't have) is a way to say
that  the  program's  top  choice  was  a GREAT  move  or  a  move  of
"professional  quality",   whether  it  happened  to  be   the  one  a
professional chose or not.

It might  also be the case  that an occasional  prediction is actually
BETTER than the move the professional chose in a particular situation,
in which case your prediction statistics get hurt unfairly.

One subjective way around this is to  get a real pro to rate the top N
choices in a few sample games,  asking him to "put a check mark" on
move choices that are reasonable  pro candidates; in other words,
could this be a move that a  pro is reasonably likely to play?  I
still wouldn't trust this measurement unless you got verification
from more than one pro, so that you could actually compare their
opinions.

Perhaps a more valid and interesting  way to get "test data" is to get
a team of pros to play a game  against another team of pros.  The
rules might work like this:

   1.  Each player  on a team  suggests (nominates) a  candidate move.
   2.  The candidate  moves are all  voted upon, using a  Borda voting
       scheme.
   3.  The top-voted move is played (selecting randomly among ties).

Notes:
 
  Players should not have access  to each other's opinions and choices.
  The voting and nomination stages should be anonymous.

  Borda counting is probably best for this, as it is one of the
  fairest voting systems, though no voting system is entirely fair.
  Borda counting is based on each player ranking ALL the choices from
  best to worst and tallying up the result (a sketch in code follows).
  It's probably not very important how the move is chosen.
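
  For concreteness, a minimal sketch of the tally in Python (the
  three-player team and the move labels below are made up for
  illustration):

    import random
    from collections import defaultdict

    def borda_winner(ballots):
        # Each ballot ranks ALL nominated moves, best first.  A move
        # in position i on a ballot of n moves scores n - 1 - i points;
        # the highest total is played, with ties broken randomly.
        scores = defaultdict(int)
        for ballot in ballots:
            n = len(ballot)
            for i, move in enumerate(ballot):
                scores[move] += n - 1 - i
        best = max(scores.values())
        return random.choice([m for m, s in scores.items() if s == best])

    # Hypothetical anonymous ballots from a three-player team:
    ballots = [["D4", "Q16", "C3"],
               ["Q16", "D4", "C3"],
               ["D4", "C3", "Q16"]]
    print(borda_winner(ballots))   # D4 wins, 5 of a possible 6 points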

  You are not so much interested in  the move played as you are in the
  initial move nominations; presumably  all of the nominations are pro
  quality.  However, you might play  games with the actual counts, for
  instance throwing out choices that did poorly in the voting.
   
For those who like to play with pro-move prediction data, it would be
extremely useful to have some "test data" based on a few games
generated in this fashion (along with the voting statistics).  These
games would not have to be generated by professional players, just
relatively strong players.

In my  opinion, the problem  with pro-prediction schemes is  that it's
only  the occasional  move that  makes the  biggest difference  in the
strength of good players.  At least  it's this way in chess.  A weak
master plays chess very much like a grandmaster; it might be only 2
or 3 moves in the whole game that "separate the men from the boys."


- Don



   From: "Frank de Groot" <frank@xxxxxxxxxxxxxxxxx>
   Date: Fri, 19 Nov 2004 14:25:08 -0800

   My previous posting about the performance of my pattern matcher was
   totally wrong, and so was the graph.

   Mark Boon & I had a difference of opinion on my assertion that such a
   system is worth 100,000 USD.  I claimed that my system correctly
   predicted almost all joseki moves in a never-before-seen pro game, but
   I did not give any evidence.  Neither was my previous post much help,
   since it contained the results of a few bugs.

   After fixing them I was quite swept off my feet to find that 44% of
   all moves in the game I referred to (Mr Popo vs. GoMonster, a game
   between anonymous pros on IGS) were predicted correctly, and that
   almost two-thirds of the moves are in the top five.

   Even better, the "learning" is not even halfway done, meaning the
   performance will likely go up to 50% correct prediction.

   To my knowledge, this kind of pro-move prediction has never been
   achieved in any research or commercial software.

   The average pro-prediction rate (over 50,000 games), i.e. the
   fraction of pro moves found within the top N predictions, is:

   Top  1   46%
   Top  2   57%
   Top  3   62%
   Top  4   67%
   Top  5   70%

   Top 25   90%

   These values will improve over the coming days.
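
   For anyone who wants to compute this kind of table for another
   predictor, the measurement itself is simple.  A minimal sketch in
   Python (the predict() interface and the game representation are
   placeholders, not the actual matcher):

      def top_n_rates(games, predict, n_max=5):
          # games: one list of (position, pro_move) pairs per game.
          # predict(position): candidate moves, best first.
          # Returns cumulative fractions: rates[n-1] is the share of
          # pro moves found within the predictor's top n choices.
          hits = [0] * n_max
          total = 0
          for game in games:
              for position, pro_move in game:
                  total += 1
                  ranked = predict(position)[:n_max]
                  if pro_move in ranked:
                      for n in range(ranked.index(pro_move), n_max):
                          hits[n] += 1
          return [h / total for h in hits]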

   I tried to explain more clearly how the system works, and put part of
   the "analyzed" game & (unannotated) SGF here:

   http://www.moyogo.com/joseki.htm

   The most interesting aspect of this is that the pattern system in
   fact plays Go all by itself, at least against me; I am unable to beat
   it.  It invades corners, makes eyes inside them, etc.  Unfortunately,
   as a non-Go player, I am unable to judge its playing strength, and it
   will take a while before I can test the system against a Go program.

   I have to thank Michael Reiss, whose information on his website about
   his "Good Shape" module inspired me to drastically change the way my
   pattern system works.  It was just one word that held the key, and
   that word was "urgency".  There are many ways to give patterns a
   value, and for the past two years I had focused on something that was
   promising and gave good results, but after focusing on "urgency" the
   thing blew away all previous results.


_______________________________________________
computer-go mailing list
computer-go@xxxxxxxxxxxxxxxxx
http://www.computer-go.org/mailman/listinfo/computer-go/