
Re: [computer-go] Pattern matching - example play




> None of the Go programmers have ever given any arguments as to why you can't
> extract Go knowledge from game records, and be better than "manual" Go
> knowledge. 

This is a really tricky issue. I don't think anyone has been able to
prove this, but I also don't think anyone has proven the converse.

To me, "extracting" knowledge means getting it into some form where it
can be used (with some success) in a game-playing program. I think
this has already been done. There is clearly some information in well
played games, and so it should be possible to "extract" it.
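As a concrete (if toy-scale) illustration of what such extraction might look like, here is a minimal sketch in Python. Everything in it is invented for the example: the 5x5 board, the cell encoding, the 3x3 window, and the records themselves. A real extractor would read SGF files from actual professional games.

```python
from collections import Counter

def local_pattern(board, x, y):
    """Return the 3x3 neighbourhood around (x, y); off-board cells are 3."""
    size = len(board)
    cells = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            cells.append(board[ny][nx] if 0 <= nx < size and 0 <= ny < size else 3)
    return tuple(cells)

def extract_patterns(records):
    """Count how often each local shape was answered by an expert move."""
    counts = Counter()
    for board, (x, y) in records:
        counts[local_pattern(board, x, y)] += 1
    return counts

# Toy data: 0 = empty, 1 = black, 2 = white. Two positions share a local
# shape around the chosen point; one does not.
empty = [[0] * 5 for _ in range(5)]
contact = [row[:] for row in empty]
contact[2][1] = 1                 # a black stone next to the move point

counts = extract_patterns([(empty, (2, 2)), (contact, (2, 2)), (empty, (2, 2))])
best_shape, seen = counts.most_common(1)[0]   # the most frequently answered shape
```

The resulting frequency table can then rank candidate moves by how often a strong player answered the same local shape, which is the simplest form the extracted "knowledge" can take.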

However, Frank, it seems to me that it should be better to directly
apply actual knowledge as opposed to trying to obtain knowledge
indirectly.

  Let me ask you a question: if you wanted to learn to be a really
  good player, which method would give more success?

     a)  Playing over published games.
     b)  Learning directly from masters, i.e., taking lessons.

Reading books where masters annotate games is a form of method b).
Even without a teacher, I would learn much more from books that taught
concepts that I assume have been worked out over thousands of years.
Direct transfer of knowledge must be better than learning ONLY from
example (though learning from example is quite important too).

You must also realize that learning only from example is something
that humans are extremely well equipped to do, more so than computers.
And yet direct teaching methods are still necessary to become a master
of the game.

Finally, although this is a very long way off, learning from example
alone must hit a wall at some point. Unless some deeper learning is
going on at the same time, you are limited by the example. It's very
much like raising children: provide a bad example and your children
will not progress (or if they do, it's because of deeper processes and
their own reasoning abilities and self-discipline).

I think that ultimately, learning from human games is a big
limitation. You will hit a wall where there is a serious point of
diminishing returns. Heikki Levanto mentioned just one aspect of
this; there are many more. Most of the knowledge expressed in games
is buried far beneath the surface; extracting the real reasons for
moves is extremely non-trivial, and those reasons are really what you
would want a program to understand.

I envision a possible future procedure where programs bootstrap
themselves, playing progressively better and better by teaching
themselves. This is how humans did it, using a process that took
hundreds or even thousands of years.
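The bootstrapping idea above can be sketched in miniature: an incumbent "policy" plays against slightly perturbed candidates, and whichever wins becomes the new incumbent, so the program improves with no external teacher. This is only a hill-climbing toy under invented assumptions; the three-number policy, the hidden TARGET, and the strength function all stand in for a real Go engine and real self-play games.

```python
import random

random.seed(0)

TARGET = [0.9, 0.1, 0.5]          # hidden "ideal" policy of the toy game

def strength(policy):
    """Toy evaluation: closer to TARGET means a stronger player."""
    return -sum((p - t) ** 2 for p, t in zip(policy, TARGET))

def play_game(a, b):
    """Stand-in for a self-play game: the stronger policy wins."""
    return a if strength(a) >= strength(b) else b

def bootstrap(generations=200):
    """Repeatedly pit the incumbent against a mutated candidate."""
    incumbent = [0.5, 0.5, 0.5]
    for _ in range(generations):
        candidate = [p + random.gauss(0, 0.05) for p in incumbent]
        incumbent = play_game(candidate, incumbent)
    return incumbent

start = [0.5, 0.5, 0.5]
final = bootstrap()
# The policy drifts toward the (toy) ideal purely through self-play.
```

The point of the sketch is only the loop structure: generate a variant, let the game itself judge it, keep the winner, repeat.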

I think that at the point we are at now, it's very possible that your
approach and similar ones could be very good.


- Don



_______________________________________________
computer-go mailing list
computer-go@xxxxxxxxxxxxxxxxx
http://www.computer-go.org/mailman/listinfo/computer-go/