Re: computer-go: FPGA
From: Dave Dyer <ddyer@xxxxxxxxxxxxxxxxx>
>>I have always believed that any program should be designed in such a
>>way that it is GUARANTEED to improve with increasing computing
>>resources.
>This is not possible. You can guarantee that the computer uses
>all the time available, but you cannot guarantee that the additional
>work will result in an improvement. It might even make things worse.
I think you are talking in practical terms, not theoretical. I COULD,
in THEORY, design a stupid go program guaranteed to improve with
depth. But if you are saying it's difficult to write a program that
scales well with hardware, then I would be inclined to believe you.
I once proved to myself, in computer chess, that a selective search
can play WORSE with increasing depth if you don't select the moves
properly. It's easy to see why when you consider the case where the
computer's opponent got his best moves filtered out by accident. The
computer would choose its own moves based on the illusion that the
opponent is limited to these bad moves (chosen poorly by the plausible
move generator). I am quite sure you will see this kind of behavior
(worse play in general with depth) when the selectivity is done
improperly. Of course I don't pretend that I know an effective way to
do it with computer go!
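To make that failure mode concrete, here is a toy sketch (the moves, values, and heuristic ranks below are invented for illustration, not taken from any real program): a plausible-move generator that happens to rank the opponent's strongest refutation as implausible makes the deeper search prefer the losing move.

```python
# Tiny hand-built tree: our move -> (static eval, opponent replies).
# Each reply is (value from our point of view, heuristic rank; lower rank
# means the plausible-move generator likes it more).  All numbers invented.
TREE = {
    "A": {"static": 1, "replies": [(1, 0)]},            # safe move
    "B": {"static": 0, "replies": [(-5, 9), (10, 0)]},  # trap: the refutation (-5)
}                                                       # gets a terrible rank

def plausible(replies, keep=1):
    """Flawed selectivity: keep only the replies the heuristic likes best.
    By accident it ranks the opponent's strongest refutation as implausible."""
    return sorted(replies, key=lambda r: r[1])[:keep]

def value(move, depth):
    node = TREE[move]
    if depth <= 1:
        return node["static"]              # depth 1: static evaluation only
    kept = plausible(node["replies"])      # opponent "limited" to plausible replies
    return min(v for v, _ in kept)         # opponent minimizes our value

def best_move(depth):
    return max(TREE, key=lambda m: value(m, depth))

print(best_move(1))  # 'A' -- the shallow search correctly avoids the trap
print(best_move(2))  # 'B' -- the deeper search walks into the pruned refutation
```

The deeper search is worse precisely because it trusts an opponent model in which the best refutation no longer exists.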
Even with a completely correct search (in chess), sometimes looking a
ply deeper will make it play a worse move. For instance, it might
find a way to do something that is ordinarily a good thing, but turns
out to be a horrible thing in that specific position. But ON THE
AVERAGE, searching deeper will ALWAYS make it win more games if the
search is full width. Apparently this is also true with selectivity
if it's designed properly. In modern chess programs the selectivity
is also designed to be based on depth so that a move is more likely to
be included if it's farther away from the leaf nodes.
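A minimal sketch of such a depth-dependent widening schedule (the particular numbers are arbitrary, chosen only to show the shape):

```python
def search_width(remaining_depth, base=4, widen=2):
    """How many candidate moves to keep at a node: few near the leaves
    (remaining_depth small), many near the root.  Numbers are illustrative."""
    return base + widen * remaining_depth

# Leaves get 4 candidates; the root of a 4-ply search gets 12.
print([search_width(d) for d in range(5)])  # [4, 6, 8, 10, 12]
```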
But imagine a program that only knew how to evaluate positions that
were completely played out and were either wins or losses, and also did
only full-width searches. Such a program would show a definite
improvement with depth. It would not be a practical program, but at
some point it would play a perfect game of go given a fast enough
computer (one that probably will never exist!). I modified my chess
program in this fashion once just for fun and did a set of tests.
Even though it only understood wins and losses, each ply of search
would dominate the previous level.
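The same idea can be sketched on a trivially small game (the take-1-2-or-3 counting game, my choice of example here, not the chess experiment described above): a full-width search that only recognizes completely played-out positions, scores unresolved horizon nodes as unknown, and resolves more and more positions as the depth grows.

```python
# Full-width search for the take-1-2-or-3 counting game (taking the last
# stone wins).  Like the program described above, it only "knows" positions
# that are completely played out: +1 = side to move wins, -1 = side to move
# loses, 0 = not resolved within the search horizon.
def solve(stones, depth):
    if stones == 0:
        return -1              # last stone is gone: the side to move has lost
    if depth == 0:
        return 0               # not played out; this program knows nothing here
    children = [solve(stones - take, depth - 1)
                for take in (1, 2, 3) if take <= stones]
    return -min(children)      # negamax over a FULL-width move list

# Each extra ply resolves at least as many positions as the last; once
# depth >= stones the program plays perfectly (the losing positions are
# exactly the multiples of 4).
for depth in (1, 2, 4, 8):
    print(depth, [solve(n, depth) for n in range(1, 9)])
```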
I have heard the description of many go programs and how they do the
selective search (not the tactical local searches) and though I'm no
expert at computer go, I do know a lot about game theory in general
and I am an expert in computer chess. I can't imagine any of these
searches scaling well, and indeed it seems surprising that they improve
the program at all. I can easily imagine that adding another
selective layer to them might make it play weaker. But most of the
descriptions I have read were in very high level terms, and not very
specific at all.
Dave, I am interested in why a computer would get consistently worse
results with more computing power and why you believe it's not
possible to fix.
Don