Re: Incremental Data Structures
Goliath was designed to compute almost all data structures
incrementally. I did not do this by storing previous data, however; the
only thing cached is the result of tactical searches. In general I
found a speedup of more than tenfold by designing the computation of
data structures to be updated incrementally from one arbitrary position
to another. The more the two positions differ, the longer the
computation takes, but generally the difference is just a single stone,
and the updating of information remains localized. The speed gain is
highest when the board becomes fuller, since the influence of an extra
stone on the board remains small.
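To illustrate the idea, here is a minimal sketch of a localized incremental update, using group liberty counts as the cached data. The names and data layout are my own illustration, not Goliath's actual structures: when a stone is added or removed, only the groups touching that point are recomputed.

```python
# Illustrative sketch: incrementally maintained liberty counts on a Go
# board. Class and method names are hypothetical, not taken from Goliath.

EMPTY, BLACK, WHITE = 0, 1, 2
SIZE = 19

def neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE:
            yield (nx, ny)

class IncrementalBoard:
    def __init__(self):
        self.stones = {}     # point -> colour
        self.liberties = {}  # point -> cached liberty count of its group

    def place(self, p, colour):
        """Add a stone and recompute cached data only near p."""
        self.stones[p] = colour
        self._recompute_near(p)

    def remove(self, p):
        """Remove a stone; the same localized recompute applies."""
        del self.stones[p]
        self._recompute_near(p)

    def _recompute_near(self, p):
        # Only groups touching p can have changed, so the work stays
        # local no matter how full the board is.
        for q in {p} | set(neighbors(p)):
            if q in self.stones:
                self._recompute_group(q)
            else:
                self.liberties.pop(q, None)

    def _recompute_group(self, p):
        # Flood-fill the group containing p and count its liberties.
        colour = self.stones[p]
        group, libs, stack = set(), set(), [p]
        while stack:
            q = stack.pop()
            if q in group:
                continue
            group.add(q)
            for n in neighbors(q):
                if n not in self.stones:
                    libs.add(n)
                elif self.stones[n] == colour:
                    stack.append(n)
        for q in group:
            self.liberties[q] = len(libs)
```

The same `place`/`remove` pair serves both adding and retracting moves, which is what makes this style of update symmetric.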
The advantage of this method is that it is universal: it works both when
adding moves and when removing them. Also, very often one moves from one
position in a search tree to a completely different branch or position.
In that case I did not need to replay the complete path from one
position to the other move by move, but could simply update
incrementally from the old position to the new one, however different
the two may be.
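A hedged sketch of that jump between arbitrary positions, not Goliath's actual code: diff the two stone maps and refresh cached data only at the points that differ. `refresh` is a stand-in for whatever localized update the engine performs at a changed point.

```python
# Hypothetical sketch: jumping between two arbitrary positions by diffing
# their stone maps, so the cost scales with how much the positions differ,
# not with the length of any move path between them.

def position_diff(old_stones, new_stones):
    """Points whose contents differ between the two positions."""
    return {p for p in old_stones.keys() | new_stones.keys()
            if old_stones.get(p) != new_stones.get(p)}

def jump_to(old_stones, new_stones, refresh):
    """Move from the old position to the new one in a single step,
    calling refresh(p) only at the differing points."""
    diff = position_diff(old_stones, new_stones)
    for p in diff:
        refresh(p)
    return diff
```

If the two positions are sibling branches of a search tree, the diff is typically a handful of stones, so the update is far cheaper than undoing and redoing the whole path through their common ancestor.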
In the tsume-go program this was perfected even further, which allows it
to do full-board evaluation several hundred times per second on a
high-end PC.
I also found it didn't work to design the algorithm first and then
optimize it later. The only way to get proper speed was to design it as
optimally as possible from the very start. A single piece that remained
inefficient would undo most of the savings the other pieces obtained.