
Re: Incremental Data Structures



Hello,

 I have been reading this thread with great interest. I
have some comments that might be of interest.

 There are many different kinds of information. For example,
there is the information required just to perform board management
(taking off captured groups etc.), which is very low level and can be
done efficiently stone by stone, and there is information concerning
the influence of stones, which is high level.
It is the latter that we are going to be playing around
with once the board manager is stable.

 During a search you do not wish to maintain a full set of
information all the time. For example, if you have placed a cutting
stone you might first be interested in whether it can be captured in
a ladder; during this initial reading you will only wish to maintain
the lowest level of information. If you find that it is not captured,
then you will (IMHO) want to look at the stable end points of
sequences arising from the cut and evaluate the influence fields,
potential territory, etc.

 I have given up programming GO at present because it is not very good
for my brain, but I did consider this problem and my own conclusions
were:

 o Most high level information (at least the stuff I considered)
is not suited to incremental update. That is, the effect of
placing a stone can be so convoluted that you may as well
calculate the information from a fresh look at the board.

 o Since you may be jumping around your search tree you will want
to cache the information should you wish to compare positions.
I was thinking of using the board hash function as an index
into a table of data. If the data exists, use it; if not, construct
it and put it into the table. (A sketch of this follows the list.)

 o Some information is local, and using a full-board hash function
may cause recalculation of information when a stone is placed
remotely. This could be cured by using a local hash function
(for example, XOR the values for all points out to 3 levels away
from the group of interest). This means different leaves of the
tree can share the same information even when the full-board
positions do not coincide. (See the second sketch below.)
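
 To make the second point concrete, here is a rough Python sketch of
the request-and-compute pattern. The names (board_hash,
compute_influence) and the dummy calculation are just illustrative
assumptions, not taken from any real program:

import random

# One random 64-bit number per (point, colour); XORing them together
# gives the usual Zobrist-style board hash.
ZOBRIST = {(x, y, c): random.getrandbits(64)
           for x in range(19) for y in range(19)
           for c in ("black", "white")}

def board_hash(board):
    """board maps (x, y) -> 'black' or 'white' for occupied points."""
    h = 0
    for (x, y), colour in board.items():
        h ^= ZOBRIST[(x, y, colour)]
    return h

def compute_influence(board):
    # Stand-in for the expensive high-level calculation done from a
    # fresh look at the board.
    return len(board)

cache = {}

def influence(board):
    """Request the data; if it is not in the table, construct it and
    put it in."""
    key = board_hash(board)
    if key not in cache:
        cache[key] = compute_influence(board)
    return cache[key]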
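
 And the local variant from the third point, reusing the ZOBRIST table
above: hash only the stones within 3 points of the group of interest,
so a move played far away does not change the key. The distance
measure and the radius are assumptions for illustration:

def local_hash(board, group, radius=3):
    """XOR only the stones within `radius` of any stone in `group`
    (Manhattan distance, purely for illustration), so remote moves
    leave the key unchanged."""
    h = 0
    for (x, y), colour in board.items():
        if any(abs(x - gx) + abs(y - gy) <= radius for (gx, gy) in group):
            h ^= ZOBRIST[(x, y, colour)]
    return h

 Two leaves of the search tree whose boards differ only far from the
group then map to the same key and can share the cached data.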

 It is likely that the information will eventually cause you to run
out of virtual memory, so the hash table database could start
deleting data, purging the most historic stuff first. You do not
really need to worry about freeing the database by hand, because the
purging does it automatically.
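
 A crude way to get that purging, again only as an assumed sketch:
keep the table in insertion order and throw away the oldest entries
once some size limit is passed.

from collections import OrderedDict

MAX_ENTRIES = 100000           # assumed limit; tune to available memory

cache = OrderedDict()          # replaces the plain dict in the sketch above

def store(key, value):
    """Put data into the table, purging the most historic entries
    once the limit is passed."""
    cache[key] = value
    while len(cache) > MAX_ENTRIES:
        cache.popitem(last=False)    # the oldest insertion goes first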

 The advantages of this approach:

 o Bookkeeping is easy. You simply request data, and if it is not
available it is calculated.

 The disadvantage:

 o Absolutely none (I may be a bit biased).


 Thank you for reading,

   Paul.