
Re: Incremental Data Structures



Darren wrote:
> > 1. Write it without the speedup.
> >    [Optional: time it now. Is it slow enough to be worth speeding up?]
> >
> > 2. Write a second version with the speedup, that also still does
> >    it the slow way. After every move check that the updated data
> >    is exactly the same.
> >
> > 3. Write a third version that just does the speedup.
> >
> > 4. Get a large set of game records, and have each of your three
> >    versions play every move in every game, with timing.
  [...]
> > 5. Look at the timing and decide if it was really worth it.

To amplify what Tim said in reply to this: because incremental
algorithms are /different algorithms/ from from-scratch algorithms,
doing this would essentially mean writing and debugging the entire go
program twice. In our six-odd years of work we haven't even finished
doing it once.
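
For concreteness, Darren's step 2 - running the incremental and
from-scratch versions side by side and checking agreement after every
move - might look like the minimal sketch below. All names here
(Board, the counting functions) are illustrative, not taken from any
real go program:

```python
# Sketch of the dual-run cross-check from step 2 of the quoted method.
# The "statistic" is deliberately trivial (a stone count) so the
# incremental and from-scratch paths stay easy to compare.

class Board:
    """Toy board that maintains a statistic incrementally and can
    also recompute it from scratch."""

    def __init__(self):
        self.stones = []        # list of (x, y, color) tuples
        self.cached_count = 0   # incrementally maintained statistic

    def play(self, move):
        self.stones.append(move)
        self.cached_count += 1  # fast incremental update

    def count_from_scratch(self):
        return len(self.stones)  # slow, authoritative recomputation


def replay_with_check(moves):
    """Replay a game record, asserting after every move that the
    incremental state agrees exactly with the from-scratch state."""
    board = Board()
    for move in moves:
        board.play(move)
        assert board.cached_count == board.count_from_scratch(), \
            f"incremental/from-scratch divergence after move {move}"
    return board


board = replay_with_check([(3, 3, 'B'), (15, 15, 'W'), (3, 15, 'B')])
```

The point of the sketch is also its cost: every interesting statistic
needs both the incremental and the from-scratch implementation, which
is exactly the double-writing this post objects to.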

What motivated our use of the incremental approach was our intention to
accumulate knowledge about the position as the game progresses, the way
a human player does. Suppose you spend three minutes reading out a life
and death situation in a corner. The result (say, that the group is
dead) is factored into your positional evaluation from that moment on
(until the situation is disturbed), and in addition you know that all
those moves that look good because they might save your corner are
really bad because they don't work. Now suppose you do some global
lookahead, or any kind of reading anywhere on the board in which you
want to know the balance of territory, the status of the group in the
corner, or whether those superficially good-looking moves in the corner
are really good. Or even suppose you tenuki and simply want to know the
territorial balance in the next state. In every such case the cached
result spares you from re-reading the corner.
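
As a sketch of the caching scheme described above - storing an
expensive life-and-death verdict and discarding it only when the local
situation is disturbed - something like the following could work. All
names and the region/point representation are hypothetical, chosen
only to illustrate the idea:

```python
# Sketch: cache a life-and-death verdict per local situation, tagged
# with the set of points whose occupation could change the verdict.
# A move outside every relevant set leaves all cached verdicts intact.

class StatusCache:
    def __init__(self):
        # region name -> (verdict, set of points the verdict depends on)
        self._status = {}

    def store(self, region, verdict, relevant_points):
        self._status[region] = (verdict, set(relevant_points))

    def lookup(self, region):
        entry = self._status.get(region)
        return entry[0] if entry else None

    def on_move(self, point):
        # Invalidate only the analyses whose relevant area was touched.
        stale = [r for r, (_, pts) in self._status.items() if point in pts]
        for region in stale:
            del self._status[region]


cache = StatusCache()
cache.store('lower-right corner', 'dead', {(16, 18), (17, 17), (18, 18)})

cache.on_move((3, 3))     # tenuki far away: the verdict survives
assert cache.lookup('lower-right corner') == 'dead'

cache.on_move((17, 17))   # a move inside the situation: verdict discarded
assert cache.lookup('lower-right corner') is None
```

The design choice worth noting is that invalidation is driven by
locality, so the three-minute analysis is paid for once and consulted
for free on every later state that leaves the corner undisturbed.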

I don't believe you have to implement it both ways to accurately
evaluate the relative efficiencies of the methods - it's inconceivable
to me that caching such results could be less efficient than redoing
that 3-minute analysis on each state transition (and there may be many
such cached analyses at any given time).

More importantly, I don't believe anyone using from-scratch update would
even contemplate using an algorithm that relies on such expensive state
information, and this constraint on imagination is the real cost we
hoped to avoid by using incremental update.

-D

-- 
David A. Mechner            Center for Neural Science
mechner@xxxxxxxxxxxxxxxxx         4 Washington Place, New York, NY 10003
212.998.3580                http://cns.nyu.edu/~mechner/