Re: Shallow Or Deep Search?
>>
>> 2) the overhead in generating, regenerating, and evaluating positions
>> is substantial, so the cost of the shallow, indecisive searches with poor
>> cutoffs is very substantial. This is also why searches based on alternatives
>> to alpha-beta such as "proof number" search are problematic. These other
>> search strategies invariably require the search engine to be agile about
>> switching from one part of the tree to another.
>
> Sorry, I am not sure what you mean in (2), could you clarify?
> Could it be that agile switching is not a problem
> but a feature of a good search engine?
>
Part 1:
Go programs derive many "facts" based on exhaustive analysis. The
state of all these facts, and the underlying state that led to them,
have to be switched along with the position. It is both time consuming
and error prone to keep track of all the dependent state.
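To make that concrete, here is a minimal sketch in C of the kind of
bookkeeping involved; the names (Facts, Position, make_move, and so on)
are hypothetical, not taken from any particular program:

    /* Hypothetical sketch: every derived fact has to be saved and
     * restored in lockstep with the move stack.  A search that jumps
     * between distant parts of the tree pays for a make/unmake pair,
     * and for switching this state, at every step of the detour. */
    typedef struct {
        int eye_status;     /* e.g. result of an exhaustive eye analysis */
        int semeai_winner;  /* e.g. result of a capturing-race analysis  */
    } Facts;

    typedef struct {
        int   board[19][19];
        Facts facts;        /* depends on the whole board position       */
    } Position;

    static Facts saved_facts[512];  /* parallel stack of dependent state */
    static int   sp;

    void make_move(Position *p, int x, int y, int color) {
        saved_facts[sp++] = p->facts; /* snapshot what the move invalidates */
        p->board[x][y] = color;
        /* ...patch or re-derive p->facts here; easy to get subtly wrong... */
    }

    void unmake_move(Position *p, int x, int y) {
        p->board[x][y] = 0;           /* 0 == empty */
        p->facts = saved_facts[--sp]; /* restore, or the facts quietly rot */
    }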
Even without hard-to-derive facts, there is a lot of pressure in Go programs
to incrementally maintain a great deal of information, such as the complete
set of groups, their liberties, the set of adjacent groups, etc., because
such information is much cheaper to update when you know exactly what changed
than to throw away and regenerate from scratch. Consequently, simply making
or unmaking a move tends to be a relatively time consuming process, not
simply a matter of changing a[x,y]=Black;
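For a flavor of what the incremental bookkeeping looks like, here is a
hedged sketch (hypothetical names; a real engine tracks liberties per
group rather than per point, and must also handle merges, captures, and
ko):

    enum { SIZE = 19, EMPTY = 0, BLACK = 1, WHITE = 2 };

    typedef struct {
        int x, y;
        int old_libs[4];         /* neighbour liberty counts before the move */
    } Undo;

    static int grid[SIZE][SIZE];
    static int libs[SIZE][SIZE]; /* incrementally maintained liberty counts  */

    static const int dx[4] = { 1, -1, 0, 0 };
    static const int dy[4] = { 0, 0, 1, -1 };

    void play(int x, int y, int color, Undo *u) {
        u->x = x; u->y = y;
        grid[x][y] = color;      /* the a[x,y]=Black part is the easy bit */
        for (int i = 0; i < 4; i++) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || nx >= SIZE || ny < 0 || ny >= SIZE) continue;
            u->old_libs[i] = libs[nx][ny];
            libs[nx][ny]--;      /* the new stone fills a neighbour liberty */
            /* a real engine would now merge friendly groups, detect
             * captures, update adjacency sets, and log it all for undo */
        }
    }

    void undo_play(const Undo *u) {
        grid[u->x][u->y] = EMPTY;
        for (int i = 0; i < 4; i++) {
            int nx = u->x + dx[i], ny = u->y + dy[i];
            if (nx < 0 || nx >= SIZE || ny < 0 || ny >= SIZE) continue;
            libs[nx][ny] = u->old_libs[i]; /* restore from the undo record */
        }
    }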
Part 2:
Since shallow searches get poor information from their terminal nodes
(because they were depth limited before they produced anything useful),
they don't produce alpha-beta cutoffs, and hence tend to be nearly worst-case
searches, which take exponentially more time. So not only do shallow searches
produce poor results, they are very inefficient at generating them.
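To ground the cutoff claim, here is a plain negamax-style alpha-beta
sketch in C; evaluate, generate_moves, make_move/unmake_move, Position,
Move, and MAX_MOVES are hypothetical stand-ins. With well-ordered moves
and an informative evaluation, the first reply usually refutes a bad
move and the cutoff fires early, giving the well-known best case of
roughly b^(d/2) nodes visited; with an uninformative shallow evaluation
the cutoff rarely fires and the search degrades toward the full b^d
worst case:

    /* Hypothetical sketch of plain alpha-beta (negamax form). */
    int alphabeta(Position *p, int depth, int alpha, int beta) {
        if (depth == 0)
            return evaluate(p);      /* quality here drives the cutoffs */
        Move moves[MAX_MOVES];
        int n = generate_moves(p, moves);
        for (int i = 0; i < n; i++) {
            make_move(p, &moves[i]);
            int score = -alphabeta(p, depth - 1, -beta, -alpha);
            unmake_move(p, &moves[i]);
            if (score >= beta)
                return beta;         /* cutoff: siblings are never searched */
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }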