
RE: computer-go: [Question] Alpha-Beta Pruning & Null Move



I think it depends on your point of view whether you call it 'pruning' or
not. An algorithm that selectively decides to extend some variations but
not others could be said to 'prune' every variation that didn't get
extended.

For this to work, I don't think it's necessary for the pruning search to be
cheaper, as long as it reduces the total search effort compared to searching
every line equally deep. In practice I do of course try to make the null
move search cheaper. Often, checking whether the search exceeds a certain
(smallish) number of nodes is enough: if the opponent can make things
complicated, that line is probably worth exploring a bit more anyway.
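
To make that concrete, below is a rough sketch in C of what such a
node-capped null-move probe could look like. Everything in it (Position,
tactical_search, the budget constant) is made up for illustration; it is
not code from Goliath.

    /* Hypothetical sketch: give the opponent a free move (a null move
       by us) and run a probe search that is capped by node count
       rather than by depth.  Position and the engine routines are
       placeholders. */

    #define PROBE_NODE_BUDGET 200        /* smallish cap; invented number */

    typedef struct Position Position;    /* engine-specific */

    extern void make_null_move(Position *p);  /* pass for side to move */
    extern void undo_null_move(Position *p);
    extern int  tactical_search(Position *p, int alpha, int beta,
                                int *nodes, int node_limit);

    /* Returns nonzero if the line starting at p can be pruned,
       i.e. not extended any further. */
    int null_move_probe(Position *p, int alpha, int beta)
    {
        int nodes = 0;
        int score;

        make_null_move(p);
        score = -tactical_search(p, -beta, -alpha, &nodes,
                                 PROBE_NODE_BUDGET);
        undo_null_move(p);

        /* The probe blew past its node budget: the opponent can make
           things complicated even when handed a free move, so the
           line is worth reading in full. */
        if (nodes >= PROBE_NODE_BUDGET)
            return 0;

        /* The opponent could not hurt us even with an extra move:
           don't extend this line. */
        return score >= beta;
    }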

I'm sure it's quite different from chess, since the whole point is that the
extra move often DOESN'T return an overwhelming advantage, which indicates
that the move would be better spent elsewhere.

The actual implementation in Goliath is not so great and relies on quite a
few heuristics and a fair amount of fine-tuning. But I found that it plays
better with than without, even though it costs a bit of performance.

> -----Original Message-----
> From: owner-computer-go@xxxxxxxxxxxxxxxxx
> [mailto:owner-computer-go@xxxxxxxxxxxxxxxxx]On Behalf Of Don Dailey
> Sent: Thursday, April 26, 2001 7:24 PM
> To: computer-go@xxxxxxxxxxxxxxxxx
> Cc: computer-go@xxxxxxxxxxxxxxxxx
> Subject: Re: computer-go: [Question] Alpha-Beta Pruning & Null Move
>
>
>
> Hi Mark,
>
> It sounds like your algorithm uses null move to prune but without the
> depth reduction factor. This probably makes sense in tactical reading.
> You are essentially trying a null move to see if a line is worth
> pursuing, and this will be much more accurate than using static pruning
> rules. With a tactical search in Go, the tree is very deep with a very
> low branching factor. The point is that depth reduction does not have
> much meaning in this context because your stopping rules are not
> primarily based on depth (although you probably still have an upper
> limit).
>
> In order to be cheaper, which would be necessary to rightfully consider
> it a pruning algorithm, the null move search has to be made very cheap,
> especially so if it doesn't produce rich cutoffs. Even if it does, it
> needs to be cheaper than the search you would normally be doing. When
> the null move search fails, it just becomes pure overhead.
>
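
(For comparison with what Don describes here: one common formulation of
the chess-style null-move cutoff, with a depth reduction factor R, is
sketched below. The engine primitives and the value of R are invented;
dropping the reduction, i.e. searching to depth - 1, gives the variant
without the depth reduction factor.)

    /* Hypothetical sketch: chess-style null-move cutoff, meant to be
       tried at the top of a regular alpha-beta node before any moves
       are generated. */

    #define NULL_MOVE_REDUCTION 2        /* "R"; typical chess value */

    typedef struct Position Position;    /* engine-specific */

    extern void make_null_move(Position *p);
    extern void undo_null_move(Position *p);
    extern int  search(Position *p, int depth, int alpha, int beta);

    /* Returns nonzero if the node can be cut off right away. */
    int null_move_cutoff(Position *p, int depth, int beta)
    {
        int score;

        if (depth <= NULL_MOVE_REDUCTION + 1)    /* too shallow to reduce */
            return 0;

        /* Hand the opponent a free move and search to a reduced depth
           with a minimal window around beta. */
        make_null_move(p);
        score = -search(p, depth - 1 - NULL_MOVE_REDUCTION,
                        -beta, -beta + 1);
        undo_null_move(p);

        /* Even after giving up a move the side to move still fails
           high: the position is good enough to prune the whole node. */
        return score >= beta;
    }
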
> It may be that you can take steps to severely constrain the null move
> search. I'm sure you've experimented with this. In chess, getting an
> extra move is SO overwhelming that it usually returns something
> optimistic even with depth reduction. With tactical reading you may
> have something like this with some other constraints you might not
> normally consider. I don't really know.
>
> Don
>
>
>
>
>
>    From: "Mark Boon" <tesuji@xxxxxxxxxxxxxxxxx>
>    Date: Wed, 25 Apr 2001 17:38:09 +0200
>
>    Don's description of 'null move pruning' is what I thought it would
>    be. I used something similar in Goliath as one of the criteria to
>    stop reading certain variations. It had rather vague concepts of
>    'urgency' and 'stability' to decide whether to (continue to) read
>    in a certain situation or not. The original driver was that static
>    evaluation turned out to be almost useless in situations that are
>    not stable. So unstable situations had to be read further. One of
>    the ways to determine whether a situation was stable was to allow
>    one side to move twice and see if it would lead to an unacceptable
>    result for the side that 'passed locally', taking into account the
>    average value of a move for the side that 'passed'.
>
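
(Purely to illustrate that stability test, a rough sketch in C; every
name in it, including play_best_local_move and average_move_value, is
invented and not taken from Goliath.)

    /* Hypothetical sketch: let the opponent move twice in a row
       locally while 'side' passes, and check whether the local loss
       stays within the average value of one move, which the passing
       side presumably gains elsewhere. */

    typedef struct Position Position;    /* engine-specific */

    extern int  evaluate(const Position *p, int side);  /* points for 'side' */
    extern int  average_move_value(const Position *p, int side);
    extern void play_best_local_move(Position *p, int side);
    extern void undo_move(Position *p);

    /* Returns nonzero if 'side' can afford to pass locally, i.e. the
       situation is considered stable. */
    int locally_stable(Position *p, int side, int opponent)
    {
        int before = evaluate(p, side);
        int after;

        /* The opponent gets two moves in a row while 'side' passes. */
        play_best_local_move(p, opponent);
        play_best_local_move(p, opponent);

        after = evaluate(p, side);

        undo_move(p);
        undo_move(p);

        /* Losing up to one average move locally is acceptable; losing
           clearly more means the situation is unstable and has to be
           read further. */
        return (before - after) <= average_move_value(p, side);
    }
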
>    I wouldn't know how to measure the speed-up. For my program it
>    actually caused a considerable slowdown, because it would do much
>    more reading in variations where the static evaluation would have
>    given up. Plus, the situation after the two moves in a row by one
>    side might still not be stable and therefore need more reading. But
>    for the accuracy of the reading it was a great improvement. This
>    method for determining whether a situation was stable or not was
>    always a last resort. If you can do it by heuristics it is
>    preferable, since that saves a lot of time. The difficulty was to
>    find a good balance for the heuristics.
>
>    In tactical reading, a pass is also sometimes allowed, but I wonder
>    if it can be regarded as a null move. It is used to prevent one
>    side from being forced to play and actually making his position
>    worse, because the only moves available to him are filling his own
>    liberties or filling his own eyes.
>
>    One of the best uses of null moves, I find, is in the endgame.
>    Sente moves are generally doubled in value. To find out if a move
>    is sente, try moving twice and see if the result gets doubled or
>    more.
>
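
(Again only as an illustration, with invented names, that sente test
could be coded along these lines.)

    /* Hypothetical sketch: value an endgame move, and double that
       value if a second move in a row at least doubles the local
       swing, which is taken as the sign that the first move is
       sente. */

    typedef struct Position Position;    /* engine-specific */
    typedef struct Move Move;

    extern int  evaluate(const Position *p, int side);
    extern void play_move(Position *p, const Move *m);
    extern void play_best_local_move(Position *p, int side);
    extern void undo_move(Position *p);

    int endgame_move_value(Position *p, const Move *m, int side)
    {
        int before = evaluate(p, side);
        int single, twice;

        play_move(p, m);
        single = evaluate(p, side) - before;   /* value played as gote */

        /* The same side moves again to measure the follow-up; a big
           follow-up means the opponent could not afford to ignore the
           first move. */
        play_best_local_move(p, side);
        twice = evaluate(p, side) - before;

        undo_move(p);
        undo_move(p);

        /* Two moves in a row worth at least twice one move: treat the
           first move as sente and double its value. */
        return (twice >= 2 * single) ? 2 * single : single;
    }
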
>    Does anyone actually have experience with making a program play the
>    endgame properly? I have briefly looked into it using brute force
>    in a local situation, but wasn't very successful. What I wanted,
>    for example, was for the program to be able to see that the value
>    of a gote move would be higher than evaluated if it could be
>    followed up by a profitable sente sequence. This is a very basic
>    technique that even most kyu players master, but it seems
>    surprisingly difficult for programs. A slightly more sophisticated
>    technique, which I also thought would be easy for computers, is to
>    evaluate what gote moves either side can make locally, see which
>    side can make the bigger move, and award that side half of the
>    difference. (For example, if black cannot make any points but white
>    can make two points, evaluate it as one point for white.) After a
>    month's worth of work the results were still very poor, so I
>    wondered whether anyone else has tried.
>
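
(That last local count is just the arithmetic below; the helper name is
invented.)

    /* Hypothetical sketch: whichever side has the bigger local gote
       move is credited with half of the difference, since either side
       may get to play there first. */

    typedef struct Position Position;    /* engine-specific */

    /* Size, in points, of the biggest gote move 'side' has locally. */
    extern int biggest_local_gote_move(const Position *p, int side);

    /* Positive: points for black; negative: points for white. */
    int local_gote_estimate(const Position *p, int black, int white)
    {
        int b = biggest_local_gote_move(p, black);
        int w = biggest_local_gote_move(p, white);

        /* Example from the text: black = 0 points, white = 2 points
           gives (0 - 2) / 2 = -1, i.e. one point for white. */
        return (b - w) / 2;
    }
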
>        Mark
>
>
>
>