
computer-go: Re: new strategy game vs. Amazons



William Harold Newman wrote:

The game of Amazons has a reasonably impressive branching factor, and my impression is that existing programs which don't use radically new techniques still play a decent game.

This is true, but perhaps one reason is that Amazons shares the "newness property" mentioned by Måns Ullerstam below, so we are not really sure what a decent game is.  (See:  http://www.cs.unimaas.nl/ICGA/games/amazons/)  Nevertheless, the comment seems correct.

For example, our program, Invader (http://www.csun.edu/~lorentz/amazon.htm), easily beats beginners.  But once players learn that there are really two separate strategies in the game that must be balanced (briefly, territory and mobility), humans master that balance better than the programs do, and so can often beat most of the programs, or at least give them a very good game.  I would guess that the best humans are at about the same level as the best programs.  As we learn more about the strategic concepts of the game, though, I wouldn't be surprised if humans improve at a faster rate than the programs.

Indeed, Invader uses no "radically new techniques."  As a point of reference: in the early stages of the game (up to about move 15) it never completes a three-ply search, even with a fair amount of forward pruning.  In the middle game, when the branching factor drops to "only" about 300 or 400, we are lucky to finish four plies.  This is with 30 seconds per move.
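To make those numbers concrete, here is a quick Python sketch of my own (an illustration, not Invader's code) that counts the legal first moves on the standard 10x10 board, assuming the usual starting squares a4/d1/g1/j4 for White and a7/d10/g10/j7 for Black.  A move in Amazons is a queen-like slide followed by a queen-like arrow shot from the new square:

EMPTY, WHITE, BLACK = 0, 1, 2
SIZE = 10
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def initial_board():
    board = [[EMPTY] * SIZE for _ in range(SIZE)]
    for col, row in [(0, 3), (3, 0), (6, 0), (9, 3)]:  # White: a4 d1 g1 j4
        board[row][col] = WHITE
    for col, row in [(0, 6), (3, 9), (6, 9), (9, 6)]:  # Black: a7 d10 g10 j7
        board[row][col] = BLACK
    return board

def queen_targets(board, row, col):
    # Empty squares reachable by a queen-like slide from (row, col).
    for dr, dc in DIRS:
        r, c = row + dr, col + dc
        while 0 <= r < SIZE and 0 <= c < SIZE and board[r][c] == EMPTY:
            yield r, c
            r, c = r + dr, c + dc

def count_moves(board, side):
    # A move is: slide one amazon, then shoot an arrow from its new square.
    total = 0
    for row in range(SIZE):
        for col in range(SIZE):
            if board[row][col] != side:
                continue
            board[row][col] = EMPTY               # lift the amazon
            for r, c in list(queen_targets(board, row, col)):
                board[r][c] = side                # try a destination
                total += sum(1 for _ in queen_targets(board, r, c))
                board[r][c] = EMPTY
            board[row][col] = side                # put it back
    return total

print(count_moves(initial_board(), WHITE))  # prints 2176

With 2176 legal first moves, a full three-ply search from the opening must visit on the order of 2000^3, i.e. roughly 10^10, leaf positions, which is consistent with why three plies is out of reach early on without heavy pruning.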

-Richard Lorentz

Måns Ullerstam wrote:
I agree with all the mails regarding this new game, Arimaa. But I think
there is one interesting aspect of developing software that can beat the
best human at this game. The interesting part is that there are still no
humans who are good at this game, no common opening theories, and no
common ways to decide what is good and bad.

For AI purposes this means that we have to develop software that is good
at a task where we don't know ourselves what is good or bad. There is no
way we can watch a game of Arimaa played by our software and decide
whether it is playing well or badly; we can only watch the results of its
play.

I think this makes it an interesting AI exercise, since our minds are not
clogged with prior knowledge of (what we think is) good and bad strategy.