RE: computer-go: abstract info and neural nets
Oops -- the Xin Yao reference should have been
"A new evolutionary system for evolving artificial
neural networks", IEEE Transactions on Neural
Networks 8(3):694-713, May 1997, which is
available online at:
http://www.cs.adfa.edu.au/pub/xin/tnn2770.ps.gz
Carl
_________________________________________________
Carl Staelin
Senior Research Scientist
Hewlett-Packard Laboratories
Technion City
Haifa, 32000
ISRAEL
+972(4)823-1237x221 +972(4)822-0407 fax
staelin@xxxxxxxxxxxxxxxxx
_______http://www.hpl.hp.com/personal/Carl_Staelin_______
> -----Original Message-----
> From: Staelin, Carl [mailto:staelin@xxxxxxxxxxxxxxxxx]
> Sent: Sunday, January 13, 2002 10:46 AM
> To: computer-go@xxxxxxxxxxxxxxxxx
> Subject: RE: computer-go: abstract info and neural nets
>
>
> > I am in favor of full board NN evaluation.
> >
> > I found a bunch of demo Go programs, such as
> > EzGO, HandTalk, ManyFaces, Goliath, and TurboGO,
> > and had them play each other on a 9x9 board for
> > a while to collect about a hundred games. Then I
> > used a NN to learn an initial full-board
> > evaluation function. My program (ForeverBlack)
> > then uses the NN to play against itself and
> > other programs, and the new game records are
> > used for further learning.
> >
> > After 400+ games, I found that ForeverBlack had
> > started making living groups, connections, and
> > kills. The program plays no worse than
> > influence-only play (such as the influence
> > function used by HandTalk), and the first 10-20
> > moves of a game usually make sense.
> >
> > As the set of game records grows, I constantly
> > need to increase the number of hidden nodes in
> > ForeverBlack's NN, which is painful because the
> > training time keeps getting longer. My 800 MHz
> > computer cannot get the weights to converge in
> > half a week of training.
> >
> > Can someone point me to literature or articles
> > about the incremental construction of NNs for
> > regression (piecewise continuous function
> > regression)?
>
> One paper I would suggest you read is Xin Yao's
> paper "Evolving artificial neural networks" in
> Proceedings of the IEEE, 87(9):1423-1447, Sept.
> 1999, which is available online at:
> http://www.cs.adfa.edu.au/pub/xin/yao_ie3proc_online.ps.gz
> He uses an evolutionary system to both prune and
> grow networks incrementally, and you might use a
> similar approach to simply grow the networks. He
> uses a generalized feedforward network (GFF)
> architecture, which may prove useful. I think the
> biggest question is where one should (usefully)
> add new nodes.
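>
> To illustrate the growing step: one safe way to add a
> hidden node is to give it zero outgoing weights, so the
> network computes exactly the same function before and
> after the change, and training simply continues from
> where it left off. A minimal NumPy sketch (my own
> illustration, not code from Yao's paper):
>
>   import numpy as np
>
>   def add_hidden_node(W1, b1, W2, rng):
>       """Grow one node in a single-hidden-layer net.
>       W1: (n_hidden, n_in)  input->hidden weights
>       b1: (n_hidden,)       hidden biases
>       W2: (n_out, n_hidden) hidden->output weights
>       """
>       n_in = W1.shape[1]
>       # Small random incoming weights for the new node...
>       w_in = 0.01 * rng.standard_normal((1, n_in))
>       W1 = np.vstack([W1, w_in])
>       b1 = np.append(b1, 0.0)
>       # ...and zero outgoing weights, so the network's
>       # output is unchanged until training adjusts them.
>       W2 = np.hstack([W2, np.zeros((W2.shape[0], 1))])
>       return W1, b1, W2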
>
> Another question is how you are training the
> network, and what it is supposed to be learning.
> Are you using a reinforcement-learning-type
> approach, where the network learns to estimate
> the "value" of a board position, or are you
> doing something different? I would be very
> interested to hear what inputs you are giving
> the network, what outputs you are asking it to
> learn, and whether you are training it via
> reinforcement learning or supervised learning.
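>
> For concreteness, the reinforcement-learning setup I
> have in mind is TD(0) on the position value: after each
> move, nudge the value of the previous position toward
> the (discounted) value of the next one, with a final
> reward of +1 for a win and -1 for a loss. A rough
> sketch; the 81-input board encoding and the tiny tanh
> network are stand-ins I made up, not ForeverBlack's:
>
>   import numpy as np
>
>   N, H = 81, 40   # 9x9 board inputs, hidden nodes
>   rng = np.random.default_rng(0)
>   W1 = 0.1 * rng.standard_normal((H, N)); b1 = np.zeros(H)
>   W2 = 0.1 * rng.standard_normal(H);      b2 = 0.0
>
>   def value(x):   # x: +1 our stone, -1 theirs, 0 empty
>       h = np.tanh(W1 @ x + b1)
>       return np.tanh(W2 @ h + b2), h
>
>   def td_update(x, x_next, reward, done,
>                 alpha=0.01, gamma=1.0):
>       """One semi-gradient TD(0) step: move V(x)
>       toward reward + gamma * V(x_next)."""
>       global W1, b1, W2, b2
>       v, h = value(x)
>       v_next = 0.0 if done else value(x_next)[0]
>       delta = (reward + gamma * v_next) - v  # TD error
>       dv = delta * (1.0 - v * v)       # through out tanh
>       dh = (dv * W2) * (1.0 - h * h)   # through hidden tanh
>       W2 += alpha * dv * h
>       b2 += alpha * dv
>       W1 += alpha * np.outer(dh, x)
>       b1 += alpha * dh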
>
> If you are using a somewhat straightforward
> batch-type approach with traditional back-
> propagation and momentum, then I think you
> might usefully switch to a quasi-Newton
> optimization algorithm such as BFGS, which is
> *far* more efficient. If your networks get too
> large, then you might switch to the limited-
> memory variant, L-BFGS.
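>
> As a (made-up) illustration of how little code the
> switch takes: if you can compute the loss and its
> gradient over the flattened weight vector, SciPy's
> minimize() will run L-BFGS for you. Sketch for a
> one-hidden-layer tanh net with squared error:
>
>   import numpy as np
>   from scipy.optimize import minimize
>
>   N_IN, H = 81, 20   # illustrative sizes
>
>   def unpack(w):
>       i = H * N_IN
>       W1 = w[:i].reshape(H, N_IN)
>       b1, W2, b2 = w[i:i+H], w[i+H:i+2*H], w[-1]
>       return W1, b1, W2, b2
>
>   def loss_and_grad(w, X, y):
>       """Batch squared error and its gradient."""
>       W1, b1, W2, b2 = unpack(w)
>       Hact = np.tanh(X @ W1.T + b1)     # (n, H)
>       pred = Hact @ W2 + b2             # (n,)
>       err = pred - y
>       dp = err / len(y)                 # dLoss/dpred
>       dH = np.outer(dp, W2) * (1 - Hact ** 2)
>       grad = np.concatenate([(dH.T @ X).ravel(),
>                              dH.sum(axis=0),
>                              Hact.T @ dp, [dp.sum()]])
>       return 0.5 * np.mean(err ** 2), grad
>
>   # Stand-in data: board encodings X, target values y.
>   rng = np.random.default_rng(0)
>   X = rng.standard_normal((200, N_IN))
>   y = rng.standard_normal(200)
>   w0 = 0.1 * rng.standard_normal(H * N_IN + 2 * H + 1)
>   res = minimize(loss_and_grad, w0, args=(X, y),
>                  jac=True, method='L-BFGS-B')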
>
> Again, depending on how you are training the
> networks, you might look at a completely
> different training approach, such as NEAT,
> which uses an evolutionary algorithm to choose
> both the network architecture and the
> connection weights. You might look at Kenneth
> Stanley's paper "Evolving neural networks
> through augmenting topologies", which is
> available at:
> http://www.cs.utexas.edu/users/nn/pages/publications/abstracts.html#stanley.utcstr01.ps.gz
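>
> For a flavor of what NEAT does, its add-node
> mutation splits an existing connection in two,
> growing the topology without (initially) changing
> behavior much. A stripped-down sketch (innovation
> numbers and the rest of the NEAT machinery omitted):
>
>   import random
>   from dataclasses import dataclass
>
>   @dataclass
>   class Conn:
>       src: int
>       dst: int
>       weight: float
>       enabled: bool = True
>
>   def add_node_mutation(conns, new_node_id):
>       """Disable one connection and splice a new
>       node into it: the in-link gets weight 1.0 and
>       the out-link keeps the old weight, so the
>       genome's function is nearly preserved."""
>       old = random.choice([c for c in conns if c.enabled])
>       old.enabled = False
>       conns.append(Conn(old.src, new_node_id, 1.0))
>       conns.append(Conn(new_node_id, old.dst, old.weight))
>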
> Cheers,
> Carl
> _________________________________________________
> Carl Staelin
> Senior Research Scientist
> Hewlett-Packard Laboratories
> Technion City
> Haifa, 32000
> ISRAEL
> +972(4)823-1237x221 +972(4)822-0407 fax
> staelin@xxxxxxxxxxxxxxxxx
> _______http://www.hpl.hp.com/personal/Carl_Staelin_______