
Re: computer-go: abstract info and neural nets



Hi,

I am in favor of full board NN evaluation.

I found a bunch of demo Go programs such as EzGO, HandTalk, ManyFaces, Goliath,
and TurboGO on the 9x9 board, and had them play against each other for a while
to collect about a hundred games. Then I used a NN to learn an initial full
board evaluation function. My program (ForeverBlack) then uses the NN to play
against itself and the other programs, and the new game records are used for
further learning.
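
For concreteness, here is a much-simplified sketch of the kind of net I mean;
the layer sizes, board encoding, and win/loss training target are only
illustrative, not the exact setup:

    # Sketch: full-board evaluation for 9x9 Go with one hidden layer,
    # trained by plain backprop on (position, game result) pairs.
    import numpy as np

    BOARD = 9 * 9  # 81 inputs: +1 black stone, -1 white stone, 0 empty

    class FullBoardNet:
        def __init__(self, hidden=40, seed=0):
            rng = np.random.default_rng(seed)
            self.w1 = rng.normal(0, 0.1, (BOARD, hidden))
            self.b1 = np.zeros(hidden)
            self.w2 = rng.normal(0, 0.1, hidden)
            self.b2 = 0.0

        def forward(self, x):
            h = np.tanh(x @ self.w1 + self.b1)        # hidden activations
            return np.tanh(h @ self.w2 + self.b2), h  # evaluation in (-1, 1)

        def train_step(self, x, target, lr=0.01):
            # target: +1 if black won the game, -1 if white won
            y, h = self.forward(x)
            err = y - target
            dy = err * (1 - y * y)            # tanh derivative at output
            dh = dy * self.w2 * (1 - h * h)   # backprop into hidden layer
            self.w2 -= lr * dy * h
            self.b2 -= lr * dy
            self.w1 -= lr * np.outer(x, dh)
            self.b1 -= lr * dh
            return 0.5 * err * err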

After 400+ games, I found ForeverBlack started making living groups,
connections, and kills. The program behaves no worse than an influence-only
player (such as the influence function used by HandTalk), and the first 10~20
moves of a game usually make sense.

As the game records grow, I constantly need to increase the number of hidden
nodes in ForeverBlack's NN, which is painful because the training time becomes
longer and longer. My 800MHz computer cannot get the weights to converge
within half a week of training.

Can someone point me to literature or articles about incremental construction
of NNs for regression (piecewise continuous function regression)?
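
What I am imagining is something roughly like the following (only a rough
sketch: grow the net one hidden unit at a time and train just the new weights,
so the already-learned units do not have to be retrained from scratch):

    # Sketch: incremental growth of a one-hidden-layer regression net.
    # Earlier hidden units are frozen; only the newest unit's input
    # weights and the output weights are updated.
    import numpy as np

    class GrowingNet:
        def __init__(self, n_in, seed=0):
            self.rng = np.random.default_rng(seed)
            self.n_in = n_in
            self.w1 = np.empty((n_in, 0))   # one column per hidden unit
            self.b1 = np.empty(0)
            self.w2 = np.empty(0)           # hidden -> output (linear)
            self.b2 = 0.0

        def forward(self, x):
            h = np.tanh(x @ self.w1 + self.b1)
            return h @ self.w2 + self.b2, h

        def add_unit(self):
            self.w1 = np.hstack([self.w1,
                                 self.rng.normal(0, 0.1, (self.n_in, 1))])
            self.b1 = np.append(self.b1, 0.0)
            self.w2 = np.append(self.w2, 0.0)

        def train_new_unit(self, X, T, lr=0.01, epochs=200):
            # Columns 0..n-2 of w1 stay frozen; only the last column
            # and the output weights move.
            for _ in range(epochs):
                for x, t in zip(X, T):
                    y, h = self.forward(x)
                    err = y - t
                    dh_new = err * self.w2[-1] * (1 - h[-1] ** 2)
                    self.w2 -= lr * err * h
                    self.b2 -= lr * err
                    self.w1[:, -1] -= lr * dh_new * x
                    self.b1[-1] -= lr * dh_new

One would keep calling add_unit/train_new_unit until the error on held-out
positions stops improving, instead of retraining the whole net each time.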

Weimin