[computer-go] Weights are very important
At 17:38 11-1-2004 +0100, chrilly wrote:
>
>Waldviertel-Hochland, 11.01.2004
>The quality of an evaluation function is, according to my experience in
>computer chess, determined by:
>1) The number and the quality of positional features.
>2) The accuracy with which these features are classified, especially the
>percentage of wrong classifications.
>3) The fine-tuning of the weights.
Though from a human viewpoint I do not disagree with your findings, I hope
you realize that a human hands-down tunes weights better than any NN does.
For a human it is, for example, trivial that influencing a square with a
strong group, even with the opponent's potential nearby, can never be worth
more than 100 points (assuming a square of territory is worth 100 points,
so 36100 for the entire board).
Yet an NN, when trained, will pick some random value between -36000 and
+36000 and start tuning from that.
As humans we know very well that it should be somewhere between 1 and 40.
I would pick 30 for an important feature and 5 for a very unimportant one.
So consider the accuracy of a rough human guess, made just once by me, an
absolute idiot at playing Go (despite my strong chess-playing abilities,
even playing Go full-time I would probably never get better than 3 dan or
so, which is of course a very poor level for having complicated Go
knowledge; since I never play, let's guess I am around 15 kyu).
So even with very little knowledge of Go, I am capable of tuning very
accurately for it, if we compare against the statistical chance that a
network will pick a value of about 5.
A value of 1..10 out of a total range of [-36000, 36000] represents
a chance of: 10 / 72000 ≈ 0.000139.
In short, you are underestimating how difficult it is for automatic
learning approaches to pick that value of 1..10.
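The arithmetic above can be sketched in a few lines (a toy calculation,
assuming integer weights drawn uniformly over the range quoted in the
text):

```python
# Toy calculation mirroring the figure above: the chance that a weight
# initialised uniformly at random over [-36000, 36000] happens to fall
# in the humanly plausible band 1..10.
low, high = -36000, 36000
favourable = 10            # the ten values 1..10
total = high - low         # range width, 72000 as in the text
p = favourable / total
print(f"{p:.6f}")          # prints 0.000139
```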
Best regards,
Vincent
>According to my experience, 1) and 2) are much more important than 3).
>Logically, one cannot fine-tune a feature when the program does not know
>about it at all. It is also pointless to fine-tune the weight when the
>feature is mis-classified in a high percentage of positions.
>E.g. it is important in chess to have some knowledge of center control.
>The formal definition, i.e. recognition in a program, of center control is
>a very difficult problem. E.g. GM J. Nunn writes: "There is no mathematical
>definition of this term, but one usually sees it". Unfortunately, programs
>have no chess eye.
>Once one has implemented the right features and can classify them
>correctly - most of the time - the setting of the weights can lie in a
>relatively wide range without affecting the playing strength of the program
>in a significant way. The program plays differently with different weights,
>but it is astonishing how little impact this has on the Elo rating.
>
>I think the same should hold in Go. I assume it is much more important to
>recognize groups correctly, or to divide the game into the correct
>subgames, than to optimize the weights. I also assume that the flat
>gradient - changing weights has only a marginal impact on performance - is
>a serious problem for an automatic learner. From my hand-tuning experience,
>I assume that there are also a lot of local optima.
>
>Note: I assume, of course, that the weights are already in a more or less
>optimal range. The flat-gradient "law" is only true around this optimum.
>
>Best Regards
>Chrilly
_______________________________________________
computer-go mailing list
computer-go@xxxxxxxxxxxxxxxxx
http://computer-go.org/mailman/listinfo/computer-go