Re: [computer-go] Weights are very important
> On Tue, Jan 13, 2004 at 04:38:41AM +0100, Vincent Diepeveen wrote:
>> At 20:06 12-1-2004 +0100, Nicol N. Schraudolph wrote:
>> >On Mon, 12 Jan 2004, Vincent Diepeveen wrote:
>> >
>> >> Yet a NN when trained will pick some random value from between
>> -36000 and +36000 and start tuning with that.
>> >>
>> >> As a human we know very well that it should be something between 1
>> and 40.
>> >
>> >If you know a priori that the value should be between 1 and 40, why
>> on earth would you not initialize your neural net within that range
>> and let learning refine it from there? When I seek optima in a
>> design
>>
>> For a very simple reason. If you believe that NN's work, it should
>> figure that out itself of course. If you don't believe in it, you
>> don't use them at all.
> Machine learning is an optimization problem rather than a
There are people who believe that all of machine learning is an
optimization problem. Support Vector Machines (SVMs) are in fact purely an
optimization problem, and neural networks are not far removed from
one.
However, some people would say that machine learning is strictly a
statistical problem; there are whole classes of learning algorithms
based on Bayesian analysis.
Yet another camp would claim that machine learning is merely a numerical
measurement problem, and is satisfied to memorize everything that has been
seen before: when presented with an opportunity to make a decision, simply
report the stored case that most nearly resembles the new one.
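That memorize-everything approach is just nearest-neighbor classification.
A minimal sketch (the training cases below are invented purely for
illustration): store every case, and for a new query report the label of
the closest stored case.

```python
def nearest_neighbor(train, query):
    """train: list of (features, label) pairs; query: a feature tuple.

    Returns the label of the stored case closest to the query,
    using squared Euclidean distance.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # min() over the memorized cases is the entire "learning" algorithm.
    features, label = min(train, key=lambda case: dist(case[0], query))
    return label

# Hypothetical two-feature cases, labels chosen for illustration only.
train = [((0.0, 0.0), "white"), ((1.0, 1.0), "black"), ((0.9, 0.2), "black")]
print(nearest_neighbor(train, (0.1, 0.1)))  # closest to (0,0) -> "white"
```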
In any case, the only way different learning techniques can be compared is
with cross-validation. Presetting the weights violates the principle that
a learning algorithm must have no knowledge of the test set when given its
training set; this rules out any sort of pretuning. It is, however, valid
to separate the training data into two parts, train on one, and use the
other as a pseudo test set before the real test -- but once the algorithm
has been tested, it cannot be modified. Even then, the only claim that can
be made is that one algorithm is better than another on that particular
data set. Usually people do not bother running these tests on more than
one data set. (See Machine Learning by Tom M. Mitchell; Bayesian Networks
and Decision Graphs by Finn V. Jensen; Nonlinear Programming by Olvi
Mangasarian.)
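The comparison procedure can be sketched as k-fold cross-validation: split
the data into k folds, let each fold serve once as the held-out test set,
and never let the learner see its test fold during training. The
`majority_learner` and `accuracy` functions below are hypothetical
stand-ins for a real learning algorithm and scoring rule.

```python
def k_fold_scores(data, k, train_fn, score_fn):
    """Return one score per fold; the learner never sees its test fold."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [case for j, fold in enumerate(folds) if j != i
                 for case in fold]
        model = train_fn(train)          # fit on k-1 folds only
        scores.append(score_fn(model, test))
    return scores

# Trivial stand-in learner: always predict the most common training label.
def majority_learner(train):
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def accuracy(model, test):
    return sum(1 for _, y in test if y == model) / len(test)

# Made-up data set: 7 cases labeled "a", 3 labeled "b".
data = [((i,), "a" if i < 7 else "b") for i in range(10)]
print(k_fold_scores(data, 5, majority_learner, accuracy))
```

Note that once the held-out folds have been scored, the rule in the text
applies: the algorithm may not be tuned against those scores and tested
again on the same data.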
For those of you who want to play go: some knowledge of experimental
methodology might be useful when deciding which algorithm to implement for
a particular problem.
Sincerely,
Robin Kramer
P.S. When you get neural networks to the point where you can download my
brain, sign me up; I would like to live forever in a computer orbiting the
sun with my solar cell array, playing Go with my friends. However, I want
to see your experimental methodology first. I would not believe you if you
said "it works" when it only works for your own brain.
> simulation/calculation problem, and I have less experience with such
> problems, but I have *some* experience, and I've read a fair amount
> about such problems. In many hard optimization problems it's very
> helpful to apply prior knowledge about the general properties of the
> solution. The kinds of prior knowledge that you want to have are
> somewhat different than what you want in problems similar to
> integration; in particular, almost any optimization method will almost
> certainly appreciate it very very much if it doesn't need to tunnel out
> of too many local minima in order to find the global minimum you care
> about. One good and classic way to help is to give it the kind of
> information that Nicol Schraudolph described.
>
>> If you believe in it, giving it the accurate range, without the chance
>> to also put a parameter at 0, then you are later fooling yourself when
>> you claim it works.
>
> Do you believe that if people who figure out efficient orbits for
> space missions give their programs initial hints then they are later
> fooling themselves when they says the programs work, that they
> shouldn't give hints if they believe in the programs, and that if they
> don't believe in the programs they shouldn't use the programs at all?
>
> --
> William Harold Newman <william.newman@xxxxxxxxxxxxxxxxx>
> In examining the tasks of software development versus software
> maintenance, most of the tasks are the same -- except for the additional
> maintenance task of "understanding the existing product". -- Robert L.
> Glass, _Facts and Fallacies of Software Engineering_
> PGP key fingerprint 85 CE 1C BA 79 8D 51 8C B9 25 FB EE E0 C3 E5 7C
> _______________________________________________
> computer-go mailing list
> computer-go@xxxxxxxxxxxxxxxxx
> http://computer-go.org/mailman/listinfo/computer-go