
Re: [computer-go] A New Razor



Oops, that's right: neural networks have a biased hypothesis space,
which means that a single net cannot represent an arbitrary instance
space within that network's hypothesis space, and therefore the VC
dimension is not infinite (Machine Learning, Tom M. Mitchell).
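
(For anyone following along, the definition at issue, paraphrasing
Mitchell's Chapter 7, so treat the wording as mine: the VC dimension
of a hypothesis space H is the size of the largest instance set S
that H shatters, i.e. for which every possible labeling of S is
realized by some h in H. In LaTeX:

    VC(H) = \max \{\, |S| : H \text{ shatters } S \,\}

Finite VC dimension just means there is some set size beyond which
no instance set can be labeled in every possible way by H.)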

I still think that it is better to maximize the margin, after
classifying all known data into the correct categories, than to take
the simplest explanation which fits the data.  One way to approximate
this is through bagging, as sketched below.
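
To make the bagging step concrete, here is a minimal sketch (my own
illustration, not from any particular paper; it assumes a generic
base_learner whose fit() returns the fitted object, numpy for the
bootstrap resampling, and binary {0,1} labels):

    import numpy as np

    def bag_fit(base_learner, X, y, n_rounds=25, seed=0):
        # Train one copy of the base learner per bootstrap resample
        # (draw len(X) rows with replacement each round).
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_rounds):
            idx = rng.integers(0, len(X), size=len(X))
            models.append(base_learner().fit(X[idx], y[idx]))
        return models

    def bag_predict(models, X):
        # Majority vote over the ensemble's {0,1} predictions.
        votes = np.mean([m.predict(X) for m in models], axis=0)
        return (votes >= 0.5).astype(int)

Any classifier with that fit/predict interface would do as the base
learner, e.g. scikit-learn's DecisionTreeClassifier.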

Thank you, but I think all of this theory is pretty much useless; for
the most part it is false, especially what I said.  It won't get me a
job, pay the bills, or even play go, nor does it improve my life in
any appreciable way.  In fact, I can't think of a more miserable thing
than having studied computational learning theory.  I certainly don't
want to be a guinea pig for some professor's grand theory of life.
Although I do appreciate the attempt at a personalized explanation.

Regards,

Robin Kramer

Endocrinology, and the way hormones bind to DNA to transcribe
neurotransmitters (which are also hormones), is a surefire way to make
one happy :-P, but that is another topic.

>
>
> Robin,
> I don't know anything about you, but it's clear you've
> done an impressive job of reading the computational
> learning theory literature and forming your own
> opinions. It's also clear, however, that you've missed
> many things in so doing. Neural nets do not have an
> infinite VC-dimension. My paper with Haussler in Neural
> Computation 1 (1989) proved the VC-dimension
> of neural nets was finite; if I recall correctly we
> bounded it between W and W log N, where W is the number
> of weights and N the number of nodes. But the effective
> VC-dimension of large neural nets trained by backprop
> will be much lower (although nobody knows how to estimate
> it theoretically) because backprop doesn't exercise
> anywhere near all the degrees of freedom.
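
(For concreteness, the bound being described is, as I read it,

    W \le d_{VC} \le O(W \log N)

with W the number of weights and N the number of nodes; finite VC
dimension means a fixed net can only shatter a bounded number of
training points.)
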
> Likewise, Bayesian methods have their own Occam's razor--
> an alternative, perhaps more intuitive, and equally
> useful version of the razor arises naturally within
> Bayesian methods.
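
(If I follow, this is the marginal-likelihood effect: scoring a
model M by

    P(D \mid M) = \int P(D \mid \theta, M)\, P(\theta \mid M)\, d\theta

penalizes models that spread their prior over many parameter
settings, since each setting then carries less prior mass, so
flexible models pay for their flexibility automatically.)
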
>
> Chapter 4 of *What is Thought?* gives a pedagogical
> treatment of the VC-dimension, MDL, and Bayesian routes
> to Occam's razor. Chapter 6 discusses other topics, such
> as effective dimensions. The treatment is unified and I
> believe it is quite readable even though it gives many
> of the mathematical intuitions-- friends of mine from
> various walks of life read it in draft.
> *What is Thought?* is also organized around the
> goal of understanding cognition, which I believe
> makes all of its discussion and data unified and
> relatively easy to follow.
> With all due respect, I recommend you look at it.
> It doesn't make a lot of sense for me to spend
> 6 years writing, much of that in the effort to
> be pedagogically clear, and then rehash it in dribs
> and drabs less clearly over the next 6 years.
> http://www.whatisthought.com



_______________________________________________
computer-go mailing list
computer-go@xxxxxxxxxxxxxxxxx
http://computer-go.org/mailman/listinfo/computer-go