
Re: Use of Probability



John Aspinall wrote:

[A bunch of stuff, and every word of it true.]

> Recommended reading: E.T. Jaynes' textbook on Bayesian reasoning.
> Available on the net from http://omega.albany.edu:8080/JaynesBook

Thanks especially for this URL, John!  I have become fascinated
by Bayesian probability (which differs, at least historically,
from classical probability -- there were some big grudges against
it for a long time... personality conflicts, etc. -- but I digress).

Maybe I should say "probabilities" [plural] because, as John put it:
> It is an error to think that there is a single number - "the probability" -
> that can represent the uncertainty associated with a game decision.  Nearly
> every probability is a conditional probability; it depends on other
> assumptions.  There may be many different probabilities for an event --
> dependent upon what the event is conditioned on.

Exactly so.  However, inferring those probabilities from sampled data is
what Bayesian methods are all about, isn't it?  We use Bayes' rule to
build a _classifier_ that partitions the set of possible events into
smaller subsets, based on the observed likelihood of an event under a
given set of conditions.
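
(To make that concrete for myself, here's a toy sketch in Python.  All
the words and counts are invented purely for illustration; the point is
just that the classifier picks whichever class has the larger posterior
under Bayes' rule.)

    # Toy Bayes classifier.  P(class | word) is proportional to
    # P(word | class) * P(class).  All counts below are made up.
    counts = {
        "spam": {"free": 40, "meeting": 2},
        "ham":  {"free": 5,  "meeting": 30},
    }
    totals = {c: sum(w.values()) for c, w in counts.items()}
    prior = {c: totals[c] / sum(totals.values()) for c in counts}

    def classify(word):
        # Posterior for each class, up to a shared normalizing constant.
        posterior = {c: prior[c] * counts[c].get(word, 0) / totals[c]
                     for c in counts}
        return max(posterior, key=posterior.get)

    print(classify("free"))     # -> spam
    print(classify("meeting"))  # -> ham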

Seems to me, based on my limited reading so far, that if you ask a classical
probabilist what the probability of a flipped coin's coming up heads is, he
will flatly state:  "Fifty percent.  Everybody knows that."  End of story.

But if you ask a Bayesian the same thing, he will say, "Let me see
the coin."  And, he'll want to run statistical experiments using the
coin, just to find out whether he can do better than _random_ in predicting
the coin's behavior.  Not in any specific flip, maybe, but over the long haul.
If one side of the coin is a little heavier than the other, for example...

Anyway, I hope I haven't grossly mischaracterized the situation, but I think
it has been shown that you _can_ do better than random at predicting the future,
using Bayesian probabilities, and in some cases with a surprisingly small set of
sample data.  Corrections are most welcome.
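
(Here's a minimal sketch of the coin experiment, assuming the standard
Beta-Binomial updating rule; the flip data is invented.  Starting from a
uniform prior, even a handful of flips pulls the estimate away from
fifty percent.)

    # Bayesian update of a coin's heads-probability.
    # Uniform prior over the bias: Beta(alpha=1, beta=1).
    alpha, beta = 1.0, 1.0

    for flip in "HHTHHHTH":    # hypothetical data: 6 heads, 2 tails
        if flip == "H":
            alpha += 1
        else:
            beta += 1

    # Posterior mean estimate of P(heads) after only 8 flips.
    print(alpha / (alpha + beta))   # -> 0.7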

The question of what makes one classifier better than another reduces to the
familiar problem, "How do I represent the data?"

> Summary: You can't just talk about "the probability".  There's no such
> animal.  There's "the probability of A given X", and "the probability of A
> given Y".  They're not the same.  Representing and combining them is the
> hard part.

Amen to that, brother!
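
One tiny numeric illustration (numbers invented): condition the same
event on two different pieces of information and you get two different
probabilities.

    # Same event A ("next flip is heads"), two different conditions.
    # X: we simply assume the coin is fair.
    p_heads_given_fair = 0.5

    # Y: we've seen this coin land heads 8 times in 10 flips, and we
    # update a uniform Beta(1, 1) prior on its bias (posterior mean).
    heads, tails = 8, 2
    p_heads_given_data = (1 + heads) / (2 + heads + tails)

    print(p_heads_given_fair, p_heads_given_data)   # 0.5 vs 0.75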

Thanks again, John.  Haven't been able to connect to omega.albany.edu yet; I
hope that means the rest of you are only keeping the server busy for a while.
:-)

Rich
-- 
Richard L. Brown              Office of Information Services
rbrown@xxxxxxxxxxxxxxxxx        University of Wisconsin System Administration
rlbrown6@xxxxxxxxxxxxxxxxx    780 Regent St., Rm. 246  /  Madison, WI  53715