
Re[2]: Use of Probability



Dave Dyer wrote:

> This discussion is a move in the right direction - using a Bayesian
> framework to represent the state as a tree of conditional probabilities
> is an improvement - but it is also a dead end

That last is a fairly strong assertion; I would welcome a proof.  :-)

> The problem is that Bayesian reasoning requires you to assign a probability
> to every event, and the whole edifice collapses if events assigned low 
> probabilities actually occur.  

It all depends on how you define "event".  If you choose such high-level
constructs as "survival" and "death", then of course, as Dave's example
shows, you may end up with nonsense probabilities, because, well, "It all
depends."

Instead, consider that at any moment there are at most 361 possible events.
Then the question is, can we order the possible moves from "most-likely-to-be-
chosen-by-Cho-Chikun" to "least-likely-to-be-chosen-by-Cho-Chikun", having
already observed a number of instances of moves actually chosen by Cho Chikun?
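To make that concrete, here is a minimal sketch of one such ordering.  Everything
in it is a simplifying assumption of mine, not an established method: the
"representation" is just the 3x3 neighborhood around each empty point, the
smoothing is naive Laplace, and the training data stands in for observed
professional moves.

```python
from collections import Counter

def pattern_key(board, x, y):
    """Hypothetical representation: the 3x3 neighborhood around (x, y).
    '.' = empty, 'B'/'W' = stones, '#' = off-board."""
    key = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < len(board[0]) and 0 <= ny < len(board):
                key.append(board[ny][nx])
            else:
                key.append('#')
    return ''.join(key)

class MovePredictor:
    """Rank empty points by how often their local pattern was the one
    actually chosen in observed games."""
    def __init__(self):
        self.counts = Counter()   # pattern -> times it was played
        self.seen = Counter()     # pattern -> times it was available

    def observe(self, board, move):
        # Every empty point's pattern was available; one was chosen.
        for y, row in enumerate(board):
            for x, c in enumerate(row):
                if c == '.':
                    self.seen[pattern_key(board, x, y)] += 1
        self.counts[pattern_key(board, *move)] += 1

    def rank(self, board):
        """Return empty points ordered most- to least-likely."""
        cands = [(x, y) for y, row in enumerate(board)
                 for x, c in enumerate(row) if c == '.']
        def prob(pt):
            k = pattern_key(board, *pt)
            # Laplace smoothing: unseen patterns still get some probability.
            return (self.counts[k] + 1) / (self.seen[k] + 2)
        return sorted(cands, key=prob, reverse=True)
```

A 3x3 window is obviously far too crude for real go, but it shows the shape of
the machine: observe chosen moves, then assign every legal empty point a
probability consistent with the observations.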

With proper representation of the data -- and it is possible to unequivocally
determine, i.e., to measure, whether one representation is "better" than some
other -- I think that such an ordering, assigning probabilities to each legal
empty point, consistent with observed data, is indeed possible, and not at all
a dead end.  I can't prove it -- yet...

But watch this space.  You may also want to think about this question in the
meantime:  What is a good way to represent a go "event" so that a data set
of statistically observed events will yield to this sort of analysis, and
thereby provide us with a good classifier in previously unseen situations?
Then you may wish to ask:  What is a better way?  Fact is, some ways _are_
provably better than others.
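One standard sense in which one representation is measurably "better" than
another: whichever assigns higher average log-likelihood to held-out observed
moves.  A toy illustration, with two made-up models over a tiny made-up move
vocabulary (the numbers are invented for the example):

```python
import math

def avg_log_likelihood(probs, held_out_moves):
    """Average log-probability the model assigns to moves it was not
    trained on.  Higher (closer to zero) is better."""
    return sum(math.log(probs[m]) for m in held_out_moves) / len(held_out_moves)

# Two hypothetical models built from different representations of the
# same training data, each yielding a distribution over candidate moves.
model_a = {"attach": 0.6, "extend": 0.3, "tenuki": 0.1}
model_b = {"attach": 0.34, "extend": 0.33, "tenuki": 0.33}  # near-uniform

held_out = ["attach", "attach", "extend"]

score_a = avg_log_likelihood(model_a, held_out)
score_b = avg_log_likelihood(model_b, held_out)
# Model A concentrates probability on what actually happened,
# so it scores strictly better than the near-uniform model B.
```

The point is only that the comparison is an ordinary measurement on held-out
data, not a matter of taste.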

Even if we are unable to correctly predict Cho's choice of move in a given
situation, we may be able to come up with a list of, say, five or seven most
probable choices.  Presto!  We've pruned the tree, and all that remains is
a little reading.  And even when we are wrong -- that is, when we falsely
choose a point that Cho would _not_ play, or miss a move that he _would_
choose -- we can still do better than _random_, which -- again -- has the
desirable effect of pruning the tree for us, even if it does not _solve_ the
"best move" function.

I will continue to be optimistic about this approach.  Lots of work remains
to be done, of course, so for now just assume I'm talking through my hat.
The proof of the pudding is in the eating...  of the enemy's stones.  :-)

> In fact, in problems such as Go there is a large amount of uncertainty
> in almost all such probabilities, and uncertainty is a completely different
> beast from probability. Worse, there is no satisfactory formal framework
> for reasoning about uncertainty.

This is certainly [?] true, but isn't it interesting how Lorenz attractors
and Mandelbrot sets and the like keep turning up in the most unexpected and
disparate places?  Chaos theory may not be very satisfactory from some
standpoints, but it does at least provide an interface between probability
(whatever that _really_ means...  I don't know, do you?) and uncertainty.

Unexpected stuff happens, but luckily it happens within predictable margins.

Rich
-- 
Richard L. Brown              Office of Information Services
rbrown@xxxxxxxxxxxxxxxxx        University of Wisconsin System Administration
rlbrown6@xxxxxxxxxxxxxxxxx    780 Regent St., Rm. 246  /  Madison, WI  53715

p.s.  My Linux ``fortune'' program, quite coincidentally, handed me this today:

      The Heineken Uncertainty Principle:
           You can never be sure how many beers you had last night.