Conditional independence.
Recall that independence means we have two events and we can
get the probability of both being the case by simply multiplying
their individual probabilities. Or in other words,
if A and B are independent,
then the probability of A given B is the same as the probability of A.
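Written out (my rendering of the definition just stated):

```latex
% Independence of two events A and B:
P(A \cap B) = P(A)\, P(B)
% Equivalently: conditioning on B tells us nothing new about A:
P(A \mid B) = P(A)
```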
Now, as I said, full independence does not actually always hold.
Very often you have something weaker:
two things that are independent unless some condition,
which is hopefully quite rare, is the case.
And to understand this, I would like to upgrade our example with the toothache.
(When you are at the dentist, after all, language production is difficult.)
If you go to a dentist, it is usually because you have a toothache.
The first thing they do is take this little hooked metal probe
and start scraping it across your teeth.
If it glides smoothly over a tooth, then everything is fine;
if it catches, then there is a cavity there,
which saves the dentist from essentially crawling into your mouth
to find the cavities.
But this probe is a kind of sensor, and it does not always work.
So that is the catch event.
And the important thing here is: if you have a cavity,
then in 90% of the cases the trained dentist will actually find it
with this little metal hook.
But you might also have funny teeth, so there is some probability that the probe
will catch even if you do not have a cavity;
that happens in 20% of all cases.
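In formulas, the sensor model for the probe just described reads:

```latex
% The probe catches on a real cavity in 90% of the cases ...
P(\mathit{catch} \mid \mathit{cavity}) = 0.9
% ... but also catches on "funny" healthy teeth in 20% of the cases:
P(\mathit{catch} \mid \lnot\mathit{cavity}) = 0.2
```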
Okay, so we have slightly upgraded the example.
You go to the dentist, you have a toothache,
and indeed the dentist's probe actually catches.
Now, what is the probability that you have a cavity in this case?
And of course, by Bayes' rule, you want this diagnostic probability,
and to get it you go via the causal probabilities:
we know P(Cavity), and we know conditional probabilities
such as P(toothache | Cavity) and P(catch | Cavity).
So this is what we are going to use first.
And then, of course, we are going to open the little toolchest
that we have developed last week and this week.
So what we do is look at normalization.
We have the general case: there is something we are interested in,
let us call it X, and some evidence e.
And by our normalization theorem, we know that the conditional probability
P(X | e) is alpha times the joint probability of X and the evidence.
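As a formula (boldface P for the whole distribution over the values of X):

```latex
% Normalization: alpha = 1 / P(e) makes the entries sum to one.
\mathbf{P}(X \mid \mathbf{e}) = \alpha\, \mathbf{P}(X, \mathbf{e})
```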
What we do next is use the product rule,
and what I am doing here is really prototypical for these computations.
The product rule essentially says, once we order the variables correctly,
that this joint probability is the probability of the evidence given X,
times P(X), where X is Cavity and the evidence e is toothache and catch.
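Written out, this step is:

```latex
% Product rule P(X, e) = P(e | X) P(X), instantiated with
% X = Cavity and e = toothache ∧ catch:
\mathbf{P}(\mathit{Cavity} \mid \mathit{toothache}, \mathit{catch})
  = \alpha\, \mathbf{P}(\mathit{toothache}, \mathit{catch} \mid \mathit{Cavity})\,
    \mathbf{P}(\mathit{Cavity})
```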
So if we do that, then we have a nice match here:
we get the probability of toothache and catch given Cavity,
times the probability of Cavity.
Note that Cavity is capitalized here, so it stands for the whole random variable,
and we have two possible values to consider: cavity and not cavity.
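To make the recipe concrete, here is a minimal Python sketch of the whole computation. The catch probabilities (0.9 and 0.2) are the ones given above; the prior P(Cavity) and the toothache likelihoods are hypothetical placeholder numbers, and factoring P(toothache, catch | Cavity) into a product assumes that toothache and catch are conditionally independent given Cavity, which is exactly where this section is headed.

```python
# Minimal sketch: P(Cavity | toothache, catch) via normalization,
# the product rule, and conditional independence.

# Sensor model from the lecture:
p_catch_given = {True: 0.9, False: 0.2}   # P(catch | Cavity)

# Hypothetical placeholder numbers (NOT from the lecture):
p_cavity = {True: 0.2, False: 0.8}        # prior P(Cavity)
p_tooth_given = {True: 0.6, False: 0.1}   # P(toothache | Cavity)

# Product rule: P(Cavity, toothache, catch)
#   = P(toothache, catch | Cavity) * P(Cavity)
# Conditional independence given Cavity lets us factor the evidence:
#   P(toothache, catch | Cavity)
#     = P(toothache | Cavity) * P(catch | Cavity)
unnormalized = {
    c: p_tooth_given[c] * p_catch_given[c] * p_cavity[c]
    for c in (True, False)
}

# Normalization: alpha = 1 / sum over both values of Cavity.
alpha = 1.0 / sum(unnormalized.values())
posterior = {c: alpha * p for c, p in unnormalized.items()}

print(f"P(cavity | toothache, catch) = {posterior[True]:.3f}")
```

With these placeholder numbers the posterior comes out at roughly 0.87, so the two pieces of evidence together make a cavity quite likely.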