2 - 23.2. Inference: Filtering, Prediction and Smoothing (Part 1) [ID:30350]

We finished last time with this slide: there are four kinds of inference tasks that we are going to study. One is what we call filtering, which is really computing the belief state. We have a set of random variables, we look at the possible worlds given by their distribution, and we ask, given the evidence, what we believe about the world: what is the probability distribution of my random variables given some evidence? If we go back to this model here, X would be the single rain variable, and we would have evidence of all the umbrellas up to now. So we are really interested in whether it rains today or not, given the umbrellas we have seen in the past. And then of course we have something called prediction: given the evidence so far, what is the probability distribution of rain on Saturday?
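The filtering update just described can be sketched in a few lines. This is a minimal illustration, not the lecture's notation; the transition probability 0.7 and the sensor probabilities 0.9 / 0.2 are assumed values for the umbrella model:

```python
# Filtering for the umbrella world: recursively update the belief state
# P(Rain_t | u_1..u_t) from the previous belief and the new observation.
# The model numbers below are assumptions (a persistent-weather model),
# not values given in this clip.

T = {True: 0.7, False: 0.3}  # P(Rain_t = True | Rain_{t-1})
S = {True: 0.9, False: 0.2}  # P(Umbrella = True | Rain_t)

def predict(belief):
    """One-step prediction: P(Rain_{t+1} = True | evidence so far)."""
    return belief * T[True] + (1 - belief) * T[False]

def filter_step(belief, umbrella):
    """Fold in one observation: predict, weight by the sensor model, normalize."""
    prior = predict(belief)
    like_rain = S[True] if umbrella else 1 - S[True]
    like_dry = S[False] if umbrella else 1 - S[False]
    unnorm_rain = like_rain * prior
    unnorm_dry = like_dry * (1 - prior)
    return unnorm_rain / (unnorm_rain + unnorm_dry)

belief = 0.5  # uniform prior: P(Rain_0 = True) = 0.5
for u in [True, True]:  # umbrella observed on days 1 and 2
    belief = filter_step(belief, u)
print(round(belief, 3))  # → 0.883
```

Note that `predict` is exactly the prediction task with no new evidence: applying it alone pushes the belief state into the future.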

That is prediction in our example. Or, given the evidence of the stock prices of BMW over the last three weeks, what is the likelihood that the stock is going to rise dramatically tomorrow? If it is high, then we should buy some today; if the likelihood of it falling drastically tomorrow is high, then we should sell our stock, and so on. Then there is something called smoothing, which in a way is predicting the past. Remember that in our umbrella example we have some belief about the probability distribution of all of these rain variables, which is fed by all the observations we have made so far. And it seems clear that the more observations we have, the better: the more we know about these unobservable variables. So say...

Say this is today, and we are interested in this earlier day: did it rain? Then of course today we have one, two, three, four observations, whereas on the day we are interested in we only had one, two observations. So more information can mean that I can compute the probabilities better. That is essentially what we call smoothing. And then of course there is something we call most likely explanation, for which the umbrella example is not so well suited. Typically you need that for all kinds of recognition processes. Think about speech recognition: we have a time-dependent process, and the hidden variable might be what the speaker wants to say, or what the speaker actually says. Then we have a noisy sensor, namely what we hear, due to all kinds of things. And you want to find the most likely explanation, that is, the real sequence behind the one you observe.

And there may be lots of stuff in between: background noises, the speaker lisping, you not hearing well, all those kinds of things. And for that it is important, and I am going to come back to this, to realise that we are looking at the most likely sequence, not the sequence of most likely states. And those can be quite different. So, any questions so far? Yes?
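That contrast between smoothing and the most likely explanation can be made concrete with a brute-force sketch on a tiny two-state model. The numbers are assumptions, and the exhaustive enumeration stands in for the Viterbi dynamic-programming algorithm, which does the same maximization in time linear in the sequence length:

```python
# Smoothing vs. most likely explanation on a tiny two-state model.
# A sketch under assumed numbers: the point is that the most likely
# *sequence* need not equal the sequence of per-step most likely *states*.
from itertools import product

states = [True, False]  # rain / no rain
prior = {True: 0.5, False: 0.5}
trans = {True: {True: 0.7, False: 0.3},    # P(Rain_t | Rain_{t-1})
         False: {True: 0.3, False: 0.7}}
sensor = {True: 0.9, False: 0.2}           # P(Umbrella = True | Rain_t)

def obs_prob(state, umbrella):
    return sensor[state] if umbrella else 1 - sensor[state]

def seq_prob(seq, obs):
    """Joint probability of one hidden sequence with the observations."""
    p = prior[seq[0]] * obs_prob(seq[0], obs[0])
    for t in range(1, len(seq)):
        p *= trans[seq[t - 1]][seq[t]] * obs_prob(seq[t], obs[t])
    return p

def most_likely_sequence(obs):
    """Brute-force 'Viterbi': maximize over whole hidden sequences."""
    return max(product(states, repeat=len(obs)), key=lambda s: seq_prob(s, obs))

def smoothed(obs, t):
    """P(Rain_t = True | all observations), by summing out the other steps."""
    total = sum(seq_prob(s, obs) for s in product(states, repeat=len(obs)))
    rain = sum(seq_prob(s, obs) for s in product(states, repeat=len(obs)) if s[t])
    return rain / total

obs = [True, True, False, True, True]
print(most_likely_sequence(obs))
print([round(smoothed(obs, t), 3) for t in range(len(obs))])
```

Taking the argmax of each smoothed marginal separately can yield a sequence different from the single most likely sequence, which is exactly the distinction made above.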

Yes, if you have essentially independent things, then they should be 50-50; that is the boring case. Fortunately, in our example rain is not entirely random. And the best weather prediction we know in the short run is that the weather tomorrow is the same as it was today. Of course that also carries little information. But say that is the case; that gives us a nice example. If the weather does not change much, we are going to predict in the short term that it stays as it is. In the long term, even given observations, that is actually not the best prediction. In the weather case we can actually observe the weather when it comes, and then of course the most likely explanation is exactly the weather we saw, even in the short term. But yes?
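The point that pure prediction loses information in the long run can be seen numerically: with no new observations, repeatedly applying the prediction step mixes the belief toward the stationary distribution of the transition model. A sketch with an assumed persistence probability of 0.7:

```python
# Iterating the prediction step with no new evidence: the forecast mixes
# toward the stationary distribution of the transition model and carries
# less and less information. The persistence value 0.7 is an assumption.

p_stay = 0.7  # P(weather tomorrow = weather today)

def predict(belief, steps):
    """Push P(Rain = True) forward `steps` days with no observations."""
    for _ in range(steps):
        belief = belief * p_stay + (1 - belief) * (1 - p_stay)
    return belief

print(round(predict(0.9, 1), 3))   # → 0.66: the short-term forecast is still informative
print(round(predict(0.9, 50), 3))  # → 0.5: the stationary distribution, no information left
```

This is why persistence forecasting only helps in the short term: every prediction step shrinks the deviation from 50-50 by a constant factor.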

Yes, say you are a historian, okay, and you were deep underground. You might actually be interested in what the weather actually was, say to see whether the director was lying when he said: well, don't worry, you were deep underground, but it actually was raining the whole time. You might want to check whether that is actually the case, and the more you know, the better you can postdict the past. Or say you are interested in weather patterns, say you are a meteorologist. Then of course, even though you could not directly observe the weather, you might be interested in what the pressure curve high in the atmosphere was. Even if you're

Part of a chapter:
Chapter 23. Temporal Probability Models

Access: open access
Duration: 00:29:04 min
Recording date: 2021-03-29
Uploaded: 2021-03-30 14:16:32
Language: en-US

Short overview of Filtering, Prediction and Smoothing. This clip concentrates on Filtering and its formulas.
