3 - 23.2. Inference: Filtering, Prediction and Smoothing (Part 2) [ID:30351]

The next step is not filtering anymore, it's prediction.

Meaning what's the weather like tomorrow, given the umbrella evidence.

And the thing is that if you think about this, you can do exactly the same thing as in

filtering.

This is today, so we know true, true, as in our example.

And I want to know at some point in the future what's the weather.

So we can do exactly the same filtering thing, except that we have no new evidence.

For the algorithm, that just makes things easier.

We have to essentially do nothing new.

And if you think of this sawtooth-like computation, then you can basically leave out these down arrows, because you have no evidence.

And what you're going to do is just compute this forward into the future.

And so you're going to use this forward algorithm essentially without the evidence-update steps, in a slightly simplified form.
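
As a minimal sketch of this simplified forward pass, here is what prediction could look like in Python, assuming the umbrella-world transition model from the running example (rain persists with probability 0.7, switches with probability 0.3). The function name predict and the starting belief of roughly 0.883 for rain after two umbrella observations are illustrative assumptions, not something fixed by the lecture.

```python
# Prediction = the forward algorithm without the evidence-update step:
# repeatedly push the current belief through the transition model.
# Transition model assumed from the umbrella-world example:
# P(rain tomorrow | rain today) = 0.7, P(rain tomorrow | dry today) = 0.3.

def predict(filtered, steps, p_stay=0.7, p_switch=0.3):
    """Push a belief (P(rain), P(dry)) `steps` days into the future."""
    p_rain, p_dry = filtered
    beliefs = [(p_rain, p_dry)]
    for _ in range(steps):
        p_rain, p_dry = (p_stay * p_rain + p_switch * p_dry,
                         p_switch * p_rain + p_stay * p_dry)
        beliefs.append((p_rain, p_dry))
    return beliefs

# Start from an assumed filtered belief (about 0.883 after two umbrellas)
# and watch the prediction drift step by step.
for day, (p_rain, _) in enumerate(predict((0.883, 0.117), steps=8)):
    print(f"day +{day}: P(rain) = {p_rain:.3f}")
```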

And what you observe is that you're getting a situation where prediction kind of tends

to a stationary distribution of the Markov chain.

One of the things we saw in the up arrow phase of the forward algorithm is that if you start

with 0.5, 0.5, and use the transition model on this, you're always going to get 0.5, 0.5.
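
To spell that out with the transition probabilities assumed above (0.7 stay, 0.3 switch), one application of the transition model to the uniform belief gives it straight back:

\[
\langle 0.5,\,0.5\rangle \;\longmapsto\; \langle 0.7\cdot 0.5 + 0.3\cdot 0.5,\;\; 0.3\cdot 0.5 + 0.7\cdot 0.5\rangle \;=\; \langle 0.5,\,0.5\rangle
\]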

And that, of course, is true for all of this phase here.

And not only is it true that you have these fixed points, but you're also kind of tending

towards the fixed points by general mathematical arguments.

And so it's also kind of plausible that when you cease to have evidence, you're just iterating the transition model and leaving it to grind on by itself.

And then it's going to essentially settle back into the stationary distribution, which in our example coincides with the prior.

And only your evidence kind of perturbs that situation.

And so you're going to find these kind of system invariants.

And the only question really is how long does it take?

Because if you're back at the 0.5, 0.5 fixed point, there's no more information.

And so what happens is typically you're starting out with something information driven, right?

Umbrella, not umbrella, not umbrella, umbrella or something like that.

And then it kind of flatlines at some point. What we're interested in is the time until there is no information left in the prediction.

And it's clear that predictions out in that flat region carry no predictive power. You don't even have to compute them; that is something you can tell without taking all the evidence into account.

This time is what we call the mixing time. A lot is known about mixing times, which I'm not going to go into here, but knowing the mixing time gives you some estimate of how long you want to trust your predictions.

The algorithm can predict any future.

It just happens to mean nothing.

It is just as good as random after the mixing time.

And the mixing time is essentially a measure of how stochastic the sequence is up to that point.

How much information it has.

The more wildly it fluctuates, the more information is in it.

And back here, there's no information.

So you want to only predict while there's some information left.
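
To make that concrete, here is a rough sketch, in the same Python setting as above, of estimating when that point is reached. The name steps_until_mixed, the tolerance epsilon, and the stationary belief of 0.5 for rain are assumptions tied to this particular transition model, and counting steps until one starting belief is epsilon-close to stationary is a simplified stand-in for the formal definition of mixing time.

```python
def steps_until_mixed(filtered, epsilon=0.01, p_stay=0.7, p_switch=0.3,
                      stationary_rain=0.5, max_steps=1000):
    """Count prediction steps until the belief is within epsilon of the
    stationary distribution, i.e. until prediction is as good as random."""
    p_rain, p_dry = filtered
    for step in range(max_steps):
        if abs(p_rain - stationary_rain) < epsilon:
            return step
        # One prediction step: push the belief through the transition model.
        p_rain, p_dry = (p_stay * p_rain + p_switch * p_dry,
                         p_switch * p_rain + p_stay * p_dry)
    return max_steps

# With the assumed numbers this comes out at only a few days.
print(steps_until_mixed((0.883, 0.117)))
```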

The only thing you really need to remember is prediction is filtering without evidence

or with partial evidence.

So the algorithm is simple.

We're just using this forward algorithm.

Part of a chapter:
Chapter 23. Temporal Probability Models

Access: Open access

Duration: 00:23:41 min

Recording date: 2021-03-29

Uploaded: 2021-03-30 14:16:38

Language: en-US

The idea of Prediction is mentioned. Also, Smoothing is explained with examples and a Forward/Backward Algorithm. 
