33 - Recap Clip 6.7: Hidden Markov Models (Part 2) [ID:30436]

Now, of course, once we've actually introduced an error rate for the sensor, we can't be

certain about anything anymore, so we have to work with probabilities everywhere.

If we do that with an error rate of 0.2, which is already rather high, but okay, and look at the same cases we did before, i.e. we start with the sensor feedback that there is an obstacle north, south, and west, then we can compute the probability that our Mars rover is in any of those squares, resulting in roughly that figure, right?

You'll notice that the squares we were certain about before, where we knew it had to be one of those, are now rather big, because their probability is still rather large, but all of the other squares still carry these small colored circles, which tell us: okay, it's not completely unlikely that the robot is there.

And of course that roughly corresponds to how close a square's true surroundings are to the actual sensor feedback, right? So if we get north, south, west: here the true reading would be north, south, so that's just one error bit; here it would be only north, so we have two error bits, and this square is rather unlikely. Here, north would still be correct, but this bit would be wrong, this one would be wrong, and this one would be wrong, so this square is rather unlikely as well. I assume you get the idea.
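
To make the "error bits" idea precise: assuming the usual sensor model for this kind of grid localization (each of the four direction bits north, south, east, west is reported wrongly with probability ε, independently of the others), the likelihood of a reading e_t in square i depends only on the number of disagreeing bits d_it:

    P(e_t \mid X_t = i) = (1 - \epsilon)^{4 - d_{it}} \cdot \epsilon^{d_{it}}

With ε = 0.2, every additional error bit costs a factor of ε / (1 - ε) = 0.25, which is why the likelihood drops quickly with each disagreeing direction; normalizing these likelihoods over all free squares gives the probabilities shown in the figure.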

And now of course we can again go one step further and assume we get one more observation after moving; then we end up with this picture, where we now have one clear winner with very high probability and a couple of squares that have some probability, but it's sufficiently low that we don't really need to care about them.
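
As a rough sketch of what this filtering step computes, here is a minimal Python version on a hypothetical four-square corridor (the lecture's actual example is a 2-D maze; the neighbor bits and transition matrix below are invented purely for illustration):

    import numpy as np

    EPSILON = 0.2  # per-direction sensor error rate

    def sensor_likelihood(reading, true_bits):
        """P(reading | square): each of the four direction bits flips with prob. EPSILON."""
        errors = sum(r != t for r, t in zip(reading, true_bits))  # the "error bits"
        return (1 - EPSILON) ** (4 - errors) * EPSILON ** errors

    def forward_step(belief, transition, likelihoods):
        """One filtering step: predict via the transition model, then update with evidence."""
        predicted = transition.T @ belief     # P(X_{t+1} | e_{1:t})
        updated = likelihoods * predicted     # multiply in P(e_{t+1} | X_{t+1})
        return updated / updated.sum()        # normalize

    # Hypothetical corridor of four free squares; (N, S, E, W) obstacle bits made up for illustration.
    true_bits = [(1, 1, 0, 1), (1, 1, 0, 0), (1, 0, 0, 0), (1, 1, 1, 0)]
    # The robot stays put or moves one square to the right with equal probability.
    transition = np.array([[0.5, 0.5, 0.0, 0.0],
                           [0.0, 0.5, 0.5, 0.0],
                           [0.0, 0.0, 0.5, 0.5],
                           [0.0, 0.0, 0.0, 1.0]])

    belief = np.full(4, 0.25)                              # uniform prior
    reading1 = (1, 1, 0, 1)                                # "obstacle north, south, west"
    belief = belief * np.array([sensor_likelihood(reading1, t) for t in true_bits])
    belief /= belief.sum()
    print("after first reading: ", belief)

    reading2 = (1, 1, 0, 0)                                # second reading, after one move
    belief = forward_step(belief, transition,
                          np.array([sensor_likelihood(reading2, t) for t in true_bits]))
    print("after move + update:", belief)

The prediction-plus-update step is what sharpens the picture: the transition model spreads the belief a little, and the second observation concentrates it again.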

Right, one advantage of my clicker not working is that I can actually use the mouse, which is nice.

Right, so much for that. Then we can actually do some statistics and look at how well all of this works.

Of course, you know the whole jazz: we have the smoothing algorithm, which just propagates the probabilities backwards given some sequence of evidence, and we can use the Viterbi algorithm to find the most likely sequence of values of our hidden variable that accounts for the observations we make.
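
As a reminder of the notation (assuming the usual forward message f and backward message b from the main clip), smoothing combines the evidence collected before and after the query step:

    P(X_k \mid e_{1:t}) = \alpha \, f_{1:k} \times b_{k+1:t}

where f_{1:k} = P(X_k, e_{1:k}) comes out of the forward pass and b_{k+1:t} = P(e_{k+1:t} \mid X_k) is what gets propagated backwards.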

In this case, that just means we can use smoothing to find out where our robot started out, and

we can use Viterbi to figure out the most likely path that our robot actually took to

end up in whatever state it is now.
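
A minimal, self-contained sketch of the Viterbi recursion (not the lecture's implementation; the toy transition and observation numbers are invented just to have something runnable):

    import numpy as np

    def viterbi(prior, transition, likelihoods):
        """Most likely state sequence; likelihoods[t, j] = P(e_t | X_t = j)."""
        n_steps, n_states = likelihoods.shape
        m = np.zeros((n_steps, n_states))                # m[t, j]: best path probability ending in j at t
        back = np.zeros((n_steps, n_states), dtype=int)  # argmax pointers for backtracking
        m[0] = prior * likelihoods[0]
        for t in range(1, n_steps):
            scores = m[t - 1][:, None] * transition      # scores[i, j]: come from i, go to j
            back[t] = scores.argmax(axis=0)
            m[t] = scores.max(axis=0) * likelihoods[t]
        path = [int(m[-1].argmax())]                     # backtrack from the best final state
        for t in range(n_steps - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Tiny made-up example: 3 hidden states, 4 observations.
    prior = np.array([1 / 3, 1 / 3, 1 / 3])
    transition = np.array([[0.7, 0.3, 0.0],
                           [0.0, 0.7, 0.3],
                           [0.3, 0.0, 0.7]])
    likelihoods = np.array([[0.9, 0.1, 0.1],
                            [0.2, 0.8, 0.1],
                            [0.1, 0.9, 0.2],
                            [0.1, 0.2, 0.9]])
    print(viterbi(prior, transition, likelihoods))       # -> [0, 1, 1, 2]

For the robot, the states would be the free squares and likelihoods[t] the sensor-model values from above; the returned sequence is then the most likely path the robot took.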

Looking at those statistics, I'm still not sure.

I think you have the same problem.

No idea why the left graph peters out at a Manhattan distance of one.

Has anyone figured that out maybe?

Okay, so I guess mentally replace the one with a zero.

I'm pretty sure that that's just an artifact of whatever.

Oh, they're using Manhattan distance.

So maybe they just used Manhattan distance wrong or had some artifact that prevented

them from actually reaching zero.

I'm pretty sure it should be zero.

And what's probably not too surprising is the main information in here, namely that the bigger our error rate, the longer this actually takes to converge.

The same, of course, for the Viterbi path.

If I have an error rate of 0.2, then it will take a lot longer to actually converge towards

the proper sequence.

Right.

And having done all that, we could come up with the idea that maybe we can use the equation that we're using for the forward algorithm, i.e. for inference, in smoothing as well.

How do we do that?
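
For reference, the forward equation meant here, in the usual matrix form (f is the forward message, T the transition matrix, O_{t+1} the diagonal matrix of observation likelihoods; the slides may write this element-wise instead):

    f_{1:t+1} = \alpha \, O_{t+1} T^{\top} f_{1:t}

The question being raised is whether this same recursion can somehow be reused for the backward part of smoothing.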

Part of chapter: Recaps
Accessible via: Open access
Duration: 00:10:39 min
Recording date: 2021-03-30
Uploaded on: 2021-03-31 10:58:19
Language: en-US

Recap: Hidden Markov Models (Part 2)

Main video on this topic: chapter 6, clip 7.
