23.3. Hidden Markov Models (Part 2)

And here is really what happens.

I've put in fat circles for high-probability values,

and mid-sized circles, small circles, and tiny circles

for various lower degrees of probability.

And what you're noticing is that the four

high-probability places are exactly the same ones

as in the deterministic case.

I've used an error rate of 0.2 here.

If the error rate goes up, the differences between these

probabilities become fuzzier and fuzzier.

If the error rate reaches 100%, they all look the same.
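To make concrete how such an error rate enters the model, here is a minimal sketch of the kind of sensor model used in this localization example. It assumes four independent obstacle sensors (north, south, east, west) that each flip their bit with probability epsilon; the exact sensor layout is an assumption, not taken from the lecture slides.

```python
# Minimal sketch of a four-bit noisy sensor model for localization.
# Assumption: each square reports four obstacle bits (N, S, E, W) and
# each bit is flipped independently with error rate epsilon.

def observation_probability(true_bits, observed_bits, epsilon=0.2):
    """P(e | x): likelihood of the observed reading in square x."""
    # d = number of bits where the observation disagrees with the truth
    d = sum(t != o for t, o in zip(true_bits, observed_bits))
    n = len(true_bits)
    return (1 - epsilon) ** (n - d) * epsilon ** d

# Example: obstacles truly to the north and south, but the east sensor
# also fires erroneously.
print(observation_probability((1, 1, 0, 0), (1, 1, 1, 0)))  # 0.8**3 * 0.2 = 0.1024
```

With epsilon at 0.2, one wrong bit still leaves a clearly dominant likelihood for the matching squares; as epsilon grows, these likelihoods flatten out, which is exactly the fuzziness just described.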

That's an improvement over the case we had before.

And qualitatively, we're still getting the same result

as in the deterministic case because we actually

have the same kind of obstacle situation

and the error rate is rather low.

Now, you can imagine that in more complex situations,

we will get some ties here and thus

more non-deterministic behavior.

So let's recap.

We have a base situation, which we can model very easily

in the deterministic case.

And then we model the non-deterministic case.

Here, we're lucky.

We can use a hidden Markov model,

which means breaking out NumPy.

And things are easy.

We have to determine our hidden variables, X_i.

We have to determine our evidence variables, E_i.

We have to make sensible assumptions about all of those.

And once we do, we can actually

use standard technology, i.e. linear algebra,

to compute, to filter, and to estimate where we are,

where we might be, and eventually, possibly,

where we will be.
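To make the "standard technology" concrete, here is a minimal NumPy sketch of one filtering step in matrix form, f_{t+1} proportional to O_{t+1} T^T f_t. The three hidden states and all the numbers are made up for illustration; they are not from the lecture's example.

```python
import numpy as np

# Minimal filtering sketch with made-up numbers: three hidden states,
# T[i, j] = P(X_{t+1} = j | X_t = i), and for each observation a vector
# obs_probs with obs_probs[i] = P(e | X = i).

T = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

def filter_step(f, obs_probs):
    """One forward update: f <- normalize(O @ T.T @ f), O = diag(obs_probs)."""
    f = np.diag(obs_probs) @ T.T @ f
    return f / f.sum()  # renormalize to a probability distribution

f = np.full(3, 1.0 / 3)                       # uniform prior: no idea where we are
for obs_probs in [np.array([0.8, 0.1, 0.1]),  # hypothetical sensor likelihoods
                  np.array([0.7, 0.2, 0.1])]:
    f = filter_step(f, obs_probs)
print(f)  # current belief state: where we probably are now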

And by the way, this is one of the mechanisms

that you have in self-driving cars, which actually have

basically the same kind of information to work with

if their GPS isn't working correctly, say.

And then you can actually use these kinds of techniques there.

You should actually try to do such exercises yourselves.

We've gone through the steps here.

Your ability to do that kind of modeling

might be exactly what your future employer is looking for:

someone who sees, ah, they went through AI 1 and 2 in Erlangen,

so that's exactly what we need.

The first thing they'll have you do is something like this.

Part of chapter: Chapter 23. Temporal Probability Models

Duration: 00:18:56 min

Recorded: 2021-03-29

Language: en-US

The Robot Localization example is continued and further applications are discussed. Also, the so-called Country dance algorithm is explained. 
