5 - 23.3. Hidden Markov Models (Part 1) [ID:30353]

One of the problems, if we want to actually implement this, is getting these backward messages out of this steaming pile of summations and matrix or tensor products.

There's a good reason I haven't written down "the backward message of so-and-so is this and that": there are loads of numbers in here, many of them, and you would have to pick and choose all of them manually.

Just writing down what these backward messages are is a complex affair, and you really don't want to do it by hand.

And the question is, when are these computations actually easy?

And it turns out that if you have a single discrete variable that has a fixed domain,

then things become easy.

Then you can use linear algebra.

Then you can just basically write down the transition model as a matrix.

Because if you have one variable, then the distribution becomes a vector.

And all of these complex sum-and-product operations are really just matrix multiplications.

So what we can do is have a transition matrix T, which just collects these conditional probabilities P(X_t | X_{t-1}): S times S many numbers, where S is the size of the domain.

And we can have a sensor matrix, which could in principle be time dependent, but with a stationary sensor model, as in our example, it only depends on the observation itself.

For every time step t where we have evidence, we take the respective observation probabilities P(e_t | X_t) and put them on the diagonal of a matrix O_t; multiplying with such a diagonal matrix just gives us pointwise multiplication.
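As a rough sketch of how these matrices might look in NumPy, assuming the classic two-state umbrella world (rain / no rain) with made-up numbers, since the transcript does not fix a concrete model:

```python
import numpy as np

# Transition matrix T: T[i, j] = P(X_t = j | X_{t-1} = i)
# (hypothetical umbrella-world numbers, just for illustration)
T = np.array([[0.7, 0.3],    # rain    -> rain / no rain
              [0.3, 0.7]])   # no rain -> rain / no rain

# Stationary sensor model: P(umbrella observed | state)
P_umbrella_given_state = np.array([0.9, 0.2])

def sensor_matrix(umbrella_seen: bool) -> np.ndarray:
    """Diagonal sensor matrix O_t for the evidence at time t.

    Multiplying by this diagonal matrix is exactly the pointwise
    multiplication with the observation probabilities."""
    p = P_umbrella_given_state if umbrella_seen else 1.0 - P_umbrella_given_state
    return np.diag(p)
```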

And then we can actually write down what the forward message and the backward message are.

The forward message becomes f_{1:t+1} = α · O_{t+1} · T^T · f_{1:t}, with the normalization constant α we had determined earlier.

f is a column vector; applying T^T to it gives a column vector, multiplying by the diagonal matrix O_{t+1} gives a column vector again, and so the dimensions add up nicely. And you can convince yourself that this matrix product actually gets you the right sums and products.

And the smoothing backward message is similar: b_{k+1:t} = T · O_{k+1} · b_{k+2:t}.
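In code, with the hypothetical T and sensor_matrix from the sketch above, these two updates might look like this (a sketch, not the lecture's reference implementation):

```python
import numpy as np

def forward_step(f: np.ndarray, T: np.ndarray, O: np.ndarray) -> np.ndarray:
    """f_{1:t+1} = alpha * O_{t+1} @ T.T @ f_{1:t}; alpha just normalizes."""
    f_new = O @ T.T @ f
    return f_new / f_new.sum()

def backward_step(b: np.ndarray, T: np.ndarray, O: np.ndarray) -> np.ndarray:
    """b_{k+1:t} = T @ O_{k+1} @ b_{k+2:t}; no normalization required."""
    return T @ O @ b
```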

So our algorithm back there becomes very simple.

You just have these matrices: the transition matrix plus one diagonal sensor matrix per time step with evidence.

Then you build those messages, fire up NumPy, or MATLAB, or whatever you have for dealing with large matrices, and just let it grind away.

And this is not a problem even if the domain is rather large.

Linear algebra we can do efficiently.

Other people have implemented that.

So we get good algorithms here, and we can even control the constant factors, because we know how big the matrices are; you can read that off these messages.
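Putting the pieces together, a complete forward-backward smoothing pass is only a few lines; this is a minimal sketch under the same assumed umbrella-world setup (the function and parameter names are mine, not from the lecture):

```python
import numpy as np

def forward_backward(evidence, prior, T, sensor_matrix):
    """Smoothed distributions P(X_k | e_{1:t}) for k = 1..t.

    evidence      : list of observations e_1..e_t
    prior         : initial distribution P(X_0), shape (S,)
    T             : S x S transition matrix
    sensor_matrix : maps an observation to its diagonal matrix O_t
    """
    t = len(evidence)
    # Forward pass: f[k] is the filtered estimate P(X_k | e_{1:k})
    f = [prior]
    for e in evidence:
        f_new = sensor_matrix(e) @ T.T @ f[-1]
        f.append(f_new / f_new.sum())
    # Backward pass, combining messages into smoothed estimates
    b = np.ones_like(prior, dtype=float)
    smoothed = [None] * (t + 1)
    for k in range(t, 0, -1):
        s = f[k] * b
        smoothed[k] = s / s.sum()
        b = T @ sensor_matrix(evidence[k - 1]) @ b
    return smoothed[1:]

# e.g. three days of umbrella sightings with the matrices from above:
# forward_backward([True, True, False], np.array([0.5, 0.5]), T, sensor_matrix)
```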

Now I want to give an example, a somewhat real-world example, and I would like to set it up with something simpler first.

So we have a maze-like structure, and we have a robot. The robot at some point wakes up, it has a map of this maze, and it is interested in finding out where it is.

Part of chapter: Chapter 23. Temporal Probability Models
Access: open access
Duration: 00:21:07 min
Recording date: 2021-03-29
Uploaded: 2021-03-30 14:27:20
Language: en-US

Definition and algorithm of the Hidden Markov Model. An example for Robot Localization is started.
