23.4. Dynamic Bayesian Networks

Okay, so that's one of the things you can do.

If you can condense everything into one variable, then HMMs are a good idea because you can

just use linear algebra.
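To make "just use linear algebra" concrete, here is a minimal filtering sketch in Python. The transition and sensor numbers are the usual umbrella-example values and are my own illustration, not read off anything in this clip.

```python
import numpy as np

# Illustrative umbrella HMM (standard example values, not taken from this clip).
# T[i, j] = P(X_t = j | X_{t-1} = i); state order: (Rain, NoRain).
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])

# Diagonal observation matrices: O[e][j, j] = P(Umbrella = e | X_t = j).
O = {True:  np.diag([0.9, 0.2]),
     False: np.diag([0.1, 0.8])}

def filter_step(belief, evidence):
    """One forward (filtering) step: predict with T, weight by the evidence, renormalize."""
    unnormalized = O[evidence] @ T.T @ belief
    return unnormalized / unnormalized.sum()

belief = np.array([0.5, 0.5])        # prior over Rain_0
for e in [True, True, False]:        # an observed umbrella sequence
    belief = filter_step(belief, e)
print(belief)                        # P(Rain_3 | umbrella evidence)
```

The whole update is one matrix-vector product per time step plus a normalization, which is why condensing the state into one variable pays off.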

Another way we can go forward is with what are called dynamic Bayesian networks, an idea I have, somewhat under the radar, been alluding to all along.

We will call a Bayesian network dynamic if its variables are indexed by a time structure; that is what makes it dynamic. And then we have these time slices.

Our example, remember the umbrella example, would be such a network, where we condense the whole stationary part into one time slice. And since the model is stationary, one slice gives us enough places to put all the conditional probability tables.
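As a sketch of what "enough places for all the conditional probability tables" means: because the model is stationary, one slice template plus a prior describes the whole network. The snippet below is my own illustration of such a template, again with the usual umbrella numbers assumed.

```python
# One time slice of the stationary umbrella DBN, written as a template
# (my own illustration; the probabilities are the usual example values).
umbrella_slice = {
    "Rain_t":     {"parents": ["Rain_t-1"],
                   "cpt": {True: 0.7, False: 0.3}},   # P(Rain_t = true | Rain_t-1)
    "Umbrella_t": {"parents": ["Rain_t"],
                   "cpt": {True: 0.9, False: 0.2}},   # P(Umbrella_t = true | Rain_t)
}
prior = {"Rain_0": 0.5}   # the only extra piece needed for the first slice
```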

Another example is robot motion. Say we have a battery, and the initial values are always here on the left. What we are really interested in is the robot's location X; think of it as a two-tuple, a point in the plane. Then there are things like the velocity, X dot, and the battery charge, which is again something we cannot see directly, but for which we have an evidence variable, the battery meter, which tells us it is half full or something like that.

This part here would then be the time slice, and here we have the observable variables, the battery meter and this one. If we unroll this, we get a second-order Markov model here.
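To write the same thing down as data, one could record, per slice, which variable has which parents. The parent lists below are an assumption on my part, reconstructed from the kind of figure being described here, so treat them as a sketch rather than the exact arrows.

```python
# Time-slice structure of the robot-motion DBN (parent sets are assumptions
# reconstructed from the figure described in the lecture).
robot_slice_parents = {
    "X_t":       ["X_t-1", "Xdot_t-1"],        # new location from old location and velocity
    "Xdot_t":    ["Xdot_t-1", "Battery_t-1"],  # velocity may degrade as the battery drains
    "Battery_t": ["Battery_t-1", "Xdot_t-1"],  # driving fast drains the battery
    "BMeter_t":  ["Battery_t"],                # the observable battery meter
}
observable = ["BMeter_t"]                      # evidence variables in each slice
```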

The good thing here is that you can directly use Bayesian network methods: if you have a network like this, you can basically unroll it into a finite network up to time t. That gives us a Bayesian network in the conventional sense. It is finite, and we can use normal Bayesian network methods on it, as sketched below.
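Unrolling is mechanical: copy the slice template t times and rename the variables with their time index, which yields an ordinary finite Bayesian network. A minimal sketch, assuming a slice template in the dictionary form used above (a hypothetical helper, not lecture code):

```python
def unroll(slice_parents, prior_vars, t_max):
    """Unroll a DBN slice template into slice-0 nodes and the edge list of a finite BN."""
    edges = []
    for t in range(1, t_max + 1):
        for child, parents in slice_parents.items():
            child_t = child.replace("_t", f"_{t}")
            for p in parents:
                if p.endswith("_t-1"):                 # parent lives in the previous slice
                    p_t = p.replace("_t-1", f"_{t-1}")
                else:                                  # parent lives in the same slice
                    p_t = p.replace("_t", f"_{t}")
                edges.append((p_t, child_t))
    return [f"{v}_0" for v in prior_vars], edges

slice0, edges = unroll({"Rain_t": ["Rain_t-1"], "Umbrella_t": ["Rain_t"]}, ["Rain"], 3)
print(edges)   # ('Rain_0', 'Rain_1'), ('Rain_1', 'Umbrella_1'), ('Rain_1', 'Rain_2'), ...
```

The result is just nodes and edges with concrete time indices, so any standard Bayesian network inference procedure can be run on it.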

And the more Markovian the whole thing is, the better the network behaves: the more Markovian these networks are, the fewer arrows we have, and few arrows are beautiful in Bayesian networks because they make the algorithms fast. Here we have relatively few arrows, which in this particular case also gives us a polytree structure, which means inference is good. We actually only have a single tree here, and a very skinny one, so life is good here.

Of course, the naive method, which just uses the Bayesian network technology we have had so far, gives us an update cost that is linear in t, because we always have to thread through all the inference steps again. We are not actually using stationarity here, but we are making use of the fact that we have lots of conditional independencies.

If you compare this with, say, HMMs: take a dynamic Bayesian network, which is essentially such a thing here, and condense all of the state variables into a single variable; here is one. If you now trace which ones actually depend on which ones, it turns out that you have three variables, which gives you two to the three possible states. And it is rigged, of course, in a way such that everything depends on everything, which means that if we did this with twenty of those instead of three, we would still get a dynamic Bayesian network that has only 160 parameters.
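The arithmetic behind the 160, as I read it (this follows the standard textbook comparison and assumes each Boolean state variable has three Boolean parents in the previous slice):

```python
def dbn_parameters(n_vars, n_parents=3):
    """CPT entries for n Boolean state variables, each with n_parents Boolean parents."""
    return n_vars * 2 ** n_parents

def hmm_states(n_vars):
    """States of the single HMM variable obtained by condensing n Boolean variables."""
    return 2 ** n_vars

print(dbn_parameters(3),  hmm_states(3))    # 24 parameters vs. 8 states
print(dbn_parameters(20), hmm_states(20))   # 160 parameters vs. 1,048,576 states
print(hmm_states(20) ** 2)                  # HMM transition-matrix entries, about 10**12
```

So the DBN stays small because each variable only talks to a few parents, whereas the condensed HMM state space, and with it the transition matrix, grows exponentially in the number of variables.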
