8 - Artificial Intelligence II [ID:57508]

OK.

And everything we've been doing is really about getting a mathematical handle on how to make this work.

So, we've been doing this for a long time.

So, non-deterministic actions mean expectation of utility.

OK?

Which again kind of gets us into the territory of big sums over products of many probability factors.

But at least we know what we have to do.
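For reference, the formula behind those big sums is the usual maximum-expected-utility rule; the notation below is the standard AIMA-style formulation, added here for orientation rather than quoted from the lecture:

\[
a^{*} \;=\; \operatorname*{arg\,max}_{a}\; \mathrm{EU}(a \mid e)
      \;=\; \operatorname*{arg\,max}_{a}\; \sum_{s} P\bigl(\mathrm{Result}(a) = s \mid a, e\bigr)\, U(s)
\]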

The current limitation in all of this is that we are still looking at static worlds.

Right?

The world doesn't change while the agent does things.

OK?

That makes certain things a lot easier.

And that'll be, of course, the next thing we are going to alleviate.

So, the idea there is that we take our Bayesian network machinery and extend it with a new kind of random variable:

the action variable for the action we choose, which is not necessarily deterministic anymore, but which has an influence on certain random variables.

And those, in turn, influence, in a way we may or may not understand fully, the utility, which we then maximize using that formula.
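To make that concrete, here is a minimal sketch of such an action variable feeding into a chance variable, which in turn determines the utility. All names, probabilities, and utilities are invented for illustration; this is not code or data from the lecture.

```python
# Minimal sketch of expected-utility maximisation over a tiny decision network.
# Action -> influences the chance variable "outcome" -> which determines the utility.
# All numbers and names here are made up for illustration.

# P(outcome | action): one distribution over outcomes per action
outcome_given_action = {
    "take_umbrella":  {"dry": 0.95, "wet": 0.05},
    "leave_umbrella": {"dry": 0.70, "wet": 0.30},
}

# U(outcome, action): utility of ending up in each outcome, given the action taken
utility = {
    ("dry", "take_umbrella"):  70.0,   # dry, but had to carry the umbrella
    ("wet", "take_umbrella"):  60.0,
    ("dry", "leave_umbrella"): 100.0,
    ("wet", "leave_umbrella"): 0.0,
}

def expected_utility(action: str) -> float:
    """EU(a) = sum over outcomes s of P(s | a) * U(s, a)."""
    return sum(p * utility[(s, action)]
               for s, p in outcome_given_action[action].items())

best_action = max(outcome_given_action, key=expected_utility)
for a in outcome_given_action:
    print(f"EU({a}) = {expected_utility(a):.1f}")
print("MEU action:", best_action)
```

The structure mirrors the prose: the chosen action shifts the distribution over the chance variable, the chance variable determines the utility, and we pick the action with the highest expected utility.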

There's one connection I want to call your attention to.

In constraint satisfaction problems, we had these factored world representations.

Remember?

We had a number of attributes, which are really functions of the real world, each taking one of a number of values.

And instead of looking at the real world, we basically looked at the values of the attributes instead.

If that gives you a deja vu, then that's ideal.

Because the attributes in these attribute-value or factored representations

kind of have the same status or the same purpose that our random variables have.

And remember, random variables are also functions, not on the real world, but on the sample space of a probability space.

And we're abstracting over the world and using these random variables or attributes for modeling.

And in particular, this very often means that our sample space, remember, probability spaces, really has this shape: a big Cartesian product dom(X1) × … × dom(Xn) over the domains of all the random variables.
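As a concrete illustration of that Cartesian-product view, here is a tiny Python sketch; the variable names and domains are invented, not taken from the lecture:

```python
# Minimal sketch: the sample space of a factored representation as the
# Cartesian product of the random variables' domains (invented example).
from itertools import product

domains = {
    "Weather":   ["sun", "rain"],
    "Cavity":    [True, False],
    "Toothache": [True, False],
}

# Omega = dom(Weather) x dom(Cavity) x dom(Toothache)
sample_space = list(product(*domains.values()))
print(len(sample_space))    # 2 * 2 * 2 = 8 outcomes

# A random variable is then just a function on that sample space,
# e.g. the projection onto the "Weather" component:
weather = lambda outcome: outcome[0]
print(weather(sample_space[0]))   # 'sun'
```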

So when we're doing Bayesian networks, this is what you should keep in mind.

And it's essentially the same thing we were doing with the factored representations.

Which also means that you, as the agent designers, need to choose what the random variables are.
