18 - Artificial Intelligence I [ID:9925]

The following content has been provided by the University of Erlangen-Nürnberg.

Last week we completed constraint propagation.

And just to put ourselves into the general picture,

we're still looking at agents who are trying to determine what the next action is.

And for that, agents need to have a world model.

The thing to understand there, I want to remind you,

is that the world exists, we observe part of it,

and in our computational machinery, we make a model of the world

which we're using to determine what the next best action is going to be.

And this semester and next semester,

the critical part is what is the shape of our world model?

How does that work?

We started out with very, very simple world models,

which we call black box world models,

where we basically said the world is in a state (that was our model),

and my actions can put the world into a successor state.

That was a very, very simple world model,

which makes it easy to build algorithms on top of it, right?

Search algorithms, very simple, but also inefficient,

because there's lots of things we can't do.

We can't look into the states, so we can't derive heuristics or anything like that

just by inspecting a state.

States are just states.
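The black-box model can be sketched in code. This is a minimal illustration, not the lecture's own example: states are opaque values, and all the agent is given is a successor function and a goal test. The toy domain (counting from 0 to a target by +1 or +2 steps) is hypothetical.

```python
from collections import deque

def successors(state):
    """Actions can put the world into a successor state; that is all we know."""
    return [state + 1, state + 2]

def goal(state):
    return state == 7

def breadth_first_search(start):
    """Dumb search: no looking inside states, no heuristics, just enumeration."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if goal(path[-1]):
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(breadth_first_search(0))  # prints [0, 1, 3, 5, 7]
```

Note that nothing in `breadth_first_search` depends on what a state is; that opacity is exactly why such algorithms are simple but have to try all possible futures.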

And I've used constraint satisfaction and propagation

as an example of what changes if we relax that restriction

and have more interesting, more complex world models.

Rather than having black box states, of which we essentially only know the name,

we had factored representations of states,

where we basically had a couple of features which had a couple of values,

kind of a fill-out-the-form world model,

which is good if our features describe the world well,

and we've concentrated on problems where this is the case,

and then we could do cool stuff,

and we've had very successful algorithms

that could give us a 20-fold or 40-fold increase

in search depth.

And the main factor here is that we have more information about the states.

So we get a tremendous speed up as compared to these dumb search algorithms,

which means we can solve problems we couldn't otherwise solve.
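A factored state can also be sketched in a few lines: variables ("features") with finite domains, plus constraints we are allowed to inspect. Because the representation is open, propagation can prune values before any search happens. The tiny map-colouring instance below is a hypothetical illustration, not an example from the lecture.

```python
# A factored state: each feature has a domain of possible values.
domains = {"A": {"red", "green"}, "B": {"red"}, "C": {"red", "green"}}
neighbours = [("A", "B"), ("B", "C")]  # constraint: neighbours must differ

def propagate(domains, neighbours):
    """Repeatedly remove values ruled out by an already-fixed neighbour."""
    changed = True
    while changed:
        changed = False
        for x, y in neighbours:
            for a, b in ((x, y), (y, x)):
                # if b is already fixed, a cannot take that same value
                if len(domains[b]) == 1:
                    (v,) = tuple(domains[b])
                    if v in domains[a] and len(domains[a]) > 1:
                        domains[a].discard(v)
                        changed = True
    return domains

print(propagate(domains, neighbours))
# A and C each collapse to {'green'} without any search at all
```

This pruning is only possible because we can see inside the states; with black-box states there is nothing to propagate over.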

And the other thing that a factored representation of states makes possible

is that we go from state space search,

making predictions by essentially trying all possible futures in the state space,

to allowing ourselves state description languages.

And instead of searching all possible futures in the state space,

we allow ourselves to lift the search into the space of descriptions of states.

And that is good because if we have a good description language,

one description might actually describe lots of states,

which allows us to cover a lot of state space ground

with very few description-level steps.
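The idea that one description covers many states can be made concrete. In this sketch, a description is a partial assignment of features, and it stands for every complete state consistent with it; the feature names and values are hypothetical, chosen only for illustration.

```python
from itertools import product

# The factored state space: each feature and its possible values.
features = {"door": ["open", "closed"],
            "light": ["on", "off"],
            "robot": ["kitchen", "hall", "lab"]}

def states_covered(description):
    """Enumerate every concrete state a partial description stands for."""
    choices = [[description[f]] if f in description else vals
               for f, vals in features.items()]
    return [dict(zip(features, combo)) for combo in product(*choices)]

# One description-level object covers many state-level states:
print(len(states_covered({"door": "open"})))  # 6 concrete states match
print(len(states_covered({})))                # the empty description covers all 12
```

A single step on descriptions (say, constraining `door`) thus does the work of visiting many individual states, which is where the coverage gain comes from.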

Part of a video series:

Access: open access

Duration: 01:22:13 min

Recording date: 2018-12-19

Uploaded: 2018-12-19 16:31:28

Language: en-US
