1 - General introduction to this chapter

Just to put ourselves into the general picture: we are still looking at agents, and at agents that are trying to determine what the next action is. For that, agents need a world model. The thing to understand there, and I want to remind you of it, is that we observe only part of the world, and in our computational machinery we build a model of the world, which we use to determine what the next best action is going to be. The critical question, this semester and next semester, is: what is the shape of our world model? How does that work?

We started out with very, very simple world models, which we call black-box world models. There we basically said: the world is in a state, and my actions can put the world into a successor state. That was our whole model. Such a simple world model makes it easy to build algorithms on top of it, namely search algorithms, but it is also inefficient, because there are lots of things we cannot do. We cannot look into the states, and we cannot derive heuristics or anything like that just by looking at a state. States are just states.
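To make the black-box view concrete, here is a minimal sketch in Python; the names (`bfs`, `successors`, `is_goal`) are my own illustration, not fixed course notation. The point is that the search algorithm may only generate successors and test for the goal; it never looks inside a state.

```python
from collections import deque

def bfs(start, successors, is_goal):
    """Breadth-first search over a black-box state space.

    States are opaque tokens: the algorithm may only generate
    successors and test for the goal. It cannot inspect a state
    to compute anything like a heuristic.
    """
    frontier = deque([[start]])          # paths still to explore
    visited = {start}                    # opaque states already seen
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path                  # sequence of states to the goal
        for succ in successors(state):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None                          # goal unreachable

# Usage: states are just names; the algorithm never inspects them.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs("A", lambda s: graph[s], lambda s: s == "D"))  # ['A', 'B', 'D']
```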

I used constraint satisfaction and propagation as an example of what changes if we relax this and allow more interesting, more complex world models. Rather than black-box states, of which we essentially only know the name, we had factored representations of states: a couple of features, each taking one of a couple of values, a fill-out-the-form kind of world model. That is good if our features describe the world well, and we concentrated on problems where that is the case. Then we could do cool stuff: we had very successful algorithms that gave us something like a twentyfold or fortyfold increase in search depth. The main factor is that we have more information about the states, so we get a tremendous speedup compared to the dumb search algorithms, which means we can solve problems we could not otherwise solve.
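By contrast with opaque states, a factored state can be inspected. A minimal sketch, with feature names invented for illustration: once states are feature-value assignments, we can define a heuristic such as the number of features that do not yet match the goal, which is exactly the kind of information a black-box search never has.

```python
# A factored state: a fixed set of features, each with a value
# (feature names invented for illustration).
goal  = {"color": "red", "size": "large", "shape": "cube"}
state = {"color": "red", "size": "small", "shape": "sphere"}

def heuristic(s, g):
    """Count the features still differing from the goal.

    Only possible with factored states: a black-box search has
    nothing inside a state to look at.
    """
    return sum(1 for feature in g if s[feature] != g[feature])

print(heuristic(state, goal))  # 2
```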

The other thing a factored representation of states makes possible is that we move away from state-space search, which makes predictions by essentially trying all possible futures in the state space, and allow ourselves state description languages. Instead of searching all possible futures in the state space, we lift the search into the space of descriptions of states. That is good because, with a good description language, one description may describe lots of states, which lets us cover a lot of state-space ground in very few description-level steps. With a bad description language we gain nothing, and we might even lose something. Constraint propagation is so attractive precisely because it has a good description language: the language of the current domains and the already-given assignments happens to describe states well, and it respects the internal invariants of the search.
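To see how one description can cover many states, here is a small sketch with invented variable names: a set of remaining domains describes a whole set of complete states at once, and a propagation step shrinks that set without ever enumerating it.

```python
from itertools import product

# A description: the remaining domain for each variable. It stands
# for the whole set of complete states obtained by picking one value
# per variable -- here 2 * 2 * 3 = 12 states in a single description.
domains = {"x": {1, 2}, "y": {1, 2}, "z": {1, 2, 3}}

def states_described(doms):
    """Enumerate the states a description covers (for illustration
    only; the point of propagation is that it never has to do this)."""
    names = sorted(doms)
    return [dict(zip(names, values))
            for values in product(*(sorted(doms[n]) for n in names))]

print(len(states_described(domains)))  # 12

# Two description-level steps: commit x to 1, then enforce x != z by
# pruning 1 from z's domain. Each step shrinks the covered state set
# (12 -> 6 -> 4) without ever visiting a state individually.
domains["x"] = {1}
domains["z"].discard(1)
print(len(states_described(domains)))  # 4
```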

That is why we do constraint satisfaction and not voodoo magic or something like that. Voodoo magic has bad description languages: you have to light a fire and scatter powders and so on, and that does not really help you much in problem solving, or so some people believe. But that is really what we have been seeing. We have seen two things: one is looking into the states, and the other is going to description-language-level search. Of course, we are never going to escape search in this course. Sometimes search spaces are very, very thin, and then we call the result an algorithm; but otherwise, in the worst case, we are always going to do search. We may just do it at different levels.

So constraint networks were basically factored representations, one step up from black-box representations. But those are still quite inefficient, because feature-value pairs are very sketchy descriptions of the world; they are not very flexible. Before anything else, you have to fix the vocabulary you want to use to talk about the world, and you give yourself something like 15 words. English has half a million words; that is the kind of description language we humans use for the world. And even though English has half a million words, an educated adult still actively uses 20,000 to 30,000 of them. So giving ourselves just 14 or 15 words in a form to fill in might strike us as not quite enough for the whole world. But it does suffice for Bundesliga, factory planning, and those kinds of problems. What we are going to do now is take the full step and give ourselves languages that can be used to describe the world. And instead of doing state-space search, we only do inference, which is description-level search.
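A small taste of where this goes, as a toy sketch rather than the chapter's actual machinery: a propositional formula is a description covering every world in which it is true, and an inference step operates on such descriptions directly, never on individual worlds.

```python
from itertools import product

# Worlds over three propositional variables: 2**3 = 8 black-box states.
names = ["p", "q", "r"]
worlds = [dict(zip(names, vals)) for vals in product([False, True], repeat=3)]

# One description, the formula (p or q), covers several worlds at once.
print(len([w for w in worlds if w["p"] or w["q"]]))  # 6 of the 8 worlds

# Description-level inference: from (p or q) and (not p), conclude q.
# The step is valid in every covered world, none of which the inference
# itself visits; we enumerate here only to check it.
assert all(w["q"] for w in worlds if (w["p"] or w["q"]) and not w["p"])
```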

Part of chapter: Propositional Reasoning, Part I: Principles
Accessible via: Open access
Duration: 00:08:52 min
Recording date: 2020-11-02
Language: en-US

A general introduction: why this chapter is needed compared to the last chapter.
