25 - Artificial Intelligence I [ID:49644]

We started the last chapter, or the last part of AI 1, which is planning.

Planning can be seen either as a way of solving all search problems, or as adding time and change to world-description-language-based methods.

Both views are legitimate. The idea is, again, that we are trying to do reasoning and inference at the world description level.

We have already been doing that with CSPs, but CSPs are essentially only for configuration problems, which take neither time nor change into account.

The only change you need there is choosing values one after the other until you have exhausted the variables.

Here we actually have changing worlds. One way we could handle that is by introducing fluents: time-dependent properties, together with time-dependent axioms about percepts and the effects of actions.

And if done naively as we've done it, the frame axioms that we need, namely axioms about what doesn't change under an action, kill the method.
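To see why, here is a sketch of a frame axiom in a situation-calculus style; the notation is illustrative, not the lecture's exact formalization. For a fluent like "x is on y" and an action that does not affect it, we must state explicitly that the fluent persists:

```latex
% One frame axiom per (fluent, action) pair:
% "painting a block does not move it".
\forall x, y, t.\;
  on(x, y, t) \land \mathit{exec}(\mathit{paint}(x), t)
  \Rightarrow on(x, y, t + 1)
```

With $F$ fluents and $A$ actions, the naive encoding needs on the order of $F \cdot A$ such axioms, which is what kills the method.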

I just wanted to make clear what that means: the method dies only if you do it naively. There are competitive fluent-based planners.

And those do well when inference about the state of the world in between the planning steps plays a role.

You have axioms of the kind that say: my action does this and that.

And then depending on the state of the world, that could have a whole chain of effects that you want to reason about.

In much the same way that we were reasoning about consequences in the knowledge representation or first-order logic chapters.

If the world basically is entailment-intensive, then these fluent-based planners can actually do well.

So even though I said the frame axioms kill the method, really it means you have to do something clever there.

And it is really natural if you want to handle these entailment-heavy situations.

And so we looked at planning languages, which are essentially search languages, except that rather than having black box states, we have world descriptions as states.

And the whole universe of methods that we had last time is at our disposal here.

Everything opens up towards these world descriptions: the states, the initial state, the goal states, and the actions.

And essentially we've looked at one very simple example, which is STRIPS.

STRIPS is going to be kind of the default framework for planning.

And the central idea here is that of a STRIPS task, which looks a bit like a search problem, except that the states are structured.

The states are the simplest thing you can imagine for a world description, namely a finite set of facts, and facts are just atomic formulae.

They can be things like on(A,B), on(B,C), where A, B, and C are constants.

In STRIPS we are using propositional logic, usually in the form of something like PLNQ, as the world description language.

And the simplest world description is just a conjunction of atoms.

The things that are true in the world.

And everything else is considered to be false.
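As a minimal sketch of this idea (illustrative names, not the lecture's exact definitions), a STRIPS state can be modeled as a finite set of fact strings, and an action as preconditions plus an add list and a delete list:

```python
# Minimal STRIPS-style sketch: states are finite sets of facts;
# actions have preconditions, an add list, and a delete list.
from typing import FrozenSet, NamedTuple

Fact = str

class Action(NamedTuple):
    name: str
    preconditions: FrozenSet[Fact]
    add_list: FrozenSet[Fact]
    delete_list: FrozenSet[Fact]

def applicable(state: FrozenSet[Fact], action: Action) -> bool:
    # An action is applicable if all its preconditions hold in the state.
    return action.preconditions <= state

def apply(state: FrozenSet[Fact], action: Action) -> FrozenSet[Fact]:
    # Deleted facts become false, added facts become true;
    # everything else stays unchanged (no frame axioms needed).
    return (state - action.delete_list) | action.add_list

# Blocks-world style state: C is on A, A and B are on the table.
state = frozenset({"on(C,A)", "onTable(A)", "onTable(B)",
                   "clear(C)", "clear(B)"})

# Hypothetical action: move C from A onto the table.
move_C_to_table = Action(
    name="moveToTable(C,A)",
    preconditions=frozenset({"on(C,A)", "clear(C)"}),
    add_list=frozenset({"onTable(C)", "clear(A)"}),
    delete_list=frozenset({"on(C,A)"}),
)

if applicable(state, move_C_to_table):
    state = apply(state, move_C_to_table)
# Now onTable(C) holds and on(C,A) does not.
```

Note how the add/delete lists replace the frame axioms: any fact not mentioned by the action simply persists.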

That is also a restriction that you can see in this.

If we are saying C is on A, B is on the table, A is on the table, and we are not saying anything about D or F, that means there is no block D and no block F. If we are not mentioning it, it does not exist.

So there's this closed world assumption in there, meaning if we don't know about these things, they don't exist.

That's different from what we had in first-order logic or propositional logic, where we typically have an open world assumption.

If something isn't mentioned, that doesn't mean it doesn't exist, but we haven't spoken about it yet.

And if you think about the existential rules in first-order logic that say "there is a tiny unicorn", then the rule basically just gives it a name; let's call it Billy.

And then we're mentioning it, and then it is in our universe.

That's kind of what we think of as an open world assumption.

And closed-world and open-world systems really are quite different; they behave differently.

In such a system, you would expect the query "is there a block F?" or "is block F on the table?" to get the answer no, because we do not know that to be actually true.
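A toy sketch of this difference, with hypothetical fact names: under the closed world assumption, a fact is true exactly when it is in the world description; under an open world assumption, an unmentioned fact is merely unknown.

```python
# World description: a finite set of known facts.
state = {"on(C,A)", "onTable(A)", "onTable(B)"}

def holds_cwa(fact: str) -> bool:
    # Closed world assumption: absence from the description means false.
    return fact in state

def holds_owa(fact: str):
    # Open world assumption: absence just means we haven't spoken about it.
    return True if fact in state else "unknown"

print(holds_cwa("onTable(B)"))  # True
print(holds_cwa("onTable(F)"))  # False: F is never mentioned
print(holds_owa("onTable(F)"))  # unknown: there could be, but we haven't seen one
```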

By the way, Prolog typically implements negation as failure, which basically means: if you want to deal with "not something", then what Prolog tries to do is prove the something, and if it fails in that, it will say no.

If I can't prove it, I'm going to say no, even though there might be one.

Our world description might actually allow such a thing to exist, and that is the kind of behavior a closed world assumption gives you.
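Negation as failure can be sketched in a few lines; this is a Python model of the Prolog behavior just described, with a trivial hypothetical fact base standing in for a real prover.

```python
# Fact base of a toy "prover" (hypothetical facts for illustration).
facts = {"on(C,A)", "onTable(A)", "onTable(B)"}

def prove(goal: str) -> bool:
    # Trivial prover: a goal succeeds iff it is a known fact.
    return goal in facts

def naf(goal: str) -> bool:
    # Negation as failure: not(G) succeeds exactly when proving G fails.
    return not prove(goal)

# "Is block F on the table?" cannot be proved, so its negation succeeds,
# even though a block F might exist in the real world.
```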

And in all of these inference-based methods, you should be aware of whether we have somewhere in the background an open world or a closed world assumption.

Databases, for instance, have a closed world assumption.

If we don't explicitly know about something, we say no.

Whereas these knowledge representation systems typically say something like maybe.

There could be, but we haven't seen one.

Part of a video series
Access: open access
Duration: 01:28:22 min
Recorded: 2024-01-24
Uploaded: 2024-01-29 17:19:10
Language: en-US
