Can you hear me?
Yeah.
Excellent.
So, welcome back to the third-to-last AI lecture.
We're counting down.
And I'm very happy to see that some of you actually braved the swim to the university.
What weather.
Right.
Biking to the university early this morning was not fun.
So, we are looking at planning.
Planning being search and problem solving using declarative descriptions of states and actions.
Yesterday we looked at what this could look like.
Having all these fluents and having all these time points and axioms about how things work and change and so on.
And we ran into the frame problem.
Which needed to be solved.
And one of these solutions to the frame problem that worked out pretty well is the strips system.
And the idea of the strips system is that we use propositional logic.
We're going to extend that slightly towards first-order logic later, out of convenience, but it's going to stay propositional logic, more or less.
And we describe search problems.
Remember search problems were determined by a set of states, a set of actions, a subset of initial states and a subset of goal states.
That's all there is to them.
For a planning problem we have the same structure, only instead of just having plain sets of states.
Because if you think about it, with a plain set of states, all you know is whether something is in the set or not.
That's the only thing you know about that thing.
Now we are looking at states which we think of as described by conjunctions of facts.
Things like A is on B, the gripper is empty and C is on the table.
Or we'll look, for an embarrassingly long time, at an example with a truck that moves around in Australia.
And then we have facts like: the truck is at Adelaide, or in Adelaide, or it's in Brisbane, and it has the parcel loaded, and all of those kinds of things.
Those are the descriptions.
Descriptions of the states.
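As a minimal sketch (the fact strings and variable names here are made up for illustration, not taken from any particular planner), such a state description can be modelled in Python as a set of ground facts:

```python
# A STRIPS-style state: a conjunction of ground facts,
# represented as a set (fact names are illustrative).
state = frozenset({"on(A,B)", "gripper_empty", "on_table(C)"})

# All a set of facts can tell us is membership:
print("on(A,B)" in state)   # True: A is on B in this state
print("on(C,B)" in state)   # False: nothing else is claimed
```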
So far so trivial.
Essentially the same thing we did for CSPs.
Where we also had descriptions X equals 17 or X is between 3 and 9 or something like this.
Those are atomic descriptions of the world.
And we kind of conjoined them together.
And the big point here is actually how we do actions.
Actions are given by three lists of facts, essentially.
One of them is the precondition list, which needs to match the current state, otherwise the action is not applicable.
We have an add list of facts: the stuff that is true after the action has been done.
And we have a delete list of facts: the facts that are untrue, or at least unknown, after the action.
And why are we doing it this way?
Because if we very carefully add and subtract facts, then we're not touching the rest of the state at all.
And that part of the state automatically stays as it is.
We don't need frame axioms, because we have the delete list.
We know what changes, and by default, if a fact is not on the delete list, it's not being changed.
If it's not on the effects list (the add and the delete list together we call the effects), then it's not going to be changed.
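To sketch how the three lists fit together (a hedged illustration with made-up names, not the original STRIPS code): an action is applicable when its precondition list is a subset of the current state, and applying it removes the delete list and adds the add list, leaving everything else untouched:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset   # must all hold in the current state
    add: frozenset       # facts true after the action
    delete: frozenset    # facts untrue (or unknown) after the action

def applicable(action, state):
    # Every precondition fact must be in the state.
    return action.precond <= state

def apply_action(action, state):
    assert applicable(action, state)
    # Anything not in the effects (add and delete lists) stays as it is:
    return (state - action.delete) | action.add

# Illustrative blocks-world action: pick block A up off block B.
unstack = Action(
    name="unstack(A,B)",
    precond=frozenset({"on(A,B)", "clear(A)", "gripper_empty"}),
    add=frozenset({"holding(A)", "clear(B)"}),
    delete=frozenset({"on(A,B)", "clear(A)", "gripper_empty"}),
)

s0 = frozenset({"on(A,B)", "clear(A)", "gripper_empty", "on_table(B)"})
s1 = apply_action(unstack, s0)
# "on_table(B)" survives unchanged: no frame axiom was needed.
```

Note the order in `apply_action`: the delete list is subtracted before the add list is unioned in, so a fact appearing on both lists ends up true in the successor state.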
If I drop my laptop to the ground here, it will not affect how Joe Biden feels back in Washington.
So the action of dropping my laptop will probably have a fact that Michael Kolas is sad afterwards or something like this,
but it will not talk about Joe Biden at all.
Presenters
Accessible via
Open access
Duration
01:31:02 min
Recording date
2023-02-02
Uploaded on
2023-02-03 11:39:06
Language
en-US