22 - Artificial Intelligence I [ID:10010]

The following content has been provided by the University of Erlangen-Nürnberg.

We are talking about logic, and the idea here is that an agent has a world model that is described in some kind of formal language.

We have an inference procedure that allows the agent to derive new knowledge, new pieces of the world model, from old pieces of the world model.

Things that came in from the perception engine, things the agent knows itself, like what actions did I do, and other pieces of knowledge that the agent has derived before.

The interesting thing here is that we're doing all of the inferencing, the reasoning, and the evolution of the world model at the language level.

We're transforming propositional formulae into propositional formulae by inference, which is different from state-space search,

where we didn't have a good language for describing states (we only had the names of the states), so we couldn't really do this kind of reasoning.

That's the state of play. We've seen one such language, propositional logic.

We've seen various inference procedures.

Last time we looked at resolution for propositional logic and a variant of this, DPLL.

Resolution is a very simple idea. You restrict inference to clause sets, a clause set being a conjunction of clauses, and a clause being a disjunction of literals.

In propositional logic, literals are just possibly negated atomic formulae. Atomic formulae are propositional formulae without any connectives; in this logic, those are exactly the propositional variables.

The idea is that if you have a huge clause set that is unsatisfiable, you will be able to make this unsatisfiability explicit by deriving the empty clause at some point,

via repeated applications of the resolution rule, which is very simple.

We have two clauses. They have a complementary literal. You cut out the complementary literals and you make a new clause from both of the rests.

You do that systematically, say by a technique like level saturation, where you make sure that you resolve every clause with every clause, call that level one, then do it again, call that level two,

and so on. By enumerating all the possible clauses you can generate level by level, you will eventually derive the empty clause if it can be derived at all. That's resolution.
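The level-saturation loop might look like the following sketch, again under the illustrative assumption that clauses are frozensets of string literals with "~" for negation (the function names are invented for this example).

```python
# Self-contained sketch of resolution by level saturation.

def negate(lit):
    """Return the complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All resolvents of two clauses: cut a complementary pair, join the rests."""
    return {(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2}

def saturate(clauses):
    """Return True iff the empty clause is derivable, i.e. the clause
    set is unsatisfiable. Terminates because only finitely many clauses
    exist over a finite set of propositional variables."""
    clauses = set(clauses)
    while True:
        level = set()                      # all resolvents of this level
        for c1 in clauses:
            for c2 in clauses:
                level |= resolvents(c1, c2)
        if frozenset() in level:
            return True                    # empty clause derived
        if level <= clauses:
            return False                   # saturated, no empty clause
        clauses |= level
```

For example, {P}, {~P, Q}, {~Q} is unsatisfiable (P forces Q, which contradicts ~Q), while dropping {~Q} makes the set satisfiable.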

We've looked at a calculus for transforming propositional formulae into clause normal form and talked about things like don't know and don't care non-determinism.

In this case, we have don't-care non-determinism, which means we can greedily pursue one path and still get to a clause normal form, on which we can then do resolution.

Okay, here's the resolution proof. And then we tested the whole thing on the Wumpus case. Remember, we have this kind of cave-like Wumpus domain.

We have some initial information, which we can describe in propositional logic, and then we can transform the description into clause normal form and then make a resolution proof.

That tells us the Wumpus is at a certain place, which is good to know, because if we know that, we can kill the Wumpus and then search for the gold.
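Such a resolution proof can be replayed mechanically. The clause set below is a simplified, hypothetical fragment of a Wumpus knowledge base (the coordinates and clauses are invented for illustration, not the lecture's exact ones); it shows the refutation pattern: add the negated goal and resolve step by step down to the empty clause.

```python
# Literals are strings; "~" marks negation; clauses are frozensets.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2, lit):
    """Resolve c1 and c2 on lit (lit must be in c1, its negation in c2)."""
    assert lit in c1 and negate(lit) in c2
    return (c1 - {lit}) | (c2 - {negate(lit)})

# Hypothetical knowledge base: there is a stench at (1,2); a stench
# there means the Wumpus is in an adjacent square; squares (1,1) and
# (2,2) are known to be Wumpus-free.
stench   = frozenset({"S12"})
rule     = frozenset({"~S12", "W11", "W13", "W22"})
no_w11   = frozenset({"~W11"})
no_w22   = frozenset({"~W22"})
# To prove the Wumpus is at (1,3), add the negated goal and refute.
neg_goal = frozenset({"~W13"})

step = resolve(stench, rule, "S12")    # {W11, W13, W22}
step = resolve(step, no_w11, "W11")    # {W13, W22}
step = resolve(step, no_w22, "W22")    # {W13}
step = resolve(step, neg_goal, "W13")  # the empty clause
assert step == frozenset()             # refutation: KB entails W13
```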

Okay.

Good. So that's a very standard way of doing these things. We're using deduction as an inference process.

We could have used either tableau or resolution.

You can imagine that having efficient inference procedures is very important for agents because they want to be able to react quickly or have a very good world model.

That was about as far as we got last week.

Were there any questions?

Now, I would like to tell you that when we're designing agents, one of the central choices we have to make is the description language, the language in which we describe the world.

You have choices there. Some people want to tell you that there's only one logic.

And usually they say it's first-order logic, which we haven't even seen here yet.

So that claim is wrong already. We've seen propositional logic, which is good for certain things, in particular because we have decision procedures for satisfiability for it.

But really, this is not a very nice language to talk about the world.

Look at the Wumpus cave, say, and you want to state the rules. You would probably not be very happy if I said "not S1,1 implies not W1,1 and not W1,2 and not W2,1",

and kept on like that for about three hours.

By the way, that's not what I told you when we talked about the Wumpus world the first time.

There I told you things like: if there's a pit, then there's a breeze in the adjacent squares,

and if there's a Wumpus, then it stinks in the adjacent squares. That's the kind of language you would like to use.

So we have a choice of language, and we should choose a language for which we have efficient inference procedures.

But this is a big field of engineering, and it has been a very active one for the last 60 years.

In particular, the first AI program ever, in 1954, was essentially an inference procedure for Presburger arithmetic, a small superset of propositional logic.

So this is an important thing.

Good.

Before we go on to looking at better languages (we're going to look at predicate logic),

we are going to look at state-of-the-art satisfiability solvers for propositional languages.

There is really only one game in town, and it's called DPLL.
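As a preview, the core of DPLL, unit propagation plus the splitting rule, can be sketched as follows. This is a toy illustration, not one of the engineered solvers the lecture refers to, and the clause encoding (frozensets of string literals, "~" for negation) is again an assumption for this example.

```python
# Minimal DPLL sketch on clauses represented as frozensets of literals.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def simplify(clauses, lit):
    """Assume lit is true: drop satisfied clauses, shrink the rest."""
    out = set()
    for c in clauses:
        if lit in c:
            continue                       # clause satisfied, drop it
        out.add(c - {negate(lit)})         # remove the falsified literal
    return out

def dpll(clauses):
    """Return True iff the clause set is satisfiable."""
    # Unit propagation: a one-literal clause forces that literal.
    while any(len(c) == 1 for c in clauses):
        unit = next(c for c in clauses if len(c) == 1)
        clauses = simplify(clauses, next(iter(unit)))
    if not clauses:
        return True                        # every clause satisfied
    if frozenset() in clauses:
        return False                       # empty clause: conflict
    # Splitting rule: pick some literal, try it true, then false.
    lit = next(iter(next(iter(clauses))))
    return dpll(simplify(clauses, lit)) or dpll(simplify(clauses, negate(lit)))
```

For instance, {P, Q} together with {~P} is satisfiable (set Q true), while {P} together with {~P} is not.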

Part of a video series

Accessible via: open access

Duration: 01:21:15 min

Recording date: 2019-01-16

Uploaded on: 2019-01-16 13:54:36

Language: en-US
