27 - Artificial Intelligence I [ID:44958]

I wonder if it's off. That's a bit much.

Is this better? Okay, so we have two more weeks of AI, and then in two weeks and epsilon, on the 13th, we have the exam.

There have been two questions that have kept popping up. One is: I can't come to Germany; when is the online exam?

The answer is there is no online exam. Everything will be face to face. There's no way I can change that.

Okay, I'm probably preaching to the wrong crowd here, but just for the record there will not be an online exam.

The second thing is: how does that work? We don't have any homework on planning anymore. Will planning be excluded from the exam?

And the answer is no. There will be a planning homework, homework 12. And just to be clear, and for the record: anything that's taught before the exam can be on the exam.

Okay, the good news is nothing that's after the exam will be on the exam.

Okay, are there any questions? Yes.

Okay, they're very, very, very different. RDF is only the A-box part of an ontology. It tells you facts about individuals, right?

It tells you, for instance, that a particular girl loves Mary, right? Whereas OWL is about the T-box, the terminology. It tells you facts about, and relations between, classes of objects, right?

All girls love horses or cats, whatever you like. Okay, so that's something which is about whole sets of individuals.

With OWL you can do more inferences. In OWL you can define classes, or concepts, right? And with these definitions, or with other concept axioms, you can make inferences.

More questions?

You can think of RDF as the equivalent of, or at least the moral equivalent of, PLNQ: no quantifiers. And the T-box language OWL as having slightly castrated quantifiers, castrated enough that we don't lose decidability.
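To make the A-box/T-box distinction concrete, here is a small pair of axioms in description-logic notation; the particular predicates and individuals are illustrative, not taken from the slides:

```latex
% A-box (the RDF level): facts about named individuals
\mathsf{Girl}(\mathsf{anna}) \qquad
\mathsf{loves}(\mathsf{anna},\mathsf{beauty}) \qquad
\mathsf{Horse}(\mathsf{beauty})

% T-box (the OWL level): an axiom about whole classes
\mathsf{Girl} \sqsubseteq \exists\,\mathsf{loves}.(\mathsf{Horse}\sqcup\mathsf{Cat})
```

Given the T-box axiom, any individual asserted to be a Girl in the A-box is inferred to love some horse or cat; that class-level inference is exactly what RDF alone cannot express.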

But we left all of that behind us, and I introduced the new part that we're going to look at: the part on planning.

And I tried to convince you that we want to do the things we've done with search, but rather than have a different heuristic per problem, we want one heuristic, or one algorithm, to kill all search problems.

And the idea is essentially the same idea we had success with when going to inference in constraint satisfaction problems, namely: why don't we go up one level, and go from search with states and actions to state descriptions and action descriptions?

Actually, we didn't really have action descriptions in constraint satisfaction problems, but we need to handle actions now.

And so the idea is to go from a black-box description, where states and actions just have names or numbers or something like that, and you can't look into them, to a declarative description of both.

And the hope, which we're going to have to verify, is that we will be able to do more.

The price we pay, of course, is that we have to do more, right? We have to learn more. There's a good reason why this stuff is at the end of AI-1 and not at the beginning.

We need more concepts.
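To make "declarative action description" concrete, here is a minimal sketch of a STRIPS-style action in Python; the precondition/add/delete representation is the standard one, but the helper names and the toy move action are my own:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A declarative action description instead of a black-box successor."""
    name: str
    precond: frozenset  # facts that must hold for the action to apply
    add: frozenset      # facts the action makes true
    delete: frozenset   # facts the action makes false

def applicable(action: Action, state: frozenset) -> bool:
    # An action applies in a state iff all its preconditions hold there.
    return action.precond <= state

def successor(action: Action, state: frozenset) -> frozenset:
    # Successor state: remove the delete list, then add the add list.
    return (state - action.delete) | action.add

# Toy example: the agent moves from cell (1,1) to cell (1,2).
move = Action("move-1-1-to-1-2",
              precond=frozenset({"at-1-1"}),
              add=frozenset({"at-1-2"}),
              delete=frozenset({"at-1-1"}))

state = frozenset({"at-1-1", "have-arrow"})
if applicable(move, state):
    state = successor(move, state)
print(sorted(state))  # ['at-1-2', 'have-arrow']
```

The payoff is that a planner can look inside such descriptions, for instance to derive a heuristic automatically, which a black-box successor function never allows.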

And at this point last week, I was very unhappy with how I explained things, especially since the solution, which came on the next slide, was kind of underappreciated.

And I know why. So I thought, well, let's look at the Wumpus world again and see what we would have done with just this idea, namely taking a declarative description.

Right? Let's try and spell out what that would mean.

And just a little spoiler: we're essentially going to fall flat on our faces. It's not going to work very well.

But I hope that I can contrast what planning is doing with what we're doing now, and then you'll learn to appreciate what planning does better.

Okay. So, right? We've already done much of that.

Declarative descriptions of the world, and trying to reason about the new knowledge that comes in through the percepts, about what we know about the world, about what our actions do, and so on.

All of those things we've done. And we've even done it with the Wumpus world.

Right? And I've even given you a choice of which logic to take. We can do it in propositional logic. We can do it in first-order logic, where we did the resolution stuff, I think.

And we could even do it with ALC. Not a problem.

Right?
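For the propositional case, the typical encoding is the textbook one, where a cell is breezy if and only if a neighboring cell contains a pit; for cell (1,1), for example:

```latex
B_{1,1} \;\Leftrightarrow\; (P_{1,2} \lor P_{2,1})
```

with $B_{x,y}$ meaning "cell $(x,y)$ is breezy" and $P_{x,y}$ meaning "cell $(x,y)$ contains a pit".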

But I've been essentially lying to you.

Because it doesn't quite work.

Everything on the slides was correct, but you didn't notice that I cheated at one point.

Right? Just think about the agent running around in the maze and perceiving different things.

Right? Perceiving a draft, which means there should be a breeze. And I'm distinguishing between what we perceive and what's actually happening in our world model.

Right? So, our agent that runs around will come in and there is no draft and then it will go to the next cell and then there is a draft.

Right? So we have two perceptions. One is A and the other one is not A.

Boom. Contradiction.

Reasoning evaporates; that is the price of classical logic.

Right? From a contradiction, everything follows.

Right? Agent dead. Or at least insane.

It believes everything. This is not a good idea.
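For the record, here is why "from a contradiction, everything follows" (ex falso quodlibet), spelled out for the draft percept $A$ and an arbitrary statement $B$:

```latex
\begin{align*}
  &A            && \text{percept in one cell: there is a draft}\\
  &\neg A       && \text{percept in another cell: there is no draft}\\
  &A \lor B     && \text{from } A \text{ by } \vee\text{-introduction, for any } B\\
  &B            && \text{from } A \lor B \text{ and } \neg A \text{ by disjunctive syllogism}
\end{align*}
```

The culprit is that the two percepts were made at different times and in different cells, but the knowledge base records them as facts about one and the same world.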

And we've kind of shut our eyes to this.

Right?
