15 - Artificial Intelligence I [ID:49634]

OK.

We started a new part of the class this year, and it's about knowledge and inference.

In essence, it's really the idea of constraint propagation adapted

to general structured representations. Structured representations means we have a language in

which we can describe the world. And we're going to see multiple such languages. They're

logics. A language plus inference is what a logic really is. And we're going to use it in agents

in general. In this case, an agent in a partially observable world. The wumpus is going to stay a

little bit with us as a running example. And I try to motivate that we need a language that can

express facts about the world. Essentially things like there's a wumpus in 3-1, or there's a pit

here, or there's a pit here or there, and stuff. There's no pit there. All of those kinds of things.

We have to have a language that can do statements, basic facts, and then conjunctions of facts,

disjunctions of facts, negations of facts. And if you do that seriously, what you end up with

is this language. We have basic expressions of facts. And we're calling them propositional

variables. We'll relax that in a minute. But logicians are people who want to have everything

as simple as possible. If you have a logic and I have a logic, we don't compare them like how much

horsepower my car has or so, but how little my logic is. And the one with the littlest logic

kind of wins. Which means that we have a logic where we're not looking into the internal structure

of statements. For the statements themselves, we have, if you want, a black box representation.

The only thing that matters is that they can be true or false. That they can be made true or false

depending on the situation we're in. Depending on the situation the agent is in. If you think back to the

wumpus cave, there might be a wumpus in 3-1 or there might not. In the beginning the agent doesn't

know. After a while, it may know. So that's where the propositional variables come from. And we're

going to, as I said, relax that in a minute. And then from the variables, we make formulae by

essentially combining or wrapping up in a language these propositional variables via negations,

conjunctions, A1 and A2, where A1 and A2 are called the conjuncts. We have disjunctions and disjuncts. We

have implications and we have equivalences. But really we only need the first two: negation and

conjunction. Remember, if I have a littler logic than you, then I win. Okay. Any questions so far?
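
As an aside that is not from the lecture: here is a minimal sketch, in Python, of how this formula language could be represented. The names (Var, Not, And, the derived Or, Implies, Iff, and wumpus propositions like W31 or P12) are illustrative assumptions rather than the lecture's notation; the point is just that negation and conjunction are enough to define the other connectives.

```python
from dataclasses import dataclass
from typing import Union

# Core syntax: propositional variables plus the two primitive connectives.
@dataclass(frozen=True)
class Var:
    name: str                # e.g. "W31" for "there is a wumpus in 3-1"

@dataclass(frozen=True)
class Not:
    arg: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"          # the two conjuncts
    right: "Formula"

Formula = Union[Var, Not, And]

# Everything else is definable from negation and conjunction
# (the "littler logic" wins):
def Or(a: Formula, b: Formula) -> Formula:        # A or B  ==  not(not A and not B)
    return Not(And(Not(a), Not(b)))

def Implies(a: Formula, b: Formula) -> Formula:   # A -> B  ==  not(A and not B)
    return Not(And(a, Not(b)))

def Iff(a: Formula, b: Formula) -> Formula:       # A <-> B  ==  (A -> B) and (B -> A)
    return And(Implies(a, b), Implies(b, a))

# Some facts about the wumpus cave (variable names made up for illustration):
W31, P12, P22, P21 = Var("W31"), Var("P12"), Var("P22"), Var("P21")
facts = And(Or(P12, P22),    # "there is a pit in 1-2 or in 2-2"
            Not(P21))        # "there is no pit in 2-1"
```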

Now, whenever you have a language on which you want to do inference, you have to have a quality

standard for inference. Inference is just rules for how to do this and that. The semantics was relatively

easy in CSPs because the constraints and the domains gave us what it all

means. Here, things are more difficult. We kind of have to dream up our own theory of meaning. And I

was very quick about that, but we'll come back to it. The way we do this is we insist on what is

important to us for this language. And remember, we have propositional variables. They can be true

or false. So true and false must be kind of the meaning. And the agent, of course, is interested

in what is true about the world. All the things that are not true about the world don't worry it

that much. It wants to know where the wumpus is and where the gold is and where the pits are. And so

the meaning of it all must somehow involve true and false. And you'll see in all of these logics

we're going to look at, true and false are always going to be part of the semantics. And we're always

only going to be interested in what is true. Actually, we're going to be interested in what is

true in all situations. So that's what we call the universe. True or false here. Very, very simple.

And then if you look back, we have these combining language features, which we're going to think of

as functions. Taking two formulae, giving me a formula. And in our semantics, we think of those as

constants, or even logical constants. Why constants? Because they are going to have a fixed meaning.

A and B is going to be true whenever A is true and B is true. That's a principle we use in computer science

all the time called compositionality. A and B is true if A is true and B is true. Sounds trivial.

It is, in a way, trivial. But the important thing is what I'm not saying: A and B is true if A and B are

independently true, and nothing else matters. You don't have to know about the moon phase. It's going to be true no matter what the moon phase is.

Whether Kohlhase had bad dreams tonight or woke up happy doesn't play a role. So we can actually compute the meaning

just from the parts and knowing what the connective is. Same thing for not. Not A is going to be true

if A is false. No moon phase, no bad dreams, nothing. Which means we can compute with this in principle.
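
Again as a hedged aside, not from the lecture: continuing the sketched Var/Not/And classes from the block above, this compositional semantics can be written as one recursive function that consults nothing but the truth values of the parts under a given assignment of the propositional variables.

```python
def evaluate(formula: Formula, assignment: dict[str, bool]) -> bool:
    """Truth value of a formula, computed only from the truth values of its
    parts under `assignment` (variable name -> True/False) -- no moon phases."""
    match formula:
        case Var(name):
            return assignment[name]   # a variable just gets looked up
        case Not(arg):
            return not evaluate(arg, assignment)
        case And(left, right):        # A and B is true iff A is true and B is true
            return evaluate(left, assignment) and evaluate(right, assignment)
        case _:
            raise TypeError(f"not a formula: {formula!r}")

# In a situation with a pit in 2-2 but none in 1-2 or 2-1, the facts above hold:
situation = {"W31": True, "P12": False, "P22": True, "P21": False}
print(evaluate(facts, situation))     # -> True
```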
