Okay, let's start the last lecture of AI in 2018. Yesterday we talked about logic, and I mainly tried to convince you of a couple of things. One was that logic is easy: logics can be very small. The other thing I wanted to convince you of is that there is more than one logic. In fact, logics make very good pets; you might well keep more than one logic as a pet. And you develop logics as descriptions of particular worlds. Remember, in the agent, the logic is the language in which to describe the world model; the world model is just a set of sentences in that particular logic, implemented in the agent. There are lots of different agents in different environments, so it is no surprise that they need different tools to survive, i.e., different logics, logics that are tailored to the world they live in.
Yesterday I mainly tried to teach you the logic-independent part: what are the things we will see every time? There will always be a formal language, and no matter how we set things up, it will be a language in which we can decide well-formedness. There will be a semantics, which is essentially a mapping from that language into the world. That is kind of the reverse of the sensory mapping: in the sensory mapping you see something in the world and map it into your language, whereas the semantics should be the inverse, or at least a partial inverse, of that. And the last thing we need is a calculus: a way of taking world models and deriving better world models from them.

The first thing you should realize is that this is a description-level process, just like in constraint propagation, where we took constraint descriptions and made tighter, equivalent constraint descriptions out of them. This is exactly what we are doing with a calculus: we take a world model and make a, quote unquote, tighter world model, something where we can see directly what the consequences are, without changing the meaning. That is the important thing, and it is why we study meaning at all; but what we are really interested in are these calculi.
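To make that concrete, here is a tiny sketch of one such tightening step in code. This is my own illustration, not from the lecture slides; it uses plain strings as formulae and a made-up helper name, just to show the idea of a calculus step (modus ponens) that adds consequences without changing the meaning.

    # Minimal sketch (illustrative only): one calculus step on a world model,
    # where the world model is just a set of sentences, written as strings.
    def modus_ponens_step(world_model):
        """Return a 'tighter' world model: whenever P and 'P -> Q' are both
        in the model, add Q. The meaning stays the same; the consequence Q
        merely becomes directly visible."""
        tighter = set(world_model)
        for sentence in world_model:
            if " -> " in sentence:
                antecedent, consequent = sentence.split(" -> ", 1)
                if antecedent in world_model:
                    tighter.add(consequent)
        return tighter

    print(modus_ponens_step({"P", "P -> Q"}))   # {'P', 'P -> Q', 'Q'}, in some order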
Okay, I could stop right now; that is really all you have to know about logic. Except, of course, that there are many calculi, and some of them are better for some things while others are better for other things. So we are going to learn a couple of calculi, and among them, I hope we get to them, some that are very well suited for implementation on a machine. We are doing AI, after all, not philosophy. So the question is always: can we engineer this so that we can actually build it? Good. Okay, so let's start.
We have actually looked at two logics. One is called propositional logic; it is a very important logic, and we are going to concentrate on it. The other one was the little Hilbert calculus example I gave you, which was essentially a fragment of it. So the logic we are talking about has, essentially, propositions. We are going to call them propositional variables, for weird and wonderful reasons. But if you see a proposition, which we will write as P or Q or P17 or so, think of things that can be true or not: sentences of a language that can be true or false. "It is currently 10 o'clock" is such a proposition; it happens to be false. There are lots of those, infinitely many in fact, and we will just abbreviate them by single letters because that fits nicely on the slides. What is also important to realize is that they are essentially black boxes: we cannot look inside them. This is something we are going to change later, but for now we really have a logic with black-box propositions. In our language we will only be able to talk about things like "P and Q", "P implies Q", "P but not Q", and so on, but we will not be able to look inside the propositions. That will change after Christmas, when we learn another logic, called first-order logic, which does allow us to look inside them.
Okay, so that is really what we have. We have propositional variables; think of "Peter loves Mary". We have connectives: things like "and", "or", "implies", "if and only if", and so on. And using those connectives we can build up complex formulae out of the propositional variables in the obvious way. We call propositional formulae without connectives atomic and formulae with connectives complex. Okay, and the atomic ones are the boring ones.
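As a small aside, here is roughly how such formulae could be represented and evaluated in code. This is my own sketch, not part of the lecture; the names Var, Not, And, Implies, and evaluate are made up for illustration. Atomic formulae are opaque black-box names, complex formulae are built with connectives, and the evaluate function plays the role of the semantics, mapping a formula to a truth value once the variables are given truth values.

    # Minimal sketch (illustrative only): propositional formulae as data.
    from dataclasses import dataclass
    from typing import Dict, Union

    @dataclass
    class Var:              # atomic formula: a black-box propositional variable
        name: str

    @dataclass
    class Not:              # complex formula: negation
        arg: "Formula"

    @dataclass
    class And:              # complex formula: conjunction
        left: "Formula"
        right: "Formula"

    @dataclass
    class Implies:          # complex formula: implication
        left: "Formula"
        right: "Formula"

    Formula = Union[Var, Not, And, Implies]

    def evaluate(formula: Formula, interpretation: Dict[str, bool]) -> bool:
        """Semantics in miniature: map a formula to a truth value, given
        truth values for the propositional variables."""
        if isinstance(formula, Var):
            return interpretation[formula.name]
        if isinstance(formula, Not):
            return not evaluate(formula.arg, interpretation)
        if isinstance(formula, And):
            return evaluate(formula.left, interpretation) and evaluate(formula.right, interpretation)
        return (not evaluate(formula.left, interpretation)) or evaluate(formula.right, interpretation)

    # "P and not Q", built from the black-box variables P and Q:
    example = And(Var("P"), Not(Var("Q")))
    print(evaluate(example, {"P": True, "Q": False}))   # True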