Yeah, okay, this will be a slightly weird exercise every Tuesday: I am going to speak in English although there is no non-German speaker in the room, because the person watching the recording, the only one, I guess, will prefer English, and because the course is in English anyway. Let me introduce Tadeusz Litak, my co-lecturer sitting in the back, just looking forward, I guess, to seeing how I'm going to make a mess of this. Okay. Today will be lightweight, so some of you may be bored and others maybe not; we'll find out as we go along. So this is meant to be a light introduction to classical propositional logic, which will take us, I forgot to look it up beforehand, but we made a plan of this, about three sessions, or maybe four. And we'll start with a recap of that part of propositional logic that we covered in GLoIn or, for those from slightly further back in the day, in what used to be called GLoLoP, right? Logic and Logic Programming; there's no more logic programming in there today. Okay, so many of you will recognize much of
what I'm going to do today. So, classical propositional logic. Yes, we can switch on the light if we find it; just a moment, it could actually be this one. Okay, I'll leave the lights in the main room off, right? Is that okay? Okay. So there was one part in GLoIn where I stated very generally, and this will be a recurring theme here, that typically a logic consists of a semantics, a syntax and a proof theory; actually, in the other order, you usually start with the syntax. I may actually have said this yesterday already. And, well, we'll just do this for propositional logic today, probably deferring the proof systems part to next week. So we'll start with the syntax.
There is something that one tends to forget when one presents the syntax of a language, and that is that the non-logical symbols one uses, like, in programming, the names of the functions that I write and things like that, are actually a parameter of the syntax. So it's not always going to be the same. So I start off defining the syntax by fixing that parameter: I will give myself a set of names of some kind. Of course, I don't care how exactly these are built, and I will call these things atoms. And to be clear on the notation, I'll try to be consistent. Actually, is that a good idea? That's how we do it in GLoIn. It's probably not a good idea to use capital letters, so I'll use small letters from the beginning of the Latin alphabet. These things stand for propositions that I do not look into, hence the name atoms; the longer name for atoms is atomic propositions, hence the name propositional logic. And then, given that
set of atoms, I introduce my syntax, which is very simple; I give it in terms of a grammar. Phi is going to be, well, actually let's make this two letters, so we use letters from around that region of the Greek alphabet, Phi and Psi, as names for formulas, and I'll just list some alternatives for what a formula might be. So, for instance, a formula might be bottom, or falsum. Or a formula could be a conjunction of two other formulas, where conjunction is denoted by this funny little hat here. Or it could be the negation of a formula. Or, and this is the part where I use the parameter of the syntax, it could be some a, where a is an atom. So, is everybody aware of the mechanism by which grammars such as this generate a syntax? I'll need explicit nods on this from everybody. Yes, yes, yes, yes, yes. Okay, very good.
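For reference, written out over the fixed set of atoms (calling it At is just my label here, not notation from the lecture), the grammar on the board would read something like:

```latex
% Grammar of propositional formulas over a fixed set At of atoms
\[
  \varphi, \psi \;::=\; \bot
    \;\mid\; \varphi \wedge \psi
    \;\mid\; \neg \varphi
    \;\mid\; a
  \qquad (a \in \mathrm{At})
\]
```

Repeatedly applying these clauses generates all formulas; for instance, over atoms a and b, one obtains not (a and not b) by first forming not b, then the conjunction a and not b, and then negating.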
Okay, so given this basic syntax, one typically introduces some abbreviations that are just there to let formulas look nicer. For instance, disjunction. So I was going to use this triple-equality symbol for that, but actually no, I want to use that for something else; I'll use single equality for syntactic equality of formulas. So this is going to be the definition of disjunction in terms of the De Morgan law, as usual: Phi or Psi is nothing but an abbreviation for not (not Phi and not Psi). If one thinks about this for long enough, then one sees that this actually, at least intuitively, does the job, if one thinks classically. Then implication. Implication is just an abbreviation for the disjunction not Phi or Psi. So this is something that by now should be fairly ingrained: everybody is used to accepting that an implication is true as soon as either the succedent, that's the Psi, is true, or the antecedent, that's the Phi, is false. Of course, that's in reality a highly debatable proposition, and in fact some of what we're going to do along the way will break this. But for the moment it stays like that. This is called material implication.
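Spelled out, the two abbreviations introduced so far would look like this, with = read as syntactic (definitional) equality as above:

```latex
% Derived connectives as abbreviations over the basic syntax
\[
  \varphi \vee \psi \;=\; \neg(\neg \varphi \wedge \neg \psi)
  \qquad\qquad
  \varphi \to \psi \;=\; \neg \varphi \vee \psi
\]
```

Unfolding the second abbreviation through the first, Phi implies Psi stands for the basic formula not (not not Phi and not Psi).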
Okay, what else do we need? Of course, we should be able to talk about true things rather than just false things.
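As a quick cross-check of how the atoms act as a parameter of the syntax, here is a minimal sketch in Haskell; this is not from the lecture, the names Form, Atom and so on are my own choices, and truth as not-bottom anticipates the definition that presumably comes next:

```haskell
-- A minimal sketch (my notation, not the lecture's): propositional
-- formulas parameterized over a type a of atoms.
data Form a
  = Bot                   -- bottom / falsum
  | Atom a                -- an atom, the parameter of the syntax
  | Not (Form a)          -- negation
  | And (Form a) (Form a) -- conjunction
  deriving (Eq, Show)     -- derived Eq is syntactic equality of formulas

-- Derived connectives, defined exactly as the abbreviations above:
or' :: Form a -> Form a -> Form a
or' phi psi = Not (Not phi `And` Not psi) -- disjunction via De Morgan

imp :: Form a -> Form a -> Form a
imp phi psi = Not phi `or'` psi           -- material implication

-- Truth as the negation of falsum (assumed here; hinted at above).
top :: Form a
top = Not Bot

-- Example: the formula a -> (a or b) over String atoms.
example :: Form String
example = Atom "a" `imp` (Atom "a" `or'` Atom "b")
```

The type parameter a plays exactly the role of the set of atoms fixed at the start: instantiating it, say as String, fixes the non-logical symbols, while the constructors never change.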
The course overviews non-classical logics relevant for computer scientists, in particular:

- Modal logics, extended to formalisms for reasoning about programs (PDL, the mu-calculus). Modal systems also form the core of logics of agency and logics for reasoning about knowledge. Moreover, they can be seen as a computationally well-behaved fragment of first-order logic over relational structures.
- Intuitionistic logic, which can be seen as a fragment of certain modal logics (S4) or as the logic of type theory and program extraction.
- Linear logic, which is established as the core system for resource-aware reasoning.
- The logic of bunched implications and separation logic: more recent formalisms to reason about heap verification and programs involving shared mutable data structures.
- Fuzzy and multi-valued logics for reasoning with vague information.