Let's begin with a quote.
As Girard says himself: "I am not a linear logician; I am just the inventor of the new order."
I will just let that stand there.
So we have seen linear logic as a syntactic system.
We have seen the connectives and we have heard some stuff about motivation.
Although in the few sessions that we have, we can't get very deep in terms of motivation.
And we have seen the sequent system, which basically showed that there are two flavors of connectives.
There are the multiplicative ones, which force you to form disjoint unions of contexts as you go along the proof.
And there are the additive ones, which basically inherit the context as it stands.
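To recall the shape of this distinction, here is a sketch of the right rules for a multiplicative and an additive conjunction in a two-sided sequent calculus (a standard textbook formulation; the exact presentation used in the course may differ):

```latex
% Multiplicative: the two premises use disjoint contexts,
% which are joined in the conclusion.
\frac{\Gamma \vdash A, \Delta \qquad \Gamma' \vdash B, \Delta'}
     {\Gamma, \Gamma' \vdash A \otimes B, \Delta, \Delta'}
\;(\otimes R)
\qquad
% Additive: both premises inherit the same context unchanged.
\frac{\Gamma \vdash A, \Delta \qquad \Gamma \vdash B, \Delta}
     {\Gamma \vdash A \mathbin{\&} B, \Delta}
\;(\& R)
```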
Okay, we haven't seen the semantics.
Slightly surprisingly linear logic was originally semantically motivated.
So the reason that Girard said this is that he and many other people saw, and see, linear logic as a tool in the study of the denotational semantics of other things, in particular the lambda calculus, rather than as a logic in its own right.
Although it has since been used perfectly well as a logic in its own right due to its features of resource awareness that we see.
That original semantics is, well, technically more than we can digest here.
It's a proof semantics, where you interpret formulas as types, basically.
It's a semantics that lives in the proofs-as-programs correspondence. So the formulas themselves become types. They're not evaluated to some truth value; rather, they give rise to a particular form of space, called a coherent space.
So each of these formulas gives rise to one coherent space, which is the denotation of the formula viewed as a type.
And then you look at proofs of formulas, which are then interpreted as functions between those types.
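Although the lecture does not develop coherent spaces, a minimal sketch may help fix the idea. Under the standard definition, a coherent space is a set of tokens (the web) with a reflexive, symmetric coherence relation, and the points of the space are its cliques. The concrete web and relation below are illustrative choices of my own, not from the lecture:

```python
# A minimal sketch of a coherent space: a web of tokens plus a
# reflexive, symmetric coherence relation. Points are cliques.
web = {"t", "f"}

# Coherence relation for a two-token space: each token is coherent
# only with itself, so the cliques are {}, {"t"}, and {"f"}.
coh = {("t", "t"), ("f", "f")}

def is_clique(xs):
    """A set of tokens is a point (clique) iff all pairs are coherent."""
    return all((a, b) in coh or (b, a) in coh for a in xs for b in xs)

print(is_clique({"t"}))        # True
print(is_clique({"t", "f"}))   # False: "t" and "f" are incoherent
```

Proofs would then denote (stable) functions between such spaces, which is where the real technical work lies.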
So like I said this will go too far. So this is what one calls a proof semantics.
So basically you give an actual semantics to the proofs in your system. That is something that we haven't so far done.
We've given a semantics to formulas in our system. Basically we've said well we have a semantic structure and they're satisfied or not.
Alternatively, we give them a truth value somewhere.
But the proofs were just there in the background: they established that a formula is provable, but after that we didn't care.
So this semantics is different. It actually assigns a semantic object to a proof.
And will in particular allow you to tell that two proofs are essentially different because they denote different functions in the semantics.
So we'll be slightly less ambitious here, basically for reasons of time.
It's not that the mathematical concepts behind this would be totally indigestible.
Actually, telling you in a few minutes what a coherent space is is probably faster than telling you the other thing that I'm going to do today.
But the semantics itself then becomes more involved.
So what we're going to do today is something that one could, or actually does, for the sake of distinction refer to as provability semantics.
So it's a semantics like the ones we've been used to so far: you discard the proofs in the semantics and only care whether a formula is provable or not, no matter by which proof.
And of course this will be a provability semantics of linear logic.
And the semantics will be given in terms of phase spaces, which are algebraic structures.
The semantics is therefore more algebraic, less proof-theoretic.
It's more like the classical semantics that we've seen in the beginning: the semantics of classical logic in terms of just a single algebra of truth values, in our particular case just two truth values.
More generally, we could have taken a Boolean algebra of truth values, which would give us basically the same logic.
We did this differently for modal logic and intuitionistic logic: there we had proper spatial structures in our semantics, ones you could draw on the blackboard.
The semantic objects that I'm about to define, although they're called phase spaces, are really more like algebras, in the sense that you wouldn't really want to draw their structure, their abstract algebraic operations.
Okay, let's just quit waffling.
So what is a phase space? A phase space is a pair (M, ⊥). I'm not going to read this symbol as "bottom"; it is meant as "orthogonal".
So it shouldn't be confused with falsity; it has to do with forming orthogonal complements, as in a vector space, if you will.
So this thing consists of two components.
First of all, a commutative monoid M, whose components are written like this. Who here doesn't know what a monoid is?
Okay, everybody knows what a monoid is. So a monoid has a binary operation and a neutral element. I'm going to write the binary operation as an infix dot, or just omit it altogether.
And I'm going to write the neutral element as e, to avoid further overloading the 1 that we've already seen.
So that's the neutral element. And secondly, a subset of the monoid. Nothing more is required.
That subset is called the pole of the phase space. Well, that's all we need.
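To make the definition concrete, here is a minimal sketch in Python. The particular monoid and pole (the integers mod 4 under addition, with pole {0}) are illustrative choices of my own, not from the lecture, and the polar operation X⊥ shown on top is the standard one from phase semantics that the upcoming definitions build on:

```python
# A minimal sketch of a phase space (M, ⊥):
# a commutative monoid M together with a subset, the pole.
M = range(4)
op = lambda a, b: (a + b) % 4   # the monoid operation (written as a dot, or omitted)
e = 0                            # the neutral element, called e in the lecture
pole = {0}                       # the distinguished subset ⊥

def polar(X):
    """The standard polar: X⊥ = {m in M : op(x, m) in pole for every x in X}."""
    return {m for m in M if all(op(x, m) in pole for x in X)}

# A "fact" is a set X with X == polar(polar(X)); facts will interpret formulas.
print(polar({1}))          # {3}
print(polar(polar({1})))   # {1}, so {1} is a fact in this phase space
```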
So on top of that we're now going to stack a few definitions that will allow us to interpret our formulas.
Presenters
Accessible via: open access
Duration: 01:33:16 min
Recording date: 2015-12-21
Uploaded on: 2019-04-28 04:49:03
Language: en-US
The course overviews non-classical logics relevant for computer scientists, in particular:
- Modal logics, extended to formalisms for reasoning about programs (PDL, mu-calculus). Modal systems also form the core of logics of agency and logics for reasoning about knowledge. Moreover, they can be seen as a computationally well-behaved fragment of first-order logic over relational structures.
- Intuitionistic logic, which can be seen as a fragment of certain modal logics (S4) or as the logic of type theory and program extraction.
- Linear logic, which is established as the core system for resource-aware reasoning.
- The logic of bunched implications and separation logic: more recent formalisms to reason about heap verification and programs involving shared mutable data structures.
- Fuzzy and multi-valued logics for reasoning with vague information.