The point that led to the stumble was that proof search for S4 isn't stateless. It worked completely fine for K, because for K proof search is stateless: I only have to look at the current sequent and the rules, I don't have to know any more than that, so I just keep stupidly applying the rules. For S4 I have state, or basically I have to remember things as I search for the proof. Why is that? Well, our termination argument said: there are at most so-and-so many sequents that can crop up in a branch, and therefore after something like M squared steps one of these sequents is bound to repeat, and then we stop. But of course, in order to do that, we need to remember which sequents we have seen. So what would really happen in the completeness proof is that we have something like a set, say S, of sequents that we've already seen, and the state of the proof search is something like this: we have a current sequent that we're currently trying to prove, and we have this set of sequents that we're remembering as having already been tried.
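To make the shape of that state concrete, here is a minimal sketch in Python (not from the lecture; the sequent representation and the parameters is_axiom and apply_rules are hypothetical) contrasting the stateless search for K with the stateful search for S4 that carries the set of already-seen sequents:

```python
# Hedged sketch. Sequents are assumed hashable (say, frozensets of formula
# strings); apply_rules(sequent) is assumed to yield, for each applicable
# rule, the list of premises that would have to be proved next.

def prove_K(sequent, is_axiom, apply_rules):
    """Stateless search (K): the current sequent and the rules are all we need."""
    if is_axiom(sequent):
        return True
    return any(all(prove_K(p, is_axiom, apply_rules) for p in premises)
               for premises in apply_rules(sequent))

def prove_S4(sequent, is_axiom, apply_rules, seen=frozenset()):
    """Stateful search (S4): remember the sequents already seen on this branch."""
    if is_axiom(sequent):
        return True
    if sequent in seen:      # a repeat -- bound to happen after boundedly many steps
        return False         # stop here; this is where a back edge would be drawn
    seen = seen | {sequent}  # the set S of sequents already seen
    return any(all(prove_S4(p, is_axiom, apply_rules, seen) for p in premises)
               for premises in apply_rules(sequent))
```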
Now, if we try to build a model for such a thing, or rather conduct an induction that gives us a model for such a thing, it will look slightly weird. It will basically be an actual model of our target sequent, or rather of the negation of our target sequent, because we're constructing counter-models here. And in the leaves it will have some putative states as counter-models for the sequents we've already seen. Basically this animal here will be a counter-model for our actual target sequent, provided that we already have counter-models for the ones already seen that we can glue in at these places. Well, that all sounds nice and well, but it turns out that when you really try to formalize this and build a model inductively, you have to know so much about what this model looks like in between that basically you haven't gained anything. You have to impose so much structure on this thing that you might just as well stick to the original proof graph that you have. So this idea basically fails.
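Just to give the failed intermediate object a concrete shape (a hypothetical sketch, not the lecture's notation): the induction would have had to carry around something like a partial Kripke model for the negated target sequent, with named holes at the leaves where counter-models for the already-seen sequents still have to be glued in.

```python
from dataclasses import dataclass, field

# Hedged sketch of the intermediate object the failed construction would
# manipulate; all field names are made up for illustration.

@dataclass
class PartialCounterModel:
    worlds: set                                  # worlds built so far
    accessibility: set                           # pairs (w, v) of the accessibility relation
    valuation: dict                              # world -> set of atoms true there
    holes: dict = field(default_factory=dict)    # leaf world -> already-seen sequent
                                                 # whose counter-model is to be glued in
```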
So now what I'm going to do is give you a hopefully correct description of how this works, and then we're going to try it, before I attempt to give a sort of hand-wavy proof that it's correct. Basically what we're doing here is tableau calculi in disguise; tableau calculi just work the opposite way around. In a tableau calculus you have things that look like sequents, but you're not trying to show that they are valid, you're trying to show that they are satisfiable, so everything is exactly dual. And then you build models, and so on. So everything looks very much like it does here, and tableaux are infamous for being messy. This was one failed attempt to present the material in a way that wouldn't be messy, but apparently we have to do it in a messy way after all.
So okay, how does this work? We start with a failed proof search, viewed as a nearly tree-shaped graph. The graph is only nearly tree-shaped because there will eventually be back edges. We can leave these imaginary, because we can actually reconstruct them from a structure that is basically tree-shaped: they come from encountering sequents again that we've already seen. So either we can opt to make an explicit back edge in the tree for such a sequent that we've already seen, or we can leave it implicit and say: well, anybody looking at the graph will be able to see that the sequent already appears in the graph and can draw the additional edge for themselves.
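As a hedged illustration (again not the lecture's construction; the node structure and names are made up), the nearly tree-shaped graph can be stored as a plain tree, and the implicit back edges recovered by matching a node's sequent against the sequents on the path from the root:

```python
# Hedged sketch: a node of the failed-proof-search tree. A node whose sequent
# already occurred higher up on its branch stands for an implicit back edge.

class Node:
    def __init__(self, sequent, children=()):
        self.sequent = sequent          # e.g. a frozenset of formula strings
        self.children = list(children)

def reconstruct_back_edges(node, path=()):
    """Yield (node, ancestor) pairs for the back edges left implicit in the tree."""
    for ancestor in path:
        if ancestor.sequent == node.sequent:
            yield (node, ancestor)      # "draw the additional edge yourself"
            return                      # a repeated sequent is not expanded further
    for child in node.children:
        yield from reconstruct_back_edges(child, path + (node,))
```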
Okay, to phrase these things reasonably, I'm going to need a somewhat wholesale piece of terminology for propositional reasoning. Whenever I use the adverb "propositionally", I mean interpreting all non-propositional constructs that I see as atoms. I can use this for any logic. For instance, if I have a predicate logic formula with quantifiers in it and I say "propositionally", then that means I use a parser that doesn't recognize the universal quantifier and instead assumes that it is just a weird name for some atom. And here in modal logic, I use a parser that, whenever it encounters a box or a diamond, will just say: heck, I don't know this, so it's going to be an atom.
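As a hedged sketch of what such a "propositional parser" amounts to (the formula encoding and the function name are my own, not the lecture's): every box or diamond subformula is simply replaced by a fresh atom, while the propositional connectives are kept.

```python
# Hedged sketch: formulas as nested tuples, e.g.
#   ('->', ('box', 'p'), ('dia', ('and', 'p', 'q')))
# Atoms are plain strings.

PROPOSITIONAL = {'not', 'and', 'or', '->'}

def propositional_abstraction(formula, atoms=None):
    """Replace every modal subformula by an invented atom, keep the rest."""
    if atoms is None:
        atoms = {}                      # modal subformula -> invented atom name
    if isinstance(formula, str):        # already an atom
        return formula
    op, *args = formula
    if op in PROPOSITIONAL:
        return (op, *(propositional_abstraction(a, atoms) for a in args))
    # a box or diamond (anything non-propositional): treat it as one opaque atom
    return atoms.setdefault(formula, f"a{len(atoms)}")
```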
Okay. In particular, I will use this notation here, using the double horizontal bar on my...
The course overviews non-classical logics relevant for computer scientists, in particular:
- Modal logics, extended to formalisms for reasoning about programs (PDL, the mu-calculus). Modal systems also form the core of logics of agency and logics for reasoning about knowledge. Moreover, they can be seen as a computationally well-behaved fragment of first-order logic over relational structures.
- Intuitionistic logic, which can be seen as a fragment of certain modal logics (S4) or as the logic of type theory and program extraction.
- Linear logic, which is established as the core system for resource-aware reasoning.
- The logic of bunched implications and separation logic: more recent formalisms for reasoning about heap verification and programs involving shared mutable data structures.
- Fuzzy and multi-valued logics for reasoning with vague information.