Okay, I'm not sure we can conclude anything definitive from that.
It seems to be in pretty much the same ballpark as before.
Yeah.
Okay.
Let's quickly recap.
So: we've been talking about the kinds of things you can actually do with first-order logic, now that we know its syntax, semantics, and proof theory.
The first thing to note when it comes to proofs is this pair of notions: soundness and completeness.
Soundness, you definitely always want.
If you have a calculus that is unsound, you can prove stuff that isn't true.
That would be pretty bad.
If you have a calculus that is not complete, then there is some stuff that is true which you cannot prove.
Fair enough.
But all the stuff that you can prove definitely is true.
So an unsound calculus is utterly useless.
An incomplete calculus is still pretty useful, even if it could be more useful if there were a complete one.
Fair enough.
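To pin the two notions down, here is the standard formulation, writing ⊢ for derivability in the calculus and ⊨ for logical entailment:

```latex
% Soundness and completeness of a calculus, stated for a set of
% formulas \Gamma and a formula \varphi:
\[
\text{Soundness:}\quad \Gamma \vdash \varphi \;\Longrightarrow\; \Gamma \models \varphi
\qquad\qquad
\text{Completeness:}\quad \Gamma \models \varphi \;\Longrightarrow\; \Gamma \vdash \varphi
\]
```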
It turns out that natural deduction is, as you would expect, sound.
It also turns out that natural deduction is complete, which, if you think about it, is kind of mind-blowing.
So we've just devised this extremely simple language that allows us to talk about really complicated settings, things like the natural numbers,
and hence all of arithmetic, and hence essentially everything your computer can do.
And then it turns out that, purely syntactically, we can write down a bunch of proof rules and, just by concatenating them,
reconstruct everything that follows from our axioms about arithmetic.
I find that to be pretty insane if you think about it.
Nevertheless, it's true.
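To make "concatenating proof rules" concrete, here is a tiny natural deduction derivation, written as a sketch in standard notation (the exact rule names vary between textbooks):

```latex
% A purely syntactic derivation of A ∧ B → B ∧ A, built by stacking
% the rules ∧E (elimination), ∧I (introduction), and →I; the bracketed
% assumption [A ∧ B]^1 is discharged at the →I step.
\[
\dfrac{\dfrac{\dfrac{[A \wedge B]^{1}}{B}\,\wedge E_r
              \qquad
              \dfrac{[A \wedge B]^{1}}{A}\,\wedge E_l}
             {B \wedge A}\,\wedge I}
      {A \wedge B \rightarrow B \wedge A}\,{\rightarrow} I^{1}
\]
```

No semantics enters anywhere; the whole derivation is just symbol manipulation.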
So natural deduction is one of those calculi that happen to be complete.
So if anything logically follows from some theory, then you can actually prove it using natural deduction.
The way that works is via the model existence theorem, which is a lot easier to prove than completeness directly.
Nevertheless, we're not going to go through how to actually prove the model existence theorem for natural deduction either,
because it gets very complicated very quickly.
But the basic idea is this: instead of trying to prove directly that whenever something logically follows from a theory you can derive it,
you show the following.
If a theory does not prove a contradiction, then there is a model that satisfies that theory.
That is the model existence theorem.
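Written out, with Con(Γ) meaning that Γ does not derive a contradiction:

```latex
% Model existence theorem: every consistent theory has a model.
\[
\mathrm{Con}(\Gamma) \;\Longrightarrow\; \exists \mathcal{M}.\ \mathcal{M} \models \Gamma
\]
```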
And in practice it turns out to be a lot more useful than completeness directly, in particular if you're actually interested in models.
The proof of the equivalence is relatively short, but it makes your brain hurt a bit if you go through it and you're not used to that kind of reasoning.
So go through it step by step and see if you can understand what's happening.
It's a pretty good exercise.
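As a hint at how that exercise goes, the direction from model existence to completeness is a short contrapositive argument:

```latex
% If \Gamma does not prove \varphi, then \Gamma \cup \{\neg\varphi\}
% is consistent; by model existence it has a model, and that model
% witnesses that \varphi does not follow from \Gamma. Contraposing
% this chain gives completeness.
\[
\Gamma \not\vdash \varphi
\;\Longrightarrow\; \mathrm{Con}(\Gamma \cup \{\neg\varphi\})
\;\Longrightarrow\; \exists \mathcal{M}.\ \mathcal{M} \models \Gamma \cup \{\neg\varphi\}
\;\Longrightarrow\; \Gamma \not\models \varphi
\]
```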
OK, now for a bunch of first-order theories that actually occur in mathematics and that we should be interested in.
In general, mathematics, and especially abstract algebra, is full of settings that are straightforwardly just first-order theories.
Semigroups are something I can just write down as a first-order theory; likewise monoids, groups, these kinds of things.
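For example, groups: over a signature with a binary operation ∘, a constant e, and a unary inverse, one common axiomatization is just three sentences; dropping the inverse axiom gives monoids, and dropping the unit axiom as well gives semigroups.

```latex
% The first-order theory of groups over the signature (\circ, e, {}^{-1}):
\[
\begin{aligned}
&\forall x\,\forall y\,\forall z.\ (x \circ y) \circ z = x \circ (y \circ z)
  &&\text{(associativity)}\\
&\forall x.\ x \circ e = x \;\wedge\; e \circ x = x
  &&\text{(unit)}\\
&\forall x.\ x \circ x^{-1} = e \;\wedge\; x^{-1} \circ x = e
  &&\text{(inverse)}
\end{aligned}
\]
```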
With fields, it starts to get a bit tricky, because in a field I have both an additive group and a commutative monoid for the multiplication.
But division, i.e. the multiplicative inverse, is only defined on the part of the field that does not contain zero.
So I somehow have to throw zero out.
And then I still have to be careful, because first-order logic forces me to interpret function symbols as total functions.
So I have to give a total function that behaves like one over x everywhere except at zero.
And then I have to choose some arbitrary value at zero just to make that work, which is a bit annoying.
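One common workaround, a convention rather than anything forced by the mathematics, is to guard the inverse axiom so it only says something away from zero, and then stipulate some arbitrary value at zero, for instance 0⁻¹ = 0:

```latex
% Guarded inverse axiom plus an arbitrary stipulation at zero,
% so that {}^{-1} can be interpreted as a total function:
\[
\forall x.\ \bigl(x \neq 0 \;\rightarrow\; x \cdot x^{-1} = 1\bigr)
\qquad\qquad
0^{-1} = 0
\]
```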
Then we have things like vector spaces, where I now have two distinct domains: the scalars and the vectors.
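One way to handle two domains in plain single-sorted first-order logic is to merge them into one universe and carve it up with unary predicates; the predicate names S and V below are illustrative, not fixed notation. Alternatively, one moves to many-sorted first-order logic, where the sorts are built in.

```latex
% Single-sorted encoding of a vector space: unary predicates S (scalar)
% and V (vector) mark the two domains, and every axiom is relativized
% to them, e.g. the typing of scalar multiplication:
\[
\forall a\,\forall v.\ \bigl(S(a) \wedge V(v) \;\rightarrow\; V(a \cdot v)\bigr)
\]
```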