For the new year, let's remember what we were doing.
We were looking at agent functions for model-based agents, meaning agents that
have a model of the world, and in this part of the course we're looking at
model-based agents that have structured models, or in other words that have
models that use formal languages for describing the world. And we're currently
looking at agents which use a particular logic called first-order logic for
talking about the world. And we're using automated theorem-proving methods, in particular
tableau methods, to derive new facts from facts that come in either from world
knowledge, think of the rules of the Wumpus game, or from sensory information.
You go into the next square, notice that it stinks, and that gives you a fact:
it stinks in square 1,1. Such percepts, together with the world knowledge,
let us derive new facts. That's the setting. We have an agent that derives knowledge out of
pre-existing knowledge, like the world knowledge or the rules of the game,
and sensory information, and that gives the
agent a way of knowing about things that it cannot sense yet. If I'm in a square
and it doesn't stink, I have no reason to be scared of a Wumpus. If there's also no
breeze, I know that I'm not in danger of any pits, and so on. So the thing we're
looking at is the agent function, namely the procedure that allows us to do these
inferences, and we're doing that with an inference procedure called tableau. We use tableau
because it's very easy to implement. And we've looked at tableau and other
inference procedures like resolution or DPLL for propositional logic, which is
what we have up there. Essentially: how do we prove a formula? Well, we decorate it with a
little false in the upper right-hand corner, and then we apply a bunch of
rules. Here are five of them, and we develop the whole thing into a tableau,
which is just a tree with multiple branches, and if all of these branches
end in this thumbtack or inconsistency, then we call the tableau closed, and whenever we have a closed
tableau, we can say: aha, the formula we're trying to prove is valid. Now, what we
really are after for our agent is something of the form: does some set Φ of previous world knowledge
and percepts entail my query? For instance, I think I called it OK of 1, 1, which means
no danger, I can go there or something like this. Well, how do we do that in a tableau? Well, essentially,
we start out with a tableau where we put all the previous knowledge as true and OK of 1, 1 with false.
We develop that into a tableau. If it closes on every branch, we say, hooray, 1, 1 is OK.
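To make that procedure concrete, here is a minimal sketch of a signed propositional tableau prover. Everything in it is my own illustration, not code from the course: the tuple encoding of formulas, the function names closes and entails, and the toy knowledge base at the bottom are all assumptions.

```python
# Minimal sketch of a signed propositional tableau prover (illustration only).
# Formulas are tuples: ("atom", name), ("not", A), ("and", A, B),
# ("or", A, B), ("implies", A, B).  A signed formula is a (formula, sign) pair.

def closes(todo, branch):
    """Return True iff every branch of the tableau for the signed formulas
    in `todo` (a list of (formula, sign) pairs) closes."""
    if not todo:
        return False                      # fully expanded and still open
    (f, sign), rest = todo[0], todo[1:]
    kind = f[0]
    if kind == "atom":
        name = f[1]
        if name in branch and branch[name] != sign:
            return True                   # atom labelled both true and false
        return closes(rest, {**branch, name: sign})
    if kind == "not":                     # flip the sign of the argument
        return closes(rest + [(f[1], not sign)], branch)
    if kind not in ("and", "or", "implies"):
        raise ValueError("unknown connective: " + kind)
    a, b = f[1], f[2]
    if kind == "and":
        if sign:                          # (A and B) true: both on one branch
            return closes(rest + [(a, True), (b, True)], branch)
        return (closes(rest + [(a, False)], branch) and
                closes(rest + [(b, False)], branch))
    if kind == "or":
        if sign:                          # (A or B) true: split the branch
            return (closes(rest + [(a, True)], branch) and
                    closes(rest + [(b, True)], branch))
        return closes(rest + [(a, False), (b, False)], branch)
    if sign:                              # (A -> B) true: A false or B true
        return (closes(rest + [(a, False)], branch) and
                closes(rest + [(b, True)], branch))
    return closes(rest + [(a, True), (b, False)], branch)

def entails(premises, query):
    """Phi entails the query iff the tableau that labels every premise
    true and the query false closes on every branch."""
    return closes([(p, True) for p in premises] + [(query, False)], {})

# Toy stand-in for the Wumpus reasoning: "no stench in (1,1) implies (1,1)
# is OK" plus the percept "no stench in (1,1)" together entail OK(1,1).
stench11 = ("atom", "Stench11")
ok11 = ("atom", "OK11")
kb = [("implies", ("not", stench11), ok11), ("not", stench11)]
print(entails(kb, ok11))                  # prints True: the tableau closes
```

The entailment check is exactly the recipe from above: label every premise as true, label the query as false, and report success only if every branch of the resulting tableau closes.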
Is everybody kind of generally situated? If you have questions, now would be a good time.
OK, good. So that's the first thing. We're doing tableau, but we're not doing tableau in propositional
logic, which is what you're seeing there, but tableau in first-order logic. So let us
remember what first-order logic does. First-order logic allows us to say things about individuals.
For instance, things like OK of 1, 1, where 1, 1 is the cell at position 1, 1. And we can say
something about every individual. We've seen sentences like: for all x, human of x implies
mortal of x. We have these quantifiers, for all and exists. They're the big innovation, which means we somehow have to deal with them.
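Written out in the usual first-order notation, the two examples just mentioned look like this (my transcription of the spoken formulas; the exact symbols on the slides may differ):

```latex
% "(1,1) is OK" and "all humans are mortal" as first-order formulas.
\[
  \mathrm{OK}(1,1)
  \qquad
  \forall x.\;\mathrm{human}(x) \Rightarrow \mathrm{mortal}(x)
\]
```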
Dealing with them means our tableau calculus acquires new rules. And we've talked about the natural rule we would want:
if for all x, A is true, then it must also be true for whatever we drop in, that is, for every term c we drop in for x.
This is a very natural rule. Unfortunately, in most contexts in first-order logic it's actually infinitely branching,
and infinitely branching rules are bad for implementation. You can think about it this way: you have to choose a c and you have no
idea which c's will be good. So instead of this, we introduced new rules for the tableau. In this one, instead of taking a concrete c to
instantiate with, we basically say: oh, we'll take something. We'll decide later what. So we're going to call it y.
We take a meta variable y and just replace x with it, which is something like a promise that we'll decide later.
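Sketched in signed-tableau notation, where the superscript T marks a formula asserted as true, the naive universal rule and its free-variable replacement look roughly like this (an illustration; the exact notation in the course material may differ):

```latex
% Naive rule: one conclusion per term c, hence infinitely branching whenever
% the signature admits infinitely many terms.
\[
  \frac{(\forall x.\,A)^{\mathsf{T}}}{([c/x]A)^{\mathsf{T}}}
  \qquad \text{for every term } c
\]
% Free-variable rule: postpone the choice by substituting a fresh meta variable y.
\[
  \frac{(\forall x.\,A)^{\mathsf{T}}}{([y/x]A)^{\mathsf{T}}}
  \qquad \text{$y$ a new meta variable}
\]
```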
There are two costs we have to pay for this. One of them is that the existential rule has to be changed as well, because these meta variables that get
instantiated later end up in the witness construction. So you have to do something about it: instead of a witness constant, you now need what we call a Skolem term.
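For comparison, in standard free-variable tableau calculi the changed existential rule introduces such a Skolem term over the meta variables occurring free in the formula; again only a sketch, with notation that may differ from the slides:

```latex
% Existential rule with a Skolem term: f is a new function symbol and
% y_1, ..., y_n are the meta variables occurring free in (exists x. A).
\[
  \frac{(\exists x.\,A)^{\mathsf{T}}}
       {([f(y_1,\ldots,y_n)/x]A)^{\mathsf{T}}}
  \qquad \text{$f$ new}
\]
```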