My next goal was to convince you that merely having multiple possible worlds is not actually very helpful.
We need them, we need to plan for these contingencies, but we need more information.
We need to know how often, so to speak, these contingencies are going to become actual world states.
I used the Wumpus world again to illustrate this.
When the agent has visited these three cells and is planning where to go next, it has zero information about this part of the cave.
Remember, it's dark.
And it has some information about this part of the cave, because it can feel breezes here, or it could detect a stench, and so on, given a reliable sensor model, which I'm assuming just for simplicity.
So here we know that any of these three cells could be unsafe, because otherwise we wouldn't be feeling breezes in those two.
So we have three possible worlds, right?
A pit here, a pit here, or a pit there.
Well, actually a couple more, because combinations of pits are also consistent with the breezes.
If you count carefully, that gives five possible worlds.
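To make the counting concrete, here is a minimal enumeration sketch, assuming the standard AIMA layout (an assumption, since the exact cells depend on the slide): the agent has visited [1,1], [1,2], and [2,1], feels breezes in [1,2] and [2,1], and the frontier cells that may hide pits are [1,3], [2,2], and [3,1]. Of the eight pit/no-pit assignments to the three frontier cells, only those that explain both breezes survive:

```python
from itertools import product

# Assumed layout: visited [1,1], [1,2], [2,1]; breezes in [1,2] and [2,1];
# frontier cells that may hide a pit: [1,3], [2,2], [3,1].
frontier = ["1,3", "2,2", "3,1"]

def consistent(world):
    """A world is consistent if every felt breeze has an adjacent pit."""
    p13, p22, p31 = world
    breeze_12 = p13 or p22  # [1,2] borders [1,3] and [2,2]
    breeze_21 = p22 or p31  # [2,1] borders [2,2] and [3,1]
    return breeze_12 and breeze_21

worlds = [w for w in product([False, True], repeat=3) if consistent(w)]
for w in worlds:
    print(dict(zip(frontier, w)))
print(len(worlds), "consistent possible worlds")  # prints 5
```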
And now the question, of course, is: where should I go?
And the answer is that I have no clue, because if I only have possible worlds, I cannot really see what the distribution of risks is.
So what we also want to know is the likelihood of falling into a pit in each of those three cells.
We need something that tells us how likely each of those possible worlds is to be the actual world.
And we looked a little bit into whether logic could be the answer.
We tried some modeling using logic; after all, that's what we learned last year.
And we essentially ran into the problem that logic, while good at saying something about all the possibilities, is bad at capturing the full information, essentially because it only knows true or false: it is binary.
We looked at different kinds of rules and how they could capture this.
I introduced the cavities example, which we're going to look at quite a lot in the coming weeks.
So we had simple rules that supposedly modeled the world, but they were wrong.
We had less simple rules, where you list all the possible explanations of something: if you experience a toothache, there is a variety of things that could be the cause: gingivitis, a cavity, having been in a fight, an accident, all those kinds of things.
Rules in the causal direction are also flawed: even though there is an interaction between cavities and toothaches, it is not of the form "if A, then necessarily B."
So likelihoods rear their ugly head here.
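Written out in propositional logic, the three rule forms we tried might look as follows (a sketch along the lines of the AIMA cavities discussion; the exact proposition names are illustrative):

```latex
% Diagnostic rule: wrong, since cavities are not the only cause of toothaches.
\mathit{Toothache} \Rightarrow \mathit{Cavity}
% Exhaustive diagnostic rule: unwieldy, since we can never list every cause.
\mathit{Toothache} \Rightarrow \mathit{Cavity} \lor \mathit{Gingivitis} \lor \mathit{Accident} \lor \dots
% Causal rule: also wrong, since not every cavity actually causes a toothache.
\mathit{Cavity} \Rightarrow \mathit{Toothache}
```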
So we concluded that there is just no way around it: we have to bite the bullet and think and talk about likelihoods.
Yesterday we started talking about probabilities, beginning to build up the machinery that lets us reason under uncertainty and act under uncertainty, which is exactly what we want our agents to be able to do this semester.
Before we go into the maths, which is rather simple and which you've probably heard before, I would like to make sure that we're on the same page about what we're really doing here.
Recap: Modeling Uncertainty
Main video on this topic: chapter 3, clip 4.