Welcome back to AI2.
We're through with all the admin and intro material, so we can actually start on the real content.
So last week with Florian, you talked about essentially an introduction to belief states.
The idea is that we have sources of uncertainty, which really means we don't know what the
world is like: we might have trouble sensing it, our actions might not be deterministic,
or we might be in the bad situation that the world changes without us acting on it.
If you think about it, all of those things happen all the time.
So we can never be sure what the state of the world is.
So as agents we have to be prepared to maintain a whole set of possible worlds, and that
set we call the belief state.
But we're not going to be naive about that: in our world model, we are going to attribute
different likelihoods to the different possible worlds.
I don't know whether it's raining outside, but I'm pretty sure it's not sunny and 40
degrees Celsius out there; it's probably close to raining and somewhere near 13, maybe
15 degrees at most.
I don't know what the world outside is like, but I have a certain model of the world which
admits certain possible worlds: the 40-degree world and the 12-degree world are both
possible, but they have different likelihoods.
So I'm going to make my decision based on what is possible and what do I consider most
likely.
And the mechanics of this, which we're calling probabilistic reasoning, is really what we're
going to look at.
So we're ultimately interested in agents that can entertain belief models with what we're
going to call probability distributions over the belief models.
That's going to be what we need, and we're going to look at a couple of examples still.
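As a minimal sketch of what such a belief state could look like: a probability distribution over possible worlds can be represented as a mapping from worlds to probabilities (all names and numbers here are illustrative assumptions, not from the lecture).

```python
# Sketch: a belief state as a probability distribution over possible
# worlds. World names and weights are illustrative assumptions.

def normalize(belief):
    """Rescale the weights so they sum to 1 (a proper distribution)."""
    total = sum(belief.values())
    return {world: p / total for world, p in belief.items()}

# Possible weather worlds with unnormalized likelihoods.
belief = normalize({
    "raining_13C": 6.0,
    "cloudy_15C": 3.0,
    "sunny_40C": 0.1,   # possible, but considered very unlikely
})

# The agent can now ask which world it considers most likely.
most_likely = max(belief, key=belief.get)
```

The point is that all three worlds remain in the belief state; the agent just weights them differently and reasons with those weights.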
I'm working slowly my way towards decision theory.
So consider a situation where you want to be in the lecture hall at 10:15, either to give
a lecture, which is typically my case, or to take an exam.
Now you could have two possible plans.
One is to get up early, be there early, and be there for sure in time.
Or get up very late and kind of come flying in here and maybe be a quarter of an hour
late.
They're both wonderful plans.
They have different properties.
Plan two really only succeeds with 50% probability.
That might be enough for you if it's not an exam.
If the lecture is for two people, that might also be enough.
If it's for 500, I would probably not like that plan so much.
Just to see what's involved, think about the same problem when going to Frankfurt Airport.
An intercontinental flight costs you a fortune, and if you're not there in time, you've
lost that fortune.
So really what a rational agent (and we've talked about those) does is choose the action
with the best balance between the investment, or cost, and the benefit: the benefit being
not missing the flight and losing all that money, or not embarrassing yourself in front
of 500 students, and so on.
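The cost/benefit trade-off above can be made concrete as an expected-utility comparison of the two plans. This is a hedged sketch: the utility numbers below are illustrative assumptions, not values given in the lecture.

```python
# Sketch: comparing the two "get to the lecture hall" plans by
# expected utility. All utility values are illustrative assumptions.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one plan."""
    return sum(p * u for p, u in outcomes)

# Plan 1: get up early and arrive on time for sure, at a small
# cost for the lost sleep (benefit 10, sleep cost 1).
plan_early = [(1.0, 10 - 1)]

# Plan 2: sleep in; 50% chance of arriving on time (full benefit),
# 50% chance of being a quarter of an hour late, which is a large
# penalty if it's an exam or a 500-person lecture.
plan_late = [(0.5, 10), (0.5, -20)]

eu_early = expected_utility(plan_early)   # 9.0
eu_late = expected_utility(plan_late)     # -5.0

best_plan = "early" if eu_early > eu_late else "late"
```

Note how the verdict depends on the stakes: with a milder lateness penalty (say, a two-person audience), plan 2's expected utility can exceed plan 1's, which matches the intuition in the example.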
[Recording metadata: open access; duration 01:30:51 min; recorded 2023-04-25; uploaded 2023-04-26 17:29:07; language en-US]