So the other ingredient we always have in the back of our minds is that probabilities are nice and dandy, but they don't give you anything until you act on them.
After all, agents are things that need to act in the world, and we can only judge whether they are acting rationally or intelligently by what they do in the world.
Otherwise, who knows what is going on in there: an agent could be doing wonderful reasoning and maintaining all the probability distributions it likes, but unless it does or says something, you cannot tell whether it is intelligent.
So acting is actually an important issue here.
We looked at a couple of examples where acting is a process that depends on both probabilities (what is the chance I will be on time for my lecture, or in Frankfurt, or whatever) and, of course, on the utility of reaching those goals.
Utility is an internal measurement function; it is a subjective thing.
It is not like in evolution, or in learning agents, where you have an external performance measure.
Utility is something internal, which is why I sometimes gloss it as "how happy will this make me?", just to stress that it is internal: the utility function is part of the agent's design.
It may or may not coincide with some external performance measure.
Often, when the internal utility is aligned with the external performance measure, these agents are successful.
But sometimes failing an exam makes you very happy, because you attach a private utility to that.
Maybe you want to win a bet, or you know that if you get a 4.0 on this exam you cannot retake it, and so you want to fail.
Internal utilities might not be the official ones.
That is one thing I want you to have absolutely straight.
For us, the essential point is this: if you want some kind of decision theory for rational agents in non-deterministic environments, you get it as a combination of probability theory, to know what the state of the world is, and utility theory, to know how much you value the possible outcomes.
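Written as a formula (this is the standard textbook formulation of the principle, not something spelled out in this clip), the rational action is the one with maximal expected utility:

\[
a^{*} \;=\; \operatorname*{arg\,max}_{a}\ \sum_{s} P(s \mid a)\, U(s)
\]

where \(P(s \mid a)\) is the probability that action \(a\) leads to outcome \(s\), and \(U(s)\) is the agent's internal utility for that outcome.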
If you start implementing that, you get an agent that looks essentially like this: it has an upper part that probabilistically figures out what the world is like, then looks at all the action outcomes, which are themselves probabilistic, gives them a utility rating, and then tries to act rationally by maximizing utility or, much more precisely said, by maximizing expected utility, because you do not know what the utility of your action is going to be; the outcome is probabilistic.
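As a minimal sketch of that decision step (my own illustration, not code from the lecture; the function names and interfaces are made up), the agent scores each action by its expected utility and picks the best one:

```python
# Minimal sketch of an expected-utility-maximizing decision step.
# outcome_model(action) is assumed to return a dict {outcome: probability},
# and utility(outcome) the agent's internal utility for that outcome.

def expected_utility(action, outcome_model, utility):
    """Sum P(outcome | action) * U(outcome) over all possible outcomes."""
    return sum(p * utility(outcome)
               for outcome, p in outcome_model(action).items())

def choose_action(actions, outcome_model, utility):
    """Act rationally: pick the action with maximal expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_model, utility))
```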
The good thing about this kind of agent is that if you have a utility function, then you can actually deal with conflicting goals.
I am hungry, I have not had lunch yet, but I also do not want to walk out of the AI lecture.
Unless you have something to eat with you, these are conflicting goals, and now it is up to your utility function to decide which one is better; a small worked example follows below.
Simple agents without utility functions just have goals and pursue one of them; they cannot weigh one goal against another.
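Using the sketch from above, that trade-off might look as follows; the outcome probabilities and utility numbers are invented purely to show how a single utility function resolves the conflict:

```python
# Hypothetical outcome model: leaving for lunch probably gets you fed but
# you miss the lecture; staying keeps the lecture but leaves you hungry.
def outcome_model(action):
    if action == "leave_for_lunch":
        return {"fed_but_missed_lecture": 0.9, "hungry_and_missed_lecture": 0.1}
    return {"hungry_but_saw_lecture": 1.0}      # "stay_in_lecture"

# Hypothetical internal utilities; another agent could rank these differently.
utility = {
    "fed_but_missed_lecture": 4,
    "hungry_and_missed_lecture": 0,
    "hungry_but_saw_lecture": 6,
}.get

# EU(leave_for_lunch) = 0.9*4 + 0.1*0 = 3.6, EU(stay_in_lecture) = 6.0
print(choose_action(["leave_for_lunch", "stay_in_lecture"],
                    outcome_model, utility))    # -> stay_in_lecture
```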
Recap: Acting Under Uncertainty
The main video on this topic is in chapter 3, clip 5.