Okay, so welcome to another week of AI.
I'll let you find a seat first.
So last week we finished looking at the prerequisites, all the stuff I expect you to be able to do, and we started looking at artificial agents.
Essentially, the idea of an agent, which we are going to use as the central metaphor for intelligence in this course, is that we have an entity that can sense an environment and act on it, and therefore behave in that environment.
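To make that sense-act loop concrete, here is a minimal sketch in Python. This is my own illustration rather than anything shown in the lecture; the vacuum-style agent and the percept strings are assumptions chosen only to show the cycle of sensing and acting in an environment.

```python
# Minimal sketch of the agent metaphor: an entity that senses an
# environment, acts on it, and thereby behaves in it. (Illustrative only.)
from typing import Protocol


class Agent(Protocol):
    def act(self, percept: str) -> str:
        """Map what the agent currently senses to an action."""
        ...


class ReflexVacuumAgent:
    """Hypothetical agent that reacts purely to the current percept."""

    def act(self, percept: str) -> str:
        return "suck" if percept == "dirty" else "move"


def run(agent: Agent, percepts: list[str]) -> list[str]:
    """The sense-act loop: the agent behaves in the environment."""
    return [agent.act(p) for p in percepts]


if __name__ == "__main__":
    print(run(ReflexVacuumAgent(), ["dirty", "clean", "dirty"]))
    # ['suck', 'move', 'suck']
```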
And we pursued the idea that it would do so rationally. Rationally meaning that it optimizes some kind of performance measure. It's essentially survival of the fittest: you want to optimize some kind of performance measure. In evolution, that performance measure is reproduction. The better you reproduce, the fitter you are, and the other way around, by definition. We allow ourselves other performance measures, but still the idea is that this performance measure has to be optimized as far as we can.
We've looked at all the things we don't have to do as an agent. We don't have to be omniscient; we don't have to know everything. We just have to act as optimally as we can expect our actions to be. We don't have to be successful, even though we might like to be, because we don't know everything.
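One way to read "optimal as far as we can expect" is as picking the action with the best expected value of the performance measure, given what the agent can know. The sketch below is my own illustration, not the lecture's code, and the outcome probabilities and scores are made-up numbers.

```python
# Hedged sketch: rationality as choosing the action whose *expected*
# performance measure is highest, not the action that is guaranteed to
# succeed. The outcome model and scores below are invented for illustration.
from typing import Callable, Dict, Iterable


def rational_action(
    actions: Iterable[str],
    outcomes: Callable[[str], Dict[str, float]],  # action -> {outcome: probability}
    performance: Callable[[str], float],          # outcome -> performance score
) -> str:
    """Pick the action with the highest expected performance measure."""
    def expected(a: str) -> float:
        return sum(p * performance(o) for o, p in outcomes(a).items())
    return max(actions, key=expected)


if __name__ == "__main__":
    # Hypothetical numbers: the agent cannot guarantee success, only a good bet.
    outcome_model = {
        "risky": {"win": 0.3, "lose": 0.7},
        "safe":  {"small_win": 1.0},
    }
    score = {"win": 10.0, "lose": -5.0, "small_win": 2.0}
    print(rational_action(outcome_model, lambda a: outcome_model[a], score.__getitem__))
    # 'safe'  (expected 2.0 beats risky's 0.3*10 - 0.7*5 = -0.5)
```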
So that's rationality. We want to look at rational agents, and the biggest upshot I want you to take away from last week is that what is rational depends on the environment, on the characteristics of the environment. We looked at them, right?
Dynamic and static environments,
fully observable or partially observable environments,