We were talking about rational agents as a framework, and the idea is that all these
algorithms we're learning technically need to live somewhere.
So we just package them in an intelligent agent, and agents are things that can sense an environment and act in it.
Yes?
Pardon me?
Yes, that was Matrix.
Did I use the wrong chat?
Ah, okay, yeah, well, you will eventually.
There are lots of experts who have access.
Okay, so, there we go.
And we talked about agents being rational and all of those kind of things.
Those are the things, of course, that are in the quiz.
And the last thing we really did was we talked about kinds of environments, because these kinds of environments actually determine which agents are successful in them.
Remember, rationality was about being successful, being maximally successful in expectation in a particular environment.
How successful you can be depends on the agent design.
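To make "maximally successful in expectation" a bit more concrete, here is one common textbook-style way of writing it down; the notation (a*, M, e_{1:t}) is my own shorthand, not from the lecture:

```latex
% Rational action choice: for the percept sequence e_{1:t} observed so far,
% pick an action a* that maximizes the expected performance measure M.
a^{*} \;=\; \arg\max_{a \in A} \; \mathbb{E}\!\left[\, M \mid e_{1:t},\, a \,\right]
```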
If you design an agent that is essentially going to be a bunny, and you forget the legs, and the world has foxes, it's probably not going to be very successful.
The sensors, the actuators, and the environment, they all have to fit to actually make your agent design work well.
So, we need to understand the environment, and there are a couple of dimensions in which we can classify the environment.
We've looked at them: observability, that is, are there things that are hidden from the agent, things the agent cannot know?
If yes, life is more complicated.
Deterministic versus stochastic environments.
Do all of your actions actually work out?
Episodic versus sequential, static versus dynamic or semi-dynamic, discrete versus continuous, single-agent versus multi-agent environments, and so on.
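To keep these dimensions straight, here is a minimal sketch (my own illustration, not code from the course) that records such a classification as plain data; the class and field names are assumptions chosen for readability. The chess discussion below is classified with it in a follow-up snippet.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentProfile:
    """One environment, classified along the dimensions from the lecture."""
    observable: str      # "fully" or "partially": is anything hidden from the agent?
    deterministic: bool  # do your actions always work out as intended?
    episodic: bool       # True = episodic, False = sequential
    static: str          # "static", "dynamic", or "semi-dynamic"
    discrete: bool       # True = discrete, False = continuous
    agents: int          # 1 = single-agent, more = multi-agent
```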
Okay, so lots of new words, and we looked at a couple of examples.
Extending what I told you last time: you have to take these examples with a grain of salt.
Somebody asked about this.
Is chess deterministic? It really depends on whether you think of chess as single-agent.
If it's single-agent and the opponent melts into the environment, then of course it's non-deterministic: the environment's reply, the opponent's move, isn't determined by your own action.
You sometimes don't win.
Or if you think of it as a two-player game, then determinism is really just about whether your own actions succeed.
Then yes, it probably is deterministic, because if I want to move my pawn from d2 to d4, I can usually manage that.
So there's a little bit of you, a modeling decision in a way, injected there.
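Continuing the sketch from above, the chess discussion becomes two different rows depending on that modeling decision (again my own illustration; the class is repeated so the snippet runs on its own, and the concrete values just restate the argument):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentProfile:
    observable: str
    deterministic: bool
    episodic: bool
    static: str
    discrete: bool
    agents: int

# View 1: chess as a two-player game. My own moves always succeed,
# so from this perspective it is deterministic.
chess_two_player = EnvironmentProfile(
    observable="fully", deterministic=True, episodic=False,
    static="semi-dynamic",  # the clock keeps running, hence "semi-dynamic"
    discrete=True, agents=2,
)

# View 2: the opponent melts into the environment. The single agent cannot
# predict the environment's reply, so the same game looks non-deterministic.
chess_vs_environment = EnvironmentProfile(
    observable="fully", deterministic=False, episodic=False,
    static="semi-dynamic", discrete=True, agents=1,
)
```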
This is also something I would like to use the agent metaphor for.
Sometimes there are modeling decisions, and it's you as the agent designer who has to make them.
That's kind of what we're prepping you for in this course: that you can actually go and design agents.
Am I going to approach this as a single agent or as a multi-agent problem?
Am I going to pretend the environment is deterministic, even though it's actually stochastic?
We know there's a problem with that, but maybe the runtime of my algorithm makes up for it.
That's you as agent designers who make these decisions, and your agents will be more or less successful in the world.
We'll have this Kalah challenge, I already told you about that, where you actually make these modeling decisions,
are aware of these things, and, I hope, really understand what's involved.
That's something you should participate in.
Not only is it fun, but you can get extra bonus points.
You learn something, you really learn something.
The same, by the way, holds for the AI1 systems project, which might be something you want to do.
Essentially, for every chapter we do here, there is a non-trivial programming problem in that project.
After you've done it, you've really understood what AI1 is about.
Maybe you don't want to do it in parallel, because it's a lot of work.
Any questions so far?