OK, so I'll start in English directly this time.
So we started last week with actually getting to the meat of things.
So the first part here is about probabilistic reasoning, which means we're going to deal
with uncertainty, as we must if we have agents in stochastic and non-deterministic
environments, and in partially observable environments, where we don't see the state of
the world in one go, and where we are not sure that our actions will actually succeed.
A typical example is the Wumpus world, which is partially observable because the cave is
dark; you're seeing the full state here just so I can explain the game.
But of course our agent, because it's dark, cannot see anything and can only perceive
breezes and stench and glitter and all of those kinds of things.
And of course, we might have non-deterministic actions, where my plan is to go forward
into cell 2-1, but I actually end up in cell 1-2.
Remember, it's dark, and sometimes actions don't succeed.
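To make that concrete, here is a minimal Python sketch of such an unreliable move; the grid coordinates, the 0.8 success probability, and the function name are my own illustrative assumptions, not anything fixed in the lecture.

```python
import random

# Hypothetical slip model: with probability 0.8 the move succeeds;
# otherwise the agent slips into one of the two perpendicular cells.
# All numbers and names here are illustrative assumptions.
def noisy_move(cell, direction, p_success=0.8):
    x, y = cell
    deltas = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
    dx, dy = deltas[direction]
    if random.random() < p_success:
        return (x + dx, y + dy)      # intended outcome
    if random.random() < 0.5:        # slip sideways
        return (x + dy, y + dx)
    return (x - dy, y - dx)

# Planning to go from (1, 1) east into (2, 1) may still land us in (1, 2).
print(noisy_move((1, 1), "E"))
```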
So we have unreliable sensors, partial observability, and unreliable actions.
Those are very common things.
Think of a robot that is not just in a lab but goes outside, say a self-driving car.
One of the most important things for such a robot is to know where it is, where it wants
to go, and all of those kinds of things, and all of that information is unreliable.
So we recapped the agent architecture last time.
Remember the kind of guiding metaphor here is that we have these agents, which can interact
with the environment via percepts and actions, and we looked at various kinds of them.
And I'm going to jump over this.
We looked at very simple ones, like you would have in simple bacteria or something like
that, where you basically have a response: you sense the world, and you have a couple
of condition-action rules. For simple, one-celled organisms, those are basically
programmed into the DNA and force them to behave in certain ways.
And then we have those kinds of agents which keep a world state, which you don't really
get in one-celled organisms.
You need some kind of a brain for that.
You need to remember things.
So that's what we drilled into a little bit: agents that keep a representation of the world.
And in the last semester, all of our agents essentially fell into that class.
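As a quick sketch of that distinction, here is what the two kinds of agents could look like in Python; the percepts, rules, and state update are made up purely for illustration.

```python
# Simple reflex agent: condition-action rules only, no memory.
def reflex_agent(percept):
    if percept == "glitter":
        return "grab"
    if percept == "breeze":
        return "retreat"
    return "forward"

# Model-based agent: keeps an internal world state and updates it
# from each percept before choosing an action.
class ModelBasedAgent:
    def __init__(self):
        self.state = {}              # remembered world state

    def act(self, percept):
        self.state[percept] = True   # toy state update
        if self.state.get("glitter"):
            return "grab"
        return "forward"
```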
Now we want to extend that, and the upshot of everything we did last time is that we need
to change this from a model of the world state, since we don't know and cannot know the
world state, to a model of what we believe about the world state.
And that really needs two things.
We need to have a belief state: what do we believe the world is like?
It has to allow multiple possible worlds, because some things we don't know, so we have
to kind of plan for all contingencies.
And we need a transition model that actually updates our belief state with new
information coming in.
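As a minimal sketch of what that could look like, assuming we represent the belief state as a set of possible world states, the transition model as a function from a state and an action to the set of possible successor states, and a percept model as a function from a state to its possible percepts (all names here are my own illustrative choices):

```python
# A belief state as a set of possible worlds; the transition model maps
# (state, action) to the set of states the action might lead to.
def predict(belief, action, transition_model):
    """All worlds we might be in after acting, given any world we might be in now."""
    return {s2 for s in belief for s2 in transition_model(s, action)}

def update(belief, percept, percept_model):
    """Keep only the worlds consistent with what we just observed."""
    return {s for s in belief if percept in percept_model(s)}
```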
For instance, I can see outside, but if I'm unsure whether I'm on the bottom floor of
this building or whether there's something beneath, I need to have a plan for both
contingencies.
But I can also just basically go down, see whether there are some stairs, look what's
beneath here, and then I can actually restrict the number of possible models, making my
belief state more accurate.
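Using the update function from the sketch above, that floor example could look like this; the states and percepts are made up for illustration.

```python
# Two possible worlds: we are on the bottom floor, or there is a floor below.
belief = {"bottom_floor", "floor_below"}

# Hypothetical percept model: stairs going down are visible exactly when
# there is a floor beneath us.
def percept_model(state):
    return {"stairs_down"} if state == "floor_below" else {"no_stairs"}

# Going down and seeing stairs rules out the bottom-floor world.
belief = update(belief, "stairs_down", percept_model)
print(belief)  # {'floor_below'}
```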
But of course, I have to plan for my actions being unreliable, and when I thought I was
in one place, I may actually be in another.