10 - Artificial Intelligence II

Okay. Quiz is over. I've seen that almost everybody has finished, so that's a good sign.

I've also seen that the green is increasing. That can mean one of two things.

Either you've managed to hook up the system directly to ChatGPT, or you're actually learning.

I'm kind of rooting for the second. So, something is going on here. I think we'll have to investigate that.

Okay, good. It doesn't look that bad though, if I look at it. Good.

Righto. We were looking at temporal inference last week.

So, the general setup is that we have time indexed random variables.

And we have the usual two categories. We have the state variables, which we call hidden variables in the non-temporal domain.

And we have the evidence variables that give us our observations.

And we're really interested in various probabilities, typically conditional probabilities, given certain evidence.

The classical, simplest example is the weather is a state variable.

The evidence or observational variable is whether there's an umbrella.

And we are interested in the weather, given that we've observed a sequence of umbrella sightings over the days.
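To make this concrete, here is a minimal filtering sketch in Python for the umbrella model. The transition and sensor numbers (0.7 rain persistence, 0.9 umbrella given rain) are the usual textbook values, assumed here rather than taken from the lecture.

```python
import numpy as np

# Hidden state X_t: 0 = rain, 1 = no rain.  Evidence E_t: umbrella seen?
# Assumed textbook probabilities, not values from this lecture.
T = np.array([[0.7, 0.3],    # P(X_t | X_{t-1} = rain)
              [0.3, 0.7]])   # P(X_t | X_{t-1} = no rain)
P_umbrella = np.array([0.9, 0.2])  # P(umbrella | rain), P(umbrella | no rain)

def filter_steps(evidence, belief=np.array([0.5, 0.5])):
    """State estimation: yields P(X_t | e_{1:t}) after each observation."""
    for umbrella in evidence:
        predicted = belief @ T                             # predict one step ahead
        p_e = P_umbrella if umbrella else 1 - P_umbrella   # sensor likelihood
        belief = p_e * predicted                           # condition on evidence
        belief /= belief.sum()                             # normalize
        yield belief

for b in filter_steps([True, True, False]):
    print(b)
```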

Okay, so we've looked at four kinds of inference procedures.

One is state estimation, which is really what is the state today, given all the evidence leading up to today.

We have prediction, which asks: what is the state at today plus k?

Then there's smoothing, which is what is the state in the past, given that we now have more information.

And finally, there's the inference to the most likely explanation, which is really: what is the most likely state sequence?
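Written out as conditional-probability queries over the state variables X and evidence variables E (standard notation, not copied from the slides), these four tasks are:

```latex
\begin{align*}
\text{state estimation (filtering):}\quad & P(X_t \mid e_{1:t})\\
\text{prediction:}\quad & P(X_{t+k} \mid e_{1:t}),\quad k > 0\\
\text{smoothing:}\quad & P(X_k \mid e_{1:t}),\quad 1 \le k < t\\
\text{most likely explanation:}\quad & \operatorname*{arg\,max}_{x_{1:t}} P(x_{1:t} \mid e_{1:t})
\end{align*}
```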

Okay, so we've looked at algorithms, we've looked at complexities. The complexities don't seem that bad.

Even the Viterbi algorithm, the algorithm that looks for the most likely explanation, is essentially linear.

Which is good, because that linear complexity is also spread out over the linearly growing lifetime of the agent.

So we'll have essentially constant-time state estimation per step, constant-time smoothing, all of those kinds of things.

Which is a good thing, and which is one of the reasons agents can actually survive.
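To see why the most-likely-explanation computation stays linear, here is a minimal Viterbi sketch for the same assumed umbrella model: each step only keeps the best path probability per state plus a backpointer, so the work per time step is constant.

```python
import numpy as np

# Same assumed umbrella model as above (textbook numbers, not from the lecture).
T = np.array([[0.7, 0.3], [0.3, 0.7]])
P_umbrella = np.array([0.9, 0.2])

def viterbi(evidence):
    """Most likely state sequence; O(t * |states|^2), i.e. linear in t."""
    p0 = P_umbrella if evidence[0] else 1 - P_umbrella
    m = p0 * np.array([0.5, 0.5])   # assumed uniform initial belief
    back = []                       # backpointers, one array per step
    for e in evidence[1:]:
        cand = T * m[:, None]       # cand[i, j]: best path ending in i, then i -> j
        back.append(cand.argmax(axis=0))
        p_e = P_umbrella if e else 1 - P_umbrella
        m = p_e * cand.max(axis=0)  # best unnormalized path probability per state
    seq = [int(m.argmax())]         # best final state
    for bp in reversed(back):       # follow backpointers to reconstruct the path
        seq.append(int(bp[seq[-1]]))
    return seq[::-1]                # 0 = rain, 1 = no rain

print(viterbi([True, True, False, True, True]))
```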

Wouldn't it be terrible if, once you turned 20, your system shut down for a week or so just to do garbage collection, or to think about the past?

There's something called a midlife crisis for people like me.

And maybe that's one of the things where things go nonlinear, but who knows.

Okay, good. So we've also seen that a couple of assumptions make this possible.

One is that we're only doing Markov chains, first-order Markov processes.

That's a restriction, meaning not a lot of dependencies: the current state depends only on the state directly before it.

The second one is that the process is stationary, meaning the CPTs are always the same.

And we also want to have a sensor Markov property; that's something we're kind of building in.

And we want the sensor model to be stationary as well, so that we can deal with things.

That makes the things sufficiently easy that we can have linear inference.
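In standard notation (my transcription, not the slide's), these assumptions read:

```latex
\begin{align*}
\text{first-order Markov property:}\quad & P(X_t \mid X_{0:t-1}) = P(X_t \mid X_{t-1})\\
\text{sensor Markov property:}\quad & P(E_t \mid X_{0:t}, E_{1:t-1}) = P(E_t \mid X_t)\\
\text{stationarity:}\quad & P(X_t \mid X_{t-1}) \text{ and } P(E_t \mid X_t) \text{ do not depend on } t
\end{align*}
```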

I would like to show you an extended example of a hidden Markov model.

That's kind of robot localization.

You, I'm sure, have heard about these self-driving cars.

Nowadays, the cars all have GPS.

But if you're doing the same thing, say, on the moon, you kind of have to see where you are.

Okay? Think about also rescue robots, where you have kind of houses that collapsed in an earthquake,

and you do not know what's inside them, and the robots crawl in there and try to rescue people.

They have to do something which we've kind of approximated in this maze.

We have a map. The robot doesn't know where on the map it is, and it needs to self-localize.

Answer the question, where am I with what probability?

Before I go into the real problem, let's look at a much simpler problem, where we have a maze,

and we know what the maze looks like.

We have a robot that has four sonar sensors that can sense into the north, the east, the south, and the west.

And it can see whether there's a wall there.

Okay? And say the first evidence is that it senses a wall to the north, to the south, and to the west.

And then you can see that this is a possible location.

Right? North, south, west.
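Here is a minimal sketch of that first localization update in Python, under illustrative assumptions: a small hypothetical 4x4 maze, a uniform prior over the free squares, and a per-sonar error rate eps. None of these numbers are from the lecture.

```python
import numpy as np

# Hypothetical 4x4 maze: 1 = wall, 0 = free square.
maze = np.array([[1, 1, 1, 1],
                 [1, 0, 0, 1],
                 [1, 0, 0, 1],
                 [1, 1, 1, 1]])

free = [(r, c) for r in range(4) for c in range(4) if maze[r, c] == 0]
eps = 0.1  # assumed probability that a single sonar reading is wrong

def walls_at(r, c):
    """True wall pattern (N, E, S, W) at a free square, from the known map."""
    return (maze[r-1, c] == 1, maze[r, c+1] == 1,
            maze[r+1, c] == 1, maze[r, c-1] == 1)

def filter_step(percept):
    """P(location | percept): uniform prior times sensor model, normalized.
    Sensor model: each of the 4 sonars is independently wrong with prob eps,
    so P(percept | square) = (1-eps)^(4-d) * eps^d with d mismatching sensors."""
    belief = {}
    for (r, c) in free:
        d = sum(a != b for a, b in zip(percept, walls_at(r, c)))
        belief[(r, c)] = (1 - eps) ** (4 - d) * eps ** d
    z = sum(belief.values())
    return {sq: p / z for sq, p in belief.items()}

# Walls sensed to the north, south, and west; pattern is (N, E, S, W):
print(filter_step((True, False, True, True)))
```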
