4 - Artificial Intelligence II [ID:47295]

Can you hear me now?

Yeah.

So, welcome back to AI2.

Just to situate ourselves, we're working our way to a new kind of an agent,

which we call a utility-based agent.

This kind of agent is well suited for environments that are not totally observable,

meaning we have a certain amount of uncertainty about the state of the world.

Therefore, we have to look at sets of possible worlds, which we call a belief state,

rather than a world state, which has one possible world, right?

In a totally observable world, we know what state we're in.

If we are in a world that is not totally observable, we don't.

We have to entertain lots of possibilities.

And we can do that by just saying, oh, all of these are possible,

but what we're going to do now is basically grade the whole thing by the perceived likelihoods

of the possible worlds.

That's the one thing we're doing.

We're basically changing the world model from "we know the one state"

to "we have a probability distribution over the possible states".
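As a minimal sketch of that shift (the states and numbers here are invented for illustration, not taken from the lecture), a belief state in Python could be just a distribution over the possible worlds:

```python
# Minimal sketch: a belief state as a probability distribution over possible
# worlds, instead of a single known world state. States and numbers are
# made up for illustration.

belief_state = {
    "door_open":   0.6,   # perceived likelihood of each possible world
    "door_closed": 0.3,
    "door_locked": 0.1,
}

# A valid distribution must sum to one.
assert abs(sum(belief_state.values()) - 1.0) < 1e-9

# In a totally observable world, this collapses to a single known state:
world_state = "door_open"
```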

And we'll work our way towards the lower part, which is utility-based rather than goal-based.

Now, you can have the uncertainty handling in the model without introducing utilities,

but they really work best together, right?

You can also do utility when you know what state you're in,

but I would like these agents to basically bring the two things together.

And we're basically in the upper part trying to understand how to do inference,

probabilistic inference from the sensing to the belief state.

Right? That's what always happens.

We have sensory information, and that basically tells us or updates our world model.

That's what we're working towards.

And we're using probability theory here.
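One way to picture that sensing-to-belief-state step is a Bayes-style update, P(state | evidence) ∝ P(evidence | state) · P(state). The following is only a sketch with an invented sensor model, not the course's own implementation:

```python
# Sketch of updating a belief state from a sensor reading via Bayes' rule.
# The sensor model and the numbers are hypothetical.

def update_belief(prior, evidence_likelihood):
    """prior: state -> P(state); evidence_likelihood: state -> P(evidence | state)."""
    unnormalized = {s: prior[s] * evidence_likelihood[s] for s in prior}
    z = sum(unnormalized.values())          # normalization constant
    return {s: p / z for s, p in unnormalized.items()}

prior  = {"door_open": 0.6, "door_closed": 0.3, "door_locked": 0.1}
# Hypothetical sensor: "I feel a draft" is more likely when the door is open.
sensor = {"door_open": 0.8, "door_closed": 0.2, "door_locked": 0.1}

posterior = update_belief(prior, sensor)    # belief mass shifts towards "door_open"
```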

Works better when I plug it in.

Nope.

Oh, come on.

All right, we looked at unconditional probabilities, and the math behind them.

But we're really interested in the conditional probabilities.

And that's going to be kind of a recurring theme.

We're always interested in individual probabilities and conditional probabilities.

And we're often interested in the probability distributions.

The only difference is that rather than talking about real numbers between zero and one,

we're talking about n-dimensional vectors, and instead of talking about addition, multiplication,

and so on, of numbers between zero and one, we'll be talking about addition, multiplication,

and all kinds of weird and wonderful matrix operations on vectors

and n-dimensional matrices, sometimes called tensors.

Okay? So that's kind of the math.
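To make the vector/tensor view concrete, here is a small sketch of my own (using NumPy, with invented values) of distributions as n-dimensional arrays:

```python
# Sketch: a distribution over one variable is a vector, over two variables a
# matrix, over n variables an n-dimensional array ("tensor"). Values invented.

import numpy as np

P_X = np.array([0.7, 0.3])          # P(X) for a binary variable X
P_Y = np.array([0.4, 0.5, 0.1])     # P(Y) for a three-valued variable Y

# If X and Y are independent, the joint P(X, Y) is the outer product:
P_XY = np.outer(P_X, P_Y)           # shape (2, 3); entry (x, y) is P(X=x) * P(Y=y)

assert np.isclose(P_XY.sum(), 1.0)  # the joint is still a distribution
```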

And sometimes these kinds of distribution-level things are very easy to write down.

I'll just write P(X) · P(Y).

And it's going to look very plausible until you really, really think about what the operations are.

And I would like you to keep a very suspicious eye open to whether you understand what's going on.

Because those are the operations we have to eventually implement.

If you take straight Python, then you'll have to actually do the quadratically or exponentially many multiplications that we're just writing a little dot for.

So keep an eye open for that, and make sure you actually understand what's going on there, because that is what the computation actually has to do.
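As an illustration of that point (values invented), here is what the innocent-looking product of two distributions costs when you spell it out in straight Python:

```python
# The compact "P(X) . P(Y)" stands for one multiplication per combination of
# values: nested loops whose cost grows with the product of the domain sizes,
# i.e. exponentially in the number of variables.

P_X = {"x0": 0.7, "x1": 0.3}
P_Y = {"y0": 0.4, "y1": 0.5, "y2": 0.1}

joint = {}
for x, px in P_X.items():           # |X| * |Y| multiplications, written out
    for y, py in P_Y.items():
        joint[(x, y)] = px * py

# With k variables of domain size d this is d**k terms -- the distribution-level
# notation hides exactly this blow-up.
```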

Part of a video series:

Accessible via

Open access

Duration

01:31:15 min

Recording date

2023-04-26

Uploaded on

2023-04-28 22:59:07

Language

en-US
