2 - Artificial Intelligence II [ID:57527]

Welcome everybody to the second lecture of AI2.

I can already see that there are fewer people here, which is a pity, but that's the way

the cookie crumbles, I guess.

Right, we're still in the boring part, which is the preliminaries and a recap of what

I'm assuming and all of those kinds of things.

But I want to situate what we're doing this semester, so we need to recap where we are,

so that all the grading and those kinds of things are behind us.

Are there any questions about admin stuff?

Okay, so I want to remind you of something we talked about last semester.

AI is not just LLMs.

In AI1, I tried to show you symbolic artificial intelligence, and the idea is that we talk

about representations of situations, objects, and so on, and they become symbols, and we

then manipulate these representations, do unspeakable things to them, and then those

map back to the real world, so the solutions we compute mysteriously turn out to be

actual real-world solutions.

That's one way of doing AI.

It works well for certain things, and less well for other things.
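
Just to make that concrete, here is a minimal sketch, not from the lecture: a couple of made-up blocks-world facts encoded as symbols, and a rule that manipulates only those symbols to derive a new fact that happens to be true of the situation they represent.

```python
# Made-up symbolic facts: objects and relations are nothing but symbols.
facts = {("on", "block_b", "block_a"), ("on", "block_a", "table")}

def above(x, y):
    """x is above y if it sits directly on y, or on something that is above y."""
    if ("on", x, y) in facts:
        return True
    return any(rel == "on" and a == x and above(b, y) for (rel, a, b) in facts)

# Derived purely by symbol manipulation, and it maps back to a true
# statement about the (imagined) real-world situation.
print(above("block_b", "table"))  # True
```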

There are other ways.

There's statistical artificial intelligence, which basically... there should be something showing up here, no.

It just doesn't like you.

So in statistical AI, we give up the illusion, as you probably have to call it, of fully

observable environments.

There are things we can't control, and we can't even know.

We're not sure, while we're sitting in here, whether the big smokestack in central Erlangen

still stands or not.

It could have fallen down, and we wouldn't have known, right?

Or Trump did something interesting again.

We might not know while we're sitting here.

Okay, so we have to deal with uncertainty.

That's something that allows us to kind of have more interesting agent environments,

but of course also means we have to do more interesting math, which we'll do.
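
Here is a hedged sketch of the kind of "more interesting math" this leads to, using the smokestack example; the numbers are made up, and the update is just Bayes' rule applied to an unreliable report.

```python
# Belief: the probability that the smokestack still stands (made-up prior).
p_standing = 0.999

# Made-up sensor model: someone texts us "it fell" ...
p_report_if_fallen = 0.90    # ... 90% of the time if it really fell
p_report_if_standing = 0.01  # ... 1% of the time as a false alarm

# Bayes' rule: P(standing | report) = P(report | standing) * P(standing) / P(report)
p_report = (p_report_if_standing * p_standing
            + p_report_if_fallen * (1.0 - p_standing))
p_standing_given_report = p_report_if_standing * p_standing / p_report

print(round(p_standing_given_report, 3))  # about 0.917: the belief drops, but not to zero
```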

There's something called sub-symbolic AI, where you give up the notion that everything

we want to talk about or simulate in agents has to have a symbol.

Certain things might be too low level.

If you think about playing ping pong, it's probably not the case that we have

a symbolic representation of where exactly the ball is and where exactly

my hand is and what the angle of my wrist has to be and all of those kinds of things.

There could be, but it's probably not reasonable.

Maybe we need a layer that is below the symbols, and there are computational mechanisms that

seem to be working at that level, neurons, right?

The thing that's, I hope, hard at work at the moment, right?

That does things.

And it's not clear that there are symbols for all of those things.

That's something we're going to look at.

All of deep learning and so on is sub-symbolic AI.
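
As a minimal sketch of what "below the symbols" can mean computationally (not the lecture's own example): a single artificial neuron with made-up weights, which just turns numbers into a number, with no symbol for a ball, a wrist, or anything else.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs, squashed by a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Numeric inputs in, a numeric output out; the weights carry the "knowledge".
print(neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```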

There's embodied AI, where the idea is: how can you be intelligent if you don't have

a body that actually interacts with the environment and learns something

about the environment? Just by thinking, that's probably not going to get intelligence going.

All of those things are different and partially independent ways of looking at AI.

Part of a video series
Access: Open access
Duration: 01:26:50 min
Recorded: 2025-04-29
Uploaded: 2025-04-30 12:59:06
Language: en-US