2 - Artificial Intelligence II [ID:47293]

Okay, so yesterday Kohlhase just did the administrative introduction, and I think he hasn't started on any of the material yet, correct?

Right, so the key idea about this lecture is that AI can be split into the symbolic

and the subsymbolic parts.

We've looked at the symbolic parts in AI-1 and we're doing the subsymbolic aspects in

AI-2.

The subsymbolic aspects are the ones which are currently all the rage.

That's where all the machine learning happens, neural networks, back propagation and so on.

And in many ways, that's currently the part of AI that's winning.

But we also know that in the long run, the subsymbolic methods alone are not going to

cut it.

We're already seeing the problems with the big tools that are currently so hyped, like ChatGPT: they do amazing things that surprise us, but we have no idea why.

And more importantly, the tools have no idea why they're doing anything they're doing.

And for example, ChatGPT is basically just talking bullshit, and it has no idea that it's doing so, because the entire approach uses none of the symbolic methods that we learned in AI-1.

In particular, these tools have no idea what it means to be right about something or what

it means for a statement to be correct.

So in AI-2, we're going to get an overview of all the different subsymbolic approaches.

And the combination of the two is pretty much up to you guys.

That's for the next generation to figure out, because so far we haven't really had a clue how to combine those two branches of AI for an optimal outcome.

One thing that subsymbolic approaches can do very well is handle uncertainty because

they operate a lot with probabilities, guesses, estimates, considering all the different possibilities

in parallel, trying to figure out which one is best.

If you take that to the extreme, you get methods like neural networks, where you just have millions of individual numerical parameters and you tweak them all until the output resembles what it was supposed to be in the first place.
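That "tweak all the numbers until the output matches" idea can be sketched as gradient descent on a single weight; the data, learning rate, and step count here are invented for illustration, not from the lecture:

```python
# Minimal sketch of the "tweak the parameters" idea: gradient descent
# on a single weight w so that w * x matches the target outputs.

def train(xs, ys, lr=0.01, steps=1000):
    w = 0.0  # start from an arbitrary guess
    for _ in range(steps):
        # gradient of the squared error sum((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # nudge w against the gradient
    return w

# Data generated from y = 3x; training should recover w close to 3.
xs = [1.0, 2.0, 3.0]
ys = [3.0, 6.0, 9.0]
w = train(xs, ys)
print(round(w, 2))  # prints 3.0
```

Real networks do exactly this, only with millions of weights and automatic differentiation instead of a hand-derived gradient.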

But you can also do more exact approaches where you basically take something we've done

in AI-1 and just add a little bit of probability.

So instead of saying this formula is true or false, you can say, well, this formula

has a truth value that's somewhere between 0 and 1.

That's like the probability that the formula is true.
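One way to make this "truth value between 0 and 1" concrete is to enumerate possible worlds and sum the probability of those in which the formula holds; the variables and numbers below are invented for illustration:

```python
# Probability that a formula is true: enumerate all truth assignments
# ("possible worlds"), weight each world, and sum the weights of the
# worlds where the formula holds.
from itertools import product

def prob_true(formula, variables, world_prob):
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if formula(world):
            total += world_prob(world)
    return total

# Independent variables: P(rain) = 0.3, P(sprinkler) = 0.5
p = {"rain": 0.3, "sprinkler": 0.5}

def world_prob(world):
    prob = 1.0
    for var, val in world.items():
        prob *= p[var] if val else 1 - p[var]
    return prob

# Formula: the grass is wet if it rains OR the sprinkler runs.
wet = lambda w: w["rain"] or w["sprinkler"]
p_wet = prob_true(wet, ["rain", "sprinkler"], world_prob)
print(round(p_wet, 2))  # 1 - 0.7 * 0.5 = 0.65
```

The classical true/false case is just the special case where every probability is 0 or 1.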

So we can have approaches that stay very close to explicit knowledge, and approaches where the knowledge is entirely forgotten and all we have is a bunch of numbers and linear algebra that somehow spits out results.

And one of the most important practical aspects here is that if we ever use our tools in real-world environments, we never have perfect information in the first place.

So if you're thinking of some kind of robot operating in some environment, everything it knows about the world it has to get from sensors: measurements, cameras, distance sensors and so on.

All that information is imprecise from the start, so you have to model everything using probabilities.
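As a toy illustration of modeling a noisy sensor probabilistically (the scenario and numbers are invented, not from the lecture), Bayes' rule can update a robot's belief that a door is open after an imperfect "open" reading:

```python
# Bayesian update of a belief from one noisy sensor reading.

prior_open = 0.5            # belief before sensing
p_reading_if_open = 0.9     # sensor says "open" when the door is open
p_reading_if_closed = 0.2   # false-positive rate when the door is closed

# Bayes' rule: P(open | reading) = P(reading | open) P(open) / P(reading)
p_reading = (p_reading_if_open * prior_open
             + p_reading_if_closed * (1 - prior_open))
posterior_open = p_reading_if_open * prior_open / p_reading
print(round(posterior_open, 3))  # 0.45 / 0.55, about 0.818
```

Repeating this update for every new measurement is the core of the filtering techniques that such a robot would use.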

And then you very quickly get into a very different kind of system design than what

we've seen in AI-1.

All the methods in AI-1 assumed that we had perfect knowledge about the environment.

The environment was maybe a search space where we had a set of states, a set of actions we can do, and a transition relation that tells us exactly what the successor state is.
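The fully-known AI-1 style model can be sketched as explicit states, actions, and a deterministic transition function, with a breadth-first search finding a plan; the tiny example world here is invented for illustration:

```python
# A deterministic search space: states, actions, and a transition
# function that gives exactly one successor state, with no uncertainty.
from collections import deque

states = ["A", "B", "C", "D"]
actions = ["left", "right"]
# transition[(state, action)] -> successor state (fully known, no noise)
transition = {
    ("A", "right"): "B", ("B", "right"): "C", ("C", "right"): "D",
    ("B", "left"): "A", ("C", "left"): "B", ("D", "left"): "C",
}

def plan(start, goal):
    """Breadth-first search for a shortest action sequence."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action in actions:
            nxt = transition.get((state, action))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(plan("A", "D"))  # ['right', 'right', 'right']
```

With noisy sensors, neither the current state nor the effect of an action is known exactly, which is what forces the probabilistic system designs discussed above.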

Part of a video series

Access: Open access

Duration: 01:25:50 min

Recording date: 2023-04-19

Uploaded: 2023-04-21 13:49:06

Language: en-US
