21 - Artificial Intelligence II [ID:47312]

Okay, can you hear me? I have a microphone, a bit of an overkill, but we need it for the online version.

Okay, so the part of the course on machine learning is coming to an end. We first looked at inductive learning: given input-output pairs, synthesize a set of new input-output pairs, or in other words a function, that empirically best generalizes from the examples. That is inductive learning, and it works extremely well; the whole deep learning hype is inductive learning.
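As a minimal sketch of what "a function that empirically best generalizes from the examples" can look like in code (everything below is my own illustration, not from the lecture): fit a least-squares line to a handful of input-output pairs with numpy, then query it on inputs that were never in the training set.

```python
# Minimal inductive-learning sketch (illustrative assumption: the unknown target
# is roughly linear, so a least-squares line is a reasonable hypothesis).
import numpy as np

# Training examples: input-output pairs sampled from some unknown process.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.array([0.9, 3.1, 5.0, 7.2, 8.9])   # roughly y = 2x + 1 plus noise

# "Synthesize a function": choose the slope and intercept that best fit the examples.
A = np.vstack([x_train, np.ones_like(x_train)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, y_train, rcond=None)

def h(x):
    """The learned hypothesis: defined for all inputs, not just the training ones."""
    return slope * x + intercept

# Generalization: evaluate on inputs that were not in the training set.
for x_new in (5.0, 10.0):
    print(f"h({x_new}) = {h(x_new):.2f}")
```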

The next thing we did was to look at how we could, in principle, introduce knowledge into learning. And if you think about it, even though it is almost clear that the methods we have shown you, which are the best methods we currently know, are probably not quite what we want eventually, it is, to me at least, relatively clear that you somehow want to integrate knowledge into learning.

There is a certain flavour of large language models where you can take a pretrained model, say from Google, and train it further on some specialist text corpus, in the hope that the right thing generalizes, which could be seen as introducing knowledge into learning.
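A hedged sketch of that fine-tuning idea (my own illustration; the lecture names no library, model, or corpus): continue training a pretrained causal language model on a tiny specialist corpus, here using the Hugging Face transformers API with GPT-2 as a stand-in checkpoint.

```python
# Sketch: fine-tune a pretrained language model on a specialist corpus so that
# domain knowledge ends up in the weights. GPT-2 and the two example sentences
# are stand-ins; the lecture does not specify a model or a corpus.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal-LM checkpoint would work the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

specialist_corpus = [
    "Resolution is a refutation-complete inference rule for clausal logic.",
    "A Horn clause contains at most one positive literal.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for text in specialist_corpus:
        batch = tokenizer(text, return_tensors="pt")
        # For causal-LM fine-tuning, the labels are simply the input ids.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Whether the "right thing" actually generalizes from such further training is exactly the open question the lecture is pointing at.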

And if you think about your experiences with large language models like ChatGPT or Bard, and I'm sure you've all played with them, there seems to be somehow knowledge in these models, otherwise they wouldn't be so useful. It even seems that the models can actually do something like inference, somewhat like humans: lots of errors and lots of non sequiturs and all those kinds of things, but we don't have a lot of control over the inference that happens in these things.

So call me a fundamentalist, but I'm not quite happy yet; I think there must be something that works better. Many humans can do high-quality inference if we give it enough priority.

So I think that somewhere between the things we've learned about inductive learning and the things we've learned about knowledge in learning, something needs to happen. And I would like you to keep your eyes open for those things; you never know where the good ideas come from.

I don't think that just doing what we've been doing with large language models, adding a little more data or a little more computation time, is actually going to bring the big steps forward that we still need.

And the last chapter I want to talk about, very briefly, is reinforcement learning. One of the big problems of inductive learning and knowledge in learning is that they need a lot of resources we often don't have. The problem is clearest in inductive learning, where you need input-output pairs, and in reality many, many, many input-output pairs. And depending on what it is you're trying to learn, you may have them, or may be able to generate them, or not.

If you want to learn, say, translation from English to German, then you need input-output pairs, in other words pairs of the form "Peter loves Mary" and "Peter liebt Marie". For that you need something like the Rosetta Stone. You probably know about the Rosetta Stone, right? That is the stone that had the same text in hieroglyphic, Coptic, Greek, I think, and something else, I don't remember.
