3 - Deep Learning - Introduction Part 3 [ID:13184]

Thanks for tuning in to the next video of deep learning.

So what I want to show you in this video are a couple of limitations of deep learning.

So you may wonder, are there any limitations? Are we done yet?

Aren't we learning something here that will solve all of the problems?

If you can build a machine that learns to solve more and more complex problems,

and that becomes a more and more general problem solver,

then you have basically solved all problems, at least all solvable ones.

Well, of course there are some limitations.

For example, tasks like image captioning yield impressive results.

You can see that the networks are able to identify the baseball player,

or the girl in a pink dress jumping in the air, or even people playing the guitar.

So let's magnify this a bit and look at some errors.

Here on the left you can see this is clearly not a baseball bat.

Also this isn't a cat in the center image.

And there are also slight errors like the one on the right hand side.

The cat on top of the suitcases isn't black.

Sometimes there are even plain errors.

Like here in the left image I don't see a horse in the middle of the road.

And also on the right image there is no woman holding a teddy bear in front of a mirror.

Machines in their mathematical intelligence far exceed most people already.

In their ability to play games they far exceed most people already.

In their ability to understand language, they lag behind my five-year-old, far behind my five-year-old.

So the reason for this is of course there's a couple of challenges.

And one major challenge is training data.

Deep learning applications require huge, manually annotated data sets.

And these are hard to obtain: they are time-consuming, expensive, and often ambiguous.

So as you've seen already in the ImageNet challenge, sometimes it's not clear which label to assign.

And obviously you would have to assign two labels or a distribution of labels.

Also we see that even with the human annotations there's typically errors.

And to get a really good representation of the labels,

you actually have to ask two or even five experts to do the entire labeling process.

And then you can find the instances with a very sharp distribution of labels,

which are typical prototypes, and those with broad distributions of labels.

The latter are images where people are not sure what is actually shown in the image.

If we have such problems then we typically get a significant drop in performance.
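The prototype-versus-ambiguous split just described can be sketched in a few lines: collect the votes of several experts, normalize them into a label distribution, and use its entropy to separate sharp (prototype) from broad (ambiguous) cases. This is a minimal illustration, not the lecture's actual pipeline; the label names and the five-expert setup are assumptions taken from the example above.

```python
from collections import Counter
import math

def label_distribution(votes):
    """Normalize a list of annotator votes into a label distribution."""
    counts = Counter(votes)
    total = len(votes)
    return {label: c / total for label, c in counts.items()}

def entropy_bits(dist):
    """Shannon entropy in bits: 0 for a sharp (prototype) distribution,
    large for a broad (ambiguous) one."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# All five experts agree: a prototype image.
sharp = label_distribution(["cat", "cat", "cat", "cat", "cat"])

# Experts disagree: an ambiguous image.
broad = label_distribution(["cat", "lynx", "cat", "dog", "lynx"])
```

Thresholding `entropy_bits` then lets you flag the ambiguous images, which, as noted above, are exactly where performance tends to drop.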

So the question is how far can we get with simulations, for example, to expand training data?

Of course there are also challenges with trust and reliability.

So verification is mandatory for high-risk applications and regulators can be very strict about those.

And they really want to understand what's happening in those high-risk systems.

Regulators pay disproportionate amounts of attention to that which generates press.

This is just an objective fact. And Tesla generates a lot of press.

And end-to-end learning essentially makes it impossible to identify how the individual parts work.

So it's very hard for regulators to tell what part does what and why the system actually works that well.

And we must admit at this point that this is largely unsolved.

To a large degree it's difficult to tell which part of the network is doing what.

Modular approaches that are based on classical algorithms may be one approach to solve these problems in the future.

This brings us to the future directions. And something that we like to do here in Erlangen in particular is learning of algorithms.

So for example you can show that classical computed tomography, which is expressed in the filtered back-projection formula here,

where you filter along the projection direction and then sum over the angle in order to produce the final image.

So this convolution and back-projection can actually be expressed in terms of linear operators.
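The two linear operators just mentioned, filtering along the detector direction and back-projecting (summing) over the angles, can be sketched in NumPy. This is a minimal sketch under simplifying assumptions (parallel-beam geometry, nearest-neighbor interpolation, an ideal ramp filter), not the implementation discussed in the lecture.

```python
import numpy as np

def ramp_filter(sinogram):
    """Filter each projection along the detector axis with a ramp filter,
    applied in the Fourier domain (the convolution step of FBP)."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))          # |omega| frequency response
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def back_project(filtered, angles, size):
    """Smear each filtered projection back across the image and sum over
    all angles (the back-projection operator)."""
    recon = np.zeros((size, size))
    mid = size // 2
    xs, ys = np.meshgrid(np.arange(size) - mid, np.arange(size) - mid)
    for p, theta in zip(filtered, angles):
        # Detector coordinate of each pixel for this projection angle.
        t = xs * np.cos(theta) + ys * np.sin(theta)
        idx = np.clip(np.round(t).astype(int) + mid, 0, size - 1)
        recon += p[idx]
    return recon * np.pi / len(angles)
```

Both steps are plain linear maps on the sinogram, which is precisely what makes it possible to replace or learn parts of the pipeline with network layers.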

Part of a video series

Access: Open access
Duration: 00:08:48 min
Recording date: 2020-04-14
Uploaded: 2020-04-14 11:16:03
Language: en-US

Deep Learning - Introduction Part 3: This video introduces the topic of Deep Learning and discusses challenges and future directions.

Video references: Lex Fridman's channel

Further reading: A Gentle Introduction to Deep Learning

Tags: introduction, artificial intelligence, deep learning, machine learning