6 - Introduction to Adversarial Robustness for Deep Learning

These students are in the group of Björn Eskofier, and they will speak about a very interesting topic: they will tell us how to find out whether neural networks are unstable, what this means, and how it can be prevented.

And their talk is called Introduction to Adversarial Robustness for Deep Learning.

I'm looking forward to your talk. Please, René, go ahead.

Maybe I can start by saying something about this first slide while René is trying to figure out his microphone problem.

So basically, I guess every one of you has heard of deep learning by now, because at the moment everyone is using it.

It's used by companies from Nvidia to Amazon and Tesla, and it has also arrived in many real-world applications, for example self-driving cars.

Tesla, for example, but also medical applications like automatic tumor detection, or recommender systems on shopping sites or on Spotify.

So basically, if you get a recommendation to watch a new YouTube video, it's probably also a machine learning algorithm that gave this recommendation.

So it seems awesome, and there are a lot of applications.

So René, can you do the next slide?

Also from an academic perspective, deep learning has led to many breakthroughs that were not expected for at least another ten years.

For example, the first superhuman performance in image recognition, although the "superhuman" part should be corrected.

There was one human who actually went through the whole dataset, annotated each image, and tried to assign it to the classes.

So it's not really superhuman performance.

It's rather super-Karpathy performance, named after the guy who actually did this annotation.

Maybe next.

Click.

Another example would be the first AI that was actually able to beat a top Go player.

This was thought to be impossible because Go has a very high number of possible moves in every round.

And constructing a search tree to find the optimal move is very hard in practice.

So it was a very surprising result that an AI was actually able to beat the top human Go player in a tournament.

At least, yeah, a common thing to say is, I guess, that AI is not able to create things, but it's getting there.

And there are algorithms that are able to do content creation and produce new and original content on their own.

For example, these images are faces that are entirely computer-generated.

I guess for some of them you can see that they are generated, but they still look very nice.

I hear somebody typing, maybe René? Can we hear you?

Yes, can you hear me now?

Yeah.

Oh, amazing! Cool.

Then I'll continue from here.

That's fine.

I guess. Okay, can you still hear me?

Good.

Okay, so this is only the tip of the iceberg, of course.

All these amazing things that deep learning can accomplish.

There's a lot of stuff that we don't immediately see when we hear about these amazing accomplishments.

And in this talk, we will look at some of the severe limitations that still need to be solved.

Starting with the paper "Intriguing Properties of Neural Networks", which was published in 2013,

and continued by "Explaining and Harnessing Adversarial Examples" a year later,

the community has become aware of some fundamental issues that neural networks have.

And I'll now try to explain to you what these are.

If we take this nice picture of a panda, which our network actually classifies as a panda with a confidence of roughly 60%,

and add some carefully chosen noise to this picture, we get a picture that still looks like a panda to us humans,

but the network will now say that this is a gibbon, and it is very confident that this is a gibbon.

Obviously, you don't want this to be the case.

You don't want someone to just come around, add a little bit of noise to your image,

and suddenly your network doesn't know what's going on anymore.
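
The noise in the panda example is not random; it is computed with the fast gradient sign method (FGSM) introduced in the "Explaining and Harnessing" paper: take the gradient of the loss with respect to the input image and step in the direction of its sign. Here is a minimal sketch in PyTorch, assuming a classifier model that takes image tensors scaled to [0, 1] and integer labels y; the function name and the epsilon value are illustrative choices, not from the talk:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # FGSM: one step of size epsilon in the direction of the sign
        # of the loss gradient with respect to the input pixels.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        # Clamp back to the valid pixel range so the result is still an image.
        return x_adv.clamp(0.0, 1.0).detach()

Because epsilon is small, the perturbation is nearly invisible to humans, yet it is aligned with the loss gradient and can flip the prediction of the network, as in the panda-to-gibbon example.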

Of course, it's not always easy to just change the image directly.

In real-world scenarios, we'd rather see something like this.

If you are, for example, a self-driving car and you see a stop sign which has a couple of white and red stickers on it, the network might suddenly no longer recognize it as a stop sign.

Part of a video series
Accessible via: open access
Duration: 00:53:41
Recording date: 2020-11-11
Uploaded: 2020-11-17 10:27:51
Language: en-US

An introduction to adversarial attacks and defenses for deep neural networks given by René Raab and Leo Schwinn.
