38 - Deep Learning - Plain Version 2020 [ID:21172]

Welcome back to deep learning. Today I want to talk about the actual visualization techniques that allow you to understand what is happening inside a deep neural network. So let's try to figure out what is happening inside the networks, and we'll start with simple parameter visualization.

This is essentially the easiest technique, and we have already worked with it in previous videos: the idea is that you plot the learned filter weights directly. It is easy to implement and, for the first layer, also easy to interpret. Here you see an example of the first-layer filters of AlexNet. If the first-layer filters look very noisy, you can probably already guess that something is wrong; for example, the network may be picking up the noise characteristic of a specific sensor. Apart from that, the plot is mostly uninteresting: you can see that the filters take the shape of edge and Gabor filters, but it does not really tell you what is happening in later parts of the network. With small kernels you can probably still interpret the weights, but to understand a deeper layer you would first have to understand what is already happening in the earlier layers, because the filters stack. So you cannot really understand what is happening inside the deeper layers this way, and we need some different ideas.
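Such a first-layer weight plot takes only a few lines of code. The following is a minimal NumPy sketch that tiles a bank of filters into one mosaic image; the random weights and the pointer to torchvision's AlexNet (`model.features[0].weight`) are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def filter_grid(weights, pad=1):
    """Arrange first-layer conv filters (N, C, H, W) into one mosaic image.

    Each filter is normalized to [0, 1] on its own, so its structure
    stays visible regardless of the weight magnitudes.
    """
    n, c, h, w = weights.shape
    cols = int(np.ceil(np.sqrt(n)))
    rows = int(np.ceil(n / cols))
    grid = np.ones((rows * (h + pad) + pad, cols * (w + pad) + pad, c))
    for i in range(n):
        f = weights[i].transpose(1, 2, 0)                  # (H, W, C) for display
        f = (f - f.min()) / (f.max() - f.min() + 1e-8)     # per-filter normalization
        r, col = divmod(i, cols)
        y, x = pad + r * (h + pad), pad + col * (w + pad)
        grid[y:y + h, x:x + w] = f
    return grid

# Random stand-in "weights" with AlexNet's conv1 shape (64 filters, 3x11x11).
# With a real model you would instead take something like
# weights = model.features[0].weight.detach().numpy()  (torchvision AlexNet).
rng = np.random.default_rng(0)
mosaic = filter_grid(rng.normal(size=(64, 3, 11, 11)))
print(mosaic.shape)
```

The resulting array can be shown directly with `matplotlib.pyplot.imshow(mosaic)`; trained filters would then display the edge and Gabor patterns mentioned above.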

One idea is to visualize the activations instead. The kernels themselves are difficult to interpret, so we look at the activations that the kernels generate, because they tell us what the network is computing from a specific input. A strong response probably means that the feature is present; a weak response probably means that the feature is absent. So what does this look like? For the

first layer, the activations look like normal filter responses: you see the input, then filter 0 and filter 1, and you can see that each of them filters the image in some way. Of course, you can then proceed and look at the activations of deeper layers. There you realize that looking at the activations may be interesting, but the activation maps typically lose resolution very quickly due to downsampling, which means you can only visualize very coarse activation maps. Here you see a visualization that may correspond to face detection or face-like features, and we can start speculating about what this kind of feature actually represents inside the deep network. The Deep Visualization Toolbox, which you find in reference 25, is available online and allows you to compute visualizations like these.
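To make the mechanics concrete, here is a minimal NumPy sketch of activation maps: two hand-crafted first-layer-style filters applied to a toy image, followed by ReLU and max pooling, which shows the resolution loss mentioned above. The Sobel filters and the toy image are illustrative assumptions; in a real framework you would instead capture layer outputs, e.g. with PyTorch forward hooks.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation — the raw response of one filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(a, k=2):
    """k x k max pooling — this is where activation maps lose resolution."""
    h, w = a.shape[0] // k * k, a.shape[1] // k * k
    return a[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

# Toy input with a vertical edge (bright left half, dark right half).
img = np.zeros((8, 8))
img[:, :4] = 1.0
sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)
sobel_y = sobel_x.T

for name, k in [("filter 0 (vertical edges)", sobel_x),
                ("filter 1 (horizontal edges)", sobel_y)]:
    act = np.maximum(conv2d(img, k), 0.0)   # ReLU activation map
    pooled = max_pool(act)
    print(name, act.shape, "->", pooled.shape, "max:", act.max())
```

Filter 0 responds strongly along the vertical edge while filter 1 stays silent, and each pooling step halves the map size; after a few such stages only very coarse maps remain, exactly as described above.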

The drawback, of course, is that we do not get precise information about why a specific neuron was activated or why a feature map takes a particular shape.

So what else can we do? We can investigate features via occlusion. The idea is that you move a masking patch around the input image; the patch effectively removes information from the image. You then visualize the confidence for a specific decision with respect to the different positions of the occluding patch. An area where the patch causes a large drop in confidence is probably related to the specific classification. Here is an example: on the left you see the patch with which we mask the original input, and on the right two different versions of the masking. You can see that the reduction in confidence for the digit three is much larger in the center image than in the right-hand image. So we can try to identify confounds or a wrong focus with this kind of technique. Let's look at some more examples. Here

you see the Pomeranian image in the top left. The important part of the image is really located in the center: if you start occluding the center, the confidence for the class Pomeranian goes down. In the middle column you see the
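The occlusion procedure described above can be sketched in a few lines. This is a minimal NumPy sketch; the toy classifier `toy_confidence` is a hypothetical stand-in for the network's softmax score of the target class, used only to keep the example self-contained.

```python
import numpy as np

def occlusion_map(img, classify, patch=3, stride=1, fill=0.5):
    """Slide an occluding patch over img and record the classifier's
    confidence for each patch position. Low values in the returned map
    mark regions whose occlusion hurts the decision most."""
    h, w = img.shape
    out_h = (h - patch) // stride + 1
    out_w = (w - patch) // stride + 1
    heat = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            occluded = img.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch] = fill  # remove information here
            heat[i, j] = classify(occluded)
    return heat

# Hypothetical stand-in classifier: "confidence" is just the mean brightness
# of the central 4x4 region, so the evidence lives in the image center.
def toy_confidence(img):
    return float(img[3:7, 3:7].mean())

img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0          # bright object in the center
heat = occlusion_map(img, toy_confidence)
print(heat.shape, heat.min(), heat.max())
```

Confidence stays high while the patch sits in the periphery and drops exactly when the patch covers the center, reproducing the Pomeranian behavior: occluding the important region pulls the class score down. With a real CNN, `classify` would return the softmax probability of the class of interest.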

Part of a video series
Access: open access
Duration: 00:09:59 min
Recording date: 2020-10-12
Uploaded: 2020-10-12 19:36:26
Language: en-US

Deep Learning - Visualization Part 3

This video shows simple visualization techniques based on lesion studies and investigating activations.


Further Reading:
A gentle Introduction to Deep Learning
