9 - Deep Learning [ID:9240]

So good morning everybody.

Welcome back to our lecture,

Deep Learning and today's topic will be visualization.

So, so far we've seen the basics,

we've seen the universal approximation theorem,

we've seen backpropagation and how to train networks using gradient descent.

Then we went through different architectures,

such that we could see that for particular tasks

you need different architectures, and we also presented

several recipes for how to train them well.

Often, these networks then have a hierarchical setup

and you try to extract low-level features in the earlier layers

and then more and more abstract features throughout the network.

Another thing that we talked about were recurrent networks,

such that we can also model sequences.

And today, we want to look into visualization

and there are a couple of different things that we need to visualize

in order first of all to communicate with other researchers

and also in order to understand what our network is really learning

because up to now, we have considered them essentially as a black box

and we didn't look into the individual layers and functions

that have already been learned.

So, yeah, there are three topics that we want to talk about today.

One is network architecture visualization.

So, this is mainly to communicate with other researchers

such that they can easily understand which architecture you chose

and that it can be implemented rather quickly.

Then obviously, visualization of training is important

such that you can see how the network evolves during the training process,

which layers change, and so on,

and we will have a couple of examples of how to do that.

We've already seen loss curves,

which are a very important tool for that, but there is more

that we will also look into.
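As a minimal illustration of such a loss curve, the sketch below logs the training loss per epoch for a hypothetical toy 1-D linear fit (the data and hyperparameters are purely illustrative, not from the lecture); in practice you would log losses from your framework and plot the resulting list, e.g. with matplotlib's `plot`:

```python
# Minimal sketch: record the training loss per epoch for a tiny 1-D linear
# model trained with gradient descent. The recorded list is exactly what one
# would plot as a "loss curve" to monitor training.
# (Toy data and learning rate are hypothetical, chosen only for illustration.)

def train_and_log(xs, ys, lr=0.1, epochs=20):
    w, b = 0.0, 0.0
    losses = []
    for _ in range(epochs):
        # forward pass: predictions and mean squared error
        preds = [w * x + b for x in xs]
        loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
        losses.append(loss)
        # backward pass: gradients of the MSE w.r.t. w and b
        gw = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
        gb = sum(2 * (p - y) for p, y in zip(preds, ys)) / len(xs)
        # gradient-descent update
        w -= lr * gw
        b -= lr * gb
    return losses

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # generated from y = 2x + 1
losses = train_and_log(xs, ys)
print(losses[0], losses[-1])  # the loss decreases over the epochs
```

A healthy curve decreases smoothly; a flat or diverging curve is exactly the kind of training issue this visualization is meant to reveal.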

And then visualization of parameters: here, we really want to figure out

which neuron and which layer is responsible

for what kind of function. This is rather difficult,

but we'll look into the different techniques

for how this can actually be done.
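One simple instance of such a technique is displaying learned first-layer filters as small images. The sketch below shows only the rescaling step that a typical plotting helper performs before handing the filter to an image display such as matplotlib's `imshow`; the filter weights here are purely illustrative, not taken from any trained network:

```python
# Minimal sketch: rescale a 2-D filter to [0, 1] so its weights can be shown
# as a grayscale image. (Filter values below are hypothetical toy weights.)

def normalize_filter(f):
    """Rescale a 2-D filter to the range [0, 1] for display as an image."""
    flat = [v for row in f for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # guard against a constant filter
    return [[(v - lo) / span for v in row] for row in f]

# toy 3x3 edge-detector-like weights (illustrative values only)
filt = [[-1.0, 0.0, 1.0],
        [-2.0, 0.0, 2.0],
        [-1.0, 0.0, 1.0]]
norm = normalize_filter(filt)
print(norm[1])  # middle row rescaled: [0.0, 0.5, 1.0]
```

For early convolutional layers, such images often reveal interpretable patterns like oriented edges or color blobs, which is why this is usually the first parameter visualization one tries.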

Okay, so why do we need visualization?

Well, we've treated networks as black boxes so far.

We have some inputs and some outputs, and essentially,

we specify an architecture and then we train,

but we don't really understand what's happening within this black box,

and today we want to find some tools

that can potentially help us understand

what this black box is actually doing.

So, yeah, I already told you that.

We want to communicate with other researchers.

We want to identify issues during the training.

Part of a video series

Accessible via: Open Access

Duration: 01:12:51 min

Recording date: 2018-06-06

Uploaded on: 2018-06-06 16:49:08

Language: en-US
