13 - Deep Learning [ID:10085]

Welcome to the final lecture on deep learning. Can everyone hear me? Yeah. So today's presentation will be split into two parts: the first on weakly supervised learning, which I'll give, and the second on precision learning, which my colleague Christopher will give. You've covered supervised learning techniques and unsupervised learning techniques so far. There is a third branch, categorized as weakly supervised learning. These techniques matter most when it is difficult to obtain the high-quality annotated data that fully supervised learning approaches require. The three main applications, or types, of weakly supervised learning that we'll cover today are: first, going from 2D sparse annotations to 3D dense volumetric segmentations; second, going from rough to fine annotations; and third, going from coarse image-wise labels to object localization.

For the first type, going from sparse labels to dense pixel-wise annotations, or from 2D slice-wise labels to 3D volumetric segmentations, the main motivation is that generating dense volumetric segmentations can be extremely time-consuming and infeasible in many applications, specifically in medical imaging. Additionally, the information contained in the labels of adjacent slices is often very similar, so an extra annotated slice doesn't add much to the richness of the description that networks can learn from. Consequently, an approach that can learn from sparse 2D labels and generate dense volumetric segmentations would be extremely useful for a variety of applications.

One of the first papers to propose such an approach was the 3D U-Net paper, which I highly recommend all of you to read. The two main applications of this type of approach are: first, to enable a semi-automatic, interactive segmentation approach, where you have sparse 2D slice-wise annotations and try to densify them into a 3D volumetric segmentation of a single data set; and second, to train a network on sparsely annotated 2D slices from multiple data sets and generalize to the 3D case.
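To make this setup concrete, here is a minimal sketch (NumPy only; the sizes, slice indices, and the UNLABELED sentinel are my own choices for illustration) of how such a sparsely annotated volume can be represented: only a few slices carry 2D labels, and every other voxel is explicitly marked as unlabeled.

```python
import numpy as np

# Minimal sketch of a sparsely annotated volume: only a few slices carry
# 2D labels; every other voxel gets an "unlabeled" sentinel value.
D, H, W = 64, 128, 128
UNLABELED = 255                               # sentinel for "no annotation"

labels = np.full((D, H, W), UNLABELED, dtype=np.uint8)

# Hypothetical slice-wise annotations (dummy data: 0 = background, 1 = object).
for z in (10, 32, 55):
    slice_annotation = np.zeros((H, W), dtype=np.uint8)
    slice_annotation[40:90, 50:100] = 1
    labels[z] = slice_annotation

# Only these voxels may contribute to the loss later on.
print("labeled fraction:", (labels != UNLABELED).mean())
```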

This type of weakly supervised learning falls under the category of incomplete supervision, owing to the nature of the labels in the data: the labels are incomplete, with only sparse annotations distributed across a few slices. The main challenge is therefore dealing with such sparse labels. If we look at the common form of one-hot encoded labels y, the softmax cross-entropy loss is given by the equation you see here. We'll have to modify it to accommodate the missing labels.
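Since the slide itself is not reproduced in this transcript, here is the standard voxel-wise softmax cross-entropy written out in my own notation for reference, where y_k(v) is the one-hot label of voxel v for class k (out of K classes) and the hatted term is the corresponding softmax output of the network:

$$ L = -\sum_{v} \sum_{k=1}^{K} y_k(v)\, \log \hat{y}_k(v) $$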

This can be done by adding an additional label, denoted y_{K+1}, which represents unlabeled data, i.e. unlabeled voxels. The loss function can then be refactored into a weighted cross-entropy loss, where a weight of zero is assigned to unlabeled voxels (those with y_{K+1} = 1) and a weight greater than zero is applied to all labeled voxels. The main aim is to prevent any unlabeled data from contributing to the loss, and hence to the gradients computed in the backward pass. The weights greater than zero can additionally be used to balance the foreground and background classes.
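As a rough illustration of this weighted loss, here is a minimal PyTorch sketch; the function name, the UNLABELED sentinel (standing in for the extra label y_{K+1}), and the tensor shapes are my own choices, not the exact formulation from the 3D U-Net paper. Unlabeled voxels receive a weight of zero, labeled voxels a weight greater than zero, optionally scaled per class to balance foreground and background.

```python
import torch
import torch.nn.functional as F

UNLABELED = 255   # sentinel index standing in for the extra label y_{K+1}

def sparse_weighted_cross_entropy(logits, target, class_weights=None):
    """Cross-entropy computed over labeled voxels only.

    logits:        (N, K, D, H, W) raw network outputs
    target:        (N, D, H, W) integer labels (long), UNLABELED where no
                   annotation exists
    class_weights: optional (K,) tensor for foreground/background balancing
    """
    log_probs = F.log_softmax(logits, dim=1)

    labeled = target != UNLABELED                        # weight 0 elsewhere
    safe_target = target.clone()
    safe_target[~labeled] = 0                            # dummy class, masked below

    # per-voxel negative log-likelihood of the target class
    nll = -log_probs.gather(1, safe_target.unsqueeze(1)).squeeze(1)

    weights = labeled.float()                            # w > 0 only where labeled
    if class_weights is not None:
        weights = weights * class_weights[safe_target]   # class balancing

    # unlabeled voxels contribute neither to the loss nor to the gradients
    return (weights * nll).sum() / weights.sum().clamp(min=1.0)
```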

The second type of weakly supervised learning is going from rough to fine annotations. Generating high-quality annotated data for fully supervised segmentation can be extremely time-consuming and may require experts and considerable resources. Instead of generating such dense pixel-wise annotations, the question is whether we can use something that is much simpler to generate and still learn object boundaries and produce accurate segmentations of the objects of interest. Here we are talking about using, say, bounding boxes as initial targets and then generating dense pixel-wise segmentations of the objects of interest from them.
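As a toy illustration of what "bounding boxes as initial targets" means, here is a minimal sketch (NumPy; the image size and box coordinates are made up) that rasterizes boxes into a rough pixel-wise target: everything inside a box is treated as object, everything outside as background.

```python
import numpy as np

# Rasterize bounding boxes into a rough pixel-wise training target.
H, W = 256, 256
boxes = [(30, 40, 120, 160), (150, 60, 220, 140)]   # (y0, x0, y1, x1) per object

pseudo_mask = np.zeros((H, W), dtype=np.uint8)       # 0 = background
for y0, x0, y1, x1 in boxes:
    pseudo_mask[y0:y1, x0:x1] = 1                    # 1 = object; obviously noisy
                                                     # at the true boundaries
```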

This type of weakly supervised learning, where you learn to segment objects accurately from just rough annotations, is, as the name suggests, inaccurate supervision, because the labels can be viewed as having a certain degree of noise, in other words as being inaccurate. Using just bounding boxes within such an approach to generate segmentations leads to very basic, poor-quality ones. So how can we improve on these? One approach

Part of a video series
Accessible via: Open Access
Duration: 01:25:16 min
Recording date: 2019-01-29
Uploaded on: 2019-01-29 23:29:03
Language: en-US

Deep Learning (DL) has attracted much interest in a wide range of applications such as image recognition, speech recognition and artificial intelligence, both from academia and industry. This lecture introduces the core elements of neural networks and deep learning, it comprises:

  • (multilayer) perceptron, backpropagation, fully connected neural networks

  • loss functions and optimization strategies

  • convolutional neural networks (CNNs)

  • activation functions

  • regularization strategies

  • common practices for training and evaluating neural networks

  • visualization of networks and results

  • common architectures, such as LeNet, AlexNet, VGG, GoogLeNet

  • recurrent neural networks (RNN, TBPTT, LSTM, GRU)

  • deep reinforcement learning

  • unsupervised learning (autoencoder, RBM, DBM, VAE)

  • generative adversarial networks (GANs)

  • weakly supervised learning

  • applications of deep learning (segmentation, object detection, speech recognition, ...)
