Okay, so welcome back, everybody, to our lecture Deep Learning.
And today's topic is unsupervised deep learning.
So, so far we had mainly supervised methods, or, in the case of reinforcement learning, we at least had some knowledge about the world.
We at least knew the rules of the game and so on.
And today we want to discuss unsupervised learning.
And the main motivation: so far we obviously had huge training data sets, with up to millions of samples.
We had many objects and few modalities.
So we have maybe a camera and a specific application, classifying cats and dogs, and then millions of observations.
But if you look at other application domains like medical imaging for example,
you typically have 30, maybe 100, images that you can work with, because you collected them at a specific clinic.
And then, often mainly due to data protection legislation, you are not able to pool really big data sets.
And obviously all of the data is also extremely costly, because it has to be acquired with dedicated scanners.
A CT scan can cost up to 500 euros.
So doing millions of CT scans is also extremely costly.
So in contrast, here you don't have hundreds of objects you're looking at; instead, you're interested in one complex object.
So you're typically scanning humans, and you're interested in their anatomy, which shows a lot of variation.
And you want to figure out those variations and diagnose them.
And because diagnosis and the physical properties of those scanners have their limitations, there are also many different modalities.
You have MR scanners, CT scanners, ultrasound, optical coherence tomography.
So many, many different physical effects that you use to look into essentially the same complex object.
So today in Germany, you typically have about 65 CT scans per thousand inhabitants; with roughly 80 million inhabitants, that means in 2014 alone there were around five million CT scans.
So this is obviously highly sensitive data, as we already hinted at.
And there are some initiatives to try to make things available.
There are some competitions, there's data donation and so on.
But generally those do not solve the annotation problem.
So you would also have to annotate all of the data and diagnose all of those CT data sets.
And obviously that's extremely expensive.
So there are some workarounds that have been employed.
And one technique that is quite popular is so-called weakly supervised learning.
We will talk more about that actually in the next lecture.
And there you have an image-level label for a related class.
And then you try to localize the relevant region using this class label.
So here you see an example on the left-hand side for the image label "brushing teeth".
And then you can use visualization techniques to identify the locations that actually drive the class prediction.
So you can try to localize where the relevant information is in the image.
For brushing teeth, it's the toothbrush and the mouth that get detected.
And for cutting trees, it is typically the saw and the cut in the tree that are localized.
So this is a kind of weakly but still supervised learning.
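Since the next lecture covers this in depth, here is only a minimal sketch of one such visualization technique, class activation mapping (CAM). It assumes a ResNet-style classifier (last convolutional block followed by global average pooling and a single linear layer) and uses random weights purely to show the mechanics; in practice you would load a trained model.

```python
import torch
import torchvision.models as models

# Hypothetical setup: ResNet-18 with random weights, just to show the
# mechanics; in practice you would use a trained classifier.
model = models.resnet18(weights=None)
model.eval()

# Capture the activations of the last convolutional block.
features = {}
model.layer4.register_forward_hook(
    lambda module, inp, out: features.update(conv=out)
)

img = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
logits = model(img)
cls = logits.argmax(dim=1).item()  # predicted class

# CAM: weight each feature map by the classifier weight of the
# predicted class and sum over channels.
w = model.fc.weight[cls]                               # (C,)
cam = (w[:, None, None] * features["conv"][0]).sum(0)  # (H, W)
cam = torch.relu(cam)
cam = cam / (cam.max() + 1e-8)   # normalize to [0, 1]
print(cam.shape)                 # coarse localization map, e.g. (7, 7)
```

Upsampling this coarse map to the input resolution gives the kind of heatmap that highlights the toothbrush or the saw in the examples above.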
Then obviously there are also semi-supervised techniques, where you have only partially labeled data: you annotate a few examples.
And then you try to bootstrap the algorithm.
So you annotate some of the data, you train a first system, and then you run it on the bulk of the data that you have without annotations.
And then you manually correct the predictions.
And we could have a whole lecture on how to select those samples, i.e., how to figure out which samples to annotate in order to get the most representative annotation from the few annotations you can actually afford.
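A minimal sketch of this bootstrap loop might look as follows; the callables train_model, predict_proba, and request_annotation are hypothetical stand-ins for your own training pipeline and annotation interface.

```python
import numpy as np

def self_training_round(X_lab, y_lab, X_unlab, train_model,
                        predict_proba, request_annotation, tau=0.95):
    """One bootstrap round: train, pseudo-label, correct the uncertain."""
    model = train_model(X_lab, y_lab)        # fit on the small labeled pool
    proba = predict_proba(model, X_unlab)    # (N, num_classes) probabilities
    conf = proba.max(axis=1)                 # model confidence per sample
    pseudo = proba.argmax(axis=1)            # pseudo-labels

    y_new = pseudo.copy()
    # Manual correction step: only samples the model is unsure about
    # go back to the human annotator.
    for i in np.where(conf < tau)[0]:
        y_new[i] = request_annotation(X_unlab[i])

    # Grow the labeled pool for the next round.
    return (np.concatenate([X_lab, X_unlab]),
            np.concatenate([y_lab, y_new]))
```

The threshold tau controls how much of the pseudo-labeling you trust; everything below it goes back to the human, which is exactly the manual-correction step described above.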
And then there's unsupervised learning where we don't have any labels at all.
And unsupervised learning is what we want to discuss today.
It's also a very important family of machine learning methods.
So let's have a look here.
One example of label-free learning is dimensionality reduction, which you can also find under the term manifold learning.
So here you try to find the intrinsic structure of the data and you try to unfold it.
A very typical example here is the Swiss roll.
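As a small illustration of what "unfolding" means in practice, here is a sketch using scikit-learn's built-in Swiss roll generator together with Isomap; Isomap is one of several classical manifold learning methods, chosen here only for brevity.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points that lie on a rolled-up 2-D sheet; `pos` is the position
# along the roll (useful for coloring a plot).
X, pos = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)
print(X.shape)           # (2000, 3)

# Isomap estimates geodesic distances along the manifold and embeds
# the points in 2-D, effectively unrolling the sheet.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (2000, 2)
```

Plotting the embedding colored by pos would show the roll laid out flat, which is exactly the unfolding of the intrinsic structure described above.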
Deep Learning (DL) has attracted much interest in a wide range of applications such as image recognition, speech recognition and artificial intelligence, both from academia and industry. This lecture introduces the core elements of neural networks and deep learning; it comprises:
- (multilayer) perceptron, backpropagation, fully connected neural networks
- loss functions and optimization strategies
- convolutional neural networks (CNNs)
- activation functions
- regularization strategies
- common practices for training and evaluating neural networks
- visualization of networks and results
- common architectures, such as LeNet, AlexNet, VGG, GoogLeNet
- recurrent neural networks (RNN, TBPTT, LSTM, GRU)
- deep reinforcement learning
- unsupervised learning (autoencoder, RBM, DBM, VAE)
- generative adversarial networks (GANs)
- weakly supervised learning
- applications of deep learning (segmentation, object detection, speech recognition, ...)