10 - Deep Learning [ID:11817]

I guess they are stuck due to the traffic jam on the highway or on the way to Erlangen Süd.

So let's anyways begin with this lecture.

I'm Vincent Christlein, here the second-to-last author on this long author list.

I was actually responsible for this slide set, and since Andreas Maier is unfortunately sick, I will give this presentation today.

And it's going to be about unsupervised deep learning.

So the motivation is: so far in the lecture we have had huge datasets, for example the ImageNet dataset, which goes up to millions of images.

We have many variations in the objects, but very few modalities.

In contrast, in medical imaging we have a problem here.

So we typically have only very small datasets, from maybe 30 to 100 patients, and that's already quite a lot.

There are studies and PhD theses based on only three patients, and that's very few.

And unfortunately, typical deep learning methods then don't apply to such datasets.

But here we will try to find methods which can use additional data.

So we have high variation within one complex object, for example a chest X-ray, but in contrast many modalities.

So it can look very different; we have high variation here.

And we actually have quite a lot of scans: 5 million CT scans in the year 2014 alone, which is roughly 65 CT scans per 1,000 inhabitants.

I guess some of you have already had a CT scan; I had at least one.

But it is highly sensitive data. There is an ongoing trend to make this data available, but currently that's not the case. Maybe that's good, although for science and research it's of course bad.

Many of you probably don't want to show what sicknesses you had or have.

And the difficulty here is that even where more data is available, it's quite difficult and expensive to obtain training labels, because the annotation has to be done by physicians, who really are the experts in this domain.

And so some solutions exist.

One solution is weakly supervised learning. Weakly supervised learning means that you have, for example, a detection or segmentation task, but only a rough boundary or bounding box of the object, and then you can use weakly supervised learning to obtain a fine-grained segmentation.

This can be done, for example, with methods that visualize where the network looks, such as class activation maps, and then you get the object localization for free, kind of.

This is shown, for example, in these image examples.

If you train a classifier for brushing teeth and always feed it the full images, in the end the classifier will learn what the toothbrush actually is.

But you see it also reacts a little bit to the hand, because the hand is always holding the toothbrush.

And here, cutting trees: the activation also goes onto the helmet, on the face here.

Maybe you can see that, I'm not so sure; I don't have a mouse here.

Okay. Anyways.

So it also has some drawbacks.
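As a rough sketch of this "localization for free" idea: in the class activation map (CAM) approach, the heatmap for a class is simply a weighted sum of the last convolutional feature maps, using that class's weights from the final linear layer after global average pooling. The following is a minimal NumPy sketch with toy data, not the exact method behind the lecture's examples; the array shapes and function name are assumptions for illustration.

```python
import numpy as np

def class_activation_map(features, weights, class_idx):
    """Compute a class activation map (CAM-style heatmap).

    features: (C, H, W) feature maps from the last conv layer
    weights:  (num_classes, C) weights of the linear classifier
              that follows global average pooling
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Weighted sum of feature maps with the class-specific weights
    cam = np.tensordot(weights[class_idx], features, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)        # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize for visualization
    return cam

# Toy example: 3 feature maps of size 4x4, 2 classes
rng = np.random.default_rng(0)
features = rng.random((3, 4, 4))
weights = rng.random((2, 3))
heatmap = class_activation_map(features, weights, class_idx=0)
print(heatmap.shape)  # (4, 4)
```

In practice you would upsample this small heatmap to the input image size and overlay it, which gives exactly the kind of "where does the network look" pictures shown on the slides.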

And then we can do semi-supervised learning.

Part of a video series

Accessible via: Open Access

Duration: 01:31:35 min

Recording date: 2019-07-11

Uploaded on: 2019-07-11 19:59:02

Language: en-US

Tags: units batch image autoencoder normalization models layer input functions training model random sample autoencoders