Welcome back to deep learning. So today we want to continue talking about weakly annotated examples, and today's topic will particularly focus on 3D annotations.
So welcome back to our lectures on weakly and self-supervised learning, the second part: from sparse annotations to dense segmentations.
Now, what does dense mean? Are you dense? Probably not. So what we are actually interested in is a dense 3D segmentation.
So here you have an example of a volumetric image, and you can see that we have a couple of slices that we visualize on the left-hand side.
Then we can annotate each of these slices and use them for training, for example a 3D U-Net, to produce a full 3D segmentation.
And as you might already have guessed, annotating all of the slices one after the other, probably with different orientations in order to get rid of the bias introduced by the slice orientation, is extremely tedious.
So you don't want to do that, and what we will look at in the next couple of minutes is how to use sparsely sampled slices in order to get a fully automatic 3D segmentation.
Also this approach is interesting because it allows for interactive correction. So let's look into this idea.
So you train with sparse labels, and typically we have these one-hot labels y that are essentially one if a voxel is part of the segmentation mask.
So it's either true or false, and then you get this cross-entropy loss that you can backpropagate through, and in the loss you essentially use exactly the label entry that is one, because that is the class that was annotated.
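As a minimal sketch of what that looks like in code (PyTorch assumed; the tensor names and shapes are illustrative and not from the lecture), the standard voxel-wise cross-entropy on a fully annotated volume could be written like this:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch, classes, depth, height, width) logits and
# per-voxel class-index labels for a fully annotated volume.
logits = torch.randn(1, 2, 8, 64, 64)           # network output
labels = torch.randint(0, 2, (1, 8, 64, 64))    # one label per voxel

# Standard cross-entropy averages over every voxel in the volume.
loss = F.cross_entropy(logits, labels)
```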
But of course that's not true for the non-annotated voxels. So what you can do is develop this further into something that is called a weighted cross-entropy loss.
Here you multiply the original cross-entropy with an additional weight w, and w is set such that it is zero if a voxel is not labeled, while you can assign a weight greater than zero otherwise.
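Here is a minimal sketch of this weighted cross-entropy under the same assumptions as above (PyTorch, illustrative names): the weight map w is zero on unlabeled voxels, so only annotated voxels contribute to the loss and the gradient.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 2, 8, 64, 64)
labels = torch.randint(0, 2, (1, 8, 64, 64))

# Weight map: zero for unlabeled voxels, greater than zero otherwise.
# As an example, assume only every fourth slice carries annotations.
w = torch.zeros(1, 8, 64, 64)
w[:, ::4] = 1.0

per_voxel = F.cross_entropy(logits, labels, reduction="none")   # shape (1, 8, 64, 64)
loss = (w * per_voxel).sum() / w.sum().clamp(min=1.0)           # average over labeled voxels only
```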
And by the way, if you have this w, you can also extend the approach to be interactive by updating y and the associated w.
This means that you update the labels over the iterations of use. If you do so, then you can actually work with sparsely annotated 3D volumes and train algorithms to produce complete 3D segmentations.
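A hypothetical sketch of how such an interactive update could look (the function and argument names are placeholders, not from the lecture): when the user corrects a few more slices, both the label volume y and the weight map w are updated, and training simply continues with the same weighted loss.

```python
def apply_corrections(labels, w, corrections):
    """Merge user corrections into the sparse labels and the weight map.

    corrections: list of (slice_index, 2D boolean mask, class_index),
    i.e. the voxels the user just annotated or corrected.
    """
    for z, mask, cls in corrections:
        labels[0, z][mask] = cls   # update y on the newly annotated voxels
        w[0, z][mask] = 1.0        # give them a non-zero weight from now on
    return labels, w
```

After each such update, the weighted cross-entropy above automatically picks up the new annotations, which is what makes the interactive correction loop possible.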
So let's look at some takeaway messages. Weakly supervised learning is actually an approach to omit fine-grained labels because they are expensive, and we try to get away with something that is much cheaper.
The core idea is that the label has less detail than the target, and the methods essentially depend on prior knowledge, such as knowledge about the object, knowledge about the distribution, or even a prior algorithm that can help you with the refinement of the weak labels that we called hints earlier.
Typically, this is inferior to fully supervised training, but it's highly relevant in practice because annotations are very, very costly.
And don't forget about transfer learning. This can also help you. So we discussed this already quite a bit in earlier lectures.
And what we've seen here is, of course, related to semi-supervised learning and self-supervised learning.
And this is also the reason why we talk next time about the topic of self-supervision and how these ideas have sparked quite some boost in the field over the last couple of years.
So thank you very much for listening, and I'm looking forward to seeing you in the next video. Bye bye.
Deep Learning - Weakly and Self-Supervised Learning Part 2
In this video, we discuss weak supervision and how to go from 2D to 3D.
For reminders to watch the new video, follow on Twitter or LinkedIn.
Further Reading:
A gentle Introduction to Deep Learning