58 - Deep Learning - Weakly and Self-Supervised Learning Part 3 [ID:19162]

Welcome back to deep learning. So today we want to start talking about ideas that are

called self-supervised learning. So somehow we want to obtain labels by self-supervision

and we will look into what this term actually means, what the core ideas are in the next

couple of videos. So this is part three of weakly and self-supervised learning, and today

we actually start talking about self-supervised learning. There are a couple of ideas around

in self-supervised learning, and you can essentially split them into two parts: one part is how to get the labels, the self-supervised labels, and the other part is that you work on the losses in order to embed those labels, that is, you design particular losses that are suited for self-supervision. Okay, so let's start with the definition and the motivation. You could say that, classically, people in machine learning believed that supervision is of course the approach that produces the best results, but it requires these massive amounts of labels. So you could actually very quickly come to the conclusion that the AI revolution

will not be supervised. This is very clearly visible in the following statement by Yann LeCun: "Most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake." And of course, this is substantiated by observations in biology and by how humans and animals learn. So the idea

of self-supervision is that you try to use information that you already have about your

problem to come up with some surrogate label that allows you to run a training process. The key ideas here, on this slide by Yann LeCun, can be summarized as follows. So

you try to predict the future from the past. You can also predict the future from the recent past, or you predict the past from the present, or the top from the bottom. Another option could be to predict the occluded from the visible: you pretend that there is a part of the input that you don't know and predict it. This essentially allows you to come up with a surrogate task, and with the surrogate task you can already perform training. The nice thing is that you don't need any label at all, because you intrinsically use the structure of the data.
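To make the occlusion idea concrete, here is a minimal sketch of such a surrogate task, assuming PyTorch; the reconstruction `model`, the patch size, and the zero-masking are illustrative assumptions, not a specific published method.

```python
import torch
import torch.nn as nn

def masked_prediction_loss(model: nn.Module, images: torch.Tensor,
                           patch: int = 16) -> torch.Tensor:
    """Hide a random square patch and score how well the model predicts it."""
    _, _, h, w = images.shape
    masked = images.clone()
    # one random top-left corner, shared across the batch for simplicity
    y = torch.randint(0, h - patch, (1,)).item()
    x = torch.randint(0, w - patch, (1,)).item()
    masked[:, :, y:y + patch, x:x + patch] = 0.0  # occlude the patch
    reconstruction = model(masked)                # reconstruct the full image
    # the surrogate label is the hidden region of the input itself
    target = images[:, :, y:y + patch, x:x + patch]
    pred = reconstruction[:, :, y:y + patch, x:x + patch]
    return nn.functional.mse_loss(pred, target)
```

Any image-to-image network could serve as `model` here; the point is that the loss is computed entirely from the unlabeled image.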

So essentially, self-supervised learning is an unsupervised learning approach, but every now and then you need to make clear that you're doing something new in a domain that has been researched for many decades. So you may not want to refer to the term unsupervised anymore, and Yann LeCun actually proposed the term self-supervised learning because he realized that unsupervised is a loaded and confusing term. So although the ideas had already been around before the term self-supervised learning was established, it makes sense to use this term to concentrate on a particular kind of unsupervised learning. So you could say

it's a subcategory of unsupervised learning. It uses pretext, surrogate, or pseudo tasks in a supervised fashion, and this essentially means you can use all of the supervised learning methods: you have labels that are automatically generated, and these can then be used as a measurement of correctness to create a loss in order to train your weights. The idea is then that this is beneficial for a downstream task like retrieval, supervised or semi-supervised classification, and so on.
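As a concrete illustration of such automatically generated labels, here is a minimal sketch of the classic rotation-prediction pretext task, assuming PyTorch; the function name and the four-way rotation setup are illustrative choices.

```python
import torch

def rotation_batch(images: torch.Tensor):
    """Create a 4x larger batch of rotated images plus free surrogate labels."""
    rotated, labels = [], []
    for k in range(4):  # rotate by k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# usage: any 4-class classifier can now be trained fully supervised, e.g.
# inputs, targets = rotation_batch(unlabeled_images)
# loss = torch.nn.functional.cross_entropy(classifier(inputs), targets)
```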

By the way, with this kind of broad definition, you could also argue that generative models like generative adversarial networks are some kind of self-supervised learning method. So essentially, Yann LeCun had this very nice idea of framing this kind of learning in a new way, and if you do so, this is of course very helpful, because you can make clear that you're doing something new and different from the many unsupervised learning approaches that have been out there for a very long time. Okay, so let's look into some

of these ideas. There are of course these pretext tasks, and you can work with generation-based methods: you can use GANs, you can do super-resolution approaches where you downsample an image and try to predict the high-resolution version, and you can do inpainting or colorization approaches; of course, this also works with videos. You can work with context-based methods, where you try to solve things like a jigsaw puzzle or clustering. In semantic label-based methods, you can do things like trying to estimate moving objects or predict the relative depth.
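As a small sketch of one generation-based option, here is how super-resolution training pairs could be derived from unlabeled images, assuming PyTorch; the scale factor and bicubic downsampling are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def super_resolution_pair(images: torch.Tensor, factor: int = 4):
    """Return (low-res input, high-res target) pairs from unlabeled images."""
    low = F.interpolate(images, scale_factor=1.0 / factor,
                        mode="bicubic", align_corners=False)
    return low, images  # the data itself supplies the "label"

# usage with any hypothetical upscaling network `net`:
# low, high = super_resolution_pair(unlabeled_images)
# loss = F.mse_loss(net(low), high)
```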

Part of a video series
Accessible via: Open Access
Duration: 00:15:55 min
Recording date: 2020-07-05
Uploaded on: 2020-07-05 22:06:30
Language: en-US


In this video, we look into the fundamental concepts of self-supervised learning. In particular, we look at different strategies to create surrogate labels from data automatically.

Further Reading:
A gentle Introduction to Deep Learning

Tags: Perceptron, Introduction, artificial intelligence, deep learning, machine learning, pattern recognition