Deep Learning - Unsupervised Learning Part 3

Welcome back to deep learning. Today we finally want to look into generative adversarial networks, a key technology in unsupervised deep learning. So let's see what I have for you here: unsupervised deep learning, part 3, generative adversarial networks, GANs.

The key idea of GANs is to play the following game. You have a generator and a discriminator. The generator, you could argue, is somebody who produces a fake image, and the discriminator then has to figure out whether the generator actually produced something real or something fake. So the discriminator can decide "fake" or "real", and in order to train the discriminator, it has access to many real data observations. The outcome of the discriminator is then whether the input was real or fake.

Of course, it is difficult to ask persons and artists to draw things, so we replace the two with deep neural networks: we have D, the discriminator, and G, the generator. The generator receives some latent input, a noise variable z, and from this noise variable and its parameters it produces an image. The discriminator then tries to figure out whether this was a real or a fake image, so the output of the discriminator is going to be one for real and zero for fake.
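As a minimal sketch of this setup (assuming PyTorch and simple fully connected networks; the layer sizes here are made up for illustration, and real GANs typically use convolutional architectures), the two players could look like this:

```python
import torch
import torch.nn as nn

# Hypothetical sizes, just for illustration.
LATENT_DIM = 100      # dimension of the noise variable z
IMG_DIM = 28 * 28     # flattened image size, e.g. MNIST

# Generator G: maps a latent noise vector z to a (flattened) image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMG_DIM),
    nn.Tanh(),        # images scaled to [-1, 1]
)

# Discriminator D: maps a (flattened) image to a probability of being real.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),     # close to 1 for real, close to 0 for fake
)

# A fake image: sample noise z and push it through the generator.
z = torch.randn(16, LATENT_DIM)   # batch of 16 noise vectors
fake_images = G(z)
scores = D(fake_images)           # discriminator's belief that these are real
```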

Once we have found this kind of neural network representation, we are also able to describe a loss. The loss of our discriminator, which depends on the parameters of the discriminator and on the parameters of the generator, is the function it has to minimize: essentially the negative expected value, over samples x drawn from the data, of the logarithm of the discriminator output for real samples, minus the expected value, over the generated noise, of the logarithm of one minus the discriminator applied to the generator output for that noise. So the discriminator is trained to distinguish real data samples from fake ones.
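Written out in the usual GAN notation, with θ_D and θ_G the parameters of discriminator and generator (some formulations add a factor of 1/2 in front of each term), the discriminator loss described above is:

$$
J^{(D)}(\theta_D, \theta_G) \;=\; -\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] \;-\; \mathbb{E}_{z}\!\left[\log\!\left(1 - D(G(z))\right)\right]
$$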

Now if you want to train the generator, you simply minimize the loss of the generator, which is the negative loss of the discriminator. So the generator minimizes the log probability of the discriminator being correct; you train it to generate domain images that fool D. Optionally, you can run k steps of one player for every step of the other player, and the equilibrium is a saddle point of the discriminator loss.
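A rough sketch of this alternating training scheme, assuming the G, D and LATENT_DIM from the sketch above, binary cross-entropy as the criterion, and a hypothetical data_loader that yields batches of flattened real images:

```python
import torch

opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = torch.nn.BCELoss()
k = 1  # number of discriminator steps per generator step

for real_images, _ in data_loader:
    batch = real_images.size(0)
    ones = torch.ones(batch, 1)    # label "real"
    zeros = torch.zeros(batch, 1)  # label "fake"

    # k discriminator updates: push D(x) towards 1 and D(G(z)) towards 0.
    for _ in range(k):
        z = torch.randn(batch, LATENT_DIM)
        fake = G(z).detach()       # no gradient into G during the D step
        loss_D = bce(D(real_images), ones) + bce(D(fake), zeros)
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()

    # One generator update. Here we use the non-saturating trick (maximize
    # log D(G(z)), discussed later in the video); the strict minimax game
    # would instead minimize log(1 - D(G(z))).
    z = torch.randn(batch, LATENT_DIM)
    loss_G = bce(D(G(z)), ones)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```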

If you look into this in more detail, you find that the loss of the generator is directly tied to the negative loss of the discriminator. So you can summarize this game with a value function V specifying the discriminator's payoff; V is the negative loss of the discriminator, and this results in the following min-max game: the optimal parameter set of the generator is determined by maximizing V with respect to the parameters of the discriminator, nested inside a minimization of the same value function with respect to the parameters of G.
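With V as the discriminator's payoff, this can be written compactly as:

$$
V(\theta_D, \theta_G) = -J^{(D)}(\theta_D, \theta_G), \qquad
\hat{\theta}_G = \arg\min_{\theta_G} \,\max_{\theta_D}\, V(\theta_D, \theta_G)
$$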

So let's have a look at the optimal discriminator. There is a key assumption, namely that both densities are non-zero everywhere, because otherwise some input values would never be trained and the discriminator would have undetermined behavior in those areas. You then set the gradient of the discriminator loss with respect to the discriminator to zero, and you can find the optimal discriminator for any data distribution and any model distribution in the following way: the optimal discriminator is the density of the data divided by the density of the data plus the density of the model, over the whole input domain of x.
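In formulas, with p_data the density of the data and p_model the density of the generated samples:

$$
D^{*}(x) \;=\; \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_{\mathrm{model}}(x)} \quad \text{for all } x
$$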

Unfortunately, this optimal discriminator is theoretical and unachievable, so it is key for GANs to have an approximation mechanism: GANs use supervised learning to estimate this ratio, which then leads to the usual problems of underfitting and overfitting.

Now what else can we do? We can play a non-saturating game, in which we modify the generator's loss. In this case we are no longer using the same function for both players; instead we have a new loss for the generator, in which we simply minimize the negative expected value of the logarithm of the discriminator applied to the generator output for some input noise. In the min-max game, G minimizes the log probability of D being correct; in this solution, G minimizes the log probability of D being mistaken.
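The non-saturating generator loss described here is, up to a constant factor:

$$
J^{(G)}(\theta_D, \theta_G) \;=\; -\,\mathbb{E}_{z}\!\left[\log D(G(z))\right]
$$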

This is heuristically motivated, because it fights the vanishing gradient of G when D is too strong, which is particularly a problem at the beginning of training. However, the equilibrium is then no longer describable by a single loss.

There are also popular extensions, like the feature matching loss or the perceptual loss. Here G tries to match the expected value of the features f(x) of some intermediate layer of the discriminator.
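One common form of such a feature matching loss (as proposed by Salimans et al.; the choice of the feature layer f and of the norm varies) would be:

$$
J^{(G)} \;=\; \left\lVert\, \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[f(x)\right] \;-\; \mathbb{E}_{z}\!\left[f(G(z))\right] \,\right\rVert_2^2
$$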

Part of a video series
Accessible via: Open access
Duration: 00:11:26 min
Recording date: 2020-06-21
Uploaded on: 2020-06-21 21:06:33
Language: en-US


In this video, we talk about the basic ideas of Generative Adversarial Networks (GANs) and show some examples.

Further Reading:
A gentle Introduction to Deep Learning

Tags

Perceptron Introduction artificial intelligence deep learning machine learning pattern recognition