18 - Learning of Wasserstein Generative Models and Patch-based Texture Synthesis [ID:33962]

Thank you for this nice introduction.

And I would also like to thank you for the invitation.

Also, thank you for having me participate in this NoMADS project.

Through this project, I also had the chance to spend almost one year in Cambridge, thanks to Daniel, who is also a very good project organizer.

So today I will talk to you about learning of Wasserstein generative models and some applications to texture synthesis.

So this is joint work with Antoine Houdard, who is a postdoc in Bordeaux, Arthur Leclaire, who is an associate professor in Bordeaux, and also a close collaborator, Julien Rabin, whom Daniel also knows, who is an associate professor in Caen, in Normandy.

Okay, so I will briefly introduce the subject of the talk, then go into more detail on optimal transportation, then explain how we can learn Wasserstein generative models, and finally turn to the patch-based texture synthesis application.

So first of all, this talk is about generative models.

I guess that most of you have already heard about them.

They currently appear in many papers, with applications to the MNIST database of digits, Fashion-MNIST, bedrooms, or generated faces.

And all this work is based on the following assumption.

You assume that you have a data set which is discrete; here, these would be the blue dots.

And what you want is to be able to generate new samples automatically.

To that end, you start, for instance, by generating random vectors in some space, and then you want to estimate some generator that will transform these points into the red ones, which are close in some sense to the blue ones.

So the idea is to generate new images.

Here, think of these blue dots as true faces or true bedrooms, and you want to generate new ones that are close in some space, the space of images.

And the objective is to find the best generator, which will be parameterized by this parameter theta.
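As a minimal sketch of this setup (all layer sizes and names here are hypothetical assumptions, not taken from the talk), a generator g_theta mapping random latent vectors to samples could look as follows in PyTorch:

    import torch
    import torch.nn as nn

    # Hypothetical dimensions: a 64-dimensional latent space and
    # 784-dimensional samples (e.g. flattened 28x28 MNIST images).
    latent_dim, data_dim = 64, 784

    # The generator g_theta: a small fully connected network whose
    # weights play the role of the parameter theta.
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256),
        nn.ReLU(),
        nn.Linear(256, data_dim),
    )

    # Draw random latent vectors and push them through g_theta to get
    # generated samples, to be compared with the data set.
    z = torch.randn(16, latent_dim)
    fake_samples = generator(z)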

So there are several ways to do it.

A first way is to use a variational autoencoder: you have an autoencoder with some randomness at the bottleneck, and then you use the decoder as a generative model.
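As a minimal sketch of this idea (the architecture and sizes are hypothetical; the talk does not specify them):

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, data_dim=784, latent_dim=64):
            super().__init__()
            # The encoder outputs a mean and a log-variance for the bottleneck.
            self.encoder = nn.Linear(data_dim, 2 * latent_dim)
            self.decoder = nn.Linear(latent_dim, data_dim)
            self.latent_dim = latent_dim

        def forward(self, x):
            mu, log_var = self.encoder(x).chunk(2, dim=-1)
            # Randomness at the bottleneck: sample z ~ N(mu, sigma^2)
            # via the reparameterization trick.
            z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
            return self.decoder(z), mu, log_var

    # After training, the decoder alone is used as the generative model:
    vae = VAE()
    z = torch.randn(16, vae.latent_dim)
    new_samples = vae.decoder(z)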

And one very popular model is the GAN, the generative adversarial network, where you train not only a generator but also a discriminator that will try to discriminate between the fake images generated by your model and the true ones.

You then have a min-max problem, where the discriminator wants to be able to distinguish fake from true and the generator wants to fool the discriminator.
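In standard notation (the classical GAN objective, written out here for reference rather than copied from the slides), this min-max problem reads

    \min_\theta \max_\phi \; \mathbb{E}_{x \sim \nu}\big[\log D_\phi(x)\big] + \mathbb{E}_{z \sim \zeta}\big[\log\big(1 - D_\phi(g_\theta(z))\big)\big],

where \nu is the data distribution, \zeta the latent distribution, g_\theta the generator and D_\phi the discriminator.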

After this very important work came the Wasserstein GAN extension, where you no longer try to separate between individual images, but between distributions of images.

So here we are not only looking locally at the points to find a separation between them, but really looking at the whole distribution.

And this is linked with transport distances, and in fact with the Wasserstein distance.
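For reference (this is the standard definition, not a formula shown on the slides), the Wasserstein-1 distance between two probability measures \mu and \nu with Euclidean cost is

    W_1(\mu, \nu) = \inf_{\pi \in \Pi(\mu, \nu)} \int \|x - y\| \, d\pi(x, y),

where \Pi(\mu, \nu) denotes the set of couplings, i.e. joint distributions with marginals \mu and \nu.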

So you can rewrite the problem like this.

Assume that you have some distance between probability measures; what you want is to find the generator that gives you a distribution close to the discrete one, which is here the data set.

If you consider the particular Wasserstein distance with the Euclidean cost, which is called W1, then by the duality of the Wasserstein distance you obtain this expression, which is very close to this one, except that here you have to add a constraint on the dual variable and require that it is 1-Lipschitz.
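Written out (the standard Kantorovich-Rubinstein duality, reconstructed here from the description), the learning problem and the dual form of W1 read

    \min_\theta W_1\big(g_{\theta\#}\zeta, \nu\big), \qquad W_1(\mu, \nu) = \sup_{\mathrm{Lip}(f) \le 1} \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)],

where g_{\theta\#}\zeta is the distribution of generated samples and the dual variable f (the Kantorovich potential) plays the role of the discriminator, the 1-Lipschitz condition being the added constraint.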

Part of a video series:

Accessible via: Open Access

Duration: 01:01:40 min

Recording date: 2021-06-08

Uploaded on: 2021-06-08 15:38:42

Language: en-US

Nicolas Papadakis on "Learning of Wasserstein Generative Models and Patch-based Texture Synthesis":

The problem of WGAN (Wasserstein Generative Adversarial Network) learning is an instance of optimization problems where one wishes to find, among a parametric class of distributions, the one which is closest to a target distribution in terms of an optimal transport (OT) distance. Applying a gradient-based algorithm to this problem requires expressing the gradient of the OT distance with respect to one of its arguments, which can be related to the solutions of the dual problem (Kantorovich potentials). The first part of this talk aims at finding conditions that ensure the existence of such a gradient. After discussing regularity issues that may appear with discrete target measures, we will show that regularity problems are avoided when using entropy-regularized OT and/or considering the semi-discrete formulation of OT. Then, we will see how these gradients can be exploited in a stable way to address some imaging problems where the target discrete measure is reasonably large. Using OT distances between multi-scale patch distributions, this makes it possible to estimate a generative convolutional network that can synthesize an exemplar texture in a faithful and efficient way.
This is joint work with Antoine Houdard, Arthur Leclaire and Julien Rabin.
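As an illustration of the entropy-regularized OT mentioned in the abstract, here is a minimal NumPy sketch of the standard Sinkhorn iterations (a generic textbook version, not the authors' implementation):

    import numpy as np

    def sinkhorn(a, b, C, eps=0.1, n_iter=200):
        """Entropy-regularized OT between histograms a and b with cost matrix C.
        Returns the transport plan; eps * log(u) recovers a smooth dual potential."""
        K = np.exp(-C / eps)           # Gibbs kernel
        u = np.ones_like(a)
        for _ in range(n_iter):
            v = b / (K.T @ u)          # alternating scaling updates
            u = a / (K @ v)
        return u[:, None] * K * v[None, :]

    # Toy example: two uniform histograms on 5 points with quadratic cost.
    x = np.linspace(0.0, 1.0, 5)
    C = (x[:, None] - x[None, :]) ** 2
    P = sinkhorn(np.full(5, 0.2), np.full(5, 0.2), C)
    print(P.sum())  # ~1.0: a valid coupling with the prescribed marginals

The entropic smoothing makes the dual potentials unique (up to a constant) and differentiable with respect to the weights, which relates to the regularity issue discussed in the abstract.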

Tags

functional minimization, methods, framework, approximation, control, distance, reconstruction, energy, deep, search, basic, weights, models, measure, layer, activation, problem, example, propagation, advanced, GANs, methods