8 - Vincent Duval: Representing the solutions of total variation regularized problems [ID:17380]

Well, thank you. First of all, thank you very much for the invitation. It's really a pleasure

to be able to give a talk at this wonderful seminar. So I will talk about how to represent

the solutions of total variation regularized problems. And it's quite related to Michael's

talk, since we're trying to obtain similar results but for different energies. And let

me mention that this is a joint work with Claire Boyer, Antonin Chambolle, Yohann De Castro,

Frédéric de Gournay and Pierre Weiss. So first I recall a few known results about how to

represent the solutions of variational problems in the space of Radon measures. Michael has

already mentioned some of them so I'll be very quick about that. And then I will generalize

it to variational problems involving convex regularizations. And eventually we will deal

with the case of functions of bounded variation and see how it applies to image processing.

So this topic of trying to represent the solutions of variational problems is quite old in fact.

For many years mathematicians have tried to describe this kind of solution. And I must

say that in signal and image processing, Michael had a huge influence

by really highlighting the importance of that fact, especially when working with spaces

of measures. So I became interested in this topic when I was working on problems similar

to his, namely basis pursuit for measures. And so what is it exactly? Well, you're given

a compact domain and you're trying to recover a Radon measure but you only have a few linear

measurements that are given by integrating this measure against continuous functions.

So you have m linear measurements and you want to find the measure which has the least

total variation among all the measures which provide you these linear observations. So

the total variation, Michael has already talked about it, it's really the generalization

of the L1 norm; it can be defined by this duality formula, for instance. And it's been considered

in the last few years by many of us in signal and image processing after the works of De

Castro and Gamboa, Bredies and Pikkarainen, and Candès and Fernandez-Granda, for instance.

It's been used for deconvolution, frequency estimation, super-resolution, but it's quite

an old energy that has been used for many years, at least since the works of Krein and Beurling

in the 1930s, who were using it for theoretical purposes.
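In symbols (my reconstruction from the description above, with phi_1, ..., phi_m the continuous test functions and y the vector of measurements), the basis pursuit for measures on a compact domain X reads:

```latex
\min_{\mu \in \mathcal{M}(X)} \ |\mu|(X)
\quad \text{subject to} \quad
\int_X \varphi_i \,\mathrm{d}\mu = y_i, \qquad i = 1, \dots, m,
```

where the total variation norm is given by the duality formula

```latex
|\mu|(X) \;=\; \sup \left\{ \int_X \psi \,\mathrm{d}\mu \;:\; \psi \in C(X),\ \|\psi\|_{\infty} \le 1 \right\}.
```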

And curiously, this kind of model provides us with some of the oldest examples of representer

theorems that I am aware of, at least. There's a result by Zuhovickii in the 1940s which tells

us that there is a solution to this basis pursuit problem that has the form of a sum

of at most m Dirac masses. Later on, this result was refined by Fisher and Jerome in

the 70s and they noticed that all the extreme points of the solution set are actually of

this form of a sum of at most m Dirac masses. So that's quite interesting if you remember

what an extreme point is, that is, a point of a convex set that cannot be written as

a nontrivial convex combination of other points of this convex set. For instance, in this picture

we have e0, e1, e2, e3, e4 and all the red points on that arc of circle. Well, these

extreme points, they are important because of the Krein-Milman theorem, which tells

you that you can reconstruct the whole convex set by simply taking the closed convex hull

of the extreme points. So in some sense, if you are able to find all the extreme points

of the solution set of your variational problems, then you have access to the full set of solutions.
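Stated compactly (my wording, not the speaker's): the Krein-Milman theorem says that a compact convex set K in a locally convex space is recovered from its extreme points,

```latex
K \;=\; \overline{\operatorname{conv}}\,(\operatorname{ext} K).
```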

So you simply need to look for the solutions which are m-sparse. Let me also mention that

actually Fisher and Jerome were working on a slightly more general model where you have

a differential operator that is involved, this operator L, which maps some functional

space onto the space of measures and they consider this kind of models which are a bit

more elaborate. And these results were generalized by Michael and collaborators and also Flinth

and Weiss recently. But I won't go into detail about these more sophisticated models.
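Schematically (my reconstruction, not a formula from the talk), the Fisher-Jerome-type model just described, with a differential operator L mapping a function space into the space of measures, reads:

```latex
\min_{u} \ |L u|(X)
\quad \text{subject to} \quad
\langle \varphi_i, u \rangle = y_i, \qquad i = 1, \dots, m.
```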

It's just to give some intuition about what we can get using representer theorems.
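To make the m-sparsity concrete, here is a small numerical sketch of my own (not from the talk): discretizing the measure on a grid of candidate Dirac locations turns basis pursuit into a finite linear program, and a vertex (basic) solution of that program carries at most m spikes. The cosine measurements and spike positions below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

# Grid of candidate Dirac locations on [0, 1] and m continuous test functions
n, m = 200, 5
grid = np.linspace(0.0, 1.0, n)
freqs = np.arange(1, m + 1)
A = np.cos(np.outer(freqs, grid))  # measurements: integrals against cosines

# Ground-truth signed measure: 3 Dirac masses
truth = np.zeros(n)
truth[[30, 90, 160]] = [1.0, -0.5, 2.0]
y = A @ truth

# Basis pursuit: min ||w||_1  s.t.  A w = y, via the split w = w+ - w-.
# The dual simplex method returns a basic solution, hence at most m nonzeros.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs-ds")
w = res.x[:n] - res.x[n:]

support = np.flatnonzero(np.abs(w) > 1e-8)
print(f"spikes found: {len(support)} (at most m = {m})")
```

The bound `len(support) <= m` is exactly the discrete shadow of the representer theorem: a basic optimal solution of a linear program with m equality constraints has at most m nonzero variables.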

So what are the consequences of these representation results? First of all, on the theoretical

side, you get some intuition about what you do when you regularize: what we can understand

from these examples is the fact that the total variation norm tends to promote Dirac masses.

Accessible via: Open Access

Duration: 00:45:20 min

Recording date: 2020-06-08

Uploaded on: 2020-06-08 23:36:34

Language: en-US

The total (gradient) variation is a regularizer which has been widely used in inverse problems arising in image processing, following the pioneering work of Rudin, Osher and Fatemi. In this talk, I will describe the structure of the solutions of total variation regularized variational problems when one has a finite number of measurements.
First, I will present a general representation principle for the solutions of convex problems, then I will apply it to the total variation by describing the faces of its unit ball.

It is a joint work with Claire Boyer, Antonin Chambolle, Yohann De Castro, Frédéric de Gournay and Pierre Weiss.
