With four friends from Graz: Erich Kobler, who has just finished his PhD, Karl Kunisch, whom you might all know, Thomas Pinetz, and Thomas Pock. What I will present today is a quite applied talk, because I removed all deeper theorems from it; I just want to convey the main messages: how to exploit energy-based models for image reconstruction where the regularizer is learned. In the end we will see that not only the regularizer can be learned, but also what happens if you, in addition, learn the data fidelity term, and how to apply all of this to situations where you do not have any ground truth data. So this is the main idea behind this talk.

Let me start. We have an inverse problem in imaging, z = Ay + ξ, with the unknown target y, and our goal, as usual, is to estimate this unknown target.
Then we have the measured observation z, any measurement, and for the moment we assume that we have detailed knowledge about the forward process itself, so the forward operator A is known. At least we think that we know it. For image denoising, A is simply the identity operator, but for CT or MRI reconstruction and so on, this operator is much more involved; in most cases it is known, but not always. We further assume that there is some noise, and, for the moment, that we know the properties of its distribution. Later on we will see what happens if we do not know the noise distribution, or, even worse, if we do not know the noise itself. And as you all might know, this is an ill-posed problem, so it is difficult to actually solve.
Here are some basic examples. Image denoising: you have an image that is heavily corrupted by noise, and below you see the target. Single-image super-resolution: you want to upscale an image, so A in this case is a downsampling operator. CT reconstruction, where A is some kind of Radon transform. And here, an MRI reconstruction: in the first row you see the k-space data, so the Fourier domain, and below the reconstructed MRI image.

Okay, and how do we get this target y? In this talk I want to advocate a
variational approach: we minimize an energy that is composed of a data fidelity term plus a regularizer, E(x) = D(Ax, z) + R(x). The data fidelity term penalizes, as usual, the deviation of the forward model Ax from the measurement z; for the moment, let us assume a quadratic data fidelity term. The regularizer R has the task of imposing prior knowledge about the structure of the solution, and how to design a good regularizer is one of the central elements of this talk.
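To make this concrete, here is a hedged sketch of such an energy for the denoising case, with the quadratic data fidelity term from above and a simple quadratic (Tikhonov) regularizer standing in as a placeholder for the learned regularizer discussed later; the names and the value of lam are my own choices.

```python
import numpy as np

def grad(x: np.ndarray):
    """Forward-difference discrete gradient with replicate (Neumann) boundary."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    return dx, dy

def energy(x: np.ndarray, z: np.ndarray, lam: float = 0.1) -> float:
    """E(x) = 1/2 ||x - z||^2 + R(x), here with a placeholder Tikhonov R."""
    dx, dy = grad(x)
    data = 0.5 * np.sum((x - z) ** 2)          # quadratic data fidelity (A = identity)
    reg = 0.5 * lam * np.sum(dx**2 + dy**2)    # placeholder regularizer
    return float(data + reg)
```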
Everything here has a statistical interpretation: if you consider the maximum a posteriori (MAP) estimate given below, then maximizing the conditional probability of x given z is the same as minimizing its negative logarithm. We can then identify the data fidelity term with −log p(z | x), and the regularizer is, from this perspective, nothing else but the negative log prior, −log p(x).
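Spelled out, the identification goes as follows; this is the standard derivation, with a Gaussian noise model assumed for the data term:

```latex
\hat{x}_{\mathrm{MAP}}
  = \operatorname*{arg\,max}_x \, p(x \mid z)
  = \operatorname*{arg\,min}_x \, \bigl( -\log p(z \mid x) - \log p(x) \bigr),
\qquad
-\log p(z \mid x) \;=\; \tfrac{1}{2\sigma^2} \lVert A x - z \rVert^2 + \mathrm{const}
\quad \text{for } \xi \sim \mathcal{N}(0, \sigma^2 I),
```

so the quadratic data fidelity term is −log p(z | x) up to constants, and R(x) = −log p(x).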
Okay, this is the general setting; you all might have heard about it. The central question in the first part of this talk is how to design this regularizer.

On the right you see the famous water castle image, heavily corrupted by Gaussian noise; you see that it has quite a bad PSNR score. If we now pursue the variational approach with a regularizer, then we have to think about the statistics of natural images, in order to design a regularizer that really promotes these statistics.
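As an aside, since image quality here is reported as a PSNR score, this is the standard definition in code; the peak value of 1.0 assumes images scaled to [0, 1].

```python
import numpy as np

def psnr(x: np.ndarray, ref: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between an estimate x and a reference."""
    mse = np.mean((x - ref) ** 2)
    return float(10.0 * np.log10(peak**2 / mse))
```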
What do I mean by the statistics of natural images? What I could do is the following: extract two-by-two patches from natural images. I took a large data set, extracted 14 million patches, and computed the horizontal and the vertical components of the discrete gradient. If I now plot their joint probability density function in the logarithmic domain, I see a very particular structure of natural image gradients: a very sharp peak at the origin and quite long tails for larger values of the horizontal and vertical components. So there is a kind of structure in the images.
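A hedged sketch of this measurement: accumulate horizontal/vertical gradient pairs over a set of grayscale images and view the joint histogram in the log domain. The bin count and value range below are my own choices, not those from the talk.

```python
import numpy as np

def gradient_pairs(img: np.ndarray):
    """Horizontal and vertical forward differences, cropped to a common size."""
    gx = (img[:, 1:] - img[:, :-1])[:-1, :]
    gy = (img[1:, :] - img[:-1, :])[:, :-1]
    return gx.ravel(), gy.ravel()

def log_gradient_histogram(images, bins: int = 129, lim: float = 0.5):
    """Joint histogram of (gx, gy) pairs, viewed in the logarithmic domain."""
    edges = np.linspace(-lim, lim, bins + 1)
    hist = np.zeros((bins, bins))
    for img in images:
        gx, gy = gradient_pairs(img)
        h, _, _ = np.histogram2d(gx, gy, bins=(edges, edges))
        hist += h
    return np.log1p(hist)   # sharp peak at the origin, long tails
```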
What our regularizer should do, then, is help to identify and promote this particular structure.
As a first guess, one could use the total variation, which was popularized in the Rudin–Osher–Fatemi work of 1992; you just use the standard discretization of this total variation. If you now compare this: the standard total variation is nothing else but an absolute value function applied to the discrete image gradient.