35 - Lecture_09_1_Generalized_Tikhonov [ID:39057]

Hi, we are going to talk about generalized Tikhonov regularization today. Let's recall quickly that Tikhonov regularization is the reconstructed parameter u_lambda, which minimizes this functional. So there's a balance between two terms. Minimizing the first term enforces, let's say, data fidelity: if we want to make it small, then we have to choose a parameter u which, in image space, is as close as possible to the data. If we only minimize that, we recover least-squares solutions. The second term penalizes large parameters, and it makes sense to do that in order to get uniqueness of this minimization problem. For example, if we drop it completely, then, as I said, we recover the least-squares solution, but this might not be unique; introducing the penalization term recovers uniqueness again. At the same time, if lambda is considerably large, this will also force the parameter to be small in some sense, small in the sense that it's not too big. So it's a balance between two things: data fidelity and keeping the parameter from being too large.
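
Written out, the functional referred to here is presumably the standard Tikhonov functional from the earlier lectures (the slide itself is not part of this transcript, so this is an assumption about its content):

    u_\lambda = \arg\min_u \; \|A u - f\|^2 + \lambda \, \|u\|^2

where A is the forward operator, f the measured data, and lambda > 0 the regularization parameter.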

But this is not the optimal model in examples where minimizing the norm of the parameter makes no sense, or makes little sense. For example, we might know a priori, so before we get the data, that the parameter is close to some empirical value u0. Then the following model might be more realistic, or more helpful: we again minimize some data-misfit term, but now we don't penalize the norm of u; instead we penalize the deviation of u from this empirical parameter u0. So this enforces closeness to u0. And if we're in a setting where we know that the parameter has to be close to u0, with some deviation that will give us the right data, then this makes a lot of sense.
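
As a concrete sketch (my own illustration, not code from the lecture), the shifted functional ||A u - f||^2 + lambda ||u - u0||^2 is still quadratic, so its minimizer solves the normal equations (A^T A + lambda I) u = A^T f + lambda u0. The quantities A, f, u0 and lam below are made-up toy data:

import numpy as np

# Hedged sketch: Tikhonov regularization with an a-priori reference value u0,
#   minimize ||A u - f||^2 + lam * ||u - u0||^2.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))                     # toy forward operator
u_star = np.array([1.0, 1.1, 0.9, 1.05, 0.95])   # "true" parameter, near u0
f = A @ u_star + 0.01 * rng.normal(size=20)      # noisy data
u0 = np.ones(5)                                  # empirical prior value
lam = 0.1

# The minimizer of the quadratic functional solves
#   (A^T A + lam * I) u = A^T f + lam * u0.
u_lam = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ f + lam * u0)
print(u_lam)   # close to both u_star and u0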

We might also have a priori information about the, let's call it, shape of u. For example, let's consider the setting where u is a vector (u1, ..., un) which is a discretization of some function; the deconvolution problem or the denoising problem is such an example. So we assume that the parameter is a discretized function, for example an image or a thermal conductivity function. If the components don't have any connection, say u1 is a length-scale parameter, u2 is a temperature, u3 is a physical density, something like that, so if those parameters are completely disjoint and independent quantities, then this doesn't make any sense. But sometimes our parameter is a function, or at least a discretization of a function. For example, if we're looking at images, and at deconvolution in particular, then our parameter is pixels, which we can interpret as a discretization of an image function.
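
To make the distinction concrete, here is a small toy illustration of my own (the grid and the function are made up): in the denoising or deconvolution setting, the parameter vector is just samples of one underlying function, whereas a vector of unrelated physical quantities has no such structure.

import numpy as np

# Hedged illustration: the unknown parameter u = (u_1, ..., u_n) as a
# discretized function, sampled on a grid.
n = 100
x = np.linspace(0.0, 1.0, n)            # grid points x_1, ..., x_n
u = np.exp(-(x - 0.5) ** 2 / 0.02)      # u_i = value of an underlying function at x_i

# By contrast, a vector like (length_scale, temperature, density) has
# components with no spatial relationship to each other, so a smoothness
# assumption on such a vector would be meaningless.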

In this setting, then, we might know something about the shape of u. For example, we might know that the underlying function is smooth, which means that the norm of its derivative is small. What does that mean? Well, I first wrote f, but f is really bad notation here, because f is already the data; let's call the function something else, say phi. So the parameter is a function phi, or a discretization of it, and we then expect phi to look like a gently varying curve rather than something very oscillatory with jumps: the oscillatory curve has a large norm of phi', and the gently varying one has a small norm of phi'. So if the function, the parameter we don't know, is believed to be smooth a priori (that's a modeling assumption; you might know, for instance, that your image is smooth in the region you're looking at), then it makes sense to use this. It's an assumption you can use, and you can include it in a Tikhonov-type regularization term. Above, we enforced closeness to u0; you can do something similar in order to enforce smoothness of the discrete vector u.
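
As a quick numerical check of what "the norm of phi' is small" means (a toy example of my own, not from the lecture), compare the forward-difference derivative norm of a slowly varying sampled function with that of an oscillatory one:

import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
phi_smooth = np.sin(np.pi * x)                                  # gently varying
phi_rough = np.sin(np.pi * x) + 0.2 * np.sin(60 * np.pi * x)    # oscillatory

def deriv_norm(phi, dx):
    """Euclidean norm of the forward-difference approximation of phi'."""
    return np.linalg.norm(np.diff(phi) / dx)

print(deriv_norm(phi_smooth, dx))   # small
print(deriv_norm(phi_rough, dx))    # much larger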

And how does it work? Phi'(x_i) is approximated by (phi(x_{i+1}) - phi(x_i)) / delta x, where delta x = x_{i+1} - x_i, and this equals (u_{i+1} - u_i) / delta x. So this means that the vector of values phi'(x_1), phi'(x_2), and so on is approximately the following matrix times the vector (u_1, u_2, ..., u_n): the matrix whose first row is (-1, 1, 0, ..., 0), whose second row is (0, -1, 1, 0, ..., 0), and so on, down to the last row (0, ..., 0, -1, 1). Okay, that's the matrix. This we call L. For example, phi'(x_1) is roughly, sorry, there's a constant missing, of course: there's a factor of 1 / delta x, I'm going to put it here. So phi'(x_1) is roughly (u_2 - u_1) / delta x.
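
Putting this together in code (a sketch of my own; the lecture presumably continues by using ||L u||^2 as the penalty term, and A, f, lam, n, dx below are made-up toy quantities):

import numpy as np

# Forward-difference matrix L from the lecture: -1 on the diagonal, +1 on the
# superdiagonal, scaled by 1/dx; one row per difference, so it is (n-1) x n.
n, dx = 50, 1.0 / 50
L = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / dx

# Hedged sketch of the smoothness-penalized Tikhonov problem
#   minimize ||A u - f||^2 + lam * ||L u||^2,
# whose minimizer solves (A^T A + lam * L^T L) u = A^T f.
rng = np.random.default_rng(1)
A = rng.normal(size=(80, n))                    # toy forward operator
xgrid = np.linspace(0.0, 1.0, n)
u_true = np.sin(np.pi * xgrid)                  # a smooth "true" parameter
f = A @ u_true + 0.01 * rng.normal(size=80)     # noisy data
lam = 1e-3

u_lam = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ f)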
