25 - Lecture_08_1_Tikhonov

Our next topic will be Tikhonov regularization, which is a new approach to regularizing ill-posed inverse problems.

Now we consider, as always, an inverse problem of the form f = Au + epsilon, where for now we say that the data, the forward operator, and the parameter are all finite-dimensional.

We think of this as an ill-posed inverse problem. It doesn't have to be ill-posed, of course, but it doesn't really warrant a lot of thought if it isn't.

And we define the Tikhonov regularized solution of this inverse problem as the following: T for Tikhonov, and there is a scalar lambda, which is a tuning parameter. It is the parameter u which minimizes the following expression: we penalize deviations in the misfit functional, which alone would give a least-squares reconstruction, but we add another term, lambda over two times the norm of u squared. The positive scalar lambda is called the regularization parameter.
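Written out, the definition just described reads as follows (a sketch in standard notation; the symbol $\hat{u}^T_\lambda$, with T for Tikhonov, and the one-half factor on the misfit are assumptions consistent with the spoken description):

$$\hat{u}^T_\lambda = \operatorname*{arg\,min}_{u}\;\frac{1}{2}\,\|Au - f\|_2^2 + \frac{\lambda}{2}\,\|u\|_2^2, \qquad \lambda > 0.$$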

So what does this do? Let's formalize this as a remark.

So this, let's say, strives for a balance between two factors, good fit with the data,

i.e. small value for Au minus f in the norm, but at the same time we want the norm of U

not to be too big.

So small to norm of the parameter U, i.e. small norm of U squared.

This is similar to the minimum norm solution, but it is not completely the same thing. The minimum norm solution minimizes the misfit exactly first, and then picks the parameter with the minimum norm within that set of minimizers.

The Tikhonov regularized solution, by contrast, does both things at the same time and tries to keep a balance, and this balance is managed by the parameter lambda. So lambda is a tuning parameter for this balance. And of course this two-norm term is also a way of ensuring uniqueness of the reconstruction.
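Side by side, the comparison looks like this (a sketch; the notation $\hat{u}^+$ for the minimum norm solution is assumed from last week):

$$\hat{u}^+ \in \operatorname*{arg\,min}\big\{\|u\|_2 \;:\; u \text{ minimizes } \|Au - f\|_2\big\}, \qquad \hat{u}^T_\lambda = \operatorname*{arg\,min}_u\;\frac{1}{2}\|Au - f\|_2^2 + \frac{\lambda}{2}\|u\|_2^2.$$

The first is a two-stage problem; the second trades the two criteria off against each other in a single objective, with $\lambda$ steering the trade-off.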

The interesting thing is that we can explicitly compute this Tikhonov solution, again via the SVD, which is not a big surprise because we have used it all the time, so why should it fail us now?

And this is the statement of the following Theorem 3.6: let f = Au + epsilon be a linear inverse problem. Then the Tikhonov regularized solution is equal to, well, let's write things down first: A = U Sigma V transpose, which is the SVD, and the Tikhonov regularized solution is given by V times something called Sigma tilde plus of lambda, times U transpose, times f. So it's very similar to the minimum norm solution, but where we previously had Sigma plus, we now have Sigma tilde plus, depending on lambda of course, where Sigma tilde plus of lambda is defined as a diagonal matrix consisting of the entries sigma one divided by sigma one squared plus lambda, up to sigma m divided by sigma m squared plus lambda. Here we don't have to distinguish the case where a sigma is equal to zero, because then the entry is just zero, so there is no need for special cases. As before, this is the padded diagonal matrix.
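In formulas, the statement of Theorem 3.6 reads (a sketch; the label placement on $\hat{u}^T_\lambda$ follows the notation assumed above):

$$\hat{u}^T_\lambda = V\,\widetilde{\Sigma}^+_\lambda\,U^{\mathsf T} f, \qquad \widetilde{\Sigma}^+_\lambda = \operatorname{diag}\!\Big(\frac{\sigma_1}{\sigma_1^2+\lambda},\,\dots,\,\frac{\sigma_m}{\sigma_m^2+\lambda}\Big),$$

padded with zeros to match the dimensions, just like $\Sigma^+$ for the minimum norm solution.

A minimal numpy sketch of this formula (the helper name tikhonov_svd is hypothetical; it assumes the objective with the one-half factors written above and a matrix A of shape m x n):

import numpy as np

def tikhonov_svd(A, f, lam):
    # Tikhonov regularized solution via the SVD, following Theorem 3.6:
    # minimizes 0.5 * ||A u - f||^2 + (lam / 2) * ||u||^2, for lam > 0.
    U, s, Vt = np.linalg.svd(A, full_matrices=True)  # A = U @ Sigma @ Vt
    m, n = A.shape
    # Padded diagonal matrix Sigma_tilde^+(lambda) of shape n x m with
    # entries sigma_i / (sigma_i^2 + lam); a zero singular value simply
    # produces a zero entry, so no special cases are needed.
    S_tilde = np.zeros((n, m))
    k = len(s)  # k = min(m, n)
    S_tilde[:k, :k] = np.diag(s / (s**2 + lam))
    return Vt.T @ S_tilde @ U.T @ f

# Sanity check: the same u solves the normal equations (A^T A + lam I) u = A^T f.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
f = rng.standard_normal(5)
u = tikhonov_svd(A, f, lam=0.1)
u_ref = np.linalg.solve(A.T @ A + 0.1 * np.eye(3), A.T @ f)
assert np.allclose(u, u_ref)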

Okay, so for now this is just a statement; we prove it very similarly to the proof of the characterization of the minimum norm solution last week. The proof is almost a copy, with special consideration of the quadratic penalty term. So we again write a generic vector u in parameter space as V times a, which we can do since V is orthogonal, and we call the components of a a one up to a n; I always get confused with the dimensionality, but I think it's n. And we define g as U transpose f. Now we try to find a in R^n such that V times a is equal to the Tikhonov regularized solution.
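Carrying out the step this sets up (a sketch of the computation, using only the orthogonality of $U$ and $V$): substituting $u = Va$ and $g = U^{\mathsf T} f$, the objective decouples into independent scalar problems,

$$\frac{1}{2}\|Au - f\|_2^2 + \frac{\lambda}{2}\|u\|_2^2 = \sum_i \Big(\frac{1}{2}(\sigma_i a_i - g_i)^2 + \frac{\lambda}{2}\,a_i^2\Big) + \text{const}.$$

Setting each derivative to zero gives $(\sigma_i^2 + \lambda)\,a_i = \sigma_i g_i$, i.e. $a_i = \frac{\sigma_i}{\sigma_i^2+\lambda}\,g_i$; where $\sigma_i = 0$ this is $a_i = 0$, matching the zero padding. These are exactly the diagonal entries of $\widetilde{\Sigma}^+_\lambda$.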
