Lecture 09.2: Morozov Principle

Hi. We are still in the setting where we have a linear inverse problem Au = f, and we are considering the Tikhonov-regularized solution u_{T,λ}. The big question is how to choose λ in a way that is not completely arbitrary, and the idea is that we have to balance the two terms of the Tikhonov functional. Why do we have to balance them? If λ is equal to zero, then we are just minimizing ‖Au − f‖², which gives us least-squares solutions.
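In formulas, with the notation from the earlier lectures (the exact symbols are assumed here and may differ slightly), the regularized reconstruction is

$$
u_{T,\lambda} = \arg\min_{u} \; \|Au - f\|^2 + \lambda \|u\|^2, \qquad \lambda > 0,
$$

and setting λ = 0 reduces this to the plain least-squares problem of minimizing ‖Au − f‖² over u.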

It is interesting to look not at the point λ = 0 and minimize there, but at what happens if λ is positive and goes down to zero. What happens with u_{T,λ} then? That is the topic of our next lemma, Lemma 3.7, which consists of just two equations. The first: the Tikhonov-regularized solution u_{T,λ} converges to the minimum norm solution in the limit λ → 0. There is a difference here. You can take λ = 0; then the optimization problem is just a least-squares minimization problem, and you can take any least-squares solution. But the point is that if you solve the problem for small λ and let λ go to zero, you converge to the unique least-squares solution with minimum norm, which is the minimum norm solution. So this is specifically about the limit λ → 0, not about choosing λ = 0 right away. The second equation: if λ goes to plus infinity, then u_{T,λ} converges to zero. That makes sense intuitively: if λ goes to infinity, we do not really care about the data at all, we only care about making the norm of u as small as possible, and the point with the smallest possible norm is the zero point.
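Written out, with u_mn denoting the minimum norm solution, the two statements of Lemma 3.7 are

$$
\lim_{\lambda \to 0^{+}} u_{T,\lambda} = u_{\mathrm{mn}},
\qquad
\lim_{\lambda \to \infty} u_{T,\lambda} = 0.
$$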

Okay, this is quite easily proven. We can go back to our formula for the Tikhonov-regularized reconstruction: it is V times the diagonal matrix with entries σ₁/(σ₁² + λ), σ₂/(σ₂² + λ), and so on, up to σ_L/(σ_L² + λ), where σ_L is the last singular value that is still larger than zero, and the remaining entries are just zeros, times Uᵀ times f. And for the minimum norm solution, you can check your lecture notes again: it is V times the diagonal matrix consisting of the pseudoinverse of the matrix of singular values, which has entries 1/σ₁, 1/σ₂, up to 1/σ_L with the rest set to zero, times Uᵀ f.
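In formulas, assuming as in the earlier lectures the singular value decomposition A = U Σ Vᵀ with singular values σ₁ ≥ … ≥ σ_L > 0 and all remaining singular values equal to zero, the two reconstructions read

$$
u_{T,\lambda} = V \,\mathrm{diag}\!\left(\frac{\sigma_1}{\sigma_1^2+\lambda},\dots,\frac{\sigma_L}{\sigma_L^2+\lambda},0,\dots,0\right) U^{T} f,
\qquad
u_{\mathrm{mn}} = V \,\mathrm{diag}\!\left(\frac{1}{\sigma_1},\dots,\frac{1}{\sigma_L},0,\dots,0\right) U^{T} f.
$$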

Now you see very easily what happens: as λ goes to zero, each of these diagonal entries converges to the corresponding entry of the pseudoinverse, so the two objects converge to each other. And as λ goes to infinity, each entry goes to zero, so the reconstruction goes to zero. So you see immediately from this representation of the Tikhonov regularization what happens with the reconstruction in the limits λ → 0 and λ → +∞.
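Entry by entry, for each i ≤ L,

$$
\frac{\sigma_i}{\sigma_i^2+\lambda} \;\longrightarrow\; \frac{\sigma_i}{\sigma_i^2} = \frac{1}{\sigma_i}
\quad (\lambda \to 0^{+}),
\qquad
\frac{\sigma_i}{\sigma_i^2+\lambda} \;\longrightarrow\; 0
\quad (\lambda \to \infty),
$$

which gives exactly the two limits of the lemma.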

So now we know the edge cases of this balancing. For λ arbitrarily small, we converge to the minimum norm solution, which has the smallest possible data misfit; for λ going to infinity, the reconstruction converges to zero. We do not want the latter: we do not want the reconstruction to be arbitrarily small, and recovering zero as the parameter makes no sense. There has to be some dependency on the data, so λ cannot be too large. At the same time, we cannot choose λ equal to zero or let it go to zero. Well, what is the problem with that? It seems like a good thing to have a least-squares solution, so why don't we let λ converge to zero? We understand that λ = 0 itself is bad, because least-squares solutions might be non-unique, but why don't we choose λ arbitrarily small, going to zero? What is the problem there? Well, in that limit the reconstruction converges to the minimum norm solution, and ‖A u_mn − f‖² is the smallest data misfit we can obtain with any parameter.

So why is this not optimal? We know that f = Au + ε for the true parameter u, where ε is some error term, which means that for this true parameter ‖Au − f‖² equals ‖ε‖². And if ε is sufficiently large, we do not want to push the data misfit all the way down to its least-squares minimum. Why not? Well, we would be putting a lot of work into making the misfit as small as possible, and in doing so we would effectively be trying to fit the noise rather than the underlying parameter.
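To spell that out, writing u† for the true parameter (a notation assumed here), the minimum norm solution fits the data at least as well as the truth itself:

$$
\|A u_{\mathrm{mn}} - f\| \;\le\; \|A u^{\dagger} - f\| \;=\; \|\varepsilon\|,
$$

so enforcing the least-squares misfit means reproducing part of the noise ε rather than only the signal.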
