Lecture_10_1_Gradient_Descent

Hi, our new chapter will be about iterative optimization methods.

I have done this lecture before and then realized that I had forgotten to click on record, so this time I will be scrolling through this document instead of writing and talking at the same time.

Okay, so iterative optimization methods: this is about finding the minimizer of a quadratic functional of the following form.

We have a specific function phi of u, which is given by a quadratic, namely one half u transposed times capital B times u, minus lowercase b transposed times u. Here capital B is a matrix and lowercase b is a vector.

And B is not just any matrix, it is a symmetric and positive definite matrix: B transposed is equal to B, and if we multiply B from the left and from the right with any nonzero vector, the result is always positive.
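Written out, the quadratic functional just described is (with capital B the matrix and lowercase b the vector, as in the spoken description):

\[
\Phi(u) \;=\; \tfrac{1}{2}\, u^{\mathsf T} B u \;-\; b^{\mathsf T} u,
\qquad B^{\mathsf T} = B,
\qquad v^{\mathsf T} B v > 0 \ \text{ for all } v \neq 0 .
\]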

And there is a specific application that we have in mind for this quadratic function. Everything works for all kinds of quadratic functions of this form, but the specific application we are interested in comes from inverse problems, of course, because that is the context of this lecture, and there phi has a specific form.

Phi of u is a data misfit functional, one half the squared norm of A u minus f, plus a regularizer, lambda half times the squared norm of L u, so this is basically just the target functional of generalized Tikhonov regularization.

And this is of the same form as the general quadratic above, as we can easily see by multiplying it out.
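In formulas, the generalized Tikhonov functional referred to here reads (as reconstructed from the expansion carried out next; A is the forward operator, f the data, L the regularization operator, and lambda the regularization parameter):

\[
\Phi(u) \;=\; \tfrac{1}{2}\,\lVert A u - f \rVert_2^2 \;+\; \tfrac{\lambda}{2}\,\lVert L u \rVert_2^2 .
\]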

So the first term, the data misfit, is just a quadratic: expanding the square, the term A u with itself gives one half u transposed A transposed A u, the cross term gives minus u transposed A transposed f, and the squared data give the constant one half f transposed f.

The regularizer is lambda half u transposed L transposed L u, and by combining the two quadratic terms we get an expression of the desired form: one half u transposed times a matrix, which we can call capital B, times u, minus a vector, which we can call lowercase b, transposed times u, plus some constant that we are not interested in.

We are never interested in such constants, because they only shift the function's value; they do not change the minimizer itself.

Now by choosing capital B to be A transposed A plus lambda L transposed L, and lowercase b to be A transposed f, we recover exactly the general form from above, and this matrix B is indeed symmetric, and it is positive definite if either A or L has full rank; but that is a minor point that we are not that interested in here.
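The expansion described above, written out step by step:

\[
\begin{aligned}
\Phi(u) &= \tfrac{1}{2}\,\lVert A u - f \rVert_2^2 + \tfrac{\lambda}{2}\,\lVert L u \rVert_2^2 \\
        &= \tfrac{1}{2}\, u^{\mathsf T} A^{\mathsf T} A u \;-\; u^{\mathsf T} A^{\mathsf T} f \;+\; \tfrac{1}{2}\, f^{\mathsf T} f \;+\; \tfrac{\lambda}{2}\, u^{\mathsf T} L^{\mathsf T} L u \\
        &= \tfrac{1}{2}\, u^{\mathsf T} \bigl( A^{\mathsf T} A + \lambda L^{\mathsf T} L \bigr) u \;-\; \bigl( A^{\mathsf T} f \bigr)^{\mathsf T} u \;+\; \tfrac{1}{2}\,\lVert f \rVert_2^2 ,
\end{aligned}
\]

so \( B = A^{\mathsf T} A + \lambda L^{\mathsf T} L \), \( b = A^{\mathsf T} f \), and the constant \( \tfrac{1}{2}\lVert f \rVert_2^2 \) is dropped.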

Okay, so this is the first setting that we are interested in, minimizing a quadratic, and the second application is solving a linear equation of the form capital B times u equals lowercase b.

In the application to inverse problems this is the normal equation: if you substitute again the specific capital B and the specific lowercase b from above, you can see that this is exactly the normal equation of the generalized Tikhonov regularized reconstruction.

We saw before that these are equivalent: we can either compute the minimizer of the Tikhonov functional or solve the normal equation, and we get exactly the same reconstruction.

In the broader context of quadratic optimization, finding the minimizer of the quadratic functional is equivalent to finding the solution of this linear equation.

This is basically due to the fact that the gradient of phi is just capital B times u minus lowercase b.

Maybe I can write this down: the gradient of phi of u is B u minus b, where we use that B is symmetric. As you can see, a minimizer of this function has to be a root, so a zero, of the gradient, so B times the minimizer has to be equal to lowercase b.

Okay, so solving this linear equation is equivalent to solving this quadratic optimization problem.
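To make this equivalence concrete, here is a minimal numerical sketch, not taken from the lecture: it uses NumPy with a small random A and f, the identity as L, and an arbitrary lambda, all chosen purely for illustration. It forms B = A^T A + lambda L^T L and b = A^T f, checks that the gradient of phi vanishes at the solution of B u = b, and runs a few plain gradient descent steps (the iterative method named in the lecture title) to show that they approach the same point.

import numpy as np

rng = np.random.default_rng(0)

# Small illustrative sizes and data (assumed here, not from the lecture).
m, n = 30, 20
A = rng.standard_normal((m, n))      # forward operator
L = np.eye(n)                        # regularization operator (identity, for simplicity)
f = rng.standard_normal(m)           # measured data
lam = 0.1                            # regularization parameter lambda

# B = A^T A + lambda * L^T L is symmetric positive definite, b = A^T f.
B = A.T @ A + lam * (L.T @ L)
b = A.T @ f

def phi(u):
    # Phi(u) = 1/2 ||A u - f||^2 + lambda/2 ||L u||^2
    return 0.5 * np.sum((A @ u - f) ** 2) + 0.5 * lam * np.sum((L @ u) ** 2)

def grad_phi(u):
    # Gradient of Phi: B u - b
    return B @ u - b

# Solving the normal equation B u = b gives the minimizer directly ...
u_star = np.linalg.solve(B, b)
print("gradient norm at u_star:", np.linalg.norm(grad_phi(u_star)))   # ~ 0

# ... and plain gradient descent with a fixed step size tends to the same point.
alpha = 1.0 / np.linalg.norm(B, 2)   # step size below 2 / (largest eigenvalue of B)
u = np.zeros(n)
for _ in range(500):
    u = u - alpha * grad_phi(u)
print("distance to u_star after 500 steps:", np.linalg.norm(u - u_star))
print("Phi(u) - Phi(u_star):", phi(u) - phi(u_star))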

Part of a video series
Access: Open Access
Duration: 00:28:32 min
Recording date: 2021-12-10
Uploaded on: 2021-12-10 14:46:04
Language: en-US
