48 - Lecture_11_2_Solving_TV [ID:39469]

Hi, last time we talked about how the conjugate gradient method can be used to solve quadratic

optimization problems, and in particular we can solve Tikhonov- and generalized-Tikhonov-regularized inverse problems, and we can in principle also apply this to more general

non-quadratic, but still smooth, minimization problems. But we have difficulties with the specific TV problem because of its non-differentiable term. So CG

is not that good on problems like this, and we will talk about a method for attacking this kind of problem. It's not a method that is applied to high-dimensional problems; even for moderately sized 2D images, it is not something we would use in practice,

but still it gives you a tool to try this out for yourself. If we were to cover the math necessary for proper iterative methods, this would take a few weeks more, and that's not the topic of this lecture. So if you are interested in these kinds of things, I would suggest taking a class, for example Mathematical Image Processing,

where you will definitely talk about topics like this. So the goal here is to find some method of finding the minimizer of the TV-regularized inverse problem of this type:

  phi(u) = ||A u - f||_2^2 + lambda ||L u||_1.
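To make this concrete, here is a minimal sketch of evaluating such an objective in Python. The forward operator A, the forward-difference matrix L, the data f, and lambda below are all illustrative choices, not the ones from the lecture notes:

```python
import numpy as np

n = 6
A = np.eye(n)                                  # forward operator (identity = denoising)
L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # forward differences: (L u)_i = u_{i+1} - u_i
f = np.array([0., 0., 1., 1., 0.2, 0.])        # some made-up data
lam = 0.5

def phi(u):
    """TV-regularized objective ||A u - f||_2^2 + lam * ||L u||_1."""
    return np.sum((A @ u - f) ** 2) + lam * np.sum(np.abs(L @ u))

phi(f)  # zero residual plus lam * ||L f||_1, here approximately 1.0
```

The `np.abs` inside the sum is exactly the non-differentiable part that makes CG-style methods struggle.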

We can call this phi of u, and L is a discretized differential operator, which I have written down at least twice in the lecture notes. And the continuous analog would be, if you are familiar with that, minimizing over functions u the term ||A u - f||_{L^2}^2 plus lambda times the so-called TV norm, that is, the norm of the gradient of u. So that's the continuous analog. For efficient algorithms, see a course on mathematical

image processing. We would need ideas from convex analysis and optimization

theory. And this takes some time, and I'm not willing to spend that time just for this specific model, because we are interested in these problems more generally. But still I want to give you one method that can, in principle, be used to solve this exactly; so we are not talking about an iterative algorithm, but we will convert this minimization problem into a form which can be solved directly by putting it into an optimization routine in, let's say, Python or Matlab. The idea is to enlarge the space on which we look for a minimizer: we are going to increase the dimensionality of this problem by a factor of three. And we present an algorithm which can be used for low-dimensional applications, just so you can use it in order to think about

some examples if you're interested in that. And the idea of this algorithm is to write this problematic term Lu (problematic because we are going to take absolute values of it), which is a vector in R^n, as a difference of two components, Lu = v^+ - v^-, where v^+ and v^- both lie in R_+^n, so these are vectors consisting only of nonnegative entries. An example: we can write the vector (1, -1, -2) as (1, 0, 0) - (0, 1, 2); if that were Lu, then (1, 0, 0) would be v^+ and (0, 1, 2) would be v^-. So v^+ is the positive part of this vector, and v^- is minus the negative

part. And we can always find this; it is easy to construct: the i-th entries are

  v^+_i = ( (Lu)_i + |(Lu)_i| ) / 2,
  v^-_i = - ( (Lu)_i - |(Lu)_i| ) / 2 = ( |(Lu)_i| - (Lu)_i ) / 2.

You can try this on the example above and check that it gives you what you want; note the minus sign in front of v^-_i, which is what makes it nonnegative. Okay, so, right, so the idea is to somehow come to terms with this object here,

and the one-norm ||Lu||_1 is a complicated thing, because it is the sum over all entries, sum_k |(Lu)_k|, with absolute values inside. This is problematic because it is non-differentiable, and we don't want to work with things like that. But the interesting thing now is: Lu can be written as v^+ - v^-, and these have disjoint supports, so wherever v^+ has a nonzero entry, v^- is zero there, and vice versa. So we can write the one-norm as the sum of the entries of v^+ plus the entries of v^-:

  ||Lu||_1 = sum_k ( v^+_k + v^-_k ).

So it's basically a linear, and in particular differentiable, function of v^+ and v^-.
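The whole construction can be tried out end to end. Below is a minimal sketch, assuming NumPy and SciPy are available: it first checks the positive/negative-part formulas on a small example, then solves the enlarged smooth problem over z = (u, v^+, v^-), with the linear constraint L u = v^+ - v^- and the sign constraints v^+, v^- >= 0, using a generic solver (SLSQP here); the concrete A, L, f, and lambda are again illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

n = 6
A = np.eye(n)                                  # denoising: forward operator is the identity
L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # forward differences, shape (n-1, n)
f = np.array([0., 0., 1., 1., 0., 0.2])
lam, m = 0.4, L.shape[0]

# positive/negative-part formulas from the lecture
w = L @ f
v_plus = (w + np.abs(w)) / 2
v_minus = (np.abs(w) - w) / 2
assert np.allclose(w, v_plus - v_minus)                        # Lu = v+ - v-
assert np.allclose(np.abs(w).sum(), (v_plus + v_minus).sum())  # ||Lu||_1 = sum(v+ + v-)

# enlarged smooth problem over z = (u, v+, v-) in R^(n + 2m):
# minimize ||A u - f||^2 + lam * sum(v+ + v-)   (linear in v+, v-, hence smooth)
def obj(z):
    u, vp, vm = z[:n], z[n:n + m], z[n + m:]
    return np.sum((A @ u - f) ** 2) + lam * np.sum(vp + vm)

C = np.hstack([L, -np.eye(m), np.eye(m)])      # equality constraint: L u - v+ + v- = 0
cons = {"type": "eq", "fun": lambda z: C @ z}
bounds = [(None, None)] * n + [(0.0, None)] * (2 * m)
res = minimize(obj, np.zeros(n + 2 * m), method="SLSQP",
               bounds=bounds, constraints=[cons])
u_tv = res.x[:n]                               # the TV-regularized reconstruction
```

At the minimizer, v^+ and v^- recover (up to solver tolerance) the positive and negative parts of L u, so `res.fun` is the value of the original TV objective. For realistic image sizes you would use a dedicated method instead; this is only meant for small experiments.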

Part of a video series
Accessible via: open access
Duration: 00:20:25 min
Recording date: 2021-12-16
Uploaded: 2021-12-16 21:46:03
Language: en-US
