6 - Lecture_02_3_Common_Structure_IP [ID:36892]

To wrap things up, let's review what we have learned so far: what kinds of inverse problems we have gotten to know, and what the common structure of these inverse problems is.

So, very generally, a typical inverse problem that we will look at is an equation of the form F = K(u) + ε, where the noise term ε will not always be present. Here K is some operator from the parameter space X to the data space (or observation space) Y. It is a possibly nonlinear operator between X and Y, and these spaces could be just the real numbers, or R^d, or function spaces; we will summarize all these different examples as Banach spaces, that is, complete vector spaces with a norm. The element u is the unknown, which we call the parameter: it could be an image, a number, a vector, a function, almost anything. It is something unknown that is plugged into this (possibly nonlinear) operator, which we call the forward operator; then maybe some measurement noise happens on top, and we measure the result of applying K to u. This result is the data F, and we want to infer u from F. That is the basic goal, and we have seen many examples of this kind of situation. So u could be a patient's inner composition

and K could be the Radon transform, so that F is X-ray tomography (sinogram) data, and we would like to infer what the patient looks like inside from the sinogram. Or u could be a thermal conductivity, where we measure the indirect result of this conductivity pointwise on the resulting field and try to infer the whole conductivity from just a few noisy measurements. Or maybe K is a convolution operator, u is an image, and we are trying to find the deblurred (or deblurred and denoised) version of some blurry image, so this is an imaging application. All these examples have the same form: essentially, the unknown

parameter is mapped into some other space; the map could be nonlinear, it could be a compact operator between Banach spaces, and we only have the data, from which we want to infer the unknown parameter. So u could be a number, a vector, an image, a function, almost anything that can be plugged into this operator, and similarly F could be any of these things as well. The common structure of the inverse problems we are looking at is that they are hard to solve, and for several reasons. Maybe we have loss of information: if the dimensionality of Y is lower than the dimensionality of X, then the information we can get out of the data is less than the information we have to infer about u. This is a very easy way to lose information, of course.
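This loss of information can be illustrated with a tiny linear example; the sizes and the random operator below are purely illustrative, not from the lecture:

```python
import numpy as np

# Hypothetical sizes: K maps a 5-dimensional parameter to only 2 measurements,
# so dim(Y) < dim(X) and information about u is necessarily lost.
rng = np.random.default_rng(0)
K = rng.standard_normal((2, 5))
u_true = rng.standard_normal(5)
F = K @ u_true                      # noiseless data

# Any element of the null space of K can be added to u_true without changing
# the data, so u cannot be identified from F alone.
_, _, Vh = np.linalg.svd(K)
null_basis = Vh[2:].T               # basis of the 3-dimensional null space
u_other = u_true + null_basis @ np.array([1.0, -2.0, 0.5])

print(np.allclose(K @ u_true, K @ u_other))  # identical data ...
print(np.allclose(u_true, u_other))          # ... for different parameters
```

Two different parameters produce exactly the same data, so no amount of computation can distinguish them without extra assumptions.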

So maybe we have too few measurements. There could also be high measurement noise, and this noise could make it impossible to recover the original parameter. Then K could be smoothing: the convolution operator, for example, is a smoothing operator. This is usually due to compactness, so if the operator is compact, things will be problematic, and we will always need prior information on u in order to have a chance at solving the inverse problem. What could prior information be? For example, that u is not too rough. With an image we could assume that u will not be too noisy: if you want to do denoising, it is a good assumption to say that a given image is not the original one because it is too noisy, too wiggly. If you are trying to reduce the wiggliness and noise in an image, then the assumption you are working with is exactly that the true image is not too rough, or that u has a small norm; of course, the question you then have to ask yourself is which norm, what is a good norm to minimize. All of this is quite informal so far, but there is one common structure to it, captured in the following definition (Definition 1.1); now we can finally start doing some mathematics, and this will lead us to a good definition of hard or easy inverse problems. So let X and Y be Banach spaces; they could be Hilbert spaces, they could even be just the real numbers, but we will stay in this generality for now.
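The interplay of a smoothing forward operator, noise, and prior information can be sketched numerically. The following is only an illustration, not part of the lecture: K is a discretized Gaussian blur, the noise level and regularization parameter are arbitrary choices, and the prior "u has small norm" is implemented as classical Tikhonov regularization:

```python
import numpy as np

# Illustrative ill-posed deconvolution problem (all parameters are made up).
rng = np.random.default_rng(1)
n = 100
x = np.linspace(0.0, 1.0, n)

# K: discretized Gaussian blur, a smoothing forward operator with rapidly
# decaying singular values (the discrete analogue of a compact operator).
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.02**2))
K /= K.sum(axis=1, keepdims=True)

u_true = np.sin(2 * np.pi * x)                      # unknown parameter
F = K @ u_true + 1e-3 * rng.standard_normal(n)      # blurred, noisy data

# Naive inversion: the tiny singular values of K amplify the noise enormously.
u_naive = np.linalg.solve(K, F)

# Prior information "u has small norm": minimize ||K u - F||^2 + alpha ||u||^2.
alpha = 1e-4
u_reg = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ F)

print(np.linalg.norm(u_naive - u_true) > np.linalg.norm(u_reg - u_true))
```

Even a noise level of 0.1 percent destroys the naive reconstruction, while the regularized solution stays close to the true parameter; which norm to penalize, and how strongly, is exactly the question raised above.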

Part of a video series
Access: open access
Duration: 00:13:30 min
Recording date: 2021-10-20
Uploaded: 2021-10-20 16:56:38
Language: en-US
