Lecture 01.3 - Elementary Inverse Problem

Hi. The next example is the most elementary one I can think of. We consider the following forward problem, and I am going to call it a forward problem right away: the forward problem is the evaluation of a linear equation, y = Ax + epsilon. Here x is a parameter (it will be the unknown later, but in the forward problem the parameter is always fixed and we compute the data), y is the data, and epsilon is some measurement error. Now, A is a diagonal matrix with sigma_1, sigma_2, ..., sigma_n on the diagonal and zeros in the off-diagonal elements, and we assume that sigma_1 > sigma_2 > ... > sigma_n, with sigma_n almost zero. That is the image you should have in mind: sigma_1 might be quite large, maybe a thousand or so, sigma_2 maybe a hundred, and the later entries are quite small; they almost converge to zero, whatever that means. This turns the matrix-vector multiplication into something much simpler: y_1 is just sigma_1 x_1 + epsilon_1, and so on, down to y_n = sigma_n x_n + epsilon_n. This is the direct problem, and it is how we should think of the data-generating process. So x will be hidden later, but the hidden process generating the data is very simple: we take the hidden parameter x_i, multiply it by sigma_i, add some noise epsilon_i that is not under our control, and that becomes our data y_i.
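Written out, the forward model just described is (a minimal LaTeX sketch of the spoken equations):

```latex
% Forward problem: data y generated from parameter x by a diagonal operator A
\[
  y = A x + \varepsilon, \qquad
  A = \operatorname{diag}(\sigma_1, \sigma_2, \dots, \sigma_n), \qquad
  \sigma_1 > \sigma_2 > \dots > \sigma_n \approx 0,
\]
\[
  y_i = \sigma_i x_i + \varepsilon_i, \qquad i = 1, \dots, n.
\]
```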

You can see the problem right away: if sigma_n is roughly zero, then y_n is largely dominated by the noise term epsilon_n. We assume that the noise has the same magnitude on all those channels, on all those dimensions; that is, epsilon_1 and epsilon_n are different numbers, but they are roughly of the same order of magnitude. That means that if sigma_n is quite small, a lot of information is lost in the multiplication sigma_n times x_n: the product is roughly zero, and y_n is then just a noise term. That is the standard difficulty of inverse problems: the data-generating process has dimensions, in this case the later ones, that are obscured by the noise. The first dimension may still be fine: if sigma_1 is a thousand and x_1 and epsilon_1 are both one, then y_1 is one thousand and one, and only about 0.1% of the data is noise. But in these later, high-frequency dimensions, so to speak, the noise dominates the actual data.
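To make the numbers concrete, here is the worked arithmetic for the two extreme dimensions; the value sigma_n = 0.001 is an illustrative assumption, the lecture only says sigma_n is almost zero:

```latex
% Large singular value: the signal dominates
\[
  y_1 = \sigma_1 x_1 + \varepsilon_1 = 1000 \cdot 1 + 1 = 1001
  \quad\text{(noise is only } \approx 0.1\% \text{ of the data)},
\]
% Tiny singular value (assumed 0.001): the noise dominates
\[
  y_n = \sigma_n x_n + \varepsilon_n = 0.001 \cdot 1 + 1 = 1.001 \approx \varepsilon_n .
\]
```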

Now, the inverse problem is to recover x from y, and this is a very easy problem, or at least it should be: just apply A^{-1}, the inverse of the forward operator. What happens then? Let us call the result x^*; that is the reconstruction of x, defined as x^* = A^{-1} y. Let us take a few more steps: A^{-1} is also a diagonal matrix, diag(1/sigma_1, ..., 1/sigma_n), and now we can see trouble arising, because sigma_n is roughly zero, which means that 1/sigma_n is a very large entry. Dimension by dimension, x_1^* is x_1 plus epsilon_1 divided by sigma_1, and x_n^* is x_n plus epsilon_n divided by sigma_n, and you see the same problem again. The data itself is not the problem; it is generated in a quite straightforward way, the data is just the data, and there is nothing to do about that. But if we simply apply the inverse operator to the data, we may get something very different from the actual parameter, especially in those dimensions where the factor sigma_i is small, where the inverse operator becomes unstable.
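In symbols, the naive reconstruction and its componentwise form, following the definitions above:

```latex
% Naive inversion amplifies noise where sigma_i is small
\[
  x^{*} = A^{-1} y = A^{-1}(Ax + \varepsilon) = x + A^{-1}\varepsilon,
  \qquad
  A^{-1} = \operatorname{diag}\!\Bigl(\frac{1}{\sigma_1}, \dots, \frac{1}{\sigma_n}\Bigr),
\]
\[
  x_i^{*} = x_i + \frac{\varepsilon_i}{\sigma_i},
  \qquad\text{so for } \sigma_i \approx 0 \text{ the noise term } \frac{\varepsilon_i}{\sigma_i} \text{ blows up}.
\]
```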

Now, we saw this for the differentiation and integration operators as well: the forward problem, integration, smooths everything out. Exactly those dimensions that will be problematic later are flattened, pushed down; nothing bad happens in the forward direction. But the inverse operator amplifies the problematic dimensions, and any noise present in the data is amplified along with them. So if the noise is non-zero, the reconstruction of the parameter that we get from the data will be very bad in those dimensions. And that, essentially, is all there is to know about the problematic structure of inverse problems: if the forward operator suppresses certain dimensions, then the inverse operator amplifies them, and the noise along with them.
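To see the whole effect numerically, here is a minimal Python sketch of this example; the concrete values of n, the sigma_i, the parameter x, and the noise level are illustrative assumptions, not values from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Diagonal forward operator with rapidly decaying entries,
# from sigma_1 = 1000 down to sigma_n = 0.001 (illustrative values).
n = 6
sigma = np.logspace(3, -3, n)
x = np.ones(n)                 # hidden "true" parameter

# Forward problem: y = A x + eps, with noise of the same
# order of magnitude in every channel.
eps = rng.normal(scale=1.0, size=n)
y = sigma * x + eps            # A is diagonal, so A x is elementwise

# Naive inversion: x* = A^{-1} y = x + eps / sigma.
x_star = y / sigma

for s, xs in zip(sigma, x_star):
    print(f"sigma = {s:10.3g}   x* = {xs:12.5g}   error = {xs - 1:12.5g}")
# The reconstruction error in each dimension is eps_i / sigma_i:
# negligible where sigma_i is large, enormous where sigma_i is
# nearly zero -- exactly the instability described above.
```

With this setup, the first components of x^* sit close to 1 while the last ones are off by orders of magnitude, even though the noise has the same magnitude in every channel.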

Part of a video series:
Accessible via: Open access
Duration: 00:08:04 min
Recording date: 2021-10-19
Uploaded on: 2021-10-19 22:46:43
Language: en-US
