12 - Bayesian Inverse Problems


Hi. Today's topic is Bayesian inverse problems.

While Bayesian statistics is not everything we will encounter in this course, it is a good framework in which to develop notions like the MAP estimator and the maximum likelihood estimator, and in which we can motivate the sampling algorithms that will come later.

Now, for Bayesian inverse problems we will start by fixing the notation, so it might help to download the slides and keep the notation slide ready somewhere, say, next to your video.

We will think about a parameter x, which is thought of as unknown and something we want to infer. This parameter has some density ρ_x, so x is distributed according to ρ_x, and this ρ_x is our prior density. We call it a prior density because it encodes our a priori belief about x.

So we have some assumptions on x. If we know almost nothing at all about x, then the prior will be very broad, like a very flat Gaussian distribution, or maybe a so-called improper prior, for example a completely uniform Lebesgue density on the whole space. That is a special case we won't consider here. So either we know something about x, and this is modeled by the density ρ_x, or we know almost nothing, and then the prior is a very broad distribution.

We will talk both about the density ρ_x and about the measure μ_0, so keep those apart: μ_0(A) means you integrate the density ρ_x over the set A. The two are of course the same thing, but sometimes it is more fitting to think about densities and sometimes it is easier to think about the measure μ_0. Both the density and the measure are called the prior, and this is our modeling assumption on x.
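In symbols, the relation just described between the prior measure and the prior density is

```latex
\mu_0(A) \;=\; \int_A \rho_x(x)\,\mathrm{d}x \qquad \text{for measurable sets } A.
```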

So this is again just a more sophisticated way of writing what I just explained: we write x ∼ ρ_x, and the same thing as x ∼ μ_0, but of course one thing is a function, or to be more exact a pdf, meaning it is non-negative, it is normalized to one, and so on, while μ_0 is a probability measure.

The second random variable we will be looking at is ε. This ε will take on the meaning of measurement noise: we will have some kind of measurement process later, and ε is what disturbs this measurement. This ε also has a noise density, ρ_ε.

For example, a good noise model is often a Gaussian measure with mean zero; usually this is what ρ_ε looks like. Almost all physical measurement noise phenomena are in some sense well approximated by a centered Gaussian. There are some counterexamples to that, but they are few.
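As a concrete formula (a standard choice, not taken from the slides): a centered Gaussian noise density in d dimensions with isotropic covariance σ²I would read

```latex
\rho_\varepsilon(\varepsilon) = (2\pi\sigma^2)^{-d/2} \exp\!\left(-\frac{\lVert\varepsilon\rVert^2}{2\sigma^2}\right), \qquad \varepsilon \in \mathbb{R}^d.
```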

Now, the actual Bayesian inverse problem is this thing here. Recall that x is the unknown, we don't know it, and of course the measurement noise is also unknown, so there are two things that we do not know. The unknown parameter x is mapped by something that we can call the measurement operator, say G. This is something that takes the hidden parameter and does something with it; usually it reduces the dimension. There could be some physical dynamical system behind it, it could be a neural network, it could be almost anything. So G is some kind of map that takes the unknown parameter x and maps it into some other space, and the result is then perturbed by the additive measurement noise ε. This constitutes the random variable y = G(x) + ε, which we call the observation.
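As a minimal sketch of this setup, assuming a hypothetical linear measurement operator G that reduces dimension, a broad Gaussian prior, and centered Gaussian noise (all concrete numbers here are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

d_x, d_y = 10, 3      # parameter and observation dimensions (illustrative)
sigma_prior = 2.0     # broad centered Gaussian prior: our a priori belief about x
sigma_noise = 0.1     # centered Gaussian measurement noise

A = rng.standard_normal((d_y, d_x))  # hypothetical linear measurement operator

def G(x):
    """Measurement operator: maps the hidden parameter into observation space."""
    return A @ x

# One realization of the Bayesian inverse problem:
x_true = sigma_prior * rng.standard_normal(d_x)  # x ~ mu_0 (the prior)
eps = sigma_noise * rng.standard_normal(d_y)     # eps ~ rho_eps (the noise)
y = G(x_true) + eps                              # the observation we are given
```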

Now, y is called the observation because it is observable: we assume that we are given the numerical value of y. So we get y, and what we would like to find out, of course, is how this should change our belief about x. In the language of Bayesian statistics: starting with the prior distribution μ_0 on x, we would like to use the data and obtain the conditional, or posterior, distribution on x, given that the observable takes on the numerical value y.

Of course, this is just an application of Bayes' theorem: we take the prior on x, use the data, and incorporate both into the posterior distribution.
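Spelled out for the additive-noise model above (the standard form for this setting, with G and ρ_ε as introduced before), Bayes' theorem gives

```latex
\rho^y(x)
\;=\; \frac{\rho_\varepsilon\bigl(y - G(x)\bigr)\,\rho_x(x)}
           {\int \rho_\varepsilon\bigl(y - G(x')\bigr)\,\rho_x(x')\,\mathrm{d}x'}
\;\propto\; \rho_\varepsilon\bigl(y - G(x)\bigr)\,\rho_x(x),
```

that is, posterior ∝ likelihood × prior.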

And just another piece of notation: we write μ^y for this conditional, or posterior, measure on x. The small y just says that it depends on the actual value of y, of course.
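Continuing the hypothetical sketch from above (reusing G, y, sigma_prior, sigma_noise, and d_x from that block), evaluating this posterior up to its normalization constant and computing the MAP estimator mentioned at the start could look like this under the illustrative Gaussian assumptions:

```python
from scipy.optimize import minimize

def log_posterior_unnorm(x, y):
    """log rho_eps(y - G(x)) + log rho_x(x), up to an additive constant."""
    log_likelihood = -np.sum((y - G(x)) ** 2) / (2 * sigma_noise**2)
    log_prior = -np.sum(x**2) / (2 * sigma_prior**2)
    return log_likelihood + log_prior

# MAP estimator: the maximizer of the posterior density rho^y
result = minimize(lambda x: -log_posterior_unnorm(x, y), x0=np.zeros(d_x))
x_map = result.x
```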
