Hi, it's time to talk about Monte Carlo integration, and we start by recalling what we discussed about Bayesian inverse problems.
So we assume that we have some unknown quantity x, a parameter we would like to infer, and this parameter is a priori distributed according to this prior measure mu0, which has density rho x.
So this is our symbol for density with respect to the Lebesgue measure.
We could also say that rho x is the Radon-Nikodym derivative of mu0 with respect to the Lebesgue
measure.
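Written out, this prior assumption reads as follows (a reconstruction of the slide's formula from the description; A stands for any measurable set):

\[
x \sim \mu_0, \qquad \mu_0(A) = \int_A \rho_x(x)\,\mathrm{d}x, \qquad \rho_x = \frac{\mathrm{d}\mu_0}{\mathrm{d}x}.
\]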
And there's some measurement noise epsilon, which is also distributed according to some
density which we call rho epsilon.
Now y is given by g of x plus epsilon, so this constitutes a measurement process where the unknown variable is mapped by some possibly complicated mapping g, and the result is then perturbed by the additive measurement noise epsilon.
And we saw that the distribution of the data given a fixed value of the parameter x is given by this likelihood function here, which is just the density of the measurement noise evaluated at y minus g of x.
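In formulas, the measurement model and the resulting likelihood read (again reconstructed from the description):

\[
y = g(x) + \varepsilon, \qquad \varepsilon \sim \rho_\varepsilon, \qquad \rho(y \mid x) = \rho_\varepsilon\big(y - g(x)\big).
\]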
Then we applied Bayes' theorem, which says that we can write down the posterior, the distribution of x given data y. We call this mu y in order to denote the dependence on the specific value of y.
This measure has the Lebesgue density rho of x given y, and that is given by some constant, the normalization, times the likelihood times the prior.
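As a formula, with c denoting the normalization constant, this is:

\[
\rho(x \mid y) = c\,\rho_\varepsilon\big(y - g(x)\big)\,\rho_x(x).
\]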
And we saw that when we wrote the prior density as d mu0 divided by dx, then we could move it to the other side, and we got that the Radon-Nikodym derivative of the posterior with respect to the prior is given by this normalization times the likelihood.
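That is, dividing both sides by the prior density:

\[
\frac{\mathrm{d}\mu^y}{\mathrm{d}\mu_0}(x) = c\,\rho_\varepsilon\big(y - g(x)\big).
\]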
And this c, there's a typo here: it should say 1 over c. 1 over c is the evidence, which is the density of y, obtained by marginalizing x out of the joint density of x and y.
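In symbols:

\[
\frac{1}{c} = \rho(y) = \int \rho_\varepsilon\big(y - g(x)\big)\,\rho_x(x)\,\mathrm{d}x.
\]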
So that's what we saw last time.
Now again, consider those two different ways of writing the posterior. Either we look at the posterior in terms of its Lebesgue density, which is this line. This line says we can evaluate the measure of a set A with respect to this mu y by integrating against the Lebesgue measure. So this is the usual Lebesgue integral, and the integrand is the density rho of x given y. Or we can get the same quantity by integrating not against the Lebesgue measure but against the prior, and then we have to change the integrand to the Radon-Nikodym derivative from before, the normalization times the likelihood.
So those are two ways of looking at the same quantity, two different kinds of integrals, and in some contexts it will be easier to think about this measure as having a Lebesgue density, and sometimes it will be easier to think about it as having a density with respect to the prior.
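Side by side, the two integrals are:

\[
\mu^y(A) = \int_A \rho(x \mid y)\,\mathrm{d}x = \int_A c\,\rho_\varepsilon\big(y - g(x)\big)\,\mathrm{d}\mu_0(x).
\]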
Okay, so now we're given a posterior distribution mu y; that is the correct result of combining our prior with the data. Let's say this gives us some posterior distribution like the one shown here, where the plot shows the level sets of the density of this posterior. It looks quite complicated: it has multiple peaks, it's skewed and warped, there are difficult shapes in here, there's kind of an isolated peak here; it's a really complicated distribution.
Here we can still plot it, so in two dimensions we can look at it and say, oh, there seems to be a really important region here and another important region there, there's kind of an X-shaped structure here, and so on. So we can do a kind of image analysis by eye, but in higher dimensions that is not possible.
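As a small illustration of this two-dimensional "image analysis" view, here is a minimal Python sketch that plots the level sets of a toy unnormalized density; the mixture-of-Gaussians density below is made up for illustration and is not the posterior from the lecture.

import numpy as np
import matplotlib.pyplot as plt

# Toy unnormalized 2D density with two bumps of different shape,
# standing in for a complicated multimodal posterior.
def unnormalized_density(x1, x2):
    bump1 = np.exp(-((x1 - 1.0)**2 + (x2 - 1.0)**2) / 0.5)
    bump2 = 0.6 * np.exp(-((x1 + 1.0)**2 + 2.0 * (x2 + 0.5)**2) / 0.3)
    return bump1 + bump2

# Evaluate on a grid and draw the level sets; this only works
# because the parameter space is two-dimensional.
grid = np.linspace(-3.0, 3.0, 200)
X1, X2 = np.meshgrid(grid, grid)
plt.contour(X1, X2, unnormalized_density(X1, X2), levels=10)
plt.xlabel("x1")
plt.ylabel("x2")
plt.title("Level sets of a toy 2D posterior density")
plt.show()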