20 - Correctness of Metropolis--Hastings [ID:15873]

We have seen a simulation which hopefully convinced you that Metropolis-Hastings seems to work: we looked at the density 2 times x on this interval, and then we used random-walk Metropolis-Hastings in order to sample from it.

But it's worth taking a closer look and trying to prove that this weird combination of using a proposal Markov chain and adding an accept/reject step on top of it actually does what we want, which is to construct a Markov chain with the correct invariant density.

It's a bit of work, but not too much, so we'll attempt it.

It might be worth taking a copy of this slide and putting it next to your screen, or somewhere else, so you can look up the notation.

This is the measure which we want to sample from; it has density rho.

The auxiliary Markov chain that we use, which might be a random-walk chain, so we just take the current position and move a bit to the left or right, is given by Q(x, A) and has density q(x, y).

The acceptance factor is defined by this quantity, the minimum of 1 and the ratio of those two products, and Metropolis-Hastings works like this.
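In symbols, reconstructing the slide's formula from the description here (so the exact notation is my guess, not a quote of the slide), the acceptance factor is presumably

```latex
\alpha(x, y) = \min\!\left( 1,\; \frac{\rho(y)\, q(y, x)}{\rho(x)\, q(x, y)} \right)
```

where the "two products" are rho(y) q(y, x) in the numerator and rho(x) q(x, y) in the denominator.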

First we employ the auxiliary Markov chain: we sample some y according to it.

This is just notation for sampling from this Markov chain given that we are at sample x right now. Then we draw a uniformly distributed number z between 0 and 1, which means that x prime will either be the proposal, if this number z is less than or equal to alpha, or it is x again if the opposite happens.

This x prime will be the new sample; that is how Metropolis-Hastings works.
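The algorithm just described can be sketched in a few lines of Python. This is a minimal illustration for the lecture's example target rho(x) = 2x on [0, 1] (the interval on which 2x integrates to 1); the step width and starting point are my own choices, not from the lecture.

```python
import random

def rho(x):
    # Target density: rho(x) = 2x on [0, 1] and zero outside,
    # the example from the lecture's earlier simulation.
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def mh_step(x, step=0.2):
    # One step of random-walk Metropolis-Hastings.
    # The uniform random walk is symmetric, q(x, y) = q(y, x),
    # so the acceptance factor reduces to min(1, rho(y) / rho(x)).
    y = x + random.uniform(-step, step)   # sample y from Q(x, .)
    ry = rho(y)
    if ry == 0.0:
        return x                          # proposals outside [0, 1] are always rejected
    alpha = min(1.0, ry / rho(x))         # acceptance factor alpha(x, y)
    z = random.random()                   # z uniform on [0, 1)
    return y if z <= alpha else x         # x' = y if accepted, x again if not

def sample(n, x0=0.5):
    # Run the chain for n steps and collect the samples.
    xs, x = [], x0
    for _ in range(n):
        x = mh_step(x)
        xs.append(x)
    return xs
```

If the chain really has invariant density 2x, the empirical mean over many steps should approach the expectation of that density, which is 2/3.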

We can interpret this as a Markov chain going from x to x prime.

This defines a transition from state x to state x prime.

This looks interesting because it is in some sense both finite state space and continuous state space: the proposal Markov chain usually is something continuous, but with positive probability we take the same point again, and that is a Dirac measure.

If we are at point x, then the new point x prime is a combination of two things. One is a density, which might be centered around x but does not need to be: this is q(x, ·). On top of that there is a Dirac measure at the current point, delta_x.

One step of this combined Markov chain, so proposal Markov chain plus accept/reject, gives us a Markov chain which has a continuous part and a singular part, a Dirac-measure part.

That complicates the formalism: if we try to do this completely correctly, the math will look complicated and will obfuscate the main idea, so I will try to find the right balance between mathematical rigor and something that we can understand intuitively.

The transition kernel of this combined Markov chain is then a sum of two things: with probability alpha(x, y) (on the slide this should read alpha, not a) we get the density q, and in addition there is a Dirac measure delta_x(dy).
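Written out, again reconstructing the slide from the spoken description (so the precise notation is my assumption), the kernel is presumably

```latex
p(x, \mathrm{d}y) \;=\; \alpha(x, y)\, q(x, y)\, \mathrm{d}y \;+\; r(x)\, \delta_x(\mathrm{d}y),
\qquad
r(x) \;=\; 1 - \int \alpha(x, z)\, q(x, z)\, \mathrm{d}z ,
```

with a continuous part alpha q and a Dirac part whose weight r(x) is the total probability of rejecting the proposal.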

The weight is not just alpha(x, y), because it has to take into account what this z does, so let me explain.

How could we get from x to x again, so when do we start with x and remain at x? This covers all the possible combinations of proposing some point and rejecting it, proposing another point and rejecting that, and so on. So staying at x is a combination of all possible proposals, each multiplied by the probability that we reject that proposal.

This is why we get this integral here: it's a Dirac, and the height, or rather the weighting, of this Dirac is the total probability that we reject the proposal.
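As a sanity check on this interpretation of the Dirac weight, here is a small sketch that estimates it by Monte Carlo. The target 2x on [0, 1] is the lecture's example, but the uniform random-walk proposal of width 0.2 and the evaluation point x = 0.5 are my own choices.

```python
import random

def rho(x):
    # Target density 2x on [0, 1], zero outside (the lecture's example).
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def rejection_weight(x, step=0.2, n=100_000):
    # Monte Carlo estimate of the Dirac weight
    #   r(x) = 1 - integral of alpha(x, z) q(x, z) dz,
    # computed by drawing proposals z from the random walk Q(x, .)
    # and averaging 1 - alpha(x, z), the probability of rejecting each one.
    total = 0.0
    for _ in range(n):
        z = x + random.uniform(-step, step)   # proposal from the random walk
        alpha = min(1.0, rho(z) / rho(x))     # symmetric proposal: q cancels
        total += 1.0 - alpha
    return total / n
```

For x = 0.5 and step 0.2 the integral can be done by hand: alpha = min(1, z/0.5) on (0.3, 0.7), so r(0.5) = 2.5 times the integral of (1 - 2z) from 0.3 to 0.5, which is 0.1, and the Monte Carlo estimate should land close to that.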

Part of a video series:

Accessible via: Open access

Duration: 00:14:21 min

Recording date: 2020-05-14

Uploaded: 2020-05-14 23:36:42

Language: en-US
