Hi, our goal will be to introduce Markov chain Monte Carlo methods. But before we can do
this, we first need to talk about Markov chains and what they are. A Markov chain is very
simply a sequence of random variables that depend on each other in a certain way.
So they're not independent, but they depend on each other in the following way: even if
you are able to remember all the previous states, that is, if you know x1, x2 up to xn,
the probability of the next step is obtained by forgetting everything except the last step.
So we just need to remember the last step.
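Written as a formula, this is the standard Markov property:

```latex
% Markov property: conditioning on the whole past reduces to
% conditioning on the last state only
\[
  P(X_{n+1} = x \mid X_1 = x_1,\, X_2 = x_2,\, \ldots,\, X_n = x_n)
    = P(X_{n+1} = x \mid X_n = x_n)
\]
```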
The easiest way to think about that is a concrete example.
This game is called Snakes and Ladders, and in case you don't know it, it works like this.
You have your figurine somewhere on the board, then you throw a die, and depending on
what number the die shows, you move that many squares on the board; so you go 1, 2, 3, 4,
and then you're on square 61.
If you land on the head of a snake, you slide down and go back to the tail of the snake,
and if you land at the bottom of a ladder, you can climb up the ladder.
So that means if you are somewhere on the board, with probability 1/6 you land here,
with probability 1/6 you go there: there's a probability distribution over the possible
next steps, and it depends only on your current position; it doesn't depend at all on
how you got there.
So this could also be called a forgetful sequence; it's not completely forgetful,
because the last step matters, but the ones before that don't.
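As a minimal sketch, one move of such a chain can be simulated like this; the board size
and the snake/ladder positions below are invented for illustration, not the ones on the board shown here:

```python
import random

# Hypothetical snakes and ladders: entry square -> exit square.
JUMPS = {62: 19, 87: 24, 16: 6,   # snakes: head -> tail
         4: 14, 9: 31, 28: 84}    # ladders: bottom -> top

def step(position):
    """One move of the chain: the distribution of the next position
    depends only on the current position, not on the path taken."""
    position = min(position + random.randint(1, 6), 100)  # throw the die
    return JUMPS.get(position, position)  # slide down or climb up

pos = 1
for _ in range(10):
    pos = step(pos)
    print(pos)
```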
So what isn't a Markov chain?
Consider this urn model: you have an urn full of balls of different colors, and you pick
a ball, look at its color, write the color down, and then put the ball into another urn.
So the probability of the next ball's color depends on all the balls you have removed:
say you've already removed all the blue balls; then it's impossible to get another blue ball.
So that means you have to remember all the previous balls that you've removed in order
to work out whether there are any blue balls left.
You can't forget how you got here in this example.
So this is not a Markov chain: the sequence of colors of the balls drawn one after
another is not a Markov chain.
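For contrast, here is a minimal sketch of the urn model (the initial ball counts are
invented); because drawn balls never return, the distribution of the next color depends
on the entire history of draws:

```python
import random

# Hypothetical initial contents of the urn; the counts are invented.
urn = ["blue"] * 3 + ["red"] * 5 + ["green"] * 2

def draw(urn):
    """Remove a uniformly random ball and return its color."""
    return urn.pop(random.randrange(len(urn)))

history = []
while urn:
    history.append(draw(urn))
print(history)  # once the 3 blue balls have appeared, blue is impossible
```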
There are two different versions of Markov chains, and we have to think about both.
The first is a lot easier, and that's why we'll start with it: a Markov chain defined
on a finite state space.
That means the elements xi are elements of a finite set from 1 to k, so there is a
finite, fixed number of possible states the Markov chain can be in.
In that case you can summarize all the information that the Markov chain carries by the
transition probabilities of going from i to j.
You have to be slightly careful with this notation, because some authors swap those
indices; in this notation, you read from i to j.
So the probability of going from xn = i to xn+1 = j is given by the ij entry of a matrix P.
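As a sketch of the finite case (the entries of P below are invented): row i of the
matrix holds the probabilities of moving from state i to each state j, so every row sums
to one, and sampling the next state just means drawing from that row.

```python
import random

# Hypothetical 3-state transition matrix: P[i][j] = P(X_{n+1} = j | X_n = i).
# The entries are invented for illustration; each row sums to 1.
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def next_state(i):
    """Sample X_{n+1} given X_n = i by drawing from row i of P."""
    return random.choices(range(len(P)), weights=P[i])[0]

state = 0
for _ in range(10):
    state = next_state(state)
    print(state)
```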
The slightly more difficult case is when your sequence elements can be arbitrary points
within some set, say the whole of Rn.
There's a continuum of possibilities, so you can't think about the probability of x
being exactly at one point, because that's always zero; instead, you have to think about
this in terms of a density.
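One standard way to write this down is with a transition density p(x, y), so that the
probability of landing in a set A comes from integrating the density over A:

```latex
% Transition density for a continuous state space:
% the chance of moving from x into a set A is the integral of p(x, .) over A
\[
  P(X_{n+1} \in A \mid X_n = x) = \int_A p(x, y) \, dy
\]
```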