14 - Obtaining samples directly

Hi. We saw that it would be nice to be able to obtain samples from some kind of measure, because then we can compute all kinds of interesting integrals, like the conditional mean, the conditional variance, and other things. So the next question is obviously: how do we get samples from such a measure?

We will start with the elementary methods first; then we will switch to Monte Carlo methods, which are more advanced. In particular, there are some distributions from which we can sample directly just by their form. This is the first thing we will look at. In some cases, we can also sample from other distributions if we can write down the inverse function of their CDF.

That's the second thing we will see now. Now, the first thing seems completely trivial, and it kind of is, but we still need to talk about it for a few minutes: how can we compute samples which are uniformly distributed on the unit interval [0, 1]?

So how do we pick a random number on [0, 1]? We can't just pick 0.4 every time; that's obviously not a random number. So how do we get a random number on the interval? It is impossible to generate truly random numbers with a computer, because a computer is always deterministic and simply cannot do anything random. But we can give the computer algorithms which behave very erratically. Those erratically behaving algorithms are called ergodic mappings. The easiest ergodic mapping that I can think of is the following: we take a number, multiply it by two, and cut off the integer part. So, for example, 0.452 is mapped to 0.904. This is again mapped to 1.808, but we drop the 1 because of the mod 1, so we get 0.808. This succession of numbers is an ergodic mapping, which in this context just means it is very chaotic. And if we plot a few iterations from some arbitrary starting value, we see that it jumps up and down; there is no real shape that we can see. That is something that behaves randomly enough.
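To make the doubling map concrete, here is a minimal Python sketch of the iteration x_{n+1} = 2 x_n mod 1 (the helper name doubling_map is mine, not from the lecture):

```python
def doubling_map(x):
    """One step of the ergodic doubling map: x -> 2x mod 1."""
    return (2.0 * x) % 1.0

x = 0.452
for n in range(10):
    print(n, x)
    x = doubling_map(x)  # 0.452 -> 0.904 -> 0.808 -> 0.616 -> ...
```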

Of course, the Python routine for uniform random numbers does something more complicated, something more sophisticated, and this specific map will not work in practice. The reason is that multiplying by two quickly shifts all the digits in the binary representation of your variable, as it is stored in your machine, out of the mantissa, and at some point the iteration just stays at 0. That is something you can observe numerically. So this is not a good choice, not because it's a bad function, but because your computer can't store numbers with infinite precision.
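You can observe this collapse directly: a double-precision float has only about 53 significant bits, and each doubling shifts one bit out of the fractional part, so the iteration hits exactly 0.0 after a few dozen steps. A quick check (a sketch, using the same map as above):

```python
x = 0.452
steps = 0
while x != 0.0:
    x = (2.0 * x) % 1.0  # each step discards one leading bit of the fraction
    steps += 1
print(f"iteration hit exactly 0.0 after {steps} steps")  # a few dozen in double precision
```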

Regarding the starting point for this iteration, you can, for example, pick something that is not easily reproduced, such as the current system time in milliseconds. If you rerun the algorithm at another time, you will usually not hit the same number of milliseconds, so you start with something else, run a few iterations, and get a different number. A "random" number: it's not random, but it looks random.
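As a toy version of that seeding idea (a sketch only, not how Python's random module actually seeds itself), one could derive the starting value from the current time:

```python
import time

# Hypothetical seeding: derive x0 in (0, 1) from the current time in nanoseconds.
x0 = (time.time_ns() % 1_000_000) / 1_000_000.0
x = x0 if x0 != 0.0 else 0.5   # avoid the fixed point at 0
for _ in range(20):            # burn in a few iterations of the doubling map
    x = (2.0 * x) % 1.0
print("quasi-random sample:", x)
```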

So that's how you sample uniformly on [0, 1] in a quasi-random way. If you want true randomness, you need a quantum system that can generate truly random numbers. Okay, so that's the easiest thing you can think of. You can also sample uniformly on other bounded intervals: if you want to sample on [-1, 1], you just stretch and shift those samples so that they fit in the bigger interval. Okay, that's really easy.
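That stretching is just an affine transformation: if u is uniform on [0, 1], then a + (b - a)u is uniform on [a, b]. A minimal sketch (the helper name uniform_on is mine):

```python
import random

def uniform_on(a, b):
    """Map a U(0, 1) sample affinely onto [a, b]."""
    u = random.random()        # uniform on [0, 1)
    return a + (b - a) * u

samples = [uniform_on(-1.0, 1.0) for _ in range(5)]
print(samples)
```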

Now, Gaussian random numbers are a bit more tricky, but not much more. There is an algorithm called the Box-Muller algorithm. It works like this: you pick two random numbers, u1 and u2, which have to be uniformly distributed on [-1, 1], and then you define two numbers, z1 and z2, of the form shown on the slide, and those z1 and z2 will be iid Gaussian random numbers. So why does that work? Well, it's some kind of transformation magic; it's not overly complicated.
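The slide with the exact formula is not reproduced in the transcript. Since the inputs are described as uniform on [-1, 1], the slide may show the polar (Marsaglia) variant of the Box-Muller transform, sketched below; the classic form instead draws u1, u2 uniformly from (0, 1) and sets z1 = sqrt(-2 ln u1) cos(2*pi*u2), z2 = sqrt(-2 ln u1) sin(2*pi*u2). The function name box_muller_polar is mine.

```python
import math
import random

def box_muller_polar():
    """Polar (Marsaglia) variant of the Box-Muller transform.

    Draws (u1, u2) uniformly on [-1, 1]^2, rejects points outside the
    unit disc, and transforms the accepted pair into two iid N(0, 1)
    samples.
    """
    while True:
        u1 = random.uniform(-1.0, 1.0)
        u2 = random.uniform(-1.0, 1.0)
        s = u1 * u1 + u2 * u2
        if 0.0 < s < 1.0:          # accept only points inside the unit disc
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u1 * factor, u2 * factor

z1, z2 = box_muller_polar()
print(z1, z2)
```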

And if you do that: if we simulate, say, 100,000 samples, so 100,000 u1s and u2s (the histogram only shows the u1s), and apply this transformation in pairs, then we get the samples shown here, and that is z1. You always get these in pairs, so you would need another histogram for u2 and another one for z2, but you can also just forget the second component; then you get random numbers in one dimension.

So that's how you get Gaussian random numbers, N(0, 1). And if you want something with mean mu and variance sigma squared, you do something similar to the uniform case: you just scale the samples by sigma and shift them by mu.
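As a sketch of that last step (reusing the box_muller_polar helper from the sketch above, which is my naming, not the lecture's):

```python
mu, sigma = 2.0, 0.5
z1, _ = box_muller_polar()   # z1 ~ N(0, 1)
x = mu + sigma * z1          # x ~ N(mu, sigma^2)
print(x)
```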
