9 - Ex06 Gaussian Mixture Model and EM algorithm

Okay, welcome everybody.

Today we are going to solve the exercises on the topic of Gaussian mixture models and the expectation-maximization (EM) algorithm.

Before I start, these are the links for the evaluation of the course.

I am going to upload the slides, so you will have access to them later.

In the lecture evaluation you will be rating Professor Meyer, and in the exercise evaluation you will be rating Stefan, Sarah, Sonia, and myself.

I will also show the links again at the end of the session.

Okay, so let's start with the exercises.

Before the first exercise: if you have questions, please feel free to interrupt, because it is difficult for me to watch the screen and read the chat at the same time.

So please use the microphone.

You can also write your question in the chat, but I will not reply immediately, because I will not see it right away.

Okay, so the first question is: explain the difference between maximum likelihood estimation, maximum a posteriori estimation, and estimation using the expectation-maximization algorithm, and name some examples for each of the methods.

Before going into the differences, I want to give a bit of context.

In parameter estimation, we want to find the set of parameters of the distribution of an observed random variable.

If we assume that our variable is normally distributed, then we want to estimate the parameters of a Gaussian, or normal, distribution.

In that case we have to find the mean, and the standard deviation in the univariate case or the covariance matrix in the multivariate case.

Okay, but what are the differences?

We can use all three methods to estimate these parameters.

Let's review the first one, maximum likelihood estimation.

Here, all observations are assumed to be mutually statistically independent.

The observations are kept fixed, and the log-likelihood function is optimized with respect to the parameters.

That means the parameter estimates are the values that maximize this probability.
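Written out (a sketch in generic notation, with observations x_1, ..., x_N and parameter vector theta), the independence assumption turns the log-likelihood into a sum:

\hat{\theta}_{\mathrm{ML}} = \arg\max_{\theta} \log p(x_1, \ldots, x_N \mid \theta) = \arg\max_{\theta} \sum_{i=1}^{N} \log p(x_i \mid \theta)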

In this example, we are trying to estimate the parameters of a normally distributed variable.

When you substitute the density of the Gaussian distribution into this term, then to maximize it you take the derivatives with respect to the mean and with respect to the covariance matrix, set them to zero, and you end up with these estimates.
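For the multivariate Gaussian this yields the familiar sample estimates (stating the standard result; the slides carry out the derivation in full):

\hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad \hat{\Sigma} = \frac{1}{N} \sum_{i=1}^{N} (x_i - \hat{\mu})(x_i - \hat{\mu})^{\top}

Note that the ML covariance estimate divides by N, not by N - 1.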

This is the example that was given on the slides of the course.

The derivation of these estimates can be found on pages five to eight of the course slides.
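As a quick numerical check (a minimal sketch, not part of the exercise sheet; it assumes numpy is available, and all names are illustrative), we can draw samples from a known Gaussian and recover the ML estimates:

import numpy as np

# Minimal sketch: sample from a known 2-D Gaussian and recover the
# ML estimates of its parameters.
rng = np.random.default_rng(0)
true_mean = np.array([1.0, -2.0])
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
x = rng.multivariate_normal(true_mean, true_cov, size=10_000)

mu_hat = x.mean(axis=0)                     # sample mean
centered = x - mu_hat
sigma_hat = centered.T @ centered / len(x)  # ML covariance: divide by N, not N - 1

print(mu_hat)     # should be close to true_mean
print(sigma_hat)  # should be close to true_cov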

So much for maximum likelihood estimation; next we have maximum a posteriori (MAP) estimation.

In MAP estimation we maximize the posterior instead, and by Bayes' rule this posterior can be expressed as the product of the likelihood and the prior, divided by the evidence.
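In symbols (again a sketch; the evidence p(x) does not depend on theta and can therefore be dropped from the maximization):

\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} p(\theta \mid x) = \arg\max_{\theta} \frac{p(x \mid \theta)\, p(\theta)}{p(x)} = \arg\max_{\theta} p(x \mid \theta)\, p(\theta)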

In the example, we are only estimating the mean; we assume that the variance of the random variable is known.

Something important to notice here is that in this case we also know the distribution of the parameter itself that we want to estimate.

So we assume that the mean we want to estimate is itself normally distributed, with mean zero and standard deviation sigma_m.

And the random variable X is also normally distributed, with this unknown mean and the known variance.
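Putting this setup into symbols (a sketch of the standard conjugate Gaussian case, where sigma_m is the prior standard deviation and sigma is the known data standard deviation; the closed form below is the standard result for this setting):

\mu \sim \mathcal{N}(0, \sigma_m^2), \qquad x_i \mid \mu \sim \mathcal{N}(\mu, \sigma^2)

\hat{\mu}_{\mathrm{MAP}} = \frac{N \sigma_m^2}{N \sigma_m^2 + \sigma^2}\, \bar{x}, \qquad \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i

For a weak prior (large sigma_m) or many observations, the MAP estimate approaches the ML estimate, the sample mean.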
