7 - Pattern Analysis (formerly Pattern Recognition 2) (PA) [ID:382]

Welcome to the Monday morning session.

As usual, on Monday morning we will start by looking at our mind map to follow up on the storyline.

You know, it's important that you do not lose track of the context we are in.

We discuss many mathematical details for sure, but you have to keep in mind the big picture.

That's very important.

Otherwise, you will get lost in the whole story.

So pattern analysis is all about the a posteriori probability.

Why is the a posteriori probability that important for us?

Well, it's important for us because we know that if we make our decision based on the maximization of the a posteriori probability, we get the Bayesian classifier, and we know about the optimality of the Bayesian classifier.
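
For reference, a minimal sketch of this decision rule in formula form (the notation, with x as the feature vector and y as the class label, is an assumption):

```latex
% Bayesian classifier: decide for the class with the highest a posteriori probability.
% Under 0/1 loss this rule minimizes the probability of error.
\hat{y} = \arg\max_{y} \; P(y \mid x)
```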

So what we basically have to do is find a way to model the a posteriori probability, and we have seen different methods for that.

For instance, we talked a lot about logistic regression.

That's a method to model the a posteriori probability directly.

You remember the formula you have to keep in mind: this particular structure of the a posteriori probability.
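
The structure meant here is presumably the sigmoid form of the posterior in the two-class case; a sketch under that assumption, with theta denoting the parameter vector:

```latex
% Logistic regression models the posterior directly (two classes, y in {0, 1})
P(y = 1 \mid x) = \frac{1}{1 + \exp\!\left(-\theta^{T} x\right)},
\qquad
P(y = 0 \mid x) = 1 - P(y = 1 \mid x)
```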

And then we talked about the Gaussian classifier, which makes use of the decomposition of the a posteriori probability, saying that P of y given x is proportional to P of y times P of x given y, where P of x given y is the class-conditional density function, and that one is modeled by a Gaussian.
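
Written out, the decomposition and the Gaussian model for the class-conditional density read as follows (a sketch; mu_y and Sigma_y denote the class-wise mean and covariance, d the feature dimension):

```latex
% Posterior decomposed into prior times class-conditional density (Bayes' rule)
P(y \mid x) \;\propto\; P(y)\, p(x \mid y),
\qquad
p(x \mid y) = \mathcal{N}(x;\, \mu_y, \Sigma_y)
            = \frac{1}{\sqrt{(2\pi)^{d}\,\lvert \Sigma_y \rvert}}
              \exp\!\Big(-\tfrac{1}{2}\,(x - \mu_y)^{T} \Sigma_y^{-1} (x - \mu_y)\Big)
```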

And then we studied in detail the structure of the decision boundaries and we found out,

for instance, that the priors more or less imply an offset of our decision boundary.

It's just a translation.

We also found out that in the case of Gaussians, arbitrary Gaussians, we get quadratic decision

boundaries.

We also found out that if the classes share the same covariance matrix, the decision boundary

is a linear function.
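
These three observations can be read off the log-posterior ratio of two classes; a sketch of the standard argument (not necessarily the lecture's exact derivation), where c collects the constant terms coming from the means and the covariance determinants:

```latex
% Decision boundary between classes y_1 and y_2: log-posterior ratio equals zero
\log \frac{P(y_1 \mid x)}{P(y_2 \mid x)}
  = -\tfrac{1}{2}\, x^{T}\!\left(\Sigma_1^{-1} - \Sigma_2^{-1}\right) x
    \;+\; \left(\Sigma_1^{-1}\mu_1 - \Sigma_2^{-1}\mu_2\right)^{T} x
    \;+\; c \;+\; \log \frac{P(y_1)}{P(y_2)} \;=\; 0
```

The first term makes the boundary quadratic for arbitrary covariances and vanishes when Sigma_1 = Sigma_2, leaving a linear boundary; the prior ratio only enters the constant, so it merely translates the boundary.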

So we found many things out in this context.

And then we discussed feature transformations. We talked about PCA, the principal component analysis, and about LDA, with and without dimension reduction.
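
As a small illustration of these two transforms (a sketch using scikit-learn rather than lecture code; the toy data is made up):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy data: 100 samples, 5 features, 3 classes (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 3, size=100)

# PCA: unsupervised, keeps the directions of maximum variance (here: reduce to 2 dims)
X_pca = PCA(n_components=2).fit_transform(X)

# LDA: supervised, keeps directions that best separate the classes
# (at most n_classes - 1 = 2 components here)
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (100, 2) (100, 2)
```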

After that, and that was what we did last week, we looked into regression problems and linear regression, and we found a way to use least-squares approaches to obtain linear decision boundaries.

And what we have seen is that we can even find a closed-form solution to this problem.
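
The closed-form solution in question is the usual least-squares (normal-equation) solution; written with X as the matrix of stacked feature vectors and y as the target vector (an assumption about the notation):

```latex
% Least squares: minimize the squared residual; the minimizer has a closed form
\hat{\theta} = \arg\min_{\theta}\, \lVert X\theta - y \rVert_2^{2}
             = \left(X^{T} X\right)^{-1} X^{T} y
```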

And then we also introduced the so-called ridge regression, where we had an additional constraint in our optimization procedure telling us that the vector theta defining the linear decision boundary should have minimum length.

You remember that?
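
A minimal numerical sketch of both closed forms in plain NumPy (not lecture code; the data and the regularization weight lam are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # 50 samples, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

# Ordinary least squares: theta = (X^T X)^{-1} X^T y
theta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge regression: additionally penalize the squared length of theta with weight lam
lam = 1.0
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print(theta_ols)    # close to the true coefficients
print(theta_ridge)  # shrunk towards zero by the penalty
```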

Okay.

So, keeping in mind that the lecture has been running for two or three weeks now, we did quite a lot so far, right?

During the lecture, you always have the feeling, oh, it's quite plausible what the guy is telling

us.

It's not that much.

But at the end of the day, if you reconsider which topics we have covered, you will notice …

Accessible via: Open access
Duration: 01:25:19 min
Recording date: 2009-05-18
Uploaded: 2017-07-05 12:42:32
Language: en-US
Tags: Analyse Linear PA Regression Norms Norm Dependent Inner Product Unit Balls Least Squares Chebyshef Ridge Compressed Sensing Penalty Functions