Okay, so good morning everybody.
Monday, 90 minutes, pattern analysis,
and before we continue with the topics
we are currently working on,
we will reconsider the big picture.
Now, I want to make sure that you don't get lost
in the forest of the topics we are covering.
We learned at the beginning that the Bayesian classifier
is basically the hook for all the chapters
we discuss in pattern analysis.
And in Bayesian decision theory, the a posteriori probability
plays a central role.
What we are considering is the a posteriori probability p(y | x),
saying: what is the probability of observing class y given a feature vector x?
And the decision rule basically is not that difficult:
compute the a posteriori probability for each class and decide for the class that maximizes it.
The engineering issue is how can we model the a posteriori probability
in terms of a statistical formula, for instance.
How can we compute the a posteriori probability from observations,
meaning: how can we train, how can we learn,
how can we estimate the degrees of freedom of our probabilistic model using the observations?
And then later on in the second part of the lecture,
we will also think about how can I evaluate the posterior probability efficiently.
That's something we haven't talked about so far, not in any way.
We have the a posteriori probability, we need to model it,
and once we have it, we use it for decision making.
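The pipeline just described, model the posterior, estimate the model's degrees of freedom from observations, then use it for decision making, can be sketched in a few lines. This is a minimal illustration, not the lecture's own method: it assumes 1-D Gaussian class-conditional densities, and all data and function names are hypothetical.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Density of a 1-D Gaussian at x."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def fit_class_models(features, labels):
    """Estimate the degrees of freedom from observations:
    prior p(y) and the (mean, variance) of p(x | y) per class."""
    models = {}
    for y in np.unique(labels):
        xs = features[labels == y]
        models[y] = {"prior": len(xs) / len(features),
                     "mean": xs.mean(),
                     "var": xs.var()}
    return models

def posterior(models, x):
    """p(y | x) via Bayes' rule: p(x|y) p(y) / sum over y' of p(x|y') p(y')."""
    joint = {y: m["prior"] * gaussian_pdf(x, m["mean"], m["var"])
             for y, m in models.items()}
    evidence = sum(joint.values())
    return {y: j / evidence for y, j in joint.items()}

def decide(models, x):
    """Bayesian decision rule: pick the class maximizing p(y | x)."""
    post = posterior(models, x)
    return max(post, key=post.get)

# Hypothetical toy observations: class 0 around -2, class 1 around +2.
rng = np.random.default_rng(0)
features = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
labels = np.concatenate([np.zeros(100, int), np.ones(100, int)])

models = fit_class_models(features, labels)
print(decide(models, -1.5))  # → 0
print(decide(models, 1.5))   # → 1
```

Note that `posterior` is the expensive part in general: the evidence sums over all classes, which is exactly the efficiency concern raised above for large vocabularies.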
Later on we will talk about speech recognition systems.
I mean, can you imagine using your BlackBerry, saying a name
you want to call, and it takes five hours to come up with a decision
because the complexity of evaluating the posterior probabilities is that high?
Completely unacceptable.
So we also have to talk about efficiency later on
and the efficient evaluation of this probability.
So if you have the mind map in mind: pattern analysis,
at least as I teach it here, is basically the modeling of,
and the dealing with, the a posteriori probabilities.
And at the beginning we reconsidered basic facts
that most of you should know from the winter semester lecture,
and for those of you who came into the pattern recognition community this semester,
we briefly showed that in the presence of a zero-one loss function, or cost function,
the decision process based on the a posteriori probability
is optimal for this particular cost function.
So we talked about the optimality of the Bayesian classifier.
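As a reminder, that optimality argument is short. With classes y, observation x, and the zero-one loss L(y, y') that is 0 for y = y' and 1 otherwise, the expected loss (risk) of deciding for y is:

```latex
\[
R(y \mid x) \;=\; \sum_{y'} L(y, y')\, p(y' \mid x)
\;=\; \sum_{y' \neq y} p(y' \mid x)
\;=\; 1 - p(y \mid x),
\]
so minimizing the expected zero-one loss is the same as maximizing the posterior:
\[
\hat{y} \;=\; \arg\min_{y} R(y \mid x) \;=\; \arg\max_{y} p(y \mid x),
\]
\]which is exactly the Bayesian decision rule.
\]
```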
My handwriting is extremely poor.
That's also something that students always complain about when they fill out the evaluation form.
So I have to live with that fact.
Okay.
Then we looked into how we can model the a posteriori probability.
[Recording metadata: open access; recorded 2009-05-11; language en-US]