So I guess we can start now. Welcome, everybody. First, some news in case you were not in the Q&A session: Professor Maier has started to also write down what he says in his videos in a blog. So if you prefer reading over watching the videos, or if you would like a transcript of the lecture videos for exam preparation, you can access the posts on our website; the links are also posted on StudOn. This is just additional information: it is essentially a transcript of what was said in the videos, so if you prefer reading, you can find it in the blog. As Professor Maier said, there will always be a delay of maybe two weeks, because we need to prepare the blog posts and also the images for them, but everything should be there by the time you start your exam preparation. So this is just additional help from us.
Now, I think I shared the wrong desktop. Quick question: can you see my presentation, the pattern recognition theory exercises? Yes. I have also opened the chat; the problem on Wednesday was that I could not see the chat and the presentation at the same time, so I now have two monitors and should be able to see the chat as well. For the exercises, I would prefer that when you have a question, you simply ask it and interrupt me. But if you prefer not to speak up, you can also put your question in the chat, and I will try to stop every now and then and answer the questions. Okay, then I guess we can start.
Here is how I will do the exercises: more or less like last year. Back then we had blackboard exercises, so we could actually write, and what I will do now is try to solve the problems by deriving the equations and writing the answers by hand. This is why some of the slides are empty; we will fill them in during this session. But after the empty slides I have always put a printed version of the content that I am going to write, because my handwriting may be very bad, and also just to have a backup. That way you have a printed version that more or less matches the lecture, and also a version that you can use right now to take your own notes for understanding the content.
To start with, let's do a short recap of what we did two weeks ago in the theory exercises. The topic was the Bayes classifier. Conceptually, the Bayes classifier is very simple. Its decision rule just says: take the class that has the highest posterior probability. Our task is basically this: we get an x, which can be an arbitrary feature vector, for example (0.1, 2), and we have to determine y. So y is unknown, and how do we decide which y to take? We calculate the posterior for all the classes. Let's say we have two classes. We calculate the posterior for y being 0 given that x is this vector, and this yields a probability, for example 0.8. Then we calculate the posterior for the other class, which in this case must be lower, because the posteriors all have to add up to 1; the probability for the other class here is therefore 0.2. And what does the Bayes classifier do? It just takes the class that has the highest posterior probability. So in this case we take class 0, because class 0 has the higher probability, 80%, and 20% would be the probability of being wrong.
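To make this concrete, here is a minimal sketch of the decision rule in Python; the posterior values are just the placeholder numbers from the example above, not anything computed from real data:

```python
import numpy as np

# Posteriors p(y | x) for the two classes y = 0 and y = 1 at some fixed x.
# These are the placeholder values from the example; they must sum to 1.
posteriors = np.array([0.8, 0.2])

# Bayes decision rule: pick the class with the highest posterior probability.
y_hat = int(np.argmax(posteriors))

# The probability of being wrong is the remaining posterior mass.
p_error = 1.0 - posteriors[y_hat]

print(y_hat)    # 0
print(p_error)  # 0.2 (up to floating-point rounding)
```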
The Bayes classifier is the best classifier when you have the zero-one loss function, because it tries to minimize the probability of being wrong, and that fits exactly this loss function: no matter which class you get wrong, you always pay one when you are wrong. This leads to the decision rule of minimizing the probability of being wrong. So you take the Bayes classifier when all you care about is being right as often as possible.
Of course, this is not always the optimal decision, because in reality it is not always the case that being wrong about one class costs the same as being wrong about another. Usually, when you misclassify a disease, for example, being wrong in one direction is worse than being wrong in the other direction. For example, not diagnosing a deadly disease would be much worse than raising a false alarm for a healthy patient.
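This asymmetry is usually captured with a loss (cost) matrix, and the decision rule then minimizes the expected loss instead of maximizing the posterior. Here is a small sketch of that idea; the cost values are made up purely for illustration:

```python
import numpy as np

# Posteriors p(y | x) for y = 0 (healthy) and y = 1 (diseased); placeholder values.
posteriors = np.array([0.8, 0.2])

# Hypothetical loss matrix L[decision, true class]:
# missing a deadly disease costs far more than a false alarm.
loss = np.array([
    [0.0, 100.0],  # decide healthy: free if correct, very costly if the disease is missed
    [1.0,   0.0],  # decide diseased: small cost for a false alarm
])

# Expected loss (risk) of each possible decision, averaged over the posterior.
risk = loss @ posteriors

# Decide by minimizing the expected loss instead of maximizing the posterior.
y_hat = int(np.argmin(risk))

print(risk)   # [20.   0.8]
print(y_hat)  # 1: even though p(diseased | x) is only 0.2, the asymmetric costs flip the decision
```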