2 - Pattern Recognition (PR) [ID:2369]

The following content has been provided by the University of Erlangen-Nürnberg, so let's continue with the text.

Welcome to the Tuesday morning lecture. Today we want to finalize the introduction to pattern recognition with the postulates, the basic assumptions that underlie all classifiers and all classification systems. After that we will talk about two important aspects of pattern recognition. The first is how we evaluate the performance of a classifier: when is a classifier good, and when is it not doing well? The second question we are going to consider is how we judge the performance of a classifier in terms of a few optimality criteria. One aspect we will have to consider is the so-called loss function that is associated with the correct and wrong decisions made by the classifier, and we will learn about a very important and fundamental theoretical result of pattern recognition, which shows that under certain circumstances the so-called Bayesian classifier is the optimal classifier. That is the program for today.

Now we will discuss the six postulates, the basic assumptions that underlie all the following discussion.
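As a preview of that optimality result: under a 0-1 loss, the Bayes rule simply picks the class with the largest posterior probability. A minimal numerical sketch, with made-up priors and class-conditional likelihoods for a two-class problem with one discrete feature (the details follow later in the lecture):

```python
import numpy as np

# Hypothetical two-class problem with one discrete feature x in {0, 1, 2}.
# The priors p(y) and likelihoods p(x | y) are made-up numbers for illustration.
priors = np.array([0.6, 0.4])                  # p(y=0), p(y=1)
likelihoods = np.array([[0.7, 0.20, 0.10],     # p(x | y=0)
                        [0.1, 0.35, 0.55]])    # p(x | y=1)

def bayes_decide(x):
    """Bayes rule under 0-1 loss: choose the class with maximal posterior.
    Maximizing p(y | x) is equivalent to maximizing p(x | y) * p(y)."""
    joint = likelihoods[:, x] * priors         # proportional to the posterior
    return int(np.argmax(joint))

for x in range(3):
    print(x, bayes_decide(x))
```

Here the classifier decides for class 0 only when x = 0, because for x = 1 and x = 2 the product of likelihood and prior is larger for class 1.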

What we are doing here is: for each pattern we compute a feature vector, and we have to map this feature vector to a class number, which we call y. We saw yesterday with the fish example that we can compute multiple, higher-dimensional features for each observation, and somehow we have to find a decision boundary that splits the feature space into the different classes.

I have the feeling, you know, our university is so attractive

and wonderful, students are squeezed in, fascinated by pattern recognition, and see their future in this field. And what's your name? His name is David. He's always nice to me, right? He knows how it works with the oral exam, and under all these circumstances you can optimize your prior to get an excellent grade. Talk to him.

So we have a few basic assumptions for the feature space, and

basically these postulates are very intuitive. I mean, if I say I want to characterize a picture by a feature vector, and I have two pictures showing the same object, I expect feature vectors that are close to each other, not scattered here and there in the feature space. That is one assumption: features belonging to the same class should be as close together as possible. Another postulate is that if we have two different classes, two different objects, the features should be as different as possible. This is also intuitively clear: if you design a classifier where the features of the same class are very different from each other, while features belonging to different classes are close to each other, then you did something wrong. On this level we will discuss these six postulates. Welcome, come in!
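These two postulates, compactness within a class and separation between classes, can be made quantitative. A minimal sketch, with made-up two-dimensional feature vectors for two classes, comparing the average within-class distance to the average between-class distance:

```python
import numpy as np
from itertools import combinations

# Made-up 2-D feature vectors for two classes (think of two kinds of fish).
class_a = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0]])
class_b = np.array([[3.0, 3.2], [3.1, 2.9], [2.8, 3.0]])

def mean_pairwise_distance(points):
    """Average Euclidean distance between all pairs within one class."""
    return np.mean([np.linalg.norm(p - q) for p, q in combinations(points, 2)])

def mean_cross_distance(xs, ys):
    """Average Euclidean distance between points of different classes."""
    return np.mean([np.linalg.norm(p - q) for p in xs for q in ys])

intra = (mean_pairwise_distance(class_a) + mean_pairwise_distance(class_b)) / 2
inter = mean_cross_distance(class_a, class_b)

# A well-designed feature space should satisfy intra < inter.
print(intra, inter)
```

With these numbers the within-class distances are small compared to the between-class distances, which is exactly what the two postulates demand.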

Unbelievable! This is not medical image processing, guys, this is pattern recognition.

So what we expect first is a representative sample of patterns: if we want to set up a classifier, we expect that for a certain problem we have enough training data. We expect, if we have to set up
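A minimal practical reading of this first postulate, with made-up labels: before training, check that every class occurs in the training sample with at least some minimum number of examples (the threshold here is an arbitrary choice for the sketch):

```python
from collections import Counter

# Made-up training labels; in a real task these come from annotated data.
train_labels = ["salmon", "sea_bass", "salmon", "salmon", "sea_bass", "salmon"]

counts = Counter(train_labels)
min_per_class = 2  # arbitrary threshold for this sketch

representative = all(n >= min_per_class for n in counts.values())
print(counts, representative)
```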

Part of a video series
Accessible via: Open Access
Duration: 01:29:45 min
Recording date: 2012-10-16
Uploaded on: 2012-10-17 07:41:07
Language: en-US
