5 - Pattern Recognition (PR) [ID:2431]

The following content has been provided by the University of Erlangen-Nürnberg.

The background is denoted, let's say, by a class label y = 0, and our foreground is denoted by y = 1. Furthermore, we know that there is a class-conditional probability density for the background and for the foreground, so for the two classes in our two-class classification problem: we have p(x | y = 0), and this is equal to a Gaussian with a mean vector and a covariance matrix. In our case the covariance matrix is equal to the 3×3 identity matrix times our sample variance. The same holds for class y = 1, but here we have a different mean vector, with the same variance in this case. So we know the class-conditional probability densities, and we know our priors: p(y = 0) is equal to p(y = 1), and both are equal to 0.5 in our case, because we don't have any kind of prior knowledge.

If we would like to perform an image segmentation with the Bayes classifier, we can apply the Bayes rule in this case: if we know all of these quantities, that is, the class-conditional probability densities and our priors, it is possible to use the Bayes rule. So our goal is to use the Bayes rule for classification, and the classification is a maximization of the posterior probability. Okay, so this was our problem in image segmentation: we have our quantities, and we used the Bayes rule to decide, for a given RGB pixel (x denotes an RGB pixel, i.e., a red, green, and blue pixel value), whether to classify it as a foreground or a background pixel. So this is our goal. It sounds good, but usually in practice we have the problem that we don't know these quantities, the class-conditional probabilities. Maybe we know that it is a Gaussian distribution, but very often even that is unknown, or the mean vector and the covariance matrix are in general unknown. Okay, and if we don't know these quantities, it is not possible to apply the Bayes rule, for example for classification. So our problem in practice is how to find estimates for our mean vectors and our covariance matrices.
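To make the decision rule concrete, here is a minimal sketch of a Bayes classifier for one RGB pixel with fully known parameters. The mean vectors, the variance, and the pixel values below are illustrative assumptions, not numbers from the lecture:

```python
import math

# Illustrative parameters, not values from the lecture: mean RGB vectors for
# the background class (y = 0) and the foreground class (y = 1).  Both
# classes share the covariance sigma^2 * I (3x3 identity times the sample
# variance), and the priors are equal because we have no prior knowledge.
MU = {0: (30.0, 30.0, 30.0), 1: (200.0, 180.0, 160.0)}
SIGMA2 = 25.0 ** 2
PRIOR = {0: 0.5, 1: 0.5}

def gaussian_pdf(x, mu, sigma2):
    """Density N(x; mu, sigma^2 * I) for a d-dimensional x."""
    d = len(mu)
    sq_dist = sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
    norm = (2.0 * math.pi * sigma2) ** (-d / 2.0)
    return norm * math.exp(-0.5 * sq_dist / sigma2)

def classify(x):
    """Bayes rule: choose the class y that maximizes the posterior
    p(y | x).  The evidence p(x) is the same for both classes, so
    comparing p(x | y) * p(y) is sufficient."""
    scores = {y: gaussian_pdf(x, MU[y], SIGMA2) * PRIOR[y] for y in (0, 1)}
    return max(scores, key=scores.get)
```

For example, `classify((190.0, 175.0, 150.0))` assigns the pixel to the foreground class, since that value lies much closer to the foreground mean.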

So, how do we find the mean vectors and the covariance matrices? In general we have two covariance matrices, Σ₁ and Σ₂: this is the first one, and this is the second one. If we have a good estimate for these quantities, it is straightforward to apply the Bayes rule, and then we know all parameters of our statistical model. The maximum likelihood technique is now an approach to estimate such parameters from a given data set. So in practice you have some example patterns available, and you would like to estimate these parameters from the data. In the next step, let's start with a general treatment of maximum likelihood, and in general we have the problem that we have …
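Looking ahead: for a Gaussian density, the maximum likelihood estimates turn out to be the sample mean and the 1/N-normalized sample covariance (a standard result, not yet derived at this point in the lecture). A minimal sketch of computing them from a list of feature vectors:

```python
def ml_estimates(samples):
    """Maximum likelihood estimates for a Gaussian: the sample mean and the
    1/N-normalized (biased) sample covariance matrix."""
    n = len(samples)
    d = len(samples[0])
    mean = [sum(x[i] for x in samples) / n for i in range(d)]
    cov = [[sum((x[i] - mean[i]) * (x[j] - mean[j]) for x in samples) / n
            for j in range(d)]
           for i in range(d)]
    return mean, cov

# In the segmentation setting we would run this once on the background
# training pixels and once on the foreground training pixels, yielding the
# two mean vectors and the two covariance matrices needed for the Bayes rule.
```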

Part of a video series
Access: open access
Duration: 00:40:54 min
Recording date: 2012-10-29
Uploaded on: 2012-10-30 13:43:34
Language: en-US
