11 - Diagnostic Medical Image Processing (DMIP) 2010/11 [ID:1176]

So no summary today. We are currently considering image correction methods, image pre-processing methods, and in particular we are in the field of MR imaging. One major problem in MR imaging is the heterogeneity of the applied magnetic field. If you talk to physicists, they will say this is not a problem at all. If you talk to image processing people, they will claim this is a serious issue. We have seen that these heterogeneities imply low-frequency changes in the intensity profiles, and these low-frequency intensity profiles cause serious issues for the automatic processing of these images, say if you want to find edges or homogeneous regions or things like that. Usually that causes a lot of problems if the heterogeneity is not removed properly. And we are currently discussing various methods for removing these inhomogeneities or heterogeneities.

The two terms mean basically one and the same thing. And in terms of math, what do we have to do? We have a sum of two numbers, and we have to find out which term we have to subtract from the measured value to get the original value. That's the idea. Like in the defect pixel interpolation, where we divided by two, we now start to work on a unique decomposition of a sum of two numbers. That's basically the message for today. And don't talk to theory people about these things, they will not believe you. It's magic. We are doing magic here. Good.
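
As a formula, one plausible way to write down the decomposition described here (the symbols g, f and b are my own shorthand, not taken from the recording) is

    g(x) = f(x) + b(x), \qquad \hat{f}(x) = g(x) - \hat{b}(x),

where g is the measured intensity, f the bias-free image, and b the slowly varying bias term that has to be estimated and subtracted.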

The reason why we can do that is of course that we have local neighborhood relationships, we can do interpolation, and some prior knowledge can be used to get rid of the ambiguities that we usually have when solving these problems. Last week I started to explain to you one very, very important concept. It's very important. So let me write it again. It's very, very important. It's very important. So I use the exponent here: infinity. It's very, very, very, very important if you want to do pattern recognition and image processing.

It's the so-called Kullback-Leibler divergence. The idea is basically that we measure the information that we gain from a random variable, or from a distribution. The information I(x) is defined as the integral of p(x) log p(x) dx, and usually you have a minus sign here. You remember that?
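
Written out with the minus sign, the information just defined is the (differential) entropy:

    I(x) = -\int p(x) \, \log p(x) \, dx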

Some of you might have heard this. And now what we assume is that we approximate p(x) by a discrete estimate; let's say we approximate it by q(x). And now the question is: what do we gain in terms of information? What we do is basically nothing else but computing the difference. We say we compute the KL divergence between p and q, and this is the integral of p(x) log p(x) dx minus the integral of p(x) log q(x) dx (I use a different color here to make this clear). So we look at what happens in terms of information if I replace this density by this estimate. And if I estimate things properly, these two terms should be the same, and so the difference should be zero. So this entropy difference is basically measuring how well p(x) is approximated by q.
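
In formulas, the difference just described is, combining the two integrals into the usual log-ratio form,

    D_{KL}(p \,\|\, q) = \int p(x) \log p(x) \, dx - \int p(x) \log q(x) \, dx
                       = \int p(x) \log \frac{p(x)}{q(x)} \, dx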

So the KL divergence is nothing else but a similarity measure between two density functions. It is the integral of p(x) times the logarithm of p(x) divided by q(x), dx. And if the two densities p and q are the same, this KL divergence will be zero, because the ratio turns out to be one, the logarithm of one is zero, and so the whole thing is zero. In all other cases the KL divergence will be larger than zero. So the KL divergence is a similarity measure, like the sum of squared differences we had in the least-squares estimates, as we used them before for image undistortion.
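
As a minimal numerical sketch of this comparison (not from the lecture; the example histograms, the normalization, and the eps smoothing are my own illustrative choices), a discrete KL divergence can be evaluated next to the SSD like this:

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        # Discrete KL divergence D(p || q) between two normalized histograms.
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        p = p / p.sum()  # make sure both are proper distributions
        q = q / q.sum()
        # eps avoids log(0) for empty bins; the exact value is recovered as eps -> 0
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    def ssd(p, q):
        # Sum of squared differences, the measure behind least-squares estimates.
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        return float(np.sum((p - q) ** 2))

    p = [0.2, 0.3, 0.5]
    q = [0.25, 0.25, 0.5]
    print(kl_divergence(p, p))  # 0.0: identical densities
    print(kl_divergence(p, q))  # > 0: densities differ
    print(ssd(p, q))            # SSD of the same pair, for comparison

Both measures vanish for identical densities and grow as the densities move apart; the KL divergence, unlike the SSD, weights the log-ratio by p and is therefore not symmetric in its arguments.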

So whenever I talk to you about the KL divergence, it is just a similarity measure that compares how well p and q fit to each other. And we have seen one important example last week, and it's a very, very important example, because we look at the bivariate case: we have the probability density function of a random vector that is two-dimensional. You can consume. You know that Coke Zero was made for men, because men do not buy Diet Coke?

Real men do not buy Diet Coke. Now you have destroyed my train of thought; I lost track. So what do we compute? The KL divergence of bivariate probability densities. And we know that two random variables, and I am repeating this because it is very, very, very, very, very, very important, two random variables are statistically independent if the factorization p(x, y) = p(x) p(y) holds, right? So let's use the indices x and y here, and now we compute the KL divergence
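
Presumably the quantity being set up here, as the excerpt ends, is the KL divergence between the joint density and the product of its marginals, i.e. the mutual information; this is my reconstruction of where the derivation is heading, not a statement from the recording:

    D_{KL}\big(p(x, y) \,\|\, p(x)\,p(y)\big) = \iint p(x, y) \, \log \frac{p(x, y)}{p(x)\, p(y)} \, dx \, dy

This divergence is zero exactly when the factorization holds, i.e. when x and y are statistically independent.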

Accessible via: open access
Duration: 00:40:14 min
Recording date: 2010-11-29
Uploaded on: 2011-04-11 13:53:29
Language: de-DE
