28 - Diagnostic Medical Image Processing (DMIP) 2010/11 [ID:1415]

Okay, so welcome to the last lecture of this winter course.

In summer we will not continue the way we started here.

In summer we will consider completely different topics.

But anyway, what I want to do today is wrap up the chapter on image registration, and then I will close the session with a brief summary of the topics we have considered.

So yesterday we learned about a very important similarity measure that is heavily used in image registration, and as I said, it is a state-of-the-art image similarity measure that is heavily used in practice.

And this is the mutual information, and the registration approach is also often called the MMI approach; that is, we maximize the mutual information: MMI, maximization of the mutual information.
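As a small aside that is not part of the spoken lecture, here is a minimal sketch of how this similarity measure could be estimated from a joint intensity histogram, assuming NumPy; the function name, the number of bins, and the histogram-based density estimate are illustrative choices, not from the course.

```python
import numpy as np

def mutual_information(fixed, moving, bins=64):
    """Estimate the mutual information of two same-sized images from their
    joint intensity histogram (illustrative helper, not from the lecture)."""
    # Joint histogram of corresponding pixel intensities
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p_xy = joint / joint.sum()                 # joint density p(x, y)
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal p(x), shape (bins, 1)
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal p(y), shape (1, bins)
    nz = p_xy > 0                              # skip empty bins to avoid log(0)
    return np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz]))
```

An MMI registration would then search over the transformation parameters of the moving image for the parameters that maximize this value.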

And I have explained the mutual information concept to you by arguing with statistical dependency and independence, and to my way of thinking this is very intuitive. To your way of thinking this might seem somewhat weird; it basically depends on your personal taste.

For those of you who are into information theory, we can also explain things quite differently, with the sender, receiver, and channel idea, and for this it is important to know about the entropy.

How is the entropy defined for a random variable X?

James.

The probability of X times the log of the probability of X. And is this the way you learned it in Georgia?

Oh okay, okay, that's the point, because the minus sign is missing.

Okay, okay, that's basically how the entropy is defined, and it's a measure of the information encoded in X.
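Written out, the spoken formula with the minus sign in place is the usual definition of the entropy of a discrete random variable X:

H(X) = -\sum_{x} p(x) \log p(x)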

It's a very weird intuition that I explained to you here.

So don't tell Professor Huber what I'm doing here; he will be upset.

And you can also define a multi-dimensional entropy by saying it's P of X and Y times the log of P of X and Y.
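Written out, again with the minus sign in place, the joint entropy is:

H(X, Y) = -\sum_{x, y} p(x, y) \log p(x, y)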

And if we now look at the mutual information, the mutual information is basically defined as the KL divergence computed between the joint density and the product of the marginals.

And I write it down again here in the discrete form: it's the sum over X and Y of P of X and Y times the log of P of X and Y divided by P of X times P of Y.
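In symbols, this is:

I(X; Y) = D_{KL}\big( p(x, y) \,\|\, p(x)\, p(y) \big) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}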

And this can be rewritten, and that's what we started yesterday already, as the sum over X and Y of P of X and Y log P of X and Y, minus the sum over X and Y of P of X and Y log P of X, and then minus the sum over X and Y of P of X and Y log P of Y.
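Written out, splitting the log of the quotient into a difference of logs gives:

I(X; Y) = \sum_{x, y} p(x, y) \log p(x, y) - \sum_{x, y} p(x, y) \log p(x) - \sum_{x, y} p(x, y) \log p(y)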

That's just using the calculus that you know from school, that you should know from school, that you might remember from school.

If you have the feeling you are seeing it for the first time, reconsider your career and your decision to sit here.

Okay, good.

So what is this?

This is the negative of the entropy of X and Y, right?

And in information theory, they always call the entropy H. Why is it called H?

Thermodynamics.

And does H stand for heat, or?

Okay, H of X and Y, okay?

And it's the negative of it, because the minus sign is missing.

And what is this here?

I mean, we can rewrite this.
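The recording continues past this excerpt; the standard way to finish the rewriting is to note that the other variable can be summed out of each marginal term, for example:

\sum_{x, y} p(x, y) \log p(x) = \sum_{x} p(x) \log p(x) = -H(X)

and analogously for Y, so that the mutual information becomes

I(X; Y) = H(X) + H(Y) - H(X, Y)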

Accessible via: Open access
Duration: 01:21:33 min
Recording date: 2011-02-08
Uploaded on: 2011-04-11 13:53:29
Language: de-DE
