17 - Interventional Medical Image Processing (IMIP) 2011 [ID:1624]

The following content has been provided by the University of Erlangen-Nürnberg.

So good morning everybody to the Monday session.

We are heading right to the end of the lecture series.

There are three weeks left and we will mostly look into the registration problem within these three weeks.

So it's a very important topic in medical imaging and especially in interventional imaging this is a very hot topic

because during surgeries you would like to make use of all the image information that is available

and you want to combine the various image sources properly such that you can speed up and enhance the whole procedure.

Before we continue looking into the algorithmic part of image registration and all the details about that

I would like to discuss the topics we have considered so far.

So during the lecture you might always have the feeling that it's not that much we are actually discussing,

but if you sit down and try to reconsider things step by step, you will notice that we have quite a load of information to deal with.

And we started out with one chapter on image preprocessing, very similar to the winter semester,

and in the chapter on preprocessing we looked at the beginning at the problem of how to compute point features.

Point features are very important if we want to deal with magnetic navigation

because there we learned how we can use corresponding points to compute the transformation between two images or for 3D ultrasound.
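Given such point correspondences, one standard way to recover the transformation is a least-squares rigid fit, the classic Kabsch/Procrustes solution. This is a sketch under my own naming and the assumption of a 2D rigid motion, not the lecture's specific algorithm:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ src @ R.T + t.

    src, dst: (N, 2) arrays of corresponding points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                       # optimal rotation (Kabsch)
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With noisy detections one would typically wrap this in an outlier-rejection scheme such as RANSAC, but the closed form above is the core estimation step.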

We have also seen that if you want to do 3D ultrasound with structure-from-motion approaches,

that basically requires point correspondences in an image sequence. And how can we find those points?

Well, we need some point detectors or corner detectors, and we have basically seen one important corner detector.

This corner detector was basically making use of the structure tensor,

and the structure tensor is basically the sum over the outer product of the gradients in a given neighborhood.

And what we actually do is: we take the gradient at a certain selected point, we compute the dyadic product of the gradient with itself,

and sum up over a local neighborhood.

So basically we compute the covariance matrix of the gradients in the local neighborhood.
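As a sketch of that sum of outer products (the function name, the Sobel gradients, and the 5-pixel box window are my assumptions, not the lecture's exact choices):

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def structure_tensor(image, size=5):
    """Covariance of gradients: sum the 2x2 outer products g g^T locally."""
    gx = sobel(image, axis=1)            # horizontal derivative
    gy = sobel(image, axis=0)            # vertical derivative
    # The three distinct entries of the symmetric 2x2 tensor, each
    # summed (here: averaged) over a size x size neighborhood.
    jxx = uniform_filter(gx * gx, size)
    jxy = uniform_filter(gx * gy, size)
    jyy = uniform_filter(gy * gy, size)
    return jxx, jxy, jyy
```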

If we look at the covariance we basically can analyze what's happening in the local structure.

For instance, if your covariance looks like this, you have gradients both in this direction and in that direction.

That basically means that both principal directions show up with more or less the same probability.

And this situation happens in an image if you have something like this: here an edge from black to white, for instance,

and here a corner; the gradient goes this way on one side and that way on the other,

and if you put things together into the covariance matrix you will see that you have these two principal directions.

That's why we computed here the eigen decomposition of the covariance matrix of the gradients

and then, if both eigenvalues are similar and not equal to zero, we know there must be something like a corner around that point.

We also added the idea that, when considering the gradient vectors in the local neighborhood,

we attach a weight to each of them, and the weight decreases with the distance to this point.

So you can put a Gaussian curve over the gradients and say: the farther away the point where I compute the gradient,

the lower its weight should be in this covariance matrix.

So there are hundreds of different variants of the structure tensor but the core idea is this.
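In that spirit, the distance weighting simply replaces the box sum with a Gaussian convolution of the gradient products; a minimal sketch, where the sigma value and the names are my own choices:

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def weighted_structure_tensor(image, sigma=1.5):
    """Structure tensor with Gaussian weights decaying with distance."""
    gx = sobel(image, axis=1)
    gy = sobel(image, axis=0)
    # Gaussian convolution = weighted local sum; far-away gradients
    # contribute less to the covariance at each pixel.
    return (gaussian_filter(gx * gx, sigma),
            gaussian_filter(gx * gy, sigma),
            gaussian_filter(gy * gy, sigma))
```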

And given the gradient of course you can also replace the gradient by any other feature vector you can compute.

We talked about the Haar features in pattern recognition in the winter semester.

You can compute a 150-dimensional Haar feature for each point and then you can compute the structure tensor.

I have never seen somebody doing that but you can do this once you understood the abstract concept.

So we can compute point features here: we get homogeneous regions if the gradient is zero, and we get an edge if the gradient has one principal direction.
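That three-way reading of the eigenvalues can be written down directly; the threshold value and the function name are my own illustration:

```python
import numpy as np

def classify_structure(jxx, jxy, jyy, tau=1e-3):
    """Classify one pixel from its 2x2 structure tensor entries."""
    tensor = np.array([[jxx, jxy],
                       [jxy, jyy]])
    lam2, lam1 = np.linalg.eigvalsh(tensor)   # ascending: lam2 <= lam1
    if lam1 < tau:
        return "homogeneous"   # both eigenvalues near zero: no gradient
    if lam2 < tau:
        return "edge"          # one dominant principal direction
    return "corner"            # two strong principal directions
```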

It's all very intuitive once you understood the geometry underlying the covariance matrix of gradients here.

Then we learned about the SIFT features, we learned about smoothing operations like the bilateral filter

that is taking into consideration both the local neighborhood of points and the similarity of intensities.

Very important filter, very simple filter and a filter that was invented in 1998.

So it's not so old, but very basic, and it changed our medical image preprocessing a lot, because bilateral filtering is used in many systems today.
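A brute-force sketch of that bilateral idea; the parameter values are placeholders of mine, and real systems use faster approximations than this double loop:

```python
import numpy as np

def bilateral_filter(image, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Weight neighbors by spatial closeness AND intensity similarity."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(image, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: similar intensities get high weight, so the
            # filter smooths within regions but not across edges.
            rng = np.exp(-(patch - image[i, j]) ** 2 / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```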

Then we talked about magnetic navigation. What is the picture you have to keep in mind for magnetic navigation?

The epipolar constraint picture.

So the only thing you need to know in this chapter is basically this figure here.

Remember this figure and you have to remember that images from now on are no longer matrices, images are 2D planes in the 3D space.

Accessible via: Open access

Duration: 01:23:54 min

Recording date: 2011-07-11

Uploaded on: 2011-07-12 09:35:28

Language: en-US

Tags: Mustererkennung, Informatik, Bildverarbeitung, Medizinische