The following content has been provided by the University of Erlangen-Nürnberg.
Good morning everybody. We are currently considering image processing techniques and algorithms for
interventional setups. In the summer semester we have already covered a few of these important
techniques, and at the beginning we talked about feature detection: how to find significant,
important points in the image, and how to find these important points fast and efficiently.
So we talked about gradients. How do we compute gradients in images?
We basically compute something like differences and variations in the X and Y directions, and the gradient
is a vector; the gradient always points in the direction of the highest change of intensities.
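As a small aside for this recap (not from the lecture itself), here is a minimal Python sketch of this gradient computation, assuming a grayscale image stored as a 2D float array; the helper name image_gradient is made up for illustration.

    import numpy as np

    def image_gradient(img):
        # Central differences along rows (y) and columns (x).
        gy, gx = np.gradient(img.astype(float))
        magnitude = np.hypot(gx, gy)    # length of the gradient vector
        direction = np.arctan2(gy, gx)  # angle of steepest intensity increase
        return gx, gy, magnitude, direction

The direction array holds, per pixel, the angle of the direction of highest intensity change that the lecture refers to.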
Then we also extended the gradient by looking into a local neighborhood, considering the gradient
vectors there, and computing the principal directions of these gradient vectors; that led to the so-called
structure tensor. How is the structure tensor defined once again? It's just the sum, over a local
neighborhood, of the gradient at each point times the gradient at that point transposed.
This is a sum of rank-one matrices. And it is basically the covariance matrix,
the covariance matrix of the gradients, if you look at it through the glasses of statistics. Welcome!
Good, and then we have seen that the eigenvalues and eigenvectors tell us something about the local properties:
whether it's a homogeneous area, whether there is a corner, or whether there is an edge.
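A minimal sketch of this in Python, using numpy and scipy, assuming a grayscale float image; the box window size and the threshold eps are illustrative choices, not values from the lecture.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def structure_tensor(img, window=5):
        # Sum of the rank-one matrices grad(I) grad(I)^T over a local
        # neighborhood; only the three distinct entries of the symmetric
        # 2x2 tensor are stored, smoothed with a box filter.
        gy, gx = np.gradient(img.astype(float))
        Jxx = uniform_filter(gx * gx, window)
        Jxy = uniform_filter(gx * gy, window)
        Jyy = uniform_filter(gy * gy, window)
        return Jxx, Jxy, Jyy

    def classify(Jxx, Jxy, Jyy, eps=1e-3):
        # Closed-form eigenvalues of [[Jxx, Jxy], [Jxy, Jyy]].
        tr = Jxx + Jyy
        det = Jxx * Jyy - Jxy * Jxy
        disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
        lam1, lam2 = tr / 2 + disc, tr / 2 - disc  # lam1 >= lam2
        corner = lam2 > eps                   # both eigenvalues large
        edge = (lam1 > eps) & (lam2 <= eps)   # one large, one small
        flat = lam1 <= eps                    # both small: homogeneous area
        return corner, edge, flat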
Edge detection was something that is important in shutter segmentation, where we considered
the problem of having an automatic system that finds these rectangular structures in images.
And for finding the rectangular structures we introduced the so-called Hough transform.
Can somebody tell me in a few sentences what's meant by the Hough transform?
The Hough transform is good for detecting straight lines in images. And if we want to detect straight lines
in images, all the points on a straight line share the property that they fulfill the equation
of that straight line. Basically, a straight line is characterized by two parameters:
either the slope and the intercept, the translation, or the normal vector,
the vector orthogonal to the straight line, together with the offset from the origin.
And if we have many points voting for these parameters, we can build up an accumulator array
where we find peaks for those point sets, for those lines, that are present in the image.
That was a very intuitive scheme, and the Hough transform is very powerful. It can of course also be applied
to circle detection, for instance, if you use a parameterized version of a circle instead of
a parameterized version of a straight line. You can think about this as homework; a small sketch of the line case follows below.
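To make the voting scheme concrete, here is a hedged Python sketch of the line Hough transform in normal form, rho = x cos(theta) + y sin(theta); edge_points is assumed to be an (N, 2) array of (x, y) pixel coordinates from some edge detector, and the bin counts are arbitrary illustrative choices.

    import numpy as np

    def hough_lines(edge_points, img_diag, n_theta=180, n_rho=200):
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((n_rho, n_theta), dtype=int)
        for x, y in edge_points:
            # Each point votes for every (rho, theta) pair whose line
            # passes through it.
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            idx = np.round((rho + img_diag) / (2 * img_diag) * (n_rho - 1))
            acc[idx.astype(int), np.arange(n_theta)] += 1
        return acc, thetas  # peaks in acc correspond to lines

Peaks in the accumulator array are exactly the point sets, the lines, that the lecture mentions.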
Good. So we talked about the Hough transform, and then we talked about the HOG features
and the SIFT features and a set of different features. These features are rather modern,
I have to say. They are rather popular these days; they are very recent features and heavily used
in many practical applications. And my PhD students have introduced these features to you
with 70 or 80, or 500 slides, I have seen. So it was a really fast session, I guess.
Before that we looked into one subproblem that is important
for interventions, and that's magnetic navigation. In magnetic navigation we considered,
hello Martin, we considered the problem that we want to adjust the tip of the catheter
in a way that it moves in the right direction. And this motion should not be achieved
by mechanical manipulation of the catheter; it should be achieved by an external magnetic field.
And we talked about a user interface that allows us to adjust the magnetic field
by looking at two projections. And here, in the geometry of two such projections,
we have the translation vector T between the two cameras.
And there is one result of Longuet-Higgins, who observed in the 80s that the vector
that connects the optical center of camera 1 with the world point, the vector that connects
the optical center of camera 2 with the world point, and the translation vector all lie in a plane.
They form a plane, a two-dimensional subspace. It does not look that difficult if you look at this
brief figure or illustration here, but it is a very important relationship. And we have seen that
for the coordinates P and Q of a world point in the two camera frames we have the relationship
that P transpose E times Q is required to be 0, and that is the epipolar constraint.
And this matrix E here is R times the cross-product matrix of T, E = R [T]x, and it is the essential matrix.
And if you multiply from left and right with the inverse calibration matrices,
K inverse transpose and K inverse, then you get the fundamental matrix F.
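As a numeric sanity check of this coplanarity result (a sketch, not lecture material; the rotation angle and translation values below are made up), one can build E = R [T]x and verify that the constraint vanishes.

    import numpy as np

    def skew(t):
        # Cross-product matrix: skew(t) @ v == np.cross(t, v)
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    # Longuet-Higgins' convention: camera 2 sees p2 = R @ (p1 - T),
    # and the essential matrix is E = R @ skew(T).
    p1 = np.array([0.4, -0.8, 5.0])   # world point in the camera-1 frame
    T = np.array([1.0, 0.2, 0.0])     # translation between the cameras
    a = 0.3                           # rotation about the y axis
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
    p2 = R @ (p1 - T)
    E = R @ skew(T)
    print(p2 @ E @ p1)  # ~0 up to floating point: the epipolar constraint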