So we are currently in the chapter on image registration, and we consider rigid image registration methods.
That means that objects are just rotated and translated and there is no deformation, but just a rigid transformation.
And in the past we have considered algorithms where selected point features were used for computing point correspondences, and based on these point correspondences we were able to set up linear methods, linear algorithms, to estimate rotation and translation.
And what we are currently considering is a quite different problem, where we say we have two images, let's say F and G, a source and a target image,
and we have to find a transformation T such that these two images coincide in a joint coordinate system.
And this transformation has to be estimated based on intensity information.
That means we do not compute any features, we do not compute any points, we do not compute any correspondences, but we just take the intensities,
and based on the corresponding intensities we set up an objective function that tells us how well the images fit to each other.
So basically we do the following, we overlay the two images, let's say this one, and then we take the other one, this one here,
and then we have here an overlap region, and if the two images are captured by one and the same modality, for instance the two images are x-ray images,
we can assume that corresponding pixels have identical or at least similar intensity values.
So what I can do is I can sum over this overlap area here, and we call it omega, the domain of overlap, it's usually called omega in the literature.
We can walk through it pixel by pixel and compare f at (i, j) with g at the transformed (i, j), so we transform the indices, the x-y coordinates,
and sum this up over all (i, j) in omega, that is, over all pixels in the overlap domain.
And we now compute the transform T such that this sum achieves its minimum.
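Written out as a formula, as a hedged summary of what was just described (the squared difference is one common choice; absolute differences work in the same spirit):

\[
\hat{T} \;=\; \arg\min_{T} \sum_{(i,j)\,\in\,\Omega} \bigl( f(i,j) - g\bigl(T(i,j)\bigr) \bigr)^{2}
\]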
So what I can do is I can take the image, I can rotate the image, I can translate the image, and then I can compute the corresponding intensities.
Of course you run into many practical problems.
The one image grid is like that, and the other image grid looks like that, and what you see is that the grid points do not coincide.
So you need to do interpolation if you want to compute the differences: you have to select one point in space,
and you cannot just compute the difference between this grid point and that grid point, but you have to estimate the value at that point and then compute the difference.
And you can imagine that the interpolation scheme has a huge impact on the behavior of this function.
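To make the interpolation step concrete, here is a minimal bilinear interpolation sketch in Python; the function name bilinear_sample and the NumPy setup are my own assumptions for illustration, not part of the lecture:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Estimate the intensity of img at the non-integer position (x, y)
    by bilinearly interpolating the four surrounding grid points."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    # Clamp the neighbours to the image border so we never index outside the grid.
    x0, x1 = np.clip([x0, x1], 0, img.shape[1] - 1)
    y0, y1 = np.clip([y0, y1], 0, img.shape[0] - 1)
    wx, wy = x - np.floor(x), y - np.floor(y)
    top    = (1 - wx) * img[y0, x0] + wx * img[y0, x1]
    bottom = (1 - wx) * img[y1, x0] + wx * img[y1, x1]
    return (1 - wy) * top + wy * bottom
```

Nearest-neighbour, bilinear, or higher-order spline interpolation each change the off-grid estimates slightly, and with them the shape of the objective function.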
If you have a rotation and translation as the considered transform, you can use this objective function and it works pretty well.
Usually it works pretty well.
So what people do is they use this and estimate, by a global optimization strategy, the transform such that the difference image of the two registered images is as close as possible to the zero image.
That's the idea here.
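As a minimal sketch of this whole idea in Python, assuming two 2-D single-modality images and using SciPy's affine_transform together with a generic Powell optimizer (the parameterization and names are my own; for simplicity the sum runs over the full grid with zero padding instead of being restricted to the overlap region omega):

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def ssd(params, f, g):
    """Sum of squared differences between f and a rigidly transformed g.
    params = (rotation angle in radians, shift in rows, shift in columns)."""
    angle, ty, tx = params
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    # Rotate about the image centre, then translate; affine_transform maps
    # output coordinates back into the input image and uses bilinear
    # interpolation (order=1) for the off-grid positions.
    centre = 0.5 * np.array(f.shape)
    offset = centre - rot @ centre + np.array([ty, tx])
    g_t = affine_transform(g, rot, offset=offset, order=1)
    return np.sum((f - g_t) ** 2)

# Hypothetical usage with two already loaded 2-D arrays f and g:
# result = minimize(ssd, x0=[0.0, 0.0, 0.0], args=(f, g), method="Powell")
# angle, ty, tx = result.x
```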
And there are many tricks out in the field.
You can use the full-resolution images, but you can also use image hierarchies here.
You might know these from computer graphics or image processing lectures: instead of doing the registration on the 512 by 512 resolution,
you do an initial registration on a 36 by 36 resolution, you get initial estimates, and then you refine them going up in the resolution hierarchy.
So there are many tricks in the field.
We cannot go through them, but most of them are very intuitive.
The nice thing is also that these resolution hierarchies can be computed easily on today's graphics cards, so this can be done in an extremely efficient manner.
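A hedged sketch of such a coarse-to-fine scheme (the helper register_at_level is hypothetical and stands for any routine that refines the rigid parameters on one pyramid level, for example the SSD minimization sketched above):

```python
import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine(f, g, register_at_level, levels=3):
    """Register f and g on a resolution hierarchy: estimate the rigid
    parameters on a heavily downsampled pair first, then refine them on
    successively finer levels."""
    angle, ty, tx = 0.0, 0.0, 0.0             # start from the identity transform
    for level in reversed(range(levels)):     # coarsest level first
        scale = 1.0 / (2 ** level)
        f_small = zoom(f, scale, order=1)     # downsample both images
        g_small = zoom(g, scale, order=1)
        angle, ty, tx = register_at_level(f_small, g_small, (angle, ty, tx))
        if level > 0:                         # translations are in pixels,
            ty, tx = 2.0 * ty, 2.0 * tx       # so they double on the next finer grid
    return angle, ty, tx
```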
Well, yes.
I mean, you can constrain the search space by saying that you do not allow a rotation of 180 degrees, but only of 90 degrees or so.
But of course you're right.
I mean, if you have a rotation by 180 degrees and the two images fit pretty well, then maybe you get stuck in a local minimum there.
And you have to be aware of the fact that this objective function is not a nice, smooth function with a single minimum.
It typically looks much more rugged, with many local minima, so it is really hard to find the global minimum.
So this is something that causes a lot of trouble.
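One simple way to reduce the risk of getting trapped, sketched here as my own illustration rather than the specific approach from the lecture, is a multi-start strategy: run the local optimizer from several initial rotations and keep the best result.

```python
import numpy as np
from scipy.optimize import minimize

def multi_start_register(objective, f, g, n_starts=8):
    """Run a local optimizer from several initial rotation angles and keep
    the parameters with the lowest objective value (e.g. the SSD above)."""
    best = None
    for angle0 in np.linspace(-np.pi, np.pi, n_starts, endpoint=False):
        res = minimize(objective, x0=[angle0, 0.0, 0.0],
                       args=(f, g), method="Powell")
        if best is None or res.fun < best.fun:
            best = res
    return best.x  # (angle, ty, tx) with the lowest objective found
```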
And I told you: if you go out and work with the registration software that is commercially available, the rigid registration methods run well below 30 seconds for volume datasets, even in low quality.
So we are on a very advanced level these days.
If you are considering other transformations beyond rotation and translation, that is, deformations, the objective function here is usually extended by some regularizer that depends on the transformation that you estimate.
For instance, you require the transformation to be diffeomorphic, so that you do not allow the image to break up, to get holes, or to get foldings or things like that.
You can express these things in such a regularizer.
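As a hedged sketch of what such an extended objective can look like, with a generic regularizer R(T) and weight lambda as placeholders rather than the specific choice from the lecture:

\[
\hat{T} \;=\; \arg\min_{T} \;\sum_{(i,j)\,\in\,\Omega} \bigl( f(i,j) - g\bigl(T(i,j)\bigr) \bigr)^{2} \;+\; \lambda\, R(T)
\]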
And in the summer semester, we will look into the methods for non-rigid registration.
In the winter semester, we focus more on the rigid registration problem.
Here is one example where rigid registration was used already 20 years ago.