Good morning again. As I said, yesterday we had a guest speaker, and I was rather ashamed that only four students attended. That is totally unacceptable. If you want me to invite medical people, then I expect that at least 500 people attend, or something like that. That's not good. Good. Okay, so we are in the
final part of this semester, and one of the final topics we are looking into is non-rigid image registration in various applications. And yesterday we saw why image registration is very, very important, and we also saw that especially the registration of multimodal images is important, and Professor Kuvert also pointed out why non-rigid registration is so crucial and where the weak spots of the existing algorithms
actually are. So what we are going to discuss today: we will continue with the mathematical formulation, and please recall what we have done so far in terms of image registration.
Let me just add here another page to come up with the mandatory mind map for the Tuesday
session. What is this? I want to make this a little thinner. So in interventional image
processing we are currently discussing non-rigid image registration. And what is meant by non-rigid
image registration? Well, non-rigid image registration is nothing other than the mapping of two or more images into a joint coordinate system, into a common coordinate system. And while we map things into a joint or common coordinate system, we can deform them. That's the idea of non-rigid image registration.
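To make the deformation explicit in the notation used below (writing it this way is my addition, not something from the blackboard): the displacement vector field u defines a transformation

\[
\varphi(x) \;=\; x + u(x),
\]

and the source image is evaluated at \varphi(x) before it is compared with the target image. In rigid registration u is restricted to a global rotation and translation; in non-rigid registration u may vary from point to point, which is exactly why the regularizer discussed next is needed.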
And what we saw last time is that basically this turns out to be an optimization problem that requires us to minimize a functional. And we have seen that this functional, which depends on the displacement vector field, is basically some similarity measure that depends on the two images, hold on, we call them source and target image, plus some regularizer that only depends on u. And this regularizer only depends on the displacement vector field, not on the observations, not on the actual images that we want to register. So in terms of pattern recognition this is also called some kind of prior knowledge that we are applying here.
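Written out, the functional just described has the following generic form (the symbols \mathcal{J}, \mathcal{S}, \mathcal{R} and the weighting factor \alpha are my notation, not necessarily what was on the blackboard):

\[
\mathcal{J}[u] \;=\; \mathcal{S}(\text{source}, \text{target};\, u) \;+\; \alpha\, \mathcal{R}[u] \;\longrightarrow\; \min_u ,
\]

where \mathcal{S} measures how similar the deformed source image is to the target image and \mathcal{R} penalizes implausible displacement fields without ever looking at the image data.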
For the pattern recognition guys in here, just a quick pointer: if you optimize the a posteriori probability, and you take the logarithm of the a posteriori probability, that is the same as optimizing \log p(y) + \log p(x \mid y). And this first term is also not dependent on the observation and is called the prior. So there is some similarity between these two things here, and basically Bayesian classifiers can also be reduced to a problem that is stated as above. Just for your information, as a way to look at these things. So let me just remove that again, because some of you might be confused by it. Good.
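For those who want that side remark written out, it is the standard MAP argument (nothing beyond Bayes' rule; x denotes the observation and y the class, as in the lecturer's wording):

\[
\hat{y} \;=\; \arg\max_y \log p(y \mid x) \;=\; \arg\max_y \bigl[\, \log p(x \mid y) + \log p(y) \,\bigr],
\]

since \log p(x) does not depend on y. The data term \log p(x \mid y) plays the role of the similarity measure, and the prior \log p(y) plays the role of the regularizer.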
And how do we solve such a functional? How do we optimize it? Well, a necessary condition is that the Euler-Lagrange differential equation holds. And what is the Euler-Lagrange differential equation telling us? Well, basically it allows us to optimize a functional of the form \int_{x_1}^{x_2} f(x, u, u')\,dx. You remember that. And if I want to minimize this, a necessary condition is that the Euler-Lagrange differential equation is fulfilled. And the Euler-Lagrange differential equation looks like what? Sabin? The derivative of f with respect to the function, minus d/dx of the derivative of f with respect to u'. So it's f_u - \frac{d}{dx} f_{u'} = 0, right? That's the Euler-Lagrange differential equation. We had one whole chapter on variational calculus, and we have also shown by means of the first variation that this is a necessary condition. Okay.
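As a quick sanity check (my own example, not one from the lecture): for the simplest functional

\[
\int_{x_1}^{x_2} \tfrac{1}{2}\,\bigl(u'(x)\bigr)^2 \, dx
\]

we have f_u = 0 and f_{u'} = u', so the Euler-Lagrange equation reduces to -u'' = 0, and the stationary functions are the straight lines u(x) = a x + b fixed by the boundary values. The same mechanics, applied to the registration functional, will give us the partial differential equations for the SSD term and the regularizers below.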
So we can deal with these things, and here the similarity term S and the regularizer R can actually be written exactly in this integral form, so that we can apply the Euler-Lagrange differential equation. And today we will compute the Euler-Lagrange differential equation for the SSD similarity measure and for the diffusion and the curvature regularizer. That's something we are going to discuss, and we will also look at the derivative, the first variation, of the mutual information for the multimodal case. But we will see the details later on.
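To make today's two ingredients a bit more concrete, here is a small numerical sketch of the SSD similarity measure and a discretized diffusion regularizer for a 2-D displacement field (Python/NumPy; the function names and the simple finite-difference discretization are my own choices, not code from the course):

import numpy as np

def ssd(source_warped, target):
    # Sum of squared differences between the warped source and the target image.
    diff = source_warped.astype(float) - target.astype(float)
    return 0.5 * np.sum(diff ** 2)

def diffusion_regularizer(u):
    # Discrete diffusion regularizer: 0.5 * sum over components l of |grad u_l|^2.
    # u has shape (2, H, W): one displacement component per spatial axis.
    energy = 0.0
    for component in u:
        gy, gx = np.gradient(component)  # finite-difference gradient
        energy += 0.5 * np.sum(gy ** 2 + gx ** 2)
    return energy

def registration_energy(source_warped, target, u, alpha=0.1):
    # Similarity term plus weighted regularizer, as in J[u] = S + alpha * R.
    return ssd(source_warped, target) + alpha * diffusion_regularizer(u)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.random((64, 64))
    source_warped = target + 0.05 * rng.standard_normal((64, 64))  # stand-in for a warped source
    u = 0.01 * rng.standard_normal((2, 64, 64))                    # some small displacement field
    print("J[u] =", registration_energy(source_warped, target, u))

In the actual algorithm the warped source itself depends on u, and the Euler-Lagrange equations derived today tell us how to update u; this sketch only evaluates the energy for a fixed field.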
So non-rigid registration is part of a huge chapter that we are currently discussing. Then we have talked about the tracking of