Welcome to the afternoon session.
We are currently talking about image registration, so far without going into all the details.
And image registration is very important in the interventional environment, because there you treat the patient, you deal with the patient, and you want to take all the data that was acquired before and during the intervention and combine it by registration.
In the winter semester, for those of you who attended, we discussed the problem of rigid image registration, where you have just a rotation and a translation.
That's fine if you have rigid objects, but patients are usually non-rigid. They move, you know, and the belly is higher or lower. So you need deformable registration: you have to map one image onto the other while also incorporating deformations.
And to do non-rigid image registration, we need a few mathematical tools. One important tool is the calculus of variations, which you should have heard about in Algorithmics 3, as far as I am informed.
Did you hear about that in Algorithmics 3?
No?
No?
In math?
Engineering mathematics should also introduce it?
No.
I mean, we do some weird things there. We consider functions as variables, right? And we search not in a finite-dimensional space of parameters, but in a function space. We optimize not parametric functions, but functionals that depend on functions. So the way of thinking about these things is a little different, but it's not that difficult, right?
No, not right.
It's difficult.
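To give a classic example of such a functional (my illustration, not one from the lecture): the arc length of a curve y(x),

J[y] = \int_a^b \sqrt{1 + y'(x)^2} \, dx,

assigns a single number to a whole function y, and minimizing it over all curves between two fixed endpoints yields the straight line.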
It takes a while to get used to it, and last week we started to motivate variational image smoothing. The idea was: given an acquired image f, we want to compute a smooth image g such that the acquired image and the processed, filtered image are as similar as possible.
So we compute the squared difference of f and g at each pixel position (x, y).
On the other hand, we want to compute g in a way that it is smooth. So we apply the nabla operator to get the gradient of the image, take, for instance, its L2 norm, and look for a function g that minimizes this term. In other words, we penalize high gradients. If you have noise, a high degree of noise, you have locally high gradients, and these are penalized by the Euclidean length of the gradient. So you look for a g such that the difference to f is small, but also such that the gradient field is not too big, right?
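Written out, the functional being described is presumably the standard Tikhonov-type smoothing energy (the weight \alpha is my notation for the trade-off between the two terms, not necessarily the slide's):

E[g] = \int_{\Omega} \big( f(x, y) - g(x, y) \big)^2 \, dx \, dy \;+\; \alpha \int_{\Omega} \left\| \nabla g(x, y) \right\|_2^2 \, dx \, dy

The first integral is the similarity term, the second penalizes large gradients of g.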
OK.
And here f and g are image functions. They are two-dimensional functions where you put in two coordinates, as in f(x, y), and you get an intensity value out.
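To make this concrete, here is a minimal sketch (my own, not from the lecture) of minimizing a discretized version of this functional by gradient descent on its Euler-Lagrange equation, (g - f) - alpha * laplace(g) = 0; the function name and the values of alpha and tau are illustrative choices:

```python
import numpy as np

def variational_smooth(f, alpha=1.0, tau=0.1, n_iters=200):
    """Smooth image f by gradient descent on the Tikhonov-type energy
    E[g] = sum (f - g)^2 + alpha * sum ||grad g||^2 (discretized)."""
    g = f.astype(float).copy()
    for _ in range(n_iters):
        # 5-point Laplacian with replicated (Neumann) boundaries
        gp = np.pad(g, 1, mode="edge")
        lap = (gp[:-2, 1:-1] + gp[2:, 1:-1]
               + gp[1:-1, :-2] + gp[1:-1, 2:] - 4.0 * g)
        # dE/dg is proportional to (g - f) - alpha * Laplacian(g);
        # the constant factor 2 is folded into the step size tau
        g -= tau * ((g - f) - alpha * lap)
    return g

# Usage: smooth a synthetic noisy image
noisy = np.random.default_rng(0).normal(0.5, 0.1, (64, 64))
smooth = variational_smooth(noisy, alpha=2.0)
```

Larger alpha pushes the result toward a smoother g at the cost of similarity to f; tau must stay small enough for the explicit scheme to remain stable.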