20 - Interventional Medical Image Processing (IMIP) 2012 [ID:2278]

The following content has been provided by the University of Erlangen-Nürnberg.

So welcome to the Monday afternoon lecture on interventional medical image processing.

And we are currently considering one of the final chapters of this semester

where we will talk about a problem called image registration.

And today I'll explain this to you in more detail for the case where we allow for non-rigid deformations.

Before we had lectures on linear algebra, we had lectures on the computation of point features

like SIFT features or the structure tensor.

We had lectures on the Hough transform to detect parametric structures in the image.

We talked about magnetic navigation with a catheter.

We talked about the epipolar geometry in this context.

And the eight-point algorithm, and different ways to make it numerically robust.

We talked about 3D ultrasound.

In this context, we learned about structure from motion approaches using different projection models.

And we also had a chapter on hand-eye calibration.

I'm sorry my machine here is not accepting my pen consistently.

We talked about hand-eye calibration and last week we...

I'm not sure what this is doing here.

We talked about variational calculus.

So, okay.

Now that's the big picture and now we go into image registration.

And before I explain to you what image registration is, let's briefly reconsider what we discussed in the context of variational calculus.

What was the core idea of variational calculus, and how was it different from what we had done so far, at least in the context of this lecture?

The new thing was that instead of having parameters that we are going to estimate, we estimate functions.

And parameter estimation is mostly done in pattern recognition, image processing, computer vision, and medical imaging using an objective function that depends on the parameters, and we need to minimize or maximize that function, depending on the approach we're using.

If we use a similarity measure like least squares, the sum of squared differences, for instance, we have to minimize the function.
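To make that concrete (standard notation, not verbatim from the lecture): for two images I_1 and I_2 and a transform T_\theta with parameters \theta, the sum of squared differences reads

E(\theta) = \sum_i \bigl( I_1(x_i) - I_2(T_\theta(x_i)) \bigr)^2,

and registration then means finding the \theta that minimizes E.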

How do we minimize functions with respect to a set of parameters?

We compute the derivative or the gradient and require the gradient to vanish at the minimum.
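In symbols (my notation, assuming a differentiable objective J over parameters \theta), the first-order condition is

\nabla_\theta J(\theta) = 0 \quad \text{at } \theta = \hat{\theta},

which we solve in closed form if possible, or iteratively otherwise.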

If it's a quadratic function and if it's convex, it's all rather straightforward how to do that.

If we have probabilistic models, usually we estimate our parameters in the sense of maximum likelihood estimation.

So we maximize an objective function.
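Sketched in formulas (standard maximum likelihood notation, not taken from the lecture slides): for independent observations x_1, \dots, x_N we choose

\hat{\theta} = \arg\max_\theta \prod_{i=1}^{N} p(x_i \mid \theta) = \arg\max_\theta \sum_{i=1}^{N} \log p(x_i \mid \theta),

where taking the logarithm turns the product into a sum without changing the maximizer.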

And if we are in the variational calculus framework, we have to estimate a function according to an objective function that we need to minimize.

So what was the optimization problem we had to consider?

Maybe I should reboot the system because something is going wrong here. Let me just restart this.

Overwrite.

Now...

So what was the objective function we had to optimize?

You remember that?

We want to make the difference between our image and the other image as small as possible.

Yeah. That was the filtering example that we have considered.

But in the framework of variational calculus, we have considered a way more general problem.

There was this integral from x1 to x2.

You remember that?

Roughly?

No?

There was this integral from x1 to x2.

Then we had a function depending on the variable x, on the function that we are looking for, and on the first derivative of this function with respect to x.

And this has to be maximized or minimized depending on what kind of optimization problem we have.
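Written out in the usual variational notation (my transcription of the description above), the functional is

J[f] = \int_{x_1}^{x_2} F\bigl(x, f(x), f'(x)\bigr) \, dx,

and the necessary condition for an extremum is the Euler-Lagrange equation

\frac{\partial F}{\partial f} - \frac{d}{dx} \frac{\partial F}{\partial f'} = 0.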

Accessible via: Open access
Duration: 01:18:47 min
Recording date: 2012-07-02
Uploaded on: 2012-07-03 12:35:15
Language: en-US
