6 - Diagnostic Medical Image Processing (DMIP) [ID:1868]

The following content has been provided by the University of Erlangen-Nürnberg.

So welcome to the shorter Monday session. Today we will continue to talk about image undistortion. I don't want to repeat what we have covered so far; tomorrow morning I will give the big picture again and embed what I am going to say into the general framework. So today we want to continue the discussion of how to estimate the distortion mapping using a calibration pattern. Last time I started to generalize this idea of linear estimators and fair parametrizations, you remember that, to polynomials of higher degree, and somehow I ended up in a disaster and couldn't make clear to you what I actually meant. So let me pick up at that point again, and then we will continue in the text.

So what we discussed last week was the notion of parametrization, something you should be aware of whenever you build parametric models for any image processing problem. I pointed out that this problem can be explained by looking at a very simple case: fitting a straight line through a set of points, the regression line. And we have seen two parametrizations. If you choose y = mx + t, then you know that m tends to infinity if the straight line is parallel to the y-axis. Remember that configuration, with the x-axis and the y-axis drawn on the board. The implication is that if you perturb the input points a little, the slope of the straight line changes drastically, because m becomes very, very large when the line is close to parallel to the y-axis.

And our alternative representation was to say: okay, why don't we use n^T x = d, where n is required to be a unit vector? I also pointed out that if you estimate n and d from these points, you end up with an eigenvalue/eigenvector problem. In The Matrix Cookbook (matrixcookbook.com) you can look up the partial derivatives, and then you basically end up with the solution of an eigenvalue problem to find the straight line.
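As a worked reconstruction of the derivation referred to here (the notation, with centroid \bar{x} and scatter matrix S, is mine, not the lecture's), the estimation problem reads:

```latex
\min_{\|n\|=1,\; d} \; \sum_{i=1}^{N} \left( n^{\top} x_i - d \right)^2
```

Setting the partial derivative with respect to d to zero gives d = n^{\top}\bar{x} with \bar{x} = \frac{1}{N}\sum_{i} x_i. Substituting back leaves

```latex
\min_{\|n\|=1} \; n^{\top} S \, n,
\qquad
S = \sum_{i=1}^{N} \left( x_i - \bar{x} \right) \left( x_i - \bar{x} \right)^{\top},
```

and by a Lagrange-multiplier argument the minimizer is the eigenvector of S belonging to the smallest eigenvalue.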

Then I started to do the following. I said that without loss of generality we can do a data normalization, or as we also call it, data balancing: we move our coordinate system to the centroid of the points we observe, so that the origin of the coordinate system coincides with the centroid of the sample points. And if we do that, our straight line is characterized by n^T x = 0. That's the important trick here: we reduce d to zero. If you compute n^T x - d, that is the signed distance of x to the straight line; after centering it is simply n^T x. And if all the points lie on this line, they fulfill the constraint that this distance is zero.
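To make this concrete, here is a minimal sketch of the centered line fit; the lecture suggests Matlab, but I am using Python with NumPy here, and the function name fit_line is my own choice, not from the lecture:

```python
import numpy as np

def fit_line(points):
    """Fit a line n^T x = d to 2D points: center the data at the
    centroid, then take the eigenvector of the scatter matrix that
    belongs to the smallest eigenvalue (total least squares)."""
    centroid = points.mean(axis=0)        # data balancing: the new origin
    centered = points - centroid          # now the line passes the origin
    scatter = centered.T @ centered       # 2x2 scatter matrix S
    _, eigvecs = np.linalg.eigh(scatter)  # eigenvalues in ascending order
    n = eigvecs[:, 0]                     # unit normal: smallest eigenvalue
    d = n @ centroid                      # offset recovered from centroid
    return n, d
```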

And now, can we lift this to higher-order polynomials? That is what I have to extend, or what I have to add here, what I messed up a little last week. If you have, let's say, a second-degree polynomial y = ax^2 + bx + c, and we want to fit this parabola through these points, we can do the following. We can move to a three-dimensional space and look at the lifted vectors built from x^2, x, and one, together with y; we can compute the centroid of these lifted points, and basically we end up in 3D with the same equation, n^T x = 0. That's the core idea; the one, the constant term, is what I messed up last week, because after shifting to the centroid its contribution vanishes. Basically we compute the mean vector of the x^2 and x components, or if we have mixed terms we compute the mean vector including those, and normalize things so that the origin of the coordinate system is shifted to this centroid. Then we end up again with a system of homogeneous equations, and we can solve the fitting problem with an eigenvalue/eigenvector problem as well.
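A sketch of this lifted fit, under the same assumptions as the line-fit sketch above (Python/NumPy rather than Matlab; the name fit_parabola and the recovery of a, b, c are my own framing of the homogeneous formulation):

```python
import numpy as np

def fit_parabola(points):
    """Fit y = a*x^2 + b*x + c via the homogeneous formulation:
    lift each point (x, y) to (x^2, x, y), center at the lifted
    centroid, and solve n^T z = 0 as an eigenvector problem."""
    x, y = points[:, 0], points[:, 1]
    lifted = np.column_stack([x**2, x, y])  # monomial lifting to 3D
    mean = lifted.mean(axis=0)              # shift origin to the centroid
    centered = lifted - mean
    scatter = centered.T @ centered         # 3x3 scatter matrix
    _, eigvecs = np.linalg.eigh(scatter)
    n = eigvecs[:, 0]                       # smallest-eigenvalue eigenvector
    d = n @ mean                            # offset in original coordinates
    # The fitted relation is n[0]*x^2 + n[1]*x + n[2]*y = d; solve
    # for y (assuming n[2] != 0, i.e., a non-degenerate fit):
    return -n[0] / n[2], -n[1] / n[2], d / n[2]
```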

In pattern recognition, if you attend the lecture on pattern recognition in your master's program, you will learn in this context the term kernels; just keep that in mind for the future when you see this again. Okay. So what I recommend is: sit down, play a little bit with Matlab, do this type of fitting, and do experiments with it. It's also fun to see how these regression curves fall into the points, and which different methods can actually be applied to address this important issue here.
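In that spirit, here is what such an experiment could look like, using the two hypothetical helper functions sketched above (the data and parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 100)

# Near-vertical line: y = mx + t would need a huge m here, but the
# unit-normal parametrization handles this orientation without trouble.
line_pts = np.column_stack(
    [1.5 + 0.01 * rng.standard_normal(x.shape), x])
print(fit_line(line_pts))       # n close to (+/-1, 0), d close to +/-1.5

# Noisy samples of the parabola y = 2x^2 - 3x + 1.
y = 2.0 * x**2 - 3.0 * x + 1.0 + 0.1 * rng.standard_normal(x.shape)
print(fit_parabola(np.column_stack([x, y])))  # roughly (2.0, -3.0, 1.0)
```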

Okay. Then we have introduced the term measurement matrices. With a measurement matrix we basically always mean matrices that are built up using data or points that are measured.

Accessible via: Open access
Duration: 00:43:56 min
Recording date: 2011-11-07
Uploaded: 2011-11-16 16:06:39
Language: en-US
