The following content has been provided by the University of Erlangen-Nürnberg.
Okay, that means I compute the mean value of these two points. What is the mean value?
Sum all the points up, coordinate by coordinate, and then divide by the number of points, and you get the centroid or the center of gravity.
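Written out, with x_1, ..., x_N denoting the points and N their number (notation assumed here, it is not fixed in the recording), that is simply:

```latex
% Centroid (center of gravity) of the point set
\bar{x} \;=\; \frac{1}{N} \sum_{i=1}^{N} x_i
```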
Okay, the centroid. So I compute the centroid. Let's say the centroid is here.
Okay, it's nice that you can show that the fitted line usually passes through the centroid as well.
And then we place the coordinate system, let's say, like this here. So this is the centroid.
What does that mean for the straight line if the centroid coincides with the origin of the coordinate system?
That basically means that we have a linear function to compute and not an affine function because we don't have an offset here.
So the translation T is zero. It disappeared.
So by this simple trick, I put the coordinate system into the point set, or rather I select the coordinate system in such a way that the centroid coincides with the coordinate origin.
That's something that can be done easily. You get your input data.
You shift it in such a way that the centroid coincides with the origin of the coordinate system.
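As a minimal sketch of this centering step, assuming the points are stored as rows of a NumPy array called `points` (the variable names and the data are illustrative, not from the lecture):

```python
import numpy as np

# Example 2D point set: one point per row (illustrative data).
points = np.array([[1.0, 2.1],
                   [2.0, 2.9],
                   [3.0, 4.2],
                   [4.0, 5.0]])

# Centroid = mean over all points, taken coordinate-wise.
centroid = points.mean(axis=0)

# Shift the data so the centroid coincides with the coordinate origin.
centered = points - centroid

print(centroid)               # center of gravity of the data
print(centered.mean(axis=0))  # numerically the zero vector
```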
So what do I need to know to find out which one is the right straight line that fits through the whole point set?
Well, what do I have to do? I have to compute here basically the normal vector, right?
If I give you the normal vector, you basically can draw the figure.
If I say the 2D line is characterized by the following normal vector, zero, one, you can immediately draw the line.
Zero, one means it's a line parallel to the x-axis, or basically it is the x-axis itself.
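Written as an equation (notation assumed here), the line through the origin with unit normal n is the set of points whose scalar product with n vanishes, and for n = (0, 1) that is exactly the x-axis:

```latex
\ell \;=\; \{\, x \in \mathbb{R}^2 : n^{\top} x = 0 \,\},
\qquad
n = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\;\Rightarrow\;
y = 0 \quad \text{(the x-axis)}.
```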
OK. So what do we have to do now? We basically have to compute the normal vector.
How do we compute the normal vector? First of all, we compute the normal vector in such a way that the sum of these distances here, you remember that?
The Euclidean distances of the points to the line, they have to be minimized.
Roman, you remember that?
A little bit. So we draw a line and then we can compute the distance of the points to the line and sum it up.
And then we look for the line such that the sum of the distances or the square distances is as small as possible.
OK. That's the idea. So how do I compute this distance here for a given n?
Roman, how do I compute the distance here for a given n?
Well, for a given point xi, that's a 2D point.
I just multiply this point with n and I get the projected length, right?
You remember that? You have here your point, you have here your vector, and you project the vector here, then you just compute this here with the scalar product.
And if this vector has unit length, this is the Euclidean length of this projected vector. You remember that?
Yes, of course, sir. That's the answer I want to hear. Let's try.
Yes, of course. And then we sum the whole thing over all i. And now we have the problem.
This here is the signed distance. So if I go to this side, I will get a negative contribution.
And to make sure that I don't have the mutual cancellation of positive and negative contributions,
I do the standard trick: I take the square of it, and that is what I minimize.
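As a small sketch of these two steps, the signed projected distance and the squared sum, assuming centered points in a NumPy array and a hypothetical unit normal `n` (all names and data are illustrative, not from the lecture):

```python
import numpy as np

# Centered 2D points, one per row (illustrative data, centroid already at the origin).
centered = np.array([[-1.5, -1.45],
                     [-0.5, -0.65],
                     [ 0.5,  0.65],
                     [ 1.5,  1.45]])

# A hypothetical unit normal vector; any n with ||n|| = 1 would do here.
n = np.array([0.0, 1.0])

# Signed distance of each point to the line through the origin with normal n:
# the scalar product x_i^T n (positive on one side of the line, negative on the other).
signed_distances = centered @ n

# Squaring removes the sign, so positive and negative contributions cannot cancel.
cost = np.sum(signed_distances ** 2)
print(signed_distances, cost)
```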
Subject to the constraint that the normal vector has unit length.
OK. OK. And that's the optimization problem I have to solve. And so I use the Lagrange multiplier method.
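In written form, the optimization problem just stated (with the centered points denoted x_i, a notation assumed here) is:

```latex
\min_{n}\; \sum_{i} \left( x_i^{\top} n \right)^{2}
\quad \text{subject to} \quad n^{\top} n = 1 .
```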
So I get the sum over i, and I rewrite the product. So it's x_i transposed n times x_i transposed n, plus lambda times one minus n transposed n.
And this has to be minimal. And this can be rewritten as, hold on a second, I'll start a new page.
So I have the sum over all i of x_i transposed n times x_i transposed n, plus lambda times one minus n transposed n, and this is equal to...
And this can be rewritten as the sum over i of n transposed x_i x_i transposed n, and here a transposed, plus lambda times one minus n transposed n.
And this is equal to, and now I can bring the sum inside: n transposed, times the sum over all i of x_i x_i transposed, times n, plus lambda times one minus n transposed n.
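Cleaned up, this chain of rewritings reads (again with the notation assumed above):

```latex
L(n, \lambda)
= \sum_{i} \left( x_i^{\top} n \right)\left( x_i^{\top} n \right)
  + \lambda \left( 1 - n^{\top} n \right)
= \sum_{i} n^{\top} x_i \, x_i^{\top} n
  + \lambda \left( 1 - n^{\top} n \right)
= n^{\top} \Big( \sum_{i} x_i \, x_i^{\top} \Big) n
  + \lambda \left( 1 - n^{\top} n \right).
```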
And this has to be minimized. OK. And what is this here, Roman?
The covariance matrix, yes! That's the covariance matrix of our points.
So we are basically now computing the smallest principal axis of the covariance matrix. We will see it in a few seconds.
You remember the mapping of a circle to an ellipse by a linear mapping, and we are now looking for the smallest axis of that ellipse.
Let's see how we can compute it. SVD will be the solution for sure, right Thomas? But we will see how that works.
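As a side note the lecture only hints at here: if X is the matrix whose rows are the centered points x_i transposed (notation assumed), then the matrix in the quadratic form above is X transposed X, and the SVD of X directly delivers its eigenvectors:

```latex
X = \begin{pmatrix} x_1^{\top} \\ \vdots \\ x_N^{\top} \end{pmatrix},
\qquad
\sum_{i} x_i \, x_i^{\top} = X^{\top} X,
\qquad
X = U \Sigma V^{\top}
\;\Rightarrow\;
X^{\top} X = V \,(\Sigma^{\top}\Sigma)\, V^{\top}.
```

The right singular vector belonging to the smallest singular value is then the normal we are after.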
So we want to minimize this, so a necessary condition here is that the derivative of this function here, f of n, with respect to the components of n has to vanish, right?
The derivative has to vanish. If I want to minimize the function at this point where the minimum is, I need a zero derivative, right? In all dimensions.
So let's compute the derivative here. Oops, there is a transpose missing.
So basically we get two times the sum over i of x_i x_i transposed, that's the covariance matrix, times n, is equal to, and now the derivative of the lambda times one term is zero...
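Setting this derivative to zero leads to the eigenvalue problem (sum over i of x_i x_i transposed) n = lambda n, and since we are minimizing, n is the eigenvector belonging to the smallest eigenvalue. A minimal sketch of the complete fit, assuming the points sit in the rows of a NumPy array (function and variable names are illustrative, not from the lecture):

```python
import numpy as np

def fit_line_normal(points):
    """Total-least-squares line fit: returns (centroid, unit normal)."""
    points = np.asarray(points, dtype=float)

    # 1. Center the data so the centroid coincides with the origin.
    centroid = points.mean(axis=0)
    centered = points - centroid

    # 2. Scatter matrix  sum_i x_i x_i^T  of the centered points.
    scatter = centered.T @ centered

    # 3. The eigenvector for the smallest eigenvalue minimizes the sum of
    #    squared point-to-line distances; eigh sorts eigenvalues ascending.
    eigvals, eigvecs = np.linalg.eigh(scatter)
    normal = eigvecs[:, 0]

    return centroid, normal

# Usage on illustrative data scattered around the line y = x + 1.
pts = [[0.0, 1.1], [1.0, 1.9], [2.0, 3.05], [3.0, 4.0]]
c, n = fit_line_normal(pts)
print("centroid:", c, "normal:", n)   # normal roughly proportional to (1, -1)
```

Equivalently, `np.linalg.svd(centered)` gives the same normal as the right singular vector belonging to the smallest singular value, which is the SVD route mentioned above.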