18 - (Lecture 6, Part 2) Camera Calibration [ID:32159]

Hello everyone and welcome back to the computer vision lecture series.

This is lecture 6 part 2.

We will continue from where we left off in the last part.

We were talking about camera calibration, and in that direction we saw some definitions of what transforms mean and how they act as global transforms.

We also saw different ways of fitting lines, namely linear least squares and total least squares.

We saw two different methods for solving optimization problems of the form Ax = b and Ax = 0.

And in this part of the lecture, we are going to see how we can use these optimization techniques for recovering camera parameters.

So, a little recap before we go ahead: we saw these two common optimization problems.

The first one is of the form Ax = b, the linear least squares problem, and the solution is given by a closed-form expression, x = (A^T A)^(-1) A^T b; that is, we take the pseudo-inverse of A and apply it to b. There is a direct command available in MATLAB for the same.

We also talked about how A may not be square, or could even be a singular matrix, so an ordinary inverse does not exist in general and we always have to take a pseudo-inverse.

Essentially this is a problem where we minimize the two-norm of Ax minus b.
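As a minimal illustration of this closed-form solution, here is my own NumPy sketch with made-up data; the lecture itself only refers to a MATLAB command, so this is an assumption about how one might write it, not the lecture's code.

```python
import numpy as np

# Illustrative data, not from the lecture: an over-determined system A x = b.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=10)      # noisy observations

# Closed-form linear least squares: x = (A^T A)^(-1) A^T b,
# i.e. the pseudo-inverse of A applied to b.
x_pinv = np.linalg.pinv(A) @ b

# The same minimizer of ||Ax - b||_2, via the dedicated solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_pinv, x_lstsq)
```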

Another problem statement that we saw is of the form Ax = 0. It has the constraint x^T x = 1, and therefore essentially what we do is minimize the norm of Ax subject to that constraint.

The solution is a simple eigenvalue decomposition of A^T A, where we take the minimum eigenvalue and its corresponding eigenvector; that eigenvector is the final solution for this kind of problem.
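A corresponding sketch for the homogeneous case, again my own NumPy illustration with made-up data rather than anything shown in the lecture:

```python
import numpy as np

# Illustrative data, not from the lecture: minimize ||Ax||^2 subject to x^T x = 1.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 4))

# Eigen-decomposition of A^T A; np.linalg.eigh returns eigenvalues in
# ascending order, so column 0 is the eigenvector of the minimum eigenvalue.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
x = eigvecs[:, 0]

# Equivalent route: the right-singular vector of A with the smallest singular value.
x_svd = np.linalg.svd(A)[2][-1]

print(np.linalg.norm(A @ x), np.linalg.norm(x))   # small residual, unit norm
```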

Now we are going to see how we can apply these methods in recovering camera parameters.

Both of these methods are called global optimization methods. So total least squares as well as linear least squares form this family of global, direct optimization methods.

The good thing is that they are easy to implement, easy to understand, and the optimization is quite straightforward. You just have two equations to solve and that is it.

There is a clearly specified objective: basically you have points given to you from the real world as well as from the image plane, and all you have to do is compute the solution through the given equations, as sketched in the example below.
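To make that concrete, here is a hedged sketch of how such a system could be assembled and solved with the Ax = 0 machinery above. It assumes the standard projective camera model and the usual direct-linear-transform style row construction; the function name and point format are purely illustrative and not taken from the lecture.

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Illustrative helper (not the lecture's code): estimate a 3x4 projection
    matrix P from world<->image point correspondences by stacking two linear
    equations per correspondence and solving A p = 0 with ||p|| = 1."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)

    # Ax = 0 solution from before: the right-singular vector of A belonging
    # to the smallest singular value (equivalently, the eigenvector of A^T A
    # with the minimum eigenvalue).
    _, _, Vh = np.linalg.svd(A)
    p = Vh[-1]
    return p.reshape(3, 4)
```

Since P has 12 entries but is only defined up to scale (11 degrees of freedom), and each correspondence contributes two equations, at least six point correspondences are needed.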

The bad thing is that if you have a point which is an outlier, one that does not fall within the cluster, then its contribution still gets counted and it affects the final solution, and there is no way of getting rid of that contribution, because this is a direct method and all the points are considered at once.

So there might be bad matches, there might be some extra points like those outliers, which might distort our final solution.

It therefore does not give us a good fit, and it is not possible to get multiple fits either.

You are left with only one solution and it is not possible to improve it.
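As a small illustration of this sensitivity (my own example with made-up numbers, not from the lecture), here is a line fit with linear least squares, once on clean points and once with a single outlier:

```python
import numpy as np

# Points lying exactly on y = 2x + 1.
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0

# Design matrix for fitting y = m*x + c with linear least squares.
A = np.column_stack([x, np.ones_like(x)])
m_clean, c_clean = np.linalg.lstsq(A, y, rcond=None)[0]

# Corrupt one point (a "bad match") and fit again.
y_out = y.copy()
y_out[-1] += 50.0
m_out, c_out = np.linalg.lstsq(A, y_out, rcond=None)[0]

print(m_clean, c_clean)   # ~2.0, ~1.0
print(m_out, c_out)       # visibly shifted: the outlier is counted like every other point
```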

However, iterative solutions are better in this respect, and there is one method explained in the Szeliski reference book, so if you are interested you can just check it out; read it once and you will understand how it works.


Accessible via: Open Access
Duration: 00:25:01 min
Recording date: 2021-05-03
Uploaded on: 2021-05-03 17:17:20
Language: en-US
Tags: Computer Vision