21 - (Lecture 7, Part 3) Epipolar Geometry and RANSAC [ID:32176]

Hello everyone and welcome back to the computer vision lecture series. This is lecture 7, part 3. In this lecture we are going to continue talking about solutions to model fitting in the presence of inliers and outliers. Specifically, we are going to look into Hough transforms and RANSAC.

But before we go there, let's talk about our original stereo problem. When you have a stereo image pair, how can we know which of the correspondences, or which of the matches you generated, are the good ones? Is there a way to know apart from inspecting them visually? Visual inspection takes time, and if you have hundreds of thousands or millions of images it is not efficient. So how do you choose the good matches? We need to find some quantitative measure based on the algorithms or methods that we use: maybe some parameterization, some constraints that we can impose, that can generate good matches for us. For example,

with linear least squares: when we use linear least squares to fit a line to a set of points, as in this case, you see that least squares is able to fit all the red points here, so it is a good fit. However, it is not robust to noise. If there is an outlier, the linear least squares fit is not able to penalize the outlier with a low value; the outlier generates a large error, the model is fitted using those large errors, and because of that the shape of the fit changes and it is no longer a good fit.

So there is an alternative to linear least squares called robust least squares, which is meant to deal with the outliers. The general approach is to minimize a robust function rho of the residuals with a scale parameter sigma. In the linear least squares case, the graph of the error, that is, the penalty imposed on the fitting algorithm, grows quadratically with the residual. Instead of using this parabolic, ever-increasing error curve, we flatten the error curve above a certain limit. So if the residual is larger than a certain limit, the function applies a roughly constant penalty for larger errors, or residuals in this case. The scale sigma can be chosen using adaptive measures, and this kind of robust function helps least squares deal with some outliers: the effect of the outlier is minimized if we use the robust least squares measure. However, if the scale parameter is not chosen properly, the model fit is again not very good. If the chosen scale is too large or too small, the fit is quite poor and behaves more or less like plain linear least squares.
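As a concrete illustration of this flattened-penalty idea, here is a minimal numpy sketch, assuming a Huber-style weight function inside an iteratively reweighted least squares loop; the function names, the constant k = 1.345, and the 1.4826 median-based scale factor are standard choices of mine, not details taken from the lecture.

```python
import numpy as np

def fit_line(x, y, w=None):
    # (Weighted) linear least squares fit of y = a*x + b.
    if w is None:
        w = np.ones_like(x)
    sw = np.sqrt(w)
    A = np.stack([x, np.ones_like(x)], axis=1) * sw[:, None]
    a, b = np.linalg.lstsq(A, y * sw, rcond=None)[0]
    return a, b

def fit_line_robust(x, y, k=1.345, n_iter=20):
    # Iteratively reweighted least squares with a Huber-style weight:
    # residuals beyond k*sigma are down-weighted, which flattens the
    # quadratic penalty curve above that limit.
    a, b = fit_line(x, y)                      # plain least squares init
    for _ in range(n_iter):
        r = y - (a * x + b)
        sigma = 1.4826 * np.median(np.abs(r))  # scale from median residual
        sigma = max(sigma, 1e-12)
        u = np.abs(r) / sigma
        w = np.where(u <= k, 1.0, k / u)       # constant-slope penalty tail
        a, b = fit_line(x, y, w)
    return a, b

# Points exactly on the line y = 2x + 1, plus one gross outlier.
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[9] = 60.0                                    # outlier

a_ls, b_ls = fit_line(x, y)
a_rob, b_rob = fit_line_robust(x, y)
print("least squares:", a_ls, b_ls)            # pulled toward the outlier
print("robust:", a_rob, b_rob)                 # close to a=2, b=1
```

Running this, the plain least squares slope is dragged well above 2 by the single outlier, while the robust fit stays essentially on the nine inlier points.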

Robust estimation is basically a non-linear parameter optimization problem that can be solved iteratively, for example with repeated linear least squares solves. The scale of the robust function should be chosen appropriately, and we can choose it adaptively based on the median residuals and things like that: we compute all the error values, take their median, and from that choose an appropriate scale. But this is a hack, a very rough approximation to fitting robust functions. Still, these kinds of methods provide a good initialization for more sophisticated iterative methods. So fitting and alignment methods

that we have seen so far basically search the global space for the optimal parameters: linear least squares and robust least squares are the ones we have seen. There is another method in this family called iterative closest point, which we are not covering in this series. And then there is another set of methods which propose a hypothesis, test that hypothesis, that is, those model parameters or model fits, and then do a number of different runs to choose the best possible parameters for the model fit. In this category there are methods like Hough transforms and RANSAC.

So we begin with Hough transforms. The Hough transform basically estimates parameters through a voting procedure. What is usually done? Let's say, for example, we are fitting a line, or you are running an edge detection algorithm on a certain image, maybe a Canny edge detector or any other state-of-the-art method. Maybe there is a line, and the edges are not detected continuously along that line. Maybe the algorithm
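The voting procedure just introduced can be sketched as follows: every edge point votes for all line parameters (theta, rho) consistent with it, and the accumulator cell with the most votes recovers the line. This is a toy sketch under my own assumptions (1-degree and 1-pixel bins, a hand-made set of edge points), not the lecture's implementation.

```python
import numpy as np

# Toy "edge pixels": five lie on the horizontal line y = 3,
# plus two stray points playing the role of noise and clutter.
pts = [(0, 3), (2, 3), (4, 3), (6, 3), (8, 3), (1, 7), (5, 0)]

# Line parameterization: x*cos(theta) + y*sin(theta) = rho.
thetas = np.deg2rad(np.arange(0, 180))        # 1-degree bins
rho_max = 12
rhos = np.arange(-rho_max, rho_max + 1)       # 1-pixel bins
acc = np.zeros((len(thetas), len(rhos)), dtype=int)

# Voting: each edge point votes for every line that passes through it.
for x, y in pts:
    for ti, t in enumerate(thetas):
        rho = x * np.cos(t) + y * np.sin(t)
        ri = int(round(rho)) + rho_max        # shift rho into an array index
        if 0 <= ri < len(rhos):
            acc[ti, ri] += 1

# The accumulator peak recovers the dominant line.
ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
print("votes:", acc[ti, ri])
print("theta (deg):", np.rad2deg(thetas[ti]), "rho:", rhos[ri])
```

The five collinear points all vote into the same cell near theta = 90 degrees, rho = 3, so the peak stands out even though the stray points scatter their votes across many cells.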

Part of a video series

Presenters

Accessible via: Open access

Duration: 00:23:56 min

Recording date: 2021-05-03

Uploaded on: 2021-05-03 17:38:21

Language: en-US

Tags: Computer Vision