(Lecture 9, Part 1) Binocular Stereo and Disparity

Hello everyone, and welcome back to the computer vision lecture series.

This is lecture 9 part 1.

In this lecture we are going to talk about binocular stereo and disparity.

We will see different problems in multiple view geometry.

One of them is binocular stereo.

We are going to look at how to approach that problem and how to solve it using different methods.

We are also going to study how to calculate disparity maps, which are basically depth maps, using these techniques from multiple view geometry.

So let's jump into them.
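Before diving in, here is a minimal sketch of what such a disparity computation can look like in practice (not necessarily the exact method covered later in this lecture), using OpenCV's simple block matcher on an already rectified stereo pair; the file names, focal length f, and baseline B are placeholder values chosen only for illustration.

```python
import cv2
import numpy as np

# Load an already rectified stereo pair as grayscale (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: for each pixel, search along the same scanline for the best match.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# For a rectified pair, depth follows from disparity as Z = f * B / d
# (f: focal length in pixels, B: baseline in metres; assumed values here).
f, B = 700.0, 0.1
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```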

So in multiple view geometry, one of the main problems is detecting structures of the real world.

We capture different images of the 3D world from different camera positions.

For example, here we have three different camera systems imaging the same 3D world, converting 3D points into 2D points on each image plane.

And for each of these cameras, the rotation and translation matrices are basically known.

We know these parameters; using this information, can we reconstruct the original location of a 3D point or not?

That is the question, or the problem, we are trying to solve when detecting structure with multiple view geometry: we have different combinations of cameras set at different points around the scene, we capture the scene in different views, and we try to reconstruct the 3D world from them.
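To make this reconstruction question concrete, here is a small sketch of linear (DLT) triangulation: given two projection matrices built from known intrinsics and the known rotation and translation, plus one matched pixel in each image, the 3D point is recovered as the null vector of a small homogeneous system. The numeric camera values are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector = row for the smallest singular value
    X = Vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean coordinates

# Example: camera 1 at the origin, camera 2 shifted 10 cm along x (illustrative values).
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
```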

If you remember, in the beginning of the lecture series, I think in the first part, when we were discussing some applications of computer vision, one of them was reconstruction of a 3D view by capturing multiple views of the same scene using time-shifted cameras.

So we looked at a scene from the movie The Matrix and saw how it was constructed.

Something similar can be thought of in this context as well, where we are trying to estimate the 3D coordinates of a point in the real world using images captured from different viewpoints.

Using multiple view geometry, we can also work with sets of corresponding points, that is, known correspondences between two or more images. For example, if we know these point correspondences in each and every image that we generated, can we reconstruct the camera parameters from them? In this case the images are captured using the same camera as it moves along the scene.

So in a way you are reconstructing either the motion of the camera, or, if your camera is stationary and the scene is changing, you capture a sequence of images or a video, find correspondences between different frames of a given video or between different images, and try to reconstruct the motion of the objects inside them.
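As a rough, self-contained sketch of this second problem (camera motion from known correspondences), here is a synthetic OpenCV example: project random 3D points with a known relative pose to obtain correspondences, then recover the rotation and translation direction from the essential matrix. The intrinsics K and the pose values are illustrative assumptions, not from the lecture.

```python
import cv2
import numpy as np

# Synthetic setup: project random 3D points with two known camera poses,
# then recover the relative motion from the correspondences alone.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))   # points in front of both cameras

R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))   # small rotation about the y axis
t_true = np.array([[0.2], [0.0], [0.0]])                     # shift along the x axis

def project(points, R, t):
    cam = (R @ points.T + t).T                                # world -> camera coordinates
    uv = (K @ cam.T).T
    return (uv[:, :2] / uv[:, 2:3]).astype(np.float64)        # perspective division

pts1 = project(pts3d, np.eye(3), np.zeros((3, 1)))            # image points in camera 1
pts2 = project(pts3d, R_true, t_true)                         # image points in camera 2

# Essential matrix from the matched points, then decompose into R and t
# (the translation is only recovered up to scale).
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```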

This kind of problem can also be solved, and we have already seen this in the previous part of the lecture, where we discussed dense motion estimation in detail. There we essentially computed an optical flow vector for each and every pixel of a given image, and this optical flow vector gives us the direction of motion and its magnitude, that is, how much the pixel moved from one image or frame to the next.

So optical flow is also a way of estimating motion.
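As a reminder of what that looks like in code, here is a minimal sketch of dense optical flow with OpenCV's Farnebäck method (one of several possible methods, not necessarily the one used in the previous lecture), assuming two consecutive grayscale frames with placeholder file names; the result is one (dx, dy) vector per pixel, from which motion direction and magnitude follow.

```python
import cv2

# Two consecutive grayscale frames (file names are placeholders).
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: one (dx, dy) displacement vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Convert to magnitude (how far each pixel moved) and angle (in which direction).
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
```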

So we saw this in the previous part of the lectures, and we have also seen parametric motion estimation, again using optical flow techniques and minimizing an error metric. Essentially, you set up an error metric and try to optimize it over different transforms, whether it is a Euclidean or rigid body transform, or whether it is translational.
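For the simplest, purely translational case, a minimal sketch of that idea could look like the following: define a sum-of-squared-differences error between the second frame and a shifted copy of the first, and pick the shift that minimizes it. The exhaustive search is only for illustration; the lecture's approach was based on optical flow rather than brute force.

```python
import numpy as np

def estimate_translation(frame0, frame1, max_shift=8):
    """Brute-force global translation estimate: minimize the
    sum-of-squared-differences error over integer shifts (dx, dy)."""
    best_err, best_shift = np.inf, (0, 0)
    f1 = frame1.astype(np.float64)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # np.roll wraps around at the borders, which is good enough for a sketch.
            shifted = np.roll(np.roll(frame0.astype(np.float64), dy, axis=0), dx, axis=1)
            err = np.sum((f1 - shifted) ** 2)
            if err < best_err:
                best_err, best_shift = err, (dx, dy)
    return best_shift
```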
