Today we have to start on time because I have to leave at 20 to 5 sharp.
So, medical image processing, interventional medical image processing, welcome to the Monday session, the short one.
We are currently in the context of 3D ultrasound. Last time we briefly introduced the core idea of ultrasound systems, and I pointed out that you basically have an ultrasound probe, and with this ultrasound probe you can capture a 2D image.
With some markers attached to the ultrasound probe you can also capture its position and orientation, and then you can sort the acquired slices into a 3D volume to generate 3D ultrasound images.
And the question is how we can get the required information on the position and orientation of the ultrasound probe.
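To make the compounding step concrete, here is a minimal sketch (my own, not from the lecture) of how a tracked 2D slice could be sorted into a 3D volume once its pose is known; the assumption is that the pose is given as a rotation matrix R and a translation t of the image plane in world coordinates, and the spacing parameters and the nearest-neighbour scattering are illustrative choices only.

```python
import numpy as np

def insert_slice(volume, slice_2d, R, t, pixel_spacing, voxel_spacing):
    """Scatter one tracked 2D ultrasound slice into a 3D volume.

    R (3x3) and t (3,) give the assumed pose of the image plane in world
    coordinates; pixel_spacing and voxel_spacing are assumed scalars in mm.
    Nearest-neighbour sketch, not a full compounding algorithm.
    """
    h, w = slice_2d.shape
    # Pixel grid in the local image plane (z = 0 in probe coordinates).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts_local = np.stack([u.ravel() * pixel_spacing,
                          v.ravel() * pixel_spacing,
                          np.zeros(u.size)], axis=0)        # 3 x N
    pts_world = R @ pts_local + t[:, None]                  # rigid transform
    idx = np.round(pts_world / voxel_spacing).astype(int)   # voxel indices
    # Keep only points that fall inside the volume bounds.
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, inside], idx[1, inside], idx[2, inside]] = slice_2d.ravel()[inside]
    return volume
```

In a real compounding pipeline every tracked frame would be inserted like this, and gaps between the slices would typically be filled by interpolation afterwards.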
Last time I showed you some mechanical devices that, for instance, rotate the probe around a predefined axis in a very precise manner.
So there are many mechanical ideas to do that, and at the end we saw the modern navigation systems, where we use markers for computer-vision-based object localization and pose estimation.
And today we will look into two important methods for the computation of the position
and orientation of our object.
So the picture we have to keep in mind is this: we have here our ultrasound probe with a cable, and attached to it we have some markers, and these markers are usually easily detected in images captured by a standard video camera.
So we capture the whole scenario with a video camera mounted to the ceiling, we compute, for instance, the centroid of these markers, and we use these points as 2D points in the image plane for 3D pose estimation, that is, for the estimation of position and orientation.
That's the idea.
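As a small illustration (again my own sketch, not part of the lecture), the 2D marker centroids could be extracted from a binary detection mask like this; the use of scipy.ndimage for connected-component labelling is an assumed implementation choice.

```python
import numpy as np
from scipy import ndimage

def marker_centroids(mask):
    """Return the 2D image-plane centroids of the detected markers.

    mask: boolean image in which marker pixels are True. Each connected
    component is treated as one marker; its centroid (row, col) serves
    as a 2D point for the subsequent pose estimation.
    """
    labels, n = ndimage.label(mask)
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))
```

These 2D points are the input to the pose estimation methods discussed next.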
And we will start out with the factorization method based on orthographic projection.
Let me briefly remind you what types of projection models we will consider.
In the winter semester we saw a more detailed characterization of projection models.
For us here it is sufficient to have the following scenario: here is your image plane, here your 3D point, and here your optical center.
You can either connect the 3D point with the optical center and compute the intersection
with the image plane.
This is the so-called perspective projection.
And last time we saw that the perspective projection is nonlinear in X, Y and Z, that is, in the coordinates of the point, because we have to compute a ratio.
Instead we have also considered the orthogonal or orthographic projection, where we basically just forget about the range value and project the point orthogonally onto the image plane.
This is a much simpler mapping, it is linear, the linear mapping can easily be expressed in matrix notation, and all the math related to it is fairly simple.
So this is the orthographic projection.
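For reference, here is a compact summary of the two mappings in my own notation (the focal length f for the perspective case is an assumption, since the lecture only describes the models geometrically). For a 3D point (X, Y, Z), perspective projection through the optical center gives

$$
u = f\,\frac{X}{Z}, \qquad v = f\,\frac{Y}{Z},
$$

which is nonlinear in the point coordinates because of the division by Z, whereas orthographic projection simply drops the range coordinate,

$$
\begin{pmatrix} u \\ v \end{pmatrix}
=
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix},
$$

which is a linear map and is therefore easy to express in matrix notation.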
Now I want to consider the image generation process.
So I am considering my CCD chip and my 3D points, and I know that I apply a certain projection model to the 3D points to get my 2D image points, similar to what we did in the epipolar geometry case.
But now we first assume that we have orthographic projection. And do you know which projection model we used for epipolar geometry?
Was it orthographic projection or perspective projection?