11 - Interventional Medical Image Processing (formerly Medizinische Bildverarbeitung 2) (IMIP) [ID:372]

So, welcome to the Monday session, just 45 minutes on medical image processing, interventional

medical image processing.

And where are we currently?

We are currently considering light fields and virtual endoscopy.

So we are considering a topic that sits in between computer graphics and computer vision/image processing.

So that's something in between.

Light fields were invented by computer graphics people and a lot of computer vision is required

to implement these things.

And I think I have mentioned that there is a group at Stanford who invented light fields and who are also currently working on various applications of light fields.

For instance, light fields are also applied to microscopy and many other medical applications.

And what we have discussed so far... oh, I see, there is a typo in "introduction".

What is this?

That's interesting.

So, we have introduced the plenoptic function and we are currently looking into local geometry rendering.
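For reference, since the plenoptic function is only recalled in passing here, its full form as introduced by Adelson and Bergen records radiance as a function of viewpoint, direction, wavelength, and time:

$$
P = P(x, y, z, \theta, \phi, \lambda, t),
$$

and a light field is the 4D reduction of this function for static scenes in free space.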

And that's the point where I would like to start today.

That's local geometry rendering.

The method we discussed was developed here at our university about ten years ago by Benno Heigl, a PhD student at that time, together with a colleague of mine, Professor Koch.

And the main goals have been: we want to allow for arbitrary camera motions.

Yeah, sorry, that's the back issue.

That's a good explanation.

And we don't want to fill the light slabs, for instance, with information in a way that requires you to visit very specific positions with your camera.

We want to have a freehand motion, and out of that freehand motion we would like to compute 3D information and render new images.
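As a reminder (this is the standard two-plane parameterization of Levoy and Hanrahan, not spelled out again in the lecture), a light slab indexes each ray by its intersections with two parallel planes,

$$
L = L(u, v, s, t),
$$

which is exactly why filling one normally forces the camera onto a regular grid of positions.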

What we have considered so far, and that's important to remember: we have discussed methods that allow us to compute 3D structure and camera motion out of point correspondences.

That's also something that is heavily used in this framework.

So we move our camera, we capture image sequences, and we track points using standard point trackers. Out of the point correspondences, using for instance the factorization methods we have discussed, we can compute the camera motion, that is the extrinsic camera parameters, and the 3D surface.
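To make the factorization step concrete, here is a minimal sketch in Python; this is my own illustration, not code from the lecture, and it assumes the affine (Tomasi-Kanade style) variant with a complete measurement matrix of tracked points:

```python
import numpy as np

def factorize(W):
    """Affine factorization sketch (Tomasi-Kanade style).

    W: 2F x P measurement matrix stacking the x- and y-coordinates of
    P tracked points over F frames.
    Returns camera motion M (2F x 3) and structure S (3 x P), recovered
    only up to an affine ambiguity (the metric upgrade is omitted here).
    """
    # Remove the per-frame centroid so the world origin sits at the
    # centroid of the 3D points.
    W_centered = W - W.mean(axis=1, keepdims=True)
    # The centered matrix has rank 3 for a noise-free affine camera;
    # the SVD gives the best rank-3 factorization in the noisy case.
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])          # stacked camera rows
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3D point coordinates
    return M, S
```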

How would you compute the intrinsic camera parameters?

So the K matrix.

We have here the intrinsic camera parameters, and the extrinsic ones represented by rotation and translation. The extrinsic parameters can be estimated using the factorization method. The intrinsic camera parameters, and it's important to know this, are constant while I move my camera.

They are constant, and they can be calibrated using a calibration pattern.
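In the usual pinhole notation (standard textbook form, not a quote from the slide), a homogeneous world point projects as

$$
\mathbf{x} \sim K \, [R \mid \mathbf{t}] \, \mathbf{X},
\qquad
K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix},
$$

where $K$ collects the constant intrinsic parameters (focal lengths, skew, principal point) and $R$, $\mathbf{t}$ are the extrinsics that change with every camera position.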

So once I get a new camera, I just take a calibration pattern, capture an image, and out of this I can compute the intrinsic camera parameters.
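As a practical sketch of pattern-based calibration, here is what it can look like with OpenCV's chessboard routines; the library choice and the file names are my assumptions, not the lecture's, and in practice one typically captures several views of the pattern rather than a single image:

```python
import numpy as np
import cv2

pattern_size = (9, 6)  # inner corners of the chessboard (assumed layout)

# 3D corner coordinates in the pattern's own plane (Z = 0), unit squares.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["calib0.png", "calib1.png", "calib2.png"]:  # hypothetical files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix discussed above; since it stays fixed
# while the camera moves, calibrating once per camera is enough.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print(K)
```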

There are also methods for self-calibration that work without a calibration pattern, but that is a completely different discipline and would require a whole lecture on computer vision, which we don't want to do here.

So the main goal of the approach we are currently discussing is that we want to render new images from arbitrary image sequences, without acquisition constraints.
