7 - Interventional Medical Image Processing [ID:5037]

So, welcome everybody. Let's continue with our lecture on interventional medical image processing. Today we will look into some noise reduction techniques, and the interesting part is that these techniques try to preserve edges. We have already seen that edges are important and that noise interferes with the processes we want to apply. For example, if we want to detect key points, we have already seen that edge-preserving filtering is a good post-processing technique for improving the outcomes of the key point detection. So today we will look into several versions of edge-preserving filtering, and we will highlight two methods in particular: one based on bilateral filters and the other on guided filtering. Both techniques are very popular and quite straightforward to implement. Towards the end of the lecture we will also show how to adapt these methods to a particular application, and we will see that most of the time it is not so difficult to adapt the algorithm, but you still have to work with it and understand the properties of your data; then you can get pretty nice processing results.
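Since the bilateral filter is one of the two methods highlighted here, a minimal sketch in Python/NumPy may help fix the idea before the lecture develops it; the brute-force windowed implementation and the parameter names (sigma_s for the spatial kernel, sigma_r for the range kernel) are illustrative choices, not the lecture's notation:

```python
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=None):
    """Brute-force bilateral filter for a 2-D grayscale image.

    Each output pixel is a weighted average over a window, where the
    weight combines spatial closeness (Gaussian, sigma_s) with
    intensity similarity (Gaussian, sigma_r), so homogeneous regions
    are smoothed while strong edges are preserved.
    """
    if radius is None:
        radius = int(3 * sigma_s)
    img = img.astype(np.float64)
    out = np.empty_like(img)
    # Spatial Gaussian over the (2r+1) x (2r+1) window, computed once.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(x**2 + y**2) / (2.0 * sigma_s**2))
    padded = np.pad(img, radius, mode='reflect')
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: down-weight pixels across an intensity edge.
            rng = np.exp(-(window - img[i, j])**2 / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

In practice an optimized implementation such as OpenCV's cv2.bilateralFilter would be used; the loop version above is only meant to make the weighting explicit.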

Okay, so here the motivation is going to be range imaging; later we will also look into X-ray imaging for the edge-preserving filtering. Of course, you have already seen these kinds of sensors in the lecture, and we have been working with them quite a bit. From range imaging data you can typically get different kinds of information. The nice thing is, for example with a time-of-flight camera, that the camera actively measures at every pixel how long it takes for a light ray emitted by a light source to travel out and be measured back. This is essentially a depth measurement, because from the time the light propagates and returns to the sensor you can compute a depth value at every pixel.
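Written out, with $c$ the speed of light and $\Delta t$ the measured round-trip time (my notation, not the lecture's), the per-pixel depth is

$$ d = \frac{c \, \Delta t}{2}, $$

where the factor $\frac{1}{2}$ accounts for the light traveling to the scene and back.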

Alternatively, you can use structured light, as in the old Kinect sensor, where a random pattern is projected into the scene; you can then use point correspondences to reconstruct the depth at every point of the random pattern, and from that you also get a dense depth field.
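The reconstruction from point correspondences follows the usual triangulation relation; as a sketch, assuming a rectified projector-camera pair with focal length $f$, baseline $b$, and disparity $d$ between the observed and the reference pattern position (again my notation, not the lecture's):

$$ z = \frac{f \, b}{d}, $$

so larger disparities correspond to closer surface points.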

So the nice thing is that these camera technologies deliver not just an RGB or grayscale image, but also the depth at every pixel of the camera image; you have a dense depth field. This kind of information is often called 2.5D, because you get essentially 3D information, but only from your viewpoint: unlike a full 3D data set, where every slice image carries complete volumetric information, here you only get information up to the surface, up to the first interaction of the light with the scene.

On the right side you see a couple of examples. This is such a depth image, where the depth is color coded, and you can see that one hand is in front of the other. You also get some grayscale information: this is the grayscale image, from which you can extract texture features, and it is typically aligned with the other information sources, or you can calibrate them against each other. And of course, from the range data you can reconstruct a point cloud. With a 3D visualization you can even rotate this point cloud and create a lateral view onto it, and of course you only get data on the surface with respect to the viewing direction.
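Reconstructing such a point cloud from a depth image is a standard pinhole back-projection; here is a minimal sketch, assuming a calibrated camera whose focal lengths fx, fy and principal point cx, cy are placeholders, not values from the lecture:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into an (N, 3) point cloud.

    Pinhole model: a pixel (u, v) with depth z maps to
    X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z.
    Pixels with depth 0 (no valid measurement) are dropped.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.astype(np.float64)
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```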

Typically these cameras have a measurement range of, let's say, one to seven meters, or one to three meters; this is the typical range over which the depth data is acquired. You also have only rather low resolutions, like 200 by 200 pixels or 640 by 480, which is a typical configuration for such a camera. The working principle differs between these cameras, but the data is somewhat similar. You should also be aware that there is a certain noise level, and this noise level may depend on the surface properties: surfaces with high reflectance, in particular metal surfaces in time-of-flight imaging, will spoil your measurements, so in some camera modalities you will not get reliable data on highly reflective surfaces. This leads to artifacts, and you then have to do some specific, modality-dependent pre-processing to get rid of them.
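For time-of-flight data, one common pre-processing step of this kind is to invalidate pixels whose signal amplitude is implausible. This is a hedged sketch of the idea only, with purely hypothetical thresholds that would have to be tuned for the specific camera:

```python
import numpy as np

def mask_unreliable_depth(depth, amplitude, amp_min=50.0, amp_max=2000.0):
    """Invalidate depth pixels with implausible signal amplitude.

    Very low amplitude means too little reflected light reached the
    sensor; very high amplitude (e.g. from metallic, highly reflective
    surfaces) often indicates a saturated, distorted measurement.
    amp_min and amp_max are hypothetical values, not camera defaults.
    """
    reliable = (amplitude > amp_min) & (amplitude < amp_max)
    cleaned = depth.copy()
    cleaned[~reliable] = 0.0  # 0 marks "no valid measurement"
    return cleaned
```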

Accessible via: Open Access
Duration: 01:28:20 min
Recording date: 2015-05-12
Uploaded on: 2015-05-25 14:19:28
Language: en-US

This lecture focuses on recent developments in image processing driven by medical applications. All algorithms are motivated by practical problems. The mathematical tools required to solve the considered image processing tasks will be introduced.
