23 - Diagnostic Medical Image Processing [ID:10398]

So, good morning everybody. Let's get started. The story of the lecture, or the

storyline is that we talk about modalities, image pre-processing, 3D reconstruction, and

the chapter on 3D reconstruction was quite lengthy this time and at some point also during

the lecture I had the feeling that we got lost in too many details, that there is some

danger that you lose the big picture of all of it. We will close the 3D reconstruction

today but before I do so I will explain to you one filtering operation that was really

pushed forward at the end of the 90s, 1998, and this filtering approach is used for many,

many tasks in particular also in the context of image smoothing. So we will talk about

the bilateral filter and I have slides for it but let me sketch as usual the idea on

the blackboard. The bilateral filter was introduced in 1998 by two computer vision researchers, Tomasi and Manduchi, and they came up with a very interesting idea.

If you want to build a filter, and there is a whole theory on filter design where you can easily get lost in all the formalisms, one of the simplest ideas is to take the pixels in a certain neighborhood in the image, add up their intensity values, and compute the average.

What is mean filtering? The mean filter computes the average intensity value of a given neighborhood and assigns that average to the pixel. By this you can reduce the jumps in intensity, the random noise, the variations in the local environment, and you flatten the image. The one drawback of a filter like this is that if you have edges in the image, meaning a bright area that goes over into a dark area, then at the edges you start to smear, you sandpaper the edges, and you lose contrast information. Images processed by mean filtering appear less noisy, but they also look a little bit blurred and unsharp. The question

is what do you want to do with the images and what kind of information is provided by the unfiltered image and by the filtered image, and in many situations it is also personal taste that decides whether you apply filtering or not.
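To make this concrete, here is a minimal mean-filter sketch in Python with NumPy. The function name, the fixed 3 by 3 window, and the zero padding at the border are illustrative assumptions, not code from the lecture.

```python
import numpy as np

def mean_filter_3x3(image: np.ndarray) -> np.ndarray:
    """Replace each pixel by the average of its 3x3 neighborhood."""
    # Zero-pad the border so every pixel has a full 3x3 neighborhood
    # (boundary handling is discussed again at the end of this passage).
    padded = np.pad(image.astype(float), 1, mode="constant", constant_values=0)
    filtered = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            window = padded[y:y + 3, x:x + 3]   # 3x3 neighborhood around (y, x)
            filtered[y, x] = window.mean()      # uniform weight of 1/9 per pixel
    return filtered
```

Applied to a noisy image, this reduces the random variations but also blurs the edges, exactly as described above.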

The computation of the new intensity value is done by summing up the intensity values f(x', y') in the neighborhood of the currently considered point (x, y); let's call the result g(x, y), that is the filtered image. We can weight them uniformly, saying if I have a 3 by 3 neighborhood each value is weighted by 1 over 9, or you can also say if the distance between two pixels is 1 I weight it with 1, and if the distance is square root of 2 I weight it with one half, or something like that. So here you have weights that depend on the position but not on the intensity values. These weights are the very important thing that I wrote here. We just look at the position of an intensity value, see how far away it is from the currently considered pixel, and weight the intensity value at that position purely depending on the geometry of the grid structure of the image. So I just say this intensity value goes into the averaging process with a pre-factor that depends on the geometry. If I use mean filtering, I just have a binary condition: the weight is 1 if the position is in the 3 by 3 neighborhood and 0 otherwise, so all values inside the local neighborhood are weighted equally when I sum over it. If you use more sophisticated filters, you say this pixel is closer than that one, this one has distance square root of 2, that one has distance 1, so the closer one should have a higher impact on the result, and you can incorporate this through the weighting scheme. That's how very common image filters work: you look at the grid geometry, you weight the intensities, and then, if you do linear filtering, you compute a linear combination of the neighboring pixels. So mean filtering means that the weight w(x', y') is 1 over 9 if (x', y') lies in the 3 by 3 neighborhood N(x, y) of the currently considered pixel, and 0 otherwise. Okay?
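Written out, the linear filter sketched on the blackboard has the following form; the symbols g, f, w, and N(x, y) follow the spoken description above, and the closing remark about normalization is an added note, not part of the lecture.

```latex
% Linear neighborhood filter: the filtered image g at pixel (x, y) is a
% weighted sum of the input image f over the neighborhood N(x, y); the
% weights w depend only on the position, not on the intensity values.
\[
  g(x, y) \;=\; \sum_{(x', y') \in N(x, y)} w(x', y')\, f(x', y')
\]
% Mean filter as the special case with a 3-by-3 neighborhood:
\[
  w(x', y') =
  \begin{cases}
    \tfrac{1}{9} & \text{if } (x', y') \in N(x, y),\\
    0            & \text{otherwise.}
  \end{cases}
\]
% In general the weights are normalized so that they sum to 1, which the
% choice 1/9 over a 3-by-3 window already satisfies.
```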

Now you can increase the window and say: the farther away I am, the lower the weight will be. Anybody in the audience who didn't catch this idea? The boundaries we do not consider here; the boundary is always something that you have to treat separately. Usually you just set the boundary values to 0 and then you get high
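The slides then turn to the bilateral filter announced at the beginning. Purely as a preview of the Tomasi and Manduchi idea, the sketch below extends the position-only weighting from above with a second weight that depends on the intensity difference to the center pixel, which is what lets the filter smooth without crossing edges. The Gaussian kernels, the replicate padding, and the parameter values sigma_s and sigma_r are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def bilateral_filter(image: np.ndarray, radius: int = 2,
                     sigma_s: float = 2.0, sigma_r: float = 20.0) -> np.ndarray:
    """Edge-preserving smoothing: each weight combines spatial distance
    (domain) and intensity difference (range), so the averaging does not
    reach across strong edges."""
    img = image.astype(float)
    # Replicate padding at the border (zero padding would darken the edges).
    padded = np.pad(img, radius, mode="edge")
    # Spatial weights depend only on the grid geometry, as in the text above.
    offsets = np.arange(-radius, radius + 1)
    dy, dx = np.meshgrid(offsets, offsets, indexing="ij")
    spatial = np.exp(-(dx**2 + dy**2) / (2.0 * sigma_s**2))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights depend on the intensity difference to the center.
            range_w = np.exp(-((window - img[y, x]) ** 2) / (2.0 * sigma_r**2))
            weights = spatial * range_w
            out[y, x] = np.sum(weights * window) / np.sum(weights)
    return out
```

With a very large sigma_r the range weight becomes almost constant and the filter degenerates into the purely geometric smoothing described above; with a small sigma_r, pixels on the other side of an edge receive almost no weight, so the edge stays sharp.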

Accessible via: Open Access
Duration: 01:26:47 min
Recording date: 2015-01-15
Uploaded on: 2019-04-10 08:19:02
Language: en-US

  • Modalities of medical imaging
  • Acquisition-specific image pre-processing
  • 3D reconstruction
  • Image registration

 
