The following content has been provided by the University of Erlangen-Nürnberg.
Yeah? Okay. Good. Does everybody remember the cloud? Yes? So we will not do the cloud
today? Or is anybody particularly interested in going through the cloud today? We can vote.
Who wants to go through the cloud? Okay. Who wants to skip the cloud? Okay. So this is
for today. Yes, for today, of course. So this is one against four, and the rest of you seem to be undecided. Let's say it this way: undecided. Good. So there was almost a clear majority towards not doing the cloud, so we will skip the cloud today. And to be honest, we only have one lecture this week. And on Thursday there's a national holiday, right? So we will
skip next week's lecture. And then next week on Tuesday we can go through the cloud again
because then we have already... Does the Bergkirchweih, the beer festival, already start next week? No, it's on Thursday, right? Next week, Thursday. So probably also a hard time. Oh no, well, the lecture is at 12 on Thursday. Maybe you can go to the lecture and then to the beer festival. That could be like the awesome preparation for the beer festival: lecture first, and then try to get one of the free beers. Sounds like a brilliant plan. Okay, good. Then let's
not do the cloud today. And we will talk again about pre-processing and image enhancement.
And we've already seen the motivation and we already talked a bit about the normalized
convolutions. So let's skip over this motivation. So you've seen the nice colorful images. Colorful images are nice, and in the medical data there's typically a lot of information also in the red channel. And you can go ahead and then process the data in order to improve
the image quality and reduce the noise. So you've seen this. And let's have a look again
at this awesome video. You see there are some outliers in the center. These are invalid measurements; there it seems that the object is super close to the camera, but that's of course not true. And now we remove the outliers. We essentially use the normalized convolution that we'll go through quickly after this video. Then we use some temporal smoothing, because we have a lot of frames per second, and this way we can reduce the noise further. And then we can try to incorporate edge-preserving filtering. If you do edge-preserving filtering, you can get a result like this one. And you see that this is much better than the raw input data that you've just seen. So we can do a lot with some rather simple tricks in order to improve the image quality. One of the messages you should take home from this lecture is that you can actually improve the raw input data considerably with some nice filtering tricks. Okay, good.
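To make the idea a bit more concrete, here is a minimal sketch of normalized convolution with a validity mask, roughly in the spirit of what the video shows. The function names, the `valid` mask, and the use of NumPy/SciPy are my own illustration and are not taken from the lecture slides.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(radius, sigma):
    """Sample a (2r+1) x (2r+1) Gaussian, truncated at the given radius."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()

def normalized_convolution(image, valid, radius=2, sigma=1.0):
    """Smooth `image` while ignoring pixels flagged as invalid.

    `valid` is a 0/1 mask with the same shape as `image` (0 marks an
    outlier or missing measurement). The masked image and the mask are
    blurred with the same kernel; the division re-normalizes the weights
    so that only valid neighbors contribute to each output pixel.
    """
    k = gaussian_kernel(radius, sigma)
    numerator = convolve(image * valid, k, mode="nearest")
    denominator = convolve(valid.astype(float), k, mode="nearest")
    return numerator / np.maximum(denominator, 1e-8)
```

The temporal smoothing over consecutive frames and the edge-preserving filter mentioned in the video would then be applied on top of a result like this.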
So we already talked about normalized convolution. Let's go back to the definitions and how we actually want to describe those convolution kernels.
And what we will start with, like in the last lecture, is the traditional spatial domain variant of the convolution. So we have n pixels, where n is the total number of pixels in the image. We will also use it later to discuss the complexity of the algorithms, because we will see that some of those algorithms are really fast. Okay, so the input image is 2D, but if you use a continuous index for the pixels, the total number is going to be n. These pixels are discrete, and they are in a vector where you have the x and y coordinates. And you have some local neighborhood that you define with an odd number of pixels in x and y direction, where you define r as the radius; then (2r + 1) squared gives you the total number of elements in this neighborhood. So we always have a center pixel in the convolution kernel. Then we consider g as the input image and f as the output image.
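Written out, and hedging a bit since the slides themselves are not reproduced here, the notation from the spoken description could be summarized like this:

```latex
% n: total number of pixels, g: input image, f: output image
\mathbf{x} = (x, y)^\top, \qquad
\mathcal{N}_r(\mathbf{x}) = \left\{ \mathbf{x} + \mathbf{d} \;:\; \mathbf{d} \in \mathbb{Z}^2,\ \|\mathbf{d}\|_\infty \le r \right\}, \qquad
|\mathcal{N}_r(\mathbf{x})| = (2r + 1)^2 .
```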
Now if you go to discrete convolution, and this is just a refresher, of course you have some kind of kernel. Here the kernel is denoted as k, and k is going to be, for example, the Gaussian kernel. So here you see the example for the Gaussian kernel. You essentially weight the distance to the center pixel, and you have some standard deviation. And if your kernel is big enough, you can just truncate it to zero. So you define your kernel size, sample your Gaussian in here, and this will be the kernel that you use for convolution.
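As a formula, the refresher could be sketched as follows (my own notation, following the spoken description): the output is the weighted sum over the truncated neighborhood, and the weights are samples of a Gaussian.

```latex
f(\mathbf{x}) = (k * g)(\mathbf{x}) = \sum_{\|\mathbf{d}\|_\infty \le r} k(\mathbf{d})\, g(\mathbf{x} - \mathbf{d}),
\qquad
k(\mathbf{d}) = \frac{1}{Z}\, \exp\!\left( -\frac{\|\mathbf{d}\|_2^2}{2\sigma^2} \right),
```

where Z normalizes the sampled, truncated kernel so that its entries sum to one.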
And of course in this case the convolution theorem holds. So instead of computing this in the spatial domain and doing all the multiplications and sums, you can just transfer your kernel and your image to the Fourier domain, multiply them there, and transform the result back.
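For reference, the convolution theorem being invoked here is, in sketch form:

```latex
f = k * g
\quad\Longleftrightarrow\quad
\mathcal{F}\{f\} = \mathcal{F}\{k\} \cdot \mathcal{F}\{g\},
```

so the filtering can be done by an element-wise multiplication in the Fourier domain followed by an inverse transform. With the FFT this costs on the order of n log n operations instead of roughly n (2r + 1)² for the direct spatial sum, which is one reason why some of these algorithms are really fast.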
This lecture focuses on recent developments in image processing driven by medical applications. All algorithms are motivated by practical problems. The mathematical tools required to solve the considered image processing tasks will be introduced.