So welcome to the Monday session. We are still in the chapter on magnetic resonance imaging
and we will conclude this section today by considering probabilistic
correction methods for bias fields. We all know what bias fields are: these are the
low-frequency intensity distortions we observe in MR images
due to inhomogeneities of the magnetic field. These artifacts can be reduced by various
methods. For us it's important to remember frequency-domain filters (basically high-pass
filters), homomorphic filtering, homomorphic unsharp masking and polynomial surface fitting
and then we also considered entropic methods and KL divergence based methods to eliminate
the bias field. The KL divergence is something that we will see for many, many other applications
within this lecture and the upcoming lectures on medical image processing. The clustering
approach was introduced around about five to seven years ago and is still heavily used
in practice. The clustering method for bias field estimation is basically combining two
things and that's important to remember. It's combining segmentation and the intensity correction
simultaneously. That means that we know something, we have prior knowledge about the tissue classes
that are present in the image and based on this knowledge we can set up a clustering
method that allows us for correcting the bias field. We have seen one basic clustering method,
k-means clustering, where we do the clustering pixel-wise without considering the local neighborhood,
and then we have discussed the fuzzy c-means clustering, where we basically regularized
the fuzzy c-means objective function by an additional term requesting similar class assignments
in a local neighborhood. And the formulas we saw last Tuesday looked incredibly
large and complicated, but basically it's nothing else but computing the gradient
of the objective function, setting it to zero and obtaining these closed-form solutions.
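To make this concrete, here is a minimal sketch of the plain (unregularized) fuzzy c-means in Python. It shows only the two closed-form updates obtained by setting the gradient to zero; the lecture's variant additionally includes the neighborhood regularization term and the bias field estimate, which are omitted here. The function name and the simple center initialization are my own choices for illustration:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on a 1-D intensity vector x (no neighborhood
    term, no bias field).  Each sweep applies the two closed-form updates
    obtained by setting the gradient of the objective to zero: first the
    memberships u, then the cluster centers v."""
    # simple deterministic init: spread the centers over the data range
    v = np.linspace(x.min(), x.max(), c)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        d = np.abs(x[:, None] - v[None, :]) + 1e-12   # point-to-center distances
        u = d ** (-p)
        u /= u.sum(axis=1, keepdims=True)             # memberships sum to 1 per pixel
        w = u ** m
        v = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)  # centers as weighted means
    return u, v
```

Note that each sweep holds one group of variables fixed and solves for the other in closed form, which is the same alternating-optimization idea as the coordinate descent discussed in the lecture.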
For you it's important to know that we have no global method to solve for the minimization
or maximization of this objective function. We have used coordinate descent, where we said:
okay, let's look at one dimension, then the next, then the next,
and we iterate over one-dimensional optimization problems for all the dimensions. That's also
an important concept. And please do not stick to this particular application: whenever you
have to deal with a clustering method, sit down and think about local neighborhood relationships
and whether you can add additional constraints to the objective function and then think about
the optimization problem. I mean due to the fact that most of the students in the audience
are computer science students, please have also a look at the complexity of these methods,
for instance the k-means clustering method or the optimization of the objective function
that has to be considered in k-means clustering. The optimization problem is rather hard: the theory
tells us it's NP-hard, yet in practice we observe pretty fast
convergence. These two observations seem to fight each other. Another application where we observe
a similar situation is the solution of linear programming problems. Maybe you know the simplex
method for solving linear programs. Its worst-case running time is known to be exponential, yet in
practice it can be shown that most problems are solved rather efficiently: it converges pretty
well and works out pretty well in many practical examples. At least that was the saying 10
years ago. Nowadays we have way better algorithms than the simplex method for solving linear
programming problems. Just for you to set up the context properly. And now we will talk
about a probabilistic correction method, and I'm sure you will love it: it's very exciting,
and many of the most successful methods in pattern recognition are based on a probabilistic
setup. That's why we do so much probability theory in our classes and lectures. And it
also emphasizes the importance of what you have learned in the basic lectures. So
what do we intend to do? Like in bias field correction using the c-means clustering, we
now again look into a hybrid approach. And in this hybrid approach we do simultaneously
bias field correction and image segmentation. So we combine these two things. And image
segmentation in this particular situation is nothing else but a proper labeling of the
pixel values. So there are different methods for image segmentation. For instance you can
Presenters
Accessible via
Open access
Duration
00:42:37 min
Recording date
2010-12-06
Uploaded on
2011-04-11 13:53:29
Language
de-DE