So today's talk is about a new way of thinking about disease classification. The underlying algorithm that I'm selling here is about joint registration and segmentation, in order to gain features that I can then use for disease classification.
I'm aware that most of you don't do a lot of research in classification, but the concept of joint registration and segmentation might be of interest to you, so bear with me through the slides when I talk about disease classification. There is a larger part of the medical imaging community that works on this problem, and I thought it might be interesting for you to be exposed to that too.
So the talk itself is divided into four parts: I first want to motivate the need to perform disease classification, then look at the unified framework that we suggest in this talk, then look at alternative ways to encode deformation, and finally conclude.
For many diseases, such as schizophrenia, we currently have the problem that we rely on lengthy interview processes, where an expert basically interviews the patient in order to find out if the patient is affected by the disease. That process is not only very time consuming but also subjective, so people in the community look for alternative ways to detect diseases.
One way of doing so is, for example, by taking a scan of the patient and then having an expert look at the scan and basically perform the analysis that way. That is not only more objective but also seems to be more effective.
When you do that, the question arises: how do I extract from the scans the measurements that characterize this disease? For this, the clinical community has developed a certain kind of workflow to perform what they call morphometry studies: for each group, healthy and diseased, you have a set of scans that represents this group, you perform the study, and at the end you get group differences as a result. That is, you get measurements, some kind of output, that tell you what an average brain looks like for that specific patient group and what the standard deviation of that patient group is.
Here is an example, a study that was done by Hirayasu in '98, where we have three different structures, the superior temporal gyrus, hippocampus, and amygdala, that were manually segmented in 50 cases. These 50 cases were then sent through this pipeline, where we extracted the manual segmentation, computed the volume for each anatomical region, and from these volumes performed a statistical t-test to see if there is a difference in those anatomical structures with respect to volume between the different patient groups.
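As a rough illustration of this volumetric pipeline, here is a minimal Python sketch, not taken from the original study: the voxel size, the synthetic group volumes, and the helper names are assumptions. It computes a structure's volume from a segmentation label map and runs a two-sample t-test between the two groups.

```python
import numpy as np
from scipy import stats

VOXEL_VOLUME_MM3 = 1.0  # assumption: isotropic 1 mm voxels

def structure_volume(label_map: np.ndarray, label: int) -> float:
    """Volume (mm^3) of one anatomical structure in a segmentation label map."""
    return np.count_nonzero(label_map == label) * VOXEL_VOLUME_MM3

def group_difference(volumes_a, volumes_b):
    """Two-sample t-test on per-subject volumes of one structure."""
    result = stats.ttest_ind(volumes_a, volumes_b)
    return result.statistic, result.pvalue

# Synthetic example volumes (mm^3), e.g. for the left superior temporal gyrus.
rng = np.random.default_rng(0)
controls = rng.normal(loc=10500, scale=600, size=25)
patients = rng.normal(loc=9900, scale=600, size=25)
t, p = group_difference(controls, patients)
print(f"t = {t:.2f}, p = {p:.4f}")
```

In a real study the per-subject volumes would come from the manual (or automatic) segmentations, and one test would be run per anatomical structure and hemisphere.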
Now, this is a very tedious process, and this is actually where I worked with Torsten on performing this task automatically. Instead of doing manual segmentation we can do automatic segmentation, and we showed later that we can actually increase the accuracy of these studies using automatic segmentation versus the manual one. But what we get out as a result is still basically group differences. So I now know that for the schizophrenic brain there are significant differences, for example in the volume of the left superior temporal gyrus, compared to healthy control subjects.
But what we are actually interested in is: if I give you one patient, I want to know whether this patient is schizophrenic or not. That is a slightly different way of posing the problem, where you want to distinguish between healthy and diseased patients. So here was the outcome of this study, but what we really want to know is: give me a scan, and I want to tell you what kind of disease this patient is affected by.
And one way that people in the medical imaging community often do that is they basically…
Presenter: Prof. Kilian Pohl
Access: Open access
Duration: 00:48:34 min
Recording date: 2010-09-29
Uploaded on: 2018-05-02 15:46:23
Language: de-DE
This talk discusses an anatomical parameterization of spatial warps to reveal structural differences between medical images of healthy control subjects and disease patients. The warps are represented as structure-specific 9-parameter affine transformations, which constitute a global, non-rigid mapping between the atlas and image coordinates. Our method estimates the structure specific transformation parameters directly from medical scans by minimizing a Kullback-Leibler divergence measure. The resulting parameters are then input to a linear Support Vector Machine classifier, which assigns individual scans to a specific clinical group. We test the accuracy of our approach on a data set consisting of Magnetic Resonance scans from 16 first episode schizophrenics and 17 age-matched healthy control subjects. On this small size data set, our approach, which performs classification based on the MR images directly, yields a leave-one-out cross-validation accuracy of up to 90%. This compares favorably with the accuracy achieved by state-of-the-art techniques in schizophrenia MRI research.
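To make the classification stage of the abstract concrete, here is a minimal sketch under stated assumptions: the 9 affine parameters per structure are stacked into one feature vector per scan and passed to a linear SVM evaluated with leave-one-out cross-validation. The number of structures, the feature values, and the labels below are synthetic placeholders; the real features would be the output of the KL-divergence registration step, which is not shown.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

N_STRUCTURES = 3            # assumed, e.g. superior temporal gyrus, hippocampus, amygdala
N_PARAMS_PER_STRUCTURE = 9  # 9-parameter affine transform per structure
N_SUBJECTS = 33             # 16 first-episode schizophrenics + 17 controls, as in the abstract

# Placeholder feature matrix: one row per scan, columns are the stacked
# structure-specific affine parameters (drawn at random here for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(N_SUBJECTS, N_STRUCTURES * N_PARAMS_PER_STRUCTURE))
y = np.array([1] * 16 + [0] * 17)  # 1 = patient, 0 = healthy control

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2%}")
```

With leave-one-out cross-validation each scan is classified by a model trained on the remaining 32, so the mean of the per-fold scores is exactly the accuracy figure quoted in the abstract's evaluation protocol.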