23 - Musteranalyse/Pattern Analysis (formerly Mustererkennung 2) (PA) [ID:428]

Okay, Tuesday, 45 minutes, the last session of this semester.

I'm so sorry.

I'm so sorry because as I said yesterday, we are on the right level to do real pattern

analysis and pattern recognition right now.

So we could have another two semesters doing really cool stuff.

Unfortunately, I don't have any more lectures for advanced students.

So this is the last lecture of the semester, and I want to try to introduce you to Markov random field theory.

And I pointed out yesterday already that this is a very important theory.

It has had no major impact on, let me say, practical killer applications.

So I don't know of a system that makes use of Markov random fields and that really makes money, basically.

But it's a basic concept and it's heavily used in many research projects.

And also in our lab, we use Markov random fields, for instance, for the segmentation

of the spine or for the segmentation of the lung and things like that.

So we have a few ongoing projects that make use of Markov random fields.

And the idea was motivated by hidden Markov models, where we have state transitions in a directed graph, basically.

We now consider graphs that are undirected, and the edges between the vertices define dependency structures.

So if we want to say that one pixel in an image depends on another pixel in the image, we draw a vertex for each of the two pixels and an edge between the two vertices.

And the neighborhood structure usually repeats all over the image.

So we have for each pixel the same neighborhood structure.
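To make this concrete, here is a minimal sketch of such a repeating neighborhood structure, assuming a 4-neighborhood on the pixel grid; the function names are illustrative, not from the lecture.

```python
def four_neighborhood(i, j, height, width):
    """Return the 4-neighbors of pixel (i, j) inside a height x width image."""
    candidates = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(r, c) for r, c in candidates if 0 <= r < height and 0 <= c < width]

def grid_edges(height, width):
    """Undirected edges of the dependency graph: one edge per neighboring pixel pair."""
    edges = set()
    for i in range(height):
        for j in range(width):
            for r, c in four_neighborhood(i, j, height, width):
                # Store each undirected edge only once.
                edges.add(tuple(sorted(((i, j), (r, c)))))
    return edges

# Every interior pixel gets the same local structure:
print(four_neighborhood(1, 1, 3, 3))   # [(0, 1), (2, 1), (1, 0), (1, 2)]
print(len(grid_edges(3, 3)))           # 12 undirected edges in a 3x3 grid
```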

And as I pointed out already, the reason why we define local neighborhoods is that the PDF of a whole image is tremendously complicated.

I mean, we are in an incredibly large space.

To make it plausible how incredibly large the space of all images is, and how the probability measure on that space behaves, consider this example: among all these images there is the image where I marry Sabine. Our wedding photo is included in the set.

This tells you how large the space is, because events with probabilities of 0.000... can happen there.

Okay, good. Events with such a low probability can still occur.

So we need to reduce our space a little bit.

And the idiot's approach is mentioned here.

You can also say it's a smart approach.

If you assume mutually independent pixels, of course your density factorizes into a very, very simplified form.
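Written out, with f_1, ..., f_N denoting the N pixel values (my notation, not from the lecture), the independence assumption means the joint density factorizes as p(f_1, ..., f_N) = p(f_1) * p(f_2) * ... * p(f_N), so instead of one huge joint distribution we only need N one-dimensional densities.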

We have used something like that in naive Bayes. You remember that?

How can you justify naive Bayes? If you construct your features, for instance, by PCA, you know they are Gaussian and mutually independent, and naive Bayes definitely makes sense.
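As a hedged sketch of that argument (the toy data, names, and NumPy usage are my assumptions, not the lecture's): PCA decorrelates the features, decorrelated Gaussian features are independent, and the density then factorizes into one 1-D Gaussian per feature.

```python
import numpy as np

# Toy Gaussian data with correlated dimensions (illustrative only).
rng = np.random.default_rng(0)
mixing = np.array([[2.0, 0.3, 0.0],
                   [0.3, 1.0, 0.0],
                   [0.0, 0.0, 0.5]])
data = rng.normal(size=(500, 3)) @ mixing

# PCA step: project onto the eigenvectors of the covariance matrix.
# For Gaussian data, the decorrelated components are also independent.
cov = np.cov(data, rowvar=False)
_, eigvecs = np.linalg.eigh(cov)
features = (data - data.mean(axis=0)) @ eigvecs

# Naive-Bayes-style density: one 1-D Gaussian per feature dimension.
mean, var = features.mean(axis=0), features.var(axis=0)

def log_density(x):
    """log p(x) = sum_d log N(x_d | mean_d, var_d), using independence."""
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)))

print(log_density(features[0]))
```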

In an image, well, does this type of independence make sense?

Well, if it's an image showing nothing but independent pixel noise, that might make sense, but in usual images that you acquire with a camera or a medical device, you will have dependencies in there.
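A quick way to see this empirically is to correlate each pixel with its right-hand neighbor: in i.i.d. noise the correlation is near zero, while in a smooth image it is near one. A small sketch, assuming NumPy, with toy images of my own choosing:

```python
import numpy as np

def neighbor_correlation(img):
    """Correlation between each pixel and its right-hand neighbor."""
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    return np.corrcoef(left, right)[0, 1]

rng = np.random.default_rng(1)
noise = rng.normal(size=(64, 64))                  # i.i.d. pixel noise
xs = np.linspace(0.0, 1.0, 64)
smooth = np.tile(np.sin(4 * np.pi * xs), (64, 1))  # smooth toy "image"

print(neighbor_correlation(noise))    # near 0: independence is plausible
print(neighbor_correlation(smooth))   # near 1: strong neighbor dependency
```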

Accessible via: Open access
Duration: 00:00:00 min
Recording date: 2009-07-21
Uploaded on: 2025-09-30 08:52:01
Language: en-US
Tags: Analyse, PA, Markov Random Fields