16 - Musteranalyse/Pattern Analysis (PA) [ID:2343]

Welcome to the Tuesday session. I'm sorry that we had to cancel the lecture yesterday. I had an unforeseen business trip that I couldn't shift. Originally I thought that Rüdiger would tell you something about the usage of SVMs in the field of retina image analysis. We do a lot of work there, building screening systems that check whether somebody has a high risk of a heart attack, for instance: you just look into the eyes, analyze the vessels and the calcifications on the vessels of the retina, and based on that you can compute risk indices telling you how high the risk is that you suffer a heart attack, or glaucoma, or other diseases. The classifiers we have built for these screening systems are combined SVMs, so we use support vector machines for doing the classification. Originally I expected that he would explain this to you, but somehow he decided to cancel the lecture.

Anyways, what do I want to tell you with that? SVMs are heavily used in practice, even if we don't get to see those practical examples right now. What we are currently doing is lifting certain concepts into the language of SVMs, and with kernels we can do a lot of interesting things. We have seen that in the perceptron algorithm we always encounter inner products of feature vectors: both in the perceptron learning algorithm and in its decision boundary, and likewise in SVMs, both in the learning and in the decision stage, we basically have to compute inner products of feature vectors. The idea was: why don't we replace these inner products of feature vectors by so-called kernels, functions that relate the transformed features and basically compute the inner product of the transformed features? We also motivated this with features and classes in feature space where a linear decision boundary is basically impossible; you remember the picture with the circle. So I hope that was well motivated.
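(As a small aside to make that circle picture concrete, here is a minimal sketch of my own, not taken from the lecture: toy data that can only be separated by a circle in 2D becomes linearly separable after a simple nonlinear mapping phi.)

```python
import numpy as np

# Toy "circle" data: class +1 inside the unit circle, class -1 outside.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.where(np.sum(X**2, axis=1) < 1.0, 1, -1)

# No straight line in (x1, x2) separates the classes. But the mapping
# phi(x) = (x1, x2, x1^2 + x2^2) lifts the data into 3D, where the
# hyperplane defined by w = (0, 0, -1), b = 1 separates them.
def phi(X):
    return np.column_stack([X[:, 0], X[:, 1], X[:, 0]**2 + X[:, 1]**2])

Z = phi(X)
w, b = np.array([0.0, 0.0, -1.0]), 1.0   # decision function: sign(w . phi(x) + b)
pred = np.sign(Z @ w + b)
print("separable in the lifted space:", np.all(pred == y))
```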

We have introduced the concept of kernels: two feature vectors x and x' can be transformed by any nonlinear mapping, and then instead of the original inner product we consider the inner product of the transformed features. And instead of specifying the transform and then applying the inner product, we just define a function that takes the feature vectors x and x' and returns a value. The function we get has to fulfill the properties of a kernel, so that it can be rewritten as a nonlinear mapping of the features followed by an inner product. That's basically the idea. We grew into this from the original formulation of the SVM: we saw that only inner products are used, so let's apply a feature transform. How does the feature transform affect the algorithm? Basically nothing changes, except that we have to compute the inner products of the transformed features. And now we go even one step further: instead of telling the system the transform and then computing the inner product, we just say, this is the kernel, and this mapping can, under certain circumstances, be decomposed into a feature mapping followed by an inner product. That was the idea. So in the future we usually do not talk about phi and the inner product, but about kernels.
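(Again a sketch of my own to make that decomposition concrete, assuming a homogeneous polynomial kernel of degree 2 on 2D inputs: the kernel value and the inner product of explicitly transformed features coincide.)

```python
import numpy as np

def k_poly2(x, xp):
    # Homogeneous polynomial kernel of degree 2: k(x, x') = (x . x')^2
    return float(np.dot(x, xp)) ** 2

def phi(x):
    # Explicit feature transform for 2D inputs such that
    # <phi(x), phi(x')> = (x . x')^2
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

x  = np.array([1.0, 2.0])
xp = np.array([3.0, -1.0])
print(k_poly2(x, xp))                  # 1.0, since (1*3 + 2*(-1))^2 = 1
print(float(np.dot(phi(x), phi(xp))))  # 1.0, the same value via phi + inner product
```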

We have seen different kernels: the standard linear kernel, the polynomial kernel, the Laplacian radial basis function kernel, where we use the L1 norm, the Gaussian radial basis function kernel, and the sigmoid kernel. And it's interesting: such a kernel just tells us, given x and x', what value comes out. And we know from theory that under certain circumstances we can say that this is a kernel, and then we know there exists a phi and an inner product, so this mapping can be rewritten as a phi mapping of the features followed by an inner product. But we don't care what phi looks like in this particular situation; it may be highly complex, very complicated, but we don't care. We just have k(x, x'). Very powerful.
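(For reference, a sketch of what these kernel functions look like written out in code; the parameter values c, d, gamma, a, b are placeholder choices of mine, not the ones from the slides.)

```python
import numpy as np

def linear(x, xp):
    return np.dot(x, xp)

def polynomial(x, xp, c=1.0, d=3):
    return (np.dot(x, xp) + c) ** d

def laplacian_rbf(x, xp, gamma=1.0):
    # uses the L1 norm of the difference
    return np.exp(-gamma * np.sum(np.abs(x - xp)))

def gaussian_rbf(x, xp, gamma=1.0):
    # uses the squared L2 norm of the difference
    return np.exp(-gamma * np.sum((x - xp) ** 2))

def sigmoid(x, xp, a=1.0, b=0.0):
    # note: the sigmoid "kernel" is not positive semidefinite for all a, b
    return np.tanh(a * np.dot(x, xp) + b)
```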

There are many lemmas and theorems telling us when a function k is a kernel. You can read whole books on that theory; it's quite complicated and very entertaining, but for us it's sufficient to know the core idea. And the kernel trick is something that you have to remember: whenever we have an algorithm expressed in terms of inner products, we can replace them by a kernel.
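(One concrete instance of the trick, sketched here by me as a kernelized perceptron rather than an SVM: the algorithm only ever touches the data through k(x, x'), so any valid kernel can be plugged in without changing the algorithm itself.)

```python
import numpy as np

def kernel_perceptron(X, y, kernel, epochs=20):
    """Dual (kernel) perceptron: the weight vector is never formed explicitly;
    only kernel evaluations k(x_i, x_j) are needed."""
    n = len(X)
    alpha = np.zeros(n)                       # mistake counts per training point
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    for _ in range(epochs):
        for i in range(n):
            # decision value f(x_i) = sum_j alpha_j y_j k(x_j, x_i)
            if y[i] * np.sum(alpha * y * K[:, i]) <= 0:
                alpha[i] += 1                 # mistake -> reinforce this sample
    return alpha

def decide(x, X, y, alpha, kernel):
    return np.sign(sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y, X)))

# Hypothetical usage with a Gaussian RBF kernel on circle-like data (X, y):
# rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))
# alpha = kernel_perceptron(X, y, rbf)
# label = decide(X[0], X, y, alpha, rbf)
```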

Accessible via: Open access
Duration: 00:49:48 min
Recording date: 2009-06-23
Uploaded: 2012-07-30 16:19:35
Language: en-US
Tags: Analyse PA Fisher Kernels EM-Algorithm