This audio is presented by the University of Erlangen-Nürnberg.
Last week was enjoyable, but in the three weeks before that we talked about random forests,
which was a topic that we slotted in at that point because we thought,
okay, on the one hand it's an important topic, but on the other hand it's relatively self-contained.
So we could cover these random forest things in this limited amount of time
and then return to where we left off with density estimation.
That's what we'll pick up today and continue from there.
So if you think back to what happened before these random forests,
which is already a long time ago, you were discussing density estimation.
We also had density estimation with random forests, right?
And random forests are already a rather advanced tool for estimating a probability density.
So first of all, what is density estimation?
Density estimation means we know that our samples follow some distribution in the feature space,
and now we would like to somehow represent or model this distribution
and predict, for other locations in the feature space, how likely it is to draw a sample from there.
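As a minimal sketch of that idea in Python (not from the lecture; the Gaussian kernel density estimator from SciPy is just one convenient stand-in for "some density estimator"): fit a model to the samples, then query it at new locations.

```python
# A minimal sketch, assuming NumPy and SciPy are available:
# fit an estimator to samples, then query the density at new locations.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=500)  # toy 1-D data

kde = gaussian_kde(samples)            # nonparametric density model
query = np.array([-2.0, 0.0, 2.0])     # other locations in feature space
print(kde(query))                      # estimated density at each point
```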
So we had, for instance, this notorious example with very complicated structures like this one,
where we say, okay, in order to be able to estimate a shape like this,
we need an estimator that operates at the proper scale.
Because if we approximate this density too coarsely, that is, if we underfit,
we would just say: oh, okay, we have a density that has samples in this area
and no samples here on the outside.
But that is not the level of detail that we would really appreciate in this case.
If, on the other hand, we had a density estimator that overfits,
then we would obtain a description of our density which is overly detailed,
with a very, very rugged boundary here.
Although when we look at the big picture, we would say, oh,
the true density that we would like to describe is just this kind of swirl here.
So this would be the proper scale.
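A hedged illustration of this scale trade-off (the bandwidth values below are arbitrary choices, not from the lecture): the same two-cluster sample set, smoothed too coarsely, at a reasonable scale, and too finely. Counting the modes of the estimate shows underfitting merging the clusters and overfitting producing many spurious bumps.

```python
# A sketch of under-/overfitting a density estimate via bandwidth,
# assuming NumPy and SciPy are available.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2, 0.3, 300),   # two well-separated
                          rng.normal(2, 0.3, 300)])   # clusters
grid = np.linspace(-5, 5, 400)

for bw in (2.0, 0.2, 0.01):  # underfit / proper scale / overfit
    kde = gaussian_kde(samples, bw_method=bw)
    density = kde(grid)
    # count local maxima of the estimated density on the grid
    peaks = np.sum((density[1:-1] > density[:-2]) &
                   (density[1:-1] > density[2:]))
    print(f"bandwidth factor {bw}: {peaks} mode(s) in the estimate")
```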
And then, in terms of the random forests, what we always had was this trade-off:
there's the depth of the trees, and if the trees become too deep, they overfit,
so our description becomes too detailed.
But what we can do to counter overfitting is to increase the forest size,
that is, create more trees.
And our main practical limitation there is really just our computational resources:
how deep can we make the single trees, and how many trees can we manage in our system?
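To make the trade-off concrete, here is a rough sketch using scikit-learn's RandomForestClassifier (an assumption; the lecture's density forests are not part of scikit-learn, and classification is used here only as a stand-in). Deeper trees fit more detail; adding trees averages out the variance of individual deep trees.

```python
# A hedged sketch of the depth-vs-forest-size trade-off,
# assuming scikit-learn is available.
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_moons(n_samples=500, noise=0.3, random_state=0)

for max_depth in (3, None):        # shallow trees vs. fully grown trees
    for n_estimators in (5, 200):  # small forest vs. large forest
        clf = RandomForestClassifier(n_estimators=n_estimators,
                                     max_depth=max_depth,
                                     random_state=0)
        score = cross_val_score(clf, X, y, cv=5).mean()
        print(f"max_depth={max_depth}, n_estimators={n_estimators}: "
              f"CV accuracy {score:.3f}")
```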
Okay. Now, random trees or random forests are fairly complicated to talk about.
Today, when we talk about density estimation,
we will look at a nonparametric technique that is actually super simple.
So today's and most likely also tomorrow's topic is density estimation,
subtopic nonparametric methods, and in particular Parzen windows.
Okay. So what does nonparametric mean?
Nonparametric means we don't try to find a big parameter vector to explicitly model our density;
instead, we estimate the density directly from the data that we have.
We'll see in a second how this works.
It's really straightforward, and there are a couple of aspects to look into in a bit more detail,
but the complexity of the overall approach is much lower than the whole random forest machinery.
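A minimal sketch of the Parzen-window estimate in Python (my illustration, not the lecture's code; the Gaussian window and the 1-D setting are assumptions). The estimate at a point x is the average of kernel values K((x - x_i)/h) over all N samples, scaled by 1/h, i.e. p(x) ≈ (1/(N·h)) · Σ_i K((x − x_i)/h).

```python
# A sketch of a 1-D Parzen-window estimator with a Gaussian window,
# assuming NumPy is available.
import numpy as np

def parzen_estimate(x, samples, h):
    """Parzen-window density estimate at a single point x (1-D case)."""
    u = (x - samples) / h                              # scaled distances to all samples
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian window
    return kernel.mean() / h                           # (1/(N*h)) * sum of kernel values

rng = np.random.default_rng(0)
samples = rng.standard_normal(1000)             # toy data drawn from N(0, 1)
print(parzen_estimate(0.0, samples, h=0.3))     # close to 1/sqrt(2*pi) ≈ 0.399
```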
So if you'd like to read up on most of the stuff that we are doing on Parzen windows, …