This audio contribution is presented by the Universität Erlangen-Nürnberg.
I am here not only today, standing in for Professor Hornegger, but for the next three weeks.
One more announcement: tomorrow the lecture will not take place, because I have a conflicting appointment.
But it is only tomorrow that the lecture will not take place.
I will now begin the lecture.
We will be talking about random forests.
That will be the topic.
[unintelligible]
Random forests are a generalization of decision trees and bagging.
So, several different concepts come into play here.
Random forests are a unified framework that subsumes all of these predecessor models.
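As a rough illustration of this relationship (my own sketch, not from the lecture): in scikit-learn, bagging plain decision trees already gets close to a random forest; the forest additionally draws a random feature subset at every split. The data set and hyperparameters below are illustrative assumptions.

```python
# Sketch: bagging of decision trees vs. a random forest (scikit-learn).
# Dataset and hyperparameters are illustrative assumptions, not from the lecture.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Bagging: each tree sees a bootstrap sample, but all features at every split.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

# Random forest: bootstrap samples plus a random feature subset at every split.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)

for name, model in [("bagging", bagging), ("random forest", forest)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Note that with max_features=None the forest considers every feature at every split and degenerates to plain bagging of trees, which is exactly the sense in which it generalizes bagging.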
Okay, today we are looking at the decision forest model. Classification forests are the big topic, and regression forests as well.
Do you know the difference between classification and regression already?
So classification assigns a crisp class label, like: is this a car, or is this a liver tumor?
Regression means like on a continuous scale, what would be an appropriate value to assign to this phenomenon or to this feature vector?
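To make the distinction concrete, here is a minimal scikit-learn sketch (the toy data and parameters are my assumptions, not from the lecture): a classification forest returns a crisp label, a regression forest a value on a continuous scale.

```python
# Sketch: classification forest (crisp label) vs. regression forest (continuous value).
# Toy data is an illustrative assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # feature vectors

# Classification: discrete labels, e.g. 0 = "car", 1 = "liver tumor".
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y_class)
print(clf.predict(X[:3]))                 # crisp class labels

# Regression: a continuous target value per feature vector.
y_reg = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=200)
reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_reg)
print(reg.predict(X[:3]))                 # values on a continuous scale
```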
Then we can do actually density estimation with random forests.
So we consider density forests.
And yeah, depending on how quick I am, I don't know yet whether we will talk about manifold forests.
So all the figures are taken from this book, so you can also look this up later when you are preparing for the exam.
Let's start with looking at decision trees.
So what is a decision tree?
So the concept of trees is known from algorithms and data structures.
So we have a binary structure with a root node.
Each internal node has two successors.
And eventually, like at the bottom of the tree, you have leaf nodes.
So these are the nodes that we call leaf nodes.
This is the root node.
This is a node in the context of decision trees.
It's often called a split node.
We'll come to that in a second.
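To pin down this terminology, here is a small data-structure sketch (the class and field names are my assumptions, not from the lecture): split nodes carry a test and two successors, leaf nodes carry a prediction.

```python
# Sketch of the tree structure described above: split nodes with exactly two
# successors, leaf nodes at the bottom. Names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # Split node: tests one component of the feature vector against a threshold.
    feature: Optional[int] = None
    threshold: Optional[float] = None
    left: Optional["Node"] = None      # successor when the answer is "no"
    right: Optional["Node"] = None     # successor when the answer is "yes"
    # Leaf node: stores the prediction; the split fields stay None.
    prediction: Optional[str] = None

    def is_leaf(self) -> bool:
        return self.prediction is not None
```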
Okay, now a decision tree means we have a feature vector that encodes the information that we would like to capture.
And at each of these internal nodes, we ask a question to this feature vector.
Like for instance, if we would like to classify images into indoor and outdoor, we could ask at the root node, is the top of the image blue?
It might be an indicator for an outdoor image.
So we have a question here.
For instance, is the top of the image blue?
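Building on the Node sketch above, one could route a feature vector down the tree like this (the indoor/outdoor tree and the choice of feature index are illustrative assumptions):

```python
# Sketch: routing a feature vector through the tree by asking the split
# questions. The example tree and feature encoding are assumptions.
def classify(node: Node, x: list) -> str:
    while not node.is_leaf():
        # Each split node asks a question to the feature vector,
        # e.g. "is the blueness of the top image region above 0.5?"
        if x[node.feature] > node.threshold:
            node = node.right
        else:
            node = node.left
    return node.prediction

# Tiny hand-built tree for the indoor/outdoor example:
# feature 0 = blueness of the top of the image.
root = Node(feature=0, threshold=0.5,
            left=Node(prediction="indoor"),
            right=Node(prediction="outdoor"))
print(classify(root, [0.8]))   # -> "outdoor"
```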
Accessible via: Open access
Duration: 01:30:42 min
Recording date: 2015-05-04
Uploaded: 2015-05-04 11:22:30
Language: en-US
This lecture first supplements the methods of preprocessing presented in Pattern Recognition 1 with some operations useful for image processing. In addition, several approaches to image segmentation are shown, such as edge detection, recognition of regions and textures, and motion computation in image sequences. In the area of speech processing, approaches to the segmentation of speech signals are discussed, as well as vector quantization and the theory of hidden Markov models.
Accordingly, several methods for object recognition are shown. Beyond that, different control strategies usable for pattern analysis systems are presented, along with several control algorithms, e.g. the A* algorithm.
Finally, some formalisms for knowledge representation in pattern analysis systems and knowledge-based pattern analysis are introduced.