Deep Learning Lecture, Introduction Part 4: Terminology and Notation
Welcome back to our deep learning lecture.
We are now in part four of the introduction.
Here, we want to talk about machine learning and pattern recognition,
and we want to give a short introduction
to the terminology and notation
that you will need over the scope of the next couple of videos.
So throughout this entire lecture series, we will use the following notation.
Matrices are bold and uppercase; examples are M and A.
Vectors are bold and lowercase; examples are v and x.
Scalars are italic and lowercase; examples are y, w, and α.
For the gradient of a function, we use the nabla symbol ∇.
For partial derivatives, we use the partial notation ∂.
Furthermore, we have some specifics about deep learning.
So the trainable weights will be generally called w.
Features or inputs are x. These are typically vectors.
Then we have the ground truth label, which is y.
We have some estimated output that is y hat.
And if we have some iterations going on, we typically put the iteration index in superscript and in brackets:
x^{(i)} denotes the variable x at iteration i.
Of course, this is a very coarse notation and we will develop it further throughout the lecture.
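As a small illustration of this notation in code (the function, step size, and starting point below are my own assumptions, not from the lecture), a gradient-descent iteration x^{(i+1)} = x^{(i)} − η ∇f(x^{(i)}) might look like:

```python
import numpy as np

# Hypothetical objective f(x) = 0.5 * ||x||^2, whose gradient is simply x.
# This is only meant to show the iteration-superscript notation x^{(i)}.
def grad_f(x):
    return x

x = np.array([4.0, -2.0])    # x^{(0)}: the initial iterate
eta = 0.5                    # step size (an assumed value)
for i in range(10):
    x = x - eta * grad_f(x)  # x^{(i+1)} = x^{(i)} - eta * grad f(x^{(i)})

print(np.linalg.norm(x))     # the norm shrinks toward 0 as i grows
```

Each pass through the loop produces the next iterate x^{(i+1)} from the current one, which is exactly what the bracketed superscript expresses.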
If you have attended previous lectures of our group,
then you should know the classical image processing pipeline of pattern recognition.
It starts with recording, i.e., sampling followed by analog-to-digital conversion.
Then you have pre-processing and feature extraction, followed by classification.
Of course, in the classification step, you have to do the training.
The first part of the pattern recognition pipeline is covered in our lecture introduction to pattern recognition.
The main part of classification is then covered in pattern recognition.
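The pipeline above can be sketched as a chain of functions. This is only a toy sketch: the function bodies (quantization, normalization, mean/std features, a linear decision rule) are illustrative assumptions, not the lecture's actual processing steps.

```python
import numpy as np

def record(scene):
    """Sampling + A/D conversion: quantize intensities to 8-bit integers."""
    return np.clip(np.round(scene * 255), 0, 255).astype(np.uint8)

def preprocess(image):
    """Pre-processing: normalize intensities back to [0, 1]."""
    return image.astype(np.float64) / 255.0

def extract_features(image):
    """Toy feature extraction: mean and standard deviation of intensities."""
    return np.array([image.mean(), image.std()])

def classify(features, w, b):
    """Classification with a linear decision rule: sign of w^T x + b."""
    return 1 if w @ features + b > 0 else 0

# Run a random "scene" through the whole pipeline (weights are arbitrary).
scene = np.random.default_rng(0).random((8, 8))
features = extract_features(preprocess(record(scene)))
label = classify(features, w=np.array([1.0, -1.0]), b=0.0)
print(label)
```

The point is the structure, not the particular functions: each stage consumes the output of the previous one, and only the last stage (classification) involves training.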
Now, what you see in this image is a classical image recognition problem.
Let's say you want to differentiate apples from pears.
Now one idea is that you fit an ellipse around each fruit and then measure the lengths of the major and minor axes.
You will recognize that apples are round and pears are elongated,
so their ellipses differ in the relation of major to minor axis.
Now you could take those two numbers and represent them as a point in a vector space representation.
You then enter a two-dimensional space in which you will find that all of the apples are located along the diagonal:
if the diameter in one direction increases, the diameter in the other direction increases as well.
Your pears lie off this straight line because they have a difference between minor and major axes.
Now you will find that a line is able to separate those two and you can essentially consider this as your first classification system.
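This first classification system can be sketched in a few lines. The data below is synthetic (cluster positions and spreads are my assumptions, not the lecture's measurements), and a simple perceptron rule stands in for "finding the separating line":

```python
import numpy as np

# Synthetic (major axis, minor axis) measurements for each fruit:
# apples are round (major ~ minor), pears are elongated (major > minor).
rng = np.random.default_rng(42)
n = 50
apples = rng.normal(loc=[6.0, 6.0], scale=0.3, size=(n, 2))
pears = rng.normal(loc=[9.0, 5.0], scale=0.3, size=(n, 2))

X = np.vstack([apples, pears])
y = np.array([0] * n + [1] * n)  # 0 = apple, 1 = pear

# Perceptron learning rule: adjust the line whenever a point is misclassified.
w = np.zeros(2)
b = 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        pred = 1 if w @ xi + b > 0 else 0
        w += (yi - pred) * xi
        b += (yi - pred)

pred = (X @ w + b > 0).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```

Because the two clusters are well separated in this feature space, the perceptron is guaranteed to converge to a line that classifies every training point correctly.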
Now, what many people think machine learning on big data looks like is shown in this small cartoon.
So, this is your machine learning system?
Yep, you pour the data into this big pile of linear algebra and then collect the answers on the other side.
And what if the answers are wrong?
Just stir the pile until they start looking right.
What you can see in this picture is, of course, how many people think they can approach deep learning:
you just pour the data in, stir a bit at the end, and then you get the right results.
That's actually not how it works.
Presenters
Accessible via: Open Access
Duration: 00:15:04 min
Recording date: 2020-10-04
Uploaded on: 2020-10-04 14:56:17
Language: en-US