Deep Learning - Feedforward Networks Part 1

Welcome back, everybody, to our lecture on deep learning. Today we want to go deeper into the topic.

We want to introduce some of the important concepts and theories that have been fundamental to the field.

Today's topic will be feedforward networks, which are essentially the main configuration

of neural networks as we use them today.

So in the next couple of videos we want to talk about the first models and some ideas behind them.

We also introduce a bit of theory.

One important block will be about universal function approximation, where we will essentially show

that neural networks can approximate any continuous function on a compact domain to arbitrary accuracy.

This will then be followed by an introduction to the softmax function and some activation functions.

In the end we want to talk a bit about how to optimize the parameters of such networks, and in particular

we will talk about the backpropagation algorithm.

So let's start with the model. What you have already heard about is the perceptron.

We have already talked about this: it is essentially a function that maps a high-dimensional input

onto the inner product of a weight vector and that input.

We are then only interested in the signed distance that is computed.

You can interpret this essentially as you see on the right-hand side.

The decision boundary is shown in red, and what you compute with the inner product

is essentially the signed distance of a new sample to this decision boundary.

If we consider only the sign, we can decide whether we are on one side of the boundary or the other.
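To make this concrete, here is a minimal sketch in Python with NumPy; the weight vector w and bias b are hypothetical values for illustration, not parameters from the lecture:

```python
import numpy as np

# Hypothetical parameters of a trained perceptron: weights w and bias b.
w = np.array([2.0, -1.0])
b = 0.5

def signed_distance(x, w, b):
    # The inner product plus bias, divided by ||w||, is the signed
    # distance of x to the hyperplane w^T x + b = 0.
    return (np.dot(w, x) + b) / np.linalg.norm(w)

def perceptron_classify(x, w, b):
    # For the class decision, only the sign of that distance matters.
    return 1 if np.dot(w, x) + b >= 0 else -1

x_new = np.array([1.0, 3.0])
print(signed_distance(x_new, w, b))      # how far from the boundary
print(perceptron_classify(x_new, w, b))  # which side: +1 or -1
```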

Now, if you look at classical pattern recognition and machine learning,

we would still follow the so-called pattern recognition pipeline.

We have some measurement that is converted and preprocessed in order to increase its quality,

say, to decrease the noise.

In the preprocessing we essentially stay in the same domain as the input.

So if you have an image as input, the output of the preprocessing will also be an image,

but probably one with better properties for the classification task.

Then we want to do feature extraction.

You remember the example with the apples and the pears.

From these we extract features, which then live in some high-dimensional vector space.

We can then go ahead and do the classification.
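As a rough sketch of this pipeline in Python, with every stage implementation being a hypothetical stand-in rather than anything from the lecture:

```python
import numpy as np

def preprocess(image):
    # Preprocessing stays in the input domain: an image goes in,
    # an image comes out, here crudely "denoised" by value clipping.
    return np.clip(image, 0.0, 1.0)

def extract_features(image):
    # Map the image into a feature vector, e.g. mean intensity
    # and intensity variance (a toy 2-D feature space).
    return np.array([image.mean(), image.var()])

def classify(features, w, b):
    # A linear classifier on the feature vector, as with the perceptron.
    return 1 if np.dot(w, features) + b >= 0 else -1

# measurement -> preprocessing -> feature extraction -> classification
measurement = np.random.rand(32, 32)
features = extract_features(preprocess(measurement))
label = classify(features, w=np.array([1.0, -1.0]), b=0.0)
```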

Now what we have already seen with the perceptron is that we are able to model linear decision boundaries.

This immediately led to the observation that perceptrons cannot solve the logical exclusive or, the so-called XOR.

You can see the visualization of the XOR problem here on the left-hand side.

So imagine a distribution of two classes where one class, shown in blue, occupies the top left and the bottom right,

and the other class occupies the bottom left and the top right.

This arrangement is inspired by the logical XOR function.

You will not be able to separate those two point clouds with a single linear decision boundary.

So you either need curves or multiple lines.

With a single perceptron you will not be able to solve the problem.
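We can make this argument precise with a short derivation. A single linear decision boundary sign(w_1 x_1 + w_2 x_2 + b) would have to satisfy four sign constraints at the four XOR corner points simultaneously:

```latex
\begin{aligned}
b &< 0 && \text{for } (0,0),\ \text{class } 0\\
w_1 + w_2 + b &< 0 && \text{for } (1,1),\ \text{class } 0\\
w_1 + b &> 0 && \text{for } (1,0),\ \text{class } 1\\
w_2 + b &> 0 && \text{for } (0,1),\ \text{class } 1
\end{aligned}
```

Adding the last two constraints gives w_1 + w_2 + 2b > 0, so w_1 + w_2 + b > -b, and since b < 0 this means w_1 + w_2 + b > 0, which contradicts the second constraint. Hence no single line can separate the two classes.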

The reason this mattered so much is that people had been arguing: look, we can model logical functions with perceptrons,

and if we build perceptrons on perceptrons, we can essentially build all of logic.

Now, if you cannot even build XOR, then you are probably not able to describe logic in its entirety,

and therefore we will never achieve strong AI.

This was a period of time when funding for artificial intelligence was cut tremendously

and people would not get any new grants.

They would not get money to support their research.

Hence, this period became known as the AI winter.

Things changed with the introduction of the multilayer perceptron.

This is an expansion of the perceptron:

you no longer use just a single neuron, but multiple neurons, and you arrange them in layers.
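As a minimal sketch (with hand-picked weights chosen for illustration, not learned), two layers of step-activated neurons are already enough to solve the XOR problem from above, because the hidden layer realizes two lines instead of one:

```python
import numpy as np

def step(z):
    # Heaviside step activation: 1 if z >= 0, else 0.
    return (z >= 0).astype(float)

def xor_mlp(x):
    # Hidden layer: two neurons, i.e. two linear decision boundaries.
    W1 = np.array([[ 1.0,  1.0],    # neuron 1 acts like a logical OR
                   [-1.0, -1.0]])   # neuron 2 acts like a logical NAND
    b1 = np.array([-0.5, 1.5])
    h = step(W1 @ x + b1)
    # Output neuron: AND of the two hidden neurons -> XOR overall.
    return step(np.dot([1.0, 1.0], h) - 1.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_mlp(np.array(x, dtype=float)))
# (0, 0) -> 0.0, (0, 1) -> 1.0, (1, 0) -> 1.0, (1, 1) -> 0.0
```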
