2 - Deep Learning [ID:10935]

Okay. Hello, everybody, and welcome to the second neural network or deep learning class. I am here to replace Katharina Breininger and Andreas Maier today, because both don't have time. So I'm Vincent Christlein; I'm here as the second and last person, and together with Andreas Maier I will also be the person who gives the examinations at the end. So the oral exams will be given by Andreas and me. You can just call me Vincent, and feel free to interrupt if you have any questions. Are there any questions right away? Okay, so nothing here. Good.

Okay, let's start. So last time only the introduction was covered, I hope, so this chapter has not been touched yet. Okay, not that I prepared something wrong here. Okay, good. So the outline for today is that we will discuss the model a little bit, so basically the perceptrons. We will first make a recap about perceptrons, and then, for perceptrons, we will see that there is a universal function approximation theorem, which tells us a little bit how they can serve as function approximators.

And then we will make a brief overview, basically, over neural networks, and most of these fields will be touched on in future lectures. But this way, you will get a whole overview already, and see how it actually all fits together. This starts with activations, and with one special one, the softmax function, and then goes over how to actually train our neural networks, and why it is good to make an abstraction into layers. So from nodes to layers then.

Okay, so let's come to the model. At the end of the last lecture, we have already seen the perceptron's decision rule: ŷ is the signum of W transpose times X. So we have just a linear decision boundary, W transpose times X, where W are the weights and X is the input. We then just take the sign of this dot product, and if the point is on the left side, we say, okay, it is the class, or if it is on the right side, okay, it is not the class. So here it depends only on the sign of this distance we are computing here.
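To make this decision rule concrete, here is a minimal sketch in Python with NumPy. The weight vector and input below are made-up example values, and the bias is folded into the weights as an extra component appended to the input, which is one common convention (not necessarily the exact notation used on the slides).

```python
import numpy as np

def perceptron_predict(w, x):
    """Perceptron decision rule: y_hat = sign(w^T x).

    Returns +1 if x lies on the positive side of the linear
    decision boundary defined by w, and -1 on the other side.
    """
    return np.sign(np.dot(w, x))

# Hypothetical example values: a 2D input with a constant 1
# appended, so the last weight entry acts as the bias.
w = np.array([0.5, -1.0, 0.2])   # weights (last entry = bias)
x = np.array([1.5, 0.3, 1.0])    # input, with 1.0 appended for the bias
print(perceptron_predict(w, x))  # prints 1.0 here, since w^T x = 0.65 > 0
```

Note that np.sign returns 0 for a point lying exactly on the boundary; in practice one would map that case to one of the two classes.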

And here on the left, on the bottom left, we see again the perceptron. Should we dim it? I don't know, is it better if we dim it? Yes? Okay, let's see, how can we do that here? Maybe this one? Or is that bad? Maybe it's bad for the video, but I don't know. Okay, it doesn't work here.

Part of a video series:

Accessible via

Open access

Duration

01:14:58 min

Recording date

2019-05-02

Uploaded on

2019-05-02 22:19:04

Language

en-US

Deep Learning (DL) has attracted much interest in a wide range of applications such as image recognition, speech recognition and artificial intelligence, both from academia and industry. This lecture introduces the core elements of neural networks and deep learning; it comprises:

  • (multilayer) perceptron, backpropagation, fully connected neural networks

  • loss functions and optimization strategies

  • convolutional neural networks (CNNs)

  • activation functions

  • regularization strategies

  • common practices for training and evaluating neural networks

  • visualization of networks and results

  • common architectures, such as LeNet, AlexNet, VGG, GoogLeNet

  • recurrent neural networks (RNN, TBPTT, LSTM, GRU)

  • deep reinforcement learning

  • unsupervised learning (autoencoder, RBM, DBM, VAE)

  • generative adversarial networks (GANs)

  • weakly supervised learning

  • applications of deep learning (segmentation, object detection, speech recognition, ...)

Tags

algorithm, backpropagation, compute, vector, layer, softmax, feedback, activation, perceptron, problem, output, analytic, forward, learning, linear, layers, functions, gradients, chain