7 - Machine Learning for Physicists [ID:8038]

The following content has been provided by the University of Erlangen-Nürnberg.

OK, hello.

Good evening, everyone.

First, an organizational point.

After the end of today's lecture, would all of you who want to take the exam please remain seated,

and then we can work out when to schedule the exam.

So that's at the end of the lecture.

Now, at this point, we are about halfway

through the lecture series.

And so I wanted to remind you what we did so far.

We started by describing the basic structure

of a neural net, which is by now well familiar to you.

We then discussed how to adjust the weights

so as to minimize the cost function, which

tells us how much the neural net's answers diverge from the true

answers, by stochastic gradient descent.
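Written out as a formula (the symbols here are standard notation, not taken verbatim from the lecture), each stochastic gradient descent step changes every weight according to

    w \to w - \eta \, \frac{\partial C}{\partial w},

with learning rate \eta and the cost C evaluated on a randomly chosen mini-batch of training examples.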

Then we figured out how that stochastic gradient descent

actually does, in practice, adjust

the weights of a neural network and how that can be implemented

very efficiently.

And that was backpropagation.
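As a concrete illustration, and not the lecture's own code, here is a minimal NumPy sketch of one such update for a network with a single hidden layer; the layer sizes, the sigmoid activation, the quadratic cost, and the learning rate are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 2, 30, 1
    W1 = rng.standard_normal((n_in, n_hidden));  b1 = np.zeros(n_hidden)
    W2 = rng.standard_normal((n_hidden, n_out)); b2 = np.zeros(n_out)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sgd_step(x, y_true, eta=0.1):
        """One stochastic gradient descent step on a mini-batch x with targets y_true."""
        global W1, b1, W2, b2
        # forward pass through both layers
        a1 = sigmoid(x @ W1 + b1)
        a2 = sigmoid(a1 @ W2 + b2)
        # backpropagation for the quadratic cost C = mean |a2 - y_true|^2 / 2
        delta2 = (a2 - y_true) * a2 * (1 - a2)      # dC/dz for the output layer
        delta1 = (delta2 @ W2.T) * a1 * (1 - a1)    # dC/dz for the hidden layer
        batch = x.shape[0]
        # gradient descent update, averaged over the mini-batch
        W2 -= eta * a1.T @ delta2 / batch;  b2 -= eta * delta2.mean(axis=0)
        W1 -= eta * x.T @ delta1 / batch;   b1 -= eta * delta1.mean(axis=0)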

So that was already the steepest slope.

Since then, it's been downhill in terms of difficulty,

but not in terms of applications.

We first applied this to a rather simple task

of representing, say, a function of a few variables,

or likewise an image in two dimensions,

so that the neural network would represent

one particular function or one particular image.
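As a hedged sketch of that first task (not the lecture's own code; Keras, the layer sizes, and the example function sin(x) are chosen purely for illustration), fitting a single one-dimensional function could look like this:

    import numpy as np
    from tensorflow import keras

    # training data: samples of the one particular function the net should represent
    x = np.random.uniform(-np.pi, np.pi, size=(2000, 1))
    y = np.sin(x)

    net = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(50, activation="relu"),
        keras.layers.Dense(50, activation="relu"),
        keras.layers.Dense(1)                      # one output: the function value
    ])
    net.compile(optimizer="adam", loss="mse")
    net.fit(x, y, epochs=30, batch_size=32, verbose=0)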

And then we went on and asked: how can a network

be trained to classify images, so that it deals

not with just a single image,

but with a whole class of images?

And we applied this to the very important and very well-known

example of handwriting recognition.
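A minimal sketch of such a classifier (assuming the MNIST digit data set and a Keras dense network; the architecture and hyperparameters are illustrative, not the lecture's):

    from tensorflow import keras

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
    x_test  = x_test.reshape(-1, 784).astype("float32") / 255.0

    clf = keras.Sequential([
        keras.Input(shape=(784,)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax")   # one output neuron per digit class
    ])
    clf.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    clf.fit(x_train, y_train, epochs=5, batch_size=64,
            validation_data=(x_test, y_test))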

Then we went to more sophisticated networks.

So we realized that, of course, it's the most general setting

if you have connections between every neuron in one layer

and every neuron in another layer.

But sometimes you can exploit the particular structure

of your input.

For example, in images, there is some translational invariance.

A feature has the same meaning, regardless of where

it is placed, usually.

And so that led us to convolutional networks.
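A hedged Keras sketch of such a convolutional network (the filter counts, kernel sizes, and 28x28 input are illustrative choices, not the lecture's own settings); the point is that each Conv2D filter shares its weights across all positions in the image:

    from tensorflow import keras

    cnn = keras.Sequential([
        keras.Input(shape=(28, 28, 1)),
        keras.layers.Conv2D(16, kernel_size=3, activation="relu"),  # same filter applied at every position
        keras.layers.MaxPooling2D(pool_size=2),
        keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
        keras.layers.MaxPooling2D(pool_size=2),
        keras.layers.Flatten(),
        keras.layers.Dense(10, activation="softmax")
    ])
    cnn.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])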

And then in the last lecture, we discussed

so-called autoencoders, where the novelty was that you don't

even need to specify what is the correct answer.

Rather, you just ask the network to be

able to reproduce the input as faithfully as possible.
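A minimal sketch of such an autoencoder (Keras, flattened 28x28 images, and a two-dimensional bottleneck are all illustrative assumptions); note that the training target is simply the input itself, so no labels are required:

    from tensorflow import keras

    autoencoder = keras.Sequential([
        keras.Input(shape=(784,)),
        keras.layers.Dense(128, activation="relu"),    # encoder
        keras.layers.Dense(2),                         # bottleneck: compressed representation
        keras.layers.Dense(128, activation="relu"),    # decoder
        keras.layers.Dense(784, activation="sigmoid")  # reconstruction of the input
    ])
    autoencoder.compile(optimizer="adam", loss="mse")
    # training uses the input as its own target:
    # autoencoder.fit(x_train, x_train, epochs=10, batch_size=64)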

Part of a video series

Accessible via: Open Access

Duration: 01:25:10 min

Recording date: 2017-06-26

Uploaded on: 2017-06-27 11:21:57

Language: en-US
