OK, hello, good evening everyone, welcome back to the lecture series.
I thought I would start by giving you a kind of review of where we have come during the
course of the lecture series.
So this was the mountain, so to speak, where we learned how to implement a neural network
and how to do backpropagation in order to train it.
And since then, we've looked at a series of applications.
So the first was just to show that we can represent,
basically, arbitrary functions, including an arbitrary image
in terms of a neural network.
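Just to make that concrete, here is a small sketch in Python of what such a network could look like, where an "image" is represented as a function from pixel coordinates to brightness. The layer sizes, the sigmoid activation, and the random weights are illustrative assumptions, not the code from the lecture.

```python
import numpy as np

# Sketch: a tiny two-layer network that maps a pixel coordinate (x, y)
# to a single grayscale value, i.e. it "represents" an image as a function.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 50)), np.zeros(50)   # (x, y) -> 50 hidden neurons
W2, b2 = rng.normal(size=(50, 1)), np.zeros(1)    # 50 hidden neurons -> 1 brightness value

def network(xy):
    """Forward pass: coordinates of shape (N, 2) -> brightness of shape (N, 1)."""
    hidden = sigmoid(xy @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

# Evaluate the network on a coordinate grid to "draw" the image it represents.
xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)
image = network(coords).reshape(64, 64)
```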
And we said we can use neural networks for classification.
So you give me an input, like an image,
and I tell you what it means, provided I have
seen many training examples.
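As a rough sketch of that idea, assuming a flattened 28x28 input image and a softmax readout over ten classes (these choices are illustrative, not the lecture's exact network), classification just means turning the input into one probability per class.

```python
import numpy as np

# Sketch of classification: the network takes an image and outputs
# one probability per possible class, then picks the most likely one.

def softmax(z):
    z = z - z.max()           # subtract the maximum for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(3)
W = rng.normal(scale=0.01, size=(28 * 28, 10))   # weights: image pixels -> 10 class scores
b = np.zeros(10)

image = rng.random(28 * 28)                      # a flattened 28x28 input image
probabilities = softmax(image @ W + b)           # e.g. probabilities for digits 0..9
predicted_class = int(np.argmax(probabilities))  # the class the network "tells you"
```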
Then we said that for the particular purpose of images,
it's actually good to exploit translational invariance.
And you can build a neural network
that uses many, many fewer weights in its construction.
These are the so-called convolutional neural networks.
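The weight saving is easy to see with a little back-of-the-envelope calculation; the 28x28 image size and the 5x5 kernel below are just illustrative assumptions.

```python
# Weight-count argument behind convolutional networks: a fully connected
# layer between two 28x28 images needs one weight per input-output pixel
# pair, while a convolutional layer reuses one small kernel at every
# position, thanks to translational invariance.

image_pixels = 28 * 28
dense_weights = image_pixels * image_pixels   # 784 * 784 = 614656 weights

kernel_size = 5
conv_weights = kernel_size * kernel_size      # 25 shared weights, reused everywhere

print(f"dense layer: {dense_weights} weights, conv layer: {conv_weights} weights")
```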
Then we went to something that is no longer supervised
learning, but which just tries to encode given input
examples as well as possible.
So these are the autoencoders.
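As a minimal sketch of the structure (the layer sizes, the tanh activation, and the linear decoder are assumptions for illustration, not the lecture's code), an autoencoder squeezes the input through a small bottleneck and is trained to reconstruct the input from that compressed code.

```python
import numpy as np

# Sketch of an autoencoder: compress a 784-dimensional input into a
# 32-dimensional bottleneck and reconstruct it; training would minimize
# the reconstruction error.

rng = np.random.default_rng(1)
W_enc = rng.normal(scale=0.01, size=(784, 32))   # encoder: input -> bottleneck
W_dec = rng.normal(scale=0.01, size=(32, 784))   # decoder: bottleneck -> reconstruction

def autoencode(x):
    code = np.tanh(x @ W_enc)        # compressed representation
    return code, code @ W_dec        # reconstruction of the input

x = rng.normal(size=(10, 784))       # a batch of 10 example inputs
code, reconstruction = autoencode(x)
loss = np.mean((reconstruction - x) ** 2)   # reconstruction (mean squared) error
```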
Then we had another brief remark about how
to do unsupervised learning in the sense of trying
to visualize high-dimensional data in a meaningful way
that would give you, for example,
recognizable clusters of similar data points.
And that was dimensionality reduction, the t-SNE method.
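In practice you would typically call a ready-made implementation; here is a hedged sketch using scikit-learn's t-SNE on its small digits dataset (a standard implementation, not necessarily the one shown in the lecture).

```python
# t-SNE: embed high-dimensional points into 2D so that similar points
# end up close together, which typically reveals clusters.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)          # 64-dimensional images of handwritten digits
embedding = TSNE(n_components=2, perplexity=30).fit_transform(X)
# `embedding` has shape (n_samples, 2); plotting it colored by `y`
# usually shows one cluster per digit class.
```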
And then we went to look at more sophisticated networks.
And the first one was recurrent networks, that
is networks with memory.
So that is when neurons also depend
on what has come before in terms of the input
and in terms of their internal memory state.
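The essential point is the recurrence itself, which a few lines make explicit; the dimensions and the tanh nonlinearity below are illustrative assumptions.

```python
import numpy as np

# Sketch of the recurrence that gives a network "memory": the hidden state
# depends on the current input x_t and on the previous hidden state h_prev.

rng = np.random.default_rng(2)
W_in = rng.normal(scale=0.1, size=(3, 8))    # input (3 features) -> hidden (8 neurons)
W_rec = rng.normal(scale=0.1, size=(8, 8))   # hidden -> hidden (the memory connection)
b = np.zeros(8)

def rnn_step(x_t, h_prev):
    """One time step: new hidden state from current input and previous state."""
    return np.tanh(x_t @ W_in + h_prev @ W_rec + b)

h = np.zeros(8)                               # initial memory state
for x_t in rng.normal(size=(20, 3)):          # a sequence of 20 input vectors
    h = rnn_step(x_t, h)                      # h now summarizes everything seen so far
```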
Then we briefly mentioned word vectors.
So that's just useful for a particular application
of recurrent neural networks when
you want to analyze a full text, which can
be seen as a string of words.
But how do you represent these words?
You can just label them in a big dictionary.
That would be the very basic version.
But you can also invent these so-called word vectors
where different components of the vector
have different semantic meanings.
And then you can even do calculations
with such word vectors.
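The classic example of such a calculation is the analogy "king - man + woman is close to queen". The three-component vectors below are invented purely to show the idea; real word vectors have hundreds of components learned from text.

```python
import numpy as np

# Toy illustration of word-vector arithmetic with made-up vectors.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

query = vectors["king"] - vectors["man"] + vectors["woman"]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Find the stored word whose vector points most nearly in the same direction.
closest = max(vectors, key=lambda w: cosine(vectors[w], query))
print(closest)   # with these toy numbers this prints "queen"
```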
And then, very recently, we started
looking into the big subject of reinforcement learning.
This is a course introducing modern techniques of machine learning, especially deep neural networks, to an audience of physicists. Neural networks can be trained to perform diverse challenging tasks, including image recognition and natural language processing, just by training them on many examples. Neural networks have recently achieved spectacular successes, with their performance often surpassing humans. They are now also being considered more and more for applications in physics, ranging from predictions of material properties to analyzing phase transitions. We will cover the basics of neural networks, convolutional networks, autoencoders, restricted Boltzmann machines, and recurrent neural networks, as well as the recently emerging applications in physics.