The following content has been provided by the University of Erlangen-Nürnberg.
Okay, good evening, everyone. Today is the last lecture of our series, and I wanted to take the opportunity to first present some examples of where neural networks have been applied in science, and then to speak more generally about artificial intelligence, which is, so to speak, a few levels beyond what we have been discussing. At the very end, those of you who want to pass the exam may stay around, and we can discuss what a typical exam question might look like. Okay, so where are neural networks applied
in science? These are just a few examples that I picked from the current literature. But more generally: if you see a neural network being applied to science, what are the general questions you should ask when you approach such a manuscript? For example, you can ask yourself what the specific scientific goal is, what the scientific context is, and what the challenges are. Why would you actually need machine learning for this task, instead of standard data analysis, or just standard theory or numerics? Then, neural networks
have this general format where you train on many input examples and provide the output that you desire, so you have to understand in each case what the input format is, that is, in which way the examples are presented to the network, and what the output format is. A very important question is how people generate these training examples: are they taken from existing observations, so that their number is fixed, or are they generated specifically for this neural network? How many do you have? Are they generated, perhaps, by numerical simulations? Then there is the
accuracy of the predictions: are these predictions, for example, better than those of simpler machine learning techniques or simple data analysis techniques? Are they even better than humans? And finally, what is the speed-up that you really achieve? Because instead of having a neural network predict, say, the properties of a material, you might as well do another experiment, so hopefully the neural network is simply cheaper. So now I will go through a series of examples. I don't
claim to know the complete scientific background behind each of these examples, but here we go. One of the very first fields in which people tried to apply machine learning was quantum chemistry, or materials science, and that started as early as the 90s. For the specific case of quantum chemistry, your goal will in general be to predict the properties of a molecule: for example, what is the shape of the molecule, how are the nuclei arranged, what are the bond lengths between the atoms, what are the bond angles, and what is the overall energy, the electronic energy, of this molecule? Devising techniques for this has been a challenge for more than half a century now, and several techniques are known, but they are computationally very expensive, so the question is whether you might use neural networks to improve the predictions.
Now, in general this will look as follows: you use training data, for example from Hartree-Fock calculations or so-called density functional theory, which are computational methods that give you all of this information, yielding for example the energies that you want to have. Then you have to provide to the network not only this output (what you get from these computer programs is of course the output, the prediction), but also, somehow, the input. And what is the input in this case? It would not be enough to just tell the network "I have some molecule with seven carbon atoms, 15 hydrogen atoms, two oxygen atoms, and one nitrogen atom," because of course there are many different molecules that have the same numbers of atoms, and you cannot possibly know which one it is. So at least you have to provide the structure of the molecule, encoded in some fashion, and this leads to the quite challenging problem of
choosing a suitable input representation. In principle there are even textual representations that can precisely encode where each atom sits. You could also make a long list of the different types of atoms, enumerate them, and then for each atom say which neighboring atoms it is connected to. In general, such an input representation is called a molecular fingerprint, and it is your task as a designer to figure out a suitable input representation; the success will depend on it. One version that has been used, and which I found quite amusing, is to just have, well, the number of atoms and also, say, the number
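As an aside, the neighbor-list encoding mentioned above (a list of atoms plus, for each atom, the atoms it is bonded to) can be sketched in a few lines of Python. This is only an illustrative toy: the example molecule, the helper name `toy_fingerprint`, and the choice of features are assumptions made for this sketch, not any published fingerprint.

```python
from collections import Counter

# Toy molecule: ethanol, given as an atom list plus a bond list of
# index pairs into it; this is the neighbor-list style encoding
# described in the lecture.
atoms = ["C", "C", "O", "H", "H", "H", "H", "H", "H"]
bonds = [(0, 1), (1, 2), (0, 3), (0, 4), (0, 5), (1, 6), (1, 7), (2, 8)]

def toy_fingerprint(atoms, bonds):
    # Count atom types, and count which element pairs are bonded.
    # Element counts alone cannot distinguish isomers; the bonded-pair
    # counts add a little structural information on top.
    atom_counts = Counter(atoms)
    pair_counts = Counter(tuple(sorted((atoms[i], atoms[j]))) for i, j in bonds)
    return dict(atom_counts), dict(pair_counts)

atom_counts, pair_counts = toy_fingerprint(atoms, bonds)
print(atom_counts)   # {'C': 2, 'O': 1, 'H': 6}
print(pair_counts)   # {('C', 'C'): 1, ('C', 'O'): 1, ('C', 'H'): 5, ('H', 'O'): 1}
```

Fingerprints used in the actual literature (Coulomb matrices, symmetry functions, and so on) are considerably more elaborate, but the point is the same: the structure must be mapped to a fixed set of numbers the network can read, and making that map insensitive to, say, the order in which you list the atoms is part of the design problem.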
Accessible via: Open access
Duration: 01:27:23 min
Recording date: 2017-07-24
Uploaded: 2017-07-25 10:50:46
Language: en-US