Introduction. So I'd like to thank you for the invitation, and it's a pleasure to give this talk at the seminar. Today's talk is really an overview talk, and most of it is not about my own research; it's more about what I think are the important things about neural networks and partial differential equations. The style of the talk is that it will contain quite a bit of material, roughly one slide per topic. So I'll probably move quite fast, but the aim is to give you the overall high level ideas. If you have questions about the details, I'll be happy to answer them at the end of the talk, or you can interrupt me in the middle of the talk.
So the title of the talk is neural networks and partial differential equations. I think most of us are coming from a numerical analysis or computational math background, so we all know PDEs very well. Neural networks are something that gained a lot of attention in the past 10 years, and they have driven a lot of progress in machine learning and AI. From a mathematical point of view, I think the most important thing about neural networks is that they are a very flexible representation for high dimensional functions, maps, and also distributions. In modern computational science, more and more we need to represent and work with high dimensional objects, and that's why neural networks become quite useful. So here I list a few common architectures for neural networks.
The first is the fully connected neural network, where the input comes in from the left hand side, is gradually processed by the so-called neurons in the hidden layers in the middle, and eventually produces the output.
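To make this concrete, here is a minimal PyTorch sketch of such a fully connected network; the layer sizes and the ten-dimensional input are illustrative assumptions, not from the talk.

```python
import torch
import torch.nn as nn

# A minimal fully connected network: the input flows left to right
# through hidden layers of "neurons" and produces an output.
fcn = nn.Sequential(
    nn.Linear(10, 64),   # input layer: 10 features in, 64 neurons out
    nn.Tanh(),           # nonlinear activation
    nn.Linear(64, 64),   # hidden layer
    nn.Tanh(),
    nn.Linear(64, 1),    # output layer: a scalar prediction
)

x = torch.randn(32, 10)  # a batch of 32 inputs
y = fcn(x)               # forward pass: shape (32, 1)
```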
The second example is the CNN, the convolutional neural network. This is probably one of the most successful and most commonly used neural network architectures; it has played a big role in vision and image processing.
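Again just as an illustration, here is a minimal CNN sketch in PyTorch; the channel counts, the 28x28 grayscale input, and the ten output classes are assumptions for the example.

```python
import torch
import torch.nn as nn

# A minimal CNN: convolutional layers act as learned local filters,
# which is one reason they work so well on images.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1 input channel -> 8 filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample 28x28 -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # e.g. 10 class scores
)

img = torch.randn(32, 1, 28, 28)  # batch of 32 grayscale images
logits = cnn(img)                 # shape (32, 10)
```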
We also have the recurrent neural network. Here you can really think of it like a Markov chain: the data feeds into the same block A at every step, and at each step it also produces one piece of output. But really you should think of the network as a whole, taking an incoming sequence and producing an outgoing sequence.
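As a sketch of this Markov-chain picture, here is a tiny hand-written RNN in PyTorch; the dimensions, and naming the repeated block A, are illustrative choices.

```python
import torch
import torch.nn as nn

# A minimal RNN cell, written out by hand to show the Markov-chain
# structure: the same block A consumes one input per step, updates a
# hidden state, and emits one piece of output per step.
class TinyRNN(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.A = nn.Linear(d_in + d_hidden, d_hidden)  # the repeated block
        self.readout = nn.Linear(d_hidden, d_out)

    def forward(self, xs):
        # xs: (seq_len, batch, d_in) -> ys: (seq_len, batch, d_out)
        h = torch.zeros(xs.shape[1], self.A.out_features)
        ys = []
        for x in xs:  # incoming sequence, one step at a time
            h = torch.tanh(self.A(torch.cat([x, h], dim=-1)))
            ys.append(self.readout(h))  # outgoing sequence
        return torch.stack(ys)

ys = TinyRNN(5, 32, 2)(torch.randn(20, 8, 5))  # 20 steps, batch of 8
```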
Finally, there is the so-called ResNet. It's very simple, and typically it's combined with CNNs, but the important thing about ResNet is that it uses so-called skip connections: essentially, it performs the transformation gradually, so you can think of it like an ODE.
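Here is a minimal sketch of this idea in PyTorch: the update x_{k+1} = x_k + f(x_k) in the forward pass is exactly one forward Euler step of the ODE dx/dt = f(x). The width and depth are illustrative.

```python
import torch
import torch.nn as nn

# A minimal residual block: the skip connection means each layer only
# computes a small update, x_{k+1} = x_k + f(x_k), which is a forward
# Euler step of the ODE dx/dt = f(x).
class ResBlock(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, x):
        return x + self.f(x)  # skip connection: gradual transformation

net = nn.Sequential(*[ResBlock(16) for _ in range(10)])  # 10 "time steps"
x = torch.randn(4, 16)
print(net(x).shape)  # torch.Size([4, 16])
```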
Now, these are famous examples of neural networks, or at least some of the famous examples, and you can already see their strong connection with mathematics: convolutional operators, Markov chains, ODEs, and dynamical systems. So really, neural networks and PDEs are intertwined. In this talk, what I'm trying to share with you is, in one direction, how I think neural networks can help solve partial differential equations, and, in the other direction, how PDEs can help explain and also develop neural network architectures.
As I said, this talk will be roughly one slide per topic. In the first part, neural networks for PDEs, I will try to touch on a few topics. Again, the main reason that neural networks are used for PDEs is that they can represent high dimensional functions and high dimensional maps. Now, where do such high dimensional functions and maps appear? The first place is high dimensional PDEs: some of the PDEs that we work with in scientific computing are really inherently high dimensional. Second, such high dimensional maps can also appear in low dimensional PDEs, especially in the case of inverse problems and parametric PDEs. Finally, I will also touch on the case of reducing from high dimensional to low dimensional, which is model reduction.
Okay, let's start with the first topic of the first part, which is neural networks for high dimensional PDEs. Here the neural network is typically just used to represent the solutions of the PDEs, and I think the best way to explain is to give you three important examples of high dimensional PDEs. The first example is so-called many-body quantum mechanics. We all know that on the microscale our world is governed by quantum mechanics, and because we have many particles, the resulting equations are typically very high dimensional.
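Before the examples, here is a rough sketch of what "using a network to represent the solution" can look like. This is in the spirit of variational approaches such as the deep Ritz method, not necessarily the specific methods from the talk; the 10-dimensional Poisson problem with f ≡ 1 on the unit cube is an illustrative assumption.

```python
import torch
import torch.nn as nn

# Sketch: represent the PDE solution u(x; theta) by a network and
# minimize a variational loss at Monte Carlo sample points. For
# -Laplace(u) = 1 with u = 0 on the boundary, the Ritz energy is
# E[u] = integral of (0.5*|grad u|^2 - u), estimated by sampling.
d = 10  # dimension: a mesh would need N^10 points, sampling does not
u = nn.Sequential(nn.Linear(d, 64), nn.Tanh(),
                  nn.Linear(64, 64), nn.Tanh(),
                  nn.Linear(64, 1))
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.rand(256, d, requires_grad=True)   # interior samples in [0,1]^d
    ux = u(x)
    grad_u = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    energy = (0.5 * grad_u.pow(2).sum(dim=1) - ux.squeeze(-1)).mean()
    # crude boundary penalty: pin one random coordinate to a face of the cube
    xb = torch.rand(256, d)
    xb[torch.arange(256), torch.randint(d, (256,))] = torch.randint(2, (256,)).float()
    loss = energy + 100.0 * u(xb).pow(2).mean()  # softly enforce u = 0 on boundary
    opt.zero_grad()
    loss.backward()
    opt.step()
```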