20 - Deep Learning and Computations of PDEs (Siddhartha Mishra, ETH Zurich) [ID:20559]

Welcome back everyone to our seminar series.

And today we have also Siddhartha Mishra from ETH Zurich speaking about deep learning and

computations of PDEs.

Thank you very much Marius and also to Enrique for inviting me to participate and to talk

in this interesting seminar.

So I'll talk about deep learning and computations of PDEs and let me see, let me try it.

I hope everyone can see my screen and now.

Very nicely, very nicely.

Thank you.

So now, yes, there it is.

Okay.

So I'm going to talk about PDEs.

So at the beginning, let me write them in this very, very abstract form, you know.

So a differential operator applied to a function equals another function, and hidden in this

formulation are the initial and boundary conditions and so on.

They will be made explicit later on.

So when we compute a PDE, we are either computing the fields, that is the solution field, which

is a function of space and time, or we are computing observables.

So these are functionals or quantities of interest, which can be written in this generic

form.
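In symbols (this notation is my assumption for the abstract form described here, not taken verbatim from the slides), the setting is:

```latex
% Abstract PDE: a differential operator D applied to the solution u
% equals a given input f; initial and boundary conditions are implicit
% in the formulation.
\[
  \mathcal{D}(u) = f .
\]
% Observable / quantity of interest: a functional of the solution field,
% written generically as a weighted space(-time) integral of u.
\[
  L(u) = \int_{\Omega} \psi\bigl(u(x,t)\bigr)\, \mathrm{d}x .
\]
```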

And to compute these objects, there are a variety of successful numerical methods, finite

difference, finite element, finite volume, and their successes as well as their deficiencies

are well known.

One of the issues is that whenever you deal with PDEs in high dimensions, and I'll quantify

and qualify what I mean by that, then these methods seem to have problems.

And by high dimensions, I mean that the space-time dimension is greater than or equal to four.

Okay.

So why do I care about high dimensional PDEs?

In this lecture, we'll have two examples of them or two classes of examples of them.

The first are what are called parametric PDEs.

So for parametric PDEs, in addition to space and time, the solution depends on a parameter,

as does the input.

Now this parameter can, for instance, parameterize the probability space.

So in case there are, let's say, measurement errors in the inputs to your PDE, you want to quantify

them.

This is what is called uncertainty quantification or UQ.

In that case, one often parameterizes these in terms of parameters, and these parameters

can live in a space R^d-bar, where this d-bar is very high-dimensional.

You'll see examples of that.
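Written out (again in notation I am assuming for the setting described here), the parametric problem reads:

```latex
% Parametric PDE: solution and input depend on a parameter y in addition
% to space and time; y lives in a possibly high-dimensional space.
\[
  \mathcal{D}\bigl(u(x,t;y)\bigr) = f(x,t;y),
  \qquad y \in Y \subset \mathbb{R}^{\bar{d}}, \quad \bar{d} \gg 1 .
\]
```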

Another way in which this Y comes in is if you think of it as a control or a design parameter.

So in that case, this guy here parameterizes the design space.

So in either case, and I'll give you some examples in a bit, what we are interested

in is we are computing the fields, but now the fields are also a function of the parameters,

and we are computing observables for different values of the parameters.
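To make this fields-plus-observables view concrete, here is a minimal toy sketch (not from the talk; the closed-form field `u` is a stand-in for an actual PDE solution) of evaluating an observable over many parameter samples, as one would in UQ:

```python
import numpy as np

def u(x, y):
    """Toy parametric field: a sum of sine modes weighted by the parameters y."""
    return sum(yk * np.sin((k + 1) * np.pi * x) for k, yk in enumerate(y))

def observable(y, n=1001):
    """Observable L(y) = integral over [0,1] of u(x, y)^2 dx, via the trapezoid rule."""
    x = np.linspace(0.0, 1.0, n)
    vals = u(x, y) ** 2
    dx = x[1] - x[0]
    return np.sum(0.5 * (vals[:-1] + vals[1:]) * dx)

# Evaluate the observable at many random parameter samples, y in [-1,1]^4,
# i.e. a (low-dimensional) example of the parameter space R^d-bar.
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(1000, 4))
values = np.array([observable(y) for y in samples])
print(values.mean())  # Monte Carlo estimate of the expected observable
```

The point of the sketch is only the structure of the task: one observable evaluation per parameter sample, each of which would require a full PDE solve in practice.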

Just to give you an example, and I'll come back to this example many times.

My background is in hyperbolic conservation laws.

So the PDEs are typically compressible Euler or compressible Navier-Stokes equations.

So the PDEs are right here.

Observables are the lift and the drag, which can be written in terms of these integrals.
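A generic form of such lift and drag integrals (the exact normalization and coefficients from the slides are not reproduced here) is a surface integral of the pressure over the body boundary:

```latex
% Lift and drag as integrals of the pressure p over the body surface S
% with outward unit normal n; \hat{x} and \hat{y} denote the directions
% parallel and perpendicular to the free stream.
\[
  \mathrm{Lift} = \oint_{S} p \,\bigl(n \cdot \hat{y}\bigr)\, \mathrm{d}s,
  \qquad
  \mathrm{Drag} = \oint_{S} p \,\bigl(n \cdot \hat{x}\bigr)\, \mathrm{d}s .
\]
```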

Part of a video series

Access: Open access

Duration: 01:20:45 min

Recording date: 2020-09-02

Uploaded: 2020-09-07 16:56:22

Language: en-US