3 - Deep redatuming for PDE and inverse problems (L. Demanet)

You are mostly a crowd of mathematicians, so please stop me if I'm wrong.

But if somebody has a question and wants to write it in the chat,

I can also have a look at that and read it.

Yeah, I'll keep the chat open too.

So it might be a bit unconventional as a talk format for all of you.

And I apologize if it's a little far from your own interests, but I have personally

gotten distracted over the past, I don't know, five or ten years.

And I wanted to show you what gets me excited about the interplay of inverse problems and

machine learning and what that might mean for math.

But there's not going to be a lot of math in this talk, unfortunately.

So thank you so much for the invitation, Tobias, Enrique, and Nicola.

I appreciate that.

I also want to acknowledge the participants in this line of work that I'm going to show:

Pawan Bharadwaj from IISc.

I don't know if he's here in the room today, but Pawan played a central role in developing

the things I'm going to show you.

He was a postdoc in the group.

Matt was a graduate student in the group and is now a postdoc at MIT.

Hongyu Sun just graduated and moved to a postdoc at Caltech, and Brenda Kanya was a graduate student at

MIT.

I also want to acknowledge Borian Gieszkowski.

We are starting work together on similar topics and extensions of this and Borian is here

in the room and you know him very well.

All right.

So this is going to be a bit of a, well, perhaps philosophical talk about the role

of neural networks in our business as applied mathematicians.

Why should we care?

Should we jump on that train, and what kind of difference does it make?

So the main way that neural networks have been used in inverse problems so far, or perhaps

the simplest and most accessible way, would be to say: we have a forward map, let's

say a nonlinear forward map F here, that goes from M to D. M is model, D is data.

And if inverting this map is for some reason complicated, then neural networks can typically

take on that task.

You can imagine that F inverse would either be ill-conditioned or would have issues such

as lack of convexity, which means that iterative algorithms quickly run out of breath

trying to invert it; then neural networks can pick up the slack and help to some extent.

This is what's happened in quite a few papers.

It's fine.

It's okay for small-scale problems.

So what you do is you train this neural net on a bunch of pairs of M and D, and then

you go ahead and you sample from that ensemble, that distribution, to test the neural

network once it's trained.

And you train on simulations;

you don't train on real data here.
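To make that workflow concrete, here is a minimal sketch of training a network on simulated (M, D) pairs and testing it on fresh samples from the same distribution. Everything in it, the toy forward map, the mixing matrix A, the architecture, and the dimensions, is a hypothetical stand-in of my choosing, not the construction from the talk.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

dim = 16
A = torch.randn(dim, dim) / dim**0.5

# Toy nonlinear forward map F: model m -> data d. A stand-in for the
# PDE-based map in the talk; the matrix A couples the components.
def forward_map(m):
    return torch.tanh(m @ A)

# Simulated training pairs (m, d) drawn from a chosen ensemble.
m_train = torch.randn(2048, dim)
d_train = forward_map(m_train)

# Generic network standing in for the learned inverse F^{-1}: d -> m.
net = nn.Sequential(
    nn.Linear(dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, dim),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(d_train), m_train)
    loss.backward()
    opt.step()

# Test by sampling fresh pairs from the same distribution as training.
m_test = torch.randn(256, dim)
err = nn.functional.mse_loss(net(forward_map(m_test)), m_test)
print(f"test MSE: {err.item():.4f}")
```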

This is fine, but what I want to show you is the idea that for genuine applications,

when you run into large scales, when you run into difficult situations where you don't

want to ask too much of the neural net like I did in the previous slide, then a very fruitful

approach, I believe, is to do estimation tasks in data space.

So you still have a model of a partial differential equation, that is, this script F that goes from model space M to data space D.
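The recording cuts off here, but the contrast with the previous setup can be sketched: in a data-space estimation task, the network maps data to data, and the PDE model F is used only to simulate training examples; the model M is never asked of the network. The split below of a simulated record d into an observed half and a target half is my own hypothetical example, not the speaker's task.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

dim = 16
A = torch.randn(dim, dim) / dim**0.5

def forward_map(m):
    return torch.tanh(m @ A)  # same toy stand-in for the PDE map F

# Hypothetical data-space task: from one half of a simulated record
# d = F(m), estimate the other half. The network never reconstructs m;
# it works entirely in data space.
m = torch.randn(4096, dim)
d = forward_map(m)
d_obs, d_tgt = d[:, :dim // 2], d[:, dim // 2:]

net = nn.Sequential(
    nn.Linear(dim // 2, 64), nn.ReLU(),
    nn.Linear(64, dim // 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    nn.functional.mse_loss(net(d_obs), d_tgt).backward()
    opt.step()
```

The design choice this illustrates is the one the speaker motivates above: the hard, ill-posed step of recovering the model is not asked of the network; it only estimates one part of the data from another.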
