2 - FAU DCN-AvH Seminar: PottsMGNet: A Mathematical Explanation of Encoder-Decoder Based Neural Networks [ID:49156]

Thank you. This is one of the best groups in control and applied mathematics, so it is very good to be here. So essentially, this is very new work we have just finished; we have been working on it for more than two years, and I even did much of the programming myself. The essential idea is to give a very precise mathematical explanation of neural networks, and of machine learning more generally, using a control approach plus some numerical techniques we already know. This has been joint work with Professor Raymond Chan from CityU and Professor Hao Liu from Hong Kong Baptist University. Some of the ideas we use here come from an earlier collaboration with Professor Jun Liu, on which there were several talks; two of them are available on YouTube. So I will try to review some of those ideas here, and this talk is more like a survey plus the new ideas.

The purpose is to give an explanation of these neural networks that is mathematically precise. In particular, with the explanation we give, everything becomes very clear: every parameter has a numerical meaning. Now, one way to explain a neural network is as high-dimensional interpolation: x_1 is the input data, y_1 is the ground truth, and mostly x_1 and y_1 are high-dimensional. The neural network is then an interpolation of these data pairs. This is one way to explain it.
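As a minimal sketch of this interpolation viewpoint (the notation $f_\theta$ and the pairs $(x_i, y_i)$ are my assumptions, not taken from the slides): training fits a parametric map to the data, e.g.

$$\min_{\theta}\ \frac{1}{N}\sum_{i=1}^{N}\big\|f_\theta(x_i)-y_i\big\|^2,\qquad x_i\in\mathbb{R}^d,\; y_i\in\mathbb{R}^m,$$

so the trained network $f_\theta$ interpolates, or approximately interpolates, the high-dimensional data pairs.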

Another way to explain the neural network is what we are doing here. Essentially, it all started from the residual neural network: you can write a residual network in this way, and then people realized that you can regard it as the discretization of a dynamical system. This viewpoint is used to explain the neural network.
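A minimal sketch of that correspondence (the step size and notation are my assumptions): a residual block

$$x_{k+1}=x_k+\Delta t\,F(x_k,\theta_k)$$

is exactly one forward Euler step, with step size $\Delta t$, for the continuous dynamical system

$$\frac{dx}{dt}=F\big(x(t),\theta(t)\big),$$

so the depth of the network plays the role of time.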

Here we mention some of the literature, where this was noticed very early, and there are a number of papers. We also noticed the work of Haber and Ruthotto, who have a whole series of papers; we even organized a half-year program at Cambridge University with Carola Schönlieb and colleagues, where we invited them to give talks. And while we were working on this paper, we noticed that Carola Schönlieb and collaborators have also published a few papers following this approach. What we are doing is more on the numerical side, and we are happy that it gives a very, very clear explanation.

You see, the framework can be used more generally, but here we just consider mappings from image to image: it could be segmentation, it could be denoising. In particular, the explanation we give treats the network as solving a two-phase segmentation problem: given an image, you have a ground truth, which is a two-phase segmentation, and if you use a neural network for this, we can give a very clear explanation of it. The same explanation applies to multi-phase segmentation, to image denoising, and to inpainting in exactly the same way. Just to make everything clear, we take two-phase image segmentation as the example.

Here I will try to review some of the literature. The review has two purposes: one is to introduce the traditional ideas; the other is that we can really use them together with the neural network. For two-phase segmentation, a very early successful approach is the Chan-Vese model. There you use a level set function, and you need to solve such equations. If you do not use a level set, you can instead use a binary function, or a probability function, v taking the values zero and one. Minimizing over this v is equivalent to minimizing the level set formulation, and then you can threshold: v below 0.5 is phase zero, v above 0.5 is phase one. So this is the so-called Chan-Vese model.
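A hedged note on the thresholding just mentioned (the relaxation details are my addition; the threshold value 0.5 follows the talk): in the standard convex relaxation one minimizes over $v:\Omega\to[0,1]$ instead of $v:\Omega\to\{0,1\}$ and then recovers a binary segmentation by

$$\Sigma=\{x\in\Omega : v(x)>0.5\},\qquad \text{phase one}=\Sigma,\quad \text{phase zero}=\Omega\setminus\Sigma.$$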

Mostly, though, I will use the name Potts model, because in data analysis and machine learning people mostly keep calling this the Potts model. The Potts model says: you give me k functions, and I need to segment the domain into k parts. Here k is equal to two, so v is equal to zero or one, which means you are given g_0 and g_1, and I want to minimize this functional such that you partition the domain into two subdomains; you can then use this as a segmentation. Here this term is the length of the interface between the phases, and g_0, g_1 are some given functions. So this is the two-phase Potts model; we just take the two-phase case to make the explanation clearer and easier. Once you solve this model with v equal to zero or one, here is what you get: you split the domain into two phases. And this eta, you see, if v

Part of a video series:

Accessible via

Open access

Duration

00:41:04 min

Recording date

2023-09-19

Uploaded on

2023-09-28 20:36:03

Language

en-US

Date: Tuesday, September 19, 2023
Event: FAU DCN-AvH Seminar
Organized by: FAU DCN-AvH, Chair for Dynamics, Control, Machine Learning and Numerics – Alexander von Humboldt Professorship at FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany)
Title: PottsMGNet: A Mathematical Explanation of Encoder-Decoder Based Neural Networks

Speaker: Prof. Dr. Xue-Cheng Tai
Affiliation: Norwegian Research Centre (Norway)

SEE MORE:

https://dcn.nat.fau.eu/pottsmgnet-a-mathematical-explanation-of-encoder-decoder-based-neural-networks/

Tags

online, Seminar, FAU, FAU DCN-AvH, FAU DCN-AvH Seminar