Thank you for the introduction, and thank you to the committee for the invitation. Good afternoon, everyone. Today I will talk about physics-informed neural networks (PINNs) for non-smooth PDE-constrained optimization problems. First, I will introduce some background and motivation. Then I will go through two typical examples of non-smooth PDE-constrained optimization to illustrate the main ideas behind the design of PINNs for this kind of problem. Finally, I will present some conclusions and perspectives.

It is well known that PDEs model various physical phenomena. For some applications, we need not only to model a certain physical process, but also to control the system or optimize the considered process to meet certain goals. For this purpose, a given objective functional has to be minimized subject to a PDE, or a system of coupled PDEs, usually with additional constraints that guarantee some realistic requirements. In this way, we obtain PDE-constrained optimization problems. One example is controlling the heat distribution of a metal bar.
Mathematically, a PDE-constrained optimization problem can be written in the following form. Here, U and Y are assumed to be Banach spaces, and U_ad and Y_ad are closed convex sets used to impose control or state constraints. G is the objective functional to be minimized, and e represents a PDE or a system of PDEs. The variable y describes the state of the system modeled by the PDE, while the variable u is a parameter, for instance a source term or a coefficient, that should be adapted in an optimal way so as to minimize the objective functional. The constraints u ∈ U_ad and y ∈ Y_ad describe physical restrictions arising from realistic requirements.
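The displayed formulation is on the slide rather than in the transcript; a reconstruction consistent with the definitions above reads:

```latex
\min_{(y,u) \in Y \times U} \; G(y, u)
\quad \text{subject to} \quad
e(y, u) = 0, \qquad u \in U_{ad}, \qquad y \in Y_{ad}.
```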
This model is rather abstract, but it covers many important applications in optimal control, optimal design, and inverse problems. Here, we focus on non-smooth cases, and we discuss two types of non-smoothness. The first type arises when we consider a non-smooth objective functional.
That means the objective functional can be written in the following form: the functional G consists of a data-fidelity term, possibly a smooth regularization (for example, the squared L2 norm), and a non-smooth functional R. The non-smooth term R is used to capture prior information on the variable u, for example boundedness, sparsity, or discontinuity, and to impose this kind of prior information we use different types of non-smooth regularizations.
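In generic form (my notation, not necessarily the slide's: F collects the data-fidelity term and any smooth regularization), this reads:

```latex
G(y, u) = F(y, u) + R(u),
\qquad F \ \text{smooth}, \quad R \ \text{non-smooth (e.g., an } L^1 \text{ norm)}.
```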
As we mentioned, to expose our ideas clearly, here we focus on a concrete example: a parabolic sparse optimal control problem. That means we would like to promote the sparsity of the control, and for this purpose we use the L1 norm of the control variable as the regularization term. Here, sparse, or sparsity, means that the support of the control variable is only a subset of the domain; in other words, the control variable is zero in some parts of the domain. The state equation is a parabolic equation, and U_ad is used to impose pointwise bounds on the control variable.
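The concrete problem is again on the slide; a standard parabolic sparse optimal control formulation matching this description (with Q = Ω × (0, T), weights α, β > 0, target y_d, and bounds a ≤ b; the exact ingredients on the slide may differ) is:

```latex
\min_{y,\; u \in U_{ad}} \; \frac{1}{2}\|y - y_d\|_{L^2(Q)}^2
+ \frac{\alpha}{2}\|u\|_{L^2(Q)}^2
+ \beta \|u\|_{L^1(Q)}
\quad \text{s.t.} \quad
\partial_t y - \Delta y = u \ \text{in } Q, \quad
y = 0 \ \text{on } \partial\Omega \times (0, T), \quad
y(\cdot, 0) = y_0,
```

with U_ad = { u ∈ L²(Q) : a ≤ u ≤ b a.e. in Q }.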
The second type of non-smoothness arises when we consider a non-smooth PDE. Here, we focus on interface problems. Interface problems are PDEs defined piecewise in different regions and coupled together by interface conditions, and their solutions are non-smooth or even discontinuous. This is an example of the geometry of an interface problem: Γ is the interface, and it divides the whole domain into Ω⁻ and Ω⁺; the PDEs are then defined piecewise in Ω⁻ and Ω⁺.
Again, we focus on a concrete example: an elliptic interface optimal control problem. Here, we consider a smooth objective functional, and the state equation is an elliptic interface problem in which β is a piecewise-constant coefficient. The jump across the interface is defined by the limits taken from the two sides of the interface, and this term is the interface gradient condition, that is, the condition on the jump of the gradient of the solution across the interface.
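One common form of such a problem (a reconstruction; the jump data g₀, g₁ and the boundary condition on the slide may differ) is:

```latex
\min_{y,\,u} \; \frac{1}{2}\|y - y_d\|_{L^2(\Omega)}^2
+ \frac{\alpha}{2}\|u\|_{L^2(\Omega)}^2
\quad \text{s.t.} \quad
-\nabla \cdot (\beta \nabla y) = u \ \text{in } \Omega^- \cup \Omega^+, \qquad
y = 0 \ \text{on } \partial\Omega,
```

with the interface conditions

```latex
[y]_\Gamma := y^+|_\Gamma - y^-|_\Gamma = g_0,
\qquad
[\beta\, \partial_n y]_\Gamma = g_1 .
```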
Next, we will focus on these two typical examples to expose our ideas.
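Before moving on, a minimal sketch of the common PINN ingredient both examples build on may help; this is my illustration, not the speaker's implementation. A small network y_θ(t, x) approximates the state of a one-dimensional parabolic equation ∂_t y − ∂_xx y = u, and the PDE residual at collocation points is computed by automatic differentiation; all names and sizes are illustrative.

```python
import torch

class StateNet(torch.nn.Module):
    """Fully connected network y_theta(t, x) approximating the state."""
    def __init__(self, width=50):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1),
        )

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=1))

def pde_residual(model, t, x, u):
    """Residual of y_t - y_xx - u at collocation points (t, x)."""
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    y = model(t, x)
    grad = lambda out, inp: torch.autograd.grad(
        out, inp, torch.ones_like(out), create_graph=True)[0]
    y_t = grad(y, t)            # first derivative in time
    y_xx = grad(grad(y, x), x)  # second derivative in space
    return y_t - y_xx - u

# The PINN training loss then penalizes this residual together with the
# boundary and initial conditions, e.g.
#   loss = (pde_residual(model, t, x, u) ** 2).mean() + bc_loss + ic_loss
```

How the non-smooth ingredients, the L1 regularization in the first example and the interface conditions in the second, enter such a training objective is exactly what the two examples will address.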
Before introducing our algorithms, we first present a brief literature review. There are many works on the theoretical analysis of, and the numerical methods for, PDE-constrained optimization problems. The numerical methods mainly consist of optimization algorithms combined with some numerical discretization. The optimization algorithms include semi-smooth Newton methods, primal-dual active set methods, ...