Let us revise briefly. How does it work in that case? So you see, in this context, we
are considering a finite-dimensional system in n dimensions, with the control also finite-dimensional.
But the same will apply if we were dealing with an infinite-dimensional dynamical system:
the Schrödinger equation, wave equations, heat equations,
Stokes equations. Okay. So we are given an initial datum, and we know that the controllability of
the system is guaranteed when A and B fulfill the Kalman rank condition. Right? So remember that
the Kalman rank condition says that the rank of [B, AB, ..., A^{n-1}B] is equal to n. Right?
And this is related to the Cayley-Hamilton theorem, which says that once you
know all the powers of the matrix A up to A^{n-1}, that is, the identity, A, ..., A^{n-1},
all other powers of A can be derived as combinations of these lower ones. Right? And behind
it, this was also the power series expansion for the exponential of A, which is the generator of
the semigroup and so on. Okay. Well, so the first thing we should say is that if the system is large,
I mean, you are dealing with a large system, I don't know, the human cardiovascular system,
the human brain or a social network, and n is huge, checking these algebraic conditions might not be
easy. Right? So one thing is that, you know, the algebraic, analytical
formulation is very easy. And another one is the computational complexity. Okay. So that's a
warning here: this leads to high computational complexity
when n is large. Okay. Good.
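The rank test just described can be sketched numerically. Below is a minimal check of the Kalman rank condition; the matrices A and B are a hypothetical toy example (a double integrator), not data from the lecture.

```python
import numpy as np

def kalman_rank(A, B):
    """Build the controllability matrix [B, AB, ..., A^{n-1}B] and return its rank."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# Toy example (hypothetical): a double integrator, which is controllable.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print(kalman_rank(A, B))  # 2, equal to n, so the rank condition holds
```

Note the computational warning above still applies: forming the n blocks and computing the rank (via an SVD) becomes expensive and numerically delicate when n is large.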
Okay. So then what is the typical, the prototypical goal in control theory? In principle,
open-loop control. So we are here doing open-loop control. Right. So I am interested in open-loop
control. We have the other day also shown how, once you have done the open-loop control, you can
build feedback controllers, right? Which are in that sense much more realistic, because you
can compute the control in real time out of the state. But this is the central problem we have to
understand in order to then, you know, derive all the other conclusions: this prototypical problem
of controllability. So we are given an initial datum, we are given a
target x_T and a time capital T, and we would like to build the control so that x(T) is equal to x_T. And we said,
in order to do this, it is very interesting to look at the adjoint system,
which is now solved in the opposite sense of time. So time goes from capital T to zero.
Right. And it turns out that once you understand, or you introduce, you employ the adjoint system,
the property of u being a successful control can be characterized by duality. So we are just using the solutions
of the adjoint system as test functions in order to multiply the state equation and integrate
by parts. And what we realize is that, you know, this identity here completely characterizes
u being a control. Right. Why? Because when I multiply the equation of the state by phi
right here, I will get Bu times phi, or if you want, u times B* phi, that you integrate from zero to
capital T. So you get this term here. Right. And on the right-hand side, the only thing you get
is the terms associated with the integration by parts in time. Right. Between x prime and phi.
Right. Why? Because once you integrate by parts and you pass, you know, the A of x onto
the A* of phi, again by definition of the adjoint, so Ax times phi is simply x times A* phi,
and you use the adjoint equation, then all this contribution is gone, except for these, say,
extremal terms, t equals zero and t equals capital T, coming from the integration by parts here.
And therefore you get these two terms. So you see that here x_0 is given, it is the initial datum
of the system. x_T is given, it is the target we are given. Right. u is the control we are looking for.
And phi is an arbitrary function. Right. A solution of the adjoint system for every final datum phi_T. Okay.
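The duality identity just derived, ⟨x(T), φ_T⟩ − ⟨x_0, φ(0)⟩ = ∫_0^T ⟨u, B*φ⟩ dt, can be verified numerically. The sketch below uses hypothetical toy data (a rotation matrix A, an arbitrary control u) and checks the identity for one arbitrary final datum φ_T of the adjoint system.

```python
import numpy as np
from scipy.linalg import expm

def trapezoid(vals, ts):
    """Trapezoidal rule on a uniform grid; vals may be scalars or vectors."""
    vals = np.asarray(vals, dtype=float)
    dt = ts[1] - ts[0]
    return dt * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

# Hypothetical toy data: a rotation with a scalar control.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
phiT = np.array([0.5, -1.0])          # arbitrary final datum for the adjoint
T = 1.0
u = lambda t: np.array([np.sin(t)])   # an arbitrary control, not optimized

ts = np.linspace(0.0, T, 2001)

# State at time T by the variation-of-constants formula.
xT = expm(A * T) @ x0 + trapezoid([expm(A * (T - s)) @ B @ u(s) for s in ts], ts)

# Adjoint system phi' = -A^T phi, solved backwards from phi(T) = phiT.
phi = lambda t: expm(A.T * (T - t)) @ phiT

lhs = xT @ phiT - x0 @ phi(0.0)
rhs = trapezoid([u(t) @ (B.T @ phi(t)) for t in ts], ts)
print(abs(lhs - rhs))  # small, up to quadrature error
```

As in the derivation above, the boundary terms ⟨x(T), φ_T⟩ and ⟨x_0, φ(0)⟩ are all that survive the integration by parts, since AΧ against φ cancels against x against A*φ.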
Then the next step was to realize: oh, but then, you know, this is the Euler-Lagrange equation for
the minimization of this functional. Indeed, if we are able to minimize this functional,
the critical point of this functional leads to the control u. Right. The control being simply
u equal to B* phi hat, phi hat being the solution of the adjoint system with the minimizer phi_T hat as final datum. The existence of the
minimizer is guaranteed by the direct method of the calculus of variations, because the functional
is quadratic, convex and continuous in a Hilbert space, in our case R^n, actually the
Euclidean space. The only tricky aspect is to prove the coercivity. But the coercivity is
precisely where the rank condition comes in.
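The whole construction can be sketched end to end. Since the functional is quadratic, its Euler-Lagrange equation is a linear system for the minimizer φ̂_T, with matrix the controllability Gramian Λ = ∫_0^T e^{A(T−t)} B B^T e^{A^T(T−t)} dt; the sketch below solves it directly rather than by gradient descent, and the matrices, target and time horizon are hypothetical toy data.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator, controllable
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
xT = np.array([0.0, 0.0])                # target: steer to rest
T = 1.0
ts = np.linspace(0.0, T, 4001)
dt = ts[1] - ts[0]
w = np.full(len(ts), dt); w[0] = w[-1] = dt / 2   # trapezoidal weights

# Controllability Gramian Lambda by quadrature; its invertibility is the
# coercivity of the functional, guaranteed here by the Kalman rank condition.
mats = [expm(A * (T - t)) @ B @ B.T @ expm(A.T * (T - t)) for t in ts]
Gram = sum(wi * M for wi, M in zip(w, mats))

# Euler-Lagrange equation: Lambda phiT_hat = xT - e^{AT} x0.
phiT_hat = np.linalg.solve(Gram, xT - expm(A * T) @ x0)
u = lambda t: B.T @ expm(A.T * (T - t)) @ phiT_hat   # control u = B* phi_hat

# Check: simulate the state forward and verify x(T) hits the target.
integ = np.array([expm(A * (T - t)) @ B @ u(t) for t in ts])
xTnum = expm(A * T) @ x0 + (w[:, None] * integ).sum(axis=0)
print(np.linalg.norm(xTnum - xT))  # small, up to quadrature error
```

In large dimension one would minimize the functional iteratively (the gradient-descent methods of this session) instead of assembling and inverting the Gramian.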
Accessible via: Open access
Duration: 02:48:53 min
Recording date: 2024-07-07
Uploaded on: 2024-08-07 23:33:39
Language: en-US
S06: Gradient-descent methods (2), Duality algorithms, and Controllability (1)
Date: July 2024
Course: Control and Machine Learning
Lecturer: Prof. Enrique Zuazua
_
Check all details at: https://dcn.nat.fau.eu/course-control-machine-learning-zuazua/
TOPICS
S01: Introduction to Control Theory
S02: Introduction: Calculus of Variations, Controllability and Optimal Design
S03: Introduction: Optimization and Perspectives
S04: Finite-dimensional Control Systems (1)
S05: Finite-dimensional Control Systems (2) and Gradient-descent methods (1)
S06: Gradient-descent methods (2), Duality algorithms, and Controllability (1)
S07: Controllability (2)
S08: Neural transport equations and infinite-dimensional control systems
S09: Wave equation control systems
S10: Momentum Neural ODE and Wave equation with viscous damping
S11: Heat and wave equations: Control systems and Turnpike principle (1)
S12: Turnpike principle (2), Deep Neural and Collective-dynamics