8 - Course: Control and Machine Learning [ID:53631]

Okay, so this is the point we got to yesterday, right?

So, in particular, this guarantees, as described in this drawing, the following fact, right?

Graphically, what we are saying here is the following: consider the differential equation associated to residual neural networks, which we have called the neural differential equation, because it is simply a differential equation with a neural structure due to the presence of the nonlinearity sigma.
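For reference, here is one common way of writing this neural differential equation; the exact placement of the controls W(t), A(t), b(t), which appear later in the lecture, is an assumption on my part:

```latex
% A sketch of the neural ODE, assuming the controls (W, A, b) enter in the
% standard residual form; sigma acts component-wise (e.g. the ReLU).
\begin{equation*}
  \dot{x}(t) = W(t)\,\sigma\bigl(A(t)\,x(t) + b(t)\bigr),
  \qquad x(0) = x_0 \in \mathbb{R}^d, \quad t \in (0, T),
\end{equation*}
% with W(t) \in \mathbb{R}^{d \times p}, A(t) \in \mathbb{R}^{p \times d},
% and b(t) \in \mathbb{R}^{p}.
```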

In particular, when the nonlinearity sigma is the ReLU, right, using the constructive, inductive arguments that I presented yesterday,

you can show that the system has a very, very ambitious and distinguished property of

simultaneous control, in the sense that when you give me N different initial points and N different final points, whatever N is, so N is arbitrary,

and please recall that capital N here does not have anything to do with the dimension of the Euclidean space, right? So we are in dimension d: x is the state, and it lives in dimension d, while N is the number of, for instance, agents interacting on a network, right?

And now I consider N possible trajectories, N possible configurations. What I am saying is that we can always find one single control, b(t), A(t), and W(t), so that each of them goes to its allocated point. You see how x1 goes to y1, x2 goes to y2, x3 goes to y3; a numerical sketch of this property follows below.
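Stated informally: for any N, any distinct x_1, ..., x_N and any distinct y_1, ..., y_N in R^d, there is one control t -> (W(t), A(t), b(t)) whose flow maps each x_i to y_i. Below is a minimal numerical sketch of this property; it uses a generic least-squares fit on a forward-Euler discretization of the neural ODE, not the constructive proof from the lecture, and all sizes, the optimizer, and the ReLU placement are assumptions:

```python
# Minimal sketch (not the lecturer's construction): steer N points to N
# distinct targets with ONE shared piecewise-constant control (W_k, A_k, b_k),
# via forward Euler on  x'(t) = W(t) sigma(A(t) x(t) + b(t)).
import numpy as np
from scipy.optimize import minimize

d, p, L, N = 2, 4, 5, 3            # state dim, width, layers, number of points
h = 1.0 / L                        # Euler step over the time interval (0, 1)
rng = np.random.default_rng(1)
X0 = rng.normal(size=(N, d))       # N distinct initial points x_i
Y = rng.normal(size=(N, d))        # N distinct targets y_i

PER_LAYER = d * p + p * d + p      # parameters per layer

def unpack(theta):
    """Split the flat parameter vector into per-layer (W, A, b)."""
    out = []
    for k in range(L):
        off = k * PER_LAYER
        W = theta[off:off + d * p].reshape(d, p)
        A = theta[off + d * p:off + d * p + p * d].reshape(p, d)
        b = theta[off + d * p + p * d:off + PER_LAYER]
        out.append((W, A, b))
    return out

def flow(theta, X):
    """Forward-Euler flow of all rows of X under one shared control."""
    for W, A, b in unpack(theta):
        X = X + h * np.maximum(X @ A.T + b, 0.0) @ W.T   # ReLU residual step
    return X

def loss(theta):
    return np.sum((flow(theta, X0) - Y) ** 2)

theta0 = 0.1 * rng.normal(size=L * PER_LAYER)
res = minimize(loss, theta0, method="L-BFGS-B")
print("final mismatch:", loss(res.x))   # should be near 0 if the fit succeeds
```

Note that the Euler step X + h * relu(X A^T + b) W^T is exactly the residual-network update mentioned above, which is why a piecewise-constant control can be read as a trained ResNet with L layers.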

Of course, there are a few remarks to be made. The first is that, in order for this to be possible, the terminal states, although arbitrary, have to be all different. So, for example, suppose x1 and x2 were to go to the same destination, y1. Why is that not possible?

Why, with this neural ODE system, despite this simultaneous control property, can I not take two different initial configurations and drive them to the same final one?

Do you have a hint of why this cannot happen?

Any comment on this?

The reason is very simple. If two different initial data were to reach the same final configuration, for instance because I have another trajectory doing this, I would be breaking the uniqueness of solutions; yes, unique solution, thank you, Matt.

I would be breaking the property that, for the Cauchy problem associated with an ODE whose nonlinearity is globally Lipschitz and whose dependence on time is, say, measurable and integrable, we have uniqueness of the solution of the corresponding Cauchy problem. If I solve the problem backwards in time, I would be in a situation where y1 has to lead to two different trajectories, one going to x1 and the other going to x2, and this is clearly impossible. So this can never be achieved: I will never be able to drive N points to N locations unless the targets are all different. In other words, the time-T flow map is injective, as sketched below.
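A compact way to record this argument (a sketch of the standard Cauchy-Lipschitz reasoning, not verbatim from the lecture):

```latex
% Let \Phi_T : \mathbb{R}^d \to \mathbb{R}^d be the time-T flow map of the
% neural ODE. Uniqueness for the Cauchy problem, run backwards from time T,
% gives
\begin{equation*}
  \Phi_T(x_1) = \Phi_T(x_2) \;\Longrightarrow\; x_1 = x_2,
\end{equation*}
% i.e. \Phi_T is injective: two distinct initial data can never reach the
% same terminal state.
```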

Now, of course, the targets, say y1 and y3 here, can be very close, right? The only thing I need is for the data to be different. However, when you analyze the proof we have given, the one with piecewise-constant controls, you realize that if the distances among the points are very, very small, then when you decide to act,

for instance, pushing the north hemisphere to the left while the south hemisphere is frozen,

you will need to wait longer, right? Why? Because the distance is so narrow that you really need to

see the separation emerging, okay? Then, eventually, we have seen that if I need a longer time, when I rescale time to one, this has an impact on the control, which is of the order of the length of the time interval. So we have said: whatever you can do with the control W(t) in the time interval (0, T), you can do with the control T W(Tt), after the corresponding change of variables, in the time interval (0, 1).
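A worked version of this rescaling, under the neural ODE form written earlier (the placement of the controls is the same assumption as above): set y(s) := x(Ts) for s in (0, 1); then

```latex
% Time rescaling: y(s) := x(Ts). By the chain rule,
\begin{equation*}
  \dot{y}(s) = T\,\dot{x}(Ts)
             = T\,W(Ts)\,\sigma\bigl(A(Ts)\,y(s) + b(Ts)\bigr),
  \qquad s \in (0, 1),
\end{equation*}
% so the rescaled controls on (0, 1) are
% \tilde{W}(s) = T\,W(Ts), \quad \tilde{A}(s) = A(Ts), \quad \tilde{b}(s) = b(Ts);
% only W is amplified by the factor T, which is why a longer waiting time
% translates into a control of the order of T.
```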

Part of a chapter:
S08 Neural transport equations and infinite-dimensional control systems

Accessible via: Open access

Duration: 02:52:29

Recording date: 2024-07-07

Uploaded on: 2024-08-07 23:32:03

Language: en-US

S08: Neural transport equations and infinite-dimensional control systems

Date: July 2024
Course: Control and Machine Learning
Lecturer: Prof. Enrique Zuazua

_

Check all details at: https://dcn.nat.fau.eu/course-control-machine-learning-zuazua/

TOPICS

S01: Introduction to Control Theory

S02: Introduction: Calculus of Variations, Controllability and Optimal Design

S03: Introduction: Optimization and Perspectives

S04: Finite-dimensional Control Systems (1)

S05: Finite-dimensional Control Systems (2) and Gradient-descent methods (1)

S06: Gradient-descent methods (2), Duality algorithms, and Controllability (1)

S07: Controllability (2)

S08: Neural transport equations and infinite-dimensional control systems

S09: Wave equation control systems

S10: Momentum Neural ODE and Wave equation with viscous damping

S11: Heat and wave equations: Control systems and Turnpike principle (1)

S12: Turnpike principle (2), Deep Neural Networks and Collective Dynamics

