4 - Course: Control and Machine Learning [ID:53627]

In the previous days we gave a general introduction to the topic of control, addressing some traditional motivations coming from science, technology, and industry. We also established some of the fundamental ideas in the calculus of variations, optimization, and control.

We have also seen the very tight link with the modern theory of machine learning, which, as I was showing you the last day, is very much based on an intelligent combination of the tools developed in these fields, in particular in control theory. But it has a different kind of application in mind: often we are not dealing with the classical systems of mechanics, like the pendulum or the Navier-Stokes equations, but rather simply with the necessity of understanding data, big data, the large amount of information that we receive nowadays in many contexts and that we have to handle and manipulate in order to extract relevant information and improve processes, as could simply be online communication, for instance.

So then, as we have shown, whenever you address these topics, also in machine learning, it is up to you to decide what kind of modeling you are going to employ. The same happens when addressing the problems of nature, of technology, of industry. For the same problem, there are many different degrees of complexity in the models you may consider.

And of course, you always have to take into account Occam's razor, a principle in the foundations and philosophy of science, which indicates that very often the best possible model for a given process, for a given system, is not necessarily the most complicated one, the one that integrates all possible tiny details. Because if you put all these details together, the model risks becoming so big, so complex, that afterwards, when you have to face issues such as numerical simulation, control, parameter identification, optimization, and so on, the model will simply be too big, right? And you will face the curse of dimensionality. While very often, we are only interested in some very specific features.

For instance, in the first example I gave, the work that James Clerk Maxwell did back in 1868, where he was trying to understand the stability of the governor, the rotating ball mechanism that stabilized the pressure within the steam engine, he did not consider the most sophisticated possible model. He took the simple harmonic oscillator with damping, and by computing the roots of the characteristic polynomial of this second-order differential equation, we already saw that the decay properties of that system are not monotonic with respect to the parameters, right? And this was the key observation that explained why the technicians and engineers who were trying to tune these mechanisms sometimes failed to do it properly. Why? Because they were looking for perfection: they kept increasing the damping parameter, while eventually, as we have seen, the decay rate of a system that is second order in time is not monotonic, right? It is monotonic for small variations of the parameters, in accordance with intuition: if you press harder, the response will be better. But eventually this simple intuition breaks down, the process saturates, and once you keep increasing the damping, the decay rate itself starts to degenerate, right? So then that was a warning.
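This non-monotonicity can be checked directly on the characteristic polynomial. A minimal sketch (not from the lecture; the normalized oscillator x'' + 2k x' + ω²x = 0 and the helper name `decay_rate` are illustrative assumptions):

```python
import cmath

def decay_rate(k, omega=1.0):
    # Exponential decay rate of x'' + 2k x' + omega^2 x = 0:
    # minus the largest real part among the roots of the
    # characteristic polynomial s^2 + 2k s + omega^2 = 0,
    # i.e. s = -k +/- sqrt(k^2 - omega^2).
    disc = cmath.sqrt(k * k - omega * omega)
    return -max((-k + disc).real, (-k - disc).real)

# Underdamped regime (k < omega): more damping helps, as intuition suggests.
print(decay_rate(0.2), decay_rate(0.8))
# Overdamped regime (k > omega): pushing the damping further past the
# critical value k = omega degrades the decay rate again.
print(decay_rate(1.0), decay_rate(5.0))
```

The rate equals k up to critical damping and then falls off like ω²/(2k) for large k, which is exactly the saturation the engineers ran into.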

All this together means that when you are dealing with control systems, before you get into very sophisticated models, it is much better to first analyze the issues from a simple modeling perspective. So you first adopt a simple modeling paradigm, where the analysis will be much faster, you will get a better understanding, and you will be able to do numerical simulations much faster. Only after you verify that the conclusions of this process are correct can you proceed to the next step and analyze how to enrich your model to capture some further detail, right? So it's the principle of the zoom, right? You don't try to get the global picture from the very beginning. So first you look, you know, you observe

Part of a chapter:
S04 Finite-dimensional Control Systems (1)

Accessible via: Open Access

Duration: 02:45:52 min

Recording date: 2024-07-07

Uploaded on: 2024-08-07 23:32:50

Language: en-US

S04: Finite-dimensional Control Systems (1)

Date: July 2024
Course: Control and Machine Learning
Lecturer: Prof. Enrique Zuazua

_

Check all details at: https://dcn.nat.fau.eu/course-control-machine-learning-zuazua/

TOPICS

S01: Introduction to Control Theory

S02: Introduction: Calculus of Variations, Controllability and Optimal Design

S03: Introduction: Optimization and Perspectives

S04: Finite-dimensional Control Systems (1)

S05: Finite-dimensional Control Systems (2) and Gradient-descent methods (1)

S06: Gradient-descent methods (2), Duality algorithms, and Controllability (1)

S07: Controllability (2)

S08: Neural transport equations and infinite-dimensional control systems

S09: Wave equation control systems

S10: Momentum Neural ODE and Wave equation with viscous damping

S11: Heat and wave equations: Control systems and Turnpike principle (1)

S12: Turnpike principle (2), Deep Neural and Collective-dynamics


Tags

FAU control mathematics machine learning Mathematik Applied Mathematics Turnpike control theory FAU MoD FAU DCN-AvH Chair for Dynamics, Control, Machine Learning and Numerics (AvH Professorship)