Okay, I would like to say a bit about optimization under uncertainty. Much of it will be finite-dimensional, but we will also have a small extension to infinite-dimensional optimization.
Let's start with a very simple linear optimization problem; we will deal a lot with robust optimization and also with the interface between robustness and stochasticity. So suppose you have a linear optimization problem: you want to minimize a linear cost function, the green region here is the feasible region, and this would be an optimal vertex.
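For reference, a minimal way to write down this nominal problem in standard LP notation (the symbols c, A, b are the usual convention, not taken from the slides):

```latex
\min_{x \in \mathbb{R}^n} \; c^\top x
\quad \text{s.t.} \quad A x \le b .
```

The green region in the picture is then the polyhedron of all x with Ax ≤ b, and the optimum is attained at one of its vertices.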
But now it could be that your constraints and your cost function are uncertain, so the picture may look like this, or like that. What that could mean is that the former optimal solution is not even feasible anymore, because the feasible region has changed. And if you now also have a slightly different cost function, you might say: that was my former optimal solution, maybe I just move to the nearest now-feasible point; that point, however, may be far away from what you would now consider feasible and good.
Robust optimization then says: I take into account the uncertainties that are typical for the cost coefficients, the constraint coefficients, and so on. It is a bit of a game: we are the feasibility player and have to play x in the best possible way, but we face an adversary with a certain budget that can change the cost and the coefficients of our input in the worst possible way, and our task is to find the best guaranteed solution x regardless of what this adversary does. Of course, we know its budget, so we can protect ourselves against what the adversary can do. Formally, the adversary has an uncertainty set U and can choose c, A, and b from that uncertainty set.
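In symbols, this min-max game is essentially the standard robust counterpart; the following is a sketch in common notation, not a formula shown in the lecture:

```latex
\min_{x} \; \max_{(c,A,b) \in U} \; c^\top x
\quad \text{s.t.} \quad A x \le b \quad \text{for all } (c,A,b) \in U .
```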
There is a lot going on with respect to robustness. It provides full protection against uncertainty: you ask, what are my typical uncertainties? For example, the uncertainty sets given as input may be defined by scenarios, say data-driven, where you look at your historical data and want to protect against the typical scenarios, or you may say a parameter can fluctuate between its nominal value plus and minus k percent, or whatever you like. You then look for robustly feasible solutions: solutions that are fixed here and now, before you know how the uncertainty manifests itself, and that remain feasible regardless of how the uncertainty manifests itself. And among those robustly feasible solutions, you want one with the best guaranteed objective value.
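As a small illustration of the scenario case, here is a minimal sketch (in Python with scipy, using made-up toy data and an assumed sign constraint x ≥ 0) of how the robust counterpart over a finite scenario set can be solved as one larger LP via an epigraph variable:

```python
import numpy as np
from scipy.optimize import linprog

# A minimal sketch of a scenario-based robust counterpart.
# The scenario data (c_s, A_s, b_s) below is made up purely for illustration.
scenarios = [
    (np.array([-1.0, -2.0]), np.array([[1.0, 1.0], [-1.0, 2.0]]), np.array([4.0, 2.0])),
    (np.array([-1.5, -1.5]), np.array([[1.2, 0.9], [-1.0, 2.5]]), np.array([3.5, 2.5])),
]
n = 2  # dimension of the decision vector x

# Epigraph reformulation with decision vector z = (x, t):
#   minimize t
#   subject to  c_s^T x - t <= 0   (t bounds the worst-case cost)
#               A_s x      <= b_s  (feasible in every scenario)
A_rows, b_rows = [], []
for c_s, A_s, b_s in scenarios:
    A_rows.append(np.hstack([c_s, [-1.0]]))                       # c_s^T x - t <= 0
    b_rows.append(0.0)
    A_rows.append(np.hstack([A_s, np.zeros((A_s.shape[0], 1))]))  # A_s x <= b_s
    b_rows.extend(b_s)

A_ub = np.vstack(A_rows)
b_ub = np.array(b_rows)

obj = np.zeros(n + 1)
obj[-1] = 1.0  # minimize t only

res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * n + [(None, None)],  # x >= 0 assumed, t free
              method="highs")
print("robust x:", res.x[:n], "guaranteed worst-case cost:", res.x[-1])
```

Each scenario simply contributes its own copy of the constraints, and the extra variable t carries the guaranteed worst-case objective value among them.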
Now you have to evaluate the different approaches, which are studied a lot these days. Do I really want such robust protection regardless of how the uncertainty manifests itself, or am I fine with a stochastic solution that holds in a probabilistic sense? There is also a lot going on at that interface, because you do not really know your probability distributions; they are uncertain themselves, and you may want to be robust with respect to uncertain probability distributions.
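For the distributionally robust variant just mentioned, a common way to write it (again standard notation, not from the slides) is to optimize against the worst distribution in an ambiguity set:

```latex
\min_{x} \; \max_{\mathbb{P} \in \mathcal{P}} \; \mathbb{E}_{\mathbb{P}}\big[ f(x, \xi) \big] ,
```

where the random vector collects the uncertain data and the ambiguity set encodes what you are willing to assume about the unknown distribution.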
Then: how tractable is that mathematically, and how conservative are your solutions? Depending, of course, on the size of the uncertainty set, you may end up with quite bad solutions compared with the unprotected ones.
And how about adjustability? Some decisions only need to be taken once you know what the uncertainty is; these are wait-and-see decisions, as you have in the stochastic setting. You would then also like to take into account that some decisions can be postponed until later, as sketched below.
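A standard way to write this adjustable setting, again sketched in common notation rather than taken from the lecture, splits the here-and-now decision x from a wait-and-see decision y(u) that may react to the realized uncertainty u (here d and B(u) denote an assumed recourse cost vector and recourse matrix):

```latex
\min_{x} \; \Big( c^\top x \;+\; \max_{u \in U} \; \min_{y(u)} \; d^\top y(u) \Big)
\quad \text{s.t.} \quad A(u)\, x + B(u)\, y(u) \le b(u) \quad \text{for all } u \in U .
```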
Lecture: Trends in Optimization under Uncertainty