22 - Online Learning for Optimization Problems with Unknown or Uncertain Cost Functions [ID:35730]

Okay. Hello and welcome to the second part of the talk about online learning for optimization

under uncertainty. My name is Kevin-Martin Aigner and I'm a PhD student in the group of optimization

under uncertainty and data analysis in the Department of Data Science under the supervision

of Professor Frauke Liers. My research in our group deals with data-driven optimization under

uncertainty in very different settings and applications. This talk is about the joint work

of these people written here on the slide. First of all, I want to thank the organizers of the

seminar for setting up everything and giving us the opportunity to present our work.

This talk starts with a short and general introduction to optimization under uncertainty with respect to uncertainties in the objective function. Then I will present our work on data-driven optimization under uncertainty over time in combination with online learning. We study

in this talk the special class of stochastic optimization problems with unknown discrete

probability distributions. I think that with this problem class the general concepts of data-driven

optimization under uncertainty can be explained very well and I'm also going to give some remarks

about other settings or about the state of the art in the literature. So let's start with a

classical optimization problem. We want to minimize a function f over the feasible set X. The vector u

represents the problem input for the objective function f. If this input is known, this is a

deterministic optimization problem which we might solve with standard methods. This optimization

problem might represent any decision problem you can imagine, for example to find the shortest path

between two points in the street network, or anything else. Unfortunately, the problem input may not be known exactly in practice; it is affected by uncertainty. In the example of our

shortest path problem you can imagine travel times on specific roads that are not known

beforehand and can be uncertain because of changing traffic. To handle such uncertainties in an optimization problem, one can roughly distinguish two general ideas: the concepts

of stochastic optimization and robust optimization. In stochastic optimization one assumes that the

uncertain parameter is a random vector on some probability space with probability measure P.
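In symbols, with the random vector u distributed according to P, an expected-value version of the stochastic model reads

\[
\min_{x \in X} \; \mathbb{E}_{u \sim P}\big[ f(x, u) \big].
\]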

A stochastic optimization problem includes stochastic quantities like the expected value

inside the optimization problem. In robust optimization, on the other hand, one assumes that the uncertain parameter lies inside a predefined uncertainty set U and optimizes the objective in a worst-case sense over all possible realizations of the uncertainty in U. This is modeled with a min-max expression. One can say that both modeling approaches usually lead

to optimization problems that are hard to solve and no generic solution method exists

for all of these problems. But there are some approaches that work at least for special classes of problems. First of all, one can try to reformulate the optimization problem under uncertainty into an equivalent, or at least sufficiently accurate, optimization problem that

can be solved with standard techniques. This can be achieved, for example, via duality theory, optimality systems like the KKT conditions, or by plugging in all possible scenarios of the uncertainty.
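For a finite uncertainty set, this scenario plug-in can be made concrete with a small brute-force sketch: enumerate a handful of candidate decisions and cost scenarios and pick the decision whose worst-case cost is smallest. All numbers and the brute-force setup here are illustrative assumptions, not the method from the talk:

```python
import numpy as np

# Hypothetical data: rows of X are candidate decisions,
# rows of U are cost-vector scenarios (the finite uncertainty set).
X = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])
U = np.array([[3.0, 1.0, 2.0],
              [1.0, 4.0, 2.0],
              [2.0, 2.0, 5.0]])

costs = X @ U.T               # costs[i, j]: cost of decision i in scenario j
worst = costs.max(axis=1)     # worst-case cost of each decision
x_robust = X[worst.argmin()]  # robust choice: minimize the worst case
```

With these toy numbers, the first decision has worst-case cost 3, which beats the other candidates' worst cases of 4 and 5.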

Such reformulations work only for specially structured problems. Another approach is to decompose the optimization problem that you cannot solve into easier problems that you can solve. The goal is then to combine the solutions of the subproblems into a solution of the original problem. This can be done with

cutting-plane algorithms or the computation of some most likely scenarios, but sometimes one can only solve the problem approximately, for example with linearization techniques or approximations via conservative constraints. Another challenge in optimization under uncertainty

can be that the probability distribution (for stochastic optimization) or a suitable uncertainty set (for robust optimization) is also unknown, but in practice we have access to historical data.

Therefore, the large research area of data-driven optimization deals with incorporating data about uncertain parameters directly into the modeling or the solution process of decision making.
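As a small illustration of using historical data, assume a few hypothetical cost observations; replacing the unknown distribution by uniform empirical weights turns the expected cost into a finite weighted sum:

```python
import numpy as np

# Hypothetical historical observations: each row is one observed
# cost vector, each column one candidate decision.
samples = np.array([[3.0, 1.0, 2.0],
                    [1.0, 4.0, 2.0],
                    [2.0, 2.0, 5.0],
                    [3.0, 1.0, 2.0]])

# Uniform empirical weights stand in for the unknown distribution.
weights = np.full(len(samples), 1.0 / len(samples))
est_expected_cost = weights @ samples   # finite weighted sum per decision
x_saa = est_expected_cost.argmin()      # decision with best estimated cost
```

Here the second decision has the lowest estimated expected cost (2.0), so it would be selected.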

One traditional way is to use statistical methods to estimate the probability distribution or the uncertainty set from data. Another approach is to approximate stochastic

quantities like the expected value with a finite sum of weighted data samples as in the sample

average approximation or in the scenario approach. To make this more concrete we can now take a

Part of a video series
Accessible via: Open Access
Duration: 00:29:47 min
Recording date: 2021-07-06
Uploaded on: 2021-07-13 17:56:21
Language: en-US

Kevin-Martin Aigner (Uni Erlangen) on "Online Learning for Optimization Problems with Unknown or Uncertain Cost Functions"

 

We consider the robust treatment of stochastic optimization problems involving random vectors with unknown discrete probability distributions. With this problem class, we demonstrate the basic concepts of data-driven optimization under uncertainty. Furthermore, we introduce a new iterative approach that uses scenario observations to learn more about the uncertainty over time. This means our solutions become less and less conservative, interpolating between distributionally robust and stochastic optimization. We achieve this by solving the distributionally robust optimization problem over time via an online-learning approach while iteratively updating the ambiguity sets. We provide a regret bound for the quality of the obtained solutions that converges at a rate of O(log(T)/T) and illustrate the effectiveness of our procedure by numerical experiments. Our proposed algorithm is able to solve the online learning problem significantly faster than equivalent reformulations. This is joint work with Kristin Braun, Frauke Liers, Sebastian Pokutta, Oskar Schneider, Kartikey Sharma and Sebastian Tschuppik.
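A rough sketch of the iterative idea: after each scenario observation, update the empirical frequencies and shrink a confidence radius around them, which plays the role of the ambiguity set. The function name and the radius schedule are illustrative assumptions, not the authors' exact update:

```python
import numpy as np

def observe_and_update(counts, scenario, t, alpha=1.0):
    """Record one observed scenario; return the empirical distribution
    and a confidence radius that shrinks as more data arrives.
    Illustrative only: the log(t)/t-style schedule is an assumption."""
    counts[scenario] += 1
    p_hat = counts / counts.sum()
    radius = alpha * np.sqrt(np.log(t + 2) / (t + 1))
    return p_hat, radius

counts = np.zeros(3)            # three possible discrete scenarios
rng = np.random.default_rng(0)
radii = []
for t in range(200):
    s = rng.integers(3)         # new scenario observation arrives
    p_hat, radius = observe_and_update(counts, s, t)
    radii.append(radius)
# As the radius shrinks, the ambiguity set tightens around the empirical
# distribution, interpolating from robust toward stochastic solutions.
```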
