24 - Accelerated Forward-Backward Optimization using Deep Learning / ClipID: 35911

Recording date 2021-07-26

Via

Free

Language

English

Organisational Unit

Lehrstuhl für Angewandte Mathematik (Modellierung und Numerik)

Producer

Lehrstuhl für Angewandte Mathematik (Modellierung und Numerik)

Format

lecture

Jevgenija Rudzusika (KTH Stockholm) on Accelerated Forward-Backward Optimization using Deep Learning:

We propose several deep-learning accelerated optimization solvers with convergence guarantees. We use ideas from the analysis of accelerated forward-backward schemes such as FISTA, but instead of the classical approach of proving convergence for a particular choice of parameters, such as the step size, we show convergence whenever the update is chosen from a specific set. Rather than picking a point in this set by a predefined rule, we train a deep neural network to pick the best update. Finally, we show that the method is applicable to several smooth and non-smooth optimization problems and demonstrate results superior to established accelerated solvers.
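To illustrate the general idea described in the abstract, the following is a minimal sketch of a safeguarded forward-backward solver for a LASSO problem. It is not the speaker's method: the "learned" proposal is a simple momentum-style stand-in for a trained network, and the convergence-guaranteeing set from the talk is approximated here by an objective-decrease test against the plain forward-backward step. All function names and parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the non-smooth part of the LASSO).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def objective(A, b, lam, x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x, 1)

def learned_fb_solver(A, b, lam, proposal_fn, n_iter=200):
    """Forward-backward splitting with a 'learned' update proposal.

    proposal_fn stands in for a trained network: it may suggest any next
    iterate, but the suggestion is accepted only if it does not do worse
    than the plain forward-backward step, so the classical convergence
    guarantee of the baseline scheme is retained.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x_fb = soft_threshold(x - step * grad, step * lam)   # safe baseline step
        x_cand = proposal_fn(x, x_prev, x_fb)                # "learned" suggestion
        # Safeguard: keep the proposal only if it is at least as good as the
        # baseline step (a crude proxy for membership in the admissible set).
        if objective(A, b, lam, x_cand) <= objective(A, b, lam, x_fb):
            x_next = x_cand
        else:
            x_next = x_fb
        x_prev, x = x, x_next
    return x

def momentum_proposal(x, x_prev, x_fb):
    # Placeholder for a trained network: a FISTA-like extrapolation of the
    # forward-backward step, used here purely as an illustrative stand-in.
    return x_fb + 0.9 * (x_fb - x_prev)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = learned_fb_solver(A, b, lam=0.1, proposal_fn=momentum_proposal)
    print("final objective:", objective(A, b, 0.1, x_hat))
```

In the talk's setting, the safeguard is replaced by a characterization of the set of updates for which convergence can be proved, and the proposal function is a deep network trained to select good points from that set.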
