Welcome back to Pattern Recognition. So today we want to look a bit more into convex optimization
and in particular I want to introduce you to the concept of duality in convex problems.
So let's have a look at duality in convex optimization and if we want to look into
something that is dual, we have to distinguish essentially two problems: the primal problem and the dual problem. So let's start with the primal problem. This is an optimization problem as we've seen before. We want to minimize some function f0 of x, and this
is subject to some constraints and here we have two sets of constraints. There is the
fi of x which are inequality constraints and then we have the hi of x which are equality
constraints. Note that these functions don't necessarily have to be convex. So duality is
generally a concept that you can introduce whenever you're dealing with constrained optimization.
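As a compact summary of the primal problem just described (f0 is the objective, and there are m inequality and p equality constraints, matching the notation used in the rest of the lecture):

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \le 0, \qquad i = 1, \dots, m, \\
& h_i(x) = 0, \qquad i = 1, \dots, p.
\end{aligned}
```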
So let's look into the Lagrangian that emerges then from this one here and of course we can
build the Lagrangian function L: this is the primal objective f0 plus the inequality constraints, each multiplied by a lambda i and summed up, plus the equality constraints, each multiplied by a nu i and summed up. So again we use the trick of moving from an n-dimensional
space to a space of higher dimension and here we increase the dimensionality by m the number
of inequality constraints plus p the number of equality constraints. So lambda is the vector of multipliers associated with the inequality constraints, and nu is the one associated with the equality constraints. These new vectors lambda and nu are our Lagrange multipliers, and they're also called the dual variables. So this is the step towards the
dual problem. Now let's look into the Lagrange dual function and in the Lagrange dual function
now we essentially get rid of x. We look for the greatest lower bound of our Lagrangian function over x, so we can write this up as g of lambda and nu, the infimum over x of our Lagrangian function. We can expand this: it is the infimum over x of f0 of x plus the weighted inequality constraints plus
the equality constraints. So this is interesting: you can note here already that the Lagrange dual function is a pointwise infimum of functions that are affine in the dual variables. Hence, the Lagrange dual function is concave. So the introduction of this infimum makes our function concave even if the original problem is not convex. Now let's introduce p star as the optimal
value of the optimization problem. Then for any lambda greater than or equal to zero and any nu, the following bound is valid: p star is an upper bound for our Lagrange dual function, so it is the largest value that the Lagrange dual function can possibly attain. In other words, the dual function always gives us a lower bound on the optimal value. Let's look
at some example. So let's start with our primal objective f0 of x that we see here, and now we introduce a first constraint, f1 of x. Now let's align them here such that you can
see where the same points are and now we can start building the Lagrange dual function
and of course we have to introduce some kind of lambda to mix the two and we start with
a lambda of 0.1, and now we are interested in the greatest lower bound of this function over x. So you can see now, when I start increasing the value of lambda, that we are deforming the original function, and you can see that in about this position here we get a lower bound that is higher than the previous lower bounds, because we got rid of the negative lobe on the left-hand side. Also in the center we kind of have a very high lower bound that we can choose here. So now if I increase lambda further, you see that the center lobe goes down and our greatest lower bound is already going
down again. So you can see that we essentially can write this up as a function of lambda.
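The construction of g as a function of lambda can be sketched numerically. This is a minimal sketch with a hypothetical one-dimensional example (the functions f0 and f1 below are my own illustrative choices, not the exact curves from the lecture); the infimum over x is approximated by the minimum over a dense grid:

```python
import numpy as np

# Hypothetical 1-D example: a nonconvex objective f0 and one
# inequality constraint f1(x) <= 0 (these functions are assumptions
# for illustration, not the lecture's actual plot).
def f0(x):
    return x**4 - 3 * x**2 + x   # nonconvex objective

def f1(x):
    return x**2 - 1              # feasible set: -1 <= x <= 1

xs = np.linspace(-3.0, 3.0, 20001)   # dense grid approximating the infimum

# Optimal primal value p*: minimum of f0 over the feasible grid points.
feasible = f1(xs) <= 0
p_star = f0(xs[feasible]).min()

# Lagrange dual function g(lambda) = inf_x [ f0(x) + lambda * f1(x) ],
# approximated by the minimum over the grid.
def g(lam):
    return (f0(xs) + lam * f1(xs)).min()

# Weak duality: g(lambda) <= p* for every lambda >= 0.
for lam in [0.0, 0.1, 0.5, 1.0, 2.0, 5.0]:
    assert g(lam) <= p_star
```

Increasing lambda first raises the greatest lower bound (the negative lobes get penalized) and then lowers it again, exactly as seen in the lecture's animation.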
So if we now plot this in a different way, you can see that there is our p star, which is just a fixed value: the optimal value of our optimization problem. And you can see that if we vary lambda, the value of our Lagrange dual, this infimum, first increases slowly until at some point we are already pretty close to p star, and then it decreases again.
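What we just saw in the plot is exactly the weak duality bound stated earlier. In formulas, with L the Lagrangian and p star the optimal value of the primal problem:

```latex
g(\lambda, \nu)
= \inf_{x} L(x, \lambda, \nu)
= \inf_{x} \Bigl( f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x)
  + \sum_{i=1}^{p} \nu_i h_i(x) \Bigr)
\le p^\star
\quad \text{for all } \lambda \ge 0 \text{ and all } \nu.
```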
So in our example neither f0 of x nor f1 of x is convex, but the dual function is concave. This is a very interesting observation. Now let's look into this a little bit more. So
we can see that if we have a feasible point of the optimization problem so this is x tilde
Access: open access
Duration: 00:18:47 min
Recording date: 2020-11-11
Uploaded on: 2020-11-11 14:39:10
Language: en-US
In this video, we look into the concept of duality in optimization and strong duality in convex problems.
This video is released under CC BY 4.0. Please feel free to share and reuse.
Music Reference: Damiano Baldoni - Thinking of You