

Differences Between Four Constrained Optimization Methods

These four constrained optimization methods look similar at first glance:

  • Lagrange Multipliers
  • Penalty Methods
  • Augmented Lagrangian Methods
  • Merit Methods

Here is a comprehensive explanation of these four methods, written by Brian Borchers.

No, they’re not all the same and it’s important to understand the differences between them.

Start with a simple optimization problem

{\rm min}\,f({\bf x})

subject to

{\bf g}({\bf x})={\bf 0}

where we can assume for simplicity that f and {\bf g} are smooth (at least twice continuously differentiable).

The Lagrangian function is

L({\bf x},\boldsymbol{\lambda})=f({\bf x})+\boldsymbol{\lambda}^\top {\bf g}({\bf x})

Note that L is a function of {\bf x} and \boldsymbol{\lambda}. The first order necessary condition for a point {\bf x}^* to be a minimizer is that there is a \boldsymbol{\lambda}^* such that ({\bf x}^*,\boldsymbol{\lambda}^*) is a stationary point of L. In the method of multipliers, we try to solve the nonlinear system of equations

\nabla_{\bf x}L={\bf 0}
\nabla_{\boldsymbol{\lambda}}L={\bf 0}

This is typically done by alternately minimizing with respect to {\bf x} and updating \boldsymbol{\lambda}. Given a Lagrange multiplier estimate \boldsymbol{\lambda}^{(k)}, we minimize L({\bf x},\boldsymbol{\lambda}^{(k)}) to get {\bf x}^{(k)}. Then we update \boldsymbol{\lambda} with

\boldsymbol{\lambda}^{(k+1)}=\boldsymbol{\lambda}^{(k)}+\alpha_k {\bf g}({\bf x}^{(k)})

where \alpha_k is a step size parameter that can be set in various ways.
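As a concrete sketch of the method of multipliers, consider the hypothetical toy problem min x_1^2 + x_2^2 subject to x_1 + x_2 - 1 = 0, whose known solution is x* = (0.5, 0.5) with multiplier \lambda^* = -1. The problem, step size, and function names below are illustrative assumptions, not from the text; the inner minimization uses the closed form available for this quadratic.

```python
# Method of multipliers on a hypothetical toy problem:
#   min  f(x) = x1^2 + x2^2   subject to  g(x) = x1 + x2 - 1 = 0
# Known solution: x* = (0.5, 0.5), lambda* = -1.

def minimize_lagrangian(lam):
    # For this quadratic problem, minimizing L(x, lam) = x1^2 + x2^2 + lam*(x1 + x2 - 1)
    # over x has the closed form dL/dxi = 2*xi + lam = 0  =>  xi = -lam / 2.
    x = -lam / 2.0
    return (x, x)

def method_of_multipliers(lam0=0.0, alpha=0.5, iters=60):
    lam = lam0
    for _ in range(iters):
        x1, x2 = minimize_lagrangian(lam)   # minimize L(x, lam) over x
        g = x1 + x2 - 1.0                   # constraint violation g(x)
        lam = lam + alpha * g               # multiplier update
    return (x1, x2), lam

(x1, x2), lam = method_of_multipliers()
print(x1, x2, lam)   # approaches (0.5, 0.5) and -1
```

Here the fixed step size \alpha_k = 0.5 happens to give geometric convergence of the multiplier estimate; other choices trade robustness for speed.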

A penalty function for our problem is a function that is 0 if {\bf g}({\bf x})={\bf 0} and greater than 0 when {\bf g}({\bf x}) \neq {\bf 0}. A commonly used penalty function is the quadratic penalty function

\phi({\bf g}({\bf x}))={\bf g}^2 ({\bf x})

In the penalty function method, we solve an unconstrained problem of the form

{\rm min}\, f({\bf x})+\boldsymbol{\rho}^\top\phi({\bf g}({\bf x}))

where \boldsymbol{\rho} is a penalty parameter that is increased until the solution of the penalized problem is close to satisfying {\bf g}({\bf x})={\bf 0}. Note that \boldsymbol{\rho} is not a Lagrange multiplier in this case.

For problems with inequality constraints a commonly used penalty function is

\phi({\bf g}({\bf x})) = \left({\rm max} \left\{ 0, {\bf g}({\bf x}) \right\}\right)^2
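A minimal sketch of the penalty method on the same hypothetical toy problem (min x_1^2 + x_2^2 subject to x_1 + x_2 = 1) shows why \rho must grow: the penalized minimizer is only feasible in the limit. The closed-form inner solve below is an assumption valid for this quadratic example only.

```python
# Quadratic penalty method on a hypothetical toy problem:
#   min  x1^2 + x2^2 + rho * (x1 + x2 - 1)^2
# By symmetry x1 = x2 = x, and setting the gradient to zero gives
#   2x + 2*rho*(2x - 1) = 0  =>  x = rho / (1 + 2*rho),
# so the minimizer only approaches the feasible point (0.5, 0.5) as rho grows.

def penalty_minimizer(rho):
    x = rho / (1.0 + 2.0 * rho)
    return (x, x)

rho = 1.0
for _ in range(20):
    x1, x2 = penalty_minimizer(rho)
    violation = abs(x1 + x2 - 1.0)   # how far g(x) = 0 is from holding
    if violation < 1e-6:
        break
    rho *= 10.0                      # increase the penalty until nearly feasible
print(x1, x2, rho)
```

Note the drawback this illustrates: reaching a constraint violation of 10^{-6} forces \rho up to about 10^6, and such large penalties make the subproblems increasingly ill-conditioned.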

An augmented Lagrangian function combines the penalty function idea with the Lagrangian:

\hat L({\bf x},\boldsymbol{\lambda}, \boldsymbol{\rho})=f({\bf x})+\boldsymbol{\lambda}^\top{\bf g}({\bf x})+\boldsymbol{\rho}^\top\phi({\bf g}({\bf x}))

Augmented Lagrangian methods minimize \hat L with respect to {\bf x}, update the Lagrange multiplier estimate \boldsymbol{\lambda}, and then (if necessary) update the penalty parameter \boldsymbol{\rho} in each iteration. In practice, augmented Lagrangian methods outperform simple penalty methods and the method of multipliers.
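The advantage over the pure penalty method can be sketched on the same hypothetical toy problem: with the multiplier term present, a moderate fixed \rho suffices, since the multiplier updates drive the iterates to the exact solution. The closed-form inner solve is again an assumption specific to this quadratic example.

```python
# Augmented Lagrangian method on a hypothetical toy problem:
#   Lhat(x, lam, rho) = x1^2 + x2^2 + lam*(x1 + x2 - 1) + rho*(x1 + x2 - 1)^2
# By symmetry the inner minimizer is x1 = x2 = (2*rho - lam) / (2 + 4*rho).
# Unlike the pure penalty method, rho can stay fixed at a moderate value.

def minimize_augmented(lam, rho):
    x = (2.0 * rho - lam) / (2.0 + 4.0 * rho)
    return (x, x)

lam, rho = 0.0, 10.0
for _ in range(30):
    x1, x2 = minimize_augmented(lam, rho)   # minimize Lhat over x
    g = x1 + x2 - 1.0                       # constraint violation
    lam = lam + 2.0 * rho * g               # multiplier update (penalty term is rho*g^2)
print(x1, x2, lam)   # approaches (0.5, 0.5) and -1, with rho fixed at 10
```

With \rho fixed at 10 the multiplier error shrinks by a factor of 1/(1+2\rho) per iteration, so the exact constrained solution is recovered without ever sending \rho to infinity.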

Merit functions are used in a variety of nonlinear programming algorithms. You'll most commonly see them used in sequential quadratic programming methods. In these methods, a search direction, {\bf d}^{(k)}, is computed at each iteration. The step is from {\bf x}^{(k)} to

{\bf x}^{(k+1)}={\bf x}^{(k)}+\alpha_k {\bf d}^{(k)}

where the step size parameter \alpha_k is determined by minimizing a merit function

\alpha_k = {\rm arg}\min_\alpha M({\bf x}^{(k)}+\alpha{\bf d}^{(k)})

The merit function is typically something like a penalized objective function or an augmented Lagrangian, but there’s a great deal of freedom in the form of the merit function.
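A short sketch of one such choice, an \ell_1-penalized objective used as the merit function M, on the same hypothetical toy problem. The starting point, search direction, penalty weight mu, and the crude grid search are all illustrative assumptions; production SQP codes use a backtracking search with a sufficient-decrease condition instead.

```python
# Line search on an l1 merit function -- a hypothetical sketch.
# Merit: M(x) = f(x) + mu * |g(x)| for the toy problem
#   f(x) = x1^2 + x2^2,  g(x) = x1 + x2 - 1.

def merit(x, mu=10.0):
    f = x[0] ** 2 + x[1] ** 2
    g = x[0] + x[1] - 1.0
    return f + mu * abs(g)   # penalizes both objective value and infeasibility

def line_search(x, d, alphas=None):
    # Crude minimization of M(x + alpha*d) over a grid of candidate steps.
    if alphas is None:
        alphas = [i / 100.0 for i in range(1, 101)]
    return min(alphas, key=lambda a: merit([x[0] + a * d[0], x[1] + a * d[1]]))

x = [0.0, 0.0]   # infeasible starting point (assumed for illustration)
d = [1.0, 1.0]   # hypothetical search direction toward the feasible set
alpha = line_search(x, d)
x = [x[0] + alpha * d[0], x[1] + alpha * d[1]]
print(alpha, x)
```

Because the merit function charges for both a worse objective and a larger constraint violation, the accepted step balances progress on f against progress toward feasibility, which is exactly the role the merit function plays inside an SQP iteration.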

These functions and the associated methods are described in many textbooks on nonlinear optimization. A good discussion can be found in Numerical Optimization by Nocedal and Wright.

  • Author: Panelatta
  • Link: https://bofc.tech/dbc0aa41/
  • Copyright: Unless otherwise noted, all articles on this blog are licensed under the BY-NC-SA license. Please cite the source when reposting!