Convex Optimization
Lecture 12 - Equality Constrained Optimization
Instructor: Yuanzhang Xiao
University of Hawaii at Manoa
Fall 2017
Today's Lecture
1. Basic Concepts
2. Newton's Method for Equality Constrained Problems
Outline
1. Basic Concepts
2. Newton's Method for Equality Constrained Problems
Equality Constrained Optimization Problems

equality constrained minimization problem:

    minimize    f(x)
    subject to  Ax = b

- f(x) convex, twice continuously differentiable
- A \in \mathbf{R}^{p \times n} with rank(A) = p < n
- p^\star = f(x^\star) = \inf \{ f(x) \mid Ax = b \} attained and finite

optimality condition: there exists a \nu^\star \in \mathbf{R}^p such that

\[
Ax^\star = b, \qquad \nabla f(x^\star) + A^T \nu^\star = 0
\]
Equality Constrained Quadratic Optimization

equality constrained quadratic minimization:

    minimize    (1/2) x^T P x + q^T x + r
    subject to  Ax = b

where P \in \mathbf{S}^n_+

optimality condition: there exists a \nu^\star \in \mathbf{R}^p such that

\[
Ax^\star = b, \qquad P x^\star + q + A^T \nu^\star = 0
\]

equivalent to the linear (KKT) system

\[
\begin{bmatrix} P & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} x^\star \\ \nu^\star \end{bmatrix}
=
\begin{bmatrix} -q \\ b \end{bmatrix}
\]

this is the basis for Newton's method in equality constrained optimization
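As a quick numerical check, here is a minimal numpy sketch that assembles and solves the KKT system above; the problem data P, q, A, b are made up purely for illustration.

```python
import numpy as np

# illustrative data (P must be positive semidefinite, A full row rank)
P = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])          # p = 1 constraint, n = 2 variables
b = np.array([1.0])

n, p = P.shape[0], A.shape[0]

# assemble the KKT matrix [P A^T; A 0] and right-hand side [-q; b]
KKT = np.block([[P, A.T], [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])

sol = np.linalg.solve(KKT, rhs)
x_star, nu_star = sol[:n], sol[n:]

# verify the optimality conditions: Ax* = b and Px* + q + A^T nu* = 0
print(A @ x_star - b)
print(P @ x_star + q + A.T @ nu_star)
```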
Solving Equality Constrained Optimization

how to solve equality constrained optimization?
- eliminate the equality constraints
- solve the dual problem and recover the solution to the primal problem
- Newton's method for equality constrained optimization

Newton's method is most commonly used; it preserves the structure (e.g., sparsity) of the problem
Eliminating Equality Constraints

find F \in \mathbf{R}^{n \times (n-p)} and \hat{x} \in \mathbf{R}^n such that

\[
\{ x \mid Ax = b \} = \{ Fz + \hat{x} \mid z \in \mathbf{R}^{n-p} \}
\]

- \hat{x} is any particular solution to Ax = b
- F is any matrix such that range(F) = null(A) (i.e., AF = 0)

the original problem is equivalent to

    minimize    \tilde{f}(z) = f(Fz + \hat{x})

with optimization variable z \in \mathbf{R}^{n-p}

from z^\star, recover the optimal primal and dual variables

\[
x^\star = Fz^\star + \hat{x}, \qquad \nu^\star = -(AA^T)^{-1} A \nabla f(x^\star)
\]

the choices of F and \hat{x} are not unique
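A small numpy/scipy sketch of the elimination approach, with made-up data and an arbitrary smooth convex objective; scipy.linalg.null_space gives one valid choice of F, and a least-squares solve gives one particular solution \hat{x}.

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

A = np.array([[1.0, 1.0, 1.0]])                 # p = 1, n = 3 (illustrative)
b = np.array([1.0])

F = null_space(A)                               # columns span null(A), so AF = 0
x_hat = np.linalg.lstsq(A, b, rcond=None)[0]    # one particular solution of Ax = b

def f(x):                                       # an arbitrary smooth convex objective
    return 0.5 * np.sum(x**2) + x[0]

# reduced, unconstrained problem in z
res = minimize(lambda z: f(F @ z + x_hat), np.zeros(F.shape[1]))
x_star = F @ res.x + x_hat
print(x_star, A @ x_star - b)                   # recovered x* satisfies Ax* = b

# recover the dual variable: nu* = -(A A^T)^{-1} A grad f(x*)
grad = x_star + np.array([1.0, 0.0, 0.0])       # gradient of the quadratic f above
nu_star = -np.linalg.solve(A @ A.T, A @ grad)
print(nu_star)
```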
Example: Optimal Allocation With Resource Constraints

resource allocation problem:

    minimize    \sum_{i=1}^n f_i(x_i)
    subject to  \sum_{i=1}^n x_i = b

eliminating x_n = b - \sum_{i=1}^{n-1} x_i corresponds to choosing

\[
\hat{x} = b e_n, \qquad
F = \begin{bmatrix} I \\ -\mathbf{1}^T \end{bmatrix} \in \mathbf{R}^{n \times (n-1)}
\]

equivalent problem:

    minimize    \sum_{i=1}^{n-1} f_i(x_i) + f_n\!\left( b - \sum_{i=1}^{n-1} x_i \right)
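For concreteness, a tiny sketch of this elimination for hypothetical quadratic costs f_i(x_i) = c_i x_i^2 / 2; the values of c and b are made up.

```python
import numpy as np
from scipy.optimize import minimize

b, n = 10.0, 4
c = np.array([1.0, 2.0, 3.0, 4.0])            # f_i(x_i) = c_i * x_i^2 / 2 (illustrative)

def reduced(z):                                 # z = (x_1, ..., x_{n-1}), x_n = b - sum(z)
    x = np.append(z, b - z.sum())
    return 0.5 * np.sum(c * x**2)

z_star = minimize(reduced, np.zeros(n - 1)).x
x_star = np.append(z_star, b - z_star.sum())
print(x_star, x_star.sum())                     # allocations summing to b
```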
Solve the Dual Problem

Lagrangian:

\[
L(x, \nu) = f(x) + \nu^T (Ax - b)
\]

dual function:

\[
\begin{aligned}
g(\nu) &= \inf_x \left( f(x) + \nu^T (Ax - b) \right) \\
       &= -b^T \nu + \inf_x \left( f(x) + (A^T \nu)^T x \right) \\
       &= -b^T \nu - \sup_x \left( -f(x) - (A^T \nu)^T x \right) \\
       &= -b^T \nu - f^*(-A^T \nu)
\end{aligned}
\]

where f^*(y) \triangleq \sup_x \left( -f(x) + y^T x \right) is the conjugate of f

dual problem:

    maximize    -b^T \nu - f^*(-A^T \nu)

with optimization variable \nu \in \mathbf{R}^p
Example: Equality Constrained Analytic Center

equality constrained analytic center:

    minimize    f(x) = -\sum_{i=1}^n \log x_i
    subject to  Ax = b

conjugate function (dom f = \mathbf{R}^n_{++}):

\[
\begin{aligned}
f^*(y) &\triangleq \sup_x \left( -f(x) + y^T x \right) \\
       &= \sup_x \left( \sum_{i=1}^n \log x_i + y^T x \right) \\
       &= \sup_x \sum_{i=1}^n \left( \log x_i + y_i x_i \right) \\
       &= -n - \sum_{i=1}^n \log (-y_i)
\end{aligned}
\]
Example: Equality Constrained Analytic Center

dual problem:

    maximize    -b^T \nu + n + \sum_{i=1}^n \log (A^T \nu)_i

with implicit constraint A^T \nu > 0

how to reconstruct the primal solution from the dual solution?

optimality condition:

\[
\nabla f(x^\star) + A^T \nu^\star = -\left( 1/x_1^\star, \ldots, 1/x_n^\star \right)^T + A^T \nu^\star = 0
\]

which implies

\[
x_i^\star = 1 / (A^T \nu^\star)_i, \quad i = 1, \ldots, n
\]
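A hedged numerical sketch of this primal recovery: solve the dual by a generic method (Nelder-Mead here, purely for simplicity) and set x_i = 1/(A^T \nu)_i; the data A, b are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 2.0, 1.0]])             # p = 1, n = 3 (illustrative data)
b = np.array([4.0])

def neg_dual(nu):
    y = A.T @ nu                             # implicit constraint: A^T nu > 0
    if np.any(y <= 0):
        return np.inf
    # g(nu) = -b^T nu + n + sum_i log (A^T nu)_i ; we minimize -g
    return b @ nu - len(y) - np.sum(np.log(y))

res = minimize(neg_dual, x0=np.array([1.0]), method="Nelder-Mead")
nu_star = res.x
x_star = 1.0 / (A.T @ nu_star)               # primal recovery from the optimality condition
print(x_star, A @ x_star - b)                # Ax* = b holds (approximately)
```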
Outline
1. Basic Concepts
2. Newton's Method for Equality Constrained Problems
assume that the initial point is feasible: Ax^{(0)} = b

choose the Newton step \Delta x_{nt} such that x + \Delta x_{nt} is feasible

second-order Taylor approximation:

    minimize    \hat{f}(x + v) = f(x) + \nabla f(x)^T v + (1/2) v^T \nabla^2 f(x) v
    subject to  A(x + v) = b

optimality condition for this equality constrained quadratic problem:

\[
\begin{bmatrix} \nabla^2 f(x) & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x_{nt} \\ w \end{bmatrix}
=
\begin{bmatrix} -\nabla f(x) \\ 0 \end{bmatrix}
\]
Newton's Method For Equality Constrained Optimization

Newton's method for equality constrained optimization:

given a starting point x \in dom f with Ax = b, repeat the following steps:
1. compute the Newton step \Delta x_{nt} and the Newton decrement
   \lambda(x)^2 = \Delta x_{nt}^T \nabla^2 f(x) \Delta x_{nt} = -\nabla f(x)^T \Delta x_{nt}
2. quit if \lambda^2 / 2 \le \epsilon
3. exact or backtracking line search to choose the step size t
4. update x := x + t \Delta x_{nt}

- requires a feasible starting point ("feasible descent method")
- convergence results basically the same as in the unconstrained case
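A minimal Python sketch of the algorithm above, assuming the caller supplies callables f, grad_f, hess_f and a feasible starting point x0; the function name, parameters, and defaults are illustrative, not part of the lecture.

```python
import numpy as np

def newton_eq(f, grad_f, hess_f, x0, A, b, eps=1e-8, alpha=0.25, beta=0.5, max_iter=50):
    """Feasible-start Newton method for: minimize f(x) s.t. Ax = b (x0 must satisfy Ax0 = b)."""
    x, p = x0.copy(), A.shape[0]
    for _ in range(max_iter):
        g, H = grad_f(x), hess_f(x)
        # Newton step: solve [H A^T; A 0] [dx; w] = [-g; 0]
        KKT = np.block([[H, A.T], [A, np.zeros((p, p))]])
        dx = np.linalg.solve(KKT, np.concatenate([-g, np.zeros(p)]))[:len(x)]
        lam2 = -g @ dx                       # Newton decrement squared: -grad^T dx
        if lam2 / 2 <= eps:
            break
        # backtracking line search on f
        t = 1.0
        while f(x + t * dx) > f(x) - alpha * t * lam2:
            t *= beta
        x = x + t * dx
    return x
```

In practice one would exploit the block structure of the KKT matrix (e.g., by block elimination) rather than forming and factoring it densely as in this sketch, which is what the earlier remark about preserving problem structure refers to.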
Infeasible Start

second-order Taylor approximation (x may be infeasible):

    minimize    \hat{f}(x + v) = f(x) + \nabla f(x)^T v + (1/2) v^T \nabla^2 f(x) v
    subject to  A(x + v) = b

solving for the Newton step:

    minimize    (1/2) v^T \nabla^2 f(x) v + \nabla f(x)^T v + f(x)
    subject to  Av = b - Ax

optimality condition:

\[
\begin{bmatrix} \nabla^2 f(x) & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x_{nt} \\ w \end{bmatrix}
=
-\begin{bmatrix} \nabla f(x) \\ Ax - b \end{bmatrix}
\]
Interpretation as Primal-Dual Newton Step

recall the optimality conditions:

\[
\underbrace{Ax^\star - b = 0}_{\text{primal feasibility}}, \qquad
\underbrace{\nabla f(x^\star) + A^T \nu^\star = 0}_{\text{dual feasibility}}
\]

define the residual

\[
r(x, \nu) = (r_{\text{dual}}(x, \nu), r_{\text{pri}}(x, \nu)) = (\nabla f(x) + A^T \nu, \; Ax - b) \in \mathbf{R}^n \times \mathbf{R}^p
\]

first-order Taylor approximation of the residual r(x, \nu):

\[
r(x + \Delta x, \nu + \Delta \nu) \approx r(x, \nu) + Dr(x, \nu) \begin{bmatrix} \Delta x \\ \Delta \nu \end{bmatrix}
\]

where Dr(x, \nu) \in \mathbf{R}^{(n+p) \times (n+p)} is the derivative of r(x, \nu)

trying to make the (approximated) residual equal to zero:

\[
Dr(x, \nu) \begin{bmatrix} \Delta x \\ \Delta \nu \end{bmatrix} = -r(x, \nu)
\]
Interpretation as Primal-Dual Newton Step

trying to make the residual equal to zero:

\[
\begin{bmatrix} \nabla^2 f(x) & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta \nu \end{bmatrix}
=
-\begin{bmatrix} \nabla f(x) + A^T \nu \\ Ax - b \end{bmatrix}
\]

writing \nu^+ = \nu + \Delta \nu, we have

\[
\begin{bmatrix} \nabla^2 f(x) & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \nu^+ \end{bmatrix}
=
-\begin{bmatrix} \nabla f(x) \\ Ax - b \end{bmatrix}
\]

same as the optimality condition on the previous slide, with \Delta x_{nt} = \Delta x and w = \nu^+ = \nu + \Delta \nu
Features of the Infeasible Start Newton Method

- objective value may not decrease: it is possible that f(x + t \Delta x) \ge f(x) for all t \ge 0
- residual always decreases: \| r(x + t \Delta x, \nu + t \Delta \nu) \|_2 < \| r(x, \nu) \|_2 for some t > 0, so the line search is based on \| r \|_2
- full-step feasibility: if a full step t = 1 is taken, x + \Delta x is feasible
Infeasible Start Newton Method

infeasible start Newton method for equality constrained optimization:

given a starting point x \in dom f and any \nu, repeat the following steps:
1. compute the primal and dual Newton steps \Delta x_{nt} and \Delta \nu_{nt}
2. backtracking line search on \| r \|_2:
   - t := 1
   - while \| r(x + t \Delta x_{nt}, \nu + t \Delta \nu_{nt}) \|_2 > (1 - \alpha t) \| r(x, \nu) \|_2, set t := \beta t
3. update x := x + t \Delta x_{nt} and \nu := \nu + t \Delta \nu_{nt}

until Ax = b and \| r(x, \nu) \|_2 \le \epsilon
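A corresponding Python sketch of the infeasible start variant, again with caller-supplied grad_f and hess_f and illustrative parameter names; the line search is on \| r \|_2 as in the algorithm above.

```python
import numpy as np

def infeasible_newton_eq(grad_f, hess_f, x0, nu0, A, b,
                         eps=1e-8, alpha=0.25, beta=0.5, max_iter=50):
    """Infeasible-start Newton method for: minimize f(x) s.t. Ax = b (x0 need not be feasible)."""
    x, nu, p = x0.copy(), nu0.copy(), A.shape[0]
    res_norm = lambda x, nu: np.linalg.norm(np.concatenate([grad_f(x) + A.T @ nu, A @ x - b]))
    for _ in range(max_iter):
        g, H = grad_f(x), hess_f(x)
        # primal-dual step: solve [H A^T; A 0] [dx; dnu] = -[g + A^T nu; Ax - b]
        KKT = np.block([[H, A.T], [A, np.zeros((p, p))]])
        rhs = -np.concatenate([g + A.T @ nu, A @ x - b])
        step = np.linalg.solve(KKT, rhs)
        dx, dnu = step[:len(x)], step[len(x):]
        # backtracking line search on the residual norm
        t = 1.0
        while res_norm(x + t * dx, nu + t * dnu) > (1 - alpha * t) * res_norm(x, nu):
            t *= beta
        x, nu = x + t * dx, nu + t * dnu
        if np.allclose(A @ x, b) and res_norm(x, nu) <= eps:
            break
    return x, nu
```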