Convex Optimization. Lecture 12 - Equality Constrained Optimization. Instructor: Yuanzhang Xiao. University of Hawaii at Manoa, Fall 2017


Today's Lecture

1. Basic Concepts
2. Newton's Method for Equality Constrained Problems


Equality Constrained Optimization Problems

equality constrained minimization problem:

    minimize    f(x)
    subject to  Ax = b

- f(x) convex, twice continuously differentiable
- A ∈ R^{p×n} with rank A = p < n
- p* = f(x*) = inf{ f(x) | Ax = b } attained and finite

optimality condition: there exists a ν* ∈ R^p such that

    Ax* = b,    ∇f(x*) + A^T ν* = 0

Equality Constrained Quadratic Optimization

equality constrained quadratic minimization:

    minimize    (1/2) x^T P x + q^T x + r
    subject to  Ax = b

where P ∈ S^n_+

optimality condition: there exists a ν* ∈ R^p such that Ax* = b and Px* + q + A^T ν* = 0, equivalent to

    [ P   A^T ] [ x* ]   [ -q ]
    [ A   0   ] [ ν* ] = [  b ]

this is the basis for Newton's method in equality constrained optimization
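To make this concrete, here is a minimal NumPy sketch that assembles and solves the KKT system above; the problem data P, q, A, b are made up for illustration:

```python
import numpy as np

# made-up example data: n = 3 variables, p = 1 equality constraint
n, p = 3, 1
P = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])   # P in S^n_+ (here positive definite)
q = np.array([1.0, -2.0, 0.5])
A = np.ones((p, n))               # constraint: x1 + x2 + x3 = b
b = np.array([1.0])

# assemble the KKT matrix [[P, A^T], [A, 0]] and right-hand side [-q, b]
KKT = np.block([[P, A.T],
                [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])

sol = np.linalg.solve(KKT, rhs)
x_star, nu_star = sol[:n], sol[n:]

# verify the optimality conditions: Ax* = b and Px* + q + A^T nu* = 0
print(x_star, nu_star)
print(np.allclose(A @ x_star, b),
      np.allclose(P @ x_star + q + A.T @ nu_star, 0))
```

Since P is positive definite and A has full row rank here, the KKT matrix is nonsingular and a dense solve suffices; for large sparse problems one would exploit the block structure instead.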

Solving Equality Constrained Optimization

how do we solve an equality constrained optimization problem?

- eliminate the equality constraints
- solve the dual problem and recover the solution to the primal problem
- Newton's method for equality constrained optimization

Newton's method is most commonly used: it preserves the structure (e.g., sparsity) of the problem

Eliminate Equality Constraints

find F ∈ R^{n×(n−p)} and x̂ ∈ R^n such that

    { x | Ax = b } = { Fz + x̂ | z ∈ R^{n−p} }

- x̂ is any particular solution to Ax = b
- F is any matrix such that range(F) = null(A) (i.e., AF = 0)

the original problem is equivalent to

    minimize  f̃(z) = f(Fz + x̂)

with optimization variable z ∈ R^{n−p}

from z*, recover the optimal primal and dual variables:

    x* = Fz* + x̂,    ν* = −(AA^T)^{−1} A ∇f(x*)

the choices of F and x̂ are not unique
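A sketch of elimination on the same toy QP as above, assuming SciPy is available: scipy.linalg.null_space supplies an F with range(F) = null(A), and a least-squares solve supplies a particular solution x̂:

```python
import numpy as np
from scipy.linalg import null_space

# same made-up QP: minimize (1/2) x^T P x + q^T x  subject to  Ax = b
P = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 3.0]])
q = np.array([1.0, -2.0, 0.5])
A = np.ones((1, 3)); b = np.array([1.0])

F = null_space(A)                              # columns span null(A), so AF = 0
x_hat = np.linalg.lstsq(A, b, rcond=None)[0]   # any particular solution of Ax = b

# reduced unconstrained QP in z: minimize (1/2) z^T (F^T P F) z + (F^T (P x_hat + q))^T z
z_star = np.linalg.solve(F.T @ P @ F, -F.T @ (P @ x_hat + q))

# recover the optimal primal and dual variables
x_star = F @ z_star + x_hat
grad = P @ x_star + q                           # gradient of f at x*
nu_star = -np.linalg.solve(A @ A.T, A @ grad)   # nu* = -(AA^T)^{-1} A grad f(x*)
print(x_star, nu_star)
print(np.allclose(A @ x_star, b), np.allclose(grad + A.T @ nu_star, 0))
```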

Example: Optimal Allocation With Resource Constraints

resource allocation problem:

    minimize    Σ_{i=1}^n f_i(x_i)
    subject to  Σ_{i=1}^n x_i = b

eliminating x_n = b − Σ_{i=1}^{n−1} x_i is equivalent to choosing

    x̂ = b e_n,    F = [  I   ] ∈ R^{n×(n−1)}
                      [ −1^T ]

equivalent problem:

    minimize  Σ_{i=1}^{n−1} f_i(x_i) + f_n( b − Σ_{i=1}^{n−1} x_i )
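For a quick illustration (not from the slides), take n = 3 with made-up quadratic costs f_i(x_i) = (x_i − c_i)² and solve the reduced unconstrained problem with scipy.optimize.minimize:

```python
import numpy as np
from scipy.optimize import minimize

# made-up instance: n = 3, budget b = 1, costs f_i(x_i) = (x_i - c_i)^2
n, b = 3, 1.0
c = np.array([0.2, 0.5, 0.9])
f = lambda x: np.sum((x - c) ** 2)

# elimination: x_hat = b*e_n, F = [I stacked over -1^T]
x_hat = np.zeros(n); x_hat[-1] = b
F = np.vstack([np.eye(n - 1), -np.ones((1, n - 1))])

# unconstrained reduced problem in z in R^{n-1}
res = minimize(lambda z: f(F @ z + x_hat), np.zeros(n - 1))
x_star = F @ res.x + x_hat
print(x_star, x_star.sum())   # the recovered x* sums to b by construction
```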

Solve the Dual Problem

Lagrangian:

    L(x, ν) = f(x) + ν^T (Ax − b)

dual function:

    g(ν) = inf_x ( f(x) + ν^T (Ax − b) )
         = −b^T ν + inf_x ( f(x) + (A^T ν)^T x )
         = −b^T ν − sup_x ( −f(x) − (A^T ν)^T x )
         = −b^T ν − f*(−A^T ν)

where f*(y) = sup_x ( y^T x − f(x) ) is the conjugate of f

dual problem:

    maximize  −b^T ν − f*(−A^T ν)

with optimization variable ν ∈ R^p

Example: Equality Constrained Analytic Center

equality constrained analytic center:

    minimize    f(x) = −Σ_{i=1}^n log x_i
    subject to  Ax = b

conjugate function (dom f = R^n_{++}):

    f*(y) = sup_x ( y^T x − f(x) )
          = sup_x ( Σ_{i=1}^n log x_i + y^T x )
          = Σ_{i=1}^n sup_{x_i > 0} ( log x_i + y_i x_i )
          = −n − Σ_{i=1}^n log(−y_i)   (for y < 0)

(each term is maximized at x_i = −1/y_i, giving −log(−y_i) − 1)
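A numerical sanity check of this conjugate, with an arbitrary test point y < 0; the bounds passed to minimize_scalar are ad hoc but wide enough to contain each maximizer x_i = −1/y_i:

```python
import numpy as np
from scipy.optimize import minimize_scalar

y = np.array([-0.5, -2.0, -1.3])   # any y < 0 (the sup is finite only there)
n = y.size

# closed form: f*(y) = -n - sum_i log(-y_i)
closed_form = -n - np.sum(np.log(-y))

# numeric: f*(y) = sum_i sup_{x_i > 0} (log x_i + y_i x_i), separable per coordinate
numeric = sum(
    -minimize_scalar(lambda x, yi=yi: -(np.log(x) + yi * x),
                     bounds=(1e-9, 1e3), method="bounded").fun
    for yi in y
)
print(closed_form, numeric)        # the two values should agree to a few decimals
```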

Example: Equality Constrained Analytic Center

dual problem:

    maximize  −b^T ν + n + Σ_{i=1}^n log (A^T ν)_i

with implicit constraint A^T ν > 0

how do we reconstruct the primal solution from the dual solution? the optimality condition

    ∇f(x*) + A^T ν* = −(1/x*_1, ..., 1/x*_n)^T + A^T ν* = 0

implies

    x*_i = 1/(A^T ν*)_i,    i = 1, ..., n

Outline

1. Basic Concepts
2. Newton's Method for Equality Constrained Problems

Newton Step

assume that the initial point is feasible: Ax^(0) = b

choose the Newton step Δx_nt such that x + Δx_nt is feasible

second-order Taylor approximation:

    minimize    f̂(x + v) = f(x) + ∇f(x)^T v + (1/2) v^T ∇²f(x) v
    subject to  A(x + v) = b

optimality condition for this equality constrained quadratic problem:

    [ ∇²f(x)  A^T ] [ Δx_nt ]   [ −∇f(x) ]
    [ A       0   ] [ w     ] = [  0     ]

Newton's Method for Equality Constrained Optimization

Newton's method for equality constrained optimization: given a starting point x ∈ dom f with Ax = b, repeat the following steps

1. compute the Newton step Δx_nt and the Newton decrement

       λ(x)² = Δx_nt^T ∇²f(x) Δx_nt = −∇f(x)^T Δx_nt

2. quit if λ(x)²/2 ≤ ε
3. choose step size t by exact or backtracking line search
4. update x := x + t Δx_nt

- requires a feasible starting point (a "feasible descent method")
- convergence results are basically the same as in the unconstrained case
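A minimal sketch of this method, run on the analytic center problem from earlier; the data are random, and the backtracking parameters α = 0.1, β = 0.5 are typical choices, not values from the slides:

```python
import numpy as np

def newton_eq(f, grad, hess, x, A, b, eps=1e-8, alpha=0.1, beta=0.5):
    """Feasible-start Newton for: minimize f(x) s.t. Ax = b (x must satisfy Ax = b)."""
    n, p = A.shape[1], A.shape[0]
    for _ in range(50):
        g, H = grad(x), hess(x)
        # step 1: Newton step from the KKT system [[H, A^T], [A, 0]] [dx; w] = [-g; 0]
        KKT = np.block([[H, A.T], [A, np.zeros((p, p))]])
        dx = np.linalg.solve(KKT, np.concatenate([-g, np.zeros(p)]))[:n]
        lam2 = -g @ dx                   # Newton decrement squared
        if lam2 / 2 <= eps:              # step 2: stopping criterion
            break
        t = 1.0                          # step 3: backtracking line search,
        while (np.any(x + t * dx <= 0)   # staying in dom f (x > 0 for this f)
               or f(x + t * dx) > f(x) + alpha * t * (g @ dx)):
            t *= beta
        x = x + t * dx                   # step 4: update
    return x

# analytic center instance: f(x) = -sum_i log x_i  s.t.  Ax = b  (random data)
f, grad = lambda x: -np.sum(np.log(x)), lambda x: -1.0 / x
hess = lambda x: np.diag(1.0 / x**2)
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
x0 = np.ones(5); b = A @ x0              # b chosen so that x0 is feasible
x_star = newton_eq(f, grad, hess, x0, A, b)
print(x_star, np.allclose(A @ x_star, b))
```

Because Adx = 0 for every Newton step, each iterate stays feasible once the starting point is.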

Infeasible Start

second-order Taylor approximation (x may be infeasible):

    minimize    f̂(x + v) = f(x) + ∇f(x)^T v + (1/2) v^T ∇²f(x) v
    subject to  A(x + v) = b

solving for the Newton step:

    minimize    (1/2) v^T ∇²f(x) v + ∇f(x)^T v + f(x)
    subject to  Av = b − Ax

optimality condition:

    [ ∇²f(x)  A^T ] [ Δx_nt ]     [ ∇f(x) ]
    [ A       0   ] [ w     ] = − [ Ax − b ]

Interpretation as Primal-Dual Newton Step

recall the optimality conditions:

    Ax* − b = 0  (primal feasibility),    ∇f(x*) + A^T ν* = 0  (dual feasibility)

define the residual

    r(x, ν) = ( r_dual(x, ν), r_pri(x, ν) ) = ( ∇f(x) + A^T ν, Ax − b ) ∈ R^n × R^p

first-order Taylor approximation of the residual r(x, ν):

    r(x + Δx, ν + Δν) ≈ r(x, ν) + Dr(x, ν) [Δx; Δν]

where Dr(x, ν) ∈ R^{(n+p)×(n+p)} is the derivative of r(x, ν)

trying to make the (linearized) residual equal to zero:

    Dr(x, ν) [Δx; Δν] = −r(x, ν)

Interpretation as Primal-Dual Newton Step

trying to make the residual equal to zero:

    [ ∇²f(x)  A^T ] [ Δx ]     [ ∇f(x) + A^T ν ]
    [ A       0   ] [ Δν ] = − [ Ax − b         ]

writing ν⁺ = ν + Δν, we have

    [ ∇²f(x)  A^T ] [ Δx ]     [ ∇f(x) ]
    [ A       0   ] [ ν⁺ ] = − [ Ax − b ]

the same as the earlier optimality condition, with Δx_nt = Δx and w = ν⁺ = ν + Δν

Features of the Infeasible Start Newton Method

- the objective value may not decrease: it is possible that f(x + tΔx) ≥ f(x) for all t ≥ 0
- the residual always decreases: ‖r(x + tΔx, ν + tΔν)‖₂ < ‖r(x, ν)‖₂ for some t > 0, so the line search is based on ‖r‖₂
- full step feasibility: if t = 1, then x + Δx is feasible

Infeasible Start Newton Method

infeasible start Newton method for equality constrained optimization: given a starting point x ∈ dom f and any ν, repeat the following steps

1. compute the primal and dual Newton steps Δx_nt and Δν_nt
2. backtracking line search on ‖r‖₂:
   - t := 1
   - while ‖r(x + tΔx_nt, ν + tΔν_nt)‖₂ > (1 − αt) ‖r(x, ν)‖₂, set t := βt
3. update x := x + tΔx_nt and ν := ν + tΔν_nt

until Ax = b and ‖r(x, ν)‖₂ ≤ ε
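A matching sketch of the infeasible start method on the same analytic center instance; again α, β are assumed backtracking constants, and the domain check x > 0 is specific to this f:

```python
import numpy as np

def newton_infeasible(grad, hess, x, nu, A, b, eps=1e-8, alpha=0.1, beta=0.5):
    """Infeasible-start Newton for: minimize f(x) s.t. Ax = b (x in dom f, any nu)."""
    n, p = A.shape[1], A.shape[0]
    # residual r(x, nu) = (r_dual, r_pri) = (grad f(x) + A^T nu, Ax - b)
    r = lambda x, nu: np.concatenate([grad(x) + A.T @ nu, A @ x - b])
    for _ in range(50):
        # primal-dual Newton step: [[H, A^T], [A, 0]] [dx; dnu] = -r(x, nu)
        KKT = np.block([[hess(x), A.T], [A, np.zeros((p, p))]])
        step = np.linalg.solve(KKT, -r(x, nu))
        dx, dnu = step[:n], step[n:]
        # backtracking line search on ||r||_2, staying inside dom f (x > 0 here)
        t = 1.0
        while (np.any(x + t * dx <= 0)
               or np.linalg.norm(r(x + t * dx, nu + t * dnu))
                  > (1 - alpha * t) * np.linalg.norm(r(x, nu))):
            t *= beta
        x, nu = x + t * dx, nu + t * dnu
        if np.allclose(A @ x, b) and np.linalg.norm(r(x, nu)) <= eps:
            break
    return x, nu

# same analytic center instance, but started from an infeasible (positive) point
grad = lambda x: -1.0 / x
hess = lambda x: np.diag(1.0 / x**2)
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5)); b = A @ np.ones(5)
x0 = 0.5 * np.ones(5) + rng.random(5)    # in dom f, but Ax0 != b in general
x, nu = newton_infeasible(grad, hess, x0, np.zeros(2), A, b)
print(np.allclose(A @ x, b), x)          # primal feasibility attained at convergence
```

Note how the primal residual shrinks by a factor (1 − t) at each iteration and vanishes after the first full step t = 1, matching the full-step-feasibility property from the previous slide.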