Quadratic Programming

Outline
- Linearly constrained minimization
- Linear equality constraints
- Linear inequality constraints
- Quadratic objective function

SideBar: Matrix Spaces
Four fundamental subspaces of an m x n matrix A:
- Column space col(A)
- Row space row(A)
- Null space null(A): {x : Ax = 0}
- Left null space lnull(A): {x : x^T A = 0}
rank(A) = dim(col(A)) = dim(row(A))
dim(col(A)) + dim(lnull(A)) = # rows; col(A) and lnull(A) are orthogonal complements
dim(row(A)) + dim(null(A)) = # columns; row(A) and null(A) are orthogonal complements
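As a quick numerical check of these dimension relations, here is a minimal numpy sketch; the 3 x 4 rank-2 matrix is an arbitrary example chosen for illustration, not taken from the slides.

```python
import numpy as np

# Example matrix: 3 rows, 4 columns, rank 2 (third row is the sum of the first two).
A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 3.],
              [1., 1., 3., 4.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

# dim col(A) = dim row(A) = rank
# dim null(A)  = n - r   (row(A) and null(A) are orthogonal complements in R^n)
# dim lnull(A) = m - r   (col(A) and lnull(A) are orthogonal complements in R^m)
print(r, n - r, m - r)                      # 2 2 1
assert r + (n - r) == n and r + (m - r) == m
```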

Linear Equality Constraints
min_x F(x) s.t. Ax = b
Assume the constraints are consistent and linearly independent; t constraints remove t degrees of freedom.
Any solution can be written x = A^T x_a + Z x_z, where A^T x_a lies in the row space of A and Z x_z lies in the null space of A.
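A small sketch of this decomposition in numpy/scipy (the constraint data is invented for illustration): the pseudoinverse gives a particular solution lying in row(A), and scipy.linalg.null_space gives an orthonormal basis Z for null(A), so any choice of x_z yields another feasible point.

```python
import numpy as np
from scipy.linalg import null_space

# Linear equality constraints Ax = b (2 constraints, 4 unknowns).
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 1.]])
b = np.array([2., 3.])

Z = null_space(A)                  # columns span null(A); here Z is 4 x 2
x_part = np.linalg.pinv(A) @ b     # particular solution, lies in row(A)
x_z = np.array([0.5, -1.0])        # arbitrary null-space coordinates
x = x_part + Z @ x_z               # still feasible

print(np.allclose(A @ x, b))       # True
```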

Graphical Interpretation
x = A^T x_a + Z x_z
- A^T x_a: a particular solution (Ax = b)
- Z x_z: a homogeneous solution (Ax = 0)

Feasible Search Directions
Feasible points x_1, x_2 both satisfy Ax_1 = Ax_2 = b.
A feasible step p satisfies Ap = A(x_1 - x_2) = 0.
If Z is a basis for null(A), feasible directions have the form p = Z p_z.
I.e., the direction of change p must lie in the null space of A: Ap = 0, so Ax_2 = A(x_1 + p) = Ax_1 = b.

Optimality Conditions
Taylor series expansion along a feasible direction:
F(x + ε Z p_z) = F(x) + ε p_z^T Z^T g(x) + ½ ε^2 p_z^T Z^T G(x + ε θ Z p_z) Z p_z
where g is the gradient [∂F/∂x_1, ..., ∂F/∂x_n]^T and G is the Hessian.
The term ε p_z^T Z^T g(x) is the first-order change along the feasible direction.
At a constrained stationary point the projected gradient must satisfy p_z^T Z^T g(x) = 0 for all p_z,
so Z^T g(x) = 0 is the first-order optimality condition.
This implies g(x) ∈ null(Z^T), i.e., g(x) must lie in row(A), so g(x) = A^T λ at a local minimum.
The gradient is orthogonal to every feasible direction: to first order the objective does not change (an extremum or saddle point).
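In code, the first-order test amounts to projecting the gradient onto null(A) and, when the projection vanishes, recovering λ from g(x) = A^T λ by least squares. A minimal sketch with an illustrative one-constraint example (the function first_order_check and its data are ours, not from the slides):

```python
import numpy as np
from scipy.linalg import null_space

def first_order_check(A, g, tol=1e-8):
    """Check Z^T g = 0 and recover lambda from g = A^T lambda (least squares)."""
    Z = null_space(A)
    projected_grad = Z.T @ g
    lam, *_ = np.linalg.lstsq(A.T, g, rcond=None)
    residual = np.linalg.norm(A.T @ lam - g)
    stationary = np.linalg.norm(projected_grad) < tol and residual < tol
    return stationary, lam

# Example: F(x) = 1/2 ||x||^2 subject to x1 + x2 = 2.
A = np.array([[1., 1.]])
x_star = np.array([1., 1.])         # constrained minimizer
g = x_star                          # gradient of 1/2 ||x||^2 is x
print(first_order_check(A, g))      # stationary: True, lambda ≈ 1
```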

Optimality Conditions
The first-order condition is necessary but not sufficient; it only guarantees a stationary point.
Second-order condition: the projected Hessian Z^T G(x) Z is positive semi-definite.
A merely positive semi-definite projected Hessian corresponds at best to a weak minimum; a positive definite projected Hessian guarantees a strict local minimum.
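A quick way to test the second-order condition numerically is to inspect the eigenvalues of the projected Hessian. A minimal sketch with invented data: the Hessian is indefinite on R^3, but the constraint removes the direction of negative curvature.

```python
import numpy as np
from scipy.linalg import null_space

G = np.array([[4., 1., 0.],
              [1., 3., 0.],
              [0., 0., -1.]])       # indefinite in R^3 ...
A = np.array([[0., 0., 1.]])        # ... but the constraint removes the bad direction
Z = null_space(A)
eigvals = np.linalg.eigvalsh(Z.T @ G @ Z)
print(eigvals, np.all(eigvals >= -1e-12))   # all non-negative -> second-order condition holds
```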

Summary
Necessary conditions for a constrained minimum:
- Ax = b
- Z^T g(x) = 0
- Z^T G(x) Z positive semi-definite

Algorithm
Step 1: If the optimality conditions are satisfied, terminate.
Step 2: Compute a feasible search direction.
Step 3: Compute the step length.
Step 4: Update the estimate of the minimum.
The search direction is computed by Newton's method. Starting from
F(x + ε Z p_z) = F(x) + ε p_z^T Z^T g(x) + ½ ε^2 p_z^T Z^T G(x + ε θ Z p_z) Z p_z,
set the derivative with respect to p_z to zero:
Z^T g + Z^T G Z p_z = 0,
so solve Z^T G Z p_z = -Z^T g for p_z and set p = Z p_z.
Cf. the unconstrained Newton system g + G p = 0: the same condition is imposed, restricted to the feasible directions.
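A minimal numpy sketch of this reduced Newton step (problem data invented for illustration): form Z^T G Z p_z = -Z^T g, solve for p_z, and expand back to p = Z p_z, which automatically satisfies Ap = 0.

```python
import numpy as np
from scipy.linalg import null_space

def reduced_newton_step(A, G, g):
    """Feasible Newton direction for min F(x) s.t. Ax = b, given gradient g and Hessian G at x."""
    Z = null_space(A)                        # basis for the feasible directions
    p_z = np.linalg.solve(Z.T @ G @ Z, -Z.T @ g)
    return Z @ p_z                           # p satisfies A p = 0

# Example: F(x) = 1/2 x^T G x + c^T x with one constraint x1 + x2 + x3 = 1.
G = np.diag([2., 1., 4.])
c = np.array([-1., 0., 2.])
A = np.array([[1., 1., 1.]])
x = np.array([1., 0., 0.])                   # feasible starting point
g = G @ x + c                                # gradient at x
p = reduced_newton_step(A, G, g)
print(np.allclose(A @ p, 0))                 # True: the step stays on the constraint
```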

Linear Inequality Constraints
min_x F(x) s.t. Ax ≤ b
Each row a_i^T x ≤ b_i defines a half-space.
Active constraint: a_i^T x = b_i. Inactive constraint: a_i^T x < b_i.
If the set of constraints active at the solution were known, the problem could be converted to an equality-constrained one.
Active-set methods: maintain a working set of currently active constraints and apply the equality-constrained machinery to it.
The KKT conditions apply here; inactive constraints have Lagrange multipliers equal to zero.
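Identifying the active set at a feasible point is a one-line test; a tiny sketch with illustrative data and tolerance:

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
b = np.array([1., 1., 1.5])
x = np.array([1.0, 0.5])             # feasible: A @ x = [1.0, 0.5, 1.5]

residual = b - A @ x                 # zero residual means the constraint is active
active = np.isclose(residual, 0.0, atol=1e-9)
print(active)                        # [ True False  True ]
```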

[Figure: constant-cost contours showing the unconstrained minimum, the constrained minimum lying on an active constraint, and an inactive constraint.]

Feasible Search Directions
Before a new search step, each constraint satisfies a_i^T x = b_i (active) or a_i^T x < b_i (inactive; no immediate concern).
A feasible search direction must not violate these constraints, so concentrate on the active set.
Binding perturbation: a_i^T p = 0; the constraint remains active (a_i^T (x + t p) = a_i^T x = b_i).
Non-binding perturbation: a_i^T p < 0; the constraint becomes inactive (a_i^T (x + t p) = a_i^T x + t a_i^T p = b_i + t a_i^T p < b_i).

Optimality Conditions
The first- and second-order conditions from the linear equality case apply to binding perturbations.
Added condition: g(x)^T p ≥ 0 for every non-binding perturbation p, i.e., every p with Ap ≤ 0 over the active constraints.
If g(x)^T p < 0 for some such p, then p is a feasible descent direction and x cannot be a minimum.
Writing g(x) = -A^T λ over the active constraints (the sign convention that makes the multipliers of Ax ≤ b non-negative), the condition becomes λ^T Ap ≤ 0 for all Ap ≤ 0, which holds only if every λ_i ≥ 0.
Indeed, if some λ_j < 0, choose p with a_j^T p = -1 and a_i^T p = 0 for i ≠ j; then g(x)^T p = λ_j < 0, a feasible descent direction.

Summary
Necessary conditions for a constrained minimum:
- Ax = b for the active constraints
- Z^T g(x) = 0
- Z^T G(x) Z positive semi-definite
- λ_i ≥ 0, i = 1, ..., t

Algorithm
Step 1: If the optimality conditions are satisfied, terminate.
Step 2: Decide whether a constraint should be deleted from the working set; if so, go to Step 6.
Step 3: Compute a feasible search direction.
Step 4: Compute the step length.
Step 5: Add a constraint to the working set if necessary; go to Step 7.
Step 6: Delete a constraint from the working set and update Z.
Step 7: Update the estimate of the minimum.

Computing Search Direction
Newton's method computes a feasible step with respect to the currently active constraints.
Need to check whether the step moves toward any inactive constraint (a_i^T p > 0 for a constraint a_i^T x ≤ b_i).
Find the intersection x + αp with the closest such constraint.
A line search between x and x + αp determines the optimal step εp.
If ε = α, the blocking constraint is added to the working set.
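A minimal sketch of this ratio test for constraints Ax ≤ b (the helper name max_feasible_step and the data are ours, not from the slides): the largest feasible step along p is limited by the nearest inactive constraint that the step moves toward.

```python
import numpy as np

def max_feasible_step(A, b, x, p, active, tol=1e-12):
    """Largest alpha with A(x + alpha*p) <= b, and the index of the blocking constraint (or None)."""
    alpha, blocking = np.inf, None
    for i in range(A.shape[0]):
        ai_p = A[i] @ p
        if not active[i] and ai_p > tol:           # step moves toward this inactive constraint
            step = (b[i] - A[i] @ x) / ai_p
            if step < alpha:
                alpha, blocking = step, i
    return alpha, blocking

A = np.array([[1., 0.], [0., 1.], [1., 1.]])
b = np.array([1., 1., 1.5])
x = np.array([0.2, 0.2])                           # strictly feasible, nothing active yet
p = np.array([1., 1.])                             # candidate search direction
print(max_feasible_step(A, b, x, p, active=np.zeros(3, dtype=bool)))
# alpha ≈ 0.55, blocking constraint index 2 (x1 + x2 <= 1.5)
```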

Quadratic Programming
Simplifications are possible when the objective function is quadratic:
- The Hessian G becomes a constant matrix.
- Newton's method becomes exact rather than approximate.

Quadratic Programming
- Newton's method finds the minimum of the current equality-constrained subproblem in one iteration.
- No line search is needed: either take the full step or shorten it to the nearest constraint.
- The constant Hessian need not be re-evaluated at each iteration.
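To illustrate the one-step property, the following sketch (illustrative data) solves an equality-constrained QP min ½ x^T G x + c^T x s.t. Ax = b with a single reduced Newton step from an arbitrary feasible point:

```python
import numpy as np
from scipy.linalg import null_space

G = np.array([[2., 0., 0.],
              [0., 3., 1.],
              [0., 1., 4.]])
c = np.array([-1., 0., 1.])
A = np.array([[1., 1., 1.]])
b = np.array([1.])

x = np.linalg.pinv(A) @ b            # any feasible point will do
Z = null_space(A)
g = G @ x + c                        # gradient of the quadratic at x
p_z = np.linalg.solve(Z.T @ G @ Z, -Z.T @ g)
x_star = x + Z @ p_z                 # exact constrained minimizer after one step

g_star = G @ x_star + c
print(np.allclose(A @ x_star, b), np.allclose(Z.T @ g_star, 0))   # True True
```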

Quadratic Programming
- Special factorization updates can be applied.
- Example: when a constraint is deleted from the working set, Z gains a column and the Cholesky factor of the projected Hessian Z^T G Z is updated by a single column rather than recomputed.
- The factorization therefore needs to be computed from scratch only once, at the beginning of execution.
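Putting the pieces together, here is a compact, illustrative active-set sketch for a strictly convex QP min ½ x^T G x + c^T x s.t. Ax ≤ b, assuming a feasible starting point. It mirrors the steps above but omits the factorization updates just described and the safeguards (degeneracy, cycling) a production solver needs; all function and variable names are ours, not from the slides.

```python
import numpy as np
from scipy.linalg import null_space

def active_set_qp(G, c, A, b, x, max_iter=50, tol=1e-9):
    """Primal active-set method for min 1/2 x'Gx + c'x  s.t.  Ax <= b (x must be feasible)."""
    n = len(x)
    W = [i for i in range(len(b)) if abs(A[i] @ x - b[i]) < tol]   # initial working set
    for _ in range(max_iter):
        g = G @ x + c
        Aw = A[W] if W else np.zeros((0, n))
        Z = null_space(Aw) if W else np.eye(n)
        if Z.shape[1] > 0:                               # reduced Newton step on the working set
            p_z = np.linalg.solve(Z.T @ G @ Z, -Z.T @ g)
            p = Z @ p_z
        else:
            p = np.zeros(n)
        if np.linalg.norm(p) < tol:                      # stationary on the working set
            lam = np.linalg.lstsq(-Aw.T, g, rcond=None)[0] if W else np.array([])
            if lam.size == 0 or np.all(lam >= -tol):
                return x                                 # KKT satisfied: optimal
            W.pop(int(np.argmin(lam)))                   # drop the most negative multiplier
            continue
        alpha, blocking = 1.0, None                      # ratio test against inactive constraints
        for i in range(len(b)):
            ai_p = A[i] @ p
            if i not in W and ai_p > tol:
                step = (b[i] - A[i] @ x) / ai_p
                if step < alpha:
                    alpha, blocking = step, i
        x = x + alpha * p
        if blocking is not None:
            W.append(blocking)                           # full step blocked: add the constraint
    return x

# Example: min (x1 - 1)^2 + (x2 - 2.5)^2  subject to x1 >= 0, x2 >= 0, x1 + x2 <= 2.
G = 2.0 * np.eye(2)
c = np.array([-2., -5.])
A = np.array([[-1., 0.], [0., -1.], [1., 1.]])
b = np.array([0., 0., 2.])
print(active_set_qp(G, c, A, b, x=np.array([0., 0.])))   # approximately [0.25, 1.75]
```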