Numerical Optimization


Constrained Optimization - Algorithms
NPTEL Course, Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

Barrier and Penalty Methods

Consider the problem:
min f(x) subject to x ∈ X, where X ⊆ R^n.

Idea: approximation by an unconstrained problem. Solve a sequence of unconstrained optimization problems.
Penalty methods: penalize violating a constraint.
Barrier methods: penalize reaching the boundary of an inequality constraint.

min f(x) subject to x ∈ X.

Define a function
ψ(x) = 0 if x ∈ X, +∞ if x ∉ X.

Solve an equivalent unconstrained problem:
min_x f(x) + ψ(x)

Not a practical approach. Instead, replace ψ(x) by a sequence of continuous non-negative functions that approach ψ(x).

Penalty Methods

min f(x) subject to x ∈ X.

Let x* be a local minimum and X = {x : h_j(x) ≤ 0, j = 1, ..., l}.

Define P(x) = (1/2) ∑_{j=1}^{l} [max(0, h_j(x))]^2.
Define q(x, c) = f(x) + c P(x).
Define a sequence {c_k} such that c_k ≥ 0 and c_{k+1} > c_k ∀ k.
Let x^k = argmin_x q(x, c_k).

Ideally, {x^k} → x* as c_k → +∞.

Nonlinear Program (NLP)

min f(x)
subject to h_j(x) ≤ 0, j = 1, ..., l
e_i(x) = 0, i = 1, ..., m

Define P(x) = (1/2) ∑_{j=1}^{l} [max(0, h_j(x))]^2 + (1/2) ∑_{i=1}^{m} e_i^2(x)

and q(x, c) = f(x) + c P(x).

Assumption: f, the h_j's and the e_i's are sufficiently smooth.

Lemma. If x^k = argmin_x q(x, c_k) and c_{k+1} > c_k, then
q(x^k, c_k) ≤ q(x^{k+1}, c_{k+1}),  P(x^k) ≥ P(x^{k+1}),  f(x^k) ≤ f(x^{k+1}).

Proof.
q(x^{k+1}, c_{k+1}) = f(x^{k+1}) + c_{k+1} P(x^{k+1})
                    ≥ f(x^{k+1}) + c_k P(x^{k+1})
                    ≥ f(x^k) + c_k P(x^k) = q(x^k, c_k).

Also, since x^k minimizes q(·, c_k) and x^{k+1} minimizes q(·, c_{k+1}),
f(x^k) + c_k P(x^k) ≤ f(x^{k+1}) + c_k P(x^{k+1})   ... (1)
f(x^{k+1}) + c_{k+1} P(x^{k+1}) ≤ f(x^k) + c_{k+1} P(x^k)   ... (2)
Adding (1) and (2), we get P(x^k) ≥ P(x^{k+1}).

Finally, from (1) and P(x^k) ≥ P(x^{k+1}),
f(x^{k+1}) + c_k P(x^k) ≥ f(x^{k+1}) + c_k P(x^{k+1}) ≥ f(x^k) + c_k P(x^k),
so f(x^{k+1}) ≥ f(x^k). ∎

Lemma. Let x* be a solution to the problem
min f(x) subject to x ∈ X.   ... (P1)
Then, for each k, f(x^k) ≤ f(x*).

Proof. f(x^k) ≤ f(x^k) + c_k P(x^k) ≤ f(x*) + c_k P(x*) = f(x*), since P(x*) = 0. ∎

Theorem. Any limit point of the sequence {x^k} generated by the penalty method is a solution to problem (P1).

Nonlinear Program (NLP)

min f(x)
subject to h_j(x) ≤ 0, j = 1, ..., l
e_i(x) = 0, i = 1, ..., m

Penalty Function Method (to solve NLP)
(1) Input: {c_k}_{k=0}^{∞}, ε
(2) Set k := 0, initialize x^k
(3) while q(x^k, c_k) − f(x^k) > ε
    (a) x^{k+1} = argmin_x q(x, c_k)
    (b) k := k + 1
endwhile
Output: x* = x^k
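The loop above maps directly onto code. Below is a minimal Python sketch (an illustration, not the course's implementation), assuming SciPy's BFGS as the inner unconstrained solver and a geometric schedule c_{k+1} = 10 c_k; the names penalty_method and P, and all schedule parameters, are assumptions for this sketch.

import numpy as np
from scipy.optimize import minimize

def P(x, h, e):
    # Quadratic penalty: (1/2) sum_j max(0, h_j(x))^2 + (1/2) sum_i e_i(x)^2
    hv = np.maximum(0.0, np.array([hj(x) for hj in h]))
    ev = np.array([ei(x) for ei in e])
    return 0.5 * (float(hv @ hv) + float(ev @ ev))

def penalty_method(f, h, e, x0, eps=1e-6, c0=1.0, growth=10.0, max_iter=30):
    x, c = np.asarray(x0, float), c0
    for _ in range(max_iter):
        # x^{k+1} = argmin_x q(x, c_k), with q(x, c) = f(x) + c P(x)
        x = minimize(lambda z: f(z) + c * P(z, h, e), x, method="BFGS").x
        # Stopping test: q(x^k, c_k) - f(x^k) = c_k P(x^k) <= eps
        if c * P(x, h, e) <= eps:
            break
        c *= growth  # c_{k+1} > c_k
    return x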

Example (with solution x* = (1, 0)^T):

min (1/2)[(x_1 − 3)^2 + (x_2 − 2)^2]
subject to −x_1 + x_2 ≤ 0
x_1 + x_2 ≤ 1
−x_2 ≤ 0

For this problem,
q(x, c) = (1/2)[(x_1 − 3)^2 + (x_2 − 2)^2]
        + (c/2)[(max(0, −x_1 + x_2))^2 + (max(0, x_1 + x_2 − 1))^2 + (max(0, −x_2))^2].

Let x^0 = (3, 2)^T (violates the constraint x_1 + x_2 ≤ 1).

Near x^0, only that constraint contributes, so
q(x, c) = (1/2)[(x_1 − 3)^2 + (x_2 − 2)^2] + (c/2)(x_1 + x_2 − 1)^2.

∇_x q(x, c) = 0 ⇒ x(c) = ((2c + 3)/(2c + 1), 2/(2c + 1))^T.

Taking the limit as c → ∞, x(c) → x* = (1, 0)^T.
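As a numerical check (a hypothetical call, reusing the penalty_method sketch given earlier):

f = lambda x: 0.5 * ((x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2)
h = [lambda x: -x[0] + x[1],       # -x1 + x2 <= 0
     lambda x: x[0] + x[1] - 1.0,  #  x1 + x2 <= 1
     lambda x: -x[1]]              #      -x2 <= 0
print(penalty_method(f, h, [], x0=[3.0, 2.0]))  # approx. (1, 0)

# The closed form x(c) = ((2c+3)/(2c+1), 2/(2c+1)) shows the same trend:
for c in (1.0, 10.0, 100.0):
    print(c, (2 * c + 3) / (2 * c + 1), 2 / (2 * c + 1))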

Consider the problem
min f(x) subject to e(x) = 0.

Let (x*, µ*) be a KKT point (∇f(x*) + µ* ∇e(x*) = 0).

Penalty function: q(x, c) = f(x) + c P(x); its minimizers approach x* only as c → ∞.

Consider the perturbed problem
min f(x) subject to e(x) = θ,
and the penalty function
q̂(x, c) = f(x) + c (e(x) − θ)^2
        = f(x) − 2cθ e(x) + c e(x)^2   (ignoring the constant term cθ^2)
        = f(x) + µ e(x) + c e(x)^2,   where µ = −2cθ
        = L(x, µ) + c e(x)^2
        = L̂(x, µ, c)   (Augmented Lagrangian Function)

At (x*, µ*), ∇_x L(x*, µ*) = ∇f(x*) + µ* ∇e(x*) = 0. Hence
∇_x q̂(x*, c) = ∇_x L̂(x*, µ*, c) = ∇_x L(x*, µ*) + 2c e(x*) ∇e(x*) = 0  ∀ c.

Q. How to get an estimate of µ*?

Let x_c be a minimizer of L̂(x, µ, c). Therefore,
∇_x L̂(x_c, µ, c) = ∇f(x_c) + µ ∇e(x_c) + 2c e(x_c) ∇e(x_c) = 0
⇒ ∇f(x_c) = −(µ + 2c e(x_c)) ∇e(x_c),
so µ + 2c e(x_c) is an estimate of µ*.

Equality Constrained Program (EP)

min f(x) subject to e(x) = 0.

Augmented Lagrangian Method (to solve EP)
(1) Input: c, ε
(2) Set k := 0, initialize x^k, µ^k
(3) while L̂(x^k, µ^k, c) − f(x^k) > ε
    (a) x^{k+1} = argmin_x L̂(x, µ^k, c)
    (b) µ^{k+1} = µ^k + 2c e(x^{k+1})
    (c) k := k + 1
endwhile
Output: x* = x^k
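A minimal Python sketch of this loop for a single equality constraint, assuming SciPy's BFGS for step (a); the function name augmented_lagrangian, the parameters, and the example problem are assumptions, not from the lecture.

import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, e, x0, c=10.0, eps=1e-8, max_iter=50):
    # L_hat(x, mu, c) = f(x) + mu e(x) + c e(x)^2, as defined above
    L_hat = lambda x, mu: f(x) + mu * e(x) + c * e(x) ** 2
    x, mu = np.asarray(x0, float), 0.0
    for _ in range(max_iter):
        x = minimize(lambda z: L_hat(z, mu), x, method="BFGS").x  # step (a)
        if abs(L_hat(x, mu) - f(x)) <= eps:  # stopping test of step (3)
            break
        mu = mu + 2.0 * c * e(x)  # step (b): multiplier update
    return x, mu

# Example: min x1^2 + x2^2 s.t. x1 + x2 - 1 = 0; expect x ~ (1/2, 1/2), mu ~ -1.
x, mu = augmented_lagrangian(lambda z: z @ z, lambda z: z[0] + z[1] - 1.0, [0.0, 0.0])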

Nonlinear Program (NLP)

min f(x)
subject to h_j(x) ≤ 0, j = 1, ..., l
e_i(x) = 0, i = 1, ..., m

It is easy to extend the Augmented Lagrangian Method to NLP: rewrite each inequality constraint h_j(x) ≤ 0 as an equality constraint h_j(x) + y_j^2 = 0, using a slack variable y_j. A small sketch follows.
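For instance (a hypothetical helper reusing the augmented_lagrangian sketch above, with a single inequality for brevity):

def with_slack(f, h, n):
    # Lift to z = (x, y) in R^{n+1}: min f(x) s.t. h(x) + y^2 = 0
    F = lambda z: f(z[:n])
    E = lambda z: h(z[:n]) + z[n] ** 2
    return F, E

# Example: min (x - 2)^2 s.t. x - 1 <= 0; expect x ~ 1 (with slack y ~ 0).
F, E = with_slack(lambda x: (x[0] - 2.0) ** 2, lambda x: x[0] - 1.0, 1)
x, mu = augmented_lagrangian(F, E, [0.0, 1.0])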

Barrier Methods

Typically applicable to inequality-constrained problems:
min f(x) subject to h_j(x) ≤ 0, j = 1, ..., l.

Let X = {x : h_j(x) ≤ 0, j = 1, ..., l}.

Some barrier functions (defined on the interior of X):
B(x) = −∑_{j=1}^{l} 1/h_j(x)   or   B(x) = −∑_{j=1}^{l} log(−h_j(x)).

Approximate problem using a barrier function (for c > 0):
min f(x) + (1/c) B(x) subject to x ∈ interior of X.
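A minimal log-barrier sketch in Python (names, schedule, and example are assumptions; Nelder-Mead is chosen because the barrier is set to +∞ outside the interior of X, and the stopping test uses the standard l/c bound on the suboptimality of a log-barrier minimizer):

import numpy as np
from scipy.optimize import minimize

def barrier_method(f, h, x0, c0=1.0, growth=10.0, eps=1e-6, max_iter=20):
    def B(x):
        hv = np.array([hj(x) for hj in h])
        if np.any(hv >= 0.0):
            return np.inf  # outside the interior of X
        return -np.sum(np.log(-hv))  # log barrier
    x, c = np.asarray(x0, float), c0
    for _ in range(max_iter):
        # min f(x) + (1/c) B(x), x in interior of X
        x = minimize(lambda z: f(z) + B(z) / c, x, method="Nelder-Mead").x
        if len(h) / c <= eps:  # suboptimality bound ~ l/c for the log barrier
            break
        c *= growth
    return x

# Example: min (x - 2)^2 s.t. x - 1 <= 0, started strictly inside; expect x -> 1.
print(barrier_method(lambda x: (x[0] - 2.0) ** 2, [lambda x: x[0] - 1.0], [0.0]))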

Cutting-Plane Methods

Primal Problem:
min f(x)
subject to h_j(x) ≤ 0, j = 1, ..., l
e_i(x) = 0, i = 1, ..., m
x ∈ X, where X is a compact set.

Dual Function:
θ(λ, µ) = min_{x ∈ X} f(x) + λ^T h(x) + µ^T e(x).

Dual Problem:
max θ(λ, µ) subject to λ ≥ 0.

Equivalent Dual Problem:
max_{z, λ, µ} z
subject to z ≤ f(x) + λ^T h(x) + µ^T e(x) ∀ x ∈ X
λ ≥ 0

A linear program with infinitely many constraints.

Equivalent Dual Problem:
max_{z, λ, µ} z
subject to z ≤ f(x) + λ^T h(x) + µ^T e(x) ∀ x ∈ X
λ ≥ 0

Idea: solve an approximate dual problem. Suppose we know points {x^j}_{j=0}^{k−1}; impose the constraint only for x ∈ {x^0, ..., x^{k−1}}.

Approximate Dual Problem:
max_{z, λ, µ} z
subject to z ≤ f(x^j) + λ^T h(x^j) + µ^T e(x^j), j = 0, ..., k−1
λ ≥ 0

Let (z^k, λ^k, µ^k) be the optimal solution to this problem.

If z^k ≤ f(x) + λ^{k T} h(x) + µ^{k T} e(x) ∀ x ∈ X, then (z^k, λ^k, µ^k) is the solution to the dual problem.

Q. How to check whether z^k ≤ f(x) + λ^{k T} h(x) + µ^{k T} e(x) ∀ x ∈ X?

Consider the problem
min_{x ∈ X} f(x) + λ^{k T} h(x) + µ^{k T} e(x),
and let x^k be an optimal solution to this problem.

+ λ kt h(x) + µ kt e(x) x X and let x k be an optimal solution to this problem. If z k f (x k ) + λ kt h(x k ) + µ kt e(x k ), then (λ k, µ k ) is an optimal solution to the Lagrangian dual problem. If z k > f (x k ) + λ kt h(x k ) + µ kt e(x k ), then add the constraint, z f (x k ) + λ T h(x k ) + µ T e(x k ) to the approximate dual problem.

Nonlinear Program (NLP)

min f(x)
subject to h_j(x) ≤ 0, j = 1, ..., l
e_i(x) = 0, i = 1, ..., m
x ∈ X

Summary of steps for the Cutting-Plane Method (a code sketch follows):
Initialize with a feasible point x^0; set k := 1.
while the stopping condition is not satisfied:
  (z^k, λ^k, µ^k) = argmax_{z, λ, µ} z
      subject to z ≤ f(x^j) + λ^T h(x^j) + µ^T e(x^j), j = 0, ..., k−1
      λ ≥ 0
  x^k = argmin_{x ∈ X} f(x) + λ^{k T} h(x) + µ^{k T} e(x)
  Stop if z^k ≤ f(x^k) + λ^{k T} h(x^k) + µ^{k T} e(x^k). Else, k := k + 1.
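The two subproblems per iteration (an LP in (z, λ, µ) and a Lagrangian minimization over X) can be sketched in Python as follows. This is an illustration under assumptions: X is a box handled by L-BFGS-B, the LP is solved by scipy.optimize.linprog, and the box bound BIG on the multipliers is an artificial safeguard that keeps the first few LPs bounded; all names are hypothetical.

import numpy as np
from scipy.optimize import linprog, minimize

def cutting_plane(f, h, e, x0, X_bounds, tol=1e-6, max_iter=100, BIG=1e3):
    l, m = len(h), len(e)
    H = lambda x: np.array([hj(x) for hj in h])
    E = lambda x: np.array([ei(x) for ei in e])
    pts = [np.asarray(x0, float)]  # {x^0, ..., x^{k-1}}
    for _ in range(max_iter):
        # Approximate dual LP: max z s.t. z <= f(x^j) + lam'h(x^j) + mu'e(x^j), lam >= 0
        c_lp = np.zeros(1 + l + m)
        c_lp[0] = -1.0  # linprog minimizes, so minimize -z
        A = [np.concatenate(([1.0], -H(xj), -E(xj))) for xj in pts]
        b = [f(xj) for xj in pts]
        bounds = [(None, None)] + [(0.0, BIG)] * l + [(-BIG, BIG)] * m
        sol = linprog(c_lp, A_ub=A, b_ub=b, bounds=bounds)
        z, lam, mu = sol.x[0], sol.x[1:1 + l], sol.x[1 + l:]
        # x^k = argmin_{x in X} f(x) + lam'h(x) + mu'e(x)
        L = lambda x: f(x) + lam @ H(x) + mu @ E(x)
        xk = minimize(L, pts[-1], method="L-BFGS-B", bounds=X_bounds).x
        if z <= L(xk) + tol:  # stopping condition
            return lam, mu, z
        pts.append(xk)  # new cut: z <= f(x^k) + lam'h(x^k) + mu'e(x^k)
    return lam, mu, z

# Example: min x1^2 + x2^2 s.t. x1 + x2 = 1, X = [-2, 2]^2; expect mu ~ -1, z ~ 1/2.
lam, mu, z = cutting_plane(lambda x: x @ x, [], [lambda x: x[0] + x[1] - 1.0],
                           [0.0, 0.0], [(-2.0, 2.0)] * 2)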