A New Penalty-SQP Method

Frank E. Curtis
INFORMS Annual Meeting, October 2008

Outline: Background and Motivation; Illustration of Numerical Results; Final Remarks

Sequential Quadratic Programming

We solve the nonlinear optimization problem

$$\min_x \; f(x) \quad \text{s.t.} \quad b(x) \le 0, \;\; c(x) = 0,$$

where $f : \mathbb{R}^n \to \mathbb{R}$, $b : \mathbb{R}^n \to \mathbb{R}^l$, and $c : \mathbb{R}^n \to \mathbb{R}^m$ are smooth, via

$$\min_{d_k} \; f_k + \nabla f_k^T d_k + \tfrac{1}{2} d_k^T (H_k + \bar{H}_k) d_k \quad \text{s.t.} \quad b_k + \nabla b_k^T d_k \le 0, \;\; c_k + \nabla c_k^T d_k = 0,$$

and promote convergence with the $\ell_1$ penalty function

$$\phi(x; \rho) \triangleq \rho f(x) + \sum_{i=1}^{l} \max\{0, b_i(x)\} + \sum_{j=1}^{m} |c_j(x)|.$$
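To make the merit function concrete, here is a minimal numerical sketch of this $\ell_1$ penalty function; the two-variable problem data below are invented purely for illustration and are not from the talk.

```python
import numpy as np

def phi(x, rho, f, b, c):
    """l1 penalty: phi(x; rho) = rho*f(x) + sum_i max(0, b_i(x)) + sum_j |c_j(x)|."""
    return rho * f(x) + np.sum(np.maximum(0.0, b(x))) + np.sum(np.abs(c(x)))

# Invented example: min x1^2 + x2^2  s.t.  1 - x1 <= 0,  x1 + x2 - 2 = 0
f = lambda x: x[0]**2 + x[1]**2
b = lambda x: np.array([1.0 - x[0]])          # inequality residuals, b(x) <= 0
c = lambda x: np.array([x[0] + x[1] - 2.0])   # equality residuals, c(x) = 0

x = np.array([0.5, 0.5])
print(phi(x, rho=1.0, f=f, b=b, c=c))  # 1.0*0.5 + max(0, 0.5) + |-1.0| = 2.0
```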

Handling infeasible subproblems

We have various options when a given subproblem is infeasible.

Feasibility restoration: replace the subproblem with

$$\min_{d_k, r, s, t} \; \sum_{i=1}^{l} r_i + \sum_{j=1}^{m} (s_j + t_j) + \tfrac{1}{2} d_k^T \bar{H}_k d_k \quad \text{s.t.} \quad b_k + \nabla b_k^T d_k \le r, \;\; c_k + \nabla c_k^T d_k = s - t, \;\; (r, s, t) \ge 0$$

(... but what if we move away from optimality?)

Constraint relaxation: replace the subproblem with

$$\min_{d_k, r, s, t} \; \rho \left( f_k + \nabla f_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k \right) + \sum_{i=1}^{l} r_i + \sum_{j=1}^{m} (s_j + t_j) + \tfrac{1}{2} d_k^T \bar{H}_k d_k \quad \text{s.t.} \quad b_k + \nabla b_k^T d_k \le r, \;\; c_k + \nabla c_k^T d_k = s - t, \;\; (r, s, t) \ge 0$$

(... but what if we move away from feasibility?)
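The elastic variables r, s, t work because of two elementary scalar identities; the quick standalone check below (my own sketch, not from the slides) verifies them on a grid of residual values.

```python
import numpy as np

# Elastic-variable identities behind the relaxed subproblems:
#   max{0, w} = min{ r : w <= r, r >= 0 }           (optimal r = max(0, w))
#   |w|       = min{ s + t : w = s - t, s, t >= 0 } (optimal s, t = w^+, w^-)
for w in np.linspace(-3.0, 3.0, 13):
    r = max(0.0, w)                     # minimal feasible slack for the inequality
    s, t = max(0.0, w), max(0.0, -w)    # minimal split of the equality residual
    assert np.isclose(s - t, w)         # the split is feasible
    assert np.isclose(s + t, abs(w))    # and its cost equals |w|
print("elastic identities hold on the test grid")
```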

Penalty-SQP Capabilities

[Table, revealed step by step across several build slides: rows for global convergence and fast local convergence, columns for feasible and infeasible problems. S$\ell_1$QP provides global convergence in both cases and fast local convergence on feasible problems; the cell for fast local convergence on infeasible problems is marked "???".]

MINLP methods


Summary

- There is a need for algorithms that converge quickly, regardless of whether the problem is feasible or infeasible.
- Interior-point methods are known to perform poorly in infeasible cases, but active-set methods seem promising.
- There is room for improvement in active-set methods, too.
- Feasibility restoration techniques are an option, but we prefer a single algorithm with no heuristics.

A single algorithmic framework

Our goal is to design a single optimization algorithm that solves

$$\text{(NLP)} \qquad \min_x \; f(x) \quad \text{s.t.} \quad b(x) \le 0, \;\; c(x) = 0$$

or, if (NLP) is infeasible,

$$\text{(FEAS)} \qquad \min_x \; v(x) \triangleq \sum_{i=1}^{l} \max\{0, b_i(x)\} + \sum_{j=1}^{m} |c_j(x)|.$$

We combine (NLP) and (FEAS) to define

$$\text{(P)} \qquad \min_x \; \rho f(x) + v(x),$$

where $\rho \ge 0$ is a penalty parameter to be updated dynamically.

Our method for step computation and acceptance

We generate a step via

$$\min_{d_k} \; q_k(d_k; \rho) \triangleq \rho \left( f_k + \nabla f_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k \right) + \sum_{i=1}^{l} \max\{0, b_k^i + \nabla b_k^{iT} d_k\} + \sum_{j=1}^{m} |c_k^j + \nabla c_k^{jT} d_k| + \tfrac{1}{2} d_k^T \bar{H}_k d_k,$$

which is equivalent to the quadratic subproblem

$$\text{(Q}\rho\text{)} \qquad \min_{d_k, r, s, t} \; \rho \left( f_k + \nabla f_k^T d_k + \tfrac{1}{2} d_k^T H_k d_k \right) + \sum_{i=1}^{l} r_i + \sum_{j=1}^{m} (s_j + t_j) + \tfrac{1}{2} d_k^T \bar{H}_k d_k \quad \text{s.t.} \quad b_k + \nabla b_k^T d_k \le r, \;\; c_k + \nabla c_k^T d_k = s - t, \;\; (r, s, t) \ge 0,$$

and we measure progress with the exact penalty function $\phi(x; \rho) \triangleq \rho f(x) + v(x)$.
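A minimal sketch of evaluating the piecewise-quadratic model above; the function name and argument names are my own, and the problem data (value, gradient, Jacobians, and the two Hessian-like matrices) are placeholders supplied by the caller.

```python
import numpy as np

def q_model(d, rho, fk, gk, Hk, bk, Jb, ck, Jc, Hbar):
    """Evaluate q_k(d; rho) = rho*(f_k + g_k^T d + 0.5 d^T H_k d)
       + sum_i max{0, b_k^i + (Jb d)_i} + sum_j |c_k^j + (Jc d)_j|
       + 0.5 d^T Hbar_k d.  Note: quadratic in d even when rho = 0."""
    smooth = rho * (fk + gk @ d + 0.5 * d @ Hk @ d) + 0.5 * d @ Hbar @ d
    violation = np.sum(np.maximum(0.0, bk + Jb @ d)) + np.sum(np.abs(ck + Jc @ d))
    return smooth + violation
```

Minimizing this nonsmooth function is exactly what the smooth subproblem (Q$\rho$) accomplishes once the max and absolute-value terms are replaced by the elastic variables r, s, and t.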

A Penalty-SQP framework

Step 0. Initialize $x_0$ and set $\eta \in (0, 1)$, $\tau \in (0, 1)$, and $k \leftarrow 0$.
Step 1. If $x_k$ solves (NLP) or (FEAS), then stop.
Step 2. Compute a value for the penalty parameter, call it $\rho_k$.
Step 3. Compute $d_k$ by solving (Q$\rho$) with $\rho = \rho_k$.
Step 4. Let $\alpha_k$ be the first member of the sequence $\{1, \tau, \tau^2, \dots\}$ such that
$$\phi(x_k; \rho_k) - \phi(x_k + \alpha_k d_k; \rho_k) \ge \eta \alpha_k \left[ q_k(0; \rho_k) - q_k(d_k; \rho_k) \right].$$
Step 5. Set $x_{k+1} \leftarrow x_k + \alpha_k d_k$ and go to Step 1.
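Putting Steps 0 through 5 together, here is a schematic sketch of the outer loop. The callables `solve_qp`, `choose_rho`, `model_decrease`, and `converged` stand in for the subproblem solver, the steering rules, the model reduction $q_k(0; \rho) - q_k(d_k; \rho)$, and the stopping test; they are assumptions of this sketch, not components given on the slides.

```python
def penalty_sqp(x0, phi, model_decrease, solve_qp, choose_rho, converged,
                eta=0.25, tau=0.5, max_iter=100):
    """Schematic outer loop of the penalty-SQP framework (Steps 0-5)."""
    x = x0                                   # Step 0 (eta, tau in (0, 1))
    for _ in range(max_iter):
        if converged(x):                     # Step 1: x solves (NLP) or (FEAS)
            return x
        rho = choose_rho(x)                  # Step 2: steering-rule penalty parameter
        d = solve_qp(x, rho)                 # Step 3: solve (Q_rho)
        alpha = 1.0                          # Step 4: backtrack over {1, tau, tau^2, ...}
        while (phi(x, rho) - phi(x + alpha * d, rho)
               < eta * alpha * model_decrease(x, d, rho)) and alpha > 1e-12:
            alpha *= tau
        x = x + alpha * d                    # Step 5: accept the step
    return x
```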

Steering Rules

We extend the steering rules of Byrd, Nocedal, Waltz (2008) and Byrd, López-Calva, Nocedal (2008) for setting $\rho_k$:

1. First, we guarantee first-order progress toward minimizing the feasibility violation measure $v(x)$ during each iteration.
2. Second, we avoid settling on a penalty function with $\rho$ too large by ensuring that progress in $\phi$ is proportional to the largest potential progress in feasibility.

Penalty parameter update

a. If the current iterate is feasible, then the algorithm should reduce $\rho$, if necessary, to ensure linearized feasibility; i.e.,
$$l_k(d_k) \triangleq \sum_{i=1}^{l} \max\{0, b_k^i + \nabla b_k^{iT} d_k\} + \sum_{j=1}^{m} |c_k^j + \nabla c_k^{jT} d_k| = 0.$$

b. Otherwise, $\rho$ should be set so that
$$l_k(0) - l_k(d_k) \ge \epsilon_1 \left( l_k(0) - l_k(\bar{d}_k) \right)$$
$$q_k(0; \rho_k) - q_k(d_k; \rho_k) \ge \epsilon_2 \left( q_k(0; 0) - q_k(\bar{d}_k; 0) \right),$$
where the reference direction is given by $\bar{d}_k \in \arg\min \text{(Q}\rho\text{)}$ for $\rho = 0$.

c. Finally, if $\rho$ is decreased, we ensure $\rho_k \le \epsilon_3 \cdot \left( \text{KKT error for (FEAS)} \right)^2$.
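A sketch of how conditions (a) through (c) might drive the update; `solve_qp`, `lk`, `qk`, and `kkt_feas` are caller-supplied stand-ins, and the tolerances and shrink factor are illustrative choices, not values from the talk.

```python
def steering_update(rho, solve_qp, lk, qk, is_feasible, kkt_feas,
                    eps1=0.1, eps2=0.1, eps3=1.0, shrink=0.1, rho_min=1e-12):
    """solve_qp(rho) returns a minimizer of (Q_rho); lk(d) is the linearized
    violation l_k(d); qk(d, rho) is the model q_k; kkt_feas is the KKT error
    for (FEAS)."""
    rho_in = rho
    d_bar = solve_qp(0.0)        # reference direction: minimizer of (Q_rho) with rho = 0
    zero = 0.0 * d_bar           # zero step of matching shape
    while rho > rho_min:
        d = solve_qp(rho)
        if is_feasible:
            if lk(d) == 0.0:     # (a) retain linearized feasibility
                break
        elif (lk(zero) - lk(d) >= eps1 * (lk(zero) - lk(d_bar))   # (b) violation progress
              and qk(zero, rho) - qk(d, rho)
                  >= eps2 * (qk(zero, 0.0) - qk(d_bar, 0.0))):    #     and model progress
            break
        rho *= shrink            # tests failed: decrease rho and re-solve (Q_rho)
    if rho < rho_in:             # (c) after any decrease, cap rho by the
        rho = min(rho, eps3 * kkt_feas ** 2)  # squared KKT error for (FEAS)
    return rho, solve_qp(rho)
```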

Strategy for fast convergence

Hitting a moving target: $x_k \to x_\rho \to \hat{x}$, where
- $x_k$: $k$th iterate of the algorithm
- $x_\rho$: solution of the penalty problem (P)
- $\hat{x}$: infeasible stationary point of (NLP), i.e., solution of (FEAS)

It has been shown (Byrd, Curtis, Nocedal, 2008) that, for some $C, \bar{C} > 0$,
$$\|x_{k+1} - \hat{x}\| \le \|x_{k+1} - x_\rho\| + \|x_\rho - \hat{x}\| \le C \|x_k - x_\rho\|^2 + O(\rho) \le \bar{C} \|x_k - \hat{x}\|^2 + O(\rho),$$
so convergence is quadratic if $\rho = O(\|x_k - \hat{x}\|^2)$.
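Making the final implication explicit: the $O(\rho)$ term is the distance $\|x_\rho - \hat{x}\|$ from the chain above, so choosing $\rho_k$ on the order of the squared distance to $\hat{x}$ balances the two error sources. A sketch of the bookkeeping:

$$\|x_{k+1} - \hat{x}\| \;\le\; \bar{C} \|x_k - \hat{x}\|^2 + O(\rho_k), \qquad \rho_k = O(\|x_k - \hat{x}\|^2) \;\Longrightarrow\; \|x_{k+1} - \hat{x}\| = O(\|x_k - \hat{x}\|^2).$$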

An infeasible problem with a strict minimizer of infeasibility

Problem batch as a MINLP

Adding a bound of 5 on the variable tl[1] creates an infeasible problem.

Summary

- We have argued for the need for optimization algorithms that converge quickly, even on infeasible problems.
- We have presented a penalty-SQP approach that transitions smoothly between solving an optimization problem and the corresponding feasibility problem.
- The novel components of the algorithm are:
  1. the quadratic model $q_k$ (it is quadratic even for $\rho = 0$);
  2. an appropriate update for $\rho$ to ensure superlinear convergence on infeasible problem instances.
- We have illustrated that the algorithm performs well on both feasible and infeasible problem instances.

Penalty-SQP Capabilities, Now

[The capabilities table revisited: with the proposed method, the previously open cell, fast local convergence on infeasible problems, is now filled in alongside the capabilities of S$\ell_1$QP.]

Thanks! Questions?