Infeasibility Detection in Nonlinear Optimization

Similar documents
Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

A New Penalty-SQP Method

Recent Adaptive Methods for Nonlinear Optimization

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization

Nonlinear Optimization Solvers

A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ε^(-3/2)) Complexity

Steering Exact Penalty Methods for Nonlinear Programming

Survey of NLP Algorithms. L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE

Algorithms for Constrained Optimization

A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm

A Trust-Funnel Algorithm for Nonlinear Programming

MS&E 318 (CME 338) Large-Scale Numerical Optimization

CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING

A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization

What's New in Active-Set Methods for Nonlinear Optimization?

Inexact Newton Methods and Nonlinear Constrained Optimization

Implementation of an Interior Point Multidimensional Filter Line Search Method for Constrained Optimization

Algorithms for constrained local optimization

Effective reformulations of the truss topology design problem

A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties

An Inexact Newton Method for Optimization

PDE-Constrained and Nonsmooth Optimization

The use of second-order information in structural topology optimization. Susana Rojas Labanda, PhD student Mathias Stolpe, Senior researcher

Solving MPECs Implicit Programming and NLP Methods

An Inexact Newton Method for Nonlinear Constrained Optimization

A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization

NUMERICAL OPTIMIZATION. J. Ch. Gilbert

A Trust-region-based Sequential Quadratic Programming Algorithm

Interior Methods for Mathematical Programs with Complementarity Constraints

arxiv: v1 [math.na] 8 Jun 2018

Effective reformulations of the truss topology design problem

Lecture 15: SQP methods for equality constrained optimization

Flexible Penalty Functions for Nonlinear Constrained Optimization

A polynomial time interior point method for problems with nonconvex constraints

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints

MS&E 318 (CME 338) Large-Scale Numerical Optimization

HYBRID FILTER METHODS FOR NONLINEAR OPTIMIZATION. Yueling Loh

Derivative-Based Numerical Method for Penalty-Barrier Nonlinear Programming

INTERIOR-POINT ALGORITHMS, PENALTY METHODS AND EQUILIBRIUM PROBLEMS

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren

Numerical Nonlinear Optimization with WORHP

Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization

INTERIOR-POINT ALGORITHMS, PENALTY METHODS AND EQUILIBRIUM PROBLEMS

INTERIOR-POINT METHODS FOR NONLINEAR, SECOND-ORDER CONE, AND SEMIDEFINITE PROGRAMMING. Hande Yurttan Benson

Adaptive augmented Lagrangian methods: algorithms and practical numerical experience

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton's method for optimization, survey of optimization methods

A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION

Second-Order Cone Program (SOCP) Detection and Transformation Algorithms for Optimization Software


A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE

On the use of piecewise linear models in nonlinear programming

Seminal papers in nonlinear optimization

10 Numerical methods for constrained problems

A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE

Hot-Starting NLP Solvers

An augmented Lagrangian method for equality constrained optimization with rapid infeasibility detection capabilities

Using Interior-Point Methods within Mixed-Integer Nonlinear Programming

An Interior Algorithm for Nonlinear Optimization That Combines Line Search and Trust Region Steps

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: COMPLEMENTARITY CONSTRAINTS

A SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION

University of Erlangen-Nürnberg and Academy of Sciences of the Czech Republic. Solving MPECs by Implicit Programming and NLP Methods p.

5 Handling Constraints

Lecture 13: Constrained optimization

AM 205: lecture 19. Last time: Conditions for optimality, Newton's method for optimization Today: survey of optimization methods

Multidisciplinary System Design Optimization (MSDO)

REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS

arxiv: v1 [math.oc] 20 Aug 2014

Large-Scale Nonlinear Optimization with Inexact Step Computations

Numerical Methods for PDE-Constrained Optimization

The Squared Slacks Transformation in Nonlinear Programming

AN EXACT PRIMAL-DUAL PENALTY METHOD APPROACH TO WARMSTARTING INTERIOR-POINT METHODS FOR LINEAR PROGRAMMING

Quasi-Newton Methods. Zico Kolter (notes by Ryan Tibshirani, Javier Peña, Zico Kolter) Convex Optimization

Quasi-Newton Methods. Javier Peña Convex Optimization /36-725

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Newton s Method. Ryan Tibshirani Convex Optimization /36-725

Nonlinear Programming, Elastic Mode, SQP, MPEC, MPCC, complementarity

POWER SYSTEMS in general are currently operating

How the augmented Lagrangian algorithm can deal with an infeasible convex quadratic optimization problem. Motivation, analysis, implementation

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

1. Introduction. We consider nonlinear optimization problems of the form min f(x) s.t. c_E(x) = 0, c_I(x) ≤ 0,

Solving a Signalized Traffic Intersection Problem with NLP Solvers

On the behavior of Lagrange multipliers in convex and non-convex infeasible interior point methods

Preprint ANL/MCS-P , Dec 2002 (Revised Nov 2003, Mar 2004) Mathematics and Computer Science Division Argonne National Laboratory

Overview of course. Introduction to Optimization, DIKU Monday 12 November David Pisinger

2.098/6.255/ Optimization Methods Practice True/False Questions

Convergence analysis of a primal-dual interior-point method for nonlinear programming

A New Low Rank Quasi-Newton Update Scheme for Nonlinear Programming

Ettore Majorana Centre for Scientific Culture International School of Mathematics G. Stampacchia. Erice, Italy. 33rd Workshop

On Lagrange multipliers of trust region subproblems

1 1 Introduction Some of the most successful algorithms for large-scale, generally constrained, nonlinear optimization fall into one of two categories

A penalty-interior-point algorithm for nonlinear constrained optimization

On the complexity of an Inexact Restoration method for constrained optimization

arxiv: v2 [math.oc] 11 Jan 2018

Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method

A Branch-and-Refine Method for Nonconvex Mixed-Integer Optimization

2 P. E. GILL, W. MURRAY AND M. A. SAUNDERS We assume that the nonlinear functions are smooth and that their first derivatives are available (and possibl

A STABILIZED SQP METHOD: GLOBAL CONVERGENCE

The Surprisingly Complicated Case of [Convex] Quadratic Optimization

Transcription:

Infeasibility Detection in Nonlinear Optimization Frank E. Curtis, Lehigh University Hao Wang, Lehigh University SIAM Conference on Optimization 2011 May 16, 2011 (See also Infeasibility Detection in Nonlinear Programming, Figen Oztoprak, MS71) Infeasibility Detection in Nonlinear Optimization 1 of 35

Outline Motivation Penalty-SQO Method Penalty-Interior-Point Method Unified Framework(?)

Outline Motivation Penalty-SQO Method Penalty-Interior-Point Method Unified Framework(?)

Constrained minimization Is this how we should formulate optimization problems?

min_x f(x) s.t. c_E(x) = 0, c_I(x) ≤ 0  (OP)

Feasibility violation minimization Suppose that if (OP) is infeasible, then we want to solve

min_x v(x) := ||[c_E(x); max{c_I(x), 0}]||  (FP)

Many algorithms/codes do this already, by either switching from solving one problem to the other, or transitioning from solving one problem to the other. But are they doing it efficiently?
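The violation measure v can be sketched in a few lines of Python. This is an illustrative sketch only, not the talk's code: the toy constraint functions c_E and c_I below are my own choices, and the norm is taken to be the l_1 norm.

```python
# Minimal sketch of the feasibility violation measure
# v(x) = ||c_E(x)||_1 + ||max{c_I(x), 0}||_1 for a toy problem.

def violation(c_E, c_I, x):
    """l_1 measure of constraint violation at x."""
    v_eq = sum(abs(ci) for ci in c_E(x))          # equality violations
    v_in = sum(max(ci, 0.0) for ci in c_I(x))     # violated inequalities only
    return v_eq + v_in

# Toy constraints (assumed for illustration):
# c_E(x) = [x0 + x1 - 1], c_I(x) = [x0^2 - 4] with c_I(x) <= 0 intended.
c_E = lambda x: [x[0] + x[1] - 1.0]
c_I = lambda x: [x[0] ** 2 - 4.0]

print(violation(c_E, c_I, [3.0, -2.0]))  # 5.0 (inequality violated by 5)
print(violation(c_E, c_I, [0.5, 0.5]))   # 0.0 (feasible point)
```

A point solves (FP) exactly when it minimizes this function, whether or not the minimum value is zero.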

Numerical experiments: Infeasible optimization problems Iterations and f evaluations for 8 infeasible optimization problems (2-3 variables):

Prob.   Ipopt           Knitro-Direct       Knitro-Active       Filter
        Iter.   Eval.   Iter.    Eval.      Iter.    Eval.      Iter.  Eval.
1       48      281     38       135        22       235        16     16
2       109     170     *10000   *40544     23       167        12     12
3       788     3129    12       83         9        202        10     10
4       46      105     25       61         10       201        11     11
5       72      266     *1060    *3401      18       45         26     26
6       63      141     *76      *264       16       37         27     27
7       87      152     *10000   *43652     *10000   *20091     30     30
8       104     206     33       97         41       560        28     28

Problems also run with Knitro-CG and LOQO, but they failed every time. (We want your infeasible test problems!)

Numerical experiments: Feasibility problems (solved directly) Iterations and f evaluations for 8 feasibility problems (2-3 variables):

Problem  Ipopt          Knitro-Direct   Knitro-Active   Filter
         Iter.  Eval.   Iter.  Eval.    Iter.  Eval.    Iter.  Eval.
1        28     29      14     15       13     24       17     21
2        31     32      31     33       8      9        12     13
3        50     131     10     11       9      12       12     13
4        24     79      18     29       10     13       10     12
5        166    786     29     40       21     24       30     32
6        37     48      20     21       19     22       26     27
7        59     65      31     34       19     20       25     28
8        46     71      19     20       23     28       26     29

=> If we can switch/transition efficiently, then our current tools work well.

Rapid detection of infeasibility

Problem type   Global convergence   Fast local convergence
Feasible       yes                  yes
Infeasible     yes                  ?

Two-phase strategy

Two-phase strategy Exploratory step determines possible progress toward feasibility. Rapid infeasibility detection requires rapid transition/switch to feasibility steps. (Important to have consistency between the ways the two steps measure feasibility.)

Outline Motivation Penalty-SQO Method Penalty-Interior-Point Method Unified Framework(?)

Penalty methods Penalization provides one mechanism for transitioning to infeasibility detection:

min_{x,r,s,t} ρ f(x) + e^T r + e^T s + e^T t
s.t. c_E(x) = r - s, c_I(x) ≤ t, (r, s, t) ≥ 0  (PP)

How should ρ be updated?
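For fixed x, the elastic variables (r, s, t) in (PP) can be minimized out in closed form, which shows that (PP) is the l_1 penalty problem min_x ρ f(x) + v(x). A small sketch (my own illustration, with toy functions f, c_E, c_I that are not from the talk):

```python
# For fixed x, minimizing e^T r + e^T s + e^T t over (r, s, t) >= 0
# subject to c_E(x) = r - s and c_I(x) <= t gives r, s as the positive
# and negative parts of c_E(x) and t as the violated inequality amount,
# so the optimal slack sum equals the l_1 violation v(x).

def elastic_slacks(cE_val, cI_val):
    """Optimal elastic variables of (PP) for fixed x."""
    r = [max(c, 0.0) for c in cE_val]     # positive part of c_E(x)
    s = [max(-c, 0.0) for c in cE_val]    # negative part of c_E(x)
    t = [max(c, 0.0) for c in cI_val]     # inequality violation amount
    return r, s, t

def penalty_objective(f, c_E, c_I, x, rho):
    """rho*f(x) + v(x), i.e. (PP) with slacks minimized out."""
    r, s, t = elastic_slacks(c_E(x), c_I(x))
    return rho * f(x) + sum(r) + sum(s) + sum(t)

# Toy problem data (assumed for illustration):
f   = lambda x: (x[0] - 1.0) ** 2
c_E = lambda x: [x[0] + x[1] - 1.0]
c_I = lambda x: [x[0] ** 2 - 4.0]

print(penalty_objective(f, c_E, c_I, [3.0, -2.0], 0.5))  # 0.5*4 + 5 = 7.0
```

Driving ρ toward 0 makes this objective coincide with v(x), which is exactly the transition to infeasibility detection the slide describes.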

Literature (Rich history of SQP methods) Fletcher, Leyffer (2002) Byrd, Gould, Nocedal (2005) Byrd, Nocedal, Waltz (2008) Byrd, Curtis, Nocedal (2010) Byrd, López-Calva, Nocedal (2010) Gould, Robinson (2010) Morales, Nocedal, Wu (2010)

Steering and FILTER strategies Steering methods solve a sequence of constrained subproblems. FILTER: "... we make use of a property of the phase I algorithm in our QP solver. If an infeasible QP is detected, the solver exits with a solution of [an LP minimizing the l_1 norm of violated constraints]."

SQuID strategy Compute v_k as the optimal value for

min_{d,r,s,t} e^T r + e^T s + e^T t
s.t. c_E(x_k) + ∇c_E(x_k)^T d = r - s, c_I(x_k) + ∇c_I(x_k)^T d ≤ t, (r, s, t) ≥ 0

Update ρ for rapid infeasibility detection (Byrd, Curtis, Nocedal, 2010). Compute d_k as the search direction from

min_{d,r,s,t} ρ(f(x_k) + ∇f(x_k)^T d) + e^T r + e^T s + e^T t + (1/2) d^T H(x_k, λ_k; ρ) d
s.t. c_E(x_k) + ∇c_E(x_k)^T d = r - s, c_I(x_k) + ∇c_I(x_k)^T d ≤ t, e^T r + e^T s + e^T t ≤ v_k, (r, s, t) ≥ 0

Update ρ for descent.

Numerical experiments: Infeasible optimization problems Iterations and f evaluations for 8 infeasible optimization problems (2-3 variables):

Prob.   Filter          SQuID
        Iter.   Eval.   Iter.   Eval.
1       16      16      32      65
2       12      12      12      18
3       10      10      14      36
4       11      11      26      57
5       26      26      5       16
6       27      27      18      18
7       30      30      19      19
8       28      28      17      17

Outline Motivation Penalty-SQO Method Penalty-Interior-Point Method Unified Framework(?)

Penalty and interior-point methods Constrained subproblems in penalty methods can be expensive:

min_{x,r,s,t} ρ f(x) + e^T r + e^T s + e^T t
s.t. c_E(x) = r - s, c_I(x) ≤ t, (r, s, t) ≥ 0  (PP)

Interior-point methods are more efficient for large-scale problems:

min_{x,u} f(x) - µ Σ_i ln u_i
s.t. c_E(x) = 0, c_I(x) = -u, u ≥ 0  (IP)

Penalty-interior-point method Applying an interior-point reformulation to (PP):

min_{x,r,s,t,u} ρ f(x) - µ Σ_i (ln r_i + ln s_i + ln t_i + ln u_i) + e^T r + e^T s + e^T t
s.t. c_E(x) = r - s, c_I(x) = t - u  (PIP)

The optimization problem (OP) and feasibility problem (FP) can be solved via (PIP): µ → 0 and ρ → ρ̄ > 0 to solve (OP); µ → 0 and ρ → 0 to solve (FP).
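The limiting behavior in µ can be seen numerically by evaluating the (PIP) objective at a fixed trial point. A sketch with illustrative values of my own choosing (not PIPAL's code): as µ shrinks, the barrier term vanishes and the objective tends to ρf(x) + e^T r + e^T s + e^T t, the l_1 penalty objective of (PP).

```python
import math

def pip_objective(rho, mu, fx, r, s, t, u):
    """rho*f(x) - mu * sum of logs of all slacks + e^T r + e^T s + e^T t."""
    barrier = sum(math.log(z) for z in r + s + t + u)
    return rho * fx - mu * barrier + sum(r) + sum(s) + sum(t)

# Fixed trial point: f(x) = 4 with strictly positive slacks r, s, t, u.
for mu in [1.0, 1e-2, 1e-6]:
    print(mu, pip_objective(0.5, mu, 4.0, [0.5], [0.5], [5.0], [1.0]))
# The printed values approach 0.5*4 + 0.5 + 0.5 + 5.0 = 8.0 as mu shrinks.
```

With ρ also driven to 0, the same limit recovers the pure violation measure, matching the (FP) case on the slide.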

Literature Previous work with similar motivations: Jittorntrum and Osborne (1980) Polyak (1982, 1992, 2008) Breitfeld and Shanno (1994, 1996) Goldfarb, Polyak, Scheinberg, and Yuzefovich (1999) Gould, Orban, and Toint (2003) Chen and Goldfarb (2006, 2006) Benson, Sen, and Shanno (2008)

Literature Previous work with similar motivations: Jittorntrum and Osborne (1980) Polyak (1982, 1992, 2008) Breitfeld and Shanno (1994, 1996) Goldfarb, Polyak, Scheinberg, and Yuzefovich (1999) Gould, Orban, and Toint (2003) Chen and Goldfarb (2006, 2006) Benson, Sen, and Shanno (2008) Parameter updates are essential to have a practical algorithm. Conservative strategy: 1. Fix ρ > 0 and (approximately) solve (PIP) for µ → 0. 2. If infeasible, then reduce ρ and go to step 1. (ρ reduced too slowly to ensure fast convergence for infeasible problems.)
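The conservative strategy's structure can be sketched schematically. This is my own illustration with stubbed-out solver internals, not PIPAL's API; the function names and the toy stand-ins are assumptions. The point is that every pass through the loop body is a full interior-point solve, which is why slow ρ reduction is expensive on infeasible problems.

```python
# Schematic sketch of the conservative update strategy:
# fix rho, drive mu -> 0; if still infeasible, reduce rho and repeat.

def conservative_loop(solve_pip, is_feasible, rho=1.0, shrink=0.1, max_outer=10):
    x = None
    for _ in range(max_outer):
        x = solve_pip(rho)      # approximate solve of (PIP) for mu -> 0
        if is_feasible(x):      # declared feasible: stop
            return x, rho
        rho *= shrink           # de-emphasize f, emphasize feasibility
    return x, rho

# Toy stand-ins (assumed): "solving" returns a scalar whose constraint
# violation happens to equal rho, so several outer solves are needed.
solve_pip = lambda rho: rho
is_feasible = lambda x: x < 5e-3

x, rho = conservative_loop(solve_pip, is_feasible)
print(x, rho)
```

Each call to solve_pip here stands in for an entire inner µ-loop, so four outer iterations in this toy run would correspond to four interior-point solves in practice.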

PIPAL strategy Newton system has the form A_{ρ,µ} d = ρ q_1 + µ q_2 + q_3. Steering strategy now efficient! (Only one factorization required.) Solution for (ρ, µ) = (0, µ) indicates progress possible toward primal feasibility. Adjust ρ to aim for primal feasibility. Adjust µ to aim for dual feasibility and complementarity.
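The one-factorization observation can be illustrated in miniature. Assuming, as the slide's notation suggests, that the trial values of (ρ, µ) enter only through the right-hand side ρq_1 + µq_2 + q_3, one factorization of A yields every trial step by cheap triangular solves. The tiny matrix and vectors below are my own illustrative data, not from PIPAL.

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting (fine for this demo)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                   # forward substitution
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):         # back substitution
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
q1, q2, q3 = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
L, U = lu_factor(A)                      # factor once...
for rho, mu in [(1.0, 0.1), (0.0, 0.1), (0.0, 0.01)]:
    b = [rho * a + mu * c + d for a, c, d in zip(q1, q2, q3)]
    step = lu_solve(L, U, b)             # ...re-solve cheaply per trial pair
    print(rho, mu, step)
```

In particular, the (ρ, µ) = (0, µ) step that certifies possible progress toward primal feasibility costs only one extra back-substitution, not a new factorization.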

Conservative strategy Each dot may require at least a few iterations. Each row may require the computational effort of an entire interior-point solve. For an infeasible problem, we need both ρ and µ to reduce to zero.

PIPAL strategy Blue line separates acceptable and unacceptable ρ values (for primal feasibility). µ then set to ensure progress toward dual feasibility and complementarity. For an infeasible problem, ρ and µ can reduce rapidly.

Numerical results: Feasible problems (sample size = 422)

Numerical results: Feasible problems w/ ρ decrease (sample size = 136)

Numerical results: Degenerate problems (sample size = 114) Added constraints: c_i(x)^2 ≤ 0

Numerical results: Infeasible problems (sample size = 90) Added constraints: c_i(x)^2 ≤ -1

Outline Motivation Penalty-SQO Method Penalty-Interior-Point Method Unified Framework(?)

Constrained minimization Is this how we should formulate optimization problems?

min_x f(x) s.t. c_E(x) = 0, c_I(x) ≤ 0  (OP)

That is, should the strategy be the following: Attempt to solve (OP), but... if it is infeasible, then solve

min_x v(x) := ||[c_E(x); max{c_I(x), 0}]||  (FP)

Minimization over set of feasibility violation minimizers Let X be the solution set of feasibility violation minimization:

min_x v(x) := ||[c_E(x); max{c_I(x), 0}]||  (FP)

We should solve

min_x f(x) s.t. x ∈ X.  (UP)

(Even better would be a single problem or set of conditions for solving (UP).)

Consider the following model (AMPL presolver turned off):

var x;
minimize f: (x-1)^2;
s.t. c: 1 <= 0;

Solver     Inf. Declared?   Iterations   Final x
CONOPT     No               0            0.0000
FILTER     Yes              0            0.0000
IPOPT      Yes              6            0.0099
KNITRO     Yes              7            0.6667
LANCELOT   Yes              1            1.0000
LOQO       No               500          0.6728
MINOS      Yes              0            0.0000
MOSEK      Yes              1            1.0000
PENNON     No               0            1.0000
SNOPT      Yes              0            0.0000

Consider the following model (AMPL presolver turned off):

var x;
var y;
minimize f: (x-1)^2;
s.t. c: y^2 + 1 <= 0;

Solver     Inf. Declared?   Iterations   Final (x, y)
CONOPT     Yes              3            (0.0000, 0.0000)
FILTER     Yes              2            (0.0000, 0.0000)
IPOPT      Yes              6            (0.0099, 0.0000)
KNITRO     Yes              4            (0.3765, 0.0000)
LANCELOT   Yes              1            (1.0000, 0.0000)
LOQO       No               500          (0.4569, 0.0000)
MINOS      Yes              2            (1.0000, 0.0000)
MOSEK      Yes              1            (0.0000, 0.0000)
PENNON     No               0            (1.0000, 0.0000)
SNOPT      Yes              3            (1.0000, 0.0000)
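For this model the (UP) answer can be checked by brute force. The violation v(x, y) = max(y^2 + 1, 0) = y^2 + 1 is minimized exactly when y = 0, so the set of violation minimizers is X = {(x, 0)}, and minimizing f over that set gives (1, 0). A sketch for illustration only (the grid bounds and spacing are arbitrary choices of mine):

```python
# Brute-force check of (UP) for: minimize (x-1)^2 s.t. y^2 + 1 <= 0.

def v(x, y):
    return max(y * y + 1.0, 0.0)     # l_1 violation of y^2 + 1 <= 0

def f(x, y):
    return (x - 1.0) ** 2

grid = [i / 10.0 for i in range(-20, 21)]             # points in [-2, 2]
vmin = min(v(x, y) for x in grid for y in grid)       # minimal violation
X_bar = [(x, y) for x in grid for y in grid if v(x, y) == vmin]
best = min(X_bar, key=lambda p: f(*p))                # minimize f over X_bar
print(vmin, best)   # 1.0 (1.0, 0.0)
```

Note that (1, 0) is exactly the point LANCELOT, MINOS, PENNON, and SNOPT return in the table above, while several other solvers stop at violation minimizers with larger objective values.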

Penalty and switching methods Penalization provides one mechanism for transitioning to infeasibility detection:

min_{x,r,s,t} ρ f(x) + e^T r + e^T s + e^T t
s.t. c_E(x) = r - s, c_I(x) ≤ t, (r, s, t) ≥ 0  (PP)

However, once x ∈ X is found, f(x) has been thrown out (since ρ → 0). Switching methods throw out f(x) even earlier.

What is needed? General-purpose algorithms that: require no constraint qualifications to do something useful; minimize the objective over the set of feasibility violation minimizers; are as efficient and robust as contemporary methods when problems are nice.