Recent Adaptive Methods for Nonlinear Optimization


Recent Adaptive Methods for Nonlinear Optimization

Frank E. Curtis, Lehigh University

involving joint work with James V. Burke (U. of Washington), Richard H. Byrd (U. of Colorado), Nicholas I. M. Gould (RAL), Zheng Han (American Express), Johannes Huber (SAFEmine), Hao Jiang (Johns Hopkins), Travis Johnson (Square, Inc.), Jorge Nocedal (Northwestern), Daniel P. Robinson (Johns Hopkins), Olaf Schenk (U. della Svizzera Italiana), Philippe L. Toint (U. of Namur), Andreas Wächter (Northwestern), Hao Wang (ShanghaiTech), Jiashan Wang (U. of Washington)

ExxonMobil Research, Annandale, NJ, 8 July 2015

Outline: Motivation | NLP Algorithms | QP Algorithms | Summary


Constrained nonlinear optimization

Consider the constrained nonlinear optimization problem

  min_{x ∈ R^n} f(x)  s.t.  c(x) ≤ 0,                        (NLP)

where f : R^n → R and c : R^n → R^m are continuously differentiable. (Equality constraints are also OK, but suppressed for simplicity.)

We are interested in algorithms such that if (NLP) is infeasible, then there is an automatic transition to solving the feasibility problem

  min_{x ∈ R^n} v(x),  where v(x) = dist(c(x) | R^m_−).      (FP)

Any feasible point for (NLP) is an optimal solution of (FP).
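
To make v concrete: with constraints c(x) ≤ 0, only the positive components of c(x) lie outside the nonpositive orthant, so only they contribute to the distance. A minimal sketch, assuming the distance is measured in the Euclidean norm (the slide leaves the norm unspecified):

import numpy as np

def violation(c_val, ord=2):
    # dist(c(x) | R^m_-): components with c_i(x) <= 0 are already feasible,
    # so clip them to zero and measure what remains.
    return np.linalg.norm(np.maximum(c_val, 0.0), ord=ord)

# Example with c(x) = (x1 - 1, -x2) at x = (3, 5): c(x) = (2, -5),
# so only the first constraint is violated and v(x) = 2.
x = np.array([3.0, 5.0])
print(violation(np.array([x[0] - 1.0, -x[1]])))  # -> 2.0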

Algorithmic framework: Classic

  NLP solver → subproblem solver → linear solver

Algorithmic framework: Detailed

  NLP solver ⇄ subproblem solver ⇄ linear solver
  (down each interface: an approximation/model; back up: a solution)

Inefficiencies of traditional approaches

The traditional NLP algorithm classes, i.e.,
- augmented Lagrangian (AL) methods
- sequential quadratic optimization (SQP) methods
- interior-point (IP) methods
may fail or be inefficient when
- exact subproblem solves are expensive... or inexact solves are not computed intelligently
- algorithmic parameters are initialized poorly... or are updated too slowly or inappropriately
- a globalization mechanism inhibits productive early steps... or blocks superlinear local convergence

This is especially important when your subproblems are NLPs!

Contributions

A variety of algorithms and algorithmic tools
- incorporating/allowing inexact subproblem solves
- flexible step acceptance strategies
- adaptive parameter updates
- global convergence guarantees
- superlinear local convergence guarantees
- efficient handling of nonconvexity

This talk provides an overview of these tools within
- AL methods with adaptive penalty parameter updates
- SQP methods with inexact subproblem solves
- IP methods with inexact linear system solves
- a penalty-IP method with adaptive parameter updates
and subproblem methods also useful for control, image science, data science, etc.

Algorithmic framework: Detailed

  NLP solver ⇄ subproblem solver ⇄ linear solver
  (down each interface: an approximation/model; back up: a solution)

Algorithmic framework: Inexact

  NLP solver ⇄ subproblem solver ⇄ linear solver
  (down each interface: an approximation/model and termination conditions; back up: an approximate solution and a step type)

Outline: Motivation | NLP Algorithms | QP Algorithms | Summary (next: NLP Algorithms)

AL methods

A traditional augmented Lagrangian (AL) method for solving

  min_{(x,s) ∈ R^n × R^m_+} f(x)  s.t.  c(x) + s = 0

observes the following strategy:
- Given ρ ∈ R_+ and y ∈ R^m, approximately solve
    min_{(x,s) ∈ R^n × R^m_+} ρ f(x) + (c(x) + s)^T y + (1/2) ||c(x) + s||_2^2
- Update ρ and y to drive (global and local) convergence

Potential inefficiencies:
- Poor initial (ρ, y) may ruin a good initial (x, s)
- Slow/poor updates for (ρ, y) may lead to poor performance

A minimal realization of this loop is sketched below.
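
The following sketch runs the above strategy on a toy instance, using scipy.optimize.minimize as the (inexact) subproblem solver. The feasibility-progress test (factor 0.9), the tenfold decrease of ρ, and the first-order multiplier update y ← y + (c(x) + s) are illustrative choices under this ρ-on-the-objective scaling, not the talk's specific rules:

import numpy as np
from scipy.optimize import minimize

# Toy instance of min f(x) s.t. c(x) <= 0, written as c(x) + s = 0, s >= 0.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
c = lambda x: np.array([x[0] + x[1] - 2.0])  # x1 + x2 <= 2

def basic_al(x0, rho=1.0, iters=15):
    n, m = x0.size, c(x0).size
    y = np.zeros(m)
    z = np.concatenate([x0, np.maximum(-c(x0), 0.0)])  # starting (x, s)
    v_old = np.inf
    for _ in range(iters):
        def phi(z):  # AL subproblem: rho*f(x) + (c(x)+s)^T y + 0.5||c(x)+s||^2
            x, s = z[:n], z[n:]
            r = c(x) + s
            return rho * f(x) + r @ y + 0.5 * r @ r
        z = minimize(phi, z, bounds=[(None, None)] * n + [(0.0, None)] * m).x
        r = c(z[:n]) + z[n:]
        v = np.linalg.norm(r)
        if v <= max(0.9 * v_old, 1e-10):  # enough feasibility progress:
            y = y + r                     # first-order multiplier update
        else:                             # otherwise tighten the penalty
            rho *= 0.1
        v_old = min(v, v_old)
    return z[:n], y, rho

x, y, rho = basic_al(np.array([0.0, 0.0]))
print(x, y, rho)  # x approaches (1.5, 0.5), where x1 + x2 = 2 is active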

Ideas: Steering rules and adaptive multiplier updates

Steering rules for exact penalty methods: Byrd, Nocedal, Waltz (2008)
- Do not fix ρ during minimization
- Rather, before accepting any step, reduce ρ until the step yields progress in a model of constraint violation proportional to that yielded by a feasibility step
- Cannot be applied directly in an AL method

Steering rules for AL methods: Curtis, Jiang, Robinson (2014) w/ Gould (2015)
- Modified steering rules that may allow the constraint violation to increase
- Simultaneously incorporate adaptive multiplier updates
- Gains in performance in trust region and line search contexts
- Implementation in lancelot

(Also, steering rules in a penalty-IP method: Curtis (2012))

A sketch of the basic steering loop appears below.
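
The following sketch shows the Byrd-Nocedal-Waltz-style steering loop on a one-dimensional toy subproblem. The proportionality fraction eps, the shrink factor, and the closed-form subproblem solve are illustrative assumptions:

import numpy as np

def steer_rho(rho, d_feas, solve_sub, delta_qv, eps=0.1, shrink=0.2, rho_min=1e-12):
    """Shrink rho until the penalty step reduces a model of the constraint
    violation by at least eps times the feasibility step's reduction."""
    d = solve_sub(rho)
    while delta_qv(d) < eps * delta_qv(d_feas) and rho > rho_min:
        rho *= shrink
        d = solve_sub(rho)
    return rho, d

# Toy 1-D model: violation model q(d) = |c + d| with c = 2, objective slope g = -10.
c_lin, g = 2.0, -10.0
delta_qv = lambda d: abs(c_lin) - abs(c_lin + d)   # reduction in |c + d|
d_feas = -c_lin                                    # feasibility step drives q to 0
# min_d rho*g*d + 0.5 d^2 + |c + d| solves to d = -rho*g - 1 on this instance.
solve_sub = lambda rho: -rho * g - 1.0

rho, d = steer_rho(1.0, d_feas, solve_sub, delta_qv)
print(rho, d)  # rho is cut to 0.04; the step d = -0.6 now reduces the violation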

Numerical experiments

Implementation in lancelot shows that steering (with or without safeguarding) yields improved performance in terms of numbers of iterations on the CUTEst set.

[Performance profile comparing Lancelot, Lancelot-steering, and Lancelot-steering-safe]

A matlab implementation of an adaptive steering strategy (aal-ls) outperforms a basic AL method (bal-ls) in terms of function evaluations on an OPF set.

SQP methods

A traditional sequential quadratic optimization (SQP) method:
- Given x ∈ R^n and y ∈ R^m, solve
    min_{d ∈ R^n} f(x) + g(x)^T d + (1/2) d^T H(x, y) d
    s.t. c(x) + J(x) d ≤ 0                                    (QP)
- Set ρ ∈ R_+ to ensure d yields sufficient descent in φ(x; ρ) = ρ f(x) + v(x)

Potential inefficiencies/issues:
- (QP) may be infeasible
- (QP) is expensive to solve exactly
- inexact solves might not ensure that d yields descent in φ(x; ρ)
- steering is not viable with inexact solves

Ideas (equalities only): Step decomposition and SMART tests

Step decomposition in a trust region (TR) framework: Celis, Dennis, Tapia (1985)
- Normal step toward constraint satisfaction
- Tangential step toward optimality in the null space of the constraints
- Requires projections during the tangential computation

Normal step with TR, but TR-free tangential: Curtis, Nocedal, Wächter (2009)

Incorporate SMART tests: Byrd, Curtis, Nocedal (2008, 2010)
- Normal and tangential steps can be computed approximately
- Consider various types of inexact solutions
- Prescribed inexactness conditions based on penalty function model reduction

A sketch of the (exact) decomposition appears below.
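
The following sketch computes the normal/tangential decomposition with exact solves; the SMART-test methods replace these with truncated iterative solves. The 0.8 fraction-of-radius rule and the dense null-space basis (scipy.linalg.null_space) are illustrative simplifications:

import numpy as np
from scipy.linalg import null_space

def decomposed_step(g, H, J, c, radius=1.0):
    # Normal step: least-squares step toward J d = -c, kept inside a
    # fraction of the trust region (0.8 is an illustrative choice).
    dn = -np.linalg.pinv(J) @ c
    nrm = np.linalg.norm(dn)
    if nrm > 0.8 * radius:
        dn *= 0.8 * radius / nrm
    # Tangential step: minimize the quadratic model over null(J), which
    # preserves the linearized-constraint progress of the normal step.
    Z = null_space(J)
    u = np.linalg.solve(Z.T @ H @ Z, -Z.T @ (g + H @ dn))
    return dn + Z @ u

# min x1^2 + x2^2 + x3^2 s.t. x1 + x2 + x3 = 3, linearized at x = (0, 0, 0):
g = np.zeros(3); H = 2.0 * np.eye(3)
J = np.ones((1, 3)); c = np.array([-3.0])        # c(x) = x1 + x2 + x3 - 3
print(decomposed_step(g, H, J, c, radius=10.0))  # -> [1, 1, 1], the exact step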

Ideas (inequalities, too): Feasibility and optimality steps w/ scenarios

Inexact SQP method: Curtis, Johnson, Robinson, Wächter (2014)
- Similar to steering, compute an approximate(!) feasibility step for reference
- Also, given ρ ∈ R_+, solve
    min_{d ∈ R^n, s ∈ R^m_+} ρ (f(x) + g(x)^T d) + e^T s + (1/2) d^T H(ρ, x, y) d
    s.t. c(x) + J(x) d ≤ s                                    (PQP)
- Consider various types of inexact solutions:
  - Approximate Sl1QP step
  - Multiplier-only step
  - Convex combination of feasibility and Sl1QP steps

Iteration comparison: AL vs. SQP

AL (left) with cheap iterations vs. SQP (right) with expensive iterations

Iteration comparison: SQP vs. isqp

SQP (left) with expensive iterations vs. isqp (right) with cheaper iterations

Iteration comparison: AL vs. isqp

AL (left) with cheap iterations vs. isqp (right) with few expensive iterations

IP methods

A traditional interior-point (IP) method for solving

  min_{(x,s) ∈ R^n × R^m_+} f(x)  s.t.  c(x) + s = 0

observes the following strategy:
- Given µ ∈ R_+, approximately solve
    min_{(x,s) ∈ R^n × R^m_+} f(x) − µ Σ_{i=1}^m ln(s_i)  s.t.  c(x) + s = 0
- Update µ to drive (global and local) convergence

Potential inefficiencies:
- Direct factorizations for Newton's method on the subproblem can be expensive
- Slack bounds can block long steps ("jamming")
- Slow/poor updates for µ may lead to poor performance
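
A minimal barrier-method sketch on a toy inequality-constrained problem follows. For simplicity, the equality c(x) + s = 0 is eliminated by substituting s = -c(x), whereas a practical IP method takes (possibly inexact) Newton steps on the full primal-dual system; the fivefold decrease of µ is also an illustrative choice:

import numpy as np
from scipy.optimize import minimize

# Toy instance of min f(x) s.t. c(x) <= 0 (i.e., c(x) + s = 0 with s >= 0).
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
c = lambda x: np.array([x[0] + x[1] - 2.0])   # x1 + x2 <= 2

def barrier_method(x, mu=1.0, shrink=0.2, outer=8):
    for _ in range(outer):
        # Barrier subproblem with the equality eliminated via s = -c(x);
        # the clip guards the log if an iterate leaves the interior.
        phi = lambda x: f(x) - mu * np.sum(np.log(np.maximum(-c(x), 1e-16)))
        x = minimize(phi, x).x                # approximate subproblem solve
        mu *= shrink                          # update mu to drive convergence
    return x

print(barrier_method(np.array([0.0, 0.0])))   # -> approaches (1.5, 0.5)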

Ideas: Inexact IP method

Step decomposition with a scaled trust region: Byrd, Hribar, Nocedal (1999)

  ||(d_k^x, S_k^{-1} d_k^s)||_2 ≤ δ_k

Allow inexact linear system solves: Curtis, Schenk, Wächter (2010)
- Normal step with TR, but TR-free tangential
- Incorporate (modified) SMART tests

Implemented in ipopt (optimizer) with inexact, iterative linear system solves by pardiso (SQMR): Curtis, Huber, Schenk, Wächter (2011)

Implementation details

Incorporated in the ipopt software package: Wächter, Laird, Biegler
- interior-point algorithm with inexact step computations
- flexible penalty function for faster convergence: Curtis, Nocedal (2008)
- tests on 700 CUTEr problems: (almost) on par with original ipopt

Linear systems solved with pardiso: Schenk, Gärtner
- includes iterative linear system solvers, e.g., SQMR: Freund (1997)
- incomplete multilevel factorization with inverse-based pivoting
- stabilized by symmetric-weighted matchings

Server cooling example coded w/ libmesh: Kirk, Peterson, Stogner, Carey

Hyperthermia treatment planning

Let u_j = a_j e^{iφ_j} and M_jk(x) = <E_j(x), E_k(x)> where E_j = sin(j x_1 x_2 x_3 π):

  min (1/2) ∫_Ω (y(x) − y_t(x))^2 dx
  s.t. −Δy(x) − 10(y(x) − 37) = −u* M(x) u  in Ω
       37.0 ≤ y(x) ≤ 37.5  on ∂Ω
       42.0 ≤ y(x) ≤ 44.0  in Ω_0

Original ipopt with N = 32 requires 408 seconds per iteration.

  N    n      p      q      # iter   CPU sec (per iter)
  16   4116   2744   2994   68       22.893 (0.3367)
  32   32788  27000  13034  51       3055.9 (59.920)

Server room cooling

Let φ(x) be the air flow velocity potential:

  min Σ_i c_i v_{AC_i}
  s.t. −Δφ(x) = 0  in Ω
       ∂_n φ(x) = 0  on ∂Ω_wall
       ∂_n φ(x) = v_{AC_i}  on ∂Ω_{AC_i}
       φ(x) = 0  in Ω_{Exh_j}
       ||∇φ(x)||_2^2 ≥ v_min^2  on Ω_hot
       v_{AC_i} ≥ 0

Original ipopt with h = 0.05 requires 2390.09 seconds per iteration.

  h      n       p       q      # iter   CPU sec (per iter)
  0.10   43816   43759   4793   47       1697.47 (36.1164)
  0.05   323191  323134  19128  54       28518.4 (528.119)

Ideas: Penalty-IP method

- Jamming can be avoided by relaxation via penalty methods
- Two parameters to juggle: ρ and µ
- Simultaneous update motivated by steering penalty methods...
  ... and an adaptive barrier method: Nocedal, Wächter, Waltz (2009)
  ... leading to a penalty-interior-point method: Curtis (2012)
- More reliable than ipopt on degenerate and infeasible problems

Ideas: Trust-funnel IP method

Trust-funnel method for equality constrained problems: Gould, Toint (2010)
- Does not require a penalty function or a filter
- Drives convergence by a monotonically decreasing sequence {v_k^max} with v(x_k) ≤ v_k^max for all k
- Normal and tangential step decomposition, but with flexible computation that may allow skipping certain subproblems in some iterations

IP method with scaled trust regions: Curtis, Gould, Robinson, Toint (2015)

Idea: Flexible penalty function

A traditional penalty/merit function (Han (1977); Powell (1978)) requires

  f(x_k + α_k d_k) + π_k v(x_k + α_k d_k) ≤ f(x_k) + π_k v(x_k)

while a filter (Fletcher, Leyffer (1998)) requires

  f(x_k + α_k d_k) ≤ f_j  or  v(x_k + α_k d_k) ≤ v_j  for all j ∈ F_k

Either can result in blocking of certain steps.

Idea: Flexible penalty function

A flexible penalty function (Curtis, Nocedal (2008)) requires

  f(x_k + α_k d_k) + π v(x_k + α_k d_k) ≤ f(x_k) + π v(x_k)  for some π ∈ [π_k^l, π_k^u]

- The parameters π_k^l and π_k^u are updated separately
- Of course, blocking may still occur, but (hopefully) less often

A minimal version of this acceptance test is sketched below.
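
Since the test is linear in π, it suffices to check the two interval endpoints. A minimal sketch (a practical version adds a sufficient-reduction margin, omitted here):

def accept_flexible(f_new, v_new, f_old, v_old, pi_lo, pi_hi):
    """Flexible penalty test: accept the trial point if f + pi*v decreases
    for SOME pi in [pi_lo, pi_hi]; linearity in pi makes the endpoint
    checks sufficient."""
    return (f_new + pi_lo * v_new <= f_old + pi_lo * v_old or
            f_new + pi_hi * v_new <= f_old + pi_hi * v_old)

# A step trading an objective increase (1.0 -> 1.5) for a large feasibility
# gain (1.0 -> 0.1) is blocked by the single value pi = 0.1 but accepted
# somewhere in the flexible interval [0.1, 10]:
print(accept_flexible(f_new=1.5, v_new=0.1, f_old=1.0, v_old=1.0,
                      pi_lo=0.1, pi_hi=10.0))   # -> True (accepted at pi_hi)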

Idea: Rapid infeasibility detection

Suppose your algorithm is driven by a penalty/merit function such as φ(x) = ρ f(x) + v(x) and, at an iterate x_k, the following occur:
- x_k is infeasible for (NLP) in that v(x_k) > 0
- you have computed a feasibility step that does not reduce a model of v sufficiently compared to v(x_k)

Then there is reason to believe that (NLP) is (locally) infeasible, and you may obtain superlinear convergence to an infeasible stationary point by setting ρ on the order of the squared KKT residual for (FP): Byrd, Curtis, Nocedal (2010); Burke, Curtis, Wang (2014). A sketch of this switch appears below.
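
The following sketch shows the shape of the resulting switch; the threshold kappa and the exact rule rho ← min(rho, r^2), with r a KKT residual for (FP), are illustrative stand-ins for the cited papers' conditions:

def infeasibility_update(v_k, dqv_feas, kkt_fp_res, rho, kappa=0.1, tol=1e-8):
    """If x_k is meaningfully infeasible and even the feasibility step's
    model reduction is not proportional to v(x_k), treat (NLP) as locally
    infeasible and shrink rho toward the squared (FP) KKT residual."""
    if v_k > tol and dqv_feas < kappa * v_k:
        rho = min(rho, kkt_fp_res ** 2)
    return rho

print(infeasibility_update(v_k=0.5, dqv_feas=0.001, kkt_fp_res=1e-3, rho=1.0))
# -> 1e-06: the algorithm now behaves (fast) like a solver for (FP)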

Idea: Dynamic Hessian modifications

Inexact SQP and IP methods involve iterative solves of systems of the form

  [ H_k   J_k^T ] [ x_k ]     [ g_k ]
  [ J_k   0     ] [ y_k ] = − [ c_k ]

- If H_k is not positive definite in the null space of J_k, then even an exact solution may not be productive in terms of optimization
- Apply a symmetric indefinite iterative solver
- Under certain conditions, modify H_k (say, by adding some multiple of a positive definite matrix) and restart the solve (hopefully not entirely from scratch)

A direct-solver sketch of this modification loop appears below.
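
In the following sketch, positive definiteness of H on null(J) is certified by the inertia of the KKT matrix (n positive and m negative eigenvalues). A practical implementation reads the inertia off a symmetric indefinite LDL^T factorization, or detects trouble inside the iterative solve, rather than computing eigenvalues as done here for illustration; the initial shift xi and the tenfold growth are also illustrative:

import numpy as np

def solve_kkt_with_modification(H, J, g, c, xi=1e-4, max_mods=30):
    """Solve [H + sigma*I, J^T; J, 0][dx; dy] = -[g; c], increasing sigma
    until the KKT matrix has the correct inertia."""
    n, m = H.shape[0], J.shape[0]
    sigma = 0.0
    for _ in range(max_mods):
        K = np.block([[H + sigma * np.eye(n), J.T],
                      [J, np.zeros((m, m))]])
        eig = np.linalg.eigvalsh(K)           # K is symmetric
        if np.sum(eig > 0) == n and np.sum(eig < 0) == m:  # correct inertia
            sol = np.linalg.solve(K, -np.concatenate([g, c]))
            return sol[:n], sol[n:], sigma
        sigma = max(xi, 10.0 * sigma)         # grow the modification, retry
    raise RuntimeError("could not attain correct inertia")

# H is indefinite and negative on null(J), so the modification kicks in:
H = np.diag([1.0, -2.0]); J = np.array([[1.0, 0.0]])
g = np.array([1.0, 1.0]); c = np.array([0.5])
dx, dy, sigma = solve_kkt_with_modification(H, J, g, c)
print(dx, dy, sigma)   # sigma grows until H + sigma*I is positive on null(J)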

Outline: Motivation | NLP Algorithms | QP Algorithms | Summary (next: QP Algorithms)

Subproblem solvers for SQP methods

How should QP subproblems be solved within adaptive NLP methods?
- Need scalable step computations...
- ... but also the ability to obtain accurate solutions quickly

Traditional approaches have not been able to take SQP methods to large-scale!
- Primal (or dual) active-set methods
- Interior-point methods
- Alternating direction methods (sorry to the ADMM fans out there...)

Iterative reweighting algorithm

Consider a QP subproblem with H ≻ 0 of the form

  min_{x ∈ R^n} g^T x + (1/2) x^T H x + dist(Ax + b | C)

for a (product) set C. Approximate with a smooth quadratic about (x_k, ε_k) and solve (as {ε_k} → 0)

  min_{x ∈ R^n} g^T x + (1/2) x^T H x + Σ_{i=1}^m w_i(x_k, ε_k) ||A_i x + b_i − Proj(A x_k + b)_i||_2^2

Convergence/complexity theory for generic dist(·): Burke, Curtis, Wang, Wang (2015). A simplified reweighting loop is sketched below.
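
The following sketch is a simplified reweighting loop for the ℓ1 case dist(Ax + b | {0}) = ||Ax + b||_1. The weight formula w_i = 1/(2 sqrt(r_i^2 + ε^2)) and the geometric decrease of ε are standard smoothing choices, not necessarily the paper's exact update rules:

import numpy as np

def irwa(g, H, A, b, iters=100, eps=1.0, eps_factor=0.7):
    """Each iteration replaces |r_i| by w_i * r_i^2 about the current point
    and solves the resulting smooth weighted quadratic in closed form."""
    x = np.zeros(g.size)
    for _ in range(iters):
        r = A @ x + b
        w = 0.5 / np.sqrt(r ** 2 + eps ** 2)     # smoothed weights for |r_i|
        M = H + 2.0 * A.T @ (w[:, None] * A)
        x = np.linalg.solve(M, -(g + 2.0 * A.T @ (w * b)))
        eps *= eps_factor                        # drive eps -> 0
    return x

# min 0.5 x^2 - x + |x - 2|: the minimizer is the kink x = 2, where the
# penalty term vanishes.
g = np.array([-1.0]); H = np.array([[1.0]])
A = np.array([[1.0]]); b = np.array([-2.0])
print(irwa(g, H, A, b))   # -> approx 2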

Numerical experiments

Results for ℓ1-SVM show improvements over ADMM in terms of CG iterations and sparsity.

Primal-dual active-set (PDAS) methods

Consider a QP subproblem with H ≻ 0 of the form

  min_{x ∈ R^n} g^T x + (1/2) x^T H x  s.t.  x ≤ u

Given a partition (A, I) of the index set of the variables, a PDAS method performs:
1. Set x_A = u_A and y_I = 0
2. Compute a primal subspace minimizer x_I (via a linear system)
3. Set the remaining dual variables y_A to satisfy complementarity
4. Update the partition based on violated bounds

Finite termination for certain H (e.g., M-matrices): Hintermüller, Ito, Kunisch (2003). A compact realization of steps 1-4 is sketched below.
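
The sketch below implements steps 1-4 for the bound x ≤ u, using the Hintermüller-Ito-Kunisch partition update (their rule with c = 1); the dense linear algebra and the specific test problem are illustrative:

import numpy as np

def pdas(H, g, u, iters=20):
    """PDAS sketch for min g^T x + 0.5 x^T H x s.t. x <= u.
    KKT system: Hx + g + y = 0, y >= 0, y_i (x_i - u_i) = 0."""
    n = g.size
    A = np.zeros(n, dtype=bool)                # active set: x_i = u_i
    for _ in range(iters):
        x, y = np.empty(n), np.zeros(n)
        x[A] = u[A]                            # 1. fix active primal variables
        I = ~A
        # 2. primal subspace minimizer: H_II x_I = -(g_I + H_IA u_A)
        x[I] = np.linalg.solve(H[np.ix_(I, I)], -(g[I] + H[np.ix_(I, A)] @ u[A]))
        # 3. duals on the active set from stationarity: y_A = -(Hx + g)_A
        y[A] = -(H @ x + g)[A]
        # 4. new partition from violated primal bounds / dual signs
        A_new = (x > u) | ((y > 0) & A)
        if np.array_equal(A_new, A):
            return x, y                        # partition settled: optimal
        A = A_new
    return x, y

H = np.array([[2.0, 0.5], [0.5, 1.0]])
g = np.array([-4.0, -1.0])
u = np.array([1.0, 1.0])
x, y = pdas(H, g, u)
print(x, y)   # x1 hits its bound (x = [1, 0.5]) with multiplier y1 = 1.75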

Algorithmic extensions

We have proposed a variety of algorithmic extensions:
- Globalization strategy for general convex QPs: Curtis, Han, Robinson (2014)
- Inexact subspace minimization techniques (w/ guarantees) for certain H: Curtis, Han (2015)
- Adapted for isotonic regression and trend filtering: Curtis, Han (2015)

Outline: Motivation | NLP Algorithms | QP Algorithms | Summary (next: Summary)

Contributions

A variety of algorithms and algorithmic tools
- incorporating/allowing inexact subproblem solves
- flexible step acceptance strategies
- adaptive parameter updates
- global convergence guarantees
- superlinear local convergence guarantees
- efficient handling of nonconvexity

We have outlined these tools within
- an AL method with adaptive penalty parameter updates
- SQP methods with inexact subproblem solves
- an IP method with inexact linear system solves
- a penalty-IP method with dynamic parameter updates
and discussed subproblem methods that are also useful in their own right.

Bibliography I

[1] J. V. Burke, F. E. Curtis, and H. Wang. A Sequential Quadratic Optimization Algorithm with Rapid Infeasibility Detection. SIAM Journal on Optimization, 24(2):839–872, 2014.
[2] J. V. Burke, F. E. Curtis, H. Wang, and J. Wang. Iterative Reweighted Linear Least Squares for Exact Penalty Subproblems on Product Sets. SIAM Journal on Optimization, 25(1):261–294, 2015.
[3] R. H. Byrd, F. E. Curtis, and J. Nocedal. An Inexact SQP Method for Equality Constrained Optimization. SIAM Journal on Optimization, 19(1):351–369, 2008.
[4] R. H. Byrd, F. E. Curtis, and J. Nocedal. An Inexact Newton Method for Nonconvex Equality Constrained Optimization. Mathematical Programming, 122(2):273–299, 2010.
[5] R. H. Byrd, F. E. Curtis, and J. Nocedal. Infeasibility Detection and SQP Methods for Nonlinear Optimization. SIAM Journal on Optimization, 20(5):2281–2299, 2010.
[6] F. E. Curtis. A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization. Mathematical Programming Computation, 4(2):181–209, 2012.
[7] F. E. Curtis, N. I. M. Gould, H. Jiang, and D. P. Robinson. Adaptive Augmented Lagrangian Methods: Algorithms and Practical Numerical Experience. Technical Report 14T-006, COR@L Laboratory, Department of ISE, Lehigh University, 2014. In second review for Optimization Methods and Software.
[8] F. E. Curtis, N. I. M. Gould, D. P. Robinson, and Ph. L. Toint. An Interior-Point Trust-Funnel Algorithm for Nonlinear Optimization. Technical Report 13T-010, COR@L Laboratory, Department of ISE, Lehigh University, 2013. In second review for Mathematical Programming.

Bibliography II

[9] F. E. Curtis and Z. Han. Globally Convergent Primal-Dual Active-Set Methods with Inexact Subproblem Solves. Technical Report 14T-010, COR@L Laboratory, Department of ISE, Lehigh University, 2014. In second review for SIAM Journal on Optimization.
[10] F. E. Curtis, J. Huber, O. Schenk, and A. Wächter. A Note on the Implementation of an Interior-Point Algorithm for Nonlinear Optimization with Inexact Step Computations. Mathematical Programming, Series B, 136(1):209–227, 2012.
[11] F. E. Curtis, H. Jiang, and D. P. Robinson. An Adaptive Augmented Lagrangian Method for Large-Scale Constrained Optimization. Mathematical Programming, DOI: 10.1007/s10107-014-0784-y, 2014.
[12] F. E. Curtis, T. Johnson, D. P. Robinson, and A. Wächter. An Inexact Sequential Quadratic Optimization Algorithm for Large-Scale Nonlinear Optimization. SIAM Journal on Optimization, 24(3):1041–1074, 2014.
[13] F. E. Curtis and J. Nocedal. Flexible Penalty Functions for Nonlinear Constrained Optimization. IMA Journal of Numerical Analysis, 28(4):749–769, 2008.
[14] F. E. Curtis, J. Nocedal, and A. Wächter. A Matrix-Free Algorithm for Equality Constrained Optimization Problems with Rank Deficient Jacobians. SIAM Journal on Optimization, 20(3):1224–1249, 2009.
[15] F. E. Curtis, O. Schenk, and A. Wächter. An Interior-Point Algorithm for Large-Scale Nonlinear Optimization with Inexact Step Computations. SIAM Journal on Scientific Computing, 32(6):3447–3475, 2010.