MS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME)
MS&E 318 (CME 338) Large-Scale Numerical Optimization
Instructor: Michael Saunders, Spring 2015
Notes 11: NPSOL and SNOPT SQP Methods

1 Overview

Here we consider sequential quadratic programming methods (SQP methods) for the general optimization problem

    NCB    minimize_{x ∈ Rⁿ}  φ(x)
           subject to  c(x) = 0,   l ≤ x ≤ u.

As with the BCL and LCL approaches, SQP methods use a sequence of subproblems (each representing a major iteration) to find estimates (x_k, y_k) that reduce certain augmented Lagrangians. An important function of the subproblems is to estimate which inequalities in l ≤ x ≤ u are active. If an active-set method is used, it will perform some number of minor iterations on each subproblem.

In BCL and LCL methods, the subproblem objective is the current augmented Lagrangian, which must be evaluated each minor iteration. In MINOS, for example, the reduced-gradient method that solves the LCL subproblems requires an average of 2 or 3 evaluations of the functions and gradients per minor iteration. This is acceptable if the function and gradient evaluations are cheap (as they are in the GAMS and AMPL environments) but may be prohibitive in other applications. For example, shape optimization in aerospace design requires the solution of a partial differential equation (PDE) each time φ(x) and c(x) are evaluated, and an adjoint PDE to obtain gradients.

SQP methods economize by using quadratic programming (QP) subproblems to estimate the set of active constraints. Many minor iterations may be needed to solve the subproblems, but the functions and gradients are not evaluated during that process. Instead, each subproblem solution is used to generate a search direction (Δx, Δy) for a linesearch on the augmented Lagrangian (regarded as a function of both x and y). Typically only 1–5 function and gradient evaluations are needed in each linesearch, regardless of problem size.
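To make the flow of a major iteration concrete, here is a minimal SQP sketch (hypothetical Python, not NPSOL or SNOPT code) for a toy problem with one equality constraint and no bounds, so each QP subproblem reduces to a single KKT solve; a backtracking linesearch on the augmented Lagrangian in (x, y) then sets the step. The toy functions φ and c and the penalty value are invented for illustration.

```python
import numpy as np

# Toy instance of NCB with equality constraints only:
#   minimize phi(x) = x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
phi = lambda x: x[0]**2 + x[1]**2
g   = lambda x: 2.0 * x                            # gradient g_0(x)
c   = lambda x: np.array([x[0] + x[1] - 1.0])
J   = lambda x: np.array([[1.0, 1.0]])             # Jacobian of c
H   = lambda x, y: 2.0 * np.eye(2)                 # Hessian of the Lagrangian

def merit(x, y, rho):                              # augmented Lagrangian L(x, y, rho)
    r = c(x)
    return phi(x) - y @ r + 0.5 * rho * (r @ r)

x, y, rho = np.array([2.0, 0.0]), np.zeros(1), 1.0
for k in range(30):                                # major iterations
    # QP subproblem: min g'dx + (1/2) dx' H dx  s.t.  c + J dx = 0.
    # With no inequalities it is solved by one KKT system for (dx, y_hat).
    Jk = J(x)
    K = np.block([[H(x, y), -Jk.T],
                  [Jk,      np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-g(x), -c(x)]))
    dx, y_hat = sol[:2], sol[2:]
    dy = y_hat - y
    if np.linalg.norm(dx) + np.linalg.norm(dy) < 1e-10:
        break                                      # QP reproduces (x, y): optimal
    alpha = 1.0                                    # linesearch on the merit function
    while merit(x + alpha*dx, y + alpha*dy, rho) > merit(x, y, rho) and alpha > 1e-8:
        alpha *= 0.5
    x, y = x + alpha*dx, y + alpha*dy
```

Note that the functions are evaluated only inside the (short) linesearch, not during the QP solve. In NPSOL the single KKT solve is replaced by an inequality-constrained QP, and H by a quasi-Newton approximation.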
However, SQP methods are likely to need more major and minor iterations than LCL methods (i.e., more linear algebra).

2 An NPSOL application

NPSOL [9] is a successful dense SQP implementation. It has found wide use within the NAG Library and has proved remarkably robust and efficient on highly nonlinear optimization problems, particularly within the aerospace industry. In the 1990s, the largest application we encountered was a trajectory optimization at McDonnell Douglas Space Systems (Huntington Beach) involving about 2000 constraints and variables, computing controls for the last breath-taking seconds of flight of the DC-Y, a proposed SSTO (Single-Stage-To-Orbit) manned spacecraft. The aim was to minimize the fuel needed as the rocket, descending nose-first to Earth, started rotating at an altitude of 2800 feet and slowed down using rocket thrust just in time to land on its tail. NPSOL's solution showed that the landing maneuver could be accomplished using half the fuel estimated by classical techniques. A second optimization was used to determine when the rotation should begin. With an extra constraint limiting acceleration to 3g (since a human crew would be aboard!), the optimal rotation altitude proved to be only 1400 feet, about the height of the Empire State Building. Such touchdowns would be far more abrupt and astonishing than the Apollo moon landings. (More like the Lunar Excursion Module's take-off from the moon played in reverse!)
3 Quadratic approximations in a subspace

Linearized constraints are a key component of SQP methods, just as they are for LCL methods. They define a subspace in which it is reasonable to form a quadratic approximation to the augmented Lagrangian. Let (x_k, y_k) be the current solution estimate and ρ_k be the current penalty parameter. Also define J_k ≡ J(x_k). The following functions are needed:

    Linear approximation to c(x):    c̄_k(x) = c(x_k) + J_k (x − x_k)
    Departure from linearity:        d_k(x) = c(x) − c̄_k(x)
    Modified augmented Lagrangian:   M_k(x) = φ(x) − y_kᵀ d_k(x) + ½ ρ_k ‖d_k(x)‖²

with gradient and Hessian

    ∇M_k(x)  = g_0(x) − (J(x) − J_k)ᵀ ỹ_k(x)
    ∇²M_k(x) = H_0(x) − Σ_i (ỹ_k(x))_i H_i(x) + ρ_k (J(x) − J_k)ᵀ (J(x) − J_k),

where ỹ_k(x) ≡ y_k − ρ_k d_k(x). The last four expressions simplify greatly when x = x_k. In particular, d_k(x_k) = 0 and ỹ_k(x_k) = y_k, and the terms involving J(x) and ρ_k vanish:

    M_k(x_k)   = φ(x_k)
    ∇M_k(x_k)  = g_0(x_k)
    ∇²M_k(x_k) = H_0(x_k) − Σ_i (y_k)_i H_i(x_k).

With Δx ≡ x − x_k, the quadratic approximation to M_k(x) at x_k on the subspace c̄_k(x) = 0 is therefore

    Q_k(x) = φ(x_k) + g_0(x_k)ᵀ Δx + ½ Δxᵀ ∇²M_k(x_k) Δx,

where the quadratic term may or may not be available.

4 NPSOL

Recall that the MINOS subproblem takes the form

    LC_k   minimize_{x ∈ Rⁿ}  M_k(x) = φ(x) − y_kᵀ d_k(x) + ½ ρ_k ‖d_k(x)‖²
           subject to  c̄_k(x) = 0,   l ≤ x ≤ u.

The stabilized LCL method adapts early subproblems judiciously to ensure global convergence, but ultimately the subproblems become LC_k and their solutions (x_k, y_k) converge rapidly, as predicted by Robinson (1972). The SQP methods in NPSOL and SNOPT are based on the same subproblem, but with the objective M_k(x) replaced by a quadratic approximation Q_k(x):

    QP_k   minimize_{x ∈ Rⁿ}  Q_k(x) = φ(x_k) + g_0(x_k)ᵀ Δx + ½ Δxᵀ H_k Δx
           subject to  c̄_k(x) = 0,   l ≤ x ≤ u,

where H_k is a quasi-Newton approximation to ∇²M_k(x) (perhaps with ρ_k = 0).
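The simplifications at x = x_k are easy to check numerically. The sketch below (illustrative Python; the toy φ, c, and data values are invented) verifies by central differences that d_k(x_k) = 0 and ∇M_k(x_k) = g_0(x_k), i.e., the multiplier and penalty terms contribute nothing to the gradient at the current point.

```python
import numpy as np

# Toy data (invented): phi(x) = sin(x1) + x2^2, one constraint c(x) = x1^2 + x2 - 1.
phi = lambda x: np.sin(x[0]) + x[1]**2
g0  = lambda x: np.array([np.cos(x[0]), 2.0 * x[1]])
c   = lambda x: np.array([x[0]**2 + x[1] - 1.0])
J   = lambda x: np.array([[2.0 * x[0], 1.0]])

xk, yk, rho = np.array([0.7, 0.3]), np.array([2.0]), 10.0
Jk, ck = J(xk), c(xk)

cbar = lambda x: ck + Jk @ (x - xk)                # linearization of c at x_k
d    = lambda x: c(x) - cbar(x)                    # departure from linearity
M    = lambda x: phi(x) - yk @ d(x) + 0.5 * rho * (d(x) @ d(x))

# Central-difference gradient of M_k at x_k:
h, grad = 1e-6, np.zeros(2)
for i in range(2):
    e = np.zeros(2); e[i] = h
    grad[i] = (M(xk + e) - M(xk - e)) / (2.0 * h)
# grad matches g_0(x_k): the d_k terms vanish at x_k to second order.
```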
To achieve global convergence, QP_k is regarded as providing a search direction (Δx, Δy) along which the augmented Lagrangian M_k(x) (with ρ_k ≥ 0) may be reduced as a function of both primal and dual variables. Note that the true Hessian of M_k(x) is zero in rows and columns associated with linear variables. In NPSOL, slack variables are therefore excluded from H_k, and in SNOPT both
slacks and other linear variables in x are excluded. The quasi-Newton Hessian approximations are therefore of the form

    H_k = ( H̄_k  0 )
          (  0   0 ),

where H̄_k is positive definite. The positive definiteness provides an intrinsic bound on the relevant parts of Δx, improving the validity of the quadratic model.

4.1 NPSOL's merit function

Let (x̂_k, ŷ_k) be the primal and dual solution of QP_k. NPSOL forms the search direction

    ( Δx )   ( x̂_k − x_k )
    ( Δy ) = ( ŷ_k − y_k )

and seeks a steplength α such that moving to the point (x, y) ≡ (x_k + αΔx, y_k + αΔy) (where 0 < α ≤ 1) produces a sufficient decrease in the augmented Lagrangian

    L(x, y, ρ_k) = φ(x) − yᵀ c(x) + ½ ρ_k ‖c(x)‖²

relative to L(x_k, y_k, ρ_k). Initially ρ_k = 0. Later, ρ_k is increased if necessary to ensure descent for the merit function. It may also be decreased a limited number of times, because a finite (unknown) value is always adequate for well-defined problems, and because ρ_k = 0 becomes acceptable within some neighborhood of a solution to NCB. The global convergence properties of NPSOL (and the strategy for adjusting ρ_k) are described in Gill, Murray, Saunders, and Wright [11]. The QP subproblems are assumed to be feasible and to be solved accurately.

4.2 Hessian updates

A dense triangular factorization H_k = R_kᵀ R_k is maintained in NPSOL. After the linesearch, a BFGS update of the form

    R_{k+1} = Q_k R_k (I + r_k s_kᵀ),   Q_k orthogonal,

is made to improve the quality of the Hessian approximation. Various choices are possible for the vectors r_k and s_k. They reflect the change in x and the change in gradient of an updated Lagrangian L_k(x, y_{k+1}, ρ_{k+1}) defined with the latest estimate of y (but with ρ_{k+1} = 0 where possible). NPSOL uses the least-squares solver LSSOL [5] for the subproblems QP_k. LSSOL deals directly with the triangular factor R_k. It is unique in avoiding the squared condition number that other solvers fall prey to with Hessians of the form R_kᵀ R_k.
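The remark about the squared condition number is easy to demonstrate: since H_k = R_kᵀR_k has cond(H_k) = cond(R_k)², a solver that works directly with the triangular factor (two triangular solves per system) operates on a far better-conditioned matrix than one that forms H_k explicitly. A small numpy illustration (not LSSOL itself; random data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
R = np.triu(rng.standard_normal((n, n))) + 2.0 * np.eye(n)
R[0, 0] = 1e-2                       # make R ill-conditioned on purpose
H = R.T @ R                          # explicit Hessian

kR, kH = np.linalg.cond(R), np.linalg.cond(H)
# kH equals kR**2: forming H squares the condition number.

# Solving H p = -g through the factor alone: forward solve with R',
# then back solve with R (np.linalg.solve stands in for triangular solves).
g = rng.standard_normal(n)
p = np.linalg.solve(R, np.linalg.solve(R.T, -g))
```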
5 SNOPT

The reliability of NPSOL and LSSOL allowed them to be applied to increasingly large problems (particularly at McDonnell Douglas Space Systems, Huntington Beach, CA during the 1990s). However, LSSOL uses orthogonal transformations of J(x_k) to solve the QP subproblems, and as problem size increases the total cost becomes dominated by the associated linear algebra. To a large extent, SNOPT is a large-scale SQP implementation that combines the virtues of MINOS and NPSOL. Sparse Jacobians are handled as in the reduced-gradient method (using LUSOL to maintain a sparse basis factorization), and the same merit function as in NPSOL ensures global convergence.
The primary innovation is a method for treating the QP Hessian in sparse form. Also, infeasible problems NCB are dealt with more methodically by including an ℓ₁ penalty term within the problem definition (where e is a vector of ones):

    NCB(γ)   minimize_{x, v, w}  φ(x) + γ eᵀ(v + w)
             subject to  c(x) + v − w = 0,   l ≤ x ≤ u,   v, w ≥ 0.

Initially γ = 0, but it is set to γ ≈ 10⁴ (1 + ‖g_0(x_k)‖) if one of the QP subproblems QP_k(0) (see below) proves to be infeasible, or if the norm ‖y‖ of the dual variables for QP_k(0) becomes large. SNOPT is then in elastic mode for subsequent major iterations. Optimization continues with that value of γ until an optimal solution of NCB(γ) has been obtained. If the nonlinear constraints are not satisfied (‖c(x)‖ not sufficiently small), γ is increased and optimization continues. If γ reaches an allowable maximum value and the nonlinear constraints are still violated significantly, the problem is declared infeasible. At this stage, SNOPT has essentially minimized ‖c(x)‖₁.

5.1 The QP subproblem

If SNOPT is in elastic mode, the SNOPT subproblems include the ℓ₁ penalty terms verbatim:

    QP_k(γ)   minimize_{x, v, w}  φ(x_k) + g_0(x_k)ᵀ Δx + ½ Δxᵀ H_k Δx + γ eᵀ(v + w)
              subject to  c̄_k(x) + v − w = 0,   l ≤ x ≤ u,   v, w ≥ 0,

with γ > 0. But before elastic mode is activated, γ and v, w are zero.

5.2 SQOPT

SNOPT includes a sparse QP solver SQOPT, which employs a two-phase active-set algorithm to achieve feasibility and optimality. SQOPT implements elastic programming implicitly. Any specified bounds on the variables or constraints may be elastic. The associated variables or slacks are given a piecewise-linear objective function that penalizes values outside the real bounds. If γ > 0 in NCB(γ) (perhaps because previous QP subproblems were infeasible), SNOPT defines elastic bounds on the slacks associated with linearized nonlinear constraints.
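For fixed x, the elastic variables can be eliminated in closed form: minimizing γeᵀ(v + w) subject to c̄(x) + v − w = 0 and v, w ≥ 0 gives v = max(−c̄, 0) and w = max(c̄, 0), so the penalty term equals γ‖c̄(x)‖₁. A few lines of numpy (with invented residual values) confirm this:

```python
import numpy as np

gamma = 1.0e4
r = np.array([0.3, -1.2, 0.0])      # residuals c_bar(x) of the linearized constraints

# Optimal elastic variables for fixed x: shift the infeasibility into v, w >= 0.
v = np.maximum(-r, 0.0)
w = np.maximum(r, 0.0)

penalty = gamma * np.sum(v + w)     # equals gamma * ||c_bar(x)||_1
```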
SQOPT then solves each QP_k(γ) with v and w implemented implicitly. Separately, SNOPT can use SQOPT to find a point close to a user-provided starting point x_0. One option is to set l_j = u_j = (x_0)_j for the nonlinear variables and specify that those bounds are elastic. (Linear variables retain their true bounds.) SQOPT will then solve the proximal-point problem

    PP1   minimize_{x ∈ Rⁿ}  ‖x − x_0‖₁

subject to the linear constraints and (modified) bounds. Alternatively, SNOPT's PP2 option asks SQOPT to minimize ½ ‖x − x_0‖₂² subject to the linear constraints and true bounds. For further details of SNOPT and SQOPT, see [6, 7, 8].

5.3 Limited-memory Hessian updates

For moderate-sized problems, SNOPT maintains a dense Hessian approximation similar to that in NPSOL. For large problems with thousands of nonlinear variables, a limited-memory procedure is used to represent H_k. At certain iterations k = r, say, a positive-definite
diagonal estimate H_r = R_r² is used. In particular, R_0 is a multiple of the identity. Up to ℓ quasi-Newton updates (default ℓ = 10) are accumulated to represent the Hessian as

    H_k = R_kᵀ R_k,   R_k = R_r ∏_{j=r}^{k−1} (I + r_j s_jᵀ).

SQOPT accesses H_k by requesting products of the form H_k v. The work and storage required grow linearly with k − r. When k = r + ℓ, the diagonals of R_k are computed and saved to form the next R_r. Storage is then reset by discarding the vectors {r_j, s_j}. More elaborate limited-memory methods are known, notably those of Nocedal et al. [16, 2]. The simple form above has an advantage for problems with purely linear constraints: the reduced Hessian required by the reduced-gradient implementation in SQOPT can be updated between major iterations. This is important on large problems with many superbasic variables. As the SQP iterations converge, very few minor iterations are needed to solve QP_k and the cost becomes dominated by the formation of Zᵀ H_k Z and its factors, but when the constraints are linear, this expense is avoided. (The LU factors of the basis can also be retained.)

The thesis research of Andrew Bradley [1] led to a new limited-memory quasi-Newton implementation for SNOPT7. A circular buffer is used to hold the latest pairs of update vectors {s_j, y_j} (recording the recent changes in x and the gradient of the Lagrangian), and the initial diagonal Hessian approximation H_0 is continually updated. The resulting LMCBD procedure (limited-memory circular buffer with diagonal update) produces a robust Hessian approximation that on average outperforms SNOPT's previous limited-memory implementation.

5.4 Quasi-Newton and CG

To avoid forming and factorizing the reduced Hessian at the start of each major iteration, SNOPT can use a quasi-Newton option in SQOPT. It maintains dense triangular factors RᵀR ≈ Zᵀ H_k Z analogous to the RG method in MINOS.
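The limited-memory product form H_k = R_kᵀR_k with R_k = R_r ∏(I + r_j s_jᵀ) is cheap to apply: storing R_r as a diagonal vector and the ℓ update pairs {r_j, s_j}, the products R_k v, R_kᵀv, and hence H_k v = R_kᵀ(R_k v) each cost O(ℓn). A sketch (illustrative Python with random data; the actual choice of r_j, s_j in SNOPT comes from the BFGS update and is not shown):

```python
import numpy as np

rng = np.random.default_rng(1)
n, ell = 50, 5
Rr = np.abs(rng.standard_normal(n)) + 0.5           # diagonal R_r (stored as a vector)
pairs = [(0.1 * rng.standard_normal(n),             # update pairs (r_j, s_j),
          0.1 * rng.standard_normal(n))             # kept small so R_k stays nonsingular
         for _ in range(ell)]

def R_mv(v):
    # R_k v = R_r (I + r_0 s_0') ... (I + r_{l-1} s_{l-1}') v: rightmost factor first
    for r, s in reversed(pairs):
        v = v + r * (s @ v)
    return Rr * v

def Rt_mv(v):
    # R_k' v = (I + s_{l-1} r_{l-1}') ... (I + s_0 r_0') (R_r v)
    v = Rr * v
    for r, s in pairs:
        v = v + s * (r @ v)
    return v

def H_mv(v):
    # H_k v = R_k'(R_k v): O(l*n) work, and H_k is never formed explicitly
    return Rt_mv(R_mv(v))
```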
A further option is to use the conjugate-gradient method to solve the reduced-Hessian system Zᵀ H_k Z Δx_S = −Zᵀg. This often converges in remarkably few iterations even when there are many thousands of superbasic variables. The diagonal-plus-low-rank structure of H_k plus the large identity matrix in Z must be largely responsible, though it would be easier to explain if the diagonal part H_r = R_r² were always the identity.

5.5 QP solvers for many degrees of freedom

Hanh Huynh's thesis research [15] included the development of a convex QP solver QPBLU that periodically factorizes the current KKT matrix

    K_0 = ( H_0  Wᵀ )
          ( W    0 ),     H_0 diagonal,

and uses block-LU updates to accommodate up to about 100 active-set changes to W as well as a few BFGS updates to H_0. The aim was to handle many degrees of freedom with a direct method rather than the CG option above (allowing W to be large, with many more columns than rows). QPBLU was implemented in Fortran 95 and designed to use any available sparse solver to factorize the current K_0. It included interfaces to LUSOL [10], MA57 [4], PARDISO [18], SuperLU [3], and UMFPACK [19]. Such solvers are assumed to give a factorization K_0 = LU and to permit separate solves with L, Lᵀ, U, and Uᵀ. For example, MA57's symmetric factorization K_0 = LDLᵀ was regarded as K_0 = L(DLᵀ). At the time, PARDISO did not separate its factors, so QPBLU used L = I and U = K_0. This work was continued by Chris Maes for his thesis research [17]. A new QP solver QPBLUR was developed with modules from QPBLU serving as a starting point. QPBLUR used a BCL method to solve a sequence of regularized convex QP subproblems (in order to solve the given convex QP). It was incorporated into SNOPT7 as an alternative to SQOPT. The aim was to allow use of the most advanced (possibly parallel) linear system solvers within
an active-set QP solver and hence within an SQP algorithm. In May 2010, Chris installed a more recent version of PARDISO that allowed separate solves with L and U and optionally used multiple CPUs during its Factor phase. The intention was for SNOPT to use SQOPT while the number of superbasic variables remains below 2000 (say), then switch to QPBLUR if necessary. With its warm-start capability, QPBLUR bridged the gap between null-space active-set methods and full-space interior methods. Comprehensive numerical comparisons are given in [17].

Meanwhile, Elizabeth Wong for her thesis research [20] developed a convex and nonconvex QP solver icQP, based again on block-LU factors of KKT systems (but without regularization). More recently, Gill and Wong have developed SQIC [14, 13, 12], a further block-LU/KKT-based solver for convex and nonconvex QP intended for use within SNOPT9 (a Fortran 2003 implementation of SNOPT). SQIC manages the transition from a reduced-Hessian method to a block-LU method for updating the active set.

References

[1] A. M. Bradley. Algorithms for the Equilibration of Matrices and Their Application to Limited-Memory Quasi-Newton Methods. PhD thesis, ICME, Stanford University, May 2010.
[2] R. H. Byrd, J. Nocedal, and R. B. Schnabel. Representations of quasi-Newton matrices and their use in limited-memory methods. Math. Program., 63:129–156, 1994.
[3] J. W. Demmel, S. C. Eisenstat, J. R. Gilbert, X. S. Li, and J. W. H. Liu. A supernodal approach to sparse partial pivoting. SIAM J. Matrix Anal. Appl., 20(3):720–755, 1999.
[4] I. S. Duff. MA57: a Fortran code for the solution of sparse symmetric definite and indefinite systems. ACM Trans. Math. Software, 30(2):118–144, 2004. See ldl in Matlab.
[5] P. E. Gill, S. J. Hammarling, W. Murray, M. A. Saunders, and M. H. Wright. User's guide for LSSOL (Version 1.0): a Fortran package for constrained linear least-squares and convex quadratic programming.
Report SOL 86-1, Department of Operations Research, Stanford University, Stanford, CA, 1986.
[6] P. E. Gill, W. Murray, and M. A. Saunders. SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM Review, 47(1):99–131, 2005. SIGEST article.
[7] P. E. Gill, W. Murray, and M. A. Saunders. User's guide for SNOPT 7: Software for large-scale nonlinear programming (see Software).
[8] P. E. Gill, W. Murray, and M. A. Saunders. User's guide for SQOPT 7: Software for large-scale linear and quadratic programming (see Software).
[9] P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. User's guide for NPSOL (Version 4.0): a Fortran package for nonlinear programming. Report SOL 86-2, Department of Operations Research, Stanford University, Stanford, CA, 1986.
[10] P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. Maintaining LU factors of a general sparse matrix. Linear Algebra and its Applications, 88/89:239–270, 1987.
[11] P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. Some theoretical properties of an augmented Lagrangian merit function. In P. M. Pardalos, editor, Advances in Optimization and Parallel Computing. North-Holland, 1992.
[12] P. E. Gill and E. Wong. Numerical results for SQIC: Software for large-scale quadratic programming. Report CCoM 14-4, Center for Computational Mathematics, University of California, San Diego, La Jolla, CA, 2014.
[13] P. E. Gill and E. Wong. User's guide for SQIC: Software for large-scale quadratic programming. Report CCoM 14-2, Center for Computational Mathematics, University of California, San Diego, La Jolla, CA, 2014.
[14] P. E. Gill and E. Wong. Methods for convex and general quadratic programming. Math. Prog. Comp., 7:71–112, 2015.
[15] H. M. Huynh. A Large-scale Quadratic Programming Solver Based On Block-LU Updates of the KKT System. PhD thesis, SCCM, Stanford University, Sep 2008.
[16] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Math. Program., 45:503–528, 1989.
[17] C. M. Maes.
A Regularized Active-Set Method for Sparse Convex Quadratic Programming. PhD thesis, ICME, Stanford University, Nov 2010.
[18] PARDISO parallel sparse solver.
[19] UMFPACK solver for sparse Ax = b.
[20] E. Wong. Active-Set Methods for Quadratic Programming. PhD thesis, Department of Mathematics, University of California, San Diego, June 2011.
More informationNonlinear Optimization Solvers
ICE05 Argonne, July 19, 2005 Nonlinear Optimization Solvers Sven Leyffer leyffer@mcs.anl.gov Mathematics & Computer Science Division, Argonne National Laboratory Nonlinear Optimization Methods Optimization
More informationA STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE
A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30, 2014 Abstract Regularized
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME MS&E 318 (CME 338 Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2018 Notes 7: LUSOL: a Basis Factorization
More informationTrust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization
Trust-Region SQP Methods with Inexact Linear System Solves for Large-Scale Optimization Denis Ridzal Department of Computational and Applied Mathematics Rice University, Houston, Texas dridzal@caam.rice.edu
More informationREGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS
REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS Philip E. Gill Daniel P. Robinson UCSD Department of Mathematics Technical Report NA-11-02 October 2011 Abstract We present the formulation and analysis
More informationc 2005 Society for Industrial and Applied Mathematics
SIAM J. OPTIM. Vol. 15, No. 3, pp. 863 897 c 2005 Society for Industrial and Applied Mathematics A GLOBALLY CONVERGENT LINEARLY CONSTRAINED LAGRANGIAN METHOD FOR NONLINEAR OPTIMIZATION MICHAEL P. FRIEDLANDER
More informationA STABILIZED SQP METHOD: GLOBAL CONVERGENCE
A STABILIZED SQP METHOD: GLOBAL CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-13-4 Revised July 18, 2014, June 23,
More informationPreconditioned conjugate gradient algorithms with column scaling
Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-11, 28 Preconditioned conjugate gradient algorithms with column scaling R. Pytla Institute of Automatic Control and
More informationA review of sparsity vs stability in LU updates
A review of sparsity vs stability in LU updates Michael Saunders Dept of Management Science and Engineering (MS&E) Systems Optimization Laboratory (SOL) Institute for Computational Mathematics and Engineering
More informationAlgorithms for constrained local optimization
Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization
E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained
More informationISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints
ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained
More informationarxiv: v1 [math.oc] 10 Apr 2017
A Method to Guarantee Local Convergence for Sequential Quadratic Programming with Poor Hessian Approximation Tuan T. Nguyen, Mircea Lazar and Hans Butler arxiv:1704.03064v1 math.oc] 10 Apr 2017 Abstract
More informationAn SQP Method for the Optimal Control of Large-Scale Dynamical Systems
An SQP Method for the Optimal Control of Large-Scale Dynamical Systems Philip E. Gill a, Laurent O. Jay b, Michael W. Leonard c, Linda R. Petzold d and Vivek Sharma d a Department of Mathematics, University
More informationLinear Programming: Simplex
Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationA GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE
A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30,
More informationLINEAR AND NONLINEAR PROGRAMMING
LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico
More informationREPORTS IN INFORMATICS
REPORTS IN INFORMATICS ISSN 0333-3590 A class of Methods Combining L-BFGS and Truncated Newton Lennart Frimannslund Trond Steihaug REPORT NO 319 April 2006 Department of Informatics UNIVERSITY OF BERGEN
More informationLecture 18: Optimization Programming
Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming
More informationA Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity
A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity Mohammadreza Samadi, Lehigh University joint work with Frank E. Curtis (stand-in presenter), Lehigh University
More informationSome Theoretical Properties of an Augmented Lagrangian Merit Function
Some Theoretical Properties of an Augmented Lagrangian Merit Function Philip E. GILL Walter MURRAY Michael A. SAUNDERS Margaret H. WRIGHT Technical Report SOL 86-6R September 1986 Abstract Sequential quadratic
More informationPenalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way.
AMSC 607 / CMSC 878o Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 3: Penalty and Barrier Methods Dianne P. O Leary c 2008 Reference: N&S Chapter 16 Penalty and Barrier
More informationA New Low Rank Quasi-Newton Update Scheme for Nonlinear Programming
A New Low Rank Quasi-Newton Update Scheme for Nonlinear Programming Roger Fletcher Numerical Analysis Report NA/223, August 2005 Abstract A new quasi-newton scheme for updating a low rank positive semi-definite
More informationA GLOBALLY CONVERGENT STABILIZED SQP METHOD
A GLOBALLY CONVERGENT STABILIZED SQP METHOD Philip E. Gill Daniel P. Robinson July 6, 2013 Abstract Sequential quadratic programming SQP methods are a popular class of methods for nonlinearly constrained
More informationNUMERICAL OPTIMIZATION. J. Ch. Gilbert
NUMERICAL OPTIMIZATION J. Ch. Gilbert Numerical optimization (past) The discipline deals with the classical smooth (nonconvex) problem min {f(x) : c E (x) = 0, c I (x) 0}. Applications: variable added
More informationNumerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09
Numerical Optimization 1 Working Horse in Computer Vision Variational Methods Shape Analysis Machine Learning Markov Random Fields Geometry Common denominator: optimization problems 2 Overview of Methods
More informationAn Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84
An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84 Introduction Almost all numerical methods for solving PDEs will at some point be reduced to solving A
More informationA New Penalty-SQP Method
Background and Motivation Illustration of Numerical Results Final Remarks Frank E. Curtis Informs Annual Meeting, October 2008 Background and Motivation Illustration of Numerical Results Final Remarks
More informationA SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION
A SHIFTED RIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OTIMIZATION hilip E. Gill Vyacheslav Kungurtsev Daniel. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-18-1 February 1, 2018
More informationSatisfying Flux Balance and Mass-Action Kinetics in a Network of Biochemical Reactions
Satisfying Flux Balance and Mass-Action Kinetics in a Network of Biochemical Reactions Speakers Senior Personnel Collaborators Michael Saunders and Ines Thiele Ronan Fleming, Bernhard Palsson, Yinyu Ye
More informationLarge-Scale Nonlinear Optimization with Inexact Step Computations
Large-Scale Nonlinear Optimization with Inexact Step Computations Andreas Wächter IBM T.J. Watson Research Center Yorktown Heights, New York andreasw@us.ibm.com IPAM Workshop on Numerical Methods for Continuous
More informationOn the Performance of SQP Methods for Nonlinear Optimization
On the Performance of SQP Methods for Nonlinear Optimization Philip E. Gill Michael A. Saunders Elizabeth Wong UCSD Center for Computational Mathematics Technical Report 15-1 January 2015 Abstract This
More informationImplicitly and Explicitly Constrained Optimization Problems for Training of Recurrent Neural Networks
Implicitly and Explicitly Constrained Optimization Problems for Training of Recurrent Neural Networks Carl-Johan Thore Linköping University - Division of Mechanics 581 83 Linköping - Sweden Abstract. Training
More informationGraph Coarsening Method for KKT Matrices Arising in Orthogonal Collocation Methods for Optimal Control Problems
Graph Coarsening Method for KKT Matrices Arising in Orthogonal Collocation Methods for Optimal Control Problems Begüm Şenses Anil V. Rao University of Florida Gainesville, FL 32611-6250 Timothy A. Davis
More informationLecture 14: October 17
1-725/36-725: Convex Optimization Fall 218 Lecture 14: October 17 Lecturer: Lecturer: Ryan Tibshirani Scribes: Pengsheng Guo, Xian Zhou Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:
More informationBindel, Spring 2017 Numerical Analysis (CS 4220) Notes for So far, we have considered unconstrained optimization problems.
Consider constraints Notes for 2017-04-24 So far, we have considered unconstrained optimization problems. The constrained problem is minimize φ(x) s.t. x Ω where Ω R n. We usually define x in terms of
More informationNOTES ON FIRST-ORDER METHODS FOR MINIMIZING SMOOTH FUNCTIONS. 1. Introduction. We consider first-order methods for smooth, unconstrained
NOTES ON FIRST-ORDER METHODS FOR MINIMIZING SMOOTH FUNCTIONS 1. Introduction. We consider first-order methods for smooth, unconstrained optimization: (1.1) minimize f(x), x R n where f : R n R. We assume
More informationSeminal papers in nonlinear optimization
Seminal papers in nonlinear optimization Nick Gould, CSED, RAL, Chilton, OX11 0QX, England (n.gould@rl.ac.uk) December 7, 2006 The following papers are classics in the field. Although many of them cover
More informationThe Squared Slacks Transformation in Nonlinear Programming
Technical Report No. n + P. Armand D. Orban The Squared Slacks Transformation in Nonlinear Programming August 29, 2007 Abstract. We recall the use of squared slacks used to transform inequality constraints
More informationOptimization Methods
Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available
More informationChapter 8 Cholesky-based Methods for Sparse Least Squares: The Benefits of Regularization
In L. Adams and J. L. Nazareth eds., Linear and Nonlinear Conjugate Gradient-Related Methods, SIAM, Philadelphia, 92 100 1996. Chapter 8 Cholesky-based Methods for Sparse Least Squares: The Benefits of
More informationProjection methods to solve SDP
Projection methods to solve SDP Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Oberwolfach Seminar, May 2010 p.1/32 Overview Augmented Primal-Dual Method
More informationAn External Active-Set Strategy for Solving Optimal Control Problems
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 54, NO. 5, MAY 009 119 An External Active-Set Strategy for Solving Optimal Control Problems Hoam Chung, Member, IEEE, Elijah Polak, Life Fellow, IEEE, Shankar
More informationALGORITHM XXX: SC-SR1: MATLAB SOFTWARE FOR SOLVING SHAPE-CHANGING L-SR1 TRUST-REGION SUBPROBLEMS
ALGORITHM XXX: SC-SR1: MATLAB SOFTWARE FOR SOLVING SHAPE-CHANGING L-SR1 TRUST-REGION SUBPROBLEMS JOHANNES BRUST, OLEG BURDAKOV, JENNIFER B. ERWAY, ROUMMEL F. MARCIA, AND YA-XIANG YUAN Abstract. We present
More informationLinear Solvers. Andrew Hazel
Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction
More informationProximal Newton Method. Ryan Tibshirani Convex Optimization /36-725
Proximal Newton Method Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: primal-dual interior-point method Given the problem min x subject to f(x) h i (x) 0, i = 1,... m Ax = b where f, h
More information2. Quasi-Newton methods
L. Vandenberghe EE236C (Spring 2016) 2. Quasi-Newton methods variable metric methods quasi-newton methods BFGS update limited-memory quasi-newton methods 2-1 Newton method for unconstrained minimization
More information