Survey of NLP Algorithms. L. T. Biegler, Chemical Engineering Department, Carnegie Mellon University, Pittsburgh, PA

1 Survey of NLP Algorithms L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA

2 NLP Algorithms - Outline
- Problem and Goals
- KKT Conditions and Variable Classification
- Handling Basic Variables
- Handling Nonbasic Variables
- Handling Superbasic Variables
- Ensuring Global Convergence
- Algorithms and Software
- Algorithmic Comparison
- Summary and Future Work

3 Constrained Optimization (Nonlinear Programming)
Problem: Min_x f(x) s.t. g(x) ≤ 0, h(x) = 0
where: f(x) - scalar objective function; x - n-vector of variables; g(x) - inequality constraints, m-vector; h(x) - equality constraints, m_eq-vector.
Sufficient condition for a unique optimum: f(x) must be convex, and the feasible region must be convex, i.e. the g(x) are all convex and the h(x) are all linear.
Except in special cases, there is no guarantee that a local optimum is global if the sufficient conditions are violated.
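As a concrete instance of this problem statement, the following minimal sketch solves a small NLP of this form with SciPy's SLSQP routine (an SQP method discussed later in these slides); the particular f, g and h are illustrative choices, not taken from the slides.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2        # scalar objective f(x)
g = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])      # inequality constraints, g(x) <= 0
h = lambda x: np.array([x[0] + x[1] - 1.0])            # equality constraints, h(x) = 0

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: -g(x)},  # SciPy expects fun(x) >= 0
                            {"type": "eq",   "fun": h}])
print(res.x, res.fun)                                  # local solution and objective value
```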

4 Variable Classification for NLPs
Min f(x) s.t. g(x) + s = 0 (add slack variables s ≥ 0), h(x) = 0
⇒ Min f(z) s.t. c(z) = 0, a ≤ z ≤ b
Partition the variables into:
z_B - dependent or basic variables
z_N - nonbasic variables, fixed at a bound
z_S - independent or superbasic variables
Analogy to linear programming. Superbasics are required only if the problem is nonlinear.

5 Characterization of Constrained Optima (two primal variables, two slacks)
[Figure: contours of convex objective functions with linear constraints, four cases]
- Linear program: two nonbasic variables
- Linear program (alternate optima)
- Convex objective, optimum on a constraint: one nonbasic variable, two basic, one superbasic
- Convex objective, interior optimum: no nonbasic variables, two basic, two superbasic

6 What conditions characterize an optimal solution?
Unconstrained local minimum - necessary conditions:
∇f(x*) = 0
pᵀ∇²f(x*)p ≥ 0 for all p ∈ Rⁿ (positive semi-definite)
Unconstrained local minimum - sufficient conditions:
∇f(x*) = 0
pᵀ∇²f(x*)p > 0 for all p ≠ 0 in Rⁿ (positive definite)
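A quick numeric check of these conditions for an illustrative strictly convex quadratic (not from the slides): the gradient vanishes at x* and all Hessian eigenvalues are positive, so the sufficient conditions hold.

```python
import numpy as np

# f(x) = (x1 - 1)^2 + 10*(x2 + 2)^2, an illustrative strictly convex function
grad = lambda x: np.array([2.0*(x[0] - 1.0), 20.0*(x[1] + 2.0)])
hess = lambda x: np.diag([2.0, 20.0])

x_star = np.array([1.0, -2.0])
print(np.linalg.norm(grad(x_star)))        # ~0: first-order necessary condition
print(np.linalg.eigvalsh(hess(x_star)))    # all > 0: second-order sufficient condition
```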

7 Optimal solution for inequality constrained problem
Min f(x) s.t. g(x) ≤ 0
Analogy: ball rolling down a valley, pinned by a fence
Note: balance of forces (∇f, ∇g₁)

8 Optimal solution for general constrained problem
Problem: Min f(x) s.t. g(x) ≤ 0, h(x) = 0
Analogy: ball rolling on a rail, pinned by fences
Balance of forces: ∇f, ∇g₁, ∇h

9 Optimality conditions for local optimum
Necessary first-order Karush-Kuhn-Tucker conditions:
∇L(x*, u, v) = ∇f(x*) + ∇g(x*)u + ∇h(x*)v = 0 (balance of forces)
u ≥ 0 (inequalities act in only one direction)
g(x*) ≤ 0, h(x*) = 0 (feasibility)
u_j g_j(x*) = 0 (complementarity: either g_j(x*) = 0 or u_j = 0)
u, v are "weights" for the "forces," known as KKT multipliers, shadow prices, or dual variables.
To guarantee that a local NLP solution satisfies the KKT conditions, a constraint qualification is required. E.g., the Linear Independence Constraint Qualification (LICQ) requires the active constraint gradients, [∇g_A(x*) ∇h(x*)], to be linearly independent. Also, under LICQ, the KKT multipliers are uniquely determined.
Necessary (sufficient) second-order conditions - positive curvature in "constraint" directions:
pᵀ∇ₓₓL(x*)p ≥ 0 (pᵀ∇ₓₓL(x*)p > 0), where p are the constrained directions: ∇g_A(x*)ᵀp = 0, ∇h(x*)ᵀp = 0.
This is the space of the superbasic variables!
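A hedged sketch that checks these KKT conditions numerically for the toy problem min x1 + x2 s.t. x1² + x2² = 2 (no inequalities active), whose solution is x* = (-1, -1) with multiplier v* = 1/2; the problem is chosen for illustration only.

```python
import numpy as np

x_star = np.array([-1.0, -1.0])
v_star = 0.5
grad_f = np.array([1.0, 1.0])              # gradient of f(x) = x1 + x2
grad_h = 2.0 * x_star                      # gradient of h(x) = x1^2 + x2^2 - 2
print(grad_f + v_star * grad_h)            # stationarity ("balance of forces"): ~0
print(x_star @ x_star - 2.0)               # feasibility h(x*): ~0
```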

10 Optimality conditions for local optimum
Min f(z) s.t. c(z) = 0, a ≤ z ≤ b
∇L(z*, λ, ν) = ∇f(z*) + ∇c(z*)λ − ν_a + ν_b = 0 (balance of forces)
a ≤ z* ≤ b, c(z*) = 0 (feasibility)
(z* − a) ⊥ ν_a, (b − z*) ⊥ ν_b (complementarity)
In terms of partitioned variables (when known):
Basic (n_b = m): ∇_B L(z*, λ, ν) = ∇_B f(z*) + ∇_B c(z*)λ = 0, c(z*) = 0 → square system in z_B and λ
Nonbasic (n_n): ∇_N L(z*, λ, ν) = ∇_N f(z*) + ∇_N c(z*)λ − ν_bnd = 0, z* = bnd → square system in z_N and ν
Superbasic (n_s): ∇_S L(z*, λ, ν) = ∇_S f(z*) + ∇_S c(z*)λ = 0 → square system in z_S
n = n_n + n_b + n_s

11 Handling Basic Variables
∇_B L(z*, λ, ν) = ∇_B f(z*) + ∇_B c(z*)λ = 0, c(z*) = 0
Full space: linearization and simultaneous solution of c(z) = 0 with the stationarity conditions, embedded within a Newton method for the KKT conditions (SQP and barrier methods).
Reduced space (two flavors):
- elimination of the nonlinear equations and z_B at every point (feasible path GRG methods)
- linearization, then elimination of the linear equations and z_B (MINOS and rSQP)

12 Handling Nonbasic Variables
∇_N L(z*, λ, ν) = ∇_N f(z*) + ∇_N c(z*)λ − ν_bnd = 0, z* = bnd
Barrier approach - primal and primal-dual approaches (IPOPT, KNITRO, LOQO)
Active set (two flavors):
- QP or LP subproblems - primal-dual pivoting (SQP)
- gradient projection in primal space with bound constraints (GRG)

13 Gradient Projection Method (nonbasic variable treatment)
Define the projection of an arbitrary point x onto the box feasible region a ≤ x ≤ b. The i-th component is given by
P[x]_i = a_i if x_i < a_i;  x_i if a_i ≤ x_i ≤ b_i;  b_i if x_i > b_i.
The piecewise-linear path x(t), starting at the reference point x⁰ and obtained by projecting the steepest descent direction at x⁰ onto the box region, is given by
x(t) = P[x⁰ − t ∇f(x⁰)]
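A minimal sketch of the box projection and the projected steepest-descent path x(t); the bounds a, b and the quadratic objective below are illustrative.

```python
import numpy as np

a = np.array([0.0, 0.0])                    # lower bounds
b = np.array([1.0, 1.0])                    # upper bounds
project = lambda x: np.minimum(np.maximum(x, a), b)   # component-wise projection onto the box

grad = lambda x: x - np.array([2.0, -1.0])  # gradient of f(x) = 0.5*||x - (2, -1)||^2

x0 = np.array([0.5, 0.5])
for t in [0.0, 0.25, 0.5, 1.0, 2.0]:
    print(t, project(x0 - t * grad(x0)))    # points along the piecewise-linear path x(t)
```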

14 Handling Superbasic Variables
∇_S L(z*, λ, ν) = ∇_S f(z*) + ∇_S c(z*)λ = 0
Second derivatives or quasi-Newton?
BFGS formula:
B^{k+1} = B^k + (y yᵀ)/(sᵀy) − (B^k s sᵀ B^k)/(sᵀ B^k s)
s = x^{k+1} − x^k, y = ∇L(x^{k+1}, u^{k+1}, v^{k+1}) − ∇L(x^k, u^{k+1}, v^{k+1})
- second derivatives are approximated by the change in gradients
- a positive definite B^k ensures a unique search direction and the descent property
Exact Hessian requires far fewer iterations (no build-up needed in the space of the superbasics)
- full space: exploit sparsity (efficient large-scale solvers needed)
- reduced space: update or project (expensive dense algebra if n_s is large)
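A minimal numeric sketch of the BFGS update above; s and y are illustrative vectors with sᵀy > 0, and the checks confirm the secant condition B^{k+1}s = y and that positive definiteness is preserved.

```python
import numpy as np

def bfgs_update(B, s, y):
    """B_{k+1} = B_k + y y'/(s'y) - (B_k s)(B_k s)'/(s' B_k s)."""
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

B = np.eye(2)                              # initial positive definite approximation
s = np.array([1.0, -0.5])                  # step s = x_{k+1} - x_k (illustrative)
y = np.array([2.0,  1.0])                  # change in Lagrangian gradient (illustrative, s'y > 0)
B_new = bfgs_update(B, s, y)
print(np.allclose(B_new @ s, y))           # secant condition holds
print(np.linalg.eigvalsh(B_new))           # eigenvalues > 0: descent direction guaranteed
```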

15 Algorithms for Constrained Problems
Motivation: build on unconstrained methods for the superbasics wherever possible.
Classification of methods:
- Reduced gradient methods (with restoration): GRG2, CONOPT
- Reduced gradient methods (without restoration): MINOS
- Successive quadratic programming: generic implementations
- Barrier functions: popular in the 1970s, but fell into disfavor. Barrier methods have been developed recently and are again popular.
- Successive linear programming: only useful for "mostly linear" problems; cannot handle superbasic variables in a systematic way.
We will concentrate on algorithms for the first four classes.

16 Reduced Gradient Methods
- Nonlinear elimination of the basic variables (and multipliers λ)
- Use gradient projection to update and remove the nonbasic variables
- Solve the problem in the reduced space of the superbasic variables using the concept of reduced gradients:
df/dz_S = ∇_S f + (dz_B/dz_S)ᵀ ∇_B f
Because c(z) = 0, we have (∇_S c)ᵀ dz_S + (∇_B c)ᵀ dz_B = 0, so
dz_B/dz_S = −[(∇_B c)ᵀ]⁻¹ (∇_S c)ᵀ
This leads to the reduced gradient for the superbasics:
df/dz_S = ∇_S f − ∇_S c [∇_B c]⁻¹ ∇_B f

17 Example of Reduced Gradient
Min x₁² − 2x₂  s.t. 3x₁ + 4x₂ = 24
∇cᵀ = [3 4], ∇f = [2x₁ −2]ᵀ
Let z_S = x₁, z_B = x₂:
df/dz_S = ∇_S f − ∇_S c [∇_B c]⁻¹ ∇_B f = 2x₁ − 3 [1/4] (−2) = 2x₁ + 3/2
If ∇cᵀ is (m × n), then ∇_S cᵀ is m × (n − m) and ∇_B cᵀ is (m × m).
(df/dz_S) is the change in f along the constraint direction per unit change in z_S.
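A numeric check of this example (as reconstructed above): the analytic reduced gradient 2x₁ + 3/2 matches a finite-difference derivative of f along the constraint, where the basic variable is eliminated as x₂ = (24 − 3x₁)/4.

```python
import numpy as np

def reduced_grad(x1):
    grad_f = np.array([2.0*x1, -2.0])      # [df/dx1, df/dx2] for f = x1^2 - 2*x2
    cS, cB = 3.0, 4.0                      # constraint gradient split into superbasic/basic parts
    return grad_f[0] - cS * (1.0/cB) * grad_f[1]       # = 2*x1 + 3/2

f_on_c = lambda x1: x1**2 - 2.0*(24.0 - 3.0*x1)/4.0    # f with x2 eliminated via the constraint
eps, x1 = 1e-6, 1.0
print(reduced_grad(x1))
print((f_on_c(x1 + eps) - f_on_c(x1 - eps)) / (2.0*eps))   # finite-difference check
```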

18 Sketch of GRG Algorithm
1. Initialize the problem and obtain a feasible point z⁰.
2. At feasible point z^k, partition the variables z into z_N, z_B, z_S.
3. Remove the nonbasics.
4. Calculate the reduced gradient (df/dz_S).
5. Evaluate a search direction for z_S, d = −B⁻¹(df/dz_S).
6. Perform a line search:
   - Find α ∈ (0, 1] with z_S := z_S^k + α d
   - Solve for c(z_S^k + α d, z_B, z_N) = 0
   - If f(z_S^k + α d, z_B, z_N) < f(z_S^k, z_B, z_N), set z_S^{k+1} = z_S^k + α d, k := k + 1.
7. If ||(df/dz_S)|| < ε, stop. Else, go to 2.
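A hedged sketch of this iteration on the slide-17 example; because the constraint is linear, the restoration step c(z) = 0 is solved exactly by eliminating the basic variable, and B is taken as the identity (steepest descent in the reduced space).

```python
import numpy as np

f = lambda x: x[0]**2 - 2.0*x[1]
restore = lambda x1: np.array([x1, (24.0 - 3.0*x1)/4.0])    # solve c(z) = 0 for the basic variable
red_grad = lambda x1: 2.0*x1 + 1.5                          # reduced gradient from slide 17

x = restore(0.0)
for k in range(50):
    g = red_grad(x[0])
    if abs(g) < 1e-8:                                       # step 7: reduced gradient ~ 0
        break
    d, alpha = -g, 1.0                                      # step 5 with B = I
    while f(restore(x[0] + alpha*d)) >= f(x):               # step 6: back-tracking line search
        alpha *= 0.5                                        #   with exact restoration
    x = restore(x[0] + alpha*d)
print(x)                                                    # -> x1 = -0.75, x2 on the constraint
```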

19 GRG Algorithm Properties
1. GRG2 has been implemented on PCs as GINO and is very reliable and robust. It is also the optimization solver in MS EXCEL.
2. CONOPT is implemented in GAMS, AIMMS and AMPL.
3. GRG2 uses quasi-Newton for small problems but can switch to conjugate gradients if the problem gets large.
4. CONOPT does the same but reverts to an SQP method where it then uses exact second derivatives. This is a local feature.
5. Convergence of c(z_S, z_B, z_N) = 0 can get very expensive because inner equation solutions are required.
6. Safeguards can be added so that the restoration step can be dropped and efficiency increases.
7. The line search is based on improvement of f(z). No global convergence proof.

20 Reduced Gradient Method without Restoration: Linearize, then Eliminate Basics (MINOS/Augmented)
Motivation: efficient algorithms are available that solve linearly constrained optimization problems (MINOS):
Min f(x) s.t. Ax ≤ b, Cx = d
Extend to nonlinear problems through successive linearization.
Develop major iterations (linearizations) and minor iterations (GRG solutions).
Strategy (Robinson, Murtagh & Saunders):
1. Partition variables into basic, nonbasic and superbasic variables.
2. Linearize the active constraints at z^k: D^k z = r^k
3. Let ψ^k = f(z) + (v^k)ᵀ c(z) + β (c(z)ᵀ c(z)) (augmented Lagrange)
4. Solve the linearly constrained problem: Min ψ^k(z) s.t. Dz = r, a ≤ z ≤ b, using reduced gradients to get z^{k+1}
5. Set k = k + 1, go to 2.
The algorithm terminates when there is no movement between steps 3) and 4).
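A minimal sketch of the augmented Lagrange function ψ_k used in step 3, for generic f, c and multiplier estimate v_k (all names and the sample data are illustrative).

```python
import numpy as np

def psi(z, v_k, beta, f, c):
    """Augmented Lagrange function: f(z) + v_k' c(z) + beta * c(z)' c(z)."""
    cz = c(z)
    return f(z) + v_k @ cz + beta * (cz @ cz)

# illustrative data: quadratic objective, one nonlinear equality constraint
f = lambda z: z[0]**2 + z[1]**2
c = lambda z: np.array([z[0]*z[1] - 1.0])
print(psi(np.array([2.0, 1.0]), np.array([0.5]), 10.0, f, c))
```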

21 MINOS/Augmented Notes
1. MINOS has been implemented very efficiently to take care of linearity. It becomes the LP simplex method if the problem is totally linear. Also, very efficient matrix routines.
2. No restoration takes place; nonlinear constraints are reflected in ψ^k(z) during step 3). MINOS is more efficient than GRG.
3. Major iterations (steps 3) - 4)) converge at a quadratic rate.
4. Reduced gradient methods are complicated, monolithic codes: hard to integrate efficiently into modeling software.
5. No globalization and no global convergence proof.

22 Successive Quadratic Programming (SQP)
Motivation: take the KKT conditions, expand in a Taylor series about the current point, and take a Newton step (QP) to determine the next point.
Derivation - KKT conditions:
∇ₓL(x*, u*, v*) = ∇f(x*) + ∇g_A(x*)u* + ∇h(x*)v* = 0
h(x*) = 0
g_A(x*) = 0, where g_A are the active constraints.
Newton step:
[∇ₓₓL ∇g_A ∇h; ∇g_Aᵀ 0 0; ∇hᵀ 0 0] [Δx; Δu; Δv] = −[∇ₓL(x^k, u^k, v^k); g_A(x^k); h(x^k)]
Requirements:
- ∇ₓₓL must be calculated and should be regular
- correct active set g_A (need to know the nonbasic variables!)
- good estimates of u^k, v^k

23 SQP Chronology
1. Wilson (1963) - the active set (nonbasics) can be determined by solving the QP:
Min_d ∇f(x^k)ᵀd + 1/2 dᵀ∇ₓₓL(x^k, u^k, v^k)d
s.t. g(x^k) + ∇g(x^k)ᵀd ≤ 0, h(x^k) + ∇h(x^k)ᵀd = 0
2. Han (1976, 1977), Powell (1977, 1978)
- approximate ∇ₓₓL using a positive definite quasi-Newton update (BFGS)
- use a line search to converge from poor starting points.
Notes:
- Similar methods were derived using penalty (not Lagrange) functions.
- The method converges quickly; very few function evaluations.
- Not well suited to large problems (a full space update is used). For n > 100, say, use reduced space methods (e.g. MINOS).

24 Elements of SQP - Search Directions
How do we obtain search directions? Form a QP and let the QP determine constraint activity. At each iteration k, solve:
Min_d ∇f(x^k)ᵀd + 1/2 dᵀB^k d
s.t. g(x^k) + ∇g(x^k)ᵀd ≤ 0, h(x^k) + ∇h(x^k)ᵀd = 0
Convergence from poor starting points: as with Newton's method, choose α (stepsize) to ensure progress toward the optimum: x^{k+1} = x^k + αd. α is chosen by making sure a merit function is decreased at each iteration.
Exact penalty function: ψ(x) = f(x) + μ [Σⱼ max(0, gⱼ(x)) + Σⱼ |hⱼ(x)|], μ > maxⱼ{|uⱼ|, |vⱼ|}
Augmented Lagrange function: ψ(x) = f(x) + uᵀg(x) + vᵀh(x) + η/2 {Σⱼ (hⱼ(x))² + Σⱼ max(0, gⱼ(x))²}
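A minimal sketch of the exact (ℓ1) penalty merit function above, evaluated at a feasible and an infeasible point of an illustrative problem.

```python
import numpy as np

def merit(x, mu, f, g, h):
    """psi(x) = f(x) + mu * ( sum_j max(0, g_j(x)) + sum_j |h_j(x)| )."""
    return f(x) + mu * (np.sum(np.maximum(0.0, g(x))) + np.sum(np.abs(h(x))))

f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
g = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])     # g(x) <= 0
h = lambda x: np.array([x[0] + x[1] - 1.0])           # h(x) = 0

print(merit(np.array([0.0, 1.0]), 10.0, f, g, h))     # feasible point: psi = f
print(merit(np.array([2.0, 2.0]), 10.0, f, g, h))     # infeasible point: violation penalized
```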

25 Newton-Like Properties for SQP
Fast local convergence:
- B^k = ∇ₓₓL: quadratic
- ∇ₓₓL is p.d. and B^k is a quasi-Newton update: 1-step superlinear
- B^k is a quasi-Newton update, ∇ₓₓL not p.d.: 2-step superlinear
Enforce global convergence:
- Ensure decrease of a merit function by taking α ≤ 1
- Trust region adaptations provide a stronger guarantee of global convergence - but are harder to implement.

26 Basic SQP Algorithm
0. Guess x⁰, set B⁰ = I (identity). Evaluate f(x⁰), g(x⁰) and h(x⁰).
1. At x^k, evaluate ∇f(x^k), ∇g(x^k), ∇h(x^k).
2. If k > 0, update B^k using the BFGS formula.
3. Solve: Min_d ∇f(x^k)ᵀd + 1/2 dᵀB^k d s.t. g(x^k) + ∇g(x^k)ᵀd ≤ 0, h(x^k) + ∇h(x^k)ᵀd = 0.
   If the KKT error is less than tolerance: ||∇L(x*)|| ≤ ε, ||h(x*)|| ≤ ε, ||g(x*)₊|| ≤ ε, STOP; else go to 4.
4. Find α so that 0 < α ≤ 1 and ψ(x^k + αd) < ψ(x^k) sufficiently (each trial requires evaluation of f(x), g(x) and h(x)).
5. x^{k+1} = x^k + αd. Set k = k + 1. Go to 1.
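A hedged sketch of this loop for an equality-constrained problem, using the exact Lagrangian Hessian in place of the BFGS matrix B^k and full steps α = 1 (no merit-function line search, which suffices for this well-behaved starting point); the problem is the same toy problem used in the KKT check above, min x1 + x2 s.t. x1² + x2² = 2.

```python
import numpy as np

f_grad = lambda x: np.array([1.0, 1.0])
h      = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])
A      = lambda x: np.array([[2.0*x[0], 2.0*x[1]]])           # constraint Jacobian (row)
W      = lambda x, v: 2.0 * v[0] * np.eye(2)                  # Lagrangian Hessian

x, v = np.array([-1.5, -0.5]), np.array([1.0])
for k in range(20):
    # the QP step is a Newton step on the KKT conditions: [W A'; A 0][d; v+] = -[grad f; h]
    K = np.block([[W(x, v), A(x).T], [A(x), np.zeros((1, 1))]])
    sol = np.linalg.solve(K, -np.concatenate([f_grad(x), h(x)]))
    d, v = sol[:2], sol[2:]
    x = x + d                                                 # full step, alpha = 1
    if np.linalg.norm(d) < 1e-10:
        break
print(x, v)        # -> x* = (-1, -1), v* = 0.5, matching the KKT check earlier
```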

27 Large-Scale SQP
Min f(z) s.t. c(z) = 0, z_L ≤ z ≤ z_U
⇒ QP subproblem: Min ∇f(z^k)ᵀd + 1/2 dᵀW^k d s.t. c(z^k) + (A^k)ᵀd = 0, z_L ≤ z^k + d ≤ z_U
An active set strategy is (usually) applied to the QP problems (an interior point method could also be used) to handle the nonbasic variables.
Few superbasics (up to ~100): apply reduced space (linearized) elimination of the basic variables.
Many superbasics (more than ~100): apply a full-space method.

28 Few degrees of freedom => reduced space SQP (rSQP)
- Take advantage of the sparsity of A = ∇c(x)
- project W into the space of the active (or equality) constraints
- curvature (second derivative) information only needed in the space of the degrees of freedom
- second derivatives can be applied or approximated with positive curvature (e.g., BFGS)
- use dual space QP solvers
+ easy to implement with existing sparse solvers, QP methods and line search techniques
+ exploits the 'natural assignment' of dependent and decision variables (some decomposition steps are 'free')
+ does not require second derivatives
- reduced space matrices are dense
- may be dependent on the variable partitioning
- can be very expensive for many degrees of freedom
- can be expensive if there are many QP bounds

29 Reduced space SQP (rSQP): Range and Null Space Decomposition
Assume the nonbasics are removed; the QP problem with n variables and m constraints becomes:
[W^k A^k; (A^k)ᵀ 0] [d; λ₊] = −[∇f(x^k); c(x^k)]
Define a reduced space basis Z^k ∈ R^{n×(n−m)} with (A^k)ᵀZ^k = 0.
Define a basis for the remaining space Y^k ∈ R^{n×m}, so that [Y^k Z^k] ∈ R^{n×n} is nonsingular.
Let d = Y^k d_Y + Z^k d_Z and substitute to rewrite the system in the [Y^k Z^k] basis.

30 Reduced space SQP (rSQP): Range and Null Space Decomposition (cont.)
In the [Y^k Z^k] basis the KKT system becomes:
[YᵀWY YᵀWZ YᵀA; ZᵀWY ZᵀWZ 0; AᵀY 0 0] [d_Y; d_Z; λ₊] = −[Yᵀ∇f(x^k); Zᵀ∇f(x^k); c(x^k)]
- (AᵀY) d_Y = −c(x^k) is square; d_Y is determined from the bottom row.
- Cancel YᵀWY and YᵀWZ (unimportant as d_Z, d_Y → 0).
- (YᵀA) λ = −Yᵀ∇f(x^k): λ can be determined by a first-order estimate.
- Calculate or approximate w = ZᵀWY d_Y, then solve Zᵀ W Z d_Z = −Zᵀ∇f(x^k) − w.
- Compute the total step: d = Y d_Y + Z d_Z.
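A hedged numeric sketch of one decomposed rSQP step, with small random (fixed-seed) W, A and gradients standing in for the problem data; Z is taken from SciPy's null_space and the simple choice Y = A is used for the range-space basis.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, m))                 # A_k = grad c(x_k), assumed full column rank
W = np.eye(n) + 0.1*rng.standard_normal((n, n)); W = 0.5*(W + W.T)   # symmetric Hessian stand-in
g = rng.standard_normal(n)                      # grad f(x_k)
c = rng.standard_normal(m)                      # c(x_k)

Z = null_space(A.T)                             # (A_k)' Z = 0, Z is n x (n - m)
Y = A                                           # simple (non-orthogonal) range-space basis

dY  = np.linalg.solve(A.T @ Y, -c)              # bottom row: (A' Y) d_Y = -c
w   = Z.T @ W @ (Y @ dY)                        # coupling term, computed or approximated
dZ  = np.linalg.solve(Z.T @ W @ Z, -Z.T @ g - w)    # reduced Hessian system
d   = Y @ dY + Z @ dZ                           # total step
lam = np.linalg.solve(Y.T @ A, -Y.T @ g)        # first-order multiplier estimate
print(np.linalg.norm(A.T @ d + c))              # linearized constraints satisfied (~0)
```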

31 Barrier Methods for Large-Scale Nonlinear Programming
Original formulation:
min_{x ∈ Rⁿ} f(x) s.t. c(x) = 0, x ≥ 0
(can generalize for a ≤ x ≤ b)
Barrier approach:
min_{x ∈ Rⁿ} φ_μ(x) = f(x) − μ Σᵢ₌₁ⁿ ln xᵢ s.t. c(x) = 0
As μ → 0, x*(μ) → x*. Fiacco and McCormick (1968)
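A one-variable sketch of the barrier idea (illustrative problem, no equality constraints): min (x + 1)² s.t. x ≥ 0 has its solution x* = 0 at the bound, and the barrier minimizer x*(μ) approaches it as μ → 0.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def x_of_mu(mu):
    phi = lambda x: (x + 1.0)**2 - mu*np.log(x)          # barrier function phi_mu(x)
    return minimize_scalar(phi, bounds=(1e-12, 10.0), method="bounded").x

for mu in [1.0, 0.1, 0.01, 1e-4]:
    print(mu, x_of_mu(mu))                               # x*(mu) -> x* = 0
```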

32 Solution of the Barrier Problem
Newton directions (KKT system):
∇f(x) + A(x)λ − v = 0
Xv − μe = 0
c(x) = 0
Solve:
[W A −I; Aᵀ 0 0; V 0 X] [d_x; d_λ; d_v] = −[∇f + Aλ − v; c; Xv − μe]
Reducing the system: d_v = μX⁻¹e − v − Σ d_x, with Σ = X⁻¹V, gives
[W + Σ A; Aᵀ 0] [d_x; d_λ] = −[∇φ_μ; c]
where X = diag(x), eᵀ = [1, 1, 1, ...].
IPOPT code: available through COIN-OR (coin-or.org)

33 Global Convergence of Newton-based Barrier Solvers
Merit function:
- Exact penalty: P(x, η) = f(x) + η ||c(x)||
- Augmented Lagrangian: L*(x, λ, η) = f(x) + λᵀc(x) + η ||c(x)||²
Assess the search direction (e.g., from IPOPT) with a line search: choose stepsize α to give sufficient decrease of the merit function, using a step-to-the-boundary rule with τ ~ 0.99:
for α ∈ (0, α_max]:
x^{k+1} = x^k + α d_x ≥ (1 − τ) x^k > 0
v^{k+1} = v^k + α d_v ≥ (1 − τ) v^k > 0
λ^{k+1} = λ^k + α (λ₊ − λ^k)
How do we balance φ(x) and ||c(x)|| with η? Is this approach globally convergent? Will it still be fast?

34 Line Search Filter Method
- Store (φ, θ) pairs at allowed iterates, where θ(x) = ||c(x)||.
- Allow progress if the trial point is acceptable to the filter with a θ margin.
- If the switching condition α[−∇φᵀd]^a ≥ δ[θ]^b, with a > 2b > 0, is satisfied, only an Armijo line search is required on φ.
- If there is insufficient progress on the stepsize, invoke a restoration phase to reduce θ.
- Global convergence and superlinear local convergence have been proved (with a second order correction).
[Figure: filter envelope in the (θ(x), φ(x)) plane]
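A hedged sketch of the filter acceptance test: a trial point (φ, θ) is acceptable if, against every pair stored in the filter, it improves either the objective or the infeasibility by a small margin; the margin parameter γ and the stored pairs below are illustrative, not the solver's actual settings.

```python
def acceptable(phi_trial, theta_trial, filter_pairs, gamma=1e-5):
    """Trial point must not be dominated (within margins) by any stored (phi, theta) pair."""
    return all(phi_trial <= phi_f - gamma*theta_f or theta_trial <= (1.0 - gamma)*theta_f
               for (phi_f, theta_f) in filter_pairs)

filt = [(10.0, 1.0), (8.0, 2.0)]          # stored (phi, theta) pairs
print(acceptable(9.0, 0.5, filt))         # True: sufficiently better against every entry
print(acceptable(10.5, 1.5, filt))        # False: dominated by the entry (10.0, 1.0)
```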

35 IPOPT Algorithm Features
Line search strategies for globalization:
- ℓ2 exact penalty merit function
- Filter method (adapted and extended from Fletcher and Leyffer)
Hessian calculation:
- BFGS (full/LM)
- SR1 (full/LM)
- Exact full Hessian (direct)
- Exact reduced Hessian (direct)
- Preconditioned CG
Algorithmic properties: globally, superlinearly convergent (Wächter and B., 2005); easily tailored to different problem structures.
Freely available: CPL license and COIN-OR distribution. Beta version recently rewritten in C++. Solved on thousands of test problems and applications.

36 Trust Region Barrier Method - KNITRO
[W A; Aᵀ 0] [d; λ] = −[g; c]

37 Composite Step Trust-region SQP
- Split the barrier problem step into two parts: normal (basic) and tangential (nonbasic and superbasic) search directions.
- Trust region on the tangential component, ||Zd|| ≤ δ₂: a larger combined radius than the normal component, ||d_N|| ≤ δ₁.
[Figure: normal and tangential steps within nested trust regions around x^k]

38 KNITRO Algorithm Features
Overall method:
- similar to IPOPT
- but does not use the filter method
- applies a full-space Newton step and line search (LOQO is similar)
Trust region strategies for globalization:
- based on the ℓ2 exact penalty merit function
- expensive: used only when the line search approach gets into trouble
Tangential problem:
- exact full Hessian (direct)
- preconditioned CG to solve the tangential problem
Normal problem:
- use a dogleg method based on the Cauchy step
Algorithmic properties: globally, superlinearly convergent (Byrd and Nocedal, 2000); stronger convergence properties using trust region over line search.
Available through Ziena; free to academics. Solved on thousands of test problems and applications.

39 Typical NLP algorithms and software
- SQP: NPSOL, VF2AD, fmincon
- reduced SQP: SNOPT, rSQP, MUSCOD, DMO, LSSOL
- GP + nonlinear elimination: GRG2, GINO, SOLVER, CONOPT
- GP + linearized elimination: MINOS
- Second derivatives and barrier: IPOPT, KNITRO, LOQO
Interesting hybrids:
- FSQP/cFSQP - SQP and elimination
- LANCELOT (actually augmented Lagrangian for equalities along with GP)

40 Comparison of NLP Solvers: Data Reconciliation (2004)
[Figure: iteration counts and normalized CPU time versus degrees of freedom for LANCELOT, MINOS, SNOPT, KNITRO, LOQO and IPOPT]

41 Comparison of NLP solvers: Mittelmann NLP benchmark (latest Mittelmann study)
[Table/figure: CPU times of IPOPT, KNITRO, LOQO, SNOPT and CONOPT on large-scale test problems (qcqp*, nql*, qssp*, WM_CFx/WM_CFy, Weyl, NARX_*), with counts of time-limit hits and failures per solver, and a performance profile of the fraction of problems solved versus a multiple of the minimum CPU time]

42 NLP Summary
- Widely used: nonlinear problems with thousands of variables and superbasics are solved routinely
- Exploiting sparsity and structure leads to solving problems with millions of variables (and superbasics)
- Availability of structure and second derivatives is the key
- Global and fast local convergence properties are well known for many (but not all) algorithms. Exceptions: LOQO, MINOS, CONOPT, GRG

43 Current Challenges
- Exploiting larger problems in parallel (shared and distributed memory)
- Irregular problems (e.g., singular problems, MPECs)
- Extracting accurate first and second derivatives from codes
- Embedding robust NLPs within MINLP and global solvers in order to quickly detect infeasible solutions in subregions
