A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ^{-3/2}) Complexity

A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ^{-3/2}) Complexity. Mohammadreza Samadi, Lehigh University; joint work with Frank E. Curtis (stand-in presenter), Lehigh University. OP17, Vancouver, British Columbia, Canada, 24 May 2017.

Outline: Motivation; Proposed Algorithm; Theoretical Results; Numerical Results; Summary.

Introduction. Consider nonconvex equality constrained optimization problems of the form min_{x ∈ R^n} f(x) s.t. c(x) = 0, where f : R^n → R and c : R^n → R^m are twice continuously differentiable. We are interested in algorithms' worst-case iteration / evaluation complexity. Constraints are not necessarily linear! (No projection-based algorithms.)
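For reference, here is the problem and its first-order (KKT) optimality conditions in the notation the later slides use (g := ∇f, J := the Jacobian of c, y := the Lagrange multipliers); this is standard material rather than content from the talk, spelled out because later slides measure the residuals g_k + J_k^T y_k and J_k^T c_k:

```latex
% Equality constrained problem:
\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad c(x) = 0
% First-order (KKT) conditions, with g := \nabla f and J(x) := \nabla c(x)^T:
\begin{aligned}
  g(x) + J(x)^T y &= 0 && \text{(stationarity)} \\
  c(x) &= 0 && \text{(feasibility)}
\end{aligned}
```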

Algorithms.
Sequential Quadratic Optimization (SQP) / Newton's Method: globally convergent (line search or trust region); no proved worst-case complexity bound.
Trust Funnel (Gould & Toint, 2010): globally convergent; no proved worst-case complexity bound.
Short-Step ARC (Cartis, Gould, & Toint, 2013): globally convergent; worst-case complexity O(ɛ^{-3/2}) (simplified; more later).

Trust region vs. ARC (and the subproblem solution correspondence).
Trust region:
1: Solve to compute s_k: min_{s ∈ R^n} q_k(s) := f_k + g_k^T s + (1/2) s^T H_k s s.t. ||s||_2 ≤ δ_k (dual multiplier: λ_k).
2: Compute ratio: ρ_k^q ← (f_k - f(x_k + s_k)) / (f_k - q_k(s_k)).
3: Update radius: if ρ_k^q ≥ η, accept and increase δ_k; if ρ_k^q < η, reject and decrease δ_k.
ARC:
1: Solve to compute s_k: min_{s ∈ R^n} c_k(s) := f_k + g_k^T s + (1/2) s^T H_k s + (σ_k/3) ||s||_2^3.
2: Compute ratio: ρ_k^c ← (f_k - f(x_k + s_k)) / (f_k - c_k(s_k)).
3: Update regularization: if ρ_k^c ≥ η, accept and decrease σ_k; if ρ_k^c < η, reject and increase σ_k.
Subproblem solution correspondence: σ_k = λ_k / δ_k and δ_k = ||s_k||_2.
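As a concrete picture of steps 2 and 3, here is a minimal Python sketch of the shared accept/reject logic; solve_subproblem and model are hypothetical caller-supplied callables, and the update factors are illustrative, not the constants from the talk:

```python
def tr_arc_loop(f, x, solve_subproblem, model, param, eta=0.1, max_iter=100):
    """Generic accept/reject loop shared by trust region and ARC.

    solve_subproblem(x, param) -> trial step s minimizing the local model
    model(x, s, param)         -> model value q_k(s) (TR) or c_k(s) (ARC)
    param                      -> radius delta_k (TR) or regularization sigma_k (ARC)
    """
    for _ in range(max_iter):
        s = solve_subproblem(x, param)
        rho = (f(x) - f(x + s)) / (f(x) - model(x, s, param))
        if rho >= eta:
            x = x + s        # accept the step
            param *= 2.0     # TR enlarges delta_k; ARC would instead shrink sigma_k
        else:
            param *= 0.5     # TR shrinks delta_k; ARC would instead grow sigma_k
    return x
```

The correspondence σ_k = λ_k / δ_k, δ_k = ||s_k||_2 is what lets the two parameterizations be related step for step.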

[Figure, three builds: iterates of Short-Step ARC plotted against the feasible set c(x) = 0 and contours of f (minimum value f*).]

Main concern: Short-Step ARC completely ignores the objective function during the first phase. Question: Can we do better?

Contributions. Can we do better than a method that completely ignores the objective during the first phase? Yes!(?)
First, consider the TRACE method for unconstrained nonconvex optimization: F. E. Curtis, D. P. Robinson, and M. Samadi, A trust region algorithm with a worst-case iteration complexity of O(ɛ^{-3/2}) for nonconvex optimization, Mathematical Programming, 162(1-2), 2017.
Second, rather than a two-phase approach that ignores the objective in phase 1, wrap this in a trust funnel framework that observes the objective in both phases.
Why a trust funnel? We do not know, in general, how to bound the number of updates to a penalty parameter and/or updates of filter entries! In a trust funnel method, the driving factor is reducing constraint violation.

Next: Proposed Algorithm.

SQP core. Ideally, given x_k, find s_k as a solution of min_{s ∈ R^n} f_k + g_k^T s + (1/2) s^T H_k s s.t. c_k + J_k s = 0. Issues: H_k (the Hessian of the Lagrangian) might not be positive definite over Null(J_k). Trust region! ... but then the trust-region constraint and the linearized equality constraints might be incompatible.
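When H_k is positive definite over Null(J_k) and J_k has full row rank, the subproblem above can be solved directly through its KKT system; a minimal numpy sketch of that standard linear-algebra step (not code from the talk):

```python
import numpy as np

def sqp_step(g, H, c, J):
    """Solve  min_s g's + 0.5 s'Hs  s.t.  c + Js = 0  via its KKT system:

        [H  J'] [s]   [-g]
        [J  0 ] [y] = [-c]

    Assumes J has full row rank and H is positive definite over Null(J).
    """
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return sol[:n], sol[n:]          # step s_k, multipliers y_k
```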

Trust funnel basics: a step decomposition approach.
First, compute a normal step toward minimizing the constraint violation v(x) := (1/2) ||c(x)||_2^2: min_{s^norm ∈ R^n} m_k^v(s^norm) s.t. ||s^norm||_2 ≤ δ_k^v.
Second, compute multipliers λ_k (or take them from the previous iteration).
Third, compute a tangential step toward optimality: min_{s^tang ∈ R^n} m_k^f(s^norm + s^tang) s.t. J_k s^tang = 0, ||s^norm + s^tang||_2 ≤ δ_k^f.
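A bare-bones numpy sketch of this decomposition, with simple model choices picked for illustration (a Gauss-Newton normal step, least-squares multipliers, and a reduced Newton tangential step over a null-space basis of J_k); the talk's algorithm wraps these steps in funnel tests and TRACE-style radius updates that are omitted here:

```python
import numpy as np

def decomposed_step(g, H, c, J, delta_v, delta_f):
    """One simplified trust-funnel step decomposition (assumes J full row rank, n > m)."""
    # Normal step: minimum-norm Gauss-Newton step for min 0.5 ||c + J s||^2,
    # scaled back into the normal-step trust region if needed.
    s_norm = np.linalg.lstsq(J, -c, rcond=None)[0]
    if np.linalg.norm(s_norm) > delta_v:
        s_norm *= delta_v / np.linalg.norm(s_norm)

    # Multipliers: least-squares estimate, y = argmin ||g + J' y||.
    y = np.linalg.lstsq(J.T, -g, rcond=None)[0]

    # Tangential step: minimize the quadratic model over Null(J),
    # i.e. s_tang = Z w with J Z = 0, via a reduced Newton system
    # (assumes Z' H Z is positive definite).
    Z = np.linalg.svd(J)[2][J.shape[0]:].T      # null-space basis of J
    w = np.linalg.solve(Z.T @ H @ Z, -Z.T @ (g + H @ s_norm))
    s_tang = Z @ w

    # s_norm lies in Range(J') and s_tang in Null(J), so they are orthogonal;
    # shrink s_tang if needed so that ||s_norm + s_tang||_2 <= delta_f.
    room = max(delta_f**2 - s_norm @ s_norm, 0.0)
    if s_tang @ s_tang > room:
        s_tang *= np.sqrt(room / (s_tang @ s_tang))
    return s_norm + s_tang, y
```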

[Figure: the tangential step in the (s_1, s_2) plane; since J_k s^tang = 0, the tangential step keeps the linearized residual fixed: c_k + J_k s = c_k + J_k s^norm.]

Main idea: a two-phase method combining trust funnel and TRACE; the trust funnel provides globalization, TRACE provides good complexity bounds.
Phase 1 works toward feasibility with two types of iterations: F-iterations improve the objective and reduce constraint violation; V-iterations reduce constraint violation.
Our algorithm vs. a basic trust funnel: modified F-iteration conditions and a different funnel updating procedure; uses TRACE instead of traditional trust region ideas (for radius updates); after becoming relatively feasible, switches to phase 2.
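Read literally from this slide, the two iteration types can be told apart by what an accepted step achieves; a toy sketch (the paper's actual tests involve model decreases and funnel bounds, not just these sign checks):

```python
def classify_iteration(f_decrease, v_decrease):
    """Toy phase 1 classification, per the slide's description.

    f_decrease: achieved decrease in the objective f
    v_decrease: achieved decrease in the constraint violation v
    """
    if f_decrease > 0 and v_decrease > 0:
        return "F-iteration"   # improves objective and reduces violation
    if v_decrease > 0:
        return "V-iteration"   # reduces violation
    return "neither"
```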

[Figure, nine builds: illustration of the proposed method's iterates against the feasible set c(x) = 0 and contours of f (minimum value f*).]

Next: Theoretical Results.

Phase 1. Recall that ∇v(x) = J(x)^T c(x), and define the iteration index set I := {k ∈ N : ||J_k^T c_k||_2 > ɛ_v}.
Theorem 1. For any ɛ_v ∈ (0, ∞), the cardinality of I is at most K(ɛ_v), which accounts for O(ɛ_v^{-3/2}) successful steps and finitely many contraction and expansion steps between successful steps. Hence, the cardinality of I is K(ɛ_v) = O(ɛ_v^{-3/2}).
Corollary 2. If the Jacobians {J_k} have full row rank with singular values bounded below by ζ ∈ (0, ∞), then ||J_k^T c_k||_2 ≥ ζ ||c_k||_2, so I_c := {k ∈ N : ||c_k||_2 > ɛ_v/ζ} has cardinality O(ɛ_v^{-3/2}).

Phase 2. Options for phase 2: a trust funnel method (no complexity guarantees) or a target-following approach similar to Short-Step ARC that minimizes Φ(x, t) = ||(c(x), f(x) - t)||_2.
Theorem 3. For ɛ_f ∈ (0, ɛ_v^{1/3}], the number of iterations until ||g_k + J_k^T y_k||_2 ≤ ɛ_f ||(y_k, 1)||_2 or ||J_k^T c_k||_2 ≤ ɛ_f ||c_k||_2 is O(ɛ_f^{-3/2} ɛ_v^{-1/2}).
Same complexity as Short-Step ARC: if ɛ_f = ɛ_v^{2/3}, then the overall bound is O(ɛ_v^{-3/2}) (since ɛ_f^{-3/2} = ɛ_v^{-1}), though with a loose KKT tolerance; if ɛ_f = ɛ_v, then the overall bound is O(ɛ_v^{-2}).
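The two stopping tests of Theorem 3 transcribe directly into code; a small numpy sketch, taking the norms as reconstructed above:

```python
import numpy as np

def phase2_terminated(g, J, c, y, eps_f):
    """Check the two phase 2 stopping tests from Theorem 3."""
    kkt = np.linalg.norm(g + J.T @ y) <= eps_f * np.linalg.norm(np.append(y, 1.0))
    infeas_stationary = np.linalg.norm(J.T @ c) <= eps_f * np.linalg.norm(c)
    return kkt or infeas_stationary
```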

Next: Numerical Results.

Implementation. Matlab implementation:
Phase 1: our algorithm vs. one doing V-iterations only.
Phase 2: trust funnel method [Curtis, Gould, Robinson, & Toint (2016)].
Termination conditions:
Phase 1: ||c_k|| ≤ 10^{-6} max{||c_0||, 1}, or both ||J_k^T c_k|| ≤ 10^{-6} max{||J_0^T c_0||, 1} and ||c_k|| > 10^{-3} max{||c_0||, 1}.
Phase 2: ||g_k + J_k^T y_k|| ≤ 10^{-6} max{||g_0 + J_0^T y_0||, 1}.

Test set. Equality constrained problems (190) from the CUTEst test set, excluding:
78 with a constant (or null) objective
60 hitting the time limit (1 hour)
13 with a feasible initial point
3 with an infeasible phase 1
2 with a function evaluation error
1 with small stepsizes (less than )
Remaining set consists of 33 problems.

[Table: per-problem results on the 33 test problems (BT, BYRDSPHR, CHAIN, FLT, GENHS, HS100LNP, HS111LNP, HS, MARATOS, MSS, MWRIGHT, ORTHREGB, and SPIN2OP families), with columns: problem, n, m; then, for TF and for TF-V-only, the phase 1 and phase 2 iteration counts (#V, #F), final f, and final ||g + J^T y||. The SPIN2OP row includes time-limit terminations.]

Summary of results. At the end of phase 1, our algorithm reaches a smaller function value for 26 problems and the same function value for 6 problems. The total number of iterations of our algorithm is smaller for 18 problems and equal for 8 problems.

Next: Summary.

Summary.
Proposed an algorithm for equality constrained optimization: a trust funnel algorithm with improved complexity properties.
Promising performance in practice based on our preliminary experiments.
A step toward practical algorithms with good iteration complexity.
F. E. Curtis, D. P. Robinson, and M. Samadi. Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization. Technical Report 16T-013, COR@L Laboratory, Department of ISE, Lehigh University.
