Inexact Newton Methods and Nonlinear Constrained Optimization

1 Inexact Newton Methods and Nonlinear Constrained Optimization. Frank E. Curtis, EPSRC Symposium Capstone Conference, Warwick Mathematics Institute, July 2, 2009

2 Outline PDE-Constrained Optimization; Newton's method; Inexactness; Experimental results; Conclusion and final remarks

4 Hyperthermia treatment Regional hyperthermia is a cancer therapy that aims at heating large and deeply seated tumors by means of radio wave absorption. This kills tumor cells and makes them more susceptible to accompanying therapies, e.g., chemotherapy.

5 Hyperthermia treatment planning Computer modeling can be used to help plan the therapy for each patient, and it opens the door for numerical optimization. The goal is to heat the tumor to a target temperature of 43 °C while minimizing damage to nearby cells.

6 PDE-constrained optimization
$$\min f(x) \quad \text{s.t.} \quad c_E(x) = 0, \quad c_I(x) \ge 0$$
The problem is infinite-dimensional. Controls and states: $x = (u, y)$. Solution methods integrate numerical simulation, problem structure, and optimization algorithms.

7 Algorithmic frameworks We hear the phrases "Discretize-then-optimize" and "Optimize-then-discretize". I prefer: discretize the optimization problem ($\min f(x)$ s.t. $c(x) = 0$ becomes $\min f_h(x)$ s.t. $c_h(x) = 0$); discretize the optimality conditions ($\min f(x)$ s.t. $c(x) = 0$ has conditions $\begin{bmatrix} \nabla f + A^*\lambda \\ c \end{bmatrix} = 0$, discretized as $\begin{bmatrix} (\nabla f + A^*\lambda)_h \\ c_h \end{bmatrix} = 0$); or discretize the search direction computation.

8 Algorithms Nonlinear elimination: $\min_{u,y} f(u,y)$ s.t. $c(u,y) = 0$ becomes $\min_u f(u, y(u))$, with optimality condition $\nabla_u f + \nabla_u y^T \nabla_y f = 0$. Reduced-space methods: $d_y$ steps toward satisfying the constraints, $\lambda$ gives Lagrange multiplier estimates, $d_u$ steps toward optimality. Full-space methods solve
$$\begin{bmatrix} H_u & 0 & A_u^T \\ 0 & H_y & A_y^T \\ A_u & A_y & 0 \end{bmatrix} \begin{bmatrix} d_u \\ d_y \\ \delta \end{bmatrix} = -\begin{bmatrix} \nabla_u f + A_u^T \lambda \\ \nabla_y f + A_y^T \lambda \\ c \end{bmatrix}$$
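To make the full-space step concrete, here is a minimal numpy sketch; the function name, dense blocks, and direct solve are illustrative assumptions only, since in PDE-constrained problems these matrices are large, sparse, and typically never formed or factored.

```python
import numpy as np

# One full-space Newton step for the block system above, assuming the
# blocks are available as dense arrays (an illustration, not a recipe).
def full_space_step(Hu, Hy, Au, Ay, grad_u, grad_y, c, lam):
    nu, ny, m = Hu.shape[0], Hy.shape[0], c.shape[0]
    K = np.block([
        [Hu,                 np.zeros((nu, ny)), Au.T],
        [np.zeros((ny, nu)), Hy,                 Ay.T],
        [Au,                 Ay,                 np.zeros((m, m))],
    ])
    rhs = -np.concatenate([grad_u + Au.T @ lam,
                           grad_y + Ay.T @ lam,
                           c])
    sol = np.linalg.solve(K, rhs)
    return sol[:nu], sol[nu:nu + ny], sol[nu + ny:]
```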

9 Outline PDE-Constrained Optimization; Newton's method; Inexactness; Experimental results; Conclusion and final remarks

10 Nonlinear equations Newton's method for $F(x) = 0$ solves $\nabla F(x_k) d_k = -F(x_k)$. Judge progress by the merit function $\phi(x) \triangleq \tfrac12\|F(x)\|^2$. The direction is one of descent since
$$\nabla\phi(x_k)^T d_k = F(x_k)^T \nabla F(x_k) d_k = -\|F(x_k)\|^2 < 0$$
(Note the consistency between the step computation and the merit function!)
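A minimal Python sketch of this iteration (no globalization safeguards; the function names and tolerances are illustrative assumptions):

```python
import numpy as np

# Newton's method for F(x) = 0 with merit function phi(x) = 0.5*||F(x)||^2.
# F returns the residual vector, J its Jacobian matrix.
def newton(F, J, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        d = np.linalg.solve(J(x), -Fx)  # Newton system J(x_k) d_k = -F(x_k)
        # d is a descent direction for phi: grad phi^T d = -||F(x)||^2 < 0
        x = x + d
    return x

# Example: solve x^2 - 2 = 0, i.e., x = sqrt(2)
root = newton(lambda x: np.array([x[0]**2 - 2.0]),
              lambda x: np.array([[2.0 * x[0]]]),
              np.array([1.0]))
```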

11 Equality constrained optimization Consider $\min_{x \in \mathbb{R}^n} f(x)$ s.t. $c(x) = 0$. The Lagrangian is $\mathcal{L}(x,\lambda) \triangleq f(x) + \lambda^T c(x)$, so the first-order optimality conditions are
$$\nabla\mathcal{L}(x,\lambda) = F(x,\lambda) = \begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix} = 0$$

12 Merit function Simply minimizing
$$\varphi(x,\lambda) = \tfrac12\|F(x,\lambda)\|^2 = \tfrac12\left\|\begin{matrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{matrix}\right\|^2$$
is generally inappropriate for constrained optimization. We use the merit function $\phi(x;\pi) \triangleq f(x) + \pi\|c(x)\|$, where $\pi$ is a penalty parameter.

13 Minimizing a penalty function Consider the penalty function for $\min (x-1)^2$ s.t. $x = 0$, i.e., $\phi(x;\pi) = (x-1)^2 + \pi|x|$, for different values of the penalty parameter $\pi$. [Figures: plots of $\phi(x;\pi)$ for $\pi = 1$ and $\pi = 2$]
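Why the value of $\pi$ matters here: for $x > 0$ the penalty function is smooth, and minimizing it directly shows that the constrained solution $x = 0$ is recovered only once $\pi$ is large enough (a worked version of the pictured example):

```latex
\phi(x;\pi) = (x-1)^2 + \pi|x|, \qquad
\phi'(x;\pi) = 2(x-1) + \pi \quad (x > 0)
\quad\Longrightarrow\quad
x^\star = \max\{0,\ 1 - \tfrac{\pi}{2}\}.
```

So $\pi = 1$ gives the infeasible minimizer $x^\star = 1/2$, while $\pi \ge 2$ gives $x^\star = 0$: the penalty is exact once $\pi$ exceeds a finite threshold.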

14 Algorithm 0: Newton method for optimization (Assume the problem is sufficiently convex and regular.) For $k = 0, 1, 2, \dots$: solve the primal-dual (Newton) equations
$$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix} \begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}$$
Increase $\pi$, if necessary, so that $D\phi_k(d_k;\pi_k) < 0$ (e.g., $\pi_k \ge \|\lambda_k + \delta_k\|$). Backtrack from $\alpha_k \leftarrow 1$ to satisfy the Armijo condition
$$\phi(x_k + \alpha_k d_k;\pi_k) \le \phi(x_k;\pi_k) + \eta\alpha_k D\phi_k(d_k;\pi_k)$$
Update the iterate: $(x_{k+1},\lambda_{k+1}) \leftarrow (x_k,\lambda_k) + \alpha_k(d_k,\delta_k)$.
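The following Python sketch mirrors Algorithm 0 under the stated convexity and regularity assumptions; the callables, the penalty increment, and the backtracking factor are illustrative choices, not values prescribed in the talk.

```python
import numpy as np

# Sketch of Algorithm 0 for min f(x) s.t. c(x) = 0, assuming callables
# f, g = grad f, c, A = (grad c)^T (m-by-n Jacobian), and H = Hessian of
# the Lagrangian. Merit: phi(x; pi) = f(x) + pi*||c(x)||.
def algorithm0(f, g, c, A, H, x, lam, pi=1.0, eta=1e-4, iters=50):
    n = x.size
    for _ in range(iters):
        gx, cx, Ax = g(x), c(x), A(x)
        m = cx.size
        K = np.block([[H(x, lam), Ax.T], [Ax, np.zeros((m, m))]])
        sol = np.linalg.solve(K, -np.concatenate([gx + Ax.T @ lam, cx]))
        d, delta = sol[:n], sol[n:]
        pi = max(pi, np.linalg.norm(lam + delta) + 1e-4)  # keep descent
        Dphi = gx @ d - pi * np.linalg.norm(cx)           # D phi_k(d_k; pi_k)
        phi0, alpha = f(x) + pi * np.linalg.norm(cx), 1.0
        while (f(x + alpha * d) + pi * np.linalg.norm(c(x + alpha * d))
               > phi0 + eta * alpha * Dphi) and alpha > 1e-12:
            alpha *= 0.5                                  # Armijo backtracking
        x, lam = x + alpha * d, lam + alpha * delta
    return x, lam
```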

15 Convergence of Algorithm 0 Assumption: the sequence $\{(x_k,\lambda_k)\}$ is contained in a convex set $\Omega$ over which $f$, $c$, and their first derivatives are bounded and Lipschitz continuous. Also: (Regularity) $\nabla c(x_k)^T$ has full row rank with singular values bounded below by a positive constant; (Convexity) $u^T H(x_k,\lambda_k)u \ge \mu\|u\|^2$ for some $\mu > 0$ and all $u \in \mathbb{R}^n$ satisfying $u \ne 0$ and $\nabla c(x_k)^T u = 0$. Theorem (Han (1977)): the sequence $\{(x_k,\lambda_k)\}$ yields the limit
$$\lim_{k\to\infty}\left\|\begin{matrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{matrix}\right\| = 0$$

16 Outline PDE-Constrained Optimization; Newton's method; Inexactness; Experimental results; Conclusion and final remarks

17 Large-scale primal-dual algorithms Computational issues: large matrices must be stored and factored. Algorithmic issues: the problem may be nonconvex or ill-conditioned. Computational/algorithmic issues: without matrix factorizations, these difficulties become harder to detect and handle.

18 Nonlinear equations Compute $\nabla F(x_k) d_k = -F(x_k) + r_k$, requiring (Dembo, Eisenstat, Steihaug (1982)) $\|r_k\| \le \kappa\|F(x_k)\|$ for $\kappa \in (0,1)$. Progress is again judged by the merit function $\phi(x) \triangleq \tfrac12\|F(x)\|^2$. Again, note the consistency:
$$\nabla\phi(x_k)^T d_k = F(x_k)^T \nabla F(x_k) d_k = -\|F(x_k)\|^2 + F(x_k)^T r_k \le (\kappa - 1)\|F(x_k)\|^2 < 0$$
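A sketch of how such a condition is enforced in practice with a Krylov solver, here SciPy's GMRES, whose relative tolerance is measured against $\|b\| = \|F(x_k)\|$ (newer SciPy versions name the argument rtol; older ones use tol). The function names and matrix-free interface are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Inexact Newton: solve J(x) d = -F(x) only to relative residual kappa < 1,
# which enforces ||r_k|| <= kappa*||F(x_k)||. Jv(x, v) applies the Jacobian
# at x to a vector v, so J never needs to be formed or factored.
def inexact_newton(F, Jv, x, kappa=0.1, tol=1e-8, iters=50):
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        J = LinearOperator((x.size, x.size), matvec=lambda v: Jv(x, v))
        d, _ = gmres(J, -Fx, rtol=kappa)  # older SciPy: gmres(J, -Fx, tol=kappa)
        x = x + d
    return x
```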

19 Optimization Compute
$$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix} \begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix} + \begin{bmatrix} \rho_k \\ r_k \end{bmatrix}$$
satisfying
$$\left\|\begin{matrix} \rho_k \\ r_k \end{matrix}\right\| \le \kappa\left\|\begin{matrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{matrix}\right\|, \quad \kappa \in (0,1)$$
If $\kappa$ is not sufficiently small, then $d_k$ may be an ascent direction for our merit function; i.e., $D\phi_k(d_k;\pi_k) > 0$ for all $\pi_k \ge \pi_{k-1}$. Our work begins here: inexact Newton methods for optimization. We cover the convex case, nonconvexity, irregularity, and inequality constraints.

20 Model reductions Define the model of $\phi(x;\pi)$:
$$m(d;\pi) \triangleq f(x) + \nabla f(x)^T d + \pi\|c(x) + \nabla c(x)^T d\|$$
$d_k$ is acceptable if
$$\Delta m(d_k;\pi_k) \triangleq m(0;\pi_k) - m(d_k;\pi_k) = -\nabla f(x_k)^T d_k + \pi_k\left(\|c(x_k)\| - \|c(x_k) + \nabla c(x_k)^T d_k\|\right) \ge 0$$
This ensures $D\phi_k(d_k;\pi_k) \le 0$ (and more).
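As a one-line check (names assumed for illustration: g = $\nabla f(x)$, A = $\nabla c(x)^T$), the model reduction is:

```python
import numpy as np

# Model reduction Delta m(d; pi) = m(0; pi) - m(d; pi) for the merit
# phi(x; pi) = f(x) + pi*||c(x)||; d is acceptable if this is nonnegative.
def model_reduction(g, c, A, d, pi):
    return -g @ d + pi * (np.linalg.norm(c) - np.linalg.norm(c + A @ d))
```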

21 Termination test 1 The search direction $(d_k,\delta_k)$ is acceptable if
$$\left\|\begin{matrix} \rho_k \\ r_k \end{matrix}\right\| \le \kappa\left\|\begin{matrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{matrix}\right\|, \quad \kappa \in (0,1)$$
and if for $\pi_k = \pi_{k-1}$ and some $\sigma \in (0,1)$ we have
$$\Delta m(d_k;\pi_k) \ge \underbrace{\max\{\tfrac12 d_k^T H(x_k,\lambda_k)d_k,\, 0\}}_{\ge\, 0 \text{ for any } d} + \sigma\pi_k\max\{\|c(x_k)\|,\, \|r_k\| - \|c(x_k)\|\}$$

22 Termination test 2 The search direction $(d_k,\delta_k)$ is acceptable if $\|\rho_k\| \le \beta\|c(x_k)\|$ for $\beta > 0$ and $\|r_k\| \le \epsilon\|c(x_k)\|$ for $\epsilon \in (0,1)$. Increasing the penalty parameter $\pi$ then yields
$$\Delta m(d_k;\pi_k) \ge \underbrace{\max\{\tfrac12 d_k^T H(x_k,\lambda_k)d_k,\, 0\}}_{\ge\, 0 \text{ for any } d} + \sigma\pi_k\|c(x_k)\|$$
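The two tests might be coded as follows; the argument names are introduced here for illustration, and the default parameter values are placeholders rather than the talk's settings.

```python
import numpy as np

# Sketches of the two termination tests for the inner (iterative) KKT solve.
# rho, r: dual and primal residual blocks; g_lag = grad f + grad c * lambda;
# dm = Delta m(d; pi) evaluated with pi = pi_{k-1}; dHd = d^T H d.
def test1(rho, r, g_lag, c, dm, dHd, pi, kappa=0.1, sigma=0.5):
    res_ok = (np.hypot(np.linalg.norm(rho), np.linalg.norm(r))
              <= kappa * np.hypot(np.linalg.norm(g_lag), np.linalg.norm(c)))
    model_ok = dm >= max(0.5 * dHd, 0.0) + sigma * pi * max(
        np.linalg.norm(c), np.linalg.norm(r) - np.linalg.norm(c))
    return res_ok and model_ok

def test2(rho, r, c, beta=10.0, eps=0.5):
    return (np.linalg.norm(rho) <= beta * np.linalg.norm(c)
            and np.linalg.norm(r) <= eps * np.linalg.norm(c))
```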

23 Algorithm 1: Inexact Newton for optimization (Byrd, Curtis, Nocedal (2008)) For $k = 0, 1, 2, \dots$: iteratively solve
$$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix} \begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}$$
until termination test 1 or 2 is satisfied. If only termination test 2 is satisfied, increase $\pi$ so that
$$\pi_k \ge \max\left\{\pi_{k-1},\ \frac{\nabla f(x_k)^T d_k + \max\{\tfrac12 d_k^T H(x_k,\lambda_k)d_k,\, 0\}}{(1-\tau)(\|c(x_k)\| - \|r_k\|)}\right\}$$
Backtrack from $\alpha_k \leftarrow 1$ to satisfy
$$\phi(x_k + \alpha_k d_k;\pi_k) \le \phi(x_k;\pi_k) - \eta\alpha_k\,\Delta m(d_k;\pi_k)$$
Update the iterate: $(x_{k+1},\lambda_{k+1}) \leftarrow (x_k,\lambda_k) + \alpha_k(d_k,\delta_k)$.
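A sketch of the penalty update triggered when only test 2 holds; note that test 2 guarantees $\|r_k\| \le \epsilon\|c(x_k)\|$ with $\epsilon < 1$, so the denominator is positive. The function name and default $\tau$ are illustrative.

```python
# Penalty parameter update for Algorithm 1 (g, d are numpy arrays; dHd = d^T H d).
def update_pi(pi_prev, g, d, dHd, c_norm, r_norm, tau=0.1):
    needed = (g @ d + max(0.5 * dHd, 0.0)) / ((1.0 - tau) * (c_norm - r_norm))
    return max(pi_prev, needed)
```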

24 Convergence of Algorithm 1 Assumption: the sequence $\{(x_k,\lambda_k)\}$ is contained in a convex set $\Omega$ over which $f$, $c$, and their first derivatives are bounded and Lipschitz continuous. Also: (Regularity) $\nabla c(x_k)^T$ has full row rank with singular values bounded below by a positive constant; (Convexity) $u^T H(x_k,\lambda_k)u \ge \mu\|u\|^2$ for some $\mu > 0$ and all $u \in \mathbb{R}^n$ satisfying $u \ne 0$ and $\nabla c(x_k)^T u = 0$. Theorem (Byrd, Curtis, Nocedal (2008)): the sequence $\{(x_k,\lambda_k)\}$ yields the limit
$$\lim_{k\to\infty}\left\|\begin{matrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{matrix}\right\| = 0$$

25 Handling nonconvexity and rank deficiency There are two assumptions we aim to drop: (Regularity) $\nabla c(x_k)^T$ has full row rank with singular values bounded below by a positive constant; (Convexity) $u^T H(x_k,\lambda_k)u \ge \mu\|u\|^2$ for some $\mu > 0$ and all $u \in \mathbb{R}^n$ satisfying $u \ne 0$ and $\nabla c(x_k)^T u = 0$. For example, the problem is not regular if it is infeasible, and it is not convex if there are maximizers and/or saddle points. Without these assumptions, Algorithm 1 may stall or may not be well-defined.

26 No factorizations means no clue We might not store or factor
$$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}$$
so we might not know if the problem is nonconvex or ill-conditioned. Common practice is to perturb the matrix to
$$\begin{bmatrix} H(x_k,\lambda_k) + \xi_1 I & \nabla c(x_k) \\ \nabla c(x_k)^T & -\xi_2 I \end{bmatrix}$$
where $\xi_1$ convexifies the model and $\xi_2$ regularizes the constraints. Poor choices of $\xi_1$ and $\xi_2$ can have terrible consequences in the algorithm.

27 Our approach for global convergence Decompose the direction $d_k$ into a normal component $v_k$ (toward satisfying the constraints) and a tangential component $u_k$ (toward optimality). We impose a specific type of trust region constraint on the normal step $v_k$ in case the constraint Jacobian is (nearly) rank deficient.

28 Handling nonconvexity In the computation of $d_k = v_k + u_k$, convexify the Hessian as in
$$\begin{bmatrix} H(x_k,\lambda_k) + \xi_1 I & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}$$
by monitoring the iterates. Hessian modification strategy: increase $\xi_1$ whenever
$$\|u_k\|^2 > \psi\|v_k\|^2 \ (\psi > 0) \qquad \text{and} \qquad u_k^T(H(x_k,\lambda_k) + \xi_1 I)u_k < \theta\|u_k\|^2 \ (\theta > 0)$$
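In code, the modification test might look as follows (a sketch; the $\psi$ and $\theta$ defaults are placeholders), with $\xi_1$ increased and the step recomputed while the test returns true:

```python
import numpy as np

# The tangential step u dominates the normal step v, yet the modified Hessian
# shows too little curvature along u: in that case xi1 should be increased.
def needs_larger_xi1(H, xi1, u, v, psi=10.0, theta=1e-8):
    dominated = u @ u > psi * (v @ v)                    # ||u||^2 > psi*||v||^2
    flat = u @ ((H + xi1 * np.eye(u.size)) @ u) < theta * (u @ u)
    return dominated and flat
```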

29 Algorithm 2: Inexact Newton (Regularized) (Curtis, Nocedal, Wächter (2009)) For $k = 0, 1, 2, \dots$: approximately solve
$$\min_v\ \tfrac12\|c(x_k) + \nabla c(x_k)^T v\|^2 \quad \text{s.t.} \quad \|v\| \le \omega\|\nabla c(x_k)c(x_k)\|$$
to compute $v_k$ satisfying Cauchy decrease. Iteratively solve
$$\begin{bmatrix} H(x_k,\lambda_k) + \xi_1 I & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix} \begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = \begin{bmatrix} -(\nabla f(x_k) + \nabla c(x_k)\lambda_k) \\ \nabla c(x_k)^T v_k \end{bmatrix}$$
until termination test 1 or 2 is satisfied, increasing $\xi_1$ as described. If only termination test 2 is satisfied, increase $\pi$ so that
$$\pi_k \ge \max\left\{\pi_{k-1},\ \frac{\nabla f(x_k)^T d_k + \max\{\tfrac12 u_k^T(H(x_k,\lambda_k) + \xi_1 I)u_k,\, \theta\|u_k\|^2\}}{(1-\tau)(\|c(x_k)\| - \|c(x_k) + \nabla c(x_k)^T d_k\|)}\right\}$$
Backtrack from $\alpha_k \leftarrow 1$ to satisfy
$$\phi(x_k + \alpha_k d_k;\pi_k) \le \phi(x_k;\pi_k) - \eta\alpha_k\,\Delta m(d_k;\pi_k)$$
Update the iterate: $(x_{k+1},\lambda_{k+1}) \leftarrow (x_k,\lambda_k) + \alpha_k(d_k,\delta_k)$.
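The normal step can be approximated cheaply by a Cauchy (steepest-descent) step on the linearized feasibility measure, which is one simple way to satisfy the Cauchy decrease requirement. A hedged sketch, with J standing for $\nabla c(x_k)^T$:

```python
import numpy as np

# Approximately minimize 0.5*||c + J v||^2 subject to ||v|| <= omega*||J^T c||
# by taking the exact minimizer along the steepest-descent direction -J^T c,
# truncated to the trust region.
def normal_step(J, c, omega=100.0):
    g = J.T @ c                        # gradient of 0.5*||c + J v||^2 at v = 0
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros(J.shape[1])    # already first-order feasible
    Jg = J @ g
    t = (g @ g) / (Jg @ Jg)            # unconstrained Cauchy step length
    t = min(t, omega)                  # enforce t*||g|| <= omega*||g||
    return -t * g
```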

30 Convergence of Algorithm 2 Assumption: the sequence $\{(x_k,\lambda_k)\}$ is contained in a convex set $\Omega$ over which $f$, $c$, and their first derivatives are bounded and Lipschitz continuous. Theorem (Curtis, Nocedal, Wächter (2009)): if all limit points of $\{\nabla c(x_k)^T\}$ have full row rank, then the sequence $\{(x_k,\lambda_k)\}$ yields the limit
$$\lim_{k\to\infty}\left\|\begin{matrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{matrix}\right\| = 0.$$
Otherwise, if $\{\pi_k\}$ is bounded, then
$$\lim_{k\to\infty}\|\nabla c(x_k)c(x_k)\| = 0 \quad \text{and} \quad \lim_{k\to\infty}\|\nabla f(x_k) + \nabla c(x_k)\lambda_k\| = 0$$

31 Handling inequalities Interior point methods are attractive for large applications. Line-search interior point methods that enforce $c(x_k) + \nabla c(x_k)^T d_k = 0$ may fail to converge globally (Wächter, Biegler (2000)). Fortunately, the trust region subproblem we use to regularize the constraints also saves us from this type of failure!

32 Algorithm 2 (Interior-point version) Apply Algorithm 2 to the logarithmic-barrier subproblem for $\mu \searrow 0$:
$$\min\ f(x) - \mu\sum_{i=1}^q \ln s_i \quad \text{s.t.} \quad c_E(x) = 0, \quad c_I(x) - s = 0$$
Define
$$\begin{bmatrix} H(x_k,\lambda_{E,k},\lambda_{I,k}) & 0 & \nabla c_E(x_k) & \nabla c_I(x_k) \\ 0 & \mu I & 0 & -S_k \\ \nabla c_E(x_k)^T & 0 & 0 & 0 \\ \nabla c_I(x_k)^T & -S_k & 0 & 0 \end{bmatrix} \begin{bmatrix} d_k^x \\ d_k^s \\ \delta_{E,k} \\ \delta_{I,k} \end{bmatrix} = -F_\mu(x_k, s_k, \lambda_{E,k}, \lambda_{I,k})$$
(the right-hand side being the primal-dual residual of the barrier subproblem) so that the iterate update has
$$\begin{bmatrix} x_{k+1} \\ s_{k+1} \end{bmatrix} \leftarrow \begin{bmatrix} x_k \\ s_k \end{bmatrix} + \alpha_k\begin{bmatrix} d_k^x \\ S_k d_k^s \end{bmatrix}$$
Incorporate a fraction-to-the-boundary rule in the line search and a slack reset in the algorithm to maintain $s \ge \max\{0, c_I(x)\}$.
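A small sketch of the fraction-to-the-boundary rule used in such line searches ($\tau$ close to 1 is typical; the default here is illustrative):

```python
import numpy as np

# Largest alpha in (0, 1] with s + alpha*ds >= (1 - tau)*s componentwise,
# which keeps the slack variables strictly positive.
def fraction_to_boundary(s, ds, tau=0.995):
    shrink = ds < 0
    if not np.any(shrink):
        return 1.0
    return min(1.0, float(tau * np.min(s[shrink] / -ds[shrink])))
```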

33 Convergence of Algorithm 2 (Interior-point) Assumption: the sequence $\{(x_k,\lambda_{E,k},\lambda_{I,k})\}$ is contained in a convex set $\Omega$ over which $f$, $c_E$, $c_I$, and their first derivatives are bounded and Lipschitz continuous. Theorem (Curtis, Schenk, Wächter (2009)): for a given $\mu$, Algorithm 2 yields the same limits as in the equality constrained case. If Algorithm 2 yields a sufficiently accurate solution to the barrier subproblem for each element of a sequence $\{\mu_j\} \searrow 0$, and if the linear independence constraint qualification (LICQ) holds at a limit point $\bar{x}$ of $\{x_j\}$, then there exist Lagrange multipliers $\bar\lambda$ such that the first-order optimality conditions of the nonlinear program are satisfied.

34 Outline PDE-Constrained Optimization; Newton's method; Inexactness; Experimental results; Conclusion and final remarks

35 Implementation details Incorporated in the IPOPT software package (Wächter) via the option "inexact algorithm yes". Linear systems are solved with SQMR (Freund (1994)), with preconditioning in PARDISO (Schenk): incomplete multilevel factorization with inverse-based pivoting, stabilized by symmetric-weighted matchings. Optimality tolerance: 1e-8.

36 CUTEr and COPS collections 745 problems written in AMPL; 645 solved successfully; 42 real failures. Robustness between 87% and 94%. Original IPOPT: 93%.

37 Helmholtz [Table: problem size parameters N, n, p, q with iteration counts and CPU seconds; per-iteration CPU times: 21.833, 149.66, and 2729.1 seconds.]

38 Helmholtz Not taking nonconvexity into account: [Figure]

39 Boundary control
$$\min\ \tfrac12\int_\Omega (y(x) - y_t(x))^2\, dx \quad \text{s.t.} \quad -\nabla\cdot(e^{y(x)}\nabla y(x)) = 20 \text{ in } \Omega, \quad y(x) = u(x) \text{ on } \partial\Omega, \quad 2.5 \le u(x) \le 3.5 \text{ on } \partial\Omega$$
where $y_t(x) = x_1(x_1 - 1)x_2(x_2 - 1)\sin(2\pi x_3)$. [Table: problem sizes N, n, p, q with iteration counts; per-iteration CPU times: 0.2165, 7.9731, and 380.88 seconds.] Original IPOPT with $N = 32$ requires 238 seconds per iteration.

40 Hyperthermia Treatment Planning
$$\min\ \tfrac12\int_\Omega (y(x) - y_t(x))^2\, dx \quad \text{s.t.} \quad -\Delta y(x) - 10(y(x) - 37) = u^* M(x)u \text{ in } \Omega, \quad 37.0 \le y(x) \le 37.5 \text{ on } \partial\Omega, \quad 42.0 \le y(x) \le 44.0 \text{ in } \Omega_0$$
where $u_j = a_j e^{i\phi_j}$, $M_{jk}(x) = \langle E_j(x), E_k(x)\rangle$, and $E_j = \sin(j x_1 x_2 x_3 \pi)$. [Table: problem sizes N, n, p, q with iteration counts; per-iteration CPU times: 0.3367 and 59.920 seconds.] Original IPOPT with $N = 32$ requires 408 seconds per iteration.

41 Groundwater modeling
$$\min\ \tfrac12\int_\Omega (y(x) - y_t(x))^2\, dx + \alpha\int_\Omega \left[\beta(u(x) - u_t(x))^2 + \|\nabla(u(x) - u_t(x))\|^2\right] dx$$
$$\text{s.t.} \quad -\nabla\cdot(e^{u(x)}\nabla y_i(x)) = q_i(x) \text{ in } \Omega,\ i = 1,\dots,6, \quad \frac{\partial y_i(x)}{\partial n} = 0 \text{ on } \partial\Omega, \quad \int_\Omega y_i(x)\, dx = 0,\ i = 1,\dots,6, \quad 1 \le u(x) \le 2 \text{ in } \Omega$$
where $q_i = 100\sin(2\pi x_1)\sin(2\pi x_2)\sin(2\pi x_3)$. [Table: problem sizes, iteration counts, and per-iteration CPU times.] Original IPOPT with $N = 32$ requires approx. 20 hours for the first iteration.

42 Outline PDE-Constrained Optimization; Newton's method; Inexactness; Experimental results; Conclusion and final remarks

43 Conclusion and final remarks PDE-constrained optimization is an active and exciting area. We presented an inexact Newton method with a theoretical foundation: convergence guarantees are as good as for exact methods, sometimes better. Numerical experiments are promising so far, with more to come.
