An Inexact Newton Method for Optimization


1 Frank E. Curtis, New York University
Brown Applied Mathematics Seminar, February 10, 2009

2 Brief biography
New York State; College of William and Mary (B.S.); Northwestern University (M.S. & Ph.D.); Courant Institute (Postdoc)
Research: nonlinear optimization, algorithms and theory; large-scale optimization (PDE-constrained problems, today's talk); methods with fast detection of infeasibility (MINLP problems); global convergence mechanisms

3 Outline Motivational Example Algorithm development and theoretical results Experimental results Conclusion and final remarks

4 Outline Motivational Example Algorithm development and theoretical results Experimental results Conclusion and final remarks

5 Hyperthermia treatment
Regional hyperthermia is a cancer therapy that aims at heating large and deeply seated tumors by means of radio wave absorption
This results in the killing of tumor cells and makes them more susceptible to accompanying therapies, e.g., chemotherapy

6 Hyperthermia treatment planning
Computer modeling can be used to help plan the therapy for each patient, and it opens the door for numerical optimization
The goal is to heat the tumor to the target temperature of 43 °C while minimizing damage to nearby cells

7 Hyperthermia treatment as an optimization problem
The problem is
$$\min_{y,u} \int_\Omega (y - y_t)^2 \, dV, \qquad \text{where } y_t = \begin{cases} 37 & \text{in } \Omega\setminus\Omega_0 \\ 43 & \text{in } \Omega_0 \end{cases}$$
subject to the bio-heat transfer equation (Pennes (1948))
$$-\nabla\cdot(\underbrace{\kappa\nabla y}_{\text{thermal conductivity}}) + \underbrace{\omega(y)\pi(y - y_b)}_{\text{effects of blood flow}} = \underbrace{\tfrac{\sigma}{2}\Big|\textstyle\sum_i u_i E_i\Big|^2}_{\text{electromagnetic field}}, \quad \text{in } \Omega$$
and the bound constraints
$$37.0 \le y \le 37.5 \ \text{on } \partial\Omega, \qquad 41.0 \le y \le 45.0 \ \text{in } \Omega_0,$$
where $\Omega_0$ is the tumor domain

8 Large-scale optimization
Consider
$$\min_{x\in\mathbb{R}^n} f(x) \quad \text{s.t.} \quad c_E(x) = 0, \; c_I(x) \le 0$$
where $f : \mathbb{R}^n \to \mathbb{R}$, $c_E : \mathbb{R}^n \to \mathbb{R}^p$ and $c_I : \mathbb{R}^n \to \mathbb{R}^q$ are smooth functions
The best contemporary methods are limited by problem size; e.g., sequential quadratic programming (small to moderate sizes), interior-point methods (moderate to large sizes)
We want the fast solution of problems with millions of variables

9 Challenges in large-scale optimization
Computational issues: large matrices may not be stored; large matrices may not be factored
Algorithmic issues: the problem may be nonconvex; the problem may be ill-conditioned
Computational/algorithmic issues: without matrix factorizations, these difficulties become harder to detect and handle

10 Main contributions
ALGORITHMS: Inexact Newton methods for constrained optimization, broadening the potential application of fast optimization algorithms
THEORY: Global convergence and the potential for fast local convergence
SOFTWARE: New release of Ipopt (Wächter) with Pardiso (Schenk)
ARTICLES:
An Inexact SQP Method for Equality Constrained Optimization, SIAM Journal on Optimization, 19(1), with R. H. Byrd and J. Nocedal
An Inexact Newton Method for Nonconvex Equality Constrained Optimization, Mathematical Programming, Series A, to appear, with R. H. Byrd and J. Nocedal
A Matrix-free Algorithm for Equality Constrained Optimization Problems with Rank-Deficient Jacobians, SIAM Journal on Optimization, to appear, with J. Nocedal and A. Wächter
An Interior-Point Algorithm for Large-Scale Nonlinear Optimization with Inexact Step Computations, submitted to SIAM Journal on Scientific Computing, with O. Schenk and A. Wächter

11 Outline Motivational Example Algorithm development and theoretical results Experimental results Conclusion and final remarks

12 Equality constrained optimization
Consider
$$\min_{x\in\mathbb{R}^n} f(x) \quad \text{s.t.} \quad c(x) = 0$$
The Lagrangian is
$$\mathcal{L}(x,\lambda) := f(x) + \lambda^T c(x)$$
so the first-order optimality conditions are
$$\nabla \mathcal{L}(x,\lambda) = F(x,\lambda) = \begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix} = 0$$

13 Inexact Newton methods
Solve $F(x,\lambda) = 0$, or equivalently $\min \varphi(x,\lambda) := \tfrac12\|F(x,\lambda)\|^2$
Inexact Newton methods compute
$$\nabla F(x_k,\lambda_k) d_k = -F(x_k,\lambda_k) + r_k$$
requiring (Dembo, Eisenstat, Steihaug (1982))
$$\|r_k\| \le \kappa \|F(x_k,\lambda_k)\|, \quad \kappa \in (0,1)$$
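A minimal numpy/scipy sketch of this residual test (not the talk's implementation; the function handles F and J and the truncated GMRES inner solve are illustrative assumptions):

```python
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_newton_step(F, J, z, kappa=1e-3, krylov_iters=20):
    """One inexact Newton step for F(z) = 0, where z stacks (x, lambda).

    The Newton system J(z) d = -F(z) is solved only approximately (here by a
    single truncated GMRES cycle); the step is deemed acceptable when the
    residual r = J d + F satisfies the Dembo-Eisenstat-Steihaug condition
    ||r|| <= kappa * ||F||.
    """
    Fz, Jz = F(z), J(z)
    d, _ = gmres(Jz, -Fz, restart=krylov_iters, maxiter=1)  # truncated solve
    r = Jz @ d + Fz                                          # inexactness residual
    accepted = np.linalg.norm(r) <= kappa * np.linalg.norm(Fz)
    return d, accepted
```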

14 A naïve Newton method for optimization
Consider the problem
$$\min f(x) = x_1 + x_2, \quad \text{s.t.} \quad c(x) = x_1^2 + x_2^2 - 1 = 0$$
which has the first-order optimality conditions
$$F(x,\lambda) = \begin{bmatrix} 1 + 2x_1\lambda \\ 1 + 2x_2\lambda \\ x_1^2 + x_2^2 - 1 \end{bmatrix} = 0$$
A Newton method applied to this problem drives $\|F(x_k,\lambda_k)\|$ rapidly to roughly $10^{-15}$ [iteration table: $k$ vs. $\|F(x_k,\lambda_k)\|$]

15 A naïve Newton method for optimization
Same problem and Newton iteration as the previous slide; the iteration table is extended to also report $f(x_k)$ and $c(x_k)$ alongside $\|F(x_k,\lambda_k)\|$ [iteration table: $k$, $\|F(x_k,\lambda_k)\|$, $f(x_k)$, $c(x_k)$]

16 A naïve Newton method for optimization fails easily
Consider the problem
$$\min f(x) = x_1 + x_2, \quad \text{s.t.} \quad c(x) = x_1^2 + x_2^2 - 1 = 0$$
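A short numpy sketch reproducing the flavor of this example (the starting point is an arbitrary illustrative choice, and the printed values are not those from the slides): plain Newton on F(x, λ) = 0 drives ‖F‖ to machine precision, yet from some starting points the limit is the constrained maximizer (x₁, x₂) = (1/√2, 1/√2) rather than the minimizer.

```python
import numpy as np

def F(z):
    x1, x2, lam = z
    return np.array([1.0 + 2.0 * x1 * lam,      # grad f + grad c * lambda
                     1.0 + 2.0 * x2 * lam,
                     x1**2 + x2**2 - 1.0])      # c(x)

def J(z):
    x1, x2, lam = z
    return np.array([[2.0 * lam, 0.0,        2.0 * x1],
                     [0.0,       2.0 * lam,  2.0 * x2],
                     [2.0 * x1,  2.0 * x2,   0.0]])

z = np.array([0.5, 0.5, 1.0])                   # illustrative start (x1, x2, lambda)
for k in range(10):
    print(k, "||F|| =", np.linalg.norm(F(z)), " f =", z[0] + z[1])
    z = z + np.linalg.solve(J(z), -F(z))        # pure Newton step, no globalization
```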

17 Merit function
Simply minimizing
$$\varphi(x,\lambda) = \tfrac12\|F(x,\lambda)\|^2 = \tfrac12\left\|\begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix}\right\|^2$$
is generally inappropriate for optimization
We use the merit function
$$\phi(x;\pi) := f(x) + \pi\|c(x)\|$$
where $\pi$ is a penalty parameter

18 Algorithm 0: Newton method for optimization
(Assume the problem is convex and regular)
for k = 0, 1, 2, ...
  Solve the primal-dual (Newton) equations
  $$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}$$
  Increase $\pi$, if necessary, so that $\pi_k \ge \|\lambda_k + \delta_k\|$ (yields $D\phi_k(d_k;\pi_k) \le 0$)
  Backtrack from $\alpha_k \leftarrow 1$ to satisfy the Armijo condition
  $$\phi(x_k + \alpha_k d_k;\pi_k) \le \phi(x_k;\pi_k) + \eta\alpha_k D\phi_k(d_k;\pi_k)$$
  Update iterate $(x_{k+1},\lambda_{k+1}) \leftarrow (x_k,\lambda_k) + \alpha_k(d_k,\delta_k)$
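A compact Python sketch of this loop under the slide's convexity and regularity assumptions (the problem callbacks, the ℓ2 norm in the merit function, and the constants are illustrative assumptions, not the talk's implementation):

```python
import numpy as np

def algorithm0(f, grad_f, c, jac_c, hess_L, x, lam,
               pi=1.0, eta=1e-4, tol=1e-8, max_iter=100):
    """Line-search Newton method for min f(x) s.t. c(x) = 0 (Algorithm 0 sketch).
    jac_c(x) returns the n-by-p matrix of constraint gradients, i.e. grad c(x);
    hess_L(x, lam) returns the Hessian of the Lagrangian."""
    for _ in range(max_iter):
        g, ck, A = grad_f(x), c(x), jac_c(x)
        kkt = np.concatenate([g + A @ lam, ck])
        if np.linalg.norm(kkt) <= tol:
            break
        n, p = x.size, ck.size
        K = np.block([[hess_L(x, lam), A], [A.T, np.zeros((p, p))]])
        step = np.linalg.solve(K, -kkt)                   # primal-dual Newton step
        d, delta = step[:n], step[n:]
        pi = max(pi, np.linalg.norm(lam + delta) + 1e-4)  # make d a descent direction
        Dphi = g @ d - pi * np.linalg.norm(ck)            # directional derivative of phi
        phi0 = f(x) + pi * np.linalg.norm(ck)
        alpha = 1.0
        while f(x + alpha * d) + pi * np.linalg.norm(c(x + alpha * d)) > \
              phi0 + eta * alpha * Dphi:
            alpha *= 0.5                                  # Armijo backtracking
        x, lam = x + alpha * d, lam + alpha * delta
    return x, lam
```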

19 Newton methods and sequential quadratic programming
If $H(x_k,\lambda_k)$ is positive definite on the null space of $\nabla c(x_k)^T$, then
$$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}$$
is equivalent to
$$\min_{d\in\mathbb{R}^n} f(x_k) + \nabla f(x_k)^T d + \tfrac12 d^T H(x_k,\lambda_k) d \quad \text{s.t.} \quad c(x_k) + \nabla c(x_k)^T d = 0$$

20 Minimizing a penalty function
Consider the penalty function for
$$\min\ (x-1)^2, \quad \text{s.t.} \quad x = 0,$$
i.e., $\phi(x;\pi) = (x-1)^2 + \pi|x|$, for different values of the penalty parameter $\pi$
[Figure: $\phi(x;\pi)$ for $\pi = 1$] [Figure: $\phi(x;\pi)$ for $\pi = 2$]
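A small matplotlib sketch of the two figures (the plotting window is an arbitrary choice): for π = 1 the penalty function is minimized at x = 1/2, while for π = 2 its minimizer coincides with the constrained solution x = 0, illustrating that the penalty becomes exact once π is sufficiently large.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1.0, 2.0, 401)
for pi in (1.0, 2.0):
    phi = (x - 1.0)**2 + pi * np.abs(x)          # penalty function for min (x-1)^2 s.t. x = 0
    plt.plot(x, phi, label=f"pi = {pi:g}")
plt.axvline(0.0, linestyle=":", color="gray")    # the feasible point x = 0
plt.xlabel("x"); plt.ylabel("phi(x; pi)"); plt.legend()
plt.show()
```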

21 Convergence of Algorithm 0
Assumption: The sequence $\{(x_k,\lambda_k)\}$ is contained in a convex set $\Omega$ over which $f$, $c$, and their first derivatives are bounded and Lipschitz continuous. Also,
(Regularity) $\nabla c(x_k)^T$ has full row rank with singular values bounded below by a positive constant
(Convexity) $u^T H(x_k,\lambda_k) u \ge \mu\|u\|^2$ for some $\mu > 0$, for all $u \in \mathbb{R}^n$ satisfying $u \ne 0$ and $\nabla c(x_k)^T u = 0$
Theorem (Han (1977)): The sequence $\{(x_k,\lambda_k)\}$ yields the limit
$$\lim_{k\to\infty}\left\|\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}\right\| = 0$$

22 Incorporating inexactness
Iterative as opposed to direct methods
Compute
$$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix} + \begin{bmatrix} \rho_k \\ r_k \end{bmatrix}$$
satisfying
$$\left\|\begin{bmatrix} \rho_k \\ r_k \end{bmatrix}\right\| \le \kappa\left\|\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}\right\|, \quad \kappa \in (0,1)$$
If $\kappa$ is not sufficiently small (e.g., $10^{-3}$ rather than a much smaller value), then $d_k$ may be an ascent direction for our merit function; i.e., $D\phi_k(d_k;\pi_k) > 0$ for all $\pi_k \ge \pi_{k-1}$

23 Model reductions
Define the model of $\phi(x;\pi)$:
$$m(d;\pi) := f(x) + \nabla f(x)^T d + \pi\|c(x) + \nabla c(x)^T d\|$$
$d_k$ is acceptable if the model reduction is nonnegative:
$$\Delta m(d_k;\pi_k) := m(0;\pi_k) - m(d_k;\pi_k) = -\nabla f(x_k)^T d_k + \pi_k\big(\|c(x_k)\| - \|c(x_k) + \nabla c(x_k)^T d_k\|\big) \ge 0$$
This ensures $D\phi_k(d_k;\pi_k) \le 0$ (and more)
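In code, the model reduction Δm(d; π) = m(0; π) − m(d; π) is a one-liner (a sketch; the choice of the ℓ2 norm is an assumption):

```python
import numpy as np

def model_reduction(g, c, A, d, pi):
    """Delta m(d; pi) = m(0; pi) - m(d; pi) for m(d; pi) = f + g.T d + pi * ||c + A.T d||,
    with g = grad f(x) and A = grad c(x) (an n-by-p matrix)."""
    return -g @ d + pi * (np.linalg.norm(c) - np.linalg.norm(c + A.T @ d))
```

A step d_k is then acceptable only if this quantity is nonnegative, and the termination tests below require it to be sufficiently positive.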

24 Termination test 1
The search direction $(d_k,\delta_k)$ is acceptable if
$$\left\|\begin{bmatrix} \rho_k \\ r_k \end{bmatrix}\right\| \le \kappa\left\|\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}\right\|, \quad \kappa \in (0,1)$$
and if for $\pi_k = \pi_{k-1}$ and some $\sigma \in (0,1)$ we have
$$\Delta m(d_k;\pi_k) \ge \underbrace{\max\{\tfrac12 d_k^T H(x_k,\lambda_k) d_k,\, 0\} + \sigma\pi_k\max\{\|c(x_k)\|,\, \|r_k\| - \|c(x_k)\|\}}_{\ge\, 0 \text{ for any } d}$$

25 Termination test 2
The search direction $(d_k,\delta_k)$ is acceptable if
$$\|\rho_k\| \le \beta\|c(x_k)\|, \ \beta > 0 \quad \text{and} \quad \|r_k\| \le \epsilon\|c(x_k)\|, \ \epsilon \in (0,1)$$
Increasing the penalty parameter $\pi$ then yields
$$\Delta m(d_k;\pi_k) \ge \underbrace{\max\{\tfrac12 d_k^T H(x_k,\lambda_k) d_k,\, 0\} + \sigma\pi_k\|c(x_k)\|}_{\ge\, 0 \text{ for any } d}$$
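A sketch of both tests as Boolean checks (ρ_k and r_k are the residuals of the current iterative-solver iterate; the parameter values κ, σ, β, ε shown are illustrative placeholders, not the tuned values from the talk):

```python
import numpy as np

def residuals(g, c, A, H, d, delta, lam):
    """rho = H d + A (lam + delta) + g and r = A.T d + c: the dual and primal
    residuals of the primal-dual system at the current inexact solution."""
    return H @ d + A @ (lam + delta) + g, A.T @ d + c

def termination_test_1(g, c, A, H, d, delta, lam, pi, kappa=1e-2, sigma=0.1):
    rho, r = residuals(g, c, A, H, d, delta, lam)
    kkt = np.concatenate([g + A @ lam, c])
    small = np.linalg.norm(np.concatenate([rho, r])) <= kappa * np.linalg.norm(kkt)
    dm = -g @ d + pi * (np.linalg.norm(c) - np.linalg.norm(c + A.T @ d))
    enough = dm >= max(0.5 * d @ H @ d, 0.0) + sigma * pi * max(
        np.linalg.norm(c), np.linalg.norm(r) - np.linalg.norm(c))
    return small and enough

def termination_test_2(g, c, A, H, d, delta, lam, beta=10.0, eps=0.5):
    rho, r = residuals(g, c, A, H, d, delta, lam)
    nc = np.linalg.norm(c)
    return np.linalg.norm(rho) <= beta * nc and np.linalg.norm(r) <= eps * nc
```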

26 Algorithm 1: Inexact Newton for optimization (Byrd, Curtis, Nocedal (2008))
for k = 0, 1, 2, ...
  Iteratively solve
  $$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}$$
  until termination test 1 or 2 is satisfied
  If only termination test 2 is satisfied, increase $\pi$ so that
  $$\pi_k \ge \max\left\{\pi_{k-1},\ \frac{\nabla f(x_k)^T d_k + \max\{\tfrac12 d_k^T H(x_k,\lambda_k) d_k,\, 0\}}{(1-\tau)\big(\|c(x_k)\| - \|r_k\|\big)}\right\}$$
  Backtrack from $\alpha_k \leftarrow 1$ to satisfy
  $$\phi(x_k + \alpha_k d_k;\pi_k) \le \phi(x_k;\pi_k) - \eta\alpha_k\,\Delta m(d_k;\pi_k)$$
  Update iterate $(x_{k+1},\lambda_{k+1}) \leftarrow (x_k,\lambda_k) + \alpha_k(d_k,\delta_k)$
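The penalty-parameter update used when only termination test 2 holds, as a short sketch (the value of τ is an illustrative placeholder):

```python
import numpy as np

def update_penalty(pi_prev, g, c, H, d, r, tau=0.1):
    """Smallest pi_k >= pi_{k-1} satisfying the Algorithm 1 update rule;
    valid when ||r|| < ||c||, which termination test 2 guarantees."""
    trial = (g @ d + max(0.5 * d @ H @ d, 0.0)) / \
            ((1.0 - tau) * (np.linalg.norm(c) - np.linalg.norm(r)))
    return max(pi_prev, trial)
```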

27 Convergence of Algorithm 1
Assumption: The sequence $\{(x_k,\lambda_k)\}$ is contained in a convex set $\Omega$ over which $f$, $c$, and their first derivatives are bounded and Lipschitz continuous. Also,
(Regularity) $\nabla c(x_k)^T$ has full row rank with singular values bounded below by a positive constant
(Convexity) $u^T H(x_k,\lambda_k) u \ge \mu\|u\|^2$ for some $\mu > 0$, for all $u \in \mathbb{R}^n$ satisfying $u \ne 0$ and $\nabla c(x_k)^T u = 0$
Theorem (Byrd, Curtis, Nocedal (2008)): The sequence $\{(x_k,\lambda_k)\}$ yields the limit
$$\lim_{k\to\infty}\left\|\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}\right\| = 0$$

28 Handling nonconvexity and rank deficiency
There are two assumptions we aim to drop:
(Regularity) $\nabla c(x_k)^T$ has full row rank with singular values bounded below by a positive constant
(Convexity) $u^T H(x_k,\lambda_k) u \ge \mu\|u\|^2$ for some $\mu > 0$, for all $u \ne 0$ satisfying $\nabla c(x_k)^T u = 0$
e.g., the problem is not regular if it is infeasible, and it is not convex if there are maximizers and/or saddle points
Without them, Algorithm 1 may stall or may not be well-defined

29 No factorizations means no clue
We might not store or factor
$$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}$$
so we might not know if the problem is nonconvex or ill-conditioned
Common practice is to perturb the matrix to
$$\begin{bmatrix} H(x_k,\lambda_k) + \xi_1 I & \nabla c(x_k) \\ \nabla c(x_k)^T & -\xi_2 I \end{bmatrix}$$
where $\xi_1$ convexifies the model and $\xi_2$ regularizes the constraints
Poor choices of $\xi_1$ and $\xi_2$ can have terrible consequences in the algorithm

30 Our approach for global convergence
Decompose the direction $d_k$ into a normal component (toward the constraints) and a tangential component (toward optimality)
Without convexity, we do not guarantee a minimizer, but our merit function biases the method to avoid maximizers and saddle points

31 Normal component computation
(Approximately) solve, for some $\omega > 0$,
$$\min_v\ \tfrac12\|c(x_k) + \nabla c(x_k)^T v\|^2 \quad \text{s.t.} \quad \|v\| \le \omega\|\nabla c(x_k) c(x_k)\|$$
We only require Cauchy decrease:
$$\|c(x_k)\| - \|c(x_k) + \nabla c(x_k)^T v_k\| \ge \epsilon_v\big(\|c(x_k)\| - \|c(x_k) + \alpha\nabla c(x_k)^T \tilde v_k\|\big)$$
for $\epsilon_v \in (0,1)$, where $\tilde v_k = -\nabla c(x_k) c(x_k)$ is the direction of steepest descent
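A sketch of a normal component that meets the Cauchy-decrease requirement using a single steepest-descent step (a practical implementation would typically refine this with a few CG iterations):

```python
import numpy as np

def cauchy_normal_step(c, A, omega=1.0):
    """Cauchy step for  min 0.5 ||c + A.T v||^2  s.t.  ||v|| <= omega * ||A c||,
    with A = grad c(x).  v_sd = -A c is the steepest-descent direction at v = 0."""
    v_sd = -A @ c
    w = A.T @ v_sd
    if np.dot(w, w) == 0.0:
        return np.zeros_like(v_sd)                # stationary for the feasibility measure
    alpha = np.dot(v_sd, v_sd) / np.dot(w, w)     # exact minimizer along v_sd
    return min(alpha, omega) * v_sd               # clip to the trust region (||v_sd|| = ||A c||)
```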

32 Tangential component computation (idea #1)
Standard practice is to then (approximately) solve
$$\min_u\ (\nabla f(x_k) + H(x_k,\lambda_k) v_k)^T u + \tfrac12 u^T H(x_k,\lambda_k) u \quad \text{s.t.} \quad \nabla c(x_k)^T u = 0, \ \|u\| \le \Delta_k$$
However, maintaining
$$\nabla c(x_k)^T u = 0 \quad \text{and} \quad \|u\| \le \Delta_k$$
can be expensive

33 Tangential component computation
Instead, we formulate the primal-dual system
$$\begin{bmatrix} H(x_k,\lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} u_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k + H(x_k,\lambda_k) v_k \\ 0 \end{bmatrix}$$
Our ideas from before apply!

34 Handling nonconvexity
Convexify the Hessian as in
$$\begin{bmatrix} H(x_k,\lambda_k) + \xi_1 I & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}$$
by monitoring iterates
Hessian modification strategy: increase $\xi_1$ whenever
$$\|u_k\|^2 > \psi\|v_k\|^2, \ \psi > 0 \quad \text{and} \quad u_k^T(H(x_k,\lambda_k) + \xi_1 I) u_k < \theta\|u_k\|^2, \ \theta > 0$$
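The modification test as a small sketch (the constants ψ and θ are illustrative placeholders):

```python
import numpy as np

def needs_larger_xi1(u, v, H_mod, psi=10.0, theta=1e-8):
    """True if the tangential step u is large relative to the normal step v
    yet the modified Hessian H_mod = H + xi_1 * I gives too little curvature,
    in which case xi_1 should be increased and the step recomputed."""
    return np.dot(u, u) > psi * np.dot(v, v) and u @ H_mod @ u < theta * np.dot(u, u)
```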

35 Inexact Newton Algorithm 2 (Curtis, Nocedal, Wächter (2009))
for k = 0, 1, 2, ...
  Approximately solve
  $$\min_v\ \tfrac12\|c(x_k) + \nabla c(x_k)^T v\|^2 \quad \text{s.t.} \quad \|v\| \le \omega\|\nabla c(x_k) c(x_k)\|$$
  to compute a $v_k$ satisfying Cauchy decrease
  Iteratively solve
  $$\begin{bmatrix} H(x_k,\lambda_k) + \xi_1 I & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = \begin{bmatrix} -(\nabla f(x_k) + \nabla c(x_k)\lambda_k) \\ \nabla c(x_k)^T v_k \end{bmatrix}$$
  until termination test 1 or 2 is satisfied, increasing $\xi_1$ as described
  If only termination test 2 is satisfied, increase $\pi$ so that
  $$\pi_k \ge \max\left\{\pi_{k-1},\ \frac{\nabla f(x_k)^T d_k + \max\{\tfrac12 u_k^T(H(x_k,\lambda_k) + \xi_1 I) u_k,\ \theta\|u_k\|^2\}}{(1-\tau)\big(\|c(x_k)\| - \|c(x_k) + \nabla c(x_k)^T d_k\|\big)}\right\}$$
  Backtrack from $\alpha_k \leftarrow 1$ to satisfy
  $$\phi(x_k + \alpha_k d_k;\pi_k) \le \phi(x_k;\pi_k) - \eta\alpha_k\,\Delta m(d_k;\pi_k)$$
  Update iterate $(x_{k+1},\lambda_{k+1}) \leftarrow (x_k,\lambda_k) + \alpha_k(d_k,\delta_k)$

36 Convergence of Algorithm 2
Assumption: The sequence $\{(x_k,\lambda_k)\}$ is contained in a convex set $\Omega$ over which $f$, $c$, and their first derivatives are bounded and Lipschitz continuous
Theorem (Curtis, Nocedal, Wächter (2009)): If all limit points of $\{\nabla c(x_k)^T\}$ have full row rank, then the sequence $\{(x_k,\lambda_k)\}$ yields the limit
$$\lim_{k\to\infty}\left\|\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}\right\| = 0.$$
Otherwise, and if $\{\pi_k\}$ is bounded, then
$$\lim_{k\to\infty}\|\nabla c(x_k) c(x_k)\| = 0 \quad \text{and} \quad \lim_{k\to\infty}\|\nabla f(x_k) + \nabla c(x_k)\lambda_k\| = 0$$

37 Handling inequalities
Interior-point methods are attractive for large applications
Line-search interior-point methods that enforce $c(x_k) + \nabla c(x_k)^T d_k = 0$ may fail to converge globally (Wächter, Biegler (2000))
Fortunately, the trust region subproblem we use to regularize the constraints also saves us from this type of failure!

38 Algorithm 2 (Interior-point version)
Apply Algorithm 2 to the logarithmic-barrier subproblem for $\mu \to 0$:
$$\min\ f(x) - \mu\sum_{i=1}^{q}\ln s_i \quad \text{s.t.} \quad c_E(x) = 0, \ c_I(x) + s = 0$$
Define the primal-dual system
$$\begin{bmatrix} H(x_k,\lambda_{E,k},\lambda_{I,k}) & 0 & \nabla c_E(x_k) & \nabla c_I(x_k) \\ 0 & \mu I & 0 & S_k \\ \nabla c_E(x_k)^T & 0 & 0 & 0 \\ \nabla c_I(x_k)^T & S_k & 0 & 0 \end{bmatrix}\begin{bmatrix} d_k^x \\ d_k^s \\ \delta_{E,k} \\ \delta_{I,k} \end{bmatrix} = -\,\big(\text{barrier KKT residual at } (x_k, s_k, \lambda_{E,k}, \lambda_{I,k})\big)$$
so that the iterate update has
$$\begin{bmatrix} x_{k+1} \\ s_{k+1} \end{bmatrix} \leftarrow \begin{bmatrix} x_k \\ s_k \end{bmatrix} + \alpha_k\begin{bmatrix} d_k^x \\ S_k d_k^s \end{bmatrix}$$
Incorporate a fraction-to-the-boundary rule in the line search and a slack reset in the algorithm to maintain $s \ge \max\{0, -c_I(x)\}$
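A sketch of the fraction-to-the-boundary cap on the step length for the scaled slack step used above (τ_b = 0.995 is a typical interior-point choice, assumed here for illustration):

```python
import numpy as np

def fraction_to_boundary(s, ds_scaled, tau_b=0.995):
    """Largest alpha in (0, 1] with s + alpha * S * d^s >= (1 - tau_b) * s,
    matching the update s_{k+1} = s_k + alpha_k * S_k * d_k^s on the slide."""
    ds = s * ds_scaled                        # unscaled slack step S_k d_k^s
    alpha = 1.0
    for si, dsi in zip(s, ds):
        if dsi < 0.0:
            alpha = min(alpha, -tau_b * si / dsi)
    return alpha
```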

39 Convergence of Algorithm 2 (Interior-point)
Assumption: The sequence $\{(x_k,\lambda_{E,k},\lambda_{I,k})\}$ is contained in a convex set $\Omega$ over which $f$, $c_E$, $c_I$, and their first derivatives are bounded and Lipschitz continuous
Theorem (Curtis, Schenk, Wächter (2009)): For a given $\mu$, Algorithm 2 yields the same limits as in the equality constrained case. If Algorithm 2 yields a sufficiently accurate solution to the barrier subproblem for each element of a sequence $\{\mu_j\} \to 0$, and if the linear independence constraint qualification (LICQ) holds at a limit point $x^*$ of $\{x_j\}$, then there exist Lagrange multipliers $\lambda^*$ such that the first-order optimality conditions of the nonlinear program are satisfied

40 Outline Motivational Example Algorithm development and theoretical results Experimental results Conclusion and final remarks

41 Implementation details
Incorporated in the Ipopt software package (Wächter); linear systems solved with Pardiso (Schenk) using the symmetric quasi-minimal residual method (Freund (1994))
PDE-constrained model problems: 3D domain $\Omega = [0,1]\times[0,1]\times[0,1]$; equidistant Cartesian grid with $N$ grid points; 7-point stencil for discretization
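A minimal scipy sketch of the 7-point-stencil discrete Laplacian used by such model problems (interpreting N as the number of interior grid points per dimension is an assumption for illustration):

```python
import scipy.sparse as sp

def laplacian_3d(N):
    """7-point finite-difference approximation of -Laplace on an N x N x N
    equidistant interior grid of the unit cube (h = 1/(N+1); boundary values
    would be folded into the right-hand side)."""
    h = 1.0 / (N + 1)
    T = (2.0 * sp.identity(N) - sp.eye(N, k=1) - sp.eye(N, k=-1)) / h**2
    I = sp.identity(N)
    return (sp.kron(sp.kron(T, I), I)
            + sp.kron(sp.kron(I, T), I)
            + sp.kron(sp.kron(I, I), T))
```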

42 Boundary control problem
$$\min\ \tfrac12\int_\Omega (y(x) - y_t(x))^2\,dx, \qquad y_t(x) = x_1(x_1-1)x_2(x_2-1)\sin(2\pi x_3)$$
$$\text{s.t.} \quad -\nabla\cdot(e^{y(x)}\nabla y(x)) = 20 \ \text{in } \Omega, \qquad y(x) = u(x) \ \text{on } \partial\Omega \ (u(x) \text{ defined on } \partial\Omega), \qquad 2.5 \le u(x) \le 3.5 \ \text{on } \partial\Omega$$
[Results table: $N$, $n$, $p$, $q$, # nnz, $f$, # iter, CPU sec, for several grid sizes and for a variant with direct linear solves]

43 Hyperthermia Treatment Planning
$$\min\ \tfrac12\int_\Omega (y(x) - y_t(x))^2\,dx, \qquad y_t(x) = \begin{cases} 37 & \text{in } \Omega\setminus\Omega_0 \\ 43 & \text{in } \Omega_0 \end{cases}, \qquad u_j = a_j e^{i\phi_j}$$
$$\text{s.t.} \quad -\Delta y(x) - 10(y(x) - 37) = -u^* M(x) u \ \text{in } \Omega, \qquad M_{jk}(x) = \langle E_j(x), E_k(x)\rangle, \quad E_j = \sin(j x_1 x_2 x_3 \pi)$$
$$37.0 \le y(x) \le 37.5 \ \text{on } \partial\Omega, \qquad 42.0 \le y(x) \le 44.0 \ \text{in } \Omega_0, \qquad \Omega_0 = [3/8, 5/8]^3$$
[Results table: $N$, $n$, $p$, $q$, # nnz, $f$, # iter, CPU sec, including a variant with direct linear solves]

44 Sample solution for N = 40

45 Numerical experiments (currently underway)
Joint with Andreas Wächter (IBM) and Olaf Schenk (U. of Basel)
Hyperthermia treatment planning with real patient geometry (with Matthias Christen, U. of Basel)
Image registration (with Stefan Heldmann, U. of Lübeck)

46 Outline Motivational Example Algorithm development and theoretical results Experimental results Conclusion and final remarks

47 Conclusion and final remarks
Inexact Newton method for optimization with theoretical foundation
Convergence guarantees are as good as for exact methods, sometimes better
Numerical experiments are promising so far, and more to come
