An Inexact Newton Method for Nonlinear Constrained Optimization


1 An Inexact Newton Method for Nonlinear Constrained Optimization
Frank E. Curtis
Numerical Analysis Seminar, January 23, 2009

2 Outline
Motivation and background
Algorithm development and theoretical results
Experimental results
Conclusion and final remarks

3 Outline
Motivation and background
Algorithm development and theoretical results
Experimental results
Conclusion and final remarks

4 Very large-scale optimization

Consider a constrained optimization problem (NLP) of the form

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad c_E(x) = 0, \quad c_I(x) \geq 0$$

where $f : \mathbb{R}^n \to \mathbb{R}$, $c_E : \mathbb{R}^n \to \mathbb{R}^p$, and $c_I : \mathbb{R}^n \to \mathbb{R}^q$ are smooth functions.

We are interested in problems for which the best contemporary methods, i.e.,
penalty methods
sequential quadratic programming
interior-point methods
cannot be employed due to problem size.

5 Hyperthermia treatment planning

Regional hyperthermia is a cancer therapy that aims at heating large and deeply seated tumors by means of radio wave absorption, which results in the killing of tumor cells and makes them more susceptible to accompanying radio or chemotherapy.

6 Hyperthermia treatment planning as an NLP

The problem can be formulated as

$$\min_{y,u} \tfrac{1}{2}\int_\Omega (y(x) - y_t(x))^2\,dx, \qquad \text{where } y_t(x) = \begin{cases} 37 & \text{in } \Omega\setminus\Omega_0 \\ 43 & \text{in } \Omega_0 \end{cases}$$

subject to the bio-heat transfer equation (Pennes, 1948)

$$-\underbrace{\nabla\cdot(\kappa(x)\nabla y(x))}_{\text{thermal conductivity}} + \underbrace{\omega(x, y(x))\pi(x)(y(x) - y_b)}_{\text{effects of blood flow}} = \underbrace{\tfrac{\sigma}{2}\Big|\sum_i u_i E_i(x)\Big|^2}_{\text{electromagnetic field}} \quad \text{in } \Omega$$

and the bound constraints

$$37.0 \leq y(x) \leq 37.5 \quad \text{on } \partial\Omega, \qquad 41.0 \leq y(x) \leq 45.0 \quad \text{in } \Omega_0,$$

where $\Omega_0$ is the tumor domain.

7 Newton's method

An approach that sounds ideal for, say, the equality constrained problem

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad c(x) = 0 \qquad (1.1)$$

would be a Newton method. The Lagrangian for (1.1) is given by

$$\mathcal{L}(x, \lambda) \triangleq f(x) + \lambda^T c(x),$$

so if f and c are differentiable, the first-order optimality conditions are

$$\nabla\mathcal{L}(x, \lambda) = \begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix} = 0.$$

(A single system of nonlinear equations.)

8 Inexact Newton methods

A Newton method for the nonlinear system of equations F(x) = 0 has the iteration

$$\nabla F(x_k)\, d_k = -F(x_k).$$

Applying an iterative linear system solver to this system yields

$$\nabla F(x_k)\, d_k = -F(x_k) + r_k,$$

and if progress is judged by the merit function $\phi(x) \triangleq \tfrac{1}{2}\|F(x)\|^2$, then the (inner) iteration may be terminated as soon as

$$\|r_k\| \leq \kappa_k \|F(x_k)\|$$

(for superlinear convergence, choose $\kappa_k \to 0$); see Dembo, Eisenstat, Steihaug (1982).
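To make the outer/inner structure concrete, here is a minimal numpy sketch (not the talk's PARDISO/SQMR setup): the Newton system is solved only approximately by an inner loop, which is stopped as soon as the linear residual satisfies the relative bound above. The test system and the choice of inner solver (steepest descent on the least-squares residual) are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def inexact_newton(F, J, x, kappa=0.5, tol=1e-10, max_outer=50):
    """Inexact Newton: solve J(x_k) d = -F(x_k) only until ||J d + F|| <= kappa*||F||."""
    for _ in range(max_outer):
        Fk, Jk = F(x), J(x)
        if np.linalg.norm(Fk) <= tol:
            break
        d = np.zeros_like(x)
        while np.linalg.norm(Jk @ d + Fk) > kappa * np.linalg.norm(Fk):
            r = Jk @ d + Fk                              # current linear residual
            g = Jk.T @ r                                 # gradient of 0.5*||J d + F||^2
            alpha = (g @ g) / np.dot(Jk @ g, Jk @ g)     # exact line search along -g
            d = d - alpha * g                            # inner (iterative solver) update
        x = x + d                                        # outer (Newton) update
    return x

# Illustrative 2x2 nonlinear system (an assumption, not from the slides)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
J = lambda x: np.array([[2.0*x[0], 2.0*x[1]], [np.exp(x[0]), 1.0]])
print(inexact_newton(F, J, np.array([1.0, -1.0])))
```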

9 A naïve Newton method for NLP

Consider the problem

$$\min f(x) = x_1 + x_2, \quad \text{s.t.} \quad c(x) = x_1^2 + x_2^2 - 1 = 0,$$

that has the first-order optimality conditions

$$F(x, \lambda) = \begin{bmatrix} 1 + 2x_1\lambda \\ 1 + 2x_2\lambda \\ x_1^2 + x_2^2 - 1 \end{bmatrix} = 0.$$

A Newton method applied to this problem yields

[Table: iterations k versus $\|F(x_k, \lambda_k)\|$, decreasing to roughly $10^{-15}$; numerical entries lost in extraction]

10 A naïve Newton method for NLP

Consider the problem

$$\min f(x) = x_1 + x_2, \quad \text{s.t.} \quad c(x) = x_1^2 + x_2^2 - 1 = 0,$$

that has the first-order optimality conditions

$$F(x, \lambda) = \begin{bmatrix} 1 + 2x_1\lambda \\ 1 + 2x_2\lambda \\ x_1^2 + x_2^2 - 1 \end{bmatrix} = 0.$$

A Newton method applied to this problem yields

[Table: the same run, now also showing $f(x_k)$ and $\|c(x_k)\|$ at each iteration; numerical entries lost in extraction]
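A tiny sketch of this experiment: apply plain Newton to the stationarity system F(x, λ) = 0 and print ‖F‖, f, and |c| along the way. The starting point below is an assumption (the slides do not give one), so the printed numbers will not match the original table; in this particular run the iterates head toward the maximizer x₁ = x₂ = 1/√2, which illustrates the point that driving ‖F‖ to zero says nothing about whether f is being minimized.

```python
import numpy as np

def F(z):
    x1, x2, lam = z
    return np.array([1.0 + 2.0*x1*lam,           # stationarity w.r.t. x1
                     1.0 + 2.0*x2*lam,           # stationarity w.r.t. x2
                     x1**2 + x2**2 - 1.0])       # feasibility

def JF(z):
    x1, x2, lam = z
    return np.array([[2.0*lam, 0.0,     2.0*x1],
                     [0.0,     2.0*lam, 2.0*x2],
                     [2.0*x1,  2.0*x2,  0.0   ]])

z = np.array([2.0, 1.0, -1.0])                   # hypothetical starting point
for k in range(8):
    print(k, np.linalg.norm(F(z)), z[0] + z[1], abs(z[0]**2 + z[1]**2 - 1.0))
    z = z - np.linalg.solve(JF(z), F(z))         # pure Newton step, no globalization
```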

11 Our goal

Our goal is to design an inexact Newton method for constrained optimization. Such a framework is important so the user will be able to balance the computational costs between outer (optimization) and inner (step computation) iterations for their own applications. The method must always remember that an optimization problem is being solved.

Many of the usual techniques in Newton-like methods for optimization for handling
non-convex objective and constraint functions
(near) rank deficiency of the constraint Jacobian
inequality constraints
are not applicable here! (We need to get creative.)

12 Outline
Motivation and background
Algorithm development and theoretical results
Experimental results
Conclusion and final remarks

13 A merit function for optimization

The failure in applying a standard (inexact) Newton method to an optimization problem is due to the choice of merit function! That is,

$$\phi(x, \lambda) = \tfrac{1}{2}\|F(x, \lambda)\|^2 = \tfrac{1}{2}\left\|\begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix}\right\|^2$$

is inappropriate for purposes of optimization.

Our method will be centered around the merit function

$$\phi(x; \pi) \triangleq f(x) + \pi\|c(x)\|$$

(known as an exact penalty function).

14 Algorithm 0: Newton method for optimization

for k = 0, 1, 2, ...
  Evaluate $f(x_k)$, $\nabla f(x_k)$, $c(x_k)$, $\nabla c(x_k)$, and $\nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)$
  Solve the primal-dual equations
  $$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}$$
  Increase the penalty parameter, if necessary, so that $D\phi_k(d_k; \pi_k) \leq 0$
  Perform a line search to find $\alpha_k \in (0, 1]$ satisfying the Armijo condition
  $$\phi(x_k + \alpha_k d_k; \pi_k) \leq \phi(x_k; \pi_k) + \eta\alpha_k D\phi_k(d_k; \pi_k)$$
  Update iterate $(x_{k+1}, \lambda_{k+1}) \leftarrow (x_k, \lambda_k) + \alpha_k(d_k, \delta_k)$
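A compact numpy sketch of Algorithm 0 under simplifying assumptions: the Euclidean norm is used in the penalty function, the KKT matrix is factored exactly (which is precisely what the inexact variants below avoid), and the penalty update shown is one simple rule that enforces $D\phi_k(d_k; \pi_k) \leq 0$; the parameter values are placeholders, not the ones used in the talk.

```python
import numpy as np

def algorithm0(f, grad, c, jac, hess, x, lam, pi=1.0, eta=1e-4, tau=0.1,
               tol=1e-8, max_iter=100):
    """Sketch of Algorithm 0: exact Newton (SQP) step plus exact-penalty line search.
    jac(x) returns the m-by-n Jacobian of c; hess(x, lam) the Hessian of the Lagrangian."""
    phi = lambda x, pi: f(x) + pi * np.linalg.norm(c(x))
    for _ in range(max_iter):
        g, ck, A, H = grad(x), c(x), jac(x), hess(x, lam)
        if np.linalg.norm(np.concatenate([g + A.T @ lam, ck])) <= tol:
            break
        n, m = x.size, ck.size
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])      # primal-dual (KKT) matrix
        step = np.linalg.solve(K, -np.concatenate([g + A.T @ lam, ck]))
        d, delta = step[:n], step[n:]
        nc = np.linalg.norm(ck)
        if nc > 0.0 and g @ d > 0.0:
            pi = max(pi, (g @ d) / ((1.0 - tau) * nc))       # makes Dphi <= 0 below
        Dphi = g @ d - pi * nc                               # uses c + A d = 0 for the exact step
        alpha = 1.0                                          # backtracking Armijo line search
        while alpha > 1e-12 and phi(x + alpha * d, pi) > phi(x, pi) + eta * alpha * Dphi:
            alpha *= 0.5
        x, lam = x + alpha * d, lam + alpha * delta
    return x, lam, pi
```

For instance, on the toy problem from slide 9 one would pass `f = lambda x: x[0] + x[1]`, `grad = lambda x: np.ones(2)`, `c = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])`, `jac = lambda x: np.array([[2*x[0], 2*x[1]]])`, and `hess = lambda x, lam: 2.0 * lam[0] * np.eye(2)`.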

15 Newton methods and sequential quadratic programming

The primal-dual system

$$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x, \lambda) & \nabla c(x) \\ \nabla c(x)^T & 0 \end{bmatrix}\begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix}$$

is equivalent to the SQP subproblem

$$\min_{d \in \mathbb{R}^n} f(x) + \nabla f(x)^T d + \tfrac{1}{2} d^T \nabla^2_{xx}\mathcal{L}(x, \lambda)\, d \quad \text{s.t.} \quad c(x) + \nabla c(x)^T d = 0$$

if $\nabla^2_{xx}\mathcal{L}(x, \lambda)$ is positive definite on the null space of $\nabla c(x)^T$.

16 Minimizing an exact penalty function

Consider the exact penalty function for

$$\min (x-1)^2, \quad \text{s.t.} \quad x = 0,$$

i.e., $\phi(x; \pi) = (x-1)^2 + \pi|x|$, for different values of the penalty parameter $\pi$.

[Figures: $\phi(x; \pi)$ for $\pi = 1$ and for $\pi = 2$]
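A two-line numerical check of what the figures show (minimizers located on a grid, so the values are approximate): the constrained solution x = 0 becomes an unconstrained minimizer of φ(·; π) only once the penalty parameter is large enough.

```python
import numpy as np

phi = lambda x, pi: (x - 1.0)**2 + pi * np.abs(x)    # exact penalty function
xs = np.linspace(-0.5, 1.5, 2001)
for pi in (1.0, 2.0):
    print(pi, xs[np.argmin(phi(xs, pi))])            # pi=1 -> approx. 0.5, pi=2 -> approx. 0.0
```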

17 Convergence of Algorithm 0

Assumption: The sequence $\{(x_k, \lambda_k)\}$ is contained in a convex set $\Omega$ and $f$, $c$, and their first derivatives are bounded and Lipschitz continuous on $\Omega$

(Regularity) $\nabla c(x_k)^T$ has full row rank with smallest singular value bounded below by a positive constant

(Convexity) $u^T \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)\, u \geq \mu\|u\|^2$ for some $\mu > 0$, for all $u \in \mathbb{R}^n$ satisfying $u \neq 0$ and $\nabla c(x_k)^T u = 0$

Theorem: The sequence $\{(x_k, \lambda_k)\}$ yields the limit

$$\lim_{k \to \infty}\left\|\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}\right\| = 0$$

18 Incorporating inexactness

An inexact solution to the primal-dual equations, i.e.,

$$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x, \lambda) & \nabla c(x) \\ \nabla c(x)^T & 0 \end{bmatrix}\begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix} + \begin{bmatrix} \rho \\ r \end{bmatrix},$$

may yield $D\phi_k(d_k; \pi_k) > 0$ for any $\pi_k \geq \pi_{k-1}$, even if for some $\kappa \in (0, 1)$ we have

$$\left\|\begin{bmatrix} \rho \\ r \end{bmatrix}\right\| \leq \kappa\left\|\begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix}\right\|$$

(the standard condition required in inexact Newton methods).

It is the merit function that tells us what steps are appropriate, and so it should be what tells us which inexact solutions to the Newton (primal-dual) system are acceptable search directions.

19 Model reductions

Modern optimization techniques focus on models of the problem functions. We apply an iterative solver to the primal-dual system

$$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x, \lambda) & \nabla c(x) \\ \nabla c(x)^T & 0 \end{bmatrix}\begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix}$$

(since an exact solution will produce an acceptable search direction). A search direction is deemed acceptable if the improvement in the model

$$m(d; \pi) \triangleq f(x) + \nabla f(x)^T d + \pi\|c(x) + \nabla c(x)^T d\|$$

is sufficiently large; i.e., if we have

$$\Delta m(d; \pi) \triangleq m(0; \pi) - m(d; \pi) = -\nabla f(x)^T d + \pi\big(\|c(x)\| - \|c(x) + \nabla c(x)^T d\|\big) \geq 0.$$
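The model reduction Δm is cheap to evaluate for any trial step produced by the inner solver. A one-function sketch, with the Euclidean norm assumed (as in the other sketches here):

```python
import numpy as np

def model_reduction(g, ck, A, d, pi):
    """Delta m(d; pi) = -g^T d + pi*(||c|| - ||c + A d||), with g = grad f and A = Jacobian of c."""
    return -(g @ d) + pi * (np.linalg.norm(ck) - np.linalg.norm(ck + A @ d))
```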

20 Termination test 1

An inexact solution $(d, \delta)$ to the primal-dual system is acceptable if

$$\left\|\begin{bmatrix} \rho \\ r \end{bmatrix}\right\| \leq \kappa\left\|\begin{bmatrix} \nabla f(x) + \nabla c(x)\lambda \\ c(x) \end{bmatrix}\right\|$$

for some $\kappa \in (0, 1)$ and if

$$\Delta m(d; \pi) \geq \max\{\tfrac{1}{2} d^T \nabla^2_{xx}\mathcal{L}(x, \lambda)\, d,\ 0\} + \sigma\pi\max\{\|c(x)\|,\ \|r\| - \|c(x)\|\}$$

for some $\sigma \in (0, 1)$.

21 Termination test 2

An inexact solution to the primal-dual system is acceptable if, for some $\beta > 0$ and $\epsilon \in (0, 1)$,

$$\|\rho\| \leq \beta\|c(x)\| \quad \text{and} \quad \|r\| \leq \epsilon\|c(x)\|.$$

Increasing the penalty parameter can then yield

$$\Delta m(d; \pi) \geq \max\{\tfrac{1}{2} d^T \nabla^2_{xx}\mathcal{L}(x, \lambda)\, d,\ 0\} + \sigma\pi\|c(x)\|.$$
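Putting the two tests into code form for a candidate inexact solution (d, δ), with the residuals ρ and r defined as above: this is a sketch of the tests as stated on the slides, with the Euclidean norm and with placeholder parameter values; the model reduction is recomputed inline using the identity ‖c + ∇cᵀd‖ = ‖r‖.

```python
import numpy as np

def residuals(g, ck, A, H, lam, d, delta):
    """rho and r for the primal-dual system, evaluated at the trial step (d, delta)."""
    rho = H @ d + A.T @ delta + g + A.T @ lam
    r = A @ d + ck
    return rho, r

def termination_test_1(g, ck, A, H, lam, d, delta, pi, kappa=0.1, sigma=0.5):
    rho, r = residuals(g, ck, A, H, lam, d, delta)
    small_residual = (np.linalg.norm(np.concatenate([rho, r]))
                      <= kappa * np.linalg.norm(np.concatenate([g + A.T @ lam, ck])))
    dm = -(g @ d) + pi * (np.linalg.norm(ck) - np.linalg.norm(r))   # model reduction
    enough_reduction = (dm >= max(0.5 * d @ (H @ d), 0.0)
                        + sigma * pi * max(np.linalg.norm(ck),
                                           np.linalg.norm(r) - np.linalg.norm(ck)))
    return small_residual and enough_reduction

def termination_test_2(g, ck, A, H, lam, d, delta, beta=10.0, eps=0.5):
    rho, r = residuals(g, ck, A, H, lam, d, delta)
    return (np.linalg.norm(rho) <= beta * np.linalg.norm(ck)
            and np.linalg.norm(r) <= eps * np.linalg.norm(ck))
```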

22 Algorithm 1: Inexact Newton for optimization

for k = 0, 1, 2, ...
  Evaluate $f(x_k)$, $\nabla f(x_k)$, $c(x_k)$, $\nabla c(x_k)$, and $\nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)$
  Iteratively solve the primal-dual equations
  $$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}$$
  until termination test 1 or 2 is satisfied
  If only termination test 2 is satisfied, increase the penalty parameter so that
  $$\pi_k \geq \frac{\nabla f(x_k)^T d_k + \max\{\tfrac{1}{2} d_k^T \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)\, d_k,\ 0\}}{(1 - \tau)\big(\|c(x_k)\| - \|r_k\|\big)}$$
  Perform a line search to find $\alpha_k \in (0, 1]$ satisfying the sufficient decrease condition
  $$\phi(x_k + \alpha_k d_k; \pi_k) \leq \phi(x_k; \pi_k) - \eta\alpha_k \Delta m(d_k; \pi_k)$$
  Update iterate $(x_{k+1}, \lambda_{k+1}) \leftarrow (x_k, \lambda_k) + \alpha_k(d_k, \delta_k)$

23 Convergence of Algorithm 1

Assumption: The sequence $\{(x_k, \lambda_k)\}$ is contained in a convex set $\Omega$ and $f$, $c$, and their first derivatives are bounded and Lipschitz continuous on $\Omega$

(Regularity) $\nabla c(x_k)^T$ has full row rank with smallest singular value bounded below by a positive constant

(Convexity) $u^T \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)\, u \geq \mu\|u\|^2$ for some $\mu > 0$, for all $u \in \mathbb{R}^n$ satisfying $u \neq 0$ and $\nabla c(x_k)^T u = 0$

Theorem (Byrd, Curtis, Nocedal (2008)): The sequence $\{(x_k, \lambda_k)\}$ yields the limit

$$\lim_{k \to \infty}\left\|\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}\right\| = 0$$

24 Take-away idea #1

In an (inexact) Newton method for optimization, global convergence depends on the use of a merit function appropriate for optimization. This function should dictate which inexact solutions to the primal-dual system are acceptable search directions.

25 Handling nonconvexity and rank deficiency

There are two assumptions we aim to drop:

(Regularity) $\nabla c(x_k)^T$ has full row rank with smallest singular value bounded below by a positive constant

(Convexity) $u^T \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)\, u \geq \mu\|u\|^2$ for some $\mu > 0$, for all $u \in \mathbb{R}^n$ satisfying $u \neq 0$ and $\nabla c(x_k)^T u = 0$

If the constraints are ill-conditioned, Algorithm 1 may compute long, unproductive search directions. If the problem is non-convex, Algorithm 1 can easily converge to a maximizer or saddle point. There is also a danger in each case that the algorithm may stall during a given iteration since an acceptable search direction cannot be computed.

26 No factorizations means no clue

A significant challenge in overcoming these obstacles is that we have no way of recognizing when the problem is non-convex or ill-conditioned, since we do not form or factor the primal-dual matrix

$$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}.$$

Common practice is to perturb the matrix to be

$$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k) + \xi_1 I & \nabla c(x_k) \\ \nabla c(x_k)^T & -\xi_2 I \end{bmatrix},$$

where $\xi_1$ convexifies the model and $\xi_2$ regularizes the constraints, but arbitrary choices of these parameters can have terrible consequences on the behavior of the algorithm.

27 Decomposing the step into two components We guide the step computation by decomposing the primal search direction into a normal component (toward satisfying the constraints) and a tangential component (toward optimality)

28 Normal component computation

We define the trust region subproblem, for some $\omega > 0$,

$$\min_v \tfrac{1}{2}\|c(x_k) + \nabla c(x_k)^T v\|^2 \quad \text{s.t.} \quad \|v\| \leq \omega\|\nabla c(x_k)\, c(x_k)\|.$$

An (approximate) solution $v_k$ satisfying the Cauchy decrease condition

$$\|c(x_k)\| - \|c(x_k) + \nabla c(x_k)^T v_k\| \geq \epsilon_v\big(\|c(x_k)\| - \|c(x_k) + \nabla c(x_k)^T \tilde{v}_k\|\big)$$

for $\epsilon_v \in (0, 1)$, where $\tilde{v}_k = -\nabla c(x_k)\, c(x_k)$ is the direction of steepest descent, can be computed without any matrix factorizations (conjugate gradient method, inexact dogleg method); see the sketch below.

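The Cauchy point of this trust-region subproblem already satisfies the decrease condition above (with ε_v = 1) and needs only Jacobian-vector products; a dogleg method would extend it toward a least-squares step. A minimal sketch, writing J for ∇c(x_k)ᵀ (the m-by-n Jacobian of c):

```python
import numpy as np

def cauchy_normal_step(ck, J, omega=1.0):
    """Cauchy step for  min 0.5*||c + J v||^2  s.t.  ||v|| <= omega*||J^T c||."""
    g = J.T @ ck                        # gradient of the model at v = 0; -g is steepest descent
    if np.linalg.norm(g) == 0.0:
        return np.zeros(J.shape[1])     # already stationary for the feasibility measure
    Jg = J @ g
    t_star = (g @ g) / (Jg @ Jg)        # unconstrained minimizer along -g
    t = min(t_star, omega)              # ||v|| = t*||g|| <= omega*||g|| enforces the trust region
    return -t * g
```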

30 Tangential component computation (idea #1)

Standard practice is to then consider the trust region subproblem

$$\min_u \big(\nabla f(x_k) + \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k) v_k\big)^T u + \tfrac{1}{2} u^T \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)\, u \quad \text{s.t.} \quad \nabla c(x_k)^T u = 0, \quad \|u\| \leq \Delta_k.$$

Note that an exact solution would yield $\nabla c(x_k)^T d_k = \nabla c(x_k)^T v_k$.

An iterative procedure for solving this problem (e.g., a projected conjugate gradient method), however, requires repeated (inexact) projections of vectors onto the null space of $\nabla c(x_k)^T$ (to maintain $\nabla c(x_k)^T u \approx 0$ while respecting the trust region constraint).

31 Tangential component computation

Instead, we formulate the primal-dual system

$$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} u_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k + \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k) v_k \\ 0 \end{bmatrix}.$$

Our ideas for assessing search directions based on reductions in a model of an exact penalty function carry over to this perturbed system.

32 Handling non-convexity

The only remaining component is a mechanism for handling non-convexity. We follow the idea of convexifying the Hessian matrix, as in

$$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k) + \xi_1 I & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix},$$

by monitoring properties of the computed trial search directions.

Hessian modification strategy: During the iterative step computation, modify the Hessian matrix by increasing its smallest eigenvalue whenever an approximate solution satisfies, for $\psi, \theta > 0$,

$$\|u_k\| > \psi\|v_k\| \quad \text{and} \quad \tfrac{1}{2} u_k^T \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)\, u_k < \theta\|u_k\|^2.$$
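The monitoring test itself is a one-liner; what to do once it triggers (add a multiple of the identity to the Hessian block and continue or restart the inner solver) is a design choice discussed in the papers, so only the test is sketched here, with placeholder values for ψ and θ:

```python
import numpy as np

def needs_convexification(u, v, H, psi=0.1, theta=1e-4):
    """True when the tangential candidate u is long relative to the normal step v
    yet does not exhibit sufficient positive curvature in the current Hessian H."""
    return (np.linalg.norm(u) > psi * np.linalg.norm(v)
            and 0.5 * (u @ (H @ u)) < theta * np.linalg.norm(u)**2)
```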

33 Inexact Newton Algorithm 2

for k = 0, 1, 2, ...
  Evaluate $f(x_k)$, $\nabla f(x_k)$, $c(x_k)$, $\nabla c(x_k)$, and $\nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)$
  Compute an approximate solution $v_k$ to the trust region subproblem
  $$\min_v \tfrac{1}{2}\|c(x_k) + \nabla c(x_k)^T v\|^2, \quad \text{s.t.} \quad \|v\| \leq \omega\|\nabla c(x_k)\, c(x_k)\|$$
  satisfying the normal component condition
  Iteratively solve the primal-dual equations
  $$\begin{bmatrix} \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^T & 0 \end{bmatrix}\begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ -\nabla c(x_k)^T v_k \end{bmatrix}$$
  until termination test 1 or 2 is satisfied, modifying $\nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)$ based on the Hessian modification strategy
  If only termination test 2 is satisfied, increase the penalty parameter so that
  $$\pi_k \geq \frac{\nabla f(x_k)^T d_k + \max\{\tfrac{1}{2} u_k^T \nabla^2_{xx}\mathcal{L}(x_k, \lambda_k)\, u_k,\ \theta\|u_k\|^2\}}{(1 - \tau)\big(\|c(x_k)\| - \|c(x_k) + \nabla c(x_k)^T d_k\|\big)}$$
  Perform a line search to find $\alpha_k \in (0, 1]$ satisfying the sufficient decrease condition
  $$\phi(x_k + \alpha_k d_k; \pi_k) \leq \phi(x_k; \pi_k) - \eta\alpha_k \Delta m(d_k; \pi_k)$$
  Update iterate $(x_{k+1}, \lambda_{k+1}) \leftarrow (x_k, \lambda_k) + \alpha_k(d_k, \delta_k)$

34 Convergence of Algorithm 2

Assumption: The sequence $\{(x_k, \lambda_k)\}$ is contained in a convex set $\Omega$ on which $f$, $c$, and their first derivatives are bounded and Lipschitz continuous

Theorem (Curtis, Nocedal, Wächter (2009)): If all limit points of $\{\nabla c(x_k)^T\}$ have full row rank, then the sequence $\{(x_k, \lambda_k)\}$ yields the limit

$$\lim_{k \to \infty}\left\|\begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix}\right\| = 0.$$

Otherwise, if $\{\pi_k\}$ is bounded, then

$$\lim_{k \to \infty}\|\nabla c(x_k)\, c(x_k)\| = 0 \quad \text{and} \quad \lim_{k \to \infty}\|\nabla f(x_k) + \nabla c(x_k)\lambda_k\| = 0.$$

35 Take-away ideas #2 and #3

In an inexact Newton method for optimization...

ill-conditioning can be handled by regularizing the constraint model with a trust region. However, a trust region for the tangential component may result in an expensive algorithm. The method we propose avoids these costs and emulates a true inexact Newton method.

non-convexity can be handled by monitoring properties of the iterative solver iterates and modifying the Hessian only when it appears that a sufficient model reduction may not be obtained. This process may not (and need not) result in a convex model.

36 Handling inequalities

The most attractive class of algorithms for our purposes are interior point methods. A class of interior point methods that satisfy $c(x_k) + \nabla c(x_k)^T d_k = 0$ may fail to converge from remote starting points (Wächter, Biegler (2000)). Fortunately, the trust region subproblem we use to regularize the constraints also saves us from this type of failure!

37 Algorithm 2 (Interior-point version)

We apply Algorithm 2 to the logarithmic-barrier subproblem

$$\min f(x) - \mu\sum_{i=1}^{q}\ln s_i, \quad \text{s.t.} \quad c_E(x) = 0, \quad c_I(x) = s, \qquad \text{for } \mu > 0.$$

We scale quantities related to the slack variables $d^s_k$ so that the step computation subproblems still have the form

$$\min \tfrac{1}{2}\|c(x_k, s_k) + A(x_k, s_k) v\|^2, \quad \text{s.t.} \quad \|v\| \leq \omega\|A(x_k, s_k)^T c(x_k, s_k)\|$$

and

$$\begin{bmatrix} H(x_k, s_k, \lambda_k) & A(x_k, s_k)^T \\ A(x_k, s_k) & 0 \end{bmatrix}\begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} g(x_k, s_k) + A(x_k, s_k)^T \lambda_k \\ c(x_k, s_k) \end{bmatrix},$$

where the Hessian $H_k$ and constraint Jacobian $A_k$ have the same properties as before.

We incorporate a fraction-to-the-boundary rule in the line search and a slack reset in the algorithm to maintain $s \geq \max\{0, c_I(x)\}$.
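One small ingredient mentioned here is easy to state in code: the fraction-to-the-boundary rule caps the slack step so that no slack component is reduced by more than a fraction τ of its current value. This is a standard sketch of that rule with a placeholder value of τ; the slack-reset rule is not reproduced here.

```python
import numpy as np

def fraction_to_boundary(s, ds, tau=0.995):
    """Largest alpha in (0, 1] such that s + alpha*ds >= (1 - tau)*s, for s > 0 componentwise."""
    alpha = 1.0
    for si, dsi in zip(s, ds):
        if dsi < 0.0:
            alpha = min(alpha, -tau * si / dsi)
    return alpha
```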

38 Convergence of Algorithm 2 (Interior-point)

Under the same assumption as before, our convergence theorem for Algorithm 2 still holds for the barrier subproblem for a given $\mu$.

Theorem (Curtis, Schenk, Wächter (2009)): If Algorithm 2 yields a sufficiently accurate solution to the barrier subproblem for each $\mu_j$ in a sequence $\{\mu_j\} \to 0$, and if the linear independence constraint qualification (LICQ) holds at a limit point $x^*$ of $\{x_j\}$, then there exist Lagrange multipliers $\lambda^*$ such that the first-order optimality conditions of the nonlinear program are satisfied.

39 Take-away idea #4

In an inexact Newton method for optimization... inequality constraints can be easily incorporated into Algorithm 2 if quantities related to the slack variables are scaled appropriately.

40 Outline
Motivation and background
Algorithm development and theoretical results
Experimental results
Conclusion and final remarks

41 Implementation details

We implemented our interior-point version of Algorithm 2 in the IPOPT software package and used PARDISO for the iterative linear system solves. The normal and tangential (primal-dual) step computations were performed with the symmetric quasi-minimal residual method (SQMR). We solved two model PDE-constrained problems on the three-dimensional domain $\Omega = [0,1] \times [0,1] \times [0,1]$, using an equidistant Cartesian grid with N grid points in each spatial direction and a standard 7-point stencil for discretizing the operators.

42 Boundary control problem

Let $u(x)$ be defined on $\partial\Omega$ and solve

$$\min \tfrac{1}{2}\int_\Omega (y(x) - y_t(x))^2\,dx$$
$$\text{s.t.} \quad -\nabla\cdot(e^{y(x)}\nabla y(x)) = 20 \quad \text{in } \Omega,$$
$$y(x) = u(x) \quad \text{on } \partial\Omega,$$
$$2.5 \leq u(x) \leq 3.5 \quad \text{on } \partial\Omega,$$

where $y_t(x) = x_1(x_1 - 1)x_2(x_2 - 1)\sin(2\pi x_3)$.

[Table: results for increasing N, with columns N, n, p, q, # nonzeros, f, # iterations, CPU seconds; numerical entries lost in extraction, one row marked "(direct)"]

43 (Simplified) Hyperthermia Treatment Planning

Let $u_j = a_j e^{i\phi_j}$ be a complex vector of amplitudes $a \in \mathbb{R}^{10}$ and phases $\phi \in \mathbb{R}^{10}$ of 10 antennas, let $M_{ij}(x) = \langle E_i(x), E_j(x)\rangle$ with $E_j = \sin(j x_1 x_2 x_3 \pi)$, and let the tumor be defined by $\Omega_0 = [3/8, 5/8]^3$, and solve

$$\min \tfrac{1}{2}\int_\Omega (y(x) - y_t(x))^2\,dx$$
$$\text{s.t.} \quad -\Delta y(x) - 10(y(x) - 37) = u^* M(x) u \quad \text{in } \Omega,$$
$$37.0 \leq y(x) \leq 37.5 \quad \text{on } \partial\Omega,$$
$$42.0 \leq y(x) \leq 44.0 \quad \text{in } \Omega_0,$$

where $y_t(x) = 37$ in $\Omega\setminus\Omega_0$ and $y_t(x) = 43$ in $\Omega_0$.

[Table: results for increasing N, with columns N, n, p, q, # nonzeros, f, # iterations, CPU seconds; numerical entries lost in extraction, one row marked "(direct)"]

44 (Simplified) Hyperthermia Treatment Planning

An example solution (N = 40).

[Figures: computed temperature $y(x)$ and target temperature $y_t(x)$]

45 Numerical experiments (coming soon)

Further numerical experimentation is now underway with Andreas Wächter (IBM) and Olaf Schenk (U. of Basel):
Hyperthermia treatment planning with real patient geometry (with Matthias Christen, U. of Basel)
Geophysical modeling applications (with Johannes Huber, U. of Basel)
Image registration (with Stefan Heldmann, U. of Lübeck)

46 Outline
Motivation and background
Algorithm development and theoretical results
Experimental results
Conclusion and final remarks

47 Conclusion and final remarks

We have presented an inexact Newton method for constrained optimization
The method hinges on a merit function appropriate for optimization, and the iterative linear system solves focus on a model of this function to decide when to terminate
We have extended the basic algorithm to solve non-convex and ill-conditioned problems, and to solve problems with inequality constraints
The algorithm is globally convergent to first-order optimal points or infeasible stationary points
Numerical experiments are promising so far, and further testing on real-world problems is underway
