Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization


Slide 1: Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Frank E. Curtis, Lehigh University

involving joint work with
- James V. Burke, University of Washington
- Daniel P. Robinson, Johns Hopkins University
- Andreas Wächter, Northwestern University
- Hao Wang, Lehigh University

ISMP 2012, Berlin, Germany, 20 August 2012

Slide 2: Outline

- Motivation
- Exact SQP
- Inexact SQP
- Summary

Slide 3: Outline (section: Motivation)

Slide 4: Two topics of this talk

1. Fill in a gap in algorithmic development for smaller-scale problems.
2. Propose a method for allowing/controlling inexactness for larger-scale problems.

Slide 5: Two contributions

1. Sequential quadratic optimization method with fast infeasibility detection
   - Guaranteed progress towards (linearized) feasibility each iteration
   - Carefully constructed update of a penalty parameter
   - Separate multipliers for feasibility and optimality
   - Global and fast local convergence for both feasible and infeasible problems

2. Active-set method with inexact step computations for large-scale problems
   - Framework extended from the method above
   - Allows generic inexact solution of subproblems
   - Careful control of semi-smooth residual functions
   - Global and fast local convergence (at least, that's the plan!)

Slide 6: Outline (section: Exact SQP)

Slide 7: Constrained optimization

Problem statements used throughout:

(OP): $\min_x f(x)$ s.t. $c(x) = 0$, $x \ge 0$

(FP): $\min_x v(x)$, where $v(x) := \left\| \begin{bmatrix} c(x) \\ \min\{x, 0\} \end{bmatrix} \right\|_1$

(PP): $\min_x \phi(x, \mu)$, where $\phi(x, \mu) := \mu f(x) + v(x)$

(FJ): $F(x, y, z, \mu) := \mu f(x) + y^T c(x) - z^T x$

Suppose that our optimization problem (OP) is infeasible:
- modeling errors
- data inconsistency
- perturbed/added constraints

Then we want to solve a feasibility problem (FP). Many algorithms/codes do this already, either by switching back-and-forth or by transitioning (via penalization).
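
To make these definitions concrete, here is a small numpy sketch (mine, not from the talk) that evaluates the $\ell_1$ infeasibility measure $v$ and the penalty function $\phi$; the toy objective and constraint are hypothetical, for illustration only.

```python
import numpy as np

def v(x, c):
    # l1 infeasibility measure: ||[c(x); min(x, 0)]||_1
    return np.sum(np.abs(c(x))) + np.sum(np.abs(np.minimum(x, 0.0)))

def phi(x, mu, f, c):
    # exact penalty function: mu * f(x) + v(x)
    return mu * f(x) + v(x, c)

# Toy instance (hypothetical): min x0 + x1  s.t.  x0^2 + x1^2 - 1 = 0,  x >= 0
f = lambda x: x[0] + x[1]
c = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0])

x = np.array([0.5, -0.2])
print(v(x, c))            # infeasibility at x
print(phi(x, 0.1, f, c))  # penalty value at x with mu = 0.1
```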

Slide 8: Main contribution (Part 1)

Active-set method that completes the convergence picture for nonlinear optimization:

Problem type | Global convergence | Fast local convergence
-------------|--------------------|-----------------------
Feasible     | yes                | yes
Infeasible   | yes                | ?

Slide 9: Sequential quadratic optimization (SQO)

Compute direction $d$ and multipliers $(y, z)$ for (OP) by solving

  $\min_d\ f_k + g_k^T d + \tfrac{1}{2} d^T H_k d$  s.t. $c_k + J_k^T d = 0$, $x_k + d \ge 0$,

where $g(x) := \nabla f(x)$, $J(x) := \nabla c(x)$, and $H(x, y, z, \mu) := \nabla^2_{xx} F(x, y, z, \mu)$.

Observe:
- May reduce to Newton's method if the active set is identified.
- However, a globalization mechanism is needed.
- Moreover, this subproblem may be infeasible!

Slide 10: Recent literature (rich history of SQO methods)

- Fletcher, Leyffer (2002)
- Byrd, Gould, Nocedal (2005)
- Byrd, Nocedal, Waltz (2008)
- Byrd, Curtis, Nocedal (2010)
- Byrd, López-Calva, Nocedal (2010)
- Gould, Robinson (2010)
- Morales, Nocedal, Wu (2010)

Slide 11: Issue faced by all optimization solvers

Move towards feasibility and/or objective decrease?

Slide 12: FILTER and steering strategies

FILTER: "... we make use of a property of the phase I algorithm in our QP solver. If an infeasible QP is detected, [a feasibility restoration phase is entered]."

Steering methods solve a sequence of constrained subproblems to decide how to set the penalty parameter.

Slide 13: Our approach: Two-phase strategy

- Feasibility step to determine possible progress toward feasibility.
- Optimality step exploits information obtained by the feasibility step.
- Objective function never ignored (unlike FILTER).
- At most two subproblems solved per iteration (unlike steering).
- Reduces to SQO for the optimization problem in feasible cases.
- Reduces to (perturbed) SQO for the feasibility problem in infeasible cases.

Slide 14: Ensuring global convergence

(1) Compute feasibility step $\bar d_k$ to determine the highest level of feasibility improvement.
(2) Compute optimality step $\hat d_k$.
(3) Set $d_k \leftarrow \tau_k \hat d_k + (1 - \tau_k) \bar d_k$ to obtain proportional feasibility improvement.
(4) Update the penalty parameter to ensure sufficient decrease in a merit function.

Slide 15: Feasibility step

(1) Compute feasibility step $\bar d_k$. Solve for $(\bar d_k, r_k, s_k, t_k)$ and $(y^E_{k+1}, z^I_{k+1})$:

  $\min_{d,r,s,t}\ e^T (r + s) + e^T t + \tfrac{1}{2} d^T H_k d$
  s.t. $c_k + J_k^T d = r - s$, $x_k + d \ge -t$, $(r, s, t) \ge 0$.    (QO1)

The resulting $\bar d_k$ yields a reduction in a local model of $v$:

  $l_k(d) := \left\| \begin{bmatrix} c_k + J_k^T d \\ \min\{x_k + d, 0\} \end{bmatrix} \right\|_1.$

That is, it yields $\Delta l_k(\bar d_k) = l_k(0) - l_k(\bar d_k) \ge 0$.
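
Evaluating the model $l_k$ and its reduction $\Delta l_k$ is straightforward once $c_k$, $J_k$, and $x_k$ are available; the following sketch uses my own names and conventions, with $J_k$ stored as $\nabla c(x_k)$ (an $n \times m$ matrix), and mirrors the definitions above.

```python
import numpy as np

def l_model(d, ck, Jk, xk):
    # l_k(d) = ||[c_k + J_k^T d; min(x_k + d, 0)]||_1
    return (np.sum(np.abs(ck + Jk.T @ d))
            + np.sum(np.abs(np.minimum(xk + d, 0.0))))

def delta_l(d, ck, Jk, xk):
    # Delta l_k(d) = l_k(0) - l_k(d); nonnegative for the feasibility step
    return l_model(np.zeros_like(d), ck, Jk, xk) - l_model(d, ck, Jk, xk)
```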

Slide 16: Optimality step

(2) Compute optimality step $\hat d_k$. Determine sets $E_k$ and $I_k$ for which $\bar d_k$ is linearly feasible:

  $c_k^{E_k} + (J_k^{E_k})^T \bar d_k = 0, \quad x_k^{I_k} + \bar d_k^{I_k} \ge 0.$

Solve for $(\hat d_k, \hat r^{E_k^c}, \hat s^{E_k^c}, \hat t^{I_k^c})$ and $(\hat y^E_{k+1}, \hat z^I_{k+1})$:

  $\min_{d,\, r^{E_k^c},\, s^{E_k^c},\, t^{I_k^c}}\ \mu_k g_k^T d + e^T (r^{E_k^c} + s^{E_k^c}) + e^T t^{I_k^c} + \tfrac{1}{2} d^T \hat H_k d$
  s.t. $c_k^{E_k} + (J_k^{E_k})^T d = 0$
       $c_k^{E_k^c} + (J_k^{E_k^c})^T d = r^{E_k^c} - s^{E_k^c}$
       $x_k^{I_k} + d^{I_k} \ge 0$
       $x_k^{I_k^c} + d^{I_k^c} \ge -t^{I_k^c}$
       $(r^{E_k^c}, s^{E_k^c}, t^{I_k^c}) \ge 0$.    (QO2)

Slide 17: Search direction

(3) Set $d_k \leftarrow \tau_k \hat d_k + (1 - \tau_k) \bar d_k$. Choose $\beta \in (0, 1)$ and find the largest $\tau_k \in [0, 1]$ such that $d_k$ satisfies

  $\Delta l_k(d_k) \ge \beta\, \Delta l_k(\bar d_k).$
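
Since $\Delta l_k$ is piecewise linear in $\tau$, the largest such $\tau_k$ can be computed exactly; the simple backtracking approximation below is an assumption on my part, not the talk's procedure, and reuses a reduction callable like delta_l from the earlier sketch.

```python
def combine_steps(dbar, dhat, delta_l_fn, beta=0.1, shrink=0.5, tau_min=1e-12):
    # Find a large tau in [0, 1] with
    #   Delta l_k(tau*dhat + (1 - tau)*dbar) >= beta * Delta l_k(dbar)
    target = beta * delta_l_fn(dbar)
    tau = 1.0
    while tau > tau_min:
        d = tau * dhat + (1.0 - tau) * dbar
        if delta_l_fn(d) >= target:
            return d, tau
        tau *= shrink
    return dbar, 0.0  # fall back to the pure feasibility step
```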

Slide 18: $\mu$ update

(4) Update the penalty parameter to ensure sufficient decrease.

Set $\mu_{k+1}$ so that $\mu_{k+1} \le 1 / \|(\hat y_{k+1}, \hat z_{k+1})\|_\infty$. Let

  $m_k(d, \mu) = \mu (f(x_k) + g_k^T d) + l_k(d),$

then set $\mu_{k+1}$ so that $d_k$ yields

  $\Delta m_k(d_k, \mu_{k+1}) \ge \epsilon\, \Delta l_k(d_k).$

This ensures sufficient decrease in $\phi(\cdot, \mu_{k+1})$ from $x_k$.
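
One simple realization of this update is the fractional-decrease loop below; the slide only states the target inequality, so the loop itself is an assumption. Note that $\Delta m_k(d, \mu) = -\mu\, g_k^T d + \Delta l_k(d)$, so the condition holds for all sufficiently small $\mu$ whenever $\Delta l_k(d_k) > 0$.

```python
def update_mu(mu, dk, delta_m_fn, delta_l_dk, eps=0.1, shrink=0.5, mu_min=1e-12):
    # Decrease mu until Delta m_k(d_k, mu) >= eps * Delta l_k(d_k)
    while mu > mu_min and delta_m_fn(dk, mu) < eps * delta_l_dk:
        mu *= shrink
    return mu
```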

Slide 19: Ensuring global convergence

(1) Determine the potential level of feasibility improvement: $\Delta l_k(\bar d_k) \ge 0$.
(2) Compute a search direction satisfying $\Delta l_k(d_k) \ge \beta\, \Delta l_k(\bar d_k)$.
(3) Update the penalty parameter (if necessary) so that $\Delta m_k(d_k, \mu_{k+1}) \ge \epsilon\, \Delta l_k(d_k)$.
(4) Perform a line search on $\phi(\cdot, \mu_{k+1})$ at $x_k$ along the descent direction $d_k$.
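
Step (4) is a backtracking line search on the exact penalty function; the talk does not spell out its line-search rule, so the sketch below is a generic Armijo-type version using the model reduction $\Delta m_k(d_k, \mu_{k+1})$ as the decrease measure.

```python
def backtracking_linesearch(phi_fn, xk, dk, model_reduction,
                            eta=1e-4, shrink=0.5, alpha_min=1e-12):
    # Accept the largest tried alpha with
    #   phi(x_k + alpha*d_k) <= phi(x_k) - eta * alpha * model_reduction
    phi0 = phi_fn(xk)
    alpha = 1.0
    while alpha > alpha_min:
        if phi_fn(xk + alpha * dk) <= phi0 - eta * alpha * model_reduction:
            return alpha
        alpha *= shrink
    return alpha_min
```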

Slide 20: Ensuring fast local convergence: Feasible case

If (OP) is feasible, then:
(a) (QO1) eventually produces linearly feasible directions
(b) $\mu$ eventually remains constant
(c) (QO2) reduces to a standard SQO subproblem (all constraints eventually enter the $E_k$ and $I_k$ blocks, so the slack terms in (QO2) vanish)

Slide 21: Ensuring fast local convergence: Infeasible case

If (OP) is infeasible, then:
(a) (QO1) eventually yields small improvement towards feasibility
(b) $\mu \to 0$ and $(\hat y_k, \hat z_k) \to (y_k, z_k)$: if $v_k > 0$ and $\Delta l_k(\bar d_k) \le \theta v_k$, then require

  $\mu_k \le \|\mathrm{KKT}_{\mathrm{inf}}(x_k, y_{k+1}, z_{k+1})\|^2$
  $\|(\hat y_k, \hat z_k) - (y_k, z_k)\| \le \|\mathrm{KKT}_{\mathrm{inf}}(x_k, y_{k+1}, z_{k+1})\|^2$

(c) (QO2) reduces to (QO1) (i.e., SQO for the feasibility problem)

Slide 22: SQuID (Sequential Quadratic Optimization with Fast Infeasibility Detection)

(1) Compute feasibility step via (QO1).
(2) Check whether an infeasible stationary point has been obtained.
(3) Update $\mu_k$, $\hat y_k$, and $\hat z_k$, if necessary (for fast local convergence).
(4) Compute optimality step via (QO2).
(5) Check whether an optimal solution has been obtained.
(6) Compute a combination of the feasibility and optimality steps (for global convergence).
(7) Update $\mu_k$, if necessary (for global convergence).
(8) Perform a line search to obtain decrease in the merit function.

Slide 23: Global convergence

Assumptions:
(1) The problem functions $f$ and $c$ and their first-order derivatives are bounded and Lipschitz continuous in a convex set containing $\{x_k\}$.
(2) There exist $\mu_{\max} \ge \mu_{\min} > 0$ such that, for any $d$,

  $\mu_{\min} \|d\|^2 \le d^T H_k d \le \mu_{\max} \|d\|^2$ and $\mu_{\min} \|d\|^2 \le d^T \hat H_k d \le \mu_{\max} \|d\|^2$.

Theorem. Either all limit points of $\{x_k\}$ are feasible or all are infeasible stationary.

Theorem. If $\mu_k \ge \mu$ for some $\mu > 0$ and all $k$, then every feasible limit point is a KKT point.

Theorem. Suppose $\mu_k \to 0$ and let $K_\mu$ be the subsequence of iterations during which the penalty parameter $\mu_k$ is decreased. If all limit points of $\{x_k\}$ are feasible, then all limit points of $\{x_k\}_{k \in K_\mu}$ correspond to Fritz John points for (OP) at which the MFCQ fails.

Slide 24: Local convergence: Assumptions

(1) $f$ and $c$ and their first and second derivatives are bounded and Lipschitz continuous in an open convex set containing a given point of interest $x_*$.
(2) If $(x_*, y_*, z_*)$ is a KKT point for (FP), then
  (a) $[J_* \; Z_{A_*}]^T$ has full row rank.
  (b) $-e < y_* < e$ and $0 < z_{A_*} < e$.
  (c) $d^T H_* d > 0$ for all $d \ne 0$ such that $[J_* \; Z_{A_*}]^T d = 0$.
(3) If $(x_*, \hat y_*, \hat z_*, \mu_*)$ is a KKT point for (OP), then (2) holds, $\mu_k \to \mu_* > 0$, and
  (a) $\hat z_{A_*} + x_{A_*} > 0$ (strict complementarity).
  (b) $d^T \hat H_* d > 0$ for all $d \ne 0$ such that $[J_*^{E} \; Z_{A_*}]^T d = 0$.

Slide 25: Local convergence

Theorem. If $v_* > 0$, and $(x_k, y_k, z_k)$ and $(x_k, \hat y_k, \hat z_k)$ are each close to $(x_*, y_*, z_*)$, then

  $\left\| \begin{bmatrix} x_{k+1} - x_* \\ y_{k+1} - y_* \\ z_{k+1} - z_* \end{bmatrix} \right\| \le C \left\| \begin{bmatrix} x_k - x_* \\ y_k - y_* \\ z_k - z_* \end{bmatrix} \right\|^2 + O\!\left( \left\| \begin{bmatrix} \hat y_k - y_k \\ \hat z_k - z_k \end{bmatrix} \right\| \right) + O(\mu)$

for some constant $C > 0$ independent of $k$.

Theorem. If $(x_k, y_k, z_k)$ is close to $(x_*, y_*, z_*)$ and $(x_k, \hat y_k, \hat z_k)$ is close to $(x_*, \hat y_*, \hat z_*)$, then

  $\left\| \begin{bmatrix} x_{k+1} - x_* \\ \hat y_{k+1} - \hat y_* \\ \hat z_{k+1} - \hat z_* \end{bmatrix} \right\| \le C \left\| \begin{bmatrix} x_k - x_* \\ \hat y_k - \hat y_* \\ \hat z_k - \hat z_* \end{bmatrix} \right\|^2$

for some constant $C > 0$ independent of $k$.

Slide 26: Numerical experiments: Feasible optimization problems

- FILTER: 9 failures due to declaration of infeasibility
- Others: 2 failures due to declaration of infeasibility
- SQuID: lack of robustness due to lack of second-order corrections (SOC) and simplistic handling of nonconvexity

Slide 27: Numerical experiments: Infeasible optimization problems

Slide 28: Outline (section: Inexact SQP)

Slide 29: Large-scale optimization

You're thinking to yourself: "SQO is expensive enough with one subproblem to solve. You want me to solve two?!"

Our response:
- Allowing inexactness gives flexibility in the subproblem solves.
- Any work to solve one subproblem is better split between our two.

Slide 30: Optimality conditions

Reformulated problem statements for the large-scale setting:

(OP): $\min_x f(x)$ s.t. $c(x) = 0$, $x \ge 0$

(FP): $\min_x v(x)$ s.t. $x \ge 0$, where $v(x) := \|c(x)\|_1$

(PP): $\min_x \phi(x, \mu)$ s.t. $x \ge 0$, where $\phi(x, \mu) := \mu f(x) + v(x)$

(FJ): $F(x, y, z, \mu) := \mu f(x) + y^T c(x) - z^T x$

Optimality conditions for both (PP) and (FP) involve

  $\rho(x, y, \mu) := \begin{bmatrix} \min\{\mu g(x) + J(x) y,\ x\} \\ \min\{[c(x)]^+,\ e - y\} \\ \min\{[c(x)]^-,\ e + y\} \end{bmatrix}$

where $[c(x)]^+ := \max\{c(x), 0\}$ and $[c(x)]^- := \max\{-c(x), 0\}$.

Observe:
- If $\rho(x, y, \mu) = 0$, $v(x) = 0$, and $\mu > 0$, then $(x, y/\mu)$ is a KKT point for (OP).
- If $\rho(x, y, \mu) = 0$, $v(x) > 0$, and $\mu = 0$, then $x$ is an infeasible stationary point.
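
A direct transcription of $\rho$ into numpy (my own sketch; f_grad, c_fn, and c_jac are assumed callables returning $\nabla f(x)$, $c(x)$, and $J(x) = \nabla c(x)$, with $J(x)$ an $n \times m$ matrix so that J(x) @ y is an $n$-vector):

```python
import numpy as np

def rho(x, y, mu, f_grad, c_fn, c_jac):
    # Semi-smooth residual whose zeros characterize stationarity of (PP)/(FP)
    cx = c_fn(x)
    e = np.ones_like(cx)
    return np.concatenate([
        np.minimum(mu * f_grad(x) + c_jac(x) @ y, x),  # min{mu*g + J*y, x}
        np.minimum(np.maximum(cx, 0.0), e - y),        # min{[c]^+, e - y}
        np.minimum(np.maximum(-cx, 0.0), e + y),       # min{[c]^-, e + y}
    ])
```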

Slide 31: Two subproblems

Define the model of $\phi(\cdot, \mu)$ at $x_k$ given by

  $m_k(d, \mu) := \mu (f_k + g_k^T d) + \|c_k + J_k^T d\|_1.$

We iterate by (approximately) solving two subproblems:

Feasibility subproblem:
  $\min_d\ m_k(d, 0) + \tfrac{1}{2} d^T H_k d$ s.t. $x_k + d \ge 0$.    (QO1)

Optimality subproblem:
  $\min_d\ m_k(d, \mu_k) + \tfrac{1}{2} d^T \hat H_k d$ s.t. $x_k + d \ge 0$.    (QO2)

Optimality conditions for both (QO1) and (QO2) involve

  $\rho_k(d, y, \mu, H) := \begin{bmatrix} \min\{\mu g_k + H d + J_k y,\ x_k + d\} \\ \min\{[c_k + J_k^T d]^+,\ e - y\} \\ \min\{[c_k + J_k^T d]^-,\ e + y\} \end{bmatrix}.$
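
The subproblem model and residual admit the same kind of transcription; this sketch (names mine) evaluates $m_k$ and $\rho_k$ from data frozen at $x_k$.

```python
import numpy as np

def m_model(d, mu, fk, gk, ck, Jk):
    # m_k(d, mu) = mu*(f_k + g_k^T d) + ||c_k + J_k^T d||_1
    return mu * (fk + gk @ d) + np.sum(np.abs(ck + Jk.T @ d))

def rho_k(d, y, mu, H, gk, ck, Jk, xk):
    # rho with f and c replaced by their local models at x_k
    r = ck + Jk.T @ d
    e = np.ones_like(r)
    return np.concatenate([
        np.minimum(mu * gk + H @ d + Jk @ y, xk + d),
        np.minimum(np.maximum(r, 0.0), e - y),
        np.minimum(np.maximum(-r, 0.0), e + y),
    ])
```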

Slide 32: Main contribution (Part 2)

Active-set method that allows generic inexact subproblem solves:
- Termination tests dictate when an inexact solution is acceptable.
- The tests primarily monitor the model reductions $\Delta m_k(\bar d_k, 0)$ and $\Delta m_k(\hat d_k, \mu_k)$ and the residuals $\rho_k(\bar d_k, y_{k+1}, 0, H_k)$ and $\rho_k(\hat d_k, \hat y_{k+1}, \mu_k, \hat H_k)$.
- Essentially, these ensure primal and dual convergence, respectively.
- (Exact case: the residuals are always zero, so we focused on model reductions.)

Slide 33: Termination test 1: High-level

The primal-dual pairs $(\bar d_k, y_{k+1})$ and $(\hat d_k, \hat y_{k+1})$ are acceptable if
- the feasibility step yields a nontrivial reduction in the feasibility model,
- the error in solving (QO1) is small compared to the error in solving (FP),
- the optimality step yields a nontrivial reduction in the penalty function model,
- the error in solving (QO2) is small compared to the error in solving (PP),
and, in addition,
- the optimality step yields proportional improvement in the feasibility model.

Slide 34: Termination test 1: Low-level

The primal-dual pairs $(\bar d_k, y_{k+1})$ and $(\hat d_k, \hat y_{k+1})$ are acceptable if the feasibility step satisfies

  $\Delta m_k(\bar d_k, 0) \ge \theta \|\bar d_k\|^2 > 0$
  $\|\rho_k(\bar d_k, y_{k+1}, 0, H_k)\| \le \kappa \|\rho(x_k, y_k, 0)\|$

the optimality step satisfies

  $\Delta m_k(\hat d_k, \mu_k) \ge \theta \|\hat d_k\|^2 > 0$
  $\|\rho_k(\hat d_k, \hat y_{k+1}, \mu_k, \hat H_k)\| \le \kappa \|\rho(x_k, \hat y_k, \mu_k)\|$

and, in addition,

  $\Delta m_k(\hat d_k, \mu_k) \ge \epsilon\, \Delta m_k(\bar d_k, 0)$

where $\theta > 0$, $\kappa \in (0, 1)$, and $\epsilon \in (0, 1)$ are user-defined constants.
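
Assembled as a predicate (a sketch with my own argument names; the reductions and residual vectors are assumed precomputed by the subproblem solver), termination test 1 reads:

```python
import numpy as np

def tt1_holds(dbar, dhat, res_qo1, res_qo2, res_fp, res_pp,
              dm_feas, dm_pen, theta=1e-8, kappa=0.1, eps=0.1):
    # dm_feas = Delta m_k(dbar_k, 0); dm_pen = Delta m_k(dhat_k, mu_k);
    # res_* are the rho_k / rho vectors from the preceding slides
    return (dm_feas >= theta * (dbar @ dbar) > 0.0
            and np.linalg.norm(res_qo1) <= kappa * np.linalg.norm(res_fp)
            and dm_pen >= theta * (dhat @ dhat) > 0.0
            and np.linalg.norm(res_qo2) <= kappa * np.linalg.norm(res_pp)
            and dm_pen >= eps * dm_feas)
```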

Slide 35: Termination test 2: High-level

The primal-dual pairs $(\bar d_k, y_{k+1})$ and $(\hat d_k, \hat y_{k+1})$ are acceptable if
- the feasibility step yields a nontrivial reduction in the feasibility model,
- the error in solving (QO1) is small compared to the error in solving (FP),
- the optimality step yields a nontrivial reduction in the penalty function model,
- the error in solving (QO2) is small compared to the error in solving (FP),
and, in addition, we
- take a convex combination of the steps for proportional improvement toward feasibility,
- (potentially) decrease the penalty parameter.

Slide 36: Termination test 2: Low-level

The primal-dual pairs $(\bar d_k, y_{k+1})$ and $(\hat d_k, \hat y_{k+1})$ are acceptable if the feasibility step satisfies

  $\Delta m_k(\bar d_k, 0) \ge \theta \|\bar d_k\|^2 > 0$
  $\|\rho_k(\bar d_k, y_{k+1}, 0, H_k)\| \le \kappa \|\rho(x_k, y_k, 0)\|$

the optimality step satisfies

  $\Delta m_k(\hat d_k, \mu_k) \ge \theta \|\hat d_k\|^2 > 0$
  $\|\rho_k(\hat d_k, \hat y_{k+1}, \mu_k, \hat H_k)\| \le \kappa \|\rho(x_k, \hat y_k, \mu_k)\|$

and, in addition, $d_k \leftarrow \tau_k \hat d_k + (1 - \tau_k) \bar d_k$ so that

  $\Delta m_k(d_k, 0) \ge \epsilon\, \Delta m_k(\bar d_k, 0)$

  $\mu_{k+1} \leftarrow \begin{cases} \mu_k & \text{if } \Delta m_k(d_k, \mu_k) \ge \beta\, \Delta m_k(d_k, 0) \\ \min\left\{ \delta \mu_k,\ \dfrac{(1 - \beta)\, \Delta m_k(d_k, 0)}{g_k^T d_k + \theta \|d_k\|^2} \right\} & \text{otherwise} \end{cases}$

where $\theta > 0$, $\kappa \in (0, 1)$, $\epsilon \in (0, 1)$, $\beta \in (0, 1)$, and $\delta \in (0, 1)$ are constants.
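
The penalty-parameter rule attached to this test can be coded directly; a sketch (assuming, as the formula suggests, that the denominator is positive whenever the second branch is taken; arguments are numpy arrays and scalars):

```python
def tt2_mu_update(mu, dk, gk, dm_d_mu, dm_d_0, theta=1e-8, beta=0.5, delta=0.5):
    # dm_d_mu = Delta m_k(d_k, mu_k); dm_d_0 = Delta m_k(d_k, 0)
    if dm_d_mu >= beta * dm_d_0:
        return mu                        # sufficient penalty-model decrease
    denom = gk @ dk + theta * (dk @ dk)  # assumed positive in this branch
    return min(delta * mu, (1.0 - beta) * dm_d_0 / denom)
```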

Slide 37: Termination test 3: High-level

The primal-dual pairs $(\bar d_k, y_{k+1})$ and $(\hat d_k, \hat y_{k+1})$ are acceptable if
- the feasibility step is zero,
- the optimality step is zero,
- the feasibility multipliers are not changed,
and, in addition,
- the error in solving (QO2) is small compared to the error in solving (PP).

Slide 38: Termination test 3: Low-level

The primal-dual pairs $(\bar d_k, y_{k+1})$ and $(\hat d_k, \hat y_{k+1})$ are acceptable if

  $\bar d_k \leftarrow 0$
  $\hat d_k \leftarrow 0$
  $y_{k+1} \leftarrow y_k$

and, in addition,

  $\|\rho_k(0, \hat y_{k+1}, \mu_k, \hat H_k)\| \le \kappa \|\rho(x_k, \hat y_k, \mu_k)\|$

where $\kappa \in (0, 1)$ is a constant.
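
And test 3, which accepts a null step when only the optimality multipliers need refinement (again a sketch with my own names):

```python
import numpy as np

def tt3_holds(dbar, dhat, y_next, y_curr, res_qo2, res_pp, kappa=0.1):
    # Both steps zero, feasibility multipliers unchanged, and the (QO2)
    # residual small relative to rho(x_k, yhat_k, mu_k)
    return (not np.any(dbar) and not np.any(dhat)
            and np.array_equal(y_next, y_curr)
            and np.linalg.norm(res_qo2) <= kappa * np.linalg.norm(res_pp))
```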

Slide 39: Inexact SQO

(1) Check whether an infeasible stationary point or an optimal solution has been obtained.
(2) Compute optimality and feasibility steps so that TT1, TT2, or TT3 holds. Hessian modifications are performed within the subproblem solves.
(3) Update $y_k$ and $\hat y_k$, if necessary (details suppressed).
(4) Compute a combination of the feasibility and optimality steps.
(5) Update $\mu_k$, if necessary.
(6) Perform a line search to obtain decrease in the merit function.

Slide 40: Global convergence: Assumptions

(1) The problem functions $f$ and $c$ and their first-order derivatives are bounded and Lipschitz continuous in a convex set containing $\{x_k\}$.
(2) $\{H_k\}$ and $\{\hat H_k\}$ are bounded.
(3) If $x_k$ is stationary for the penalty problem, then the subproblem solver eventually produces a direction of insufficient curvature or a $y$ such that $\|\rho_k(0, y, \mu_k, \hat H_k)\|$ is arbitrarily small. Else, it eventually produces a direction of insufficient curvature or a pair $(d, y)$ such that $\Delta m_k(d, \mu_k) \ge \theta \|d\|^2$ with $\|\rho_k(d, y, \mu_k, \hat H_k)\|$ arbitrarily small.
(4) For the feasibility subproblem, the subproblem solver eventually produces a direction of insufficient curvature or, for $\xi_k > 0$, it produces $(d, y)$ satisfying $\max\{\xi_k, \Delta m_k(d, 0)\} \ge \theta \|d\|^2$ with $\|\rho_k(d, y, 0, H_k)\|$ arbitrarily small.

Slide 41: Global convergence

Theorem. The following limit always holds: $\lim_{k \to \infty} \|\rho(x_k, y_k, 0)\| = 0$.

Theorem. The following limit holds if $\mu_k = \mu > 0$ for all large $k$: $\lim_{k \to \infty} \|\rho(x_k, \hat y_k, \mu)\| = 0$.

TO DO:
- Ensure that $\mu$ stays bounded below under nice conditions.
- Local convergence (?!)

Slide 42: Outline (section: Summary)

Slide 43: Summary

Developed an SQO method that completes the convergence picture for NLO:

Problem type | Global convergence | Fast local convergence
-------------|--------------------|-----------------------
Feasible     | yes                | yes
Infeasible   | yes                | yes

Proposed extensions for solving large-scale problems with inexact calculations.

Slide 44: Thanks!!

Infeasibility detection:
- J. V. Burke, F. E. Curtis, and H. Wang, "A Sequential Quadratic Optimization Algorithm with Rapid Infeasibility Detection," submitted to SIAM Journal on Optimization.
- F. E. Curtis, "A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization," Mathematical Programming Computation, Volume 4, Issue 2.
- R. H. Byrd, F. E. Curtis, and J. Nocedal, "Infeasibility Detection and SQP Methods for Nonlinear Optimization," SIAM Journal on Optimization, Volume 20, Issue 5.

Inexactness and constrained optimization:
- F. E. Curtis, D. P. Robinson, and A. Wächter, "An Inexact Sequential Quadratic Optimization Algorithm for Large-Scale Nonlinear Optimization," in preparation.
- F. E. Curtis, J. Huber, O. Schenk, and A. Wächter, "On the Implementation of an Interior-Point Algorithm for Nonlinear Optimization with Inexact Step Computations," to appear in Mathematical Programming, Series B.
- F. E. Curtis, O. Schenk, and A. Wächter, "An Interior-Point Algorithm for Large-Scale Nonlinear Optimization with Inexact Step Computations," SIAM Journal on Scientific Computing, Volume 32, Issue 6.
- R. H. Byrd, F. E. Curtis, and J. Nocedal, "An Inexact Newton Method for Nonconvex Equality Constrained Optimization," Mathematical Programming, Volume 122, Issue 2.
- F. E. Curtis, J. Nocedal, and A. Wächter, "A Matrix-free Algorithm for Equality Constrained Optimization Problems with Rank-Deficient Jacobians," SIAM Journal on Optimization, Volume 20, Issue 3.
- R. H. Byrd, F. E. Curtis, and J. Nocedal, "An Inexact SQP Method for Equality Constrained Optimization," SIAM Journal on Optimization, Volume 19, Issue 1.
