SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren
SF2822 Applied Nonlinear Optimization, KTH. Lecture 9, 2017/2018.

Preparatory question

1. Try to solve theory question 6.
Optimality conditions for nonlinear programs

Consider an equality-constrained nonlinear programming problem

(P=)   minimize $f(x)$   subject to $g(x) = 0$,

where $f, g \in C^2$, $g: \mathbb{R}^n \to \mathbb{R}^m$. If the Lagrangian function is defined as $L(x,\lambda) = f(x) - \lambda^T g(x)$, the first-order optimality conditions are $\nabla L(x,\lambda) = 0$. We write them as

$\begin{pmatrix} \nabla_x L(x,\lambda) \\ \nabla_\lambda L(x,\lambda) \end{pmatrix} = \begin{pmatrix} \nabla f(x) - A(x)^T \lambda \\ -g(x) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$,

where $A(x)^T = \begin{pmatrix} \nabla g_1(x) & \nabla g_2(x) & \cdots & \nabla g_m(x) \end{pmatrix}$.

Newton's method for solving a nonlinear equation

Consider solving the nonlinear equation $\nabla f(u) = 0$, where $f: \mathbb{R}^n \to \mathbb{R}$, $f \in C^2$. Then

$\nabla f(u + p) = \nabla f(u) + \nabla^2 f(u) p + o(\|p\|)$.

The linearization is given by $\nabla f(u) + \nabla^2 f(u) p$. Choose $p$ so that $\nabla f(u) + \nabla^2 f(u) p = 0$, i.e., solve $\nabla^2 f(u) p = -\nabla f(u)$. A Newton iteration takes the following form for a given $u$.

1. $p$ solves $\nabla^2 f(u) p = -\nabla f(u)$.
2. $u \leftarrow u + p$.

(The nonlinear equation need not be a gradient.)
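The Newton iteration above is a few lines of code. A minimal sketch with numpy; the test function $f(u) = u_1^2 + u_2^4 + u_2^2$ and the starting point are illustrative assumptions, not from the lecture.

```python
import numpy as np

def newton(grad, hess, u, tol=1e-10, max_iter=50):
    """Newton iteration for solving grad f(u) = 0:
    solve hess(u) p = -grad(u), then set u <- u + p."""
    for _ in range(max_iter):
        g = grad(u)
        if np.linalg.norm(g) <= tol:
            break
        p = np.linalg.solve(hess(u), -g)  # Newton step
        u = u + p
    return u

# Illustrative example: minimize f(u) = u1^2 + u2^4 + u2^2,
# whose unique stationary point is the origin.
grad = lambda u: np.array([2.0 * u[0], 4.0 * u[1] ** 3 + 2.0 * u[1]])
hess = lambda u: np.array([[2.0, 0.0], [0.0, 12.0 * u[1] ** 2 + 2.0]])
u_star = newton(grad, hess, np.array([1.0, 1.0]))
```

Each step solves one linear system with the Hessian, in line with the linearization above.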
Speed of convergence for Newton's method

Theorem. Assume that $f \in C^3$ and that $\nabla f(u^*) = 0$ with $\nabla^2 f(u^*)$ nonsingular. Then, if Newton's method (with steplength one) is started at a point sufficiently close to $u^*$, it is well defined and converges to $u^*$ with convergence rate at least two, i.e., there is a constant $C$ such that

$\|u_{k+1} - u^*\| \le C \|u_k - u^*\|^2$.

The proof can be given by studying a Taylor-series expansion:

$u_{k+1} - u^* = u_k - \nabla^2 f(u_k)^{-1} \nabla f(u_k) - u^* = \nabla^2 f(u_k)^{-1} \big( \nabla f(u^*) - \nabla f(u_k) - \nabla^2 f(u_k)(u^* - u_k) \big)$.

For $u_k$ sufficiently close to $u^*$,

$\| \nabla f(u^*) - \nabla f(u_k) - \nabla^2 f(u_k)(u^* - u_k) \| \le C \|u_k - u^*\|^2$.

One-dimensional example for Newton's method

For a positive number $d$, consider computing $1/d$ by minimizing $f(u) = du - \ln u$. Then $f'(u) = d - 1/u$ and $f''(u) = 1/u^2$. We see that $u^* = 1/d$. A Newton step gives

$u_{k+1} = u_k - \frac{f'(u_k)}{f''(u_k)} = u_k - (d - 1/u_k)\, u_k^2 = 2u_k - d u_k^2$.

Then

$u_{k+1} - \frac{1}{d} = 2u_k - d u_k^2 - \frac{1}{d} = -d \left( u_k - \frac{1}{d} \right)^2$.
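The one-dimensional example is easy to run numerically; this sketch just iterates $u_{k+1} = 2u_k - d u_k^2$ for $d = 2$ (the starting point $u_0 = 0.3$ is an illustrative choice inside the region of convergence $0 < u_0 < 2/d$).

```python
def reciprocal_newton(d, u0, iters=6):
    """Newton's method applied to f(u) = d*u - ln(u):
    u_{k+1} = 2*u_k - d*u_k**2, which converges to 1/d."""
    u, history = u0, [u0]
    for _ in range(iters):
        u = 2.0 * u - d * u * u
        history.append(u)
    return history

hist = reciprocal_newton(2.0, 0.3)
errors = [abs(u - 0.5) for u in hist]  # e_{k+1} = d * e_k^2
```

Each error is $d$ times the square of the previous one, so the number of correct digits roughly doubles per iteration.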
One-dimensional example for Newton's method, cont.

[Figure: graphical illustration of the Newton iteration for $d = 2$.]

First-order optimality conditions as a system of equations

The first-order necessary optimality conditions may be viewed as a system of $n + m$ nonlinear equations in the $n + m$ unknowns $x$ and $\lambda$:

$\begin{pmatrix} \nabla f(x) - A(x)^T \lambda \\ g(x) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$.

A Newton iteration takes the form

$\begin{pmatrix} \nabla^2_{xx} L(x,\lambda) & -A(x)^T \\ A(x) & 0 \end{pmatrix} \begin{pmatrix} p \\ \nu \end{pmatrix} = -\begin{pmatrix} \nabla f(x) - A(x)^T \lambda \\ g(x) \end{pmatrix}$,   $\begin{pmatrix} x^+ \\ \lambda^+ \end{pmatrix} = \begin{pmatrix} x \\ \lambda \end{pmatrix} + \begin{pmatrix} p \\ \nu \end{pmatrix}$,

for $L(x,\lambda) = f(x) - \lambda^T g(x)$.
First-order optimality conditions as a system of equations, cont.

The resulting Newton system may equivalently be written as

$\begin{pmatrix} \nabla^2_{xx} L(x,\lambda) & A(x)^T \\ A(x) & 0 \end{pmatrix} \begin{pmatrix} p \\ -\lambda^+ \end{pmatrix} = \begin{pmatrix} -\nabla f(x) \\ -g(x) \end{pmatrix}$,

or alternatively

$\begin{pmatrix} \nabla^2_{xx} L(x,\lambda) & -A(x)^T \\ A(x) & 0 \end{pmatrix} \begin{pmatrix} p \\ \lambda^+ \end{pmatrix} = \begin{pmatrix} -\nabla f(x) \\ -g(x) \end{pmatrix}$.

We prefer the form with $\lambda^+$, since it can be directly generalized to problems with inequality constraints.

Quadratic programming with equality constraints

Compare with an equality-constrained quadratic programming problem

(EQP)   minimize $\tfrac{1}{2} p^T H p + c^T p$   subject to $Ap = b$,   $p \in \mathbb{R}^n$,

where the unique optimal solution $p$ and multiplier vector $\lambda^+$ are given by

$\begin{pmatrix} H & A^T \\ A & 0 \end{pmatrix} \begin{pmatrix} p \\ -\lambda^+ \end{pmatrix} = \begin{pmatrix} -c \\ b \end{pmatrix}$,

if $Z^T H Z \succ 0$ and $A$ has full row rank, where $Z$ is a matrix whose columns form a basis for null($A$).
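The (EQP) optimality conditions form one linear system, so the subproblem can be solved directly. A minimal numpy sketch; the data $H$, $c$, $A$, $b$ below are made-up illustrations.

```python
import numpy as np

def solve_eqp(H, c, A, b):
    """Solve min 0.5*p'Hp + c'p subject to Ap = b via the KKT system
    [H  A'; A  0] [p; -lam] = [-c; b]
    (valid when Z'HZ > 0 and A has full row rank)."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-c, b]))
    return sol[:n], -sol[n:]  # p, lambda

# Illustrative data: min 0.5*||p||^2 + [1,1]'p  s.t.  p1 + p2 = 1.
p, lam = solve_eqp(np.eye(2), np.array([1.0, 1.0]),
                   np.array([[1.0, 1.0]]), np.array([1.0]))
```

The returned pair satisfies the stationarity condition $Hp + c = A^T \lambda$ together with $Ap = b$.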
Newton iteration and equality-constrained quadratic program

Compare

$\begin{pmatrix} \nabla^2_{xx} L(x,\lambda) & A(x)^T \\ A(x) & 0 \end{pmatrix} \begin{pmatrix} p \\ -\lambda^+ \end{pmatrix} = \begin{pmatrix} -\nabla f(x) \\ -g(x) \end{pmatrix}$

with

$\begin{pmatrix} H & A^T \\ A & 0 \end{pmatrix} \begin{pmatrix} p \\ -\lambda^+ \end{pmatrix} = \begin{pmatrix} -c \\ b \end{pmatrix}$.

Identify: $\nabla^2_{xx} L(x,\lambda) \leftrightarrow H$, $\nabla f(x) \leftrightarrow c$, $A(x) \leftrightarrow A$, $-g(x) \leftrightarrow b$.

Newton iteration as a QP problem

A Newton iteration for solving the first-order necessary optimality conditions of (P=) may be viewed as solving the QP problem

(QP=)   minimize $\tfrac{1}{2} p^T \nabla^2_{xx} L(x,\lambda) p + \nabla f(x)^T p$   subject to $A(x) p = -g(x)$,   $p \in \mathbb{R}^n$,

letting $x^+ = x + p$, where $\lambda^+$ is given by the multipliers of (QP=). Problem (QP=) is well defined with unique optimal solution $p$ and multiplier vector $\lambda^+$ if $Z(x)^T \nabla^2_{xx} L(x,\lambda) Z(x) \succ 0$ and $A(x)$ has full row rank, where $Z(x)$ is a matrix whose columns form a basis for null($A(x)$).
An SQP iteration for problems with equality constraints

Given $x$, $\lambda$ such that $Z(x)^T \nabla^2_{xx} L(x,\lambda) Z(x) \succ 0$ and $A(x)$ has full row rank, a Newton iteration takes the following form.

1. Compute the optimal solution $p$ and multiplier vector $\lambda^+$ of
   (QP=)   minimize $\tfrac{1}{2} p^T \nabla^2_{xx} L(x,\lambda) p + \nabla f(x)^T p$   subject to $A(x) p = -g(x)$, $p \in \mathbb{R}^n$.
2. $x \leftarrow x + p$, $\lambda \leftarrow \lambda^+$.

We call this method sequential quadratic programming (SQP).

Note! (QP=) is solved by solving a system of linear equations.
Note! $x$ and $\lambda$ have given numerical values in (QP=).

Speed of convergence for the SQP method on equality-constrained problems

Theorem. Assume that $f \in C^3$, $g \in C^3$ on $\mathbb{R}^n$ and that $x^*$ is a regular point that satisfies the second-order sufficient optimality conditions of (P=). If the SQP method (with steplength one) is started at a point sufficiently close to $(x^*, \lambda^*)$, then it is well defined and converges to $(x^*, \lambda^*)$ with convergence rate at least two.

Proof. In a neighborhood of $(x^*, \lambda^*)$ it holds that $Z(x)^T \nabla^2_{xx} L(x,\lambda) Z(x) \succ 0$ and that $\begin{pmatrix} \nabla^2_{xx} L(x,\lambda) & A(x)^T \\ A(x) & 0 \end{pmatrix}$ is nonsingular. The subproblem (QP=) is hence well defined, and the result follows from the analysis of Newton's method.

Note! The iterates are normally not feasible to (P=).
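Since each (QP=) reduces to a linear system here, the pure SQP iteration can be sketched directly. The test problem below (minimize $x_1 + x_2$ subject to $x_1^2 + x_2^2 = 2$, with solution $x^* = (-1, -1)^T$ and $\lambda^* = -1/2$) is an illustrative assumption, not the lecture's example.

```python
import numpy as np

def sqp_equality(x, lam, grad_f, g, A, hess_L, iters=20):
    """Pure SQP for min f(x) s.t. g(x) = 0: at each iterate solve
    [W  A'; A  0] [p; -lam_plus] = [-grad f(x); -g(x)],
    W = Hessian of the Lagrangian, then set x <- x + p."""
    n = len(x)
    for _ in range(iters):
        W, J = hess_L(x, lam), A(x)
        m = J.shape[0]
        K = np.block([[W, J.T], [J, np.zeros((m, m))]])
        sol = np.linalg.solve(K, np.concatenate([-grad_f(x), -g(x)]))
        x, lam = x + sol[:n], -sol[n:]
    return x, lam

# Illustrative problem: min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0.
grad_f = lambda x: np.array([1.0, 1.0])
g = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0])
A = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])
hess_L = lambda x, lam: -2.0 * lam[0] * np.eye(2)  # Hessian of f is 0
x, lam = sqp_equality(np.array([-1.2, -0.8]), np.array([-0.4]),
                      grad_f, g, A, hess_L)
```

Note that intermediate iterates violate $g(x) = 0$; only the limit is feasible, as the slide remarks.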
SQP method for equality-constrained problems

So far we have discussed SQP for (P=) in an ideal case. Comments:

- If $Z(x)^T \nabla^2_{xx} L(x,\lambda) Z(x) \not\succ 0$ we may replace $\nabla^2_{xx} L(x,\lambda)$ by $B$ in (QP=), where $B$ is a symmetric approximation of $\nabla^2_{xx} L(x,\lambda)$ that satisfies $Z(x)^T B Z(x) \succ 0$. A quasi-Newton approximation $B$ of $\nabla^2_{xx} L(x,\lambda)$ may be used.
- If $A(x)$ does not have full row rank, $A(x) p = -g(x)$ may lack a solution. This may be overcome by introducing elastic variables. This is not covered here.
- We have shown local convergence properties. To obtain convergence from an arbitrary initial point we may utilize a merit function and use linesearch.

Enforcing convergence by a linesearch strategy

Compute the optimal solution $p$ and multiplier vector $\lambda^+$ of

(QP=)   minimize $\tfrac{1}{2} p^T \nabla^2_{xx} L(x,\lambda) p + \nabla f(x)^T p$   subject to $A(x) p = -g(x)$, $p \in \mathbb{R}^n$,

then set $x \leftarrow x + \alpha p$, where $\alpha$ is determined in a linesearch to give sufficient decrease of a merit function. (Ideally, $\alpha = 1$ eventually.)
Example of a merit function for SQP on (P=)

A merit function typically consists of a weighting of optimality and feasibility. An example is the augmented Lagrangian merit function

$M_\mu(x) = f(x) - \lambda(x)^T g(x) + \frac{1}{2\mu} g(x)^T g(x)$,

where $\mu$ is a positive parameter and $\lambda(x) = (A(x) A(x)^T)^{-1} A(x) \nabla f(x)$. (The vector $\lambda(x)$ is the least-squares solution of $A(x)^T \lambda = \nabla f(x)$.) Then the SQP solution $p$ is a descent direction for $M_\mu$ at $x$ if $\mu$ is sufficiently close to zero and $Z(x)^T B Z(x) \succ 0$; see Nocedal and Wright. We may then carry out a linesearch on $M_\mu$ in the direction $p$. Ideally the steplength is chosen as $\alpha = 1$. We consider the pure method, where $\alpha = 1$ and $\lambda^+$ is given by (QP=).

SQP for inequality-constrained problems

In the SQP subproblem (QP=), the constraints are approximated by a linearization around $x$, i.e., the requirement on $p$ is

$g_i(x) + \nabla g_i(x)^T p = 0$,   $i = 1, \dots, m$.

For an inequality constraint $g_i(x) \ge 0$ this requirement may be generalized to

$g_i(x) + \nabla g_i(x)^T p \ge 0$.

An SQP method gives in each iteration a prediction of the active constraints of (P) by the constraints that are active in the SQP subproblem. The QP subproblem gives nonnegative multipliers for the inequality constraints.
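The merit function and the least-squares multiplier estimate are straightforward to evaluate. A sketch below, reusing a made-up equality-constrained instance (not the lecture's data): $f(x) = x_1 + x_2$, $g(x) = x_1^2 + x_2^2 - 2$, evaluated at $x = (-1, -1)^T$, where $g(x) = 0$ and the least-squares multiplier is exact.

```python
import numpy as np

def multiplier_estimate(A, grad_f):
    """Least-squares solution lam(x) of A(x)' lam = grad f(x)."""
    lam, *_ = np.linalg.lstsq(A.T, grad_f, rcond=None)
    return lam

def merit(fval, g, A, grad_f, mu):
    """Augmented Lagrangian merit function
    M_mu(x) = f(x) - lam(x)'g(x) + (1/(2*mu)) g(x)'g(x)."""
    lam = multiplier_estimate(A, grad_f)
    return fval - lam @ g + (0.5 / mu) * (g @ g)

# At x = (-1, -1): f = -2, g = 0, grad f = (1, 1), A = (-2, -2).
A = np.array([[-2.0, -2.0]])
grad_f = np.array([1.0, 1.0])
lam = multiplier_estimate(A, grad_f)   # equals -0.5 here
M = merit(-2.0, np.array([0.0]), A, grad_f, mu=0.1)
```

At a feasible point the penalty and multiplier terms vanish, so $M_\mu(x) = f(x)$; away from feasibility the $\frac{1}{2\mu} g^T g$ term dominates as $\mu \to 0$.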
The SQP subproblem for a nonlinear program

The problem

(P)   minimize $f(x)$   subject to $g_i(x) \ge 0$, $i \in \mathcal{I}$;   $g_i(x) = 0$, $i \in \mathcal{E}$;   $x \in \mathbb{R}^n$,

where $f, g \in C^2$, $g: \mathbb{R}^n \to \mathbb{R}^m$, has, at a certain point $(x, \lambda)$, the SQP subproblem

(QP)   minimize $\tfrac{1}{2} p^T \nabla^2_{xx} L(x,\lambda) p + \nabla f(x)^T p$
       subject to $\nabla g_i(x)^T p \ge -g_i(x)$, $i \in \mathcal{I}$;   $\nabla g_i(x)^T p = -g_i(x)$, $i \in \mathcal{E}$;   $p \in \mathbb{R}^n$,

which has optimal solution $p$ and Lagrange multiplier vector $\lambda^+$.

An SQP iteration for a nonlinear optimization problem

Given $x$, $\lambda$ such that $\nabla^2_{xx} L(x,\lambda) \succ 0$, an SQP iteration for (P) takes the following form.

1. Compute the optimal solution $p$ and multiplier vector $\lambda^+$ of the subproblem (QP) above.
2. $x \leftarrow x + p$, $\lambda \leftarrow \lambda^+$.

Note that $\lambda_i \ge 0$, $i \in \mathcal{I}$, is maintained, since $\lambda^+$ are Lagrange multipliers of (QP).
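Off-the-shelf SQP implementations handle the inequality-constrained subproblems internally. For instance, SciPy's SLSQP solver is a quasi-Newton SQP method of this family; the small instance of (P) below is made up for illustration (minimize $x_1^2 + x_2^2$ subject to $x_1 + x_2 = 1$ and $x \ge 0$, with solution $(0.5, 0.5)^T$).

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative instance of (P): min x1^2 + x2^2
#   s.t. x1 + x2 - 1 = 0,  x1 >= 0,  x2 >= 0.
res = minimize(
    lambda x: x[0] ** 2 + x[1] ** 2,
    x0=np.array([2.0, 0.0]),
    method="SLSQP",  # sequential least-squares QP, an SQP method
    constraints=[{"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}],
    bounds=[(0.0, None), (0.0, None)],
)
```

At the solution both bounds are inactive, so the active-set prediction of the final QP subproblem consists of the equality constraint alone.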
Speed of convergence for the SQP method

Theorem. Assume that $f \in C^3$, $g \in C^3$ on $\mathbb{R}^n$ and that $x^*$ is a regular point that satisfies the second-order sufficient optimality conditions of (P) with strict complementarity. If the SQP method (with steplength one) is started at a point sufficiently close to $(x^*, \lambda^*)$, then it is well defined, identifies the same active constraints as (P) has at $x^*$, and converges to $(x^*, \lambda^*)$ with convergence rate at least two.

Proof. If the constraints that are not active at $x^*$ are ignored, the SQP method converges according to the discussion of the equality-constrained case. Hence, $p$ tends to zero quadratically. For constraint $i$,

$g_i(x) + \nabla g_i(x)^T p \ge g_i(x) - \|\nabla g_i(x)\| \|p\|$.

For $x$ sufficiently close to $x^*$, if $g_i(x^*) > 0$, then $g_i(x) > 0$, and for $p$ small enough,

$g_i(x) + \nabla g_i(x)^T p \ge g_i(x) - \|\nabla g_i(x)\| \|p\| > 0$.

SQP method for nonlinear optimization

We have discussed the ideal case. Comments:

- If $\nabla^2_{xx} L(x,\lambda) \not\succ 0$, we may replace $\nabla^2_{xx} L(x,\lambda)$ by $B$ in (QP), where $B$ is a symmetric approximation of $\nabla^2_{xx} L(x,\lambda)$ that satisfies $B \succ 0$. A quasi-Newton approximation $B$ of $\nabla^2_{xx} L(x,\lambda)$ may be used. (Example SQP quasi-Newton solver: SNOPT.)
- The QP subproblem may lack feasible solutions. This may be overcome by introducing elastic variables. This is not covered here.
- We have shown local convergence properties. To obtain convergence from an arbitrary initial point we may utilize a merit function and use a linesearch or trust-region strategy.
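One standard way to keep a quasi-Newton approximation $B \succ 0$ is Powell's damped BFGS update; the sketch below is the textbook formula, not the actual update of any particular solver such as SNOPT.

```python
import numpy as np

def damped_bfgs(B, s, y):
    """Powell's damped BFGS update: keep B symmetric positive definite
    even when the curvature s'y along the step is small or negative.
    s = x_new - x;  y = difference of Lagrangian gradients."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= 0.2 * sBs else 0.8 * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs  # damped gradient difference
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)

# One update from B = I with adequate curvature (theta = 1):
B = damped_bfgs(np.eye(2), np.array([1.0, 0.0]), np.array([0.5, 0.0]))
```

When no damping is needed ($\theta = 1$) this is the ordinary BFGS update, and the new $B$ satisfies the secant condition $Bs = y$.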
Example problem

Consider a small example problem (P) in two variables: a convex quadratic objective is minimized subject to one nonlinear equality constraint and the bounds $x_1 \ge 0$, $x_2 \ge 0$, $x \in \mathbb{R}^2$.

Graphical illustration of example problem

[Figure: contours of the objective and the constraints, with the optimal solution $x^*$ and multiplier vector $\lambda^*$ marked.]
More informationLectures 9 and 10: Constrained optimization problems and their optimality conditions
Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained
More informationInfeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization
Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationExam in TMA4180 Optimization Theory
Norwegian University of Science and Technology Department of Mathematical Sciences Page 1 of 11 Contact during exam: Anne Kværnø: 966384 Exam in TMA418 Optimization Theory Wednesday May 9, 13 Tid: 9. 13.
More information3E4: Modelling Choice. Introduction to nonlinear programming. Announcements
3E4: Modelling Choice Lecture 7 Introduction to nonlinear programming 1 Announcements Solutions to Lecture 4-6 Homework will be available from http://www.eng.cam.ac.uk/~dr241/3e4 Looking ahead to Lecture
More informationA SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION
A SHIFTED RIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OTIMIZATION hilip E. Gill Vyacheslav Kungurtsev Daniel. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-18-1 February 1, 2018
More informationQuadratic Programming
Quadratic Programming Outline Linearly constrained minimization Linear equality constraints Linear inequality constraints Quadratic objective function 2 SideBar: Matrix Spaces Four fundamental subspaces
More informationPenalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way.
AMSC 607 / CMSC 878o Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 3: Penalty and Barrier Methods Dianne P. O Leary c 2008 Reference: N&S Chapter 16 Penalty and Barrier
More informationMore on Lagrange multipliers
More on Lagrange multipliers CE 377K April 21, 2015 REVIEW The standard form for a nonlinear optimization problem is min x f (x) s.t. g 1 (x) 0. g l (x) 0 h 1 (x) = 0. h m (x) = 0 The objective function
More informationNLPJOB : A Fortran Code for Multicriteria Optimization - User s Guide -
NLPJOB : A Fortran Code for Multicriteria Optimization - User s Guide - Address: Prof. K. Schittkowski Department of Computer Science University of Bayreuth D - 95440 Bayreuth Phone: (+49) 921 557750 E-mail:
More informationNUMERICAL OPTIMIZATION. J. Ch. Gilbert
NUMERICAL OPTIMIZATION J. Ch. Gilbert Numerical optimization (past) The discipline deals with the classical smooth (nonconvex) problem min {f(x) : c E (x) = 0, c I (x) 0}. Applications: variable added
More informationc 2005 Society for Industrial and Applied Mathematics
SIAM J. OPTIM. Vol. 15, No. 3, pp. 863 897 c 2005 Society for Industrial and Applied Mathematics A GLOBALLY CONVERGENT LINEARLY CONSTRAINED LAGRANGIAN METHOD FOR NONLINEAR OPTIMIZATION MICHAEL P. FRIEDLANDER
More information1. Introduction. In this paper we discuss an algorithm for equality constrained optimization problems of the form. f(x) s.t.
AN INEXACT SQP METHOD FOR EQUALITY CONSTRAINED OPTIMIZATION RICHARD H. BYRD, FRANK E. CURTIS, AND JORGE NOCEDAL Abstract. We present an algorithm for large-scale equality constrained optimization. The
More informationMiscellaneous Nonlinear Programming Exercises
Miscellaneous Nonlinear Programming Exercises Henry Wolkowicz 2 08 21 University of Waterloo Department of Combinatorics & Optimization Waterloo, Ontario N2L 3G1, Canada Contents 1 Numerical Analysis Background
More informationA stabilized SQP method: superlinear convergence
Math. Program., Ser. A (2017) 163:369 410 DOI 10.1007/s10107-016-1066-7 FULL LENGTH PAPER A stabilized SQP method: superlinear convergence Philip E. Gill 1 Vyacheslav Kungurtsev 2 Daniel P. Robinson 3
More informationArc Search Algorithms
Arc Search Algorithms Nick Henderson and Walter Murray Stanford University Institute for Computational and Mathematical Engineering November 10, 2011 Unconstrained Optimization minimize x D F (x) where
More informationOptimization Methods
Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available
More information8. Constrained Optimization
8. Constrained Optimization Daisuke Oyama Mathematics II May 11, 2018 Unconstrained Maximization Problem Let X R N be a nonempty set. Definition 8.1 For a function f : X R, x X is a (strict) local maximizer
More informationSurvey of NLP Algorithms. L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA
Survey of NLP Algorithms L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA NLP Algorithms - Outline Problem and Goals KKT Conditions and Variable Classification Handling
More information1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:
Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion
More information