Introduction to sequential quadratic programming

Mark S. Gockenbach

Introduction

Sequential quadratic programming (SQP) methods attempt to solve a nonlinear program directly rather than convert it to a sequence of unconstrained minimization problems. To make this introduction as simple as possible, I will begin by discussing the SQP framework for equality-constrained NLPs. The basic idea is analogous to Newton's method for unconstrained minimization: at each step, a local model of the optimization problem is constructed and solved, yielding a step (hopefully) toward the solution of the original problem. In unconstrained minimization, only the objective function must be approximated, and the local model is quadratic. In the NLP

    min f(x)  s.t.  g(x) = 0,

both the objective function and the constraint must be modeled. An SQP method uses a quadratic model for the objective function and a linear model of the constraint. A nonlinear program in which the objective function is quadratic and the constraints are linear is called a quadratic program (QP). An SQP method solves a QP at each iteration.

Given a current estimate x^(k) of a solution x*, g can be approximated by

    g(x^(k) + p) ≈ ∇g(x^(k))^T p + g(x^(k)),

and so the constraint g(x) = 0 is replaced by

    ∇g(x^(k))^T p + g(x^(k)) = 0.

At first glance, one would expect that the quadratic objective function for the model problem would be the Taylor approximation to f:

    f(x^(k) + p) ≈ f(x^(k)) + ∇f(x^(k)) · p + (1/2) p · ∇²f(x^(k)) p.

However, this would be the wrong choice, because the curvature of the constraints must be captured by the model problem. I demonstrate this by an example below.

If λ* is the Lagrange multiplier corresponding to a local minimizer x* of

    min f(x)  s.t.  g(x) = 0,

then the Lagrangian ℓ(x, λ) = f(x) − λ · g(x) has the property that ℓ(x, λ*) = f(x) for all feasible x. It follows that

    min ℓ(x, λ*)  s.t.  g(x) = 0

also has x* as a local minimizer.
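Both observations are easy to verify numerically. The following sketch (Python with NumPy; the particular f and g are illustrative choices, the same ones used in Example 1 below) checks that the Lagrangian agrees with f at feasible points for an arbitrary multiplier, and that the constraint linearization is accurate to second order.

```python
import numpy as np

# Two quick numerical checks of the facts used above (illustrative f and g):
#   1. For feasible x (g(x) = 0), the Lagrangian l(x, lam) = f(x) - lam*g(x)
#      equals f(x), no matter what lam is.
#   2. The linearization g(x) + grad_g(x)^T p matches g(x + p) to O(||p||^2).

f = lambda x: (x[1] - 2.0) ** 2 - x[0] ** 2
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0        # feasible set: the unit circle
grad_g = lambda x: 2.0 * x
lagr = lambda x, lam: f(x) - lam * g(x)

# Check 1: l(x, lam) == f(x) at feasible points, for an arbitrary lam.
for t in np.linspace(0.0, 2.0 * np.pi, 8):
    x = np.array([np.sin(t), np.cos(t)])         # g(x) = 0 exactly
    assert abs(lagr(x, lam=-17.3) - f(x)) < 1e-12

# Check 2: the linearization error shrinks like ||p||^2.
x = np.array([0.5, 0.5])
for h in (1e-1, 1e-2, 1e-3):
    p = h * np.array([1.0, -2.0])
    err = abs(g(x + p) - (g(x) + grad_g(x) @ p))
    print(f"h = {h:.0e}: linearization error = {err:.3e}")   # ~ 5 h^2
```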

Of course, λ* is typically not known, but an algorithm can approximate λ* as it approximates x* (as the augmented Lagrangian method does, for example). Given x^(k) and λ^(k),

    ℓ(x^(k) + p, λ^(k)) ≈ (1/2) p · ∇²ℓ(x^(k), λ^(k)) p + ∇ℓ(x^(k), λ^(k)) · p + ℓ(x^(k), λ^(k))   (for p near 0).

I will show below that solving

    min (1/2) p · ∇²ℓ(x^(k), λ^(k)) p + ∇ℓ(x^(k), λ^(k)) · p + ℓ(x^(k), λ^(k))
    s.t. ∇g(x^(k))^T p + g(x^(k)) = 0

yields improved values of x^(k) and λ^(k) (at least when x^(k) and λ^(k) are close to x* and λ*, respectively). First, however, I give an example that illustrates the necessity of using the Lagrangian to define the quadratic programming subproblems.

Example 1. Define f : R² → R and g : R² → R by

    f(x) = (x_2 − 2)² − x_1²,   g(x) = x_1² + x_2² − 1.

Then the solution of the nonlinear program

    min f(x)  s.t.  g(x) = 0

is x* = (0, 1), and the corresponding Lagrange multiplier is λ* = −1, as can be easily checked. Taking x^(0) = (1/2, 1/2), I form the QP

    min (1/2) p · ∇²f(x^(0)) p + ∇f(x^(0)) · p + f(x^(0))
    s.t. ∇g(x^(0))^T p + g(x^(0)) = 0,

or

    min −p_1² + p_2² − p_1 − 3p_2 + 2
    s.t. p_1 + p_2 − 1/2 = 0,

in hopes that the solution p^(0) will define a step towards x*, allowing me to define x^(1) = x^(0) + p^(0). However, the quadratic program is unbounded below and hence has no solution. This is clearly seen in Figure 1, which shows the contours of the quadratic objective function together with the linearized constraint and the original nonlinear constraint. This example shows that the quadratic objective function in the QP subproblem cannot simply approximate f. By contrast, if λ^(0) = −1/2, then

    min (1/2) p · ∇²ℓ(x^(0), λ^(0)) p + ∇ℓ(x^(0), λ^(0)) · p + ℓ(x^(0), λ^(0))
    s.t. ∇g(x^(0))^T p + g(x^(0)) = 0

simplifies to

    min −(1/2)p_1² + (3/2)p_2² − (1/2)p_1 − (5/2)p_2 + 7/4
    s.t. p_1 + p_2 − 1/2 = 0.

This QP is well-posed; it is illustrated in Figure 2, which also shows x^(1) = x^(0) + p^(0), where p^(0) is the solution to the QP. As Figure 2 shows, p^(0) is a good step towards x*.
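The behavior of the two subproblems can be confirmed by eliminating the linear constraint. Below is a minimal sketch using the data of Example 1 (the helper name reduced_quadratic is an illustrative choice of mine): restricted to the linearized constraint, the first QP has zero curvature and nonzero slope, hence is unbounded below, while the Lagrangian-based QP has positive curvature and a unique minimizer.

```python
import numpy as np

# The two QP subproblems from Example 1, each of the form
#   min 0.5 p'Hp + c'p   s.t.   a'p + b = 0.

x0 = np.array([0.5, 0.5])
grad_f = np.array([-2.0 * x0[0], 2.0 * (x0[1] - 2.0)])   # = (-1, -3)
hess_f = np.diag([-2.0, 2.0])                            # indefinite
a = 2.0 * x0                                             # grad g(x0) = (1, 1)
b = x0 @ x0 - 1.0                                        # g(x0) = -1/2

def reduced_quadratic(H, c, a, b):
    """Restrict 0.5 p'Hp + c'p to the line p = p_bar + t*d satisfying the
    constraint; return (alpha, beta) with q(t) = alpha*t^2 + beta*t + const."""
    d = np.array([a[1], -a[0]])          # spans the null space of a' in R^2
    p_bar = -b * a / (a @ a)             # a particular feasible step
    return 0.5 * d @ H @ d, d @ (H @ p_bar + c)

# Naive QP (curvature of f only):
alpha, beta = reduced_quadratic(hess_f, grad_f, a, b)
print("naive QP:", alpha, beta)          # alpha = 0, beta != 0: unbounded below

# Lagrangian QP with lam0 = -1/2: H = hess f - lam0 * hess g, c = grad l.
lam0 = -0.5
H = hess_f - lam0 * 2.0 * np.eye(2)      # hess g = 2I
c = grad_f - lam0 * a
alpha, beta = reduced_quadratic(H, c, a, b)
print("Lagrangian QP:", alpha, beta)     # alpha > 0: convex along the constraint

# Solve the well-posed QP via its KKT system for (p, omega).
K = np.block([[H, -a[:, None]], [-a[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-c, [b]]))
print("p0 =", sol[:2], " x1 =", x0 + sol[:2])   # a step toward x* = (0, 1)
```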

Figure 1: An illustration of the first proposed quadratic program in Example 1. The nonlinear constraint, the linearized constraint, and the contours of the quadratic objective function are shown. The asterisk indicates x^(0) = (1/2, 1/2).

Figure 2: An illustration of the second proposed quadratic program in Example 1. The nonlinear constraint, the linearized constraint, and the contours of the quadratic objective function are shown. The asterisk indicates x^(0) = (1/2, 1/2), while the small circle indicates x^(1) = x^(0) + p^(0).

Relationship of Newton's method to SQP

If x* is a local minimizer and nonsingular point of

    min f(x)  s.t.  g(x) = 0,

and λ* is the corresponding Lagrange multiplier, then

    ∇f(x*) − ∇g(x*)λ* = 0,
    −g(x*) = 0,

and the matrix

    [ ∇²ℓ(x*, λ*)   −∇g(x*) ]
    [ −∇g(x*)^T        0    ]

is nonsingular.

It is therefore reasonable to try to compute x*, λ* by applying Newton's method to the system

    ∇f(x) − ∇g(x)λ = 0,
    −g(x) = 0.

Given an estimate (x^(k), λ^(k)) of (x*, λ*), Newton's method defines (x^(k+1), λ^(k+1)) = (x^(k), λ^(k)) + (p^(k), ω^(k)), where the step (p^(k), ω^(k)) is defined by the linear system

    [ ∇²ℓ(x^(k), λ^(k))   −∇g(x^(k)) ] [ p^(k) ]       [ ∇ℓ(x^(k), λ^(k)) ]
    [ −∇g(x^(k))^T            0      ] [ ω^(k) ]  = −  [ −g(x^(k))        ].   (1)

On the other hand, assuming (x^(k), λ^(k)) is sufficiently close to (x*, λ*) that ∇²ℓ(x^(k), λ^(k)) is positive definite on the null space of ∇g(x^(k))^T, the quadratic program

    min (1/2) p · ∇²ℓ(x^(k), λ^(k)) p + ∇ℓ(x^(k), λ^(k)) · p + ℓ(x^(k), λ^(k))
    s.t. ∇g(x^(k))^T p + g(x^(k)) = 0

is a convex program and has a unique solution p^(k) with Lagrange multiplier ω^(k). This pair is determined by the first-order necessary conditions:

    ∇²ℓ(x^(k), λ^(k)) p^(k) + ∇ℓ(x^(k), λ^(k)) − ∇g(x^(k)) ω^(k) = 0,   (2)
    ∇g(x^(k))^T p^(k) + g(x^(k)) = 0.   (3)

But the system (2)–(3) is the same as system (1). In other words, the SQP method is equivalent to Newton's method applied to the first-order necessary conditions. Therefore, at least locally, the SQP method defines not only a good step from x^(k) towards x*, but also a good step from λ^(k) towards λ*. In fact, under the usual assumptions, (x^(k), λ^(k)) → (x*, λ*) quadratically.

The advantage of the SQP framework over simply applying Newton's method to the first-order necessary conditions is that the optimization framework gives us some basis for modifying the step when (x^(k), λ^(k)) is not sufficiently close to (x*, λ*) that pure Newton's method defines a good step. This is the same reason that, in unconstrained minimization, it is advantageous to think of Newton's method as repeatedly minimizing a quadratic model rather than as trying to find a zero of the gradient. Sequential quadratic programming defines a locally convergent algorithm, and it is in this sense that one can talk about the SQP method. However, different techniques can be used to create a globally convergent iteration, and so there can be many SQP algorithms.

Example 2. Figure 3 illustrates four iterations of the local SQP method applied to the NLP from Example 1, starting from initial estimates x^(0) and λ^(0). The numerical results of nine iterations are summarized in Table 1, which shows x^(k) → x* = (0, 1) and λ^(k) → λ* = −1.

Table 1: Nine iterations of the SQP method applied to Example 1, listing the iterates x^(k) and the multiplier estimates λ^(k). See also Figure 3.
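Because the optimality conditions (2)–(3) coincide with the Newton system (1), the local SQP method amounts to a repeated linear solve. A minimal sketch for the problem from Example 1 (local_sqp is an illustrative name; it prints iterate data of the kind summarized in Table 1):

```python
import numpy as np

# Local SQP for  min f(x) s.t. g(x) = 0  with one equality constraint,
# implemented exactly as the Newton/KKT system (1).
# Problem data: f(x) = (x2-2)^2 - x1^2, g(x) = x1^2 + x2^2 - 1.

def local_sqp(x, lam, n_iter=9):
    for k in range(n_iter):
        grad_f = np.array([-2.0 * x[0], 2.0 * (x[1] - 2.0)])
        grad_g = 2.0 * x
        g_val = x @ x - 1.0
        hess_l = np.diag([-2.0, 2.0]) - lam * 2.0 * np.eye(2)  # hess f - lam*hess g
        grad_l = grad_f - lam * grad_g
        # Solve system (1) for the step (p, omega).
        K = np.block([[hess_l, -grad_g[:, None]],
                      [-grad_g[None, :], np.zeros((1, 1))]])
        rhs = -np.concatenate([grad_l, [-g_val]])
        p_omega = np.linalg.solve(K, rhs)
        x, lam = x + p_omega[:2], lam + p_omega[2]
        print(f"k={k + 1}: x = {x}, lam = {lam:.6f}")
    return x, lam

local_sqp(np.array([0.5, 0.5]), -0.5)   # iterates approach x* = (0, 1), lam* = -1
```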

Figure 3: Four iterations of the local SQP method applied to the NLP from Example 1. The nonlinear constraint, the linearized constraint, and the contours of the quadratic objective function are shown. The asterisk indicates x^(k) and the small circle x^(k+1), for k = 0 (upper left), k = 1 (upper right), k = 2 (lower left), and k = 3 (lower right).

An alternate formulation of the local SQP method

The following calculation suggests a slightly different formulation of the SQP algorithm:

    ∇ℓ(x^(k), λ^(k)) · p = (∇f(x^(k)) − ∇g(x^(k)) λ^(k)) · p
                         = ∇f(x^(k)) · p − ∇g(x^(k)) λ^(k) · p
                         = ∇f(x^(k)) · p − λ^(k) · ∇g(x^(k))^T p
                         = ∇f(x^(k)) · p + λ^(k) · g(x^(k)).

The last equality is valid if p satisfies the constraint

    ∇g(x^(k))^T p + g(x^(k)) = 0.   (4)

It follows that, on the linearized feasible set defined by (4), the two quadratics

    (1/2) p · ∇²ℓ(x^(k), λ^(k)) p + ∇ℓ(x^(k), λ^(k)) · p + ℓ(x^(k), λ^(k))

and

    (1/2) p · ∇²ℓ(x^(k), λ^(k)) p + ∇f(x^(k)) · p + f(x^(k))

differ by a constant.
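This is easy to confirm numerically. The sketch below evaluates both quadratics at several points of the linearized feasible set, using the data of Example 1 at (x^(0), λ^(0)); the difference is the same at every feasible point (in fact zero here, because ℓ(x^(0), λ^(0)) − f(x^(0)) = −λ^(0) g(x^(0)) exactly cancels the difference in the gradient terms):

```python
import numpy as np

# Evaluate both quadratic objectives at (x0, lam0) from Example 1 on a few
# points of the linearized feasible set (4); their difference is constant.

x0, lam0 = np.array([0.5, 0.5]), -0.5
f0 = (x0[1] - 2.0) ** 2 - x0[0] ** 2
g0 = x0 @ x0 - 1.0
grad_f = np.array([-2.0 * x0[0], 2.0 * (x0[1] - 2.0)])
grad_g = 2.0 * x0
H = np.diag([-2.0, 2.0]) - lam0 * 2.0 * np.eye(2)   # hess of the Lagrangian
grad_l = grad_f - lam0 * grad_g
l0 = f0 - lam0 * g0

q_l = lambda p: 0.5 * p @ H @ p + grad_l @ p + l0   # quadratic with grad l
q_f = lambda p: 0.5 * p @ H @ p + grad_f @ p + f0   # quadratic with grad f

d = np.array([grad_g[1], -grad_g[0]])               # spans null space of grad_g'
p_bar = -g0 * grad_g / (grad_g @ grad_g)            # one point satisfying (4)
for t in (-2.0, 0.0, 0.7, 3.1):
    p = p_bar + t * d                               # grad_g @ p + g0 == 0
    print(q_l(p) - q_f(p))                          # same value every time
```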

Therefore the QPs

    min (1/2) p · ∇²ℓ(x^(k), λ^(k)) p + ∇ℓ(x^(k), λ^(k)) · p + ℓ(x^(k), λ^(k))   (5)
    s.t. ∇g(x^(k))^T p + g(x^(k)) = 0   (6)

and

    min (1/2) p · ∇²ℓ(x^(k), λ^(k)) p + ∇f(x^(k)) · p + f(x^(k))   (7)
    s.t. ∇g(x^(k))^T p + g(x^(k)) = 0   (8)

have the same solution p^(k). However, the Lagrange multipliers for the two QPs differ, since the gradients of the objective functions are different. Writing η^(k) for the multiplier of (7)–(8), the optimality conditions for (7)–(8) are

    ∇²ℓ(x^(k), λ^(k)) p^(k) + ∇f(x^(k)) − ∇g(x^(k)) η^(k) = 0,
    ∇g(x^(k))^T p^(k) + g(x^(k)) = 0,

or, equivalently,

    ∇²ℓ(x^(k), λ^(k)) p^(k) − ∇g(x^(k)) η^(k) = −∇f(x^(k)),
    ∇g(x^(k))^T p^(k) + g(x^(k)) = 0.

Adding ∇g(x^(k)) λ^(k) to both sides of the first equation yields

    ∇²ℓ(x^(k), λ^(k)) p^(k) − ∇g(x^(k)) (η^(k) − λ^(k)) = −∇ℓ(x^(k), λ^(k)),   (9)
    ∇g(x^(k))^T p^(k) + g(x^(k)) = 0.   (10)

I have already observed that QPs (5)–(6) and (7)–(8) have the same solution p^(k). Comparing the optimality conditions (2)–(3) and (9)–(10) shows that

    η^(k) − λ^(k) = ω^(k),

that is,

    η^(k) = λ^(k) + ω^(k) = λ^(k+1).

Therefore, when the QP is formulated as (7)–(8), the Lagrange multiplier for the QP is not the step to λ^(k+1); it is actually λ^(k+1) itself.
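The identity η^(k) = λ^(k+1) can be checked by solving both QPs through their KKT systems. A sketch with the data of Example 1 (solve_kkt is an illustrative helper name):

```python
import numpy as np

# Solve both QP formulations at (x0, lam0) from Example 1 and compare the
# multipliers: (5)-(6) returns the multiplier *step* omega, while (7)-(8)
# returns eta = lam0 + omega = lam1 directly.

x0, lam0 = np.array([0.5, 0.5]), -0.5
grad_f = np.array([-2.0 * x0[0], 2.0 * (x0[1] - 2.0)])
grad_g = 2.0 * x0
g0 = x0 @ x0 - 1.0
H = np.diag([-2.0, 2.0]) - lam0 * 2.0 * np.eye(2)
grad_l = grad_f - lam0 * grad_g

def solve_kkt(H, c, a, b):
    """Return (p, multiplier) for  min 0.5 p'Hp + c'p  s.t.  a'p + b = 0."""
    K = np.block([[H, -a[:, None]], [-a[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-c, [b]]))
    return sol[:-1], sol[-1]

p1, omega = solve_kkt(H, grad_l, grad_g, g0)   # QP (5)-(6)
p2, eta = solve_kkt(H, grad_f, grad_g, g0)     # QP (7)-(8)

print(np.allclose(p1, p2))     # True: both QPs give the same step p
print(eta, lam0 + omega)       # equal: eta is lam1 itself
```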

For equality-constrained NLPs, this is the only significant difference between the two formulations of the QP subproblem. However, as I will show, the difference is much more significant for inequality-constrained NLPs, and it is necessary to adopt the version with quadratic objective (7). A final remark is that the constant term in the quadratic objective is irrelevant for determining p^(k) and ω^(k) (or η^(k)), and so it is usually not included. Therefore the objective function for the QP is taken to be

    (1/2) p · ∇²ℓ(x^(k), λ^(k)) p + ∇ℓ(x^(k), λ^(k)) · p

or

    (1/2) p · ∇²ℓ(x^(k), λ^(k)) p + ∇f(x^(k)) · p.
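Putting the pieces together, one step of the local SQP method in formulation (7)–(8), with the constant term dropped, can be sketched as follows for the problem from Example 1 (sqp_step is an illustrative name). Note that the QP multiplier is used directly as the next multiplier estimate λ^(k+1), not as a step:

```python
import numpy as np

# One local SQP step via QP (7)-(8): constant terms dropped, and the QP
# multiplier taken directly as the new multiplier estimate.  This produces
# the same iterates as the Newton form (1).

def sqp_step(x, lam):
    grad_f = np.array([-2.0 * x[0], 2.0 * (x[1] - 2.0)])
    grad_g = 2.0 * x
    g_val = x @ x - 1.0
    H = np.diag([-2.0, 2.0]) - lam * 2.0 * np.eye(2)   # hess of the Lagrangian
    # QP: min 0.5 p'Hp + grad_f'p  s.t.  grad_g'p + g_val = 0.
    K = np.block([[H, -grad_g[:, None]],
                  [-grad_g[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-grad_f, [g_val]]))
    return x + sol[:2], sol[2]      # sol[2] is the QP multiplier = lam_(k+1)

x, lam = np.array([0.5, 0.5]), -0.5
for k in range(4):
    x, lam = sqp_step(x, lam)
    print(x, lam)
```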
