Numerical Optimization Algorithms

1 Numerical Optimization Algorithms 1. Overview 2. Calculus of Variations 3. Linearized Supersonic Flow 4. Steepest Descent 5. Smoothed Steepest Descent

2 Overview 1 Two Main Categories of Optimization Algorithms: Gradient Based and Non-Gradient Based

3 Overview 2 Non-Gradient Based Only objective function evaluations are used to find the optimum point; the gradient and Hessian of the objective function are not needed. May be able to find the global minimum, BUT requires a large number of design cycles. The non-gradient based family of methods includes genetic algorithms, grid searches, stochastic methods, the nonlinear simplex, etc. In the case of Genetic Algorithms: evaluation of the objective function for an initial set of solutions starts the design process, and the initial set is typically very LARGE. Able to handle integer variables such as the number of vertical tails, the number of engines and other integer parameters. Able to seek the optimum point for objective functions that do not have smooth first or second derivatives.

4 Overview 3 Gradient Based Requires the existence of continuous first derivatives of the objective function, and possibly higher derivatives. Generally requires a much smaller number of design cycles to converge to an optimum compared to non-gradient based methods; however, only convergence to a local minimum is guaranteed. Simple gradient-based methods only require the gradient of the objective function but usually need on the order of N² iterations or more, where N is the number of design variables. Methods that use the Hessian (Quasi-Newton) generally require only on the order of N iterations.

5 Overview 4 Gradient or Non-Gradient Based? Two-Step Approach: 1. Use a low-fidelity method (Panel Method, Euler) together with a Non-Gradient Based method in the Conceptual Design Stage. 2. Use a higher-fidelity method (Navier-Stokes) together with a Gradient Based method to refine the design. The proper combination of the different flow solvers with the various optimization algorithms is still an OPEN research topic.

6 Calculus of Variations 1 Consider a class of optimization problems for which a curve y(x) is to be chosen to minimize a cost function described by
$$ I = \int_{x_0}^{x_1} F(x, y, y')\, dx, $$
where F is an arbitrary function that is continuous and twice-differentiable. The function F depends on x, y, and y', where y(x) is the trajectory to be optimized, assumed continuous and differentiable, and y' denotes the derivative of y with respect to x. Under a variation $\delta y$, the first variation of the cost function can be expressed as
$$ \delta I = \int_{x_0}^{x_1} \left( \frac{\partial F}{\partial y}\,\delta y + \frac{\partial F}{\partial y'}\,\delta y' \right) dx. $$
Expanding by integrating the second term by parts,
$$ \delta I = \int_{x_0}^{x_1} \frac{\partial F}{\partial y}\,\delta y\, dx + \left[ \frac{\partial F}{\partial y'}\,\delta y \right]_{x_0}^{x_1} - \int_{x_0}^{x_1} \frac{d}{dx}\!\left(\frac{\partial F}{\partial y'}\right) \delta y\, dx. $$

7 Calculus of Variations 2 Assuming fixed end points, the variations of y at $x_0$ and $x_1$ are zero, $\delta y(x_0) = \delta y(x_1) = 0$, so that
$$ \delta I = \int_{x_0}^{x_1} \left( \frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} \right) \delta y\, dx = \int_{x_0}^{x_1} G\, \delta y\, dx, $$
where G may be recognized as the gradient of the cost function,
$$ G = \frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}. $$
A further variation of the gradient then gives
$$ \delta G = \frac{\partial G}{\partial y}\,\delta y + \frac{\partial G}{\partial y'}\,\delta y' + \frac{\partial G}{\partial y''}\,\delta y'', \qquad \text{or} \qquad \delta G = A\,\delta y, $$
where A is the Hessian operator. Thus the Hessian can be expressed as the differential operator
$$ A = \frac{\partial G}{\partial y} + \frac{\partial G}{\partial y'}\frac{d}{dx} + \frac{\partial G}{\partial y''}\frac{d^2}{dx^2}. \qquad (1) $$
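As a quick illustration of formula (1) (an added example, not from the slides), take the simple integrand $F = \tfrac{1}{2}(y'^2 + y^2)$. Then
$$ G = \frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = y - y'', \qquad \delta G = \delta y - \delta y'' = A\,\delta y, \qquad A = 1 - \frac{d^2}{dx^2}, $$
so the Hessian of this functional is the differential operator $1 - d^2/dx^2$, which is positive definite for variations that vanish at the end points.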

8 Linearized Supersonic Flow 1 In this example, we explore this concept by deriving the gradient and Hessian operator for linearized supersonic flow. Consider a linearized supersonic flow over a profile with a height y(x), where y is continuous and twice-differentiable. The surface pressure can be defined as
$$ p - p_\infty = \frac{\rho q^2}{\sqrt{M^2 - 1}}\,\frac{dy}{dx}, $$
where $\rho q^2/\sqrt{M^2 - 1}$ is a constant and $p_\infty$ is the freestream pressure. Next consider an inverse problem with cost function
$$ I = \frac{1}{2}\int_B (p - p_t)^2\, dx, $$
where $p_t$ is the target surface pressure. The variations of the surface pressure and of the cost function under a profile variation $\delta y$ are
$$ \delta p = \frac{\rho q^2}{\sqrt{M^2 - 1}}\,\frac{d}{dx}\,\delta y \qquad \text{and} \qquad \delta I = \int_B (p - p_t)\,\delta p\, dx. $$

9 Linearized Supersonic Flow 2 Substitute the variation of the pressure into the equation for the variation of the cost function and integrate by parts to obtain
$$ \delta I = \int_B (p - p_t)\,\frac{\rho q^2}{\sqrt{M^2 - 1}}\,\frac{d}{dx}\,\delta y\, dx = -\int_B \frac{\rho q^2}{\sqrt{M^2 - 1}}\,\frac{d}{dx}(p - p_t)\,\delta y\, dx. $$
The gradient can then be defined as
$$ g = -\frac{\rho q^2}{\sqrt{M^2 - 1}}\,\frac{d}{dx}(p - p_t). $$

10 Linearized Supersonic Flow 3 To form the Hessian, take a variation of the gradient and substitute the expression for $\delta p$:
$$ \delta g = -\frac{\rho q^2}{\sqrt{M^2 - 1}}\,\frac{d}{dx}\,\delta p = -\frac{\rho^2 q^4}{M^2 - 1}\,\frac{d^2}{dx^2}\,\delta y. $$
Thus the Hessian for the inverse design of the linearized supersonic flow problem can be expressed as the differential operator
$$ A = -\frac{\rho^2 q^4}{M^2 - 1}\,\frac{d^2}{dx^2}. \qquad (2) $$
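As an aside, a minimal NumPy sketch of how this gradient could drive a steepest-descent inverse design (not from the slides: the grid, the target bump, the constant C standing for $\rho q^2/\sqrt{M^2-1}$, and the step size are all illustrative assumptions):

```python
import numpy as np

# Steepest-descent inverse design for the linearized supersonic model (a sketch).
C = 1.0                                   # stands in for rho*q^2/sqrt(M^2 - 1)
N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def pressure(y):
    """Surface pressure perturbation p - p_inf = C * dy/dx (central differences)."""
    return C * np.gradient(y, dx)

y_target = 0.05 * np.sin(np.pi * x)       # assumed target profile
p_target = pressure(y_target)

y = np.zeros_like(x)                      # initial guess: flat profile
alpha = 0.4 * dx**2 / C**2                # small step that respects explicit stability

for it in range(20000):
    dp = pressure(y) - p_target
    cost = 0.5 * np.sum(dp**2) * dx       # I = 1/2 int (p - p_t)^2 dx
    if cost < 1e-8:
        break
    g = -C * np.gradient(dp, dx)          # gradient g = -C d/dx (p - p_t)
    g[0] = g[-1] = 0.0                    # end points held fixed
    y -= alpha * g                        # steepest-descent step  delta y = -alpha g

print(f"iterations: {it}, final cost: {cost:.2e}")
```

Even on this toy problem plain steepest descent needs thousands of iterations; this is the slow convergence that the smoothed descent discussed later is designed to remove.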

11 Brachistochrone 1 Brachistochrone Problem: find the minimum time taken by a particle traversing a path y(x) connecting initial and final points $(x_0, y_0)$ and $(x_1, y_1)$, subject only to the force of gravity. The total time is given by
$$ T = \int_{x_0}^{x_1} \frac{ds}{v}, $$
where the velocity of a particle starting from rest and falling under the influence of gravity is
$$ v = \sqrt{2gy}, $$
and the element of arc length is
$$ ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\, dx = \sqrt{1 + y'^2}\, dx. $$

12 Brachistochrone 2 Substituting for v and ds yields
$$ T = \int_{x_0}^{x_1} \frac{\sqrt{1 + y'^2}}{\sqrt{2gy}}\, dx = \frac{1}{\sqrt{2g}} \int_{x_0}^{x_1} \sqrt{\frac{1 + y'^2}{y}}\, dx = \frac{I}{\sqrt{2g}}. $$
Therefore
$$ I = \int_{x_0}^{x_1} \sqrt{\frac{1 + y'^2}{y}}\, dx = \int_{x_0}^{x_1} F(y, y')\, dx. $$
From the calculus of variations,
$$ G = \frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}. $$
Computing the partial derivatives of F with respect to y and y' and substituting into the gradient formula produces
$$ G = -\frac{\sqrt{1 + y'^2}}{2\sqrt{y^3}} - \frac{d}{dx}\left( \frac{y'}{\sqrt{y(1 + y'^2)}} \right). $$
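A quick symbolic check of this gradient (a sketch assuming SymPy is available; the simplified form it prints is the one derived on the next slide):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

# Placeholder symbols for y and y'; substitute y(x) back in before the total d/dx
Y, Yp = sp.symbols('Y Yp', positive=True)
F = sp.sqrt((1 + Yp**2) / Y)                 # brachistochrone integrand F(y, y')

F_y = sp.diff(F, Y)                          # dF/dy
F_yp = sp.diff(F, Yp)                        # dF/dy'

subs = {Y: y, Yp: sp.diff(y, x)}
G = F_y.subs(subs) - sp.diff(F_yp.subs(subs), x)   # G = dF/dy - d/dx dF/dy'

print(sp.simplify(G))
# Should print an expression equivalent to
#   -(1 + y'**2 + 2*y*y'') / (2*(y*(1 + y'**2))**(3/2))
```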

13 Brachistochrone 3 The expression for the gradient can then be simplified to
$$ G = -\frac{1 + y'^2 + 2yy''}{2\left( y(1 + y'^2) \right)^{3/2}}. $$
Since F does not depend explicitly on x (so $F_{y'x} = 0$),
$$ G = \frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = F_y - F_{y'x} - F_{y'y}\,y' - F_{y'y'}\,y'', $$
and therefore
$$ y'G = F_y\,y' - F_{y'y}\,y'^2 - F_{y'y'}\,y'\,y'' = \frac{d}{dx}\bigl(F - y'F_{y'}\bigr). $$
On an optimal path, G = 0, so
$$ y'G = \frac{d}{dx}\bigl(F - y'F_{y'}\bigr) = 0, \qquad \text{or} \qquad F - y'F_{y'} = \text{const}. $$

14 Brachistochrone 4 The expression can then be expanded to
$$ F - y'F_{y'} = \sqrt{\frac{1 + y'^2}{y}} - \frac{y'^2}{\sqrt{y(1 + y'^2)}} = \frac{1}{\sqrt{y(1 + y'^2)}} = \text{const}, $$
so that
$$ y(1 + y'^2) = \text{const}. $$
The classical solution can be obtained by the substitution $y(t) = C \sin^2\!\left(\tfrac{t}{2}\right)$ into the above equation, where C is a constant:
$$ y(1 + y'^2) = C \;\Rightarrow\; y'^2 = \frac{C - y}{y} = \frac{C - C\sin^2(t/2)}{C\sin^2(t/2)} = \frac{\cos^2(t/2)}{\sin^2(t/2)} \;\Rightarrow\; y' = \cot\!\left(\tfrac{t}{2}\right). $$

15 Brachistochrone 5 Finally, the optimal path can be derived by substituting $y' = dy/dx$ into the previous result:
$$ \frac{dy}{dx} = \cot\!\left(\tfrac{t}{2}\right) \;\Rightarrow\; x = \int \tan\!\left(\tfrac{t}{2}\right) dy = \int \tan\!\left(\tfrac{t}{2}\right) \frac{dy}{dt}\, dt = \int C \sin^2\!\left(\tfrac{t}{2}\right) dt, $$
which yields
$$ x(t) = \frac{1}{2}\,C\,(t - \sin t). $$

16 Brachistochrone 6: Continuous Gradient Let the trajectory be represented by the discrete values $y_j = y(x_j)$ at $x_j = j\,\Delta x$, where $\Delta x$ is the mesh interval, $0 \le j \le N + 1$, and N is the number of design variables, which is also the number of interior mesh points. From the gradient obtained through the calculus of variations, the continuous gradient can be computed as
$$ G_j = -\frac{1 + y_j'^2 + 2y_j y_j''}{2\left( y_j (1 + y_j'^2) \right)^{3/2}}, $$
where $y_j'$ and $y_j''$ are evaluated at the discrete points using second-order finite difference approximations
$$ y_j' = \frac{y_{j+1} - y_{j-1}}{2\Delta x}, \qquad y_j'' = \frac{y_{j+1} - 2y_j + y_{j-1}}{\Delta x^2}. $$
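A minimal NumPy sketch of this continuous-gradient evaluation (the mesh size, end values, and straight-line initial guess are illustrative assumptions; the end points are held fixed, so the gradient is only needed at the interior nodes):

```python
import numpy as np

def continuous_gradient(y, dx):
    """Continuous gradient G_j of the brachistochrone functional at interior nodes,
    using second-order central differences for y' and y''."""
    yp  = (y[2:] - y[:-2]) / (2.0 * dx)             # y'_j,  j = 1..N
    ypp = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2  # y''_j, j = 1..N
    yj  = y[1:-1]
    return -(1.0 + yp**2 + 2.0 * yj * ypp) / (2.0 * (yj * (1.0 + yp**2))**1.5)

# Example: N interior points between (0, y0) and (1, y1), with y measured downward
N, y0, y1 = 63, 1e-3, 1.0            # small positive y0 avoids the 1/sqrt(y) singularity
x = np.linspace(0.0, 1.0, N + 2)
y = y0 + (y1 - y0) * x               # straight-line initial guess
G = continuous_gradient(y, x[1] - x[0])
print(G.shape, G[:3])
```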

17 Brachistochrone 7: Discrete Gradient In the discrete approach, I can be approximated using the rectangle rule of integration:
$$ I = \sum_{j=0}^{N} F_{j+\frac{1}{2}}\,\Delta x, \qquad \text{where} \qquad F_{j+\frac{1}{2}} = \sqrt{\frac{1 + y_{j+\frac{1}{2}}'^{\,2}}{y_{j+\frac{1}{2}}}}, \qquad y_{j+\frac{1}{2}} = \frac{1}{2}\left(y_{j+1} + y_j\right), \qquad y_{j+\frac{1}{2}}' = \frac{y_{j+1} - y_j}{\Delta x}. $$
Now the discrete gradient can be evaluated as
$$ G_j = \frac{\partial I}{\partial y_j} = \frac{\Delta x}{2}\left( A_{j+\frac{1}{2}} + A_{j-\frac{1}{2}} \right) - \left( B_{j+\frac{1}{2}} - B_{j-\frac{1}{2}} \right), $$
where
$$ A_{j+\frac{1}{2}} = \left(\frac{\partial F}{\partial y}\right)_{j+\frac{1}{2}} = -\frac{\sqrt{1 + y_{j+\frac{1}{2}}'^{\,2}}}{2\sqrt{y_{j+\frac{1}{2}}^{3}}}, \qquad B_{j+\frac{1}{2}} = \left(\frac{\partial F}{\partial y'}\right)_{j+\frac{1}{2}} = \frac{y_{j+\frac{1}{2}}'}{\sqrt{y_{j+\frac{1}{2}}\left(1 + y_{j+\frac{1}{2}}'^{\,2}\right)}}. $$
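A sketch of the discrete gradient, together with a brute-force finite-difference check of $\partial I/\partial y_j$ (re-using the grid and straight-line initial guess from the previous sketch; the perturbation size and checked node are illustrative):

```python
import numpy as np

def cost(y, dx):
    """Rectangle-rule approximation of I = sum_j F_{j+1/2} dx."""
    ym = 0.5 * (y[1:] + y[:-1])          # y_{j+1/2}
    yp = (y[1:] - y[:-1]) / dx           # y'_{j+1/2}
    return np.sum(np.sqrt((1.0 + yp**2) / ym)) * dx

def discrete_gradient(y, dx):
    """G_j = dI/dy_j at the interior nodes j = 1..N."""
    ym = 0.5 * (y[1:] + y[:-1])
    yp = (y[1:] - y[:-1]) / dx
    A = -np.sqrt(1.0 + yp**2) / (2.0 * np.sqrt(ym**3))   # (dF/dy)_{j+1/2}
    B = yp / np.sqrt(ym * (1.0 + yp**2))                  # (dF/dy')_{j+1/2}
    return 0.5 * dx * (A[1:] + A[:-1]) - (B[1:] - B[:-1])

N, y0, y1 = 63, 1e-3, 1.0
x = np.linspace(0.0, 1.0, N + 2)
dx = x[1] - x[0]
y = y0 + (y1 - y0) * x

G = discrete_gradient(y, dx)

# Finite-difference check at an arbitrary interior node
j, h = 10, 1e-7
yph, ymh = y.copy(), y.copy()
yph[j] += h; ymh[j] -= h
fd = (cost(yph, dx) - cost(ymh, dx)) / (2.0 * h)
print(G[j - 1], fd)    # the two values should agree to several digits
```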

18 Steepest Descent 1 Line search methods require the algorithm to choose a direction p and search along this direction from the current iterate to obtain a new iterate with a lower function value. Once the direction is chosen, a step length α multiplies the search direction to advance the optimization to the next iterate. In order to obtain the search direction p and the step length α, we may employ Taylor's theorem. First, define the objective function as f(x); then the optimization problem can be stated as
$$ \min_{x} f(x), $$
where $x \in \mathbb{R}^n$ is a real vector with $n \ge 1$ components and $f : \mathbb{R}^n \to \mathbb{R}$ is a smooth function. Let p be the search direction. Then by Taylor's theorem,
$$ f(x + \alpha p) = f(x) + \alpha\, p^T \nabla f + \tfrac{1}{2}\alpha^2\, p^T \nabla^2 f(x + t p)\, p + \ldots $$
for some $t \in (0, \alpha)$.

19 Steepest Descent 2 From the Taylor expansion, the second term $p^T \nabla f$ is the rate of change of f along the search direction p. The last term contains the expression $\nabla^2 f(x + tp)$, which corresponds to the Hessian matrix. The direction p that provides the most rapid decrease in the objective function f(x) is the solution of the following optimization problem:
$$ \min_{p}\; p^T \nabla f, \qquad \text{subject to} \qquad \|p\| = 1. $$
With $\|p\| = 1$, the expression $p^T \nabla f$ can be written as
$$ p^T \nabla f = \|p\|\,\|\nabla f\|\cos\theta = \|\nabla f\|\cos\theta, $$
where θ is the angle between the search direction p and the gradient $\nabla f$. The above expression attains its minimum value when $\cos\theta$ takes the value −1.

20 Steepest Descent 3 Therefore, the equation can be further simplified to yield an expression for the steepest-descent search direction p:
$$ p^T \nabla f = -\|\nabla f\|, \qquad p = -\frac{\nabla f}{\|\nabla f\|}. $$
Accordingly, a simple optimization algorithm can be defined by setting the search direction p to the negative of the gradient at every iteration:
$$ p = -\nabla f. $$
With a line search method the step size α is chosen such that the maximum reduction of the objective function f(x) is attained. The vector x is then updated by the following expression:
$$ x_{n+1} = x_n - \alpha \nabla f. $$
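A minimal sketch of this update rule in code (the quadratic test function, the starting point, and the backtracking parameters are illustrative assumptions, not from the slides):

```python
import numpy as np

def f(x):          # illustrative quadratic objective
    return 0.5 * x @ (A @ x) - b @ x

def grad(x):
    return A @ x - b

A = np.array([[3.0, 1.0], [1.0, 2.0]])    # symmetric positive definite
b = np.array([1.0, 1.0])

x = np.zeros(2)
for it in range(200):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    p = -g                                 # steepest-descent direction
    alpha = 1.0
    while f(x + alpha * p) > f(x) - 0.5 * alpha * (g @ g):   # backtracking line search
        alpha *= 0.5
    x = x + alpha * p                      # x_{n+1} = x_n - alpha * grad f

print(it, x, np.linalg.solve(A, b))        # converges to the exact minimizer A^{-1} b
```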

21 Steepest Descent 4 An alternative approach is to try to follow the continuous path of steepest descent in a sequence of many small steps. The equation above can be rearranged as
$$ \frac{x_{n+1} - x_n}{\alpha} = -\nabla f. $$
In the limit as $\alpha \to 0$, this reduces to
$$ \frac{\partial x}{\partial t} = -\nabla f, \qquad (3) $$
where α is the time step in a forward Euler discretization.

22 Smoothed Steepest Descent 1 Let x represent the design variable, and $\nabla f$ the gradient. Instead of making the step
$$ \delta x = \alpha p = -\alpha \nabla f, $$
we replace the gradient $\nabla f$ by a smoothed value $\overline{\nabla f}$. To apply smoothing in the x direction, the smoothed gradient $\overline{\nabla f}$ may be calculated from a discrete approximation to
$$ \overline{\nabla f} - \frac{\partial}{\partial x}\,\epsilon\,\frac{\partial}{\partial x}\,\overline{\nabla f} = \nabla f, \qquad (4) $$
where ε is the smoothing parameter. Then the first-order change in the cost function is
$$ \delta f = \int \nabla f\,\delta x\, dx = -\alpha \int \left( \overline{\nabla f} - \frac{\partial}{\partial x}\,\epsilon\,\frac{\partial}{\partial x}\,\overline{\nabla f} \right) \overline{\nabla f}\, dx = -\alpha \int \overline{\nabla f}^{\,2}\, dx + \alpha \int \left( \frac{\partial}{\partial x}\,\epsilon\,\frac{\partial}{\partial x}\,\overline{\nabla f} \right) \overline{\nabla f}\, dx. $$

23 Smoothed Steepest Descent 2 Now, integrating the second integral by parts,
$$ \delta f = -\alpha \int \overline{\nabla f}^{\,2}\, dx + \alpha \left[ \epsilon\,\overline{\nabla f}\,\frac{\partial \overline{\nabla f}}{\partial x} \right] - \alpha \int \epsilon \left( \frac{\partial \overline{\nabla f}}{\partial x} \right)^{2} dx = -\alpha \int \left( \overline{\nabla f}^{\,2} + \epsilon \left( \frac{\partial \overline{\nabla f}}{\partial x} \right)^{2} \right) dx < 0, $$
where the boundary term in the middle is zero if the end points of the smoothed gradient are assigned zero values. If ε is positive, the variation of the objective function is less than zero, and this assures an improvement whenever α is positive, unless $\nabla f$ (and hence $\overline{\nabla f}$) is zero. Smoothing ensures that each new shape in the optimization sequence remains smooth. It also acts as a preconditioner, which allows the use of much larger steps, and leads to a large reduction in the number of design iterations needed for convergence. A larger smoothing parameter allows a larger time step to be used and thus accelerates the convergence.
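A sketch of the implicit smoothing step (4) on a uniform mesh (the discretization, the zero end conditions, and the demonstration gradient are assumptions; in practice g would be the design gradient, e.g. from the brachistochrone sketches above):

```python
import numpy as np

def smooth_gradient(g, eps, dx):
    """Solve (I - eps d^2/dx^2) gbar = g on the interior nodes with gbar = 0 at the ends,
    i.e. a discrete form of equation (4)."""
    n = g.size
    c = eps / dx**2
    A = np.diag((1.0 + 2.0 * c) * np.ones(n)) \
        + np.diag(-c * np.ones(n - 1), 1) \
        + np.diag(-c * np.ones(n - 1), -1)        # tridiagonal smoothing operator
    return np.linalg.solve(A, g)

# Demonstration on a noisy gradient: smoothing damps the high-frequency content
dx = 1.0 / 64
xj = np.arange(1, 64) * dx
g = np.sin(np.pi * xj) + 0.3 * np.sin(40 * np.pi * xj)   # smooth part + oscillatory part
gbar = smooth_gradient(g, eps=5.0 * dx**2, dx=dx)

# The smooth component passes nearly unchanged, the oscillatory one is strongly damped
print(np.max(np.abs(g - np.sin(np.pi * xj))), np.max(np.abs(gbar - np.sin(np.pi * xj))))
```

The descent update then uses $\delta x = -\alpha\,\overline{\nabla f}$ in place of $-\alpha\,\nabla f$, which is what permits the larger steps described above.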

24 Smoothed Steepest Descent 3 Jameson and Vassberg have shown that the implicit smoothing technique corresponds to an implicit time-stepping scheme for the descent equation (3) if the smoothing parameter ε is chosen appropriately. Consider a parabolic equation of the form
$$ \frac{\partial x}{\partial t} = \pi^2 \frac{\partial^2 x}{\partial y^2}. $$
A second-order implicit discretization is
$$ -\frac{\phi}{2}\,\delta x_{k-1} + (1 + \phi)\,\delta x_k - \frac{\phi}{2}\,\delta x_{k+1} = \phi\left( x^n_{k-1} - 2x^n_k + x^n_{k+1} \right), \qquad \text{where} \qquad \phi = \frac{\pi^2 \Delta t}{\Delta y^2}. $$
This corresponds exactly to smoothing the correction with the formula $\epsilon = \pi^2 \Delta t / 2$. Their results show that the number of iterations required by the smoothing technique is similar to that of the implicit time-stepping scheme, and both approaches perform better than simple steepest descent and Quasi-Newton methods by a large margin.

25 Smoothed Steepest Descent 4 For some problems, such as those arising from the calculus of variations, the implicit smoothing technique can be used to implement the Newton method. In a Newton method, the gradient is driven to zero based on the linearization
$$ g(y + \delta y) = g(y) + A\,\delta y, $$
where A is the Hessian. In the case of the calculus of variations, a Newton step can be achieved by solving
$$ A\,\delta y = \left( \frac{\partial G}{\partial y} + \frac{\partial G}{\partial y'}\frac{d}{dx} + \frac{\partial G}{\partial y''}\frac{d^2}{dx^2} \right) \delta y = -g, $$
since the Hessian can be represented by the differential operator (1). Thus the correct choice of smoothing in equation (4) approximates the Newton step, resulting in quadratic convergence, independent of the number of mesh intervals.
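A sketch of such a Newton step for the simple model functional $F = \tfrac{1}{2}(y'^2 + y^2)$ used in the earlier illustration (purely illustrative: for that functional $G = y - y''$ and $A = 1 - d^2/dx^2$, so the Newton system is the same kind of tridiagonal solve as the smoothing equation):

```python
import numpy as np

N = 63
dx = 1.0 / (N + 1)
x = np.arange(1, N + 1) * dx                   # interior nodes, y = 0 at both ends

def second_diff(y):
    """Discrete y'' with zero boundary values."""
    ypad = np.concatenate(([0.0], y, [0.0]))
    return (ypad[2:] - 2.0 * ypad[1:-1] + ypad[:-2]) / dx**2

y = np.sin(np.pi * x) + 0.5 * np.random.rand(N)   # arbitrary starting shape

g = y - second_diff(y)                         # gradient G = y - y''
A = np.eye(N) * (1.0 + 2.0 / dx**2) \
    + np.diag(-np.ones(N - 1) / dx**2, 1) \
    + np.diag(-np.ones(N - 1) / dx**2, -1)     # Hessian operator 1 - d^2/dx^2
dy = np.linalg.solve(A, -g)                    # Newton step: A dy = -g
y_new = y + dy

print(np.linalg.norm(y - second_diff(y)), np.linalg.norm(y_new - second_diff(y_new)))
# For this quadratic functional a single Newton step drives the gradient to
# (machine-precision) zero, independent of the number of mesh intervals.
```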
