Topic 8c Multi Variable Optimization

Course Instructor
Dr. Raymond C. Rumpf
Office: A-337
Phone: (915) 747-6958
E-Mail: rcrumpf@utep.edu

Topic 8c
Multi Variable Optimization
EE 4386/5301 Computational Methods in EE

Outline
- Mathematical Preliminaries
  - Multivariable functions
  - Scalar and vector fields
  - Gradients and Hessians
  - Revised derivative tests for multiple variables
- Powell's method
- Gradients and Hessians
- Steepest ascent method
- Newton's method for multiple variables

Mathematical Preliminaries

Multivariable Functions
General form for a multivariable function:

f = f(x_1, x_2, \ldots, x_N)

Example #1: a function of two variables, f(x, y) = ...
Example #2: a Gaussian, f(x) = A exp(...)
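Functions like these are straightforward to evaluate in code. A minimal sketch with made-up coefficients, since the slide's exact expressions did not survive transcription:

```python
import numpy as np

# Illustrative multivariable functions; the coefficients are placeholders,
# not the exact examples from the slide.
def f1(x, y):
    """A simple function of two variables."""
    return x**2 + y**2 - x * y

def f2(x, A=1.0, sigma=1.0):
    """A Gaussian of N variables: f(x) = A exp(-|x|^2 / (2 sigma^2))."""
    x = np.asarray(x, dtype=float)
    return A * np.exp(-np.sum(x**2) / (2.0 * sigma**2))

print(f1(1.0, 2.0))       # evaluate at a point (x, y)
print(f2([0.5, -0.5]))    # evaluate the Gaussian at a 2D point
```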

Scalar Field vs. Vector Field
A scalar field assigns a magnitude to every point: f(x, y, z). Example: f(x, y).
A vector field assigns a magnitude and a direction to every point: v(x, y, z).

Isocontour Lines
Isocontour lines trace the paths of equal value. Closely spaced isocontours convey that the function is varying rapidly.

Gradient of a Scalar Field (1 of 3)
We start with a scalar field f(x, y)...

Gradient of a Scalar Field (2 of 3)
...then plot the gradient on top of it. The color in the background is the original scalar field f(x, y).

Gradient of a Scalar Field (3 of 3)
The gradient will always be perpendicular to the isocontour lines.

The Multidimensional Gradient
The standard 3D gradient can be written as

\nabla f(x,y,z) = \frac{\partial f(x,y,z)}{\partial x}\hat{a}_x + \frac{\partial f(x,y,z)}{\partial y}\hat{a}_y + \frac{\partial f(x,y,z)}{\partial z}\hat{a}_z

When more dimensions are involved, we write it as

\nabla f = \begin{bmatrix} \dfrac{\partial f}{\partial x_1} & \dfrac{\partial f}{\partial x_2} & \cdots & \dfrac{\partial f}{\partial x_N} \end{bmatrix}^T

Properties of the Gradient
1. It only makes sense to calculate the gradient of a scalar field.*
2. ∇f points in the direction of the maximum rate of change in f.
3. ∇f at any point is perpendicular to the constant-f surface that passes through that point.
4. The gradient points toward big positive numbers in the scalar field.
* The gradient of a vector field is a tensor called the Jacobian. It is commonly used in coordinate transformations, but is outside the scope of this course.

Numerical Calculation of the Gradient
We may not always have a closed-form expression for our function, so it may not be possible to calculate an analytical expression for the gradient. When this is the case, we can calculate the gradient numerically by finite-differencing each variable:

\nabla f \approx \begin{bmatrix} \dfrac{f(x_1 + \Delta x_1, x_2, \ldots, x_N) - f(x_1 - \Delta x_1, x_2, \ldots, x_N)}{2\,\Delta x_1} \\ \vdots \\ \dfrac{f(x_1, x_2, \ldots, x_N + \Delta x_N) - f(x_1, x_2, \ldots, x_N - \Delta x_N)}{2\,\Delta x_N} \end{bmatrix}

This can be a very expensive computation!
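A minimal sketch of such a numerical gradient in Python/NumPy, assuming central differences with a uniform step (the step size and the differencing scheme are implementation choices, not mandated by the slides):

```python
import numpy as np

def numerical_gradient(f, x, dx=1e-6):
    """Approximate the gradient of a scalar function f at point x
    using central differences.  Costs 2N function evaluations."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for n in range(x.size):
        xp = x.copy(); xp[n] += dx      # step forward in x_n
        xm = x.copy(); xm[n] -= dx      # step backward in x_n
        g[n] = (f(xp) - f(xm)) / (2.0 * dx)
    return g

# Quick check against an analytical gradient:
f = lambda x: x[0]**2 + 3.0 * x[1]**2
print(numerical_gradient(f, [1.0, 2.0]))   # expect approximately [2, 12]
```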

Derivative Tests in Multiple Dimensions
Suppose we have a 2D function f(x, y) with
∂f/∂x = 0 and ∂f/∂y = 0.
Does this indicate a minimum? No!
Figure borrowed from: Steven Chapra, Numerical Methods for Engineers, 7th Ed., McGraw-Hill.

The Hessian
The Hessian describes the curvature of multiple-variable functions. We will use it to determine whether we are really at a maximum or minimum. The Hessian is defined as

\mathbf{H} = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_N} \\ \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2 \partial x_N} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_N \partial x_1} & \dfrac{\partial^2 f}{\partial x_N \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_N^2} \end{bmatrix}

In two dimensions, the Hessian is

\mathbf{H} = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x \partial y} \\ \dfrac{\partial^2 f}{\partial y \partial x} & \dfrac{\partial^2 f}{\partial y^2} \end{bmatrix}, \qquad \det \mathbf{H} = \dfrac{\partial^2 f}{\partial x^2}\dfrac{\partial^2 f}{\partial y^2} - \left(\dfrac{\partial^2 f}{\partial x \partial y}\right)^2
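As a sketch of how this could be evaluated numerically, second-order central differences give the 2D Hessian and its determinant, and the derivative test stated on the next slide can then be applied. The function and point below are made-up examples:

```python
import numpy as np

def hessian_2d(f, x, y, d=1e-4):
    """2x2 Hessian of f(x, y) from second-order central differences."""
    fxx = (f(x + d, y) - 2.0 * f(x, y) + f(x - d, y)) / d**2
    fyy = (f(x, y + d) - 2.0 * f(x, y) + f(x, y - d)) / d**2
    fxy = (f(x + d, y + d) - f(x + d, y - d)
           - f(x - d, y + d) + f(x - d, y - d)) / (4.0 * d**2)
    return np.array([[fxx, fxy], [fxy, fyy]])

def classify(f, x, y):
    """Apply the second-derivative test at a critical point (x, y)."""
    H = hessian_2d(f, x, y)
    detH = np.linalg.det(H)
    if detH > 0 and H[0, 0] > 0:
        return "local minimum"
    if detH > 0 and H[0, 0] < 0:
        return "local maximum"
    if detH < 0:
        return "saddle point"
    return "test is inconclusive"

f = lambda x, y: x**2 - y**2      # a classic saddle at the origin
print(classify(f, 0.0, 0.0))      # -> saddle point
```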

Derivative Tests Revised for Multiple Dimensions
If det H > 0 and ∂²f/∂x² > 0, then f(x, y) has a local minimum.
If det H > 0 and ∂²f/∂x² < 0, then f(x, y) has a local maximum.
If det H < 0, then f(x, y) has a saddle point.

Conjugate Direction (1 of 2)
Suppose we start at two different points, a and b, and use 1D optimizations along parallel directions to arrive at the two extrema x1 and x2. The direction of the line connecting x1 and x2 is called a conjugate direction and is directed toward the maximum.

Conjugate Direction (2 of 2)
Suppose we start at a common point a and use 1D optimizations along two different directions to arrive at the two extrema x1 and x2. The direction of the line connecting x1 and x2 is also a conjugate direction and is directed toward the maximum.

Univariate Search

Algorithm for Basic Univariate Search (a code sketch follows below)
1. Make an initial guess x.
2. Loop over the independent variables:
   1. Fix all other independent variables except x_i.
   2. Perform a 1D optimization on f(x_i) to find the extremum.
3. If not converged, go back to Step 2.

Pattern Direction
After a few passes through all of the independent variables, an overall direction becomes apparent. It is the direction connecting the starting point to the end point. This direction points toward the extremum.
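Here is the sketch referenced in the algorithm above: a minimal univariate (one-variable-at-a-time) search, assuming SciPy's minimize_scalar is used for each 1D optimization (any of the 1D methods from earlier in the course would work). It is written for minimization, so a maximum of f is found by searching on -f:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def univariate_search(f, x0, max_passes=50, tol=1e-6):
    """Cyclic univariate search: optimize one variable at a time.
    Minimizes f; to maximize, pass lambda x: -f(x)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_passes):
        x_old = x.copy()
        for i in range(x.size):
            # 1D optimization over x[i] with all other variables fixed
            def f1d(t, i=i):
                xt = x.copy()
                xt[i] = t
                return f(xt)
            x[i] = minimize_scalar(f1d).x
        if np.linalg.norm(x - x_old) < tol:   # converged?
            break
    return x

f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 3.0)**2 + x[0] * x[1]
print(univariate_search(f, [0.0, 0.0]))
```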

Powell's Method

Pick a starting point x0 and two different starting directions h1 and h2.

Starting at x0, perform a 1D optimization along h1 to find extremum x1.

Starting at x1, perform a 1D optimization along h2 to find extremum x2.

Define h3 to be in the direction connecting x0 to x2.

Starting at x2, perform a 1D optimization along h3 to find extremum x3.

Starting at x3, perform a 1D optimization along h2 to find extremum x4.

Starting at x4, perform a 1D optimization along h3 to find extremum x5.

Define h4 to be in the direction connecting x3 to x5.

Starting at x5, perform a 1D optimization along h4 to find extremum x_opt. This last 1D optimization is guaranteed to find the maximum because Powell showed that h3 and h4 are both conjugate directions.

Algorithm Summary
1. Pick a starting point x0 and two different starting directions h1 and h2.
2. Starting at x0, perform a 1D optimization along h1 to find extremum x1.
3. Starting at x1, perform a 1D optimization along h2 to find extremum x2.
4. Define h3 to be in the direction connecting x0 to x2.
5. Starting at x2, perform a 1D optimization along h3 to find extremum x3.
6. Starting at x3, perform a 1D optimization along h2 to find extremum x4.
7. Starting at x4, perform a 1D optimization along h3 to find extremum x5.
8. Define h4 to be in the direction connecting x3 to x5.
9. Starting at x5, perform a 1D optimization along h4 to find extremum x_opt.

Convergence
Powell's method is quadratically convergent and extremely efficient. If iterated, it will converge in a finite number of iterations if the function is quadratic. Most functions are nearly quadratic near their extrema.
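The nine steps above map almost line-for-line onto code. A minimal two-variable sketch, assuming SciPy's minimize_scalar for each 1D line optimization and written for minimization; it follows the single cycle summarized above rather than the fully general, iterated form of Powell's method:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def line_opt(f, x, h):
    """1D optimization of f along direction h starting from x."""
    t = minimize_scalar(lambda t: f(x + t * h)).x
    return x + t * h

def powell_2d(f, x0, h1, h2):
    """One cycle of Powell's method in 2D (steps 1-9 of the summary)."""
    x0 = np.asarray(x0, dtype=float)
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    x1 = line_opt(f, x0, h1)          # step 2
    x2 = line_opt(f, x1, h2)          # step 3
    h3 = x2 - x0                      # step 4: first conjugate direction
    x3 = line_opt(f, x2, h3)          # step 5
    x4 = line_opt(f, x3, h2)          # step 6
    x5 = line_opt(f, x4, h3)          # step 7
    h4 = x5 - x3                      # step 8: second conjugate direction
    return line_opt(f, x5, h4)        # step 9: final extremum

# Quadratic test function (minimization); one cycle lands on the exact minimum.
f = lambda x: (x[0] - 2.0)**2 + (x[1] + 1.0)**2 + 0.5 * x[0] * x[1]
print(powell_2d(f, x0=[0.0, 0.0], h1=[1.0, 0.0], h2=[0.0, 1.0]))
```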

Steepest Ascent Method

Concept of Steepest Ascent
Suppose we have a function f(x, y). How do we find the maximum?

Concept of Steepest Ascent
Step 1: Pick a starting point x1.

Concept of Steepest Ascent
Step 2: Calculate the gradient ∇f(x1), because it points toward increasing values of f.

Concept of Steepest Ascent
Step 3: Calculate the next point in the direction of the gradient. But how far?

Concept of Steepest Ascent
Step 3: Calculate the next point in the direction of the gradient. But how far? Moving too far along the gradient may cause the algorithm to become unstable and not find the maximum. Not moving far enough will require many iterations to find the maximum. For now, let's choose a constant γ = 0.5:

x2 = x1 + γ ∇f(x1)

Concept of Steepest Ascent
Step 4: Calculate the gradient at the second point, ∇f(x2).

Concept of Steepest Ascent
Step 5: Calculate the next point x3 along the gradient.

Concept of Steepest Ascent
Step 6: Calculate the gradient at point x3, ∇f(x3). The steep gradient made us overshoot the maximum. The choice of γ is important!

Steepest Ascent Method
We wish to minimize the number of times the gradient is calculated. Let's calculate it once and then move in that direction until f(x) stops increasing. At that point, we reevaluate the gradient and repeat in the new direction.
Algorithm
1. Pick a starting point x.
2. Calculate the gradient at this point: g = ∇f(x).
3. If the gradient is zero or less than some tolerance, we are done!
4. Otherwise, move in small increments in the direction of g until f(x) stops increasing: x = x + γg.
   Note: if we think of searching along this direction like a 1D optimization, we can improve efficiency greatly.
5. Go back to Step 2.
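A minimal sketch of this algorithm, assuming the central-difference gradient from earlier and a fixed step size γ; the inner loop keeps stepping along the current gradient until f stops increasing, as described in step 4. The demo function at the bottom is a made-up concave quadratic, not the example on the following slides:

```python
import numpy as np

def numerical_gradient(f, x, dx=1e-6):
    """Central-difference gradient of a scalar function f at point x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for n in range(x.size):
        xp = x.copy(); xp[n] += dx
        xm = x.copy(); xm[n] -= dx
        g[n] = (f(xp) - f(xm)) / (2.0 * dx)
    return g

def steepest_ascent(f, x0, gamma=0.1, tol=1e-3, max_iter=1000):
    """Steepest ascent with a fixed step size gamma."""
    x = np.asarray(x0, dtype=float).copy()
    for it in range(max_iter):
        g = numerical_gradient(f, x)            # step 2
        if np.linalg.norm(g) < tol:             # step 3: converged
            return x, it
        while f(x + gamma * g) > f(x):          # step 4: march along g
            x = x + gamma * g
    return x, max_iter

# Demo on an assumed concave quadratic with its maximum at (1, -2).
f = lambda x: -(x[0] - 1.0)**2 - 2.0 * (x[1] + 2.0)**2
x_opt, iters = steepest_ascent(f, [5.0, 5.0], gamma=0.1)
print(x_opt, iters)
```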

Choice of γ (1 of 5)
Too large a value of γ can cause the algorithm to jump away from the maximum. At best, the algorithm converges on a different maximum.

Choice of γ (2 of 5)
Too large a value of γ can also cause the algorithm to oscillate about the maximum and never converge to it.

Choice of γ (3 of 5)
For this example, γ = 0.5 seems like a very good choice. The best choice of γ depends on the properties of the function. If the function varies wildly, choose a small γ. If the function is rather well behaved, larger values of γ can converge faster.

Choice of γ (4 of 5)
γ = 0.1 seems like another good choice. This is typically the value that I choose at first if nothing else is known.

Choice of γ (5 of 5)
Small values of γ converge very slowly. This can be costly when evaluating the function is slow.

Example (1 of 6)
Problem: Find the maximum of the following function:
f(x, y) = ...
Solution, Step 1: Make an initial guess at the position x1 = (x1, y1).

Example (2 of 6)
Step 2: Calculate the gradient,

\nabla f(x,y) = \frac{\partial f(x,y)}{\partial x}\hat{a}_x + \frac{\partial f(x,y)}{\partial y}\hat{a}_y

and evaluate it at the initial guess to get ∇f(x1).
Step 3: The gradient is not zero, so we are not done.

Example (3 of 6)
Step 4: Move in the direction of the gradient with a chosen step size γ:

x2 = x1 + γ ∇f(x1)

Example (4 of 6)
Step 5: Go back to Step 2.
Step 2: Calculate the gradient at the second point, g2 = ∇f(x2).
Step 3: We are still not done!

Example (5 of 6)
Step 5: Go back to Step 2.
Step 4: Calculate the next point: x3 = x2 + γ g2.

Example (6 of 6)
Step 5: Go back to Step 2.
And so on. After 77 iterations (tolerance of 10^-3), the answer converges.

Newton's Method for Multiple Variables

Newton's Method with Multiple Variables (1 of 2)
We can extend Newton's method to multiple variables using the Hessian. We can write a second-order Taylor series for f(x) near x = x_i:

f(\mathbf{x}) \approx f(\mathbf{x}_i) + \nabla f(\mathbf{x}_i)^T (\mathbf{x} - \mathbf{x}_i) + \tfrac{1}{2} (\mathbf{x} - \mathbf{x}_i)^T \mathbf{H}_i (\mathbf{x} - \mathbf{x}_i)

At an extremum, ∇f(x) = 0. To find this point, we take the gradient of the above expression:

\nabla f(\mathbf{x}) \approx \nabla f(\mathbf{x}_i) + \mathbf{H}_i (\mathbf{x} - \mathbf{x}_i)

Newton's Method with Multiple Variables (2 of 2)
We set our gradient to zero and solve for x:

\nabla f(\mathbf{x}_i) + \mathbf{H}_i (\mathbf{x} - \mathbf{x}_i) = 0 \quad\Rightarrow\quad \mathbf{x} = \mathbf{x}_i - \mathbf{H}_i^{-1} \nabla f(\mathbf{x}_i)

From this, our update equation for Newton's method is

\mathbf{x}_{i+1} = \mathbf{x}_i - \mathbf{H}_i^{-1} \nabla f(\mathbf{x}_i)
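A sketch of the update equation above, with the gradient and Hessian supplied as functions by the caller (the finite-difference versions from earlier could be substituted). Note that the same update converges to whatever stationary point it lands on: a maximum, a minimum, or a saddle. The quadratic test case is a made-up example, and for a quadratic the method converges in a single Newton step:

```python
import numpy as np

def newton_multivariable(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method for multiple variables: x <- x - H^{-1} grad(f)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stationary point found
            break
        # Solve H dx = g rather than forming the inverse explicitly.
        dx = np.linalg.solve(hess(x), g)
        x = x - dx
    return x

# Example: f(x, y) = x^2 + 2 y^2 - 2 x y - 4 x (a convex quadratic, minimum at (4, 2)).
grad = lambda x: np.array([2*x[0] - 2*x[1] - 4.0, 4*x[1] - 2*x[0]])
hess = lambda x: np.array([[2.0, -2.0], [-2.0, 4.0]])
print(newton_multivariable(grad, hess, [0.0, 0.0]))   # -> [4. 2.]
```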

Lecture 8 Optimization

Lecture 8 Optimization 4/9/015 Lecture 8 Optimization EE 4386/5301 Computational Methods in EE Spring 015 Optimization 1 Outline Introduction 1D Optimization Parabolic interpolation Golden section search Newton s method Multidimensional

More information

Vector Calculus Review

Vector Calculus Review Course Instructor Dr. Ramond C. Rumpf Office: A-337 Phone: (915) 747-6958 E-Mail: rcrumpf@utep.edu Vector Calculus Review EE3321 Electromagnetic Field Theor Outline Mathematical Preliminaries Phasors,

More information

(One Dimension) Problem: for a function f(x), find x 0 such that f(x 0 ) = 0. f(x)

(One Dimension) Problem: for a function f(x), find x 0 such that f(x 0 ) = 0. f(x) Solving Nonlinear Equations & Optimization One Dimension Problem: or a unction, ind 0 such that 0 = 0. 0 One Root: The Bisection Method This one s guaranteed to converge at least to a singularity, i not

More information

Gradient Descent. Dr. Xiaowei Huang

Gradient Descent. Dr. Xiaowei Huang Gradient Descent Dr. Xiaowei Huang https://cgi.csc.liv.ac.uk/~xiaowei/ Up to now, Three machine learning algorithms: decision tree learning k-nn linear regression only optimization objectives are discussed,

More information

Beyond Newton s method Thomas P. Minka

Beyond Newton s method Thomas P. Minka Beyond Newton s method Thomas P. Minka 2000 (revised 7/21/2017) Abstract Newton s method for optimization is equivalent to iteratively maimizing a local quadratic approimation to the objective function.

More information

Unconstrained Multivariate Optimization

Unconstrained Multivariate Optimization Unconstrained Multivariate Optimization Multivariate optimization means optimization of a scalar function of a several variables: and has the general form: y = () min ( ) where () is a nonlinear scalar-valued

More information

Lecture Outline. Maxwell s Equations Predict Waves Derivation of the Wave Equation Solution to the Wave Equation 8/7/2018

Lecture Outline. Maxwell s Equations Predict Waves Derivation of the Wave Equation Solution to the Wave Equation 8/7/2018 Course Instructor Dr. Raymond C. Rumpf Office: A 337 Phone: (915) 747 6958 E Mail: rcrumpf@utep.edu EE 4347 Applied Electromagnetics Topic 3a Electromagnetic Waves Electromagnetic These notes Waves may

More information

Numerical Methods. Root Finding

Numerical Methods. Root Finding Numerical Methods Solving Non Linear 1-Dimensional Equations Root Finding Given a real valued function f of one variable (say ), the idea is to find an such that: f() 0 1 Root Finding Eamples Find real

More information

Lecture V. Numerical Optimization

Lecture V. Numerical Optimization Lecture V Numerical Optimization Gianluca Violante New York University Quantitative Macroeconomics G. Violante, Numerical Optimization p. 1 /19 Isomorphism I We describe minimization problems: to maximize

More information

MAXIMA & MINIMA The single-variable definitions and theorems relating to extermals can be extended to apply to multivariable calculus.

MAXIMA & MINIMA The single-variable definitions and theorems relating to extermals can be extended to apply to multivariable calculus. MAXIMA & MINIMA The single-variable definitions and theorems relating to etermals can be etended to appl to multivariable calculus. ( ) is a Relative Maimum if there ( ) such that ( ) f(, for all points

More information

LOCAL SEARCH. Today. Reading AIMA Chapter , Goals Local search algorithms. Introduce adversarial search 1/31/14

LOCAL SEARCH. Today. Reading AIMA Chapter , Goals Local search algorithms. Introduce adversarial search 1/31/14 LOCAL SEARCH Today Reading AIMA Chapter 4.1-4.2, 5.1-5.2 Goals Local search algorithms n hill-climbing search n simulated annealing n local beam search n genetic algorithms n gradient descent and Newton-Rhapson

More information

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence Chapter 6 Nonlinear Equations 6. The Problem of Nonlinear Root-finding In this module we consider the problem of using numerical techniques to find the roots of nonlinear equations, f () =. Initially we

More information

3.3.1 Linear functions yet again and dot product In 2D, a homogenous linear scalar function takes the general form:

3.3.1 Linear functions yet again and dot product In 2D, a homogenous linear scalar function takes the general form: 3.3 Gradient Vector and Jacobian Matri 3 3.3 Gradient Vector and Jacobian Matri Overview: Differentiable functions have a local linear approimation. Near a given point, local changes are determined by

More information

Optimization Methods: Optimization using Calculus - Equality constraints 1. Module 2 Lecture Notes 4

Optimization Methods: Optimization using Calculus - Equality constraints 1. Module 2 Lecture Notes 4 Optimization Methods: Optimization using Calculus - Equality constraints Module Lecture Notes 4 Optimization of Functions of Multiple Variables subect to Equality Constraints Introduction In the previous

More information

Dual Methods. Lecturer: Ryan Tibshirani Convex Optimization /36-725

Dual Methods. Lecturer: Ryan Tibshirani Convex Optimization /36-725 Dual Methods Lecturer: Ryan Tibshirani Conve Optimization 10-725/36-725 1 Last time: proimal Newton method Consider the problem min g() + h() where g, h are conve, g is twice differentiable, and h is simple.

More information

Lecture 3 September 1

Lecture 3 September 1 STAT 383C: Statistical Modeling I Fall 2016 Lecture 3 September 1 Lecturer: Purnamrita Sarkar Scribe: Giorgio Paulon, Carlos Zanini Disclaimer: These scribe notes have been slightly proofread and may have

More information

Topic 4b. Open Methods for Root Finding

Topic 4b. Open Methods for Root Finding Course Instructor Dr. Ramond C. Rump Oice: A 337 Phone: (915) 747 6958 E Mail: rcrump@utep.edu Topic 4b Open Methods or Root Finding EE 4386/5301 Computational Methods in EE Outline Open Methods or Root

More information

Machine Learning CS 4900/5900. Lecture 03. Razvan C. Bunescu School of Electrical Engineering and Computer Science

Machine Learning CS 4900/5900. Lecture 03. Razvan C. Bunescu School of Electrical Engineering and Computer Science Machine Learning CS 4900/5900 Razvan C. Bunescu School of Electrical Engineering and Computer Science bunescu@ohio.edu Machine Learning is Optimization Parametric ML involves minimizing an objective function

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Multidimensional Unconstrained Optimization Suppose we have a function f() of more than one

More information

Numerical optimization

Numerical optimization THE UNIVERSITY OF WESTERN ONTARIO LONDON ONTARIO Paul Klein Office: SSC 408 Phone: 661-111 ext. 857 Email: paul.klein@uwo.ca URL: www.ssc.uwo.ca/economics/faculty/klein/ Numerical optimization In these

More information

Electromagnetic Waves & Polarization

Electromagnetic Waves & Polarization Course Instructor Dr. Raymond C. Rumpf Office: A 337 Phone: (915) 747 6958 E Mail: rcrumpf@utep.edu EE 4347 Applied Electromagnetics Topic 3a Electromagnetic Waves & Polarization Electromagnetic These

More information

1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:

1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by: Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion

More information

Line Search Methods for Unconstrained Optimisation

Line Search Methods for Unconstrained Optimisation Line Search Methods for Unconstrained Optimisation Lecture 8, Numerical Linear Algebra and Optimisation Oxford University Computing Laboratory, MT 2007 Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The Generic

More information

Optimization and Calculus

Optimization and Calculus Optimization and Calculus To begin, there is a close relationship between finding the roots to a function and optimizing a function. In the former case, we solve for x. In the latter, we solve: g(x) =

More information

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 Petros Koumoutsakos Gerardo Tauriello (Last update: July 2, 2015) IMPORTANT DISCLAIMERS 1. REFERENCES: Much of the material

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Taylor s Theorem Can often approximate a function by a polynomial The error in the approximation

More information

LECTURE 22: SWARM INTELLIGENCE 3 / CLASSICAL OPTIMIZATION

LECTURE 22: SWARM INTELLIGENCE 3 / CLASSICAL OPTIMIZATION 15-382 COLLECTIVE INTELLIGENCE - S19 LECTURE 22: SWARM INTELLIGENCE 3 / CLASSICAL OPTIMIZATION TEACHER: GIANNI A. DI CARO WHAT IF WE HAVE ONE SINGLE AGENT PSO leverages the presence of a swarm: the outcome

More information

CHAPTER 2: Partial Derivatives. 2.2 Increments and Differential

CHAPTER 2: Partial Derivatives. 2.2 Increments and Differential CHAPTER : Partial Derivatives.1 Definition of a Partial Derivative. Increments and Differential.3 Chain Rules.4 Local Etrema.5 Absolute Etrema 1 Chapter : Partial Derivatives.1 Definition of a Partial

More information

Lecture Notes: Geometric Considerations in Unconstrained Optimization

Lecture Notes: Geometric Considerations in Unconstrained Optimization Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections

More information

12.10 Lagrange Multipliers

12.10 Lagrange Multipliers .0 Lagrange Multipliers In the last two sections we were often solving problems involving maimizing or minimizing a function f subject to a 'constraint' equation g. For eample, we minimized the cost of

More information

Numerical solutions of nonlinear systems of equations

Numerical solutions of nonlinear systems of equations Numerical solutions of nonlinear systems of equations Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan E-mail: min@math.ntnu.edu.tw August 28, 2011 Outline 1 Fixed points

More information

Optimization. Totally not complete this is...don't use it yet...

Optimization. Totally not complete this is...don't use it yet... Optimization Totally not complete this is...don't use it yet... Bisection? Doing a root method is akin to doing a optimization method, but bi-section would not be an effective method - can detect sign

More information

Nonlinear Optimization

Nonlinear Optimization Nonlinear Optimization (Com S 477/577 Notes) Yan-Bin Jia Nov 7, 2017 1 Introduction Given a single function f that depends on one or more independent variable, we want to find the values of those variables

More information

Design and Optimization of Energy Systems Prof. C. Balaji Department of Mechanical Engineering Indian Institute of Technology, Madras

Design and Optimization of Energy Systems Prof. C. Balaji Department of Mechanical Engineering Indian Institute of Technology, Madras Design and Optimization of Energy Systems Prof. C. Balaji Department of Mechanical Engineering Indian Institute of Technology, Madras Lecture - 09 Newton-Raphson Method Contd We will continue with our

More information

Unconstrained Optimization

Unconstrained Optimization 1 / 36 Unconstrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University February 2, 2015 2 / 36 3 / 36 4 / 36 5 / 36 1. preliminaries 1.1 local approximation

More information

Programming, numerics and optimization

Programming, numerics and optimization Programming, numerics and optimization Lecture C-3: Unconstrained optimization II Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428

More information

Numerical Implementation of Transformation Optics

Numerical Implementation of Transformation Optics ECE 5322 21 st Century Electromagnetics Instructor: Office: Phone: E Mail: Dr. Raymond C. Rumpf A 337 (915) 747 6958 rcrumpf@utep.edu Lecture #16b Numerical Implementation of Transformation Optics Lecture

More information

3.1: 1, 3, 5, 9, 10, 12, 14, 18

3.1: 1, 3, 5, 9, 10, 12, 14, 18 3.:, 3, 5, 9,,, 4, 8 ) We want to solve d d c() d = f() with c() = c = constant and f() = for different boundary conditions to get w() and u(). dw d = dw d d = ( )d w() w() = w() = w() ( ) c d d = u()

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 3 Lecture 3 3.1 General remarks March 4, 2018 This

More information

Numerical Methods I Solving Nonlinear Equations

Numerical Methods I Solving Nonlinear Equations Numerical Methods I Solving Nonlinear Equations Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 16th, 2014 A. Donev (Courant Institute)

More information

Lecture 5: Gradient Descent. 5.1 Unconstrained minimization problems and Gradient descent

Lecture 5: Gradient Descent. 5.1 Unconstrained minimization problems and Gradient descent 10-725/36-725: Convex Optimization Spring 2015 Lecturer: Ryan Tibshirani Lecture 5: Gradient Descent Scribes: Loc Do,2,3 Disclaimer: These notes have not been subjected to the usual scrutiny reserved for

More information

A Primer on Multidimensional Optimization

A Primer on Multidimensional Optimization A Primer on Multidimensional Optimization Prof. Dr. Florian Rupp German University of Technology in Oman (GUtech) Introduction to Numerical Methods for ENG & CS (Mathematics IV) Spring Term 2016 Eercise

More information

17 Solution of Nonlinear Systems

17 Solution of Nonlinear Systems 17 Solution of Nonlinear Systems We now discuss the solution of systems of nonlinear equations. An important ingredient will be the multivariate Taylor theorem. Theorem 17.1 Let D = {x 1, x 2,..., x m

More information

Nonlinear Equations. Chapter The Bisection Method

Nonlinear Equations. Chapter The Bisection Method Chapter 6 Nonlinear Equations Given a nonlinear function f(), a value r such that f(r) = 0, is called a root or a zero of f() For eample, for f() = e 016064, Fig?? gives the set of points satisfying y

More information

Today. Introduction to optimization Definition and motivation 1-dimensional methods. Multi-dimensional methods. General strategies, value-only methods

Today. Introduction to optimization Definition and motivation 1-dimensional methods. Multi-dimensional methods. General strategies, value-only methods Optimization Last time Root inding: deinition, motivation Algorithms: Bisection, alse position, secant, Newton-Raphson Convergence & tradeos Eample applications o Newton s method Root inding in > 1 dimension

More information

Optimization Methods

Optimization Methods Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available

More information

Exploring the energy landscape

Exploring the energy landscape Exploring the energy landscape ChE210D Today's lecture: what are general features of the potential energy surface and how can we locate and characterize minima on it Derivatives of the potential energy

More information

Gradient Descent. Sargur Srihari

Gradient Descent. Sargur Srihari Gradient Descent Sargur srihari@cedar.buffalo.edu 1 Topics Simple Gradient Descent/Ascent Difficulties with Simple Gradient Descent Line Search Brent s Method Conjugate Gradient Descent Weight vectors

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning First-Order Methods, L1-Regularization, Coordinate Descent Winter 2016 Some images from this lecture are taken from Google Image Search. Admin Room: We ll count final numbers

More information

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems 1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of

More information

ECE580 Exam 1 October 4, Please do not write on the back of the exam pages. Extra paper is available from the instructor.

ECE580 Exam 1 October 4, Please do not write on the back of the exam pages. Extra paper is available from the instructor. ECE580 Exam 1 October 4, 2012 1 Name: Solution Score: /100 You must show ALL of your work for full credit. This exam is closed-book. Calculators may NOT be used. Please leave fractions as fractions, etc.

More information

Lecture Outline. Scattering at an Impedance Discontinuity Power on a Transmission Line Voltage Standing Wave Ratio (VSWR) 8/10/2018

Lecture Outline. Scattering at an Impedance Discontinuity Power on a Transmission Line Voltage Standing Wave Ratio (VSWR) 8/10/2018 Course Instructor Dr. Raymond C. Rumpf Office: A 337 Phone: (95) 747 6958 E Mail: rcrumpf@utep.edu EE 4347 Applied Electromagnetics Topic 4d Scattering on a Transmission Line Scattering These on a notes

More information

3.1 ANALYSIS OF FUNCTIONS I INCREASE, DECREASE, AND CONCAVITY

3.1 ANALYSIS OF FUNCTIONS I INCREASE, DECREASE, AND CONCAVITY MATH00 (Calculus).1 ANALYSIS OF FUNCTIONS I INCREASE, DECREASE, AND CONCAVITY Name Group No. KEYWORD: increasing, decreasing, constant, concave up, concave down, and inflection point Eample 1. Match the

More information

ECE 680 Modern Automatic Control. Gradient and Newton s Methods A Review

ECE 680 Modern Automatic Control. Gradient and Newton s Methods A Review ECE 680Modern Automatic Control p. 1/1 ECE 680 Modern Automatic Control Gradient and Newton s Methods A Review Stan Żak October 25, 2011 ECE 680Modern Automatic Control p. 2/1 Review of the Gradient Properties

More information

Example - Newton-Raphson Method

Example - Newton-Raphson Method Eample - Newton-Raphson Method We now consider the following eample: minimize f( 3 3 + -- 4 4 Since f ( 3 2 + 3 3 and f ( 6 + 9 2 we form the following iteration: + n 3 ( n 3 3( n 2 ------------------------------------

More information

Image Alignment Computer Vision (Kris Kitani) Carnegie Mellon University

Image Alignment Computer Vision (Kris Kitani) Carnegie Mellon University Lucas Kanade Image Alignment 16-385 Comuter Vision (Kris Kitani) Carnegie Mellon University htt://www.humansensing.cs.cmu.edu/intraface/ How can I find in the image? Idea #1: Temlate Matching Slow, combinatory,

More information

Comparative study of Optimization methods for Unconstrained Multivariable Nonlinear Programming Problems

Comparative study of Optimization methods for Unconstrained Multivariable Nonlinear Programming Problems International Journal of Scientific and Research Publications, Volume 3, Issue 10, October 013 1 ISSN 50-3153 Comparative study of Optimization methods for Unconstrained Multivariable Nonlinear Programming

More information

Mathematical optimization

Mathematical optimization Optimization Mathematical optimization Determine the best solutions to certain mathematically defined problems that are under constrained determine optimality criteria determine the convergence of the

More information

Chapter 8 Gradient Methods

Chapter 8 Gradient Methods Chapter 8 Gradient Methods An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Introduction Recall that a level set of a function is the set of points satisfying for some constant. Thus, a point

More information

1 Numerical optimization

1 Numerical optimization Contents 1 Numerical optimization 5 1.1 Optimization of single-variable functions............ 5 1.1.1 Golden Section Search................... 6 1.1. Fibonacci Search...................... 8 1. Algorithms

More information

Deep Learning II: Momentum & Adaptive Step Size

Deep Learning II: Momentum & Adaptive Step Size Deep Learning II: Momentum & Adaptive Step Size CS 760: Machine Learning Spring 2018 Mark Craven and David Page www.biostat.wisc.edu/~craven/cs760 1 Goals for the Lecture You should understand the following

More information

NonlinearOptimization

NonlinearOptimization 1/35 NonlinearOptimization Pavel Kordík Department of Computer Systems Faculty of Information Technology Czech Technical University in Prague Jiří Kašpar, Pavel Tvrdík, 2011 Unconstrained nonlinear optimization,

More information

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23 Optimization: Nonlinear Optimization without Constraints Nonlinear Optimization without Constraints 1 / 23 Nonlinear optimization without constraints Unconstrained minimization min x f(x) where f(x) is

More information

Selected Topics in Optimization. Some slides borrowed from

Selected Topics in Optimization. Some slides borrowed from Selected Topics in Optimization Some slides borrowed from http://www.stat.cmu.edu/~ryantibs/convexopt/ Overview Optimization problems are almost everywhere in statistics and machine learning. Input Model

More information

Part 2: NLP Constrained Optimization

Part 2: NLP Constrained Optimization Part 2: NLP Constrained Optimization James G. Shanahan 2 Independent Consultant and Lecturer UC Santa Cruz EMAIL: James_DOT_Shanahan_AT_gmail_DOT_com WIFI: SSID Student USERname ucsc-guest Password EnrollNow!

More information

Constrained Optimization in Two Variables

Constrained Optimization in Two Variables in Two Variables James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 17, 216 Outline 1 2 What Does the Lagrange Multiplier Mean? Let

More information

Calculus of Variation An Introduction To Isoperimetric Problems

Calculus of Variation An Introduction To Isoperimetric Problems Calculus of Variation An Introduction To Isoperimetric Problems Kevin Wang The University of Sydney SSP Working Seminars, MATH2916 May 4, 2013 Contents I Lagrange Multipliers 2 1 Single Constraint Lagrange

More information

x k+1 = x k + α k p k (13.1)

x k+1 = x k + α k p k (13.1) 13 Gradient Descent Methods Lab Objective: Iterative optimization methods choose a search direction and a step size at each iteration One simple choice for the search direction is the negative gradient,

More information

Optimization. The value x is called a maximizer of f and is written argmax X f. g(λx + (1 λ)y) < λg(x) + (1 λ)g(y) 0 < λ < 1; x, y X.

Optimization. The value x is called a maximizer of f and is written argmax X f. g(λx + (1 λ)y) < λg(x) + (1 λ)g(y) 0 < λ < 1; x, y X. Optimization Background: Problem: given a function f(x) defined on X, find x such that f(x ) f(x) for all x X. The value x is called a maximizer of f and is written argmax X f. In general, argmax X f may

More information

Lecture 14: Newton s Method

Lecture 14: Newton s Method 10-725/36-725: Conve Optimization Fall 2016 Lecturer: Javier Pena Lecture 14: Newton s ethod Scribes: Varun Joshi, Xuan Li Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes

More information

Constrained optimization. Unconstrained optimization. One-dimensional. Multi-dimensional. Newton with equality constraints. Active-set method.

Constrained optimization. Unconstrained optimization. One-dimensional. Multi-dimensional. Newton with equality constraints. Active-set method. Optimization Unconstrained optimization One-dimensional Multi-dimensional Newton s method Basic Newton Gauss- Newton Quasi- Newton Descent methods Gradient descent Conjugate gradient Constrained optimization

More information

Nonlinear Optimization for Optimal Control

Nonlinear Optimization for Optimal Control Nonlinear Optimization for Optimal Control Pieter Abbeel UC Berkeley EECS Many slides and figures adapted from Stephen Boyd [optional] Boyd and Vandenberghe, Convex Optimization, Chapters 9 11 [optional]

More information

Functions of Several Variables

Functions of Several Variables Chapter 1 Functions of Several Variables 1.1 Introduction A real valued function of n variables is a function f : R, where the domain is a subset of R n. So: for each ( 1,,..., n ) in, the value of f is

More information

Neural Networks Learning the network: Backprop , Fall 2018 Lecture 4

Neural Networks Learning the network: Backprop , Fall 2018 Lecture 4 Neural Networks Learning the network: Backprop 11-785, Fall 2018 Lecture 4 1 Recap: The MLP can represent any function The MLP can be constructed to represent anything But how do we construct it? 2 Recap:

More information

6.034f Neural Net Notes October 28, 2010

6.034f Neural Net Notes October 28, 2010 6.034f Neural Net Notes October 28, 2010 These notes are a supplement to material presented in lecture. I lay out the mathematics more prettily and etend the analysis to handle multiple-neurons per layer.

More information

Optimization II: Unconstrained Multivariable

Optimization II: Unconstrained Multivariable Optimization II: Unconstrained Multivariable CS 205A: Mathematical Methods for Robotics, Vision, and Graphics Justin Solomon CS 205A: Mathematical Methods Optimization II: Unconstrained Multivariable 1

More information

Chapter 10 Conjugate Direction Methods

Chapter 10 Conjugate Direction Methods Chapter 10 Conjugate Direction Methods An Introduction to Optimization Spring, 2012 1 Wei-Ta Chu 2012/4/13 Introduction Conjugate direction methods can be viewed as being intermediate between the method

More information

1-D Optimization. Lab 16. Overview of Line Search Algorithms. Derivative versus Derivative-Free Methods

1-D Optimization. Lab 16. Overview of Line Search Algorithms. Derivative versus Derivative-Free Methods Lab 16 1-D Optimization Lab Objective: Many high-dimensional optimization algorithms rely on onedimensional optimization methods. In this lab, we implement four line search algorithms for optimizing scalar-valued

More information

Preface.

Preface. This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

Machine Learning. Lecture 04: Logistic and Softmax Regression. Nevin L. Zhang

Machine Learning. Lecture 04: Logistic and Softmax Regression. Nevin L. Zhang Machine Learning Lecture 04: Logistic and Softmax Regression Nevin L. Zhang lzhang@cse.ust.hk Department of Computer Science and Engineering The Hong Kong University of Science and Technology This set

More information

18.01 Single Variable Calculus Fall 2006

18.01 Single Variable Calculus Fall 2006 MIT OpenCourseWare http://ocw.mit.edu 8.0 Single Variable Calculus Fall 2006 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. Lecture 0 8.0 Fall 2006 Lecture

More information

11 More Regression; Newton s Method; ROC Curves

11 More Regression; Newton s Method; ROC Curves More Regression; Newton s Method; ROC Curves 59 11 More Regression; Newton s Method; ROC Curves LEAST-SQUARES POLYNOMIAL REGRESSION Replace each X i with feature vector e.g. (X i ) = [X 2 i1 X i1 X i2

More information

nonrobust estimation The n measurement vectors taken together give the vector X R N. The unknown parameter vector is P R M.

nonrobust estimation The n measurement vectors taken together give the vector X R N. The unknown parameter vector is P R M. Introduction to nonlinear LS estimation R. I. Hartley and A. Zisserman: Multiple View Geometry in Computer Vision. Cambridge University Press, 2ed., 2004. After Chapter 5 and Appendix 6. We will use x

More information

Multivariate Newton Minimanization

Multivariate Newton Minimanization Multivariate Newton Minimanization Optymalizacja syntezy biosurfaktantu Rhamnolipid Rhamnolipids are naturally occuring glycolipid produced commercially by the Pseudomonas aeruginosa species of bacteria.

More information

1 Numerical optimization

1 Numerical optimization Contents Numerical optimization 5. Optimization of single-variable functions.............................. 5.. Golden Section Search..................................... 6.. Fibonacci Search........................................

More information

Constrained Optimization in Two Variables

Constrained Optimization in Two Variables Constrained Optimization in Two Variables James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 17, 216 Outline Constrained Optimization

More information

Math 409/509 (Spring 2011)

Math 409/509 (Spring 2011) Math 409/509 (Spring 2011) Instructor: Emre Mengi Study Guide for Homework 2 This homework concerns the root-finding problem and line-search algorithms for unconstrained optimization. Please don t hesitate

More information

4Divergenceandcurl. D ds = ρdv. S

4Divergenceandcurl. D ds = ρdv. S 4Divergenceandcurl Epressing the total charge Q V contained in a volume V as a 3D volume integral of charge density ρ(r), wecanwritegauss s law eamined during the last few lectures in the general form

More information

Nonlinear equations and optimization

Nonlinear equations and optimization Notes for 2017-03-29 Nonlinear equations and optimization For the next month or so, we will be discussing methods for solving nonlinear systems of equations and multivariate optimization problems. We will

More information

Maximum and Minimum Values - 3.3

Maximum and Minimum Values - 3.3 Maimum and Minimum Values - 3.3. Critical Numbers Definition A point c in the domain of f is called a critical number offiff c or f c is not defined. Eample a. The graph of f is given below. Find all possible

More information

Introduction to unconstrained optimization - direct search methods

Introduction to unconstrained optimization - direct search methods Introduction to unconstrained optimization - direct search methods Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Structure of optimization methods Typically Constraint handling converts the

More information

Math 411 Preliminaries

Math 411 Preliminaries Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector

More information

OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review

OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review Department of Statistical Sciences and Operations Research Virginia Commonwealth University Oct 16, 2013 (Lecture 14) Nonlinear Optimization

More information

8 Numerical methods for unconstrained problems

8 Numerical methods for unconstrained problems 8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields

More information

Deep Learning. Authors: I. Goodfellow, Y. Bengio, A. Courville. Chapter 4: Numerical Computation. Lecture slides edited by C. Yim. C.

Deep Learning. Authors: I. Goodfellow, Y. Bengio, A. Courville. Chapter 4: Numerical Computation. Lecture slides edited by C. Yim. C. Chapter 4: Numerical Computation Deep Learning Authors: I. Goodfellow, Y. Bengio, A. Courville Lecture slides edited by 1 Chapter 4: Numerical Computation 4.1 Overflow and Underflow 4.2 Poor Conditioning

More information

10-725/36-725: Convex Optimization Spring Lecture 21: April 6

10-725/36-725: Convex Optimization Spring Lecture 21: April 6 10-725/36-725: Conve Optimization Spring 2015 Lecturer: Ryan Tibshirani Lecture 21: April 6 Scribes: Chiqun Zhang, Hanqi Cheng, Waleed Ammar Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

LINEAR AND NONLINEAR PROGRAMMING

LINEAR AND NONLINEAR PROGRAMMING LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico

More information

Minimization of Static! Cost Functions!

Minimization of Static! Cost Functions! Minimization of Static Cost Functions Robert Stengel Optimal Control and Estimation, MAE 546, Princeton University, 2017 J = Static cost function with constant control parameter vector, u Conditions for

More information

NUMERICAL METHODS FOR SOLVING EQUATIONS

NUMERICAL METHODS FOR SOLVING EQUATIONS Mathematics Revision Guides Numerical Methods for Solving Equations Page of M.K. HOME TUITION Mathematics Revision Guides Level: AS / A Level AQA : C3 Edecel: C3 OCR: C3 NUMERICAL METHODS FOR SOLVING EQUATIONS

More information

On the other hand, if we measured the potential difference between A and C we would get 0 V.

On the other hand, if we measured the potential difference between A and C we would get 0 V. DAY 3 Summary of Topics Covered in Today s Lecture The Gradient U g = -g. r and U E = -E. r. Since these equations will give us change in potential if we know field strength and distance, couldn t we calculate

More information