UNCONSTRAINED OPTIMIZATION

6.1 MATHEMATICAL BASIS

Given a function f : R^n -> R, and x* in R^n such that f(x*) < f(x) for all x in R^n, then x* is called a minimizer of f and f(x*) is the minimum (value) of f. We wish to develop algorithms that find x*, given f(x) and some starting vector x_0. Since max f = -min(-f) we consider minimization problems only.

6.1.1 Calculus in n Dimensions. We will be dealing with scalar functions of n variables f(x_1, x_2, ..., x_n), and so we need the basic facts from the calculus of n dimensions. For simplicity we may view these as generalizations of the 1-dimensional case.

First Derivative or Gradient. If the function f(x_1, x_2, ..., x_n) has first partial derivatives with respect to each element x_i then the n-dimensional equivalent of the first derivative f'(x) is the gradient vector

\[ g(x) = \left[ \frac{\partial f(x)}{\partial x_i} \right] = \begin{bmatrix} \partial f(x)/\partial x_1 \\ \partial f(x)/\partial x_2 \\ \vdots \\ \partial f(x)/\partial x_n \end{bmatrix}. \]

Second Derivative or Hessian. The n-dimensional equivalent of the second derivative f''(x) is the Hessian matrix

\[ H(x) = \left[ \frac{\partial^2 f(x)}{\partial x_i \, \partial x_j} \right] = \begin{bmatrix} \partial^2 f/\partial x_1^2 & \partial^2 f/\partial x_1 \partial x_2 & \cdots & \partial^2 f/\partial x_1 \partial x_n \\ \partial^2 f/\partial x_2 \partial x_1 & \partial^2 f/\partial x_2^2 & \cdots & \partial^2 f/\partial x_2 \partial x_n \\ \vdots & \vdots & \ddots & \vdots \\ \partial^2 f/\partial x_n \partial x_1 & \partial^2 f/\partial x_n \partial x_2 & \cdots & \partial^2 f/\partial x_n^2 \end{bmatrix}. \]

Taylor Series in n Dimensions. The Taylor series expansion of f(x) about some x_k in R^n is

\[ f(x) \approx f(x_k) + g^T(x_k)(x - x_k) + \tfrac{1}{2}(x - x_k)^T H(x_k)(x - x_k) + \cdots, \]

where f(x) is in R, g(x_k) and x_k are in R^n, and H(x_k) is in R^{n x n}.

6.1.2 Optimality Conditions. The condition for the solution x* of the zero-finding problem is easy to state: x* is a solution if f(x*) = 0. The conditions for the solution of the minimum-finding problem are not so simple.

Conditions in 1 Dimension. From elementary calculus we know that for a 1-dimensional function the following conditions must hold at an optimum x* in R: f'(x*) = 0, and

    f''(x*) < 0 at a maximum,
    f''(x*) > 0 at a minimum,
    f''(x*) = 0 at a point of inflection.

These conditions are shown in Figure 6.1.

Figure 6.1. Three points where f'(x) = 0: a maximum, a minimum, and a point of inflection.

Conditions in n Dimensions. The equivalent conditions in n dimensions are: g(x*) = 0, and

    H(x*) is negative definite at a maximum,
    H(x*) is positive definite at a minimum,
    H(x*) is indefinite at a saddle point,

where H(x*) is symmetric, and negative/positive definite means that x^T H(x*) x is negative/positive for all x != 0, while indefinite means x^T H(x*) x takes both signs. Figure 6.2 shows three such points.

Figure 6.2. A Minimum and Two Saddle Points.
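These conditions are straightforward to check numerically. The following Python sketch is an illustration added here (not part of the original notes): it approximates g(x) and H(x) by central differences and classifies a stationary point by the signs of the eigenvalues of the Hessian. The test function, step sizes and tolerances are arbitrary choices.

    import numpy as np

    def gradient(f, x, h=1e-6):
        """Central-difference approximation to g(x) = [df/dx_i]."""
        n = len(x)
        g = np.zeros(n)
        for i in range(n):
            e = np.zeros(n); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    def hessian(f, x, h=1e-4):
        """Central-difference approximation to H(x) = [d2f/dx_i dx_j]."""
        n = len(x)
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                ei = np.zeros(n); ei[i] = h
                ej = np.zeros(n); ej[j] = h
                H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                           - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
        return H

    def classify(f, x, tol=1e-5):
        """Apply the optimality conditions at a stationary point x."""
        lam = np.linalg.eigvalsh(hessian(f, x))   # H is symmetric
        if np.all(lam > tol):
            return "minimum"
        if np.all(lam < -tol):
            return "maximum"
        if np.any(lam > tol) and np.any(lam < -tol):
            return "saddle point"
        return "degenerate (higher-order test needed)"

    f = lambda x: x[0]**2 - x[1]**2          # saddle at the origin
    x0 = np.array([0.0, 0.0])
    print(gradient(f, x0), classify(f, x0))  # ~[0, 0], "saddle point"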

6.2 ONE-DIMENSIONAL MINIMIZATION

Given a function f : R -> R, find x* in R such that f(x*) < f(x) for all x in R. Minimization in R is very similar to zero-finding in R: in minimization we are looking for a point x* such that f(x*) is a minimum, and in zero-finding we are looking for a point x* such that f(x*) = 0. We will see that one-dimensional minimization is at the heart of most minimum-finding algorithms in R^n. Indeed most of the algorithms that search for a minimum in n dimensions perform a sequence of 1-dimensional searches in a chosen direction. The following prototype algorithm is called the Basic Descent Algorithm:

algorithm Basic-Descent (f, x_0)
    while not converged do
        1. Choose a descent direction d_k
        2. Find s_k that minimizes f(x_k + s_k d_k)
        3. x_{k+1} := x_k + s_k d_k
    endwhile
    return x_k
endalg {Basic-Descent}

In this algorithm x_k and d_k are n-vectors while s_k is a scalar. The algorithm, after choosing (somehow) a descent direction, searches for the minimum of f along the line through x_k in the direction d_k. This is called the Line Search step and, as such, is a 1-dimensional search.

6.2.1 Minimum-Finding versus Zero-Finding. We will see that, conceptually, the search for the minimizer x* is the same in each case: generate a sequence {x_{k+1} = T(x_k)}, and stop when the sequence converges. The only differences are

    1. the manner in which the sequence {x_{k+1} = T(x_k)} is generated, and
    2. how convergence is checked.

We consider convergence-checking first because the convergence criteria for minimum-finding are fundamentally different from those for zero-finding. Mathematically the criteria for both are similar:

    1. Zero-Finding: stop when either |f_k| <= ε_f or |x_k - x_{k-1}| <= ε_x.
    2. Min-Finding: stop when either |f_k - f_{k-1}| <= ε_f or |x_k - x_{k-1}| <= ε_x.

The most obvious difference is f-sequence convergence: for zero-finding we know that the value of the function at x* is f(x*) = 0, whereas for minimum-finding we do not know the value of the minimum f(x*). We do know, however, that at a minimum the derivative f'(x*) = 0 so that, in principle, we may replace a minimum-finding algorithm for f(x) with a zero-finding algorithm for f'(x).

The fact that at a minimum the derivative f'(x*) = 0 raises the second difference: x-sequence convergence. To see this let us expand f(x_k) in a Taylor series about the minimizer x*. We get

    f(x_k) ≈ f(x*) + f'(x*)(x_k - x*) + ½ f''(x*)(x_k - x*)².

Now f'(x*) = 0 and so

    f(x_k) ≈ f(x*) + ½ f''(x*)(x_k - x*)².

From this we get the relationship between changes in f and changes in x. That is

    [f(x_k) - f(x*)] / f(x*) ≈ [½ f''(x*) x*² / f(x*)] [(x_k - x*)/x*]²,   or   e_f = c e_x²,

where e_f and e_x are the relative errors in f and x, respectively. If we wish to find the minimum f(x*) to full precision then the stopping rule will be

    e_f ≈ ε_mach   =>   c e_x² ≈ ε_mach   =>   e_x ≈ √ε_mach.

This means that in IEEE single precision with ε_mach = 2^-24, the best precision we can get for x-sequence convergence is √ε_mach = 2^-12, that is, half machine precision.
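The half-precision effect is easy to see numerically. A small Python check (my own illustration, in double rather than single precision; the quadratic test function is an arbitrary choice) shows that once x is within about √ε_mach of the minimizer, the computed f-values can no longer distinguish the points:

    import numpy as np

    f = lambda x: (x - 2.0)**2 + 1.0   # minimizer x* = 2, f(x*) = 1
    eps = np.finfo(float).eps          # about 2.2e-16 in double precision

    for e_x in [1e-4, 1e-6, np.sqrt(eps), 1e-10]:
        e_f = (f(2.0 + e_x) - f(2.0)) / f(2.0)   # relative change in f
        print(f"e_x = {e_x:.1e}   e_f = {e_f:.1e}")
    # e_f ~ e_x**2: once e_x < sqrt(eps), e_f drops below eps and the
    # computed f-values are indistinguishable from f(x*).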

6.2.2 Unimodal Functions. In one-dimensional minimization, the equivalent of a function that changes sign in an interval [a, b] is a function that is unimodal on [a, b]. Roughly speaking, a unimodal function has one minimum on [a, b].

Definition (Unimodal Function). A function f(x) is unimodal on [a, b] if there is a point x* in [a, b] such that, for all x_1 < x_2 in [a, b],

    x_2 < x*   =>   f(x_1) > f(x_2),   and
    x_1 > x*   =>   f(x_1) < f(x_2).

Theorem (Reduction of Interval). If f(x) is a unimodal function on [a, b] and a <= x_1 < x_2 <= b then

    f(x_1) >= f(x_2)   =>   x* in [x_1, b],   and
    f(x_1) <= f(x_2)   =>   x* in [a, x_2].

This theorem allows us to reduce the interval [a, b] to either [a, x_2] or [x_1, b], as shown in Figures 6.3 and 6.4.

Figure 6.3. Reduction of the Interval.

Figure 6.4. Reduction of the Interval.

6.2.3 Bisection Algorithm. The Bisection algorithm (also called Binary Search) is similar to that for zero-finding. At each iteration it halves the interval of search by checking the value of the function near the mid-point of the interval. The starting condition is: given an interval [a, b] on which the function is unimodal, i.e., it has a minimum in the interval. This is equivalent to the condition that the function changes sign in the interval when zero-finding. Before we state the algorithm formally, let us see if the zero-finding Bisection method works.

Figure 6.5. One f-evaluation gives no information.

It is obvious from Figure 6.5 that one function evaluation gives no useful information. If we also had f(a) and f(b) would that help? No. It is easy to construct two functions f_1 and f_2 with f_1(a) = f_2(a), f_1(c) = f_2(c) and f_1(b) = f_2(b) that have minima on different sides of the mid-point c. To find out on which side of c the minimum lies we need two function evaluations, as shown in Figure 6.6. This shows very clearly that minimum-finding is more difficult than zero-finding, at least for the Bisection algorithm. We now formally state the algorithm.

Figure 6.6. Two f-evaluations give information.

algorithm MinBisect (f, a, b, tol, maxits)
    for k := 1 to maxits do
        m := a + (b - a)/2
        if f(m - ε) < f(m + ε) then b := m + ε else a := m - ε endif
        if |a - b| <= tol then return m
    endfor
endalg {MinBisect}

Notice that the function values are not saved between iterations. Each is either outside the new interval or at either end of it and so provides no useful information for the next iteration of this algorithm.

Analysis of MinBisect. The algorithm performs two function evaluations per iteration. It is obvious that the interval is halved (contracted) each iteration. Hence e_k = ½ e_{k-1}. Thus the MinBisect Algorithm has order-1 or linear convergence and it gains 1 bit of precision for each iteration. The error after k iterations is e_k = 2^-k (b - a). (See the analysis of Bisect in Chapter 4 for further details.) We note that if f'(x) is available then the minimization problem may be replaced with finding a zero of f'(x), at a cost of one derivative evaluation per iteration. Alternatively we could replace the if-test in the algorithm above with the test "if f'(m) > 0 then ...".
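A direct Python transcription of the pseudocode (my own sketch; the probe offset eps and the test function are illustrative choices) makes the two evaluations per iteration explicit:

    def min_bisect(f, a, b, tol=1e-8, maxits=100, eps=1e-10):
        """Bisection for a minimum of a unimodal f on [a, b].

        Two evaluations per iteration, placed eps apart about the mid-point."""
        for _ in range(maxits):
            m = a + (b - a) / 2
            if f(m - eps) < f(m + eps):
                b = m + eps          # minimum lies to the left of m + eps
            else:
                a = m - eps          # minimum lies to the right of m - eps
            if abs(b - a) <= tol:
                return m
        return a + (b - a) / 2       # best estimate after maxits iterations

    print(min_bisect(lambda x: (x - 2.0)**2 + 1.0, -3.0, 10.0))  # ~2.0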

Is MinBisect Optimal? This question is not well-posed and we need to be more specific. If we are given an interval [a, b] and are allowed only two evaluations then is MinBisect optimal? Placing the two evaluations slightly apart in the middle of the interval reduces the interval by approximately 1/2. If we place an evaluation at a + θ(b - a), 0 <= θ <= 1, then any placement with θ < 1/2 or θ > 1/2 leaves a longer interval in the worst case, and so is not as good as MinBisect. However, let us assume we have 4 function evaluations at our disposal. Obviously MinBisect reduces the interval to 1/4, but placing the evaluations in the positions and order shown in Figure 6.7 reduces the interval to [3, 4], which is 1/5 of the original. Clearly MinBisect is not optimal if we have more than two evaluations at our disposal.

Figure 6.7. Four f-evaluations give a reduction of 1/5.

6.2.4 Fibonacci Algorithm. The fact that we can do better than MinBisect with four evaluations poses the natural question: what is the best we can do with n evaluations? Kiefer (1953) studied this and other optimum search strategies. The answer involves the Fibonacci numbers which we have seen before. The Fibonacci numbers are defined as: F_0 = F_1 = 1, and F_{k+2} = F_{k+1} + F_k. The Fibonacci numbers for k = 0, 1, ..., 10 are

    k    0  1  2  3  4  5  6   7   8   9   10
    F_k  1  1  2  3  5  8  13  21  34  55  89

The example in Figure 6.8 shows the working of the Fibonacci Search Algorithm. It should be noted that this algorithm includes, as a special case, the MinBisect algorithm, i.e., when n = 2: the algorithm divides the interval into F_2 = 2 equal subintervals and places the evaluations at F_1 - ε and F_1 + ε.

Figure 6.8. Six f-evaluations give a reduction of 1/13.
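A Python sketch of Fibonacci search (my own reconstruction of the scheme described above; the final ε-placement is handled as in MinBisect and the test call is illustrative):

    def fib(n):
        """F_0 = F_1 = 1, F_{k+2} = F_{k+1} + F_k."""
        F = [1, 1]
        while len(F) <= n:
            F.append(F[-1] + F[-2])
        return F

    def fibonacci_search(f, a, b, n, eps=1e-10):
        """Minimize a unimodal f on [a, b] using n function evaluations.

        The final bracket has length about (b - a)/F_n."""
        assert n >= 3
        F = fib(n)
        x1 = a + (F[n - 2] / F[n]) * (b - a)
        x2 = a + (F[n - 1] / F[n]) * (b - a)
        f1, f2 = f(x1), f(x2)
        for k in range(n, 2, -1):            # one new evaluation per stage
            if f1 < f2:                      # minimum is in [a, x2]
                b, x2, f2 = x2, x1, f1       # reuse x1 as the new upper point
                x1 = a + (F[k - 3] / F[k - 1]) * (b - a)
                if abs(x2 - x1) < eps:       # last stage: points coincide
                    x1 = x2 - eps
                f1 = f(x1)
            else:                            # minimum is in [x1, b]
                a, x1, f1 = x1, x2, f2       # reuse x2 as the new lower point
                x2 = a + (F[k - 2] / F[k - 1]) * (b - a)
                if abs(x2 - x1) < eps:
                    x2 = x1 + eps
                f2 = f(x2)
        return (a + b) / 2

    print(fibonacci_search(lambda x: (x - 2.0)**2 + 1.0, -3.0, 10.0, 10))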

Analysis of Fibonacci Search. Given n evaluations the Fibonacci algorithm reduces the initial interval to (b - a)/F_n. It can be shown that a closed-form expression for F_n is

\[ F_n = \frac{1}{\sqrt{5}}\left(\phi^{n+1} - \hat{\phi}^{\,n+1}\right) = \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n+1} - \left(\frac{1-\sqrt{5}}{2}\right)^{n+1}\right] = \frac{1}{\sqrt{5}}\left(1.618^{n+1} - (-0.618)^{n+1}\right) \approx \frac{1.618^{n+1}}{\sqrt{5}}. \]

This is a good approximation to F_n for n > 5 and so we can calculate the reduction using this simple formula. The ratio of the lengths of successive intervals is

    F_{n-k} / F_{n-k+1},   k = 1, 2, ..., n - 1.

As n -> ∞ this ratio tends to 1/1.618 = 0.618, the Golden Ratio, or section.

6.2.5 Golden Section Algorithm. Although the Fibonacci Search algorithm is optimal for n evaluations, we usually want an iterative algorithm that continues until the initial interval has been reduced to some pre-determined tolerance, not for a pre-determined number of evaluations. We get such an algorithm from Fibonacci Search by imagining we have an infinite number of evaluations. This gives a reduction to 0.618 = λ at each iteration, as we saw above. The Golden Section λ has the property 1 + λ = 1/λ, or λ² + λ = 1. We start the algorithm by placing two evaluations at x_L = a + (1 - λ)(b - a) and x_U = a + λ(b - a). For simplicity let us assume that a = 0 and b = 1. Then the first step is as shown in Figure 6.9.

Figure 6.9. Golden Section Search: the first step.

From the figure it can be seen that the reduced interval is [0, λ]. We place two more evaluations in this interval at the points x_L = 0 + (1 - λ)(λ - 0) = λ(1 - λ) and x_U = λ(λ - 0) = λ² = 1 - λ. Thus we see that one of the two evaluations is already in the reduced interval. This is shown in Figure 6.10.

Figure 6.10. Relationship between Sub-intervals.

The two possibilities at each step of the algorithm are shown in Figure 6.11. We see that in either case the new interval already has an evaluation in it which is used at the next step. Thus only one evaluation is needed per iteration.

Figure 6.11. General Step: the reduced interval is either [0, λ] or [1 - λ, 1], and λ² = 1 - λ.

The formal statement of the algorithm is as follows:

algorithm GoldenSection (f, a, b, tol, maxits)
    λ := (-1 + √5)/2
    x_L := a + (1 - λ)(b - a)
    x_U := a + λ(b - a)
    f_L := f(x_L)
    f_U := f(x_U)
    for k := 1 to maxits do
        if (f_L < f_U) then
            b := x_U;  x_U := x_L;  f_U := f_L
            x_L := a + (1 - λ)(b - a);  f_L := f(x_L)
        else
            a := x_L;  x_L := x_U;  f_L := f_U
            x_U := a + λ(b - a);  f_U := f(x_U)
        endif
        if |a - b| <= tol then return (a, b)
    endfor { k }
endalg {GoldenSection}

Analysis of Golden Section. It can be seen that only one function evaluation is used at each iteration and e_k = 0.618 e_{k-1}. Note that this is better than MinBisect, which reduces the interval by 0.5 for two evaluations, whereas Golden Section gives a reduction of 0.618² = 0.382 for the same work.
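In Python (a direct transcription of the pseudocode above; variable names follow the algorithm and the test function is an arbitrary choice):

    import math

    def golden_section(f, a, b, tol=1e-8, maxits=200):
        """Golden Section Search for a minimum of a unimodal f on [a, b].

        One function evaluation per iteration; e_k = 0.618 e_{k-1}."""
        lam = (math.sqrt(5.0) - 1.0) / 2.0       # 0.618...
        xL = a + (1.0 - lam) * (b - a)
        xU = a + lam * (b - a)
        fL, fU = f(xL), f(xU)
        for _ in range(maxits):
            if fL < fU:
                b, xU, fU = xU, xL, fL           # reuse xL as the new xU
                xL = a + (1.0 - lam) * (b - a)
                fL = f(xL)
            else:
                a, xL, fL = xL, xU, fU           # reuse xU as the new xL
                xU = a + lam * (b - a)
                fU = f(xU)
            if abs(b - a) <= tol:
                return a, b
        return a, b

    a, b = golden_section(lambda x: (x - 2.0)**2 + 1.0, -3.0, 10.0)
    print((a + b) / 2)                           # ~2.0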

Example: Golden Section. This is the Golden Section Search on the function f(x) = (x - 2)² + 1, with [a, b] = [-3, 10].

    k    a          b          x_L       x_U       f_L       f_U
    1    -3.00000   10.00000   1.96600   5.03400   1.00058   13.9455
    2    -3.00000    5.03400   0.06899   1.96600   2.7458     1.00058
    3     0.06899    5.03400   1.96600   3.13737   1.00058    1.5446
    4     0.06899    3.13737   1.24111   1.96600   1.5536     1.00058
    5     1.24111    3.13737   1.96600   2.41300   1.00058    1.0893
    6     1.24111    2.41300   1.68877   1.96600   1.0473     1.00058
    7     1.68877    2.41300   1.96600   2.13634   1.00058    1.0095
    8     1.68877    2.13634   1.85974   1.96600   1.00979    1.00058
    9     1.85974    2.13634   1.96600   2.03068   1.00058    1.00047

Compare this with the Fibonacci Search:

    k    a     b     x_L      x_U      f_L      f_U
    1    -3    10    2        5        1.000    13.6
    2    -3     5    0        2        3.6       1.000
    3     0     5    2        3        1.000     1.44
    4     0     3    1        2        1.44      1.000
    5     1     3    1.966    2.004    1.001     1.000

This shows that the Golden Section Search algorithm is close to optimal.

6.2.6 Quadratic Algorithms. These algorithms replace the function f(x) with the polynomial

    f(x) ≈ p_2(x) = a_0 + a_1 x + a_2 x².

The first derivative of p_2(x) is a_1 + 2a_2 x = 0 at a minimum and so we get the approximate minimizer x̂ = -a_1/(2a_2). Thus at each iteration of a quadratic algorithm we have 3 points, (x_0, f_0), (x_1, f_1), (x_2, f_2), and we solve the following equations for a_0, a_1, a_2:

\[ \begin{bmatrix} 1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} f_0 \\ f_1 \\ f_2 \end{bmatrix}. \]

The next approximation is then x_{k+1} = -a_1/(2a_2).

6.2.7 Poly-Algorithms. Brent (1973) has written an algorithm, similar to Zeroin, called Fmin which uses a combination of

    Golden Section Search
    Quadratic Search

This is probably the most robust 1-dimensional minimum-finder available.
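One step of the quadratic fit is a small linear solve. A Python sketch (my own; the three sample points are arbitrary, and if SciPy is available, scipy.optimize.minimize_scalar offers a Brent-style golden-section/parabolic poly-algorithm in the spirit of Fmin):

    import numpy as np

    def quadratic_step(x, fx):
        """Fit p2(x) = a0 + a1 x + a2 x^2 through three points and
        return its stationary point -a1/(2 a2)."""
        V = np.vander(np.asarray(x, dtype=float), 3, increasing=True)
        a0, a1, a2 = np.linalg.solve(V, np.asarray(fx, dtype=float))
        return -a1 / (2.0 * a2)              # a minimum if a2 > 0

    xs = [0.0, 1.0, 3.0]
    f = lambda x: (x - 2.0)**2 + 1.0
    print(quadratic_step(xs, [f(x) for x in xs]))  # exactly 2.0: f is quadratic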

6.3 N-DIMENSIONAL MINIMIZATION WITHOUT DERIVATIVES

These algorithms use function evaluations only, without any recourse to derivative information. They are necessary when derivatives are not available, or are difficult to calculate, or when the function has discontinuities in its derivatives.

Example: The 1-Median in the Plane Problem. Find the location (x, y) of a central office to serve k clusters of customers at locations (x_i, y_i), with w_i customers at each location, so that the total distance travelled by all customers is minimized, i.e.,

\[ \min_{x,y} D(x, y) = \sum_{i=1}^{k} w_i \sqrt{(x - x_i)^2 + (y - y_i)^2}. \]

This 2-dimensional function D(x, y) has no derivatives at the customer locations (x, y) = (x_i, y_i).
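The objective is a one-liner in Python. This sketch (the customer data are my own illustrative choices) also exposes the kink that kills the derivatives at a customer location:

    import numpy as np

    # illustrative customer clusters: locations (x_i, y_i) and weights w_i
    pts = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
    w   = np.array([5.0, 2.0, 1.0])

    def D(p):
        """Total weighted travel distance from office location p = (x, y)."""
        return np.sum(w * np.hypot(p[0] - pts[:, 0], p[1] - pts[:, 1]))

    # D is continuous but not differentiable at each (x_i, y_i):
    h = 1e-6
    print((D([h, 0.0]) - D([0.0, 0.0])) / h,       # slope from the right
          (D([-h, 0.0]) - D([0.0, 0.0])) / (-h))   # slope from the left differs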

6.3.1 Basic Descent Algorithm. This is a prototype for all minimization algorithms. It reduces an n-dimensional search for a minimum to a sequence of 1-dimensional searches along chosen directions. In the algorithm below d_k is a direction vector with ||d_k|| = 1 and s_k is a scalar. Thus, for a given d_k and x_k, f(x_k + s_k d_k) is a function of the scalar variable s_k, called the step length.

Figure 6.12. Searching in the direction d_k.

algorithm Basic-Descent (f, x_0)
    while not converged do
        1. Choose a descent direction d_k
        2. Find s_k that minimizes f(x_k + s_k d_k)
        3. x_{k+1} := x_k + s_k d_k
    endwhile
    return x_k
endalg {Basic-Descent}

6.3.2 Cyclic Coordinate Descent Algorithm. This is a very simple, but not very effective, algorithm. It chooses each coordinate direction in turn and minimizes the function in that direction. In effect it minimizes f(x_1, x_2, ..., x_n) with respect to a single variable at each iteration k. Thus it chooses d_k = e_i, where e_i is the i-th standard basis vector, and the algorithm cycles through the coordinate directions e_1, e_2, ..., e_n, e_1, e_2, ....

algorithm Cyclic-Coordinate-Descent (f, x_1, n, tol, maxits)
    i := 1
    for k := 1 to maxits do
        d_k := e_i
        Find s_k that minimizes f(x_k + s_k d_k)
        x_{k+1} := x_k + s_k d_k
        if converged then return x_{k+1}
        i := (i mod n) + 1
    endfor
endalg {Cyclic-Coordinate-Descent}
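A Python sketch of this algorithm (my own; it reuses the golden_section routine from Section 6.2.5 as the line search, with an arbitrary fixed search bracket, and stops when a full cycle moves no coordinate):

    import numpy as np

    def cyclic_coordinate_descent(f, x, tol=1e-8, maxcycles=100, bracket=10.0):
        """Minimize f by 1-D searches along e_1, ..., e_n, repeated cyclically."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        for _ in range(maxcycles):
            largest = 0.0
            for i in range(n):
                phi = lambda s: f(x + s * np.eye(n)[i])  # f along coordinate i
                a, b = golden_section(phi, -bracket, bracket)
                s = (a + b) / 2
                x[i] += s
                largest = max(largest, abs(s))
            if largest <= tol:               # no coordinate moved: converged
                return x
        return x

    f = lambda x: x[0]**2 + x[1]**2 - 24.0
    print(cyclic_coordinate_descent(f, [4.0, 4.0]))   # -> [0, 0] in one cycle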

Example: Two Dimensions. Let us minimize the function f(x) = x_1² + x_2² - 24, starting at x_0 = (4, 4).

Iteration 1:

    d_1 = (1, 0), and x_0 + s_1 d_1 = (4, 4) + s_1(1, 0) = (4 + s_1, 4).
    f(x_0 + s_1 d_1) = (4 + s_1)² + (4)² - 24 = s_1² + 8s_1 + 8.

This has a minimum at s_1 = -4 and we have x_1 = (4, 4) - 4(1, 0) = (0, 4).

Iteration 2:

    d_2 = (0, 1), and x_1 + s_2 d_2 = (0, 4) + s_2(0, 1) = (0, 4 + s_2).
    f(x_1 + s_2 d_2) = 0 + (4 + s_2)² - 24 = s_2² + 8s_2 - 8.

This has a minimum at s_2 = -4 and we have x_2 = (0, 4) - 4(0, 1) = (0, 0). This is in fact the minimum, obtained after two iterations, because the function is very simple and regular. A function with elongated elliptical contours causes the method to take many small steps, and it is unlikely to converge quickly for more complicated functions. The function f(x) = x_1² + x_2² - 24 is separable, in that there are no cross-product terms such as x_1x_2, and the algorithm works well on such functions. Where there is interaction between the variables the algorithm does badly.

This type of behavior can be seen in adjusting a television to get the optimum picture. The picture is a function of the vector x = (colour, brightness, contrast). Most people adjust these one at a time, but they need to cycle through the adjustments because the settings affect each other. This procedure is exactly analogous to cyclic coordinate descent.

Figure 6.13. Cyclic Coordinate Descent: (4, 4) -> (0, 4) -> (0, 0).

6.3.3 The Polytope Algorithm. This is a simple method that has been found to be quite useful for minimizing functions of low dimension (n < 6). It has been used extensively in the chemical industry to optimize plant performance.

The Regular Polytope Algorithm. This was proposed by Spendley, Hext and Himsworth (1962), who called it the Simplex Method. To avoid confusion with its much more famous relative, we will call it the Regular Polytope Algorithm. It was one of many sequential algorithms applied in the chemical industry under the general heading of Evolutionary Operation, a phrase coined by G.E.P. Box in 1957 to denote sequential methods for increasing industrial productivity. At that time he, or (his relative?) M.J. Box, or both, were working for Imperial Chemical Industries, U.K.

Figure 6.14. The Regular Polytope Method.
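The polytope idea survives today as the Nelder-Mead method, the non-regular polytope variant implemented in SciPy. A minimal sketch applying it to the 1-median example above (assuming SciPy is installed, and reusing the D and pts defined in the earlier sketch; the starting point is arbitrary):

    from scipy.optimize import minimize

    # Derivative-free polytope (Nelder-Mead) search on the non-smooth D(x, y).
    res = minimize(D, x0=[1.0, 1.0], method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8})
    print(res.x, res.fun)   # with the sample data the heaviest cluster wins

Because the method never touches derivatives, the kinks of D at the customer locations cause it no difficulty.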

6.4 N-DIMENSIONAL MINIMIZATION WITH DERIVATIVES

These algorithms use first and second derivatives to guide the search for a minimum. The following conditions are sufficient for x* to be a local minimizer of f(x): g(x*) = 0, and H(x*) is positive definite, i.e., x^T H(x*) x > 0 for all x != 0. The derivative information is used to choose a good direction d_k in step 1 of the Basic Descent Algorithm.

6.4.1 The Method of Steepest Descent. This is probably the oldest (formal) optimization algorithm; it was invented by Cauchy in 1847. The essential idea of the algorithm is this: from the point x_k go in the direction in which the function decreases the fastest, that is, the direction that is steepest. We can derive this direction formally as follows: using the first two terms of the Taylor Series expansion about x_k we get

    f(x) ≈ f(x_k) + g^T(x_k)(x - x_k),   or   Δf_k = g^T(x_k) Δx_k.

We wish Δf_k to be as big a decrease as possible, i.e., large and negative. We get this by choosing Δx_k = -g(x_k) and so we choose

    d_k = -g_n(x_k) = -g(x_k)/||g(x_k)||.

The steepest descent algorithm is as follows:

algorithm Steepest-Descent (f, g, x_1, n, tol, maxits)
    for k := 1 to maxits do
        Calculate g(x_k)
        d_k := -g(x_k)/||g(x_k)||
        Find s_k that minimizes f(x_k + s_k d_k)
        x_{k+1} := x_k + s_k d_k
        if converged then return x_{k+1}
    endfor
endalg {Steepest-Descent}

Example: Steepest Descent. Minimize Rosenbrock's function f(x) = 100(x_2 - x_1²)² + (1 - x_1)², starting at x_1 = (2, 2). First we find

\[ g(x) = \begin{bmatrix} \partial f(x)/\partial x_1 \\ \partial f(x)/\partial x_2 \end{bmatrix} = \begin{bmatrix} -400 x_1 (x_2 - x_1^2) - 2(1 - x_1) \\ 200(x_2 - x_1^2) \end{bmatrix}. \]

Iteration 1:

    x_1 = (2, 2), f(x_1) = 401, g(x_1) = (1602, -400)^T,
    d_1 = -g(x_1)/||g(x_1)|| = (-0.970, 0.242)^T.
    x_1 + s_1 d_1 = (2, 2) + s_1(-0.970, 0.242) = (2 - 0.970s_1, 2 + 0.242s_1).
    f(x_1 + s_1 d_1) = 100(-2 + 4.123s_1 - 0.9413s_1²)² + (-1 + 0.970s_1)².

This has a minimum at s_1 = 0.555 and we have x_2 = (1.465, 2.1345).

Iteration 2:

    x_2 = (1.465, 2.1345), f(x_2) = 0.23, g(x_2) = (0.92, 0.01)^T,
    d_2 = -g(x_2)/||g(x_2)|| ≈ (-1.0, 0.0)^T.
    x_2 + s_2 d_2 = (1.465, 2.1345) - s_2(1, 0) = (1.465 - s_2, 2.1345).
    f(x_2 + s_2 d_2) = 100(2.1345 - (1.465 - s_2)²)² + (0.465 - s_2)².

This has a minimum at s_2 = 0.0045 and we have x_3 = (1.4605, 2.1345) and f(x_3) = 0.21. At this point the method gets stuck unless we resort to higher precision. This is typical of the Steepest Descent algorithm.
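A Python sketch of the algorithm (my own; it reuses golden_section from Section 6.2.5 as the line search, with an arbitrary bracket of [0, 2] on the step length):

    import numpy as np

    def steepest_descent(f, g, x, tol=1e-8, maxits=500):
        """Steepest descent with a golden-section line search along -g/||g||."""
        x = np.asarray(x, dtype=float)
        for k in range(maxits):
            gk = g(x)
            norm = np.linalg.norm(gk)
            if norm <= tol:                   # stationary point: converged
                return x, k
            d = -gk / norm
            a, b = golden_section(lambda s: f(x + s * d), 0.0, 2.0)
            x = x + ((a + b) / 2) * d
        return x, maxits

    f = lambda x: 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2     # Rosenbrock
    g = lambda x: np.array([-400*x[0]*(x[1] - x[0]**2) - 2*(1 - x[0]),
                             200*(x[1] - x[0]**2)])
    print(steepest_descent(f, g, [2.0, 2.0]))   # creeps very slowly toward (1, 1)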

Indeed, it can be shown (see GMW81, page 103) that even for simple quadratic functions the method crawls. For example, if f is the quadratic f(x) = c^T x + ½ x^T G x, where G is a symmetric positive-definite matrix, then

\[ f(x_{k+1}) - f(x^*) = \Delta f_{k+1} \le \left( \frac{\mathrm{cond}(G) - 1}{\mathrm{cond}(G) + 1} \right)^2 \Delta f_k. \]

If cond(G) = 50 then the reduction in the function error at each step is just (49/51)² = 0.923, i.e., if Δf_k = 1 then Δf_{k+1} = 0.923, Δf_{k+2} = 0.852, Δf_{k+3} = 0.786. This shows that for even a very mildly ill-conditioned G the convergence of {f_k} to a minimum is linear and very slow. Furthermore, if this algorithm performs badly on a simple quadratic function it is unlikely to do well on more complicated functions.

6.4.2 Newton's Algorithm. Newton's method for minimizing a function of one variable f(x) is obtained by using the first three terms of the Taylor series expansion about x_k:

    f(x) ≈ f(x_k) + f'(x_k)(x - x_k) + ½ f''(x_k)(x - x_k)².

This is a quadratic in x that has a minimum at

    x = x_{k+1} = x_k - f'(x_k)/f''(x_k).

This is the same as Newton's zero-finding algorithm except that f and f' have been replaced by f' and f'', respectively. In an analogous manner we get Newton's method in n dimensions by replacing f(x) by its 2nd-order Taylor approximation:

    f(x) ≈ f(x_k) + g^T(x_k)(x - x_k) + ½(x - x_k)^T H(x_k)(x - x_k).

This is a quadratic in x in R^n that has a minimum at

    x = x_{k+1} = x_k - H^{-1}(x_k) g(x_k),   or   Δx_k = -H^{-1}(x_k) g(x_k).

The last expression shows that for the Newton method the descent direction is d_k = -H^{-1}(x_k) g(x_k). In the pure form of Newton's algorithm we use a fixed step length s_k = 1. Hence no line search (one-directional minimization) is performed, so that step 2 of the basic descent algorithm is omitted. The main work is the calculation of d_k. It may appear that this requires the calculation of H^{-1}(x_k). However, we re-arrange the expression and solve the following equation for d_k:

    H(x_k) d_k = -g(x_k).

We can now write the pure form of Newton's algorithm.

algorithm Pure-Newton (f, g, H, x_1, n, tol, maxits)
    for k := 1 to maxits do
        Calculate g(x_k)
        Calculate H(x_k)
        Solve H(x_k) d_k = -g(x_k) for d_k
        x_{k+1} := x_k + d_k
        if converged then return x_{k+1}
    endfor
endalg {Pure-Newton}

Analysis of Newton's Algorithm. The convergence analysis is difficult and so we merely assert that if f(x) is sufficiently nice then Newton's algorithm has 2nd-order convergence. We are more interested in the computational burden because it is quite substantial for each iteration. There are three main computations per iteration:

    1. Calculate g(x_k). This requires n function evaluations.
    2. Calculate H(x_k). This requires n²/2 function evaluations (H is symmetric).
    3. Solve H(x_k) d_k = -g(x_k) for d_k. This requires O(n³) flops.

Example: Newton's Algorithm. Minimize f(x) = (x_1 - 2)⁴ + (x_1 - 2x_2)², starting at x_old = (0, 3). Then

    f(x_old) = (0 - 2)⁴ + (0 - 6)² = 16 + 36 = 52,

and

\[ g(x) = \begin{bmatrix} 4(x_1 - 2)^3 + 2(x_1 - 2x_2) \\ -4(x_1 - 2x_2) \end{bmatrix}, \qquad H(x) = \begin{bmatrix} 12(x_1 - 2)^2 + 2 & -4 \\ -4 & 8 \end{bmatrix}. \]

We must now solve H(x_old) δx = -g(x_old). That is

\[ \begin{bmatrix} 50 & -4 \\ -4 & 8 \end{bmatrix} \begin{bmatrix} \delta x_1 \\ \delta x_2 \end{bmatrix} = \begin{bmatrix} 44 \\ -24 \end{bmatrix}. \]

This gives

    δx = [0.667, -2.667]^T,   x_new = x_old + δx = [0.667, 0.333]^T,   f(x_new) = 3.16.

Here is a table of the first three iterations of the Pure Newton Method:

    k   x_k              f(x_k)   g(x_k)^T     H(x_k)            δx_k = -H^{-1}g    x_{k+1}
    1   (0, 3)           52       (-44, 24)    [50 -4; -4 8]     (0.667, -2.667)    (0.667, 0.333)
    2   (0.667, 0.333)   3.16     (-9.48, 0)   [23.3 -4; -4 8]   (0.444, 0.222)     (1.111, 0.555)
    3   (1.111, 0.555)   0.624    (-2.81, 0)   [11.5 -4; -4 8]   (0.296, 0.148)     (1.407, 0.704)

The algorithm converges, after about 10 iterations, to

    x = [2, 1]^T,   f(x) = 0,   g(x) = [0, 0]^T,   and   H(x) = [2 -4; -4 8].

Figure 6.15. Newton's Method on f(x) = (x_1 - 2)⁴ + (x_1 - 2x_2)².
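The table is easy to reproduce. A minimal Python sketch of the pure Newton iteration on this example (my own; the iterates agree with the table above to the digits shown):

    import numpy as np

    f = lambda x: (x[0] - 2)**4 + (x[0] - 2*x[1])**2
    g = lambda x: np.array([4*(x[0] - 2)**3 + 2*(x[0] - 2*x[1]),
                            -4*(x[0] - 2*x[1])])
    H = lambda x: np.array([[12*(x[0] - 2)**2 + 2, -4.0],
                            [-4.0, 8.0]])

    x = np.array([0.0, 3.0])                 # x_old
    for k in range(10):
        dx = np.linalg.solve(H(x), -g(x))    # solve H dx = -g, never form H^-1
        x = x + dx                           # pure Newton: fixed step s_k = 1
        print(k + 1, x, f(x))
    # approaches x = (2, 1), where f = 0, g = (0, 0)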