10.3 Steepest Descent Techniques
1 The advantage of the Newton and quasi-Newton methods for solving systems of nonlinear equations is their speed of convergence once a sufficiently accurate approximation is known. A weakness of these methods is that an accurate initial approximation to the solution is needed to ensure convergence. The Steepest Descent method considered in this section converges only linearly to the solution, but it will usually converge even for poor initial approximations. As a consequence, this method is used to find sufficiently accurate starting approximations for the Newton-based techniques, in the same way the Bisection method is used for a single equation. Xu Zhang (Mississippi State University) MA4323/6323 Numerical Analysis II Spring / 60
2 Consider a system of nonlinear equations of the form

f_1(x_1, x_2, ..., x_n) = 0,
f_2(x_1, x_2, ..., x_n) = 0,
...
f_n(x_1, x_2, ..., x_n) = 0.

Define the function

g(x_1, x_2, ..., x_n) = \sum_{i=1}^{n} [f_i(x_1, x_2, ..., x_n)]^2.

The function g(x_1, x_2, ..., x_n) is nonnegative. If x = [x_1, x_2, ..., x_n]^t is a solution to the system F(x) = 0, then g attains a local minimum there, and the minimum value is 0.
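The reduction from root finding to minimization can be sketched in a few lines. This is a Python sketch (the slides use MATLAB); the helper `g_from_F` and the toy two-equation system are illustrative, not from the text:

```python
import numpy as np

def g_from_F(F):
    """Build g(x) = sum_i f_i(x)^2 from a vector-valued F (illustrative helper)."""
    return lambda x: float(np.sum(np.asarray(F(x))**2))

# Toy system: f1 = x1^2 + x2 - 1, f2 = x1 - x2  (not the system from the slides)
F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]])
g = g_from_F(F)

# At a root of F(x) = 0, g attains its minimum value 0.
# One root of the toy system solves x1^2 + x1 - 1 = 0 with x2 = x1.
r = (np.sqrt(5.0) - 1.0) / 2.0
print(g(np.array([r, r])))        # essentially 0
print(g(np.array([1.0, 0.0])))   # positive away from a root
```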
3 The method of Steepest Descent for finding a local minimum of g : R^n -> R is as follows:
1 Evaluate g at an initial approximation x^{(0)} = [x_1^{(0)}, x_2^{(0)}, ..., x_n^{(0)}]^t.
2 Determine a direction from x^{(0)} that results in a decrease in the value of g.
3 Move an appropriate amount in this direction and call the new value x^{(1)}.
4 Repeat steps 1 through 3 with x^{(0)} replaced by x^{(1)}.
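The four steps above can be sketched as a loop. This Python sketch uses a fixed step size `alpha` as a stand-in for the line search developed in the following slides, and the quadratic test function is illustrative:

```python
import numpy as np

def steepest_descent(g, grad_g, x0, alpha=0.1, tol=1e-8, max_iter=500):
    """Minimal sketch of steps 1-4: repeatedly move against the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = grad_g(x)                 # step 2: -d is a descent direction
        if np.linalg.norm(d) < tol:
            break
        x = x - alpha * d             # step 3: move downhill; step 4: repeat
    return x

# Toy quadratic: g(x) = (x1 - 1)^2 + 2*x2^2, minimizer (1, 0)
g = lambda x: (x[0] - 1.0)**2 + 2.0 * x[1]**2
grad_g = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * x[1]])
print(steepest_descent(g, grad_g, [0.0, 1.0]))  # close to [1, 0]
```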
4 Illustration of the Method of Steepest Descent. The figure is downloaded from Wikipedia.org.
5 Review of the Gradient of a Function. For g : R^n -> R, the gradient of g at x = [x_1, x_2, ..., x_n]^t is denoted \nabla g(x) and defined by

\nabla g(x) = [ \partial g/\partial x_1 (x), \partial g/\partial x_2 (x), ..., \partial g/\partial x_n (x) ]^t.

For example, if g(x_1, x_2) = x_1^2 - 2x_1 x_2 + sin(2x_1 + x_2), then

\nabla g(x) = [ 2x_1 - 2x_2 + 2cos(2x_1 + x_2), -2x_1 + cos(2x_1 + x_2) ]^t.
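As a sanity check, the hand-computed gradient above can be compared against central finite differences. A Python sketch; the helper `fd_grad` is illustrative:

```python
import numpy as np

# The slide's example function and its hand-computed gradient
g = lambda x: x[0]**2 - 2.0*x[0]*x[1] + np.sin(2.0*x[0] + x[1])
grad_g = lambda x: np.array([2.0*x[0] - 2.0*x[1] + 2.0*np.cos(2.0*x[0] + x[1]),
                             -2.0*x[0] + np.cos(2.0*x[0] + x[1])])

def fd_grad(g, x, h=1e-6):
    """Central-difference approximation of the gradient, for checking."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        out[i] = (g(x + e) - g(x - e)) / (2.0 * h)
    return out

x = np.array([0.3, -0.7])
print(grad_g(x))
print(fd_grad(g, x))   # the two should agree to high accuracy
```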
6 Review of the Gradient of a Function. The directional derivative of g at x = [x_1, x_2, ..., x_n]^t in the direction of a unit vector v measures the rate of change of g in the direction v. It is defined by

D_v g(x) = lim_{h -> 0} [ g(x + hv) - g(x) ] / h = v \cdot \nabla g(x).

The maximum value of the directional derivative occurs when v is chosen parallel to \nabla g(x); this direction yields the greatest increase in the value of g. Consequently, the direction of greatest decrease in the value of g at x is the direction of -\nabla g(x).
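The claim that -\nabla g(x) gives the greatest decrease can be checked numerically for the two-variable example above by scanning unit directions v = (cos t, sin t); the most negative rate of change is -||\nabla g(x)||, attained at v = -\nabla g(x)/||\nabla g(x)||. A Python sketch:

```python
import numpy as np

# Same example function as the previous slide
g = lambda x: x[0]**2 - 2.0*x[0]*x[1] + np.sin(2.0*x[0] + x[1])
grad_g = lambda x: np.array([2.0*x[0] - 2.0*x[1] + 2.0*np.cos(2.0*x[0] + x[1]),
                             -2.0*x[0] + np.cos(2.0*x[0] + x[1])])

x = np.array([0.3, -0.7])
ngrad = np.linalg.norm(grad_g(x))

# D_v g(x) = v . grad g(x) over a fine grid of unit directions
ts = np.linspace(0.0, 2.0*np.pi, 3601)
vals = [float(np.dot(np.array([np.cos(t), np.sin(t)]), grad_g(x))) for t in ts]
print(min(vals), -ngrad)   # most negative rate vs -|grad g(x)|
print(max(vals), ngrad)    # most positive rate vs +|grad g(x)|
```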
7 The maximum of the directional derivative occurs when v is parallel to \nabla g(x), provided that \nabla g(x) \ne 0. As a consequence, the direction of greatest decrease in the value of g at x is the direction given by -\nabla g(x). Figure 10.3 is an illustration when g is a function of two variables. [Figure 10.3: the surface z = g(x_1, x_2), the point (x_1, x_2, g(x_1, x_2)) on it, and the steepest descent direction -\nabla g(x) at (x_1, x_2)^t.]
8 The objective is to reduce g(x) to its minimum value of zero. We choose x^{(1)} by moving away from x^{(0)} in the direction -\nabla g(x^{(0)}), which gives the greatest decrease at x^{(0)}. That is,

x^{(1)} = x^{(0)} - \alpha \nabla g(x^{(0)}), for some constant \alpha > 0.

The remaining task is to choose an appropriate value of \alpha so that g(x^{(1)}) is significantly less than g(x^{(0)}).
9 To determine a value of \alpha, we consider the single-variable function

h(\alpha) = g(x^{(0)} - \alpha \nabla g(x^{(0)})).

The value of \alpha that minimizes h is the value we need. Minimizing h directly by solving h'(\alpha) = 0 is computationally too costly, so we minimize an approximation of h(\alpha) instead. To do this, we choose three numbers \alpha_1 < \alpha_2 < \alpha_3 that, we hope, are close to where the minimum of h(\alpha) occurs. We then construct the quadratic polynomial P(x) that agrees with h at \alpha_1, \alpha_2, and \alpha_3. Let \hat{\alpha} be the minimizer of P(x) on [\alpha_1, \alpha_3]. Then \hat{\alpha} is used to approximate the minimizer of h, and the new iterate is

x^{(1)} = x^{(0)} - \hat{\alpha} \nabla g(x^{(0)}).
10 First, we choose \alpha_1 = 0, so that

g_1 = g(x^{(0)} - \alpha_1 \nabla g(x^{(0)})) = g(x^{(0)}).

Next, we choose a number \alpha_3 with

g_3 = g(x^{(0)} - \alpha_3 \nabla g(x^{(0)})) < g(x^{(0)}) = g_1.

Since \alpha_1 does not minimize h, such a number \alpha_3 does exist.
1 First try \alpha_3 = 1, and calculate g_3 = g(x^{(0)} - \alpha_3 \nabla g(x^{(0)})).
2 If g_3 < g_1, then this \alpha_3 works.
3 If g_3 \ge g_1, then replace \alpha_3 by \alpha_3/2 and go back to Step 1.
Finally, we choose \alpha_2 = \alpha_3/2 and compute g_2 = g(x^{(0)} - \alpha_2 \nabla g(x^{(0)})).
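The halving procedure for \alpha_3 can be sketched as follows. A Python sketch; `bracket_alpha3` and the toy function h are illustrative, with h standing for h(a) = g(x^{(0)} - a \nabla g(x^{(0)})):

```python
def bracket_alpha3(h, g1, alpha3=1.0, min_alpha=1e-12):
    """Halve alpha3 until h(alpha3) < h(0) = g1, as in steps 1-3 of the slide."""
    g3 = h(alpha3)
    while g3 >= g1 and alpha3 > min_alpha:
        alpha3 /= 2.0
        g3 = h(alpha3)
    return alpha3, g3

# Toy h with minimum at a = 0.3: h(1) = 1.49 fails, h(0.5) = 1.04 succeeds
h = lambda a: (a - 0.3)**2 + 1.0
a3, g3 = bracket_alpha3(h, h(0.0))
print(a3, g3)   # 0.5 1.04
```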
11 Now we have three data points (\alpha_1, g_1), (\alpha_2, g_2), (\alpha_3, g_3). Use the Newton-form quadratic interpolant (coefficients h_1 and h_2 to be determined)

P(x) = g_1 + h_1 (x - \alpha_1) + h_2 (x - \alpha_1)(x - \alpha_2),

where

h_1 = (g_2 - g_1)/(\alpha_2 - \alpha_1),   h_2 = [ (g_3 - g_1)/(\alpha_3 - \alpha_1) - h_1 ] / (\alpha_3 - \alpha_2).

Since \alpha_1 = 0 and \alpha_2 = \alpha_3/2, this simplifies to

P(x) = g_1 + h_1 x + h_2 x(x - \alpha_2),

with

h_1 = (g_2 - g_1)/\alpha_2 = 2(g_2 - g_1)/\alpha_3,
h_2 = [ (g_3 - g_1)/\alpha_3 - h_1 ] / (\alpha_3 - \alpha_2) = 2(g_1 - 2g_2 + g_3)/\alpha_3^2.
12 Note that P(x) = g_1 + h_1 x + h_2 x(x - \alpha_2). The critical point \alpha_0 of P is obtained from

P'(x) = h_1 + 2h_2 x - h_2 \alpha_2 = 0  =>  \alpha_0 = (h_2 \alpha_2 - h_1)/(2h_2) = (1/2)(\alpha_2 - h_1/h_2).

The minimum of P on [\alpha_1, \alpha_3] occurs either at the only critical point \alpha_0 of P or at the right endpoint \alpha_3. Choose \hat{\alpha} from {\alpha_0, \alpha_3} so that

g(x^{(0)} - \hat{\alpha} \nabla g(x^{(0)})) = min{g_0, g_3},

where g_0 = g(x^{(0)} - \alpha_0 \nabla g(x^{(0)})). Update the solution by

x^{(1)} = x^{(0)} - \hat{\alpha} \nabla g(x^{(0)}).
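Putting slides 10-12 together gives a complete single-variable line search. A Python sketch; `quad_linesearch` is an illustrative helper, and on an exactly quadratic h the interpolant P coincides with h, so the returned \hat{\alpha} is the true minimizer:

```python
def quad_linesearch(h):
    """One quadratic-interpolation line search step, as on the slides.
    h(a) stands for g(x0 - a*grad); returns (alpha_hat, h(alpha_hat))."""
    g1 = h(0.0)                        # alpha1 = 0
    a3 = 1.0
    g3 = h(a3)
    while g3 >= g1 and a3 > 1e-12:     # halve until a decrease is found
        a3 /= 2.0
        g3 = h(a3)
    a2 = a3 / 2.0
    g2 = h(a2)
    h1 = 2.0 * (g2 - g1) / a3          # divided differences with alpha1 = 0
    h2 = 2.0 * (g1 - 2.0*g2 + g3) / a3**2
    a0 = 0.5 * (a2 - h1 / h2)          # critical point of P
    g0 = h(a0)
    return (a0, g0) if g0 < g3 else (a3, g3)

# On a genuinely quadratic h the minimizer a = 0.3 is recovered (up to rounding)
h = lambda a: (a - 0.3)**2 + 1.0
print(quad_linesearch(h))
```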
13 Example 5. Use the Steepest Descent method with x^{(0)} = [0, 0, 0]^t to find a reasonable starting approximation to the solution of

3x_1 - cos(x_2 x_3) - 1/2 = 0,
x_1^2 - 81(x_2 + 0.1)^2 + sin(x_3) + 1.06 = 0,
e^{-x_1 x_2} + 20x_3 + (10\pi - 3)/3 = 0.
14 Solution (1/5). Let

g(x_1, x_2, x_3) = [f_1(x_1, x_2, x_3)]^2 + [f_2(x_1, x_2, x_3)]^2 + [f_3(x_1, x_2, x_3)]^2.

Then

\nabla g(x) = [ 2f_1(x) \partial f_1/\partial x_1(x) + 2f_2(x) \partial f_2/\partial x_1(x) + 2f_3(x) \partial f_3/\partial x_1(x),
               2f_1(x) \partial f_1/\partial x_2(x) + 2f_2(x) \partial f_2/\partial x_2(x) + 2f_3(x) \partial f_3/\partial x_2(x),
               2f_1(x) \partial f_1/\partial x_3(x) + 2f_2(x) \partial f_2/\partial x_3(x) + 2f_3(x) \partial f_3/\partial x_3(x) ]^t
             = 2 [J(x)]^t F(x),

where J(x) is the Jacobian matrix of F.
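The identity \nabla g(x) = 2[J(x)]^t F(x) can be checked in code for the system of Example 5. This Python sketch mirrors the MATLAB functions `Fun`, `jacobian`, and `FunG` given later in the slides (the constant 1.06 in f_2 is from the example as stated):

```python
import numpy as np

def F(x):
    """The nonlinear system of Example 5."""
    return np.array([
        3.0*x[0] - np.cos(x[1]*x[2]) - 0.5,
        x[0]**2 - 81.0*(x[1] + 0.1)**2 + np.sin(x[2]) + 1.06,
        np.exp(-x[0]*x[1]) + 20.0*x[2] + (10.0*np.pi - 3.0)/3.0,
    ])

def J(x):
    """Jacobian of F."""
    return np.array([
        [3.0,                       x[2]*np.sin(x[1]*x[2]),    x[1]*np.sin(x[1]*x[2])],
        [2.0*x[0],                  -162.0*(x[1] + 0.1),       np.cos(x[2])],
        [-x[1]*np.exp(-x[0]*x[1]),  -x[0]*np.exp(-x[0]*x[1]),  20.0],
    ])

g = lambda x: float(F(x) @ F(x))        # g = |F|^2
grad_g = lambda x: 2.0 * J(x).T @ F(x)  # gradient via 2 J^t F

x0 = np.zeros(3)
print(g(x0))        # about 111.975
print(grad_g(x0))   # about [-9, -8.1, 419.38]
```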
15 Solution (2/5). For x^{(0)} = [0, 0, 0]^t, we have

g(x^{(0)}) \approx 111.975,
\nabla g(x^{(0)}) = 2 [J(x^{(0)})]^t F(x^{(0)}) \approx [-9, -8.1, 419.38]^t.

Normalize the gradient vector:

z = \nabla g(x^{(0)}) / ||\nabla g(x^{(0)})|| \approx [-0.021451, -0.019306, 0.999583]^t.
16 Solution (3/5). Set \alpha_1 = 0, so g_1 = g(x^{(0)} - \alpha_1 z) = g(x^{(0)}) \approx 111.975. Set \alpha_3 = 1, so g_3 = g(x^{(0)} - \alpha_3 z) \approx 93.5649. Since g_3 < g_1, we let \alpha_2 = \alpha_3/2 = 0.5 and compute g_2 = g(x^{(0)} - \alpha_2 z) \approx 2.5835. The data points (\alpha_1, g_1), (\alpha_2, g_2), (\alpha_3, g_3) are approximately

(0, 111.975), (0.5, 2.5835), (1, 93.5649).

We construct P(x) = g_1 + h_1(x - \alpha_1) + h_2(x - \alpha_1)(x - \alpha_2) to interpolate these points. P(\alpha_1) = g_1 is automatically satisfied. P(\alpha_2) = g_2 yields h_1 = 2(g_2 - g_1)/\alpha_3 \approx -218.78, and P(\alpha_3) = g_3 yields h_2 = 2(g_1 - 2g_2 + g_3)/\alpha_3^2 \approx 400.75.
17 Solution (4/5). The interpolating polynomial is

P(x) \approx 111.975 - 218.78x + 400.75x(x - 0.5).

The critical point is \alpha_0 = (1/2)(\alpha_2 - h_1/h_2) \approx 0.52298. Evaluating g at \alpha_0 yields g_0 = g(x^{(0)} - \alpha_0 z) \approx 2.3276. Since g_0 < g_3, we choose \hat{\alpha} = \alpha_0 \approx 0.52298, and

x^{(1)} = x^{(0)} - \hat{\alpha} z \approx [0.01122, 0.01010, -0.52276]^t.
18 Solution (5/5). We use MATLAB programming to finish the rest of the computation. [Table 10.5 lists the iterates x_1^{(k)}, x_2^{(k)}, x_3^{(k)} and the values g(x_1^{(k)}, x_2^{(k)}, x_3^{(k)}).] A true solution to the nonlinear system is x = (0.5, 0, -0.5235988)^t, so the second iterate x^{(2)} would likely be adequate as an initial approximation for Newton's method or Broyden's method. One of these quicker-converging techniques would be appropriate at this stage, since 70 iterations of the Steepest Descent method are required to find ||x^{(k)} - x||_\infty < 0.01. Xu Zhang (Mississippi State University) MA4323/6323 Numerical Analysis II Spring / 60
19 MATLAB code (ex10_3_1_new.m):

% ex10_3_1: Steepest Descent Method
clc; clear
%% Initialization
MaxIt = 100;      % max number of iterations
tol = 10^(-2);    % tolerance
x0 = [0; 0; 0];   % initial approximation
%% Steepest Descent Method
for i = 1:MaxIt
    disp(['iteration = ', int2str(i)]);
    F = Fun(x0);
    J = jacobian(x0);
    grad = 2*J'*F;
    z = (1/norm(grad))*grad;   % normalize the gradient vector
    a1 = 0; g1 = FunG(x0-a1*z);
    a3 = 1; g3 = FunG(x0-a3*z);
    while g3 > g1              % halve a3 until a decrease is found
        a3 = a3/2;
        g3 = FunG(x0-a3*z);
    end
    a2 = a3/2; g2 = FunG(x0-a2*z);
    h1 = 2*(g2-g1)/a3;
    h2 = 2*(g1-2*g2+g3)/(a3*a3);
    a0 = (1/2)*(a2-h1/h2);     % a0 is the critical point of P
    g0 = FunG(x0-a0*z);
    if g0 < g3                 % select the smaller of g0 and g3
        a = a0;
    else
        a = a3;
    end
    g = FunG(x0-a*z)
    x = x0 - a*z
    if abs(g) < tol            % stopping criterion
        break;
    end
    x0 = x;
end
%% Function F
function F = Fun(x)
    y1 = 3*x(1)-cos(x(2).*x(3))-1/2;
    y2 = x(1).^2-81*(x(2)+0.1).^2+sin(x(3))+1.06;
    y3 = exp(-x(1).*x(2))+20*x(3)+(10*pi-3)/3;
    F = [y1; y2; y3];
end
%% Jacobian Function
function J = jacobian(x)
    n = length(x);
    J = zeros(n);              % initialization of J
    J(1,1) = 3;
    J(1,2) = x(3).*sin(x(2).*x(3));
    J(1,3) = x(2).*sin(x(2).*x(3));
    J(2,1) = 2*x(1);
    J(2,2) = -162*(x(2)+0.1);
    J(2,3) = cos(x(3));
    J(3,1) = -x(2).*exp(-x(1).*x(2));
    J(3,2) = -x(1).*exp(-x(1).*x(2));
    J(3,3) = 20;
end
%% Function g
function u = FunG(x)
    y1 = 3*x(1)-cos(x(2).*x(3))-1/2;
    y2 = x(1).^2-81*(x(2)+0.1).^2+sin(x(3))+1.06;
    y3 = exp(-x(1).*x(2))+20*x(3)+(10*pi-3)/3;
    u = y1.^2 + y2.^2 + y3.^2;
end
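For readers working outside MATLAB, the same driver can be sketched in Python with NumPy. The function definitions mirror `Fun`, `jacobian`, and `FunG` above, with the same tolerance and iteration cap:

```python
import numpy as np

def F(x):
    return np.array([
        3.0*x[0] - np.cos(x[1]*x[2]) - 0.5,
        x[0]**2 - 81.0*(x[1] + 0.1)**2 + np.sin(x[2]) + 1.06,
        np.exp(-x[0]*x[1]) + 20.0*x[2] + (10.0*np.pi - 3.0)/3.0,
    ])

def Jac(x):
    return np.array([
        [3.0,                       x[2]*np.sin(x[1]*x[2]),    x[1]*np.sin(x[1]*x[2])],
        [2.0*x[0],                  -162.0*(x[1] + 0.1),       np.cos(x[2])],
        [-x[1]*np.exp(-x[0]*x[1]),  -x[0]*np.exp(-x[0]*x[1]),  20.0],
    ])

def g(x):
    return float(F(x) @ F(x))

def steepest_descent(x0, tol=1e-2, max_it=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_it):
        grad = 2.0 * Jac(x).T @ F(x)
        z = grad / np.linalg.norm(grad)      # normalized direction
        h = lambda a: g(x - a * z)
        g1 = h(0.0)
        a3 = 1.0
        g3 = h(a3)
        while g3 >= g1 and a3 > 1e-12:       # halve until a decrease is found
            a3 /= 2.0
            g3 = h(a3)
        a2 = a3 / 2.0
        g2 = h(a2)
        h1 = 2.0 * (g2 - g1) / a3
        h2 = 2.0 * (g1 - 2.0*g2 + g3) / (a3 * a3)
        a0 = 0.5 * (a2 - h1 / h2)            # critical point of the interpolant
        a = a0 if h(a0) < g3 else a3
        x = x - a * z
        if g(x) < tol:                       # stopping criterion on g
            break
    return x

x = steepest_descent([0.0, 0.0, 0.0])
print(x)   # slowly approaches the true solution (0.5, 0, -0.5235988)
```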
More information8 Numerical methods for unconstrained problems
8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields
More informationOn fast trust region methods for quadratic models with linear constraints. M.J.D. Powell
DAMTP 2014/NA02 On fast trust region methods for quadratic models with linear constraints M.J.D. Powell Abstract: Quadratic models Q k (x), x R n, of the objective function F (x), x R n, are used by many
More informationChapter 3 Numerical Methods
Chapter 3 Numerical Methods Part 2 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization 1 Outline 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization Summary 2 Outline 3.2
More informationOutline. Math Numerical Analysis. Intermediate Value Theorem. Lecture Notes Zeros and Roots. Joseph M. Mahaffy,
Outline Math 541 - Numerical Analysis Lecture Notes Zeros and Roots Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research
More informationMath Numerical Analysis
Math 541 - Numerical Analysis Lecture Notes Zeros and Roots Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research Center
More informationCLASS NOTES Computational Methods for Engineering Applications I Spring 2015
CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 Petros Koumoutsakos Gerardo Tauriello (Last update: July 2, 2015) IMPORTANT DISCLAIMERS 1. REFERENCES: Much of the material
More informationNumerical Optimization
Numerical Optimization Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Spring 2010 Emo Todorov (UW) AMATH/CSE 579, Spring 2010 Lecture 9 1 / 8 Gradient descent
More informationIntro to Scientific Computing: How long does it take to find a needle in a haystack?
Intro to Scientific Computing: How long does it take to find a needle in a haystack? Dr. David M. Goulet Intro Binary Sorting Suppose that you have a detector that can tell you if a needle is in a haystack,
More informationNumerical Analysis: Interpolation Part 1
Numerical Analysis: Interpolation Part 1 Computer Science, Ben-Gurion University (slides based mostly on Prof. Ben-Shahar s notes) 2018/2019, Fall Semester BGU CS Interpolation (ver. 1.00) AY 2018/2019,
More informationStatic unconstrained optimization
Static unconstrained optimization 2 In unconstrained optimization an objective function is minimized without any additional restriction on the decision variables, i.e. min f(x) x X ad (2.) with X ad R
More informationNonlinear equations can take one of two forms. In the nonlinear root nding n n
Chapter 3 Nonlinear Equations Nonlinear equations can take one of two forms. In the nonlinear root nding n n problem, a function f from < to < is given, and one must compute an n-vector x, called a root
More informationUNCONSTRAINED OPTIMIZATION
UNCONSTRAINED OPTIMIZATION 6. MATHEMATICAL BASIS Given a function f : R n R, and x R n such that f(x ) < f(x) for all x R n then x is called a minimizer of f and f(x ) is the minimum(value) of f. We wish
More informationNumerical Methods in Informatics
Numerical Methods in Informatics Lecture 2, 30.09.2016: Nonlinear Equations in One Variable http://www.math.uzh.ch/binf4232 Tulin Kaman Institute of Mathematics, University of Zurich E-mail: tulin.kaman@math.uzh.ch
More information1.5 EQUATIONS. Solving Linear Equations Solving Quadratic Equations Other Types of Equations
SECTION.5 Equations 45 0. DISCUSS: Algebraic Errors The left-hand column of the table lists some common algebraic errors. In each case, give an example using numbers that shows that the formula is not
More informationConstrained optimization. Unconstrained optimization. One-dimensional. Multi-dimensional. Newton with equality constraints. Active-set method.
Optimization Unconstrained optimization One-dimensional Multi-dimensional Newton s method Basic Newton Gauss- Newton Quasi- Newton Descent methods Gradient descent Conjugate gradient Constrained optimization
More informationMath 480 The Vector Space of Differentiable Functions
Math 480 The Vector Space of Differentiable Functions The vector space of differentiable functions. Let C (R) denote the set of all infinitely differentiable functions f : R R. Then C (R) is a vector space,
More informationCS 323: Numerical Analysis and Computing
CS 323: Numerical Analysis and Computing MIDTERM #2 Instructions: This is an open notes exam, i.e., you are allowed to consult any textbook, your class notes, homeworks, or any of the handouts from us.
More information