THE SECANT METHOD


Newton's method was based on using the line tangent to the curve of y = f(x), with the point of tangency (x_0, f(x_0)). When x_0 ≈ α, the graph of the tangent line is approximately the same as the graph of y = f(x) around x = α. We then used the root of the tangent line to approximate α.

Consider instead using an approximating line based on interpolation. We assume we have two estimates of the root α, say x_0 and x_1. Then we produce a linear function

    q(x) = a_0 + a_1 x

satisfying

    q(x_0) = f(x_0),    q(x_1) = f(x_1)    (*)

This line is sometimes called a secant line. Its equation is given by

    q(x) = [ (x_1 - x) f(x_0) + (x - x_0) f(x_1) ] / (x_1 - x_0)
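As a quick numerical check of the interpolation conditions (*), the secant line can be built and evaluated at both nodes. The helper name `secant_line` and the choice of sample function are illustrative, not part of the notes:

```python
def secant_line(f, x0, x1):
    """Return q(x) = ((x1 - x) f(x0) + (x - x0) f(x1)) / (x1 - x0)."""
    f0, f1 = f(x0), f(x1)
    return lambda x: ((x1 - x) * f0 + (x - x0) * f1) / (x1 - x0)

f = lambda x: x**6 - x - 1      # the example used later in these notes
q = secant_line(f, 1.0, 2.0)

# q interpolates f at both nodes: q(1) = f(1) = -1 and q(2) = f(2) = 61
print(q(1.0), q(2.0))
```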

[Figure: two sketches of y = f(x) showing the geometric construction of x_2 as the root of the secant line through (x_0, f(x_0)) and (x_1, f(x_1)), for two different configurations of x_0, x_1, and α.]

We now solve the linear equation q(x) = 0, denoting the root by x_2. This yields

    x_2 = x_1 - f(x_1) (x_1 - x_0) / (f(x_1) - f(x_0))

We can now repeat the process: use x_1 and x_2 to produce another secant line, and then use its root to approximate α. This yields the general iteration formula

    x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})),    n = 1, 2, 3, ...

This is called the secant method for solving f(x) = 0.
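The iteration above can be sketched in a few lines. The function name, tolerance, and stopping rule are my own choices, not part of the notes:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                    # secant line is horizontal: no root
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:          # use x_{n+1} - x_n as the error estimate
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x**6 - x - 1, 2.0, 1.0)
print(root)     # approximately 1.134724138
```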

EXAMPLE

We solve the equation

    f(x) = x^6 - x - 1 = 0

which was used previously as an example for both the bisection and Newton methods. The quantity x_n - x_{n-1} is used as an estimate of α - x_{n-1}.

[Figure: graph of f(x) = x^6 - x - 1 on the interval [1, 2]; the function increases from about -1 to 61.]

    n    x_n           f(x_n)       x_n - x_{n-1}   α - x_{n-1}
    0    2.0           61.0
    1    1.0           -1.0         -1.0
    2    1.01612903    -9.15E-1      1.61E-2         1.35E-1
    3    1.19057777     6.57E-1      1.74E-1         1.19E-1
    4    1.11765583    -1.68E-1     -7.29E-2        -5.59E-2
    5    1.13253155    -2.24E-2      1.49E-2         1.71E-2
    6    1.13481681     9.54E-4      2.29E-3         2.19E-3
    7    1.13472365    -5.07E-6     -9.32E-5        -9.27E-5
    8    1.13472414    -1.13E-9      4.92E-7         4.92E-7

The iterate x_8 equals α rounded to nine significant digits. As with Newton's method for this equation, the initial iterates do not converge rapidly. But as the iterates become closer to α, the speed of convergence increases.

It is clear from the numerical results that the secant method requires more iterates than Newton's method (e.g., with Newton's method, the iterate x_6 is accurate to the machine precision of around 16 decimal digits). But note that the secant method does not require a knowledge of f'(x), whereas Newton's method requires both f(x) and f'(x).

Note also that the secant method can be considered an approximation of Newton's method

    x_{n+1} = x_n - f(x_n) / f'(x_n)

obtained by using the approximation

    f'(x_n) ≈ [ f(x_n) - f(x_{n-1}) ] / (x_n - x_{n-1})
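This connection can be verified directly: a Newton step with f'(x_n) replaced by the difference quotient through the last two iterates is the secant step, up to rounding. The iterate values below are arbitrary illustrations:

```python
f = lambda x: x**6 - x - 1

x_prev, x_n = 1.0, 1.2      # two arbitrary successive iterates

# Newton step, with f'(x_n) replaced by the backward difference quotient
slope = (f(x_n) - f(x_prev)) / (x_n - x_prev)
newton_like = x_n - f(x_n) / slope

# secant step computed from its own formula
secant_next = x_n - f(x_n) * (x_n - x_prev) / (f(x_n) - f(x_prev))

print(newton_like, secant_next)   # agree to rounding error
```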

CONVERGENCE ANALYSIS

With a combination of algebraic manipulation and the mean-value theorem from calculus, we can show

    α - x_{n+1} = (α - x_n)(α - x_{n-1}) [ -f''(ξ_n) / (2 f'(ζ_n)) ],    (**)

with ξ_n and ζ_n unknown points. The point ξ_n is located between the minimum and maximum of x_{n-1}, x_n, and α; and ζ_n is located between the minimum and maximum of x_{n-1} and x_n.

Recall for Newton's method that the Newton iterates satisfied

    α - x_{n+1} = (α - x_n)^2 [ -f''(ξ_n) / (2 f'(x_n)) ]

which closely resembles (**) above.
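The error relation (**) can be checked numerically on the earlier example f(x) = x^6 - x - 1: along the secant iterates, the ratio (α - x_{n+1}) / [(α - x_n)(α - x_{n-1})] should be close to -f''(α) / (2 f'(α)), since ξ_n, ζ_n → α. The iterate index chosen below is an arbitrary mid-range value:

```python
f = lambda x: x**6 - x - 1

xs = [2.0, 1.0]                     # x_0, x_1 as in the example
for _ in range(8):
    x0, x1 = xs[-2], xs[-1]
    xs.append(x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)))

alpha = xs[-1]                      # converged to ~1.134724138

fprime  = 6 * alpha**5 - 1          # f'(alpha)
fsecond = 30 * alpha**4             # f''(alpha)
predicted = -fsecond / (2 * fprime) # about -2.42

n = 5                               # a mid-range iterate, away from roundoff
observed = (alpha - xs[n + 1]) / ((alpha - xs[n]) * (alpha - xs[n - 1]))
print(predicted, observed)          # the two ratios are close
```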

Using (**), it can be shown that x_n converges to α, and moreover

    lim_{n→∞} |α - x_{n+1}| / |α - x_n|^r = | f''(α) / (2 f'(α)) |^{r-1} ≡ c

where r = (1 + √5)/2 ≈ 1.62. This assumes that x_0 and x_1 are chosen sufficiently close to α; how close this is will vary with the function f. In addition, the above result assumes f(x) has two continuous derivatives for all x in some interval about α.

The above says that when we are close to α,

    |α - x_{n+1}| ≈ c |α - x_n|^r

This looks very much like the Newton result

    |α - x_{n+1}| ≈ M (α - x_n)^2,    M = | f''(α) / (2 f'(α)) |

and c = M^{r-1}. Both the secant and Newton methods converge at faster than a linear rate, and they are called superlinear methods.
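The exponent r ≈ 1.62 can also be observed experimentally: for errors e_n = |α - x_n|, the ratio log(e_{n+1}/e_n) / log(e_n/e_{n-1}) tends to r. A sketch for the example f(x) = x^6 - x - 1, with the index choice mine:

```python
import math

f = lambda x: x**6 - x - 1

xs = [2.0, 1.0]
for _ in range(8):
    x0, x1 = xs[-2], xs[-1]
    xs.append(x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)))

alpha = xs[-1]                          # essentially exact in double precision
e = [abs(alpha - x) for x in xs[:-1]]   # errors e_0 .. e_8

# estimate of the order of convergence from three consecutive errors
n = 7
r_est = math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
print(r_est)    # near (1 + 5**0.5) / 2 = 1.618...
```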

The secant method converges more slowly than Newton's method, but it is still quite rapid. It is rapid enough that we can prove

    lim_{n→∞} (x_{n+1} - x_n) / (α - x_n) = 1

and therefore

    α - x_n ≈ x_{n+1} - x_n

is a good error estimator.

A note of warning: Do not combine the secant formula and write it in the form

    x_{n+1} = [ f(x_n) x_{n-1} - f(x_{n-1}) x_n ] / [ f(x_n) - f(x_{n-1}) ]

This has enormous loss-of-significance errors as compared with the earlier formulation.
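A small sketch contrasting the two algebraically equivalent forms (the helper names and the choice of eight preliminary iterations are mine): near convergence, the combined form reconstructs x_{n+1} ≈ 1.13 as a quotient of two tiny, roundoff-contaminated quantities, while the standard form merely adds a tiny correction to x_n.

```python
f = lambda x: x**6 - x - 1

def step_standard(x0, x1):
    # x_n plus a small correction; roundoff stays inside the correction
    return x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))

def step_combined(x0, x1):
    # algebraically the same, but numerator and denominator are both tiny
    # near the root, amplifying roundoff in the function values
    return (f(x1) * x0 - f(x0) * x1) / (f(x1) - f(x0))

# run eight standard steps so x0, x1 are nearly converged
x0, x1 = 2.0, 1.0
for _ in range(8):
    x0, x1 = x1, step_standard(x0, x1)

print(step_standard(x0, x1))   # accurate next iterate, ~1.134724138
print(step_combined(x0, x1))   # typically agrees to far fewer digits
```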

COSTS OF SECANT & NEWTON METHODS

The Newton method

    x_{n+1} = x_n - f(x_n) / f'(x_n),    n = 0, 1, 2, ...

requires two function evaluations per iteration, that of f(x_n) and f'(x_n). The secant method

    x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})),    n = 1, 2, 3, ...

requires only one function evaluation per iteration, following the initial step. For this reason, the secant method is often faster in time, even though more iterates are needed with it than with Newton's method to attain a similar accuracy.
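The cost comparison can be made concrete by counting evaluations, assuming straightforward implementations (a hypothetical sketch; the tolerance and starting guesses are arbitrary). Newton spends two evaluations per step, while the secant method spends one after its start-up:

```python
f  = lambda x: x**6 - x - 1
df = lambda x: 6 * x**5 - 1     # derivative, needed only by Newton

evals = {"newton": 0, "secant": 0}

def newton(x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        evals["newton"] += 2            # f(x) and f'(x) every iteration
        dx = f(x) / df(x)
        x = x - dx
        if abs(dx) < tol:
            break
    return x

def secant(x0, x1, tol=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    evals["secant"] += 2                # start-up evaluations only
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1 = x1, f1, x2
        f1 = f(x1)
        evals["secant"] += 1            # one new evaluation per iteration
    return x1

r_newton = newton(2.0)
r_secant = secant(2.0, 1.0)
print(evals)    # secant needs fewer total evaluations here
```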

ADVANTAGES & DISADVANTAGES

Advantages of the secant method:
1. It converges at faster than a linear rate, so that it is more rapidly convergent than the bisection method.
2. It does not require use of the derivative of the function, something that is not available in a number of applications.
3. It requires only one function evaluation per iteration, as compared with Newton's method which requires two.

Disadvantages of the secant method:
1. It may not converge.
2. There is no guaranteed error bound for the computed iterates.
3. It is likely to have difficulty if f'(α) = 0. This means the x-axis is tangent to the graph of y = f(x) at x = α.
4. Newton's method generalizes more easily to new methods for solving simultaneous systems of nonlinear equations.

BRENT'S METHOD

Richard Brent devised a method combining the advantages of the bisection method and the secant method.
1. It is guaranteed to converge.
2. It has an error bound which will converge to zero in practice.
3. For most problems f(x) = 0, with f(x) differentiable about the root α, the method behaves like the secant method.
4. In the worst case, it is not too much worse in its convergence than the bisection method.

In MATLAB, it is implemented as fzero; and it is present in most Fortran numerical analysis libraries.
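A heavily simplified sketch of the bisection/secant idea (not Brent's actual algorithm, which also uses inverse quadratic interpolation and more careful acceptance tests): try a secant step, but fall back to bisection whenever the step leaves the bracket or the bracket fails to shrink quickly enough. All names and tolerances below are my own choices.

```python
def hybrid_root(f, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b], where f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change on [a, b]"
    force_bisect = False
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        width = b - a
        if width < tol:
            break
        if force_bisect or fb == fa:
            x = 0.5 * (a + b)                    # guaranteed-progress step
        else:
            x = b - fb * (b - a) / (fb - fa)     # secant step
            if not (a < x < b):
                x = 0.5 * (a + b)                # step left bracket: bisect
        fx = f(x)
        if fx == 0:
            return x
        if fa * fx < 0:                          # root is in [a, x]
            b, fb = x, fx
        else:                                    # root is in [x, b]
            a, fa = x, fx
        # if the bracket did not at least halve, bisect next time
        force_bisect = (b - a) > 0.5 * width
    return x

root = hybrid_root(lambda t: t**6 - t - 1, 1.0, 2.0)
print(root)     # approximately 1.134724138
```

The forced-bisection safeguard guarantees the bracket at least halves every other iteration, which is the same kind of worst-case guarantee that distinguishes Brent's method from the plain secant method.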