Generalization Of The Secant Method For Nonlinear Equations


Applied Mathematics E-Notes, 8(2008), 115-123. © ISSN 1607-2510. Available free at mirror sites of http://www.math.nthu.edu.tw/~amen/

Avram Sidi (Computer Science Department, Technion - Israel Institute of Technology, Haifa 32000, Israel)

Received 27 February 2007. Mathematics Subject Classifications: 65H05, 65H99.

Abstract

The secant method is a very effective numerical procedure used for solving nonlinear equations of the form f(x) = 0. It is derived via a linear interpolation procedure and employs only values of f(x) at the approximations to the root of f(x) = 0; hence it computes f(x) only once per iteration. In this note, we generalize it by replacing the relevant linear interpolant by a suitable (k + 1)-point polynomial of interpolation, where k is an integer at least 2. Just as the secant method, this generalization too enjoys the property that it computes f(x) only once per iteration. We provide its error in closed form and analyze its order of convergence. We show that this order of convergence is greater than that of the secant method, and it increases as k increases. We also confirm the theory via an illustrative example.

1 Introduction

Let $\alpha$ be the solution to the equation

$$f(x) = 0. \tag{1}$$

An effective iterative method used for solving (1) that makes direct use of $f(x)$ [but no derivatives of $f(x)$] is the secant method that is discussed in many books on numerical analysis. See, for example, Atkinson [1], Henrici [2], Ralston and Rabinowitz [3], and Stoer and Bulirsch [5]. See also the recent note [4] by the author, in which the treatments of the secant method and of the Newton-Raphson, regula falsi, and Steffensen methods are presented in a unified manner.

This method is derived by a linear interpolation procedure as follows: Starting with two initial approximations $x_0$ and $x_1$ to the solution $\alpha$ of (1), we compute a sequence of approximations $\{x_n\}_{n=0}^{\infty}$, such that the approximation $x_{n+1}$ is determined as the point of intersection (in the $x$-$y$ plane) of the straight line through the points $(x_n, f(x_n))$ and $(x_{n-1}, f(x_{n-1}))$ with the $x$-axis. Since the equation of this straight line is

$$y = f(x_n) + \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}\,(x - x_n), \tag{2}$$

$x_{n+1}$ is given as

$$x_{n+1} = x_n - \frac{f(x_n)}{\dfrac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}}. \tag{3}$$

In terms of divided differences, (3) can be written in the form

$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, x_{n-1}]}. \tag{4}$$

We recall that, provided $f \in C^2(I)$, where $I$ is an open interval containing $\alpha$, and $f'(\alpha) \neq 0$, and that $x_0$ and $x_1$ are sufficiently close to $\alpha$, the sequence $\{x_n\}_{n=0}^{\infty}$ converges to $\alpha$ with order at least $(1 + \sqrt{5})/2 = 1.618\ldots$.

Another way of obtaining the secant method, of interest to us in the present work, is via a variation of the Newton-Raphson method. Recall that in the Newton-Raphson method, we start with an initial approximation $x_0$ and generate a sequence of approximations $\{x_n\}_{n=0}^{\infty}$ to $\alpha$ through

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, \ldots. \tag{5}$$

We also recall that, provided $f \in C^2(I)$, where $I$ is an open interval containing $\alpha$, and $f'(\alpha) \neq 0$, and that $x_0$ is sufficiently close to $\alpha$, the sequence $\{x_n\}_{n=0}^{\infty}$ converges to $\alpha$ with order at least 2. As such, the Newton-Raphson method is extremely effective. To avoid computing $f'(x)$ [note that $f'(x)$ may not always be available or may be costly to compute], and to preserve the excellent convergence properties of the Newton-Raphson method, we replace $f'(x_n)$ in (5) by the approximation $f[x_n, x_{n-1}] = [f(x_n) - f(x_{n-1})]/(x_n - x_{n-1})$. This results in (4), that is, in the secant method. The justification for this approach is as follows: When convergence takes place, that is, when $\lim_{n\to\infty} x_n = \alpha$, the difference $x_n - x_{n-1}$ tends to zero, and this implies that, as $n$ increases, the accuracy of $f[x_n, x_{n-1}]$ as an approximation to $f'(x_n)$ increases as well. Note that the Newton-Raphson iteration requires two function evaluations, one of $f(x)$ and another of $f'(x)$, per iteration. The secant method, on the other hand, requires only one function evaluation per iteration, namely, that of $f(x)$.

In Section 2 of this note, we consider in detail a generalization of the second of the two approaches described above, using polynomial interpolation of degree $k$ with $k > 1$. The $(k+1)$-point iterative method that results from this generalization turns out to be very effective. It is of order higher than that of the secant method and requires only one function evaluation per iteration. In Section 3, we analyze this method and determine its order as well. In Section 4, we confirm our theory via a numerical example.
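For concreteness, the secant iteration (4) can be sketched in Python as follows; the tolerance and the stopping rule are illustrative choices, not taken from the paper.

```python
# A minimal sketch of the secant iteration (4); tolerance and iteration
# cap are illustrative choices, not prescribed by the paper.

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 via x_{n+1} = x_n - f(x_n)/f[x_n, x_{n-1}]."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        slope = (f1 - f0) / (x1 - x0)   # divided difference f[x_n, x_{n-1}]
        x0, f0 = x1, f1                 # shift: old x_n becomes x_{n-1}
        x1 = x1 - f1 / slope            # the secant step (4)
        f1 = f(x1)                      # the single new f-evaluation
        if abs(x1 - x0) < tol:
            break
    return x1

# Example: f(x) = x^3 - 8 with x_0 = 5, x_1 = 4, as in Section 4.
print(secant(lambda x: x**3 - 8, 5.0, 4.0))   # ~2.0
```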

2 Generalization of Secant Method

We start by discussing a known generalization of the secant method (see, for example, Traub [6, Chapters 4, 6, and 10]). In this generalization, we approximate $f(x)$ by the polynomial of interpolation $p_{n,k}(x)$, where $p_{n,k}(x_i) = f(x_i)$, $i = n, n-1, \ldots, n-k$, assuming that $x_0, x_1, \ldots, x_n$ have all been computed. Following that, we determine $x_{n+1}$ as a zero of $p_{n,k}(x)$, provided a real solution to $p_{n,k}(x) = 0$ exists. Thus, $x_{n+1}$ is the solution to a polynomial equation of degree $k$. For $k = 1$, what we have is nothing but the secant method. For $k = 2$, $x_{n+1}$ is one of the solutions to a quadratic equation, and the resulting method is known as the method of Müller. Clearly, for $k \geq 3$, the determination of $x_{n+1}$ is not easy.

This difficulty prompts us to consider the second approach to the secant method we discussed in Section 1, in which we replaced $f'(x_n)$ by the slope of the straight line through the points $(x_n, f(x_n))$ and $(x_{n-1}, f(x_{n-1}))$, that is, by the derivative (at $x_n$) of the (linear) interpolant to $f(x)$ at $x_n$ and $x_{n-1}$. We generalize this approach by replacing $f'(x_n)$ by $p'_{n,k}(x_n)$, the derivative at $x_n$ of the polynomial $p_{n,k}(x)$ interpolating $f(x)$ at the points $x_{n-i}$, $i = 0, 1, \ldots, k$, mentioned in the preceding paragraph, with $k \geq 2$. Because $p_{n,k}(x)$ is a better approximation to $f(x)$ in the neighborhood of $x_n$, $p'_{n,k}(x_n)$ is a better approximation to $f'(x_n)$ when $k \geq 2$ than the $f[x_n, x_{n-1}]$ used in the secant method. In addition, just as the secant method, the new method computes the function $f(x)$ only once per iteration step, the computation being that of $f(x_n)$. Thus, the new method is described by the following $(k+1)$-point iteration:

$$x_{n+1} = x_n - \frac{f(x_n)}{p'_{n,k}(x_n)}, \quad n = k, k+1, \ldots, \tag{6}$$

with $x_0, x_1, \ldots, x_k$ as initial approximations to be provided by the user. Of course, with $k$ fixed, we can start with $x_0$ and $x_1$, compute $x_2$ via the method we have described with $k = 1$ (namely, via the secant method), compute $x_3$ via the method we have described with $k = 2$, and so on, until we have completed the list $x_0, x_1, \ldots, x_k$.

We now turn to the computational aspects of this method. What we need is a fast method for computing $p'_{n,k}(x_n)$. For this, we write $p_{n,k}(x)$ in Newtonian form as follows:

$$p_{n,k}(x) = f(x_n) + \sum_{i=1}^{k} f[x_n, x_{n-1}, \ldots, x_{n-i}] \prod_{j=0}^{i-1} (x - x_{n-j}). \tag{7}$$

Here the $f[x_i, x_{i+1}, \ldots, x_m]$ are divided differences of $f(x)$, and we recall that they can be defined recursively via

$$f[x_i] = f(x_i); \qquad f[x_i, x_j] = \frac{f[x_i] - f[x_j]}{x_i - x_j}, \quad x_i \neq x_j, \tag{8}$$

and, for $m > i+1$, via

$$f[x_i, x_{i+1}, \ldots, x_m] = \frac{f[x_i, x_{i+1}, \ldots, x_{m-1}] - f[x_{i+1}, x_{i+2}, \ldots, x_m]}{x_i - x_m}, \quad x_i \neq x_m. \tag{9}$$

We also recall that $f[x_i, x_{i+1}, \ldots, x_m]$ is a symmetric function of its arguments, that is, it has the same value under any permutation of $\{x_i, x_{i+1}, \ldots, x_m\}$. Thus, in (7), $f[x_n, x_{n-1}, \ldots, x_{n-i}] = f[x_{n-i}, x_{n-i+1}, \ldots, x_n]$. In addition, when $f \in C^m(I)$, where $I$ is an open interval containing the points $z_0, z_1, \ldots, z_m$, whether these are distinct or not, there holds

$$f[z_0, z_1, \ldots, z_m] = \frac{f^{(m)}(\xi)}{m!} \quad \text{for some } \xi \in (\min_i\{z_i\}, \max_i\{z_i\}).$$
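In code, the recursions (8) and (9) take only a few lines. The sketch below (illustrative names, not from the paper) builds the columns of the divided-difference table for a given set of nodes:

```python
# A sketch of the recursions (8)-(9); cols[j][i] holds the divided
# difference f[x_i, ..., x_{i+j}]. Names are illustrative.

def divided_difference_table(xs, fs):
    """Return the columns of the divided-difference table for nodes xs."""
    cols = [list(fs)]                      # column 0: f[x_i] = f(x_i), by (8)
    for j in range(1, len(xs)):
        prev = cols[-1]
        cols.append([(prev[i] - prev[i + 1]) / (xs[i] - xs[i + j])
                     for i in range(len(prev) - 1)])   # recursion (9)
    return cols

# Example: the table over four nodes for f(x) = x^3 - 8.
xs = [5.0, 4.0, 3.0, 2.5]
table = divided_difference_table(xs, [x**3 - 8 for x in xs])
print(table[2])   # second-order divided differences f[x_i, x_{i+1}, x_{i+2}]
```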

Going back to (7), we note that $p_{n,k}(x)$ there is computed by ordering the $x_i$ as $x_n, x_{n-1}, \ldots, x_{n-k}$. This ordering enables us to compute $p'_{n,k}(x_n)$ easily. Indeed, differentiating $p_{n,k}(x)$ in (7) and letting $x = x_n$, we obtain

$$p'_{n,k}(x_n) = f[x_n, x_{n-1}] + \sum_{i=2}^{k} f[x_n, x_{n-1}, \ldots, x_{n-i}] \prod_{j=1}^{i-1} (x_n - x_{n-j}). \tag{10}$$

In addition, note that the relevant divided-difference table need not be computed anew each iteration; what is needed is adding a new diagonal (from the south-west to the north-east) at the bottom of the existing table. To make this point clear, let us look at the following example: Suppose $k = 3$ and we have computed $x_i$, $i = 0, 1, \ldots, 7$. To compute $x_8$, we use the divided-difference table in Table 1.

    x_0   f_0
                 f_01
    x_1   f_1          f_012
                 f_12          f_0123
    x_2   f_2          f_123
                 f_23          f_1234
    x_3   f_3          f_234
                 f_34          f_2345
    x_4   f_4          f_345
                 f_45          f_3456
    x_5   f_5          f_456
                 f_56          f_4567
    x_6   f_6          f_567
                 f_67
    x_7   f_7

Table 1: Table of divided differences over $\{x_0, x_1, \ldots, x_7\}$ for use in computing $x_8$ via $p_{7,3}(x)$ in (6). Note that $f_{i,i+1,\ldots,m}$ stands for $f[x_i, x_{i+1}, \ldots, x_m]$ throughout.

Letting $f_{i,i+1,\ldots,m}$ stand for $f[x_i, x_{i+1}, \ldots, x_m]$, we have

$$x_8 = x_7 - \frac{f(x_7)}{p'_{7,3}(x_7)} = x_7 - \frac{f_7}{f_{67} + f_{567}(x_7 - x_6) + f_{4567}(x_7 - x_6)(x_7 - x_5)}.$$

To compute $x_9$, we will need the divided differences $f_8$, $f_{78}$, $f_{678}$, $f_{5678}$. Computing first $f_8 = f(x_8)$ with the newly computed $x_8$, the rest of these divided differences can be computed from the bottom diagonal of Table 1 via the recursion relations

$$f_{78} = \frac{f_7 - f_8}{x_7 - x_8}, \qquad f_{678} = \frac{f_{67} - f_{78}}{x_6 - x_8}, \qquad f_{5678} = \frac{f_{567} - f_{678}}{x_5 - x_8},$$

in this order, and appended to the bottom of Table 1. Actually, we can do even better: Since we need only the bottom diagonal of Table 1 to compute $x_8$, we need to save only this diagonal, namely, only the entries $f_7$, $f_{67}$, $f_{567}$, $f_{4567}$. Once we have computed $x_8$ and $f_8 = f(x_8)$, we can overwrite $f_7$, $f_{67}$, $f_{567}$, $f_{4567}$ with $f_8$, $f_{78}$, $f_{678}$, $f_{5678}$. Thus, in general, to be able to compute $x_{n+1}$ via (6) after $x_n$ has been determined, we need to store only the entries $f_n, f_{n-1,n}, \ldots, f_{n-k,n-k+1,\ldots,n}$, along with $x_n, x_{n-1}, \ldots, x_{n-k}$.
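The storage scheme just described is easy to implement. The following Python sketch of iteration (6) evaluates $p'_{n,k}(x_n)$ via (10) and keeps only the bottom diagonal of the divided-difference table, overwriting it at every step; the stopping rule, tolerance, and all identifiers are illustrative assumptions rather than prescriptions of the paper.

```python
# A sketch of iteration (6) with p'_{n,k}(x_n) evaluated via (10). Only
# the bottom diagonal of the divided-difference table is stored:
# diag[i] = f[x_{n-i}, ..., x_n]. Names and stopping rule are illustrative.

def generalized_secant(f, starts, tol=1e-13, max_iter=100):
    """starts = [x_0, ..., x_k], the user-supplied initial approximations."""
    k = len(starts) - 1
    nodes, diag = [starts[0]], [f(starts[0])]

    def extend(x_new):
        """Append x_new; return the new bottom diagonal, as in Section 2."""
        new = [f(x_new)]                  # the single f-evaluation of the step
        for i in range(1, len(diag) + 1): # e.g. f_78, f_678, f_5678 in turn
            new.append((diag[i - 1] - new[i - 1]) / (nodes[-i] - x_new))
        nodes.append(x_new)
        return new

    for x in starts[1:]:                  # build the initial diagonal
        diag = extend(x)

    for _ in range(max_iter):
        xn = nodes[-1]
        deriv, prod = diag[1], 1.0        # (10): f[x_n, x_{n-1}] + ...
        for i in range(2, k + 1):
            prod *= xn - nodes[-i]        # (x_n - x_{n-1})...(x_n - x_{n-i+1})
            deriv += diag[i] * prod
        x_next = xn - diag[0] / deriv     # the step (6)
        diag = extend(x_next)[: k + 1]    # overwrite; drop the oldest entry
        del nodes[: -(k + 1)]
        if abs(x_next - xn) < tol:
            break
    return nodes[-1]

# Example: k = 2 on f(x) = x^3 - 8 (compare Section 4).
print(generalized_secant(lambda x: x**3 - 8, [5.0, 4.0, 3.0]))
```

For $k = 1$ the sketch reduces to the secant method, and each pass through the main loop evaluates $f$ exactly once, in agreement with the count given above.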

3 Convergence Analysis

We now turn to the analysis of the sequence $\{x_n\}_{n=0}^{\infty}$ that is generated via (6). Since we already know everything concerning the case $k = 1$, namely, the secant method, we treat the case $k \geq 2$. The following theorem gives the main convergence result for the generalized secant method.

THEOREM 3.1. Let $\alpha$ be the solution to the equation $f(x) = 0$. Assume $f \in C^{k+1}(I)$, where $I$ is an open interval containing $\alpha$, and assume also that $f'(\alpha) \neq 0$, in addition to $f(\alpha) = 0$. Let $x_0, x_1, \ldots, x_k$ be distinct initial approximations to $\alpha$, and generate $x_n$, $n = k+1, k+2, \ldots$, via

$$x_{n+1} = x_n - \frac{f(x_n)}{p'_{n,k}(x_n)}, \quad n = k, k+1, \ldots, \tag{11}$$

where $p_{n,k}(x)$ is the polynomial of interpolation to $f(x)$ at the points $x_n, x_{n-1}, \ldots, x_{n-k}$. Then, provided $x_0, x_1, \ldots, x_k$ are in $I$ and sufficiently close to $\alpha$, the sequence $\{x_n\}_{n=0}^{\infty}$ converges to $\alpha$, and

$$\lim_{n\to\infty} \frac{\epsilon_{n+1}}{\prod_{i=0}^{k} \epsilon_{n-i}} = \frac{(-1)^{k+1}}{(k+1)!}\,\frac{f^{(k+1)}(\alpha)}{f'(\alpha)} \equiv L; \qquad \epsilon_n = x_n - \alpha \ \text{ for all } n. \tag{12}$$

The order of convergence is $s_k$, $1 < s_k < 2$, where $s_k$ is the only positive root of the equation $s^{k+1} = \sum_{i=0}^{k} s^i$ and satisfies

$$2 - 2^{-(k+1)} e < s_k < 2 - 2^{-(k+1)} \ \text{ for } k \geq 2; \qquad s_k < s_{k+1}; \qquad \lim_{k\to\infty} s_k = 2, \tag{13}$$

where $e$ is the base of natural logarithms, and

$$\lim_{n\to\infty} \frac{|\epsilon_{n+1}|}{|\epsilon_n|^{s_k}} = |L|^{(s_k - 1)/k}. \tag{14}$$

REMARK. Note that, in Theorem 3.1, $s_2 = 1.839$, $s_3 = 1.928$, $s_4 = 1.966$, etc., rounded to three significant figures. For the secant method, we have $s_1 = 1.618$, rounded to three significant figures.
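The values quoted in the remark are easy to verify numerically: since $g(s) = s^{k+1} - \sum_{i=0}^{k} s^i$ satisfies $g(1) = -k < 0$ and $g(2) = 1 > 0$, plain bisection on $[1, 2]$ locates $s_k$. A minimal Python sketch (illustrative, not part of the paper):

```python
# Numerical check of the orders in the remark: s_k is the root in (1, 2)
# of s^{k+1} = sum_{i=0}^{k} s^i; g(1) = -k < 0 and g(2) = 1 > 0, so
# bisection on [1, 2] applies.

def order_sk(k, bisections=100):
    g = lambda s: s ** (k + 1) - sum(s ** i for i in range(k + 1))
    lo, hi = 1.0, 2.0
    for _ in range(bisections):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print([round(order_sk(k), 3) for k in (1, 2, 3, 4)])
# [1.618, 1.839, 1.928, 1.966], matching s_1, ..., s_4 above.
```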

PROOF. Below, we shall use the short-hand notation

$$\mathrm{int}(a_1, \ldots, a_m) = (\min\{a_1, \ldots, a_m\}, \max\{a_1, \ldots, a_m\}).$$

We start by deriving a closed-form expression for the error in $x_{n+1}$. Subtracting $\alpha$ from both sides of (11), and noting that $f(x_n) = f(x_n) - f(\alpha) = f[x_n, \alpha](x_n - \alpha)$, we have

$$x_{n+1} - \alpha = \left(1 - \frac{f[x_n, \alpha]}{p'_{n,k}(x_n)}\right)(x_n - \alpha) = \frac{p'_{n,k}(x_n) - f[x_n, \alpha]}{p'_{n,k}(x_n)}\,(x_n - \alpha). \tag{15}$$

We now note that

$$p'_{n,k}(x_n) - f[x_n, \alpha] = \big\{p'_{n,k}(x_n) - f'(x_n)\big\} + \big\{f'(x_n) - f[x_n, \alpha]\big\},$$

which, by

$$f'(x_n) - f[x_n, \alpha] = f[x_n, x_n] - f[x_n, \alpha] = f[x_n, x_n, \alpha](x_n - \alpha) = \frac{f^{(2)}(\eta_n)}{2!}(x_n - \alpha)$$

for some $\eta_n \in \mathrm{int}(x_n, \alpha)$, and by

$$f'(x_n) - p'_{n,k}(x_n) = f[x_n, x_n, x_{n-1}, \ldots, x_{n-k}] \prod_{i=1}^{k}(x_n - x_{n-i}) = \frac{f^{(k+1)}(\xi_n)}{(k+1)!} \prod_{i=1}^{k}(x_n - x_{n-i}) \tag{16}$$

for some $\xi_n \in \mathrm{int}(x_n, x_{n-1}, \ldots, x_{n-k})$, becomes

$$p'_{n,k}(x_n) - f[x_n, \alpha] = -\frac{f^{(k+1)}(\xi_n)}{(k+1)!} \prod_{i=1}^{k}(\epsilon_n - \epsilon_{n-i}) + \frac{f^{(2)}(\eta_n)}{2!}\,\epsilon_n. \tag{17}$$

Substituting (16) and (17) in (15), and letting

$$\hat{D}_n = \frac{f^{(k+1)}(\xi_n)}{(k+1)!} \quad \text{and} \quad \hat{E}_n = \frac{f^{(2)}(\eta_n)}{2!}, \tag{18}$$

we finally obtain

$$\epsilon_{n+1} = C_n \epsilon_n; \qquad C_n \equiv \frac{p'_{n,k}(x_n) - f[x_n, \alpha]}{p'_{n,k}(x_n)} = \frac{-\hat{D}_n \prod_{i=1}^{k}(\epsilon_n - \epsilon_{n-i}) + \hat{E}_n \epsilon_n}{f'(x_n) - \hat{D}_n \prod_{i=1}^{k}(\epsilon_n - \epsilon_{n-i})}. \tag{19}$$

We now prove that convergence takes place. Without loss of generality, we can assume that $I = (\alpha - T, \alpha + T)$ for some $T > 0$, and that $m_1 = \min_{x \in I} |f'(x)| > 0$. This is possible since $\alpha \in I$ and $f'(\alpha) \neq 0$. Let $M_s = \max_{x \in I} |f^{(s)}(x)|/s!$, $s = 1, 2, \ldots$, and choose the interval $J = (\alpha - t/2, \alpha + t/2)$ sufficiently small to ensure that $m_1 > 2M_{k+1} t^k + M_2 t/2$. It can now be shown that, provided $x_{n-i}$, $i = 0, 1, \ldots, k$, are all in $J$, there holds

$$|C_n| \leq \frac{M_{k+1} \prod_{i=1}^{k} |\epsilon_n - \epsilon_{n-i}| + M_2 |\epsilon_n|}{m_1 - M_{k+1} \prod_{i=1}^{k} |\epsilon_n - \epsilon_{n-i}|} \leq \frac{M_{k+1} \prod_{i=1}^{k} (|\epsilon_n| + |\epsilon_{n-i}|) + M_2 |\epsilon_n|}{m_1 - M_{k+1} \prod_{i=1}^{k} (|\epsilon_n| + |\epsilon_{n-i}|)} \leq C,$$

where

$$C \equiv \frac{M_{k+1} t^k + M_2\,t/2}{m_1 - M_{k+1} t^k} < 1.$$

Consequently, by (19), $|\epsilon_{n+1}| < |\epsilon_n|$, which implies that $x_{n+1} \in J$, just like $x_{n-i}$, $i = 0, 1, \ldots, k$. Therefore, if $x_0, x_1, \ldots, x_k$ are chosen in $J$, then $|C_n| \leq C < 1$ and hence $x_{n+1} \in J$, for all $n \geq k$. Consequently, by (19), $\lim_{n\to\infty} x_n = \alpha$.

As for (12), we proceed as follows: By the fact that $\lim_{n\to\infty} x_n = \alpha$, we first note that $\lim_{n\to\infty} p'_{n,k}(x_n) = f'(\alpha) = \lim_{n\to\infty} f[x_n, \alpha]$, and thus $\lim_{n\to\infty} C_n = 0$. This means that $\lim_{n\to\infty} (\epsilon_{n+1}/\epsilon_n) = 0$ and, equivalently, that $\{x_n\}_{n=0}^{\infty}$ converges with order greater than 1. As a result, $\lim_{n\to\infty} (\epsilon_n/\epsilon_{n-i}) = 0$ for all $i \geq 1$, and $\epsilon_n/\epsilon_{n-i} = o(\epsilon_n/\epsilon_{n-j})$ as $n \to \infty$, for $j < i$. Consequently, expanding in (19) the product $\prod_{i=1}^{k}(\epsilon_n - \epsilon_{n-i})$, we have

$$\prod_{i=1}^{k}(\epsilon_n - \epsilon_{n-i}) = \prod_{i=1}^{k} \big[-\epsilon_{n-i}\big(1 - \epsilon_n/\epsilon_{n-i}\big)\big] = (-1)^k \left(\prod_{i=1}^{k} \epsilon_{n-i}\right)\big[1 + O(\epsilon_n/\epsilon_{n-1})\big] \quad \text{as } n \to \infty. \tag{20}$$

Substituting (20) in (19), and defining

$$D_n = \frac{\hat{D}_n}{p'_{n,k}(x_n)}, \qquad E_n = \frac{\hat{E}_n}{p'_{n,k}(x_n)}, \tag{21}$$

we obtain

$$\epsilon_{n+1} = (-1)^{k+1} D_n \left(\prod_{i=0}^{k} \epsilon_{n-i}\right)\big[1 + O(\epsilon_n/\epsilon_{n-1})\big] + E_n \epsilon_n^2 \quad \text{as } n \to \infty. \tag{22}$$

Dividing both sides of (22) by $\prod_{i=0}^{k} \epsilon_{n-i}$, and defining

$$\sigma_n = \frac{\epsilon_{n+1}}{\prod_{i=0}^{k} \epsilon_{n-i}}, \tag{23}$$

we have

$$\sigma_n = (-1)^{k+1} D_n \big[1 + O(\epsilon_n/\epsilon_{n-1})\big] + E_n \sigma_{n-1} \epsilon_{n-k-1} \quad \text{as } n \to \infty. \tag{24}$$

Now,

$$\lim_{n\to\infty} D_n = \frac{1}{(k+1)!}\,\frac{f^{(k+1)}(\alpha)}{f'(\alpha)}, \qquad \lim_{n\to\infty} E_n = \frac{f^{(2)}(\alpha)}{2 f'(\alpha)}. \tag{25}$$

Since $\lim_{n\to\infty} D_n$ and $\lim_{n\to\infty} E_n$ are finite, $\lim_{n\to\infty} \epsilon_n/\epsilon_{n-1} = 0$, and $\lim_{n\to\infty} \epsilon_{n-k-1} = 0$, it follows that there exist a positive integer $N$ and positive constants $\beta < 1$ and $D$, with $|E_n \epsilon_{n-k-1}| \leq \beta$ when $n \geq N$, for which (24) gives

$$|\sigma_n| \leq D + \beta\,|\sigma_{n-1}| \quad \text{for all } n \geq N. \tag{26}$$

Using (26), it is easy to show that

$$|\sigma_{N+s}| \leq D\,\frac{1 - \beta^s}{1 - \beta} + \beta^s |\sigma_N|, \quad s = 1, 2, \ldots,$$

which, by the fact that $\beta < 1$, implies that $\{\sigma_n\}$ is a bounded sequence. Making use of this fact, we have $\lim_{n\to\infty} E_n \sigma_{n-1} \epsilon_{n-k-1} = 0$. Substituting this in (24), and invoking (25), we next obtain $\lim_{n\to\infty} \sigma_n = (-1)^{k+1} \lim_{n\to\infty} D_n = L$, which is precisely (12).

That the order of the method is $s_k$, as defined in the statement of the theorem, follows from [6, Chapter 3]. A weaker version can be proved by letting $\sigma_n = L$ for all $n$ and showing that $|\epsilon_{n+1}| = Q |\epsilon_n|^{s_k}$ is possible for $s_k$ a solution to the equation $s^{k+1} = \sum_{i=0}^{k} s^i$ and $Q = |L|^{(s_k - 1)/k}$. The proof of this is easy and is left to the reader. This completes our proof.

4 A Numerical Example

We apply the method described in Sections 2 and 3 to the solution of the equation $f(x) = 0$, where $f(x) = x^3 - 8$, whose solution is $\alpha = 2$. We take $k = 2$ in our method. We also choose $x_0 = 5$ and $x_1 = 4$, and compute $x_2$ via one step of the secant method, namely,

$$x_2 = x_1 - \frac{f(x_1)}{f[x_0, x_1]}. \tag{27}$$

Following that, we compute $x_3, x_4, \ldots$, via

$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, x_{n-1}] + f[x_n, x_{n-1}, x_{n-2}](x_n - x_{n-1})}, \quad n = 2, 3, \ldots. \tag{28}$$

Our computations were done in quadruple-precision arithmetic (approximately 35-decimal-digit accuracy), and they are given in Table 2. Note that, in order to verify the theoretical results concerning iterative methods of order greater than unity, we need to use computer arithmetic of high precision (preferably of variable precision, if available), because the number of correct significant decimal digits increases dramatically from one iteration to the next as we approach the solution. From Theorem 3.1,

$$\lim_{n\to\infty} \frac{\epsilon_{n+1}}{\epsilon_n \epsilon_{n-1} \epsilon_{n-2}} = \frac{(-1)^3}{3!}\,\frac{f^{(3)}(2)}{f'(2)} = -\frac{1}{12} = -0.08333\ldots$$

and

$$\lim_{n\to\infty} \frac{\log|\epsilon_{n+1}/\epsilon_n|}{\log|\epsilon_n/\epsilon_{n-1}|} = s_2 = 1.83928\ldots,$$

and these seem to be confirmed in Table 2. Also, $x_9$ should have a little under 50 correct significant figures, even though we do not see this in Table 2, due to the fact that the arithmetic we have used to generate Table 2 can provide an accuracy of at most 35 digits approximately.
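The computations of Table 2 can be reproduced, for instance, with the mpmath library at 35 significant digits standing in for the paper's quadruple-precision arithmetic. The sketch below codes (27) and (28) directly; for clarity it re-evaluates $f$ at stored points rather than maintaining the divided-difference diagonal of Section 2.

```python
# A sketch reproducing the Section 4 example: k = 2, iterations (27)-(28),
# with mpmath at 35 significant digits as a stand-in for quadruple
# precision. Re-evaluating f at stored points is a simplification.
from mpmath import mp, mpf

mp.dps = 35
f = lambda x: x ** 3 - 8

x = [mpf(5), mpf(4)]
x.append(x[1] - f(x[1]) / ((f(x[1]) - f(x[0])) / (x[1] - x[0])))   # (27)

for n in range(2, 9):
    f01 = (f(x[n]) - f(x[n - 1])) / (x[n] - x[n - 1])        # f[x_n, x_{n-1}]
    f12 = (f(x[n - 1]) - f(x[n - 2])) / (x[n - 1] - x[n - 2])
    f012 = (f01 - f12) / (x[n] - x[n - 2])                   # f[x_n, x_{n-1}, x_{n-2}]
    x.append(x[n] - f(x[n]) / (f01 + f012 * (x[n] - x[n - 1])))    # (28)

for n, xn in enumerate(x):
    print(n, mp.nstr(xn, 36), mp.nstr(abs(xn - 2), 4))       # x_n and |eps_n|
```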

 n   x_n                                       |ε_n|          L_n      Q_n
 0   5.00000000000000000000000000000000000     3.000×10^0
 1   4.00000000000000000000000000000000000     2.000×10^0              1.515
 2   3.08196721311475409836065573770491792     1.082×10^0     0.0441   2.164
 3   2.28621882971781130732266803773062580     2.862×10^-1    0.1670   2.497
 4   2.01034420943787831264152973172014271     1.034×10^-2    0.6370   1.182
 5   1.99979593345266992578358353656798415     2.041×10^-4    0.1196   2.024
 6   2.00000007223139333059960671366229837     7.223×10^-8    0.1005   1.934
 7   2.00000000000001531923884491258853168     1.532×10^-14   0.0838   1.784
 8   2.00000000000000000000000001893448134     1.893×10^-26
 9   2.00000000000000000000000000000000000

Table 2: Results obtained by applying the generalized secant method with $k = 2$, as given in (27) and (28), to the equation $x^3 - 8 = 0$. Here, $L_n = |\epsilon_{n+1}/(\epsilon_n \epsilon_{n-1} \epsilon_{n-2})|$ and $Q_n = \log|\epsilon_{n+1}/\epsilon_n| \,/\, \log|\epsilon_n/\epsilon_{n-1}|$.

References

[1] K. E. Atkinson, An Introduction to Numerical Analysis, second edition, Wiley, New York, 1989.

[2] P. Henrici, Elements of Numerical Analysis, Wiley, New York, 1964.

[3] A. Ralston and P. Rabinowitz, A First Course in Numerical Analysis, second edition, McGraw-Hill, New York, 1978.

[4] A. Sidi, Unified treatment of regula falsi, Newton-Raphson, secant, and Steffensen methods for nonlinear equations, J. Online Math. Appl., 6 (2006). Available from http://mathdl.maa.org/images/upload_library/4/vol6/sidi/sidi.pdf.

[5] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, third edition, Springer-Verlag, New York, 2002.

[6] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, 1964.