Generalization Of The Secant Method For Nonlinear Equations
Applied Mathematics E-Notes, 8 (2008), ISSN. Available free at mirror sites of amen/

Avram Sidi

Received 27 February 2007

Abstract

The secant method is a very effective numerical procedure used for solving nonlinear equations of the form f(x) = 0. It is derived via a linear interpolation procedure and employs only values of f(x) at the approximations to the root of f(x) = 0; hence it computes f(x) only once per iteration. In this note, we generalize it by replacing the relevant linear interpolant by a suitable (k+1)-point polynomial of interpolation, where k is an integer at least 2. Just as the secant method, this generalization too enjoys the property that it computes f(x) only once per iteration. We provide its error in closed form and analyze its order of convergence. We show that this order of convergence is greater than that of the secant method, and that it increases as k increases. We also confirm the theory via an illustrative example.

1 Introduction

Let α be the solution to the equation

    f(x) = 0.    (1)

An effective iterative method used for solving (1) that makes direct use of f(x) [but of no derivatives of f(x)] is the secant method, which is discussed in many books on numerical analysis. See, for example, Atkinson [1], Henrici [2], Ralston and Rabinowitz [3], and Stoer and Bulirsch [5]. See also the recent note [4] by the author, in which the treatments of the secant method and of the Newton-Raphson, regula falsi, and Steffensen methods are presented in a unified manner. The secant method is derived by a linear interpolation procedure as follows: Starting with two initial approximations x_0 and x_1 to the solution α of (1), we compute a sequence of approximations {x_n}_{n=0}^\infty, such that the approximation x_{n+1} is determined as the point of intersection (in the x-y plane) of the straight line through the points (x_n, f(x_n)) and (x_{n-1}, f(x_{n-1})) with the x-axis.
Since the equation of this straight line is

    y = f(x_n) + \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}} (x - x_n),    (2)

Mathematics Subject Classifications: 65H05, 65H99.
Computer Science Department, Technion - Israel Institute of Technology, Haifa 32000, Israel
x_{n+1} is given as

    x_{n+1} = x_n - \frac{f(x_n)}{\dfrac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}}.    (3)

In terms of divided differences, (3) can be written in the form

    x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, x_{n-1}]}.    (4)

We recall that, provided f \in C^2(I), where I is an open interval containing α, and f'(α) \neq 0, and that x_0 and x_1 are sufficiently close to α, the sequence {x_n}_{n=0}^\infty converges to α of order at least (1 + \sqrt{5})/2 \approx 1.618.

Another way of obtaining the secant method, of interest to us in the present work, is via a variation of the Newton-Raphson method. Recall that in the Newton-Raphson method, we start with an initial approximation x_0 and generate a sequence of approximations {x_n}_{n=0}^\infty to α through

    x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, \ldots.    (5)

We also recall that, provided f \in C^2(I), where I is an open interval containing α, and f'(α) \neq 0, and that x_0 is sufficiently close to α, the sequence {x_n}_{n=0}^\infty converges to α of order at least 2. As such, the Newton-Raphson method is extremely effective. To avoid computing f'(x) [note that f'(x) may not always be available or may be costly to compute], and to preserve the excellent convergence properties of the Newton-Raphson method, we replace f'(x_n) in (5) by the approximation f[x_n, x_{n-1}] = [f(x_n) - f(x_{n-1})]/(x_n - x_{n-1}). This results in (4), that is, in the secant method. The justification for this approach is as follows: When convergence takes place, that is, when \lim_{n\to\infty} x_n = α, the difference x_n - x_{n-1} tends to zero, and this implies that, as n increases, the accuracy of f[x_n, x_{n-1}] as an approximation to f'(x_n) increases as well. Note that the Newton-Raphson iteration requires two function evaluations per iteration, one of f(x) and another of f'(x). The secant method, on the other hand, requires only one function evaluation per iteration, namely, that of f(x).
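The secant iteration (4) can be sketched in a few lines of Python; the function name, tolerance, and iteration cap below are our own choices, not part of the paper.

```python
# Secant method, iteration (4): x_{n+1} = x_n - f(x_n)/f[x_n, x_{n-1}].
# One new evaluation of f per step, as noted in the text.
def secant(f, x0, x1, tol=1e-12, maxit=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        slope = (f1 - f0) / (x1 - x0)   # divided difference f[x_n, x_{n-1}]
        x0, f0 = x1, f1
        x1 = x1 - f1 / slope            # the secant step (4)
        f1 = f(x1)                      # the single new function evaluation
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x**3 - 8.0, 5.0, 4.0)   # the example of Section 4
```

Note that only the most recent pair (x_n, f(x_n)), (x_{n-1}, f(x_{n-1})) is kept, mirroring the one-evaluation-per-iteration property emphasized above.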
In Section 2 of this note, we consider in detail a generalization of the second of the two approaches described above, using polynomial interpolation of degree k with k > 1. The (k+1)-point iterative method that results from this generalization turns out to be very effective. It is of order higher than that of the secant method and requires only one function evaluation per iteration. In Section 3, we analyze this method and determine its order as well. In Section 4, we confirm our theory via a numerical example.

2 Generalization of Secant Method

We start by discussing a known generalization of the secant method (see, for example, Traub [6, Chapters 4, 6, and 10]). In this generalization, we approximate f(x) by the polynomial of interpolation p_{n,k}(x), where p_{n,k}(x_i) = f(x_i), i = n, n-1, \ldots, n-k, assuming that x_0, x_1, \ldots, x_n have all been computed. Following that, we determine
x_{n+1} as a zero of p_{n,k}(x), provided a real solution to p_{n,k}(x) = 0 exists. Thus, x_{n+1} is the solution to a polynomial equation of degree k. For k = 1, what we have is nothing but the secant method. For k = 2, x_{n+1} is one of the solutions to a quadratic equation, and the resulting method is known as the method of Müller. Clearly, for k \geq 3, the determination of x_{n+1} is not easy. This difficulty prompts us to consider the second approach to the secant method we discussed in Section 1, in which we replaced f'(x_n) by the slope of the straight line through the points (x_n, f(x_n)) and (x_{n-1}, f(x_{n-1})), that is, by the derivative (at x_n) of the (linear) interpolant to f(x) at x_n and x_{n-1}. We generalize this approach by replacing f'(x_n) by p'_{n,k}(x_n), the derivative at x_n of the polynomial p_{n,k}(x) interpolating f(x) at the points x_{n-i}, i = 0, 1, \ldots, k, mentioned in the preceding paragraph, with k \geq 2. Because p_{n,k}(x) is a better approximation to f(x) in the neighborhood of x_n, p'_{n,k}(x_n) is a better approximation to f'(x_n) when k \geq 2 than the f[x_n, x_{n-1}] used in the secant method. In addition, just as the secant method, the new method computes the function f(x) only once per iteration step, the computation being that of f(x_n). Thus, the new method is described by the following (k+1)-point iteration:

    x_{n+1} = x_n - \frac{f(x_n)}{p'_{n,k}(x_n)}, \quad n = k, k+1, \ldots,    (6)

with x_0, x_1, \ldots, x_k as initial approximations to be provided by the user. Of course, with k fixed, we can start with x_0 and x_1, compute x_2 via the method we have described with k = 1 (namely, via the secant method), compute x_3 via the method we have described with k = 2, and so on, until we have completed the list x_0, x_1, \ldots, x_k.

We now turn to the computational aspects of this method. What we need is a fast method for computing p'_{n,k}(x_n). For this, we write p_{n,k}(x) in Newtonian form as follows:

    p_{n,k}(x) = f(x_n) + \sum_{i=1}^{k} f[x_n, x_{n-1}, \ldots, x_{n-i}] \prod_{j=0}^{i-1} (x - x_{n-j}).    (7)

Here the f[x_i, x_{i+1}, \ldots, x_m] are divided differences of f(x), and we recall that they can be defined recursively via

    f[x_i] = f(x_i); \quad f[x_i, x_j] = \frac{f[x_i] - f[x_j]}{x_i - x_j}, \quad x_i \neq x_j,    (8)

and, for m > i + 1, via

    f[x_i, x_{i+1}, \ldots, x_m] = \frac{f[x_i, x_{i+1}, \ldots, x_{m-1}] - f[x_{i+1}, x_{i+2}, \ldots, x_m]}{x_i - x_m}, \quad x_i \neq x_m.    (9)

We also recall that f[x_i, x_{i+1}, \ldots, x_m] is a symmetric function of its arguments, that is, it has the same value under any permutation of {x_i, x_{i+1}, \ldots, x_m}. Thus, in (7), f[x_n, x_{n-1}, \ldots, x_{n-i}] = f[x_{n-i}, x_{n-i+1}, \ldots, x_n]. In addition, when f \in C^m(I), where I is an open interval containing the points z_0, z_1, \ldots, z_m, whether these are distinct or not, there holds

    f[z_0, z_1, \ldots, z_m] = \frac{f^{(m)}(\xi)}{m!} \quad \text{for some } \xi \in (\min\{z_i\}, \max\{z_i\}).
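The recursions (8) and (9) translate directly into a routine that builds the full triangular table of divided differences. The sketch below (names ours) checks them on f(x) = x^2, for which the order-2 differences must equal f''/2! = 1 and the order-3 differences must vanish:

```python
# Triangular table of divided differences built with recursions (8)-(9).
# table[j][i] holds f[x_i, x_{i+1}, ..., x_{i+j}]; the names are ours.
def divided_differences(xs, fs):
    table = [list(fs)]                       # order 0: f[x_i] = f(x_i)
    for j in range(1, len(xs)):
        prev = table[-1]
        table.append([(prev[i] - prev[i + 1]) / (xs[i] - xs[i + j])
                      for i in range(len(xs) - j)])
    return table

xs = [0.0, 1.0, 2.0, 3.0]
tbl = divided_differences(xs, [x**2 for x in xs])   # f(x) = x^2
# Order-1 entries are x_i + x_{i+1}; order-2 entries equal f''(xi)/2! = 1;
# order-3 entries vanish, in agreement with f[z_0,...,z_m] = f^(m)(xi)/m!.
```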
    x_0   f_0
                 f_01
    x_1   f_1          f_012
                 f_12           f_0123
    x_2   f_2          f_123
                 f_23           f_1234
    x_3   f_3          f_234
                 f_34           f_2345
    x_4   f_4          f_345
                 f_45           f_3456
    x_5   f_5          f_456
                 f_56           f_4567
    x_6   f_6          f_567
                 f_67
    x_7   f_7

Table 1: Table of divided differences over {x_0, x_1, \ldots, x_7} for use in computing x_8 via p_{7,3}(x) in (6). Note that f_{i,i+1,\ldots,m} stands for f[x_i, x_{i+1}, \ldots, x_m] throughout.

Going back to (7), we note that p_{n,k}(x) there is computed by ordering the x_i as x_n, x_{n-1}, \ldots, x_{n-k}. This ordering enables us to compute p'_{n,k}(x_n) easily. Indeed, differentiating p_{n,k}(x) in (7) and letting x = x_n, we obtain

    p'_{n,k}(x_n) = f[x_n, x_{n-1}] + \sum_{i=2}^{k} f[x_n, x_{n-1}, \ldots, x_{n-i}] \prod_{j=1}^{i-1} (x_n - x_{n-j}).    (10)

In addition, note that the relevant divided difference table need not be computed anew at each iteration; all that is needed is to add a new diagonal (running from south-west to north-east) at the bottom of the existing table. To make this point clear, let us look at the following example: Suppose k = 3 and we have computed x_i, i = 0, 1, \ldots, 7. To compute x_8, we use the divided difference table in Table 1. Letting f_{i,i+1,\ldots,m} stand for f[x_i, x_{i+1}, \ldots, x_m], we have

    x_8 = x_7 - \frac{f(x_7)}{p'_{7,3}(x_7)} = x_7 - \frac{f_7}{f_{67} + f_{567}(x_7 - x_6) + f_{4567}(x_7 - x_6)(x_7 - x_5)}.

To compute x_9, we will need the divided differences f_8, f_{78}, f_{678}, f_{5678}. Computing first f_8 = f(x_8) with the newly computed x_8, the rest of these divided differences can be computed from the bottom diagonal of Table 1 via the recursion relations

    f_{78} = \frac{f_7 - f_8}{x_7 - x_8}, \quad f_{678} = \frac{f_{67} - f_{78}}{x_6 - x_8}, \quad f_{5678} = \frac{f_{567} - f_{678}}{x_5 - x_8},

in this order, and appended to the bottom of Table 1. Actually, we can do even better: Since we need only the bottom diagonal of Table 1 to compute x_8, we need to save only this diagonal,
namely, only the entries f_7, f_{67}, f_{567}, f_{4567}. Once we have computed x_8 and f_8 = f(x_8), we can overwrite f_7, f_{67}, f_{567}, f_{4567} with f_8, f_{78}, f_{678}, f_{5678}. Thus, in general, to be able to compute x_{n+1} via (6) after x_n has been determined, we need to store only the entries f_n, f_{n-1,n}, \ldots, f_{n-k,n-k+1,\ldots,n-1,n}, along with x_n, x_{n-1}, \ldots, x_{n-k}.

3 Convergence Analysis

We now turn to the analysis of the sequence {x_n}_{n=0}^\infty that is generated via (6). Since we already know everything concerning the case k = 1, namely, the secant method, we treat the case k \geq 2. The following theorem gives the main convergence result for the generalized secant method.

THEOREM 3.1. Let α be the solution to the equation f(x) = 0. Assume f \in C^{k+1}(I), where I is an open interval containing α, and assume also that f'(α) \neq 0, in addition to f(α) = 0. Let x_0, x_1, \ldots, x_k be distinct initial approximations to α, and generate x_n, n = k+1, k+2, \ldots, via

    x_{n+1} = x_n - \frac{f(x_n)}{p'_{n,k}(x_n)}, \quad n = k, k+1, \ldots,    (11)

where p_{n,k}(x) is the polynomial of interpolation to f(x) at the points x_n, x_{n-1}, \ldots, x_{n-k}. Then, provided x_0, x_1, \ldots, x_k are in I and sufficiently close to α, the sequence {x_n}_{n=0}^\infty converges to α, and

    \lim_{n\to\infty} \frac{\epsilon_{n+1}}{\prod_{i=0}^{k} \epsilon_{n-i}} = (-1)^{k+1} \frac{f^{(k+1)}(\alpha)}{(k+1)!\, f'(\alpha)} \equiv L; \quad \epsilon_n = x_n - \alpha \ \text{for all } n.    (12)

The order of convergence is s_k, 1 < s_k < 2, where s_k is the only positive root of the equation s^{k+1} = \sum_{i=0}^{k} s^i and satisfies

    2 - 2^{-(k+1)} e < s_k < 2 - 2^{-(k+1)} \ \text{for } k \geq 2; \quad s_k < s_{k+1}; \quad \lim_{k\to\infty} s_k = 2,    (13)

where e is the base of natural logarithms, and

    \lim_{n\to\infty} \frac{|\epsilon_{n+1}|}{|\epsilon_n|^{s_k}} = |L|^{(s_k - 1)/k}.    (14)

REMARK. Note that, in Theorem 3.1, s_2 = 1.839, s_3 = 1.928, s_4 = 1.966, etc., rounded to three significant figures. For the secant method, we have s_1 = 1.618, rounded to three significant figures.

PROOF. Below, we shall use the short-hand notation int(a_1, \ldots, a_m) = (\min\{a_1, \ldots, a_m\}, \max\{a_1, \ldots, a_m\}). We start by deriving a closed-form expression for the error in x_{n+1}.
Subtracting α from both sides of (11), and noting that

    f(x_n) = f(x_n) - f(\alpha) = f[x_n, \alpha](x_n - \alpha),
we have

    x_{n+1} - \alpha = \left(1 - \frac{f[x_n, \alpha]}{p'_{n,k}(x_n)}\right)(x_n - \alpha) = \frac{p'_{n,k}(x_n) - f[x_n, \alpha]}{p'_{n,k}(x_n)}\, (x_n - \alpha).    (15)

We now note that

    p'_{n,k}(x_n) - f[x_n, \alpha] = \{p'_{n,k}(x_n) - f'(x_n)\} + \{f'(x_n) - f[x_n, \alpha]\},

which, by

    f'(x_n) - f[x_n, \alpha] = f[x_n, x_n] - f[x_n, \alpha] = f[x_n, x_n, \alpha](x_n - \alpha) = \frac{f^{(2)}(\eta_n)}{2!}\, (x_n - \alpha) \quad \text{for some } \eta_n \in \mathrm{int}(x_n, \alpha),

and by

    f'(x_n) - p'_{n,k}(x_n) = f[x_n, x_n, x_{n-1}, \ldots, x_{n-k}] \prod_{i=1}^{k} (x_n - x_{n-i}) = \frac{f^{(k+1)}(\xi_n)}{(k+1)!} \prod_{i=1}^{k} (x_n - x_{n-i}) \quad \text{for some } \xi_n \in \mathrm{int}(x_n, x_{n-1}, \ldots, x_{n-k}),    (16)

becomes

    p'_{n,k}(x_n) - f[x_n, \alpha] = -\frac{f^{(k+1)}(\xi_n)}{(k+1)!} \prod_{i=1}^{k} (\epsilon_n - \epsilon_{n-i}) + \frac{f^{(2)}(\eta_n)}{2!}\, \epsilon_n.    (17)

Substituting (16) and (17) in (15), and letting

    \hat{D}_n = \frac{f^{(k+1)}(\xi_n)}{(k+1)!} \quad \text{and} \quad \hat{E}_n = \frac{f^{(2)}(\eta_n)}{2!},    (18)

we finally obtain

    \epsilon_{n+1} = C_n \epsilon_n; \quad C_n \equiv \frac{p'_{n,k}(x_n) - f[x_n, \alpha]}{p'_{n,k}(x_n)} = \frac{-\hat{D}_n \prod_{i=1}^{k} (\epsilon_n - \epsilon_{n-i}) + \hat{E}_n \epsilon_n}{f'(x_n) - \hat{D}_n \prod_{i=1}^{k} (\epsilon_n - \epsilon_{n-i})}.    (19)

We now prove that convergence takes place. Without loss of generality, we can assume that I = (\alpha - T, \alpha + T) for some T > 0, and that m_1 = \min_{x\in I} |f'(x)| > 0. This is possible since \alpha \in I and f'(\alpha) \neq 0. Let M_s = \max_{x\in I} |f^{(s)}(x)|/s!, s = 1, 2, \ldots, and choose the interval J = (\alpha - t/2, \alpha + t/2) sufficiently small to ensure that m_1 > 2M_{k+1} t^k + M_2 t/2. It can now be shown that, provided x_{n-i}, i = 0, 1, \ldots, k, are all in J, there holds

    |C_n| \leq \frac{M_{k+1} \prod_{i=1}^{k} |\epsilon_n - \epsilon_{n-i}| + M_2 |\epsilon_n|}{m_1 - M_{k+1} \prod_{i=1}^{k} |\epsilon_n - \epsilon_{n-i}|} \leq \frac{M_{k+1} \prod_{i=1}^{k} (|\epsilon_n| + |\epsilon_{n-i}|) + M_2 |\epsilon_n|}{m_1 - M_{k+1} \prod_{i=1}^{k} (|\epsilon_n| + |\epsilon_{n-i}|)} \leq C,
where

    C \equiv \frac{M_{k+1} t^k + M_2 t/2}{m_1 - M_{k+1} t^k} < 1.

Consequently, by (19), |\epsilon_{n+1}| < |\epsilon_n|, which implies that x_{n+1} \in J, just like the x_{n-i}, i = 0, 1, \ldots, k. Therefore, if x_0, x_1, \ldots, x_k are chosen in J, then |C_n| \leq C < 1 and hence x_{n+1} \in J, for all n \geq k. Consequently, by (19), \lim_{n\to\infty} x_n = \alpha.

As for (12), we proceed as follows: By the fact that \lim_{n\to\infty} x_n = \alpha, we first note that \lim_{n\to\infty} p'_{n,k}(x_n) = f'(\alpha) = \lim_{n\to\infty} f[x_n, \alpha], and thus \lim_{n\to\infty} C_n = 0. This means that \lim_{n\to\infty} (\epsilon_{n+1}/\epsilon_n) = 0 and, equivalently, that {x_n}_{n=0}^\infty converges of order greater than 1. As a result, \lim_{n\to\infty} (\epsilon_n/\epsilon_{n-i}) = 0 for all i \geq 1, and \epsilon_n/\epsilon_{n-i} = o(\epsilon_n/\epsilon_{n-j}) as n \to \infty, for j < i. Consequently, expanding the product \prod_{i=1}^{k} (\epsilon_n - \epsilon_{n-i}) in (19), we have

    \prod_{i=1}^{k} (\epsilon_n - \epsilon_{n-i}) = \prod_{i=1}^{k} \left[ -\epsilon_{n-i} \left(1 - \epsilon_n/\epsilon_{n-i}\right) \right] = (-1)^k \left( \prod_{i=1}^{k} \epsilon_{n-i} \right) \left[ 1 + O(\epsilon_n/\epsilon_{n-1}) \right] \quad \text{as } n \to \infty.    (20)

Substituting (20) in (19), and defining

    D_n = \frac{\hat{D}_n}{p'_{n,k}(x_n)}, \quad E_n = \frac{\hat{E}_n}{p'_{n,k}(x_n)},    (21)

we obtain

    \epsilon_{n+1} = (-1)^{k+1} D_n \left( \prod_{i=1}^{k} \epsilon_{n-i} \right) \left[ 1 + O(\epsilon_n/\epsilon_{n-1}) \right] \epsilon_n + E_n \epsilon_n^2 \quad \text{as } n \to \infty.    (22)

Dividing both sides of (22) by \prod_{i=0}^{k} \epsilon_{n-i}, and defining

    \sigma_n = \frac{\epsilon_{n+1}}{\prod_{i=0}^{k} \epsilon_{n-i}},    (23)

we have

    \sigma_n = (-1)^{k+1} D_n \left[ 1 + O(\epsilon_n/\epsilon_{n-1}) \right] + E_n \sigma_{n-1} \epsilon_{n-k-1} \quad \text{as } n \to \infty.    (24)

Now,

    \lim_{n\to\infty} D_n = \frac{1}{(k+1)!} \frac{f^{(k+1)}(\alpha)}{f'(\alpha)}, \quad \lim_{n\to\infty} E_n = \frac{f^{(2)}(\alpha)}{2 f'(\alpha)}.    (25)

Since \lim_{n\to\infty} D_n and \lim_{n\to\infty} E_n are finite, \lim_{n\to\infty} \epsilon_n/\epsilon_{n-1} = 0 and \lim_{n\to\infty} \epsilon_{n-k-1} = 0, it follows that there exist a positive integer N and positive constants \beta < 1 and D, with |E_n \epsilon_{n-k-1}| \leq \beta when n \geq N, for which (24) gives

    |\sigma_n| \leq D + \beta |\sigma_{n-1}| \quad \text{for all } n \geq N.    (26)
Using (26), it is easy to show that

    |\sigma_{N+s}| \leq D\, \frac{1 - \beta^s}{1 - \beta} + \beta^s |\sigma_N|, \quad s = 1, 2, \ldots,

which, by the fact that \beta < 1, implies that {\sigma_n} is a bounded sequence. Making use of this fact, we have \lim_{n\to\infty} E_n \sigma_{n-1} \epsilon_{n-k-1} = 0. Substituting this in (24), and invoking (25), we next obtain \lim_{n\to\infty} \sigma_n = (-1)^{k+1} \lim_{n\to\infty} D_n = L, which is precisely (12). That the order of the method is s_k, as defined in the statement of the theorem, follows from [6, Chapter 3]. A weaker version can be proved by letting \sigma_n = L for all n and showing that |\epsilon_{n+1}| = Q |\epsilon_n|^{s_k} is possible for s_k a solution to the equation s^{k+1} = \sum_{i=0}^{k} s^i and Q = |L|^{(s_k-1)/k}. The proof of this is easy and is left to the reader. This completes our proof.

4 A Numerical Example

We apply the method described in Sections 2 and 3 to the solution of the equation f(x) = 0, where f(x) = x^3 - 8, whose solution is \alpha = 2. We take k = 2 in our method. We also choose x_0 = 5 and x_1 = 4, and compute x_2 via one step of the secant method, namely,

    x_2 = x_1 - \frac{f(x_1)}{f[x_0, x_1]}.    (27)

Following that, we compute x_3, x_4, \ldots, via

    x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, x_{n-1}] + f[x_n, x_{n-1}, x_{n-2}](x_n - x_{n-1})}, \quad n = 2, 3, \ldots.    (28)

Our computations were done in quadruple-precision arithmetic (approximately 35-decimal-digit accuracy), and they are given in Table 2. Note that in order to verify the theoretical results concerning iterative methods of order greater than unity, we need to use computer arithmetic of high precision (preferably of variable precision, if available), because the number of correct significant decimal digits increases dramatically from one iteration to the next as we approach the solution. From Theorem 3.1,

    \lim_{n\to\infty} \frac{\epsilon_{n+1}}{\epsilon_n \epsilon_{n-1} \epsilon_{n-2}} = (-1)^3\, \frac{f^{(3)}(2)}{3!\, f'(2)} = -\frac{1}{12}

and

    \lim_{n\to\infty} \frac{\log |\epsilon_{n+1}/\epsilon_n|}{\log |\epsilon_n/\epsilon_{n-1}|} = s_2 = 1.839\ldots,

and these seem to be confirmed in Table 2.
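The experiment of (27) and (28) can be repeated in ordinary double precision (variable names ours); the run is necessarily shorter than the paper's quadruple-precision one, but the first several iterates behave as the theory predicts.

```python
# The Section 4 experiment: iteration (28), started via (27), for
# f(x) = x^3 - 8 with x_0 = 5, x_1 = 4.  Ordinary double precision only,
# so fewer usable iterations than the paper's quadruple-precision run.
def f(x):
    return x**3 - 8.0

xs = [5.0, 4.0]
xs.append(xs[1] - f(xs[1]) / ((f(xs[1]) - f(xs[0])) / (xs[1] - xs[0])))  # (27)

for n in range(2, 8):                        # generate x_3, ..., x_8 via (28)
    a, b, c = xs[-3], xs[-2], xs[-1]         # x_{n-2}, x_{n-1}, x_n
    fbc = (f(c) - f(b)) / (c - b)            # f[x_n, x_{n-1}]
    fab = (f(b) - f(a)) / (b - a)            # f[x_{n-1}, x_{n-2}]
    fabc = (fbc - fab) / (c - a)             # f[x_n, x_{n-1}, x_{n-2}]
    xs.append(c - f(c) / (fbc + fabc * (c - b)))

eps = [x - 2.0 for x in xs]                  # errors eps_n = x_n - alpha
```

With these starting values the error shrinks monotonically, and the later ratios eps_{n+1}/(eps_n eps_{n-1} eps_{n-2}) drift toward -1/12 until roundoff takes over.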
Also, x_9 should have a little under 50 correct significant figures, even though we do not see this in Table 2, due to the fact that the arithmetic used to generate Table 2 can provide an accuracy of at most approximately 35 digits.
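A general-k sketch of iteration (6), storing only the bottom diagonal of the divided-difference table as described at the end of Section 2, might look as follows (function name, tolerance, and stopping rule are our own choices, not the author's code):

```python
# Iteration (6) for general k, keeping only the bottom diagonal
# diag[j] = f[x_{n-j}, ..., x_n] of the divided-difference table and
# updating it as in the f_78, f_678, f_5678 recursions of Section 2.
def generalized_secant(f, xs0, tol=1e-13, maxit=60):
    k = len(xs0) - 1          # xs0: k+1 distinct initial approximations
    xs = list(xs0)
    # Build the initial diagonal from x_0, ..., x_k.
    col = [f(x) for x in xs]
    diag = [col[-1]]
    for j in range(1, k + 1):
        col = [(col[i] - col[i + 1]) / (xs[i] - xs[i + j])
               for i in range(len(col) - 1)]
        diag.append(col[-1])
    for _ in range(maxit):
        xn = xs[-1]
        deriv, prod = diag[1], 1.0           # p'_{n,k}(x_n) via (10)
        for i in range(2, k + 1):
            prod *= xn - xs[-i]              # (x_n - x_{n-1}) ... (x_n - x_{n-i+1})
            deriv += diag[i] * prod
        x_new = xn - diag[0] / deriv         # the step (6); diag[0] = f(x_n)
        if abs(x_new - xn) < tol:
            return x_new
        # New bottom diagonal, one entry per order; then shift the window.
        new_diag = [f(x_new)]                # the single new f-evaluation
        for j in range(1, k + 1):
            new_diag.append((diag[j - 1] - new_diag[-1]) / (xs[-j] - x_new))
        xs = xs[1:] + [x_new]
        diag = new_diag
    return xs[-1]

root = generalized_secant(lambda x: x**3 - 8.0, [5.0, 4.5, 4.0, 3.5])  # k = 3
```

Only k+1 points and k+1 divided differences are stored between iterations, matching the storage scheme described at the end of Section 2.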
Table 2: Results obtained by applying the generalized secant method with k = 2, as shown in (27) and (28), to the equation x^3 - 8 = 0. Here, L_n = \epsilon_{n+1}/(\epsilon_n \epsilon_{n-1} \epsilon_{n-2}) and Q_n = \log|\epsilon_{n+1}/\epsilon_n| / \log|\epsilon_n/\epsilon_{n-1}|. (The columns of the table are n, x_n, \epsilon_n, L_n, and Q_n.)

References

[1] K. E. Atkinson, An Introduction to Numerical Analysis, second edition, Wiley, New York.

[2] P. Henrici, Elements of Numerical Analysis, Wiley, New York.

[3] A. Ralston and P. Rabinowitz, A First Course in Numerical Analysis, second edition, McGraw-Hill, New York.

[4] A. Sidi, Unified treatment of regula falsi, Newton-Raphson, secant, and Steffensen methods for nonlinear equations, J. Online Math. Appl., 6. (Available from the URL library/4/vol6/sidi/sidi.pdf)

[5] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, third edition, Springer-Verlag, New York.

[6] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice Hall, Englewood Cliffs, N.J., 1964.
More informationNumerical Study of Some Iterative Methods for Solving Nonlinear Equations
International Journal of Engineering Science Invention ISSN (Online): 2319 6734, ISSN (Print): 2319 6726 Volume 5 Issue 2 February 2016 PP.0110 Numerical Study of Some Iterative Methods for Solving Nonlinear
More informationNUMERICAL AND STATISTICAL COMPUTING (MCA-202-CR)
NUMERICAL AND STATISTICAL COMPUTING (MCA-202-CR) Autumn Session UNIT 1 Numerical analysis is the study of algorithms that uses, creates and implements algorithms for obtaining numerical solutions to problems
More informationIntroduction to Numerical Analysis
Introduction to Numerical Analysis S. Baskar and S. Sivaji Ganesh Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai 400 076. Introduction to Numerical Analysis Lecture Notes
More informationNumerical Methods. King Saud University
Numerical Methods King Saud University Aims In this lecture, we will... Introduce the topic of numerical methods Consider the Error analysis and sources of errors Introduction A numerical method which
More informationUsing Lagrange Interpolation for Solving Nonlinear Algebraic Equations
International Journal of Theoretical and Applied Mathematics 2016; 2(2): 165-169 http://www.sciencepublishinggroup.com/j/ijtam doi: 10.11648/j.ijtam.20160202.31 ISSN: 2575-5072 (Print); ISSN: 2575-5080
More informationA Derivative Free Hybrid Equation Solver by Alloying of the Conventional Methods
DOI: 1.15415/mjis.213.129 A Derivative Free Hybrid Equation Solver by Alloying of the Conventional Methods Amit Kumar Maheshwari Advanced Materials and Processes Research Institute (CSIR, Bhopal, India
More information15 Nonlinear Equations and Zero-Finders
15 Nonlinear Equations and Zero-Finders This lecture describes several methods for the solution of nonlinear equations. In particular, we will discuss the computation of zeros of nonlinear functions f(x).
More informationFinding simple roots by seventh- and eighth-order derivative-free methods
Finding simple roots by seventh- and eighth-order derivative-free methods F. Soleymani 1,*, S.K. Khattri 2 1 Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan, Iran * Corresponding
More information4.1 Classical Formulas for Equally Spaced Abscissas. 124 Chapter 4. Integration of Functions
24 Chapter 4. Integration of Functions of various orders, with higher order sometimes, but not always, giving higher accuracy. Romberg integration, which is discussed in 4.3, is a general formalism for
More information632 CHAP. 11 EIGENVALUES AND EIGENVECTORS. QR Method
632 CHAP 11 EIGENVALUES AND EIGENVECTORS QR Method Suppose that A is a real symmetric matrix In the preceding section we saw how Householder s method is used to construct a similar tridiagonal matrix The
More informationHigh-order Newton-type iterative methods with memory for solving nonlinear equations
MATHEMATICAL COMMUNICATIONS 9 Math. Commun. 9(4), 9 9 High-order Newton-type iterative methods with memory for solving nonlinear equations Xiaofeng Wang, and Tie Zhang School of Mathematics and Physics,
More informationBasins of Attraction for Optimal Third Order Methods for Multiple Roots
Applied Mathematical Sciences, Vol., 6, no., 58-59 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.988/ams.6.65 Basins of Attraction for Optimal Third Order Methods for Multiple Roots Young Hee Geum Department
More informationAIMS Exercise Set # 1
AIMS Exercise Set #. Determine the form of the single precision floating point arithmetic used in the computers at AIMS. What is the largest number that can be accurately represented? What is the smallest
More information11.3 Eigenvalues and Eigenvectors of a Tridiagonal Matrix
11.3 Eigenvalues and Eigenvectors of a ridiagonal Matrix Evaluation of the Characteristic Polynomial Once our original, real, symmetric matrix has been reduced to tridiagonal form, one possible way to
More informationError formulas for divided difference expansions and numerical differentiation
Error formulas for divided difference expansions and numerical differentiation Michael S. Floater Abstract: We derive an expression for the remainder in divided difference expansions and use it to give
More informationPolynomial Interpolation
Polynomial Interpolation (Com S 477/577 Notes) Yan-Bin Jia Sep 1, 017 1 Interpolation Problem In practice, often we can measure a physical process or quantity (e.g., temperature) at a number of points
More informationSOLVING NONLINEAR EQUATIONS USING A NEW TENTH-AND SEVENTH-ORDER METHODS FREE FROM SECOND DERIVATIVE M.A. Hafiz 1, Salwa M.H.
International Journal of Differential Equations and Applications Volume 12 No. 4 2013, 169-183 ISSN: 1311-2872 url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijdea.v12i4.1344 PA acadpubl.eu SOLVING
More information3.1 Introduction. Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x x 1.5 =0, tan x x =0.
3.1 Introduction Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x 3 +1.5x 1.5 =0, tan x x =0. Practical existence test for roots: by intermediate value theorem, f C[a, b] & f(a)f(b)
More informationAPPROXIMATION OF ROOTS OF EQUATIONS WITH A HAND-HELD CALCULATOR. Jay Villanueva Florida Memorial University Miami, FL
APPROXIMATION OF ROOTS OF EQUATIONS WITH A HAND-HELD CALCULATOR Jay Villanueva Florida Memorial University Miami, FL jvillanu@fmunivedu I Introduction II III IV Classical methods A Bisection B Linear interpolation
More informationNumerical Analysis: Approximation of Functions
Numerical Analysis: Approximation of Functions Mirko Navara http://cmp.felk.cvut.cz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office 104a
More informationNUMERICAL MATHEMATICS & COMPUTING 7th Edition
NUMERICAL MATHEMATICS & COMPUTING 7th Edition Ward Cheney/David Kincaid c UT Austin Engage Learning: Thomson-Brooks/Cole wwwengagecom wwwmautexasedu/cna/nmc6 October 16, 2011 Ward Cheney/David Kincaid
More informationERROR IN LINEAR INTERPOLATION
ERROR IN LINEAR INTERPOLATION Let P 1 (x) be the linear polynomial interpolating f (x) at x 0 and x 1. Assume f (x) is twice continuously differentiable on an interval [a, b] which contains the points
More informationconverges to a root, it may not always be the root you have in mind.
Math 1206 Calculus Sec. 4.9: Newton s Method I. Introduction For linear and quadratic equations there are simple formulas for solving for the roots. For third- and fourth-degree equations there are also
More informationMATHEMATICAL EXPOSITION
MATHEMATICAL EXPOSITION An Improved Newton's Method by John H. Mathews California State University, Fullerton, CA 92634 4(oif John earned his Ph.D. in mathematics from Michigan State University under the
More informationThe Journal of Nonlinear Sciences and Applications
J. Nonlinear Sci. Appl. 2 (2009), no. 3, 195 203 The Journal of Nonlinear Sciences Applications http://www.tjnsa.com ON MULTIPOINT ITERATIVE PROCESSES OF EFFICIENCY INDEX HIGHER THAN NEWTON S METHOD IOANNIS
More informationNewton-Raphson Type Methods
Int. J. Open Problems Compt. Math., Vol. 5, No. 2, June 2012 ISSN 1998-6262; Copyright c ICSRS Publication, 2012 www.i-csrs.org Newton-Raphson Type Methods Mircea I. Cîrnu Department of Mathematics, Faculty
More informationINTERPOLATION. and y i = cos x i, i = 0, 1, 2 This gives us the three points. Now find a quadratic polynomial. p(x) = a 0 + a 1 x + a 2 x 2.
INTERPOLATION Interpolation is a process of finding a formula (often a polynomial) whose graph will pass through a given set of points (x, y). As an example, consider defining and x 0 = 0, x 1 = π/4, x
More informationNumerical Methods. King Saud University
Numerical Methods King Saud University Aims In this lecture, we will... find the approximate solutions of derivative (first- and second-order) and antiderivative (definite integral only). Numerical Differentiation
More informationA Family of Optimal Multipoint Root-Finding Methods Based on the Interpolating Polynomials
Applied Mathematical Sciences, Vol. 8, 2014, no. 35, 1723-1730 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2014.4127 A Family of Optimal Multipoint Root-Finding Methods Based on the Interpolating
More informationFinding the Roots of f(x) = 0. Gerald W. Recktenwald Department of Mechanical Engineering Portland State University
Finding the Roots of f(x) = 0 Gerald W. Recktenwald Department of Mechanical Engineering Portland State University gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:
More informationFinding the Roots of f(x) = 0
Finding the Roots of f(x) = 0 Gerald W. Recktenwald Department of Mechanical Engineering Portland State University gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:
More informationChapter 1. Root Finding Methods. 1.1 Bisection method
Chapter 1 Root Finding Methods We begin by considering numerical solutions to the problem f(x) = 0 (1.1) Although the problem above is simple to state it is not always easy to solve analytically. This
More informationAccelerating Convergence
Accelerating Convergence MATH 375 Numerical Analysis J. Robert Buchanan Department of Mathematics Fall 2013 Motivation We have seen that most fixed-point methods for root finding converge only linearly
More informationInterpolation Theory
Numerical Analysis Massoud Malek Interpolation Theory The concept of interpolation is to select a function P (x) from a given class of functions in such a way that the graph of y P (x) passes through the
More informationx 2 x n r n J(x + t(x x ))(x x )dt. For warming-up we start with methods for solving a single equation of one variable.
Maria Cameron 1. Fixed point methods for solving nonlinear equations We address the problem of solving an equation of the form (1) r(x) = 0, where F (x) : R n R n is a vector-function. Eq. (1) can be written
More informationA Note on Eigenvalues of Perturbed Hermitian Matrices
A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.
More informationMTH603 FAQ + Short Questions Answers.
Absolute Error : Accuracy : The absolute error is used to denote the actual value of a quantity less it s rounded value if x and x* are respectively the rounded and actual values of a quantity, then absolute
More informationDiscrete Orthogonal Polynomials on Equidistant Nodes
International Mathematical Forum, 2, 2007, no. 21, 1007-1020 Discrete Orthogonal Polynomials on Equidistant Nodes Alfredo Eisinberg and Giuseppe Fedele Dip. di Elettronica, Informatica e Sistemistica Università
More informationNumerical Sequences and Series
Numerical Sequences and Series Written by Men-Gen Tsai email: b89902089@ntu.edu.tw. Prove that the convergence of {s n } implies convergence of { s n }. Is the converse true? Solution: Since {s n } is
More informationChebyshev-Halley s Method without Second Derivative of Eight-Order Convergence
Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 12, Number 4 2016, pp. 2987 2997 Research India Publications http://www.ripublication.com/gjpam.htm Chebyshev-Halley s Method without
More informationHermite Interpolation
Jim Lambers MAT 77 Fall Semester 010-11 Lecture Notes These notes correspond to Sections 4 and 5 in the text Hermite Interpolation Suppose that the interpolation points are perturbed so that two neighboring
More informationVirtual University of Pakistan
Virtual University of Pakistan File Version v.0.0 Prepared For: Final Term Note: Use Table Of Content to view the Topics, In PDF(Portable Document Format) format, you can check Bookmarks menu Disclaimer:
More information