Geometrically constructed families of iterative methods

Size: px
Start display at page:

Download "Geometrically constructed families of iterative methods"

Transcription

1 Chapter 4 Geometrically constructed families of iterative methods 4.1 Introduction The aim of this CHAPTER 3 is to derive one-parameter families of Newton s method [7, 11 16], Chebyshev s method [7, 30 32, 59 61], Halley s method [7, 30 32, 60 73], super- Halley method [30 32, 61, 71 73], Chebyshev-Halley type methods [74], C-methods [30], Euler s method [7, 30, 64], osculating circle method [33] and ellipse method [31, 32, 61] for finding simple roots of nonlinear equations, permitting f (x n ) = 0 at some points in the vicinity of required root. Halley s method, Chebyshev s method, super-halley method, Chebyshev-Halley type methods, Euler s method, osculating circle method and as an exceptional case, Newton s method are seen as the special cases of these families. All methods of various families are cubically convergent to simple roots except Newton s or a family of Newton s method. Further, by making some semi-discrete modifications to above proposed cubically convergent families, a number of interesting new classes of third-order as well as fourth-order multipoint iterative methods can be derived which are free from second-order derivative. Furthermore, numerical examples show that all proposed one-point as well as multipoint iterative families are effective and comparable to the well-known existing classical methods. As discussed previously, one of the most basic problems in computational mathematics is to find solutions of nonlinear equations (1.1). To solve equation (1.1), one can use 3 The main contents of this CHAPTER have been published in [206]. 93

2 iterative methods such as Newton s method [7, 11 16] and its variants namely, Halley s method [7, 30 32, 60 73], Chebyshev s method [7, 30 32, 59 61] and super-halley method [30 32, 61, 71 73] respectively. Among all these iterative methods, Newton s method is probably the well-known and mostly used algorithm. Its geometric construction consists in considering a straight line y = ax + b, (4.1) and then determining unknowns a and b by imposing the tangency conditions : y(x n ) =, y (x n ) = f (x n ), (4.2) thereby obtaining a tangent line y(x) = f (x n )(x x n ), (4.3) to the graph of f (x) at x n, }. The point of intersection of this line with x axis, gives the celebrated Newton s method f (x n) f (x n ). (4.4) The tangent line in Newton s method can be seen as first degree polynomial of f (x) at x = x n. Though, Newton s method is simple and fast with quadratic convergence, but its major drawback is that, it may fail miserably if at any stage of computation, the derivative of the function f (x) is either zero or very small in the vicinity of the required root. Therefore, it has poor convergence and stability problems as it is very sensitive to initial guess. The well-known third-order methods which entail an evaluation of f (x) are close relatives of Newton s method and can be obtained by admitting geometric derivation from the quadratic curves, e.g. parabola, hyperbola, circle or ellipse. parabola In this category, the Euler s method [7, 30, 64] can be constructed by considering a and imposing the tangency conditions : x 2 + ax + by + c = 0, (4.5) y(x n ) =, y (x n ) = f (x n ) and y (x n ) = f (x n ), (4.6) 94

3 one gets y(x) = f (x n )(x x n ) + f (x n ) (x x n ) 2. (4.7) 2 The point of intersection of (4.7) with x axis gives an iteration scheme } 2 1 ± 1 2L f (x n ), (4.8) where L = f (x n) f (x n ) f (x n )} 2. (4.9) The well-known Halley s method [7, 30 32, 60 73] admits its geometric derivation from a hyperbola in the form axy + y + bx + c = 0. (4.10) After the imposition of conditions (4.6) on (4.10) and taking the intersection of (4.10) with x axis as next iterate, the Halley s iteration process reads as } 2 f (xn ) 2 L f (x n ). (4.11) The classical Chebyshev s method [7, 30 32, 59 61] admits its geometric derivation from parabola in a following form : ay 2 + y + bx + c = 0. (4.12) Similarly after imposing the tangency conditions (4.6) and taking the intersection of (4.12) with x axis as next iterate, Chebyshev s iteration process reads as 1 L } f (xn ) 2 f (x n ). (4.13) Amat et al. [30] studied another type of third-order methods by considering a hyperbola in a following form : a n y 2 + b n xy + y + c n x + d n = 0, (4.14) and by the similar conditions as before, one can get an iteration process reads as L f (x ( ) n ) b f (xn ) f (x n n ), (4.15) f (x n ) where b n is a parameter depending on n. For particular values of b n, some interesting particular cases of this family are 95

4 (i) For b n = 0, formula (4.15) corresponds to classical Chebyshev s method [7, 30 32, 59]. (ii) For b n = f (x n ) 2 f (x n ), formula (4.15) corresponds to famous Halley s method [60, 30 32]. (iii) For b n = f (x n ) f (x n ), formula (4.15) corresponds to super-halley method [71 73]. (iv) For b n ±, formula (4.15) corresponds to classical Newton s method [7, 11 16]. Osculating circle method [33] and ellipse method [31, 32, 61] are more cumbersome from computing point of view as compared to Halley s, Chebyshev s or super-halley methods. For more detailed survey of these most important techniques, some excellent text books [1, 7, 14, 21] are available in literature. The problem with existing methods is that these techniques may fail to converge in some cases if an initial guess is far from zero point or if the derivative of the function is small or even zero in the vicinity of required root. Recently, Kanwar and Tomar [33, 231, 232] using different approach derived new classes of iterative techniques having quadratic and cubic convergence respectively. These techniques provide an alternative to the failure situation of existing classical techniques. The purpose of this study is to eliminate the defects of existing classical methods by simple modifications of iteration processes. 4.2 Development of iteration schemes Based on the two theorems [5], namely Theorem and Theorem (discussed in Chapter 1), proposed methods can be derived by admitting geometrical derivation from the hyperbolic curve, cubic curve and general quadratic curve Derivation from a hyperbola Let us now, develop an iteration scheme by fitting the function f (x) of equation (1.1) in the following form of a hyperbola : 2 } a n ye α(x xn) } + bn (x x n ) ye α(x xn) } + ye α(x xn) + c n (x x n ) + d n = 0, (4.16) 96

5 where α R. The unknowns a n,c n and d n are now determined in terms of b n by using tangency conditions (4.6) on equation (4.16). This gives a n = f (x n ) + α 2 2α f (x n ) b n 2 f (x n ) α } 2 f (x n ) α,. (4.17) c n = f (x n ) α } and d n = 0. At a root and it follows from (4.16) that y(x n+1 ) = 0, (4.18) a n } 2 }. (4.19) c n b n If (4.16) is an exponentially fitted straight line, then a n = 0 = b n and from (4.19), one can have f (x n ) α. (4.20) This is a one-parameter family of Newton s method [205, 231] already discussed in Chapter 2 and do not fail even if f (x n ) = 0 or very small in the vicinity of required root, unlike Newton s method. Ignoring the term in α, one gets Newton s method [7, 11 16]. Further, making use of a n and c n in (4.19), one can obtain L f (x n) f (x } n ) 2 f (x 1 + b n ) f (x n n ) α, (4.21) where b n is a parameter and f (x n ) α } L f (x n) = f (x n) f (x n ) + α 2 2α f (x n ) } f (x n ) α } 2. (4.22) This is a modification over the formulas (4.15) of Amat et al. [30] and do not fail even if f (x n ) is very small or zero in the vicinity of required root. For particular values of b n, some interesting particular cases of (4.21) are as under : (i) For b n = 0, one can obtain } 2 L f (x n) f (x n ) α. (4.23) This is a modification over the well-known cubically convergent Chebyshev s method. 97

6 (ii) Putting b n = L f (x n), one gets 2u 2 2 L f (x n) f (x n ) α. (4.24) This is a modification over the well-known cubically convergent Halley s method. (iii) Substituting b n = L f (x n) u, one can get L f (x } n) 2 1 L f (x n) f (x n ) α. (4.25) This is a modification over the well-known cubically convergent super-halley method. (iv) Letting b n = α L f (x n) u,α R, one can have L f (x n) 2 1 α L f (x n) } f (x n ) α. (4.26) This is a modification over the well-known cubically convergent Chebyshev-Halley type methods [74]. (v) If b n ±, one obtains f (x n ) α. (4.27) This is a modification over the well-known quadratically convergent Newton s method. It can be verified that in the absence of the term in parameter α, method (4.23), method (4.24), method (4.25), method (4.26) and method (4.27) reduce to Chebyshev s method [7, 30 32, 59 61], Halley s method [60, 30 32, 61 73], super-halley method [30 32, 61, 71 73], Chebyshev-Halley type methods [74] and Newton s method [7, 11 16] respectively. The sign of parameter α in all these methods is chosen so that the denominator is largest in magnitude. Following theorem gives an order of convergence of the proposed iterative scheme (4.21) : 98

7 Theorem Let f : I R R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f (x) (say x = r I). Assume that initial guess x = x 0 is sufficiently close to r and f (x n ) α 0 in I. Then an iteration scheme defined by formula (4.21) is cubically convergent and satisfies the following error equation: e n+1 = 2C2 2 C 3 3αC } 2 α2 + b n (C 2 α) e 3 n + O ( e 4 n), (4.28) where e n = x n r is an error at n th iteration and C k = ( ) 1 f k (r) k! f (r),k = 2,3,... Proof. Using Taylor s series expansion for about x = r and taking into account that f (r) = 0 and f (r) 0, one can have = f (r) e n +C 2 e 2 n +C 3 e 3 n + O(e 4 n), (4.29) and Using equations (4.29)-(4.31), one gets and f (x n ) = f (r) 1 + 2C 2 e n + 3C 3 e 2 n + O(e 3 n), (4.30) f (x n ) = f (r) 2C 2 + 6C 3 e n + 12C 4 e 2 n + O(e 3 n). (4.31) f (x n ) α = e n (C 2 α)e 2 n ( 2C 3 2C αC 2 α 2) e 3 n + O ( e 4 ) n, (4.32) L f (x n) = f (x n) f (x n ) + α 2 2α f (x n ) } f (x n ) α } 2 =2(C 2 α)e n (6C 2 2 6C 3 6C 2 α + 3α 2 )e 2 n + O(e 3 n). (4.33) Using equations (4.32) and (4.33), one can obtain 1 L f (x n) b n f (x n ) α } } = (C 2 α)e n ( 3C 2 2 3C 3 3αC α2 +b n (C 2 α) ) e 2 n+o(e 3 n). Substituting equations (4.32) and (4.34) in formula (4.21), one obtains e n+1 = (4.34) 2C2 2 C 3 3αC } 2 α2 + b n (C 2 α) e 3 n + O ( e 4 n). (4.35) This completes proof of the theorem. 99

8 4.2.2 Derivation from a cubic curve Another iterative processes can also be derived by considering different exponentially fitted osculating curves. For instance, if one takes a cubic equation namely } 3 } 2+ } a n ye α(x xn) +bn ye α(x xn) ye α(x xn) +d n (x x n ) = 0, and then using tangency conditions (4.6) on (4.36), one gets At a root and it follows from (4.36) that a n =C f (x n ) + α 2 2α f (x n ) f (x n ) α } 2, b n = f (x n ) + α 2 2α f (x n ) 2 f (x n ) α } 2, d n = f (x n ) α } L f (x n) +C L f (x n) } } 2 (4.36). (4.37) y(x n+1 ) = 0, (4.38) f (x n ) α. (4.39) This is a modification over the Amat et al. C-methods [30, 71] and do not fail even if f (x n ) is very small or zero in the vicinity of required root. It can be verified that in the absence of the term in parameter α, method (4.39) reduces to Amat et al. C-methods [30, 71]. A mathematical proof for an order of convergence of the proposed iterative scheme (4.39) is given below : Theorem Let f : I R R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f (x) (say x = r I). Assume that initial guess x = x 0 is sufficiently close to r and f (x n ) α 0 in I. Then an iteration scheme defined by formula (4.39) is cubically convergent and satisfies the following error equation: e n+1 = (2 4C)C 2 2 C 3 + ( 3 + 8C) (C 2 α α2 2 )} e 3 n + O(e 4 n). (4.40) 100

9 Proof. Using equations (4.29)-(4.31), one gets and f (x n ) α = e n (C 2 α)e 2 n ( 2C 3 2C αC 2 α 2) e 3 n + O ( e 4 ) n, (4.41) L f (x n) = f (x n) f (x n ) + α 2 2α f (x n ) } f (x n ) α } 2 =2(C 2 α)e n (6C 2 2 6C 3 6C 2 α + 3α 2 )e 2 n + O(e 3 n). (4.42) Substituting equations (4.41) and (4.42) in formula (4.39), one can obtain e n+1 = (2 4C)C 2 2 C 3 + ( 3 + 8C) This completes proof of the theorem. (C 2 α α2 2 )} e 3 n + O(e 4 n). (4.43) Derivation from a general quadratic curve Let us consider a following general quadratic equation : ) 2 ( ) (x x n ) 2 + a n (ye α(x xn) + bn (x x n ) + c n ye α(x xn) + d n = 0, (4.44) and determine the unknowns b n, c n and d n in the terms of parameter a n by tangency conditions (4.6), one can have b n = 2[ 1 + a n f (x n ) α } 2] f (x n ) α } f (x n ) + α 2 2α f, (x n ) c n = 2[ 1 + a n f (x n ) α } 2] f (x n ) + α 2 2α f (x n ),. (4.45) d n =0. Taking next iterate x n+1 as a point of intersection of (4.44) with x axis, one gets another two-parameter family of iteration scheme as } f D 1, (4.46) (x n ) α D 2 101

10 where D 1 = 2 + a n f (x n ) α } 2 ( 2 + L f (x n) ), (4.47) and D 2 = 1 + a n f (x n ) α } 2 ± 1 + a n f (x n ) α } 2} 2 L f (x n ) D 1. (4.48) This is a modification over formula (20) of Sharma [32] and do not fail even if f (x n ) is very small or zero in the vicinity of required root. Iterative family (4.46) requires computation of square root at each step and sometimes is not convenient for practical usage. Therefore, it can be further reduced to two parameter rational iterative family ( similar to Chun [31] ), or by using the well-known approximation ( similar to Jiang and Han [61] ) and is given by Special cases: x 1 + x, x < 1, 2 2 [ 1 + a n f (x n ) α } 2] D 1 4 [ 1 + a n f (x n ) α } 2] 2 L f (x n )D 1 f (x n ) α. (4.49) For particular values of a n, some interesting particular cases of (4.46) are (i) For a n = 0, one can obtain 2 f (x n ) 1 ± 1 2L f (x n) f (x n ) α. (4.50) This is a modification over the well-known cubically convergent Euler s method [7, 30]. (ii) Taking a n = 1, one gets f (x n ) α D 1 D 2, (4.51) where D 1 = 2 + f (x n ) α } 2 ( 2 + L f (x n) ), (4.52) and D 2 = 1 + f (x n ) α } 2 ± 1 + f (x n ) α } 2} 2 L f (x n ) D 1. (4.53) This is a modification over the cubically convergent osculating circle method derived by Kanwar et al. [33]. 102

11 2 (iii) Letting a n = ), one can have f (x n )} (2 L 2 f (x n) 2 2 L f (x n) f (x n ) α. (4.54) This is a modification over the well-known cubically convergent Halley s method. (iv) Substituting a n = L f (x n) 4 f (x n )} 2 (1 L f (x n) ), one gets L f (x } n) 2 1 L f (x n) f (x n ) α. (4.55) This is a modification over the well-known cubically convergent super-halley method. (v) If a n ±, one gets } 2 L f (x n) f (x n ) α. (4.56) This is a modification over the well-known cubically convergent Chebyshev s method. (vi) For a n = 1/ f (x n )} 2, one can get f (x n ) α. (4.57) This is a modification over the well-known quadratically convergent Newton s method. It can be verified that in the absence of the term in parameter α, method (4.50), method (4.51), method (4.54), method (4.55), method (4.56) and method (4.57) reduce to Euler s method [7, 30, 64], osculating circle method [33], Halley s method [60, 30 32, 61 73], super-halley method [30 32, 61, 71 73], Chebyshev s method [7, 30 32, 59 61] and Newton s method [7, 11 16] respectively. The sign of parameter α in all these methods is chosen so that the denominator is largest in magnitude. Next, Theorem presents a mathematical proof for an order of convergence of the proposed iterative scheme (4.46). 103

12 Theorem Let f : I R R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f (x) (say x = r I). Assume that initial guess x = x 0 is sufficiently close to r and f (x n ) α 0 in I. Then an iteration scheme defined by formula (4.46) is cubically convergent for any finite a n except a n = 1/ f (x n )} 2 (in that case it has quadratic convergence) and satisfies the following error equation : ( 2C2 α ) α + a n f (r)} 2( 4C2 2 e n+1 = 6C 2α + 3α 2) } 2 ( 1 + a n f (r)} 2) C 3 e 3 n + O ( e 4 n). (4.58) Proof. Using equations (4.29)-(4.31), one can get and f (x n ) α = e n (C 2 α)e 2 n ( 2C 3 2C αC 2 α 2) e 3 n + O ( e 4 ) n, (4.59) L f (x n) = f (x n) f (x n ) + α 2 2α f (x n ) } f (x n ) α } 2 =2(C 2 α)e n ( 6C 2 2 6C 3 6C 2 α + 3α 2) e 2 n + O(e 3 n). (4.60) Using equations (4.29), (4.30) and (4.60), one can have D 1 = 2 ( 1 + a n f (r)} 2) + 2a n f (r)} 2( 5C 2 3α ) e n + O(e 2 n), (4.61) and D 2 = 2 ( 1 + a n f (r)} 2) + ( 4a n f (r)} 2( 2C 2 α ) 2(C 2 α) ) e n + O(e 2 n). (4.62) Substituting equations (4.59), (4.61) and (4.62) in formula (4.46), one gets ( 2C2 α ) α + a n f (r)} 2( 4C2 2 e n+1 = 6C 2α + 3α 2) } 2 ( 1 + a n f (r)} 2) C 3 e 3 n + O ( e 4 n). (4.63) This completes proof of the theorem. Alternatively, all these modified techniques ( (4.21), (4.39) and (4.46) ) can also be obtained by using the idea of Ben-Israel [211]. By applying the existing classical techniques to a modified function, say f (x) := e a(x)dx f (x), for some suitable function f (x), one can get the corresponding modified iterative schemes. 104

13 4.2.4 Worked examples Now, let us make an attempt to present some numerical results obtained by employing existing classical methods namely, Chebyshev s method (CM), Halley s method (HM), super- Halley method (SHM), Euler s method (EM), osculating circle method (OCM) and newly developed methods namely, modified Chebyshev s method (MCM) (4.23), modified Halley s method (MHM) (4.24), modified super-halley method (MSHM) (4.25), modified Euler s method (MEM) (4.50) and modified osculating circle method (MOCM) (4.51) respectively to solve nonlinear equations given in Table 4.1. Here, for simplicity, the formulae are tested for α = 1. The results are summarized in Table 4.2 (Number of iterations), Table 4.3 (Computational order of convergence) and Table 4.4 (Total number of function evaluations) respectively. Computations have been performed using MAT LAB R version 7.5 (R2007b) in double precision arithmetic. We use ε = as a tolerance error. The following stopping criteria are used for computer programs : (i) x n+1 x n < ε, (ii) f (x n+1 ) < ε. 105

14 Table 4.1 Test examples, interval (I) containing the root, initial guesses (x 0 ) and exact root (r) Example No. Examples I Initial guesses Exact root (r) exp(1 x) 1 = 0. [0,2] x exp( x) = 0. [1, 3] sin(x) = 0. [0, 2] exp( x) sin(x) = 0. [5, 7] arctan(x) = 0. [ 1, 2] x 3 + 4x 2 10 = 0. [0,2] x exp( x) 0.1 = 0. [ 2, 2] exp(x 2 + 7x 30) 1 = 0. [2,4] cos(x) x = 0. [ 2,2]

15 Table 4.2 Number of iterations (D and F below stand for divergent and failure respectively) Example No. CM MCM HM MHM SHM MSHM EM MEM OCM MOCM (4.23) (4.24) (4.25) (4.50) (4.51) D 1 D 1 D 1 D 1 D D D F 4 F 4 F 4 F 4 F D F 3 F 3 F 3 F 3 F D D D Converges to undesired root. 107

16 Table 4.3 Computational order of convergence (COC) Example No. CM MCM HM MHM SHM MSHM EM MEM OCM MOCM (4.23) (4.24) (4.25) (4.50) (4.51) == 2.90 == 2.96 == 2.86 == 2.96 == == 3.06 == 3.31 == 3.13 == 3.16 == Here, the symbols namely (i) represents COC for a case of undesired root. (ii)... represents COC for a case of divergence. (iii) == represents COC for a case of failure. (iv) represents COC for a case when number of iterations are not sufficient. 108

17 Table 4.4 Total number of function evaluations (TNOFE) Example CM MCM HM MHM SHM MSHM EM MEM OCM MOCM No. (4.23) (4.24) (4.25) (4.50) (4.51) = 12 = 12 = 12 = 12 = = 9 = 9 = 9 = 9 = Here, the symbols namely (i) represents TNOFE for a case of undesired root. (ii) represents TNOFE for a case of divergence. (iii) = represents TNOFE for a case of failure. 109

18 4.3 Generalized families of multipoint iterative methods without memory The practical difficulty associated with third-order methods given by formula (4.21) may be an evaluation of second-order derivative. In past and recent years, many new modifications of Newton s method free from second-order derivative have been proposed by researchers [7, 27, 34, 35, 82, 9, 47 55, 57, 58, 75 81, ] by discretization of second-order derivative or by considering different quadrature formulae for computation of an integral arising from Newton s theorem [8, 52 55] or by considering Adomian decomposition method [39 44]. All these modifications are targeted at increasing a local order of convergence with the view of increasing their efficiency index [13]. Some of these multipoint iterative methods are of third-order [7, 52 55, 57, 58, 75 81, 87 95], while others are of order four [7, 34, 35, 82, 9, 47 51, ]. Recently, fourth-order methods proposed by Nedzhibov et al. [101], Basu [102], Maheswari [103], Cordero et al. [104], Petković and Petković (a survey) [105], Ghanbari [106], Soleymani [47, 107], Cordero and Torregrosa [108], Chun et al. [109] etc. have optimal order of convergence [9]. The efficiency index [13] of these optimal multipoint iterative methods is equal to These multipoint methods calculate new approximations to a zero of f (x) by sampling f (x) and possibly its derivatives for a number of values of independent variable, per step. Nedzhibov et al. [101] have modified Chebyshev-Halley type methods [74] to derive several cubic and quartic order convergent multipoint iterative methods free from second-order derivative. Since all these techniques are variants of Newton s method and main practical difficulty associated with these multipoint iterative methods is that an iteration can be aborted due to the overflow or leads to divergence if the derivative of the function at any iterative point is singular or almost singular (namely f (x n ) 0). Here, it is intended to develop and unify the class of third-order and fourth-order multipoint iterative methods free from second-order derivative in which f (x n ) = 0 is permitted. The main idea of proposed methods lies in discretization of second-order derivative involved in family (4.21) (similar to Nedzhibov et al. [101]). Therefore, again this work can be viewed as a generalization over Nedzhibov et al. [101] families of multipoint iter- 110

19 ative methods. An additional feature of these processes is that they may possess a number of disposable parameters which can be used to ensure the convergence is to a certain order for simple roots. This section, deals with the derivation of three different families free from second-order derivative involved in the family (4.21) First Family Let us take u u(x n ) = f (x n ) α, and expanding the function f (x n u) about a point x = x n with 0 by Taylor s series expansion, one can have f (x n u) = u f (x n ) + u2 2 f (x n ) + O(u 3 ). (4.64) Inserting the value of u defined above in the coefficient of f (x n ), one can obtain f (x n u)( f (x n ) α ) + α } 2 f (x n ) α = u2 2 f (x n ) + O(u 3 ). (4.65) Further, L f (x n) from equation (4.22) may be written as follows : L f (x n) = f (x n ) f (x n ) α } 2 α f (x n ) α α f (x n) f (x n ) f (x n ) α } 2. (4.66) Taking the approximate value of f (x n ) from formula (4.65), one can simplify formula of L f (x n) given in (4.66) as L f (x n) 2 f (x n u) α 2 } 2 f. (4.67) (x n ) α If u is sufficiently small, then the second term in above equation can be neglected and one obtains L f (x n) = 2 f (x n u). (4.68) Substituting this approximate value of L f (x n) in (4.21), a following corresponding multipoint family is obtained 1 + f (x n u) ( [1 + b n f (x n ) α )] f (x n ) α. (4.69) 111

20 This is a modification over the formula (2.1) of Nedzhibov et al. [101] and do not fail even if f (x) is very small or zero in the vicinity of required root. For different specific values of b n, the following various families of multipoint iterative methods can be derived from (4.69), e.g. (i) For b n = 0, one gets formula f (x n ) α f (x n u) f (x n ) α. (4.70) This is a modification over the well-known cubically convergent Traub s method [7, pp. 168]. (ii) Letting b n = L f (x n) 2u, one can obtain } 2 f (x n ) α } f (x n u)}. (4.71) This is a modification over the well-known cubically convergent Newton-Secant method [7, pp. 180]. (iii) Substituting b n = L f (x n) u, one obtains [ ] f (xn ) f (x n u) f. (4.72) (x n ) α 2 f (x n u) This is a modification over the well-known quartically convergent Traub-Ostrowski s method [7, 82, 101]. It can be seen that in absence of the term in parameter α, method (4.70), method (4.71) and method (4.72) reduce to Traub s method [7, pp. 168], Newton-Secant method [7, pp. 180] and Traub-Ostrowski s method [7, 82, 101] respectively. The sign of parameter α in all these families is chosen so that the denominator is largest in magnitude. A mathematical proof for an order of convergence of the proposed iterative family (4.69) is given in following theorem : 112

21 Theorem Let f : I R R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f (x) (say x = r I). Assume that initial guess x = x 0 is sufficiently close to r and f (x n ) α 0 in I. Then an iteration scheme defined by formula (4.69), is cubically convergent for b n R and satisfies the following error equation : e n+1 = (C 2 α)(b n + 2C 2 α)e 3 n + O(e 4 n). (4.73) Proof. Using Taylor s series expansion for about x = r and taking into account that f (r) = 0 and f (r) 0, one can have = f (r) e n +C 2 e 2 n +C 3 e 3 n + O(e 4 n), (4.74) and From equations (4.74) and (4.75), one gets Further, one can get Using equations (4.74)-(4.77), one can have f (x n u) ( [1 + b n f (x n ) α f (x n ) = f (r) 1 + 2C 2 e n + 3C 3 e 2 n + O(e 3 n). (4.75) f (x n ) α = e n + ( C 2 + α)e 2 n + O(e 3 n). (4.76) f (x n u) = f (r) (C 2 α)e 2 n + O(e 3 n). (4.77) )] = (C 2 α)e n + ( (α C 2 )(b n + 3C 2 ) + 2C 3 α 2) e 2 n + O(e 3 n). Substituting equations (4.76) and (4.78) in formula (4.69), one gets (4.78) e n+1 = (C 2 α)(b n + 2C 2 α)e 3 n + O(e 4 n). (4.79) This completes proof of the theorem. 113

22 4.3.2 Second family Replacing second-order derivative in (4.22) by the finite difference between first-order derivatives as where u is as defined earlier. One obtains f (x n ) f (x n ) f (x n θu),θ R 0}, (4.80) θu L f (x n) f (x n ) f } (x n θu) θ f (x n ) α } + 2 α2 f 2α f (x n ) (x n ) α f (x n ) α } 2. (4.81) If u is sufficiently small, then the second term in above equation can be neglected and then one can get L f (x n) = f (x n ) f (x n θu) θ f (x n ) α } 2α f (x n ) f (x n ) α } 2. (4.82) Substituting this approximate value of L f (x n) in (4.21), a following multipoint family is obtained ( f (x n ) f (x n θu) ) } 2αθu f (x n ) θ ( f (x n ) + (b n α) ) f (x n ) α. (4.83) The formula (4.83) is a modification over formula (2.5) given by Nedzhibov et al. [101] and do not fail even if f (x n ) = 0 or very small in the vicinity of required root. For particular values of b n and θ, some particular cases of formula (4.83) are (i) For b n = L f (x n) 2u and θ = 1, one gets f (x n ) + f (x n 2 f (x n ) α ). (4.84) + 2α 2 f (x n ) f (x n ) α This is a modification over the well-known cubically convergent Weerakoon and Fernando method [8]. (ii) Putting b n = L f (x n) 2u family of multipoint iterative methods and θ = 1, one can get a following new cubically convergent 3 f (x n ) f (x n + 2 f (x n ) α ). (4.85) + 2α 2 f (x n ) f (x n ) α 114

23 (iii) Substituting b n = L f (x n) 2u and θ = 1 2, one can get an another new cubically convergent family of multipoint iterative methods 2 f (x n ) f (x n + 2 f (x n ) α } ). (4.86) + α 2 f (x n ) f (x n ) α (iv) Letting b n = L f (x n) 2u and θ = 1 2, one gets f (x n 2 f (x n ) α } ) + α 2 f (x n ) f (x n ) α. (4.87) This is a modification over the cubically convergent method derived by Traub [7, pp. 182]. (v) Taking b n = L f (x n) u and θ = 2 3, one can obtain f (x n ) α f (x n )} 2 f (x n ) α 3 4α f (x n) ( ). 2 f x n 2 3 u 2 f (x n ) f (x n )} 2 +α 2 } 2 f (x n ) α f ( x n 2 3 u ) f (x n ) (4.88) This is a modification over the well-known quartically convergent Jarratt s method [34]. It is easy to observe that in absence of the term in parameter α, method (4.84), method (4.85), method (4.86), method (4.87) and method (4.88) reduce to Weerakoon and Fernando method [8], third-order multipoint iterative method derived by Traub [7, pp. 182] and Jarratt s method [34, 35] respectively. Here, the parameter α should be chosen so that the denominator is largest in magnitude. A mathematical proof for an order of convergence of the proposed iterative family (4.83) is presented in the next theorem. 115

24 Theorem Let f : I R R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f (x) (say x = r I). Assume that initial guess x = x 0 is sufficiently close to r and f (x n ) 0 in I. Then an iteration scheme defined by formula (4.83) is cubically convergent and satisfies the following error equation : e n+1 = (2C C 3(2 3θ) + b n (C 2 α) 3C 2 α + 2α 2 ) e 3 n + O(e 4 n). (4.89) Proof. Using Taylor s series expansion for about x = r and taking into account that f (r) = 0 and f (r) 0, one can have = f (r) e n +C 2 e 2 n +C 3 e 3 n + O(e 4 n), (4.90) and From equations (4.90) and (4.91), one gets Further, one can get f (x n ) = f (r) 1 + 2C 2 e n + 3C 3 e 2 n + O(e 3 n). (4.91) f (x n ) α = e n + ( C 2 + α)e 2 n + O(e 3 n). (4.92) f (x n θu) = f (r) 1 + 2C 2 (1 θ)e n + ( 3C 3 (1 θ) 2 + 2C 2 (C 2 α)θ ) e 2 n + O(e 3 n). (4.93) Using equations (4.90)-(4.93), one can obtain (1 2αθu) f (x n ) f (x n θu) 2θ( f (x n ) + (b n α) ) ( = α + C ) 2 e n + 1 ( 3C3 + 2(αθ C 2 )(b n +C 2 2α) 4C 2 ) 2 e 2 θ 2θ n + O(e 3 n). (4.94) Substituting equations (4.92) and (4.93) in formula (4.83), one gets e n+1 = (2C C 3(2 3θ) + b n (C 2 α) 3C 2 α + 2α 2 ) e 3 n + O(e 4 n). (4.95) Proof of the theorem is now complete. 116

25 4.3.3 Third family Replacing second-order derivative in (4.22) by the finite difference as f (x n ) 5 f (x n ) 4 f ( x n u ) 2 f (x n u), (4.96) 3u where u is as defined earlier. One obtains L f (x n) 5 f (x n ) 4 f ( x n u ) 2 f } (x n u) 3 f +α 2 2 (x n ) α } f 2α f (x n ) (x n ) α f (x n ) α } 2. (4.97) If u is sufficiently small, then the second term in above equation can be neglected and one can obtain L f (x n) = 5 f (x n ) 4 f ( x n u ) 2 f (x n u) 3 f (x n ) α } 2α f (x n ) f (x n ) α } 2. (4.98) Substituting this approximate value of L f (x n) in (4.21), the following multipoint family is obtained ( 5 f (x n ) 4 f ( x n u ) 2 f (x n u) ) } 6αu f (x n ) ( f (x n ) + (b n α) ) Then for b n = L f (x n) 2u in (4.99), one gets f (x n ) α. (4.99) f (x n ) + 4 f (x n 2 f (x n ) α } 6 ) ( + f x n ). f (x n ) α + 6αu f (x n ) (4.100) This is a modification over the cubically convergent formula explored by Hasanov et al. [101, 230], which do not fail even if f (x n ) = 0 in the vicinity of required root. Here, the parameter α should be chosen such that the denominator is largest in magnitude. It can be observed that in absence of the term in parameter α, the family (4.100) reduces to Hasanov et al. method [101, 230]. Next, Theorem presents a mathematical proof for an order of convergence of the proposed iterative family (4.99). 117

26 Theorem Let f : I R R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f (x) (say x = r I). Assume that initial guess x = x 0 is sufficiently close to r and f (x n ) α in I. Then an iteration scheme defined by formula (4.99) is cubically convergent and satisfies the following error equation : e n+1 = ( 2C b n (C 2 α) 3C 2 α + 2α 2) e 3 n + O(e 4 n). (4.101) Proof. Using Taylor s series expansion for about x = r and taking into account that f (r) = 0 and f (r) 0, one can have and From equations (4.102) and (4.103), one gets Further, one can have and = f (r) e n +C 2 e 2 n +C 3 e 3 n + O(e 4 n), (4.102) f (x n ) = f (r) 1 + 2C 2 e n + 3C 3 e 2 n + O(e 3 n). (4.103) f (x n ) α = e n + ( C 2 + α)e 2 n + O(e 3 n). (4.104) f (x n u) = f (r) 1 + 2C 2 (C 2 α)e 2 n + O(e 3 n), (4.105) ( f x n u ) ( ) = f 3C3 (r) 1 +C 2 e n C 2(C 2 α) e 2 n + O(e 3 n). (4.106) Using equations (4.102)-(4.106), one can obtain (5 6αu) f (x n ) 4 f ( x n u 2) f (x n u) 6( f (x n ) + (b n α) ) =(C 2 α)e n + ( 2C 3 2α 2 (C 2 α)(b n + 3C 2 ) ) e 2 n + O(e 3 n). (4.107) Substituting equations (4.104) and (4.107) in the formula (4.99), one can get This completes proof of the theorem. e n+1 = ( 2C b n (C 2 α) 3C 2 α + 2α 2) e 3 n + O(e 4 n). (4.108) On the similar lines, one can further derive some more new cubically convergent families of multipoint iterative methods free from second-order derivative by discretizing the secondorder derivative involved in families (4.39) and (4.46). 118

27 4.3.4 Further exploration of family (4.69) One can also construct a different family of fourth-order convergent multipoint iterative methods by suitably choosing parameter b n. Again consider the family (4.69) f (x n u) 1 + ( )] [1 + b n where α R and u = f (x n ) α. f (x n ) α f (x n ) α, (4.109) Taking b n = 2C 2 + α f (x n ) f (x n ) + α, in an error equation (4.79), the family (4.109) has at least fourth-order convergence, i.e. after simplifications, one gets new fourth-order multipoint iterative family given by f (x n ) α f (x n u) f (x n ) f (x n )} 2 f (x n ). (4.110) Further, one can derive another fourth-order multipoint iterative family free from secondorder derivative by using (4.65) and is given by [ f 1 + (x n ) α f (x n u) f (x n ) u f (x n )} 2 2α } 2 2 f (x n u)( f (x n ) α ) (4.111) This is a modification over the well-known Traub-Ostrowski s method [7, 82] and is also an optimal fourth-order method. This family requires same evaluations of the function and its derivative as that of Traub-Ostrowski s method per iteration. Thus, an efficiency index [13] of this method is equal to 3 4 = 1.587, which is better than the one of Newton s method 2 2 = and method (4.72) (derived in this chapter) 3 3 = More importantly, this method will not fail even if the derivative of the function is very small or even zero in the vicinity of required root, unlike Traub-Ostrowski s method [1, 43]. Here, the parameter α should be chosen so that denominator is largest in magnitude. One can see in absence of the term in parameter α, the family (4.111) reduces to wellknown Traub-Ostrowski s method [7, 82]. A mathematical proof for an order of convergence of the proposed iterative family (4.111) is given by next theorem. ]. 119

28 Theorem Let f : I R R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f (x) (say x = r I). Assume that initial guess x = x 0 is sufficiently close to r and f (x n ) α in I. Then an iteration scheme defined by formula (4.111) has optimal order of convergence four and satisfies the following error equation : e n+1 = (C 2 α)(c2 2 C 3 αc 2 + 2α 4 )e 4 n + O(e 5 n). (4.112) Proof. Using Taylor s series expansion for about x = r and taking into account that f (r) = 0 and f (r) 0, one can have = f (r) e n +C 2 e 2 n +C 3 e 3 n + O(e 4 n), (4.113) and f (x n ) = f (r) 1 + 2C 2 e n + 3C 3 e 2 n + 4C 4 e 3 n + O(e 4 n). (4.114) From equations (4.113) and (4.114), one gets f (x n ) α = e n + ( C 2 + α)e 2 n + (2C 2 2 2C 3 2αC 2 + α 2 )e 3 n + O(e 4 n). (4.115) Further, one can obtain f (x n u) = f (r) (C 2 α)e 2 n + ( 2C C 3 + 2αC 2 α 2 )e 3 n + O(e 4 n). (4.116) Using equations (4.113), (4.114) and (4.116), one gets f (x n u) f (x n ) u f (x n )} 2 2α } 2 2 f (x n u)( f (x n ) α ) =(C 2 α)e n + ( C 2 2 +C 3 )e 2 n + ( 2C 2 C 3 + 3C 4 + 2C 2 2α C 3 α 3C 2 α 2 + 2α 3) e 3 n + O(e 4 n). (4.117) Substituting equations (4.115) and (4.117) in formula (4.111) and simplifying, one obtains e n+1 = (C 2 α)(c 2 2 C 3 αc 2 + 2α 4 )e 4 n + O(e 5 n). (4.118) This completes proof of the theorem. 120

29 4.3.5 Worked examples Here, an attempt has been made to present some numerical results obtained by employing existing classical methods namely, Steffensen s method (SM), Traub s method (T M), Newton-Secant method (NSM), Weerankoon and Fernando method (WFM), Hasanov et al. method (HAM), Traub-Ostrowski s method (T OM), Jarratt s method (JM) and newly developed methods namely, modified Traub s method (MT M) (4.70), modified Newton-Secant method (MNSM) (4.71), modified Weerankoon and Fernando method (MW FM) (4.84), modified Hasanov et al. method (MHAM) (4.100), modified Traub-Ostrowski s method (MT OM) (4.111) and modified Jarratt s method (MJM) (4.88) respectively to solve nonlinear equations given in Table (4.5). Here, for simplicity, the formulae are tested for α = 1. The results are summarized in Table 4.6 (Number of iterations), Table 4.7 (Computational order of convergence) and Table 4.8 (Total number of function evaluations) respectively. Computations have been performed using MAT LAB R version 7.5 (R2007b) in double precision arithmetic. We use ε = as a tolerance error. The following stopping criteria are used for computer programs : (i) x n+1 x n < ε, (ii) f (x n+1 ) < ε. 121

30 Table 4.5 Test examples, interval (I) containing the root, their initial guesses (x 0 ) and exact root (r) Example No. Examples I Initial guesses Exact root exp(1 x) 1 = 0. [0,2] sin(x) = 0. [0, 2] exp( x) sin(x) = 0. [5, 7] arctan(x) = 0. [ 1, 2] x 3 + 4x 2 10 = 0. [0,2] (x 1) 3 1 = 0. [0,3] exp(x 2 + 7x 30) 1 = 0. [2,4] cos(x) x = 0. [ 1,2] log(x) = 0. [.5, 3.5]

31 Table 4.6 Number of iterations (D and F below stands for divergent and failure respectively) Example SM TM MTM NSM MNSM WFM MWFM HAM MHAM TOM MTOM JM MJM No. (4.70) (4.71) (4.84) (4.100) (4.111) (4.88) D D D D D 5 D D D D 5 13 F 4 F 5 F 5 F 5 F 4 F F 1 F 1 F 5 F 1 F 1 F D D D D D 5 D 6 D 5 D 6 D Converges to undesired root 123

32 Table 4.7 Computational order of convergence (COC) Example SM TM MTM NSM MNSM WFM MWFM HAM MHAM TOM MTOM JM MJM No. (4.70) (4.71) (4.84) (4.100) (4.111) (4.88) == 2.88 == 2.98 == 2.98 == 2.98 == 3.90 == == == == 2.99 == == == Here, the symbols used, are already defined in this chapter under the Table

33 Table 4.8 Total number of function evaluations (TNOFE) Example SM TM MTM NSM MNSM WFM MWFM HAM MHAM TOM MTOM JM MJM No. (4.70) (4.71) (4.84) (4.100) (4.111) (4.88) = 12 = 15 = 15 = 20 = 12 = = 3 = 3 = 15 = 4 = 3 = Here, the symbols used, are already defined in this chapter under the Table

34 4.4 Discussion and conclusions This work proposes many new families of Chebyshev s method, Halley s method, super- Halley method, Chebyshev-Halley type methods, C-methods, Euler s method, osculating circle method and ellipse method for finding simple roots of nonlinear equations, permitting f (x n ) = 0, at some points in the vicinity of required root. Further, an attempt is made to propose some new families of third-order and fourth-order multipoint iterative methods free from second-order derivative by discretization. The numerical examples considered in Table 4.1 and Table 4.5 overwhelmingly support that proposed one-point as well as multipoint iterative methods can compete with any of the existing methods including classical Newton s method and Traub-Ostrowski s method. A reasonably close initial guess is necessary for these multipoint methods to converge. This condition, however, applies practically to all iterative methods for solving equations. Fourth-order multipoint iterative method of first family have optimal order with computational efficiency index [13] equal to 3 4 = and third-order multipoint iterative methods of first and second families have efficiency indices [13] equal to 3 3 = 1.442, which are better than the one of Newton s method 2 = In third family, a modified third-order multipoint family of Hasanov et al. method [101, 226] has been obtained, permitting f (x n ) = 0, at some points in the vicinity of required root. Further, one can also construct many new families of iterative methods free from first and second-order derivatives in computing process by the forward-difference approximations. Furthermore, the behaviours of existing classical iterative methods (one-point as well as multipoint) and the proposed modifications can be compared by their corresponding correction factors. The factor f (x n) f (x n ), which appears in the classical iterative schemes, is now f (x modified to n ) f (x n ) α, where α R. Here, the parameter α should be chosen such that the denominator is largest in magnitude. As is well-known that, if an initial guess x n is close enough to the solution r with f (r) 0, the classical iterates are well defined and converge to the required root. Now this modified factor has two major advantages over its predecessor. The first one is that it is always well defined, even if f (x n ) = 0. The second one is that an absolute value of denominator in the formula is greater than f (x n ) provided x n is not accepted as an approximation to the required root r. This means that the numerical 126

35 stability of iterative schemes having this factor is better than the classical iterates. Moreover, if x n is sufficiently close to r (simple root), then will be sufficiently close to zero, and in particular the considered denominator f (x n ) α will also be close to zero if f (x n ) 0. In these problems where f (x n ) 0, a small value of the parameter α may lead to divergence or failure. Such situations can be handled by choosing α in such a way such that the correction factor is as small as possible, i.e. denominator f (x n ) α is sufficiently large in magnitude. Finally, it is concluded that these techniques can be used as an alternative to existing techniques or in some cases where existing techniques are not successful, therefore, have definite practical utility. 127

Optimal derivative-free root finding methods based on the Hermite interpolation

Optimal derivative-free root finding methods based on the Hermite interpolation Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 4427 4435 Research Article Optimal derivative-free root finding methods based on the Hermite interpolation Nusrat Yasmin, Fiza Zafar,

More information

Two New Predictor-Corrector Iterative Methods with Third- and. Ninth-Order Convergence for Solving Nonlinear Equations

Two New Predictor-Corrector Iterative Methods with Third- and. Ninth-Order Convergence for Solving Nonlinear Equations Two New Predictor-Corrector Iterative Methods with Third- and Ninth-Order Convergence for Solving Nonlinear Equations Noori Yasir Abdul-Hassan Department of Mathematics, College of Education for Pure Science,

More information

A Family of Methods for Solving Nonlinear Equations Using Quadratic Interpolation

A Family of Methods for Solving Nonlinear Equations Using Quadratic Interpolation ELSEVIER An International Journal Available online at www.sciencedirect.com computers &.,=. = C o,..ct. mathematics with applications Computers and Mathematics with Applications 48 (2004) 709-714 www.elsevier.com/locate/camwa

More information

Chebyshev-Halley s Method without Second Derivative of Eight-Order Convergence

Chebyshev-Halley s Method without Second Derivative of Eight-Order Convergence Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 12, Number 4 2016, pp. 2987 2997 Research India Publications http://www.ripublication.com/gjpam.htm Chebyshev-Halley s Method without

More information

A three point formula for finding roots of equations by the method of least squares

A three point formula for finding roots of equations by the method of least squares Journal of Applied Mathematics and Bioinformatics, vol.2, no. 3, 2012, 213-233 ISSN: 1792-6602(print), 1792-6939(online) Scienpress Ltd, 2012 A three point formula for finding roots of equations by the

More information

SOME MULTI-STEP ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS

SOME MULTI-STEP ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS Open J. Math. Sci., Vol. 1(017, No. 1, pp. 5-33 ISSN 53-01 Website: http://www.openmathscience.com SOME MULTI-STEP ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS MUHAMMAD SAQIB 1, MUHAMMAD IQBAL Abstract.

More information

A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations

A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations F. Soleymani Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan, Iran E-mail: fazl_soley_bsb@yahoo.com; Tel:

More information

Solving Nonlinear Equations Using Steffensen-Type Methods With Optimal Order of Convergence

Solving Nonlinear Equations Using Steffensen-Type Methods With Optimal Order of Convergence Palestine Journal of Mathematics Vol. 3(1) (2014), 113 119 Palestine Polytechnic University-PPU 2014 Solving Nonlinear Equations Using Steffensen-Type Methods With Optimal Order of Convergence M.A. Hafiz

More information

NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS

NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS Bulletin of Mathematical Analysis and Applications ISSN: 181-191, URL: http://www.bmathaa.org Volume 3 Issue 4(011, Pages 31-37. NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS

More information

A New Modification of Newton s Method

A New Modification of Newton s Method A New Modification of Newton s Method Vejdi I. Hasanov, Ivan G. Ivanov, Gurhan Nedjibov Laboratory of Mathematical Modelling, Shoumen University, Shoumen 971, Bulgaria e-mail: v.hasanov@@fmi.shu-bg.net

More information

Sixth Order Newton-Type Method For Solving System Of Nonlinear Equations And Its Applications

Sixth Order Newton-Type Method For Solving System Of Nonlinear Equations And Its Applications Applied Mathematics E-Notes, 17(017), 1-30 c ISSN 1607-510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Sixth Order Newton-Type Method For Solving System Of Nonlinear Equations

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

A Three-Step Iterative Method to Solve A Nonlinear Equation via an Undetermined Coefficient Method

A Three-Step Iterative Method to Solve A Nonlinear Equation via an Undetermined Coefficient Method Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 14, Number 11 (018), pp. 145-1435 Research India Publications http://www.ripublication.com/gjpam.htm A Three-Step Iterative Method

More information

Modified Jarratt Method Without Memory With Twelfth-Order Convergence

Modified Jarratt Method Without Memory With Twelfth-Order Convergence Annals of the University of Craiova, Mathematics and Computer Science Series Volume 39(1), 2012, Pages 21 34 ISSN: 1223-6934 Modified Jarratt Method Without Memory With Twelfth-Order Convergence F. Soleymani,

More information

On Newton-type methods with cubic convergence

On Newton-type methods with cubic convergence Journal of Computational and Applied Mathematics 176 (2005) 425 432 www.elsevier.com/locate/cam On Newton-type methods with cubic convergence H.H.H. Homeier a,b, a Science + Computing Ag, IT Services Muenchen,

More information

Math 4329: Numerical Analysis Chapter 03: Newton s Method. Natasha S. Sharma, PhD

Math 4329: Numerical Analysis Chapter 03: Newton s Method. Natasha S. Sharma, PhD Mathematical question we are interested in numerically answering How to find the x-intercepts of a function f (x)? These x-intercepts are called the roots of the equation f (x) = 0. Notation: denote the

More information

A new sixth-order scheme for nonlinear equations

A new sixth-order scheme for nonlinear equations Calhoun: The NPS Institutional Archive DSpace Repository Faculty and Researchers Faculty and Researchers Collection 202 A new sixth-order scheme for nonlinear equations Chun, Changbum http://hdl.handle.net/0945/39449

More information

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS BISECTION METHOD

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS BISECTION METHOD BISECTION METHOD If a function f(x) is continuous between a and b, and f(a) and f(b) are of opposite signs, then there exists at least one root between a and b. It is shown graphically as, Let f a be negative

More information

Family of Optimal Eighth-Order of Convergence for Solving Nonlinear Equations

Family of Optimal Eighth-Order of Convergence for Solving Nonlinear Equations SCECH Volume 4, Issue 4 RESEARCH ORGANISATION Published online: August 04, 2015 Journal of Progressive Research in Mathematics www.scitecresearch.com/journals Family of Optimal Eighth-Order of Convergence

More information

SOLVING NONLINEAR EQUATIONS USING A NEW TENTH-AND SEVENTH-ORDER METHODS FREE FROM SECOND DERIVATIVE M.A. Hafiz 1, Salwa M.H.

SOLVING NONLINEAR EQUATIONS USING A NEW TENTH-AND SEVENTH-ORDER METHODS FREE FROM SECOND DERIVATIVE M.A. Hafiz 1, Salwa M.H. International Journal of Differential Equations and Applications Volume 12 No. 4 2013, 169-183 ISSN: 1311-2872 url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijdea.v12i4.1344 PA acadpubl.eu SOLVING

More information

A Fifth-Order Iterative Method for Solving Nonlinear Equations

A Fifth-Order Iterative Method for Solving Nonlinear Equations International Journal of Mathematics and Statistics Invention (IJMSI) E-ISSN: 2321 4767, P-ISSN: 2321 4759 www.ijmsi.org Volume 2 Issue 10 November. 2014 PP.19-23 A Fifth-Order Iterative Method for Solving

More information

A new family of four-step fifteenth-order root-finding methods with high efficiency index

A new family of four-step fifteenth-order root-finding methods with high efficiency index Computational Methods for Differential Equations http://cmde.tabrizu.ac.ir Vol. 3, No. 1, 2015, pp. 51-58 A new family of four-step fifteenth-order root-finding methods with high efficiency index Tahereh

More information

Finding simple roots by seventh- and eighth-order derivative-free methods

Finding simple roots by seventh- and eighth-order derivative-free methods Finding simple roots by seventh- and eighth-order derivative-free methods F. Soleymani 1,*, S.K. Khattri 2 1 Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan, Iran * Corresponding

More information

A Two-step Iterative Method Free from Derivative for Solving Nonlinear Equations

A Two-step Iterative Method Free from Derivative for Solving Nonlinear Equations Applied Mathematical Sciences, Vol. 8, 2014, no. 161, 8021-8027 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2014.49710 A Two-step Iterative Method Free from Derivative for Solving Nonlinear

More information

Development of a Family of Optimal Quartic-Order Methods for Multiple Roots and Their Dynamics

Development of a Family of Optimal Quartic-Order Methods for Multiple Roots and Their Dynamics Applied Mathematical Sciences, Vol. 9, 5, no. 49, 747-748 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.988/ams.5.5658 Development of a Family of Optimal Quartic-Order Methods for Multiple Roots and

More information

Some Third Order Methods for Solving Systems of Nonlinear Equations

Some Third Order Methods for Solving Systems of Nonlinear Equations Some Third Order Methods for Solving Systems of Nonlinear Equations Janak Raj Sharma Rajni Sharma International Science Index, Mathematical Computational Sciences waset.org/publication/1595 Abstract Based

More information

New seventh and eighth order derivative free methods for solving nonlinear equations

New seventh and eighth order derivative free methods for solving nonlinear equations DOI 10.1515/tmj-2017-0049 New seventh and eighth order derivative free methods for solving nonlinear equations Bhavna Panday 1 and J. P. Jaiswal 2 1 Department of Mathematics, Demonstration Multipurpose

More information

A New Fifth Order Derivative Free Newton-Type Method for Solving Nonlinear Equations

A New Fifth Order Derivative Free Newton-Type Method for Solving Nonlinear Equations Appl. Math. Inf. Sci. 9, No. 3, 507-53 (05 507 Applied Mathematics & Information Sciences An International Journal http://dx.doi.org/0.785/amis/090346 A New Fifth Order Derivative Free Newton-Type Method

More information

15 Nonlinear Equations and Zero-Finders

15 Nonlinear Equations and Zero-Finders 15 Nonlinear Equations and Zero-Finders This lecture describes several methods for the solution of nonlinear equations. In particular, we will discuss the computation of zeros of nonlinear functions f(x).

More information

A New Two Step Class of Methods with Memory for Solving Nonlinear Equations with High Efficiency Index

A New Two Step Class of Methods with Memory for Solving Nonlinear Equations with High Efficiency Index International Journal of Mathematical Modelling & Computations Vol. 04, No. 03, Summer 2014, 277-288 A New Two Step Class of Methods with Memory for Solving Nonlinear Equations with High Efficiency Index

More information

Convergence of a Third-order family of methods in Banach spaces

Convergence of a Third-order family of methods in Banach spaces International Journal of Computational and Applied Mathematics. ISSN 1819-4966 Volume 1, Number (17), pp. 399 41 Research India Publications http://www.ripublication.com/ Convergence of a Third-order family

More information

Homework and Computer Problems for Math*2130 (W17).

Homework and Computer Problems for Math*2130 (W17). Homework and Computer Problems for Math*2130 (W17). MARCUS R. GARVIE 1 December 21, 2016 1 Department of Mathematics & Statistics, University of Guelph NOTES: These questions are a bare minimum. You should

More information

Chapter 11. Taylor Series. Josef Leydold Mathematical Methods WS 2018/19 11 Taylor Series 1 / 27

Chapter 11. Taylor Series. Josef Leydold Mathematical Methods WS 2018/19 11 Taylor Series 1 / 27 Chapter 11 Taylor Series Josef Leydold Mathematical Methods WS 2018/19 11 Taylor Series 1 / 27 First-Order Approximation We want to approximate function f by some simple function. Best possible approximation

More information

Outline. Math Numerical Analysis. Intermediate Value Theorem. Lecture Notes Zeros and Roots. Joseph M. Mahaffy,

Outline. Math Numerical Analysis. Intermediate Value Theorem. Lecture Notes Zeros and Roots. Joseph M. Mahaffy, Outline Math 541 - Numerical Analysis Lecture Notes Zeros and Roots Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research

More information

Chapter 4. Solution of Non-linear Equation. Module No. 1. Newton s Method to Solve Transcendental Equation

Chapter 4. Solution of Non-linear Equation. Module No. 1. Newton s Method to Solve Transcendental Equation Numerical Analysis by Dr. Anita Pal Assistant Professor Department of Mathematics National Institute of Technology Durgapur Durgapur-713209 email: anita.buie@gmail.com 1 . Chapter 4 Solution of Non-linear

More information

Math Numerical Analysis

Math Numerical Analysis Math 541 - Numerical Analysis Lecture Notes Zeros and Roots Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research Center

More information

Research Article On a New Three-Step Class of Methods and Its Acceleration for Nonlinear Equations

Research Article On a New Three-Step Class of Methods and Its Acceleration for Nonlinear Equations e Scientific World Journal, Article ID 34673, 9 pages http://dx.doi.org/0.55/204/34673 Research Article On a New Three-Step Class of Methods and Its Acceleration for Nonlinear Equations T. Lotfi, K. Mahdiani,

More information

Solution of Algebric & Transcendental Equations

Solution of Algebric & Transcendental Equations Page15 Solution of Algebric & Transcendental Equations Contents: o Introduction o Evaluation of Polynomials by Horner s Method o Methods of solving non linear equations o Bracketing Methods o Bisection

More information

High-order Newton-type iterative methods with memory for solving nonlinear equations

High-order Newton-type iterative methods with memory for solving nonlinear equations MATHEMATICAL COMMUNICATIONS 9 Math. Commun. 9(4), 9 9 High-order Newton-type iterative methods with memory for solving nonlinear equations Xiaofeng Wang, and Tie Zhang School of Mathematics and Physics,

More information

Three New Iterative Methods for Solving Nonlinear Equations

Three New Iterative Methods for Solving Nonlinear Equations Australian Journal of Basic and Applied Sciences, 4(6): 122-13, 21 ISSN 1991-8178 Three New Iterative Methods for Solving Nonlinear Equations 1 2 Rostam K. Saeed and Fuad W. Khthr 1,2 Salahaddin University/Erbil

More information

STOP, a i+ 1 is the desired root. )f(a i) > 0. Else If f(a i+ 1. Set a i+1 = a i+ 1 and b i+1 = b Else Set a i+1 = a i and b i+1 = a i+ 1

STOP, a i+ 1 is the desired root. )f(a i) > 0. Else If f(a i+ 1. Set a i+1 = a i+ 1 and b i+1 = b Else Set a i+1 = a i and b i+1 = a i+ 1 53 17. Lecture 17 Nonlinear Equations Essentially, the only way that one can solve nonlinear equations is by iteration. The quadratic formula enables one to compute the roots of p(x) = 0 when p P. Formulas

More information

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu

More information

Root Finding Methods

Root Finding Methods Root Finding Methods Let's take a simple problem ax 2 +bx+c=0 Can calculate roots using the quadratic equation or just by factoring Easy and does not need any numerical methods How about solving for the

More information

Dynamical Behavior for Optimal Cubic-Order Multiple Solver

Dynamical Behavior for Optimal Cubic-Order Multiple Solver Applied Mathematical Sciences, Vol., 7, no., 5 - HIKARI Ltd, www.m-hikari.com https://doi.org/.988/ams.7.6946 Dynamical Behavior for Optimal Cubic-Order Multiple Solver Young Hee Geum Department of Applied

More information

University of Education Lahore 54000, PAKISTAN 2 Department of Mathematics and Statistics

University of Education Lahore 54000, PAKISTAN 2 Department of Mathematics and Statistics International Journal of Pure and Applied Mathematics Volume 109 No. 2 2016, 223-232 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: 10.12732/ijpam.v109i2.5

More information

Numerical Methods in Physics and Astrophysics

Numerical Methods in Physics and Astrophysics Kostas Kokkotas 2 October 20, 2014 2 http://www.tat.physik.uni-tuebingen.de/ kokkotas Kostas Kokkotas 3 TOPICS 1. Solving nonlinear equations 2. Solving linear systems of equations 3. Interpolation, approximation

More information

SKILL BUILDER TEN. Graphs of Linear Equations with Two Variables. If x = 2 then y = = = 7 and (2, 7) is a solution.

SKILL BUILDER TEN. Graphs of Linear Equations with Two Variables. If x = 2 then y = = = 7 and (2, 7) is a solution. SKILL BUILDER TEN Graphs of Linear Equations with Two Variables A first degree equation is called a linear equation, since its graph is a straight line. In a linear equation, each term is a constant or

More information

Line Search Methods. Shefali Kulkarni-Thaker

Line Search Methods. Shefali Kulkarni-Thaker 1 BISECTION METHOD Line Search Methods Shefali Kulkarni-Thaker Consider the following unconstrained optimization problem min f(x) x R Any optimization algorithm starts by an initial point x 0 and performs

More information

Numerical Methods in Physics and Astrophysics

Numerical Methods in Physics and Astrophysics Kostas Kokkotas 2 October 17, 2017 2 http://www.tat.physik.uni-tuebingen.de/ kokkotas Kostas Kokkotas 3 TOPICS 1. Solving nonlinear equations 2. Solving linear systems of equations 3. Interpolation, approximation

More information

Ch 9/10/11/12 Exam Review

Ch 9/10/11/12 Exam Review Ch 9/0// Exam Review The vector v has initial position P and terminal point Q. Write v in the form ai + bj; that is, find its position vector. ) P = (4, 6); Q = (-6, -) Find the vertex, focus, and directrix

More information

Bisection Method. and compute f (p 1 ). repeat with p 2 = a 2+b 2

Bisection Method. and compute f (p 1 ). repeat with p 2 = a 2+b 2 Bisection Method Given continuous function f (x) on the interval [a, b] with f (a) f (b) < 0, there must be a root in (a, b). To find a root: set [a 1, b 1 ] = [a, b]. set p 1 = a 1+b 1 2 and compute f

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T Heath Chapter 5 Nonlinear Equations Copyright c 2001 Reproduction permitted only for noncommercial, educational

More information

Conic Sections. Geometry - Conics ~1~ NJCTL.org. Write the following equations in standard form.

Conic Sections. Geometry - Conics ~1~ NJCTL.org. Write the following equations in standard form. Conic Sections Midpoint and Distance Formula M is the midpoint of A and B. Use the given information to find the missing point. 1. A(, 2) and B(3, -), find M 2. A(5, 7) and B( -2, -), find M 3. A( 2,0)

More information

A Novel Computational Technique for Finding Simple Roots of Nonlinear Equations

A Novel Computational Technique for Finding Simple Roots of Nonlinear Equations Int. Journal of Math. Analysis Vol. 5 2011 no. 37 1813-1819 A Novel Computational Technique for Finding Simple Roots of Nonlinear Equations F. Soleymani 1 and B. S. Mousavi 2 Young Researchers Club Islamic

More information

Two Point Methods For Non Linear Equations Neeraj Sharma, Simran Kaur

Two Point Methods For Non Linear Equations Neeraj Sharma, Simran Kaur 28 International Journal of Advance Research, IJOAR.org Volume 1, Issue 1, January 2013, Online: Two Point Methods For Non Linear Equations Neeraj Sharma, Simran Kaur ABSTRACT The following paper focuses

More information

Formulas to remember

Formulas to remember Complex numbers Let z = x + iy be a complex number The conjugate z = x iy Formulas to remember The real part Re(z) = x = z+z The imaginary part Im(z) = y = z z i The norm z = zz = x + y The reciprocal

More information

Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems

Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems algorithms Article Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems Xiaofeng Wang * and Xiaodong Fan School of Mathematics and Physics, Bohai University, Jinzhou 203, China;

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that Chapter 4 Nonlinear equations 4.1 Root finding Consider the problem of solving any nonlinear relation g(x) = h(x) in the real variable x. We rephrase this problem as one of finding the zero (root) of a

More information

Document downloaded from:

Document downloaded from: Document downloaded from: http://hdl.handle.net/1051/56036 This paper must be cited as: Cordero Barbero, A.; Torregrosa Sánchez, JR.; Penkova Vassileva, M. (013). New family of iterative methods with high

More information

arxiv: v2 [math.na] 11 Jul 2018

arxiv: v2 [math.na] 11 Jul 2018 Beyond Newton: a new root-finding fixed-point iteration for nonlinear equations Ankush Aggarwal, and Sanay Pant Zienkiewicz Centre for Computational Engineering, College of Engineering, Swansea University,

More information

Some New Three Step Iterative Methods for Solving Nonlinear Equation Using Steffensen s and Halley Method

Some New Three Step Iterative Methods for Solving Nonlinear Equation Using Steffensen s and Halley Method British Journal of Mathematics & Computer Science 19(2): 1-9, 2016; Article no.bjmcs.2922 ISSN: 221-081 SCIENCEDOMAIN international www.sciencedomain.org Some New Three Step Iterative Methods for Solving

More information

NEW DERIVATIVE FREE ITERATIVE METHOD FOR SOLVING NON-LINEAR EQUATIONS

NEW DERIVATIVE FREE ITERATIVE METHOD FOR SOLVING NON-LINEAR EQUATIONS NEW DERIVATIVE FREE ITERATIVE METHOD FOR SOLVING NON-LINEAR EQUATIONS Dr. Farooq Ahmad Principal, Govt. Degree College Darya Khan, Bhakkar, Punjab Education Department, PAKISTAN farooqgujar@gmail.com Sifat

More information

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS

Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS 5.1 Introduction When a physical system depends on more than one variable a general

More information

Chapter 3: The Derivative in Graphing and Applications

Chapter 3: The Derivative in Graphing and Applications Chapter 3: The Derivative in Graphing and Applications Summary: The main purpose of this chapter is to use the derivative as a tool to assist in the graphing of functions and for solving optimization problems.

More information

Root Finding: Close Methods. Bisection and False Position Dr. Marco A. Arocha Aug, 2014

Root Finding: Close Methods. Bisection and False Position Dr. Marco A. Arocha Aug, 2014 Root Finding: Close Methods Bisection and False Position Dr. Marco A. Arocha Aug, 2014 1 Roots Given function f(x), we seek x values for which f(x)=0 Solution x is the root of the equation or zero of the

More information

A New Accelerated Third-Order Two-Step Iterative Method for Solving Nonlinear Equations

A New Accelerated Third-Order Two-Step Iterative Method for Solving Nonlinear Equations ISSN 4-5804 (Paper) ISSN 5-05 (Online) Vol.8, No.5, 018 A New Accelerated Third-Order Two-Step Iterative Method for Solving Nonlinear Equations Umair Khalid Qureshi Department of Basic Science & Related

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 3 Lecture 3 3.1 General remarks March 4, 2018 This

More information

Higher Portfolio Quadratics and Polynomials

Higher Portfolio Quadratics and Polynomials Higher Portfolio Quadratics and Polynomials Higher 5. Quadratics and Polynomials Section A - Revision Section This section will help you revise previous learning which is required in this topic R1 I have

More information

High School Mathematics Honors PreCalculus

High School Mathematics Honors PreCalculus High School Mathematics Honors PreCalculus This is an accelerated course designed for the motivated math students with an above average interest in mathematics. It will cover all topics presented in Precalculus.

More information

Comparative Analysis of Convergence of Various Numerical Methods

Comparative Analysis of Convergence of Various Numerical Methods Journal of Computer and Mathematical Sciences, Vol.6(6),290-297, June 2015 (An International Research Journal), www.compmath-journal.org ISSN 0976-5727 (Print) ISSN 2319-8133 (Online) Comparative Analysis

More information

Outline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations

Outline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations Methods for Systems of Methods for Systems of Outline Scientific Computing: An Introductory Survey Chapter 5 1 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

Scientific Computing. Roots of Equations

Scientific Computing. Roots of Equations ECE257 Numerical Methods and Scientific Computing Roots of Equations Today s s class: Roots of Equations Polynomials Polynomials A polynomial is of the form: ( x) = a 0 + a 1 x + a 2 x 2 +L+ a n x n f

More information

Algebra II. A2.1.1 Recognize and graph various types of functions, including polynomial, rational, and algebraic functions.

Algebra II. A2.1.1 Recognize and graph various types of functions, including polynomial, rational, and algebraic functions. Standard 1: Relations and Functions Students graph relations and functions and find zeros. They use function notation and combine functions by composition. They interpret functions in given situations.

More information

ab = c a If the coefficients a,b and c are real then either α and β are real or α and β are complex conjugates

ab = c a If the coefficients a,b and c are real then either α and β are real or α and β are complex conjugates Further Pure Summary Notes. Roots of Quadratic Equations For a quadratic equation ax + bx + c = 0 with roots α and β Sum of the roots Product of roots a + b = b a ab = c a If the coefficients a,b and c

More information

Step 1: Greatest Common Factor Step 2: Count the number of terms If there are: 2 Terms: Difference of 2 Perfect Squares ( + )( - )

Step 1: Greatest Common Factor Step 2: Count the number of terms If there are: 2 Terms: Difference of 2 Perfect Squares ( + )( - ) Review for Algebra 2 CC Radicals: r x p 1 r x p p r = x p r = x Imaginary Numbers: i = 1 Polynomials (to Solve) Try Factoring: i 2 = 1 Step 1: Greatest Common Factor Step 2: Count the number of terms If

More information

An efficient Newton-type method with fifth-order convergence for solving nonlinear equations

An efficient Newton-type method with fifth-order convergence for solving nonlinear equations Volume 27, N. 3, pp. 269 274, 2008 Copyright 2008 SBMAC ISSN 0101-8205 www.scielo.br/cam An efficient Newton-type method with fifth-order convergence for solving nonlinear equations LIANG FANG 1,2, LI

More information

MATH1190 CALCULUS 1 - NOTES AND AFTERNOTES

MATH1190 CALCULUS 1 - NOTES AND AFTERNOTES MATH90 CALCULUS - NOTES AND AFTERNOTES DR. JOSIP DERADO. Historical background Newton approach - from physics to calculus. Instantaneous velocity. Leibniz approach - from geometry to calculus Calculus

More information

IMPROVING THE CONVERGENCE ORDER AND EFFICIENCY INDEX OF QUADRATURE-BASED ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS

IMPROVING THE CONVERGENCE ORDER AND EFFICIENCY INDEX OF QUADRATURE-BASED ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS 136 IMPROVING THE CONVERGENCE ORDER AND EFFICIENCY INDEX OF QUADRATURE-BASED ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS 1Ogbereyivwe, O. and 2 Ojo-Orobosa, V. O. Department of Mathematics and Statistics,

More information

Core A-level mathematics reproduced from the QCA s Subject criteria for Mathematics document

Core A-level mathematics reproduced from the QCA s Subject criteria for Mathematics document Core A-level mathematics reproduced from the QCA s Subject criteria for Mathematics document Background knowledge: (a) The arithmetic of integers (including HCFs and LCMs), of fractions, and of real numbers.

More information

A new modified Halley method without second derivatives for nonlinear equation

A new modified Halley method without second derivatives for nonlinear equation Applied Mathematics and Computation 189 (2007) 1268 1273 www.elsevier.com/locate/amc A new modified Halley method without second derivatives for nonlinear equation Muhammad Aslam Noor *, Waseem Asghar

More information

MATHEMATICS LEARNING AREA. Methods Units 1 and 2 Course Outline. Week Content Sadler Reference Trigonometry

MATHEMATICS LEARNING AREA. Methods Units 1 and 2 Course Outline. Week Content Sadler Reference Trigonometry MATHEMATICS LEARNING AREA Methods Units 1 and 2 Course Outline Text: Sadler Methods and 2 Week Content Sadler Reference Trigonometry Cosine and Sine rules Week 1 Trigonometry Week 2 Radian Measure Radian

More information

Research Article Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear Equations

Research Article Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear Equations International Engineering Mathematics, Article ID 828409, 11 pages http://dx.doi.org/10.1155/2014/828409 Research Article Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear

More information

Introduction to conic sections. Author: Eduard Ortega

Introduction to conic sections. Author: Eduard Ortega Introduction to conic sections Author: Eduard Ortega 1 Introduction A conic is a two-dimensional figure created by the intersection of a plane and a right circular cone. All conics can be written in terms

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 =

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 = Chapter 5 Sequences and series 5. Sequences Definition 5. (Sequence). A sequence is a function which is defined on the set N of natural numbers. Since such a function is uniquely determined by its values

More information

Improving homotopy analysis method for system of nonlinear algebraic equations

Improving homotopy analysis method for system of nonlinear algebraic equations Journal of Advanced Research in Applied Mathematics Vol., Issue. 4, 010, pp. -30 Online ISSN: 194-9649 Improving homotopy analysis method for system of nonlinear algebraic equations M.M. Hosseini, S.M.

More information

INTRODUCTION TO NUMERICAL ANALYSIS

INTRODUCTION TO NUMERICAL ANALYSIS INTRODUCTION TO NUMERICAL ANALYSIS Cho, Hyoung Kyu Department of Nuclear Engineering Seoul National University 3. SOLVING NONLINEAR EQUATIONS 3.1 Background 3.2 Estimation of errors in numerical solutions

More information

SESSION CLASS-XI SUBJECT : MATHEMATICS FIRST TERM

SESSION CLASS-XI SUBJECT : MATHEMATICS FIRST TERM TERMWISE SYLLABUS SESSION-2018-19 CLASS-XI SUBJECT : MATHEMATICS MONTH July, 2018 to September 2018 CONTENTS FIRST TERM Unit-1: Sets and Functions 1. Sets Sets and their representations. Empty set. Finite

More information

Analysis Methods in Atmospheric and Oceanic Science

Analysis Methods in Atmospheric and Oceanic Science Analysis Methods in Atmospheric and Oceanic Science AOSC 652 Week 7, Day 1 13 Oct 2014 1 Student projects: 20% of the final grade: you will receive a numerical score for the project and final grade will

More information

A short presentation on Newton-Raphson method

A short presentation on Newton-Raphson method A short presentation on Newton-Raphson method Doan Tran Nguyen Tung 5 Nguyen Quan Ba Hong 6 Students at Faculty of Math and Computer Science. Ho Chi Minh University of Science, Vietnam email. nguyenquanbahong@gmail.com

More information

a factors The exponential 0 is a special case. If b is any nonzero real number, then

a factors The exponential 0 is a special case. If b is any nonzero real number, then 0.1 Exponents The expression x a is an exponential expression with base x and exponent a. If the exponent a is a positive integer, then the expression is simply notation that counts how many times the

More information

UNC Charlotte Super Competition Level 3 Test March 4, 2019 Test with Solutions for Sponsors

UNC Charlotte Super Competition Level 3 Test March 4, 2019 Test with Solutions for Sponsors . Find the minimum value of the function f (x) x 2 + (A) 6 (B) 3 6 (C) 4 Solution. We have f (x) x 2 + + x 2 + (D) 3 4, which is equivalent to x 0. x 2 + (E) x 2 +, x R. x 2 + 2 (x 2 + ) 2. How many solutions

More information

THE PYTHAGOREAN THEOREM

THE PYTHAGOREAN THEOREM THE STORY SO FAR THE PYTHAGOREAN THEOREM USES OF THE PYTHAGOREAN THEOREM USES OF THE PYTHAGOREAN THEOREM SOLVE RIGHT TRIANGLE APPLICATIONS USES OF THE PYTHAGOREAN THEOREM SOLVE RIGHT TRIANGLE APPLICATIONS

More information

Check boxes of Edited Copy of Sp Topics (was 261-pilot)

Check boxes of Edited Copy of Sp Topics (was 261-pilot) Check boxes of Edited Copy of 10023 Sp 11 253 Topics (was 261-pilot) Intermediate Algebra (2011), 3rd Ed. [open all close all] R-Review of Basic Algebraic Concepts Section R.2 Ordering integers Plotting

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

MA 8019: Numerical Analysis I Solution of Nonlinear Equations

MA 8019: Numerical Analysis I Solution of Nonlinear Equations MA 8019: Numerical Analysis I Solution of Nonlinear Equations Suh-Yuh Yang ( 楊肅煜 ) Department of Mathematics, National Central University Jhongli District, Taoyuan City 32001, Taiwan syyang@math.ncu.edu.tw

More information

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. 29 May :45 11:45

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. 29 May :45 11:45 Two hours MATH20602 To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER NUMERICAL ANALYSIS 1 29 May 2015 9:45 11:45 Answer THREE of the FOUR questions. If more

More information

Exam 4 SCORE. MA 114 Exam 4 Spring Section and/or TA:

Exam 4 SCORE. MA 114 Exam 4 Spring Section and/or TA: Exam 4 Name: Section and/or TA: Last Four Digits of Student ID: Do not remove this answer page you will return the whole exam. You will be allowed two hours to complete this test. No books or notes may

More information