High-order Newton-type iterative methods with memory for solving nonlinear equations


MATHEMATICAL COMMUNICATIONS
Math. Commun. 19(2014)

High-order Newton-type iterative methods with memory for solving nonlinear equations

Xiaofeng Wang and Tie Zhang

School of Mathematics and Physics, Bohai University, Jinzhou, Liaoning, China
College of Sciences, Northeastern University, Shenyang, Liaoning, China

Received May; accepted December 7

Abstract. In this paper, we present a new family of two-step Newton-type iterative methods with memory for solving nonlinear equations. To obtain a Newton-type method with memory, we first present an optimal two-parameter fourth-order Newton-type method without memory. Then, based on this two-parameter method without memory, we present a new two-parameter Newton-type method with memory. Using two self-correcting parameters calculated by Hermite interpolatory polynomials, the R-order of convergence of the new Newton-type method with memory is increased from 4 to 5.7016 without any additional calculations. Numerical comparisons are made with some known methods, by means of the basins of attraction and through numerical computations, to demonstrate the efficiency and the performance of the presented methods.

AMS subject classifications: 65H05, 65B99

Key words: Newton-type iterative method with memory, nonlinear equations, R-order of convergence, root-finding

1. Introduction

Finding a root of a nonlinear equation f(x) = 0 is a classical problem in scientific computation. Recently, many iterative methods have been proposed for solving nonlinear equations; see [1]-[14] and the references therein. Among these methods, the iterative methods with memory are particularly worth studying. An iterative method with memory can improve the order of convergence of the corresponding method without memory without any additional function evaluations, and therefore it has very high computational efficiency. There are two kinds of iterative methods with memory, i.e. Steffensen-type methods and Newton-type methods. In this paper, we consider only Newton-type methods with memory.
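All of the schemes discussed below are built on the classical Newton step N(x_n) = x_n - f(x_n)/f'(x_n). As a point of reference, here is a minimal sketch of plain Newton iteration (the function name and stopping rule are illustrative, not from the paper):

```python
# Plain Newton iteration for f(x) = 0: x_{n+1} = x_n - f(x_n)/f'(x_n).
# This is the building block N(x_n) used by all two-step schemes below.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:          # stop when the correction is negligible
            break
    return x

# Example: the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Newton's method converges quadratically for simple roots; the multipoint methods below raise this order without extra derivative evaluations.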
For example, in [9] Petković et al. proposed a two-step Newton-type iterative method with memory of R-order 4.56 based on inverse interpolation, which can be written as

  N(x_n) = x_n - f(x_n)/f'(x_n),
  φ(t) = [ (t - x_n)/(f(t) - f(x_n)) - 1/f'(x_n) ] / (f(t) - f(x_n)),
  y_n = N(x_n) + f(x_n)^2 φ(y_{n-1}),
  x_{n+1} = N(x_n) + f(x_n)^2 φ(y_n),        (1)

*For interpretation of color in all figures, the reader is referred to the web version of this article.
Corresponding author. E-mail addresses: w888w@6.com (X. Wang), ztmath@6.com (T. Zhang)
(c) 2014 Department of Mathematics, University of Osijek

where y_0 = N(x_0). The basic idea of (1) comes from Neta, who in [5] derived the following three-step Newton-type method with memory of R-order 10.815:

  N(x_n) = x_n - f(x_n)/f'(x_n),
  ψ(t) = [ (t - x_n)/(f(t) - f(x_n)) - 1/f'(x_n) ] / (f(t) - f(x_n)),
  y_n = N(x_n) + f(x_n)^2 [f(y_{n-1})ψ(z_{n-1}) - f(z_{n-1})ψ(y_{n-1})] / (f(y_{n-1}) - f(z_{n-1})),
  z_n = N(x_n) + f(x_n)^2 [f(y_n)ψ(z_{n-1}) - f(z_{n-1})ψ(y_n)] / (f(y_n) - f(z_{n-1})),
  x_{n+1} = N(x_n) + f(x_n)^2 [f(y_n)ψ(z_n) - f(z_n)ψ(y_n)] / (f(y_n) - f(z_n)),        (2)

where y_0 = N(x_0) and z_0 is initialized from y_0 and f(x_0). The efficiency indices of methods (1) and (2) are low, because methods (1) and (2) require four and six functional evaluations, respectively, in their first iteration. In order to accelerate the convergence and improve the computational efficiency of Newton-type methods with memory, Wang and Zhang [14] developed a two-step Newton-type method with memory of R-order 5, given by

  λ_n = H''_4(x_n) / (2f'(x_n)),
  y_n = x_n - f(x_n) / (f'(x_n) - λ_n f(x_n)),
  x_{n+1} = y_n - f(y_n) / (f'(x_n) - λ_n f(x_n)) · (f(x_n) + (1 + β)f(y_n)) / (f(x_n) + βf(y_n)),        (3)

where β ∈ R,

  H''_4(x_n) = 2f[x_n, x_n, y_{n-1}] + (4f[x_n, x_n, y_{n-1}, x_{n-1}] - 2f[x_n, y_{n-1}, x_{n-1}, x_{n-1}])(x_n - y_{n-1}),

and H_4(x) = H_4(x; x_n, x_n, y_{n-1}, x_{n-1}, x_{n-1}) is a Hermite interpolatory polynomial of fourth degree. The efficiency index of method (3) is 5^{1/3} ≈ 1.7100, which is higher than the efficiency indices of methods (1) and (2); see [14]. The purpose of this paper is to further improve the R-order of convergence and the efficiency index of two-step Newton-type iterative methods and to give the corresponding convergence analysis. This paper is organized as follows. In Section 2, we derive a family of optimal fourth-order iterative methods without memory for solving nonlinear equations. Further accelerations of the convergence speed are attained in Section 3, where we obtain a new family of two-point Newton-type iterative methods with memory by varying two free parameters in each full iteration.
The two self-accelerating parameters are calculated using information available from the current and previous iterations. The corresponding R-order of convergence is increased from 4 to 5.7016. The maximal efficiency index of the new method with memory is (5.7016)^{1/3} ≈ 1.7865, which is higher than the efficiency indices of the existing Newton-type methods. Numerical examples are given in Section 4 to illustrate the convergence behavior of our methods for simple roots. In Section 5, some dynamical aspects of the presented methods are studied. Section 6 gives a short conclusion.

2. The two-step Newton-type method without memory

In order to get a two-step Newton-type iterative method with memory based on the well-known Ostrowski method [7, 10], we consider the following two-step iterative method without memory, constructed by the weight function method:

  y_n = x_n - f(x_n)/f'(x_n) · G(s_n),
  x_{n+1} = y_n - f(y_n)/(2f[x_n, y_n] - f'(x_n)) · H(s_n),        (4)

where G(s_n) and H(s_n) are two weight functions with s_n = f(x_n)/f'(x_n). The functions G(s_n) and H(s_n) should be determined so that the iterative method (4) is of order four. For the iterative method defined by (4) we have the following result.

Theorem 1. Let a ∈ I be a simple zero of a sufficiently differentiable function f : I ⊂ R → R for an open interval I. Then the iterative method defined by (4) is of fourth-order convergence when

  G(0) = 1, G'(0) = γ, H(0) = 1, H'(0) = 0, H''(0) = 2β, γ, β ∈ R,        (5)

and it satisfies the error equation

  e_{n+1} = (c_2 - γ)(c_2^2 - c_3 - β - c_2 γ) e_n^4 + O(e_n^5).        (6)

Proof. Let e_n = x_n - a and c_n = (1/n!) f^{(n)}(a)/f'(a), n = 2, 3, .... Using the Taylor expansion and taking into account f(a) = 0, we have

  f(x_n) = f'(a)[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + O(e_n^6)],        (7)
  f'(x_n) = f'(a)[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + 5c_5 e_n^4 + O(e_n^5)].        (8)

Dividing (7) by (8) we obtain

  s_n = f(x_n)/f'(x_n) = e_n - c_2 e_n^2 + 2(c_2^2 - c_3) e_n^3 + (-4c_2^3 + 7c_2 c_3 - 3c_4) e_n^4 + O(e_n^5).        (9)

Using (5), (9) and the Taylor expansion G(s_n) = G(0) + G'(0) s_n + O(s_n^2), we have

  G(s_n) = 1 + γ s_n = 1 + γ(e_n - c_2 e_n^2 + 2(c_2^2 - c_3) e_n^3 + (-4c_2^3 + 7c_2 c_3 - 3c_4) e_n^4 + O(e_n^5)).        (10)

From (9) and (10) we get

  e_{n,y} = y_n - a = x_n - a - G(s_n) f(x_n)/f'(x_n)
          = (c_2 - γ) e_n^2 + (-2c_2^2 + 2c_3 + 2c_2 γ) e_n^3 + (4c_2^3 - 7c_2 c_3 + 3c_4 - 5c_2^2 γ + 4c_3 γ) e_n^4 + O(e_n^5).        (11)

By an argument similar to that of (7), we have

  f(y_n) = f'(a)[(c_2 - γ) e_n^2 - (2c_2^2 - 2c_3 - 2c_2 γ) e_n^3 + (5c_2^3 - 7c_2^2 γ + 4c_3 γ + 3c_4 + c_2(-7c_3 + γ^2)) e_n^4 + O(e_n^5)].        (12)

From (7), (8) and (12) we obtain

  f[x_n, y_n] = f'(a)(1 + c_2 e_n + (c_3 + c_2^2 - c_2 γ) e_n^2 + (-2c_2^3 + 3c_2 c_3 + c_4 + 2c_2^2 γ - c_3 γ) e_n^3 + O(e_n^4)),        (13)
  f(y_n)/(2f[x_n, y_n] - f'(x_n)) = (c_2 - γ) e_n^2 + (-2c_2^2 + 2c_3 + 2c_2 γ) e_n^3 + (3c_2^3 - 3c_2^2 γ + 3(c_4 + c_3 γ) - c_2(6c_3 + γ^2)) e_n^4 + O(e_n^5).        (14)

Using (5), (9) and the Taylor expansion H(s_n) = H(0) + H'(0) s_n + H''(0) s_n^2/2 + O(s_n^3), we get

  H(s_n) = 1 + β e_n^2 - 2β c_2 e_n^3 + β(5c_2^2 - 4c_3) e_n^4 + O(e_n^5).        (15)

Hence, together with (11), (14) and (15), we obtain the error equation

  e_{n+1} = x_{n+1} - a = y_n - a - H(s_n) f(y_n)/(2f[x_n, y_n] - f'(x_n))
          = (c_2 - γ)(c_2^2 - c_3 - β - γ c_2) e_n^4 + O(e_n^5).        (16)

The proof is completed.

It is obvious that the iterative method (4) requires two function evaluations and one derivative evaluation per iteration; thus it is an optimal scheme. Let p^{1/n} be the efficiency index (see [7]), where p is the order of the method and n is the number of functional evaluations per iteration required by the method. We see that the new method (4) without memory has the efficiency index 4^{1/3} ≈ 1.5874.

Remark 1. Based on Theorem 1, we obtain a new iterative family without memory as follows:

  y_n = x_n - f(x_n)/f'(x_n) · G(s_n),
  x_{n+1} = y_n - f(y_n)/(2f[x_n, y_n] - f'(x_n)) · H(s_n),        (17)

where s_n = f(x_n)/f'(x_n), and G(s_n) and H(s_n) are two weight functions satisfying condition (5). The functions G(s_n) and H(s_n) in (17) can take many forms. For example, taking G(s_n) = 1/(1 - γ s_n) and H(s_n) = 1/(1 - β s_n^2), we get the following fourth-order iterative method without memory:

  y_n = x_n - f(x_n)/(f'(x_n) - γ f(x_n)),
  x_{n+1} = y_n - f(y_n)/(2f[x_n, y_n] - f'(x_n)) · f'(x_n)^2/(f'(x_n)^2 - β f(x_n)^2),   (γ, β ∈ R).        (18)
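The fourth-order two-step method just displayed (weights G(s) = 1/(1 - γs) and H(s) = 1/(1 - βs^2)) is straightforward to implement. A sketch, assuming a simple step-size stopping rule (names are illustrative, not from the paper); note that for γ = β = 0 the weights are G = H = 1 and the second step is the divided-difference form of Ostrowski's method:

```python
# Sketch of the fourth-order two-step scheme with weight functions
# G(s) = 1/(1 - gamma*s) and H(s) = 1/(1 - beta*s^2); gamma = beta = 0
# reduces the second step to the divided-difference form of Ostrowski's
# method. Names and stopping rule are illustrative.
def two_step4(f, df, x0, gamma=0.0, beta=0.0, tol=1e-13, max_iter=25):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0:
            return x
        y = x - fx / (dfx - gamma * fx)          # first (Newton-like) step
        fy = f(y)
        if y == x or fy == 0.0:
            return y
        dd = (fy - fx) / (y - x)                 # divided difference f[x_n, y_n]
        x_new = (y - fy / (2.0 * dd - dfx)
                 * dfx * dfx / (dfx * dfx - beta * fx * fx))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Three function/derivative evaluations per iteration give the optimal efficiency index 4^{1/3} discussed above.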

Taking G(s_n) = 1/(1 - γ s_n) and H(s_n) = 1 + β s_n^2, we obtain the following fourth-order iterative method without memory:

  y_n = x_n - f(x_n)/(f'(x_n) - γ f(x_n)),
  x_{n+1} = y_n - f(y_n)/(2f[x_n, y_n] - f'(x_n)) · (1 + β (f(x_n)/f'(x_n))^2),   (γ, β ∈ R).        (19)

Remark 2. It is noteworthy that the error equation (6) can be written in the form

  e_{n+1} = (c_2 - γ)(c_2^2 - c_3 - β - c_2 γ) e_n^4 + O(e_n^5)
          = (c_2 - γ)[c_2(c_2 - γ) - (c_3 + β)] e_n^4 + O(e_n^5).        (20)

We can improve the order of convergence of method (17) by taking the parameters γ = c_2 and β = -c_3 in (20). Therefore, a new method with memory can be obtained from method (17) without memory.

3. New Newton-type iterative methods with memory

In this section we improve the convergence rate of method (17) by varying the parameters γ and β in each full iteration. Taking γ = c_2 and β = -c_3 in (20) would raise the order of convergence of method (17). However, the exact values of f'(a), f''(a) and f'''(a) are not available in practice, and such acceleration of convergence cannot be realized directly. Instead, we approximate the parameter γ by γ_n and the parameter β by β_n. The parameters γ_n and β_n can be computed using information available from the current and previous iterations, and they satisfy

  lim_{n→∞} γ_n = c_2 = f''(a)/(2f'(a))  and  lim_{n→∞} β_n = -c_3 = -f'''(a)/(6f'(a)),

so that the fourth-order asymptotic error constant in (20) vanishes. In this paper, the self-accelerating parameter γ_n is taken equal to the parameter λ_n of the Newton-type method (3). We consider the following four methods for β_n:

Method 1:

  γ_n = H''_4(x_n)/(2f'(x_n))  and  β_n = -H'''_4(x_n)/(6f'(x_n)),        (21)

where

  H'''_4(x_n) = 6f[x_n, x_n, y_{n-1}, x_{n-1}] + 6f[x_n, x_n, y_{n-1}, x_{n-1}, x_{n-1}](2x_n - x_{n-1} - y_{n-1}).

Method 2:

  γ_n = H''_4(x_n)/(2f'(x_n))  and  β_n = -H'''_3(y_n)/(6f'(x_n)),        (22)

where H'''_3(y_n) = 6f[y_n, x_n, x_n, y_{n-1}] and H_3(x) = H_3(x; y_n, x_n, x_n, y_{n-1}) is a Hermite interpolatory polynomial of third degree.

Method 3:

  γ_n = H''_4(x_n)/(2f'(x_n))  and  β_n = -H'''_4(y_n)/(6f'(x_n)),        (23)

where

  H'''_4(y_n) = H'''_3(y_n) + 6f[y_n, x_n, x_n, y_{n-1}, x_{n-1}](3y_n - y_{n-1} - 2x_n)

and H_4(x) = H_4(x; y_n, x_n, x_n, y_{n-1}, x_{n-1}) is a Hermite interpolatory polynomial of fourth degree.

Method 4:

  γ_n = H''_4(x_n)/(2f'(x_n))  and  β_n = -H'''_5(y_n)/(6f'(x_n)),        (24)

where

  H'''_5(y_n) = H'''_4(y_n) + 6f[y_n, x_n, x_n, y_{n-1}, x_{n-1}, x_{n-1}][(y_n - y_{n-1})(y_n - x_{n-1}) + (y_n - x_n)^2 + 2(y_n - x_n)(2y_n - y_{n-1} - x_{n-1})]

and H_5(x) = H_5(x; y_n, x_n, x_n, y_{n-1}, x_{n-1}, x_{n-1}) is a Hermite interpolatory polynomial of fifth degree. Here f[x_n, x_n] = f'(x_n) and f[x_n, t] = (f(t) - f(x_n))/(t - x_n) are first-order divided differences. Higher-order divided differences are defined recursively: for nodes u_0, u_1, ..., u_m,

  f[u_0, u_1, ..., u_m] = (f[u_1, ..., u_m] - f[u_0, ..., u_{m-1}]) / (u_m - u_0),

with f[u, u] = f'(u) for confluent nodes.

Remark 3. We can now state the following two-parameter Newton-type iterative method with memory:

  y_n = x_n - f(x_n)/f'(x_n) · G(s_n),
  x_{n+1} = y_n - f(y_n)/(2f[x_n, y_n] - f'(x_n)) · H(s_n),        (25)

where s_n = f(x_n)/f'(x_n), G(s_n) satisfies the conditions G(0) = 1, G'(0) = γ_n, and H(s_n) satisfies the conditions H(0) = 1, H'(0) = 0, H''(0) = 2β_n. The parameters γ_n and β_n are calculated by one of the formulas (21)-(24), and they depend on the data available from the current and the previous iterations.
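The memory mechanism of Section 3 can be illustrated with a simplified sketch. Here only γ is self-accelerated, using the second-order divided difference f[x_n, y_{n-1}, x_{n-1}]/f'(x_n), which tends to c_2 = f''(a)/(2f'(a)), while β stays fixed; the paper's Hermite-based updates for both parameters are more elaborate, and all names below are illustrative:

```python
# Simplified illustration of the with-memory idea: only gamma is
# self-accelerated, via the second-order divided difference
#   f[x_n, y_{n-1}, x_{n-1}] / f'(x_n)  ->  c_2 = f''(a)/(2 f'(a)),
# while beta stays fixed. This is a sketch of the acceleration idea,
# not the paper's exact Hermite-based formulas.
def with_memory(f, df, x0, gamma0=0.01, beta=0.0, tol=1e-13, max_iter=30):
    x, gamma = x0, gamma0
    x_prev = y_prev = None
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0:
            return x
        if x_prev is not None and x not in (x_prev, y_prev) and x_prev != y_prev:
            d1 = (f(y_prev) - fx) / (y_prev - x)              # f[x_n, y_{n-1}]
            d2 = (f(x_prev) - f(y_prev)) / (x_prev - y_prev)  # f[y_{n-1}, x_{n-1}]
            gamma = ((d2 - d1) / (x_prev - x)) / dfx          # self-correcting parameter
        y = x - fx / (dfx - gamma * fx)
        fy = f(y)
        if y == x or fy == 0.0:
            return y
        dd = (fy - fx) / (y - x)                              # f[x_n, y_n]
        x_new = y - fy / (2.0 * dd - dfx) * (1.0 + beta * (fx / dfx) ** 2)
        x_prev, y_prev = x, y
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

The key point is that the update of γ reuses quantities already computed in the current and previous iterations, so the acceleration costs no additional function evaluations.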

Remark 4. Parameter (21) can be computed cheaply, because the Hermite interpolation polynomial H_4(x) has already been used for the parameter γ_n. The computation of the self-correcting parameters γ_n and β_n in this paper is more involved. The main purpose of this study is to choose the available nodes as well as possible in order to obtain the maximal convergence order of two-step Newton-type methods with memory; hence, simpler approximations of the self-accelerating parameters γ_n and β_n are not considered in this paper.

The concept of the R-order of convergence [6] and the following assertion (see [1]) will be applied to estimate the convergence order of the iterative method with memory (25).

Theorem 2. If the errors of approximations e_j = x_j - a obtained in an iterative root-finding method IM satisfy

  e_{k+1} ~ Π_{i=0}^{n} (e_{k-i})^{m_i},  k ≥ k_0({e_k}),

then the R-order of convergence of IM, denoted by O_R(IM, a), satisfies the inequality O_R(IM, a) ≥ s*, where s* is the unique positive solution of the equation

  s^{n+1} - Σ_{i=0}^{n} m_i s^{n-i} = 0.

Lemma 1. Let H_m be the Hermite interpolating polynomial of degree m that interpolates a function f at the interpolation nodes y_n, x_n, x_n, t_1, ..., t_{m-2} contained in an interval I, where the derivative f^{(m+1)} is continuous on I and H_m satisfies the conditions

  H_m(y_n) = f(y_n), H_m(x_n) = f(x_n), H'_m(x_n) = f'(x_n), H_m(t_j) = f(t_j)  (j = 1, ..., m-2).

Define the errors e_{t,j} = t_j - a (j = 1, ..., m-2) and assume that

1. all nodes y_n, x_n, t_1, ..., t_{m-2} are sufficiently close to the zero a;
2. the conditions e_n = x_n - a = O(e_{t,1} ··· e_{t,m-2}) and e_{n,y} = y_n - a = O(e_n e_{t,1} ··· e_{t,m-2}) hold.

Then

  β_n + c_3 ~ (-1)^m c_{m+1} Π_{j=1}^{m-2} e_{t,j}.        (26)

Proof. The error of the Hermite interpolation can be expressed as follows:

  f(x) - H_m(x) = f^{(m+1)}(ξ)/(m+1)! · (x - y_n)(x - x_n)^2 Π_{j=1}^{m-2}(x - t_j)  (ξ ∈ I).        (27)

Differentiating (27) three times at the point x = y_n we obtain

  f'''(y_n) - H'''_m(y_n) = 6 f^{(m+1)}(ξ)/(m+1)! [ Π_{j=1}^{m-2}(y_n - t_j) + 2(y_n - x_n) Σ_{k=1}^{m-2} Π_{j≠k}(y_n - t_j) + (y_n - x_n)^2 Σ_{1≤k<i≤m-2} Π_{j≠k,i}(y_n - t_j) ]  (ξ ∈ I),        (28)

that is,

  H'''_m(y_n) = f'''(y_n) - 6 f^{(m+1)}(ξ)/(m+1)! [ Π_{j=1}^{m-2}(y_n - t_j) + 2(y_n - x_n) Σ_{k=1}^{m-2} Π_{j≠k}(y_n - t_j) + (y_n - x_n)^2 Σ_{1≤k<i≤m-2} Π_{j≠k,i}(y_n - t_j) ]  (ξ ∈ I).        (29)

Taylor series of the derivatives of f at the points x_n, y_n ∈ I and ξ ∈ I about the zero a of f give

  f'(x_n) = f'(a)(1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + O(e_n^4)),        (30)
  f'(y_n) = f'(a)(1 + 2c_2 e_{n,y} + 3c_3 e_{n,y}^2 + 4c_4 e_{n,y}^3 + O(e_{n,y}^4)),        (31)
  f''(y_n) = f'(a)(2c_2 + 6c_3 e_{n,y} + 12c_4 e_{n,y}^2 + O(e_{n,y}^3)),        (32)
  f'''(y_n) = f'(a)(6c_3 + 24c_4 e_{n,y} + O(e_{n,y}^2)),        (33)
  f^{(m+1)}(ξ) = f'(a)((m+1)! c_{m+1} + (m+2)! c_{m+2} e_ξ + O(e_ξ^2)),        (34)

where e_ξ = ξ - a. Substituting (33) and (34) into (29) we have

  H'''_m(y_n) ≈ 6f'(a)[ c_3 + 4c_4 e_{n,y} - c_{m+1}( Π_{j=1}^{m-2}(y_n - t_j) + 2(y_n - x_n) Σ_{k=1}^{m-2} Π_{j≠k}(y_n - t_j) + (y_n - x_n)^2 Σ_{1≤k<i≤m-2} Π_{j≠k,i}(y_n - t_j) ) ].        (35)

Dividing (35) by (30) we obtain

  β_n = -H'''_m(y_n)/(6f'(x_n)) ≈ -c_3 + c_{m+1}( Π_{j=1}^{m-2}(y_n - t_j) + 2(y_n - x_n) Σ_{k=1}^{m-2} Π_{j≠k}(y_n - t_j) + (y_n - x_n)^2 Σ_{1≤k<i≤m-2} Π_{j≠k,i}(y_n - t_j) ),        (36)

and, since y_n - t_j ≈ -e_{t,j} and the terms containing y_n - x_n are of higher order,

  β_n + c_3 ~ (-1)^m c_{m+1} Π_{j=1}^{m-2} e_{t,j}.        (37)

The proof is completed.

Theorem 3. Let the varying parameters γ_n and β_n in the iterative method (25) be calculated by (21). If an initial approximation x_0 is sufficiently close to a simple root a of f(x), then the R-order of convergence of the iterative method (25) with memory is at least 5.3059.

Proof. Let the sequence {x_n} be generated by an iterative method (IM) converging to the root a of f(x) with the R-order O_R(IM, a) ≥ r. We write

  e_{n+1} ~ D_{n,r} e_n^r,  e_n = x_n - a,        (38)

where D_{n,r} tends to the asymptotic error constant D_r of (IM) as n → ∞. Hence

  e_{n+1} ~ D_{n,r}(D_{n-1,r} e_{n-1}^r)^r = D_{n,r} D_{n-1,r}^r e_{n-1}^{r^2}.        (39)

Similarly, assuming that the iterative sequence {y_n} has the R-order p, we have

  e_{n,y} ~ D_{n,p} e_n^p ~ D_{n,p}(D_{n-1,r} e_{n-1}^r)^p = D_{n,p} D_{n-1,r}^p e_{n-1}^{rp}.        (40)

Using (11) and (16) with γ_n and β_n, we obtain the corresponding error relations for the method with memory (25):

  e_{n,y} = y_n - a ~ (c_2 - γ_n) e_n^2,        (41)
  e_{n+1} = x_{n+1} - a ~ (c_2 - γ_n)[c_2(c_2 - γ_n) - (c_3 + β_n)] e_n^4.        (42)

Here, higher-order terms in (41)-(42) are omitted. The Hermite interpolating polynomial H_4(x) satisfies the conditions H_4(x_n) = f(x_n), H'_4(x_n) = f'(x_n), H_4(y_{n-1}) = f(y_{n-1}), H_4(x_{n-1}) = f(x_{n-1}) and H'_4(x_{n-1}) = f'(x_{n-1}). The error of the Hermite interpolation can be expressed as follows:

  f(x) - H_4(x) = f^{(5)}(ξ)/5! (x - x_n)^2 (x - x_{n-1})^2 (x - y_{n-1})  (ξ ∈ I).        (43)

Differentiating (43) twice at the point x = x_n we obtain

  f''(x_n) - H''_4(x_n) = 2 f^{(5)}(ξ)/5! (x_n - x_{n-1})^2 (x_n - y_{n-1})  (ξ ∈ I),        (44)
  H''_4(x_n) = f''(x_n) - 2 f^{(5)}(ξ)/5! (x_n - x_{n-1})^2 (x_n - y_{n-1})  (ξ ∈ I),        (45)

and differentiating (43) three times at x = x_n gives

  f'''(x_n) - H'''_4(x_n) = 6 f^{(5)}(ξ)/5! [2(x_n - x_{n-1})(x_n - y_{n-1}) + (x_n - x_{n-1})^2]  (ξ ∈ I),        (46)
  H'''_4(x_n) = f'''(x_n) - 6 f^{(5)}(ξ)/5! [2(x_n - x_{n-1})(x_n - y_{n-1}) + (x_n - x_{n-1})^2]  (ξ ∈ I).        (47)

Taylor series of the derivatives of f at the point x_n and at ξ ∈ I about the zero a of f give

  f''(x_n) = f'(a)(2c_2 + 6c_3 e_n + 12c_4 e_n^2 + O(e_n^3)),        (48)
  f'''(x_n) = f'(a)(6c_3 + 24c_4 e_n + O(e_n^2)),        (49)
  f^{(5)}(ξ) = f'(a)(5! c_5 + 6! c_6 e_ξ + O(e_ξ^2)),        (50)

where e_ξ = ξ - a. Substituting (48) and (50) into (45) we have

  H''_4(x_n) ≈ f'(a)(2c_2 + 2c_5 e_{n-1,y} e_{n-1}^2 + 6c_3 e_n).        (51)

Substituting (49) and (50) into (47) we have

  H'''_4(x_n) ≈ 6f'(a)(c_3 - c_5(2e_{n-1,y} e_{n-1} + e_{n-1}^2) + 4c_4 e_n).        (52)

Dividing (51) by (30) we obtain

  γ_n = H''_4(x_n)/(2f'(x_n)) ≈ c_2 + c_5 e_{n-1,y} e_{n-1}^2 + (3c_3 - 2c_2^2) e_n,        (53)
  c_2 - γ_n ~ -c_5 e_{n-1,y} e_{n-1}^2.        (54)

Dividing (52) by (30) we obtain

  β_n = -H'''_4(x_n)/(6f'(x_n)) ≈ -(c_3 - c_5(2e_{n-1,y} e_{n-1} + e_{n-1}^2) + (4c_4 - 2c_2 c_3) e_n),        (55)
  c_3 + β_n ≈ c_5(2e_{n-1,y} e_{n-1} + e_{n-1}^2) - (4c_4 - 2c_2 c_3) e_n ~ c_5 e_{n-1}^2.        (56)

According to (40), (41), (54) and (56), we get

  e_{n,y} ~ (c_2 - γ_n) e_n^2 ~ c_5 e_{n-1,y} e_{n-1}^2 (D_{n-1,r} e_{n-1}^r)^2 ~ c_5 D_{n-1,p} D_{n-1,r}^2 e_{n-1}^{2r+p+2},        (57)
  e_{n+1} ~ c_5 e_{n-1,y} e_{n-1}^2 [c_2 c_5 e_{n-1,y} e_{n-1}^2 + c_5 e_{n-1}^2] e_n^4 ~ c_5^2 e_{n-1,y} e_{n-1}^4 e_n^4 ~ c_5^2 D_{n-1,p} D_{n-1,r}^4 e_{n-1}^{4r+p+4}.        (58)

By comparing the exponents of e_{n-1} appearing in the two pairs of relations ((40), (57)) and ((39), (58)), we get the following system of equations:

  rp = 2r + p + 2,
  r^2 = 4r + p + 4.        (59)

The positive solution of system (59) is given by r ≈ 5.3059 and p ≈ 2.9289. Therefore, the R-order of the method with memory (25), when the parameters are calculated by (21), is at least 5.3059.

Using the results of Lemma 1 and Theorem 2, we get Theorem 4 as follows:

Theorem 4. Let the varying parameters γ_n and β_n in the iterative method (25) be calculated by (22), (23) and (24), respectively. If an initial approximation x_0 is sufficiently close to a simple root a of f(x), then the R-order of convergence of the iterative method (25) with memory is at least 5.4356, 5.5707 and 5.7016, respectively.

Proof. Method 2: γ_n and β_n are calculated by (22). Using Lemma 1 for m = 3 and t_1 = y_{n-1}, we obtain

  c_3 + β_n ~ -c_4 e_{n-1,y}.        (60)

According to (41), (54) and (60), we get

  e_{n+1} ~ c_5 e_{n-1,y} e_{n-1}^2 [c_2 c_5 e_{n-1,y} e_{n-1}^2 - c_4 e_{n-1,y}] e_n^4 ~ c_4 c_5 e_{n-1}^2 e_{n-1,y}^2 e_n^4 ~ c_4 c_5 D_{n-1,p}^2 D_{n-1,r}^4 e_{n-1}^{4r+2p+2}.        (61)

By comparing the exponents of e_{n-1} in the two pairs of relations ((40), (57)) and ((39), (61)), we get the following system of equations:

  rp = 2r + p + 2,
  r^2 = 4r + 2p + 2.        (62)

The positive solution of system (62) is given by r ≈ 5.4356 and p ≈ 2.9018. Therefore, the R-order of the method with memory (25), when the parameters are calculated by (22), is at least 5.4356.

Method 3: γ_n and β_n are calculated by (23). Using Lemma 1 for m = 4, t_1 = y_{n-1} and t_2 = x_{n-1}, we obtain

  c_3 + β_n ~ c_5 e_{n-1,y} e_{n-1}.        (63)

According to (41), (54) and (63), we get

  e_{n+1} ~ c_5 e_{n-1,y} e_{n-1}^2 [c_2 c_5 e_{n-1,y} e_{n-1}^2 + c_5 e_{n-1,y} e_{n-1}] e_n^4 ~ c_5^2 e_{n-1}^3 e_{n-1,y}^2 e_n^4 ~ c_5^2 D_{n-1,p}^2 D_{n-1,r}^4 e_{n-1}^{4r+2p+3}.        (64)

By comparing the exponents of e_{n-1} in the two pairs of relations ((40), (57)) and ((39), (64)), we get the following system of equations:

  rp = 2r + p + 2,
  r^2 = 4r + 2p + 3.        (65)

The positive solution of system (65) is given by r ≈ 5.5707 and p ≈ 2.8751. Therefore, the R-order of the method with memory (25), when the parameters are calculated by (23), is at least 5.5707.

Method 4: γ_n and β_n are calculated by (24). The Hermite interpolating polynomial H_5(x) satisfies the conditions H_5(y_n) = f(y_n), H_5(x_n) = f(x_n), H'_5(x_n) = f'(x_n), H_5(y_{n-1}) = f(y_{n-1}), H_5(x_{n-1}) = f(x_{n-1}) and H'_5(x_{n-1}) = f'(x_{n-1}). The error of the Hermite interpolation can be expressed as follows:

  f(x) - H_5(x) = f^{(6)}(ξ)/6! (x - y_n)(x - x_n)^2 (x - x_{n-1})^2 (x - y_{n-1})  (ξ ∈ I).        (66)

Differentiating (66) three times at the point x = y_n we obtain

  f'''(y_n) - H'''_5(y_n) = 6 f^{(6)}(ξ)/6! [(y_n - y_{n-1})(y_n - x_{n-1})^2 + 2(y_n - x_n)(y_n - x_{n-1})^2 + 4(y_n - x_n)(y_n - x_{n-1})(y_n - y_{n-1}) + 2(y_n - x_n)^2 (y_n - x_{n-1}) + (y_n - x_n)^2 (y_n - y_{n-1})]  (ξ ∈ I),        (67)

  H'''_5(y_n) = f'''(y_n) - 6 f^{(6)}(ξ)/6! [(y_n - y_{n-1})(y_n - x_{n-1})^2 + 2(y_n - x_n)(y_n - x_{n-1})^2 + 4(y_n - x_n)(y_n - x_{n-1})(y_n - y_{n-1}) + 2(y_n - x_n)^2 (y_n - x_{n-1}) + (y_n - x_n)^2 (y_n - y_{n-1})]  (ξ ∈ I).        (68)

The Taylor series of f^{(6)} at the point ξ ∈ I about the zero a of f gives

  f^{(6)}(ξ) = f'(a)(6! c_6 + 7! c_7 e_ξ + O(e_ξ^2)),        (69)

where e_ξ = ξ - a. Substituting (33) and (69) into (68) we have

  H'''_5(y_n) ≈ 6f'(a)(c_3 + c_6 e_{n-1,y} e_{n-1}^2).        (70)

Dividing (70) by (30) we obtain

  β_n = -H'''_5(y_n)/(6f'(x_n)) ≈ -(c_3 + c_6 e_{n-1,y} e_{n-1}^2),        (71)
  c_3 + β_n ~ -c_6 e_{n-1,y} e_{n-1}^2.        (72)

According to (41), (54) and (72), we get

  e_{n+1} ~ c_5 e_{n-1,y} e_{n-1}^2 [c_2 c_5 e_{n-1,y} e_{n-1}^2 - c_6 e_{n-1,y} e_{n-1}^2] e_n^4 ~ c_5(c_2 c_5 - c_6) e_{n-1}^4 e_{n-1,y}^2 e_n^4 ~ c_5(c_2 c_5 - c_6) D_{n-1,p}^2 D_{n-1,r}^4 e_{n-1}^{4r+2p+4}.        (73)

By comparing the exponents of e_{n-1} in the two pairs of relations ((40), (57)) and ((39), (73)), we get the following system of equations:

  rp = 2r + p + 2,
  r^2 = 4r + 2p + 4.        (74)

The positive solution of system (74) is given by r = (5 + √41)/2 ≈ 5.7016 and p ≈ 2.8508. Therefore, the R-order of the method with memory (25), when the parameters are calculated by (24), is at least 5.7016. The proof is completed.

Remark 5. The efficiency index of the iterative method (25) with the corresponding expressions (21)-(24) for γ_n and β_n is (5.3059)^{1/3} ≈ 1.7443, (5.4356)^{1/3} ≈ 1.7583, (5.5707)^{1/3} ≈ 1.7727 and (5.7016)^{1/3} ≈ 1.7865, respectively. The efficiency indices of our methods with memory are higher than the efficiency index 5^{1/3} ≈ 1.7100 of the Newton-type method (3) with memory.

4. Numerical results

The new methods (18) and (19), with and without memory, are used to solve the nonlinear functions f_i(x) (i = 1, 2), and the computational results are compared with the Newton-type iterative methods (1) and (3) with memory; see Tables 1-2. The absolute errors |x_k - a| in the first four iterations are given in Tables 1-2, where a is the exact root computed with a large number of significant digits. The computational order of convergence ρ is defined by [3]:

  ρ ≈ ln(|x_{n+1} - x_n| / |x_n - x_{n-1}|) / ln(|x_n - x_{n-1}| / |x_{n-1} - x_{n-2}|).        (75)

The following test functions are used:

  f_1(x) = x e^{x^2} - sin^2(x) + 3cos(x) + 5,  a ≈ -1.2076,  x_0 = -1.2,
  f_2(x) = arcsin(x^2 - 1) - 0.5x + 1,  a ≈ 0.5948,  x_0 = 0.7.

The numerical results shown in Tables 1-2 are in concordance with the theory developed in this paper. We can notice that the results obtained by our methods with memory are better than those of the other two-step Newton-type methods with memory. The highest order of convergence of our methods with memory is 5.7016, which is higher than that of methods (1) and (3).
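The computational order of convergence defined above can be estimated from four consecutive iterates. A small helper, demonstrated on Newton's method for f(x) = x^2 - 2 (an illustrative function, not one of the paper's test problems):

```python
import math

# Computational order of convergence: rho is estimated from four
# consecutive iterates x_{n-2}, x_{n-1}, x_n, x_{n+1}.
def comp_order(x0, x1, x2, x3):
    num = math.log(abs(x3 - x2) / abs(x2 - x1))
    den = math.log(abs(x2 - x1) / abs(x1 - x0))
    return num / den

# Demonstration: Newton's method (order 2) applied to f(x) = x^2 - 2.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
rho = comp_order(xs[0], xs[1], xs[2], xs[3])
```

For a method of theoretical order p, the estimate ρ approaches p as the iterates close in on the root (here ρ ≈ 2 for Newton's method).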

Methods | |x_1 - a| | |x_2 - a| | |x_3 - a| | |x_4 - a| | ρ
(18), γ = β = | .e-4 | .7e-9 | .e-78 | .4e-5 | 4.
(19), γ = β = | .e-4 | .6e-9 | .e-78 | .e-5 | 4.
(1) | .5e-5 | .e- | .5e-8 | .8e- | 4.574
(3), λ = ., β = | .4e-4 | .66e- | .e-5 | .9e- | 4.995
(18) with (21), γ = β = | .e-4 | .85e-4 | .47e-7 | .6e-4 | 5.9
(18) with (22), γ = β = | .e-4 | .45e-4 | .45e- | .89e-9 | 5.4
(18) with (23), γ = β = | .e-4 | .4e-5 | .e-4 | .e-9 | 5.56
(18) with (24), γ = β = | .e-4 | .e-5 | .5e-48 | .4e-75 | 5.7
(19) with (21), γ = β = | .e-4 | .8e-4 | .8e-7 | .e-4 | 5.
(19) with (22), γ = β = | .e-4 | .4e-4 | .6e- | .8e-9 | 5.4
(19) with (23), γ = β = | .e-4 | .e-5 | .6e-4 | .4e-9 | 5.57
(19) with (24), γ = β = | .e-4 | .e-5 | .4e-48 | .47e-75 | 5.7

Table 1: Numerical results for f_1(x) by the methods with and without memory

Methods | |x_1 - a| | |x_2 - a| | |x_3 - a| | |x_4 - a| | ρ
(18), γ = β = .6 | .56e-4 | .48e-7 | .7e-69 | .8e- | 4.
(19), γ = β = .6 | .56e-4 | .49e-7 | .e-69 | .4e- | 4.
(1) | .7e-8 | .7e-4 | .e-89 | .84e- | 4.
(3), λ = ., β = .75 | .4e-5 | .5e-7 | .96e-9 | .88e- | 4.
(18) with (21), γ = β = .6 | .56e-4 | .65e-4 | .e-8 | .77e- | 5.
(18) with (22), γ = β = .6 | .56e-4 | .4e-4 | .6e- | .e- | 5.4
(18) with (23), γ = β = .6 | .56e-4 | .55e-5 | .8e-4 | .7e- | 5.5
(18) with (24), γ = β = .6 | .56e-4 | .6e-6 | .5e-49 | .6e- | 5.7
(19) with (21), γ = β = .6 | .56e-4 | .66e-4 | .8e-8 | .7e- | 5.
(19) with (22), γ = β = .6 | .56e-4 | .4e-4 | .9e- | .e- | 5.4
(19) with (23), γ = β = .6 | .56e-4 | .57e-5 | .e-4 | .9e- | 5.5
(19) with (24), γ = β = .6 | .56e-4 | .64e-6 | .9e-49 | .85e- | 5.7

Table 2: Numerical results for f_2(x) by the methods with and without memory

5. Dynamical analysis

Dynamical properties of the associated rational function give us important information about numerical features of an iterative method, such as its stability and reliability. In what follows, we compare our methods with and without memory to the methods with memory (1) and (3) by using the basins of attraction for three complex polynomials f(z) = z^k - 1, k = 2, 3, 4. We use a technique similar to that in [15] to generate the basins of attraction. To generate the basins of attraction for the zeros of a polynomial with a given iterative method, we take a grid of 400 x 400 points in a rectangle D in the complex plane and use these points as starting values z_0. If the sequence generated by the iterative method reaches a zero z* of the polynomial with a tolerance |z_k - z*| < 10^{-5} within the prescribed maximum number of iterations, we decide that z_0 is in the basin of attraction of this zero, and we paint this point blue for this root.
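The basin-generation procedure described above can be sketched as follows. For brevity this uses plain Newton iteration on p(z) = z^3 - 1 and a coarse grid instead of the paper's methods and full grid resolution; the grid size, tolerance and iteration cap here are illustrative:

```python
import cmath

# Coarse basin-of-attraction sketch for p(z) = z^3 - 1, classifying each
# starting point by the root its Newton iteration reaches. Grid size,
# tolerance and iteration cap are illustrative choices.
def basin_index(z, roots, tol=1e-3, max_iter=25):
    for _ in range(max_iter):
        dz = 3.0 * z * z
        if dz == 0:                      # derivative vanished: give up
            return -1
        z = z - (z ** 3 - 1.0) / dz      # Newton step for z^3 - 1
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k                 # basin of the k-th cube root of 1
    return -1                            # "black" point: no convergence

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
grid = [basin_index(complex(u, v) / 10.0, roots)
        for u in range(-30, 31) for v in range(-30, 31)]
```

In a full implementation, each grid point is colored by its basin index, with shading determined by the number of iterations used, exactly as in the figures below.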
In the same basin of attraction, the number of iterations needed to achieve the solution is shown in darker or

Figure 1: Top row: methods without memory (18) (left) and (19) (right). Middle row: methods (1) (left) and (3) (right). Bottom row: methods (18) with (24) (left) and (19) with (24) (right). The results are for the polynomial z^2 - 1.

Figure 2: Top row: methods without memory (18) (left) and (19) (right). Middle row: methods (1) (left) and (3) (right). Bottom row: methods (18) with (24) (left) and (19) with (24) (right). The results are for the polynomial z^3 - 1.

Figure 3: Top row: methods without memory (18) (left) and (19) (right). Middle row: methods (1) (left) and (3) (right). Bottom row: methods (18) with (24) (left) and (19) with (24) (right). The results are for the polynomial z^4 - 1.

brighter colors (the fewer the iterations, the brighter the color). Black denotes lack of convergence to any of the roots (within the established maximum number of iterations) or convergence to infinity. Since the fractal pictures of our methods with memory are almost identical, we only give the basins of attraction of our methods with memory of R-order 5.7016. The parameters used in the iterative methods (18) and (19) without memory are γ = . and β = .; the parameters λ = . and β = . are used in method (3) with memory, and our methods with memory use the parameters γ = . and β = . in the first iteration. Figures 1-3 show that the new methods with memory have very few diverging points compared with the other methods. Figures 1-3 also show that the basins of attraction of our methods with memory are larger than those of the other methods, and that the convergence speed of our methods with memory is faster than that of our methods (18) and (19) without memory. On the whole, we can see that the new methods with memory are better than the other methods considered in this paper.

6. Conclusions

In this paper, we have proposed a new family of two-step Newton-type iterative methods with and without memory for solving nonlinear equations. Using two self-correcting parameters calculated by Hermite interpolatory polynomials, the R-order of convergence of the new Newton-type method with memory is increased from 4 to 5.7016 without any additional calculations. The main contribution of this paper is that we obtain the maximal order of Newton-type methods without any additional function evaluations. The new methods are compared in performance and computational efficiency with some existing methods on numerical examples. We have observed that the computational efficiency indices of the new methods with memory are better than those of the other existing two-step methods.

Acknowledgement

The project was supported by the National Natural Science Foundation of China (Nos. 7 and 7). We would like to thank the referees for their helpful suggestions.

References

[1] G.
Alefeld, J. Herzberger, Introduction to Interval Computation, Academic Press, New York, 1983.
[2] C. Chun, M. Y. Lee, B. Neta, J. Džunić, On optimal fourth-order iterative methods free from second derivative and their dynamics, Appl. Math. Comput. 218(2012).
[3] A. Cordero, J. R. Torregrosa, Variants of Newton's method using fifth-order quadrature formulas, Appl. Math. Comput. 190(2007).
[4] J. Džunić, M. S. Petković, L. D. Petković, Three-point methods with and without memory for solving nonlinear equations, Appl. Math. Comput. 218(2012).

[5] B. Neta, A new family of higher order methods for solving equations, Int. J. Comput. Math. 14(1983).
[6] J. M. Ortega, W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[7] A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, 1960.
[8] M. S. Petković, A note on the priority of optimal multipoint methods for solving nonlinear equations, Appl. Math. Comput.
[9] M. S. Petković, J. Džunić, B. Neta, Interpolatory multipoint methods with memory for solving nonlinear equations, Appl. Math. Comput. 218(2011).
[10] M. Grau-Sánchez, À. Grau, M. Noguera, Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput. 218(2011).
[11] J. R. Sharma, R. K. Guha, P. Gupta, Some efficient derivative free methods with memory for solving nonlinear equations, Appl. Math. Comput.
[12] F. Soleymani, M. Sharifi, B. S. Mousavi, An improvement of Ostrowski's and King's techniques with optimal convergence order eight, J. Optim. Theory Appl. 153(2012).
[13] X. Wang, J. Džunić, T. Zhang, On an efficient family of derivative free three-point methods for solving nonlinear equations, Appl. Math. Comput.
[14] X. Wang, T. Zhang, A new family of Newton-type iterative methods with and without memory for solving nonlinear equations, Calcolo 51(2014).
[15] J. L. Varona, Graphic and numerical comparison between iterative methods, Math. Intelligencer 24(2002), 37-46.


SOLVING NONLINEAR EQUATIONS USING A NEW TENTH-AND SEVENTH-ORDER METHODS FREE FROM SECOND DERIVATIVE M.A. Hafiz 1, Salwa M.H. International Journal of Differential Equations and Applications Volume 12 No. 4 2013, 169-183 ISSN: 1311-2872 url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijdea.v12i4.1344 PA acadpubl.eu SOLVING

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

A Three-Step Iterative Method to Solve A Nonlinear Equation via an Undetermined Coefficient Method

A Three-Step Iterative Method to Solve A Nonlinear Equation via an Undetermined Coefficient Method Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 14, Number 11 (018), pp. 145-1435 Research India Publications http://www.ripublication.com/gjpam.htm A Three-Step Iterative Method

More information

Dynamical Behavior for Optimal Cubic-Order Multiple Solver

Dynamical Behavior for Optimal Cubic-Order Multiple Solver Applied Mathematical Sciences, Vol., 7, no., 5 - HIKARI Ltd, www.m-hikari.com https://doi.org/.988/ams.7.6946 Dynamical Behavior for Optimal Cubic-Order Multiple Solver Young Hee Geum Department of Applied

More information

A new sixth-order scheme for nonlinear equations

A new sixth-order scheme for nonlinear equations Calhoun: The NPS Institutional Archive DSpace Repository Faculty and Researchers Faculty and Researchers Collection 202 A new sixth-order scheme for nonlinear equations Chun, Changbum http://hdl.handle.net/0945/39449

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

Development of a Family of Optimal Quartic-Order Methods for Multiple Roots and Their Dynamics

Development of a Family of Optimal Quartic-Order Methods for Multiple Roots and Their Dynamics Applied Mathematical Sciences, Vol. 9, 5, no. 49, 747-748 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.988/ams.5.5658 Development of a Family of Optimal Quartic-Order Methods for Multiple Roots and

More information

A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations

A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations F. Soleymani Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan, Iran E-mail: fazl_soley_bsb@yahoo.com; Tel:

More information

A Family of Optimal Multipoint Root-Finding Methods Based on the Interpolating Polynomials

A Family of Optimal Multipoint Root-Finding Methods Based on the Interpolating Polynomials Applied Mathematical Sciences, Vol. 8, 2014, no. 35, 1723-1730 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2014.4127 A Family of Optimal Multipoint Root-Finding Methods Based on the Interpolating

More information

ON JARRATT S FAMILY OF OPTIMAL FOURTH-ORDER ITERATIVE METHODS AND THEIR DYNAMICS

ON JARRATT S FAMILY OF OPTIMAL FOURTH-ORDER ITERATIVE METHODS AND THEIR DYNAMICS Fractals, Vol. 22, No. 4 (2014) 1450013 (16 pages) c World Scientific Publishing Company DOI: 10.1142/S0218348X14500133 ON JARRATT S FAMILY OF OPTIMAL FOURTH-ORDER ITERATIVE METHODS AND THEIR DYNAMICS

More information

Family of Optimal Eighth-Order of Convergence for Solving Nonlinear Equations

Family of Optimal Eighth-Order of Convergence for Solving Nonlinear Equations SCECH Volume 4, Issue 4 RESEARCH ORGANISATION Published online: August 04, 2015 Journal of Progressive Research in Mathematics www.scitecresearch.com/journals Family of Optimal Eighth-Order of Convergence

More information

An efficient Newton-type method with fifth-order convergence for solving nonlinear equations

An efficient Newton-type method with fifth-order convergence for solving nonlinear equations Volume 27, N. 3, pp. 269 274, 2008 Copyright 2008 SBMAC ISSN 0101-8205 www.scielo.br/cam An efficient Newton-type method with fifth-order convergence for solving nonlinear equations LIANG FANG 1,2, LI

More information

New seventh and eighth order derivative free methods for solving nonlinear equations

New seventh and eighth order derivative free methods for solving nonlinear equations DOI 10.1515/tmj-2017-0049 New seventh and eighth order derivative free methods for solving nonlinear equations Bhavna Panday 1 and J. P. Jaiswal 2 1 Department of Mathematics, Demonstration Multipurpose

More information

Convergence of a Third-order family of methods in Banach spaces

Convergence of a Third-order family of methods in Banach spaces International Journal of Computational and Applied Mathematics. ISSN 1819-4966 Volume 1, Number (17), pp. 399 41 Research India Publications http://www.ripublication.com/ Convergence of a Third-order family

More information

NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS

NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS Bulletin of Mathematical Analysis and Applications ISSN: 181-191, URL: http://www.bmathaa.org Volume 3 Issue 4(011, Pages 31-37. NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS

More information


A nonlinear equation is any equation of the form. f(x) = 0. A nonlinear equation can have any number of solutions (finite, countable, uncountable)

A nonlinear equation is any equation of the form. f(x) = 0. A nonlinear equation can have any number of solutions (finite, countable, uncountable) Nonlinear equations Definition A nonlinear equation is any equation of the form where f is a nonlinear function. Nonlinear equations x 2 + x + 1 = 0 (f : R R) f(x) = 0 (x cos y, 2y sin x) = (0, 0) (f :

More information

Numerical Methods in Physics and Astrophysics

Numerical Methods in Physics and Astrophysics Kostas Kokkotas 2 October 17, 2017 2 http://www.tat.physik.uni-tuebingen.de/ kokkotas Kostas Kokkotas 3 TOPICS 1. Solving nonlinear equations 2. Solving linear systems of equations 3. Interpolation, approximation

More information

Polynomial Interpolation with n + 1 nodes

Polynomial Interpolation with n + 1 nodes Polynomial Interpolation with n + 1 nodes Given n + 1 distinct points (x 0, f (x 0 )), (x 1, f (x 1 )),, (x n, f (x n )), Interpolating polynomial of degree n P(x) = f (x 0 )L 0 (x) + f (x 1 )L 1 (x) +

More information

Global Attractivity of a Higher-Order Nonlinear Difference Equation

Global Attractivity of a Higher-Order Nonlinear Difference Equation International Journal of Difference Equations ISSN 0973-6069, Volume 5, Number 1, pp. 95 101 (010) http://campus.mst.edu/ijde Global Attractivity of a Higher-Order Nonlinear Difference Equation Xiu-Mei

More information

JUST THE MATHS UNIT NUMBER DIFFERENTIATION APPLICATIONS 5 (Maclaurin s and Taylor s series) A.J.Hobson

JUST THE MATHS UNIT NUMBER DIFFERENTIATION APPLICATIONS 5 (Maclaurin s and Taylor s series) A.J.Hobson JUST THE MATHS UNIT NUMBER.5 DIFFERENTIATION APPLICATIONS 5 (Maclaurin s and Taylor s series) by A.J.Hobson.5. Maclaurin s series.5. Standard series.5.3 Taylor s series.5.4 Exercises.5.5 Answers to exercises

More information

A new Newton like method for solving nonlinear equations

A new Newton like method for solving nonlinear equations DOI 10.1186/s40064-016-2909-7 RESEARCH Open Access A new Newton like method for solving nonlinear equations B. Saheya 1,2, Guo qing Chen 1, Yun kang Sui 3* and Cai ying Wu 1 *Correspondence: yksui@sina.com

More information