High-order Newton-type iterative methods with memory for solving nonlinear equations


MATHEMATICAL COMMUNICATIONS
Math. Commun. 19(2014), 91-109

High-order Newton-type iterative methods with memory for solving nonlinear equations

Xiaofeng Wang and Tie Zhang

School of Mathematics and Physics, Bohai University, Jinzhou, Liaoning, China
College of Sciences, Northeastern University, Shenyang, Liaoning, China

Received May 2013; accepted December 2013

Abstract. In this paper, we present a new family of two-step Newton-type iterative methods with memory for solving nonlinear equations. In order to obtain a Newton-type method with memory, we first present an optimal two-parameter fourth-order Newton-type method without memory. Then, based on the two-parameter method without memory, we present a new two-parameter Newton-type method with memory. Using two self-correcting parameters calculated by means of Hermite interpolating polynomials, the R-order of convergence of the new Newton-type method with memory is increased from 4 to 5.7016 without any additional function evaluations. Numerical comparisons are made with some known methods, using both the basins of attraction and numerical computations, to demonstrate the efficiency and the performance of the presented methods.

AMS subject classifications: 65H05, 65B99

Key words: Newton-type iterative method with memory, nonlinear equations, R-order of convergence, root-finding

1. Introduction

Finding a root of a nonlinear equation f(x) = 0 is a classical problem in scientific computation. Many iterative methods have been proposed for solving nonlinear equations; see [1-14] and the references therein. Among these, the iterative methods with memory are particularly worth studying. A method with memory can improve the order of convergence of the corresponding method without memory without any additional function evaluations, and therefore has a very high computational efficiency. There are two main kinds of iterative methods with memory, namely Steffensen-type methods and Newton-type methods. In this paper, we consider only Newton-type methods with memory.
For example, in [9] Petkovic et al. proposed a two-step Newton-type iterative method with memory of R-order 4.56, based on inverse interpolation, which can be written as

N(x_n) = x_n - f(x_n)/f'(x_n),
phi(t) = [ (t - x_n)/(f(t) - f(x_n)) - 1/f'(x_n) ] / (f(t) - f(x_n)),
y_n = N(x_n) + f(x_n)^2 phi(y_{n-1}),
x_{n+1} = N(x_n) + f(x_n)^2 phi(y_n),        (1)

For interpretation of color in all figures, the reader is referred to the web version of this article available at www.mathos.hr/mc.
Corresponding author. Email addresses: w888w@6.com (X. Wang), ztmath@6.com (T. Zhang)
http://www.mathos.hr/mc
(c) 2014 Department of Mathematics, University of Osijek

where y_0 = N(x_0). The basic idea of (1) comes from Neta, who in [5] derived the following three-step Newton-type method with memory of R-order 10.815:

N(x_n) = x_n - f(x_n)/f'(x_n),
psi(t) = [ (t - x_n)/(f(t) - f(x_n)) - 1/f'(x_n) ] / (f(t) - f(x_n)),
y_n = N(x_n) + [f(y_{n-1}) psi(z_{n-1}) - f(z_{n-1}) psi(y_{n-1})] f(x_n)^2 / (f(y_{n-1}) - f(z_{n-1})),
z_n = N(x_n) + [f(y_n) psi(z_{n-1}) - f(z_{n-1}) psi(y_n)] f(x_n)^2 / (f(y_n) - f(z_{n-1})),
x_{n+1} = N(x_n) + [f(y_n) psi(z_n) - f(z_n) psi(y_n)] f(x_n)^2 / (f(y_n) - f(z_n)),        (2)

where y_0 = N(x_0) and z_0 = y_0 + f(x_0)^2 / 2. The efficiency indices of methods (1) and (2) are low, because the iterative methods (1) and (2) require four and six functional evaluations, respectively, in their first iteration. In order to accelerate the convergence and improve the computational efficiency of the Newton-type method with memory, Wang and Zhang [14] developed a two-step Newton-type method with memory of R-order 5, given by

lambda_n = H_4''(x_n) / (2 f'(x_n)),
y_n = x_n - f(x_n) / (lambda_n f(x_n) + f'(x_n)),
x_{n+1} = y_n - f(y_n) / (lambda_n f(x_n) + f'(x_n)) * (f(x_n) + (1 + beta) f(y_n)) / (f(x_n) + beta f(y_n)),        (3)

where beta is a real parameter and H_4(x) = H_4(x; x_n, x_n, y_{n-1}, x_{n-1}, x_{n-1}) is the Hermite interpolating polynomial of degree four. The efficiency index of method (3) is 5^(1/3), approximately 1.7100, which is higher than the efficiency indices of methods (1) and (2); see [14]. The purpose of this paper is to further improve the R-order of convergence and the efficiency index of the two-step Newton-type iterative method and to give the corresponding convergence analysis.

This paper is organized as follows. In Section 2, we derive a family of optimal fourth-order iterative methods without memory for solving nonlinear equations. Further acceleration of the convergence speed is attained in Section 3, where we obtain a new family of two-point Newton-type iterative methods with memory by varying two free parameters in each full iteration.
The two self-accelerating parameters are calculated using information available from the current and previous iterations. The corresponding R-order of convergence is increased from 4 to 5.7016. The maximal efficiency index of the new methods with memory is (5.7016)^(1/3), approximately 1.7865, which is higher than the efficiency indices of the existing Newton-type methods. Numerical examples are given in Section 4 to illustrate the convergence behavior of our methods for simple roots. In Section 5, some dynamical aspects of the presented methods are studied. Section 6 gives a short conclusion.

2. The two-step Newton-type method without memory

In order to get a two-step Newton-type iterative method with memory based on the well-known Ostrowski method [7], we consider the following two-step iterative method without memory, constructed by the weight function technique:

y_n = x_n - f(x_n)/f'(x_n) * G(s_n),
x_{n+1} = y_n - f(y_n)/(2 f[x_n, y_n] - f'(x_n)) * H(s_n),        (4)

where G(s_n) and H(s_n) are two weight functions with s_n = f(x_n)/f'(x_n), and f[x_n, y_n] = (f(y_n) - f(x_n))/(y_n - x_n). The functions G(s_n) and H(s_n) should be determined so that the iterative method (4) is of order four. For the iterative method defined by (4) we have the following result.

Theorem 1. Let a in I be a simple zero of a sufficiently differentiable function f : I subset R -> R for an open interval I. Then the iterative method defined by (4) is of fourth-order convergence when

G(0) = 1, G'(0) = gamma, H(0) = 1, H'(0) = 0, H''(0) = 2 beta, gamma, beta in R,        (5)

and it satisfies the error equation below:

e_{n+1} = (c_2 - gamma)(c_2^2 - c_3 - beta - c_2 gamma) e_n^4 + O(e_n^5).        (6)

Proof. Let e_n = x_n - a and c_n = (1/n!) f^(n)(a)/f'(a), n = 2, 3, .... Using the Taylor expansion and taking into account f(a) = 0, we have

f(x_n) = f'(a)[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + O(e_n^6)],        (7)
f'(x_n) = f'(a)[1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + 5 c_5 e_n^4 + O(e_n^5)].        (8)

Dividing (7) by (8) we obtain

s_n = f(x_n)/f'(x_n) = e_n - c_2 e_n^2 + 2(c_2^2 - c_3) e_n^3 + (-4 c_2^3 + 7 c_2 c_3 - 3 c_4) e_n^4 + O(e_n^5).        (9)

Using (5), (9) and the Taylor expansion G(s_n) = G(0) + G'(0) s_n + O(s_n^2), we have

G(s_n) = 1 + gamma s_n = 1 + gamma (e_n - c_2 e_n^2 + 2(c_2^2 - c_3) e_n^3 + (-4 c_2^3 + 7 c_2 c_3 - 3 c_4) e_n^4 + O(e_n^5)).        (10)

From (9) and (10) we get

e_{n,y} = y_n - a = x_n - a - G(s_n) f(x_n)/f'(x_n)
        = (c_2 - gamma) e_n^2 + (-2 c_2^2 + 2 c_3 + 2 c_2 gamma) e_n^3 + (4 c_2^3 - 7 c_2 c_3 + 3 c_4 - 5 c_2^2 gamma + 4 c_3 gamma) e_n^4 + O(e_n^5).        (11)

By an argument similar to that for (7), we have

f(y_n) = f'(a)[(c_2 - gamma) e_n^2 + (-2 c_2^2 + 2 c_3 + 2 c_2 gamma) e_n^3 + (5 c_2^3 - 7 c_2 c_3 + 3 c_4 - 7 c_2^2 gamma + 4 c_3 gamma + c_2 gamma^2) e_n^4 + O(e_n^5)].        (12)

From (7), (8) and (12) we obtain

f[x_n, y_n] = f'(a)(1 + c_2 e_n + (c_3 + c_2^2 - c_2 gamma) e_n^2 + (-2 c_2^3 + 3 c_2 c_3 + c_4 + 2 c_2^2 gamma - c_3 gamma) e_n^3 + O(e_n^4)),        (13)

f(y_n)/(2 f[x_n, y_n] - f'(x_n)) = (c_2 - gamma) e_n^2 + (-2 c_2^2 + 2 c_3 + 2 c_2 gamma) e_n^3 + (3 c_2^3 - 3 c_2^2 gamma + 3(c_4 + c_3 gamma) - c_2 (6 c_3 + gamma^2)) e_n^4 + O(e_n^5).        (14)

Using (5), (9) and the Taylor expansion H(s_n) = H(0) + H'(0) s_n + H''(0) s_n^2 / 2 + O(s_n^3), we get

H(s_n) = 1 + beta e_n^2 - 2 beta c_2 e_n^3 + beta (5 c_2^2 - 4 c_3) e_n^4 + O(e_n^5).        (15)

Hence, combining (11), (14) and (15), we obtain the error equation

e_{n+1} = x_{n+1} - a = e_{n,y} - H(s_n) f(y_n)/(2 f[x_n, y_n] - f'(x_n)) = (c_2 - gamma)(c_2^2 - c_3 - beta - c_2 gamma) e_n^4 + O(e_n^5).        (16)

The proof is completed.

The iterative method (4) requires two function evaluations and one derivative evaluation per iteration; thus it is an optimal scheme in the sense of the Kung-Traub conjecture. Let p^(1/n) be the efficiency index (see [7]), where p is the order of the method and n is the number of functional evaluations per iteration required by the method. The new method (4) without memory has efficiency index 4^(1/3), approximately 1.587.

Remark 1. Based on Theorem 1, we obtain a new family of iterative methods without memory as follows:

y_n = x_n - f(x_n)/f'(x_n) * G(s_n),
x_{n+1} = y_n - f(y_n)/(2 f[x_n, y_n] - f'(x_n)) * H(s_n),        (17)

where s_n = f(x_n)/f'(x_n), and G(s_n) and H(s_n) are two weight functions satisfying condition (5). The functions G(s_n) and H(s_n) in (17) can take many forms. For example, taking G(s_n) = (1 - gamma s_n)^(-1) and H(s_n) = (1 - beta s_n^2)^(-1), we obtain the following fourth-order iterative method without memory:

y_n = x_n - f(x_n)/(f'(x_n) - gamma f(x_n)),
x_{n+1} = y_n - f(y_n)/(2 f[x_n, y_n] - f'(x_n)) * f'(x_n)^2/(f'(x_n)^2 - beta f(x_n)^2),   (gamma, beta in R).        (18)
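To make scheme (18) concrete, it can be sketched in a few lines of Python. This sketch is not from the paper itself; the test equation x^3 - 2 = 0 and the parameter values gamma = beta = 0.1 are illustrative assumptions.

```python
def method18(f, fp, x0, gamma=0.0, beta=0.0, tol=1e-14, maxit=50):
    """Sketch of the fourth-order two-step scheme (18) without memory:
    G(s) = 1/(1 - gamma*s), H(s) = 1/(1 - beta*s^2), s = f(x)/f'(x)."""
    x = x0
    for _ in range(maxit):
        fx, fpx = f(x), fp(x)
        if abs(fx) < tol:
            break
        y = x - fx / (fpx - gamma * fx)      # first step, weight G
        fy = f(y)
        if fy == 0.0:
            return y
        dd = (fy - fx) / (y - x)             # divided difference f[x_n, y_n]
        x = y - fy / (2 * dd - fpx) * fpx**2 / (fpx**2 - beta * fx**2)
    return x

# Illustrative run on f(x) = x^3 - 2 with simple root 2^(1/3)
root = method18(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5, gamma=0.1, beta=0.1)
```

With gamma = beta = 0 the first step reduces to Newton's step and the scheme is still of order four by Theorem 1; nonzero gamma and beta are exactly the free parameters that the with-memory variants of Section 3 adjust adaptively.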

Taking the functions G(s_n) = (1 - gamma s_n)^(-1) and H(s_n) = 1 + beta s_n^2, we obtain the following fourth-order iterative method without memory:

y_n = x_n - f(x_n)/(f'(x_n) - gamma f(x_n)),
x_{n+1} = y_n - f(y_n)/(2 f[x_n, y_n] - f'(x_n)) * (1 + beta (f(x_n)/f'(x_n))^2),   (gamma, beta in R).        (19)

Remark 2. It is noteworthy that the error equation (6) can be rewritten in the following form:

e_{n+1} = (c_2 - gamma)(c_2^2 - c_3 - beta - c_2 gamma) e_n^4 + O(e_n^5) = (c_2 - gamma)[c_2 (c_2 - gamma) - (c_3 + beta)] e_n^4 + O(e_n^5).        (20)

We can improve the order of convergence of method (17) by taking the parameters gamma = c_2 and beta = -c_3 in (20). Therefore, a new method with memory can be obtained from method (17) without memory.

3. New Newton-type iterative methods with memory

In this section we improve the convergence rate of method (17) by varying the parameters gamma and beta in each full iteration. Taking gamma = c_2 and beta = -c_3 in (20) would annihilate the leading term of the error equation. However, the exact values of f'(a), f''(a) and f'''(a) are not available in practice, so such acceleration of convergence cannot be realized exactly. Instead, we approximate the parameter gamma by gamma_n and the parameter beta by beta_n. The parameters gamma_n and beta_n can be computed using information available from the current and previous iterations and satisfy

lim_{n -> inf} gamma_n = c_2 = f''(a)/(2 f'(a))  and  lim_{n -> inf} beta_n = -c_3 = -f'''(a)/(6 f'(a)),

so that the fourth-order asymptotic error constant in (20) tends to zero. In this paper, the self-accelerating parameter gamma_n is taken equal to the parameter lambda_n of the Newton-type method (3). We consider the following four methods for beta_n:

Method 1:

gamma_n = H_4''(x_n)/(2 f'(x_n))  and  beta_n = -H_4'''(x_n)/(6 f'(x_n)),        (21)

where

H_4'''(x_n) = 6 f[x_n, x_n, y_{n-1}, x_{n-1}] + 6 f[x_n, x_n, y_{n-1}, x_{n-1}, x_{n-1}] (2 x_n - x_{n-1} - y_{n-1}).

Method 2:

gamma_n = H_4''(x_n)/(2 f'(x_n))  and  beta_n = -H_3'''(y_n)/(6 f'(x_n)),        (22)

where H_3'''(y_n) = 6 f[y_n, x_n, x_n, y_{n-1}], and H_3(x) = H_3(x; y_n, x_n, x_n, y_{n-1}) is the Hermite interpolating polynomial of degree three.

Method 3:

gamma_n = H_4''(x_n)/(2 f'(x_n))  and  beta_n = -Hbar_4'''(y_n)/(6 f'(x_n)),        (23)

where

Hbar_4'''(y_n) = H_3'''(y_n) + 6 f[y_n, x_n, x_n, y_{n-1}, x_{n-1}] (3 y_n - 2 x_n - y_{n-1}),

and Hbar_4(x) = Hbar_4(x; y_n, x_n, x_n, y_{n-1}, x_{n-1}) is the Hermite interpolating polynomial of degree four.

Method 4:

gamma_n = H_4''(x_n)/(2 f'(x_n))  and  beta_n = -H_5'''(y_n)/(6 f'(x_n)),        (24)

where

H_5'''(y_n) = Hbar_4'''(y_n) + 6 f[y_n, x_n, x_n, y_{n-1}, x_{n-1}, x_{n-1}] [(y_n - y_{n-1})(y_n - x_{n-1})^2's leading part, namely (y_n - y_{n-1})(y_n - x_{n-1}) + (y_n - x_n)^2 + 2 (y_n - x_n)(2 y_n - y_{n-1} - x_{n-1})],

and H_5(x) = H_5(x; y_n, x_n, x_n, y_{n-1}, x_{n-1}, x_{n-1}) is the Hermite interpolating polynomial of degree five. Here f[x_n, x_n] = f'(x_n) and f[x_n, t] = (f(t) - f(x_n))/(t - x_n) are first-order divided differences. The higher-order divided differences are defined recursively: the divided difference of order m (m >= 2) is

f[t_0, t_1, ..., t_m] = (f[t_1, ..., t_m] - f[t_0, ..., t_{m-1}])/(t_m - t_0),

with a confluent (repeated) node x_n, x_n handled by f[x_n, x_n] = f'(x_n).

Remark 3. We can now state the following two-parameter Newton-type iterative method with memory:

y_n = x_n - f(x_n)/f'(x_n) * G(s_n),
x_{n+1} = y_n - f(y_n)/(2 f[x_n, y_n] - f'(x_n)) * H(s_n),        (25)

where s_n = f(x_n)/f'(x_n), G(s_n) satisfies the conditions G(0) = 1, G'(0) = gamma_n, and H(s_n) satisfies the conditions H(0) = 1, H'(0) = 0, H''(0) = 2 beta_n. The parameters gamma_n and beta_n are calculated by using one of the formulas (21)-(24) and they depend on the data available from the current and the previous iterations. As noted above, the self-accelerating parameter gamma_n equals the parameter lambda_n of the Newton-type method (3).
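The bookkeeping of the with-memory iteration (25) can be sketched in Python. The sketch below drives the concrete form (19) and estimates gamma_n -> c_2 and beta_n -> -c_3 with low-order Hermite divided differences over x_{n-1}, y_{n-1} and the doubled node x_n; this is a simplified stand-in for, not a transcription of, the formulas (21)-(24), and the test equation is again an illustrative choice.

```python
def method19_with_memory(f, fp, x0, maxit=20, tol=1e-14):
    """Sketch of scheme (19) with memory: gamma_n -> c2, beta_n -> -c3,
    estimated from divided differences over the previous iterates
    (x_{n-1}, y_{n-1}) and the doubled node x_n (f[x, x] = f'(x))."""
    gamma = beta = 0.0               # bootstrap values for the first iteration
    x_prev = y_prev = None
    x = x0
    for _ in range(maxit):
        fx, fpx = f(x), fp(x)
        if abs(fx) < tol:
            break
        if x_prev is not None and x != x_prev and x != y_prev:
            d_yx = (fx - f(y_prev)) / (x - y_prev)       # f[y_{n-1}, x_n]
            d_xy = (f(y_prev) - f(x_prev)) / (y_prev - x_prev)
            d_yxx = (fpx - d_yx) / (x - y_prev)          # ~ f''(a)/2
            d_xyx = (d_yx - d_xy) / (x - x_prev)
            d_xyxx = (d_yxx - d_xyx) / (x - x_prev)      # ~ f'''(a)/6
            gamma = d_yxx / fpx                          # -> c_2
            beta = -d_xyxx / fpx                         # -> -c_3
        y = x - fx / (fpx - gamma * fx)
        fy = f(y)
        if fy == 0.0:
            return y
        dd = (fy - fx) / (y - x)                         # f[x_n, y_n]
        s = fx / fpx
        x_prev, y_prev = x, y
        x = y - fy / (2 * dd - fpx) * (1 + beta * s * s)
    return x

root = method19_with_memory(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5)
```

The point of the memory is visible in the update: gamma and beta cost no extra evaluations of f, since they reuse values already computed in the previous step.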

Remark 4. Parameter (21) can be computed at little extra cost, because the Hermite interpolating polynomial H_4(x) has already been used for the parameter gamma_n. The computation of the self-correcting parameters gamma_n and beta_n in this paper is nevertheless somewhat involved. The main purpose of this study is to choose the available interpolation nodes as well as possible in order to obtain the maximal convergence order of two-step Newton-type methods with memory; hence, simpler approximations of the self-accelerating parameters gamma_n and beta_n are not considered in this paper.

The concept of the R-order of convergence [6] and the following assertion will be applied to estimate the convergence order of the iterative method with memory (25).

Theorem 2. If the errors of approximations e_j = x_j - a obtained in an iterative root-finding method (IM) satisfy

e_{k+1} ~ prod_{i=0}^{n} (e_{k-i})^{m_i},  k >= k_0({e_k}),

then the R-order of convergence of (IM), denoted by O_R(IM, a), satisfies the inequality O_R(IM, a) >= s, where s is the unique positive solution of the equation

s^{n+1} - sum_{i=0}^{n} m_i s^{n-i} = 0.

Lemma 1. Let H_m be the Hermite interpolating polynomial of degree m that interpolates a function f at the interpolation nodes y_n, x_n, x_n, t_1, ..., t_{m-2} contained in an interval I, let the derivative f^(m+1) be continuous in I, and let H_m satisfy the conditions

H_m(y_n) = f(y_n),  H_m(x_n) = f(x_n),  H_m'(x_n) = f'(x_n),  H_m(t_j) = f(t_j)  (j = 1, ..., m-2).

Define the errors e_{t,j} = t_j - a (j = 1, ..., m-2) and assume that

1. all nodes y_n, x_n, t_1, ..., t_{m-2} are sufficiently close to the zero a;
2. the conditions e_n = x_n - a = O(e_{t,1} ... e_{t,m-2}) and e_{n,y} = y_n - a = O(e_n e_{t,1} ... e_{t,m-2}) hold.

Then

beta_n + c_3 ~ (-1)^m c_{m+1} prod_{j=1}^{m-2} e_{t,j},  where beta_n = -H_m'''(y_n)/(6 f'(x_n)).        (26)

Proof. The error of the Hermite interpolation can be expressed as follows:

f(x) - H_m(x) = f^(m+1)(xi)/(m+1)! * (x - y_n)(x - x_n)^2 prod_{j=1}^{m-2} (x - t_j)  (xi in I).        (27)

Differentiating (27) three times and evaluating at the point x = y_n, where the factor (x - y_n) vanishes, only the terms in which (x - y_n) has been differentiated exactly once survive, and we obtain

f'''(y_n) - H_m'''(y_n) = f^(m+1)(xi)/(m+1)! * 3 [d^2/dx^2 ((x - x_n)^2 prod_{j=1}^{m-2} (x - t_j))]_{x = y_n}  (xi in I),        (28)

that is,

H_m'''(y_n) = f'''(y_n) - 3 f^(m+1)(xi)/(m+1)! * [d^2/dx^2 ((x - x_n)^2 prod_{j=1}^{m-2} (x - t_j))]_{x = y_n}  (xi in I).        (29)

The leading part of the bracket at x = y_n is 2 prod_{j=1}^{m-2} (y_n - t_j), since the terms containing the factor (y_n - x_n) are of higher order. Taylor series of the derivatives of f at the points x_n, y_n in I and xi in I about the zero a of f give

f'(x_n) = f'(a)(1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + O(e_n^4)),        (30)
f'(y_n) = f'(a)(1 + 2 c_2 e_{n,y} + 3 c_3 e_{n,y}^2 + 4 c_4 e_{n,y}^3 + O(e_{n,y}^4)),        (31)
f''(y_n) = f'(a)(2 c_2 + 6 c_3 e_{n,y} + 12 c_4 e_{n,y}^2 + O(e_{n,y}^3)),        (32)
f'''(y_n) = f'(a)(6 c_3 + 24 c_4 e_{n,y} + O(e_{n,y}^2)),        (33)
f^(m+1)(xi) = f'(a)((m+1)! c_{m+1} + (m+2)! c_{m+2} e_xi + O(e_xi^2)),        (34)

where e_xi = xi - a. Substituting (33) and (34) into (29), and using (y_n - t_j) = e_{n,y} - e_{t,j} ~ -e_{t,j}, we have

H_m'''(y_n) ~ 6 f'(a) (c_3 + 4 c_4 e_{n,y} - (-1)^m c_{m+1} prod_{j=1}^{m-2} e_{t,j}).        (35)

Dividing (35) by (30) we obtain

beta_n = -H_m'''(y_n)/(6 f'(x_n)) ~ -(c_3 + 4 c_4 e_{n,y} - (-1)^m c_{m+1} prod_{j=1}^{m-2} e_{t,j})(1 - 2 c_2 e_n + ...),        (36)

and hence

beta_n + c_3 ~ (-1)^m c_{m+1} prod_{j=1}^{m-2} e_{t,j}.        (37)

The proof is completed.

Theorem 3. Let the varying parameters gamma_n and beta_n in the iterative method (25) be calculated by (21). If an initial approximation x_0 is sufficiently close to a simple root a of f(x), then the R-order of convergence of the iterative method (25) with memory is at least 5.3059.

Proof. Let the sequence {x_n} be generated by an iterative method (IM) converging to the root a of f(x) with R-order O_R(IM, a) >= r. Then we may write

e_{n+1} ~ D_{n,r} e_n^r,  e_n = x_n - a,        (38)

where D_{n,r} tends to the asymptotic error constant D_r of (IM) as n -> inf. So

e_{n+1} ~ D_{n,r} (D_{n-1,r} e_{n-1}^r)^r = D_{n,r} D_{n-1,r}^r e_{n-1}^{r^2}.        (39)

Similarly, assuming that the iterative sequence {y_n} has R-order p, we have

e_{n,y} ~ D_{n,p} e_n^p ~ D_{n,p} (D_{n-1,r} e_{n-1}^r)^p = D_{n,p} D_{n-1,r}^p e_{n-1}^{rp}.        (40)

Using (20), (21), gamma_n and beta_n, we obtain the corresponding error relations for the method with memory (25):

e_{n,y} = y_n - a ~ (c_2 - gamma_n) e_n^2,        (41)
e_{n+1} = x_{n+1} - a ~ (c_2 - gamma_n)[c_2 (c_2 - gamma_n) - (c_3 + beta_n)] e_n^4.        (42)

Here we omitted the higher-order terms in (41)-(42). The Hermite interpolating polynomial H_4(x) satisfies the conditions H_4(x_n) = f(x_n), H_4'(x_n) = f'(x_n), H_4(y_{n-1}) = f(y_{n-1}), H_4(x_{n-1}) = f(x_{n-1}) and H_4'(x_{n-1}) = f'(x_{n-1}). The error of the Hermite interpolation can be expressed as follows:

f(x) - H_4(x) = f^(5)(xi)/5! * (x - x_n)^2 (x - x_{n-1})^2 (x - y_{n-1})  (xi in I).        (43)

Differentiating (43) twice at the point x = x_n we obtain

f''(x_n) - H_4''(x_n) = 2 f^(5)(xi)/5! * (x_n - x_{n-1})^2 (x_n - y_{n-1})  (xi in I),        (44)
H_4''(x_n) = f''(x_n) - 2 f^(5)(xi)/5! * (x_n - x_{n-1})^2 (x_n - y_{n-1})  (xi in I),        (45)

and differentiating three times,

f'''(x_n) - H_4'''(x_n) = 6 f^(5)(xi)/5! * [2 (x_n - x_{n-1})(x_n - y_{n-1}) + (x_n - x_{n-1})^2]  (xi in I),        (46)
H_4'''(x_n) = f'''(x_n) - 6 f^(5)(xi)/5! * [2 (x_n - x_{n-1})(x_n - y_{n-1}) + (x_n - x_{n-1})^2]  (xi in I).        (47)

Taylor series of the derivatives of f at the point x_n and at xi in I about the zero a of f give

f''(x_n) = f'(a)(2 c_2 + 6 c_3 e_n + 12 c_4 e_n^2 + O(e_n^3)),        (48)
f'''(x_n) = f'(a)(6 c_3 + 24 c_4 e_n + O(e_n^2)),        (49)
f^(5)(xi) = f'(a)(5! c_5 + 6! c_6 e_xi + O(e_xi^2)),        (50)

where e_xi = xi - a. Substituting (48) and (50) into (45) we have

H_4''(x_n) ~ 2 f'(a)(c_2 + c_5 e_{n-1,y} e_{n-1}^2 + 3 c_3 e_n).        (51)

Substituting (49) and (50) into (47) we have

H_4'''(x_n) ~ 6 f'(a)(c_3 - c_5 (2 e_{n-1,y} e_{n-1} + e_{n-1}^2) + 4 c_4 e_n).        (52)

Dividing (51) by (30) we obtain

gamma_n = H_4''(x_n)/(2 f'(x_n)) ~ c_2 + c_5 e_{n-1,y} e_{n-1}^2 + (3 c_3 - 2 c_2^2) e_n,        (53)

so that

c_2 - gamma_n ~ -c_5 e_{n-1,y} e_{n-1}^2.        (54)

Dividing (52) by (30) we obtain

beta_n = -H_4'''(x_n)/(6 f'(x_n)) ~ -(c_3 - c_5 (2 e_{n-1,y} e_{n-1} + e_{n-1}^2) + (4 c_4 - 2 c_2 c_3) e_n),        (55)

so that

c_3 + beta_n ~ c_5 (2 e_{n-1,y} e_{n-1} + e_{n-1}^2) - (4 c_4 - 2 c_2 c_3) e_n ~ c_5 e_{n-1}^2.        (56)

According to (41), (42), (54) and (56), we get

e_{n,y} ~ (c_2 - gamma_n) e_n^2 ~ c_5 e_{n-1,y} e_{n-1}^2 (D_{n-1,r} e_{n-1}^r)^2 ~ c_5 D_{n-1,p} D_{n-1,r}^2 e_{n-1}^{2r+p+2},        (57)

e_{n+1} ~ (c_2 - gamma_n)(c_3 + beta_n) e_n^4 ~ c_5^2 e_{n-1,y} e_{n-1}^4 e_n^4 ~ c_5^2 D_{n-1,p} D_{n-1,r}^4 e_{n-1}^{4r+p+4}.        (58)

By comparing the exponents of e_{n-1} in the two pairs of relations ((40),(57)) and ((39),(58)), we get the following system of equations:

rp = 2r + p + 2,
r^2 = 4r + p + 4.        (59)

The positive solution of system (59) is given by r ~ 5.3059 and p ~ 2.9289. Therefore, the R-order of the method with memory (25), when gamma_n and beta_n are calculated by (21), is at least 5.3059.

Using the results of Lemma 1 and Theorem 3, we get Theorem 4 as follows.

Theorem 4. Let the varying parameters gamma_n and beta_n in the iterative method (25) be calculated by (22), (23) and (24), respectively. If an initial approximation x_0 is sufficiently close to a simple root a of f(x), then the R-order of convergence of the iterative methods (25) with memory is at least 5.4356, 5.5708 and 5.7016, respectively.

Proof. Method 2: gamma_n and beta_n are calculated by (22). Using Lemma 1 for m = 3 and t_1 = y_{n-1}, we obtain

c_3 + beta_n ~ -c_4 e_{n-1,y}.        (60)

According to (42), (54) and (60), we get

e_{n+1} ~ c_4 c_5 e_{n-1}^2 e_{n-1,y}^2 e_n^4 ~ c_4 c_5 D_{n-1,p}^2 D_{n-1,r}^4 e_{n-1}^{4r+2p+2}.        (61)

By comparing the exponents of e_{n-1} in the two pairs of relations ((40),(57)) and ((39),(61)), we get the following system of equations:

rp = 2r + p + 2,
r^2 = 4r + 2p + 2.        (62)

The positive solution of system (62) is given by r ~ 5.4356 and p ~ 2.9018. Therefore, the R-order of the methods with memory (25), when gamma_n and beta_n are calculated by (22), is at least 5.4356.

Method 3: gamma_n and beta_n are calculated by (23). Using Lemma 1 for m = 4, t_1 = y_{n-1} and t_2 = x_{n-1}, we obtain

c_3 + beta_n ~ c_5 e_{n-1,y} e_{n-1}.        (63)

According to (42), (54) and (63), we get

e_{n+1} ~ c_5^2 e_{n-1}^3 e_{n-1,y}^2 e_n^4 ~ c_5^2 D_{n-1,p}^2 D_{n-1,r}^4 e_{n-1}^{4r+2p+3}.        (64)

By comparing the exponents of e_{n-1} in the two pairs of relations ((40),(57)) and ((39),(64)), we get the following system of equations:

rp = 2r + p + 2,
r^2 = 4r + 2p + 3.        (65)

The positive solution of system (65) is given by r ~ 5.5708 and p ~ 2.8752. Therefore, the R-order of the methods with memory (25), when gamma_n and beta_n are calculated by (23), is at least 5.5708.

Method 4: gamma_n and beta_n are calculated by (24). The Hermite interpolating polynomial H_5(x) satisfies the conditions H_5(y_n) = f(y_n), H_5(x_n) = f(x_n), H_5'(x_n) = f'(x_n), H_5(y_{n-1}) = f(y_{n-1}), H_5(x_{n-1}) = f(x_{n-1}) and H_5'(x_{n-1}) = f'(x_{n-1}). The error of the Hermite interpolation can be expressed as follows:

f(x) - H_5(x) = f^(6)(xi)/6! * (x - y_n)(x - x_n)^2 (x - y_{n-1})(x - x_{n-1})^2  (xi in I).        (66)

Differentiating (66) three times at the point x = y_n we obtain

f'''(y_n) - H_5'''(y_n) = 6 f^(6)(xi)/6! * [(y_n - y_{n-1})(y_n - x_{n-1})^2 + 2 (y_n - x_n)(y_n - x_{n-1})^2 + 4 (y_n - x_n)(y_n - x_{n-1})(y_n - y_{n-1}) + 2 (y_n - x_n)^2 (y_n - x_{n-1}) + (y_n - x_n)^2 (y_n - y_{n-1})]  (xi in I),        (67)

H_5'''(y_n) = f'''(y_n) - 6 f^(6)(xi)/6! * [(y_n - y_{n-1})(y_n - x_{n-1})^2 + 2 (y_n - x_n)(y_n - x_{n-1})^2 + 4 (y_n - x_n)(y_n - x_{n-1})(y_n - y_{n-1}) + 2 (y_n - x_n)^2 (y_n - x_{n-1}) + (y_n - x_n)^2 (y_n - y_{n-1})]  (xi in I).        (68)

The Taylor series of the derivatives of f at the point xi in I about the zero a of f gives

f^(6)(xi) = f'(a)(6! c_6 + 7! c_7 e_xi + O(e_xi^2)),        (69)

where e_xi = xi - a. Substituting (33) and (69) into (68) we have

H_5'''(y_n) ~ 6 f'(a)(c_3 + c_6 e_{n-1,y} e_{n-1}^2).        (70)

Dividing (70) by (30) we obtain

beta_n = -H_5'''(y_n)/(6 f'(x_n)) ~ -(c_3 + c_6 e_{n-1,y} e_{n-1}^2),        (71)

so that

c_3 + beta_n ~ -c_6 e_{n-1,y} e_{n-1}^2.        (72)

According to (42), (54) and (72), we get

e_{n+1} ~ c_5 (c_2 c_5 - c_6) e_{n-1,y}^2 e_{n-1}^4 e_n^4 ~ c_5 (c_2 c_5 - c_6) D_{n-1,p}^2 D_{n-1,r}^4 e_{n-1}^{4r+2p+4}.        (73)

By comparing the exponents of e_{n-1} in the two pairs of relations ((40),(57)) and ((39),(73)), we get the following system of equations:

rp = 2r + p + 2,
r^2 = 4r + 2p + 4.        (74)

The positive solution of system (74) is given by r = (5 + sqrt(41))/2 ~ 5.7016 and p ~ 2.8508. Therefore, the R-order of the methods with memory (25), when gamma_n and beta_n are calculated by (24), is at least 5.7016. The proof is completed.

Remark 5. The efficiency index of the iterative method (25) with the corresponding expressions (21)-(24) for gamma_n and beta_n is (5.3059)^(1/3) ~ 1.7443, (5.4356)^(1/3) ~ 1.7583, (5.5708)^(1/3) ~ 1.7727 and (5.7016)^(1/3) ~ 1.7865, respectively. The efficiency indices of our methods with memory are higher than the efficiency index 5^(1/3) ~ 1.7100 of the Newton-type method (3) with memory.

4. Numerical results

The new methods (18) and (19), with and without memory, are used to solve the nonlinear functions f_i(x) (i = 1, 2), and the computational results are compared with the Newton-type iterative methods (1) and (3) with memory; see Tables 1-2. The absolute errors |x_k - a| in the first four iterations are given in Tables 1-2, where a is the exact root computed with a large number of significant digits. The computational order of convergence rho is defined by

rho ~ ln(|x_{n+1} - x_n| / |x_n - x_{n-1}|) / ln(|x_n - x_{n-1}| / |x_{n-1} - x_{n-2}|).        (75)

The following test functions are used:

f_1(x) = x e^(x^2) - sin^2(x) + 3 cos(x) + 5,  a ~ -1.2076478271,
f_2(x) = arcsin(x^2 - 1) - x/2 + 1,  a ~ 0.5948109684,  x_0 = 0.7.

The numerical results shown in Tables 1-2 are in concordance with the theory developed in this paper. We can notice that the results obtained by our methods with memory are better than those of the other two-step Newton-type methods with memory. The highest order of convergence of our methods with memory is 5.7016, which is higher than that of methods (1) and (3).
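The computational order of convergence (75) needs only four consecutive iterates. The helper below applies it to plain Newton iteration on x^2 - 2 = 0 (an illustrative example rather than one of the paper's test problems), for which rho ~ 2 is expected.

```python
import math

def computational_order(xs):
    """Estimate rho by formula (75) from the last four iterates."""
    x0, x1, x2, x3 = xs[-4:]
    d1, d2, d3 = abs(x1 - x0), abs(x2 - x1), abs(x3 - x2)
    return math.log(d3 / d2) / math.log(d2 / d1)

# Newton's method on f(x) = x^2 - 2, starting from x_0 = 1.5
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
rho = computational_order(xs)
```

For the higher-order methods of this paper, double precision is exhausted after two or three steps, so reproducing entries like those in Tables 1-2 requires multiprecision arithmetic (e.g. a library such as mpmath).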

Methods                       |x_1 - a|   |x_2 - a|   |x_3 - a|   |x_4 - a|    rho
(18), gamma = beta =          .e-4        .7e-9       .e-78       .4e-5        4.
(19), gamma = beta =          .e-4        .6e-9       .e-78       .e-5         4.
(1)                           .5e-5       .e-         .5e-8       .8e-494      4.56
(3), lambda = ., beta =       .4e-4       .66e-       .e-5        .9e-57       5.
(18), (21), gamma = beta =    .e-4        .85e-4      .47e-7      .6e-675      5.
(18), (22), gamma = beta =    .e-4        .45e-4      .45e-       .89e-79      5.4
(18), (23), gamma = beta =    .e-4        .4e-5       .e-4        .e-787       5.57
(18), (24), gamma = beta =    .e-4        .e-5        .5e-48      .4e-844      5.69
(19), (21), gamma = beta =    .e-4        .8e-4       .8e-7       .e-675       5.
(19), (22), gamma = beta =    .e-4        .4e-4       .6e-        .8e-79       5.4
(19), (23), gamma = beta =    .e-4        .e-5        .6e-4       .4e-788      5.57
(19), (24), gamma = beta =    .e-4        .e-5        .4e-48      .47e-845     5.69

Table 1: Numerical results for f_1(x) by the methods with and without memory

Methods                          |x_1 - a|   |x_2 - a|   |x_3 - a|   |x_4 - a|    rho
(18), gamma = beta = .6          .56e-4      .48e-7      .7e-69      .8e-78       4.
(19), gamma = beta = .6          .56e-4      .49e-7      .e-69       .4e-78       4.
(1)                              .7e-8       .7e-4       .e-89       .84e-868     4.56
(3), lambda = ., beta = .75      .4e-5       .5e-7       .96e-9      .88e-696     5.
(18), (21), gamma = beta = .6    .56e-4      .65e-4      .e-8        .77e-68      5.
(18), (22), gamma = beta = .6    .56e-4      .4e-4       .6e-        .e-77        5.4
(18), (23), gamma = beta = .6    .56e-4      .55e-5      .8e-4       .7e-789      5.57
(18), (24), gamma = beta = .6    .56e-4      .6e-6       .5e-49      .6e-855      5.7
(19), (21), gamma = beta = .6    .56e-4      .66e-4      .8e-8       .7e-68       5.
(19), (22), gamma = beta = .6    .56e-4      .4e-4       .9e-        .e-77        5.4
(19), (23), gamma = beta = .6    .56e-4      .57e-5      .e-4        .9e-789      5.57
(19), (24), gamma = beta = .6    .56e-4      .64e-6      .9e-49      .85e-855     5.7

Table 2: Numerical results for f_2(x) by the methods with and without memory

5. Dynamical analysis

The dynamical properties of the associated rational iteration function give us important information about the numerical features of an iterative method, such as its stability and reliability. In what follows, we compare our methods with and without memory to the methods with memory (1) and (3) by using the basins of attraction for three complex polynomials f(z) = z^k - 1, k = 2, 3, 4. We use a technique similar to that in [5] to generate the basins of attraction. To generate the basins of attraction for the zeros of a polynomial with a given iterative method, we take a grid of 400 x 400 points in a rectangle D in C and we use these points as starting values z_0.
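A basin-of-attraction experiment of this kind can be sketched compactly. In the snippet below, plain Newton iteration stands in for the compared methods, and the grid size, region and iteration cap are reduced, illustrative settings rather than the paper's.

```python
import cmath

def basin_labels(f, fp, roots, n=60, box=2.0, maxit=40, tol=1e-6):
    """Label each point of an n-by-n grid on [-box, box]^2 with the index
    of the root its orbit converges to (-1 = no convergence observed)."""
    labels = []
    for i in range(n):
        for j in range(n):
            z = complex(-box + 2 * box * j / (n - 1),
                        -box + 2 * box * i / (n - 1))
            lab = -1
            for _ in range(maxit):
                try:
                    z = z - f(z) / fp(z)      # Newton step as the iteration
                except ZeroDivisionError:
                    break
                for k, r in enumerate(roots):
                    if abs(z - r) < tol:
                        lab = k
                        break
                if lab >= 0:
                    break
            labels.append(lab)
    return labels

# Basins of the three cube roots of unity for f(z) = z^3 - 1
cube_roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
labels = basin_labels(lambda z: z**3 - 1, lambda z: 3 * z**2, cube_roots)
```

Coloring each label, and shading by the iteration count at which the tolerance was first met, reproduces pictures of the kind shown in Figures 1-3.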
If the sequence generated by the iterative method reaches a zero z* of the polynomial with tolerance |z_k − z*| < 10^{−5} within the allowed number of iterations, we decide that z_0 belongs to the basin of attraction of that zero, and we paint the point with the color assigned to that root. Within each basin of attraction, the number of iterations needed to reach the solution is shown in darker or brighter shades (the fewer the iterations, the brighter the color). Black denotes a lack of convergence to any of the roots (within the maximum number of iterations established) or convergence to infinity.

Figure 1: Top row: methods without memory (8) (left) and (9) (right). Middle row: methods () (left) and () (right). Bottom row: methods (8) with (4) (left) and (9) with (4) (right). The results are for the polynomial z^2 − 1.

Figure 2: Top row: methods without memory (8) (left) and (9) (right). Middle row: methods () (left) and () (right). Bottom row: methods (8) with (4) (left) and (9) with (4) (right). The results are for the polynomial z^3 − 1.

Figure 3: Top row: methods without memory (8) (left) and (9) (right). Middle row: methods () (left) and () (right). Bottom row: methods (8) with (4) (left) and (9) with (4) (right). The results are for the polynomial z^4 − 1.

The fractal pictures produced by our methods with memory are almost identical, so we show only the basins of attraction of our methods with memory of R-order 5.7016. The parameters used in the iterative methods (8) and (9) without memory are γ = . and β = .; the parameters λ = . and β = . are used in method () with memory; and our methods with memory use the parameters γ = . and β = . in the first iteration. Figures 1–3 show that the new methods with memory have very few divergent points compared with the other methods. They also show that the basins of attraction of our methods with memory are larger than those of the other methods, and that the convergence of our methods with memory is faster than that of our methods (8) and (9) without memory. On the whole, the new methods with memory perform better than the other methods considered in this paper.

6. Conclusions

In this paper, we have proposed a new family of two-step Newton-type iterative methods with and without memory for solving nonlinear equations. Using two self-correcting parameters, calculated by means of Hermite interpolating polynomials, the R-order of convergence of the new Newton-type method with memory is increased from 4 to 5.7016 without any additional calculations. The main contribution of this paper is that we obtain this maximal order of the Newton-type methods without any additional function evaluations. The new methods are compared in performance and computational efficiency with some existing methods by numerical examples. We observe that the computational efficiency indices of the new methods with memory are better than those of the other existing two-step methods.

Acknowledgement

This work was supported by the National Natural Science Foundation of China (Nos. 7 and 7). We would like to thank the referees for their helpful suggestions.

References

[1] G.
Alefeld, J. Herzberger, Introduction to Interval Computations, Academic Press, New York, 1983.
[2] C. Chun, M. Y. Lee, B. Neta, J. Džunić, On optimal fourth-order iterative methods free from second derivative and their dynamics, Appl. Math. Comput. 218 (2012), 6427–6438.
[3] A. Cordero, J. R. Torregrosa, Variants of Newton's method using fifth-order quadrature formulas, Appl. Math. Comput. 190 (2007), 686–698.
[4] J. Džunić, M. S. Petković, L. D. Petković, Three-point methods with and without memory for solving nonlinear equations, Appl. Math. Comput. 218 (2012), 4917–4927.
[5] B. Neta, A new family of higher order methods for solving equations, Int. J. Comput. Math. 14 (1983), 191–195.
[6] J. M. Ortega, W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[7] A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, 1960.
[8] M. S. Petković, A note on the priority of optimal multipoint methods for solving nonlinear equations, Appl. Math. Comput. 219, 549–55.
[9] M. S. Petković, J. Džunić, B. Neta, Interpolatory multipoint methods with memory for solving nonlinear equations, Appl. Math. Comput. 218 (2011), 2533–2541.
[10] M. Grau-Sánchez, À. Grau, M. Noguera, Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput. 218 (2011), 2377–2385.
[11] J. R. Sharma, R. K. Guha, P. Gupta, Some efficient derivative free methods with memory for solving nonlinear equations, Appl. Math. Comput. 219 (2012), 699–707.
[12] F. Soleymani, M. Sharifi, B. S. Mousavi, An improvement of Ostrowski's and King's techniques with optimal convergence order eight, J. Optim. Theory Appl. 153 (2012), 225–236.
[13] X. Wang, J. Džunić, T. Zhang, On an efficient family of derivative free three-point methods for solving nonlinear equations, Appl. Math. Comput. 219 (2012), 1749–1760.
[14] X. Wang, T. Zhang, A new family of Newton-type iterative methods with and without memory for solving nonlinear equations, Calcolo 51 (2014), 1–15, doi:10.1007/s10092-012-0072-7.
[15] J. L. Varona, Graphic and numerical comparison between iterative methods, Math. Intelligencer 24 (2002), 37–46.