Applied Mathematics and Computation

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use.

Applied Mathematics and Computation 217 (2010) 1887–1895
Contents lists available at ScienceDirect
Applied Mathematics and Computation
journal homepage: www.elsevier.com/locate/amc

Derivative free two-point methods with and without memory for solving nonlinear equations

M.S. Petković a,*, S. Ilić b, J. Džunić a
a Faculty of Electronic Engineering, Department of Mathematics, University of Niš, 18000 Niš, Serbia
b Faculty of Science, Department of Mathematics, University of Niš, 18000 Niš, Serbia

Keywords: Multipoint iterative methods; Nonlinear equations; Derivative free methods; Methods with memory; Computational efficiency

Abstract: Two families of derivative free two-point iterative methods for solving nonlinear equations are constructed. These methods use a suitable parametric function and an arbitrary real parameter. It is proved that the first family has convergence order four, requiring only three function evaluations per iteration. In this way it is demonstrated that the proposed family without memory supports the Kung–Traub hypothesis (1974) on the upper bound 2^n of the order of multipoint methods based on n + 1 function evaluations. Further acceleration of the convergence rate is attained by varying a free parameter from step to step using information available from the previous step. This approach leads to a family of two-step self-accelerating methods with memory whose order of convergence is at least 2 + \sqrt{5} ≈ 4.236, and even 2 + \sqrt{6} ≈ 4.449 in special cases. The increase of convergence order is attained without any additional calculations, so that the family of methods with memory possesses a very high computational efficiency. Numerical examples are included to demonstrate the exceptional convergence speed of the proposed methods using only a few function evaluations. © 2010 Elsevier Inc. All rights reserved.

1. Introduction

Multipoint iterative methods for solving nonlinear equations are of great practical importance, since they overcome the theoretical limits of one-point methods concerning convergence order and computational efficiency. Although this type of root-finding method was extensively studied in Traub's book [1] and in papers and books published in the 1960s and 1970s (see, e.g., [2–7]), interest in multipoint methods has been renewed in the first decade of the 21st century. The main goal and motivation in constructing root-solvers is to achieve as high a convergence order as possible while consuming as few function evaluations as possible. In the case of multipoint methods, this requirement is closely connected with the results of Kung and Traub [6], who conjectured that the order of convergence of any multipoint method without memory, consuming n + 1 function evaluations per iteration, cannot exceed the bound 2^n (called the optimal order). Multipoint methods with this property are usually called optimal methods. An extensive list of optimal methods can be found, for example, in [8]. Defining the computational efficiency of a root-finding method by the efficiency index E = r^{1/h}, where r is the order of convergence and h is the number of function evaluations per iteration (see [2, p. 20]), the optimal computational efficiency is 2^{n/(n+1)} for optimal methods. Therefore, the efficiency index of optimal two-point methods is 2^{2/3} ≈ 1.587.

Let a be a simple real root of a real function f: D ⊂ R → R and let x_0 be an initial approximation to a. In many practical situations it is preferable to avoid the calculation of derivatives of f. For example, the classical Steffensen method [9]

* Corresponding author. E-mail addresses: ms@eunet.rs (M.S. Petković), sneska@mf.ni.ac.rs (S. Ilić), jovana.dzunic@elfak.ni.ac.rs (J. Džunić).
doi:10.1016/j.amc.2010.06.043

x_{k+1} = x_k - \frac{f(x_k)^2}{f(x_k + f(x_k)) - f(x_k)}   (k = 0, 1, ...),

which is obtained from Newton's method

x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}   (k = 0, 1, ...)

by substituting the derivative f'(x_k) by the ratio \frac{f(x_k + f(x_k)) - f(x_k)}{f(x_k)}, is an example of a derivative free root-finding method. The efficiency index of Steffensen's method is 2^{1/2} ≈ 1.414, the same as that of Newton's method. Its computational efficiency can be improved by using derivative free two-point optimal methods of the fourth order, as shown by Kung and Traub [6]. Recently, Ren et al. [10] proposed the following one-parameter family of derivative free two-point methods of the fourth order:

y_k = x_k - \frac{f(x_k)}{f[x_k, z_k]},
x_{k+1} = y_k - \frac{f(y_k)}{f[x_k, y_k] + f[y_k, z_k] - f[x_k, z_k] + a(y_k - x_k)(y_k - z_k)},   (1)

where z_k = x_k + f(x_k), f[x, y] = \frac{f(x) - f(y)}{x - y} denotes a divided difference, and a is a real parameter.

In this paper we consider other derivative free two-point families of methods without and with memory. We prove that the order of convergence is four for the methods without memory, and asymptotically 2 + \sqrt{5} ≈ 4.236 for the methods with memory, and even 2 + \sqrt{6} ≈ 4.449 in a special case. From a theoretical as well as a practical point of view, this means that the presented methods are competitive with, or even better than, existing optimal two-point methods. The presented derivative free methods are useful for finding roots of a function f when the calculation of derivatives of f is complicated and expensive.

In Section 2 we construct derivative free two-point methods of optimal order four. The first step is a Steffensen-like method, while a suitable approximation of the first derivative is used in the second step to reduce the number of function evaluations. In this way the order of convergence four is reached with only three function evaluations, which means that these methods are optimal in the sense of the Kung–Traub conjecture.
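As a minimal illustration of the derivative free idea (our own sketch, not code from the paper), the following Python snippet implements Steffensen's method; the test equation x^2 - 2 = 0, starting point and tolerance are chosen here purely for the demo.

```python
import math

def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method: Newton's iteration with f'(x) replaced by
    the derivative free quotient (f(x + f(x)) - f(x)) / f(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx          # approximates f'(x) * f(x)
        x = x - fx * fx / denom         # derivative free Newton step
    return x

# Root of x^2 - 2 (i.e. sqrt(2)) without evaluating any derivative.
root = steffensen(lambda x: x * x - 2.0, 1.5)
```

Each iteration costs two function evaluations and no derivatives, which is exactly the trade-off the two-point methods below improve upon.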
Further improvement of the convergence rate is considered in Section 3. A family of two-point methods with memory is constructed by varying a free parameter from step to step using only available information. Accelerating the convergence rate from 4 to 2 + \sqrt{5} ≈ 4.236 (asymptotically) without any new calculations, the computational efficiency of these methods is improved compared to the methods without memory. This theoretical result is confirmed by numerical examples, two of which are presented in Section 4. A comparison with other methods of the same type, performed in Section 4, shows that the proposed methods are competitive with, or even better than, existing two-point optimal methods.

2. Derivative free optimal two-point methods

Let

u_1(x) = \frac{f(x) - f(x - b f(x))}{b f(x)},

where b is an arbitrary real constant. Upon expanding f(x - b f(x)) into a Taylor series about x, we arrive at

u_1(x) = f'(x) - \frac{b}{2} f(x) f''(x) + O(f(x)^2).

Hence, we obtain an approximation to the derivative f'(x) in the form

f'(x) ≈ u_1(x) = \frac{f(x) - f(x - b f(x))}{b f(x)}.   (2)

Note that the approximation f'(x) ≈ \frac{f(x + b f(x)) - f(x)}{b f(x)}, used in [1, p. 178], gives the same results as the methods based on (2) and considered in the sequel.

To construct derivative free two-point methods of optimal order, let us start from the doubled Newton method

y_k = x_k - \frac{f(x_k)}{f'(x_k)},
x_{k+1} = y_k - \frac{f(y_k)}{f'(y_k)}   (k = 0, 1, ...).   (3)

It is well known that the above iterative scheme is inefficient, and our primary aim is to improve its computational efficiency. We also wish to substitute the derivatives f'(x_k) and f'(y_k) by convenient approximations. The latter requirement can be attained at the first step using the approximation (2) to f'(x_k). The derivative f'(y_k) in the second step will be approximated by u_2(x) = u_1(x)/h(u, v), where h(u, v) is a differentiable function that depends on the two real variables

u = \frac{f(y)}{f(x)},   v = \frac{f(y)}{f(x - b f(x))}.

Therefore,

f'(x_k) ≈ u_1(x_k) = \frac{f(x_k) - f(x_k - b f(x_k))}{b f(x_k)},   (4)

f'(y_k) ≈ u_2(x_k) = \frac{u_1(x_k)}{h(u_k, v_k)},   u_k = \frac{f(y_k)}{f(x_k)},   v_k = \frac{f(y_k)}{f(x_k - b f(x_k))}.   (5)

Now we start from the iterative scheme (3) and, using (4) and (5), we construct the following family of two-point methods:

y_k = x_k - \frac{f(x_k)}{u_1(x_k)},
x_{k+1} = y_k - h(u_k, v_k) \frac{f(y_k)}{u_1(x_k)}   (k = 0, 1, ...).   (6)

The function h should be determined in such a way that the order of convergence of the two-point method (6) is four.

Throughout this paper e_k = x_k - a denotes the error of the approximation calculated in the kth iteration. In our convergence analysis we will sometimes omit the iteration index k for simplicity. A new approximation x_{k+1} to the root a will be denoted by \hat{x}. Let us introduce the errors

e = x - a,   d = y - a,   \hat{e} = \hat{x} - a.

We will use Taylor expansions about the root a to express f(x), f(x - b f(x)) and f(y) as series in e. Then we represent d and \hat{e} by Taylor polynomials in e. Assume that x is sufficiently close to the root a of f; then u and v are close enough to 0. Let us represent the function h of two variables appearing in (6) by its Taylor series about (0, 0) in a linearized form,

h(u, v) = h(0,0) + h_u(0,0) u + h_v(0,0) v,   (7)

where h_u and h_v denote the partial derivatives of h with respect to u and v. It can be proved that the use of partial derivatives of higher order does not provide faster convergence (greater than four). The expressions of the Taylor polynomials (in e) of the functions involved in (6) are cumbersome and lead to tedious calculations, which require the use of a computer. If a computer is employed anyway, it is reasonable to perform all calculations necessary for finding the convergence rate using symbolic computation, as done in this paper. Such an approach was used in [11].
Therefore, instead of presenting explicit expressions in the convergence analysis, we use symbolic computation in the programming package Mathematica to find candidates for h. If necessary, intermediate expressions can always be displayed during this computation using the program given below, although their presentation is most frequently of academic interest. We will find the coefficients h(0,0), h_u(0,0), h_v(0,0) of the development (7) using a simple program in Mathematica and an interactive approach explained by the comments COM-1, COM-2 and COM-3. First, let us introduce the following abbreviations used in this program:

ak = f^{(k)}(a)/(k! f'(a)), e = x - a, e1 = \hat{e} = \hat{x} - a, b = b, fx = f(x), fxi = f(x - b f(x)), fi = [f(x) - f(x - b f(x))]/(b f(x)), fy = f(y), f1a = f'(a), h0 = h(0,0), hu = h_u(0,0), hv = h_v(0,0).

Program (written in Mathematica)

fx = f1a (e + a2 e^2 + a3 e^3 + a4 e^4);
fxi = f1a ((e - b fx) + a2 (e - b fx)^2 + a3 (e - b fx)^3);
fi = Series[(fx - fxi)/(b fx), {e, 0, 2}];
d = e - b fx^2 Series[1/(fx - fxi), {e, 0, 2}];
fy = f1a (d + a2 d^2 + a3 d^3);
u = Series[fy/fx, {e, 0, 2}];
v = Series[fy/fxi, {e, 0, 2}];
e1 = d - (h0 + hu u + hv v) fy/fi // Simplify;
C2 = Coefficient[e1, e^2]
COM-1: h0 = 1;
C3 = Coefficient[e1, e^3] // Simplify
COM-2: hu = 1; hv = 1;
C4 = Coefficient[e1, e^4] // Simplify
COM-3:
Out[C4] = a2 (-1 + b f1a) (a3 (-1 + b f1a) + a2^2 (5 - 5 b f1a + b^2 f1a^2))

Comment COM-1: From the expression of the error e1 = \hat{e} = \hat{x} - a we observe that \hat{e} is of the form

\hat{e} = \hat{x} - a = C_2 e^2 + C_3 e^3 + C_4 e^4 + O(e^5).   (8)

The iterative two-point methods (6) will have order of convergence equal to four if we determine the coefficients of the development appearing in (7) so that C_2 and C_3 (in (8)) vanish. We find these coefficients by equating the corresponding expressions to 0. First, from Out[C2] we take h0 = h(0,0) = 1 and then calculate C_3.

Comment COM-2: From Out[C3] we see that the term b^2 f'(a) can be eliminated only if we choose hu = h_u(0,0) = 1. With this value, the coefficient C_3 will vanish on taking hv = h_v(0,0) = 1.

Comment COM-3: Substituting the entries h(0,0) = h_u(0,0) = h_v(0,0) = 1 in the expression of e1 (= \hat{e}), we obtain

\hat{e} = \hat{x} - a = a_2 (1 - b f'(a)) [a_2^2 (5 - 5 b f'(a) + b^2 f'(a)^2) - a_3 (1 - b f'(a))] e^4 + O(e^5),

or, using the iteration index,

e_{k+1} = x_{k+1} - a = a_2 (1 - b f'(a)) [a_2^2 (5 - 5 b f'(a) + b^2 f'(a)^2) - a_3 (1 - b f'(a))] e_k^4 + O(e_k^5).   (9)

Therefore, to provide the fourth order of convergence of the two-point methods (6), it is necessary to choose a function h of two variables whose truncated development (7) satisfies

h(0,0) = h_u(0,0) = h_v(0,0) = 1.   (10)

According to the above analysis we can formulate the following convergence theorem.

Theorem 1. Let h(u, v) be a differentiable function of two variables that satisfies the conditions h(0,0) = h_u(0,0) = h_v(0,0) = 1. If an initial approximation x_0 is sufficiently close to the root a of a function f, then the convergence order of the family of two-point methods (6) is equal to four.

Let us stop for a while to study the choice of the function h in (6). Considering (10), we see that the simplest form of the function h is obviously

h(u, v) = 1 + u + v.   (11)

Let us note that any function of the form h(u, v) = 1 + u + v + g(u, v), where g is a differentiable function such that g(0,0) = g_u(0,0) = g_v(0,0) = 0, satisfies the condition (10). For example, we can take g(u, v) = c u v (c is a parameter), g(u, v) = v(p_1 u + ... + p_m u^m) or g(u, v) = u(q_1 v + ... + q_s v^s), and so on. However, from a practical point of view, we should take forms with as low a computational cost as possible. Another example is the function

h(u, v) = \frac{1 + u}{1 - v}.   (12)

It is easy to check that it satisfies the conditions (10).
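The three conditions in (10) are also easy to check numerically. The Python sketch below (our own illustration, not from the paper) verifies h(0,0) = h_u(0,0) = h_v(0,0) = 1 by central differences for the candidate weight functions mentioned in this section.

```python
def satisfies_conditions(h, eps=1e-6, tol=1e-4):
    """Check h(0,0) = h_u(0,0) = h_v(0,0) = 1 via central differences."""
    h00 = h(0.0, 0.0)
    hu = (h(eps, 0.0) - h(-eps, 0.0)) / (2 * eps)   # dh/du at (0,0)
    hv = (h(0.0, eps) - h(0.0, -eps)) / (2 * eps)   # dh/dv at (0,0)
    return all(abs(t - 1.0) < tol for t in (h00, hu, hv))

candidates = [
    lambda u, v: 1 + u + v,                    # formula (11)
    lambda u, v: (1 + u) / (1 - v),            # formula (12)
    lambda u, v: 1 / (1 - u - v),
    lambda u, v: (1 + u) * (1 + v),
    lambda u, v: 1 / ((1 - u) * (1 - v)),      # yields the Kung-Traub method (28)
]
results = [satisfies_conditions(h) for h in candidates]
```

Any candidate passing this check produces a fourth-order member of family (6) by Theorem 1.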
A more general (but more complicated) two-parameter function h(u, v) = (1 + u + a u^2)/(1 - v + b v^2) also satisfies this condition. Further convenient choices are

h(u, v) = \frac{1}{1 - u - v}   and   h(u, v) = (1 + u)(1 + v).

Finally, substituting

h(u, v) = \frac{1}{(1 - u)(1 - v)}

in (6) gives the Kung–Traub method (28) considered in Section 4.

Remark 1. The family of two-point methods (6) requires three function evaluations and has order of convergence four. Therefore, this family is optimal in the sense of the Kung–Traub conjecture and possesses the computational efficiency E(6) = 4^{1/3} ≈ 1.587.

Remark 2. According to (9), the asymptotic error constant of the family (6) is

C_4(a) = \lim_{k \to \infty} \frac{x_{k+1} - a}{(x_k - a)^4} = a_2 (1 - b f'(a)) [a_2^2 (5 - 5 b f'(a) + b^2 f'(a)^2) - a_3 (1 - b f'(a))].

However, this expression is valid only if the Taylor series of h given by (7) is truncated to the displayed members. Obviously, it holds for the function h defined by (11). For a particular function h, the asymptotic error constant C_4(a) is given by a specific expression. For example, choosing h(u, v) = (1 + u)/(1 - v) (formula (12)) we find the asymptotic error constant

C_4(a) = a_2 (1 - b f'(a))^2 (a_2^2 (3 - b f'(a)) - a_3).   (13)

Remark 3. Since d = a_2 (1 - b f'(a)) e^2 + O(e^3), it is easy to show that the asymptotic error constant C_4(a) always contains the factor 1 - b f'(a), that is,

C_4(a) = (1 - b f'(a)) W(a_2, a_3, f'(a)),   (14)

where W is a non-zero constant depending on a_2, a_3, f'(a). This fact is very important in developing the corresponding two-point methods with memory; see Section 3.

Remark 4. We speak about the family (6), since the choice of various functions h satisfying the conditions (10) and of a parameter b gives a variety of two-point methods.

Remark 5. The statement fi = Series[(fx - fxi)/(b fx), {e, 0, 2}] in the above program gives

\frac{f(x_k) - f(x_k - b f(x_k))}{b f(x_k)} = f'(a) [1 + a_2 (2 - b f'(a)) e_k + O(e_k^2)],   (15)

which will be utilized in Section 3.

3. Two-point methods with memory

Following Traub's classification [1], the two-point methods (6) belong to the class of methods without memory. Recall that a multipoint method with memory reuses old information, for example, some entries from the previous iteration. In this section we will improve the convergence rate of the family (6) using an old idea given in [1, p. 186], consisting of varying the parameter b as the iteration proceeds. Considering the error relation (9) and having in mind the form of the asymptotic error constant (14), we observe that the order of convergence of the two-point family (6) could be increased from 4 to 5 by taking b = 1/f'(a). However, in practice we have no information on the exact value f'(a), and this idea cannot be fully realized. Instead, we can vary b from step to step using information available from the previous step and exceed the order 4 without using any new function evaluations. By defining b recursively as the iteration proceeds, we obtain two-point methods with memory corresponding to (6). Such methods may also be called self-accelerating methods. From the previous discussion, it is clear that we are forced to estimate b by an approximation of 1/f'(a) using available data. We present two methods.
Method (I): Following (2), we estimate

b_k = \frac{b_{k-1} f(x_{k-1})}{f(x_{k-1}) - f(x_{k-1} - b_{k-1} f(x_{k-1}))}   (k = 1, 2, ...),   (16)

starting from b_0. The initial value b_0 may be chosen in various ways; we found by practical experiments that a small entry (in magnitude), for example, b_0 = 10^{-2} or less, gives satisfactory results in practice. Since

\frac{f(x) - f(x - b f(x))}{b f(x)} \to f'(x)   when b \to 0,

it may be expected that b_k, defined by (16), will quickly approach 1/f'(a) in the course of the iterative process, especially in the later iterations, if the initial approximation x_0 is reasonably close to the root a. Using the recursively calculated parameter b_k, we define the following two-step methods with memory:

y_k = x_k - \frac{f(x_k)}{u_1(x_k, b_k)},
x_{k+1} = y_k - h(u_k, v_k) \frac{f(y_k)}{u_1(x_k, b_k)}   (k = 0, 1, ...),   (17)

where

u_1(x_k, b_k) := \frac{f(x_k) - f(x_k - b_k f(x_k))}{b_k f(x_k)}

and b_k is calculated by (16). To estimate the order of convergence of the two-point methods (17) with memory, we will use the relations (14)–(16). First, C_4 given by (14) (this quantity is not a constant now) is rewritten in the form

\tilde{C}_4 = (1 - b_k f'(a)) \tilde{W}_k(a_2, a_3, f'(a), b_k).   (18)

It can be observed from the convergence analysis that the parameter b_k always appears multiplied by f'(a). Putting q_k = b_k f'(a), we may rewrite (18) as

\tilde{C}_4 = (1 - q_k) \tilde{W}_k(a_2, a_3, q_k).   (19)

From (15) and (16) we find

b_k = \frac{1}{f'(a) [1 + a_2 (2 - q_{k-1}) e_{k-1} + O(e_{k-1}^2)]} = \frac{1}{f'(a)} [1 - a_2 (2 - q_{k-1}) e_{k-1} + O(e_{k-1}^2)].

Hence

1 - b_k f'(a) = 1 - q_k = a_2 (2 - q_{k-1}) e_{k-1} + O(e_{k-1}^2).   (20)

In the limiting process x_k \to a, the quantity \tilde{W}_k(a_2, a_3, q_k) in (19) does not vanish and tends to a constant. Using this fact and (20), we can write the following error relation for the two-point methods (17):

e_{k+1} = x_{k+1} - a = \tilde{C}_4 e_k^4 + O(e_k^5) = (1 - q_k) \tilde{W}_k(a_2, a_3, q_k) e_k^4 + O(e_k^5),   (21)

giving

e_{k+1} = a_2 (2 - q_{k-1}) \tilde{W}_k(a_2, a_3, q_k) e_k^4 e_{k-1} + O(e_k^5).

Hence, when x_k \to a,

e_{k+1} \sim e_k^4 e_{k-1},   (22)

where the notation c \sim d means that c = O(d).

Method (II): We may approximate the parameter b using the secant approach, that is,

b_k = \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})} ≈ \frac{1}{f'(a)}.   (23)

Then we can state the following family of two-point methods with memory:

y_k = x_k - \frac{f(x_k)}{u_1(x_k, b_k)},
x_{k+1} = y_k - h(u_k, v_k) \frac{f(y_k)}{u_1(x_k, b_k)}   (k = 0, 1, ...),   (24)

where

u_1(x_k, b_k) := \frac{f(x_k) - f(x_k - b_k f(x_k))}{b_k f(x_k)}

and b_k is calculated by (23). In a similar way as in Section 2 for the family (6), we can derive the error relation for the family of methods (24),

e_{k+1} = x_{k+1} - a = \tilde{C}_4 e_k^4 + O(e_k^5) = (1 - b_k f'(a)) \tilde{W}_k(a_2, a_3, q_k) e_k^4 + O(e_k^5),   q_k = b_k f'(a).   (25)

Using a Taylor series about a, we arrive at

f(x_k) = f'(a) e_k + \frac{1}{2} f''(a) e_k^2 + O(e_k^3)

and hence

\frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} = f'(a) + \frac{1}{2} (e_k + e_{k-1}) f''(a) + O(e_{k-1}^2).

Then

1 - b_k f'(a) = 1 - \frac{f'(a)}{f'(a) + \frac{1}{2} (e_k + e_{k-1}) f''(a) + O(e_{k-1}^2)} = \frac{(e_k + e_{k-1}) f''(a)}{2 f'(a)} + O(e_{k-1}^2).

Substituting this expression for 1 - b_k f'(a) in the error relation (25) leads again to the relation (22).
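A compact sketch of the self-accelerating scheme (17) with the update (16) might look as follows in Python (our own illustration under the paper's choices h(u, v) = 1 + u + v and b_0 = 0.01; the test equation x^3 - x - 1 = 0 and the starting point are ours).

```python
def two_point_with_memory(f, x0, b0=0.01, h=lambda u, v: 1 + u + v, iters=8):
    """Family (17): Steffensen-like first step, weighted second step,
    with the parameter b updated by (16) so that b_k -> 1/f'(a)."""
    x, b = x0, b0
    for _ in range(iters):
        fx = f(x)
        if fx == 0.0:
            break                       # exact root reached
        fxb = f(x - b * fx)
        if fxb == fx:
            break                       # step below machine precision
        u1 = (fx - fxb) / (b * fx)      # derivative approximation (2)
        y = x - fx / u1                 # first (Steffensen-like) step
        fy = f(y)
        x_new = y - h(fy / fx, fy / fxb) * fy / u1   # second step
        b = b * fx / (fx - fxb)         # self-accelerating update (16)
        x = x_new
    return x

# Real root of x^3 - x - 1 (near 1.3247), starting from x0 = 1.5.
root = two_point_with_memory(lambda x: x**3 - x - 1.0, 1.5)
```

Note that the update of b reuses only quantities already computed in the previous iteration, so the acceleration costs no extra function evaluations.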
To estimate the convergence rate of the two-step methods (17) and (24), we will use the concept of the R-order of convergence introduced by Ortega and Rheinboldt [12] and the following assertion (see [13, p. 287]).

Theorem 2. If the errors of approximations e_j = x_j - a obtained in an iterative root-finding process IP satisfy

e_{k+1} \sim \prod_{i=0}^{n} (e_{k-i})^{m_i},   k \geq k(\{e_k\}),

then the R-order of convergence of IP, denoted by O_R(IP, a), satisfies the inequality O_R(IP, a) \geq s^*, where s^* is the unique positive solution of the equation

s^{n+1} - \sum_{i=0}^{n} m_i s^{n-i} = 0.   (26)

Now we can state the convergence theorem of the two-point methods (17) and (24) with memory.
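For the error patterns derived above, equation (26) with n = 1 is just a quadratic. The following small Python check (our own illustration) recovers the two lower bounds quoted in this section.

```python
import math

def r_order(m0, m1):
    """Positive root of s^2 - m0*s - m1 = 0, the lower bound from (26) with n = 1."""
    return (m0 + math.sqrt(m0 * m0 + 4 * m1)) / 2

order_general = r_order(4, 1)   # e_{k+1} ~ e_k^4 e_{k-1}   -> 2 + sqrt(5)
order_special = r_order(4, 2)   # e_{k+1} ~ e_k^4 e_{k-1}^2 -> 2 + sqrt(6)
```

The two values are approximately 4.236 and 4.449, matching Theorems 3 and 4 below.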

Theorem 3. If an initial approximation x_0 is sufficiently close to the root a of a function f, then the R-order of convergence of the families of two-point methods (17) and (24) is at least 2 + \sqrt{5}.

Proof. Using Theorem 2 for n = 1, m_0 = 4, m_1 = 1, according to (22) and (26) we form the quadratic equation

s^2 - 4s - 1 = 0.

The lower bound of the R-order of the methods (17) and (24) is determined by the unique positive root s^* = 2 + \sqrt{5} ≈ 4.236 of this equation. □

The choice of h(u, v) = (1 + u)/(1 - v) in (17) and (24) provides more accurate approximations compared to the function h(u, v) = 1 + u + v (see, for example, Tables 1 and 2). In fact, the following assertion is valid.

Theorem 4. If an initial approximation x_0 is sufficiently close to the root a of a function f and h(u, v) = (1 + u)/(1 - v), then the R-order of convergence of the families of two-point methods (17) and (24) is at least 2 + \sqrt{6} ≈ 4.449.

Proof. From (14) we see that the asymptotic error constant C_4(a) for h(u, v) = (1 + u)/(1 - v) contains the factor (1 - b f'(a))^2. Since 1 - b_k f'(a) = O(e_{k-1}) (see (20)), from a relation similar to (21) we obtain

e_{k+1} \sim e_k^4 e_{k-1}^2.

Hence, according to Theorem 2 and (26), it follows that the R-order of the families (17) and (24), with h(u, v) = (1 + u)/(1 - v), is at least s^* = 2 + \sqrt{6} ≈ 4.449, where s^* is the positive root of the equation s^2 - 4s - 2 = 0. □

Remark 6. The R-order at least 2 + \sqrt{6} can also be achieved with the function h(u, v) = (1 + u)(1 + v + v^2) and some more complicated functions.

Remark 7. The fact that the order 2 + \sqrt{5} of the families (17) and (24) is greater than 4 (Theorem 3) does not mean that the Kung–Traub conjecture is refuted. Recall that this conjecture holds only for multipoint methods without memory, while the iterative formulas (17) and (24) define methods with memory.

Table 1
f(x) = e^x sin 5x - 2, x_0 = 1.5, a = 1.363973180263712...

Two-point methods                        |x_1 - a|   |x_2 - a|   |x_3 - a|   |x_4 - a|
(6)  h = 1 + u + v, b = 0.01             1.70(-2)    6.41(-8)    2.27(-29)   3.57(-115)
(17) h = 1 + u + v, b_0 = 0.01           1.70(-2)    2.91(-8)    1.08(-34)   8.35(-146)
(24) h = 1 + u + v, b_0 = 0.01           1.70(-2)    2.35(-9)    1.03(-38)   5.63(-163)
(6)  h = (1+u)/(1-v), b = 0.01           8.36(-3)    4.85(-9)    6.98(-34)   2.98(-133)
(17) h = (1+u)/(1-v), b_0 = 0.01         8.36(-3)    1.83(-9)    4.51(-41)   3.79(-180)
(24) h = (1+u)/(1-v), b_0 = 0.01         8.36(-3)    1.93(-10)   2.12(-44)   2.04(-195)
Ostrowski IM (27)                        6.40(-3)    2.53(-9)    7.39(-35)   5.41(-137)
Kung–Traub IM (28)                       1.68(-2)    1.11(-7)    3.96(-28)   6.45(-110)
Jarratt IM (29)                          6.39(-3)    2.82(-9)    1.24(-34)   4.67(-136)
Maheshwari IM (30)                       2.57(-2)    2.95(-7)    1.51(-26)   5.42(-42)*
Ren–Wu–Bi IM (1), a = 0, x_0 = 1.4 *     1.85(-2)    3.31(-4)    9.35(-12)   5.42(-42)

* The Ren–Wu–Bi method does not converge in 100 iterative steps for x_0 = 1.5.

Table 2
f(x) = (x - 2)(x^10 + x + 1) e^{-x-1}, x_0 = 2.1, a = 2.

Two-point methods                        |x_1 - a|   |x_2 - a|   |x_3 - a|   |x_4 - a|
(6)  h = 1 + u + v, b = 0.01             1.01(-3)    7.84(-11)   2.93(-39)   5.68(-153)
(17) h = 1 + u + v, b_0 = 0.01           1.01(-3)    5.01(-11)   2.23(-42)   3.13(-175)
(24) h = 1 + u + v, b_0 = 0.01           1.01(-3)    4.00(-11)   6.60(-43)   1.92(-177)
(6)  h = (1+u)/(1-v), b = 0.01           3.29(-4)    3.66(-13)   5.59(-49)   3.04(-192)
(17) h = (1+u)/(1-v), b_0 = 0.01         3.29(-4)    2.00(-13)   5.20(-55)   4.69(-240)
(24) h = (1+u)/(1-v), b_0 = 0.01         3.29(-4)    1.45(-13)   7.63(-56)   1.13(-243)
Ostrowski IM (27)                        1.72(-3)    3.13(-10)   3.49(-37)   5.43(-145)
Kung–Traub IM (28)                       7.56(-3)    6.80(-7)    4.88(-23)   1.29(-87)
Jarratt IM (29)                          1.75(-3)    3.42(-10)   5.11(-37)   2.54(-144)
Maheshwari IM (30)                       5.27(-3)    1.59(-7)    1.45(-25)   9.97(-98)
Ren–Wu–Bi IM (1), a = 0                  2.66(-2)    2.09(-3)    1.26(-6)    2.53(-19)
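Double precision cannot reproduce the multi-precision error columns above, but the qualitative behavior of family (6) is easy to observe. The sketch below (our own illustration) runs (6) with h = 1 + u + v and b = 0.01 on the Table 2 test function and checks that four iterations reach the root a = 2 to machine accuracy.

```python
import math

def family6_step(f, x, b, h=lambda u, v: 1 + u + v):
    """One iteration of the derivative free two-point family (6)."""
    fx = f(x)
    if fx == 0.0:
        return x                         # exact root reached
    fxb = f(x - b * fx)
    if fxb == fx:
        return x                         # step below machine precision
    u1 = (fx - fxb) / (b * fx)           # derivative approximation (2)
    y = x - fx / u1                      # Steffensen-like first step
    fy = f(y)
    return y - h(fy / fx, fy / fxb) * fy / u1

# Test function of Table 2, starting from x0 = 2.1.
f = lambda x: (x - 2.0) * (x**10 + x + 1.0) * math.exp(-x - 1.0)
x = 2.1
for _ in range(4):
    x = family6_step(f, x, b=0.01)
```

With three function evaluations per step, the error collapses to the double-precision floor within three to four iterations, consistent with fourth-order convergence.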

Remark 8. The values of b_k approximated by (16) and (23) have an interesting geometric interpretation. In both cases b_k approaches 1/f'(a). Recall that 1/f'(a) determines the slope of the normal to the tangent line to the curve y = f(x) at the point [a, 0]. Therefore, b_k tends to the tangent of the angle between this normal and the x axis.

According to Theorems 3 and 4, we conclude that the iterative adjustment of the parameter b leads to an increase of the convergence speed of the two-point methods considered. Since this acceleration of convergence is achieved without any new function evaluations, the computational efficiency of the proposed methods (17) and (24) with memory is increased compared to the basic methods (6) without memory. The numerical results presented in Section 4 clearly confirm the increase of convergence speed of the families of two-point methods (17) and (24).

4. Numerical results

In this section we demonstrate the convergence behavior of the proposed families (6), (17) and (24) of two-point methods with and without memory. For comparison, in our numerical experiments we also tested the derivative free methods (1) and several two-point iterative methods (IM) reviewed below. For simplicity, we omit the iteration index and introduce the Newton correction w(x) and a divided difference by

w(x) = \frac{f(x)}{f'(x)},   f[x, y] = \frac{f(x) - f(y)}{x - y}.

Ostrowski's method [2]:

y = x - w(x),
\hat{x} = y - \frac{f(y)}{f'(x)} \cdot \frac{f(x)}{f(x) - 2 f(y)}.   (27)

Kung–Traub's method [6]:

y = x - \frac{b f(x)^2}{f(x + b f(x)) - f(x)},
\hat{x} = y - \frac{f(y) f(x + b f(x))}{[f(x + b f(x)) - f(y)] f[x, y]}.   (28)

Note that the family of two-point methods (28) is a special case (n = 2) of a derivative free multipoint family of methods of arbitrary order of convergence 2^n requiring n + 1 function evaluations; see [6]. The family (28) can be obtained as a special case of the family (6) by taking h(u, v) = [(1 - u)(1 - v)]^{-1}.
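For comparison purposes, the derivative free Kung–Traub scheme (28) can be sketched as follows (our own Python illustration; the test equation x^2 - 2 = 0 and the values b = 0.01, x_0 = 1.5 are chosen for the demo only).

```python
import math

def kung_traub(f, x0, b=0.01, tol=1e-13, max_iter=20):
    """Fourth-order derivative free Kung-Traub method (28)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        fxb = f(x + b * fx)                      # f(x + b f(x))
        y = x - b * fx * fx / (fxb - fx)         # first step
        fy = f(y)
        fdd = (fx - fy) / (x - y)                # divided difference f[x, y]
        x = y - fy * fxb / ((fxb - fy) * fdd)    # second step
    return x

root = kung_traub(lambda x: x * x - 2.0, 1.5)
```

Like family (6), each iteration uses exactly three function evaluations and no derivatives.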
Other optimal multipoint methods of arbitrary order of convergence were considered in [14].

Jarratt's method [3]:
$$\hat{x} = x - \frac{1}{2} w(x) + \frac{f(x)}{f'(x) - 3 f'\!\left(x - \frac{2}{3} w(x)\right)}. \tag{29}$$

Maheshwari's method [15]:
$$\hat{x} = x - w(x) \left\{ \frac{[f(x - w(x))]^2}{f(x)^2} - \frac{f(x)}{f(x - w(x)) - f(x)} \right\}. \tag{30}$$

The listed methods, including the Ren–Wu–Bi family (1) and the proposed methods (6), (17) and (24), have efficiency index at least $4^{1/3} \approx 1.587$. We did not include numerical results obtained by iterative methods of lower computational efficiency, since such methods are not competitive and their lower ranking is predictable. Other optimal two-point methods of fourth order with efficiency index $4^{1/3}$ were considered in [16–22], but the cited papers do not exhaust all sources.

For demonstration, among many numerical examples we selected two, implemented in the programming package Mathematica using multi-precision arithmetic.

Example 1. We tested the proposed methods (6), (17) and (24) with β = β_0 = 0.01, choosing the two forms (11) and (12) of the function h, the family (1) for a = 0, and the two-point methods (27)–(30) listed above. These methods were applied to the function
$$f(x) = e^x \sin 5x - 2$$
to approximate its root a = 1.363973180263712... The absolute errors of the approximations x_k in the first four iterations are displayed in Table 1, where A(-h) means A × 10^{-h}. All methods used the common initial approximation x_0 = 1.5, except the Ren–Wu–Bi method (1) (see the remark *) below Table 1), which shows divergent behavior in the first 100 iterations. For this reason, this method was applied with the closer approximation x_0 = 1.4.

Example 2. The two-point methods employed in Example 1 were also applied to the function
$$f(x) = (x - 2)(x^{10} + x + 1)\, e^{-x-1}$$

to approximate its root a = 2. In this test we used the common initial approximation x_0 = 2.1. The obtained results are presented in Table 2.

According to the results presented in Tables 1 and 2, and to a number of further numerical examples, we conclude that the derivative-free two-point methods (6) are competitive with the existing optimal two-point methods of fourth order. Moreover, the families of methods (17) and (24) with memory, based on the recursively calculated parameter β_k, are faster than the existing methods. Since all the tested methods have the same computational cost, the methods (17) and (24) are evidently the most efficient. Comparing the fourth-order methods (1) and (6), we note that the main advantage of the proposed family (6) is the possibility of accelerating convergence by varying the parameter β. The lack of such a corrector (the parameter β in (6)) in the iterative formula (1) leads to divergence of the method (1) in Example 1 (for values of the parameter a in a wide range) and to a relatively modest result in Example 2.

Our concluding remark concerns an important problem arising in the practical application of multipoint methods. As emphasized in [14], fast convergence, one of the main advantages of multipoint methods, can be attained only if the initial approximations are sufficiently close to the sought roots; otherwise the expected convergence speed cannot be realized in practice. For this reason, when applying multipoint root-finding methods, special attention should be paid to finding good initial approximations. An efficient procedure for finding sufficiently good initial approximations was recently proposed by Yun [23].
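Yun's procedure estimates the root location non-iteratively as x_0 = (a + b + sign(f(a)) ∫_a^b tanh(m f(x)) dx)/2. A minimal Python sketch of the same formula (the composite trapezoidal rule with 2000 panels is an arbitrary stand-in for adaptive quadrature):

```python
import math

def yun_initial_guess(f, a, b, m=5, n=2000):
    """x0 = (a + b + sign(f(a)) * I) / 2, where I is the integral of
    tanh(m*f(x)) over [a, b], computed with the trapezoidal rule."""
    h = (b - a) / n
    integral = 0.5 * (math.tanh(m * f(a)) + math.tanh(m * f(b)))
    for i in range(1, n):
        integral += math.tanh(m * f(a + i * h))
    integral *= h
    sign = 1.0 if f(a) > 0 else -1.0
    return 0.5 * (a + b + sign * integral)

# Function of Example 2 on [1, 5]; the sought root is a = 2.
f = lambda x: (x - 2) * (x**10 + x + 1) * math.exp(-x - 1)
x0 = yun_initial_guess(f, 1, 5)  # close to the value 1.99857 quoted below
```

The steep sigmoid tanh(m f(x)) behaves almost like sign(f(x)), so the integral locates the sign change of f without any iteration.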
For illustration, the following simple statements in the programming package Mathematica, applied to the function from Example 2 on the interval [1, 5],

f[x_] = (x - 2)(x^10 + x + 1) Exp[-x - 1];
a = 1; b = 5; m = 5;
x0 = 0.5 (a + b + Sign[f[a]] NIntegrate[Tanh[m f[x]], {x, a, b}])

give the reasonably good initial approximation x_0 = 1.99857 to the root a = 2.

Acknowledgement

This work was supported by the Serbian Ministry of Science under grant 144024.

References

[1] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
[2] A.M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, 1960.
[3] P. Jarratt, Some fourth order multipoint methods for solving equations, Math. Comput. 20 (1966) 434–437.
[4] P. Jarratt, Some efficient fourth-order multipoint methods for solving equations, BIT 9 (1969) 119–124.
[5] R. King, A family of fourth order methods for nonlinear equations, SIAM J. Numer. Anal. 10 (1973) 876–879.
[6] H.T. Kung, J.F. Traub, Optimal order of one-point and multipoint iteration, J. ACM 21 (1974) 643–651.
[7] B. Neta, On a family of multipoint methods for nonlinear equations, Int. J. Comput. Math. 9 (1981) 353–361.
[8] M.S. Petković, L.D. Petković, Families of optimal multipoint methods for solving nonlinear equations: a survey, Appl. Anal. Discrete Math. 4 (2010) 1–22.
[9] J.F. Steffensen, Remark on iteration, Skand. Aktuar Tidskr. 16 (1933) 64–72.
[10] H. Ren, Q. Wu, W. Bi, A class of two-step Steffensen type methods with fourth-order convergence, Appl. Math. Comput. 209 (2009) 206–210.
[11] R. Thukral, M.S. Petković, A family of three-point methods of optimal order for solving nonlinear equations, J. Comput. Appl. Math. 233 (2010) 2278–2284.
[12] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[13] G. Alefeld, J. Herzberger, Introduction to Interval Computation, Academic Press, New York, 1983.
[14] M.S.
Petković, On a general class of multipoint root-finding methods of high computational efficiency, SIAM J. Numer. Anal. 47 (2010) 4402–4414.
[15] A.K. Maheshwari, A fourth-order iterative method for solving nonlinear equations, Appl. Math. Comput. 211 (2009) 383–391.
[16] D. Basu, From third to fourth order variant of Newton's method for simple roots, Appl. Math. Comput. 202 (2008) 886–892.
[17] C. Chun, A family of composite fourth-order iterative methods for solving nonlinear equations, Appl. Math. Comput. 187 (2007) 951–956.
[18] C. Chun, Some variants of King's fourth-order family of methods for nonlinear equations, Appl. Math. Comput. 190 (2007) 57–62.
[19] C. Chun, Some fourth-order iterative methods for solving nonlinear equations, Appl. Math. Comput. 195 (2008) 454–459.
[20] C. Chun, Y. Ham, A one-parameter fourth-order family of iterative methods for nonlinear equations, Appl. Math. Comput. 189 (2007) 610–614.
[21] C. Chun, B. Neta, Certain improvements of Newton's method with fourth-order convergence, Appl. Math. Comput. 215 (2009) 821–828.
[22] M.S. Petković, L.D. Petković, A one parameter square root family of two-step root-finders, Appl. Math. Comput. 188 (2007) 339–344.
[23] B.I. Yun, A non-iterative method for solving non-linear equations, Appl. Math. Comput. 198 (2008) 691–699.