Two-Point Methods for Nonlinear Equations
Neeraj Sharma, Simran Kaur


International Journal of Advance Research, IJOAR.org, Volume 1, Issue 1, January 2013, Online

ABSTRACT

This paper presents an efficient family of two-point derivative-free methods with memory for solving nonlinear equations. Using an accelerating technique, the order of convergence increases from 4 to at least 2+√6 ≈ 4.45, 5, ½(5+√33) ≈ 5.37, and 6 without any additional function evaluations. The resulting methods are more efficient than the existing classes of two-point methods and even than optimal three-point methods.

Keywords: two-point method, convergence, nonlinear equation, computational efficiency.

INTRODUCTION

Multipoint root-finding methods were first studied in Traub's book [16] and in papers and books published in the 1960s and 1970s (see, e.g., [2], [3], [4], [6], [8], [9], [11]). Multipoint iterations attain high order of convergence at low computational cost and overcome the theoretical limits of one-point methods concerning convergence order and computational efficiency. These limits are expressed by the hypothesis of Kung and Traub [6] from 1974: they conjectured that the order of convergence of any multipoint method without memory, consuming n function evaluations per iteration, cannot exceed the bound 2^(n−1) (called the optimal order). Multipoint methods attaining this bound are usually called optimal methods. An extensive (but not exhaustive) list of optimal methods may be found, for example, in [12] and [13].

Let α be a simple real zero of a real function f : D ⊂ ℝ → ℝ and let x₀ be an initial approximation to α. In many practical situations it is preferable to avoid calculating the derivatives of f. The first multipoint derivative-free methods were developed by Kung and Traub [6] in 1974, with arbitrary order 2^(n−1) (n ≥ 2) requiring n function evaluations. For n = 2 one obtains the derivative-free method of Steffensen type with quadratic convergence (see [16, p. 185]),

(1)  x_{k+1} = x_k − f(x_k)/φ(x_k),  φ(x_k) = f[x_k, x_k + γf(x_k)]  (k = 0, 1, ...).

Taking n = 3 in the Kung–Traub family, the following derivative-free two-point family of fourth-order methods is obtained:

(2)  y_k = x_k − f(x_k)/φ(x_k),  x_{k+1} = y_k − f(y_k) / [(1 − u_k)(1 − v_k) φ(x_k)]  (k = 0, 1, ...),

where u_k = f(y_k)/f(x_k), v_k = f(y_k)/f(x_k + γf(x_k)), f[x, y] = (f(x) − f(y))/(x − y) is a divided difference, and γ is a nonzero constant. The family (2) requires three function evaluations per iteration and has fourth order of convergence, which supports the Kung–Traub conjecture; its efficiency index is E(2) = 4^(1/3) ≈ 1.587.
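As an illustration, the family (2) can be run in a few lines. The following is a minimal sketch, not code from the paper: the names `kt4` and the test equation t³ − 2 = 0 are ours, and the second step uses the weight form 1/((1 − u)(1 − v)) that Section 2 below identifies with the Kung–Traub method.

```python
def kt4(f, x, gamma=0.01, tol=1e-12, max_iter=20):
    """Derivative-free two-point family (2): three f-evaluations
    per iteration, fourth-order convergence to a simple root."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx                  # auxiliary Steffensen point
        fw = f(w)
        phi = (fw - fx) / (w - x)           # phi(x) = f[x, w], approximates f'(x)
        y = x - fx / phi                    # first (Steffensen-type) step
        fy = f(y)
        u, v = fy / fx, fy / fw
        # second step with weight h(u, v) = 1/((1 - u)(1 - v))
        x = y - fy / ((1 - u) * (1 - v) * phi)
    return x

root = kt4(lambda t: t**3 - 2.0, x=1.5)
print(root)          # ~ 2**(1/3) = 1.2599210498948732
print(4 ** (1/3))    # efficiency index E(2) ~ 1.587
```

A handful of iterations already reaches machine precision, consistent with fourth-order convergence.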
The ranking of numerical algorithms is based on computational efficiency, which is directly proportional to the quality of the algorithm and inversely proportional to its computational cost. In the case of root finders, very fast convergence of great accuracy is irrelevant if the computational cost is too high. In this paper a two-point family with memory of high computational efficiency is presented.

TWO-POINT METHOD WITHOUT MEMORY

We propose two-point methods with memory of increasing order of convergence 2+√6 ≈ 4.45, 5, ½(5+√33) ≈ 5.37 and 6 (depending on the accelerating technique) without additional calculations. New methods for finding simple roots of nonlinear equations are obtained whose computational efficiency is higher than that of existing two-point methods known in the literature, and even higher than that of optimal three-point methods of order eight. The main idea is based on the use of suitable two-valued weight functions and the variation of a free parameter in each iterative step. This parameter is calculated using information from the current and previous iteration, so that the developed methods may be regarded as methods with memory following Traub's classification. An additional motivation for studying methods with memory is the surprising fact that such classes of methods have been considered in the literature very seldom in spite of their high computational efficiency. We cite the pioneering results of Traub, the three-point method of Neta [9], and the recently developed method with memory [14] with order 2+√5 ≈ 4.236.

The paper is organized as follows. In Section 2 we present a family of two-point methods without memory, an extended version of the family derived in [14]. No calculations of derivatives are required. A modification of the Kung–Traub method (2) appears as a special case of this family. Using a varying parameter, recursively calculated in each iteration, we develop a family of two-point methods with memory.
In Section 3 we state convergence theorems showing that the R-order of convergence of the proposed family with memory is at least 2+√6 ≈ 4.45 if the standard secant approach is applied, at least 5 if a new method called the improved secant approach is employed, and even ½(5+√33) ≈ 5.372 and 6 when Newton's interpolating polynomials of second and third degree, respectively, are used. We emphasize that the increase of the order of convergence is obtained without any additional function evaluations, which yields very high computational efficiency. Indeed, the efficiency index 5^(1/3) ≈ 1.71 of the proposed fifth-order two-point methods with memory is higher than the efficiency index 8^(1/4) ≈ 1.68 of optimal three-point methods of order eight, while the methods based on Newton's interpolation (with efficiency indices ≈ 1.75 and ≈ 1.817) are even more efficient. Numerical examples and a comparison with existing optimal two-point methods are given in Section 4.

FAMILY OF TWO-POINT METHODS WITH MEMORY

To construct a family of derivative-free two-point methods, let us start from the doubled Newton method

(3)  y_k = x_k − f(x_k)/f′(x_k),  x_{k+1} = y_k − f(y_k)/f′(y_k)  (k = 0, 1, ...)

and substitute the derivatives f′(x_k) and f′(y_k) by suitable approximations; this is an often used model in designing two-point methods. Let

φ(x) = f[x, x + γf(x)] = (f(x + γf(x)) − f(x))/(γf(x))

be the function that appears in the Steffensen-like method (1), where γ is an arbitrary nonzero real constant. Obviously, φ(x) is an approximation to the first derivative f′(x), assuming that γf(x) is small enough. It is natural to take f′(x_k) ≈ φ(x_k) in (3). The derivative f′(y_k) in the second step of (3) will be approximated by f′(y_k) ≈ φ(x_k)/h(u_k, v_k), where h(u, v) is an at least twice differentiable function of the two real variables

u_k = f(y_k)/f(x_k),  v_k = f(y_k)/f(w_k),  w_k = x_k + γf(x_k).

From the iterative scheme (3) we thus obtain the family of two-point iterative methods

(5)  y_k = x_k − f(x_k)/φ(x_k),  x_{k+1} = y_k − h(u_k, v_k) f(y_k)/φ(x_k)  (k = 0, 1, ...).

The weight function h should be determined in such a way that the order of convergence of the two-point method (5) is as high as possible, which is the subject of the following theorem.

Theorem 1. If an initial approximation x₀ is sufficiently close to a simple zero α of f and the weight function h appearing in (5) satisfies the conditions

(6)  h(0, 0) = h_u(0, 0) = h_v(0, 0) = 1,  h_vv(0, 0) = 2,  |h_uu(0, 0)| < ∞,  |h_uv(0, 0)| < ∞,

then the error ε_k = x_k − α of the family of two-point methods (5) satisfies a relation of the form

(7)  ε_{k+1} = (1 + γf′(α))² A_f ε_k⁴ + O(ε_k⁵),

where the asymptotic error constant A_f depends on the derivatives of f at α.
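The conditions (6) are easy to verify numerically for a concrete weight function. The sketch below checks h(u, v) = 1/((1 − u)(1 − v)) (the choice tied to the Kung–Traub method in Section 2) by central finite differences at (0, 0); the step size and tolerances are our choices, not from the paper.

```python
# Numerical check of conditions (6) for h(u, v) = 1/((1 - u)(1 - v)).
def h(u, v):
    return 1.0 / ((1.0 - u) * (1.0 - v))

eps = 1e-5
h00 = h(0, 0)
hu = (h(eps, 0) - h(-eps, 0)) / (2 * eps)              # h_u(0, 0)
hv = (h(0, eps) - h(0, -eps)) / (2 * eps)              # h_v(0, 0)
hvv = (h(0, eps) - 2 * h(0, 0) + h(0, -eps)) / eps**2  # h_vv(0, 0)

print(h00, hu, hv, hvv)   # ~ 1, 1, 1, 2 -> conditions (6) hold
```

Any other h satisfying (6) can be screened the same way before being used in (5).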

Proof. Introduce the abbreviations ε_k = x_k − α and c_j = f^(j)(α)/(j! f′(α)) (j = 2, 3, ...). In what follows we derive the error relation (7) of the family of two-point methods (5), which is essential to our study. Using Taylor series about the root α, we obtain

(8)  f(x_k) = f′(α)(ε_k + c₂ε_k² + c₃ε_k³ + c₄ε_k⁴) + O(ε_k⁵)

and, writing ε_{k,w} = w_k − α = ε_k + γf(x_k),

(9)  f(w_k) = f′(α)(ε_{k,w} + c₂ε_{k,w}² + c₃ε_{k,w}³ + c₄ε_{k,w}⁴) + O(ε_{k,w}⁵).

The remaining expansions proceed in the same way and lead to (7).

Remark 1. It was proved in [14] that fourth order of the method (5) can be attained under the relaxed conditions

(14)  h(0, 0) = h_u(0, 0) = h_v(0, 0) = 1.

The additional requirement h_vv(0, 0) = 2 in Theorem 1 ensures that the term 1 + γf′(α) in (7) appears squared; this fact is of essential importance, as will be shown later. Otherwise the relaxed conditions (14) (with h_vv(0, 0) ≠ 2) give only the linear factor 1 + γf′(α) and, consequently, slower acceleration; see [14].

We observe from (7) that the order of convergence of the family (5) is four when γ ≠ −1/f′(α). If we could provide γ = −1/f′(α), the order of the family (5) would exceed four. However, the value f′(α) is not available in practice, and such acceleration is not possible directly. Instead, an approximation f̃′(α) ≈ f′(α), calculated from available information, can be used. Then, setting γ = −1/f̃′(α), we can achieve that the order of convergence of the modified method exceeds 4 without any new function evaluations. Moreover, we will show in Section 3 that a special approximation of f′(α) can produce two-point methods with memory of order six. Henceforth we write w_k = x_k + γ_k f(x_k) for brevity.

In this paper we consider four methods for approximating f′(α):

(I) the secant approach;
(II) the improved secant approach;
(III) Newton's interpolatory approach, where N₂(t) = N₂(t; x_k, y_{k−1}, x_{k−1}) is Newton's interpolating polynomial of second degree, set through the three best available approximations (nodes) x_k, y_{k−1}, x_{k−1};
(IV) the improved Newton's interpolatory approach, where N₃(t) = N₃(t; x_k, y_{k−1}, w_{k−1}, x_{k−1}) is Newton's interpolating polynomial of third degree, set through the four best available approximations (nodes) x_k, y_{k−1}, w_{k−1}, x_{k−1}.

Then the parameter γ = γ_k can be calculated recursively as the iteration proceeds:

(15)  γ_k = −1/f[x_k, x_{k−1}]  (method (I)),
(16)  γ_k = −1/f[x_k, y_{k−1}]  (method (II)),
(17)  γ_k = −1/N₂′(x_k)  (method (III)),
(18)  γ_k = −1/N₃′(x_k)  (method (IV)).

To calculate γ_k by (17) and (18) we need expressions for N₂′(x_k) and N₃′(x_k). Differentiating the Newton forms of N₂ and N₃ at t = x_k we find

(19)  N₂′(x_k) = f[x_k, y_{k−1}] + f[x_k, x_{k−1}] − f[y_{k−1}, x_{k−1}]

and

(20)  N₃′(x_k) = f[x_k, y_{k−1}] + f[x_k, y_{k−1}, w_{k−1}](x_k − y_{k−1}) + f[x_k, y_{k−1}, w_{k−1}, x_{k−1}](x_k − y_{k−1})(x_k − w_{k−1}).

It is preferable to calculate divided differences of higher order by a recursive procedure using divided differences of lower order.

Remark 2. The secant methods (I) and (II) are, in fact, γ_k = −1/N₁′(x_k), where N₁ is Newton's interpolating polynomial of first degree at the nodes x_k, x_{k−1} and x_k, y_{k−1}, respectively.
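The recursive divided-difference procedure and the accelerator (17) might look as follows; this is a sketch with illustrative names (`divdiff`, `gamma_next`), not code from the paper.

```python
def divdiff(f, nodes):
    """Leading divided differences f[t0], f[t0,t1], ... by the
    standard recursion on a single in-place column."""
    t = list(nodes)
    d = [f(ti) for ti in t]          # zeroth-order differences
    table = [d[0]]
    for j in range(1, len(t)):
        for i in range(len(t) - j):  # d[i] becomes f[t_i, ..., t_{i+j}]
            d[i] = (d[i + 1] - d[i]) / (t[i + j] - t[i])
        table.append(d[0])
    return table

def gamma_next(f, xk, y_prev, x_prev):
    """Accelerator (17): gamma_k = -1/N2'(x_k), with N_2 the quadratic
    Newton polynomial through the nodes x_k, y_{k-1}, x_{k-1}."""
    c = divdiff(f, [xk, y_prev, x_prev])
    n2p = c[1] + c[2] * (xk - y_prev)   # N_2'(x_k) from the Newton form
    return -1.0 / n2p

# For a quadratic f the interpolation is exact, so N_2'(x_k) = f'(x_k):
print(gamma_next(lambda t: t * t - 2.0, 1.0, 1.1, 1.3))   # -1/f'(1) = -0.5
```

The quadratic test case mirrors Remark 2: with exact interpolation the accelerator returns exactly −1/f′(x_k), which is what drives the factor 1 + γf′(α) in (7) toward zero.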

Remark 3. The accelerating method (15), actually Traub's method [16] from 1964, was used in [14] to increase the order from 4 to 2+√5 ≈ 4.236 under the conditions (14). The accelerating methods (16), (17) and (18), together with the additional condition h_vv(0, 0) = 2, are new, simple and very useful, providing a considerable improvement of the convergence rate without any additional function evaluations.

By defining γ_k recursively as the iteration proceeds using (15), (16), (17) or (18), we obtain a new derivative-free two-point method with memory corresponding to (5):

(21)  y_k = x_k − f(x_k)/φ_k,  x_{k+1} = y_k − h(u_k, v_k) f(y_k)/φ_k,  φ_k = f[x_k, w_k],  w_k = x_k + γ_k f(x_k)  (k = 0, 1, ...).

We use the term method with memory following Traub's classification [16, p. 8], since the evaluation of the parameter γ_k depends on data available from the current and the previous iterative step. Accelerating methods obtained by a recursively calculated free parameter may also be called self-accelerating methods. The initial value γ₀ should be chosen before starting the iterative process, for example in one of the ways proposed in [16, p. 186].

Note that the iterative scheme (21) defines a family of two-step methods: any two-valued weight function h satisfying the conditions (6) may be applied; a list of weight functions of simple form is given in [14]. Using a simple rearrangement, it is easy to show that the choice h(u, v) = 1/((1 − u)(1 − v)) gives the Kung–Traub method (2) with memory as a special case:

(22)  y_k = x_k − f(x_k)/φ_k,  x_{k+1} = y_k − f(y_k)/((1 − u_k)(1 − v_k) φ_k)  (k = 0, 1, ...).
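A minimal sketch of the with-memory scheme (21)–(22), assuming the improved secant accelerator (16); the function name `kt_with_memory`, the starting value γ₀ and the test equation t³ − t − 1 = 0 are illustrative choices, not from the paper.

```python
def kt_with_memory(f, x0, gamma0=0.01, tol=1e-13, max_iter=10):
    """Two-point method with memory (21)-(22): the free parameter gamma
    is refreshed each step by the improved secant rule (16),
    gamma_k = -1/f[x_k, y_{k-1}], at no extra function evaluations."""
    x, gamma, y_prev, fy_prev = x0, gamma0, None, None
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        if y_prev is not None:        # accelerator (16): reuse stored f(y_{k-1})
            gamma = -(x - y_prev) / (fx - fy_prev)
        w = x + gamma * fx
        fw = f(w)
        phi = (fw - fx) / (w - x)     # phi_k = f[x_k, w_k]
        y = x - fx / phi
        fy = f(y)
        u, v = fy / fx, fy / fw
        x = y - fy / ((1 - u) * (1 - v) * phi)   # weight h = 1/((1-u)(1-v))
        y_prev, fy_prev = y, fy
    return x

r = kt_with_memory(lambda t: t**3 - t - 1.0, x0=1.3)
print(r)    # ~ 1.3247179572447460, the real root of t^3 - t - 1
```

The only change relative to the memoryless sketch of (2) is the two-line γ update, which is exactly what lifts the order above four here.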

REFERENCES

[1] C. Chun: Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput., 195 (2008), 454–459.
[2] P. Jarratt: Some fourth order multipoint methods for solving equations. Math. Comp., 20 (1966), 434–437.
[3] P. Jarratt: Some efficient fourth-order multipoint methods for solving equations. BIT, 9 (1969), 119–124.
[4] R. King: A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal., 10 (1973), 876–879.
[5] J. Kou, Y. Li, X. Wang: A composite fourth-order iterative method. Appl. Math. Comput., 184 (2007), 471–475.
[6] H. T. Kung, J. F. Traub: Optimal order of one-point and multipoint iteration. J. ACM, 21 (1974), 643–651.
[7] A. K. Maheshwari: A fourth-order iterative method for solving nonlinear equations. Appl. Math. Comput., 211 (2009), 383–391.
[8] B. Neta: On a family of multipoint methods for nonlinear equations. Int. J. Comput. Math., 9 (1981), 353–361.
[9] B. Neta: A new family of higher order methods for solving nonlinear equations. Int. J. Comput. Math., 14 (1983), 191–195.
[10] J. M. Ortega, W. C. Rheinboldt: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, 1970.
[11] A. M. Ostrowski: Solution of Equations and Systems of Equations. Academic Press, New York, 1960.
[12] M. S. Petković: On a general class of multipoint root-finding methods of high computational efficiency. SIAM J. Numer. Anal., 47 (2010), 4402–4414.
[13] A family of two-point methods with memory for solving nonlinear equations.
[14] M. S. Petković, L. D. Petković: Families of optimal multipoint methods for solving nonlinear equations: a survey. Appl. Anal. Discrete Math., 4 (2010), 1–22.
[15] M. S. Petković, S. Ilić, J. Džunić: Derivative free two-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput., 217 (2010), 1887–1895.
[16] J. F. Traub: Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
[17] B. I. Yun: A non-iterative method for solving non-linear equations. Appl. Math. Comput., 198 (2008), 691–699.
[18] H. Ren, Q. Wu, W. Bi: A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput., 209 (2009), 206–210.