NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS


Bulletin of Mathematical Analysis and Applications
ISSN: 1821-1291, URL: http://www.bmathaa.org
Volume 3, Issue 4 (2011), Pages 31-37.

NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS

(COMMUNICATED BY MARTIN HERMANN)

HADI TAGHVAFARD

Abstract. Two new iterative methods for solving nonlinear equations are presented, based on a new quadrature rule derived from spline functions. The analysis of convergence shows that both methods have third-order convergence. Numerical examples demonstrate their practical utility and show that they are more efficient than Newton's method.

1. Introduction

Solving nonlinear equations is one of the most fundamental problems in numerical analysis, and Newton's method is the most popular method for solving such equations. Some historical notes on this method can be found in [14]. Recently, several methods have been proposed and analyzed for solving nonlinear equations [1, 2, 4, 6, 7, 8]. These methods have been suggested by using quadrature formulas, decomposition, and Taylor series [3, 8, 9, 13]. As is well known, quadrature rules play an important and significant role in the evaluation of integrals. One of the best-known iterative methods is the classical Newton method, which has a quadratic convergence rate, and some authors have derived new iterative methods which are more efficient than Newton's [5, 9, 12, 13].

This paper is organized as follows. Section 2 provides the preliminaries that are needed. Section 3 is devoted to deriving two iterative methods by means of a new quadrature rule based on spline functions. These are implicit-type methods; to implement them, we use Newton's and Halley's methods as predictors and then use the new methods as correctors, so the resulting methods can be considered two-step iterative methods. In Section 4 it is shown that these two-step iterative methods are of third-order convergence. In Section 5 a comparison between these methods and Newton's method is made, and several examples are given to illustrate their efficiency and advantages. Finally, Section 6 concludes the paper.

2000 Mathematics Subject Classification. 26A18, 65D05, 65D07, 65D15, 65D32.
Key words and phrases. Newton's method, Halley's method, iterative method, spline functions, order of convergence.
© 2011 Universiteti i Prishtinës, Prishtinë, Kosovë.
Submitted April 5, 2011. Published August 9, 2011.

2. Preliminaries

We use the following definition [13].

Definition 1. Let $\alpha \in \mathbb{R}$ and $x_n \in \mathbb{R}$, $n = 0, 1, 2, \ldots$. Then the sequence $\{x_n\}$ is said to converge to $\alpha$ if
$$\lim_{n \to \infty} |x_n - \alpha| = 0.$$
If there exist a constant $c > 0$, an integer $n_0 \geq 0$ and $p \geq 0$ such that for all $n > n_0$ we have
$$|x_{n+1} - \alpha| \leq c\,|x_n - \alpha|^p,$$
then $\{x_n\}$ is said to converge to $\alpha$ with convergence order at least $p$. If $p = 2$ or $p = 3$, the convergence is said to be quadratic or cubic, respectively.

Notation 1. The quantity $e_n = x_n - \alpha$ is the error in the $n$th iteration, and the relation
$$e_{n+1} = c\,e_n^p + O(e_n^{p+1})$$
is called the error equation. By substituting $e_n = x_n - \alpha$ for all $n$ in any iterative method and simplifying, we obtain the error equation of that method; the value of $p$ so obtained is called the order of the method.

We consider the problem of numerically determining a real root $\alpha$ of the nonlinear equation
$$f(x) = 0, \qquad f : D \subset \mathbb{R} \to \mathbb{R}. \qquad (2.1)$$
The best-known numerical method for solving equation (2.1) is the classical Newton method,
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, \ldots,$$
where $x_0$ is an initial approximation sufficiently close to $\alpha$. The convergence order of the classical Newton method is quadratic for simple roots [3]. If the second derivative $f''(x_n)$ is available, third-order convergence can be achieved by using Halley's method [5], whose iterative formula is
$$x_{n+1} = x_n - \frac{2 f(x_n) f'(x_n)}{2\big(f'(x_n)\big)^2 - f(x_n) f''(x_n)}, \qquad n = 1, 2, 3, \ldots.$$
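For illustration only (the experiments in Section 5 were carried out in Matlab), the two classical schemes above can be sketched in Python as follows; the function names, the tolerance and the demonstration equation $\cos(x) - x = 0$ (test function $f_3$ of Section 5) are assumptions of this sketch, not part of the paper.

# Minimal sketch of the classical Newton and Halley iterations of Section 2.
# Assumed here (not taken from the paper): function names, tolerance, and the
# demonstration equation cos(x) - x = 0, whose root is about 0.7390851332.
import math

def newton(f, df, x0, tol=1e-14, max_iter=100):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n) (quadratic convergence)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def halley(f, df, d2f, x0, tol=1e-14, max_iter=100):
    """Halley's method: x_{n+1} = x_n - 2ff'/(2f'^2 - ff'') (cubic convergence)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        step = 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2f(x))
        x -= step
        if abs(step) < tol:
            break
    return x

if __name__ == "__main__":
    f = lambda x: math.cos(x) - x
    df = lambda x: -math.sin(x) - 1.0
    d2f = lambda x: -math.cos(x)
    print(newton(f, df, 0.5))          # both converge to about 0.739085133215161
    print(halley(f, df, d2f, 0.5))

Both routines stop as soon as the correction falls below the tolerance; they are only meant to make the formulas of this section concrete.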

3. Iterative methods

Cubic splines are popular spline functions for several reasons: they are smooth functions with which to fit data, and, when used for interpolation, they do not exhibit the oscillatory behavior that is characteristic of high-degree polynomial interpolation. In [11] we derived a new quadrature rule based on spline interpolation; in this section we review it briefly and then present the iterative methods.

Let $\Delta = \{x_i : i = 0, 1, 2\}$ be a uniform partition of the interval $[a, b]$ with knots $a = x_0 < \frac{a+b}{2} = x_1 < b = x_2$, let $h = x_1 - x_0 = x_2 - x_1$, and let $y_i = f(x_i)$. The cubic spline function $S_\Delta(x)$ which interpolates the values of the function $f$ at the knots $x_0, x_1, x_2$ and satisfies $S''_\Delta(a) = S''_\Delta(b) = 0$ is readily characterized by its moments, and these moments of the interpolating cubic spline can be calculated as the solution of a system of linear equations. We can write the cubic spline function in terms of its moments as [10]
$$S_\Delta(x) = \alpha_i + \beta_i (x - x_i) + \gamma_i (x - x_i)^2 + \delta_i (x - x_i)^3, \qquad (3.1)$$
for $x \in [x_i, x_{i+1}]$, $i = 0, 1$, where
$$\alpha_i = y_i, \qquad \beta_i = \frac{y_{i+1} - y_i}{h} - \frac{2M_i + M_{i+1}}{6}\,h, \qquad \gamma_i = \frac{M_i}{2}, \qquad \delta_i = \frac{M_{i+1} - M_i}{6h},$$
$$M_0 = M_2 = 0, \qquad M_1 = \frac{3}{2h^2}\left[f(a) - 2f\!\left(\frac{a+b}{2}\right) + f(b)\right].$$
Now from (3.1) we obtain
$$\int_a^b S_\Delta(x)\,dx = \int_a^{\frac{a+b}{2}} S_\Delta(x)\,dx + \int_{\frac{a+b}{2}}^b S_\Delta(x)\,dx = \frac{h}{2}\left[f(a) + 2f\!\left(\frac{a+b}{2}\right) + f(b)\right] - \frac{h^3}{24}\left[M_0 + 2M_1 + M_2\right] = \frac{h}{8}\left[3f(a) + 10f\!\left(\frac{a+b}{2}\right) + 3f(b)\right],$$
and thus
$$\int_a^b f(t)\,dt \approx \frac{b-a}{16}\left\{3f(a) + 10f\!\left(\frac{a+b}{2}\right) + 3f(b)\right\}. \qquad (3.2)$$

We use the above quadrature rule to approximate integrals and thereby obtain an iterative method. To this end, let $\alpha \in D$ be a simple zero of a sufficiently differentiable function $f : D \subset \mathbb{R} \to \mathbb{R}$ on an open interval $D$, and let $x_0$ be sufficiently close to $\alpha$. To derive the iterative method, we consider the integral arising from Newton's theorem,
$$f(x) = f(x_n) + \int_{x_n}^{x} f'(t)\,dt. \qquad (3.3)$$
Using the quadrature rule (3.2) to approximate the integral on the right-hand side of (3.3),
$$\int_{x_n}^{x} f'(t)\,dt \approx \frac{x - x_n}{16}\left\{3f'(x_n) + 10 f'\!\left(\frac{x_n + x}{2}\right) + 3f'(x)\right\}, \qquad (3.4)$$
and looking for $f(x) = 0$, from (3.3) and (3.4) we obtain
$$x = x_n - \frac{16 f(x_n)}{3f'(x_n) + 10 f'\!\left(\frac{x + x_n}{2}\right) + 3f'(x)}. \qquad (3.5)$$
This fixed point formulation enables us to suggest the following implicit iterative method,
$$x_{n+1} = x_n - \frac{16 f(x_n)}{3f'(x_n) + 10 f'\!\left(\frac{x_n + x_{n+1}}{2}\right) + 3f'(x_{n+1})}, \qquad (3.6)$$
which requires the $(n+1)$th iterate $x_{n+1}$ in order to calculate the $(n+1)$th iterate itself. To obtain an explicit form, we make use of a Newton iterative step to approximate $x_{n+1}$ on the right-hand side of (3.6), namely replacing $x_{n+1}$ by the Newton iterate $y_n = x_n - \frac{f(x_n)}{f'(x_n)}$, so that $f'\!\left(\frac{x_n + x_{n+1}}{2}\right)$ becomes $f'\!\left(\frac{x_n + y_n}{2}\right)$ and $f'(x_{n+1})$ becomes $f'(y_n)$, and obtain the following explicit method:

Algorithm 1. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the iterative scheme
$$x_{n+1} = x_n - \frac{16 f(x_n)}{3f'(x_n) + 10 f'\!\left(\frac{x_n + y_n}{2}\right) + 3f'(y_n)}, \qquad n = 0, 1, 2, \ldots,$$
where
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}.$$

We can also replace $x_{n+1}$ on the right-hand side of (3.6) by Halley's method, which suggests the following algorithm.

Algorithm 2. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the iterative scheme
$$x_{n+1} = x_n - \frac{16 f(x_n)}{3f'(x_n) + 10 f'\!\left(\frac{x_n + y_n}{2}\right) + 3f'(y_n)}, \qquad n = 0, 1, 2, \ldots, \qquad (3.7)$$
where
$$y_n = x_n - \frac{2 f(x_n) f'(x_n)}{2\big(f'(x_n)\big)^2 - f(x_n) f''(x_n)}. \qquad (3.8)$$

4. Analysis of convergence

In this section we prove the convergence of Algorithm 2; the convergence of Algorithm 1 can be proved in a similar way.

Theorem 4.1. Let $\alpha \in D$ be a simple zero of a sufficiently differentiable function $f : D \subset \mathbb{R} \to \mathbb{R}$ on an open interval $D$. If $x_0$ is sufficiently close to $\alpha$, then the two-step iterative method defined by Algorithm 2 converges cubically to $\alpha$ in a neighborhood of $\alpha$ and satisfies the error equation
$$e_{n+1} = \frac{7}{16}\, c_2^2\, e_n^3 + O(e_n^4),$$
where $c_k = \frac{f^{(k)}(\alpha)}{k!\, f'(\alpha)}$, $k = 1, 2, 3, \ldots$, and $e_n = x_n - \alpha$.

Proof. Let $\alpha$ be a simple zero of $f$, i.e. $f(\alpha) = 0$ and $f'(\alpha) \neq 0$. Since $f$ is sufficiently differentiable, Taylor expansion of $f(x_n)$, $f'(x_n)$ and $f''(x_n)$ about $\alpha$ gives
$$f(x_n) = f'(\alpha)(x_n - \alpha) + \frac{1}{2!} f''(\alpha)(x_n - \alpha)^2 + \frac{1}{3!} f'''(\alpha)(x_n - \alpha)^3 + \frac{1}{4!} f^{(4)}(\alpha)(x_n - \alpha)^4 + \frac{1}{5!} f^{(5)}(\alpha)(x_n - \alpha)^5 + O\big((x_n - \alpha)^6\big)$$
$$\phantom{f(x_n)} = f'(\alpha)\big[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + O(e_n^6)\big], \qquad (4.1)$$
$$f'(x_n) = f'(\alpha)\big[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + 5c_5 e_n^4 + O(e_n^5)\big], \qquad (4.2)$$
$$f''(x_n) = f'(\alpha)\big[2c_2 + 6c_3 e_n + 12c_4 e_n^2 + 20c_5 e_n^3 + O(e_n^4)\big], \qquad (4.3)$$
where $c_k = \frac{f^{(k)}(\alpha)}{k!\, f'(\alpha)}$, $k = 1, 2, 3, \ldots$, and $e_n = x_n - \alpha$. From (4.1), (4.2) and (4.3) one obtains
$$2 f(x_n) f'(x_n) = 2 f'(\alpha)^2\big[e_n + 3c_2 e_n^2 + (4c_3 + 2c_2^2) e_n^3 + O(e_n^4)\big], \qquad (4.4)$$
$$2\big(f'(x_n)\big)^2 = 2 f'(\alpha)^2\big[1 + 4c_2 e_n + (6c_3 + 4c_2^2) e_n^2 + (12c_2 c_3 + 8c_4) e_n^3 + O(e_n^4)\big], \qquad (4.5)$$
$$f(x_n) f''(x_n) = f'(\alpha)^2\big[2c_2 e_n + (6c_3 + 2c_2^2) e_n^2 + (8c_2 c_3 + 12c_4) e_n^3 + O(e_n^4)\big]. \qquad (4.6)$$

Substituting (4.4), (4.5) and (4.6) into (3.8), we get
$$y_n = x_n - \frac{2 f(x_n) f'(x_n)}{2\big(f'(x_n)\big)^2 - f(x_n) f''(x_n)} = \alpha + (c_2^2 - c_3)\, e_n^3 + O(e_n^4). \qquad (4.7)$$
Now, from (4.7) we have
$$\frac{x_n + y_n}{2} = x_n - \frac{f(x_n) f'(x_n)}{2\big(f'(x_n)\big)^2 - f(x_n) f''(x_n)} = \alpha + \frac{1}{2} e_n + \frac{1}{2}(c_2^2 - c_3)\, e_n^3 + O(e_n^4). \qquad (4.8)$$
Substituting (4.7) and (4.8) into the Taylor expansions of $f(y_n)$, $f'(y_n)$ and $f'\!\left(\frac{x_n + y_n}{2}\right)$ about $\alpha$, we get
$$f(y_n) = f'(\alpha)(y_n - \alpha) + \frac{1}{2!} f''(\alpha)(y_n - \alpha)^2 + \frac{1}{3!} f'''(\alpha)(y_n - \alpha)^3 + \frac{1}{4!} f^{(4)}(\alpha)(y_n - \alpha)^4 + O\big((y_n - \alpha)^5\big)$$
$$\phantom{f(y_n)} = f'(\alpha)\big[(c_2^2 - c_3) e_n^3 + (-3c_2^3 + 6c_2 c_3 - 3c_4) e_n^4 + O(e_n^5)\big], \qquad (4.9)$$
$$f'(y_n) = f'(\alpha)\big[1 + 2c_2 (y_n - \alpha) + 3c_3 (y_n - \alpha)^2 + 4c_4 (y_n - \alpha)^3 + O\big((y_n - \alpha)^4\big)\big] = f'(\alpha)\big[1 + (2c_2^3 - 2c_2 c_3) e_n^3 + O(e_n^4)\big], \qquad (4.10)$$
$$f'\!\left(\frac{x_n + y_n}{2}\right) = f'(\alpha)\big[1 + c_2 e_n + (c_2^3 - c_2 c_3) e_n^3 + O(e_n^4)\big]. \qquad (4.11)$$
By substituting (4.2), (4.10) and (4.11) into (3.7), we get
$$x_{n+1} = x_n - \frac{16 f(x_n)}{3f'(x_n) + 10 f'\!\left(\frac{x_n + y_n}{2}\right) + 3f'(y_n)} = \alpha + \frac{7}{16}\, c_2^2\, e_n^3 + O(e_n^4),$$
and thus
$$e_{n+1} = \frac{7}{16}\, c_2^2\, e_n^3 + O(e_n^4).$$
This means that the method defined by Algorithm 2 is cubically convergent.

Theorem 4.2. Let $\alpha \in D$ be a simple zero of a sufficiently differentiable function $f : D \subset \mathbb{R} \to \mathbb{R}$ on an open interval $D$. If $x_0$ is sufficiently close to $\alpha$, then the two-step iterative method defined by Algorithm 1 converges cubically to $\alpha$ in a neighborhood of $\alpha$ and satisfies the error equation
$$e_{n+1} = \left(\frac{3}{8} c_2^2 + \frac{5}{8} c_2^2 - \frac{1}{32} c_3\right) e_n^3 + O(e_n^4),$$
where $c_n = \frac{f^{(n)}(\alpha)}{n!\, f'(\alpha)}$, $n = 1, 2, 3, \ldots$, and $e_n = x_n - \alpha$.

Proof. Similar to the proof of Theorem 4.1.
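To make the algorithms of Section 3 and the cubic convergence established above concrete, the following Python sketch implements Algorithms 1 and 2. It is an illustration only (the paper's own experiments were written in Matlab); the function names, the stopping rule and the choice of test problem $f_3(x) = \cos(x) - x$ with starting point $x_0 = 1$ are assumptions of the sketch.

import math

def algorithm1(f, df, x0, tol=1e-14, max_iter=50):
    """Algorithm 1: Newton predictor y_n, spline-quadrature corrector (3.6)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                                   # Newton predictor
        x_new = x - 16.0 * fx / (3.0 * dfx + 10.0 * df(0.5 * (x + y)) + 3.0 * df(y))
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new
        x = x_new
    return x

def algorithm2(f, df, d2f, x0, tol=1e-14, max_iter=50):
    """Algorithm 2: Halley predictor (3.8), spline-quadrature corrector (3.7)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2f(x))   # Halley predictor
        x_new = x - 16.0 * fx / (3.0 * dfx + 10.0 * df(0.5 * (x + y)) + 3.0 * df(y))
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    # Test function f_3 of Section 5; its root is the fixed point of cos.
    f = lambda x: math.cos(x) - x
    df = lambda x: -math.sin(x) - 1.0
    d2f = lambda x: -math.cos(x)
    alpha = 0.7390851332151607

    # One step of Algorithm 2 written out explicitly so that the error of each
    # iterate can be printed; the errors shrink roughly like the cube of the
    # previous error until rounding error dominates.
    x = 1.0
    for _ in range(4):
        fx, dfx = f(x), df(x)
        y = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2f(x))
        x = x - 16.0 * fx / (3.0 * dfx + 10.0 * df(0.5 * (x + y)) + 3.0 * df(y))
        print(abs(x - alpha))

    print(algorithm1(f, df, 1.0))        # about 0.739085133215161
    print(algorithm2(f, df, d2f, 1.0))   # about 0.739085133215161

The printed errors decrease in rough proportion to the cube of the previous error until machine precision is reached, which is consistent with the third-order convergence of Theorems 4.1 and 4.2.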

5. Numerical examples

In this section we apply the methods obtained in this paper to some nonlinear equations and compare them with Newton's method (NM). We use the stopping criteria $|x_{n+1} - x_n| < \epsilon$ and $|f(x_{n+1})| < \epsilon$, with $\epsilon = 10^{-14}$, for the computer programs. All programs are written in Matlab. Table 1 gives the number of iterations (NI) and the number of function evaluations (NFE) required to satisfy the stopping criteria. Table 2 compares the number of operations needed for obtaining the solutions of Examples 1 and 2. We use the following test functions and display the approximate zeros $x$ found up to the 14th decimal place.

Example 1. $f_1(x) = x^3 - x + 3$, $x = -1.671699881657161$
Example 2. $f_2(x) = x^3 + 4x^2 - 10$, $x = 1.36523001341410$
Example 3. $f_3(x) = \cos(x) - x$, $x = 0.73908513321516$
Example 4. $f_4(x) = x e^{x^2} - \sin^2(x) + 3\cos(x) + 5$, $x = -1.207647827130919$

Table 1. Comparison of the number of iterations and function evaluations of the different iterative methods.

                        NI                       NFE
  f(x)     x_0    NM   Alg.1   Alg.2      NM   Alg.1   Alg.2
  f_1(x)    5     41     7       6        82     28      30
  f_2(x)  -0.3    53     4      27       106     16     135
  f_3(x)   π/4    70     4       4       140     16      20
  f_4(x)   1.2     6     7      10      1044     28      50

Table 2. Comparison of the number of operations of the different iterative methods.

                      + or -                  × or ÷
  f(x)     x_0    NM   Alg.1   Alg.2      NM   Alg.1   Alg.2
  f_1(x)    5    164    63      60       164     56      54
  f_2(x)  -0.3     1    36      97        65      3      16

Tables 1 and 2 show that, in some cases, Algorithms 1 and 2 are better than Newton's method for solving nonlinear equations. It is observed that in some cases the new methods require fewer iterations and function evaluations than Newton's method. Moreover, as Table 2 shows, the new methods require fewer operations than Newton's method.

6. Conclusion

We have derived two iterative methods based on spline functions for solving nonlinear equations. The convergence proof is presented in detail for Algorithm 2. In Theorems 4.1 and 4.2 we proved that the order of convergence of these methods is three. The analysis of efficiency showed that these methods are preferable to the well-known Newton method, and the numerical examples showed that they have great practical utility.
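As a complement to the comparison in Section 5, the counting of iterations (NI) and function evaluations (NFE) can be mimicked with a short driver such as the Python sketch below, which wraps the supplied functions with evaluation counters and uses the stopping criteria of Section 5. The test problem $f_2$ and the starting point are illustrative choices, and the counts obtained depend on implementation details, so they need not coincide with Table 1.

# Illustrative NI/NFE counter in the spirit of Table 1.
# Assumptions of this sketch: Python instead of the paper's Matlab, and
# Example 2, f_2(x) = x^3 + 4x^2 - 10, with starting point x_0 = 1.0.

def counted(fun):
    """Wrap a function so that the number of its evaluations is recorded."""
    def wrapper(x):
        wrapper.calls += 1
        return fun(x)
    wrapper.calls = 0
    return wrapper

def run_algorithm1(f, df, x0, eps=1e-14, max_iter=200):
    """Algorithm 1 with iteration counting; returns (root, NI)."""
    x, ni = x0, 0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx
        x_new = x - 16.0 * fx / (3.0 * dfx + 10.0 * df(0.5 * (x + y)) + 3.0 * df(y))
        ni += 1
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            return x_new, ni
        x = x_new
    return x, ni

f = counted(lambda x: x**3 + 4.0 * x**2 - 10.0)
df = counted(lambda x: 3.0 * x**2 + 8.0 * x)
root, ni = run_algorithm1(f, df, 1.0)
print(root, ni, f.calls + df.calls)   # approximate zero, NI, NFE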

Acknowledgement

The author would like to thank the referees for their constructive suggestions and criticisms, which led to an improved presentation.

References

[1] S. Abbasbandy, Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 145 (2003), 887-893.
[2] S. Amat, S. Busquier, J.M. Gutierrez, Geometric constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math. 157 (2003), 197-205.
[3] K. E. Atkinson, An Introduction to Numerical Analysis, 2nd ed., John Wiley & Sons, New York, 1987.
[4] E. Babolian, J. Biazar, Solution of nonlinear equations by Adomian decomposition method, Appl. Math. Comput. 132 (2002), 167-172.
[5] H. Bateman, Halley's method for solving equations, The American Mathematical Monthly, Vol. 45, No. 1 (1938), 11-17.
[6] F. Costabile, M.I. Gualtieri, S.S. Capizzano, An iterative method for the solutions of nonlinear equations, Calcolo 30 (1999), 17-34.
[7] M. Frontini, Hermite interpolation and a new iterative method for the computation of the roots of non-linear equations, Calcolo 40 (2003), 109-119.
[8] M. Frontini, E. Sormani, Third-order methods from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput. 149 (2004), 771-782.
[9] A.Y. Ozban, Some new variants of Newton's method, Appl. Math. Lett. 17 (2004), 677-682.
[10] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, 2nd ed., Springer-Verlag, 1993.
[11] H. Taghvafard, A New Quadrature Rule Derived from Spline Interpolation with Error Analysis, Int. J. Comput. Math. Sci. 5(3) (2011), 119-124.
[12] J. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
[13] S. Weerakoon, T. G. I. Fernando, A Variant of Newton's Method with Accelerated Third-Order Convergence, Appl. Math. Lett. 13 (2000), 87-93.
[14] T. Yamamoto, Historical development in convergence analysis for Newton's and Newton-like methods, J. Comput. Appl. Math. 124 (2000), 1-23.

Hadi Taghvafard
Department of Mathematics, Shiraz University, Shiraz, Iran.
E-mail address: taghvafard@gmail.com