SOME NEW HIGHER ORDER MULTI-POINT ITERATIVE METHODS AND THEIR APPLICATIONS TO DIFFERENTIAL AND INTEGRAL EQUATIONS AND GLOBAL POSITIONING SYSTEM

Size: px
Start display at page:

Download "SOME NEW HIGHER ORDER MULTI-POINT ITERATIVE METHODS AND THEIR APPLICATIONS TO DIFFERENTIAL AND INTEGRAL EQUATIONS AND GLOBAL POSITIONING SYSTEM"

Transcription

1 SOME NEW HIGHER ORDER MULTI-POINT ITERATIVE METHODS AND THEIR APPLICATIONS TO DIFFERENTIAL AND INTEGRAL EQUATIONS AND GLOBAL POSITIONING SYSTEM THESIS Submitted by Kalyanasundaram M in partial fulfillment for the award of the degree of DOCTOR OF PHILOSOPHY in MATHEMATICS DEPARTMENT OF MATHEMATICS PONDICHERRY ENGINEERING COLLEGE PONDICHERRY UNIVERSITY PUDUCHERRY INDIA JUNE 2016

2 DEPARTMENT OF MATHEMATICS PONDICHERRY ENGINEERING COLLEGE BONAFIDE CERTIFICATE It is certified that this thesis entitled Some New Higher Order Multi-point Iterative Methods and their Applications to Differential and Integral Equations and Global Positioning System is the bonafide work of Kalyanasundaram M who carried out the research under my supervision. Further certified that to the best of my knowledge the work reported herein does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion of this or any other candidate. Dr. J. JAYAKUMAR Research Supervisor) Professor Department of Mathematics Pondicherry Engineering College Puducherry Date: Place: Puducherry ii

3 DEPARTMENT OF MATHEMATICS PONDICHERRY ENGINEERING COLLEGE DECLARATION I hereby declare that the thesis entitled Some New Higher Order Multi-point Iterative Methods and their Applications to Differential and Integral Equations and Global Positioning System submitted to Pondicherry University, Puducherry, India for the award of the degree of DOCTOR OF PHILOSOPHY in MATHEMATICS is a record of bonafide research work carried under the supervision of Prof. Dr. J. Jayakumar, Department of Mathematics, Pondicherry Engineering College, Puducherry. Herein this research work does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion. KALYANASUNDARAM M Date: Place: Puducherry iii

4 Thesis dedicated to my parents Madhu and Palaniyammal iv

5 Acknowledgements I express my heartiest gratitude to my research supervisor, Prof. Dr. J. Jayakumar, for his guidance, motivation and constant support. He has provided many valuable suggestions, reviewed my work at every stage of the research and had given a complete freedom during the entire period of my research. I have learnt many things from him, which will be indeed very useful for my future career. I also thankful to my doctoral committee members, Dr. P. A. Padmanabham, Professor, Head, Department of Mathematics, Pondicherry Engineering College and Dr. Rajeswari Seshadri, Associate Professor, Department of Mathematics, Pondicherry University, for their advices throughout the period of research. Without their guidance and continuous help, this dissertation would not have been possible. My sincere thanks to my spiritual masters of Shri Ram Chandra Mission for their support during my difficult times. I am very grateful to respectable principal Prof. Dr. T. Sundararajan for his support. Also, I thank Deans of Research Pondicherry Engineering College for their help in completing my thesis. I would like to thank Dr. Babajee Diyashvir Kreetee Rajiv, for his many suggestions and constant support in caring out my research as a research collaborator. I would like to thank all the researchers in this field who ably supported me by sending their papers whenever I requested them. My sincere thanks to Prof. Dr. G. Sivaradje, TEQIP Coordinator and other members of TEQIP who helped me in getting the research fellowship for a period of three years. The TEQIP grant was crucial for the successful completion of this research. Furthermore, I would like to thank all the faculty members of the Department of Mathematics for their support during my Ph.D. studies. I would like to express my heartiest gratitude to Mr. Satbir Bakshi, Mr. S. Senthilnathan, Mr. A. Manickam, Mrs. Shantha Sheela Devi, Mrs. Anukumari Shukla and Ms. Amrita Jaiswal for supporting me morally as well as financially. I have experienced great friendship and enjoyment from persons who made my stay at PEC an unforgettable experience: M. Karthikeyan, D. Neelamegam, G. Jegan, P. Thamizhselvi, S. Karpagam, R. Ayyappan and J. Udayageetha. Also, I would like to express my heartiest gratitude to my friends M. Lakhsmankumar, S. Balajee and V. Murugan. v

6 Finally, it is my greatest honor to thank my parents, Mr. A. Madhu Father) and Mrs. M. Palaniyammal Mother), for their patience, self-sacrifice and love. Without their blessings this work would never have come into existence. I feel very privileged to have siblings, M. Jothi, M. Jeevabalan and M. Kalpana. I must acknowledge here encouragement and good words I have received from my siblings were simply great. Last but not the least, I thank all the members of my relatives for their care in my development. Pondicherry Engineering College, Pondicherry June 29, 2016 Kalyanasundaram M vi

7 Table of Contents Acknowledgements Table of Contents List of Tables List of Figures Summary Publications Glossary of Symbols v vii x xii xiii xv xvii 1 Introduction Classification of iterative methods Order of Convergence Computational order of convergence Computational Efficiency Initial approximations One-point iterative methods for simple zeros Review of Traub s period to date Some new variants of Newton s method of order three, four and five Construction of new methods Convergence Analysis of the methods Numerical Examples Concluding Remarks Class of modified Newton s method having fifth and sixth order convergence Construction of new methods A three step fifth order I.F New class of I.F. with order six Convergence Analysis of the methods Numerical Examples Concluding Remarks vii

8 4 Two families of Newton-type methods having fourth and sixth order convergence Construction of new methods Family of Optimal fourth order I.F Family of sixth order I.F Convergence Analysis of the methods Numerical Examples Concluding Remarks Family of higher order multi-point iterative methods based on power mean Construction of new methods Family of Optimal fourth order I.F Family of sixth order I.F Family of twelfth order I.F Convergence Analysis of the methods Numerical Examples Dynamic Behaviour in the Complex Plane Concluding Remarks Some New Multi-point Iterative Methods and their Basins of Attraction Construction of new methods Convergence Analysis of the methods Higher Order Methods Convergence Analysis of the methods Numerical Examples Basins of attraction Polynomiographs of p 1 z) = z Polynomiographs of p 2 z) = z A study on extraneous fixed points An application problem Concluding Remarks Improved Harmonic Mean Newton-type methods for system of nonlinear equations Construction of new methods Convergence Analysis of the methods Efficiency of the Methods Numerical Examples Application Concluding Remarks Efficient Newton-type methods for system of nonlinear equations Construction of new methods Convergence Analysis of the methods Efficiency of the Methods Numerical examples viii

9 8.5 Applications Chandrasekhar s equation D Bratu problem Concluding Remarks An improvement to double-step Newton-type method and its multi-step version Construction of new methods Convergence Analysis of the methods Efficiency of the Methods Numerical examples Applications Chandrasekhar s equation D Bratu problem Concluding Remarks Application in Global Positioning System Introduction Basic Equations for Finding User Position Measurement of Pseudorange Solution of User Position from Pseudoranges Numerical Results for the GPS Problem Concluding Remarks Conclusion and Future Work 156 Bibliography 159 ix

10 List of Tables 2.1 Numerical results for f 1 x), x 0) = Numerical results for f 1 x), x 0) = Numerical results for f 2 x), x 0) = Numerical results for f 2 x), x 0) = Numerical results for f 3 x), x 0) = Numerical results for f 3 x), x 0) = Numerical results for f 4 x), x 0) = Numerical results for f 4 x), x 0) = Numerical results for f 5 x), x 0) = Numerical results for f 5 x), x 0) = Comparison of Efficiency Index Numerical results for f 1 x), x 0) = Numerical results for f 1 x), x 0) = Numerical results for f 2 x), x 0) = Numerical results for f 2 x), x 0) = Numerical results for f 3 x), x 0) = Numerical results for f 3 x), x 0) = Numerical results for f 4 x), x 0) = Numerical results for f 4 x), x 0) = Numerical results for f 5 x), x 0) = Numerical results for f 5 x), x 0) = Results for the best value of β [ 20, 20] for f 1 x)-f 5 x) Comparison of Efficiency Index Numerical results for f 1 x), x 0) = Numerical results for f 2 x), x 0) = Best value of β [ 20, 20] for f 3 x), x 0) = Best value of β [ 20, 20] for f 4 x), x 0) = Comparison of results for best value of β for 3 rd P M with 2 nd NM Comparison of results for best value of β for 4 th MJ Comparison of results for best value of β for 6 th MJ Comparison of Efficiency Index Comparison of results for f 1 x) and f 2 x) Results for 2 nd NM and for best value of β in [ 20, 20] Results for the best value of β in [ 20, 20] Comparison of Efficiency Index x

11 6.1 Numerical results for f 1 x) Numerical results for f 2 x) Numerical results for f 3 x) Numerical results for f 4 x) Numerical results for f 5 x) Comparison of convergent and divergent grids for p 1 z) Comparison of convergent and divergent grids for p 2 z) Comparison of results for Planck s radiation law problem Comparison of results for Planck s radiation law problem Comparison of results for Planck s radiation law problem Results for Planck s radiation law problem in fzero Comparison of Efficiency Index Comparison of EI and CE Comparison of different methods for system of nonlinear equations Comparison of different methods for system of nonlinear equations Comparison of different methods for system of nonlinear equations Comparison of number of λ s out of 350 λ s) for 1-D Bratu problem Comparison of EI and CE Numerical results for Test Problems TP1-TP6) Weights and knots for the Gauss-Legendre formula m = 8) Numerical results for Chandrasekhar s equation Comparison of number of λ s out of 350 λ s) for 1-D Bratu problem Comparison of EI and CE Comparison of different methods for system of nonlinear equations Comparison of different methods for system of nonlinear equations Comparison of different methods for system of nonlinear equations Comparison of iteration and errors for Chandrasekhar s equation Comparison of number of λ s for 2-D Bratu problem for n = Comparison of number of λ s for 2-D Bratu problem for n = Coordinates of observed satellite and pseudorange Comparison of the iterative methods for GPS xi

12 List of Figures 3.1 Comparison of iterations for f 2 x), x 0) = 1.7 for 3 rd P M Comparison of error for f 2 x), x 0) = 1.7 for 3 rd P M Comparison of iterations for f 2 x), x 0) = 1.7 for 6 th P MM Comparison of error for f 2 x), x 0) = 1.7 for 6 th P MM Results for f 9 x) Polynomiographs of p 1 z) Polynomiographs of p 1 z) Polynomiographs of p 2 z) Comparison of Efficiency index and Computational efficiency index Variation of θ for different values of λ Variation of number of iteration with λ for 1-D Bratu problem Order of the 2r + 4) th HM family for each λ Comparison of Efficiency index and Computational efficiency index Comparison of Efficiency index and Computational efficiency index Variation of θ for different values of λ Two dimensional user position Three dimensional user position xii

13 Summary In this thesis, we are interested to propose and study new multi-point iterative methods and their error analysis. Chapter 1 presents some preliminaries and literature review leading towards this work. Chapters 2-5 consider some new families of higher order multi-point iterative methods for solving scalar nonlinear equation. We have studied the local convergence analysis using Taylor series for all the methods proposed. Also, we have applied the new methods on many examples and compared with some existing equivalent methods and the results are tabulated in the respective chapters. In Chapter 5 basins of attraction for the fourth order methods have been studied for few cases. In Chapter 6, we have presented a new family of fourth order iterative methods which uses weight functions. Further, we have extended them to sixth and twelfth order methods. The performances of the proposed methods are displayed by applying them on many examples. An application problem arising from Planck s radiation law is also verified. Basins of attraction analysis on the complex domain is carried out on certain standard complex polynomials for the proposed methods and some equivalent methods whose results are displayed using polynomiographs. Chapter 7 considers a new fourth order Newton-like method based on harmonic mean and its multi-step version for solving system of nonlinear equations. The new fourth order method requires evaluation of one function and two first order Fréchet derivatives for each iteration. The multi-step version requires one more function evaluation for each iteration and has converges order 2r + 4, where r is a positive integer and r 1. We have proved that the root α is a point of attraction for a general iterative function, whereas the proposed new schemes also satisfy this result. Numerical experiments, including an application to the 1-D Bratu problem are given to illustrate the efficiency of the new methods. In Chapter 8, we have presented some efficient iterative methods of convergence order four, five and six respectively for solving system of nonlinear equations. Our xiii

14 xiv aim is to achieve higher order Newton-like methods with only one inverse of Jacobian matrix for each iteration. It is proved that new iterative schemes satisfy that the root α is a point of attraction. The performance of the proposed methods are verified through numerical examples and we have considered Chandrasekhar s equation and 1-D Bratu problem for the application. In Chapter 9, we have proposed a new method with convergence order five by improving the double-step Newton method. The multi-step version requires one more function evaluation for each step. The multi-step version converges with order 3r+5, r 1 and positive integer. Numerical experiments compare the new methods with some existing methods. Our methods are also verified on Chandrasekhar s problem and 2-D Bratu problem to illustrate the applications. Chapter 10 considers nonlinear system of equations arising out of a GPS receiver, whose data is carried on electromagnetic signals transmitted by the earth-orbiting GPS satellite constellation and the computation of the travel time of these received signals. The time measurements are converted to distance measurements, which can be used to compute the unknown position and time of the receiver from the known positions of the satellite transmitters and signal transit times. A set of nonlinear navigation equations is formed. These nonlinear equations are solved using iterative techniques based on newly developed Newton-type methods given in Chapters 7-9. The results indicate that the new Newton-type methods are simple, fast and accurate as compared to Newton s method to predict the earth coordinates in GPS.

15 Publications Refereed Papers 1. Kalyanasundaram M., J. Jayakumar, Some higher order Newton-like methods for solving system of nonlinear equations and their applications, Int. J. Appl. Comput. Math, 2016), DOI /s z. Springer) 2. Kalyanasundaram M., D.K.R. Babajee, J. Jayakumar, An improvement to double-step Newton method and its multi-step version for solving system of nonlinear equations and its applications, Numerical Algorithms, 2016), DOI: /s Springer, SCIE, IF = 1.42) 3. Kalyanasundaram M., J. Jayakumar, Higher Order Methods for Nonlinear Equations and their Basins of Attraction, Mathematics 2016, 4, 22; DOI: /math Thomson Reuters - ESCI) 4. D.K.R. Babajee, Kalyanasundaram M., J. Jayakumar. On some improved Harmonic Mean Newton-like methods for solving systems of nonlinear equations, Algorithms ), pp Thomson Reuters - ESCI) 5. D.K.R. Babajee, Kalyanasundaram M., J. Jayakumar, A family of higher order multi-point iterative methods based on power mean for solving nonlinear equations, Afrika Matematika, 2015), DOI: /s Springer, Scopus) 6. Kalyanasundaram M., J. Jayakumar, Two new families of iterative methods for solving nonlinear equations, Tamsui Oxf. J. Inf. Math. Sci., 2015), Accepted). Scopus) 7. Kalyanasundaram M., J. Jayakumar, Class of modified Newton s method for solving nonlinear equations, Tamsui Oxf. J. Inf. Math. Sci., ), pp Scopus) xv

16 xvi 8. Kalyanasundaram M., J. Jayakumar, A fifth order modified Newton type method for solving nonlinear equations, IJAER, 1072) 2015), pp Scopus) 9. J. Jayakumar, Kalyanasundaram M., Power means based modification of Newton s method for solving nonlinear equations with cubic convergence, IJAMC, 62) 2015), pp J. Jayakumar, Kalyanasundaram M., Generalized Power means modification of Newton s method for Simple Roots of Nonlinear Equation, Int. J. Pure Appl. Sci. Technol., 182) 2013), pp J. Jayakumar, Kalyanasundaram M., Modified Newton s method using harmonic mean for solving nonlinear equation, IOSR Journal of Mathematics, ), pp Refereed Conference Proceedings 1. Kalyanasundaram M., J. Jayakumar, Higher order Multi-point Iterative methods and its Application to GPS and Nonlinear Differential Equations, Book of Abstracts of the National Conference on Recent Developments in Mathematical Analysis and its Applications, Pondicherry University, February 25 26, Kalyanasundaram M., J. Jayakumar, A fifth order modified Newton type method for solving nonlinear equations, Proceedings of the Interdisciplinary National Conference on Soft computing and its Applications, Anna University, India, Vol. 1, pp , J. Jayakumar, Kalyanasundaram M., Two Class of Sixth Order Modified Newton s Method for Solving Nonlinear Equations, Proceedings of the International Conference on Mathematics and its Applications, Anna University, India, pp , 2014.

17 Glossary of Symbols R Set of Real numbers C Set of Complex numbers x Element of R z Element of C fx) Nonlinear function in R x x 0) e e k) Root of fx) Initial Point = x x = x k) x error = x k+1) x k) ψx) Iterative Function I.F.) d Functional Evaluation F.E.) per iteration d ψ EI CE Total no. of function evaluations Efficiency Index Computational Efficiency index EI T = p d EI O N = p 1 d Number of iteration for scalar case c j = f j) x ), j = 2, 3,... j!f x ) p Order of convergence ACOC ρ k x F x) Approximated computational order of convergence ACOC for scalar case = x 1, x 2,..., x n ) T = f 1 x), f 2 x),..., f n x)) T xvii

18 xviii x F x) Sx, δ) Sx, δ) F x k) ) e k) = x 1, x 2, x 3,..., x n) T, root of F x) Fréchet derivative of F x) { } Open ball x x < δ {, δ > 0 } Closed ball x x δ, δ > 0 F x) evaluated at x = x k) = x k) x C q =1/q!)[F x )] 1 F q) x ), q 2 M Number of iteration for multivariate case err min = x k+1) x k) 2 p c JR) F R) TP ACOC for system of equation Julia set Fatou set Test Problem λ, β Real parameter λ c f ξ Critical value of λ Inverse of f Extraneous fixed points 1-D One Dimensional in Bratu problem 2-D Two Dimensional in Bratu problem GPS Global Positioning System

19 Chapter 1 Introduction One of the popular iterative methods for finding approximate solutions of nonlinear equations is the classical Newton s method, which has quadratic convergence. Higher order methods such as Chebyshev s, Halley s, Chebyshev-Halley s methods, etc., require evaluation of second or higher order derivatives and hence they are less efficient and more costly in terms of computational time. Calculating zeros of scalar nonlinear equation fx) = 0 and system of nonlinear equations F x) = 0 rank among the most significant problems in the theory and practice not only of applied mathematics but also of many branches of engineering sciences, physics, computer science, finance, to mention only some of the fields. These problems lead to a rich blend of mathematics, numerical analysis and computing science. Thus, it is important to study higher order variants of Newton s method, which require only function and its first derivative calculations and are more robust when compared to Newton s method. 1.1 Classification of iterative methods Let f be a real single-valued function of a real variable. If fx ) = 0 then x is said to be a zero of f or equivalently, a root of the equation fx) = ) We will always assume that f has sufficient number of continuous derivatives in the neighborhood of x. Roots of equation 1.1) can be found analytically only in some special cases. To solve 1.1) approximately and find the root x, it is customary to 1

20 2 apply some iterative methods of the form x k+1) = ψx k) ), k = 0, 1, 2, ) where x k) is an approximation to the root x isolated in a real interval [a,b], x k+1) is the next approximation and ψ is a suitable continuous function defined on [a,b]. The iterative method starts with an initial approximation x 0) [a, b] and converges to x. The function ψ is called Iteration Function I.F.). Formula 1.2) defines the simplest iterative method where only one previous approximation x k) is required for evaluating the next approximation x k+1). Such an iterative method is called one-point iterative method without memory. Let x k+1) be determined by new information at x k) and reused information at x k 1),..., x k n), 1 n k. Thus x k+1) = ψx k) ; x k 1),..., x k n) ). 1.3) Then ψ is called a one-point I.F. with memory. The semicolon in equation 1.3) separates the point at which new data are used from the points at which old data are reused. Let x k+1) be determined by new information at x k), φ 1 x k) ),..., φ i x k) ), i 1. No old information is reused. Thus, x k+1) = ψx k), φ 1 x k) ),..., φ i x k) )). 1.4) Then ψ is called a multipoint I.F. without memory. Finally, let x k+1) be determined by new information at x k), x k 1),..., x k n) and reused information at x k n 1),..., x k m). Thus x k+1) = ψx k), x k 1),..., x k n) ; x k), x k n 1),..., x k m) ), m > n. 1.5) Then ψ will be called a multi-point I.F. with memory. The semicolon in equation 1.5) separates the point at which new data are used from the points at which old data are reused. 1.2 Order of Convergence The convergence rate of an iterative method is the issue of equal importance to the theory and practice of iterative methods as the convergence itself. The convergence rate is defined by the order of convergence.

21 3 Definition Traub 1964) Let x 0), x 1),..., x k),... be a sequence converging to x. Let e k) = x k) x. If there exists p R and C R {0} such that e k+1) C, 1.6) e k) ) p then p is called the order of the sequence and C is the asymptotic error constant. Definition Wait 1979) If the sequence {x k } tends to a limit x in such a way that x k+1) x lim k x k) x ) = C p for p 1, then the order of convergence of the sequence is said to be p, and C is known as the asymptotic error constant. If p = 1, p = 2 or p = 3, the convergence is said to be linear, quadratic or cubic, respectively. Let e k) = x k) x, then the relation ) e k+1) = C e k) ) p + O e k) ) p+1 ) = O e k) ) p. 1.7) is called the error equation. The value of p is called the order of convergence of the method. In practice, the order of convergence is often determined by the following statement known as Schröder-Traub s theorem in Traub 1964). Theorem Let ψ be an I.F. such that ψ p) is continuous in a neighborhood of x. Then, ψ is of order p if and only if ψx ) = x ; ψ j x ) = 0, j = 1, 2,..., p 1; ψ p) x ) ) The asymptotic error constant is given by ψx k) ) x lim = k x k) x p ψ p) x ) p! = C p. 1.9) For example, we consider Newton s iteration ψx) = x ux), where ux) = fx). By a direct calculation we find that f x) We have ψx ) = x, ψ x ) = 0, ψ x ) = f x ) f x ) 0. ψx k) ) x x k) x 2 C 2 = Therefore, Newton s iterations have second order convergence. f x ) 2f x ). 1.10) The following two theorems are concerned with the order of the composition of iteration functions.

22 4 Theorem Traub 1964) Let x be a simple zero of a function f and let ψ 1 x) define an iterative method of order order p 1. Then a composite I.F. ψ 2 x) introduced by Newton s method ψ 2 x) = ψ 1 x) fψ 1x)), 1.11) f x) defines an iterative method of order p Theorem Traub 1964) Let ψ 1 x), ψ 2 x),...,ψ s x) iterative functions of order p 1, p 2,...,p s respectively. Then the composition ψx) = ψ 1 ψ 2...ψ s x))...)) defines the iterative method of order p 1 p 2...p s. Babajee 2012) developed a technique using weight functions to improve the order of old methods. Theorem Babajee 2012) Let a sufficiently smooth function f : D R R has a simple root x in the open interval D. Let ψ old x) be an I.F. of order p. Then the I.F. defined as ψ new x) = ψ old x) G fψ old x)) is of local convergence of order p + q if G is the weight function satisfying the error equation G = 1 ) 1 + C f x G e q + Oe q+1 ), ) where C G is a constant. Suppose that the error equation of the old I.F. is given by e old = ψ old x) x = C old e p Then, the error equation of the new I.F. is given by e new = ψ new x) x = C G C old e p+q c 2 C 2 old e 2p +..., 1.12) where c j = f j) x ), j = 2, 3, 4,.... j!f x) 1.3 Computational order of convergence Together with the order of convergence, for practical purposes we describe the notion of computational order of convergence COC). Namely, it is of interest to check the order of convergence of an iterative method during its practical implementation and estimate how much it differs from the theoretical order. Definition Let x k 1), x k) and x k+1) be the last three successive approximations to the sought zero x obtained in the iterative process x k+1) = ψx k) ) of presumably order p. Then, the computational order of convergence ρ can be approximated using the formula ρ = ln xk+1) x )/x k) x ) ln x k) x )/x k 1) x ). 1.13)

23 5 The COC has been used in many papers to test numerically the order of convergence of new methods whose order has been theoretically studied. The value of zero x is unknown in practice so that we use another approach that avoids the use of zero x studied by Cordero and Torregrosa 2007) by introducing a more realistic relationship called approximated computational order of convergence ACOC). The ACOC of a sequence {x k) } k 0 is defined by ρ k = ln êk+1) /ê k) ln ê k) /ê k 1), 1.14) where ê k) = x k) x k 1). It was proved by Grau and Diaz-Barrero 2000) that ρ k p when ê k 1) 0 which means that ρ k p in the sense ρ k lim k p = 1. The use of the computational order of convergence, given by 1.13) and 1.14) serves as a practical check on the theoretical error calculations. These formulae give mainly satisfactory results in practice. Apart from the estimation of a real convergence rate of an iterative method in practical realization, the computational order of convergence may be suitably applied in designing new root-solvers. Namely, in some complicated cases, it is not easy to find theoretical order of convergence of such a method. Test-examples that include the calculation of ACOC can be helpful to predict the convergence speed of designed method, which makes easier further convergence analysis. 1.4 Computational Efficiency In practice, it is important to know certain characteristics of the applied root-finding algorithm, for instance, the number of numerical operations in calculating the desired root to the wanted accuracy, convergence speed, processor running time, occupation of storage space, etc. In spite of the ever-growing speed of modern computers, these features remain important issues due to the constantly increasing complexity of the problems solved by computers. To compare various numerical algorithms, it is necessary to define computational efficiency based on the speed of convergence order), the cost of evaluating f and its derivatives problem cost), and the cost of constructing the iterative process combinatory cost).

24 6 Obviously, a root-finding method is more efficient as its amount of computational work is smaller, keeping the remaining parameters fixed. In other words, the most efficient method is the one that satisfies the posted stopping criterion for the smallest CPU central processing unit) time. The following definitions are given to calculate Efficiency Index and informational Efficiency Index: Definition Ostrowski 1960) Efficiency Index is defined as EI O = p 1 d, 1.15) where p is the order of convergence of the method and d is the total number of new function evaluations the values of f and its derivatives) per iteration. Definition Traub 1964) The Informational Efficiency Index EI T ) is defined as EI T = p d. 1.16) The following alternative formula, obtained by taking the logarithm of 1.15) EI = log p d, 1.17) does not essentially differ from 1.15) See McNamee 2007)). Computational cost of any I.F. ψ constructed for solving a nonlinear equation f depends on the number of function evaluations F.E.) per iteration. The connection between the order of convergence of ψ and the cost of evaluation of f and its derivatives are given by the so-called fundamental theorem of one-point I.F., stated by Traub 1964). Theorem Traub 1964) Let ψ be any one-point iterative function with order p and let d ψ be the number of new F.E. per iteration. Then for any p there exists ψ with the informational efficiency EIψ) = p/d ψ = 1 and for all ψ it holds EIψ) = p/d ψ 1. Moreover, ψ must depend explicitly on the first p 1 derivatives of f. Consequently, one-point iteration function with sufficiently smooth f cannot attain EI T greater than 1. This means that iterative methods with EI T greater than 1 could be found only in the class of multipoint methods, which will be discussed later in detail. The main goal in the construction of new methods is to obtain a method with the best possible efficiency. This means that according to the definition 1.15) or 1.17), it is desirable to attain as high as possible convergence order with the fixed number

25 7 of F.E. per iteration. In the case of multipoint methods without memory, this demand is closely related to the optimal order of convergence considered in the Kung-Traub conjecture. Kung-Traub Conjecture Kung and Traub 1974): Let ψ be an I.F. without memory with d function evaluations. Then pψ) p opt = 2 d 1, 1.18) where p opt is the maximum order. Multipoint methods that satisfy the Kung-Traub conjecture is usually called optimal methods. Algorithms of optimal efficiency are of particular interest in the present trend in research where we have strived to get some optimal methods in this thesis. The Kung-Traub conjecture is supported by the families of multipoint methods of arbitrary order p, proposed in Kung and Traub 1974) and Zheng et al. 2011) and also by a number of particular multipoint methods, which will be discussed later in this chapter. 1.5 Initial approximations Every iterative method for solving a nonlinear equation f requires the knowledge of an initial approximation x 0) to the sought root x. Many one-point root-finding methods as well as multipoint iterative methods are based on Newton s method, which is famous for its simplicity and good local convergence properties. However, a good convergence of Newton s method cannot be expected when the initial guess is not properly chosen, especially when the slope of the function f is extremely flat or very steep near the root or f is of oscillatory type. The significance of the choice of initial approximations becomes even more important if higher-order iterative methods are applied due to their sensitivity to perturbations. If the initial approximation is not close enough to the zero, then these methods may converge slowly at the beginning of the iterative process, which consequently decreases their computational efficiency.

26 8 1.6 One-point iterative methods for simple zeros In this section we give a review of the most frequently used one-point iterative methods for solving nonlinear equations. Since there is a vast literature studying these methods, including their derivation, convergence behavior, and numerical experiments, we present only basic iterative formulae which are used or cited in later chapters. Let ψx) be a general I.F. Let fx) = 0 be a given nonlinear equation with a simple root x located in some interval [a, b]. The best known iterative method for solving nonlinear equations is the classical Newton s method 2 nd NR) having quadratic convergence: ψ 2 nd NMx) = x ux), where ux) = fx) f x), f x) ) For small values of h the approximation f x) f x) = fx + h) fx) h 1.20) holds. Taking two consecutive approximations x k 1) and x k), from 1.20) we obtain the approximation to the first derivative in the form f x k) ) = fxk) ) fx k 1) ) x k) x k 1). 1.21) Substituting 1.21) into 1.19) yields the iterative formula ψ Sec x k 1), x k) ) = x k) fx k) x k) x k 1) ) fx k) ) fx k 1) ), 1.22) which defines the well-known secant method. The convergence order of this method is It possesses a superlinear convergence and does not require the evaluation of derivative of fx). Taking h = fx) in 1.20) and substituting it in the Newton formula 1.19), we obtain the derivative free Steffensen method ψ 2 nd Stex) = x fx) 2 fx + fx)) fx). 1.23) The iterative method 1.23) belongs to the class of multipoint methods and has order two. The Halley I.F. 3 rd Hal) and Chebyshev I.F. 3 rd Che) both having cubic convergence are given by ψ 3 rd Halx) = x ux) 1 c 2 x)ux), 1.24)

27 9 ψ 3 Chex) = x ux) c rd 2 x)ux) 2, 1.25) where c 2 x) is defined as in 1.12). A half century ago, Traub 1964) proved that one-point iterative method for solving a single nonlinear equation of the form fx) = 0, which requires the evaluation of the given function f and its first p 1 derivatives, can reach the order of convergence at most p. For this reason, a great attention was paid to multipoint iterative methods since they overcome theoretical limits of one-point methods concerning the convergence order and computational efficiency. Beside Traub s research presented in his fundamental book Traub 1964) this class of methods was also extensively studied in some papers published in the 1970s See Jarratt 1966a, 1969), King 1973), Kung and Traub 1974)). Surprisingly, the interest in multipoint methods has grown again in the first decade of this century. However, some of the newly developed methods were represented by new iterative formulae, but without any improvement compared to the existing methods, others were only rediscovered methods of the 1960s See for more details in Petkovic and Petkovic 2007)) and only a few new methods have brought a genuine advance in the theory and practice of iterative processes. In the next section, we give a review of some of the I.F. from Traub s period to present date. 1.7 Review of Traub s period to date In the following, we review many iterative methods proposed from the period of Traub 1964), where we shall specify some of the methods with I.F. and many of the methods without I.F. All the methods given here with I.F. are more relevant to our present work. The most important contribution which deals with many methods comprehensively is the book by Traub 1964). In this book, Traub derived a number of cubically convergent two-point methods. One of the presented approaches relies on interpolation, which will be illustrated by several examples. Let x be fixed point and let f be a real function whose zero is sought. We construct an interpolation function Φt) such that Φ r) t) = f r) t), r = 0, 1, 2...n, 1.26)

28 10 which makes use of r j values of functions f j) with 0 j q < n. That is, we wish to replace the dependence of the interpolating function on the higher order derivatives of f by lower derivatives evaluated at a number of points. We do not require that Φt) is necessarily an algebraic polynomial. A general approach to the construction of multipoint methods of interpolatory type is presented in Traub 1964). To construct two-point methods of third order free from second derivative, we restrict ourselves to the special case when [ ] Φt) = fx) + t x) a 1 f x) + a 2 f x + b 2 t x)). 1.27) The condition Φx) = fx) is automatically fulfilled and we only impose additional conditions 1.26) to hold at the point t = x for r = 1, 2. Further, following details found in Traub 1964), we obtain the following system of equations a 1 + a 2 = 1, 2b 2 a 2 = 1. This system has a solution for any b 2 0. For b 2 = 1/2, it follows that a 1 = 0, a 2 = 1, so that 1.27) becomes Φt) = fx) + t x)f x t x) ). 1.28) For b 2 = 1, there follows that a 1 = a 2 = 1/2 and from 1.27) we get Φt) = fx) + 1 ) f 2 t x) x) + f t). 1.29) Let t = ψ be a zero of Φ, that is Φψ) = 0. Putting t = ψ in 1.28) we get fx) ψx) = x f x ) ψ x)). 2 This is an implicit relation in ψ. Substituting ψ by Newton s approximation x ux) on the right hand side of 1.30), we get the third order midpoint method 3 rd MP ) fx) ψ 3 rd MP x) = x f x ) ux)). 2 Note that 1.31) was rediscovered much later by Frontini and Sormani 2003) who used the quadrature rule of midpoints. Also, the same method has been rediscovered in different ways by Homeier 2003) and Ozban 2004). Another two-point method can be obtained from 1.29) by taking t = ψ. Then from 1.29) we get the relation ψx) = x 2fx) f x) + f ψ). 1.32)

29 11 Replacing ψ by x ux) on the right-hand side of 1.32), we obtain a two-point method of third order known as arithmetic mean method 3 rd AM) ψ 3 rd AMx) = x 2fx) f x) + f x ux)). 1.33) Note that Traub pointed out that the 3 rd AM is a generalization of Newton s I.F. in the sense that the derivative appearing in Newton s I.F. is replaced by the average of derivatives evaluated at x and at the Newton point of x. Also, note that the 3 rd AM method was rediscovered after many years by Weerakoon and Fernando 2000), who derived this method by the use of numerical integration. Let us now employ the function f which is inverse to f into the interpolation formula 1.29). Then we have Φt) = fy) + 1 ) f 2 t y) y) + f t). 1.34) For t = 0 we define ψ = Φ0). Then, due to the relations f y) = dx dy = 1 f x), f 0) = f fα)) = 1 f α), we obtain ) ψ = x 1 2 fx) 1 f x) ) f α) We estimate f α) by f x ux)), from 1.35) there follows the iterative method ) ψ 3 HMx) = x 1 rd 2 fx) 1 f x) ) f x ux)) Traub 1964) pointed out that this I.F. is a generalization of Newton I.F. in the sense that the reciprocal of the derivative is replaced by the average of the reciprocal derivatives evaluated at x and at the Newton point of x. This method was later rediscovered by Homeier 2003) and Ozban 2004). Traub introduced a double step Newton method with fourth order convergence 4 th NR) by evaluating f and f at two points ψ 4 th NRx) = x ux) f[x ux)] f [x ux)], 1.37) which was recently rediscovered by Noor et al. 2013) using the variational iteration technique.

30 12 A family of multistep I.F. with the first derivative evaluated at every step introduced by Traub 1964) and known as p th T M ψ p th T Mx) = A r x), j = 2, 3, 4...r, A 1 x) = x. A j x) = A j 1 x) f[a j 1x)], f x) 1.38) Some of the main advantages of this p th T M method are summarized below: An I.F. of order p is constructed from p 1 values of f and one value of f. f x) 1 need to be calculated only once even for a high order I.F. The form of A r x) suggests generalizing to systems of equations with f x) 1 replaced by [F x)] 1 where F x) is the Frechet derivative of the system. Thus, an I.F. of order p may be constructed which requires only one matrix inversion. The recursive definition of A r x) permits its calculation in a simple loop on a computer. The asymptotic error constant of a one-point iteration function of order p generally depends on f p) x ), where p is arbitrary. But the asymptotic error constant of p th T M family depends only on f x ) f x ). The first optimal two point I.F. was constructed by Ostrowski 1960), several years before Traub s extensive investigation in this area described in Traub 1964). It is given by ψ 4 th OM1x) = x ux) fψ 2 nd NMx)) fx) 2fψ 2 nd NMx)) fx). 1.39) Another optimal fourth order I.F. introduced by Ostrowski 1960) is given by ψ 4 OM2x) = ψ th 2 NMx) fψ 2 NMx)) fx) nd nd f x) fx) 2fψ 2 NMx)) nd ). 1.40) Jarratt 1966b) gave a three point fifth order I.F.5 th JM) ψ 5 th JMx) = x 6fx) f x) + f [ψ 2 nd NMx)] + 4f [v 1 x)], 1.41)

31 13 where v 1 x) = x 1 8 ux) 3 8 fx) f [ψ 2 nd NM x)] two-point fourth order I.F. 4 th JM) and it is given by. Also, Jarratt 1969) suggested optimal ψ 4 JMx) = x 3f x 2ux)) + f x) 3 fx) ) th 6f x 2ux)) 2f. 1.42) 3 x) f x) King 1973) developed a one-parameter family of optimal fourth-order I.F. 4 th KM), which is given as ψ 4 th KMx) = ψ 2 nd NMx) ) fx) + βfψ 2 NMx)) fψ nd 2 NMx)) nd. fx) + β 2)fψ 2 NMx)) f nd x) 1.43) An optimal fourth order I.F. 4 th KT ) given by Kung and Traub 1974) ψ 4 th KT x) = ψ 2 nd NMx) fψ 2 nd NMx)) f x) [ 1 1 fψ 2 nd NM x)) fx) ] ) Sharma 2005) suggested a third-order I.F. formed by the composition of Newton and Steffensen I.F.s for finding simple roots of a nonlinear equation. Per iteration this method requires two functions and one derivative evaluations. Abbasbandy 2005) presented some efficient numerical algorithms for solving a system of nonlinear equations based on the modified Adomian decomposition method. Chun 2006) constructed a Newton-like iteration methods for the computation of solutions of nonlinear equations. The new scheme is based on the homotopy analysis method applied to equations in general form equivalent to the nonlinear equations. It provides a tool to develop new Newton-like iteration methods or to improve the existing iteration methods which contain the well-known Newton iteration formula. The order of convergence and the corresponding error equations are derived analytically. Kanwar 2006) suggested a new family with cubic convergence obtained by discrete modification and his experiments show that the method is suitable in the cases where Steffensen or Newton-Steffensen methods fail. Kou et al. 2007b) presented a family of fifth order I.F. formed by the composition of Newton and third-order I.F. for solving nonlinear equations. Per iteration the new methods require two evaluations of the function, one first derivative and one second derivative. Sharma and Guha 2007) suggested a one-parameter family of sixth order methods 6 th SG) for solving equations based on 4 th OM2. Each member of the family requires three evaluations of the given function and one evaluation of

32 14 its derivative per iteration. Numerical examples are presented and the performance is compared with Ostrowski method ψ 6 th SGx) = ψ 4 th OM2x) fψ 4 th OM2x)) f x) where a R. ) fx) + afψ 2 NMx)) nd, fx) + a 2)fψ 2 NMx)) nd 1.45) Chun 2007) presented a new two-parameter family of iterative methods for solving nonlinear equations which includes, as a particular case, the classical Potra and Ptak third-order method. Per iteration the new methods require evaluations of two functions and one of its derivative. He showed that each member of the family is cubically convergent. Salkuyeh 2007) gave a family of Newton-type methods free from second and higher order derivatives for solving nonlinear equations. The order of the convergence of this family depends on a function. Under a condition on this function this family converge cubically and by imposing one more condition on this function one can obtain methods of order four. It has been shown that this family includes many of the available iterative methods. Xiaojian 2007) developed a family of third-order I.F. 3 rd P M) by using power mean and it is given by ψ 3 rd P Mx) = x fx) Dx, β), ) f Dx, β) = signf x) β + f 2 nd NM)) β 1 β x)) ) The cases β = 1, 1, 2 correspond to arithmetic mean I.F. 3 rd AM), harmonic mean I.F. 3 rd HM) and square mean I.F. 3 rd SM) respectively. For the case β = 0, he obtained geometric mean I.F. 3 rd GM) by allowing β 0. Kou et al. 2007a) presented new modifications in Newton s method known as 5 th Kou. Analysis of convergence shows that these methods have an order of convergence five. ψ 5 th Koux) = ψ 3 rd AMx) fψ 3 rd AMx)) f ψ 2 nd NMx)). 1.47) Using the linear interpolation on two points, Parhi and Gupta 2008) developed a three point sixth order I.F. by using 3 rd AM and it is given by ψ 6 AMx) = ψ th 3 AMx) fψ 3 AMx)) f x) + fψ rd 2 NMx)) nd rd f x) 3fψ 2 NMx)) f nd x) ). 1.48)

33 15 Cordero and Torregrosa 2008) presented a family of multi-point iterative methods for solving nonlinear equations and the order of convergence of its elements are studied. The computational efficiency of some elements of this family is also provided. Basu 2008) presented a family of cubically convergent schemes with three function evaluation per iteration based on Newton I.F. Also, based on inverse Newton I.F., he presented another family of cubically convergent schemes with three function evaluation per iteration. Finally, he combined a special case of the first family with a special case of second family and proposed a composite fourth order scheme. Thukral 2008) introduced a new Newton-type method for solving a nonlinear equation. This new method was proved to have cubic convergence. They examined the effectiveness of the rational Newton method by comparing the performance with the well established methods for approximating the root of a given nonlinear equation. Noor and Waseem 2009) suggested two new two-step iterative methods for solving the system of nonlinear equations using quadrature formulae. They proved that these methods have cubic convergence. Wang and Liu 2009) developed two new families of sixth-order methods for getting simple roots of nonlinear equations. Per iteration, these methods require two function evaluation and two first derivative evaluations, which gave EI O = Hueso et al. 2009a) presented a modification of Newton s method for nonlinear systems whose Jacobian matrix is singular. They proved, under certain conditions, this modified Newton s method has quadratic convergence. Moreover, to confirm the theoretical results they tested different numerical examples to compare this variant with the classical Newton s method. Hueso et al. 2009b) developed a family of predictor-corrector methods free from the second derivative for solving systems of nonlinear equations. In general, the obtained methods have an order of convergence three, but in some particular cases the order is four. Chun and Kim 2010) presented some new third-order iterative methods for finding a simple root of a nonlinear scalar equation. A geometric approach based on the circle of curvature is used to construct the new methods. Awawdeh 2010) employ the homotopy analysis method to derive a family of iterative methods for solving systems of nonlinear algebraic equations. This approach yields second and third order iterative methods which are more efficient than their

34 16 classical counterparts such as Newton s, Chebychev s and Halley s methods. Ezquerro et al. 2010a) discuss an extension of Gander s result for quadratic equations. Gander provides a general expression for iterative methods with order of convergence at least three in the scalar case. Taking into account an extension of this result, they define a family of iterations in Banach spaces with R-order of convergence at least four for quadratic equations. Petkovic et al. 2010) derived a new class of three-point I.F. for solving nonlinear equations of eighth order. These methods have been developed by combining fourth order methods from the class of optimal two-point methods and a modified Newton s method in the third step, obtained by a suitable approximation of the first derivative based on interpolation by a nonlinear fraction. It is proved that the new three-step methods reach eighth order convergence using only four function evaluations, which supports the Kung-Traub conjecture on the optimal order of convergence. Kim 2010) developed a new two-step biparametric family of sixth-order iterative methods free from second derivatives to find a simple root of a nonlinear algebraic equation. Cordero et al. 2010a) developed three point sixth order I.F. by using method of Homeier 2003) and linear interpolation on two points and it is given by ψ 6 th CMx) = ψ 3 rd HMx) 2fψ 3 rd HMx))f ψ 2 nd NMx)) f ψ 2 nd NMx)) 2 f x) 2 + 2f ψ 2 nd NMx))f x). 1.49) Cordero et al. 2010b) suggested a reduced composition technique which has been used on Newton and Jarratt s methods in order to obtain an optimal relation between convergence order, functional evaluations and the number of operations. Also, Cordero et al. 2010c) presented a new family of iterative methods for solving nonlinear equations with sixth and seventh order convergence. The new methods in these two papers are obtained by composing known methods of third and fourth order with Newton s method and using an adequate approximation of the last derivative, which provides high order of convergence and reduces the required number of functional evaluations per step. Li et al. 2010) gave modification of Newton s method with higher-order convergence. The modification of Newton s method is based on King s fourth-order

35 17 method. The new method requires four evaluations of the function and two evaluations of first derivatives per iteration. Analysis of convergence demonstrates that the order of convergence is sixteen. Noor 2010) gave a new modified homotopy perturbation method and analyzed a class of iterative methods for solving nonlinear equations. This modification of the homotopy method is quite flexible. These methods include the two-step Newton method as a special case and it is of fourth-order convergent. Thukral and Petkovic 2010) derived a family of three-point iterative methods for solving nonlinear equations by using a suitable parametric function and two arbitrary real parameters. They proved that the methods have convergence order eight requiring only four function evaluations per iteration. The proposed class of methods is optimal which supports the Kung-Traub hypothesis. An efficient fourth-order technique with two first derivative evaluations and one function evaluation has been presented by Khattri and Abbasbandy 2011). Cordero et al. 2011a) derived new iterative methods with order of convergence four and higher, for solving nonlinear systems, by composing iteratively golden ratio methods with a modified Newton s method. Sharma and Sharma 2011) developed a new third order method for finding multiple roots of nonlinear equations based on the scheme for simple roots developed by Kou et al. 2007b). Further investigation gave rise to new third and fourth order family of methods which do not require second derivative. The fourth order family has optimal order, since it requires three evaluations per step. Ardelean 2011) studied the basins of attraction for some of the iterative methods for solving the equation P z) = 0, where P : C C is a complex polynomial. The beautiful fractal pictures generated by these methods are presented too. Grau-Sanchez et al. 2011) gave two new iterative methods and analyzed their convergence. A generalization of the efficiency index used in the scalar case for several variables in iterative methods for solving systems of nonlinear equations is revisited. Analytic proofs of the local order of convergence based on developments of multilineal functions and numerical concepts were used to illustrate the results. An approximation of the computational order of convergence is computed independently without the knowledge of the root and the necessary time to get one correct decimal

36 18 is studied. Arroyo et al. 2011) discussed the problem of determination of the preliminary orbit of a celestial body. They compared the results obtained by the classical Gauss s method with those obtained by some higher order iterative methods for solving nonlinear equations. The original problem of determination of the preliminary orbits was posed by means of a nonlinear equation. They modified this equation in order to obtain a nonlinear system which describes the mentioned problem and derived a new efficient iterative method for solving it. Scott et al. 2011) observed that the iterative methods are classified by the order, informational efficiency and efficiency index. Here, the authors considered other criteria, namely the basins of attraction of the method and its dependence on the order. Further, several methods of various orders and their basins of attraction are presented. Khattri and Argyros 2011) introduced a four parameter family of sixth order convergent iterative methods for solving nonlinear scalar equations. Methods of this family require evaluation of four functions per iteration. These methods are totally free of derivatives. Convergence analysis shows that this family is sixth order convergent. Yun 2011) developed a new simple iteration formula, which do not require any derivative evaluation. It is proved that the convergence order of the new method is quadratic. Sharma and Guha 2011) presented a one-parameter family of iterative methods for solving nonlinear equations. All the methods of the family have third-order convergence, except the one which has fourth-order convergence. It is shown that the fourth order method is found to be more efficient than the third-order methods. Based on Ostrowski s method, Cordero et al. 2011b) proposed a new family of eighth-order methods for solving nonlinear equations. In terms of computational cost, each iteration requires three evaluations of functions and one evaluation of its first derivative. This method is optimal according to Kung and Traub s conjecture. An efficient fourth order I.F. 4 th KA) presented by Khattri and Abbasbandy 2011) uses one function and two first derivative evaluations per computing step where τ = f x 2 3 ux)) f x). ψ 4 th KAx) = x τ 9 2 τ τ 3 ) fx) f x) 1.50)

37 19 Khattri and Log 2011) developed a simple yet practical algorithm for constructing derivative free iterative methods having higher order convergence. Soleymani 2011) gave some new sixth-order modifications of Jarratt methods for solving single nonlinear equation. Sharifi et al. 2012) developed a general class of I.F. using two evaluations of the first order derivative and one evaluation of the function per computing step. It is proved that the new class of I.F. has fourth-order convergence and found to be optimal. The derived class is further extended for multiple roots. Kanwar et al. 2012) developed a new cubically convergent family of super - Halley method based on power means. Some well known methods can be regarded as particular cases of the proposed family. New classes of higher order multipoint iterative methods free from second order derivative are derived from semi-discrete modifications of the above mentioned methods. It is shown that the super-halley method is the only method which produces fourth order multipoint iterative methods. Furthermore, these multipoint methods with cubic convergence have also been extended to finding the multiple zeros of nonlinear functions. Soleymani et al. 2012c) investigated the construction of some classes of twopoint I.F. without memory for finding simple roots of nonlinear scalar equations. These classes are built via weight functions and reach optimal order four using three function evaluations based on Weerakoon and Fernando 2000) and Homeier 2003). Chun et al. 2012) developed new fourth order optimal root finding methods for solving nonlinear equations. The classical Jarratt s family of fourth order method is obtained as a special case. They present results which describe the conjugacy classes and the dynamics of the presented optimal method for complex polynomials of degree two and three. Cordero et al. 2012a) introduced a technique for solving nonlinear systems that improved the order of convergence of any given iterative method which uses the Newton iteration as a predictor. The main idea is to compose a given iterative method of order p with a modification of the Newton method that introduces just one evaluation of the function, obtaining a new method of order p + 2. Cordero et al. 2012b) presented a new technique for designing iterative methods for solving nonlinear systems. This procedure, called pseudocomposition, uses a known method as a predictor and the Gaussian quadrature as a corrector. The order of convergence

38 20 of the resulting scheme depends, among other factors, on the order of the last two steps of the predictor. They also introduce a new iterative algorithm of order six, and apply the mentioned technique to generate a new method of order ten. Babajee 2012) improved the order of the midpoint iterative method from three to four with the same number of function evaluations using weight functions to obtain a class of two-point fourth order Jarratt-type methods. Then he proved a general result to further improve the order of the old methods through an additional function evaluation using weight functions. In this way, he developed two three-point sixth order midpoint methods using different weights. A family of n-point I.F. of order 2n is proposed. Using weight functions, he further improved the methods to obtain a five-point sixteenth order midpoint method with the same efficiency index as the optimal two-point fourth order Jarratt-type methods. Chun and Neta 2012) developed a sixth order I.F. which requires an additional evaluation of the function f at the point iterated by 4 th KT I.F. ψ 6 th CNMx) = ψ 4 th KT x) fψ 4 th KT x)) [ f x) 1 fψ 2 nd NM x)) fx) fψ 4 th KT x)) fx) ] ) Babajee and Jaunky 2013) derived an optimal fourth-order Newton-secant method with three function evaluations using weight functions and they show that it is a member of the King family of fourth-order methods. Also, they obtain an eighth order optimal Newton-secant method. The authors proved the local convergence of the methods and apply the methods to solve a fourth order polynomial arising in ocean acidifications and study their dynamics. Cordero et al. 2013) suggested a new technique to obtain derivative-free methods with optimal order of convergence in the sense of the Kung-Traub conjecture for solving nonlinear smooth equations. Jaiswal 2013) developed a new modification of method free from derivatives by approximating the derivatives in the Newton-Steffensen third-order method by the central difference quotient. They proved that the method obtained preserves the order of convergence, without calculating any derivative. Herceg and Herceg 2013) derived a family of six sets of means based modifications of Newton s method for solving nonlinear equations. Each set is a parametric class of methods. Some wellknown methods of this family are 3 rd AM, 3 rd HM, 3 rd GM and 3 rd P M.

39 21 Ardelean 2013) introduced a new third-order iterative method for solving nonlinear equations. This method converges on larger intervals than some other similar known methods. A comparison between the new method and other third-order methods is presented by using the basins of attraction for the real roots of some test problems. Zhou et al. 2013) presented two families of higher order iterative methods for solving multiple roots of nonlinear equations. One is of order three and the other is of order four. The third order family contains several iterative methods known already. The presented fourth order iterative family has optimal order. Local convergence analysis and some special cases of the presented families are discussed. Behl and Kanwar 2013) derived a one-parameter family of Chebyshev s method for finding simple roots of nonlinear equations. They developed a new fourth-order variant of Chebyshev s method for this family without adding any functional evaluation to the previously used three functional evaluations. Chebyshev-Halley type methods are seen as the special cases of the proposed family. New classes of higher order multipoint iterative methods free from second-order derivative are also derived from semi-discrete modifications of cubically convergent methods. Fourthorder multipoint iterative methods are optimal, since they require three functional evaluations per step. Soleymani et al. 2013) suggested a general class of multi-point iteration methods with various orders. The error analysis is presented to prove the convergence order. Also, a thorough discussion on the computational complexity of the new iterative methods is given. Abad et al. 2013) developed two iterative methods of order four and five for solving nonlinear systems of equations. An application of this method to solve the nonlinear system for the Global Positioning System GPS) and several academic nonlinear systems are tested. Petkovic et al. 2013, 2014) developed a family of Jarratt s type two-point methods which produces some existing and some new methods. Babajee 2014) has recently improved the 3 rd HM method to get a fourth order method for solving single nonlinear equation. This method is one of the members in the family of higher order multi-point iterative methods based on power mean given in the present thesis and Babajee et al. 2015a).

40 22 Jaiswal 2014) suggested a new class of third and fourth order iterative methods for solving nonlinear equations. The multivariate extension of some of these methods is also deliberated. The efficiency of the new fourth order method over some existing fourth order methods is also confirmed by basins of attraction analysis. Sharma and Gupta 2014) presented a three step iterative method of convergence order five for solving systems of nonlinear equations. The methodology is based on a two step method with cubic convergence given in Homeier 2004). Computational efficiency in its general form is discussed and a comparison between the efficiency of the proposed technique and existing ones is made. Sharma and Arora 2014) presented a family of three-point iterative methods based on Newton s method for solving nonlinear equations. In terms of computational cost, this family requires four function evaluations and has convergence order eight. Therefore, it is optimal in the sense of Kung-Traub hypothesis. Singh and Gupta 2014) gave two new three step higher order iterative methods by modifying two third-order two step methods for solving nonlinear equations. This is done by introducing Newton s method as a third step in both methods. The derivative in the third step is approximated by using linear interpolation and divided differences leading to new sixth order methods. Singh and Jaiswal 2014) proposed a family of third-order and optimal fourth-order iterative methods. These methods are constructed through weight function concept. Sharma et al. 2014) presented a derivative free two step family of fourth order methods for solving systems of nonlinear equations using the well known Traub- Steffensen method in the first step. In order to determine the local convergence order, they apply the first order divided difference operator for functions of several variables and direct computation by Taylor s expansion. Cordero et al. 2014) presented a family of optimal iterative methods for solving nonlinear equations with eighth-order convergence. These methods are based on Chun s fourth-order method. They use the Ostrowski s efficiency index and several numerical tests in order to compare the new methods with other known eighth-order ones. They also extend this comparison to the dynamical study of the different methods. Zheng et al. 2015) introduced a modified Chebyshev-like method with order

41 23 four and study the semilocal convergence of the method by using majorizing functions for solving nonlinear equations in Banach spaces. They proved an existenceuniqueness theorem and give a-priori error bounds which demonstrates the R-order of the method. Moreover, the local convergence of this method is also analyzed. Cordero et al. 2015) suggested a parametric family of iterative methods for solving nonlinear systems which have third order convergence. The numerical section is devoted to estimating the solution of the classical Bratu problem by transforming it as a nonlinear system by using finite differences, and solving it with different elements of the iterative family. Shah and Noor 2015) presented some new classes of iterative methods for solving nonlinear equations by using an auxiliary function together with decomposition technique. These new methods include the Halley method and its variant forms as special cases. Ullah et al. 2015) considered the multi-step iterative method to solve systems of nonlinear equations. Since the Jacobian evaluation and its inversion are expensive, in order to achieve a better computational efficiency, they compute Jacobian and its inverse only once in a single cycle of the proposed multi-step iterative method. Actually the involved systems of linear equations are solved by employing the LU-decomposition, rather than inversion. The base iterative method has convergence order five and then they use a matrix polynomial of degree two to design a multi-step method. Each inclusion of single step in the base method will increase the convergence order by three. The general expression given for the order is 3s 1, where s 2 is the number of steps of the multi-step iterative method. Computational efficiency is also discussed in comparison with other existing methods. In this chapter, we have reviewed the historical development of higher order multipoint iterative methods without memory from the period of Traub 1964) to the present date. Mainly we have pointed out the order of convergence of the methods, any novelty in the idea of the proposed methods and efficiency index were discussed. Although, efforts have been made to review lot of literature, due to the limited time available, I may have missed certain important papers in this area for which I apologise the concerned authors for their very good contribution and development of this field of research.

42 Chapter 2 Some new variants of Newton s method of order three, four and five In this chapter, we propose new Newton s type methods for solving scalar nonlinear equation fx) = 0 with convergence order three, four and five using the idea of Simpson s quadrature rule and power mean. The main goal and motivation in the construction of new methods is to get better efficiency index. In other words, it is desirable to attain convergence order as high as possible with fixed number of function evaluations per iteration. In Section 2.1, we present the construction of the new methods. Convergence analysis to obtain the truncation error by using Taylor series is presented in Section 2.2. In Section 2.3, numerical experiments and their results are tabulated. Comparison of the efficiency indices and concluding remarks are given in the last section. Outcome of the new methods are given in concluding in remarks which shows that our methods are efficient. 2.1 Construction of new methods The Newton I.F. can be constructed from its local linear model, of the function fx), which is the tangent drawn to the function fx) at the current point x. The local linear model at x is Lx) = fx) + f x)ψ x). 2.1) This linear model can be interpreted from the viewpoint of Newton s theorem fψ) = fx) + I NT, where I NT = 24 ψ x f t)dt. 2.2)

43 25 Weerakoon and Fernando 2000) showed that if the indefinite integral I NT is approximated by the rectangle I NT = f x)ψ x), the Newton s I.F. is obtained by setting Lx) = 0. Hasanov et al. 2002) obtained a new linear model L 1 x) = fx) + 1 x + ψ ) ) f x) + 4f + f ψ) ψ x). 2.3) 6 2 ) ) by approximating I NT by Simpson s formula I NT f 1 x) + 4f x+ψ + f ψ). 6 2 Solving the new linear model, they obtained the implicit I.F. ψx) = x 6fx) f x) + 4f x+ψ 2 ). 2.4) + f ψ) Using the Newton I.F. to estimate ψ with in f, they obtained a third order Simpson s variant method 3 rd SV ) 6fx) ψ 3 rd SV x) = x ) f x) + 4f x+ψ2 nd NM x) ) + f ψ 2 NMx)) nd This 3 rd SV I.F. has cubic convergence, requires one function and three first order derivative evaluations resulting in a decrease in EI. Therefore, 3 rd SV I.F. is computationally more expensive. Also, 3 rd SV I.F. is a special case of family of third order I.F. given in Frontini and Sormani 2003). Further, Cordero and Torregrosa 2007) extended 2.5) to solve systems of nonlinear equation. However, the method 2.5) is not efficient see Babajee and Dauhoo 2006), Babajee 2015b)). Babajee 2015b) derived a new fifth order method with four function evaluations by using a weight function in 2.5) and it is given by 6fx) ψ 5 DKRx) = x ) th f x) + 4f x+ψ2 Hτ) 2.6) nd NM x) + f 2 ψ 2 NMx)) nd where Hτ) = τ 1)2 3 8 τ 1)3, τ = f ψ 2 nd NM x)) f x). To obtain a new class of method, we rewrite equation 2.5) as follows ψ 3 rd SV x) = x 3fx) f x)+f ψ 2 nd NM x)) + 2f 2 x+ψ2 ). 2.7) nd NM x) 2 Replacing arithmetic mean with power means in equation 2.7), we have obtained a new class of third order method 3 rd SP M given in Jayakumar and Madhu 2013) ψ 3 rd SP Mx) = x where Dx, β) = signf x)) 3fx) Dx, β) + 2f x+ψ2 ), 2.8) nd NM x) 2 ) 1 β, β R. f x) β +f ψ 2 nd NM x))) β 2

44 26 Remark The result provided in Jayakumar and Madhu 2013) has third order convergence, whereas for the value of β = 5 we get fourth order convergence which will be shown in the next section. Fourth order simpson s power mean method 4 th SP M) is a special case of I.F. 2.8) and it is given by ψ 4 th SP Mx) = x 3fx) Dx, 5) + 2f x+ψ2 nd NM x) 2 ). 2.9) Fifth order simpson s power mean method 5 th SHM): By improving 3 rd SP M, a fifth order method is obtained by putting β = 1 in 2.8) along with weight function, we have 3fx) ψ 5 SHMx) = x ) th Dx, 1) + 2f x+ψ2 Hτ), 2.10) nd NM x) 2 ) where Dx, 1) = f x) f ψ 2 nd NM x)) τ 1) τ 1)3, τ = f ψ 2 nd NM x)) f x). represents harmonic mean and Hτ) = 2.2 Convergence Analysis of the methods Theorem Let x D, be a simple zero of a sufficiently differentiable function f : D R R for an open interval D. If x 0) is sufficiently close to x, then the method 2.8) has cubic convergence. Proof. Let e = x x and x be a simple zero of the function fx) = 0. By expanding fx) and f x) by Taylor series about x, we obtain fx) = f x ) e + c 2 e 2 + c 3 e 3 + c 4 e 4 + c 5 e 5 + c 6 e ) 2.11) and f x) = f x ) 1 + 2c 2 e + 3c 3 e 2 + 4c 4 e 3 + 5c 5 e 4 + 6c 6 e ) 2.12) where c j = f j) x ), j = 2, 3, 4... We have j!f x ) ψ 2 nd NMx) = x + c 2 e 2 2c 2 2 c 3 )e 3 + 4c 32 7c 2 c 3 + 3c 4 ) e )

45 27 Expanding f ψ 2 nd NMx)) by Taylor s series about x and using 2.13), we get f ψ 2 nd NMx)) = f x ) 1 + 2c 2 2e 2 + 4c 2 c 3 c 3 2)e ) 2.14) Further, we have x + ψ 2 nd NMx) 2 = x e c 2e 2 c 2 2 c 3 )e ) Again expand f x+ψ 2 nd NM x) ) by Taylor s series about x and use 2.15) to get 2 x + f ψ2 ) NMx) nd = f x ) 1 + c 2 e + c c 3)e 2 + 2c c 2c ) 2.16) 2 c 4)e From equations 2.12) and 2.14) respectively, we have ) f x) β = f x ) 1 β + 2βc 2 e + 3βc 3 + 2ββ 1)c 2 2)e ) f ψ 2 NMx)) β = f x ) 1 β + 2βc 2 2e nd 2.17) 2.18) From 2.17) and 2.18), we get Adding 2.16) and 2.19), we get ) 1/β f signf x) β + f ψ x)) 2 NMx)) β nd 2 = f x ) 1 + c 2 e + 1 ) 2 c2 2 + βc c 3 )e ) f signf x) β + f ψ x)) 2 NMx)) β 1/β x + nd + 2f ψ2 ) NMx) nd 2 2 = f x ) 3 + 3c 2 e + 1 ) 5 + β)c c 3 e ) ) 33 + β)c β)c 2 c 3 + 6c 4 e ) 2.20) If equations 2.11) and 2.20) are substituted in 2.8), we have ψ 3 classx) = x e 1 ) 5 + β)c 2 rd 2 e ) ) 6 6 c β)c β)c 3 e ) Finally, we have ψ 3 rd SP Mx) x = 1 6 ) 5 + β)c 2 2 e 3 + Oe 4 ). 2.22) Hence, the method 2.8) has cubic convergence for any β R \ { 5}.

46 28 Remark For β = 5 in 2.22) produces fourth order convergence of the method 4 th SP M 2.9). Theorem Let x D, be a simple zero of a sufficiently differentiable function f : D R R for an open interval D. If x 0) is sufficiently close to x, then the method 2.10) has fifth order convergence. Proof. Expanding f ψ 2 nd NMx)) by Taylor s series about x and using 2.13), we get f ψ 2 NMx)) = f x ) 1 + 2c 2 2e 2 + 4c nd 2 c 3 c 3 2)e 3 ) 2.23) + c 2 8c c 2 c 3 + 6c 4 )e Further, we have x + ψ 2 nd NMx) 2 = x e c 2e 2 c 2 2 c 3 )e ) 4c 32 7c 2 c 3 + 3c 4 e ) Again expand f x+ψ 2 nd NM x) 2 ) by Taylor s series about x and use 2.24) to get x + f ψ2 ) NMx) nd = f x ) 1 + c 2 e + c c 3)e 2 + 2c c 2c c ) 4 e 3 + 4c c2 2c 3 4 Using equations 2.23) and 2.12), we have + 3c c 2c c 5 ) e ) ) τ = f ψ 2 nd NMx)) f x) = 1 2c 2 e + 6c 2 2 3c 3 )e 2 44c 3 2 4c 2 c 3 + c 4 )e ) + 40c c 2 2c 3 + 9c c 2 c 4 5c 5 )e Hτ) = 1 + 2c2 2 3 e2 + 5c c ) 2c 3 e c4 2 37c 2 2c 3 + 9c c 2 c 4 )e )

47 29 We have ) Dx, 1) = f x) + 1 f x ux)) = f x ) 1 + c 2 e + 3 ) 2 c 3e 2 + c 32 c 2 c 3 + 2c 4 e 3 ) + 3c c 2 2c c2 3 c 2 c ) 2 c 5 e ) Using equations 2.28) and 2.25), x + Dx, 1) + 2f ψ2 ) NMx) nd 2 = f x ) c 2 e + 2c c 3 )e 2 + 3c c 2 c 3 + 3c 4 ) e 3 ) 5c c2 2c 3 + 8c 2 c c c 5 ))e ) Finally, by using 2.11), 2.29) and 2.27) in 2.10), we have ψ 5 th SHMx) x = c 42 16c 22c 3 6c 23 + c 5 ) e 5 + Oe 6 ) 2.30) Hence, the proposed new method 2.10) has fifth order convergence. 2.3 Numerical Examples In this section, we give numerical results on some examples to compare the efficiency of the proposed methods 3 rd SP M, 4 th SP M, 5 th SHM with 2 nd NM, 3 rd AM and 5 th DKR methods. Numerical computations have been carried out in the MATLAB software, rounding to 500 significant digits. Depending on the precision of the computer, we use the stopping criteria for the iterative process error = x k+1) x k) < ɛ, where ɛ = and N is the number of iterations required for convergence. d ψ represents the total number of function evaluations. The ACOC denoted as ρ k is given by ρ k = ln xk+1) x k) )/x k) x k 1) ) ln x k) x k 1) )/x k 1) x k 2) ).

48 30 The following nonlinear equations are taken as examples: f 1 x) = sin2 cos x) 1 x 2 + e sinx3), f 2 x) = xe x2 sin 2 x + 3 cos x + 5, x = x = f 3 x) = x 3 + 4x 2 10, x = f 4 x) = sinx) + cosx) + x, x = f 5 x) = x 2 sin x, x = Table 2.1: Numerical results for f 1 x), x 0) = 0.7 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e In Tables , we have compared the number of iterations, convergence order, function evaluations, total number of function evaluations for convergence, approximated computational order of convergence, absolute error and CPU time. Table 2.1 displays the results for f 1 x), x 0) = 0.7. It is observed from this table, proposed methods and other comparable methods agree with the computational order of convergence ρ k ) and the method 5 th SHM converges with less number of iterations and least error. Similarly, the following Tables 2.2 to 2.10 display the results for functions f 1 x) to f 5 x) with different starting points. The observation of the results in these tables show a similar pattern as found in Table 2.1.

49 31 Table 2.2: Numerical results for f 1 x), x 0) = 0.9 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e Table 2.3: Numerical results for f 2 x), x 0) = 1.0 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e

50 32 Table 2.4: Numerical results for f 2 x), x 0) = 1.7 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e Table 2.5: Numerical results for f 3 x), x 0) = 1.0 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e

51 33 Table 2.6: Numerical results for f 3 x), x 0) = 1.6 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e Table 2.7: Numerical results for f 4 x), x 0) = 0.2 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e

52 34 Table 2.8: Numerical results for f 4 x), x 0) = 0.6 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e Table 2.9: Numerical results for f 5 x), x 0) = 1.6 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e

53 35 Table 2.10: Numerical results for f 5 x), x 0) = 2.0 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 1.33) e rd SP M 2.8) β = e β = e β = e β = e β = e th SP M 2.9) e th SHM 2.10) e th DKR 2.6) e Concluding Remarks We have compared the efficiency index of 2 nd NM, 3 rd AM, 5 th DKR along with proposed methods in Table 2.11 and it is observed that 5 th SHM I.F. has better efficiency index. Hence, we conclude that the proposed 5 th SHM I.F. performs better Table 2.11: Comparison of Efficiency Index Methods p f f d EI T EI O 2 nd NM 1.19) rd AM 1.33) rd SP M 2.8) th SP M 2.9) th SHM 2.10) th DKR 2.6) than 2 nd NM and can be a competitor to 2 nd NM and other methods of equivalent order available in the literature. However, this method is not optimal.

54 Chapter 3 Class of modified Newton s method having fifth and sixth order convergence In this chapter, we propose new Newton s type methods for solving scalar nonlinear equation fx) = 0 with convergence order five and six using the idea of power mean. The main goal and motivation in the construction of new methods is to get better efficiency index. In other words, it is desirable to attain convergence order as high as possible with fixed number of function evaluations per iteration. In Section 3.1, the construction of the new methods is presented. Truncation error using Taylor series and the convergence analysis are derived in Section 3.2. In Section 3.3, numerical examples and their results are tabulated. Comparison of the efficiency indices and concluding remarks are given in the last section. Outcome of the new methods are given in concluding in remarks which shows that our methods are efficient. 3.1 Construction of new methods A three step fifth order I.F. In continuation to the equations 2.1) and 2.2) considered in section 2.1, Weerakoon and Fernando 2000) also gave a new linear model L 2 x) = fx) f x) + f ψ)) ψ x) 3.1) 36

55 37 by approximating the definite integral with the trapezoidal rule I NT 1 2 f x) + f ψ)) ψ x), they obtained an implicit I.F. ψx) = x 2fx) f x) + f ψ). 3.2) Using the Newton I.F. to compute f ψ) by f ψ 2 nd NMx)), they obtained ψ 3 rd AMx) = x 2fx) f x) + f ψ 2 nd NMx)). 3.3) Ozban 2004) used the harmonic mean instead of the arithmetic mean in 3.3) and obtained Harmonic mean Newton s method having cubic convergence ) fx) f x) + f ψ 2 NMx)) nd ψ 3 HMx) = x rd f x)f ψ 2 NMx)) nd. 3.4) Lukić and Ralević 2005) used the geometric mean instead of the arithmetic mean in 3.3) and obtained Geometric mean Newton s method having cubic convergence ψ 3 rd GMx) = x fx) signf x)) f x)f ψ 2 nd NMx)). 3.5) Adding one more Newton step in 3 rd GM I.F., we have obtained a new I.F. having fifth order convergence with two functions and two derivative evaluations proposed in Madhu and Jayakumar 2015a) ψ 5 th GMx) = ψ 3 rd GMx) fψ 3 rd GMx)) f ψ 2 nd NMx)), 3.6) which may be called as 5 th GM method New class of I.F. with order six Xiaojian 2007) used the power mean instead of the arithmetic mean in 3.3) they obtained class of Newton s method having cubic convergence with three function evaluations ψ 3 rd P Mx) = x fx) Dx, β), Dx, β) = signf x)) f x) β + f ψ 2 nd NMx))) β 2 ) 1 β 3.7) Madhu and Jayakumar 2014) suggested a new class of three step I.F. having sixth order convergence ψ 6 th P MMx) = ψ 3 rd P Mx) fψ 3 rd P Mx)) f ψ 3 rd P Mx)) 3.8)

56 38 This method requires five function evaluations to converge per iterative cycle. The efficiency index for 6 th P MM method EI O = is better than Newton s method but lower than 3 rd PM whose EI O = In order to improve the efficiency, we approximate f ψ 3 rd P Mx)) by a combination of already computed function values. We use an approximation by linear interpolation of f ψ 3 rd P Mx)) that does not introduce any new function evaluation, by using linear interpolation on two points x, f x)) and ψ 2 NMx), f ψ nd 2 NMx))). Thus, nd f ψ 3 rd P Mx)) = ψ 3 rd P Mx) x ψ 2 NMx) x f ψ 2 NMx)) + ψ nd 3rd P Mx) ψ NMx) 2nd f x). x ψ nd 2 NMx) nd Then f ψ 3 rd P Mx)) = f f x) 2 f x)f ψ x) 2 NMx)) nd signf x)) f x) β +f ψ 2 nd NM x))) β 2 ) 1 β 3.9) By substituting this approximation f ψ 3 rd P Mx)) in equation 3.8), we obtain a new class of I.F. 6 th P MM) with two functions and its two first derivative evaluations see Madhu and Jayakumar 2014)) ψ 6 th P MMx) = ψ 3 rd P Mx) f x) fψ 3 rd P Mx)) f x) 2 f x)f ψ 2 nd NM x)) f signf x) β +f ψ2 nd x))) β x)) NM 2 ) 1 β 3.10) where β R. The special case with β = 1 is already reported in Parhi and Gupta 2008) and the case β = 1 is found in Cordero et al. 2010a). 3.2 Convergence Analysis of the methods Theorem Let a sufficiently smooth function f : D R R has a simple root x in the open interval D. If x 0) is sufficiently close to x, then the method 3.6) is of local fifth-order convergence and it satisfies the error equation ψ 5 th GMx) x = c 2 2c c 3 )e 5 + Oe 6 ) Proof. Taylor expansion of fx) and f x) about x gives [ ] fx) = f x ) e + c 2 e 2 + c 3 e 3 + c 4 e 4 + c 5 e )

57 39 and so that [ ] f x) = f x ) 1 + 2c 2 e + 3c 3 e 2 + 4c 4 e 3 + 5c 5 e ) ψ 2 nd NMx) = x + c 2 e 2 2c 2 2 c 3 )e 3 7c 2 c 3 4c 3 2 3c 4 )e ) Again expanding f ψ 2 nd NMx)) about x gives f ψ 2 nd NMx)) = f x )[1 + 2c 2 2e 2 + 4c 2 c 3 c 3 2)e ] 3.14) Using equations 3.12) and 3.14), we get signf x)) f x)f ψ 2 nd NMx)) = f x )[1 + c 2 e c c 3 )e c 2c 3 + 4c 4 c 3 2)e ] 3.15) Now using equations 3.11) and 3.15), we have ψ 3 rd P Mx) = x c2 2 + c 3 )e 3 + c c 4 )e c 2 c c 3 c 22 23c 42 15c c 5 ) e ) Again expanding fψ 3 rd P Mx)) about x gives [ 1 fψ 3 rd P Mx)) = f x ) 2 c2 2 + c 3 )e 3 + c c 4 )e ) ] 3.17) 24c 2 c c 3 c 22 23c 42 15c c 5 e Finally, using equations 3.14), 3.16) and 3.17) in 3.6), we get ψ 5 th GMx) x = c 2 2c c 3 )e 5 + Oe 6 ) 3.18) Thus, it is concluded that 5 th GM method has fifth order of convergence. Theorem Let a sufficiently smooth function f : D R R has a simple root x in the open interval D. If x 0) is sufficiently close to x, then the class of method 3.10) is of local sixth-order convergence and it satisfies the error equation ψ 6 th P MMx) x = β + β2 )c c 2c β)c 3 2c 3 )e 6 + Oe 7 ).

58 40 Proof. From equations 3.12) and 3.14) and by simplification, we have ) f signf x) β + f ψ x)) 2 NMx)) β 1 ) nd β = f x )[ 1 + c 2 e c2 2 + βc c 3 )e 2 + 2c β + 1)c 2c 3 1 ] 2 3β + 1)c3 2)e ) and We have f f x) 2 f x)f ψ x) 2 NMx)) nd signf f x)) x) β +f ψ 2 nd NM x))) β 2 [ ] = f x ) 1 2c 2 c 3 c 3 2 βc 3 2)e ) 1 β 3.20) ψ 3 rd P Mx) = x c2 2 + βc c 3 )e ) Using equations 3.21) and 3.20) in 3.10) and upon simplification, we have ψ 6 th P MMx) x = β + β2 )c c 2c β)c 3 2c 3 )e 6 + Oe 7 ). 3.22) Thus, it is found that 6 th PMM method has sixth order convergence. 3.3 Numerical Examples In this section, we give numerical results on some some examples to compare the efficiency of the proposed methods 5 th GM, 6 th P MM with 2 nd NM, 3 rd AM, 3 rd HM, 3 rd GM and 3 rd P M methods. Numerical computations have been carried out in the MATLAB software, rounding to 500 significant digits. Depending on the precision of the computer, we use the stopping criteria for the iterative process error = x k+1) x k) < ɛ, where ɛ = and N is the number of iterations required for convergence. d ψ represents the total number of function evaluations. The ACOC denoted as ρ k is given by ρ k = ln xk+1) x k) )/x k) x k 1) ) ln x k) x k 1) )/x k 1) x k 2) ). The same examples f 1 x) - f 5 x) considered in section 2.3 have been considered here for numerical experiments.

59 41 Table 3.1: Numerical results for f 1 x), x 0) = 0.7 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = e β = e Table 3.2: Numerical results for f 1 x), x 0) = 0.9 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = e β = e

60 42 Table 3.3: Numerical results for f 2 x), x 0) = 1.0 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = e β = e Table 3.4: Numerical results for f 2 x), x 0) = 1.7 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = 1 Div Div Div Div Div Div Div β = e

61 43 Table 3.5: Numerical results for f 3 x), x 0) = 1.0 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = e β = e Table 3.6: Numerical results for f 3 x), x 0) = 1.6 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = e β = e

62 44 Table 3.7: Numerical results for f 4 x), x 0) = 0.2 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = e β = e Table 3.8: Numerical results for f 4 x), x 0) = 0.6 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = e β = e

63 45 Table 3.9: Numerical results for f 5 x), x 0) = 1.6 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = e β = e Table 3.10: Numerical results for f 5 x), x 0) = 2.0 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd AM 3.3) e rd GM 3.5) e rd HM 3.4) e th GM 3.6) e th P MM 3.10) β = e β = e β = e β = e β = e

64 46 In Tables , we have compared the number of iterations, convergence order, function evaluation per iteration, total number of function evaluations, computational order of convergence, error and CPU time. In all the examples, our proposed class of 6 th P MM I.F. has better results when compared with 2 nd NM, 3 rd AM, 3 rd GM, 3 rd HM, 5 th GM. Table 3.1 displays the results for f 1 x), x 0) = 0.7. It is observed from this table, proposed methods and other comparable methods agree with the computational order of convergence ρ k ) and the method 6 th P MM converges with less number of iterations and least error. Similarly, the Tables 3.2 to 3.10 display results for functions f 1 x) to f 5 x) with different starting points. The observation of the results in these tables show a similar pattern as found in Table 3.1. However, in Table 3.4, for the choice of β = 1, 6 th P MM method produces divergent result. Next, we attempt to find the best integer value of β in [ 20, 20] for the class of 3 rd P M and 6 th P MM I.F. that produces minimum number of iterations and its corresponding error. Table 3.11 displays the results for f 1 x)-f 5 x) with suitable initial points and the best integer value for β [ 20, 20] along with number iterations and error. For all the examples, 6 th P MM method produces less number of iterations with least error when compared with 3 rd P M method. To find the best integer value of β in both the class of I.F. leads us to determine the best I.F. Next, the plots for iteration and least error for each integer value of β [ 20, 20], that is 41 different values of β are given. We have taken f 2 x) for verifying the methods 3 rd P M and 6 th P MM. Figures display comparison of number of iterations and error for different β values for f 2 x), x 0) = 1.7, for 3 rd P M I.F. and 6 th P MM I.F. respectively. We find that all the members in the 3 rd P M class are convergent whereas all the members in the 6 th P MM methods are convergent except for β = 1. This is one of the advantages of finding best integer value of β in [ 20, 20].

65 Iteration Iteration β Figure 3.1: Comparison of iterations for f 2 x), x 0) = 1.7 for 3 rd P M Error Error β Figure 3.2: Comparison of error for f 2 x), x 0) = 1.7 for 3 rd P M

66 Iteration Iteration Divergent in 3 rd iteration, β = β Figure 3.3: Comparison of iterations for f 2 x), x 0) = 1.7 for 6 th P MM Error Error Divergent in β = β Figure 3.4: Comparison of error for f 2 x), x 0) = 1.7 for 6 th P MM

67 49 Table 3.11: Results for the best value of β [ 20, 20] for f 1 x)-f 5 x) 3 rd P M 6 th P MM fx) x 0) β N error β N error f 1 x) e e e e 265 f 2 x) e e e e 086 f 3 x) e e e e 262 f 4 x) e e e e 064 f 5 x) e e e e Concluding Remarks We have compared the efficiency index of 2 nd NM and 3 rd P M I.F. along with proposed methods 5 th GM and 6 th P MM in Table It is observed that, proposed 5 th GM and 6 th P MM I.F. have better efficiency index than the compared methods. Hence, we conclude that the proposed 5 th GM and 6 th P MM I.F. perform better than Table 3.12: Comparison of Efficiency Index Methods p f f d EI T EI O 2 nd NM 1.19) rd P M 3.7) th GM 3.6) th P MM 3.10) nd NM and can be a competitor to 2 nd NM and other methods of equivalent order available in the literature.

68 Chapter 4 Two families of Newton-type methods having fourth and sixth order convergence In this chapter, we propose two new Newton-type methods for solving scalar nonlinear equation fx) = 0 with convergence order four and six using the idea of power mean and weight functions. The main goal and motivation in the construction of new methods is to get better efficiency index. In other words, it is desirable to attain convergence order as high as possible with fixed number of function evaluations per iteration. In the case of multipoint methods without memory, this demand is closely connected with the optimal order stated in the Kung-Traub conjecture. In Section 4.1, we present the construction of the new methods. Convergence analysis to obtain the truncation error by using Taylor series is derived in Section 4.2. In Section 4.3, numerical experiments and their results are tabulated. Comparison of the efficiency indices and concluding remarks are given in the last section. Outcome of the new methods are given in concluding in remarks which shows that our methods are efficient. 4.1 Construction of new methods Family of Optimal fourth order I.F. We recall that 3 rd P M I.F. 3.7) discussed in Chapter 3 is not an optimal method. To improve this method as an optimal method, we proposed a new 4 th MJ I.F. as 50

69 51 follows: where 2 1/β fx) ψ 4 MJx) = x th ) 1/β [Hτ) Gt)], 4.1) signf x)) f x) β + f y) β y = x 2 3 ux), τ = f y) f x), t = ux). Expanding Hτ) about 1 and Gt) about 0, we have Hτ) Gt) = H1)G0) + τ 1)H 1)G0) + + τ 1)3 H 1)G0) + 6 Choosing H, G and their derivatives as follows τ 1)2 H 1)G0) 2 t 0)3 H1)G 0) H1) = 1, G0) = 1, H 1) = 1/4, G 0) = 0 = G 0), H 1) = β + 5, H 1) = G 0) = 1, 8 we get a weight function as Hτ) Gt) = τ 1) + β ) τ 1) 2 1 ) τ 1) 3 + t 3. 6 This new method 4.1) has one function and two derivative evaluations. Hence, this method reaches optimal convergence with high efficiency Kung and Traub 1974) Family of sixth order I.F. In order to improve the method 4.1) by increasing the order of convergence to six 6 th MJ), we define the following I.F.: 2 1/β fψ ψ 6 MJx) = ψ th 4 MJx) 4 MJx)) th Kτ), 4.2) th signf x)) f x) β + f y) β 1/β ) where Kτ) is obtained by expanding it about τ = 1 as follows: Kτ) = K1) + τ 1)K 1) + τ 1)2 K 1) By choosing K1) = 1, K 1) = 1, K 1) = K 1) =... = 0, we get Kτ) = 2 τ. This new method 4.2) has two functions and two derivative evaluations. Hence, in the sense of Kung and Traub 1974), this method is not optimal.

70 52 Remark The cases β = 1, 1, 2 in 4.1) and 4.2) may be respectively called as 4 th AM and 6 th AM arithmetic mean method), 4 th HM and 6 th HM harmonic mean method) and 4 th SM and 6 th SM square mean method). The case β 0 produces 4 th GM and 6 th GM geometric mean method). These two new family of I.F. 4.1) and 4.2) have been developed in Madhu and Jayakumar 2015b). 4.2 Convergence Analysis of the methods Theorem For sufficiently smooth function f : D R R having a simple root x in the open interval D. If x 0) is sufficiently close to x, then the 4 th MJ family of I.F. 4.1) has local fourth-order convergence. Proof. Taylor expansion of fx) and f x) about x gives [ ] fx) = f x ) e + c 2 e 2 + c 3 e 3 + c 4 e 4 + c 5 e 5 + c 6 e ) and [ ] f x) = f x ) 1 + 2c 2 e + 3c 3 e 2 + 4c 4 e 3 + 5c 5 e 4 + 6c 6 e ) so that ) ) t = ux) = e c 2 e c 22 c 3 e 3 + 7c 2 c 3 4c 32 3c 4 e 4 + 8c 42 20c 22c ) 3 + 6c c 2 c 4 4c 5 e c c 3 2c 3 4.5) and 33c 2 c c 2 2c c 3 c c 2 c 5 5c 6 )e y = x + e c 2e 2 4 ) c 22 c 3 e ) 4c 32 7c 2 c 3 + 3c 4 e c 42 10c 22c ) 3 + 3c c 2 c 4 2c 5 e c c c c 2 c c 2 2c 4 17c 3 c 4 13c 2 c 5 + 5c 6 )e ) Again, the Taylor expansion of f y) about x gives [ ) ) f y) = f x ) 1 + 2c 3 2e + 4c c 3 3 e c c 27 2 c 3 + c 4 e c c 22c ) c c 2 c 4 + 5c 5 e c ) ] 1080c 3 2c c 2 c c 2 2c 4 234c 3 c 4 236c 2 c 5 c 6 )e

71 53 Using equations 4.4) and 4.7), we have τ = c 2e + 4c ) 3 c 3 e c c 2c ) 27 c 4 e c c 22c ) c c 2 c 4 100c 5 e c ) 3186c 3 2c c 2 c c 2 2c 4 567c 3 c 4 453c 2 c c 6 )e 5. Also, we have f x) β + f y) β ) 1/β 2 [ = f x ) c 2e + 1 ) 22 + β)c c 3 e β)c ) β)c 2 c c 4 e β + 6β 2 4β 3 )c ) β)c 2 2c β)c 2 c β)c c 5 ) e β 3β 2 + 2β 3 )c β + 12β 2 8β 3 )c c 3 4.9) β)c 2 2c c 3 c 4 + 8βc 4 ) + c β)c 2 3 ] β)c 5 ) + 366c 6 )e From 4.3) and 4.9), we get 2 1/β fx) ) 1/β signf x)) f x) β + f y) β = e 1 3 c 2e 2 2 ) βc c 3 e ) β)c β)c 2 c 3 29c 4 e β + 3β 2 + 2β 3 )c β)c c β)c 2 c 4 ) β)c c 5 ) e β + 87β β 3 )c β + 24β β 3 )c 3 2c β)c 2 2c 4 ) + 3c β)c β)c 2 3) β)c 3 c c 6 ) e )

72 54 We have Hτ) Gt) = c 2e + 1 ) 1 + 2β)c c 3 e ) β)c β)c 2 c c 4 ) e β)c β)c c 3 + c β)c 4 ) ) β)c c 5 ) e β)c β)c 3 2c 3 c β)c4) 2c β)c 2 3 ) β)c 5 ) + 6c c βc 4 ) + 121c 6 ) e ) Using 4.10) and 4.11) in 4.1), we obtain ) ψ 4 MJx) x = β)c 3 th 2 162c 2 c c 4 ) e β 3β 2 + 2β 3 )c ) + 15β)c 2 2c 3 486c c c 4 ) + 72c 5 e β 114β β 3 )c ) β 24β β 3 )c 3 2c 3 + 3c c 4 + β c 4 )) + 54c β)c 2 3 ) 45c 5 ) 543c c 4 ) 7c 6 ) e 6 + Oe 7 ). Thus, it is found that 4 th MJ method has fourth order convergence. Theorem For sufficiently smooth function f : D R R having a simple root x in the open interval D. If x 0) is sufficiently close to x, then the 6 th MJ family of I.F. 4.2) has sixth-order convergence.

73 55 Proof. Taylor expansion of fψ 4 th MJx)) about x, we have [ 1 fψ 4 MJx)) = f x ) th c c 4 ) + 72c 5 ) e ) β)c c 2 c c 4 ) e β 3β 2 + 2β 3 )c β)c 2 2c 3 486c β 114β β 3 )c β 24β β 3 )c 3 2c 3 + 3c c 4 + β 9 ] + 780c 4 )) + 54c β)c c 5 ) 543c c 4 ) 7c 6 ) )e 6 Then, we have 4.13) Kτ) = 2 τ = c 2e + 4c c 3)e 2 + Oe 3 ). 4.14) Using 4.9), 4.12), 4.13) and 4.14) in 4.2), we obtain ψ 6 MJx) x = β)c 2 th 2 9c 3 ) β)c c 2 c c 4 )) ) e 6 + Oe 7 ). 4.15) Thus, it is found that 6 th MJ method has sixth order convergence. 4.3 Numerical Examples In this section, we give numerical results on some test functions to compare the efficiency of the proposed family of 4 th MJ, 6 th MJ with 2 nd NM, 3 rd P M, 4 th JM, 4 th KA, 6 th CNM and 6 th SJ methods. Numerical computations have been carried out in the MATLAB software, rounding to 500 significant digits. Depending on the precision of the computer, we use the stopping criteria for the iterative process error = x k+1) x k) < ɛ, where ɛ = and N is the number of iterations required for convergence. d ψ represents the total number of function evaluations. The ACOC denoted as ρ k is given by ρ k = ln xk+1) x k) )/x k) x k 1) ) ln x k) x k 1) )/x k 1) x k 2) ).

74 56 In addition to the functions f 1 x) to f 5 x) considered in section 2.3, we also consider the following functions as examples: f 6 x) = x + 2)e x 1, x = f 7 x) = x cos x, x = f 8 x) = x 2 + sin x 5 ) 1 4, x = Table 4.1: Numerical results for f 1 x), x 0) = 0.9 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd P M 3.7) β = e β = e β = e β = e β = e th MJ4.1) β = e β = e β = e β = e β = e th MJ 4.2) β = e β = e β = e β = e β = e In Tables , we have compared the number of iterations, convergence order, function evaluations per iteration, total number of function evaluation, computational order of convergence, error and CPU time. Tables 4.1 and 4.2 display the results for f 1 x) and f 2 x) respectively for different starting point. The new family of methods and other comparable methods agree with computational order of convergence. The members of the 4 th MJ family converge in less number iterations and with least error than 3 rd P M family whereas both methods use the same number of function evaluations. Also, 6 th MJ family converges with less number of iterations and least error.

75 57 Table 4.2: Numerical results for f 2 x), x 0) = 1.7 Methods N p d d ψ ρ k error CPU 2 nd NM 1.19) e rd P M 3.7) β = e β = e β = e β = e β = e th MJ4.1) β = e β = e β = e β = e β = e th MJ 4.2) β = e β = e β = e β = e β = e Tables 4.3 and 4.4 display the results for f 3 x) and f 4 x) respectively for different starting point. The results are compared for best integer value of β in [ 20, 20] for the family of 3 rd P M, 4 th MJ and 6 th MJ along with 2 nd NM. It is observed that for f 3 x), 6 th MJ method produces convergence with less number of iterations and low error for the case β = 11. For f 4 x), 6 th MJ method produces convergence with less number of iterations and low error for the case β = 20. Tables display the results for f 1 x) to f 8 x) for different starting points. The results are compared for best integer value of β in [ 20, 20] for the family of 3 rd P M, 4 th MJ and 6 th MJ along with 2 nd NM, 4 th JM, 4 th KA, 6 th CNM and 6 th SJ. 4.4 Concluding Remarks We have compared the efficiency indices of some existing I.F. along with proposed family of methods in Table 4.8. It is found that 4 th MJ I.F. has good efficiency index

76 58 Table 4.3: Best value of β [ 20, 20] for f 3 x), x 0) = 1.6 Error N 2 nd NM 1.19) 3 rd P M 3.7) 4 th MJ4.1) 6 th MJ 4.2) β = 1 β = 11 β = 11 x 1 x 0) 1 2.1e e e e 001 x 2 x e e e e 005 x 3 x e e e e 033 x 4 x e e e e 202 x 5 x e e 109 x 6 x e 031 x 7 x e 063 ρ k Table 4.4: Best value of β [ 20, 20] for f 4 x), x 0) = 0.2 Error N 2 nd NM 1.19) 3 rd P M 3.7) 4 th MJ4.1) 6 th MJ 4.2) β =4 β =20 β = 20 x 1 x 0) 1 2.6e e e e 001 x 2 x e e e e 006 x 3 x e e e e 034 x 4 x e e e e 203 x 5 x e e 148 x 6 x e 048 x 7 x e 096 ρ k

77 59 Table 4.5: Comparison of results for best value of β for 3 rd P M with 2 nd NM f x 0) 2 nd NM 1.19) 3 rd P M 3.7) N error β N error f 1 x) e e e e 130 f 2 x) e e e e 071 f 3 x) e e e e 101 f 4 x) e e e e 060 f 5 x) e e e e 147 f 6 x) e e e e 086 f 7 x) e e e e 139 f 8 x) e e e e 085 Table 4.6: Comparison of results for best value of β for 4 th MJ f x 0) 4 th JM 1.42) 4 th KA 1.50) 4 th MJ 4.1) N error N error β N error f 1 x) e e e e e e 089 f 2 x) e e e e e 157 f 3 x) e e e e e e 194 f 4 x) e e e e e e 072 f 5 x) e e e e e e 082 f 6 x) e e e e e e 071 f 7 x) e e e e e e 056 f 8 x) e e e e e e 059

78 60 Table 4.7: Comparison of results for best value of β for 6 th MJ f x 0) 6 th CNM 1.51) 6 th SG 1.45), a=1 6 th MJ 4.2) N error N error β N error f 1 x) e e e e e e 255 f 2 x) e e e e e e 098 f 3 x) e e e e e e 133 f 4 x) e e e e e e 260 f 5 x) e e e e e e 258 f 6 x) e e e e e e 186 f 7 x) e e e e e e 277 f 8 x) e e e e e e 290 compared with 3 rd P M I.F. whereas both have same number of function evaluations. Hence, we conclude that the proposed 4 th MJ and 6 th MJ I.F. perform better than 2 nd NM and can be a competitor to 2 nd NM and other methods of equivalent order available in the literature. Further, 4 th MJ family of methods has optimal order of convergence.

79 61 Table 4.8: Comparison of Efficiency Index Methods p f f d EI T EI O 2 nd NM 1.19) rd P M 3.7) th JM 1.42) th KA 1.50) th CNM 1.51) th SG 1.45) th MJ 4.1) th MJ 4.2)

80 Chapter 5 Family of higher order multi-point iterative methods based on power mean In this chapter, we propose new Jarratt s type methods for solving scalar nonlinear equation fx) = 0 with convergence order four, six and twelve using the idea of power mean and weight functions. The main goal and motivation in the construction of new methods is to get better efficiency index. In other words, it is desirable to attain convergence order as high as possible with fixed number of function evaluations per iteration. In the case of multipoint methods without memory, this demand is closely connected with the optimal order stated in the Kung-Traub conjecture. In Section 5.1, we present the construction of the new methods. Convergence analysis to obtain the truncation error by using Taylor series is derived in Section 5.2. In Section 5.3, numerical experiments and their results are tabulated. In Section 5.4, we have studied basins of attraction for the proposed methods along with Newton s method. Comparison of the efficiency indices and concluding remarks are given in the last section. 62

81 Construction of new methods Family of Optimal fourth order I.F. Consider the I.F. 4.1) given in section By introducing a weight function Hτ, β) instead of Hτ) Gt), a new family of fourth order I.F. 4 th PM) is proposed: where ψ 4 th P Mx) = x fx) Hτ, β), 5.1) Ex, β) Ex, β) = f x) 2 1/β signf x))1 + τ β ) 1/β, τ = f x 2 3 ux)) Hτ, β) = τ 1) + β+5 8 ) τ 1) 2. f x), Family of sixth order I.F. By improving the order of convergence to six 6 th PM) by considering one more step in the method 5.1), we have ψ 6 th P Mx) = ψ 4 th P Mx) G 1 fψ 4 th P Mx)), G 1 = 2 τ Ex, β). 5.2) Family of twelfth order I.F. Recently Soleymani et al. 2012a) improved a sixth-order Jarratt method to a twelfthorder one. In a similar way, applying 2 nd NM on 5.2) and using Theorem 1.2.4, we obtain a new family of twelfth order I.F.: ψ 12 th P Mx) = ψ 6 th P Mx) fψ 6 th P Mx)) f ψ 6 th P Mx)). However, we need to compute two more function evaluations in the above method. To reduce one function evaluation, we estimate f ψ 6 th P Mx)) by the following polynomial: qt) = a 0 + a 1 t x) + a 2 t x) 2 + a 3 t x) 3, 5.3) which satisfies the following conditions qx) = fx), q x) = f x), qψ 4 th P Mx)) = fψ 4 th P Mx)), qψ 6 th P Mx)) = fψ 6 th P Mx)).

82 64 On implementing the above conditions on 5.3), we obtain four linear equations with four unknowns a 0, a 1, a 2 and a 3. From qx) = fx), q x) = f x), we get a 0 = fx) and a 1 = f x). To find a 2 and a 3, we solve the following equations: fψ 4 th P Mx)) = fx) + f x)ψ 4 th P Mx) x) + a 2 ψ 4 th P Mx) x) 2 +a 3 ψ 4 th P Mx) x) 3, fψ 6 th P Mx)) = fx) + f x)ψ 6 th P Mx) x) + a 2 ψ 6 th P Mx) x) 2 +a 3 ψ 6 th P Mx) x) 3. By applying divided differences, the above equations simplify to a 2 + a 3 ψ 4 th P Mx) x) = f[ψ 4 th P Mx), x, x] 5.4) a 2 + a 3 ψ 6 th P Mx) x) = f[ψ 6 th P Mx), x, x] 5.5) where f[y, x] = fy) fx) y x, f[y, x, x] = f[y, x] f x). y x Solving equations 5.4) and 5.5), we have 1 a 2 = f[ψ ψ 4 th P Mx) ψ 6 6 th P Mx) th P Mx), x, x]ψ 4 th P Mx) x) ) a 3 = f[ψ 4 th P Mx), x, x]ψ 6 th P Mx) x) ) 1 f[ψ ψ 4 th P Mx) ψ 4 6 th P Mx) th P Mx), x, x] f[ψ 6 th P Mx), x, x]., 5.6) Further, using equation 5.6) we have the estimation f ψ 6 th P Mx)) q ψ 6 th P Mx)) = a 1 + 2a 2 ψ 6 th P Mx) x) + 3a 3 ψ 6 th P Mx) x) 2 1 = f x)ψ ψ 4 th P Mx) ψ 4 6 th P Mx) th P Mx) ψ 6 th P Mx)) + 2f[ψ 6 th P Mx), x, x]ψ 4 th P Mx) x)ψ 6 th P Mx) x) + f[ψ 4 th P Mx), x, x] 3f[ψ 6 th P Mx), x, x])ψ 6 th P Mx) x) 2 ). Therefore, finally we propose a new family of twelfth order I.F. 12 th P M): ψ 12 th P Mx) = ψ 6 th P Mx) G 2 fψ 6 th P Mx)), G 2 = 1 q ψ 6 th P M x)). 5.7)

83 65 Remark These new family of methods 5.1), 5.2) and 5.7) have been recently proposed in Babajee et al. 2015a). 5.2 Convergence Analysis of the methods Theorem Let a sufficiently smooth function f : D R R has a simple root x in the open interval D. If x 0) is sufficiently close to x, then the 4 th P M family of I.F. 5.1) is of local fourth order convergence. Proof. Let e = x x. Taylor expansion of fx) and f x) about x gives [ ] fx) = f x ) e + c 2 e 2 + c 3 e 3 + c 4 e ) and [ ] f x) = f x ) 1 + 2c 2 e + 3c 3 e 2 + 4c 4 e so that ux) = e c 2 e 2 + 2c 2 2 c 3 )e 3 + 7c 2 c 3 4c 3 2 3c 4 )e Now x 2 3 ux) = x e c 2e c 3 4 ) 3 c 2 2 e c c 2c ) 5.9) 3 c 2 3 e , so that τ = c 2e + 4 c ) 3 c 3 e ) 40 = + 3 c 2c c 4 32 ) 3 c 2 3 e 3 + O e 4), Hτ, β) = c 2 e ) 9 β c ) 3 c 3 e β + 10 ) c 2 c ) 9 β 4 c ) 5.11) 27 c 4 e 3 and Ex, β) = f x ) c 2 e ) 9 β c ) 3 c 3 e β + 10 ) c 2 c β 16 ) c ) c 4 e 3 + O ) e 4). 5.12)

84 66 Substituting equations 5.8), 5.11) and 5.12) into equation 5.1), we obtain, after simplifications, ψ 4 th P Mx) x = C 4 th P Me 4 + O e 5), 10 C 4 th P M = 27 β + 89 ) c 3 2 c 2 c c ) Thus, it is found that 4 th PM method has fourth order convergence. Theorem Let a sufficiently smooth function f : D R R has a simple root x in the open interval D. If x 0) is sufficiently close to x, then the 6 th P M family of I.F. 5.2) is of local sixth order convergence. Proof. Using equations 5.10) and 5.12), we have G 1 = 2 τ Ex, β) = f x CG1 e 2 + O e 3)), C G1 = 1 ) β) c c 3. Using Theorem with p = 4 and q = 2, we have ψ 6 th P Mx) x = C G1 C 4 th P M e 6 + O e 7) = 19 ) β) c c 3 + O e 7) β ) c 3 2 c 2 c )) 9 c 4 e 6 = C 6 th P M e 6 + O e 7), 5.14) where C 6 th P M = c β ) c β β ) ) 2 c 2 c 4 + c 2 c β 209 ) c 3 2 c 3 27 Thus, it is found that 6 th PM method has sixth order convergence. Theorem Let a sufficiently smooth function f : D R R has a simple root x in the open interval D. If x 0) is sufficiently close to x, then the 12 th P M family of I.F. 5.7) is of local twelfth order convergence.

85 67 Proof. We have 1 G 2 = q ψ 6 th P Mx)) = f x CG2 e 6 + O e 7)), ) C G2 = c c 2c ) 81 β ) 27 β c 4 2 c 3 + Using Theorem with p = q = 6, we have ψ 12 th P Mx) x = C G2 C 6 th P M c 2 C 2 6 th P M )e12 + O e 13) = C 12 th P M e 12 + O e 13), C 12 th P M = 1 81 c ) β + 40) c c c 2c β + 40) ) ) 278 c 3 2 c 3 c ) 3 c 2 c 4 2 c c β β2 ) c 2 6. ) 5 2 β + 40) 10 β + 89) c 2 ) c 2 4 c β β β + 40) 10 β + 89) β + 89)2 2 ) 2 β + 40)2 c 6 2 c β + 40)2 10 β + 89) + 1 ) 8 2 β + 40) 10 β + 89)2 c ) 11 9 c c 2 c 4 + c 4 3 c ) 27 β c c 3 ) β + 40) 10 β + 89) β + 40)2 10 β + 89) c 9 2 c β + 40) 10 β + 89) β + 89)2 + 1 ) 2 β + 40)2 c c β + 40)2 10 β + 89) 2 c Thus, it is found that 12 th PM method has twelfth order convergence. 5.15) Remark The cases β = 1, 1, 2 for 4 th P M family respectively corresponds to the fourth order arithmetic mean 4 th AM), harmonic mean 4 th HM) and square mean 4 th SM) methods. The case β 0 produces a fourth order geometric mean 4 th GM) method.

86 68 Remark We note that 4 th AM and 4 th HM methods have recently been obtained by Babajee 2014). Remark The cases β = 1, 1, 2, 0 for 6 th P M and 12 th P M family respectively correspond to sixth and twelfth order arithmetic mean 6 th AM and 12 th AM), harmonic mean 6 th HM and 12 th HM), square mean 6 th SM and 12 th SM) and geometric mean 6 th GM and 12 th GM) methods. 5.3 Numerical Examples In this section, we give numerical results on some test functions to compare the efficiency of the proposed family of 4 th P M, 6 th P M, 12 th P M with 2 nd NM and 3 rd P M methods. Numerical computations have been carried out in the MATLAB software, rounding to 500 significant digits. Depending on the precision of the computer, we use the stopping criteria for the iterative process error = x k+1) x k) < ɛ, where ɛ = and N is the number of iterations required for convergence. The ACOC denoted as ρ k is given by ρ k = ln xk+1) x k) )/x k) x k 1) ) ln x k) x k 1) )/x k 1) x k 2) ). The same examples f 1 x) - f 8 x) considered in section 4.3 have been considered here for numerical experiments. Table 5.1 shows the results for f 1 x) and f 2 x) for a given starting point. The computational order of convergence agrees with theoretical order. The members of the 4 th P M family converge in less number of iterations and with least error than 3 rd P M family where both methods use the same number of function evaluations. Also, it is observed that 12 th HM method is the most efficient method with respect to iterations and error. Next, we attempt to find the best integer value of β in [ 20, 20] for each family that produces minimum number of iterations and corresponding error. Tables 5.2 and 5.3 show the corresponding results for f 1 x) to f 8 x). If the initial points are very close to the root, then we obtain the least number of iterations and lowest error. Hence, the proposed methods 4 th P M, 6 th P M and 12 th P M have better

87 69 Table 5.1: Comparison of results for f 1 x) and f 2 x) Methods f 1 x) f 2 x) x 0) N error ρ k x 0) N error ρ k 2 nd NM 1.19) e e rd P M 3.7) 3 rd AM e e rd HM e e rd GM e e rd SM e e th P M5.1) 4 th AM e e th HM e e th GM e e th SM e e th P M5.2) 6 th AM e e th HM e e th GM e e th SM e e th P M5.7) 12 th AM e e th HM e e th GM e e th SM e e efficiency when compared to 2 nd NM and 3 rd P M. Suppose the initial points are not close to the root, then we may not get least error in 4 th P M I.F. when compared to 3 rd P M I.F. for example see in Table 5.2, f 2 x) with x 0) = 1.7; f 3 x) with x 0) = 0.5; and f 8 x) with x 0) = 1.5). It is observed from the asymptotic error constant in the equations 5.13), 5.14) and 5.15), we obtain least error for mostly non-positive values of β. We next consider a test function which is of simple cubic type See Drexler 1997) and Babajee and Dauhoo 2006)): f 9 x) = x 3 + ln x, x > 0, x R for which the logarithm restricts the function to be positive and its convex properties of the function are favorable for global convergence. But we focus on the behaviour of the methods with the starting points which are equally spaced with x = 0.01 in the interval 0, 5] to check the robustness of the methods. The root

88 70 Table 5.2: Results for 2 nd NM and for best value of β in [ 20, 20] 2 nd NM 1.19) 3 rd P M 3.7) 4 th P M 5.1) fx) x 0) N error β N error β N error f 1 x) e e e e e e 081 f 2 x) e e e e e e 163 f 3 x) e e e e e e 056 f 4 x) e e e e e e 103 f 5 x) e e e e e e 085 f 6 x) e e e e e e 056 f 7 x) e e e e e e 113 f 8 x) e e e e e e 073 Table 5.3: Results for the best value of β in [ 20, 20] 6 th P M 5.2) 12 th P M 5.7) fx) x 0) β N error β N error f 1 x) e e e e 164 f 2 x) e e e e 075 f 3 x) e e e e 225 f 4 x) e e e e 228 f 5 x) e e e e 177 f 6 x) e e e e 141 f 7 x) e e e e 215 f 8 x) e e e e 085

89 rd PM 4 th PM 6 th PM 12 nd PM N S β a) Variation of N s with β rd PM 4 th PM 6 th PM 12 nd PM 5 w c β b) Variation of w c with β Figure 5.1: Results for f 9 x) x = is correct to 14 digits. A starting point is considered as divergent if it does not satisfy the condition x k+1) x k) < in at most 100 iterations and x 0 at any iterates. We denote the quantity ω c as the mean number of iterations from a starting point until convergence and a penalty of 100 iterations is imposed for diverging points. Let N s denotes the number of successful points of the 500 starting points. We tested for 41 values of β in the interval [ 20, 20] with β = 1. Figure 5.1 a) display that all the members of the 4 families are globally convergent for f 9 x). Figure 5.1 b) shows that the mean w c ) reduces as the order of the

90 72 method increases. We note that the mean increases rapidly for the 3 rd P M family as β > 0. This shows the improved fourth, sixth and twelfth order family are more efficient than the third order family. For the 3 rd P M family, it is the member β = 9 which is the most efficient method with the lowest w c = For the 4 th P M family, it is the member β = 19 which is the most efficient method with the lowest w c = For the 6 th P M family, it is the member β = 5 which is the most efficient method with the lowest w c = For the 12 th P M family, it is the member β = 6 which is the most efficient method with the lowest w c = It is the advantage of varying β. We also note that as the order of the methods increases, the mean iteration number of the members considered is almost constant, resulting in the stability of the methods. Next section compares the dynamic behaviour in the complex plane for 2 nd NM and 4 th P M methods. Detailed study on this aspect is available in Chapter Dynamic Behaviour in the Complex Plane We consider the square region [ 2, 2] [ 2, 2] and in this region, we have equally spaced grid points with mesh h = It is composed of 400 columns and 400 rows which can be related to the pixels of a computer display would represent a region of the complex plane Soleymani et al. 2012a). Each grid point is used as an initial point z 0 and the number of iterations until convergence is counted for each point. Now, we draw the polynomiographs of p 1 z) = z 3 1 with roots α 1 = 1, α 2 = i and α 3 = i. We assign red color if each grid point converge to the root α 1, green color if they converge to the root α 2 and blue color if they converge to the root α 3 in at most 200 iterations and if z n α j < 10 4, j = 1, 2, 3. In this way, the basins of attraction for each root would be assigned a characteristic color. If the iterations do not converge as per the above condition for some specific initial points, we assign black color. Bahman Kalantari coined the term polynomiography to be the art and science of visualization in the approximation of roots of polynomials using I.F. Kalantari 2009). Figure 5.2 shows the polynomiography of the 2 nd NM and 4 th P M methods and thier global

91 a) 2 nd NM 1.19) b) β = 1 in 4 th P M 5.1) C) β = 1 in 4 th P M 5.1) Figure 5.2: Polynomiographs of p 1 z)

92 74 convergence. 5.5 Concluding Remarks We compare the efficiency indices of some existing I.F. along with proposed family of I.F. in Table 5.4. The proposed 4 th P M I.F. has good efficiency index compared with 3 rd P M I.F. whereas both have same number of function evaluations. It is observed that, proposed 4 th P M, 6 th P M and 12 th P M I.F. have better efficiency indices compared to 2 nd NM and 3 rd P M. Hence, we conclude that the proposed 4 th P M, Table 5.4: Comparison of Efficiency Index Methods p f f d EI T EI O 2 nd NM 1.19) rd P M 3.7) th P M 5.1) th P M 5.2) th P M 5.7) th P M and 12 th P M I.F. performs better than 2 nd NM and can be a competitor to 2 nd NM and other methods of equivalent order available in the literature. Further, it is noted that 4 th P M family has optimal order convergence. Dynamic behaviour of 4 th P M family of methods for β = 1 and 1 for whom the polynomiographs are displayed. Whereas for other values of β in the interval [ 20, 20], polynomiographs are yet to be explored in detail. This is due to fact that the polynomial p 3 z) has complex roots and the MATLAB program for obtaining basin attractors has to be developed for different β values. This particular study is open for further research.

93 Chapter 6 Some New Multi-point Iterative Methods and their Basins of Attraction In this chapter, we propose some Jarratt s type methods for solving scalar nonlinear equation fx) = 0 with convergence order four, six and twelve using the idea of weight functions. The main goal and motivation in the construction of new methods is to get better efficiency index. In other words, it is desirable to attain convergence order as high as possible with fixed number of function evaluations per iteration. In the case of multipoint methods without memory, this demand is closely connected with the optimal order stated in the Kung-Traub conjecture. Section 6.1 presents the development of the new fourth order methods and their convergence analysis. Further, the extension of new fourth order methods to sixth and twelfth order methods are given in Section 6.2. Section 6.3 includes some numerical examples and results for the new family of methods along with some equivalent methods, including Newton s method. In Section 6.5, we obtain all possible extraneous fixed points for these methods as a special study. In Section 6.4, we have studied basins of attraction for the proposed fourth order methods, Newton s method and some existing methods. Section 6.6 discusses an application on Planck s radiation law problem. Comparison of the efficiency indices and concluding remarks are given in the last section. 75

94 Construction of new methods Let us consider a third order I.F. 3 rd NW ) for solving nonlinear equation which was presented by Noor and Waseem 2009) y = x 2 fx) 3 f x), ψ 3 rd NW x) = x 4fx) f x) + 3f y). 6.1) This method 6.1) is of order three with three evaluations per full iteration. To improve the order of the above method with the same number of function evaluations leading to an optimal method, we proposed the following class of I.F. which includes weight functions See Madhu and Jayakumar 2016b)) ψ 4 th MJx) = x 4fx) ) f x) + 3f y) Hτ) Gη), 6.2) where Hτ) and Gη) are two weight functions with τ = f y) f x) and η = f x) f y) Convergence Analysis of the methods Theorem Let f : D R R be a sufficiently smooth function having continuous derivatives up to fourth order. If fx) has a simple root x in the open interval D and x 0) is chosen in a sufficiently small neighborhood of x, then the class of method 6.2) is of local fourth-order convergence, when H1) = G1) = 1, H 1) = G 1) = 0, H 1) = 5 8, G 1) = 1 2, H 1) = G 1) < 6.3) and it satisfies the error equation ψ 4 th MJx) x = 1 81 )) 81c 2 c 3 + 9c 4 + c H 1) 32G 1) e 4 +Oe 5 ). Proof. Taylor expansion of fx) and f x) about x gives [ ] fx) = f x ) e + c 2 e 2 + c 3 e 3 + c 4 e ) and [ ] f x) = f x ) 1 + 2c 2 e + 3c 3 e 2 + 4c 4 e )

95 77 so that y = x + e c 2e ) c 22 c 3 e ) 4c 32 7c 2 c 3 + 3c 4 e ) 3 Again, using Taylor expansion of f y) about x gives [ f y) = f x ) c 2e+ 1 4c 22+c ) 3 e c 32+27c ) ] 2 c 3 +c 4 e ) 3 27 Using equations 6.5) and 6.7), we have τ = c 2e + 4c ) 3 c 3 e 2 8 ) 36c 32 45c 2 c c 4 e ) 27 and η = c 2e ) 5c c 3 e ) 8c 32 21c 2 c c 4 e ) 27 Using equations 6.4), 6.5) and 6.7), then we have 4fx) f x) + 3f y) = e c2 2e 3 + 3c 3 2 3c 2 c 3 1 ) 9 c 4 e ) Expanding the weight function Hτ) and Gη) about 1 using Taylor series, we get Hτ) = H1) + τ 1)H 1) τ 1)2 H 1) τ 1)3 H 1) + OH 4) 1)), Gη) = G1) + η 1)G 1) η 1)2 G 1) η 1)3 G 1) + OG 4) 1)). 6.11) Using equations 6.10) and 6.11) in equation 6.2), such that the conditions in equation 6.3) are satisfied, we obtain ψ 4 th MJx) x = 1 81 )) 81c 2 c 3 + 9c 4 + c H 1) 32G 1) e 4 +Oe 5 ). Equation 6.12) shows that method 6.2) has fourth order convergence. 6.12) Remark Note that for each choice of H 1) < and G 1) < in equation 6.12) will give rise to a new optimal fourth order method. Method 6.2) has efficiency index EI O = 1.587, better than method 6.1).

96 78 Two members in the class of method 6.2) satisfying Condition 6.3), with corresponding weight functions, are given in the following: By choosing H 1) = G 1) = 0, we get a new method called as 4 th MJ1 4fx) ψ 4 MJ1x) = x ) ) 2 τ ) ) 2 η 1, 6.13) th f x) + 3f y) 16 4 where its error equation is ψ 4 th MJ1x) x = c3 2 c 2 c ) 9 c 4 e 4 + Oe 5 ). By choosing H 1) = 0, G 1) = 1, we get another method called as 4 th MJ ) ) 2 τ fx) ψ 4 MJ2x) = x th f x) + 3f y) ) 2 1 η where its error equation is ψ 4 th MJ2x) x = ) ) 3 η 1, c3 2 c 2 c ) 9 c 4 e 4 + Oe 5 ). 6.14) Remark By this way, we can propose many such fourth order methods similar to 4 th MJ1 and 4 th MJ2. Further, the methods 4 th MJ1 and 4 th MJ2 are equally good, since they have the same order of convergence and efficiency. Based on the analysis done using basins of attraction, we find that 4 th MJ1 is marginally better than 4 th MJ2, and hence, we have considered 4 th MJ1 to propose higher order methods, namely 6 th MJ3 and 12 th MJ Higher Order Methods We improve the method 4 th MJ1 to a new sixth order method called as 6 th MJ3 ψ 6 th MJ3x) = ψ 4 th MJ1x) fψ 4 th MJ1x)) f x) 1 2 ) 3η ) Babajee et al. 2015a) improved a sixth order Jarratt method to a twelfth order method. Using their technique, we obtain a new twelfth order method 12 th MJ4) ψ 12 th MJ4x) = ψ 6 th MJ3x) fψ 6 th MJ3x)) f ψ 6 th MJ3x)), 6.16)

97 79 where f ψ 6 th MJ3x)) is approximated to reduce one function evaluation: f ψ 6 th MJ3x)) 1 f x)ψ ψ 4 MJ1x) ψ 4 MJ1x) ψ th 6 MJ3x) th 6 MJ3x)) th th + 2f[ψ 6 th MJ3x), x, x]ψ 4 th MJ1x) x)ψ 6 th MJ3x) x) + f[ψ 4 th MJ1x), x, x] 3f[ψ 6 th MJ3x), x, x])ψ 6 th MJ3x) x) 2 ), where f[ψ 4 MJ1x), x, x] = f[ψ 4 MJ1x), x] f x) th, th ψ 4 MJ1x) x th f[ψ 6 MJ3x), x, x] = f[ψ 6 MJ3x), x] f x) th. th ψ 6 MJ3x) x th Convergence Analysis of the methods Theorem Let f : D R R be a sufficiently smooth function having continuous derivatives up to fourth order. If fx) has a simple root x in the open interval D and x 0) is chosen in a sufficiently small neighborhood of x, then method 6.15) is of local sixth order convergence, and it satisfies the error equation ψ 6 th MJ3x) x = c 22 3c 3 )49c 32 27c 2 c 3 + 3c 4 ) e 6 + Oe 7 ). Proof. Taylor expansion of fψ 4 th MJ1x)) about x gives [ 49 fψ 4 MJ1x)) = f x ) th 27 c3 2 c 2 c ) 9 c 4 e c c 2 c 4 12c 5 ) e c c 2 2c3 4529c c 3 2c c 2 2c 4 891c 3 c c 2 25c 2 3 3c 5 ) + 63c 6 )e 6 ]. 6.17) By using equations 6.5), 6.9) and 6.17) in equation 6.15), we obtain ψ 6 th MJ3x) x = c 22 3c 3 )49c 32 27c 2 c 3 + 3c 4 ) e 6 + Oe 7 ). 6.18) Equation 6.18) shows that method 6.15) has sixth order convergence. The following theorem is stated without proof, which can be worked out similar to the above theorem with the help of MATHEMATICA software. Theorem Let f : D R R be a sufficiently smooth function having continuous derivatives up to fourth order. If fx) has a simple root x in the open

98 80 interval D and x 0) is chosen in a sufficiently small neighborhood of x, then method 6.16) is of local twelfth order convergence, and it satisfies the error equation ψ 12 MJ4x) x = 1 ) ) 2 10c 22 3c th 3 49c c 2 c 3 + 3c ) 10c 32 3c 2 c 3 + 3c 4 e 12 + Oe 13 ). 6.3 Numerical Examples In this section, we give numerical results on some test functions to compare the efficiency of the new methods with some existing methods. Numerical computations have been carried out in the MATLAB software with 500 significant digits. Depending on the precision of the computer, we use the stopping criteria for the iterative process error = x k+1) x k) < ɛ, where ɛ = and N is the number of iterations required for convergence. d ψ represents the total number of function evaluations. The ACOC denoted as ρ k is given by ρ k = ln xk+1) x k) )/x k) x k 1) ) ln x k) x k 1) )/x k 1) x k 2) ). The same examples f 1 x) - f 5 x) considered in section 2.3 have been considered here for numerical experiments. Consider the following fourth order optimal methods for the purpose of comparing results: Method of Sharifi et al. 2012) 4 th SBS1): ψ 4 SBS1x) = x fx) 1 th 4 f x) + 3 ) f y) f y) ) 2 8 f x) 1 69 f y) ) 3 fx) ) ) 4 64 f x) 1 +. f y) 6.19) Method of Sharifi et al. 2012) 4 th SBS2): ψ 4 SBS2x) = x fx) 1 th 4 f x) + 3 ) f y) f y) ) 2 8 f x) 1 1 fx) ) ) f y) Method of Soleymani et al. 2012c) 4 th SKK): 2fx) ψ 4 SKKx) = x th f x) + f y) fx) ) ) f y) f x) 4 f x) + 3 f y) ) ) 2. 4 f x) 6.20) 6.21)

99 81 Method of Singh and Jaiswal 2014) 4 th SJ): 17 ψ 4 SJx) = x th 8 9 f y) 4 f x) + 9 f y) 2) 7 8 f x)) 4 3 ) f y) fx) 4 f x) f x). 6.22) Method of Sharma et al. 2013) 4 th SKS): ψ 4 SKSx) = x 1 th f x) 8 f y) + 3 ) f y) fx) 8 f x) f x). 6.23) Non-optimal method found in Jain 2013) 4 th DJ): ψ 4 th DJx) = ψ 3 rd AMx) ψ 3 rd AMx) x fψ 3 rd AMx)) fx) fψ 3 rd AMx)). 6.24) Table 6.1: Numerical results for f 1 x) Methods x 0) N p d d ψ ρ k error CPU 2 nd NM 1.19) e e rd NW 6.1) e e th JM 1.42) e e th SBS1 6.19) e e th SBS2 6.20) e e th SKK 6.21) e e th SJ 6.22) e e th SKS 6.23) e e th MJ1 6.13) e e th MJ2 6.14) e e Tables 6.1 to 6.5 display the results for f 1 x) to f 5 x) respectively. It is observed that 4 th MJ1, 4 th MJ2 methods converge in less number of iterations and with low error when compared to 2 nd NM and 3 rd NW methods.

100 82 Table 6.2: Numerical results for f 2 x) Methods x 0) N p d d ψ ρ k error CPU 2 nd NM 1.19) e e rd NW 6.1) e e th JM 1.42) e e th SBS1 6.19) e e th SBS2 6.20) e e th SKK 6.21) e e th SJ 6.22) e e th SKS 6.23) e e th MJ1 6.13) e e th MJ2 6.14) e e

101 83 Table 6.3: Numerical results for f 3 x) Methods x 0) N p d d ψ ρ k error CPU 2 nd NM 1.19) e e rd NW 6.1) e e th JM 1.42) e e th SBS1 6.19) e e th SBS2 6.20) e e th SKK 6.21) e e th SJ 6.22) e e th SKS 6.23) e e th MJ1 6.13) e e th MJ2 6.14) e e

102 84 Table 6.4: Numerical results for f 4 x) Methods x 0) N p d d ψ ρ k error CPU 2 nd NM 1.19) e e rd NW 6.1) e e th JM 1.42) e e th SBS1 6.19) e e th SBS2 6.20) e e th SKK 6.21) e e th SJ 6.22) e e th SKS 6.23) e e th MJ1 6.13) e e th MJ2 6.14) e e

103 85 Table 6.5: Numerical results for f 5 x) Methods x 0) N p d d ψ ρ k error CPU 2 nd NM 1.19) e e rd NW 6.1) e e th JM 1.42) e e th SBS1 6.19) e e th SBS2 6.20) e e th SKK 6.21) e e th SJ 6.22) e e th SKS 6.23) e e th MJ1 6.13) e e th MJ2 6.14) e e

104 Basins of attraction Sections 6.1 and 6.3 discussed methods whose roots are in the real domain, that is f : D R R. The study can be extended to functions defined in the complex plane f : D C C having complex zeros. From the fundamental theorem of algebra, a polynomial of degree n with real or complex coefficients has n roots which may or may not be distinct. In such a case a complex initial guess is needed for convergence of complex zeros. Note that we need some basic definitions in order to study functions for complex domain with complex zeros. We give below some definitions required for our study, which are found in Blanchard 1984), Amat et al. 2004), Scott et al. 2011). Let R : C C be a rational map on the Riemann sphere. Definition For z C we define its orbit as the set orbz) = {z, Rz), R 2 z),..., R n z),...}. Definition A periodic point z 0 of the period m is such that R m z 0 ) = z 0 where m is the smallest integer. Definition The Julia set of a nonlinear map Rz) denoted by JR) is the closure of the set of its repelling periodic points. The complementary of JR) is the Fatou set F R). Definition If O is an attracting periodic orbit of period m, we define the basins of attraction to be the open set A C consisting of all points z C for which the successive iterates R m z), R 2m z),... converge towards some point of O. Lemma Every attracting periodic orbit is contained in the Fatou set of R. In fact the entire basins of attraction A for an attracting periodic orbit is contained in the Fatou set. However, every repelling periodic orbit is contained in the Julia set. In the following subsections, we produce some beautiful graphs obtained for the proposed methods and for some existing methods using MATLAB software See Chicharro et al. 2013)). In fact, an iteration function is a mapping of the plane into itself. The common boundaries of these basins of attraction constitute the Julia set of the iteration function and its complement is the Fatou set. This section is necessary in this Chapter to show that how the proposed methods could be considered in

105 87 polynomiography. In the following section, we describe the basins of attraction for Newton s method and some higher order Newton type methods for finding complex roots of polynomials p 1 z) = z 3 1 and p 2 z) = z Polynomiographs of p 1 z) = z 3 1 We consider the square region [ 2, 2] [ 2, 2] and in this region, we have equally spaced grid points with mesh h = This mesh is composed of 400 columns and 400 rows which can be related to the pixels of a computer display that may represent a region of the complex plane, as given in Soleymani et al. 2012b). Each grid point is used as an initial point z 0 and the number of iterations until convergence is counted for each point. Now, we draw the polynomiographs of p 1 z) = z 3 1 with roots α 1 = 1, α 2 = i and α 3 = i. We assign red color if each grid point converges to the root α 1, green color if they converge to the root α 2 and blue color if they converge to the root α 3 in at most 200 iterations and if z n α j < 10 4, j = 1, 2, 3. In this way, the basins of attraction for each root would be assigned a characteristic color. If the iterations do not converge as per the above condition for some specific initial points, we assign black color. Figure 6.1a)-j) show the polynomiographs of the methods for the cubic polynomial p 1 z). There are diverging points for the methods 3 rd NW, 4 th SBS1, 4 th SBS2 and 4 th SKK. All the starting points are convergent for the methods 2 nd NM, 4 th JM, 4 th SJ, 4 th SKS, 4 th MJ1 and 4 th MJ2. In Table 6.6, we classify the number of converging and diverging grid points for each iterative method. Note that a point z 0 belongs to the Julia set if and only if dynamics in a neighborhood of z 0 displays sensitive dependence on the initial conditions, so that nearby initial conditions lead to wildly different behavior after a number of iterations. For this reason some of the methods are getting many divergent points. The common boundaries of these basins of attraction constitute the Julia set of the iteration function Polynomiographs of p 2 z) = z 4 1 Next, we draw the polynomiographs of p 2 z) = z 4 1 with roots α 1 = 1, α 2 = 1, α 3 = i and α 4 = i. We assign a yellow color if each grid point converges to the

106 a) 2 nd NM 1.19) b) 3 rd NW 6.1) c) 4 th JM 1.42) d) 4 th SBS1 6.19) e) 4 th SBS2 6.20) f) 4 th SKK 6.21) g) 4 th SJ 6.22) h) 4 th SKS 6.23) i) 4 th MJ1 6.13) j) 4 th MJ2 6.14) Figure 6.1: Polynomiographs of p 1 z)

107 89 Table 6.6: Comparison of convergent and divergent grids for p 1 z) Methods Convergent grid points Divergent grid points Real roots Complex roots 2 nd NM 1.19) rd NW 6.1) th JM 1.42) th SBS1 6.19) th SBS2 6.20) th SKK 6.21) th SJ 6.22) th SKS 6.23) th MJ1 6.13) th MJ2 6.14) root α 1, red color if they converge to the root α 2, green color if they converge to the root α 3 and blue color if they converge to the root α 4 in atmost 200 iterations and if z n α j < 10 4, j = 1, 2, 3, 4. Therefore, the basins of attraction for each root would be assigned a corresponding color. If the iterations do not converge as per the above condition for some specific initial points, we assign black color. Table 6.7: Comparison of convergent and divergent grids for p 2 z) Methods Convergent grid points Divergent grid points Real roots Complex roots 2 nd NM 1.19) rd NW 6.1) th JM 1.42) th SBS1 6.19) th SBS2 6.20) th SKK 6.21) th SJ 6.22) th SKS 6.23) th MJ1 6.13) th MJ2 6.14) Figure 6.2a)-j) show the polynomiographs of the methods for the quartic polynomial p 2 z). There are diverging points for the methods 3 rd NW, 4 th SBS1, 4 th SBS2, 4 th SKK, 4 th SJ, 4 th SKS, 4 th MJ1 and 4 th MJ2. All the starting points are convergent for 2 nd NM and 4 th JM methods. In Table 6.7, we classify the number of converging and diverging grid points for each iterative method. Also, we observe

108 a) 2 nd NM 1.19) b) 3 rd NW 6.1) c) 4 th JM 1.42) d) 4 th SBS1 6.19) e) 4 th SBS2 6.20) f) 4 th SKK 6.21) g) 4 th SJ 6.22) h) 4 th SKS 6.23) i) 4 th MJ1 6.13) j) 4 th MJ2 6.14) Figure 6.2: Polynomiographs of p 2 z)

109 91 that 4 th SKS, 4 th MJ1 and 4 th MJ2 methods are divergent at less number of grid points than the method of 3 rd NW, 4 th SBS1, 4 th SBS2, 4 th SKK, 4 th SJ. From this comparison based on the basins of attractions for cubic and quartic polynomials, we could generally say that 2 nd NM, 4 th JM, 4 th SKS, 4 th MJ1 and 4 th MJ2 methods are more reliable in solving nonlinear equations. Also, by observing the polynomiographs of p 1 z) and p 2 z), we find certain symmetrical patterns to the x-axis and y-axis, where the starting point z 0 leads to convergent real or complex pair of roots of the respective polynomials. 6.5 A study on extraneous fixed points Definition A point z 0 is a fixed point of R if Rz 0 ) = z 0. Definition A point z 0 is called attracting if R z 0 ) < 1, repelling if R z 0 ) > 1 and neutral if R z 0 ) = 1. If the derivative is also zero then the point is super attracting. It is interesting to note that all the iterative methods can be written as ψx) = x G f x n )ux), ux) = fx) f x). 6.25) As per the definition, x is a fixed point of this method, since ux ) = 0. However, the points ξ x at which G f ξ) = 0 are also fixed points of the method, since G f ξ) = 0, second term on the right side of 6.25) vanishes. Hence, these points ξ are called extraneous fixed points. Moreover, for a general iteration function given by R p z) = z G f z)uz), z C, 6.26) the nature of extraneous fixed points can be discussed. Based on the nature of the extraneous fixed points, the convergence of the iteration process will be determined. For more details on this aspect, the paper by Vrscay and Gilbert 1988) will be useful. In fact, they investigated that if the extraneous fixed points are attractive then the method will give erroneous results. If the extraneous fixed points are repelling or neutral, then the method may not converge to a root near the initial guess.

110 92 In this section, we will discuss the extraneous fixed points of each method for the polynomial z 3 1. As G f does not vanish in theorem 6.5.1, there are no extraneous fixed points. Theorem There are no extraneous fixed points for 2 nd NM 1.19) and 3 rd NW 6.1). Theorem There are six extraneous fixed points for 4 th JM 1.42). Proof. The extraneous fixed point of Jarratt method for which G f = 3f yz)) + f z) 6f yz)) 2f z) are found. Upon substituting yz) = z 2fz) 3f z), we get the equation 1+7z3 +19z z 3 +11z 6 = 0. The extraneous fixed points are found to be ± i, ± i, ± i. All these fixed points are repelling since R z 0 ) > 1) Theorem There are fifty two extraneous fixed points for the method 6.19). Proof. As we found for the method 6.19), G f = ) f z) f yz)) f z) f z) f yz))) 3 f z) 2 + fz)4 f z) f yz)) ) f yz)) f z)) 2. f z) The extraneous fixed points are found to be ± i, ± i, ± i, , , ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i. All these fixed points are repelling since R z 0 ) > 1).

111 93 Theorem There are thirty nine extraneous fixed points for the method 6.20). Proof. For the method 6.20), G f = ) f z) f z) + + fz)3 f z) + 3 f yz)) 81f yz)) 3 8 The extraneous fixed points are at ) f yz)) f z)) 2. f z) ± i, ± i, ± i ± i, , ± i, , ± i, , ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i. All these fixed points are repelling since R z 0 ) > 1). Theorem There are twenty four extraneous fixed points for the method 6.21). Proof. As we found for the method 6.21), G f = 1 1+ f yz)) f z) The extraneous fixed points are found to be )f z) + + fz)4 f z) 3 )2f z) 7 4 f yz)) ) f yz)) 2. f z) ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i. All these fixed points are repelling since R z 0 ) > 1). Theorem There are eighteen extraneous fixed points for the method 6.22). Proof. For the method 6.22), G f = 17 9 f yz)) f z) 8 The extraneous fixed points are at f yz)) f z) ) 2 ) ) f yz)). f z) ± i, , ± i, ,

112 ± i, ± i, ± i, ± i, ± i, ± i. All these fixed points are repelling since R z 0 ) > 1). Theorem There are twelve extraneous fixed points for the method 6.23). Proof. For the method 6.23), G f = The extraneous fixed points are at f z) + 3 f yz)) 8 ) f yz)). f z) ± i, ± i, ± i, ± i, ± i, ± i. All these fixed points are repelling since R z 0 ) > 1). Theorem There are twenty four extraneous fixed points for the method 6.13). Proof. For the method 6.13), G f = f yz)) f z) ) f z) The extraneous fixed points are at f yz)) f z)) )f 2 z) + 14 f z) f z) f yz))) ). 2 f z) f yz)) ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i. All these fixed points are repelling since R z 0 ) > 1). Theorem There are thirty extraneous fixed points for the method 6.14). Proof. For the method 6.14), G f = 1 ) f z) + 5 f yz)) f z)) 2 ) f yz)) 16 f z) f z) f z) f z) f z) f yz))) f yz)) 2 6 f z) f z) f yz))) 3 ). f yz)) 3

113 95 The extraneous fixed points are at ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i, ± i. All these fixed points are repelling since R z 0 ) > 1). 6.6 An application problem To test our methods, we consider the following Planck s radiation law problem found in Bradie 2006) and Jain 2013) ϕλ) = 8πchλ 5 e ch/λkt 1, 6.27) which calculates the energy density within an isothermal blackbody. Here, λ is the wavelength of the radiation, T is the absolute temperature of the blackbody, k is Boltzmann s constant, h is the Planck s constant and c is the speed of light. Suppose, we would like to determine wavelength λ which corresponds to maximum energy density ϕλ). From 6.27), we get 8πchλ ϕ 6 ) ch/λkt )e ch/λkt λ) = e ch/λkt 1 e ch/λkt 1 ) 5 = A B. It can be checked that a maxima for ϕ occurs when B = 0, that is, when ch/λkt )e ch/λkt ) = 5. e ch/λkt 1 Here putting x = ch/λkt, the above equation becomes 1 x 5 = e x. 6.28) Define fx) = e x 1 + x ) The aim is to find a root of the equation fx) = 0. Obviously, one of the root x = 0 is not taken for discussion. As argued in Bradie 2006), the left-hand side of

114 96 Table 6.8: Comparison of results for Planck s radiation law problem x 0) 2 nd NM 1.19) 4 th DJ 6.24) N d ψ ρ k error N d ψ ρ k error e e e e e e e e 112 Table 6.9: Comparison of results for Planck s radiation law problem x 0) 4 th MJ1 6.13) 4 th MJ2 6.14) N d ψ ρ k error N d ψ ρ k error e e e e e e e e ) is zero for x = 5 and e Hence, it is expected that another root of the equation fx) = 0 might occur near x = 5. The approximate root of the equation 6.29) is given by x Consequently, the wavelength of radiation λ) corresponding to which the energy density is maximum is approximated as λ ch. kt ) We apply the methods 2 nd NM, 4 th DJ, 4 th MJ1, 4 th MJ2, 6 th MJ3 and 12 th MJ4 to solve 6.29) and compared the results in Tables From these tables, we note that the root x is reached faster by 12 th MJ4 method than by other methods. This is due to the fact that it has the highest efficiency EI O = Table 6.11 displays the results for fzero command in MATLAB, where N 1 is the number of Table 6.10: Comparison of results for Planck s radiation law problem x 0) 6 th MJ3 6.15) 12 th MJ4 6.16) N d ψ ρ k error N d ψ ρ k error e e e e e e e 203

115 97 Table 6.11: Results for Planck s radiation law problem in fzero x 0) N 1 N d ψ error x e e e e iterations to find the interval containing the root, error is the after N number of iterations. For f zero command, zeros are considered to be points where the function actually crosses, not just touches the x-axis. It is observed that the present methods converge with less number of total function evaluations than fzero solver. 6.7 Concluding Remarks We compare the efficiency index of some I.F. along with proposed methods in Table The proposed 4 th MJ I.F. has good efficiency index compared with 3 rd NW I.F. whereas both have same number of function evaluations. It is observed that, proposed methods 4 th MJ, 6 th MJ3 and 12 th MJ4 have better efficiency indices compared to 2 nd NM. Hence, we conclude that the methods 4 th MJ, 6 th MJ3 and Table 6.12: Comparison of Efficiency Index Methods p f f d EI T EI O 2 nd NM 1.19) rd P M 6.1) th MJ 6.2) th MJ3 6.15) th MJ4 6.16) th MJ4 performs better than 2 nd NM and can be a competitor to 2 nd NM and other methods of equivalent order available in the literature.

116 Chapter 7 Improved Harmonic Mean Newton-type methods for system of nonlinear equations In this chapter, we propose a new fourth order Newton-like method based on harmonic mean and its multi-step version for solving system of nonlinear equations F x) = 0 where F x) = f 1 x), f 2 x),..., f n x)) T, x = x 1, x 2,..., x n ) T, f i : R n R, i = 1, 2,..., n and F : D R n R n is a smooth map and D is an open and convex set, where we assume that x = x 1, x 2, x 3,..., x n) T is a zero of the ) T system and x 0) = x 0) 1, x 0) 2,..., x 0) n is an initial guess sufficiently close to x. For example, problems of the above type arise while solving boundary value problems for differential equations. The differential equations are reduced to system of nonlinear equations, which are in turn solved by the familiar Newton s method having convergence order two See Ostrowski 1960)). In Section 7.1, we present a new algorithm that has fourth order convergence and its multi-step version with order 2r + 4, r 1, r is a positive integer. In Section 7.2, we study the convergence analysis of the new methods using the point of attraction theory. In Section 7.3, efficiency indices and computational efficiency indices for the new methods are discussed. Section 7.4 presents numerical examples and comparison with some known methods. Furthermore, we also study an application problem called the 1-D Bratu problem in Section 7.5. Concluding remarks are given in the last section. Outcome of the new methods are given in concluding in remarks which shows that our methods are efficient. 98

117 Construction of new methods One of the basic procedure for solving system of nonlinear equations is the classical second order Newton s method 2 nd NM). It is defined by x k+1) = G 2 NMx k) ) = x k) ux k) ), ux k) ) = [F x k) )] 1 F x k) ) 7.1) nd where k = 0, 1, 2,... and [F x k) )] 1 is the inverse of first Fréchet derivative F x k) ) of the function F x k) ). It is straightforward to see that this method requires the evaluation of one function, one first derivative and one matrix inversion per iteration. Homeier 2005) proposed a third order iterative method called Harmonic Mean Newton s method for solving a scalar nonlinear equation. Grau-Sanchez et al. 2012) proposed the following extension to solve a system of nonlinear equation F x) = 0, henceforth called as 3 rd HM x k+1) = G 3 HMx k) ) rd = x k) 1 [F x k) )] 1 + [F G 2 2 NMx k) ))] 1) F x k) ) nd 7.2) We note that 1 2 [F x k) )] 1 + [F G 2 NMx k) ))] 1) is the average of the inverses nd of two Jacobians. In general, such third order methods free of second derivatives like 7.2) can be used for solving system of nonlinear equations. These methods require one function evaluation and two first order Fréchet derivative evaluations. The convergence analysis of a few such methods using point of attraction theory can be found in Babajee 2010). This 3 rd HM method is more efficient than Halley s method because it does not require the evaluation of a third order tensor of n 3 values while finding the number of function evaluations. We propose a fourth order Harmonic Mean Newton s method 4 th HM) for solving systems of nonlinear equations method proposed in Babajee et al. 2015b)): x k+1) = G 4 HMx k) ) = x k) H th 1 x k) )Ax k) )F x k) ) H 1 x k) ) = I 1 4 τxk) ) I) τxk) ) I) 2, Ax k) ) = 1 [F x k) )] 1 + [F yx k) ))] 1), yx k) ) = x k) uxk) ), τx k) ) = [F x k) )] 1 F yx k) )), ux k) ) = [F x k) )] 1 F x k) ) 7.3) where I is the n n identity matrix. The new fourth order method requires evaluation of one function and two first order Fréchet derivatives for each iteration.

118 100 We further improve the 4 th HM method by additional function evaluations to get a multi-step version called 2r + 4) th HM method given by x k+1) = G 2r+4) th HMx k) ) = µ r x k) ) µ j x k) ) = µ j 1 x k) ) H 2 x k) )Ax k) )F µ j 1 x k) )) H 2 x k) ) = 2I τx k) ), j = 1, 2,..., r, r 1 7.4) µ 0 x k) ) = G 4 HMx k) ) th Remark Note that this multi-step version has order 2r+4, where r is a positive integer and r 0. The case r = 0 is the 4 th HM method. Remark The multi-step version requires one more function evaluation for each iteration. The proposed new methods does not require the evaluation of second or higher order Fréchet derivatives and still reaches higher order convergence. 7.2 Convergence Analysis of the methods In order to prove the convergence results, we recall some important definitions and results from the theory of point of attraction. The main theorem is going to be demonstrated by means of the n-dimensional Taylor expansion of the functions involved. Let F : D R n R n be sufficiently Fréchet differentiable in D. By using the notation introduced in Cordero et al. 2010b), the qth derivative of F at u R n, q 1, is the q-linear function F q) u) : R n R n R n such that F q) u)v 1,..., v q ) R n. It is easy to observe that 1. F q) u)v 1,..., v q 1, ) LR n ) 2. F q) u)v σ1),..., v σq) ) = F q) u)v 1,..., v q ), for all permutation σ of {1, 2,..., q}. So, in the following we will denote: a) F q) u)v 1,..., v q ) = F q) u)v 1... v q, b) F q) u)v q 1 F p) v p = F q) u)f p) u)v q+p 1.

119 101 It is well known that, for x + h R n lying in a neighborhood of a solution x of the nonlinear system F x) = 0, Taylor s expansion can be applied assuming that the Jacobian matrix F x ) is nonsingular) and [ ] p 1 F x + h) = F x ) h + C q h q + Oh p ), 7.5) where C q = 1/q!)[F x )] 1 F q) x ), q 2. We observe that C q h q R n since F q) x ) LR n R n, R n ) and [F x )] 1 LR n ). In addition, we can express F as [ ] p 1 F x + h) = F x ) I + qc q h q 1 + Oh p ), 7.6) where I is the identity matrix. Therefore, qc q h q 1 LR n ). From 7.6), we obtain [F x + h)] 1 = [ I + X 1 h + X 2 h 2 + X 3 h 3 + ] [F x )] 1 + Oh p ), 7.7) where X 1 = 2C 2, X 2 = 4C2 2 3C 3, X 3 = 8C C 2 C 3 + 6C 3 C 2 4C 4,. We denote e k) = x k) x as the error in the kth iteration. The equation e k+1) = Le k)p + Oe k)p+1 ), where L is a p-linear function L LR n R n, R n ), is called the error equation and p is the order of convergence. Observe that e k)p is e k), e k),, e k) ). Definition Point of Attraction). Ortega and Rheinbolt 1970) Let G : D R n R n. Then x is a point of attraction of the iteration q=2 q=2 x k+1) = Gx k) ), k = 0, 1, ) if there is an open neighbourhood S of x defined by Sx ) = {x R n x x < δ}, δ > 0, such that S D and, for any x 0) S, the iterates {x k) } defined by equation 7.8) all lie in D and converge to x.

120 102 Theorem Ostrowski Theorem). Ortega and Rheinbolt 1970) Assume that G : D R n R n has a fixed point x intd) and Gx) is Fréchet differentiable on x. If then x is a point of attraction for x k+1) = Gx k) ). ρg x )) = σ < 1 7.9) We now prove a general result that shows x is a point of attraction of a general iteration function Gx) = P x) Qx)Rx). Theorem Let F : D R n R n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x D, which is a solution of the system F x) = 0. Suppose that P, Q, R : D R n R n are sufficiently Fréchet differentiable functionals depending on F ) at each point in D with P x ) = x, Qx ) 0 and Rx ) = 0. Then, there exists a ball on which the mapping S = Sx, δ) = { } x x δ S 0, δ > 0, G : S R n, Gx) = P x) Qx)Rx), for all x S is well-defined; moreover, G is Fréchet differentiable at x, thus G x ) = P x ) Qx )R x ). Proof. Clearly, Gx ) = x. Gx) Gx ) G x )x x ) = P x) Qx)Rx) x P x ) Qx )R x ))x x ) P x) x P x )x x ) + Qx)Rx) + Qx )R x )x x ), using triangle inequality. Since P x) is differentiable in x and P x ) = x, we can assume that δ was chosen sufficiently small such that P x) x P x )x x ) ɛ x x,

121 103 for all x S with ɛ > 0 depending on δ and ɛ = 0 in case P x) = x. Since P, Q and R are continuously differentiable functions, then Q, R and R are bounded: Q x) K 1, R x) K 2, R x) K 3. Now by mean value theorem for integrals Qx) = Qx ) Q x + tx x )) dt x x ) and so that Rx) = 1 0 R x + sx x )) ds x x ), Qx)Rx) Qx )R x )x x ) 1 ) = Qx ) R x + sx x )) R x ) ds x x ) Qx ) Q x + tx x )) R x + sx x )) dt ds x x ) ) R x + sλx x )) ds dλ s x x ) Q x + tx x )) R x + sx x )) dt ds x x ) 2, using triangle inequality, Qx ) R x + sλx x )) ds dλ s x x 2 Q x + tx x )) R x + sx x )) dt ds x x 2, using Schwartz inequality, ) K3 2 Qx ) + K 1 K 2 x x 2, since Q, R and R are bounded, ) K3 δ 2 Qx ) + K 1 K 2 x x, since x x δ. Combining, we have Gx) Gx ) G x )x x ) δ ɛ + K ) 3 2 Qx ) + K 1 K 2 x x

122 104 which shows that Gx) is differentiable in x since δ and ɛ are arbitrary and Qx ), K 1, K 2 and K 3 are constants. Thus G x ) = P x ) Qx )R x ). 7.4). Next, we prove that the convergence order of the proposed methods 7.3) and Theorem Let F : D R n R n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x R n that is a solution of the system F x) = 0. Let us suppose that x S = Sx, δ) and F x) is continuous and nonsingular in x, and x 0) is close enough to x. Then x is a point of attraction of the sequence {x k) } obtained using the iterative expression equation 7.3). Furthermore, the sequence converges to x with order four, where the error equation obtained is e k+1) = G 4 th HMx k) ) x = L 1 e k)4 + Oe k)5 ), L 1 = C C 2C C 3C C ) Proof. We first show that x is a point of attraction using Theorem In this case, P x) = x, Qx) = H 1 x)ax), Rx) = F x). Now, since F x ) = 0, we have yx ) = x 2 3 [F x )] 1 F x ) = x, τx ) = F x ) 1 F yx )) = [F x )] 1 F yx )) = I, H 1 x ) = I, Ax ) = 1 [F x )] 1 + [F yx ))] 1) = [F x )] 1, 2 Qx ) = H 1 x )Ax ) = I[F x )] 1 = [F x )] 1 0, Rx ) = F x ) = 0, R x ) = F x ), P x ) = x, P x ) = I, G x ) = P x ) Qx )R x ) = I [F x )] 1 F x ) = 0, so that ρg x )) = 0 < 1 and by Ostrowski s theorem, x is a point of attraction of equation 7.3). Next we establish the fourth order convergence of this method. From

123 105 equations 7.5) and 7.6) we obtain and [ F x k) ) = F x ) e k) + C 2 e k)2 + C 3 e k)3 + C 4 e k)4] + Oe k)5 ) 7.11) [ F x k) ) = F x ) I + 2C 2 e k) + 3C 3 e k)2 + 4C 4 e k)3 + 5C 5 e k)4] + Oe k)5 ), where e k) = x k) x. We have [F x k) )] 1 = [ I + X 1 e k) + X 2 e k)2 + X 3 e k)3] [F x )] 1 + Oe k)4 ) 7.12) where X 1 = 2C 2, X 2 = 4C 2 2 3C 3 and X 3 = 8C C 2 C 3 + 6C 3 C 2 4C 4. Then [F x k) )] 1 F x k) ) = e k) C 2 e k)2 + 2C 2 2 C 3 )e k)3 + Oe k)4 ), and the expression for yx k) ) is yx k) ) = x ek) C 2e k)2 4 3 C2 2 C 3 )e k)3 +2C C 2C 3 2C 3 C 2 + 8C 3 2)e k)4 + Oe k)5 ). Taylor expansion of the Jacobian matrix F yx k) )) is [ F yx k) )) = F x ) I + 2C 2 yx k) ) x ) + 3C 3 yx k) ) x ) 2 ] + 4C 4 yx k) ) x ) 3 + 5C 5 yx k) ) x ) 4 + Oe k)5 ) [ = F x ) I + N 1 e k) + N 2 e k)2 + N 3 e k)3] + Oe k)4 ), N 1 = 2 3 C 2, N 2 = 4 3 C C 3, N 3 = 8 3 C C 2C C 3C C 4. Therefore, τx k) ) = [F x k) )] 1 F yx k) )) = I + N 1 + X 1 )e k) + N 2 + X 1 N 1 + X 2 )e k)2 +N 3 + X 1 N 2 + X 2 N 1 + X 3 )e k)3 + Oe k)4 ) = I 4 3 C 2e k) + 4C C 3)e k) C C 2 C C 3C ) 27 C 4 e k)3 + Oe k)4 )

124 106 and then H 1 x k) ) = I 1 τx k) ) I ) + 1 τx k) ) I ) = I C 2e k) C ) 3 C 3 e k) C C 2C C 3C ) 27 C 4 e k)3 + Oe k)4 ) Also, [ F yx k) )) 1 = [F x )] 1 I N 1 e k) + N1 2 N 2 )e k)2 ) + N 1 N 2 + N 2 N 1 N 31 N 3 e k)3] + Oe k)4 ) [ = I + Y 1 e k) + Y 2 e k)2 + Y 3 e k)3] [F x )] 1 + Oe k)4 ), 7.13) 7.14) where Y 1 = 2 3 C 2, Y 2 = 8 9 C C 3, Y 3 = C C 2C C 3C C 4 On the other hand, using equations 7.12) and 7.14), the harmonic mean can be expressed as [ Ax k) ) = I 4 3 C 2e k) C2 2 5 ) 3 C 3 e k) C C 2C C 3C C 4 ) e k)3] [F x )] ) + Oe k)4 ) Using equations 7.13) and 7.15), we have [ H 1 x k) )Ax k) ) = I C 2 e k) + C2 2 C 3 ) e k) C C 2C C 3C 2 10 ) 9 C 4 e k)3] [F x )] 1 + Oe k)4 ) 7.16) Finally, by using equations 7.11) and 7.16) in Equation 7.3) with some simplifications, the error equation can be expressed as: e k+1) = x k) x H 1 x k) )Ax k) )F x k) ) 79 = 27 C C 2C C 3C ) 9 C 4 e k)4 + Oe k)5 ) 7.17) Thus from equation 7.17), it can be concluded that the order of convergence of the 4 th HM method is four.

125 107 For the case r 1 we state and prove the following theorem. Theorem Let F : D R n R n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x R n that is a solution of the system F x) = 0. Let us suppose that x S = Sx, δ) and F x) is continuous and nonsingular in x, and x 0) is close enough to x. Then x is a point of attraction of the sequence {x k) } obtained using the iterative expression equation 7.4). Furthermore the sequence converges to x with order 2r + 4, where r is a positive integer and r 1. Proof. In this case, P x) = µ j 1 x), Qx) = H 2 x)ax), Rx) = F µ j 1 x)), j = 1,..., r. We can show by induction that µ j 1 x ) = x, µ j 1x ) = 0, j = 1,..., r so that P x ) = µ j 1 x ) = x, H 2 x ) = I, Qx ) = H 2 x )Ax ) = I[F x )] 1 = [F x )] 1 0, Rx ) = F µ j 1 x )) = F x ) = 0, P x ) = µ j 1x ) = 0, R x ) = F µ j 1 x ))µ j 1x ) = 0, G x ) = P x ) Qx )R x ) = 0. So ρg x )) = 0 < 1 and by Ostrowski s theorem, x is a point of attraction of equation 7.4). A Taylor expansion of F µ j 1 x k) )) about x yields F µ j 1 x k) )) = F x ) [ µ j 1 x k) ) x ) + C 2 µ j 1 x k) ) x ) ] 7.18) Also, let H 2 x k) ) = I C 2e k) + 4C ) 3 C 3 e k) )

126 108 Using equations 7.15) and 7.19), we have H 2 x k) )Ax k) ) = [ ] I + L 2 e k) [F x )] 1, L 2 = 38 9 C2 2 + C ) Using equations 7.18) and 7.20), we obtain µ j x k) ) x = µ j 1 x k) ) x H 2 x k) )Ax k) )F µ j 1 x k) )) = L 2 e k)2 µ j 1 x k) ) x ) ) Proceeding by induction of equation 7.21) and using equation 7.10), we have µ r x k) ) x = L 1 L 2 r e k)2r+4) + Oe k)2r+5) ), r 1, which shows that the method has 2r + 4 order of convergence. Consider the following iterative methods for solving system of nonlinear equation for the purpose of comparing results: Two step fourth order Newton s method 4 th NR): x k+1) = G 4 th NRx k) ) = G 2 nd NMx k) ) F G 2 nd NMx k) )) 1 F G 2 nd NMx k) )) 7.22) which was recently rediscovered by Noor et al. 2013) using the variational iteration technique. Recently, Sharma et al. 2013) developed a fourth order method, which is given by x k+1) = G 4 SGSx k) ) = x k) W x k) )F x k) ) 1 F x k) ), th W x k) ) = 1 [ I F y k) ) 1 F x k) ) + 3 ] 4 F x k) ) 1 F y k) ), 7.23) y k) = x k) 2 3 F x k) ) 1 F x k) ). Cordero et al. 2012a) presented a sixth order method, which is given by x k+1) = G 6 th CHMT x k) ) = zx k) ) [F G 2 nd NMx k) ))] 1 F zx k) )) zx k) ) = G 2 NMx k) ) Hx k) )[F x k) )] 1 F G nd 2 NMx k) )) [ ] nd Hx k) ) = 2I F x k) ) 1 F G 2 NMx k) )). nd 7.24)

127 Efficiency of the Methods In this section, we consider the efficiency index to compare the proposed methods. Definition Ostrowski 1960) Efficiency Index of an iterative method is defined as EI = p 1 d, where p is the order of convergence and d is the total number of new function evaluations F and F ) per iteration. Definition Cordero et al. 2010b) Computational Efficiency of an iterative method is defined as CE = p 1/d+op), where p is the order of convergence, d is the total number of new function evaluations and op is the total number of operations per iteration. For calculating the term op, number of products and quotients required for solving m linear system with the coefficient matrix solved by LU factorization is 1 3 n3 + mn 2 1 n), where n is the size of each system. 3 Table 7.1 shows the comparison of EI and CE. From this table, we observe that for the values of n = 5 and 10, where n is the size of the system, EI and CE values indicate that the proposed methods have greater value and hence more efficient. Table 7.1: Comparison of EI and CE Method EI n=5 n=10 CE n=5 n=10 2 nd NM 7.1) 2 1 n+n n3 +2n n rd HM 7.2) 3 1 n+2n n3 +4n n th NR 7.22) 4 2n+2n n3 +4n n th SGS 7.23) 4 1 n+2n n3 +5n n th CHMT 7.24) 6 3n+2n n3 +6n n th HM 7.3) 4 1 n+2n n3 +5n n th HM 7.4) 6 2n+2n n3 +7n n Figure 7.1 displays the performance of different methods with respect to EI and CE. It is observed from the figure, the new method 6 th HM is better than 2 nd NM and all other compared methods, for all n 2 with respect to both EI and CE.

128 Efficiency index 2 nd NM 3 rd HM 4 th NR 4 th SGS 6 th CHMT 4 th HM 6 th HM Computational efficiency index 2 nd NM 3 rd HM 4 th NR 4 th SGS 6 th CHMT 4 th HM 6 th HM EI 1.08 CE n n Figure 7.1: Comparison of Efficiency index and Computational efficiency index 7.4 Numerical Examples In this section, we compare the performance of the proposed methods 7.3) and 7.4) with some known methods. The numerical experiments have been carried out using MATLAB software for the test problems given below. The approximate solutions are calculated correct to 1000 digits by using variable precision arithmetic. We use the following stopping criterion for the iterations: err min = x k+1) x k) 2 < ) We have used the approximated computational order of convergence p c given by see Cordero and Torregrosa 2007)) p c log xk+1) x k) 2 / x k) x k 1) 2 ) log x k) x k 1) 2 / x k 1) x k 2) 2 ) 7.26) Let M be the number of iterations required for reaching the minimum residual error err min. Test Problem 1 TP1) We consider the following system of nonlinear equations given in Frontini and Sormani 2004): F x 1, x 2 ) = 0, where F : 4, 6) 5, 7) R 2 and F x 1, x 2 ) = x 2 1 x 2 19, x 3 2/6 x x 2 17). ) 2x1 1 The Jacobian matrix is given by F x) =. The starting 1 2x 1 2 x vector is x 0) = 5.1, 6.1) T and the exact solution is x = 5, 6) T.

129 111 Test Problem 2 TP2) We consider the following system given in Babajee 2010): cos x 2 sin x 1 = 0, x x = 0, x 2 exp x 1 x 2 3 = 0. The solution is x , , ) T. We choose the starting vector x 0) = 1, 0.5, 1.5) T. The Jacobian matrix has 7 non-zero elements and it is given by F x) = cos x 1 sin x 2 0 x x 1 3 ln x 3 1/x 2 2 x x 1 3 x 1 /x 3 exp x 1 0 2x 3. Test Problem 3 TP3) We consider the following system given in Babajee 2010): x 2 x 3 + x 4 x 2 + x 3 ) = 0, x 1 x 3 + x 4 x 1 + x 3 ) = 0, x 1 x 2 + x 4 x 1 + x 2 ) = 0, x 1 x 2 + x 1 x 3 + x 2 x 3 = 1. We solve this system using the initial approximation x 0) = 0.5, 0.5, 0.5, 0.2) T. The solution of this system is x , , , ) T. The Jacobian matrix that has 12 non-zero elements is given by 0 x 3 + x 4 x 2 + x 4 x 2 + x 3 F x 3 + x 4 0 x 1 + x 4 x 1 + x 3 x) =. x 2 + x 4 x 1 + x 4 0 x 1 + x 2 x 2 + x 3 x 1 + x 3 x 1 + x 2 0 Tables show the results for the test problems TP1, TP2, TP3), from which we conclude that the 10 th HM method is the most efficient method with least number of iterations and residual error. Also, we have given CPU time for the proposed methods and some existing methods. Next, we consider the 2r + 4) th HM family of methods 7.4) for finding the least value of r and thus the value of p in order to get the number of iterations M = 2 and err min = 0. To achieve this, TP1 requires r = 6 p = 16), TP2 requires r = 18 p = 40) and TP3 requires r = 8 p = 20).

130 112 Table 7.2: Comparison of different methods for system of nonlinear equations Methods TP1 M err min p c CPU 2 nd NM 7.1) 7 4.6e rd HM 7.2) 5 1.4e th NR 7.22) 4 4.6e th SGS 7.23) th CHMT 7.24) th HM 7.3) 4 1.4e th HM 7.4) th HM 7.4) th HM 7.4) 3 1.1e Table 7.3: Comparison of different methods for system of nonlinear equations Methods TP2 M err min p c CPU 2 nd NM 7.1) 9 1.7e rd HM 7.2) 6 4.5e th NR 7.22) 5 1.7e th SGS 7.23) th CHMT 7.24) th HM 7.3) th HM 7.4) th HM 7.4) 4 1.9e th HM 7.4) 4 2.2e Application The 1-D Bratu problem Buckmire 2003) is given by d 2 U + λ exp Ux) = 0, λ > 0, 0 < x < 1, 7.27) dx2 with the boundary conditions U0) = U1) = 0. The 1-D planar Bratu problem has two known, bifurcated, exact solutions for values of λ < λ c, one solution for λ = λ c and no solution for λ > λ c. The critical value of λ c is simply 8η 2 1), where η is the fixed point of the hyperbolic cotangent function coth x). The exact solution to the equation 7.27) is

131 113 Table 7.4: Comparison of different methods for system of nonlinear equations Methods TP3 M err min p c CPU 2 nd NM 7.1) 8 3.9e rd HM 7.2) 5 2.9e th NR 7.22) 5 2.9e th SGS 7.23) 5 8.8e th CHMT 7.24) 4 4.6e th HM 7.3) 5 5.5e th HM 7.4) 4 6.1e th HM 7.4) th HM 7.4) known and can be presented here as [ cosh x 1 Ux) = 2 ln cosh ) θ 4 2 ) θ 2 ], 7.28) where θ is a constant to be determined, which satisfies the boundary conditions and is carefully chosen and assumed to be the solution of the differential equation 7.27). Using a similar procedure as in Odejide and Aregbesola 2006), we show how to obtain the critical value of λ. Substitute equation 7.28) in equation 7.27), simplify and collocate at the point x = 1 because it is the midpoint of the interval. Another 2 point could be chosen, but low order approximations are likely to be better if the collocation points are distributed somewhat evenly throughout the region. Then, we have θ 2 = 2λ cosh 2 θ 4 ). 7.29) Differentiating equation 7.29) with respect to θ and setting dλ = 0, the critical dθ value λ c satisfies θ = 1 ) ) θ θ 2 λ c cosh sinh. 7.30) 4 4 By eliminating λ from equations 7.29) and 7.30), we have the value of θ c for the critical λ c satisfying ) θ c 4 = coth θc ) for which θ c = can be obtained using an iterative method. We then get λ c = from equation 7.29). Figure 7.2 illustrates this critical value of

132 Variation of θ with λ 15 θ 10 λ c Figure 7.2: Variation of θ for different values of λ. λ λ. The finite dimensional problem using standard finite difference scheme is given by F j U j ) = U j+1 2U j + U j 1 h 2 + λ exp U j = 0, j = 1, 2,..., n ) with discrete boundary conditions U 0 = U n = 0 and the step size h = 1/n. There are n 1 unknowns. The Jacobian is a sparse matrix and its typical number of nonzero per row is three. It is known that the finite difference scheme converges to the lower solution of the 1-D Bratu using the starting vector U 0) = 0, 0,.., 0) T. We use n = 100 and test for 350 λ s in the interval 0, 3.5] h = 0.01). The solution of this problem is given by x = {0.0365, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , } t For each λ, we let M λ be the minimum number of iterations for which U k+1) j U k) j 2 < 1e 13, where the approximation U k) j is calculated correct to 14 decimal places. Let M λ be the mean of iteration number for 350 λ s.

133 115 Table 7.5: Comparison of number of λ s out of 350 λ s) for 1-D Bratu problem Method M = 2 M = 3 M = 4 M = 5 M > 5 M λ 2 nd NM 7.1) rd HM 7.2) th SGS 7.23) th CHMT 7.24) th HM 7.3) th HM 7.4) Figure 7.3 and Table 7.5 give the results for the 1-D Bratu problem, where M represents the number of iterations for convergence. It can be observed from figure 7.3, from four methods considered, as λ increases to its critical value the number of iterations required for convergence increases. From the table 7.5, it is observed that as the order of method increases, the mean of iteration number M λ ) decreases. Also, 6 th HM is the most efficient method among the six methods because it has the lowest M λ and the highest number of λ converging in 2 iterations nd NM 3 rd HM 4 th HM 6 th HM M λ λ Figure 7.3: Variation of number of iteration with λ for 1-D Bratu problem For each λ, we find the minimum order of the 2r + 4) th HM family so that we reach convergence in 2 iterations and the results are shown in Figure 7.4. It can be observed that as the value of λ increases, the value of p required for convergence in 2 iterations also increases. For λ [0.01, 0.04], we require p = 4 4 th HM). For λ [0.05, 0.35], we require p = 6 6 th HM). For λ [0.36, 0.83], we require p = 8 8 th HM). For λ [0.84, 1.29], we require p = th HM). For λ [1.30, 1.66], we require p = th HM). For λ [1.66, 1.95], we require p = th HM).

134 116 For λ [1.96, 2.19], we require p = th HM). For λ [2.20, 2.37], we require p = th HM). For λ [2.38, 2.52], we require p = th HM). For λ [2.53, 2.64], we require p = th HM) and so on. We notice that the width of the interval decreases and the order of the family is very high as λ tends to its critical value. Finally, for λ = 3.5, we require p = 260 to reach convergence in 2 iterations order of method p λ Figure 7.4: Order of the 2r + 4) th HM family for each λ. 7.6 Concluding Remarks In this chapter, a fourth order method and its multi-step version having higher order convergence using weight functions to solve systems of nonlinear equations have been proposed. The proposed methods do not require the evaluation of second or higher order Fréchet derivatives to reach fourth order or higher order of convergence. We have proved a general result that shows x is a point of attraction of a general iteration function. Also for the proposed new methods, it is verified that x is a point of attraction. A few examples have been verified using the proposed methods and compared them with some known methods, which illustrate the superiority of the new methods. The proposed new methods have been applied on a practical problem called 1-D Bratu problem. The results obtained are interesting and encouraging for the new methods. Hence, the proposed methods can be considered competent enough to Newton s method and some of the existing methods.

135 Chapter 8 Efficient Newton-type methods for system of nonlinear equations In this chapter, we have presented some efficient iterative methods of convergence order four, five and six for solving system of nonlinear equations. Our aim is to achieve higher order Newton-type methods with only one inverse of Jacobian matrix. Moreover, we pay special attention to the less number of linear systems to be used in the iterative process. The fourth order method is a two step method, whereas new fifth and sixth order methods are composed of three steps, namely, Newton iteration as the first step and weighted Newton iteration as the second and third step. It is proved that the root x is a point of attraction for the new iterative schemes. The performance of the new methods are verified through numerical examples. As an application, we have implemented the present methods on Chandrasekhar s equation and 1-D Bratu problem. In Section 8.1, a fourth order iterative method for solving systems of nonlinear equation is proposed. Further, two new methods having fifth and sixth order convergence are given. Section 8.2 discusses a convergence analysis of the proposed methods. In section 8.3, efficiency indices and computational efficiency indices for the new methods are discussed. Section 8.4 verifies the new methods with numerical examples and their results are compared with some existing methods. Section 8.5 includes two application problems, namely Chandrasekhar s equation and 1-D Bratu problem. Concluding remarks are given in the last section. Outcome of the new methods are given in concluding in remarks which shows that our methods are efficient. 117

136 Construction of new methods Petkovic et al. 2013, 2014) recently developed a fourth order iterative method for solving single equation which is given below: [ ψ 4 th P etx) = x τ 1) + 9 ) ] 2 fx) τ 1 8 f x), y = x 2 fx) 3 f x), τ = f y) f x). 8.1) We have extended the method 8.1) for system of nonlinear equations having fourth order convergence with a total number of function evaluations n + 2n 2. Further, new fifth and sixth order methods with total number of function evaluations 2n + 2n 2 and 2n + 2n 2 respectively, are proposed by using only one inverse evaluation of first Fréchet derivative per iteration. New fourth order method 4 th M1): The above method is extended for solving system of nonlinear equations x k+1) = G 4 M1x k) ) = x k) H th 1 x k) )[F x k) )] 1 F x k) ), [ H 1 x k) ) = I 3 4 τxk) ) I) + 9 ) ] 2 τx k) ) I, 8 τx k) ) = [F x k) )] 1 F yx k) )), 8.2) yx k) ) = x k) 2 3 [F x k) )] 1 F x k) ), where I denotes n n identity matrix. New fifth order method 5 th M2): Adding one more step in the method 8.2), we obtain new fifth order method x k+1) = G 5 M2x k) ) = G th 4 M1x k) ) [F x k) )] 1 F G th 4 M1x k) )). 8.3) th New sixth order method 6 th M3): By using weight function in 8.3), we obtain new sixth order method with same number of function evaluation x k+1) = G 6 M3x k) ) = G th 4 M1x k) ) H th 2 x k) )[F x k) )] 1 F G 4 M1x k) )), [ ) ] th 2 H 2 x k) ) = I 3 2 τxk) ) I) + 1 τx k) ) I. 8.4) 2 These three methods have been developed in Madhu and Jayakumar 2016a).

137 Convergence Analysis of the methods In order to prove the convergence results, we recall some important definitions and results from the theory of point of attraction given in Section 7.2. Theorem Let F : D R n R n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x R n, that is a solution of the system F x) = 0. Let us suppose that x S = Sx, δ) and F x) is continuous and nonsingular in x, and x 0) close enough to x. Then the sequence {x k) } k 0 obtained using the iterative expression 8.2) converges locally to x with order 4, where the error equation obtained is e k+1) = G 4 th M1x k) ) x = L 1 e k)4 + Oe k)5 ), L 1 = 1 9 C 4 4C 2 C 3 + 5C C 3 C 2. Proof. We first show that x is point of attraction using Theorem In this case, P x) = x, Qx) = H 1 x)[f x)] 1, Rx) = F x). Now, since F x ) = 0, we have yx ) = x 2 3 [F x )] 1 F x ) = x, τx ) = F x ) 1 F yx )) = [F x )] 1 F yx )) = I, H 1 x ) = I, Qx ) = I[F x )] 1 = [F x )] 1 0, Rx ) = F x ) = 0, R x ) = F x ), P x ) = x, P x ) = I, then G x ) = P x ) Qx )R x ) = I [F x )] 1 F x ) = 0, so that ρg x )) = 0 < 1 and by Ostrowski s Theorem, x is a point of attraction of 8.2). We next establish fourth order convergence of the method. From 7.5) and 7.6), we obtain [ F x k) ) = F x ) e k) +C 2 e k)2 +C 3 e k)3 +C 4 e k)4 +C 5 e k)5 +C 6 e k)6] +Oe k)7 ), and 8.5) [ F x k) ) = F x ) I+2C 2 e k) +3C 3 e k)2 +4C 4 e k)3 +5C 5 e k)4 +6C 6 e k)5] +Oe k)6 ),

138 120 where C k = 1/k!)[F x )] 1 F k) x ), k = 2, 3,..., and e k) = x k) x. We have [F x k) )] 1 = where [ I + X 2 e k) + X 3 e k)2 + X 4 e k)3 + X 5 e k)4 + X 6 e k)5] [F x )] 1, 8.6) X 2 = 2C 2, X 3 = 4C 2 2 3C 3, X 4 = 8C C 2 C 3 + 6C 3 C 2 4C 4, X 5 = 16C C 3 C C 2 C 3 C 2 + 8C 4 C 2 12C 2 2C 3 + 9C C 2 C 4 5C 5, X 6 = 32C C 3 C C 2 C 3 C C 4 C C 2 2C 3 C 2 18C 2 3C 2 16C 2 C 4 C C 5 C C 3 2C 3 18C 3 C 2 C 3 18C 2 C C 4 C 3 16C 2 2C C 3 C C 2 C 4 6C 6. Then [F x k) )] 1 F x k) ) = e k) + K 0 e k)2 + K 1 e k)3 + K 2 e k)4 + K 3 e k)5 + K 4 e k)6 + Oe k)7 ), 8.7) where K 0 = C 2, K 1 = 2C 2 2 2C 3, K 2 = 4C C 2 C 3 + 3C 3 C 2 3C 4, K 3 = 8C 4 2 6C 3 C 2 2 6C 2 C 3 C 2 + 4C 4 C 2 8C 2 2C 3 + 6C C 2 C 4 4C 5, K 4 = 5C 6 2C 2 C 5 14C 2 2C 4 + 9C 3 C C 3 2C 3 12C 3 C 2 C 3 12C 2 C C 4 C 3 16C C 3 C C 2 C 3 C 2 2 8C 4 C C 2 2C 3 C 2 9C 2 3C 2 8C 2 C 4 C 2 + 5C 5 C C 2 C 4. Also, the expression for yx k) ) is yx k) ) = x ek) 2 3 K, where K = K 0 e k)2 + K 1 e k)3 + K 2 e k)4 + K 3 e k)5 + K 4 e k)6.

139 121 Taylor expansion of Jacobian matrix F yx k) )) is [ F yx k) )) = F x ) I + 2C 2 yx k) ) x ) + 3C 3 yx k) ) x ) 2 + 4C 4 yx k) ) x ) 3 + 5C 5 yx k) ) x ) 4 ] + 6C 6 yx k) ) x ) 5 + Oe k)6 ) [ = F x ) I + N 1 e k) + N 2 e k)2 + N 3 e k)3 + N 4 e k)4 + N 5 e k)5] + Oe k)6 ), 8.8) where N 1 = 2 3 C 2, N 2 = 4 3 C C 3, N 3 = 8 3 C C 2C C 3C C 4, N 4 = 4C 2 C C2 2C C4 2 4C 2 C 3 C C 3C C C 4C C 5, N 5 = 16 3 C 2C 5 8C 2 2C C3 2C 3 8C 2 C C C 2 C 3 C 2 2 Therefore, +8C2C 2 3 C C 2C 4 C 2 + 4C 3 C C 3C 2 C C 3C2 3 4C3C C 4C C C 6. τx k) ) = [F x k) )] 1 F yx k) )) = I + N 1 + X 2 )e k) + N 2 + X 2 N 1 + X 3 )e k)2 +N 3 + X 2 N 2 + X 3 N 1 + X 4 )e k)3 +N 4 + X 2 N 3 + X 3 N 2 + X 4 N 1 + X 5 )e k)4 8.9) +N 5 + X 2 N 4 + X 3 N 3 + X 4 N 2 + X 5 N 1 + X 6 )e k)5 + Oe k)6 ). and then H 1 x k) ) = I 3 4 τx k) ) I ) τx k) ) I ) ) = I + R 1 e k) + R 2 e k)2 + R 3 e k)3 + R 4 e k)4 + R 5 e k)5 + Oe k)6 ), where R 1 = C 2, R 2 = 2C 3 C 2 2, R 3 = 4C C 2 C 3 4C 3 C C 4,

140 122 R 4 = C 2C 4 32C2C C2 4 5C 2 C 3 C C 3 C C 4C C 5, R 5 = C 2C C2 2C C2C C 2 C3 2 32C2 5 5C 5 C 2 +14C 2 C 3 C C2C 2 3 C 2 16C 2 C 4 C C 3C C 3 C 2 C 3 31C 3 C C 2 3C C 4C C C C 4 C C 2C 4. Using 8.7) and 8.10), we have H 1 x k) )[F x k) )] 1 F x k) ) = e k) +S 1 e k)4 +S 2 e k)5 +S 3 e k)6 +Oe k)7 ), 8.11) where S 1 = 1 9 C 4 + 4C 2 C 3 5C 3 2 3C 3 C 2, S 2 = 8 27 C C 2C 4 34C2C C C C 3 C2 2 10C 2 C 3 C C 4C 2, S 3 = C C 2C C2 2C C 3C C2C C2C 2 3 C 2 +26C 3 C 2 C 3 20C 2 C C 4C 3 74C2 5 45C 3 C C 2 C 3 C C 4C C 2 3C C 2C 4 C C 5C C 2C C 2. Next, by using 8.11) in 8.2) we have G 4 th M1x k) ) = x k) H 1 x k) )[F x k) )] 1 F x k) ) = x S 1 e k)4 S 2 e k)5 S 3 e k)6 + Oe k)7 ). 8.12) Finally we obtain e k+1) = G 4 th M1x k) ) x = 1 9 C 4 4C 2 C 3 + 5C C 3 C 2 )e k)4 + Oe k)5 ). Hence, we see that the method 8.2) has fourth order convergence. Theorem Let F : D R n R n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x R n, that is a solution of the system F x) = 0. Let us suppose that x S = Sx, δ) and F x) is continuous

141 123 and nonsingular in x, and x 0) close enough to x. Then the sequence {x k) } k 0 obtained using the iterative expression 8.3) converges locally to x with order 5, where the error equation obtained is e k+1) = G 5 th M2x k) ) x = L 2 e k)5 + Oe k)6 ), L 2 = 2 9 C 2C 4 8C 2 2C C C 2 C 3 C 2. Proof. We first show that x is point of attraction using Theorem In this case, P x) = G 4 th M1x), Qx) = [F x)] 1, Rx) = F G 4 th M1x)). We can show by induction that G 4 th M1x ) = x, G 4 th M1 x ) = 0, so that P x ) = G 4 M1x ) = x, Qx ) = [F x )] 1 0, th Rx ) = F G 4 M1x )) = F x ) = 0, th P x ) = G 4 th M1 x ) = 0, R x ) = F G 4 M1x ))G th 4 th M1 x ) = 0, G x ) = P x ) Qx )R x ) = 0. So ρg x )) = 0 < 1 and by Ostrowski s Theorem, x is a point of attraction of 8.3). We next establish fifth order convergence of the method. Expanding F G 4 th M1x k) )) about x, we have [ F G 4 M1x k) )) = F x ) S th 1 e k)4 S 2 e k)5 S 3 e k)6] + Oe k)7 ). 8.13) Using equations 8.6) and 8.13), we get [F x k) )] 1 F G 4 th M1x k) )) = S 1 e k)4 S 2 + X 2 S 1 )e k)5 S 3 + X 2 S 2 + X 3 S 1 )e k)6 + Oe k)7 ). 8.14) Again, using 8.14) and 8.12) in 8.3), we get G 5 M2x k) ) th = G 4 M1x k) ) [F x k) )] 1 F G th 4 M1x k) )) th = x S 1 e k)4 S 2 e k)5 S 3 e k)6 S 1 e k)4 ) S 2 + X 2 S 1 )e k)5 S 3 + X 2 S 2 + X 3 S 1 )e k)6 + Oe k)7 ).

142 124 From the above equation, finally we obtain e k+1) = G 5 th M2x k) ) x = 2 9 C 2C 4 8C 2 2C C C 2 C 3 C 2 )e k)5 + Oe k)6 ). Hence, we see that the method 8.3) has fifth order convergence. Theorem Let F : D R n R n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x R n, that is a solution of the system F x) = 0. Let us suppose that x S = Sx, δ) and F x) is continuous and nonsingular in x, and x 0) close enough to x. Then the sequence {x k) } k 0 obtained using the iterative expression 8.4) converges locally to x with order 6, where the error equation obtained is e k+1) = G 6 th M3x k) ) x = L 3 e k)6 + Oe k)7 ) L 3 = C C2 2C 4 + 4C 3 C 2 C 3 3C 2 3C 2 5C 3 C C3 2C C2 2C 3 C 2. Proof. In this case, P x) = G 4 th M1x), Qx) = H 2 x)[f x)] 1, Rx) = F G 4 th M1x)). We can show by induction that G 4 th M1x ) = x, G 4 th M1 x ) = 0, so that P x ) = G 4 M1x ) = x, H th 2 x ) = I, Qx ) = H 2 x )[F x )] 1 = I[F x )] 1 = [F x )] 1 0, Rx ) = F G 4 M1x )) = F x ) = 0, th P x ) = G 4 th M1 x ) = 0, R x ) = F G 4 M1x ))G th 4 th M1 x ) = 0, G x ) = P x ) Qx )R x ) = 0. So ρg x )) = 0 < 1 and by Ostrowski s Theorem, x is a point of attraction for method 8.4). By using 8.9) in H 2 x k) ), we get H 2 x k) ) = I + 2C 2 e k) C C 3 ) e k)2 + Oe k)3 ). 8.15)

143 125 Again, using 8.14) and 8.15), we have H 2 x k) )[F x k) )] 1 F G 4 th M1x k) )) = S 1 e k)4 S 2 + X 2 S 1 + 2C 2 S 1 )e k)5 S 3 + X 2 S 2 + X 3 S 1 + 2C 2 S 2 + X 2 S 1 ) + 46 ) 9 C C 3 )S 1 e k) ) Then, using 8.12) and 8.16) in 8.4), we have G 6 M3x k) ) = G th 4 M1x k) ) H th 2 x k) )[F x k) )] 1 F G 4 M1x k) )) th = x S 1 e k)4 S 2 e k)5 S 3 e k)6 S 1 e k)4 S 2 + X 2 S 1 + 2C 2 S 1 )e k)5 S 3 + X 2 S 2 + X 3 S 1 + 2C 2 S 2 + X 2 S 1 ) + 46 )) 9 C C 3 )S 1 e k)6 + Oe k)7 ). From above equation, finally we obtain 230 e k+1) = G 6 M3x k) ) x = th 9 C C2 2C 4 + 4C 3 C 2 C 3 3C3C 2 2 5C 3 C C3 2C ) 9 C2 2C 3 C 2 e k)6 + Oe k)7 ). Hence, we see that the method 8.4) has sixth order convergence. Further, we consider the following iterative methods for solving system of nonlinear equations for the purpose of comparison: Method of Babajee et al. 2012) 4 th BCST ): x k+1) = G 4 th BCST x k) ) = x k) W x k) )[A 1 x k) )] 1 F x k) ), A 1 x k) ) = 1 2 F x k) ) + F yx k) ))), W x k) ) = I 1 4 τxk) ) I) τxk) ) I) 2, 8.17) where τx k) ) and yx k) ) as defined in 4 th M1. Method of Grau-Sanchez et al. 2011) 5 th GGN): x k+1) = G 5 GGNx k) ) = zx k) ) F yx k) )) 1 F zx k) )), th zx k) ) = x k) 1 ] [F x k) ) 1 + F yx k) )) 1 F x k) ), 8.18) 2 yx k) ) = x k) [F x k) )] 1 F x k) ).

144 126 Method of Cordero et al. 2012b) 6 th CT V ) : x k+1) = G 6 th CT V x k) ) 1F = zx k) ) + F x k) ) 2F yx ))) k) zx k) )) 8.19) ) 1 ) zx k) ) = x k) + F x k) ) 2F yx k) )) 3F x k) ) 4F yx k) )) yx k) ) = x k) 1 2 [F x k) )] 1 F x k) ) 8.3 Efficiency of the Methods In this section, consider the definitions and given in Section 7.3 for efficiency index and computational efficiency respectively. Table 8.1 shows the comparison of EI and CE for the methods discussed in this Chapter. From this table, we observe that for the values of n = 5 and 10, where n is the size of the system, EI and CE values indicate that the proposed methods have greater value and hence more efficient. Figure 8.1 displays the performance Table 8.1: Comparison of EI and CE Method EI n=5 n=10 CE n=5 n=10 2 nd NM 7.1) 2 1 n+n n3 +2n n th BCST 8.17) 4 1 n+2n n3 +5n n th GGN 8.18) 5 2n+2n n3 +5n n th CT V 8.19) 6 3n+2n n3 +6n n th M1 8.2) 4 1 n+2n n3 +4n n th M2 8.3) 5 2n+2n n3 +5n n th M3 8.4) 6 2n+2n n3 +5n n of different methods with respect to EI and CE. It is observed from the figure, EI and CE for the new method 6 th M3 is better than 2 nd NM and all other compared methods, for all n 2, where n is the size of the system.

145 Efficiency index 2 nd NM 4 th BCST 5 th GGN 6 th CTV 4 th M1 5 th M2 6 th M Computational efficiency index 2 nd NM 4 th BCST 5 th GGN 6 th CTV 4 th M1 5 th M2 6 th M3 EI 1.08 CE n n Figure 8.1: Comparison of Efficiency index and Computational efficiency index 8.4 Numerical examples In this section, we compare the performance of the contributed methods with Newton s method and few existing methods 8.17)-8.19). The numerical experiments have been carried out using the MATLAB software for the examples given below. The approximate solutions are calculated correct to 1000 digits by using variable precision arithmetic. We use the following stopping criterion for the iteration scheme: err min = x k+1) x k) 2 < ) We have used the approximated computational order of convergence p c given by p c log xk+1) x k) 2 / x k) x k 1) 2 ) log x k) x k 1) 2 / x k 1) x k 2) 2 ). 8.21) Let M be the number of iterations required for reaching the minimum residual error err min ), total number of inverse of Fréchet derivatives n inv ) and total number of function evaluations n total ). The n total is counted as the sum of the total number of function evaluations in F and F, as in Sharma et al. 2013). In addition to the test problems TP1, TP2 and TP3 considered in section 7.4, we also consider the following problems: Test Problem 4 TP4) We consider the following nonlinear system: { exp x1 + x 1 x 2 1 = 0, sin x 1 x 2 ) + x 1 + x 2 1 = 0. We solve this system using the initial approximation x 0) = 0.7, 0.9) T. The solution of this system is x 0, 1) T. The Jacobian matrix has 4 non-zero elements given

146 128 by F x) = exp x1 + x 2 x x 2 cos x 1 x 2 ) 1 + x 1 cos x 1 x 2 ) ). Test Problem 5 TP5) We consider the following nonlinear system: x x x 3 3 = 9, x 1 x 2 x 3 = 1, x 1 + x 2 x 2 3 = 0. The solution is x , , ) T. We choose the starting vector x 0) = 3.0, 0.5, 2.0) T. The Jacobian matrix has 9 non-zero elements and it is given by F x) = 2x 1 2x 2 3x 2 3 x 2 x 3 x 1 x 3 x 1 x x 3. Test Problem 6 TP6) We consider the boundary value problem y + y 3 = 0, y0) = 0, y1) = 1, where equal length partitioning of the interval [0, 1] taken as. u 0 = 0 < u 1 < u 2 <... < u m 1 < u m = 1, u j+1 = u j + h, h = 1/m. Let us define y 0 = yu 0 ) = 0, y 1 = yu 1 ),..., y m 1 = yu m 1 ), y m = yu m ) = 1. If we discretize the problem by using the numerical formula for second derivative y = y k 1 2y k + y k+1 h 2, k = 1, 2, 3,..., m 1, then we obtain a system of m 1 nonlinear equations in m 1 variables, y k 1 2y k + y k+1 + h 2 y 3 k = 0, k = 1, 2, 3,..., m 1. In particular, we solve this problem for m = 16 that is n = 15 by selecting y 0) = 1, 1,..., 1) T as the initial value and the Jacobian matrix has 43 non-zero elements

147 129 and it is given by 3h 2 y h 2 y h 2 y h 2 y h 2 y The solution of this problem is x = { , , , , , , , , , , , , , , } T. It is observed from Table 8.2 that the computational order of convergence p c ) overwhelmingly supports the theoretical order of convergence for all the test problems TP1 - TP6). Also, 6 th M3 I.F. requires less number of iterations than 2 nd NM and few other compared methods. However, as far as the total number of function evaluations n total ) and total number of inverse of Fréchet derivatives n inv ) are concerned, the 6 th M3 requires less number than few other compared methods. 8.5 Applications Chandrasekhar s equation Consider the quadratic integral equation related to Chandrasekhar s work Chandrasekhar 1960) and Ezquerro et al. 2010a) xs) = fs) + λxs) 1 0 ks, t)xt)dt, 8.22) that arises in the study of the radiative transfer theory, the transport of neutrons and the kinetic theory of the gases. Equation 8.22) is also studied by Argyros 1985,

148 130 Table 8.2: Numerical results for Test Problems TP1-TP6) TP Methods M p c F F n total n inv err min TP1 2 nd NM 7.1) e th BCST 8.17) e th GGN 8.18) e th CT V 8.19) th M1 8.2) th M2 8.3) e th M3 8.4) TP2 2 nd NM 7.1) e th BCST 8.17) e th GGN 8.18) e th CT V 8.19) e th M1 8.2) e th M2 8.3) e th M3 8.4) e 051 TP3 2 nd NM 7.1) e th BCST 8.17) e th GGN 8.18) e th CT V 8.19) e th M1 8.2) e th M2 8.3) e th M3 8.4) e 179 TP4 2 nd NM 7.1) e th BCST 8.17) e th GGN 8.18) th CT V 8.19) e th M1 8.2) e th M2 8.3) e th M3 8.4) e 082 TP5 2 nd NM 7.1) e th BCST 8.17) e th GGN 8.18) th CT V 8.19) e th M1 8.2) e th M2 8.3) e th M3 8.4) e 092 TP6 2 nd NM 7.1) e th BCST 8.17) e th GGN 8.18) th CT V 8.19) e th M1 8.2) e th M2 8.3) e th M3 8.4) e 100

149 ) and along with some conditions for the kernel ks, t) in Ezquerro et al. 1999). We consider the maximum norm for the kernel ks, t) as a continuous function in s, t [0, 1] such that 0 < ks, t) < 1 and ks, t) + kt, s) = 1. Moreover, we assume that fs) C[0, 1] is a given function and λ is a real constant. Note that finding a solution for 8.22) is equivalent to solving the equation F x) = 0, where F : C[0, 1] C[0, 1] and F x)s) = xs) fs) λxs) In particular, we consider F x)s) = xs) 1 xs) ks, t)xt)dt, x C[0, 1], s [0, 1]. 8.23) s xt)dt, x C[0, 1], s [0, 1], 8.24) s + t Finally, we approximate numerically a solution for F x) = 0, where F x) is given in 8.24) by means of a discretization procedure. We solve the integral equation 8.24) by the Gauss-Legendre quadrature formula: 1 0 ft)dt 1 2 m β j ft j ), 8.25) j=1 where β j are the weights and t j are the knots tabulated in Table 8.3 for m = 8. Denote x i for the approximations of xt i ), i = 1, 2,...8, we obtain the following nonlinear system: x i x i 8 a ij x j, where a ij = j=1 t iβ j, i = 1, ) 8t i + t j ) Table 8.3: Weights and knots for the Gauss-Legendre formula m = 8) j t j β j

150 132 Table 8.4: Numerical results for Chandrasekhar s equation Methods M F F n total n inv err min 2 nd NM 7.1) e th BCST 8.17) e th GGN 8.18) e th CT V 8.19) th M1 8.2) e th M2 8.3) th M3 8.4) e 016 The stopping criterion for this problem is taken as err min = x k+1) x k) 2 < 10 13, the initial approximation assumed is x 0) = {1, 1,..., 1} t for obtaining the solution of this problem given by x = { , , , , , , , } t. Table 8.4 shows that the proposed methods require less n inv than other compared methods D Bratu problem The 1-D Bratu problem Buckmire 2003) and Babajee et al. 2015b)) is given by d 2 U + λ exp Ux) = 0, λ > 0, 0 < x < 1, 8.27) dx2 with the boundary conditions U0) = U1) = 0. The same problem has been already considered in Section 7.5. Hence, we give only the numerical results by applying the proposed methods of this chapter. Table 8.5: Comparison of number of λ s out of 350 λ s) for 1-D Bratu problem Method M = 2 M = 3 M = 4 M = 5 M > 5 M λ 2 nd NM 7.1) th BCST 8.17) th GGN 8.18) th CT V 8.19) th M1 8.2) th M2 8.3) th M3 8.4)

151 133 Table 8.5 shows the results for 1-D Bratu problem, where M represents the number of iterations for convergence. It can be observed from Table 8.5, the proposed methods 5 th M2, 6 th M3) are more efficient among the compared methods because it has lowest mean iteration number M λ ). 8.6 Concluding Remarks In this chapter, some efficient new iterative methods of order four, five and six by using weight functions to solve systems of nonlinear equations have been proposed. The new methods do not require the evaluation of second or higher order Fréchet derivatives to reach fourth or higher order of convergence. Also, they require evaluation of only one inverse of first order Fréchet derivative and calculate less number of linear systems per iteration. We have proved that x is a point of attraction for the new methods. Few examples have been verified using the proposed methods and compared them with some known methods, which illustrate the superiority of the new methods. From the graphical figures, the efficiency and computational efficiency of the new methods are found to be superior over Newton s method and some existing equivalent methods. The proposed new methods have been applied on two application problems called Chandrasekhar s equation and 1-D Bratu problem. The results show that new methods can be better alternative to Newton s method and some of the existing higher order methods.

152 Chapter 9 An improvement to double-step Newton-type method and its multi-step version In this chapter, we have improved the order of the double-step Newton-type method from four to five using the same number of evaluation of two functions and two first order Fréchet derivatives for each iteration. The multi-step version requires one more function evaluation for each step. The multi-step version converges with order 3r + 5, r 1. Numerical experiments compare the new methods with some existing methods. Our methods are also tested on Chandrasekhar s problem and 2-D Bratu problem to illustrate the applications. In section 9.1, we have proposed a 2-step fifth order method which is an improvement over the 2-step Newton method, which uses two functions and two Fréchet derivative evaluations and only one inverse. A multi-step version with order 3r + 5, r 1 for solving a system of nonlinear equations is also suggested which uses one more additional functional evaluation of each step. Section 9.2 derives convergence analysis of the new methods. In section 9.3, efficiency indices and computational efficiency indices for the new methods are discussed. In section 9.4, numerical examples and their results are discussed comparing with some existing methods. In section 9.5, two application problems are solved using the present method and some existing methods. Concluding remarks are given in the last section. Outcome of the new methods are given in concluding in remarks which shows that our methods are efficient. 134

153 Construction of new methods One of the basic procedure for solving system of nonlinear equations is the classical one-step second order Newton s method 2 nd NM) given by x k+1) = G 2 NMx k) ) = x k) [F x k) )] 1 F x k) ), k = 0, 1, 2, ) nd where [F x k) )] 1 is the inverse of first Fréchet derivative F x k) ) of the function F x k) ). It is straightforward to see that this method requires the evaluation of one function, one first derivative and one matrix inversion per iteration. Traub 1964) suggested that multi-step iterative methods are better way to improve the order of convergence free from second derivatives, such modifications of Newton s method have been proposed in the literature; for example see Cordero et al. 2010b), Abad et al. 2013), Noor et al. 2013), Sharma et al. 2013) and Babajee et al. 2015b) and references therein. Traub 1964) proposed a two step variant of Newton s method 3 rd T M) having convergence order three by evaluating two functions, one Fréchet derivatives and its inverse for x k+1) = G 3 rd T Mx k) ) = G 2 nd NMx k) ) [F x k) ))] 1 F G 2 nd NMx k) )). 9.2) The double-step fourth order Newton method 4 th NR) is given by x k+1) = G 4 th NRx k) ) = G 2 nd NMx k) ) [F G 2 nd NMx k) ))] 1 F G 2 nd NMx k) )), 9.3) which was recently rediscovered by Noor et al. 2013) using the variational iteration technique, where two functions, two Fréchet derivatives and their inverse were evaluated. Recently, Abad et al. 2013) combined the Newton and Traub method to obtain a 3-step fourth order method 4 th ACT ), where two functions, two Fréchet derivatives and their inverse were evaluated x k+1) = G 4 th ACT x k) ) = G 2 nd NMx k) ) [F G 3 rd T Mx k) ))] 1 F G 2 nd NMx k) )). 9.4) Again in Abad et al. 2013), a different combination to get a 3-step fifth order method 5 th ACT ), where three functions, two Fréchet derivatives and their inverse were

154 136 evaluated for x k+1) = G 5 th ACT x k) ) = G 3 rd T Mx k) ) [F G 2 nd NMx k) ))] 1 F G 3 rd T Mx k) )). 9.5) New double-step fifth order method 5 th MBJ): x k+1) = G 5 MBJx k) ) th = G 2 NMx k) ) H nd 1 x k) )[F x k) )] 1 F G 2 NMx k) )), nd H 1 x k) ) = 2I τx k) ) τxk) ) I) 2, 9.6) τx k) ) = [F x k) )] 1 F G 2 NMx k) ) ), nd where I is the n n identity matrix. This method uses two function and two Fréchet derivative evaluations and only one inverse to reach fifth order convergence. New multi-step 3r + 5) th order method 3r + 5) th MBJ): We improve the 5 th MBJ method by an additional function evaluation to get the multi-step version 3r + 5) th MBJ method is given by x k+1) = G 3r+5) MBJx k) ) = µ th r x k) ), µ j x k) ) = µ j 1 x k) ) H 2 x k) )[F x k) )] 1 F µ j 1 x k) )), 9.7) H 2 x k) ) = 2I τx k) ) τxk) ) I) 2, µ 0 x k) ) = G 5 MBJx k) ), th j = 1, 2,..., r, r 1. Remark This multi-step version has order 3r + 5, r 1. The case r = 0 is the 5 th MBJ method given in 9.6). These two methods 9.6) and 9.7) have been proposed in Madhu et al. 2016). 9.2 Convergence Analysis of the methods In order to prove the convergence results, we recall some important definitions and results from the theory of point of attraction given in Section 7.2. Theorem Let F : D R n R n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x R n, that is a solution of the system F x) = 0. Let us suppose that F x) is continuous and nonsingular

155 137 in x, and x 0) close enough to x. Then the sequence {x k) } k 0 obtained using the iterative expression 9.6) converges locally to x with order 5, where the error equation obtained is e k+1) = G 5 th MBJx k) ) x = L 1 e k)5 + Oe k)6 ), L 1 = 14C C 2C 3 C C 2 2C C 3C ) Proof. We first show that x is a point of attraction using Theorem In this case, P x) = G 2 nd NMx), Qx) = H 1 x)[f x)] 1, Rx) = F G 2 nd NMx)). Now, since F x ) = 0, we have G 2 NMx ) = x [F x )] 1 F x ) = x, nd τx ) = F x ) 1 F G 2 NMx )) = [F x )] 1 F x ) = I, H nd 1 x ) = I, P x ) = G 2 NMx ), P x ) = G nd 2 nd NM x ) = 0, Qx ) = H 1 x )[F x )] 1 = I[F x )] 1 = [F x )] 1 0, Rx ) = F G 2 NMx )) = F x ) = 0, nd R x ) = F G 2 NMx ))G nd 2 nd NM x ) = 0, G x ) = P x ) Qx )R x ) = 0, so that ρg x )) = 0 < 1 and by Ostrowski s theorem, x is a point of attraction of equation 9.6). From 7.5) and 7.6) we obtain and [ F x k) ) = F x ) e k) + C 2 e k)2 + C 3 e k)3 + C 4 e k)4 + C 5 e k)5] + Oe k)6 ), [ F x k) ) = F x ) I + 2C 2 e k) + 3C 3 e k)2 + 4C 4 e k)3 + 5C 5 e k)4] + Oe k)5 ). We have [F x k) )] 1 = 9.9) 9.10) [ I + X 1 e k) + X 2 e k)2 + X 3 e k)3 + X 4 e k)4] [F x )] 1 +Oe k)5 ), 9.11)

156 138 where X 1 = 2C 2, X 2 = 4C 2 2 3C 3, X 3 = 8C C 2 C 3 + 6C 3 C 2 4C 4 and X 4 = 5C 5 + 9C C 2 C 4 + 8C 4 C C C 2 2C 3 12C 3 C C 2 C 3 C 2. Then [F x k) )] 1 F x k) ) = e k) C 2 e k)2 + 2C2 2 C 3 )e k)3 ) + 3C 4 4C C 2 C 3 + 3C 3 C 2 e k)4 9.12) + 6C C2 4 8C2C 2 3 6C 2 C 3 C 2 6C 3 C2 2 +6C 2 C 4 + 4C 4 C 2 4C 5 )e k)5 + Oe k)6 ). Also we have ) G 2 NMx k) ) = x + C nd 2 e k)2 + 2 C 22 + C 3 e k)3 + 3C 4 + 4C2 3 4C 2 C 3 3C 3 C 2 )e k)4 + 6C3 2 8C C2C ) + 6C 2 C 3 C 2 + 6C 3 C 2 2 6C 2 C 4 4C 4 C 2 + 4C 5 )e k)5. Expanding F G 2 nd NMx k) )) and F G 2 nd NMx k) )) about x in Taylor series respectively given below [ F G 2 NMx k) )) = F x ) G nd 2 NMx k) ) x ) + C nd 2 G 2 NMx k) ) x ) 2 nd ] + C 3 G 2 NMx k) ) x ) nd [ = F x ) C 2 e k)2 + 2 C2 2 + C 3 )e k)3 + 3C 4 + 5C2 3 4C 2 C 3 3C 3 C 2 )e k)4 + 6C3 2 12C C2C 2 3 where + 8C 2 C 3 C 2 + 6C 3 C2 2 6C 2 C 4 4C 4 C 2 + 4C 5 )e k)5], 9.14) [ F G 2 NMx k) )) = F x ) I + 2C nd 2 G 2 NMx k) ) x ) nd ] + 3C 3 G 2 NMx k) ) x ) nd [ = F x ) I + P 1 e k)2 + P 2 e k)3 + P 3 e k)4] + Oe k)5 ), 9.15) P 1 = 2C 2 2, P 2 = 4C 2 C 3 4C 3 2, P 3 = 8C C 2 C 4 8C 2 2C 3 + 3C 3 C 2 2 6C 2 C 3 C 2.

157 139 Using equations 9.11) and 9.15), we have ) [F x k) )] 1 F G 2 NMx k) )) = I 2C nd 2 e k) + 6C 22 3C 3 ) + 10C 2 C 3 + 6C 3 C 2 16C 32 4C 4 e k)3 + 5C 5 + 9C C C 2 C 4 + 8C 4 C 2 e k)2 9.16) 28C 2 2C 3 15C 3 C C 2 C 3 C 2 )e k)4 + Oe k)5 ). Then H 1 x k) ) = 2I τx k) ) τxk) ) I) 2 ) = I + 2C 2 e k) C 22 3C 3 e k) C 2C ) 2 C 3C 2 14C C 4 e k)3 + Oe k)4 ).9.17) Using equations 9.11) and 9.14), we have ) [F x k) )] 1 F G 2 NMx k) )) = C nd 2 e k)2 + 2C 3 4C2 2 e k)3 + 13C2 3 8C 2 C 3 6C 3 C 2 + 3C 4 )e k)4 + 12C3 2 38C C2C ) Then + 20C 2 C 3 C C 3 C C 2 C 4 8C 4 C 2 + 4C 5 )e k)5 + Oe k)6 ). H 1 x k) )[F x k) )] 1 F G 2 nd NMx k) )) = C 2 e k)2 + 2C 3 2C 2 2)e k)3 + 3C 4 + 4C 3 2 4C 2 C 3 3C 3 C 2 )e k)4 + 6C C 5 6C 2 C 4 4C 4 C 2 22C C 2C 3 C 2 4C 2 2C C 3C 2 2)e k)5 + Oe k)6 ). Using equations 9.13) and 9.19) in 9.6), we have 9.19) e k+1) = 14C C 2C 3 C C 2 2C C 3C 2 2)e k)5 + Oe k)6 ), 9.20) which proves fifth order convergence. Theorem Let F : D R n R n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x R n that is a solution of the system F x) = 0. Let us suppose that x S = Sx, δ) and F x) is continuous and nonsingular in x, and x 0) is close enough to x. Then x is a point of attraction

158 140 of the sequence {x k) } obtained using the iterative expression 9.7). Furthermore the sequence converges locally to x with order 3r + 5, where r is a positive integer and r 1. Proof. In this case, P x) = µ j 1 x), Qx) = H 2 x)f x) 1, Rx) = F µ j 1 x)), j = 1,..., r. We can show by induction that µ j 1 x ) = x, µ j 1x ) = 0, j = 1,..., r so that P x ) = µ j 1 x ) = x, H 2 x ) = I, Qx ) = I[F x )] 1 = [F x )] 1 0, Rx ) = F µ j 1 x )) = F x ) = 0, P x ) = µ j 1x ) = 0, R x ) = F µ j 1 x ))µ j 1x ) = 0, G x ) = P x ) Qx )R x ) = 0. So ρg x )) = 0 < 1 and by Ostrowski s theorem, x is a point of attraction of equation 9.7). A Taylor expansion of F µ j 1 x k) )) about x yields F µ j 1 x k) )) = F x ) [ µ j 1 x k) ) x ) + C 2 µ j 1 x k) ) x ) ] 9.21) Also, let H 2 x k) ) = I + 2C 2 e k) + 3C 3 e k)2 + C 2 C 3 20C C 3 C 2 + 4C 4 ) e k) Using equations 9.11) and 9.22), we have H 2 x k) )[F x k) )] 1 = 9.22) [ ] I + L 2 e k) [F x )] 1, 9.23) where L 2 = 20C 3 2 C 2 C 3 + 3C 3 C 2. Using equations 9.23) and 9.21), we have H 2 x k) )F x k) ) 1 F µ j 1 x k) )) ) ) = I + L 2 e k) µ j 1 x k) ) x ) + C 2 µ j 1 x k) ) x ) = µ j 1 x k) ) x + L 2 e k)3 µ j 1 x k) ) x ) + C 2 µ j 1 x k) ) x ) )

159 141 Using equation 9.24) in equation 9.7), we obtain µ j x k) ) x = µ j 1 x k) ) x ) + C 2 µ j 1 x k) ) x ) µ j 1 x k) ) x + L 2 e k)3 µ j 1 x k) ) x ) = L 2 e k)3 µ j 1 x k) ) x ) ) As we know that µ 0 x k) ) x = Oe k)5 ) and from equation 9.25), for j = 1, 2,... ) µ 1 x k) ) x = L 2 e k)3) ) µ 0 x k) ) x +... = L 2 L 1 e k) ) µ 2 x k) ) x = L 2 e k)3) ) µ 1 x k) ) x +... = L 2 L 2 L 1 )e k) = L 2 2L 1 e k) Proceeding by induction, we have µ r x k) ) x = L 2 ) r L 1 e k)3r+5) ) + Oe k)3r+6) ), r ) which shows that the method has 3r + 5 order of convergence. Remark Multi-step version 3r + 5) th MBJ r 0) methods are constructed from 4 + r evaluation of F and F together. Only one inverse evaluation of Fréchet derivatives F at x k) ) is used for the proposed method 9.7). 9.3 Efficiency of the Methods In this section, consider the definitions and given in Section 7.3 for efficiency index and computational efficiency respectively. Table 9.1 shows the comparison of EI and CE for the methods given in Section 9.1. From this table, we observe that for the values of n = 5 and 10, where n is the size of the system, EI and CE values indicate that the proposed methods have greater value and hence more efficient. Figure 9.1 displays the performance of different methods with respect to EI and CE. It is observed from the figure, EI and CE for the

160 142 Table 9.1: Comparison of EI and CE Method EI n=5 n=10 CE n=5 n=10 2 nd NM 9.1) 2 1 n+n n3 +2n n rd T M 9.2) 3 1 2n+n n3 +3n n th NR 9.3) 4 2n+2n n3 +4n n th ACT 9.4) 4 2n+2n n3 +5n n th ACT 9.5) 5 3n+2n n3 +5n n th MBJ 9.6) 5 2n+2n n3 +5n n th MBJ 9.7) 8 3n+2n n3 +6n n EI Efficiency index 2 nd NM 3 rd TM 4 th NR 4 th ACT 5 th ACT 5 th MBJ 8 th MBJ CE Computational efficiency index 2 nd NM 3 rd TM 4 th NR 4 th ACT 5 th ACT 5 th MBJ 8 th MBJ n n Figure 9.1: Comparison of Efficiency index and Computational efficiency index new method 8 th MBJ is better than 2 nd NM and other compared methods, for all n 2, where n is the size of the system. 9.4 Numerical examples The numerical experiments have been carried out using the MATLAB software for the examples given below. The approximate solutions are calculated correct to 1000 digits by using variable precision arithmetic. We use the following stopping criterion for the iteration scheme: err min = x k+1) x k) 2 < )

161 143 We have used the approximated computational order of convergence p c given by p c log xk+1) x k) 2 / x k) x k 1) 2 ) log x k) x k 1) 2 / x k 1) x k 2) 2 ). 9.28) Let M be the number of iterations required for reaching the minimum residual error err min ). We consider test problems TP3 - TP6 given in section 7.4 and 8.4 in addition to the following test problems as examples: Test Problem 7 TP7) We consider the following nonlinear system: F x 1, x 2 ) = x 1 + expx 2 ) cosx 2 ), 3x 1 x 2 sinx 2 )). The Jacobian matrix is given by F x) = 1 expx2 ) + sinx 2 ) 3 1 cosx 2 ) vector is x 0) = 1.5, 2) T and the exact solution is x = 0, 0) T. Test Problem 8 TP8) We consider the following nonlinear system: { xi x i+1 1 = 0, i = 1, 2, 3,...15, x 15 x 1 1 = 0. The solution is x = 1, 1, 1,..., 1) T. We choose the starting vector ). The starting x 0) = 1.5, 1.5, 1.5,..., 1.5) T. The Jacobian matrix has 30 non-zero elements and it is given by x 2 x x 3 x F x) = x 15 x 14 x x 1 Tables 9.2 to 9.4 show the results for the test problems, from which we conclude that 8 th MBJ and 11 th MBJ methods are the most efficient methods out of the methods compared with the least number of iterations and residual error.

162 144 Table 9.2: Comparison of different methods for system of nonlinear equations Methods TP3 TP4 M err min p c M err min p c 2 nd NM 9.1) 8 3.9e e rd T M 9.2) 6 8.8e e th NR 9.3) 5 2.9e e th ACT 9.4) 5 3.8e e th ACT 9.5) 4 5.7e e th MBJ 9.6) 4 5.0e e th MBJ 9.7) e th MBJ 9.7) Table 9.3: Comparison of different methods for system of nonlinear equations Methods TP5 TP6 M err min p c M err min p c 2 nd NM 9.1) 9 1.7e e rd T M 9.2) 6 3.4e e th NR 9.3) 5 1.7e e th ACT 9.4) 5 1.9e e th ACT 9.5) 5 2.4e th MBJ 9.6) 5 6.7e e th MBJ 9.7) 4 2.0e th MBJ 9.7) Applications Chandrasekhar s equation Consider the quadratic integral equation related to Chandrasekhar s work Chandrasekhar 1960), Ezquerro et al. 2010a) xs) = fs) + λxs) 1 0 ks, t)xt)dt, 9.29) which arises in the study of the radiative transfer theory, the transport of neutrons and the kinetic theory of the gases see detailed discussion in section 8.5). For this application, we use the following stopping criterion err min = x k+1) x k) 2 < 10 5, the initial approximation assumed is x 0) = {1, 1,..., 1} t for obtaining the solution

163 145 Table 9.4: Comparison of different methods for system of nonlinear equations Methods TP7 TP8 M err min p c M err min p c 2 nd NM 9.1) e e rd T M 9.2) 7 9.6e e th NR 9.3) 6 5.3e e th ACT 9.4) 6 2.8e e th ACT 9.5) th MBJ 9.6) 6 1.0e e th MBJ 9.7) e th MBJ 9.7) 4 4.5e Table 9.5: Comparison of iteration and errors for Chandrasekhar s equation M 2 nd NM 3 rd T M 4 th NR 4 th ACT 5 th ACT 5 th MBJ 1 4.9e e e e e e e e e e e e e e e e e of this problem given by x = { , , , , , , , } t. Table 9.5 compares the iteration numbers and their errors for this application. The results show that the proposed method 5 th MBJ is better than 2 nd NM and some other methods D Bratu problem We consider the solution of the Bratu problem in two-dimensions 2 U x + 2 U + λ expu) = 0, x, y D = [0, 1] [0, 1] 9.30) 2 y2 subject to the boundary conditions Ux, y) = 0, x, y D, where D is the boundry of domain D. 9.31) The 2-D Planar Bratu problem has two known, bifurcated, exact solutions for values of λ < λ c, one solution for λ = λ c and no solutions for λ > λ c. The exact solution

164 Variation of θ with λ θ 10 λ c λ Figure 9.2: Variation of θ for different values of λ. to equation 9.30) is known and can be presented here as [ cosh θ 4 Ux, y) = 2 ln ) cosh x 1)y 1)θ) ] 2 2 cosh x 1) )) θ 2 2 cosh y 1 ) )), 9.32) θ 2 2 where θ is a constant to be determined, which satisfies the boundary conditions and is carefully chosen and assumed to be the solution of the differential equation 9.30). The following procedure found in Odejide and Aregbesola 2006), for how to obtain the critical value of λ. Substituting equation 9.32) in 9.30), simplifying and collocating at the point x = 1 2 and y = 1 because it is the midpoint of the interval. 2 Another point could be chosen, but low-order approximations are likely to be better if the collocation points are distributed somewhat evenly throughout the region. Then, we have θ 2 = λ cosh 2 θ 4 ). 9.33) Differentiating equation 9.33) with respect to θ and setting dλ = 0, the critical dθ value λ c satisfies θ = 1 ) ) θ θ 4 λ c cosh sinh. 9.34) 4 4 By eliminating λ from equations 9.33) and 9.34), we have the value of θ c for the critical λ c satisfying ) θ c 4 = coth θc ) and θ c = We then get λ c = from equation 9.34). Fig. 9.2) illustrates this critical value of λ c. The differential equation 9.30) is usually discretized by using the finite difference five point formula with the step size h,

165 147 the resulting nonlinear equations are F U i,j ) = 4U i,j λh 2 expu i,j )) + U i+1,j + U i 1,j + U i,j+1 + U i,j ) where U i,j is U at x i, y j ), x i = ih, y j = jh, i, j = 1, 2,...n. Equation 9.36) represents a set of n n nonlinear equations in U i,j which are then solved by using iterative methods. We use n = 10 and n = 20 for testing 700 λ s in the interval 0, 7] h = 0.01). For each λ, let M λ be the minimum number of iterations for which U k+1) i,j U k) i,j 2 < 1e-11, where the approximation U k) i,j is calculated correct to 14 decimal places. Let M λ be the mean iteration number for the 700 λ s. The solution for the 2-D bratu problem for n=10 is given by x = {0.0656, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , } t Table 9.6: Comparison of number of λ s for 2-D Bratu problem for n = 10 Method M = 2 M = 3 M = 4 M = 5 M λ 2 nd NM 9.1) rd T M 9.2) th NR 9.3) th ACT 9.4) th ACT 9.5) th MBJ 9.6) th MBJ 9.7)

166 148 Table 9.7: Comparison of number of λ s for 2-D Bratu problem for n = 20 Method M = 2 M = 3 M = 4 M = 5 M λ 2 nd NM 9.1) rd T M 9.2) th NR 9.3) th ACT 9.4) th ACT 9.5) th MBJ 9.6) th MBJ 9.7) Tables give the results for 2-D Bratu problem, where M represents the number of iterations for convergence. It can be observed from Table 9.7, proposed method 8 th MBJ is convergent for all the grid points in two iterations. Also, the method 8 th MBJ is the most efficient method among the compared methods for the cases n = 10 and n = 20 because it has the lowest mean iteration number. 9.6 Concluding Remarks In this chapter, a double step fifth order method which is an improvement to the 2-step Newton s method and its multi-step version having higher order convergence using weight functions to solve systems of nonlinear equations have been proposed. The proposed methods do not require evaluation of second or higher order Fréchet derivatives to reach fifth order or higher order of convergence, evaluate only one inverse of first order Fréchet derivative. We have verified that the root x is a point of attraction based on the theory given Ortega and Rheinbolt 1970). A few examples have been verified using the proposed methods and compared them with some known methods, which illustrate the superiority of the new methods. The proposed new methods have been applied on two practical problems called Chandrasekhar s equation and 2-D Bratu problem. The results obtained are interesting and encouraging for the new methods. Hence, the proposed methods can be considered competent enough to Newton s method and some of the existing higher order methods.

167 Chapter 10 Application in Global Positioning System 10.1 Introduction The Global Positioning System GPS) is all weather and space based navigation system. It is a constellation of a minimum of 24 satellites in near circular orbits, positioned at an approximate height of km above from the earth. The satellites travel with a velocity of 3.9 km/sec with an orbital period of 11 hours 58 minutes. From the satellite constellation, the equations required for solving the user position conform a nonlinear system of equations. In addition, some practical considerations will be included in these equations. These equations are usually solved through a linearization technique and fixed point iteration method. That solution of an equation is in a Cartesian coordinate system, and then it is converted into a spherical coordinated system. However, the Earth is not a perfect sphere. Therefore, once the user position is estimated, the shape of the Earth must be taken into consideration. The user position is then translated into the Earth based coordinate system. In this Chapter, we are going to focus our attention on solving the nonlinear system of equations of the GPS giving the results in a Cartesian coordinate system. The position of a point in space can be found by using the distances measured from this point to some known position in space. Figure 10.1 shows a two dimensional case. In order to determine the user position, three satellites S 1, S 2 and S 3 and three distances are required. The trace of a point with constant distance to a fixed point is a circle in the two-dimensional case. Two satellites and two distances give two possible solutions because two circles intersect at two points. One more circle is needed to uniquely 149

168 150 Figure 10.1: Two dimensional user position determine the user position. For similar reasons in a three-dimensional case, four satellites and four distances are needed. Fig shows that three dimensional Figure 10.2: Three dimensional user position case and it is taken from Griffin 2011). The equal-distance trace to a fixed point is a sphere in a three-dimensional case. A GPS receiver knows the location of the satellites because that information is included in the transmitted Ephemeris data. By estimating how far away a satellite is, the receiver also knows it is located somewhere on the surface of an imaginary sphere centered at the satellite. We can find more information about GPS in Tsui 2005) and Abad et al. 2013) Basic Equations for Finding User Position In this section, the basic equations for determining the user position are presented. Assume that the distance measured is accurate and under this condition, three satellites should be sufficient. Let us suppose that there are three known points at locations r 1 or x 1, y 1, z 1 ), r 2 or x 2, y 2, z 2 ), and r 3 or x 3, y 3, z 3 ) and an unknown point

Sixth Order Newton-Type Method For Solving System Of Nonlinear Equations And Its Applications

Sixth Order Newton-Type Method For Solving System Of Nonlinear Equations And Its Applications Applied Mathematics E-Notes, 17(017), 1-30 c ISSN 1607-510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Sixth Order Newton-Type Method For Solving System Of Nonlinear Equations

More information

A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations

A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations F. Soleymani Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan, Iran E-mail: fazl_soley_bsb@yahoo.com; Tel:

More information

Two New Predictor-Corrector Iterative Methods with Third- and. Ninth-Order Convergence for Solving Nonlinear Equations

Two New Predictor-Corrector Iterative Methods with Third- and. Ninth-Order Convergence for Solving Nonlinear Equations Two New Predictor-Corrector Iterative Methods with Third- and Ninth-Order Convergence for Solving Nonlinear Equations Noori Yasir Abdul-Hassan Department of Mathematics, College of Education for Pure Science,

More information

Two Point Methods For Non Linear Equations Neeraj Sharma, Simran Kaur

Two Point Methods For Non Linear Equations Neeraj Sharma, Simran Kaur 28 International Journal of Advance Research, IJOAR.org Volume 1, Issue 1, January 2013, Online: Two Point Methods For Non Linear Equations Neeraj Sharma, Simran Kaur ABSTRACT The following paper focuses

More information

Modified Jarratt Method Without Memory With Twelfth-Order Convergence

Modified Jarratt Method Without Memory With Twelfth-Order Convergence Annals of the University of Craiova, Mathematics and Computer Science Series Volume 39(1), 2012, Pages 21 34 ISSN: 1223-6934 Modified Jarratt Method Without Memory With Twelfth-Order Convergence F. Soleymani,

More information

NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS

NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS Bulletin of Mathematical Analysis and Applications ISSN: 181-191, URL: http://www.bmathaa.org Volume 3 Issue 4(011, Pages 31-37. NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS

More information

SOLVING NONLINEAR EQUATIONS USING A NEW TENTH-AND SEVENTH-ORDER METHODS FREE FROM SECOND DERIVATIVE M.A. Hafiz 1, Salwa M.H.

SOLVING NONLINEAR EQUATIONS USING A NEW TENTH-AND SEVENTH-ORDER METHODS FREE FROM SECOND DERIVATIVE M.A. Hafiz 1, Salwa M.H. International Journal of Differential Equations and Applications Volume 12 No. 4 2013, 169-183 ISSN: 1311-2872 url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijdea.v12i4.1344 PA acadpubl.eu SOLVING

More information

Geometrically constructed families of iterative methods

Geometrically constructed families of iterative methods Chapter 4 Geometrically constructed families of iterative methods 4.1 Introduction The aim of this CHAPTER 3 is to derive one-parameter families of Newton s method [7, 11 16], Chebyshev s method [7, 30

More information

High-order Newton-type iterative methods with memory for solving nonlinear equations

High-order Newton-type iterative methods with memory for solving nonlinear equations MATHEMATICAL COMMUNICATIONS 9 Math. Commun. 9(4), 9 9 High-order Newton-type iterative methods with memory for solving nonlinear equations Xiaofeng Wang, and Tie Zhang School of Mathematics and Physics,

More information

Improving homotopy analysis method for system of nonlinear algebraic equations

Improving homotopy analysis method for system of nonlinear algebraic equations Journal of Advanced Research in Applied Mathematics Vol., Issue. 4, 010, pp. -30 Online ISSN: 194-9649 Improving homotopy analysis method for system of nonlinear algebraic equations M.M. Hosseini, S.M.

More information

IMPROVING THE CONVERGENCE ORDER AND EFFICIENCY INDEX OF QUADRATURE-BASED ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS

IMPROVING THE CONVERGENCE ORDER AND EFFICIENCY INDEX OF QUADRATURE-BASED ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS 136 IMPROVING THE CONVERGENCE ORDER AND EFFICIENCY INDEX OF QUADRATURE-BASED ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS 1Ogbereyivwe, O. and 2 Ojo-Orobosa, V. O. Department of Mathematics and Statistics,

More information

Finding simple roots by seventh- and eighth-order derivative-free methods

Finding simple roots by seventh- and eighth-order derivative-free methods Finding simple roots by seventh- and eighth-order derivative-free methods F. Soleymani 1,*, S.K. Khattri 2 1 Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan, Iran * Corresponding

More information

Chebyshev-Halley s Method without Second Derivative of Eight-Order Convergence

Chebyshev-Halley s Method without Second Derivative of Eight-Order Convergence Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 12, Number 4 2016, pp. 2987 2997 Research India Publications http://www.ripublication.com/gjpam.htm Chebyshev-Halley s Method without

More information

Some Third Order Methods for Solving Systems of Nonlinear Equations

Some Third Order Methods for Solving Systems of Nonlinear Equations Some Third Order Methods for Solving Systems of Nonlinear Equations Janak Raj Sharma Rajni Sharma International Science Index, Mathematical Computational Sciences waset.org/publication/1595 Abstract Based

More information

Research Article Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear Equations

Research Article Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear Equations International Engineering Mathematics, Article ID 828409, 11 pages http://dx.doi.org/10.1155/2014/828409 Research Article Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear

More information

A New Fifth Order Derivative Free Newton-Type Method for Solving Nonlinear Equations

A New Fifth Order Derivative Free Newton-Type Method for Solving Nonlinear Equations Appl. Math. Inf. Sci. 9, No. 3, 507-53 (05 507 Applied Mathematics & Information Sciences An International Journal http://dx.doi.org/0.785/amis/090346 A New Fifth Order Derivative Free Newton-Type Method

More information

A Fifth-Order Iterative Method for Solving Nonlinear Equations

A Fifth-Order Iterative Method for Solving Nonlinear Equations International Journal of Mathematics and Statistics Invention (IJMSI) E-ISSN: 2321 4767, P-ISSN: 2321 4759 www.ijmsi.org Volume 2 Issue 10 November. 2014 PP.19-23 A Fifth-Order Iterative Method for Solving

More information

Solving Nonlinear Equations Using Steffensen-Type Methods With Optimal Order of Convergence

Solving Nonlinear Equations Using Steffensen-Type Methods With Optimal Order of Convergence Palestine Journal of Mathematics Vol. 3(1) (2014), 113 119 Palestine Polytechnic University-PPU 2014 Solving Nonlinear Equations Using Steffensen-Type Methods With Optimal Order of Convergence M.A. Hafiz

More information

A three point formula for finding roots of equations by the method of least squares

A three point formula for finding roots of equations by the method of least squares Journal of Applied Mathematics and Bioinformatics, vol.2, no. 3, 2012, 213-233 ISSN: 1792-6602(print), 1792-6939(online) Scienpress Ltd, 2012 A three point formula for finding roots of equations by the

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

An efficient Newton-type method with fifth-order convergence for solving nonlinear equations

An efficient Newton-type method with fifth-order convergence for solving nonlinear equations Volume 27, N. 3, pp. 269 274, 2008 Copyright 2008 SBMAC ISSN 0101-8205 www.scielo.br/cam An efficient Newton-type method with fifth-order convergence for solving nonlinear equations LIANG FANG 1,2, LI

More information

SOME MULTI-STEP ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS

SOME MULTI-STEP ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS Open J. Math. Sci., Vol. 1(017, No. 1, pp. 5-33 ISSN 53-01 Website: http://www.openmathscience.com SOME MULTI-STEP ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS MUHAMMAD SAQIB 1, MUHAMMAD IQBAL Abstract.

More information

A Novel Computational Technique for Finding Simple Roots of Nonlinear Equations

A Novel Computational Technique for Finding Simple Roots of Nonlinear Equations Int. Journal of Math. Analysis Vol. 5 2011 no. 37 1813-1819 A Novel Computational Technique for Finding Simple Roots of Nonlinear Equations F. Soleymani 1 and B. S. Mousavi 2 Young Researchers Club Islamic

More information

Dynamical Behavior for Optimal Cubic-Order Multiple Solver

Dynamical Behavior for Optimal Cubic-Order Multiple Solver Applied Mathematical Sciences, Vol., 7, no., 5 - HIKARI Ltd, www.m-hikari.com https://doi.org/.988/ams.7.6946 Dynamical Behavior for Optimal Cubic-Order Multiple Solver Young Hee Geum Department of Applied

More information

Document downloaded from:

Document downloaded from: Document downloaded from: http://hdl.handle.net/1051/56036 This paper must be cited as: Cordero Barbero, A.; Torregrosa Sánchez, JR.; Penkova Vassileva, M. (013). New family of iterative methods with high

More information

Basins of Attraction for Optimal Third Order Methods for Multiple Roots

Basins of Attraction for Optimal Third Order Methods for Multiple Roots Applied Mathematical Sciences, Vol., 6, no., 58-59 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.988/ams.6.65 Basins of Attraction for Optimal Third Order Methods for Multiple Roots Young Hee Geum Department

More information

Three New Iterative Methods for Solving Nonlinear Equations

Three New Iterative Methods for Solving Nonlinear Equations Australian Journal of Basic and Applied Sciences, 4(6): 122-13, 21 ISSN 1991-8178 Three New Iterative Methods for Solving Nonlinear Equations 1 2 Rostam K. Saeed and Fuad W. Khthr 1,2 Salahaddin University/Erbil

More information

Development of a Family of Optimal Quartic-Order Methods for Multiple Roots and Their Dynamics

Development of a Family of Optimal Quartic-Order Methods for Multiple Roots and Their Dynamics Applied Mathematical Sciences, Vol. 9, 5, no. 49, 747-748 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.988/ams.5.5658 Development of a Family of Optimal Quartic-Order Methods for Multiple Roots and

More information

A Three-Step Iterative Method to Solve A Nonlinear Equation via an Undetermined Coefficient Method

A Three-Step Iterative Method to Solve A Nonlinear Equation via an Undetermined Coefficient Method Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 14, Number 11 (018), pp. 145-1435 Research India Publications http://www.ripublication.com/gjpam.htm A Three-Step Iterative Method

More information

Using Lagrange Interpolation for Solving Nonlinear Algebraic Equations

Using Lagrange Interpolation for Solving Nonlinear Algebraic Equations International Journal of Theoretical and Applied Mathematics 2016; 2(2): 165-169 http://www.sciencepublishinggroup.com/j/ijtam doi: 10.11648/j.ijtam.20160202.31 ISSN: 2575-5072 (Print); ISSN: 2575-5080

More information

New seventh and eighth order derivative free methods for solving nonlinear equations

New seventh and eighth order derivative free methods for solving nonlinear equations DOI 10.1515/tmj-2017-0049 New seventh and eighth order derivative free methods for solving nonlinear equations Bhavna Panday 1 and J. P. Jaiswal 2 1 Department of Mathematics, Demonstration Multipurpose

More information

Some New Three Step Iterative Methods for Solving Nonlinear Equation Using Steffensen s and Halley Method

Some New Three Step Iterative Methods for Solving Nonlinear Equation Using Steffensen s and Halley Method British Journal of Mathematics & Computer Science 19(2): 1-9, 2016; Article no.bjmcs.2922 ISSN: 221-081 SCIENCEDOMAIN international www.sciencedomain.org Some New Three Step Iterative Methods for Solving

More information

A THESIS. Submitted by MAHALINGA V. MANDI. for the award of the degree of DOCTOR OF PHILOSOPHY

A THESIS. Submitted by MAHALINGA V. MANDI. for the award of the degree of DOCTOR OF PHILOSOPHY LINEAR COMPLEXITY AND CROSS CORRELATION PROPERTIES OF RANDOM BINARY SEQUENCES DERIVED FROM DISCRETE CHAOTIC SEQUENCES AND THEIR APPLICATION IN MULTIPLE ACCESS COMMUNICATION A THESIS Submitted by MAHALINGA

More information

Quadrature based Broyden-like method for systems of nonlinear equations

Quadrature based Broyden-like method for systems of nonlinear equations STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 6, March 2018, pp 130 138. Published online in International Academic Press (www.iapress.org) Quadrature based Broyden-like

More information

Convergence of a Third-order family of methods in Banach spaces

Convergence of a Third-order family of methods in Banach spaces International Journal of Computational and Applied Mathematics. ISSN 1819-4966 Volume 1, Number (17), pp. 399 41 Research India Publications http://www.ripublication.com/ Convergence of a Third-order family

More information

University of Education Lahore 54000, PAKISTAN 2 Department of Mathematics and Statistics

University of Education Lahore 54000, PAKISTAN 2 Department of Mathematics and Statistics International Journal of Pure and Applied Mathematics Volume 109 No. 2 2016, 223-232 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: 10.12732/ijpam.v109i2.5

More information

On Newton-type methods with cubic convergence

On Newton-type methods with cubic convergence Journal of Computational and Applied Mathematics 176 (2005) 425 432 www.elsevier.com/locate/cam On Newton-type methods with cubic convergence H.H.H. Homeier a,b, a Science + Computing Ag, IT Services Muenchen,

More information

Research Article On a New Three-Step Class of Methods and Its Acceleration for Nonlinear Equations

Research Article On a New Three-Step Class of Methods and Its Acceleration for Nonlinear Equations e Scientific World Journal, Article ID 34673, 9 pages http://dx.doi.org/0.55/204/34673 Research Article On a New Three-Step Class of Methods and Its Acceleration for Nonlinear Equations T. Lotfi, K. Mahdiani,

More information

Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems

Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems algorithms Article Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems Xiaofeng Wang * and Xiaodong Fan School of Mathematics and Physics, Bohai University, Jinzhou 203, China;

More information

NEW DERIVATIVE FREE ITERATIVE METHOD FOR SOLVING NON-LINEAR EQUATIONS

NEW DERIVATIVE FREE ITERATIVE METHOD FOR SOLVING NON-LINEAR EQUATIONS NEW DERIVATIVE FREE ITERATIVE METHOD FOR SOLVING NON-LINEAR EQUATIONS Dr. Farooq Ahmad Principal, Govt. Degree College Darya Khan, Bhakkar, Punjab Education Department, PAKISTAN farooqgujar@gmail.com Sifat

More information

A new sixth-order scheme for nonlinear equations

A new sixth-order scheme for nonlinear equations Calhoun: The NPS Institutional Archive DSpace Repository Faculty and Researchers Faculty and Researchers Collection 202 A new sixth-order scheme for nonlinear equations Chun, Changbum http://hdl.handle.net/0945/39449

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T Heath Chapter 5 Nonlinear Equations Copyright c 2001 Reproduction permitted only for noncommercial, educational

More information

A new family of four-step fifteenth-order root-finding methods with high efficiency index

A new family of four-step fifteenth-order root-finding methods with high efficiency index Computational Methods for Differential Equations http://cmde.tabrizu.ac.ir Vol. 3, No. 1, 2015, pp. 51-58 A new family of four-step fifteenth-order root-finding methods with high efficiency index Tahereh

More information

A New Two Step Class of Methods with Memory for Solving Nonlinear Equations with High Efficiency Index

A New Two Step Class of Methods with Memory for Solving Nonlinear Equations with High Efficiency Index International Journal of Mathematical Modelling & Computations Vol. 04, No. 03, Summer 2014, 277-288 A New Two Step Class of Methods with Memory for Solving Nonlinear Equations with High Efficiency Index

More information

On Construction of a Class of. Orthogonal Arrays

On Construction of a Class of. Orthogonal Arrays On Construction of a Class of Orthogonal Arrays arxiv:1210.6923v1 [cs.dm] 25 Oct 2012 by Ankit Pat under the esteemed guidance of Professor Somesh Kumar A Dissertation Submitted for the Partial Fulfillment

More information

Family of Optimal Eighth-Order of Convergence for Solving Nonlinear Equations

Family of Optimal Eighth-Order of Convergence for Solving Nonlinear Equations SCECH Volume 4, Issue 4 RESEARCH ORGANISATION Published online: August 04, 2015 Journal of Progressive Research in Mathematics www.scitecresearch.com/journals Family of Optimal Eighth-Order of Convergence

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

ON THE EFFICIENCY OF A FAMILY OF QUADRATURE-BASED METHODS FOR SOLVING NONLINEAR EQUATIONS

ON THE EFFICIENCY OF A FAMILY OF QUADRATURE-BASED METHODS FOR SOLVING NONLINEAR EQUATIONS 149 ON THE EFFICIENCY OF A FAMILY OF QUADRATURE-BASED METHODS FOR SOLVING NONLINEAR EQUATIONS 1 OGHOVESE OGBEREYIVWE, 2 KINGSLEY OBIAJULU MUKA 1 Department of Mathematics and Statistics, Delta State Polytechnic,

More information

A Family of Methods for Solving Nonlinear Equations Using Quadratic Interpolation

A Family of Methods for Solving Nonlinear Equations Using Quadratic Interpolation ELSEVIER An International Journal Available online at www.sciencedirect.com computers &.,=. = C o,..ct. mathematics with applications Computers and Mathematics with Applications 48 (2004) 709-714 www.elsevier.com/locate/camwa

More information

Multi-step derivative-free preconditioned Newton method for solving systems of nonlinear equations

Multi-step derivative-free preconditioned Newton method for solving systems of nonlinear equations Multi-step derivative-free preconditioned Newton method for solving systems of nonlinear equations Fayyaz Ahmad Abstract Preconditioning of systems of nonlinear equations modifies the associated Jacobian

More information

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Outline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations

Outline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations Methods for Systems of Methods for Systems of Outline Scientific Computing: An Introductory Survey Chapter 5 1 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

ON JARRATT S FAMILY OF OPTIMAL FOURTH-ORDER ITERATIVE METHODS AND THEIR DYNAMICS

ON JARRATT S FAMILY OF OPTIMAL FOURTH-ORDER ITERATIVE METHODS AND THEIR DYNAMICS Fractals, Vol. 22, No. 4 (2014) 1450013 (16 pages) c World Scientific Publishing Company DOI: 10.1142/S0218348X14500133 ON JARRATT S FAMILY OF OPTIMAL FOURTH-ORDER ITERATIVE METHODS AND THEIR DYNAMICS

More information

Research Article Two Optimal Eighth-Order Derivative-Free Classes of Iterative Methods

Research Article Two Optimal Eighth-Order Derivative-Free Classes of Iterative Methods Abstract and Applied Analysis Volume 0, Article ID 3865, 4 pages doi:0.55/0/3865 Research Article Two Optimal Eighth-Order Derivative-Free Classes of Iterative Methods F. Soleymani and S. Shateyi Department

More information

PELL S EQUATION NATIONAL INSTITUTE OF TECHNOLOGY ROURKELA, ODISHA

PELL S EQUATION NATIONAL INSTITUTE OF TECHNOLOGY ROURKELA, ODISHA PELL S EQUATION A Project Report Submitted by PANKAJ KUMAR SHARMA In partial fulfillment of the requirements For award of the degree Of MASTER OF SCIENCE IN MATHEMATICS UNDER GUIDANCE OF Prof GKPANDA DEPARTMENT

More information

Optimal derivative-free root finding methods based on the Hermite interpolation

Optimal derivative-free root finding methods based on the Hermite interpolation Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 4427 4435 Research Article Optimal derivative-free root finding methods based on the Hermite interpolation Nusrat Yasmin, Fiza Zafar,

More information

Research Article Attracting Periodic Cycles for an Optimal Fourth-Order Nonlinear Solver

Research Article Attracting Periodic Cycles for an Optimal Fourth-Order Nonlinear Solver Abstract and Applied Analysis Volume 01, Article ID 63893, 8 pages doi:10.1155/01/63893 Research Article Attracting Periodic Cycles for an Optimal Fourth-Order Nonlinear Solver Mi Young Lee and Changbum

More information

ESTIMATING STATISTICAL CHARACTERISTICS UNDER INTERVAL UNCERTAINTY AND CONSTRAINTS: MEAN, VARIANCE, COVARIANCE, AND CORRELATION ALI JALAL-KAMALI

ESTIMATING STATISTICAL CHARACTERISTICS UNDER INTERVAL UNCERTAINTY AND CONSTRAINTS: MEAN, VARIANCE, COVARIANCE, AND CORRELATION ALI JALAL-KAMALI ESTIMATING STATISTICAL CHARACTERISTICS UNDER INTERVAL UNCERTAINTY AND CONSTRAINTS: MEAN, VARIANCE, COVARIANCE, AND CORRELATION ALI JALAL-KAMALI Department of Computer Science APPROVED: Vladik Kreinovich,

More information

Shijun Liao. Homotopy Analysis Method in Nonlinear Differential Equations

Shijun Liao. Homotopy Analysis Method in Nonlinear Differential Equations Shijun Liao Homotopy Analysis Method in Nonlinear Differential Equations Shijun Liao Homotopy Analysis Method in Nonlinear Differential Equations With 127 figures Author Shijun Liao Shanghai Jiao Tong

More information

A fourth order method for finding a simple root of univariate function

A fourth order method for finding a simple root of univariate function Bol. Soc. Paran. Mat. (3s.) v. 34 2 (2016): 197 211. c SPM ISSN-2175-1188 on line ISSN-00378712 in press SPM: www.spm.uem.br/bspm doi:10.5269/bspm.v34i1.24763 A fourth order method for finding a simple

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 9 Initial Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

MATHEMATICAL MODELLING OF DISPERSION OF AIR POLLUTANTS IN LOW WIND CONDITIONS

MATHEMATICAL MODELLING OF DISPERSION OF AIR POLLUTANTS IN LOW WIND CONDITIONS MATHEMATICAL MODELLING OF DISPERSION OF AIR POLLUTANTS IN LOW WIND CONDITIONS by ANIL KUMAR YADAV Thesis submitted to the Indian Institute of Technology, Delhi for the award of the degree of DOCTOR OF

More information

A Family of Optimal Multipoint Root-Finding Methods Based on the Interpolating Polynomials

A Family of Optimal Multipoint Root-Finding Methods Based on the Interpolating Polynomials Applied Mathematical Sciences, Vol. 8, 2014, no. 35, 1723-1730 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2014.4127 A Family of Optimal Multipoint Root-Finding Methods Based on the Interpolating

More information

A STUDY ON SOLUTION OF DIFFERENTIAL EQUATIONS USING HAAR WAVELET COLLOCATION METHOD

A STUDY ON SOLUTION OF DIFFERENTIAL EQUATIONS USING HAAR WAVELET COLLOCATION METHOD A STUDY ON SOLUTION OF DIFFERENTIAL EQUATIONS USING HAAR WAVELET COLLOCATION METHOD A PROJECT REPORT SUBMITTED IN FULLFILLMENT OF THE REQUIREMENTS OF THE DEGREE OF MASTER OF SCIENCE IN MATHEMATICS SUBMITTED

More information

A Two-step Iterative Method Free from Derivative for Solving Nonlinear Equations

A Two-step Iterative Method Free from Derivative for Solving Nonlinear Equations Applied Mathematical Sciences, Vol. 8, 2014, no. 161, 8021-8027 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2014.49710 A Two-step Iterative Method Free from Derivative for Solving Nonlinear

More information

Applications of Number Theory in Statistics

Applications of Number Theory in Statistics Bonfring International Journal of Data Mining, Vol. 2, No., September 202 Applications of Number Theory in Statistics A.M.S. Ramasamy Abstract--- There have been several fascinating applications of Number

More information

DUST EXPLOSION MODELING USING MECHANISTIC AND PHENOMENOLOGICAL APPROACHES

DUST EXPLOSION MODELING USING MECHANISTIC AND PHENOMENOLOGICAL APPROACHES DUST EXPLOSION MODELING USING MECHANISTIC AND PHENOMENOLOGICAL APPROACHES VIMLESH KUMAR BIND DEPARTMENT OF CHEMICAL ENGINEERING Submitted in fulfillment of the requirements of the degree of DOCTOR OF PHILOSOPHY

More information

Iterative Methods for Single Variable Equations

Iterative Methods for Single Variable Equations International Journal of Mathematical Analysis Vol 0, 06, no 6, 79-90 HII Ltd, wwwm-hikaricom http://dxdoiorg/0988/ijma065307 Iterative Methods for Single Variable Equations Shin Min Kang Department of

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 9

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 9 Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 9 Initial Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction

More information

A New Modification of Newton s Method

A New Modification of Newton s Method A New Modification of Newton s Method Vejdi I. Hasanov, Ivan G. Ivanov, Gurhan Nedjibov Laboratory of Mathematical Modelling, Shoumen University, Shoumen 971, Bulgaria e-mail: v.hasanov@@fmi.shu-bg.net

More information

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence Chapter 6 Nonlinear Equations 6. The Problem of Nonlinear Root-finding In this module we consider the problem of using numerical techniques to find the roots of nonlinear equations, f () =. Initially we

More information

A Family of Iterative Methods for Solving Systems of Nonlinear Equations Having Unknown Multiplicity

A Family of Iterative Methods for Solving Systems of Nonlinear Equations Having Unknown Multiplicity Article A Family of Iterative Methods for Solving Systems of Nonlinear Equations Having Unknown Multiplicity Fayyaz Ahmad 1,2, *, S Serra-Capizzano 1,3, Malik Zaka Ullah 1,4 and A S Al-Fhaid 4 Received:

More information

CS 450 Numerical Analysis. Chapter 9: Initial Value Problems for Ordinary Differential Equations

CS 450 Numerical Analysis. Chapter 9: Initial Value Problems for Ordinary Differential Equations Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Applied Mathematics and Computation

Applied Mathematics and Computation Applied Mathematics and Computation 8 (0) 584 599 Contents lists available at SciVerse ScienceDirect Applied Mathematics and Computation journal homepage: www.elsevier.com/locate/amc Basin attractors for

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

CHAPTER 5: Linear Multistep Methods

CHAPTER 5: Linear Multistep Methods CHAPTER 5: Linear Multistep Methods Multistep: use information from many steps Higher order possible with fewer function evaluations than with RK. Convenient error estimates. Changing stepsize or order

More information

On high order methods for solution of nonlinear

On high order methods for solution of nonlinear On high order methods for solution of nonlinear equation Dr. Vinay Kumar School of Computer and Systems Sciences Jawaharlal Nehru University Delhi, INDIA vinay2teotia@gmail.com Prof. C. P. Katti School

More information

A Review of Bracketing Methods for Finding Zeros of Nonlinear Functions

A Review of Bracketing Methods for Finding Zeros of Nonlinear Functions Applied Mathematical Sciences, Vol 1, 018, no 3, 137-146 HIKARI Ltd, wwwm-hikaricom https://doiorg/101988/ams018811 A Review of Bracketing Methods for Finding Zeros of Nonlinear Functions Somkid Intep

More information

FREE VIBRATIONS OF FRAMED STRUCTURES WITH INCLINED MEMBERS

FREE VIBRATIONS OF FRAMED STRUCTURES WITH INCLINED MEMBERS FREE VIBRATIONS OF FRAMED STRUCTURES WITH INCLINED MEMBERS A Thesis submitted in partial fulfillment of the requirements for the degree of Bachelor of Technology in Civil Engineering By JYOTI PRAKASH SAMAL

More information

Numerical Methods I Solving Nonlinear Equations

Numerical Methods I Solving Nonlinear Equations Numerical Methods I Solving Nonlinear Equations Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 16th, 2014 A. Donev (Courant Institute)

More information

A Preconditioned Iterative Method for Solving Systems of Nonlinear Equations Having Unknown Multiplicity

A Preconditioned Iterative Method for Solving Systems of Nonlinear Equations Having Unknown Multiplicity Article A Preconditioned Iterative Method for Solving Systems of Nonlinear Equations Having Unknown Multiplicity Fayyaz Ahmad 1,2,3, *, Toseef Akhter Bhutta 4, Umar Shoaib 4, Malik Zaka Ullah 1,5, Ali

More information

Analytical solution for determination the control parameter in the inverse parabolic equation using HAM

Analytical solution for determination the control parameter in the inverse parabolic equation using HAM Available at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 12, Issue 2 (December 2017, pp. 1072 1087 Applications and Applied Mathematics: An International Journal (AAM Analytical solution

More information

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes)

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes) AMSC/CMSC 460 Computational Methods, Fall 2007 UNIT 5: Nonlinear Equations Dianne P. O Leary c 2001, 2002, 2007 Solving Nonlinear Equations and Optimization Problems Read Chapter 8. Skip Section 8.1.1.

More information

Generalization Of The Secant Method For Nonlinear Equations

Generalization Of The Secant Method For Nonlinear Equations Applied Mathematics E-Notes, 8(2008), 115-123 c ISSN 1607-2510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Generalization Of The Secant Method For Nonlinear Equations Avram Sidi

More information

NON-EXTINCTION OF SOLUTIONS TO A FAST DIFFUSION SYSTEM WITH NONLOCAL SOURCES

NON-EXTINCTION OF SOLUTIONS TO A FAST DIFFUSION SYSTEM WITH NONLOCAL SOURCES Electronic Journal of Differential Equations, Vol. 2016 (2016, No. 45, pp. 1 5. ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu NON-EXTINCTION OF

More information

Numerical Methods in Informatics

Numerical Methods in Informatics Numerical Methods in Informatics Lecture 2, 30.09.2016: Nonlinear Equations in One Variable http://www.math.uzh.ch/binf4232 Tulin Kaman Institute of Mathematics, University of Zurich E-mail: tulin.kaman@math.uzh.ch

More information

Numerical Iterative Methods For Nonlinear Problems

Numerical Iterative Methods For Nonlinear Problems UNIVERSITÀ DELL INSUBRIA DIPARTIMENTO DI SCIENZA E ALTA TECNOLOGIA DOCTORAL THESIS Numerical Iterative Methods For Nonlinear Problems Author: Malik Zaka Ullah Supervisor: Prof. Stefano Serra-Capizzano

More information

Numerical Solution of f(x) = 0

Numerical Solution of f(x) = 0 Numerical Solution of f(x) = 0 Gerald W. Recktenwald Department of Mechanical Engineering Portland State University gerry@pdx.edu ME 350: Finding roots of f(x) = 0 Overview Topics covered in these slides

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 5 Nonlinear Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

An Approach to Constructing Good Two-level Orthogonal Factorial Designs with Large Run Sizes

An Approach to Constructing Good Two-level Orthogonal Factorial Designs with Large Run Sizes An Approach to Constructing Good Two-level Orthogonal Factorial Designs with Large Run Sizes by Chenlu Shi B.Sc. (Hons.), St. Francis Xavier University, 013 Project Submitted in Partial Fulfillment of

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Numerical solutions of nonlinear systems of equations

Numerical solutions of nonlinear systems of equations Numerical solutions of nonlinear systems of equations Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan E-mail: min@math.ntnu.edu.tw August 28, 2011 Outline 1 Fixed points

More information

AN AUTOMATIC SCHEME ON THE HOMOTOPY ANALYSIS METHOD FOR SOLVING NONLINEAR ALGEBRAIC EQUATIONS. Safwan Al-Shara

AN AUTOMATIC SCHEME ON THE HOMOTOPY ANALYSIS METHOD FOR SOLVING NONLINEAR ALGEBRAIC EQUATIONS. Safwan Al-Shara italian journal of pure and applied mathematics n. 37 2017 (5 14) 5 AN AUTOMATIC SCHEME ON THE HOMOTOPY ANALYSIS METHOD FOR SOLVING NONLINEAR ALGEBRAIC EQUATIONS Safwan Al-Shara Department of Mathematics

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

Variable Step Runge-Kutta-Nyström Methods for the Numerical Solution of Reversible Systems

Variable Step Runge-Kutta-Nyström Methods for the Numerical Solution of Reversible Systems Variable Step Runge-Kutta-Nyström Methods for the Numerical Solution of Reversible Systems J. R. Cash and S. Girdlestone Department of Mathematics, Imperial College London South Kensington London SW7 2AZ,

More information

NUMERICAL MATHEMATICS AND COMPUTING

NUMERICAL MATHEMATICS AND COMPUTING NUMERICAL MATHEMATICS AND COMPUTING Fourth Edition Ward Cheney David Kincaid The University of Texas at Austin 9 Brooks/Cole Publishing Company I(T)P An International Thomson Publishing Company Pacific

More information

Alexander Ostrowski

Alexander Ostrowski Ostrowski p. 1/3 Alexander Ostrowski 1893 1986 Walter Gautschi wxg@cs.purdue.edu Purdue University Ostrowski p. 2/3 Collected Mathematical Papers Volume 1 Determinants Linear Algebra Algebraic Equations

More information

Newton-Raphson Type Methods

Newton-Raphson Type Methods Int. J. Open Problems Compt. Math., Vol. 5, No. 2, June 2012 ISSN 1998-6262; Copyright c ICSRS Publication, 2012 www.i-csrs.org Newton-Raphson Type Methods Mircea I. Cîrnu Department of Mathematics, Faculty

More information

Simple Iteration, cont d

Simple Iteration, cont d Jim Lambers MAT 772 Fall Semester 2010-11 Lecture 2 Notes These notes correspond to Section 1.2 in the text. Simple Iteration, cont d In general, nonlinear equations cannot be solved in a finite sequence

More information

A new modified Halley method without second derivatives for nonlinear equation

A new modified Halley method without second derivatives for nonlinear equation Applied Mathematics and Computation 189 (2007) 1268 1273 www.elsevier.com/locate/amc A new modified Halley method without second derivatives for nonlinear equation Muhammad Aslam Noor *, Waseem Asghar

More information