Convergence Rates on Root Finding


Com S 477/577, Oct 5, 2004

A sequence x_i ∈ R converges to ξ if for each ǫ > 0 there exists an integer N(ǫ) such that |x_l − ξ| < ǫ for all l ≥ N(ǫ). The Cauchy convergence criterion states that a sequence x_i ∈ R is convergent if and only if for each ǫ > 0 there exists an N(ǫ) such that |x_l − x_m| < ǫ for all l, m ≥ N(ǫ).

Let a sequence x_i ∈ R be generated by an iteration function Φ, that is,

    x_{i+1} = Φ(x_i),   i = 0, 1, 2, ...

Let ξ be a fixed point of Φ, that is, ξ = Φ(ξ). Suppose that the sequence {x_i} is generated in a neighborhood of ξ. The corresponding iteration method is said to be of at least pth order if there exists a neighborhood N(ξ) of ξ such that for all x_0 ∈ N(ξ) the generated sequence x_{i+1} = Φ(x_i), i = 0, 1, ..., satisfies

    |x_{i+1} − ξ| ≤ C |x_i − ξ|^p,

where C < 1 if p = 1. In the case of first order convergence, for instance, we have

    |x_i − ξ| ≤ C |x_{i−1} − ξ| ≤ C^2 |x_{i−2} − ξ| ≤ ... ≤ C^i |x_0 − ξ|.

Since C < 1, it follows that

    lim_{i→∞} |x_i − ξ| = lim_{i→∞} C^i |x_0 − ξ| = 0,

namely, the sequence {x_i} will converge to ξ.

Now suppose Φ is sufficiently often differentiable in N(ξ). If x_i ∈ N(ξ) and if Φ^(k)(ξ) = 0 for k = 1, 2, ..., p−1 but Φ^(p)(ξ) ≠ 0, that is, Φ(x) − ξ has a zero of order p at ξ, then it follows from the Taylor expansion that

    x_{i+1} = Φ(x_i) = Φ(ξ) + (Φ^(p)(ξ)/p!) (x_i − ξ)^p + O((x_i − ξ)^{p+1}).

Because Φ(ξ) = ξ, we obtain

    lim_{i→∞} (x_{i+1} − ξ)/(x_i − ξ)^p = Φ^(p)(ξ)/p!.

For p = 2, 3, ..., the method is of (precisely) pth order. The method is of first order if p = 1 and |Φ′(ξ)| < 1. When 0 < Φ′(ξ) < 1, the sequence {x_i} will converge monotonically to ξ as shown in the left figure below. When −1 < Φ′(ξ) < 0, the sequence will alternate about ξ during convergence as shown in the right figure.

[Figure: fixed-point iteration near ξ. Left: monotone convergence when 0 < Φ′(ξ) < 1. Right: alternating convergence when −1 < Φ′(ξ) < 0.]

Below we study the convergence rates of several root finding methods introduced before.

1  Quadratic Convergence of Newton's Method

Newton's method has the iteration function

    Φ(x) = x − f(x)/f′(x)

with f(ξ) = 0. Suppose f is sufficiently continuously differentiable in some neighborhood N(ξ). In the nondegenerate case, f′(ξ) ≠ 0. So we have

    Φ(ξ) = ξ,

    Φ′(ξ) = 1 − ((f′(x))^2 − f(x) f″(x)) / (f′(x))^2 |_{x=ξ}
          = f(x) f″(x) / (f′(x))^2 |_{x=ξ}
          = 0,   since f(ξ) = 0,

    Φ″(ξ) = ((f′(x) f″(x) + f(x) f‴(x)) (f′(x))^2 − 2 f(x) f′(x) (f″(x))^2) / (f′(x))^4 |_{x=ξ}
          = f″(ξ) (f′(ξ))^3 / (f′(ξ))^4,   since f(ξ) = 0
          = f″(ξ) / f′(ξ)
          ≠ 0.

So Newton's method is quadratically convergent. In the degenerate case, ξ is an m-fold zero of f for some m > 1, that is,

    f^(i)(ξ) = 0,   for i = 0, 1, ..., m−1.

We leave it to the students to determine the order of convergence in this case.
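The quadratic rate can be observed numerically. Below is a minimal sketch (the test function f(x) = x^2 − 2 with root √2 and the helper name `newton` are arbitrary choices, not from the notes): by the limit above with p = 2, the ratio |x_{i+1} − ξ| / |x_i − ξ|^2 should approach |Φ″(ξ)|/2 = |f″(ξ)/(2 f′(ξ))|.

```python
import math

def newton(f, fprime, x0, n):
    """Run n steps of Newton's method x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

# f(x) = x^2 - 2 with simple zero xi = sqrt(2); an arbitrary test function.
f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
xi = math.sqrt(2.0)

xs = newton(f, fp, 2.0, 6)
errs = [abs(x - xi) for x in xs]
# Quadratic convergence: |e_{i+1}| / |e_i|^2 approaches
# |f''(xi) / (2 f'(xi))| = 1 / (2 sqrt(2)), about 0.354 here.
for i in range(1, 4):
    print(errs[i + 1] / errs[i] ** 2)
```

The printed ratios settle near 1/(2√2) ≈ 0.354 within a few steps, while the number of correct digits roughly doubles per iteration.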

2  Linear Convergence of Regula Falsi

For clarity of analysis we let x_i = b_i for all i. We make some simplifying assumptions for the discussion of the convergence behavior: f″ exists, and for some k the following conditions hold:

(a) x_k < a_k;
(b) f(x_k) < 0 and f(a_k) > 0;
(c) f″(x) ≥ 0 for all x ∈ [x_k, a_k].

[Figure: (a) the graph of f over [x_k, a_k]; (b) the secant through (x_k, f(x_k)) and (a_k, f(a_k)) meeting the x-axis at x_{k+1}.]

Under these assumptions, either f(x_{k+1}) = 0 or f(x_{k+1}) f(x_k) > 0, and consequently

    x_k < x_{k+1} < a_{k+1} = a_k.

To see this, use the remainder formula for polynomial interpolation at x_k and a_k:

    f(x) − p(x) = (x − x_k)(x − a_k) f″(η)/2

for x ∈ [x_k, a_k] and a suitable η ∈ [x_k, a_k]. Under condition (c), the above equation implies that f(x) − p(x) ≤ 0. In particular, f(x_{k+1}) − p(x_{k+1}) ≤ 0, which in turn implies that f(x_{k+1}) ≤ 0 since p(x_{k+1}) = 0.

Unless f(x_{k+1}) = 0, in which case the iteration stops at x_{k+1}, we can see that conditions (a), (b), and (c) hold for all i ≥ k. Therefore a_i = a_k = a and

    x_{i+1} = (a f(x_i) − x_i f(a)) / (f(x_i) − f(a))

for all i ≥ k. Furthermore, {x_i} for i ≥ k form a monotone increasing sequence bounded by a. So lim_{i→∞} x_i = ξ exists. Consequently, f(ξ) ≤ 0, f(a) > 0, and

    ξ = (a f(ξ) − ξ f(a)) / (f(ξ) − f(a)),

which gives (ξ − a) f(ξ) = 0. But ξ < a since f(ξ) ≤ 0 < f(a). Hence f(ξ) = 0 and {x_i} converges to a zero of f.

The above discussion enables us to look at the order of convergence through the iteration function

    x_{i+1} = Φ(x_i),   where Φ(x) = (a f(x) − x f(a)) / (f(x) − f(a)).
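The one-sided convergence just established, and the linear rate derived next, can be checked numerically. A minimal sketch follows (the convex test function f(x) = x^2 − 2 on [1, 2], which satisfies conditions (a)-(c), and the helper name `regula_falsi` are arbitrary choices):

```python
import math

def regula_falsi(f, x, a, n):
    """Regula falsi under the text's assumptions f(x) < 0 < f(a), f'' >= 0:
    the endpoint a stays fixed and the iterates x increase toward the root."""
    xs = [x]
    for _ in range(n):
        x = (a * f(x) - x * f(a)) / (f(x) - f(a))
        xs.append(x)
    return xs

# Arbitrary convex test function with root sqrt(2) in [1, 2].
f = lambda t: t * t - 2.0
xi = math.sqrt(2.0)

xs = regula_falsi(f, 1.0, 2.0, 20)
errs = [xi - x for x in xs]   # one-sided: every iterate stays below the root
# Linear convergence: e_{i+1}/e_i tends to a constant less than 1
# (for this f and a = 2 it works out to 3 - 2*sqrt(2), about 0.172).
for i in range(5):
    print(errs[i + 1] / errs[i])
```

The printed error ratios settle quickly near 0.172: each step shrinks the error by roughly the same factor, the signature of linear convergence.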

Since f(ξ) = 0, we obtain

    Φ′(ξ) = ((a f′(ξ) − f(a)) (−f(a)) + ξ f(a) f′(ξ)) / (f(a))^2
          = 1 − f′(ξ) (ξ − a) / (f(ξ) − f(a)).

By the mean value theorem, there exist η_1, η_2 such that

    (f(ξ) − f(a)) / (ξ − a) = f′(η_1),                          ξ < η_1 < a;    (1)
    (f(x_i) − f(ξ)) / (x_i − ξ) = f(x_i) / (x_i − ξ) = f′(η_2),  x_i < η_2 < ξ.  (2)

Since f″(x) ≥ 0, f′(x) increases monotonically in [x_i, a], so f′(η_2) ≤ f′(ξ) ≤ f′(η_1). Meanwhile, condition (2), x_i < ξ, and f(x_i) < 0 together imply that 0 < f′(η_2). Therefore 0 < f′(ξ) ≤ f′(η_1). We have thus shown that

    0 ≤ Φ′(ξ) = 1 − f′(ξ)/f′(η_1) < 1.

So the regula falsi method converges linearly. From the previous discussion we see that the method of regula falsi will almost always end up with the one-sided convergence demonstrated before.

3  Superlinear Convergence of Secant Method

In the secant method, the iteration is in the form

    x_{i+1} = x_i − f(x_i) (x_i − x_{i−1}) / (f(x_i) − f(x_{i−1})),   i = 1, 2, ...   (3)

To determine the local convergence rate, without loss of generality we assume that the sequence {x_i} lies in a small enough neighborhood of the zero ξ and that f is twice differentiable. Writing (3) as x_{i+1} = x_i − f(x_i)/f[x_{i−1}, x_i] and subtracting ξ from both sides:

    x_{i+1} − ξ = (x_i − ξ) − f(x_i)/f[x_{i−1}, x_i]
                = (x_i − ξ) (1 − f[x_i, ξ]/f[x_{i−1}, x_i]),   since f[x_i, ξ] = (f(x_i) − f(ξ))/(x_i − ξ) = f(x_i)/(x_i − ξ)
                = (x_i − ξ)(x_{i−1} − ξ) (f[x_{i−1}, x_i] − f[x_i, ξ]) / ((x_{i−1} − ξ) f[x_{i−1}, x_i])
                = (x_i − ξ)(x_{i−1} − ξ) f[x_{i−1}, x_i, ξ] / f[x_{i−1}, x_i].   (4)

From the error estimation of polynomial interpolation, we learned that

    f[x_{i−1}, x_i] = f′(η_1),        η_1 ∈ I[x_{i−1}, x_i];
    f[x_{i−1}, x_i, ξ] = f″(η_2)/2,   η_2 ∈ I[x_{i−1}, x_i, ξ],

where I[x_{i−1}, x_i] is the smallest interval containing x_{i−1} and x_i, and I[x_{i−1}, x_i, ξ] the smallest interval containing x_{i−1}, x_i, and ξ.

If ξ is a simple zero, that is, f′(ξ) ≠ 0, there exist a bound M and an interval J = { x : |x − ξ| ≤ ǫ } for some ǫ > 0 such that

    | f″(η_2) / (2 f′(η_1)) | ≤ M   (5)

for any η_1, η_2 ∈ J. Let e_i = M |x_i − ξ| and e_0, e_1 < min{1, ǫM}. By induction and using (4) and (5) we can easily show that

    e_{i+1} = M |x_{i+1} − ξ| ≤ M (e_i/M)(e_{i−1}/M) M = e_i e_{i−1},

and e_i ≤ min{1, ǫM}, for i = 1, 2, ... Let q = (1 + √5)/2 be the positive root of the equation z^2 − z − 1 = 0. Then we have

    e_i ≤ K^{q^i},   i = 0, 1, 2, ...

where K = max{ e_0, e_1^{1/q} } < 1. This is because (by induction)

    e_{i+1} ≤ e_i e_{i−1} ≤ K^{q^i} K^{q^{i−1}} = K^{q^{i−1}(q+1)} = K^{q^{i−1} q^2} = K^{q^{i+1}}.

Thus the secant method converges at least as well as a method of order p = (1 + √5)/2 ≈ 1.618.

One secant step requires one additional function evaluation, but one Newton step requires two (f and f′). Therefore two secant steps are as expensive as a single Newton step. But two secant steps have a convergence order of (1.618)^2 ≈ 2.618. This explains why in practice the secant method always dominates Newton's method with numerical derivatives.
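The error recursion (4)-(5) can also be checked numerically. A minimal sketch follows (again with the arbitrary test function f(x) = x^2 − 2 and a helper name `secant` of our choosing): by (4), the ratio |e_{i+1}| / (|e_i| |e_{i−1}|) should settle near |f″(ξ)/(2 f′(ξ))| = 1/(2√2) ≈ 0.354, while the error exponents grow like powers of q ≈ 1.618.

```python
import math

def secant(f, x0, x1, n):
    """Run n steps of the secant iteration (3)."""
    xs = [x0, x1]
    for _ in range(n):
        a, b = xs[-2], xs[-1]
        xs.append(b - f(b) * (b - a) / (f(b) - f(a)))
    return xs

# f(x) = x^2 - 2 with simple zero sqrt(2); an arbitrary test function.
f = lambda x: x * x - 2.0
xi = math.sqrt(2.0)

xs = secant(f, 1.0, 2.0, 6)
errs = [abs(x - xi) for x in xs]
# By (4), |e_{i+1}| / (|e_i| |e_{i-1}|) approaches |f''(xi)/(2 f'(xi))|,
# which is 1/(2 sqrt(2)), about 0.354 here.
for i in range(3, 6):
    print(errs[i + 1] / (errs[i] * errs[i - 1]))
```

Note the superlinear behavior: the error drops from about 4e-4 to 2e-6 to 3e-10 in consecutive steps, each exponent roughly 1.618 times the previous one, at the cost of a single function evaluation per step.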