Solutions of Equations in One Variable: Newton's Method

Solutions of Equations in One Variable: Newton's Method. Numerical Analysis (9th Edition), R. L. Burden & J. D. Faires. Beamer presentation slides prepared by John Carroll, Dublin City University. © 2011 Brooks/Cole, Cengage Learning.

Outline
1. Newton's Method: Derivation
2. Example using Newton's Method & Fixed-Point Iteration
3. Convergence using Newton's Method
4. Final Remarks on Practical Application

Newton's Method: Context

Newton's (or the Newton-Raphson) method is one of the most powerful and well-known numerical methods for solving a root-finding problem. There are various ways of introducing Newton's method:
- Graphically, as is often done in calculus.
- As a technique to obtain faster convergence than offered by other types of functional iteration.
- Using Taylor polynomials. We will see that this particular derivation produces not only the method, but also a bound for the error of the approximation.

Newton's Method: Derivation

Suppose that f ∈ C²[a, b]. Let p_0 ∈ [a, b] be an approximation to p such that f′(p_0) ≠ 0 and |p − p_0| is small. Consider the first Taylor polynomial for f(x) expanded about p_0 and evaluated at x = p:

    f(p) = f(p_0) + (p − p_0) f′(p_0) + ((p − p_0)²/2) f″(ξ(p)),

where ξ(p) lies between p and p_0. Since f(p) = 0, this equation gives

    0 = f(p_0) + (p − p_0) f′(p_0) + ((p − p_0)²/2) f″(ξ(p)).

Newton's Method: Derivation (Cont'd)

    0 = f(p_0) + (p − p_0) f′(p_0) + ((p − p_0)²/2) f″(ξ(p)).

Newton's method is derived by assuming that, since |p − p_0| is small, the term involving (p − p_0)² is much smaller, so

    0 ≈ f(p_0) + (p − p_0) f′(p_0).

Solving for p gives

    p ≈ p_0 − f(p_0)/f′(p_0) ≡ p_1.

Newton's Method

    p ≈ p_0 − f(p_0)/f′(p_0) ≡ p_1.

This sets the stage for Newton's method, which starts with an initial approximation p_0 and generates the sequence {p_n}, n ≥ 0, by

    p_n = p_{n−1} − f(p_{n−1})/f′(p_{n−1}),   for n ≥ 1.

Newton's Method: Using Successive Tangents

[Figure: the graph of y = f(x) with tangent lines of slope f′(p_0) at (p_0, f(p_0)) and slope f′(p_1) at (p_1, f(p_1)); each tangent's x-intercept gives the next approximation p_1, p_2, ..., approaching the root p.]

Newton's Algorithm

To find a solution to f(x) = 0 given an initial approximation p_0, a tolerance TOL, and a maximum number of iterations N:

1. Set i = 0.
2. While i ≤ N, do Steps 3.1-3.5:
   3.1 If f′(p_0) = 0, then go to Step 5.
   3.2 Set p = p_0 − f(p_0)/f′(p_0).
   3.3 If |p − p_0| < TOL, then go to Step 6.
   3.4 Set i = i + 1.
   3.5 Set p_0 = p.
4. Output a "failure to converge within the specified number of iterations" message and stop.
5. Output an appropriate failure message (zero derivative) and stop.
6. Output p.
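The steps above can be sketched in Python as follows; the function and variable names are illustrative, not from the text:

```python
def newton(f, fprime, p0, tol=1e-8, max_iter=50):
    """Newton's method following the algorithm above.

    Returns an approximate root of f, or raises an exception if the
    derivative vanishes (Step 5) or the iteration fails to converge
    within max_iter steps (Step 4).
    """
    for _ in range(max_iter):
        d = fprime(p0)
        if d == 0:                      # Step 3.1: zero derivative
            raise ZeroDivisionError("f'(p) = 0; Newton's method fails")
        p = p0 - f(p0) / d              # Step 3.2: Newton update
        if abs(p - p0) < tol:           # Step 3.3: stopping criterion (1)
            return p
        p0 = p                          # Steps 3.4-3.5
    raise RuntimeError("failed to converge within max_iter iterations")
```

For instance, applied to f(x) = cos x − x with p_0 = π/4 (the example coming up in the next section), this returns approximately 0.7390851332.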

Newton's Method: Stopping Criteria for the Algorithm

Various stopping procedures can be applied in Step 3.3. We can select a tolerance ε > 0 and generate p_1, ..., p_N until one of the following conditions is met:

    |p_N − p_{N−1}| < ε,                        (1)
    |p_N − p_{N−1}| / |p_N| < ε, p_N ≠ 0, or    (2)
    |f(p_N)| < ε.                               (3)

Note that none of these inequalities gives precise information about the actual error |p_N − p|.

Newton's Method as a Functional Iteration Technique

Newton's method

    p_n = p_{n−1} − f(p_{n−1})/f′(p_{n−1}),   for n ≥ 1,

can be written in the form p_n = g(p_{n−1}) with

    g(p_{n−1}) = p_{n−1} − f(p_{n−1})/f′(p_{n−1}),   for n ≥ 1.
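This fixed-point view can be checked numerically: the root of f is a fixed point of g, i.e. g(p) = p there. A minimal sketch for f(x) = cos x − x (our choice of example function, anticipating the next section):

```python
import math

# Newton's iteration written as fixed-point iteration p_n = g(p_{n-1})
# with g(x) = x - f(x)/f'(x), for f(x) = cos(x) - x.
def g(x):
    f = math.cos(x) - x
    fprime = -math.sin(x) - 1
    return x - f / fprime

p = math.pi / 4
for _ in range(10):
    p = g(p)
# At convergence, p is a fixed point of g: g(p) is (numerically) p itself.
```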

Newton's Method Example: Fixed-Point Iteration & Newton's Method

Consider the function f(x) = cos x − x = 0. Approximate a root of f using (a) a fixed-point method, and (b) Newton's method.

Newton's Method & Fixed-Point Iteration

[Figure: the graphs of y = x and y = cos x, which intersect at a single point in (0, π/2).]

Newton's Method & Fixed-Point Iteration

(a) Fixed-Point Iteration for f(x) = cos x − x

A solution to this root-finding problem is also a solution to the fixed-point problem x = cos x, and the graph implies that a single fixed point p lies in [0, π/2]. The following table shows the results of fixed-point iteration with p_0 = π/4. The best conclusion from these results is that p ≈ 0.74.

Fixed-Point Iteration: x = cos(x), x_0 = π/4

    n   p_{n-1}     p_n         |p_n - p_{n-1}|   e_n/e_{n-1}
    1   0.7853982   0.7071068   0.0782914
    2   0.707107    0.760245    0.053138          0.678719
    3   0.760245    0.724667    0.035577          0.669525
    4   0.724667    0.748720    0.024052          0.676064
    5   0.748720    0.732561    0.016159          0.671826
    6   0.732561    0.743464    0.010903          0.674753
    7   0.743464    0.736128    0.007336          0.672816
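The table above can be reproduced with a few lines of Python; note how the ratio |p_n − p_{n−1}| shrinks by a roughly constant factor of about 0.67 each step, the signature of linear convergence:

```python
import math

# Fixed-point iteration p_n = cos(p_{n-1}) starting from pi/4,
# reproducing the fixed-point table above.
p = math.pi / 4
for n in range(1, 8):
    p_new = math.cos(p)
    print(n, f"{p_new:.7f}", f"{abs(p_new - p):.7f}")
    p = p_new
```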

Newton's Method

(b) Newton's Method for f(x) = cos x − x

To apply Newton's method to this problem, we need

    f′(x) = −sin x − 1.

Starting again with p_0 = π/4, we generate the sequence defined, for n ≥ 1, by

    p_n = p_{n−1} − f(p_{n−1})/f′(p_{n−1}) = p_{n−1} − (cos p_{n−1} − p_{n−1})/(−sin p_{n−1} − 1).

This gives the approximations shown in the following table.

Newton's Method for f(x) = cos(x) − x, x_0 = π/4

    n   p_{n-1}      f(p_{n-1})   f'(p_{n-1})   p_n          |p_n - p_{n-1}|
    1   0.78539816   -0.078291    -1.707107     0.73953613   0.04586203
    2   0.73953613   -0.000755    -1.673945     0.73908518   0.00045096
    3   0.73908518   -0.000000    -1.673612     0.73908513   0.00000004
    4   0.73908513   -0.000000    -1.673612     0.73908513   0.00000000

An excellent approximation is obtained with n = 3. Because of the agreement of p_3 and p_4, we could reasonably expect this result to be accurate to the places listed.
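The Newton table can likewise be reproduced directly; note the contrast with fixed-point iteration, where seven steps gave only two correct decimal places:

```python
import math

# Newton's method for f(x) = cos(x) - x from p0 = pi/4,
# reproducing the Newton table above. The number of correct
# digits roughly doubles each step (quadratic convergence).
f = lambda x: math.cos(x) - x
fp = lambda x: -math.sin(x) - 1

p = math.pi / 4
for n in range(1, 5):
    p_new = p - f(p) / fp(p)
    print(n, f"{p_new:.8f}", f"{abs(p_new - p):.8f}")
    p = p_new
```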

Convergence using Newton's Method

Theoretical Importance of the Choice of p_0

The Taylor series derivation of Newton's method points out the importance of an accurate initial approximation. The crucial assumption is that the term involving (p − p_0)² is, by comparison with |p − p_0|, so small that it can be deleted. This will clearly be false unless p_0 is a good approximation to p. If p_0 is not sufficiently close to the actual root, there is little reason to suspect that Newton's method will converge to the root. However, in some instances, even poor initial approximations will produce convergence.

Convergence using Newton's Method

Convergence Theorem for Newton's Method

Let f ∈ C²[a, b]. If p ∈ (a, b) is such that f(p) = 0 and f′(p) ≠ 0, then there exists a δ > 0 such that Newton's method generates a sequence {p_n}, n ≥ 1, defined by

    p_n = p_{n−1} − f(p_{n−1})/f′(p_{n−1}),

converging to p for any initial approximation p_0 ∈ [p − δ, p + δ].

Convergence using Newton's Method

Convergence Theorem: Proof (1/4)

The proof is based on analyzing Newton's method as the functional iteration scheme p_n = g(p_{n−1}), for n ≥ 1, with

    g(x) = x − f(x)/f′(x).

Let k be in (0, 1). We first find an interval [p − δ, p + δ] that g maps into itself and for which |g′(x)| ≤ k for all x ∈ (p − δ, p + δ).

Since f′ is continuous and f′(p) ≠ 0, part (a) of Exercise 29 in Section 1.1 implies that there exists a δ_1 > 0 such that f′(x) ≠ 0 for x ∈ [p − δ_1, p + δ_1] ⊆ [a, b].

Convergence using Newton's Method

Convergence Theorem: Proof (2/4)

Thus g is defined and continuous on [p − δ_1, p + δ_1]. Also,

    g′(x) = 1 − [f′(x)f′(x) − f(x)f″(x)] / [f′(x)]² = f(x)f″(x) / [f′(x)]²,

for x ∈ [p − δ_1, p + δ_1], and, since f ∈ C²[a, b], we have g ∈ C¹[p − δ_1, p + δ_1].

By assumption, f(p) = 0, so

    g′(p) = f(p)f″(p) / [f′(p)]² = 0.

Convergence using Newton's Method

    g′(p) = f(p)f″(p) / [f′(p)]² = 0.

Convergence Theorem: Proof (3/4)

Since g′ is continuous and 0 < k < 1, part (b) of Exercise 29 in Section 1.1 implies that there exists a δ, with 0 < δ < δ_1, such that |g′(x)| ≤ k for all x ∈ [p − δ, p + δ].

It remains to show that g maps [p − δ, p + δ] into [p − δ, p + δ]. If x ∈ [p − δ, p + δ], the Mean Value Theorem implies that, for some number ξ between x and p,

    |g(x) − g(p)| = |g′(ξ)| |x − p|.

So

    |g(x) − p| = |g(x) − g(p)| = |g′(ξ)| |x − p| ≤ k |x − p| < |x − p|.

Convergence using Newton's Method

Convergence Theorem: Proof (4/4)

Since x ∈ [p − δ, p + δ], it follows that |x − p| < δ and that |g(x) − p| < δ. Hence, g maps [p − δ, p + δ] into [p − δ, p + δ].

All the hypotheses of the Fixed-Point Theorem are now satisfied, so the sequence {p_n}, n ≥ 1, defined by

    p_n = g(p_{n−1}) = p_{n−1} − f(p_{n−1})/f′(p_{n−1}),   for n ≥ 1,

converges to p for any p_0 ∈ [p − δ, p + δ].
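The key fact in the proof, g′(p) = 0, is what gives Newton's method its quadratic convergence near p: the errors satisfy e_n ≈ C·e_{n−1}². A small numerical sketch for f(x) = cos x − x (the constants and variable names here are ours, not the text's):

```python
import math

# For f(x) = cos(x) - x, check that Newton errors behave like
# e_n ~ C * e_{n-1}^2, with C = |f''(p) / (2 f'(p))| ~ 0.22.
root = 0.7390851332151607  # fixed point of cos, to machine precision

def g(x):
    return x - (math.cos(x) - x) / (-math.sin(x) - 1)

p = math.pi / 4
errors = []
for _ in range(2):          # two steps, before rounding error dominates
    p = g(p)
    errors.append(abs(p - root))

# Ratio e_2 / e_1^2 should be close to the constant C ~ 0.22.
ratio = errors[1] / errors[0] ** 2
```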

Newton's Method in Practice

Choice of Initial Approximation

The convergence theorem states that, under reasonable assumptions, Newton's method converges provided a sufficiently accurate initial approximation is chosen. It also implies that the constant k that bounds the derivative of g (and, consequently, indicates the speed of convergence of the method) decreases to 0 as the procedure continues. This result is important for the theory of Newton's method, but it is seldom applied in practice because it does not tell us how to determine δ.

Newton's Method in Practice

In a practical application, an initial approximation is selected and successive approximations are generated by Newton's method. These will generally either converge quickly to the root, or it will be clear that convergence is unlikely.

Questions?

Reference Material

Exercise 29, Section 1.1

Let f ∈ C[a, b], and let p be in the open interval (a, b).

(a) Suppose f(p) ≠ 0. Show that a δ > 0 exists with f(x) ≠ 0 for all x in [p − δ, p + δ], with [p − δ, p + δ] a subset of [a, b].

(b) Suppose f(p) = 0 and k > 0 is given. Show that a δ > 0 exists with |f(x)| ≤ k for all x in [p − δ, p + δ], with [p − δ, p + δ] a subset of [a, b].

Mean Value Theorem

If f ∈ C[a, b] and f is differentiable on (a, b), then a number c exists such that

    f′(c) = (f(b) − f(a)) / (b − a).

[Figure: the secant line of slope (f(b) − f(a))/(b − a) through (a, f(a)) and (b, f(b)), parallel to the tangent line of slope f′(c) at c.]

Fixed-Point Theorem

Let g ∈ C[a, b] be such that g(x) ∈ [a, b] for all x in [a, b]. Suppose, in addition, that g′ exists on (a, b) and that a constant 0 < k < 1 exists with |g′(x)| ≤ k for all x ∈ (a, b). Then, for any number p_0 in [a, b], the sequence defined by p_n = g(p_{n−1}), n ≥ 1, converges to the unique fixed point p in [a, b].