Nonlinear Equations: The Bisection Method

Chapter 6: Nonlinear Equations

Given a nonlinear function f(x), a value r such that f(r) = 0 is called a root or a zero of f(x). Fig. 6.1 shows the graph y = f(x) of such a function with two positive roots. Most numerical methods for solving f(x) = 0 require only the ability to evaluate f(x) for any x.

Figure 6.1: A function with two positive roots.

6.1 The Bisection Method

This is the simplest method for obtaining a root of a function f(x) that lies in an interval (a, b) for which f(a) f(b) < 0. Consider the function shown in Fig. 6.2, where we assume that only one root exists in (a, b). The bisection method divides (a, b) into two equal halves and selects the subinterval containing the root. This half is again divided into two equal halves, and the process is repeated. After several steps, the interval containing the root becomes smaller than the specified tolerance, and the midpoint of this final interval is taken to be the root.

Bisection Method
1. Let f(a) f(b) < 0.
2. c = (a + b)/2.
3. If sign[f(c)] = sign[f(a)], then a = c; else b = c.
4. If b - a < ε (the specified tolerance), then x = (a + b)/2 is the root; stop. Else go to step 2.
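The four steps above can be sketched in a few lines of Python (a minimal illustration; the function name `bisect` and the test problem x^2 - 2 = 0 are illustrative choices, not from the notes):

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: halve [a, b] until it is shorter than tol (steps 1-4 above)."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "step 1: f(a) and f(b) must have opposite signs"
    while b - a >= tol:
        c = a + (b - a) / 2              # step 2: midpoint as a + (b - a)/2
        fc = f(c)
        if (fc > 0) == (fa > 0):         # step 3: f(c) has the same sign as f(a)
            a, fa = c, fc
        else:
            b, fb = c, fc
    return a + (b - a) / 2               # step 4: midpoint of the final interval

root = bisect(lambda x: x * x - 2.0, 1.0, 2.0)
```

The assertion enforces the bracketing requirement of step 1; without it, the loop would narrow an interval that need not contain any root.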

CS414 Class Notes, Chapter 6: Nonlinear Equations

Figure 6.2: The bisection method.

The following points must be kept in mind:

1. For floating-point arithmetic with base β ≠ 2, expressing c as a + (b - a)/2 is more accurate than (a + b)/2.

2. The algorithm converges linearly to the root, i.e., the ratio e_{k+1}/e_k is a constant, where e_k is the error |c - r| at the kth step. This follows from the fact that |c_2 - r| < |c_1 - r|, where c_1 and c_2 are the midpoints of the interval in two consecutive iterations.

The work required by this algorithm is summarized in Table 6.1.

    Step    Interval width    Function evaluations
                              Number    Value
    0       H                 2         f(a), f(b)
    1       H/2               1         f(c)
    2       H/4               1         f(c)
    3       H/8               1         f(c)
    ...
    k       H/2^k             1         f(c)

Table 6.1: Operation count in the bisection method for initial interval width H = b - a.

Example 6.1: How many function evaluations are required to obtain the root with an error of 2^{-10}, for an initial interval width H = b - a = 1?

Solution: At step k, the width of the interval is (b - a)/2^k = 2^{-k}, so the error in taking its midpoint is at most 2^{-k}/2. To guarantee an error not exceeding 2^{-10}, we need 2^{-k}/2 = 2^{-10}, i.e., k = 9. The number of function evaluations is therefore 9 + 2 = 11.

6.2 Newton's Method

Unlike the bisection method, the Newton scheme does not generate a sequence of nested intervals that bracket the root, but rather a sequence of points that, with luck, converges to the root. Given an initial estimate for the root, say x_0, Newton's method generates a sequence of estimates x_k, k = 1, 2, ..., that are expected to be improved approximations of the root. At the kth iteration, the next estimate x_{k+1} is obtained by intersecting the tangent to f(x) at the point (x_k, f(x_k)) with the x-axis (see Fig. 6.3). The equation of the tangent is

    f'(x_k) = (f(x_k) - f(x_{k+1})) / (x_k - x_{k+1}),

and at the point of intersection with the x-axis, f(x_{k+1}) = 0, giving the following formula for x_{k+1}:

    x_{k+1} = x_k - f(x_k)/f'(x_k).

Newton's Method
Let x_0 be an initial estimate of the root.
for k = 0, 1, 2, ...
    x_{k+1} = x_k - f(x_k)/f'(x_k)
    if |f(x_{k+1})| ≤ ε, stop
end

Figure 6.3: Newton's method: the tangent at (x_k, f(x_k)) intersects the x-axis at x_{k+1}.

Example 6.2: Obtain the square root of a positive number a.

Solution: We need to obtain the root of f(x) = x^2 - a. Newton's method for finding the root is

    x_{k+1} = x_k - f(x_k)/f'(x_k) = x_k - (x_k^2 - a)/(2 x_k) = (1/2)(x_k + a/x_k).

For a = 4, starting from x_0 = 1,

    x_1 = (1/2)(1 + 4/1) = 5/2,    x_2 = (1/2)(5/2 + 8/5) = 41/20,    etc.

Example 6.3: Obtain the reciprocal of a number a.

Solution: We have to obtain the root of f(x) = 1/x - a. Newton's method for the reciprocal of a number is

    x_{k+1} = x_k - f(x_k)/f'(x_k) = x_k - (1/x_k - a)/(-1/x_k^2) = x_k + x_k^2 (1/x_k - a) = x_k (2 - a x_k).

For a = 3, starting from x_0 = 1/2,

    x_1 = (1/2)(2 - 3/2) = 1/4,    x_2 = (1/4)(2 - 3/4) = 5/16,    etc.
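Examples 6.2 and 6.3 can be checked with a short Python sketch (the helper name `newton` is illustrative; the stopping test |f(x_{k+1})| ≤ ε follows the algorithm box above):

```python
def newton(f, fprime, x0, eps=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / fprime(x)
        if abs(f(x)) <= eps:             # stopping test from the algorithm box
            break
    return x

# Example 6.2 with a = 4: iterates 1, 5/2, 41/20, ... converging to 2
s = newton(lambda x: x * x - 4.0, lambda x: 2.0 * x, 1.0)

# Example 6.3 with a = 3: iterates 1/2, 1/4, 5/16, ... converging to 1/3
r = newton(lambda x: 1.0 / x - 3.0, lambda x: -1.0 / (x * x), 0.5)
```

Note that the reciprocal iteration x_{k+1} = x_k(2 - a x_k) uses no division at all, which is why it was historically used to implement division in hardware.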

Figure 6.4: Newton's method for finding the square root of a positive number.

6.2.1 Convergence of Newton's Method

In Example 6.2, if x_0 were chosen to be zero, the method would fail since f'(x_0) = 0. Similarly, in Example 6.3, if x_0 were chosen to be 2/a, then x_1 = (2/a)(2 - a(2/a)) = 0; and since f(x_1) = ∞, the method fails.

Theorem 6.1: If r is a root of f(x), i.e., f(r) = 0, such that f'(r) ≠ 0, and f'(x) is a continuous function, then there is an open interval containing the root such that for any x_0 in this interval, Newton's iterates converge to the root r.

Example 6.4: Non-convergence due to cyclic iterates. A function such as the one shown in Fig. 6.6 causes Newton's method to cycle between the first two iterates, irrespective of the initial iterate x_0.

Example 6.5: Conditional convergence of Newton's method. Consider the function

    f(x) = x/(1 + x^2),

whose derivative is given by

    f'(x) = ((1 + x^2) - 2x^2)/(1 + x^2)^2 = (1 - x^2)/(1 + x^2)^2.

For two successive iterates of x_{k+1} = x_k - f(x_k)/f'(x_k) to be identical in magnitude with opposite sign, i.e., x_{k+1} = -x_k, we must have

    -x_k = x_k - x_k (1 + x_k^2)/(1 - x_k^2),

or

    2 x_k = x_k (1 + x_k^2)/(1 - x_k^2).

Therefore, x_k = 1/√3. For |x_0| < 1/√3 the iterates converge to the root r = 0; for |x_0| = 1/√3 they cycle between ±1/√3; and for |x_0| > 1/√3 they diverge (see Fig. 6.7).

Example 6.6: Divergence of Newton's method. See Fig. 6.8.

Example 6.7: Newton's method trapped at a local minimum. See Fig. 6.9.

Figure 6.5: Newton's method for the reciprocal of a number.

Figure 6.6: Example of non-convergence of Newton's method due to cyclic iterates.

6.2.2 Order of Convergence

Let us now try to determine how fast Newton's method converges to the root.

Theorem 6.2: If f(r) = 0, f'(r) ≠ 0, and f''(x) is continuous, then for x_0 close enough to r,

    lim_{k→∞} |e_{k+1}|/e_k^2 = |f''(r)/(2 f'(r))|,

where e_k is the error at the kth step, given by e_k = x_k - r.

For practical purposes, we can think of this result as stating

    |e_{k+1}| ≈ |f''(r)/(2 f'(r))| e_k^2.

Alternately,

    log_10 |e_{k+1}| ≈ 2 log_10 |e_k| + log_10 |f''(r)/(2 f'(r))|.
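The error relation of Theorem 6.2 can be observed directly (a numerical sketch; the choice f(x) = x^2 - 2, for which r = √2 and f''(r)/(2 f'(r)) = 1/(2√2) ≈ 0.354, is illustrative, not from the notes):

```python
r = 2 ** 0.5
x = 1.5
errors = []
for _ in range(4):
    errors.append(abs(x - r))
    x = x - (x * x - 2.0) / (2.0 * x)    # one Newton step for f(x) = x^2 - 2

# e_{k+1}/e_k^2 should settle near |f''(r)/(2 f'(r))| = 1/(2*sqrt(2)) ~ 0.3536
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(3)]
```

The successive errors are roughly 8.6e-2, 2.5e-3, 2.1e-6, 1.6e-12: the number of accurate digits doubles at each step, as the theorem predicts.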

Figure 6.7: Example of conditionally convergent Newton's method.

Figure 6.8: Divergence of Newton's method.

The left-hand side measures the number of accurate decimal digits at the (k+1)th iteration, and the first term on the right-hand side measures the same for the kth iteration. Since the term |f''(r)/(2 f'(r))| is a constant, the equation implies that the number of accurate decimal digits doubles at each iteration.

Note: In general, for other iterative methods,

    lim_{k→∞} |e_{k+1}|/|e_k|^p = C,

where C is the asymptotic error constant and p ≥ 1 is the order of convergence.

Comparison of Newton's method with the bisection method. At each iteration, the bisection method requires one function evaluation, whereas Newton's method requires a function evaluation as well as an evaluation of the first derivative of the function. The first derivative may be computed using software such as Mathematica or Maple.

Multiple roots. Note that Newton's method can be slower than bisection (i.e., it is no longer of 2nd-order convergence) when f(r) = f'(r) = ... = f^{(m-1)}(r) = 0. In this case, Newton's method has linear convergence with asymptotic error constant

    C = (m - 1)/m,

where m is the multiplicity of the root.
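The linear convergence at a multiple root, with C = (m - 1)/m, is easy to verify numerically (a sketch; the double root f(x) = (x - 1)^2 with m = 2 is an illustrative choice, not from the notes):

```python
# f(x) = (x - 1)^2 has a double root (m = 2) at r = 1, so C = (m - 1)/m = 1/2.
x = 2.0
ratios = []
for _ in range(5):
    e_old = abs(x - 1.0)
    x = x - (x - 1.0) ** 2 / (2.0 * (x - 1.0))   # Newton step: x - (x - 1)/2
    ratios.append(abs(x - 1.0) / e_old)
# every ratio equals 1/2: the error is halved, not squared, at each step
```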

Figure 6.9: Example of Newton's method trapped at a local minimum.

Figure 6.10: Newton's method has linear convergence in the presence of multiple roots (m = 2 and m = 3 shown).

6.3 Secant Method

Newton's method requires computing the first derivative of the function, along with the function itself, at each iteration. Quite often the first derivative may not be available, or may be expensive to compute. The main motivation for developing the secant method is to overcome this drawback of Newton's method: the first derivative is approximated via numerical differentiation. Instead of the Newton iteration

    x_{k+1} = x_k - f(x_k)/f'(x_k),

the secant method computes the new estimate for the root as

    x_{k+1} = x_k - f(x_k)/f[x_k, x_{k-1}],

where f[x_k, x_{k-1}] is an approximation to the first derivative f'(x_k), given by

    f[x_k, x_{k-1}] = (f(x_k) - f(x_{k-1}))/(x_k - x_{k-1}).

The new iterate x_{k+1} is the point at which the secant of f(x) through x_k and x_{k-1} intersects the x-axis. Note that this requires the function value at two points, x_k and x_{k-1}; therefore, the secant method must be initialized at two points, x_0 and x_1.

Example 6.8: Obtain the square root of 3 using the secant method, with x_0 = 1 and x_1 = 2. We need to compute the root of the function f(x) = x^2 - 3. The iterates computed by the secant method are given below.

Figure 6.11: A single iteration of the secant method.

The actual root is √3 ≈ 1.7320508.

    k    x_k      f(x_k)     f[x_k, x_{k-1}]    f(x_k)/f[x_k, x_{k-1}]
    0    1.0      -2.0
    1    2.0       1.0       3.00                0.33
    2    1.666    -0.222     3.66               -0.060
    3    1.72     -0.0165    3.39               -0.0048
    4    1.73      0.00031   3.45                0.00009

6.3.1 Convergence of the Secant Method

Theorem 6.3: If f(r) = 0, f'(r) ≠ 0, and f''(x) is continuous, then there is an open interval containing r such that choosing x_0 and x_1 in this interval generates secant iterates x_k → r as k → ∞.

Figure 6.12: Failure of the secant method.

6.3.2 Order of Convergence

Theorem 6.4: If f(r) = 0, f'(r) ≠ 0, and f''(x) is continuous, then for x_0, x_1 close enough to r,

    lim_{k→∞} |e_{k+1}|/|e_k|^φ = |f''(r)/(2 f'(r))|^{φ-1},

where e_k = x_k - r is the error, and φ = (√5 + 1)/2 ≈ 1.618 is the positive root of the equation φ^2 = φ + 1, known as the Golden Ratio.

In practice,

    e_{k+1} ≈ (f''(r)/(2 f'(r))) e_{k-1} e_k.
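Example 6.8 can be reproduced with a short sketch (the helper name `secant` is illustrative):

```python
def secant(f, x0, x1, eps=1e-10, max_iter=50):
    """Secant method: f'(x_k) is replaced by the divided difference f[x_k, x_{k-1}]."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        slope = (f1 - f0) / (x1 - x0)    # f[x_k, x_{k-1}]
        x0, f0 = x1, f1
        x1 = x1 - f1 / slope             # x_{k+1} = x_k - f(x_k)/f[x_k, x_{k-1}]
        f1 = f(x1)
        if abs(f1) <= eps:
            break
    return x1

# Example 6.8: x0 = 1, x1 = 2, f(x) = x^2 - 3; iterates 1.666..., 1.727..., ...
root = secant(lambda x: x * x - 3.0, 1.0, 2.0)
```

The first computed iterate is x_2 = 2 - 1/3 ≈ 1.667, matching the table above.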

Figure 6.13: Divergence of the secant method.

In contrast, for Newton iterations,

    e_{k+1} ≈ (f''(r)/(2 f'(r))) e_k^2.

Thus, Newton's method is more powerful than the secant method. A better comparison of the effectiveness of these two methods, however, should also consider the work needed for convergence. For the secant method,

    |e_{k+2}| ≈ |f''(r)/(2 f'(r))|^{φ-1} |e_{k+1}|^φ
             ≈ |f''(r)/(2 f'(r))|^{φ-1} (|f''(r)/(2 f'(r))|^{φ-1} |e_k|^φ)^φ
             = |f''(r)/(2 f'(r))|^{(φ-1)+(φ^2-φ)} |e_k|^{φ^2}.

But φ^2 = φ + 1, and (φ - 1) + (φ^2 - φ) = φ^2 - 1 = φ; therefore

    |e_{k+2}| ≈ |f''(r)/(2 f'(r))|^φ |e_k|^{φ+1}.

Thus, two steps of the secant method (at the cost of two function evaluations) have an order of convergence φ + 1 ≈ 2.618, which is greater than that of Newton's method.

6.4 Function Iteration

We are concerned with solving equations of the form

    x = g(x),

where g(x) is a given function of x, via the iterative scheme

    x_{k+1} = g(x_k),    k = 0, 1, 2, ...,

where x_0 is chosen as an approximation of the fixed point x* = g(x*). Here, x* is a root of the function f(x) = x - g(x). Clearly, the iterations fail if g(x) is not defined at some iterate x_k. Therefore, we assume that:

Assumption 1: g(x) is defined on some interval I = [a, b], i.e., for a ≤ x ≤ b, and

Assumption 2: a ≤ g(x) ≤ b for all x in [a, b].

Figure 6.14: Function iteration for solving x = g(x) computes the intersection of y = x and y = g(x).

Figure 6.15: Function iteration cannot be used for a function that is discontinuous in [a, b].

These assumptions are not sufficient, however, to guarantee that x = g(x) has a solution x* = g(x*); for example, g(x) may be discontinuous. Thus, we introduce another assumption:

Assumption 3: g(x) is continuous on [a, b].

From assumptions 1-3, we see that x = g(x) has at least one solution, since f(x) = x - g(x) has a change of sign in [a, b]. If we wish the interval [a, b] to contain exactly one root, then we must make yet another assumption, one that guarantees that the function g(x) does not vary too rapidly in [a, b]:

Assumption 4: |g'(x)| ≤ L < 1 for a ≤ x ≤ b.

Using the differential mean-value theorem, we obtain from assumption 4

    g(x_1) - g(x_2) = g'(z)(x_1 - x_2),    where x_1 < z < x_2.

Hence

    |g(x_1) - g(x_2)| ≤ L |x_1 - x_2| < |x_1 - x_2|.

Theorem 6.5: Let g(x) satisfy assumptions 1-4. Then for any x_0 in [a, b]:

Figure 6.16: A function with multiple roots in [a, b].

(a) There exists one and only one root x* = g(x*) in [a, b].

(b) Since a ≤ g(x) ≤ b for all x in [a, b], all the iterates x_{k+1} = g(x_k), k = 0, 1, 2, ..., lie in [a, b].

(c) The sequence {x_k} defined by x_{k+1} = g(x_k) converges to the unique fixed point x* = g(x*).

Proof:

(a) Let x_1* and x_2* be two distinct solutions. Then from assumption 4,

    |g(x_1*) - g(x_2*)| < |x_1* - x_2*|,    or    |x_1* - x_2*| < |x_1* - x_2*|.

A contradiction!

(b) Obvious!

(c) Consider the error at the kth iterate: e_k = x_k - x*. Since x_k = g(x_{k-1}) and x* = g(x*), by assumption 4,

    |g(x_{k-1}) - g(x*)| ≤ L |x_{k-1} - x*|,    i.e.,    |x_k - x*| ≤ L |x_{k-1} - x*|.

Therefore |e_k| ≤ L |e_{k-1}|. Consequently,

    |e_1| ≤ L |e_0|,    |e_2| ≤ L^2 |e_0|,    ...,    |e_k| ≤ L^k |e_0|.

Since |g'(x)| ≤ L < 1, we have lim_{k→∞} L^k = 0, and hence lim_{k→∞} |e_k| = 0, proving convergence of the iterative scheme x_{k+1} = g(x_k) under assumptions 1-4.

Observe that

    e_{k+1} = x_{k+1} - x* = g(x_k) - x* = g(x* + e_k) - x*.

From Taylor's theorem,

    g(x* + e_k) = g(x*) + e_k g'(β),

Figure 6.17: Function iteration converges for |g'(x)| < 1.

Figure 6.18: Function iteration diverges for |g'(x)| > 1.

where β lies between x* and x_k. Therefore,

    e_{k+1} = g(x*) + e_k g'(β) - x* = e_k g'(β).

Hence

    e_{k+1}/e_k = g'(β).

But we have just proved (Theorem 6.5) that lim_{k→∞} x_k = x*, so β → x* as well. Therefore,

    lim_{k→∞} e_{k+1}/e_k = lim_{k→∞} g'(β) = g'(x*).

Consequently, the iterative scheme x_{k+1} = g(x_k) under assumptions 1-4 is 1st order, with asymptotic error constant (or asymptotic convergence factor) C = |g'(x*)|.
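The first-order behaviour, with convergence factor C = |g'(x*)|, can be illustrated with a sketch (the choice g(x) = cos x on [0, 1] is illustrative, not from the notes; it satisfies assumptions 1-4, since cos maps [0, 1] into [cos 1, 1] and |g'(x)| = |sin x| ≤ sin 1 < 1):

```python
import math

xstar = 0.7390851332151607           # fixed point of cos x, used to measure errors
x = 0.5
ratios = []
for _ in range(30):
    e_old = abs(x - xstar)
    x = math.cos(x)                  # x_{k+1} = g(x_k)
    ratios.append(abs(x - xstar) / e_old)
# e_{k+1}/e_k approaches |g'(x*)| = sin(x*) ~ 0.674: 1st-order (linear) convergence
```

Each iteration shrinks the error by roughly the same factor 0.674, so the iterates gain accurate digits at a fixed, linear rate, unlike Newton's doubling.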

2nd-Order Function Iterations

Let us now consider the question: what happens when g'(x*) = 0? Let g'(x) and g''(x) be continuous on the interval containing x* and x_k (for any k). Then, from Taylor's theorem,

    e_{k+1} = g(x* + e_k) - x* = [g(x*) + e_k g'(x*) + (e_k^2/2!) g''(γ)] - x* = (1/2) e_k^2 g''(γ),

where γ is between x* and x_k. Therefore,

    e_{k+1}/e_k^2 = (1/2) g''(γ).

This implies that when g'(x*) = 0 we have a 2nd-order method, and, provided that the method converges,

    lim_{k→∞} e_{k+1}/e_k^2 = (1/2) g''(x*).

Note also that if κ is a bound such that |e_k| ≤ κ |e_{k-1}|^2, e.g. κ = (1/2) max |g''(γ)|, then

    |e_k| ≤ κ |e_{k-1}|^2 ≤ κ·κ^2 |e_{k-2}|^4 ≤ κ·κ^2·κ^4 |e_{k-3}|^8 ≤ ...,

which gives the relation

    |e_k| ≤ κ^s |e_0|^{2^k},    where    s = 1 + 2 + 4 + ... + 2^{k-1} = (2^k - 1)/(2 - 1) = 2^k - 1.

Therefore

    |e_k| ≤ κ^{2^k - 1} |e_0|^{2^k} = [κ |e_0|]^{2^k - 1} |e_0|.

Hence, provided that κ |e_0| < 1, we have lim_{k→∞} |e_k| = 0 and the method converges.

Let us compare the number of iterations required to reduce the magnitude of the initial error e_0 by a factor of at least 10^ν for 1st- and 2nd-order methods, assuming that |g'(x)| ≤ L < 1 and κ |e_0| ≤ L < 1:

    1st-order method:  |e_M| ≤ L^M |e_0|,          so we need  10^{-ν} = L^M;
    2nd-order method:  |e_N| ≤ L^{2^N - 1} |e_0|,  so we need  10^{-ν} = L^{2^N - 1}.

Equating the two, L^M = L^{2^N - 1}, i.e., M = 2^N - 1, or N = log_2(M + 1). In other words, if a first-order method requires 255 iterations to reduce |e_0| by a factor of 10^ν, a second-order method will require only log_2(255 + 1) = 8 iterations to achieve the same reduction.

Notice that Newton's method can be derived from the function iteration x_{k+1} = g(x_k) by choosing

    g(x) = x - h(x) f(x),

where 0 < |h(x)| < ∞, with the particular choice

    h(x) = 1/f'(x),

assuming that f'(x) and f''(x) are continuous on an interval [a, b] containing the root and that f'(x) ≠ 0 on [a, b].

Now,

    g'(x) = 1 - h(x) f'(x) - h'(x) f(x),

and at the root x*, where f(x*) = 0,

    g'(x*) = 1 - h(x*) f'(x*) - 0 = 1 - f'(x*)/f'(x*) = 0.

Hence

    x_{k+1} = g(x_k) = x_k - f(x_k)/f'(x_k)

is a 2nd-order method. Note that if x* is a multiple root, then f'(x*) = 0, violating the assumption that f'(x) ≠ 0 on the interval [a, b] containing the root, and Newton's method becomes only a 1st-order iteration.

Returning to the 2nd-order scheme, the asymptotic error constant is

    C = (1/2) |g''(x*)|
      = (1/2) |[f'^3(x*) f''(x*) + f(x*) f'^2(x*) f'''(x*) - 2 f(x*) f'(x*) f''^2(x*)] / f'^4(x*)|
      = (1/2) |f''(x*)/f'(x*)|,

since f(x*) = 0. Hence

    lim_{k→∞} |e_{k+1}|/e_k^2 = (1/2) |f''(x*)/f'(x*)|.

Assuming x_0 is sufficiently close to the root, convergence of Newton's method is assured if

    (1/2) |f''(x*)/f'(x*)| |e_0| < 1.