Math 471. Numerical methods
Root-finding algorithms for nonlinear equations


(This chapter overlaps Sections 2.1-2.5 of Bradie.) Our goal in this chapter is to find the root(s) of f(x) = 0.

2.1 Bisection Method

Intermediate value theorem. Suppose f(x) is continuous on [a, b]. If f(a)f(b) < 0, then there must exist at least one root of f(x) = 0 in the interval (a, b).

Example 1. Let f(x) = x^3 - 5x + 1. Then f(-3) = -11 and f(-2) = 3, so there is a root x_1 ∈ (-3, -2). Likewise x_2 ∈ (-1, 1) and x_3 ∈ (2, 3).

Bisection method. Q: find a root p ∈ (a, b) of f(x) = 0. A: generate a sequence of root-enclosing intervals

    (a, b) = (a_0, b_0) ⊃ (a_1, b_1) ⊃ (a_2, b_2) ⊃ ...

The process is iterative, that is, (a_{n+1}, b_{n+1}) is determined by (a_n, b_n), ..., (a_0, b_0) (in simple cases, (a_n, b_n) alone is sufficient). Each (a_{n+1}, b_{n+1}) is either the left half or the right half of (a_n, b_n), depending on the signs of f(a_n), f((a_n+b_n)/2) and f(b_n). More precisely, if f(a_n) f((a_n+b_n)/2) < 0, then (a_{n+1}, b_{n+1}) = (a_n, (a_n+b_n)/2) (left half); otherwise, (a_{n+1}, b_{n+1}) = ((a_n+b_n)/2, b_n) (right half). In case f(a_n) = 0 or f(b_n) = 0, the algorithm should stop, yielding an exact root.

If this process goes on infinitely, then by the squeeze theorem, lim a_n = lim b_n = lim (a_n+b_n)/2 = p. But a computer algorithm must stop at a certain step n (so that we simply take a_n or b_n or (a_n+b_n)/2 as the approximate root). Stopping conditions:

1. b_n - a_n < ε.
2. |f(a_n)| < ε or |f(b_n)| < ε.
3. n = n_max.
4. ...

Here, ε and n_max are prescribed as input parameters from outside the algorithm.

Error bound. Let p be the exact root that the algorithm converges to. Since it is always true that p ∈ (a_n, b_n), we know

    |p - (a_n+b_n)/2| < (b_n - a_n)/2 = 2^{-n-1} (b_0 - a_0).

So the Bisection method converges at rate O(2^{-n}). Also, by the above estimate, if we use stopping condition 1, then ε/2 is indeed an error bound for |p - (a_n+b_n)/2|, giving us some confidence in the accuracy of the approximation.
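In particular, this estimate predicts in advance how many steps guarantee a given accuracy ε:

    2^{-n-1} (b_0 - a_0) ≤ ε   ⟺   n ≥ log_2((b_0 - a_0)/ε) - 1.

For instance, with b_0 - a_0 = 1 and ε = 10^{-6}, n = 19 steps suffice, since 2^{-20} ≈ 9.5 × 10^{-7} < 10^{-6}.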

Example 2. f(x) = x^2 - 3, f(1) = -2, f(2) = 1, so there is a root p ∈ (1, 2). (Theoretically, p = √3 = 1.73...)

    n    a_n       b_n     (a_n+b_n)/2    f((a_n+b_n)/2)
    0    1         2       1.5            -0.75
    1    1.5       2       1.75            0.0625
    2    1.5       1.75    1.625          -0.3594
    3    1.625     1.75    1.6875         -0.1523
    4    1.6875    1.75    ...             ...

A sample code with stopping condition 1. The input parameters are the initial guess (a, b) and ε.

function c=bisection(a,b,ep)
a0=a; b0=b;
while ((b-a)>=ep)
    c=(a+b)/2;
    if (f(c)==0)        % root is at the middle point
        return;
    end
    if (f(a)*f(c)<0)    % root is in left half
        a0=a; b0=c;
    else                % root is in right half
        a0=c; b0=b;
    end
    a=a0; b=b0;
end
c=(a+b)/2;

Note that an f.m file containing the definition of f(x) should be stored in the same directory as bisection.m.
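For concreteness, a matching f.m for Example 2 and a sample call might look as follows (the tolerance 1e-6 is just an illustration):

% f.m
function y=f(x)
y=x^2-3;                     % Example 2: root is sqrt(3)

% at the MATLAB prompt:
% >> c=bisection(1,2,1e-6)   % returns c with |sqrt(3)-c| < 5e-7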

A shorter but less readable code can be the following.

function c=bisection_short(a,b,ep)
while ((b-a)>=ep)
    c=(a+b)/2;
    if (f(a)*f(c)<0)
        b=c;
    elseif (f(c)*f(b)<0)
        a=c;
    else
        return;
    end
end
c=(a+b)/2;

2.2 False Position method

The Bisection method belongs to the family of enclosure methods, which use a sequence of enclosing intervals (a_0, b_0) ⊃ (a_1, b_1) ⊃ (a_2, b_2) ⊃ ... to gradually squeeze onto the exact root. The underlying principle of an enclosure method is the Intermediate Value Theorem. In each step, the choice of the dividing point c_n doesn't have to be the middle point of (a_n, b_n). What can be an alternative?

The False Position method pretends that f(x) is a linear function and uses the x-intercept of the line that connects (a_n, f(a_n)) and (b_n, f(b_n)) as the candidate for the dividing point c_n. Then, check the sign condition and decide the next interval (a_{n+1}, b_{n+1}) to be either (a_n, c_n) or (c_n, b_n). This method would actually reach the exact root in one step if f(x) were indeed a linear function on (a_n, b_n). Otherwise it only gives an approximation, hence the name "false position".

To find c_n, we establish the equation of the line that connects (a_n, f(a_n)) and (b_n, f(b_n)). By the point-slope formula, it is

    y - f(a_n) = [f(b_n) - f(a_n)]/(b_n - a_n) · (x - a_n).

To find the x-intercept, set y = 0 in the above equation and solve for x to get the value of the dividing point c_n:

    c_n = a_n - f(a_n)(b_n - a_n)/[f(b_n) - f(a_n)]

or, in a more symmetric way,

    c_n = [a_n f(b_n) - b_n f(a_n)]/[f(b_n) - f(a_n)].
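A sketch of the corresponding code, patterned on bisection_short above (the names falsepos and nmax are my choices, and the stopping rule on successive dividing points is one reasonable option; unlike bisection, the bracket width b - a need not shrink to zero, so (b-a)>=ep would be a poor test):

function c=falsepos(a,b,ep,nmax)
% False Position: the dividing point is the x-intercept of the
% secant line through (a,f(a)) and (b,f(b)).
cold=a;
for n=1:nmax
    c=(a*f(b)-b*f(a))/(f(b)-f(a));   % symmetric form of c_n
    if (f(c)==0 || abs(c-cold)<ep)   % exact root, or c_n has settled
        return;
    end
    if (f(a)*f(c)<0)                 % root is in (a,c)
        b=c;
    else                             % root is in (c,b)
        a=c;
    end
    cold=c;
end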

Convergence and error bound. The textbook gives a long but elementary discussion of the error bound. Here, we only give the result: at the n-th step,

    |p - c_n| ≈ (c_n - c_{n-1})^2 / |c_n - 2c_{n-1} + c_{n-2}|,

which can serve as part of the stopping condition.

2.3 Fixed-point iteration

This is a family of schemes that solve the fixed-point problem

    g(x) = x   ⟺   g(x) - x = 0.

So, there is an equivalence between fixed-point problems and root-finding problems.

Example. f(x) = x^2 - 3 = 0 can be re-written as the fixed-point problems

    g_1(x) = 3/x = x   or   g_2(x) = -x^2 + x + 3 = x   or   g_3(x) = x - (x^2 - 3)/2 = x.

The (iterative) scheme is simply x_{n+1} = g(x_n). (Initialize: x_0 = 1.5.)

    n    x_n = g_1(x_{n-1})    x_n = g_2(x_{n-1})    x_n = g_3(x_{n-1})
    0    1.5                    1.5                   1.5
    1    2                      2.25                  1.875
    2    1.5                    0.1875                1.6172
    3    2                      3.1523                1.8095
    4    1.5                   -3.7849                1.6723
    5    2                    -15.1106                1.7740

If the limit lim x_n = y exists, we must have

    y = lim x_n = lim g(x_{n-1}) = g(lim x_{n-1}) = g(y).

However, not every choice of g will lead to convergence lim x_n = p. For example, from the above table we see that the numerical scheme is not working for g_1 (which cycles between 1.5 and 2) or g_2 (which diverges).
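A minimal driver that reproduces the table (the function-handle syntax and the helper name fixedpoint are illustrative choices, not from the notes):

function x=fixedpoint(g,x0,nmax)
% Fixed-point iteration x_{n+1} = g(x_n), printing each iterate.
x=x0;
for n=1:nmax
    x=g(x);
    fprintf('%2d  %10.4f\n',n,x);
end

% e.g., the g_3 column of the table:
% >> fixedpoint(@(x) x-(x^2-3)/2, 1.5, 5)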

Theorem. If |g'(x)| ≤ k for x ∈ (a, b), and if (by some argument) x_n ∈ (a, b) for n = 0, 1, 2, ... and p ∈ [a, b], then

    |p - x_n| ≤ k^n |p - x_0|.

In particular, if k < 1, then we have convergence.

Proof. Mean value theorem. Or, as we did in class, Taylor series: g(x) - g(p) = (x - p) g'(ξ) for some ξ ∈ (x, p) or ξ ∈ (p, x).

A useful trick in studying error estimates for fixed-point iteration:

    p = g(p)                        (definition of fixed point)
    x_{n+1} = g(x_n)                (numerical scheme)
    p - x_{n+1} = g(p) - g(x_n)     (subtract the above two)

Corollary. If max_{x∈[a,b]} |g'(x)| < 1, if g maps [a, b] into [a, b], and if we start with x_0 ∈ [a, b], then obviously all x_n ∈ [a, b] and thus we have convergence.

Example. Consider g(x) = x - (x^2 - 3)/2 with x_0 = 1.5. Then x_1 = 1.875, and we try whether [a, b] = [x_0, x_1] = [1.5, 1.875] will work. Indeed, by calculus,

    max_{x∈[1.5,1.875]} g(x) = g(1.5) = 1.875,    min_{x∈[1.5,1.875]} g(x) = g(1.875) = 1.6172,

so g maps [1.5, 1.875] into [1.6172, 1.875]: the assumption of self-mapping is satisfied! Also satisfied:

    k = max_{x∈[a,b]} |g'(x)| = 0.875 < 1.

Convergence.

The concept of contraction mapping is often used: a function g(x) on domain [a, b] is a contraction mapping if there exists a constant k < 1 such that

    |g(x) - g(y)| ≤ k |x - y|   for all x ∈ [a, b] and y ∈ [a, b].

A function with max_{x∈[a,b]} |g'(x)| < 1 is a special (and common) case of a contraction mapping. (Why? Exercise.)
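To see what the contraction constant k = 0.875 from the example above means in practice, the bound |p - x_n| ≤ k^n |p - x_0| gives

    0.875^n ≤ 0.1   ⟺   n ≥ log 0.1 / log 0.875 ≈ 17.2,

so this particular iteration needs roughly 18 steps per decimal digit of accuracy, compared with log_2 10 ≈ 3.3 bisection steps per digit.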

2.4 Newton's method

Newton's method is a very popular example of fixed-point iteration x_{n+1} = g(x_n), where the goal is to solve f(x) = 0 and the iteration function is

    g(x) = x - f(x)/f'(x),   i.e.   x_{n+1} = x_n - f(x_n)/f'(x_n).   (1)

Intuition: The tangent line passing through (x_n, f(x_n)) has slope f'(x_n) and therefore is governed by the linear function y - f(x_n) = f'(x_n)(x - x_n). Use this linear function as an approximation of f, so that its x-intercept approximates the root of f. By setting y = 0 in the above equation, we get the approximation x_{n+1} = x_n - f(x_n)/f'(x_n).

Advantages: Assume f(p) = 0 and f'(p) ≠ 0. Then a calculation shows (do the calculation!)

    g'(p) = 0,   (2)

which is in great favor of the smallness condition max_{x∈[a,b]} |g'(x)| ≤ k < 1. Indeed, because g'(x) is continuous, there always exists a positive number d such that on the neighborhood [a, b] = [p - d, p + d] the smallness condition is satisfied. Moreover, the requirement that g maps [a, b] into itself is also true. (Why? For any x_n ∈ [a, b] = [p - d, p + d], we have |g(x_n) - p| = |g(x_n) - g(p)| = |g'(ξ)| |x_n - p| < d, and thus x_{n+1} = g(x_n) ∈ [a, b].)

Disadvantage: if f'(p) = 0 or is very close to zero, then one will have a zero or small denominator in (1). Newton's method will then become unstable (think about the round-off error) and should be avoided.
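A minimal sketch of scheme (1) in the style of the earlier codes (it assumes a second file fp.m containing the derivative f'(x); the names newton and fp are illustrative, not from the notes, and the stopping rule is the one discussed for the secant method below):

function x=newton(x0,ep,nmax)
% Newton's method x_{n+1} = x_n - f(x_n)/fp(x_n), stopping when
% successive iterates are within ep or after nmax steps.
x=x0;
for n=1:nmax
    dx=f(x)/fp(x);     % a near-zero fp(x) makes this step unstable
    x=x-dx;
    if (abs(dx)<ep)
        return;
    end
end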

Order of convergence. Newton's method converges at second order if the initial guess is chosen to fall in [p - d, p + d], that is,

    lim |x_{n+1} - p| / |x_n - p|^2 = λ.   (3)

Proof. Just as in the general case of fixed-point iteration, subtract the exact equation p = g(p) from the approximation x_{n+1} = g(x_n):

    x_{n+1} - p = g(x_n) - g(p)
                = g(p) + g'(p)(x_n - p) + g''(ξ_n)(x_n - p)^2/2 - g(p)   (Taylor expansion to second order!)
                = g''(ξ_n)(x_n - p)^2/2   (due to (2)).

Now let n → ∞; by the convergence of x_n to p and the fact that ξ_n is enclosed by x_n and p, we immediately have lim |g''(ξ_n)|/2 = |g''(p)|/2. Thus, (3) is proved with λ = |g''(p)|/2.

Fixed-point iteration methods in general are only first order, since the zero-derivative condition (2) is not always true.

2.5 Secant method

A variation of Newton's method is the secant method: replace the derivative f'(x_n) by yet another approximation, [f(x_n) - f(x_{n-1})]/(x_n - x_{n-1}). This is the slope of the secant line connecting (x_n, f(x_n)) and (x_{n-1}, f(x_{n-1})).

Scheme. Initialize x_0 AND x_1. Then, iterate

    x_{n+1} = x_n - f(x_n) / {[f(x_n) - f(x_{n-1})]/(x_n - x_{n-1})}.

The stopping condition is usually |x_{n+1} - x_n| < ε, since we, handwavingly, regard x_{n+1} as very close to the exact solution p, which means |p - x_n| ≈ |x_{n+1} - x_n|.

Both methods converge after one iteration if f(x) is a linear function!
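A matching sketch for the secant scheme (same f.m convention as before; the name secant is an illustrative choice):

function x=secant(x0,x1,ep,nmax)
% Secant method: Newton's method with f'(x_n) replaced by the
% slope of the secant through (x_{n-1},f(x_{n-1})) and (x_n,f(x_n)).
for n=1:nmax
    x=x1-f(x1)*(x1-x0)/(f(x1)-f(x0));
    if (abs(x-x1)<ep)      % the usual stopping condition
        return;
    end
    x0=x1; x1=x;           % keep the two most recent iterates
end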