ROOT FINDING REVIEW MICHELLE FENG


1. Root Finding Methods

1.1. Bisection Method.
(1) Very naive approach based on the Intermediate Value Theorem
(2) You need to be looking in an interval containing only one zero, where the function takes values of opposite sign at the endpoints of the interval, to guarantee that you find the correct zero
(3) Loosely, the algorithm repeatedly cuts the interval in half, identifies which half the zero is in, and repeats the search in that half of the interval. Eventually, when the interval is small enough, we can say we are close enough to the zero and stop iterating.
(4) Converges linearly

1.2. Fixed Point Iteration. p_0 given, p_n = g(p_{n-1})
(1) Can convert a root finding problem to a fixed point problem (by solving for x)
(2) Any continuous function that maps [a, b] to itself (for all x in [a, b], g(x) in [a, b]) has at least one fixed point (existence)
(3) Any such function which is also not too steep (|g'(x)| <= K < 1 for all x in [a, b]) has a unique fixed point (uniqueness)
(4) Fixed point iteration will converge for any g meeting the previous conditions
(5) Fixed point iteration works faster the smaller K is (K very small gives fast convergence, K close to 1 gives very slow convergence)
(6) Proof of convergence follows from the existence and uniqueness conditions. To prove existence, take h(x) = x - g(x). Notice that if h(x) = 0 for some x, then that x is a fixed point of g. If h(a) = 0 or h(b) = 0, we're done. Now suppose not. Then g(a) is in [a, b] but g(a) != a, so g(a) > a and h(a) < 0. Similarly, g(b) < b, so h(b) > 0. Then by the Intermediate Value Theorem there exists x in [a, b] such that h(x) = 0, and we're finished. Now we prove uniqueness. Suppose we have two distinct points p, q which are fixed by g, and that the uniqueness condition |g'(x)| <= K < 1 holds for all x in [a, b]. Then by the Mean Value Theorem, for some ξ between p and q,
    |p - q| = |g(p) - g(q)| = |g'(ξ)| |p - q| <= K |p - q|
But since K < 1, this can only be true if |p - q| = 0, and we're done.

1.3. Newton's Method. p_0 given, p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})
(1) Based on Taylor approximation (specifically, linear approximation)
(2) Converges quadratically in many cases; occasionally fails to converge
(3) Take a point, look at f at that point, draw the tangent line there, and find where the tangent line crosses zero. Most likely this will put you closer to the real zero; repeat.
(4) Proof of convergence is based on fixed point convergence: we only have convergence in an interval around p, and this interval can be very, very small! So if you start too far away, you might find that Newton's method doesn't converge at all.
(5) Derivation from Taylor's theorem: since f(p) = 0,
    f(p) ≈ f(p_n) + f'(p_n)(p - p_n)
    0 ≈ f(p_n) + f'(p_n)(p - p_n)
    p ≈ p_n - f(p_n) / f'(p_n)
Use the right-hand quantity as our next guess p_{n+1}. Alternatively, we can derive Newton's method graphically: given a point p_n, we find the tangent line at p_n and look for its x-intercept. Done this way, we note that the tangent line at p_n goes through the point (p_n, f(p_n)) and has slope f'(p_n). Writing out the equation of this line, we have
    f(x) - f(p_n) = f'(p_n)(x - p_n)

Solve for the x-intercept by plugging in (p_{n+1}, 0) for (x, f(x)), then solve for p_{n+1}.

1.4. Secant Method. p_0, p_1 given, p_n = p_{n-1} - f(p_{n-1})(p_{n-1} - p_{n-2}) / (f(p_{n-1}) - f(p_{n-2}))
(1) An adaptation of Newton's method, useful when you do not know what f' looks like
(2) To derive the secant method, we note that f'(x) ≈ (f(a) - f(x)) / (a - x) if x is close to a. Then replace f'(p_{n-1}) from Newton's method with the approximation (f(p_{n-1}) - f(p_{n-2})) / (p_{n-1} - p_{n-2}), and we're done. Alternatively, given p_{n-1} and p_{n-2}, draw the line through them and let p_n be the x-intercept of this line.
(3) Converges superlinearly

1.5. Method of False Position.
(1) Combines the secant and bisection methods
(2) Like the secant method, requires two starting points. Like the bisection method, these starting points should give values of opposite sign when f is applied.
(3) The formula is the same as for the secant method, except that instead of using p_{n-1}, p_{n-2}, we use p_{n-1} and either p_{n-2} or p_{n-3} (whichever one gives a value of opposite sign from f(p_{n-1}) when f is applied)
(4) Converges more slowly than the secant method, but has better convergence guarantees (similar to the bisection method)

1.6. Laguerre's Method. x_0 given,
    G = p'(x_k) / p(x_k),    H = G² - p''(x_k) / p(x_k),    a = n / (G ± sqrt((n - 1)(nH - G²))),    x_{k+1} = x_k - a
(1) Used specifically for polynomials
(2) Derivation: recall that a polynomial p(x) of degree n can be written
    p(x) = C(x - x_1)(x - x_2) ··· (x - x_n)
where the x_i are its roots. Let
    G = (d/dx) ln p(x) = p'(x) / p(x)

and
    H = -(d²/dx²) ln p(x) = (p'(x)² - p''(x)p(x)) / p(x)² = G² - p''(x) / p(x)
Notice that using the factored form written before, we can write
    G = Σ_{i=1}^{n} 1 / (x - x_i),    H = Σ_{i=1}^{n} 1 / (x - x_i)²
Now suppose that we guess x, and suppose that x - x_i = a for exactly one root x_i, while x - x_j = b for all the other roots x_j. Then
    G = 1/a + (n - 1)/b,    H = 1/a² + (n - 1)/b²
Solving for a, we get
    a = n / (G ± sqrt((n - 1)(nH - G²)))
and we can guess that x_i ≈ x - a.
(3) Converges very fast (cubically) for most polynomials, and almost always converges regardless of the initial point
(4) However, it requires f to be a polynomial, and it requires computing two derivatives.

1.7. Horner's Method.
(1) Used for polynomials
(2) Since Newton's method only finds one root, Horner's method gives you a way of finding the other roots
(3) Horner's method allows us to do polynomial long division (see examples on Wikipedia)
(4) Use Newton's method to find the first root. Then use Horner's method to divide the polynomial by (x - x_1), where x_1 is the first root. Then use Newton's method on the quotient to find the second root. Repeat until no roots remain.
(5) Efficient for evaluating polynomials, but as a root finding method it is constrained by the speed of Newton's method (which, to be fair, is usually fast).
(6) Additionally, small errors at each step can become magnified, so it isn't very accurate as the number of roots goes up.
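The Newton-plus-deflation loop described in 1.7 can be sketched in Python as follows. This is a minimal sketch, not from the notes: the function names and the coefficient convention (highest degree first) are my own choices.

```python
def horner(coeffs, x0):
    """Evaluate p(x0) by Horner's rule, and also return the quotient
    coefficients of p(x) / (x - x0) (synthetic division).
    coeffs are highest degree first, e.g. [1, 0, -2] is x^2 - 2."""
    out = []
    b = 0.0
    for c in coeffs:
        b = b * x0 + c
        out.append(b)
    # the last value is the remainder p(x0); the rest are the quotient
    return out[-1], out[:-1]

def newton_poly(coeffs, x0, tol=1e-12, max_iter=100):
    """Newton's method on a polynomial, using Horner to evaluate p and p'."""
    n = len(coeffs) - 1
    deriv = [c * (n - i) for i, c in enumerate(coeffs[:-1])]  # p'(x) coefficients
    x = x0
    for _ in range(max_iter):
        px, _ = horner(coeffs, x)
        dpx, _ = horner(deriv, x)
        if dpx == 0.0:          # flat tangent: give up on this iterate
            break
        step = px / dpx
        x -= step
        if abs(step) < tol:
            break
    return x

def all_roots(coeffs, x0=0.5):
    """Find roots one at a time: Newton, then deflate by (x - root)."""
    roots = []
    while len(coeffs) > 2:
        r = newton_poly(coeffs, x0)
        roots.append(r)
        _, coeffs = horner(coeffs, r)      # deflate: divide out (x - r)
    roots.append(-coeffs[1] / coeffs[0])   # remaining linear factor ax + b
    return roots
```

For example, all_roots([1.0, -6.0, 11.0, -6.0]) recovers the roots 1, 2, 3 of (x - 1)(x - 2)(x - 3). Note that each deflation reuses a root found only approximately, which is exactly the error magnification warned about in item (6).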

1.8. Multiple Roots.
(1) When a function f has a multiple root, Newton's method converges very slowly (linearly, in fact! We'll prove this in a bit).
(2) Loosely, linear convergence happens because if f has a multiple root p, then f'(p) = 0, so as we approach p we're looking at tangent lines that are nearly flat, and such tangent lines aren't very good approximations of f.
(3) Why does convergence depend on whether a root is multiple or simple? Let's take a look at the function whose fixed point we're finding with Newton's method:
    g(x) = x - f(x) / f'(x)
We can write out a Taylor expansion for g(x) around p:
    g(x) = g(p) + g'(p)(x - p) + (g''(ξ)/2)(x - p)²
Now, let's examine g'(p). Suppose f has a simple root at p, that is, f'(p) != 0. Then
    g'(x) = 1 - (f'(x)f'(x) - f(x)f''(x)) / (f'(x)f'(x)) = f(x)f''(x) / (f'(x))²
But f(p) = 0 and f'(p) != 0, since p is a simple root, so g'(p) = 0. Then
    g(x) = g(p) + (g''(ξ)/2)(x - p)²
But then we have
    |g(x) - g(p)| = (|g''(ξ)|/2) |x - p|²
Plugging in x = p_n and recalling that p is a fixed point of g (since p is a root), we get
    |g(p_n) - g(p)| = (|g''(ξ)|/2) |p_n - p|²
    |p_{n+1} - p| = (|g''(ξ)|/2) |p_n - p|²
    lim_{n→∞} |p_{n+1} - p| / |p_n - p|² = |g''(p)|/2
which is precisely the formula for quadratic convergence. So what goes wrong when we have a multiple root? If we have a multiple root at p, then f'(p) = 0. This means that when we take g'(p), we're not going to get a convenient zero (you can compute on your own that g'(p) != 0). So now, when we look at the Taylor expansion, we can't just ignore the linear g'(p)(x - p) term! Instead, by the Mean Value Theorem we have
    g(x) = g(p) + g'(ξ)(x - p)
and using the same argument as above, we have
    lim_{n→∞} |p_{n+1} - p| / |p_n - p| = |g'(p)|
so we have only linear convergence at best (in fact |g'(p)| < 1 will hold, since g'(p) = 1 - 1/m for a root of multiplicity m, so we'll have exactly linear convergence).
(4) To fix the problem of linear convergence with multiple roots, we modify f into a function with which Newton's method will converge quadratically. To do this, we have to pick a function that has only simple roots, and that also preserves all the roots of f. We take
    μ(x) = f(x) / f'(x)
You can prove that this has the characteristics we want by using the fundamental theorem of algebra. Now we can use Newton's method on μ(x) to get quadratic convergence to the same roots.
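The linear-versus-quadratic behavior in 1.8 is easy to check numerically. Below is a minimal sketch; the test function f(x) = (x - 1)², which has a double root at 1, and the starting point are my own choices, not from the notes.

```python
def f(x):   return (x - 1.0) ** 2      # double root at x = 1
def df(x):  return 2.0 * (x - 1.0)
def d2f(x): return 2.0

def newton_errors(x, steps):
    """Plain Newton; at a double root the error only shrinks linearly
    (here it halves each step, matching |g'(p)| = 1 - 1/2)."""
    errs = []
    for _ in range(steps):
        if df(x) != 0.0:
            x = x - f(x) / df(x)
        errs.append(abs(x - 1.0))
    return errs

def modified_newton_errors(x, steps):
    """Newton applied to mu(x) = f(x)/f'(x). Simplifying mu/mu' gives the
    update x - f f' / (f'^2 - f f''), which restores fast convergence."""
    errs = []
    for _ in range(steps):
        denom = df(x) ** 2 - f(x) * d2f(x)
        if denom != 0.0:        # denom = 0 only once we sit on the root
            x = x - f(x) * df(x) / denom
        errs.append(abs(x - 1.0))
    return errs
```

Starting from x_0 = 2, plain Newton produces errors 0.5, 0.25, 0.125, ... (halving each step), while for this particular f the modified iteration lands on the root immediately, since μ(x) = (x - 1)/2 is linear.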