Numerical Methods. Root Finding


Numerical Methods
Solving Non-Linear 1-Dimensional Equations (Root Finding)

Given a real-valued function f of one variable (say x), the idea is to find an x such that f(x) = 0.

Root Finding: Examples

Find real x such that x^2 + 4x + 3 = 0: for a quadratic equation ax^2 + bx + c = 0 we can use the formula

    x_{1,2} = (-b ± sqrt(b^2 - 4ac)) / (2a)

Find real x such that cos(x) = 0.

For many other functions it is not easy to determine the roots analytically. For example, an equation such as f(x) = e^(-x) - x = 0 cannot be solved analytically; we must use approximate solution techniques.

Graphical Methods

If we plot the function f(x) and observe where it crosses the x-axis, we can get a rough estimate of the root. This is limited by the fact that the results are not precise, but it is useful for obtaining an initial estimate which can be further refined by another method. Graphical methods are also helpful for exploring function behaviour.
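Such a coarse graphical scan is easy to automate. Below is a minimal Python sketch (the function name scan_sign_changes and the sample count are my own choices, not from the notes): it samples f on an even grid and returns each subinterval where the sign flips, which, for a continuous f, brackets at least one root by the Intermediate Value Theorem.

```python
import math

def scan_sign_changes(f, a, b, n=1000):
    """Sample f at n+1 evenly spaced points on [a, b] and return the
    subintervals on which f changes sign (each brackets >= 1 root)."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    brackets = []
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0) * f(x1) < 0:
            brackets.append((x0, x1))
    return brackets

# The transcendental equation e^(-x) - x = 0 has a single root near x = 0.567
brackets = scan_sign_changes(lambda x: math.exp(-x) - x, 0.0, 1.0)
```

Any bracketing method of the kind discussed below can then refine each returned interval.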

Trial and Error

Guess a value of x and evaluate whether f(x) is zero. If not (as is almost always the case), make another guess, evaluate f(x) again, and determine whether the new value provides a better estimate for the root. Repeat the process until a guess is obtained that results in f(x) being close to zero.

An Algorithmic Approach

Idea: find a sequence x_1, x_2, x_3, x_4, ... so that for some N, x_N is close to a root; that is, |f(x_N)| < tolerance. What do we need?

Requirements for this to work

- An initial guess x_1.
- A relationship between x_{n+1} and x_n (and possibly x_{n-1}, x_{n-2}, ...).
- A rule for when to stop the successive guesses.

Methods we will study:

- Fixed-Point Iteration
- Bracketing Methods: the Bisection Method and the Regula Falsi Method
- Newton's Method
- Secant Method (avoids using derivatives)

FIXED POINT ITERATION

The equation f(x) = 0 is rearranged into the form x = g(x). Roots of the equation x = g(x) are therefore roots of the equation f(x) = 0. This gives rise to the iterative formula

    x_{n+1} = g(x_n),   n = 0, 1, 2, 3, ...

with initial approximation x_0. The sequence x_0, x_1, x_2, ... will converge to a root α of the equation x = g(x) provided a suitable starting value x_0 is chosen and -1 < g'(α) < 1.

[Figures: graph of y = f(x); graphs of y = g(x) and y = x.]

Convergence of Fixed-Point Iteration

Analysis of fixed-point iteration reveals that the iterations are convergent on an interval [a, b] if the interval contains a root and |g'(x)| < 1 there:

- If -1 < g'(x) < 0, the iterations oscillate while converging.
- If 0 < g'(x) < 1, the iterations converge monotonically.
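The generic scheme above can be sketched in a few lines of Python (the function name fixed_point and the tolerance defaults are my own choices). The usage example iterates g(x) = cos(x), whose fixed point is the root of cos(x) - x = 0; since |g'(α)| = |sin(α)| < 1 there, the iteration converges.

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

alpha = fixed_point(math.cos, 1.0)  # fixed point of cos(x), i.e. cos(x) = x
```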

Fixed-Point Iteration: Example 1

The equation f(x) = 0, where f(x) = x^3 - 7x + 3, may be rearranged to give x = (x^3 + 3)/7. Intersections of the graphs of y = x and y = (x^3 + 3)/7 represent roots of the original equation x^3 - 7x + 3 = 0.

[Figure: graphs of y = x and y = (x^3 + 3)/7.]

The rearrangement x = (x^3 + 3)/7 leads to the iteration

    x_{n+1} = (x_n^3 + 3)/7,   n = 0, 1, 2, 3, ...

To find the middle root α, let the initial approximation be x_0 = 2. Then

    x_1 = (x_0^3 + 3)/7 = (2^3 + 3)/7 = 1.5714
    x_2 = (x_1^3 + 3)/7 = (1.5714^3 + 3)/7 = 0.9828
    x_3 = (x_2^3 + 3)/7 = (0.9828^3 + 3)/7 = 0.5642
    x_4 = (x_3^3 + 3)/7 = (0.5642^3 + 3)/7 = 0.4542
    etc.

The iteration slowly converges to give α = 0.441 (to 3 s.f.)

Fixed-Point Iteration: Example 1 (cont.)

For x_0 = 2 the iteration converges to the middle root α, since |g'(α)| < 1.

    n    x_n
    0    2
    1    1.5714
    2    0.9828
    3    0.5642
    4    0.4542
    5    0.44196
    6    0.44090
    7    0.44082
    8    0.44081

[Figure: staircase diagram of y = x and y = (x^3 + 3)/7 near the middle root.]

α = 0.441 (to 3 s.f.)

For x_0 = 3 the iteration diverges from the upper root α.

    n    x_n
    0    3
    1    4.2857
    2    11.674
    3    227.70
    4    1.7E+06
    5    6.9E+17

[Figure: staircase diagram near the upper root.]

The iteration diverges because |g'(α)| > 1.
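Both behaviours are easy to reproduce. A short Python sketch (helper name iterate is my own), using the rearrangement g(x) = (x^3 + 3)/7 from Example 1: starting at x_0 = 2 it settles onto the middle root near 0.4408, while starting at x_0 = 3 it blows up within a few steps.

```python
def iterate(g, x0, n):
    """Apply x_{k+1} = g(x_k) n times and return the whole sequence."""
    seq = [x0]
    for _ in range(n):
        seq.append(g(seq[-1]))
    return seq

g = lambda x: (x**3 + 3) / 7

middle = iterate(g, 2.0, 40)  # x0 = 2: converges to the middle root
upper = iterate(g, 3.0, 4)    # x0 = 3: diverges from the upper root
```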

Fixed-Point Iteration: Examples 2 and 3

[Figures only: further graphical examples of fixed-point iteration with different rearrangements g(x).]

g_1(x) converges, g_2(x) diverges, g_3(x) converges very quickly. Note (see later): g_3(x) is the Newton-Raphson iteration function for the f(x) in question.

Bracketing Methods

This class of methods exploits the fact that a function changes sign in the vicinity of a root. Recall the Intermediate Value Theorem: if a continuous function is positive at one end of an interval and negative at the other end, then there is a root somewhere in the interval. These are called bracketing methods because two initial guesses for the root are required; the two guesses must bracket, i.e. be on either side of, the root.

Bracketing: Bisection Method Steps

1) Notice that if f(a)·f(b) < 0 then there is a root somewhere between a and b.
2) Suppose we are lucky enough to be given a and b so that f(a)·f(b) < 0.
3) Divide the interval into two and test to see which half of the interval contains the root.
4) Repeat.

Step 1, Step 2 [figures]

Even though the left-hand half could have a root in it, we are going to drop it from our search. The right-hand half must contain a root, so we focus on it.

Step 3, Step 4 [figures: the winning interval; which way next?]

Bisection Method

Find a_0, b_0 so that f(a_0)·f(b_0) < 0.
For k = 0, 1, ...
    m = (a_k + b_k)/2
    if f(a_k)·f(m) < 0
        then a_{k+1} = a_k, b_{k+1} = m
        else a_{k+1} = m,  b_{k+1} = b_k

At every step the solution x* lies in [a_{k+1}, b_{k+1}].
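The steps above translate directly into code. A minimal Python sketch (function name bisect and tolerance defaults are my own choices), applied here to the upper root of the running example f(x) = x^3 - 7x + 3 on [2, 3]:

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bisection: repeatedly halve [a, b], keeping the half with a sign change."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm   # root is in the left half
        else:
            a, fa = m, fm   # root is in the right half
    return (a + b) / 2

root = bisect(lambda x: x**3 - 7*x + 3, 2, 3)  # upper root, near 2.3977
```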

Convergence Rate Analysis

Bisection Convergence Rate

Every time we split the interval we reduce the search interval by a factor of two, so the error in the value of the root of the function after n iterations is given by

    ε_n = |b_n - a_n| = |b_0 - a_0| / 2^n

Bisection Method

From this relationship we can also determine the number of iterations needed to satisfy a given error criterion ε:

    n >= log2( |b_0 - a_0| / ε )

Regula Falsi Method

The regula falsi method starts with two points, (a, f(a)) and (b, f(b)), satisfying the condition f(a)·f(b) < 0. The straight line through the points (a, f(a)) and (b, f(b)) is

    y = f(a) + (f(b) - f(a))/(b - a) · (x - a)

The next approximation to the zero is the value of x where the straight line through the initial points crosses the x-axis:

    c = a - f(a)·(b - a)/(f(b) - f(a)) = (a·f(b) - b·f(a)) / (f(b) - f(a))

Regula Falsi Method (cont.)

If there is a zero in the interval [a, c], we leave the value of a unchanged and set b = c. On the other hand, if there is no zero in [a, c], the zero must be in the interval [c, b]; so we set a = c and leave b unchanged. The stopping condition may test the size of |y|, the amount by which the approximate solution has changed on the last iteration, or whether the process has continued too long. Typically, a combination of these conditions is used.

Regula Falsi Method: Example

Finding the cube root of 2 using regula falsi, i.e. the zero of f(x) = x^3 - 2. Since f(1) = -1 and f(2) = 6, we take as our starting bounds on the zero a = 1 and b = 2. Our first approximation to the zero is

    x = b - f(b)·(b - a)/(f(b) - f(a)) = 2 - 6·(2 - 1)/(6 - (-1)) = 2 - 6/7 = 8/7 ≈ 1.1429

We then find the value of the function: y = f(8/7) = (8/7)^3 - 2 ≈ -0.5073. Since f(a) and f(8/7) are both negative, but f(8/7) and f(b) have opposite signs, we set the new a = 8/7, and b remains the same.
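The update rule can be sketched in Python as follows (function name regula_falsi and tolerance defaults are my own choices); the usage example repeats the cube-root-of-2 computation above.

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """False position: replace one endpoint with the x-intercept of the
    secant line through (a, f(a)) and (b, f(b)), keeping the sign change."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc   # zero lies in [a, c]
        else:
            a, fa = c, fc   # zero lies in [c, b]
    return c

cube_root_2 = regula_falsi(lambda x: x**3 - 2, 1, 2)
```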

Bracketing Methods: Advantages/Disadvantages

Two advantages are:
1. the procedure always converges; that is, a root will always be found, provided that at least one root exists;
2. there is good information available about the error associated with the result.

Two disadvantages are:
1. the method converges relatively slowly;
2. many iterations may be required.

Newton-Raphson Method

Use the value and first derivative to extrapolate linearly.

Newton-Raphson Method (cont.)

Suppose we know the function f and its derivative f' at any point. The tangent at the point (x_{k-1}, f(x_{k-1})) is defined by

    L_{k-1}(x) = f(x_{k-1}) + (x - x_{k-1})·f'(x_{k-1})

where L_{k-1}(x) can be thought of as a linear model of the function f at x_{k-1}.

Newton's Method

The slope of the tangent line at the point (x_{k-1}, f(x_{k-1})) is

    m = f'(x_{k-1})

The tangent line y = L_{k-1}(x) crosses the x-axis at x_k, so its slope can also be written

    m = (f(x_{k-1}) - 0) / (x_{k-1} - x_k)

Equating the two expressions for m and solving for x_k gives

    x_k = x_{k-1} - f(x_{k-1}) / f'(x_{k-1})

Newton-Raphson Method (cont.)

The zero of the linear model L_{k-1} is given by:

    x_k = x_{k-1} - f(x_{k-1}) / f'(x_{k-1})

If L_{k-1} is a good approximation to f over a wide interval, then x_k should be a good approximation to a root of f. [Think of x_{k-1} as the current approximation to the root and x_k as the next one.]

Deriving the Newton Iteration using Taylor's Theorem

Let the current value be x_n and suppose the zero is approximately located at x_{n+1}. Using a Taylor expansion,

    0 = f(x_{n+1}) = f(x_n) + f'(x_n)·(x_{n+1} - x_n) + O((x_{n+1} - x_n)^2)

Solving for x_{n+1} (and dropping the higher-order term), we get

    x_{n+1} = x_n - f(x_n) / f'(x_n)

Finding a Root of the Function f(x)

Starting from a guess x_0, use the Newton-Raphson formula to compute better approximations

    x_k = x_{k-1} - f(x_{k-1}) / f'(x_{k-1})

The function g(x) defined by the formula

    g(x) = x - f(x) / f'(x)

is called the Newton-Raphson iteration function. Let p be a root of the function f(x), that is, f(p) = 0. It is easy to see that g(p) = p. Thus the Newton-Raphson iteration for finding the root of the equation f(x) = 0 is accomplished by finding a fixed point of the function g(x).
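A minimal Python sketch of the iteration just derived (function name newton and tolerance defaults are my own choices), applied to the running example f(x) = x^3 - 7x + 3 starting from x_0 = 3:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_k = x_{k-1} - f(x_{k-1}) / f'(x_{k-1})."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)
    return x

f = lambda x: x**3 - 7*x + 3
fp = lambda x: 3*x**2 - 7
upper_root = newton(f, fp, 3.0)  # converges to the upper root, near 2.3977
```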

[Figures. First stage: the tangent at (x_1, f(x_1)) crosses the axis at x_1 - f(x_1)/f'(x_1). Zoom in for the second stage: the tangent at (x_2, f(x_2)) crosses at x_2 - f(x_2)/f'(x_2). Notice: we are getting closer.]

Newton's Iteration: Example 1

To solve the equation f(x) = 0, where f(x) = x^3 - 7x + 3, use the iteration:

    x_{n+1} = x_n - (x_n^3 - 7x_n + 3)/(3x_n^2 - 7),   n = 0, 1, 2, 3, ...

To find the upper root α, let the initial approximation be x_0 = 3.

    x_1 = x_0 - (x_0^3 - 7x_0 + 3)/(3x_0^2 - 7) = 3 - 9/20 = 2.55
    x_2 = x_1 - (x_1^3 - 7x_1 + 3)/(3x_1^2 - 7) = 2.41157
    x_3 = 2.39779
    etc.

The iteration quickly converges:

    n    x_n
    0    3
    1    2.55
    2    2.41157
    3    2.39779
    4    2.39766
    5    2.39766

[Figure: graph of y = x^3 - 7x + 3 near the upper root.]

The iteration quickly converges, giving α = 2.40 (to 3 s.f.)

Newton's Iteration: Example 1 (cont.)

The choice of initial approximation x_0 determines which root is found.

    n    x_n (x_0 = -2)   x_n (x_0 = 1)   x_n (x_0 = 3)
    0    -2               1               3
    1    -3.8             0.25            2.55
    2    -3.10419         0.43578         2.41157
    3    -2.8676          0.44080         2.39779
    4    -2.83888         0.440808        2.39766
    5    -2.83847         0.440808        2.39766

Initial approximations x_0 = -2, x_0 = 1 and x_0 = 3 give iterations converging to -2.84, 0.441 and 2.40 respectively (to 3 s.f.)

Newton's Iteration for Finding Square Roots

Assume that A > 0 is a real number and let p_0 > 0 be an initial approximation to sqrt(A). Define the sequence {p_k} using the recursive rule

    p_k = (p_{k-1} + A/p_{k-1}) / 2,   k = 1, 2, ...

Then this sequence converges to sqrt(A); that is, lim_{k→∞} p_k = sqrt(A).

Outline of Proof. Start with the function f(x) = x^2 - A, and notice that the roots of f(x) = 0 are ±sqrt(A). Now use f(x) and the derivative f'(x) = 2x to write down the Newton-Raphson iteration function

    g(x) = x - f(x)/f'(x) = x - (x^2 - A)/(2x) = (x + A/x)/2

so that p_k = g(p_{k-1}). It can be proved that the generated sequence will converge for any starting value p_0 > 0.
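The square-root recursion is short enough to sketch directly (function name heron_sqrt and the fixed iteration count are my own choices); a handful of iterations already reaches machine precision.

```python
def heron_sqrt(A, p0, n=10):
    """Newton's iteration for f(x) = x^2 - A:
    p_k = (p_{k-1} + A / p_{k-1}) / 2, which converges to sqrt(A)."""
    p = p0
    for _ in range(n):
        p = 0.5 * (p + A / p)
    return p

approx = heron_sqrt(2.0, 1.0)  # approximates sqrt(2) = 1.41421356...
```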

Convergence of Newton's Method

We will show that the rate of convergence is much faster than the bisection method. However, as always, there is a catch. The method uses a local linear approximation, which clearly breaks down near a turning point: a small f'(x_n) makes the linear model very flat and will send the search far away.

[Figure: an initial guess x_1 chosen near a turning point; the tangent at (x_1, f(x_1)) is nearly flat, so the next iterate x_1 - f(x_1)/f'(x_1) shoots off into the distance.]

Newton's Method: Advantages/Disadvantages

The Newton-Raphson method generally provides rapid convergence to the root, provided the initial value x_0 is sufficiently close to the root. How close is "sufficiently close"? That depends on the characteristics of the function itself. As illustrated by the following examples, certain initial values of x may cause the solution to diverge or fail to produce a valid result.

Failure of Newton's Iteration: Example 1

To solve the equation f(x) = 0, where f(x) = 1/x + 3, use the iteration:

    x_{n+1} = x_n - (1/x_n + 3)/(-1/x_n^2) = 2x_n + 3x_n^2,   n = 0, 1, 2, 3, ...

To find the only root α, let the initial approximation be x_0 = -1.

    x_1 = 2x_0 + 3x_0^2 = -2 + 3 = 1
    x_2 = 2x_1 + 3x_1^2 = 2 + 3 = 5
    x_3 = 2x_2 + 3x_2^2 = 10 + 75 = 85
    etc.

The iteration quickly diverges, failing to give the root α.
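This runaway behaviour is easy to demonstrate. A short Python sketch, assuming f(x) = 1/x + 3 as in the example above (the helper name newton_step is my own): each Newton step algebraically simplifies to 2x + 3x^2, and from x_0 = -1 the iterates explode instead of approaching the root -1/3.

```python
def newton_step(x):
    """One Newton step for f(x) = 1/x + 3, using f'(x) = -1/x**2.
    The step simplifies algebraically to 2*x + 3*x**2."""
    return x - (1 / x + 3) / (-1 / x**2)

seq = [-1.0]
for _ in range(5):
    seq.append(newton_step(seq[-1]))
# seq grows without bound: -1, 1, 5, 85, 21845, ...
```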

Failure of Newton's Iteration: Example 1 (cont.)

To solve the equation f(x) = 0, where f(x) = 1/x + 3, the iteration from x_0 = -1 gives:

    n    x_n
    0    -1
    1    1
    2    5
    3    85
    4    21845
    5    1.4E+09

[Figure: graph of y = 1/x + 3.]

The iteration quickly diverges, failing to give the root α = -1/3.

Failure of Newton's Iteration: Example 2

Newton's iteration for a function with a flattening tail, such as f(x) = x·e^(-x), can produce a divergent sequence. [Figure.]

Failure of Newton's Iteration: Example 3

For some functions f(x), Newton's iteration can produce a cyclic sequence: the iterates repeat without ever converging. [Figure.]

Failure of Newton's Iteration: Example 4

Newton's iteration can produce a cyclic sequence. [Figure.]

Failure of Newton's Iteration: Example 5

Newton's iteration for f(x) = arctan(x) can produce a divergent oscillating sequence. [Figure.]

Newton's Method: Algorithm So Far

Choose an initial guess x_0.
Repeat
    x_{k+1} = x_k - f(x_k)/f'(x_k)
Until failure / convergence.

How do we determine this?

Convergence Criteria

A root-finding procedure needs to monitor progress towards the root and stop when the current guess is close enough to the desired root. Convergence checking will avoid searching to unnecessary accuracy. Convergence checking can consider whether two successive approximations to the root are close enough to be considered equal, and can examine whether f(x) is sufficiently close to zero at the current guess.

Convergence Criteria

On the values of x:     |x_{k+1} - x_k| < tol_1
On the values of f(x):  |f(x_k)| < tol_2

BOTH OF THESE CRITERIA NEED TO BE SATISFIED FOR THE ALGORITHM TO BE COMPLETE.
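The combined test can be sketched as a small predicate (the function name converged, the relative scaling of the x-test, and the default tolerances are my own choices, not from the notes):

```python
def converged(x_new, x_old, f_new, xtol=1e-10, ftol=1e-10):
    """Both criteria must hold: successive iterates close together
    AND the residual f(x) close to zero."""
    return abs(x_new - x_old) <= xtol * (1 + abs(x_new)) and abs(f_new) <= ftol
```

Scaling the x-tolerance by 1 + |x| makes the test behave sensibly for both tiny and large roots.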

Checks Against Failure to Converge

- Check f'(x_k) ≠ 0 before dividing by it.
- Put a maximum on the number of iterations so as to prevent the sequence wandering (use a for loop instead of repeat-until).
- Check to see if |f(x_{k+1})| or |x_{k+1} - x_k| is growing rather than decreasing.
- Check if f(x_k) is NaN or infinity, and whether x_k has left the representable range.
- Check if f'(x_k) is infinite; then the step f(x_k)/f'(x_k) is zero, x_{k+1} = x_k, and no progress is made.

Software Requirements

The user of such software should provide:

- the function f(x) and its derivative f'(x);
- an initial iterate x_0;
- convergence tolerances tol_1 and tol_2, both small numbers;
- a maximum number of possible iterations.

NOTE: if there are numerous solutions, different choices of x_0 might lead to different correct answers.

Newton in Matlab

[Figure: a Matlab implementation of Newton's method, not reproduced in this transcription.]
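The Matlab listing did not survive the transcription, so here is a hedged Python sketch of what such a routine typically looks like, combining the user-supplied inputs with the failure checks listed earlier (the function name newton_safe, the error types, and the tolerance defaults are my own choices):

```python
import math

def newton_safe(f, fprime, x0, tol1=1e-10, tol2=1e-10, max_iter=100):
    """Newton iteration with safeguards: non-finite values,
    zero derivative, and a cap on the number of iterations."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if not math.isfinite(fx):
            raise ArithmeticError("f(x) is not finite")
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) = 0; cannot take a Newton step")
        x_new = x - fx / dfx
        # both convergence criteria must hold
        if abs(x_new - x) <= tol1 * (1 + abs(x_new)) and abs(f(x_new)) <= tol2:
            return x_new
        x = x_new
    raise RuntimeError("maximum number of iterations exceeded")

root = newton_safe(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)  # sqrt(2)
```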

Speed of Convergence

Assume that {p_n} converges to p and set E_n = |p - p_n| for n ≥ 0. If two positive constants A ≠ 0 and R > 0 exist, and

    lim_{n→∞} E_{n+1} / E_n^R = A

then the sequence is said to converge to p with order of convergence R. The number A is called the asymptotic error constant.

- If R = 1, the convergence of {p_n} is called linear (new error proportional to old error).
- If R = 2, the convergence of {p_n} is called quadratic (new error proportional to old error squared).

Newton's Method: Quadratic Convergence at a Simple Root

Example. Start with p_0 = -2.4 and use Newton-Raphson iteration to find the root p = -2 of the polynomial f(x) = x^3 - 3x + 2. The iteration formula for computing {p_k} is

    p_k = p_{k-1} - (p_{k-1}^3 - 3p_{k-1} + 2) / (3p_{k-1}^2 - 3)

Checking for quadratic convergence (R = 2), we get a table of values in which the ratios E_{k+1}/E_k^2 settle down. Taking a closer look, they approach the asymptotic error constant A = |f''(p) / (2 f'(p))| = 2/3.
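The quadratic-convergence check is easy to reproduce numerically, assuming the example function f(x) = x^3 - 3x + 2 with simple root p = -2 reconstructed above (variable names are my own): the ratios E_{k+1}/E_k^2 should approach A = 2/3.

```python
f = lambda x: x**3 - 3*x + 2
fp = lambda x: 3*x**2 - 3
p = -2.0          # simple root of f (the other root, x = 1, is double)
x = -2.4          # starting approximation p_0
ratios = []
for _ in range(4):
    e_old = abs(x - p)
    x = x - f(x) / fp(x)        # one Newton step
    ratios.append(abs(x - p) / e_old**2)
# ratios tend to |f''(p) / (2 f'(p))| = 12 / 18 = 2/3
```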

Convergence Rate for Newton-Raphson Iteration

Assume that Newton-Raphson iteration produces a sequence {p_n} that converges to the root p of the function f(x).

- If p is a simple root, convergence is quadratic: E_{n+1} ≈ A·E_n^2.
- If p is a multiple root of order M, convergence is linear: E_{n+1} ≈ ((M-1)/M)·E_n.

REMINDER: Assume that f ∈ C^M[a, b] and p ∈ (a, b). We say that f(x) = 0 has a root of order M at p if and only if

    f(p) = 0, f'(p) = 0, ..., f^(M-1)(p) = 0, and f^(M)(p) ≠ 0.

A root of order M = 1 is called a simple root; if M > 1 it is called a multiple root.

Convergence in Newton Iteration

Let x* be the exact root, x_i the value in the i-th iteration, and ε_i = x_i - x* the error. Expanding about x*,

    f(x* + ε) = f(x*) + ε f'(x*) + (ε^2/2) f''(x*) + O(ε^3)
    f'(x* + ε) = f'(x*) + ε f''(x*) + O(ε^2)

Subtracting x* from both sides of x_{i+1} = x_i - f(x_i)/f'(x_i) gives

    ε_{i+1} = ε_i - f(x_i)/f'(x_i)

and substituting the expansions of f(x_i) and f'(x_i), using f(x*) = 0, yields the rate of convergence

    ε_{i+1} ≈ (f''(x*) / (2 f'(x*))) · ε_i^2

(quadratic convergence for simple roots).

Secant Method

The slope of the tangent line at x_k is approximated by the slope of the secant line L passing through the points (x_{k-1}, f(x_{k-1})) and (x_k, f(x_k)). The iterate x_{k+1} is the root of this secant line L (i.e., to find x_{k+1}, set y = 0):

    x_{k+1} = x_k - f(x_k) · (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))

Secant Method: Example 1

Finding the square root of 3 by the secant method. To find a numerical approximation to sqrt(3), we seek the zero of y = f(x) = x^2 - 3. Since f(1) = -2 and f(2) = 1, we take as our starting values x_0 = 1 and x_1 = 2. Our first approximation to the zero is

    x_2 = x_1 - y_1·(x_1 - x_0)/(y_1 - y_0) = 2 - 1·(2 - 1)/(1 - (-2)) = 2 - 1/3 = 5/3 ≈ 1.667

[Figure: calculation of sqrt(3) using the secant method.]
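A minimal Python sketch of the secant iteration (function name secant and tolerance defaults are my own choices); the usage example continues the sqrt(3) computation above, whose first iterate is 5/3 ≈ 1.667.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton's formula with f' replaced by the
    slope of the secant line through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break  # flat secant: cannot take another step
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

sqrt3 = secant(lambda x: x * x - 3, 1.0, 2.0)
```

Note that only one new function evaluation, f(x2), is needed per iteration.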

Secant Method: Example 2

Consider the secant method used on f(x) = x^3 + x^2 - x - 1 with x_0 = 2, x_1 = 0.5. Note this function is continuous with roots +1 and -1.

Secant Method: Advantages and Disadvantages

The secant method generally provides fairly rapid convergence to the root, but not as rapid as the Newton-Raphson method. However, except for the starting iteration, the secant method requires the evaluation of only one function per iteration, while the Newton-Raphson method requires that two functions (the function and its derivative) be evaluated for each iteration. Similar to Newton's method, the secant method may also encounter runaway, flat-spot, and cyclical non-convergence behaviour.

1-D Root-Finding Summary

Bisection method (globally convergent): guaranteed to work provided a bracketing pair is given; never diverges, but slow (linear rate of convergence, R = 1).

Newton's method (locally convergent): risky but fast (quadratic convergence rate, R = 2).

Secant method (locally convergent): less risky, mid-speed (convergence rate R ≈ 1.618); similar to Newton's method but avoids the calculation of derivatives.

Globally convergent methods converge starting from anywhere; locally convergent methods converge only if x_0 is sufficiently close to the root.