Determining the Roots of Non-Linear Equations Part I


Determining the Roots of Non-Linear Equations Part I
Prof. Dr. Florian Rupp
German University of Technology in Oman (GUtech)
Introduction to Numerical Methods for ENG & CS (Mathematics IV)
Spring Term 2016

Exercise Session

Reviewing the highlights from last time (1/2)

Computer exercise: Use MATLAB's function parabolic to solve the heat equation u_t(t,x,y) = Δu(t,x,y) on the square geometry −1 ≤ x,y ≤ 1 with discontinuous initial data u(0,x,y) = 1 on the disk x^2 + y^2 < 0.4^2 and u(0,x,y) = 0 otherwise, as well as zero Dirichlet boundary conditions. Plot the solution at 20 equally spaced times between t = 0 and t = 0.1.

[p,e,t] = initmesh('squareg');
[p,e,t] = refinemesh('squareg',p,e,t);
u0 = zeros(size(p,2),1);
ix = find(sqrt(p(1,:).^2 + p(2,:).^2) < 0.4);
u0(ix) = ones(size(ix));
tlist = linspace(0,0.1,20);
u1 = parabolic(u0,tlist,'squareb1',p,e,t,1,0,0,1);

Prof. Dr. Florian Rupp GUtech 2016: Numerical Methods 3 / 44

Reviewing the highlights from last time (2/2)

1D Schrödinger equation: The 1D time-dependent Schrödinger equation basically reads as i u_t(t,z) = −k u_zz(t,z) + V(z) u(t,z). Give its FTCS approximation.

1D finite difference approximation: Use Taylor approximation to determine a 1D finite difference approximation of the first derivative u_x(x) of a function u(x).

Introduction & Today's Scope

The Catenary

The catenary models an idealized hanging chain or cable that is subject to its own weight and supported only at its ends. The catenary is described by

y(x) = a cosh(x/a),

where the parameter a depends on the chain's or cable's material properties.

Determining the parameter a

Assume we are interested in the material parameter a for a cable hanging between two supporting points that are 100 m apart. From experiments we know that in this situation the maximal displacement is 10 m, as sketched on the previous slide. Plugged into the catenary equation we have

y(50) = a cosh(50/a) = y(0) + 10 = a + 10,

such that we obtain a as a root of the non-linear function

g(a) = a cosh(50/a) − a − 10.
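As a quick numerical check, the root of g can be found with a few lines of code (a Python sketch, anticipating the bisection method introduced below; the bracket [50, 200] is an assumption chosen so that g changes sign on it):

```python
import math

def g(a):
    # Catenary condition from the slides: y(50) = a*cosh(50/a) = a + 10
    return a * math.cosh(50.0 / a) - a - 10.0

def bisect(f, a, b, tol=1e-10):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "endpoints must bracket a root"
    while b - a > tol:
        c = 0.5 * (a + b)      # midpoint of the current interval
        fc = f(c)
        if fc == 0:
            return c
        if fa * fc < 0:        # root trapped in [a, c]
            b, fb = c, fc
        else:                  # root trapped in [c, b]
            a, fa = c, fc
    return 0.5 * (a + b)

a_star = bisect(g, 50.0, 200.0)
print(a_star)  # ≈ 126.6
```

For large a one has cosh(50/a) ≈ 1 + 1250/a^2, so g(a) ≈ 1250/a − 10, which already suggests a root near a = 125 and motivates the bracket.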

Determining the roots and the intermediate value theorem

Determining the roots of a continuous non-linear scalar function is actually an application of the intermediate value theorem:

Intermediate Value Theorem: Let f : [a,b] → R be a real-valued continuous function on the interval [a,b], and let y_0 be a number between f(a) and f(b). Then there is an x_0 ∈ [a,b] such that f(x_0) = y_0.

(Sketch: f(a) < 0 and f(b) > 0, so some x_0 ∈ [a,b] satisfies f(x_0) = 0.)

Today, we will focus on algorithms for root determination

Today's topics:
- Bisection method & regula falsi
- Convergence analysis of the bisection method
- Newton's method

Corresponding textbook chapters: 3.1 and 3.2

The Bisection Method

The key idea of the bisection method for root determination (1/3)

(Sketch: f(a) < 0, f(c) < 0, f(b) > 0 on an interval with a < c < b.)

At each step we have an interval [a,b] and the values f(a) =: u and f(b) =: v such that uv < 0, i.e., at least one root lies in [a,b]. Next, we construct the midpoint c = (a + b)/2 of the interval [a,b] and compute f(c) =: w. If w = 0 we have already found a root and the algorithm terminates.

The key idea of the bisection method for root determination (2/3)

If w ≠ 0 we compute wu and wv; either wu < 0 (i.e. wv > 0) or wu > 0 (i.e. wv < 0).
- If wu < 0, a root lies in [a,c]; we define b := c and start again.
- If wu > 0, a root lies in [c,b]; we define a := c and start again.

The key idea of the bisection method for root determination (3/3)

The algorithm terminates if the exact position of the root is found at one of the halving points, or if the interval is sufficiently small, e.g. b − a < (1/2)·10^(−6). In this case we take (a + b)/2 as the best approximation of the root.

An illustrative example: f(x) = x^3 − 2 sin(x) on [0.5, 2]

Some computer results with the iterative steps of the bisection method for f(x) = x^3 − 2 sin(x) on [0.5, 2]:

n    c_n        f(c_n)        error
0    1.25        5.52×10^−2   0.75
1    0.875      −0.865        0.375
2    1.0625     −0.548        0.188
3    1.15625    −0.285        9.38×10^−2
4    1.203125   −0.125        4.69×10^−2
...
19   1.2361827  −4.88×10^−6   1.43×10^−6
20   1.2361834  −2.15×10^−6   7.15×10^−7

From these data we see that the convergence towards the real solution seems to be rather slow for this example. How fast is the bisection method in general?
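The table can be reproduced with a short bisection loop (a Python sketch of the method as described on the previous slides; the "error" column is the half-width of the current interval, i.e. the bound |r − c_n| ≤ (b_n − a_n)/2):

```python
import math

def f(x):
    return x**3 - 2.0 * math.sin(x)

a, b = 0.5, 2.0
rows = []
for n in range(21):
    c = 0.5 * (a + b)
    w = f(c)
    rows.append((n, c, w, 0.5 * (b - a)))  # error bound = half interval width
    if w == 0:
        break
    # keep the half-interval whose endpoint values have opposite signs
    if f(a) * w < 0:
        b = c
    else:
        a = c

for n, c, w, err in rows:
    print(f"{n:2d}  {c:.7f}  {w:+.2e}  {err:.2e}")
```

Running this reproduces the slide's values, e.g. c_0 = 1.25 with error bound 0.75 and c_20 ≈ 1.2361834.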

The convergence analysis of the bisection method (1/2)

Suppose f is a continuous function that takes values of opposite sign at the ends of an interval [a_0, b_0]. Then there is a root r in [a_0, b_0] by the intermediate value theorem. If we use the midpoint c_0 := (a_0 + b_0)/2 as our estimate of r, we have

|r − c_0| ≤ (b_0 − a_0)/2.

The convergence analysis of the bisection method (2/2)

Continuing to apply the bisection algorithm and denoting the computed quantities by a_0, b_0, c_0, a_1, b_1, c_1, etc., then by the same reasoning

|r − c_n| ≤ (b_n − a_n)/2 (for all n ≥ 0).

Since the widths of the intervals are halved in each step, we conclude that

|r − c_n| ≤ (b_0 − a_0)/2^(n+1).

... leads to the following theorem

Theorem (Convergence of the Bisection Method): If the bisection method is applied to a continuous function f : [a,b] → R with f(a)f(b) < 0, then after n steps an approximate root will have been computed with error at most (b − a)/2^(n+1).

If an error tolerance has been prescribed in advance, it is possible to determine the number n of steps required in the bisection method upfront. Suppose that we want ε > |r − c_n|; by the bound above it suffices that ε > (b − a)/2^(n+1), and we can determine n by taking logarithms (with any convenient base):

n > (log(b − a) − log(2ε)) / log(2).
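The step-count bound can be evaluated directly (a Python sketch; the interval [0.5, 2] and tolerance 10^(−6) are example values matching the earlier slide, not part of the theorem):

```python
import math

def bisection_steps(a, b, eps):
    # smallest integer n with n > (log(b-a) - log(2*eps)) / log(2)
    n = (math.log(b - a) - math.log(2.0 * eps)) / math.log(2.0)
    return math.floor(n) + 1

print(bisection_steps(0.5, 2.0, 1e-6))  # → 20
```

This agrees with the iteration table: at n = 19 the error bound is 1.43×10^−6 > 10^−6, while at n = 20 it is 7.15×10^−7 < 10^−6.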

Applying the convergence theorem for the bisection method

Example: How many steps of the bisection algorithm are needed to compute a root of f to full machine single precision on a 32-bit word-length computer if a = 16 and b = 17 (with f(a)f(b) < 0)?

The root is between the two binary numbers a = (10000.0)_2 and b = (10001.0)_2. Thus, we already know 5 of the binary digits in the answer. Since we can use only 24 bits altogether, that leaves 19 bits to determine. We want the last bit to be correct, so we want the error |r − c_n| to be less than ε = 2^(−19), or ε = 2^(−20) to be conservative, i.e.

2^(−20) > |r − c_n| = (b − a)/2^(n+1) = 1/2^(n+1) = 2^(−(n+1)).

Taking reciprocals gives 2^(n+1) > 2^20, or n ≥ 20.

Introducing linear speed of convergence

Definition (Linear Speed of Convergence): A sequence {x_n}_(n∈N) exhibits linear (speed of) convergence to a limit x if there is a constant C ∈ [0,1) such that

|x_(n+1) − x| ≤ C |x_n − x| (for all n ≥ 1).

If this inequality holds for all n ∈ N, then

|x_(n+1) − x| ≤ C |x_n − x| ≤ C^2 |x_(n−1) − x| ≤ ... ≤ C^n |x_1 − x|,

and thus it is a consequence of linear (speed of) convergence that

|x_(n+1) − x| ≤ A C^n (0 ≤ C < 1)

holds for some finite number A > 0.

Linear convergence as upper bound for the bisection method

Due to the convergence inequality |c_(n+1) − r| ≤ (b − a)/2^(n+2), we see that the bisection sequence of root estimates {c_n}_(n∈N) fulfills

|x_(n+1) − x| ≤ A C^n (0 ≤ C < 1)

with A := (b − a)/4 and C := 1/2 ∈ [0,1). However, {c_n}_(n∈N) need not obey the defining inequality

|x_(n+1) − x| ≤ C |x_n − x| (for all n ≥ 1)

of linear convergence. Thus, we can only say that the bisection method has at most linear speed of convergence.

Why may the bisection method not have linear speed of convergence?

Classroom Problem: Find an easy example such that the bisection sequence {c_n}_(n∈N) violates the linear-speed inequality |c_(n+1) − r| ≤ C |c_n − r| (for all n ≥ 1).

Some remarks about the bisection method

- The bisection method is the simplest way to solve a non-linear equation f(x) = 0 for x. It arrives at the root by constraining the interval in which the root lies, and it eventually makes the interval quite small.
- The bisection method halves the width of the interval at each step. This allows an exact prediction of how long it takes to find the root within any desired degree of accuracy.
- Root finding by the bisection method thus uses the same idea as the binary search method taught in data structures.
- In the bisection method, not every guess is closer to the root than the previous guess, because the bisection method does not use the nature of the function f.
- Often the bisection method is used to get close to a root before switching to a faster method.

The Regula Falsi

The key idea of the regula falsi (1/4)

The bisection method does not use any information about the function itself (besides some evaluations of the function). The so-called regula falsi (or false position method) is an example showing how to easily include additional information in an algorithm (here, the bisection method) in order to build a better one.

The key idea of the regula falsi is to use the point where the secant line between (a, f(a)) and (b, f(b)) intersects the x-axis, rather than the midpoint of each interval. I.e., the new estimate c for the root is determined as

c = b − f(b) · (a − b)/(f(a) − f(b)) = a − f(a) · (b − a)/(f(b) − f(a)) = (a f(b) − b f(a))/(f(b) − f(a)).

This still retains the main feature of the bisection method, namely to trap a root in a sequence of intervals of decreasing size.
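A minimal sketch of the update in code (Python; the fixed step count and the re-use of the earlier test function f(x) = x^3 − 2 sin(x) are illustrative choices, not from the slides):

```python
import math

def regula_falsi(f, a, b, n_steps=20):
    """False position: replace the midpoint by the secant line's x-intercept."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "endpoints must bracket a root"
    c = a
    for _ in range(n_steps):
        c = (a * fb - b * fa) / (fb - fa)  # zero of the secant line
        fc = f(c)
        if fc == 0:
            break
        if fa * fc < 0:        # root trapped in [a, c]
            b, fb = c, fc
        else:                  # root trapped in [c, b]
            a, fa = c, fc
    return c

root = regula_falsi(lambda x: x**3 - 2.0 * math.sin(x), 0.5, 2.0)
print(root)  # ≈ 1.236
```

On this (convex) example the right endpoint b = 2 is never replaced, which is exactly the stagnation phenomenon discussed on the following slides.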

The key idea of the regula falsi (2/4)

Illustration of the update mechanism of the regula falsi (initial step): the secant line from (a, f(a)) with f(a) < 0 to (b, f(b)) with f(b) > 0 cuts the x-axis at c, where f(c) < 0.

The key idea of the regula falsi (3/4)

Illustration of the update mechanism of the regula falsi (1st step): again the secant line from (a, f(a)) with f(a) < 0 to (b, f(b)) with f(b) > 0 cuts the x-axis at a new point c, where f(c) < 0.

The key idea of the regula falsi (4/4)

Illustration of the update mechanism of the regula falsi (2nd step): secant line on the shrunken interval [a, b] with f(a) < 0 and f(b) > 0.

Does the regula falsi really increase the bisection method's speed?

For some functions, the regula falsi may repeatedly select the same endpoint (like in our example), and the whole process may degrade to linear convergence. This is always the case if f(x) is convex or concave in a subdivision interval [a_k, b_k], i.e., if f''(x) has the same sign in that whole interval. Here, one of the interval boundaries then stays the same for all consecutive times, whereas the other converges linearly towards the root.

Theorem (Super-Linear Convergence Using the Regula Falsi): The bisection method with a variant of the regula falsi has super-linear convergence towards the root, as long as f is not strictly convex or strictly concave in one of the subdivision intervals (i.e., provided that the second derivative has a sign change in every subdivision interval).

We will see that later.

Modification of the regula falsi (1/3)

For example, when the same endpoint is to be retained twice, a modified regula falsi may use

c_n := (a_n f(b_n) − b_n f(a_n)) / (f(b_n) − f(a_n))     if f(a_n) f(b_n) < 0,
c_n := (2 a_n f(b_n) − b_n f(a_n)) / (2 f(b_n) − f(a_n))   if f(a_n) f(b_n) > 0.

So rather than selecting points on the same side of the root as the normal regula falsi, this modified method changes the slope of the straight line. This produces estimates for the root that are closer to it than those obtained by the normal regula falsi method.
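One common realization of this endpoint-halving idea is the Illinois variant of false position: whenever the same endpoint survives two steps in a row, the stored function value at that endpoint is halved, which tilts the secant line. This is an interpretation of how the slides' modified formula is applied, not taken verbatim from them; the test function is again x^3 − 2 sin(x):

```python
import math

def modified_regula_falsi(f, a, b, tol=1e-12, max_steps=100):
    """Illinois-style modified false position: if the same endpoint is
    retained twice in a row, halve its stored function value, which
    changes the slope of the secant line and breaks the stagnation."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "endpoints must bracket a root"
    side = 0                   # which endpoint was replaced last time
    c = a
    for _ in range(max_steps):
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc
            if side == -1:     # a retained twice: flatten its value
                fa *= 0.5
            side = -1
        else:
            a, fa = c, fc
            if side == +1:     # b retained twice: flatten its value
                fb *= 0.5
            side = +1
    return c

root = modified_regula_falsi(lambda x: x**3 - 2.0 * math.sin(x), 0.5, 2.0)
print(root)  # ≈ 1.23618
```

Unlike the plain regula falsi on this convex example, both endpoints now move, and the iteration converges super-linearly.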

Modification of the regula falsi (2/3)

Illustration of the modified regula falsi (normal initial step): the secant line from (a, f(a)) with f(a) < 0 to (b, f(b)) with f(b) > 0 cuts the x-axis at c, where f(c) < 0.

Modification of the regula falsi (3/3)

Illustration of the modified regula falsi (modified 1st step): the secant line now runs through (b, 0.5 f(b)) instead of (b, f(b)), with f(a) < 0 and f(b) > 0, and cuts the x-axis at a new point c between a and b.

Newton's method

The key idea of Newton's method (1/2)

In Newton's method, it is assumed that the function f is differentiable. This implies that the graph of f has a definite slope at each point and hence a unique tangent line. At a certain point (x_0, f(x_0)) on the graph of f, the tangent is a rather good approximation of the function in the vicinity of that point. Analytically, this means that the linear function

l(x) = f'(x_0)(x − x_0) + f(x_0)

is close to the given function f near x_0; at x_0 the two functions f and l agree.

In Newton's method, we take the zero of the linear approximation l as an approximation of the root of the non-linear function f. This zero is easily found:

x_1 = x_0 − f(x_0)/f'(x_0).

The key idea of Newton's method (2/2)

Thus, starting at a point x_0, we pass to a new point x_1 obtained from the preceding formula. Naturally, this procedure can be repeated (iterated) to produce a sequence of points:

x_2 = x_1 − f(x_1)/f'(x_1), x_3 = x_2 − f(x_2)/f'(x_2),

and so on. Under favorable conditions, the sequence of points approaches a root of f.
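The iteration is a few lines in code (a Python sketch; the derivative is supplied analytically, and the test equation x^3 − 2 sin(x) = 0 with starting point 1.25 is re-used from the bisection example rather than taken from this slide):

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_steps=50):
    """Newton's method: repeatedly replace x by the zero of the tangent."""
    x = x0
    for _ in range(max_steps):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)   # zero of l(x) = f'(x_n)(x - x_n) + f(x_n)
    return x

f = lambda x: x**3 - 2.0 * math.sin(x)
fp = lambda x: 3.0 * x**2 - 2.0 * math.cos(x)
r = newton(f, fp, 1.25)
print(r)  # ≈ 1.23618, reached after only a handful of steps
```

Compare this with the 20 bisection steps needed earlier for the same accuracy; under favorable conditions Newton's method converges quadratically.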

Illustration of Newton's method (1/4)

Starting at the local minimum (a, f(a)) is an unfavorable condition: the tangent line there is horizontal and never intersects the x-axis, so no Newton step can be computed.

Illustration of Newton's method (2/4)

Perturbing the initial point gives a better starting step towards the root, ... (sketch: the tangent line at x_0 near a intersects the x-axis at x_1 inside [a, b])

Illustration of Newton's method (3/4)

... but still in this case it leads to an unfavorable situation: (sketch: the tangent line at x_1 sends the next iterate x_2 away from the root in [a, b])

Illustration of Newton's method (4/4)

Classroom Problem: Find one or more starting points for Newton's method that lead to convergence towards the root in [a, b].

Illustration of Newton's method with another function

Classroom Problem: Apply Newton's method graphically to the function f(x) = x^3 − x + 1 with x_0 = 1. Compute the Newton iteration points x_1 and x_2 analytically, using the Newton update formula

x_(n+1) = x_n − f(x_n)/f'(x_n).

Summary & Outlook

Major concepts covered today (1/3): the bisection method

For finding a root r of a given continuous function f in an interval [a,b], n steps of the bisection method produce a sequence of intervals [a,b] = [a_0, b_0], [a_1, b_1], [a_2, b_2], ..., [a_n, b_n], each containing the desired root of the function.

The mid-points c_0, c_1, c_2, ..., c_n of these intervals form a sequence of approximations to the root, namely c_k = (a_k + b_k)/2.

On each interval [a_k, b_k], the error e_k := |r − c_k| obeys the inequality e_k ≤ (b_k − a_k)/2, and after n steps we have e_n ≤ (b_0 − a_0)/2^(n+1).

For an error tolerance ε such that e_n < ε, n steps are needed, where n satisfies the inequality n > (log(b − a) − log(2ε))/log(2).

Major concepts covered today (2/3): regula falsi

For the k-th step of the regula falsi over the interval [a_k, b_k], let

c_k := (a_k f(b_k) − b_k f(a_k))/(f(b_k) − f(a_k)).

If f(a_k) f(c_k) > 0, set a_(k+1) = c_k and b_(k+1) = b_k; otherwise, set a_(k+1) = a_k and b_(k+1) = c_k.

A modification of the regula falsi can be obtained by changing the slope of the secant, e.g., via the update formula

c_k := (a_k f(b_k) − b_k f(a_k))/(f(b_k) − f(a_k))     if f(a_k) f(b_k) < 0,
c_k := (2 a_k f(b_k) − b_k f(a_k))/(2 f(b_k) − f(a_k))   if f(a_k) f(b_k) > 0.

Major concepts covered today (3/3): Newton's method

For finding a root of a continuously differentiable function f, Newton's method is given by

x_(n+1) = x_n − f(x_n)/f'(x_n) (n ≥ 0).

It requires a given initial value x_0 and two function evaluations (for f and f') at each step.

Preparation for the next lecture

Please prepare these short exercises for the next lecture:

1. Page 123, exercise 1: Find where the graphs of y = 3x and y = exp(x) intersect by finding solutions of exp(x) − 3x = 0 correct to four decimal digits with the bisection method.

2. Page 123, exercise 1 (reformulated): Find where the graphs of y = 3x and y = exp(x) intersect by finding solutions of exp(x) − 3x = 0 correct to four decimal digits with Newton's method.

3. Computer exercise: Write a MATLAB program that solves exp(x) − 3x = 0 with Newton's method and plot the resulting error over the number of iterations (convergence plot).