Numerical differentiation


Paul Seidel
18.01 Lecture Notes, Fall 2011

Suppose that we have a function f(x) which is not given by a formula, but as the result of some measurement or simulation (computer experiment). We can't get an exact formula for the derivative

    f'(x) = lim_{h→0} (f(x+h) − f(x))/h,

since that would involve an infinite number of computations or measurements, with smaller and smaller h. However, we can get an approximate formula for the derivative f'(x) at some point x, simply by fixing one small h:

    f'(x) ≈ (f(x+h) − f(x))/h.

In principle this appears to be a straightforward process: by the definition of the derivative, the approximation gets better as h gets smaller, and that's that.

Warning. However, if we are working with finite precision, taking very small h can be an issue, because we are subtracting two very close numbers f(x) and f(x+h) from each other, and then multiplying the result by the very large number 1/h. This will magnify errors (you can see that even in your calculator or computer, where rounding errors appear for very small choices of h). So in many applications it may be in your interest not to take h too small (since you never know f(x) to arbitrary precision if it's the result of an experiment). For this and other reasons, we want to take a second look at the approximate formula above, and ask how close it gets to the real value of the derivative. Let's first explore the issue experimentally:
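The experiment is easy to run on a computer. Here is a minimal Python sketch (the helper name `forward_diff` and the test case f(x) = √x at x = 1 are choices made here, not part of the notes):

```python
import math

def forward_diff(f, x, h):
    """One-sided difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Approximate the derivative of sqrt(x) at x = 1 (exact value 1/2)
# for a sequence of shrinking step sizes h; also print the error.
for h in [0.1, 0.01, 0.001]:
    q = forward_diff(math.sqrt, 1.0, h)
    print(h, q, q - 0.5)
```

Shrinking h improves the approximation at first; in floating-point arithmetic, taking h extremely small eventually makes things worse, for the reason given in the warning above.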

Example. Here's an example of approximating f'(1) = 1/2 for f(x) = √x:

    h         difference quotient    error
    0.1       0.4880884817           −0.0119115183
    0.01      0.4987562112           −0.0012437888
    0.001     0.4998750625           −0.0001249375
    0.0001    0.4999875006           −0.0000124994
    0.00001   0.5000150050           +0.0000150050

(In the last line, rounding error has become visible.) We see that the error is ≈ −h/8 for small h. This is of course not surprising, this being the kind of analysis that leads one to discover the quadratic approximation. And indeed, we can use quadratic approximation to see that in general

    f(x+h) ≈ f(x) + h·f'(x) + (h²/2)·f''(x),
    (f(x+h) − f(x))/h ≈ f'(x) + (h/2)·f''(x).

This is only an approximate formula, but it gives one a good idea of the behaviour of the error, which in particular is approximately linear in h (in the example above, f''(1) = −1/4, so the predicted error is (h/2)·(−1/4) = −h/8, and our analysis explains the experimental data well).

Discussion. Linear error in h is not very useful for applications. If f(x) is known with an error of 10⁻⁶, and f''(x) is of order of magnitude approximately 1, we can never determine f'(x) with more than 10⁻³ accuracy: if we take h ≤ 10⁻³, the errors in determining f(x) will swamp the result.

Problem 1 (Solution given at the end). Consider the simple case of computing the derivative of f(x) = sin(x) at x = 0 by a (non-symmetric) difference quotient. Determine the error for h = 0.1, h = 0.01, h = 0.001. It's not linear in h. What's happening instead? Derive an approximate formula which describes the error.

We can use our knowledge of Taylor approximations to find a better way of approximately computing derivatives. Consider the above error analysis for h and −h:

    (f(x+h) − f(x))/h ≈ f'(x) + (h/2)·f''(x),
    (f(x) − f(x−h))/h ≈ f'(x) − (h/2)·f''(x).

We could average, and the errors would cancel out. This suggests that the average, the symmetric difference quotient

    f'(x) ≈ (f(x+h) − f(x−h))/(2h),

is a better way to proceed. This is graphically intuitive (in terms of secant lines and tangent lines), and its theoretical basis is sound:

Fact. If f is differentiable at x, we have

    f'(x) = lim_{h→0} (1/2)·[(f(x+h) − f(x))/h + (f(x) − f(x−h))/h] = lim_{h→0} (f(x+h) − f(x−h))/(2h).

Also, experimentally it works quite well:

Example. Same example as before, f'(1) = 1/2 for f(x) = √x:

    h       symmetric difference quotient    error
    0.1     0.5006277506                     +0.0006277506
    0.01    0.5000062503                     +0.0000062503

Now, can we analyze the error as before? Quadratic approximation would seem to suggest that it's zero, but that's not the right interpretation: quadratic approximation just isn't fine enough to show the first significant term. Instead, we have to use cubic approximation:

    f(x+h) ≈ f(x) + h·f'(x) + (h²/2)·f''(x) + (h³/6)·f'''(x),
    f(x−h) ≈ f(x) − h·f'(x) + (h²/2)·f''(x) − (h³/6)·f'''(x),
    f(x+h) − f(x−h) ≈ 2h·f'(x) + (h³/3)·f'''(x),
    (f(x+h) − f(x−h))/(2h) ≈ f'(x) + (h²/6)·f'''(x).

Conclusion. The error in the symmetric difference quotient is approximately quadratic in h.

If one knows more than two values of f(x), one can give formulae that work even better, at least for functions that are well-behaved (differentiable to high order).

Problem 2 (Solution given at the end). Suppose that we know f(x−2h), f(x−h), f(x+h), f(x+2h). Of course, we can use that to form the two symmetric difference quotients

    f'(x) ≈ (f(x+h) − f(x−h))/(2h),    f'(x) ≈ (f(x+2h) − f(x−2h))/(4h).

But is there a way of combining them to make the errors cancel each other out, and get a better approximation?
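The quadratic error behaviour of the symmetric difference quotient is easy to check numerically; a minimal sketch (the name `symmetric_diff` and the test case are choices made here):

```python
import math

def symmetric_diff(f, x, h):
    """Symmetric difference quotient (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# For f(x) = sqrt(x) at x = 1: since the error is roughly quadratic in h,
# dividing h by 10 should divide the error by about 100.
for h in [0.1, 0.01]:
    print(h, symmetric_diff(math.sqrt, 1.0, h) - 0.5)
```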

Similar ideas apply to higher derivatives, which are even more sensitive to errors. For instance, we can take the formulae

    f'(x + h/2) ≈ (f(x+h) − f(x))/h,    f'(x − h/2) ≈ (f(x) − f(x−h))/h,

and derive an approximate formula for the second derivative:

    f''(x) ≈ (f'(x + h/2) − f'(x − h/2))/h ≈ (f(x+h) − 2f(x) + f(x−h))/h²

(clearly, one needs at least 3 different values of f to get a grip on f''(x), so this is the simplest possible kind of formula).

We finish by mentioning one important application, which is to numerically (approximately) solve differential equations. Suppose we want to program a computer to give us the approximate solution of the following equation:

    dy/dx = −y²,    y(0) = 1.

We replace the left side by a difference quotient with h = 0.1, and get

    y(x + 0.1) ≈ y(x) − 0.1·y(x)².

This is called a discretization of the original differential equation: it introduces a finite step-size h instead of the original real variable x. Given y(0), the discretized formula iteratively determines approximate values for y(0.1), y(0.2), and so on. This is usually called Euler's method; let's see how well it works:

            approximation        true solution y = 1/(1+x)
    y(0)    1
    y(0.1)  0.9                  0.909090909090909
    y(0.2)  0.819000000000000    0.833333333333333
    y(0.3)  0.751923900000000    0.769230769230769
    y(0.4)  0.695384944860879    0.714285714285714

Errors accumulate fairly rapidly. We could be smarter instead, and use a symmetric difference quotient:

    y(x + 0.1) ≈ y(x − 0.1) − 0.2·y(x)²

(this doesn't work for x = 0, so we need one step of Euler's method to get us started). This is a symmetric variant of Euler's method, sometimes called the leap-frog method. Let's see how that goes (leap-frog takes over at the dashed line):

            approximation        true solution y = 1/(1+x)
    y(0)    1
    y(0.1)  0.9                  0.909090909090909
    ------------------------------------------------
    y(0.2)  0.838                0.833333333333333
    y(0.3)  0.759551200000000    0.769230769230769
    y(0.4)  0.722616394915712    0.714285714285714

And indeed, it's quite a bit better! You'll learn more about these issues in 18.03 (or numerical analysis classes).
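Both tables can be reproduced with a few lines of code. A sketch (the function names are choices made here; the leap-frog version is seeded with one Euler step, as described above):

```python
# dy/dx = -y^2, y(0) = 1, step size h = 0.1; true solution y = 1/(1+x).

def euler(h=0.1, steps=4):
    """Euler's method: y(x+h) is approximated by y(x) + h*y'(x)."""
    ys = [1.0]
    for _ in range(steps):
        y = ys[-1]
        ys.append(y + h * (-y * y))
    return ys

def leapfrog(h=0.1, steps=4):
    """Leap-frog: y(x+h) approximated by y(x-h) + 2h*y'(x),
    seeded with a single Euler step for y(h)."""
    ys = [1.0, 1.0 + h * (-1.0 * 1.0)]   # y(0), then an Euler step
    for _ in range(steps - 1):
        y_prev, y_cur = ys[-2], ys[-1]
        ys.append(y_prev + 2 * h * (-y_cur * y_cur))
    return ys

print(euler())     # approximations y(0), y(0.1), ..., y(0.4)
print(leapfrog())
```

Comparing the last entries of each list against 1/1.4 shows the leap-frog errors are a good deal smaller, matching the tables.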

Solution to Problem 1. The point is that in this particular example, f''(x) happens to vanish exactly at the point under consideration. So we have to use cubic approximation anyway:

    f(x+h) ≈ f(x) + h·f'(x) + (h³/6)·f'''(x),
    (f(x+h) − f(x))/h ≈ f'(x) + (h²/6)·f'''(x).

In our case f'''(0) = −cos(0) = −1, so the approximate error is −h²/6.

Solution to Problem 2. We use our previous error analysis:

    (f(x+h) − f(x−h))/(2h) ≈ f'(x) + (h²/6)·f'''(x),
    (f(x+2h) − f(x−2h))/(4h) ≈ f'(x) + ((2h)²/6)·f'''(x).

We can cancel out the two error terms by taking a weighted average:

    4·(f(x+h) − f(x−h))/(2h) − (f(x+2h) − f(x−2h))/(4h) ≈ 3·f'(x),

which leads to the four-point formula

    f'(x) ≈ (−f(x+2h) + 8·f(x+h) − 8·f(x−h) + f(x−2h))/(12h).

The problem did not require one to actually find the error for this formula. However, if one wanted to do that, a fifth-order Taylor approximation (and a lot of cancellation) would show that

    f'(x) ≈ (four-point formula) + (h⁴/30)·f⁽⁵⁾(x).
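Both solutions can be checked numerically. A sketch (helper names chosen here; the four-point formula is the one derived above, tried on f(x) = √x at x = 1 as an illustration):

```python
import math

# Check of Solution 1: one-sided quotient for sin at x = 0.
# The error should behave like -h^2/6, not linearly in h.
def sin_quotient_error(h):
    return math.sin(h) / h - 1.0   # exact derivative is cos(0) = 1

# Check of Solution 2: the four-point formula, with error of order h^4.
def four_point(f, x, h):
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

for h in [0.1, 0.01]:
    print(h, sin_quotient_error(h), -h * h / 6)

# Already at h = 0.1 the four-point error is of size about 1e-5.
print(four_point(math.sqrt, 1.0, 0.1) - 0.5)
```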