Numerical differentiation

Paul Seidel, 18.01 Lecture Notes, Fall 2011

Suppose that we have a function f(x) which is not given by a formula, but as the result of some measurement or simulation (computer experiment). We can't get an exact formula for the derivative

    f'(x) = lim_{Δx → 0} (f(x + Δx) − f(x)) / Δx,

since that would involve an infinite number of computations or measurements, with smaller and smaller Δx. However, we can get an approximate formula for the derivative f'(x) at some point x, simply by fixing one small Δx:

    f'(x) ≈ (f(x + Δx) − f(x)) / Δx.

In principle, this appears to be a straightforward process: by the definition of the derivative, the approximation gets better as Δx gets smaller, and that's that.

Warning. However, if we are working with finite precision, taking Δx very small can be an issue, because we are subtracting the two very close numbers f(x + Δx) and f(x) from each other, and then multiplying the result by the very large number 1/Δx. This magnifies errors (you can see this even on your calculator or computer, where rounding errors appear for very small choices of Δx). So in many applications it may be in your interest not to take Δx too small (since you never know f(x) to arbitrary precision if it's the result of an experiment). For this and other reasons, we want to take a second look at the approximate formula above, and ask how close it gets to the real value of the derivative. Let's first explore the issue experimentally.
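To make the warning concrete, here is a minimal Python sketch (an illustration added to these notes, not part of the original) comparing a moderate Δx with an extremely small one for f(x) = √x at x = 1, where f'(1) = 1/2 exactly:

```python
import math

def forward_diff(f, x, dx):
    """One-sided difference quotient: (f(x + dx) - f(x)) / dx."""
    return (f(x + dx) - f(x)) / dx

exact = 0.5  # f'(1) for f(x) = sqrt(x)

# Moderate step: the Taylor (truncation) error dominates.
moderate = forward_diff(math.sqrt, 1.0, 1e-5)

# Extremely small step: subtracting two nearly equal doubles and then
# dividing by a tiny number magnifies rounding error instead.
tiny = forward_diff(math.sqrt, 1.0, 1e-13)

print(abs(moderate - exact))
print(abs(tiny - exact))  # larger than the error above, despite the smaller dx
```

With double precision the error at Δx = 10⁻¹³ comes out orders of magnitude worse than at Δx = 10⁻⁵, exactly the effect the warning describes.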
Example. Here's an example of approximating f'(1) = 1/2 for f(x) = √x:

    Δx        difference quotient   error
    0.1       0.488088481702        −0.011911518298
    0.01      0.498756211209        −0.001243788791
    0.001     0.499875062461        −0.000124937539
    0.0001    0.499987500625        −0.000012499375
    −0.0001   0.500012500625        +0.000012500625

We see that the error is approximately −Δx/8 for small Δx. This is of course not surprising, being the kind of analysis that leads one to discover the quadratic approximation. And indeed, we can use quadratic approximation to see that in general

    f(x + Δx) ≈ f(x) + f'(x) Δx + (Δx)²/2 · f''(x)
    ⟹ (f(x + Δx) − f(x)) / Δx ≈ f'(x) + (Δx/2) · f''(x).

This is only an approximate formula, but it gives one a good idea of the behaviour of the error, which in particular is approximately linear in Δx (in the example above, f''(1) = −1/4, so our analysis predicts an error of (Δx/2) · f''(1) = −Δx/8, which explains the experimental data well).

Discussion. Linear error in Δx is not very useful for applications. If f(x) is known with an error of 10⁻⁶, and f''(x) is of order of magnitude approximately 1, we can never determine f'(x) with more than 10⁻³ accuracy: if we take Δx ≤ 10⁻³, the errors in determining f(x) will swamp the result.

Problem 1. (Solution given at the end.) Consider the simple case of computing the derivative of f(x) = sin(x) at x = 0 by a (non-symmetric) difference quotient. Determine the error for Δx = 0.1, Δx = 0.01, Δx = 0.001. It's not linear in Δx. What's happening instead? Derive an approximate formula which describes the error.

We can use our knowledge of Taylor approximations to find a better way of approximately computing derivatives. Consider the above error analysis for Δx and −Δx:

    (f(x + Δx) − f(x)) / Δx ≈ f'(x) + (Δx/2) · f''(x),
    (f(x) − f(x − Δx)) / Δx ≈ f'(x) − (Δx/2) · f''(x).

We could average, and the errors would cancel out. This suggests that the average, the symmetric difference quotient

    (f(x + Δx) − f(x − Δx)) / (2Δx),

is a better way to proceed. This is graphically intuitive (in terms of secant lines and tangent lines), and its theoretical basis is sound:
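The table above is easy to regenerate. The following short Python script (an added illustration, not from the original notes) recomputes the difference quotients and compares each error against the predicted value −Δx/8:

```python
import math

f, exact = math.sqrt, 0.5  # f(x) = sqrt(x), f'(1) = 1/2

for dx in [0.1, 0.01, 0.001, 0.0001, -0.0001]:
    dq = (f(1 + dx) - f(1)) / dx        # one-sided difference quotient
    error = dq - exact
    predicted = -dx / 8                 # (dx/2) * f''(1), with f''(1) = -1/4
    print(f"{dx:>8}  {dq:.12f}  {error:+.12f}  predicted {predicted:+.12f}")
```

The agreement between the last two columns improves as Δx shrinks, as the quadratic-approximation analysis below predicts.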
Fact. If f is differentiable at x, we have

    f'(x) = lim_{Δx → 0} (1/2) [ (f(x + Δx) − f(x))/Δx + (f(x) − f(x − Δx))/Δx ]
          = lim_{Δx → 0} (f(x + Δx) − f(x − Δx)) / (2Δx).

Also, experimentally it works quite well:

Example. Same example as before, f'(1) = 1/2 for f(x) = √x:

    Δx     symmetric difference quotient   error
    0.1    0.500627750598                  +0.000627750598
    0.01   0.500006250273                  +0.000006250273

Now, can we analyze the error as before? Quadratic approximation would seem to suggest that it's zero, but that's not the right interpretation: quadratic approximation just isn't fine enough to show the first significant term. Instead, we have to use cubic approximation:

    f(x + Δx) ≈ f(x) + f'(x) Δx + (Δx)²/2 · f''(x) + (Δx)³/6 · f'''(x),
    f(x − Δx) ≈ f(x) − f'(x) Δx + (Δx)²/2 · f''(x) − (Δx)³/6 · f'''(x)
    ⟹ f(x + Δx) − f(x − Δx) ≈ 2 f'(x) Δx + (Δx)³/3 · f'''(x)
    ⟹ (f(x + Δx) − f(x − Δx)) / (2Δx) ≈ f'(x) + (Δx)²/6 · f'''(x).

Conclusion. The error in the symmetric difference quotient is approximately quadratic in Δx.

If one knows more than two values of f(x), one can give formulae that work even better, at least for functions that are well-behaved (differentiable to high order).

Problem 2. (Solution given at the end.) Suppose that we know f(x − 2Δx), f(x − Δx), f(x + Δx), f(x + 2Δx). Of course, we can use these to form the two symmetric difference quotients

    f'(x) ≈ (f(x + Δx) − f(x − Δx)) / (2Δx),
    f'(x) ≈ (f(x + 2Δx) − f(x − 2Δx)) / (4Δx).

But is there a way of combining them to make the errors cancel each other out, and get a better approximation?
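As a quick numerical check that the error really is quadratic, here is a small Python experiment (mine, not from the notes): shrinking Δx by a factor of 10 should shrink the symmetric-quotient error by a factor of roughly 100.

```python
import math

def symmetric_dq(f, x, dx):
    """Symmetric difference quotient: (f(x + dx) - f(x - dx)) / (2 dx)."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

exact = 0.5  # f'(1) for f(x) = sqrt(x)
e1 = symmetric_dq(math.sqrt, 1.0, 0.1) - exact
e2 = symmetric_dq(math.sqrt, 1.0, 0.01) - exact

print(e1, e2, e1 / e2)  # the ratio is close to 100 = 10**2
```

For f(x) = √x one has f'''(1) = 3/8, so the predicted error (Δx)²/6 · f'''(1) = (Δx)²/16 matches the observed e2 ≈ 6.25 · 10⁻⁶.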
Similar ideas apply to higher derivatives, which are even more sensitive to errors. For instance, we can take the formulae

    f'(x + Δx/2) ≈ (f(x + Δx) − f(x)) / Δx,
    f'(x − Δx/2) ≈ (f(x) − f(x − Δx)) / Δx,

and derive an approximate formula for the second derivative:

    f''(x) ≈ (f'(x + Δx/2) − f'(x − Δx/2)) / Δx ≈ (f(x + Δx) − 2f(x) + f(x − Δx)) / (Δx)²

(clearly, one needs at least 3 different values of f to get a grip on f''(x), so this is the simplest possible kind of formula).

We finish by mentioning one important application, which is to numerically (approximately) solve differential equations. Suppose we want to program a computer to give us the approximate solution of the following equation:

    dy/dx = −y²,   y(0) = 1.

We replace the left side by a difference quotient with Δx = 0.1, and get

    y(x + 0.1) ≈ y(x) − 0.1 · y(x)².

This is called a discretization of the original differential equation: it introduces a finite step size instead of the original real variable x. Given y(0), the discretized formula iteratively determines approximate values for y(0.1), y(0.2), and so on. This is usually called Euler's method; let's see how well it works:

             approximation        true solution y = 1/(1 + x)
    y(0)     1                    1
    y(0.1)   0.9                  0.909090909090909
    y(0.2)   0.819000000000000    0.833333333333333
    y(0.3)   0.751923900000000    0.769230769230769
    y(0.4)   0.695384944860879    0.714285714285714

Errors accumulate fairly rapidly. We could be smarter instead, and use a symmetric difference quotient:

    y(x + 0.1) ≈ y(x − 0.1) − 0.2 · y(x)²

(this doesn't work for x = 0, so we need one step of Euler's method to get us started). This is a symmetric variant of Euler's method, sometimes called the leap-frog method. Let's see how that goes (leap-frog takes over at the dashed line):

             approximation        true solution y = 1/(1 + x)
    y(0)     1                    1
    y(0.1)   0.9                  0.909090909090909
    ------------------------------------------------
    y(0.2)   0.838000000000000    0.833333333333333
    y(0.3)   0.759551200000000    0.769230769230769
    y(0.4)   0.722616394915712    0.714285714285714

And indeed, it's quite a bit better! You'll learn more about these issues in 18.03 (or in numerical analysis classes).
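The two tables can be reproduced with a few lines of Python (a sketch of mine, not code from the notes), applying Euler's method and the leap-frog method to dy/dx = −y² with step h = 0.1:

```python
def rhs(y):
    """Right-hand side of dy/dx = -y**2 (exact solution y = 1/(1 + x))."""
    return -y * y

h = 0.1

# Euler's method: y(x + h) is approximated by y(x) + h * rhs(y(x)).
euler = [1.0]
for _ in range(4):
    euler.append(euler[-1] + h * rhs(euler[-1]))

# Leap-frog: y(x + h) is approximated by y(x - h) + 2h * rhs(y(x)),
# started with a single Euler step for y(h).
leap = [1.0, 1.0 + h * rhs(1.0)]
for _ in range(3):
    leap.append(leap[-2] + 2 * h * rhs(leap[-1]))

for i in range(5):
    true = 1.0 / (1.0 + i * h)
    print(f"y({i*h:.1f})  euler={euler[i]:.15f}  "
          f"leapfrog={leap[i]:.15f}  true={true:.15f}")
```

At x = 0.4 the leap-frog error is about 0.008, versus about 0.019 for plain Euler, which quantifies the "quite a bit better" observation above.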
Solution to Problem 1. The point is that, in this particular example, f''(x) happens to vanish exactly at the point under consideration. So we have to use cubic approximation anyway:

    f(x + Δx) ≈ f(x) + f'(x) Δx + (Δx)³/6 · f'''(x)
    ⟹ (f(x + Δx) − f(x)) / Δx ≈ f'(x) + (Δx)²/6 · f'''(x).

In our case, f'''(0) = −cos(0) = −1, so the approximate error is −(Δx)²/6.

Solution to Problem 2. We use our previous error analysis:

    (f(x + Δx) − f(x − Δx)) / (2Δx) ≈ f'(x) + (Δx)²/6 · f'''(x),
    (f(x + 2Δx) − f(x − 2Δx)) / (4Δx) ≈ f'(x) + 4(Δx)²/6 · f'''(x).

We can cancel out the two error terms by taking a weighted average:

    4 · (f(x + Δx) − f(x − Δx)) / (2Δx) − (f(x + 2Δx) − f(x − 2Δx)) / (4Δx) ≈ 3 f'(x),

which leads to the four-point formula

    f'(x) ≈ (−f(x + 2Δx) + 8 f(x + Δx) − 8 f(x − Δx) + f(x − 2Δx)) / (12Δx).

The problem did not require one to actually find the error for this formula. However, if one wanted to do that, a fifth-order Taylor approximation (and a lot of cancellation) would show that

    f'(x) ≈ four-point formula + (Δx)⁴/30 · f⁽⁵⁾(x).
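Both solutions are easy to verify numerically. The Python snippet below (an added check, not part of the notes) confirms the −(Δx)²/6 error law for sin at 0, and the high accuracy of the four-point formula for √x at 1:

```python
import math

# Problem 1: (sin(0 + dx) - sin(0)) / dx differs from 1 by about -dx**2 / 6.
dx = 0.01
error = math.sin(dx) / dx - 1.0
print(error, -dx**2 / 6)  # the two values agree closely

# Problem 2: four-point formula for f'(x).
def four_point(f, x, dx):
    return (-f(x + 2*dx) + 8*f(x + dx) - 8*f(x - dx) + f(x - 2*dx)) / (12*dx)

for step in [0.1, 0.01]:
    print(step, four_point(math.sqrt, 1.0, step) - 0.5)
```

Shrinking Δx by a factor of 10 shrinks the four-point error by roughly 10⁴, as the (Δx)⁴/30 · f⁽⁵⁾(x) error term predicts.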