Lecture 5: Function Approximation: Taylor Series
MAR514
Geoffrey Cowles
Department of Fisheries Oceanography
School for Marine Science and Technology
University of Massachusetts-Dartmouth
Better Approximations

Recall linearization: the goal there was to construct a simpler function that matches both the value and the first derivative of a function at a given point a. That is already a step up from an even simpler approximation, which matches only the value: g(x) = f(a). The linearization gives us no guarantee about the second derivative. We can construct a better approximation if we match that as well, and a better one still if we also match the third derivative. For approximating a function near x = 0, the general form of the approximating function we will use is a polynomial:

p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...   (1)
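As an illustrative sketch (not from the slides), we can compare the value-only approximation g(x) = f(a) against the linearization for a concrete choice of function; here f(x) = e^x and the evaluation point x = 0.1 are my own choices:

```python
import math

def f(x):
    return math.exp(x)  # example function: f(x) = e^x, so f'(x) = e^x too

a = 0.0   # expansion point
x = 0.1   # evaluate the approximations near a

constant = f(a)                  # g(x) = f(a): matches only the value
linear = f(a) + f(a) * (x - a)   # linearization: f'(a) = f(a) for e^x

print(abs(f(x) - constant))  # error of the constant approximation
print(abs(f(x) - linear))    # much smaller: we matched the slope too
```

Matching one more derivative shrinks the error near a considerably, which is the pattern the rest of the lecture generalizes.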
Sequence of Better Approximations

[Figure: a function plotted together with its successive polynomial approximations near x = 0]
How to Determine the Coefficients

By matching derivatives of increasingly higher order at the point x = 0, we can pin down the polynomial coefficients a_n as far as we want to go.

We want p(0) = f(0):  a_0 = f(0)
We want p'(0) = 0 + a_1 + 0 + 0 + ... = f'(0):  a_1 = f'(0)
We want p''(0) = 0 + 0 + 2a_2 + 0 + ... = f''(0):  a_2 = f''(0) / 2
We want p'''(0) = 0 + 0 + 0 + 6a_3 + ... = f'''(0):  a_3 = f'''(0) / 6
etc.
Taylor Series

This sequence of improved approximations obtained by matching higher-order derivatives can be written as an infinite series known as the Taylor series:

p_k(x) = f(0) + f'(0) x + f''(0) x^2 / 2! + f'''(0) x^3 / 3! + ... + f^(k)(0) x^k / k!   (2)

What is that k!? The exclamation mark denotes a factorial: k! is the product of all positive integers less than or equal to k.

Example: 5! = 1 · 2 · 3 · 4 · 5 = 120   (3)

Don't forget that the denominator is a factorial when assembling your Taylor series.
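A minimal sketch of this partial sum for f(x) = e^x, where every derivative at 0 equals 1, so the series reduces to sum of x^n / n! (the function name here is my own):

```python
import math

def taylor_exp(x, k):
    """Partial Taylor sum of e^x about 0: sum of x**n / n! for n = 0..k."""
    return sum(x**n / math.factorial(n) for n in range(k + 1))

# Keeping more terms moves the partial sum toward the true value:
print(taylor_exp(1.0, 2))   # 1 + 1 + 1/2 = 2.5
print(taylor_exp(1.0, 8))   # close to e = 2.71828...
```

Note the `math.factorial(n)` in the denominator, exactly as in equation (2).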
Taylor Series: Cosine!

What is the Taylor series approximation of the cosine function near x = 0? The first few derivatives are:

f(x) = cos(x), f'(x) = -sin(x), f''(x) = -cos(x), f'''(x) = sin(x), ...

Note three things:
- The sign of the derivative flips every two derivatives.
- Odd derivatives are all sine functions and thus evaluate to 0 at x = 0.
- Even derivatives are cosine functions and thus evaluate to ±1 at x = 0.

The final expansion:

p_k(x) = 1 - x^2 / 2! + x^4 / 4! - ... + (-1)^k x^(2k) / (2k)!   (4)

Very close to x = 0, we have cos(x) ≈ 1. This is a very common mathematical trick known as the small-angle approximation. A different approximation (sin(x) ≈ x) is used for sine.
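A quick sketch of equation (4) as code, checking the partial sum against the library cosine (function name is my own):

```python
import math

def taylor_cos(x, k):
    """Partial Taylor sum of cos(x) about 0: sum of (-1)^n x^(2n) / (2n)!."""
    return sum((-1)**n * x**(2 * n) / math.factorial(2 * n)
               for n in range(k + 1))

x = 0.5
print(taylor_cos(x, 3), math.cos(x))  # agree to several digits already
print(abs(math.cos(0.05) - 1.0))      # tiny: the small-angle approximation
```

Because only even powers survive, very few terms give high accuracy near x = 0.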
What About Approximations Near x = a?

This is known as the Taylor series generated by f at x = a. It reduces to the expression already given when a = 0:

p_k(x) = f(a) + f'(a)(x - a) + f''(a)(x - a)^2 / 2! + f'''(a)(x - a)^3 / 3! + ... + f^(k)(a)(x - a)^k / k!

Immediately, you should recognize that if we only match the value (the zeroth derivative) and the first derivative, we recover our formula for linearization.
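A sketch of the general formula about a point a ≠ 0, using f(x) = sin(x), whose derivatives cycle through sin, cos, -sin, -cos with period 4 (the function name and the choice a = π/4 are mine):

```python
import math

def taylor_sin_about(x, a, k):
    """Taylor sum of sin(x) about a: sum of f^(n)(a) (x - a)^n / n!."""
    # nth derivative of sin evaluated at a, cycling with period 4
    derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    return sum(derivs[n % 4] * (x - a)**n / math.factorial(n)
               for n in range(k + 1))

a = math.pi / 4
print(taylor_sin_about(a + 0.2, a, 6), math.sin(a + 0.2))  # close agreement
```

Truncating this at k = 1 gives exactly f(a) + f'(a)(x - a), the linearization from the previous lecture.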
Errors in the Taylor Series

What happens if I do compute all the terms, out to infinity? For many (but not all) Taylor series, the approximating polynomial p_k(x) will converge to the true function f(x) for any value of x as k → ∞. A longer study of the convergence of power series would take too much time for this class, but that is where you would learn to determine convergence or divergence as the number of terms in the approximation goes to infinity.

Errors in Taylor series approximations are known as truncation errors because they come from ignoring terms, that is, truncating an infinite series. Nobody wants to compute an infinite number of terms, not even a computer.
Truncation Errors

What is the error if I only match n derivatives (i.e., keep n + 1 terms)?

f(x) = f(a) + f'(a)(x - a) + f''(a)(x - a)^2 / 2! + ... + f^(n)(a)(x - a)^n / n! + R_n(x)   (5)

Here, R_n(x) represents the terms containing derivatives of order higher than n, extending to infinity. If we kept them all, we would match f(x) exactly. If we don't use these terms (often called Higher Order Terms, or H.O.T.), then we are truncating the polynomial and we have:

f(x) = p_n(x) + H.O.T.   (6)

Thus, the error of our approximation is equal to the terms we threw away, the H.O.T.
Bounding the Remainder: R_n(x)

The remainder is mathematically bounded (under some constraints we won't discuss) by:

|R_n(x)| ≤ M |x - a|^(n+1) / (n + 1)!   (7)

where M is a constant. Thus the truncation error scales as:

E ∝ |x - a|^(n+1)   (8)

where |x - a| is the distance from the point of interest x to the point a around which we constructed the Taylor series. This distance is often denoted Δx or h. So, for example, in a linearization we keep only terms through n = 1, and thus our error scales as E ∝ (Δx)^2. Thus for the linearization of a function we have:

f(x) = f(a) + f'(a)(x - a) + C(Δx)^2   (9)

This is what we found in our last lecture when we evaluated the errors manually!
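A sketch checking the E ∝ (Δx)^2 scaling in equation (8): halving the step h should cut the linearization error by roughly a factor of 4 (the test function f = sin and the step sizes are my choices):

```python
import math

def linearization_error(f, fprime, a, h):
    """Error of the linear Taylor approximation f(a) + f'(a)*h at x = a + h."""
    return abs(f(a + h) - (f(a) + fprime(a) * h))

a = 1.0
e1 = linearization_error(math.sin, math.cos, a, 0.10)
e2 = linearization_error(math.sin, math.cos, a, 0.05)
print(e1 / e2)  # close to 4, consistent with E ~ C * h^2
```

This halve-the-step, quarter-the-error behavior is exactly the manual error check from the previous lecture.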