Kostas Kokkotas
November 6, 2007
kostas.kokkotas@uni-tuebingen.de
http://www.tat.physik.uni-tuebingen.de/kokkotas
Error Analysis

Definition: Suppose that $\hat{x}$ is an approximation to $x$. The error is $E_x = x - \hat{x}$, and the relative error is $R_x = (x - \hat{x})/x$, provided that $x \neq 0$.

EXAMPLES

1. Let $\pi = 3.14159\ldots$ and $\hat{\pi} = 3.14$; then the error is
   $E_\pi = \pi - \hat{\pi} = 3.14159 - 3.14000 = 0.00159$
   and the relative error is
   $R_\pi = \dfrac{\pi - \hat{\pi}}{\pi} = \dfrac{0.00159}{3.14159} = 0.000506.$

2. Let $x = 0.000012$ and $\hat{x} = 0.000009$; then the error is
   $E_x = x - \hat{x} = 0.000012 - 0.000009 = 0.000003$
   and the relative error is
   $R_x = \dfrac{x - \hat{x}}{x} = \dfrac{0.000003}{0.000012} = 0.25.$
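The two definitions are a one-liner each in code. A minimal sketch in Python (the function names `error` and `relative_error` are mine, not from the notes):

```python
def error(x, x_hat):
    """Absolute error E_x = x - x_hat."""
    return x - x_hat

def relative_error(x, x_hat):
    """Relative error R_x = (x - x_hat) / x; requires x != 0."""
    if x == 0:
        raise ValueError("relative error undefined for x = 0")
    return (x - x_hat) / x

# Example 1: pi approximated by 3.14
E_pi = error(3.14159, 3.14000)           # 0.00159
R_pi = relative_error(3.14159, 3.14000)  # ~0.000506

# Example 2: tiny absolute error, but large relative error
E_x = error(0.000012, 0.000009)           # 0.000003
R_x = relative_error(0.000012, 0.000009)  # 0.25
```

The second example is the instructive one: the absolute error is minuscule, yet the approximation is off by 25% in relative terms, which is why relative error is usually the more meaningful measure.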
Definition: The number $\hat{x}$ is said to approximate $x$ to $d$ significant digits if $d$ is the largest non-negative integer for which
$\frac{|x - \hat{x}|}{|x|} < \frac{10^{-d}}{2}.$

EXAMPLES: For the approximations of the previous example we get:

1. $|\pi - \hat{\pi}|/|\pi| = 0.000506 \approx 10^{-3}/2$, so $d = 3$ (3 significant digits).
2. $|x - \hat{x}|/|x| = 0.25 < 10^{0}/2$, so $d = 0$ (no significant digits).
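The definition translates directly into a small search over $d$ (a sketch; the function name is mine). Note that the $\pi$ example is a borderline case: $0.000506$ slightly exceeds $10^{-3}/2 = 0.0005$, so a strict reading of the inequality yields $d = 2$ rather than the $d = 3$ quoted above.

```python
def significant_digits(x, x_hat):
    """Largest non-negative integer d with |x - x_hat|/|x| < 10**(-d)/2.

    Assumes x != 0 and x_hat != x (if x_hat == x, every d qualifies).
    """
    r = abs(x - x_hat) / abs(x)
    d = 0
    while r < 10.0 ** (-(d + 1)) / 2:
        d += 1
    return d

# Example 2 from the notes: no significant digits
print(significant_digits(0.000012, 0.000009))  # 0

# Borderline pi example: strict inequality gives d = 2, not 3
print(significant_digits(3.14159, 3.14))       # 2
```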
Truncation Error

The notion of truncation error is introduced when a more complicated mathematical expression is replaced by a more elementary formula. The terminology originates from the technique of replacing a complicated function with a truncated Taylor series, e.g. the infinite series
$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \ldots$

EXAMPLE: Given $I = \int_0^1 \arctan(x)\,dx$, determine the accuracy of the approximation obtained by replacing the integrand $f(x) = \arctan(x)$ with the truncated Taylor expansion of
$\arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \frac{x^9}{9} - \ldots$
Then term-by-term integration produces
$\hat{I} = \int_0^1 \left( x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} \right) dx = \left[ \frac{x^2}{2} - \frac{x^4}{12} + \frac{x^6}{30} - \frac{x^8}{56} \right]_0^1 = \frac{121}{280} = 0.4321428571.$
The exact value is $I = 0.4388245732$, thus the relative error is
$R_I = (I - \hat{I})/I = 0.0152264 \approx 1.5 \times 10^{-2} < 10^{-1}/2,$
i.e. the approximation $\hat{I}$ agrees with the true answer to one significant digit.
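The numbers in this example are easy to verify, since the exact integral has the closed form $\int_0^1 \arctan(x)\,dx = \pi/4 - \tfrac{1}{2}\ln 2$ (integration by parts). A short check, with variable names of my choosing:

```python
import math

# Term-by-term integral of the truncated arctan series over [0, 1]:
# integral of x - x**3/3 + x**5/5 - x**7/7 is x**2/2 - x**4/12 + x**6/30 - x**8/56
I_hat = 1/2 - 1/12 + 1/30 - 1/56      # = 121/280 = 0.43214285...

# Exact value: pi/4 - ln(2)/2
I_exact = math.pi / 4 - math.log(2) / 2

# Relative error of the truncated approximation
R_I = (I_exact - I_hat) / I_exact
print(I_hat, I_exact, R_I)
```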
Round-Off Error

The computer's representation of real numbers is limited to the fixed precision of the mantissa, so true values are often not stored exactly; this is called round-off error. Typically the number actually stored in the computer has undergone chopping or rounding of its last digit. Since the hardware works with only a limited number of digits in machine numbers, rounding errors are introduced and propagated through successive computations.
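A quick illustration in Python (IEEE double precision, roughly 16 significant decimal digits): the literal 0.1 has no finite binary representation, so the machine number actually stored differs from the true value, and the discrepancy surfaces once arithmetic accumulates:

```python
from decimal import Decimal

# The machine number actually stored for the literal 0.1
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Accumulated round-off: ten copies of 0.1 do not sum to exactly 1.0
total = sum([0.1] * 10)
print(total == 1.0)       # False
print(abs(total - 1.0))   # tiny residual, on the order of 1e-16
```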
Loss of Significance

Consider two numbers $x = 1.11112$ and $y = 1.11113$ which are nearly equal and both carry 6 significant digits of precision. Suppose that their difference is formed: $y - x = 0.00001$. Since the first five digits of $x$ and $y$ are identical, the difference retains only one significant digit. If the output of the computer is written in the normalized form $0.1bbbb \times 10^{-4}$, only the first mantissa digit is correct and the other four are spurious! This phenomenon is called loss of significance or subtractive cancellation.

EXAMPLE: Let's compare the results of calculating $f_1(300)$ and $f_2(300)$, where
$f_1(x) = x^2\left(\sqrt{x+1} - \sqrt{x}\right) \quad \text{and} \quad f_2(x) = \frac{x^2}{\sqrt{x+1} + \sqrt{x}}$
are practically two different writings of the same function (multiply and divide by the conjugate $\sqrt{x+1} + \sqrt{x}$). On a computer which operates with 8-digit precision we get $f_1(300) = 2596.0$ and $f_2(300) = 2595.9147$. Thus $f_1(300) - f_2(300) = 0.0853 \approx 10^{-1}$ instead of $\approx 10^{-8}$, i.e. a loss of 7 significant digits. For the same example on a computer with 16-digit arithmetic, instead of a discrepancy of order $10^{-16}$ we get $|f_1(300) - f_2(300)| \approx 4.7 \times 10^{-11}$, i.e. a loss of 5 significant digits.
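The effect is easy to reproduce with Python's `decimal` module, which lets us emulate a low-precision machine (a sketch under the assumption that 8-digit decimal rounding is a fair stand-in for the unspecified hardware of the example):

```python
from decimal import Decimal, getcontext

def f1(x):
    # Subtractive cancellation: sqrt(x+1) and sqrt(x) are nearly equal
    return x**2 * ((x + 1).sqrt() - x.sqrt())

def f2(x):
    # Algebraically identical (conjugate form), but avoids the subtraction
    return x**2 / ((x + 1).sqrt() + x.sqrt())

getcontext().prec = 8              # emulate an 8-digit machine
a8, b8 = f1(Decimal(300)), f2(Decimal(300))

getcontext().prec = 30             # near-exact reference value
ref = f2(Decimal(300))

print(a8, b8, ref)  # f1 has lost digits; f2 agrees with the reference
```

The conjugate rewriting is the standard cure: it turns the dangerous subtraction of nearly equal square roots into a harmless addition in the denominator.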
$O(h^n)$ Order of Approximation

Often a function $f(h)$ is replaced by an approximation $p(h)$ and the error bound is known to be $M|h|^n$. This leads to the following definition.

Definition: Assume that $f(h)$ is approximated by the function $p(h)$ and that there exist a real constant $M > 0$ and a positive integer $n$ so that
$\frac{|f(h) - p(h)|}{|h^n|} \leq M \quad \text{for sufficiently small } h. \quad (1)$
We say that $p(h)$ approximates $f(h)$ with order of approximation $O(h^n)$ and write
$f(h) = p(h) + O(h^n).$
When relation (1) is rewritten in the form $|f(h) - p(h)| \leq M|h|^n$, we see that the notation $O(h^n)$ stands in place of the error bound $M|h|^n$.

Theorem: Assume that $f(h) = p(h) + O(h^n)$ and $g(h) = q(h) + O(h^m)$, and let $r = \min(m, n)$. Then
$f(h) + g(h) = p(h) + q(h) + O(h^r), \quad (2)$
$f(h)\,g(h) = p(h)\,q(h) + O(h^r) \quad \text{and} \quad \frac{f(h)}{g(h)} = \frac{p(h)}{q(h)} + O(h^r), \quad (3)$
the quotient provided that $g(h)$ and $q(h)$ are nonzero.
$O(h^n)$ Order of Approximation

Taylor expansion provides a good example of the above theorem. Consider the Taylor polynomial expansions
$e^h = 1 + h + \frac{h^2}{2!} + \frac{h^3}{3!} + O(h^4) \quad \text{and} \quad \cos(h) = 1 - \frac{h^2}{2!} + \frac{h^4}{4!} + O(h^6),$
and form the sum and product of these approximations. The sum is
$e^h + \cos(h) = 1 + h + \frac{h^2}{2!} + \frac{h^3}{3!} + O(h^4) + 1 - \frac{h^2}{2!} + \frac{h^4}{4!} + O(h^6)$
$= 2 + h + \frac{h^3}{3!} + O(h^4) + \frac{h^4}{4!} + O(h^6)$
$e^h + \cos(h) = 2 + h + \frac{h^3}{3!} + O(h^4),$
and the order of approximation is $O(h^4)$. The product is
$e^h \cos(h) = \left(1 + h + \frac{h^2}{2!} + \frac{h^3}{3!} + O(h^4)\right)\left(1 - \frac{h^2}{2!} + \frac{h^4}{4!} + O(h^6)\right)$
$= 1 + h - \frac{h^3}{3} - \frac{5h^4}{24} - \frac{h^5}{24} + \frac{h^6}{48} + \frac{h^7}{144} + O(h^4)$
$e^h \cos(h) = 1 + h - \frac{h^3}{3} + O(h^4),$
and the order of approximation is $O(h^4)$.
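The $O(h^4)$ claims can be checked numerically: as $h$ shrinks, the error of each truncated expression divided by $h^4$ should remain bounded by some constant $M$. A rough numerical sketch (not a proof; the sample values of $h$ are my choice):

```python
import math

def sum_approx(h):
    # 2 + h + h**3/3! approximates e**h + cos(h) with error O(h**4)
    return 2 + h + h**3 / 6

def prod_approx(h):
    # 1 + h - h**3/3 approximates e**h * cos(h) with error O(h**4)
    return 1 + h - h**3 / 3

# Error divided by h**4 should stay roughly constant as h -> 0
ratios_sum = [abs(math.exp(h) + math.cos(h) - sum_approx(h)) / h**4
              for h in (0.1, 0.05, 0.01)]
ratios_prod = [abs(math.exp(h) * math.cos(h) - prod_approx(h)) / h**4
               for h in (0.1, 0.05, 0.01)]

print(ratios_sum)   # all near 1/12, the leading h**4 coefficient of the sum
print(ratios_prod)  # all near 1/6, the leading h**4 coefficient of the product
```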
Error Propagation

Here we investigate how error may be propagated in successive computations. Suppose the true values of two numbers are $x$ and $y$ and their approximate values are $\hat{x}$ and $\hat{y}$, which contain errors $\varepsilon_x$ and $\varepsilon_y$ respectively. Then we can write $x = \hat{x} + \varepsilon_x$ and $y = \hat{y} + \varepsilon_y$, and the sum will be
$x + y = (\hat{x} + \varepsilon_x) + (\hat{y} + \varepsilon_y) = (\hat{x} + \hat{y}) + (\varepsilon_x + \varepsilon_y). \quad (4)$
Hence, for addition, the error in the sum is the sum of the errors. The propagation of the error in multiplication is more complicated:
$xy = (\hat{x} + \varepsilon_x)(\hat{y} + \varepsilon_y) = \hat{x}\hat{y} + \hat{x}\varepsilon_y + \hat{y}\varepsilon_x + \varepsilon_x\varepsilon_y. \quad (5)$
Hence, if $|\hat{x}| > 1$ and $|\hat{y}| > 1$, the terms $\hat{x}\varepsilon_y$ and $\hat{y}\varepsilon_x$ show that there is a possibility of magnification of the original errors $\varepsilon_x$ and $\varepsilon_y$. Insight can be gained by looking at the relative error, i.e. by rearranging the terms of the previous expression:
$\frac{xy - \hat{x}\hat{y}}{xy} = \frac{\hat{x}\varepsilon_y + \hat{y}\varepsilon_x + \varepsilon_x\varepsilon_y}{xy} = \frac{\hat{x}\varepsilon_y}{xy} + \frac{\hat{y}\varepsilon_x}{xy} + \frac{\varepsilon_x\varepsilon_y}{xy}. \quad (6)$
Error Propagation

Furthermore, suppose that $\hat{x}/x \approx 1$, $\hat{y}/y \approx 1$, and $(\varepsilon_x/x)(\varepsilon_y/y) = R_x R_y \approx 0$. Then
$\frac{xy - \hat{x}\hat{y}}{xy} \approx \frac{\varepsilon_y}{y} + \frac{\varepsilon_x}{x} = R_x + R_y. \quad (7)$
This shows that the relative error in the product $xy$ is approximately the sum of the relative errors in the approximations $\hat{x}$ and $\hat{y}$.

DEFINITION: Suppose that $E(n)$ represents the growth of the error after $n$ steps. If $E(n) \approx n\varepsilon$, the growth of the error is said to be linear. If $E(n) \approx K^n \varepsilon$, the growth of the error is called exponential. Obviously, if $K > 1$ the exponential error grows without bound as $n \to \infty$, and if $0 < K < 1$ the exponential error diminishes to zero as $n \to \infty$.
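Relation (7) is easy to confirm numerically: the relative error of a product differs from the sum of the factors' relative errors only by the tiny cross term $R_x R_y$. A small sketch (the sample values are my own):

```python
# "True" values and their approximations (illustrative numbers)
x, y = 3.14159, 2.71828
x_hat, y_hat = 3.14, 2.72

R_x = (x - x_hat) / x            # relative error of x_hat
R_y = (y - y_hat) / y            # relative error of y_hat
R_xy = (x * y - x_hat * y_hat) / (x * y)   # relative error of the product

# R_xy = R_x + R_y - R_x*R_y exactly, so the two agree up to the cross term
print(R_xy, R_x + R_y)
```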
EXERCISE

1. Compare the results of calculating $f_1(0.01)$ and $f_2(0.01)$ using 6 digits and rounding, where
$f_1(x) = \frac{e^x - 1 - x}{x^2} \quad \text{and} \quad f_2(x) = \frac{1}{2} + \frac{x}{6} + \frac{x^2}{24}.$
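A sketch of the experiment, with Python's `decimal` module standing in for 6-digit rounded arithmetic (my choice of tool; the exercise itself does not prescribe one). In $f_1$ the subtraction $e^x - 1 - x$ cancels almost all the stored digits, while the Taylor form $f_2$ of the same function avoids the subtraction entirely:

```python
from decimal import Decimal, getcontext

getcontext().prec = 6          # emulate 6-digit rounded arithmetic

x = Decimal("0.01")

# f1 subtracts nearly equal quantities: e**x = 1.01005 to 6 digits,
# so e**x - 1 - x = 0.00005 keeps only one significant digit
f1 = (x.exp() - 1 - x) / x**2

# f2 is the truncated Taylor series of the same function; no cancellation
f2 = Decimal(1) / 2 + x / 6 + x**2 / 24

print(f1, f2)   # f1 = 0.5, f2 = 0.501671: f1 has lost most of its digits
```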