1 > 5. Numerical Integration  Review of Interpolation

Find p_n(x) with p_n(x_j) = y_j, j = 0, 1, ..., n.
Solution: p_n(x) = y_0 l_0(x) + y_1 l_1(x) + \cdots + y_n l_n(x), where
l_k(x) = \prod_{j=0,\, j \ne k}^{n} \frac{x - x_j}{x_k - x_j}.

Theorem. Let y_j = f(x_j) with f(x) smooth, and let p_n interpolate f(x) at x_0 < x_1 < \cdots < x_n. For any x \in (x_0, x_n) there is a \xi such that
f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \underbrace{(x - x_0)(x - x_1)\cdots(x - x_n)}_{\psi(x)}.

Example (n = 1, linear):
f(x) - p_1(x) = \frac{f''(\xi)}{2}(x - x_0)(x - x_1), \qquad p_1(x) = \frac{y_1 - y_0}{x_1 - x_0}(x - x_0) + y_0.
2 > 5. Numerical Integration > 5.1 The Trapezoidal Rule

Basic idea: find the area under the curve y = f(x),
\int_a^b f(x)\,dx = \sum_{j=0}^{N-1} \int_{x_j}^{x_{j+1}} f(x)\,dx \approx \sum_{j=0}^{N-1} (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2},
where on each (x_j, x_{j+1}) we interpolate f(x) and integrate the interpolant exactly.
3 > 5. Numerical Integration > 5.1 The Trapezoidal Rule

Trapezoidal Rule. On each subinterval, replace f by its linear interpolant p_1:
\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \int_{x_j}^{x_{j+1}} p_1(x)\,dx
= \int_{x_j}^{x_{j+1}} \left\{ \frac{f(x_{j+1}) - f(x_j)}{x_{j+1} - x_j}\,(x - x_j) + f(x_j) \right\} dx
= (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2}.
On a single interval [a, b]:
T_1(f) = (b - a)\,\frac{f(a) + f(b)}{2}.   (6.1)
4 > 5. Numerical Integration > 5.1 The Trapezoidal Rule

Another derivation of the Trapezoidal Rule. Seek
\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1} f(x_{j+1}),
exact on polynomials of degree \le 1 (i.e., on 1 and x):
f(x) \equiv 1:  \int_{x_j}^{x_{j+1}} dx = x_{j+1} - x_j = w_j \cdot 1 + w_{j+1} \cdot 1,
f(x) = x:  \int_{x_j}^{x_{j+1}} x\,dx = \frac{x_{j+1}^2 - x_j^2}{2} = w_j x_j + w_{j+1} x_{j+1}.
Solving gives w_j = w_{j+1} = (x_{j+1} - x_j)/2, which reproduces (6.1).
5 > 5. Numerical Integration > 5.1 The Trapezoidal Rule

Example. Approximate the integral
I = \int_0^1 \frac{dx}{1+x}.
The true value is I = \ln(2) = 0.693147...  Using (6.1), we obtain
T_1 = \frac{1}{2}\left(1 + \frac{1}{2}\right) = \frac{3}{4} = 0.75,
and the error is
I - T_1(f) = -0.0569.   (6.2)
6 > 5. Numerical Integration > 5.1 The Trapezoidal Rule

To improve on the approximation (6.1) when f(x) is not a nearly linear function on [a, b], break the interval [a, b] into smaller subintervals and apply (6.1) on each subinterval. If the subintervals are small enough, then f(x) will be nearly linear on each one.

Example. Evaluate the preceding example by using T_1(f) on two subintervals of equal length:
I = \int_0^{1/2} \frac{dx}{1+x} + \int_{1/2}^{1} \frac{dx}{1+x},
T_2 = \frac{1}{4}\left(1 + 2 \cdot \frac{2}{3} + \frac{1}{2}\right) = \frac{17}{24} = 0.708333,
and the error
I - T_2 = -0.0152   (6.3)
is about 1/4 of that for T_1 in (6.2).
7 > 5. Numerical Integration > 5.1 The Trapezoidal Rule

We derive the general formula for calculations using n subintervals of equal length h = (b - a)/n. The endpoints of each subinterval are then
x_j = a + jh,  j = 0, 1, ..., n.
Breaking the integral into n subintegrals,
I(f) = \int_a^b f(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \cdots + \int_{x_{n-1}}^{x_n} f(x)\,dx
\approx h\,\frac{f(x_0)+f(x_1)}{2} + h\,\frac{f(x_1)+f(x_2)}{2} + \cdots + h\,\frac{f(x_{n-1})+f(x_n)}{2}.
The trapezoidal numerical integration rule:
T_n(f) = h\left(\tfrac{1}{2}f(x_0) + f(x_1) + f(x_2) + \cdots + f(x_{n-1}) + \tfrac{1}{2}f(x_n)\right).   (6.4)
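As a quick check of (6.4) and of the worked example above, here is a minimal Python sketch (function and variable names are mine, not from the notes):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule T_n(f), (6.4), with n equal subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))          # endpoint values get weight 1/2
    for j in range(1, n):            # interior values get weight 1
        s += f(a + j * h)
    return h * s

# Worked example from the text: I = \int_0^1 dx/(1+x) = ln 2.
f = lambda x: 1.0 / (1.0 + x)
I = math.log(2.0)
T1 = trapezoid(f, 0.0, 1.0, 1)       # 3/4 = 0.75
T2 = trapezoid(f, 0.0, 1.0, 2)       # 17/24 = 0.708333...
ratio = (I - T1) / (I - T2)          # roughly 4, as claimed in (6.3)
```

The computed ratio of errors is about 3.7, consistent with the "about 1/4" claim for T_2 versus T_1.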
8 > 5. Numerical Integration > 5.1 The Trapezoidal Rule

With a sequence of increasing values of n, T_n(f) will usually be an increasingly accurate approximation of I(f). But which sequence of n should be used? If n is doubled repeatedly, then the function values used in each T_{2n}(f) include all function values used in the preceding T_n(f). Thus, doubling n ensures that all previously computed information is reused, making the trapezoidal rule less expensive than it would be otherwise.
T_2(f) = h\left(\tfrac{1}{2}f(x_0) + f(x_1) + \tfrac{1}{2}f(x_2)\right),  with  h = \frac{b-a}{2},  x_0 = a,  x_1 = \frac{a+b}{2},  x_2 = b.
T_4(f) = h\left(\tfrac{1}{2}f(x_0) + f(x_1) + f(x_2) + f(x_3) + \tfrac{1}{2}f(x_4)\right),  with
h = \frac{b-a}{4},  x_0 = a,  x_1 = \frac{3a+b}{4},  x_2 = \frac{a+b}{2},  x_3 = \frac{a+3b}{4},  x_4 = b.
In passing from T_2 to T_4, only f(x_1) and f(x_3) need to be evaluated.
9 > 5. Numerical Integration > 5.1 The Trapezoidal Rule

Example. We give calculations of T_n(f) for three integrals:
I^{(1)} = \int_0^1 e^{-x^2}\,dx = 0.746824...,
I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) = 1.325818...,
I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} = 3.627599...

[Table: errors I - T_n and error ratios for I^{(1)}, I^{(2)}, I^{(3)} as n is repeatedly doubled.]

The error for I^{(1)}, I^{(2)} decreases by a factor of about 4 when n is doubled; for I^{(3)}, the answers for n = 32, 64, 128 were already correct up to the limits of rounding error on the computer (16 decimal digits).
10 > 5. Numerical Integration > Simpson's rule

To improve on T_1(f) in (6.1), use quadratic interpolation to approximate f(x) on [a, b]. Let P_2(x) be the quadratic polynomial that interpolates f(x) at a, c = (a+b)/2, and b:
I(f) = \int_a^b f(x)\,dx \approx \int_a^b P_2(x)\,dx   (6.5)
= \int_a^b \left( \frac{(x-c)(x-b)}{(a-c)(a-b)}\,f(a) + \frac{(x-a)(x-b)}{(c-a)(c-b)}\,f(c) + \frac{(x-a)(x-c)}{(b-a)(b-c)}\,f(b) \right) dx.
This can be evaluated directly, but it is easier with a change of variables. Let h = (b-a)/2 and u = x - a. Then, for instance,
\int_a^b \frac{(x-c)(x-b)}{(a-c)(a-b)}\,dx = \frac{1}{2h^2} \int_0^{2h} (u-h)(u-2h)\,du = \frac{1}{2h^2} \cdot \frac{2h^3}{3} = \frac{h}{3}.
Treating the other two terms similarly gives
S_2(f) = \frac{h}{3}\left( f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b) \right).   (6.6)
11 > 5. Numerical Integration > Simpson's rule

Example. For
I = \int_0^1 \frac{dx}{1+x},
we have h = (b-a)/2 = 1/2 and
S_2(f) = \frac{1}{6}\left(1 + 4 \cdot \frac{2}{3} + \frac{1}{2}\right) = \frac{25}{36} = 0.694444,
and the error is
I - S_2 = \ln(2) - \frac{25}{36} = -0.00130,   (6.7)
while the error for the trapezoidal rule (the number of function evaluations is the same for both S_2 and T_2) was
I - T_2 = -0.0152.
The error in S_2 is smaller than that in (6.3) for T_2 by a factor of about 12, a significant increase in accuracy.
12 > 5. Numerical Integration > Simpson's rule

Figure: An illustration of Simpson's rule (6.6): y = f(x) and y = P_2(x).
13 > 5. Numerical Integration > Simpson's rule

The rule S_2(f) will be an accurate approximation to I(f) if f(x) is nearly quadratic on [a, b]. For the other cases, proceed in the same manner as for the trapezoidal rule. Let n be an even integer, h = (b-a)/n, and define the evaluation points for f(x) by
x_j = a + jh,  j = 0, 1, ..., n.
We follow the idea from the trapezoidal rule, but break [a, b] = [x_0, x_n] into larger subintervals, each containing three interpolation node points:
I(f) = \int_a^b f(x)\,dx = \int_{x_0}^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \cdots + \int_{x_{n-2}}^{x_n} f(x)\,dx
\approx \frac{h}{3}\left[f(x_0) + 4f(x_1) + f(x_2)\right] + \frac{h}{3}\left[f(x_2) + 4f(x_3) + f(x_4)\right] + \cdots + \frac{h}{3}\left[f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\right].
14 > 5. Numerical Integration > Simpson's rule

Simpson's rule:
S_n(f) = \frac{h}{3}\left( f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n) \right).   (6.8)
It has been among the most popular numerical integration methods for more than two centuries.
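A minimal Python sketch of the composite rule (6.8), checked against the example above (names are mine):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule S_n(f), (6.8); n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for j in range(1, n):
        # interior odd-index nodes get weight 4, even-index nodes weight 2
        s += (4 if j % 2 == 1 else 2) * f(a + j * h)
    return h * s / 3.0

f = lambda x: 1.0 / (1.0 + x)
S2 = simpson(f, 0.0, 1.0, 2)          # 25/36 = 0.694444...
err = math.log(2.0) - S2              # the error (6.7), about -0.00130
```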
15 > 5. Numerical Integration > Simpson's rule

Example. Evaluate the integrals
I^{(1)} = \int_0^1 e^{-x^2}\,dx = 0.746824...,
I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) = 1.325818...,
I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} = 3.627599...

[Table: errors I - S_n and error ratios for I^{(1)}, I^{(2)}, I^{(3)} as n is repeatedly doubled.]

For I^{(1)}, I^{(2)}, the ratio by which the error decreases approaches 16. For I^{(3)}, the errors converge to zero much more rapidly.
16 > 5. Numerical Integration > 5.2 Error formulas

Theorem. Let f \in C^2[a, b], n \in \mathbb{N}. The error in integrating I(f) = \int_a^b f(x)\,dx using the trapezoidal rule
T_n(f) = h\left[\tfrac{1}{2}f(x_0) + f(x_1) + f(x_2) + \cdots + f(x_{n-1}) + \tfrac{1}{2}f(x_n)\right]
is given by
E_n^T \equiv I(f) - T_n(f) = -\frac{h^2 (b-a)}{12}\, f''(c_n),   (6.9)
where c_n is some unknown point in [a, b], and h = (b-a)/n.
17 > 5. Numerical Integration > 5.2 Error formulas

Theorem. Suppose that f \in C^2[a, b], and h = \max_j (x_{j+1} - x_j). Then
\left| \int_a^b f(x)\,dx - \sum_j (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2} \right| \le \frac{b-a}{12}\, h^2 \max_{a \le x \le b} |f''(x)|.

Proof: Let I_j be the j-th subinterval and p_1 the linear interpolant on I_j at x_j, x_{j+1}. The interpolation error is
f(x) - p_1(x) = \frac{f''(\xi(x))}{2} \underbrace{(x - x_j)(x - x_{j+1})}_{\psi(x)},
so the local error is
\int_{x_j}^{x_{j+1}} f(x)\,dx - (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2} = \int_{x_j}^{x_{j+1}} \frac{f''(\xi(x))}{2}\,(x - x_j)(x - x_{j+1})\,dx.
18 > 5. Numerical Integration > 5.2 Error formulas

Proof (continued). Hence
\left| \int_{x_j}^{x_{j+1}} \frac{f''(\xi)}{2}\,(x - x_j)(x - x_{j+1})\,dx \right| \le \frac{1}{2} \max_{a \le x \le b} |f''(x)| \int_{x_j}^{x_{j+1}} |x - x_j|\,|x - x_{j+1}|\,dx
= \frac{1}{2} \max_{a \le x \le b} |f''(x)| \int_{x_j}^{x_{j+1}} (x - x_j)(x_{j+1} - x)\,dx = \frac{1}{2} \max_{a \le x \le b} |f''(x)| \cdot \frac{(x_{j+1} - x_j)^3}{6},
so
local error \le \frac{1}{12} \max_{a \le x \le b} |f''(x)|\, h_j^3,   h_j = x_{j+1} - x_j.
19 > 5. Numerical Integration > 5.2 Error formulas

Proof (continued). Finally,
global error = \left| \int_a^b f(x)\,dx - \sum_j (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2} \right|
= \left| \sum_j \left( \int_{x_j}^{x_{j+1}} f(x)\,dx - (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2} \right) \right|
\le \underbrace{\sum_{j=0}^{n-1} \frac{1}{12} \max_{a \le x \le b} |f''(x)|\,(x_{j+1} - x_j)^3}_{local errors}
\le \frac{1}{12} \max_{a \le x \le b} |f''(x)|\, h^2 \underbrace{\sum_{j=0}^{n-1} (x_{j+1} - x_j)}_{b-a}
= \frac{b-a}{12}\, h^2 \max_{a \le x \le b} |f''(x)|.
20 > 5. Numerical Integration > 5.2 Error formulas

Example. Recall the example
I(f) = \int_0^1 \frac{dx}{1+x} = \ln 2.
Here f(x) = \frac{1}{1+x}, [a, b] = [0, 1], and f''(x) = \frac{2}{(1+x)^3}. Then by (6.9),
E_n^T(f) = -\frac{h^2}{12}\, f''(c_n),  0 \le c_n \le 1,  h = \frac{1}{n}.
This cannot be computed exactly since c_n is unknown. But
\max_{0 \le x \le 1} |f''(x)| = \max_{0 \le x \le 1} \frac{2}{(1+x)^3} = 2,
and therefore
|E_n^T(f)| \le \frac{h^2}{12} \cdot 2 = \frac{h^2}{6}.
For n = 1 and n = 2 we have
|E_1^T(f)| \le \frac{1}{6} = 0.167,  |E_2^T(f)| \le \frac{1}{6}\left(\frac{1}{2}\right)^2 = 0.0417.
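The bound h^2/6 can be verified numerically; here is a short sketch (names are mine, assuming the trapezoid routine from earlier):

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule (6.4)
    h = (b - a) / n
    return h * (0.5*f(a) + sum(f(a + j*h) for j in range(1, n)) + 0.5*f(b))

f = lambda x: 1.0 / (1.0 + x)
I = math.log(2.0)

# theoretical bound |E_n^T| <= (h^2/12) * max|f''| = h^2/6 on [0, 1]
bounds = {n: (1.0/n)**2 / 6.0 for n in (1, 2)}
errors = {n: abs(I - trapezoid(f, 0.0, 1.0, n)) for n in (1, 2)}
```

The actual errors (0.0569 and 0.0152) indeed sit below the bounds (0.167 and 0.0417).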
21 > 5. Numerical Integration > 5.2 Error formulas

A possible weakness of the trapezoidal rule can be inferred from the hypothesis of the error theorem. If f(x) does not have two continuous derivatives on [a, b], does T_n(f) converge more slowly? YES, for some functions, especially if the first derivative is not continuous.
22 > 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

The error formula (6.9),
E_n^T(f) \equiv I(f) - T_n(f) = -\frac{h^2 (b-a)}{12}\, f''(c_n),
can only be used to bound the error, because f''(c_n) is unknown. This can be improved by a more careful consideration of the error formula. A central element of the proof of (6.9) lies in the local error
\int_\alpha^{\alpha+h} f(x)\,dx - h\,\frac{f(\alpha) + f(\alpha+h)}{2} = -\frac{h^3}{12}\, f''(c)   (6.10)
for some c \in [\alpha, \alpha+h].
23 > 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

Recall the derivation of the trapezoidal rule T_n(f) and use the local error (6.10):
E_n^T(f) = \int_a^b f(x)\,dx - T_n(f)
= \left( \int_{x_0}^{x_1} f\,dx - h\,\frac{f(x_0)+f(x_1)}{2} \right) + \left( \int_{x_1}^{x_2} f\,dx - h\,\frac{f(x_1)+f(x_2)}{2} \right) + \cdots + \left( \int_{x_{n-1}}^{x_n} f\,dx - h\,\frac{f(x_{n-1})+f(x_n)}{2} \right)
= -\frac{h^3}{12}\, f''(\gamma_1) - \frac{h^3}{12}\, f''(\gamma_2) - \cdots - \frac{h^3}{12}\, f''(\gamma_n)
with \gamma_1 \in [x_0, x_1], \gamma_2 \in [x_1, x_2], ..., \gamma_n \in [x_{n-1}, x_n], and
E_n^T(f) = -\frac{h^2}{12} \underbrace{\left( h f''(\gamma_1) + \cdots + h f''(\gamma_n) \right)}_{=(b-a)\, f''(c_n)},  c_n \in [a, b].
24 > 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

To estimate the trapezoidal error, observe that
h f''(\gamma_1) + \cdots + h f''(\gamma_n)
is a Riemann sum for the integral
\int_a^b f''(x)\,dx = f'(b) - f'(a).   (6.11)
The Riemann sum is based on the partition [x_0, x_1], [x_1, x_2], ..., [x_{n-1}, x_n] of [a, b]. As n \to \infty, this sum approaches the integral (6.11). With (6.11), we find an asymptotic estimate (which improves as n increases):
E_n^T(f) \approx -\frac{h^2}{12}\left( f'(b) - f'(a) \right) =: \tilde{E}_n^T(f).   (6.12)
As long as f'(x) is computable, \tilde{E}_n^T(f) will be very easy to compute.
25 > 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

Example. Again consider
I = \int_0^1 \frac{dx}{1+x}.
Then f'(x) = -\frac{1}{(1+x)^2}, and the asymptotic estimate (6.12) yields
\tilde{E}_n^T = -\frac{h^2}{12}\left( -\frac{1}{(1+1)^2} + \frac{1}{(1+0)^2} \right) = -\frac{h^2}{16},  h = \frac{1}{n},
and for n = 1 and n = 2:
\tilde{E}_1^T = -\frac{1}{16} = -0.0625,  \tilde{E}_2^T = -\frac{1}{64} = -0.0156,
compared with the true errors
I - T_1 = -0.0569,  I - T_2 = -0.0152.
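A short Python check of the asymptotic estimate (6.12) against the true errors (names are mine):

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5*f(a) + sum(f(a + j*h) for j in range(1, n)) + 0.5*f(b))

f  = lambda x: 1.0 / (1.0 + x)
fp = lambda x: -1.0 / (1.0 + x)**2     # f'(x)
I  = math.log(2.0)

def E_tilde(n, a=0.0, b=1.0):
    """Asymptotic error estimate (6.12) for the trapezoidal rule."""
    h = (b - a) / n
    return -(h*h/12.0) * (fp(b) - fp(a))

est1, est2 = E_tilde(1), E_tilde(2)    # -1/16 and -1/64
act1 = I - trapezoid(f, 0.0, 1.0, 1)   # about -0.0569
act2 = I - trapezoid(f, 0.0, 1.0, 2)   # about -0.0152
```

Already at n = 2 the estimate agrees with the actual error to about three decimal places.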
26 > 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

The estimate
\tilde{E}_n^T(f) = -\frac{h^2}{12}\left( f'(b) - f'(a) \right)
has several practical advantages over the earlier formula (6.9),
E_n^T(f) = -\frac{h^2 (b-a)}{12}\, f''(c_n).
1. It confirms that when n is doubled (or h is halved), the error decreases by a factor of about 4, provided that f'(b) - f'(a) \ne 0. This agrees with the results for I^{(1)} and I^{(2)}.
2. (6.12) implies that the convergence of T_n(f) will be more rapid when f'(b) - f'(a) = 0. This is a partial explanation of the very rapid convergence observed with I^{(3)}.
3. (6.12) leads to a more accurate numerical integration formula by taking \tilde{E}_n^T(f) into account:
I(f) - T_n(f) \approx -\frac{h^2}{12}\left( f'(b) - f'(a) \right)
I(f) \approx T_n(f) - \frac{h^2}{12}\left( f'(b) - f'(a) \right) =: CT_n(f),   (6.13)
the corrected trapezoidal rule.
27 > 5. Numerical Integration > 5.2.1 Asymptotic estimate of T_n(f)

Example. Recall the integral I^{(1)},
I = \int_0^1 e^{-x^2}\,dx = 0.746824...

[Table: T_n(f), \tilde{E}_n(f), CT_n(f), I - CT_n(f), and error ratios as n is doubled.]

Note that the estimate
\tilde{E}_n^T(f) = \frac{h^2}{6}\, e^{-1},  h = \frac{1}{n},
is a very accurate estimator of the true error. Also, the error in CT_n(f) converges to zero at a more rapid rate than does the error for T_n(f): when n is doubled, the error in CT_n(f) decreases by a factor of about 16.
28 > 5. Numerical Integration > 5.2.2 Error formulas for Simpson's rule

Theorem. Assume f \in C^4[a, b] and n is even. The error in using Simpson's rule is
E_n^S(f) = I(f) - S_n(f) = -\frac{h^4 (b-a)}{180}\, f^{(4)}(c_n)   (6.14)
with c_n \in [a, b] an unknown point, and h = (b-a)/n. Moreover, this error can be estimated with the asymptotic error formula
\tilde{E}_n^S(f) = -\frac{h^4}{180}\left( f'''(b) - f'''(a) \right).   (6.15)

Note that (6.14) says that Simpson's rule is exact for all f(x) that are polynomials of degree \le 3, whereas the quadratic interpolation on which Simpson's rule is based is exact only for polynomials of degree \le 2. The degree of precision being 3 leads to the power h^4 in the error, rather than the power h^3, which would have been produced on the basis of the error in quadratic interpolation. The higher power h^4 and the simple form of the method are what have historically made Simpson's rule the most popular numerical integration rule.
29 > 5. Numerical Integration > 5.2.2 Error formulas for Simpson's rule

Example. Recall (6.7), where S_2(f) was applied to I = \int_0^1 \frac{dx}{1+x}:
S_2(f) = \frac{1}{6}\left(1 + 4 \cdot \frac{2}{3} + \frac{1}{2}\right) = \frac{25}{36} = 0.694444.
Here f(x) = \frac{1}{1+x}, so
f'''(x) = -\frac{6}{(1+x)^4},  f^{(4)}(x) = \frac{24}{(1+x)^5}.
The exact error is given by
E_n^S(f) = -\frac{h^4}{180}\, f^{(4)}(c_n),  h = \frac{1}{n},
for some 0 \le c_n \le 1. We can bound it by
|E_n^S(f)| \le \frac{h^4}{180} \cdot 24 = \frac{2h^4}{15}.
The asymptotic error is given by
\tilde{E}_n^S(f) = -\frac{h^4}{180}\left( f'''(1) - f'''(0) \right) = -\frac{h^4}{180}\left( -\frac{6}{16} + 6 \right) = -\frac{h^4}{32}.
For n = 2, \tilde{E}_2^S = -\frac{1}{512} = -0.00195; the actual error is -0.00130.
30 > 5. Numerical Integration > 5.2.2 Error formulas for Simpson's rule

The behavior of I(f) - S_n(f) can be predicted from (6.15):
\tilde{E}_n^S(f) = -\frac{h^4}{180}\left( f'''(b) - f'''(a) \right),
i.e., when n is doubled, h is halved, and h^4 decreases by a factor of 16. Thus, the error E_n^S(f) should decrease by the same factor, provided that f'''(a) \ne f'''(b). This is the behavior observed with the integrals I^{(1)} and I^{(2)}. When f'''(a) = f'''(b), the error will decrease more rapidly, which is a partial explanation of the rapid convergence for I^{(3)}.
31 > 5. Numerical Integration > 5.2.2 Error formulas for Simpson's rule

The theory of asymptotic error formulas,
E_n(f) \approx \tilde{E}_n(f),   (6.16)
such as those for E_n^T(f) and E_n^S(f), says that the accuracy of (6.16) varies with the integrand f, as illustrated by the two cases I^{(1)} and I^{(2)}. From (6.14) and (6.15) we infer that Simpson's rule will not perform as well if f(x) is not four times continuously differentiable on [a, b].
32 > 5. Numerical Integration > 5.2.2 Error formulas for Simpson's rule

Example. Use Simpson's rule to approximate
I = \int_0^1 \sqrt{x}\,dx = \frac{2}{3}.

[Table: Simpson's rule errors and ratios for \sqrt{x} as n is doubled.]

The Ratio column approaches 2.83 \approx 2^{1.5} rather than 16, showing that the convergence is much slower: f(x) = \sqrt{x} is not four times continuously differentiable on [0, 1].
33 > 5. Numerical Integration > 5.2.2 Error formulas for Simpson's rule

As was done for the trapezoidal rule, a corrected Simpson's rule can be defined:
CS_n(f) = S_n(f) - \frac{h^4}{180}\left( f'''(b) - f'''(a) \right).   (6.17)
This will usually be a more accurate approximation than S_n(f).
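A sketch of (6.17) in Python, applied to the running example \int_0^1 dx/(1+x) (names are mine; f''' is supplied analytically):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule (6.8), n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if j % 2 else 2) * f(a + j*h) for j in range(1, n))
    return h * s / 3.0

f  = lambda x: 1.0 / (1.0 + x)
f3 = lambda x: -6.0 / (1.0 + x)**4      # f'''(x)
I  = math.log(2.0)

def corrected_simpson(n, a=0.0, b=1.0):
    """Corrected Simpson's rule CS_n(f), (6.17)."""
    h = (b - a) / n
    return simpson(f, a, b, n) - (h**4 / 180.0) * (f3(b) - f3(a))

err_S  = abs(I - simpson(f, 0.0, 1.0, 4))   # error of S_4
err_CS = abs(I - corrected_simpson(4))      # error of CS_4, noticeably smaller
```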
34 > 5. Numerical Integration > 5.2.3 Richardson extrapolation

The error estimates for the trapezoidal rule (6.12),
E_n^T(f) \approx -\frac{h^2}{12}\left( f'(b) - f'(a) \right),
and Simpson's rule (6.15),
\tilde{E}_n^S(f) = -\frac{h^4}{180}\left( f'''(b) - f'''(a) \right),
are both of the form
I - I_n \approx \frac{c}{n^p},   (6.18)
where I_n denotes the numerical integral and h = (b-a)/n. The constants c and p vary with the method and the function. With most integrands f(x), p = 2 for the trapezoidal rule and p = 4 for Simpson's rule. There are other numerical methods that satisfy (6.18), with other values of p and c. We use (6.18) to obtain a computable estimate of the error I - I_n, without needing to know c explicitly.
35 > 5. Numerical Integration > 5.2.3 Richardson extrapolation

Replacing n by 2n in (6.18) and comparing with (6.18),
I - I_{2n} \approx \frac{c}{(2n)^p} = \frac{1}{2^p} \cdot \frac{c}{n^p} \approx \frac{1}{2^p}\left( I - I_n \right),   (6.19)
and solving for I gives Richardson's extrapolation formula:
(2^p - 1)\, I \approx 2^p I_{2n} - I_n,
I \approx \frac{1}{2^p - 1}\left( 2^p I_{2n} - I_n \right) =: R_{2n}.   (6.20)
R_{2n} is an improved estimate of I, based on using I_n, I_{2n}, p, and the assumption (6.18). How much more accurate it is than I_{2n} depends on the validity of (6.18), (6.19).
36 > 5. Numerical Integration > 5.2.3 Richardson extrapolation

To estimate the error in I_{2n}, compare it with the more accurate value R_{2n}:
I - I_{2n} \approx R_{2n} - I_{2n} = \frac{1}{2^p - 1}\left( 2^p I_{2n} - I_n \right) - I_{2n} = \frac{1}{2^p - 1}\left( I_{2n} - I_n \right).   (6.21)
This is Richardson's error estimate.
37 > 5. Numerical Integration > 5.2.3 Richardson extrapolation

Example. Using the trapezoidal rule to approximate
I = \int_0^1 e^{-x^2}\,dx = 0.746824...,
we have
T_2 = 0.731370,  T_4 = 0.742984.
Using (6.20) with p = 2 and n = 2, we obtain
I \approx R_4 = \frac{1}{3}\left( 4 I_4 - I_2 \right) = \frac{1}{3}\left( 4 T_4 - T_2 \right) = 0.746855.
The error in R_4 is about -3.1E-5; from the earlier table, R_4 is more accurate than T_{32}. To estimate the error in T_4, use (6.21) to get
I - T_4 \approx \frac{1}{3}\left( T_4 - T_2 \right) = 0.00387.
The actual error in T_4 is 0.00384; thus (6.21) is a very accurate error estimate.
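The whole example can be reproduced in a few lines of Python (names are mine):

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5*f(a) + sum(f(a + j*h) for j in range(1, n)) + 0.5*f(b))

f  = lambda x: math.exp(-x*x)
T2 = trapezoid(f, 0.0, 1.0, 2)
T4 = trapezoid(f, 0.0, 1.0, 4)

# Richardson extrapolation (6.20) with p = 2:
R4 = (4.0 * T4 - T2) / 3.0
# Richardson error estimate (6.21) for T4:
est = (T4 - T2) / 3.0
```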
38 > 5. Numerical Integration > 5.2.4 Periodic Integrands

Definition. A function f(x) is periodic with period \tau if
f(x) = f(x + \tau),  x \in \mathbb{R},   (6.22)
and this relation does not hold with any smaller value of \tau. For example, f(x) = e^{\cos(\pi x)} is periodic with period \tau = 2. If f(x) is periodic and differentiable, then its derivatives are also periodic with period \tau.
39 > 5. Numerical Integration > 5.2.4 Periodic Integrands

Consider integrating
I = \int_a^b f(x)\,dx
with the trapezoidal or Simpson's rule, and assume that b - a is an integer multiple of the period \tau. Assume f(x) \in C^\infty[a, b] (it has derivatives of any order). Then for all derivatives of f(x), the periodicity of f(x) implies that
f^{(k)}(a) = f^{(k)}(b),  k \ge 0.   (6.23)
If we now look at the asymptotic error formulas for the trapezoidal and Simpson's rules, they become zero because of (6.23). Thus, the errors E_n^T(f) and E_n^S(f) should converge to zero more rapidly when f(x) is a periodic function, provided b - a is an integer multiple of the period of f.
40 > 5. Numerical Integration > 5.2.4 Periodic Integrands

The asymptotic error formulas \tilde{E}_n^T(f) and \tilde{E}_n^S(f) can be extended to higher-order terms in h, using the Euler-MacLaurin expansion; the higher-order terms are multiples of f^{(k)}(b) - f^{(k)}(a) for the odd integers k \ge 1. Using this, one can show that for periodic f(x) the errors E_n^T(f) and E_n^S(f) converge to zero even more rapidly than was implied by the earlier comments. The trapezoidal rule is the preferred integration rule when we are dealing with smooth periodic integrands. The earlier results for the integral I^{(3)} illustrate this.
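This very rapid convergence is easy to observe numerically; a small sketch using the smooth periodic integrand I^{(3)} (names are mine):

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5*f(a) + sum(f(a + j*h) for j in range(1, n)) + 0.5*f(b))

# I^(3) = \int_0^{2\pi} dx/(2 + cos x) = 2\pi/\sqrt{3}, smooth and periodic
f = lambda x: 1.0 / (2.0 + math.cos(x))
I = 2.0 * math.pi / math.sqrt(3.0)

err8  = abs(I - trapezoid(f, 0.0, 2.0*math.pi, 8))
err16 = abs(I - trapezoid(f, 0.0, 2.0*math.pi, 16))
```

Doubling n here shrinks the error by several orders of magnitude, far beyond the factor 4 predicted for non-periodic integrands.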
41 > 5. Numerical Integration > 5.2.4 Periodic Integrands

Example. The ellipse with boundary
\left( \frac{x}{a} \right)^2 + \left( \frac{y}{b} \right)^2 = 1
has area \pi a b. For the case in which the area is \pi (and thus ab = 1), we study the variation of the perimeter of the ellipse as a and b vary. The ellipse has the parametric representation
(x, y) = (a \cos\theta,\, b \sin\theta),  0 \le \theta \le 2\pi.   (6.24)
By using the standard arc-length formula, the perimeter is given by
P = \int_0^{2\pi} \sqrt{ \left( \frac{dx}{d\theta} \right)^2 + \left( \frac{dy}{d\theta} \right)^2 }\; d\theta = \int_0^{2\pi} \sqrt{ a^2 \sin^2\theta + b^2 \cos^2\theta }\; d\theta.
42 > 5. Numerical Integration > 5.2.4 Periodic Integrands

Since ab = 1, we write this as
P(b) = \int_0^{2\pi} \sqrt{ \frac{1}{b^2} \sin^2\theta + b^2 \cos^2\theta }\; d\theta = \frac{1}{b} \int_0^{2\pi} \sqrt{ (b^4 - 1)\cos^2\theta + 1 }\; d\theta.   (6.25)
We consider only the case with 1 \le b < \infty. Since the perimeters of the two ellipses
\left( \frac{x}{a} \right)^2 + \left( \frac{y}{b} \right)^2 = 1  and  \left( \frac{x}{b} \right)^2 + \left( \frac{y}{a} \right)^2 = 1
are equal, we can always consider the case in which the y-axis of the ellipse is larger than or equal to its x-axis; this also shows
P\!\left( \frac{1}{b} \right) = P(b),  b > 0.   (6.26)
43 > 5. Numerical Integration > 5.2.4 Periodic Integrands

The integrand of P(b),
f(\theta) = \frac{1}{b}\left[ (b^4 - 1)\cos^2\theta + 1 \right]^{1/2},
is periodic with period \pi. As discussed above, the trapezoidal rule is the natural choice for the numerical integration of (6.25). Nonetheless, there is a variation in the behavior of f(\theta) as b varies, and this will affect the accuracy of the numerical integration.

Figure: The graph of the integrand f(\theta) for b = 2, 5, 8.
44 > 5. Numerical Integration > 5.2.4 Periodic Integrands

[Table: Trapezoidal rule approximations of (6.25) for b = 2, 5, 8 and increasing n.]

Note that as b increases, the trapezoidal rule converges more slowly. This is due to the integrand f(\theta) changing more rapidly as b increases. For large b, f(\theta) changes very rapidly in the vicinity of \theta = \pi/2, and this causes the trapezoidal rule to be less accurate than when b is smaller, near 1. To obtain a given accuracy in the perimeter P(b), we must increase n as b increases.
45 > 5. Numerical Integration > 5.2.4 Periodic Integrands

Figure: The graph of the perimeter function z = P(b) for the ellipse.

The graph of P(b) reveals that P(b) \approx 4b for large b. Returning to (6.25), we have for large b
P(b) \approx \frac{1}{b} \int_0^{2\pi} \left( b^4 \cos^2\theta \right)^{1/2} d\theta = b \int_0^{2\pi} |\cos\theta|\; d\theta = 4b.
We need to estimate the error in the above approximation to know when we can use it to replace P(b); but it provides a way to avoid the integration of (6.25) for the most badly behaved cases.
46 > 5. Numerical Integration > Review and more

Review:
\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \underbrace{\int_{x_j}^{x_{j+1}} p_n(x)\,dx}_{I_j},
where p_n interpolates at the points x_j^{(0)}, x_j^{(1)}, ..., x_j^{(n)} on [x_j, x_{j+1}]. Local error:
\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = \int_{x_j}^{x_{j+1}} \frac{f^{(n+1)}(\xi)}{(n+1)!}\, \psi(x)\,dx
(the integrand is the error in interpolation), where
\psi(x) = (x - x_j^{(0)})(x - x_j^{(1)}) \cdots (x - x_j^{(n)}).
47 > 5. Numerical Integration > Review and more

Figure: The graph of \psi(x) on [x_j, x_{j+1}].

Conclusion: the rule is exact on P_n;
local error \le C \max |f^{(n+1)}(x)|\; h^{n+2},
global error \le C \max |f^{(n+1)}(x)|\; h^{n+1} (b - a).
48 > 5. Numerical Integration > Review and more

Observation: If \xi is a point in (x_j, x_{j+1}), then (for g with a continuous derivative)
g(\xi) = g(x_{j+1/2}) + \underbrace{(\xi - x_{j+1/2})}_{\le h}\, g'(\eta),
i.e., g(\xi) = g(x_{j+1/2}) + O(h).
49 > 5. Numerical Integration > Review and more

Local error:
\frac{1}{(n+1)!} \int_{x_j}^{x_{j+1}} \underbrace{f^{(n+1)}(\xi)}_{= f^{(n+1)}(x_{j+1/2}) + O(h)}\, \psi(x)\,dx
= \underbrace{ \frac{1}{(n+1)!}\, f^{(n+1)}(x_{j+1/2}) \int_{x_j}^{x_{j+1}} \psi(x)\,dx }_{Dominant Term, O(h^{n+2})}
+ \underbrace{ \frac{1}{(n+1)!} \int_{x_j}^{x_{j+1}} f^{(n+2)}(\eta(x))\, (\xi - x_{j+1/2})\, \psi(x)\,dx }_{Higher Order Terms, \le \frac{C}{(n+1)!} \max |f^{(n+2)}|\, h^{n+3} = O(h^{n+3})}
(take the maximum out, then integrate).
50 > 5. Numerical Integration > Review and more

The dominant term, case n = 1 (Trapezoidal Rule):
\psi(x) = (x - x_j)(x - x_{j+1}).
Figure: the graph of \psi(x) on [x_j, x_{j+1}].
51 > 5. Numerical Integration > Review and more

The dominant term, case n = 2 (Simpson's Rule):
\psi(x) = (x - x_j)(x - x_{j+1/2})(x - x_{j+1}),  \int_{x_j}^{x_{j+1}} \psi(x)\,dx = 0.
Figure: the graph of \psi(x) on [x_j, x_{j+1}].
52 > 5. Numerical Integration > Review and more

The dominant term, case n = 3 (Simpson's 3/8 Rule): local error = O(h^5),
\psi(x) = (x - x_j)(x - x_{j+1/3})(x - x_{j+2/3})(x - x_{j+1}).
Figure: the graph of \psi(x) on [x_j, x_{j+1}].
53 > 5. Numerical Integration > Review and more

The dominant term, case n = 4:
\int_{x_j}^{x_{j+1}} \psi(x)\,dx = 0,  so the local error = O(h^7).
Figure: the graph of \psi(x) on [x_j, x_{j+1}].
54 > 5. Numerical Integration > Review and more

Simpson's Rule is exact on P_2 (and on P_3, actually). With x_{j+1/2} = (x_j + x_{j+1})/2, seek
\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1/2}\, f(x_{j+1/2}) + w_{j+1} f(x_{j+1}).
55 > 5. Numerical Integration > Review and more

Exact on 1, x, x^2:
1:  \int_{x_j}^{x_{j+1}} 1\,dx = x_{j+1} - x_j = w_j + w_{j+1/2} + w_{j+1},
x:  \int_{x_j}^{x_{j+1}} x\,dx = \frac{x_{j+1}^2 - x_j^2}{2} = w_j x_j + w_{j+1/2}\, x_{j+1/2} + w_{j+1} x_{j+1},
x^2:  \int_{x_j}^{x_{j+1}} x^2\,dx = \frac{x_{j+1}^3 - x_j^3}{3} = w_j x_j^2 + w_{j+1/2}\, x_{j+1/2}^2 + w_{j+1} x_{j+1}^2.
Solving this linear system:
w_j = \frac{1}{6}(x_{j+1} - x_j),  w_{j+1/2} = \frac{4}{6}(x_{j+1} - x_j),  w_{j+1} = \frac{1}{6}(x_{j+1} - x_j).
56 > 5. Numerical Integration > Review and more

Theorem. Let h = \max_j (x_{j+1} - x_j) and I(f) = \int_a^b f(x)\,dx. For the Simpson's rule approximation,
|I(f) - S(f)| \le \frac{b-a}{2880}\, h^4 \max_{a \le x \le b} |f^{(4)}(x)|.

Trapezoidal rule versus Simpson's rule:
Cost in TR / Cost in SR = (2 function evaluations per subinterval) / (3 function evaluations per subinterval) = 2/3
(both costs are reducible further by storing previously computed values);
Accuracy in TR / Accuracy in SR = \frac{h^2 (b-a)/12}{h^4 (b-a)/2880} = \frac{240}{h^2}.
E.g., for h = 10^{-2} this is 2.4 \cdot 10^6, i.e., SR is more accurate than TR by a factor of about 2.4 \cdot 10^6.
57 > 5. Numerical Integration > Review and more

What if there is round-off error? Suppose we use the method with
f(x_j)_{computed} = f(x_j)_{true} \pm \epsilon_j,  \epsilon_j = O(machine precision) = O(\epsilon).
Then
\int_a^b f(x)\,dx \approx \sum_j (x_{j+1} - x_j)\, \frac{f(x_{j+1})_{computed} + f(x_j)_{computed}}{2}
= \sum_j (x_{j+1} - x_j)\, \frac{f(x_{j+1}) \pm \epsilon_{j+1} + f(x_j) \pm \epsilon_j}{2}
= \underbrace{ \sum_j (x_{j+1} - x_j)\, \frac{f(x_{j+1}) + f(x_j)}{2} }_{value in exact arithmetic} + \underbrace{ \sum_j (x_{j+1} - x_j)\, \frac{\pm\epsilon_{j+1} \pm \epsilon_j}{2} }_{contribution of round-off error, \le \epsilon (b-a)}.
58 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

The numerical methods studied in the first sections were based on integrating linear (trapezoidal rule) and quadratic (Simpson's rule) interpolating polynomials, and the resulting formulas were applied on subdivisions of ever smaller subintervals. We consider now a numerical method based on exact integration of polynomials of increasing degree; no subdivision of the integration interval [a, b] is used. Recall Section 4.4 of Chapter 4 on the approximation of functions.
59 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Let f(x) \in C[a, b]. Then \rho_n(f) denotes the smallest error bound that can be attained in approximating f(x) with a polynomial p(x) of degree \le n on the given interval a \le x \le b. The polynomial m_n(x) that yields this approximation is called the minimax approximation of degree n for f(x),
\max_{a \le x \le b} |f(x) - m_n(x)| = \rho_n(f),   (6.27)
and \rho_n(f) is called the minimax error.
60 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Let f(x) = e^{-x^2} for x \in [0, 1].

[Table: minimax errors \rho_n(f) for n = 1, ..., 10.]

The minimax errors \rho_n(f) converge to zero rapidly, although not at a uniform rate.
61 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

If we have a numerical integration formula that integrates low- to moderate-degree polynomials exactly, then the hope is that the same formula will integrate other functions f(x) almost exactly, provided f(x) is well approximable by such polynomials.
62 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

To illustrate the derivation of such integration formulas, we restrict ourselves to the integral
I(f) = \int_{-1}^{1} f(x)\,dx.
The integration formula is to have the general form (the Gaussian numerical integration method)
I_n(f) = \sum_{j=1}^{n} w_j f(x_j),   (6.28)
and we require that the nodes \{x_1, ..., x_n\} and weights \{w_1, ..., w_n\} be chosen so that I_n(f) = I(f) for all polynomials f(x) of as large a degree as possible.
63 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Case n = 1. The integration formula has the form
\int_{-1}^{1} f(x)\,dx \approx w_1 f(x_1).   (6.29)
Using f(x) \equiv 1 and forcing equality in (6.29) gives 2 = w_1. Using f(x) = x gives 0 = w_1 x_1, which implies x_1 = 0. Hence (6.29) becomes
\int_{-1}^{1} f(x)\,dx \approx 2 f(0) \equiv I_1(f).   (6.30)
This is the midpoint formula, and it is exact for all linear polynomials. To see that (6.30) is not exact for quadratics, let f(x) = x^2. Then the error in (6.30) is
\int_{-1}^{1} x^2\,dx - 2 \cdot (0)^2 = \frac{2}{3} \ne 0,
hence (6.30) has degree of precision 1.
64 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Case n = 2. The integration formula is
\int_{-1}^{1} f(x)\,dx \approx w_1 f(x_1) + w_2 f(x_2),   (6.31)
and it has four unspecified quantities: x_1, x_2, w_1, w_2. To determine these, we require (6.31) to be exact for the four monomials
f(x) = 1, x, x^2, x^3,
obtaining 4 equations:
2 = w_1 + w_2
0 = w_1 x_1 + w_2 x_2
\frac{2}{3} = w_1 x_1^2 + w_2 x_2^2
0 = w_1 x_1^3 + w_2 x_2^3.
65 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Case n = 2. This is a nonlinear system with the solution
w_1 = w_2 = 1,  x_1 = -\frac{\sqrt{3}}{3},  x_2 = \frac{\sqrt{3}}{3},   (6.32)
and another one obtained by reversing the signs of x_1 and x_2. This yields the integration formula
\int_{-1}^{1} f(x)\,dx \approx f\!\left( -\frac{\sqrt{3}}{3} \right) + f\!\left( \frac{\sqrt{3}}{3} \right) \equiv I_2(f),   (6.33)
which has degree of precision 3 (exact on all polynomials of degree \le 3, and not exact for f(x) = x^4).
66 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Approximate
I = \int_{-1}^{1} e^x\,dx = e - e^{-1} = 2.350402.
Using
\int_{-1}^{1} f(x)\,dx \approx f\!\left( -\frac{\sqrt{3}}{3} \right) + f\!\left( \frac{\sqrt{3}}{3} \right) = I_2(f),
we get
I_2 = e^{-\sqrt{3}/3} + e^{\sqrt{3}/3} = 2.342696,
I - I_2 = 0.007706.
The error is quite small, considering we are using only 2 node points.
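A sketch of the two-point rule (6.33) in Python, reproducing this example and checking the degree of precision (names are mine):

```python
import math

def gauss2(f):
    """Two-point Gauss-Legendre rule (6.33) on [-1, 1]; both weights are 1."""
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

I2  = gauss2(math.exp)          # approximates \int_{-1}^{1} e^x dx
I   = math.e - 1.0/math.e       # exact value, e - e^{-1}
err = I - I2                    # about 0.0077
```

Since the rule has degree of precision 3, gauss2 integrates x^2 and x^3 exactly (2/3 and 0, respectively).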
67 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Case n > 2. We seek the formula (6.28),
I_n(f) = \sum_{j=1}^{n} w_j f(x_j),
which has 2n unspecified parameters x_1, ..., x_n, w_1, ..., w_n, by forcing the integration formula to be exact for the 2n monomials
f(x) = 1, x, x^2, ..., x^{2n-1}.
In turn, this forces I_n(f) = I(f) for all polynomials f of degree \le 2n - 1. This leads to the following system of 2n nonlinear equations in 2n unknowns:
2 = w_1 + w_2 + \cdots + w_n
0 = w_1 x_1 + w_2 x_2 + \cdots + w_n x_n
\frac{2}{3} = w_1 x_1^2 + w_2 x_2^2 + \cdots + w_n x_n^2
\vdots   (6.34)
\frac{2}{2n-1} = w_1 x_1^{2n-2} + w_2 x_2^{2n-2} + \cdots + w_n x_n^{2n-2}
0 = w_1 x_1^{2n-1} + w_2 x_2^{2n-1} + \cdots + w_n x_n^{2n-1}.
The resulting formula I_n(f) has degree of precision 2n - 1.
68 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Solving this system is a formidable problem. The nodes \{x_i\} and weights \{w_i\} have been calculated and collected in tables for the most commonly used values of n; for example:
n = 2:  x_i = \pm 0.5773503,  w_i = 1.0;
n = 3:  x_i = 0, \pm 0.7745967,  w_i = 0.8888889, 0.5555556;
n = 4:  x_i = \pm 0.3399810, \pm 0.8611363,  w_i = 0.6521452, 0.3478548.

[Table: nodes and weights for Gaussian quadrature formulas, continued for larger n.]
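Using the tabulated n = 3 nodes (which have the exact values 0, \pm\sqrt{3/5}, with weights 8/9, 5/9), the rule can be applied to any interval via the change of variable (6.36) introduced below; a Python sketch (names are mine):

```python
import math

# Three-point Gauss-Legendre nodes/weights on [-1, 1] (roots of P_3)
NODES   = (-math.sqrt(0.6), 0.0, math.sqrt(0.6))
WEIGHTS = (5.0/9.0, 8.0/9.0, 5.0/9.0)

def gauss3(f, a, b):
    """Apply the 3-point rule on [a, b] via x = (b+a)/2 + t(b-a)/2."""
    mid, half = 0.5*(b + a), 0.5*(b - a)
    return half * sum(w * f(mid + half*t) for t, w in zip(NODES, WEIGHTS))

# I^(2) = \int_0^4 dx/(1+x^2) = arctan 4
approx = gauss3(lambda x: 1.0/(1.0 + x*x), 0.0, 4.0)
exact  = math.atan(4.0)
```

The 3-point rule has degree of precision 5, so it integrates x^5 over [0, 1] exactly.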
69 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

There is also another approach to the development of the numerical integration formula (6.28), using the theory of orthogonal polynomials. From that theory, it can be shown that the nodes \{x_1, ..., x_n\} are the zeros of the Legendre polynomial of degree n on the interval [-1, 1]. Recall that these polynomials were introduced in Section 4.7. For example,
P_2(x) = \frac{1}{2}(3x^2 - 1),
and its roots are the nodes given in (6.32),
x_1 = -\frac{\sqrt{3}}{3},  x_2 = \frac{\sqrt{3}}{3}.
Since the Legendre polynomials are well known, the nodes \{x_j\} can be found without any recourse to the nonlinear system (6.34).
70 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

The sequence of formulas (6.28) is called the Gaussian numerical integration method. From its definition, I_n(f) uses n nodes and is exact for all polynomials of degree \le 2n - 1. As given, I_n(f) is limited to \int_{-1}^{1} f(x)\,dx, an integral over [-1, 1], but this limitation is easily removed. Given an integral
I(f) = \int_a^b f(x)\,dx,   (6.35)
introduce the linear change of variable
x = \frac{b + a + t(b - a)}{2},  -1 \le t \le 1,   (6.36)
transforming the integral to
I(f) = \frac{b - a}{2} \int_{-1}^{1} \tilde{f}(t)\,dt,  with  \tilde{f}(t) = f\!\left( \frac{b + a + t(b - a)}{2} \right).   (6.37)
Now apply I_n to this new integral.
71 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Apply Gaussian numerical integration to the three integrals
I^{(1)} = \int_0^1 e^{-x^2}\,dx,  I^{(2)} = \int_0^4 \frac{dx}{1+x^2},  I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)},
which were used as examples for the trapezoidal and Simpson's rules. All are reformulated as integrals over [-1, 1].

[Table: errors in I_n for I^{(1)}, I^{(2)}, I^{(3)}, for increasing n.]
72 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

If these results are compared to those of the trapezoidal and Simpson's rules, then Gaussian integration of I^{(1)} and I^{(2)} is much more efficient than either of them. But the integration of the periodic integrand I^{(3)} is not as efficient as with the trapezoidal rule. These conclusions are also true for most other integrals: except for periodic integrands, Gaussian numerical integration is usually much more accurate than the trapezoidal and Simpson rules. This is even true for many integrals in which the integrand does not have a continuous derivative.
73 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Use Gaussian integration on
I = \int_0^1 \sqrt{x}\,dx = \frac{2}{3}.

[Table: errors I - I_n and ratios, where n is the number of node points.]

The ratio column is defined as
\frac{I - I_{n/2}}{I - I_n},
and it shows that the error behaves like
I - I_n \approx \frac{c}{n^3}   (6.38)
for some c. The error using Simpson's rule has an empirical rate of convergence proportional to only n^{-1.5}, a much slower rate than (6.38).
74 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

A result that relates the minimax error to the Gaussian numerical integration error:

Theorem. Let f \in C[a, b], n \ge 1. Then, if we apply Gaussian numerical integration to I = \int_a^b f(x)\,dx, the error in I_n satisfies
|I(f) - I_n(f)| \le 2(b - a)\, \rho_{2n-1}(f),   (6.39)
where \rho_{2n-1}(f) is the minimax error of degree 2n - 1 for f(x) on [a, b].
75 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Example. Apply (6.39), together with the earlier table of minimax errors, to
I = \int_0^1 e^{-x^2}\,dx.
For n = 3, the bound (6.39) implies
|I - I_3| \le 2\,\rho_5(e^{-x^2}).
The actual error is 9.95E-6.
76 > 5. Numerical Integration > 5.3 Gaussian Numerical Integration

Gaussian numerical integration is not as simple to use as the trapezoidal and Simpson rules, partly because the Gaussian nodes and weights do not have simple formulas and also because the error is harder to predict. Nonetheless, the increase in the speed of convergence is so rapid and dramatic in most instances that the method should always be considered seriously when one is doing many integrations. Estimating the error is quite difficult, and most people satisfy themselves by looking at two or more successive values. If n is doubled repeatedly, then comparing two successive values, I_n and I_{2n}, is almost always adequate for estimating the error in I_n:
I - I_n \approx I_{2n} - I_n.
This is somewhat inefficient, but the speed of convergence of I_n is so rapid that it still does not diminish its advantage over most other methods.
77 > 5. Numerical Integration > Weighted Gaussian Quadrature A common problem is the evaluation of integrals of the form
$$I(f) = \int_a^b w(x)\,f(x)\,dx \qquad (6.40)$$
with $f(x)$ a well-behaved function and $w(x)$ a possibly (and often) ill-behaved function. Gaussian quadrature has been generalized to handle such integrals for many functions $w(x)$. Examples include
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx, \qquad \int_0^1 \sqrt{x}\,f(x)\,dx, \qquad \int_0^1 f(x)\ln\Big(\frac{1}{x}\Big)\,dx.$$
The function $w(x)$ is called a weight function.
78 > 5. Numerical Integration > Weighted Gaussian Quadrature We begin by imitating the development given earlier in this section, and we do so for the special case of
$$I(f) = \int_0^1 \frac{f(x)}{\sqrt{x}}\,dx$$
in which $w(x) = \frac{1}{\sqrt{x}}$. As before, we seek numerical integration formulae of the form
$$I_n(f) = \sum_{j=1}^n w_j f(x_j) \qquad (6.41)$$
and we require that the nodes $\{x_1,\dots,x_n\}$ and the weights $\{w_1,\dots,w_n\}$ be so chosen that $I_n(f) = I(f)$ for polynomials $f(x)$ of as large a degree as possible.
79 > 5. Numerical Integration > Weighted Gaussian Quadrature Case n = 1 The integration formula has the form
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx w_1 f(x_1)$$
We force equality for $f(x) = 1$ and $f(x) = x$. This leads to the equations
$$w_1 = \int_0^1 \frac{dx}{\sqrt{x}} = 2, \qquad w_1 x_1 = \int_0^1 \frac{x}{\sqrt{x}}\,dx = \frac{2}{3}$$
Solving for $w_1$ and $x_1$, we obtain the formula
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx 2 f\Big(\frac{1}{3}\Big) \qquad (6.42)$$
and it has degree of precision 1.
80 > 5. Numerical Integration > Weighted Gaussian Quadrature Case n = 2 The integration formula has the form
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx w_1 f(x_1) + w_2 f(x_2) \qquad (6.43)$$
We force equality for $f(x) = 1, x, x^2, x^3$. This leads to the equations
$$w_1 + w_2 = \int_0^1 \frac{dx}{\sqrt{x}} = 2$$
$$w_1 x_1 + w_2 x_2 = \int_0^1 \frac{x}{\sqrt{x}}\,dx = \frac{2}{3}$$
$$w_1 x_1^2 + w_2 x_2^2 = \int_0^1 \frac{x^2}{\sqrt{x}}\,dx = \frac{2}{5}$$
$$w_1 x_1^3 + w_2 x_2^3 = \int_0^1 \frac{x^3}{\sqrt{x}}\,dx = \frac{2}{7}$$
This has the solution
$$x_1 = \frac{15 - 2\sqrt{30}}{35} \doteq 0.115587, \qquad x_2 = \frac{15 + 2\sqrt{30}}{35} \doteq 0.741556$$
$$w_1 = 1 + \frac{\sqrt{30}}{18} \doteq 1.304290, \qquad w_2 = 1 - \frac{\sqrt{30}}{18} \doteq 0.695710$$
The resulting formula (6.43) has degree of precision 3.
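A small sketch (function names are mine, not from the slides) checks the claimed degree of precision for this two-point rule:

```python
# Verify that the 2-point weighted Gauss rule for ∫_0^1 f(x)/sqrt(x) dx
# reproduces ∫_0^1 x^k / sqrt(x) dx = 2/(2k+1) exactly for k = 0, 1, 2, 3,
# but not for k = 4 (so the degree of precision is exactly 3).
import math

s = math.sqrt(30.0)
x1, x2 = (15 - 2 * s) / 35, (15 + 2 * s) / 35   # nodes
w1, w2 = 1 + s / 18, 1 - s / 18                 # weights

def rule2(f):
    """2-point weighted Gauss approximation of ∫_0^1 f(x)/sqrt(x) dx."""
    return w1 * f(x1) + w2 * f(x2)

for k in range(4):
    exact = 2.0 / (2 * k + 1)                   # ∫_0^1 x^(k-1/2) dx
    assert abs(rule2(lambda x: x ** k) - exact) < 1e-12

err4 = abs(rule2(lambda x: x ** 4) - 2.0 / 9.0) # f(x) = x^4 is not exact
print(err4)                                     # small but clearly nonzero
```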
81 > 5. Numerical Integration > Weighted Gaussian Quadrature Case n > 2 We seek the formula (6.41), which has $2n$ unspecified parameters $x_1,\dots,x_n,\,w_1,\dots,w_n$, by forcing the integration formula to be exact for the $2n$ monomials $f(x) = 1, x, x^2, \dots, x^{2n-1}$. In turn, this forces $I_n(f) = I(f)$ for all polynomials $f$ of degree $\le 2n-1$. This leads to the following system of $2n$ nonlinear equations in $2n$ unknowns:
$$w_1 + w_2 + \cdots + w_n = 2$$
$$w_1 x_1 + w_2 x_2 + \cdots + w_n x_n = \frac{2}{3} \qquad (6.44)$$
$$w_1 x_1^2 + w_2 x_2^2 + \cdots + w_n x_n^2 = \frac{2}{5}$$
$$\vdots$$
$$w_1 x_1^{2n-1} + w_2 x_2^{2n-1} + \cdots + w_n x_n^{2n-1} = \frac{2}{4n-1}$$
The resulting formula $I_n(f)$ has degree of precision $2n-1$. As before, this system is very difficult to solve directly, but there are alternative methods of deriving $\{x_i\}$ and $\{w_i\}$, based on looking at the polynomials that are orthogonal with respect to the weight function $w(x) = \frac{1}{\sqrt{x}}$ on the interval $[0,1]$.
82 > 5. Numerical Integration > Weighted Gaussian Quadrature Example We evaluate
$$I = \int_0^1 \frac{\cos(\pi x)}{\sqrt{x}}\,dx$$
using (6.42) and (6.43):
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx 2 f\Big(\frac{1}{3}\Big) = 2\cos\frac{\pi}{3} = 1 = I_1$$
$$\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx w_1 f(x_1) + w_2 f(x_2) = I_2$$
$I_2$ is a reasonable estimate of $I$, with $I - I_2 \doteq 0.0075$.
83 > 5. Numerical Integration > Weighted Gaussian Quadrature A general theory can be developed for the weighted Gaussian quadrature
$$I(f) = \int_a^b w(x)\,f(x)\,dx \approx \sum_{j=1}^n w_j f(x_j) = I_n(f) \qquad (6.45)$$
It requires the following assumptions for the weight function $w(x)$:
1. $w(x) > 0$ for $a < x < b$;
2. for all integers $n \ge 0$, $\int_a^b w(x)\,|x|^n\,dx < \infty$.
These hypotheses are the same as were assumed for the generalized least squares approximation theory following Section 4.7 of Chapter 4. This is not accidental, since both Gaussian quadrature and least squares approximation theory depend on the subject of orthogonal polynomials. The node points $\{x_j\}$ solving the system (6.44) are the zeros of the degree $n$ orthogonal polynomial on $[0,1]$ with respect to the weight function $w(x) = \frac{1}{\sqrt{x}}$. For the generalization (6.45), the nodes $\{x_i\}$ are the zeros of the degree $n$ orthogonal polynomial on $[a,b]$ with respect to the weight function $w(x)$.
84 > 5. Numerical Integration > Supplement Gauss's idea: the optimal abscissas of the $\kappa$-point Gaussian quadrature formulas are precisely the roots of the orthogonal polynomial for the same interval and weighting function.
$$\int_a^b f(x)\,dx = \underbrace{\sum_j \int_{x_j}^{x_{j+1}} f(x)\,dx}_{\text{composite formula}} = \sum_j \frac{x_{j+1}-x_j}{2} \int_{-1}^{1} f\Big(\frac{x_{j+1}+x_j}{2} + t\,\frac{x_{j+1}-x_j}{2}\Big)\,dt$$
and each subintegral is approximated by the $\kappa$-point Gauss rule
$$\int_{-1}^1 g(t)\,dt \approx \sum_{l=1}^{\kappa} w_l\, g(q_l).$$
For maximal accuracy, choose the weights $w_1,\dots,w_\kappa$ and the quadrature points $q_1,\dots,q_\kappa$ on $(-1,1)$ so the rule is exact on polynomials $p(x) \in P_{2\kappa-1}$, i.e., on $1, t, t^2, \dots, t^{2\kappa-1}$.
85 > 5. Numerical Integration > Supplement Example: 3-point Gauss, exact on $P_5$, i.e., exact on $1, t, t^2, t^3, t^4, t^5$:
$$\int_{-1}^1 g(t)\,dt \approx w_1 g(q_1) + w_2 g(q_2) + w_3 g(q_3)$$
$$\int_{-1}^1 1\,dt = 2 = w_1 + w_2 + w_3$$
$$\int_{-1}^1 t\,dt = 0 = w_1 q_1 + w_2 q_2 + w_3 q_3$$
$$\int_{-1}^1 t^2\,dt = \tfrac{2}{3} = w_1 q_1^2 + w_2 q_2^2 + w_3 q_3^2$$
$$\int_{-1}^1 t^3\,dt = 0 = w_1 q_1^3 + w_2 q_2^3 + w_3 q_3^3$$
$$\int_{-1}^1 t^4\,dt = \tfrac{2}{5} = w_1 q_1^4 + w_2 q_2^4 + w_3 q_3^4$$
$$\int_{-1}^1 t^5\,dt = 0 = w_1 q_1^5 + w_2 q_2^5 + w_3 q_3^5$$
Guess: $q_1 = -q_3$, $q_2 = 0$ ($q_1 < q_2 < q_3$), $w_1 = w_3$.
86 > 5. Numerical Integration > Supplement Example: 3-point Gauss, exact on $P_5$ (continued). With this guess:
$$2w_1 + w_2 = 2, \qquad 2w_1 q_1^2 = \frac{2}{3}, \qquad 2w_1 q_1^4 = \frac{2}{5},$$
hence
$$q_1 = -\sqrt{\frac{3}{5}}, \qquad q_3 = \sqrt{\frac{3}{5}}, \qquad w_1 = w_3 = \frac{5}{9}, \qquad w_2 = \frac{8}{9}$$
A. H. Stroud and D. Secrest: Gaussian Quadrature Formulas. Englewood Cliffs, NJ: Prentice-Hall, 1966.
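The derived nodes and weights can be checked directly (a sketch, not from the slides):

```python
# Verify that the 3-point Gauss rule with nodes ±sqrt(3/5), 0 and weights
# 5/9, 8/9, 5/9 integrates t^k over [-1, 1] exactly for k = 0..5 but not k = 6.
import math

q = math.sqrt(3.0 / 5.0)
nodes = [-q, 0.0, q]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def gauss3(g):
    """3-point Gauss approximation of ∫_{-1}^{1} g(t) dt."""
    return sum(w * g(t) for w, t in zip(weights, nodes))

def exact_monomial(k):
    """∫_{-1}^{1} t^k dt: 0 for odd k, 2/(k+1) for even k."""
    return 0.0 if k % 2 else 2.0 / (k + 1)

errs = [abs(gauss3(lambda t: t ** k) - exact_monomial(k)) for k in range(7)]
print(errs)  # the first six entries are ~0; the k = 6 entry is not
```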
87 > 5. Numerical Integration > Supplement The idea of Gauss: Gauss-Lobatto.
$$\int_{-1}^1 g(t)\,dt \approx w_1 g(-1) + w_2 g(q_2) + \cdots + w_{k-1} g(q_{k-1}) + w_k g(1)$$
The $k$ nodes (including both endpoints) give a $k$-point formula that is exact on $P_{2k-3}$. (The degree of precision is decreased by 2 compared with the Gauss quadrature formula.)
88 > 5. Numerical Integration > Supplement Adaptive Quadrature Problem: given $\int_a^b f(x)\,dx$ and a preassigned tolerance $\varepsilon$, compute an approximation $I(f)$
(a) with assured accuracy:
$$\Big|\int_a^b f(x)\,dx - I(f)\Big| < \varepsilon$$
(b) at minimal / near-minimal cost (number of function evaluations).
Strategy: LOCALIZE!
89 > 5. Numerical Integration > Supplement Localization Theorem Let $I(f) = \sum_j I_j(f)$, where $I_j(f) \approx \int_{x_j}^{x_{j+1}} f(x)\,dx$. If
$$\Big|\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j(f)\Big| < \varepsilon\,\frac{x_{j+1}-x_j}{b-a} \quad (= \text{local tolerance}),$$
then
$$\Big|\int_a^b f(x)\,dx - I(f)\Big| < \varepsilon \quad (= \text{tolerance}).$$
Proof:
$$\Big|\int_a^b f(x)\,dx - I(f)\Big| = \Big|\sum_j \Big(\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j(f)\Big)\Big| \le \sum_j \Big|\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j(f)\Big|$$
$$< \sum_j \varepsilon\,\frac{x_{j+1}-x_j}{b-a} = \frac{\varepsilon}{b-a}\sum_j (x_{j+1}-x_j) = \frac{\varepsilon}{b-a}\,(b-a) = \varepsilon.$$
90 > 5. Numerical Integration > Supplement Need: an estimator for the local error, and a strategy: when to cut $h$ to ensure accuracy? when to increase $h$ to ensure minimal cost?
One approach: halving and doubling! Recall the trapezoidal rule
$$I_j \approx (x_{j+1}-x_j)\,\frac{f(x_j)+f(x_{j+1})}{2}.$$
A priori estimate:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = -\frac{(x_{j+1}-x_j)^3}{12}\,f''(s_j)$$
for some $s_j$ in $(x_j, x_{j+1})$.
91 > 5. Numerical Integration > Supplement Step 1: compute
$$I_j = \frac{f(x_j)+f(x_{j+1})}{2}\,(x_{j+1}-x_j)$$
Step 2: cut the interval in half and reuse the trapezoidal rule:
$$\tilde I_j = \frac{f(x_j)+f(x_{j+\frac12})}{2}\,(x_{j+\frac12}-x_j) + \frac{f(x_{j+\frac12})+f(x_{j+1})}{2}\,(x_{j+1}-x_{j+\frac12})$$
Error estimates, with $h_j = x_{j+1}-x_j$ (1st and 2nd uses of the trapezoidal rule):
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = -\frac{h_j^3}{12}\,f''(\xi_j) = e_j$$
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - \tilde I_j = -\frac{(h_j/2)^3}{12}\,f''(\eta_1) - \frac{(h_j/2)^3}{12}\,f''(\eta_2) = \underbrace{-\frac{h_j^3}{48}\,f''(\xi_j) + O(h_j^4)}_{\tilde e_j}$$
92 > 5. Numerical Integration > Supplement Subtracting:
$$e_j = 4\tilde e_j + \text{higher order terms}$$
$$\tilde I_j - I_j = 3\tilde e_j + O(h^4) \;\Longrightarrow\; \tilde e_j = \frac{\tilde I_j - I_j}{3} + \underbrace{\text{higher order terms}}_{O(h^4)}$$
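The estimate $\tilde e_j \approx (\tilde I_j - I_j)/3$ is easy to see in practice. A sketch (my choice of test function, not from the slides):

```python
# Demonstrate the halving estimate e ≈ (Ĩ - I)/3 for the trapezoidal rule
# on a single interval, [0, 1] with f(x) = exp(x).
import math

def trap(f, a, b):
    """One-panel trapezoidal rule on [a, b]."""
    return (b - a) * (f(a) + f(b)) / 2.0

f = math.exp
a, b = 0.0, 1.0
exact = math.e - 1.0                      # ∫_0^1 e^x dx

I1 = trap(f, a, b)                        # one panel
m = (a + b) / 2.0
I2 = trap(f, a, m) + trap(f, m, b)        # two half panels

est = (I2 - I1) / 3.0                     # estimate of the error in I2
true_err = exact - I2
print(est, true_err)                      # the two values are close
```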
93 > 5. Numerical Integration > Supplement 4-point Gauss: exact on $P_7$; local error $O(h^9)$, global error $O(h^8)$. A priori estimate:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = C\,(x_{j+1}-x_j)^9 f^{(8)}(\xi_j) = C\,h_j^9\,f^{(8)}(\xi_j)$$
Halving:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - \tilde I_j = C\Big(\frac{h_j}{2}\Big)^9 f^{(8)}(\xi_j') + C\Big(\frac{h_j}{2}\Big)^9 f^{(8)}(\xi_j'') = \underbrace{\frac{C}{2^8}\,h_j^9\,f^{(8)}(\tilde\xi_j) + O(h^{10})}_{\tilde e_j}$$
so
$$\tilde I_j - I_j = 255\,\tilde e_j + O(h^{10}), \qquad \tilde e_j = \frac{\tilde I_j - I_j}{255} + \underbrace{\text{higher order terms}}_{O(h^{10})}$$
94 > 5. Numerical Integration > Supplement Algorithm
Input: $a$, $b$, $f(x)$; upper error tolerance $\varepsilon_{\max}$; initial mesh width $h$.
Initialize: Integral $= 0.0$; $x_L = a$; $\varepsilon_{\min} = \dfrac{1}{2^{k+3}}\,\varepsilon_{\max}$
*: $x_R = x_L + h$ (if $x_R > b$, set $x_R = b$, do the integral one more time and stop)
Compute on $[x_L, x_R]$: $I$, $\tilde I$ and
$$EST = \frac{\tilde I - I}{2^{k+1} - 1} \quad \text{(if the rule is exact on } P_k\text{)}$$
95 > 5. Numerical Integration > Supplement
error is "just right": if $\varepsilon_{\min}\,\dfrac{h}{b-a} < EST < \varepsilon_{\max}\,\dfrac{h}{b-a}$:
Integral $\leftarrow$ Integral $+ \tilde I$; $x_L \leftarrow x_R$; go to *
error is "too small": if $EST \le \varepsilon_{\min}\,\dfrac{h}{b-a}$:
Integral $\leftarrow$ Integral $+ \tilde I$; $x_L \leftarrow x_R$; $h \leftarrow 2h$; go to *
error is "too big": if $EST \ge \varepsilon_{\max}\,\dfrac{h}{b-a}$:
$h \leftarrow h/2.0$; go to *
STOP END
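A minimal working sketch of this halving/doubling strategy, using the trapezoidal rule ($k = 1$, so $EST = (\tilde I - I)/3$ and $\varepsilon_{\min} = \varepsilon_{\max}/16$). This is my own rendering of the slides' pseudocode, not a verbatim implementation:

```python
# Adaptive trapezoidal integrator with halving/doubling step control.
# The local error on [xL, xR] is estimated by EST = |Ĩ - I|/3 and compared
# against the local tolerance eps * (xR - xL)/(b - a).
import math

def adaptive_trap(f, a, b, eps, h0=None):
    """Adaptive trapezoidal rule; returns (integral, function-eval count)."""
    h = h0 if h0 is not None else (b - a) / 4.0
    total, xL, nev = 0.0, a, 0
    while xL < b:
        xR = min(xL + h, b)
        width = xR - xL
        m = xL + width / 2.0
        I = width * (f(xL) + f(xR)) / 2.0                     # one panel
        I2 = (width / 2.0) * (f(xL) + 2.0 * f(m) + f(xR)) / 2.0  # two panels
        nev += 3
        est = abs(I2 - I) / 3.0
        local_tol = eps * width / (b - a)
        if est >= local_tol and width > 1e-14:
            h /= 2.0              # error too big: halve and retry this panel
            continue
        total += I2               # accept the finer (two-panel) value
        xL = xR
        if est <= (eps / 16.0) * width / (b - a):
            h *= 2.0              # error well below tolerance: double the step
    return total, nev

val, nev = adaptive_trap(math.exp, 0.0, 1.0, 1e-6)
print(val, abs(val - (math.e - 1.0)), nev)
```

Function evaluations are recounted at panel boundaries here for simplicity; a production version would cache them.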
96 > 5. Numerical Integration > Supplement Trapezium rule
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx (x_{j+1}-x_j)\,\frac{f(x_j)+f(x_{j+1})}{2}$$
$$\int_{x_j}^{x_{j+1}} \big(f(x) - p_1(x)\big)\,dx = \int_{x_j}^{x_{j+1}} \frac{f''(\xi)}{2}\,\underbrace{(x-x_j)(x-x_{j+1})}_{\psi(x)}\,dx = \Big(\frac{f''(x_j)}{2} + O(h)\Big)\underbrace{\int_{x_j}^{x_{j+1}} \psi(x)\,dx}_{\text{integrate exactly}}$$
97 > 5. Numerical Integration > Supplement The mysteries of $\psi(x)$ [figure: plot of $\psi(x) = (x-q_1)(x-q_2)\cdots(x-q_7)$ on $[x_j, x_{j+1}]$, oscillating and changing sign at the nodes $q_1,\dots,q_7$]
98 > 5. Numerical Integration > Supplement Error in $(k+1)$-point quadrature. $p_k(x)$ interpolates $f(x)$ at $x_j \le q_1 < q_2 < \dots < q_{k+1} \le x_{j+1}$, so
$$f(x) - p_k(x) = \frac{f^{(k+1)}(\xi)}{(k+1)!}\,\psi(x)$$
and
$$\underbrace{\int_{x_j}^{x_{j+1}} f(x)\,dx}_{\text{true}} - \underbrace{\int_{x_j}^{x_{j+1}} p_k(x)\,dx}_{\text{approx}} = \int_{x_j}^{x_{j+1}} \frac{\psi(x)}{(k+1)!}\,f^{(k+1)}(\xi)\,dx$$
99 > 5. Numerical Integration > Supplement 1. A simple error bound. Ignoring the oscillation of $\psi(x)$:
$$|\text{error}| \le \frac{\max |f^{(k+1)}|}{(k+1)!} \int_{x_j}^{x_{j+1}} |\psi(x)|\,dx$$
Since $|\psi(x)| = |x-q_1|\cdots|x-q_{k+1}| \le h^{k+1}$ on $[x_j, x_{j+1}]$,
$$|\text{error}| \le \frac{\max |f^{(k+1)}|}{(k+1)!}\,h^{k+1} \int_{x_j}^{x_{j+1}} dx = \frac{\max |f^{(k+1)}|}{(k+1)!}\,(x_{j+1}-x_j)^{k+2}$$
[figure: $\psi(x)$ and $|\psi(x)|$ on $[x_j, x_{j+1}]$ with the nodes $q_1,\dots,q_7$ marked]
100 > 5. Numerical Integration > Supplement 2. Analysis without cancellation. Lemma Let $\xi, x \in (x_j, x_{j+1})$ with $x_j < \xi < x < x_{j+1}$. Then
$$f^{(k+1)}(\xi) = f^{(k+1)}(x) + (\xi - x)\,f^{(k+2)}(\eta) \quad \text{(MVT)}$$
for some $\eta$ between $\xi$ and $x$, and $|\xi - x| \le x_{j+1} - x_j = h$.
101 > 5. Numerical Integration > Supplement 2. Analysis without cancellation (continued).
$$\text{error} = \text{true} - \text{approx} = \frac{1}{(k+1)!} \int_{x_j}^{x_{j+1}} \psi(x)\,\Big[\underbrace{f^{(k+1)}(x)}_{\text{fixed}} + \underbrace{(\xi - x)\,f^{(k+2)}(\eta)}_{O(h)}\Big]\,dx$$
$$= \frac{f^{(k+1)}(x)}{(k+1)!}\underbrace{\int_{x_j}^{x_{j+1}} \psi(x)\,dx}_{=0 \text{ if } \int_{x_j}^{x_{j+1}}\psi(x)\,dx = 0} + \frac{1}{(k+1)!}\int_{x_j}^{x_{j+1}} f^{(k+2)}(\eta)\underbrace{(\xi - x)}_{\le h}\underbrace{\psi(x)}_{\le h^{k+1}}\,dx \le \frac{\max|f^{(k+2)}|}{(k+1)!}\,h^{k+3}$$
This is the mechanism behind the error for Simpson's rule, i.e., cancellation.
102 > 5. Numerical Integration > Supplement Lemma $\psi(x)$ interpolates zero at $k+1$ points and $\deg \psi(x) = k+1$. The general result: if $p(q_l) = 0$, $l = 1, \dots, k+1$, and $p \in P_{k+1}$, then $p(x) = \text{Constant} \cdot \psi(x)$.
103 > 5. Numerical Integration > Supplement Questions:
1) How to pick the points $q_1, \dots, q_{k+1}$ so that
$$\int_{-1}^1 g(x)\,dx \approx w_1 g(q_1) + \cdots + w_{k+1} g(q_{k+1}) \qquad (6.46)$$
integrates $P_{k+m}$ exactly?
2) What does this imply about the error?
Remark ($m = 1$): pick $q_1, \dots, q_{k+1}$ so that $\int_{-1}^1 \psi(x)\,dx = 0$; then the error converges as $O(h^{k+3})$.
104 > 5. Numerical Integration > Supplement Step 1 Let $r_1$ be some fixed point on $[-1,1]$, distinct from the nodes: $-1 < q_1 < q_2 < \dots < r_1 < \dots < q_k < q_{k+1}$, and let $p_k$ interpolate $g(x)$ at $q_1, \dots, q_{k+1}$. Set
$$p_{k+1}(x) = p_k(x) + \psi(x)\,\frac{g(r_1) - p_k(r_1)}{\psi(r_1)} \qquad (6.47)$$
Claim: $p_{k+1}$ interpolates $g(x)$ at the $k+2$ points $q_1, \dots, q_{k+1}, r_1$.
Suppose now that (6.46) is exact on $P_{k+1}$; then, substituting (6.47),
$$\underbrace{\int_{-1}^1 g(x)\,dx - \int_{-1}^1 p_k(x)\,dx}_{\text{error in } (k+1)\text{-point rule, } E_{k+1}} = \underbrace{\int_{-1}^1 g(x)\,dx - \int_{-1}^1 p_{k+1}(x)\,dx}_{\text{error in } (k+2)\text{-point rule, } E_{k+2}} + \frac{g(r_1) - p_k(r_1)}{\psi(r_1)}\int_{-1}^1 \psi(x)\,dx$$
105 > 5. Numerical Integration > Supplement Step 1 Conclusion So
$$E_{k+2} = E_{k+1} - \frac{g(r_1) - p_k(r_1)}{\psi(r_1)}\int_{-1}^1 \psi(x)\,dx$$
If $\int_{-1}^1 \psi(x)\,dx = 0$, then the error in the $(k+1)$-point rule is exactly the same as if we had used $k+2$ points.
106 > 5. Numerical Integration > Supplement Step 2 Let $r_1, r_2$ be fixed points in $[-1,1]$, and interpolate at the $k+3$ points $q_1, \dots, q_{k+1}, r_1, r_2$:
$$p_{k+2}(x) = p_k(x) + \psi(x)(x - r_1)\,\frac{g(r_2) - p_k(r_2)}{(r_2 - r_1)\,\psi(r_2)} + \psi(x)(x - r_2)\,\frac{g(r_1) - p_k(r_1)}{(r_1 - r_2)\,\psi(r_1)}$$
Consider the error in the rule with $k+3$ points:
$$E_{k+3} = \int_{-1}^1 g(x)\,dx - \int_{-1}^1 p_{k+2}(x)\,dx$$
$$= \int_{-1}^1 g(x)\,dx - \int_{-1}^1 p_k(x)\,dx - \frac{g(r_1) - p_k(r_1)}{(r_1 - r_2)\,\psi(r_1)}\int_{-1}^1 \psi(x)(x - r_2)\,dx - \frac{g(r_2) - p_k(r_2)}{(r_2 - r_1)\,\psi(r_2)}\int_{-1}^1 \psi(x)(x - r_1)\,dx \qquad (6.48)$$
107 > 5. Numerical Integration > Supplement Step 2 So
$$E_{k+3} = E_{k+1} + \text{Const}_1 \int_{-1}^1 \psi(x)(x - r_1)\,dx + \text{Const}_2 \int_{-1}^1 \psi(x)(x - r_2)\,dx$$
Conclusion If $\int_{-1}^1 \psi(x)\,dx = 0$ and $\int_{-1}^1 x\,\psi(x)\,dx = 0$, then the $(k+1)$-point rule has the same error as a $(k+3)$-point rule.
108 > 5. Numerical Integration > Supplement Continuing in this way,
$$E_{k+1+m} = E_{k+1} + C_0 \int_{-1}^1 \psi(x)\,dx + C_1 \int_{-1}^1 \psi(x)\,x\,dx + \cdots + C_{m-1} \int_{-1}^1 \psi(x)\,x^{m-1}\,dx \qquad (6.49{-}6.50)$$
Conclusion If $\int_{-1}^1 \psi(x)\,x^j\,dx = 0$ for $j = 0, \dots, m-1$, then the error is as good as using $m$ extra points.
109 > 5. Numerical Integration > Supplement Overview Interpolating Quadrature Interpolate $f(x)$ at $q_0, q_1, q_2, \dots, q_k$ by $p_k(x)$:
$$f(x) - p_k(x) = \frac{f^{(k+1)}(\xi)}{(k+1)!}\,(x - q_0)(x - q_1)\cdots(x - q_k)$$
$$\int_{-1}^1 f(x)\,dx - \int_{-1}^1 p_k(x)\,dx = \frac{1}{(k+1)!}\int_{-1}^1 f^{(k+1)}(\xi)\,\psi(x)\,dx$$
Gauss rules: pick the $q_l$ to maximize exactness. What is the accuracy? What are the $q_l$'s?
110 > 5. Numerical Integration > Supplement Overview Interpolating at the $k+1+m$ points $q_0, \dots, q_k, r_1, \dots, r_m$ gives error $E_{k+1+m}$; interpolating at the $k+1$ points $q_0, \dots, q_k$ gives error $E_{k+1}$ with
$$E_{k+1+m} = E_{k+1} + c_0 \int_{-1}^1 \psi(x)\cdot 1\,dx + c_1 \int_{-1}^1 \psi(x)\,x\,dx + \cdots + c_{m-1} \int_{-1}^1 \psi(x)\,x^{m-1}\,dx$$
Definition $p(x)$ is the $(\mu+1)$st orthogonal polynomial on $[-1,1]$ (weight $w(x) \equiv 1$) if $p(x) \in P_{\mu+1}$ and $\int_{-1}^1 p(x)\,x^l\,dx = 0$, $l = 0, \dots, \mu$, i.e., $\int_{-1}^1 p(x)\,q(x)\,dx = 0$ for all $q \in P_\mu$.
111 > 5. Numerical Integration > Supplement Overview Pick $q_0, q_1, \dots, q_k$ so that
$$\int_{-1}^1 \psi(x)\cdot 1\,dx = \int_{-1}^1 \psi(x)\,x\,dx = \cdots = \int_{-1}^1 \psi(x)\,x^{m-1}\,dx = 0,$$
i.e.
$$\int_{-1}^1 \underbrace{\psi(x)}_{\deg\,k+1}\,\underbrace{q(x)}_{\deg\,\le m-1}\,dx = 0 \quad \forall q \in P_{m-1}$$
So, maximum accuracy if $\psi(x)$ is the orthogonal polynomial of degree $k+1$:
$$\int_{-1}^1 \psi(x)\,q(x)\,dx = 0 \quad \forall q \in P_k, \qquad m - 1 = k, \quad m = k+1$$
So, the Gauss quadrature points are the roots of the orthogonal polynomial.
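This conclusion can be checked numerically (a sketch, not from the slides): on $[-1,1]$ with weight $w(x) \equiv 1$, the orthogonal polynomials are the Legendre polynomials, so the Gauss nodes should coincide with the Legendre roots.

```python
# Check that the Gauss-Legendre nodes from numpy coincide with the roots of
# the Legendre polynomial of the same degree.
import numpy as np

n = 5
nodes, _ = np.polynomial.legendre.leggauss(n)

# Coefficients (in the Legendre basis) selecting the degree-n polynomial P_n.
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0
roots = np.sort(np.polynomial.legendre.legroots(coeffs))

diff = np.max(np.abs(np.sort(nodes) - roots))
print(diff)  # agreement to near machine precision
```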
112 > 5. Numerical Integration > Supplement Overview Adaptivity $\tilde I = \tilde I_1 + \tilde I_2$ over the two half intervals; the trapezium rule's local error is $O(h^3)$:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I = e, \qquad \int_{x_j}^{x_{j+1}} f(x)\,dx - \tilde I = \tilde e = \tilde e_1 + \tilde e_2$$
$$e \approx 8\tilde e_1, \quad e \approx 8\tilde e_2, \quad \text{so } e \approx 4\tilde e$$
$$\tilde I - I = 3\tilde e \;(+\text{ higher order terms}) \;\Longrightarrow\; \tilde e = \frac{\tilde I - I}{3} \;(+\text{ higher order terms})$$
113 > 5. Numerical Integration > Supplement Overview Final Observation
$$\text{True} - I \approx 4\tilde e$$
$$\text{True} - \tilde I \approx \tilde e$$
So we can solve for $\tilde e$ and True: 2 equations, 2 unknowns ($\tilde e$, True). Solving for True:
$$\text{True} \approx \tilde I + \tilde e \approx \tilde I + \frac{\tilde I - I}{3} \;(+\text{ higher order terms})$$
114 > 5. Numerical Integration > 5.4 Numerical Differentiation To numerically calculate the derivative of $f(x)$, begin by recalling the definition of the derivative
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$
This justifies using
$$f'(x) \approx \frac{f(x+h) - f(x)}{h} \equiv D_h f(x) \qquad (6.51)$$
for small values of $h$. $D_h f(x)$ is called a numerical derivative of $f(x)$ with stepsize $h$.
115 > 5. Numerical Integration > 5.4 Numerical Differentiation Example Use $D_h f$ to approximate the derivative of $f(x) = \cos(x)$ at $x = \frac{\pi}{6}$. [table: $h$, $D_h f$, Error, Ratio; the tabulated values were lost] Looking at the error column, we see the error is nearly proportional to $h$; when $h$ is halved, the error is almost halved.
116 > 5. Numerical Integration > 5.4 Numerical Differentiation To explain the behaviour in this example, Taylor's theorem can be used to find an error formula. Expanding $f(x+h)$ about $x$, we get
$$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(c)$$
for some $c$ between $x$ and $x+h$. Substituting on the right side of (6.51), we obtain
$$D_h f(x) = \frac{1}{h}\Big\{\Big[f(x) + h f'(x) + \frac{h^2}{2} f''(c)\Big] - f(x)\Big\} = f'(x) + \frac{h}{2} f''(c)$$
$$f'(x) - D_h f(x) = -\frac{h}{2} f''(c) \qquad (6.52)$$
The error is proportional to $h$, agreeing with the results in the table above.
117 > 5. Numerical Integration > 5.4 Numerical Differentiation For that example,
$$f'\Big(\frac{\pi}{6}\Big) - D_h f\Big(\frac{\pi}{6}\Big) = \frac{h}{2}\cos(c) \qquad (6.53)$$
where $c$ is between $\frac{1}{6}\pi$ and $\frac{1}{6}\pi + h$. One can check that if $c$ is replaced by $\frac{1}{6}\pi$, the RHS of (6.53) agrees with the error column in the table. As seen in the example, we use the formula (6.51) with a positive stepsize $h > 0$. The formula (6.51) is commonly known as the forward difference formula for the first derivative. We can formally replace $h$ by $-h$ in (6.51) to obtain
$$f'(x) \approx \frac{f(x) - f(x-h)}{h}, \qquad h > 0 \qquad (6.54)$$
This is the backward difference formula for the first derivative. A derivation similar to that leading to (6.52) shows that
$$f'(x) - \frac{f(x) - f(x-h)}{h} = \frac{h}{2} f''(c) \qquad (6.55)$$
for some $c$ between $x$ and $x-h$. Thus, we expect the accuracy of the backward difference formula to be almost the same as that of the forward difference formula.
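The example's behavior is easy to reproduce (a sketch, not the slides' original table):

```python
# Forward-difference error for f(x) = cos(x) at x = π/6: the error is
# nearly (h/2) cos(π/6) and halves as h halves, matching (6.53).
import math

x = math.pi / 6.0
fprime = -math.sin(x)                       # exact derivative of cos at π/6

def Dh(h):
    """Forward difference (6.51)."""
    return (math.cos(x + h) - math.cos(x)) / h

errs = {h: fprime - Dh(h) for h in (0.1, 0.05, 0.025)}
for h, e in errs.items():
    print(h, e, (h / 2.0) * math.cos(x))    # error ≈ (h/2) cos(π/6)
```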
118 > 5. Numerical Integration > Differentiation Using Interpolation Let $P_n(x)$ denote the degree $n$ polynomial that interpolates $f(x)$ at the $n+1$ node points $x_0, \dots, x_n$. To calculate $f'(x)$ at some point $x = t$, use
$$f'(t) \approx P_n'(t) \qquad (6.56)$$
Many different formulae can be obtained by (1) varying $n$ and (2) varying the placement of the nodes $x_0, \dots, x_n$ relative to the point $t$ of interest.
More informationInterpolation Theory
Numerical Analysis Massoud Malek Interpolation Theory The concept of interpolation is to select a function P (x) from a given class of functions in such a way that the graph of y P (x) passes through the
More informationLEAST SQUARES APPROXIMATION
LEAST SQUARES APPROXIMATION One more approach to approximating a function f (x) on an interval a x b is to seek an approximation p(x) with a small average error over the interval of approximation. A convenient
More informationFixed point iteration and root finding
Fixed point iteration and root finding The sign function is defined as x > 0 sign(x) = 0 x = 0 x < 0. It can be evaluated via an iteration which is useful for some problems. One such iteration is given
More informationMATH 163 HOMEWORK Week 13, due Monday April 26 TOPICS. c n (x a) n then c n = f(n) (a) n!
MATH 63 HOMEWORK Week 3, due Monday April 6 TOPICS 4. Taylor series Reading:.0, pages 770-77 Taylor series. If a function f(x) has a power series representation f(x) = c n (x a) n then c n = f(n) (a) ()
More informationWe consider the problem of finding a polynomial that interpolates a given set of values:
Chapter 5 Interpolation 5. Polynomial Interpolation We consider the problem of finding a polynomial that interpolates a given set of values: x x 0 x... x n y y 0 y... y n where the x i are all distinct.
More informationx x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)
Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)
More informationMA 8019: Numerical Analysis I Solution of Nonlinear Equations
MA 8019: Numerical Analysis I Solution of Nonlinear Equations Suh-Yuh Yang ( 楊肅煜 ) Department of Mathematics, National Central University Jhongli District, Taoyuan City 32001, Taiwan syyang@math.ncu.edu.tw
More informationStat 451 Lecture Notes Numerical Integration
Stat 451 Lecture Notes 03 12 Numerical Integration Ryan Martin UIC www.math.uic.edu/~rgmartin 1 Based on Chapter 5 in Givens & Hoeting, and Chapters 4 & 18 of Lange 2 Updated: February 11, 2016 1 / 29
More informationEngg. Math. I. Unit-I. Differential Calculus
Dr. Satish Shukla 1 of 50 Engg. Math. I Unit-I Differential Calculus Syllabus: Limits of functions, continuous functions, uniform continuity, monotone and inverse functions. Differentiable functions, Rolle
More informationExam in TMA4215 December 7th 2012
Norwegian University of Science and Technology Department of Mathematical Sciences Page of 9 Contact during the exam: Elena Celledoni, tlf. 7359354, cell phone 48238584 Exam in TMA425 December 7th 22 Allowed
More informationExtrapolation in Numerical Integration. Romberg Integration
Extrapolation in Numerical Integration Romberg Integration Matthew Battaglia Joshua Berge Sara Case Yoobin Ji Jimu Ryoo Noah Wichrowski Introduction Extrapolation: the process of estimating beyond the
More informationSection Taylor and Maclaurin Series
Section.0 Taylor and Maclaurin Series Ruipeng Shen Feb 5 Taylor and Maclaurin Series Main Goal: How to find a power series representation for a smooth function us assume that a smooth function has a power
More informationNumerical Methods I: Numerical Integration/Quadrature
1/20 Numerical Methods I: Numerical Integration/Quadrature Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 30, 2017 umerical integration 2/20 We want to approximate the definite integral
More informationPractice Exam 2 (Solutions)
Math 5, Fall 7 Practice Exam (Solutions). Using the points f(x), f(x h) and f(x + h) we want to derive an approximation f (x) a f(x) + a f(x h) + a f(x + h). (a) Write the linear system that you must solve
More informationNumerical Methods. Aaron Naiman Jerusalem College of Technology naiman
Numerical Methods Aaron Naiman Jerusalem College of Technology naiman@jct.ac.il http://jct.ac.il/ naiman based on: Numerical Mathematics and Computing by Cheney & Kincaid, c 1994 Brooks/Cole Publishing
More informationMath 122 Test 3. April 15, 2014
SI: Math 1 Test 3 April 15, 014 EF: 1 3 4 5 6 7 8 Total Name Directions: 1. No books, notes or 6 year olds with ear infections. You may use a calculator to do routine arithmetic computations. You may not
More informationYou can learn more about the services offered by the teaching center by visiting
MAC 232 Exam 3 Review Spring 209 This review, produced by the Broward Teaching Center, contains a collection of questions which are representative of the type you may encounter on the exam. Other resources
More informationExample 1 Which of these functions are polynomials in x? In the case(s) where f is a polynomial,
1. Polynomials A polynomial in x is a function of the form p(x) = a 0 + a 1 x + a 2 x 2 +... a n x n (a n 0, n a non-negative integer) where a 0, a 1, a 2,..., a n are constants. We say that this polynomial
More informationSolutions to Math 41 Final Exam December 10, 2012
Solutions to Math 4 Final Exam December,. ( points) Find each of the following limits, with justification. If there is an infinite limit, then explain whether it is or. x ln(t + ) dt (a) lim x x (5 points)
More informationNumerical Integra/on
Numerical Integra/on Applica/ons The Trapezoidal Rule is a technique to approximate the definite integral where For 1 st order: f(a) f(b) a b Error Es/mate of Trapezoidal Rule Truncation error: From Newton-Gregory
More informationAn idea how to solve some of the problems. diverges the same must hold for the original series. T 1 p T 1 p + 1 p 1 = 1. dt = lim
An idea how to solve some of the problems 5.2-2. (a) Does not converge: By multiplying across we get Hence 2k 2k 2 /2 k 2k2 k 2 /2 k 2 /2 2k 2k 2 /2 k. As the series diverges the same must hold for the
More information5 Numerical Integration & Dierentiation
5 Numerical Integration & Dierentiation Department of Mathematics & Statistics ASU Outline of Chapter 5 1 The Trapezoidal and Simpson Rules 2 Error Formulas 3 Gaussian Numerical Integration 4 Numerical
More informationOutline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation
Outline Interpolation 1 Interpolation 2 3 Michael T. Heath Scientific Computing 2 / 56 Interpolation Motivation Choosing Interpolant Existence and Uniqueness Basic interpolation problem: for given data
More informationLecture 32: Taylor Series and McLaurin series We saw last day that some functions are equal to a power series on part of their domain.
Lecture 32: Taylor Series and McLaurin series We saw last day that some functions are equal to a power series on part of their domain. For example f(x) = 1 1 x = 1 + x + x2 + x 3 + = ln(1 + x) = x x2 2
More informationn=0 ( 1)n /(n + 1) converges, but not
Math 07H Topics for the third exam (and beyond) (Technically, everything covered on the first two exams plus...) Absolute convergence and alternating series A series a n converges absolutely if a n converges.
More information18.01 EXERCISES. Unit 3. Integration. 3A. Differentials, indefinite integration. 3A-1 Compute the differentials df(x) of the following functions.
8. EXERCISES Unit 3. Integration 3A. Differentials, indefinite integration 3A- Compute the differentials df(x) of the following functions. a) d(x 7 + sin ) b) d x c) d(x 8x + 6) d) d(e 3x sin x) e) Express
More informationNumerical Integration
Chapter 1 Numerical Integration In this chapter we examine a few basic numerical techniques to approximate a definite integral. You may recall some of this from Calculus I where we discussed the left,
More informationPractice problems from old exams for math 132 William H. Meeks III
Practice problems from old exams for math 32 William H. Meeks III Disclaimer: Your instructor covers far more materials that we can possibly fit into a four/five questions exams. These practice tests are
More informationWeek 2 Techniques of Integration
Week Techniques of Integration Richard Earl Mathematical Institute, Oxford, OX LB, October Abstract Integration by Parts. Substitution. Rational Functions. Partial Fractions. Trigonometric Substitutions.
More information1 Taylor-Maclaurin Series
Taylor-Maclaurin Series Writing x = x + n x, x = (n ) x,.., we get, ( y ) 2 = y ( x) 2... and letting n, a leap of logic reduces the interpolation formula to: y = y + (x x )y + (x x ) 2 2! y +... Definition...
More informationLECTURE 14 NUMERICAL INTEGRATION. Find
LECTURE 14 NUMERCAL NTEGRATON Find b a fxdx or b a vx ux fx ydy dx Often integration is required. However te form of fx may be suc tat analytical integration would be very difficult or impossible. Use
More informationNumerical techniques to solve equations
Programming for Applications in Geomatics, Physical Geography and Ecosystem Science (NGEN13) Numerical techniques to solve equations vaughan.phillips@nateko.lu.se Vaughan Phillips Associate Professor,
More information1.4 Techniques of Integration
.4 Techniques of Integration Recall the following strategy for evaluating definite integrals, which arose from the Fundamental Theorem of Calculus (see Section.3). To calculate b a f(x) dx. Find a function
More informationLecture 4: Numerical solution of ordinary differential equations
Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor
More information