[Figure: the area under $y = 1/(1+x^2)$ on $[0,1]$, approximated by the trapezoidal rule.]

THE TRAPEZOIDAL QUADRATURE RULE

From Chapter 5, we have the quadrature formula

$$
\int_a^b g(x)\,dx = \frac{b-a}{2}\,[g(a) + g(b)] - \frac{(b-a)^3}{12}\,g''(\xi)
$$

for some $a \le \xi \le b$.
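As a quick sanity check on this formula, here is a minimal Python sketch (the function names are ours) applying the single-interval rule to the integrand from the figure, $g(x) = 1/(1+x^2)$, whose exact integral over $[0,1]$ is $\pi/4$:

```python
import math

def trapezoid(g, a, b):
    """Single-interval trapezoidal rule: (b - a)/2 * [g(a) + g(b)]."""
    return (b - a) / 2 * (g(a) + g(b))

g = lambda x: 1 / (1 + x ** 2)   # integrand from the figure
approx = trapezoid(g, 0.0, 1.0)  # = 0.75
exact = math.pi / 4              # = 0.785398...
error = exact - approx           # the gap predicted by the (b-a)^3/12 g''(xi) term
```

The error is about $0.035$, consistent with a single application of the rule on an interval of length 1.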
THE TRAPEZOIDAL RULE FOR ODEs

Integrate $Y'(x) = f(x, Y(x))$ over the interval $[x_n, x_{n+1}]$:

$$
Y(x_{n+1}) = Y(x_n) + \int_{x_n}^{x_{n+1}} f(x, Y(x))\,dx
$$

Apply the trapezoidal quadrature rule to this integral:

$$
Y(x_{n+1}) = Y(x_n) + \frac{h}{2}\,[f(x_n, Y(x_n)) + f(x_{n+1}, Y(x_{n+1}))] - \frac{h^3}{12}\,Y'''(\xi_n)
$$

Dropping the error term, we have the trapezoidal rule:

$$
y_{n+1} = y_n + \frac{h}{2}\,[f(x_n, y_n) + f(x_{n+1}, y_{n+1})]
$$
$$
y_{n+1} = y_n + \frac{h}{2}\,[f(x_n, y_n) + f(x_{n+1}, y_{n+1})], \qquad n \ge 0
$$

This is a one-step implicit method. Its truncation error is

$$
T_n(Y) = -\frac{h^3}{12}\,Y'''(\xi_n)
$$

For its error, we can use the general theory of §6.3; or directly,

$$
|Y(x_n) - y_h(x_n)| \le e^{2K(x_n - x_0)}\,|Y_0 - y_h(x_0)| + \frac{e^{2K(x_n - x_0)} - 1}{K}\,\frac{h^2}{12}\,\|Y'''\|_\infty
$$

provided $hK \le 1$ and the Lipschitz condition is satisfied.
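To make the method concrete, here is a small Python sketch (the names and test problem are ours; the implicit equation is solved by simple fixed-point iteration) applied to $y' = -y$, $y(0) = 1$, whose exact solution is $Y(x) = e^{-x}$:

```python
import math

def trapezoidal_ode(f, x0, y0, h, n_steps, iters=10):
    """Solve y' = f(x, y) by the trapezoidal rule; the implicit equation
    for y_{n+1} is solved by fixed-point iteration (iters sweeps)."""
    x, y = x0, y0
    for _ in range(n_steps):
        fn = f(x, y)
        ynew = y + h * fn                      # Euler guess for y_{n+1}
        for _ in range(iters):                 # y = y_n + h/2 [f_n + f(x_{n+1}, y)]
            ynew = y + h / 2 * (fn + f(x + h, ynew))
        x, y = x + h, ynew
    return y

f = lambda x, y: -y                            # exact solution Y(x) = e^{-x}
err1 = abs(trapezoidal_ode(f, 0.0, 1.0, 0.1, 10) - math.exp(-1))
err2 = abs(trapezoidal_ode(f, 0.0, 1.0, 0.05, 20) - math.exp(-1))
ratio = err1 / err2                            # about 4: the method is O(h^2)
```

Halving the stepsize cuts the error by roughly a factor of 4, as the $O(h^2)$ error bound predicts.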
STABILITY

Let $|y_0 - z_0| \le \epsilon$ and

$$
z_{n+1} = z_n + \frac{h}{2}\,[f(x_n, z_n) + f(x_{n+1}, z_{n+1})], \qquad n \ge 0
$$

Then

$$
|y_n - z_n| \le \epsilon\, e^{2K(x_n - x_0)}, \qquad x_0 \le x_n \le b
$$

ASYMPTOTIC ERROR FORMULA

We can show

$$
Y(x_n) - y_h(x_n) = D(x_n)\,h^2 + O(h^3)
$$

$$
D'(x) = f_y(x, Y(x))\,D(x) - \frac{1}{12}\,Y'''(x), \qquad D(x_0) = 0
$$

assuming $y_0 = Y_0$.
RICHARDSON EXTRAPOLATION

At any node point $x$ common to the use of stepsizes $h$ and $2h$, we have

$$
Y(x) - y_h(x) = D(x)\,h^2 + O(h^3)
$$
$$
Y(x) - y_{2h}(x) = 4\,D(x)\,h^2 + O(h^3)
$$

Then

$$
Y(x) - y_{2h}(x) \doteq 4\,[Y(x) - y_h(x)]
$$

$$
Y(x) \doteq y_h(x) + \tfrac{1}{3}\,[y_h(x) - y_{2h}(x)]
$$
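For a problem where the trapezoidal approximation has a closed form, the extrapolation can be checked numerically. A Python sketch (names ours) for the model problem $y' = -y$, $y(0) = 1$, where the rule gives $y_h(x_n) = [(1 - h/2)/(1 + h/2)]^n$ at $x_n = nh$:

```python
import math

def y_trap(h, x):
    """Closed-form trapezoidal solution of y' = -y, y(0) = 1, at node x = n h."""
    n = round(x / h)
    return ((1 - h / 2) / (1 + h / 2)) ** n

exact = math.exp(-1.0)
yh, y2h = y_trap(0.05, 1.0), y_trap(0.1, 1.0)
richardson = yh + (yh - y2h) / 3      # Y(x) ≈ y_h(x) + [y_h(x) - y_2h(x)]/3
```

The extrapolated value is far more accurate than $y_h$ itself, gaining roughly an order of $h$ in accuracy.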
ITERATIVE SOLUTION

To solve for $y_{n+1}$ in

$$
y_{n+1} = y_n + \frac{h}{2}\,[f(x_n, y_n) + f(x_{n+1}, y_{n+1})]
$$

define

$$
y_{n+1}^{(j+1)} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(j)})\right]
$$

for $j = 0, 1, 2, \dots$, with an initial guess $y_{n+1}^{(0)} \approx y_{n+1}$.

To analyze the convergence:

$$
y_{n+1} - y_{n+1}^{(j+1)} = \frac{h}{2}\left[f(x_{n+1}, y_{n+1}) - f(x_{n+1}, y_{n+1}^{(j)})\right]
$$

$$
\left|y_{n+1} - y_{n+1}^{(j+1)}\right| \le \frac{h}{2}\left|f(x_{n+1}, y_{n+1}) - f(x_{n+1}, y_{n+1}^{(j)})\right| \le \frac{hK}{2}\left|y_{n+1} - y_{n+1}^{(j)}\right|, \qquad j \ge 0
$$

According to this, convergence is assured if we choose $h$ so small that

$$
\frac{hK}{2} < 1
$$
If we use the mean value theorem, we get a better idea of the nature of the convergence:

$$
y_{n+1} - y_{n+1}^{(j+1)} \doteq \frac{h}{2}\,\frac{\partial f(x_{n+1}, y_{n+1})}{\partial y}\left[y_{n+1} - y_{n+1}^{(j)}\right]
$$

for $j = 0, 1, 2, \dots$. Thus the crucial factor is

$$
\frac{h}{2}\,\frac{\partial f(x_{n+1}, y_{n+1})}{\partial y}
$$

and we need its magnitude to be less than 1. The smaller this factor, the faster the convergence. Also, note that with most stable differential equations, the partial derivative is negative. Thus the error in the iterates will oscillate between negative and positive, which means the iterates oscillate about the desired solution $y_{n+1}$.
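This oscillation is easy to observe. A small Python sketch (values ours) for one trapezoidal step of $y' = -y$, where $\partial f/\partial y = -1$, so each sweep multiplies the error by exactly $(h/2)\,f_y = -h/2$:

```python
h, yn = 0.5, 1.0                              # deliberately large h so the decay is visible
exact_root = yn * (1 - h / 2) / (1 + h / 2)   # exact solution of the implicit equation
y = yn                                        # crude initial guess y^{(0)} = y_n
errors = []
for _ in range(5):
    y = yn + h / 2 * (-yn - y)   # y^{(j+1)} = y_n + (h/2)[f(x_n,y_n) + f(x_{n+1},y^{(j)})]
    errors.append(exact_root - y)
# errors: 0.1, -0.025, 0.00625, ...  each sweep multiplies by -h/2 = -0.25,
# so the iterates oscillate about the root while converging
```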
CHOOSING THE INITIAL GUESS

We generally want to choose

$$
y_{n+1} - y_{n+1}^{(0)} = O(h^p)
$$

for some $p > 0$. How do we choose $y_{n+1}^{(0)}$?

(a) The preceding answer: $y_{n+1}^{(0)} = y_n$

(b) Euler's method: $y_{n+1}^{(0)} = y_n + h f(x_n, y_n)$

(c) Midpoint method: $y_{n+1}^{(0)} = y_{n-1} + 2h f(x_n, y_n)$
THE LOCAL SOLUTION

An important idea to consider is that of the local solution of the differential equation. Given that we have gotten to the point $(x_n, y_n)$ with the trapezoidal method, consider the initial value problem

$$
y' = f(x, y), \quad x \ge x_n, \qquad y(x_n) = y_n
$$

Denote the solution of this problem by $u_n(x)$. It is the exact solution of the differential equation from $x_n$ onwards, based on our best knowledge of the solution at $x_n$. Thus

$$
u_n'(x) = f(x, u_n(x)), \quad x \ge x_n, \qquad u_n(x_n) = y_n
$$

It is this solution that we are truly estimating at each step.
If we proceed in analogy with the derivation of Euler's method,

$$
u_n(x_{n+1}) = u_n(x_n) + h\,u_n'(x_n) + \frac{h^2}{2}\,u_n''(\xi_n)
= y_n + h f(x_n, y_n) + \frac{h^2}{2}\,u_n''(\xi_n)
$$

If we let

$$
y_{n+1}^{(0)} = y_n + h f(x_n, y_n)
$$

then

$$
u_n(x_{n+1}) - y_{n+1}^{(0)} = \frac{h^2}{2}\,u_n''(\xi_n)
$$
Similarly, for the derivation of the trapezoidal method,

$$
u_n(x_{n+1}) = u_n(x_n) + \frac{h}{2}\,[f(x_n, u_n(x_n)) + f(x_{n+1}, u_n(x_{n+1}))] - \frac{h^3}{12}\,u_n'''(\zeta_n)
= y_n + \frac{h}{2}\,[f(x_n, y_n) + f(x_{n+1}, u_n(x_{n+1}))] - \frac{h^3}{12}\,u_n'''(\zeta_n)
$$

From this, we can derive

$$
u_n(x_{n+1}) - y_{n+1} = -\frac{h^3}{12}\,u_n'''(x_n) + O(h^4)
$$

as is described in the text.
If we use the Euler predictor, then

$$
y_{n+1}^{(0)} = y_n + h f(x_n, y_n)
$$

$$
y_{n+1} - y_{n+1}^{(0)} = \frac{h^2}{2}\,u_n''(\xi_n) + \frac{h^3}{12}\,u_n'''(x_n) + O(h^4)
= \frac{h^2}{2}\,u_n''(\xi_n) + O(h^3)
$$

Return to the error in the iteration,

$$
\left|y_{n+1} - y_{n+1}^{(j+1)}\right| \le \frac{hK}{2}\left|y_{n+1} - y_{n+1}^{(j)}\right|, \qquad j \ge 0
$$

Then

$$
y_{n+1} - y_{n+1}^{(1)} = O(h^3), \qquad y_{n+1} - y_{n+1}^{(2)} = O(h^4)
$$
Usually we try to make the iteration error less significant than the truncation error in the trapezoidal method,

$$
T_n(Y) = -\frac{h^3}{12}\,Y'''(\xi_n)
$$

Thus we would use two iterations of the trapezoidal iteration equation

$$
y_{n+1}^{(j+1)} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(j)})\right], \qquad j = 0, 1
$$

and then let $y_{n+1} = y_{n+1}^{(2)}$.
If we repeat this discussion with the midpoint predictor,

$$
y_{n+1}^{(0)} = y_{n-1} + 2h f(x_n, y_n)
$$

then we can derive

$$
u_n(x_{n+1}) - y_{n+1}^{(0)} = \frac{5h^3}{12}\,u_n'''(x_n) + O(h^4)
$$

Combined with the local error for the trapezoidal rule,

$$
u_n(x_{n+1}) - y_{n+1} = -\frac{h^3}{12}\,u_n'''(x_n) + O(h^4)
$$

we have

$$
y_{n+1} - y_{n+1}^{(0)} = \frac{h^3}{2}\,u_n'''(x_n) + O(h^4)
$$

For the iteration, we need iterate only once, obtaining

$$
y_{n+1} - y_{n+1}^{(1)} = O(h^4)
$$

We then proceed with $y_{n+1} = y_{n+1}^{(1)}$.
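The two predictor-corrector strategies can be compared directly. A Python sketch (the names and test problem are ours) using two corrector sweeps with the Euler predictor and one sweep with the midpoint predictor, on $y' = -y$, $y(0) = 1$:

```python
import math

def f(x, y):
    return -y                              # test problem; exact Y(x) = e^{-x}

def trap_pc(h, n_steps, predictor):
    """Trapezoidal method with a fixed number of corrector iterations:
    two for the Euler predictor, one for the midpoint predictor."""
    xs, ys = [0.0], [1.0]
    for n in range(n_steps):
        x, y = xs[-1], ys[-1]
        fn = f(x, y)
        if predictor == "midpoint" and n > 0:     # midpoint needs y_{n-1}
            guess, sweeps = ys[-2] + 2 * h * fn, 1
        else:                                     # Euler predictor (also used to start)
            guess, sweeps = y + h * fn, 2
        for _ in range(sweeps):                   # trapezoidal corrector
            guess = y + h / 2 * (fn + f(x + h, guess))
        xs.append(x + h)
        ys.append(guess)
    return ys[-1]

err_euler = abs(trap_pc(0.1, 10, "euler") - math.exp(-1))
err_mid = abs(trap_pc(0.1, 10, "midpoint") - math.exp(-1))
```

With a fixed number of sweeps, both variants keep the iteration error below the $O(h^3)$ truncation error, so both final errors stay close to that of the fully converged trapezoidal method.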
A-STABILITY

For reasons examined in a later section (§6.9), we look at the behaviour of numerical solutions to

$$
y' = \lambda y, \qquad y(0) = 1
$$

when $\lambda$ is a real or complex number with $\mathrm{Re}(\lambda) < 0$. The true solution is $Y(x) = e^{\lambda x}$; and as $x \to \infty$, $Y(x) \to 0$. We ask when the numerical solution has the same behaviour.

The trapezoidal method for this case is

$$
y_{n+1} = y_n + \frac{h}{2}\,[\lambda y_n + \lambda y_{n+1}]
$$

We can solve for $y_{n+1}$ to get

$$
y_{n+1} = \frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}}\, y_n, \qquad y_0 = 1
$$
Together with $y_0 = 1$, this leads to

$$
y_n = \left(\frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}}\right)^n, \qquad n \ge 0
$$

If $\lambda < 0$ and is real, then clearly $y_n \to 0$ as $n \to \infty$. By a slightly more complicated argument, the same is true if $\mathrm{Re}(\lambda) < 0$. Numerical methods for which this is true, independent of the size of $h$, are called A-stable methods. A-stable methods are very useful in solving stiff differential equations, which we explore in §6.9.
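The contrast with Euler's method is striking for a stiff choice of $\lambda$. A Python sketch (values ours) with $\lambda = -100$ and $h = 0.1$, so that $h\lambda = -10$ lies far outside Euler's stability interval:

```python
lam, h, n = -100.0, 0.1, 50            # stiff test problem y' = lam * y, y(0) = 1

trap_factor = (1 + h * lam / 2) / (1 - h * lam / 2)   # = -2/3: magnitude < 1 for any h
euler_factor = 1 + h * lam                            # = -9: magnitude > 1 here

y_trap = trap_factor ** n              # decays, as the true solution does
y_euler = euler_factor ** n            # explodes unless h < 2/|lam|
```

The trapezoidal amplification factor has magnitude below 1 for every $h > 0$ when $\mathrm{Re}(\lambda) < 0$, which is exactly the A-stability property; Euler's factor $1 + h\lambda$ does not.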