Finite difference method for elliptic problems: I

Praveen C.
praveen@math.tifrbng.res.in
Tata Institute of Fundamental Research, Center for Applicable Mathematics, Bangalore 560065
http://math.tifrbng.res.in/~praveen

January 13, 2013
Contents

1. 1-D BVP and FDM
2. 2-D BVP and FDM
3. Higher order schemes
4. Iterative matrix solution
5. Discontinuous coefficients, finite volume method
6. Convection dominated problem

General approach of numerical methods: Stability + Consistency = Convergence
1-D boundary value problem

Differential equation:
$$-u''(x) + c(x) u(x) = f(x), \quad x \in \Omega = (0,1), \qquad u(0) = 0, \quad u(1) = 0 \tag{1}$$

Finite difference mesh: let $N \ge 2$ be an integer and let the mesh size be $h = 1/N$.

Mesh points: $x_i = ih$, $i = 0, 1, \ldots, N$, with
$$\Omega_h = \{x_i : i = 1, 2, \ldots, N-1\}, \qquad \Gamma_h = \{x_0, x_N\}, \qquad \bar{\Omega}_h = \Omega_h \cup \Gamma_h$$

$U_i$ = numerical approximation to $u(x_i)$. We need to find $U_1, U_2, \ldots, U_{N-1}$.
Finite difference approximation

Let $u : [0,1] \to \mathbb{R}$. By Taylor series,
$$u(x_{i \pm 1}) = u(x_i \pm h) = u(x_i) \pm h u'(x_i) + \frac{h^2}{2} u''(x_i) \pm \frac{h^3}{6} u'''(x_i) + O(h^4)$$

Forward difference for $u'$, for $u \in C^2[0,1]$:
$$D_x^+ u(x_i) := \frac{u(x_{i+1}) - u(x_i)}{h} = u'(x_i) + O(h)$$

Backward difference for $u'$, for $u \in C^2[0,1]$:
$$D_x^- u(x_i) := \frac{u(x_i) - u(x_{i-1})}{h} = u'(x_i) + O(h)$$

Central difference for $u''$, for $u \in C^4[0,1]$:
$$D_x^+ D_x^- u(x_i) = D_x^- D_x^+ u(x_i) = \frac{u(x_{i-1}) - 2u(x_i) + u(x_{i+1})}{h^2} = u''(x_i) + O(h^2)$$

Note: if $u \in C^3[0,1]$ then $D_x^+ D_x^- u(x_i) - u''(x_i) = O(h)$.
Finite difference method: $-u'' \approx -D_x^+ D_x^- u$

Find $(U_1, U_2, \ldots, U_{N-1})$ such that
$$(AU)_i := -D_x^+ D_x^- U_i + c(x_i) U_i = f(x_i), \quad i = 1, 2, \ldots, N-1, \qquad U_0 = 0, \quad U_N = 0 \tag{2}$$

The hope then is that $U_i \approx u(x_i)$, which we have to prove. In matrix notation, more compactly,
$$AU = F, \qquad A \in \mathbb{R}^{(N-1) \times (N-1)}, \quad U \in \mathbb{R}^{N-1}, \quad F \in \mathbb{R}^{N-1}$$

Does a solution exist, i.e., is the matrix $A$ invertible?
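Before proving anything, scheme (2) can simply be assembled and solved. The sketch below is not part of the original slides; `solve_bvp` and its argument names are my own, and a dense matrix is used for clarity rather than efficiency.

```python
import numpy as np

def solve_bvp(c, f, N):
    """Solve -u'' + c(x) u = f(x) on (0,1), u(0) = u(1) = 0,
    with the central difference scheme (2) on N subintervals."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    xi = x[1:-1]                      # interior mesh points x_1, ..., x_{N-1}
    n = N - 1
    # (AU)_i = (-U_{i-1} + 2 U_i - U_{i+1}) / h^2 + c(x_i) U_i
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2 \
        + np.diag(c(xi))
    U = np.linalg.solve(A, f(xi))
    # Re-attach the homogeneous boundary values U_0 = U_N = 0
    return x, np.concatenate(([0.0], U, [0.0]))
```

For instance, with $c \equiv 0$ and $f(x) = \pi^2 \sin(\pi x)$, whose exact solution is $u(x) = \sin(\pi x)$, the computed nodal values agree with $u$ to $O(h^2)$.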
Discrete inner product

Let $V, W$ be two grid functions defined at the mesh points, vanishing at $i = 0, N$. Define the discrete inner product
$$(V, W)_h = \sum_{i=1}^{N-1} h V_i W_i$$
which resembles the $L^2$ inner product $(v, w) = \int_0^1 v(x) w(x)\,dx$.

Lemma (Summation by parts): Suppose $V$ is a grid function defined at the mesh points $x_i$, $i = 0, 1, \ldots, N$, and let $V_0 = V_N = 0$. Then
$$(-D_x^+ D_x^- V, V)_h = \sum_{i=1}^{N} h |D_x^- V_i|^2 \tag{3}$$
Discrete inner product

Proof: we do summation by parts,
$$(-D_x^+ D_x^- V, V)_h = -\sum_{i=1}^{N-1} h (D_x^+ D_x^- V_i) V_i = -\sum_{i=1}^{N-1} \frac{V_{i+1} - V_i}{h} V_i + \sum_{i=1}^{N-1} \frac{V_i - V_{i-1}}{h} V_i$$
Shifting indices in the first sum and extending both sums (allowed since $V_0 = V_N = 0$),
$$= -\sum_{i=1}^{N} \frac{V_i - V_{i-1}}{h} V_{i-1} + \sum_{i=1}^{N} \frac{V_i - V_{i-1}}{h} V_i = \sum_{i=1}^{N} \frac{(V_i - V_{i-1})^2}{h} = \sum_{i=1}^{N} h |D_x^- V_i|^2$$

This is the discrete analogue of $-\int_0^1 v'' v\,dx = \int_0^1 (v')^2\,dx$ when $v(0) = v(1) = 0$.
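Identity (3) can also be checked numerically on an arbitrary mesh function vanishing at the endpoints. A small sketch (mesh size and random seed are arbitrary choices of mine):

```python
import numpy as np

N = 10
h = 1.0 / N
rng = np.random.default_rng(0)
V = np.zeros(N + 1)
V[1:N] = rng.standard_normal(N - 1)   # arbitrary interior values, V_0 = V_N = 0

# Left side of (3): (-D+ D- V, V)_h, summed over interior points i = 1..N-1
DpDmV = (V[:N-1] - 2.0 * V[1:N] + V[2:]) / h**2
lhs = -h * np.sum(DpDmV * V[1:N])

# Right side of (3): h * sum_{i=1}^{N} |D- V_i|^2
DmV = (V[1:] - V[:-1]) / h
rhs = h * np.sum(DmV**2)

print(abs(lhs - rhs))   # zero up to rounding error
```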
Existence of discrete solution

Let $V$ be a grid function such that $V_0 = V_N = 0$ and let $c \ge 0$. Then
$$(AV, V)_h = (-D_x^+ D_x^- V + cV, V)_h = (-D_x^+ D_x^- V, V)_h + (cV, V)_h \ge \sum_{i=1}^{N} h |D_x^- V_i|^2 \ge 0$$

If $AV = 0$ for some $V$, then necessarily
$$\sum_{i=1}^{N} h |D_x^- V_i|^2 = 0 \implies D_x^- V_i = 0, \; i = 1, \ldots, N \implies V_0 = V_1 = \ldots = V_N$$
But since $V_0 = V_N = 0$, we obtain $V = 0$. Hence $AV = 0$ if and only if $V = 0$, from which we deduce that $A$ is a non-singular matrix.

Theorem (Existence of FD solution): Suppose $c$ and $f$ are continuous functions on $[0,1]$ and $c(x) \ge 0$ for $x \in [0,1]$. Then the finite difference scheme (2) has a unique solution $U = A^{-1} F$.
Discrete norms

Discrete $L^2$ norm:
$$\|U\|_h := (U, U)_h^{1/2} = \left( \sum_{i=1}^{N-1} h |U_i|^2 \right)^{1/2}$$

Discrete Sobolev norm:
$$\|U\|_{1,h} := \left( \|U\|_h^2 + \|D_x^- U]_h^2 \right)^{1/2}$$
where
$$\|V]_h^2 := \sum_{i=1}^{N} h |V_i|^2 \quad \text{(includes the last grid point } i = N \text{)}$$

Using this notation,
$$(AV, V)_h \ge \|D_x^- V]_h^2 \quad \text{(equality if } c \equiv 0\text{)}$$

Using a discrete version of the Poincare-Friedrichs inequality, we will show that
$$(AV, V)_h \ge c_0 \|V\|_{1,h}^2$$
where $c_0$ is a positive constant. This is a discrete coercivity property.
Lemma (Discrete Poincare-Friedrichs inequality): Let $V$ be a mesh function with $V_0 = V_N = 0$. Then there exists $c_* > 0$, independent of $V$ and $h$, such that
$$\|V\|_h^2 \le c_* \|D_x^- V]_h^2 \tag{4}$$
for all such $V$. (We write $c_*$ for the constant to avoid confusion with the coefficient $c(x)$.)

Proof: Writing $V_i = \sum_{j=1}^{i} (D_x^- V_j) h$ and using the Cauchy-Schwarz inequality,
$$|V_i|^2 = \left| \sum_{j=1}^{i} (D_x^- V_j) h \right|^2 \le \left( \sum_{j=1}^{i} h \right) \sum_{j=1}^{i} h |D_x^- V_j|^2 = ih \sum_{j=1}^{i} h |D_x^- V_j|^2$$
Therefore
$$\|V\|_h^2 = \sum_{i=1}^{N-1} h |V_i|^2 \le \sum_{i=1}^{N-1} i h^2 \sum_{j=1}^{i} h |D_x^- V_j|^2 \le \frac{(N-1)N}{2} h^2 \sum_{j=1}^{N} h |D_x^- V_j|^2 \le \frac{1}{2} \|D_x^- V]_h^2$$
since $(N-1)N h^2 < 1$ when $h = 1/N$, which proves (4) with $c_* = \tfrac{1}{2}$.
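A quick numerical sanity check of (4) with the constant $\tfrac{1}{2}$ (mesh size and random data are arbitrary choices of mine):

```python
import numpy as np

N = 16
h = 1.0 / N
rng = np.random.default_rng(1)
V = np.zeros(N + 1)
V[1:N] = rng.standard_normal(N - 1)   # random mesh function with V_0 = V_N = 0

norm_sq = h * np.sum(V[1:N]**2)       # ||V||_h^2
DmV = (V[1:] - V[:-1]) / h
bound = 0.5 * h * np.sum(DmV**2)      # (1/2) |D- V]_h^2

print(norm_sq <= bound)               # True, as the lemma guarantees
```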
Discrete coercivity property: combining
$$\|V\|_h^2 \le c_* \|D_x^- V]_h^2 \le c_* (AV, V)_h \quad \text{and} \quad (AV, V)_h \ge \|D_x^- V]_h^2$$
we get
$$(AV, V)_h \ge (1 + c_*)^{-1} \left( \|V\|_h^2 + \|D_x^- V]_h^2 \right)$$
With $c_0 = (1 + c_*)^{-1} = \tfrac{2}{3}$, we have the coercivity property
$$(AV, V)_h \ge c_0 \|V\|_{1,h}^2 \tag{5}$$

Theorem (Stability of FD solution): The scheme (2) is stable in the sense that
$$\|U\|_{1,h} \le \frac{1}{c_0} \|f\|_h \tag{6}$$

Proof: use coercivity (5) and the Cauchy-Schwarz inequality:
$$c_0 \|U\|_{1,h}^2 \le (AU, U)_h = (f, U)_h \le \|f\|_h \|U\|_h \le \|f\|_h \|U\|_{1,h}$$
Global error and truncation error

The global error between the true solution $u$ and the numerical solution $U$ is
$$e_i := u(x_i) - U_i, \quad i = 0, 1, \ldots, N$$
Due to the boundary conditions, $e_0 = e_N = 0$. Then
$$(Ae)_i = (A u(x_i)) - (AU)_i = (A u(x_i)) - f(x_i) = -D_x^+ D_x^- u(x_i) + c(x_i) u(x_i) - [-u''(x_i) + c(x_i) u(x_i)] = u''(x_i) - D_x^+ D_x^- u(x_i)$$
for $i = 1, 2, \ldots, N-1$.

Local truncation error: the error in the central difference approximation,
$$\tau_i := u''(x_i) - D_x^+ D_x^- u(x_i)$$
Thus the error satisfies the equation
$$(Ae)_i = \tau_i, \quad i = 1, 2, \ldots, N-1, \qquad e_0 = e_N = 0 \tag{7}$$
Theorem (Error estimate): Let $f \in C[0,1]$ and $c \in C[0,1]$ with $c(x) \ge 0$, and suppose that the solution of (1) belongs to $C^4[0,1]$. Then
$$\|u - U\|_{1,h} \le \frac{h^2}{8} \|u^{(4)}\|_\infty \tag{8}$$

Proof: using Taylor series with remainder term, show that
$$\tau_i = u''(x_i) - D_x^+ D_x^- u(x_i) = -\frac{h^2}{12} u^{(4)}(\xi_i), \quad \xi_i \in [x_{i-1}, x_{i+1}]$$
so that
$$|\tau_i| \le \frac{h^2}{12} \sup_{x_{i-1} \le x \le x_{i+1}} |u^{(4)}(x)| \le \frac{h^2}{12} \sup_{0 \le x \le 1} |u^{(4)}(x)| \tag{9}$$
Applying the stability result (6) to (7), we obtain
$$\|e\|_{1,h} \le \frac{1}{c_0} \|\tau\|_h = \frac{1}{c_0} \left( \sum_{i=1}^{N-1} h |\tau_i|^2 \right)^{1/2} \le \frac{h^2}{12 c_0} \|u^{(4)}\|_\infty$$
which yields the error estimate (8) since $c_0 = \tfrac{2}{3}$.
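The $O(h^2)$ rate in (8) can be observed numerically: halving $h$ should reduce the error by a factor of about 4. A sketch with $c \equiv 0$ and the manufactured solution $u(x) = \sin(\pi x)$ (my own test problem, not from the slides):

```python
import numpy as np

def fd_error(N):
    """Max nodal error of scheme (2) for -u'' = pi^2 sin(pi x), u = sin(pi x)."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    n = N - 1
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    U = np.linalg.solve(A, np.pi**2 * np.sin(np.pi * x[1:-1]))
    return np.max(np.abs(U - np.sin(np.pi * x[1:-1])))

e1, e2 = fd_error(20), fd_error(40)
print(e1 / e2)   # close to 4, confirming second order accuracy
```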
General framework

Linear differential equation:
$$Lu = f \text{ in } \Omega, \qquad lu = g \text{ on } \Gamma \tag{10}$$

Finite difference approximation:
$$L_h U = f_h \text{ in } \Omega_h, \qquad l_h U = g_h \text{ on } \Gamma_h \tag{11}$$

Two key steps:

(1) Show that the scheme is stable:
$$\|U\|_{\Omega_h} \le C_s \left( \|f_h\|_{\Omega_h} + \|g_h\|_{\Gamma_h} \right)$$
where $C_s > 0$ is independent of $f$, $g$, $h$.
General framework

(2) Show that the scheme is consistent. Define the local truncation errors
$$\tau_{\Omega_h} = L_h u - f_h \text{ in } \Omega_h, \qquad \tau_{\Gamma_h} = l_h u - g_h \text{ on } \Gamma_h$$
For the Dirichlet problem, $\tau_{\Gamma_h} = 0$. Assuming a sufficiently smooth solution $u$, show that
$$\|\tau_{\Omega_h}\|_{\Omega_h} + \|\tau_{\Gamma_h}\|_{\Gamma_h} \le C_\tau h^p \quad \text{as } h \to 0$$
where $C_\tau > 0$ is independent of $h$ but may depend on $u$, and $p > 0$.

Lax equivalence theorem: Suppose the finite difference scheme (11) is stable and consistent. Then it is a convergent approximation of (10).

Proof: define the global error $e = u - U$. Then
$$L_h e = L_h u - L_h U = L_h u - f_h = \tau_{\Omega_h}$$
General framework

and similarly $l_h e = \tau_{\Gamma_h}$. The error is therefore governed by the equations
$$L_h e = \tau_{\Omega_h} \text{ in } \Omega_h, \qquad l_h e = \tau_{\Gamma_h} \text{ on } \Gamma_h$$
By stability and consistency of the scheme,
$$\|u - U\|_{\Omega_h} = \|e\|_{\Omega_h} \le C_s \left( \|\tau_{\Omega_h}\|_{\Omega_h} + \|\tau_{\Gamma_h}\|_{\Gamma_h} \right) \le C_s C_\tau h^p$$
Convergence of $U$ now follows, since $\|u - U\|_{\Omega_h} \to 0$ as $h \to 0$.

The quantity $p$ is called the order of accuracy of the scheme. A large value of $p$ is desirable, since we then obtain a more accurate solution with a smaller number of grid points.
We will next show some results in the maximum norm. We begin with some definitions.

Definitions

Non-negative matrix: a matrix $A$ is said to be non-negative if all its entries are non-negative. We indicate this property by writing $A \ge 0$.

Non-negative vector: a vector $V$ is said to be non-negative if all its entries are non-negative. We indicate this property by writing $V \ge 0$.

Monotone matrix: a real, square matrix $A$ is said to be monotone if it is invertible and the matrix $A^{-1}$ is non-negative.

M-matrix: a real, square matrix $A = (a_{ij})$ is called an M-matrix if
- $a_{ii} > 0$ and $a_{ij} \le 0$ for $i \ne j$
- $A^{-1}$ is non-negative

Thus an M-matrix is also a monotone matrix.
Theorem (Characterization of monotone matrices): A real matrix $A$ of order $n$ is monotone if and only if the inclusion
$$\{v \in \mathbb{R}^n : Av \ge 0\} \subset \{v \in \mathbb{R}^n : v \ge 0\}$$
is satisfied.

Proof: (a) If $A$ is monotone and the vector $Av$ is non-negative, then $v = A^{-1}(Av) \ge 0$.

(b) Conversely, suppose the inclusion is satisfied. Then
$$Av = 0 \implies v \ge 0, \qquad A(-v) = 0 \implies -v \ge 0$$
so $v = 0$. Hence $A$ is non-singular and $A^{-1}$ exists. The $j$-th column vector of $A^{-1}$ is
$$b_j = A^{-1} e_j, \qquad e_j = [0, \ldots, 0, 1, 0, \ldots, 0]^\top \quad (1 \text{ in the } j\text{-th position})$$
so that
$$A b_j = e_j \ge 0 \implies b_j \ge 0 \implies A^{-1} \ge 0$$

Theorem: Suppose that $c$ is non-negative. Then the matrix $A$ in (2) is monotone.

Proof: Let $A \in \mathbb{R}^{(N-1) \times (N-1)}$ be the matrix of the finite difference scheme. By the above characterization, it is enough to show that
$$Av \ge 0 \implies v \ge 0$$
Given any vector $v \in \mathbb{R}^{N-1}$ such that $Av \ge 0$, let $p \in \{1, \ldots, N-1\}$ be an integer satisfying
$$v_p \le v_i \text{ for } i = 1, 2, \ldots, N-1 \quad \left( \text{i.e., } v_p = \min_{1 \le i \le N-1} v_i \right)$$
We have to show that $v_p \ge 0$.

(a) If $p = 1$:
$$0 \le h^2 (Av)_1 = (2 + c_1 h^2) v_1 - v_2 = (1 + c_1 h^2) v_1 + (v_1 - v_2) \le (1 + c_1 h^2) v_1$$
(b) If $2 \le p \le N-2$, then
$$0 \le h^2 (Av)_p = -v_{p-1} + (2 + c_p h^2) v_p - v_{p+1} \le c_p h^2 v_p$$

(c) If $p = N-1$, then
$$0 \le h^2 (Av)_{N-1} = -v_{N-2} + (2 + c_{N-1} h^2) v_{N-1} \le (1 + c_{N-1} h^2) v_{N-1}$$

Hence we have
$$\min_{1 \le i \le N-1} v_i \ge 0 \quad \text{if } c_i > 0 \text{ for } 2 \le i \le N-2$$

It remains to look at the case where at least one of the $c_i$, $2 \le i \le N-2$, is zero. We already know that $A$ is invertible (even if $c \equiv 0$). Now the matrix $A + \alpha I$ is monotone for every $\alpha > 0$, which implies that
$$(A + \alpha I)^{-1} \ge 0$$
The elements of $(A + \alpha I)^{-1}$ are continuous functions of $\alpha \ge 0$, and hence it follows that $A^{-1} \ge 0$.
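Monotonicity can be verified directly on a small example by inspecting $A^{-1}$ (a sanity check, not a proof; $N$ and the coefficient $c(x) = x^2$ are arbitrary choices of mine):

```python
import numpy as np

N = 8
h = 1.0 / N
n = N - 1
x = np.linspace(0.0, 1.0, N + 1)
c = x[1:-1]**2                        # a non-negative coefficient c(x) = x^2
# Matrix of scheme (2): 2/h^2 + c_i on the diagonal, -1/h^2 off it
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2 + np.diag(c)
Ainv = np.linalg.inv(A)
print(Ainv.min() >= 0.0)              # True: every entry of A^{-1} is non-negative
```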
Matrix norm

For any square matrix $M = (m_{ij}) \in \mathbb{R}^{n \times n}$ and vector norm $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}_+$,
$$\|M\| := \max_{V \ne 0} \frac{\|MV\|}{\|V\|} = \max_{\|V\| = 1} \|MV\|$$
In particular,
$$\|M\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |m_{ij}|$$

Theorem (Error in max norm): Suppose that $c$ is non-negative. If the solution $u$ of the BVP (1) satisfies $u \in C^4[0,1]$, then we have the bound
$$\max_{1 \le i \le N-1} |u(x_i) - U_i| = \|u - U\|_\infty \le \frac{h^2}{96} \sup_{0 \le x \le 1} |u^{(4)}(x)|$$
Proof: (1) We first show the stability result
$$\|A^{-1}\|_\infty \le \|A_0^{-1}\|_\infty \le \frac{1}{8} \tag{12}$$
Let $A_0$ be the matrix $A$ with $c \equiv 0$. Since $A$ and $A_0$ are monotone,
$$A^{-1} \ge 0 \quad \text{and} \quad A_0^{-1} \ge 0$$
Since $c$ is non-negative,
$$A - A_0 = \mathrm{diag}(c_i) \ge 0$$
so that
$$A_0^{-1} - A^{-1} = A_0^{-1} (A - A_0) A^{-1} \ge 0$$
Then, using the expression for the matrix norm, we obtain $\|A^{-1}\|_\infty \le \|A_0^{-1}\|_\infty$. Observe that, because $A_0^{-1} \ge 0$,
$$\|A_0^{-1}\|_\infty = \|A_0^{-1} E\|_\infty, \qquad E = [1, 1, \ldots, 1]^\top \in \mathbb{R}^{N-1}$$
But $A_0^{-1} E$ is the finite difference approximation to the solution of
$$-v''(x) = 1, \quad x \in \Omega = (0,1), \qquad v(0) = 0, \quad v(1) = 0$$
The solution is
$$v(x) = \frac{1}{2} x (1 - x) \implies v^{(3)}(x) = v^{(4)}(x) = 0$$
Hence the finite difference solution $A_0^{-1} E$ is exact at the nodes, i.e.,
$$(A_0^{-1} E)_i = v(x_i), \quad 1 \le i \le N-1$$
so that
$$\|A_0^{-1} E\|_\infty = \max_{1 \le i \le N-1} v(x_i) \le \max_{0 \le x \le 1} v(x) = \frac{1}{8}$$

(2) We have already seen the error equation (7) for $e_i = u(x_i) - U_i$:
$$(Ae)_i = \tau_i, \quad i = 1, 2, \ldots, N-1, \qquad e_0 = e_N = 0$$
Hence, using (12) and (9),
$$\|e\|_\infty = \|A^{-1} \tau\|_\infty \le \|A^{-1}\|_\infty \|\tau\|_\infty \le \frac{1}{8} \cdot \frac{h^2}{12} \max_{0 \le x \le 1} |u^{(4)}(x)|$$
which proves the desired result.

Maximum principle (differential equation): Suppose that $c(x) \ge 0$ and
$$-u''(x) + c(x) u(x) \le 0, \; 0 \le x \le 1, \qquad u(0) \le 0, \; u(1) \le 0 \implies u(x) \le 0$$

Maximum principle (FDM): Suppose $A$ is monotone. Then
$$AU \le 0, \qquad U_0 \le 0, \; U_N \le 0 \implies U \le 0$$
Steady diffusion-convection-reaction

$$\mathcal{A}u := -a u'' + b u' + c u = f \text{ in } \Omega = (0,1), \qquad u(0) = u_0, \quad u(1) = u_1 \tag{13}$$
where $a(x)$, $b(x)$, $c(x)$ are smooth functions and $a > 0$, $c \ge 0$ in $\Omega$.

Finite difference approximation of the PDE:
$$(\mathcal{A}U)_j := -a_j \frac{U_{j-1} - 2U_j + U_{j+1}}{h^2} + b_j \frac{U_{j+1} - U_{j-1}}{2h} + c_j U_j = f_j \tag{14}$$
for $j = 1, 2, \ldots, N-1$, with $U_0 = u_0$, $U_N = u_1$; or, multiplying through by $h^2$,
$$-\left(a_j + \tfrac{1}{2} h b_j\right) U_{j-1} + \left(2 a_j + h^2 c_j\right) U_j - \left(a_j - \tfrac{1}{2} h b_j\right) U_{j+1} = h^2 f_j$$
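Scheme (14) can be written as a linear solve; a sketch, not from the slides (the function name `solve_dcr` and its call signature are my own; a dense matrix is used for brevity):

```python
import numpy as np

def solve_dcr(a, b, c, f, u0, u1, N):
    """Central difference scheme (14) for -a u'' + b u' + c u = f on (0,1)
    with u(0) = u0, u(1) = u1; a, b, c, f are callables, N subintervals."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    n = N - 1
    A = np.zeros((n, n))
    F = h**2 * f(x[1:-1])
    for j in range(1, N):             # interior node index j = 1..N-1
        aj, bj, cj = a(x[j]), b(x[j]), c(x[j])
        lo = -(aj + 0.5 * h * bj)     # coefficient of U_{j-1}
        di = 2.0 * aj + h**2 * cj     # coefficient of U_j
        hi = -(aj - 0.5 * h * bj)     # coefficient of U_{j+1}
        k = j - 1                     # row in the (N-1) x (N-1) system
        A[k, k] = di
        if k > 0:
            A[k, k - 1] = lo
        else:
            F[k] -= lo * u0           # fold boundary value into the RHS
        if k < n - 1:
            A[k, k + 1] = hi
        else:
            F[k] -= hi * u1
    U = np.linalg.solve(A, F)
    return x, np.concatenate(([u0], U, [u1]))
```

As a quick check, with $a \equiv 1$, $b \equiv 0$, $c \equiv 1$, $f \equiv 1$ and $u_0 = u_1 = 1$ the exact solution is $u \equiv 1$, which the scheme reproduces exactly.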
Steady diffusion-convection-reaction

Discrete maximum principle: Assume that $h$ is so small that $a_j \pm \tfrac{1}{2} h b_j \ge 0$, and that $U$ satisfies $\mathcal{A}U_j \le 0$ (resp. $\mathcal{A}U_j \ge 0$).

1. If $c \equiv 0$, then
$$\max_{0 \le j \le N} U_j = \max\{U_0, U_N\} \qquad \left( \text{resp. } \min_{0 \le j \le N} U_j = \min\{U_0, U_N\} \right)$$

2. If $c \ge 0$, then
$$\max_{0 \le j \le N} U_j \le \max\{U_0, U_N, 0\} \qquad \left( \text{resp. } \min_{0 \le j \le N} U_j \ge \min\{U_0, U_N, 0\} \right)$$
Steady diffusion-convection-reaction

Proof of part 1: since $c \equiv 0$ and $\mathcal{A}U_j \le 0$,
$$U_j = \frac{a_j + \tfrac{1}{2} h b_j}{2 a_j} U_{j-1} + \frac{a_j - \tfrac{1}{2} h b_j}{2 a_j} U_{j+1} + \frac{h^2 \mathcal{A}U_j}{2 a_j} \le \frac{a_j + \tfrac{1}{2} h b_j}{2 a_j} U_{j-1} + \frac{a_j - \tfrac{1}{2} h b_j}{2 a_j} U_{j+1} \le \max(U_{j-1}, U_{j+1})$$
for $1 \le j \le N-1$, since the two coefficients are non-negative and sum to one.

Assume that $U$ has an interior maximum at $U_j$, i.e., $U_j = \max_{0 \le k \le N} U_k$. This contradicts the above inequality unless $U_j = U_{j-1} = U_{j+1}$, which means that $U_j = \text{constant} = U_0 = U_N$. Hence the maximum of $\{U_j\}_0^N$ must occur on the boundary.
Steady diffusion-convection-reaction

Proof of part 2 (case $c \ge 0$ and $\mathcal{A}U_j \le 0$):

1. If $U_j \le 0$ for all $j$, then we are done.
2. Otherwise, assume that $\max_j U_j = U_k > 0$ for some $1 \le k \le N-1$. Let $(x_l, x_r)$ be the largest subinterval containing $x_k$ such that $U_j > 0$ for $x_j \in (x_l, x_r)$.
3. We now have $\tilde{\mathcal{A}} U_j := \mathcal{A}U_j - c_j U_j \le 0$ in $(x_l, x_r)$. Applying the result of part 1, we have $U_k = \max\{U_l, U_r\}$.
4. But then $x_l$ and $x_r$ cannot both be interior points of $\Omega$, for then either $U_l$ or $U_r$ would be positive, and the interval $(x_l, x_r)$ would not be the largest subinterval with $U_j > 0$. This implies that $U_k = \max\{U_0, U_N\}$.

Remark: the conditions in the theorem ensure that $\mathcal{A}$ is an M-matrix.

Remark: the key concept used in the proof was convexity; an M-matrix gives a convexity property to the scheme.

Remark: for a proof of the maximum principle in the continuous case, see e.g. Larsson and Thomee.
Mesh Peclet number

Note that $a_j$ is the viscosity coefficient and $b_j$ the convection speed. The condition $a_j \pm \tfrac{1}{2} h b_j \ge 0$ requires that
$$P_j := \frac{h |b_j|}{a_j} \le 2$$
Here, $P_j$ is called the mesh Peclet number.

If $b_j \equiv 0$, i.e., there is no convection, then the condition is trivially satisfied for all $h$. When convection is large, we have to choose a small mesh size $h$, which increases the computational cost and hence is not desirable. For non-linear problems, the speed $b$ will depend on the solution, which is itself unknown.

These problems arise because we chose a central difference approximation for the term $bu'$, which has a hyperbolic character.
Numerical solution

Consider the boundary value problem
$$-u''(x) = f(x), \quad x \in (a, b)$$
with boundary conditions $u(a) = u_a$, $u(b) = u_b$.

At $i = 1$:
$$\frac{2}{h^2} U_1 - \frac{1}{h^2} U_2 = f_1 + \frac{1}{h^2} u_a$$
For $i = 2, \ldots, N-2$:
$$-\frac{1}{h^2} U_{i-1} + \frac{2}{h^2} U_i - \frac{1}{h^2} U_{i+1} = f_i$$
At $i = N-1$:
$$-\frac{1}{h^2} U_{N-2} + \frac{2}{h^2} U_{N-1} = f_{N-1} + \frac{1}{h^2} u_b$$
FDM for $-u'' = f$

For $N = 11$, putting all the equations together,
$$\frac{1}{h^2} \begin{bmatrix} 2 & -1 & & & & \\ -1 & 2 & -1 & & & \\ & -1 & 2 & -1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & -1 & 2 & -1 \\ & & & & -1 & 2 \end{bmatrix} \begin{bmatrix} U_1 \\ U_2 \\ U_3 \\ \vdots \\ U_9 \\ U_{10} \end{bmatrix} = \begin{bmatrix} f_1 + u_a/h^2 \\ f_2 \\ f_3 \\ \vdots \\ f_9 \\ f_{10} + u_b/h^2 \end{bmatrix}$$
or $AU = b$. We have $N-1$ equations for the $N-1$ unknowns $[U_1, U_2, \ldots, U_{N-1}]$.
FDM for ODE

We take $f(x) = \sin(x)$, $(a, b) = (0, 2\pi)$, $u(a) = u(b) = 0$. Efficient solution using the Thomas tri-diagonal algorithm.

[Figure: numerical and exact solutions $u(x)$ for $0 \le x \le 2\pi$; the two curves are indistinguishable. Code: bvp1d.m]
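The figure's experiment is easy to reproduce. A sketch, assuming the setup above (a dense solve stands in for the tridiagonal one here; `bvp1d.m` is the MATLAB script referenced on the slide and is not reproduced):

```python
import numpy as np

# -u'' = sin(x) on (0, 2*pi), u(0) = u(2*pi) = 0; exact solution u(x) = sin(x)
N = 50
a, b = 0.0, 2.0 * np.pi
h = (b - a) / N
x = np.linspace(a, b, N + 1)
n = N - 1
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
U = np.linalg.solve(A, np.sin(x[1:-1]))
err = np.max(np.abs(U - np.sin(x[1:-1])))
print(err)   # small: the numerical and exact curves coincide to plotting accuracy
```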
Thomas tri-diagonal algorithm

Let us now consider the particular case of a linear system with a non-singular tri-diagonal matrix $A$ of the form
$$A = \begin{bmatrix} a_1 & c_1 & & \\ b_2 & a_2 & \ddots & \\ & \ddots & \ddots & c_{n-1} \\ & & b_n & a_n \end{bmatrix}$$
Make an LU decomposition $A = LU$ with $L$ lower triangular and $U$ upper triangular. In this case, the $L$ and $U$ matrices of the LU factorization of $A$ are two bi-diagonal matrices of the type
$$L = \begin{bmatrix} 1 & & & \\ \beta_2 & 1 & & \\ & \ddots & \ddots & \\ & & \beta_n & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} \alpha_1 & c_1 & & \\ & \alpha_2 & \ddots & \\ & & \ddots & c_{n-1} \\ & & & \alpha_n \end{bmatrix}$$
The unknown coefficients $\alpha_i$ and $\beta_i$ can easily be computed through the following equations:
$$\alpha_1 = a_1, \qquad \beta_i = \frac{b_i}{\alpha_{i-1}}, \quad \alpha_i = a_i - \beta_i c_{i-1}, \quad i = 2, \ldots, n$$
This algorithm is far cheaper than a full LU factorization: it needs only $O(n)$ flops and stores only the three diagonals.
Thomas tri-diagonal algorithm

The $\alpha_i$ and $\beta_i$ are obtained from
$$\alpha_1 = a_1, \qquad \beta_i = \frac{b_i}{\alpha_{i-1}}, \quad \alpha_i = a_i - \beta_i c_{i-1}, \quad i = 2, 3, \ldots, n$$
We want to solve
$$Ax = b \iff LUx = b$$
We do this in two steps: $Ly = b$ and $Ux = y$.

1. Solve $Ly = b$ by forward substitution:
$$y_1 = b_1, \qquad \beta_2 y_1 + y_2 = b_2, \qquad \beta_3 y_2 + y_3 = b_3, \quad \text{etc.}$$
2. Solve $Ux = y$ by backward substitution:
$$\alpha_n x_n = y_n, \qquad \alpha_{n-1} x_{n-1} + c_{n-1} x_n = y_{n-1}, \qquad \alpha_{n-2} x_{n-2} + c_{n-2} x_{n-1} = y_{n-2}, \quad \text{etc.}$$

There is no need to store the full matrix $A$: store only the three diagonals. The solution is obtained in $O(N)$ operations.
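The two sweeps above can be written compactly. A sketch (the function name `thomas` and the use of `d` for the right-hand side, to avoid clashing with the sub-diagonal $b$, are my own; there is no pivoting, so the matrix is assumed to be such that all $\alpha_i \ne 0$, e.g. diagonally dominant):

```python
import numpy as np

def thomas(b, a, c, d):
    """Solve a tridiagonal system with sub-diagonal b (length n-1),
    diagonal a (length n), super-diagonal c (length n-1) and rhs d,
    via the bi-diagonal LU factorization: O(n) operations, and only
    the three diagonals are ever stored."""
    n = len(a)
    alpha = np.empty(n)
    y = np.empty(n)
    alpha[0], y[0] = a[0], d[0]
    for i in range(1, n):             # forward sweep: alpha_i, beta_i, solve Ly = d
        beta = b[i - 1] / alpha[i - 1]
        alpha[i] = a[i] - beta * c[i - 1]
        y[i] = d[i] - beta * y[i - 1]
    x = np.empty(n)
    x[n - 1] = y[n - 1] / alpha[n - 1]
    for i in range(n - 2, -1, -1):    # backward sweep: solve Ux = y
        x[i] = (y[i] - c[i] * x[i + 1]) / alpha[i]
    return x
```

Applied to the system of the previous slides, the sub- and super-diagonals are all $-1/h^2$, the diagonal entries are $2/h^2$ (plus $c_i$ for a reaction term), and `d` holds the $f_i$ with the boundary contributions folded in.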