MODIFIED NODAL CUBIC SPLINE COLLOCATION FOR POISSON'S EQUATION

ABEER ALI ABUSHAMA AND BERNARD BIALECKI

Abstract. We present a new modified nodal cubic spline collocation scheme for solving the Dirichlet problem for Poisson's equation on the unit square. We prove existence and uniqueness of a solution of the scheme and show how the solution can be computed on an (N+1) × (N+1) uniform partition of the square with cost O(N² log N) using a direct fast Fourier transform method. Using two comparison functions, we derive an optimal fourth order error bound in the continuous maximum norm. We compare our scheme with other modified nodal cubic spline collocation schemes, in particular, the one proposed by Houstis et al. in [8]. We believe that our paper gives the first correct convergence analysis of modified nodal cubic spline collocation for solving partial differential equations.

Key words. nodal collocation, cubic splines, convergence analysis, interpolants

AMS subject classifications. 65N35, 65N12, 65N15, 65N22

1. Introduction. De Boor [7] proved that classical nodal cubic spline collocation for solving two-point boundary value problems is only second order accurate and no better. For two-point boundary value problems, Archer [2] and, independently, Daniel and Swartz [6] developed a modified nodal cubic spline collocation (MNCSC) scheme which is fourth order accurate. The approximate solution in this scheme satisfies higher-order perturbations of the ordinary differential equation at the partition nodes. Based on the method of [2] and [6], Houstis et al. [8] derived a fourth order MNCSC scheme for solving elliptic boundary value problems on rectangles. For the Helmholtz equation, a direct fast Fourier transform (FFT) algorithm for solving this scheme was proposed recently in [3].
In this paper, we consider the Dirichlet boundary value problem for Poisson's equation

(1.1) Δu = f in Ω, u = 0 on ∂Ω,

where Δ denotes the Laplacian, Ω = (0,1) × (0,1), and ∂Ω is the boundary of Ω. Let ρ_x = {x_i}_{i=0}^{N+1} be a uniform partition of [0,1] in the x-direction such that x_i = ih, i = 0,...,N+1, where h = 1/(N+1). For the sake of simplicity, we assume that the uniform partition ρ_y = {y_j}_{j=0}^{N+1} of [0,1] in the y-direction is such that y_j = x_j. Let S_3 be the space of cubic splines defined by

S_3 = {v ∈ C²[0,1] : v|_[x_{i−1},x_i] ∈ P_3, i = 1,...,N+1},

where P_3 denotes the set of all polynomials of degree ≤ 3, and let S_D = {v ∈ S_3 : v(0) = v(1) = 0}. Our MNCSC scheme for solving (1.1) is formulated as follows: find u_h ∈ S_D ⊗ S_D such that

(1.2) Δu_h(x_i,y_j) − (h²/6) D_x²D_y²u_h(x_i,y_j) = f(x_i,y_j) − (h²/12) Δf(x_i,y_j), i,j = 0,...,N+1.

Department of Mathematical and Computer Sciences, Colorado School of Mines, Golden, Colorado, U.S.A. (ashama@mines.edu)
Department of Mathematical and Computer Sciences, Golden, Colorado School of Mines, Golden, Colorado, U.S.A. (bbialeck@mines.edu)
The scheme (1.2) is motivated by the fourth order finite difference method for (1.1); see, for example, equation (7) in section 4.5 of [9]. Using u_h = u = 0 on ∂Ω and (1.1), we see that (1.2) is equivalent to:

(1.3) 2D_x²D_y²u_h(x_i,y_j) = Δf(x_i,y_j), i,j = 0,N+1,

(1.4) D_x²u_h(x_i,y_j) − (h²/6)D_x²D_y²u_h(x_i,y_j) = f(x_i,y_j) − (h²/12)Δf(x_i,y_j), i = 0,N+1, j = 1,...,N,

(1.5) D_y²u_h(x_i,y_j) − (h²/6)D_x²D_y²u_h(x_i,y_j) = f(x_i,y_j) − (h²/12)Δf(x_i,y_j), i = 1,...,N, j = 0,N+1,

(1.6) Δu_h(x_i,y_j) − (h²/6)D_x²D_y²u_h(x_i,y_j) = f(x_i,y_j) − (h²/12)Δf(x_i,y_j), i,j = 1,...,N.

The scheme (4.2)-(4.4) of [8] for (1.1) is: find u_h ∈ S_D ⊗ S_D satisfying (1.3) and

(1.7) D_x²u_h(x_i,y_j) = f(x_i,y_j), i = 0,N+1, j = 1,...,N,

(1.8) D_y²u_h(x_i,y_j) = f(x_i,y_j), i = 1,...,N, j = 0,N+1,

(1.9) (L_x + L_y)u_h(x_i,y_j) = f(x_i,y_j), i,j = 1,...,N,

where, for i,j = 1,...,N,

(1.10) L_x v(x_i,y_j) = (1/12)[D_x²v(x_{i−1},y_j) + 10D_x²v(x_i,y_j) + D_x²v(x_{i+1},y_j)],
       L_y v(x_i,y_j) = (1/12)[D_y²v(x_i,y_{j−1}) + 10D_y²v(x_i,y_j) + D_y²v(x_i,y_{j+1})].

Our scheme and that of [8] are identical at the corners of ∂Ω. However, they are different at the remaining partition nodes. While (1.4)-(1.6) involve perturbations of both the left- and right-hand sides, (1.9) involves a perturbation of the left-hand side only. Numerical results show that our scheme exhibits superconvergence phenomena while that of [8] does not.

An outline of this paper is as follows. We give preliminaries in section 2. The matrix-vector form of our scheme, an existence and uniqueness proof of its solution, and a direct FFT algorithm for solving the scheme are presented in section 3. In section 4, using two comparison functions, we derive a fourth order error bound in the continuous maximum norm. In section 5, we give a convergence analysis of the scheme in [4] that consists of (1.3)-(1.5) and (1.9).
We also explain why the convergence analysis of the scheme (1.3) and (1.7)-(1.9), given in [8], is incorrect. This is why, we believe, our paper gives the first correct convergence analysis of MNCSC for solving partial differential equations. Section 6 includes numerical results obtained using our scheme.
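The fourth order finite difference method that motivates (1.2) is easy to check numerically. The sketch below is our own illustration, not code from the paper: it solves (1.1) with the compact 9-point analogue of (1.2), in which D_x², D_y² are replaced by second differences of the matrix T = tridiag(1,−2,1) and the system is diagonalized by the sine eigenvectors of T. The function names and the test solution sin(πx)sin(πy) are our own choices.

```python
import numpy as np

def compact_poisson(N, f, lap_f):
    # Solve u_xx + u_yy = f on (0,1)^2, u = 0 on the boundary, with the
    # fourth order compact scheme: (T U + U T + T U T / 6)/h^2 equals
    # f + (h^2/12) * (Laplacian of f) at the N x N interior nodes.
    h = 1.0 / (N + 1)
    x = np.arange(1, N + 1) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    # Sine eigendecomposition of T = tridiag(1, -2, 1).
    k = np.arange(1, N + 1)
    lam = -4.0 * np.sin(k * np.pi / (2.0 * (N + 1))) ** 2
    Q = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(k, k) * np.pi / (N + 1))
    rhs = f(X, Y) + h**2 / 12.0 * lap_f(X, Y)
    rhs_hat = Q @ rhs @ Q
    denom = (lam[:, None] + lam[None, :]
             + lam[:, None] * lam[None, :] / 6.0) / h**2
    return Q @ (rhs_hat / denom) @ Q

def max_error(N):
    # Smooth test problem with known solution (our own choice).
    u = lambda X, Y: np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = lambda X, Y: -2.0 * np.pi**2 * u(X, Y)
    lap_f = lambda X, Y: 4.0 * np.pi**4 * u(X, Y)
    h = 1.0 / (N + 1)
    x = np.arange(1, N + 1) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    return np.abs(compact_poisson(N, f, lap_f) - u(X, Y)).max()

# Halving h should reduce the maximum error by a factor of about 2^4 = 16.
e1, e2 = max_error(15), max_error(31)
print(e1 / e2)  # close to 16
```

Observing the error ratio near 16 when h is halved is the discrete counterpart of the fourth order bound (4.1) proved below for the spline scheme.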
2. Preliminaries. We extend the uniform partition ρ_x = {x_i}_{i=0}^{N+1} outside of [0,1] using x_i = ih, i = −3,−2,−1,N+2,N+3,N+4, and introduce I_i = [x_{i−1},x_i], i = −2,...,N+4. Let {B_m}_{m=−1}^{N+2} be the basis for S_3 defined by

(2.1) B_m(x) = g_1[(x − x_{m−2})/h] for x ∈ I_{m−1}, g_2[(x − x_{m−1})/h] for x ∈ I_m, g_2[(x_{m+1} − x)/h] for x ∈ I_{m+1}, g_1[(x_{m+2} − x)/h] for x ∈ I_{m+2}, and 0 otherwise,

where

(2.2) g_1(x) = x³, g_2(x) = 1 + 3x + 3x² − 3x³.

The basis functions are such that, for m = 0,...,N+1,

(2.3) B_{m−1}(x_m) = 1, B_m(x_m) = 4, B_{m+1}(x_m) = 1,
      B_{m−1}''(x_m) = 6/h², B_m''(x_m) = −12/h², B_{m+1}''(x_m) = 6/h².

Let {B_m^D}_{m=0}^{N+1} be the basis for S_D defined by

(2.4) B_0^D = B_0 − 4B_{−1}, B_1^D = B_1 − B_{−1}, B_m^D = B_m, m = 2,...,N−1, B_N^D = B_N − B_{N+2}, B_{N+1}^D = B_{N+1} − 4B_{N+2}.

It follows from (2.3) that

(2.5) B_0^D(x_1) = 1, B_1^D(x_1) = 4, B_1^D(x_2) = 1, B_N^D(x_{N−1}) = 1, B_N^D(x_N) = 4, B_{N+1}^D(x_N) = 1,

(2.6) [B_0^D]''(x_0) = −36/h², [B_0^D]''(x_1) = 6/h², [B_1^D]''(x_0) = 0, [B_1^D]''(x_1) = −12/h², [B_1^D]''(x_2) = 6/h²,
      [B_N^D]''(x_{N−1}) = 6/h², [B_N^D]''(x_N) = −12/h², [B_N^D]''(x_{N+1}) = 0, [B_{N+1}^D]''(x_N) = 6/h², [B_{N+1}^D]''(x_{N+1}) = −36/h².

It also follows from (2.5), (2.6), (2.4), and (2.3) that, for i = 1,...,N,

(2.7) B_m^D(x_i) − (h²/6)[B_m^D]''(x_i) = 6 if m = i, and 0 if m ≠ i, m = 0,...,N+1.

Throughout the paper, C denotes a generic positive constant that is independent of u and h.

Lemma 2.1. The functions {B_m^D}_{m=0}^{N+1} of (2.4) satisfy max_{x∈[0,1]} |B_m^D(x)| ≤ C, m = 0,...,N+1.

Proof. For each fixed m = −1,...,N+2, using I_i = [x_{i−1},x_i] and x_i = ih, we have

(2.8) 0 ≤ (x − x_{m−2})/h ≤ 1 for x ∈ I_{m−1}, 0 ≤ (x − x_{m−1})/h ≤ 1 for x ∈ I_m, 0 ≤ (x_{m+1} − x)/h ≤ 1 for x ∈ I_{m+1}, 0 ≤ (x_{m+2} − x)/h ≤ 1 for x ∈ I_{m+2}.

Equations (2.2) and (2.8) give

(2.9) |g_1[(x − x_{m−2})/h]| ≤ 1 for x ∈ I_{m−1}, |g_2[(x − x_{m−1})/h]| ≤ 7 for x ∈ I_m, |g_2[(x_{m+1} − x)/h]| ≤ 7 for x ∈ I_{m+1}, |g_1[(x_{m+2} − x)/h]| ≤ 1 for x ∈ I_{m+2}.
Using (2.1) and (2.9), we see that max_{x∈[0,1]} |B_m(x)| ≤ C, m = −1,...,N+2. Hence the required inequality follows from (2.4), which implies that each B_m^D is a linear combination of at most two of the functions {B_n}_{n=−1}^{N+2}.

For {B_m^D}_{m=0}^{N+1} of (2.4), we introduce N × N matrices A and B defined by

(2.10) A = (a_{i,m})_{i,m=1}^N, a_{i,m} = [B_m^D]''(x_i), B = (b_{j,n})_{j,n=1}^N, b_{j,n} = B_n^D(y_j).

It follows from (2.4), (2.3), (2.5), and (2.6) that

(2.11) A = 6h⁻²T, B = T + 6I,

where I is the identity matrix and the N × N matrix T is given by

(2.12) T = tridiag(1, −2, 1), that is, t_{i,i} = −2, t_{i,i±1} = 1, and t_{i,j} = 0 for |i − j| > 1.

Lemma 2.2. If B[u_1,...,u_N]^T = [v_1,...,v_N]^T, where B is defined in (2.10), then max_{1≤i≤N} |u_i| ≤ C max_{1≤i≤N} |v_i|.

Proof. It follows from (2.10), (2.11), and (2.12) that

|b_{i,i}| − Σ_{j≠i} |b_{i,j}| ≥ 2, i = 1,...,N.

Hence the required result follows, for example, from the discussion on page 21 in [1].

In what follows, [u_{1,1},...,u_{N,N}]^T is the short notation for [u_{1,1},...,u_{1,N},u_{2,1},...,u_{2,N},...,u_{N,1},...,u_{N,N}]^T.

Lemma 2.3. If u = [u_{1,1},...,u_{N,N}]^T and v = [v_{1,1},...,v_{N,N}]^T are such that (B ⊗ B)u = v, where B is defined in (2.10), then max_{1≤i,j≤N} |u_{i,j}| ≤ C max_{1≤i,j≤N} |v_{i,j}|.

Proof. Since B ⊗ B = (B ⊗ I)(I ⊗ B), we have

(2.13) v = (B ⊗ I)w, w = (I ⊗ B)u.

Using (2.13) and Lemma 2.2, we obtain

max_{1≤i,j≤N} |u_{i,j}| ≤ C max_{1≤i,j≤N} |w_{i,j}|, max_{1≤i,j≤N} |w_{i,j}| ≤ C max_{1≤i,j≤N} |v_{i,j}|,

which imply the required inequality.

It is well known (see [10]) that for T of (2.12), we have

(2.14) QTQ = Λ, QQ = I,

where the N × N matrices Λ and Q are given by

(2.15) Λ = diag(λ_i)_{i=1}^N, λ_i = −4 sin²(iπ/(2(N+1))),

(2.16) Q = (q_{i,j})_{i,j=1}^N, q_{i,j} = (2/(N+1))^{1/2} sin(ijπ/(N+1)).
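The nodal values (2.3), the identity (2.7), and the eigendecomposition (2.14)-(2.16) can all be confirmed numerically. The sketch below is our own illustration; it assumes the sign conventions above, in particular T = tridiag(1,−2,1) and λ_i = −4 sin²(iπ/(2(N+1))).

```python
import numpy as np

# Pieces of the B-spline basis (2.1)-(2.2) and their second derivatives.
g1 = lambda t: t**3
g2 = lambda t: 1 + 3*t + 3*t**2 - 3*t**3
d2g1 = lambda t: 6*t
d2g2 = lambda t: 6 - 18*t

h = 0.1
# Nodal values (2.3): B_{m-1}(x_m) = B_{m+1}(x_m) = 1, B_m(x_m) = 4,
# B''_{m±1}(x_m) = 6/h^2, B''_m(x_m) = -12/h^2 (chain rule brings in 1/h^2).
assert g1(1.0) == 1.0 and g2(1.0) == 4.0
assert np.isclose(d2g1(1.0) / h**2, 6 / h**2)
assert np.isclose(d2g2(0.0) / h**2, 6 / h**2)
assert np.isclose(d2g2(1.0) / h**2, -12 / h**2)
# (2.7): B_m(x_m) - (h^2/6) B''_m(x_m) = 4 + 2 = 6, and 1 - 1 = 0 for neighbors.
assert np.isclose(g2(1.0) - h**2 / 6 * d2g2(1.0) / h**2, 6.0)
assert np.isclose(g1(1.0) - h**2 / 6 * d2g1(1.0) / h**2, 0.0)

# Eigendecomposition (2.14)-(2.16) of T = tridiag(1, -2, 1).
N = 8
T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
k = np.arange(1, N + 1)
lam = -4.0 * np.sin(k * np.pi / (2 * (N + 1))) ** 2
Q = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(k, k) * np.pi / (N + 1))
assert np.allclose(Q @ Q, np.eye(N))          # QQ = I (Q is symmetric orthogonal)
assert np.allclose(Q @ T @ Q, np.diag(lam))   # QTQ = Λ
```

The identity (2.7) is what collapses the boundary collocation equations (1.4)-(1.5) to the explicit formulas for the boundary coefficients derived in section 3.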
Lemma 2.4. If v = [v_{1,1},...,v_{N,N}]^T and w = [w_{1,1},...,w_{N,N}]^T are such that

(2.17) [T/h² ⊗ I + I ⊗ T/h² + (h²/6)(T/h² ⊗ T/h²)] v = w,

where T is the matrix defined in (2.12), then

max_{1≤i,j≤N} v_{i,j}² ≤ Ch² Σ_{i=1}^N Σ_{j=1}^N w_{i,j}².

Proof. The matrix in (2.17) arises in the fourth order finite difference method for (1.1). Hence the desired result follows, for example, from the last unnumbered equation on page 296 in [9].

Finally, we observe that the matrix-vector form of

(2.18) φ_{i,j} = Σ_{m=1}^N Σ_{n=1}^N c_{i,m}^{(1)} c_{j,n}^{(2)} ψ_{m,n}, i,j = 1,...,N,

is

(2.19) φ = (C_1 ⊗ C_2)ψ,

where C_1 = (c_{i,m}^{(1)})_{i,m=1}^N, C_2 = (c_{j,n}^{(2)})_{j,n=1}^N, and φ = [φ_{1,1},...,φ_{N,N}]^T, ψ = [ψ_{1,1},...,ψ_{N,N}]^T.

3. Matrix-Vector Form of the Scheme. Since dim(S_D ⊗ S_D) = (N+2)², the scheme (1.3)-(1.6) involves (N+2)² equations in (N+2)² unknowns. Using the basis {B_m^D}_{m=0}^{N+1} of (2.4) for the space S_D, we have

(3.1) u_h(x,y) = Σ_{m=0}^{N+1} Σ_{n=0}^{N+1} u_{m,n} B_m^D(x) B_n^D(y).

Substituting (3.1) into (1.3), we obtain

(3.2) 2 Σ_{m=0}^{N+1} Σ_{n=0}^{N+1} u_{m,n} [B_m^D]''(x_i)[B_n^D]''(y_j) = Δf(x_i,y_j), i,j = 0,N+1.

Using (2.6), we conclude that (3.2) gives

(3.3) u_{i,j} = (h⁴/2592) Δf(x_i,y_j), i,j = 0,N+1.

Substituting (3.1) into (1.4), we obtain

(3.4) Σ_{m=0}^{N+1} Σ_{n=0}^{N+1} u_{m,n} [B_m^D]''(x_i) ( B_n^D(y_j) − (h²/6)[B_n^D]''(y_j) ) = f(x_i,y_j) − (h²/12)Δf(x_i,y_j), i = 0,N+1, j = 1,...,N.

Using (2.6) and (2.7), we see that (3.4) gives

(3.5) u_{i,j} = −(h²/216) f(x_i,y_j) + (h⁴/2592) Δf(x_i,y_j), i = 0,N+1, j = 1,...,N.
Using (3.5) and symmetry with respect to x and y, we conclude that (1.5) gives

(3.6) u_{i,j} = −(h²/216) f(x_i,y_j) + (h⁴/2592) Δf(x_i,y_j), i = 1,...,N, j = 0,N+1.

Substituting (3.1) into (1.6), we obtain

(3.7) Σ_{m=0}^{N+1} Σ_{n=0}^{N+1} u_{m,n} ( [B_m^D]''(x_i) B_n^D(y_j) + ( B_m^D(x_i) − (h²/6)[B_m^D]''(x_i) ) [B_n^D]''(y_j) ) = f(x_i,y_j) − (h²/12)Δf(x_i,y_j), i,j = 1,...,N.

Moving the terms involving {u_{m,n}}_{n=0}^{N+1}, m = 0,N+1, and {u_{m,n}}_{m=1}^N, n = 0,N+1, to the right-hand side of (3.7), we get

(3.8) Σ_{m=1}^N Σ_{n=1}^N u_{m,n} ( [B_m^D]''(x_i) B_n^D(y_j) + ( B_m^D(x_i) − (h²/6)[B_m^D]''(x_i) ) [B_n^D]''(y_j) ) = p_{i,j}, i,j = 1,...,N,

where

p_{i,j} = f(x_i,y_j) − (h²/12)Δf(x_i,y_j)
  − Σ_{m=0,N+1} Σ_{n=0}^{N+1} u_{m,n} ( [B_m^D]''(x_i) B_n^D(y_j) + ( B_m^D(x_i) − (h²/6)[B_m^D]''(x_i) ) [B_n^D]''(y_j) )
  − Σ_{m=1}^N Σ_{n=0,N+1} u_{m,n} ( [B_m^D]''(x_i) B_n^D(y_j) + ( B_m^D(x_i) − (h²/6)[B_m^D]''(x_i) ) [B_n^D]''(y_j) ).

Using (2.18)-(2.19), we write (3.8) as

(3.9) [ A ⊗ B + (B − (h²/6)A) ⊗ A ] u = p,

where u = [u_{1,1},...,u_{N,N}]^T, p = [p_{1,1},...,p_{N,N}]^T, and A, B are defined in (2.10). Using (2.11), we see that

A ⊗ B + (B − (h²/6)A) ⊗ A = (6/h²) [6T ⊗ I + (6I + T) ⊗ T],

and hence the system (3.9) simplifies to

(3.10) (6/h²) [6T ⊗ I + (6I + T) ⊗ T] u = p.

We are now ready to prove existence and uniqueness of u_h in S_D ⊗ S_D satisfying (1.3)-(1.6).

Theorem 3.1. There exists a unique u_h in S_D ⊗ S_D satisfying (1.3)-(1.6).

Proof. Since the number of equations in (1.3)-(1.6) is equal to the number of unknowns, we assume that the right-hand side in (1.3)-(1.6) is zero, and show that u_h = 0 is the only solution of the resulting scheme. Using (3.1), (3.3), (3.5), and (3.6), we have

(3.11) u_{m,n} = 0, m = 0,N+1, n = 0,...,N+1, and m = 1,...,N, n = 0,N+1.
Clearly

(3.12) (6/h²)[6T ⊗ I + (6I + T) ⊗ T] = 36 [ T/h² ⊗ I + I ⊗ T/h² + (h²/6)(T/h² ⊗ T/h²) ].

Hence it follows from (3.10) with p replaced by 0, (3.12), and Lemma 2.4 that

(3.13) u_{m,n} = 0, m,n = 1,...,N.

Equations (3.1), (3.11), and (3.13) give u_h = 0.

Using Q of (2.16), we see that (3.10) is equivalent to

(3.14) (6/h²)(Q ⊗ I)[6T ⊗ I + (6I + T) ⊗ T](Q ⊗ I)(Q⁻¹ ⊗ I)u = (Q ⊗ I)p.

Introducing ũ = (Q⁻¹ ⊗ I)u and p̃ = (Q ⊗ I)p, and using (3.14) and (2.14), we obtain

(3.15) (6/h²)[6Λ ⊗ I + (6I + Λ) ⊗ T] ũ = p̃,

where Λ is defined in (2.15). The system (3.15) reduces to the N independent systems

(3.16) (6/h²)[6λ_i I + (6 + λ_i)T] ũ_i = p̃_i, i = 1,...,N,

where ũ_i = [ũ_{i,1},...,ũ_{i,N}]^T, p̃_i = [p̃_{i,1},...,p̃_{i,N}]^T, i = 1,...,N. We have the following algorithm for solving (3.10):

Step 1. Compute p̃ = (Q ⊗ I)p.
Step 2. Solve the N systems in (3.16).
Step 3. Compute u = (Q ⊗ I)ũ.

Since the entries of Q in (2.16) are given in terms of sines, steps 1 and 3 are each performed using FFTs at a cost O(N² log N). In step 2, the systems are tridiagonal, so this step is performed at a cost O(N²). Thus the total cost of the algorithm is O(N² log N).

4. Convergence Analysis. In what follows, C(u) denotes a generic positive constant that is independent of h, but depends on u. Our goal is to show that if u in C⁶(Ω̄) and u_h in S_D ⊗ S_D are the solutions of (1.1) and (1.3)-(1.6), respectively, then

(4.1) ‖u − u_h‖_{C(Ω̄)} ≤ C(u)h⁴,

where ‖g‖_{C(Ω̄)} = max_{(x,y)∈Ω̄} |g(x,y)| for g in C(Ω̄).

To prove (4.1), for u in C⁴(Ω̄), we introduce two comparison functions, the spline interpolants S and Z in S_D ⊗ S_D of u defined respectively by

(4.2) D_x²D_y²S(x_i,y_j) = D_x²D_y²u(x_i,y_j), i,j = 0,N+1,

(4.3) D_x²S(x_i,y_j) − (h²/6)D_x²D_y²S(x_i,y_j) = D_x²u(x_i,y_j) − (h²/12)D_x⁴u(x_i,y_j) − (h²/6)D_x²D_y²u(x_i,y_j), i = 0,N+1, j = 1,...,N,
(4.4) D_y²S(x_i,y_j) − (h²/6)D_x²D_y²S(x_i,y_j) = D_y²u(x_i,y_j) − (h²/12)D_y⁴u(x_i,y_j) − (h²/6)D_x²D_y²u(x_i,y_j), i = 1,...,N, j = 0,N+1,

(4.5) S(x_i,y_j) = u(x_i,y_j), i,j = 1,...,N,

and

(4.6) D_x²D_y²Z(x_i,y_j) = D_x²D_y²u(x_i,y_j), i,j = 0,N+1,

(4.7) D_x²Z(x_i,y_j) = D_x²u(x_i,y_j), i = 0,N+1, j = 1,...,N,

(4.8) D_y²Z(x_i,y_j) = D_y²u(x_i,y_j), i = 1,...,N, j = 0,N+1,

(4.9) Z(x_i,y_j) = u(x_i,y_j), i,j = 1,...,N.

It follows from (1.1) that

(4.10) f = D_x²u + D_y²u, Δf = D_x⁴u + D_y⁴u + 2D_x²D_y²u.

Hence, using u = 0 on ∂Ω, we see that (1.3)-(1.5) reduce, respectively, to

(4.11) D_x²D_y²u_h(x_i,y_j) = D_x²D_y²u(x_i,y_j), i,j = 0,N+1,

(4.12) D_x²u_h(x_i,y_j) − (h²/6)D_x²D_y²u_h(x_i,y_j) = D_x²u(x_i,y_j) − (h²/12)D_x⁴u(x_i,y_j) − (h²/6)D_x²D_y²u(x_i,y_j), i = 0,N+1, j = 1,...,N,

(4.13) D_y²u_h(x_i,y_j) − (h²/6)D_x²D_y²u_h(x_i,y_j) = D_y²u(x_i,y_j) − (h²/12)D_y⁴u(x_i,y_j) − (h²/6)D_x²D_y²u(x_i,y_j), i = 1,...,N, j = 0,N+1.

Comparing (4.11)-(4.13) and (4.2)-(4.4), we see that u_h and S are defined in the same way for i = 0,N+1, j = 0,...,N+1, and i = 1,...,N, j = 0,N+1. On the other hand, (4.6)-(4.8) are a simplified, tensor product, version of (4.2)-(4.4). The triangle inequality gives

(4.14) ‖u − u_h‖_{C(Ω̄)} ≤ ‖u − Z‖_{C(Ω̄)} + ‖Z − S‖_{C(Ω̄)} + ‖S − u_h‖_{C(Ω̄)}.

In what follows, we bound the three terms on the right-hand side of (4.14).
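The matrix decomposition algorithm of section 3 can be sketched as follows. This is our own illustration, assuming T = tridiag(1,−2,1) and the sign conventions above; for brevity, step 2 uses a dense solve in place of O(N) tridiagonal elimination, and steps 1 and 3 apply Q by matrix multiplication rather than by FFTs. The sketch solves (3.10) and checks the result against a direct solve of the full Kronecker product system.

```python
import numpy as np

def solve_mncsc_system(p_grid):
    # Steps 1-3 of section 3 for (6/h^2)[6 T(x)I + (6I+T)(x)T] u = p,
    # where p_grid[i-1, j-1] = p_{i,j} and T = tridiag(1, -2, 1).
    N = p_grid.shape[0]
    h = 1.0 / (N + 1)
    k = np.arange(1, N + 1)
    lam = -4.0 * np.sin(k * np.pi / (2 * (N + 1))) ** 2              # (2.15)
    Q = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(k, k) * np.pi / (N + 1))  # (2.16)
    T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1))
    p_hat = Q @ p_grid                                # Step 1: (Q (x) I) p
    u_hat = np.empty_like(p_hat)
    for i in range(N):                                # Step 2: N systems (3.16)
        M = (6.0 / h**2) * (6.0 * lam[i] * np.eye(N) + (6.0 + lam[i]) * T)
        u_hat[i] = np.linalg.solve(M, p_hat[i])       # tridiagonal in practice
    return Q @ u_hat                                  # Step 3: (Q (x) I) u~

# Check against a direct solve of the full Kronecker system (3.10).
N = 6
h = 1.0 / (N + 1)
T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
K = (6.0 / h**2) * (6.0 * np.kron(T, np.eye(N)) + np.kron(6.0 * np.eye(N) + T, T))
rng = np.random.default_rng(0)
p = rng.standard_normal((N, N))
assert np.allclose(solve_mncsc_system(p).ravel(), np.linalg.solve(K, p.ravel()))
```

Here the row-major flattening of the N × N grid matches the ordering [u_{1,1},...,u_{1,N},u_{2,1},...,u_{N,N}]^T used in the paper, so np.kron(A, B) applied to the flattened grid corresponds to A U Bᵀ on the grid.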
9 4.1. Bounding u Z C(Ω). We need the following results. Lemma 4.1. Let the interpolant I x v in S 3 of v in C 2 [0,1] be defined by (4.15) (I x v) (x i ) = v (x i ),i = 0,N + 1, I x v(x i ) = v(x i ),i = 0,...,N + 1. Then (4.16) max v(x) I xv(x) C max x [0,1] x [0,1] v (x) h 2. If v C 4 [0,1], then (4.17) max v(x) I xv(x) C max x [0,1] x [0,1] v(4) (x) h 4. Proof. First we prove (4.16). Using the discussion on page 404 in [5], we have (4.18) I x v(x) = v(x i ) + B i (x x i ) + C i (x x i ) 2 + D i (x x i ) 3, x [x i,x i+1 ], for i = 0,...,N, where (4.19) B i = h 6 r i+1 h 3 r i + 1 h [v(x i+1) v(x i )], C i = r i 2, D i = 1 6h (r i+1 r i ), and r i = (I x v) (x i ). Equations (4.18) and (4.19) give (4.20) where (4.21) I x v(x) v(x) = A i (x) h 6 r i+1(x x i ) h 3 r i(x x i ) + r i 2 (x x i) h (r i+1 r i )(x x i ) 3, x [x i,x i+1 ], A i (x) = v(x i ) v(x) + v(x i+1) v(x i ) (x x i ), x [x i,x i+1 ]. h Using (4.20) and the triangle inequality, we obtain, for x [x i,x i+1 ], I x v(x) v(x) A i (x) + h ( r 2 i + 1 ) (4.22) 3 r i+1 A i (x) h2 max r i. 0 i We introduce where (4.23) E = (e i,j ) i,j=0 = , r = [r 0,...,r ] T, p = [p 0,...,p ] T, { v p i = (x i ), i = 0,N + 1, h 2 [v(x i 1 ) 2v(x i ) + v(x i+1 )], i = 1,...,N. It follows from the discussion on pages 400 and 401 in [5] that Er = p. Since e i,i e i,j 1, i,j = 0,...,N + 1, the discussion on page 21 in [1] implies that i j (4.24) max r i C max p i. 0 i 0 i 9
10 Using Taylor s theorem, we obtain (4.25) v(x i 1 ) 2v(x i ) + v(x i+1 ) Ch 2 max x [0,1] v (x), i = 1,...,N. It follows from (4.24), (4.23) and (4.25), that (4.26) max r i C max 0 i x [0,1] v (x). Using Taylors theorem to expand v(x), x [x i,x i+1 ], around x i, we have (4.27) v(x) = v(x i ) + (x x i )v (x i ) + (x x i) 2 v (ξ i,x ), x i ξ i,x x. 2 Using (4.21), (4.27), and the triangle inequality, we obtain, for x [x i,x i+1 ], A i (x) = (x x i ) 2 v (ξ i,x ) h 2 2 (x x (4.28) i)v (ξ i,xi+1 ) h2 max x [0,1] v (x). Inequality (4.16) follows from (4.22), (4.26), and (4.28). A proof of (4.17) is given in the proof of Theorem in [1]. Lemma 4.2. If u C 4 (Ω), Z in S D S D is defined by (4.6) (4.9), and I x u and I y u are defined in (4.15), then for (x,y) in Ω, we have (4.29) Z(x,y) = I x (I y u)(x,y), D 2 x(i y u)(x,y) = I y (D 2 xu)(x,y). Proof. Let {C i } N+3 i=0 be the basis for S 3 such that C i (x j ) = δ ij, i,j = 0,...,N + 1, (4.30) C i (x j) = 0, i = 0,...,N + 1, j = 0,N + 1, C N+2 (x j ) = C N+3 (x j ) = 0, j = 0,...,N + 1, C N+2 (x 0) = C N+3 (x ) = 1, C N+2 (x ) = C N+3 (x 0) = 0, where δ ij is the Kronecker delta. Using (4.15) and (4.30), we have for (x,y) Ω, I x (I y u)(x,y) = I x u(x,y j )C j (y) + Dyu(x,y 2 0 )C N+2 (y) + Dyu(x,y 2 )C N+3 (y) = i=0 j=0 u(x i,y j )C j (y) + Dyu(x 2 i,y 0 )C N+2 (y) + Dyu(x 2 i,y )C N+3 (y) C i (x) j=0 + Dxu(x 2 0,y j )C j (y) + DxD 2 yu(x 2 0,y 0 )C N+2 (y) j=0 +DxD 2 yu(x 2 0,y )C N+3 (y) ] C N+2 (x) + Dxu(x 2,y j )C j (y) +D 2 xd 2 yu(x,y 0 )C N+2 (y) + D 2 xd 2 yu(x,y )C N+3 (y) ] C N+3 (x). 10 j=0
11 Since u = 0 on Ω, all terms involving C 0 (x), C (x), C 0 (y), C (y) drop out which implies that I x (I y u) S D S D. Using (4.30), we verify that I x (I y u) satisfies (4.6) (4.9), that is, (4.6) (4.9) hold with I x (I y u) in place of Z. Hence, the uniqueness of the interpolant Z implies the first equation in (4.29). To prove the second equation in (4.29), we use (4.15) and (4.30) to see that for (x,y) Ω, I y (Dxu)(x,y) 2 = Dxu(x,y 2 j )C j (y) + DxD 2 yu(x,y 2 0 )C N+2 (y) + DxD 2 yu(x,y 2 )C N+3 (y) j=0 = Dx 2 u(x,y j )C j (y) + Dyu(x,y 2 0 )C N+2 (y) + Dyu(x,y 2 )C N+3 (y) j=0 = D 2 x(i y u)(x,y). Theorem 4.1. If u C 4 (Ω) and Z in S D S D is defined by (4.6) (4.9), then u Z C(Ω) C(u)h 4. Proof. Using (4.29) and the triangle inequality, we have (4.31) u Z C(Ω) u I x u C(Ω) + I x (u I y u) (u I y u) C(Ω) + u I y u C(Ω). For any fixed y in [0,1], I x u(,y) is the cubic spline interpolant of u(,y). Using this, symmetry with respect to x and y, and (4.17), we have (4.32) u I x u C(Ω) C(u)h 4, u I y u C(Ω) C(u)h 4. For any fixed y in [0,1], I x (u I y u)(,y) is the cubic spline interpolant of (u I y u)(,y). Hence it follows from (4.16) that (4.33) I x (u I y u) (u I y u) C(Ω) C D 2 x(u I y u) C(Ω) h 2. Using (4.29) and(4.16), we obtain (4.34) D 2 x(u I y u) C(Ω) = D 2 xu I y (D 2 xu) C(Ω) C D 2 xd 2 yu C(Ω) h 2. Combining (4.33) and (4.34), we have (4.35) I x (u I y u) (u I y u) C(Ω) C(u)h 4. The desired inequality now follows from (4.31), (4.32), and (4.35) Bounding Z S C(Ω). We start by proving the following lemma. Lemma 4.3. If u C 4 (Ω) and (4.36) S(x,y) = s m,n Bm(x)B D n D (y), Z(x,y) = m=0 n=0 m=0 n=0 are defined by (4.2) (4.5) and (4.6) (4.9), respectively, then s m,n z m,n C(u)h 4, m,n = 0,...,N z m,n Bm(x)B D n D (y),
12 Proof. Using (4.2), (4.6), and following the derivation of (3.3) from (1.3), we obtain (4.37) s m,n = z m,n, m,n = 0,N + 1. Next we prove the required inequality for m = 0, n = 1,..., N. Using (4.7), we have (4.38) D 2 x(s Z)(x 0,y j ) = D 2 xs(x 0,y j ) D 2 xu(x 0,y j ), j = 1,...,N. It follows from (4.36), (4.37), and (2.6) that (4.39) N Dx(S 2 Z)(x 0,y j ) = 36h 2 (s 0,n z 0,n )Bn D (y j ), j = 1,...,N. Using (4.36), (2.6), (2.4), (2.3), and (2.5), we obtain, for j = 1,...,N, (4.40) n=1 DxS(x 2 0,y j ) = 36h 2 s 0,n Bn D (y j ) = 36h 2 (s 0,j 1 + 4s 0,j + s 0,j+1 ). n=0 Substituting (4.39) and (4.40) into (4.38), and multiplying through by h 2 /36, we have N (4.41) (s 0,n z 0,n )Bn D (y j ) = s 0,j 1 + 4s 0,j + s 0,j+1 + h2 36 D2 xu(x 0,y j ) n=1 for j = 1,..., N. Using (4.2), (4.3), and following the derivations of (3.3) from (1.3) and (3.5) from (1.4), we obtain (4.42) and (4.43) s 0,j = h D2 xd 2 yu(x 0,y j ), j = 0,N + 1, s 0,j = [D h2 2xu(x ] 0,y j ) h D4 xu(x 0,y j ) h2 6 D2 xdyu(x 2 0,y j ) for j = 1,...,N. Since u = 0 on Ω, (4.42) is the same as (4.43) with j = 0,N + 1. This observation and (4.43) imply that for j = 1,...,N, we have s 0,j±1 = [D h2 2xu(x ] (4.44) 0,y j±1 ) h D4 xu(x 0,y j±1 ) h2 6 D2 xdyu(x 2 0,y j±1 ). Using Taylor s theorem, we obtain (4.45) D 2 xu(x 0,y j±1 ) = D 2 xu(x 0,y j ) ± hd 2 xd y u(x 0,y j ) + h2 2 D2 xd 2 yu(x 0,ξ ± j ), where y j 1 ξ j y j, y j ξ + j y j+1. Using (4.44), (4.43), and (4.45), we obtain s 0,j 1 + 4s 0,j + s 0,j+1 + h2 (4.46) 36 D2 xu(x 0,y j ) C(u)h4, j = 1,...,N. It follows from (4.46) that (4.41) is a system in {s 0,n z 0,n } N n=1 with the matrix B defined in(2.10) and with each entry on the right-hand side bounded in absolute value by C(u)h 4. Hence, Lemma 2.2 implies (4.47) max s 0,n z 0,n C(u)h 4. 1 n N 12
13 Using (4.47) and symmetry with respect to x and y, we also have (4.48) max s,n z,n C(u)h 4, 1 n N max s m,n z m,n C(u)h 4, n = 0,N m N Finally we prove the required inequality for m,n = 1,...,N. Using (4.5) and (4.9), we have (S Z)(x i,y j ) = 0, i,j = 1,...,N, which, by (4.36) and (4.37), can be written as (4.49) where N N (s m,n z m,n )Bm(x D i )Bn D (y j ) = d i,j, i,j = 1,...N, m=1 n=1 d i,j = N N + (z m,n s m,n )Bm(x D i )Bn D (y j ). m=0, n=1 m=1 n=0, Since for any fixed i,j, each of the above double sums reduces to at most three terms, using the triangle inequality, (4.47), (4.48), and Lemma 2.1, we obtain (4.50) d i,j C(u)h 4, i,j = 1,...,N. It follows from (2.18) (2.19) that (4.49) is a system in {z m,n s m,n } N m,n=1 with the matrix B B, where B is defined in (2.10). Hence, for m, n = 1,..., N, the required inequality follows from (4.50) and Lemma 2.3. Theorem 4.2. If u C 4 (Ω) and S, Z in S D S D are defined by (4.2) (4.5) and (4.6) (4.9), respectively, then Z S C(Ω) C(u)h 4. Proof. Since Z S is continuous on Ω, there is (x,y ) in Ω such that Z S C(Ω) = (Z S)(x,y ). Hence, (4.36) and the triangle inequality imply Z S C(Ω) s m,n z m,n Bm(x D ) Bn D (y ). m=0 n=0 Since the above double sum reduces to at most nine terms, the required inequality follows from Lemmas 4.3 and Bounding S u h C(Ω) and u u h C(Ω). We need the following results. Lemma 4.4. If u C 6 (Ω) and S in S D S D is defined by (4.2) (4.5), then for i = 0,N + 1, j = 1,...,N, D 2 (4.51) x DyS(x 2 i,y j ) DxD 2 yu(x 2 i,y j ) C(u)h 2, (4.52) D2 xs(x i,y j ) Dxu(x 2 i,y j ) + h2 12 D4 xu(x i,y j ) C(u)h4. 13
14 Proof. We prove (4.51) for i = 0; for i = N + 1, (4.51) follows by symmetry with respect to x. Using (4.36), we obtain D 2 xd 2 ys(x 0,y j ) = m=0 n=0 and hence (2.4), (2.3), and (2.6) imply [ ] s m,n B D m (x0 ) [ Bn D ] (yj ), j = 1,...,N, (4.53) D 2 xd 2 ys(x 0,y j ) = 216h 4 (s 0,j 1 2s 0,j + s 0,j+1 ), j = 1,...,N. Equations (4.53), (4.43), and (4.44) give, for j = 1,...,N, (4.54) DxD 2 ys(x 2 0,y j ) DxD 2 yu(x 2 0,y j ) = DxD 2 yu(x 2 0,y j ) +h 2 [ Dxu(x 2 0,y j 1 ) 2Dxu(x 2 0,y j ) + Dxu(x 2 0,y j+1 ) ] 1 [ D 4 12 x u(x 0,y j 1 ) 2Dxu(x 4 0,y j ) + Dxu(x 4 0,y j+1 ) ] 1 6 [ D 2 x D 2 yu(x 0,y j 1 ) 2D 2 xd 2 yu(x 0,y j ) + D 2 xd 2 yu(x 0,y j+1 ) ]. Using Taylor s theorem, we obtain (4.55) D 2 xu(x 0,y j±1 ) = D 2 xu(x 0,y j ) ± hd 2 xd y u(x 0,y j ) + h2 2 D2 xd 2 yu(x 0,y j ) ± h3 3! D2 xd 3 yu(x 0,y j ) + h4 4! D2 xd 4 yu(x 0,ξ ± j ), (4.56) D 4 xu(x 0,y j±1 ) = D 4 xu(x 0,y j ) ± hd 4 xd y u(x 0,y j ) + h2 2 D4 xd 2 yu(x 0,η ± j ), (4.57) D 2 xd 2 yu(x 0,y j±1 ) = D 2 xd 2 yu(x 0,y j ) ± hd 2 xd 3 yu(x 0,y j ) + h2 2 D2 xd 4 yu(x 0,κ ± j ), where y j 1 ξ j, η j,κ j y j, y j ξ + j,η+ j,κ+ j y j+1. Equations (4.55) (4.57) give h 2 [ Dxu(x 2 0,y j 1 ) 2Dxu(x 2 0,y j ) + Dxu(x 2 0,y j+1 ) ] DxD 2 yu(x 2 0,y j ) C(u)h 2, D 4 x u(x 0,y j 1 ) 2Dxu(x 4 0,y j ) + Dxu(x 4 0,y j+1 ) C(u)h 2, D 2 x Dyu(x 2 0,y j 1 ) 2DxD 2 yu(x 2 0,y j ) + DxD 2 yu(x 2 0,y j+1 ) C(u)h 2, and hence (4.51) for i = 0 follows from (4.54) and the triangle inequality. Using (4.3) and (4.51), we obtain (4.52). Lemma 4.5. If u C 6 (Ω) and S in S D S D is defined by (4.2) (4.5), then, for i,j = 1,...,N, we have D2 xs(x i,y j ) Dxu(x 2 i,y j ) + h2 (4.58) 12 D4 xu(x i,y j ) C(u)h4, (4.59) D2 ys(x i,y j ) Dyu(x 2 i,y j ) + h2 12 D4 yu(x i,y j ) C(u)h4, (4.60) D 2 xd 2 ys(x i,y j ) D 2 xd 2 yu(x i,y j ) C(u)h 2. 14
15 Proof. First we prove (4.58). For i = 0,...,N + 1, j = 1,...,N, we introduce d i,j = DxS(x 2 i,y j ) [D 2xu(x ] (4.61) i,y j ) h2 12 D4 xu(x i,y j ). Then (4.62) where d i 1,j + 4d i,j + d i+1,j = φ i,j ψ i,j, i,j = 1,...,N, (4.63) φ i,j = DxS(x 2 i 1,y j ) + 4DxS(x 2 i,y j ) + D ] xs(x 2 i+1,y j ) 6 [D 2xu(x i,y j ) + h2 12 D4 xu(x i,y j ), ψ i,j = Dxu(x 2 i 1,y j ) h2 12 D4 xu(x i 1,y j ) + 4 [D 2xu(x ] i,y j ) h2 12 D4 xu(x i,y j ) +Dxu(x 2 i+1,y j ) h2 12 D4 xu(x i+1,y j ) 6 [D 2xu(x ] i,y j ) + h2 (4.64) 12 D4 xu(x i,y j ) = Dxu(x 2 i 1,y j ) 2Dxu(x 2 i,y j ) + Dxu(x 2 i+1,y j ) h2 12 [ D 4 x u(x i 1,y j ) + 10D 4 xu(x i,y j ) + D 4 xu(x i+1,y j ) ]. Since S(,y j ) S 3, (2.1.7) in [1], (4.5), and S = u = 0 on Ω, imply that (4.65) D 2 xs(x i 1,y j ) + 4D 2 xs(x i,y j ) + D 2 xs(x i+1,y j ) = 6h 2 [u(x i 1,y j ) 2u(x i,y j ) + u(x i+1,y j )], i,j = 1,...,N. Using Taylor s theorem, we obtain u(x i±1,y j ) = u(x i,y j ) ± hd x u(x i,y j ) + h2 2 D2 xu(x i,y j )± h3 3! D3 xu(x i,y j ) + h4 4! D4 xu(x i,y j )± h5 5! D5 xu(x i,y j ) + h6 6! D6 xu(ξ ± i,y j), where x i 1 ξ i x i, x i ξ + i x i+1, and hence h 2 [u(x i 1,y j ) 2u(x i,y j ) + u(x ] i+1,y j )] (4.66) [D 2xu(x i,y j ) + h2 12 D4 xu(x i,y j ) C(u)h 4, i,j = 1,...,N. Using (4.63), (4.65), and (4.66), we obtain (4.67) φ i,j C(u)h 4, i,j = 1,...,N. Using Taylor s theorem, we obtain D 2 xu(x i±1,y j ) = D 2 xu(x i,y j ) ± hd 3 xu(x i,y j ) + h2 2 D4 xu(x i,y j ) ± h3 3! D5 xu(x i,y j ) + h4 4! D6 xu(ξ ± i,y j), D 4 xu(x i±1,y j ) = D 4 xu(x i,y j ) ± hd 5 xu(x i,y j ) + h2 2 D6 xu(η ± i,y j), 15
16 where x i 1 ξ i,η i (4.68) x i, x i ξ + i,η+ i x i+1, and hence (4.64) gives ψ i,j C(u)h 4, i,j = 1,...,N. Using (4.61) and (4.52), we have (4.69) d i,j C(u)h 4, i = 0,N + 1, j = 1,...,N. It follows from (4.67) (4.69) that moving d 0,j and d,j to the right-hand side of (4.62), we obtain, for each j = 1,...,N, a system in {d i,j } N i=1 with the matrix B of (2.10) (2.12), and with each entry on the right-hand side bounded in absolute value by C(u)h 4. Hence (4.58) follows from (4.61) and Lemma 2.2, and (4.59) follows from (4.58) by symmetry with respect to x and y. Next we prove (4.60). Since S(x, ) S 3 for x [0,1], (2.1.7) in [1] gives (4.70) D 2 ys(x,y j 1 ) + 4D 2 ys(x,y j ) + D 2 ys(x,y j+1 ) = 6h 2 [S(x,y j 1 ) 2S(x,y j ) + S(x,y j+1 )], j = 1,...,N, x [0,1]. Differentiating (4.70) twice with respect to x, we obtain, for j = 1,...,N, x [0,1], (4.71) D 2 xd 2 ys(x,y j 1 ) + 4D 2 xd 2 ys(x,y j ) + D 2 xd 2 ys(x,y j+1 ) = 6h 2 [ D 2 xs(x,y j 1 ) 2D 2 xs(x,y j ) + D 2 xs(x,y j+1 ) ]. Using (4.71) with x = x i 1,x i,x i+1, we obtain, for i,j = 1,...,N, (4.72) D 2 xd 2 ys(x i 1,y j 1 ) + 4D 2 xd 2 ys(x i 1,y j ) + D 2 xd 2 ys(x i 1,y j+1 ) = 6h 2 [ D 2 xs(x i 1,y j 1 ) 2D 2 xs(x i 1,y j ) + D 2 xs(x i 1,y j+1 ) ], (4.73) D 2 xd 2 ys(x i,y j 1 ) + 4D 2 xd 2 ys(x i,y j ) + D 2 xd 2 ys(x i,y j+1 ) = 6h 2 [ D 2 xs(x i,y j 1 ) 2D 2 xs(x i,y j ) + D 2 xs(x i,y j+1 ) ], (4.74) D 2 xd 2 ys(x i+1,y j 1 ) + 4D 2 xd 2 ys(x i+1,y j ) + D 2 xd 2 ys(x i+1,y j+1 ) = 6h 2 [ D 2 xs(x i+1,y j 1 ) 2D 2 xs(x i+1,y j ) + D 2 xs(x i+1,y j+1 ) ]. 
Adding (4.72), (4.74) and (4.73) multiplied through by 4, and using (4.65) and S = u = 0 on Ω, we obtain (4.75) D 2 xd 2 ys(x i 1,y j 1 ) + 4D 2 xd 2 ys(x i 1,y j ) + D 2 xd 2 ys(x i 1,y j+1 ) +4D 2 xd 2 ys(x i,y j 1 ) + 16D 2 xd 2 ys(x i,y j ) + 4D 2 xd 2 ys(x i,y j+1 ) +D 2 xd 2 ys(x i+1,y j 1 ) + 4D 2 xd 2 ys(x i+1,y j ) + D 2 xd 2 ys(x i+1,y j+1 ) = 36h 4 α i,j, i,j = 1,...,N, where (4.76) α i,j = u(x i 1,y j 1 ) 2u(x i,y j 1 ) + u(x i+1,y j 1 ) 2u(x i 1,y j ) + 4u(x i,y j ) 2u(x i+1,y j ) +u(x i 1,y j+1 ) 2u(x i,y j+1 ) + u(x i+1,y j+1 ). Using (4.76) and the discussion on pages in [9], we have (4.77) h 4 α i,j DxD 2 yu(x 2 i,y j ) C(u)h 2, i,j = 1,...,N. 16
17 Equation (4.75) is equivalent to (4.78) D 2 xd 2 y(s u)(x i 1,y j 1 ) + 4D 2 xd 2 y(s u)(x i 1,y j ) +D 2 xd 2 y(s u)(x i 1,y j+1 ) + 4D 2 xd 2 y(s u)(x i,y j 1 ) +16D 2 xd 2 y(s u)(x i,y j ) + 4D 2 xd 2 y(s u)(x i,y j+1 ) +D 2 xd 2 y(s u)(x i+1,y j 1 ) + 4D 2 xd 2 y(s u)(x i+1,y j ) +D 2 xd 2 y(s u)(x i+1,y j+1 ) = 36h 4 α i,j β i,j, i,j = 1,...,N, where β i,j = DxD 2 yu(x 2 i 1,y j 1 ) + 4DxD 2 yu(x 2 i 1,y j ) + DxD 2 yu(x 2 i 1,y j+1 ) (4.79) +4DxD 2 yu(x 2 i,y j 1 ) + 16DxD 2 yu(x 2 i,y j ) + 4DxD 2 yu(x 2 i,y j+1 ) +DxD 2 yu(x 2 i+1,y j 1 ) + 4DxD 2 yu(x 2 i+1,y j ) + DxD 2 yu(x 2 i+1,y j+1 ). Using Taylor s theorem, we obtain D 2 xd 2 yu(x i 1,y j±1 ) = D 2 xd 2 yu(x i,y j ) hd 3 xd 2 yu(x i,y j ) ± hd 2 xd 3 yd y u(x i,y j ) + ɛ ± i,j, D 2 xd 2 yu(x i+1,y j±1 ) = D 2 xd 2 yu(x i,y j ) + hd 3 xd 2 yu(x i,y j ) ± hd 2 xd 3 yd y u(x i,y j ) + σ ± i,j, D 2 xd 2 yu(x i,y j±1 ) = D 2 xd 2 yu(x i,y j ) ± hd 2 xd 3 yu(x i,y j ) + µ ± i,j, D 2 xd 2 yu(x i±1,y j ) = D 2 xd 2 yu(x i,y j ) ± hd 3 xd 2 yu(x i,y j ) + ν ± i,j. where ɛ ±, σ ±, µ ±, ν ± C(u)h 2, i,j = 1,...,N, and hence (4.79) gives (4.80) i,j i,j i,j i,j βi,j 36D 2 xd 2 yu(x i,y j ) C(u)h 2, i,j = 1,...,N. It follows from (4.77) and (4.80) that the right-hand side of (4.78) is bounded in absolute value by C(u)h 2. Using (4.2) and moving terms involving D 2 xd 2 y(s u)(x i,y j ), i = 0,N + 1,j = 1,...,N, i = 1,...,N,j = 0,N + 1, to the right-hand side of (4.78), we obtain a system in {DxD 2 y(s 2 u)(x i,y j )} N i,j=1 with the matrix B B, where B is given in (2.10) (2.12). By (4.51) and symmetry with respect to x and y, each entry on the right-hand side in this system is bounded in absolute value by C(u)h 2. Therefore, (4.60) follows from Lemma 2.3. Lemma 4.6. If u C 6 (Ω) and (4.81) u h (x,y) = u m,n Bm(x)B D n D (y), S(x,y) = m=0 n=0 s m,n Bm(x)B D n D (y), m=0 n=0 are defined by (1.3) (1.6) and (4.2) (4.5), respectively, then (4.82) max s m,n u m,n C(u)h 4, m,n = 1,...,N. 1 m,n N Proof. 
Using (4.2)-(4.4), (4.11)-(4.13), and following the derivations of (3.3) from (1.3), (3.5) from (1.4), and (3.6) from (1.5), we conclude that

(4.83) s_{m,n} = u_{m,n}, m = 0,N+1, n = 0,...,N+1, and m = 1,...,N, n = 0,N+1.

We define {w_{i,j}}_{i,j=1}^N by

(4.84) Δ(S − u_h)(x_i,y_j) − (h²/6)D_x²D_y²(S − u_h)(x_i,y_j) = w_{i,j}, i,j = 1,...,N.
Using (4.84), (1.6), and (4.10), we obtain

w_{i,j} = D_x²S(x_i,y_j) + D_y²S(x_i,y_j) − (h²/6)D_x²D_y²S(x_i,y_j) − D_x²u(x_i,y_j) − D_y²u(x_i,y_j) + (h²/12)[D_x⁴u(x_i,y_j) + D_y⁴u(x_i,y_j) + 2D_x²D_y²u(x_i,y_j)],

and hence (4.58)-(4.60) and the triangle inequality imply that

(4.85) |w_{i,j}| ≤ C(u)h⁴, i,j = 1,...,N.

Introducing v = [s_{1,1} − u_{1,1},...,s_{N,N} − u_{N,N}]^T and w = [w_{1,1},...,w_{N,N}]^T, using (4.84), (4.81), (4.83), and following the derivation of (3.10) from (1.6), we obtain

(4.86) (6/h²)[6T ⊗ I + (6I + T) ⊗ T]v = w.

Since h = 1/(N+1), (4.85) gives h² Σ_{i,j=1}^N w_{i,j}² ≤ C²(u)h⁸, and hence (4.82) follows from (4.86), (3.12), and Lemma 2.4.

Theorem 4.3. If u ∈ C⁶(Ω̄) and u_h and S in S_D ⊗ S_D are defined by (1.3)-(1.6) and (4.2)-(4.5), respectively, then

‖S − u_h‖_{C(Ω̄)} ≤ C(u)h⁴.

Proof. Since u_h − S is continuous on Ω̄, there is (x*,y*) in Ω̄ such that ‖u_h − S‖_{C(Ω̄)} = |(S − u_h)(x*,y*)|. Hence, (3.1), (4.36), (4.83), and the triangle inequality give

‖u_h − S‖_{C(Ω̄)} ≤ Σ_{m=1}^N Σ_{n=1}^N |s_{m,n} − u_{m,n}| |B_m^D(x*)| |B_n^D(y*)|.

Since the above double sum reduces to at most nine terms, the desired result follows from Lemmas 4.6 and 2.1.

Theorem 4.4. If u in C⁶(Ω̄) and u_h in S_D ⊗ S_D are the solutions of (1.1) and (1.3)-(1.6), respectively, then

‖u − u_h‖_{C(Ω̄)} ≤ C(u)h⁴.

Proof. The required inequality follows from (4.14) and Theorems 4.1, 4.2, and 4.3.

5. Other Schemes. Consider the scheme for solving (1.1) formulated as follows: find u_h ∈ S_D ⊗ S_D satisfying (1.3)-(1.5) and (1.9). This scheme is essentially the same as the scheme (4.1)-(4.3) in [4], except that (1.5) is replaced in [4] with

(1/12)[13D_y²u_h(x_i,y_j) − 2D_y²u_h(x_i,y_{j+1}) + D_y²u_h(x_i,y_{j+2})] = f(x_i,y_j), j = 0,
(1/12)[D_y²u_h(x_i,y_{j−2}) − 2D_y²u_h(x_i,y_{j−1}) + 13D_y²u_h(x_i,y_j)] = f(x_i,y_j), j = N+1,

where i = 1,...,N. It follows from (3.1), the discussion in section 3, and (2.9) of [4] that the matrix-vector form of (1.3)-(1.5) and (1.9) is

(5.1) (A ⊗ B + B ⊗ A)u = p,
where u = [u_{1,1}, ..., u_{N,N}]ᵀ, p = [p_{1,1}, ..., p_{N,N}]ᵀ,

p_{i,j} = f(x_i,y_j) − Σ_{m=0,N+1} Σ_{n=0}^{N+1} u_{m,n}[L_x B_m^D(x_i) B_n^D(y_j) + B_m^D(x_i) L_y B_n^D(y_j)]
          − Σ_{m=1}^N Σ_{n=0,N+1} u_{m,n}[L_x B_m^D(x_i) B_n^D(y_j) + B_m^D(x_i) L_y B_n^D(y_j)],

the boundary values {u_{i,j}}_{j=0}^{N+1}, i = 0, N+1, and {u_{i,j}}_{i=1}^N, j = 0, N+1, are given in (3.3), (3.5), (3.6),

(5.2)  A = (1/(2h²)) T(T + 12I),  B = T + 6I,

and T is defined in (2.12).

Lemma 5.1. Assume A, B are as in (5.2) and v = [v_{1,1}, ..., v_{N,N}]ᵀ, w = [w_{1,1}, ..., w_{N,N}]ᵀ are such that (A ⊗ B + B ⊗ A)v = w. Then

max_{1≤i,j≤N} v_{i,j}² ≤ C h² Σ_{i=1}^N Σ_{j=1}^N w_{i,j}².

Proof. It follows from (5.2) that

(5.3)  A ⊗ B + B ⊗ A = (1/(2h²))(T²⊗T) + (3/h²)(T²⊗I) + (6/h²)(T⊗T) + (36/h²)(T⊗I)
                        + (1/(2h²))(T⊗T²) + (3/h²)(I⊗T²) + (6/h²)(T⊗T) + (36/h²)(I⊗T) = −36[r(T) + s(T)],

where, for an N×N matrix P,

(5.4)  r(P) = −(1/h²)(P⊗I + I⊗P),

(5.5)  s(P) = −(1/(3h²))(P⊗P) − (1/(72h²))(P²⊗P + P⊗P²) − (1/(12h²))(P²⊗I + I⊗P²).

First, we will show that

(5.6)  ([r(T) + s(T)]z, z) ≥ (2/9)(r(T)z, z),  z ∈ R^{N²},

where (·,·) is the standard inner product in R^{N²}. It follows from (2.14) and Qᵀ = Q for Q of (2.16) that

(r(T)z, z) = ([Q⊗Q] r(Λ) [Q⊗Q]z, z) = (r(Λ)[Q⊗Q]z, [Q⊗Q]z),
(s(T)z, z) = ([Q⊗Q] s(Λ) [Q⊗Q]z, z) = (s(Λ)[Q⊗Q]z, [Q⊗Q]z),

where Λ is given in (2.15). Hence (5.6) is equivalent to

([r(Λ) + s(Λ)]z, z) ≥ (2/9)(r(Λ)z, z),  z ∈ R^{N²},

which, by (5.4), (5.5), and (2.15), is in turn equivalent to

(5.7)  g(λ_i, λ_j) ≥ 0,  i,j = 1,...,N,
where

g(x,y) = −(7/9)(x + y) − (1/3)xy − (1/72)(x²y + xy²) − (1/12)(x² + y²).

It follows from (2.15) that −4 < λ_i < 0, i = 1,...,N. Hence, (5.7) follows from

g(x,y) ≥ 0,  x, y ∈ [−4, 0],

which is established using elementary calculus. The matrices r(T) and s(T) are symmetric, r(T)s(T) = s(T)r(T), and r(T) is positive definite. Hence, (5.6) and 6) on page 135 in [9] imply that

(5.8)  ‖r(T)z‖₂ ≤ (9/2) ‖[r(T) + s(T)]z‖₂,  z ∈ R^{N²},

where ‖·‖₂ is the Euclidean vector norm. It is known (see, for example, the embedding theorem on page 281 in [9]) that

(5.9)  max_{1≤i,j≤N} z_{i,j}² ≤ (1/4) h² ‖r(T)z‖₂²,  z = [z_{1,1}, ..., z_{N,N}]ᵀ ∈ R^{N²}.

Hence the desired result follows from (5.9), (5.8), and (5.3).

Theorem 5.1. If u ∈ C⁶(Ω̄) and u_h and S are defined by (1.3)-(1.5) and (1.9), and (4.2)-(4.5), respectively, then ‖S − u_h‖_{C(Ω̄)} ≤ C(u)h⁴.

Proof. Following the proof of Lemma 4.6, we define {w_{i,j}}_{i,j=1}^N by

(5.10)  (L_x + L_y)(S − u_h)(x_i,y_j) = w_{i,j},  i,j = 1,...,N.

Using (5.10), (1.9), and (1.1), we obtain

w_{i,j} = L_x S(x_i,y_j) − D_x²u(x_i,y_j) + L_y S(x_i,y_j) − D_y²u(x_i,y_j).

Equations (1.10), (4.58), and (4.52) give, for i,j = 1,...,N,

L_x S(x_i,y_j) − D_x²u(x_i,y_j)
  = D_x²S(x_i,y_j) − D_x²u(x_i,y_j) + (1/12)[D_x²S(x_{i−1},y_j) − 2 D_x²S(x_i,y_j) + D_x²S(x_{i+1},y_j)]
  = −(h²/12) D_x⁴u(x_i,y_j) + (1/12)[D_x²u(x_{i−1},y_j) − 2 D_x²u(x_i,y_j) + D_x²u(x_{i+1},y_j)]
    − (h²/144)[D_x⁴u(x_{i−1},y_j) − 2 D_x⁴u(x_i,y_j) + D_x⁴u(x_{i+1},y_j)] + ε_{i,j},

where |ε_{i,j}| ≤ C(u)h⁴, i,j = 1,...,N. Hence Taylor's theorem and similar considerations for L_y S(x_i,y_j) − D_y²u(x_i,y_j) show that (4.85) holds. It follows from (4.81) and (4.83) that the matrix-vector form of (5.10) is

(A ⊗ B + B ⊗ A)v = w,

where A, B are as in (5.2), v = [s_{1,1} − u_{1,1}, ..., s_{N,N} − u_{N,N}]ᵀ, w = [w_{1,1}, ..., w_{N,N}]ᵀ. Hence Lemma 5.1 implies (4.82), and the desired result follows from the proof of Theorem 4.3.

Theorem 5.2. If u in C⁶(Ω̄) and u_h in S_D ⊗ S_D are the solutions of (1.1), and of (1.3)-(1.5) and (1.9), respectively, then

(5.11)  ‖u − u_h‖_{C(Ω̄)} ≤ C(u)h⁴.
Proof. The required inequality follows from (4.14) and Theorems 4.1, 4.2, 5.1.

It is claimed in Theorem 4.1 of [8] that for the scheme (1.3) and (1.7)-(1.9), one has (5.11) provided that u ∈ C⁶(Ω̄). The proof of this claim in [8] is based on using Z defined in (4.6)-(4.9) as a comparison function. It is claimed, for example, in Lemma 2.1 of [8] that Z has properties (4.58) and (4.59), that is, that (4.58) and (4.59) hold with Z in place of S. Unfortunately, numerical examples indicate that this property does not hold even in the one-dimensional case. Specifically, for u(x) = x(x − 1)eˣ and Z ∈ S_D such that

Z(x_i) = u(x_i), i = 1,...,N,  Z''(x_i) = u''(x_i), i = 0, N+1,

we only have

max_{1≤i≤N} |Z''(x_i) − u''(x_i) + (h²/12) u⁽⁴⁾(x_i)| = Ch²,

and not better. It should be noted that the convergence analysis of [6] for two-point boundary value problems involves the comparison function S ∈ S_D defined by

S(x_i) = u(x_i), i = 1,...,N,  S''(x_i) = u''(x_i) − (h²/12) u⁽⁴⁾(x_i), i = 0, N+1,

which, in part, was the motivation for the definition (4.2)-(4.5). The convergence analysis of the scheme (4.2)-(4.4) in [8] remains an open problem. We believe that such an analysis may require proving stability not only with respect to the right-hand side but also with respect to the boundary conditions.

6. Numerical Results. We used the scheme (1.3)-(1.6) and the algorithm of section 3 to solve a test problem (1.1). The computations were carried out in double precision. We determined the nodal and global errors using the formulas

‖w‖_h = max_{0≤i,j≤N+1} |w(x_i,y_j)|,  ‖w‖_{C(Ω̄)} ≈ max_{0≤i,j≤501} |w(t_i,t_j)|,

where t_i = i/501, i = 0,...,501. Convergence rates were determined using the formula

rate = log(e_{N/2}/e_N) / log[(N+1)/(N/2+1)],

where e_N is the error corresponding to the partition ρ_x × ρ_y. We took f in (1.1) corresponding to the exact solution

u(x,y) = 3e^{xy}(x² − x)(y² − y).
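The rate formula above can be wrapped in a small helper; with synthetic errors behaving exactly like C h⁴, h = 1/(N+1), it must return 4. This is a sanity check of the formula itself, not of the scheme.

```python
import math

def rate(e_half, e_full, N):
    """Convergence rate from errors on the N/2- and N-partitions:
    rate = log(e_{N/2} / e_N) / log((N+1) / (N/2+1))."""
    return math.log(e_half / e_full) / math.log((N + 1) / (N / 2 + 1))

# Synthetic fourth order errors e_N = C / (N+1)^4, i.e. e = C * h^4:
C, N = 3.7, 16
e8 = C / (N // 2 + 1) ** 4
e16 = C / (N + 1) ** 4
print(rate(e8, e16, N))  # 4.0 up to rounding
```

Applied to successive rows of an error table, values approaching 4 confirm fourth order convergence.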
We see from the results in Tables 1 and 2 that the scheme (1.3)-(1.6) produces fourth order accuracy for u in both the discrete and the continuous maximum norms. We also observe a superconvergence phenomenon: the derivative approximations at the partition nodes are also of order four.

REFERENCES

[1] J. H. Ahlberg, E. N. Nilson, and J. L. Walsh, The Theory of Splines and Their Applications, Academic Press, New York, 1967.
Table 1: Nodal errors and convergence rates for u, u_x, u_y, and u_xy (columns: N; error and rate for ‖u − u_h‖_h, ‖(u − u_h)_x‖_h, ‖(u − u_h)_y‖_h, ‖(u − u_h)_xy‖_h; numerical entries not recovered).

Table 2: Global errors and convergence rates for u, u_x, u_y, and u_xy (columns: N; error and rate for ‖u − u_h‖_{C(Ω̄)}, ‖(u − u_h)_x‖_{C(Ω̄)}, ‖(u − u_h)_y‖_{C(Ω̄)}, ‖(u − u_h)_xy‖_{C(Ω̄)}; numerical entries not recovered).

[2] D. Archer, An O(h⁴) cubic spline collocation method for quasilinear parabolic equations, SIAM J. Numer. Anal., 14 (1977).
[3] B. Bialecki, G. Fairweather, and A. Karageorghis, Matrix decomposition algorithms for modified spline collocation for Helmholtz problems, SIAM J. Sci. Comput., 24 (2003).
[4] B. Bialecki, G. Fairweather, and A. Karageorghis, Optimal superconvergent one step nodal cubic spline collocation methods, SIAM J. Sci. Comput., 27 (2005).
[5] W. Cheney and D. Kincaid, Numerical Mathematics and Computing, Brooks/Cole, California.
[6] J. W. Daniel and B. K. Swartz, Extrapolated collocation for two point boundary value problems using cubic splines, J. Inst. Math. Appl., 16 (1975).
[7] C. de Boor, The Method of Projections as Applied to the Numerical Solution of Two Point Boundary Value Problems Using Cubic Splines, Ph.D. thesis, University of Michigan, Ann Arbor, Michigan, 1966.
[8] E. N. Houstis, E. A. Vavalis, and J. R. Rice, Convergence of O(h⁴) cubic spline collocation methods for elliptic partial differential equations, SIAM J. Numer. Anal., 25 (1988).
[9] A. A. Samarskii, The Theory of Difference Schemes, Marcel Dekker, New York, 2001.
[10] C. Van Loan, Computational Frameworks for the Fast Fourier Transform, SIAM, Philadelphia, 1992.
Optimal Left and Right Additive Schwarz Preconditioning for Minimal Residual Methods with Euclidean and Energy Norms Marcus Sarkis Worcester Polytechnic Inst., Mass. and IMPA, Rio de Janeiro and Daniel
More informationMatrix assembly by low rank tensor approximation
Matrix assembly by low rank tensor approximation Felix Scholz 13.02.2017 References Angelos Mantzaflaris, Bert Juettler, Boris Khoromskij, and Ulrich Langer. Matrix generation in isogeometric analysis
More informationMaximum Principles for Parabolic Equations
Maximum Principles for Parabolic Equations Kamyar Malakpoor 24 November 2004 Textbooks: Friedman, A. Partial Differential Equations of Parabolic Type; Protter, M. H, Weinberger, H. F, Maximum Principles
More informationChapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS
Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS 5.1 Introduction When a physical system depends on more than one variable a general
More informationNOTES ON SCHAUDER ESTIMATES. r 2 x y 2
NOTES ON SCHAUDER ESTIMATES CRISTIAN E GUTIÉRREZ JULY 26, 2005 Lemma 1 If u f in B r y), then ux) u + r2 x y 2 B r y) B r y) f, x B r y) Proof Let gx) = ux) Br y) u r2 x y 2 Br y) f We have g = u + Br
More informationRegularity of Weak Solution to Parabolic Fractional p-laplacian
Regularity of Weak Solution to Parabolic Fractional p-laplacian Lan Tang at BCAM Seminar July 18th, 2012 Table of contents 1 1. Introduction 1.1. Background 1.2. Some Classical Results for Local Case 2
More informationChapter 3 Second Order Linear Equations
Partial Differential Equations (Math 3303) A Ë@ Õæ Aë áöß @. X. @ 2015-2014 ú GA JË@ É Ë@ Chapter 3 Second Order Linear Equations Second-order partial differential equations for an known function u(x,
More informationCubic B-spline Collocation Method for Fourth Order Boundary Value Problems. 1 Introduction
ISSN 1749-3889 print, 1749-3897 online International Journal of Nonlinear Science Vol.142012 No.3,pp.336-344 Cubic B-spline Collocation Method for Fourth Order Boundary Value Problems K.N.S. Kasi Viswanadham,
More information