A WELL-CONDITIONED COLLOCATION METHOD USING A PSEUDOSPECTRAL INTEGRATION MATRIX

Size: px
Start display at page:

Download "A WELL-CONDITIONED COLLOCATION METHOD USING A PSEUDOSPECTRAL INTEGRATION MATRIX"

Transcription

1 A WELL-CONDITIONED COLLOCATION METHOD USING A PSEUDOSPECTRAL INTEGRATION MATRIX LI-LIAN WANG, MICHAEL DANIEL SAMSON, AND XIAODAN ZHAO Abstract. In this paper, a well-conditioned collocation method is constructed for solving general p-th order linear differential equations with various types of boundary conditions. Based on a suitable Birkhoff interpolation, we obtain a new set of polynomial basis functions that results in a collocation scheme with two important features: the condition number of the linear system is independent of the number of collocation points; and the underlying boundary conditions are imposed exactly. Moreover, the new basis leads to exact inverse of the pseudospectral differentiation matrix (PSDM) of the highest derivative (at interior collocation points), which is therefore called the pseudospectral integration matrix (PSIM). We show that PSIM produces the optimal integration preconditioner, and stable collocation solutions with even thousands of points. Key words. Birkhoff interpolation, integration preconditioning, collocation method, pseudospectral differentiation matrix, pseudospectral integration matrix, condition number AMS subject classifications. 65N35, 65E05, 65M70, 41A05, 41A10, 41A5 1. Introduction. The spectral collocation method is implemented in physical space, and approximates derivative values by direct differentiation of the Lagrange interpolating polynomial at a set of Gauss-type points. Its fairly straightforward realization is akin to the high-order finite difference method (cf. [1, 46]). This marks its advantages over the spectral method using modal basis functions in dealing with variable coefficient and/or nonlinear problems (see various monographs on spectral methods [4, 6, 3, 6, 30, 4]). However, the practitioners are plagued with the involved ill-conditioned linear systems (e.g., the condition number of the p-th order differential operator grows like N p ). This longstanding drawback causes severe degradation of expected spectral accuracy [47], while the accuracy of machine zero can be well observed from the well-conditioned spectral-galerkin method (see e.g., [40]). In practice, it becomes rather prohibitive to solve the linear system by a direct solver or even an iterative method, when the number of collocation points is large. One significant attempt to circumvent this barrier is the use of suitable preconditioners. Preconditioners built on low-order finite difference or finite element approximations can be found in e.g., [13, 14, 7, 3, 33, 5]. The integration preconditioning (IP) proposed by Coutsias, Hagstrom and Hesthaven et al. [1, 11, 9] (with ideas from Clenshaw [9]) has proven to be efficient. We highlight that the IP in Hesthaven [9] led to a significant reduction of the condition number from O(N 4 ) to O( N) for second-order differential linear operators with Dirichlet boundary conditions (which were imposed by the penalty method []). Elbarbary [18] improved the IP in [9] through carefully manipulating the involved singular matrices and imposing the boundary conditions by some auxiliary equations. Another remarkable approach is the spectral integration method proposed by Greengard [5] (also see [53]), which recasts the differential form into integral form, and then approximates the solution by orthogonal polynomials. This method was incorporated into the chebop system Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, , Singapore. 
The research of the authors is partially supported by Singapore MOE AcRF Tier 1 Grant (RG 15/1), MOE AcRF Tier Grant (MOE 013-T-1-095, ARC 44/13) and Singapore A STAR-SERC-PSF Grant (1-PSF-007). National Heart Center Singapore, 5 Hospital Drive, , Singapore. 1

2 L. WANG, M. SAMSON & X. ZHAO [16, 15]. A relevant approach by El-Gendi [17] is not based on reformulating the differential equations, but uses the integrated Chebyshev polynomials as basis functions. Then the spectral integration matrix (SIM) is employed in place of PSDM to obtain much better conditioned linear systems (see e.g., [37, 3, 38, 19] and the references therein). In this paper, we take a very different routine to construct well-conditioned collocation methods. The essential idea is to associate the highest differential operator and underlying boundary conditions with a suitable Birkhoff interpolation (cf. [35, 44]) that interpolates the derivative values at interior collocation points, and interpolate the boundary data at endpoints. This leads to the so-called Birkhoff interpolation basis polynomials with the following distinctive features: (i) Under the new basis, the linear system of a usual collocation scheme is well-conditioned, and the matrix of the highest derivative is diagonal or identity. Moreover, the underlying boundary conditions are imposed exactly. This technique can be viewed as the collocation analogue of the well-conditioned spectral-galerkin method (cf. [40, 41, 7]) (where the matrix of the highest derivative in the Galerkin system is diagonal under certain modal basis functions). (ii) The new basis produces the exact inverse of PSDM of the highest derivative (involving only interior collocation points). This inspires us to introduce the concept of pseudospectral integration matrix (PSIM). The integral expression of the new basis offers a stable way to compute PSIM and the inverse of PSDM even for thousands of collocation points. (iii) This leads to optimal integration preconditioners for the usual collocation methods, and enables us to have insights into the IP in [9, 18]. Indeed, the preconditioning from Birkhoff interpolation is natural and optimal. We point out that Castabile and Longo [10] touched on the application of Birkhoff interpolation (see (3.1)) to second-order boundary value problems (BVPs), but the focus of this work was largely on the analysis of interpolation and quadrature errors. Zhang [54] considered the Birkhoff interpolation (see (4.1)) in a very different context of superconvergence of polynomial interpolation. Collocation methods based on a special Birkhoff quadrature rule for Neumann problems were discussed in [0, 48], and were extended to mixed boundary conditions in [49]. It is also noteworthy to point out recent interest in developing spectral solvers (see e.g., [34, 8, 39, 8]). The rest of the paper is organized as follows. In, we review several topics that are pertinent to the forthcoming development. In 3, we elaborate on the new methodology for second-order BVPs. In 4, we present miscellaneous extensions of the approach to first-order initial value problems (IVPs), higher order equations and multiple dimensions.. Birkhoff interpolation and pseudospectral differentiation matrix. We briefly review several topics directly bearing on the subsequential algorithm. We introduce the pseudospectral integration matrix, which is a central piece of the puzzle for our new approach..1. Birkhoff interpolation. Let {x j } N j=0 points such that be a set of distinct interpolation x 0 < x 1 < < x N < x N 1. (.1)

3 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 3 Let P K be the set of all algebraic polynomials of degree at most K. Given K +1 data {yj m } (with K N), we consider the interpolation problem (cf. [35, 44]): Find p K P K such that p (m) K (x j) = y m j (K + 1 equations). (.) We have the Hermite interpolation if for each j, the orders of derivatives in (.) form an unbroken sequence, m = 0, 1,, m j. In this case, the interpolation polynomial p K uniquely exists. On the other hand, if some of the sequences are broken, we have the Birkhoff interpolation. However, the existence and uniqueness of the Birkhoff interpolation polynomial are not guaranteed (cf. [35, 44]). For example, for (.) with K = N =, and given data {y 0 0, y 1 1, y 0 }, the quadratic polynomial p (x) does not exist, when x 1 = (x 0 + x )/. This happens to Legendre/Chebyshev Gauss Lobatto points, where x 0 =, x 1 = 0 and x = 1. In this paper, we consider special Birkhoff interpolation at Gauss-type points, and some variants that incorporate mixed boundary data, for instance, ap K () + bp K () = y 0 with constants a, b (see 3.5)... Legendre and Chebyshev polynomials. Without loss of generality, we present the new collocation approach at Legendre and Chebyshev points, but it is straightforward to extend the idea to Jacobi polynomials. Let P k (x) and T k (x) (with x I := (, 1)) be respectively the Legendre and Chebyshev polynomials (cf. [45]). They are mutually orthogonal: 1 P k (x)p j (x)dx = γ k δ kj, 1 T k (x)t j (x) 1 x dx = c kπ δ kj, (.3) where γ k = /(k + 1), c 0 = and c k = 1 for k 1. There hold P k (x) = 1 ( P k + 1 k+1 (x) P k(x) ), 1 T k (x) = (k + 1) T k+1(x) 1 (k 1) T k(x), (.4) for k 1 and k, respectively. Moreover, we have P k (±1) = (±1) k, P k(±1) = 1 (±1)k k(k + 1), T k (±1) = (±1) k, T k(±1) = (±1) k k. (.5) Let {x j, ω j } N j=0 be the Gauss Lobatto (GL) points (with x 0 =, x N = 1) and quadrature weights. (i) The Legendre Gauss Lobatto (LGL) points are zeros of (1 x )P N (x), and ω j = 1 N(N + 1) PN (x, 0 j N. (.6) j) (ii) The Chebyshev Gauss Lobatto (CGL) points and quadrature weights are x j = cos(jh), 0 j N; ω 0 = ω N = h, ω j = h, 1 j N 1; h = π N. (.7)

4 4 L. WANG, M. SAMSON & X. ZHAO Then we have the exactness 1 φ(x)ω(x)dx = N φ(x j )ω j, φ P N, (.8) j=0 where ω(x) = 1, (1 x ) / for the Legendre and Chebyshev cases, respectively. Hereafter, let {l j } N j=0 be the Lagrange interpolation basis polynomials. Denoting d (k) ij := l (k) j (x i ), we introduce the matrices D (k) = ( d (k) ij )0 i,j N, D(k) in = ( d (k) ) ij, k 1. (.9) 1 i,j N In particular, we denote D = D (1), and D in = D (1) in. We recall the following property (see e.g., [4, Theorem 3.10]): D (k) = DD D = D k, k 1, (.10) so higher-order differentiation matrix is a product of the first-order one. Set p (k) := ( p (k) (x 0 ),, p (k) (x N ) ) t, p := p (0). (.11) The pseudospectral differentiation process is performed via D (k) p = D k p = p (k), k 1. (.1) It is noteworthy that differentiation via (.1) suffers from significant round-off errors for large N, due to the involvement of ill-conditioned operations (cf. [50]). The matrix D (k) is singular (a simple proof: D (k) 1 = 0, where 1 = (1, 1,, 1) t, so the rows of D (k) are linearly dependent), while D (k) in is nonsingular. In addition, the condition numbers of D (k) in and D(k) I N+1 behave like O(N k ) (see e.g., [6, Section 4.3])..3. Integration preconditioning. We briefly examine the essential idea of constructing integration preconditioners in [9, 18] (inspired by [1, 11]). Consider for example the Legendre case. By (.3) and (.8), l j (x) = N k=0 ω j γ k P k (x j )P k (x), 0 j N, so l j (x) = N k= ω j γ k P k (x j )P k (x), (.13) where γ k = /(k + 1) for 0 k N 1, and γ N = /N. The key observation in [9] is that the pseudospectral differentiation process actually involves ill-conditioned operations (see e.g., [4, (3.176c)]): 0 l k P k (x) = (l + 1/) ( k(k + 1) l(l + 1) ) P l (x), (.14) k+l even where the coefficients in the summation grow like k. However, the inverse operations are stable, thanks to the compact formula, derived from (.4): P k (x) = α k P k (x) + β kp k (x) + α k+1p α k = 1 (k 1)(k + 1), β k = k+ (x), (k 1)(k + 3), (.15)

5 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 5 where α k and β k decay like k. Based on (.15), [9, 18] attempted to precondition the collocation system by the inverse of D (). However, since D () is singular, there exist multiple ways to manipulate the involved singular matrices. The boundary conditions were imposed by the penalty method (cf. []) in [9], and using auxiliary equations in [18]. The condition number of the preconditioned system for e.g., the operator d dx µ with Dirichlet boundary conditions, behaves like O( N)..4. Pseudospectral integration matrix. We take a quick glance at the idea of the new method to be presented in 3. Different from (.1), we consider pseudospectral differentiation merely on interior GL points, for example, D () p = p () where p () := ( p(), p () (x 1 ), p () (x N ), p(1) ) t, (.16) and the matrix D () is obtained by replacing the first and last rows of D () by e 1 = (1, 0,, 0) and e N = (0,, 0, 1), respectively. Note that the matrix D () is nonsingular. Based on a suitable Birkhoff interpolation, we obtain the exact inverse matrix, denoted by B, of D (). This leads to the inverse process of (.16): B p () = p, (.17) which performs twice integration at the interior GL points, but leaves the function values at endpoints unchanged. For this reason, we call B the second-order pseudospectral integration matrix. It is important to point out that the computation of PSIM is stable even for thousands of collocation points, as all operations involve well-conditioned operations (e.g., (.15) is built-in). 3. New collocation methods for second-order BVPs. In this section, we elaborate on the construction of the new approach outlined in.4 for solving secondorder BVPs. We start with Dirichlet boundary conditions, and then consider general mixed boundary conditions in the latter part of this section Birkhoff interpolation at Gauss Lobatto points. Let {x j } N j=0 in (.1) be a set of GL points as before. Given u C (I), we consider the special case of (.): Find p P N such that p (x j ) = u (x j ), 1 j N 1; p(±1) = u(±1). (3.1) The Birkhoff interpolation polynomial p of u can be uniquely determined by N p(x) = u()b 0 (x) + u (x j )B j (x) + u(1)b N (x), x [, 1], (3.) j=1 if one can find {B j } N j=0 P N, such that B 0 () = 1, B 0 (1) = 0, B 0 (x i ) = 0, 1 i N 1; (3.3) B j () = 0, B j (1) = 0, B j i) = δ ij, 1 i, j N 1; (3.4) B N () = 0, B N (1) = 1, B N (x i) = 0, 1 i N 1. (3.5)

6 6 L. WANG, M. SAMSON & X. ZHAO We call {B j } N j=0 the Birkhoff interpolation basis polynomials of (3.1), which are the counterpart of the Lagrange basis polynomials {l j } N j=0. Theorem 3.1. Let {x j } N j=0 be a set of Gauss Lobatto points. The Birkhoff interpolation basis polynomials {B j } N j=0 defined in (3.3) (3.5) are given by B 0 (x) = 1 x, B N (x) = 1 + x ; (3.6) B j (x) = 1 + x 1 x (t 1) l j (t)dt + (x t) l j (t)dt, 1 j N 1, (3.7) where { l j } N j=1 are the Lagrange basis polynomials (of degree N ) associated with N 1 interior Gauss Lobatto points {x j } N j=1. Moreover, we have B 0 (x) = B N (x) = ; B j (x) = 1 1 (t 1) l j (t)dt + x l j (t)dt, 1 j N 1. (3.8) Proof. One verifies readily from (3.3) and (3.5) that B 0 and B N must be linear polynomials given by (3.6). Using (3.4) and the fact B j (x), l j (x) P N, we find that B j (x) = l j (x), so solving this ordinary differential equation with boundary conditions B j (±1) = 0, leads to the expression in (3.7). Finally, (3.8) follows from (3.6) (3.7) directly. Let b (k) ij := B (k) j (x i ), and define the matrices B (k) = ( b (k) ij )0 i,j N, B(k) in = ( b (k) ) ij, k 1. (3.9) 1 i,j N In particular, we denote b ij := B j (x i ), B = B (0) and B in = B (0) in. Remark 3.1. The integration process (.17) is a direct consequence of (3.), as the Birkhoff interpolation polynomial of any p P N is itself. Theorem 3.. There hold B (k) = D (k) B = D k B = DB (k), k 1, (3.10) D () in B in = I N, D() B = IN+1, (3.11) where I M is an M M identity matrix, and the matrix D () is defined in (.16). To be not distracted from the main results, we postpone the derivation of the formulas in Appendix A. The formula (3.10) can be viewed as an analogue of (.10), and the matrices B and B (1) are called the second-order and first-order PSIMs, respectively. 3.. Computation of PSIMs. Now, we describe stable algorithms for computing the matrices B and B (1). For convenience, we introduce the integral operators: x x u(x) = u(t)dt; m x u(x) = x ( (m) x u(x) ), m. (3.1)

7 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 7 By (.4), (.5) and (.15), and x P k(x) = 1 ( Pk+1 (x) P k (x) ), k 1; x k + 1 P 0(x) = 1 + x, (3.13) x P k(x) = x P 0(x) = Similarly, we have P k+ (x) (k + 1)(k + 3) P k (x) (k 1)(k + 3) + P k (x) (k 1)(k + 1), k ; (1 + x), x P 1(x) = (1 + x) (x ). 6 (3.14) Using (3.15) recursively yields x T k(x) = x T k (x) = T k+1(x) (k + 1) T k(x) (k 1) ()k k 1, k ; x T 0(x) = 1 + x, x T 1(x) = x 1. T k+ (x) 4(k + 1)(k + ) T k(x) (k 1) + T k (x) 4(k 1)(k ) ()k (1 + x) k 1 3() k (k 1)(k 4), k 3; x T (1 + x) 0(x) =, x T 1(x) = (1 + x) (x ), 6 x T (x) = x(1 + x) (x ). 6 (3.15) (3.16) m x Remark 3.. Observe that x m P k (±1) = 0 for all k m with m = 1,, while T k (1) may not vanish. The integrated Legendre and/or Chebyshev polynomials are used to construct well-conditioned spectral-galerkin methods, hp-element methods (see [40, 41, 7], and [4] for a review), and spectral integral methods (see e.g., [9, 17, 5]). Proposition 3.3 (Birkhoff interpolation at LGL points). Let {x j, ω j } N j=0 be the LGL points and weights given in (.6). Then the Birkhoff interpolation basis polynomials {B j } N j=1 in Theorem 3.1 can be computed by B j (x) = ( β 1j β 0j )x + 1 N + k=0 β kj x P k(x) γ k, (3.17) where γ k = /(k + 1), x P k (x) is given in (3.14), and β kj = (P k (x j ) 1 ()N+k P N (x j ) 1 + ) ()N+k P N (x j ) ω j. (3.18) Moreover, we have B j (x) = β 1j β 0j N + k=0 β kj x P k(x) γ k, (3.19)

8 8 L. WANG, M. SAMSON & X. ZHAO where x P k(x) is given in (3.13). We provide the proof of this proposition in Appendix B. Proposition 3.4 (Birkhoff interpolation at CGL points). The Birkhoff interpolation basis polynomials { } N B j in Theorem 3.1 at { x j=1 j = cos(jh) } N j=0 with h = π/n, can be computed by B j (x) = N k=0 β kj { x T k (x) 1 + x } x T k (1), (3.0) where x T k (x) is given in (3.16), and β kj = {T k (x j ) 1 ()N+k T N (x j ) 1 + } ()N+k T N (x j ). (3.1) c k N Moreover, we have N B j (x) = k=0 β kj { x T k(x) x T k (1) }, (3.) where x T k (x) is computed by (3.15). Here, c 0 = and c k = 1 for k 1 as in (.3). We omit the proof, since it is very similar to that of Proposition 3.3. Remark 3.3. Like (.15), the formulas for evaluating integrated Legendre and/or Chebyshev polynomials are sparse and the coefficients decay. This allows for stable computation of PSIM even for thousands of collocation points. Remark 3.4. In the above, we only provide the Birkhoff basis polynomials for the Legendre and Chebyshev cases, but the method can be extended to Jacobi polynomials straightforwardly Collocation schemes. Consider the BVP: u (x) + r(x)u (x) + s(x)u(x) = f(x), x I; u(±1) = u ±, (3.3) where the given functions r, s, f C(I). Let {x j } N j=0 be the set of Gauss Lobatto points as in (3.1). Then the collocation scheme for (3.3) is to find u N P N such that u N(x i ) + r(x i )u N(x i ) + s(x i )u N (x i ) = f(x i ), 1 i N 1; u N (±1) = u ±. (3.4) (i) The usual collocation scheme. Let {l j } N j=0 be the Lagrange basis polynomials. Write N u N (x) = u l 0 (x) + u + l N (x) + u N (x j )l j (x), and insert it into (3.4), leading to ( D () in + Λ rd (1) in + Λ s) u = f ub, (3.5) where f = ( f(x 1 ),, f(x N ) ) t, u is the vector of unknowns {un (x i )} N i=1, Λ r = diag ( r(x 1 ),..., r(x N ) ), Λ s = diag ( s(x 1 ),..., s(x N ) ), and u B is the vector of j=1

9 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 9 { u (d () i0 +r(x i)d (1) i0 )+u +(d () in +r(x i)d (1) in )} N. It is known that the condition number i=1 of the coefficient matrix in (3.5) grows like O(N 4 ). (ii) Preconditioning by PSIM. Thanks to the property: B in D () in = I N (see Theorem 3.), the matrix B in can be used to precondition the ill-conditioned system (3.5), leading to ( IN + B in Λ r D (1) in + B inλ s ) u = Bin ( f ub ). (3.6) Note that the matrix of the highest derivative is identity. Remark 3.5. Different from [9, 18], we work with the system involving D () in, rather than the singular matrix D (). On the other hand, the boundary conditions are imposed exactly which are incorporated into the basis polynomials (see 3.5). Therefore, this requires to pre-compute the basis. Note that the boundary conditions in [9] are imposed asymptotically by the penalty method, but this does not require the change of basis for different boundary conditions. (iii) A modal approach. We have from (3.) that N u N (x) = u B 0 (x) + u + B N (x) + u N(x j )B j (x). (3.7) Then the matrix form of (3.4) reads ( IN + Λ r B (1) in + Λ sb in ) v = f u v u + v +, (3.8) where v = ( u N (x 1),, u N (x N) ) t, and ( v = r(x 1) ( r(x1 ) v + = j=1 + s(x 1 ) 1 x 1,, r(x N) + s(x 1 ) 1 + x 1,, r(x N) + s(x N ) 1 x ) N t, + s(x N ) 1 + x ) N t. We take the following steps to solve (3.4): 1. Pre-compute B and B (1) via the formulas in Propositions ;. Find v by solving the system (3.8); 3. Recover u = (u N (x 1 ),, u N (x N )) t from (3.7): u = B in v + u b 0 + u + b N, (3.9) where b j = ( B j (x 1 ),, B j (x N ) ) t for j = 0, N. Remark 3.6. Compared with (3.6), the linear system (3.8) does not involve any differentiation matrix, but an additional step (3.9) is needed for a typical modal basis. Remark 3.7. The unknowns under the new basis in (3.7) (3.8) are the approximations to {u (x j )}. This situation is reminiscent to the spectral integration method [5], which is build upon the orthogonal polynomial expansion of u (x). Thus, the approach (iii) can be regarded as the collocation counterpart of the modal approach in [5]. Below, we have some insights into eigenvalues of the new collocation system for d the operator: dx µ (i.e., Helmholtz (resp. modified Helmholtz) operator for µ < 0 (resp. µ 0)) with Dirichlet boundary conditions. Its proof is given in Appendix C.

10 10 L. WANG, M. SAMSON & X. ZHAO Proposition 3.5. In the LGL case, the eigenvalues of I N µb in are all real and distinct, which are uniformly bounded. More precisely, for any eigenvalue λ of I N µb in, we have 4µπ 1+c N N 4 < λ < 1+ 4µ 4µ, if µ 0; 1+ π π < λ < 1+c N 4µπ, if µ < 0, (3.30) N4 where c N 1 for large N. Remark 3.8. As a consequence of (3.30), the condition number of I N µb in is independent of N. For example, it is uniformly bounded by 1 + 4µ/π for µ 0. It is noteworthy that if µ = k with k 1 (i.e., Helmholtz equation with high wave-number), then the condition number behaves like O(k ), but independent of N. Remark 3.9. We can obtain similar bounds for the CGL case by using the bounds for eigenvalues of D () in in e.g., [51] and [6, Section 4.3]. However, it appears open to conduct similar eigen-analysis for (3.6) and (3.8) with general variable coefficients Numerical results. Now, we compare the condition numbers of the linear systems between the Lagrange collocation (LCOL) scheme (3.5), Birkhoff collocation (BCOL) scheme (3.8), preconditioned LCOL (P-LCOL) scheme (3.6), and the preconditioned scheme from [18] (which improved that in [9]) (PLCOL), respectively. We also look at the number of iterations for solving the systems via BiCGSTAB in Matlab, and compare their convergence behavior. We first consider the example u (x) (1 + sin x)u (x) + e x u(x) = f(x), x (, 1); u(±1) = u ±, (3.31) with the exact solution u(x) = e (x )/. Observe from Table 3.1 that the condition numbers of two new approaches are independent of N, and do not induce round-off errors. As already mentioned, the condition number of PLCOL in [18] grows like O( N), and that of LCOL behaves like O(N 4 ). Table 3.1 Comparison of condition numbers, accuracy and iterations for (3.31) LCOL (3.5) PLCOL [18] BCOL (3.8) P-LCOL (3.6) N Cond.# Error iters Cond.# Error iters Cond.# Error iters Cond.# Error iters Legendre e e e e e e e e e e e e e e e e e e e e e e e e e-14 9 Chebyshev e e e e e e+07.87e e e e e e e e e e e e e e e e e e e-15 9 In Figure 3.1, we depict the distribution of the eigenvalues (in magnitude) of the coefficient matrices of BCOL, PLCOL and P-LCOL with N = 104 for both Legendre and Chebyshev cases. Observe that almost all of them are concentrated around 1, though it appears nontrivial to show this rigorously as in Proposition We next consider (3.31) with f C (Ī) and the exact solution u C3 (Ī), given by u(x) = { cosh(x + 1) x / x, x < 0, cosh(x + 1) cosh(x) x + 1, 0 x 1.

11 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 11 PLCOL PLCOL P LCOL P LCOL BCOL BCOL Fig Distribution of magnitude of eigenvalues for the coefficient matrices of collocation schemes with N = 104. Left: LGL; right: CGL. Note that u has a Sobolev-regularity in H 4 ǫ (I) with ǫ > 0. In Figure 3., we graph the maximum point-wise errors for BCOL, LCOL and PLCOL, where the slope of the lines is approximately 4. We see that the BCOL and PLCOL are free of round-off errors even for thousands of points, though the PLCOL system (in [18]) has a mildly growing condition number BCOL LCOL PLCOL BCOL LCOL PLCOL N N Fig. 3.. Comparison of maximum pointwise errors. Left: LGL; right: CGL Mixed boundary conditions. Consider the second-order BVP (3.3) equipped with mixed boundary conditions: B [u] := a u() + b u () = c, B + [u] := a + u(1) + b + u (1) = c +, (3.3) where a ±, b ± and c ± are given constants. We first assume that d := a + a a + b + a b + 0, (3.33) which excludes Neumann boundary conditions (i.e., a = a + = 0) to be considered later.

12 1 L. WANG, M. SAMSON & X. ZHAO We associate (3.3) with the Birkhoff-type interpolation: Find p P N such that p (x j ) = c j, 1 j N 1, B ± [p] = c ±, (3.34) where {x j } N j=1 are interior Gauss Lobatto points, and {c ±, c j } are given data. As before, we look for the interpolation basis polynomials, still denoted by {B j } N j=0, satisfying B [B 0 ] = 1, B 0 i) = 0, 1 i N 1, B + [B 0 ] = 0; B [B j ] = 0, B j (x i ) = δ ij, 1 i N 1, B + [B j ] = 0, 1 j N 1; B [B N ] = 0, B N(x i ) = 0, 1 i N 1, B + [B N ] = 1. (3.35) Following the same lines as for the proof of Theorem 3.1, we find that if d 0, B 0 (x) = a + d (1 x) + b + d, B N(x) = a d (1 + x) b d, (3.36) and for 1 j N 1, B j (x) = x (x t) l j (t)dt ( a d (1 + x) b ) 1 ( a+ (1 t) + b + ) lj (t)dt, (3.37) d where { l j } are the Lagrange basis polynomials associated with the interior Gauss Lobatto points as defined in Theorem 3.1. Thus, for any u C (I), its interpolation polynomial is given by p(x) = ( B [u] ) N B 0 (x) + u (x j )B j (x) + ( B + [u] ) B N (x). (3.38) j=1 One can find formulas for computing {B j } N j=1 on LGL and CGL points by using the same argument as in Proposition 3.3. We leave it to the interested readers. Armed with the new basis, we can impose mixed boundary conditions exactly, and the linear system resulted from the usual collocation scheme is well-conditioned. Here, we test the method on the second-order equation in (3.3) but with the mixed boundary conditions: u(±1)±u (±1) = u ±. In Table 3., we list the condition numbers of the usual collocation method (LCOL, where the boundary conditions are treated by the tau-method), and the Birkhoff collocation method (BCOL) for both Legendre and Chebyshev cases. Once again, the new approach is well-conditioned. Table 3. Comparison of condition numbers r = 0 and s = r = s = N Chebyshev Legendre Chebyshev Legendre BCOL LCOL BCOL LCOL BCOL LCOL BCOL LCOL e e e e e e e e e e e e e e e e+11

13 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION Neumann boundary conditions. The previous discussions exclude the Neumann boundary conditions, which need much care. Consider the Poisson equation: u (x) = f(x), x I; u (±1) = 0, (3.39) where f is a continuous function such that 1 f(x)dx = 0. Its solution is unique up to any additive constant. To ensure the uniqueness, we supply (3.39) with an additional condition: u() = u. Observe that the interpolation problem (3.34) is not well-posed if B ± [u] = u (±1). Here, we consider the following special case of (.): Find p P N+1 such that p() = y 0 0, p () = y 1 0, p (x j ) = y j, 1 j N 1, p (1) = y 1 N, (3.40) where {x j } N j=1 are interior Gauss Lobatto points, and the data {yj m } are given. However, this interpolation problem is only conditionally well-posed. For example, in the LGL and CGL cases, we have to assume that N is odd (see Remark 3.10 below). We look for basis polynomials, still denoted by {B j } N+1 j=0, such that for 1 i, j N, B 0() = 1, B 0 () = B 0 (x i ) = B 0(1) = 0; B N(1) = 1, B N () = B N() = B N(x i ) = 0; B j () = B j (±1) = 0, B j (x i) = δ ij ; B N+1 () = 1, B N+1(±1) = B N+1(x i ) = 0. N Let Q N (x) = c N j=1 (x x j) with c N 0. Following the proof of Theorem 3.1, we find that if 1 Q N(t)dt 0, we have B 0 (x) = 1 + x B N+1 (x) 1, x (x t)q x N(t)dt 1 Q, B N (x) = (x t)q N(t)dt 1 N(t)dt Q, N(t)dt (3.41) and for 1 j N 1, x ( 1 B j (x) = (x t) l j (t)dt T N ) lj (t)dt B N (x), lj (x) = Q N (x) (x x j )Q N (x j). (3.4) Remark In the Legendre/Chebyshev case, we have Q N (x) = P N (x) or (x), so by (.5), 1 Q N (t)dt = 1 P N(t)dt = 1 () N = 1 T N(t)dt, which is nonzero, if and only if N is odd. We plot in Figure 3.3 the maximum point-wise errors of the usual collocation (LCOL), and Birkhoff collocation (BCOL) methods for (3.39) with the exact solution u(x) = cos(10x) cos(10). Note that the condition numbers of systems obtained from BCOL are all 1. We see that BCOL outperforms LCOL as before. Remark As remarked in [18, Section 5.], how to implement the integration preconditioning technique therein for Neumann boundary conditions remains open.

14 14 L. WANG, M. SAMSON & X. ZHAO BCOL LCOL BCOL LCOL N N Fig Comparison of maximum pointwise errors. Left: LGL; right: CGL. 4. Miscellaneous extensions and discussions. In this section, we consider various extensions of the Birkhoff interpolation and new collocation methods to numerical solution of first-order initial value problems (IVPs), higher order equations and multi-dimensional problems First-order IVPs. To this end, let {x j } N j=0 in (.1) be a set of Gauss Radau interpolation points (with x 0 = ). The counterpart of (3.1) in this context reads: for any u C 1 (I), Find p P N such that p() = u(), p (x j ) = u (x j ), 1 j N. (4.1) One verifies readily that p(x) can be uniquely expressed by p(x) = u()b 0 (x) + N u (x j )B j (x), x [, 1], (4.) where the Birkhoff interpolation basis polynomials {B j } N j=0 P N satisfy j=1 B 0 () = 1, B 0(x i ) = 0, 1 i N; B j () = 0, B j (x i) = δ ij, 1 i, j N. (4.3) Let {l j } N j=0 be the Lagrange basis polynomials associated with {x j} N j=0. Set b ij := B j (x i ) and d ij := l j (x i). Define B = (b ij ) 0 i,j N, B in = (b ij ) 1 i,j N, D = (d ij ) 0 i,j N, D in = (d ij ) 1 i,j N. (4.4) Like (3.11), we have the following important properties. Their derivation is essentially the same as that in Theorem 3., so we omit it. Theorem 4.1. There hold D in B in = I N, DB = IN+1, (4.5) where D is obtained by replacing the first row of D by e 1 = (1, 0,, 0).

15 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 15 As with Propositions , we provide formulas to compute {B j } for Legendreand Chebyshev Gauss Radau points. To avoid repetition, we omit the proofs. Proposition 4. (Birkhoff interpolation at LGR points). Let {x j, ω j } N j=0 be the LGR quadrature points (zeros of P N (x) + P N+1 (x) with x 0 = ) and weights given by 1 1 x j ω j = (N + 1) PN (x, 0 j N. (4.6) j) Then the Birkhoff interpolation basis polynomials { } N B j in (4.3) can be computed j=0 by B 0 (x) = 1; B j (x) = N k=0 where γ k = k+1, x P k(x) is given in (3.13), and α kj x P k(x) γ k, 1 j N, (4.7) α kj = ( P k (x j ) () N+k P N (x j ) ) ω j. (4.8) Proposition 4.3 (Birkhoff interpolation at CGR points). The Birkhoff interpolation basis polynomials { B j } N j=0 in (4.3) at CGR points { x j = cos(jh) } N h = π N+1, are computed by B 0 (x) = 1; B j (x) = N k=0 j=0, α kj x T k (x), 1 j N, (4.9) where x T k (x) is defined in (3.15), and 4 ( α kj = Tk (x j ) () N+k T N (x j ) ), (4.10) c k (N + 1) with c 0 = and c k = 1 for k 1. Consider the collocation scheme at GR points for which is to find u N P N such that u (x) + γ(x)u(x) = f(x), x I; u() = u, (4.11) u N (x j) + γ(x j )u N (x j ) = f(x j ), 1 j N; u N () = u. (4.1) We intend to compare two collocation approaches based on the Lagrange basis (LCOL), and the new Birkhoff basis (BCOL). The involved linear systems can be formed in the same manner as for the second-order BVPs in 3.3. We tabulate in Table 4.1 the condition numbers of LCOL and BCOL with γ = 1, sinx and various N. As what we have observed from previous section, the condition numbers of BCOL are independent of N, while those of LCOL grow like N. We next consider (4.11) with γ(x) = sin x, f(x) = 0 sin(500x ) and an oscillatory solution: x u(x) = 0 exp( cos(x)) exp(cos(t))sin(500t )dt. (4.13) In Figure 4.1 (left), we plot the exact solution (4.13) at 000 evenly-spaced points against the numerical solution obtained by BCOL with N = 640. In Figure 4.1 (right), we plot the maximum pointwise errors of LCOL and BCOL for the Chebyshev case. It indicates that even for large N, the BCOL is very stable.

16 16 L. WANG, M. SAMSON & X. ZHAO Table 4.1 Comparison of the condition numbers γ = 1 γ = sinx N Chebyshev Legendre Chebyshev Legendre BCOL LCOL BCOL LCOL BCOL LCOL BCOL LCOL e e e e e e e e e e e e e e e e BCOL LCOL x N Fig Left: exact solution versus numerical solution. Right: comparison of numerical errors (Chebyshev). 4.. Higher order equations. The proposed methods can be directly extended to higher order BVPs. To illustrate the ideas, we consider fifth-order BVPs and also apply the method in space to solve the time-dependent fifth-order Korteweg de Vries (KdV) equation. For example, given the set of the boundary data {u(±1), u (±1), u (1)}, we define the associated Birkhoff interpolation problem: Find p P N+3, such that for any u C 5 (, 1), p (5) (x j ) = u (5) (x j ), 1 j N 1; p(±1) = u(±1), p (±1) = u (±1), p (1) = u (1). (4.14) Correspondingly, we can construct the Birkhoff interpolation basis { } N+3 B j j=0, where the basis polynomials associated with the interior GL points {x j } N j=1 are defined by B j (±1) = B j(±1) = B j (1) = 0, B (5) j (x i ) = δ ij, 1 i, j N 1, and the one corresponding to u() satisfies B 0 () = 1, B 0 (1) = B 0(±1) = B 0(1) = B (5) 0 (x i) = 0, 1 i N 1. Likewise, we can define {B N+j } 3 j=0. At the LGL and CGL points, the basis polynomials can be computed stably in terms of integrated Legendre and Chebyshev polynomials as in Propositions This leads to the pseudospectral integration

17 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 17 matrices for well-conditioned collocation methods. Here, we skip the details, but just test the method on the problem: u (5) (x) + sin(10x)u (x) + xu(x) = f(x), x I; u(±1) = u (±1) = u (1) = 0, (4.15) with exact solution u(x) = sin 3 (πx). We compare the usual Lagrange collocation method (LCOL), the new Birkhoff collocation (BCOL) scheme at CGL points, and the special collocation method (SCOL). We refer to the SCOL as in [31] and [4, Page 18], which is based on the interpolation problem: Find p P N+3 such that p(y j ) = u(y j ), 1 j N 1; p (k) (±1) = u (k) (±1), k = 0, 1; p (1) = u (1), where {y j } N j=1 are zeros of the Jacobi polynomial P (3,) N (x). We plot in Figure 4. (left) convergence behavior of three methods, which clearly indicates the new approach is well-conditioned and significantly superior to the other two BCOL LCOL SCOL 10 t = 1 t = 50 t = N N Fig. 4.. Comparison of three collocation schemes (left), and maximum pointwise errors of the Crank Nicolson-leap-frog and BCOL for fifth-order KdV equation (right). We next apply the new method in space to solve the fifth-order KdV equation: t u + γu x u + ν 3 x u µ 5 x u = 0, u(x, 0) = u 0(x). (4.16) For γ 0, and µν > 0, it has the exact soliton solution (cf. [4, Page 33] and the original references therein): ( [ u(x, t) = η ν ν 169µγ sech4 x 5µ ) ]) (γη ν t x 0, (4.17) 169µ where η 0 and x 0 are any constants. Since the solution decays exponentially, we can approximate the initial value problems by imposing homogeneous boundary conditions over x ( L, L) as long as the soliton wave does not reach the boundaries. Let τ be the time step size, and {ξ j = Lx j } N j=0 with {x j} N j=0 being CGL points. Let uk be the finite difference approximation of u(, kτ). Then we adopt the Crank Nicolson leapfrog scheme in time and the new collocation method in space, that is, find u k+1 N P N+3

18 18 L. WANG, M. SAMSON & X. ZHAO such that for 1 j N 1, u k+1 N (ξ j) u k N (ξ j) + ν x 3 τ = γ x u k N(ξ j )u k N(ξ j ); ( u k+1 N + ) ( uk N u (ξ j ) µ x 5 k+1 N u k N(±L) = x u k N(±L) = xu k N(L) = 0, k 0. + ) uk N (ξ j ) (4.18) In Figure 4. (right), we depict the maximum pointwise errors at CGL points for (4.16)-(4.17) with µ = γ = 1, ν = 1.1, η 0 = 0, x 0 = 0, L = 50 and τ = 0.001, for t = 1, 50, 100. It indicates that the scheme is stable and accurate, which is comparable to the well-conditioned dual-petrov Galerkin scheme in [4, Chapter 6] Multi-dimensional cases. For example, we consider the two-dimensional BVP: u γu = f in Ω = (, 1) ; u = 0 on Ω, (4.19) where γ 0 and f C(Ω). The collocation scheme is on tensorial LGL points: find u N (x, y) P N such that ( ) un γu N (xi, y j ) = f(x i, y j ), 1 i, j N 1; u N = 0 on Ω, (4.0) where {x i } and {y j } are LGL points. As with the spectral-galerkin method [40, 43], we use the matrix decomposition (or diagonalization) technique (see [36, 1]). We illustrate the idea by using partial diagonalization (see [4, Section 8.1]). Write u N (x, y) = and obtain from (4.0) the system: N k,l=1 u kl B k (x)b l (y), UB t in + B inu γb in UB t in = F, (4.1) where U = (u kl ) 1 k,l N and F = (f kl ) 1 k,l N. We consider the generalized eigen-problem: B in x = λ ( I N γb in ) x. (4.) We know from Proposition 3.5 and Remark 3.9 that the eigenvalues are distinct. Let Λ be the diagonal matrix of the eigenvalues, and E be the matrix whose columns are the corresponding eigenvectors. Then we have B in E = ( I N γb in ) EΛ. We describe the partial diagonalization (see [4, Section 8.1]). Set U = EV. Then (4.1) becomes V B t in + ΛV = G := E ( I N γb in ) F. Taking transpose of the above equation leads to B in V t + V t Λ = G t.

19 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 19 Let v p be the transpose of p-th row of V, and likewise for g p. Then we solve the systems: ( Bin + λ p I N ) vp = g p, p = 1,,, N 1. (4.3) As shown in, the coefficient matrix is well-conditioned. Remark 4.1. It is seen that the extension to multiple dimensions essentially relies on solving the generalized eigen-problem (4.). We remark that B in has the same condition number as D () in. Nevertheless, this is common for tensor-based approaches including the spectral-galerkin method (see e.g., [4, 1]) and other aforementioned integration preconditioning techniques (see e.g., [5, 11, 1, 9, 18]). As a numerical illustration, we consider (4.19) with γ = 0 and the exact solution, u = { (sinh(x + 1) x 1)cos(πy/)e xy, x < 0, (sinh(x + 1) sinh(x) 1 (sinh() sinh(1) 1)x 3 )cos(πy/)e xy, x 0, which is first-order differentiable in x, while smooth in y. We fix N y = 16 (with negligible errors in y, and requiring to solve a generalized eigen-problem), and examine the accuracy for different N x. In Figure 4.3, we plot the maximum pointwise errors against various N x of the new approach for more than one thousand points. We see that the new approach is stable with an expected rate of convergence. Moreover, it is comparable to the spectral-galerkin method (SGAL) in [40], as seen from Figure 4.3 (left), where the errors of two methods are indistinguishable BCOL SGAL slope: BCOL slope: N x N x Fig Maximum pointwise errors. Left: LGL; right: CGL Concluding remarks. We conclude the paper with a short summary of the main contributions, and make a remark on the Birkhoff interpolation error estimates. In this paper, we tackled the longstanding ill-conditioning issue of collocation / pseudospectral methods from a new perspective. Based on a suitable Birkhoff interpolation problem, we obtained dual nature Birkhoff interpolation basis polynomials. (i) Such a basis led to optimal integration preconditioners for usual collocation schemes based on Lagrange interpolation. For the first time, we introduced in this paper the notion of pseudospectral integration matrix.

20 0 L. WANG, M. SAMSON & X. ZHAO (ii) The collocation linear system under the new basis is well-conditioned, and the matrix corresponding to the highest derivative of the equation is diagonal or identity. The new approach could be viewed as the collocation analogue of the wellconditioned Galerkin method in [40]. We considered in the paper the new collocation method based on Legendre and Chebyshev polynomials. One can extend the method to general Jacobi polynomials, and orthogonal systems in unbounded domains. We make a final remark on the error estimates of the Birkhoff interpolation. We revisit the interpolation problem (4.1) at Legendre Gauss Radau (LGR) points. For any u C 1 (, 1), we denote its Birkhoff interpolant by IN B u, which satisfies the interpolation conditions: ( I B N u ) (xj ) = u ( (x j ), 1 j N; I B N u ) () = u(). (4.4) Let IN G be the Lagrange Gauss interpolation operator at the interior LGR points {x j } N j=1, such that for any v C(, 1), ( IN G v) (x j ) = v(x j ), 1 j N. Then we find from (4.4) that ( I B N u ) ( (x) = I G N u ) (x) P N, x [, 1]. (4.5) Note that {x j } N (0,1) j=1 are zeros of the Jacobi polynomial P N (x). Define the weighted L ω -space with ω = 1 x with the norm ω. Using (4.5) and [4, Thm. 3.41], we obtain that for m N + 1, (INu B u) ω = INu G u ω (N m + )! c (N + m) m (1 x ) (m)/ x m N! u ω, (4.6) where c is a positive constant independent of m, N and u. Note that if m is fixed, the order of convergence is O(N 1 m ). Remark 4.. By (4.4) and (4.5), ( I B N u u ) (x) = x [( I G N u ) (t) u (t) ] dt. We expect the convergence I B N u u L = O(N m ). However, it appears challenging to obtain this optimal order, though we observed such a convergence in computation. We wish to report this and error estimates for other related Birkhoff interpolations e.g., (3.1) in a future work. Acknowledgement The first author would like to thank Prof. Benyu Guo and Prof. Jie Shen for fruitful discussions, and thank Prof. Zhimin Zhang for the stimulating Birkhoff interpolation problem considered in the recent paper [54]. The authors are grateful to the anonymous referees for many valuable comments. Appendix A. Proof of Theorem 3.. We first prove (3.10). For any φ P N, we write φ(x) = N p=0 φ(x p)l p (x), so we have φ (k) (x) = N p=0 φ(x p)l p (k) (x), k 1. Taking φ = B j ( P N ) and x = x i, we obtain N b (k) ij = p=0 d (k) ip b pj, k 1, (A.1)

21 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 1 which implies B (k) = D (k) B. The second equality follows from (.10), and the last identity in (3.10) is due to the recursive relation B (k) = D k B. We now turn to the proof of (3.11). It is clear that by (3.5), b 0j = b Nj = 0 for 1 j N 1 and b () ij = δ ij for 1 i, j N 1. Taking k = in (A.1) leads to δ ij = N p=1 d () ip b pj, 1 i, j N 1. This yields D () in B in = I N, from which the second statement follows directly. Appendix B. Proof of Proposition 3.3. Since B j P N, we expand it in terms of Legendre polynomials: N j (x) = P k (x) 1 β kj where β kj = B j γ (x)p k(x)dx. k B k=0 (B.1) Using (.8), (.5) and (3.4), leads to β kj = 1 B j (x)p k(x)dx = ( () k B j () + B j (1)) ω 0 + P k (x j )ω j, 1 j N 1. (B.) Notice that the last identity of (B.) is valid for all k N + 1. Taking k = N 1, N, we obtain from (.3) that the resulted integrals vanish, so we have the linear system of B j (±1): ( () N B j () + B j (1) ) ω 0 + P N (x j )ω j = 0, ( () N B j () + B j (1) ) ω 0 + P N (x j )ω j = 0. Therefore, we solve it and find that B j (±1) = (±1)N ω j ω 0 ( PN (x j ) ± P N (x j ) ), 1 j N 1. (B.3) Inserting (B.3) into (B.) yields the expression for β kj in (3.18). Next, it follows from (B.1) that B j (x) = N k=0 β kj x P k (x) γ k + C 1 + C (x + 1), (B.4) where C 1 and C are constants to be determined by B j (±1) = 0. Observe from (3.14) that x P k() = 0 for k 0 and x P k(1) = 0 for k. This implies C 1 = 0 and C = β 0j γ 0 x P 0 (1) β 1j γ 1 x P 1 (1) = β 1j β 0j. Thus, (3.17) follows. Finally, differentiating (3.17) leads to (3.19). Appendix C. Proof of Proposition 3.5. From [5, Theorem 7], we know that all eigenvalues of D () in, denoted by {λ N,l} N l=1, are real, distinct and negative. We arrange them as λ N,N < < λ N,1 < 0.

22 L. WANG, M. SAMSON & X. ZHAO We diagonalize D () in and write it as D() in = QΛ λq, where Q is formed by the eigenvectors and Λ λ is the diagonal matrix of all eigenvalues. Since B in = ( D () ) in (cf. Theorem 3.), we have I N µb in = Q(I N µλ ) λ Q. Therefore, the eigenvalues of I N µb in are {1 µλ N,l }N l=1, which are real and distinct. Then the bounds in (3.30) can be obtained from the properties: λ N,1 > π /4 (see [5, Last line on Page 86] and [, Theorem.1]), and λ N,N = c N N 4 /(4π ) (see [5, Proposition 9]). REFERENCES [1] B. Bialecki, G. Fairweather, and A. Karageorghis, Matrix decomposition algorithms for elliptic boundary value problems: a survey, Numer. Algorithms, 56 (011), pp [] T.Z. Boulmezaoud and J.M. Urquiza, On the eigenvalues of the spectral second order differentiation operator and application to the boundary observability of the wave equation, J. Sci. Comput., 31 (007), pp [3] J.P. Boyd, Chebyshev and Fourier Spectral Methods, Dover Publications Inc., 001. [4] C. Canuto, High-order methods for PDEs: recent advances and new perspectives, in ICIAM 07 6th International Congress on Industrial and Applied Mathematics, Eur. Math. Soc., Zürich, 009, pp [5] C. Canuto, P. Gervasio, and A. Quarteroni, Finite-element preconditioning of G-NI spectral methods, SIAM J. Sci. Comput., 31 (009/10), pp [6] C. Canuto, M.Y. Hussaini, A. Quarteroni, and T.A. Zang, Spectral Methods: Fundamentals in Single Domains, Springer, Berlin, 006. [7] C. Canuto and A. Quarteroni, Preconditioned minimal residual methods for Chebyshev spectral calculations, J. Comput. Phys., 60 (1985), pp [8] F. Chen and J. Shen, Efficient spectral-galerkin methods for systems of coupled second-order equations and their applications, J. Comput. Phys., 31 (01), pp [9] C.W. Clenshaw, The numerical solution of linear differential equations in Chebyshev series, in Mathematical Proceedings of the Cambridge Philosophical Society, vol. 53, Cambridge Univ Press, 1957, pp [10] F.A. Costabile and E. Longo, A Birkhoff interpolation problem and application, Calcolo, 47 (010), pp [11] E. Coutsias, T. Hagstrom, J.S. Hesthaven, and D. Torres, Integration preconditioners for differential operators in spectral τ-methods, in Proceedings of the Third International Conference on Spectral and High Order Methods, Houston, TX, 1996, pp [1] E.A. Coutsias, T. Hagstrom, and D. Torres, An efficient spectral method for ordinary differential equations with rational function coefficients, Math. Comp., 65 (1996), pp [13] M.O. Deville and E.H. Mund, Chebyshev pseudospectral solution of second-order elliptic equations with finite element preconditioning, J. Comput. Phys., 60 (1985), pp [14], Finite element preconditioning for pseudospectral solutions of elliptic problems, SIAM J. Sci. Stat. Comput., 11 (1990), pp [15] T.A. Driscoll, Automatic spectral collocation for integral, integro-differential, and integrally reformulated differential equations, J. Comput. Phys., 9 (010), pp [16] T.A. Driscoll, F. Bornemann, and L.N. Trefethen, The Chebop system for automatic solution of differential equations, BIT, 48 (008), pp [17] S.E. El-Gendi, Chebyshev solution of differential, integral and integro-differential equations, Comput. J., 1 (1969/1970), pp [18] M.E. Elbarbary, Integration preconditioning matrix for ultraspherical pseudospectral operators, SIAM J. Sci. Comput., 8 (006), pp (electronic). [19] K.T. Elgindy and K.A. Smith-Miles, Solving boundary value problems, integral, and integrodifferential equations using Gegenbauer integration matrices, J. 
Comput. Appl. Math., 37 (013), pp [0] A. Ezzirani and A. Guessab, A fast algorithm for Gaussian type quadrature formulae with mixed boundary conditions and some lumped mass spectral approximations, Math. Comp., 68 (1999), pp [1] B. Fornberg, A Practical Guide to Pseudospectral Methods, Cambridge University Press, [] D. Funaro and D. Gottlieb, A new method of imposing boundary conditions in pseudospectral approximations of hyperbolic equations, Math. Comp., 51 (1988), pp

23 COLLOCATION METHODS AND BIRKHOFF INTERPOLATION 3 [3] F. Ghoreishi and S.M. Hosseini, The Tau method and a new preconditioner, J. Comput. Appl. Math., 163 (004), pp [4] D. Gottlieb and S.A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications, Society for Industrial Mathematics, [5] L. Greengard, Spectral integration and two-point boundary value problems, SIAM J. Numer. Anal., 8 (1991), pp [6] B.Y. Guo, Spectral Methods and Their Applications, World Scientific Publishing Co. Inc., River Edge, NJ, [7] B.Y. Guo, J. Shen, and L.L. Wang, Optimal spectral-galerkin methods using generalized Jacobi polynomials, J. Sci. Comput., 7 (006), pp [8] B.Y. Guo and Z.Q. Wang, A collocation method for generalized nonlinear Klein-Gordon equation, Adv. Comput. Math., (DOI: /s , 013). [9] J. Hesthaven, Integration preconditioning of pseudospectral operators. I. Basic linear operators, SIAM J. Numer. Anal., 35 (1998), pp [30] J. Hesthaven, S. Gottlieb, and D. Gottlieb, Spectral Methods for Time-Dependent Problems, Cambridge Monographs on Applied and Computational Mathematics, Cambridge, 007. [31] W. Huang and D. Sloan, The pseudospectral method for third-order differential equations, SIAM J. Numer. Anal., 9 (199), pp [3] S.D. Kim and S.V. Parter, Preconditioning Chebyshev spectral collocation method for elliptic partial differential equations, SIAM J. Numer. Anal., 33 (1996), pp [33], Preconditioning Chebyshev spectral collocation by finite difference operators, SIAM J. Numer. Anal., 34 (1997), pp [34] P.W. Livermore, Galerkin orthogonal polynomials, J. Comput. Phys., 9 (010), pp [35] G.G. Lorentz, K. Jetter, and S.D. Riemenschneider, Birkhoff Interpolation, vol. 19 of Encyclopedia of Mathematics and its Applications, Addison-Wesley Publishing Co., Reading, Mass., [36] R.E. Lynch, J.R. Rice, and D.H. Thomas, Direct solution of partial differential equations by tensor product methods, Numer. Math., 6 (1964), pp [37] B. Mihaila and I. Mihaila, Numerical approximations using Chebyshev polynomial expansions: El-Gendi s method revisited, J. Phys. A, 35 (00), pp [38] B.K. Muite, A numerical comparison of Chebyshev methods for solving fourth order semilinear initial boundary value problems, J. Comput. Appl. Math., 34 (010), pp [39] S. Olver and A. Townsend, A fast and well-conditioned spectral method, SIAM Review, 55 (013), pp [40] J. Shen, Efficient spectral-galerkin method I. direct solvers for second- and fourth-order equations by using Legendre polynomials, SIAM J. Sci. Comput., 15 (1994), pp [41], A new dual-petrov-galerkin method for third and higher odd-order differential equations: Application to the KDV equation, SIAM J. Numer. Anal, 41 (003), pp [4] J. Shen, T. Tang, and L.L. Wang, Spectral Methods: Algorithms, Analysis and Applications, vol. 41 of Series in Computational Mathematics, Springer-Verlag, Berlin, Heidelberg, 011. [43] J. Shen and L.L. Wang, Fourierization of the Legendre-Galerkin method and a new space-time spectral method, Appl. Numer. Math., 57 (007), pp [44] Y.G. Shi, Theory of Birkhoff Interpolation, Nova Science Pub Incorporated, 003. [45] G. Szegö, Orthogonal Polynomials (Fourth Edition), AMS Coll. Publ., [46] L.N. Trefethen, Spectral Methods in MATLAB, vol. 10 of Software, Environments, and Tools, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 000. [47] L.N. Trefethen and M.R. Trummer, An instability phenomenon in spectral methods, SIAM J. Numer. Anal., 4 (1987), pp [48] L.L. Wang and B.Y. 
Guo, Interpolation approximations based on Gauss-Lobatto-Legendre- Birkhoff quadrature, J. Approx. Theory, 161 (009), pp [49] Z.Q. Wang and L.L. Wang, A collocation method with exact imposition of mixed boundary conditions, J. Sci. Comput., 4 (010), pp [50] J.A. Weideman and S.C. Reddy, A MATLAB differentiation matrix suite, ACM Transactions on Mathematical Software (TOMS), 6 (000), pp [51] J.A.C. Weideman and L.N. Trefethen, The eigenvalues of second-order spectral differentiation matrices, SIAM J. Numer. Anal., 5 (1988), pp [5] B.D. Welfert, On the eigenvalues of second-order pseudospectral differentiation operators, Comput. Methods Appl. Mech. Engrg., 116 (1994), pp ICOSAHOM 9 (Montpellier, 199). [53] A. Zebib, A Chebyshev method for the solution of boundary value problems, J. Comput. Phys.,

arxiv: v1 [math.na] 4 Nov 2015

arxiv: v1 [math.na] 4 Nov 2015 ON WELL-CONDITIONED SPECTRAL COLLOCATION AND SPECTRAL METHODS BY THE INTEGRAL REFORMULATION KUI DU arxiv:529v [mathna 4 Nov 25 Abstract Well-conditioned spectral collocation and spectral methods have recently

More information

A Numerical Solution of the Lax s 7 th -order KdV Equation by Pseudospectral Method and Darvishi s Preconditioning

A Numerical Solution of the Lax s 7 th -order KdV Equation by Pseudospectral Method and Darvishi s Preconditioning Int. J. Contemp. Math. Sciences, Vol. 2, 2007, no. 22, 1097-1106 A Numerical Solution of the Lax s 7 th -order KdV Equation by Pseudospectral Method and Darvishi s Preconditioning M. T. Darvishi a,, S.
