Interpolation approximations based on Gauss Lobatto Legendre Birkhoff quadrature
Journal of Approximation Theory

Interpolation approximations based on Gauss–Lobatto–Legendre–Birkhoff quadrature

Li-Lian Wang a,∗, Ben-yu Guo b,c

a Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University (NTU), Singapore
b Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
c Scientific Computing Key Laboratory of Shanghai Universities, Shanghai E-institute for Computational Science, China

Received 30 May 2007; received in revised form 31 July 2008; accepted 7 August 2008
Available online 13 November 2008
Communicated by József Szabados

Abstract

We derive in this paper the asymptotic estimates of the nodes and weights of the Gauss–Lobatto–Legendre–Birkhoff (GLLB) quadrature formula, and obtain optimal error estimates for the associated GLLB interpolation in Jacobi-weighted Sobolev spaces. We also present a user-oriented implementation of the pseudospectral methods based on the GLLB quadrature nodes for Neumann problems. This approach allows an exact imposition of Neumann boundary conditions, and is as efficient as the pseudospectral methods based on Gauss–Lobatto quadrature for PDEs with Dirichlet boundary conditions.
© 2008 Elsevier Inc. All rights reserved.

Keywords: GLLB quadrature rule; Collocation method; Neumann problems; Asymptotic estimates; Interpolation errors

1. Introduction

In a recent work, Ezzirani and Guessab [8] proposed a fast algorithm for computing the nodes and weights of some Gauss–Birkhoff type quadrature formulae (cf. [19,20]) with mixed boundary conditions. One special rule of particular importance, termed as the

∗ Corresponding author. E-mail address: lilian@ntu.edu.sg (L.-L. Wang).
Gauss–Lobatto–Legendre–Birkhoff (GLLB) quadrature, takes the form

 ∫_{−1}^{1} φ(x) dx ≈ φ′(−1) ω_− + Σ_{j=1}^{N−1} φ(x_j) ω_j + φ′(1) ω_+,  (1.1)

which is exact for all polynomials of degree ≤ 2N − 1, and whose interior nodes are zeros of the quasi-orthogonal polynomial (cf. [5]) formed by a linear combination of the Jacobi polynomials J_{N−1}^{(2,2)}(x) and J_{N−3}^{(2,2)}(x). Motivated by [8], our first intention is to study the asymptotic behaviors of the nodes and weights of (1.1). Two important results are

 x_j ≅ cos((2j + 1)π/(2N)),  1 ≤ j ≤ N − 1,  (1.2)

and

 ω_j ≅ (π/N) sin θ_j,  θ_j = arccos x_j,  1 ≤ j ≤ N − 1.  (1.3)

With the aid of these estimates, we are able to analyze the GLLB interpolation errors:

 ‖I_N u − u‖_{L²(I)} + N^{−1} ‖(1 − x²)^{1/2} ∂_x(I_N u − u)‖_{L²(I)} ≤ c N^{−m} ‖(1 − x²)^{(m−1)/2} ∂_x^m u‖_{L²(I)},  (1.4)

where I = (−1, 1) and I_N is the Gauss–Birkhoff interpolation operator associated with the GLLB points. Similar to the approximation results on the Legendre–Gauss–Lobatto interpolation obtained in [15,16], the estimate (1.4) is optimal.

The GLLB quadrature formula involves the derivative values φ′(±1) at the endpoints. Hence its nodes can be a natural choice of the preassigned interpolation points for many important problems, such as numerical integration and integral equations with derivative data at the endpoints. In particular, it plays an important role in spectral approximations of second-order elliptic problems with Neumann boundary conditions [8]. As we know, the numerical solutions of such problems given by the commonly used Galerkin method with any fixed mode N do not fulfill the Neumann boundary conditions exactly. We therefore often prefer collocation methods, whose solutions satisfy the physical boundary conditions. However, in a collocation method, a proper choice of the collocation points is crucial in terms of accuracy, stability and ease of treating the boundary conditions (BCs); the treatment of BCs is especially critical due to the global nature of spectral methods [18,21].
The GLLB collocation approximation takes the boundary conditions into account in such a way that the resulting collocation systems have diagonal mass matrices, and therefore leads to explicit time discretizations for time-dependent problems [8]. However, like the collocation methods based on Gauss–Lobatto points [17,5,13,3,4,9], the GLLB differentiation matrices are ill-conditioned. Thus a suitable preconditioned iterative solver is preferable in actual computations. We construct in this paper an efficient finite-difference preconditioning for one-dimensional second-order elliptic equations. We also present a user-oriented implementation and an error analysis of the GLLB pseudospectral methods based on variational formulations.

The rest of the paper is organized as follows. In Section 2, we introduce the GLLB quadrature formula, and then investigate the asymptotic behaviors of its nodes and weights in Section 3. With the aid of the asymptotic estimates and some other approximation results, we analyze the GLLB interpolation errors in Section 4. We give a user-oriented description of the
implementation of the GLLB collocation method, and present some illustrative numerical results in Section 5. The final section is for some concluding remarks.

2. The GLLB quadrature formula

In this section, we introduce the Gauss–Lobatto–Legendre–Birkhoff quadrature formula.

2.1. Preliminaries

We start with some notations to be used in the subsequent sections.

2.1.1. Notations
Let I := (−1, 1), and let χ(x) ∈ L¹(I) be a generic positive weight function defined in I. For any integer r ≥ 0, the weighted Sobolev space H^r_χ(I) is defined as usual, with the inner product, semi-norm and norm denoted by (u, v)_{r,χ}, |v|_{r,χ} and ‖v‖_{r,χ}, respectively. In particular, L²_χ(I) = H⁰_χ(I), (u, v)_χ = (u, v)_{0,χ} and ‖v‖_χ = ‖v‖_{0,χ}. For any real r > 0, the space H^r_χ(I) and its norm are defined by space interpolation as in Adams [1]. In cases where no confusion would arise, χ may be dropped from the notations whenever χ ≡ 1.

Let ω^α(x) = (1 − x²)^α, x ∈ I, with α > −1 be the Gegenbauer weight function. In particular, denote ω(x) = 1 − x². Let ℕ be the set of all non-negative integers. For any N ∈ ℕ, let P_N be the set of all algebraic polynomials of degree at most N. Without loss of generality, we assume that N ≥ 4 throughout this paper. Denote by c a generic positive constant, which is independent of any function and of the mode N in series expansions. For two sequences {z_n} and {w_n} with w_n ≠ 0, the expression z_n ∼ w_n means z_n/w_n → c ≠ 0 as n → ∞. In particular, z_n ≅ w_n means z_n/w_n → 1 as n → ∞, i.e., z_n/w_n ≈ 1 for n ≫ 1. For simplicity, we sometimes denote ∂_x^l v = d^l v/dx^l = v^{(l)} for any integer l ≥ 1.

2.1.2.
Jacobi polynomials
We recall some relevant properties of the Jacobi polynomials, denoted by J_n^{(α,β)}(x), x ∈ I, with α, β > −1 and n ≥ 0, which are mutually orthogonal with respect to the Jacobi weight ω^{α,β}(x) = (1 − x)^α (1 + x)^β:

 ∫_{−1}^{1} J_n^{(α,β)}(x) J_m^{(α,β)}(x) ω^{α,β}(x) dx = γ_n^{(α,β)} δ_{mn},  (2.1)

where δ_{mn} is the Kronecker symbol, and

 γ_n^{(α,β)} = [2^{α+β+1} Γ(n+α+1) Γ(n+β+1)] / [(2n+α+β+1) Γ(n+1) Γ(n+α+β+1)].  (2.2)

In the subsequent discussions, we will mainly use two special types of Jacobi polynomials. The first one is J_n^{(2,2)}(x), defined by the three-term recursive relation (see, e.g., Szegő [23]):

 n(n+4) J_n^{(2,2)}(x) = (n+2)(2n+3) x J_{n−1}^{(2,2)}(x) − (n+1)(n+2) J_{n−2}^{(2,2)}(x),
 J_0^{(2,2)}(x) = 1,  J_1^{(2,2)}(x) = 3x,  x ∈ [−1, 1].  (2.3)
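As a quick self-check of the recursion (2.3), the following small Python routine (our own illustration, not part of the paper) evaluates J_n^{(2,2)} by the three-term recurrence and verifies two standard Jacobi facts: the endpoint value J_n^{(2,2)}(1) = (n+1)(n+2)/2 and the parity relation J_n^{(2,2)}(−x) = (−1)^n J_n^{(2,2)}(x).

```python
def jacobi22(n, x):
    """Evaluate J_n^{(2,2)}(x) via the three-term recurrence (2.3):
    n(n+4) J_n = (n+2)(2n+3) x J_{n-1} - (n+1)(n+2) J_{n-2},
    with J_0 = 1 and J_1 = 3x."""
    if n == 0:
        return 1.0
    pm1, p = 1.0, 3.0 * x  # J_0 and J_1
    for k in range(2, n + 1):
        pm1, p = p, ((k + 2) * (2 * k + 3) * x * p
                     - (k + 1) * (k + 2) * pm1) / (k * (k + 4))
    return p

print(jacobi22(5, 1.0))                        # (5+1)(5+2)/2 -> 21.0
print(jacobi22(7, -0.4) == -jacobi22(7, 0.4))  # odd parity -> True
```

For instance, the recursion gives J_2^{(2,2)}(x) = 7x² − 1, so jacobi22(2, 0.5) returns 0.75.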
The coefficient of the leading term of J_n^{(2,2)}(x) is

 K_n^{(2,2)} = (2n+4)! / (2^n n! (n+4)!).  (2.4)

Moreover, we have

 J_n^{(2,2)}(−x) = (−1)^n J_n^{(2,2)}(x),  J_n^{(2,2)}(1) = (n+1)(n+2)/2.  (2.5)

The corresponding monic polynomial is defined by dividing by the leading coefficient in (2.4):

 P_n(x) := λ_n J_n^{(2,2)}(x)  with  λ_n = (K_n^{(2,2)})^{−1}.  (2.6)

As a direct consequence of (2.3) and (2.6),

 P_{n+1}(x) = x P_n(x) − a_n P_{n−1}(x)  with  a_n = n(n+4)/((2n+3)(2n+5)),  (2.7)

where P_{−1} = 0 and P_0 = 1. By (2.1)–(2.2), the normalized Jacobi polynomials

 P̃_n(x) := λ̃_n J_n^{(2,2)}(x)  with  λ̃_n = (γ_n^{(2,2)})^{−1/2},  (2.8)

satisfy ‖P̃_n‖_{ω^{2,2}} = 1. We will also utilize the Legendre polynomials, denoted by L_n(x), which are mutually orthogonal with respect to the unit weight ω ≡ 1. Note that the constant of orthogonality and the leading coefficient are, respectively,

 γ_n^{(0,0)} = ‖L_n‖² = 2/(2n+1),  K_n^{(0,0)} = (2n)!/(2^n (n!)²).  (2.9)

Recall that L_n(±1) = (±1)^n and L_n′(±1) = (±1)^{n−1} n(n+1)/2.

2.2. GLLB quadrature formula

The GLLB quadrature formula can be derived from Theorem 4.2 of Ezzirani and Guessab [8], and belongs to the family of Birkhoff-type rules (see, e.g., [19,20]).

Theorem 2.1. Let {P_n} be the monic Jacobi polynomials defined in (2.6). Consider the quasi-orthogonal polynomial

 Q_N(x) = P_{N−1}(x) + b_N P_{N−3}(x),  N ≥ 4,  (2.10)

where

 b_N = ⋯ =: b_N^I + b_N^{II}.  (2.11)

Then we have that
5 146 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory For 4, < b < The zeros of Q x, denoted by { } x j, are distinct, real and all located in, 1. 3 There exists a unique set of weights { } ω j such that j=0 1 φxdx = φ ω The interior weights are all positive and explicitly expressed by with ω j = A = φx j ω j + φ 1ω := S [φ], φ P..13 A 1 P x j Q x j 1 x, 1 j 1,.14 j 1 b λ a λ,.15 where the constants a, b, λ and λ are defined in.7,.11,.6 and.8, respectively. Proof. The proof is based on a reassembly and extension of some relevant results in [8]. For clarity, we first present a one-to-one correspondence between the notations of [8] and those of this paper: ln 1, d ˆσ 1 x dx, ˆπ k P k, π k P k, q n, Q, β k a k, ŝ b, β n a b, J n ˆσ J. We next derive the expression of b. By Theorem 4. and 4.3 of [8], we find that.16 a b = β = Hence, a direct computation leads to b = a β.7 = RHS of The special values b j j = 1,, 3 can be computed directly via.11. To obtain the bounds for b with 4, we first prove that b < Since b = b I bi < bi, it suffices to verify W 1 := > 0. A direct calculation yields W 1 =
6 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory Hence.17 is valid for 4, and it remains to prove b > Clearly, + 3 > + = 1 + and 1 > 3. Therefore, b > > 1 + [ ] > > > > = Thus, a combination of.17 and.18 leads to.1. Observe from.1 that for 3, b > 0. Therefore, by Theorem 4.1 of [8], the quasi-orthogonal polynomial can be expressed as a determinant Q x = P x + b P 3 x = det xi J,.19 where I is an identity matrix, and J is a symmetric tri-diagonal matrix of order 1, a1 0 a1 0 a. J = a 3 0 a a b b 0.0 with a n and b given in.7 and.11, respectively. Therefore, all the zeros of Q are real and distinct. 3 Theorem 4. of [8] reveals that with such a choice of b, all the zeros of Q are distributed in, 1, and the quadrature formula.13 is unique. Moreover, the interior weights {ω j } are all positive. 4 We now consider the expressions of the weights. By Formula 38 of [8], 1 x j ω j = A K x, 1 j 1,.1 j where A = 1 b /a, and K x j = A 3 k=0 P k x j + P x j..
7 148 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory To simplify K x j, we recall the Christoffel Darboux formula see, e.g., Szegö [3]: P n+1 x P n x P n x P n+1 x = d n+1 d n n k=0 where d k is the leading coefficient P k x, and by.6 and.8, Hence, P k x > 0, x [, 1],.3 d k = λ k λ k, P k x = d k P k x..4 Q x P x P xq x.10 = [ P x P x P xp x ] [ b P xp 3x P 3 x P x ].4 1 [ = P d x P x P x P x ] b [ P d x P 3 x P 3 x P x ] = P k d x b 3 d P k=0 d 3 k x.5 {[ k=0 = 1 1 b ] } d 3 P k d x + P x.6 = 1 d In the last step, we used the identity a = λ 3 λ λ λ 3 { [ 1 b d 3 k=0 ] 3 a } P k x + P x. k=0 = d 3 d,.6 which can be verified directly by the definition of the constants. Thus, taking x = x j in.5 and using the fact: Q x j = 0, 1 j 1, gives { } Q x j P x j.5 1 = A 3 P k x j + P x j = 1 K d x j,.7 which implies that d k=0 K x j.4 = d Q x jp x j.4 = Plugging it into.1 leads to λ λ Q x jp x j. Remark.1. 1 The choice of b so that all zeros of Q x are in, 1, is unique, and thereby uniquely determines the quadrature rule. It is seen from.1 that b is uniformly bounded with b = 1/4, whose behavior is depicted in Fig. 3.1 left. The formulae.19.0 indicate that the quadrature nodes {x j } are eigenvalues of the symmetric tridiagonal matrix J, and therefore, can be evaluated by using some standard methods e.g., the QR-algorithm as with the classical Gauss quadrature see, e.g., [11].
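The eigenvalue characterization (2.19)–(2.20) is straightforward to code. The sketch below (ours, not from the paper) assembles the symmetric tridiagonal matrix J_{N−1}: zero diagonal, off-diagonal entries sqrt(a_1), …, sqrt(a_{N−3}), sqrt(a_{N−2} − b_N), with the recurrence coefficients a_n = n(n+4)/((2n+3)(2n+5)) as reconstructed from (2.7). We do not reproduce the exact constant b_N of (2.11); the value 0.24 used below is a hypothetical placeholder near the limit 1/4, so the computed eigenvalues only illustrate the structural properties (realness, symmetry, location in (−1, 1)) rather than the true GLLB points.

```python
import numpy as np

def gllb_nodes_eig(N, bN):
    """Eigenvalues of the symmetric tridiagonal matrix J_{N-1} of
    (2.19)-(2.20): zero diagonal, off-diagonal entries
    sqrt(a_1), ..., sqrt(a_{N-3}), sqrt(a_{N-2} - bN), with the monic
    recurrence coefficients a_n = n(n+4)/((2n+3)(2n+5)) from (2.7)."""
    a = [n * (n + 4) / ((2 * n + 3) * (2 * n + 5)) for n in range(1, N - 1)]
    off = np.sqrt(np.array(a[:-1] + [a[-1] - bN]))
    J = np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(J))

# b_N = 0.24 is a hypothetical placeholder near the limit value 1/4;
# the exact b_N of (2.11) must be used to obtain the true GLLB nodes.
x = gllb_nodes_eig(16, 0.24)
print(x.size)                                  # N - 1 = 15 interior nodes
print(bool(abs(x).max() < 1.0))                # all eigenvalues in (-1, 1)
print(bool(abs(x + x[::-1]).max() < 1e-10))    # symmetric about the origin
```

By Gershgorin's theorem, any 0 ≤ b_N < a_{N−2} keeps all eigenvalues inside (−1, 1), since every off-diagonal entry is below 1/2.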
Fig. 3.1. Left: the constant b_N and its bounds (cf. (2.12)) against N. Right: asymptotic behavior of the minimal and maximal node gaps against N.

Making use of Formula (41) of [8], the weights {ω_j} can be computed from the first components of the orthonormal eigenvectors of J_{N−1}. (3) Alternatively to the eigen-method, the nodes {x_j} can be located by a root-finding method, say Newton–Raphson iteration, which turns out to be more efficient for a quadrature rule of higher order. A good initial approximation might be {x_j⁰} given in (3.24). Accordingly, the weights {ω_j} can be computed via the compact formula (2.14). (4) It is worthwhile to point out that the quadrature nodes and weights are symmetric:

 x_j + x_{N−j} = 0,  ω_j = ω_{N−j},  j = 1, 2, ..., N − 1.  (2.28)

Therefore, the computational cost can be halved. (5) Let h_0 and h_N be the Lagrangian polynomials given in (5.6) of this paper. The boundary weights are

 ω_0 = ∫_{−1}^{1} h_0(x) dx,  ω_N = ∫_{−1}^{1} h_N(x) dx.

One verifies that ω_0 = −ω_N, and an explicit evaluation of the above integrals using the relevant properties of Jacobi polynomials leads to the formula for ω_0 in Theorem 4.2 of [8].

The GLLB quadrature formula (2.13) enjoys the same degree of exactness as the Gauss–Legendre–Lobatto (GLL) quadrature with N + 1 nodes, but in contrast to GLL, GLLB includes the boundary derivative values φ′(±1), and is thereby suitable for exactly imposing Neumann boundary conditions (see Section 5).

3. Asymptotic properties of the nodes and weights

In this section, we make a quantitative asymptotic estimate of the nodes and weights in the GLLB quadrature formula (2.13). These properties will play an essential role in our subsequent analysis of the GLLB interpolation errors.

3.1. Asymptotic property of the nodes

We start with the interlacing property. Theorem 4.1 of [20] reveals that the zeros of Q_N interlace with those of Q_{N−1}. The following theorem indicates that the zeros of Q_N also
9 150 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory interlace with those of P, which can be proved in a fashion similar to that in [0]. For integrity, we provide the proof below. Theorem 3.1. Let {x j } and {y k} k=1 be the zeros of Q x and P x, respectively. Then we have 1 = y 0 < x 1 < y 1 < x < y < < y < x < 1 = y. 3.1 Proof. We take three steps to complete the proof. Step I: Show that Q y j Q y j+1 < 0, j = 1,,... 3, 3. equivalently to say, between two consecutive zeros of P x, there exists at least one zero of Q x. To justify this, we first deduce that Q x P x P xq x > 0, x [, 1], 3.3 which is a consequence of.7,.5 and.1, since d > 0, 1 b 1 > 1 + a , 4. a Thanks to P y k = 0, taking x = y j, y j+1 in 3.3 yields P y jq y j < 0, P y j+1q y j+1 < 0, 3.4 which certainly implies P y j P y j+1 Q y j Q y j+1 > 0. Therefore, to prove 3., it suffices to check that otice that P y j P y j+1 < P x = d and consequently, k=1 x y k, P y j j = d y j y k k=1 k= j+1 P y j+1 = j 3 d C, where C 1 and C are two positive constants. Hence y j y k = j d C 1, 3.6 P y j P y j+1 = j 5 d C 1C < 0, 1 j This validates 3.5 and thereby 3. follows. Step II: At this point, it remains to consider the possibility of possessing zeros of Q x in subintervals y, 1 and, y 1. Clearly, by the first identity of 3.6, the largest zero y satisfies P y > 0. Thus, by 3.4, Q y <
10 On the other hand, we have L.-L. Wang, B.-y. Guo / Journal of Approximation Theory Q 1 = P 1 + b P 3 1 > 0, 3.9 which is due to a direct calculation using.5.6 and.1: P 1 P b = b > 0, and P 3 1 > 0. The sign of indicates that there is at least one zero of Q x in the interval y, 1. Using the symmetry of the zeros see [8,3]: x j = x j, j = 1,..., 1; y k = y k, k = 1,...,, 3.10 we deduce that the interval, y 1 also contains at least one zero of Q x. Final step: A combination of the previous statements reaches the conclusion: each of the 1 subintervals { y j, y j+1 } j=0 y 0 =, y = 1 contains at least one of the 1 zeros of Q x, and therefore there can only be a unique one in each subinterval. To visualize the above interlacing property, we denote min = min 1 j y j x j, According to Theorem 3.1, we expect to see that max > min max = max 1 j y j x j, > 0, 4, lim min = lim max = 0, 3.1 which is illustrated by Fig. 3.1right. In the analysis of interpolation error, we need more precise asymptotic estimates of the GLLB quadrature nodes. For this purpose, hereafter, we assume that the zeros {x j } of Q x are arranged in descending order. We make the change of variables x = cos θ, θ [0, π], θ j = cos x j, j = 1,,..., The main result is stated in the following theorem. Theorem 3.. Let { θ j } where be the same as in Then θ j I j := θ j, θ j 0, π, 1 j 1, 3.14 θ j = j + 3 π, 0 j In other words, 0 < θ 0 < θ 1 < θ 1 < θ < θ < < θ < θ < θ < π Proof. By the intermediate mean-value theorem, 3.14 is equivalent to Q cos θ j Q cos θ j < 0, j = 1,,...,
11 15 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory To prove this result, we first recall Formula of Szegö [3]: J n, cos θ = n 1 Gθ cos n + 5 θ + γ + On 3, θ 0, π, 3.18 where Gθ = π 1 sin θ 5 cos θ 5 = 4 π sin θ 5 5, γ = π Hence, using.6,.10 and 3.18 gives Q cos θ.10 = P cos θ + b P 3 cos θ.6 = λ J, cos θ + b λ 3 J, 3 cos θ { 3.18 = Gθ 1 1 λ cos + 3 θ + γ b λ 3 cos 1 } θ + γ } + {λ O b λ 3 O 3 3 := H θ + R. 3.0 otice that θ j in 3.15 solves the equation + 1 θ j + γ = j 1 π, 0 j 1, which implies + 3 θ j + γ = j 1 π + θ j, 1 θ j + γ = j 1 π θ j. Therefore, taking θ = θ j in 3.0 and using trigonometric identities, gives { H θ j = G θ j 1 1 λ cos j 1 π + θ j b λ 3 cos j 1 } π θ j = j } sin θ j G θ j { 1 1 λ 3 1 b λ 3 := j S θ j. 3.1 Since b > 0, λ k > 0 and θ j 0, π, we have that S θ j > 0, and thereby sgn H θ j = j, 0 j Moreover, by.1, 3.19 and 3.1, H θ j 5 π 1 sin θ j 3 { 1 1 λ + c 1 λ + λ 3, } 3 1 λ 3
12 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory Fig. 3.. Left: the error solid line and.9 against. Right: asymptotic behavior of δ 1 δ cf against. solid line and which, together with the fact b = 1 4, implies R c 3 λ + λ 3 c H θ j. 3.3 Hence, a combination of leads to that sgn Q cos θ j = sgn H θ j = j, 0 j 1, 1, which implies 3.17 and therefore, there exists at least one zero of Q cos θ in each subinterval I j = θ j, θ j, 1 j 1. Because the number of zeros equals to the number of subintervals, there exists exactly one zero in I j. As a consequence of Theorem 3., a good asymptotic approximation to the zeros {x j } j=0 might be x j = cos θ j = x 0 j θ j + θ j := cos = cos To illustrate this property numerically, we denote = max x j x 0. 1 j j j + 1/π, 1 j We plot in Fig. 3. left the error solid line and the reference value.9 against, which indicates that decays algebraically at a rate of.9, and predicts that x j = cos j + 1/π O.9, 1 j Asymptotic properties of the weights We establish below an important result concerning the asymptotic behaviors of the interior quadrature weights {ω j } in.13.
13 154 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory { } Theorem 3.3. Let θ j be the zeros of the quadrature trigonometric polynomial Q cos θ. Then for any 1, ω j = π sin θ j, j = 1,,..., Proof. We rewrite the weights as where.14 ω j =.6 =.10 A 1 1 x j P x j Q x j A 1 1 λ λ 3 1 x j J, x jw x j, 3.7 W x := λ J, λ x + b J, 3 x = λ 3 Q x To prove 3.6, it suffices to study the asymptotic behaviors of the constant, J, x j and W x j in 3.7. For clarity, we split the rest of the proof into three steps. Step I: Let us first estimate the constants in A direct calculation using.6.7 and.1 leads to a 1 = 4, b = 1 4, 1 = 16 λ, which, together with.15, implies A = 1 b λ λ 3 a Step II: Let Θ, j = We next show that λ λ 3 λ λ + = = 1 λ 3 1, 3.9 = θ j 5 π sin Θ, j = 0, Since Q x j = 0, we have W cos θ j 3.8 = λ 3 Q cos θ j = λ 3 Q x j = 0. Hence, by 3.18, 0 = W cos θ j = λ λ 3 J, cos θ j + b J, 3 cos θ j 3.18 = 3 1 Gθ j { c cos Θ, j + θ j + b cos Θ, j θ j } + O 3 = 3 1 Gθ j { b c sin θ j sin Θ, j + b + c cos θ j cos Θ, j } + O 3,
14 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory where Gθ j is given in 3.19, and by 3.9, c = λ λ 3 Consequently, 1 3 λ λ = 1 = 1 λ λ = b c sin θ j sin Θ, j + b + c cos θ j cos Θ, j + O sin 5/ θ j On the other hand, using 3.9 and 3.33 leads to b + c = 0, b c 1 =, for Since sin θ j 0> 0, the desired result 3.3 follows from Step III: Applying Formula of [3] note: this formula may be derived by differentiating 3.18 and using 3.8 to W x, and using trigonometric identities, yields dw dθ cos θ j { } = 3 1 Gθ j c sinθ, j + θ j b sinθ, j θ j + sin θ j O1 { = 3 1 Gθ j b c sin θ j cos Θ, j b + c cos θ j sin Θ, j } + sin θ j O Hence, by 3.3 and 3.35, dw dθ cos θ j = 1 Gθ j sin θ j cos Θ, j, 1 j 1, 3.37 which implies W x j = dw dθ cos θ j dθ dx = dw θ=θ j dθ cos θ 1 j = sin θ j Gθ j cos Θ, j On the other hand, by 3.18, J, cos θ j = 1 Gθ j cos Θ, j, 1 j Multiplying 3.38 by 3.39 yields J, x jw x j = 1 G θ j cos Θ, j = 1 G θ j, 3.40 where in the last step, we used the fact: cos Θ, j 3.3 = 1, Finally, thanks to x = j 1 sin 4, G θ j 3.19 = 5 θ j π 1 sin 5 θ j, the desired result 3.6 follows from 3.7, 3.30 and 3.40.
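As a concrete illustration of Theorem 3.3 (our own toy check, not from the paper), the snippet below integrates a smooth function using only the leading-order interior nodes and weights x_j⁰ = cos θ_j⁰, ω_j⁰ = (π/N) sin θ_j⁰, θ_j⁰ = (2j+1)π/(2N). This asymptotic rule is only O(N^{−2}) accurate (unlike the exact GLLB rule, which is exact on polynomials of degree ≤ 2N − 1), but the decay of the error as N grows is consistent with ω_j ≅ (π/N) sin θ_j.

```python
import math

def asymptotic_rule(f, N):
    """Approximate the integral of f over (-1,1) using only the
    leading-order interior nodes/weights (boundary terms dropped)."""
    total = 0.0
    for j in range(1, N):
        t = (2 * j + 1) * math.pi / (2 * N)   # theta_j^0
        total += math.pi / N * math.sin(t) * f(math.cos(t))
    return total

exact = math.e - 1.0 / math.e                 # integral of exp over (-1,1)
for N in (10, 40, 160):
    print(N, abs(asymptotic_rule(math.exp, N) - exact))
```

The printed errors shrink roughly by a factor of 16 when N is quadrupled, i.e. at the O(N^{−2}) rate expected of the leading-order approximation.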
As a direct consequence of (3.24) and (3.26), we derive the following explicit asymptotic expression:

 ω_j ≅ ω_j⁰ := (π/N) sin((2j + 1)π/(2N)),  1 ≤ j ≤ N − 1.  (3.41)

To examine how good it is, we set

 δ₁ = max_{1≤j≤N−1} { (π/N) sin θ_j / ω_j },  δ₂ = max_{1≤j≤N−1} { ω_j⁰ / ω_j }.  (3.42)

By Theorem 3.3, we expect to see that

 δ_i → 1,  i = 1, 2,  as N → ∞,  (3.43)

which indeed can be visualized from Fig. 3.2 (right). So far, we have derived two asymptotic estimates for the GLLB nodes and weights (cf. (3.24) and (3.41)), which will be useful for the analysis of the GLLB interpolation error in the forthcoming section.

4. GLLB interpolation error estimates

This section is devoted to the analysis of the GLLB interpolation errors in Sobolev norms, which will be used for the error analysis of GLLB pseudospectral methods. We first state the main result, and then present the ingredients of the proof, including some inequalities and orthogonal projections. Finally, we give the proof of the interpolation errors.

4.1. The main result

We begin with the definition of the GLLB interpolation operator associated with the GLLB quadrature formula. The GLLB interpolant I_N u ∈ P_N satisfies

 (I_N u)′(±1) = u′(±1),  (I_N u)(x_j) = u(x_j),  1 ≤ j ≤ N − 1.  (4.1)

Denote the weight functions ω(x) = 1 − x² and ω^α(x) = (1 − x²)^α. Let ᾱ = max{0, −[α] − 1}, with [α] being the largest integer ≤ α. To describe the error, we introduce the weighted Sobolev space

 B^m_α(I) := { u ∈ H^{ᾱ}(I) : ∂_x^k u ∈ L²_{ω^{α+k}}(I), ᾱ + 1 ≤ k ≤ m },  m ≥ 0,

with the norm

 ‖u‖_{B^m_α(I)} = ( ‖u‖²_{H^{ᾱ}(I)} + Σ_{k=ᾱ+1}^{m} ‖∂_x^k u‖²_{ω^{α+k}} )^{1/2}.

The main result on the GLLB interpolation error is stated as follows.

Theorem 4.1. For any u ∈ B^m_{−1}(I) and m ≥ 2,

 ‖I_N u − u‖ + N^{−1} ‖∂_x(I_N u − u)‖_ω ≤ c N^{−m} ‖∂_x^m u‖_{ω^{m−1}}.  (4.2)
16 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory Preparations for the proof The main ingredients for the proof Theorem 4.1 consist of the asymptotic estimates cf. Theorem 3. and 3.3, several inequalities and the approximation property of one specific orthogonal projection to be stated below Some inequalities For notational convenience, we define the discrete inner product and discrete norm associated with the GLLB quadrature formula as u, v = S [u v], v = v, v 1, where S [ ] represents the finite sum in.13. Clearly, the exactness.13 implies that φ, ψ = φ, ψ, φ ψ P. 4.3 We have the following equivalence between the continuous and discrete norms over the polynomial space P, which will be useful in the analysis of interpolation error, and the GLLB pseudospectral methods for nonlinear problems. Lemma 4.1. For any integer 4, where φ φ 1 + C φ 3 φ, φ P, 4.4 C = Proof. We first prove that 4.4 holds for the Legendre polynomial L. That is γ 0,0 = L L, L 1 + C L = 1 + C γ 0, For this purpose, set ψx = L x K 0,0 1 x Q xp 3 x. Since the leading coefficient of Q P 3 is one, we deduce from.9 that ψ P. Hence, using the fact Q x j = 0 and the orthogonality, gives L, L = 1, ψ 4.3 = 1, ψ =.10 =.1 1 L xdx b 1 K 0,0 ψxdx 1 P 3 x1 x dx = γ 0,0 b λ 3 K 0,0, γ 3 = 1 b λ 3 K 0,0, γ 3 γ 0,0 Using..6 and.9 to work out the constants, yields λ 3 K 0,0, γ 3 γ 0, =. 1 γ 0,0. 4.6
17 158 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory Furthermore, by.1, we have that for 4, 0 < b λ 3 K 0,0, γ 3 γ 0, < = C Accordingly, a combination of leads to 4.5. ext, for any φ P, we write φx = and therefore φ = ˆφ n L n x = n=0 n=0 n=0 ˆφ n γ 0,0 n = Φ + ˆφ γ 0,0. ˆφ n L n x + ˆφ L x := Φx + ˆφ L x, It is clear that Φ x P, and so by 4.3 and the orthogonality of Legendre polynomials, Φ, L = Φ, L = 0, which implies φ = Φ, Φ + ˆφ Φ, L + ˆφ L, L = Φ + ˆφ L, L = n=0 ˆφ n γ 0,0 n + ˆφ L, L. Finally, applying 4.5 leads to the desired result. The following Bernstein Markov type inequality holds for polynomials in the finitedimensional space: X = { φ P : φ ±1 = 0 }. 4.8 Lemma 4.. For any φ X, Proof. Let φ ω + 1 φ + 1 φ. 4.9 nn + 1 ψ n x = L n x + e n L n+ x, e n = n + n Since L n ±1 = 1 n nn + 1, we have that {ψ n } n=0 forms a basis for X. Consequently, for any φ X, φx = n=0 φ n ψ n x = n=0 φ n L n x + e n φ n L n x n= = φ 0 L 0 x + φ 1 L 1 x + φ n + e n φ n L n x n= + e 3 φ 3 L x + e φ L x. 4.11
18 By the recursive relation and 4.10, L k x = 1 φ x = L.-L. Wang, B.-y. Guo / Journal of Approximation Theory n=0 1,1 k + 1J k x, k 1, φ n ψ n x = 1 3 n + φ n+1 J n 1,1 x + 1 n=1 n=0 n=1 = φ 1 J 1,1 0 x n + φ n+1 + e n φ n J n 1,1 x + 1 e 3 φ 3 J 1,1 x e φ J 1,1 x. Using the orthogonality.1. gives and φ = φ 0 γ 0,0 0 + φ 1 γ 0,0 1 + φ n + e n φ n γ n 0,0 n= + e 3 φ 3 γ 0,0 + e φ γ 0,0, φ ω = φ 1 γ 1, n + φ n+1 + e n φ n γ n 1,1 4 n= e 3 φ 3 γ 1, e φ γ 1,1 = φ 1 γ 1, n + 1 φ n + e n φ n γ 1,1 4 n n= e 3 φ 3 γ 1, e φ γ 1,1. In view of the above facts, we deduce from. that φ ω 1 { 4 max n + 1 γ 1,1 0,0 1 n n γ n } φ = + 1 φ. This implies the desired result. n + e n φ n J 1,1 n In the preceding analysis, we will also use the following Poincaré inequality see, e.g., [5]. Lemma 4.3. For any u H 1 I, u c u + ū, where ū = 1 x uxdx Orthogonal projections We first consider the orthogonal projection π : L I P such that for any u L I, π u u, φ = 0, φ P The following result can be found in [10,14].
19 160 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory Lemma 4.4. For any u B0 m I and m 0, π u u c m m x u ω m We now turn to the second orthogonal projection. For simplicity, we denote x vx = x vydy, x vx = 1 x vydy, v = 1 vxdx. Define the space { } X := v : v H I, v ±1 = 0, X 0 := {v X : v = 0}, X 0 := P X 0. Consider the orthogonal projection π 1,0 : X 0 X 0, defined by aπ 1,0 v v, φ = 0, φ X 0, 4.15 where the bilinear form au, v := x u, x v. Recall that for real µ 0, the Sobolev space H µ I and its norm µ are defined by space interpolation as in [1]. Lemma 4.5. For any v X 0 B m I with m, π 1,0 v v µ c µ m x m v ωm, 0 µ Proof. We first consider the case µ = 1. Let v x = ξ + x v1x + x x π x vx 4.17 where the constant ξ is chosen such that v = v. Clearly, x v 1 = x v1 and by 4.13 with φ = 1, x v = x v1 π x vxdx = xv1 x vxdx = xv I I For clarity, let us denote g = v v. Thanks to the above facts, we have from 4.13 and 4.14 that v v 1 = xv v, x g = x v v, g = π x v x v, g = π x v x v, π g g π x v x v π g g c 1 m m x v ω m x g ω c 1 m m x v ω m v v Moreover, by the Poincaré inequality 4.1 and 4.15, π 1,0 v v 1,0 1 c π v v 1 1,0 1,0 = caπ v v, π v v = caπ 1,0 v v, v v c π 1,0 v v 1 v v 1. In view of this fact, we have from 4.19 that π 1,0 v v 1 c v v 1 c 1 m x m v ωm. 4.0
20 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory We next prove the case µ = 0 by using a duality argument see, e.g., [6]. For any f L I, we consider an auxiliary problem. It is to find w X 0 such that aw, z = f, z, z X By the Poincaré inequality cf. 4.1 and Lax Milgram Lemma, the problem 4.1 has a unique solution with the regularity w c f. Taking z = π 1,0 v v in 4.1, we deduce from 4.15, 4.0 and 4. that π 1,0 Consequently 1,0 1,0 1,0 v v, f = aπ v v, w = aπ v v, π w w π 1,0 v v = sup f L I f 0 π 1,0 v v 1 π 1,0 w w 1 c m x m v ω m x w c m x m v ωm f. 4. π 1,0 v v, f c m x m f v ωm. 4.3 Finally, we get the desired result by 4.0, 4.3 and space interpolation. With the aid of the previous preparations, we are able to derive the following important result. Theorem 4.. There exists an operator π 1 : X X, such that π 1 v = v, and aπ 1 v v, φ = 0, φ X. 4.4 Moreover, for any v X B m I with m, π 1 v v µ c µ m x m v ωm, 0 µ Proof. For any v X, since v v/ X 0, we define π 1 1,0 vx = π vx v/ + v/. One verifies readily that π 1 v X and π 1 v = v. Moreover, by 4.15, aπ 1 1,0 v v, φ = aπ v v/ v v/, φ φ/ = 0, φ X. Moreover, by 4.16 and the fact r, π 1 v v µ = π 1,0 v v/ v v/ µ c µ m x m v v/ ω m c µ m x m v ω m. This leads to Continuity of the GLLB interpolation operator In order to prove Theorem 4.1, it is essential to show that I is a continuous operator from X to L I, as stated below.
21 16 L.-L. Wang, B.-y. Guo / Journal of Approximation Theory Lemma 4.6. For any v X, I v c v + x v ω, 4.6 where the weight function ωx = 1 x. Proof. In this proof, we mainly use Theorems 3. and 3.3. Let x = cos θ and ˆvθ = vcos θ. Then by.13 and 4.4 and Theorem 3.3, I v I v = v x j ω j c ˆv θ j sin θ j. 4.7 By using an inequality of space interpolation see formula 13.7 of [], we know that for any f H 1 a, b, max f x c 1 f L a x b b a a,b + b a x f L a,b. 4.8 ow, let θ 0, θ and I j be the same as in Theorem 3.. Denote a 0 = θ 0 and a 1 = θ. Then by 4.7 and 4.8, I v c c sup θ Ī j ˆvθ sin θ ˆvθ sin θ + θ ˆvθ sin θ L Ī j c ˆvθ sin θ + θ ˆvθ sin θ L 0,π L Ī j L [a 0,a 1 ] c ˆvθ sin θ + ˆvθsin θ / L 0,π L [a 0,a 1 ] + θ ˆvθ sin θ c ˆvθ sin θ + θ ˆvθ sin θ L 0,π + sup 1 L 0,π a 0 θ a 1 L 0,π. sin θ ˆvθ sin θ L [a 0.a 1 ] Since Theorem 3. implies sup a0 θ a 1 1 sin θ I v c v + x v ω, c, we transform θ back to x, and obtain which ends the proof.
22 4.4. Proof of Theorem 4.1 Let L.-L. Wang, B.-y. Guo / Journal of Approximation Theory v x = v x xv1 + x x π x vx P. Like 4.18, we have v = v and xv ±1 = xv±1. Thus, by 4.13, v v, φ = x v = π x v x v, x x φ v, x A similar argument as in the derivation of 4.19 leads to x φ = 0, φ P v v 1 c 1 m x m v ωm Denote g = v v. Then by 4.14, 4.9 and 4.30, v v = v v, g = v v, x which implies x x g = v v, x = x v v, π 3 g x g v v 1 π 3 x c m x m v ω m g ω c m x m v ω m v v, x g x π 3 x g g x g v v c m x m v ωm Since I v = v and v v X, we have from Lemma 4.6, 4.30 and 4.31 that I v v = I v v c v v + x v v ω By the above and Lemma 4., c v v + v v 1 c m x m v ωm. 4.3 x I v v ω c I v v c 1 m x m v ωm Finally, a combination of leads to that I v v + x I v v ω I v v + xi v v ω + v v + xv v ω c 1 m m x v ω m. This ends the proof Corollaries We present below two mathematical consequences. Corollary 4.1. For any φ X M and ψ X L, we have I φ c1 + M φ, 4.34 φ, ψ c1 + M 1 + L φ ψ. 4.35
Proof. Using (4.4) and Lemmas 4.2 and 4.6 gives

 ‖I_N φ‖ ≤ c (‖φ‖ + N^{−1} ‖∂_x φ‖_ω) ≤ c (1 + M N^{−1}) ‖φ‖,

and

 (φ, ψ)_N = (I_N φ, I_N ψ)_N ≤ ‖I_N φ‖_N ‖I_N ψ‖_N ≤ c ‖I_N φ‖ ‖I_N ψ‖ ≤ c (1 + M N^{−1})(1 + L N^{−1}) ‖φ‖ ‖ψ‖. □

Corollary 4.2. For any v ∈ X ∩ B^m_{−1}(I), m ≥ 2, and any φ ∈ X_N,

 |(v, φ)_N − (v, φ)| ≤ c N^{−m} ‖∂_x^m v‖_{ω^{m−1}} ‖φ‖.  (4.36)

Proof. By (4.3), (4.4), and Theorem 4.1,

 |(v, φ)_N − (v, φ)| ≤ |(v, φ)_N − (π¹_N v, φ)_N| + |(π¹_N v, φ) − (v, φ)|
  ≤ c (‖π¹_N v − v‖ + ‖I_N v − v‖) ‖φ‖ ≤ c N^{−m} ‖∂_x^m v‖_{ω^{m−1}} ‖φ‖,

where we used (v, φ)_N = (I_N v, φ)_N and (π¹_N v, φ)_N = (π¹_N v, φ). □

This result is useful in the numerical analysis of the related pseudospectral scheme.

5. GLLB pseudospectral methods and error estimates

Among several different versions of spectral approximations, pseudospectral methods are commonly used and more preferable in industrial codes owing to their ease of implementation and treatment of nonlinear problems. Most existing literature concerning this method is based on collocation points that are identified with generalized Gauss–Lobatto quadrature formulae [4,9,1,18]. In a pseudospectral method, the choice of collocation points is crucial for stability and the treatment of boundary conditions [1]. The GLLB quadrature formula (2.13) involves first-order derivative values at the endpoints, which allows a natural and exact imposition of Neumann boundary conditions. Based on this, Ezzirani and Guessab [8] proposed a GLLB collocation method for some model elliptic equations, and showed that this method leads to a discrete system with a diagonal mass matrix, and thereby can be used to introduce explicit resolutions in the lumped mass method for time-dependent problems.

The main purposes of this section are twofold: (i) to present a user-oriented implementation of the GLLB pseudospectral methods based on pure collocation and variational formulations, and (ii) to make an error analysis of the GLLB pseudospectral method based on a discrete variational formulation. We will first restrict our attention to one-dimensional problems, and then follow with multi-dimensional cases.
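Before detailing the schemes, here is a quick numerical sanity check (our own, with a manufactured solution; the names are illustrative) of the solvability constraint for the pure Neumann case b(x) ≡ 0 treated below: integrating −u″ = f over I = (−1, 1) forces ∫_{−1}^{1} f dx = u′(−1) − u′(1) = g_− − g_+.

```python
import math

# Manufactured solution u(x) = sin(pi*x)/pi + x**2 (names illustrative):
#   u'(x) = cos(pi*x) + 2x  =>  g_- = u'(-1) = -3,  g_+ = u'(1) = 1,
#   f(x)  = -u''(x) = pi*sin(pi*x) - 2.
def f(x):
    return math.pi * math.sin(math.pi * x) - 2.0

g_minus, g_plus = -3.0, 1.0

# Composite midpoint quadrature of f over (-1, 1).
M = 4000
h = 2.0 / M
integral = h * sum(f(-1.0 + (i + 0.5) * h) for i in range(M))

# Compatibility: the integral of f over I must equal g_- - g_+ (= -4 here).
print(abs(integral - (g_minus - g_plus)) < 1e-6)   # -> True
```

Any right-hand side violating this identity admits no solution with the prescribed Neumann data, which is why the scheme below must correct F(x) when b(x) ≡ 0.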
It should be pointed out that the methods go beyond these model equations.

5.1. GLLB collocation method

Consider the elliptic equation:

    L[u](x) := −u″(x) + b(x)u(x) = f(x),  b(x) ≥ 0,  x ∈ I = (−1, 1),
    u′(±1) = g_±,                                                                     (5.1)
where, in the case of b(x) ≡ 0, the problem admits a solution only provided that the given data satisfy the compatibility condition

    ∫_{−1}^{1} f(x) dx = g_− − g_+.                                                   (5.2)

Let {x_j}_{j=0}^{N} be the GLLB quadrature nodes. The GLLB collocation approximation to (5.1) is to find u_N ∈ P_N such that

    L[u_N](x_j) = −u_N″(x_j) + b(x_j)u_N(x_j) = F(x_j),  1 ≤ j ≤ N−1,
    u_N′(±1) = g_±,                                                                   (5.3)

where F(x) is a consistent approximation to f(x), with F(x) = f(x) for b(x) ≢ 0; the case b(x) ≡ 0 is to be specified below. We see that, like the LGL collocation method for Dirichlet problems, the numerical solution satisfies the boundary conditions exactly.

Remark 5.1. In the case of b(x) ≡ 0, the scheme (5.3) reduces to

    −u_N″(x_j) = F(x_j),  1 ≤ j ≤ N−1;  u_N′(±1) = g_±.                               (5.4)

Let I_N^L F ∈ P_{N−2} be the Lagrange interpolation polynomial of F associated with the interior GLLB nodes {x_j}. Since u_N″ ∈ P_{N−2}, the scheme (5.4) implies

    −u_N″(x) = I_N^L F(x),  x ∈ I;  u_N′(±1) = g_±.

Thus, a direct integration leads to

    ∫_{−1}^{1} I_N^L F(x) dx = g_− − g_+.                                             (5.5)

This means that (5.4) has a solution as long as the above compatibility holds. However, since f, g_+ and g_− are given, the compatibility is not valid if we take F(x) = f(x). To meet the condition (5.5), we may follow the idea of Guo (cf. page 558 of [12]) and take

    F(x) = f(x) − (1/2) S_N[I_N^L f] + (1/2)(g_− − g_+),

where the functional S_N[·] is defined in (2.13). It is clear that for N ≥ 3, we have

    I_N^L F(x) = I_N^L f(x) − (1/2) S_N[I_N^L f] + (1/2)(g_− − g_+).

By virtue of (2.13) and (5.2), we have that

    ∫_{−1}^{1} I_N^L F(x) dx = ∫_{−1}^{1} I_N^L f(x) dx − S_N[I_N^L f] + (g_− − g_+) = g_− − g_+.

We now examine the matrix form of (5.3). Let {h_j}_{j=0}^{N} ⊂ P_N be the set of Lagrangian polynomials associated with the GLLB points:

    h_0′(−1) = 1,  h_0′(1) = h_0(x_k) = 0,  1 ≤ k ≤ N−1,
    h_j′(±1) = 0,  h_j(x_k) = δ_{jk},  1 ≤ j, k ≤ N−1,
    h_N′(1) = 1,  h_N′(−1) = h_N(x_k) = 0,  1 ≤ k ≤ N−1,                              (5.6)
whose explicit expressions are given in Appendix A. It is clear that {h_j}_{j=0}^{N} ⊂ P_N and spans the polynomial space P_N. Under this nodal basis, we write

    u_N(x) = g_− h_0(x) + g_+ h_N(x) + Σ_{k=1}^{N−1} a_k h_k(x) ∈ P_N,                (5.7)

and determine the unknowns {a_k}_{k=1}^{N−1} by (5.3), i.e., the system

    A_c a := (−D_in^{(2)} + B) a = b,                                                 (5.8)

where

    d_jk^{(2)} = h_k″(x_j),  D_in^{(2)} = (d_jk^{(2)})_{1≤j,k≤N−1},
    B = diag(b(x_1), b(x_2), ..., b(x_{N−1})),  a = (a_1, a_2, ..., a_{N−1})^T,
    b = (f(x_1), ..., f(x_{N−1}))^T + (d_{10}^{(2)}, d_{20}^{(2)}, ..., d_{N−1,0}^{(2)})^T g_−
        + (d_{1N}^{(2)}, d_{2N}^{(2)}, ..., d_{N−1,N}^{(2)})^T g_+.

As with the usual collocation schemes, the GLLB method is easy to implement once the associated differentiation matrices are pre-computed. The following lemma provides a recursive way to evaluate the differentiation matrices.

Lemma 5.1. Let

    d_jk^{(l)} = h_k^{(l)}(x_j) = (d^l h_k / dx^l)(x_j),  D^{(l)} = (d_jk^{(l)})_{0≤j,k≤N},  l ≥ 1.   (5.9)

Then we have

    D^{(l+1)} = D^{(1)} D̃^{(l)},  l ≥ 0,  D̃^{(0)} = D^{(0)} = I_{N+1},               (5.10)–(5.11)

where I_{N+1} is the identity matrix of order N+1, and D̃^{(l)} is identical to D^{(l)} except that the first and last rows of D^{(l)} are replaced by (d_{00}^{(l+1)}, d_{01}^{(l+1)}, ..., d_{0N}^{(l+1)}) and (d_{N0}^{(l+1)}, d_{N1}^{(l+1)}, ..., d_{NN}^{(l+1)}), respectively.

Proof. For any φ ∈ P_N, we have that

    φ(x) = φ′(−1)h_0(x) + Σ_{j=1}^{N−1} φ(x_j)h_j(x) + φ′(1)h_N(x) ∈ P_N,

which implies that

    φ′(x) = φ′(−1)h_0′(x) + Σ_{j=1}^{N−1} φ(x_j)h_j′(x) + φ′(1)h_N′(x) ∈ P_{N−1}.

Let x_0 = −1 and x_N = 1. Taking φ(x) = h_k^{(l)}(x) and x = x_i leads to that for all 0 ≤ i, k ≤ N and l ≥ 0,
    d_ik^{(l+1)} = h_k^{(l+1)}(x_i)
                 = h_k^{(l+1)}(x_0) h_0′(x_i) + Σ_{j=1}^{N−1} h_k^{(l)}(x_j) h_j′(x_i) + h_k^{(l+1)}(x_N) h_N′(x_i)
                 = d_{i0}^{(1)} d_{0k}^{(l+1)} + Σ_{j=1}^{N−1} d_{ij}^{(1)} d_{jk}^{(l)} + d_{iN}^{(1)} d_{Nk}^{(l+1)},   (5.12)

which implies the desired result.

As a consequence of this lemma, it suffices to evaluate the first-order differentiation matrix D^{(1)} and the values h_j^{(l+1)}(±1) for l ≥ 1 and 0 ≤ j ≤ N to compute higher-order differentiation matrices.

Although the collocation scheme (5.3) is easy to implement, the GLLB differentiation matrix D^{(2)} is full with Cond(D^{(2)}) ∼ N^4, and thereby, when N is large, the accuracy of the nodes and of the entries of D^{(2)} is subject to significant roundoff errors. To overcome this trouble, it is advisable to construct a preconditioner for the system (5.8), and to use a suitable iterative solver [4,7]. As a matter of fact, Canuto [4] proposed a finite-difference preconditioning for the LGL collocation method for Neumann problems, but it cannot be applied to this context directly. However, with a different treatment of the boundary conditions, we are able to derive an optimal preconditioner like that in [4] for the LGL collocation method. To this end, we assume that {x_j}_{j=0}^{N} are arranged in ascending order, and let

    δ_j = x_{j+1} − x_j,  u_j = u(x_j),  u_j″ = u″(x_j),  f_j = f(x_j).

Taking the Neumann boundary conditions into account, we discretize u″(x_1) and u″(x_{N−1}) as

    u_1″ ≈ [ (u_2 − u_1)/δ_1 − g_− ] / (δ_1/2 + δ_0),
    u_{N−1}″ ≈ [ g_+ − (u_{N−1} − u_{N−2})/δ_{N−2} ] / (δ_{N−2}/2 + δ_{N−1}),        (5.13)

and use centered differences for the interior nodes {x_j}_{j=2}^{N−2}. More precisely, the finite-difference approximation to (5.1) reads

    (u_1 − u_2)/(δ_1(δ_1/2 + δ_0)) + b_1 u_1 = f_1 − g_−/(δ_1/2 + δ_0),
    −2u_{j−1}/(δ_{j−1}(δ_{j−1} + δ_j)) + (2/(δ_{j−1}δ_j) + b_j)u_j − 2u_{j+1}/(δ_j(δ_{j−1} + δ_j)) = f_j,  2 ≤ j ≤ N−2,
    (u_{N−1} − u_{N−2})/(δ_{N−2}(δ_{N−2}/2 + δ_{N−1})) + b_{N−1}u_{N−1} = f_{N−1} + g_+/(δ_{N−2}/2 + δ_{N−1}).   (5.14)

Denote by A_d the coefficient matrix of the above system. Table 5.1 contains the spectral radii (i.e., the largest modulus of the eigenvalues) of A_c (cf. (5.8)) and A_d^{−1}A_c, with b ≡ 1 in columns 2–3 and b(x) = 100 + sin(100πx) in columns 4–5. The eigenvalues of the two matrices are real, positive and distinct in all cases.
We also point out that the smallest modulus of the eigenvalues of A_d^{−1}A_c is 1 in both cases, while that of A_c is 1 for b ≡ 1 and 100 for b(x) = 100 + sin(100πx). The eigenvalues of the preconditioned matrix A_d^{−1}A_c lie in the interval [1, π²/4], as shown in Table 5.1, and therefore the system (5.8) can be solved efficiently by an iterative solver.
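The interior rows of (5.14) are the standard three-point second-difference stencil on a nonuniform grid. A minimal numpy sketch of that stencil (an illustration only; the GLLB nodes and the boundary rows of (5.14) are not reproduced here), which is exact for quadratics on any grid:

```python
import numpy as np

def second_diff_interior(x):
    """Interior rows of the 3-point centered second-difference
    operator on an arbitrary ascending grid x[0..N]."""
    n = len(x)
    D2 = np.zeros((n - 2, n))
    for j in range(1, n - 1):
        hm, hp = x[j] - x[j - 1], x[j + 1] - x[j]   # left/right spacings
        D2[j - 1, j - 1] = 2.0 / (hm * (hm + hp))
        D2[j - 1, j] = -2.0 / (hm * hp)
        D2[j - 1, j + 1] = 2.0 / (hp * (hm + hp))
    return D2

# The stencil reproduces u'' exactly for quadratics on any grid.
x = np.cos(np.pi * np.arange(10, -1, -1) / 10)      # ascending, nonuniform
err = np.max(np.abs(second_diff_interior(x) @ x**2 - 2.0))
```

Negating these rows and adding b(x_j) to the diagonal yields the tridiagonal interior block of the preconditioner A_d.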
Table 5.1. The spectral radii of A_c and A_d^{−1}A_c.

5.2. GLLB pseudospectral method in a variational form

The collocation scheme (5.3) is based on a strong form of the original equation, while it is often preferable to implement the GLLB pseudospectral method in a discrete weak form. For simplicity, we consider (5.1) with a constant coefficient and homogeneous Neumann boundary conditions, i.e., b(x) ≡ b and g_± = 0. The GLLB pseudospectral scheme reads:

    Find u_N ∈ X_N such that
    (∂_x u_N, ∂_x v) + b(u_N, v)_N = (f, v)_N,  ∀v ∈ X_N.                             (5.15)

Remark 5.2. Unlike the collocation method based on Legendre–Gauss–Lobatto points for Dirichlet boundary conditions, the GLLB scheme (5.3) with b(x) ≡ b and g_± = 0 is not equivalent to the pseudospectral scheme (5.15). Indeed, multiplying (5.3) (with F(x) = f(x)) by v(x_j)ω_j and summing the result for j = 1, 2, ..., N−1, we may rewrite the GLLB collocation scheme as:

    Find u_N ∈ X_N such that
    (∂_x u_N, ∂_x v) + b(u_N, v)_N = (f, v)_N + R_N(v),  ∀v ∈ X_N,                    (5.16)

where

    R_N(v) = (L[u_N](−1) − f(−1)) v′(−1)ω_0 + (L[u_N](1) − f(1)) v′(1)ω_N
           = −(u_N″(−1) − b u_N(−1) + f(−1)) v′(−1)ω_0 − (u_N″(1) − b u_N(1) + f(1)) v′(1)ω_N.   (5.17)

Hence, up to a boundary residual, the two schemes are equivalent.

We now examine the matrix form of the system (5.15). As usual, we may choose the nodal basis {h_k}_{k=1}^{N−1} for X_N. By (5.6), one verifies that the coefficient matrix under this basis is

    A_p := (D_in^{(1)})^T W D_in^{(1)} + bW,                                          (5.18)

where D_in^{(1)} = (d_jk^{(1)})_{1≤j,k≤N−1} (cf. (5.10)) and W = diag(ω_1, ω_2, ..., ω_{N−1}). We see that A_p is full and ill-conditioned, as is the GLLB collocation system (5.8) (cf. Table 5.2), so it is subject to similar roundoff errors (cf. Fig. 5.1).
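A remedy, taken up below, is to work with a modal basis of compact Legendre combinations. As a quick numerical check of that idea (a sketch assuming the unnormalized Shen-type combination φ_n = L_n − [n(n+1)/((n+2)(n+3))] L_{n+2}, cf. [22]; the paper's normalization d_n is omitted here), one can verify that the derivative of each combination vanishes at x = ±1:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def phi(n):
    """Legendre coefficients of the unnormalized Neumann basis
    phi_n = L_n - n(n+1)/((n+2)(n+3)) * L_{n+2}."""
    c = np.zeros(n + 3)
    c[n] = 1.0
    c[n + 2] = -n * (n + 1) / ((n + 2) * (n + 3))
    return c

# phi_n'(+-1) = 0 for n >= 1, so these combinations lie in X_N
ends = max(abs(leg.legval(s, leg.legder(phi(n))))
           for n in range(1, 9) for s in (-1.0, 1.0))
```

Up to the normalization d_n, these are the functions in (5.19) below; the vanishing end derivatives are exactly the condition defining X_N.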
Table 5.2. Condition numbers of A_c, A_p and A_s for b = 0.01, b = 1 and b = 100, and for various N.

Fig. 5.1. Convergence rate: discrete L²-errors vs. N for b = 1 (left) and b = 100 (right).

In fact, it is more advisable to use a modal basis and perform the GLLB method in the frequency space. With this approach, the resulting linear system is sparse and well-conditioned. As in [22], we define the basis functions as compact combinations of the Legendre polynomials:

    φ_0(x) = 1,  φ_n(x) := d_n (L_n(x) + e_n L_{n+2}(x)),  n ≥ 1,                     (5.19)

where

    d_n = ( (n+2)(n+3) / (2n(n+1)(2n+3)) )^{1/2},  e_n = − n(n+1) / ((n+2)(n+3)).

Since L_n′(±1) = (1/2)(±1)^{n+1} n(n+1), one can readily verify that φ_n′(±1) = 0, and

    X_N = span{φ_0, φ_1, ..., φ_{N−2}}.

Moreover, as shown in Appendix B, we have

    ∂_x φ_n(x) = λ_n (1 − x²) J_{n−1}^{2,2}(x),  n ≥ 1,                               (5.20)
which, together with (2.1), (2.8) and (4.3), leads to

    s_jk := (∂_x φ_k, ∂_x φ_j) = { 1,  1 ≤ j = k ≤ N−2,
                                   0,  otherwise.                                     (5.21)

Meanwhile, using (4.3), (5.19) and the orthogonality of the Legendre polynomials yields

    m_jk := (φ_k, φ_j)_N,

which equals (φ_k, φ_j) except when k = j = N−2 (where the GLLB quadrature is no longer exact), and which vanishes unless k = j or k = j ± 2. Hence, by setting

    u_N(x) = Σ_{n=0}^{N−2} û_n φ_n(x),  û = (û_0, û_1, ..., û_{N−2})^T,
    f̂_j = (f, φ_j)_N,  f̂ = (f̂_0, f̂_1, ..., f̂_{N−2})^T,

the system (5.15) under the basis (5.19) becomes

    A_s û := [ diag(0, 1, ..., 1) + b (m_jk)_{0≤j,k≤N−2} ] û = f̂.                    (5.22)

We see that A_s is pentadiagonal with three nonzero diagonals, which can be decoupled into two tridiagonal sub-matrices and inverted efficiently as in [22]. Moreover, the entries of the coefficient matrix can be evaluated exactly. We can also prove that the condition number of A_s does not depend on N. To show this, we define the discrete l²-inner product and norm: for any two vectors of length N−1,

    ⟨u, v⟩_{l²} := Σ_{j=0}^{N−2} u_j v_j,  ‖u‖_{l²} = ⟨u, u⟩_{l²}^{1/2}.

By using (5.21), (4.3), the definition of A_s, (4.4) and the Poincaré inequality (4.1) successively, we derive that

    ‖û‖²_{l²} = ‖∂_x u_N‖² + û_0² ≤ (∂_x u_N, ∂_x u_N) + b(u_N, u_N)_N + û_0² = ⟨A_s û, û⟩_{l²} + û_0²
              ≤ ‖∂_x u_N‖² + 9b‖u_N‖² + û_0² ≤ cb‖∂_x u_N‖² + cb ū² + û_0² ≤ (1 + cb)‖û‖²_{l²},   (5.23)

where in the last step we used the fact that ū = (1/2)∫_{−1}^{1} u_N(x)dx = û_0. Therefore, we claim that Cond(A_s) ≤ 1 + cb, which is also verified numerically in Table 5.2.

We compare in Table 5.2 the condition numbers of the matrices A_c, A_p and A_s resulting from the pure collocation method (PCOL, cf. (5.8)), the pseudospectral method with the nodal basis (PSND, cf. (5.18)) and the pseudospectral method with the modal basis (PSMD, cf. (5.22)), respectively. We see that for various b and N, the condition numbers of the system (5.22) are small and independent of N (cf. (5.23)), while those of the collocation scheme and of the pseudospectral method using a nodal basis grow like O(N⁴). To check the accuracy, we take b = 1 and u(x) = cos(21πx) as the exact solution. The discrete L²-errors against various N are plotted at the left of Fig.
5.1, which indicates an exponential convergence rate. We also see that the PSND and PSMD methods provide better numerical results than the PCOL method. To examine the effect of roundoff errors, we take b = 100 and u(x) = cos(28πx) as the exact solution. We observe from Fig. 5.1 (right) that the effect of roundoff errors is much more severe in the PCOL method. Indeed, like the Gauss–Lobatto pseudospectral method, both PSND and PSMD have higher accuracy when compared to PCOL. However, in multi-dimensional cases, PSMD, based on tensor products of the basis functions, is
more preferable, since it involves sparse and well-conditioned systems, which can be inverted very efficiently by using matrix decomposition techniques (cf. [22]).

5.3. Error estimates

In this section, we apply the approximation results established in the previous section to analyze the errors of the GLLB pseudospectral scheme (5.15).

Theorem 5.1. Let u_N and u be respectively the solutions of (5.15) and (5.1) with b(x) ≡ b > 0 and g_± = 0. If u ∈ X ∩ B^m(I) and f ∈ B^s(I) with integers m, s ≥ 2, then

    ‖u − u_N‖_μ ≤ c N^{μ−m} ‖∂_x^m u‖_{ω^{m−2,m−2}} + c N^{−s} ‖∂_x^s f‖_{ω^{s−2,s−2}},  μ = 0, 1.   (5.24)

Proof. For simplicity, let e_N = π_N^1 u − u_N and E_v(φ) = (v, φ) − (v, φ)_N. By (4.3), (5.1) and (5.15),

    (∂_x(u − u_N), ∂_x φ) + b((u, φ) − (u_N, φ)_N) = E_f(φ),  ∀φ ∈ X_N.

Thanks to (4.4), we can rewrite the above equation as

    (∂_x e_N, ∂_x φ) + b(e_N, φ)_N = b((π_N^1 u, φ)_N − (u, φ)) + E_f(φ),  ∀φ ∈ X_N.   (5.25)

Using (4.4), Theorems 4.1 and 4.2, and Corollary 4.2 leads to

    |(π_N^1 u, φ)_N − (u, φ)| ≤ |(π_N^1 u − I_N u, φ)_N| + |E_u(φ)|
                              ≤ c(‖π_N^1 u − I_N u‖ + N^{−m}‖∂_x^m u‖_{ω^{m−2,m−2}})‖φ‖
                              ≤ c N^{−m} ‖∂_x^m u‖_{ω^{m−2,m−2}} ‖φ‖,

and

    |E_f(φ)| ≤ c N^{−s} ‖∂_x^s f‖_{ω^{s−2,s−2}} ‖φ‖.

Hence, taking φ = e_N in (5.25) and using (4.4), we reach

    ‖e_N‖_1 ≤ c N^{−m} ‖∂_x^m u‖_{ω^{m−2,m−2}} + c N^{−s} ‖∂_x^s f‖_{ω^{s−2,s−2}}.

Finally, using Theorem 4.2 again leads to the desired result.

Remark 5.3. Theorem 4.2 of [4] presents an error estimate in the L²-norm of a modified LGL collocation method for (5.1) with b = 1, with a convergence order O(N^{2−m}). Obviously, the estimate (5.24) improves the existing result essentially, and seems optimal with the order O(N^{−m}).

6. Concluding remarks

In the foregoing discussions, we have restricted our attention to one-dimensional linear problems, but the ideas and techniques go beyond these equations. An interesting extension is the construction and analysis of a quadrature formula and the associated pseudospectral approximations for mixed boundary conditions, i.e., the problem

    −u″ + bu = f,  in (−1, 1);  α_± u(±1) + β_± u′(±1) = g_±,                         (6.1)

where α_± and β_± are constants such that (6.1) is well-posed. We will report on this topic in our future work.
In summary, we have derived in this paper the asymptotic properties of the nodes and weights of the GLLB quadrature formula, and obtained optimal estimates for the GLLB interpolation errors. We have also presented a detailed implementation of the associated GLLB pseudospectral method for second-order PDEs.

Acknowledgments

The work of the first author is partially supported by a Startup Grant of NTU, Singapore MOE Grant T207B2202, and Singapore NRF2007IDM-IDM002-010. The work of the second author is supported in part by the NSF of China, the Science and Technology Commission of Shanghai Municipality, the Shanghai Leading Academic Discipline Project No. S30405 and the Fund for E-institute of Shanghai Universities No. E03004.

Appendix A. Expressions of the Lagrangian basis

For notational simplicity, let q(x) := Q_N(x) be the quadrature polynomial given in (2.10), and define

    q_j(x) := Q_N(x) / ((x − x_j) Q_N′(x_j)) = q(x) / ((x − x_j) q′(x_j)) ∈ P_{N−2},  1 ≤ j ≤ N−1.   (A.1)

We see that {q_j} is the Lagrangian basis associated with the interior points {x_j} of the GLLB quadrature. Together with (5.6), this implies that the corresponding basis functions can be expressed as follows.

Boundary basis functions:

    h_0(x) = [q′(1)(x − 1) − q(1)] q(x) / (q′(1)q(−1) − 2q′(1)q′(−1) − q(1)q′(−1)),
    h_N(x) = [q′(−1)(x + 1) − q(−1)] q(x) / (q′(−1)q(1) + 2q′(−1)q′(1) − q(−1)q′(1)).   (A.2)

Interior basis functions:

    h_j(x) = (x² + a x + b) / (x_j² + a x_j + b) q_j(x),  1 ≤ j ≤ N−1,                (A.3)

where

    a = −2 [q_j(1)q_j′(−1) + q_j(−1)q_j′(1)] / Δ_j,
    b = [4q_j(1)q_j(−1) − 3q_j(1)q_j′(−1) + 3q_j(−1)q_j′(1) − 2q_j′(1)q_j′(−1)] / Δ_j,   (A.4)

with Δ_j = q_j(1)q_j′(−1) − q_j(−1)q_j′(1) + 2q_j′(1)q_j′(−1).

Appendix B. The derivation of (5.20)

We first recall the formula (see, e.g., [23]):

    (d/dx) J_n^{α,β}(x) = (1/2)(n + α + β + 1) J_{n−1}^{α+1,β+1}(x),  n ≥ 1.          (B.1)
Using (B.1) and the expressions of d_n and e_n gives

    ∂_x φ_n(x) = (1/2) d_n (n+1) [ J_{n−1}^{1,1}(x) − (n/(n+2)) J_{n+1}^{1,1}(x) ].

By (B.1) and a formula of [23],

    (1 − x²)(d/dx) J_n^{1,1}(x) = ((n+1)(n+3)/(2n+3)) [ J_{n−1}^{1,1}(x) − (n/(n+2)) J_{n+1}^{1,1}(x) ].

A combination of the above two facts leads to

    ∂_x φ_n(x) = (d_n (2n+3)/4) (1 − x²) J_{n−1}^{2,2}(x) = λ_n (1 − x²) J_{n−1}^{2,2}(x).

Indeed, the normalization factor d_n is chosen such that {∂_x φ_n} is orthonormal in L²(I).

References

[1] R.A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
[2] C. Bernardi, Y. Maday, Spectral methods, in: P.G. Ciarlet, J.L. Lions (Eds.), Handbook of Numerical Analysis, in: Techniques of Scientific Computing, vol. 5, Elsevier, Amsterdam, 1997.
[3] J.P. Boyd, Chebyshev and Fourier Spectral Methods, second ed., Dover Publications Inc., Mineola, NY, 2001.
[4] C. Canuto, Boundary conditions in Chebyshev and Legendre methods, SIAM J. Numer. Anal. 23 (1986).
[5] C. Canuto, M.Y. Hussaini, A. Quarteroni, T.A. Zang, Spectral Methods: Fundamentals in Single Domains, Springer, Berlin, 2006.
[6] P.G. Ciarlet, Finite Element Methods for Elliptic Problems, North-Holland, Amsterdam, 1978.
[7] M.O. Deville, E.H. Mund, Finite-element preconditioning for pseudospectral solutions of elliptic problems, SIAM J. Sci. Stat. Comput. 11 (1990).
[8] A. Ezzirani, A. Guessab, A fast algorithm for Gaussian type quadrature formulae with mixed boundary conditions and some lumped mass spectral approximations, Math. Comp. 68 (1999).
[9] B. Fornberg, A Practical Guide to Pseudospectral Methods, Cambridge University Press, 1996.
[10] D. Funaro, Polynomial Approximation of Differential Equations, Springer-Verlag, 1992.
[11] G.H. Golub, J.H. Welsch, Calculation of Gauss quadrature rules, Math. Comp. 23 (1969) 221–230.
[12] Ben-yu Guo, Difference Methods for Partial Differential Equations, Science Press, Beijing, 1988.
[13] Ben-yu Guo, Spectral Methods and Their Applications, World Scientific, Singapore, 1998.
[14] Ben-yu Guo, Jacobi approximations in certain Hilbert spaces and their applications to singular differential equations, J. Math. Anal. Appl. 243 (2000).
[15] Ben-yu Guo, Li-lian Wang, Jacobi interpolation approximations and their applications to singular differential equations, Adv. Comput. Math. 14 (2001).
[16] Ben-yu Guo, Li-lian Wang, Jacobi approximations in non-uniformly Jacobi-weighted Sobolev spaces, J. Approx. Theory 128 (2004) 1–41.
[17] J. Hesthaven, S. Gottlieb, D. Gottlieb, Spectral Methods for Time-Dependent Problems, Cambridge University Press, 2007.
[18] W.Z. Huang, D.M. Sloan, The pseudospectral method for third-order differential equations, SIAM J. Numer. Anal. 29 (1992).
[19] K. Jetter, A new class of Gaussian quadrature formulas based on Birkhoff type data, SIAM J. Numer. Anal.
[20] K. Jetter, Uniqueness of Gauss–Birkhoff quadrature formulas, SIAM J. Numer. Anal.
[21] W.J. Merryfield, B. Shizgal, Properties of collocation third-derivative operators, J. Comput. Phys.
[22] J. Shen, Efficient spectral-Galerkin method I. Direct solvers for second- and fourth-order equations using Legendre polynomials, SIAM J. Sci. Comput. 15 (1994) 1489–1505.
[23] G. Szegő, Orthogonal Polynomials, Amer. Math. Soc., Providence, RI, 1975.
[24] L.N. Trefethen, Spectral Methods in MATLAB, SIAM, Philadelphia, 2000.
[25] Y. Xu, Quasi-orthogonal polynomials, quadrature, and interpolation, J. Math. Anal. Appl.
More informationReconstruction of sparse Legendre and Gegenbauer expansions
Reconstruction of sparse Legendre and Gegenbauer expansions Daniel Potts Manfred Tasche We present a new deterministic algorithm for the reconstruction of sparse Legendre expansions from a small number
More informationCONVERGENCE ANALYSIS OF THE JACOBI SPECTRAL-COLLOCATION METHODS FOR VOLTERRA INTEGRAL EQUATIONS WITH A WEAKLY SINGULAR KERNEL
COVERGECE AALYSIS OF THE JACOBI SPECTRAL-COLLOCATIO METHODS FOR VOLTERRA ITEGRAL EQUATIOS WITH A WEAKLY SIGULAR KEREL YAPIG CHE AD TAO TAG Abstract. In this paper, a Jacobi-collocation spectral method
More informationBernstein-Szegö Inequalities in Reproducing Kernel Hilbert Spaces ABSTRACT 1. INTRODUCTION
Malaysian Journal of Mathematical Sciences 6(2): 25-36 (202) Bernstein-Szegö Inequalities in Reproducing Kernel Hilbert Spaces Noli N. Reyes and Rosalio G. Artes Institute of Mathematics, University of
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More informationPreface. 2 Linear Equations and Eigenvalue Problem 22
Contents Preface xv 1 Errors in Computation 1 1.1 Introduction 1 1.2 Floating Point Representation of Number 1 1.3 Binary Numbers 2 1.3.1 Binary number representation in computer 3 1.4 Significant Digits
More informationOrthogonal polynomials
Orthogonal polynomials Gérard MEURANT October, 2008 1 Definition 2 Moments 3 Existence 4 Three-term recurrences 5 Jacobi matrices 6 Christoffel-Darboux relation 7 Examples of orthogonal polynomials 8 Variable-signed
More informationChapter 5 A priori error estimates for nonconforming finite element approximations 5.1 Strang s first lemma
Chapter 5 A priori error estimates for nonconforming finite element approximations 51 Strang s first lemma We consider the variational equation (51 a(u, v = l(v, v V H 1 (Ω, and assume that the conditions
More informationLecture Note III: Least-Squares Method
Lecture Note III: Least-Squares Method Zhiqiang Cai October 4, 004 In this chapter, we shall present least-squares methods for second-order scalar partial differential equations, elastic equations of solids,
More informationChapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.
Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space
More informationEFFICIENT SPECTRAL COLLOCATION METHOD FOR SOLVING MULTI-TERM FRACTIONAL DIFFERENTIAL EQUATIONS BASED ON THE GENERALIZED LAGUERRE POLYNOMIALS
Journal of Fractional Calculus and Applications, Vol. 3. July 212, No.13, pp. 1-14. ISSN: 29-5858. http://www.fcaj.webs.com/ EFFICIENT SPECTRAL COLLOCATION METHOD FOR SOLVING MULTI-TERM FRACTIONAL DIFFERENTIAL
More informationA Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems
A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems Etereldes Gonçalves 1, Tarek P. Mathew 1, Markus Sarkis 1,2, and Christian E. Schaerer 1 1 Instituto de Matemática Pura
More informationAn Analysis of Five Numerical Methods for Approximating Certain Hypergeometric Functions in Domains within their Radii of Convergence
An Analysis of Five Numerical Methods for Approximating Certain Hypergeometric Functions in Domains within their Radii of Convergence John Pearson MSc Special Topic Abstract Numerical approximations of
More informationBivariate Lagrange interpolation at the Padua points: The generating curve approach
Journal of Approximation Theory 43 (6) 5 5 www.elsevier.com/locate/jat Bivariate Lagrange interpolation at the Padua points: The generating curve approach Len Bos a, Marco Caliari b, Stefano De Marchi
More informationPARTITION OF UNITY FOR THE STOKES PROBLEM ON NONMATCHING GRIDS
PARTITION OF UNITY FOR THE STOES PROBLEM ON NONMATCHING GRIDS CONSTANTIN BACUTA AND JINCHAO XU Abstract. We consider the Stokes Problem on a plane polygonal domain Ω R 2. We propose a finite element method
More informationA Gauss Lobatto quadrature method for solving optimal control problems
ANZIAM J. 47 (EMAC2005) pp.c101 C115, 2006 C101 A Gauss Lobatto quadrature method for solving optimal control problems P. Williams (Received 29 August 2005; revised 13 July 2006) Abstract This paper proposes
More informationApplied Numerical Mathematics
Applied Numerical Mathematics 131 (2018) 1 15 Contents lists available at ScienceDirect Applied Numerical Mathematics www.elsevier.com/locate/apnum The spectral-galerkin approximation of nonlinear eigenvalue
More informationSolutions Preliminary Examination in Numerical Analysis January, 2017
Solutions Preliminary Examination in Numerical Analysis January, 07 Root Finding The roots are -,0, a) First consider x 0 > Let x n+ = + ε and x n = + δ with δ > 0 The iteration gives 0 < ε δ < 3, which
More informationResearch Article An Extension of the Legendre-Galerkin Method for Solving Sixth-Order Differential Equations with Variable Polynomial Coefficients
Mathematical Problems in Engineering Volume 2012, Article ID 896575, 13 pages doi:10.1155/2012/896575 Research Article An Extension of the Legendre-Galerkin Method for Solving Sixth-Order Differential
More informationDepartment of Applied Mathematics and Theoretical Physics. AMA 204 Numerical analysis. Exam Winter 2004
Department of Applied Mathematics and Theoretical Physics AMA 204 Numerical analysis Exam Winter 2004 The best six answers will be credited All questions carry equal marks Answer all parts of each question
More informationNumerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018
Numerical Analysis Preliminary Exam 1 am to 1 pm, August 2, 218 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start
More informationHankel Operators plus Orthogonal Polynomials. Yield Combinatorial Identities
Hanel Operators plus Orthogonal Polynomials Yield Combinatorial Identities E. A. Herman, Grinnell College Abstract: A Hanel operator H [h i+j ] can be factored as H MM, where M maps a space of L functions
More informationA New Collocation Scheme Using Non-polynomial Basis Functions
J Sci Comput (2017) 70:793 818 DOI 10.1007/s10915-016-0269-7 A ew Collocation Scheme Using on-polynomial Basis Functions Chao Zhang 1 Wenjie Liu 2 Li-Lian Wang 3 Received: 27 August 2015 / Revised: 20
More informationSpectral Collocation Method for Parabolic Partial Differential Equations with Neumann Boundary Conditions
Applied Mathematical Sciences, Vol. 1, 2007, no. 5, 211-218 Spectral Collocation Method for Parabolic Partial Differential Equations with Neumann Boundary Conditions M. Javidi a and A. Golbabai b a Department
More informationOn Gauss-type quadrature formulas with prescribed nodes anywhere on the real line
On Gauss-type quadrature formulas with prescribed nodes anywhere on the real line Adhemar Bultheel, Ruymán Cruz-Barroso,, Marc Van Barel Department of Computer Science, K.U.Leuven, Celestijnenlaan 2 A,
More informationOn the Chebyshev quadrature rule of finite part integrals
Journal of Computational and Applied Mathematics 18 25) 147 159 www.elsevier.com/locate/cam On the Chebyshev quadrature rule of finite part integrals Philsu Kim a,,1, Soyoung Ahn b, U. Jin Choi b a Major
More informationON MATRIX VALUED SQUARE INTEGRABLE POSITIVE DEFINITE FUNCTIONS
1 2 3 ON MATRIX VALUED SQUARE INTERABLE POSITIVE DEFINITE FUNCTIONS HONYU HE Abstract. In this paper, we study matrix valued positive definite functions on a unimodular group. We generalize two important
More informationIterative Methods for Linear Systems
Iterative Methods for Linear Systems 1. Introduction: Direct solvers versus iterative solvers In many applications we have to solve a linear system Ax = b with A R n n and b R n given. If n is large the
More informationMemoirs on Differential Equations and Mathematical Physics
Memoirs on Differential Equations and Mathematical Physics Volume 34, 005, 1 76 A. Dzhishariani APPROXIMATE SOLUTION OF ONE CLASS OF SINGULAR INTEGRAL EQUATIONS BY MEANS OF PROJECTIVE AND PROJECTIVE-ITERATIVE
More informationFinite Difference Methods for Boundary Value Problems
Finite Difference Methods for Boundary Value Problems October 2, 2013 () Finite Differences October 2, 2013 1 / 52 Goals Learn steps to approximate BVPs using the Finite Difference Method Start with two-point
More informationWavelets and regularization of the Cauchy problem for the Laplace equation
J. Math. Anal. Appl. 338 008440 1447 www.elsevier.com/locate/jmaa Wavelets and regularization of the Cauchy problem for the Laplace equation Chun-Yu Qiu, Chu-Li Fu School of Mathematics and Statistics,
More informationEvaluation of Chebyshev pseudospectral methods for third order differential equations
Numerical Algorithms 16 (1997) 255 281 255 Evaluation of Chebyshev pseudospectral methods for third order differential equations Rosemary Renaut a and Yi Su b a Department of Mathematics, Arizona State
More informationMaximizing the numerical radii of matrices by permuting their entries
Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and
More informationExistence of minimizers for the pure displacement problem in nonlinear elasticity
Existence of minimizers for the pure displacement problem in nonlinear elasticity Cristinel Mardare Université Pierre et Marie Curie - Paris 6, Laboratoire Jacques-Louis Lions, Paris, F-75005 France Abstract
More informationPIECEWISE LINEAR FINITE ELEMENT METHODS ARE NOT LOCALIZED
PIECEWISE LINEAR FINITE ELEMENT METHODS ARE NOT LOCALIZED ALAN DEMLOW Abstract. Recent results of Schatz show that standard Galerkin finite element methods employing piecewise polynomial elements of degree
More informationCOMPUTATION OF GAUSS-KRONROD QUADRATURE RULES
MATHEMATICS OF COMPUTATION Volume 69, Number 231, Pages 1035 1052 S 0025-5718(00)01174-1 Article electronically published on February 17, 2000 COMPUTATION OF GAUSS-KRONROD QUADRATURE RULES D.CALVETTI,G.H.GOLUB,W.B.GRAGG,ANDL.REICHEL
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 9: Conditionally Positive Definite Radial Functions Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationIterative methods for positive definite linear systems with a complex shift
Iterative methods for positive definite linear systems with a complex shift William McLean, University of New South Wales Vidar Thomée, Chalmers University November 4, 2011 Outline 1. Numerical solution
More informationHOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)
HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe
More informationA NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES
Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University
More informationNUMERICAL SOLUTION FOR CLASS OF ONE DIMENSIONAL PARABOLIC PARTIAL INTEGRO DIFFERENTIAL EQUATIONS VIA LEGENDRE SPECTRAL COLLOCATION METHOD
Journal of Fractional Calculus and Applications, Vol. 5(3S) No., pp. 1-11. (6th. Symp. Frac. Calc. Appl. August, 01). ISSN: 090-5858. http://fcag-egypt.com/journals/jfca/ NUMERICAL SOLUTION FOR CLASS OF
More information