Reconstruction of sparse Legendre and Gegenbauer expansions


BIT Numer Math (2016) 56, DOI 10.1007/s…

Reconstruction of sparse Legendre and Gegenbauer expansions

Daniel Potts · Manfred Tasche

Received: 6 March 2014 / Accepted: 8 December 2015 / Published online: 9 December 2015
© Springer Science+Business Media Dordrecht 2015

Abstract We present a new deterministic approximate algorithm for the reconstruction of sparse Legendre expansions from a small number of given samples. Using asymptotic properties of Legendre polynomials, this reconstruction is based on Prony-like methods. The method proposed is robust with respect to noisy sampled data. Furthermore we show that the suggested method can be extended to the reconstruction of sparse Gegenbauer expansions of low positive order.

Keywords Legendre polynomials · Sparse Legendre expansions · Gegenbauer polynomials · Ultraspherical polynomials · Sparse Gegenbauer expansions · Sparse recovery · Sparse Legendre interpolation · Sparse Gegenbauer interpolation · Asymptotic formula · Prony-like method

Mathematics Subject Classification 65D05 · 33C45 · 41A45 · 65F15

Communicated by Michael S. Floater.

Daniel Potts, potts@mathematik.tu-chemnitz.de, Department of Mathematics, Technische Universität Chemnitz, 09107 Chemnitz, Germany
Manfred Tasche, manfred.tasche@uni-rostock.de, Institute of Mathematics, University of Rostock, 18051 Rostock, Germany

1 Introduction

In this paper, we present a new deterministic approximate approach to the reconstruction of sparse Legendre and Gegenbauer expansions, if relatively few samples on a special grid are given.

Recently the reconstruction of sparse trigonometric polynomials has attracted much attention. There exist recovery methods based on random sampling related to compressed sensing (see e.g. [5,6,,8] and the references therein) and methods based on deterministic sampling related to Prony-like methods (see e.g. [6] and the references therein). Both approaches have already been generalized to other polynomial systems. Rauhut and Ward [9] presented a recovery method for a polynomial of degree at most N given as a Legendre expansion with M nonzero terms, where O(M log^4 N) random samples are taken independently according to the Chebyshev probability measure on [−1, 1]. Some recovery algorithms in compressive sensing are based on weighted ℓ1-minimization (see [9,0] and the references therein). Exact recovery of sparse functions can be ensured only with a certain probability. Peter et al. [5] have presented a Prony-like method for the reconstruction of sparse Legendre expansions, where only 2M + 1 function resp. derivative values at one point are given. We mention that sparse Legendre expansions are important for the analysis of spherical functions which are represented by expansions of spherical harmonics. In [], it was observed that the equilibrium state admits a sparse representation in spherical harmonics (see [, Section 4.3]), i.e., the equilibrium state can be represented by a very short Legendre expansion (see [, Figure 4.5]).

Recently, the authors have described a unified approach to Prony-like methods in [6] and applied it to the recovery of sparse expansions of Chebyshev polynomials of first and second kind in [7]. Similar sparse interpolation problems for special polynomial systems have formerly been explored in [3,4,8,] and were also solved by Prony-like methods. A very general approach for the reconstruction of sparse expansions of eigenfunctions of suitable linear operators was suggested by Peter and Plonka [4].
There, new reconstruction formulas for M-sparse expansions of orthogonal polynomials using the Sturm–Liouville operator were presented. However, one has to use both sampling points and derivative values. In this paper we present a new method for the reconstruction of sparse Legendre expansions which is based on a local approximation of Legendre polynomials by cosine functions. Therefore this algorithm is closely related to [7]. Note that fast algorithms for the computation of Fourier–Legendre coefficients in a Legendre expansion (see [,7]) are based on similar asymptotic formulas for the Legendre polynomials. The key idea is that the conveniently scaled Legendre polynomials behave similarly to cosine functions near zero. Therefore we use a sampling grid located near zero. Finally we generalize this method to sparse Gegenbauer expansions.

The outline of this paper is as follows. In Sect. 2, we collect some useful properties of Legendre polynomials. In Sect. 3, we present the new reconstruction method for sparse Legendre expansions. We extend our recovery method in Sect. 4 to the case of sparse Gegenbauer expansions of low positive order. Finally we show some results of numerical experiments in Sect. 5. In Example 5.5, it is shown that the method proposed is robust with respect to noisy sampled data.

2 Properties of Legendre polynomials

As is well known, the Legendre polynomials P_n are special Gegenbauer polynomials C_n^(α) of order α = 1/2, so that the properties of P_n follow from corresponding properties of the Gegenbauer polynomials (see e.g. []). For each n ∈ ℕ_0, the Legendre polynomials P_n can be recursively defined by

  P_{n+2}(x) := ((2n+3)/(n+2)) x P_{n+1}(x) − ((n+1)/(n+2)) P_n(x), x ∈ ℝ,

with P_0(x) := 1 and P_1(x) := x (see []). The Legendre polynomial P_n can be represented in the explicit form

  P_n(x) = 2^{−n} Σ_{j=0}^{⌊n/2⌋} (−1)^j ((2n−2j)! / (j! (n−j)! (n−2j)!)) x^{n−2j}

(see []). Hence it follows that for m ∈ ℕ_0

  P_{2m}(0) = (−1)^m (2m)! / (2^{2m} (m!)^2), P_{2m+1}(0) = 0, (2.1)
  P'_{2m+1}(0) = (−1)^m (2m+1)! / (2^{2m} (m!)^2), P'_{2m}(0) = 0. (2.2)

Further, Legendre polynomials of even degree are even and Legendre polynomials of odd degree are odd, i.e.

  P_n(−x) = (−1)^n P_n(x). (2.3)

Moreover, the Legendre polynomial P_n satisfies the following homogeneous linear differential equation of second order,

  (1 − x^2) P_n''(x) − 2x P_n'(x) + n(n+1) P_n(x) = 0 (2.4)

(see []). In the Hilbert space L^2_{1/2}[−1, 1] with the constant weight 1/2, the normalized Legendre polynomials

  L_n(x) := sqrt(2n+1) P_n(x), n ∈ ℕ_0, (2.5)

form an orthonormal basis, since

  (1/2) ∫_{−1}^{1} L_n(x) L_m(x) dx = δ_{n−m}, m, n ∈ ℕ_0,

where δ_n denotes the Kronecker symbol (see []). Note that the uniform norm

  max_{x ∈ [−1,1]} |L_n(x)| = L_n(1) = (2n+1)^{1/2}

is increasing with respect to n (see []).
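The recursion and the values (2.1) at zero are easy to check numerically. The following small sketch (Python rather than the paper's MATLAB; the helper name `legendre_P` is ours) evaluates P_n by the three-term recurrence above:

```python
import numpy as np
from math import factorial

def legendre_P(n, x):
    """Evaluate P_n(x) via the three-term recurrence
    P_{n+2}(x) = ((2n+3) x P_{n+1}(x) - (n+1) P_n(x)) / (n+2),
    with P_0(x) = 1 and P_1(x) = x."""
    x = np.asarray(x, dtype=float)
    p_prev, p_cur = np.ones_like(x), x
    if n == 0:
        return p_prev
    for k in range(n - 1):          # produces P_2, ..., P_n
        p_prev, p_cur = p_cur, ((2*k + 3) * x * p_cur - (k + 1) * p_prev) / (k + 2)
    return p_cur

# values at zero, cf. (2.1): P_{2m}(0) = (-1)^m (2m)! / (2^{2m} (m!)^2)
vals = [float(legendre_P(2 * m, 0.0)) for m in range(6)]
refs = [(-1) ** m * factorial(2 * m) / (4 ** m * factorial(m) ** 2) for m in range(6)]
```

The symmetry relation (2.3) can be probed the same way, e.g. P_5(−x) = −P_5(x).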

Let M be a positive integer. A polynomial

  H(x) := Σ_{k=0}^{d} b_k L_k(x) (2.6)

of degree d with d ≥ M is called M-sparse in the Legendre basis, or simply a sparse Legendre expansion, if M coefficients b_k are nonzero and the other d − M + 1 coefficients vanish. Then such an M-sparse polynomial H can be represented in the form

  H(x) = Σ_{j=1}^{M_0} c_{0,j} L_{n_{0,j}}(x) + Σ_{k=1}^{M_1} c_{1,k} L_{n_{1,k}}(x) (2.7)

with c_{0,j} := b_{n_{0,j}} ≠ 0 for all even n_{0,j} with 0 ≤ n_{0,1} < n_{0,2} < ⋯ < n_{0,M_0} and with c_{1,k} := b_{n_{1,k}} ≠ 0 for all odd n_{1,k} with 1 ≤ n_{1,1} < n_{1,2} < ⋯ < n_{1,M_1}. The positive integer M = M_0 + M_1 is called the Legendre sparsity of the polynomial H. The numbers M_0, M_1 ∈ ℕ_0 are the even and odd Legendre sparsities, respectively.

Remark 2.1 The sparsity of a polynomial depends essentially on the chosen polynomial basis. If

  T_n(x) := cos(n arccos x), x ∈ [−1, 1],

denotes the nth Chebyshev polynomial of first kind, then the nth Legendre polynomial P_n can be represented in the Chebyshev basis by

  P_n(x) = 4^{−n} Σ_{j=0}^{⌊n/2⌋} (2 − δ_{n−2j}) ((2j)! (2n−2j)! / ((j!)^2 ((n−j)!)^2)) T_{n−2j}(x)

(see []). Thus a sparse polynomial in the Legendre basis is in general not a sparse polynomial in the Chebyshev basis. In other words, one has to solve the reconstruction problem of a sparse Legendre expansion without change of the Legendre basis.

As in [9], we transform the Legendre polynomial system {L_n; n ∈ ℕ_0} into a uniformly bounded orthonormal system. We introduce the functions Q_n: [−1, 1] → ℝ for each n ∈ ℕ_0 by

  Q_n(x) := sqrt(π/2) (1 − x^2)^{1/4} L_n(x) = sqrt((2n+1)π/2) (1 − x^2)^{1/4} P_n(x). (2.8)

Note that these functions Q_n have the same symmetry properties (2.3) as the Legendre polynomials, namely

  Q_n(−x) = (−1)^n Q_n(x), x ∈ [−1, 1]. (2.9)

Further, the functions Q_n are orthonormal in the Hilbert space L^2_w[−1, 1] with the Chebyshev weight w(x) := 1/(π sqrt(1 − x^2)), since for all m, n ∈ ℕ_0

  ∫_{−1}^{1} Q_n(x) Q_m(x) w(x) dx = (1/2) ∫_{−1}^{1} L_n(x) L_m(x) dx = δ_{n−m}.

Note that ∫_{−1}^{1} w(x) dx = 1. In the following, we use the standard substitution x = cos θ, θ ∈ [0, π], and obtain

  Q_n(cos θ) = sqrt((2n+1)π/2) (sin θ)^{1/2} P_n(cos θ), θ ∈ [0, π].

Lemma 2.1 For all n ∈ ℕ_0, the functions Q_n(cos θ) are uniformly bounded on the interval [0, π], i.e.

  |Q_n(cos θ)| < sqrt(2), θ ∈ [0, π]. (2.10)

Since Lemma 2.1 is a special case of Lemma 4.1 for α = 1/2, we abstain here from a separate proof. Now we describe the asymptotic properties of the Legendre polynomials.

Theorem 2.1 For each n ∈ ℕ_0, the function Q_n(cos θ) can be represented by the asymptotic formula

  Q_n(cos θ) = λ_n cos((n + 1/2)θ − π/4) + R_n(θ), θ ∈ [0, π], (2.11)

with the scaling factor

  λ_n := sqrt((4m+1)π/2) (2m)! / (2^{2m} (m!)^2) for n = 2m,
  λ_n := (2/(4m+3)) sqrt((4m+3)π/2) (2m+1)! / (2^{2m} (m!)^2) for n = 2m+1,

and the error term

  R_n(θ) := −(1/(4n+2)) ∫_{π/2}^{θ} sin((n + 1/2)(θ − τ)) (sin τ)^{−2} Q_n(cos τ) dτ, θ ∈ (0, π). (2.12)

The error term R_n(θ) fulfills the conditions R_n(π/2) = R_n'(π/2) = 0 and has the symmetry property

  R_n(π − θ) = (−1)^n R_n(θ). (2.13)

Further, the error term can be estimated by

  |R_n(θ)| ≤ (sqrt(2)/(2(2n+1))) |cot θ|. (2.14)
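Theorem 2.1 can be validated numerically. The following sketch (with arbitrary test degrees; `lam` is our hypothetical helper for the scaling factor λ_n) checks on a grid that the approximation error stays below the bound (2.14):

```python
import numpy as np
from math import factorial, pi, sqrt

def legendre_P(n, x):
    x = np.asarray(x, dtype=float)
    p_prev, p_cur = np.ones_like(x), x
    if n == 0:
        return p_prev
    for k in range(n - 1):
        p_prev, p_cur = p_cur, ((2*k + 3) * x * p_cur - (k + 1) * p_prev) / (k + 2)
    return p_cur

def Q(n, x):
    return np.sqrt((2*n + 1) * np.pi / 2) * (1 - x**2) ** 0.25 * legendre_P(n, x)

def lam(n):
    """Scaling factor lambda_n of Theorem 2.1."""
    m = n // 2
    if n % 2 == 0:
        return sqrt((4*m + 1) * pi / 2) * factorial(2*m) / (4**m * factorial(m)**2)
    return (2 / (4*m + 3)) * sqrt((4*m + 3) * pi / 2) * factorial(2*m + 1) / (4**m * factorial(m)**2)

theta = np.linspace(0.3, np.pi - 0.3, 400)
ok = True
for n in (0, 3, 6, 11):
    err = np.abs(Q(n, np.cos(theta)) - lam(n) * np.cos((n + 0.5) * theta - np.pi / 4))
    bound = sqrt(2) / (2 * (2*n + 1)) * np.abs(1.0 / np.tan(theta))   # (2.14)
    ok = ok and bool(np.all(err <= bound + 1e-12))
```

Note how the bound degenerates near θ = 0 and θ = π but is very tight near θ = π/2, which is exactly why the sampling grid of Sect. 3 is placed there.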

Since Theorem 2.1 is a special case of Theorem 4.1 for α = 1/2, we leave out a separate proof.

Remark 2.2 From Theorem 2.1 there follows the asymptotic formula of Laplace (see []): for θ ∈ (0, π),

  Q_n(cos θ) = sqrt(2) [cos((n + 1/2)θ − π/4) + O(n^{−1})], n → ∞. (2.15)

The error bound holds uniformly in [ε, π − ε] with ε ∈ (0, π/2).

Remark 2.3 For arbitrary m ∈ ℕ_0, the scaling factors λ_n in (2.11) can be expressed in the following form:

  λ_{2m} = sqrt((4m+1)π/2) α_m, λ_{2m+1} = sqrt(2π/(4m+3)) (2m+2) α_{m+1},

with α_0 := 1 and α_m := 4^{−m} binom(2m, m) for m ∈ ℕ.

In Fig. 1, we plot the expression tan(θ) R_n(θ) for some polynomial degrees n. We have proved the estimate (2.14) in Theorem 2.1. In Table 1, one can see that the maximum value of tan(θ) |R_n(θ)| on the interval [0, π] is much smaller than the constant sqrt(2)/(2(2n+1)). We observe that the upper bound (2.14) of |R_n(θ)| is very accurate in a small neighborhood of θ = π/2. Substituting t = θ − π/2 ∈ [−π/2, π/2], we obtain by (2.9) and (2.11) that

Fig. 1 The expression tan(θ) R_n(θ) for some polynomial degrees n

Table 1 Maximum values of tan(θ) |R_n(θ)| for some polynomial degrees n

  Q_n(sin t) = (−1)^n λ_n cos((n + 1/2)t + nπ/2) + (−1)^n R_n(t + π/2)
  = (−1)^n λ_n cos(nπ/2) cos((n + 1/2)t) − (−1)^n λ_n sin(nπ/2) sin((n + 1/2)t) + (−1)^n R_n(t + π/2). (2.16)

3 Prony-like method

In a first step we determine the even and odd indices n_{0,j}, n_{1,k} in (2.7), similarly to [7]. We use (2.8) and consider the function

  sqrt(π/2) (1 − x^2)^{1/4} H(x) = Σ_{j=1}^{M_0} c_{0,j} Q_{n_{0,j}}(x) + Σ_{k=1}^{M_1} c_{1,k} Q_{n_{1,k}}(x) (3.1)

on [−1, 1]. In the following we use the asymptotic formulas for Q_{n_{0,j}} and Q_{n_{1,k}} from Theorem 2.1. We substitute x = sin t for t ∈ [−π/2, π/2]. Now we have to determine the indices n_{0,j} and n_{1,k} from sampled values of the function

  sqrt(π/2) (cos t)^{1/2} H(sin t), t ∈ [−π/2, π/2]. (3.2)

We introduce the function

  F(t) := Σ_{j=1}^{M_0} d_{0,j} cos((n_{0,j} + 1/2)t) + Σ_{k=1}^{M_1} d_{1,k} sin((n_{1,k} + 1/2)t), t ∈ [−π/2, π/2], (3.3)

with the coefficients

  d_{0,j} := (−1)^{n_{0,j}/2} c_{0,j} λ_{n_{0,j}}, d_{1,k} := (−1)^{(n_{1,k}−1)/2} c_{1,k} λ_{n_{1,k}}.

By (2.9) and (2.16), the new function (3.3) approximates the sampled function (3.2) in a small neighborhood of t = 0. Then we form

  (F(t) + F(−t))/2 = Σ_{j=1}^{M_0} d_{0,j} cos((n_{0,j} + 1/2)t) (3.4)

and

  (F(t) − F(−t))/2 = Σ_{k=1}^{M_1} d_{1,k} sin((n_{1,k} + 1/2)t). (3.5)

Now we proceed similarly to [7], but we use only sampling points near 0, due to the small values of the error terms R_n(t + π/2) in a small neighborhood of t = 0 (see (2.14)).
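How well (3.3) approximates (3.2) near t = 0 can be checked directly: each term of (3.2) deviates from the corresponding term of (3.3) by exactly |c_j| |R_{n_j}(t + π/2)|, which is bounded via (2.14). A sketch with hypothetical degrees and coefficients:

```python
import numpy as np
from math import factorial, pi, sqrt

def legendre_P(n, x):
    x = np.asarray(x, dtype=float)
    p_prev, p_cur = np.ones_like(x), x
    if n == 0:
        return p_prev
    for k in range(n - 1):
        p_prev, p_cur = p_cur, ((2*k + 3) * x * p_cur - (k + 1) * p_prev) / (k + 2)
    return p_cur

def Q(n, x):
    return np.sqrt((2*n + 1) * np.pi / 2) * (1 - x**2) ** 0.25 * legendre_P(n, x)

def lam(n):
    m = n // 2
    if n % 2 == 0:
        return sqrt((4*m + 1) * pi / 2) * factorial(2*m) / (4**m * factorial(m)**2)
    return (2 / (4*m + 3)) * sqrt((4*m + 3) * pi / 2) * factorial(2*m + 1) / (4**m * factorial(m)**2)

# assumed sparse Legendre expansion: even degrees 8, 30 / odd degrees 15, 45
degs = [8, 15, 30, 45]
coef = [1.0, -0.5, 0.8, 0.3]
t = np.linspace(-0.15, 0.15, 101)

# sampled function (3.2): sqrt(pi/2) sqrt(cos t) H(sin t) = sum_j c_j Q_{n_j}(sin t)
h = sum(c * Q(n, np.sin(t)) for c, n in zip(coef, degs))

# approximating function (3.3) with the signed coefficients d_j from above
F = np.zeros_like(t)
for c, n in zip(coef, degs):
    if n % 2 == 0:
        F += (-1) ** (n // 2) * c * lam(n) * np.cos((n + 0.5) * t)
    else:
        F += (-1) ** ((n - 1) // 2) * c * lam(n) * np.sin((n + 0.5) * t)

# termwise bound via (2.14): |cot(t + pi/2)| = |tan t|
bound = sum(abs(c) * sqrt(2) / (2 * (2*n + 1)) for c, n in zip(coef, degs)) * np.abs(np.tan(t))
approx_ok = bool(np.all(np.abs(h - F) <= bound + 1e-12))
```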

Let N ∈ ℕ be sufficiently large such that N > 2M and N is an upper bound of the degree of the polynomial (2.6). For u_N := sin(π/(2N)) we form the nonequidistant sine grid {u_{N,k} := sin(kπ/(2N)); k = 1 − 2M, …, 2M − 1} in the interval [−1, 1]. We consider the following problem of sparse Legendre interpolation: For given sampled data

  h_k := sqrt(π/2) (cos(kπ/(2N)))^{1/2} H(sin(kπ/(2N))), k = 1 − 2M, …, 2M − 1,

determine all parameters n_{0,j} (j = 1, …, M_0) of the sparse cosine sum (3.4), determine all parameters n_{1,k} (k = 1, …, M_1) of the sparse sine sum (3.5), and finally determine all coefficients c_{0,j} (j = 1, …, M_0) and c_{1,k} (k = 1, …, M_1) of the sparse Legendre expansion (2.7).

3.1 Sparse even Legendre interpolation

For a moment, we assume that the even Legendre sparsity M_0 of the polynomial (2.7) is known. Then we see that the above interpolation problem is closely related to the interpolation problem of the sparse, even trigonometric polynomial with samples

  f_k := Σ_{j=1}^{M_0} d_{0,j} cos((2n_{0,j} + 1)kπ/(2N)), k = 1 − 2M_0, …, 2M_0 − 1, (3.6)

where the sampled values f_k are approximately given by (h_k + h_{−k})/2. Note that f_{−k} = f_k (k = 0, …, 2M_0 − 1). We introduce the Prony polynomial Π_0 of degree M_0 with the leading coefficient 2^{M_0−1}, whose roots are cos((2n_{0,j} + 1)π/(2N)) (j = 1, …, M_0), i.e.

  Π_0(x) = 2^{M_0−1} Π_{j=1}^{M_0} (x − cos((2n_{0,j} + 1)π/(2N))).

Then the Prony polynomial Π_0 can be represented in the Chebyshev basis by

  Π_0(x) = Σ_{l=0}^{M_0} p_{0,l} T_l(x), p_{0,M_0} := 1. (3.7)

The coefficients p_{0,l} of the Prony polynomial (3.7) can be characterized as follows:

Lemma 3.1 (See [7, Lemma 3.]) For all k = 0, …, 2M_0 − 1, the sampled data f_k and the coefficients p_{0,l} of the Prony polynomial (3.7) satisfy the equations

  Σ_{l=0}^{M_0−1} (f_{k+l} + f_{|l−k|}) p_{0,l} = −(f_{k+M_0} + f_{|M_0−k|}).
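Lemma 3.1 can be verified numerically with assumed degrees and coefficients. The sketch below builds the Chebyshev coefficients of a polynomial with the prescribed roots via `chebfromroots` (its leading Chebyshev coefficient is 2^{1−M_0} rather than the normalization p_{0,M_0} = 1, which does not affect the annihilation identity):

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

N = 50
deg = [4, 10, 16]                       # assumed even degrees n_{0,j}
d = np.array([1.0, -2.0, 0.5])          # assumed coefficients d_{0,j}
omega = (2 * np.array(deg) + 1) * np.pi / (2 * N)

f = lambda k: float(np.sum(d * np.cos(omega * k)))   # exact data (3.6)

# a Chebyshev-basis Prony polynomial with roots cos((2n+1)pi/(2N))
p = Cheb.chebfromroots(np.cos(omega))

# sum_{l=0}^{M_0} p_l (f_{k+l} + f_{|k-l|}) should vanish for all k
res = max(abs(sum(p[l] * (f(k + l) + f(abs(k - l))) for l in range(len(p))))
          for k in range(2 * len(deg)))
```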

Using Lemma 3.1, one obtains immediately a Prony method for sparse even Legendre interpolation in the case of known even Legendre sparsity. This algorithm is similar to [7, Algorithm .7] and omitted here.

In practice, the even/odd Legendre sparsities M_0, M_1 of the polynomial (2.7) of degree at most N are unknown. Then we can apply the same technique as in [7, Section 3]. We assume that an upper bound L ≤ N of max{M_0, M_1} is known, where N ∈ ℕ is sufficiently large with max{M_0, M_1} ≤ L ≤ N. In order to improve the numerical stability, we allow to choose more sampling points. Therefore we introduce an additional parameter K with L ≤ K ≤ N such that we use 2(K + L) − 1 sampling points of (2.7); more precisely, we assume that the sampled data f_k (k = 0, …, L + K − 1) from (3.6) are given. With the L + K sampled data f_k ∈ ℝ (k = 0, …, L + K − 1), we form the rectangular Toeplitz-plus-Hankel matrix

  H^{(0)}_{K,L+1} := (f_{k+l} + f_{|l−k|})_{k=0,…,K−1; l=0,…,L}. (3.8)

Note that H^{(0)}_{K,L+1} is rank deficient with rank M_0 (see [7, Lemma 3.]).

3.2 Sparse odd Legendre interpolation

First we assume that the odd Legendre sparsity M_1 of the polynomial (2.7) is known. Then we see that the above interpolation problem is closely related to the interpolation problem of the sparse, odd trigonometric polynomial with samples

  g_k := Σ_{j=1}^{M_1} d_{1,j} sin((2n_{1,j} + 1)kπ/(2N)), k = 1 − 2M_1, …, 2M_1 − 1, (3.9)

where the sampled values g_k are approximately given by (h_k − h_{−k})/2. Note that g_{−k} = −g_k (k = 0, …, 2M_1 − 1). We introduce the Prony polynomial Π_1 of degree M_1 with the leading coefficient 2^{M_1−1}, whose roots are cos((2n_{1,j} + 1)π/(2N)) (j = 1, …, M_1), i.e.

  Π_1(x) = 2^{M_1−1} Π_{j=1}^{M_1} (x − cos((2n_{1,j} + 1)π/(2N))).

Then the Prony polynomial Π_1 can be represented in the Chebyshev basis by

  Π_1(x) = Σ_{l=0}^{M_1} p_{1,l} T_l(x), p_{1,M_1} := 1. (3.10)

The coefficients p_{1,l} of the Prony polynomial (3.10) can be characterized as follows:

Lemma 3.2 For all k = 0, …, 2M_1 − 1, the sampled data g_k and the coefficients p_{1,l} of the Prony polynomial (3.10) satisfy the equations

  Σ_{l=0}^{M_1−1} (g_{k+l} + g_{k−l}) p_{1,l} = −(g_{k+M_1} + g_{k−M_1}). (3.11)

Proof Using sin(α + β) + sin(α − β) = 2 sin α cos β, we obtain by (3.9) that

  g_{k+l} + g_{k−l} = Σ_{j=1}^{M_1} d_{1,j} [sin((2n_{1,j}+1)(k+l)π/(2N)) + sin((2n_{1,j}+1)(k−l)π/(2N))]
  = 2 Σ_{j=1}^{M_1} d_{1,j} sin((2n_{1,j}+1)kπ/(2N)) cos((2n_{1,j}+1)lπ/(2N)).

Thus we conclude that

  Σ_{l=0}^{M_1} (g_{k+l} + g_{k−l}) p_{1,l} = 2 Σ_{j=1}^{M_1} d_{1,j} sin((2n_{1,j}+1)kπ/(2N)) Σ_{l=0}^{M_1} p_{1,l} cos((2n_{1,j}+1)lπ/(2N))
  = 2 Σ_{j=1}^{M_1} d_{1,j} sin((2n_{1,j}+1)kπ/(2N)) Π_1(cos((2n_{1,j}+1)π/(2N))) = 0.

By p_{1,M_1} = 1, this implies the assertion (3.11). □

Using Lemma 3.2, one can formulate a Prony method for sparse odd Legendre interpolation in the case of known odd Legendre sparsity. This algorithm is similar to [7, Algorithm .7] and omitted here.

In general, the even/odd Legendre sparsities M_0 and M_1 of the polynomial (2.7) of degree at most N are unknown. Similarly to Sect. 3.1, let L ≤ N be a convenient upper bound of max{M_0, M_1}, where N ∈ ℕ is sufficiently large with max{M_0, M_1} ≤ L ≤ N. In order to improve the numerical stability, we allow to choose more sampling points. Therefore we introduce an additional parameter K with L ≤ K ≤ N such that we use 2(K + L) − 1 sampling points of (2.7); more precisely, we assume that the sampled data g_k (k = 0, …, L + K − 1) from (3.9) are given. With the L + K sampled data g_k ∈ ℝ (k = 0, …, L + K − 1) we form the rectangular Toeplitz-plus-Hankel matrix

  H^{(1)}_{K,L+1} := (g_{k+l} + g_{k−l})_{k=0,…,K−1; l=0,…,L}. (3.12)

Note that H^{(1)}_{K,L+1} is rank deficient with rank M_1. This is an analogous result to [7, Lemma 3.].
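The odd-case identity (3.11) can be probed the same way as Lemma 3.1; note that the signed values g_{k−l} (not |k − l|) are used, since g is odd in k. A sketch with assumed example data:

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

N = 50
deg = [3, 9, 15]                        # assumed odd degrees n_{1,j}
d = np.array([0.7, 1.1, -0.4])          # assumed coefficients d_{1,j}
omega = (2 * np.array(deg) + 1) * np.pi / (2 * N)

g = lambda k: float(np.sum(d * np.sin(omega * k)))   # odd in k: g(-k) = -g(k)

# same Chebyshev-basis Prony polynomial as in the even case
p = Cheb.chebfromroots(np.cos(omega))
res = max(abs(sum(p[l] * (g(k + l) + g(k - l)) for l in range(len(p))))
          for k in range(2 * len(deg)))
```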

3.3 Sparse Legendre interpolation

In this subsection, we sketch a Prony-like method for the computation of the polynomial degrees n_{0,j} and n_{1,k} of the sparse Legendre expansion (2.7). Mainly we use singular value decompositions (SVD) of the Toeplitz-plus-Hankel matrices (3.8) and (3.12). For details of this method see [7, Section 3]. We start with the singular value factorizations

  H^{(0)}_{K,L+1} = U^{(0)}_K D^{(0)}_{K,L+1} (W^{(0)}_{L+1})^T, H^{(1)}_{K,L+1} = U^{(1)}_K D^{(1)}_{K,L+1} (W^{(1)}_{L+1})^T,

where U^{(0)}_K, U^{(1)}_K, W^{(0)}_{L+1} and W^{(1)}_{L+1} are orthogonal matrices and where D^{(0)}_{K,L+1} and D^{(1)}_{K,L+1} are rectangular diagonal matrices. The diagonal entries of D^{(0)}_{K,L+1} are the singular values of (3.8) arranged in nonincreasing order,

  σ^{(0)}_1 ≥ σ^{(0)}_2 ≥ ⋯ ≥ σ^{(0)}_{M_0} > σ^{(0)}_{M_0+1} ≥ ⋯ ≥ σ^{(0)}_{L+1} ≥ 0.

We determine the largest index M_0 such that σ^{(0)}_{M_0}/σ^{(0)}_1 > ε; this is approximately the rank of the matrix (3.8) and coincides with the even Legendre sparsity M_0 of the polynomial (2.7). Similarly, the diagonal entries of D^{(1)}_{K,L+1} are the singular values of (3.12) arranged in nonincreasing order,

  σ^{(1)}_1 ≥ σ^{(1)}_2 ≥ ⋯ ≥ σ^{(1)}_{M_1} > σ^{(1)}_{M_1+1} ≥ ⋯ ≥ σ^{(1)}_{L+1} ≥ 0.

We determine the largest index M_1 such that σ^{(1)}_{M_1}/σ^{(1)}_1 > ε; this is approximately the rank of the matrix (3.12) and coincides with the odd Legendre sparsity M_1 of the polynomial (2.7). Note that there is often a gap in the singular values, so that we can choose ε = 10^{−8} in general. Introducing the matrices

  D^{(0)}_{K,M_0} := D^{(0)}_{K,L+1}(1:K, 1:M_0) = [diag(σ^{(0)}_j)_{j=1}^{M_0}; O_{K−M_0,M_0}],
  W^{(0)}_{M_0,L+1} := W^{(0)}_{L+1}(1:M_0, 1:L+1),
  D^{(1)}_{K,M_1} := D^{(1)}_{K,L+1}(1:K, 1:M_1) = [diag(σ^{(1)}_j)_{j=1}^{M_1}; O_{K−M_1,M_1}],
  W^{(1)}_{M_1,L+1} := W^{(1)}_{L+1}(1:M_1, 1:L+1),

we can simplify the SVDs of the Toeplitz-plus-Hankel matrices (3.8) and (3.12) as follows:

  H^{(0)}_{K,L+1} = U^{(0)}_K D^{(0)}_{K,M_0} W^{(0)}_{M_0,L+1}, H^{(1)}_{K,L+1} = U^{(1)}_K D^{(1)}_{K,M_1} W^{(1)}_{M_1,L+1}.

Using the known submatrix notation and setting

  W^{(0)}_{M_0,L−1}(s) := W^{(0)}_{M_0,L+1}(1:M_0, 1+s : L−1+s), s = 0, 1, 2, (3.13)
  W^{(1)}_{M_1,L−1}(s) := W^{(1)}_{M_1,L+1}(1:M_1, 1+s : L−1+s), s = 0, 1, 2, (3.14)

we form the matrices

  F^{(0)}_{M_0} := (1/2) (W^{(0)}_{M_0,L−1}(0) + W^{(0)}_{M_0,L−1}(2)) (W^{(0)}_{M_0,L−1}(1))^†, (3.15)
  F^{(1)}_{M_1} := (1/2) (W^{(1)}_{M_1,L−1}(0) + W^{(1)}_{M_1,L−1}(2)) (W^{(1)}_{M_1,L−1}(1))^†, (3.16)

where (W^{(0)}_{M_0,L−1}(1))^† denotes the Moore–Penrose pseudoinverse of W^{(0)}_{M_0,L−1}(1). Finally we determine the nodes x_{0,j} ∈ [−1, 1] (j = 1, …, M_0) and x_{1,j} ∈ [−1, 1] (j = 1, …, M_1) as eigenvalues of the matrices F^{(0)}_{M_0} and F^{(1)}_{M_1}, respectively. Thus the algorithm reads as follows:

Algorithm 3.1 (Sparse Legendre interpolation based on SVD)

Input: L, K, N ∈ ℕ (N ≫ 1), 3 ≤ L ≤ K ≤ N, L is an upper bound of max{M_0, M_1}, sampled values H(sin(kπ/(2N))) (k = 1 − L − K, …, L + K − 1) of the polynomial (2.7) of degree at most N.

1. Compute

  h_k := sqrt(π/2) (cos(kπ/(2N)))^{1/2} H(sin(kπ/(2N))), k = 1 − L − K, …, L + K − 1,

and form

  f_k := (h_k + h_{−k})/2, g_k := (h_k − h_{−k})/2, k = 0, …, L + K − 1.

2. Compute the SVDs of the rectangular Toeplitz-plus-Hankel matrices (3.8) and (3.12). Determine the approximate rank M_0 of (3.8) such that σ^{(0)}_{M_0}/σ^{(0)}_1 > 10^{−8} and form the matrix (3.13). Determine the approximate rank M_1 of (3.12) such that σ^{(1)}_{M_1}/σ^{(1)}_1 > 10^{−8} and form the matrix (3.14).

3. Compute all eigenvalues x_{0,j} ∈ [−1, 1] (j = 1, …, M_0) of the square matrix (3.15). Assume that the eigenvalues are ordered in the following form: x_{0,1} > x_{0,2} > ⋯ > x_{0,M_0}. Calculate

  n_{0,j} := [(2N/π) arccos(x_{0,j}) − 1/2] (j = 1, …, M_0),

where [x] means rounding of x ∈ ℝ to the nearest integer.

4. Compute all eigenvalues x_{1,j} ∈ [−1, 1] (j = 1, …, M_1) of the square matrix (3.16). Assume that the eigenvalues are ordered in the following form: x_{1,1} > x_{1,2} > ⋯ > x_{1,M_1}. Calculate n_{1,j} := [(2N/π) arccos(x_{1,j}) − 1/2] (j = 1, …, M_1).

5. Compute the coefficients c_{0,j} ∈ ℝ (j = 1, …, M_0) and c_{1,j} ∈ ℝ (j = 1, …, M_1) as least squares solutions of the overdetermined linear Vandermonde-like systems

  Σ_{j=1}^{M_0} c_{0,j} Q_{n_{0,j}}(sin(kπ/(2N))) = f_k, k = 0, …, L + K − 1,
  Σ_{j=1}^{M_1} c_{1,j} Q_{n_{1,j}}(sin(kπ/(2N))) = g_k, k = 0, …, L + K − 1.

Output: M_0 ∈ ℕ_0, n_{0,j} ∈ ℕ_0 (0 ≤ n_{0,1} < n_{0,2} < ⋯ < n_{0,M_0} < N), c_{0,j} ∈ ℝ (j = 1, …, M_0); M_1 ∈ ℕ_0, n_{1,j} ∈ ℕ (1 ≤ n_{1,1} < n_{1,2} < ⋯ < n_{1,M_1} < N), c_{1,j} ∈ ℝ (j = 1, …, M_1).

Remark 3.1 The Algorithm 3.1 is very similar to [7, Algorithm 3.5]. Note that one can also use the QR decomposition of the rectangular Toeplitz-plus-Hankel matrices (3.8) and (3.12) instead of the SVD. In that case one obtains an algorithm similar to [7, Algorithm 3.4].

4 Extension to Gegenbauer polynomials

In this section we show that our reconstruction method can be generalized to sparse Gegenbauer expansions of low positive order. The Gegenbauer polynomials C_n^{(α)} of degree n ∈ ℕ_0 and fixed order α > 0 can be defined by the recursion relation (see [])

  C^{(α)}_{n+2}(x) := (2(α + n + 1)/(n + 2)) x C^{(α)}_{n+1}(x) − ((2α + n)/(n + 2)) C^{(α)}_n(x), n ∈ ℕ_0,

with C^{(α)}_0(x) := 1 and C^{(α)}_1(x) := 2αx. Sometimes, the C_n^{(α)} are called ultraspherical polynomials, too. In the case α = 1/2, one obtains again the Legendre polynomials P_n = C_n^{(1/2)}. By [], an explicit representation of the Gegenbauer polynomial reads as follows:

  C^{(α)}_n(x) = Σ_{j=0}^{⌊n/2⌋} (−1)^j (Γ(n − j + α) / (Γ(α) Γ(j + 1) Γ(n − 2j + 1))) (2x)^{n−2j}.

Thus the Gegenbauer polynomials satisfy the symmetry relations

  C^{(α)}_n(−x) = (−1)^n C^{(α)}_n(x). (4.1)

Further one obtains that for m ∈ ℕ_0

  C^{(α)}_{2m}(0) = (−1)^m Γ(m + α)/(Γ(α) Γ(m + 1)), C^{(α)}_{2m+1}(0) = 0, (4.2)
  (d/dx) C^{(α)}_{2m+1}(0) = (−1)^m 2Γ(α + m + 1)/(Γ(α) Γ(m + 1)), (d/dx) C^{(α)}_{2m}(0) = 0. (4.3)

Moreover, the Gegenbauer polynomial C^{(α)}_n satisfies the following homogeneous linear differential equation of second order (see []):

  (1 − x^2) (d^2/dx^2) C^{(α)}_n(x) − (2α + 1) x (d/dx) C^{(α)}_n(x) + n(n + 2α) C^{(α)}_n(x) = 0. (4.4)

Further, the Gegenbauer polynomials are orthogonal over the interval [−1, 1] with respect to the weight function

  w_α(x) := (Γ(α + 1)/(sqrt(π) Γ(α + 1/2))) (1 − x^2)^{α−1/2}, x ∈ (−1, 1);

more precisely, by [],

  ∫_{−1}^{1} C^{(α)}_m(x) C^{(α)}_n(x) w_α(x) dx = (α Γ(2α + n) / ((n + α) Γ(n + 1) Γ(2α))) δ_{m−n}, m, n ∈ ℕ_0.

Note that the weight function w_α is normalized by ∫_{−1}^{1} w_α(x) dx = 1. Then the normalized Gegenbauer polynomials

  L^{(α)}_n(x) := sqrt((n + α) Γ(n + 1) Γ(2α) / (α Γ(2α + n))) C^{(α)}_n(x), n ∈ ℕ_0, (4.5)

form an orthonormal basis in the weighted Hilbert space L^2_{w_α}[−1, 1].

Let M be a positive integer. A polynomial H(x) := Σ_{k=0}^{d} b_k L^{(α)}_k(x) of degree d with d ≥ M is called M-sparse in the Gegenbauer basis, or simply a sparse Gegenbauer expansion, if M coefficients b_k are nonzero and the other d − M + 1 coefficients vanish. Then such an M-sparse polynomial H can be represented in the form

  H(x) = Σ_{j=1}^{M_0} c_{0,j} L^{(α)}_{n_{0,j}}(x) + Σ_{k=1}^{M_1} c_{1,k} L^{(α)}_{n_{1,k}}(x) (4.6)

with c_{0,j} := b_{n_{0,j}} ≠ 0 for all even n_{0,j} with 0 ≤ n_{0,1} < n_{0,2} < ⋯ < n_{0,M_0} and with c_{1,k} := b_{n_{1,k}} ≠ 0 for all odd n_{1,k} with 1 ≤ n_{1,1} < n_{1,2} < ⋯ < n_{1,M_1}. The positive integer M = M_0 + M_1 is called the Gegenbauer sparsity of the polynomial H. The integers M_0, M_1 are the even and odd Gegenbauer sparsities, respectively.

Now for each n ∈ ℕ_0, we introduce the functions Q^{(α)}_n by

  Q^{(α)}_n(x) := sqrt(sqrt(π) Γ(α + 1)/Γ(α + 1/2)) (1 − x^2)^{α/2} L^{(α)}_n(x), x ∈ [−1, 1]. (4.7)

These functions Q^{(α)}_n possess the same symmetry properties (4.1) as the Gegenbauer polynomials, namely

  Q^{(α)}_n(−x) = (−1)^n Q^{(α)}_n(x), x ∈ [−1, 1]. (4.8)

Further, the functions Q^{(α)}_n are orthonormal in the weighted Hilbert space L^2_w[−1, 1] with the Chebyshev weight w(x) = 1/(π sqrt(1 − x^2)), since for all m, n ∈ ℕ_0

  ∫_{−1}^{1} Q^{(α)}_m(x) Q^{(α)}_n(x) w(x) dx = ∫_{−1}^{1} L^{(α)}_m(x) L^{(α)}_n(x) w_α(x) dx = δ_{m−n}.

In the following, we use the standard substitution x = cos θ, θ ∈ [0, π], and obtain

  Q^{(α)}_n(cos θ) = sqrt(sqrt(π) Γ(α + 1)/Γ(α + 1/2)) (sin θ)^α L^{(α)}_n(cos θ), θ ∈ [0, π].

Lemma 4.1 For all n ∈ ℕ_0 and α ∈ (0, 1), the functions Q^{(α)}_n(cos θ) are uniformly bounded on the interval [0, π], i.e.

  |Q^{(α)}_n(cos θ)| < sqrt(2), θ ∈ [0, π]. (4.9)

Proof For n ∈ ℕ and α ∈ (0, 1), we know by [3] that for all θ ∈ [0, π]

  (sin θ)^α |C^{(α)}_n(cos θ)| < 2^{1−α} (n + α)^{α−1} / Γ(α).

Then for the normalized Gegenbauer polynomials L^{(α)}_n, we obtain the estimate

  (sin θ)^α |L^{(α)}_n(cos θ)| < (2^{1−α}/Γ(α)) sqrt((n + α) Γ(n + 1) Γ(2α)/(α Γ(n + 2α))) (n + α)^{α−1}.

Using the duplication formula of the Gamma function,

  Γ(α) Γ(α + 1/2) = 2^{1−2α} sqrt(π) Γ(2α), (4.10)

we can estimate
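Lemma 4.1 can be probed numerically. The sketch below evaluates Q_n^{(α)}(cos θ) on a grid for a few orders α ∈ (0, 1), using the recursion and normalization defined above; as a cross-check, the case α = 1/2 must reduce to the Legendre functions Q_n of Sect. 2:

```python
import numpy as np
from math import gamma, sqrt, pi

def gegenbauer_C(n, alpha, x):
    """C_n^{(alpha)} via the recursion of Sect. 4."""
    c_prev, c_cur = np.ones_like(x), 2 * alpha * x
    if n == 0:
        return c_prev
    for k in range(1, n):
        c_prev, c_cur = c_cur, (2 * (k + alpha) * x * c_cur
                                - (k + 2 * alpha - 1) * c_prev) / (k + 1)
    return c_cur

def Q_alpha(n, alpha, theta):
    """Q_n^{(alpha)}(cos theta), cf. (4.5) and (4.7)."""
    x = np.cos(theta)
    Ln = sqrt((n + alpha) * gamma(n + 1) * gamma(2 * alpha)
              / (alpha * gamma(n + 2 * alpha)))
    pref = sqrt(sqrt(pi) * gamma(alpha + 1) / gamma(alpha + 0.5))
    return pref * np.sin(theta) ** alpha * Ln * gegenbauer_C(n, alpha, x)

theta = np.linspace(1e-3, np.pi - 1e-3, 2000)
sup = max(float(np.max(np.abs(Q_alpha(n, a, theta))))
          for a in (0.3, 0.5, 0.7) for n in range(13))

# alpha = 1/2 reduces to the Legendre case: Q_3 = sqrt(7 pi/2) sqrt(sin) P_3
c = np.cos(theta)
ref = np.sqrt(7 * np.pi / 2) * np.sqrt(np.sin(theta)) * (5 * c**3 - 3 * c) / 2
half_ok = np.allclose(Q_alpha(3, 0.5, theta), ref, atol=1e-12)
```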

  Q^{(α)}_n(cos θ) < sqrt(2) (Γ(n + 1)/Γ(n + 2α))^{1/2} (n + α)^{α−1/2}.

For α = 1/2 (see [9]), we obtain |Q^{(1/2)}_n(cos θ)| < sqrt(2). In the following, we use the inequalities

  (n + σ/2)^{1−σ} < Γ(n + 1)/Γ(n + σ) < (n + (σ + 1)/2)^{1−σ}, n ∈ ℕ, σ ∈ (0, 1). (4.11)

In the case 0 < α < 1/2, the estimate (4.11) with σ = 2α implies that

  Q^{(α)}_n(cos θ) < sqrt(2) ((n + α + 1/2)/(n + α))^{1/2−α}, n ∈ ℕ,

and (4.9) follows after a further elementary estimate. In the case 1/2 < α < 1, we set β := 2α − 1 ∈ (0, 1). By (4.11) with σ = β, we can estimate

  Γ(n + 1)/Γ(n + 2α) = Γ(n + 1)/((n + β) Γ(n + β)) < (n + (β + 1)/2)^{1−β}/(n + β).

Hence we obtain

  Q^{(α)}_n(cos θ) < sqrt(2) ((n + α)/(n + β))^{1/2}, n ∈ ℕ,

which again yields (4.9) after a further elementary estimate. Finally, by Q^{(α)}_0(cos θ) = sqrt(sqrt(π) Γ(α + 1)/Γ(α + 1/2)) (sin θ)^α and sqrt(sqrt(π) Γ(α + 1)/Γ(α + 1/2)) < sqrt(2) for α ∈ (0, 1), we see that the estimate (4.9) is also true for n = 0. □

By (4.4), the function Q^{(α)}_n(cos θ) satisfies the following linear differential equation of second order (see []):

  (d^2/dθ^2) Q^{(α)}_n(cos θ) + [(n + α)^2 + α(1 − α) (sin θ)^{−2}] Q^{(α)}_n(cos θ) = 0, θ ∈ (0, π). (4.12)

By the method of Liouville–Stekloff (see []), we show that for arbitrary n ∈ ℕ_0, the function Q^{(α)}_n(cos θ) is approximately equal to some multiple of cos((n + α)θ − απ/2) in a small neighborhood of θ = π/2.

Theorem 4.1 For each n ∈ ℕ_0 and α ∈ (0, 1), the function Q^{(α)}_n(cos θ) can be represented by the asymptotic formula

  Q^{(α)}_n(cos θ) = λ_n cos((n + α)θ − απ/2) + R^{(α)}_n(θ), θ ∈ [0, π], (4.13)

with the scaling factor

  λ_n := sqrt((2m + α) Γ(m + 1/2) Γ(α + m) / (Γ(m + 1) Γ(α + m + 1/2))) for n = 2m,
  λ_n := 2 sqrt(Γ(m + 3/2) Γ(α + m + 1) / ((2m + 1 + α) Γ(m + 1) Γ(α + m + 1/2))) for n = 2m + 1,

and the error term

  R^{(α)}_n(θ) := −(α(1 − α)/(n + α)) ∫_{π/2}^{θ} sin((n + α)(θ − τ)) (sin τ)^{−2} Q^{(α)}_n(cos τ) dτ, θ ∈ (0, π). (4.14)

The error term R^{(α)}_n(θ) satisfies the conditions R^{(α)}_n(π/2) = (d/dθ) R^{(α)}_n(π/2) = 0 and has the symmetry property

  R^{(α)}_n(π − θ) = (−1)^n R^{(α)}_n(θ). (4.15)

Further, the error term can be estimated by

  |R^{(α)}_n(θ)| ≤ (sqrt(2) α(1 − α)/(n + α)) |cot θ|. (4.16)

Proof 1. Using the method of Liouville–Stekloff (see []), we derive the asymptotic formula (4.13) from the differential equation (4.12), which can be written in the form

  (d^2/dθ^2) Q^{(α)}_n(cos θ) + (n + α)^2 Q^{(α)}_n(cos θ) = −α(1 − α) (sin θ)^{−2} Q^{(α)}_n(cos θ), θ ∈ (0, π). (4.17)

Since the homogeneous linear differential equation

  (d^2/dθ^2) X(θ) + (n + α)^2 X(θ) = 0

has the fundamental system cos((n + α)θ − απ/2), sin((n + α)θ − απ/2), the differential equation (4.17) can be transformed into the Volterra integral equation

  Q^{(α)}_n(cos θ) = λ_n cos((n + α)θ − απ/2) + μ_n sin((n + α)θ − απ/2)
  − (α(1 − α)/(n + α)) ∫_{π/2}^{θ} sin((n + α)(θ − τ)) (sin τ)^{−2} Q^{(α)}_n(cos τ) dτ, θ ∈ (0, π),

with certain real constants λ_n and μ_n. Introducing R^{(α)}_n(θ) by (4.14), we see immediately that R^{(α)}_n(π/2) = 0. From

  (d/dθ) R^{(α)}_n(θ) = −α(1 − α) ∫_{π/2}^{θ} cos((n + α)(θ − τ)) (sin τ)^{−2} Q^{(α)}_n(cos τ) dτ

it follows that (d/dθ) R^{(α)}_n(π/2) = 0.

2. Now we determine the constants λ_n and μ_n. For arbitrary even n = 2m (m ∈ ℕ_0), the function Q^{(α)}_{2m}(cos θ) can be represented in the form

  Q^{(α)}_{2m}(cos θ) = λ_{2m} cos((2m + α)θ − απ/2) + μ_{2m} sin((2m + α)θ − απ/2) + R^{(α)}_{2m}(θ).

Hence the condition R^{(α)}_{2m}(π/2) = 0 means that Q^{(α)}_{2m}(0) = (−1)^m λ_{2m}. Using (4.7), (4.5), (4.2), and the duplication formula (4.10), we obtain that

  λ_{2m} = sqrt((2m + α) Γ(m + 1/2) Γ(α + m) / (Γ(m + 1) Γ(α + m + 1/2))).

From (d/dx) C^{(α)}_{2m}(0) = 0 by (4.3) it follows that the derivative of Q^{(α)}_{2m}(cos θ) vanishes for θ = π/2. Thus the second condition (d/dθ) R^{(α)}_{2m}(π/2) = 0 implies that

  0 = μ_{2m} (2m + α) (−1)^m,

i.e. μ_{2m} = 0.

If n = 2m + 1 (m ∈ ℕ_0) is odd, then

  Q^{(α)}_{2m+1}(cos θ) = λ_{2m+1} cos((2m + 1 + α)θ − απ/2) + μ_{2m+1} sin((2m + 1 + α)θ − απ/2) + R^{(α)}_{2m+1}(θ).

Hence the condition R^{(α)}_{2m+1}(π/2) = 0 implies by C^{(α)}_{2m+1}(0) = 0 (see (4.2)) that

  0 = μ_{2m+1} (−1)^m,

i.e. μ_{2m+1} = 0. The second condition (d/dθ) R^{(α)}_{2m+1}(π/2) = 0 reads as follows:

  sqrt(sqrt(π) Γ(α + 1)/Γ(α + 1/2)) sqrt((2m + 1 + α) Γ(2m + 2) Γ(2α)/(α Γ(2m + 1 + 2α))) (d/dx) C^{(α)}_{2m+1}(0) = λ_{2m+1} (2m + 1 + α) (−1)^m.

Thus we obtain by (4.3) and the duplication formula (4.10) that

  λ_{2m+1} = 2 sqrt(Γ(m + 3/2) Γ(α + m + 1) / ((2m + 1 + α) Γ(m + 1) Γ(α + m + 1/2))).

3. As shown, the error term R^{(α)}_n(θ) has the explicit representation (4.14). Using (4.9), we estimate this integral and obtain

  |R^{(α)}_n(θ)| ≤ (sqrt(2) α(1 − α)/(n + α)) ∫_{θ}^{π/2} (sin τ)^{−2} dτ = (sqrt(2) α(1 − α)/(n + α)) |cot θ|.

The symmetry property (4.15) of the error term R^{(α)}_n(θ) = Q^{(α)}_n(cos θ) − λ_n cos((n + α)θ − απ/2) follows from the fact that Q^{(α)}_n(cos θ) and cos((n + α)θ − απ/2) possess the same symmetry properties as (4.8). This completes the proof. □

Remark 4.1 The following result is stated in [0]: if α ≥ 1, then the expressions (sin θ)^α |L^{(α)}_n(cos θ)|, suitably scaled as in (4.7), admit an explicit uniform bound of order α^{1/6} for all n ≥ 6 and θ ∈ [0, π]. Using (4.10), we can see that (sin θ)^α |L^{(α)}_n(cos θ)| is uniformly bounded for all α ≥ 1, n ≥ 6 and θ ∈ [0, π]. Using the above estimate, one can extend Theorem 4.1 to the case of moderately sized orders α.

We observe that the upper bound (4.16) of |R^{(α)}_n(θ)| is very accurate in a small neighborhood of θ = π/2. By the substitution t = θ − π/2 ∈ [−π/2, π/2] and (4.8), we obtain

  Q^{(α)}_n(sin t) = (−1)^n λ_n cos((n + α)t + nπ/2) + (−1)^n R^{(α)}_n(t + π/2)
  = (−1)^n λ_n cos(nπ/2) cos((n + α)t) − (−1)^n λ_n sin(nπ/2) sin((n + α)t) + (−1)^n R^{(α)}_n(t + π/2). (4.18)

Now the Algorithm 3.1 can be straightforwardly generalized to the case of a sparse Gegenbauer expansion (4.6):

Algorithm 4.1 (Sparse Gegenbauer interpolation based on SVD)

Input: L, K, N ∈ ℕ (N ≫ 1), 3 ≤ L ≤ K ≤ N, L is an upper bound of max{M_0, M_1}, sampled values H(sin(kπ/(2N))) (k = 1 − L − K, …, L + K − 1) of the polynomial (4.6) of degree at most N and of low order α > 0.

1. Compute for k = 1 − L − K, …, L + K − 1

  h_k := sqrt(sqrt(π) Γ(α + 1)/Γ(α + 1/2)) (cos(kπ/(2N)))^α H(sin(kπ/(2N)))

and form f_k := (h_k + h_{−k})/2, g_k := (h_k − h_{−k})/2.

2. Compute the SVDs of the rectangular Toeplitz-plus-Hankel matrices (3.8) and (3.12). Determine the approximate rank M_0 of (3.8) such that σ^{(0)}_{M_0}/σ^{(0)}_1 > 10^{−8} and form the matrix (3.13). Determine the approximate rank M_1 of (3.12) such that σ^{(1)}_{M_1}/σ^{(1)}_1 > 10^{−8} and form the matrix (3.14).

3. Compute all eigenvalues x_{0,j} ∈ [−1, 1] (j = 1, …, M_0) of the square matrix (3.15). Assume that the eigenvalues are ordered in the form x_{0,1} > x_{0,2} > ⋯ > x_{0,M_0}. Calculate n_{0,j} := [(2N/π) arccos(x_{0,j}) − α] (j = 1, …, M_0).

4. Compute all eigenvalues x_{1,j} ∈ [−1, 1] (j = 1, …, M_1) of the square matrix (3.16). Assume that the eigenvalues are ordered in the form x_{1,1} > x_{1,2} > ⋯ > x_{1,M_1}. Calculate n_{1,j} := [(2N/π) arccos(x_{1,j}) − α] (j = 1, …, M_1).

5. Compute the coefficients c_{0,j} ∈ ℝ (j = 1, …, M_0) and c_{1,j} ∈ ℝ (j = 1, …, M_1) as least squares solutions of the overdetermined linear Vandermonde-like systems

  Σ_{j=1}^{M_0} c_{0,j} Q^{(α)}_{n_{0,j}}(sin(kπ/(2N))) = f_k, k = 0, …, L + K − 1,
  Σ_{j=1}^{M_1} c_{1,j} Q^{(α)}_{n_{1,j}}(sin(kπ/(2N))) = g_k, k = 0, …, L + K − 1.

Output: M_0 ∈ ℕ_0, n_{0,j} ∈ ℕ_0 (0 ≤ n_{0,1} < ⋯ < n_{0,M_0} < N), c_{0,j} ∈ ℝ (j = 1, …, M_0); M_1 ∈ ℕ_0, n_{1,j} ∈ ℕ (1 ≤ n_{1,1} < ⋯ < n_{1,M_1} < N), c_{1,j} ∈ ℝ (j = 1, …, M_1).

5 Numerical examples

Now we illustrate the behavior and the limits of the suggested algorithms. Using IEEE standard floating point arithmetic with double precision, we have implemented our algorithms in MATLAB. In Example 5.1, an M-sparse Legendre expansion is given in the form (2.7) with normalized Legendre polynomials of even degree n_{0,j} (j = 1, …, M_0) and odd degree n_{1,k} (k = 1, …, M_1), respectively, and corresponding real non-vanishing coefficients c_{0,j} and c_{1,k}, respectively. In Examples 5.3 and 5.4, an M-sparse Gegenbauer expansion is given in the form (4.6) with normalized Gegenbauer polynomials of even/odd degree n_{0,j} resp. n_{1,k} and order α > 0 and corresponding real non-vanishing coefficients c_{0,j} resp. c_{1,k}. We compute the absolute error of the coefficients by

  e(c) := max { |c_{0,j} − c̃_{0,j}|, |c_{1,k} − c̃_{1,k}| : j = 1, …, M_0; k = 1, …, M_1 },
  c := (c_{0,1}, …, c_{0,M_0}, c_{1,1}, …, c_{1,M_1})^T,

where c̃_{0,j} and c̃_{1,k} are the coefficients computed by our algorithms. The symbol + in the Tables 2, 3, 4 and 5 means that all degrees n_j are correctly reconstructed; the

Table 2 Results of Example 5.1 for exact sampled data (columns: N, K, L, success of Algorithm 3.1, error e(c))

22 040 D. Potts, M. Tasche symbol indicates that the reconstruction of the degrees fails. We present the error ec in the last column of the tables. Example 5. We start with the reconstruction of a 5-sparse Legendre expansion.7 which is a polynomial of degree 00. We choose the even degrees n 0, = 6, n 0, =, n 0,3 = 00 and the odd degrees n, = 75, n, = 77 in.7. The corresponding coefficients c 0, j and c,k are equal to. Note that for the parameters N = 400 and K = L = 5, due to roundoff errors, some eigenvalues x 0, j resp. x,k are not contained in [, ]. But we can improve the stability by choosing more sampling values. In the case N = 500, K = 9 and L = 5, we need only K + L = 7 sampled values of 3. for the exact reconstruction of the 5-sparse Legendre expansion.7. Example 5. Now we show that the Algorithm 3. is robust with respect to noisy sampled data. We reconstruct a 5-sparse Legendre expansion.7 with the even degrees n 0, =, n 0, = 50 and the odd degrees n, = 75, n, = 77 and n,3 = 33, where the corresponding coefficients c 0, j and c,k are equal to. Then we add noise of size 0 δ η, where η is uniformly distributed in [, ], to each sampling value. The corresponding results of Algorithm 3. are shown in Table 3. Note that for certain parameters, some eigenvalues x 0, j resp. x,k are not contained in [, ].But we can improve the stability of Algorithm 3. by choosing more sampling values see Table 3. Note that we can also reconstruct the polynomials for higher noisy level, if we replace the identification of the rank M 0 of 3.8 and M of 3. instepof Algorithm 3. by a gap condition. For the results see the last four lines of Table 3. Example 5.3 We consider now the reconstruction of a 5-sparse Gegenbauer expansion 4.6oforderα>0which is a polynomial of degree 00. Similar as in Example 5., we choose the even degrees n 0, = 6, n 0, =, n 0,3 = 00 and the odd degrees n, = 75, n, = 77 in 4.6. The corresponding coefficients c 0, j and c,k are equal to. 
Here we use only $L + K = 9$ sampled values for the exact recovery of the 5-sparse Gegenbauer expansion (4.6) of degree 100. Although Theorem 4.1 gives results only for $\alpha \in (0, 1)$, we also show some examples for $\alpha > 1$. But the suggested method fails for $\alpha = 3.5$. In this case our algorithm cannot exactly detect the smallest degrees $n_{0,1} = 6$ and $n_{0,2} = 12$, but all the higher degrees are exactly detected.

Example 5.4 We consider the reconstruction of a 5-sparse Gegenbauer expansion (4.6) of order $\alpha > 0$ which does not contain Gegenbauer polynomials of low degree. Thus we choose the even degrees $n_{0,1} = 60$, $n_{0,2} = 120$, $n_{0,3} = 100$ and the odd degrees $n_{1,1} = 75$, $n_{1,2} = 77$ in (4.6). The corresponding coefficients $c_{0,j}$ and $c_{1,k}$ are equal to 1. In Table 5, we also show some examples for $\alpha \ge 2.5$. But the suggested method fails for $\alpha = 8$. In this case, Algorithm 4.1 cannot exactly detect the smallest degree $n_{0,1} = 60$, but all the higher degrees are exactly recovered. This observation is in perfect accordance with the very good local approximation near $\theta = \pi/2$, see Theorem 4.1 and Remark 4.2.
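Underlying all of these examples is a Prony-type step: eigenvalues $x_{0,j} \in [-1, 1]$ of a small matrix pencil are computed, and the degrees are then read off via an arccosine. The following self-contained sketch shows this mechanism for a plain cosine sum sampled on an equispaced grid. It is a simplified stand-in, not the paper's Algorithm 3.1 or 4.1, and the frequencies and grid parameters are illustrative only.

```python
import numpy as np

def cosine_frequencies(samples, M, tau):
    """Recover the M frequencies omega_j of s(t) = sum_j cos(omega_j t) from
    the equispaced samples s(k tau), k = 0, ..., 4M, by a square matrix pencil:
    the generalized eigenvalues of the Hankel pencil (H1, H0) are
    e^{+-i omega_j tau}."""
    P = 2 * M                                       # each cosine = two complex exponentials
    H0 = np.array([[samples[i + l] for l in range(P)] for i in range(P)])
    H1 = np.array([[samples[i + l + 1] for l in range(P)] for i in range(P)])
    z = np.linalg.eigvals(np.linalg.solve(H0, H1))  # eigenvalues e^{+-i omega_j tau}
    omega = np.sort(np.abs(np.angle(z)) / tau)      # fold the +- pairs together
    return omega[::2]                               # one representative per pair

# Illustrative frequencies of the form n_j + 1/2 and m_k, on the grid t_k = k pi / N:
tau = np.pi / 200
true = np.array([3.5, 65.5, 75.0])
k = np.arange(4 * len(true) + 1)
samples = np.cos(np.outer(k * tau, true)).sum(axis=1)
freqs = cosine_frequencies(samples, len(true), tau)
```

On exact data the eigenvalues of the pencil lie on the unit circle, and their real parts $\cos(\omega_j\tau)$ play the role of the $x_{0,j} \in [-1, 1]$ above; noise can push them off the circle, which mirrors the eigenvalues leaving $[-1, 1]$ reported in Examples 5.1 and 5.2.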

Table 3 Results of Example 5.2 for noisy sampled data (columns: $N$, $K$, $L$, $\delta$, Algorithm 3.1, $e(c)$)

Table 4 Results of Example 5.3 (columns: $\alpha$, $N$, $K$, $L$, Algorithm 4.1, $e(c)$)

Table 5 Results of Example 5.4 (columns: $\alpha$, $N$, $K$, $L$, Algorithm 4.1, $e(c)$)

Example 5.5 We stress again that Prony-like methods are very powerful tools for the recovery of a sparse exponential sum

$$S(x) := \sum_{j=1}^{M} c_j\, \mathrm{e}^{f_j x} \qquad (x \ge 0)$$

24 04 D. Potts, M. Tasche with distinct numbers f j [ δ, 0] +i [ π, π 0 δ and complex non vanishing coefficients c j, if only finitely many sampled data of S are given. In [7, Example 5.5], we have presented a method to reconstruct functions of the form Fθ = M c j cosν j θ+ d j sinμ j θ θ [0, π]. j= with real coefficients c j, d j and distinct frequencies ν j, μ j > 0 by sampling the function F. Now we reconstruct a sum of sparse Legendre and Chebyshev expansions Hx := M M c j L n j x + d k T mk x x [, ]. 5. j= k= Here we choose c j = d k =, M = M = 5 and n j 5 j= = 6, 3, 65, 68, 90T and m k 5 k= = 60, 0, 75, 78, 00T. Substituting x = sin t for t [ π, π ], we obtain π M M π cos th sin t= c j Q n j sin t+ d k j= By Theorem., the function k= cos t cos m k t + m kπ. 5. M c j λ n j j= [ cos n j + t + n ] jπ M π + d k cos m k t + m kπ k= 5.3 approximates 5. in the near of 0. We apply Algorithm 3. with N = 00 and K = L = 0. In step 3 of Algorithm 3., we calculate all frequencies n j + and m k of 5.3, if n j and m k are even: N 7 arccos x 0, j =99.99, 90.50, 78.00, 68.49, 0.00, , 6.5 T. π j= In step 4 of Algorithm 3., we determine all frequencies n j + and m k of 5.3, if n j and m k are odd: N 3 arccos x, j = , , T. π j= Thus the Legendre polynomials L n j x in 5. have the even degrees 6, 68, and 90 and the odd degrees 3 and 65. Thus the Chebyshev polynomials T mk x in 5.

possess the even degrees 60, 120, 78, and 100 and the odd degree 75. Finally, one can compute the coefficients $c_j$ and $d_k$.

Acknowledgments The first named author gratefully acknowledges the support by the German Research Foundation within the project PO 711/10-2. Moreover, the authors would like to thank the anonymous referees for their valuable comments, which improved this paper.

References

1. Backofen, R., Gräf, M., Potts, D., Praetorius, S., Voigt, A., Witkowski, T.: A continuous approach to discrete ordering on S². Multiscale Model. Simul. 9 (2011)
2. Bogaert, I., Michiels, B., Fostier, J.: O(1) computation of Legendre polynomials and Gauss-Legendre nodes and weights for parallel computing. SIAM J. Sci. Comput. 34(3), C83-C101 (2012)
3. Giesbrecht, M., Labahn, G., Lee, W.-S.: Symbolic-numeric sparse polynomial interpolation in Chebyshev basis and trigonometric interpolation. In: Proceedings of Computer Algebra in Scientific Computation (CASC 2004)
4. Giesbrecht, M., Labahn, G., Lee, W.-S.: Symbolic-numeric sparse interpolation of multivariate polynomials. J. Symb. Comput. 44 (2009)
5. Hassanieh, H., Indyk, P., Katabi, D., Price, E.: Nearly optimal sparse Fourier transform. In: Proceedings of the 2012 ACM Symposium on Theory of Computing (STOC 2012)
6. Hassanieh, H., Indyk, P., Katabi, D., Price, E.: Simple and practical algorithm for sparse Fourier transform. In: ACM-SIAM Symposium on Discrete Algorithms (SODA 2012)
7. Iserles, A.: A fast and simple algorithm for the computation of Legendre coefficients. Numer. Math. 117 (2011)
8. Kaltofen, E., Lee, W.-S.: Early termination in sparse interpolation algorithms. J. Symb. Comput. 36 (2003)
9. Kershaw, D.: Some extensions of W. Gautschi's inequalities for the Gamma function. Math. Comput. 41 (1983)
10. Krasikov, I.: On the Erdélyi-Magnus-Nevai conjecture for Jacobi polynomials. Constr. Approx. 28 (2008)
11. Kunis, S., Rauhut, H.: Random sampling of sparse trigonometric polynomials II. Orthogonal matching pursuit versus basis pursuit. Found. Comput. Math. 8 (2008)
12. Lakshman, Y.N., Saunders, B.D.: Sparse polynomial interpolation in nonstandard bases. SIAM J. Comput. 24 (1995)
13. Lorch, L.: Inequalities for ultraspherical polynomials and the Gamma function. J. Approx. Theory 40 (1984)
14. Peter, T., Plonka, G.: A generalized Prony method for reconstruction of sparse sums of eigenfunctions of linear operators. Inverse Probl. 29 (2013)
15. Peter, T., Plonka, G., Roşca, D.: Representation of sparse Legendre expansions. J. Symb. Comput. 50 (2013)
16. Potts, D., Tasche, M.: Parameter estimation for nonincreasing exponential sums by Prony-like methods. Linear Algebra Appl. 439 (2013)
17. Potts, D., Tasche, M.: Sparse polynomial interpolation in Chebyshev bases. Linear Algebra Appl. 441, 61-87 (2014)
18. Rauhut, H.: Random sampling of sparse trigonometric polynomials. Appl. Comput. Harmon. Anal. 22 (2007)
19. Rauhut, H., Ward, R.: Sparse Legendre expansions via l1-minimization. J. Approx. Theory 164 (2012)
20. Rauhut, H., Ward, R.: Interpolation via weighted l1-minimization. Appl. Comput. Harmon. Anal. (in press)
21. Szegő, G.: Orthogonal Polynomials, 4th edn. Amer. Math. Soc., Providence (1975)


More information

Orthogonal polynomials with respect to generalized Jacobi measures. Tivadar Danka

Orthogonal polynomials with respect to generalized Jacobi measures. Tivadar Danka Orthogonal polynomials with respect to generalized Jacobi measures Tivadar Danka A thesis submitted for the degree of Doctor of Philosophy Supervisor: Vilmos Totik Doctoral School in Mathematics and Computer

More information

Orthogonal diagonalization for complex skew-persymmetric anti-tridiagonal Hankel matrices

Orthogonal diagonalization for complex skew-persymmetric anti-tridiagonal Hankel matrices Spec. Matrices 016; 4:73 79 Research Article Open Access Jesús Gutiérrez-Gutiérrez* and Marta Zárraga-Rodríguez Orthogonal diagonalization for complex skew-persymmetric anti-tridiagonal Hankel matrices

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Notes on Special Functions

Notes on Special Functions Spring 25 1 Notes on Special Functions Francis J. Narcowich Department of Mathematics Texas A&M University College Station, TX 77843-3368 Introduction These notes are for our classes on special functions.

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

Introduction to orthogonal polynomials. Michael Anshelevich

Introduction to orthogonal polynomials. Michael Anshelevich Introduction to orthogonal polynomials Michael Anshelevich November 6, 2003 µ = probability measure on R with finite moments m n (µ) = R xn dµ(x)

More information

ON INTERPOLATION AND INTEGRATION IN FINITE-DIMENSIONAL SPACES OF BOUNDED FUNCTIONS

ON INTERPOLATION AND INTEGRATION IN FINITE-DIMENSIONAL SPACES OF BOUNDED FUNCTIONS COMM. APP. MATH. AND COMP. SCI. Vol., No., ON INTERPOLATION AND INTEGRATION IN FINITE-DIMENSIONAL SPACES OF BOUNDED FUNCTIONS PER-GUNNAR MARTINSSON, VLADIMIR ROKHLIN AND MARK TYGERT We observe that, under

More information

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim.

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim. 56 Summary High order FD Finite-order finite differences: Points per Wavelength: Number of passes: D n f(x j ) = f j+n f j n n x df xj = m α m dx n D n f j j n= α m n = ( ) n (m!) (m n)!(m + n)!. PPW =

More information

Ultraspherical moments on a set of disjoint intervals

Ultraspherical moments on a set of disjoint intervals Ultraspherical moments on a set of disjoint intervals arxiv:90.049v [math.ca] 4 Jan 09 Hashem Alsabi Université des Sciences et Technologies, Lille, France hashem.alsabi@gmail.com James Griffin Department

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Fast algorithms based on asymptotic expansions of special functions

Fast algorithms based on asymptotic expansions of special functions Fast algorithms based on asymptotic expansions of special functions Alex Townsend MIT Cornell University, 23rd January 2015, CAM Colloquium Introduction Two trends in the computing era Tabulations Software

More information

Physics 250 Green s functions for ordinary differential equations

Physics 250 Green s functions for ordinary differential equations Physics 25 Green s functions for ordinary differential equations Peter Young November 25, 27 Homogeneous Equations We have already discussed second order linear homogeneous differential equations, which

More information

On Turán s inequality for Legendre polynomials

On Turán s inequality for Legendre polynomials Expo. Math. 25 (2007) 181 186 www.elsevier.de/exmath On Turán s inequality for Legendre polynomials Horst Alzer a, Stefan Gerhold b, Manuel Kauers c,, Alexandru Lupaş d a Morsbacher Str. 10, 51545 Waldbröl,

More information

On the Lebesgue constant of barycentric rational interpolation at equidistant nodes

On the Lebesgue constant of barycentric rational interpolation at equidistant nodes On the Lebesgue constant of barycentric rational interpolation at equidistant nodes by Len Bos, Stefano De Marchi, Kai Hormann and Georges Klein Report No. 0- May 0 Université de Fribourg (Suisse Département

More information

Numerische Mathematik

Numerische Mathematik Numer. Math. (997) 76: 479 488 Numerische Mathematik c Springer-Verlag 997 Electronic Edition Exponential decay of C cubic splines vanishing at two symmetric points in each knot interval Sang Dong Kim,,

More information

An Example of Embedded Singular Continuous Spectrum for One-Dimensional Schrödinger Operators

An Example of Embedded Singular Continuous Spectrum for One-Dimensional Schrödinger Operators Letters in Mathematical Physics (2005) 72:225 231 Springer 2005 DOI 10.1007/s11005-005-7650-z An Example of Embedded Singular Continuous Spectrum for One-Dimensional Schrödinger Operators OLGA TCHEBOTAEVA

More information

Explicit construction of rectangular differentiation matrices

Explicit construction of rectangular differentiation matrices IMA Journal of Numerical Analysis (206) 36, 68 632 doi:0.093/imanum/drv03 Advance Access publication on April 26, 205 Explicit construction of rectangular differentiation matrices Kuan Xu Mathematical

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information