Linear Algebra and its Applications 441

Sparse polynomial interpolation in Chebyshev bases

Daniel Potts (a,*), Manfred Tasche (b)

(a) Chemnitz University of Technology, Department of Mathematics, D-09107 Chemnitz, Germany
(b) University of Rostock, Institute of Mathematics, Rostock, Germany

* Corresponding author. E-mail addresses: potts@mathematik.tu-chemnitz.de (D. Potts), manfred.tasche@uni-rostock.de (M. Tasche).

ARTICLE INFO

Article history: Received 19 July 2012; accepted 9 February 2013; available online 11 March 2013. Submitted by V. Mehrmann.

AMS classification: 65D05, 41A45, 65F15, 65F20.

Keywords: Sparse interpolation, Chebyshev basis, Chebyshev polynomial, sparse polynomial, Prony-like method, ESPRIT, matrix pencil factorization, companion matrix, Prony polynomial, eigenvalue problem, rectangular Toeplitz-plus-Hankel matrix.

ABSTRACT

We study the problem of reconstructing a sparse polynomial in a basis of Chebyshev polynomials (Chebyshev basis, in short) from given samples on a Chebyshev grid of [-1, 1]. A polynomial is called M-sparse in a Chebyshev basis if it can be represented by a linear combination of M Chebyshev polynomials. For a polynomial with known and with unknown Chebyshev sparsity, respectively, we present efficient reconstruction methods based on Prony-like methods. The reconstruction results are mainly presented for bases of Chebyshev polynomials of first and second kind, respectively; similar results can be obtained for bases of Chebyshev polynomials of third and fourth kind.

(c) 2013 Elsevier Inc. All rights reserved.

1. Introduction

The central issue of compressive sensing is the recovery of sparse signals from a rather small set of measurements, where a sparse signal can be represented in some basis by a linear combination with few nonzero coefficients.

For example, a 1-periodic trigonometric polynomial of degree at most N-1 with only M nonzero exponential terms can be recovered from O(M log^4(N)) sampling points that are randomly chosen from the equidistant grid {j/N; j = 0, ..., N-1}, where M ≪ N (see [23]). Recently, Rauhut and Ward [21] have presented a recovery method for a polynomial of degree at most N-1 given in a Legendre expansion with M nonzero terms, where O(M log^4(N)) random samples are taken independently according to the Chebyshev probability measure on [-1, 1]. The recovery algorithms in compressive sensing are often based on l1-minimization. Exact recovery of sparse signals or functions can be ensured only with a certain probability. The method of [21] can be extended to sparse polynomial interpolation in a basis of Chebyshev polynomials, too.

In contrast to these random recovery methods, there exist also deterministic methods for the reconstruction of an exponential sum

    H(t) := ∑_{j=1}^{M} c_j e^{i f_j t}   (t ∈ R)

with distinct frequencies f_j ∈ [-π, π) and nonzero complex coefficients c_j. Such methods are the Prony-like methods [19], among them the classical Prony method, the annihilating filter method [5], ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) [22], the matrix pencil method [10,9], and the approximate Prony method [3,18]. This approach allows the recovery of all parameters of H, i.e. M, f_j and c_j for j = 1, ..., M, from equidistant samples H(k) (k = 0, 1, ...), where N ≥ M. Prony-like methods can also be applied to the reconstruction of sparse trigonometric polynomials [19, Example 4.2]. Note that the classical Prony method is equivalent to the annihilating filter method. Unfortunately, the classical Prony method is very sensitive to noise in the sampled data. Hence numerous modifications have been proposed in order to improve the numerical behavior of the Prony method. Efficient Prony-like methods are important within many disciplines in science and engineering (see [15]). For a survey of the most successful methods for the data fitting problem with linear combinations of complex exponentials, we refer to [14]. Note that a variety of papers compare the statistical properties of the different algorithms, see e.g. [10,1,2,6]. Similar results for our newly suggested algorithms are of great interest, but are beyond the scope of this paper.

In this paper, we present a new deterministic approach to sparse polynomial interpolation in a basis of Chebyshev polynomials, if relatively few samples on a Chebyshev grid of [-1, 1] are given. Note that Chebyshev grids are much better suited for the recovery of polynomials than uniform grids (see [4]). For n ∈ N_0, the nth Chebyshev polynomial of first kind can be defined by T_n(x) := cos(n arccos x) (x ∈ [-1, 1]) (see for example [13, p. 2]). These polynomials are orthogonal with respect to the weight (1 - x^2)^{-1/2} on (-1, 1) (see [13, p. 73]) and form the Chebyshev-1 basis. Let M be a positive integer. A polynomial h(x) = ∑_{k=0}^{d} b_k T_k(x) of degree d ≥ M is said to be M-sparse in the Chebyshev-1 basis if M of the coefficients b_k are nonzero and the other d - M + 1 coefficients vanish. Such an M-sparse polynomial h can be represented in the form

    h(x) = ∑_{j=1}^{M} c_j T_{n_j}(x)                                           (1.1)

with c_j := b_{n_j} ≠ 0 and 0 ≤ n_1 < n_2 < ... < n_M = d. The integer M is called the Chebyshev-1 sparsity of the polynomial (1.1).

Recently the authors have presented a unified approach to Prony-like methods for the parameter estimation of an exponential sum [19], namely the classical Prony method, the matrix pencil method [9], and the ESPRIT method [22].

The main idea is based on the evaluation of the eigenvalues of a matrix which is similar to the companion matrix of the Prony polynomial. To this end we have computed the singular value decomposition (SVD) or the QR decomposition of a special Toeplitz-plus-Hankel matrix (T+H matrix). The aim of this paper is to generalize this unified approach in order to obtain stable algorithms for the interpolation problem of a sparse polynomial (1.1) in the Chebyshev-1 basis. Similar sparse interpolation problems were formerly explored in [12,11,7] and solved by Prony methods. For known Chebyshev-1 sparsity, Theorem 2.6 shows that an M-sparse polynomial (1.1) in a Chebyshev basis can be reconstructed from only 2M samples on a special Chebyshev grid. Our method can be considered as a special case of the reconstruction of sparse sums of eigenfunctions of a Chebyshev-shift operator; for details see [17, Remark 4.6]. A Prony-like method for sparse Legendre reconstruction was suggested in [16]. That method can also be generalized to other polynomial systems, but it requires high order derivatives of the sparse polynomial. For the sparse interpolation of a multivariate polynomial, we refer to [8].

The outline of this paper is as follows. In Section 2, we collect some useful properties of T+H matrices and Vandermonde-like matrices. Further we formulate the algorithms for the case that the Chebyshev-1 sparsity M of (1.1) is known and only 2M sampled data of (1.1) on a special Chebyshev grid are given. In Section 3, we obtain corresponding results on sparse polynomial interpolation for unknown Chebyshev-1 sparsity M of (1.1). Furthermore, one can improve the numerical stability of the algorithms by using more sampling values (see Section 5). In Section 4, we discuss the sparse interpolation in the basis of Chebyshev polynomials of second kind. Finally we present some numerical experiments in Section 5, where we apply our methods to sparse polynomial interpolation.

In the following we use standard notations. By N and N_0, respectively, we denote the set of all positive and of all nonnegative integers. The Kronecker symbol is δ_k. The linear space of all column vectors with N real components is denoted by R^N, where o is the corresponding zero vector. The linear space of all real M-by-N matrices is denoted by R^{M×N}, where O_{M,N} is the corresponding zero matrix. For a matrix A_{M,N} ∈ R^{M×N}, its transpose is denoted by A_{M,N}^T and its Moore-Penrose pseudoinverse by A_{M,N}^+. A square matrix A_{M,M} is abbreviated to A_M. By I_M we denote the M-by-M identity matrix. By null A_{M,N} we denote the null space of a matrix A_{M,N}. Further we use the known submatrix notation: A_{M,M+1}(1:M, 2:M+1) is the submatrix of A_{M,M+1} obtained by extracting rows 1 through M and columns 2 through M+1, and A_{M,M+1}(1:M, M+1) means the last column vector of A_{M,M+1}. Definitions are indicated by the symbol :=. Other notations are introduced when needed.

2. Interpolation for known Chebyshev-1 sparsity

This section has an introductory character. Under the restricted assumption that the Chebyshev-1 sparsity M of the polynomial (1.1) is a priori known, we introduce the problem (2.1) of sparse polynomial interpolation in the Chebyshev-1 basis and the related Prony polynomial (2.3). Then we collect some useful properties of square T+H matrices and square Vandermonde-like matrices. We find a factorization (2.8) of the T+H matrix and prove an interesting relation between the Prony polynomial (2.3) and its companion matrix (see Lemma 2.5). Similar sparse interpolation problems in the Chebyshev-1 basis were formerly explored in [12,11,7] and solved by a Prony method (such as Algorithm 2.7). In [12,11], the grid {T_k(a) = cosh(k arcosh a); k = 0, ..., 2M-1} with fixed a > 1 is used for the interpolation. In [7], the grid {T_k(cos(2π/N)) = cos(2kπ/N); k = 0, ..., 2M-1} with N ≥ 2 n_M is applied for interpolation. The main results of Section 2 are the Algorithms 2.9 and 2.10.

Let N ∈ N be sufficiently large such that N > M and 2N - 1 is an upper bound of the degree of the polynomial (1.1). For u_N := cos(π/(2N-1)) we form the nonequidistant Chebyshev grid

    {u_{N,k} := T_k(u_N) = cos(kπ/(2N-1)); k = 0, ..., 2M-1}

of the interval [-1, 1]. Note that T_{2N-1}(u_{N,k}) = (-1)^k (k = 0, ..., 2M-1). We consider the following problem of sparse polynomial interpolation in the Chebyshev-1 basis: For given sampled data

    h_k := h(u_{N,k}) = h(cos(kπ/(2N-1)))   (k = 0, ..., 2M-1),                 (2.1)

determine all parameters n_j and c_j (j = 1, ..., M) of the sparse polynomial (1.1).
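To make the sampling scheme (2.1) concrete, the following short NumPy sketch generates the data h_k of an M-sparse polynomial (1.1) on the grid u_{N,k}. It is our own illustration (the paper's experiments use MATLAB), and the function name chebyshev_samples is ours.

```python
import numpy as np

def chebyshev_samples(n, c, N, num_samples):
    """Samples h_k = h(u_{N,k}) of the M-sparse polynomial (1.1),
    h(x) = sum_j c[j] * T_{n[j]}(x), on the grid u_{N,k} = cos(k*pi/(2N-1))."""
    n = np.asarray(n)                                   # degrees 0 <= n_1 < ... < n_M
    c = np.asarray(c, float)                            # nonzero coefficients c_j
    t = np.arange(num_samples) * np.pi / (2 * N - 1)    # t_k = arccos(u_{N,k})
    # T_{n_j}(cos t) = cos(n_j t), hence h(u_{N,k}) = sum_j c_j cos(n_j t_k).
    return np.cos(np.outer(t, n)) @ c

# Example: a 3-sparse polynomial with degrees 6, 12, 200 sampled at 2M = 6 points.
h = chebyshev_samples([6, 12, 200], [1.0, 2.0, 3.0], N=256, num_samples=6)
```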

If we substitute x = cos t (t ∈ [0, π]), then we see that the above interpolation problem is closely related to the interpolation problem for the sparse, even trigonometric polynomial

    g(t) := h(cos t) = ∑_{j=1}^{M} c_j cos(n_j t)   (t ∈ [0, π]),               (2.2)

where the sampled values g(kπ/(2N-1)) = h_k (k = 0, ..., 2M-1) are given (see [7,20]). We introduce the Prony polynomial P of degree M with leading coefficient 2^{M-1}, whose roots are x_j := T_{n_j}(u_N) = cos(n_j π/(2N-1)) (j = 1, ..., M), i.e.

    P(x) = 2^{M-1} ∏_{j=1}^{M} (x - cos(n_j π/(2N-1))).                         (2.3)

Then the Prony polynomial P can be represented in the Chebyshev-1 basis by

    P(x) = ∑_{l=0}^{M} p_l T_l(x)   (p_M := 1).                                 (2.4)

The coefficients p_l of the Prony polynomial (2.4) can be characterized as follows. Here and in the following we set h_{-k} := h_k, which is consistent with (2.1), since the cosine is even.

Lemma 2.1. For all k = 0, ..., M-1, the sampled data h_k and the coefficients p_l of the Prony polynomial (2.4) satisfy the equations

    ∑_{j=0}^{M-1} (h_{j+k} + h_{j-k}) p_j = -(h_{k+M} + h_{M-k}).               (2.5)

Proof. Using cos(α + β) + cos(α - β) = 2 cos α cos β, we obtain by (2.2) that

    h_{j+k} + h_{j-k} = ∑_{l=1}^{M} c_l [cos(n_l(j+k)π/(2N-1)) + cos(n_l(j-k)π/(2N-1))]
                      = 2 ∑_{l=1}^{M} c_l cos(n_l jπ/(2N-1)) cos(n_l kπ/(2N-1)).   (2.6)

Thus we conclude that

    ∑_{j=0}^{M} (h_{j+k} + h_{j-k}) p_j = 2 ∑_{l=1}^{M} c_l cos(n_l kπ/(2N-1)) ∑_{j=0}^{M} p_j cos(n_l jπ/(2N-1))
                                        = 2 ∑_{l=1}^{M} c_l cos(n_l kπ/(2N-1)) P(cos(n_l π/(2N-1))) = 0.

By p_M = 1, this implies the assertion (2.5).

Introducing the vectors h(k) := (h_{j+k} + h_{j-k})_{j=0}^{M-1} (k = 0, ..., M) and the square T+H matrix

    H_M(0) := (h_{j+k} + h_{j-k})_{j,k=0}^{M-1} = (h(0) | h(1) | ... | h(M-1))

            = [ 2 h_0        2 h_1            ...   2 h_{M-1}
                2 h_1        h_2 + h_0        ...   h_M + h_{M-2}
                ...          ...                    ...
                2 h_{M-1}    h_M + h_{M-2}    ...   h_{2M-2} + h_0 ],

we see by (2.5) that the vector p := (p_k)_{k=0}^{M-1} is a solution of the linear system

    H_M(0) p = -h(M).                                                           (2.7)

Lemma 2.2. Let M and N be integers with 1 ≤ M ≤ N. Further let h be an M-sparse polynomial of degree at most 2N-1 in the Chebyshev-1 basis. If h(u_{N,j}) = 0 for j = 0, ..., M-1, then h is identically zero. Further, the Vandermonde-like matrix

    V_M(x) := (T_{n_j}(u_{N,k}))_{k=0,j=1}^{M-1,M} = (T_k(x_j))_{k=0,j=1}^{M-1,M} = (cos(n_j kπ/(2N-1)))_{k=0,j=1}^{M-1,M}

with x := (x_j)_{j=1}^{M} is nonsingular, and the T+H matrix H_M(0) can be factorized in the form

    H_M(0) = 2 V_M(x) (diag c) V_M(x)^T                                         (2.8)

and is nonsingular.

Proof. 1. Assume that the Vandermonde-like matrix V_M(x) is singular. Then there exists a vector d = (d_l)_{l=0}^{M-1} ≠ o such that d^T V_M(x) = o^T. We consider the even trigonometric polynomial D of order at most M-1 given by D(t) = ∑_{l=0}^{M-1} d_l cos(lt) (t ∈ R). Then d^T V_M(x) = o^T implies that t_j = n_j π/(2N-1) ∈ [0, π] (j = 1, ..., M) are roots of D. These M roots are distinct, because 0 ≤ n_1 < ... < n_M < 2N. But this is impossible, since an even trigonometric polynomial D ≠ 0 of order at most M-1 cannot have M distinct roots in [0, π]. Therefore V_M(x) is nonsingular. If h(u_{N,j}) = 0 for j = 0, ..., M-1, then V_M(x) c = o. Since V_M(x) is nonsingular, c is equal to o, such that h is identically zero.
2. The factorization (2.8) of the T+H matrix H_M(0) follows immediately from (2.6). Since c_j ≠ 0 (j = 1, ..., M), diag c is nonsingular. Further, the Vandermonde-like matrix V_M(x) is nonsingular, such that H_M(0) is nonsingular too.
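As an illustration of the linear system (2.7), the following hedged NumPy sketch assembles H_M(0) and h(M) from the samples h_0, ..., h_{2M-1} and solves for the Prony coefficients p_0, ..., p_{M-1}. The helper name prony_system is ours; the code is not taken from the paper.

```python
import numpy as np

def prony_system(h, M):
    """Assemble and solve (2.7): H_M(0) p = -h(M), where
    H_M(0)[j, k] = h_{j+k} + h_{|j-k|} and h(M)[j] = h_{j+M} + h_{|j-M|}.
    h must contain at least the samples h_0, ..., h_{2M-1}."""
    h = np.asarray(h, float)
    H = np.array([[h[j + k] + h[abs(j - k)] for k in range(M)] for j in range(M)])
    rhs = np.array([h[j + M] + h[abs(j - M)] for j in range(M)])
    p = np.linalg.solve(H, -rhs)     # p_0, ..., p_{M-1}; p_M = 1 by convention
    return H, p
```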

Introducing the matrix

    P_M := S_M - (o | ... | o | p) ∈ R^{M×M},

i.e. the modified shift matrix S_M of Lemma 2.4 below whose last column is replaced by (-p_0, ..., -p_{M-3}, 1-p_{M-2}, -p_{M-1})^T, and using the linear system (2.7), we see that

    H_M(0) P_M = H_M(1) + (o | h(0) | ... | h(M-2))

with the T+H matrix

    H_M(1) := (h(1) | h(2) | ... | h(M)) = (h_{j+k+1} + h_{j-k-1})_{j,k=0}^{M-1} ∈ R^{M×M}.

This T+H matrix has the following properties:

Lemma 2.3. The T+H matrix H_M(1) can be factorized in the form

    H_M(1) = 2 V_M(x) (diag c) Ṽ_M(x)^T                                         (2.9)

with the Vandermonde-like matrix Ṽ_M(x) := (T_k(x_j))_{k,j=1}^{M}. Further, the matrices H_M(1) and Ṽ_M(x) are nonsingular.

Proof. 1. By Lemma 2.1 we know that

    ∑_{k=0}^{M} (h_{j+k} + h_{j-k}) p_k = 0   (j = 0, ..., 2N - M - 1).

Consequently we obtain

    H_M(0) (p_k)_{k=0}^{M-1} = -h(M),   H_M(1) (p_{k+1})_{k=0}^{M-1} = -p_0 h(0),

where

    p_0 = 2^{M-1} (-1)^M ∏_{j=1}^{M} cos(n_j π/(2N-1))

does not vanish. This implies that

    h(M) ∈ span{h(0), ..., h(M-1)},   h(0) ∈ span{h(1), ..., h(M)}.

Thus we obtain that rank H_M(0) = rank H_M(1) = M.
2. The (j,k)th element of the matrix product 2 V_M(x) (diag c) Ṽ_M(x)^T can be computed analogously to (2.6), such that

    2 ∑_{l=1}^{M} c_l T_{n_l}(u_{N,j}) T_{n_l}(u_{N,k+1}) = h_{j+k+1} + h_{j-k-1}.

Since H_M(1), V_M(x), and diag c are nonsingular, it follows from (2.9) that the Vandermonde-like matrix Ṽ_M(x) is nonsingular too.

In the following Lemmas 2.4 and 2.5 we show that the zeros of the Prony polynomial (2.4) can be computed via solving an eigenvalue problem. To this end, we represent the Chebyshev polynomial T_M in the form of a determinant.

Lemma 2.4. Let M be a positive integer. Further let E_M := diag(1/2, 1, ..., 1) ∈ R^{M×M} and let

    S_M := (δ_{j-k-1} + δ_{j-k+1})_{j,k=0}^{M-1} ∈ R^{M×M}

be the modified shift matrix, i.e. the tridiagonal matrix with zero diagonal and ones on the sub- and superdiagonal. Then

    det(2x E_M - S_M) = T_M(x)   (x ∈ R).

Proof. We show this by induction. For M = 1 and M = 2 the assertion follows immediately. For M ≥ 3 we expand the determinant of the tridiagonal matrix 2x E_M - S_M, which has the diagonal (x, 2x, ..., 2x) and off-diagonal entries -1, using cofactors of the last row (cf. [13, p. 18]). Then we obtain the known recursion of the Chebyshev polynomials T_M(x) = 2x T_{M-1}(x) - T_{M-2}(x) (see [13, p. 2]). This completes the proof.

Now we show that (1/2) E_M^{-1} P_M is the companion matrix of the Prony polynomial (2.4) in the Chebyshev-1 basis.
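Lemma 2.4 is easy to check numerically. The following small snippet is our own sanity check (not part of the paper): it builds E_M and S_M and compares det(2x E_M - S_M) with T_M(x) at a few points.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Check of Lemma 2.4: det(2x E_M - S_M) = T_M(x).
M = 6
E = np.eye(M); E[0, 0] = 0.5                                   # E_M = diag(1/2, 1, ..., 1)
S = np.diag(np.ones(M - 1), 1) + np.diag(np.ones(M - 1), -1)   # modified shift matrix S_M
for x in np.linspace(-0.9, 0.9, 5):
    assert abs(np.linalg.det(2 * x * E - S) - C.chebval(x, [0] * M + [1])) < 1e-9
```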

Lemma 2.5. Let M be a positive integer. Then (1/2) E_M^{-1} P_M is the companion matrix of the Prony polynomial (2.4) in the Chebyshev-1 basis, i.e.

    det(2x E_M - P_M) = 2^{M-1} det(x I_M - (1/2) E_M^{-1} P_M) = P(x)   (x ∈ R).

Proof. Applying Lemma 2.4 and

    P_M = S_M - (o | ... | o | p),                                              (2.10)

we compute det(2x E_M - P_M) using cofactors of the last column. Then we obtain on the one hand

    det(2x E_M - P_M) = T_M(x) + ∑_{l=0}^{M-1} p_l T_l(x) = P(x)   (x ∈ R).

On the other hand it follows that

    det(2x E_M - P_M) = det(2 E_M) det(x I_M - (1/2) E_M^{-1} P_M)

with det(2 E_M) = 2^{M-1}. This completes the proof.

Theorem 2.6. Let M and N be integers with 1 ≤ M < N. Let h be an M-sparse polynomial of degree at most 2N-1 in the Chebyshev-1 basis. Then the M coefficients c_j ∈ R (j = 1, ..., M) and the M nonnegative integers n_j (j = 1, ..., M) of (1.1) can be reconstructed from the 2M samples h_k = h(cos(kπ/(2N-1))) (k = 0, ..., 2M-1).

Proof. Using Lemma 2.1, we obtain the linear system (2.7). The matrix H_M(0) is nonsingular by Lemma 2.2. By Lemma 2.5, the eigenvalues of the companion matrix (1/2) E_M^{-1} P_M of the Prony polynomial (2.4) in the Chebyshev-1 basis coincide with the zeros of (2.4). By (2.10), we compute the zeros of the Prony polynomial (2.4) via solving an eigenvalue problem, and from these zeros we obtain the nonnegative integers n_j (j = 1, ..., M). We form the Vandermonde-like matrix V_M(x) with x_j = T_{n_j}(u_N) (j = 1, ..., M), which is nonsingular by Lemma 2.2, and obtain finally the coefficients c_j ∈ R (j = 1, ..., M).

Thus we can summarize:

Algorithm 2.7 (Prony method for sparse Chebyshev-1 interpolation).
Input: N ∈ N with N > M, h_k = h(u_{N,k}) ∈ R (k = 0, ..., 2M-1), M ∈ N Chebyshev-1 sparsity of the polynomial (1.1) of degree at most 2N-1.
1. Solve the square system H_M(0) (p_j)_{j=0}^{M-1} = -h(M).
2. Determine the simple roots x_j (j = 1, ..., M) of the Prony polynomial (2.4), where 1 ≥ x_1 > x_2 > ... > x_M ≥ -1, and compute n_j := [(2N-1)/π · arccos x_j] (j = 1, ..., M), where [x] := ⌊x + 0.5⌋ means rounding of x ∈ R to the nearest integer.
3. Compute c_j ∈ R (j = 1, ..., M) as solution of the square Vandermonde-like system V_M(x) c = (h_k)_{k=0}^{M-1} with c := (c_j)_{j=1}^{M}.
Output: n_j ∈ N_0 (0 ≤ n_1 < n_2 < ... < n_M < 2N), c_j ∈ R (j = 1, ..., M).
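A minimal NumPy sketch of Algorithm 2.7 may look as follows. It reuses the prony_system helper from the sketch above and uses numpy.polynomial.chebyshev.chebroots for step 2, which computes the eigenvalues of the colleague (companion) matrix of a Chebyshev series; it is an illustrative reimplementation under these assumptions, not the authors' code.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def sparse_cheb1_interpolation(h, M, N):
    """Sketch of Algorithm 2.7 (Prony method, known Chebyshev-1 sparsity M).
    h: samples h_k = h(cos(k*pi/(2N-1))), k = 0, ..., 2M-1."""
    h = np.asarray(h, float)
    # Step 1: solve the square T+H system H_M(0) p = -h(M), cf. (2.7).
    _, p = prony_system(h, M)                   # helper from the sketch above
    # Step 2: roots of P = p_0 T_0 + ... + p_{M-1} T_{M-1} + T_M; chebroots
    # computes the eigenvalues of the colleague (companion) matrix.
    x = np.sort(C.chebroots(np.append(p, 1.0)).real)[::-1]
    n = np.rint((2 * N - 1) / np.pi * np.arccos(np.clip(x, -1.0, 1.0))).astype(int)
    # Step 3: square Vandermonde-like system V_M(x) c = (h_0, ..., h_{M-1})^T,
    # where V_M(x)[k, j] = T_k(x_j) = cos(n_j * k * pi / (2N-1)).
    V = np.cos(np.outer(np.arange(M), n) * np.pi / (2 * N - 1))
    c = np.linalg.solve(V, h[:M])
    return n, c
```

For the data generated by chebyshev_samples above with M = 3, sparse_cheb1_interpolation(h, 3, 256) should return the degrees (6, 12, 200) and the coefficients (1, 2, 3) up to rounding errors.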

Now we show that the matrix pencil method follows directly from the Prony method. First we observe that H_M(0) = 2 V_M(x) (diag c) V_M(x)^T. Since c_j ≠ 0 (j = 1, ..., M), the matrix H_M(0) has rank M and is invertible. Note that the Chebyshev-1 sparsity of the polynomial (1.1) coincides with the rank of H_M(0). Hence we conclude that

    det(2x H_M(0) E_M - H_M(0) P_M) = det(H_M(0)) det(2x E_M - P_M) = det(H_M(0)) P(x),

such that the eigenvalues of the square matrix pencil

    2x H_M(0) E_M - H_M(0) P_M   (x ∈ R)                                        (2.11)

are exactly x_j = cos(n_j π/(2N-1)) ∈ [-1, 1] (j = 1, ..., M). Each eigenvalue x_j of the matrix pencil (2.11) is simple and has a right eigenvector v = (v_k)_{k=0}^{M-1} with

    v_{M-1} = T_M(x_j) = -∑_{l=0}^{M-1} p_l T_l(x_j),

since P(x_j) = 0 and P has the form (2.4). By this special choice of v_{M-1} one can easily determine the other components v_{M-2}, ..., v_0, which can be computed recursively from the linear system P_M v = 2x_j E_M v. Hence we obtain H_M(0) P_M v = 2x_j H_M(0) E_M v, where the matrices can be represented in the form

    H_M(0) P_M = H_M(1) + (o | h(0) | ... | h(M-2)),
    2 H_M(0) E_M = H_M(0) + (o | h(1) | ... | h(M-1)).

Example 2.8. In the case M = 3 we have to solve the linear system

    [ 0  1  -p_0  ] [v_0]   [  x_j v_0 ]
    [ 1  0  1-p_1 ] [v_1] = [ 2x_j v_1 ]
    [ 0  1  -p_2  ] [v_2]   [ 2x_j v_2 ]

with v_2 = T_3(x_j) = -∑_{l=0}^{2} p_l T_l(x_j). Then we determine the other components of the eigenvector v = (v_l)_{l=0}^{2} as

    v_1 = -p_1 T_0(x_j) - (2p_0 + p_2) T_1(x_j) - p_1 T_2(x_j),
    v_0 = -(p_0 + p_2) T_0(x_j) - 2p_1 T_1(x_j) - 2p_0 T_2(x_j).

In the following, we factorize the square T+H matrices H_M(s) (s = 0, 1) simultaneously. Therefore we introduce the rectangular T+H matrix

    H_{M,M+1} := (H_M(0) | H_M(1)(1:M, M)) = (h(0) | h(1) | ... | h(M)),         (2.12)

such that conversely

    H_M(s) = H_{M,M+1}(1:M, 1+s:M+s)   (s = 0, 1).                               (2.13)

Then we compute the QR factorization of H_{M,M+1} with column pivoting and obtain

    H_{M,M+1} Π_{M+1} = Q_M R_{M,M+1}

with an orthogonal matrix Q_M, a permutation matrix Π_{M+1}, and a trapezoidal matrix R_{M,M+1}, where R_{M,M+1}(1:M, 1:M) is a nonsingular upper triangular matrix. Note that the permutation matrix Π_{M+1} is chosen such that the diagonal entries of R_{M,M+1}(1:M, 1:M) have nonincreasing absolute values. Using the definition S_{M,M+1} := R_{M,M+1} Π_{M+1}^T, we infer by (2.13) that

    H_M(s) = Q_M S_M(s)   (s = 0, 1),   where   S_M(s) := S_{M,M+1}(1:M, 1+s:M+s)   (s = 0, 1).

Hence we can factorize the matrices 2 H_M(0) E_M and H_M(0) P_M in the form

    2 H_M(0) E_M = H_M(0) + (o | h(1) | ... | h(M-1)) = Q_M S̃_M(0),
    H_M(0) P_M = H_M(1) + (o | h(0) | ... | h(M-2)) = Q_M S̃_M(1),

where

    S̃_M(0) := S_M(0) + (o | S_M(1)(1:M, 1:M-1)),                                (2.14)
    S̃_M(1) := S_M(1) + (o | S_M(0)(1:M, 1:M-1)).                                (2.15)

Since Q_M is orthogonal, the generalized eigenvalue problem of the matrix pencil (2.11) is equivalent to the generalized eigenvalue problem of the matrix pencil

    x S̃_M(0) - S̃_M(1) = S̃_M(0) (x I_M - (S̃_M(0))^{-1} S̃_M(1))   (x ∈ R).

Since H_M(0) is nonsingular by Lemma 2.2, the matrix 2 H_M(0) E_M is nonsingular too. Hence S̃_M(0) = 2 Q_M^T H_M(0) E_M is invertible. We summarize this method:

Algorithm 2.9 (Matrix pencil factorization based on QR decomposition for sparse Chebyshev-1 interpolation).
Input: N ∈ N with N > M, h_k = h(u_{N,k}) ∈ R (k = 0, ..., 2M-1), M ∈ N Chebyshev-1 sparsity of the polynomial (1.1) of degree at most 2N-1.
1. Compute the QR factorization with column pivoting of the rectangular T+H matrix (2.12) and form the matrices (2.14) and (2.15).
2. Determine the eigenvalues x_j ∈ [-1, 1] (j = 1, ..., M) of the square matrix (S̃_M(0))^{-1} S̃_M(1), where the x_j are ordered as 1 ≥ x_1 > x_2 > ... > x_M ≥ -1. Form n_j := [(2N-1)/π · arccos x_j] (j = 1, ..., M).
3. Compute c_j ∈ R (j = 1, ..., M) as solution of the square Vandermonde-like system V_M(x) c = (h_k)_{k=0}^{M-1} with x := (x_j)_{j=1}^{M} and c := (c_j)_{j=1}^{M}.
Output: n_j ∈ N_0 (0 ≤ n_1 < n_2 < ... < n_M < 2N), c_j ∈ R (j = 1, ..., M).

In contrast to Algorithm 2.9, we now use the singular value decomposition (SVD) of the rectangular T+H matrix (2.12) and obtain a method which is known as the ESPRIT method. Applying the SVD to H_{M,M+1}, we obtain

    H_{M,M+1} = U_M D_{M,M+1} W_{M+1}

with orthogonal matrices U_M, W_{M+1} and a diagonal matrix D_{M,M+1}, whose diagonal entries are the ordered singular values σ_1 ≥ σ_2 ≥ ... ≥ σ_M > 0 of H_{M,M+1}. Introducing

    D_M := D_{M,M+1}(1:M, 1:M),   W_{M,M+1} := W_{M+1}(1:M, 1:M+1),

we can simplify the SVD of (2.12) to H_{M,M+1} = U_M D_M W_{M,M+1}. Note that W_{M,M+1} W_{M,M+1}^T = I_M. Setting

    W_M(s) := W_{M,M+1}(1:M, 1+s:M+s)   (s = 0, 1),

it follows from (2.13) that H_M(s) = U_M D_M W_M(s) (s = 0, 1). Hence we can factorize the matrices 2 H_M(0) E_M and H_M(0) P_M in the form

    2 H_M(0) E_M = H_M(0) + (o | h(1) | ... | h(M-1)) = U_M D_M W̃_M(0),
    H_M(0) P_M = H_M(1) + (o | h(0) | ... | h(M-2)) = U_M D_M W̃_M(1),

where

    W̃_M(0) := W_M(0) + (o | W_M(1)(1:M, 1:M-1)),                                (2.16)
    W̃_M(1) := W_M(1) + (o | W_M(0)(1:M, 1:M-1)).                                (2.17)

Clearly, W̃_M(0) = 2 D_M^{-1} U_M^T H_M(0) E_M is a nonsingular matrix by construction. Then we infer that the generalized eigenvalue problem of the matrix pencil (2.11) is equivalent to the generalized eigenvalue problem of the matrix pencil

    x W̃_M(0) - W̃_M(1) = W̃_M(0) (x I_M - (W̃_M(0))^{-1} W̃_M(1)),

since U_M is orthogonal and D_M is invertible. Therefore we obtain that P_M = (H_M(0))^{-1} U_M D_M W̃_M(1).

Algorithm 2.10 (ESPRIT method for sparse Chebyshev-1 interpolation).
Input: N ∈ N with N > M, h_k ∈ R (k = 0, ..., 2M-1), M ∈ N Chebyshev-1 sparsity of the polynomial (1.1) of degree at most 2N-1.
1. Compute the SVD of the rectangular T+H matrix (2.12) and form the matrices (2.16) and (2.17).
2. Determine the eigenvalues x_j ∈ [-1, 1] (j = 1, ..., M) of (W̃_M(0))^{-1} W̃_M(1), where the x_j are ordered as 1 ≥ x_1 > x_2 > ... > x_M ≥ -1. Form n_j := [(2N-1)/π · arccos x_j] (j = 1, ..., M).
3. Compute the coefficients c_j ∈ R (j = 1, ..., M) as solution of the square Vandermonde-like system V_M(x) c = (h_k)_{k=0}^{M-1} with x := (x_j)_{j=1}^{M} and c := (c_j)_{j=1}^{M}.
Output: n_j ∈ N_0 (0 ≤ n_1 < n_2 < ... < n_M < 2N), c_j ∈ R (j = 1, ..., M).

Remark 2.11. The last step of the Algorithms 2.7-2.10 can be replaced by the computation of the real coefficients c_j (j = 1, ..., M) as least squares solution of the overdetermined Vandermonde-like system V_{2M,M}(x) c = (h_k)_{k=0}^{2M-1} with the rectangular Vandermonde-like matrix

    V_{2M,M}(x) := (T_k(x_j))_{k=0,j=1}^{2M-1,M} = (cos(n_j kπ/(2N-1)))_{k=0,j=1}^{2M-1,M}.

In the case of sparse Chebyshev-1 interpolation of (1.1) with known Chebyshev-1 sparsity M, we have seen that each method determines the eigenvalues x_j (j = 1, ..., M) of the matrix pencil 2x E_M - P_M, where (1/2) E_M^{-1} P_M is the companion matrix of the Prony polynomial (2.4) in the Chebyshev-1 basis.
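For comparison, here is a hedged NumPy sketch of Algorithm 2.10 (ESPRIT with known sparsity M) following the SVD factorization (2.16)-(2.17). The function name esprit_cheb1_known and the numerical safeguards (taking real parts, clipping to [-1, 1], using the least squares variant of Remark 2.11) are our own choices.

```python
import numpy as np

def esprit_cheb1_known(h, M, N):
    """Sketch of Algorithm 2.10 (ESPRIT, known Chebyshev-1 sparsity M).
    h: samples h_k = h(cos(k*pi/(2N-1))), k = 0, ..., 2M-1."""
    h = np.asarray(h, float)
    # Rectangular T+H matrix (2.12) with columns h(0), ..., h(M).
    H = np.array([[h[j + k] + h[abs(j - k)] for k in range(M + 1)] for j in range(M)])
    # SVD H = U diag(sigma) W; keep the first M rows of the right factor W_{M+1}.
    _, _, W = np.linalg.svd(H)
    W = W[:M, :]
    W0, W1 = W[:, 0:M], W[:, 1:M + 1]                 # W_M(0), W_M(1)
    W0t = W0.copy(); W0t[:, 1:] += W1[:, :M - 1]      # modified matrices (2.16), (2.17)
    W1t = W1.copy(); W1t[:, 1:] += W0[:, :M - 1]
    # Step 2: eigenvalues of W0t^{-1} W1t are x_j = cos(n_j*pi/(2N-1)).
    x = np.linalg.eigvals(np.linalg.solve(W0t, W1t)).real
    x = np.sort(np.clip(x, -1.0, 1.0))[::-1]
    n = np.rint((2 * N - 1) / np.pi * np.arccos(x)).astype(int)
    # Step 3 (variant of Remark 2.11): overdetermined Vandermonde-like system.
    V = np.cos(np.outer(np.arange(2 * M), n) * np.pi / (2 * N - 1))
    c = np.linalg.lstsq(V, h, rcond=None)[0]
    return n, c
```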

3. Interpolation for unknown Chebyshev-1 sparsity

This section is the core of the paper. Here we consider the problem of sparse polynomial interpolation in the important case of unknown Chebyshev-1 sparsity M of the polynomial (1.1). We assume only that an upper bound of the Chebyshev-1 sparsity is known. Roughly speaking, we generalize the results of Section 2 to rectangular T+H matrices and rectangular Vandermonde-like matrices. We show factorizations of rectangular T+H matrices and the interesting relation (3.8) between the modified Prony polynomial (3.6) and the T+H matrices (see Lemma 3.2). The zeros of the modified Prony polynomial can be computed via solving an eigenvalue problem for the related companion matrix. The main results of Section 3 are the Algorithms 3.3-3.5. Numerical examples in Section 5 show that the Algorithms 3.4 and 3.5 are numerically stable in floating point arithmetic.

Let L ∈ N be a convenient upper bound of the unknown Chebyshev-1 sparsity M of the polynomial (1.1) of degree at most 2N-1, where N ∈ N is sufficiently large with M ≤ L ≤ N. In order to improve the numerical stability, we allow to choose more sampling points. Therefore we introduce an additional parameter K with L ≤ K ≤ N such that we use K + L sampling points of (1.1); more precisely, we assume that the noiseless sampled data h_k = h(u_{N,k}) (k = 0, ..., L+K-1) are given. With the L + K sampled data h_k ∈ R (k = 0, ..., L+K-1) we form the rectangular T+H matrices

    H_{K,L+1} := (h_{l+m} + h_{l-m})_{l=0,m=0}^{K-1,L},                         (3.1)
    H_{K,L}(s) := (h_{l+m+s} + h_{l-m-s})_{l=0,m=0}^{K-1,L-1}   (s = 0, 1).     (3.2)

Then H_{K,L}(1) is a shifted version of the T+H matrix H_{K,L}(0) and

    H_{K,L+1} = (H_{K,L}(0) | H_{K,L}(1)(1:K, L)),   H_{K,L}(s) = H_{K,L+1}(1:K, 1+s:L+s)   (s = 0, 1).   (3.3)

Note that in the special case M = L = K we obtain again the matrices (2.12) and (2.13). Using the coefficients p_k (k = 0, ..., M-1) of the Prony polynomial (2.4), we form the vector p_L := (p_k)_{k=0}^{L-1} with p_M := 1 and p_{M+1} = ... = p_{L-1} := 0. By

    S_L := (δ_{k-l-1} + δ_{k-l+1})_{k,l=0}^{L-1}

we denote the sum of the forward and the backward shift matrix, where δ_k is the Kronecker symbol. Analogously, we introduce p_{L+1} := (p_k)_{k=0}^{L} with p_L := 0 if L > M, and S_{L+1} := (δ_{k-l-1} + δ_{k-l+1})_{k,l=0}^{L}.

Lemma 3.1. Let L, K, M, N ∈ N with M ≤ L ≤ K ≤ N be given. Furthermore, let h_k = h(u_{N,k}) (k = 0, ..., L+K-1) be noiseless sampled data of the sparse polynomial (1.1) of degree at most 2N-1 with coefficients c_j ∈ R \ {0} (j = 1, ..., M). Then

    rank H_{K,L+1} = rank H_{K,L}(s) = M   (s = 0, 1).                          (3.4)

If L = M, then null H_{K,M+1} = span{p_{M+1}} and null H_{K,M}(s) = {o} for s = 0, 1. If L > M, then

    null H_{K,L+1} = span{p_{L+1}, S_{L+1} p_{L+1}, ..., S_{L+1}^{L-M} p_{L+1}},
    null H_{K,L}(s) = span{p_L, S_L p_L, ..., S_L^{L-M-1} p_L}   (s = 0, 1),

and dim(null H_{K,L+1}) = L - M + 1, dim(null H_{K,L}(s)) = L - M (s = 0, 1).

Proof. 1. For x_j = T_{n_j}(u_N) (j = 1, ..., M), we introduce the rectangular Vandermonde-like matrices

    V_{K,M}(x) := (T_{k-1}(x_j))_{k=1,j=1}^{K,M} = (cos(n_j (k-1)π/(2N-1)))_{k=1,j=1}^{K,M},   (3.5)
    Ṽ_{K,M}(x) := (T_k(x_j))_{k=1,j=1}^{K,M} = (cos(n_j kπ/(2N-1)))_{k=1,j=1}^{K,M},

which have rank M, since V_M(x) and Ṽ_M(x) are nonsingular by Lemmas 2.2 and 2.3.

Then the rectangular T+H matrices (3.1) and (3.2) can be factorized in the form

    H_{K,L+1} = 2 V_{K,M}(x) (diag c) V_{L+1,M}(x)^T,
    H_{K,L}(0) = 2 V_{K,M}(x) (diag c) V_{L,M}(x)^T,
    H_{K,L}(1) = 2 V_{K,M}(x) (diag c) Ṽ_{L,M}(x)^T

with x = (x_j)_{j=1}^{M} and c = (c_j)_{j=1}^{M}. This can be shown in a similar way as in the proof of Lemma 2.2. Since c_j ≠ 0 and since the x_j ∈ [-1, 1] are distinct, we obtain (3.4). Using rank estimation, we can determine the rank and thus the Chebyshev-1 sparsity of the sparse polynomial (1.1). By (3.4) and H_{K,M+1} p_{M+1} = o (see (2.5)), the 1-dimensional null space of H_{K,M+1} is spanned by p_{M+1}. Furthermore, the null spaces of H_{K,M}(s) are trivial for s = 0, 1.
2. Assume that L > M. From

    ∑_{m=0}^{M} p_m (h_{l+m+s} + h_{l-m-s}) = 0   (l = 0, ..., 2N - M - s - 1; s = 0, 1)

it follows that

    H_{K,L+1} (S_{L+1}^j p_{L+1}) = o   (j = 0, ..., L-M)

and analogously

    H_{K,L}(s) (S_L^j p_L) = o   (j = 0, ..., L-M-1; s = 0, 1),

where o denotes the corresponding zero vector. By p_M = 1, we see that the vectors S_{L+1}^j p_{L+1} (j = 0, ..., L-M) and S_L^j p_L (j = 0, ..., L-M-1) are linearly independent and located in null H_{K,L+1} and null H_{K,L}(s), respectively.
3. Let again L > M. Now we prove that null H_{K,L+1} is contained in the linear span of the vectors S_{L+1}^j p_{L+1} (j = 0, ..., L-M). Let u = (u_l)_{l=0}^{L} ∈ R^{L+1} be an arbitrary right eigenvector of H_{K,L+1} related to the eigenvalue 0 and let U be the corresponding polynomial U(x) = ∑_{l=0}^{L} u_l T_l(x) (x ∈ R). Using the noiseless sampled data h_k = h(u_{N,k}) (k = 0, ..., L+K-1), we obtain

    0 = ∑_{m=0}^{L} (h_{l+m} + h_{l-m}) u_m = ∑_{m=0}^{L} u_m ∑_{j=1}^{M} c_j [T_{n_j}(u_{N,l+m}) + T_{n_j}(u_{N,l-m})].

Thus, by T_{n_j}(u_{N,l+m}) + T_{n_j}(u_{N,l-m}) = T_{l+m}(x_j) + T_{l-m}(x_j) = 2 T_l(x_j) T_m(x_j), it follows that

    0 = 2 ∑_{j=1}^{M} c_j T_l(x_j) U(x_j)   (l = 0, ..., K-1)

and hence by (3.5)

    V_{K,M}(x) (c_j U(x_j))_{j=1}^{M} = o.

Since the x_j ∈ [-1, 1] (j = 1, ..., M) are distinct, the square Vandermonde-like matrix V_M(x) is nonsingular by Lemma 2.2. Hence we obtain U(x_j) = 0 (j = 1, ..., M), since c_j ≠ 0. Thus it follows that U(x) = P(x) R(x) with a certain polynomial

    R(x) = ∑_{k=0}^{L-M} r_k T_k(x)   (x ∈ R; r_k ∈ R).

But this means for the coefficients of the polynomials P, R, and U that

    u = r_0 p_{L+1} + r_1 S_{L+1} p_{L+1} + ... + r_{L-M} S_{L+1}^{L-M} p_{L+1}.

Hence the vectors S_{L+1}^j p_{L+1} (j = 0, ..., L-M) form a basis of null H_{K,L+1}, such that dim(null H_{K,L+1}) = L - M + 1. Similarly, one can show the results for the other T+H matrices (3.2). This completes the proof.

The Prony method for sparse Chebyshev-1 interpolation (with unknown Chebyshev-1 sparsity M) is based on the following result.

Lemma 3.2. Let L, K, M, N ∈ N with M ≤ L ≤ K ≤ N be given. Let h_k = h(u_{N,k}) (k = 0, ..., L+K-1) be noiseless sampled data of the sparse polynomial (1.1) of degree at most 2N-1 with coefficients c_j ∈ R \ {0}. Then the following assertions are equivalent:
(i) The polynomial

    Q(x) := ∑_{k=0}^{L} q_k T_k(x)   (x ∈ R; q_L := 1)                          (3.6)

with real coefficients q_k has the M distinct zeros x_j ∈ [-1, 1] (j = 1, ..., M).
(ii) The vector q = (q_k)_{k=0}^{L-1} is a solution of the linear system

    H_{K,L}(0) q = -h(L)   with   h(L) := (h_{L+m} + h_{L-m})_{m=0}^{K-1}.      (3.7)

(iii) The matrix Q_L := S_L - (o | ... | o | q) ∈ R^{L×L} has the property

    H_{K,L}(0) Q_L = H_{K,L}(1) + (o | h(0) | ... | h(L-2)).                    (3.8)

Further, the eigenvalues of (1/2) E_L^{-1} Q_L coincide with the zeros of the polynomial (3.6).

Proof. 1. From (i) follows (ii): Assume that Q(x_j) = 0 (j = 1, ..., M). For m = 0, ..., K-1, we compute the sums

    s_m := ∑_{k=0}^{L} (h_{k+m} + h_{k-m}) q_k.

Using h_k = h(u_{N,k}) (k = 0, ..., L+K-1), (1.1), and the known identities (see e.g. [13, pp. 17 and 31])

    2 T_j(x) T_k(x) = T_{j+k}(x) + T_{|j-k|}(x),   T_j(T_k(x)) = T_{jk}(x)   (j, k ∈ N_0),

we obtain

    s_m = ∑_{k=0}^{L} q_k [h(T_{k+m}(u_N)) + h(T_{k-m}(u_N))] = ∑_{l=1}^{M} c_l ∑_{k=0}^{L} q_k [T_{k+m}(x_l) + T_{k-m}(x_l)]
        = 2 ∑_{l=1}^{M} c_l T_m(x_l) Q(x_l) = 0.

By q_L = 1 this implies that

    ∑_{k=0}^{L-1} (h_{k+m} + h_{k-m}) q_k = -(h_{L+m} + h_{L-m})   (m = 0, ..., K-1).

Hence we get (3.7).
2. From (ii) follows (iii): Assume that q = (q_l)_{l=0}^{L-1} is a solution of the linear system (3.7). Then by

    H_{K,L}(0) (δ_{k-j})_{k=0}^{L-1} = h(j) = (h_{k+j} + h_{k-j})_{k=0}^{K-1}   (j = 1, ..., L-1),
    H_{K,L}(0) q = -h(L) = -(h_{k+L} + h_{k-L})_{k=0}^{K-1},

we obtain (3.8) column by column.
3. From (iii) follows (i): By (3.8) we obtain (3.7), since the last column of Q_L reads (δ_{L-2-j})_{j=0}^{L-1} - q and since the last column of H_{K,L}(1) + (o | h(0) | ... | h(L-2)) is equal to h(L) + h(L-2). Then (3.7) implies

    ∑_{k=0}^{L} (h_{k+m} + h_{k-m}) q_k = 0   (m = 0, ..., K-1).

As shown in the first step, we obtain

    ∑_{l=1}^{M} c_l T_m(x_l) Q(x_l) = 0   (m = 0, ..., K-1),

i.e. by (3.5) finally V_{K,M}(x) (c_l Q(x_l))_{l=1}^{M} = o. Especially we conclude that V_M(x) (c_l Q(x_l))_{l=1}^{M} = o. Since the x_j ∈ [-1, 1] (j = 1, ..., M) are distinct, the square Vandermonde-like matrix V_M(x) is nonsingular by Lemma 2.2, such that Q(x_j) = 0 (j = 1, ..., M).
4. From Lemma 2.5 it follows that det(2x E_L - Q_L) = Q(x) (x ∈ R). Hence the eigenvalues of the square matrix (1/2) E_L^{-1} Q_L coincide with the zeros of the polynomial (3.6). This completes the proof.

In the following, we call a polynomial (3.6) a modified Prony polynomial of degree L (M ≤ L ≤ N) if the corresponding coefficient vector q = (q_k)_{k=0}^{L-1} is a solution of the linear system (3.7). Then (3.6) has the same zeros x_j ∈ [-1, 1] (j = 1, ..., M) as the Prony polynomial (2.4), but (3.6) has L - M additional zeros if L > M. The eigenvalues of (1/2) E_L^{-1} Q_L coincide with the zeros of the polynomial (3.6). Now we formulate Lemma 3.2 as an algorithm. Since the unknown coefficients c_j (j = 1, ..., M) do not vanish, we can assume that |c_j| > ε for a convenient bound ε (0 < ε ≪ 1).

Algorithm 3.3 (Prony method for sparse Chebyshev-1 interpolation).
Input: L, K, N ∈ N (N ≫ 1, 3 ≤ L ≤ K ≤ N), L an upper bound of the Chebyshev-1 sparsity M of (1.1) of degree at most 2N-1, h_k = h(u_{N,k}) ∈ R (k = 0, ..., L+K-1), 0 < ε ≪ 1.
1. Compute the least squares solution q = (q_k)_{k=0}^{L-1} of the rectangular linear system (3.7).
2. Determine the simple roots x̃_j ∈ [-1, 1] (j = 1, ..., M̃) of the modified Prony polynomial (3.6), i.e., compute all eigenvalues x̃_j ∈ [-1, 1] (j = 1, ..., M̃) of the companion matrix (1/2) E_L^{-1} Q_L, ordered as 1 ≥ x̃_1 > x̃_2 > ... > x̃_{M̃} ≥ -1. Note that rank H_{K,L}(0) = M ≤ M̃.
3. Compute c̃_j ∈ R (j = 1, ..., M̃) as least squares solution of the overdetermined linear Vandermonde-like system V_{L+K,M̃}(x̃) (c̃_j)_{j=1}^{M̃} = (h_k)_{k=0}^{L+K-1} with x̃ := (x̃_j)_{j=1}^{M̃} and V_{L+K,M̃}(x̃) := (T_k(x̃_j))_{k=0,j=1}^{L+K-1,M̃}.
4. Delete all the x̃_l (l ∈ {1, ..., M̃}) with |c̃_l| ≤ ε and denote the remaining values by x_j (j = 1, ..., M) with M ≤ M̃. Calculate n_j := [(2N-1)/π · arccos x_j] (j = 1, ..., M).
5. Repeat step 3 and compute c = (c_j)_{j=1}^{M} ∈ R^M as least squares solution of the overdetermined linear Vandermonde-like system V_{L+K,M}(x) c = (h_k)_{k=0}^{L+K-1} with x := (x_j)_{j=1}^{M} and V_{L+K,M}(x) := (T_k(x_j))_{k=0,j=1}^{L+K-1,M} = (cos(n_j kπ/(2N-1)))_{k=0,j=1}^{L+K-1,M}.
Output: M ∈ N, n_j ∈ N_0 (0 ≤ n_1 < n_2 < ... < n_M < 2N), c_j ∈ R (j = 1, ..., M).
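The following NumPy sketch mirrors the steps of Algorithm 3.3 under the assumption of noiseless data; the name prony_cheb1_unknown, the threshold handling, and the tolerance for selecting real roots are ours, and spurious roots are simply filtered as in steps 4 and 5.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def prony_cheb1_unknown(h, L, K, N, eps=1e-8):
    """Sketch of Algorithm 3.3 (unknown sparsity; L upper bound, L <= K).
    h: samples h_k = h(cos(k*pi/(2N-1))), k = 0, ..., K+L-1."""
    h = np.asarray(h, float)
    # Step 1: least squares solution of H_{K,L}(0) q = -h(L), cf. (3.7).
    HK = np.array([[h[l + m] + h[abs(l - m)] for m in range(L)] for l in range(K)])
    rhs = np.array([h[l + L] + h[abs(l - L)] for l in range(K)])
    q = np.linalg.lstsq(HK, -rhs, rcond=None)[0]
    # Step 2: real roots in [-1, 1] of the modified Prony polynomial (3.6).
    r = C.chebroots(np.append(q, 1.0))
    x = np.sort(r.real[(np.abs(r.imag) < 1e-8) & (np.abs(r.real) <= 1.0)])[::-1]
    # Step 3: least squares coefficients for all candidate nodes.
    k = np.arange(K + L)
    n = np.rint((2 * N - 1) / np.pi * np.arccos(x)).astype(int)
    V = np.cos(np.outer(k, n) * np.pi / (2 * N - 1))
    c = np.linalg.lstsq(V, h, rcond=None)[0]
    # Steps 4-5: discard terms with |c_j| <= eps and recompute the coefficients.
    n = n[np.abs(c) > eps]
    V = np.cos(np.outer(k, n) * np.pi / (2 * N - 1))
    c = np.linalg.lstsq(V, h, rcond=None)[0]
    return n, c
```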

Now we show that the Prony method for sparse Chebyshev-1 interpolation can be improved to a matrix pencil method. As is known, a rectangular matrix pencil may not have eigenvalues in general. But this is not the case for our rectangular matrix pencil

    2x H_{K,L}(0) E_L - H_{K,L}(0) Q_L,                                         (3.9)

which has x_j ∈ [-1, 1] (j = 1, ..., M) as eigenvalues. Note that by (3.8) both matrices H_{K,L}(0) E_L and H_{K,L}(0) Q_L are known from the given sampled data h_k (k = 0, ..., L+K-1). The matrix pencil (3.9) has at least the eigenvalues x_j ∈ [-1, 1] (j = 1, ..., M). If v ∈ C^L is a right eigenvector related to x_j, then by

    (2x_j H_{K,L}(0) E_L - H_{K,L}(0) Q_L) v = H_{K,L}(0) (2x_j E_L - Q_L) v   and   det(2x_j E_L - Q_L) = Q(x_j) = 0

we see that v = (v_k)_{k=0}^{L-1} is a right eigenvector of the square eigenvalue problem (1/2) E_L^{-1} Q_L v = x_j v. A right eigenvector can be determined by

    v_{L-1} = T_L(x_j) = -∑_{l=0}^{L-1} q_l T_l(x_j),

whereas the other components v_{L-2}, ..., v_0 can be computed recursively from the linear system Q_L v = 2x_j E_L v.

Now we factorize the rectangular T+H matrices (3.2) simultaneously. For this reason, we compute the QR decomposition of the rectangular T+H matrix (3.1). By (3.4), the rank of the T+H matrix H_{K,L+1} is equal to M. Hence H_{K,L+1} is rank deficient. Therefore we apply a QR factorization with column pivoting and obtain

    H_{K,L+1} Π_{L+1} = U_K R_{K,L+1}

with an orthogonal matrix U_K, a permutation matrix Π_{L+1}, and a trapezoidal matrix R_{K,L+1}, whose first M rows form R_{K,L+1}(1:M, 1:L+1) and whose last K - M rows coincide with O_{K-M,L+1}; here R_{K,L+1}(1:M, 1:M) is a nonsingular upper triangular matrix. By the QR decomposition we can determine the rank M of the T+H matrix (3.1) and hence the Chebyshev-1 sparsity of the sparse polynomial (1.1). Note that the permutation matrix Π_{L+1} is chosen such that the diagonal entries of R_{K,L+1}(1:M, 1:M) have nonincreasing absolute values. We denote the diagonal matrix containing these diagonal entries by D_M. With

    S_{K,L+1} := R_{K,L+1} Π_{L+1}^T,                                           (3.10)

whose last K - M rows again vanish, we infer by (3.3) that

    H_{K,L}(s) = U_K S_{K,L}(s)   (s = 0, 1)   with   S_{K,L}(s) := S_{K,L+1}(1:K, 1+s:L+s)   (s = 0, 1).

Hence we can factorize the matrices 2 H_{K,L}(0) E_L and H_{K,L}(0) Q_L in the form

    2 H_{K,L}(0) E_L = H_{K,L}(0) + (o | h(1) | ... | h(L-1)) = U_K S̃_{K,L}(0),
    H_{K,L}(0) Q_L = H_{K,L}(1) + (o | h(0) | ... | h(L-2)) = U_K S̃_{K,L}(1),

where

    S̃_{K,L}(0) := S_{K,L}(0) + (o | S_{K,L}(1)(1:K, 1:L-1)),
    S̃_{K,L}(1) := S_{K,L}(1) + (o | S_{K,L}(0)(1:K, 1:L-1)).

Since U_K is orthogonal, the generalized eigenvalue problem of the matrix pencil (3.9) is equivalent to the generalized eigenvalue problem of the matrix pencil x S̃_{K,L}(0) - S̃_{K,L}(1) (x ∈ R). Using the special structure of (3.10), i.e. the fact that only the first M rows of S̃_{K,L}(s) are nonzero, we can simplify this matrix pencil to

    x T_{M,L}(0) - T_{M,L}(1)   (x ∈ R)                                         (3.11)

with

    T_{M,L}(s) := S̃_{K,L}(s)(1:M, 1:L)   (s = 0, 1).                            (3.12)

Here one can use the matrix D_M as a diagonal preconditioner and proceed with

    T̃_{M,L}(s) := D_M^{-1} T_{M,L}(s).                                          (3.13)

Then the generalized eigenvalue problem of the transposed matrix pencil x T̃_{M,L}(0)^T - T̃_{M,L}(1)^T has the same eigenvalues as the matrix pencil (3.11) except for the zero eigenvalues, and it can be solved as an eigenvalue problem of the M-by-M matrix

    F_M^{QR} := (T̃_{M,L}(0)^T)^+ T̃_{M,L}(1)^T.                                 (3.14)

Finally we obtain the nodes x_j ∈ [-1, 1] (j = 1, ..., M) as the eigenvalues of (3.14).

Algorithm 3.4 (Matrix pencil factorization based on QR decomposition for sparse Chebyshev-1 interpolation).
Input: L, K, N ∈ N (N ≫ 1, 3 ≤ L ≤ K ≤ N), L an upper bound of the Chebyshev-1 sparsity M of (1.1) of degree at most 2N-1, h_k = h(u_{N,k}) ∈ R (k = 0, ..., L+K-1).
1. Compute the QR factorization of the rectangular T+H matrix (3.1). Determine the rank M of (3.1) and form the matrices (3.12) and (3.13).
2. Determine the eigenvalues x_j ∈ [-1, 1] (j = 1, ..., M) of the square matrix (3.14), ordered as 1 ≥ x_1 > x_2 > ... > x_M ≥ -1. Calculate n_j := [(2N-1)/π · arccos x_j] (j = 1, ..., M).
3. Compute the coefficients c_j ∈ R (j = 1, ..., M) as least squares solution of the overdetermined linear Vandermonde-like system V_{L+K,M}(x) (c_j)_{j=1}^{M} = (h_k)_{k=0}^{L+K-1} with x := (x_j)_{j=1}^{M} and V_{L+K,M}(x) := (T_k(x_j))_{k=0,j=1}^{L+K-1,M} = (cos(n_j kπ/(2N-1)))_{k=0,j=1}^{L+K-1,M}.
Output: M ∈ N, n_j ∈ N_0 (0 ≤ n_1 < n_2 < ... < n_M < 2N), c_j ∈ R (j = 1, ..., M).

In the following we derive the ESPRIT method by similar ideas as above, but now we use the SVD of the T+H matrix (3.1), which is rank deficient by (3.4). Therefore we use the factorization

    H_{K,L+1} = U_K D_{K,L+1} W_{L+1},

where U_K and W_{L+1} are orthogonal matrices and where D_{K,L+1} is a rectangular diagonal matrix. The diagonal entries of D_{K,L+1} are the singular values of (3.1) arranged in nonincreasing order: σ_1 ≥ σ_2 ≥ ... ≥ σ_M > σ_{M+1} = ... = σ_{L+1} = 0. Thus we can determine the rank M of the T+H matrix (3.1), which coincides with the Chebyshev-1 sparsity of the polynomial (1.1). Introducing the matrices

    D_{K,M} := D_{K,L+1}(1:K, 1:M)   (whose first M rows form diag(σ_j)_{j=1}^{M} and whose last K - M rows coincide with O_{K-M,M}),
    W_{M,L+1} := W_{L+1}(1:M, 1:L+1),

we can simplify the SVD of the T+H matrix (3.1) to H_{K,L+1} = U_K D_{K,M} W_{M,L+1}. Note that W_{M,L+1} W_{M,L+1}^T = I_M. Setting

    W_{M,L}(s) := W_{M,L+1}(1:M, 1+s:L+s)   (s = 0, 1),                         (3.15)

it follows from (3.3) that H_{K,L}(s) = U_K D_{K,M} W_{M,L}(s) (s = 0, 1). Hence we can factorize the matrices 2 H_{K,L}(0) E_L and H_{K,L}(0) Q_L in the form

    2 H_{K,L}(0) E_L = H_{K,L}(0) + (o | h(1) | ... | h(L-1)) = U_K D_{K,M} W̃_{M,L}(0),
    H_{K,L}(0) Q_L = H_{K,L}(1) + (o | h(0) | ... | h(L-2)) = U_K D_{K,M} W̃_{M,L}(1),

where

    W̃_{M,L}(0) := W_{M,L}(0) + (o | W_{M,L}(1)(1:M, 1:L-1)),
    W̃_{M,L}(1) := W_{M,L}(1) + (o | W_{M,L}(0)(1:M, 1:L-1)).

Since U_K is orthogonal, the generalized eigenvalue problem of the rectangular matrix pencil (3.9) is equivalent to the generalized eigenvalue problem of the matrix pencil

    x D_{K,M} W̃_{M,L}(0) - D_{K,M} W̃_{M,L}(1).                                 (3.16)

If we multiply the transposed matrix pencil (3.16) from the right-hand side by the K-by-M matrix whose upper block is diag(σ_j^{-1})_{j=1}^{M} and whose lower block is O_{K-M,M}, we obtain the generalized eigenvalue problem of the matrix pencil

    x W̃_{M,L}(0)^T - W̃_{M,L}(1)^T,

which has the same eigenvalues as the matrix pencil (3.16) except for the zero eigenvalues.

21 x W M,L (0T W M,L (1T, D Potts, M Tasche / Linear Algebra and its Applications 441 ( which has the same eigenvalues as the matrix pencil (316 except for the zero eigenvalues Finally we determine the nodes x j [ 1, 1] (j = 1,,M as eigenvalues of the matrix F SVD M := ( W M,L (0T W M,L (1T (317 Thus the ESPRIT algorithm reads as follows: Algorithm 35 (ESPRIT method for sparse Chebyshev-1 interpolation Input: L, K, N N (N 1, 3 L K N, L is upper bound of the Chebyshev-1 sparsity M of (11ofdegreeatmost, h k = h(u N,k R (k = 0,,L + K 1 1 Compute the SVD of the rectangular T+H matrix (31 Determine the rank M of (31 and form the matrices (315 2 Compute all eigenvalues x j [ 1, 1] (j = 1,,M of the square matrix (317 Assume that the eigenvalues are ordered in the following form 1 x 1 > x 2 > > x M 1 Calculate n j := [ 2N 1 π arccos x j ] (j = 1,,M 3 Compute the coefficients c j R (j = 1,,M as least squares solution of the overdetermined linear Vandermonde-like system V L+K,M (x c = (h k L+K 1 with x := (x j M j=1 and c := (c j M j=1 Output: M N, n j N 0 (0 n 1 < n 2 << n M < 2N, c j R (j = 1,,M 4 Sparse polynomial interpolation in Chebyshev-2 basis In this section, we discuss the sparse interpolation in the basis of Chebyshev polynomials of second kind Here we use analogous ideas as in Sections 2 and 3 Thus Lemma 41 corresponds to Lemma 21 Note that one can extend this approach to the Chebyshev polynomials of third and fourth kind, respectively For n N 0 and x ( 1, 1, thechebyshev polynomial of second kind is defined by U n (x := (1 x 2 1/2 sin ( (n + 1 arccos x (see for example [13, p 3] These polynomials are orthogonal with respect to the weight (1 x 2 1/2 on [ 1, 1] (see [13, p 74] and form the Chebyshev-2 basis For M, N N with M < N, we consider a polynomial h of degree at most 2N 1, which is M-sparse in the Chebyshev-2 basis, ie h(x = c j U nj (x j=1 (41 with 0 n 1 < n 2 < < n M The integer M is called Chebyshev-2 sparsity of (41 Note that the sparsity depends on the choice of Chebyshev basis Using T 0 = U 0, T 1 = U 1 /2 and T n = (U n U n 2 /2 forn 2(cf[13,p4],weobtainforN 1 U 2N 2 + U 2N 1 = T (T T 2N 1

4. Sparse polynomial interpolation in the Chebyshev-2 basis

In this section, we discuss the sparse interpolation in the basis of Chebyshev polynomials of second kind. Here we use analogous ideas as in Sections 2 and 3; thus Lemma 4.1 corresponds to Lemma 2.1. Note that one can extend this approach to the Chebyshev polynomials of third and fourth kind, respectively. For n ∈ N_0 and x ∈ (-1, 1), the Chebyshev polynomial of second kind is defined by

    U_n(x) := (1 - x^2)^{-1/2} sin((n+1) arccos x)

(see for example [13, p. 3]). These polynomials are orthogonal with respect to the weight (1 - x^2)^{1/2} on [-1, 1] (see [13, p. 74]) and form the Chebyshev-2 basis. For M, N ∈ N with M < N, we consider a polynomial h of degree at most 2N-1 which is M-sparse in the Chebyshev-2 basis, i.e.

    h(x) = ∑_{j=1}^{M} c_j U_{n_j}(x)                                           (4.1)

with 0 ≤ n_1 < n_2 < ... < n_M. The integer M is called the Chebyshev-2 sparsity of (4.1). Note that the sparsity depends on the choice of the Chebyshev basis. Using T_0 = U_0, T_1 = U_1/2 and T_n = (U_n - U_{n-2})/2 for n ≥ 2 (cf. [13, p. 4]), we obtain for N ≥ 1

    U_{2N-2} + U_{2N-1} = T_0 + 2 (T_1 + T_2 + ... + T_{2N-1}).

Thus the 2-sparse polynomial U_{2N-2} + U_{2N-1} in the Chebyshev-2 basis is not a sparse polynomial in the Chebyshev-1 basis.

For the sake of brevity, we restrict ourselves to the discussion of the sparse polynomial interpolation in the Chebyshev-2 basis and present only the Prony method in the case of given Chebyshev-2 sparsity (see Algorithm 4.2). But we emphasize that one can extend this approach to the Chebyshev polynomials of third and fourth kind (see [13, p. 5]), which are defined for n ∈ N_0 and x ∈ (-1, 1) by

    V_n(x) := cos((n + 1/2) arccos x) / cos((1/2) arccos x),   W_n(x) := sin((n + 1/2) arccos x) / sin((1/2) arccos x).

Substituting x = cos t, we obtain for all t ∈ [0, π]

    h(cos t) sin t = ∑_{j=1}^{M} c_j sin((n_j + 1) t).                          (4.2)

By sampling at t = πk/(2N-1) (k = 0, ..., 2N-1), it follows that

    h̃_k := h(cos(πk/(2N-1))) sin(πk/(2N-1)) = ∑_{j=1}^{M} c_j sin((n_j + 1)πk/(2N-1)).   (4.3)

Further we set h̃_{-k} := -h̃_k (k = 1, ..., 2N-1). In this case, we introduce the Prony polynomial

    P(x) := 2^{M-1} ∏_{j=1}^{M} (x - cos((n_j + 1)π/(2N-1))),                   (4.4)

which can be represented again in the Chebyshev-1 basis in the form

    P(x) = ∑_{l=0}^{M} p_l T_l(x)   (p_M = 1).

The coefficients p_l of the Prony polynomial (4.4) can be characterized as follows:

Lemma 4.1. For all k = 1, ..., M, the scaled sampled values (4.3) and the coefficients p_l of the Prony polynomial (4.4) fulfill the equations

    ∑_{j=0}^{M-1} (h̃_{j+k} - h̃_{j-k}) p_j = -(h̃_{M+k} - h̃_{M-k}).

Proof. Using sin(α + β) - sin(α - β) = 2 cos α sin β, we obtain for j, k = 0, ..., M that

    h̃_{j+k} - h̃_{j-k} = 2 ∑_{l=1}^{M} c_l sin((n_l + 1)πk/(2N-1)) cos((n_l + 1)πj/(2N-1)).   (4.5)

Note that Eq. (4.5) is trivial for k = 0 and is therefore omitted.

From (4.5) it follows that

    ∑_{j=0}^{M} (h̃_{j+k} - h̃_{j-k}) p_j = 2 ∑_{j=0}^{M} p_j ∑_{l=1}^{M} c_l sin((n_l + 1)πk/(2N-1)) cos((n_l + 1)πj/(2N-1))
                                          = 2 ∑_{l=1}^{M} c_l sin((n_l + 1)πk/(2N-1)) P(cos((n_l + 1)π/(2N-1))) = 0.

By p_M = 1, this implies the assertion.

If we introduce the T+H matrix

    H̃_M(0) := (h̃_{j+k} - h̃_{j-k})_{k=1,j=0}^{M,M-1}

and the vector h̃(M) := (h̃_{M+k} - h̃_{M-k})_{k=1}^{M}, then by Lemma 4.1 the vector p := (p_j)_{j=0}^{M-1} is a solution of the linear system

    H̃_M(0) p = -h̃(M).                                                          (4.6)

By (4.5), the T+H matrix H̃_M(0) can be factorized in the form

    H̃_M(0) = 2 V_M^s (diag c) (V_M^c)^T                                         (4.7)

with the Vandermonde-like matrices

    V_M^c := (cos((n_l + 1)πj/(2N-1)))_{j=0,l=1}^{M-1,M},   V_M^s := (sin((n_l + 1)πk/(2N-1)))_{k,l=1}^{M,M}

and the diagonal matrix diag c of c = (c_l)_{l=1}^{M}. Both Vandermonde-like matrices are nonsingular. Assume that V_M^c is singular. Then there exists a vector d = (d_l)_{l=0}^{M-1} ≠ o with d^T V_M^c = o^T. Introducing D(x) := ∑_{l=0}^{M-1} d_l cos(lx), this even trigonometric polynomial of order at most M-1 has the M distinct zeros (n_l + 1)π/(2N-1) ∈ (0, π] (l = 1, ..., M). But this can only be the case if D vanishes identically. Similarly, one can see that V_M^s is nonsingular too. From (4.7) it follows that H̃_M(0) is also nonsingular. Thus we obtain:

Algorithm 4.2 (Prony method for sparse Chebyshev-2 interpolation).
Input: N ∈ N with N > M, h̃_k ∈ R (k = 0, ..., 2M-1), M ∈ N Chebyshev-2 sparsity of the polynomial (4.1) of degree at most 2N-1.
1. Solve the square linear system (4.6).
2. Determine the simple roots x_j (j = 1, ..., M) of the Prony polynomial (4.4), where 1 ≥ x_1 > x_2 > ... > x_M ≥ -1, and compute n_j := [(2N-1)/π · arccos x_j] - 1 (j = 1, ..., M).
3. Compute c_j ∈ R (j = 1, ..., M) as solution of the square Vandermonde-like system V_M^s c = (h̃_k)_{k=1}^{M} with c := (c_j)_{j=1}^{M}.
Output: n_j ∈ N_0 (0 ≤ n_1 < n_2 < ... < n_M < 2N), c_j ∈ R (j = 1, ..., M).

Table 5.1
Results of Example 5.1 (parameters N, K, L; success of the Algorithms 3.3, 3.4, 3.5; error e(c)).

Immediately we can see that the Algorithms 3.4 and 3.5 can be generalized to the Chebyshev-2 basis in a straightforward manner, since the Prony polynomial P is represented in the Chebyshev-1 basis. We will denote these generalizations again by Algorithms 3.4 and 3.5, respectively.

5. Numerical examples

Now we illustrate the behavior and the limits of the suggested algorithms. Using IEEE standard floating point arithmetic with double precision, we have implemented our algorithms in MATLAB. In the Examples 5.1-5.3, an M-sparse polynomial is given in the form (1.1) with Chebyshev polynomials T_{n_j} of degree n_j and real coefficients c_j ≠ 0 (j = 1, ..., M). We compute the absolute error of the coefficients by

    e(c) := max_{j=1,...,M} |c_j - c̃_j|   (c := (c_j)_{j=1}^{M}),

where the c̃_j are the coefficients computed by our algorithms. In Example 5.4 we generalize the method to a sparse nonpolynomial interpolation. Finally, in Example 5.5, we present an example of sparse polynomial interpolation in the Chebyshev-2 basis. In all examples we observe that the numerical stability of the Algorithms 3.4 and 3.5 can be improved by using more sampling values.

Example 5.1. We start with the following example. We choose M = 5, c_j = j, u_N := cos(π/(2N-1)) and (n_1, n_2, n_3, n_4, n_5) = (6, 12, 176, 178, 200) in (1.1). The symbols + and - in Table 5.1 mean that all degrees n_j are correctly reconstructed and that the reconstruction fails, respectively. Since after a successful reconstruction the last step is the same in the Algorithms 3.3-3.5, we present the error e(c) in the last column of Table 5.1. Note that for the parameters N = 300 and K = L = 5 the T+H matrix in step 1 of Algorithm 3.3 (see (3.7)) has a very large condition number cond(H_5(0)). Due to roundoff errors, some eigenvalues x̃_j are then not contained in [-1, 1]. We can improve the stability by choosing more sampling values. Further we remark that the stability of computing the eigenvalues x_j depends on the stability of the different methods used in step 1 of the Algorithms 3.3, 3.4 and 3.5, respectively.
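The setup of Example 5.1 can be reproduced with the sketches given earlier. The following snippet is ours; the parameter choice (N, K, L) = (300, 30, 30) is only illustrative and need not coincide with a row of Table 5.1.

```python
import numpy as np

# Setup of Example 5.1: M = 5, c_j = j, degrees (6, 12, 176, 178, 200).
n_true = np.array([6, 12, 176, 178, 200])
c_true = np.arange(1.0, 6.0)
N, K, L = 300, 30, 30                                   # illustrative parameter choice
h = chebyshev_samples(n_true, c_true, N, K + L)         # sketch from Section 2
n_est, c_est = esprit_cheb1_unknown(h, L, K, N)         # or prony_cheb1_unknown(...)
# Error e(c) as defined above; the reconstruction counts as successful ("+")
# only if all degrees n_j are recovered.
e_c = np.max(np.abs(c_est - c_true)) if np.array_equal(n_est, n_true) else np.inf
print(n_est, e_c)
```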

Example 5.2. It is difficult to reconstruct a sparse polynomial (1.1) if some degrees n_j of the Chebyshev polynomials T_{n_j} differ only a little. Therefore we consider the sparse polynomial (1.1) with (n_1, n_2, n_3, n_4, n_5) = (60, 120, 1760, 1780, 2000) and again c_j = j (j = 1, ..., 5). The results are shown in Table 5.2.

Table 5.2
Results of Example 5.2 (parameters N, K, L; success of the Algorithms 3.3, 3.4, 3.5; error e(c)).

Table 5.3
Results of Example 5.3 (parameters N, K, L, σ; success of the Algorithms 3.3, 3.4, 3.5; error e(c)).

Example 5.3. Similarly as in Example 5.1, we choose M = 5, c_j = j (j = 1, ..., 5) and (n_1, n_2, n_3, n_4, n_5) = (6, 12, 176, 178, 200). We reconstruct the sparse polynomial (1.1) from samples on a random Chebyshev grid. For this purpose, we choose a random integer σ ∈ [1, N-1] such that its inverse σ^{-1} modulo 2N-1 exists. Assume that N fulfills the condition n_j ≤ 2N-1. By

    T_{n_j}(u_{N,k}) = cos(k n_j π/(2N-1))
                     = cos((σk)(σ^{-1} n_j mod (2N-1)) π/(2N-1))              if σ^{-1} n_j mod (2N-1) < N,
                     = cos((σk)(2N-1 - (σ^{-1} n_j mod (2N-1))) π/(2N-1))     if σ^{-1} n_j mod (2N-1) ≥ N,

i.e.

    T_{n_j}(u_{N,k}) = T_{σ^{-1} n_j mod (2N-1)}(u_{N,σk})                    if σ^{-1} n_j mod (2N-1) < N,
                     = T_{2N-1-(σ^{-1} n_j mod (2N-1))}(u_{N,σk})             if σ^{-1} n_j mod (2N-1) ≥ N,

we are able to recover the degrees n_j from the sampling set u_{N,σk} = cos(σkπ/(2N-1)) for k = 0, ..., K+L-1. The main advantage is that the degrees σ^{-1} n_j mod (2N-1) are much better separated than the original degrees n_j. The results are shown in Table 5.3. Note that Algorithm 3.3 determines the eigenvalues x̃_j, which give the correct degrees n_j after step 2, but the selection of these correct degrees fails in general in step 4.

Example 5.4. This example shows a straightforward generalization to a sparse nonpolynomial interpolation. We consider special functions of the form

    h(x) := ∑_{j=1}^{M} c_j cos(ν_j arccos x)   (x ∈ [-1, 1]),

where the ν_j ∈ R with 0 ≤ ν_1 < ... < ν_M < 2N are not necessarily integers. Using t = arccos x, we obtain

    g(t) = ∑_{j=1}^{M} c_j cos(ν_j t)   (t ∈ [0, π]).

Fig. 5.1. The sparse polynomial (1.1) of Example 5.3 for N = 300 and 100 samples, with σ = 1 (left) and σ = 251 (right).

Table 5.4
Results of Example 5.4 (parameters N, K, L; success of the Algorithms 3.3, 3.4, 3.5; error e(c)).

As in Example 5.1 we choose M = 5, c_j = j, u_N := cos(π/(2N-1)) and (ν_1, ν_2, ν_3, ν_4, ν_5) = (6.1, 12.2, 176.3, 178.4, 200.5). We compute the error of the values ν_j ∈ R by

    e(ν) := max_{j=1,...,5} |ν_j - ν̃_j|   (ν := (ν_j)_{j=1}^{5}),

where the ν̃_j are the values computed by our algorithms. The corresponding errors e(ν) are shown in Table 5.4. We sample the function g at the nodes kπ/(2N-1) for k = 0, ..., L+K-1 and present the error e(c) in the last column of Table 5.4 based on Algorithm 3.3. The results show that the Algorithms 3.4 and 3.5 can be used to find the values ν_j and the coefficients c_j.

Example 5.5. Finally, we consider a sparse polynomial (4.1) in the Chebyshev-2 basis. To this end, we choose M = 5, c_j = j (j = 1, ..., 5), u_N := cos(π/(2N-1)) and (n_1, n_2, n_3, n_4, n_5) = (6, 12, 176, 178, 190). The symbols + and - in Table 5.5 mean that all degrees n_j of the Chebyshev polynomials U_{n_j} are correctly reconstructed and that the reconstruction fails, respectively. Remember that the generalizations of Algorithms 3.4 and 3.5 to the Chebyshev-2 basis are again denoted by Algorithms 3.4 and 3.5, respectively. Since after a successful reconstruction the last step is the same in our algorithms, we present the error e(c) in the last column of Table 5.5. From Table 5.5 we observe that the algorithms for sparse polynomial interpolation in the Chebyshev-2 basis behave very similarly to the algorithms for sparse polynomial interpolation in the Chebyshev-1 basis. Similarly as in Example 5.4, we can deal with functions of the form h(t) = ∑_{j=1}^{M} d_j sin(μ_j t) by using the relation (4.2), and furthermore with functions of the form

    f(t) = ∑_{j=1}^{M} (c_j cos(ν_j t) + d_j sin(μ_j t))   (t ∈ [0, π]).


More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.) page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation

More information

Linear Algebra. and

Linear Algebra. and Instructions Please answer the six problems on your own paper. These are essay questions: you should write in complete sentences. 1. Are the two matrices 1 2 2 1 3 5 2 7 and 1 1 1 4 4 2 5 5 2 row equivalent?

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax = . (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

More information

Reconstruction of non-stationary signals by the generalized Prony method

Reconstruction of non-stationary signals by the generalized Prony method Reconstruction of non-stationary signals by the generalized Prony method Gerlind Plonka Institute for Numerical and Applied Mathematics, University of Göttingen Lille, June 1, 18 Gerlind Plonka (University

More information

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM

33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM 33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM (UPDATED MARCH 17, 2018) The final exam will be cumulative, with a bit more weight on more recent material. This outline covers the what we ve done since the

More information

MATH 350: Introduction to Computational Mathematics

MATH 350: Introduction to Computational Mathematics MATH 350: Introduction to Computational Mathematics Chapter V: Least Squares Problems Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu MATH

More information

9. Numerical linear algebra background

9. Numerical linear algebra background Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization

More information

Introduction to Matrices

Introduction to Matrices POLS 704 Introduction to Matrices Introduction to Matrices. The Cast of Characters A matrix is a rectangular array (i.e., a table) of numbers. For example, 2 3 X 4 5 6 (4 3) 7 8 9 0 0 0 Thismatrix,with4rowsand3columns,isoforder

More information

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) Chapter 5 The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) 5.1 Basics of SVD 5.1.1 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

More information

Lecture 7. Econ August 18

Lecture 7. Econ August 18 Lecture 7 Econ 2001 2015 August 18 Lecture 7 Outline First, the theorem of the maximum, an amazing result about continuity in optimization problems. Then, we start linear algebra, mostly looking at familiar

More information

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix

More information

CHAPTER 6. Direct Methods for Solving Linear Systems

CHAPTER 6. Direct Methods for Solving Linear Systems CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II)

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II) Math 59 Lecture 2 (Tue Mar 5) Gaussian elimination and LU factorization (II) 2 Gaussian elimination - LU factorization For a general n n matrix A the Gaussian elimination produces an LU factorization if

More information

UNIT 6: The singular value decomposition.

UNIT 6: The singular value decomposition. UNIT 6: The singular value decomposition. María Barbero Liñán Universidad Carlos III de Madrid Bachelor in Statistics and Business Mathematical methods II 2011-2012 A square matrix is symmetric if A T

More information

MATH2210 Notebook 2 Spring 2018

MATH2210 Notebook 2 Spring 2018 MATH2210 Notebook 2 Spring 2018 prepared by Professor Jenny Baglivo c Copyright 2009 2018 by Jenny A. Baglivo. All Rights Reserved. 2 MATH2210 Notebook 2 3 2.1 Matrices and Their Operations................................

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice 3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1 DETERMINANTS 1 Solving linear equations The simplest type of equations are linear The equation (1) ax = b is a linear equation, in the sense that the function f(x) = ax is linear 1 and it is equated to

More information

ECS130 Scientific Computing. Lecture 1: Introduction. Monday, January 7, 10:00 10:50 am

ECS130 Scientific Computing. Lecture 1: Introduction. Monday, January 7, 10:00 10:50 am ECS130 Scientific Computing Lecture 1: Introduction Monday, January 7, 10:00 10:50 am About Course: ECS130 Scientific Computing Professor: Zhaojun Bai Webpage: http://web.cs.ucdavis.edu/~bai/ecs130/ Today

More information

This operation is - associative A + (B + C) = (A + B) + C; - commutative A + B = B + A; - has a neutral element O + A = A, here O is the null matrix

This operation is - associative A + (B + C) = (A + B) + C; - commutative A + B = B + A; - has a neutral element O + A = A, here O is the null matrix 1 Matrix Algebra Reading [SB] 81-85, pp 153-180 11 Matrix Operations 1 Addition a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn + b 11 b 12 b 1n b 21 b 22 b 2n b m1 b m2 b mn a 11 + b 11 a 12 + b 12 a 1n

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Orthogonalization and least squares methods

Orthogonalization and least squares methods Chapter 3 Orthogonalization and least squares methods 31 QR-factorization (QR-decomposition) 311 Householder transformation Definition 311 A complex m n-matrix R = [r ij is called an upper (lower) triangular

More information

Matrices and Determinants

Matrices and Determinants Chapter1 Matrices and Determinants 11 INTRODUCTION Matrix means an arrangement or array Matrices (plural of matrix) were introduced by Cayley in 1860 A matrix A is rectangular array of m n numbers (or

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form Chapter 5. Linear Algebra A linear (algebraic) equation in n unknowns, x 1, x 2,..., x n, is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b where a 1, a 2,..., a n and b are real numbers. 1

More information

Numerical Linear Algebra

Numerical Linear Algebra Chapter 3 Numerical Linear Algebra We review some techniques used to solve Ax = b where A is an n n matrix, and x and b are n 1 vectors (column vectors). We then review eigenvalues and eigenvectors and

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

Background Mathematics (2/2) 1. David Barber

Background Mathematics (2/2) 1. David Barber Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and

More information

Vector and Matrix Norms. Vector and Matrix Norms

Vector and Matrix Norms. Vector and Matrix Norms Vector and Matrix Norms Vector Space Algebra Matrix Algebra: We let x x and A A, where, if x is an element of an abstract vector space n, and A = A: n m, then x is a complex column vector of length n whose

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

8 A pseudo-spectral solution to the Stokes Problem

8 A pseudo-spectral solution to the Stokes Problem 8 A pseudo-spectral solution to the Stokes Problem 8.1 The Method 8.1.1 Generalities We are interested in setting up a pseudo-spectral method for the following Stokes Problem u σu p = f in Ω u = 0 in Ω,

More information

Class notes: Approximation

Class notes: Approximation Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

Topic 1: Matrix diagonalization

Topic 1: Matrix diagonalization Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

Journal of Symbolic Computation. On the Berlekamp/Massey algorithm and counting singular Hankel matrices over a finite field

Journal of Symbolic Computation. On the Berlekamp/Massey algorithm and counting singular Hankel matrices over a finite field Journal of Symbolic Computation 47 (2012) 480 491 Contents lists available at SciVerse ScienceDirect Journal of Symbolic Computation journal homepage: wwwelseviercom/locate/jsc On the Berlekamp/Massey

More information

Lecture 2 INF-MAT : A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems

Lecture 2 INF-MAT : A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems Lecture 2 INF-MAT 4350 2008: A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University

More information

Vector Space Concepts

Vector Space Concepts Vector Space Concepts ECE 174 Introduction to Linear & Nonlinear Optimization Ken Kreutz-Delgado ECE Department, UC San Diego Ken Kreutz-Delgado (UC San Diego) ECE 174 Fall 2016 1 / 25 Vector Space Theory

More information

Numerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??

Numerical Methods. Elena loli Piccolomini. Civil Engeneering.  piccolom. Metodi Numerici M p. 1/?? Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement

More information

Main matrix factorizations

Main matrix factorizations Main matrix factorizations A P L U P permutation matrix, L lower triangular, U upper triangular Key use: Solve square linear system Ax b. A Q R Q unitary, R upper triangular Key use: Solve square or overdetrmined

More information

Pivoting. Reading: GV96 Section 3.4, Stew98 Chapter 3: 1.3

Pivoting. Reading: GV96 Section 3.4, Stew98 Chapter 3: 1.3 Pivoting Reading: GV96 Section 3.4, Stew98 Chapter 3: 1.3 In the previous discussions we have assumed that the LU factorization of A existed and the various versions could compute it in a stable manner.

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le Direct Methods for Solving Linear Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview General Linear Systems Gaussian Elimination Triangular Systems The LU Factorization

More information

Matrices A brief introduction

Matrices A brief introduction Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino Semester 1, 2014-15 B. Bona (DAUIN) Matrices Semester 1, 2014-15 1 / 41 Definitions Definition A matrix is a set of N real or complex

More information

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there

More information

Exponential Decomposition and Hankel Matrix

Exponential Decomposition and Hankel Matrix Exponential Decomposition and Hankel Matrix Franklin T Luk Department of Computer Science and Engineering, Chinese University of Hong Kong, Shatin, NT, Hong Kong luk@csecuhkeduhk Sanzheng Qiao Department

More information

Preliminary Examination, Numerical Analysis, August 2016

Preliminary Examination, Numerical Analysis, August 2016 Preliminary Examination, Numerical Analysis, August 2016 Instructions: This exam is closed books and notes. The time allowed is three hours and you need to work on any three out of questions 1-4 and any

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

A Review of Matrix Analysis

A Review of Matrix Analysis Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value

More information

12.4 The Diagonalization Process

12.4 The Diagonalization Process Chapter - More Matrix Algebra.4 The Diagonalization Process We now have the background to understand the main ideas behind the diagonalization process. Definition: Eigenvalue, Eigenvector. Let A be an

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

18.06SC Final Exam Solutions

18.06SC Final Exam Solutions 18.06SC Final Exam Solutions 1 (4+7=11 pts.) Suppose A is 3 by 4, and Ax = 0 has exactly 2 special solutions: 1 2 x 1 = 1 and x 2 = 1 1 0 0 1 (a) Remembering that A is 3 by 4, find its row reduced echelon

More information

Hands-on Matrix Algebra Using R

Hands-on Matrix Algebra Using R Preface vii 1. R Preliminaries 1 1.1 Matrix Defined, Deeper Understanding Using Software.. 1 1.2 Introduction, Why R?.................... 2 1.3 Obtaining R.......................... 4 1.4 Reference Manuals

More information

Stability of the Gram-Schmidt process

Stability of the Gram-Schmidt process Stability of the Gram-Schmidt process Orthogonal projection We learned in multivariable calculus (or physics or elementary linear algebra) that if q is a unit vector and v is any vector then the orthogonal

More information

Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes.

Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes. Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes Matrices and linear equations A matrix is an m-by-n array of numbers A = a 11 a 12 a 13 a 1n a 21 a 22 a 23 a

More information