Exponential Decomposition and Hankel Matrix

Franklin T. Luk
Department of Computer Science and Engineering, Chinese University of Hong Kong, Shatin, NT, Hong Kong
luk@cse.cuhk.edu.hk

Sanzheng Qiao
Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7, Canada
qiao@mcmaster.ca

David Vandevoorde
Edison Design Group, Belmont, CA 94002-3112, USA
daveed@vandevoorde.com

1. Introduction

We consider an important problem in mathematical modeling:

Exponential Decomposition. Given a finite sequence of signal values {s_1, s_2, ..., s_n}, determine a minimal positive integer r, complex coefficients {c_1, c_2, ..., c_r}, and distinct complex knots {z_1, z_2, ..., z_r}, so that the following exponential decomposition signal model is satisfied:

    s_k = \sum_{i=1}^{r} c_i z_i^{k-1},  for k = 1, ..., n.    (1.1)

We usually solve the above problem in three steps. First, determine the rank r of the signal sequence. Second, find the knots z_i. Third, calculate the coefficients {c_i} by solving an r x r Vandermonde system

    W^{(r)} c = s^{(r)},    (1.2)

where

    W^{(r)} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ \vdots & \vdots & & \vdots \\ z_1^{r-1} & z_2^{r-1} & \cdots & z_r^{r-1} \end{bmatrix},
    c = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_r \end{bmatrix},
    s^{(r)} = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_r \end{bmatrix}.

A square Vandermonde system (1.2) can be solved efficiently and stably (see [1]).
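For illustration (not part of the paper), here is a minimal numpy sketch of the model (1.1) and the Vandermonde solve (1.2); the knots, coefficients, and sizes below are made up, and a generic dense solver stands in for the stable Björck-Pereyra algorithm of [1]:

```python
import numpy as np

# Hypothetical data: r = 3 known knots and coefficients, n = 8 samples.
z = np.array([0.9, 0.7 + 0.2j, -0.5])        # distinct complex knots
c = np.array([2.0, 1.0 - 1.0j, 0.5])         # complex coefficients
n, r = 8, len(z)

# Signal model (1.1): s_k = sum_i c_i z_i^(k-1) for k = 1..n.
k = np.arange(n).reshape(-1, 1)              # exponents k-1 = 0, ..., n-1
s = (z**k) @ c

# Vandermonde system (1.2): recover the c_i from the first r samples,
# assuming the knots are already known.
W = np.vander(z, N=r, increasing=True).T     # W[i, j] = z_j^i
print(np.allclose(np.linalg.solve(W, s[:r]), c))   # True
```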

There are two well-known methods for solving our problem, namely Prony's [8] and Kung's [6]. Vandevoorde [9] presented a single matrix-pencil-based framework that includes [8] and [6] as special cases. By adapting an implicit Lanczos process in our matrix pencil framework, we present an exponential decomposition algorithm which requires only O(n^2) operations and O(n) storage, compared to the O(n^3) operations and O(n^2) storage required by previous methods.

The relation between this problem and a large subset of the class of Hankel matrices was fairly widely known, but a generalization was unavailable [3]. We propose a new variant of the Hankel-Vandermonde decomposition and prove that it exists for any given Hankel matrix. We also observe that this Hankel-Vandermonde decomposition of a Hankel matrix is rank-revealing. As with many other rank-revealing factorizations, the Hankel-Vandermonde decomposition can be used to find a low rank approximation to the given perturbed matrix. In addition to the computational savings, our algorithm has an advantage over many other popular methods in that it produces an approximation that exhibits the exact desired Hankel structure.

This paper is organized as follows. After introducing notations in Section 2, we discuss the techniques for determining rank and separating signals from noise in Section 3. In Section 4, we describe the matrix pencil framework and show how it unifies two previously known exponential decomposition methods. We devote Section 5 to our new fast algorithm based on the framework, and Section 6 to our new Hankel-Vandermonde decomposition.

2. Notations

We adopt the Matlab notation; for example, the symbol A_{i:j,k:l} denotes the submatrix formed by intersecting rows i through j and columns k through l of A, and A_{:,k:l} is the submatrix consisting of columns k through l of A. We define E^{(m)} as the m x m exchange matrix, with ones on the antidiagonal and zeros elsewhere:

    E^{(m)} = \begin{bmatrix} 0 & \cdots & 0 & 1 \\ 0 & \cdots & 1 & 0 \\ \vdots & & & \vdots \\ 1 & 0 & \cdots & 0 \end{bmatrix},  for m = 1, 2, ....
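To make the notation concrete, a small numpy sketch (an illustration, not from the paper); note that numpy indices are 0-based and half-open, while the paper's A_{i:j,k:l} is 1-based and inclusive:

```python
import numpy as np

A = np.arange(1, 21).reshape(4, 5)    # a sample 4 x 5 matrix

# The paper's A_{2:3,1:4} (1-based, inclusive) in numpy (0-based, half-open):
sub = A[1:3, 0:4]

# Exchange matrix E^(m): ones on the antidiagonal, zeros elsewhere.
m = 4
E = np.eye(m)[::-1]                   # reverse the rows of the identity
print(np.allclose(E @ E, np.eye(m)))  # E is its own inverse
```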

We also define a sequence of n-element vectors {w^{(0)}(z), w^{(1)}(z), ..., w^{(n-1)}(z)} as follows:

    w^{(0)}(z) = \begin{bmatrix} 1 \\ z \\ \vdots \\ z^{k-1} \\ z^k \\ z^{k+1} \\ \vdots \\ z^{n-1} \end{bmatrix}
    and
    w^{(k)}(z) = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ (k+1) z \\ \vdots \\ \frac{(n-1)!}{k! \, (n-1-k)!} z^{n-1-k} \end{bmatrix},
    for k = 1, 2, ..., n-1.

We see that w^{(k)}(z) is a scaled k-th derivative of w^{(0)}(z), normalized so that the first nonzero element equals unity. We introduce the concept of a confluent Vandermonde slice of order n x m:

    C^{(m)}(z) = [ w^{(0)}(z), w^{(1)}(z), ..., w^{(m-1)}(z) ],  for m = 1, 2, ....

A confluent Vandermonde matrix [5] is a matrix of the form

    [ C^{(m_1)}(z_1), C^{(m_2)}(z_2), ..., C^{(m_j)}(z_j) ].

The positive integer m_i is called the multiplicity of z_i in the confluent Vandermonde matrix. For our purposes, we define a new and more general structure that we call a biconfluent Vandermonde matrix:

    [ B^{(m_1)}(z_1), B^{(m_2)}(z_2), ..., B^{(m_j)}(z_j) ],

where B^{(m_i)}(z_i) equals either C^{(m_i)}(z_i) or E^{(n)} C^{(m_i)}(z_i).

3. Finding Rank and Denoising Data

Recall our sequence of signal values. We want to find r, the rank of the data. Since the values contain errors, finding the rank means separating the signals from the noise. We modify our notations and use {\hat{s}_j} to represent the sequence of polluted signal values. Although we do not know the exact value of r, we would know a range in which r must lie. So we find two positive integers k and l such that k + 1 >= r, l >= r, and k + l <= n. Since n is usually large and r quite small, choosing k and l to satisfy the above inequalities is not a problem. We construct a (k+1) x l Hankel matrix \hat{H} by

    \hat{H} = \begin{bmatrix} \hat{s}_1 & \hat{s}_2 & \cdots & \hat{s}_l \\ \hat{s}_2 & \hat{s}_3 & \cdots & \hat{s}_{l+1} \\ \vdots & \vdots & & \vdots \\ \hat{s}_k & \hat{s}_{k+1} & \cdots & \hat{s}_{k+l-1} \\ \hat{s}_{k+1} & \hat{s}_{k+2} & \cdots & \hat{s}_{k+l} \end{bmatrix}.
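For illustration (not part of the paper), \hat{H} can be formed directly with scipy.linalg.hankel; the signal, noise level, and the choices of k and l below are made up:

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(0)

# Hypothetical noisy signal with r = 2 real knots plus a little noise.
n = 40
idx = np.arange(n)                    # exponents k-1 = 0, ..., n-1
s_hat = 2.0 * 0.95**idx + 1.0 * 0.6**idx + 1e-6 * rng.standard_normal(n)

# Pick k and l with k + 1 >= r, l >= r, and k + l <= n.
k, l = 19, 21
H_hat = hankel(s_hat[:k + 1], s_hat[k:k + l])  # (k+1) x l, entries s_{i+j-1}
print(H_hat.shape)                             # (20, 21)
```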

If there is no noise in the signal values, the rank of \hat{H} will be exactly r. Due to noise, the rank of \hat{H} will be greater than r. Compute a singular value decomposition (SVD) of \hat{H}:

    \hat{H} = U \Sigma V^H,

where U and V are unitary matrices, and \Sigma = diag(\sigma_1, \sigma_2, ..., \sigma_{min(k+1,l)}) \in R^{(k+1) \times l}. We find r by looking for a gap in the singular values:

    \sigma_1 >= \sigma_2 >= ... >= \sigma_r >> \sigma_{r+1} >= \sigma_{r+2} >= ... >= \sigma_{min(k+1,l)} >= 0.    (3.1)

There could be multiple gaps in the sequence of singular values, in which case we could either pick the first gap or use other information to choose r from the several possibilities. This is how we determine the numerical rank r of \hat{H}, and thereby the rank of the signal data. A (k+1) x l matrix A of exact rank r is obtained from

    A = U_{:,1:r} \Sigma_{1:r,1:r} (V_{:,1:r})^H.

Unfortunately, the matrix A would have lost its Hankel structure. We want to find a Hankel matrix H that will be close to A.

Hankel Matrix Approximation. Given a (k+1) x l matrix A of rank r, find a (k+1) x l Hankel matrix H of rank r such that ||A - H||_F = min.

A simple way to get a Hankel structure from A is to average along the antidiagonals; that is,

    s_1 = a_{11},  s_2 = (a_{12} + a_{21})/2,  s_3 = (a_{13} + a_{22} + a_{31})/3,  ....

However, the rank of the resultant Hankel matrix would inevitably be greater than r. An SVD could be used to reduce the rank to r, but the decomposition would destroy the Hankel form. Cadzow [2] proposed that we iterate using a two-step cycle of an SVD followed by averaging along the antidiagonals. He proved that the iteration would converge under some mild assumptions. Note that in the matrix approximation problem, the obvious selection of the Frobenius norm may not be appropriate, since this norm would give an unduly large weighting to the central elements of the Hankel structure. A weighted Frobenius norm could be constructed, but such a norm is no longer unitarily invariant.
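The two-step cycle is short to write down; below is a minimal sketch of a Cadzow-style iteration (an illustration of [2], not the authors' code, with a fixed iteration count in place of a convergence test):

```python
import numpy as np

def antidiagonal_average(A):
    # Project A onto the set of Hankel matrices by averaging each antidiagonal.
    p, q = A.shape
    F = np.fliplr(A)                   # antidiagonals of A become diagonals of F
    means = np.array([np.diag(F, d).mean() for d in range(q - 1, -p, -1)])
    i, j = np.indices((p, q))
    return means[i + j]                # means[t] is the average of antidiagonal t

def cadzow(A, r, iters=50):
    # Alternate a rank-r SVD truncation with the Hankel projection above.
    H = A.copy()
    for _ in range(iters):
        U, sig, Vh = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :r] * sig[:r]) @ Vh[:r, :]   # nearest rank-r matrix
        H = antidiagonal_average(H)            # restore the Hankel structure
    return H
```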

Nonetheless, let us assume that we have denoised the data and obtained a (k+1) x l Hankel matrix H of rank r:

    H = \begin{bmatrix} s_1 & s_2 & \cdots & s_l \\ s_2 & s_3 & \cdots & s_{l+1} \\ \vdots & \vdots & & \vdots \\ s_k & s_{k+1} & \cdots & s_{k+l-1} \\ s_{k+1} & s_{k+2} & \cdots & s_{k+l} \end{bmatrix}.    (3.2)

4. Matrix Pencil Framework

For this section, we take l = r in (3.2) and we assume that rank(H) = r. We present a matrix pencil framework which includes the methods of Prony [8] and Kung [6] as special cases. We begin by defining two k x r submatrices of H:

    H_1 \equiv H_{1:k,:}  and  H_2 \equiv H_{2:k+1,:}.    (4.1)

So

    H_1 = \begin{bmatrix} s_1 & s_2 & \cdots & s_{r-1} & s_r \\ s_2 & s_3 & \cdots & s_r & s_{r+1} \\ \vdots & \vdots & & \vdots & \vdots \\ s_k & s_{k+1} & \cdots & s_{r+k-2} & s_{r+k-1} \end{bmatrix}

and

    H_2 = \begin{bmatrix} s_2 & s_3 & \cdots & s_r & s_{r+1} \\ s_3 & s_4 & \cdots & s_{r+1} & s_{r+2} \\ \vdots & \vdots & & \vdots & \vdots \\ s_{k+1} & s_{k+2} & \cdots & s_{r+k-1} & s_{r+k} \end{bmatrix}.

Note that column i of H_2 equals column (i+1) of H_1, for i = 1, 2, ..., r-1. What about the last column of H_2? Let {z_i} denote the roots of the polynomial

    q_r(z) = z^r + \gamma_{r-1} z^{r-1} + \cdots + \gamma_1 z + \gamma_0.    (4.2)

For any m = 1, 2, ..., k, we can verify that

    s_{r+m} + \sum_{i=0}^{r-1} \gamma_i s_{i+m} = c_1 z_1^{m-1} q_r(z_1) + c_2 z_2^{m-1} q_r(z_2) + \cdots + c_r z_r^{m-1} q_r(z_r) = 0.

It follows that

    s_{r+m} = - \sum_{i=0}^{r-1} \gamma_i s_{i+m}.

Thus

    H_1 X = H_2,    (4.3)

where

    X = \begin{bmatrix} 0 & 0 & \cdots & 0 & -\gamma_0 \\ 1 & 0 & \cdots & 0 & -\gamma_1 \\ 0 & 1 & \cdots & 0 & -\gamma_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -\gamma_{r-1} \end{bmatrix}.

Hence X is the companion matrix of the polynomial (4.2), and the knots z_i are the eigenvalues of X.
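For illustration (not from the paper), the relation (4.3) can be confirmed numerically; np.poly supplies the coefficients \gamma_i of the monic polynomial (4.2) whose roots are the chosen, made-up knots:

```python
import numpy as np
from scipy.linalg import hankel

# Hypothetical data with r = 3 knots and k = 5.
z = np.array([0.9, 0.5, -0.4])
c = np.array([1.0, 2.0, -1.0])
r, k = 3, 5
s = np.array([(c * z**m).sum() for m in range(r + k)])  # s_1, ..., s_{r+k}

H1 = hankel(s[:k], s[k - 1:k - 1 + r])   # k x r, rows (s_m, ..., s_{m+r-1})
H2 = hankel(s[1:k + 1], s[k:k + r])      # the same rows shifted by one

p = np.poly(z)                # monic coefficients of q_r, highest power first
gamma = p[1:][::-1]           # (gamma_0, gamma_1, ..., gamma_{r-1})
X = np.zeros((r, r))
X[1:, :-1] = np.eye(r - 1)    # ones on the subdiagonal
X[:, -1] = -gamma             # last column carries -gamma_i
print(np.allclose(H1 @ X, H2))           # True
```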

Let v_i denote an eigenvector of X corresponding to z_i; that is,

    X v_i = z_i v_i.

It follows from (4.3) that

    z_i H_1 v_i = H_2 v_i.

In general, the knots z_i are the eigenvalues of a matrix Y satisfying the matrix equation

    F H_1 G Y = F H_2 G,    (4.4)

for any nonsingular F and G. In other words, let F and G be any nonsingular matrices of orders k and r respectively; then the knots z_i are the eigenvalues of the pencil equation

    F H_2 G v - \lambda F H_1 G v = 0.

4.1 Prony's Method

For Prony's method, we consider the case when n = 2r and set k = r. Let F = G = I in the framework (4.4); then the knots are the eigenvalues of the matrix Y satisfying H_1 Y = H_2. Now, (4.3) shows that the companion matrix X of the polynomial (4.2) satisfies this matrix equation. Thus the roots of the polynomial (4.2) are the knots. Moreover, the last column of (4.3) indicates that the coefficient vector g = (\gamma_0, \gamma_1, ..., \gamma_{r-1})^T of the polynomial (4.2) is the solution of the r x r Yule-Walker system

    H_1 g = -(s_{r+1}, s_{r+2}, ..., s_{2r})^T.    (4.5)

This leads to Prony's method:

Prony's Method
1. Solve the square system (4.5), i.e., n = 2r, for the coefficients \gamma_i;
2. Determine the roots z_i of the polynomial q_r(z) of (4.2);
3. Find the coefficients c_i by solving the square Vandermonde system W^{(r)} c = s^{(r)} of (1.2).
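The three steps translate directly into numpy (a sketch for illustration; it assumes exact data with n = 2r and uses the sign convention of (4.5)):

```python
import numpy as np
from scipy.linalg import hankel

def prony(s):
    # Prony's method for n = 2r noise-free samples s_1, ..., s_{2r}.
    r = len(s) // 2
    H1 = hankel(s[:r], s[r - 1:2 * r - 1])     # r x r Hankel matrix
    g = np.linalg.solve(H1, -s[r:])            # Yule-Walker system (4.5)
    q = np.concatenate(([1.0], g[::-1]))       # q_r(z), highest power first
    z = np.roots(q)                            # step 2: knots are the roots
    W = np.vander(z, N=r, increasing=True).T   # step 3: Vandermonde (1.2)
    c = np.linalg.solve(W, s[:r])
    return z, c

# Made-up example: recover the knots 0.8 and -0.3 from four samples.
z0, c0 = np.array([0.8, -0.3]), np.array([1.0, 2.0])
s = np.array([(c0 * z0**m).sum() for m in range(4)])
print(prony(s))
```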

4.2 Kung's Method

Now we derive Kung's Hankel SVD method [6] using our framework. Let

    H = U \Sigma V^H

be the SVD of H in (3.2). Denote

    U_1 = U_{1:k,:}  and  U_2 = U_{2:k+1,:}.

Then we have

    H_1 = U_1 \Sigma V^H  and  H_2 = U_2 \Sigma V^H.

Setting F = I and G = V \Sigma^{-1} in the framework (4.4), we know that the eigenvalues of Y in

    U_1 Y = U_2    (4.6)

are the knots z_i. The matrix Y can be found by an approximation method, for example, least squares. The following procedure is sometimes referred to as the Hankel SVD (HSVD) algorithm:

Kung's HSVD Method
1. Solve (4.6) for Y in a least squares sense;
2. Find the knots z_i as the eigenvalues of Y;
3. Find the coefficients c_i by solving (1.2).

Kung's method computes better results than Prony's method; see [9] for numerical examples. However, it is based on the SVD, which is expensive to compute.
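A compact sketch of the HSVD procedure (illustration only, not the authors' code; numpy's lstsq stands in for "solve in a least squares sense", and the leading r singular vectors are used):

```python
import numpy as np
from scipy.linalg import hankel

def hsvd(s, k, l, r):
    # Kung's HSVD method on a (k+1) x l Hankel matrix built from s.
    H = hankel(s[:k + 1], s[k:k + l])
    U = np.linalg.svd(H, full_matrices=False)[0]
    U1, U2 = U[:-1, :r], U[1:, :r]              # U_{1:k,:} and U_{2:k+1,:}
    Y = np.linalg.lstsq(U1, U2, rcond=None)[0]  # solve (4.6) by least squares
    z = np.linalg.eigvals(Y)                    # step 2: knots
    W = np.vander(z, N=r, increasing=True).T
    c = np.linalg.solve(W, s[:r])               # step 3: coefficients via (1.2)
    return z, c
```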

5. Lanczos Process

In this section, we assume that k = r. First, we expand H into a 2r x 2r Hankel matrix

    \hat{H} = \begin{bmatrix} s_1 & s_2 & \cdots & s_{2r} \\ s_2 & s_3 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ s_{2r} & 0 & \cdots & 0 \end{bmatrix},

with zeros below the main antidiagonal, and assume that it is strongly nonsingular, i.e., all its leading principal submatrices are nonsingular. Suppose that we have a triangular decomposition

    \hat{H} = R^T D R,

where R is upper triangular and D diagonal. Since the Hankel matrix H in (3.2) equals \hat{H}_{1:k+1,1:r}, we have

    H = (R_{1:r,1:k+1})^T D_{1:r,1:r} R_{1:r,1:r}.    (5.1)

Thus

    H_1 \equiv H_{1:k,:} = (R_{1:r,1:k})^T D R_{1:r,1:r}

and

    H_2 \equiv H_{2:k+1,:} = (R_{1:r,2:k+1})^T D R_{1:r,1:r}.

Setting F = I and G = (R_{1:r,1:r})^{-1} (D_{1:r,1:r})^{-1} in our framework (4.4), we know that the eigenvalues of a matrix Y satisfying

    (R_{1:r,1:k})^T Y = (R_{1:r,2:k+1})^T    (5.2)

are the desired knots. Now, using a Lanczos process, we show that we can find a tridiagonal matrix satisfying (5.2). Let e = (1, 1, ..., 1)^T and D(z) = diag(z_1, ..., z_r); then the transpose of the Vandermonde matrix,

    W^T = [ e, D(z) e, ..., D(z)^k e ],

is a Krylov matrix. Similar to the standard Lanczos method [5], we can find a tridiagonal matrix T such that

    B T = D(z) B  or  T^T B^T = B^T D(z),    (5.3)

where B is orthogonal with respect to D(c) = diag(c_1, ..., c_r), i.e., B^T D(c) B = D, a diagonal matrix. The Lanczos method also gives a QR decomposition of the Krylov matrix,

    W^T = B R_{1:r,1:k+1},    (5.4)

where R_{1:r,1:k+1} is an r x (k+1) upper trapezoidal matrix. It follows that

    (W_{1:k,:})^T = B R_{1:r,1:k}  and  (W_{2:k+1,:})^T = B R_{1:r,2:k+1}.

It is obvious that (W_{2:k+1,:})^T = D(z) (W_{1:k,:})^T. Then we have

    D(z) B R_{1:r,1:k} = B R_{1:r,2:k+1}.

From (5.3), substituting B T for D(z) B in the above equation, we get

    T R_{1:r,1:k} = R_{1:r,2:k+1}  or  (R_{1:r,1:k})^T T^T = (R_{1:r,2:k+1})^T.    (5.5)

On the other hand, it can easily be verified by straightforward multiplication that

    H = W_{1:k+1,:} D(c) (W_{1:r,:})^T.    (5.6)

Then, from (5.4), we have the decomposition

    H = (R_{1:r,1:k+1})^T D R_{1:r,1:r}.

This says that the upper trapezoidal matrix R_{1:r,1:k+1} computed by the Lanczos method gives the decomposition (5.1) of H, and that we can find a tridiagonal T^T satisfying (5.2). Thus the eigenvalues of T^T are the knots.
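For illustration (not from the paper), the factorization (5.6) can be checked numerically with made-up data and k = r:

```python
import numpy as np
from scipy.linalg import hankel

z = np.array([0.9, 0.5, -0.4])                 # hypothetical knots
c = np.array([1.0, 2.0, -1.0])                 # hypothetical coefficients
r = k = 3
s = np.array([(c * z**m).sum() for m in range(k + r)])   # s_1, ..., s_{k+r}

H = hankel(s[:k + 1], s[k:k + r])              # (k+1) x r Hankel matrix (3.2)
W = np.vander(z, N=k + 1, increasing=True).T   # (k+1) x r, W[m, i] = z_i^m
print(np.allclose(H, W @ np.diag(c) @ W[:r, :].T))   # verifies (5.6): True
```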

In fact, the equations in (5.3) also show directly that the eigenvalues of T^T are the knots z_i. Moreover, (5.5) gives a three-term recursion for the rows of R, which leads to an efficient method for generating the tridiagonal T and the rows of R. To start the process, we note that the first row of R, or the first column of R^T, is the scaled first column of \hat{H}, since \hat{H} = R^T D R with R upper triangular and D diagonal. The process is outlined as follows; for details, see [9].

Lanczos Process
1. Initialize R_{1,1:2r} = (s_1, ..., s_{2r})/s_1;
2. Set \alpha_1 = R_{1,2};
3. Use the recurrence

       \beta_{i-1}^2 R_{i,i:2r-i+1} = R_{i-1,i+1:2r-i+2} - \alpha_{i-1} R_{i-1,i:2r-i+1} - \beta_{i-2}^2 R_{i-2,i:2r-i+1}

   to compute a new row R_{i,i:2r-i+1};
4. Compute the new coefficients

       \alpha_i = R_{i,i+1} - R_{i-1,i}  and  \beta_i^2 = R_{i,i+2} - \alpha_i R_{i,i+1} - \beta_{i-1}^2 R_{i-1,i+1}.

The tridiagonal T^T is of the form

    T^T = \begin{bmatrix} \alpha_1 & \beta_1^2 & & 0 \\ 1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_{r-1}^2 \\ 0 & & 1 & \alpha_r \end{bmatrix}.

Then we transform T into a complex symmetric tridiagonal matrix by scaling its rows and columns. Since the transformed T is complex symmetric, we apply the complex-orthogonal transformations in the QR method [4, 7] to compute its eigenvalues, i.e., the knots.
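The outline above translates into the following sketch (a reconstruction under the stated conventions, with \beta_0^2 = 0, no safeguard against breakdown, and a general eigensolver applied to T^T in place of the complex-orthogonal QR method of [4, 7]):

```python
import numpy as np

def lanczos_knots(s):
    # Knots from s_1, ..., s_{2r} via the Lanczos process outlined above.
    s = np.asarray(s, dtype=complex)
    r = len(s) // 2
    n = 2 * r
    R = np.zeros((r + 1, n + 2), dtype=complex)   # 1-based rows and columns
    alpha = np.zeros(r + 1, dtype=complex)
    beta2 = np.zeros(r + 1, dtype=complex)        # beta2[i] holds beta_i^2
    R[1, 1:n + 1] = s / s[0]                      # step 1
    alpha[1] = R[1, 2]                            # step 2
    beta2[1] = R[1, 3] - alpha[1] * R[1, 2]       # step 4 with i = 1
    for i in range(2, r + 1):
        # step 3: three-term recurrence (columns shift left by one)
        row = R[i - 1, 2:] - alpha[i - 1] * R[i - 1, 1:-1] \
              - beta2[i - 2] * R[i - 2, 1:-1]
        R[i, 1:n + 1] = row / beta2[i - 1]
        # step 4: new recurrence coefficients
        alpha[i] = R[i, i + 1] - R[i - 1, i]
        beta2[i] = R[i, i + 2] - alpha[i] * R[i, i + 1] \
                   - beta2[i - 1] * R[i - 1, i + 1]
    # assemble T^T and return its eigenvalues, i.e., the knots
    TT = np.diag(alpha[1:]) + np.diag(beta2[1:r], 1) + np.diag(np.ones(r - 1), -1)
    return np.linalg.eigvals(TT)

s = [2.0, 3.0, 5.0, 9.0]                  # s_k = 1^(k-1) + 2^(k-1): knots {1, 2}
print(np.sort_complex(lanczos_knots(s)))  # approximately [1.+0.j, 2.+0.j]
```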

6. A New Hankel-Vandermonde Decomposition

The matrix product formulation (5.6) does not apply to an arbitrary Hankel matrix. Even when it exists, the exponential decomposition is not always unique. In this section, we generalize this Hankel factorization so that the new decomposition exists for any Hankel matrix and exhibits the rank of the given Hankel matrix.

We start by establishing some useful properties of Hankel matrices which are closely related to Prony's method. Observe that the signal model (1.1) describes the solution of a linear difference equation (LDE) and that Prony's method computes the coefficients \gamma_i in its characteristic polynomial

    y_{k+r} + \gamma_{r-1} y_{k+r-1} + \cdots + \gamma_0 y_k = 0,

whose solution starts with the sequence (s_1, s_2, ..., s_{2r-1}). Indeed, the coefficients \gamma_i form the solution of the Hankel system

    \begin{bmatrix} s_1 & s_2 & \cdots & s_r \\ s_2 & s_3 & \cdots & s_{r+1} \\ \vdots & \vdots & & \vdots \\ s_{r-1} & s_r & \cdots & s_{2r-2} \\ s_r & s_{r+1} & \cdots & s_{2r-1} \end{bmatrix} \begin{bmatrix} \gamma_0 \\ \gamma_1 \\ \vdots \\ \gamma_{r-1} \end{bmatrix} = - \begin{bmatrix} s_{r+1} \\ s_{r+2} \\ \vdots \\ s_{2r-1} \\ \eta \end{bmatrix},    (6.1)

where \eta can be chosen arbitrarily. Thus the \gamma_i are determined by the parameter \eta.

In preparation for a general result, we introduce the new concept of a Prony \eta-continuation. Let H be an r x r nonsingular Hankel matrix. The k x m Prony \eta-continuation of H is the Hankel matrix

    Pro(H, \eta) = \begin{bmatrix} s_1(\eta) & s_2(\eta) & \cdots & s_m(\eta) \\ s_2(\eta) & s_3(\eta) & \cdots & s_{m+1}(\eta) \\ \vdots & \vdots & & \vdots \\ s_k(\eta) & s_{k+1}(\eta) & \cdots & s_{k+m-1}(\eta) \end{bmatrix},

whose entries s_i(\eta) are determined by the solution (\gamma_0, ..., \gamma_{r-1})^T of (6.1). Note that s_i(\eta) = s_i for i < 2r, that s_{2r}(\eta) = \eta, and that rank(Pro(H, \eta)) = r if k >= r and m >= r. Thus we embed a square and nonsingular H into a general Hankel matrix.

Using the Prony continuation, we are able to prove that a Hankel matrix can be decomposed into a sum of two Prony continuations of nonsingular Hankel matrices; we refer the reader to [9] for the proof. Let H be a k x k Hankel matrix of rank r. Then there exist values \eta_1 and \eta_2, and nonsingular Hankel matrices H_1 and H_2, with respective sizes r_1 x r_1 and r_2 x r_2, such that r = r_1 + r_2 and

    H = Pro(H_1, \eta_1) + E^{(k)} Pro(E^{(k)} H_2 E^{(k)}, \eta_2) E^{(k)}.
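For illustration (not from the paper), a Prony \eta-continuation can be assembled by solving (6.1) and then extending the sequence with the induced recurrence:

```python
import numpy as np
from scipy.linalg import hankel

def prony_continuation(s, eta, k, m):
    # k x m Prony eta-continuation of the r x r nonsingular Hankel matrix
    # built from s = (s_1, ..., s_{2r-1}), following (6.1).
    r = (len(s) + 1) // 2
    rhs = -np.concatenate((s[r:], [eta]))         # -(s_{r+1},...,s_{2r-1},eta)
    gamma = np.linalg.solve(hankel(s[:r], s[r - 1:]), rhs)
    seq = list(s) + [eta]                         # s_i(eta) = s_i for i < 2r
    while len(seq) < k + m - 1:                   # extend by the recurrence
        seq.append(-np.dot(gamma, seq[-r:]))
    seq = np.asarray(seq)
    return hankel(seq[:k], seq[k - 1:k + m - 1])

# Made-up example: H = [1] (r = 1) continued with eta = 2 gives the
# rank-one Hankel matrix of the sequence 1, 2, 4, 8, ...
print(prony_continuation(np.array([1.0]), 2.0, 4, 4))
```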

rank(h) = min(k, m, r); Franklin T Luk 285 The values z i can be chosen to lie on the unit disk, ie, z i 1 Conclusion We have proposed a matrix-pencil framework, which unifies previously known exponential decomposition methods; developed a new fast Lanczos based exponential decomposition algorithm; presented a general Hankel- Vandermonde decomposition A low rank Hankel approximation to a given Hankel matrix can be derived from our general Hankel-Vandermonde decomposition (see [9]) Bibliography 1 A Björck and V Pereyra, Solution of Vandermonde systems of equations, Math Comp, 24 (1970), pp 893 903 2 JA Cadzow, Signal enhancement a composite property mapping algorithm, IEEE Trans Acoust, Speech, Sig Proc, 36 (1988), pp 49 62 3 H Chen, S Van Huffel, and J Vandewalle, Bandpass filtering for the HTLS estimation algorithm: Design, evaluation and SVD analysis, In M Moonen and B De Moor, editors, SVD and Signal Processing III, pp 391 398, Amsterdam, Elsevier, 1995 4 JK Cullum and RA Willoughby, A QL procedure for computing the eigenvalues of complex symmetric tridiagonal matrices, SIAM J Matrix Anal Appl, 17:1 (1996), pp 83 109 5 GH Golub and CF Van Loan, Matrix Computations, 3rd edition, Johns Hopkins University Press, Baltimore, 1996 6 SY Kung, KS Arun, and BD Rao, State-space and singular value decomposition-based approximation methods for the harmonic retrieval problem, J Opt Soc Am, 73 (1983), pp 1799 1811 7 FT Luk and S Qiao, Using complex-orthogonal transformations to diagonalize a complex symmetric matrix, Advanced Signal Processing: Algorithms, Architectures, and Implementations VII, FT Luk, Ed, Proc SPIE 3162, pp 418 425, 1997 8 R Prony, Essai expérimental et analytique sur les lois de la delatabilité et sur celles de la force expansive de la vapeur de l eau et de la vapeur de l alkool, à différentes température, J de l Ecole Polytechnique, 1 (1795), pp 24 76 9 David Vandevoorde, A fast exponential decomposition algorithm and its applications to structured matrices, Ph D thesis, Rensselaer Polytechnic Institute, Troy, New York, 1996