Analysis of a Fast Hankel Eigenvalue Algorithm

Franklin T. Luk^a and Sanzheng Qiao^b

^a Department of Computer Science, Rensselaer Polytechnic Institute, Troy, New York 12180, USA
^b Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7, Canada

ABSTRACT

This paper analyzes the important steps of an $O(n^2 \log n)$ algorithm for finding the eigenvalues of a complex Hankel matrix. The three key steps are a Lanczos-type tridiagonalization algorithm, a fast FFT-based Hankel matrix-vector product procedure, and a QR eigenvalue method based on complex-orthogonal transformations. In this paper, we present an error analysis of the three steps, as well as results from numerical experiments.

Keywords: Hankel matrix, eigenvalue decomposition, Lanczos tridiagonalization, Hankel matrix-vector multiplication, complex-orthogonal transformations, error analysis.

1. INTRODUCTION

The eigenvalue decomposition of a structured matrix has important applications in signal processing. In this paper, we consider a complex Hankel matrix $H \in \mathbb{C}^{n \times n}$:

  H = \begin{pmatrix}
        h_1     & h_2     & \cdots & h_{n-1}  & h_n      \\
        h_2     & h_3     & \cdots & h_n      & h_{n+1}  \\
        \vdots  & \vdots  &        & \vdots   & \vdots   \\
        h_{n-1} & h_n     & \cdots & h_{2n-3} & h_{2n-2} \\
        h_n     & h_{n+1} & \cdots & h_{2n-2} & h_{2n-1}
      \end{pmatrix}.   (1)

The authors^4 proposed a fast algorithm for finding the eigenvalues of $H$. Its key step is a fast Lanczos tridiagonalization, which employs a fast Hankel matrix-vector multiplication based on the fast Fourier transform (FFT). The algorithm then finds the eigenvalues with a QR-like diagonalization procedure built on complex-orthogonal transformations. In this paper, we present an error analysis and discuss the stability properties of the key steps of the algorithm.

This paper is organized as follows. In Section 2, we discuss eigenvalue computation using complex-orthogonal transformations, and describe the overall eigenvalue procedure and its three key steps. In Section 3, we introduce notation and error results for some basic complex arithmetic operations. We analyze the orthogonality of the Lanczos vectors in Section 4 and the accuracy of the Hankel matrix-vector multiplication in Section 5. In Section 6, we derive the condition number of a complex-orthogonal plane rotation and give a backward error analysis of the application of this transformation to a vector. Numerical results are presented in Section 7.

Supported by the Natural Sciences and Engineering Research Council of Canada under grant OGP463.
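Since $H$ is constant along each antidiagonal, it is determined by the $2n-1$ parameters $h_1, \ldots, h_{2n-1}$, with $H(i,j) = h_{i+j-1}$. As a small illustration (ours, not from the paper), MATLAB's built-in hankel function assembles the matrix (1) from its first column and last row:

  % Build the n-by-n Hankel matrix (1) from a complex vector h of length 2n-1.
  n = 4;
  h = complex(randn(2*n-1,1), randn(2*n-1,1));
  H = hankel(h(1:n), h(n:2*n-1));   % first column h(1:n), last row h(n:2*n-1)
  % H(i,j) equals h(i+j-1); H is complex-symmetric (H = H.'), not Hermitian.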

2. COMPLEX-ORTHOGONAL TRANSFORMATIONS

An eigenvalue decomposition of a nondefective matrix $H$ is given by

  H = X D X^{-1}.   (2)

We pick the matrix $X$ to be complex-orthogonal, that is, $X X^T = I$; then $H = X D X^T$. We apply a special Lanczos tridiagonalization to the Hankel matrix (assuming that the Lanczos process does not terminate prematurely):

  H = Q J Q^T,   (3)

where $Q$ is complex-orthogonal and $J$ is complex-symmetric tridiagonal:

  J = \begin{pmatrix}
        \alpha_1 & \beta_1  &             &             \\
        \beta_1  & \alpha_2 & \ddots      &             \\
                 & \ddots   & \ddots      & \beta_{n-1} \\
                 &          & \beta_{n-1} & \alpha_n
      \end{pmatrix}.   (4)

Let $Q = (q_1, q_2, \ldots, q_n)$. Since $Q^T Q = I$, we see that

  q_i^T q_j = \delta_{ij}.   (5)

We call the property defined by (5) complex-orthogonality (c-orthogonality for short).

The dominant cost of the Lanczos method is matrix-vector multiplication. We propose an $O(n \log n)$ FFT-based multiplication scheme, so that we can tridiagonalize a Hankel matrix in $O(n^2 \log n)$ operations. Given $w = (w_1, w_2, \ldots, w_n)^T \in \mathbb{C}^n$, we want to compute the product $p = Hw$. Define $\hat{c} \in \mathbb{C}^{2n-1}$ by

  \hat{c} = (h_n, h_{n+1}, \ldots, h_{2n-1}, h_1, h_2, \ldots, h_{n-1})^T,   (6)

and $\hat{w} \in \mathbb{C}^{2n-1}$ by

  \hat{w} = (w_n, w_{n-1}, \ldots, w_1, 0, \ldots, 0)^T.   (7)

Let $\mathrm{fft}(v)$ denote the one-dimensional FFT of a vector $v$, $\mathrm{ifft}(v)$ the one-dimensional inverse FFT of $v$, and ".*" the componentwise multiplication of two vectors. The desired product is given by the first $n$ elements of the vector $\mathrm{ifft}(\mathrm{fft}(\hat{c}) .* \mathrm{fft}(\hat{w}))$. The authors^4 showed that Algorithm 2 requires $3n\log(n) + O(n)$ flops (real floating-point operations).

To maintain the symmetry and tridiagonality of $J$, we apply complex-orthogonal rotations in the QR iterations: $J = W D W^T$, where $W$ is complex-orthogonal and $D$ is diagonal. Thus we get (2) with $X = QW$. The workhorse of our QR-like iteration^3 is a complex-orthogonal "plane rotation" $P \in \mathbb{C}^{n \times n}$; that is, $P$ is an identity matrix except for four strategic positions formed by the intersection of the $i$th and $j$th rows and columns:

  G = \begin{pmatrix} p_{ii} & p_{ij} \\ p_{ji} & p_{jj} \end{pmatrix}
    = \begin{pmatrix} c & s \\ -s & c \end{pmatrix},   (8)

where $c^2 + s^2 = 1$. The matrix $G$ is constructed to annihilate the second component of a 2-vector $x = (x_1, x_2)^T$. We present our computational schemes in the following.

Overall Procedure (Hankel Eigenvalue Computation). Given $H$, compute all its eigenvalues.
1. Apply a Lanczos procedure (Algorithm 1) to reduce $H$ to a tridiagonal form $J$, using Algorithm 2 to compute the Hankel matrix-vector products.
2. Apply a QR-type procedure (Algorithm 3) to find the eigenvalues of $J$.
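Using the construction made explicit in Section 6 ($c = x_1/\tau$, $s = x_2/\tau$ with $\tau = \sqrt{x^T x}$), a minimal MATLAB sketch of such a rotation is as follows; the data vector is our own example and we assume $x^T x \ne 0$:

  % Build a complex-orthogonal G = [c s; -s c], c^2 + s^2 = 1,
  % that annihilates x(2); tau is a complex square root.
  x   = [1+2i; 3-1i];               % example data (ours)
  tau = sqrt(x.'*x);                % unconjugated inner product
  c   = x(1)/tau;  s = x(2)/tau;
  G   = [c, s; -s, c];
  norm(G.'*G - eye(2))              % c-orthogonal up to roundoff
  G*x                               % equals [tau; 0] up to roundoff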

Algorithm 1 (Lanczos Tridiagonalization). Given $H$, this algorithm computes a tridiagonal matrix $J$.

  Initialize $q_1$ such that $q_1^T q_1 = 1$; $v_1 = H q_1$;
  for $j = 1:n$
      $\alpha_j = q_j^T v_j$;
      $r_j = v_j - \alpha_j q_j$;
      if $j < n$
          $\beta_j = \sqrt{r_j^T r_j}$;
          if $\beta_j = 0$, break; end;
          $q_{j+1} = r_j / \beta_j$;
          $v_{j+1} = H q_{j+1} - \beta_j q_j$;
      end
  end

Algorithm 2 (Fast Hankel Matrix-Vector Multiplication). Given $w \in \mathbb{C}^n$, compute $p = Hw$.

  1. Construct the vectors $\hat{c}$ and $\hat{w}$ as in (6) and (7).
  2. Calculate $y \in \mathbb{C}^{2n-1}$ by $y = \mathrm{ifft}(\mathrm{fft}(\hat{c}) .* \mathrm{fft}(\hat{w}))$.
  3. The desired $p \in \mathbb{C}^n$ is given by the first $n$ elements of $y$: $p = (y_1, y_2, \ldots, y_{n-1}, y_n)^T$.

Algorithm 3 (Complex-Symmetric QR Step). Given $J \in \mathbb{C}^{m \times m}$, this algorithm implements one step of the QR method with the Wilkinson shift.

  Initialize $Q = I$;
  Find the eigenvalue $\sigma$ of $J_{m-1:m,\,m-1:m}$ that is closer to $J_{mm}$;
  Set $x_1 = J_{11} - \sigma$; $x_2 = J_{21}$;
  for $k = 1:m-1$
      Find a complex-orthogonal matrix $P_k^T$ to annihilate $x_2$ using $x_1$;
      $J = P_k^T J P_k$; $Q = Q P_k$;
      if $k < m-1$
          $x_1 = J_{k+1,k}$; $x_2 = J_{k+2,k}$;
      end if
  end for

3. ERROR ANALYSIS FUNDAMENTALS

Let $u$ denote the unit roundoff. Since the objective of this paper is to analyze the stability of our algorithm^4 and not to derive precise error bounds, we carry out first-order error analysis. Sometimes we ignore moderate constant coefficients. The following results for the basic complex arithmetic operations are from Higham^2:

  fl(x \pm y) = (x \pm y)(1 + \delta),  |\delta| \le u,   (9)
  fl(xy) = xy(1 + \delta),  |\delta| \le \sqrt{2}\,\gamma_2,   (10)
  fl(x/y) = (x/y)(1 + \delta),  |\delta| \le \sqrt{2}\,\gamma_4.   (11)

Using (10) and, as in the real case,^2 repeatedly applying (9), we can show that for two $n$-vectors $x$ and $y$

  fl(x^T y) = x^T (y + \Delta y) = (x + \Delta x)^T y,   (12)

where

  |\Delta y| \le \gamma_{n+1} |y|,  |\Delta x| \le \gamma_{n+1} |x|,  \gamma_{n+1} = \frac{(n+1)u}{1 - (n+1)u}.

We assume that $nu \ll 1$, so we can think of $\gamma_n$ as approximately $nu$.
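Algorithm 2 above is short enough to state as runnable MATLAB. The following sketch is ours (the function name and the final check are not from the paper); it can be verified against MATLAB's hankel:

  function p = fhmv(h, w)
  % Fast Hankel matrix-vector product p = H*w, following Algorithm 2.
  % h is the column vector (h_1, ..., h_{2n-1}) defining H(i,j) = h(i+j-1);
  % w is an n-vector. Cost: three FFTs of length 2n-1, i.e. O(n log n).
  n = length(w);
  chat = [h(n:2*n-1); h(1:n-1)];      % the vector of (6)
  what = [w(n:-1:1); zeros(n-1,1)];   % the vector of (7)
  y = ifft(fft(chat) .* fft(what));   % circular convolution of length 2n-1
  p = y(1:n);
  end

  % Check: n = 8; h = complex(randn(2*n-1,1), randn(2*n-1,1));
  %        w = complex(randn(n,1), randn(n,1));
  %        norm(fhmv(h, w) - hankel(h(1:n), h(n:2*n-1))*w)   % of order 1e-15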

4. LANCZOS TRIDIAGONALIZATION

Analogous to the analysis of the conventional Lanczos tridiagonalization,^5 we shall show that the column vectors $q_j$ can lose c-orthogonality. Let

  \hat{\alpha}_j = fl(\hat{q}_j^T \hat{v}_j) = \hat{q}_j^T (\hat{v}_j + \Delta v)

for some $\Delta v$. It follows immediately from (12) that $\|\Delta v\|_2 \le \gamma_{n+1} \|\hat{v}_j\|_2$. Thus

  \hat{\alpha}_j = \hat{q}_j^T \hat{v}_j + \epsilon,  |\epsilon| \le \gamma_{n+1} \|\hat{q}_j\|_2 \|\hat{v}_j\|_2,   (13)

and

  |\hat{\alpha}_j| \le (1 + \gamma_{n+1}) \|\hat{q}_j\|_2 \|\hat{v}_j\|_2.   (14)

If

  \hat{r}_j = fl(\hat{v}_j - \hat{\alpha}_j \hat{q}_j) = (\hat{v}_j + \Delta v) - \hat{\alpha}_j (\hat{q}_j + \Delta q)   (15)

for some $\Delta v$ and $\Delta q$, then (9) and (10) show that

  \|\Delta q\|_2 \le \sqrt{2}\,\gamma_2 \|\hat{q}_j\|_2,  \|\Delta v\|_2 \le u \|\hat{v}_j - \hat{\alpha}_j \hat{q}_j\|_2.

Denoting by $\Delta r = \Delta v - \hat{\alpha}_j \Delta q$ the first-order error in $\hat{r}_j$ and using (14), we get

  \|\Delta r\|_2 \le u\|\hat{v}_j - \hat{\alpha}_j \hat{q}_j\|_2 + |\hat{\alpha}_j|\,\sqrt{2}\,\gamma_2 \|\hat{q}_j\|_2
               \le \bigl(u + (\sqrt{2}\,\gamma_2 + u)\|\hat{q}_j\|_2^2\bigr) \|\hat{v}_j\|_2.   (16)

Now, from (11), we have

  \hat{q}_{j+1} = fl(\hat{r}_j / \hat{\beta}_j) = (\hat{r}_j + \Delta r')/\hat{\beta}_j,  \|\Delta r'\|_2 \le \sqrt{2}\,\gamma_4 \|\hat{r}_j\|_2;

that is,

  \hat{\beta}_j \hat{q}_{j+1} = \hat{r}_j + \Delta r',  \|\Delta r'\|_2 \le \sqrt{2}\,\gamma_4 \|\hat{r}_j\|_2.   (17)

To check c-orthogonality, we multiply both sides of the above equation by $\hat{q}_j^T$:

  \hat{\beta}_j \hat{q}_j^T \hat{q}_{j+1} = \hat{q}_j^T (\hat{r}_j + \Delta r')
    = \hat{q}_j^T \hat{v}_j - \hat{\alpha}_j \hat{q}_j^T \hat{q}_j + \hat{q}_j^T (\Delta r + \Delta r').

Since $\hat{q}_j$ is normalized and $\hat{q}_j^T \hat{v}_j = \hat{\alpha}_j - \epsilon$ from (13), we get

  \hat{\beta}_j \hat{q}_j^T \hat{q}_{j+1} = -\epsilon + \hat{q}_j^T (\Delta r + \Delta r').   (18)

Next we derive a bound for $\|\Delta r'\|_2$ by using (17), (15), and (14):

  \|\Delta r'\|_2 \le \sqrt{2}\,\gamma_4 \|\hat{v}_j - \hat{\alpha}_j \hat{q}_j\|_2
                 \le \sqrt{2}\,\gamma_4 (1 + \|\hat{q}_j\|_2^2) \|\hat{v}_j\|_2.   (19)

It follows from (16) and (19) that

  \|\Delta r + \Delta r'\|_2 \le \|\Delta r\|_2 + \|\Delta r'\|_2
    \le \bigl((\sqrt{2}\,\gamma_2 + u)\|\hat{q}_j\|_2^2 + u\bigr)\|\hat{v}_j\|_2 + \sqrt{2}\,\gamma_4 (1 + \|\hat{q}_j\|_2^2)\|\hat{v}_j\|_2
    = \bigl((\sqrt{2}\,\gamma_2 + \sqrt{2}\,\gamma_4 + u)\|\hat{q}_j\|_2^2 + (\sqrt{2}\,\gamma_4 + u)\bigr) \|\hat{v}_j\|_2.

Finally, (13) and (18) imply that

  |\hat{\beta}_j \hat{q}_j^T \hat{q}_{j+1}| \le |\epsilon| + \|\Delta r + \Delta r'\|_2 \|\hat{q}_j\|_2
    \le \bigl((\gamma_{n+1} + \sqrt{2}\,\gamma_4 + u) + (\sqrt{2}\,\gamma_2 + \sqrt{2}\,\gamma_4 + u)\|\hat{q}_j\|_2^2\bigr) \|\hat{v}_j\|_2 \|\hat{q}_j\|_2.

Using the approximation $\gamma_n \approx nu$ when $nu \ll 1$ and ignoring the moderate constant coefficients, we obtain

  |\hat{q}_j^T \hat{q}_{j+1}| \lesssim \left( \frac{(n + \|\hat{q}_j\|_2^2)\,\|\hat{q}_j\|_2 \|\hat{v}_j\|_2}{|\hat{\beta}_j|} \right) u.   (20)

The above inequality says that $\hat{q}_j$ can lose c-orthogonality when either $|\hat{\beta}_j|$ is small or $\|\hat{q}_j\|_2$ is large (recall that $\hat{q}_j$ and $\hat{\beta}_j$ denote computed results). Hence the loss is due either to cancellation in computing $\beta_j$ or to growth in $\|\hat{q}_j\|_2$, and not to the accumulation of roundoff errors. In the real case, $\|\hat{q}_j\|_2 \approx 1$ and (20) is consistent with the results of Paige,^5 whereas in the complex case the norm $\|\hat{q}_j\|_2$ can be arbitrarily large. This is an additional factor that causes the loss of c-orthogonality.
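The loss of c-orthogonality is easy to observe numerically. Below is a sketch of Algorithm 1 (ours; for clarity it uses dense products $Hq$ rather than Algorithm 2), followed by the monitor suggested by (20):

  function [alpha, beta, Q] = clanczos(H, q1)
  % Sketch of Algorithm 1 (c-orthogonal Lanczos tridiagonalization).
  % q1 must satisfy q1.'*q1 = 1 (unconjugated normalization).
  n = size(H,1);
  Q = zeros(n);  Q(:,1) = q1;
  alpha = zeros(n,1);  beta = zeros(n-1,1);
  v = H*q1;
  for j = 1:n
      alpha(j) = Q(:,j).' * v;           % unconjugated transpose throughout
      r = v - alpha(j)*Q(:,j);
      if j < n
          beta(j) = sqrt(r.' * r);       % complex square root; may be tiny
          if beta(j) == 0, break; end    % premature termination
          Q(:,j+1) = r / beta(j);
          v = H*Q(:,j+1) - beta(j)*Q(:,j);
      end
  end
  end

  % Monitor: norm(Q.'*Q - eye(n), 'fro') grows when some |beta(j)| is small
  % or some column norm norm(Q(:,j)) is large, as (20) predicts.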

5. HANKEL MATRIX-VECTOR MULTIPLICATION

The only computation in the fast Hankel matrix-vector multiplication is

  y = \mathrm{ifft}(\mathrm{fft}(\hat{c}) .* \mathrm{fft}(\hat{w})).

Denoting the computed $y$ by

  \hat{y} = fl(\mathrm{ifft}(\mathrm{fft}(\hat{c}) .* \mathrm{fft}(\hat{w}))),

we first consider the computation of $\mathrm{fft}(\hat{w})$. Assume that the weights $\omega_k^j$ in the FFT are precalculated and that the computed weights $\hat{\omega}_k^j$ satisfy

  \hat{\omega}_k^j = \omega_k^j + \epsilon_{kj},  |\epsilon_{kj}| \le \mu  for all $j$ and $k$.

Depending on the method that computes the weights, we can take $\mu = cu$, $\mu = cu \log j$, or $\mu = cuj$, where $c$ denotes a constant dependent on the method.^6 Following Higham,^2 we let

  x = \mathrm{fft}(\hat{w})  and  \hat{x} = fl(\mathrm{fft}(\hat{w})).

We have

  \|x - \hat{x}\|_2 \le \sqrt{2n-1}\; \frac{\eta \log_2(2n-1)}{1 - \eta \log_2(2n-1)}\, \|x\|_2,
  where  \eta = \mu + \gamma_4(\sqrt{2} + \mu).

Approximately,

  \|x - \hat{x}\|_2 \lesssim \sqrt{2n}\; \frac{\eta \log_2 n}{1 - \eta \log_2 n}\, \|x\|_2.   (21)

Similarly, let

  v = \mathrm{fft}(\hat{c})  and  \hat{v} = fl(\mathrm{fft}(\hat{c})).

Then

  \|v - \hat{v}\|_2 \lesssim \sqrt{2n}\; \frac{\eta \log_2 n}{1 - \eta \log_2 n}\, \|v\|_2.   (22)

Next, we consider the componentwise multiplication $z = \hat{v} .* \hat{x}$. Denoting the computed $z$ by $\hat{z} = fl(\hat{v} .* \hat{x})$, we have approximately

  \|z - \hat{z}\|_2 \lesssim u\, \|z\|_2.   (23)

Finally, we consider

  y = \mathrm{ifft}(\hat{z})  and  \hat{y} = fl(\mathrm{ifft}(\hat{z})).

Since $F_n$ is the FFT matrix, we have $F_n^{-1} = n^{-1} \overline{F}_n$. Thus, similarly to (21) and (22), we get

  \|y - \hat{y}\|_2 \lesssim \sqrt{2n}\; \frac{\eta \log_2 n}{1 - \eta \log_2 n}\, \|y\|_2.   (24)
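The $\eta \log_2 n$ error model in (21)-(24) can be probed with a quick experiment (ours, not from the paper): compute the same FFT in single and double precision, treating the double-precision result as exact. The observed relative error should sit a modest constant below $u_{\mathrm{single}} \log_2 m$:

  % Empirical check of the O(u log2 m) FFT error model.
  m = 2^12;
  x = complex(randn(m,1), randn(m,1));
  err   = norm(double(fft(single(x))) - fft(x)) / norm(fft(x));
  model = eps('single') * log2(m);   % eta*log2(m) with eta of size u_single
  [err, model]                       % err is typically well below model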

Since $n^{-1/2} F_n$ is unitary, it follows that

  \|y\|_2 = \|\mathrm{ifft}(\hat{z})\|_2 = \|\overline{F}_{2n-1}\,\hat{z}\|_2/(2n-1) = \|\hat{z}\|_2/\sqrt{2n-1}.

Consequently, using (23) and (24), we get

  \|y - \hat{y}\|_2 \lesssim \frac{\eta \log_2 n}{1 - \eta \log_2 n}\, \|\hat{z}\|_2
                   \le \frac{\eta \log_2 n}{1 - \eta \log_2 n}\, (1 + u)\,\|z\|_2.

Ignoring the second and higher order terms of $u$, we have

  \|y - \hat{y}\|_2 \lesssim \frac{\eta \log_2 n}{1 - \eta \log_2 n}\, \|\hat{v} .* \hat{x}\|_2.   (25)

Now it remains to express $\|\hat{v} .* \hat{x}\|_2$ in terms of $\|\hat{c}\|_2$ and $\|\hat{w}\|_2$. Using the inequality

  \|a .* b\|_2 \le \max_i |a_i| \cdot \|b\|_2 \le \|a\|_2 \|b\|_2

and ignoring the second and higher order terms of $u$, we obtain

  \|\hat{v} .* \hat{x}\|_2 \le \|v .* x\|_2 + \|(v - \hat{v}) .* x\|_2 + \|\hat{v} .* (x - \hat{x})\|_2
    \le \|v\|_2 \|x\|_2 + \|v - \hat{v}\|_2 \|x\|_2 + \|\hat{v}\|_2 \|x - \hat{x}\|_2
    \le \|v\|_2 \|x\|_2 + \|v - \hat{v}\|_2 \|x\|_2 + \|v\|_2 \|x - \hat{x}\|_2.

It follows from (21) and (22) that

  \|\hat{v} .* \hat{x}\|_2 \le \left(1 + 2\sqrt{2n}\; \frac{\eta \log_2 n}{1 - \eta \log_2 n}\right) \|v\|_2 \|x\|_2.   (26)

Note that

  \|v\|_2 = \|\mathrm{fft}(\hat{c})\|_2 \le \sqrt{2n-1}\, \|\hat{c}\|_2  and  \|x\|_2 \le \sqrt{2n-1}\, \|\hat{w}\|_2.

Using (25) and (26) and ignoring the second and higher order terms, we obtain

  \|y - \hat{y}\|_2 \lesssim (2n-1)\, \frac{\eta \log_2 n}{1 - \eta \log_2 n}\, \|\hat{c}\|_2 \|\hat{w}\|_2.

What does the above error bound tell us about the accuracy of the fast multiplication? For simplicity, we assume that the entries of the Hankel matrix $H$ are about the same size. Recall that $\hat{c}$ consists of the first column and the last row of $H$, so

  \|\hat{c}\|_2 \approx \sqrt{2/n}\; \|H\|_F,

and we conclude that

  \|y - \hat{y}\|_2 \lesssim 2\sqrt{2}\, n\, \frac{\eta \log_2 n}{1 - \eta \log_2 n}\, \frac{\|H\|_F \|\hat{w}\|_2}{\sqrt{n}}.

For conventional matrix-vector multiplication $Hw$, we have^2

  \|Hw - fl(Hw)\|_2 \le \gamma_n\, \|H\|_2 \|w\|_2.

Since $\gamma_n \approx nu$ and $\eta$ is of size $u$, our fast multiplication is more accurate than conventional multiplication by roughly a factor of $\log_2 n/\sqrt{n}$. In summary, the fast Hankel matrix-vector multiplication scheme is stable. This is consistent with the error bound for the FFT, which says that the FFT has an error bound smaller than that for conventional multiplication by the same factor as the reduction in the complexity of the method.^2
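This prediction can be checked numerically. The sketch below (ours) runs both products in single precision against a double-precision reference, reusing the fhmv routine sketched after Algorithm 2:

  % Fast vs. conventional Hankel product in single precision, with the
  % double-precision product as the reference.
  n = 512;
  h = complex(randn(2*n-1,1), randn(2*n-1,1));
  w = complex(randn(n,1), randn(n,1));
  H = hankel(h(1:n), h(n:2*n-1));
  ref = H*w;                                            % reference
  e_conv = norm(double(single(H)*single(w)) - ref)/norm(ref);
  e_fast = norm(double(fhmv(single(h), single(w))) - ref)/norm(ref);
  [e_conv, e_fast]    % e_fast is typically the smaller of the two

As $n$ grows, the ratio e_fast/e_conv should shrink roughly like $\log_2 n/\sqrt{n}$.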

6. CONDITION OF COMPLEX-ORTHOGONAL TRANSFORMATION

In this section, we derive the condition number of the matrix $G$ of (8) and present a backward error analysis of an application of $G$. First, consider $\mathrm{cond}(G)$. We have

  G^H G = \begin{pmatrix} t & iv \\ -iv & t \end{pmatrix},   (27)

where $t = |c|^2 + |s|^2$ and $v = 2\,\mathrm{Im}(\bar{c}s)$. It can be verified that the two eigenvalues of $G^H G$ are $t + v$ and $t - v$, and that

  \mathrm{cond}(G) = \sqrt{\frac{t + |v|}{t - |v|}}.

Suppose that $G$ is constructed to annihilate the second entry of $x = (x_1, x_2)^T$. Then in (8)

  c = x_1/\tau  and  s = x_2/\tau,  where  \tau^2 = x^T x.

Consequently, in (27),

  t = \frac{x^H x}{|x^T x|}  and  v = \frac{x^H \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} x}{|x^T x|}.

The two eigenvalues $t \pm v$ are therefore

  \frac{x^H \begin{pmatrix} 1 & -i \\ i & 1 \end{pmatrix} x}{|x^T x|}  and  \frac{x^H \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix} x}{|x^T x|}.

The condition number of $G$ can be arbitrarily large for some complex $x$. For example, when $x = (1 + \epsilon, i)^T$, the two eigenvalues are

  \frac{|2 + \epsilon|}{|\epsilon|}  and  \frac{|\epsilon|}{|2 + \epsilon|}.

This also shows that $\|G\|_2$ can be arbitrarily large. For any real $x \ne 0$, however, $t = 1$, $v = 0$, and $\mathrm{cond}(G) = \|G\|_2 = 1$.

Now we consider the application of $G$ to $y = (y_1, y_2)^T$. Let

  \hat{z} = fl(Gy) = fl\begin{pmatrix} cy_1 + sy_2 \\ -sy_1 + cy_2 \end{pmatrix}.

From (9) and (10), we get

  \hat{z} = \begin{pmatrix} (cy_1(1 + \delta_1) + sy_2(1 + \delta_2))(1 + \delta_3) \\ (-sy_1(1 + \delta_4) + cy_2(1 + \delta_5))(1 + \delta_6) \end{pmatrix},

where $|\delta_i| \le \sqrt{2}\,\gamma_2$ for $i = 1, 2, 4, 5$ and $|\delta_i| \le u$ for $i = 3, 6$. Using first-order error analysis, we derive

  \hat{z} = (G + \Delta G)y,  where  \Delta G = \begin{pmatrix} c(\delta_1 + \delta_3) & s(\delta_2 + \delta_3) \\ -s(\delta_4 + \delta_6) & c(\delta_5 + \delta_6) \end{pmatrix}

and

  |\Delta G| \le (\sqrt{2}\,\gamma_2 + u)\,|G|.

This shows that the application of $G$ is perfectly stable in the real case, but it can be unstable in the complex case because $|G|$ can be large for some complex $x$.
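A few lines of MATLAB (ours) confirm the blow-up for $x = (1 + \epsilon, i)^T$:

  % cond(G) for x = (1+ep, i) grows like |2+ep|/|ep| as ep -> 0.
  ep  = 1e-3;
  x   = [1+ep; 1i];
  tau = sqrt(x.'*x);
  c = x(1)/tau;  s = x(2)/tau;
  G = [c, s; -s, c];
  t = abs(c)^2 + abs(s)^2;  v = 2*imag(conj(c)*s);
  [t+v, t-v]    % approx. |2+ep|/|ep| and |ep|/|2+ep|
  cond(G)       % sqrt((t+|v|)/(t-|v|)) = |2+ep|/|ep|, about 2000 here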

[Figure 1: semilog plot of the error $E_{\mathrm{eig}}$ (vertical axis, "errors") versus test number (horizontal axis, "tests"). Caption: Errors in the eigenvalues of one hundred random Hankel matrices.]

7. NUMERICAL EXPERIMENTS

We implemented the fast eigenvalue algorithm for Hankel matrices in MATLAB and tested the following examples on a SUN Ultra workstation.

Example 1. We generated one hundred 20-by-20 random complex Hankel matrices. Each matrix was constructed by choosing two random complex vectors as its first column and its last row. Both the real and imaginary parts of each component of the random vectors were uniformly distributed over $[-1, 1]$. For each matrix, we measured errors by assuming that the eigenvalues computed by the MATLAB function eig are exact. Let $\hat{\lambda}_i$ and $\lambda_i$ be the eigenvalues computed by our program and by MATLAB, respectively. Figure 1 plots the square root of the sum of squares of the relative errors:

  E_{\mathrm{eig}} = \left( \sum_{i=1}^{20} \frac{|\hat{\lambda}_i - \lambda_i|^2}{|\lambda_i|^2} \right)^{1/2}.   (28)

We observe that the relative accuracy is generally better than $10^{-12}$.
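One trial of this test can be sketched as follows (ours; fast_hankel_eig stands for the Overall Procedure of Section 2 and is hypothetical, and sorting by modulus is only a crude way to pair the two spectra):

  % One trial of Example 1's error measure (28).
  n = 20;
  col = complex(2*rand(n,1)-1, 2*rand(n,1)-1);   % random first column
  row = complex(2*rand(n,1)-1, 2*rand(n,1)-1);   % random last row
  row(1) = col(n);                               % consistency on the antidiagonal
  H = hankel(col, row);
  lam  = sort(eig(H));                  % MATLAB sorts complex values by modulus
  lhat = sort(fast_hankel_eig(H));      % hypothetical: our fast algorithm
  E_eig = sqrt(sum(abs(lhat - lam).^2 ./ abs(lam).^2))    % cf. (28)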

Example 2. A rank-deficient complex Hankel matrix $H$ was generated using the Vandermonde decomposition

  H = \begin{pmatrix}
        1         & 1         & \cdots & 1         \\
        z_1       & z_2       & \cdots & z_k       \\
        \vdots    & \vdots    &        & \vdots    \\
        z_1^{n-1} & z_2^{n-1} & \cdots & z_k^{n-1}
      \end{pmatrix}
      \begin{pmatrix}
        a_1 &     &        &     \\
            & a_2 &        &     \\
            &     & \ddots &     \\
            &     &        & a_k
      \end{pmatrix}
      \begin{pmatrix}
        1      & z_1    & \cdots & z_1^{n-1} \\
        1      & z_2    & \cdots & z_2^{n-1} \\
        \vdots & \vdots &        & \vdots    \\
        1      & z_k    & \cdots & z_k^{n-1}
      \end{pmatrix}.   (29)

The $n \times n$ matrix $H$ has rank $k$. In this example, we chose $n = 10$ and $k = 6$, the $z_i$ as random complex numbers of modulus one, and the $a_i$ as random real numbers uniformly distributed on $[-1, 1]$:

  z = (0.8585 - 0.5128i, 0.9915 - 0.1301i, 0.8308 + 0.5565i, 0.0900 - 0.9959i, 0.9855 - 0.1696i, 0.3677 + 0.9299i),
  a = (0.8436, 0.4764, 0.6475, 0.886, 0.879, 0.8338).

Using (29), we generated the first column and the last row of a complex Hankel matrix. Then we slightly perturbed these two vectors by adding two small random noise vectors of size $10^{-6}$ with components normally distributed with zero mean. We obtained the following two vectors as the first column and the last row of our test Hankel matrix:

  H(1:n, 1) = (2.887 - .i, .846 - .394i, .9 - .29i, .4866 - 2.6229i, .9623 - 2.77i, .738 - .599i, .2395 - .255i, .23 + .427i, .8873 + .759i, .35 - .676i)^T,

  H(n, 1:n) = (.35 - .676i, .7279 - .566i, .542 - .7927i, .2653 - .88i, .793 - 2.353i, .696 - 3.342i, .2362 - 2.884i, .845 - .9649i, .269 + .3874i, 2.5246 + .6284i).

In the sixth and seventh Lanczos iterations, we got small subdiagonal elements:

  \beta_6 = (3.436 + .38i) \times 10^{-4}  and  \beta_7 = (2.738 + 7.32i) \times 10^{-6}.

The complex-orthogonality of the matrix $Q$ computed by the Lanczos method was lost:

  \|Q^T Q - I\|_F \approx 2.

Our fast algorithm computed the eigenvalues

  (.368 - 9.27i, 4.339 - 7.236i, .3928 + 6.754i, .86 + .94i, .442 - .355i, .6 + .25i, 0.0000 - 0.0000i, 0.0000 + 0.0000i, 4.3588 - 7.929i, .428 - 9.222i)^T.

The last two eigenvalues are spurious. For comparison, MATLAB's eig gave

  (.368 - 9.27i, 4.339 - 7.236i, .3928 + 6.754i, .86 + .94i, .442 - .355i, .6 + .25i, 0.0000 - 0.0000i, 0.0000 + 0.0000i, 0.0000 + 0.0000i, 0.0000 + 0.0000i)^T.

The errors in the first six eigenvalues were of magnitude $10^{-3}$. So, if we know the rank, or an estimate $k$ of it, we can stop the Lanczos process after $k$ iterations. This leads to an algorithm for computing the $k$ dominant eigenvalues of a complex Hankel matrix. In this example, we quit the Lanczos procedure after six iterations and computed the eigenvalues of the 6-by-6 tridiagonal matrix. All six computed eigenvalues were correct to at least four digits; the three largest were correct to at least nine digits.
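The construction (29) is easy to reproduce. The sketch below (ours) builds a rank-$k$ Hankel matrix and, as in the example above, perturbs its defining vectors at the $10^{-6}$ level:

  % Rank-k Hankel matrix via the Vandermonde decomposition (29).
  n = 10;  k = 6;
  z = exp(2i*pi*rand(k,1));          % random nodes of modulus one
  a = 2*rand(k,1) - 1;               % random real weights in [-1,1]
  V = (z.').^((0:n-1).');            % n-by-k Vandermonde, V(i,j) = z(j)^(i-1)
  H = V*diag(a)*V.';                 % H(i,j) = sum_m a(m)*z(m)^(i+j-2), Hankel
  rank(H)                            % returns k, up to roundoff
  % Perturb the defining first column and last row at the 1e-6 level:
  col = H(:,1)   + 1e-6*complex(randn(n,1), randn(n,1));
  row = H(n,:).' + 1e-6*complex(randn(n,1), randn(n,1));
  row(1) = col(n);
  Ht = hankel(col, row);             % the perturbed test matrix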

[Figure 2: semilog plot of the error $E_{\mathrm{eig}}$ (vertical axis, "errors") versus test number (horizontal axis, "tests"). Caption: Errors in the eigenvalues of one hundred random Hermitian Toeplitz matrices.]

Example 3. Our fast algorithm for complex Hankel matrices can be adapted to Hermitian Toeplitz matrices by incorporating the following simple changes:
- Replace the fast matrix-vector multiplication algorithm for Hankel matrices with one for Toeplitz matrices;
- Replace the modified Lanczos method with a conventional one for Hermitian matrices;
- Replace the modified QR method with a conventional one for Hermitian matrices.

We generated one hundred 20-element random complex vectors as the first columns of the Hermitian Toeplitz matrices. Figure 2 shows the error parameter $E_{\mathrm{eig}}$ as defined in (28). Not surprisingly, the errors here are smaller than those in Example 1; specifically, the accuracy is better than $10^{-13}$.

REFERENCES

1. G. H. Golub and C. F. Van Loan. Matrix Computations, 3rd Ed. The Johns Hopkins University Press, Baltimore, MD, USA, 1996.
2. N. J. Higham. Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1996.
3. F. T. Luk and S. Qiao. Using Complex-Orthogonal Transformations to Diagonalize a Complex-Symmetric Matrix. In Advanced Signal Processing Algorithms, Architectures, and Implementations VII, Franklin T. Luk, Editor, Proceedings of SPIE Vol. 3162, pp. 418-425, 1997.
4. F. T. Luk and S. Qiao. A Fast Eigenvalue Algorithm for Hankel Matrices. In Advanced Signal Processing Algorithms, Architectures, and Implementations VIII, Franklin T. Luk, Editor, Proceedings of SPIE Vol. 3461, pp. 249-256, 1998.
5. C. C. Paige. Error Analysis of the Lanczos Algorithm for Tridiagonalizing a Symmetric Matrix. J. Inst. Maths Applics, Vol. 18, pp. 341-349, 1976.
6. C. F. Van Loan. Computational Frameworks for the Fast Fourier Transform. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1992.