NORTHERN ILLINOIS UNIVERSITY


ABSTRACT

Name: Santosh Kumar Mohanty
Department: Mathematical Sciences
Title: Efficient Algorithms for Eigenspace Decompositions of Toeplitz Matrices
Major: Mathematical Sciences
Degree: Doctor of Philosophy

Approved by:
Dissertation Director

Date:

NORTHERN ILLINOIS UNIVERSITY

ABSTRACT

This dissertation is devoted to the study of eigenvalue problems for Hermitian and real symmetric Toeplitz matrices. Spectral properties of these matrices are reviewed and some new results are derived. Algorithms for computing the eigenspaces of these matrices are studied. Some of the algorithms are generalized or extended, and their importance is emphasized. A new algorithm for computing all the eigenpairs of a real symmetric Toeplitz matrix is proposed. All the algorithms are implemented and tested through both MATLAB and FORTRAN77 codes. The main focus of the dissertation is the algorithm for the eigenvalue problem of a real symmetric Toeplitz matrix. For matrices having well-separated eigenvalues or eigenvalues of multiplicity at most 2, the method performs well and computes all the eigenpairs in O(n^2 log n) arithmetic operations. The complexity of the algorithm may grow to O(n^3) for matrices having clustered eigenvalues. Numerical experiments confirm that this algorithm is asymptotically faster than the existing algorithms. The research reported in the dissertation will be of interest to readers across many disciplines, including applied and computational mathematics, statistics, and signal processing.

NORTHERN ILLINOIS UNIVERSITY

EFFICIENT ALGORITHMS FOR EIGENSPACE DECOMPOSITIONS OF TOEPLITZ MATRICES

A DISSERTATION SUBMITTED TO THE GRADUATE SCHOOL IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE DOCTOR OF PHILOSOPHY

DEPARTMENT OF MATHEMATICAL SCIENCES

BY SANTOSH KUMAR MOHANTY

DEKALB, ILLINOIS
DECEMBER 1993

Certification: In accordance with departmental and Graduate School policies, this dissertation is accepted in partial fulfillment of degree requirements.

Dissertation Director

Date

ACKNOWLEDGEMENTS

Foremost, I would like to thank Professor Gregory S. Ammar, my Dissertation Director, for his guidance and encouragement, for his insights, skills and zeal as a teacher and researcher, and for the many productive hours of discussion with him. I would like to express my gratitude to Professor Robert Wheeler for his guidance and support all the time. Many people in the department have been very helpful and supportive over the years. In particular, I would like to thank Professor Biswa Nath Datta for motivating my interest in supercomputing, Professor Karabi Datta for her generosity, and Professor Henry Leonard for his encouragement. I would also like to thank my friends in the department for their support and help in every aspect. This dissertation would not have been written without the help of many people, but I would like to especially thank my family, particularly my parents, for their patience and support.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER 1. INTRODUCTION
    Organization of this Dissertation

CHAPTER 2. REVIEW OF LITERATURE
    Preliminaries
    Toeplitz Solvers
    Toeplitz Eigenvalue Problem
    High Performance Computing in Linear Algebra

CHAPTER 3. EIGENSPACE DECOMPOSITION OF HERMITIAN TOEPLITZ MATRICES
    Results on the Spectral Properties
    An Algorithm for the Hermitian Toeplitz Eigenproblem with a Quartically Convergent Root-finder
    Reducing the Hermitian Toeplitz Eigenproblem to a Real Symmetric Eigenproblem of the Same Size
    Computation of Extremal Eigenvalues

CHAPTER 4. A FAST ALGORITHM FOR COMPUTING THE EIGENSPACE OF A REAL SYMMETRIC TOEPLITZ MATRIX
    An Outline of the Method
    Tools for the Algorithm
    The Algorithm
    Numerical Implementation
    Conclusions

CHAPTER 5. SUMMARY AND CONCLUSIONS

BIBLIOGRAPHY

LIST OF TABLES

3.1 Comparison of Performance of Algorithm 3.2 and Beex's Algorithm
Performance of Algorithm 3.3
Performance of Algorithm 3.4
Comparison of Performance of Algorithm 4.3 and LAPACK Code
Comparison of Growth Rate of Execution Time
Maximum Residual of the Eigenpairs and Orthogonality of the Eigenvectors

LIST OF FIGURES

3.1 Comparison of CPU Time in LAPACK (Algorithm 3.4)
Comparison of Flops in MATLAB (Algorithm 3.4)
Comparison of CPU Time (Algorithm 4.3 and LAPACK Code)
Comparison of Growth Rate (Algorithm 4.3 and LAPACK Code)

CHAPTER 1

INTRODUCTION

Many matrix problems arising in signal processing, information theory, linear prediction theory, the theory of probability, the trigonometric moment problem, the theory of orthogonal polynomials, and in several other applications involve Toeplitz matrices. There are classical fast algorithms [1, 2, 3, 23, 40, 46] for solving Toeplitz linear systems. The stability of these algorithms is questionable for indefinite Toeplitz matrices. There is a substantial number of articles [8, 9, 10, 13, 20, 24, 29, 48, 49, 50, 51] on spectral properties and eigenspace decompositions of Toeplitz matrices. Unfortunately, there is no completely satisfactory structure-exploiting numerically stable algorithm for computing the eigenspace of a Toeplitz matrix, as most of the existing methods rely on solving indefinite Toeplitz linear systems. This dissertation is devoted to the study of spectral properties and eigenspace decompositions of Hermitian Toeplitz matrices and real symmetric Toeplitz matrices. The main focus is on our algorithm for the eigenvalue problem of a real symmetric Toeplitz matrix. This algorithm reduces the problem to a number of

matrix-vector multiplications. The complexity of the algorithm depends on the distribution of the eigenvalues and varies between O(n^2 log n) and O(n^3). The best performance is obtained if the eigenvalues are well separated or have multiplicity at most 2. Some earlier results and algorithms in the field are reviewed and extended. Some new spectral properties of Hermitian Toeplitz matrices and real symmetric Toeplitz matrices are derived. The technical contents of the dissertation can be outlined as follows. A result for computing a bound for the spectrum of a Hermitian or real symmetric Toeplitz matrix and all its leading principal submatrices with O(n) flops is obtained. Analytical results concerning the eigenvalues and eigenvectors of two consecutive submatrices of a Hermitian or real symmetric Toeplitz matrix are derived. An algorithm for computing the eigenspace of a Hermitian Toeplitz matrix and all its leading principal submatrices is proposed. This algorithm has the same basic structure as Beex's algorithm [9], but uses a quartically convergent root-finder. This increases the rate of convergence and saves more than 25% of the iterations over [9]. The complexity of the algorithm is O(n^4) on a sequential machine and O(n^3) on a parallel machine with O(n) processors. A simple but important result which reduces the eigenvalue problem of a Hermitian Toeplitz matrix to the eigenvalue problem of a real symmetric matrix of the same size is reviewed. It is shown that this reduces the flops and CPU time

by 75% and the order of data movement by 50% when the computation is performed with LAPACK routines. The result of Cybenko and Van Loan [20] for computing the minimal eigenvalue of a real symmetric positive definite Toeplitz matrix is extended to compute extremal eigenvalues of a Hermitian indefinite Toeplitz matrix in O(n^2 log^2 n) flops. Also, we revise and restructure [20] to compute the extremal eigenvalues and eigenvectors of a Hermitian or real symmetric Toeplitz matrix and all its leading principal submatrices with O(n^3) flops. Then, the main results on computing the eigenspace of a real symmetric Toeplitz matrix by reducing the problem to a number of matrix-vector multiplications are presented. The algorithm is rich in the use of FFTs and level 2 and level 3 BLAS. For matrices having well-separated eigenvalues, our method involves O(n^2 log n) flops with O(n^2) data movement. The spectral properties of real symmetric Toeplitz matrices are exploited so that the algorithm performs well without any overhead in computing eigenvalues of multiplicity 2. If the eigenvalues are extremely clustered or have multiplicity greater than 2, then there is a computational overhead which may grow to O(n^3). Numerical experiments confirm that the algorithm is asymptotically faster than the existing algorithms for large matrices. During the course of the development of this algorithm, the Levinson-Durbin algorithm for solving the Yule-Walker equations and the Gohberg-Semencul formula for the inverse of a Toeplitz matrix

are efficiently implemented for complex symmetric Toeplitz matrices.

1.1 Organization of this Dissertation.

This dissertation is organized as follows. The second chapter reviews the basic results of numerical linear algebra and Toeplitz matrices that are needed frequently in this dissertation. The solution of linear systems and eigenvalue problems are discussed with reference to Hermitian and real symmetric Toeplitz matrices. The Basic Linear Algebra Subprograms (BLAS) and the Linear Algebra PACKage (LAPACK) are introduced, with their roles as algorithmic tools that ensure good performance on high-performance computers. In the third chapter, some results on the spectral properties of Hermitian and real symmetric Toeplitz matrices are presented. Two different algorithms for the eigenspace decompositions of Hermitian Toeplitz matrices are discussed. An algorithm for computing extremal eigenvalues of a Hermitian Toeplitz matrix and all its leading principal submatrices is presented. The fourth chapter contains our main results on the eigenvalue problem of a real symmetric Toeplitz matrix. Tools for the algorithm are described and the algorithm is proposed. The fifth chapter summarizes all the results of this dissertation. Related issues and problems concerning future investigation are discussed. We follow a style in this dissertation so that it can reach a wide group of

people. The following points are to be noted. Known and basic results are stated. New results are stated and their proofs are briefly outlined. Algorithms are written in the style of MATLAB code. Numerical implementation of each algorithm is conducted in MATLAB, and on a SUN SPARCstation with FORTRAN77 code.

CHAPTER 2

REVIEW OF LITERATURE

In this chapter, some basic results pertaining to the solution of linear systems and the eigenvalue problem for Toeplitz matrices are reviewed. Also, some of the recently developed linear algebra software packages that exploit high-performance computer architectures are briefly presented. Notation for use in subsequent chapters is established.

2.1 Preliminaries.

In this section, an efficient procedure for computing the matrix-vector product with a Toeplitz matrix as the coefficient matrix is outlined, and some definitions are stated.

2.1.1 Definitions.

A matrix M = M_n = [m_jk] ∈ C^{n×n} is said to be a Toeplitz matrix if m_jk = m_{j-k}, j, k = 1 : n. The Toeplitz matrix M is Hermitian if m_{j-k} = conj(m_{k-j}), j, k = 1 : n.
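The defining conditions above can be checked mechanically. The following is a minimal sketch in Python (the dissertation's own codes are in MATLAB and FORTRAN77); the function name `hermitian_toeplitz` and the 0-based indexing are illustrative choices, not part of the text.

```python
# Build an n x n Hermitian Toeplitz matrix from its first column
# m = [m_0, m_1, ..., m_{n-1}] (m_0 must be real).  Entry (j, k) is
# m_{j-k}, with m_{-d} = conj(m_d) forced by the Hermitian condition.
def hermitian_toeplitz(m):
    n = len(m)
    def entry(d):                      # d = j - k
        return m[d] if d >= 0 else m[-d].conjugate()
    return [[entry(j - k) for k in range(n)] for j in range(n)]

M = hermitian_toeplitz([2.0, 1 + 1j, 0.5j])
# Constant along each diagonal (Toeplitz), and M equals its conjugate
# transpose (Hermitian).
assert all(M[j][k] == M[j + 1][k + 1] for j in range(2) for k in range(2))
assert all(M[j][k] == M[k][j].conjugate() for j in range(3) for k in range(3))
```

Note that a Hermitian Toeplitz matrix is determined by its first column alone, which is why the algorithms below work with n parameters rather than n^2 entries.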

The symbol T is used for a real Toeplitz matrix. For any matrix S, S^t represents the transpose of S, conj(S) denotes the conjugate of S, and S* represents the conjugate transpose of S. A positive definite matrix S ∈ C^{n×n} is a Hermitian matrix such that x*Sx is positive for every nonzero vector x ∈ C^n. The symbols I and J represent the identity matrix and the exchange matrix of appropriate size, where by the exchange matrix we mean the identity matrix with its columns arranged in reverse order. The matrix E denotes the cyclic shift matrix, which is given by E = (e_2, e_3, ..., e_n, e_1), where e_j is the jth column of I. A matrix C is said to be a circulant matrix of order n if

C = Σ_{j=0}^{n-1} γ_j E^j,

where γ = [γ_0, γ_1, ..., γ_{n-1}]^t is the first column of C. The Fourier transform matrix F_n of order n is defined as

F_n = [ω_n^{jk}]_{j,k=0}^{n-1}, ω_n = e^{-2πi/n}.

The inverse Fourier transform matrix W_n = F_n^{-1} is given by

W_n = (1/n) [ω_n^{-jk}]_{j,k=0}^{n-1} = (1/n) conj(F_n).

A matrix U ∈ C^{n×n} is said to be a unitary matrix if

U*U = I. An orthogonal matrix is a real unitary matrix. A Toeplitz matrix M satisfies the following two important properties: M is persymmetric, i.e., M = JM^tJ; and all the principal submatrices M_j, j = 1 : n-1, are also Toeplitz matrices. The eigenvalues of a matrix S ∈ C^{n×n} are the n roots of its characteristic polynomial p(z) = det(zI - S). The set of these roots is called the spectrum and is denoted by λ(S). If λ ∈ λ(S), then there exists a nonzero vector u ∈ C^n that satisfies Su = λu. This vector u is called an eigenvector of S corresponding to the eigenvalue λ, and the pair (λ, u) is called an eigenpair of S. If S is a Hermitian matrix, then there exists a unitary matrix U such that U*SU = Λ, where Λ is a diagonal matrix whose diagonal entries are the eigenvalues of S. The jth column of U is an eigenvector of S corresponding to the jth diagonal entry of Λ. The above decomposition is called an eigenspace decomposition of S.
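The two Toeplitz properties above can be verified directly from the definitions. A small Python check follows (illustrative only; 0-based indices, and `toeplitz` is a hypothetical helper, not a routine from the dissertation).

```python
# Verify that a Toeplitz matrix is persymmetric, M = J M^t J, and that its
# leading principal submatrices are again Toeplitz.
def toeplitz(col, row):
    # col = first column, row = first row (they must agree: row[0] == col[0])
    n = len(col)
    return [[col[j - k] if j >= k else row[k - j] for k in range(n)]
            for j in range(n)]

M = toeplitz([4, 1, -2, 7], [4, 3, 0, 5])
n = len(M)
# (J M^t J)[j][k] = M[n-1-k][n-1-j]; persymmetry says this equals M[j][k].
assert all(M[j][k] == M[n - 1 - k][n - 1 - j]
           for j in range(n) for k in range(n))
# Every leading principal submatrix is constant along its diagonals.
for m in range(1, n):
    sub = [r[:m] for r in M[:m]]
    assert all(sub[j][k] == sub[j + 1][k + 1]
               for j in range(m - 1) for k in range(m - 1))
```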

The p-norm of a vector v ∈ C^n is denoted by ||v||_p and is defined as

||v||_p = (Σ_{j=1}^{n} |v_j|^p)^{1/p} if 1 ≤ p < ∞, and ||v||_∞ = max_{1≤j≤n} |v_j|.

The p-norm of a matrix S ∈ C^{n×n} is defined in terms of the vector p-norm in the following way:

||S||_p = max_{||v||_p = 1} ||Sv||_p.

For v ∈ C^n, let w = Jv. The vector v is an even vector if w = v. The vector v is an odd vector if w = -v. A flop represents one floating point operation; this can be either one floating point multiplication or one floating point addition. If x̂ is a computed solution of the linear system Sx = b, then the error e is the difference x - x̂ and the residual r is the difference b - Sx̂. The relative error e_r and the relative residual r_r are then defined by

e_r = ||e|| / ||x|| and r_r = ||r|| / ||b||,

respectively. Define the condition number cond(S) of a square matrix S by cond(S) = ||S|| ||S^{-1}||, with the convention that cond(S) = ∞ for singular S. The condition number of S is greater than or equal to 1 for every p-norm. Matrices with small condition

numbers are said to be well-conditioned. An algorithm for solving linear systems is weakly stable if, for each well-conditioned matrix S and for each right-hand side vector b, the relative error e_r is small.

2.1.2 FFTs and the Matrix-Vector Product.

Consider the matrix-vector product w = Av, A ∈ C^{n×n}, v, w ∈ C^n. This can be computed with O(n^2) flops. If the matrix A is the Fourier transform matrix F_n or the inverse Fourier transform matrix W_n, the computation of w can be performed in O(n log n) flops by using any one of many well-known techniques, collectively called Fast Fourier Transforms (FFTs). The exact number of flops depends on the composite nature of the number n, the data type of the input vector, and the particular FFT algorithm that is used. For example, if n is a power of 2 and the data vector is complex, then the radix-two Cooley-Tukey algorithm requires at most 5n log_2 n + O(n) operations. (For more details, see [3] and the references therein.) For a circulant matrix C with first column γ, the matrix-vector product w = Cv is equal to the cyclic convolution of the vectors γ and v. Moreover, w equals the cyclic convolution of γ and v if and only if F_n w equals the componentwise product of F_n γ and F_n v. So w can be computed by using three FFTs of size n. Hence, the matrix-vector product for a Toeplitz matrix can be obtained with O(n log n) flops by embedding the Toeplitz matrix in a circulant

matrix of twice the size. (For more details, see [6].) Given a and b, the first column and the first row of a Toeplitz matrix M, and a vector x, the following algorithm computes the matrix-vector product y = Mx by using the above concepts.

Algorithm [TVEC].
Input: a, b, x; Output: y
    n = length(a)
    Initialize c(1:2n) = 0; d(1:2n) = 0
    c(1:n) = a; c(n+2:2n) = b(n:-1:2)
    d(1:n) = x
    c = F_2n c; d = F_2n d
    for j = 1 : 2n
        c_j = c_j d_j
    end for
    c = W_2n c; y = c(1:n)
end [TVEC].

The above algorithm requires three FFTs of size 2n. The computation involved with the first FFT is independent of the vector x. Therefore, if we need to compute Mx with k different vectors, then this algorithm would use (2k + 1) FFTs of size 2n.
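The circulant-embedding idea behind Algorithm TVEC can be sketched in Python as follows, assuming a power-of-two order so that a textbook radix-two FFT applies; `fft`, `ifft`, and `toeplitz_matvec` are illustrative stand-ins for library routines, not the dissertation's code.

```python
import cmath

def fft(v, sign=-1):
    # Recursive radix-two Cooley-Tukey; len(v) must be a power of two.
    n = len(v)
    if n == 1:
        return list(v)
    even, odd = fft(v[0::2], sign), fft(v[1::2], sign)
    tw = [cmath.exp(sign * 2j * cmath.pi * k / n) for k in range(n // 2)]
    return ([even[k] + tw[k] * odd[k] for k in range(n // 2)] +
            [even[k] - tw[k] * odd[k] for k in range(n // 2)])

def ifft(v):
    n = len(v)
    return [x / n for x in fft(v, sign=+1)]

def toeplitz_matvec(a, b, x):
    # y = M x for a Toeplitz M with first column a and first row b
    # (b[0] == a[0]), via embedding in a circulant of order 2n, as in TVEC.
    n = len(a)
    c = list(a) + [0] + b[:0:-1]       # circulant's first column, length 2n
    d = list(x) + [0] * n              # x padded with n zeros
    fc, fd = fft(c), fft(d)            # cyclic convolution = three FFTs
    y = ifft([fc[j] * fd[j] for j in range(2 * n)])
    return [y[j].real for j in range(n)]

# Compare against the O(n^2) direct product.
a, b, x = [5, 1, -2, 3], [5, 4, 0, -1], [1.0, 2.0, -1.0, 0.5]
M = [[a[j - k] if j >= k else b[k - j] for k in range(4)] for j in range(4)]
direct = [sum(M[j][k] * x[k] for k in range(4)) for j in range(4)]
fast = toeplitz_matvec(a, b, x)
assert all(abs(direct[j] - fast[j]) < 1e-9 for j in range(4))
```

The top-left n×n block of the circulant built from `c` is exactly M, so the first n entries of the cyclic convolution give Mx; for general n a real code would pad to the next power of two or use a mixed-radix FFT.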

2.2 Toeplitz Solvers.

Systems of linear equations with Toeplitz coefficient matrices arise in many applications such as time series analysis, linear prediction, signal processing and detection, the theory of orthogonal polynomials, spectral estimation, system identification, Padé approximation, and statistics (see [1, 12, 38] and the references therein). There are classical fast algorithms for solving Toeplitz systems in O(n^2) flops, as compared to O(n^3) flops for general linear systems. Given a linear system

M x = b,    (2.2.1)

these algorithms implicitly compute either a triangular factorization of M^{-1} or a triangular factorization of M. The class of fast Toeplitz solvers based on a triangular factorization of M^{-1} includes the Levinson algorithm and its variants [1, 3, 23, 40, 46, 56]. Toeplitz solvers based on a triangular factorization of M are connected with the classical work of Schur, and hence are called Schur-type algorithms [1, 2, 46]. These fast Toeplitz solvers are connected to the classical theory of polynomials orthogonal on the unit circle (see [1, 2] and the references therein). Also, there exist superfast direct Toeplitz solvers that require O(n log^2 n) flops [1, 3, 23, 46] and iterative Toeplitz solvers that require O(n log n) flops on certain problems [14, 43].

2.2.1 The Yule-Walker Equations.

Consider a Hermitian Toeplitz matrix M with first column m = [m_0, m_1, ..., m_{n-1}]^t. The Yule-Walker equation of order l is then defined to be

M_l r_l = -y_l, 1 ≤ l ≤ n-1,    (2.2.2)

where y_l = [m_1, m_2, ..., m_l]^t. This equation arises in a variety of engineering, statistical, and mathematical areas. The most popular algorithm for solving the Yule-Walker equation is the Levinson-Durbin algorithm [19, 34]. This is a recursive algorithm: the (k+1)th order Yule-Walker system can be solved in O(k) flops from the solution of the kth order system. The Levinson-Durbin algorithm runs to completion in exact arithmetic if M is strongly regular, by which it is meant that the matrix M and all its leading principal submatrices are nonsingular. If any leading principal submatrix is close to singular (ill-conditioned), then numerical instability occurs in the Levinson-Durbin algorithm. Levinson's algorithm for solving a Toeplitz linear system with a general right-hand side is a two-phase recursive algorithm. The first phase computes the solution of the Yule-Walker equation, and the second phase updates the solution for the general right-hand side. Each phase of the algorithm requires 2n^2 complex arithmetic operations.
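A sketch of the recursion for the real symmetric case is given below in Python; it solves the Yule-Walker systems of all orders in O(n^2) flops total, and the Hermitian case differs only in the use of conjugates. The function name `durbin` and the test data are illustrative, not the dissertation's code.

```python
def durbin(m):
    # Levinson-Durbin recursion (real symmetric case): solves the Yule-Walker
    # systems M_l r_l = -y_l, l = 1..n-1, where m = [m_0, ..., m_{n-1}] is
    # the first column of M.  Returns the highest-order solution r_{n-1}.
    r = [mi / m[0] for mi in m[1:]]     # normalize so the diagonal is 1
    n = len(r)
    y = [-r[0]]
    beta, alpha = 1.0, -r[0]
    for k in range(1, n):
        # beta stays positive while the leading principal submatrices are
        # positive definite; strong regularity keeps it nonzero.
        beta *= 1.0 - alpha * alpha
        alpha = -(r[k] + sum(r[k - 1 - j] * y[j] for j in range(k))) / beta
        y = [y[j] + alpha * y[k - 1 - j] for j in range(k)] + [alpha]
    return y

m = [4.0, 1.0, -0.5, 0.25]
sol = durbin(m)
# Residual check: M_3 sol + y_3 should vanish.
M3 = [[m[abs(j - k)] for k in range(3)] for j in range(3)]
res = [sum(M3[j][k] * sol[k] for k in range(3)) + m[j + 1] for j in range(3)]
assert all(abs(t) < 1e-12 for t in res)
```

Each pass of the loop costs O(k) flops, which is the O(k)-per-order update mentioned above; the quantities 1 - alpha^2 are exactly the factors by which the prediction-error variance shrinks at each order.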

2.2.2 The Gohberg-Semencul Formula.

If M and M_{n-1} are both nonsingular and r_{n-1} is the solution of the (n-1)th order Yule-Walker equation, then the inverse of M can be represented by the Gohberg-Semencul formula as

M^{-1} = (1/γ)(U*U - LL*),    (2.2.3)

where U is a unit upper triangular Toeplitz matrix and L is a strictly lower triangular Toeplitz matrix. The first columns of U* and L are given by U*(:,1) = J r_{n-1} and L(:,1) = r_{n-1}, respectively. The scalar γ is also obtained from the solution of the Yule-Walker equation (2.2.2). (For more details, see [1, 2, 6].) If the factored form (2.2.3) of M^{-1} is known, then the matrix-vector product M^{-1} b can be computed with O(n log n) flops by using FFTs [6]. Thus, we can substitute this technique for the second phase of Levinson's algorithm to obtain an asymptotically faster O(n^2) Toeplitz solver [1, 2, 40].

2.2.3 The Numerical Stability of the Levinson-Durbin Algorithm.

Cybenko [19] studied the numerical stability of the Levinson-Durbin algorithm for solving the Yule-Walker equations with a positive definite real symmetric Toeplitz matrix. He shows that the algorithm is weakly stable with

accuracy comparable to that of the Cholesky algorithm for a certain subclass of these matrices. His argument is based on the analytic results of an error analysis for fixed-point and floating-point arithmetic. Recently, Brent [12] showed that the Levinson-Durbin algorithm for solving the Yule-Walker equations is weakly stable for all positive definite real symmetric Toeplitz matrices. It is heuristically believed that if the Hermitian Toeplitz matrix is strongly regular and none of the leading principal submatrices is ill-conditioned, then numerical instability can be avoided in the Levinson-Durbin algorithm. Toeplitz matrices are in general not strongly regular. In fact, Hermitian indefinite Toeplitz systems arise in spectral estimation when inverse iteration is used to compute eigenvalues of Hermitian positive definite Toeplitz systems [9, 10, 20, 48]. It is known that the Levinson-Durbin algorithm and the Schur-type Toeplitz solvers can be extended to handle exactly singular leading principal submatrices [25, 55]. These algorithms are again based on either a block triangular factorization of the Toeplitz matrix or a block triangular factorization of the inverse of the Toeplitz matrix. However, in finite precision arithmetic, a numerically robust solver must also be able to address nonsingular ill-conditioned leading principal submatrices. Recently, Chan and Hansen [15], and Freund and Zha [31], presented different look-ahead schemes for the Levinson-Durbin algorithm to deal with ill-conditioned or exactly singular principal submatrices.
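Returning to the Gohberg-Semencul formula of Section 2.2.2: for a real symmetric Toeplitz matrix T it takes the well-known form T^{-1} = (1/x_1)(A A^t - B B^t), where x = T^{-1} e_1, A is the lower triangular Toeplitz matrix with first column x, and B is the strictly lower triangular Toeplitz matrix with first column (0, x_n, ..., x_2). A hedged Python sketch follows; for brevity it obtains x by Gaussian elimination rather than by the Levinson-Durbin recursion used in the dissertation, and all names are illustrative.

```python
def solve(A, b):
    # Small dense Gaussian elimination with partial pivoting (sketch only).
    n = len(b)
    W = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(W[r][c]))
        W[c], W[p] = W[p], W[c]
        for r in range(c + 1, n):
            f = W[r][c] / W[c][c]
            W[r] = [W[r][j] - f * W[c][j] for j in range(n + 1)]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (W[r][n] - sum(W[r][j] * x[j] for j in range(r + 1, n))) / W[r][r]
    return x

t = [4.0, 1.0, 0.0]                     # first column of symmetric Toeplitz T
n = len(t)
T = [[t[abs(j - k)] for k in range(n)] for j in range(n)]
x = solve(T, [1.0] + [0.0] * (n - 1))   # x = T^{-1} e_1
A = [[x[j - k] if j >= k else 0.0 for k in range(n)] for j in range(n)]
bcol = [0.0] + x[:0:-1]                 # (0, x_n, ..., x_2), 0-based
B = [[bcol[j - k] if j >= k else 0.0 for k in range(n)] for j in range(n)]
G = [[sum(A[j][l] * A[k][l] - B[j][l] * B[k][l] for l in range(n)) / x[0]
     for k in range(n)] for j in range(n)]
# G should be T^{-1}: check T G = I.
TG = [[sum(T[j][l] * G[l][k] for l in range(n)) for k in range(n)]
      for j in range(n)]
assert all(abs(TG[j][k] - (1.0 if j == k else 0.0)) < 1e-9
           for j in range(n) for k in range(n))
```

Because A and B are triangular Toeplitz matrices, applying G to a vector costs four FFT-based triangular Toeplitz products, which is the O(n log n) application of M^{-1} mentioned after (2.2.3).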

2.3 Toeplitz Eigenvalue Problem.

Finite Toeplitz matrices have a rich algebraic theory, but the literature on the spectral theory of these matrices is less extensive, apart from the important results that have been obtained concerning asymptotic estimates of the eigenvalues as the dimension tends to infinity [38]. In this section, some of the known results on the spectral decompositions of Hermitian and real symmetric Toeplitz matrices are reviewed. Since real symmetric Toeplitz matrices are a subclass of the matrices which are both symmetric and persymmetric, we first start with the spectral properties of these matrices.

2.3.1 Eigenspace of Matrices which are both Symmetric and Persymmetric.

Given α ∈ R, let ⌈α⌉ denote the smallest integer greater than or equal to α, and let ⌊α⌋ denote the largest integer less than or equal to α. Also, let V be the set of all real matrices of size n×n which are both symmetric and persymmetric.

Theorem (Cantoni and Butler [13]): The set V satisfies the following algebraic properties: (a) V forms an Abelian group under matrix addition. (b) The nonsingular members of V form a non-Abelian group under matrix multiplication.

Theorem (Cantoni and Butler [13]): Let V ∈ V and l = ⌊n/2⌋.

(a) If n is even, then

V = [A, B; B^t, JAJ],

and V is orthogonally similar to

Ṽ = [A - JB, 0; 0, A + JB].

(b) If n is odd, then

V = [A, v, B^t; v^t, α, v^t J; B, Jv, JAJ],

and V is orthogonally similar to

Ṽ = [A - JB, 0, 0; 0^t, α, √2 v^t; 0, √2 v, A + JB],

where A, B ∈ R^{l×l}, B^t = JBJ, v ∈ R^l, and α ∈ R.

Theorem (Cantoni and Butler [13]): V ∈ V if and only if each invariant subspace of V is spanned by odd and even eigenvectors. In fact, the invariant subspaces of V spanned by odd and even eigenvectors can be determined from the eigenvectors of Q_1 and Q_2 respectively, where

Q_1 = A - JB, and

Q_2 = A + JB if n is even, Q_2 = [α, √2 v^t; √2 v, A + JB] if n is odd.

The scalar α, the vector v, and the matrices A and B are as defined in the preceding theorem.

2.3.2 Spectral Properties of Hermitian and Real Symmetric Toeplitz Matrices.

Let M = [m_ij] = [m_{i-j}], with m_{i-j} = conj(m_{j-i}), be a Hermitian Toeplitz matrix and let M_{n-1} be its principal submatrix of order n-1. Let y_{n-1} = [m_1, m_2, ..., m_{n-1}]^t, and let x_{n-1}(λ) = [x_{1,n-1}(λ), x_{2,n-1}(λ), ..., x_{n-1,n-1}(λ)]^t be the solution of

(M_{n-1} - λ I_{n-1}) x_{n-1}(λ) = -y_{n-1}, λ ∈ R, λ ∉ λ(M_{n-1}).    (2.3.1)

Define

u(λ) = [1; x_{n-1}(λ)],

p_l(λ) = det(M_l - λ I_l), 1 ≤ l ≤ n,

and

q_l(λ) = p_l(λ) / p_{l-1}(λ), 2 ≤ l ≤ n.    (2.3.2)

Theorem (Trench [48]): Let u(λ) and q_n(λ) be defined as in (2.3.2). Then

(a) q_n(λ) = m_0 - λ + y*_{n-1} x_{n-1}(λ), and q_n'(λ) = -(1 + ||x_{n-1}(λ)||_2^2).

(b) If λ ∈ λ(M), then u(λ) is an associated eigenvector.

If M_l - λI_l is nonsingular for 1 ≤ l ≤ n-1, then (2.3.1) can be solved recursively by using the Levinson-Durbin algorithm for a shifted Yule-Walker equation. The algorithm for solving (2.3.1) follows.

Algorithm [SYWE].
    x_11(λ) = -m_1/(m_0 - λ), δ_1(λ) = m_0 - λ
    for l = 2 : n-1
        δ_l(λ) = (1 - |x_{l-1,l-1}(λ)|^2) δ_{l-1}(λ)
        x_ll(λ) = -(1/δ_l(λ)) (m_l + Σ_{j=1}^{l-1} m_{l-j} x_{j,l-1}(λ))
        for j = 1 : l-1
            x_jl(λ) = x_{j,l-1}(λ) + x_ll(λ) conj(x_{l-j,l-1}(λ))
        end for
    end for
end [SYWE].

Define the unit lower triangular matrix L_l(λ) whose (i, j) entry, for i = 2 : l and j = 1 : i-1, is given by x_{i-j,i-1}(λ).

Cybenko [19] has shown that, for l ≥ 2, L_l(λ) satisfies the equation

L_l(λ) (M_l - λ I_l) L_l*(λ) = diag(δ_1(λ), ..., δ_l(λ)).

Theorem (Trench [48]): Let N_l(λ) be the number of eigenvalues of M_l (counting multiplicities) less than λ. Then N_l(λ) equals the number of negative values among {δ_1(λ), ..., δ_l(λ)}, provided λ ∉ λ(M_j), j = 1 : l.

Theorem (Trench [48]): Let α, β ∈ R be such that α, β ∉ λ(M_j), j = 1 : n, and assume that (α, β) contains exactly one simple eigenvalue of M = M_n. Then (α, β) does not contain any eigenvalue of M_{n-1} if and only if q_n(α) > 0 and q_n(β) < 0.

Theorem (Delsarte and Genin [24]): For any real symmetric Toeplitz matrix T of order n, there exist ⌈n/2⌉ even eigenvectors and ⌊n/2⌋ odd eigenvectors.

Theorem (Delsarte and Genin [24]): Let λ be an eigenvalue of T with multiplicity k. (a) If k is even, then λ is associated with an equal number of even and odd eigenvectors. (b) If k is odd, then the numbers of even and odd eigenvectors associated with λ differ by one.

2.3.3 Computing the Eigenspaces of a Toeplitz Matrix.

In the previous section, several spectral properties of Hermitian and real symmetric Toeplitz matrices were presented. Unfortunately, there is no completely satisfactory structure-exploiting algorithm for computing the eigenspaces of a Toeplitz matrix. Most of the well-known algorithms for eigenvalue problems of real symmetric or Hermitian matrices first reduce the matrix to a tridiagonal matrix and then use different procedures to compute the eigenvalues of the reduced tridiagonal matrix [34]. In the case of a Toeplitz matrix, the cost of reduction to tridiagonal form is of the same order as that for an ordinary symmetric (or Hermitian) matrix, and the Toeplitz structure is not preserved in this reduction. Hence, researchers are trying to compute the eigenspaces of a Toeplitz matrix without reducing it to tridiagonal form. Most of these procedures rely on the Levinson-Durbin algorithm for an indefinite Toeplitz matrix in one way or another. But it is well known that the Levinson-Durbin algorithm is numerically sensitive for indefinite matrices. Hence, the algorithms are not numerically robust. Trench [48] proposed an algorithm for computing the spectrum of a Hermitian Toeplitz matrix based on finding the roots of the rational function (2.3.2). The cost of computation for each eigenpair is O(n^2), and the algorithm can compute only those eigenvalues which are not eigenvalues of any of its principal submatrices. Hence, the algorithm may fail or yield inaccurate results in floating point arithmetic if we need to compute the entire spectrum. Beex [9] proposed an iterative algorithm for the eigenspace decomposition of a Hermitian Toeplitz matrix. The algorithm recursively computes the

eigenspace of the rth order principal submatrix from the computed eigenspace of the (r-1)th order principal submatrix. The total operation count is O(n^4), and it also fails to compute those eigenvalues which are eigenvalues of any of its principal submatrices. The computation of the minimal eigenvalue of a Toeplitz matrix is of interest in the literature of signal processing and estimation. Cybenko and Van Loan [20] proposed a method to compute the minimal eigenvalue of a positive definite real symmetric Toeplitz matrix. Using the language of signature, they derived a bisection-Newton type method to compute the minimal eigenvalue with at most O(n^2 log^2 n) flops.

2.4 High Performance Computing in Linear Algebra.

In this section, the linear algebra software packages BLAS and LAPACK, with their roles as efficient computational tools that ensure good performance on different computing systems, are briefly described. The following presentation is based on the review of [7] and the references therein.

2.4.1 Basic Linear Algebra Subprograms (BLAS).

The performance of an algorithm depends on two important factors: the number of flops and the volume of memory traffic. On high performance computers, the cost of data movement between memory and registers is proportional to the cost of flops. This motivates the design of algorithms that minimize data movement and maximize data reuse. The BLAS provides an efficient tool for linear

algebra problems by providing a well-defined interface with different machine architectures for elementary matrix and vector operations. It also offers other benefits, such as enhancing the robustness and readability of a code and improving its portability. The BLAS routines are mainly classified into three levels. The level 1 BLAS implements common vector-vector operations such as

y ← αx + y;

the level 2 BLAS implements matrix-vector operations such as

y ← αAx + βy;

and the level 3 BLAS performs matrix-matrix operations like

C ← αAB + βC,

where α, β are scalars, x and y are vectors, and A, B, and C are matrices.

2.4.2 Linear Algebra PACKage (LAPACK).

LAPACK is a library of FORTRAN77 routines for solving the commonly occurring problems in numerical linear algebra. It supersedes LINPACK and EISPACK, principally by restructuring the software to achieve greater efficiency on modern high performance computers. It contains driver routines for solving standard types of problems, computational routines to perform distinct computational tasks, and auxiliary routines to perform certain subtasks. It is designed to give high efficiency on vector processors, high-performance superscalar workstations, and shared memory multiprocessors. In its present form, it is less likely to give

good performance on massively parallel SIMD machines or distributed memory machines. LAPACK routines are written so that the computations are performed by calling the BLAS whenever required. The BLAS enable LAPACK routines to achieve high performance with portable code.
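The three BLAS levels described above can be illustrated with NumPy stand-ins. This is only a sketch of the semantics of the level 1, 2, and 3 operations: NumPy itself dispatches these expressions to an underlying BLAS, and the variable names and values here are hypothetical.

```python
import numpy as np

alpha, beta = 2.0, 0.5
x, y = np.array([1.0, 2.0]), np.array([3.0, 4.0])
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B, C = np.eye(2), np.ones((2, 2))

# Level 1 (vector-vector, O(n) work on O(n) data):  y <- alpha*x + y   ("axpy")
y = alpha * x + y

# Level 2 (matrix-vector, O(n^2) work on O(n^2) data):  y <- alpha*A@x + beta*y   ("gemv")
y = alpha * (A @ x) + beta * y

# Level 3 (matrix-matrix, O(n^3) work on O(n^2) data):  C <- alpha*A@B + beta*C   ("gemm")
C = alpha * (A @ B) + beta * C
```

Level 3 performs the most flops per datum moved, which is why blocked algorithms that funnel their work into matrix-matrix products reuse data best.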

CHAPTER 3

EIGENSPACE DECOMPOSITION OF HERMITIAN TOEPLITZ MATRICES

This chapter contains results related to the eigenvalue problem of a Hermitian Toeplitz matrix. First, some results on the spectral properties of these matrices are presented. Subsequently, an algorithm is proposed for computing the eigenspaces of a Hermitian Toeplitz matrix and all its leading principal submatrices. The result of reducing the Hermitian Toeplitz eigenproblem to a real symmetric eigenproblem of the same size is re-established, and the importance of this result with respect to present-day high performance computers is emphasized. Finally, the method of Cybenko and Van Loan [20] for computing the minimal eigenvalue of a real symmetric positive definite Toeplitz matrix is reviewed, and an algorithm for computing the extremal eigenvalues of a Hermitian Toeplitz matrix and all its leading principal submatrices is presented.

3.1 Results on the Spectral Properties.

Let $\vec{m} = [m_0, m_1, \ldots, m_{n-1}]^t$ be the first column of a Hermitian Toeplitz
matrix $M$. Define $p = \lfloor \frac{n-1}{2} \rfloor$ and

$$r = \begin{cases} p & \text{if } n \text{ is odd,} \\ p+1 & \text{if } n \text{ is even.} \end{cases}$$

Theorem 3.1.1: $\sigma(M) \subseteq [-\beta + m_0, \beta + m_0]$, where

$$\beta = \sum_{i=1}^{r} |m_i| + \sum_{i=1}^{p} \max(|m_i|, |m_{n-i}|).$$

Proof: Let $M = m_0 I + \tilde{M}$, where $\tilde{M} = [\tilde{m}_{ij}]$ is the Hermitian Toeplitz matrix with first column $[0, m_1, m_2, \ldots, m_{n-1}]^t$. If we apply the Gershgorin Circle Theorem [34] to $M$, then $\sigma(M) \subseteq \bigcup_{i=1}^{n} K_i$, where each interval $K_i$ is given by

$$K_i = \{ t \in \mathbb{R} : |t - m_0| \le \sum_{j=1, j \ne i}^{n} |\tilde{m}_{ij}| \}, \quad \tilde{m}_{ij} = m_{i-j}.$$

Define

$$\beta = \sum_{i=1}^{r} |m_i| + \sum_{i=1}^{p} \max(|m_i|, |m_{n-i}|) \quad \text{and} \quad K = \{ t \in \mathbb{R} : |t - m_0| \le \beta \}.$$

Then $K = [-\beta + m_0, \beta + m_0]$.
It is easy to show that $K_i \subseteq K$ for each $i$. This implies $\bigcup_{i=1}^{n} K_i \subseteq K$. Hence $\sigma(M) \subseteq K$, which proves the theorem. □

Corollary to Theorem 3.1.1: If $\max |m_j| \le 1$, $j = 0 : n-1$, then $\sigma(M) \subseteq [-n, n]$.

Let $M_{l+1}$ be the leading principal submatrix of order $l+1$ of $M$, $l = 1 : n-1$. Then we can write $M_{l+1}$ as

$$M_{l+1} = \begin{bmatrix} m_0 & \vec{m}_l^* \\ \vec{m}_l & M_l \end{bmatrix}, \quad \vec{m}_l = [m_1, m_2, \ldots, m_l]^t. \tag{3.1.1}$$

Theorem 3.1.2: Let $(\lambda, \vec{u})$ be an eigenpair of $M_{l+1}$ and let $u_1$ be the first entry of $\vec{u}$. If $u_1 = 0$, then $\lambda \in \sigma(M_l)$.

Proof: Let $\vec{u} = [u_1, \tilde{\vec{u}}^t]^t$. Since $u_1 = 0$, $\vec{u} = [0, \tilde{\vec{u}}^t]^t$. If $\lambda \in \sigma(M_{l+1})$, then $(M_{l+1} - \lambda I)\vec{u} = \vec{0}$. This implies

$$\begin{bmatrix} m_0 - \lambda & \vec{m}_l^* \\ \vec{m}_l & M_l - \lambda I \end{bmatrix} \begin{bmatrix} 0 \\ \tilde{\vec{u}} \end{bmatrix} = \begin{bmatrix} 0 \\ \vec{0} \end{bmatrix}.$$

The above matrix equation is equivalent to

$$\vec{m}_l^* \tilde{\vec{u}} = 0 \tag{3.1.2}$$
and

$$(M_l - \lambda I)\tilde{\vec{u}} = \vec{0}. \tag{3.1.3}$$

The last equation proves the theorem. □

Corollary to Theorem 3.1.2: Let $(\lambda, \tilde{\vec{u}})$ be an eigenpair of $M_l$. If $\vec{m}_l^* \tilde{\vec{u}} = 0$, then $\lambda \in \sigma(M_{l+1})$. In fact, the eigenvector of $M_{l+1}$ associated with the eigenvalue $\lambda$ can be written as $\vec{u} = [0, \tilde{\vec{u}}^t]^t$.

Theorem 3.1.3: Let $\lambda$ be a simple eigenvalue of $M_{l+1}$ and let $\vec{u}$ be an eigenvector corresponding to the eigenvalue $\lambda$. If $u_1$ is the first entry of $\vec{u}$, then $u_1 = 0$ if and only if $\lambda \in \sigma(M_l)$.

Proof: The necessary condition

$$u_1 = 0 \Rightarrow \lambda \in \sigma(M_l) \tag{3.1.4}$$

follows directly from Theorem 3.1.2. We need to show that the sufficient condition

$$\lambda \in \sigma(M_l) \Rightarrow u_1 = 0 \tag{3.1.5}$$

also holds. This is equivalent to showing that

$$u_1 \ne 0 \Rightarrow \lambda \notin \sigma(M_l). \tag{3.1.6}$$

Let $u_1 \ne 0$ and $\lambda \in \sigma(M_l)$. Also, let $\vec{u} = [u_1, \vec{v}^t]^t$. Since $(\lambda, \vec{u})$ is an eigenpair
of $M_{l+1}$, this implies

$$\begin{bmatrix} m_0 - \lambda & \vec{m}_l^* \\ \vec{m}_l & M_l - \lambda I \end{bmatrix} \begin{bmatrix} u_1 \\ \vec{v} \end{bmatrix} = \begin{bmatrix} 0 \\ \vec{0} \end{bmatrix}. \tag{3.1.7}$$

The above matrix equation is equivalent to

$$\vec{m}_l^* \vec{v} = -u_1(m_0 - \lambda) \tag{3.1.8}$$

and

$$(M_l - \lambda I)\vec{v} = -u_1 \vec{m}_l. \tag{3.1.9}$$

Consider the case $\vec{v} = \vec{0}$. Then (3.1.9) implies $u_1 \vec{m}_l = \vec{0}$. Since $u_1 \ne 0$, this implies $\vec{m}_l = \vec{0}$. Hence $M_{l+1} = m_0 I$ and $m_0 = \lambda$. This means $\lambda$ is an eigenvalue of $M_{l+1}$ with multiplicity $l+1$, which is a contradiction. So $\vec{v} \ne \vec{0}$.

The vector $\vec{v}$ is a nontrivial solution to the singular linear system (3.1.9). So the linear system (3.1.9) is consistent, and there exists at least one more linearly independent solution vector $\vec{w}$ to this system. Hence,

$$(M_l - \lambda I)\vec{w} = -u_1 \vec{m}_l. \tag{3.1.10}$$

If we take the conjugate transpose of both sides, then

$$\vec{w}^*(M_l - \lambda I)^* = \vec{w}^*(M_l - \lambda I) = -\vec{m}_l^* u_1^* = -u_1^* \vec{m}_l^*.$$
Multiplying both sides by the vector $\vec{v}$, we obtain

$$\vec{w}^*(M_l - \lambda I)\vec{v} = -u_1^* \vec{m}_l^* \vec{v}.$$

This implies

$$\vec{w}^*(-u_1 \vec{m}_l) = -u_1^*(-u_1(m_0 - \lambda)).$$

Since $u_1 \ne 0$, we have

$$\vec{w}^* \vec{m}_l = -u_1^*(m_0 - \lambda).$$

Since $(m_0 - \lambda)$ is real, taking the conjugate transpose of both sides of the above equation gives

$$\vec{m}_l^* \vec{w} = -u_1(m_0 - \lambda).$$

Hence, the vector $\hat{\vec{u}} = [u_1, \vec{w}^t]^t$ is an eigenvector of $M_{l+1}$ corresponding to the eigenvalue $\lambda$, and $\hat{\vec{u}}$ and $\vec{u}$ are linearly independent. This is a contradiction, as $\lambda$ is a simple eigenvalue of $M_{l+1}$. Therefore, $u_1 \ne 0 \Rightarrow \lambda \notin \sigma(M_l)$. This completes the proof. □

Theorem 3.1.4: Let $T$ be a real symmetric Toeplitz matrix of order $n$. If the spectra of two consecutive leading principal submatrices of $T$ are simple, then they interlace strictly.

Proof: Let $T_k$ be the leading principal submatrix of order $k$, $k = 1 : n$, and assume that $\sigma(T_l)$ and $\sigma(T_{l-1})$ are simple for some $l$, $2 \le l \le n$. By the Interlacing Theorem [34], we know that $\sigma(T_l)$ and $\sigma(T_{l-1})$ interlace. We want to show that they interlace strictly.
Suppose $\sigma(T_l)$ and $\sigma(T_{l-1})$ do not interlace strictly. Then there exists a real number $\lambda$ such that $(\lambda, \vec{u})$ is an eigenpair of $T_l$ and $(\lambda, \vec{v})$ is an eigenpair of $T_{l-1}$, where $\vec{u}$ and $\vec{v}$ are normalized vectors. Since $\sigma(T_l)$ and $\sigma(T_{l-1})$ are simple, the vectors $\vec{u}$ and $\vec{v}$ are unique up to sign. By Theorem 2.3.3, we have

$$u_i = \pm u_{l+1-i} \quad \text{and} \quad v_j = \pm v_{l-j}, \quad i = 1 : l, \; j = 1 : l-1. \tag{3.1.11}$$

By Theorem 3.1.3, $u_1 = 0$ and, after a suitable choice of sign, $\vec{u} = [0, \vec{v}^t]^t$. This implies

$$u_1 = 0 \quad \text{and} \quad u_i = v_{i-1}, \quad i = 2 : l. \tag{3.1.12}$$

The equations (3.1.11) and (3.1.12) together imply $\vec{u} = \vec{0}$ and $\vec{v} = \vec{0}$, which is a contradiction. Hence, $\sigma(T_l)$ and $\sigma(T_{l-1})$ interlace strictly. This completes the proof. □

3.2 An Algorithm for the Hermitian Toeplitz Eigenproblem with a Quartically Convergent Root-finder.

In this section, a method for computing the eigenvalues and the eigenvectors of a Hermitian Toeplitz matrix and all its leading principal submatrices is presented. This kind of decomposition is of high importance in areas like array processing. Several recent papers have dealt with the spectral decomposition of Hermitian Toeplitz matrices. Basically, they use different root-finders to
compute the roots of secular equations, whose roots are the eigenvalues of the matrix, and use the Levinson-Durbin algorithm to compute the eigenvectors. The root-finders available in the literature are at most quadratically convergent near the solution. A method is proposed which has the same basic structure as most of these algorithms, but our root-finding technique, which uses Cardan's rule for evaluating a real root of a cubic polynomial, converges quartically near the solution. Therefore, it saves a significant number of iterations at each recursive step.

3.2.1 Eigenspace Decomposition of a Hermitian Toeplitz Matrix.

Consider a Hermitian Toeplitz matrix $M = M_n = [m_{i-j}]$, $m_{i-j} = \overline{m_{j-i}}$, of order $n$. Let us assume that the eigenvalues of any two leading principal submatrices are distinct. This also implies that the eigenvalues of two consecutive submatrices interlace strictly. The aim is to construct a recursive algorithm for the eigenspace decomposition of $M$.

Let us assume that the eigenspace decomposition of $M_l$ for some $l < n$ is known. We want to compute the eigenspace of $M_{l+1}$. Let $M_l$ have the eigenspace decomposition

$$M_l = U_l \Lambda_l U_l^*, \tag{3.2.1}$$

where $\Lambda_l$ is a diagonal matrix with diagonal entries $\lambda_{1,l}, \ldots, \lambda_{l,l}$. Let

$$\sigma(M_{l+1}) = \{\lambda_{1,l+1}, \ldots, \lambda_{l+1,l+1}\}. \tag{3.2.2}$$
If $\hat{\ell}$ and $\hat{u}$ are lower and upper bounds for the spectrum of $M_{l+1}$, then by our initial assumption and by the interlacing property, we have

$$\hat{\ell} < \lambda_{1,l+1} < \lambda_{1,l} < \lambda_{2,l+1} < \cdots < \lambda_{l,l} < \lambda_{l+1,l+1} < \hat{u}. \tag{3.2.3}$$

Let $\lambda \in \sigma(M_{l+1})$ with eigenvector $\vec{a}$. We can scale $\vec{a}$ so that

$$\vec{a} = [1, \tilde{\vec{a}}^t]^t. \tag{3.2.4}$$

This is always possible because $a_1 \ne 0$ by our initial assumption. Now, $M_{l+1}\vec{a} = \lambda\vec{a}$. This implies

$$\begin{bmatrix} m_0 - \lambda & \vec{m}_l^* \\ \vec{m}_l & M_l - \lambda I \end{bmatrix} \begin{bmatrix} 1 \\ \tilde{\vec{a}} \end{bmatrix} = \begin{bmatrix} 0 \\ \vec{0} \end{bmatrix}, \tag{3.2.5}$$

$$\vec{m}_l = [m_1, m_2, \ldots, m_l]^t. \tag{3.2.6}$$

The above equations imply

$$(m_0 - \lambda) + \vec{m}_l^* \tilde{\vec{a}} = 0 \tag{3.2.7}$$

and

$$\vec{m}_l + (M_l - \lambda I)\tilde{\vec{a}} = \vec{0}. \tag{3.2.8}$$
Since $(M_l - \lambda I)$ is non-singular,

$$\tilde{\vec{a}} = -(M_l - \lambda I)^{-1}\vec{m}_l. \tag{3.2.9}$$

Substituting this into equation (3.2.7), we obtain

$$(m_0 - \lambda) - \vec{m}_l^*(M_l - \lambda I)^{-1}\vec{m}_l = 0. \tag{3.2.10}$$

Let

$$F(\lambda) = \vec{m}_l^*(M_l - \lambda I)^{-1}\vec{m}_l + (\lambda - m_0). \tag{3.2.11}$$

Using the eigenspace decomposition (3.2.1), equation (3.2.10) becomes

$$(m_0 - \lambda) - \vec{m}_l^* U_l(\Lambda_l - \lambda I)^{-1} U_l^* \vec{m}_l = 0$$

or

$$(m_0 - \lambda) - \sum_{i=1}^{l} \frac{|\gamma_i|^2}{\lambda_{i,l} - \lambda} = 0,$$

where $\vec{\gamma} = [\gamma_1, \gamma_2, \ldots, \gamma_l]^t = U_l^* \vec{m}_l$. This implies

$$F(\lambda) = \sum_{i=1}^{l} \frac{|\gamma_i|^2}{\lambda_{i,l} - \lambda} + \lambda - m_0. \tag{3.2.12}$$

We want to find the roots of the secular equation $F(\lambda) = 0$. Once we obtain a root $\lambda$, we can solve the Yule-Walker equation (3.2.8) to obtain $\tilde{\vec{a}}$. Then we can compute a unit norm eigenvector $\vec{u}$ from $\vec{a}$, where $\vec{a}$ is given by (3.2.4).
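The secular function (3.2.12) can be checked numerically. The sketch below uses a small hypothetical real symmetric Toeplitz example (so that conjugation drops out), builds $F$ from the eigendecomposition of $M_l$, and records the residuals of $F$ at the eigenvalues of $M_{l+1}$, which are its roots when no eigenvalue is shared with $M_l$:

```python
import numpy as np

# hypothetical example data: first column of a real symmetric Toeplitz M_{l+1}, l = 3
c = np.array([2.0, 0.8, 0.3, 0.1])
n = len(c)
M = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])  # M_{l+1}
Ml, ml = M[1:, 1:], M[1:, 0]        # trailing l-by-l block M_l and m_l, as in (3.1.1)

lam, U = np.linalg.eigh(Ml)         # eigendecomposition M_l = U diag(lam) U^t
gamma = U.T @ ml                    # gamma = U_l^* m_l

def F(x):
    # secular function (3.2.12): sum of |gamma_i|^2 / (lam_i - x), plus x - m_0
    return np.sum(gamma**2 / (lam - x)) + x - c[0]

# for this data no eigenvalue of M_l recurs in M_{l+1}, so F vanishes at sigma(M_{l+1})
residuals = [abs(F(mu)) for mu in np.linalg.eigvalsh(M)]
```

Between consecutive poles $\lambda_{i,l}$, $F$ is strictly increasing, so each interval of (3.2.3) contains exactly one of these roots.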

3.2.2 Root Finding Technique for $F(\lambda)$.

Here we describe a root-finder, based on Cardan's solution to a cubic equation, which converges quartically near the solution. Consider a general cubic equation

$$p(x) = ax^3 + bx^2 + cx + d = 0. \tag{3.2.13}$$

This can be written as

$$z^3 + 3Hz + G = 0, \tag{3.2.14}$$

where

$$z = x + \frac{b}{3a}, \quad H = \frac{3ac - b^2}{9a^2}, \quad G = \frac{2b^3 - 9abc + 27a^2 d}{27a^3}.$$

The roots of equation (3.2.14) can be written as

$$z_1 = u + v, \quad z_2 = \omega u + \omega^2 v, \quad z_3 = \omega^2 u + \omega v,$$

where $u$ is a real cube root of $\frac{-G + \sqrt{G^2 + 4H^3}}{2}$, $v = -H/u$, and $\omega$ is a primitive cube root of unity. Hence, the roots of equation (3.2.13) can be obtained from
equation (3.2.14) by the transformation $x = z - \frac{b}{3a}$. (For more details, see [22].)

Let us consider the equation $F(\lambda) = 0$ and concentrate on the $j$th interval $[a, b] = [\lambda_{j-1,l}, \lambda_{j,l}]$. We have

$$F(\lambda) = \sum_{i=1}^{l} \frac{|\gamma_i|^2}{\lambda_{i,l} - \lambda} + \lambda - m_0, \quad F(a^+) = -\infty, \quad F(b^-) = +\infty,$$

and $F(\lambda)$ has only one simple zero in $[a, b]$. Moreover,

$$F'(\lambda) = 1 + \sum_{i=1}^{l} \frac{|\gamma_i|^2}{(\lambda_{i,l} - \lambda)^2} > 1, \quad F'(a^+) = F'(b^-) = +\infty,$$

$$F''(\lambda) = 2\sum_{i=1}^{l} \frac{|\gamma_i|^2}{(\lambda_{i,l} - \lambda)^3}, \quad \text{and} \quad F'''(\lambda) = 6\sum_{i=1}^{l} \frac{|\gamma_i|^2}{(\lambda_{i,l} - \lambda)^4} > 0.$$

Let $c \in (a, b)$ be fixed and let $e = \lambda - c$. Then

$$\lambda \in (a, b) \Rightarrow e \in (a - c, b - c) \subseteq (a - b, b - a)$$

and $F(\lambda) = F(c + \lambda - c) = F(c + e) = 0$. Let $g(e)$ be the first four terms of the Taylor series expansion of $F(c + e)$. Then

$$F(\lambda) \approx g(e) = F(c) + F'(c)e + \frac{F''(c)}{2}e^2 + \frac{F'''(c)}{6}e^3.$$
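The Cardan solve used at each iteration can be sketched as follows. This is a hypothetical helper rather than the dissertation's FORTRAN77 code; it uses the real-cube-root branch above when $G^2 + 4H^3 \ge 0$ and the standard trigonometric form when the cubic has three real roots.

```python
import math

def cbrt(x):
    """Real cube root with sign."""
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def real_cubic_root(a, b, c, d):
    """One real root of a*x^3 + b*x^2 + c*x + d = 0 via the depressed form
    z^3 + 3*H*z + G = 0, z = x + b/(3a), as in (3.2.13)-(3.2.14)."""
    H = (3.0 * a * c - b * b) / (9.0 * a * a)
    G = (2.0 * b**3 - 9.0 * a * b * c + 27.0 * a * a * d) / (27.0 * a**3)
    disc = G * G + 4.0 * H**3
    if disc >= 0.0:
        # Cardan's formula: z = u + v with u^3, v^3 = (-G +/- sqrt(disc))/2, u*v = -H
        s = math.sqrt(disc)
        z = cbrt((-G + s) / 2.0) + cbrt((-G - s) / 2.0)
    else:
        # three real roots: trigonometric form, returns the largest
        theta = math.acos(-G / (2.0 * (-H) ** 1.5))
        z = 2.0 * math.sqrt(-H) * math.cos(theta / 3.0)
    return z - b / (3.0 * a)
```

Applied to $g(e)$, the returned root $e$ is mapped back to the approximation through $\lambda = e + c$.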

Now, $g(e)$ is a cubic polynomial and, by construction, it has one real root in the interval $(a - c, b - c)$. This root can be computed efficiently by Cardan's method. Once $e$ is evaluated, $\lambda$ can be obtained from $e$ by the relation $\lambda = e + c$. Moreover, since $F'(\lambda) > 1$, the direction in which the next approximation is obtained is the direction of the root. If at any iterative step with starting approximation $c \in (a, b)$ the computed $\lambda$ goes outside the interval $(a, b)$, we can start the next approximation with

$$c = \begin{cases} \frac{c+b}{2} & \text{if } \lambda > b, \\ \frac{a+c}{2} & \text{if } \lambda < a. \end{cases}$$

The root of $F(\lambda)$ is computed from a cubic $g(e)$ which agrees with $F(\lambda)$ up to the fourth term of its Taylor series, so the method converges quartically near the solution. The process can be repeated to compute all the roots of $F(\lambda)$, as they lie in distinct intervals.

3.2.3 Numerical Implementation.

The quartic root-finder described in the previous section is used to compute the eigenvalues of $M_{l+1}$ from the eigenspace of $M_l$; the Levinson-Durbin algorithm is then used to compute the eigenvectors of $M_{l+1}$, $l = 1 : n-1$. The above algorithm works under the assumption that the eigenvalues of any two principal submatrices are distinct, which is not realistic. Suppose we drop this assumption and need to compute the eigenspace of $M_{l+1}$ from the eigenspace of $M_l$ recursively.
Let $\lambda_{i,l}$, $i = 1 : l$, be the eigenvalues of $M_l$. If $\lambda_{i,l} \in \sigma(M_{l+1})$, or $\lambda_{i,l}$ is close to an eigenvalue of $M_{l+1}$, then $F$ either is not defined at $\lambda_{i,l}$ or is very large near it. However, by the corollary to Theorem 3.1.2 and by Theorem 3.1.3, we can deal with these cases, and the computation of the corresponding eigenvectors can be done very cheaply. The eigenvalues of $M_{l+1}$ which are distinct from the eigenvalues of $M_l$ can be computed efficiently by locating the roots of $F(\lambda)$. Still, we have a problem in evaluating the eigenvector if $\lambda \in \sigma(M_{l+1})$ and either $\lambda \in \sigma(M_j)$ or $\lambda$ is close to an eigenvalue of $M_j$ for some $j$, $1 \le j \le l-1$, as the fast Toeplitz solvers for the Yule-Walker equations either would not run to completion or would be computationally unstable. In that case, we need to consider the look-ahead Toeplitz solvers [15, 31] to avoid instability in the computation.

The results of some numerical experiments with our algorithm and Beex's algorithm are presented in Table 3.1. We computed the eigenvalues of a randomly generated Hermitian Toeplitz matrix of order $n$ and all of its leading principal submatrices for various values of $n$. We can see that the execution time for the two algorithms is almost the same. However, our algorithm requires nearly 25% fewer iterations.

Since the evaluation of each eigenpair of $M_{l+1}$ is independent of the others, the computation of all the eigenpairs can be done independently on a parallel machine. The speed of the Schur algorithm is faster than the speed of the
Levinson-Durbin algorithm on a parallel machine [55]. So for a parallel implementation, one should use the Schur algorithm for solving the Yule-Walker equations. The complexity of the algorithm is $O(n^4)$ on a sequential machine and $O(n^3)$ on a parallel machine having $O(n)$ processors. This algorithm may perform better on a parallel machine in comparison to Beex's algorithm because of the considerable savings in the number of iterations at each recursive step. At this point, we have not done a parallel implementation of the code. We also have not implemented the look-ahead Toeplitz solvers in the code.

TABLE 3.1
Comparison of Performance of the Algorithm 3.2 and Beex's Algorithm (SUN SPARC-10)
A: The Algorithm 3.2; B: Beex's Algorithm; E: Execution Time in Seconds; I: Number of Iterations; n: Matrix Size
(columns: n, A(E), B(E), A(I), B(I))
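The Levinson-Durbin recursion referred to above can be sketched in a few lines. This is a minimal version of Durbin's algorithm for a Yule-Walker system $T\vec{y} = -\vec{r}$, with $T$ the symmetric positive definite Toeplitz matrix whose first column is $[1, r_1, \ldots, r_{n-1}]^t$; it is an illustrative stand-in for the FORTRAN77 routines actually used and omits the look-ahead safeguards discussed above.

```python
import numpy as np

def durbin(r):
    """Solve the Yule-Walker system T y = -r in O(n^2) operations, where
    T is symmetric Toeplitz with first column [1, r[0], ..., r[n-2]]."""
    n = len(r)
    y = np.zeros(n)
    y[0] = -r[0]
    beta, alpha = 1.0, -r[0]
    for k in range(n - 1):
        beta *= 1.0 - alpha * alpha                      # Schur complement update
        alpha = -(r[k + 1] + r[:k + 1] @ y[k::-1]) / beta
        y[:k + 1] = y[:k + 1] + alpha * y[k::-1]         # combine with reversed solution
        y[k + 1] = alpha
    return y
```

Each recursive step extends the order-$k$ solution to order $k+1$ using only inner products, which is what keeps the per-eigenvector cost at $O(n^2)$.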

3.3 Reducing the Hermitian Toeplitz Eigenproblem to a Real Symmetric Eigenproblem of the Same Size.

In this section, the technique of reducing the eigenvalue problem of a Hermitian Toeplitz matrix to an eigenvalue problem of a real symmetric matrix of the same size is described. This result is known to the scientific community [52], but we want to point out that it should be given more importance, considering the collective inability to solve a Hermitian Toeplitz eigenproblem more efficiently. Moreover, the reduction to real form reduces the operation count and the data movement; this is very important on present-day high performance computers. Also, a concise matrix-theoretic proof of the reduction is given.

3.3.1 Techniques of Reduction.

Let $M$ be a Hermitian Toeplitz matrix of size $n$. Since $M$ is a Hermitian matrix, we can write $M$ as $M = A + iB$, where $A$ is real symmetric and $B$ is real and skew-symmetric. Let $(\lambda, \vec{u})$ be an eigenpair of $M$, $\lambda \in \mathbb{R}$, $\vec{u} = \vec{x} + i\vec{y}$, $\vec{u} \in \mathbb{C}^n$, $\vec{x}, \vec{y} \in \mathbb{R}^n$. So we have

$$(A + iB)(\vec{x} + i\vec{y}) = \lambda(\vec{x} + i\vec{y}). \tag{3.3.1}$$

This implies

$$A\vec{x} - B\vec{y} = \lambda\vec{x} \quad \text{and} \quad B\vec{x} + A\vec{y} = \lambda\vec{y}.$$
The above problem is equivalent to

$$\begin{bmatrix} A & -B \\ B & A \end{bmatrix}\begin{bmatrix} \vec{x} \\ \vec{y} \end{bmatrix} = \lambda\begin{bmatrix} \vec{x} \\ \vec{y} \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} A & -B \\ B & A \end{bmatrix}\begin{bmatrix} -\vec{y} \\ \vec{x} \end{bmatrix} = \lambda\begin{bmatrix} -\vec{y} \\ \vec{x} \end{bmatrix}. \tag{3.3.2}$$

Let

$$Q = \begin{bmatrix} A & -B \\ B & A \end{bmatrix}. \tag{3.3.3}$$

We can see that $\lambda \in \sigma(M)$ with multiplicity $k$ if and only if $\lambda \in \sigma(Q)$ with multiplicity $2k$. Hence, solving the eigenvalue problem of a Hermitian matrix of size $n$ is equivalent to solving the eigenvalue problem of a real symmetric matrix of size $2n$.

The above result is true for any Hermitian matrix. Since $M$ also has the Toeplitz structure, the matrices $A$ and $B$ satisfy the following properties:

$$A = JA^t J = JAJ \quad \text{and} \quad B = J(-B)J = JB^t J,$$

where $J$ is the exchange matrix. So the matrix $A$ is both symmetric and persymmetric; the matrix $B$ is both skew-symmetric and persymmetric.
Hence, we can write expression (3.3.3) as

$$Q = \begin{bmatrix} A & -B \\ B & A \end{bmatrix} = \begin{bmatrix} A & -B \\ -JBJ & JAJ \end{bmatrix}. \tag{3.3.4}$$

Therefore, $Q$ is both symmetric and persymmetric. By part (a) of the spectral splitting theorem for symmetric persymmetric matrices in Chapter 2,

$$\sigma(Q) = \sigma(Q_1) \cup \sigma(Q_2), \tag{3.3.5}$$

where $Q_1 = A - JB$ and $Q_2 = A + JB$.

Theorem 3.3.1: Let $\tilde{\vec{u}} = J\vec{u}$. Then $(\lambda, \vec{u})$ is an eigenpair of $Q_1$ if and only if $(\lambda, \tilde{\vec{u}})$ is an eigenpair of $Q_2$.

Proof: From $A = JA^t J = JAJ$ we have $JA = AJ$, and from $B = JB^t J = -JBJ$ we have $JB = -BJ$. Let $(\lambda, \vec{u})$ be an eigenpair of $Q_1$. Then

$$(A - JB)\vec{u} = \lambda\vec{u} \Leftrightarrow J(A - JB)\vec{u} = \lambda J\vec{u} \Leftrightarrow (JA - B)\vec{u} = \lambda J\vec{u}$$
$$\Leftrightarrow (AJ + JBJ)\vec{u} = \lambda\tilde{\vec{u}} \Leftrightarrow (A + JB)J\vec{u} = \lambda\tilde{\vec{u}} \Leftrightarrow (A + JB)\tilde{\vec{u}} = \lambda\tilde{\vec{u}}.$$

This implies $(\lambda, \tilde{\vec{u}})$ is an eigenpair of $Q_2$. □
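The reduction is easy to check numerically. The sketch below builds a random Hermitian Toeplitz matrix (hypothetical data) and verifies that $Q_1 = A - JB$ is a real symmetric matrix of the same size $n$ with the same spectrum as $M$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
m = rng.standard_normal(n) + 1j * rng.standard_normal(n)
m[0] = m[0].real                     # the diagonal entry of a Hermitian matrix is real
# Hermitian Toeplitz M with first column m
M = np.array([[m[i - j] if i >= j else np.conj(m[j - i]) for j in range(n)]
              for i in range(n)])

A, B = M.real, M.imag                # M = A + iB: A symmetric, B skew-symmetric
J = np.fliplr(np.eye(n))             # exchange matrix
Q1 = A - J @ B                       # real, and symmetric since (JB)^t = JB here

spec_M = np.linalg.eigvalsh(M)       # ascending eigenvalues of M
spec_Q1 = np.linalg.eigvalsh(Q1)     # ascending eigenvalues of Q1
```

Since $\sigma(Q_1) = \sigma(Q_2)$ by Theorem 3.3.1 and $\sigma(Q) = \sigma(Q_1) \cup \sigma(Q_2)$ doubles each eigenvalue of $M$, the two sorted spectra coincide.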
Therefore, we need to solve the eigenvalue problem for either $Q_1$ or $Q_2$ in order to solve the eigenvalue problem for $Q$, and hence that of $M$. A unit norm eigenvector of $M$ corresponding to the eigenvalue $\lambda$ is given by $\frac{1}{\sqrt{2}}(\tilde{\vec{u}} + i\vec{u})$. So the eigenvalue problem of a Hermitian Toeplitz matrix $M$ is equivalent to the eigenvalue problem of a real symmetric matrix $Q_1$ of the same size.

3.3.2 Complexity Analysis.

Define 1 pair of operations as 1 multiplication and 1 addition. Then 1 complex pair $\approx$ 4 real pairs. The numbers of multiplications and additions involved in an eigenvalue algorithm are approximately the same. Hence, for matrices of the same size, the operations involved with a real symmetric matrix would be about $\frac{1}{4}$ of the operations involved with a Hermitian matrix. So the number of operations required for the evaluation of $\sigma(Q_1)$ is approximately $\frac{1}{4}$ of the number required for the evaluation of $\sigma(M)$.

Also, 1 complex vector is equivalent to 2 real vectors in the sense of data movement. Hence, by reducing the eigenvalue problem of $M$ to the eigenvalue problem of $Q_1$, we reduce the data movement by 50%. In the case of a sequential machine, the effect of data movement on execution time is not significant, so the performance of the algorithm with real arithmetic would be about 4 times faster than that with complex arithmetic. However, in the case of vector or parallel processors, the amount of data

More information

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra Massachusetts Institute of Technology Department of Economics 14.381 Statistics Guido Kuersteiner Lecture Notes on Matrix Algebra These lecture notes summarize some basic results on matrix algebra used

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

Toeplitz-circulant Preconditioners for Toeplitz Systems and Their Applications to Queueing Networks with Batch Arrivals Raymond H. Chan Wai-Ki Ching y

Toeplitz-circulant Preconditioners for Toeplitz Systems and Their Applications to Queueing Networks with Batch Arrivals Raymond H. Chan Wai-Ki Ching y Toeplitz-circulant Preconditioners for Toeplitz Systems and Their Applications to Queueing Networks with Batch Arrivals Raymond H. Chan Wai-Ki Ching y November 4, 994 Abstract The preconditioned conjugate

More information

Math 504 (Fall 2011) 1. (*) Consider the matrices

Math 504 (Fall 2011) 1. (*) Consider the matrices Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO 1 Adjoint of a linear operator Note: In these notes, V will denote a n-dimensional euclidean vector

More information

Matrix Eigensystem Tutorial For Parallel Computation

Matrix Eigensystem Tutorial For Parallel Computation Matrix Eigensystem Tutorial For Parallel Computation High Performance Computing Center (HPC) http://www.hpc.unm.edu 5/21/2003 1 Topic Outline Slide Main purpose of this tutorial 5 The assumptions made

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

Linear Algebra, 4th day, Thursday 7/1/04 REU Info:

Linear Algebra, 4th day, Thursday 7/1/04 REU Info: Linear Algebra, 4th day, Thursday 7/1/04 REU 004. Info http//people.cs.uchicago.edu/laci/reu04. Instructor Laszlo Babai Scribe Nick Gurski 1 Linear maps We shall study the notion of maps between vector

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

2 MULTIPLYING COMPLEX MATRICES It is rare in matrix computations to be able to produce such a clear-cut computational saving over a standard technique

2 MULTIPLYING COMPLEX MATRICES It is rare in matrix computations to be able to produce such a clear-cut computational saving over a standard technique STABILITY OF A METHOD FOR MULTIPLYING COMPLEX MATRICES WITH THREE REAL MATRIX MULTIPLICATIONS NICHOLAS J. HIGHAM y Abstract. By use of a simple identity, the product of two complex matrices can be formed

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 19, 2010 Today

More information

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 13, 2009 Today

More information

Roundoff Error. Monday, August 29, 11

Roundoff Error. Monday, August 29, 11 Roundoff Error A round-off error (rounding error), is the difference between the calculated approximation of a number and its exact mathematical value. Numerical analysis specifically tries to estimate

More information

ARPACK. Dick Kachuma & Alex Prideaux. November 3, Oxford University Computing Laboratory

ARPACK. Dick Kachuma & Alex Prideaux. November 3, Oxford University Computing Laboratory ARPACK Dick Kachuma & Alex Prideaux Oxford University Computing Laboratory November 3, 2006 What is ARPACK? ARnoldi PACKage Collection of routines to solve large scale eigenvalue problems Developed at

More information

Parallel Singular Value Decomposition. Jiaxing Tan

Parallel Singular Value Decomposition. Jiaxing Tan Parallel Singular Value Decomposition Jiaxing Tan Outline What is SVD? How to calculate SVD? How to parallelize SVD? Future Work What is SVD? Matrix Decomposition Eigen Decomposition A (non-zero) vector

More information

Numerical Results on the Transcendence of Constants. David H. Bailey 275{281

Numerical Results on the Transcendence of Constants. David H. Bailey 275{281 Numerical Results on the Transcendence of Constants Involving e, and Euler's Constant David H. Bailey February 27, 1987 Ref: Mathematics of Computation, vol. 50, no. 181 (Jan. 1988), pg. 275{281 Abstract

More information

Lecture 6. Numerical methods. Approximation of functions

Lecture 6. Numerical methods. Approximation of functions Lecture 6 Numerical methods Approximation of functions Lecture 6 OUTLINE 1. Approximation and interpolation 2. Least-square method basis functions design matrix residual weighted least squares normal equation

More information

Notes on the matrix exponential

Notes on the matrix exponential Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not

More information

Theorem A.1. If A is any nonzero m x n matrix, then A is equivalent to a partitioned matrix of the form. k k n-k. m-k k m-k n-k

Theorem A.1. If A is any nonzero m x n matrix, then A is equivalent to a partitioned matrix of the form. k k n-k. m-k k m-k n-k I. REVIEW OF LINEAR ALGEBRA A. Equivalence Definition A1. If A and B are two m x n matrices, then A is equivalent to B if we can obtain B from A by a finite sequence of elementary row or elementary column

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Department of. Computer Science. Functional Implementations of. Eigensolver. December 15, Colorado State University

Department of. Computer Science. Functional Implementations of. Eigensolver. December 15, Colorado State University Department of Computer Science Analysis of Non-Strict Functional Implementations of the Dongarra-Sorensen Eigensolver S. Sur and W. Bohm Technical Report CS-9- December, 99 Colorado State University Analysis

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

Gaussian Elimination without/with Pivoting and Cholesky Decomposition

Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination WITHOUT pivoting Notation: For a matrix A R n n we define for k {,,n} the leading principal submatrix a a k A

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

Toeplitz matrices. Niranjan U N. May 12, NITK, Surathkal. Definition Toeplitz theory Computational aspects References

Toeplitz matrices. Niranjan U N. May 12, NITK, Surathkal. Definition Toeplitz theory Computational aspects References Toeplitz matrices Niranjan U N NITK, Surathkal May 12, 2010 Niranjan U N (NITK, Surathkal) Linear Algebra May 12, 2010 1 / 15 1 Definition Toeplitz matrix Circulant matrix 2 Toeplitz theory Boundedness

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Introduction to Iterative Solvers of Linear Systems

Introduction to Iterative Solvers of Linear Systems Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 2 Systems of Linear Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Automatica, 33(9): , September 1997.

Automatica, 33(9): , September 1997. A Parallel Algorithm for Principal nth Roots of Matrices C. K. Koc and M. _ Inceoglu Abstract An iterative algorithm for computing the principal nth root of a positive denite matrix is presented. The algorithm

More information

QR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR

QR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR QR-decomposition The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which In Matlab A = QR [Q,R]=qr(A); Note. The QR-decomposition is unique

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra By: David McQuilling; Jesus Caban Deng Li Jan.,31,006 CS51 Solving Linear Equations u + v = 8 4u + 9v = 1 A x b 4 9 u v = 8 1 Gaussian Elimination Start with the matrix representation

More information

Positive Denite Matrix. Ya Yan Lu 1. Department of Mathematics. City University of Hong Kong. Kowloon, Hong Kong. Abstract

Positive Denite Matrix. Ya Yan Lu 1. Department of Mathematics. City University of Hong Kong. Kowloon, Hong Kong. Abstract Computing the Logarithm of a Symmetric Positive Denite Matrix Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A numerical method for computing the logarithm

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

Advanced Digital Signal Processing -Introduction

Advanced Digital Signal Processing -Introduction Advanced Digital Signal Processing -Introduction LECTURE-2 1 AP9211- ADVANCED DIGITAL SIGNAL PROCESSING UNIT I DISCRETE RANDOM SIGNAL PROCESSING Discrete Random Processes- Ensemble Averages, Stationary

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Heinrich Voss. Dedicated to Richard S. Varga on the occasion of his 70th birthday. Abstract

Heinrich Voss. Dedicated to Richard S. Varga on the occasion of his 70th birthday. Abstract A Symmetry Exploiting Lanczos Method for Symmetric Toeplitz Matrices Heinrich Voss Technical University Hamburg{Harburg, Section of Mathematics, D{27 Hamburg, Federal Republic of Germany, e-mail: voss

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 2 Systems of Linear Equations Copyright c 2001. Reproduction permitted only for noncommercial,

More information

NUMERICAL SOLUTION OF THE EIGENVALUE PROBLEM FOR HERMITIAN TOEPLITZ MATRICES. William F. Trench* SIAM J. Matrix Anal. Appl.

NUMERICAL SOLUTION OF THE EIGENVALUE PROBLEM FOR HERMITIAN TOEPLITZ MATRICES. William F. Trench* SIAM J. Matrix Anal. Appl. NUMERICAL SOLUTION OF THE EIGENVALUE PROBLEM FOR HERMITIAN TOEPLITZ MATRICES William F. Trench* SIAM J. Matrix Anal. Appl. 10 (1989) 135-156 Abstract. An iterative procedure is proposed for computing the

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

5 and A,1 = B = is obtained by interchanging the rst two rows of A. Write down the inverse of B.

5 and A,1 = B = is obtained by interchanging the rst two rows of A. Write down the inverse of B. EE { QUESTION LIST EE KUMAR Spring (we will use the abbreviation QL to refer to problems on this list the list includes questions from prior midterm and nal exams) VECTORS AND MATRICES. Pages - of the

More information

1 Matrices and vector spaces

1 Matrices and vector spaces Matrices and vector spaces. Which of the following statements about linear vector spaces are true? Where a statement is false, give a counter-example to demonstrate this. (a) Non-singular N N matrices

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

6 Linear Systems of Equations

6 Linear Systems of Equations 6 Linear Systems of Equations Read sections 2.1 2.3, 2.4.1 2.4.5, 2.4.7, 2.7 Review questions 2.1 2.37, 2.43 2.67 6.1 Introduction When numerically solving two-point boundary value problems, the differential

More information