Algorithms to solve block Toeplitz systems and least-squares problems by transforming to Cauchy-like matrices

K. Gallivan, S. Thirumalai, P. Van Dooren

1 Introduction

Fast algorithms to factor Toeplitz matrices have existed since the beginning of this century [17]. Several new fast and superfast algorithms to solve structured matrices such as Toeplitz and Hankel matrices have been proposed and studied over the last couple of decades. A fundamental problem with these fast methods is that they do not allow any form of pivoting, because pivoting destroys the structure of the matrices. In [12], [11] the authors suggest ways to overcome this problem by transforming one class of structured matrices to another using fast trigonometric transforms, in such a way that pivoting may be incorporated in the factorization algorithms. These algorithms factor Toeplitz, Hankel and Vandermonde matrices by converting them to Cauchy-like matrices and performing Gaussian elimination with partial pivoting. It was shown that the special displacement structure of Cauchy-like matrices is conducive to pivoting strategies such as partial pivoting. The algorithms suggested in [12], [11], however, did not exploit properties such as realness and symmetry simultaneously. Recently there has been a surge of activity in this area, and new variants are constantly being developed to solve various structured matrix problems with pivoting. This paper attempts to collect the variants that solve Toeplitz systems and to compare them from a computational standpoint. Emphasis is given to variants that exploit properties such as realness and symmetry. We also present some new extensions of these algorithms to solve Toeplitz least-squares problems. These extensions are based on the normal equations and on the augmented system of equations. Finally, we also suggest how a rank-revealing QR algorithm can be obtained for Toeplitz matrices by converting them to Cauchy-like matrices.
Most problems in signal processing and systems theory yield block Toeplitz and block Hankel systems that are real and symmetric. Least-squares problems in these areas also give rise to real matrices. In [10], Gallivan et al. propose an algorithm based on the idea of converting real, symmetric positive semi-definite block Toeplitz systems into Cauchy-like matrices using the Hartley transform. This algorithm preserves desirable properties such as realness and symmetry, thereby significantly reducing the complexity of the method. The algorithm has one drawback: in some cases there is a possibility of breakdown. In such cases, the authors suggest that realness be given up and that the real Cauchy-like matrix be converted to a complex Cauchy-like matrix. The factorization then preserves the Hermitian property of the complex Cauchy-like matrix. In this paper we discuss variants of the original Cauchy-based algorithm to factor real, symmetric Toeplitz matrices that do not have the drawback mentioned above and still exploit the properties of realness and symmetry simultaneously. Section 2 reviews fundamental concepts such as the displacement structure of a matrix and shows how one class of structured matrices may be transformed to another class using trigonometric transforms. Section 3 reviews the Gaussian elimination method applied to structured matrices as proposed by Gohberg et al. in [11]. A variant of this algorithm that uses a symmetric form of the displacement equation [10] is also reviewed in section 3. Section 4 presents a variant to factor Hermitian Toeplitz matrices that exploits the Hermitian symmetry property; this section also computes the computational savings that result from exploiting the Hermitian property. Section 5 deals with factoring real unsymmetric Toeplitz matrices and estimates the computational cost. Section 6 deals with factoring real and symmetric Toeplitz matrices and
estimates the savings in computation that result from exploiting both properties simultaneously. Section 6 also presents a new algorithm to convert a Hermitian Toeplitz matrix to a real, symmetric Cauchy-like matrix. The factorization of this real, symmetric Cauchy-like matrix is significantly less expensive than the algorithm that converts a Hermitian Toeplitz matrix to a Hermitian Cauchy-like matrix before factorization. Section 7 discusses algorithms to solve real Toeplitz least-squares problems using the method of normal equations and the augmented system of equations. Section 8 derives a new rank-revealing QR factorization algorithm based on the conversion of Toeplitz matrices to Cauchy-like matrices. Section 9 discusses the generalization of the algorithms to block Toeplitz matrices. We also present results from experiments on several high performance architectures such as the Cray Y-MP, J90, and T90.

2 Transformations between classes of structured matrices

In [14] Kailath et al. introduced the idea of displacement structure to describe the structure in Toeplitz matrices that makes them conducive to fast factorization schemes. Consider a real Toeplitz matrix T. Let Z be the down-shift matrix that shifts a matrix down by one row when applied from the left. The matrix T - Z T Z^T can immediately be recognized as a sparse matrix in which only the first row and first column contain non-zero elements. This matrix has a rank-two factorization. The equation

T - Z T Z^T = G H^T     (1)

is called the displacement equation of the Toeplitz matrix w.r.t. (Z, Z^T), and the rank of the matrices G and H is the displacement rank. For Toeplitz matrices this rank is 2. The matrix T can easily be reconstructed from G and H, and hence these matrices are referred to as the generators of T w.r.t. (Z, Z^T). The displacement equation may also be written as

Z T - T Z^T = G H^T.     (2)

We refer to equation (1) as the displacement equation of type I and to (2) as the displacement equation of type II.
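The rank-2 property in (1) is easy to check numerically. The following is a minimal sketch with numpy (the Toeplitz entries are arbitrary random values, not taken from the paper):

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
# random real Toeplitz matrix: entry (i, j) depends only on i - j
t = rng.standard_normal(2 * n - 1)
T = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(n)])

Z = np.eye(n, k=-1)            # down-shift matrix: (Z x)_i = x_{i-1}

disp = T - Z @ T @ Z.T         # displacement of type I, eq. (1)
# only the first row and first column are non-zero, so the rank is 2
assert np.allclose(disp[1:, 1:], 0.0)
assert np.linalg.matrix_rank(disp) == 2
```

The same check with Z T - T Z^T illustrates the type II form (2); both displacements expose the O(n) parameters that actually define T.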
For real, symmetric Toeplitz matrices the displacement equation takes the symmetric form

T - Z T Z^T = G J G^T     (3)

where J is a symmetric (diagonal) signature matrix and G has a displacement rank of 2. Factorizations of the forms described above can be obtained analytically from the entries of the Toeplitz matrix, without the need for explicit rank factorizations. Fast algorithms based on the above displacement structure that factor Toeplitz matrices are commonly referred to as Schur- or Bareiss-type algorithms. These are in contrast to Levinson-type algorithms, which factor the inverse of the Toeplitz matrix. In [8], Chun et al. describe a method that generalizes the Schur algorithm to the QR factorization of Toeplitz matrices. They do this by demonstrating that the matrix T^T T has a displacement rank of 4 w.r.t. (Z, Z^T). Since this algorithm is a generalization of the classical Schur algorithm (displacement rank > 2), it is referred to as the generalized Schur algorithm. Again, as in the case of Toeplitz matrices, the generators can be computed analytically without the need for an explicit rank factorization. All this extends quite naturally to block Toeplitz matrices, which have similar displacement structure properties. If T is a block Toeplitz matrix with block size m, and Z is the down-shift matrix that shifts T down by m rows when applied from the left, then T has a displacement rank of 2m w.r.t. (Z, Z^T). Similarly, T^T T has a displacement rank of 4m w.r.t. the same matrices. In general, any matrix A that has a low displacement rank w.r.t. a pair of matrices (F_l, F_r) is called a structured matrix. The following is a general displacement equation of type II:

F_l A - A F_r = G H^T     (4)

Cauchy-like matrices have the property that the displacement matrices F_l and F_r are diagonal. Any transform that diagonalizes the matrices F_l and F_r can, therefore, be used to convert the given structured matrix A
to a Cauchy-like matrix. We now show that the displacement structure of a Cauchy-like matrix is invariant under pivoting. Let the displacement equation for a Cauchy-like matrix C be

D_l C - C D_r = G H^T     (5)

Let P be a permutation matrix (P^T P = I) corresponding to a partial pivoting operation. Applying this permutation to the Cauchy-like matrix C yields

P D_l C - P C D_r = P G H^T
(P D_l P^T) (P C) - (P C) D_r = (P G) H^T
D̂_l Ĉ - Ĉ D_r = Ĝ H^T.     (6)

It is clear that D̂_l has the same structure as D_l (diagonal) and that the equation remains unchanged in structure. It is also easily verified that if D_l were not diagonal, such a permutation would in general destroy the displacement structure. For symmetric matrices with symmetric pivoting we would require both D_l and D_r to be diagonal. In particular, consider the type II displacement equation for a Toeplitz matrix T of size n w.r.t. (Z, Z), where Z is the circulant down-shift matrix:

Z T - T Z = G H^T     (7)

It is known that the discrete Fourier transform (DFT) diagonalizes the displacement matrix Z. Let F be the DFT matrix of size n defined by F = (1/sqrt(n)) [e^{2 pi i k j / n}], 0 <= k, j <= n-1. Then

F Z F^* = Λ     (8)

where Λ is a diagonal matrix with Λ(j, j) = e^{2 pi i j / n} for j = 0, ..., n-1, and i = sqrt(-1). Applying the transformation F (.) F^* to the displacement equation we have

F Z T F^* - F T Z F^* = F G H^T F^*
(F Z F^*) (F T F^*) - (F T F^*) (F Z F^*) = Ĝ Ĥ^*
Λ C - C Λ = Ĝ Ĥ^*     (9)

where the displacement matrices D_l and D_r of (5) are both equal to Λ and C = F T F^* is a Cauchy-like matrix. Note that if T were a real matrix, G and H would also be real. The DFT transformation converts these real matrices to complex matrices. This is undesirable because it increases the amount of computation involved in the factorization and the triangular solves. It will be shown later that at each step of the factorization of Cauchy-like matrices of the form shown in (5) one has to solve Lyapunov equations derived from the displacement equation.
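The chain (7)-(9) can be verified numerically. The sketch below (numpy, random Toeplitz entries) builds the circulant down-shift and the normalized DFT matrix and checks both the diagonalization (8) and the rank of the resulting Cauchy-like displacement:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
t = rng.standard_normal(2 * n - 1)
T = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(n)])

# circulant down-shift and the normalized DFT matrix
Z = np.roll(np.eye(n), 1, axis=0)
k = np.arange(n)
F = np.exp(2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

# eq. (8): F Z F^* is diagonal with entries e^{2 pi i j / n}
Lam = F @ Z @ F.conj().T
assert np.allclose(Lam, np.diag(np.exp(2j * np.pi * k / n)))

# eq. (7): the circulant displacement of a Toeplitz matrix has rank 2
assert np.linalg.matrix_rank(Z @ T - T @ Z) == 2

# eq. (9): C = F T F^* is Cauchy-like with the same displacement rank
C = F @ T @ F.conj().T
assert np.linalg.matrix_rank(Lam @ C - C @ Lam) == 2
```

Note that C is complex even though T is real, which is exactly the drawback discussed in the text.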
It is well known that the Sylvester equation implicit in (5) can be solved uniquely only if F_l and F_r have no eigenvalues in common. This in turn requires that the diagonal matrices D_l and D_r share no diagonal entries. If they do have some identical eigenvalues, then one has to compute certain additional parameters which need to be updated throughout the factorization algorithm, and this increases the complexity of the algorithm. Keeping this in mind, Gohberg et al. [11] introduce a different form of the displacement equation to solve Toeplitz matrices. Consider the displacement matrices Z_1 and Z_{-1} defined as

Z_1 = Z + e_1 e_n^T ,    Z_{-1} = Z - e_1 e_n^T     (10)

where Z is the down-shift matrix and e_1, e_n are the first and last columns of the identity matrix; that is, both matrices have ones on the subdiagonal, and their upper-right corner entries are 1 and -1 respectively.
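These two matrices are simultaneously diagonalized by the DFT: F diagonalizes Z_1 directly, while Z_{-1} is diagonalized after a similarity with the diagonal scaling D of (12) below. A quick numerical sketch (numpy; the matrix names are illustrative):

```python
import numpy as np

n = 8
k = np.arange(n)
Z1 = np.roll(np.eye(n), 1, axis=0)          # circulant down-shift, corner +1
Zm1 = Z1.copy(); Zm1[0, n - 1] = -1.0       # corner -1 instead

F = np.exp(2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)
D = np.diag(np.exp(1j * np.pi * k / n))     # the scaling D of (12)

L1 = F @ Z1 @ F.conj().T
L2 = F @ (D @ Zm1 @ np.linalg.inv(D)) @ F.conj().T

# both are diagonal, and their spectra interlace on the unit circle,
# so they share no eigenvalue
assert np.allclose(L1, np.diag(np.exp(2j * np.pi * k / n)))
assert np.allclose(L2, np.diag(np.exp(1j * np.pi * (2 * k + 1) / n)))
```

The disjoint spectra are the whole point of this choice: every entry of the transformed Cauchy-like matrix can then be recovered from the generators.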

The displacement equation for Toeplitz matrices can then be written as

Z_1 T - T Z_{-1} = G H^T     (11)

with a displacement rank of 2. It is well known that the DFT matrix F diagonalizes both Z_1 and D Z_{-1} D^{-1}:

F Z_1 F^* = Λ_1 ,    F (D Z_{-1} D^{-1}) F^* = Λ_{-1}

where

Λ_1 = diag(1, e^{2 pi i / n}, ..., e^{2 pi i (n-1)/n}),
Λ_{-1} = diag(e^{pi i / n}, e^{3 pi i / n}, ..., e^{(2n-1) pi i / n}),
D = diag(1, e^{pi i / n}, ..., e^{(n-1) pi i / n}).     (12)

The displacement equation can now be rewritten as

(F Z_1 F^*) (F T D^{-1} F^*) - (F T D^{-1} F^*) (F D Z_{-1} D^{-1} F^*) = (F G) (H^T D^{-1} F^*)
Λ_1 C - C Λ_{-1} = Ĝ Ĥ^*     (13)

where C is a Cauchy-like matrix defined by C = F T D^{-1} F^*. It can be seen from the above displacement equation that if T is a real matrix, the DFT matrix destroys the realness property. Also, if T is symmetric, the Cauchy-like matrix it is transformed to is no longer Hermitian. In the next few subsections we review several forms of displacement equations and the corresponding fast trigonometric transforms that convert Toeplitz matrices to Cauchy-like matrices.

2.1 Non-Hermitian Toeplitz matrices

Consider a non-Hermitian Toeplitz matrix T. The displacement equation for such a matrix using the displacement matrices Z_1 and Z_{-1} mentioned above is of the form

Z_1 T - T Z_{-1} = G H^*.     (14)

The Toeplitz matrix in the above equation can be converted to a Cauchy-like matrix as demonstrated in (13).

2.2 Hermitian Toeplitz matrices

The technique described in section 2.1 could be applied to Hermitian Toeplitz matrices. However, doing so would convert the Hermitian Toeplitz matrix to a non-Hermitian Cauchy-like matrix. In order to maintain Hermitian symmetry, one may use the displacement equation

T - Z_1 T Z_1^T = G J G^*     (15)

Applying the DFT transformation F (.) F^* to the above equation, we get the following displacement equation for the Hermitian Cauchy-like matrix C = F T F^*:

C - Λ C Λ^* = Ĝ J Ĝ^*     (16)

2.3 Real unsymmetric Toeplitz matrices

The techniques described in sections 2.1 and 2.2 destroy the realness property of real Toeplitz matrices.
One would like to preserve realness in order to avoid complex arithmetic, which is in general more expensive than real arithmetic. Just as the discrete Fourier transform was used to convert complex Toeplitz matrices to complex Cauchy-like matrices, several real trigonometric transforms such as the discrete Sine, Cosine and Hartley transforms can be used to convert real Toeplitz matrices to real Cauchy-like matrices. In this section we demonstrate how these transforms may be used. We first review the displacement matrices and the real trigonometric transforms that diagonalize them [4]. We then show how they may be used to convert real Toeplitz matrices into real Cauchy-like matrices. Consider the general displacement matrix Z_{αβ} of size n, which is tridiagonal with zero main diagonal and ones on the sub- and superdiagonal, except that its (1,1) entry is α and its (n,n) entry is β:

          [ α  1           ]
          [ 1  0  1        ]
Z_{αβ} =  [    .  .  .     ]     (17)
          [       1  0  1  ]
          [          1  β  ]

It can be easily verified that when α = β = 0, the discrete Sine transform (DST) matrix defined as

S_00 = sqrt(2/(n+1)) [ sin(i j pi / (n+1)) ],  i, j = 1, ..., n     (18)

diagonalizes Z_00. Specifically, we have

S_00 Z_00 S_00 = 2 diag(cos(j pi / (n+1))),  j = 1, ..., n     (19)

When α = β = 1, the discrete Cosine transform-II (DCT-II) S_11 diagonalizes Z_11:

S_11 = sqrt(2/n) [ k_j cos((2i+1) j pi / (2n)) ],  i, j = 0, ..., n-1
S_11^T Z_11 S_11 = 2 diag(cos(j pi / n)),  j = 0, ..., n-1     (20)

Here k_j = 1/sqrt(2) for j = 0 and k_j = 1 otherwise. For α = β = -1, the discrete Sine transform-II (DST-II) S_{-1-1} diagonalizes Z_{-1-1}:

S_{-1-1} = sqrt(2/n) [ k_j sin((2i-1) j pi / (2n)) ],  i, j = 1, ..., n
S_{-1-1}^T Z_{-1-1} S_{-1-1} = 2 diag(cos(j pi / n)),  j = 1, ..., n     (21)

Here k_j = 1/sqrt(2) for j = n and k_j = 1 otherwise. If α = -1 and β = 1, or vice versa, then the discrete Sine transform-IV (DST-IV) S_{-11} diagonalizes Z_{-11} and the discrete Cosine transform-IV (DCT-IV) S_{1-1} diagonalizes Z_{1-1}:

S_{-11} = sqrt(2/n) [ sin((2i+1)(2j+1) pi / (4n)) ],  i, j = 0, ..., n-1
S_{-11} Z_{-11} S_{-11} = 2 diag(cos((2j+1) pi / (2n))),  j = 0, ..., n-1
S_{1-1} = sqrt(2/n) [ cos((2i+1)(2j+1) pi / (4n)) ],  i, j = 0, ..., n-1
S_{1-1} Z_{1-1} S_{1-1} = 2 diag(cos((2j+1) pi / (2n))),  j = 0, ..., n-1     (22)

The displacement matrices Z_{αβ} can be used to formulate displacement equations for unsymmetric Toeplitz matrices. However, unlike the case of Z_1, the displacement rank w.r.t. the Z_{αβ} will be 4 and not 2. This is the penalty one incurs for staying in the real domain. It will be shown in a later section that this increase in the displacement rank does not result in an algorithm more expensive than the one that transforms the real matrix to a complex Cauchy-like matrix.
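The diagonalization (18)-(19) is easy to check numerically. A minimal sketch with numpy (the DST-I matrix is built explicitly, as in (18)):

```python
import numpy as np

n = 7
i = np.arange(1, n + 1)
# DST-I matrix (18); it is symmetric and orthogonal
S00 = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(i, i) * np.pi / (n + 1))
# Z_00: tridiagonal, zero diagonal, ones on the sub- and superdiagonals
Z00 = np.eye(n, k=1) + np.eye(n, k=-1)

assert np.allclose(S00 @ S00, np.eye(n))               # orthogonal involution
eig = 2.0 * np.cos(i * np.pi / (n + 1))
assert np.allclose(S00 @ Z00 @ S00, np.diag(eig))      # eq. (19)
```

The analogous checks for (20)-(22) follow the same pattern with the DCT-II, DST-II, DST-IV and DCT-IV matrices.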

Consider a real unsymmetric Toeplitz matrix T. One could use any of the following displacement equations:

Z_00 T - T Z_11 = G_1 H_1^T
Z_00 T - T Z_{-1-1} = G_2 H_2^T
Z_00 T - T Z_{1-1} = G_3 H_3^T
Z_00 T - T Z_{-11} = G_4 H_4^T     (23)

Each of the equations shown above results in a displacement rank of 4. The corresponding real trigonometric transforms can then be applied to obtain the displacement equation of the corresponding real Cauchy-like matrix. It must be mentioned that the matrices G_1, ..., G_4 and H_1, ..., H_4 can be calculated analytically without the need for a rank factorization routine.

2.4 Real symmetric Toeplitz matrices

If the Toeplitz matrix is real and symmetric, then the techniques described in section 2.3 would convert it to an unsymmetric Cauchy-like matrix. One may, however, use symmetric forms of the displacement equations described in section 2.3:

Z_ε T - T Z_ε = G Σ G^T     (24)

where T is a real symmetric Toeplitz matrix of size n, Σ is a skew-symmetric signature matrix, Z_ε denotes Z_{εε}, and ε may be either 0 or 1. By construction, it is easy to see that the displacement of T w.r.t. Z_ε is of the form

              [ 0      -a^T     0  ]
Z_ε T - T Z_ε = [ a       0      e a ]     (25)
              [ 0     -a^T e    0  ]

where a is a vector of length n-2 that depends on the displacement matrix Z_ε and e is the reflection permutation matrix of size n-2. From the above displacement equation, we see that the generator G can be chosen as

      [ 1        0    0        0   ]
G  =  [ 0_{n-2}  a    0_{n-2}  e a ]     (26)
      [ 0        0    1        0   ]

where 0_{n-2} is a vector of zeros of size (n-2) x 1. The Sine-I (S_00) or the Cosine-II (S_11) transform can be used to diagonalize the displacement matrices Z_00 and Z_11. The corresponding displacement equation for the Cauchy-like matrix C is

L_ε C - C L_ε = Ĝ Σ Ĝ^T     (27)

where L_ε = S_ε^T Z_ε S_ε, C = S_ε^T T S_ε and Ĝ = S_ε^T G. The displacement rank of the above equation is 4. Interestingly, the Cauchy-like matrix C has a lot of sparsity that can be exploited during the factorization algorithm. Specifically, if P is the odd-even sort permutation matrix (i.e.
P x = [x_1 x_3 ... x_2 x_4 ...]^T, odd-indexed entries first), then

P C P^T = [ M_1 0 ; 0 M_2 ]     (28)

where M_1 is a Cauchy-like matrix of size ceil(n/2) and M_2 is of size floor(n/2). In addition, it can be shown that the matrices M_1 and M_2 have a displacement rank of 2, as opposed to C, which has a displacement rank of 4. We prove this for both even and odd n. First, consider the case when n is even. From the definitions of S_00 and S_11, it can be seen that they satisfy the following condition when n is even:

P S_ε P^T = [ S_1 S_2 ; E S_1 -E S_2 ]     (29)
where S_1 and S_2 are submatrices of size n/2 that depend on the trigonometric transform (S_00 or S_11) and E is the reflection permutation matrix of size n/2. Let us partition the matrix P T P^T as

P T P^T = [ T_1 T_2 ; T_2^T T_1 ]     (30)

Here T_1 is a symmetric Toeplitz matrix and T_2 is an unsymmetric Toeplitz matrix of size n/2. It follows that

P C P^T = (P S_ε^T P^T) (P T P^T) (P S_ε P^T)
        = [ S_1^T  S_1^T E ; S_2^T  -S_2^T E ] [ T_1 T_2 ; T_2^T T_1 ] [ S_1 S_2 ; E S_1 -E S_2 ]     (31)

The (2,1) entry in the above matrix equation is

(S_2^T T_1 - S_2^T E T_2^T) S_1 + (S_2^T T_2 - S_2^T E T_1) E S_1

Rearranging the terms, we have

S_2^T (T_1 - E T_1 E) S_1 + S_2^T (T_2 E - E T_2^T) S_1 = 0,

since T_1 and T_2 are Toeplitz matrices (E T_1 E = T_1 and E T_2^T E = T_2). The matrix P C P^T is symmetric, so the (1,2) sub-matrix is also zero. This proves that one can solve two smaller systems of size n/2 instead of one large system of size n. Having proved that P C P^T is of the form shown in (28), we now show that M_1 and M_2 have a displacement rank of 2. Applying the permutation matrix P to the displacement equation (27), we have

(P L_ε P^T) (P C P^T) - (P C P^T) (P L_ε P^T) = (P S_ε^T P^T) (P G) Σ (G^T P^T) (P S_ε P^T).     (32)

From (26), we see that P G can be partitioned as

P G = [ g_1  g_3  E g_2  E g_4 ; g_2  g_4  E g_1  E g_3 ]     (33)

where g_1, g_2, g_3 and g_4 are vectors of length n/2. From (33) and (29), we have

(P S_ε^T P^T) (P G) =
[ S_1^T g_1 + S_1^T E g_2    S_1^T g_3 + S_1^T E g_4      S_1^T g_1 + S_1^T E g_2      S_1^T g_3 + S_1^T E g_4   ]
[ S_2^T g_1 - S_2^T E g_2    S_2^T g_3 - S_2^T E g_4    -(S_2^T g_1 - S_2^T E g_2)   -(S_2^T g_3 - S_2^T E g_4)  ]     (34)

From the above equation it is clear that the generators of M_1 and M_2 have rank 2: each block row contains only two distinct columns up to sign. This proves that the displacement rank of M_1 and M_2 is 2. We now outline the proof for odd n. If n is odd, then the permuted trigonometric transform P S_ε P^T has the following property:

P S_ε P^T = [ S_1 S_3 ; S_2 S_4 ]     (35)

where S_1 = E_1 S_1, S_2 = E_2 S_2, S_3 = -E_1 S_3, S_4 = -E_2 S_4.
E_1 and E_2 are reflection permutation matrices of size ceil(n/2) and floor(n/2) respectively. Now partitioning P T P^T conformally as

P T P^T = [ T_1 T_2 ; T_2^T T_3 ]

we have

P C P^T = (P S_ε^T P^T) (P T P^T) (P S_ε P^T) = [ S_1^T S_2^T ; S_3^T S_4^T ] [ T_1 T_2 ; T_2^T T_3 ] [ S_1 S_3 ; S_2 S_4 ].     (36)

Again, the (2,1) sub-matrix in the above matrix equation is

S_3^T T_1 S_1 + S_4^T T_2^T S_1 + S_3^T T_2 S_2 + S_4^T T_3 S_2.

We can show that each term in the above expression evaluates to zero, and hence the (2,1) block is zero. For example, consider the first term:

S_3^T T_1 S_1 = S_3^T T_1 E_1 S_1 = S_3^T E_1 T_1 S_1 = -S_3^T T_1 S_1 = 0.

Similarly, all other terms in the expression evaluate to zero, and the (2,1) block is zero. Since the Cauchy-like matrix P C P^T is symmetric, the (1,2) block is also zero. The permutation matrix P, therefore, separates the system of equations into two systems of about half the size of the original system that can be solved independently. Now we show that the displacement rank of M_1 and M_2 is 2. If n is odd, then P G can be partitioned as

P G = [ g_1  g_3  E_1 g_1  E_1 g_3 ; g_2  g_4  E_2 g_2  E_2 g_4 ]     (37)

where g_1 and g_3 are of size ceil(n/2), and g_2 and g_4 are vectors of length floor(n/2). From (37) and (35), we have

(P S_ε^T P^T) (P G) =
[ S_1^T g_1 + S_2^T g_2    S_1^T g_3 + S_2^T g_4    S_1^T E_1 g_1 + S_2^T E_2 g_2    S_1^T E_1 g_3 + S_2^T E_2 g_4 ]
[ S_3^T g_1 + S_4^T g_2    S_3^T g_3 + S_4^T g_4    S_3^T E_1 g_1 + S_4^T E_2 g_2    S_3^T E_1 g_3 + S_4^T E_2 g_4 ]
=
[ S_1^T g_1 + S_2^T g_2    S_1^T g_3 + S_2^T g_4      S_1^T g_1 + S_2^T g_2      S_1^T g_3 + S_2^T g_4   ]
[ S_3^T g_1 + S_4^T g_2    S_3^T g_3 + S_4^T g_4    -(S_3^T g_1 + S_4^T g_2)   -(S_3^T g_3 + S_4^T g_4)  ]     (38)

(using S_1^T E_1 = S_1^T, S_2^T E_2 = S_2^T, S_3^T E_1 = -S_3^T and S_4^T E_2 = -S_4^T). The above equation shows that the displacement rank of M_1 and M_2 is 2, because the generators of M_1 and M_2 have rank 2. In this section we have shown how the odd-even permutation matrix can be used to decouple a Cauchy-like matrix arising from a real symmetric Toeplitz matrix of size n into two Cauchy-like matrices of half the size and half the displacement rank.
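The decoupling just proved can be checked numerically. A sketch with numpy for even n, using the DST-I of (18) and random symmetric Toeplitz entries:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
t = rng.standard_normal(n)
T = np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])  # real symmetric Toeplitz

i = np.arange(1, n + 1)
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(i, i) * np.pi / (n + 1))  # DST-I
C = S.T @ T @ S

# odd-even sort permutation: odd-indexed entries first
p = np.r_[np.arange(0, n, 2), np.arange(1, n, 2)]
PC = C[np.ix_(p, p)]
h = (n + 1) // 2
# the cross blocks vanish, so PC = diag(M1, M2) as in (28)
assert np.allclose(PC[:h, h:], 0.0) and np.allclose(PC[h:, :h], 0.0)

# each block has displacement rank 2 w.r.t. the permuted eigenvalues of Z_00
d = 2.0 * np.cos(i * np.pi / (n + 1))[p]
M1, D1 = PC[:h, :h], np.diag(d[:h])
sv = np.linalg.svd(D1 @ M1 - M1 @ D1, compute_uv=False)
assert sv[2] < 1e-8                                   # rank at most 2
```

The block structure follows from the parity of the sine vectors: a symmetric Toeplitz matrix is centrosymmetric, so it never couples symmetric and antisymmetric columns of the transform.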
This yields a substantial savings over the unsymmetric forms of the displacement equation, as we shall see in a later section.

2.4.1 Converting Hermitian Toeplitz matrices to real Cauchy-like matrices

In section 2.2, a displacement equation for Hermitian Toeplitz matrices was suggested using Z_1 as the displacement matrix, and the discrete Fourier transform was used to convert the Hermitian Toeplitz matrix to a Hermitian Cauchy-like matrix. In this section, we show how the displacement matrix Z_ε may be used, along with the odd-even permutation matrix, to convert a Hermitian Toeplitz matrix into a real, symmetric Cauchy-like matrix. The factorization of that Cauchy-like matrix can be done in real arithmetic, and the savings in computation are significant. Consider a Hermitian Toeplitz matrix T of size n. The displacement equation of T w.r.t. Z_ε can be written as

Z_ε T - T Z_ε = G Σ G^*     (39)

where Σ is a skew-symmetric signature matrix and G has rank 4. In the previous section on real, symmetric Toeplitz matrices, we proved that P S_ε^T Real(T) S_ε P^T is of the form

P S_ε^T Real(T) S_ε P^T = [ M_1 0 ; 0 M_2 ]     (40)

where M_1 and M_2 are real Cauchy-like matrices of size ceil(n/2) and floor(n/2) respectively. In addition, one can prove, by construction, that the imaginary part of T satisfies the equation

P S_ε^T Imag(T) S_ε P^T = [ 0 -M_3^T ; M_3 0 ]     (41)

where M_3 is a real Cauchy-like matrix of size floor(n/2) x ceil(n/2). If we define a matrix D to be of the form

D = [ I_1 0 ; 0 i I_2 ]     (42)
where I_1 and I_2 are identity matrices of size ceil(n/2) and floor(n/2) respectively and i = sqrt(-1), then

D^* (P S_ε^T T S_ε P^T) D = [ M_1 M_3^T ; M_3 M_2 ]     (43)

The matrix on the right-hand side is a real symmetric Cauchy-like matrix. Since the rank of the generator G of the Hermitian Toeplitz matrix was 4, the generator matrix of the corresponding real, symmetric Cauchy-like matrix will also have rank 4. This section shows how a Hermitian Toeplitz matrix may be converted to a real symmetric Cauchy-like matrix; the factorization may then proceed in real arithmetic. The savings in computation resulting from this conversion will be calculated in section 6.

3 Factorization of Cauchy-like matrices with pivoting

In this section we review factorization algorithms for Cauchy-like matrices that allow various pivoting strategies to be incorporated. We first discuss the algorithm due to Gohberg et al. to factor non-Hermitian Cauchy-like matrices with displacement matrices of the form shown in (5). We then present another algorithm [10] to factor Hermitian Cauchy-like matrices with displacement equations of the form C - D_l C D_l^* = G Σ G^*. These can then be easily adapted to the kind of Cauchy-like matrix at hand.

3.1 Factoring non-Hermitian Cauchy-like matrices

Consider a complex non-Hermitian Cauchy-like matrix C of size n, defined by the displacement equation

D_l C - C D_r = G H^*.     (44)

Here D_l and D_r are diagonal matrices, and G and H are matrices of size n x α with rank equal to α. Let us further assume, for the moment, that D_l and D_r do not have any equal diagonal entries; we relax this restriction later. Below we also show how partial pivoting may be incorporated into the factorization algorithm. From the above displacement equation, it is clear that any column of C can be obtained by solving the following Sylvester equation:

D_l C(:, j) - C(:, j) D_r(j, j) = G H(j, :)^*     (45)

and the (i, j)-th element of C can then be computed as

C(i, j) = G(i, :) H(j, :)^* / (D_l(i, i) - D_r(j, j)).     (46)

This indicates that unless the diagonal entries of D_l and D_r are disjoint, one cannot construct all the elements of C. Specifically, if D_l(k, k) = D_r(l, l), then the element C(k, l) cannot be computed this way; such elements would have to be known prior to the start of the factorization. If it so happens that D_l = D_r, then the entire diagonal of C would have to be known a priori. We now proceed to describe the LU factorization algorithm with partial pivoting. The first step of the algorithm is to compute the first column of C; this can be done as described in the previous paragraph. Let the permutation matrix that brings the pivot element to the (1,1) position be P_1. Applying this permutation to the displacement equation, we get

(P_1 D_l P_1^T) (P_1 C) - (P_1 C) D_r = (P_1 G) H^*     (47)

Let us partition the matrix P_1 C as

P_1 C = [ d u ; l C_1 ]     (48)

Let us define two matrices X and Y as

X = [ 1 0 ; l d^{-1} I ] ,    Y = [ 1 d^{-1} u ; 0 I ]     (49)

Then P_1 C can be factored as

P_1 C = X [ d 0 ; 0 C_sc ] Y     (50)

Further, let P_1 D_l P_1^T and D_r be conformally partitioned as

P_1 D_l P_1^T = [ D_l1 0 ; 0 D_l2 ] ,    D_r = [ D_r1 0 ; 0 D_r2 ]     (51)

Let us apply the transformation X^{-1} (.) Y^{-1} to (47). Using (50) and (51), we can write the transformed equation as

(X^{-1} (P_1 D_l P_1^T) X) (X^{-1} P_1 C Y^{-1}) - (X^{-1} P_1 C Y^{-1}) (Y D_r Y^{-1}) = X^{-1} P_1 G H^* Y^{-1}     (52)

The above equation can be rewritten after simplification as

[ D_l1  0 ; D_l2 l d^{-1} - l d^{-1} D_l1  D_l2 ] [ d 0 ; 0 C_sc ] - [ d 0 ; 0 C_sc ] [ D_r1  d^{-1} u D_r2 - D_r1 d^{-1} u ; 0  D_r2 ] = X^{-1} P_1 G H^* Y^{-1}     (53)

Equating the (2,2) position in the above equation, we have

D_l2 C_sc - C_sc D_r2 = G_1 H_1^*     (54)

where G_1 is the portion of X^{-1} P_1 G from the second row down and H_1 is the portion of Y^{-*} H from the second row down. The first column of L in the LU factorization is [1 ; d^{-1} l] and the first row of U is [d u]. This completes one step of the LU factorization algorithm. The process can now be repeated on the displacement equation of the Schur complement C_sc of P_1 C w.r.t. d to get the second column of L and the second row of U. After n steps, one has the LU factorization of a permuted Cauchy-like matrix. If the displacement matrices D_l and D_r have identical diagonal entries, then, as pointed out earlier, the elements corresponding to these entries have to be known a priori. In addition, these elements have to be updated with the transformation X^{-1} (.) Y^{-1} to reflect their values in the Schur complement C_sc. To avoid this extra step in the algorithm, it is often desirable to have the spectra of D_l and D_r disjoint. In some cases, such as Hermitian Cauchy-like matrices, one cannot satisfy this condition, because doing so would destroy the symmetry. In this case the extra computation at the end of each step of the algorithm to update the precomputed elements of C is unavoidable if symmetry is to be maintained.
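The complete recursion can be sketched in a few lines of numpy. This is a sketch of the non-Hermitian algorithm above in real arithmetic, with partial pivoting; the diagonals and generators are arbitrary choices (not from the paper) picked so that D_l and D_r share no entries:

```python
import numpy as np

def gko_lu(dl, dr, G, H):
    """LU with partial pivoting of the Cauchy-like matrix C defined by
    diag(dl) C - C diag(dr) = G H^T, assuming dl and dr share no entries."""
    n = dl.size
    dl, G, H = dl.copy(), G.copy(), H.copy()
    L, U = np.eye(n), np.zeros((n, n))
    perm = np.arange(n)
    for k in range(n):
        # column k of the current Schur complement, from the generators (45)-(46)
        col = (G[k:] @ H[k]) / (dl[k:] - dr[k])
        p = k + np.argmax(np.abs(col))
        # bring the pivot to the top: swap everything that carries row information
        G[[k, p]], dl[[k, p]] = G[[p, k]], dl[[p, k]]
        L[[k, p], :k] = L[[p, k], :k]
        perm[[k, p]] = perm[[p, k]]
        col[[0, p - k]] = col[[p - k, 0]]
        d = col[0]
        L[k + 1:, k] = col[1:] / d
        U[k, k:] = (G[k] @ H[k:].T) / (dl[k] - dr[k:])
        # generator updates for the Schur complement, cf. (54)
        G[k + 1:] -= np.outer(L[k + 1:, k], G[k])
        H[k + 1:] -= np.outer(U[k, k + 1:] / d, H[k])
    return perm, L, U

n = 6
rng = np.random.default_rng(0)
dl, dr = np.arange(n) + 0.5, np.arange(n, dtype=float)
G, H = rng.standard_normal((n, 2)), rng.standard_normal((n, 2))
C = (G @ H.T) / np.subtract.outer(dl, dr)       # the entries of C, via (46)
perm, L, U = gko_lu(dl, dr, G, H)
assert np.allclose(C[perm], L @ U)
```

Only the generators are updated at each step, which is what makes the algorithm fast: each step costs O(alpha n) rather than O(n^2).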
If the permutations at each step are accumulated into a matrix P, then the above algorithm produces a factorization of the Toeplitz matrix of the form

T = F^* P^T L U F     (55)

(for the transformation C = F T F^* of (9); an extra diagonal factor appears if the transformation of (13) is used).

3.2 Factoring Hermitian Cauchy-like matrices

Now consider a Hermitian Cauchy-like matrix C with the following displacement equation:

C - D_l C D_l^* = G Σ G^*     (56)

Any column of C can be obtained by solving the following Lyapunov equation:

C(:, j) - D_l C(:, j) D_l(j, j)^* = G Σ G(j, :)^*     (57)

If the reciprocal of D_l(j, j)^* is equal to some eigenvalue of D_l, then the corresponding element of C has to be computed a priori and updated during the course of the algorithm, as described earlier. For the moment, let us assume
that this is not the case. Further, let us assume that the pivot block is already in the right location. Since we have a Hermitian Cauchy-like matrix, a symmetric pivoting strategy such as Bunch-Kaufman has to be used during the factorization. As a result, the pivot block may be either 1 x 1 or 2 x 2. Let us partition the matrix C as

C = [ d l^* ; l C_1 ]     (58)

Let us define the matrix

X = [ I 0 ; l d^{-1} I ]     (59)

Then, applying X^{-1} (.) X^{-*} to (56), we obtain

[ d 0 ; 0 C_sc ] - [ A_11 0 ; A_21 A_22 ] [ d 0 ; 0 C_sc ] [ A_11 0 ; A_21 A_22 ]^* = X^{-1} G Σ G^* X^{-*}     (60)

where A_11 = D_l1, A_22 = D_l2 and A_21 = D_l2 l d^{-1} - l d^{-1} D_l1. If the Bunch-Kaufman pivoting strategy results in a 1 x 1 pivot, then D_l1 is a 1 x 1 matrix; otherwise it is of size 2 x 2. To proceed with the factorization of the Cauchy-like matrix, we have to obtain a displacement equation of the form (56) for the Schur complement C_sc of C, i.e. an equation of the form

C_sc - A_22 C_sc A_22^* = G_sc Σ_sc G_sc^*     (61)

Partitioning G conformally as G = [G_1 ; G_2] and equating the (2,2) position in (60), we have

C_sc - A_22 C_sc A_22^* = (G_2 - l d^{-1} G_1) Σ (G_2 - l d^{-1} G_1)^* + A_21 d A_21^*     (62)

where A_21 = A_22 l d^{-1} - l d^{-1} A_11. The last term of (62), A_21 d A_21^*, can be expanded as

A_21 d A_21^* = (A_22 l d^{-1} - l d^{-1} A_11) d (d^{-1} l^* A_22^* - A_11^* d^{-1} l^*)
             = (A_22 l A_11^* - l d^{-1} A_11 d A_11^*) A_11^{-*} d^{-1} A_11^{-1} (A_11 l^* A_22^* - A_11 d A_11^* d^{-1} l^*).     (63)

From the displacement equation (56), we can also write

d - A_11 d A_11^* = G_1 Σ G_1^*     (64)
l - A_22 l A_11^* = G_2 Σ G_1^*     (65)

Inserting the above equations in (63) yields

A_21 d A_21^* = (G_2 - l d^{-1} G_1) (Σ G_1^* A_11^{-*} d^{-1} A_11^{-1} G_1 Σ) (G_2 - l d^{-1} G_1)^*     (66)

Substituting (66) in (62), the displacement ∇C_sc = C_sc - A_22 C_sc A_22^* has the form

∇C_sc = (G_2 - l d^{-1} G_1) (Σ + Σ G_1^* A_11^{-*} d^{-1} A_11^{-1} G_1 Σ) (G_2 - l d^{-1} G_1)^*     (67)

Using the Sherman-Morrison-Woodbury formula and (64), it can be shown that

Σ + Σ G_1^* A_11^{-*} d^{-1} A_11^{-1} G_1 Σ = (Σ^{-1} - G_1^* d^{-1} G_1)^{-1}     (68)

Hence, the update equations for the generator and the signature matrices are

G_sc = G_2 - l d^{-1} G_1 ,    Σ_sc^{-1} = Σ^{-1} - G_1^* d^{-1} G_1     (69)

At this point, all the elements of C that were computed a priori have to be updated to reflect their values in the Schur complement C_sc. Since we now have a displacement equation of the same form for the Schur complement C_sc, the factorization algorithm can proceed in the same manner to the next step and eventually to completion.
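The update formulas (69) can be verified numerically. The sketch below (numpy, real arithmetic, a 1 x 1 pivot, no pivoting) builds a real symmetric Cauchy-like matrix from arbitrary generators; the diagonal entries are chosen inside the unit interval so that the Stein equation has a unique solution:

```python
import numpy as np

n = 5
d = np.array([0.5, 0.2, -0.3, 0.7, -0.6])        # distinct, |d_i| < 1
G = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 2.0], [1.0, -1.0], [3.0, 1.0]])
Sigma = np.diag([1.0, -1.0])

# the (real, symmetric) Cauchy-like matrix solving C - D C D = G Sigma G^T
C = (G @ Sigma @ G.T) / (1.0 - np.outer(d, d))

# Schur complement w.r.t. the (1,1) pivot, and the generator updates (69)
c, l = C[0, 0], C[1:, 0]
Csc = C[1:, 1:] - np.outer(l, l) / c
Gsc = G[1:] - np.outer(l / c, G[0])
Sigma_sc = np.linalg.inv(np.linalg.inv(Sigma) - np.outer(G[0], G[0]) / c)

# the Schur complement satisfies a displacement equation of the same form (61)
lhs = Csc - np.outer(d[1:], d[1:]) * Csc
assert np.allclose(lhs, Gsc @ Sigma_sc @ Gsc.T)
```

Note how the signature matrix changes from step to step, unlike in the non-Hermitian algorithm where the generator update alone suffices.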

4 Factoring Hermitian Toeplitz matrices

In this section we present an algorithm to compute a symmetric factorization of a Hermitian Toeplitz matrix by converting it to a Hermitian Cauchy-like matrix, and we compare its complexity to that of the method for factoring non-Hermitian Cauchy-like matrices. In section 2.4.1, we presented an alternate algorithm to factor Hermitian Toeplitz matrices, based on the conversion of Hermitian Toeplitz matrices to real, symmetric Cauchy-like matrices; the factorization of the Cauchy-like matrices is then done in real arithmetic, which results in substantial savings in computation. We postpone the discussion of that algorithm to section 6, however, because it is similar to the algorithm that factors real, symmetric Toeplitz matrices. The comparison of the complexity of the two methods can be found in table 3 in section 6. Consider a Hermitian Toeplitz matrix T of size n. The displacement equation of type I for such a matrix and the corresponding Cauchy-like matrix were shown in section 2.2 to be

T - Z_1 T Z_1^T = H Σ H^*
C - Λ C Λ^* = Ĥ Σ Ĥ^*     (70)

Since the matrix T is Hermitian, the Cauchy-like matrix C is also Hermitian. The displacement matrices Λ and Λ^* are diagonal and have entries that are complex conjugates of each other. Also, from the definition of Λ, we see that the (j+1, j+1)-th entry of Λ and the (n-j+1, n-j+1)-th entry of Λ^* are identical for j = 1, ..., n-1:

Λ^*(n-j+1, n-j+1) = e^{-2 pi i (n-j)/n} = e^{-2 pi i} e^{2 pi i j/n} = e^{2 pi i j/n} = Λ(j+1, j+1)     (71)

In addition, the (1,1) elements of the two displacement matrices are identical. As indicated in section 3, this means that we have to compute the following elements of C a priori: C(1,1), and C(i,j) for j = 2, ..., n with i = n-j+2. This set of elements includes some diagonal and some non-diagonal elements. If n is even, then for i = j = 1 and i = j = n/2 + 1 the elements C(i,j) are diagonal and the rest are non-diagonal. If n is odd, then the only diagonal element is C(1,1).
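The set of entries that must be precomputed corresponds exactly to the collisions between the entries of Λ and those of its conjugate. A quick numerical check of (71) and of the resulting index set (a sketch with numpy):

```python
import numpy as np

n = 8
lam = np.exp(2j * np.pi * np.arange(n) / n)
# collisions between diag(Lambda) and diag(conj(Lambda)); these are the
# (i, j) positions of C that cannot be obtained from the generators
hit = np.isclose(lam[:, None], np.conj(lam)[None, :])
ii, jj = np.nonzero(hit)
# 1-indexed: C(1,1) plus C(i,j) with i = n - j + 2 for j = 2,...,n, cf. (71)
expected = {(1, 1)} | {(n - j + 2, j) for j in range(2, n + 1)}
assert {(i + 1, j + 1) for i, j in zip(ii, jj)} == expected
# for even n exactly two of these are diagonal: (1,1) and (n/2+1, n/2+1)
assert [(i, j) for (i, j) in sorted(expected) if i == j] == [(1, 1), (n // 2 + 1, n // 2 + 1)]
```

For odd n the same computation leaves (1,1) as the only diagonal collision, in agreement with the text.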
We first present a fast method to compute the non-diagonal elements of C and later indicate how the diagonal elements may be computed. To compute the non-diagonal elements of C that are needed a priori, we set up a non-Hermitian form of the displacement equation for T and the corresponding Cauchy-like matrix C as:

Z₁ T − T Z₁ = G₁ G₂ᴴ,   Λ C − C Λ = (F G₁)(G₂ᴴ Fᴴ)   (72)

Since Λ(i,i) ≠ Λ(j,j) for i ≠ j, any non-diagonal element of C can be easily computed using (46). To show how the diagonal elements of C are computed, we make use of the following theorem.

Theorem 1 For any matrix A of size n, if F is the DFT matrix of size n and C̃ is the circulant matrix that minimizes the Frobenius norm of (A − C̃), then the diagonal of F A Fᴴ is equal to the eigenvalues of the minimizer C̃.

Since the Cauchy-like matrix C is defined as F T Fᴴ, the diagonal elements can be obtained from the eigenvalues of the circulant minimizer C̃ of the Frobenius norm of (T − C̃). Further, it can be easily proved that the eigenvalues of a circulant matrix are obtained from the DFT of the first column of the matrix: √n F C̃(:,1). For Toeplitz matrices, the circulant minimizer C̃ can be computed in O(n) flops as demonstrated in [7]. Since we only require a few diagonal elements of C (one if n is odd and two if n is even), we can use the matrix-vector product form of the DFT (instead of an FFT) to compute them. This means that the required diagonal entries of C can be computed in O(n) flops. Having computed the elements of C that are needed a priori, we can now proceed with a symmetric factorization of the matrix with symmetric pivoting. The Bunch-Kaufman algorithm can be used as a

symmetric pivoting strategy. We outline the first step of such an algorithm. The Bunch-Kaufman pivoting strategy requires the computation of either one or two columns of C. The computation of columns of C was described in the previous section. Let P₁ be the permutation that permutes the 1×1 or 2×2 pivot block to the proper place. The displacement equation is then written as:

(P₁ Λ P₁ᵀ)(P₁ C P₁ᵀ) − (P₁ C P₁ᵀ)(P₁ Λ̄ P₁ᵀ) = P₁ Ĥ Σ Ĥᴴ P₁ᵀ   (73)

The recurrence relations between the generators of the Cauchy-like matrix and its Schur complement w.r.t. the pivot blocks were given in section 3 by (69). The obvious advantage in the Hermitian case is that only half the computation needs to be performed. However, we have to compute some elements of C a priori because Λ and Λ̄ have common eigenvalues. These elements will have to be updated at each step of the factorization to obtain their values in the Schur complement of C. It can, therefore, be seen that the reduction in computation due to the Hermitian property of T is to some extent offset by the additional work one has to do in the beginning to compute some elements of C a priori and at every step in updating these elements.

We now determine the complexity of the two algorithms to factor non-Hermitian and Hermitian Toeplitz matrices. In all the calculations we assume that a complex multiplication requires 6 flops, a complex division requires 9 flops (assuming that a real division requires 1 flop) and a complex addition requires 2 flops. We ignore the computation required to set up the displacement equation for Toeplitz matrices, since this can be done in O(n) time. Let us first consider the non-Hermitian case. Transforming the displacement equation of a Toeplitz matrix (11) to that of a Cauchy-like matrix (13) requires 2 FFTs of length n (the displacement rank is α = 2). The cost of computing these FFTs is 2K₁ n log n flops. The value of K₁ is small if n is a highly composite number.
If n is not so composite (or prime), then the constant K₁ can be quite large. Computing a row or column of the factorization using (46) at the k-th step of the factorization requires 6α(n−k) + 2(α−1)(n−k) + 2(n−k) + 9(n−k) = 8α(n−k) + 9(n−k) flops. Since a column and a row have to be computed at each step, the total work at each step to obtain a row and a column of the matrix is 16α(n−k) + 18(n−k) flops. Having computed the required row and column of the factorization, we must now update the generators of the k-th step to obtain the generators of the (k+1)-th step. The update of each generator requires 8α(n−k) flops, for a total of 16α(n−k) flops. Thus the total number of flops required to factor the matrix is

Flops = 2K₁ n log n + Σ_{k=1}^{n−1} (32α + 18)(n−k)
      = 2K₁ n log n + (16α + 9)(n² − n) ≈ (16α + 9)n²   (74)

For Toeplitz matrices with α = 2, the algorithm requires approximately 41n² flops.

Now consider the Hermitian case, where one would use the Bunch-Kaufman pivoting strategy. At each step of the factorization the Bunch-Kaufman algorithm checks either one or two rows of the matrix and selects either a 1×1 or a 2×2 pivot. The worst case scenario would be that at each step two rows are checked but a 1×1 pivot is used. The best case, however, would be if a 2×2 pivot were used every time 2 rows were checked. We can, therefore, only estimate the complexity of the algorithm in the Hermitian case. Since the matrix is Hermitian, the number of FFTs needed to transform the generators of the Toeplitz-like matrix to those of a Cauchy-like matrix is exactly half of that in the non-Hermitian case. The complexity for this step is K₁ n log n. However, extra work is necessary to compute some elements of C a priori. Of these elements, n−1 (n−2) elements are off-diagonal if n is odd (even). The off-diagonal elements are computed by solving the corresponding Lyapunov equations in (72).
The complexity to do this is 2K₁ n log n to compute F G₁ and G₂ᴴ Fᴴ, plus the cost of solving the Lyapunov equation for each element. In addition, there are 1 (2) elements of C that are on the diagonal if n is odd (even). These elements are computed from the DFT of the first column of the circulant minimizer discussed earlier. These elements require 2n (10n) flops if n is odd (even). Hence, the total complexity of computing the elements of C needed a priori is 2K₁ n log n + 8n + O(n). We now compute the complexity of the factorization algorithm. In the worst case, at every step k in the factorization, two rows are computed from the generators and tested for a 1×1 pivot. The work to compute

2 rows from the generators is (16α + 18)(n−k) flops. The work to update the generators of the k-th step to those of the (k+1)-th step is 8α(n−k). In addition to this, some extra work is required to update the elements along the diagonal of C that had to be computed a priori. This adds an extra 14(n−k) flops at each step. The worst case complexity is, therefore,

Flops = 3K₁ n log n + 8n + O(n) + Σ_{k=1}^{n−1} (24α + 18)(n−k) + Σ_{k=1}^{n−1} 14(n−k)
      = 3K₁ n log n + 8n + O(n) + (12α + 16)(n² − n) ≈ (12α + 16)n²   (75)

For α = 2, the total complexity is 40n². This shows that, in the worst case, the complexity of the Hermitian algorithm is the same as that for the non-Hermitian case. However, in the best case scenario, only one row is checked at each step and a 1×1 pivot block is used. The complexity in this situation would be:

Flops = 3K₁ n log n + 8n + O(n) + Σ_{k=1}^{n−1} (16α + 9)(n−k) + Σ_{k=1}^{n−1} 14(n−k)
      = 3K₁ n log n + 8n + O(n) + (8α + 11.5)(n² − n) ≈ (8α + 11.5)n²   (76)

For α = 2, the complexity is 27.5n². This indicates that the complexity of the factorization algorithm using the Bunch-Kaufman pivoting scheme can vary from 27.5n² to 40n² depending on the pivot sequence obtained. Preserving the Hermitian structure of the factorization reduces the complexity of the factorization algorithm to some extent. Further reduction in complexity can be obtained if the Hermitian Toeplitz matrix is converted to a real, symmetric Cauchy-like matrix. This is discussed in section 6. If the Toeplitz matrix is real, one would like to preserve this property as well, because computation in complex arithmetic is very expensive. In the following sections we present algorithms to factor real Toeplitz matrices that are either symmetric or unsymmetric. We also compare them in complexity to the complex arithmetic cases.

5 Real unsymmetric Toeplitz matrices

In this section we present an algorithm to factor real unsymmetric Toeplitz matrices by converting them to real Cauchy-like matrices.
We then compare this method to the algorithms discussed in the previous sections and show how maintaining the realness property leads to significant savings in computation. Consider a real unsymmetric Toeplitz matrix T of size n. Following the notation of section 2.3, we write the displacement equation for T as:

Z₀₀ T − T Z₁₁ = G Hᵀ   (77)

The displacement rank of this equation is 4. If we were to use Z₁ and Z₋₁ as the displacement matrices, the displacement rank would have been 2. However, since Z₁ and Z₋₁ are diagonalized by the DFT matrix, a real Toeplitz matrix would be converted to a complex Cauchy-like matrix and all subsequent factorization would have to be done in complex arithmetic. If, instead, we use (Z₀₀, Z₁₁) as the displacement matrix pair, then real trigonometric transforms can be used to convert a real Toeplitz matrix into a real Cauchy-like matrix, and the subsequent factorization is done in real arithmetic. We show that the reduction in complexity due to real arithmetic more than offsets the increased displacement rank. From (19) and (20), we can write:

(S₀₀ Z₀₀ S₀₀)(S₀₀ T S₁₁) − (S₀₀ T S₁₁)(S₁₁ᵀ Z₁₁ S₁₁) = (S₀₀ G)(Hᵀ S₁₁),   i.e.   D_l C − C D_r = Ĝ Ĥᵀ   (78)

where D_l is a diagonal matrix containing the eigenvalues of Z₀₀ as defined in (19), D_r is a diagonal matrix containing the eigenvalues of Z₁₁ as defined in (20), Ĝ = S₀₀ G and Ĥ = S₁₁ᵀ H. For all n, the eigenvalues of Z₀₀ and Z₁₁ are distinct and hence one would not have to compute any elements of the real Cauchy-like

matrix C a priori. One could easily derive a real arithmetic version of the algorithm described in section 3. If the permutations at every step of the algorithm are accumulated in P and the upper and lower triangular factors are denoted by U and L, then we obtain a factorization of T as:

T = S₀₀ Pᵀ L U S₁₁ᵀ

We now compute the complexity of the factorization algorithm for real Toeplitz matrices. Transforming the Toeplitz matrix to a real Cauchy-like matrix involves applying the Sine-I and Cosine-II transforms to the generators. Let the displacement rank be α (= 4 for real Toeplitz matrices). The complexity of the transformation is 2α K₂ n log n. If n is a power of 2, then K₂ = 2.5. Computing a row or column of the factorization at the k-th step using a real arithmetic version of (46) requires (2α − 1)(n−k) + 2(n−k) flops. Since both a row and a column of the matrix are to be computed at every step, the total flop count for this operation is (4α − 2)(n−k) + 4(n−k). Having computed the row and column of the factorization, we must update the generators of the k-th step to those of the (k+1)-th step using a real arithmetic version of (53). The complexity of this step is 4α(n−k). The total number of flops for the entire factorization algorithm is then:

Flops = 2α K₂ n log n + Σ_{k=1}^{n−1} (8α + 2)(n−k)   (79)

The asymptotic complexity is, therefore, 4αn² + n². For real Toeplitz matrices, since α = 4, the complexity is 17n². If the complex arithmetic version were used, the complexity would have been 41n² flops. It can therefore be seen that staying in the real domain leads to significant savings in computation.

6 Real, Symmetric Toeplitz matrices

In this section we present an algorithm to factor real, symmetric Toeplitz matrices by converting them to real, symmetric Cauchy-like matrices. In addition, we also present an algorithm that converts a Hermitian Toeplitz matrix to a real, symmetric matrix and proceeds to factor it in real arithmetic.
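The property that real trigonometric transforms diagonalize the displacement matrices can be checked directly. The sketch below assumes Z₀₀ is the symmetric tridiagonal matrix with ones on the off-diagonals (a common convention in this literature, not spelled out in this excerpt); the Sine-I transform is then symmetric and orthogonal and diagonalizes it with eigenvalues 2 cos(kπ/(n+1)), which are distinct.

```python
import math

n = 5
s = math.sqrt(2.0 / (n + 1))
# Sine-I (DST-I) transform matrix: symmetric and orthogonal
S00 = [[s * math.sin((j + 1) * (k + 1) * math.pi / (n + 1)) for k in range(n)]
       for j in range(n)]

# Z00, assumed here to be the 0/1 symmetric tridiagonal matrix
Z00 = [[1.0 if abs(j - k) == 1 else 0.0 for k in range(n)] for j in range(n)]

# D_l = S00 Z00 S00 should be diagonal with entries 2 cos((k+1) pi / (n+1))
Dl = [[sum(S00[j][a] * Z00[a][b] * S00[b][k] for a in range(n) for b in range(n))
       for k in range(n)] for j in range(n)]
off = max(abs(Dl[j][k]) for j in range(n) for k in range(n) if j != k)
diag_err = max(abs(Dl[k][k] - 2.0 * math.cos((k + 1) * math.pi / (n + 1)))
               for k in range(n))
```

Distinctness of these eigenvalues is exactly what guarantees that no element of the real Cauchy-like matrix has to be computed a priori in the unsymmetric case.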
In section 2.4, we showed that a significant reduction in complexity may be obtained if we exploit both realness and symmetry in the Toeplitz matrix simultaneously. It was shown that for a real symmetric Toeplitz matrix T of size n, if the symmetric form of the displacement equation is used with a displacement matrix Z, then the corresponding Cauchy-like matrix C can be decoupled into two Cauchy-like matrices of half the size. Further, it was shown that the two smaller Cauchy-like matrices have a displacement rank of 2 and can be factored independently of each other. Since we use the symmetric form of the displacement equation (27), the diagonal elements of C cannot be obtained by solving the corresponding Lyapunov equation; one has to compute these elements a priori. In the following paragraphs we show how the diagonal elements of C may be computed. We demonstrate this for the Z₀₀ case; the construction for Z₁₁ is similar. The diagonal elements of C₀₀ = S₀₀ T S₀₀ can be computed using the following theorem.

Theorem 2 Let S be the vector space containing all n × n matrices that can be diagonalized by the Sine-I transform. Then, for any matrix A of size n, if we obtain a matrix S in this space that minimizes the Frobenius norm of (A − S), then the diagonal of S₀₀ A S₀₀ (S₀₀ is the Sine-I transform of size n) is identical to the eigenvalues of S.

In addition, it was proved independently in [1], [3] and [13] that a matrix belongs to the vector space S if and only if the matrix can be expressed as a special sum of a Toeplitz and a Hankel matrix. This is outlined in the following theorem.

Theorem 3 Any matrix S in S can be written as S = X − Y, where X is a symmetric Toeplitz matrix with first column x = [x₁ x₂ … xₙ]ᵀ, and Y is a Hankel matrix with first column [0 0 xₙ … x₃]ᵀ and last column [x₃ … xₙ 0 0]ᵀ.
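The decoupled half-size systems are symmetric and are factored later in this section with Bunch-Kaufman pivoting. As a simplified illustration of symmetric pivoting on a dense symmetric matrix, the sketch below uses only 1×1 pivots chosen by largest remaining diagonal entry (full Bunch-Kaufman additionally admits 2×2 pivots, which make the method robust for indefinite matrices; the example here is positive definite so 1×1 pivots suffice). The function name and the data are ours.

```python
def ldlt_sym_pivot(A):
    """Symmetric factorization P A P^T = L D L^T.

    Simplified sketch: only 1x1 pivots, choosing the largest remaining
    diagonal entry.  Bunch-Kaufman additionally allows 2x2 pivot blocks
    selected by magnitude tests on one or two candidate rows."""
    n = len(A)
    M = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    perm = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][i]))
        if p != k:
            M[k], M[p] = M[p], M[k]              # swap rows k and p
            for row in M:
                row[k], row[p] = row[p], row[k]  # swap columns k and p
            for j in range(k):                   # swap already-computed L rows
                L[k][j], L[p][j] = L[p][j], L[k][j]
            perm[k], perm[p] = perm[p], perm[k]
        d = M[k][k]
        D[k] = d
        for i in range(k + 1, n):
            L[i][k] = M[i][k] / d
        for i in range(k + 1, n):                # Schur complement update
            for j in range(k + 1, n):
                M[i][j] -= L[i][k] * M[k][j]
    return perm, L, D
```

Because the permutation is applied symmetrically to rows and columns, the Cauchy-like displacement structure is preserved at every step, which is the point of the transformation approach.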

In [5], R. Chan et al. show how the minimizer S may be constructed for any matrix A in O(n²) flops. If A is Toeplitz, then they show that this computation requires only O(n log n) flops. The algorithm proceeds by setting the partial derivatives of ‖A − S‖ w.r.t. x₁, x₂, …, xₙ equal to zero. We summarize the lemmas and algorithms that are important to this discussion. An important lemma due to Boman and Koltracht [3] gives a basis for the vector space S.

Lemma 1 Let Q_i, i = 1, …, n, be n × n matrices with the (j,k) entry given by

Q_i(j,k) = 1 if |j − k| = i − 1,  −1 if j + k = i − 1,  −1 if j + k = 2n − i + 3,  0 otherwise.

Then {Q_i}_{i=1}^n is a basis for S.

Let us define a vector r = [r₁ r₂ … rₙ], where

r_i = 1ₙᵀ (Q_i ∘ A) 1ₙ   (80)

Here 1ₙ is a vector of ones of length n and ∘ denotes the element-wise product. The following corollary due to Chan et al. [5] gives an explicit formula for the entries of the first column of the minimizer S for any matrix A.

Corollary 1 Let A be a symmetric matrix of size n and let S be the minimizer of ‖A − S‖_F over all matrices in the vector space S. Let z be the first column of S and r_i = 1ₙᵀ(Q_i ∘ A)1ₙ. If s_o and s_e are defined to be the sums of the odd and even entries of the vector r, then we have:

z₁ = (1/(2(n+1))) (2r₁ − r₃)
z_i = (1/(2(n+1))) (r_i − r_{i+2}),   i = 2, …, n − 2

and

z_{n−1} = (1/(2(n+1))) (s_o + r_{n−1}),   z_n = (1/(2(n+1))) (2s_e + r_n)   if n is even;

z_{n−1} = (1/(2(n+1))) (s_e + r_{n−1}),   z_n = (1/(2(n+1))) (2s_o + r_n)   if n is odd.

The eigenvalues Λ of the minimizer S can now be calculated from its first column:

S = S₀₀ Λ S₀₀  ⟹  S₀₀ S e₁ = Λ S₀₀ e₁  ⟹  λ = D⁻¹ S₀₀ S e₁,   where D = diag(S₀₀ e₁)   (81)

For an arbitrary matrix A, it is clear that the vector r can be computed in O(n²) flops and the diagonal of S₀₀ A S₀₀ in O(n² + n log n) flops. If, however, A is Toeplitz, then r can be computed in O(n) flops and the diagonal of the Cauchy-like matrix C₀₀ = S₀₀ A S₀₀ can be computed in O(n log n) flops.
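Relation (81) can be checked by constructing a member of the vector space S directly from chosen eigenvalues and recovering them from its first column. The sketch below (our own illustration, with invented eigenvalues) also implicitly uses S₀₀² = I, i.e. that the Sine-I transform is symmetric and orthogonal; the division is safe because every entry of S₀₀ e₁ is sin(kπ/(n+1)) up to a positive factor, hence nonzero.

```python
import math

n = 5
s = math.sqrt(2.0 / (n + 1))
S00 = [[s * math.sin((j + 1) * (k + 1) * math.pi / (n + 1)) for k in range(n)]
       for j in range(n)]

lam = [1.0, -2.0, 0.5, 3.0, -1.5]   # chosen eigenvalues (arbitrary)
# a member of the vector space: S = S00 diag(lam) S00
S = [[sum(S00[i][k] * lam[k] * S00[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]

# recover the eigenvalues from the first column of S, as in (81)
Se1 = [S[i][0] for i in range(n)]
S00_Se1 = [sum(S00[k][i] * Se1[i] for i in range(n)) for k in range(n)]
d = [S00[k][0] for k in range(n)]   # diag(S00 e1); all entries nonzero
recovered = [S00_Se1[k] / d[k] for k in range(n)]
err = max(abs(a - b) for a, b in zip(recovered, lam))
```

This is exactly the mechanism that lets the diagonal of C₀₀ be read off from the first column of the minimizer.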
In [5], the authors present the following O(n) algorithm to obtain the r vector given a symmetric Toeplitz matrix T of size n whose first column is [t₁ t₂ … tₙ]ᵀ.

Algorithm to compute r (for the Sine-I transform):

r₁ = n t₁
r₂ = 2(n − 1) t₂
w₁ = −t₁
v₁ = −2 t₂
for k = 2 : ⌊n/2⌋
    r_{2k−1} = 2(n − 2k + 2) t_{2k−1} + 2 w_{k−1}
    w_k = w_{k−1} − 2 t_{2k−1}
    r_{2k} = 2(n − 2k + 1) t_{2k} + 2 v_{k−1}
    v_k = v_{k−1} − 2 t_{2k}
end
if n is odd
    r_n = 2 t_n + 2 w_{(n−1)/2}
end

From the above discussion, it can be seen that the total complexity of computing the diagonal elements of the Cauchy-like matrix C₀₀ = S₀₀ T S₀₀ is O(n log n). The next step in the factorization of the Cauchy-like matrix C₀₀ is the application of the odd-even sort permutation matrix P, as shown in (28), in order to expose the sparsity of C₀₀ and separate the large system of equations into two independent systems of size ⌈n/2⌉ and ⌊n/2⌋ respectively. Each sub-system can be solved using the real arithmetic variant of the algorithm to factor non-Hermitian Cauchy-like matrices discussed in section 3. Since the two Cauchy-like matrices are symmetric, we use the Bunch-Kaufman algorithm to search for a pivot.

We now estimate the complexity of the factorization algorithm. Consider a Cauchy-like matrix of size m with the displacement equation D_l C − C D_l = G Σ Gᵀ. At the k-th step of the factorization algorithm, computing a row or column of the matrix requires (2α + 1)(m − k) flops. The worst case scenario is one in which, at each step, 2 rows are computed and checked but only a 1×1 pivot block is used. The complexity to update the generators for the next step is 2α(m − k). In addition, the elements of the Cauchy-like matrix that were computed a priori have to be updated; the complexity to do this at the k-th step is 3(m − k). The total complexity in the worst case scenario is, therefore,

Flops = Σ_{k=1}^{m−1} (6α + 5)(m − k) ≈ (3α + 2.5)m²   (82)

In the best case scenario, at each step, if 2 rows are computed and checked then a 2×2 pivot is used; otherwise only 1 row is checked and a 1×1 pivot is used. The best case complexity is

Flops = Σ_{k=1}^{m−1} (4α + 4)(m −
k) ≈ (2α + 2)m²   (83)

As shown in section 2.4, there are two independent systems, each of size approximately n/2 and having a displacement rank of α = 2. The total complexity, in the worst case, for factoring a real symmetric Toeplitz matrix of size n by converting it to Cauchy-like matrices is

Flops = K₂(α + 1) n log n + O(n) + 4.25n²   (84)

and in the best case, the complexity is

Flops = K₂(α + 1) n log n + O(n) + 3n²   (85)

A similar algorithm can be used if we choose the displacement matrix Z₁₁ instead of Z₀₀. The diagonal elements of C₁₁ can be computed in a similar manner [6].
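The O(n) recurrence for r given above can be cross-checked against the defining formula r_i = 1ₙᵀ(Q_i ∘ T)1ₙ. The sketch below assumes 1-based index conventions in Lemma 1 and takes the third condition as j + k = 2n − i + 3, the reading under which the two computations agree; both function names are ours.

```python
def fast_r(t):
    """O(n) computation of the r vector via the recurrence above,
    for a symmetric Toeplitz matrix with first column t (t[0] = t_1)."""
    n = len(t)
    r = [0.0] * n
    r[0] = n * t[0]
    if n > 1:
        r[1] = 2 * (n - 1) * t[1]
    w, v = -t[0], -2 * t[1]
    for k in range(2, n // 2 + 1):
        r[2 * k - 2] = 2 * (n - 2 * k + 2) * t[2 * k - 2] + 2 * w
        w -= 2 * t[2 * k - 2]
        r[2 * k - 1] = 2 * (n - 2 * k + 1) * t[2 * k - 1] + 2 * v
        v -= 2 * t[2 * k - 1]
    if n % 2 == 1 and n > 1:
        r[n - 1] = 2 * t[n - 1] + 2 * w
    return r

def direct_r(t):
    """O(n^2) check via r_i = 1^T (Q_i o T) 1, using the Q_i of Lemma 1
    with 1-based indices and the third condition read as j + k = 2n - i + 3
    (an assumption: this is the reading consistent with the recurrence)."""
    n = len(t)
    T = [[t[abs(i - j)] for j in range(n)] for i in range(n)]
    r = []
    for i in range(1, n + 1):
        total = 0.0
        for j in range(1, n + 1):
            for k in range(1, n + 1):
                if abs(j - k) == i - 1:
                    total += T[j - 1][k - 1]
                elif j + k == i - 1 or j + k == 2 * n - i + 3:
                    total -= T[j - 1][k - 1]
        r.append(total)
    return r
```

The agreement of the two routines for both odd and even n is a useful sanity check when implementing the diagonal-element computation.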


More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Gaussian Elimination and Back Substitution

Gaussian Elimination and Back Substitution Jim Lambers MAT 610 Summer Session 2009-10 Lecture 4 Notes These notes correspond to Sections 31 and 32 in the text Gaussian Elimination and Back Substitution The basic idea behind methods for solving

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

I = i 0,

I = i 0, Special Types of Matrices Certain matrices, such as the identity matrix 0 0 0 0 0 0 I = 0 0 0, 0 0 0 have a special shape, which endows the matrix with helpful properties The identity matrix is an example

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Some notes on Linear Algebra. Mark Schmidt September 10, 2009

Some notes on Linear Algebra. Mark Schmidt September 10, 2009 Some notes on Linear Algebra Mark Schmidt September 10, 2009 References Linear Algebra and Its Applications. Strang, 1988. Practical Optimization. Gill, Murray, Wright, 1982. Matrix Computations. Golub

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

An exploration of matrix equilibration

An exploration of matrix equilibration An exploration of matrix equilibration Paul Liu Abstract We review three algorithms that scale the innity-norm of each row and column in a matrix to. The rst algorithm applies to unsymmetric matrices,

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche Lecture Notes for Inf-Mat 3350/4350, 2007 Tom Lyche August 5, 2007 2 Contents Preface vii I A Review of Linear Algebra 1 1 Introduction 3 1.1 Notation............................... 3 2 Vectors 5 2.1 Vector

More information

VII Selected Topics. 28 Matrix Operations

VII Selected Topics. 28 Matrix Operations VII Selected Topics Matrix Operations Linear Programming Number Theoretic Algorithms Polynomials and the FFT Approximation Algorithms 28 Matrix Operations We focus on how to multiply matrices and solve

More information

Determinants. Recall that the 2 2 matrix a b c d. is invertible if

Determinants. Recall that the 2 2 matrix a b c d. is invertible if Determinants Recall that the 2 2 matrix a b c d is invertible if and only if the quantity ad bc is nonzero. Since this quantity helps to determine the invertibility of the matrix, we call it the determinant.

More information

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220)

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220) Notes for 2017-01-30 Most of mathematics is best learned by doing. Linear algebra is no exception. You have had a previous class in which you learned the basics of linear algebra, and you will have plenty

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 3 Review of Matrix Algebra Vectors and matrices are essential for modern analysis of systems of equations algebrai, differential, functional, etc In this

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

Matrices and systems of linear equations

Matrices and systems of linear equations Matrices and systems of linear equations Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra by Goode and Annin Samy T.

More information

Discrete Applied Mathematics

Discrete Applied Mathematics Discrete Applied Mathematics 194 (015) 37 59 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: wwwelseviercom/locate/dam Loopy, Hankel, and combinatorially skew-hankel

More information

Hands-on Matrix Algebra Using R

Hands-on Matrix Algebra Using R Preface vii 1. R Preliminaries 1 1.1 Matrix Defined, Deeper Understanding Using Software.. 1 1.2 Introduction, Why R?.................... 2 1.3 Obtaining R.......................... 4 1.4 Reference Manuals

More information

Parallel Algorithms for the Solution of Toeplitz Systems of Linear Equations

Parallel Algorithms for the Solution of Toeplitz Systems of Linear Equations Parallel Algorithms for the Solution of Toeplitz Systems of Linear Equations Pedro Alonso 1, José M. Badía 2, and Antonio M. Vidal 1 1 Departamento de Sistemas Informáticos y Computación, Universidad Politécnica

More information

MODULE 7. where A is an m n real (or complex) matrix. 2) Let K(t, s) be a function of two variables which is continuous on the square [0, 1] [0, 1].

MODULE 7. where A is an m n real (or complex) matrix. 2) Let K(t, s) be a function of two variables which is continuous on the square [0, 1] [0, 1]. Topics: Linear operators MODULE 7 We are going to discuss functions = mappings = transformations = operators from one vector space V 1 into another vector space V 2. However, we shall restrict our sights

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Our rst result is the direct analogue of the main theorem of [5], and can be roughly stated as follows: Theorem. Let R be a complex reection of order

Our rst result is the direct analogue of the main theorem of [5], and can be roughly stated as follows: Theorem. Let R be a complex reection of order Unfaithful complex hyperbolic triangle groups II: Higher order reections John R. Parker Department of Mathematical Sciences, University of Durham, South Road, Durham DH LE, England. e-mail: j.r.parker@durham.ac.uk

More information

Matrix Theory. A.Holst, V.Ufnarovski

Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

3 QR factorization revisited

3 QR factorization revisited LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 2, 2000 30 3 QR factorization revisited Now we can explain why A = QR factorization is much better when using it to solve Ax = b than the A = LU factorization

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

SOLUTION OF SPECIALIZED SYLVESTER EQUATION. Ondra Kamenik. Given the following matrix equation AX + BX C = D,

SOLUTION OF SPECIALIZED SYLVESTER EQUATION. Ondra Kamenik. Given the following matrix equation AX + BX C = D, SOLUTION OF SPECIALIZED SYLVESTER EQUATION Ondra Kamenik Given the following matrix equation i AX + BX C D, where A is regular n n matrix, X is n m i matrix of unknowns, B is singular n n matrix, C is

More information

MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM

MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM OLGA SLYUSAREVA AND MICHAEL TSATSOMEROS Abstract. The principal pivot transform (PPT) is a transformation of a matrix A tantamount to exchanging

More information

All of my class notes can be found at

All of my class notes can be found at My name is Leon Hostetler I am currently a student at Florida State University majoring in physics as well as applied and computational mathematics Feel free to download, print, and use these class notes

More information

Review. Example 1. Elementary matrices in action: (a) a b c. d e f = g h i. d e f = a b c. a b c. (b) d e f. d e f.

Review. Example 1. Elementary matrices in action: (a) a b c. d e f = g h i. d e f = a b c. a b c. (b) d e f. d e f. Review Example. Elementary matrices in action: (a) 0 0 0 0 a b c d e f = g h i d e f 0 0 g h i a b c (b) 0 0 0 0 a b c d e f = a b c d e f 0 0 7 g h i 7g 7h 7i (c) 0 0 0 0 a b c a b c d e f = d e f 0 g

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 12 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted for noncommercial,

More information

Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix.

Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix. arxiv:math/0506382v1 [math.na] 19 Jun 2005 Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix. Adviser: Charles R. Johnson Department of Mathematics College

More information

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic

More information

Katholieke Universiteit Leuven Department of Computer Science

Katholieke Universiteit Leuven Department of Computer Science Structures preserved by matrix inversion Steven Delvaux Marc Van Barel Report TW 414, December 24 Katholieke Universiteit Leuven Department of Computer Science Celestijnenlaan 2A B-31 Heverlee (Belgium)

More information

Solving Large Nonlinear Sparse Systems

Solving Large Nonlinear Sparse Systems Solving Large Nonlinear Sparse Systems Fred W. Wubs and Jonas Thies Computational Mechanics & Numerical Mathematics University of Groningen, the Netherlands f.w.wubs@rug.nl Centre for Interdisciplinary

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

Ranks of Hadamard Matrices and Equivalence of Sylvester Hadamard and Pseudo-Noise Matrices

Ranks of Hadamard Matrices and Equivalence of Sylvester Hadamard and Pseudo-Noise Matrices Operator Theory: Advances and Applications, Vol 1, 1 13 c 27 Birkhäuser Verlag Basel/Switzerland Ranks of Hadamard Matrices and Equivalence of Sylvester Hadamard and Pseudo-Noise Matrices Tom Bella, Vadim

More information

Iterative Methods. Splitting Methods

Iterative Methods. Splitting Methods Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le Direct Methods for Solving Linear Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview General Linear Systems Gaussian Elimination Triangular Systems The LU Factorization

More information

Parallel Numerical Algorithms

Parallel Numerical Algorithms Parallel Numerical Algorithms Chapter 5 Eigenvalue Problems Section 5.1 Michael T. Heath and Edgar Solomonik Department of Computer Science University of Illinois at Urbana-Champaign CS 554 / CSE 512 Michael

More information

Chapter 8 Structured Low Rank Approximation

Chapter 8 Structured Low Rank Approximation Chapter 8 Structured Low Rank Approximation Overview Low Rank Toeplitz Approximation Low Rank Circulant Approximation Low Rank Covariance Approximation Eculidean Distance Matrix Approximation Approximate

More information

Generalizations of Sylvester s determinantal identity

Generalizations of Sylvester s determinantal identity Generalizations of Sylvester s determinantal identity. Redivo Zaglia,.R. Russo Università degli Studi di Padova Dipartimento di atematica Pura ed Applicata Due Giorni di Algebra Lineare Numerica 6-7 arzo

More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information