Relative Perturbation Bound for Invariant Subspaces of Graded Indefinite Hermitian Matrices

Ninoslav Truhar¹
University Josip Juraj Strossmayer, Faculty of Civil Engineering, Drinska 16a, Osijek, Croatia, truhar@gfos.hr

Ivan Slapnicar¹
University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, R. Boskovica b.b, 21000 Split, Croatia, ivan.slapnicar@fesb.hr

Abstract

We give a bound for the perturbations of invariant subspaces of a graded indefinite Hermitian matrix H = D*AD which is perturbed into H + δH = D*(A + δA)D. Such relative perturbations include the important case where H is given with an element-wise relative error. Application of our bounds requires only the knowledge of the size of the relative perturbation ‖δA‖, and not the perturbation δA itself. This typically occurs when data are given with relative uncertainties, when the matrix is being stored into computer memory, and when analyzing some numerical algorithms. Subspace perturbations are measured in terms of perturbations of angles between subspaces, and our bound is therefore a relative variant of the well-known Davis–Kahan sin Θ theorem. Our bounds generalize some of the recent relative perturbation results.

¹ The authors acknowledge the grant from the Croatian Ministry of Science and Technology. Part of this work was done while the second author was visiting the Pennsylvania State University, Department of Computer Science and Engineering, University Park, PA, USA.

Preprint submitted to Elsevier Science, 14 April 1999

1 Introduction and preliminaries

We are considering the Hermitian eigenvalue problem

    Hu_i = λ_i u_i,

or

    H = UΛU* = Σ_{i=1}^{n} λ_i u_i u_i*,

where H is a non-singular Hermitian matrix of order n, Λ = diag(λ_i) is a diagonal matrix whose diagonal elements are the eigenvalues of H, and U = [u_1 u_2 ··· u_n] is a unitary matrix whose i-th column is an eigenvector which corresponds to λ_i.

A subspace X is an invariant subspace of a general matrix H if HX ⊆ X. If H is Hermitian and the set of eigenvalues {λ_{i_1}, λ_{i_2}, ..., λ_{i_k}} does not intersect the rest of the spectrum of H, then the corresponding k-dimensional invariant subspace is spanned by the eigenvectors u_{i_1}, u_{i_2}, ..., u_{i_k}. Throughout the paper ‖·‖ and ‖·‖_F will denote the 2-norm and the Frobenius norm, respectively.

Our aim is to give a bound for perturbations of invariant subspaces for the case when H is a graded Hermitian matrix, that is,

    H = D*AD,    (1)

where D is some non-singular grading matrix, under a Hermitian relative perturbation of the form

    H + δH ≡ H̃ = D*(A + δA)D.

Our bound is a relative variant of the well-known sin Θ theorems by Davis and Kahan [2], [24, Section V.3.3]. The development of relative perturbation results for eigenvalue and singular value problems has been a very active area of research in the past years [3,1,4,29,21,6,5,15,16,7,13] (see also the review article [12]).

We shall first describe the relative perturbation and state the existing eigenvalue perturbation results. Let δH be a Hermitian relative perturbation which satisfies

    |x*δHx| ≤ η x*|H|x  for all x,  η < 1,    (2)

where |H| = √(H²) = U|Λ|U* is the spectral absolute value of H (that is, |H| is the positive definite polar factor of H). Under such perturbations the relative change in the eigenvalues is bounded by [29]

    1 − η ≤ λ̃_j / λ_j ≤ 1 + η.    (3)
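
As a concrete illustration (ours, not from the paper), the following numpy sketch forms the spectral absolute value |H| = U|Λ|U* and estimates, by random sampling, the smallest η for which a given element-wise perturbation satisfies (2):

    import numpy as np

    def spectral_abs(H):
        # |H| = U |Lambda| U^*, the positive definite polar factor of H
        lam, U = np.linalg.eigh(H)
        return (U * np.abs(lam)) @ U.conj().T

    rng = np.random.default_rng(0)
    n = 5
    H = rng.standard_normal((n, n))
    H = (H + H.T) / 2                      # Hermitian (real symmetric) test matrix
    dH = 1e-6 * (H * rng.uniform(-1, 1, H.shape))
    dH = (dH + dH.T) / 2                   # Hermitian, |dH_ij| <= 1e-6 |H_ij| as in (7)

    absH = spectral_abs(H)
    etas = [abs(x @ dH @ x) / (x @ absH @ x)
            for x in rng.standard_normal((1000, n))]
    print("sampled eta:", max(etas))       # the smallest admissible eta is >= this value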

Inequality (3) implies that the perturbations which satisfy (2) are inertia preserving. This result is very general and includes important classes of perturbations. If H is a graded matrix defined by (1) and

    Â = D^{-*} |H| D^{-1},    (4)

then (3) holds with

    η = ‖δA‖ ‖Â^{-1}‖.    (5)

Indeed,

    |x*δHx| = |x*D*δADx| ≤ ‖Dx‖² ‖δA‖ ≤ ‖δA‖ ‖Â^{-1}‖ x*|H|x,    (6)

as desired. Another important class of perturbations is when H is perturbed element-wise in the relative sense,

    |δH_{ij}| ≤ ε|H_{ij}|.    (7)

By setting

    D = diag((|H|_{ii})^{1/2}),    (8)

since |δA_{ij}| ≤ ε|A_{ij}|, the relation (6) implies that (3) holds with

    η = ε ‖|A|‖ ‖Â^{-1}‖.    (9)

Since Â_{ii} = 1, we have ‖Â^{-1}‖ ≤ κ(Â) ≤ n‖Â^{-1}‖, where κ(A) ≡ ‖A‖ ‖A^{-1}‖ is the spectral condition number. Also, ‖|A|‖ ≤ n (see [29, Proof of Theorem 2.16]). The diagonal grading matrix D from (8) is almost optimal in the sense that [23]

    κ(Â) ≤ n min_{D̄} κ(D̄^{-*} |H| D̄^{-1}) ≤ n κ(|H|) = n κ(H),

where the minimum is taken over all non-singular diagonal matrices.
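
For the element-wise case the quantities in (9) are directly computable. A short numpy sketch (ours; it assumes the grading (8)):

    import numpy as np

    def eta_elementwise(H, eps):
        # grading (8): D = diag(|H|_ii^{1/2}); then A-hat = D^{-1} |H| D^{-1}
        lam, U = np.linalg.eigh(H)
        absH = (U * np.abs(lam)) @ U.conj().T
        d = np.sqrt(np.diag(absH).real)
        A = H / np.outer(d, d)                # A = D^{-1} H D^{-1}
        Ahat = absH / np.outer(d, d)          # A-hat has unit diagonal
        # bound (9): eta = eps * || |A| || * || A-hat^{-1} ||
        return eps * np.linalg.norm(np.abs(A), 2) * np.linalg.norm(np.linalg.inv(Ahat), 2)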

Similarly, for more general perturbations of the type²

    |δH_{ij}| ≤ ε D_{ii} D_{jj},    (10)

(3) holds with

    η = εn‖Â^{-1}‖ ≤ εnκ(Â).    (11)

² In [29] no attention was paid to scalings with a non-diagonal matrix D, so this result is new. Also notice that any perturbation H + δH can clearly be interpreted as the perturbation of a graded matrix, and vice versa.

Remark 1 Application of the bounds (5), (9) and (11) requires only the knowledge of the size of the relative perturbation ‖δA‖, and not the perturbation δA itself. Such a situation occurs in several cases which are very important in applications. When the data are determined to some relative accuracy, or when the matrix is being stored in computer memory, the only available information about δH is that it satisfies (7). Similarly, in the error analysis of various numerical algorithms (matrix factorizations, eigenvalue or singular value computations), the only available information is that δH satisfies the more general condition (10).

If H is positive definite, then |H| = H, and (9) and (11) reduce to the corresponding results from [4]. We would like to point out a major difference between the positive definite and the indefinite case.

Remark 2 If H is positive definite, then small perturbations of the type (7) and (10) cause small relative changes in the eigenvalues if and only if κ(Â) ≡ κ(A) is small [4]. If H is indefinite, then from (9) and (11) it follows that small κ(Â) implies small relative changes in the eigenvalues. However, these changes can be small even if κ(Â) is large [29,18]. Although such examples can be successfully analyzed by perturbations through factors as in [29,28], this shows that the graded indefinite case is essentially more difficult than the positive definite one.

Perturbation bounds for eigenvectors of simple eigenvalues were given for scaled diagonally dominant matrices in [1], and for positive definite matrices in [4]. The bound for an invariant subspace which corresponds to a single, possibly multiple, eigenvalue of an indefinite Hermitian matrix was given in [29, Theorem 2.48]. This bound, given in terms of projections which are defined by the Dunford integral as in [14, Section II.1.4], is generalized to invariant subspaces which correspond to a set of neighboring eigenvalues in [25]. In [16, Theorems 3.3 and 3.4] two bounds for invariant subspaces of a graded positive definite matrix were given. We generalize the result of [16, Theorem 3.3] and give the bound on ‖sin Θ‖_F for a graded non-singular indefinite matrix. Our results are, therefore, relative variants of the well-known sin Θ theorems [2, Section 2], [24, Theorem V.3.4]. Our results also generalize the ones from [29,25] to subspaces which correspond to an arbitrary set of eigenvalues. Our results and the related results from [1,4,29,21] and other works are also useful in estimating the accuracy of highly accurate algorithms for computing eigenvalue decompositions [1,4,27,18].

The paper is organized as follows: in Section 2 we prove our main theorem. In Section 3 we show how to efficiently apply that theorem, in particular for the important types of relative perturbations (7) and (10) (cf. Remarks 1 and 2). We also describe some classes of well-behaved indefinite matrices. In Section 4 we give some concluding remarks.

2 Bound for invariant subspaces

Let the eigenvalue decomposition of the graded matrix H be given by

    H ≡ D*AD = UΛU* = [U₁ U₂] diag(Λ₁, Λ₂) [U₁ U₂]*,    (12)

where Λ₁ = diag(λ_{i_1}, λ_{i_2}, ..., λ_{i_k}), Λ₂ = diag(λ_{j_1}, λ_{j_2}, ..., λ_{j_l}), and k + l = n. We assume that Λ₁ and Λ₂ have no common eigenvalues, thus U₁ and U₂ both span simple invariant subspaces according to [24, Definition V.1.2]. Similarly, let the eigenvalue decomposition of H̃ be

    H̃ ≡ D*(A + δA)D = ŨΛ̃Ũ* = [Ũ₁ Ũ₂] diag(Λ̃₁, Λ̃₂) [Ũ₁ Ũ₂]*,    (13)

where Λ̃₁ = diag(λ̃_{i_1}, ..., λ̃_{i_k}) and Λ̃₂ = diag(λ̃_{j_1}, ..., λ̃_{j_l}).

Let XΣY* be a singular value decomposition of the matrix Ũ₂*U₁. The diagonal entries of the matrix Σ ≡ sin Θ are the sines of the canonical angles between the subspaces which are spanned by the columns of U₁ and Ũ₁ [24, Corollary I.5.4].
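
The sines of the canonical angles are directly computable from the two bases; a minimal numpy sketch (ours):

    import numpy as np

    def sin_theta(U1, U2t):
        # U1: n x k unperturbed basis; U2t: n x l perturbed complementary basis
        # (both with orthonormal columns); the singular values of U2t^* U1 are
        # the sines of the canonical angles
        return np.linalg.svd(U2t.conj().T @ U1, compute_uv=False)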

Before stating the theorem, we need some additional definitions. Let

    A = QΛ_A Q* = Q|Λ_A|^{1/2} J |Λ_A|^{1/2} Q*    (14)

be an eigenvalue decomposition of A. Here J is a diagonal matrix of signs whose diagonal elements define the inertia of A and, by Sylvester's theorem [10, Theorem 4.5.8], of H, as well. Set

    G = D*Q|Λ_A|^{1/2},    (15)

such that H = GJG*. Further, set

    N = |Λ_A|^{-1/2} Q* δA Q |Λ_A|^{-1/2},    (16)

such that H̃ = G(J + N)G*. Finally, the hyperbolic eigenvector matrix of a matrix pair (M, J), where M is a Hermitian positive definite matrix and J = diag(±1), is the matrix X which simultaneously diagonalizes the pair such that X*MX = Δ_M and X*JX = J, where Δ_M is a positive definite diagonal matrix. Some properties of hyperbolic eigenvector matrices will be discussed in the next section.

Theorem 3 Let H and H̃ be given as above, and let ‖A^{-1}‖ ‖δA‖ < 1. Then H and H̃ have the same inertia, thus Ũ₁ and Ũ₂ from (13) span simple invariant subspaces, as well, and the canonical angles between the subspaces spanned by U₁ and Ũ₁ are bounded by

    ‖sin Θ‖_F ≤ ‖Ũ₂*U₁‖_F ≤ (‖A^{-1}‖ ‖δA‖_F / √(1 − ‖A^{-1}‖ ‖δA‖)) · ‖V₁‖ ‖Ṽ₂‖ / min_{1≤p≤k, 1≤q≤l} (|λ_{i_p} − λ̃_{j_q}| / √(|λ_{i_p} λ̃_{j_q}|)),    (17)

provided that the above minimum is greater than zero. Here V = [V₁ V₂] is the hyperbolic eigenvector matrix of the pair (G*G, J), where G is defined by (14) and (15), and Ṽ = [Ṽ₁ Ṽ₂] is the hyperbolic eigenvector matrix of the pair

    ([(I + NJ)^{1/2}]* G*G (I + NJ)^{1/2}, J),

where N is defined by (16). V and Ṽ are partitioned according to (12) and (13). Note that the matrix square root exists since by the assumption ‖N‖ ≤ ‖A^{-1}‖ ‖δA‖ < 1 (see [11, Theorem 6.4.1]).
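
All quantities appearing in the theorem are constructible from A, δA and D. A numpy sketch (ours) of the factors (14)–(16):

    import numpy as np

    def theorem3_factors(A, dA, D):
        lamA, Q = np.linalg.eigh(A)
        J = np.diag(np.sign(lamA))                   # inertia of A
        s = np.sqrt(np.abs(lamA))
        G = D.conj().T @ Q * s                       # (15): G = D^* Q |Lambda_A|^{1/2}
        N = (Q.conj().T @ dA @ Q) / np.outer(s, s)   # (16)
        return G, J, N
    # sanity checks: H = G J G^*  and  H + dH = G (J + N) G^*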

Proof. The proof is similar to the proof of [16, Theorem 3.3]. From ‖N‖ < 1 we conclude that the matrices H = GJG*, J + N and H̃ = G(J + N)G* all have the same inertia, defined by J. Therefore, (12) and (13) can be written as

    H = U|Λ|^{1/2} J |Λ|^{1/2} U*,  H̃ = Ũ|Λ̃|^{1/2} J |Λ̃|^{1/2} Ũ*,    (18)

with the same J = diag(±1) as in (14). Also, J + N can be decomposed as

    J + N = (I + NJ)^{1/2} J [(I + NJ)^{1/2}]*,    (19)

which follows from (I + NJ)^{1/2} = J[(I + NJ)^{1/2}]*J. Thus, we can write H̃ as

    H̃ = G(I + NJ)^{1/2} J [(I + NJ)^{1/2}]* G*.    (20)

From (18) and H = GJG* we conclude that the matrix G has the hyperbolic singular value decomposition [17] given by

    G = U|Λ|^{1/2} J V* J,  V*JV = J.    (21)

Similarly, from (18) and (20) we conclude that the matrix G(I + NJ)^{1/2} has the hyperbolic singular value decomposition given by

    G(I + NJ)^{1/2} = Ũ|Λ̃|^{1/2} J Ṽ* J,  Ṽ*JṼ = J.    (22)

Now (20) and (19) imply that

    H̃ − H = G(I + NJ)^{1/2} J [(I + NJ)^{1/2}]* G* − GJG* = G(I + NJ)^{1/2} Ω G*,    (23)

where

    Ω = J[(I + NJ)^{1/2}]* − (I + NJ)^{-1/2} J = (I + NJ)^{-1/2} N.    (24)

Pre- and post-multiplication of (23) by Ũ* and U, respectively, together with (12), (13), (21) and (22), gives

    Λ̃ Ũ*U − Ũ*U Λ = |Λ̃|^{1/2} J Ṽ* J Ω J V J |Λ|^{1/2}.

This, in turn, implies

    Λ̃₂ Ũ₂*U₁ − Ũ₂*U₁ Λ₁ = |Λ̃₂|^{1/2} J₂ Ṽ₂* J Ω J V₁ J₁ |Λ₁|^{1/2},

where J = J₁ ⊕ J₂ is partitioned according to (12) and (13). By interpreting this equality component-wise we have

    [Ũ₂*U₁]_{qp} = [J₂ Ṽ₂* J Ω J V₁ J₁]_{qp} · √(|[Λ̃₂]_{qq} [Λ₁]_{pp}|) / ([Λ̃₂]_{qq} − [Λ₁]_{pp}),

for all p ∈ {1, ..., k} and q ∈ {1, ..., l}. By taking the Frobenius norm we have

    ‖Ũ₂*U₁‖_F ≤ ‖J₂ Ṽ₂* J Ω J V₁ J₁‖_F · max_{p,q} √(|[Λ̃₂]_{qq} [Λ₁]_{pp}|) / |[Λ̃₂]_{qq} − [Λ₁]_{pp}|.

Further,

    ‖J₂ Ṽ₂* J Ω J V₁ J₁‖_F ≤ ‖Ṽ₂‖ ‖V₁‖ ‖Ω‖_F.

The relations (24), (16) and the assumption imply

    ‖Ω‖_F ≤ ‖(I + NJ)^{-1/2}‖ ‖N‖_F ≤ ‖N‖_F / √(1 − ‖N‖) ≤ ‖A^{-1}‖ ‖δA‖_F / √(1 − ‖A^{-1}‖ ‖δA‖),

and the theorem follows by combining the last three relations.

For positive definite H the matrices Ṽ and V are unitary. Thus ‖V₁‖ = ‖Ṽ₂‖ = 1, and Theorem 3 reduces to [16, Theorem 3.3]. Thus, the difference between the positive definite and the indefinite case is the existence of the additional factor ‖V₁‖ ‖Ṽ₂‖ in the indefinite case. This again shows that the indefinite case is essentially more difficult (cf. Remark 2).

The minimum in (17) plays the role of the relative gap, although the function |λ − λ̃| / √(|λλ̃|) does not necessarily increase with the distance between λ and λ̃ if they have different signs. For example, if Λ₁ = {1} and Λ̃₂ = {−1, 0.1}, then the minimum is attained between 1 and −1, and not between 1 and 0.1.

Some other results for positive definite matrices and the singular value decomposition from [15,16] can be generalized to the indefinite case and the hyperbolic singular value decomposition, respectively, by using techniques similar to those used in the proof of Theorem 3 and in Section 3. As already mentioned in Section 1, Theorem 3 generalizes some other relative bounds from [1,4,29,21], as well.

The bound (17) involves unperturbed and perturbed quantities (Λ and Λ̃, V₁ and Ṽ₂), and can therefore be computed only if δA is known. The existing bounds for graded indefinite Hermitian matrices from [29,21,25] depend neither upon perturbed eigenvectors nor eigenvalues, and are therefore simpler to compute, and can be applied to some important problems where only the size of the relative perturbation is known (cf. Remark 1). Our next aim is to remove the dependence on the perturbed quantities from (17), and to explain when the factor ‖V₁‖ ‖Ṽ₂‖ in (17) is expected to be small. We do this in the next section.
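
The non-monotonicity of the relative gap function in the example above is easily checked (a two-line sketch, ours):

    rg = lambda l, lt: abs(l - lt) / abs(l * lt) ** 0.5
    print(rg(1.0, -1.0), rg(1.0, 0.1))   # 2.0  2.846...: the minimum is at the pair (1, -1)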

3 Applying the bound

We show how to remove the perturbed quantities from the bound of Theorem 3. In particular, we show how to efficiently compute η and an upper bound for the factor ‖V₁‖ ‖Ṽ₂‖. In Section 3.1 we show that this factor will be small if, for the chosen grading D, the matrix Â which is defined by (4) is well conditioned. We also describe some easily recognisable classes of matrices which fulfill this condition for any diagonal grading D.

First note that the perturbed eigenvalues can be expressed in terms of the unperturbed ones by using (3) and (5), that is, the minimum in (17) is bounded by

    min_{1≤p≤k, 1≤q≤l} (|λ_{i_p} − λ̃_{j_q}| / √(|λ_{i_p} λ̃_{j_q}|)) ≥ min_{1≤p≤k, 1≤q≤l} (|λ_{i_p} − λ_{j_q}(1 + sign(λ_{i_p} − λ_{j_q}) sign(λ_{j_q}) η)| / √(|λ_{i_p} λ_{j_q} (1 + η)|)).    (25)

We now proceed as follows: we first bound ‖Ṽ₂‖ in terms of ‖V‖; we then bound ‖V‖ in terms of ‖A^{-1}‖ and ‖Â‖; and, finally, we show how to efficiently compute ‖Â‖ and η from (5), (9) or (11).

The matrices X for which X*JX = J, where J = diag(±1), are called J-unitary. Such matrices have the following properties:

– XJX* = J.
– ‖X‖ = ‖X^{-1}‖. Moreover, the singular values of X come in pairs of reciprocals, {σ, 1/σ}.
– Let

    J = I_l ⊕ (−I_{n−l})    (26)

  and let X be partitioned accordingly,

    X = [ X₁₁ X₁₂ ; X₂₁ X₂₂ ].

  Then ‖X₁₁‖ = ‖X₂₂‖, ‖X₁₂‖ = ‖X₂₁‖, and

    ‖X‖ = ‖X₂₁‖ + √(1 + ‖X₂₁‖²) = ‖X₂₁‖ + ‖X₁₁‖.    (27)

These equalities follow from the CS decomposition of X [20].
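
These properties are easy to verify numerically. A small sketch (ours), using a 2×2 hyperbolic rotation:

    import numpy as np

    t = 0.7
    X = np.array([[np.cosh(t), np.sinh(t)],
                  [np.sinh(t), np.cosh(t)]])          # J-unitary for J = diag(1, -1)
    J = np.diag([1.0, -1.0])
    print(np.allclose(X.T @ J @ X, J))                # X^* J X = J
    s = np.linalg.svd(X, compute_uv=False)
    print(np.isclose(s[0] * s[1], 1.0))               # reciprocal pair {sigma, 1/sigma}
    x21 = abs(X[1, 0])
    print(np.isclose(s[0], x21 + np.sqrt(1 + x21**2)))  # equality (27)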

Lemma 4 Let J be given by (26) and let X and X̃ be two J-unitary matrices which are partitioned accordingly in block columns as

    X = [X_p X_n]  and  X̃ = [X̃_p X̃_n].

X_p spans the so-called positive subspace with respect to J since X_p*JX_p = I_l. Similarly, X_n spans the negative subspace. Then the matrix X*JX̃ is also J-unitary, and

    ‖X*JX̃‖ = ‖X_n*JX̃_p‖ + √(1 + ‖X_n*JX̃_p‖²).

Also,

    ‖X̃‖ ≤ (‖X_n*JX̃_p‖ + √(1 + ‖X_n*JX̃_p‖²)) ‖X‖.    (28)

Proof. The equality follows from (27), and the inequality follows since ‖X̃‖ = ‖X^{-*} X*JX̃‖ ≤ ‖X^{-*}‖ ‖X*JX̃‖ = ‖X‖ ‖X*JX̃‖.

Lemma 5 Let X and X̃ be the hyperbolic eigenvector matrices of the pairs (M, J) and (M̃, J), where M and M̃ are positive definite, M̃ = (I + Γ)*M(I + Γ), and J is given by (26). Let X and X̃ be partitioned as in Lemma 4. Define η̄ = ‖Γ‖_F / (1 − ‖Γ‖). If

    η̄ ‖X‖² < 1/4,    (29)

then

    ‖X̃‖ ≤ ‖X‖ / (1 − 4η̄‖X‖²).

Proof. The fact that X diagonalizes the pair (M, J) can be written as

    X*MX = Δ = Δ_p ⊕ Δ_n,    (30)

where Δ is a diagonal positive definite matrix which is partitioned according to J and X. J-unitarity of X and (30) imply MX = X^{-*}Δ = JXJΔ, thus

    MX_p = JX_p Δ_p  and  MX_n = −JX_n Δ_n.    (31)

Relations analogous to (30) and (31) hold for M̃, X̃ and Δ̃, as well. By premultiplying M̃X̃_p = JX̃_p Δ̃_p by X_n* we have, after using (31) and rearranging,

    Δ_n X_n*JX̃_p + X_n*JX̃_p Δ̃_p = X_n*(M̃ − M)X̃_p.

Set Γ̂ = I − (I + Γ)^{-*} (the inverse exists since (29) implies ‖Γ‖ < 1). By using the identity M̃ − M = Γ̂M̃ + MΓ, the above equality can be written component-wise as

    [X_n*JX̃_p]_{ij} ([Δ_n]_{ii} + [Δ̃_p]_{jj}) = [X_n]_{:i}* {Γ̂M̃ + MΓ} [X̃_p]_{:j}
                                              = [X_n]_{:i}* Γ̂ J [X̃_p]_{:j} [Δ̃_p]_{jj} − [Δ_n]_{ii} [X_n]_{:i}* J Γ [X̃_p]_{:j}.

Here [X]_{:k} denotes the k-th column of the matrix X. By dividing this equality by [Δ_n]_{ii} + [Δ̃_p]_{jj} and by using the fact that

    max{[Δ_n]_{ii}, [Δ̃_p]_{jj}} / ([Δ_n]_{ii} + [Δ̃_p]_{jj}) < 1,

we have

    |[X_n*JX̃_p]_{ij}| ≤ |[X_n]_{:i}* Γ̂ J [X̃_p]_{:j}| + |[X_n]_{:i}* J Γ [X̃_p]_{:j}|.

Since ‖Γ̂‖_F ≤ η̄, by taking the Frobenius norm we have

    ‖X_n*JX̃_p‖_F ≤ ‖X_n‖ ‖X̃_p‖ (‖Γ̂‖_F + ‖Γ‖_F) ≤ 2η̄ ‖X‖ ‖X̃‖.

By inserting this inequality in (28), we have

    ‖X̃‖ ≤ (2η̄‖X‖‖X̃‖ + √(1 + 4η̄²‖X‖²‖X̃‖²)) ‖X‖ ≤ (1 + 4η̄‖X‖‖X̃‖) ‖X‖.

After a simple manipulation we obtain

    ‖X̃‖ ≤ ‖X‖ / (1 − 4η̄‖X‖²),

as desired.

Now we can use Lemma 5 to bound ‖Ṽ‖ by ‖V‖ in Theorem 3. V diagonalizes the pair (G*G, J), where G is defined by (15), and Ṽ diagonalizes the pair ([(I + NJ)^{1/2}]* G*G (I + NJ)^{1/2}, J), where N is defined by (16). In order to apply Lemma 5 with M = G*G and I + Γ = (I + NJ)^{1/2}, we need to bound ‖Γ‖ in terms of ‖N‖. Since ‖NJ‖ < 1, we can apply the Taylor series of the function (1 + x)^{1/2} to the matrix (I + NJ)^{1/2} [11, Theorem 6.2.8], which gives

    (I + NJ)^{1/2} = I + Σ_{m=1}^{∞} (−1)^{m−1} ((2m−3)!! / (2^m m!)) (NJ)^m ≡ I + Γ.

Here (2m−3)!! = 1·3···(2m−3), and (−1)!! ≡ 1.
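
The expansion, and the bound ‖Γ‖ ≤ (1/2)‖N‖/(1 − ‖N‖) derived below from it, can be checked numerically (a sketch, ours; it uses scipy's matrix square root):

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(1)
    n = 4
    N = rng.standard_normal((n, n))
    N = 0.1 * (N + N.T) / 2                    # small Hermitian N with ||N|| < 1
    J = np.diag([1.0, 1.0, -1.0, -1.0])
    Gamma = sqrtm(np.eye(n) + N @ J) - np.eye(n)
    nrm = np.linalg.norm(N, 2)
    print(np.linalg.norm(Gamma, 2) <= 0.5 * nrm / (1 - nrm))   # True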

By taking norms we have

    ‖Γ‖ ≤ Σ_{m=1}^{∞} ((2m−3)!! / (2^m m!)) ‖N‖^m ≤ (1/2) ‖N‖ Σ_{m=1}^{∞} ‖N‖^{m−1} = (1/2) ‖N‖ / (1 − ‖N‖).

Similarly, for the Frobenius norm we have

    ‖Γ‖_F ≤ (1/2) ‖N‖_F / (1 − ‖N‖).

From the above two inequalities we see that η̄ from Lemma 5 is bounded by

    η̄ ≤ ‖N‖_F / (2 − 3‖N‖) ≤ ‖A^{-1}‖ ‖δA‖_F / (2 − 3‖A^{-1}‖ ‖δA‖).    (32)

If

    ‖A^{-1}‖ ‖δA‖_F < 2 / (4‖V‖² + 3),    (33)

then η̄ < 1/(4‖V‖²), that is, (29) holds and Lemma 5 gives

    ‖Ṽ‖ ≤ ‖V‖ / (1 − 4η̄‖V‖²).    (34)

We can further bound ‖V‖ in terms of A and Â from (4). By definition, V diagonalizes the pair (G*G, J), where H = GJG*, that is, V*G*GV = |Λ|, where Λ is the eigenvalue matrix of H. Then the eigenvalue decomposition of H is given by H = UΛU*, where U = GV|Λ|^{-1/2}. Then

    |H| = U|Λ|U* = GVV*G*.    (35)

Therefore ‖V‖² = ‖VV*‖ = ‖G^{-1} |H| G^{-*}‖. In our case, from (4) and (15), we have

    ‖V‖² = ‖ |Λ_A|^{-1/2} Q* D^{-*} · D*ÂD · D^{-1} Q |Λ_A|^{-1/2} ‖ ≤ ‖A^{-1}‖ ‖Â‖.    (36)
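
Relation (36) can be verified numerically without forming V explicitly, by using (35). A numpy sketch (ours):

    import numpy as np

    def check_36(A, D):
        H = D.conj().T @ A @ D
        lam, U = np.linalg.eigh(H)
        absH = (U * np.abs(lam)) @ U.conj().T
        lamA, Q = np.linalg.eigh(A)
        G = D.conj().T @ Q * np.sqrt(np.abs(lamA))              # (15)
        Gi = np.linalg.inv(G)
        normV2 = np.linalg.norm(Gi @ absH @ Gi.conj().T, 2)     # ||V||^2 via (35)
        Di = np.linalg.inv(D)
        Ahat = Di.conj().T @ absH @ Di                          # (4)
        bound = np.linalg.norm(np.linalg.inv(A), 2) * np.linalg.norm(Ahat, 2)
        return normV2, bound                                    # normV2 <= bound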

We can now prove the following theorem.

Theorem 6 Let the assumptions of Theorem 3 hold and let, in addition, ‖A^{-1}‖ ‖δA‖_F < 2/(4‖A^{-1}‖‖Â‖ + 3). Then

    ‖sin Θ‖_F ≤ (‖A^{-1}‖ ‖δA‖_F / √(1 − ‖A^{-1}‖ ‖δA‖)) · (1 / min_{1≤p≤k, 1≤q≤l} (|λ_{i_p} − λ̃_{j_q}| / √(|λ_{i_p} λ̃_{j_q}|))) · (‖V‖² / (1 − 4η̄‖V‖²))    (37)

             ≤ (‖A^{-1}‖ ‖δA‖_F / √(1 − ‖A^{-1}‖ ‖δA‖)) · (1 / min_{1≤p≤k, 1≤q≤l} (|λ_{i_p} − λ̃_{j_q}| / √(|λ_{i_p} λ̃_{j_q}|))) · (‖A^{-1}‖‖Â‖ / (1 − 4η̄‖A^{-1}‖‖Â‖)),    (38)

where η̄ is defined by (32).

Proof. The assumption and (36) imply (33), which in turn implies (34). Now (37) follows by inserting (34) in Theorem 3, and (38) follows by inserting (36) in (37).

Note that the assumption of the theorem implies the positivity of the denominator in the last quotient in (38), and is therefore not too restrictive provided that ‖A^{-1}‖ and ‖Â‖ are not too large. Some classes of matrices which fulfill this condition are described in Section 3.1. Also, instead of (36) we can use the alternative bound

    ‖V‖² = ‖V^{-1}‖² ≤ ‖A‖ ‖Â^{-1}‖.    (39)

In order to apply (38), besides ‖A^{-1}‖, ‖δA‖_F and ‖δA‖, we also need to know ‖Â‖. For the special case (8), when D is diagonal, we simply have ‖Â‖ ≤ trace(Â) = n. Such D appears naturally when we consider perturbations of the type (7) and (10) which occur in numerical computation (see Remark 1).
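
Once ‖A^{-1}‖, ‖δA‖, ‖δA‖_F, ‖Â‖ and the relative gap are known, assembling (38) is straightforward. A sketch (ours; the argument names are ours, and the relative gap is assumed to be estimated via (25)):

    import numpy as np

    def bound_38(nAinv, dA2, dAF, nAhat, relgap):
        # nAinv = ||A^{-1}||, dA2 = ||dA||, dAF = ||dA||_F, nAhat = ||A-hat||
        assert nAinv * dAF < 2.0 / (4.0 * nAinv * nAhat + 3.0)  # Theorem 6 assumption
        etabar = nAinv * dAF / (2.0 - 3.0 * nAinv * dA2)        # (32)
        first = nAinv * dAF / np.sqrt(1.0 - nAinv * dA2)
        last = nAinv * nAhat / (1.0 - 4.0 * etabar * nAinv * nAhat)
        return first * last / relgap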

We now show that, for any D, ‖Â‖ from (38) and η from (25) can be computed by the highly accurate eigenreduction algorithm from [27,18] at little extra cost³. This algorithm first factorizes H as H = FJF* by symmetric indefinite factorization [19]. This factorization is followed by the one-sided J-orthogonal Jacobi method on the pair (F, J). This method forms the sequence of matrices

    F_{k+1} = F_k X_k,  where  X_k* J X_k = J.

This sequence converges to some matrix FX which has numerically orthogonal columns, and X is J-orthogonal, that is, X*JX = J. The eigenvalues of H are approximated by Λ = J diag(X*F*FX), and the eigenvectors are approximated by U = FX|Λ|^{-1/2}. Therefore, |H| = FXX*F*. Note that X also diagonalizes the pair (F*F, J). Since the matrix FX is readily available in the computer, we can compute Â as

    Â = D^{-*} FXX*F* D^{-1},    (40)

or, even simpler, just its factor D^{-*}FX. Finally, after computing Â, η can be computed directly from the definitions (5), (9), or (11), respectively.

³ In [18] only the algorithm for the real symmetric case was analyzed, but a version for the Hermitian case is possible, as well.
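
Given the computed factor, (40) amounts to one linear solve and one product. A sketch (ours):

    import numpy as np

    def ahat_from_factor(F, X, D):
        Z = np.linalg.solve(D.conj().T, F @ X)   # Z = D^{-*} (F X)
        return Z @ Z.conj().T                    # (40): A-hat = Z Z^*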

3.1 "Well-behaved" matrices

Our bounds differ from the bounds for the positive definite case [16, Theorem 3.3] by additional factors, namely the last quotients in (37) and (38). From (36) and (39) we also have ‖V‖² ≤ (κ(A)κ(Â))^{1/2}. From (12) and (4) and the definitions of H and |H| from Section 1, we have

    A = D^{-*} U|Λ|^{1/2} J |Λ|^{1/2} U* D^{-1},  Â = D^{-*} U|Λ|^{1/2} · |Λ|^{1/2} U* D^{-1}.

This implies that (see also [29, Proof of Theorem 2.16])

    |a_{ij}| ≤ √(â_{ii} â_{jj}),  |(A^{-1})_{ij}| ≤ √((Â^{-1})_{ii} (Â^{-1})_{jj}),

and hence

    ‖|A|‖ ≤ trace(Â),  ‖|A^{-1}|‖ ≤ trace(Â^{-1}).

This implies that A is well conditioned if so is Â. We conclude that the additional factors will be small if, for the chosen grading D, any of the right-hand sides in (36) or (39) is small, or (which is a more restrictive condition) simply if Â is well conditioned. We call such matrices "well-behaved". We also conclude that an invariant subspace is stable under small relative perturbations for the chosen grading D, if ‖A^{-1}‖ is small, any of the three above conditions is fulfilled, and the eigenvalues which define the subspace are relatively well separated from the rest of the spectrum.

Although Â can be easily computed as described in the comments after Theorem 6, we are interested in identifying classes of "well-behaved" matrices in advance. In the positive definite case the answer is simple: such are the well-graded matrices, that is, the matrices of the form H = DAD, where D is diagonal positive definite such that A_{ii} = 1, and A is well conditioned. In the indefinite case we have two easily recognisable classes of "well-behaved" matrices, namely scaled diagonally dominant matrices [1] and Hermitian quasidefinite matrices [26,8]. Moreover, for the same A such matrices are well-behaved for any diagonal grading D.

Scaled diagonally dominant matrices have the form H = D(J + ξ)D, where D is diagonal positive definite, J = diag(±1), and ‖ξ‖ < 1. For these matrices A = J + ξ and ‖A‖ ‖Â^{-1}‖ ≤ n(1 + ‖ξ‖)/(1 − ‖ξ‖) [29, Theorem 2.9]. Thus, (39) implies that the last quotient in (37) is small when ‖ξ‖ is not too close to one. Also, ‖δA‖, which is used in the definition of η̄ in (32), has to be sufficiently small.

A Hermitian matrix H is quasidefinite if there exists a permutation P such that

    P^T H P = H̄ = [ H̄₁₁ H̄₁₂ ; H̄₁₂* −H̄₂₂ ],

where H̄₁₁ and H̄₂₂ are positive definite. The proof that such a matrix is "well-behaved" is rather involved. A quasidefinite matrix always has a triangular factorization H = FJF*, where F is lower triangular with real diagonal and J is diagonal with J_{ii} = sign(h_{ii}) [26]⁴. Set H = DAD, where D = diag(|h_{ii}|^{1/2}). Then

    H̄ = D̄ Ā D̄ = D̄ [ Ā₁₁ Ā₁₂ ; Ā₁₂* −Ā₂₂ ] D̄,  D̄ = P^T D P,  Ā = P^T A P.    (41)

Note that A is also quasidefinite, and its triangular factorization is given by A = BJB*, where B = D^{-1}F.

⁴ The proof in [26] is for the real symmetric case, but it is easily seen that it holds for the Hermitian case, as well.
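
For a quasidefinite matrix the factorization H = FJF* can be obtained by an unpivoted LDL* factorization; a numpy sketch (ours; real symmetric case for simplicity):

    import numpy as np

    def fjf_factorization(H):
        # unpivoted LDL^T, valid for quasidefinite H; then F = L |d|^{1/2}
        n = H.shape[0]
        L = np.eye(n)
        M = H.astype(float).copy()
        d = np.zeros(n)
        for k in range(n):
            d[k] = M[k, k]
            L[k+1:, k] = M[k+1:, k] / d[k]
            M[k+1:, k+1:] -= d[k] * np.outer(L[k+1:, k], L[k+1:, k])
        F = L * np.sqrt(np.abs(d))               # lower triangular, real diagonal
        return F, np.diag(np.sign(d))            # H = F J F^T with J_ii = sign(h_ii)

    H = np.array([[2.0, 1.0], [1.0, -3.0]])      # quasidefinite with P = I
    F, J = fjf_factorization(H)
    print(np.allclose(F @ J @ F.T, H))           # True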

Let us now bound κ(Â). Assume without loss of generality that the diagonal elements of D are nonincreasing, which can easily be attained by a permutation. From (40) we have

    κ(Â) ≤ κ²(B) κ²(X),    (42)

where, as already mentioned, X diagonalizes the pair (F*F, J). In [22] it was shown that

    κ²(X) ≤ min_Δ κ(Δ* F*F Δ),

where the minimum is over all matrices Δ which commute with J. In our case this clearly implies

    ‖X‖² ≤ κ(F D^{-1}) = κ(D B D^{-1}).

Since B is lower triangular and D has nonincreasing diagonal elements, we have |DBD^{-1}| ≤ |B| and |DB^{-1}D^{-1}| ≤ |B^{-1}|, so that

    κ(X) = ‖X‖² ≤ ‖|B|‖ ‖|B^{-1}|‖ ≤ ‖B‖_F ‖B^{-1}‖_F ≡ κ_F(B).    (43)

The matrix JA is positive definite according to [9]. By appropriately modifying the proof of [9, Theorem 2], we have

    ‖ |B| |B|^T ‖_F ≤ 2n(‖T‖ + ‖S T^{-1} S‖),

where T = [JA + (JA)*]/2 and S = [JA − (JA)*]/2. Note that the proof in [9] is for the real case, but it readily holds in the complex case, as was recently shown in [28]. From (41) we have

    ‖ |B| |B|^T ‖_F ≤ 2n(‖P^T T P‖ + ‖P^T S P · P^T T^{-1} P · P^T S P‖)    (44)
                   ≤ 2n max{ ‖Ā₁₁‖ + ‖Ā₁₂ Ā₂₂^{-1} Ā₁₂*‖, ‖Ā₂₂‖ + ‖Ā₁₂* Ā₁₁^{-1} Ā₁₂‖ }.

Now note that the inverse of a quasidefinite matrix is also quasidefinite [26, Theorem 1.1]. By modifying the proof of [9, Theorem 2] we obtain a bound similar to (44) for ‖ |B^{-*}| |B^{-1}| ‖_F (for details see [28]). By combining this discussion with (43), from (42) we conclude that, essentially, κ(Â) will be small if in (41) Ā₁₁ and Ā₂₂ are well conditioned and ‖Ā₁₂‖ is not too large.

4 Conclusion

We derived new perturbation bounds for invariant subspaces of non-singular Hermitian matrices. Our bounds improve the existing bounds in the following aspects. Our bounds extend the bounds for scaled diagonally dominant matrices from [1] to general indefinite matrices. Our bounds also extend the bound of [16, Theorem 3.3] for positive definite matrices to the indefinite case. Finally, our bounds extend the bounds for indefinite matrices from [29,25] to subspaces which correspond to any set of possibly nonadjacent eigenvalues (numerical experiments also indicate that our bounds tend to be sharper). For graded matrices of the form (1) which are well-behaved according to Section 3.1, our bounds are sharper than the general bound for diagonalizable matrices from [13],

    ‖sin Θ‖_F ≤ ‖H^{-1} δH‖_F / min_{1≤p≤k, 1≤q≤l} (|λ_{i_p} − λ̃_{j_q}| / |λ̃_{j_q}|).    (45)

This bound makes no preference for positive definite matrices over indefinite ones, and is easier to interpret than our bound. However, since in (45) δH is multiplied by H^{-1} from the left, this bound does not accommodate two-sided grading well. Finally, our bounds are computable from unperturbed quantities and can be efficiently used in analyzing numerical algorithms in which graded perturbations naturally occur.

We would like to thank the referee for valuable remarks which led to considerable improvements in the paper.

References

[1] J. Barlow and J. Demmel, Computing accurate eigensystems of scaled diagonally dominant matrices, SIAM J. Numer. Anal., 27:762–791 (1990).
[2] C. Davis and W. M. Kahan, The rotation of eigenvectors by a perturbation. III, SIAM J. Numer. Anal., 7:1–46 (1970).
[3] J. Demmel and W. M. Kahan, Accurate singular values of bidiagonal matrices, SIAM J. Sci. Stat. Comput., 11:873–912 (1990).
[4] J. Demmel and K. Veselic, Jacobi's method is more accurate than QR, SIAM J. Matrix Anal. Appl., 13:1204–1245 (1992).
[5] Z. Drmac, Computing the Singular and the Generalized Singular Values, PhD thesis, Fernuniversität Hagen, 1994.
[6] S. C. Eisenstat and I. C. F. Ipsen, Relative perturbation techniques for singular value problems, SIAM J. Numer. Anal., 32:1972–1988 (1995).
[7] S. C. Eisenstat and I. C. F. Ipsen, Relative perturbation results for eigenvalues and eigenvectors of diagonalisable matrices, BIT, 38:502–509 (1998).
[8] P. E. Gill, M. A. Saunders and J. R. Shinnerl, On the stability of Cholesky factorization for symmetric quasidefinite systems, SIAM J. Matrix Anal. Appl., 17:35–46 (1996).
[9] G. H. Golub and C. F. Van Loan, Unsymmetric positive definite linear systems, Linear Algebra Appl., 28:85–97 (1979).
[10] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[11] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991.
[12] I. C. F. Ipsen, Relative perturbation bounds for matrix eigenvalues and singular values, in: Acta Numerica 1998, Vol. 7, Cambridge University Press, Cambridge, pp. 151–201 (1998).
[13] I. C. F. Ipsen, Absolute and relative perturbation bounds for invariant subspaces of matrices, August 1998, submitted to Linear Algebra Appl.
[14] T. Kato, Perturbation Theory for Linear Operators, Springer, Berlin.
[15] R.-C. Li, Relative perturbation theory: (i) eigenvalue and singular value variations, SIAM J. Matrix Anal. Appl., 19:956–982 (1998).
[16] R.-C. Li, Relative perturbation theory: (ii) eigenspace and singular subspace variations, SIAM J. Matrix Anal. Appl., 20:471–492 (1998).
[17] R. Onn, A. O. Steinhardt and A. Bojanczyk, Hyperbolic singular value decompositions and applications, IEEE Trans. on Acoustics, Speech, and Signal Processing, 1575–1588 (1991).
[18] I. Slapnicar, Accurate Symmetric Eigenreduction by a Jacobi Method, PhD thesis, Fernuniversität Hagen, 1992.
[19] I. Slapnicar, Componentwise analysis of direct factorization of real symmetric and Hermitian matrices, Linear Algebra Appl., 272:227–275 (1998).
[20] I. Slapnicar and N. Truhar, Relative perturbation theory for hyperbolic eigenvalue problem, submitted to Linear Algebra Appl.
[21] I. Slapnicar and K. Veselic, Perturbations of the eigenprojections of a factorized Hermitian matrix, Linear Algebra Appl., 218:273–280 (1995).
[22] I. Slapnicar and K. Veselic, A bound for the condition of a hyperbolic eigenvector matrix, to appear in Linear Algebra Appl.
[23] A. van der Sluis, Condition numbers and equilibration of matrices, Numer. Math., 14:14–23 (1969).
[24] G. W. Stewart and J.-G. Sun, Matrix Perturbation Theory, Academic Press, Boston, 1990.
[25] N. Truhar, Perturbations of Invariant Subspaces, MS thesis, University of Zagreb, 1995 (in Croatian).
[26] R. J. Vanderbei, Symmetric quasidefinite matrices, SIAM J. Optimization, 5:100–113 (1995).
[27] K. Veselic, A Jacobi eigenreduction algorithm for definite matrix pairs, Numer. Math., 64:241–269 (1993).
[28] K. Veselic, Perturbation theory for the eigenvalues of factorised symmetric matrices, Technical report, Fernuniversität Hagen, Germany.
[29] K. Veselic and I. Slapnicar, Floating-point perturbations of Hermitian matrices, Linear Algebra Appl., 195:81–116 (1993).


More information

Introduction to Iterative Solvers of Linear Systems

Introduction to Iterative Solvers of Linear Systems Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their

More information

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity

More information

Wavelets and Linear Algebra

Wavelets and Linear Algebra Wavelets and Linear Algebra 4(1) (2017) 43-51 Wavelets and Linear Algebra Vali-e-Asr University of Rafsanan http://walavruacir Some results on the block numerical range Mostafa Zangiabadi a,, Hamid Reza

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations.

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations. HIROSHIMA S THEOREM AND MATRIX NORM INEQUALITIES MINGHUA LIN AND HENRY WOLKOWICZ Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

Multiple eigenvalues

Multiple eigenvalues Multiple eigenvalues arxiv:0711.3948v1 [math.na] 6 Nov 007 Joseph B. Keller Departments of Mathematics and Mechanical Engineering Stanford University Stanford, CA 94305-15 June 4, 007 Abstract The dimensions

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

KEYWORDS. Numerical methods, generalized singular values, products of matrices, quotients of matrices. Introduction The two basic unitary decompositio

KEYWORDS. Numerical methods, generalized singular values, products of matrices, quotients of matrices. Introduction The two basic unitary decompositio COMPUTING THE SVD OF A GENERAL MATRIX PRODUCT/QUOTIENT GENE GOLUB Computer Science Department Stanford University Stanford, CA USA golub@sccm.stanford.edu KNUT SLNA SC-CM Stanford University Stanford,

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

Preconditioned Parallel Block Jacobi SVD Algorithm

Preconditioned Parallel Block Jacobi SVD Algorithm Parallel Numerics 5, 15-24 M. Vajteršic, R. Trobec, P. Zinterhof, A. Uhl (Eds.) Chapter 2: Matrix Algebra ISBN 961-633-67-8 Preconditioned Parallel Block Jacobi SVD Algorithm Gabriel Okša 1, Marián Vajteršic

More information

COMPARISON OF PERTURBATION BOUNDS FOR THE MATRIX. Abstract. The paper deals with the perturbation estimates proposed by Xu[4],Sun,

COMPARISON OF PERTURBATION BOUNDS FOR THE MATRIX. Abstract. The paper deals with the perturbation estimates proposed by Xu[4],Sun, COMPARISON OF PERTURBATION BOUNDS FOR THE MATRIX EQUATION X = A 1 + A H X;1 A M.M. Konstantinov V.A. Angelova y P.H. Petkov z I.P. Popchev x Abstract The paper deals with the perturbation estimates proposed

More information

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY A MULTIGRID ALGORITHM FOR THE CELL-CENTERED FINITE DIFFERENCE SCHEME Richard E. Ewing and Jian Shen Institute for Scientic Computation Texas A&M University College Station, Texas SUMMARY In this article,

More information

ON THE QR ITERATIONS OF REAL MATRICES

ON THE QR ITERATIONS OF REAL MATRICES Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix

More information

that of the SVD provides new understanding of left and right generalized singular vectors. It is shown

that of the SVD provides new understanding of left and right generalized singular vectors. It is shown ON A VARIAIONAL FORMULAION OF HE GENERALIZED SINGULAR VALUE DECOMPOSIION MOODY.CHU,ROBER. E. FUNDERLIC y AND GENE H. GOLUB z Abstract. A variational formulation for the generalized singular value decomposition

More information

10.34: Numerical Methods Applied to Chemical Engineering. Lecture 2: More basics of linear algebra Matrix norms, Condition number

10.34: Numerical Methods Applied to Chemical Engineering. Lecture 2: More basics of linear algebra Matrix norms, Condition number 10.34: Numerical Methods Applied to Chemical Engineering Lecture 2: More basics of linear algebra Matrix norms, Condition number 1 Recap Numerical error Operations Properties Scalars, vectors, and matrices

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

OUTLINE 1. Introduction 1.1 Notation 1.2 Special matrices 2. Gaussian Elimination 2.1 Vector and matrix norms 2.2 Finite precision arithmetic 2.3 Fact

OUTLINE 1. Introduction 1.1 Notation 1.2 Special matrices 2. Gaussian Elimination 2.1 Vector and matrix norms 2.2 Finite precision arithmetic 2.3 Fact Computational Linear Algebra Course: (MATH: 6800, CSCI: 6800) Semester: Fall 1998 Instructors: { Joseph E. Flaherty, aherje@cs.rpi.edu { Franklin T. Luk, luk@cs.rpi.edu { Wesley Turner, turnerw@cs.rpi.edu

More information

Jacobi method for small matrices

Jacobi method for small matrices Jacobi method for small matrices Erna Begović University of Zagreb Joint work with Vjeran Hari CIME-EMS Summer School 24.6.2015. OUTLINE Why small matrices? Jacobi method and pivot strategies Parallel

More information

The Matrix Sign Function Method and the. Computation of Invariant Subspaces. November Abstract

The Matrix Sign Function Method and the. Computation of Invariant Subspaces. November Abstract The Matrix Sign Function Method and the Computation of Invariant Subspaces Ralph Byers Chunyang He y Volker Mehrmann y November 5. 1994 Abstract A perturbation analysis shows that if a numerically stable

More information