Optimal Perturbation Bounds for the Hermitian Eigenvalue Problem

Jesse L. Barlow*
Department of Computer Science and Engineering
The Pennsylvania State University
University Park, PA

Ivan Slapničar†
Faculty of Electrical Engineering, Mechanical Engineering, and Naval Architecture
University of Split
R. Boškovića b.b., 21000 Split, Croatia
ivan.slapnicar@fesb.hr

Submitted by Krešimir Veselić

ABSTRACT

There is now a large literature on structured perturbation bounds for eigenvalue problems of the form $Hx = \lambda Mx$, where $H$ and $M$ are Hermitian. These results give relative error bounds on the $i$th eigenvalue, $\lambda_i$, of the form
$$\frac{|\lambda_i - \tilde\lambda_i|}{|\lambda_i|},$$

* The research of Jesse L. Barlow was supported by the National Science Foundation under grants no. CCR and no. CCR.
† The research of Ivan Slapničar was supported by the Croatian Ministry of Science and Technology under grant no. Part of this work was done while the author was visiting the Department of Computer Science and Engineering, The Pennsylvania State University, University Park, PA.

LINEAR ALGEBRA AND ITS APPLICATIONS xxx:1-xxx (1993)
© Elsevier Science Publishing Co., Inc., Avenue of the Americas, New York, NY
and bound the error in the $i$th eigenvector in terms of the relative gap,
$$\min_{j\ne i}\frac{|\lambda_i-\lambda_j|}{|\lambda_i\lambda_j|^{1/2}}.$$
In general, this theory restricts $H$ to be nonsingular and $M$ to be positive definite. We relax this restriction by allowing $H$ to be singular. For our results on eigenvalues we allow $M$ to be positive semi-definite, and for a few results we allow it to be more general. For these problems, for eigenvalues that are not zero or infinity under perturbation, it is possible to obtain local relative error bounds. Thus, a wider class of problems may be characterized by this theory. Although it is impossible to give meaningful relative error bounds on eigenvalues that are not bounded away from zero, we show that the error in the subspace associated with those eigenvalues can be characterized meaningfully.

1. Introduction

We consider the eigenvalue problem
$$Hx = \lambda Mx, \qquad H, M \in \mathbb{C}^{n\times n},\quad x \in \mathbb{C}^n,\quad \lambda \in \mathbb{C}, \eqno(1.1)$$
where $H$ and $M$ are Hermitian matrices. We assume that there exists a nonsingular matrix $X \in \mathbb{C}^{n\times n}$ such that
$$X^*HX = \Omega, \qquad X^*MX = J, \eqno(1.2)$$
where
$$\Omega = \mathrm{diag}(\omega_1,\ldots,\omega_n), \qquad J = \mathrm{diag}(j_1,\ldots,j_n),$$
and
$$\omega_i \in \mathbb{R}, \qquad j_i \in \{e^{\mathrm{i}\theta} : \theta \in [0,2\pi]\}\cup\{0\}, \qquad i = 1,2,\ldots,n.$$
If we restrict $J$ to be nonsingular, then (1.1) is a statement of the eigenproblem for the matrix $A = HM^{-1}$. If we impose the restriction $\omega_i \in \mathbb{R}$, $j_i \in \{0,1\}$, then $M$ is positive semi-definite, and (1.1) is the generalized Hermitian eigenvalue problem. Most of the results in this paper concern this class of eigenvalue problems. We compare (1.1) to the perturbed problem
$$(H + \delta H)\tilde x = \tilde\lambda(M + \delta M)\tilde x, \eqno(1.3)$$
where $\delta H$ and $\delta M$ are "small" Hermitian perturbations. Let $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ be the eigenvalues of the pencil (1.1) and let $\tilde\lambda_1 \le \cdots \le \tilde\lambda_n$ be the eigenvalues of the perturbed pencil (1.3). Starting from the theory of Kato [12], we obtain meaningful bounds on
$$\frac{|\lambda_i - \tilde\lambda_i|}{|\lambda_i|}. \eqno(1.4)$$
Moreover, for the case when $M$ is positive definite, we give conditions under which we can bound the error in the subspaces in terms of a generalization of the relative gap
$$\mathrm{relgap}(\lambda_i) = \min_{j\ne i}\frac{|\lambda_i - \lambda_j|}{|\lambda_i\lambda_j|^{1/2}}.$$
This theory generalizes that in papers by Barlow and Demmel [1], Demmel and Veselić [5], Veselić and Slapničar [18], and some of the results by Gu and Eisenstat [11], Li [13, 14], and Eisenstat and Ipsen [7]. We make the following improvements to the theory given in the above papers:

- The bounds on eigenvalues allow for $H$ and $M$ to be singular. These bounds are used to obtain bounds on the singular value decomposition (SVD).
- The bounds given are local in the sense that each eigenvalue has its own condition number.
- The bounds given are optimal and show clearly the role of structured perturbations.
- The bounds on eigenvectors include bounds on the error in the subspace associated with eigenvalues that are not bounded away from zero.

In §2, we give simple bounds for the relative error of the form (1.4) under weaker assumptions than have been given in previous works [1, 18, 11] and show how this theory can be applied to the singular value decomposition. In §3, we show how this theory accounts for the effect of structured perturbation on the problem (1.1). In §4, we give bounds on the error in subspaces for scaled perturbations. Some examples are given in §5, and our conclusions are in §6.

2. Locally optimal perturbation bounds on Hermitian pencils

In this section we first give local condition numbers for eigenvalues. We then derive the perturbation bounds for the singular value decomposition.
2.1. Local condition numbers of eigenvalues

Consider the perturbed pair
$$\tilde H \equiv H + \delta H = H + \eta E, \qquad \tilde M \equiv M + \delta M = M + \eta F, \eqno(2.1)$$
where $\eta$ is a positive real number, $E = \delta H/\eta$ and $F = \delta M/\eta$. For $\xi \in [0,\eta]$ let
$$H(\xi) = H + \xi E, \qquad M(\xi) = M + \xi F. \eqno(2.2)$$
Now consider the family of generalized eigenproblems
$$H(\xi)x(\xi) = \lambda(\xi)M(\xi)x(\xi), \qquad \xi \in [0,\eta]. \eqno(2.3)$$
We assume that (1.2) holds for each $\xi \in [0,\eta]$ and some $X(\xi)$, $\Omega(\xi)$ and $J(\xi)$. Let $(\lambda_i(\xi), x_i(\xi))$ be the $i$th eigenpair of (2.3). Define $S$ to be the set of indices given by
$$S = \{i : H(\xi)x_i(\xi) \ne 0,\ M(\xi)x_i(\xi) \ne 0 \text{ for all } \xi \in [0,\eta]\}. \eqno(2.4)$$
The set $S$ is the set of indices of eigenvalues for which relative error bounds can be found. The next theorem gives such a bound. Its proof follows that of Theorem 4 in [1, p. 773].

Theorem 2.1. Let $(\lambda_i(\xi), x_i(\xi))$ be the $i$th eigenpair of the Hermitian pencil in (2.3). Let $S$ be defined by (2.4). If $i \in S$, then
$$\lambda_i(\eta) = \lambda_i(0)\exp\Bigl(\int_0^\eta \zeta_i(\xi)\,d\xi\Bigr), \eqno(2.5)$$
where
$$\zeta_i(\xi) = \frac{x_i^*(\xi)Ex_i(\xi)}{x_i^*(\xi)H(\xi)x_i(\xi)} - \frac{x_i^*(\xi)Fx_i(\xi)}{x_i^*(\xi)M(\xi)x_i(\xi)}. \eqno(2.6)$$

Proof. Assume that $\lambda_i(\xi)$ is simple at the point $\xi$. Then from the classical eigenvalue perturbation theory, we have the first order expansion
$$\lambda_i(\xi + \tau) = \lambda_i(\xi) + \dot\lambda_i(\xi)\tau + O(\tau^2). \eqno(2.7)$$
Let us compute $\dot\lambda_i(\xi)$. By differentiating the equation
$$[H + (\xi+\tau)E]\,x_i(\xi+\tau) = \lambda_i(\xi+\tau)\,[M + (\xi+\tau)F]\,x_i(\xi+\tau)$$
with respect to $\tau$, and setting $\tau = 0$ in the result, we obtain
$$Ex_i(\xi) + H(\xi)\dot x_i(\xi) = \dot\lambda_i(\xi)M(\xi)x_i(\xi) + \lambda_i(\xi)Fx_i(\xi) + \lambda_i(\xi)M(\xi)\dot x_i(\xi).$$
Applying $x_i^*(\xi)$ to the left side of this equation, using (2.3), and rearranging gives
$$\dot\lambda_i(\xi) = \frac{x_i^*(\xi)Ex_i(\xi)}{x_i^*(\xi)M(\xi)x_i(\xi)} - \lambda_i(\xi)\,\frac{x_i^*(\xi)Fx_i(\xi)}{x_i^*(\xi)M(\xi)x_i(\xi)}.$$
Since $\lambda_i(\xi) \ne 0$ for all $\xi$, dividing by $\lambda_i(\xi)$ gives
$$\frac{d(\ln\lambda_i(\xi))}{d\xi} = \frac{\dot\lambda_i(\xi)}{\lambda_i(\xi)} = \frac{x_i^*(\xi)Ex_i(\xi)}{x_i^*(\xi)H(\xi)x_i(\xi)} - \frac{x_i^*(\xi)Fx_i(\xi)}{x_i^*(\xi)M(\xi)x_i(\xi)} = \zeta_i(\xi), \eqno(2.8)$$
where $\ln$ is the complex version of the natural logarithm function. If $\lambda_i(\xi)$ is simple for all $\xi \in [0,\eta]$, then the bound (2.5) follows by integrating from $0$ to $\eta$. In Kato [12, Theorem II.6.1, p. 139] it is shown that the eigenvalues of $H(\xi)$ in $S$ are real analytic, even when they are multiple. Moreover, Kato goes on to point out that there are only a finite number of $\xi$ where $\lambda_i(\xi)$ is multiple, so that $\lambda_i(\xi)$ is continuous and piecewise analytic throughout the interval $[0,\eta]$. Thus we can obtain (2.5) by integrating (2.8) over each of the intervals in which $\lambda_i(\xi)$ is analytic.

Our usual way of interpreting this bound comes from the following corollary.

Corollary 2.1. Assume the hypothesis and terminology of Theorem 2.1. For all $i \in S$ we have
$$\left|\frac{\lambda_i(\eta) - \lambda_i(0)}{\lambda_i(0)}\right| \le |\exp(\eta\bar\zeta_i) - 1|,$$
where
$$\bar\zeta_i = \max_\xi|\mathrm{Re}(\zeta_i(\xi))| + \mathrm{i}\,\max_\xi|\mathrm{Im}(\zeta_i(\xi))|.$$

Neither the exact expression of Theorem 2.1 nor the bound in Corollary 2.1 is computable. We can only hope to compute the function $\zeta_i(\xi)$ at one point, $\xi = 0$. Thus we will always use the computable first order approximation‡
$$\hat\zeta_i = |\mathrm{Re}(\zeta_i(0))| + \mathrm{i}\,|\mathrm{Im}(\zeta_i(0))|.$$
If this value is large, then the corresponding eigenvalue is sensitive. Moreover, if $\eta$ is sufficiently small, then this approximation will give us appropriate qualitative information.

‡ This approximation can be computed by switching the roles of the perturbed and unperturbed problems. The backward errors $E$ and $F$ simply change signs, and the "original" vector $x_i(0)$ becomes the computed one.
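The first order behavior underlying Theorem 2.1 is easy to exercise numerically. The following sketch is ours, not the authors'; it assumes NumPy, uses a random Hermitian pair with $M$ positive definite, reduces the pencil by a Cholesky factorization, and checks that the change in each eigenvalue matches $\eta\,\zeta_i(0)\,\lambda_i(0)$ to first order:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
H = rng.standard_normal((n, n)); H = (H + H.T) / 2
B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)  # positive definite
E = rng.standard_normal((n, n)); E = (E + E.T) / 2            # E = dH / eta
F = rng.standard_normal((n, n)); F = (F + F.T) / 2            # F = dM / eta

def pencil_eig(H, M):
    """Eigenpairs of H x = lam M x via Cholesky reduction M = L L^T."""
    L = np.linalg.cholesky(M)
    Li = np.linalg.inv(L)
    lam, Y = np.linalg.eigh(Li @ H @ Li.T)
    return lam, Li.T @ Y   # pencil eigenvectors, normalized so x^T M x = 1

lam, X = pencil_eig(H, M)
eta = 1e-7
lam_eta, _ = pencil_eig(H + eta * E, M + eta * F)

# zeta_i(0) = x^*Ex / x^*Hx - x^*Fx / x^*Mx, as in (2.6)
zeta = np.array([X[:, i] @ E @ X[:, i] / (X[:, i] @ H @ X[:, i])
                 - X[:, i] @ F @ X[:, i] / (X[:, i] @ M @ X[:, i])
                 for i in range(n)])
# first order prediction: lam(eta) - lam(0) ~ eta * zeta * lam(0)
err = np.abs((lam_eta - lam) - eta * zeta * lam)
print(err.max())
```

The residual is of second order in $\eta$, consistent with (2.7).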
A very important issue in this paper is whether or not an index $i$ is in the set $S$. From the proof of Theorem 2.1, it can easily be inferred that $i \in S$ if and only if $\zeta_i(\xi)$ in (2.6) is bounded for each $\xi$. That, of course, is not possible to verify, since we cannot compute $\zeta_i(\xi)$ for every value of $\xi$. We will give a heuristic criterion for membership in this set, justified below. The discussion below assumes that $\lambda_i(\xi)$ is simple; we can use an argument like that in Theorem 2.1 to consider the case where $\lambda_i(\xi)$ is not simple.

First, consider the case where $H(\bar\xi)x_i(\bar\xi) = 0$ for some $\bar\xi$, but $M(\xi)x_i(\xi) \ne 0$ for all $\xi$. Then $\lambda_i(\bar\xi) = 0$ for some $\bar\xi$. By the mean value theorem, for some $\theta \in [0,\bar\xi]$ we have
$$\lambda_i(\bar\xi) = \lambda_i(0) + \bar\xi\,\dot\lambda_i(\theta) = 0.$$
Thus
$$\lambda_i(0) = -\bar\xi\,\dot\lambda_i(\theta).$$
If
$$|\lambda_i(0)| > \eta\,\max_\xi|\dot\lambda_i(\xi)|,$$
then $i \in S$. Of course, this is not verifiable either. Thus a good heuristic is
$$|\lambda_i(0)| \gg \eta\,|\dot\lambda_i(0)| \;\Longrightarrow\; i \in S,$$
or, equivalently,
$$\eta\,|\hat\zeta_i| \ll 1 \;\Longrightarrow\; i \in S. \eqno(2.9)$$
If we assume that $M(\bar\xi)x_i(\bar\xi) = 0$ for some $\bar\xi$, but $H(\xi)x_i(\xi) \ne 0$ for all $\xi$, similar reasoning arrives again at the heuristic (2.9). The only way to know for certain if $i \in S^c$, the complement of $S$, is to discover a perturbation in the class of interest that makes either $H(\xi)x_i(\xi) = 0$ or $M(\xi)x_i(\xi) = 0$.

It is entirely possible that $H(\xi)x_i(\xi) = M(\xi)x_i(\xi) = 0$ and that $\lambda_i(\xi)$ is defined in the limit and bounded for all $\xi$. This is the so-called $(0,0)$-eigenvalue case. To exclude this case as well, we may wish to impose the slightly stricter criterion
$$\eta\left(\frac{|x_i^*(0)Ex_i(0)|}{|x_i^*(0)H(0)x_i(0)|} + \frac{|x_i^*(0)Fx_i(0)|}{|x_i^*(0)M(0)x_i(0)|}\right) \ll 1. \eqno(2.10)$$
If (2.10) is not true, then it is unlikely that a reasonable relative error bound on $\lambda_i$ is obtainable. Absolute error bounds are unlikely to be improved upon for such eigenvalues. If (2.10) holds, then the estimate
$$\left|\frac{\lambda_i(\eta) - \lambda_i(0)}{\lambda_i(0)}\right| \le |\exp(\eta\hat\zeta_i) - 1| + O(\eta^2)$$
is accurate and computable. We demonstrate this point with an example.

Example 2.1. We take $M = I$ and $H = DAD$, where
$$A = \begin{pmatrix} 1 & -1 & 0 & 0\\ -1 & 2 & -1 & 0\\ 0 & -1 & 2 & -1\\ 0 & 0 & -1 & 1 \end{pmatrix}, \qquad D = \mathrm{diag}(1,\ 10^8,\ 10^4,\ 10^6).$$
The matrix $H$ has exactly one zero eigenvalue. We then take $E = |H|$. To the digits displayed, MATLAB computes the eigenvalues
$$\Lambda = \mathrm{diag}(\lambda_1,\ \lambda_2,\ \lambda_3,\ 3.562\times10^{-17}).$$
The last eigenvalue should be zero, but is not. The clue here is that the corresponding values of $\hat\zeta_i$, $i = 1,2,3,4$, are
$$\hat\zeta = (1,\ 1,\ 13,\ 3.9\times10^{17}).$$
If we choose $\eta = 2.2204\times10^{-16}$, the relative distance between two floating point numbers in IEEE double precision, we see that
$$\eta\,\hat\zeta_4 = 87 > 1;$$
thus the last eigenvalue cannot be reliably separated from zero, and the small componentwise perturbation $\tilde H = H - \eta|H|$ makes this eigenvalue negative. However, all of the other eigenvalues are well separated from zero. The non-zero eigenvalues of $H$ are all well-behaved under componentwise perturbations. We will show later (Example 5.5) that the subspace associated with the zero eigenvalue is also well-behaved.

Since for Hermitian eigenvalue problems the eigenvalues are real, we will use the following simplification of Corollary 2.1.

Corollary 2.2. Assume the hypothesis and terminology of Theorem 2.1. If $\lambda_i(\xi) \in \mathbb{R}$ for all $\xi \in [0,\eta]$ and $i \in S$, then
$$\exp(-\eta\bar\zeta_i) \le \frac{\lambda_i(\eta)}{\lambda_i(0)} \le \exp(\eta\bar\zeta_i), \eqno(2.11)$$
where
$$\bar\zeta_i = \max_\xi\left|\frac{x_i^*(\xi)Ex_i(\xi)}{x_i^*(\xi)H(\xi)x_i(\xi)} - \frac{x_i^*(\xi)Fx_i(\xi)}{x_i^*(\xi)M(\xi)x_i(\xi)}\right|.$$
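Example 2.1 can be reproduced in a few lines of NumPy. This is a sketch of ours, not the authors' MATLAB code, and exact digits depend on the eigensolver used; it forms $H = DAD$, computes the eigenvalues, and evaluates the first order estimates $\hat\zeta_i$ with $E = |H|$ and $M = I$, so that $\eta\hat\zeta_i \gg 1$ flags the eigenvalue that cannot be separated from zero:

```python
import numpy as np

# Example 2.1: H = D A D has one exact zero eigenvalue (A e = 0 for e = ones)
A = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
D = np.diag([1., 1e8, 1e4, 1e6])
H = D @ A @ D
E = np.abs(H)                      # componentwise perturbation direction, |E| <= |H|

lam, X = np.linalg.eigh(H)         # ascending order; lam[0] is the "zero" eigenvalue
# zeta_hat_i = |x_i^T E x_i / x_i^T H x_i| with F = 0, and x_i^T H x_i = lam_i
zeta_hat = np.abs(np.einsum('ji,jk,ki->i', X, E, X) / lam)
eps = np.finfo(float).eps          # eta: one unit of IEEE double rounding
print(lam)
print(eps * zeta_hat)              # a value >> 1 flags an unreliable eigenvalue
```

The three nonzero eigenvalues have modest estimates, while the near-zero one is flagged as not separable from zero, as in the text.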
Our bound improves similar results from [1, 5, 6, 18], since it is applicable to a larger class of matrix pairs, accommodates the case when one or both matrices are singular, and gives a nearly optimal local condition number for each non-zero eigenvalue. If the perturbations $E$ and $F$ are not structured in any particular way with regard to $x_i(\xi)$, then the bound from Theorem 2.1 will not be much better than classical normwise bounds [19, 17, 9]. For structured perturbations the first order approximation of our bound can be much sharper than the classical bounds and the relative bounds from [1, 5, 6, 18], as shown in the examples of §5.

2.2. The singular value decomposition

The singular value decomposition (SVD) of a matrix $A \in \mathbb{C}^{m\times n}$ is given by $A = U\Sigma V^*$, where $U$ and $V$ are unitary and $\Sigma = \mathrm{diag}(\sigma_i)$ is diagonal and nonnegative. For simplicity we assume $m \ge n$; for $m < n$, analogous results follow by considering $A^*$.

Corollary 2.3. Let $\tilde A = A + \delta A = A + \eta E$, where $E = \delta A/\eta$. Define $A(\xi) = A + \xi E$ for $\xi \in [0,\eta]$. Let $A(\xi)$ have the singular value decomposition
$$A(\xi) = U(\xi)\Sigma(\xi)V^*(\xi), \qquad \xi \in [0,\eta],$$
where $U(\xi) \in \mathbb{C}^{m\times m}$ and $V(\xi) \in \mathbb{C}^{n\times n}$ are unitary and
$$\Sigma(\xi) = \mathrm{diag}(\sigma_1(\xi),\ldots,\sigma_n(\xi)), \quad U(\xi) = (u_1(\xi),\ldots,u_m(\xi)), \quad V(\xi) = (v_1(\xi),\ldots,v_n(\xi)).$$
Then for each index $i$ such that $\sigma_i(\xi) \ne 0$ for all $\xi \in [0,\eta]$ we have
$$\exp(-\eta\bar\zeta_i) \le \frac{\sigma_i(\eta)}{\sigma_i(0)} \le \exp(\eta\bar\zeta_i), \eqno(2.12)$$
where
$$\bar\zeta_i = \max_\xi\left|\frac{\mathrm{Re}(u_i^*(\xi)Ev_i(\xi))}{u_i^*(\xi)A(\xi)v_i(\xi)}\right|. \eqno(2.13)$$

Proof. We note that $\sigma_i(\xi)$ is the $i$th eigenvalue of (1.1) with $M = I$ and $H(\xi)$ given by
$$H(\xi) = \begin{pmatrix} 0 & A(\xi)\\ A^*(\xi) & 0 \end{pmatrix}.$$
The corresponding eigenvector is
$$x_i(\xi) = \frac{1}{\sqrt 2}\begin{pmatrix} u_i(\xi)\\ v_i(\xi)\end{pmatrix}.$$
If we evaluate $\zeta_i(\xi)$ in (2.6) we obtain
$$\zeta_i(\xi) = \frac{x_i^*(\xi)\begin{pmatrix}0 & E\\ E^* & 0\end{pmatrix}x_i(\xi)}{x_i^*(\xi)H(\xi)x_i(\xi)} = \frac{\mathrm{Re}(u_i^*(\xi)Ev_i(\xi))}{u_i^*(\xi)A(\xi)v_i(\xi)}.$$
If we then apply Corollary 2.2, we have (2.13).

3. Effect of structured perturbations

In this section we discuss the effect of common structured errors. For this part of the theory we state the results for the SVD and the Hermitian pencil $(H, M)$. Similar bounds which can be derived for generalizations of the SVD are given in [2].

If we had exact expressions for the perturbation matrices $E$ and $F$ from (2.1), then Theorem 2.1 and Corollary 2.1 could always be used to give a good bound. However, it is extremely rare that a perturbation resulting from either data or an algorithm is known exactly. Usually we just have a bound for it, and, with some luck, we have a structured bound. In this section we discuss what can be determined from such a bound. Two structures of perturbation are discussed, although there are others.

The first structure discussed is scaled perturbations, that is, when $E$ has the form (and analogously $F$)
$$E = D^*E_SD, \qquad \|E_S\| \le 1.$$
Such perturbations are common in the discussion of Jacobi-type methods [5]. When discussing the SVD, we may discuss the two-sided perturbation
$$E = D_L^*E_SD_R, \qquad \|E_S\| \le 1.$$
However, for simplicity, we will just discuss the perturbation
$$E = E_SD, \qquad \|E_S\| \le 1.$$
Our results are easy to generalize to the two-sided perturbations.

The other structure discussed here is componentwise perturbations. Here we assume that
$$|E| \le |H|, \qquad |F| \le |M|,$$
where both the inequality and the absolute value are componentwise. Such perturbations are common for highly structured eigenvalue problems and for data perturbations.
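The first order content of Corollary 2.3 can be checked directly: at $\xi = 0$ the quantity $\zeta_i(0) = \mathrm{Re}(u_i^*Ev_i)/\sigma_i$ predicts the relative change in each singular value. A small NumPy sketch of ours (random data, names are our own):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
A = rng.standard_normal((m, n))
E = rng.standard_normal((m, n))    # E = dA / eta
eta = 1e-7

U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_eta = np.linalg.svd(A + eta * E, compute_uv=False)

# zeta_i(0) = Re(u_i^* E v_i) / (u_i^* A v_i), where u_i^* A v_i = sigma_i
zeta = np.einsum('ji,jk,ik->i', U, E, Vt) / s
err = np.abs((s_eta - s) / s - eta * zeta)  # second order in eta
print(err.max())
```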
3.1. Structured perturbations of the SVD

We suppose that
$$\tilde A = A + \delta A = A + \eta E, \qquad E = E_SD, \eqno(3.1)$$
where $D$ is some right grading matrix and $\|E_S\| \le 1$. An important context for this class of perturbations is the analysis of Jacobi-type methods [5], where
$$D = \mathrm{diag}(\|A(:,1)\|,\ \|A(:,2)\|,\ \ldots,\ \|A(:,n)\|).$$
However, the results here allow $D$ to be any matrix. For $\xi \in [0,\eta]$ let $A(\xi)$ and its SVD be defined as in Corollary 2.3. As before, let $S$ be the set of indices for which $\sigma_i(\xi) \ne 0$ for all $\xi \in [0,\eta]$.

We now introduce the notion of a truncated SVD. In this case, we truncate with respect to the index set $S$.

Definition 3.1. Let $k = |S|$, and let the singular values of $A(\xi)$ whose indices are in $S$ correspond to singular values $\sigma_1(\xi),\ldots,\sigma_k(\xi)$ in non-increasing order. The truncated SVD of $A(\xi)$ with respect to $S$ is defined as
$$A(\xi,S) = U(\xi)\Sigma(\xi,S)V^*(\xi),$$
where
$$\Sigma(\xi,S) = \mathrm{diag}(\sigma_1(\xi),\ \sigma_2(\xi),\ \ldots,\ \sigma_k(\xi),\ 0,\ \ldots,\ 0).$$

It is also appropriate to define the Moore-Penrose pseudoinverse of $A(\xi,S)$. For a fixed matrix $A \in \mathbb{C}^{m\times n}$, the Moore-Penrose pseudoinverse is the unique matrix $A^\dagger \in \mathbb{C}^{n\times m}$ satisfying the four Penrose conditions
$$1.\ AA^\dagger A = A; \qquad 2.\ A^\dagger AA^\dagger = A^\dagger; \qquad 3.\ (AA^\dagger)^* = AA^\dagger; \qquad 4.\ (A^\dagger A)^* = A^\dagger A.$$
It is easily verified that the Moore-Penrose pseudoinverse of $A(\xi,S)$ is given by
$$A^\dagger(\xi,S) = V(\xi)\Sigma^\dagger(\xi,S)U^*(\xi),$$
where
$$\Sigma^\dagger(\xi,S) = \mathrm{diag}(\sigma_1^{-1}(\xi),\ \ldots,\ \sigma_k^{-1}(\xi),\ 0,\ \ldots,\ 0),$$
and $k$ is as specified in Definition 3.1. We now use this form to establish global error bounds for all $\sigma_i$, $i \in S$.
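That the truncated SVD and its pseudoinverse satisfy the four Penrose conditions for each other is easy to verify numerically. The following sketch (ours, assuming NumPy; the truncation index $k$ is an arbitrary choice for illustration) builds $A(0,S)$ and $A^\dagger(0,S)$ from a full SVD:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 5, 4, 2
A = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A)        # full SVD: U is m x m, Vt is n x n
# truncate with respect to an index set S of the k retained singular values
A_S = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]               # A(0, S)
A_S_pinv = Vt[:k, :].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T  # A^dagger(0, S)

# the four Penrose conditions for the pair (A(0,S), A^dagger(0,S))
c1 = np.allclose(A_S @ A_S_pinv @ A_S, A_S)
c2 = np.allclose(A_S_pinv @ A_S @ A_S_pinv, A_S_pinv)
c3 = np.allclose((A_S @ A_S_pinv).T, A_S @ A_S_pinv)      # Hermitian (real: symmetric)
c4 = np.allclose((A_S_pinv @ A_S).T, A_S_pinv @ A_S)
print(c1, c2, c3, c4)
```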
Proposition 3.1. For $\xi \in [0,\eta]$ let $A(\xi) = A + \xi E_SD$, where $E_S$ and $D$ are defined by (3.1), have the singular value decomposition assumed in Corollary 2.3. Then (2.12) holds for each $i \in S$ with $\bar\zeta_i$ bounded by
$$\bar\zeta_i \le \max_\xi\|DA^\dagger(\xi,S)u_i(\xi)\| \le \max_\xi\|DA^\dagger(\xi,S)\|.$$

Proof. From (2.13), for each $i \in S$ we have
$$\bar\zeta_i = \max_\xi\frac{|\mathrm{Re}(u_i^*(\xi)E_SDv_i(\xi))|}{|u_i^*(\xi)A(\xi)v_i(\xi)|}. \eqno(3.2)$$
Using the fact that $\|u_i^*(\xi)E_S\| \le 1$ with (3.2) yields
$$\bar\zeta_i \le \max_\xi\frac{\|Dv_i(\xi)\|}{\sigma_i(\xi)}. \eqno(3.3)$$
By the definition of $A^\dagger(\xi,S)$ we have
$$v_i(\xi) = A^\dagger(\xi,S)u_i(\xi)\,\sigma_i(\xi). \eqno(3.4)$$
Combining (3.3) with (3.4) yields the desired result.

The following corollary is a componentwise error bound that we might expect from singular value improvement procedures. Its proof is very similar to the scaled case.

Corollary 3.1. Let
$$\tilde A = A + \delta A = A + \eta E, \qquad |E| \le |A|,$$
where both the inequality and the absolute value are componentwise. For $\xi \in [0,\eta]$ let $A(\xi) = A + \xi E$ have the singular value decomposition assumed in Corollary 2.3. Then (2.12) holds for each $i \in S$ with $\bar\zeta_i$ bounded by
$$\bar\zeta_i \le \max_\xi\bigl\|\,|A|\,|A^\dagger(\xi,S)u_i(\xi)|\,\bigr\| \le \max_\xi\bigl\|\,|A|\,|A^\dagger(\xi,S)|\,\bigr\|.$$

Proposition 3.1 and Corollary 3.1 generalize the corresponding results by Demmel and Veselić [5] and Li [13] to matrices which do not necessarily have full rank. Corollary 3.1 is illustrated by Example 5.1 in §5.

3.2. Structured perturbations for the Hermitian generalized eigenvalue problem

We consider the Hermitian pencil $H - \lambda M$, where $M$ is positive semi-definite, and the perturbed pencil $\tilde H - \tilde\lambda\tilde M$, where $\tilde H$ and $\tilde M$ are defined by
(2.1). Suppose that the family of pencils $H(\xi) - \lambda M(\xi)$, $\xi \in [0,\eta]$, defined by (2.2), has the form
$$H(\xi) = X^{-*}(\xi)\Omega(\xi)X^{-1}(\xi), \qquad M(\xi) = X^{-*}(\xi)J(\xi)X^{-1}(\xi), \eqno(3.5)$$
where
$$X(\xi) = (x_1(\xi),\ldots,x_n(\xi)), \quad \Omega(\xi) = \mathrm{diag}(\omega_1(\xi),\ldots,\omega_n(\xi)), \quad J(\xi) = \mathrm{diag}(j_1(\xi),\ldots,j_n(\xi)).$$
Here
$$j_i(\xi) = \begin{cases}0 & \text{if } x_i(\xi) \in \mathrm{Null}(M(\xi)),\\ 1 & \text{otherwise.}\end{cases}$$
As done in Veselić and Slapničar [18], we relate our problem to a positive definite eigenvalue problem. To do so, we first define the spectral absolute value of the matrix $H(\xi)$ with respect to $M(\xi)$.

Definition 3.2. For $\xi \in [0,\eta]$ let the pair $(H(\xi), M(\xi))$ have the generalized eigendecomposition in (3.5). The spectral absolute value of $H(\xi)$ with respect to $M(\xi)$ is the matrix $[H(\xi)]_M$ given by
$$[H(\xi)]_M = X^{-*}(\xi)\,|\Omega(\xi)|\,X^{-1}(\xi),$$
where $|\Omega(\xi)| = \mathrm{diag}(|\omega_1(\xi)|,\ldots,|\omega_n(\xi)|)$. If $M(\xi) = I$ for all $\xi$, then we define $[H(\xi)] = [H(\xi)]_I$.

If we let $X^{-1}(\xi)$ have the factorization
$$X^{-1}(\xi) = Q(\xi)R(\xi),$$
where $Q(\xi)$ is unitary, then it is easily seen that
$$[H(\xi)]_M = R^*(\xi)\,\bigl[R^{-*}(\xi)H(\xi)R^{-1}(\xi)\bigr]\,R(\xi).$$
This is the definition given by Veselić and Slapničar [18] for the case where $M$ is nonsingular. We also note that for the case $M = I$ we have
$$[H(\xi)] = \sqrt{H^2(\xi)},$$
where $\sqrt{\cdot}$ denotes the matrix square root; that is, $[H(\xi)]$ is the positive semi-definite polar factor of $H(\xi)$.

We will now define a truncated version of $[H(\xi)]_M$. Define $S$ as in (2.4).
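Before truncating, the characterization $[H] = \sqrt{H^2}$ for the case $M = I$ can be verified directly. A minimal NumPy sketch of ours, computing $[H]$ once from the eigendecomposition and once as the square root of $H^2$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
H = rng.standard_normal((n, n)); H = (H + H.T) / 2   # indefinite Hermitian

lam, X = np.linalg.eigh(H)
absH = X @ np.diag(np.abs(lam)) @ X.T                # [H] = X |Lambda| X^*

# [H] is the positive semi-definite square root of H^2
lam2, X2 = np.linalg.eigh(H @ H)
sqrtH2 = X2 @ np.diag(np.sqrt(np.abs(lam2))) @ X2.T  # abs() guards tiny negatives
diff = np.linalg.norm(absH - sqrtH2)
print(diff)
```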
Definition 3.3. The truncated spectral absolute value of $H(\xi)$ with respect to $M(\xi)$ is the matrix $[H(\xi,S)]_M$ such that
$$[H(\xi,S)]_M = X^{-*}(\xi)\,|\Omega(\xi,S)|\,X^{-1}(\xi),$$
where
$$\Omega(\xi,S) = \mathrm{diag}(\omega_1(\xi,S),\ldots,\omega_n(\xi,S))$$
and
$$\omega_i(\xi,S) = \begin{cases}\omega_i(\xi), & i \in S,\\ 0, & \text{otherwise.}\end{cases}$$
We define $M(\xi,S)$ and $J(\xi,S)$ conformally.

Clearly, $[H(\xi,S)]_M$ is positive semi-definite. We can factor $[H(\xi,S)]_M$ and $M(\xi,S)$ into the forms
$$[H(\xi,S)]_M = C^*(\xi,S)C(\xi,S), \qquad M(\xi,S) = G^*(\xi,S)G(\xi,S), \eqno(3.6)$$
respectively, where
$$C(\xi,S) = U(\xi)\Sigma(\xi,S)X^{-1}(\xi) \in \mathbb{C}^{m\times n},\ m \le n, \qquad G(\xi,S) = V(\xi)J(\xi,S)X^{-1}(\xi) \in \mathbb{C}^{p\times n},\ p \le n. \eqno(3.7)$$
Here
$$\Sigma(\xi,S) = \mathrm{diag}(\sigma_i(\xi,S)) = \mathrm{diag}\bigl(\sqrt{|\omega_i(\xi,S)|}\bigr),$$
and $U(\xi)$ and $V(\xi)$ are matrices with orthonormal rows and orthonormal nontrivial columns. That is, the columns of $U(\xi)$ which correspond to $i \in S$ are orthonormal, and the columns of $V(\xi)$ for which $j_i(\xi) = 1$ are orthonormal. Note that the form (3.7) describes the quotient singular value decomposition (QSVD) of the pair $(C(\xi,S), G(\xi,S))$ [15, 16]. The $G(\xi,S)$-weighted pseudoinverse of $C(\xi,S)$ [3, 8] is given by
$$C_G^\dagger(\xi,S) \equiv X(\xi)\Sigma^\dagger(\xi,S)U^*(\xi). \eqno(3.8)$$
Likewise, the $C(\xi,S)$-weighted pseudoinverse of $G(\xi,S)$ is
$$G_C^\dagger(\xi,S) \equiv X(\xi)J^\dagger(\xi,S)V^*(\xi).$$
Using this structure, we can establish bounds on all of the eigenvalues that do not change sign under the perturbation.
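For $M = I$ (so $X$ unitary and $X^{-1} = X^*$) the truncated factorization (3.6) is easy to instantiate. The sketch below is ours, assuming NumPy; the cutoff defining the index set $S$ is an arbitrary illustrative choice. It checks that $[H(0,S)]$ is positive semi-definite and that $C^*C$ reproduces it:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
H = rng.standard_normal((n, n)); H = (H + H.T) / 2   # indefinite, M = I

om, X = np.linalg.eigh(H)                 # H = X Omega X^*, X unitary
S = np.abs(om) > 0.5                      # a truncation index set S (illustrative)
om_S = np.where(S, om, 0.0)               # Omega(0, S)

absH_S = X @ np.diag(np.abs(om_S)) @ X.T  # [H(0,S)] = X |Omega(0,S)| X^*
C = np.diag(np.sqrt(np.abs(om_S))) @ X.T  # C(0,S) = Sigma(0,S) X^{-1}

ok_psd = np.all(np.linalg.eigvalsh(absH_S) > -1e-12)
ok_fact = np.allclose(C.T @ C, absH_S)    # factorization (3.6) with M = I
print(ok_psd, ok_fact)
```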
Theorem 3.1. Let the pair $(\tilde H, \tilde M)$ be defined by (2.1). For $\xi \in [0,\eta]$ let $(\lambda_i(\xi), x_i(\xi))$ be the $i$th eigenpair of the pair $(H(\xi), M(\xi))$ defined by (2.2). Let $C(\xi,S)$ and $G(\xi,S)$ be defined by (3.6), and let the QSVD of $(C(\xi,S), G(\xi,S))$ be given by (3.7). Define $S$ as in (2.4). Then each $\lambda_i(\xi)$, $i \in S$, satisfies (2.11), where
$$\bar\zeta_i \le \max_\xi\frac{|x_i^*(\xi)Ex_i(\xi)|}{x_i^*(\xi)[H(\xi,S)]_M\,x_i(\xi)} + \max_\xi\frac{|x_i^*(\xi)Fx_i(\xi)|}{x_i^*(\xi)M(\xi)x_i(\xi)} \eqno(3.9)$$
$$= \max_\xi\bigl|u_i^*(\xi)[C_G^\dagger(\xi,S)]^*E\,C_G^\dagger(\xi,S)u_i(\xi)\bigr| + \max_\xi\bigl|v_i^*(\xi)[G_C^\dagger(\xi,S)]^*F\,G_C^\dagger(\xi,S)v_i(\xi)\bigr|$$
$$\le \max_\xi\bigl\|[C_G^\dagger(\xi,S)]^*E\,C_G^\dagger(\xi,S)\bigr\| + \max_\xi\bigl\|[G_C^\dagger(\xi,S)]^*F\,G_C^\dagger(\xi,S)\bigr\|.$$

Proof. For each $i \in S$, bounding $\bar\zeta_i$ in (2.11) as in Corollary 2.2 yields
$$\bar\zeta_i \le \max_\xi\frac{|x_i^*(\xi)Ex_i(\xi)|}{|x_i^*(\xi)H(\xi)x_i(\xi)|} + \max_\xi\frac{|x_i^*(\xi)Fx_i(\xi)|}{|x_i^*(\xi)M(\xi)x_i(\xi)|} = \max_\xi\frac{|x_i^*(\xi)Ex_i(\xi)|}{\sigma_i(\xi)^2} + \max_\xi\frac{|x_i^*(\xi)Fx_i(\xi)|}{j_i(\xi)^2}.$$
By (3.6) and (3.7) this is just the first equality in (3.9). Since
$$x_i(\xi) = \sigma_i(\xi)\,C_G^\dagger(\xi,S)u_i(\xi) = j_i(\xi)\,G_C^\dagger(\xi,S)v_i(\xi),$$
we have
$$\bar\zeta_i = \max_\xi\bigl|u_i^*(\xi)[C_G^\dagger(\xi,S)]^*E\,C_G^\dagger(\xi,S)u_i(\xi)\bigr| + \max_\xi\bigl|v_i^*(\xi)[G_C^\dagger(\xi,S)]^*F\,G_C^\dagger(\xi,S)v_i(\xi)\bigr|,$$
which is the second equality in (3.9). Classical norm inequalities yield the inequality in (3.9).

The following corollary yields a bound for the case of scaled perturbations discussed by Barlow and Demmel [1].

Corollary 3.2. Assume the hypothesis and terminology of Theorem 3.1. Assume that $E$ and $F$ have the form
$$E = D_H^*E_SD_H, \quad \|E_S\| \le 1, \qquad F = D_M^*F_SD_M, \quad \|F_S\| \le 1.$$
Then for each $i \in S$,
$$\bar\zeta_i \le \max_\xi\|D_HC_G^\dagger(\xi,S)u_i(\xi)\|^2 + \max_\xi\|D_MG_C^\dagger(\xi,S)v_i(\xi)\|^2 \le \max_\xi\|D_HC_G^\dagger(\xi,S)\|^2 + \max_\xi\|D_MG_C^\dagger(\xi,S)\|^2.$$

The componentwise version of Theorem 3.1 is obtained similarly as in §3.1 and §3.2.

Corollary 3.3. Assume the hypothesis and terminology of Theorem 3.1. Assume that $|E| \le |H|$ and $|F| \le |M|$. Then
$$\bar\zeta_i \le \max_\xi|u_i^*(\xi)[C_G^\dagger(\xi,S)]^*|\,|H|\,|C_G^\dagger(\xi,S)u_i(\xi)| + \max_\xi|v_i^*(\xi)[G_C^\dagger(\xi,S)]^*|\,|M|\,|G_C^\dagger(\xi,S)v_i(\xi)|$$
$$\le \max_\xi\bigl\|\,|[C_G^\dagger(\xi,S)]^*|\,|H|\,|C_G^\dagger(\xi,S)|\,\bigr\| + \max_\xi\bigl\|\,|[G_C^\dagger(\xi,S)]^*|\,|M|\,|G_C^\dagger(\xi,S)|\,\bigr\|.$$

Corollary 3.3 is illustrated by Examples 5.2, 5.3 and 5.4 in §5. It generalizes bounds in Veselić and Slapničar [18] to pencils that are singular and, as shown in the examples, often allows us to improve upon them.

4. Error bounds on subspaces

We now consider the effect of structured perturbations on the eigenvectors of $H$. We confine our attention to the perturbed problem
$$(H + \delta H)\tilde x = \tilde\lambda\tilde x,$$
where $\delta H = \eta E$. Consider the family of Hermitian eigenproblems
$$H(\xi)x(\xi) = \lambda(\xi)x(\xi), \qquad H(\xi) = H + \xi E, \qquad \xi \in [0,\eta]. \eqno(4.1)$$
Define the set $S$ by
$$S = \{i : \lambda_i(\xi) \ne 0,\ \xi \in [0,\eta]\}, \eqno(4.2)$$
in which case its set complement is
$$S^c = \{i : \lambda_i(\xi) = 0 \text{ for some } \xi \in [0,\eta]\}. \eqno(4.3)$$
Suppose that $S$ has $k$ elements and that $S^c$ has $n-k$ elements. Let $X_1, \tilde X_1 \in \mathbb{C}^{n\times k}$ be the matrices of eigenvectors of $H$ and $H + \delta H$ associated with $S$, and let $X_2, \tilde X_2 \in \mathbb{C}^{n\times(n-k)}$ be the matrices of eigenvectors associated with $S^c$. Thus we have
$$HX_j = X_j\Lambda_j, \qquad (H + \delta H)\tilde X_j = \tilde X_j\tilde\Lambda_j, \qquad j = 1,2. \eqno(4.4)$$
We now define two separate types of relative gaps:
$$\mathrm{relgap}(\lambda,\mu) = \frac{|\lambda - \mu|}{|\lambda\mu|^{1/2}}, \qquad \lambda,\mu \ne 0;$$
$$\mathrm{relgap}_0(\lambda,\mu) = \frac{|\lambda - \mu|}{|\lambda|}.$$
The first definition is just that from Barlow and Demmel [1], and the second definition allows $\mu$ (but not $\lambda$) to be zero. This allows us to bound the error in the "zero" subspaces, that is, the eigenvectors in $S^c$. These straightforward extensions to sets of eigenvalues will be used: for $\Lambda = \mathrm{diag}(\lambda_i)$ and $\Gamma = \mathrm{diag}(\mu_j)$,
$$\mathrm{relgap}(\Lambda,\Gamma) = \min_{(i,j)}\mathrm{relgap}(\lambda_i,\mu_j), \qquad \mathrm{relgap}_0(\Lambda,\Gamma) = \min_{(i,j)}\mathrm{relgap}_0(\lambda_i,\mu_j).$$
We will bound
$$\|X_1^*\tilde X_2\|_F,$$
which is the "$\sin\Theta$" bound for the error in the zero subspace $X_2$ from Davis and Kahan [4]. We will also partition $X_1$ (conformally $\tilde X_1$) into
$$X_1 = (X_{11}\ X_{12}),$$
where $X_{11}$ and $X_{12}$ are the eigenvectors associated with the eigenvalue matrices $\Lambda_{11}$ and $\Lambda_{12}$ such that
$$\Lambda_1 = \mathrm{diag}(\Lambda_{11}, \Lambda_{12}).$$
We also bound
$$\|X_{11}^*\tilde X_{12}\|_F,$$
which is the "$\sin\Theta$" bound for the error between two subspaces within the non-zero subspace $X_1$. There can be no meaningful bound within the zero subspace $X_2$. We can write down the following theorem on the perturbations of these subspaces.
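The two relative gaps are simple scalar functions; the following sketch (ours, assuming NumPy) implements them and shows that an eigenvalue of order $10^{-8}$ can be far from $2$ in the relative sense, while $\mathrm{relgap}_0$ remains meaningful against a near-zero partner:

```python
import numpy as np

def relgap(lam, mu):
    """Relative gap |lam - mu| / |lam mu|^{1/2}; requires lam, mu != 0."""
    return abs(lam - mu) / np.sqrt(abs(lam * mu))

def relgap0(lam, mu):
    """One-sided relative gap |lam - mu| / |lam|; mu (but not lam) may be zero."""
    return abs(lam - mu) / abs(lam)

print(relgap(2.0, 1e-8))     # large: the two eigenvalues are well separated relatively
print(relgap0(1e-8, 1e-12))  # close to 1: separation from a "zero" eigenvalue
```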
Theorem 4.1. Let $H, \delta H \in \mathbb{C}^{n\times n}$ be Hermitian and let $S$ and $S^c$ be defined by (4.2) and (4.3), respectively. Let $X_j, \tilde X_j$ and $\Lambda_j, \tilde\Lambda_j$, $j = 1,2$, satisfy (4.4). Furthermore, let $\Lambda_1$ be partitioned into $\Lambda_1 = \mathrm{diag}(\Lambda_{11}, \Lambda_{12})$, and let $\tilde\Lambda_1$ be partitioned conformally. Define $X_{1i}$, $i = 1,2$, such that
$$HX_{1i} = X_{1i}\Lambda_{1i},$$
and define $\tilde X_{1i}$, $i = 1,2$, conformally as well. For $\xi \in [0,\eta]$ let $X(\xi)$ be the eigenvector matrix of the $H(\xi)$ from (4.1), and let $\Omega(\xi,S)$ be defined as in Definition 3.3 with $M = I$ and $X^{-1}(\xi) = X^*(\xi)$. Let
$$H(0,S) = X(0)\Omega(0,S)X^*(0), \qquad C(0,S) = |\Omega(0,S)|^{1/2}X^*(0).$$
Then
$$\|X_{11}^*\tilde X_{12}\|_F \le \frac{\|[C^\dagger(0,S)]^*\,\delta H\,C^\dagger(0,S)\|_F}{\mathrm{relgap}(\Lambda_{11},\tilde\Lambda_{12})} \eqno(4.5)$$
and
$$\|X_1^*\tilde X_2\|_F \le \frac{\|H^\dagger(0,S)\,\delta H\,\tilde X_2\|_F}{\mathrm{relgap}_0(\Lambda_1,\tilde\Lambda_2)}. \eqno(4.6)$$

Proof. For any $\lambda_i \ne \tilde\lambda_j$, we have
$$x_i^*\tilde x_j = \frac{x_i^*\,\delta H\,\tilde x_j}{\tilde\lambda_j - \lambda_i}. \eqno(4.7)$$
If $x_i$ is a column of $X_{11}$ and $\tilde x_j$ is a column of $\tilde X_{12}$, then $i, j \in S$; thus
$$|x_i^*\tilde x_j| \le \frac{|x_i^*\,\delta H\,\tilde x_j|}{|\lambda_i\tilde\lambda_j|^{1/2}}\cdot\frac{|\lambda_i\tilde\lambda_j|^{1/2}}{|\tilde\lambda_j - \lambda_i|} = \frac{|x_i^*\,\delta H\,\tilde x_j|}{|\lambda_i\tilde\lambda_j|^{1/2}\,\mathrm{relgap}(\lambda_i,\tilde\lambda_j)} = \frac{|u_i^*[C^\dagger(0,S)]^*\,\delta H\,C^\dagger(0,S)\tilde u_j|}{\mathrm{relgap}(\lambda_i,\tilde\lambda_j)}.$$
Summing these up and using standard norm inequalities yields
$$\|X_{11}^*\tilde X_{12}\|_F \le \frac{\|U_{11}^*[C^\dagger(0,S)]^*\,\delta H\,C^\dagger(0,S)\tilde U_{12}\|_F}{\mathrm{relgap}(\Lambda_{11},\tilde\Lambda_{12})},$$
where $U_{11} = C(0,S)X_{11}|\Lambda_{11}|^{-1/2}$ and $\tilde U_{12} = C(0,S)\tilde X_{12}|\tilde\Lambda_{12}|^{-1/2}$. Since $U_{11}$ and $\tilde U_{12}$ have orthonormal columns, the bound (4.5) follows.

To obtain (4.6), simply assume that $i \in S$ and $j \in S^c$ in (4.7). Then
$$|x_i^*\tilde x_j| \le \frac{|x_i^*\,\delta H\,\tilde x_j|}{|\lambda_i|}\cdot\frac{|\lambda_i|}{|\tilde\lambda_j - \lambda_i|} = \frac{|x_i^*\,\delta H\,\tilde x_j|}{|\lambda_i|\,\mathrm{relgap}_0(\lambda_i,\tilde\lambda_j)} = \frac{|x_i^*H^\dagger(0,S)\,\delta H\,\tilde x_j|}{\mathrm{relgap}_0(\lambda_i,\tilde\lambda_j)}.$$
A similar argument to that above produces (4.6).

This theorem can be generalized to (1.3) with $M$ positive definite if we substitute an $M$-weighted norm for the Euclidean and Frobenius norms. Bounds on the perturbed subspaces for structured perturbations are easy to derive from Theorem 4.1. For instance, let
$$\delta H = D^*\Delta D, \qquad \|\Delta\|_F = \varepsilon_F.$$
Then short arguments from (4.5) and (4.6) lead to
$$\|X_{11}^*\tilde X_{12}\|_F \le \varepsilon_F\,\frac{\max\|DC^\dagger(0,S)\|^2}{\mathrm{relgap}(\Lambda_{11},\tilde\Lambda_{12})}, \qquad \|X_1^*\tilde X_2\|_F \le \varepsilon_F\,\frac{\|DH^\dagger(0,S)\|\,\|D\tilde X_2\|}{\mathrm{relgap}_0(\Lambda_1,\tilde\Lambda_2)}.$$
If we instead assume that
$$|\delta H| \le \varepsilon|H|,$$
then
$$\|X_{11}^*\tilde X_{12}\|_F \le \varepsilon\,\frac{\bigl\|\,|C^\dagger(0,S)|^T\,|H|\,|C^\dagger(0,S)|\,\bigr\|_F}{\mathrm{relgap}(\Lambda_{11},\tilde\Lambda_{12})}, \qquad \|X_1^*\tilde X_2\|_F \le \varepsilon\,\frac{\bigl\|\,|H^\dagger(0,S)|\,|H|\,\bigr\|_F}{\mathrm{relgap}_0(\Lambda_1,\tilde\Lambda_2)}.$$
We note that the error in the zero subspace $X_2$, given by $\|X_1^*\tilde X_2\|_F$, is modest if $\|\,|H^\dagger(0,S)|\,|H|\,\|_F$ or $\|DH^\dagger(0,S)\|\,\|D\tilde X_2\|$ is modest and $\Lambda_1$ has a good relative separation from the near-zero eigenvalues.

5. Examples

In this section we illustrate our results on several examples. We give examples for the structured perturbations of §3, in particular for relative componentwise perturbations of the type
$$\delta H = \eta E, \qquad |E| \le |H|.$$
Such perturbations are highly interesting since they appear during various numerical algorithms for eigenvalue and singular value problems [1, 5, 9, 17, 18, 19]. Such perturbations are sometimes called floating-point perturbations [18]. In all examples we compute the first order approximations of our bounds; thus we cannot expect optimality in all cases.
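Before turning to the examples, the identity (4.7) underlying Theorem 4.1 is easy to check numerically, since it is exact (not merely first order). A NumPy sketch of ours, with random data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
H = rng.standard_normal((n, n)); H = (H + H.T) / 2
dH = 1e-3 * rng.standard_normal((n, n)); dH = (dH + dH.T) / 2

lam, X = np.linalg.eigh(H)            # H x_i = lam_i x_i
lam_t, X_t = np.linalg.eigh(H + dH)   # (H + dH) x~_j = lam~_j x~_j

# identity (4.7): x_i^* x~_j = x_i^* dH x~_j / (lam~_j - lam_i), lam~_j != lam_i
i, j = 0, n - 1
lhs = X[:, i] @ X_t[:, j]
rhs = X[:, i] @ dH @ X_t[:, j] / (lam_t[j] - lam[i])
print(lhs, rhs)
```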
The first example deals with the singular value decomposition and illustrates Corollary 3.1.

Example 5.1. Let
    A:  ?2 4  1 7  1  1?8  1 4  ?6:1 1 2  ?6  1  1 4  1 5  1 2
Note that the last two column vectors of $A$ are nearly parallel. Let $\delta A = \eta E$, where $\eta = 10^{-6}$ and
    E:  1 139  1 2 47  1 3  1?1  1 1 1 1 1 1 1 1 1 1 1 1 1  :4
Also, both $A$ and $D$ are strongly scaled from the right. Let $\delta_i = \sigma_i(A + \delta A) - \sigma_i(A)$. The singular values of $A$ are (properly rounded)
$$(\sigma_1,\ \sigma_2,\ \sigma_3) = (\sigma_1,\ \sigma_2,\ 0.45),$$
and the relative changes in the singular values are
$$\left(\frac{|\delta_1|}{\sigma_1},\ \frac{|\delta_2|}{\sigma_2},\ \frac{|\delta_3|}{\sigma_3}\right) = (2.5\times10^{-7},\ 1.6\times10^{-7},\ 3.0\times10^{-2}).$$
Both singular value decompositions, of $A$ and $A + \delta A$, are computed by the one-sided Jacobi method, whose sufficiently high accuracy is guaranteed by the analysis of Demmel and Veselić [5]. Since $|E| \le 0.42857|A|$, we can apply Corollary 3.1. We compute the first order approximations of the corresponding bounds, that is,
$$\frac{|\delta_i|}{\sigma_i} \lesssim \eta_A\bigl\|\,|A|\,|A^\dagger u_i|\,\bigr\| \le \eta_A\bigl\|\,|A|\,|A^\dagger|\,\bigr\|, \qquad \eta_A = 0.42857\,\eta. \eqno(5.1)$$
Note that we can use the fact that $A$ is strongly scaled from the right to compute the pseudoinverse much more accurately. The bounds obtained by the first inequality in (5.1) are
$$\left(\frac{|\delta_1|}{\sigma_1},\ \frac{|\delta_2|}{\sigma_2},\ \frac{|\delta_3|}{\sigma_3}\right) \lesssim (4.3\times10^{-7},\ 5\times10^{-7},\ 1.8\times10^{-1}).$$
This shows that our bounds are local, and even the first order approximations can be nearly optimal. Note that our relative bound for $\sigma_1$ is slightly worse than the bound $\eta\|E\|/\sigma_1 = 2.97\times10^{-7}$ which is derived from the classical normwise perturbation theory. This is to be expected for the
largest singular value, since it is always perfectly conditioned in the relative sense (unless it is $0$), and our bounds have an extra condition number. However, the classical bound is meaningless for the other singular values. The simplified bounds, that is, the second inequality in (5.1) and the Demmel-Veselić bound,
$$\max_{i=1,2,3}\frac{|\delta_i|}{\sigma_i} \le \eta_A\bigl\|\,|A|\,|A^\dagger|\,\bigr\| = 1.8\times10^{-1}$$
and
$$\max_{i=1,2,3}\frac{|\delta_i|}{\sigma_i} \le \eta_A\sqrt{n}\,\bigl\|[A\,\mathrm{diag}(\|A(:,i)\|)^{-1}]^\dagger\bigr\| = 3.8\times10^{-1},$$
respectively, both cover only the worst case.

The following two examples illustrate Corollary 3.3. Both examples were also analyzed in [18].

Example 5.2. Let
    H:  1?8  1
Let $\delta H = \eta E$, where $\eta = 0.5\times10^{-5}$ and
    E:  1  :6  ?1  :8  :8  ?1:2  1?11
Thus $|E| \le |H|$. Let $\delta_i = \lambda_i(H + \delta H) - \lambda_i(H)$. The eigenvalues of $H$ are (properly rounded)
$$(\lambda_1,\ \lambda_2,\ \lambda_3) = (2,\ -1,\ 5\times10^{-9}),$$
and the relative changes in the eigenvalues are
$$\left(\frac{|\delta_1|}{|\lambda_1|},\ \frac{|\delta_2|}{|\lambda_2|},\ \frac{|\delta_3|}{|\lambda_3|}\right) = (6.7\times10^{-7},\ 1.7\times10^{-6},\ 9.0\times10^{-6}).$$
We want to apply Corollary 3.3 with $M = I$. Since the eigenvector matrix $X(\xi)$ is itself unitary, we can take $U(\xi) = V(\xi) = X^{-*}(\xi) = X(\xi)$ in (3.7), which implies $C(\xi,S) = [H(\xi,S)]^{1/2}$, $G(\xi) = I$. The first order approximations of the bounds from Corollary 3.3 are
$$\frac{|\delta_i|}{|\lambda_i|} \lesssim \eta\,|u_i^*[H]^{-1/2}|\,|H|\,|[H]^{-1/2}u_i| \le \eta\,\bigl\|\,|[H]^{-1/2}|\,|H|\,|[H]^{-1/2}|\,\bigr\|. \eqno(5.2)$$
The bounds obtained by the first inequality in (5.2) are
$$\left(\frac{|\delta_1|}{|\lambda_1|},\ \frac{|\delta_2|}{|\lambda_2|},\ \frac{|\delta_3|}{|\lambda_3|}\right) \lesssim (5\times10^{-6},\ 8.3\times10^{-6},\ 1.5\times10^{-5}),$$
which is again nearly optimal. The heuristic (2.9) implies that even the tiniest eigenvalue $\lambda_3$ does not cross zero for any $\xi \in [0,\eta]$, even though $\lambda_3$ is in magnitude much less than $\eta$. On the other hand, the simplified bound, that is, the bound obtained from the second inequality in (5.2), is
$$\max_{i=1,2,3}\frac{|\delta_i|}{|\lambda_i|} \le 0.95.$$
The bound from Veselić and Slapničar [18],
$$\max_{i=1,2,3}\frac{|\delta_i|}{|\lambda_i|} \le \eta\,n\,\|(D^{-1}[H]D^{-1})^{-1}\|, \eqno(5.3)$$
where $D = \mathrm{diag}(\sqrt{[H]_{ii}})$, is useless.

Example 5.3. Another interesting example is the following: let $H = DAD$, where
    A:  1 1 ?1 ?1 ?  ?1 1 ?1 ?1 1 ?1        D:  1  1 ?1 ?1 ?
The eigenvector matrix of $H$ is
    X:  1/√2  1/2  1/2  ?1/2  1/2  1/√2  ?1/2  1/2  ?1/√2  ?1/√2  1/2  1/2
Let $\delta H = \eta E$, where $\eta = 0.5\times10^{-6}$, $E = DE_SD$, $E_S = ww^T$, $w = (1\ \ 1\ -1\ \ 1)^T$. The eigenvalues of $H$ are
$$(\lambda_1,\ \lambda_2,\ \lambda_3,\ \lambda_4) = (\lambda_1,\ \lambda_2,\ -2\times10^{8},\ 2),$$
and the relative changes in the eigenvalues are
$$\left(\frac{|\delta_1|}{|\lambda_1|},\ \frac{|\delta_2|}{|\lambda_2|},\ \frac{|\delta_3|}{|\lambda_3|},\ \frac{|\delta_4|}{|\lambda_4|}\right) = (0,\ 49,\ 0.98,\ 5\times10^{-7}).$$
There is no change in $\lambda_1$ because its eigenvector satisfies $Ex_1 = 0$. Unless we use the exact perturbation $E$ with Theorem 2.1, or incorporate this into the structure of our bound, we will not detect this. We see that $\lambda_2$ and $\lambda_3$ are very sensitive. The bounds obtained by the first inequality in (5.2) are
$$\left(\frac{|\delta_1|}{|\lambda_1|},\ \frac{|\delta_2|}{|\lambda_2|},\ \frac{|\delta_3|}{|\lambda_3|},\ \frac{|\delta_4|}{|\lambda_4|}\right) \lesssim (5\times10^{-7},\ 25,\ 25,\ 5\times10^{-7}),$$
and clearly show the sensitivity of $\lambda_2$ and $\lambda_3$. The bound on $\lambda_2$ is too optimistic because this is a first order theory, and the value of $\eta\hat\zeta_i$ is now too large ($> 1$) for it to be relevant. All eigenvalues are in the set $S$, since all of the eigenvectors retain their sign pattern under the perturbation, but two of them are not well-behaved and can only be meaningfully bounded by exactly computing the integral in Theorem 2.1, or in absolute error terms. For $\lambda_2$ the absolute bound
$$|\lambda_2 - \tilde\lambda_2| \le \eta\|E\| = 10^{10}$$
is a good estimate, but it does not tell us whether the eigenvalue crosses zero or not. Theorem 2.1 would tell us that, but at great expense. The bounds (the first order approximations) for $\lambda_1$ and $\lambda_4$ are good in the sense that they show that these eigenvalues are well-behaved. The bound for $\lambda_1$ is not optimal, since it only uses the information that $|E| \le |H|$. The bounds obtained from the second inequality in (5.2) and from (5.3),
$$\max_{i=1,2,3,4}\frac{|\delta_i|}{|\lambda_i|} \le 50 \qquad\text{and}\qquad \max_{i=1,2,3,4}\frac{|\delta_i|}{|\lambda_i|} \le 100,$$
respectively, as well as the bound for $\lambda_4$ obtained by the classical normwise perturbation theory, are useless.

The next example illustrates Corollary 3.3 on a matrix pair $(H, M)$.

Example 5.4. Let $H = D_HA^TJ_0AD_H$ and $M = D_MB^TBD_M$, where
$$D_H = \mathrm{diag}(10^8,\ 10^4,\ 1,\ 1,\ 1), \qquad J_0 = \mathrm{diag}(-1,\ -1,\ 1,\ 1), \qquad D_M = \mathrm{diag}(10^{-4},\ 10^{-2},\ 10^{-2},\ 10^{-1},\ 1),$$
and
    A:  1 ?3 ?5 ?  1 3 5 2  ?2 ?4 ?4 ?2  4 2 ?2 ?4 ?5        B:  ?1 ?  1 1 3 4  1 2 ?1  1 2 ?
Thus, $H$ is indefinite and singular of rank four, $M$ is semi-definite of rank three, and $H$ and $M$ are scaled in opposite directions. Altogether,
    H:  ?2:3 1 17  ?1:7 1 13  ?5: 1 9  1:  :  ?1:7 1 13  ?3: 1 8  ?7: 1 5  1:  :  ?7: 1 5  ?1:9 1 3  ?4: 1 2  1: 1 2  1: :2 1 6  ?4: 1 2  ?1:4 1 3  ?1:  : : : 1 2  ?1:6 1 2  ?1:
and
$$M = \begin{pmatrix}
 1.9\times10^{-7} & -1.9\times10^{-5} & -2.6\times10^{-5} & -2.4\times10^{-4} & -4.0\times10^{-4}\\
-1.9\times10^{-5} &  2.1\times10^{-3} &  2.6\times10^{-3} &  2.4\times10^{-2} &  2.0\times10^{-2}\\
-2.6\times10^{-5} &  2.6\times10^{-3} &  3.6\times10^{-3} &  3.2\times10^{-2} &  4.0\times10^{-2}\\
-2.4\times10^{-4} &  2.4\times10^{-2} &  3.2\times10^{-2} &  3.2\times10^{-1} &  8.0\times10^{-1}\\
-4.0\times10^{-4} &  2.0\times10^{-2} &  4.0\times10^{-2} &  8.0\times10^{-1} &  8.0
\end{pmatrix}.$$
The eigenvector matrix of the pair $(H, M)$ is (properly rounded)
    X:  1:  ?7:4 1?5  ?4:8 1?6  1:86 1?7  2:18 1?8  ?2:52 1?3  1:  3:1 1?2  ?2:96 1?4  ?6:72 1?5  3:74 1?3  5: 1?1  8:66 1 1  ?8:41 1?1  ?5:4 1?2  6:29 1?4  ?1:5 1?1  ?1:  :13  1:68 1?1  ?2:53 1?5  1: 1?2  5:76 1?1  ?2:37 1?1  3:36 1?1
We have
$$X^*HX = \mathrm{diag}(-2.\ldots,\ 9.\ldots,\ -1.\ldots,\ 7.7388,\ 2.6\times10^{-15}),$$
$$X^*MX = \mathrm{diag}(-3.6\times10^{-23},\ 1.1\times10^{-18},\ 1,\ 1,\ 1).$$
We conclude that $S = \{3, 4\}$,
$$\Omega(0,S) = \mathrm{diag}(0,\ 0,\ -1.\ldots,\ 7.7388,\ 0), \qquad J(0) = \mathrm{diag}(0,\ 0,\ 1,\ 1,\ 1),$$
where $\Omega(0,S)$ and $J(0)$ are defined by Definition 3.3 and (3.5), respectively. We can arrive at this conclusion in two ways: by using the heuristic (2.9), or by observing that the null subspaces of $H$ and $M$ have only the trivial intersection.

Let us perturb $H$ to $H + \delta H$ with $\delta H = \eta E$, where $\eta = 10^{-6}$ and
    E:  1 2 3 2  3 4 2  1 2 3 1  3 2 4 1  2 1 3 1 2 1 4  4 2 2 1 3 1 2  1?6 1 5 2 6 3 5 4 6 ?6
Thus $|E| \le |H|$. The relative changes in the eigenvalues $\lambda_i$, $i \in S$, are
$$\left(\frac{|\delta_3|}{|\lambda_3|},\ \frac{|\delta_4|}{|\lambda_4|}\right) = (3.5\times10^{-7},\ 5.3\times10^{-4}).$$
The first-order approximations of the bounds from Corollary 3.3 are

$$\frac{|\delta\lambda_i|}{|\lambda_i|} \le \big|u_i(0)^* [C^\dagger(0,S)]^*\big|\, |\delta H|\, \big|C^\dagger(0,S)\, u_i(0)\big| \le \varepsilon\, \big\|\, |[C^\dagger(0,S)]^*|\, |H|\, |C^\dagger(0,S)|\, \big\|, \eqno(5.4)$$

where $C^\dagger(0,S)$ and $U(0)$ are defined by (3.6) and (3.7), respectively. We can take $U(0) = [\,e_3 \;\; e_4\,]$, in which case

$$C^\dagger(0,S) = \begin{pmatrix} -1.331\times 10^{-9} & 6.675\times 10^{-8} \\ 8.341\times 10^{-6} & -1.65\times 10^{-4} \\ 2.4\times 10^{-2} & -3.24\times 10^{-1} \\ 1.0\times 10^{-3} & 7.667\times 10^{-1} \\ 1.595\times 10^{-4} & -8.525\times 10^{-2} \end{pmatrix}.$$

The bounds obtained from the first inequality in (5.4) are

$$\left(\frac{|\delta\lambda_3|}{|\lambda_3|},\; \frac{|\delta\lambda_4|}{|\lambda_4|}\right) \le (2.7\times 10^{-6},\; 4.6\times 10^{-3}),$$

and the bound obtained from the second inequality in (5.4) is

$$\max_{i=3,4} \frac{|\delta\lambda_i|}{|\lambda_i|} \le 4.6\times 10^{-3}.$$

Note that choosing $C^\dagger(0,S)$ with another $U(0)$ in (3.8) would yield the same bounds.

Our last example deals with the subspace bounds of §4.

Example 5.5. Let us reconsider the matrix of Example 2.1. If we choose $\varepsilon = 2.224\times 10^{-16}$, then the bound on the perturbation of its zero subspace is

$$\|X_1^* \tilde X_2\|_F \le \frac{\varepsilon\, \|\, |H^\dagger(0,S)|\, |H|\, \|_F}{\mathrm{relgap}(\lambda_1, \tilde\lambda_2)}.$$

The truncated pseudoinverse of $H$ is

$$H^\dagger(0,S) = \begin{pmatrix} 2.4\times 10^{-16} & -1.1\times 10^{-16} & -2.2\times 10^{-12} & -2.3\times 10^{-14} \\ -1.1\times 10^{-16} & 1.0\times 10^{-16} & 1.0\times 10^{-12} & 1.0\times 10^{-14} \\ -2.2\times 10^{-12} & 1.0\times 10^{-12} & 2.0\times 10^{-8} & 2.0\times 10^{-10} \\ -2.3\times 10^{-14} & 1.0\times 10^{-14} & 2.0\times 10^{-10} & 3.0\times 10^{-12} \end{pmatrix}.$$
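The quantities in Example 5.5 can be imitated on a small hypothetical matrix: a thresholded spectral pseudoinverse stands in for $H^\dagger(0,S)$, and the relative-gap and absolute-gap subspace bounds are compared against the actual rotation of the "zero" subspace. Everything below (the $3\times 3$ matrix, the threshold `tau`, and `eps`) is illustrative, not the data of Example 2.1.

```python
import numpy as np

def truncated_pinv(H, tau):
    """Spectral pseudoinverse of a Hermitian H in which eigenvalues with
    |lambda| <= tau are treated as exact zeros (a simple analogue of the
    truncated pseudoinverse H^dagger(0, S))."""
    lam, X = np.linalg.eigh(H)
    inv = np.zeros_like(lam)
    keep = np.abs(lam) > tau
    inv[keep] = 1.0 / lam[keep]
    return (X * inv) @ X.T             # X @ diag(inv) @ X.T

# Hypothetical graded symmetric matrix with one tiny eigenvalue.
D = np.diag([1e3, 1.0, 1e-3])
A = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 4.0]])
H = D @ A @ D                          # eigenvalues roughly 2e6, 2.5, 3.5e-6

Hd = truncated_pinv(H, tau=1e-4)       # drops the ~3.5e-6 eigenvalue
cond = np.linalg.norm(np.abs(Hd) @ np.abs(H), 'fro')   # the factor || |Hd| |H| ||_F

# Componentwise relative perturbation |dH| = eps*|H| and its effect on the
# "zero" subspace (here: the eigenvector of the tiny eigenvalue).
eps = 1e-10
dH = eps * np.abs(H)
lam, X = np.linalg.eigh(H)             # ascending; column 0 is the tiny eigenvalue
lam_t, Xt = np.linalg.eigh(H + dH)
sin_theta = np.linalg.norm(Xt[:, 1:].T @ X[:, :1])     # analogue of ||X_1^* X~_2||_F

relgap = (lam_t[1] - lam[0]) / np.sqrt(abs(lam[0] * lam_t[1]))
relgap_bound = eps * cond / relgap                      # relative-gap flavor
absgap_bound = np.linalg.norm(dH, 2) / (lam_t[1] - lam[0])   # absolute-gap flavor
```

In this sketch the absolute-gap bound is several orders of magnitude weaker than the relative-gap style bound, which is itself pessimistic compared with the actual rotation, mirroring the discussion of Example 5.5.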
The associated condition number is

$$\|\, |H^\dagger(0,S)|\, |H|\, \|_F = 4.4\times 10^4,$$

and we have $\mathrm{relgap}(\lambda_1, \tilde\lambda_2) \approx 1$. Thus

$$\|X_1^* \tilde X_2\|_F \le \frac{\varepsilon\, \|\, |H^\dagger(0,S)|\, |H|\, \|_F}{\mathrm{relgap}(\lambda_1, \tilde\lambda_2)} = 8.8827\times 10^{-12}.$$

Actually, this bound is very pessimistic; the scaled bound is much better. If we have $|E| \le |H|$, then we have $\|F\|_F \le \varepsilon\|A\|_F = 8.8818\times 10^{-16}$, $\|D H^\dagger(0,S)\| = 3.2\times 10^{-4}$, and $\|D\tilde X_2\| = 2$; thus

$$\|X_1^* \tilde X_2\|_F \le \frac{\|F\|_F\, \|D H^\dagger(0,S)\|\, \|D \tilde X_2\|}{\mathrm{relgap}(\lambda_1, \tilde\lambda_2)} = 5.3295\times 10^{-19}.$$

Thus it is reasonable to expect the zero subspace of this matrix to be computed accurately. The standard absolute gap bound is

$$\|X_1^* \tilde X_2\|_F \le \frac{\varepsilon\|E\|_F}{\mathrm{gap}(\lambda_1, \tilde\lambda_2)} = 8.8827\times 10^{-8},$$

which is far too pessimistic.

6. Conclusion

As a general conclusion based on our bounds and the above examples, we note that the error bounds on individual eigenvalues and eigenvectors tend to be tighter, sometimes much tighter, than the global error bounds given in [1] or [18] for all of the eigenvalues of the matrix. Moreover, they are easier to generalize to large classes of eigenvalue problems. We also note that we obtain structured perturbation results on Hermitian pencils when one or both of the matrices are singular (see Proposition 3.1 and Corollaries 3.1 and 3.3).
The above examples also lead us to the observation that we can often obtain meaningful relative error bounds on eigenvalues of numerically singular matrices, as long as those eigenvalues have good local condition numbers. If these "non-zero" eigenvalues are well-behaved, it is possible that the subspace associated with the "zero" eigenvalues is also well-behaved (see the comments after Theorem 4.1 and Example 5.5). Thus, we expand the definition of well-behaved matrices to include matrices whose non-zero eigenvalues have modest local condition numbers and whose zero subspace is well-behaved. This definition includes the matrices of Examples 5.2 and 5.5, but not that of Example 5.3, because of its two badly behaved eigenvalues. Examples 5.1 and 5.2 have eigenvalues (and singular values) that are much better behaved than the normwise theory would predict, but we would not expect any numerical method to compute all of the digits of the smallest eigenvalues (or singular values) correctly.

We would like to thank the referee for very helpful comments.

REFERENCES

1. J.L. Barlow and J.W. Demmel, Computing accurate eigensystems of scaled diagonally dominant matrices, SIAM J. Numer. Anal., 27:762-791, 1990.
2. J.L. Barlow and I. Slapnicar, Optimal perturbation bounds for the Hermitian eigenvalue problem, Technical report, The Pennsylvania State University.
3. S.L. Campbell and C.D. Meyer, Jr., Generalized Inverses of Linear Transformations, Pitman, London, 1979; reprinted by Dover, New York, 1991.
4. C. Davis and W.M. Kahan, The rotation of eigenvectors by a perturbation. III, SIAM J. Numer. Anal., 7:1-46, 1970.
5. J.W. Demmel and K. Veselic, Jacobi's method is more accurate than QR, SIAM J. Matrix Anal. Appl., 13:1204-1243, 1992.
6. F.M. Dopico, J. Moro, and J.M. Molera, Weyl-type perturbation bounds for eigenvalues of Hermitian matrices, preprint.
7. S.C. Eisenstat and I.C.F. Ipsen, Relative perturbation techniques for singular value problems, SIAM J. Numer. Anal., 32:1972-1988, 1995.
8. L. Elden, Error analysis of direct methods of matrix inversion, J. Assoc. Comput. Mach., 8:281-330, 1961.
9. G.H. Golub and C.F. Van Loan, Matrix Computations, Second Edition, The Johns Hopkins University Press, Baltimore, 1989.
10. M. Gu, Studies in Numerical Linear Algebra, PhD thesis, Yale University, New Haven, CT, 1993.
11. M. Gu and S.C. Eisenstat, Relative perturbation theory for eigenproblems, Research Report YALEU/DCS/RR-934, Department of Computer Science, Yale University, New Haven, CT, February 1993.
12. T. Kato, A Short Introduction to Perturbation Theory for Linear Operators, Springer-Verlag, New York, 1982.
13. R.-C. Li, Relative perturbation theory: (i) eigenvalue and singular value variations, SIAM J. Matrix Anal. Appl., 19:956-982, 1998.
14. R.-C. Li, Relative perturbation theory: (ii) eigenspace and singular subspace variations, SIAM J. Matrix Anal. Appl., 20:471-492, 1999.
15. C.F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal., 13:76-83, 1976.
16. C.C. Paige and M.A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal., 18:398-405, 1981.
17. B.N. Parlett, The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs, NJ, 1980.
18. K. Veselic and I. Slapnicar, Floating-point perturbations of Hermitian matrices, Linear Algebra and Its Applications, 195:81-116, 1993.
19. J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, London, 1965.
CHAPTER 6 The Singular Value Decomposition Exercise 67: SVD examples (a) For A =[, 4] T we find a matrixa T A =5,whichhastheeigenvalue =5 Thisprovidesuswiththesingularvalue =+ p =5forA Hence the matrix
More informationMATH 425-Spring 2010 HOMEWORK ASSIGNMENTS
MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT
More informationDEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular
form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix
More information14 Singular Value Decomposition
14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing
More informationLecture 1: Review of linear algebra
Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations
More information