Principal Angles Between Subspaces and Their Tangents



Principal angles between subspaces and their tangents

Peizhen Zhu (a), Andrew V. Knyazev (a,b)

arXiv: v1 [math.NA] 4 Sep 2012

(a) Department of Mathematical and Statistical Sciences, University of Colorado Denver, P.O. Box , Campus Box 170, Denver, CO , USA
(b) Mitsubishi Electric Research Laboratories, 201 Broadway, Cambridge, MA

Abstract

Principal angles between subspaces (PABS), also called canonical angles, serve as a classical tool in mathematics, statistics, and applications, e.g., data mining. Traditionally, PABS are introduced and used via their cosines. The tangents of PABS have attracted relatively less attention, but are important for the analysis of convergence of subspace iterations for eigenvalue problems. We explicitly construct matrices such that their singular values are equal to the tangents of PABS, using several approaches: orthonormal and non-orthonormal bases for subspaces, and orthogonal projectors.

Keywords: principal angles, canonical angles, singular value, projector
2000 MSC: 15A42, 15A60, 65F35

1. Introduction

The concept of principal angles between subspaces (PABS) was first introduced by Jordan [1] in 1875. Hotelling [2] defined PABS in the form of canonical correlations in statistics in 1936. Numerous researchers have worked on PABS; see, e.g., our pseudo-random choice of initial references [3, 4, 5, 6, 7, 8], out of tens of thousands of Internet links for the keywords "principal angles" and "canonical angles".

This material is based upon work partially supported by the National Science Foundation under Grant No. . The preliminary version is at . Corresponding author. E-mail addresses: peizhenzhu[at]ucdenver.edu (Peizhen Zhu), andrewknyazev[at]ucdenver.edu (Andrew V. Knyazev).

Preprint submitted to Linear Algebra and Its Applications. September 5, 2012.

In this paper, we first briefly review the concept and some important properties of PABS in Section 2. Traditionally, PABS are introduced and used via their sines and, more commonly because of the connection to canonical correlations, via their cosines. The tangents of PABS have attracted relatively less attention, despite the celebrated work of Davis and Kahan [9], which includes several tangent-related theorems. Our interest in the tangents of PABS is motivated by their applications in the theoretical analysis of convergence of subspace iterations for eigenvalue problems. We review some previous work on the tangents of PABS in Section 3. The main goal of this work is to explicitly construct a family of matrices such that their singular values are equal to the tangents of PABS. We form these matrices using several different approaches: orthonormal bases for subspaces in Section 4, non-orthonormal bases in Section 5, and orthogonal projectors in Section 6.

2. Definition and basic properties of PABS

In this section, we remind the reader of the concept of PABS and some of its fundamental properties. First, we recall that an acute angle between two unit vectors x and y, i.e., with x^H x = y^H y = 1, is defined as

cos θ(x, y) = |x^H y|, where 0 ≤ θ(x, y) ≤ π/2.

The definition of an acute angle between two vectors can be recursively extended to PABS; see, e.g., [2, 10, 11].

Definition 2.1. Let X ⊂ C^n and Y ⊂ C^n be subspaces with dim(X) = p and dim(Y) = q. Let m = min(p, q). The principal angles Θ(X, Y) = [θ_1, ..., θ_m], where θ_k ∈ [0, π/2], k = 1, ..., m, between X and Y are recursively defined by

s_k = cos(θ_k) = max_{x ∈ X} max_{y ∈ Y} |x^H y| = |x_k^H y_k|,

subject to ‖x‖ = ‖y‖ = 1, x^H x_i = 0, y^H y_i = 0, i = 1, ..., k−1. The vectors {x_1, ..., x_m} and {y_1, ..., y_m} are called the principal vectors.

An alternative definition of PABS, from [10, 11], is based on the singular value decomposition (SVD) and is reproduced here as the following theorem.

Theorem 2.1. Let the columns of matrices X ∈ C^{n×p} and Y ∈ C^{n×q} form orthonormal bases for the subspaces X and Y, correspondingly. Let the SVD of X^H Y be U Σ V^H, where U and V are unitary matrices and Σ is a p×q diagonal matrix with the real diagonal elements s_1(X^H Y), ..., s_m(X^H Y) in nonincreasing order, with m = min(p, q). Then

cos Θ↑(X, Y) = S(X^H Y) = [s_1(X^H Y), ..., s_m(X^H Y)],

where Θ↑(X, Y) denotes the vector of principal angles between X and Y arranged in nondecreasing order and S(A) denotes the vector of singular values of A. Moreover, the principal vectors associated with this pair of subspaces are given by the first m columns of XU and YV, correspondingly.

Theorem 2.1 implies that PABS are symmetric, i.e., Θ(X, Y) = Θ(Y, X), and unitarily invariant, i.e., Θ(UX, UY) = Θ(X, Y) for any unitary matrix U ∈ C^{n×n}, since (UX)^H (UY) = X^H Y. A number of other important properties of PABS have been established, for finite dimensional subspaces, e.g., in [4, 5, 6, 7, 8], and for infinite dimensional subspaces in [3, 12]. We list a few of the most useful properties of PABS below.

Property 2.1 ([7]). In the notation of Theorem 2.1, let p ≥ q and let [X, X⊥] be a unitary matrix such that S(X⊥^H Y) = [s_1(X⊥^H Y), ..., s_q(X⊥^H Y)], where s_1(X⊥^H Y) ≥ ... ≥ s_q(X⊥^H Y). Then s_k(X⊥^H Y) = sin(θ_{q+1−k}), k = 1, ..., q.

Relationships between the principal angles between X and Y and those between their orthogonal complements are thoroughly investigated in [4, 5, 12]. The nonzero principal angles between X and Y are the same as those between X^⊥ and Y^⊥. Similarly, the nonzero principal angles between X and Y^⊥ are the same as those between X^⊥ and Y.

Property 2.2 ([4, 5, 12]). Let the orthogonal complements of the subspaces X and Y be denoted by X^⊥ and Y^⊥, correspondingly. Then,
1. [Θ↓(X, Y), 0, ..., 0] = [Θ↓(X^⊥, Y^⊥), 0, ..., 0];
2. [Θ↓(X, Y^⊥), 0, ..., 0] = [Θ↓(X^⊥, Y), 0, ..., 0];
3. [π/2, ..., π/2, Θ↓(X, Y)] = [π/2 − Θ↑(X, Y^⊥), 0, ..., 0], where there are max(dim(X) − dim(Y), 0) additional π/2's on the left.
Extra 0's at the end may need to be added on either side to match the sizes. The symbols ↓ and ↑ denote the components of a vector arranged in nonincreasing and nondecreasing order, correspondingly.

These statements are widely used to discover new properties of PABS.
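Theorem 2.1 is straightforward to try numerically. The following minimal sketch (an illustration only, assuming NumPy and SciPy are available; scipy.linalg.subspace_angles serves as an independent cross-check) confirms that the singular values of X^H Y are the cosines of the principal angles.

```python
# Sketch: cosines of PABS are the singular values of X^H Y (Theorem 2.1).
import numpy as np
from scipy.linalg import orth, subspace_angles

rng = np.random.default_rng(0)
n, p, q = 7, 3, 2
X = orth(rng.standard_normal((n, p)))  # orthonormal basis of the subspace X
Y = orth(rng.standard_normal((n, q)))  # orthonormal basis of the subspace Y

cosines = np.linalg.svd(X.conj().T @ Y, compute_uv=False)
angles = subspace_angles(X, Y)         # independent reference computation
print(np.allclose(np.sort(cosines), np.sort(np.cos(angles))))  # True
```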

3. Tangents of PABS

The tangents of PABS serve as an important tool in numerical matrix analysis. For example, in [13, 14], the authors use the tangent of the largest principal angle derived from a norm of a specific matrix. In [6, p. ] and [15, Theorem 2.4, p. 252], the tangents of PABS, related to the singular values of a matrix without an explicit matrix formulation, are used to analyze perturbations of invariant subspaces. In [6, 13, 14, 15], the two subspaces have the same dimensions.

Let the orthonormal columns of the matrices X, X⊥, and Y span the subspace X, its orthogonal complement X^⊥, and the subspace Y, correspondingly. According to Theorem 2.1 and Property 2.1, cos Θ(X, Y) = S(X^H Y) and sin Θ(X, Y) = S(X⊥^H Y). The properties of the sines and especially the cosines of PABS are well studied; see, e.g., [7, 10]. One could obtain the tangents of PABS from the sines and the cosines of PABS directly; however, in some situations this does not reveal enough information about their properties.

In this work, we construct a family F of explicitly given matrices such that the singular values of each matrix T ∈ F are the tangents of PABS. We form T in three different ways. First, we derive T as a product of matrices whose columns form orthonormal bases of the subspaces. Second, we present T as a product of matrices with non-orthonormal columns. Third, we form T as a product of orthogonal projectors onto the subspaces. To better understand the matrix T, we provide a geometric interpretation of the singular values of T and of the action of T as a linear operator. Our constructions include the matrices used in [6, 13, 14, 15]. Furthermore, we consider the tangents of principal angles between two subspaces not only with the same dimensions, but also with different dimensions.

4. tan Θ in terms of the orthonormal bases of subspaces

The sines and cosines of PABS are obtained from the singular values of explicitly given matrices, which motivates us to construct T as follows: if the matrices X and Y have orthonormal columns and X^H Y has full rank, let T = X⊥^H Y (X^H Y)^{−1}; then tan Θ(X, Y) could be equal to the positive singular values of T. We begin with an example in two dimensions, shown in Figure 1. Let

X = [cos β; sin β],  X⊥ = [−sin β; cos β],  and  Y = [cos(θ + β); sin(θ + β)],

Figure 1: The PABS in 2D.

where 0 ≤ θ < π/2 and β ∈ [0, π/2]. Then X^H Y = cos θ and X⊥^H Y = sin θ. Obviously, tan θ coincides with the singular value of T = X⊥^H Y (X^H Y)^{−1}. If θ = π/2, then the matrix X^H Y is singular, which demonstrates that, in general, we need to use its Moore-Penrose pseudoinverse to form our matrix T = X⊥^H Y (X^H Y)†.

The Moore-Penrose pseudoinverse is well known. For a matrix A ∈ C^{n×m}, its Moore-Penrose pseudoinverse is the unique matrix A† ∈ C^{m×n} satisfying

A A† A = A,  A† A A† = A†,  (A A†)^H = A A†,  (A† A)^H = A† A.

Some properties of the Moore-Penrose pseudoinverse are as follows:
1. If A = U Σ V^H is the SVD of A, then A† = V Σ† U^H.
2. If A has full column rank and B has full row rank, then (AB)† = B† A†. However, this formula does not hold in general.
3. A A† is the orthogonal projector onto the range of A, and A† A is the orthogonal projector onto the range of A^H.

For additional properties of the Moore-Penrose pseudoinverse, we refer the reader to [6]. Now we are ready to prove that the intuition discussed above, suggesting to try T = X⊥^H Y (X^H Y)†, is correct. Moreover, we cover the case of the subspaces X and Y having possibly different dimensions.
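The two-dimensional example above, together with the pseudoinverse-based form of T, can be checked directly; a minimal sketch assuming NumPy (np.linalg.pinv computes the Moore-Penrose pseudoinverse via the SVD):

```python
# Sketch: in 2D, the singular value of T = X_perp^H Y (X^H Y)^+ is tan(theta).
import numpy as np

theta, beta = 0.3, 0.7
X  = np.array([[np.cos(beta)], [np.sin(beta)]])
Xp = np.array([[-np.sin(beta)], [np.cos(beta)]])  # basis of the complement
Y  = np.array([[np.cos(theta + beta)], [np.sin(theta + beta)]])

T = Xp.T @ Y @ np.linalg.pinv(X.T @ Y)            # here X^H Y = cos(theta) != 0
print(np.allclose(np.linalg.svd(T, compute_uv=False)[0], np.tan(theta)))  # True
```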

Theorem 4.1. Let X ∈ C^{n×p} have orthonormal columns and be arbitrarily completed to a unitary matrix [X, X⊥]. Let the matrix Y ∈ C^{n×q} be such that Y^H Y = I. Then the positive singular values S₊(T) of the matrix T = X⊥^H Y (X^H Y)† satisfy

tan Θ(R(X), R(Y)) = [∞, ..., ∞, S₊(T), 0, ..., 0],

where R(·) denotes the matrix range.

Proof. Let [Y, Y⊥] be unitary; then [X, X⊥]^H [Y, Y⊥] is unitary, such that

[X, X⊥]^H [Y, Y⊥] = [ X^H Y, X^H Y⊥; X⊥^H Y, X⊥^H Y⊥ ],

with p and n−p block rows and q and n−q block columns. Applying the CS decomposition (CSD), e.g., [16, 17, 18], we get

[X, X⊥]^H [Y, Y⊥] = diag(U_1, U_2) D diag(V_1, V_2)^H,

where U_1 ∈ U(p), U_2 ∈ U(n−p), V_1 ∈ U(q), and V_2 ∈ U(n−q). The symbol U(k) denotes the family of unitary matrices of size k. The matrix D has the following structure, with block row sizes r, s, p−r−s, n−p−q+r, s, q−r−s and block column sizes r, s, q−r−s, n−p−q+r, s, p−r−s:

              r     s     q−r−s | n−p−q+r   s     p−r−s
  r         [ I                 |  O_s                  ]
  s         [       C           |           S           ]
  p−r−s     [             O_c   |                  I    ]
  n−p−q+r   [ O_s               |  I                    ]
  s         [       S           |          −C           ]
  q−r−s     [             I     |                  O_c  ]

where C = diag(cos(θ_1), ..., cos(θ_s)) and S = diag(sin(θ_1), ..., sin(θ_s)) with θ_k ∈ (0, π/2) for k = 1, ..., s, which are the principal angles between the subspaces R(Y) and R(X). The blocks C and S may be absent. The blocks O_s and O_c are zero matrices and do not have to be square. I denotes an identity matrix; the identity blocks in D may have different sizes.

Therefore,

T = X⊥^H Y (X^H Y)† = U_2 diag(O_s, S, I) V_1^H (U_1 diag(I, C, O_c) V_1^H)† = U_2 diag(O_s, S, I) diag(I, C^{−1}, O_c†) U_1^H = U_2 diag(O_s, S C^{−1}, O) U_1^H.

Hence, S₊(T) = (tan(θ_1), ..., tan(θ_s)), where 0 < θ_1 ≤ θ_2 ≤ ... ≤ θ_s < π/2. From the matrix D, we obtain S(X^H Y) = S(diag(I, C, O_c)). According to Theorem 2.1,

Θ(R(X), R(Y)) = [0, ..., 0, θ_1, ..., θ_s, π/2, ..., π/2],

where θ_k ∈ (0, π/2) for k = 1, ..., s, and there are min(q−r−s, p−r−s) additional π/2's and r additional 0's on the right-hand side, which completes the proof.
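Theorem 4.1 is easy to illustrate numerically for subspaces of different dimensions; a sketch assuming NumPy and SciPy, with random subspaces in generic position (so that no zero or right angles occur):

```python
# Sketch of Theorem 4.1: S_+(T) gives the tangents of the principal angles.
import numpy as np
from scipy.linalg import orth, null_space, subspace_angles

rng = np.random.default_rng(2)
n, p, q = 8, 4, 3
X  = orth(rng.standard_normal((n, p)))
Xp = null_space(X.conj().T)            # orthonormal basis of the complement
Y  = orth(rng.standard_normal((n, q)))

T = Xp.conj().T @ Y @ np.linalg.pinv(X.conj().T @ Y)
S = np.linalg.svd(T, compute_uv=False)
S_plus = S[S > 1e-12]                  # keep the positive singular values
print(np.allclose(np.sort(S_plus), np.sort(np.tan(subspace_angles(X, Y)))))
```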

Remark 4.2. If more information is available about the subspaces X, Y and their orthogonal complements, we can obtain the exact sizes of the blocks of the matrix D. Let us consider the decomposition of the space C^n into an orthogonal sum of five subspaces as in [12, 19],

C^n = M_00 ⊕ M_01 ⊕ M_10 ⊕ M_11 ⊕ M,

where M_00 = X ∩ Y, M_01 = X ∩ Y^⊥, M_10 = X^⊥ ∩ Y, M_11 = X^⊥ ∩ Y^⊥ (X = R(X) and Y = R(Y)). Using Tables 1 and 2 in [12], we get dim(M_00) = r, dim(M_10) = q−r−s, dim(M_11) = n−p−q+r, dim(M_01) = p−r−s, where M = M_X ⊕ M_{X⊥} = M_Y ⊕ M_{Y⊥} with

M_X = X ⊖ (M_00 ⊕ M_01),  M_{X⊥} = X^⊥ ⊖ (M_10 ⊕ M_11),
M_Y = Y ⊖ (M_00 ⊕ M_10),  M_{Y⊥} = Y^⊥ ⊖ (M_01 ⊕ M_11),

and s = dim(M_X) = dim(M_Y) = dim(M_{X⊥}) = dim(M_{Y⊥}) = dim(M)/2. Thus, we can represent D in the following block form:

              dim(M_00)  dim(M)/2  dim(M_10) | dim(M_11)  dim(M)/2  dim(M_01)
  dim(M_00) [ I                              |  O_s                           ]
  dim(M)/2  [            C                   |             S                  ]
  dim(M_01) [                      O_c       |                        I       ]
  dim(M_11) [ O_s                            |  I                             ]
  dim(M)/2  [            S                   |            −C                  ]
  dim(M_10) [                      I         |                        O_c     ]

In addition, it is possible to permute the first q columns or the last n−q columns of D, or the first p rows or the last n−p rows, and to change the sign of any column or row, to obtain variants of the CSD.

10 and s = dim(m X ) = dim(m Y ) = dim(m X ) = dim(m Y ) Thus, we can represent D in the following block form dim(m 00 ) dim(m)/2 dim(m 10 ) dim(m 11 ) dim(m)/2 dim(m 01 ) dim(m 00 ) I Os dim(m)/2 C S dim(m 01 ) O c I dim(m 11 ) O s I dim(m)/2 S C dim(m 10 ) I O c In addition, it is possible to permute the first q columns or the last n q columns of D, or the first p rows or the last n p rows and to change the sign of any column or row to obtain the variants of the CSD Remark 43 From Remark 42, we see that some identity and zero matrices may be not present in the matrix D To obtain more details for D, we have to deal with D in several cases based on either p q or q p and either p+q n or p + q n Since the angles are symmetric such that Θ(X, Y) = Θ(Y, X ), it is enough to discuss one case of either p q or q p Let us assume q p which implies dim(m 01 ) dim(m 10 ) = p q Case 1: assuming p + q n, we have (also see [16, 20]), C 0 O S 0 O O (p q) q D = O O I p q S 0 O C 0 O O (n p q) q I n p q O O where C 0 = diag(cos(θ 1 ),, cos(θ q )) and S 0 = diag(sin(θ 1 ),, sin(θ q )) with θ k [0, π/2] for k = 1,, q Case 2 : assuming p + q n, we have D =, C 0 O (n p) (p+q n) S 0 O O (p+q n) (n p) I p+q n O O O (p q) (n p) O (p q) (p+q n) O I p q S 0 O (n p) (p+q n) C 0 O where C 0 = diag(cos(θ 1 ),, cos(θ n p )) and S 0 = diag(sin(θ 1 ),, sin(θ n p )) with θ k [0, π/2] for k = 1,, n p, 8

Due to the fact that the angles are symmetric, i.e., Θ(X, Y) = Θ(Y, X), the matrix T in Theorem 4.1 can be substituted with Y⊥^H X (Y^H X)†. Moreover, the nonzero angles between the subspaces X and Y are the same as those between the subspaces X^⊥ and Y^⊥. Hence, T can also be presented as X^H Y⊥ (X⊥^H Y⊥)† and Y^H X⊥ (Y⊥^H X⊥)†. Furthermore, for any matrix T we have S(T) = S(T^H), which implies that all conjugate transposes of T are valid as well. Let F denote the family of matrices such that the singular values of each matrix in F are the tangents of PABS. To sum up, any of the formulas for T ∈ F in the first column of Table 1 can be used in Theorem 4.1.

Table 1: Different matrices in F: matrix T ∈ F.

  Using orthonormal bases      Using orthogonal projectors
  X⊥^H Y (X^H Y)†              P_{X⊥} Y (X^H Y)†
  Y⊥^H X (Y^H X)†              P_{Y⊥} X (Y^H X)†
  X^H Y⊥ (X⊥^H Y⊥)†            P_X Y⊥ (X⊥^H Y⊥)†
  Y^H X⊥ (Y⊥^H X⊥)†            P_Y X⊥ (Y⊥^H X⊥)†
  (Y^H X)† Y^H X⊥              (Y^H X)† Y^H P_{X⊥}
  (X^H Y)† X^H Y⊥              (X^H Y)† X^H P_{Y⊥}
  (Y⊥^H X⊥)† Y⊥^H X            (Y⊥^H X⊥)† Y⊥^H P_X
  (X⊥^H Y⊥)† X⊥^H Y            (X⊥^H Y⊥)† X⊥^H P_Y

Using the fact that the singular values are invariant under unitary multiplications, we can also use P_{X⊥} Y (X^H Y)† for T in Theorem 4.1, where P_{X⊥} is the orthogonal projector onto the subspace X^⊥. Thus, we can use T as in the second column of Table 1, where P_X, P_Y, P_{X⊥}, and P_{Y⊥} denote the orthogonal projectors onto the subspaces X, Y, X^⊥, and Y^⊥, correspondingly.

Remark 4.4. From the proof of Theorem 4.1, we immediately derive that

X⊥^H Y (X^H Y)† = ((Y^H X)† Y^H X⊥)^H  and  Y⊥^H X (Y^H X)† = ((X^H Y)† X^H Y⊥)^H.

Remark 4.5. Let X⊥^H Y (X^H Y)† = U_2 Σ_1 U_1^H and Y⊥^H X (Y^H X)† = V_2 Σ_2 V_1^H be the SVDs, where U_i and V_i (i = 1, 2) are unitary matrices and Σ_i (i = 1, 2) are diagonal matrices. Let the diagonal elements of Σ_1 and Σ_2 be in the same order. Then the principal vectors associated with the pair of subspaces X and Y are given by the columns of X U_1 and Y V_1.
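Several entries of Table 1 can be compared numerically; a sketch assuming NumPy and SciPy, with p = q so that the subspaces are in generic position:

```python
# Sketch: three formulas from Table 1 share the same singular values.
import numpy as np
from scipy.linalg import orth, null_space

rng = np.random.default_rng(4)
n, p = 7, 3
X = orth(rng.standard_normal((n, p))); Xp = null_space(X.T)
Y = orth(rng.standard_normal((n, p))); Yp = null_space(Y.T)
pinv = np.linalg.pinv

T1 = Xp.T @ Y @ pinv(X.T @ Y)           # X_perp^H Y (X^H Y)^+
T2 = Yp.T @ X @ pinv(Y.T @ X)           # Y_perp^H X (Y^H X)^+
T3 = (Xp @ Xp.T) @ Y @ pinv(X.T @ Y)    # P_{X_perp} Y (X^H Y)^+

sv = lambda A: np.sort(np.linalg.svd(A, compute_uv=False))[-p:]
print(np.allclose(sv(T1), sv(T2)), np.allclose(sv(T1), sv(T3)))  # True True
```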

5. tan Θ in terms of the bases of subspaces

In the previous section, we used matrices X and Y with orthonormal columns to formulate T. It turns out that we can choose a non-orthonormal basis for one of the subspaces X or Y.

Theorem 5.1. Let [X, X⊥] be a unitary matrix with X ∈ C^{n×p}. Let Y ∈ C^{n×q} with rank(Y) = rank(X^H Y) ≤ p. Then the positive singular values of the matrix T = X⊥^H Y (X^H Y)† satisfy

tan Θ(R(X), R(Y)) = [S₊(T), 0, ..., 0].

Proof. Let the rank of Y be r; thus r ≤ p. Let the SVD of Y be U Σ V^H, where U is an n×n unitary matrix, Σ is an n×q real diagonal matrix with diagonal entries s_1, ..., s_r, 0, ..., 0 ordered by decreasing magnitude such that s_1 ≥ s_2 ≥ ... ≥ s_r > 0, and V is a q×q unitary matrix. Since rank(Y) = r ≤ q, we can compute a reduced SVD, Y = U_r Σ_r V_r^H, in which only the r columns of U and the r rows of V^H corresponding to the nonzero singular values are used, so that Σ_r is an r×r invertible diagonal matrix. Based on the fact that the left singular vectors corresponding to the nonzero singular values of Y span the range of Y, we have tan Θ(R(X), R(Y)) = tan Θ(R(X), R(U_r)). Since rank(X^H Y) = r, we have rank(X^H U_r Σ_r) = rank(X^H U_r) = r. Let T_1 = X⊥^H U_r (X^H U_r)†. According to Theorem 4.1, it follows that tan Θ(R(X), R(U_r)) = [S₊(T_1), 0, ..., 0]. It is worth noting that the angles between R(X) and R(U_r) are in [0, π/2), since X^H U_r has full rank. Our task is now to show that T_1 = T. By direct computation, we have

T = X⊥^H Y (X^H Y)† = X⊥^H U_r Σ_r V_r^H (X^H U_r Σ_r V_r^H)† = X⊥^H U_r Σ_r V_r^H (V_r^H)† (X^H U_r Σ_r)† = X⊥^H U_r Σ_r Σ_r^{−1} (X^H U_r)† = X⊥^H U_r (X^H U_r)† = T_1.

In two of the identities above we use the fact that if a matrix A is of full column rank and a matrix B is of full row rank, then (AB)† = B† A†. Hence, tan Θ(R(X), R(Y)) = [S₊(T), 0, ..., 0], which completes the proof.
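A sketch of Theorem 5.1 (assuming NumPy and SciPy): the columns of Y below form a full-rank but non-orthonormal basis, and the rank condition holds generically for random data:

```python
# Sketch of Theorem 5.1: Y need not have orthonormal columns.
import numpy as np
from scipy.linalg import orth, null_space, subspace_angles

rng = np.random.default_rng(5)
n, p, q = 8, 4, 3
X  = orth(rng.standard_normal((n, p))); Xp = null_space(X.T)
Y  = rng.standard_normal((n, q))       # non-orthonormal basis of R(Y)

T = Xp.T @ Y @ np.linalg.pinv(X.T @ Y)
S_plus = np.sort(np.linalg.svd(T, compute_uv=False))[-q:]
print(np.allclose(S_plus, np.sort(np.tan(subspace_angles(X, Y)))))  # True
```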

Remark 5.2. Let X^H Y have full rank. Let us define Z = X + X⊥ T, following [6, p. ] and [15, Theorem 2.4, p. 252]. Since (X^H Y)† X^H Y = I, the identities

Y = P_X Y + P_{X⊥} Y = X X^H Y + X⊥ X⊥^H Y = Z X^H Y

imply that R(Y) ⊆ R(Z). By direct calculation, we obtain X^H Z = X^H (X + X⊥ T) = I and Z^H Z = (X + X⊥ T)^H (X + X⊥ T) = I + T^H T. Thus, X^H Z (Z^H Z)^{−1/2} = (I + T^H T)^{−1/2} is Hermitian positive definite. The matrix Z (Z^H Z)^{−1/2} by construction has orthonormal columns, which span the space Z = R(Z). Moreover, we observe that

S(X^H Z (Z^H Z)^{−1/2}) = Λ((I + T^H T)^{−1/2}) = (1 + S²(T))^{−1/2}.

Therefore, tan Θ(R(X), R(Z)) = [S₊(T), 0, ..., 0] and dim(X) = dim(Z). From Theorem 5.1, the vectors tan Θ(R(X), R(Y)) and tan Θ(R(X), R(Z)) coincide up to trailing zeros. It is worth noting that the angles in Θ(R(X), R(Y)) and Θ(R(X), R(Z)) lying in (0, π/2) are the same. For the case p = q, we have Θ(R(X), R(Y)) = Θ(R(X), R(Z)). This gives us an explicit matrix expression X⊥^H Y (X^H Y)^{−1} for the matrix P in [6, p. ] and [15, Theorem 2.4, p. 252].

Remark 5.3. The condition rank(Y) = rank(X^H Y) ≤ p is necessary in Theorem 5.1. For example, let

X = [1; 0; 0],  X⊥ = [0, 0; 1, 0; 0, 1],  and  Y = [1, 1; 0, 1; 0, 0].

Then we have X^H Y = (1, 1) and (X^H Y)† = (1/2; 1/2). Thus, T = X⊥^H Y (X^H Y)† = (1/2; 0), and s(T) = 1/2. On the other hand, we obtain tan Θ(R(X), R(Y)) = 0, since R(X) ⊂ R(Y). From this example, we see that the result in Theorem 5.1 does not hold for the case rank(Y) > rank(X^H Y).

Corollary 5.1. Using the notation of Theorem 5.1, let Y be of full rank and p = q. Let P_{X⊥} be the orthogonal projector onto the subspace R(X⊥). Then tan Θ(R(X), R(Y)) = S(P_{X⊥} Y (X^H Y)^{−1}).

Proof. Since the singular values are invariant under unitary transforms, it is easy to obtain [S(X⊥^H Y (X^H Y)^{−1}), 0, ..., 0] = S(P_{X⊥} Y (X^H Y)^{−1}). Moreover, the number of singular values of P_{X⊥} Y (X^H Y)^{−1} is p; hence we obtain the result as stated.
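The counterexample of Remark 5.3 above is easy to reproduce; a sketch assuming NumPy:

```python
# Sketch of Remark 5.3: with rank(Y) > rank(X^H Y), S(T) fails to give tangents.
import numpy as np

X  = np.array([[1.0], [0.0], [0.0]])
Xp = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Y  = np.array([[1.0, 1.0], [0.0, 1.0], [0.0, 0.0]])  # rank 2, rank(X^H Y) = 1

T = Xp.T @ Y @ np.linalg.pinv(X.T @ Y)
print(np.linalg.svd(T, compute_uv=False))  # [0.5], yet tan Theta = [0] here
```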

Above, we construct T explicitly in terms of bases of the subspaces X and Y. Next, we provide an alternative construction, adopting only the triangular matrix from the QR factorization of a basis of the subspace X + Y.

Corollary 5.2. Let X ∈ C^{n×p} and Y ∈ C^{n×q} be matrices of full rank, where q ≤ p. Let the triangular part R of the reduced QR factorization of the matrix L = [X, Y] be

R = [ R_11, R_12; O, R_22 ],

where R_12 is of full rank. Then the positive singular values of the matrix T = R_22 (R_12)† satisfy tan Θ(R(X), R(Y)) = [S₊(T), 0, ..., 0].

Proof. Let the reduced QR factorization of L be

[X, Y] = [Q_1, Q_2] [ R_11, R_12; O, R_22 ],

where the columns of [Q_1, Q_2] are orthonormal and R(Q_1) = X. We directly obtain Y = Q_1 R_12 + Q_2 R_22. Multiplying both sides of this equality for Y by Q_1^H, we get R_12 = Q_1^H Y. Similarly, multiplying by Q_2^H, we have R_22 = Q_2^H Y. Combining these equalities with Theorem 5.1 completes the proof.
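Corollary 5.2 suggests a convenient computational route: only the triangular factor of one QR factorization of [X, Y] is needed. A sketch assuming NumPy and SciPy:

```python
# Sketch of Corollary 5.2: tangents from the R factor of a QR of L = [X, Y].
import numpy as np
from scipy.linalg import qr, subspace_angles

rng = np.random.default_rng(6)
n, p, q = 8, 4, 3
X = rng.standard_normal((n, p)); Y = rng.standard_normal((n, q))

R = qr(np.hstack([X, Y]), mode='economic')[1]  # keep only the triangular part
R12, R22 = R[:p, p:], R[p:, p:]
T = R22 @ np.linalg.pinv(R12)
S_plus = np.sort(np.linalg.svd(T, compute_uv=False))[-q:]
print(np.allclose(S_plus, np.sort(np.tan(subspace_angles(X, Y)))))  # True
```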

6. tan Θ in terms of the orthogonal projectors onto subspaces

In this section, we present T using orthogonal projectors only. Before we proceed, we need a lemma and a corollary.

Lemma 6.1 ([6]). If U and V are unitary matrices, then for any matrix A we have (U A V)† = V^H A† U^H.

Corollary 6.1. Let A ∈ C^{n×n} be a block matrix, such that

A = [ D, O_12; O_21, O_22 ],

where D is a p×q matrix and O_12, O_21, and O_22 are zero matrices. Then

A† = [ D†, O_21^H; O_12^H, O_22^H ].

Proof. Let U(n) denote the family of unitary matrices of size n by n. Let the SVD of D be U_1 Σ V_1^H, where Σ is a p×q diagonal matrix, U_1 ∈ U(p), and V_1 ∈ U(q). Then there exist unitary matrices U_2 ∈ U(n−p) and V_2 ∈ U(n−q), such that

A = diag(U_1, U_2) [ Σ, O_12; O_21, O_22 ] diag(V_1, V_2)^H.

Thus,

A† = diag(V_1, V_2) [ Σ, O_12; O_21, O_22 ]† diag(U_1, U_2)^H = diag(V_1, V_2) [ Σ†, O_21^H; O_12^H, O_22^H ] diag(U_1, U_2)^H.

Since D† = V_1 Σ† U_1^H, we have A† = [ D†, O_21^H; O_12^H, O_22^H ], as desired.

Theorem 6.2. Let P_X, P_{X⊥}, and P_Y be the orthogonal projectors onto the subspaces X, X^⊥, and Y, correspondingly. Then the positive singular values S₊(T) of the matrix T = P_{X⊥} (P_X P_Y)† satisfy

tan Θ(X, Y) = [∞, ..., ∞, S₊(T), 0, ..., 0].

Proof. Let the matrices [X, X⊥] and [Y, Y⊥] be unitary, where R(X) = X and R(Y) = Y. By direct calculation, we obtain

[X, X⊥]^H P_X P_Y [Y, Y⊥] = [ X^H Y, O; O, O ].

The matrix P_X P_Y can be written as

P_X P_Y = [X, X⊥] [ X^H Y, O; O, O ] [Y, Y⊥]^H.   (1)

From Lemma 6.1 and Corollary 6.1, we have

(P_X P_Y)† = [Y, Y⊥] [ (X^H Y)†, O; O, O ] [X, X⊥]^H.   (2)

In a similar way,

P_{X⊥} P_Y = [X, X⊥] [ O, O; X⊥^H Y, O ] [Y, Y⊥]^H.   (3)

Let T = P_{X⊥} P_Y (P_X P_Y)† and B = X⊥^H Y (X^H Y)†, so

T = [X, X⊥] [ O, O; X⊥^H Y, O ] [Y, Y⊥]^H [Y, Y⊥] [ (X^H Y)†, O; O, O ] [X, X⊥]^H = [X, X⊥] [ O, O; B, O ] [X, X⊥]^H.

The singular values are invariant under unitary multiplications; therefore, S₊(T) = S₊(B).

Finally, we show that P_{X⊥} P_Y (P_X P_Y)† = P_{X⊥} (P_X P_Y)†. The null space of P_X P_Y is the orthogonal sum Y^⊥ ⊕ (Y ∩ X^⊥). The range of (P_X P_Y)† is thus Y ⊖ (Y ∩ X^⊥), which is the orthogonal complement of the null space of P_X P_Y. Moreover, (P_X P_Y)† is an oblique projector, because (P_X P_Y)† = ((P_X P_Y)†)², which can be obtained by direct calculation using equality (2). The oblique projector (P_X P_Y)† projects onto Y ⊖ (Y ∩ X^⊥); in particular, its range lies in Y. Thus, we have P_Y (P_X P_Y)† = (P_X P_Y)†. Consequently, we can simplify T = P_{X⊥} P_Y (P_X P_Y)† to T = P_{X⊥} (P_X P_Y)†.

Figure 2: Geometrical meaning of T = P_{X⊥} (P_X P_Y)†.

To gain geometrical insight into T, in Figure 2 we choose a generic unit vector z. We project z onto Y along X^⊥, and then project the result onto the subspace X^⊥, which is interpreted as T z. The red segment in the graph is the image under T of the set of all unit vectors. It is straightforward to see that s(T) = ‖T‖ = tan(θ).
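Theorem 6.2 can be illustrated with explicitly formed projectors; a sketch assuming NumPy and SciPy, with equal dimensions so that, generically, no zero or right angles occur:

```python
# Sketch of Theorem 6.2: tangents from T = P_{X_perp} (P_X P_Y)^+.
import numpy as np
from scipy.linalg import orth, subspace_angles

rng = np.random.default_rng(7)
n, p = 7, 3
X = orth(rng.standard_normal((n, p))); Y = orth(rng.standard_normal((n, p)))
PX, PY = X @ X.T, Y @ Y.T              # orthogonal projectors onto X and Y
PXp = np.eye(n) - PX                   # projector onto the complement of X

T = PXp @ np.linalg.pinv(PX @ PY)
S_plus = np.sort(np.linalg.svd(T, compute_uv=False))[-p:]
print(np.allclose(S_plus, np.sort(np.tan(subspace_angles(X, Y)))))  # True
```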

Using Property 2.2 and the fact that the principal angles are symmetric with respect to the subspaces X and Y, the expressions P_{Y⊥} (P_Y P_X)† and P_X (P_{X⊥} P_{Y⊥})† can also be used in Theorem 6.2.

Remark 6.3. According to Remark 4.4 and Theorem 6.2, we have

P_{X⊥} (P_X P_Y)† = P_{X⊥} P_Y (P_X P_Y)†.

Furthermore, by the geometrical properties of the oblique projector (P_X P_Y)†, it follows that

(P_X P_Y)† = P_Y (P_X P_Y)† = (P_X P_Y)† P_X.

Therefore, we obtain

P_{X⊥} (P_X P_Y)† = P_{X⊥} P_Y (P_X P_Y)† = (P_Y − P_X)(P_X P_Y)†.

Similarly, we have

P_{Y⊥} (P_Y P_X)† = P_{Y⊥} P_X (P_Y P_X)† = (P_X − P_Y)(P_Y P_X)†.

From the equalities above, we see that there are several different expressions for T in Theorem 6.2.

Theorem 6.4. For subspaces X and Y:
1. If dim(X) ≤ dim(Y), we have P_X P_Y (P_X P_Y)† = P_X (P_X P_Y)† = P_X.
2. If dim(X) ≥ dim(Y), we have (P_X P_Y)† P_X P_Y = (P_X P_Y)† P_Y = P_Y.

Proof. Using the fact that X^H Y (X^H Y)† = I for dim(X) ≤ dim(Y) and combining identities (1) and (2), we obtain the first statement. Identities (1) and (2), together with the fact that (X^H Y)† X^H Y = I for dim(X) ≥ dim(Y), imply the second statement.

Remark 6.5. From Theorem 6.4, for the case dim(X) ≤ dim(Y), we have

P_{X⊥} (P_X P_Y)† = (P_X P_Y)† − P_X (P_X P_Y)† = (P_X P_Y)† − P_X.

Since the angles are symmetric, using the second statement in Theorem 6.4 we also have (P_Y P_X)† P_{X⊥} = (P_Y P_X)† − P_X. On the other hand, for the case dim(X) ≥ dim(Y), we obtain (P_X P_Y)† P_{Y⊥} = (P_X P_Y)† − P_Y and P_{Y⊥} (P_Y P_X)† = (P_Y P_X)† − P_Y. To sum up, the formulas for T in Table 2 can also be used in Theorem 6.2; a numerical sketch of one of them follows. An alternative proof for T = (P_X P_Y)† − P_Y is provided by Drmač in [14] for the particular case dim(X) = dim(Y).
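For equal dimensions, the formula T = (P_X P_Y)† − P_Y from Table 2 below can be checked the same way; a sketch assuming NumPy and SciPy:

```python
# Sketch: the equal-dimension formula T = (P_X P_Y)^+ - P_Y from Table 2.
import numpy as np
from scipy.linalg import orth, subspace_angles

rng = np.random.default_rng(8)
n, p = 7, 3
X = orth(rng.standard_normal((n, p))); Y = orth(rng.standard_normal((n, p)))
PX, PY = X @ X.T, Y @ Y.T

T = np.linalg.pinv(PX @ PY) - PY
S_plus = np.sort(np.linalg.svd(T, compute_uv=False))[-p:]
print(np.allclose(S_plus, np.sort(np.tan(subspace_angles(X, Y)))))  # True
```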

Table 2: Different formulas for T: left for dim(X) ≤ dim(Y); right for dim(X) ≥ dim(Y).

  (P_X P_Y)† − P_X        (P_X P_Y)† − P_Y
  (P_Y P_X)† − P_X        (P_Y P_X)† − P_Y

Remark 6.6. Finally, we note that our choice of the space H = C^n may appear natural to the reader familiar with matrix theory, but in fact is somewhat misleading. The principal angles (and the corresponding principal vectors) between the subspaces X and Y are exactly the same as those between the subspaces X ⊆ X + Y and Y ⊆ X + Y; i.e., we can reduce the space H to the space X + Y without changing PABS. This reduction changes the definition of the subspaces X^⊥ and Y^⊥ and, thus, of the matrices X⊥ and Y⊥ that column-span them. All our statements that use the subspaces X^⊥ and Y^⊥ or the matrices X⊥ and Y⊥ therefore have their new analogs, if the space X + Y substitutes for H. The formulas P_{X⊥} = I − P_X and P_{Y⊥} = I − P_Y look the same, but the identity operators are different in the spaces X + Y and H.

References

[1] C. Jordan, Essai sur la géométrie à n dimensions, Bull. Soc. Math. France 3 (1875).

[2] H. Hotelling, Relations between two sets of variates, Biometrika 28 (1936).

[3] F. Deutsch, The angle between subspaces of a Hilbert space, in: Approximation Theory, Wavelets and Applications (Maratea, 1994), volume 454 of NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., Kluwer Acad. Publ., Dordrecht, 1995.

[4] I. C. F. Ipsen, C. D. Meyer, The angle between complementary subspaces, Amer. Math. Monthly 102 (1995).

[5] A. V. Knyazev, M. E. Argentati, Majorization for changes in angles between subspaces, Ritz values, and graph Laplacian spectra, SIAM J. Matrix Anal. Appl. 29 (2006/07) (electronic).

[6] G. W. Stewart, J. Sun, Matrix Perturbation Theory, Computer Science and Scientific Computing, Academic Press Inc., Boston, MA, 1990.

[7] J. Sun, Perturbation of angles between linear subspaces, J. Comput. Math. 5 (1987).

[8] P. Wedin, On angles between subspaces of a finite dimensional inner product space, in: B. Kågström, A. Ruhe (Eds.), Matrix Pencils, volume 973 of Lecture Notes in Mathematics, Springer Berlin Heidelberg, 1983.

[9] C. Davis, W. M. Kahan, The rotation of eigenvectors by a perturbation. III, SIAM J. Numer. Anal. 7 (1970) 1–46.

[10] Å. Björck, G. Golub, Numerical methods for computing angles between linear subspaces, Math. Comp. 27 (1973).

[11] G. Golub, H. Zha, The canonical correlations of matrix pairs and their numerical computation, in: Linear Algebra for Signal Processing (Minneapolis, MN, 1992), volume 69 of IMA Vol. Math. Appl., Springer, New York, 1995.

[12] A. V. Knyazev, A. Jujunashvili, M. Argentati, Angles between infinite dimensional subspaces with applications to the Rayleigh-Ritz and alternating projectors methods, J. Funct. Anal. 259 (2010).

[13] N. Bosner, Z. Drmač, Subspace gap residuals for Rayleigh-Ritz approximations, SIAM J. Matrix Anal. Appl. 31 (2009).

[14] Z. Drmač, On relative residual bounds for the eigenvalues of a Hermitian matrix, Linear Algebra Appl. 244 (1996).

[15] G. W. Stewart, Matrix Algorithms. Vol. II: Eigensystems, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2001.

[16] A. Edelman, B. D. Sutton, The beta-Jacobi matrix model, the CS decomposition, and generalized singular value problems, Found. Comput. Math. 8 (2008).

[17] C. C. Paige, M. A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal. 18 (1981).

[18] C. C. Paige, M. Wei, History and generality of the CS decomposition, Linear Algebra Appl. 208/209 (1994).

[19] P. R. Halmos, Two subspaces, Trans. Amer. Math. Soc. 144 (1969).

[20] B. D. Sutton, Computing the complete CS decomposition, Numer. Algorithms 50 (2009).
