ETNA Kent State University

Electronic Transactions on Numerical Analysis. Volume 2, pp. 138-153, September 1994. Copyright 1994, Kent State University. etna@mcs.kent.edu

ON THE PERIODIC QUOTIENT SINGULAR VALUE DECOMPOSITION

J. J. HENCH

Abstract. The periodic Schur decomposition has generally been seen as a tool to compute the eigenvalues of a product of matrices in a numerically sound way. In a recent technical report it was shown that the periodic Schur decomposition may also be used to accurately compute the singular value decomposition (SVD) of a matrix. This was accomplished by reducing a periodic pencil associated with the standard normal equations to eigenvalue revealing form. If this technique is extended to the periodic QZ decomposition, then it is possible to compute the quotient singular value decomposition (QSVD) of a matrix pair. This technique may easily be extended further to a sequence of matrix pairs, thus computing the periodic QSVD.

Key words. singular value decomposition, periodic Schur decomposition, periodic QR algorithm, periodic QZ algorithm, QSVD, SVD.

AMS subject classifications. 15A18, 65F05, 65F15.

1. Introduction. In a recent technical report [8], a method was proposed by which to compute the singular value decomposition (SVD) of a matrix via the periodic Schur decomposition. This method applied the periodic QR algorithm [1, 6] to the matrices in the periodic linear system of equations

(1.1)    x_2 = A x_1,    x_3 = A^T x_2,

thereby computing the orthogonal matrices and the non-negative definite diagonal matrix that comprise the SVD of the matrix A. While numerically not as efficient as the standard SVD algorithm, the periodic QR algorithm was shown to compute the SVD of a matrix with comparable accuracy. In fact, if the standard periodic QR algorithm is modified to perform a single (real) implicit shift instead of the standard double (complex-conjugate) implicit shift, the result is essentially the Golub-Kahan SVD algorithm [3] applied to both A and A^T, doubling the normal operations count. The technical report further
showed that it was possible to compute the singular values of a matrix operator defined by a quotient of matrices. Here we speak of the operator Ξ : X_1 → X_2, X_1 = R^n, X_2 = R^n, defined by the equation

(1.2)    E x_2 = F x_1,

where E ∈ R^{n×n} and F ∈ R^{n×n}. This method accurately extracted the singular values of Ξ by examining the eigenvalues of a related operator. In this paper we develop the idea further by relating these results to the quotient singular value decomposition (QSVD). The QSVD reduces the matrices E and F in (1.2) to (diagonal) singular value revealing form. More precisely, via the

Received January 1994. Accepted for publication September 1994. Communicated by P. M. Van Dooren. Institute of Information and Control Theory, Academy of Sciences of the Czech Republic, Pod vodárenskou věží 4, Praha 8 (hench@utia.cas.cz). This research was supported by the Academy of Sciences of the Czech Republic and by a Fulbright research grant from the Institute of International Education, New York.

138
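The mechanism behind (1.1) can be checked numerically: composing the two maps gives the period map A^T A, whose eigenvalues are the squared singular values of A. A minimal NumPy sketch (ours, not the report's periodic QR implementation):

```python
import numpy as np

# Sketch: the period map of the system x2 = A x1, x3 = A^T x2 in (1.1)
# is A^T A; its eigenvalues are the squared singular values of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

period_map = A.T @ A
eigs = np.sort(np.linalg.eigvalsh(period_map))[::-1]     # descending
sigmas = np.linalg.svd(A, compute_uv=False)              # also descending

assert np.allclose(np.sqrt(eigs), sigmas)
```

The point of the periodic QR approach is that it obtains the same quantities without ever forming A^T A explicitly.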

QSVD it is possible to compute a non-orthogonal matrix X, non-negative definite diagonal matrices Φ and Θ, and orthogonal matrices U and V such that

(1.3)    X^T E U = Φ,    X^T F V = Θ.

Furthermore, if E is nonsingular, the ordering of the diagonal elements of Φ and Θ may be chosen to reflect the ordering of the singular values in the SVD of the explicitly formed operator Ξ̂ = E^{-1} F, i.e.,

(1.4)    θ_i / φ_i ≥ θ_{i+1} / φ_{i+1}.

In this paper, as in [7], we will show that it is possible to compute the matrices X, Φ, Θ, U, and V via the periodic QZ decomposition. Further, we will show how this technique may be extended to compute the periodic QSVD of a sequence of matrix pairs. First, however, we review from [9] the periodic QZ decomposition.

2. The Periodic QZ Decomposition. The periodic QZ decomposition simultaneously triangularizes, by orthogonal equivalences, a sequence of matrices associated with the generalized periodic eigenvalue problem. Consider the linear algebraic map Π_k : X_k → X_{k+p}, where X_k = R^n and X_{k+p} = R^n, and x_{k+p} = Π_k x_k is defined by the set of linear equations

(2.1)    E_k x_{k+1} = F_k x_k,
         E_{k+1} x_{k+2} = F_{k+1} x_{k+1},
         ...
         E_{k+p-1} x_{k+p} = F_{k+p-1} x_{k+p-1}.

It is possible to operate on the matrices E_k and F_k with orthogonal matrices Q_k and Z_k so as to triangularize the system associated with the generalized periodic eigenvalue problem

(2.2)    E_k x_{k+1} = F_k x_k,
         E_{k+1} x_{k+2} = F_{k+1} x_{k+1},
         ...
         E_{k+p-2} x_{k+p-1} = F_{k+p-2} x_{k+p-2},
         λ E_{k+p-1} x_k = F_{k+p-1} x_{k+p-1}.

More precisely, the following products

(2.3)    Q_k^T E_k Z_{k+1},          Q_k^T F_k Z_k,
         Q_{k+1}^T E_{k+1} Z_{k+2},  Q_{k+1}^T F_{k+1} Z_{k+1},
         ...
         Q_{k+p-2}^T E_{k+p-2} Z_{k+p-1},  Q_{k+p-2}^T F_{k+p-2} Z_{k+p-2},
         Q_{k+p-1}^T E_{k+p-1} Z_{k+p},    Q_{k+p-1}^T F_{k+p-1} Z_{k+p-1},

have the properties that Q_k^T E_k Z_{k+1} is a quasi upper triangular matrix H_k, and the remaining matrices Q_{k+l}^T E_{k+l} Z_{k+l+1} and Q_{k+l}^T F_{k+l} Z_{k+l} are upper triangular matrices H_{k+l} and T_{k+l}, respectively, with Z_{k+p} = Z_k. This permits the periodic pencil

in (2.2) to be expressed as the triangularized periodic pencil that is associated with the linear algebraic map Γ_k : Y_k → Y_{k+p}, where Y_k = R^n and Y_{k+p} = R^n, and where y_{k+p} = Γ_k y_k, as follows:

(2.4)    H_k y_{k+1} = T_k y_k,
         H_{k+1} y_{k+2} = T_{k+1} y_{k+1},
         ...
         H_{k+p-2} y_{k+p-1} = T_{k+p-2} y_{k+p-2},
         μ H_{k+p-1} y_k = T_{k+p-1} y_{k+p-1},

with

(2.5)    x_l = Z_l^T y_l.

Clearly, since the systems in (2.2) and (2.4) are (orthogonally) equivalent, their eigenvalues are equal, i.e., Λ(Π_l) = Λ(Γ_l). Further, the triangularized system of linear equations in (2.4) is written in an eigenvalue revealing form; this allows the eigenvalues of the period map defined by (2.4) to be related to the diagonal elements of its constituent matrices. Proceeding, let t_{ii,k} denote the ii-th element of the matrix T_k, and similarly let h_{ii,k} denote the ii-th element of the matrix H_k. Also, let the matrices T̃_{i,k} and H̃_{i,k} be 2×2 matrices centered on the diagonals of T_k and H_k, respectively, with the upper left hand entries being t_{ii,k} and h_{ii,k}, if the system defined by T̃_{i,k} and H̃_{i,k} is associated with a 2×2 bulge in the quasi upper triangular H_1. Then the eigenvalues of the operator Γ_l may be written as

(2.6)    Λ(Γ_l) = {λ_1, ..., λ_n},    l ∈ {1, ..., p},  where for i ∈ {1, ..., n}:

         λ_i = ∏_{j=1}^{p} t_{ii,j} / h_{ii,j} ∈ R,      if h_{ii,j} ≠ 0 for all j;
         {λ_i, λ_{i+1}} = Λ( ∏_{j=1}^{p} H̃_{i,j}^{-1} T̃_{i,j} ),  λ_i = conj(λ_{i+1}) ∈ C,  for a 2×2 bulge;
         λ_i = ∞,      if t_{ii,j} ≠ 0 and h_{ii,j} = 0 for some j;
         λ_i ∈ C indeterminate,      if t_{ii,j} = h_{ii,j} = 0 for some j.

From the definition above, the eigenvalues fall into four categories. Since it is necessary to refer to these categories throughout the paper, we provide a table of nomenclature in the following definition.

Definition 1. Let E and F in (1.2) be the scalars E = α and F = β. Then the adjectives¹ enomorphic and medotropic may be used to describe the four categories of possible eigenvalues, as shown in Table 2.1.

3. Quotient Singular Value Decomposition. In this section we show how the QSVD may be related to the periodic QZ decomposition. We start by restricting our attention to the case when E and F in (1.2) are nonsingular. In such a case it is

¹ We introduce here the neologisms enomorphic and medotropic. These terms are derived from Greek, meaning "of the form of one" and "changed by zero," respectively. The former term reflects the fact that an enomorphic number is equivalent to one up to a finite scaling. The latter term emphasizes the fact that a medotropic number has a zero in its rational representation.

Table 2.1
Table of eigenvalue categories

    α        β        λ        categories
    α ≠ 0    β ≠ 0    β/α      finite, determinate, enomorphic
    α ≠ 0    β = 0    0        finite, determinate, medotropic
    α = 0    β ≠ 0    ∞        non-finite, determinate, medotropic
    α = 0    β = 0    ∈ C      non-finite, indeterminate, medotropic

possible to write the matrices

(3.1)    Δ̂_1 = F^T E^{-T} E^{-1} F,
         Δ̂_2 = F F^T E^{-T} E^{-1},
         Δ̂_3 = E^{-1} F F^T E^{-T},
         Δ̂_4 = E^{-T} E^{-1} F F^T = Δ̂_2^T,

where the matrices Δ̂_i are nonsingular. It turns out that the eigenvectors of these matrices are closely related to the QSVD, which we demonstrate formally in the following lemmas and theorems.

Lemma 3.1. Let Δ̂_1, Δ̂_2, Δ̂_3, and Δ̂_4 be defined by (3.1), with E ∈ R^{n×n} and F ∈ R^{n×n} nonsingular. There exist matrices M_1, M_2, M_3, and M_4 such that the following sets of relations hold:

(3.2)    S = M_1^{-1} Δ̂_1 M_1 = M_2^{-1} Δ̂_2 M_2 = M_3^{-1} Δ̂_3 M_3 = M_4^{-1} Δ̂_4 M_4,

where S is real, diagonal, positive definite, and has arbitrarily ordered diagonal elements, and

(3.3)    D_1 = M_2^{-1} F M_1,        D_2 = M_3^{-1} E^{-1} M_2,
         D_3 = M_4^{-1} E^{-T} M_3,   D_4 = M_1^{-1} F^T M_4,

where the matrices D_i are diagonal and positive definite.

Proof. Since Δ̂_1 is symmetric, we can find an orthogonal M_1 which diagonalizes Δ̂_1 in (3.1) and (3.2), with arbitrarily ordered eigenvalues as above. Let

(3.4)    F M_1 = M̃_2 D̃_1,

where the matrix D̃_1 is chosen to be an arbitrary positive definite diagonal matrix and where the matrix M̃_2 is nonsingular. Inserting the identity matrix M̃_2 M̃_2^{-1} between E^{-1} and F in the equation for S in (3.2) yields the expression

(3.5)    S = M_1^{-1} F^T E^{-T} E^{-1} M̃_2 M̃_2^{-1} F M_1 = ( M_1^{-1} F^T E^{-T} E^{-1} M̃_2 ) D̃_1.

Since S and D̃_1 are diagonal, M_1^{-1} F^T E^{-T} E^{-1} M̃_2 must be diagonal as well. Noting that diagonal matrices commute, we write

(3.6)    M̃_2^{-1} F M_1 M_1^{-1} F^T E^{-T} E^{-1} M̃_2 = M̃_2^{-1} Δ̂_2 M̃_2 = S.

This implies that it is possible to take M_2 = M̃_2 and D_1 = D̃_1. Repeating this procedure for all of the matrices D_i in (3.3) completes the proof.

In Lemma 3.1 the matrices M_2^{-1} and M_1 diagonalize F; however, we would like to diagonalize this matrix without forming an inverse. This indeed is possible, as we demonstrate in the next lemma.

Lemma 3.2. Suppose M_2 and M_4 diagonalize Δ̂_2 and Δ̂_4, respectively, with the same eigenvalue ordering as in (3.2). Then M_4^{-T} and M_2^{-T} diagonalize Δ̂_2 and Δ̂_4, respectively. Further, if the eigenvalues of the matrices Δ̂_i are distinct, the matrices M_2 and M_4 may be related by the equation

(3.7)    M_2^{-T} = M_4 L,

with L diagonal.

Proof. Since M_2 diagonalizes Δ̂_2, M_4 diagonalizes Δ̂_4, and S = S^T,

    S = M_4^{-1} E^{-T} E^{-1} F F^T M_4,    S = S^T = M_2^T E^{-T} E^{-1} F F^T M_2^{-T}.

With the eigenvalues of S distinct, the columns of the matrices M_2^{-T} and M_4 are right eigenvectors associated with the eigenvalues of S. Since eigenvectors of distinct eigenvalues are equal up to a scaling, M_2^{-T} = M_4 L, with L nonsingular and diagonal.

The lemmas above demonstrate a number of remarkable properties of the sets of matrices M_i and Δ̂_i: namely, that the matrices M_i diagonalize the matrices Δ̂_i; that the matrices M_i diagonalize the matrices E, E^T, F, and F^T individually; and that, under certain conditions, the inverses of all of the M_i's may be computed by appropriately weighting the columns of related M_j's. These properties allow us to demonstrate our main result.

Theorem 3.3. Let the matrices E ∈ R^{n×n} and F ∈ R^{n×n} be nonsingular, with the eigenvalues of the matrices Δ̂_i in (3.1) distinct. Further, let M_1, M_2, M_3, and M_4 be defined as in (3.2) and (3.3). There exists a diagonal signature matrix

(3.8)    P = diag(±1, ±1, ..., ±1)

such that the U, X, V, Φ, and Θ associated with the QSVD (1.3) of Ξ (1.2) may be written as

(3.9)    U = M_3,    X = M_4 P,    V = M_1,
         Φ = P M_4^T E M_3,    Θ = P M_4^T F M_1.
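The content of the theorem can be checked numerically from explicit eigendecompositions (a sketch of ours; the paper of course avoids forming the Δ̂_i explicitly). Up to column signs, M_4^T E M_3 and M_4^T F M_1 come out diagonal, and the quotients of their diagonals are the singular values of E^{-1}F:

```python
import numpy as np

# Explicit check of Theorem 3.3 (illustrative only): diagonalize
# Delta1 = F^T E^-T E^-1 F, Delta3 = E^-1 F F^T E^-T, Delta4 = E^-T E^-1 F F^T
# with a common eigenvalue ordering and form Phi, Theta.
rng = np.random.default_rng(4)
E, F = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
Ei = np.linalg.inv(E)

Delta1 = F.T @ Ei.T @ Ei @ F          # symmetric
Delta3 = Ei @ F @ F.T @ Ei.T          # symmetric
Delta4 = Ei.T @ Ei @ F @ F.T          # non-symmetric, but real positive spectrum

_, M1 = np.linalg.eigh(Delta1)        # ascending eigenvalues
_, M3 = np.linalg.eigh(Delta3)
w4, M4 = np.linalg.eig(Delta4)
M4 = M4[:, np.argsort(w4.real)].real  # unit columns, matching eigenvalue order

Phi_t = M4.T @ E @ M3                 # diagonal up to roundoff and column signs
Theta_t = M4.T @ F @ M1
tol = 1e-8 * max(np.abs(Phi_t).max(), np.abs(Theta_t).max())
assert np.abs(Phi_t - np.diag(np.diag(Phi_t))).max() < tol
assert np.abs(Theta_t - np.diag(np.diag(Theta_t))).max() < tol

# |theta_i / phi_i| are the singular values of E^{-1} F, cf. (1.4)
quotients = np.sort(np.abs(np.diag(Theta_t) / np.diag(Phi_t)))[::-1]
assert np.allclose(quotients, np.linalg.svd(Ei @ F, compute_uv=False))
```

The signature matrix P of the theorem absorbs the arbitrary eigenvector signs so that Φ and Θ become positive.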

Proof. Since Δ̂_1 is symmetric, it is possible to choose M_1 to be orthogonal. Further, the matrices D_1, D_2, and D_3 may be chosen such that the columns of M_2, M_3, and M_4 are unit normalized. The fact that the columns of M_3 are unit normalized and that the matrix Δ̂_3 is symmetric implies that M_3 is orthogonal. Thus D_1 = M_2^{-1} F M_1 and D_2^{-1} = M_2^{-1} E M_3, with M_1 and M_3 orthogonal and with D_1 and D_2 diagonal and positive definite. Since the singular values of Ξ are distinct, via Lemma 3.2,

    L^{-1} D_1 = M_4^T F M_1,    L^{-1} D_2^{-1} = M_4^T E M_3.

This implies that M_4^T F M_1 and M_4^T E M_3 are diagonal. Setting P = sgn(L) completes the proof.

Remark 1. In the proof of the above theorem, the condition that the singular values appearing on the diagonal of Φ and Θ be ordered as in (1.4) was not imposed. However, any ordering of the singular values in the QSVD may be imposed without loss of generality.

In general, we would like to be able to prove Theorem 3.3 when E and F are not restricted to be nonsingular with the singular values of Ξ in (1.2) distinct. Even in the case where E and F are nonsingular, the procedure outlined in the proof of Lemma 3.1 is undesirable from a numerical point of view, since it requires forming the matrices Δ̂_i explicitly. Fortunately, the periodic QZ decomposition allows us to view the matrices Δ̂_i as period maps Δ_i of certain related periodic systems, thereby eliminating the need for inversions and multiplications by non-orthogonal matrices.

Three key properties of the periodic QZ decomposition are employed to generalize Theorem 3.3 to the case where E and F are singular. The first property allows the reduction of the matrices comprising a periodic system of equations to quasi triangular (eigenvalue revealing) form without forming the period map explicitly. Parenthetically, this property implies that the periodic QZ decomposition triangularizes each of the period maps Δ_i separately. The second property allows the eigenvalues appearing along the diagonals of the triangularized period map produced by the periodic QZ decomposition to be ordered arbitrarily. The third property asserts that the first columns (rows) of the matrices Z_i (Q_i^T) of the periodic QZ decomposition are right (left) eigenvectors of the period maps Δ_i. By rotating each of the eigenvalues of the system triangularized by the periodic QZ decomposition in turn to the upper left hand corner, it is possible to assemble the eigenvector matrices M_i associated with the Δ̂_i's in (3.2) and (3.3), and thereby to compute all of the matrices related to the QSVD. This may be done irrespective of the rank of E and F.

Proceeding, we propose a modified form of the periodic QZ decomposition which is more closely related to the QSVD than the standard periodic QZ decomposition.

Lemma 3.4. Let Q and Z be orthogonal matrices with the product QZ upper triangular. Then QZ = P, where P is a diagonal signature matrix.

Proof. As the product of orthogonal matrices, P = QZ is orthogonal, which implies P^T P = I. Since P is upper triangular as well, P^T P = I implies that P is diagonal. Finally, since the modulus of each eigenvalue of an orthogonal matrix must be one, and since P is real, P is a diagonal matrix with p_ii = ±1.
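Lemma 3.4 is easy to confirm numerically. In the sketch below (ours), a random orthogonal matrix Z is QR-factored as Z = QR; then Q^T Z = R is simultaneously orthogonal and upper triangular, and therefore comes out as a diagonal signature matrix:

```python
import numpy as np

# Numerical check of Lemma 3.4: an orthogonal, upper triangular matrix
# is diagonal with entries +-1.
rng = np.random.default_rng(1)
Z, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # random orthogonal Z
Q, R = np.linalg.qr(Z)                             # Z = Q R, R upper triangular

P = Q.T @ Z                                        # equals R: orthogonal and upper triangular
assert np.allclose(P @ P.T, np.eye(5))             # orthogonal
assert np.allclose(P, np.triu(P))                  # upper triangular
assert np.allclose(P, np.diag(np.diag(P)))         # hence diagonal
assert np.allclose(np.abs(np.diag(P)), 1.0)        # with +-1 entries
```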

Next, the operator Δ_1 is defined in terms of a periodic system. Eventually it will be used to compute the QSVD of Ξ. The next lemma shows that its periodic QZ decomposition has a special form.

Lemma 3.5. Let Δ_1 : X_1 → X_4 be the operator defined by the equations

(3.10)    E x_2 = F x_1,    E^T x_3 = x_2,    x_4 = F^T x_3,

with E ∈ R^{n×n} and F ∈ R^{n×n} nonsingular. There exist orthogonal matrices Q, U, V, and W such that

(3.11)    Q^T E U = H_1,      Q^T F V = T_1,
          U^T E^T W = H_2,    V^T F^T W = T_3,

where H_1, H_2, T_1, and T_3 are upper triangular.

Proof. Via the periodic Schur decomposition, there exist orthogonal matrices Q_1, Q_2, Q_3, Z_1, Z_2, and Z_3 such that

(3.12)    Q_1^T E Z_2 = H_1,      Q_1^T F Z_1 = T_1,
          Q_2^T E^T Z_3 = H_2,    Q_2^T Z_2 = T_2,
          Q_3^T Z_1 = H_3,        Q_3^T F^T Z_3 = T_3,

where H_2, H_3, T_1, T_2, and T_3 are upper triangular. Since the operator Δ_1 = Δ̂_1 is symmetric, the eigenvalues of the operator are real, and therefore H_1 is upper triangular as well. By Lemma 3.4, Q_2^T Z_2 = P_1 and Q_3^T Z_1 = P_2, where the diagonal elements of P_1 and P_2 are ±1. This implies that the columns of Q_2 and Z_2 differ only by sign; the same holds for Q_3 and Z_1. Setting Q = Q_1, U = Q_2 = Z_2 P_1, V = Q_3 = Z_1 P_2, and W = Z_3 completes the proof.

Remark 2. Since the operator Δ_1 = Δ̂_1, with E and F nonsingular, the orthogonal matrix V resulting from the (modified) periodic QZ decomposition that diagonalizes the operator is the matrix V associated with the QSVD in (1.3), modulo the sign of the columns. Similarly, the orthogonal matrix U resulting from the periodic QZ decomposition is the matrix U associated with the same QSVD.

In the following lemma we show how the remaining matrices X, Φ, and Θ may be computed via the periodic QZ decomposition.

Lemma 3.6. Let Ξ be defined as in (1.2), with E ∈ R^{n×n} and F ∈ R^{n×n} nonsingular. Further, let the eigenvalues of the matrices Δ̂_i in (3.1) be distinct. Then the matrices X, U, V, Φ, and Θ associated with the QSVD in (1.3) may be computed via the (modified) QZ decomposition in (3.11).

Proof. Via Lemma 3.5, it is possible to find orthogonal matrices Q̄, Ū, V̄, and W̄ that diagonalize Δ_1 with a particular eigenvalue ordering. Let λ_i be the eigenvalue associated with the i-th column of V̄. Let the matrix Ṽ_j be the orthogonal matrix that results from interchanging the first column of V̄ and the j-th column. Since the operator Δ_1 is diagonalized by the matrix V̄, Ṽ_j also diagonalizes the operator Δ_1, with the j-th eigenvalue in the upper left hand corner. Now define the matrices Q̃_j, Ũ_j, and W̃_j as the matrices that, along with Ṽ_j, triangularize the matrices E, F, E^T, and F^T in Δ_1 with the j-th eigenvalue in the upper left hand corner. Also define the vector x̃_j to be the first column of W̃_j. Since Δ_4 is triangularized by W̃_j with the

j-th eigenvalue in the upper left hand corner, x̃_j is a right eigenvector of Δ_4 and, via Lemma 3.2, x̃_j^T is a left eigenvector of Δ_2. Thus the matrix X̃ = [x̃_1 ... x̃_n] diagonalizes Δ_4 from the right and X̃^T diagonalizes Δ_2 from the left, with the same eigenvalue ordering. This implies, via Lemma 3.1, that X̃^T, along with the matrices U and V, diagonalizes E and F, albeit with the diagonal elements not necessarily positive. There exist, however, a diagonal signature matrix P as in (3.8), a matrix X = X̃ P, and orthogonal matrices U and V such that X^T, U, and V diagonalize E and F with positive diagonal elements.

In the proof above we rely on the fact that the matrices E and F are nonsingular and the singular values distinct. In the following theorem we provide a proof of the existence of the QSVD with E and F possibly singular.

Theorem 3.7. Let E ∈ R^{n×n} and F ∈ R^{n×n}, and let the operator Ξ be defined as in (1.2). There exist a non-orthogonal matrix X, non-negative definite diagonal matrices Φ and Θ, and orthogonal matrices U and V such that

(3.13)    X^T E U = Φ,    X^T F V = Θ.

Proof. Let {E_(k)} and {F_(k)} be sequences of nonsingular n×n matrices that converge pointwise to E and F, respectively, with the eigenvalues of the associated matrices Δ̂_i(k) distinct. Let the matrices Q_(k), U_(k), V_(k), W_(k), H_1(k), H_2(k), T_1(k), and T_3(k) be the matrices produced by the (modified) periodic QZ decomposition in (3.11). Further, let X_(k), Φ_(k), and Θ_(k) be produced by the procedure in the proof of Lemma 3.6. For any given E_(k) and F_(k), the periodic QZ decomposition produces bounded Q_(k), U_(k), V_(k), W_(k). The boundedness of Q_(k), U_(k), V_(k), W_(k), E_(k), and F_(k) clearly implies the boundedness of X_(k), Φ_(k), and Θ_(k). Finally, since these matrices are all elements of a compact metric space, the Bolzano-Weierstrass theorem ensures that the bounded sequence {(U_(k), X_(k), V_(k), Φ_(k), Θ_(k))} has a convergent subsequence k_i, i.e.,

    lim_{i→∞} (U_(k_i), X_(k_i), V_(k_i), Φ_(k_i), Θ_(k_i)) = (U, X, V, Φ, Θ).

In the limit, the matrices U and V remain orthogonal and the matrices Φ and Θ are non-negative definite, and therefore they are matrices associated with the QSVD in (1.3).

Remark 3. This proof of the existence of the QSVD for E and F singular requires the use of a convergent subsequence, which is impractical from a computational point of view. If the requirement that Φ and Θ be diagonal is relaxed to block upper triangular, where the non-diagonal blocks correspond to the non-distinct singular values, then the matrices resulting from the procedure in the proof of Lemma 3.6 suffice. In such a case this algorithm requires approximately 162 n^3 flops, assuming 2 implicit QZ steps per eigenvalue. This compares with approximately 52 n^3 flops for the standard QSVD algorithm [13].

Remark 4. In [12] and [13], Van Loan suggests the use of the VZ algorithm [12] applied to the pencil

(3.14)    E^T E - λ F^T F

to compute the singular values of Ξ in (1.2). This idea is similar to the idea of this paper in a number of ways. First, the matrix pencil in (3.14) is closely related to

the operator Δ_4. Second, the VZ decomposition is nearly identical to the modified periodic QZ decomposition proposed in Lemma 3.5. To our knowledge, however, the elucidation of the relationships between the eigenvectors of the operators Δ_i and the matrices U, X, and V, and the method of extracting the eigenvectors from the orthogonal matrices that comprise the modified QZ decomposition, are novel to this paper.

4. Periodic Quotient Singular Value Decomposition. In this section we examine the operator Π_1 : X_1 → X_{p+1}, X_1 = R^n, X_{p+1} = R^n, where Π_1 is defined by the equations

(4.1)    E_1 x_2 = F_1 x_1,
         ...
         E_p x_{p+1} = F_p x_p,

with E_i ∈ R^{n×n} and F_i ∈ R^{n×n}. In this section we intend to show that it is possible to implicitly reduce the matrices comprising the operator Π_1 to diagonal form, as was the case for the non-periodic operator Ξ in (1.2). As in the previous section, we restrict our attention first to the case where the matrices E_i and F_i are nonsingular. Proceeding, we write definitions for the matrices Γ̂_i, for i = 1, ..., 4p, where the matrices Γ̂_i are analogous to the matrices Δ̂_i in the non-periodic case:

(4.2)    Γ̂_1 = F_1^T E_1^{-T} F_2^T E_2^{-T} ... F_p^T E_p^{-T} E_p^{-1} F_p E_{p-1}^{-1} F_{p-1} ... E_1^{-1} F_1,
         Γ̂_2 = F_1 F_1^T E_1^{-T} F_2^T ... F_p^T E_p^{-T} E_p^{-1} F_p ... F_2 E_1^{-1},
         ...
         Γ̂_{4p} = E_1^{-T} F_2^T E_2^{-T} F_3^T ... F_p^T E_p^{-T} E_p^{-1} F_p ... E_1^{-1} F_1 F_1^T;

that is, each Γ̂_{i+1} is obtained from Γ̂_i by cyclically moving the last factor of the product to the front. The matrices Γ̂_i defined as above provide a mechanism by which the existence of a periodic QSVD may be proved.

Lemma 4.1. Let Γ̂_1, Γ̂_2, ..., Γ̂_{4p} be defined by (4.2), with E_i ∈ R^{n×n} and F_i ∈ R^{n×n} nonsingular for i = 1, ..., p. There exist matrices M_i, for i = 1, ..., 4p, such that the following sets of relations hold:

(4.3)    S = M_1^{-1} Γ̂_1 M_1 = M_2^{-1} Γ̂_2 M_2 = ... = M_{4p}^{-1} Γ̂_{4p} M_{4p},

where S is real, diagonal, and positive definite, and

(4.4)    D_1 = M_2^{-1} F_1 M_1,    D_2 = M_3^{-1} E_1^{-1} M_2,    ...,
         D_{4p-1} = M_{4p}^{-1} E_1^{-T} M_{4p-1},    D_{4p} = M_1^{-1} F_1^T M_{4p},

where the matrices D_i are diagonal and positive definite.

Proof. The proof is a trivial extension of the proof of Lemma 3.1, with F_1 substituting for F in (3.4) and with the Γ̂_i playing the role of the Δ̂_i throughout.
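For intuition (a sketch of ours with p = 2, not the paper's implicit algorithm): Γ̂_1 is exactly Π̂^T Π̂ for the explicitly formed period map Π̂ = E_2^{-1} F_2 E_1^{-1} F_1, so its eigenvalues are the squared singular values of Π̂:

```python
import numpy as np

# For p = 2, Gamma1 in (4.2) equals Pi_hat^T Pi_hat, where
# Pi_hat = E2^{-1} F2 E1^{-1} F1 is the explicitly formed period map of (4.1).
rng = np.random.default_rng(2)
E1, F1, E2, F2 = (rng.standard_normal((3, 3)) for _ in range(4))

Pi_hat = np.linalg.inv(E2) @ F2 @ np.linalg.inv(E1) @ F1
Gamma1 = Pi_hat.T @ Pi_hat     # = F1^T E1^-T F2^T E2^-T E2^-1 F2 E1^-1 F1

eigs = np.sort(np.linalg.eigvalsh(Gamma1))[::-1]
sigmas = np.linalg.svd(Pi_hat, compute_uv=False)
assert np.allclose(np.sqrt(eigs), sigmas)
```

The remaining Γ̂_i are cyclic shifts of the same product and therefore share this spectrum.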

Proceeding along the lines of the previous section, we generalize Lemma 3.2 to the periodic case.

Lemma 4.2. Suppose M_r diagonalizes Γ̂_r, for r = 2, ..., 2p and r = 2p+2, ..., 4p, with the same eigenvalue ordering as in (4.3). Then M_{4p-r+2}^{-T} diagonalizes Γ̂_r, for r = 2, ..., 2p and r = 2p+2, ..., 4p. Further, if the eigenvalues of the matrices Γ̂_i are distinct, then the matrices M_r and M_{4p-r+2} may be related by the equation

(4.5)    M_{4p-r+2}^{-T} = M_r L_r,

with L_r diagonal.

Proof. The key element of the proof of Lemma 3.2 was the observation that Δ̂_2 = Δ̂_4^T. In the periodic case, Γ̂_r = Γ̂_{4p-r+2}^T for r = 2, ..., 2p and r = 2p+2, ..., 4p. The remainder of the proof follows from this observation.

Next we prove the existence of the periodic QSVD in the case where the matrices E_i and F_i are nonsingular.

Theorem 4.3. Let the matrices E_i ∈ R^{n×n} and F_i ∈ R^{n×n} in (4.2) be nonsingular, with the eigenvalues of the matrix Γ̂_1 distinct. There exist non-orthogonal matrices X_1, ..., X_p and Y_2, ..., Y_p, positive definite diagonal matrices Φ_1, ..., Φ_p and Θ_1, ..., Θ_p, and orthogonal matrices U and V such that

(4.6)    X_1^T E_1 Y_2 = Φ_1,              X_1^T F_1 V = Θ_1,
         X_2^T E_2 Y_3 = Φ_2,              X_2^T F_2 Y_2 = Θ_2,
         ...
         X_{p-1}^T E_{p-1} Y_p = Φ_{p-1},  X_{p-1}^T F_{p-1} Y_{p-1} = Θ_{p-1},
         X_p^T E_p U = Φ_p,                X_p^T F_p Y_p = Θ_p.

Further, the constituent matrices of the periodic QSVD may be related to the matrices M_i in (4.3) and (4.4), which diagonalize the matrices Γ̂_i, in the following way:

(4.7)    X_1 = M_{4p} P_{4p},          V = M_1,
         X_2 = M_{4p-2} P_{4p-2},      Y_2 = M_{4p-1} P_{4p-1},
         X_3 = M_{4p-4} P_{4p-4},      Y_3 = M_{4p-3} P_{4p-3},
         ...
         X_p = M_{2p+2} P_{2p+2},      Y_p = M_{2p+3} P_{2p+3},
         U = M_{2p+1},

         Φ_1 = P_{4p} M_{4p}^T E_1 M_{4p-1} P_{4p-1},              Θ_1 = P_{4p} M_{4p}^T F_1 M_1,
         Φ_2 = P_{4p-2} M_{4p-2}^T E_2 M_{4p-3} P_{4p-3},          Θ_2 = P_{4p-2} M_{4p-2}^T F_2 M_{4p-1} P_{4p-1},
         ...
         Φ_{p-1} = P_{2p+4} M_{2p+4}^T E_{p-1} M_{2p+3} P_{2p+3},  Θ_{p-1} = P_{2p+4} M_{2p+4}^T F_{p-1} M_{2p+5} P_{2p+5},
         Φ_p = P_{2p+2} M_{2p+2}^T E_p M_{2p+1},                   Θ_p = P_{2p+2} M_{2p+2}^T F_p M_{2p+3} P_{2p+3},

where the matrices P_i are diagonal signature matrices as in (3.8).

Proof. The proof is a straightforward extension of the proof of Theorem 3.3. By substituting the results of Lemma 4.1, we can ensure that the matrix S in (4.3) and the matrices D_i in (4.4) are diagonal and positive definite. Since Γ̂_1 and Γ̂_{2p+1} are symmetric, M_1 and M_{2p+1} may be chosen to be orthogonal. With M_1 and M_{2p+1} orthogonal and S positive definite, Lemma 4.1 and Lemma 4.2 imply that there exist matrices P_i such that (4.7) holds, completing the proof.
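The relations (4.6) can be realized constructively for p = 2 from a single explicit SVD. The following sketch is our own (one of many valid choices of the non-unique factors, here with Φ_1 = Φ_2 = Θ_2 = I and Θ_1 = Σ), not the periodic QZ procedure of the paper:

```python
import numpy as np

# Constructive periodic QSVD for p = 2, built from one SVD of the
# explicit period map. X1, X2, Y2 are non-orthogonal; U, V orthogonal.
rng = np.random.default_rng(3)
E1, F1, E2, F2 = (rng.standard_normal((3, 3)) for _ in range(4))

Pi_hat = np.linalg.inv(E2) @ F2 @ np.linalg.inv(E1) @ F1
U, s, Vt = np.linalg.svd(Pi_hat)
V, Sigma = Vt.T, np.diag(s)

Y2 = np.linalg.inv(E1) @ F1 @ V @ np.linalg.inv(Sigma)
X1 = np.linalg.inv(E1 @ Y2).T          # so that X1^T E1 Y2 = I
X2 = np.linalg.inv(E2 @ U).T           # so that X2^T E2 U = I

Phi1, Theta1 = X1.T @ E1 @ Y2, X1.T @ F1 @ V
Phi2, Theta2 = X2.T @ E2 @ U,  X2.T @ F2 @ Y2

# The relations (4.6) hold, and the quotient product recovers Sigma,
# as in Example 3.
assert np.allclose(Phi1, np.eye(3)) and np.allclose(Theta1, Sigma)
assert np.allclose(Phi2, np.eye(3)) and np.allclose(Theta2, np.eye(3))
Sigma_phi_theta = np.linalg.inv(Phi2) @ Theta2 @ np.linalg.inv(Phi1) @ Theta1
assert np.allclose(Sigma_phi_theta, Sigma)
```

The periodic QZ route produces the same kind of factorization without forming Π̂ or any explicit inverse.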

As in the previous section, we show that the matrices in the periodic QSVD may be constructed from the matrices resulting from the periodic QZ decomposition of the matrices comprising a related operator Γ_1 : X_1 → X_{2p+2}, X_i = R^n, where Γ_1 is defined by the equations

(4.8)    E_1 x_2 = F_1 x_1,
         ...
         E_p x_{p+1} = F_p x_p,
         E_p^T x_{p+2} = x_{p+1},
         E_{p-1}^T x_{p+3} = F_p^T x_{p+2},
         ...
         E_1^T x_{2p+1} = F_2^T x_{2p},
         x_{2p+2} = F_1^T x_{2p+1}.

To prove the existence of the periodic QSVD, it is necessary to generalize the lemmas and theorems of the previous section.

Lemma 4.4. Let the operator Γ_1 be defined as in (4.8). There exist orthogonal matrices Q_1, ..., Q_{2p-1}, W_2, ..., W_{2p}, U, and V such that the matrices H_k and T_k below are upper triangular:

(4.9)    Q_1^T E_1 W_2 = H_1,                    Q_1^T F_1 V = T_1,
         ...
         Q_p^T E_p U = H_p,                      Q_p^T F_p W_p = T_p,
         U^T E_p^T W_{p+1} = H_{p+1},
         Q_{p+1}^T E_{p-1}^T W_{p+2} = H_{p+2},  Q_{p+1}^T F_p^T W_{p+1} = T_{p+2},
         ...
         Q_{2p-1}^T E_1^T W_{2p} = H_{2p},       Q_{2p-1}^T F_2^T W_{2p-1} = T_{2p},
         V^T F_1^T W_{2p} = T_{2p+1}.

Proof. The proof is a trivial extension of that of Lemma 3.5.

Lemma 4.5. Let the operator Γ_1 be defined as in (4.8), with the matrices E_i ∈ R^{n×n} and F_i ∈ R^{n×n} nonsingular and with the eigenvalues of Γ_1 distinct. The matrices X_i, Y_i, U, V, Φ_i, and Θ_i associated with the periodic QSVD in (4.6) may be computed via the (modified) QZ decomposition in (4.9).

Proof. The proof of the existence of the periodic QSVD with the matrices E_i and F_i nonsingular is an extension of Lemma 3.6, with the matrices X_k and Y_k constructed analogously. Lemma 4.4 ensures that there exist orthogonal matrices Q̄_1, ..., Q̄_{2p-1}, Ū, V̄, and W̄_2, ..., W̄_{2p} that diagonalize Γ_1 with a particular eigenvalue ordering. Let λ_i be the eigenvalue associated with the i-th column of V̄. Let the matrix Ṽ_j be the orthogonal matrix that results from interchanging the first column of V̄ and the j-th column. Since the operator Γ_1 is diagonalized by the matrix V̄, Ṽ_j also diagonalizes the operator, with the j-th eigenvalue in the upper left hand corner. Now define the matrices Q̃_{l,j}, Ũ_j, and W̃_{l,j} as the matrices that, along with Ṽ_j, triangularize the constituent matrices of Γ_1 with the j-th eigenvalue in the upper left hand corner. Also define the vector x̃_{l,j} to be the first column of W̃_{2p-l+1,j}. Via Lemma 4.2, and in analogy with Lemma 3.6, x̃_{l,j}^T is a left eigenvector of the operator Γ̂_{4p-2l}. Similarly, define the vector ỹ_{l,j} to be the first column of W̃_{l+1,j}. The vector ỹ_{l,j} is a right eigenvector of the operator Γ̂_{2l+2}. Thus the matrices X̃_l = [x̃_{l,1} ... x̃_{l,n}] and

Ỹ_l = [ỹ_{l,1} ... ỹ_{l,n}] are matrices of eigenvectors of Γ̂_{4p-2l} and Γ̂_{2l+2}, respectively, and therefore diagonalize the constituent matrices of the operator Γ_1, via Lemma 4.1, albeit with the diagonal elements not necessarily positive. There exist, however, diagonal signature matrices P_{x,l} and P_{y,l} as in (3.8) such that the matrices X_l = X̃_l P_{x,l} and Y_l = Ỹ_l P_{y,l} diagonalize the constituent matrices of the operator Γ_1 with the diagonalized matrices positive definite.

Finally, Theorem 3.7 is generalized to the periodic case.

Theorem 4.6. Let the operator Π be defined as in (4.1). There exist non-orthogonal matrices X_1, ..., X_p and Y_2, ..., Y_p, non-negative definite diagonal matrices Φ_1, ..., Φ_p and Θ_1, ..., Θ_p, and orthogonal matrices U and V such that the relations in (4.6) hold.

Proof. The proof is a trivial extension of Theorem 3.7.

5. Numerical Examples. In this section we give some numerical examples to illustrate the points discussed in the previous sections.

Example 1. Consider the operator Ξ defined by (1.2), where the matrices E and F are:

Since E is nonsingular, it is possible to write the map Ξ̂ = E^{-1} F directly:

    Ξ̂ = [ 2  1 ]
         [ 1  0 ].

The singular value decomposition of Ξ̂ is Ξ̂ = Û Σ̂ V̂^T. The matrices U, V, Q, W, H_i, and T_i that result from the periodic QZ decomposition of Δ_1 satisfy, as in (3.11),

    H_1 = Q^T E U,    H_2 = U^T E^T W,    T_1 = Q^T F V,    T_3 = V^T F^T W.

With a two-by-two system it is possible to construct X = [x_1 x_2] directly from Q and W:

    x_1 = w_1,    x_2 = q_2,

yielding X. Computing Φ = X^T E U and Θ = X^T F V, the product Σ_φθ = Φ^{-1} Θ is equal to Σ̂, as expected.

Example 2. Consider again the operator Ξ, where the matrices E and F are:

Since E is singular, it is not possible to write the map Ξ̂ directly. However, it is possible to compute the QSVD of the system. The matrices U, V, Q, W, H_i, and T_i that result from the periodic QZ decomposition of Δ_1 satisfy

    H_1 = Q^T E U,    H_2 = U^T E^T W,    T_1 = Q^T F V,    T_3 = V^T F^T W.

As before, we construct X = [x_1 x_2] directly from Q and W:

    x_1 = w_1,    x_2 = q_2,

making X. Computing Φ = X^T E U and Θ = X^T F V, note that the system has an infinite and a zero singular value and no enomorphic eigenvalues.

Example 3. Consider the operator Π in (4.1) defined by matrices E_i and F_i, where E_1, E_2, F_1, and F_2 are:

Since the matrices E_i are nonsingular, it is possible to write the map Π̂ directly:

    Π̂ = [ 1  2 ]
         [ 1  1 ].

The singular value decomposition of Π̂ is Π̂ = Û Σ̂ V̂^T. The matrices U, V, Q_k, W_k, H_k, and T_k that result from the periodic QZ decomposition of the pencil of Γ satisfy, as in (4.9),

    H_1 = Q_1^T E_1 W_2,    H_2 = Q_2^T E_2 U,

    H_3 = U^T E_2^T W_3,    H_4 = Q_3^T E_1^T W_4,
    T_1 = Q_1^T F_1 V,      T_2 = Q_2^T F_2 W_2,
    T_4 = Q_3^T F_2^T W_3,  T_5 = V^T F_1^T W_4.

With a two-by-two system it is possible to construct X_1 = [x_{1,1} x_{1,2}], X_2 = [x_{2,1} x_{2,2}], and Y_2 = [y_{2,1} y_{2,2}] directly from the Q_k's and W_k's:

    x_{1,1} = w_{4,1},    x_{1,2} = q_{1,2},
    x_{2,1} = w_{3,1},    x_{2,2} = q_{2,2},
    y_{2,1} = w_{2,1},    y_{2,2} = q_{3,2},

yielding X_1, X_2, and Y_2. Computing

    Φ_1 = X_1^T E_1 Y_2,    Θ_1 = X_1^T F_1 V,
    Φ_2 = X_2^T E_2 U,      Θ_2 = X_2^T F_2 Y_2,

the product Σ_φθ = Φ_2^{-1} Θ_2 Φ_1^{-1} Θ_1 is equal to Σ̂, as expected.
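Example 1 can be replayed in floating point. Since the entries of E and F did not survive in this copy, we assume E = I and F = [[2, 1], [1, 0]], which reproduces the stated Ξ̂; the construction below is our own direct one (via one SVD), not the periodic QZ computation used in the paper:

```python
import numpy as np

# Assumed data: E = I and F chosen so that Xi_hat = E^{-1} F equals the
# stated matrix [[2, 1], [1, 0]] (the paper's actual E, F entries are lost).
E = np.eye(2)
F = np.array([[2.0, 1.0], [1.0, 0.0]])

Xi_hat = np.linalg.solve(E, F)
U, s, Vt = np.linalg.svd(Xi_hat)                 # Xi_hat = U diag(s) V^T

X = np.linalg.inv(E @ U).T                       # then X^T E U = I
Phi = X.T @ E @ U                                # = I
Theta = X.T @ F @ Vt.T                           # = diag(s)

Sigma_phi_theta = np.linalg.inv(Phi) @ Theta
assert np.allclose(Sigma_phi_theta, np.diag(s))  # Sigma_phi_theta equals Sigma-hat
print(np.diag(Sigma_phi_theta))                  # singular values 1 + sqrt(2), sqrt(2) - 1
```

For this Ξ̂ the singular values are 1 + √2 and √2 − 1, so Σ_φθ = Φ^{-1}Θ recovers Σ̂ exactly as in the example.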

6. Conclusion. In this paper the relationship between the QSVD and the periodic QZ decomposition is elaborated. Specifically, the periodic QZ algorithm may be viewed as a method by which the QSVD may be computed as accurately as, albeit half as efficiently as, the standard algorithm. Nevertheless, by using this technique the periodic QSVD of a sequence of matrix pairs may be readily computed. In this paper we have discussed the QSVD; however, other canonical forms, such as those discussed in previous papers, or the computation of principal angles and vectors [4], may be cast in the periodic QZ framework. This demonstrates the versatility of the periodic QZ decomposition: it provides a theoretical basis for computing eigenvalue or singular value revealing matrix decompositions.

REFERENCES

[1] A. Bojanczyk, G. H. Golub, and P. Van Dooren, The periodic Schur decomposition: algorithms and applications, in Proc. SPIE Conf.
[2] B. De Moor and P. Van Dooren, Generalizations of the singular value and QR decompositions, SIAM J. Matrix Anal. Appl., 13 (1992).
[3] G. H. Golub and W. Kahan, Calculating the singular values and pseudo-inverse of a matrix, SIAM J. Numer. Anal. Ser. B, 2 (1965).
[4] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed., The Johns Hopkins University Press, Baltimore, Maryland, 1989.
[5] M. T. Heath, A. J. Laub, C. C. Paige, and R. C. Ward, Computing the singular value decomposition of a product of two matrices, SIAM J. Sci. Stat. Comput., 7 (1986).
[6] J. J. Hench, Numerical Methods for Periodic Linear Systems, Ph.D. thesis, University of California, Santa Barbara, Electrical and Computer Engineering Dept., September.
[7] J. J. Hench, The quotient singular value decomposition and the periodic QZ decomposition, Tech. Report 1775, Institute of Information and Control Theory, Academy of Sciences of the Czech Republic, Pod vodárenskou věží 4, Praha 8, October.
[8] J. J. Hench, Special decompositions via the periodic Schur and QZ decompositions, Tech. Report 1774, Institute of Information and Control Theory, Academy of Sciences of the Czech Republic, Pod vodárenskou věží 4, Praha 8, September.
[9] J. J. Hench and A. J. Laub, Numerical solution of the discrete-time periodic Riccati equation, IEEE Trans. Automat. Control, 39 (1994).
[10] C. B. Moler and G. W. Stewart, An algorithm for generalized matrix eigenvalue problems, SIAM J. Numer. Anal., 10 (1973).
[11] C. C. Paige and M. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal., 18 (1981).
[12] C. F. Van Loan, A general matrix eigenvalue algorithm, SIAM J. Numer. Anal., 12 (1975).
[13] C. F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal., 13 (1976), pp. 76-83.


More information

On the loss of orthogonality in the Gram-Schmidt orthogonalization process

On the loss of orthogonality in the Gram-Schmidt orthogonalization process CERFACS Technical Report No. TR/PA/03/25 Luc Giraud Julien Langou Miroslav Rozložník On the loss of orthogonality in the Gram-Schmidt orthogonalization process Abstract. In this paper we study numerical

More information