REORTHOGONALIZATION FOR GOLUB-KAHAN-LANCZOS BIDIAGONAL REDUCTION: PART II - SINGULAR VECTORS

JESSE L. BARLOW
Department of Computer Science and Engineering, The Pennsylvania State University, University Park, PA, USA.

The research of Jesse L. Barlow was supported by the National Science Foundation under grant no. CCF.

Abstract. The Golub-Kahan-Lanczos bidiagonal reduction generates a factorization of a matrix X ∈ R^{m×n}, m ≥ n, such that X = U B V^T where U ∈ R^{m×n} is left orthogonal, V ∈ R^{n×n} is orthogonal, and B ∈ R^{n×n} is bidiagonal. When the Lanczos recurrence is implemented in finite precision arithmetic, the columns of U and V tend to lose orthogonality, making a reorthogonalization strategy necessary to preserve convergence of the singular values. A new strategy is proposed for recovering the left singular vectors. When that strategy is used, it is shown that, in floating point arithmetic with machine unit ε_M, if orth(V) = ||I − V^T V||_2 and Θ_j = diag(θ_1, ..., θ_j) holds the leading j singular values of B_k = B(1:k, 1:k), then the corresponding approximate left singular vectors of X in the matrix P_j satisfy

||I − P_j^T P_j||_F ≤ O( [ (ε_M + orth(V)) ||X||_2 / θ_j ]^2 ).

Therefore, regulating the orthogonality of the columns of V, even if there is no attempt to reorthogonalize the columns of U, serves to preserve orthogonality in the leading approximate left singular vectors. More importantly, the new strategy for computing left singular vectors produces smaller residual bounds for computed singular triplets.

AMS subject classifications. 65F15, 65F25.

Key words. Lanczos vectors, orthogonality, singular vectors, left orthogonal matrix.

1. Introduction. Bidiagonal reduction, the first step in many algorithms for computing the singular value decomposition (SVD) [9, 2], is also used for solving least squares problems [16, 13], for solving ill-posed problems [8, 4, 11], for the computation of matrix functions [7, 10], and for matrix approximation [3], including the solution of the Netflix problem [12]. In [9], Golub and Kahan give two Lanczos-based bidiagonal reduction algorithms which we call the Golub-Kahan-Lanczos (GKL) algorithms. The first GKL algorithm takes a matrix X ∈ R^{m×n}, m ≥ n, and generates the factorization

(1.1)   X = U B V^T,

with

(1.2)   U = (u_1, ..., u_n) ∈ R^{m×n}   left orthogonal,
(1.3)   V = (v_1, ..., v_n) ∈ R^{n×n}   orthogonal,

and B ∈ R^{n×n} having the upper bidiagonal form

(1.4)   B def= ubidiag(γ_1, ..., γ_n; φ_2, ..., φ_n),

the upper bidiagonal matrix with diagonal entries γ_1, ..., γ_n and superdiagonal entries φ_2, ..., φ_n.

For certain structured matrices, even with reorthogonalization, this GKL algorithm yields a faster method of producing a bidiagonal reduction with which to compute the complete singular value decomposition. For large sparse matrices, it is often the method of choice for computing a few singular values and associated singular vectors.

The recurrence generating the decomposition (1.1)-(1.4) is constructed by choosing a vector v_1 ∈ R^n with ||v_1||_2 = 1, letting u_k ∈ R^m, k = 1, ..., n, and v_k ∈ R^n, k = 2, ..., n, be unit vectors, and letting γ_k, k = 1, ..., n, and φ_k, k = 2, ..., n, be scaling constants such that

(1.5)   γ_1 u_1 = X v_1,
(1.6)   φ_{k+1} v_{k+1} = X^T u_k − γ_k v_k,   k = 1, ..., n−1,
(1.7)   γ_{k+1} u_{k+1} = X v_{k+1} − φ_{k+1} u_k.

The other GKL algorithm in [9] starts with u_1 and instead generates a lower bidiagonal matrix. The discussion below also applies to that recurrence if we note that the other GKL algorithm is equivalent to (1.5)-(1.7) applied to (u_1  X) with v_1 = e_1. For our purposes, we associate V with the smaller of the two dimensions m and n of X. The recurrence (1.5)-(1.7) is equivalent to the symmetric Lanczos tridiagonalization algorithm performed on the matrix

M = [ 0    X^T ]
    [ X    0   ]

with the starting vector [ v_1 ; 0 ].

Since the vectors u_1, ..., u_n and v_1, ..., v_n tend to lose orthogonality in finite precision arithmetic, reorthogonalization is performed when the bidiagonal reduction algorithm (1.5)-(1.7) is used to compute the singular value decomposition as in [9], in regularization algorithms as in [4, 8], or in the computation of matrix functions as in [7]. Paige [14] points out that the loss of orthogonality in Lanczos reductions is structured in the sense that it coincides with the convergence of approximate eigenvalues and eigenvectors (called Ritz values and vectors). Parlett and Scott [18] use this observation to develop partial reorthogonalization procedures. A good summary of the surrounding issues is given by Parlett [17, Chapter 13].
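To make the effect concrete, the recurrence (1.5)-(1.7) can be run with no reorthogonalization at all. The following MATLAB sketch is illustrative only; it is not one of the algorithms of [9] or [1], and the function name gkl_plain is ours. Running it and monitoring ||I − V_k^T V_k||_2 and ||I − U_k^T U_k||_2 exhibits the loss of orthogonality described above.

    % Plain GKL recurrence (1.5)-(1.7), no reorthogonalization (illustration only).
    function [B, U, V] = gkl_plain(X, v1, k)
      [m, n] = size(X);
      U = zeros(m, k);  V = zeros(n, k);  B = zeros(k, k);
      V(:, 1) = v1 / norm(v1);
      s = X * V(:, 1);  B(1, 1) = norm(s);  U(:, 1) = s / B(1, 1);     % (1.5)
      for j = 2:k
        r = X' * U(:, j-1) - B(j-1, j-1) * V(:, j-1);                  % (1.6)
        B(j-1, j) = norm(r);  V(:, j) = r / B(j-1, j);
        s = X * V(:, j) - B(j-1, j) * U(:, j-1);                       % (1.7)
        B(j, j) = norm(s);    U(:, j) = s / B(j, j);
      end
    end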

In their version of Lanczos bidiagonal reduction, Simon and Zha [19] reorthogonalize only the right Lanczos vectors v_1, ..., v_n and show that, under certain assumptions, the loss of orthogonality in u_1, ..., u_n is bounded. Their results assume that, if we let

(1.8)   U_k = (u_1, ..., u_k),   V_k = (v_1, ..., v_k),
(1.9)   B_k = ubidiag(γ_1, ..., γ_k; φ_2, ..., φ_k),

then, in floating point arithmetic, the matrices in (1.8)-(1.9) satisfy a recurrence of the form

(1.10)   X V_k = U_k B_k + E_k^(R),
(1.11)   X^T U_k = V_k B_k^T + φ_{k+1} v_{k+1} e_k^T + E_k^(L),

and

(1.12)   || [ E_k^(R) ; E_k^(L) ] ||_F ≤ ε_M f(m, n) ||X||_F

for some modest function f(m, n), where ε_M is the machine unit. As also pointed out in [19], when reorthogonalization is done, the bounds (1.12) do not hold. To understand how the algorithm works with reorthogonalization of V_k, we define three measures of the loss of orthogonality,

(1.13)   orth(V_k) def= ||I − V_k^T V_k||_2,
(1.14)   η_k = ||V_{k−1}^T v_k||_2,   η̄_k = ( Σ_{j=1}^k η_j^2 )^{1/2}.

Since orth(V_k) satisfies the upper bound

(1.15)   orth(V_k) ≤ ||I − V_k^T V_k||_F ≤ ( 2 Σ_{j=1}^k η_j^2 )^{1/2} = √2 η̄_k

and the lower bound

orth(V_k) ≥ max_{1≤j≤k} η_j ≥ k^{−1/2} η̄_k,

orth(V_k) and η̄_k are large or small together. Thus we express our bounds in terms of η̄_k with the understanding that, with minor modification, they could be expressed in terms of orth(V_k).
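For reference, the measures (1.13)-(1.14) are inexpensive to evaluate from the computed V_k. A small MATLAB sketch (illustrative only; the name orth_measures is ours) is

    % Orthogonality measures (1.13)-(1.14) for V_k, stored columnwise in V.
    function [orthV, eta, etabar] = orth_measures(V)
      k = size(V, 2);
      orthV = norm(eye(k) - V' * V);             % orth(V_k) = ||I - V_k^T V_k||_2
      eta = zeros(k, 1);
      for j = 2:k
        eta(j) = norm(V(:, 1:j-1)' * V(:, j));   % eta_j = ||V_{j-1}^T v_j||_2
      end
      etabar = norm(eta);                        % etabar_k = (sum_j eta_j^2)^(1/2)
    end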

In §3.2 (Theorem 3.2), another replacement for (1.10)-(1.11) is given by

(1.16)   X V_k = Û_k B_k + F_k,
(1.17)   X^T Û_k = V_k B_k^T + φ_{k+1} v_{k+1} e_k^T + G_k,

where Û_k is computed from U_k using Function 3.1. In Theorem 3.2, the error matrices F_k and G_k are shown to satisfy

(1.18)   || [ F_k ; G_k ] ||_F ≤ [ ε_M h_1(m, n) + η̄_n h_2(n) ] ||X||_2 + O(ε_M^2)

for modestly growing functions h_1(·) and h_2(·). The orthogonality maintained in V_k guarantees a bound on the loss of orthogonality in Û_k, thereby assuring better orthogonality in the left singular vectors. Moreover, (1.16)-(1.17) guarantees that the approximate singular values and vectors recovered from this process have smaller residuals than those produced by the algorithm in [19]. Thus we recommend altering the GKL algorithm to recover the approximate left singular vectors as discussed in §3.2. We also show that

orth(U_k) = O( ||X||_2 ||B_k^{-1}||_2 η̄_k )   and   orth(Û_k) = O( [ ||X||_2 ||B_k^{-1}||_2 η̄_k ]^2 ).

The matrix W_k in Theorem 3.1 arises out of an observation about modified Gram-Schmidt that Charles Sheffield made to Gene Golub. That observation motivates [15] and was used implicitly in [5, 2, 6].

We structure this paper as follows. In §2, we state the algorithm and define notation. In §3.1, we restate the main theorem (Theorem 3.1) from [1]; in §3.2, we use Theorem 3.1 to produce an algorithm to compute Û_k satisfying (1.16)-(1.18) (Theorem 3.2), with orthogonality bounds on Û_k (Corollary 3.4), plus residual and orthogonality bounds on the approximate left singular vectors (Corollaries 3.3 and 3.5); and in §3.3 we give bounds on the loss of orthogonality in the left Lanczos vectors (Corollary 3.9). In §4, we give numerical tests based upon regulating the orthogonality of V_k in various ways, and we follow these with a conclusion in §5.

2. The Lanczos Bidiagonal Recurrence with Reorthogonalization. In exact arithmetic, the columns of V in (1.3), computed according to (1.5)-(1.7), are orthonormal, but, in floating point arithmetic, some reorthogonalization of these vectors is necessary. A model of how that reorthogonalization could be done is summarized from [1] below. To recover v_{k+1} from v_1, ..., v_k and u_1, ..., u_k, we compute

(2.1)   r_k = X^T u_k − γ_k v_k,

then reorthogonalize r_k against v_1, ..., v_k so that

(2.2)   φ_{k+1} v_{k+1} = r_k − Σ_{j=1}^k ĥ_{j,k+1} v_j
(2.3)                  = r_k − V_k ĥ_{k+1},   ĥ_{k+1} = (ĥ_{1,k+1}, ..., ĥ_{k,k+1})^T,

for some coefficients ĥ_{j,k+1}, j = 1, ..., k. Combining (2.1) and (2.3), we have that

(2.4)   φ_{k+1} v_{k+1} = X^T u_k − V_k h_{k+1},   where   h_{k+1} = γ_k e_k + ĥ_{k+1}.

To encapsulate our approaches to reorthogonalization, we assume the existence of a general function reorthog that performs step (2.3) in some manner. Thus the (k+1)st right Lanczos vector comes from (2.1) followed by

(2.5)   [v_{k+1}, ĥ_{k+1}, φ_{k+1}] = reorthog(B_k, V_k, r_k),

where B_k, as given by (1.9), may provide information needed by the partial reorthogonalization schemes.
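One concrete possibility for reorthog in (2.5) is full reorthogonalization of r_k against all of V_k, with a second Gram-Schmidt pass for safety. The MATLAB sketch below is illustrative only; it is not one of the specific schemes of [1], and the name reorthog_full is ours.

    % A simple full-reorthogonalization realization of (2.5):
    % orthogonalize r_k against every column of V_k (two Gram-Schmidt passes).
    function [v, h, phi] = reorthog_full(B, V, r)
      % B is accepted for interface compatibility but is not used by this variant.
      h = V' * r;   r = r - V * h;     % first pass, as in (2.2)-(2.3)
      g = V' * r;   r = r - V * g;     % second pass to recover lost orthogonality
      h = h + g;
      phi = norm(r);
      v = r / phi;
    end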

In floating point arithmetic, we assume that the steps (2.1) and (2.5) produce vectors v_{k+1} and h_{k+1} and a scalar φ_{k+1} such that

(2.6)   X^T u_k = V_k h_{k+1} + φ_{k+1} v_{k+1} + β_{k+1},
(2.7)   ||β_{k+1}||_2 ≤ ε_M q(m) ||X||_2,

for some modest sized function q(m). The value of q(m) varies depending upon the reorthogonalization method used, but, for, say, the complete reorthogonalization scheme in Function 3.1 of [1], we would have q(m) = O(m). The following function specifies the first k steps of the Lanczos bidiagonal reduction.

Function 2.1 (First k steps of Lanczos Bidiagonal Reduction with reorthogonalization).

    function [B_k, U_k, V_k] = lanczos_bidiag(X, v_1, k)
      V_1 = (v_1);
      s_1 = X v_1;  γ_1 = ||s_1||_2;  u_1 = s_1/γ_1;
      U_1 = (u_1);  B_1 = (γ_1);
      for j = 2:k
        r_j = X^T u_{j−1} − γ_{j−1} v_{j−1};
        [v_j, ĥ_j, φ_j] = reorthog(B_{j−1}, V_{j−1}, r_j);
        s_j = X v_j − φ_j u_{j−1};
        γ_j = ||s_j||_2;  u_j = s_j/γ_j;
        V_j = (V_{j−1}, v_j);   U_j = (U_{j−1}, u_j);
        B_j = [ B_{j−1}   φ_j e_{j−1} ;
                0         γ_j        ];
      end;
    end % lanczos_bidiag

We discuss three specific methods for performing the reorthogonalization in reorthog in [1, §4].
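For orientation, a sketch of how Function 2.1 might be driven once it is implemented as a MATLAB function lanczos_bidiag, with reorthog supplied (for instance, the full-reorthogonalization sketch above, renamed or wrapped as reorthog), is given below. The dimensions and the checks are illustrative only.

    % Illustrative driver for Function 2.1 on a random matrix.
    m = 300;  n = 200;  k = 50;
    X  = randn(m, n);
    v1 = randn(n, 1);  v1 = v1 / norm(v1);
    [B, U, V] = lanczos_bidiag(X, v1, k);                 % Function 2.1
    resid = norm(X * V - U * B, 'fro') / norm(X, 'fro');  % small; cf. Theorem 3.8
    orthV = norm(eye(k) - V' * V);   % controlled by the choice of reorthog
    orthU = norm(eye(k) - U' * U);   % not controlled; cf. (1.13) and Corollary 3.9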

3. GKL Bidiagonalization with One-Sided Reorthogonalization.

3.1. Summary of the Main Theorem from [1]. The results in this paper are based upon Theorem 3.1, stated next and proved in [1]. We assume that orth(V_k) < 1 and let

(3.1)   ω_k def= (1 + q(m) ε_M) / (1 − orth(V_k))^{1/2}.

Theorem 3.1. Let Function 2.1 be implemented in floating point arithmetic with machine unit ε_M. Assume that V_k = (v_1, ..., v_k), with orthogonality parametrized by η_k, k = 1, ..., n, as in (1.14), U_k = (u_1, ..., u_k), and B_k = ubidiag(γ_1, ..., γ_k; φ_2, ..., φ_k) are output from that function. Assume also that orth(V_k) in (1.13) satisfies orth(V_k) < 1. Define

(3.2)   C_k = [ 0 ; X V_k ],   where the zero block is n × k,
(3.3)   W_j = I − w_j w_j^T,   w_j = [ e_j ; −u_j ],
(3.4)   W_k = W_1 ⋯ W_k.

If q(m) is defined in (2.7) and ω_k is given by (3.1), then

(3.5)   C_k + δC_k = W_k [ B_k ; 0 ],   where the zero block is (m + n − k) × k,
(3.6)   ||δC_k||_F ≤ [ f_1(m, n, k) ε_M + f_2(k) η̄_k ] ||X||_2 + O(ε_M^2),

where

(3.7)   f_1(m, n, k) = √k [ √(2/3) q(m) + √m + √n + 2 ],   f_2(k) = √(2/3) ω_k k^{3/2}.

The matrix W_k is orthogonal since it is the product of Householder matrices; details about its structure are given in [15, Theorem 2.1]. In §3.2, we discuss the impact of Theorem 3.1 on the construction of the left singular vectors.

3.2. Residual Bounds and Left Singular Vectors. To capture good approximate left singular vectors from the GKL process, we first rewrite equation (3.5) as

(3.8)   [ 0 ; X V_k ] + [ δC_k^(1) ; δC_k^(2) ] = W_k [ B_k ; 0 ],

where δC_k^(1) is n × k and δC_k^(2) is m × k. The bottom block row of this equation is

(3.9)   X V_k + δC_k^(2) = Û_k B_k,

where Û_k is defined by

Û_k = W_k(n+1 : m+n, 1:k) = ( 0   I_m ) W_1 ⋯ W_k [ I_k ; 0 ],   W_j = I − w_j w_j^T,   w_j = [ e_j ; −u_j ].

It is easily verified that

(3.10)   Û_k = W_n(n+1 : m+n, 1:k) = Û_n(:, 1:k).

In the construction of approximate left singular vectors, we need the operation

(3.11)   P = Û_k Q,   Q ∈ R^{k×k},   P ∈ R^{m×k}.

We note that (3.11) is equivalent to the computation

[ P^c ; P ] = W_1 ⋯ W_k [ Q ; 0 ],

where P^c is n × k and each W_j is constructed from the left Lanczos vector u_j. Thus, if we let U_k = (u_1, ..., u_k), the following function performs the operation (3.11).

Function 3.1 (Multiplication with Û_k).

    function P = UhatMult(U_k, Q)
      [m, k] = size(U_k);
      P = U_k(:, k) Q(k, :);
      for j = k−1: −1: 1
        g = P^T U_k(:, j);
        P = P − U_k(:, j) (g^T − Q(j, :));
      end;
    end % UhatMult

Note that Û_k = UhatMult(U_k, I_k). Function 3.1 is closely related to the modified Gram-Schmidt procedure in [5], so we refer to Û_k as the MGSed Lanczos vectors in §4 and particularly in Figure 4.1.
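As a quick check in MATLAB (a sketch continuing the illustrative driver of §2, with Function 3.1 coded as UhatMult), the MGSed left Lanczos vectors and their orthogonality can be obtained from

    % Recover the MGSed left Lanczos vectors and compare orthogonality (illustration).
    Uhat = UhatMult(U, eye(k));
    orthU    = norm(eye(k) - U' * U);        % left Lanczos vectors
    orthUhat = norm(eye(k) - Uhat' * Uhat);  % MGSed vectors; bounded as in Corollary 3.4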

We now give a residual theorem for V_k, Û_k, and B_k.

Theorem 3.2. Assume the hypothesis and terminology of Theorem 3.1. Then, for k = 1, ..., n, with the convention that φ_{n+1} v_{n+1} = 0, we have (1.16)-(1.17), where Û_k is given in (3.9) and F_k and G_k satisfy (1.18) with

(3.12)   h_1(m, n) = 2 f_1(m, n, n),   h_2(n) = 2 f_2(n) + √(2n).

Proof. To prove (1.16), we note that the bottom block row of (3.5)-(3.7) may be written

(3.13)   X V_k = Û_k B_k + F_k,   F_k = −δC_k(n+1 : m+n, :),

where

||F_k||_F ≤ ||δC_k||_F ≤ ||δC_n||_F.

From the bound on ||δC_n||_F in (3.6)-(3.7), the inequality for F_k in (1.18) is satisfied.

To prove (1.17), write (3.5)-(3.7) for k = n. Then

[ 0 ; X V_n ] = W_n [ B_n ; 0 ] − δC_n.

If we multiply on the left by W_n^T and on the right by V_n^T, then

W_n^T [ 0 ; X V_n ] V_n^T = [ B_n ; 0 ] V_n^T − W_n^T (δC_n) V_n^T,

which can be reorganized into

(3.14)   W_n^T [ 0 ; X ] = [ B_n ; 0 ] V_n^T + W_n^T [ 0 ; X (I − V_n V_n^T) ] − W_n^T (δC_n) V_n^T.

Using the definition of Û_n, taking the first block row of (3.14) and transposing it, we get

(3.15)   X^T Û_n = V_n B_n^T + (I − V_n V_n^T) X^T Û_n − V_n (δC_n)^T Û_n.

If we take just the first k columns, we have

(3.16)   X^T Û_k = V_k B_k^T + φ_{k+1} v_{k+1} e_k^T + G_k,
(3.17)   G_k = (I − V_n V_n^T) X^T Û_k − V_n (δC_n)^T Û_k.

Thus

(3.18)   ||G_k||_F ≤ ||I − V_n V_n^T||_2 ||X^T Û_k||_F + ||δC_n||_F.

Using the fact that

||I − V_n V_n^T||_2 = ||I − V_n^T V_n||_2 = orth(V_n),

we have

(3.19)   ||G_k||_F ≤ orth(V_n) ||X||_F + ||δC_n||_F,

thus, combining (3.13) with (3.19), we get

(3.20)   || [ F_k ; G_k ] ||_F ≤ 2 ||δC_n||_F + orth(V_n) ||X||_F.

The bound in (1.18) follows by plugging in our bound on ||δC_n||_F and the facts that orth(V_n) ≤ √2 η̄_n and ||X||_F ≤ √n ||X||_2. □

No bound comparable to (1.18) has been shown when U_k, the matrix of left Lanczos vectors, is substituted for Û_k. In fact, our tests in §4 (see Figures 4.2, 4.3, and 4.4) give evidence that such a bound is unlikely to hold.

Theorem 3.2 leads to bounds on the residuals of approximate singular triplets harvested from the GKL algorithm.

Corollary 3.3. Assume the hypothesis and terminology of Theorem 3.1. Let B_k have a computed singular value decomposition that satisfies

(3.21)   B_k + δB_k = Q Θ S^T,
(3.22)   Q = (q_1, ..., q_k) = (q_{ij}),   S = (s_1, ..., s_k) = (s_{ij}),
(3.23)   Θ = diag(θ_1, ..., θ_k),
(3.24)   ||δB_k||_F ≤ [ ε_M + η̄_k ] d(k) ||X||_2.

If

(3.25)   P_k = Û_k Q = (p_1, ..., p_k),
(3.26)   Z_k = V_k S = (z_1, ..., z_k),

then

(3.27)   X z_j − θ_j p_j = δ_j^(R),
(3.28)   X^T p_j − θ_j z_j = φ_{k+1} q_{kj} v_{k+1} + δ_j^(L).

In (3.27)-(3.28), if

Δ_k^(R) = (δ_1^(R), ..., δ_k^(R)),   Δ_k^(L) = (δ_1^(L), ..., δ_k^(L)),

then

(3.29)   || [ Δ_k^(R) ; Δ_k^(L) ] ||_F ≤ [ ε_M h_3(m, n, k) + η̄_n h_4(n, k) ] ||X||_2,

where

(3.30)   h_3(m, n, k) = h_1(m, n) + √2 d(k),   h_4(n, k) = h_2(n) + √2 d(k).

Proof. Multiplying (1.16) on the right by S and (1.17) on the right by Q, we have

(3.31)   X V_k S = Û_k B_k S + F_k S,
(3.32)   X^T Û_k Q = V_k B_k^T Q + φ_{k+1} v_{k+1} (e_k^T Q) + G_k Q.

Equations (3.31)-(3.32) become

(3.33)   X Z_k = P_k Θ + Δ_k^(R),
(3.34)   X^T P_k = Z_k Θ + φ_{k+1} v_{k+1} (e_k^T Q) + Δ_k^(L),

where

Δ_k^(R) = −Û_k (δB_k) S + F_k S,   Δ_k^(L) = −V_k (δB_k)^T Q + G_k Q.

Thus

|| [ Δ_k^(R) ; Δ_k^(L) ] ||_F ≤ √2 ||δB_k||_F + || [ F_k ; G_k ] ||_F.

Since ||[F_k; G_k]||_F is bounded by (1.18) and (3.12) and ||δB_k||_F is bounded in (3.24), we have (3.29)-(3.30). The jth column of (3.33)-(3.34) is just (3.27)-(3.28), where

δ_j^(R) = −Û_k (δB_k) s_j + F_k s_j,   δ_j^(L) = −V_k (δB_k)^T q_j + G_k q_j.  □

Corollary 3.3 allows us to monitor the convergence of singular triplets. Provided that we maintain good orthogonality in V_k, the triple (θ_j, p_j, z_j) can be accepted as a singular triplet when φ_{k+1} |q_{kj}| is sufficiently small.
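In a MATLAB implementation this acceptance test takes only a few lines. The sketch below is illustrative and continues the driver of §2; here phi_next stands for φ_{k+1}, which would be produced by one further reorthogonalization step, and tol is a user-chosen tolerance; both are assumptions of this sketch rather than quantities defined in the text.

    % Acceptance of singular triplets per Corollary 3.3 (illustration only).
    [Q, Theta, S] = svd(full(B));               % B = B_k, so B = Q*Theta*S'
    P = UhatMult(U, Q);                         % left singular vector estimates (Function 3.1)
    Z = V * S;                                  % right singular vector estimates
    converged = phi_next * abs(Q(k, :)') <= tol * Theta(1, 1);
    accepted  = find(converged);                % j with (theta_j, p_j, z_j) accepted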

The next two corollaries comment upon the orthogonality of Û_k and of the computed left singular vectors.

Corollary 3.4. Assume the hypothesis and terminology of Theorem 3.1 and let Û_k be as defined in Theorem 3.2. Then

(3.35)   orth(Û_k) = ||I − Û_k^T Û_k||_2 = ||δC_k(1:n, :) B_k^{-1}||_2^2
(3.36)            ≤ ( f_1(m, n, k) ε_M + f_2(k) η̄_k )^2 ||X||_2^2 ||B_k^{-1}||_2^2.

Proof. We have that

[ 0 ; X V_k ] = W_k [ B_k ; 0 ] − δC_k.

If we let

δC_k = [ δC_k^(1) ; δC_k^(2) ],   δC_k^(1) ∈ R^{n×k},   δC_k^(2) ∈ R^{m×k},

and W_11 = W_k(1:n, 1:k), then

(3.37)   [ δC_k^(1) ; X V_k + δC_k^(2) ] = [ W_11 ; Û_k ] B_k,

so that W_11 = δC_k^(1) B_k^{-1} = δC_k(1:n, :) B_k^{-1}. Since W_k is exactly orthogonal, we have that

Û_k^T Û_k + W_11^T W_11 = I_k,

so that

||I − Û_k^T Û_k||_2 = ||W_11^T W_11||_2 = ||W_11||_2^2.

Thus,

orth(Û_k) = ||δC_k(1:n, :) B_k^{-1}||_2^2 ≤ ( f_1(m, n, k) ε_M + f_2(k) η̄_k )^2 ||X||_2^2 ||B_k^{-1}||_2^2.  □

For the leading left singular vectors, we can obtain an even stronger bound on the loss of orthogonality.

Corollary 3.5. Assume the hypothesis and terminology of Corollary 3.4. Let B_k have the computed singular value decomposition in Corollary 3.3, and let P_j = Û_k Q(:, 1:j) be the matrix of the first j approximate left singular vectors. Then

(3.38)   ||I_j − P_j^T P_j||_2 ≤ ( f_3(m, n, k) ε_M + f_4(k) η̄_k )^2 ||X||_2^2 / θ_j^2,

where

f_3(m, n, k) = f_1(m, n, k) + d(k),   f_4(k) = f_2(k) + d(k).

Proof. Using the upper block of (3.37) and the assumptions (3.21)-(3.23), we have that

(3.39)   [ (δC_k^(1)) S + W_11 (δB_k) S ; X Z_k + δC_k^(2) S + Û_k (δB_k) S ] = [ P_11 ; P_k ] Θ,

where P_11 = W_11 Q and P_k = Û_k Q. For each j = 1, ..., k, the matrix

[ P_11(:, 1:j) ; P_j ]

is left orthogonal, therefore

I_j − P_j^T P_j = P_11(:, 1:j)^T P_11(:, 1:j),

thus

(3.40)   orth(P_j) = ||I_j − P_j^T P_j||_2 = ||P_11(:, 1:j)^T P_11(:, 1:j)||_2 = ||P_11(:, 1:j)||_2^2.

From (3.39), we note that

P_11(:, 1:j) = [ (δC_k^(1)) S(:, 1:j) + W_11 (δB_k) S(:, 1:j) ] Θ_j^{-1},

where Θ_j = diag(θ_1, ..., θ_j). Thus,

(3.41)   ||P_11(:, 1:j)||_2 ≤ [ ||(δC_k^(1)) S(:, 1:j)||_2 + ||W_11 (δB_k) S(:, 1:j)||_2 ] / θ_j
                          ≤ [ ||δC_k||_2 + ||δB_k||_2 ] / θ_j
                          ≤ [ ε_M f_3(m, n, k) + η̄_k f_4(k) ] ||X||_2 / θ_j.

Combining (3.40) and (3.41) yields (3.38). □

3.3. Orthogonality Results for Left Lanczos Vectors. We can show bounds on the orthogonality of the left Lanczos vectors U_k = (u_1, ..., u_k), beginning with a result that yields an exactly left orthogonal matrix Ū_k that is close to U_k. Our bounds on the orthogonality of the left Lanczos vectors are similar to those in [2]. The bounds in [2] and Theorem 3.7 below use a fact about orthogonal factorizations formalized by Paige [15, Theorem 4.1] and given as Theorem 3.6 next. That fact has been used implicitly in [5, 6] and in some papers cited in [15]. Only the part of Theorem 4.1 from [15] necessary to prove our result is given.

Theorem 3.6 ([15, Theorem 4.1]). For Y ∈ R^{m×k}, m ≥ k, let

C = [ 0 ; Y ],   where the zero block is n × k,

and let

C + δC = [ δC^(1) ; Y + δC^(2) ] = W_1 R,

where W_1 ∈ R^{(m+n)×s} is left orthogonal and R ∈ R^{s×k} with s ≤ n. If W_1 is partitioned as

W_1 = [ W_11 ; W_21 ],   W_11 ∈ R^{n×s},   W_21 ∈ R^{m×s},

then there exists a left orthogonal Ū ∈ R^{m×s} such that

(3.42)   Y + δY = Ū R,
(3.43)   δY = F W_11^T (δC^(1)) + δC^(2),

where ||F||_2 ∈ [0.5, 1].

We now use Theorem 3.6 to prove a backward error bound on X V_k.

Theorem 3.7. Assume the hypothesis and notation of Theorem 3.1. For k = 1, ..., n, there exists an exactly left orthogonal Ū_k ∈ R^{m×k} such that

(3.44)   X V_k + δX_k = Ū_k B_k,
(3.45)   ||δX_k||_F ≤ √2 ||δC_k||_F ≤ √2 [ f_1(m, n, k) ε_M + f_2(k) η̄_k ] ||X||_2 + O(ε_M^2).

Proof. From Theorem 3.6, using the partition from (3.37), we may conclude that

X V_k + δX_k = Ū_k B_k,   δX_k = F W_11^T (δC_k^(1)) + δC_k^(2),

for some left orthogonal Ū_k ∈ R^{m×k} and some F with ||F||_2 ∈ [0.5, 1]. Thus

||δX_k||_F ≤ ||F||_2 ||δC_k^(1)||_F + ||δC_k^(2)||_F ≤ ||δC_k^(1)||_F + ||δC_k^(2)||_F ≤ √2 ( ||δC_k^(1)||_F^2 + ||δC_k^(2)||_F^2 )^{1/2} = √2 ||δC_k||_F.

Combining this result with (3.5)-(3.7) yields (3.45). □

To prove results about the orthogonality of the left Lanczos vectors and the left singular vectors, we need the relationship between the computed V_k and the computed U_k given in the next theorem.

Theorem 3.8. Assume the hypothesis and notation of Theorem 3.7. Then

X V_k + E_k = U_k B_k,   ||E_k||_F ≤ 3 n √k ε_M ||X||_2 + O(ε_M^2).

Proof. For k = 1, the computed version of Function 2.1 produces

s_1 + δs_1 = X v_1,   ||δs_1||_2 ≤ n ε_M ||X||_2 + O(ε_M^2),
s_1 = (γ_1 + δγ_1) u_1,   |δγ_1| ≤ n ε_M γ_1 + O(ε_M^2) ≤ n ε_M ||X||_2 + O(ε_M^2).

Thus

(3.46)   X v_1 + α_1 = γ_1 u_1,   where   α_1 = −[ (δγ_1) u_1 + δs_1 ],

so that

||α_1||_2 ≤ |δγ_1| + ||δs_1||_2 ≤ 2n ε_M ||X||_2 + O(ε_M^2) ≤ 3n ε_M ||X||_2 + O(ε_M^2).

For higher values of k, we have

(3.47)   s_k + δs_k = X v_k − φ_k u_{k−1},   ||δs_k||_2 ≤ 2n ε_M ||X||_2 + O(ε_M^2),
(3.48)   s_k = (γ_k + δγ_k) u_k,   |δγ_k| ≤ n ε_M γ_k + O(ε_M^2) ≤ n ε_M ||X||_2 + O(ε_M^2),

yielding the bound

(3.49)   X v_k + α_k = γ_k u_k + φ_k u_{k−1},   where   α_k = −[ (δγ_k) u_k + δs_k ],

so that

||α_k||_2 ≤ |δγ_k| + ||δs_k||_2 ≤ 3n ε_M ||X||_2 + O(ε_M^2).

Summarizing, we may state that

X V_k + E_k = U_k B_k,   E_k = (α_1, ..., α_k),

where

||E_k||_F = ||(α_1, ..., α_k)||_F ≤ 3 n √k ε_M ||X||_2 + O(ε_M^2),

which is the desired result. □

Now we bound the distance between U_k and the exactly left orthogonal matrix Ū_k of Theorem 3.7.

Corollary 3.9. Assume the hypothesis and terminology of Theorem 3.8. Then, for Ū_k from Theorem 3.7,

(3.50)   ||Ū_k − U_k||_F ≤ [ ||δX_k||_F + ||E_k||_F ] ||B_k^{-1}||_2
(3.51)              ≤ [ √2 ||δC_k||_F + 3 n √k ε_M ||X||_2 ] ||B_k^{-1}||_2 + O(ε_M^2)
(3.52)              ≤ ||X||_2 ||B_k^{-1}||_2 g(m, n, k, ε_M, η̄_k) + O(ε_M^2),

where

(3.53)   g(m, n, k, ε_M, η̄_k) = √2 [ f_1(m, n, k) ε_M + f_2(k) η̄_k ] + 3 n √k ε_M.

Proof. From Theorems 3.7 and 3.8, we have that

X V_k + δX_k = Ū_k B_k,   X V_k + E_k = U_k B_k.

Subtracting, we get

(Ū_k − U_k) B_k = δX_k − E_k.

Thus

Ū_k − U_k = (δX_k − E_k) B_k^{-1},

leading to the conclusion

||Ū_k − U_k||_F ≤ ||δX_k − E_k||_F ||B_k^{-1}||_2 ≤ ( ||δX_k||_F + ||E_k||_F ) ||B_k^{-1}||_2.

Using the bound on ||δX_k||_F from Theorem 3.7 and that on ||E_k||_F from Theorem 3.8, we obtain the bounds (3.50)-(3.53). □

Remark 1. In Corollary 3.9, we bounded the distance ||Ū_k − U_k||_F, where Ū_k is an exactly left orthogonal matrix. It is somewhat more conventional to bound ||U_k^T U_k − I||_F. Such bounds can be obtained as follows:

||U_k^T U_k − I||_F = ||U_k^T U_k − Ū_k^T Ū_k||_F
                    = ||U_k^T U_k − U_k^T Ū_k + U_k^T Ū_k − Ū_k^T Ū_k||_F
                    ≤ ||U_k^T (U_k − Ū_k)||_F + ||(U_k^T − Ū_k^T) Ū_k||_F
                    ≤ ||(U_k − Ū_k)^T (U_k − Ū_k)||_F + ||Ū_k^T (U_k − Ū_k)||_F + ||(U_k^T − Ū_k^T) Ū_k||_F
                    ≤ ||Ū_k − U_k||_F^2 + 2 ||Ū_k||_2 ||Ū_k − U_k||_F
                    = ||Ū_k − U_k||_F^2 + 2 ||Ū_k − U_k||_F.

Thus, from (3.50), we obtain the bound

(3.54)   ||U_k^T U_k − I||_F ≤ 2 ||X||_2 ||B_k^{-1}||_2 g(m, n, k, ε_M, η̄_k) + O(ε_M^2 + ε_M η̄_k).

4. Numerical Tests.

Example 4.1. For these examples, we construct m × n matrices of the form X = P Σ Z^T, where n = 50, 60, ..., 300, m = 1.5 n, P ∈ R^{m×n} is left orthogonal, Z ∈ R^{n×n} is orthogonal, and Σ is positive and diagonal. The matrices P and Z come from the two MATLAB commands

P = orth(randn(m, n));  Z = orth(randn(n, n));

where the randn command generates an m × n matrix with entries from the standard normal distribution, and the orth command produces the orthogonal factor of its argument. The diagonal matrix Σ is given by

Σ = diag(σ_1, ..., σ_n),   σ_1 = 1,   σ_k = r^{k−1},   r^{n−1} = 10^{−18},

so that X has a geometric distribution of singular values (a short MATLAB sketch of this construction appears after (4.1) below). The bidiagonal reduction of X was computed in two different ways:

1. the Golub-Kahan-Householder (GKH) algorithm from [9];
2. the Golub-Kahan-Lanczos procedure using Function 3.1 from [1] to do the reorthogonalization, with small elements not set to zero (referred to as GKLnonzero below).

In Figure 4.1, we give the orthogonality measure ||I − U^T U||_F for the left Lanczos vector matrix U in the upper window and the same measure for Û computed by Function 3.1 in the lower window. Although the results for Û are slightly better than those for U, as might be expected from Corollaries 3.4 and 3.9, since ||B^{-1}||_2 = 10^{18}, neither has good orthogonality.

To obtain Figure 4.2, we recover the first k left singular vectors of X from the GKLnonzero bidiagonal reduction, where k is the largest integer such that σ_k/σ_1 is above a first threshold. After computing the singular value decomposition B_k = Q Σ_k S^T, we compute P_k, the matrix of the k leading left singular vectors of X, in two different ways:

1. using Function 3.1, we let P_k = UhatMult(U_k, Q(:, 1:k));
2. using just the matrix multiplication

(4.1)   P_k = U_k Q(:, 1:k),

as is done in [19].
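For reference, the test matrices described at the beginning of this example can be generated with a few MATLAB lines. This is only a sketch of the construction above; the variable names P0 and Z0 are ours, chosen to avoid clashing with P_k.

    % Test matrix of Example 4.1: geometric singular value distribution.
    n  = 200;  m = round(1.5 * n);
    P0 = orth(randn(m, n));              % left orthogonal factor
    Z0 = orth(randn(n, n));              % orthogonal factor
    r  = 10^(-18 / (n - 1));             % chosen so that r^(n-1) = 1e-18
    Sigma = diag(r .^ (0 : n - 1));      % sigma_1 = 1, sigma_k = r^(k-1)
    X  = P0 * Sigma * Z0';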

We also recovered the right singular vector matrix Z_k and the singular value matrix Σ_k corresponding to these first k singular values. The top window of Figure 4.2 shows the value ||I − P_k^T P_k||_F for the two methods of computing the left singular vectors. As can be seen, the first method of computing P_k produces much better orthogonality. The bottom window of Figure 4.2 shows the residual error

(4.2)   || [ X Z_k − P_k Σ_k ; X^T P_k − Z_k Σ_k ] ||_F

for the singular vector problem. When P_k is computed by Function 3.1, the residuals are small, about 10^{−15}, but when P_k is computed in the standard manner (4.1), the residuals are not particularly good.

We repeated the experiment by choosing k to be the largest integer such that σ_k/σ_1 is above a second threshold. We again post the results on orthogonality and residuals for the two methods of computing the left singular vector matrix P_k. From Figure 4.3, the left singular vectors from Function 3.1 have near perfect orthogonality and small residuals, whereas those from (4.1) are far from orthogonal and have poor residuals.

Example 4.2. We construct our examples exactly as in Example 4.1 except that we do the reorthogonalization of V_k with Function 4.2 from [1] with a fixed value of η̂. We do the same two kinds of bidiagonalization given in Example 4.1. We again compute P_k, the matrix of the k leading left singular vectors, in the same two ways as in Example 4.1: with Function 3.1 and with the matrix multiplication (4.1). However, in this case k is the largest integer such that σ_k/σ_1 is above a chosen threshold. In the upper window of Figure 4.4, we compare the orthogonality of the leading left singular vectors as computed by the two methods stated above. Again, we note that Function 3.1 obtains much better orthogonality. The lower window of Figure 4.4 gives the residual measure (4.2) alongside orth(V). Whereas the left singular vectors computed using (4.1) have residuals much larger than orth(V), the residuals for those computed with Function 3.1 are of about the same size as orth(V), consistent with Corollary 3.3.

5. Conclusion. As shown in [19], when implementing the GKL bidiagonal reduction for SVD computation, it is only necessary to reorthogonalize one set of Lanczos vectors, for instance, the right ones. In applications where only the leading singular vectors need to be computed, good orthogonality is also maintained in the leading left singular vectors. Moreover, if the left singular vectors are computed by Function 3.1, we obtain not only good orthogonality, but also better residuals than with the method that has been used in previous implementations. This change to the implementation of the GKL SVD has little effect on the speed of computation, but it leads to a more robust algorithm.

REFERENCES

[1] J.L. Barlow. Reorthogonalization for the Golub-Kahan-Lanczos bidiagonal reduction: Part I - singular values. svd orthi.pdf, 2010.

Fig. 4.1. Loss of Orthogonality in Lanczos Vectors from Example 4.1. (Upper window: log10 of the orthogonality measure for the left Lanczos vectors versus dimension; lower window: log10 of the same measure for the MGSed left Lanczos vectors versus dimension.)

[2] J.L. Barlow, N. Bosner, and Z. Drmač. A new backward stable bidiagonal reduction method. Linear Alg. Appl., 397:35-84, 2005.
[3] M. Berry, Z. Drmač, and E. Jessup. Matrices, vector spaces, and information retrieval. SIAM Review, 41:335-362, 1999.
[4] Å. Björck. A bidiagonalization algorithm for solving large and sparse ill-posed systems of linear equations. BIT, 28:659-670, 1988.
[5] Å. Björck and C.C. Paige. Loss and recapture of orthogonality in the modified Gram-Schmidt algorithm. SIAM J. Matrix Anal. Appl., 13:176-190, 1992.
[6] N. Bosner and J. Barlow. Block and parallel versions of one-sided bidiagonalization. SIAM J. Matrix Anal. Appl., 29(3):927-953, 2007.
[7] D. Calvetti and L. Reichel. Tikhonov regularization of large linear problems. BIT, 43:263-283, 2003.
[8] L. Eldén. Algorithms for the regularization of ill-conditioned least squares problems. BIT, 17:134-145, 1977.
[9] G.H. Golub and W.M. Kahan. Calculating the singular values and pseudoinverse of a matrix. SIAM J. Num. Anal. Ser. B, 2:205-224, 1965.
[10] N.J. Higham. Functions of Matrices: Theory and Computation. SIAM Publications, Philadelphia, PA, 2008.
[11] I. Hnětynková, M. Plešinger, and Z. Strakoš. Golub-Kahan iterative bidiagonalization and determining the size of the noise in data. BIT, 49:669-696, 2009.

Fig. 4.2. Orthogonality and Residual Errors in Leading Singular Vectors from Example 4.1, first choice of k via σ_k/σ_1. (Upper window: log10 of the orthogonality of the first k left singular vectors for GKL with vectors from Function 3.1 and GKL with vectors from the matrix multiplication (4.1), versus dimension; lower window: log10 of the residuals (4.2) for both methods, together with orth(V), versus dimension.)

[12] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. hastie/papers/svd JMLR.pdf.
[13] C.C. Paige and M.A. Saunders. LSQR: An algorithm for sparse linear equations and least squares problems. ACM Trans. on Math. Software, 8:43-71, 1982.
[14] C.C. Paige. The Computation of Eigenvalues and Eigenvectors of Very Large Sparse Matrices. PhD thesis, University of London, 1971.
[15] C.C. Paige. A useful form of unitary matrix obtained from any sequence of unit 2-norm n-vectors. SIAM J. Matrix Anal. Appl., 31(2):565-583, 2009.
[16] C.C. Paige and M.A. Saunders. Algorithm 583. LSQR: Sparse linear equations and least squares problems. ACM Trans. on Math. Software, 8:195-209, 1982.
[17] B.N. Parlett. The Symmetric Eigenvalue Problem. SIAM Publications, Philadelphia, PA, 1998. Republication of 1980 book.
[18] B.N. Parlett and D.S. Scott. The Lanczos algorithm with selective orthogonalization. Math. Comp., 33:217-238, 1979.
[19] H. Simon and H. Zha. Low-rank matrix approximation using the Lanczos bidiagonalization process. SIAM J. Sci. Stat. Computing, 21:2257-2274, 2000.

Fig. 4.3. Orthogonality and Residual Errors in Leading Singular Vectors from Example 4.1, second choice of k via σ_k/σ_1. (Upper window: log10 of the orthogonality of the first k left singular vectors for GKL with vectors from Function 3.1 and from the matrix multiplication (4.1); lower window: log10 of the residuals (4.2) for both methods, together with orth(V); horizontal axes: dimension.)

Fig. 4.4. Orthogonality and Residual Errors in Leading Singular Vectors from Example 4.2. (Upper window: log10 of the orthogonality of the first k left singular vectors for GKL with vectors from Function 3.1 and from the matrix multiplication (4.1); lower window: log10 of the residuals (4.2) for both methods, together with orth(V); horizontal axes: dimension.)


More information

A fast randomized algorithm for approximating an SVD of a matrix

A fast randomized algorithm for approximating an SVD of a matrix A fast randomized algorithm for approximating an SVD of a matrix Joint work with Franco Woolfe, Edo Liberty, and Vladimir Rokhlin Mark Tygert Program in Applied Mathematics Yale University Place July 17,

More information

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001 1215 A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing Da-Zheng Feng, Zheng Bao, Xian-Da Zhang

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

Key words. linear equations, polynomial preconditioning, nonsymmetric Lanczos, BiCGStab, IDR

Key words. linear equations, polynomial preconditioning, nonsymmetric Lanczos, BiCGStab, IDR POLYNOMIAL PRECONDITIONED BICGSTAB AND IDR JENNIFER A. LOE AND RONALD B. MORGAN Abstract. Polynomial preconditioning is applied to the nonsymmetric Lanczos methods BiCGStab and IDR for solving large nonsymmetric

More information

lecture 2 and 3: algorithms for linear algebra

lecture 2 and 3: algorithms for linear algebra lecture 2 and 3: algorithms for linear algebra STAT 545: Introduction to computational statistics Vinayak Rao Department of Statistics, Purdue University August 27, 2018 Solving a system of linear equations

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 16 1 / 21 Overview

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Algorithms for Solving the Polynomial Eigenvalue Problem

Algorithms for Solving the Polynomial Eigenvalue Problem Algorithms for Solving the Polynomial Eigenvalue Problem Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/ Joint work with D. Steven Mackey

More information

Algorithms and Perturbation Theory for Matrix Eigenvalue Problems and the SVD

Algorithms and Perturbation Theory for Matrix Eigenvalue Problems and the SVD Algorithms and Perturbation Theory for Matrix Eigenvalue Problems and the SVD Yuji Nakatsukasa PhD dissertation University of California, Davis Supervisor: Roland Freund Householder 2014 2/28 Acknowledgment

More information

Singular Value Decompsition

Singular Value Decompsition Singular Value Decompsition Massoud Malek One of the most useful results from linear algebra, is a matrix decomposition known as the singular value decomposition It has many useful applications in almost

More information

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course

More information