REORTHOGONALIZATION FOR THE GOLUB–KAHAN–LANCZOS BIDIAGONAL REDUCTION, PART I: SINGULAR VALUES

JESSE L. BARLOW
Department of Computer Science and Engineering, The Pennsylvania State University, University Park, PA, USA. barlow@cse.psu.edu
The research of Jesse L. Barlow was supported by the National Science Foundation under grant no. CCF.

Abstract. The Golub–Kahan–Lanczos bidiagonal reduction generates a factorization of a matrix X ∈ R^{m×n}, m ≥ n, such that X = U B V^T where U ∈ R^{m×n} is left orthogonal, V ∈ R^{n×n} is orthogonal, and B ∈ R^{n×n} is bidiagonal. When the Lanczos recurrence is implemented in finite precision arithmetic, the columns of U and V tend to lose orthogonality, making a reorthogonalization strategy necessary to preserve convergence of the singular values. It is shown that if orth(V) = ‖I − V^T V‖_2, then the singular values of B and those of X satisfy

    [ Σ_{j=1}^n (σ_j(X) − σ_j(B))² ]^{1/2} ≤ O(ε_M + orth(V)) ‖X‖_2,

where ε_M is machine precision. Moreover, a strategy is introduced for neglecting small off-diagonal elements during reorthogonalization that preserves the above bound on the singular values.

AMS subject classifications. 65F15, 65F25.

Key words. Lanczos vectors, orthogonality, singular values, left orthogonal matrix.

1. Introduction. Bidiagonal reduction, the first step in many algorithms for computing the singular value decomposition (SVD) [10, 2], is also used for solving least squares problems [20, 17], for solving ill-posed problems [9, 5, 13], for the computation of matrix functions [7], [12], for matrix approximation [4], and for the solution of the Netflix problem in [16]. In [10], Golub and Kahan give two Lanczos-based bidiagonal reduction algorithms which we call the Golub–Kahan–Lanczos (GKL) algorithms. The first GKL algorithm takes a matrix X ∈ R^{m×n}, m ≥ n, and generates the factorization

    (1.1)    X = U B V^T

with

    (1.2)    U = (u_1, ..., u_n) ∈ R^{m×n},    left orthogonal,
    (1.3)    V = (v_1, ..., v_n) ∈ R^{n×n},    orthogonal,

and B ∈ R^{n×n} having a bidiagonal form given by

    (1.4)    B = ubidiag(γ_1, ..., γ_n; φ_2, ..., φ_n),

the upper bidiagonal matrix with diagonal entries γ_1, ..., γ_n and superdiagonal entries φ_2, ..., φ_n. For certain structured matrices, even with reorthogonalization, this GKL algorithm yields a faster method of producing a bidiagonal reduction to compute the complete singular value decomposition. For large sparse matrices, it is often the method of choice to compute a few singular values and associated singular vectors. The recurrence generating the decomposition (1.1)–(1.4) is constructed by choosing a vector v_1 ∈ R^n such that ‖v_1‖_2 = 1, letting u_k ∈ R^m, k = 1, ..., n, and v_k ∈ R^n, k = 2, ..., n, be unit vectors, and letting γ_k, φ_k, k = 1, ..., n, be scaling constants such that

    (1.5)    γ_1 u_1 = X v_1,
    (1.6)    φ_{k+1} v_{k+1} = X^T u_k − γ_k v_k,    k = 1, ..., n−1,
    (1.7)    γ_{k+1} u_{k+1} = X v_{k+1} − φ_{k+1} u_k.

The other GKL algorithm in [10] starts with u_1 and instead generates a lower bidiagonal matrix. The discussion below also applies to that recurrence if we note that the second GKL algorithm is just the first applied to (u_1  X) with v_1 = e_1. For our purposes, it is best to associate V with the minimum of the two dimensions m and n of X. The recurrence (1.5)–(1.7) is equivalent to the symmetric Lanczos tridiagonalization algorithm performed on the matrix

    (1.8)    M = ( 0    X^T )
                 ( X    0   )

with the starting vector (v_1; 0). Since the vectors u_1, ..., u_n and v_1, ..., v_n tend to lose orthogonality in finite precision arithmetic, reorthogonalization is performed when the bidiagonal reduction algorithm (1.5)–(1.7) is used to compute the singular value decomposition as in [10], or in regularization algorithms as in [5, 9], or in the computation of matrix functions as in [7]. Paige [18] pointed out that the loss of orthogonality in Lanczos reductions is structured in the sense that it is coincident with the convergence of approximate eigenvalues and eigenvectors (called Ritz values and vectors). Parlett and Scott [22] used this observation to develop partial reorthogonalization procedures. A good summary of the surrounding issues is given by Parlett [21, Chapter 13]. To understand how the algorithm works with reorthogonalization of V, we define the loss of orthogonality measures

    (1.9)     orth(V_k)  def=  ‖I_k − V_k^T V_k‖_2,
    (1.10)    η_k = ‖V_{k−1}^T v_k‖_2,    η̄_k = ( Σ_{j=2}^k η_j² )^{1/2}.
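To see the loss of orthogonality that motivates these measures, the following minimal MATLAB sketch (illustrative, not from the paper; the matrix sizes and spectrum are assumptions) runs the recurrence (1.5)–(1.7) with no reorthogonalization and reports orth(V) at the end.

    % Recurrence (1.5)-(1.7) with no reorthogonalization: orth(V) grows
    % far above working precision once small singular values converge.
    rng(0);
    m = 300; n = 200;
    X = orth(randn(m,n)) * diag(logspace(0,-12,n)) * orth(randn(n,n))';
    v = randn(n,1); v = v/norm(v);
    s = X*v; gamma = norm(s); u = s/gamma;
    V = v; U = u;
    for k = 1:n-1
        r = X'*u - gamma*v;              % (1.6)
        phi = norm(r); v = r/phi;
        s = X*v - phi*u;                 % (1.7)
        gamma = norm(s); u = s/gamma;
        V = [V, v]; U = [U, u];
    end
    fprintf('orth(V) without reorthogonalization: %.2e\n', norm(eye(n) - V'*V));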

Noting that orth(V_k) satisfies the upper bound

    orth(V_k) ≤ ‖I_k − V_k^T V_k‖_F = √2 η̄_k    (the v_j being unit vectors)

and the lower bound

    orth(V_k) ≥ max_{2≤j≤k} η_j ≥ η̄_k / √(k−1),

we have that orth(V_k) and η̄_k are large or small together. Thus we express our bounds in terms of η̄_k with the understanding that, with minor modification, they could be expressed in terms of orth(V_k). The singular values of B, given by

    σ_1(B) ≥ σ_2(B) ≥ ⋯ ≥ σ_n(B),

and the corresponding singular values of X satisfy an O(ε_M + η̄_n) bound given in equation (3.25) in Theorem 3.6. Thus the accuracy of the computed singular values depends upon our ability to preserve the orthogonality of V. These results are similar to those for a procedure due to Barlow, Bosner, and Drmač [2] that generates V using Householder transformations and U by the recurrences (1.5)–(1.7).

We structure this paper as follows. In §2, we establish the framework for the analysis in §3. In §3, we prove our main theorem (Theorem 3.1) and results on the singular values of B. In §4, we give three reorthogonalization strategies for V and give a method for neglecting small superdiagonal elements resulting from reorthogonalization. In §5, we give numerical tests based upon regulating the orthogonality of V in various ways, which we follow with a conclusion in §6. In part II of this work [1], this author uses Theorem 3.1 to produce an algorithm that computes left singular vectors with stronger residual and orthogonality bounds than previous versions of the GKL algorithm in the literature.

2. The Lanczos Bidiagonal Recurrence with Reorthogonalization. In exact arithmetic, the columns of V in (1.3), computed according to (1.5)–(1.7), are orthonormal, but, in floating point arithmetic, some reorthogonalization of these vectors is necessary. A model of how that reorthogonalization could be done is proposed and analyzed below. To recover v_{k+1} from v_1, ..., v_k and u_1, ..., u_k, we compute

    (2.1)    r_k = X^T u_k − γ_k v_k.

We then reorthogonalize r_k against v_1, ..., v_k so that

    (2.2)    φ_{k+1} v_{k+1} = r_k − Σ_{j=1}^k ĥ_{j,k+1} v_j = r_k − V_k ĥ_{k+1},
    (2.3)    ĥ_{k+1} = (ĥ_{1,k+1}, ..., ĥ_{k,k+1})^T

for some coefficients ĥ_{j,k+1}, j = 1, ..., k. Combining (2.1) and (2.3), we have that

    (2.4)    φ_{k+1} v_{k+1} = X^T u_k − V_k h_{k+1}

where

    h_{k+1} = γ_k e_k + ĥ_{k+1}.

To encapsulate our approaches to reorthogonalization, we assume the existence of a general function reorthog that performs step (2.3) in some manner. Thus the (k+1)st Lanczos vector comes from (2.1) followed by

    (2.5)    [v_{k+1}, ĥ_{k+1}, φ_{k+1}] = reorthog(B_k, V_k, r_k),

where

    (2.6)    B_k = ubidiag(γ_1, ..., γ_k; φ_2, ..., φ_k)

may provide necessary information for the partial reorthogonalization schemes. In floating point arithmetic, we assume that the steps (2.1) and (2.5) produce vectors v_{k+1} and h_{k+1}, and a scalar φ_{k+1}, such that

    (2.7)    X^T u_k = V_k h_{k+1} + φ_{k+1} v_{k+1} + β_{k+1},
    (2.8)    ‖β_{k+1}‖_2 ≤ ε_M q(m) ‖X‖_2

for some modest sized function q(m). The value of q(m) varies depending upon which orthogonalization method is used, but, for, say, the complete reorthogonalization scheme in Function 4.1, we would have q(m) = O(m). In general, we have the recurrence

    (2.9)    X^T U_k = V_{k+1} H_{k+1} + E_k

where

    H_{k+1} = ( H_k    h_{k+1} )  ∈ R^{(k+1)×k},    H_2 = ( h_2 ) ∈ R^{2×1},
              ( 0      φ_{k+1} )                          ( φ_2 )

    E_k = (β_2, ..., β_{k+1}).

The following function specifies the first k steps of the Lanczos bidiagonal reduction.

Function 2.1 (First k steps of Lanczos Bidiagonal Reduction with reorthogonalization).
    function [B_k, U_k, V_k] = lanczos_bidiag(X, v_1, k)
        V_1 = (v_1); s_1 = X v_1; γ_1 = ‖s_1‖_2; u_1 = s_1/γ_1; U_1 = (u_1); B_1 = (γ_1);
        for j = 2:k
            r_j = X^T u_{j−1} − γ_{j−1} v_{j−1};
            [v_j, ĥ_j, φ_j] = reorthog(B_{j−1}, V_{j−1}, r_j);
            s_j = X v_j − φ_j u_{j−1};
            γ_j = ‖s_j‖_2; u_j = s_j/γ_j;
            V_j = ( V_{j−1}  v_j );  U_j = ( U_{j−1}  u_j );
            B_j = ( B_{j−1}  φ_j e_{j−1} );
                  ( 0        γ_j         )
        end;
    end lanczos_bidiag
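A runnable MATLAB version of Function 2.1 might look as follows (a sketch under the interface assumptions of (2.5); the function and variable names are ours, not the paper's). Any routine with the signature of reorthog, such as the GS_reorthog sketch given after Function 4.1 below, can be passed in as a function handle.

    function [B, U, V] = lanczos_bidiag(X, v1, k, reorthog)
    % LANCZOS_BIDIAG  k steps of GKL bidiagonal reduction with
    % reorthogonalization of the right Lanczos vectors only.
        s = X*v1; gamma = norm(s);
        V = v1; U = s/gamma; B = gamma;
        for j = 2:k
            r = X'*U(:,j-1) - gamma*V(:,j-1);      % (2.1)
            [v, ~, phi] = reorthog(B, V, r);       % (2.5); h_hat is discarded
            s = X*v - phi*U(:,j-1);
            gamma = norm(s); u = s/gamma;
            V = [V, v]; U = [U, u];
            B = [B, [zeros(j-2,1); phi]; [zeros(1,j-1), gamma]];   % B_j of (2.6)
        end
    end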

We discuss three specific methods for performing the reorthogonalization in reorthog in §4. In the remainder of this section, we discuss a model for that loss that leads to the analysis in §3. Although ĥ_k is discarded by Function 2.1 in the construction of U_k, B_k, and V_k, we show that throwing out this information affects the accuracy of the singular values of B_k only through the loss of orthogonality of V_k. We assume orth(V_k) < 1 for all k; otherwise V_k is not meaningfully close to left orthogonal. Using the definition of orth(V_k) in (1.9), the singular values of V_k are bounded by

    (2.10)    σ_1(V_k) = λ_1(V_k^T V_k)^{1/2} ≤ (1 + ‖I − V_k^T V_k‖_2)^{1/2} = (1 + orth(V_k))^{1/2},
    (2.11)    σ_k(V_k) = λ_k(V_k^T V_k)^{1/2} ≥ (1 − ‖I − V_k^T V_k‖_2)^{1/2} = (1 − orth(V_k))^{1/2}.

If V_k^† is the Moore–Penrose pseudoinverse of V_k, then

    (2.12)    ‖V_k‖_2 = σ_1(V_k) ≤ (1 + orth(V_k))^{1/2},
    (2.13)    ‖V_k^†‖_2 = σ_k(V_k)^{−1} ≤ (1 − orth(V_k))^{−1/2}.

Equation (2.7) can be rewritten

    X^T u_k − β_{k+1} = V_{k+1} ( h_{k+1} ).
                                ( φ_{k+1} )

Using our assumption that orth(V_{k+1}) < 1, V_{k+1} must have full column rank, so that V_{k+1}^† satisfies V_{k+1}^† V_{k+1} = I_{k+1}; thus

    ( h_{k+1} )  =  V_{k+1}^† (X^T u_k − β_{k+1}).
    ( φ_{k+1} )

Adding the assumption that orth(V_{k+1}) and q(m)ε_M are sufficiently small that

    (2.14)    (1 + q(m)ε_M)(1 − orth(V_{k+1}))^{−1/2}  def=  ω

for some reasonable constant ω, we infer that

    (2.15)    ‖H_{k+1} e_k‖_2 = ‖(h_{k+1}^T, φ_{k+1})^T‖_2 ≤ ‖V_{k+1}^†‖_2 (‖X^T u_k‖_2 + ‖β_{k+1}‖_2)
                              ≤ (1 + q(m)ε_M)(1 − orth(V_{k+1}))^{−1/2} ‖X‖_2 = ω ‖X‖_2.

Thus, the columns of H_{k+1} are bounded as long as reasonable orthogonality is maintained for V_{k+1}.
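The bounds (2.10)–(2.13) are easy to confirm numerically; the following MATLAB fragment (an illustrative check with assumed sizes, not from the paper) perturbs a left orthogonal matrix and tests both inequalities.

    % Check (2.12)-(2.13): singular values of a nearly left orthogonal V.
    rng(1);
    n = 100; k = 40;
    V = orth(randn(n,k)) + 1e-6*randn(n,k);
    orthV = norm(eye(k) - V'*V);
    sv = svd(V);
    assert(sv(1)   <= sqrt(1 + orthV) + 1e-12)     % (2.12)
    assert(1/sv(k) <= 1/sqrt(1 - orthV) + 1e-12)   % (2.13)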

3. Error Bounds for GKL Bidiagonalization with One-Sided Reorthogonalization. The results in this paper and in [1] are based upon Theorem 3.1, stated next.

Theorem 3.1. Let Function 2.1 be implemented in floating point arithmetic with machine unit ε_M. Assume that V_k = (v_1, ..., v_k) with orthogonality parametrized by η̄_k in (1.10), U_k = (u_1, ..., u_k), and B_k = ubidiag(γ_1, ..., γ_k; φ_2, ..., φ_k) are output from that function. Assume also that orth(V_k) < 1. Define

    (3.1)    C_k = ( 0     )  } n
                   ( X V_k )  } m,

    (3.2)    W_j = I − w_j w_j^T,    w_j = ( e_j  ),
                                          ( −u_j )

    (3.3)    W^{(k)} = W_1 W_2 ⋯ W_k.

If q(m) is defined in (2.8) and ω is given by (2.14), then for k = 1, ..., n,

    (3.4)    C_k + δC_k = W^{(k)} ( B_k )  } k
                                  ( 0   )  } m + n − k,

    (3.5)    ‖δC_k‖_F ≤ [ f_1(m, n, k) ε_M + f_2(k) η̄_k ] ‖X‖_2 + O(ε_M²),

where

    (3.6)    f_1(m, n, k) = √(2/3) k^{3/2} q(m) + (m + n + 2) √k,    f_2(k) = ω √(2/3) k^{3/2}.

The matrix W^{(k)} is orthogonal because u_1, ..., u_k are unit vectors. Some details of the form of W^{(k)} are given in [19, Theorem 2.1]. Three technical lemmas are necessary to prove the result (3.4)–(3.6); the first concerns the effect of W_{k−1}.

Lemma 3.2. Let φ_k, γ_k, u_k, and v_k be computed by the kth step of Function 2.1. Let W_{k−1} be defined in (3.2). Then, for k ≥ 2,

    (3.7)    W_{k−1} ( φ_k e_{k−1}         )  =  ( 0     )  +  δz_{k−1},
                     ( X v_k − φ_k u_{k−1} )     ( X v_k )

    (3.8)    ‖δz_{k−1}‖_2 ≤ √2 (ω η̄_k + q(m) ε_M) ‖X‖_2.

Proof. We have that

    W_{k−1} ( φ_k e_{k−1}         )  =  ( φ_k e_{k−1}         )  −  w_{k−1} w_{k−1}^T ( φ_k e_{k−1}         )
            ( X v_k − φ_k u_{k−1} )     ( X v_k − φ_k u_{k−1} )                       ( X v_k − φ_k u_{k−1} )

    (3.9)                            =  ( 0     )  +  (u_{k−1}^T X v_k − φ_k) w_{k−1}.
                                        ( X v_k )

To bound the last term, we note that from (2.7) we have

    X^T u_{k−1} = φ_k v_k + V_{k−1} h_k + β_k,    ‖β_k‖_2 ≤ q(m) ε_M ‖X‖_2,

so that

    (3.10)    u_{k−1}^T X v_k = φ_k v_k^T v_k + h_k^T V_{k−1}^T v_k + β_k^T v_k = φ_k + δφ_k,
              where δφ_k = h_k^T V_{k−1}^T v_k + β_k^T v_k.

Thus

    (3.11)    |δφ_k| ≤ ‖V_{k−1}^T v_k‖_2 ‖h_k‖_2 + ‖β_k‖_2 ≤ η_k ‖H_k e_{k−1}‖_2 + ‖β_k‖_2 ≤ (ω η̄_k + q(m) ε_M) ‖X‖_2.

Combining (3.9), (3.10), and (3.11), we have (3.7) with

    δz_{k−1} = (δφ_k) w_{k−1}.

Thus ‖δz_{k−1}‖_2 = |δφ_k| ‖w_{k−1}‖_2. Since ‖w_{k−1}‖_2 = ‖(e_{k−1}^T, −u_{k−1}^T)^T‖_2 = √2, we have the bound (3.8) for ‖δz_{k−1}‖_2.

Our second lemma bounds the effect of W_j, j = 1, 2, ..., k−2.

Lemma 3.3. Assume the hypothesis and notation of Lemma 3.2. For k ≥ 3 and j < k−1, we have

    (3.12)    W_j ( 0     )  =  ( 0     )  +  δz_j,
                  ( X v_k )     ( X v_k )

    (3.13)    ‖δz_j‖_2 ≤ √2 (ω η̄_k + q(m) ε_M) ‖X‖_2.

Proof. First, we note that

    (3.14)    W_j ( 0     )  =  ( 0     )  −  w_j w_j^T ( 0     )  =  ( 0     )  +  (u_j^T X v_k) w_j.
                  ( X v_k )     ( X v_k )               ( X v_k )     ( X v_k )

Again, using (2.7), we have

    X^T u_j = V_{j+1} ( h_{j+1} )  +  β_{j+1}.
                      ( φ_{j+1} )

Thus

    u_j^T X v_k = ( h_{j+1} )^T V_{j+1}^T v_k  +  β_{j+1}^T v_k,
                  ( φ_{j+1} )

    |u_j^T X v_k| ≤ ‖H_{j+1} e_j‖_2 ‖V_{j+1}^T v_k‖_2 + ‖β_{j+1}‖_2 ≤ ω ‖X‖_2 η̄_k + ‖β_{j+1}‖_2.

Therefore, using the bound in (2.15), we have

    |u_j^T X v_k| ≤ [ω η̄_k + q(m) ε_M] ‖X‖_2.

Using (3.12) yields

    δz_j = (u_j^T X v_k) w_j,

so (3.13) follows from

    ‖δz_j‖_2 = |u_j^T X v_k| ‖w_j‖_2 ≤ √2 [ω η̄_k + q(m) ε_M] ‖X‖_2.

We now combine Lemmas 3.2 and 3.3 to give the effect of the product of Householder transformations.

Lemma 3.4. Assume the hypothesis and notation of Lemma 3.2. Let W^{(k)} be given by (3.3). Then

    (3.15)    W^{(k)} ( φ_k e_{k−1} + γ_k e_k )  =  ( 0     )  +  δc_k,
                      ( 0                     )     ( X v_k )

    (3.16)    ‖δc_k‖_2 ≤ [ √2 (k−1)(ω η̄_k + q(m) ε_M) + (m + n + 2) ε_M ] ‖X‖_2.

Proof. Before proving Theorem 3.1, we note that W_k in (3.2) is defined so that

    (3.17)    W_k ( φ_k e_{k−1} + γ_k e_k )  =  ( φ_k e_{k−1} )  } n
                  ( 0                     )     ( γ_k u_k     )  } m,

thus

    W^{(k)} ( φ_k e_{k−1} + γ_k e_k )  =  W^{(k−1)} W_k ( φ_k e_{k−1} + γ_k e_k )  =  W^{(k−1)} ( φ_k e_{k−1} ).
            ( 0                     )                   ( 0                     )               ( γ_k u_k     )

From the Lanczos recurrence,

    (γ_k + δγ_k) u_k = s_k = fl(X v_k − φ_k u_{k−1}) = X v_k − φ_k u_{k−1} − δs_k,

where

    |δγ_k| ≤ m ε_M γ_k + O(ε_M²) ≤ m ε_M ‖X‖_2 + O(ε_M²),    ‖δs_k‖_2 ≤ (n + 2) ε_M ‖X‖_2 + O(ε_M²).

Thus,

    (3.18)    γ_k u_k = X v_k − φ_k u_{k−1} + δẑ_k,    δẑ_k = −δs_k − (δγ_k) u_k.

That yields the bound

    ‖δẑ_k‖_2 ≤ (m + n + 2) ε_M ‖X‖_2 + O(ε_M²).

Thus, if we let

    δz_k = ( 0    )  } n
           ( δẑ_k )  } m,

then using (3.17) and (3.18) and a simple recurrence, we have

    W^{(k)} ( φ_k e_{k−1} + γ_k e_k )  =  W^{(k−1)} ( φ_k e_{k−1} )
            ( 0                     )               ( γ_k u_k     )

      =  W^{(k−1)} ( φ_k e_{k−1}                )
                   ( X v_k − φ_k u_{k−1} + δẑ_k )

      =  W^{(k−1)} ( φ_k e_{k−1}         )  +  W^{(k−1)} δz_k
                   ( X v_k − φ_k u_{k−1} )

      =  W^{(k−2)} W_{k−1} ( φ_k e_{k−1}         )  +  W^{(k−1)} δz_k
                           ( X v_k − φ_k u_{k−1} )

      =  W^{(k−2)} ( 0     )  +  W^{(k−2)} δz_{k−1}  +  W^{(k−1)} δz_k.
                   ( X v_k )

After k−2 applications of Lemma 3.3, this becomes

    W^{(k)} ( φ_k e_{k−1} + γ_k e_k )  =  ( 0     )  +  δc_k,    δc_k = Σ_{j=0}^{k−1} W^{(j)} δz_{j+1},
            ( 0                     )     ( X v_k )

where W^{(0)} = I. Thus

    ‖δc_k‖_2 ≤ Σ_{j=1}^{k} ‖δz_j‖_2 ≤ [ √2 (k−1)(ω η̄_k + q(m) ε_M) + (m + n + 2) ε_M ] ‖X‖_2 + O(ε_M²),

establishing the result.

We now prove Theorem 3.1.

Proof (of Theorem 3.1). We use induction on k. For k = 1, we have

    B_1 = (γ_1),    U_1 = (u_1),    V_1 = (v_1).

Thus

    (3.19)    s_1 = fl(X v_1) = (γ_1 + δγ_1) u_1,    |δγ_1| ≤ m γ_1 ε_M + O(ε_M²),

and

    (3.20)    s_1 + δs_1 = X v_1,    ‖δs_1‖_2 ≤ n ε_M ‖X‖_2 + O(ε_M²).

Combining (3.19) and (3.20), we have

    γ_1 u_1 = X v_1 − δs_1 − (δγ_1) u_1 = X v_1 + δĉ_1,    δĉ_1 = −δs_1 − (δγ_1) u_1,
    ‖δĉ_1‖_2 ≤ (m + n) ‖X‖_2 ε_M + O(ε_M²).

Rewriting the above in terms of W_1, we have

    W_1 ( B_1 )  =  (I − w_1 w_1^T) ( γ_1 e_1 )  =  ( 0       )  =  ( 0     )  +  δc_1,
        ( 0   )                     ( 0       )     ( γ_1 u_1 )     ( X v_1 )

where

    δc_1 = ( 0    ),    ‖δc_1‖_2 ≤ (m + n) ‖X‖_2 ε_M + O(ε_M²),
           ( δĉ_1 )

thus establishing the result for k = 1. To construct the induction step, partition the columns of (B_k; 0) as the first k−1 columns (B_{k−1}; 0) and the last column (φ_k e_{k−1} + γ_k e_k; 0), and note that W_k (B_{k−1}; 0) = (B_{k−1}; 0), since w_k^T (B_{k−1}; 0) = 0. Then we write

    W^{(k)} ( B_k )  =  ( W^{(k−1)} ( B_{k−1} ) ,  W^{(k)} ( φ_k e_{k−1} + γ_k e_k ) )
            ( 0   )     (           ( 0       )            ( 0                     ) )

      =  ( ( 0         )  +  δC_{k−1} ,  W^{(k)} ( φ_k e_{k−1} + γ_k e_k ) ),
         ( ( X V_{k−1} )                         ( 0                     ) )

where, by the induction hypothesis,

    δC_{k−1} = (δc_1, ..., δc_{k−1}).

From Lemma 3.4, we have (3.15) and (3.16). Thus

    W^{(k)} ( B_k )  =  ( 0     )  +  δC_k,    δC_k = (δc_1, ..., δc_k).
            ( 0   )     ( X V_k )

The bound on ‖δC_k‖_F in (3.5)–(3.6) comes from

    (3.21)    ‖δC_k‖_F = [ Σ_{j=1}^k ‖δc_j‖_2² ]^{1/2}
                       ≤ [ Σ_{j=1}^k ( √2 (j−1)(ω η̄_j + q(m) ε_M) + (m + n + 2) ε_M )² ]^{1/2} ‖X‖_2 + O(ε_M²).

Using the triangle inequality, this becomes

    ‖δC_k‖_F ≤ ( √2 ω [ Σ_{j=1}^k (j−1)² η̄_j² ]^{1/2} + √2 q(m) ε_M [ Σ_{j=1}^k (j−1)² ]^{1/2}
                 + (m + n + 2) √k ε_M ) ‖X‖_2 + O(ε_M²).

The Cauchy–Schwarz inequality applied to the first term, and the bound Σ_{j=1}^k (j−1)² ≤ k³/3 applied to the first two terms, yields

    (3.22)    √2 ω [ Σ_{j=1}^k (j−1)² η̄_j² ]^{1/2} ≤ √2 ω η̄_k [ Σ_{j=1}^k (j−1)² ]^{1/2} ≤ √(2/3) k^{3/2} ω η̄_k,
    (3.23)    √2 q(m) ε_M [ Σ_{j=1}^k (j−1)² ]^{1/2} ≤ √(2/3) k^{3/2} q(m) ε_M;

thus combining (3.21) with (3.22)–(3.23), we have (3.5)–(3.6).

To use Theorem 3.1 to bound the distance between the singular values of B = B_n and those of X, we need a lemma that bounds the difference between the singular values of X and XV.

Lemma 3.5. Let V = V_n be the result of n steps of Function 2.1. If σ_k(X) is the kth singular value of X, then

    (3.24)    σ_k(XV)(1 + orth(V))^{−1/2} ≤ σ_k(X) ≤ σ_k(XV)(1 − orth(V))^{−1/2}.

Proof. We use the inequality in [15, p. 419, Corollary 7.3.8] of the form

    σ_k(X) σ_n(V) ≤ σ_k(XV) ≤ σ_k(X) σ_1(V).

Using the bounds in (2.10)–(2.11) on V, we get (3.24). If we just use Lemma 3.5 with Theorem 3.1, we obtain a bound on the singular values of B in terms of those of X.

Theorem 3.6. Assume the hypothesis and terminology of Theorem 3.1. Excluding terms of O(ε_M² + η̄_n²), the singular values of X and those of B are related by

    (3.25)    [ Σ_{j=1}^n (σ_j(X) − σ_j(B))² ]^{1/2} ≤ ‖δC_n‖_F + ‖X‖_F [(1 − orth(V))^{−1/2} − 1]
    (3.26)                                           ≤ [ f_1(m, n, n) ε_M + (f_2(n) + √(n/2)) η̄_n ] ‖X‖_2.

Proof. From the triangle inequality applied to the two-norm,

    [ Σ_{j=1}^n (σ_j(X) − σ_j(B))² ]^{1/2} ≤ [ Σ_{j=1}^n (σ_j(X) − σ_j(XV))² ]^{1/2} + [ Σ_{j=1}^n (σ_j(XV) − σ_j(B))² ]^{1/2}.

From Theorem 3.1 and the Wielandt–Hoffman theorem for singular values [11, p. 450], we have that

    (3.27)    [ Σ_{k=1}^n (σ_k(XV) − σ_k(B))² ]^{1/2} ≤ ‖δC_n‖_F.

From Lemma 3.5, we have that

    |σ_k(X) − σ_k(XV)| ≤ σ_k(X) max{ (1 + orth(V))^{1/2} − 1, 1 − (1 − orth(V))^{1/2} }
                       ≤ [(1 − orth(V))^{−1/2} − 1] σ_k(X).

Using this inequality, we obtain the bound

    (3.28)    [ Σ_{k=1}^n (σ_k(X) − σ_k(XV))² ]^{1/2} ≤ [(1 − orth(V))^{−1/2} − 1] [ Σ_{k=1}^n σ_k(X)² ]^{1/2}
                                                      = [(1 − orth(V))^{−1/2} − 1] ‖X‖_F.

Combining (3.27) and (3.28) yields (3.25). To obtain (3.26), note the bound on ‖δC_n‖_F in (3.5)–(3.6) and that

    (3.29)    [(1 − orth(V))^{−1/2} − 1] = orth(V)/2 + O(orth²(V)) ≤ η̄_n/√2 + O(η̄_n²),

since orth(V) ≤ √2 η̄_n. Since ‖X‖_F ≤ √n ‖X‖_2, (3.26) results from combining (3.29) with (3.25).

The bound (3.25) shows that the singular values of B are close to those of X as long as orth(V) is kept small.
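The content of Theorem 3.6 can be observed directly. The following MATLAB fragment (illustrative; it assumes the lanczos_bidiag and GS_reorthog sketches given with Functions 2.1 and 4.1) computes the left-hand side of (3.25) and prints it next to orth(V).

    % Wielandt-Hoffman distance between the singular values of B and X.
    rng(2);
    m = 150; n = 100;
    X = orth(randn(m,n)) * diag(logspace(0,-15,n)) * orth(randn(n,n))';
    v1 = randn(n,1); v1 = v1/norm(v1);
    [B, ~, V] = lanczos_bidiag(X, v1, n, @(Bk,Vk,r) GS_reorthog(Vk, r, false));
    wh = norm(svd(X) - svd(B));     % [sum_j (sigma_j(X)-sigma_j(B))^2]^(1/2)
    fprintf('WH distance: %.2e, orth(V): %.2e\n', wh, norm(eye(n) - V'*V));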

4. Reorthogonalization Strategies. We discuss two common approaches to specifying reorthog from Function 2.1. The first is to use Gram–Schmidt reorthogonalization of v_{k+1} against all previously computed right Lanczos vectors (§4.1); the second is to use a selective reorthogonalization strategy (§4.2). A third method, specified in §4.3, is used for our numerical tests to quantify the effect of loss of orthogonality in V.

4.1. Complete Reorthogonalization. Our strategy for complete reorthogonalization, which grows out of "twice is enough" approaches in [14] and approaches in [8, 24], [21, §6.9], is a version of Gram–Schmidt reorthogonalization given by Barlow et al. [3]. First, we compute

    (4.1)    ĥ^{(1)} = V_k^T r_k,    r_k^{(1)} = r_k − V_k ĥ^{(1)}.

If

    (4.2)    ‖r_k^{(1)}‖_2 ≥ √(4/5) ‖r_k‖_2,

then we accept

    φ_{k+1} = ‖r_k^{(1)}‖_2,    v_{k+1} = r_k^{(1)} / φ_{k+1},    ĥ_{k+1} = ĥ^{(1)}.

Otherwise, we compute

    (4.3)    ĥ^{(2)} = V_k^T r_k^{(1)},    r_k^{(2)} = r_k^{(1)} − V_k ĥ^{(2)}.

If

    (4.4)    ‖r_k^{(2)}‖_2 ≥ √(4/5) ‖r_k^{(1)}‖_2,

then we accept

    φ_{k+1} = ‖r_k^{(2)}‖_2,    v_{k+1} = r_k^{(2)} / φ_{k+1},    ĥ_{k+1} = ĥ^{(1)} + ĥ^{(2)}.

If either (4.2) or (4.4) holds, we show in the Appendix that, ignoring rounding error, we have

    ‖V_k^T v_{k+1}‖_2 ≤ 0.5 ξ_k + O(ξ_k²),

where

    (4.5)    ξ_k = ‖I_k − V_k^T V_k‖_2 = orth(V_k).

If (4.2)–(4.4) is false, we use a method from [8], as modified in [3], to construct φ_{k+1} and v_{k+1}. We find e_J such that

    ‖V_k^T e_J‖_2 = min_{1≤j≤m} ‖V_k^T e_j‖_2,

and then compute

    c^{(1)} = V_k^T e_J,    t^{(1)} = e_J − V_k c^{(1)},
    c^{(2)} = V_k^T t^{(1)},    t^{(2)} = t^{(1)} − V_k c^{(2)}.

Then

    (4.6)    v_{k+1} = t^{(2)} / ‖t^{(2)}‖_2

satisfies

    ‖V_k^T v_{k+1}‖_2 ≤ √(k/(m−k)) ξ_k² + O(ξ_k⁴).

For all practical purposes, this choice of v_{k+1} restarts the Lanczos process. We propose two possible ways to choose φ_{k+1}. In exact arithmetic, we show in the Appendix that

    X^T u_k = V_k ĥ_k + r_k^{(2)},

where

    (4.7)    ‖r_k^{(2)}‖_2 ≤ 2 ξ_k (1 + ξ_k) ‖r_k‖_2 ≤ 2 ξ_k (1 + ξ_k) ‖X‖_2,

which is small relative to ‖X‖_2. Our first choice of φ_{k+1}, given by

    (4.8)    φ_{k+1} = v_{k+1}^T r_k^{(2)},

produces

    X^T u_k = V_k ĥ_k + φ_{k+1} v_{k+1} + n_k,

where ‖n_k‖_2 is minimized over all choices of φ_{k+1}. However, since

    |φ_{k+1}| ≤ ‖r_k^{(2)}‖_2 ≤ 2 ξ_k (1 + ξ_k) ‖X‖_2,

our second choice of φ_{k+1}, given by φ_{k+1} = 0, neglects an element of size O(ξ_k ‖X‖_2), within the magnitude of the bounds on the errors in the singular values in Theorem 3.6. We encapsulate this algorithm in Function 4.1. The Boolean variable setzero is true if we set φ_{k+1} to zero when (4.2) and (4.4) are false, and false if we compute φ_{k+1} as in (4.8).

Function 4.1 (Gram–Schmidt reorthogonalization of r_k against V_k).
    function [v_{k+1}, ĥ_k, φ_{k+1}] = GS_reorthog(V_k, r_k, setzero)
        ĥ^{(1)} = V_k^T r_k;  r_k^{(1)} = r_k − V_k ĥ^{(1)};
        if ‖r_k^{(1)}‖_2 ≥ √(4/5) ‖r_k‖_2
            φ_{k+1} = ‖r_k^{(1)}‖_2;  v_{k+1} = r_k^{(1)} / φ_{k+1};  ĥ_k = ĥ^{(1)};
        else
            ĥ^{(2)} = V_k^T r_k^{(1)};  r_k^{(2)} = r_k^{(1)} − V_k ĥ^{(2)};  ĥ_k = ĥ^{(1)} + ĥ^{(2)};
            if ‖r_k^{(2)}‖_2 ≥ √(4/5) ‖r_k^{(1)}‖_2
                φ_{k+1} = ‖r_k^{(2)}‖_2;  v_{k+1} = r_k^{(2)} / φ_{k+1};

            else
                Find e_J such that ‖V_k^T e_J‖_2 = min_{1≤j≤m} ‖V_k^T e_j‖_2;
                c^{(1)} = V_k^T e_J;  t^{(1)} = e_J − V_k c^{(1)};
                c^{(2)} = V_k^T t^{(1)};  t^{(2)} = t^{(1)} − V_k c^{(2)};
                v_{k+1} = t^{(2)} / ‖t^{(2)}‖_2;
                if setzero
                    φ_{k+1} = 0;
                else
                    φ_{k+1} = v_{k+1}^T r_k^{(2)};
                end;
            end;
        end;
    end GS_reorthog

4.2. Selective Reorthogonalization. Selective reorthogonalization was created by Parlett and Scott [22] from a result of Paige [18] showing that most of the loss of orthogonality in V_k is confined to converged right singular vectors. The variant of that strategy for this decomposition takes the SVD of B_k given by

    (4.9)     B_k = Q_k Θ_k S_k^T,
    (4.10)    Q_k = (q_1, ..., q_k) = (q_{ij}),    S_k = (s_1, ..., s_k) = (s_{ij}),
    (4.11)    Θ_k = diag(θ_1, ..., θ_k),

and finds components l_1, ..., l_τ such that, for a given tolerance tol, we have

    |φ_{k+1} q_{k,l_j}| ≤ tol ‖X‖_F,    j = 1, ..., τ.

It then lets

    S_τ = (s_{l_1}, ..., s_{l_τ})

be the corresponding right singular vectors of B_k, so that the matrix

    Z_τ = V_k S_τ = (z_{l_1}, ..., z_{l_τ})

consists of converged right singular vectors of X. A reorthogonalization procedure, say, GS_reorthog, with Z_τ computes a vector t_{k+1} and v_{k+1} according to

    [v_{k+1}, t_{k+1}, φ_{k+1}] = GS_reorthog(Z_τ, r_k),

and the resulting ĥ_{k+1} in (2.7) is given by ĥ_{k+1} = S_τ t_{k+1}. The strategies in our examples in §5 are variants on performing Gram–Schmidt on all previous right Lanczos vectors, thereby allowing us to demonstrate the effect on the orthogonality of V_k. Since we expect that τ ≪ k, this reorthogonalization practice is often much cheaper than complete reorthogonalization.
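In runnable MATLAB, Function 4.1 might be sketched as follows (our names; the √(4/5) tests and the restart follow the pseudocode above).

    function [v, h, phi] = GS_reorthog(V, r, setzero)
    % GS_REORTHOG  Gram-Schmidt with the "twice is enough" tests
    % (4.2)/(4.4) and the restart of [8], as modified in [3].
        h = V'*r;  r1 = r - V*h;                    % (4.1)
        if norm(r1) >= sqrt(4/5)*norm(r)            % test (4.2)
            phi = norm(r1); v = r1/phi; return
        end
        h2 = V'*r1;  r2 = r1 - V*h2;  h = h + h2;   % (4.3)
        if norm(r2) >= sqrt(4/5)*norm(r1)           % test (4.4)
            phi = norm(r2); v = r2/phi; return
        end
        % Restart: coordinate vector least represented in range(V).
        [~, J] = min(sqrt(sum(V.^2, 2)));           % J minimizes ||V'*e_j||_2
        eJ = zeros(size(V,1), 1); eJ(J) = 1;
        t1 = eJ - V*(V'*eJ);
        t2 = t1 - V*(V'*t1);
        v = t2/norm(t2);
        if setzero, phi = 0; else, phi = v'*r2; end % the two choices of phi
    end

The selective strategy of this subsection could be sketched as below (hypothetical names; the paper gives no code for it). Since φ_{k+1} is not yet available when reorthog is called, this sketch uses ‖r_k‖_2 as its estimate in the convergence test, which is an assumption on our part.

    function [v, hhat, phi] = selective_reorthog(B, V, r, normXF, tol)
    % SELECTIVE_REORTHOG  Reorthogonalize r only against converged right
    % singular vectors Z_tau = V*S_tau, detected via the Section 4.2 test
    % |phi_{k+1} q_{k,l}| <= tol*||X||_F (phi_{k+1} estimated by ||r||).
        [Q, ~, S] = svd(B);                      % B_k = Q*Theta*S', (4.9)-(4.11)
        k = size(B, 1);
        conv = abs(norm(r)*Q(k, :)') <= tol*normXF;  % converged Ritz test
        Stau = S(:, conv);
        Ztau = V*Stau;                           % converged right singular vectors
        [v, t, phi] = GS_reorthog(Ztau, r, false);   % tau << k columns: cheap
        hhat = Stau*t;                           % h_hat_{k+1} = S_tau * t_{k+1}
    end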

4.3. Parametrized Reorthogonalization for Numerical Tests. To construct our numerical tests in §5, we give a parametrized modification of GS_reorthog from §4.1. Let

    (4.12)    φ^{(0)}_{k+1} = ‖r_k‖_2,    v^{(0)}_{k+1} = r_k / φ^{(0)}_{k+1},

and let

    (4.13)    r_k^{(j)} = (I − V_k V_k^T)^j r_k,    j = 1, 2,
    (4.14)    φ^{(j)}_{k+1} = ‖r_k^{(j)}‖_2,    v^{(j)}_{k+1} = r_k^{(j)} / φ^{(j)}_{k+1}.

For j = 0, 1, 2, we accept v_{k+1} = v^{(j)}_{k+1} (and do no further reorthogonalization) if

    (4.15)    ‖V_k^T v^{(j)}_{k+1}‖_2 ≤ η̂

for some specified parameter η̂. If (4.15) is not satisfied for j = 0, 1, 2, we compute v_{k+1} according to (4.6).

Function 4.2 (Parametrized Gram–Schmidt reorthogonalization of r_k against V_k).
    function [v_{k+1}, ĥ_k, φ_{k+1}] = GS_reorthog_eta(V_k, r_k, η̂, setzero)
        ĥ^{(1)} = V_k^T r_k;  φ_{k+1} = ‖r_k‖_2;
        if ‖ĥ^{(1)}‖_2 ≤ φ_{k+1} η̂
            v_{k+1} = r_k / φ_{k+1};  ĥ_k = 0;
        else
            r_k^{(1)} = r_k − V_k ĥ^{(1)};  ĥ^{(2)} = V_k^T r_k^{(1)};  φ_{k+1} = ‖r_k^{(1)}‖_2;
            if ‖ĥ^{(2)}‖_2 ≤ φ_{k+1} η̂
                v_{k+1} = r_k^{(1)} / φ_{k+1};  ĥ_k = ĥ^{(1)};
            else
                r_k^{(2)} = r_k^{(1)} − V_k ĥ^{(2)};  ĥ^{(3)} = V_k^T r_k^{(2)};  φ_{k+1} = ‖r_k^{(2)}‖_2;
                if ‖ĥ^{(3)}‖_2 ≤ φ_{k+1} η̂
                    v_{k+1} = r_k^{(2)} / φ_{k+1};  ĥ_k = ĥ^{(1)} + ĥ^{(2)};
                else
                    Find e_J such that ‖V_k^T e_J‖_2 = min_{1≤j≤m} ‖V_k^T e_j‖_2;
                    c^{(1)} = V_k^T e_J;  t^{(1)} = e_J − V_k c^{(1)};
                    c^{(2)} = V_k^T t^{(1)};  t^{(2)} = t^{(1)} − V_k c^{(2)};
                    v_{k+1} = t^{(2)} / ‖t^{(2)}‖_2;
                    if setzero
                        φ_{k+1} = 0;
                    else
                        φ_{k+1} = v_{k+1}^T r_k^{(2)};
                    end;
                end;
            end;
        end;
    end GS_reorthog_eta
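A compact runnable MATLAB counterpart of Function 4.2 could look like the following (our loop-based restructuring of the same logic; the names are assumptions).

    function [v, h, phi] = GS_reorthog_eta(V, r, eta_hat, setzero)
    % GS_REORTHOG_ETA  Gram-Schmidt passes stop as soon as the relative
    % projection ||V'*x||/||x|| falls below eta_hat (test (4.15));
    % otherwise restart as in Function 4.1.
        h = zeros(size(V,2), 1);  x = r;
        for pass = 1:3
            c = V'*x;  phi = norm(x);
            if norm(c) <= phi*eta_hat          % test (4.15)
                v = x/phi; return
            end
            if pass < 3, h = h + c; x = x - V*c; end
        end
        % All three tests failed: restart (x holds r^(2) here).
        [~, J] = min(sqrt(sum(V.^2, 2)));
        eJ = zeros(size(V,1), 1); eJ(J) = 1;
        t1 = eJ - V*(V'*eJ);  t2 = t1 - V*(V'*t1);
        v = t2/norm(t2);
        if setzero, phi = 0; else, phi = v'*x; end
    end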

This routine guarantees that

    ‖V_k^T v_{k+1}‖_2 ≤ max{ η̂, √(n/(m−n)) ξ_k² },

where ξ_k is from (4.5).

5. Numerical Tests.

Example 5.1. For these examples, we construct m×n matrices of the form X = P Σ Z^T where n = 50, 60, ..., 300, m = 1.5n, P ∈ R^{m×n} is left orthogonal, Z ∈ R^{n×n} is orthogonal, and Σ is positive and diagonal. The matrices P and Z come from the two MATLAB commands

    P = orth(randn(m, n));  Z = orth(randn(n, n));

using the randn command, which generates an m×n matrix with entries from a standard normal distribution, and the orth command, which produces an orthogonal basis for the range of its argument. The diagonal matrix Σ is given by

    Σ = diag(σ_1, ..., σ_n),    σ_1 = 1,    σ_k = r^{k−1},    where r^{n−1} = 10^{−18},

so that X has a geometric distribution of singular values. The bidiagonal reduction of X was computed in three different ways.

1. The Golub–Kahan–Householder (GKH) algorithm from [10].
2. The Golub–Kahan–Lanczos procedure using Function 4.1 to do reorthogonalization, setting setzero = false. We call this GKL-nonzero.
3. The GKL procedure as in item 2, except setting setzero = true for restarts, which we call GKL-setzero. For this case, several elements of the superdiagonal are set to zero. We show how many in Figure 5.2.

The singular values of the resulting bidiagonal matrices are computed by the MATLAB svd command and compared to the result of using the svd command directly on X. The upper window of Figure 5.1 compares the singular values from the GKH algorithm to those from the GKL-nonzero algorithm; both are displayed with orth(V). These errors and orth(V) are about 10^{−15}; thus V is near orthogonal and the singular values computed by the two methods are very close. The errors are the differences between their computed singular values and those computed by the MATLAB svd command on X. In the lower window of Figure 5.1, we look at the difference in the singular values between GKL-nonzero and GKL-setzero, again displayed with orth(V). The differences are about 10^{−16}; thus the two strategies are almost indistinguishable.

Example 5.2. We construct our examples exactly as in Example 5.1 except that we do reorthogonalization of V with Function 4.2, GS_reorthog_eta, with η̂ = 10^{−8}. We do the same three kinds of bidiagonalization: GKH, GKL-nonzero, GKL-setzero. The upper window of Figure 5.3 shows the error in the singular values for GKH and GKL-nonzero posted beside orth(V). We see that orth(V) is in the range of 10^{−8} and the error in the singular values for GKL-nonzero is a little smaller than that, consistent with Theorem 3.6. In the lower window of Figure 5.3, we compare the singular values of the two restart strategies GKL-nonzero and GKL-setzero. Their difference is about 10^{−9} and thus a little smaller than orth(V).
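A minimal MATLAB driver in the spirit of Example 5.1 (illustrative; it assumes the lanczos_bidiag and GS_reorthog sketches given earlier) is:

    % Geometric-spectrum test matrices X = P*Sigma*Z' as in Example 5.1.
    rng(3);
    for n = 50:50:300
        m = round(1.5*n);
        P = orth(randn(m, n));  Z = orth(randn(n, n));
        Sigma = diag((1e-18).^((0:n-1)/(n-1)));  % sigma_k = r^(k-1), r^(n-1) = 1e-18
        X = P*Sigma*Z';
        v1 = randn(n, 1);  v1 = v1/norm(v1);
        [B, ~, V] = lanczos_bidiag(X, v1, n, @(Bk,Vk,r) GS_reorthog(Vk, r, false));
        fprintf('n=%3d  WH error %.2e  orth(V) %.2e\n', ...
                n, norm(svd(X) - svd(B)), norm(eye(n) - V'*V));
    end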

Fig. 5.1. Wielandt–Hoffman error in singular values from Example 5.1. Upper window: log10 of the errors for GKH and GKL, together with orth(V), against dimension. Lower window: log10 of the difference in singular values for the restart strategies, the F-norm of the discards, and orth(V), against dimension.

6. Conclusion. When the Golub–Kahan–Lanczos algorithm is applied to X ∈ R^{m×n}, we can reorthogonalize just the right Lanczos vectors, as proposed by [23]. Theorem 3.1 establishes a key relationship between V_k, the matrix of the first k right Lanczos vectors, B_k, the leading k×k submatrix of B, and an orthogonal matrix W^{(k)} generated from the left Lanczos vectors. As a consequence, the computed singular values of B are a distance bounded by O([ε_M + orth(V)] ‖X‖_2) from those of X. Moreover, if the reorthogonalization strategies used to produce v_{k+1}, the (k+1)st column of V, do not produce a sufficiently orthogonal v_{k+1}, then v_{k+1} can be produced from a restart strategy and the corresponding upper bidiagonal element φ_{k+1} can be set to zero without significant loss of accuracy. In [1], Theorem 3.1 is used to change the manner in which left singular vectors are computed from the left Lanczos vectors.

Fig. 5.2. Number of superdiagonal elements zeroed out by restarts in Example 5.1, against dimension.

REFERENCES

[1] J.L. Barlow. Reorthogonalization for the Golub–Kahan–Lanczos bidiagonal reduction: Part II: singular vectors. svd orthii.pdf, 2010.
[2] J.L. Barlow, N. Bosner, and Z. Drmač. A new backward stable bidiagonal reduction method. Linear Alg. Appl., 397:35–84, 2005.
[3] J.L. Barlow, A. Smoktunowicz, and H. Erbay. Improved Gram–Schmidt downdating methods. BIT, 45:259–285, 2005.
[4] M. Berry, Z. Drmač, and E. Jessup. Matrices, vector spaces, and information retrieval. SIAM Review, 41:335–362, 1999.
[5] Å. Björck. A bidiagonalization algorithm for solving large and sparse ill-posed systems of linear equations. BIT, 28:659–670, 1988.
[6] N. Bosner and J. Barlow. Block and parallel versions of one-sided bidiagonalization. SIAM J. Matrix Anal. Appl., 29(3):927–953, 2007.
[7] D. Calvetti and L. Reichel. Tikhonov regularization of large linear problems. BIT, 43:263–283, 2003.
[8] J.W. Daniel, W.B. Gragg, L. Kaufman, and G.W. Stewart. Reorthogonalization and stable algorithms for updating the Gram–Schmidt QR factorization. Math. Comp., 30(136):772–795, 1976.
[9] L. Eldén. Algorithms for the regularization of ill-conditioned least squares problems. BIT, 17:134–145, 1977.
[10] G.H. Golub and W.M. Kahan. Calculating the singular values and pseudo-inverse of a matrix. SIAM J. Numer. Anal. Ser. B, 2:205–224, 1965.
[11] G.H. Golub and C.F. Van Loan. Matrix Computations, Third Edition. The Johns Hopkins University Press, Baltimore, MD, 1996.
[12] N.J. Higham. Functions of Matrices: Theory and Computation. SIAM Publications, Philadelphia, PA, 2008.

Fig. 5.3. Maximum error in singular values from Example 5.2. Upper window: log10 of the Wielandt–Hoffman errors for GKH and GKL, together with orth(V), against dimension. Lower window: log10 of the difference in singular values for the restart strategies, the F-norm of the discards, and orth(V), against dimension.

[13] I. Hnětynková, M. Plešinger, and Z. Strakoš. Golub–Kahan iterative bidiagonalization and determining the size of the noise in data. BIT, 49:669–696, 2009.
[14] W. Hoffmann. Iterative algorithms for Gram–Schmidt orthogonalization. Computing, 41:335–348, 1989.
[15] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, UK, 1985.
[16] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. hastie/papers/svd JMLR.pdf.
[17] C.C. Paige and M.A. Saunders. LSQR: An algorithm for sparse linear equations and least squares problems. ACM Trans. Math. Software, 8:43–71, 1982.
[18] C.C. Paige. The Computation of Eigenvalues and Eigenvectors of Very Large Sparse Matrices. PhD thesis, University of London, 1971.
[19] C.C. Paige. A useful form of unitary matrix from any sequence of unit 2-norm n-vectors. SIAM J. Matrix Anal. Appl., 31(2):565–583, 2009.
[20] C.C. Paige and M.A. Saunders. Algorithm 583: LSQR: Sparse linear equations and least squares problems. ACM Trans. Math. Software, 8:195–209, 1982.
[21] B.N. Parlett. The Symmetric Eigenvalue Problem. SIAM Publications, Philadelphia, PA, 1998. Republication of 1980 book.
[22] B.N. Parlett and D.S. Scott. The Lanczos algorithm with selective orthogonalization. Math. Comp., 33:217–238, 1979.

[23] H. Simon and H. Zha. Low rank matrix approximation using the Lanczos bidiagonalization process. SIAM J. Sci. Comput., 21:2257–2274, 2000.
[24] K. Yoo and H. Park. Accurate downdating of a modified Gram–Schmidt QR decomposition. BIT, 36:166–181, 1996.

Appendix. In this appendix, we show two bounds that are related to Function 4.1. They are similar to bounds proved in [3], but are instead stated in terms of ξ_k = orth(V_k). First we assume that either (4.2) or (4.4) holds. Assuming (4.2), we have that v_{k+1} = r_k^{(1)} / ‖r_k^{(1)}‖_2. The argument for (4.4) is identical. Our argument assumes exact arithmetic, but the arguments in [3] show that if ξ_k ≫ ε_M, rounding error has little qualitative effect on the behavior of this procedure.

First, note that

    ‖V_k^T v_{k+1}‖_2 = ‖V_k^T r_k^{(1)}‖_2 / ‖r_k^{(1)}‖_2.

Since

    r_k^{(1)} = (I − V_k V_k^T) r_k,

we have that

    ‖V_k^T v_{k+1}‖_2 = ‖V_k^T (I − V_k V_k^T) r_k‖_2 / ‖r_k^{(1)}‖_2
                      = ‖(I − V_k^T V_k) V_k^T r_k‖_2 / ‖r_k^{(1)}‖_2
                      ≤ ‖I − V_k^T V_k‖_2 ‖V_k^T r_k‖_2 / ‖r_k^{(1)}‖_2
                      = ξ_k ‖V_k^T r_k‖_2 / ‖r_k^{(1)}‖_2.

We now bound the ratio

    (6.1)    ‖V_k^T r_k‖_2 / ‖r_k^{(1)}‖_2.

To do that, we note that

    ‖r_k^{(1)}‖_2² = ‖(I − V_k V_k^T) r_k‖_2² = ‖r_k‖_2² − 2 r_k^T V_k V_k^T r_k + ‖V_k V_k^T r_k‖_2²
                   ≤ ‖r_k‖_2² − 2 ‖V_k^T r_k‖_2² + ‖V_k‖_2² ‖V_k^T r_k‖_2².

Since ‖V_k‖_2² ≤ 1 + ξ_k, we have

    ‖r_k^{(1)}‖_2² ≤ ‖r_k‖_2² − (1 − ξ_k) ‖V_k^T r_k‖_2².

Using (4.2), this becomes

    (4/5) ‖r_k‖_2² ≤ ‖r_k‖_2² − (1 − ξ_k) ‖V_k^T r_k‖_2²,

implying that

    (6.2)    ‖V_k^T r_k‖_2 ≤ ‖r_k‖_2 / √(5(1 − ξ_k)).

Combining (6.2) with (4.2), our bound for (6.1) is

    ‖V_k^T r_k‖_2 / ‖r_k^{(1)}‖_2 ≤ 1 / (2 √(1 − ξ_k)).

Thus

    ‖V_k^T v_{k+1}‖_2 ≤ ξ_k / (2 √(1 − ξ_k)) = 0.5 ξ_k + O(ξ_k²).

If (4.2) and (4.4) are both false, analysis in [8] yields

    ‖V_k^T e_J‖_2 ≤ √(k/m).

Thus, similar to that above, if we let

    v^{(1)}_{k+1} = t^{(1)} / ‖t^{(1)}‖_2,

then

    ‖V_k^T v^{(1)}_{k+1}‖_2 = ‖V_k^T t^{(1)}‖_2 / ‖t^{(1)}‖_2 ≤ ξ_k ‖V_k^T e_J‖_2 / ‖t^{(1)}‖_2 ≤ √(k/m) ξ_k / ‖t^{(1)}‖_2.

A repeat of the arguments above yields

    v_{k+1} = t^{(2)} / ‖t^{(2)}‖_2.

Since t^{(2)} = (I − V_k V_k^T) t^{(1)}, we have that

    ‖V_k^T v_{k+1}‖_2 = ‖V_k^T t^{(2)}‖_2 / ‖t^{(2)}‖_2 ≤ ξ_k ‖V_k^T t^{(1)}‖_2 / ‖t^{(2)}‖_2 ≤ √(k/m) ξ_k² / ‖t^{(2)}‖_2,

and, expanding two-norms as before,

    ‖t^{(2)}‖_2² ≥ ‖t^{(1)}‖_2² − (1 + ξ_k) ‖V_k^T t^{(1)}‖_2² ≥ 1 − (1 + ξ_k) k/m − (1 + ξ_k)(k/m) ξ_k².

Using (2.13), we have

    ‖t^{(2)}‖_2 ≥ √((m − k)/m) + O(ξ_k²),

and thus

    ‖V_k^T v_{k+1}‖_2 ≤ √(k/m) ξ_k² / ‖t^{(2)}‖_2 ≤ √(k/(m − k)) ξ_k² + O(ξ_k⁴).

We now bound ‖r_k^{(2)}‖_2 in the case ‖r_k^{(2)}‖_2 < √(4/5) ‖r_k^{(1)}‖_2. Computing two-norms yields

    ‖r_k^{(2)}‖_2² = ‖r_k^{(1)}‖_2² − 2 [r_k^{(1)}]^T V_k V_k^T r_k^{(1)} + ‖V_k V_k^T r_k^{(1)}‖_2².

The second term expands to

    ‖V_k V_k^T r_k^{(1)}‖_2² = [r_k^{(1)}]^T V_k (V_k^T V_k) V_k^T r_k^{(1)} = ‖V_k^T r_k^{(1)}‖_2² − [V_k^T r_k^{(1)}]^T G_k V_k^T r_k^{(1)},

where

    G_k = I − V_k^T V_k,

and, from (4.5), ‖G_k‖_2 = orth(V_k) = ξ_k. Thus

    (6.3)    ‖r_k^{(2)}‖_2² = ‖r_k^{(1)}‖_2² − ‖V_k^T r_k^{(1)}‖_2² − [V_k^T r_k^{(1)}]^T G_k V_k^T r_k^{(1)}.

Equation (6.3) leads to the bound

    (6.4)    ‖r_k^{(2)}‖_2² ≥ ‖r_k^{(1)}‖_2² − (1 + ξ_k) ‖V_k^T r_k^{(1)}‖_2²,

and since (4.2)–(4.4) is false, so that ‖r_k^{(2)}‖_2² < (4/5) ‖r_k^{(1)}‖_2², we have

    (6.5)    (1 + ξ_k) ‖V_k^T r_k^{(1)}‖_2² ≥ ‖r_k^{(1)}‖_2² − ‖r_k^{(2)}‖_2² > (1/4) ‖r_k^{(2)}‖_2²,

which implies

    ‖V_k^T r_k^{(1)}‖_2 > ‖r_k^{(2)}‖_2 / (2 √(1 + ξ_k)).

Then, since

    (6.6)    V_k^T r_k^{(1)} = V_k^T (I − V_k V_k^T) r_k = G_k V_k^T r_k,

we have

    ‖V_k^T r_k^{(1)}‖_2² ≤ ‖G_k‖_2² ‖V_k^T r_k‖_2² ≤ ξ_k² (1 + ξ_k) ‖r_k‖_2².

Combining (6.6) and (6.5) and taking square roots, we have

    (6.7)    ‖r_k^{(2)}‖_2 ≤ 2 ξ_k (1 + ξ_k) ‖r_k‖_2.

Since ‖r_k‖_2 ≤ ‖X‖_2, we have that

    ‖r_k^{(2)}‖_2 ≤ 2 ξ_k (1 + ξ_k) ‖X‖_2.
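The bound (6.7) can be checked numerically. In the following MATLAB fragment (an assumed setup, not the paper's test), r is chosen nearly in the range of V_k so that both tests (4.2) and (4.4) fail:

    % Appendix check: ||r^(2)||_2 <= 2*xi*(1+xi)*||r||_2 in the restart regime.
    rng(4);
    n = 200; k = 80;
    V = orth(randn(n, k)) + 1e-4*randn(n, k);   % mildly nonorthogonal V_k
    xi = norm(eye(k) - V'*V);                   % xi_k = orth(V_k)
    r = V*randn(k, 1) + 1e-12*randn(n, 1);      % forces both tests to fail
    r1 = r - V*(V'*r);   r2 = r1 - V*(V'*r1);   % two Gram-Schmidt passes
    fprintf('||r2||/||r|| = %.2e  vs  2*xi*(1+xi) = %.2e\n', ...
            norm(r2)/norm(r), 2*xi*(1+xi));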


More information

Data Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples

Data Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples Data Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

Krylov Subspaces. Lab 1. The Arnoldi Iteration

Krylov Subspaces. Lab 1. The Arnoldi Iteration Lab 1 Krylov Subspaces Lab Objective: Discuss simple Krylov Subspace Methods for finding eigenvalues and show some interesting applications. One of the biggest difficulties in computational linear algebra

More information

Projected Nonstationary Iterated Tikhonov Regularization

Projected Nonstationary Iterated Tikhonov Regularization BIT manuscript No. (will be inserted by the editor) Projected Nonstationary Iterated Tihonov Regularization Guangxin Huang Lothar Reichel Feng Yin Received: date / Accepted: date Dedicated to Heinrich

More information

SVD, PCA & Preprocessing

SVD, PCA & Preprocessing Chapter 1 SVD, PCA & Preprocessing Part 2: Pre-processing and selecting the rank Pre-processing Skillicorn chapter 3.1 2 Why pre-process? Consider matrix of weather data Monthly temperatures in degrees

More information

Solving linear equations with Gaussian Elimination (I)

Solving linear equations with Gaussian Elimination (I) Term Projects Solving linear equations with Gaussian Elimination The QR Algorithm for Symmetric Eigenvalue Problem The QR Algorithm for The SVD Quasi-Newton Methods Solving linear equations with Gaussian

More information

Lecture 9 Least Square Problems

Lecture 9 Least Square Problems March 26, 2018 Lecture 9 Least Square Problems Consider the least square problem Ax b (β 1,,β n ) T, where A is an n m matrix The situation where b R(A) is of particular interest: often there is a vectors

More information

Updating QR factorization procedure for solution of linear least squares problem with equality constraints

Updating QR factorization procedure for solution of linear least squares problem with equality constraints Zeb and Yousaf Journal of Inequalities and Applications 2017 2017:281 DOI 10.1186/s13660-017-1547-0 R E S E A R C H Open Access Updating QR factorization procedure for solution of linear least squares

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

Introduction to Numerical Linear Algebra II

Introduction to Numerical Linear Algebra II Introduction to Numerical Linear Algebra II Petros Drineas These slides were prepared by Ilse Ipsen for the 2015 Gene Golub SIAM Summer School on RandNLA 1 / 49 Overview We will cover this material in

More information

Lecture 5 Singular value decomposition

Lecture 5 Singular value decomposition Lecture 5 Singular value decomposition Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems Charles University Faculty of Mathematics and Physics DOCTORAL THESIS Iveta Hnětynková Krylov subspace approximations in linear algebraic problems Department of Numerical Mathematics Supervisor: Doc. RNDr.

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice 3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

On the Computations of Eigenvalues of the Fourth-order Sturm Liouville Problems

On the Computations of Eigenvalues of the Fourth-order Sturm Liouville Problems Int. J. Open Problems Compt. Math., Vol. 4, No. 3, September 2011 ISSN 1998-6262; Copyright c ICSRS Publication, 2011 www.i-csrs.org On the Computations of Eigenvalues of the Fourth-order Sturm Liouville

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-12 Large-Scale Eigenvalue Problems in Trust-Region Calculations Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug ISSN 1389-6520 Reports of the Department of

More information