Algorithms and Perturbation Theory for Matrix Eigenvalue Problems and the SVD
1 Algorithms and Perturbation Theory for Matrix Eigenvalue Problems and the SVD
Yuji Nakatsukasa. PhD dissertation, University of California, Davis. Supervisor: Roland Freund. Householder 2014.
3 Acknowledgment
For the supervision and support: Zhaojun Bai, Nick Higham, Françoise Tisseur.
For the collaboration and friendship: Kensuke Aishima, Rüdiger Borsdorf, Stefan Güttel, Vanni Noferini, Alex Townsend.
4 Dissertation content: references
I. Matrix decomposition algorithms
- N., Aishima, Yamazaki. dqds with aggressive deflation. SIMAX.
- N., Bai, Gygi. Optimizing Halley's iteration for the polar decomposition. SIMAX.
- N., Higham. Backward stability of polar decomposition algorithms. SIMAX.
- N., Higham. Spectral divide-and-conquer algorithms for symeig and SVD. SISC.
II. Eigenvalue perturbation theory
- Li, N., Truhar, Xu. Perturbation for partitioned Hermitian GEP. SIMAX.
- N. Absolute/relative Weyl theorem for GEP. LAA.
- N. Perturbation of a multiple generalized eigenvalue. BIT.
- N. Gerschgorin-type theorem for GEP in the Euclidean metric. Math. Comp.
- N. Perturbation for Hermitian block tridiagonal matrices. APNUM.
- N. Condition numbers of a multiple generalized eigenvalue. Numer. Math.
- N. The tan θ theorem with relaxed conditions. LAA, 2012.
6 Dissertation content: table of contents
I. Matrix decomposition algorithms
- spectral divide-and-conquer algorithms for eigenproblems
- polar decomposition algorithm (type (3^k, 3^k − 1) Zolotarev) for symeig and SVD; led to Zolotarev-based algorithms (Tuesday's talk) + generalized eigenproblems
- stability proof for polar and symeig, SVD [N., Higham SIMAX (12), SISC (13)]
- bidiagonal singular values: dqds + aggressive early deflation [N., Aishima, Yamazaki SIMAX (12)]
II. Eigenvalue perturbation theory
- Weyl-type bounds for generalized eigenproblems
- off-diagonal, block tridiagonal perturbation
- eigenvector bounds, tan θ theorem
- Gerschgorin theory for generalized eigenproblems
Today's plan: a few tricks I learned; show how perturbation theory inspires algorithm design.
8 Tricks I've learned
1. (Almost) all matrix iterations employ rational approximation. Examples: QR algorithm, expm, polar, shift-invert Arnoldi.
2. An O(ε) off-diagonal perturbation results in an O(ε²) change in the eigenvalues [Li, Li (05)]:
|eig([A_1 E^T; E A_2]) − eig([A_1 0; 0 A_2])| ≤ ‖E‖² / gap,
and this holds even for generalized nonsymmetric problems [Li, N., Truhar, Xu SIMAX (11)]:
|eig([A_1 E_1; E_2 A_2] − λ[B_1 F_1^T; F_2 B_2]) − eig([A_1 0; 0 A_2] − λ[B_1 0; 0 B_2])| ≤ (‖E‖ + |λ|‖F‖)² / gap(A_1 − λB_1, A_2 − λB_2).
This can also be proved by a Gerschgorin-type argument [N., Math. Comp. (11)].
3. The influence of diagonal blocks connected by k off-diagonals of O(ε) decays like O(ε^k / gap) [Paige LAA (74), N., APNUM (11)].
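Trick 2 is easy to check numerically. A minimal sketch (the block sizes, diagonal entries, and ε are arbitrary illustrative choices, not from the talk) compares the eigenvalues of a block-diagonal symmetric matrix before and after an O(ε) off-diagonal perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
# two symmetric diagonal blocks whose spectra are separated by a gap ~ 2
A1 = np.diag([1.0, 2.0, 3.0])
A2 = np.diag([5.0, 6.0, 7.0])
n1, n2 = A1.shape[0], A2.shape[0]

eps = 1e-4
E = eps * rng.standard_normal((n2, n1))     # O(eps) off-diagonal coupling

A0 = np.block([[A1, np.zeros((n1, n2))], [np.zeros((n2, n1)), A2]])
A = np.block([[A1, E.T], [E, A2]])

diff = np.abs(np.sort(np.linalg.eigvalsh(A)) - np.sort(np.linalg.eigvalsh(A0)))
# O(eps) perturbation, but only an O(eps^2 / gap) change in the eigenvalues
print(diff.max())
```

The printed maximum shift is on the order of ε²/gap ≈ 1e-8 rather than ε = 1e-4, which is exactly the quadratic effect the bound describes.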
10 Polar decomposition A = U_p H: algorithms
Scaled Newton iteration (type (2,1) Zolotarev):
X_{k+1} = (1/2)(μ_k X_k + μ_k^{-1} X_k^{-*}), X_0 = A.
- Higham (1986): gave the optimal μ_k and a cheap approximation.
- Byers–Xu (2008): ζ_{k+1} = √(2/(ζ_k + 1/ζ_k)), ζ_0 = 1/√(ab), with a ≥ ‖A‖_2, b ≤ σ_min(A).
QDWH (QR-based dynamically weighted Halley; type (3,2) Zolotarev) [N., Bai & Gygi (2010)]:
X_{k+1} = X_k(a_k I + b_k X_k^* X_k)(I + c_k X_k^* X_k)^{-1}, X_0 = A/α.
Convergence is cubic; 6 iterations suffice in double precision.
QR-based DWH:
[√c_k X_k; I] = [Q_1; Q_2] R, X_{k+1} = (b_k/c_k) X_k + (1/√c_k)(a_k − b_k/c_k) Q_1 Q_2^*.
Are these algorithms backward stable? (Experimentally, yes.)
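The scaled Newton iteration is a few lines of NumPy. This sketch uses Higham's cheap (1, ∞)-norm scaling rather than the Byers–Xu sequence; the function name and tolerances are illustrative choices, not from the talk:

```python
import numpy as np

def polar_newton(A, tol=1e-12, maxit=50):
    """Scaled Newton iteration X <- (mu*X + (mu*X)^{-T})/2 for the polar factor."""
    X = np.array(A, dtype=float)
    for _ in range(maxit):
        Xi = np.linalg.inv(X)
        # cheap scaling: mu ~ (sigma_min * sigma_max)^{-1/2}, estimated via norms
        mu = (np.linalg.norm(Xi, 1) * np.linalg.norm(Xi, np.inf)
              / (np.linalg.norm(X, 1) * np.linalg.norm(X, np.inf))) ** 0.25
        Xnew = 0.5 * (mu * X + Xi.T / mu)
        if np.linalg.norm(Xnew - X, 'fro') <= tol * np.linalg.norm(Xnew, 'fro'):
            X = Xnew
            break
        X = Xnew
    U = X                                   # orthogonal polar factor
    H = 0.5 * (U.T @ A + (U.T @ A).T)       # symmetric factor, psd at convergence
    return U, H
```

On a well-conditioned random matrix this converges in a handful of iterations, and the computed U and H satisfy U^T U ≈ I and UH ≈ A to roundoff.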
12 Backward stability
Assume Ĥ is Hermitian. The algorithm is backward stable if
Û_p Ĥ = A + ΔA with ‖ΔA‖ = ε‖A‖,
Ĥ = H + ΔH with ‖ΔH‖ = ε‖H‖,
Û_p = U_p + ΔU with ‖ΔU‖ = ε‖U_p‖,
where H is Hermitian positive semidefinite and U_p is unitary.
Crucial consequence: the resulting symeig and SVD algorithms are backward stable. [N. and Higham, SISC (13)]
We develop a global analysis of iterations for the polar decomposition that proves some are backward stable and correctly predicts that others are not.
Strategy: take account of rounding errors within each iteration and of error propagation between iterations.
Key fact: the Hermitian factor H is well-conditioned [Bhatia (94), Higham (08)].
16 Statement
Suppose:
1. Iteration form: X_{k+1} = f_k(X_k), X_0 = A, X_k → U_p.
2. Mixed stable evaluation of the iteration: there is an X̃_k ∈ C^{n×n} such that
X̂_{k+1} = f_k(X̃_k) + ε‖X̂_{k+1}‖_2, X̃_k = X̂_k + ε‖X̂_k‖_2.
3. Mapping function condition: f_k does not significantly decrease the relative size of the σ_i:
f_k(σ_i)/‖f_k(X̃_k)‖_2 ≥ (1/d) · σ_i/‖X̃_k‖_2, d ≥ 1.
Theorem 1. Suppose ‖X̂_l^* X̂_l − I‖ = ε, and let Û_p = X̂_l and Ĥ = (1/2)(Û_p^* A + (Û_p^* A)^*). Then
Û_p Ĥ = A + dε‖A‖_2, Ĥ = H + dε‖H‖_2,
where H is the Hermitian polar factor of A. Furthermore, Û_p = U_p + dε κ_2(A).
17 Condition on f_k: good mapping
[plots: f maps the singular-value interval [m, M] toward 1 without shrinking small values]
- QDWH iteration f(x) = x(a + bx²)/(1 + cx²): a stable mapping, d = 1.
- Scaled Newton iteration f(x) = (1/2)(μx + (μx)^{-1}): a stable mapping, d = 1.
18 Condition on f_k: bad mapping
[plot: f shrinks part of the singular-value interval [m, M] toward 0]
- Inverse Newton iteration f(x) = 2μx(1 + μ²x²)^{-1}: an unstable mapping.
- Newton–Schulz iteration f(x) = (1/2)x(3 − x²): an unstable mapping if M ≈ √3.
19 QDWH is stable
QR-based implementation (QDWH):
[√c_k X_k; I] = [Q_1; Q_2] R, X_{k+1} = (b_k/c_k) X_k + (1/√c_k)(a_k − b_k/c_k) Q_1 Q_2^*.
- Use Householder QR factorization with column pivoting and row sorting (or pivoting). The QR factorization then has row-wise backward errors of order ρ_i u, where the growth factors satisfy ρ_i ≤ (1 + √2)^{n−1} (Cox and Higham, 1998); the ρ_i are usually small in practice.
- Can prove that the mixed stable evaluation condition holds.
- No pivoting is fine in practice. But the blocking order matters:
[I; √c_k X_k] = [Q_2; Q_1] R is unstable.
20 Scaled Newton stability
- The mixed stable condition holds if the matrix inverse is computed by a mixed backward–forward stable method.
- The condition on f_k holds.
Conclusion: scaled Newton is backward stable.
History:
- Higham (85): raised the question of backward stability.
- Kielbasiński, Ziętak (03): long and complicated analysis proving backward stability, assuming matrix inverses are computed in a mixed backward–forward stable way.
- Byers, Xu (08): proof with much simpler arguments, but some incompleteness in the analysis [Kielbasiński, Ziętak (10)].
22 Extra: is the (degree-17) Zolotarev polar iteration stable?
1. Mixed stable evaluation of the iteration? There is an X̃_k ∈ C^{n×n} such that
X̂_{k+1} = f_k(X̃_k) + ε‖X̂_{k+1}‖_2, X̃_k = X̂_k + ε‖X̂_k‖_2.
2. Mapping function condition: f_k does not significantly decrease the relative size of the σ_i:
f_k(σ_i)/‖f_k(X̃_k)‖_2 ≥ (1/d) · σ_i/‖X̃_k‖_2, d ≥ 1. ✓
[plot: the type (7,6) Zolotarev mapping on [−1, 1]]
23 Recap: tricks I've learned
1. (Almost) all matrix iterations employ rational approximation: QR algorithm, Zolotarev-(pd, eig, SVD).
2. An O(ε) off-diagonal perturbation results in an O(ε²) change in the eigenvalues [Li, Li (05)]:
|eig([A_1 E^T; E A_2]) − eig([A_1 0; 0 A_2])| ≤ ‖E‖² / gap,
even for generalized nonsymmetric problems [Li, N., Truhar, Xu (11)]:
|eig([A_1 E_1; E_2 A_2] − λ[B_1 F_1^T; F_2 B_2]) − eig([A_1 0; 0 A_2] − λ[B_1 0; 0 B_2])| ≤ (‖E‖ + |λ|‖F‖)² / gap(A_1 − λB_1, A_2 − λB_2);
can also be proved via a Gerschgorin-type argument [N. (11)].
3. The influence of diagonal blocks connected by k off-diagonals of O(ε) decays like O(ε^k / gap) [N. (11)].
24 Recap: tricks I've learned (trick 3)
For a symmetric tridiagonal matrix
A = tridiag(a_1, ..., a_n; e_1, ..., e_{n−1}),
let Â = A(k+1:end, k+1:end). Then
|eig(A) − eig_m(Â)| ≲ ∏_{i=k+1}^{m} e_i² / |a_i − a_k|, m = k+1, ..., n,
so many eigenvalues of Â match an eigenvalue of A.
Proof ingredients: λ = x^T A x = Σ_{i,j} A_{ij} x_i x_j, and the eigenvector entries decay exponentially.
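This matching effect is easy to see numerically. A small sketch (the size, diagonal, and ε are arbitrary illustrative choices): truncate the leading k rows and columns of a symmetric tridiagonal matrix with O(ε) off-diagonals, and compare spectra.

```python
import numpy as np

n, k = 10, 3
eps = 1e-3
a = np.arange(1.0, n + 1)            # well-separated diagonal entries, gap ~ 1
e = eps * np.ones(n - 1)             # O(eps) off-diagonals

A = np.diag(a) + np.diag(e, 1) + np.diag(e, -1)
Ahat = A[k:, k:]                     # discard the leading k rows/columns

lam, lam_hat = np.linalg.eigvalsh(A), np.linalg.eigvalsh(Ahat)
# each eigenvalue of the trailing submatrix matches one of A to ~eps^2/gap
match = np.array([np.min(np.abs(lam - mu)) for mu in lam_hat])
print(match.max())
```

The worst mismatch is on the order of ε²/gap ≈ 1e-6, far below the ε = 1e-3 size of the discarded coupling.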
25 Standard SVD algorithm [Golub and Kahan (1965)]
1. Reduce A to bidiagonal form via Householder reflections H_L, H_R:
A = U_A B V_A^*, where U_A = ∏ H_L, V_A = ∏ H_R.
2. Compute the SVD of B = U_B Σ V_B^*:
- singular values Σ via dqds;
- singular vectors U_B, V_B via inverse iteration.
3. Assemble the SVD: A = (U_A U_B) Σ (V_A V_B)^* = U Σ V^*.
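Step 1 can be sketched with explicit Householder reflectors. This is a plain O(n³) teaching version, assuming a square real A; the function name and the accumulation of the full U and V are illustrative choices (LAPACK stores the reflectors instead):

```python
import numpy as np

def bidiagonalize(A):
    """Golub-Kahan bidiagonalization A = U B V^T, B upper bidiagonal (square real A)."""
    B = np.array(A, dtype=float)
    m, n = B.shape
    U, V = np.eye(m), np.eye(n)
    for k in range(n):
        # left reflector: zero out B[k+1:, k]
        x = B[k:, k]
        if np.linalg.norm(x[1:]) > 0:
            v = x.copy()
            v[0] += np.copysign(np.linalg.norm(x), x[0])
            v /= np.linalg.norm(v)
            B[k:, :] -= 2.0 * np.outer(v, v @ B[k:, :])
            U[:, k:] -= 2.0 * np.outer(U[:, k:] @ v, v)
        # right reflector: zero out B[k, k+2:]
        if k < n - 2:
            x = B[k, k + 1:]
            if np.linalg.norm(x[1:]) > 0:
                v = x.copy()
                v[0] += np.copysign(np.linalg.norm(x), x[0])
                v /= np.linalg.norm(v)
                B[:, k + 1:] -= 2.0 * np.outer(B[:, k + 1:] @ v, v)
                V[:, k + 1:] -= 2.0 * np.outer(V[:, k + 1:] @ v, v)
    return U, B, V
```

Since the reflectors are orthogonal, B has exactly the same singular values as A, which is what step 2 exploits.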
26 Computing bidiagonal singular values: historical aspects
Typical relative accuracy for B with σ_max = 1 and small σ_min(B):
- QR algorithm applied to B^T B: yields absolute accuracy [Golub and Kahan (1965)]: |σ_i − σ̂_i| ≤ O(n) σ_max ε.
- Refined QR: attains high relative accuracy [Demmel and Kahan (1990)]: |σ_i − σ̂_i| ≤ 69 n² σ_i ε.
- dqds: 4-fold speedup + higher relative accuracy [Fernando and Parlett (1994)]: |σ_i − σ̂_i| ≤ 4 n σ_i ε.
[plot: accuracy attained by QR, refined QR, and dqds across the spectrum]
27 dqds: pseudocode
B is bidiagonal with diagonal √q_i and superdiagonal √e_i, i.e. q_i = (B_{i,i})², e_i = (B_{i,i+1})².
Algorithm 1 (dqds):
for m := 0, 1, ... do
  choose a shift s (≥ 0)
  d_1 := q_1 − s
  for i := 1, ..., n−1 do
    q̂_i := d_i + e_i
    ê_i := e_i q_{i+1} / q̂_i
    d_{i+1} := d_i q_{i+1} / q̂_i − s
  end for
  q̂_n := d_n
end for
- Root-free: e_i → 0 and √q_i → σ_i with guaranteed high relative accuracy.
- Sequential in nature; has been difficult to parallelize.
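The zero-shift version (dqd) is short enough to test directly. A sketch in NumPy, assuming a fixed sweep count and a small test matrix as arbitrary choices (a practical code adds shifts, deflation, and a convergence test):

```python
import numpy as np

def dqd_singular_values(diag, superdiag, sweeps=500):
    """Zero-shift dqd sweeps on q_i = B_{i,i}^2, e_i = B_{i,i+1}^2."""
    q = np.asarray(diag, dtype=float) ** 2
    e = np.asarray(superdiag, dtype=float) ** 2
    n = len(q)
    for _ in range(sweeps):
        d = q[0]
        for i in range(n - 1):
            qhat = d + e[i]                  # q̂_i := d_i + e_i
            e[i] = e[i] * q[i + 1] / qhat    # ê_i := e_i q_{i+1} / q̂_i
            d = d * q[i + 1] / qhat          # d_{i+1} := d_i q_{i+1} / q̂_i
            q[i] = qhat
        q[n - 1] = d
    return np.sort(np.sqrt(q))[::-1]         # e_i -> 0, sqrt(q_i) -> sigma_i
```

Note that the sweep involves only additions, multiplications, and divisions of positive quantities (no subtractions), which is the source of the guaranteed high relative accuracy; the result matches `np.linalg.svd` to roundoff.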
36 dqds with conventional deflation strategy
Typically, running dqds drives e_{n−1} → 0 with convergence factor (σ_n² − s)/(σ_{n−1}² − s) < 1.
- When e_{n−1} is negligibly small, set it to 0.
- q_n is then isolated: a converged singular value.
- Remove the last row and column (deflation), and repeat on the leading (n−1)×(n−1) part.
37 Aggressive deflation for non-Hermitian eigenproblems [Braman, Byers, Mathias (2003)]
Partition the Hessenberg matrix H with block sizes (n−k−1, 1, k), where k is the window size:
H = [H_11 H_12 H_13; H_21 H_22 H_23; 0 H_32 H_33].
Compute the Schur decomposition H_33 = V T V^* (T triangular). Then
diag(I, 1, V)^* H diag(I, 1, V) = [H_11 H_12 H_13 V; H_21 H_22 H_23 V; 0 t T].
Find negligible elements in the spike t = V^* H_32 and deflate. Results in significant speed-up.
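The spike computation can be illustrated on a symmetric tridiagonal stand-in, where the Schur decomposition of the window reduces to a symmetric eigendecomposition (the size, entries, and variable names here are illustrative choices, not from the talk):

```python
import numpy as np

n, k = 12, 4
rng = np.random.default_rng(2)
a = np.sort(rng.uniform(1.0, 10.0, n))[::-1]     # graded diagonal
e = 10.0 ** -np.arange(1.0, n)                   # decaying off-diagonals
T = np.diag(a) + np.diag(e, 1) + np.diag(e, -1)

# trailing k-by-k window and its eigendecomposition T2 = V D V^T
T2 = T[n - k:, n - k:]
D, V = np.linalg.eigh(T2)

# similarity with diag(I, V) turns the coupling beta*e1 into the spike
beta = T[n - k, n - k - 1]
W = np.eye(n)
W[n - k:, n - k:] = V
S = W.T @ T @ W
spike = S[n - k:, n - k - 1]

# the spike equals beta times the first row of V; tiny entries can be deflated
print(np.abs(spike))
```

The similarity leaves the spectrum untouched, so every negligible spike entry corresponds to an eigenvalue that has already converged and can be split off.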
40 Aggressive deflation for dqds, version 1: Aggdef(1)
1. Compute the small SVD B_2 = U Σ V^T of the trailing k×k window B_2 of B (coupled to the rest only through e_{n−k}).
2. Compute diag(I_{n−k}, U^T) B diag(I_{n−k}, V): this leaves [B_1, spike; 0, Σ].
3. Find negligible elements in the spike (≈ 0 due to the O(ε^k/gap) effect) and remove the corresponding rows and columns.
4. Reduce the matrix back to bidiagonal form and resume dqds.
Problem: speed + stability.
42 Efficient and stable aggressive deflation: Aggdef(2)
1. Compute B̃_2 such that B̃_2^T B̃_2 = B_2^T B_2 − sI, where s = (σ_min(B_2))².
2. Apply Givens rotations to B̃_2 to chase the coupling element; set it to 0 when it becomes negligible.
3. Update B̂_2 via B̂_2^T B̂_2 = B̃_2^T B̃_2 + sI, deflate, and repeat.
Lemma 2. Aggdef(1) and Aggdef(2) are mathematically equivalent.
Cost and accuracy (k: window size, ≪ n; l: number of singular values deflated by Aggdef):
- Aggdef(1): O(k²) flops, conditional relative accuracy
- Aggdef(2): O(kl) flops, guaranteed relative accuracy
43 Aggdef(2) preserves high relative accuracy
By a mixed forward–backward relative error analysis, we establish:
Theorem 3. 1 − 8nε ≤ σ_i(B̂)/σ_i(B) ≤ 1 + 8nε for i = 1, ..., n.
Recall the dqds error bound: 1 − 4nε ≤ σ_i(B̂)/σ_i(B) ≤ 1 + 4nε.
Hence calling Aggdef(2) maintains high relative accuracy.
45 Conventional deflation vs. aggressive deflation
Conventional: looks for negligible values ω_i = e_{n−i} (local view); convergence factor
ω_i⁺/ω_i ≈ (σ_{n−i+1}² − s)/(σ_{n−i}² − s).
Aggressive: looks for negligible values ω_i ≈ e_{n−i} ∏_{j=n−k+2}^{n−i} (e_j/q_j) (global view); convergence factor
ω_i⁺/ω_i ≈ (σ_{n−i+1}² − s)/(σ_{n−k+1}² − s).
Here ω_i⁺ denotes ω_i after one dqd(s) iteration, and k is the window size (k = 4 in the illustration).
47 Convergence factors of ω_i
[plot: ω_i for conventional vs. aggressive deflation; solid = dqds (with shift), dashed = dqd (zero shift)]
With shift s: conventional factor (σ_{n−i+1}² − s)/(σ_{n−i}² − s); aggressive factor (σ_{n−i+1}² − s)/(σ_{n−k+1}² − s).
- Aggressive deflation is much more powerful.
- The shift seems unnecessary with aggressive deflation: use dqd (zero shift)?
48 Numerical experiments: specifications
Algorithms (deflation strategy / shift):
- LAPACK: conventional, s > 0
- dqds+agg1: Aggdef(1), s > 0
- dqds+agg2: Aggdef(2), s > 0
- dqd+agg2: Aggdef(2), zero shift
Environment: Intel Core i7 2.67GHz (4 cores, 8 threads), 12GB RAM.
Test matrices B (diagonals √q_i, off-diagonals √e_i), including:
- graded examples, e.g. q_i = n+1−i, and β-graded variants
- a Toeplitz example with q_i = 1
- Cholesky factor of the tridiagonal (1, 2, 1) matrix
- Cholesky factors of the Laguerre, Hermite recurrence, Wilkinson, and Clement matrices
- two matrices from electronic structure calculations
49 Numerical experiments
[performance plots]
50 Summary
- Perturbation theory can inspire algorithm design, and algorithm design inspires perturbation problems.
- Off-diagonal perturbation results in an O(ε^k) eigenvalue change.
- Matrix iterations can be understood using rational approximation theory.
Thesis posted at my website.
51 Backward stability proof of QDWH-eig
Goal: show ‖E‖_2 = ε‖A‖_2, where V^T A V = [A_+  E^T; E  A_−].
Assumptions: A = ÛĤ + ε‖A‖_2, ‖Û^T Û − I‖ = ε, and V [I 0; 0 −I] V^T = Û + ε.
By the assumptions,
A = V [I 0; 0 −I] V^T Ĥ + ε‖A‖_2,
so, using A = A^T,
0 = A − A^T = V ([I 0; 0 −I] V^T Ĥ V − V^T Ĥ^T V [I 0; 0 −I]) V^T + ε‖A‖_2.
Therefore
ε‖A‖_2 = [I 0; 0 −I] (V^T Ĥ V) − (V^T Ĥ^T V) [I 0; 0 −I] = 2 [0 E^T; E 0] (up to signs),
which gives ‖E‖_2 = ε‖A‖_2.
More informationProgram Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects
Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen
More informationKU Leuven Department of Computer Science
On Deflations in Extended QR Algorithms Thomas Mach Raf Vandebril Report TW 634, September 2013 KU Leuven Department of Computer Science Celestijnenlaan 200A B-3001 Heverlee (Belgium) On Deflations in
More informationS.F. Xu (Department of Mathematics, Peking University, Beijing)
Journal of Computational Mathematics, Vol.14, No.1, 1996, 23 31. A SMALLEST SINGULAR VALUE METHOD FOR SOLVING INVERSE EIGENVALUE PROBLEMS 1) S.F. Xu (Department of Mathematics, Peking University, Beijing)
More informationResearch Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics
Research Matters February 25, 2009 The Nonlinear Eigenvalue Problem Nick Higham Part III Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester
More informationSolving Symmetric Semi-definite (ill-conditioned) Generalized Eigenvalue Problems
Solving Symmetric Semi-definite (ill-conditioned) Generalized Eigenvalue Problems Zhaojun Bai University of California, Davis Berkeley, August 19, 2016 Symmetric definite generalized eigenvalue problem
More informationHomework 2 Foundations of Computational Math 2 Spring 2019
Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.
More informationSolution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method
Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts
More informationRe-design of Higher level Matrix Algorithms for Multicore and Heterogeneous Architectures. Based on the presentation at UC Berkeley, October 7, 2009
III.1 Re-design of Higher level Matrix Algorithms for Multicore and Heterogeneous Architectures Based on the presentation at UC Berkeley, October 7, 2009 Background and motivation Running time of an algorithm
More informationComputing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm
Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 13, 2009 Today
More informationthe Unitary Polar Factor æ Ren-Cang Li P.O. Box 2008, Bldg 6012
Relative Perturbation Bounds for the Unitary Polar actor Ren-Cang Li Mathematical Science Section Oak Ridge National Laboratory P.O. Box 2008, Bldg 602 Oak Ridge, TN 3783-6367 èli@msr.epm.ornl.govè LAPACK
More informationarxiv: v1 [cs.lg] 26 Jul 2017
Updating Singular Value Decomposition for Rank One Matrix Perturbation Ratnik Gandhi, Amoli Rajgor School of Engineering & Applied Science, Ahmedabad University, Ahmedabad-380009, India arxiv:70708369v
More informationLecture 8: Linear Algebra Background
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 8: Linear Algebra Background Lecturer: Shayan Oveis Gharan 2/1/2017 Scribe: Swati Padmanabhan Disclaimer: These notes have not been subjected
More informationEIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4
EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue
More informationSolution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method
Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts
More informationOrthogonal Transformations
Orthogonal Transformations Tom Lyche University of Oslo Norway Orthogonal Transformations p. 1/3 Applications of Qx with Q T Q = I 1. solving least squares problems (today) 2. solving linear equations
More informationReview of similarity transformation and Singular Value Decomposition
Review of similarity transformation and Singular Value Decomposition Nasser M Abbasi Applied Mathematics Department, California State University, Fullerton July 8 7 page compiled on June 9, 5 at 9:5pm
More informationAvoiding Communication in Distributed-Memory Tridiagonalization
Avoiding Communication in Distributed-Memory Tridiagonalization SIAM CSE 15 Nicholas Knight University of California, Berkeley March 14, 2015 Joint work with: Grey Ballard (SNL) James Demmel (UCB) Laura
More informationTotal least squares. Gérard MEURANT. October, 2008
Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares
More informationREORTHOGONALIZATION FOR GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART II SINGULAR VECTORS
REORTHOGONALIZATION FOR GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART II SINGULAR VECTORS JESSE L. BARLOW Department of Computer Science and Engineering, The Pennsylvania State University, University
More informationRecent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.
Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator
More informationLecture 2 Decompositions, perturbations
March 26, 2018 Lecture 2 Decompositions, perturbations A Triangular systems Exercise 2.1. Let L = (L ij ) be an n n lower triangular matrix (L ij = 0 if i > j). (a) Prove that L is non-singular if and
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.
More informationA Tour of the Lanczos Algorithm and its Convergence Guarantees through the Decades
A Tour of the Lanczos Algorithm and its Convergence Guarantees through the Decades Qiaochu Yuan Department of Mathematics UC Berkeley Joint work with Prof. Ming Gu, Bo Li April 17, 2018 Qiaochu Yuan Tour
More informationNick Higham. Director of Research School of Mathematics
Exploiting Research Tropical Matters Algebra in Numerical February 25, Linear 2009 Algebra Nick Higham Françoise Tisseur Director of Research School of Mathematics The School University of Mathematics
More informationIntroduction. Chapter One
Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and
More informationA CS decomposition for orthogonal matrices with application to eigenvalue computation
A CS decomposition for orthogonal matrices with application to eigenvalue computation D Calvetti L Reichel H Xu Abstract We show that a Schur form of a real orthogonal matrix can be obtained from a full
More informationIs there a Small Skew Cayley Transform with Zero Diagonal?
Is there a Small Skew Cayley Transform with Zero Diagonal? Abstract The eigenvectors of an Hermitian matrix H are the columns of some complex unitary matrix Q. For any diagonal unitary matrix Ω the columns
More informationLARGE SPARSE EIGENVALUE PROBLEMS. General Tools for Solving Large Eigen-Problems
LARGE SPARSE EIGENVALUE PROBLEMS Projection methods The subspace iteration Krylov subspace methods: Arnoldi and Lanczos Golub-Kahan-Lanczos bidiagonalization General Tools for Solving Large Eigen-Problems
More informationOff-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems
Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems Yuji Nakatsukasa Abstract When a symmetric block diagonal matrix [ A 1 A2 ] undergoes an
More informationThe Algorithm of Multiple Relatively Robust Representations for Multi-Core Processors
Aachen Institute for Advanced Study in Computational Engineering Science Preprint: AICES-2010/09-4 23/September/2010 The Algorithm of Multiple Relatively Robust Representations for Multi-Core Processors
More informationSparse BLAS-3 Reduction
Sparse BLAS-3 Reduction to Banded Upper Triangular (Spar3Bnd) Gary Howell, HPC/OIT NC State University gary howell@ncsu.edu Sparse BLAS-3 Reduction p.1/27 Acknowledgements James Demmel, Gene Golub, Franc
More informationIntel Math Kernel Library (Intel MKL) LAPACK
Intel Math Kernel Library (Intel MKL) LAPACK Linear equations Victor Kostin Intel MKL Dense Solvers team manager LAPACK http://www.netlib.org/lapack Systems of Linear Equations Linear Least Squares Eigenvalue
More informationSection 4.5 Eigenvalues of Symmetric Tridiagonal Matrices
Section 4.5 Eigenvalues of Symmetric Tridiagonal Matrices Key Terms Symmetric matrix Tridiagonal matrix Orthogonal matrix QR-factorization Rotation matrices (plane rotations) Eigenvalues We will now complete
More informationKEYWORDS. Numerical methods, generalized singular values, products of matrices, quotients of matrices. Introduction The two basic unitary decompositio
COMPUTING THE SVD OF A GENERAL MATRIX PRODUCT/QUOTIENT GENE GOLUB Computer Science Department Stanford University Stanford, CA USA golub@sccm.stanford.edu KNUT SLNA SC-CM Stanford University Stanford,
More informationLast Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection
Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue
More informationDominant feature extraction
Dominant feature extraction Francqui Lecture 7-5-200 Paul Van Dooren Université catholique de Louvain CESAME, Louvain-la-Neuve, Belgium Goal of this lecture Develop basic ideas for large scale dense matrices
More informationLARGE SPARSE EIGENVALUE PROBLEMS
LARGE SPARSE EIGENVALUE PROBLEMS Projection methods The subspace iteration Krylov subspace methods: Arnoldi and Lanczos Golub-Kahan-Lanczos bidiagonalization 14-1 General Tools for Solving Large Eigen-Problems
More informationSingular Value Decomposition
Singular Value Decomposition CS 205A: Mathematical Methods for Robotics, Vision, and Graphics Doug James (and Justin Solomon) CS 205A: Mathematical Methods Singular Value Decomposition 1 / 35 Understanding
More informationThe QR Decomposition
The QR Decomposition We have seen one major decomposition of a matrix which is A = LU (and its variants) or more generally PA = LU for a permutation matrix P. This was valid for a square matrix and aided
More informationThe Future of LAPACK and ScaLAPACK
The Future of LAPACK and ScaLAPACK Jason Riedy, Yozo Hida, James Demmel EECS Department University of California, Berkeley November 18, 2005 Outline Survey responses: What users want Improving LAPACK and
More informationComputing the common zeros of two bivariate functions via Bézout resultants Colorado State University, 26th September 2013
Work supported by supported by EPSRC grant EP/P505666/1. Computing the common zeros of two bivariate functions via Bézout resultants Colorado State University, 26th September 2013 Alex Townsend PhD student
More informationEigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis
Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply
More informationEstimating the Largest Elements of a Matrix
Estimating the Largest Elements of a Matrix Samuel Relton samuel.relton@manchester.ac.uk @sdrelton samrelton.com blog.samrelton.com Joint work with Nick Higham nick.higham@manchester.ac.uk May 12th, 2016
More informationA QR-decomposition of block tridiagonal matrices generated by the block Lanczos process
1 A QR-decomposition of block tridiagonal matrices generated by the block Lanczos process Thomas Schmelzer Martin H Gutknecht Oxford University Computing Laboratory Seminar for Applied Mathematics, ETH
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)
AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical
More informationMatrix Analysis and Algorithms
Matrix Analysis and Algorithms Andrew Stuart Jochen Voss 4th August 2009 2 Introduction The three basic problems we will address in this book are as follows. In all cases we are given as data a matrix
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More informationExploiting off-diagonal rank structures in the solution of linear matrix equations
Stefano Massei Exploiting off-diagonal rank structures in the solution of linear matrix equations Based on joint works with D. Kressner (EPFL), M. Mazza (IPP of Munich), D. Palitta (IDCTS of Magdeburg)
More informationLinear Algebra Methods for Data Mining
Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 The Singular Value Decomposition (SVD) continued Linear Algebra Methods for Data Mining, Spring 2007, University
More information