An Explanation of the MRRR algorithm to compute eigenvectors of symmetric tridiagonal matrices


1 An Explanation of the MRRR algorithm to compute eigenvectors of symmetric tridiagonal matrices. Beresford N. Parlett, Department of Mathematics and Computer Science Division (EECS Department), University of California, Berkeley. Inderjit S. Dhillon, Department of Computer Sciences, University of Texas, Austin.

2 Difficulties. All eigenvalues of T are easily computed in O(n^2) time. Given a computed eigenvalue λ̂, inverse iteration computes the eigenvector:

    (T - λ̂I) x_{i+1} = x_i,   i = 0, 1, 2, ...

Each iteration costs O(n); typically 1-3 iterations are enough. BUT, inverse iteration only guarantees ‖T v̂ - λ̂ v̂‖ = O(ε ‖T‖).
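To make the cost claim concrete, here is a minimal sketch of inverse iteration for a symmetric tridiagonal T, assuming numpy/scipy; the function name, the banded storage layout, and the random starting vector are illustrative choices of mine, not part of the slides.

import numpy as np
from scipy.linalg import solve_banded

def inverse_iteration(diag, offdiag, lam_hat, iters=3):
    # Approximate an eigenvector of the symmetric tridiagonal T
    # (main diagonal `diag`, off-diagonal `offdiag`) for the computed
    # eigenvalue lam_hat.  Each banded solve costs O(n).
    n = len(diag)
    ab = np.zeros((3, n))              # banded storage: super, main, sub
    ab[0, 1:] = offdiag
    ab[1, :] = diag - lam_hat          # nearly singular by design
    ab[2, :-1] = offdiag
    x = np.random.default_rng(0).standard_normal(n)
    for _ in range(iters):
        x = solve_banded((1, 1), ab, x)
        x /= np.linalg.norm(x)
    return x

The result satisfies ‖T v̂ - λ̂ v̂‖ = O(ε ‖T‖), which is precisely the limitation quantified on the next slide.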

3 Gap Theorem: Fundamental Limitations.

    sin ∠(v, v̂) ≤ ‖T v̂ - λ̂ v̂‖ / Gap(λ̂).

Assume all off-diagonals are not negligible, so Gap(λ̂) is not zero, but it can be small, e.g. the tridiagonal matrix with unit diagonal and tiny off-diagonals ε_1, ε_2, ε_3:

    [ 1    ε_1             ]
    [ ε_1  1    ε_2        ]
    [      ε_2  1    ε_3   ]
    [           ε_3  1     ]

When eigenvalues are close, independently computed eigenvectors WILL NOT be mutually orthogonal.
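A few lines of numpy illustrate how small Gap(λ̂) becomes for this matrix; the ε values below are my own illustrative choices.

import numpy as np

eps = [1e-8, 2e-8, 3e-8]                            # hypothetical eps_1..eps_3
T = np.eye(4) + np.diag(eps, 1) + np.diag(eps, -1)
lam = np.linalg.eigvalsh(T)
print(np.diff(lam))   # all gaps are of order 1e-8, so the Gap Theorem
                      # bound says nothing useful about the angles between
                      # independently computed eigenvectors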

4 Eigenvalues of the Biphenyl Matrix. [Plot: eigenvalues λ_i against eigenvalue index i.] [Plot: absolute eigenvalue gaps, Absgap(i) = log_10(min(λ_{i+1} - λ_i, λ_i - λ_{i-1}) / ‖T‖), against eigenvalue index.] By LAPACK's clustering criterion for inverse iteration, λ_1, λ_2, ..., λ_939 form one big cluster, and the tridiagonal solution takes 80% of the total time.

5 Ideal Solution. Fundamental limitation:

    sin ∠(v, v̂) ≤ ‖T v̂ - λ̂ v̂‖ / Gap(λ̂).

So get the smallest possible residual norm:
- compute the eigenvalue to the greatest accuracy possible: |λ̂ - λ| = O(ε |λ̂|);
- compute the eigenvector to high relative accuracy: ‖T v̂ - λ̂ v̂‖ = O(ε |λ̂|).
The Gap Theorem then implies:

    sin ∠(v, v̂) = O(ε |λ|) / Gap(λ̂) = O(ε) / Relgap(λ̂).

Can we achieve the above?

6 Key Idea 1. Discard Tridiagonals, Embrace Bidiagonals

7 Factored Forms yield Better Representations. Tridiagonals DO NOT determine their eigenvalues to high relative accuracy; bidiagonals DO determine their singular values to high relative accuracy. So factor the shifted tridiagonal as a product of bidiagonals:

    T + µI = L L^T,   L lower bidiagonal.

Bidiagonal factors are better since they allow us to compute eigenvalues to high relative accuracy and to compute eigenvectors to high relative accuracy. High accuracy ⇒ orthogonality. For interior eigenvalues this extends to the indefinite factorization LDL^T.
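As a concrete sketch of the representation (assuming numpy; `a` and `b` hold T's diagonal and off-diagonal), the shifted factorization T + µI = LDL^T costs O(n):

import numpy as np

def ldlt(a, b, mu=0.0):
    # Factor T + mu*I = L D L^T with L unit lower bidiagonal
    # (subdiagonal `l`) and D = diag(d).  No pivoting: this sketch
    # assumes every pivot d[i] is nonzero.
    n = len(a)
    d = np.empty(n)
    l = np.empty(n - 1)
    d[0] = a[0] + mu
    for i in range(n - 1):
        l[i] = b[i] / d[i]
        d[i + 1] = (a[i + 1] + mu) - l[i] * b[i]   # = a[i+1]+mu - l[i]^2*d[i]
    return l, d

From here on the pair (l, d) is the representation that gets carried forward; the tridiagonal itself is discarded.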

8 Algorithm Outline.
1. Choose µ such that T + µI is positive definite.
2. Compute the factorization T + µI = LDL^T.
3. Compute the eigenvalues of LDL^T to high relative accuracy (by dqds or bisection).
4. Given the eigenvalues, compute accurate eigenvectors of LDL^T. HOW?
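For step 3, here is a hedged textbook sketch of bisection via Sturm counts on the tridiagonal itself (MRRR performs such counts on the factored form to preserve relative accuracy, but the mechanism is the same; the function names are mine):

def negcount(a, b, sigma):
    # Number of eigenvalues of tridiag(a, b) below sigma = number of
    # negative pivots of T - sigma*I.  Zero pivots are glossed over;
    # robust codes perturb sigma slightly instead.
    count, d = 0, a[0] - sigma
    count += d < 0
    for i in range(1, len(a)):
        d = (a[i] - sigma) - b[i - 1] ** 2 / d
        count += d < 0
    return count

def bisect_eig(a, b, k, lo, hi, tol=1e-15):
    # k-th smallest eigenvalue (0-based), assuming it lies in [lo, hi].
    while hi - lo > tol * max(1.0, abs(lo), abs(hi)):
        mid = 0.5 * (lo + hi)
        if negcount(a, b, mid) > k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)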

9 Key Idea 2. Shift with Differential QD. How do we get an eigenvector such that ‖T v̂ - λ̂ v̂‖ = O(ε |λ̂|)?

10 Differential Transformations. Inverse iteration: factor LDL^T - λ̂I = L_+ D_+ L_+^T and solve L_+ D_+ L_+^T z = random vector.

Simple qd:
    D_+(1) := d_1 - λ̂
    for i = 1, n-1
        L_+(i) := d_i l_i / D_+(i)
        D_+(i+1) := d_i l_i^2 + d_{i+1} - L_+(i) d_i l_i - λ̂
    end for

Differential qd (dqds):
    s_1 := -λ̂
    for i = 1, n-1
        D_+(i) := s_i + d_i
        L_+(i) := d_i l_i / D_+(i)
        s_{i+1} := L_+(i) l_i s_i - λ̂
    end for
    D_+(n) := s_n + d_n
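The differential loop translates directly into code; a minimal sketch assuming numpy, with `l` and `d` holding the subdiagonal of L and the diagonal of D:

import numpy as np

def dstqds(l, d, lam):
    # Differential form of the stationary shift:
    # compute L+ D+ L+^T = L D L^T - lam*I.
    n = len(d)
    dp = np.empty(n)         # diagonal of D+
    lp = np.empty(n - 1)     # subdiagonal of L+
    s = -lam
    for i in range(n - 1):
        dp[i] = s + d[i]
        lp[i] = d[i] * l[i] / dp[i]
        s = lp[i] * l[i] * s - lam
    dp[-1] = s + d[-1]
    return lp, dp

Unlike the simple qd transform, the differential form admits a mixed relative error analysis: tiny componentwise perturbations of (l, d) and of (lp, dp) make the computed transform exact, which is what the proof on slide 13 exploits.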

11 Computing an Eigenvector. Compute the appropriate twisted factorization

    T - λ̂I = N_r D_r N_r^T,

where D_r is diagonal with r-th entry γ_r, and N_r is a twisted bidiagonal factor: unit diagonal, agreeing with L_+ above the twist position r and with U_- below it. The twist index r is chosen to minimize |γ_r| (it will be O(|λ - λ̂|)). Solve for z:

    N_r D_r N_r^T z = γ_r e_r   (equivalently N_r^T z = e_r):

    z(r) = 1,
    z(i) = -L_+(i) z(i+1),    i = r-1, ..., 1,
    z(i) = -U_-(i-1) z(i-1),  i = r+1, ..., n.

This solves an open problem posed by Wilkinson (1965).
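Combining a forward sweep (dstqds, as above) with the analogous backward sweep yields all the candidate pivots at once; one convenient identity (not spelled out on the slide, but it follows from the two differential transforms) is γ_k = s_k + p_k + λ̂. A sketch assuming numpy, with variable names of my own:

import numpy as np

def twisted_eigvec(l, d, lam):
    n = len(d)
    # forward sweep: subdiagonal of L+ and auxiliary s, with s_1 = -lam
    lp, s = np.empty(n - 1), np.empty(n)
    s[0] = -lam
    for i in range(n - 1):
        lp[i] = d[i] * l[i] / (s[i] + d[i])
        s[i + 1] = lp[i] * l[i] * s[i] - lam
    # backward sweep: superdiagonal of U- and auxiliary p, with p_n = d_n - lam
    um, p = np.empty(n - 1), np.empty(n)
    p[-1] = d[-1] - lam
    for i in range(n - 2, -1, -1):
        t = d[i] / (d[i] * l[i] ** 2 + p[i + 1])
        um[i] = l[i] * t
        p[i] = p[i + 1] * t - lam
    gamma = s + p + lam                 # gamma_k for every twist position k
    r = int(np.argmin(np.abs(gamma)))   # choose the twist minimizing |gamma_r|
    z = np.empty(n)
    z[r] = 1.0
    for i in range(r - 1, -1, -1):      # N_r^T z = e_r, solved above the twist
        z[i] = -lp[i] * z[i + 1]
    for i in range(r + 1, n):           # ... and below the twist
        z[i] = -um[i - 1] * z[i - 1]
    return z / np.linalg.norm(z)

Note that only multiplications reach z itself, a fact used in the error analysis two slides below.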

12 Main Theorem. THEOREM [Dhillon & Parlett, 2003]. Eigenvectors computed by twisted factorizations are numerically orthogonal if the eigenvalues of LDL^T have large relative gaps. In particular,

    ∠(v̂_i, v̂_j) = O(ε) / Relsep(λ_i, λ_j),   where Relsep(λ_i, λ_j) = |λ_i - λ_j| / max(|λ_i|, |λ_j|).

Example of large Relsep: λ_1 = 10^-16 together with a λ_2 of much larger magnitude gives Relsep(λ_1, λ_2) ≈ 1, so the theorem above gives automatic orthogonality. Example of small Relsep: two eigenvalues that agree in most of their leading digits.
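Relsep is one line of code; the numeric values below are my own illustrative stand-ins for the slide's examples.

def relsep(li, lj):
    return abs(li - lj) / max(abs(li), abs(lj))

print(relsep(1e-16, 1e-15))        # ~0.9: large relative gap despite an
                                   # absolute gap of only 9e-16
print(relsep(1.0, 1.0 + 1e-12))    # ~1e-12: tiny relative gap, so the
                                   # theorem promises nothing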

13 Proof of Correctness. Desired relationship: LDL^T - λ̂I = N_r D_r N_r^T, with N_r D_r N_r^T z = γ_r e_r. In floating point we compute L, D, then N̂_r, D̂_r, then ẑ. Small componentwise perturbations (3 ulps in L, 3 ulps in D, 4 ulps in Ñ_r, 2 ulps in D̃_r) yield nearby exact quantities L̃, D̃, Ñ_r, D̃_r, z̃ for which the exact mathematical relationship holds:

    L̃ D̃ L̃^T - λ̂I = Ñ_r D̃_r Ñ_r^T,   Ñ_r D̃_r Ñ_r^T z̃ = γ̃_r e_r.

The key step in the proof is to relate ẑ to v, in 3 steps (v̄ denotes the true eigenvector of L̃ D̃ L̃^T):
1. ẑ is close to z̃ (the back-substitution involves only multiplications);
2. sin ∠(v̄, z̃) = O(ε |λ|) / gap(λ̂), since γ̃_r = O(ε |λ|);
3. sin ∠(v̄, v) = O(ε) / relgap(λ̂) (relative perturbation theory).
Together: sin ∠(ẑ, v) = O(ε) / Relgap(λ̂).

14 Key Idea 3. Shift for Separation, again differentially

15 Algorithm MR³ (Multiple RRRs).
1. Choose µ such that T + µI is positive definite.
2. Compute the factorization T + µI = LDL^T.
3. Compute the eigenvalues of LDL^T to high relative accuracy (by dqds or bisection).
4. Group the eigenvalues according to their relative gaps (a sketch in code follows this outline):
   a) Isolated (agree in < 3 digits): compute the eigenvector using a twisted factorization.
   b) Clustered (agree in > 3 digits): pick a shift µ near the cluster and form LDL^T - µI = L_1 D_1 L_1^T (by dqds). Refine the eigenvalues in the cluster to high relative accuracy, set L ← L_1, D ← D_1, and repeat step 4 for the eigenvalues in the cluster.
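A compact sketch of the recursion in step 4, reusing the dstqds and twisted_eigvec sketches from earlier slides; the grouping rule, the choice of shift at one end of the cluster, and the omission of the eigenvalue-refinement step are all simplifications of mine.

def mr3(l, d, lams, tol=1e-3):
    # lams: dict mapping eigenvalue index -> eigenvalue of L D L^T.
    # Returns a dict mapping index -> computed eigenvector.
    vecs = {}
    items = sorted(lams.items(), key=lambda kv: kv[1])
    groups, cur = [], [items[0]]
    for prev, nxt in zip(items, items[1:]):    # split at large relative gaps
        rg = abs(nxt[1] - prev[1]) / max(abs(nxt[1]), abs(prev[1]), 1e-300)
        if rg < tol:
            cur.append(nxt)
        else:
            groups.append(cur)
            cur = [nxt]
    groups.append(cur)
    for g in groups:
        if len(g) == 1:                        # isolated eigenvalue
            idx, lam = g[0]
            vecs[idx] = twisted_eigvec(l, d, lam)
        else:                                  # cluster: shift and recurse
            mu = g[0][1]                       # shift near one end of cluster
            l1, d1 = dstqds(l, d, mu)          # L1 D1 L1^T = L D L^T - mu*I
            # A real code now refines the shifted eigenvalues to high
            # relative accuracy (bisection on L1 D1 L1^T); omitted here.
            vecs.update(mr3(l1, d1, {i: v - mu for i, v in g}, tol))
    return vecs

After the shift, eigenvalues inside the cluster acquire large relative gaps, so the recursion terminates.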

16 Key Idea 4. Analyze the Representation Tree. Step 4 of the MR³ algorithm may be represented as a tree: at the root is the original factorization; at each internal node is a factorization for another shift µ; each child of the node for µ corresponds to an isolated eigenvalue or a cluster.

17 Small Example. Eigenvalues: ε, 1 + ε, 1 + 2ε, 2. An extra representation is needed at σ = 1: LDL^T - I = L_1 D_1 L_1^T. The following representation tree captures the steps of the algorithm:

    ({L, D}, {1, 2, 3, 4})
    |-- at ε:  ({N_1^(1), 1}, {1})
    |-- at 1:  ({L_1, D_1}, {2, 3})
    |     |-- at ε:  ({N_2^(4), 2}, {2})
    |     `-- at 2ε: ({N_3^(4), 3}, {3})
    `-- at 2:  ({N_4^(3), 4}, {4})

18 Wilkinson's Matrix W_21^+. λ_20 and λ_21 of Wilkinson's matrix are identical to working precision. What happens in this case? Shift at the cluster: LDL^T - λ̂_21 I = L_1 D_1 L_1^T. Then λ_20(L_1 D_1 L_1^T) and λ_21(L_1 D_1 L_1^T) have no digits in common, and the computed v̂_20 and v̂_21 come out orthogonal to working accuracy. [Plot: the computed eigenvectors v̂_20 and v̂_21.]
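W_21^+ is easy to build and check, assuming numpy/scipy:

import numpy as np
from scipy.linalg import eigh_tridiagonal

diag = np.abs(np.arange(-10, 11)).astype(float)   # 10, 9, ..., 1, 0, 1, ..., 10
off = np.ones(20)                                 # unit off-diagonals
w = eigh_tridiagonal(diag, off, eigvals_only=True)
print(w[-1] - w[-2])   # the two largest eigenvalues agree to roughly
                       # working precision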

19 Worst Case / Large Depth. A 13 x 13 matrix with eigenvalues 0, 1, 1 ± 10^-15, 1 ± 10^-12, 1 ± 10^-9, 1 ± 10^-6, 1 ± 10^-3, 2 forces a deep representation tree:

    ({L, D}, {1, ..., 13})
    |-- ({N_1, 1}, {1})
    |-- ({L_1, D_1}, {2, ..., 12})
    |     |-- ({N_2, 2}, {2})
    |     |-- ({L_2, D_2}, {3, ..., 11})
    |     |     |-- ({N_3, 3}, {3})
    |     |     |-- ({L_3, D_3}, {4, ..., 10})
    |     |     |     |-- ({N_4, 4}, {4})
    |     |     |     |-- ({L_4, D_4}, {5, ..., 9})
    |     |     |     |     |-- ({N_5, 5}, {5})
    |     |     |     |     |-- ({L_5, D_5}, {6, 7, 8})
    |     |     |     |     |     |-- ({N_6, 6}, {6})
    |     |     |     |     |     |-- ({N_7, 7}, {7})
    |     |     |     |     |     `-- ({N_8, 8}, {8})
    |     |     |     |     `-- ({N_9, 9}, {9})
    |     |     |     `-- ({N_10, 10}, {10})
    |     |     `-- ({N_11, 11}, {11})
    |     `-- ({N_12, 12}, {12})
    `-- ({N_13, 13}, {13})

20 Performance of MRRR on the Biphenyl Matrix. For the biphenyl matrix (n = 966), the root node had 805 leaf children and 63 internal-node children; all nodes at the next level were leaf nodes. Of the 63 clusters, 49 had 2 eigenvalues, 13 had 3-8 eigenvalues, and one had 9 eigenvalues.

21 Residual Norms for the Computed z - Part 1. Paper 1 guarantees small residuals at the bottom of the representation tree, i.e. for a leaf and its parent:

    ‖(leaf - δλ I) z‖ = ‖γ‖ = O(ε δλ).

Our problem: at the root, do we only get

    ‖(root - λ I) z‖ = ‖γ‖ = O(ε spdiam(root)) ???

In exact arithmetic, the root residual is also O(ε δλ).

22 Residual Norms for the Computed z - Part 2. Compare the child residual with the parent residual at each internal node on the path from leaf to root. Let the parent representation T_p hold the shifted eigenvalue λ + τ and the child T_c = L_c D_c L_c^T hold λ, so that for the computed z:

    r_c = (T_c - λI) z,   r_p = (T_p - (λ + τ)I) z.

By design, r_p = r_c in exact arithmetic. In floating point, small componentwise perturbations give exact counterparts:

    r̃_p = r_p + δT_p z,   r̃_c = r_c + δT_c z,   T̃_p = T_p + δT_p,   T̃_c = T_c + δT_c,

where each representation satisfies T + δT = (L + δL)(D + δD)(L + δL)^T for T = LDL^T.

23 Technical Lemma. If

    ‖D z‖ ≤ c spdiam(T_0)  and  ‖L̄ D L̄^T z‖ ≤ c spdiam(T_0),  where L̄ = L - I,

then

    (*)  ‖δT z‖ ≤ (2c)(9ε) spdiam(T_0) + O(nε^2).

NOTE: large values in D and LDL^T can be neutralized by small entries in z. (*) gives a bound on the increase in the residual norm at each internal node on the path from leaf to root.

24 Orthogonality. At a node with representation LDL^T and eigenvalue set Γ, the indices j and k lie in different child groups: j ∈ Γ_α, k ∈ Γ_β, with Γ_α, Γ_β ⊆ Γ and Γ_α ∩ Γ_β = ∅. By Paper 1, eigenvectors with the same parent are orthogonal to working accuracy. Our problem: is z_j^T z_k = O(nε) ???? Let

    S_Γ = S_Γ^{LDL^T} = the subspace invariant under LDL^T for the eigenvalues in Γ.

25 Two Angles. Define

    Ψ_{k,Γ} := ∠(z_k, S_Γ),
    Φ_{Γ_α} := ∠(S_{Γ_α}^{parent}, S_{Γ_α}^{child}),   S_{Γ_α}^{parent} ⊆ S_Γ^{parent}.

Lemma 1. sin Ψ_{j,Γ} ≤ sin Ψ_{j,Γ_α} + sin Φ_{Γ_α}.
Lemma 2. sin Φ_{Γ_α} ≤ Rnε, where R depends on the tolerance for relgaps.

26 Theorem. Let (LDL^T, Γ) be the least common ancestor of z_j and z_k, j ≠ k. If all internal nodes on the paths from leaves <j> and <k> to Γ (in the representation tree) are RRRs, then

    |cos ∠(z_j, z_k)| ≤ 2 leafbound + {depth(Γ, j) + depth(Γ, k) - 2} Rnε.
