Orthogonal Eigenvectors and Gram-Schmidt
1 Orthogonal Eigenvectors and Gram-Schmidt. Inderjit S. Dhillon, The University of Texas at Austin; Beresford N. Parlett, The University of California at Berkeley. Joint GAMM-SIAM Conference on Applied Linear Algebra, University of Düsseldorf, Germany, July 25, 2006.
2 One of FOUR main papers on this work. Algorithm MR^3, or MRRR: acronym for Multiple Relatively Robust Representations. Accurate but turgid title; Jim Demmel has a more catchy title... Guiding principle: no Gram-Schmidt.
3-8 Excerpt from the Introduction. What this talk will not do: roundoff error analysis; theorems, proofs; performance numbers; stick closely to the paper.
9-12 Diagonal of the Inverse. Let J be a tridiagonal that is irreducible and invertible, not necessarily symmetric. Perform triangular factorization down and up (no pivoting): J = L_+ D_+ U_+ = U_- D_- L_-. The D_+ are the forward pivots while the D_- are the backward pivots. Is there a relation between D_+ and D_-? Beautiful identity: D_+ + D_- = diag(J) + diag(J^{-1})^{-1}.
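The identity is easy to check numerically. A minimal sketch in NumPy, assuming a small made-up unsymmetric tridiagonal (not a matrix from the talk); the forward and backward pivots are computed by ordinary Gaussian elimination from the top and from the bottom:

```python
import numpy as np

n = 4
# Hypothetical irreducible, invertible, unsymmetric tridiagonal J.
J = np.diag([4.0, 5.0, 6.0, 7.0]) \
  + np.diag([1.0, 2.0, 1.5], 1) \
  + np.diag([0.5, 1.0, 2.5], -1)

# Top-down elimination J = L_+ D_+ U_+: D_+ holds the forward pivots.
Dp = np.empty(n)
Dp[0] = J[0, 0]
for i in range(n - 1):
    Dp[i + 1] = J[i + 1, i + 1] - J[i + 1, i] * J[i, i + 1] / Dp[i]

# Bottom-up elimination J = U_- D_- L_-: D_- holds the backward pivots.
Dm = np.empty(n)
Dm[n - 1] = J[n - 1, n - 1]
for i in range(n - 2, -1, -1):
    Dm[i] = J[i, i] - J[i, i + 1] * J[i + 1, i] / Dm[i + 1]

# D_+ + D_- = diag(J) + diag(J^{-1})^{-1}, entry by entry.
lhs = Dp + Dm
rhs = np.diag(J) + 1.0 / np.diag(np.linalg.inv(J))
assert np.allclose(lhs, rhs)
```

Equivalently, γ(i) = D_+(i) + D_-(i) − J(i,i) equals 1/(J^{-1})(i,i), which is what the algorithm later exploits.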
13-17 FOUR New Ideas: 1. Replace the tridiagonal with a bidiagonal. 2. Shift close to clusters, but with differential transforms. 3. Twist, again with differential transforms. 4. Analyze with a Representation Tree.
18 Difficulties. All eigenvalues of T are easily computed in O(n^2) time. Given λ̂, inverse iteration computes the eigenvector: (T − λ̂I) x_{i+1} = x_i, i = 0, 1, 2, ... Costs O(n) per iteration; typically 1-3 iterations are enough. BUT inverse iteration only guarantees ‖T v̂ − λ̂ v̂‖ = O(ε ‖T‖).
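A minimal sketch of inverse iteration, assuming NumPy and a small made-up symmetric tridiagonal (in practice the shifted solve is done in O(n) with a tridiagonal factorization; a dense solve keeps the sketch short):

```python
import numpy as np

def inverse_iteration(T, lam_hat, iters=3):
    """Approximate the eigenvector of T for the approximate eigenvalue
    lam_hat by solving (T - lam_hat*I) x_{k+1} = x_k and normalizing."""
    n = T.shape[0]
    x = np.ones(n) / np.sqrt(n)          # generic starting vector
    M = T - lam_hat * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(M, x)
        x /= np.linalg.norm(x)
    return x

# Toy symmetric tridiagonal (hypothetical data, not the talk's matrix).
a = np.array([2.0, 3.0, 4.0, 5.0])
b = np.array([1.0, 1.0, 1.0])
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

lam = np.linalg.eigvalsh(T)[0]           # target the smallest eigenvalue
v = inverse_iteration(T, lam + 1e-8)     # slightly perturbed shift
residual = np.linalg.norm(T @ v - lam * v)
```

With a well-separated eigenvalue this converges in one or two iterations; the point of the slide is that the residual bound O(ε‖T‖) alone says nothing about orthogonality when eigenvalues are clustered.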
19 Fundamental Limitations. Gap Theorem: sin ∠(v, v̂) ≤ ‖T v̂ − λ̂ v̂‖ / Gap(λ̂). Gap(λ̂) can be small, e.g. for the tridiagonal with unit diagonal and tiny off-diagonals ε_1, ε_2, ε_3:

    [ 1   ε_1           ]
    [ ε_1  1   ε_2      ]
    [      ε_2  1   ε_3 ]
    [           ε_3  1  ]

When eigenvalues are close, independently computed eigenvectors WILL NOT be mutually orthogonal.
20 Example from Quantum Chemistry. Symmetric positive definite eigenproblem, n = 966. Occurs in Møller-Plesset theory in the modeling of biphenyl. [Plot: eigenvalue distribution versus eigenvalue index.] LAPACK clusters: eigenvalues numbered 1,...,939, and smaller ones.
21 Example from Quantum Chemistry. [Plot: Absgap(i) = log10(min(λ_{i+1} − λ_i, λ_i − λ_{i−1})/‖T‖) versus i.] [Plot: Relgap(i) = log10(min(λ_{i+1} − λ_i, λ_i − λ_{i−1})/|λ_i|) versus i.]
22 First Steps. Factor T = LDL^T. Compute eigenvalues of LDL^T by dqds or bisection. For each eigenvalue λ, compute its eigenvector by inverse iteration.
23 Computing Eigenvector #1. λ̂_1 = , λ̂_2 = , λ̂_3 = . Factor T_1 = LDL^T − λ̂_1 I = L_+ D_+ L_+^T = U_- D_- U_-^T (up & down). Compute γ(i) = D_+(i) + D_-(i) − T_1(i,i). [Plot: log10|γ(i)| versus i.] Solve T_1 z_1 = γ_r e_r, where |γ_r| = min_k |γ_k|. [Plot: z_1(i).]
24 Computing Eigenvector #2. Factor T_2 = LDL^T − λ̂_2 I = L_+ D_+ L_+^T = U_- D_- U_-^T (up & down). Compute γ(i) = D_+(i) + D_-(i) − T_2(i,i). [Plot: log10|γ(i)| versus i.] Solve T_2 z_2 = γ_r e_r, where |γ_r| = min_k |γ_k|. [Plot: z_2(i).]
25 Accuracy of the vectors. |z_1^T z_2| < 2ε. Residual norms are < ε‖T‖. In general, min_k |γ_k| ≈ ε|λ̂|, so the dot product between two such vectors is ≈ ε|λ̂|/Gap. If two eigenvalues share d leading digits, then the dot product ≈ 10^d ε.
26-28 Smaller Relative Gaps. λ̂_280 to λ̂_303 are clustered: some eigenvalues agree in 5 digits. Compute new shifted representations: LDL^T − σ_1 I = L_1 D_1 L_1^T, then L_1 D_1 L_1^T − σ_2 I = L_2 D_2 L_2^T.
29-31 The computed vectors. For each λ̂ in the cluster: [plot of log10|γ(i)| versus i; plot of z(i)]. Maximum dot product < 3ε.
32-35 Key Idea 1: Replace the tridiagonal with a bidiagonal. Bidiagonal factorization: T → LDL^T. Relative condition number: relcond(λ) = (|v|^T |L| |D| |L^T| |v|) / |v^T (LDL^T) v|. Seminal 1991 Demmel-Kahan paper (SIAG/LA Prize): small relative changes in the entries of a bidiagonal cause small relative changes in all its singular values. L and D almost always define the small eigenvalues of LDL^T to high relative accuracy, even in the face of element growth!
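This relative robustness is easy to observe experimentally. A hypothetical sketch, assuming NumPy: the factors d and l below are made up (and chosen positive definite for simplicity), with pivots spanning six orders of magnitude; relative perturbations of size 1e-6 in the factor entries move even the tiniest eigenvalue of LDL^T by only a small relative amount:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Hypothetical factorization T = L D L^T with widely varying pivots,
# so T has some very small eigenvalues.
d = np.array([1.0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-6])
l = rng.uniform(0.1, 1.0, n - 1)

def assemble(dd, ll):
    """Form L * diag(dd) * L^T with unit lower bidiagonal L."""
    L = np.eye(len(dd)) + np.diag(ll, -1)
    return L @ np.diag(dd) @ L.T

eps_rel = 1e-6
d_pert = d * (1 + eps_rel * rng.uniform(-1, 1, n))
l_pert = l * (1 + eps_rel * rng.uniform(-1, 1, n - 1))

w = np.linalg.eigvalsh(assemble(d, l))
w_pert = np.linalg.eigvalsh(assemble(d_pert, l_pert))

# Even the smallest eigenvalue moves only by O(n * eps_rel) relatively.
rel_change = np.abs(w_pert - w) / np.abs(w)
assert rel_change.max() < 1e-3
```

Had we perturbed the entries of the assembled tridiagonal T itself by the same relative amount, the small eigenvalues could change completely; it is the factored form that carries the high relative accuracy.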
36-40 Key Idea 2: Shift close to clusters. Connection between residuals and orthogonality: let r = Ax − xλ̂ ≠ 0, s = Ay − yμ̂ ≠ 0. Then x^T y (μ̂ − λ̂) = y^T r − x^T s. Suppose, by some miracle, ‖r‖ ≤ ε_1 |λ̂| and ‖s‖ ≤ ε_2 |μ̂|. Then |x^T y| ≤ max(ε_1, ε_2) · (|λ̂| + |μ̂|) / |λ̂ − μ̂|. Thus orthogonality depends on the relative separation. Relative gaps can be made larger by shifting: LDL^T − ξI = L̂ D̂ L̂^T. Different representations for different clusters! Essential to use differential transforms.
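For symmetric A the identity x^T y (μ̂ − λ̂) = y^T r − x^T s is exact algebra, since y^T A x = x^T A y. A quick NumPy check with made-up data (any vectors and shift values will do):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A = A + A.T                      # symmetry is what makes the identity exact

x = rng.standard_normal(5)       # arbitrary approximate "eigenvectors"
y = rng.standard_normal(5)
lam_hat, mu_hat = 1.3, 2.7       # arbitrary approximate "eigenvalues"

r = A @ x - x * lam_hat          # residuals
s = A @ y - y * mu_hat

lhs = (x @ y) * (mu_hat - lam_hat)
rhs = y @ r - x @ s
assert abs(lhs - rhs) < 1e-9
```

Dividing through by |μ̂ − λ̂| gives exactly the bound on the slide: small residuals relative to |λ̂|, |μ̂| force near-orthogonality only when the relative separation is decent.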
41 Differential Transforms. LDL^T − λ̂I = L_+ D_+ L_+^T.

Simple qd:
    D_+(1) := d_1 − λ̂
    for i = 1, n−1
        L_+(i) := (d_i l_i) / D_+(i)
        D_+(i+1) := d_i l_i^2 + d_{i+1} − L_+(i) d_i l_i − λ̂
    end for

Differential qd:
    s_1 := −λ̂
    for i = 1, n−1
        D_+(i) := s_i + d_i
        L_+(i) := (d_i l_i) / D_+(i)
        s_{i+1} := L_+(i) l_i s_i − λ̂
    end for
    D_+(n) := s_n + d_n
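The two loops above compute the same factorization in exact arithmetic; the differential form carries the shift through the auxiliary quantity s_i, which is what gives it its favorable mixed relative stability. A direct transcription of the differential loop into Python, checked against the assembled matrices on a made-up example:

```python
import numpy as np

def dstqds(d, l, lam):
    """Differential stationary qd: given LDL^T via its diagonal d and
    subdiagonal l of unit bidiagonal L, compute L_+, D_+ with
    LDL^T - lam*I = L_+ D_+ L_+^T."""
    n = len(d)
    Dp = np.empty(n)
    Lp = np.empty(n - 1)
    s = -lam
    for i in range(n - 1):
        Dp[i] = s + d[i]
        Lp[i] = d[i] * l[i] / Dp[i]
        s = Lp[i] * l[i] * s - lam
    Dp[n - 1] = s + d[n - 1]
    return Lp, Dp

def bidiag(dd, ll):
    """Assemble L * diag(dd) * L^T with unit lower bidiagonal L."""
    L = np.eye(len(dd)) + np.diag(ll, -1)
    return L @ np.diag(dd) @ L.T

# Hypothetical factors and shift, not data from the talk.
d = np.array([3.0, 2.0, 4.0, 1.5])
l = np.array([0.5, -0.3, 0.8])
lam = 0.7

Lp, Dp = dstqds(d, l, lam)
assert np.allclose(bidiag(Dp, Lp), bidiag(d, l) - lam * np.eye(4))
```

The crucial point (not visible in so small an example) is that each output of the differential loop is an exact result for tiny relative perturbations of the inputs, which preserves the relative accuracy of the representation across shifts.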
42 Key Idea 3: Twist, again with differential transformations. Godunov et al. [1985], Fernando [1995]. Compute the appropriate twisted factorization: LDL^T − λ̂I = N_r D_r N_r^T, where N_r is unit lower bidiagonal in its first r columns and unit upper bidiagonal in its last n − r columns (the "twist" is at index r). Solve for z, N_r D_r N_r^T z = γ_r e_r (equivalently N_r^T z = e_r):
    z(i) = 1,                  i = r,
    z(i) = −L_+(i) z(i+1),     i = r−1, ..., 1,
    z(i) = −U_-(i−1) z(i−1),   i = r+1, ..., n.
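A sketch of Fernando's scheme in NumPy, using plain (non-differential) factorizations for brevity: factor T − λ̂I top-down and bottom-up, twist at the index r where |γ| is smallest, then run the back-substitution above. The test matrix is the 1-D Laplacian, whose eigenvalues 2 − 2 cos(kπ/(n+1)) are known in closed form:

```python
import math
import numpy as np

def twisted_eigvec(a, b, lam):
    """Eigenvector of the symmetric tridiagonal with diagonal a and
    off-diagonal b for eigenvalue lam, via a twisted factorization."""
    n = len(a)
    Dp = np.empty(n)
    Dm = np.empty(n)
    Dp[0] = a[0] - lam
    for i in range(n - 1):                      # forward pivots
        Dp[i + 1] = a[i + 1] - lam - b[i] ** 2 / Dp[i]
    Dm[n - 1] = a[n - 1] - lam
    for i in range(n - 2, -1, -1):              # backward pivots
        Dm[i] = a[i] - lam - b[i] ** 2 / Dm[i + 1]
    gamma = Dp + Dm - (a - lam)                 # gamma(i) = 1/((T-lam I)^{-1})_ii
    r = int(np.argmin(np.abs(gamma)))           # twist where |gamma| is least
    z = np.empty(n)
    z[r] = 1.0
    for i in range(r - 1, -1, -1):              # z(i) = -L_+(i) z(i+1)
        z[i] = -(b[i] / Dp[i]) * z[i + 1]
    for i in range(r + 1, n):                   # z(i) = -U_-(i-1) z(i-1)
        z[i] = -(b[i - 1] / Dm[i]) * z[i - 1]
    return z / np.linalg.norm(z)

n = 8
a = np.full(n, 2.0)
b = np.full(n - 1, -1.0)
lam = 2.0 - 2.0 * math.cos(math.pi / (n + 1))   # smallest eigenvalue, exact
z = twisted_eigvec(a, b, lam)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
residual = np.linalg.norm(T @ z - lam * z)
```

No linear system is solved and no Gram-Schmidt is performed: the eigenvector comes out of two bidiagonal sweeps and a multiplication-only back-substitution.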
43-45 Key Idea 4: Representation Tree. Eigenvalues: ε, 1 + ε, 1 + 2ε, 2. Extra representation needed at σ = 1: L_p D_p L_p^T − I = L_0 D_0 L_0^T. The following representation tree captures the steps of the algorithm:

    ({L_p, D_p}, {1,2,3,4})
    ├── ({N_1^{(1)}, 1}, {1})
    ├── ({L_0, D_0}, {2,3})
    │   ├── ({N_2^{(4)}, 2}, {2})
    │   └── ({N_3^{(4)}, 3}, {3})
    └── ({N_4^{(3)}, 4}, {4})
46 Caveats. Complexity is O(nk) to compute k eigenpairs; however, all operations are BLAS 1. Closest competitor, divide & conquer: O(n^3), but BLAS 3. Very tight eigenvalue clusters: C. Vömel's torture tests.
47-50 Wilkinson's Matrix W_21^+. λ_20 and λ_21 are identical to working precision. Form new representation: L_p D_p L_p^T − λ̂_21 I = L_0 D_0 L_0^T. Roundoff to the rescue: λ_20(L_0 D_0 L_0^T) and λ_21(L_0 D_0 L_0^T) have no digits in common! Computed eigenvectors v̂_20 and v̂_21 (inner product is ).
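A quick NumPy check (not in the slides) of the claim about W_21^+, whose diagonal is 10, 9, ..., 1, 0, 1, ..., 9, 10 with unit off-diagonals: the two largest eigenvalues agree to nearly working precision.

```python
import numpy as np

# Wilkinson's matrix W_21^+.
n = 21
a = np.abs(np.arange(n) - 10).astype(float)   # 10, 9, ..., 0, ..., 9, 10
W = np.diag(a) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

w = np.linalg.eigvalsh(W)                     # ascending order
lam20, lam21 = w[-2], w[-1]
gap = lam21 - lam20                           # pathologically tiny gap
```

The gap is far below any plausible inverse-iteration residual, yet, as the slides explain, a shifted representation separates the pair in a relative sense and MR^3 delivers orthogonal eigenvectors without Gram-Schmidt.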
51 Caveats (continued). Very tight eigenvalue clusters: 5 copies of W_21^+ with glue of ε required a tweak: perturb the base representation.
More informationQR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR
QR-decomposition The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which In Matlab A = QR [Q,R]=qr(A); Note. The QR-decomposition is unique
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationav 1 x 2 + 4y 2 + xy + 4z 2 = 16.
74 85 Eigenanalysis The subject of eigenanalysis seeks to find a coordinate system, in which the solution to an applied problem has a simple expression Therefore, eigenanalysis might be called the method
More informationMatrix Multiplication Chapter IV Special Linear Systems
Matrix Multiplication Chapter IV Special Linear Systems By Gokturk Poyrazoglu The State University of New York at Buffalo BEST Group Winter Lecture Series Outline 1. Diagonal Dominance and Symmetry a.
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More informationScientific Computing: Solving Linear Systems
Scientific Computing: Solving Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 September 17th and 24th, 2015 A. Donev (Courant
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Power iteration Notes for 2016-10-17 In most introductory linear algebra classes, one computes eigenvalues as roots of a characteristic polynomial. For most problems, this is a bad idea: the roots of
More informationOutline. Math Numerical Analysis. Errors. Lecture Notes Linear Algebra: Part B. Joseph M. Mahaffy,
Math 54 - Numerical Analysis Lecture Notes Linear Algebra: Part B Outline Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences
More informationANSWERS. E k E 2 E 1 A = B
MATH 7- Final Exam Spring ANSWERS Essay Questions points Define an Elementary Matrix Display the fundamental matrix multiply equation which summarizes a sequence of swap, combination and multiply operations,
More informationComputing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm
Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 13, 2009 Today
More informationlecture 2 and 3: algorithms for linear algebra
lecture 2 and 3: algorithms for linear algebra STAT 545: Introduction to computational statistics Vinayak Rao Department of Statistics, Purdue University August 27, 2018 Solving a system of linear equations
More informationEigenvalues and Eigenvectors
5 Eigenvalues and Eigenvectors 5.2 THE CHARACTERISTIC EQUATION DETERMINANATS n n Let A be an matrix, let U be any echelon form obtained from A by row replacements and row interchanges (without scaling),
More information