Accurate eigenvalue decomposition of arrowhead matrices and applications


Accurate eigenvalue decomposition of arrowhead matrices and applications
Nevena Jakovčević Stor, FESB, University of Split
Joint work with Ivan Slapničar and Jesse Barlow
IWASEP9, June 4th

Introduction

We present a new algorithm (aheig) for computing the eigenvalue decomposition of a real symmetric arrowhead matrix.

Under certain conditions, the aheig algorithm computes all eigenvalues and all components of the corresponding eigenvectors with high relative accuracy in O(n^2) operations.

The algorithm is based on a shift-and-invert technique, with limited, O(n), use of double of the standard precision when necessary.

Orthogonality of the eigenvectors computed by aheig is a consequence of their accuracy (there is no need for follow-up orthogonalization).

Each eigenvalue and the corresponding eigenvector can be computed independently, which makes the algorithm adaptable for parallel computing.

We also present applications of the aheig algorithm to Hermitian arrowhead matrices, symmetric tridiagonal matrices, and diagonal-plus-rank-one matrices.

Outline

1 Introduction
2 The idea of the aheig algorithm
3 Example
4 Accuracy of the aheig algorithm
5 Application to Hermitian arrowhead matrices
6 Application to tridiagonal symmetric matrices
7 Application to diagonal + rank-one matrices (D + uu^T)

Introduction

Let
$$A = \begin{bmatrix} D & z \\ z^T & \alpha \end{bmatrix}$$
be an $n \times n$ real symmetric arrowhead matrix, where $D = \operatorname{diag}(d_1, d_2, \ldots, d_{n-1})$, $z = \begin{bmatrix} \zeta_1 & \zeta_2 & \cdots & \zeta_{n-1} \end{bmatrix}^T$ and $\alpha \in \mathbb{R}$, and let
$$A = V \Lambda V^T$$
be the eigenvalue decomposition of A, where $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$ and $V = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix}$.

Introduction

We assume that A is irreducible: $\zeta_i \neq 0$ for all $i$, and $d_i \neq d_j$ for $i \neq j$. Without loss of generality we can assume that $\zeta_i > 0$ for all $i$ and that the diagonal elements of D are ordered, $d_1 > d_2 > \cdots > d_{n-1}$.

The assumptions on D imply the interlacing property
$$\lambda_1 > d_1 > \lambda_2 > d_2 > \cdots > d_{n-2} > \lambda_{n-1} > d_{n-1} > \lambda_n,$$
where $\lambda_i$, $i = 1, \ldots, n$, are the eigenvalues of the matrix A.

Introduction

The eigenvalues of A are the zeros of the function
$$\varphi_A(\lambda) = \alpha - \lambda - \sum_{i=1}^{n-1} \frac{\zeta_i^2}{d_i - \lambda} = \alpha - \lambda - z^T (D - \lambda I)^{-1} z,$$
and the eigenvectors are given by
$$v_i = \frac{x_i}{\|x_i\|_2}, \qquad x_i = \begin{bmatrix} (D - \lambda_i I)^{-1} z \\ -1 \end{bmatrix}, \qquad i = 1, \ldots, n.$$

Problem: if $\lambda_i$ is not exact, then the computed $v_i$ may not be orthogonal.
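To make the formulas concrete, here is a minimal sketch (assuming numpy; all names are illustrative) that finds one eigenvalue by bisecting $\varphi_A$ between two consecutive poles, which is a valid bracket by the interlacing property. This illustrates the classical approach, not aheig itself, which instead works with the shifted-and-inverted matrix introduced below.

```python
import numpy as np

def phi(lam, D, z, alpha):
    # phi_A(lambda) = alpha - lambda - sum_i zeta_i^2 / (d_i - lambda)
    return alpha - lam - np.sum(z**2 / (D - lam))

def bisect_zero(D, z, alpha, lo, hi, maxit=200):
    # On (lo, hi) between consecutive poles, phi_A is strictly decreasing,
    # tending to +inf at lo+ and -inf at hi-, so plain bisection converges.
    for _ in range(maxit):
        mid = 0.5 * (lo + hi)
        if mid <= lo or mid >= hi:   # interval shrunk to roundoff
            break
        if phi(mid, D, z, alpha) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def eigenvector(D, z, lam):
    # x = [(D - lam*I)^{-1} z; -1], normalized
    x = np.append(z / (D - lam), -1.0)
    return x / np.linalg.norm(x)
```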

Introduction

The existing algorithms obtain orthogonal eigenvectors with the following procedure:

Compute the eigenvalues $\tilde\lambda_i$ of A.
Construct a new matrix $\hat A$ (an inverse problem) with the prescribed eigenvalues $\tilde\lambda_i$ and the same diagonal matrix D (that is, compute new $\hat z$ and $\hat\alpha$).
Compute the eigenvectors of $\hat A$ with the previous formula.

The eigenvectors computed this way are not the exact eigenvectors of the starting matrix A; they are the exact eigenvectors of the matrix
$$\hat A = \begin{bmatrix} D & \hat z \\ \hat z^T & \hat\alpha \end{bmatrix},$$
where
$$\hat\zeta_i = \sqrt{(\tilde\lambda_1 - d_i)(d_i - \tilde\lambda_n)\,\prod_{j=2}^{i} \frac{\tilde\lambda_j - d_i}{d_{j-1} - d_i}\,\prod_{j=i+1}^{n-1} \frac{d_i - \tilde\lambda_j}{d_i - d_j}}, \qquad \hat\alpha = \tilde\lambda_n + \sum_{j=1}^{n-1} \left(\tilde\lambda_j - d_j\right).$$
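A sketch of the inverse-problem step under the formulas as reconstructed above (numpy; 0-based indices in the code, 1-based in the formulas; interlacing guarantees every factor under the square root is positive):

```python
import numpy as np

def inverse_problem(D, lam):
    # Given ordered poles D (length n-1) and prescribed eigenvalues lam
    # (length n) satisfying lam[0] > D[0] > lam[1] > ... > D[-1] > lam[-1],
    # recover zhat, ahat so that [[diag(D), zhat], [zhat.T, ahat]]
    # has eigenvalues lam.
    n = lam.size
    zhat = np.empty(n - 1)
    for i in range(n - 1):
        di = D[i]
        p = (lam[0] - di) * (di - lam[-1])
        for j in range(1, i + 1):        # first product in the formula
            p *= (lam[j] - di) / (D[j - 1] - di)
        for j in range(i + 1, n - 1):    # second product in the formula
            p *= (di - lam[j]) / (di - D[j])
        zhat[i] = np.sqrt(p)
    ahat = lam[-1] + np.sum(lam[:-1] - D)
    return zhat, ahat
```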

The idea of the aheig algorithm

Our algorithm has a different concept: the accuracy of the eigenvectors and their orthogonality follow from the accuracy of the computed eigenvalues. Thus, there is no need for follow-up orthogonalization.

Let $d_i$ be the diagonal element (pole) of A which is closest to $\lambda$. From the interlacing property it follows that either $\lambda = \lambda_i$ or $\lambda = \lambda_{i+1}$.

The idea of the aheig algorithm

Let $A_i$ be the shifted matrix
$$A_i = A - d_i I = \begin{bmatrix} D_1 & 0 & 0 & z_1 \\ 0 & 0 & 0 & \zeta_i \\ 0 & 0 & D_2 & z_2 \\ z_1^T & \zeta_i & z_2^T & a \end{bmatrix},$$
where
$D_1 = \operatorname{diag}(d_1 - d_i, \ldots, d_{i-1} - d_i)$ is positive definite,
$D_2 = \operatorname{diag}(d_{i+1} - d_i, \ldots, d_{n-1} - d_i)$ is negative definite,
$z_1 = \begin{bmatrix} \zeta_1 & \zeta_2 & \cdots & \zeta_{i-1} \end{bmatrix}^T$,
$z_2 = \begin{bmatrix} \zeta_{i+1} & \zeta_{i+2} & \cdots & \zeta_{n-1} \end{bmatrix}^T$,
$a = \alpha - d_i$.

Obviously, $\mu = \lambda - d_i$ is an eigenvalue of $A_i$ iff $\lambda$ is an eigenvalue of A.

The idea of the aheig algorithm

Now
$$A_i^{-1} = \begin{bmatrix} D_1^{-1} & w_1 & 0 & 0 \\ w_1^T & b & w_2^T & 1/\zeta_i \\ 0 & w_2 & D_2^{-1} & 0 \\ 0 & 1/\zeta_i & 0 & 0 \end{bmatrix},$$
where
$$w_1 = -D_1^{-1} z_1 \frac{1}{\zeta_i}, \qquad w_2 = -D_2^{-1} z_2 \frac{1}{\zeta_i}, \qquad b = \frac{1}{\zeta_i^2}\left(-a + z_1^T D_1^{-1} z_1 + z_2^T D_2^{-1} z_2\right).$$
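Under the block layout above, the inverse can be assembled explicitly in O(n) work. A sketch (numpy; 0-based pole index i; illustrative names):

```python
import numpy as np

def inv_shifted_arrowhead(D, z, alpha, i):
    # A = [[diag(D), z], [z.T, alpha]];  A_i = A - D[i]*I.
    # Returns the explicit inverse of A_i, which is again an arrowhead
    # matrix with the shaft in row/column i and zero in the corner.
    n = D.size + 1
    zeta = z[i]
    D1, z1 = D[:i] - D[i], z[:i]          # positive definite block
    D2, z2 = D[i+1:] - D[i], z[i+1:]      # negative definite block
    a = alpha - D[i]
    w1 = -(z1 / D1) / zeta
    w2 = -(z2 / D2) / zeta
    b = (-a + np.sum(z1**2 / D1) + np.sum(z2**2 / D2)) / zeta**2
    M = np.zeros((n, n))
    M[:i, :i] = np.diag(1.0 / D1)
    M[i+1:n-1, i+1:n-1] = np.diag(1.0 / D2)
    M[:i, i] = M[i, :i] = w1
    M[i+1:n-1, i] = M[i, i+1:n-1] = w2
    M[i, i] = b
    M[i, n-1] = M[n-1, i] = 1.0 / zeta
    return M
```

A quick check of the sketch: multiplying the returned matrix by A - D[i]*np.eye(n) should give the identity up to roundoff.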

The idea of the aheig algorithm

$\lambda$ is the eigenvalue of the matrix A which is closest to the pole $d_i$.
$\mu$ is the eigenvalue of the matrix $A_i$ which is closest to zero.
$1/|\mu| = \|A_i^{-1}\|_2$.
Therefore, $\mu$ is well behaved by standard perturbation theory.
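Putting the pieces together, one shift-and-invert step looks as follows. Since $1/|\mu| = \|A_i^{-1}\|_2$, the absolutely largest eigenvalue $\nu$ of the symmetric matrix $A_i^{-1}$ gives $\mu = 1/\nu$. In this sketch it is found by plain power iteration purely for illustration; aheig itself exploits the arrowhead structure of $A_i^{-1}$ and computes $\nu$ by bisection. Uses inv_shifted_arrowhead from the previous sketch.

```python
import numpy as np

def eigenvalue_near_pole(D, z, alpha, i, iters=200):
    # lambda = d_i + 1/nu, where nu is the largest-magnitude eigenvalue
    # of A_i^{-1}.
    M = inv_shifted_arrowhead(D, z, alpha, i)
    rng = np.random.default_rng(0)
    v = rng.standard_normal(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    nu = v @ (M @ v)          # Rayleigh quotient
    return D[i] + 1.0 / nu
```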

Example 1/2

Let
$$A = \begin{bmatrix} D & z \\ z^T & \alpha \end{bmatrix},$$
where D = [ ], z = [ ] and α = (numerical entries omitted).

Example 2/2

Eigenvalues computed by Matlab eig, by aheig, and by Mathematica with 100-digit arithmetic: λ_eig, λ_aheig, λ_Math (numerical values omitted).

λ_3 and λ_4 computed by aheig are accurate to machine precision.

Eigenvectors computed by aheig are accurate and therefore orthogonal. For example, compare the component U_4 as computed by eig, aheig, and Mathematica: U_4(eig), U_4(aheig), U_4(Math) (numerical values omitted).

Accuracy of the aheig algorithm

We will use the following notation:

  matrix                                          exact eigenvalue   computed eigenvalue
  $A$                                             $\lambda$          $\tilde\lambda$
  $A_i$                                           $\mu$
  $\tilde A_i = \mathrm{fl}(A_i)$                 $\tilde\mu$        $\hat\mu = \mathrm{fl}(\tilde\mu)$
  $A_i^{-1}$                                      $\nu$
  $\widetilde{A_i^{-1}} = \mathrm{fl}(A_i^{-1})$  $\tilde\nu$        $\hat\nu = \mathrm{fl}(\tilde\nu)$

Accuracy of the aheig algorithm

Let $\tilde A_i = \mathrm{fl}(A_i)$:
$$\tilde A_i = \begin{bmatrix} D_1(I + E_1) & 0 & 0 & z_1 \\ 0 & 0 & 0 & \zeta_i \\ 0 & 0 & D_2(I + E_2) & z_2 \\ z_1^T & \zeta_i & z_2^T & a(1 + \varepsilon_a) \end{bmatrix},$$
where $E_1, E_2$ are diagonal matrices with $|(E_1)_{ii}| \leq \varepsilon_M$, $|(E_2)_{ii}| \leq \varepsilon_M$, and $|\varepsilon_a| \leq \varepsilon_M$. Also, let $\widetilde{A_i^{-1}} = \mathrm{fl}(A_i^{-1})$.

All elements of $\mathrm{fl}(A_i^{-1})$ are computed with high relative accuracy, except possibly b. Whether b is computed accurately (or we need extra precision) is monitored by a condition number.

Accuracy of the aheig algorithm

Q: What is the accuracy of the computed $A_i^{-1}$? Recall
$$A_i^{-1} = \begin{bmatrix} D_1^{-1} & w_1 & 0 & 0 \\ w_1^T & b & w_2^T & 1/\zeta_i \\ 0 & w_2 & D_2^{-1} & 0 \\ 0 & 1/\zeta_i & 0 & 0 \end{bmatrix},$$
where
$$w_1 = -D_1^{-1} z_1 \frac{1}{\zeta_i}, \qquad \mathrm{fl}\big((w_1)_k\big) = -\frac{\zeta_k}{(d_k - d_i)(1 + \varepsilon_1)\,\zeta_i}\,(1 + \varepsilon_2)(1 + \varepsilon_3),$$
$$w_2 = -D_2^{-1} z_2 \frac{1}{\zeta_i}, \qquad b = \frac{1}{\zeta_i^2}\left(-a + z_1^T D_1^{-1} z_1 + z_2^T D_2^{-1} z_2\right).$$

A: $\mathrm{fl}\big((A_i^{-1})_{ij}\big) = (A_i^{-1})_{ij}(1 + \varepsilon_{ij})$, $|\varepsilon_{ij}| \leq 3\varepsilon_M$, for all elements of the matrix $A_i^{-1}$, with the possible exception of the element b.

Accuracy of the aheig algorithm

Q: When is b not computed accurately, and how do we fix it?

Condition number $K_1$:
$$K_1(\lambda_i) = \frac{|a| + z_1^T D_1^{-1} z_1 + \left|z_2^T D_2^{-1} z_2\right|}{\|A_i^{-1}\|_2\, \zeta_i^2}.$$

If $K_1 \gg 1$, we have to compute b in double of the standard precision arithmetic.

Also,
$$K_1 \leq (n-2)\,\frac{1}{\zeta_i} \max_{\substack{k=1,\ldots,n-1 \\ k \neq i}} \zeta_k.$$
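A sketch of the remedy: only the scalar b is recomputed more accurately when $K_1 \gg 1$. As a stand-in for the double-of-standard-precision arithmetic used by aheig_quad, the sketch accumulates the cancellation-prone sum with Python's exactly rounded math.fsum (illustrative, not the actual implementation):

```python
import math
import numpy as np

def compute_b(D, z, alpha, i, accurate=False):
    # b = ( -a + sum_{k != i} zeta_k^2 / (d_k - d_i) ) / zeta_i^2,
    # with a = alpha - d_i.  With accurate=True the sum is formed with
    # math.fsum, mimicking the extra precision triggered by K_1 >> 1.
    a = alpha - D[i]
    mask = np.arange(D.size) != i
    terms = z[mask]**2 / (D[mask] - D[i])
    s = math.fsum(terms) if accurate else float(np.sum(terms))
    return (s - a) / z[i]**2
```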

Accuracy of the aheig algorithm

Example (double precision) 1/3

Let A = [ ] (numerical entries omitted). The condition numbers $K_1$ of the eigenvalues of A are: (values omitted).

Accuracy of the aheig algorithm

Example (double precision) 2/3

$A_2$ and the inverse $A_2^{-1}$ computed by aheig are shown, together with the element b as computed by aheig, by Matlab inv, and by aheig_quad (numerical values omitted).

Accuracy of the aheig algorithm

Example (double precision) 3/3

Eigenvalues computed by aheig, aheig_quad and Mathematica (100 digits): λ_aheig, λ_aheig_quad, λ_Math (numerical values omitted).

Eigenvectors computed by aheig_quad are accurate and therefore orthogonal. For example, compare the component U_6 as computed by eig, aheig_quad, and Mathematica: U_6(eig), U_6(aheig_quad), U_6(Math) (numerical values omitted).

Accuracy of the aheig algorithm

Q: Is $\hat\mu_i$ the eigenvalue of $A_i$ closest to zero, and if not, how far is it from the closest one?

Condition number $K_2$:
$$K_2(\lambda_i) = \|A_i^{-1}\|_2 \cdot |\hat\mu_i|.$$

A: If $K_2 \gg 1$, then $\hat\mu_i$ is not the eigenvalue of the matrix $A_i$ which is closest to zero.

Two different cases:
For $\lambda_1$ or $\lambda_n$, we can compute them from the starting matrix A.
For interior eigenvalues (the only open problem), a possible solution is to send the eigenvalue to the other pole.

Accuracy of the aheig algorithm

Example: arrowhead matrix applications in quantum optics 1/3

The research is about the decay of excited states of quantum dots in real photonic crystals. The matrix has the following structure:
$$A = \begin{bmatrix} \omega_0 & g_1 & g_2 & \cdots & g_n \\ g_1 & \omega_1 & & & \\ g_2 & & \omega_2 & & \\ \vdots & & & \ddots & \\ g_n & & & & \omega_n \end{bmatrix},$$
where $\omega_0$ is the quantum dot transition frequency, $\omega_i$ is the frequency of the i-th optical mode, and $g_i$ is the interaction constant of the quantum dot with the i-th optical mode. At this point our task is to compute the eigenvalues of the matrix A.

Accuracy of the aheig algorithm

Example: arrowhead matrix applications in quantum optics 2/3

The size of the matrix is variable, but in realistic cases it is approximately n ≈ 10^3 or larger (exact range omitted). For example, for one such n, g is a vector with components from the interval [ , ], and ω is a vector with components from the interval [ , ] (interval endpoints omitted).

Accuracy of the aheig algorithm

Example: arrowhead matrix applications in quantum optics 3/3

The components of the vector g are of the same order of magnitude, so we can guarantee that all eigenvalues will be computed with high relative accuracy ($K_1 \in [\,\cdot\,,\,\cdot\,]$, values omitted). Let
$$y(\lambda) = \begin{cases} 0, & d_i > \lambda_{i+1} > d_{i+1}, \\ 1, & \lambda_{i+1} > d_i \ \text{or} \ \lambda_{i+1} < d_{i+1}. \end{cases}$$

[Figure: the interlacing indicator y(λ) for eigenvalues computed by MATLAB and by AHEIG.]

Application to Hermitian arrowhead matrices
Algorithm herm2ahig

Let
$$H = \begin{bmatrix} D & r \\ r^* & \alpha \end{bmatrix}, \qquad r = \begin{bmatrix} \rho_1 & \rho_2 & \cdots & \rho_{n-1} \end{bmatrix}^T, \quad \rho_i \in \mathbb{C},$$
be a Hermitian arrowhead matrix, and transform it to
$$A = \begin{bmatrix} D & z \\ z^T & \alpha \end{bmatrix} = \Phi^* H \Phi, \qquad \Phi = \operatorname{diag}\!\left(\frac{\rho_1}{|\rho_1|}, \ldots, \frac{\rho_{n-1}}{|\rho_{n-1}|}, 1\right),$$
so that $z = \begin{bmatrix} |\rho_1| & \cdots & |\rho_{n-1}| \end{bmatrix}^T$.

If $A = V \Lambda V^T$, then $H = U \Lambda U^*$ with $U = \Phi V$.

Accuracy of the EVD of A implies accuracy of the EVD of H. (If aheig_quad is needed, we also need to compute z in double of the standard precision.)
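A sketch of the reduction (numpy; $\Phi$ as defined above; Phi[:, None] * V scales row i of V by $\Phi_i$):

```python
import numpy as np

def herm2ahig(D, r, alpha):
    # H = [[diag(D), r], [r^*, alpha]] with complex r.  Returns the real
    # symmetric arrowhead data z and the diagonal unitary Phi such that
    # A = Phi^* H Phi and U = Phi V.
    phi = r / np.abs(r)       # well defined: irreducibility gives rho_i != 0
    Phi = np.append(phi, 1.0)
    z = np.abs(r)             # real, nonnegative wing
    return z, Phi

# Usage: decompose A = [[diag(D), z], [z.T, alpha]] = V Lam V.T with aheig
# (or any real symmetric solver); then H = U Lam U^* with U = Phi[:, None] * V.
```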

Application to tridiagonal symmetric matrices
Algorithm dc_t2a

T is a symmetric tridiagonal matrix,
$$T = \begin{bmatrix} \alpha_1 & \beta_2 & & & \\ \beta_2 & \alpha_2 & \beta_3 & & \\ & \ddots & \ddots & \ddots & \\ & & \beta_{n-1} & \alpha_{n-1} & \beta_n \\ & & & \beta_n & \alpha_n \end{bmatrix} = \begin{bmatrix} T_1 & \beta_{k+1} e_k & 0 \\ \beta_{k+1} e_k^T & \alpha_{k+1} & \beta_{k+2} e_1^T \\ 0 & \beta_{k+2} e_1 & T_2 \end{bmatrix},$$
where $1 < k < n$, $T_1$ and $T_2$ are the $k \times k$ and $(n-k-1) \times (n-k-1)$ leading and trailing submatrices of T, respectively, and $e_j$ is the j-th unit vector of appropriate dimension. Usually k is taken to be $\lfloor n/2 \rfloor$.

Application to tridiagonal symmetric matrices
Algorithm dc_t2a

Let $Q_i D_i Q_i^T = T_i$ be an eigenvalue decomposition of $T_i$. Then
$$T = \begin{bmatrix} Q_1 D_1 Q_1^T & \beta_{k+1} e_k & 0 \\ \beta_{k+1} e_k^T & \alpha_{k+1} & \beta_{k+2} e_1^T \\ 0 & \beta_{k+2} e_1 & Q_2 D_2 Q_2^T \end{bmatrix} = Q \begin{bmatrix} \alpha_{k+1} & \beta_{k+1} l_1^T & \beta_{k+2} f_2^T \\ \beta_{k+1} l_1 & D_1 & 0 \\ \beta_{k+2} f_2 & 0 & D_2 \end{bmatrix} Q^T = Q A Q^T,$$
where $l_1^T$ is the last row of $Q_1$, $f_2^T$ is the first row of $Q_2$, and the orthogonal matrix Q combines $\operatorname{diag}(Q_1, 1, Q_2)$ with a permutation. Thus T is reduced to the symmetric arrowhead matrix A by the orthogonal transformation Q.

Application to tridiagonal symmetric matrices

Algorithm dc_t2a is used for computing an eigenvalue decomposition of a symmetric tridiagonal matrix in the following way (a sketch of the divide step follows below):

We transform the symmetric tridiagonal matrix to a symmetric arrowhead matrix by an orthogonal transformation.
We compute the eigenvalue decomposition of the symmetric arrowhead matrix using the aheig algorithm.

We can guarantee high relative accuracy of the eigenvalues and orthogonality of the eigenvectors of the tridiagonal symmetric matrix only when we can guarantee high relative accuracy of the eigenvalue decompositions of the corresponding symmetric arrowhead matrices emerging during algorithm dc_t2a.
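A sketch of the divide step as derived above. Here numpy's symmetric solver stands in for the recursive calls; alpha holds the diagonal, beta the off-diagonal with beta[j] coupling rows j and j+1, and k is the 0-based split index. All names are illustrative.

```python
import numpy as np

def t2a_step(alpha, beta, k):
    # One divide step: tridiagonal T -> arrowhead A = [[a, w.T], [w, diag(d)]]
    # plus the orthogonal factor pieces Q1, Q2.
    T1 = (np.diag(alpha[:k]) + np.diag(beta[:k-1], 1)
          + np.diag(beta[:k-1], -1))
    T2 = (np.diag(alpha[k+1:]) + np.diag(beta[k+1:], 1)
          + np.diag(beta[k+1:], -1))
    d1, Q1 = np.linalg.eigh(T1)
    d2, Q2 = np.linalg.eigh(T2)
    l1 = Q1[-1, :]                      # last row of Q1
    f2 = Q2[0, :]                       # first row of Q2
    a = alpha[k]                        # shaft of the arrowhead
    w = np.concatenate([beta[k-1] * l1, beta[k] * f2])
    d = np.concatenate([d1, d2])
    return a, w, d, Q1, Q2

# The EVD of the arrowhead (a, w, d), together with Q1 and Q2,
# assembles the EVD of T.
```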

Application to tridiagonal symmetric matrices
Algorithm dc_t2a

Example: Wilkinson matrix of order 21

Comparison of λ_Math with λ_dc_t2a(T) (numerical values omitted).

Application to diagonal + rank-one matrices (D + uu^T)
Algorithm dpr1_2a

Let
$$M = D + uu^T = \begin{bmatrix} d_1 + u_1^2 & u_1 u_2 & \cdots & u_1 u_n \\ u_2 u_1 & d_2 + u_2^2 & \cdots & u_2 u_n \\ \vdots & \vdots & \ddots & \vdots \\ u_n u_1 & u_n u_2 & \cdots & d_n + u_n^2 \end{bmatrix},$$
where $d_i \in \mathbb{R}$ and $u_i \in \mathbb{R}$ for $i = 1, \ldots, n$. Now, for
$$x_1 = 0, \qquad x_j = u_j / u_1, \quad j = 2, \ldots, n,$$
the matrix
$$G = \left(I + e_1 x^T\right) M \left(I - e_1 x^T\right)$$
is an arrowhead matrix.

Application to diagonal + rank-one matrices (D + uu^T)
Algorithm dpr1_2a

Under the assumption
$$d_1 < d_j, \quad j = 2, \ldots, n,$$
we form
$$\Psi = \operatorname{diag}\!\left(1, \frac{\sqrt{d_2 - d_1}}{u_1}, \ldots, \frac{\sqrt{d_n - d_1}}{u_1}\right).$$
Then $A = \Psi G \Psi^{-1}$ is a symmetric arrowhead matrix of the form
$$A = \begin{bmatrix} d_1 + u^T u & u_2 \sqrt{d_2 - d_1} & \cdots & u_n \sqrt{d_n - d_1} \\ u_2 \sqrt{d_2 - d_1} & d_2 & & 0 \\ \vdots & & \ddots & \\ u_n \sqrt{d_n - d_1} & 0 & & d_n \end{bmatrix}.$$
Algorithm aheig is now used on A. If aheig_quad is needed, we also need to compute α and z in double of the standard precision.
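A sketch of dpr1_2a with a sanity check that the similarity preserves the spectrum (assumes $d_1$ strictly smallest and $u_1 \neq 0$; names are illustrative):

```python
import numpy as np

def dpr1_2a(d, u):
    # Map M = diag(d) + u u^T (with d[0] < d[j] for j >= 1, u[0] != 0)
    # to the symmetric arrowhead A of the derivation above:
    # shaft a = d_1 + u^T u, wing w_j = u_j * sqrt(d_j - d_1).
    a = d[0] + u @ u
    w = u[1:] * np.sqrt(d[1:] - d[0])
    return a, w, d[1:]

# Sanity check: the eigenvalues of M and A must agree.
d = np.array([1.0, 2.0, 3.5, 7.0])
u = np.array([0.3, 1.1, -0.4, 2.0])
M = np.diag(d) + np.outer(u, u)
a, w, dd = dpr1_2a(d, u)
A = np.zeros((4, 4))
A[0, 0] = a
A[0, 1:] = A[1:, 0] = w
A[1:, 1:] = np.diag(dd)
assert np.allclose(np.sort(np.linalg.eigvalsh(M)),
                   np.sort(np.linalg.eigvalsh(A)))
```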
