EIGENVALUE PROBLEMS

Eigenvalues and eigenvectors

Let A ∈ C^{n×n}. Suppose Ax = λx with x ≠ 0; then x is a (right) eigenvector of A, corresponding to the eigenvalue λ.

Q: Prove that if x is an eigenvector, so is αx for any α ≠ 0. So we often take ‖x‖ = 1.

For an eigenpair (λ, x), (λI - A)x = 0 with x ≠ 0. Q: This is only possible if? λI - A is singular.

For general λ, π(λ) ≡ det(λI - A) is called the characteristic polynomial of A:
π(λ) = λ^n - (a_{11} + ... + a_{nn})λ^{n-1} + ... + det(-A).
π(λ) = 0 is called the characteristic equation. π(λ) has exact degree n, and so has n zeros λ_1, ..., λ_n, say.
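As a quick check of these definitions, here is a minimal MATLAB sketch (an illustration, not part of the slides; the test matrix is arbitrary) showing that the eigenvalues returned by eig agree with the zeros of the characteristic polynomial:

A = [2 1 0; 1 3 1; 0 1 4];   % an arbitrary small test matrix
lam = eig(A)                 % eigenvalues of A
p = poly(A);                 % coefficients of det(lambda*I - A)
r = roots(p)                 % zeros of the characteristic polynomial
% lam and r agree up to ordering and rounding errors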

Eigenvalues and eigenvectors, ctd

Spectrum of A: the set Λ(A) = {λ_1, ..., λ_n}.

An eigenvalue λ has algebraic multiplicity r if it is repeated exactly r times.

Q: Can real A have complex eigenvalues?

Q: Let x_i be an eigenvector of A corresponding to λ_i. Are x_1, ..., x_n linearly independent? Consider
[1 1; 0 1][ξ; η] = [ξ; η].
This matrix has 2 eigenvalues, 1 and 1, but only 1 linearly independent eigenvector.

An eigenvalue λ has geometric multiplicity s if it has s linearly independent eigenvectors.
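A minimal MATLAB sketch (illustration only) of this example: the computed eigenvector matrix has numerically parallel columns, confirming there is only one independent eigenvector.

A = [1 1; 0 1];      % the matrix in the example above
[V, D] = eig(A)      % D = diag(1,1), but the columns of V are (nearly) parallel
rank(V)              % returns 1: only one linearly independent eigenvector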

Similarity Transformations (ST)

Similarity transformation: B ≡ X^{-1}AX, with X nonsingular. A and B are said to be similar.

Thm. Similar matrices have the same characteristic polynomial.

Cor. Similarity transformations preserve eigenvalues and algebraic multiplicities.

Thm. Similarity transformations preserve geometric multiplicities.
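A minimal MATLAB sketch (illustration only) of the corollary: a random similarity transformation leaves the spectrum unchanged, up to rounding errors.

A = randn(5);
X = randn(5);                  % almost surely nonsingular
B = X \ (A*X);                 % B = X^{-1} A X
[sort(eig(A)) sort(eig(B))]    % the two spectra agree to rounding error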

Jordan Canonical Form (JCF)

k×k Jordan block:
J_λ^{(k)} = [ λ  1
                  λ  1
                     ⋱  ⋱
                        λ  1
                           λ ]
(λ on the diagonal, 1 on the superdiagonal, zeros elsewhere). It has one distinct eigenvalue λ, with algebraic multiplicity k and geometric multiplicity 1.

Thm. Let A ∈ C^{n×n}. Then there exist unique (up to ordering) numbers λ_1, λ_2, ..., λ_s (complex, not necessarily distinct) and positive integers m_1, m_2, ..., m_s with m_1 + ... + m_s = n, and a nonsingular X ∈ C^{n×n}, giving the JCF of A:
X^{-1}AX = J ≡ diag[J_{λ_1}^{(m_1)}, ..., J_{λ_s}^{(m_s)}].

Jordan Canonical Form, ctd

We see AX = XJ, with X = [x_1, ..., x_n].

Q: Which of x_1, ..., x_n are eigenvectors?

Q: Can we find the algebraic multiplicity and geometric multiplicity of an eigenvalue of A from its JCF?

Q: What is the relationship between the algebraic multiplicity and geometric multiplicity of an eigenvalue?

If all the Jordan blocks corresponding to the same eigenvalue have dimension 1, the eigenvalue is nondefective (or semisimple); otherwise it is defective.

If all the eigenvalues of A are nondefective, A is nondefective (or semisimple).

Jordan Canonical Form, ctd

If J is diagonal, A is diagonalizable.

Q: Is nondefective equivalent to diagonalizable?

Q: If a matrix A ∈ C^{n×n} has n distinct eigenvalues, is A diagonalizable?

If two or more Jordan blocks have the same eigenvalue, the eigenvalue is derogatory; otherwise it is nonderogatory.

If a matrix has a derogatory eigenvalue, the matrix is derogatory; otherwise it is nonderogatory.

If an eigenvalue is not repeated, it is simple. If all eigenvalues are distinct, the matrix is simple.

Jordan Canonical Form, ctd

The JCF is classical theory, important in describing properties of various linear differential equations, and so is important in control theory etc.

Two problems with the JCF:

1. For a general A, the X in X^{-1}AX = J can be very ill-conditioned, so rounding errors in the JCF computation can be greatly magnified:
fl(X^{-1}AX) = X^{-1}AX + E,  ‖E‖_2 = O(u) κ_2(X) ‖A‖_2.

2. A small perturbation in A may change the dimensions of the Jordan blocks completely.

If possible, avoid the JCF in computations.
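A minimal MATLAB sketch (illustration only, with an arbitrarily chosen perturbation size) of the second problem: a tiny perturbation of a single Jordan block scatters its eigenvalue around a circle of radius roughly the k-th root of the perturbation.

n = 5;
J = 2*eye(n) + diag(ones(n-1,1), 1);   % 5x5 Jordan block with eigenvalue 2
eig(J)                                  % all equal to 2
E = zeros(n);  E(n,1) = 1e-10;          % one tiny perturbed entry
eig(J + E)                              % eigenvalues move by about (1e-10)^(1/5) = 1e-2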

Unitary similarity transformations

Unitary transformations preserve size. If Q ∈ C^{n×n} is unitary, B ≡ Q^H AQ is a unitary similarity transformation of A.

Q: What are the relations between the eigenvalues and singular values of B and those of A?

Q: What is the simplest form of the above B?

Schur Decomposition

Schur's Theorem. There exists a similarity transformation with unitary Q such that Q^H AQ = R, with R upper triangular.

Pf. A has at least one eigenvector x, so suppose Ax = λx with x^H x = 1. Choose unitary Q_1 with Q_1^H x = e_1, i.e. Q_1 = [x, F] say. Then
Q_1^H A Q_1 = [ x^H Ax  x^H AF ; F^H Ax  F^H AF ] = [ λ  x^H AF ; 0  F^H AF ],
and the same argument can be applied to F^H AF, etc.
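In MATLAB the (complex) Schur decomposition is available directly; a minimal sketch (not from the slides):

A = randn(4) + 1i*randn(4);
[Q, R] = schur(A);       % Q unitary, R upper triangular (complex Schur form)
norm(Q'*A*Q - R)         % about machine precision times norm(A)
diag(R)                  % the eigenvalues of A appear on the diagonal of R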

Real Schur decomposition

If A ∈ R^{n×n}, then there exists a real orthogonal Q giving Q^T AQ = R, where R is quasi upper triangular, with 1×1 and 2×2 blocks on the diagonal, the 2×2 blocks corresponding to complex conjugate pairs of eigenvalues. E.g., a 5×5 R with two 2×2 blocks and one 1×1 block on its diagonal has two complex conjugate pairs and one real eigenvalue.

Real Schur decomposition, ctd

Q: If A = A^H, does the Schur form R have a real diagonal?

Hermitian matrices have real eigenvalues, and a complete set of orthonormal eigenvectors (the columns of a unitary Q).

Any matrix having a complete set of orthonormal eigenvectors is called a normal matrix.

Q: Show A is normal iff AA^H = A^H A.

Q: Show that if A is real symmetric, then it has a complete set of orthogonal eigenvectors.
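A minimal MATLAB check of the normality condition (illustration only; the two test matrices are arbitrary):

A = [0 1; -1 0];            % real skew-symmetric, hence normal but not symmetric
norm(A*A' - A'*A)           % 0: A is normal
B = [1 1; 0 2];
norm(B*B' - B'*B)           % nonzero: B is not normal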

Algorithms for the EVP

If A is real and A = A^T, then A = Q diag(λ_i) Q^T with Q^T Q = I; this eigendecomposition of A is the same as the SVD (apart from signs).

Q: What is one way of computing this? Jacobi's method (1845).

For general A ∈ C^{n×n}, there exists unitary Q such that Q^H AQ = R, R upper triangular. Also ‖Q^H AQ‖_{2,F} = ‖A‖_{2,F}, which is numerically desirable. So we seek such Q and R.

Algorithms for the EVP, ctd

Q: Why must methods for the EVP be iterative?

Finding the zeros of π(λ) is a nonlinear problem, and cannot be solved in a fixed number of steps (for n ≥ 5 there is no general closed-form expression for the roots).

The general form of such iterative algorithms for computing the Schur form is:
A_1 = A
A_{k+1} = Q_k^H A_k Q_k,  Q_k unitary, k = 1, 2, ...
        = Q_k^H Q_{k-1}^H ... Q_1^H A Q_1 ... Q_{k-1} Q_k.
We try to design the Q_k so that A_k → upper triangular form.

If A is real, we should try to design orthogonal Q_k so that A_k → quasi-upper triangular form.

The QR algorithm

Basic algorithm:
A_1 = A
for k = 1, 2, ... until convergence do
    QR factorization of A_k: A_k = Q_k R_k, Q_k unitary, R_k upper triangular
    recombine in the reverse order: A_{k+1} = R_k Q_k
end
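A direct MATLAB transcription of this basic algorithm, as a minimal sketch (the stopping tolerance and iteration cap are illustrative choices, and the test matrix is the 2x2 example used later in the slides):

A = [8 1; -2 1];
Ak = A;
for k = 1:100
    [Qk, Rk] = qr(Ak);        % A_k = Q_k R_k
    Ak = Rk*Qk;               % A_{k+1} = R_k Q_k
    if abs(Ak(2,1)) < 1e-12*norm(A,'fro'), break, end
end
k, Ak                          % Ak is essentially upper triangular
eig(A)                         % its diagonal matches the eigenvalues of A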

The QR algorithm, ctd

1. A_{k+1} = Q_k^H A_k Q_k = Q_k^H ... Q_1^H A Q_1 ... Q_k is a unitary similarity transformation of A. With refinements this is one of the most effective matrix computation algorithms for transforming A to upper triangular form using unitary similarity transformations.

2. If A is real, then the Q_k are orthogonal, and A_{k+1} = Q_k^T A_k Q_k = Q_k^T ... Q_1^T A Q_1 ... Q_k. With some refinements, the algorithm will transform A to quasi-upper triangular form.

Hessenberg Reduction

Each iteration (called a QR step) costs O(n^3) flops, which is very expensive.

Q: Can we reduce A to a matrix with special structure so that each QR step costs only O(n^2) flops?

Reduce the matrix A to upper Hessenberg form (zero below the first subdiagonal): this is the closest we can get to upper triangular form using a fixed number of unitary similarity transformations.
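In MATLAB the Hessenberg reduction is available as hess; a minimal sketch (not from the slides):

A = randn(6);
[Q0, H] = hess(A);        % A = Q0*H*Q0', with H upper Hessenberg, Q0 orthogonal
norm(Q0'*A*Q0 - H)        % about machine precision
norm(tril(H, -2))         % essentially 0: entries below the first subdiagonal vanish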

Hessenberg Reduction, ctd

Q: How do we do the Hessenberg reduction? (Apply Householder transformations H_1, ..., H_{n-2} from both sides.)

Q: What is its cost if A is real? 10n^3/3 flops.

Q: Do we lose the upper Hessenberg form in the iterations? No, if we use Givens rotations in the QR factorization.

Q: What is the cost of each QR step now if A is real? 6n^2 flops.
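A minimal MATLAB check (illustration only) that a QR step preserves upper Hessenberg form:

H1 = hess(randn(6));      % an upper Hessenberg matrix
[Q, R] = qr(H1);
H2 = R*Q;                 % one QR step
norm(tril(H2, -2))        % about machine precision: H2 is again upper Hessenberg
                          % (exactly so when Givens rotations are used, as noted above)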

QR algorithm with Hessenberg reduction

Compute the Hessenberg reduction A_1 := Q_0^H AQ_0, where Q_0 := H_1 ... H_{n-2}
for k = 1, 2, ... until convergence do
    QR factorization of A_k: A_k = Q_k R_k, Q_k unitary, R_k upper triangular
    recombine in the reverse order: A_{k+1} = R_k Q_k
end

It is still slow.

Shifting

Let |λ_1| ≥ ... ≥ |λ_n|. If |λ_i| > |λ_{i+1}|, we can show
a^{(k)}_{i+1,i} → 0,  like |λ_{i+1}/λ_i|^k → 0,
so the convergence is linear.

Shifting, ctd

Q: What are the eigenvalues of A - µI, and what is the new measure of convergence for a^{(k)}_{i+1,i} → 0?

Eigenvalues of A - µI: λ_i - µ, i = 1:n.

Suppose we renumber the eigenvalues of A so that |λ_1 - µ| ≥ ... ≥ |λ_n - µ|. New measure of convergence: |λ_{i+1} - µ| / |λ_i - µ|.

Q: How should we choose µ to obtain fast convergence? Choose µ to be close to an eigenvalue of A.

Shifting, ctd

Suppose |λ_n - µ| < |λ_{n-1} - µ|. If we apply the QR iterations to Ã_1 = A_1 - µI, then ã^{(k)}_{n,n-1} will converge to zero quickly. Suppose that after k_0 - 1 QR steps, ã^{(k_0)}_{n,n-1} is small enough that we can regard it as zero; we then add the shift back on, so that the last row of Ã_{k_0} + µI is [0, ..., 0, λ]. Then λ is an eigenvalue of A.

Shifting, ctd

The above process can be written as
Ã_1 := A_1 - µI
for k = 1 : k_0 - 1
    Ã_k = Q_k R_k
    Ã_{k+1} = R_k Q_k
end
A_{k_0} := Ã_{k_0} + µI

It is easy to show the above process is equivalent to
for k = 1 : k_0 - 1
    A_k - µI = Q_k R_k
    A_{k+1} = R_k Q_k + µI
end

Shifting, ctd

But there is no reason to use the same shift µ in all QR steps. Also, we usually can only find a good approximation to an eigenvalue during the QR iterations. So we should use different shifts in different QR steps.

Shifted QR Algorithm with Hessenberg Reduction:
Compute the Hessenberg reduction A_1 = Q_0^H AQ_0, where Q_0 := H_1 ... H_{n-2}
for k = 1, 2, ... until convergence do
    A_k - µ_k I = Q_k R_k
    A_{k+1} = R_k Q_k + µ_k I
end

Shifting, ctd

Remarks:

A_{k+1} = Q_k^H (A_k - µ_k I) Q_k + µ_k I = Q_k^H A_k Q_k. Now Q_k depends on µ_k.

With the correct choice of shift we get quadratic convergence, and if A = A^H we can get approximately cubic convergence.

If a^{(k)}_{n,n-1} is small, then a^{(k)}_{nn} is close to an eigenvalue. So one possible choice for µ_k is a^{(k)}_{nn}. This shift is called the Rayleigh quotient shift.
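A minimal MATLAB sketch of the shifted QR iteration with the Rayleigh quotient shift, applied to an upper Hessenberg matrix (illustration only: the test matrix is constructed to have real eigenvalues so a real shift suffices, and deflation is omitted, so the loop only drives the last subdiagonal entry to zero):

X = randn(5);
A = X*diag([5 4 3 2 1])/X;             % nonsymmetric, with known real eigenvalues
n = 5;
Ak = hess(A);                           % start from upper Hessenberg form
for k = 1:100
    mu = Ak(n,n);                       % Rayleigh quotient shift
    [Qk, Rk] = qr(Ak - mu*eye(n));      % A_k - mu_k I = Q_k R_k
    Ak = Rk*Qk + mu*eye(n);             % A_{k+1} = R_k Q_k + mu_k I
    if abs(Ak(n,n-1)) < 1e-12*norm(A,'fro'), break, end
end
k, Ak(n,n)                              % Ak(n,n) matches one of the eigenvalues 1,...,5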

Example: no shifting

>> A = [8 1; -2 1];
>> A1 = A;
>> [Q1,R1] = qr(A1);
>> A2 = R1*Q1
>> [Q2,R2] = qr(A2);
>> A3 = R2*Q2
>> [Q3,R3] = qr(A3);
>> A4 = R3*Q3

(The numerical values of the iterates A2, A3, A4 shown on the slide are lost in this transcription.)

Example: shifting

>> A1 = A;
>> I = eye(2);
>> [Q1,R1] = qr(A1 - A1(2,2)*I);
>> A2 = R1*Q1 + A1(2,2)*I
>> [Q2,R2] = qr(A2 - A2(2,2)*I);
>> A3 = R2*Q2 + A2(2,2)*I

(The numerical values of the iterates A2, A3 shown on the slide are lost in this transcription.)

Deflation Technique

When a^{(k)}_{n,n-1} is small enough, we deflate by regarding it as zero and ignoring the last row and column.

Note the remaining eigenvalues of A are those of A_k(1:n-1, 1:n-1). Apply the QR algorithm to A_k(1:n-1, 1:n-1).

Deflation Technique, ctd

Note: During the iterations, it may happen that some a^{(k)}_{i+1,i} other than a^{(k)}_{n,n-1} becomes very small. Then we just regard it as zero, and work with A_k(1:i, 1:i) and A_k(i+1:n, i+1:n).

Special case A = A^H

Q: What does the upper Hessenberg form become? Tridiagonal; the reduction costs 4n^3/3 flops if A is real.

Q: Is it preserved in the QR algorithm? Yes.

Q: What is the cost per QR step if A is real? 12n flops.

It has cubic convergence and the eigenvectors are immediately available: Q^H AQ → D diagonal, so the eigenvectors are the columns of Q.
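A minimal MATLAB check (illustration only) of the symmetric case: the Hessenberg form of a real symmetric matrix is tridiagonal and has the same eigenvalues.

A = randn(6);  A = (A + A')/2;              % real symmetric test matrix
T = hess(A);                                 % tridiagonal (symmetric Hessenberg)
norm(triu(T,2)) + norm(tril(T,-2))           % essentially 0
norm(sort(eig(T)) - sort(eig(A)))            % essentially 0: same spectrum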

Numerical Difficulty with Shifting

Form A - µI with e.g. µ = a_{nn}. (The slide's numerical example of A and fl(A - µI) is lost in this transcription.) Explicitly forming fl(A - µI) can lose information unnecessarily.

Numerical Difficulty with Shifting, ctd

After one QR step on the explicitly shifted matrix, the resulting eigenvalues are noticeably less accurate than the eigenvalues computed by MATLAB. (The slide's numerical values are lost in this transcription.) We can use implicit shifts in order to avoid this loss.

Implicitly shifted QR algorithm for real unsymmetric A

A real unsymmetric A usually has complex conjugate pairs of eigenvalues.

A complex µ in Q^H (A - µI) requires complex arithmetic. But for a real matrix, we would like to avoid complex arithmetic as much as possible.

When the QR algorithm converges to a complex conjugate pair, we find the (n-1, n-2) entry converges to 0, and eventually we can deflate.

Q: How do we deflate here? Ignore the last 2 rows and columns.

Implicitly shifted QR algorithm for real unsymmetric A, ctd

When the (n-1, n-2) entry is small, the eigenvalues µ_1, µ_2 of the bottom right-hand corner 2×2 block are good approximations to eigenvalues of A.

If they are a complex conjugate pair, we could do one QR step with µ_1, and the next with its complex conjugate µ_2.

Implicitly shifted QR algorithm for real unsymmetric A, ctd

Suppose A_1 is real upper Hessenberg. One step of double QR with explicit shifts µ_1 and µ_2 is
A_1 - µ_1 I = Q_1 R_1,  A_2 = R_1 Q_1 + µ_1 I,
A_2 - µ_2 I = Q_2 R_2,  A_3 = R_2 Q_2 + µ_2 I.

Two drawbacks:
Although A_1 is real, complex arithmetic is involved if µ_1 and µ_2 are complex.
Explicit shifting may cause numerical difficulties.

Implicitly shifted QR algorithm for real unsymmetric A, ctd

Q. Show that:
1. A_3 = Q_2^H A_2 Q_2 = (Q_1 Q_2)^H A_1 (Q_1 Q_2).
2. N ≡ (A_1 - µ_1 I)(A_1 - µ_2 I) is real, and N = Q_1 Q_2 R_2 R_1.
3. N_{ij} = 0 for i ≥ j + 3, i.e., N has two nonzero subdiagonals.

Since N is real, we can choose the Q-factor Q_1 Q_2 of its QR factorization to be real. So we can obtain the QR factorization of N to get Q_1 Q_2 and then use it to obtain A_3, avoiding complex arithmetic.

But there are two problems:
i. Forming N costs O(n^3) flops, which is too expensive;
ii. Essentially, explicit shifting is still involved.

One step of double QR iteration

We can avoid these two problems by the following algorithm.

Do the 1st step of the Householder QR factorization of N = Q_1 Q_2 R_2 R_1:
Compute n_1 = N e_1 = [τ, σ, ν, 0, ..., 0]^T.
Design a real Householder transformation H_0 such that H_0^T n_1 = [×, 0, ..., 0]^T (a multiple of e_1).
Note H_0 e_1 = (Q_1 Q_2) e_1 (up to sign).
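The vector n_1 = N e_1 above has only three nonzero entries and can be formed directly from A_1 and the real numbers s = µ_1 + µ_2 and t = µ_1 µ_2 (the trace and determinant of the trailing 2×2 block), without ever forming N. A minimal MATLAB sketch under these assumptions (the variable names are illustrative, not from the slides):

A1 = hess(randn(6));  n = size(A1,1);
s = A1(n-1,n-1) + A1(n,n);                       % mu_1 + mu_2 (real)
t = A1(n-1,n-1)*A1(n,n) - A1(n-1,n)*A1(n,n-1);   % mu_1 * mu_2 (real)
% first column of N = A1^2 - s*A1 + t*I, using only the top-left entries of A1:
tau   = A1(1,1)^2 + A1(1,2)*A1(2,1) - s*A1(1,1) + t;
sigma = A1(2,1)*(A1(1,1) + A1(2,2) - s);
nu    = A1(2,1)*A1(3,2);
n1 = [tau; sigma; nu; zeros(n-3,1)];
% verification against the explicitly formed N (O(n^3) work, for checking only):
N = A1*A1 - s*A1 + t*eye(n);
norm(n1 - N(:,1))                                 % about machine precision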

One step of double QR iteration, ctd

Apply H_0^T and H_0 to A_1 from the left and right, respectively: H_0^T A_1 H_0 is upper Hessenberg apart from a small bulge in its top left corner.

Then use real Householder transformations H_1, ..., H_{n-2} to transform H_0^T A_1 H_0 back to upper Hessenberg form:
Ã_3 = H_{n-2}^T ... H_1^T H_0^T A_1 H_0 H_1 ... H_{n-2}.

One step of double QR iteration, ctd

Remember what we want is A_3. But Ã_3 is essentially just A_3, due to the following result:

The Implicit Q Theorem. Suppose Q and P are two real orthogonal matrices such that Q^T AQ and P^T AP are both upper Hessenberg matrices, one of which is unreduced (i.e., all of its subdiagonal entries are nonzero). If Q e_1 = ±P e_1, then
Q e_j = ±P e_j, j = 2, ..., n,  and  Q^T AQ = D^{-1} P^T AP D,  D = diag(±1, ..., ±1).

One step of double QR iteration, ctd

Two Hessenberg reductions:
A_3 = (Q_1 Q_2)^T A_1 (Q_1 Q_2)
Ã_3 = (H_0 H_1 ... H_{n-2})^T A_1 (H_0 H_1 ... H_{n-2})

Q. Show (Q_1 Q_2) e_1 = (H_0 H_1 ... H_{n-2}) e_1.

Q. Suppose A_1 is unreduced Hessenberg and µ_1 and µ_2 are not its eigenvalues. Show that A_3 is unreduced Hessenberg.

Conclusion: Ã_3 = D^{-1} A_3 D.

QR Algorithm

Given A ∈ R^{n×n} and a tolerance tol greater than the unit roundoff, this algorithm computes the real Schur decomposition Q^T AQ = R. If Q and R are desired, then R is stored in A. If only the eigenvalues are desired, then the diagonal blocks of R are stored in the corresponding positions in A.

1. Compute the Hessenberg reduction A := Q^T AQ, where Q = H_1 ... H_{n-2}. If the final Q is desired, form Q := H_1 ... H_{n-2}.
2. until q = n
       Set to zero all subdiagonal entries that satisfy |a_{i,i-1}| ≤ tol(|a_{ii}| + |a_{i-1,i-1}|).
       Find the largest non-negative q and the smallest non-negative p such that

                  p     n-p-q    q
       A = [ A_11   A_12    A_13 ]  p
           [  0     A_22    A_23 ]  n-p-q
           [  0      0      A_33 ]  q

       where A_33 is upper quasi-triangular and A_22 is unreduced. (Note: either p or q may be zero.)
       if q < n
           Perform one step of double QR iteration on A_22: A_22 := Z^T A_22 Z
           if Q is desired
               Q := Q diag(I_p, Z, I_q),  A_12 := A_12 Z,  A_23 := Z^T A_23
           end
       end
   end
3. Upper triangularize all 2-by-2 diagonal blocks in A that have real eigenvalues and accumulate the transformations if necessary.

Inverse iteration for eigenvectors

Inverse Iteration: Given A ∈ C^{n×n}. Let 0 < |λ_j - µ| ≪ |λ_i - µ| (i ≠ j). Choose q_0 with ‖q_0‖_2 = 1.
for k = 1, 2, ...
    Solve (A - µI) z_k = q_{k-1}
    q_k = z_k / ‖z_k‖_2
    Stop if ‖(A - µI) q_k‖_2 ≤ c u ‖A‖_2
end

Inverse Iteration for eigenvectors, ctd

1. Solve (A - µI) z_k = q_{k-1} by LU with partial pivoting, and replace any pivot with magnitude < u‖A‖ by u‖A‖; this works even when A - µI is singular. Ill-conditioning of A - µI does not spoil the computation.

2. When it stops, (µ, q_k) is an exact eigenpair for a nearby problem: (A + E_k) q_k = µ q_k, where E_k ≡ -r_k q_k^H with r_k ≡ (A - µI) q_k.

3. When a known (computed) eigenvalue is used as µ, usually only one step is needed. If one step does not give the desired result, start again with a new initial vector.
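A minimal MATLAB sketch of inverse iteration (illustration only: it uses backslash instead of the LU-with-pivot-replacement safeguard in remark 1, takes a symmetric test matrix so that the eigenvalues are real, and perturbs a computed eigenvalue to play the role of µ):

A = randn(6);  A = A + A';              % symmetric test matrix
n = size(A,1);
lam = eig(A);  mu = lam(1) + 1e-8;      % a good, but inexact, eigenvalue approximation
qk = randn(n,1);  qk = qk/norm(qk);     % random unit starting vector
for k = 1:3                             % one or two steps are usually enough
    zk = (A - mu*eye(n)) \ qk;          % solve (A - mu*I) z_k = q_{k-1}
    qk = zk/norm(zk);                   % q_k = z_k / ||z_k||_2
end
theta = qk'*A*qk;                       % Rayleigh quotient of the computed vector
norm(A*qk - theta*qk)                   % tiny: (theta, qk) is a good eigenpair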

Convergence Analysis

Assume A is nondefective, AX = X diag(λ_i).

Let q_0 = Xa = Σ_{i=1}^n a_i x_i, where we assume a_j ≠ 0.

Thus
(A - µI)^{-k} q_0 = (A - µI)^{-k} Σ_{i=1}^n a_i x_i = Σ_{i=1}^n a_i (λ_i - µ)^{-k} x_i
                 = (λ_j - µ)^{-k} [ a_j x_j + Σ_{i≠j} a_i ((λ_j - µ)/(λ_i - µ))^k x_i ].

Since |λ_j - µ| ≪ |λ_i - µ| for i ≠ j,
q_k = (A - µI)^{-k} q_0 / ‖(A - µI)^{-k} q_0‖_2 → ± x_j / ‖x_j‖_2   as k → ∞,

i.e. q_k rapidly converges to (a unit multiple of) the eigenvector x_j of A.

Computing an eigenvector after the QR algorithm

1. The QR algorithm gives {λ_i}.
2. Apply inverse iteration to the Hessenberg matrix A_1 = Q_0^T AQ_0 to find an eigenvector y_i of A_1 corresponding to λ_i. Inverse iteration with A_1 is economical: solving (A_1 - λ_i I) z_k = q_{k-1} costs O(n^2) flops.
3. Let x_i = Q_0 y_i. Then A x_i = λ_i x_i with ‖x_i‖_2 = 1, i.e. x_i is a unit eigenvector of A corresponding to λ_i.


Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg Linear Algebra, part 3 Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2010 Going back to least squares (Sections 1.7 and 2.3 from Strang). We know from before: The vector

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Orthogonal iteration to QR

Orthogonal iteration to QR Week 10: Wednesday and Friday, Oct 24 and 26 Orthogonal iteration to QR On Monday, we went through a somewhat roundabout algbraic path from orthogonal subspace iteration to the QR iteration. Let me start

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis (Numerical Linear Algebra for Computational and Data Sciences) Lecture 14: Eigenvalue Problems; Eigenvalue Revealing Factorizations Xiangmin Jiao Stony Brook University Xiangmin

More information

Eigenvalue problems. Eigenvalue problems

Eigenvalue problems. Eigenvalue problems Determination of eigenvalues and eigenvectors Ax x, where A is an N N matrix, eigenvector x 0, and eigenvalues are in general complex numbers In physics: - Energy eigenvalues in a quantum mechanical system

More information

11.3 Eigenvalues and Eigenvectors of a Tridiagonal Matrix

11.3 Eigenvalues and Eigenvectors of a Tridiagonal Matrix 11.3 Eigenvalues and Eigenvectors of a ridiagonal Matrix Evaluation of the Characteristic Polynomial Once our original, real, symmetric matrix has been reduced to tridiagonal form, one possible way to

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Eigenvalues, Eigenvectors, and Diagonalization

Eigenvalues, Eigenvectors, and Diagonalization Week12 Eigenvalues, Eigenvectors, and Diagonalization 12.1 Opening Remarks 12.1.1 Predicting the Weather, Again Let us revisit the example from Week 4, in which we had a simple model for predicting the

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Jordan Normal Form and Singular Decomposition

Jordan Normal Form and Singular Decomposition University of Debrecen Diagonalization and eigenvalues Diagonalization We have seen that if A is an n n square matrix, then A is diagonalizable if and only if for all λ eigenvalues of A we have dim(u λ

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

Eigenvalues, Eigenvectors, and Diagonalization

Eigenvalues, Eigenvectors, and Diagonalization Week12 Eigenvalues, Eigenvectors, and Diagonalization 12.1 Opening Remarks 12.1.1 Predicting the Weather, Again View at edx Let us revisit the example from Week 4, in which we had a simple model for predicting

More information

Lecture 5 Singular value decomposition

Lecture 5 Singular value decomposition Lecture 5 Singular value decomposition Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 21: Sensitivity of Eigenvalues and Eigenvectors; Conjugate Gradient Method Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

Definition (T -invariant subspace) Example. Example

Definition (T -invariant subspace) Example. Example Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin

More information

Synopsis of Numerical Linear Algebra

Synopsis of Numerical Linear Algebra Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Rayleigh Quotient Iteration Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 10 Solving Eigenvalue Problems All

More information

Lecture 11: Diagonalization

Lecture 11: Diagonalization Lecture 11: Elif Tan Ankara University Elif Tan (Ankara University) Lecture 11 1 / 11 Definition The n n matrix A is diagonalizableif there exits nonsingular matrix P d 1 0 0. such that P 1 AP = D, where

More information

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) Chapter 5 The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) 5.1 Basics of SVD 5.1.1 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Linear Algebra, part 3 QR and SVD

Linear Algebra, part 3 QR and SVD Linear Algebra, part 3 QR and SVD Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2012 Going back to least squares (Section 1.4 from Strang, now also see section 5.2). We

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

7.3 The Jacobi and Gauss-Siedel Iterative Techniques. Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP.

7.3 The Jacobi and Gauss-Siedel Iterative Techniques. Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP. 7.3 The Jacobi and Gauss-Siedel Iterative Techniques Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP. 7.3 The Jacobi and Gauss-Siedel Iterative Techniques

More information

Krylov Subspaces. Lab 1. The Arnoldi Iteration

Krylov Subspaces. Lab 1. The Arnoldi Iteration Lab 1 Krylov Subspaces Lab Objective: Discuss simple Krylov Subspace Methods for finding eigenvalues and show some interesting applications. One of the biggest difficulties in computational linear algebra

More information

Recall : Eigenvalues and Eigenvectors

Recall : Eigenvalues and Eigenvectors Recall : Eigenvalues and Eigenvectors Let A be an n n matrix. If a nonzero vector x in R n satisfies Ax λx for a scalar λ, then : The scalar λ is called an eigenvalue of A. The vector x is called an eigenvector

More information

Math Spring 2011 Final Exam

Math Spring 2011 Final Exam Math 471 - Spring 211 Final Exam Instructions The following exam consists of three problems, each with multiple parts. There are 15 points available on the exam. The highest possible score is 125. Your

More information

Schur s Triangularization Theorem. Math 422

Schur s Triangularization Theorem. Math 422 Schur s Triangularization Theorem Math 4 The characteristic polynomial p (t) of a square complex matrix A splits as a product of linear factors of the form (t λ) m Of course, finding these factors is a

More information

Least squares and Eigenvalues

Least squares and Eigenvalues Lab 1 Least squares and Eigenvalues Lab Objective: Use least squares to fit curves to data and use QR decomposition to find eigenvalues. Least Squares A linear system Ax = b is overdetermined if it has

More information

Chapter 3. Linear and Nonlinear Systems

Chapter 3. Linear and Nonlinear Systems 59 An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them Werner Heisenberg (1901-1976) Chapter 3 Linear and Nonlinear Systems In this chapter

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) Eigenvalues and Eigenvectors Fall 2015 1 / 14 Introduction We define eigenvalues and eigenvectors. We discuss how to

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Draft. Lecture 14 Eigenvalue Problems. MATH 562 Numerical Analysis II. Songting Luo. Department of Mathematics Iowa State University

Draft. Lecture 14 Eigenvalue Problems. MATH 562 Numerical Analysis II. Songting Luo. Department of Mathematics Iowa State University Lecture 14 Eigenvalue Problems Songting Luo Department of Mathematics Iowa State University MATH 562 Numerical Analysis II Songting Luo ( Department of Mathematics Iowa State University[0.5in] MATH562

More information