QR-decomposition. The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which A = QR

1 QR-decomposition
The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which A = QR.
In Matlab: [Q,R] = qr(A);
Note. The QR-decomposition is unique up to a change of signs of the columns of Q: A = (QD)(D∗R) with D diagonal, D∗D = I.
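
The claims can be checked directly in Matlab (a minimal sketch; the test sizes n = 6, k = 4 are our choice):

  n = 6; k = 4;
  A = randn(n, k);
  [Q, R] = qr(A);                % Q is n-by-n unitary, R is n-by-k upper triangular
  norm(Q'*Q - eye(n))            % ~ machine precision: Q is numerically unitary
  norm(A - Q*R)                  % ~ machine precision: A = QR
  D = diag(sign(randn(n, 1)));   % diagonal with entries +-1, so D'*D = I
  norm(A - (Q*D)*(D'*R))         % ~ machine precision: same product, signs flipped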

2 QR-decomposition
The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which A = QR.
If A is n × k with column rank l and l ≤ k ≤ n, then the economical QR-decomposition is an n × l orthonormal matrix Q and an l × k upper triangular matrix R for which A = QR.
In Matlab: [Q,R] = qr(A,0);
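
A short sketch comparing the economical form with the full one (assuming full column rank, so l = k):

  n = 6; k = 4;
  A = randn(n, k);
  [Q, R] = qr(A, 0);    % economical: Q is n-by-k with orthonormal columns, R is k-by-k
  size(Q), size(R)      % [6 4] and [4 4]
  norm(Q'*Q - eye(k))   % ~ machine precision
  norm(A - Q*R)         % ~ machine precision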

3 QR-decomposition
The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which A = QR.
If A is n × k with column rank l and l ≤ k ≤ n, then the economical QR-decomposition is an n × l orthonormal matrix Q and an l × k upper triangular matrix R for which A = QR.
Note. The columns of Q form an orthonormal basis of the space spanned by the columns of A: the QR-decomposition represents the results of the Gram-Schmidt process.

4 QR-decomposition
The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which A = QR.
Theorem. The QR-decomposition can be stably computed with Householder reflections.

5 QR-decomposition
The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which A = QR.
Theorem. The QR-decomposition can be stably computed with Householder reflections: let R̃ be the computed R and Q̃ ≡ (H_{v_k} ··· H_{v_1})∗ with H_{v_j} the Householder reflection as actually used in step j. Then A + ∆A = Q̃R̃ for some ∆A with ‖∆A‖_F ≤ nku‖A‖_F (u the unit roundoff).
Note. The claim is not that Q̃ is close to the Q that would have been obtained in exact arithmetic, but that Q̃ is unitary (a product of Householder reflections).
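
For concreteness, a textbook-style Householder QR in Matlab (our own sketch, not the slides' code; real A is assumed, and the helper sign0 is ours):

  function [Q, R] = house_qr(A)
  % QR by Householder reflections: column j is brought to upper triangular
  % form by the reflection H_v = I - 2*v*v' (v normalized).
  [n, k] = size(A);
  Q = eye(n); R = A;
  for j = 1:min(k, n-1)
      x = R(j:n, j);
      v = x;
      v(1) = v(1) + sign0(x(1))*norm(x);   % reflect x onto a multiple of e_1
      if norm(v) > 0
          v = v / norm(v);
          R(j:n, j:k) = R(j:n, j:k) - 2*v*(v'*R(j:n, j:k));   % apply H_v from the left
          Q(:, j:n)   = Q(:, j:n) - 2*(Q(:, j:n)*v)*v';       % accumulate Q
      end
  end
  end

  function s = sign0(x)
  % sign with sign0(0) = 1, so the reflector is always well defined
  if x == 0, s = 1; else, s = sign(x); end
  end

Up to rounding, norm(A - Q*R) and norm(Q'*Q - eye(n)) then come out of the order of the unit roundoff, in line with the theorem.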

6 Schur decomposition
The Schur decomposition, or Schur form, of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that A = USU∗ or, equivalently, AU = US.
In Matlab: [U,S] = schur(A);

7 Schur decomposition
The Schur decomposition, or Schur form, of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that A = USU∗ or, equivalently, AU = US.
In Matlab: [U,S] = schur(A);
Theorem. If ST = TΛ is the eigenvalue decomposition of S, i.e., T is non-singular and Λ is diagonal, then A(UT) = (UT)Λ is the eigenvalue decomposition of A. In particular, Λ(A) = Λ(S) = diag(S) and C₂(T) = C₂(UT).
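
A short Matlab sketch of this route from Schur form to eigenpairs (the 'complex' option forces a genuinely triangular S for real input; the 5 × 5 test matrix is our choice):

  A = randn(5);
  [U, S] = schur(A, 'complex');   % complex Schur form: S upper triangular
  [T, Lambda] = eig(S);           % S*T = T*Lambda, T non-singular (generically)
  V = U*T;                        % eigenvector matrix of A
  norm(A*V - V*Lambda)            % ~ machine precision
  cond(U)                         % = 1 up to rounding
  cond(V)                         % equals cond(T); can be arbitrarily large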

8 Schur decomposition
The Schur decomposition, or Schur form, of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that A = USU∗ or, equivalently, AU = US.
In Matlab: [U,S] = schur(A);
The columns u_j ≡ Ue_j of U are called Schur vectors.

9 Schur decomposition
The Schur decomposition, or Schur form, of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that A = USU∗ or, equivalently, AU = US.
In Matlab: [U,S] = schur(A);
Observation. Many techniques that exploit the eigenvalue decomposition of A can be based on the Schur decomposition as well. For practical computations, the Schur decomposition is preferable, since it is stable: C₂(U) = 1, while C₂(UT) can be huge.

10 Schur decomposition
The Schur decomposition, or Schur form, of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that A = USU∗ or, equivalently, AU = US.
In Matlab: [U,S] = schur(A);
Note. The first column u_1 ≡ Ue_1 of U is an eigenvector of A with eigenvalue λ_1 ≡ e_1∗Se_1.
Proof. S is upper triangular: Se_1 = λ_1e_1.

11 Schur decomposition
The Schur decomposition, or Schur form, of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that A = USU∗ or, equivalently, AU = US.
In Matlab: [U,S] = schur(A);
Note. The last column u_n ≡ Ue_n of U is an eigenvector of A∗ with eigenvalue conj(λ_n), where λ_n ≡ e_n∗Se_n.
Proof. S∗ is lower triangular: S∗e_n = conj(λ_n)e_n.

12 Schur decomposition
The Schur decomposition, or Schur form, of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that A = USU∗ or, equivalently, AU = US.
In Matlab: [U,S] = schur(A);
Note. The second column u_2 of U is an eigenvector of A_1 ≡ (I − u_1u_1∗)A(I − u_1u_1∗) with eigenvalue λ_2 ≡ e_2∗Se_2.
Proof. S is upper triangular: Se_2 = αe_1 + λ_2e_2 for α = S_{1,2}. Hence, USU∗u_2 = αu_1 + λ_2u_2, and projecting out u_1 (note u_1∗u_2 = 0) leaves A_1u_2 = λ_2u_2.

13 Schur decomposition
The Schur decomposition, or Schur form, of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that A = USU∗ or, equivalently, AU = US.
In Matlab: [U,S] = schur(A);
Note. The second column u_2 of U is an eigenvector of A_1 ≡ (I − u_1u_1∗)A(I − u_1u_1∗) with eigenvalue λ_2 ≡ e_2∗Se_2. In A_1, the eigenvector u_1 is deflated from A.
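
A quick numerical check of this deflation statement (a sketch; random 5 × 5 test matrix):

  A = randn(5);
  [U, S] = schur(A, 'complex');
  u1 = U(:, 1); u2 = U(:, 2);
  P  = eye(5) - u1*u1';          % projector that deflates u1
  A1 = P*A*P;
  norm(A1*u2 - S(2,2)*u2)        % ~ machine precision: (u2, S(2,2)) is an eigenpair of A1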

14 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Theorem. With a proper shift strategy: U_k → U, U unitary; S_k → S, S upper triangular; AU = US: the QR-algorithm converges to the Schur decomposition.
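
The loop is easy to run in Matlab (a minimal sketch; the simple shift choice σ_k = S_{k−1}(m, m) and the Hermitian test matrix are assumptions made here so that, generically, S converges in real arithmetic):

  m = 6;
  A = randn(m); A = (A + A')/2;    % Hermitian test matrix
  U = eye(m); S = U'*A*U;
  for k = 1:60
      sigma = S(m, m);             % simple shift; better shifts follow below
      [Q, R] = qr(S - sigma*eye(m));
      S = R*Q + sigma*eye(m);
      U = U*Q;
  end
  norm(A*U - U*S)                  % the invariant AU = US holds up to rounding
  norm(tril(S, -1))                % -> 0: S tends to triangular (here: diagonal) form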

15 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k.
Proof. S_0Q_1 = (S_0 − σ_1I + σ_1I)Q_1 = (Q_1R_1 + σ_1I)Q_1 = Q_1(R_1Q_1 + σ_1I) = Q_1S_1.

16 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k.
Proof. S_{k−1}Q_k = Q_kS_k, so AU_k = U_0S_0Q_1Q_2···Q_k = U_0Q_1S_1Q_2···Q_k = ··· = U_kS_k.

17 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k, c) (A − σ_kI)U_{k−1} = U_kR_k.
Proof. (A − σ_kI)U_{k−1} = U_{k−1}(S_{k−1} − σ_kI) = U_{k−1}Q_kR_k = U_kR_k.

18 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k, c) (A − σ_kI)U_{k−1} = U_kR_k, d) (A − σ_kI)∗U_k = U_{k−1}R_k∗.
Proof. By c), U_k∗(A − σ_kI) = R_kU_{k−1}∗; taking adjoints gives d).

19 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k, c) (A − σ_kI)U_{k−1} = U_kR_k, d) (A − σ_kI)∗U_k = U_{k−1}R_k∗.
Corollary. With x_k ≡ U_ke_1 and τ_k ≡ e_1∗R_ke_1, we have (A − σ_kI)x_{k−1} = τ_kx_k (the shifted power method).
Proof. R_k is upper triangular, so R_ke_1 = τ_ke_1.

20 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k, c) (A − σ_kI)U_{k−1} = U_kR_k, d) (A − σ_kI)∗U_k = U_{k−1}R_k∗.
Corollary. With x_k ≡ U_ke_1 and τ_k ≡ e_1∗R_ke_1, we have (A − σ_kI)x_{k−1} = τ_kx_k (the shifted power method). With p(λ) ≡ (λ − σ_k)···(λ − σ_1), x_k = τp(A)x_0 for some τ ∈ C.
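
The product formula can be verified numerically (a sketch; U_0 = I and the two shifts are arbitrary choices, and the unknown scalar τ is eliminated by comparing directions):

  A = randn(5);
  U = eye(5); S = A;                     % U_0 = I, so x_0 = e_1
  shifts = [0.3, -0.7];
  for k = 1:2
      [Q, R] = qr(S - shifts(k)*eye(5));
      S = R*Q + shifts(k)*eye(5);
      U = U*Q;
  end
  xk = U(:, 1);
  y = (A - shifts(2)*eye(5))*((A - shifts(1)*eye(5))*eye(5, 1));   % p(A) x_0
  y = y/norm(y);
  abs(abs(y'*xk) - 1)                    % ~ 0: x_k is parallel to p(A) x_0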

21 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k, c) (A − σ_kI)U_{k−1} = U_kR_k, d) (A − σ_kI)∗U_k = U_{k−1}R_k∗.
Corollary. With x_k ≡ U_ke_1 and τ_k ≡ e_1∗R_ke_1, we have (A − σ_kI)x_{k−1} = τ_kx_k (the shifted power method).
Note that λ^(k) ≡ x_k∗Ax_k = e_1∗U_k∗AU_ke_1 = e_1∗S_ke_1.

22 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k, c) (A − σ_kI)U_{k−1} = U_kR_k, d) (A − σ_kI)∗U_k = U_{k−1}R_k∗.
Suppose v is the dominant eigenvector of A − σI. With σ_k = σ and x_k ≡ U_ke_1, for k → ∞ we have that ∠(x_k, v) → 0, λ^(k) ≡ e_1∗S_ke_1 → λ, and ‖S_ke_1 − λ^(k)e_1‖ → 0, where λ is the eigenvalue of A associated with v.

23 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k, c) (A − σ_kI)U_{k−1} = U_kR_k, d) (A − σ_kI)∗U_k = U_{k−1}R_k∗.
Corollary. With x_k ≡ U_ke_n and τ_k ≡ e_n∗R_ke_n, we have (A − σ_kI)∗x_k = conj(τ_k)x_{k−1} (Shift & Invert).
Proof. R_k∗ is lower triangular, so R_k∗e_n = conj(τ_k)e_n.

24 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k, c) (A − σ_kI)U_{k−1} = U_kR_k, d) (A − σ_kI)∗U_k = U_{k−1}R_k∗.
Corollary. With x_k ≡ U_ke_n and τ_k ≡ e_n∗R_ke_n, we have (A − σ_kI)∗x_k = conj(τ_k)x_{k−1} (Shift & Invert).
Note that λ^(k) ≡ x_k∗Ax_k = e_n∗U_k∗AU_ke_n = e_n∗S_ke_n.

25 QR-algorithm
Select U_0 unitary. Compute S_0 = U_0∗AU_0.
for k = 1, 2, ... do
 1) Select a shift σ_k
 2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_kI = Q_kR_k
 3) Compute S_k = R_kQ_k + σ_kI
 4) U_k = U_{k−1}Q_k
end for
Lemma. a) U_k is unitary, b) AU_k = U_kS_k, c) (A − σ_kI)U_{k−1} = U_kR_k, d) (A − σ_kI)∗U_k = U_{k−1}R_k∗.
Suppose v is the dominant eigenvector of (A − σI)^{−∗}. With σ_k = σ and x_k ≡ U_ke_n, for k → ∞ we have that ∠(x_k, v) → 0, λ^(k) ≡ e_n∗S_ke_n → λ, and ‖S_k∗e_n − conj(λ^(k))e_n‖ → 0, where λ is the eigenvalue of A associated with v.

26 Selecting shifts
The QR-algorithm incorporates the Shift & Invert power method (for A∗).
Rayleigh Quotient Iteration (RQI) is Shift & Invert with shifts equal to the Rayleigh quotients, σ_k = x_k∗Ax_k.
Theorem. The asymptotic convergence of RQI is quadratic.
In this case, with x_k = U_ke_n, σ_k = e_n∗S_ke_n.
By 'the asymptotic convergence of this method is quadratic' we mean: the method produces sequences (x_k) that converge provided x_0 is close enough to some (limit) eigenvector, and, for k large, the error reduces quadratically.

27 Selecting shifts
The QR-algorithm incorporates the Shift & Invert power method (for A∗).
Rayleigh Quotient Iteration (RQI) is Shift & Invert with shifts equal to the Rayleigh quotients, σ_k = x_k∗Ax_k.
Theorem. The asymptotic convergence of RQI is quadratic.
In this case, with x_k = U_ke_n, σ_k = e_n∗S_ke_n.
RQI need not converge, as the following example shows.
Example. With A = [0 1; 1 0] and x_0 = e_1, RQI produces the sequence (x_k) = (e_1, e_2, e_1, e_2, ...). Note that x_k∗Ax_k = 0.
Observation. The shifts σ_k = e_n∗S_ke_n may lead to stagnation.
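
The stagnation example is easily reproduced (a sketch):

  A = [0 1; 1 0];
  x = [1; 0];                        % x_0 = e_1
  for k = 1:4
      sigma = x'*A*x;                % Rayleigh quotient; stays 0 here
      y = (A - sigma*eye(2)) \ x;    % Shift & Invert step
      x = y/norm(y);
      disp(x')                       % alternates between e_2 and e_1
  end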

28 Selecting shifts
The QR-algorithm incorporates the Shift & Invert power method (for A∗).
The Wilkinson shift is the eigenvalue of the 2 × 2 right lower block of S_{k−1} that is smallest in absolute value.
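
In Matlab, the shift as defined here can be computed as follows (a sketch; the function name is ours):

  function sigma = wilkinson_shift(S)
  % eigenvalue of the trailing 2-by-2 block of S of smallest absolute value
  m  = size(S, 1);
  ev = eig(S(m-1:m, m-1:m));
  [~, i] = min(abs(ev));
  sigma = ev(i);
  end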

29 Selecting shifts
The QR-algorithm incorporates the Shift & Invert power method (for A∗).
The Wilkinson shift is the eigenvalue of the 2 × 2 right lower block of S_{k−1} that is smallest in absolute value.
Theorem. For σ_k take the Wilkinson shift and take x_k ≡ U_ke_n. Then, for some eigenpair (v, λ) of A, we have that ∠(x_k, v) → 0, λ^(k) ≡ e_n∗S_ke_n → λ, and ‖e_n∗S_k − λ^(k)e_n∗‖ → 0. The convergence is quadratic (and cubic if A is Hermitian).

30 Selecting shifts
The QR-algorithm incorporates the Shift & Invert power method (for A∗).
The Wilkinson shift is the eigenvalue of the 2 × 2 right lower block of S_{k−1} that is smallest in absolute value.
Theorem. For σ_k take the Wilkinson shift and take x_k ≡ U_ke_n. Then, for some eigenpair (v, λ) of A, we have that ∠(x_k, v) → 0, λ^(k) ≡ e_n∗S_ke_n → λ, and ‖e_n∗S_k − λ^(k)e_n∗‖ → 0. The convergence is quadratic (and cubic if A is Hermitian).
The QR-algorithm: if ‖e_n∗S_k − λ^(k)e_n∗‖₂ ≤ ε, then accept U_ke_n as an eigenvector of A and deflate: delete the last row and column of S_k and continue (the search for an eigenpair of the lower dimensional matrix).

31 Selecting shifts
The QR-algorithm incorporates the Shift & Invert power method (for A∗).
The Wilkinson shift is the eigenvalue of the 2 × 2 right lower block of S_{k−1} that is smallest in absolute value.
Theorem. For σ_k take the Wilkinson shift and take x_k ≡ U_ke_n. Then, for some eigenpair (v, λ) of A, we have that ∠(x_k, v) → 0, λ^(k) ≡ e_n∗S_ke_n → λ, and ‖e_n∗S_k − λ^(k)e_n∗‖ → 0. The convergence is quadratic (and cubic if A is Hermitian).
The QR-algorithm: if ‖e_n∗S_k − λ^(k)e_n∗‖₂ ≤ ε, then accept U_ke_n as the nth Schur vector of A and deflate: delete the last row and column of S_k and continue (the search for an eigenpair of the lower dimensional matrix).

32 Deflation
Consider the kth step of the QR-algorithm. Put u_n ≡ U_ke_n. Note that
(I − u_nu_n∗)U_k = U_k(I − e_ne_n∗).
Hence, using AU_k = U_kS_k,
(I − u_nu_n∗)A(I − u_nu_n∗)U_k = U_k(I − e_ne_n∗)S_k(I − e_ne_n∗).
Deflating the nth Schur vector from A can easily be performed in the QR-algorithm: simply delete the last row and last column of the active matrix S_k (assuming that U_ke_n is the nth Schur vector to the required accuracy).

33 QR-algorithm
Select U unitary. S = U∗AU, m = size(A,1), N = [1 : m], I = I_m.
repeat until m = 1
 1) Select the Wilkinson shift σ
 2) [Q, R] = qr(S − σI)
 3) S ← RQ + σI
 4) U(:, N) ← U(:, N)Q
 5) if |S(m, m−1)| ≤ ε|S(m, m)| %% Deflate
     m ← m − 1, N ← [1 : m], I ← I_m
     S ← S(N, N)
    end if
end repeat
Theorem. U_k → U, U unitary; S_k → S, S upper triangular; AU = US.
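
A runnable Matlab transcription of this pseudocode (our own sketch: the coupling columns of the already-deflated part are updated as well, and the tolerance and test matrix are assumptions; for real A with complex eigenvalues the Wilkinson shift is complex, so the iteration moves into complex arithmetic and converges to the complex Schur form):

  A = randn(6); n = size(A, 1);
  U = eye(n); S = A; m = n; tol = 1e-12;
  while m > 1
      ev = eig(S(m-1:m, m-1:m));
      [~, i] = min(abs(ev)); sigma = ev(i);          % Wilkinson shift
      [Q, R] = qr(S(1:m, 1:m) - sigma*eye(m));
      S(1:m, 1:m) = R*Q + sigma*eye(m);
      S(1:m, m+1:n) = Q'*S(1:m, m+1:n);              % keep AU = US on deflated columns
      U(:, 1:m) = U(:, 1:m)*Q;
      if abs(S(m, m-1)) <= tol*(abs(S(m-1, m-1)) + abs(S(m, m)))   % deflate
          S(m, m-1) = 0; m = m - 1;
      end
  end
  norm(A*U - U*S)     % small: numerical Schur decomposition of A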

34 Observations.
The QR-algorithm quickly converges to the eigenvalue targeted by the Wilkinson shift (on average, 8 steps of the QR-algorithm seem to be required for accurate detection of the first eigenvalue).
While converging to a target eigenvalue, other eigenvalues are also approximated. Therefore, the next eigenvalues are detected more quickly (from the 5th eigenvalue on, 2 steps appear to be sufficient).
All eigenvalues are computed (according to multiplicity).
Computation of all eigenvalues (actually, of the Schur decomposition of A) requires approximately 2n steps of the QR-algorithm.
The order in which the eigenvalues are computed cannot be controlled.

35 Initiation of the QR-algorithm
Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0∗AU_0 is upper Hessenberg.
Start QR-alg.: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.
Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.
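
Matlab provides the Hessenberg reduction directly (a sketch):

  A = randn(6);
  [U0, S0] = hess(A);    % A = U0*S0*U0', with U0 unitary, S0 upper Hessenberg
  norm(A - U0*S0*U0')    % ~ machine precision
  norm(tril(S0, -2))     % = 0: all entries below the first subdiagonal vanish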

36–40 Upper Hessenberg matrices
Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S′ ≡ RQ and S′ + σI are upper Hessenberg.
Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j = G_jR_{j−1} (j = 1, ..., n − 1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})):
 G_j = [c_j s_j; −s_j c_j] with c_j = cos(φ_j), s_j = sin(φ_j), acting on rows j and j+1; all other entries agree with the identity (empty matrix entries are 0).
The rotation G_j is chosen so that R_j gets a zero in position (j+1, j).
[Slides 36–40 display the sparsity patterns of R_1 = G_1R_0 through R_4 = G_4R_3 for a 5 × 5 example, one elimination step per slide; the matrix pictures did not survive transcription.]

41 Upper Hessenberg matrices
Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S′ ≡ RQ and S′ + σI are upper Hessenberg.
Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j = G_jR_{j−1} (j = 1, ..., n − 1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).
Then R = R_{n−1} and Q∗ = G_{n−1}···G_1.
Note. Q need not be formed explicitly: it suffices to store the sequences of cosines and sines.

42 Upper Hessenberg matrices
Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S′ ≡ RQ and S′ + σI are upper Hessenberg.
Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j = G_jR_{j−1} (j = 1, ..., n − 1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).
Then R = R_{n−1} and Q∗ = G_{n−1}···G_1, and S′ = RG_1∗···G_{n−1}∗.

43–45 Upper Hessenberg matrices
Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S′ ≡ RQ and S′ + σI are upper Hessenberg.
Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j = G_jR_{j−1} (j = 1, ..., n − 1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).
The left rotations (building R) and the right rotations (building S′ = RG_1∗···G_{n−1}∗) can be interleaved: as soon as G_3, say, has been applied on the left, G_1∗ can be applied on the right. 'Chasing the bulge.'
[Slides 43–45 display the sparsity pattern of (G_3(R_2))G_1∗ for the 5 × 5 example: the right rotation fills in one entry of the strictly lower triangular part, which the subsequent rotations chase along the subdiagonal; the matrix pictures did not survive transcription.]

46 Upper Hessenberg matrices
Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S′ ≡ RQ and S′ + σI are upper Hessenberg.
Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j = G_jR_{j−1} (j = 1, ..., n − 1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).
Then R = R_{n−1} and Q∗ = G_{n−1}···G_1, and S′ = RG_1∗···G_{n−1}∗.
Property. S′ = ··· G_4(G_3((G_2(G_1S))G_1∗)G_2∗)G_3∗ ···: with this interleaved evaluation order, only two sines and two cosines have to be stored at any time.
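
A Matlab sketch of one such Hessenberg QR step S → S′ = RQ with n − 1 Givens rotations, storing only the cosines and sines (for clarity the left and right sweeps are written separately rather than interleaved; the test matrix is our choice):

  n = 6;
  S = triu(randn(n), -1);              % random upper Hessenberg test matrix
  R = S; c = zeros(n-1, 1); s = zeros(n-1, 1);
  for j = 1:n-1                        % left sweep: R = G_{n-1}*...*G_1*S
      r = hypot(R(j, j), R(j+1, j));
      c(j) = R(j, j)/r; s(j) = R(j+1, j)/r;
      G = [c(j) s(j); -s(j) c(j)];
      R(j:j+1, j:n) = G*R(j:j+1, j:n);           % zeros R(j+1, j)
  end
  Snew = R;
  for j = 1:n-1                        % right sweep: Snew = R*G_1'*...*G_{n-1}'
      G = [c(j) s(j); -s(j) c(j)];
      Snew(1:j+1, j:j+1) = Snew(1:j+1, j:j+1)*G';
  end
  norm(tril(Snew, -2))                 % ~ 0: S' is again upper Hessenberg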

47 Initiation of the QR-algorithm
Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0∗AU_0 is upper Hessenberg.
Start QR-alg.: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.
Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.

48 Initiation of the QR-algorithm
Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0∗AU_0 is upper Hessenberg.
Start QR-alg.: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.
Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.
Observation. For computing the eigenvalues only, step 4 can be skipped.

49 Initiation of the QR-algorithm
Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0∗AU_0 is upper Hessenberg.
Start QR-alg.: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.
Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.
If the eigenvalue λ_j is available, then the associated eigenvector can also be computed with Shift & Invert: solve (A − λ_jI)v_j = e_1 for v_j. Note that the LU-decomposition can cheaply be computed if A is upper Hessenberg.
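
A sketch of this eigenvector computation (the small perturbation of the shift keeps the system numerically non-singular; this regularization and the right-hand side e_1 are pragmatic choices of ours):

  H = hess(randn(6));                  % upper Hessenberg test matrix
  n = size(H, 1);
  ev = eig(H); lam = ev(1);            % an eigenvalue (possibly complex)
  e1 = eye(n, 1);
  v = (H - (lam + 1e-10)*eye(n)) \ e1; % one Shift & Invert step; O(n^2) for Hessenberg H
  v = v/norm(v);
  norm(H*v - lam*v)                    % small: v is an eigenvector of H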

50 Initiation of the QR-algorithm
Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0∗AU_0 is upper Hessenberg.
Start QR-alg.: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.
Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.
Observation. The QR-algorithm requires approximately 8n³ flop to compute the Schur decomposition to full accuracy.

51 Initiation of the QR-algorithm
Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0∗AU_0 is upper Hessenberg.
Start QR-alg.: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.
Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.
Observation. The QR-algorithm cannot exploit any sparsity structure of A.

52 Benefits of the QR-RQ steps.
The Shift & Invert power method is implicitly incorporated for one eigenvalue.
The power method is implicitly incorporated for all other eigenvalues.
Easy deflation is allowed.
The computations are stable (when a stable QR-decomposition is used).
When combined with an upper Hessenberg start:
the upper Hessenberg structure is preserved, leading to relatively low computational costs per step;
simple error control: the norm of the residual equals |S_k(n, n−1)|;
effective shifts can easily be computed: with an eigenvalue of the 2 × 2 right lower block of S_k as shift, quadratic convergence is achieved and stagnation avoided.

53 Excellent performance of the QR-algorithm relies on:
the QR-RQ steps (see the previous transparency);
a good shift strategy, leading to fast convergence (quadratic and, if A is Hermitian, cubic) to one eigenvalue;
the fact that, while quickly converging to one eigenvalue, other eigenvalues are also approximated, yielding good starts for quick eigenvalue computation;
deflation, allowing a fast search for the next eigenvalue; deflation is performed simply by deleting the last row and the last column of the active matrix;
the upper Hessenberg structure, which is preserved, allowing relatively cheap QR steps.

54 Theorem. The QR-algorithm is stable: for the computed matrix U and the computed upper triangular matrix S we have that
(A + ∆A)U = US, ‖U∗U − I‖₂ ≈ u,
where ∆A is an n × n matrix such that ‖∆A‖₂ ≲ u‖A‖₂.
