Matrix functions and their approximation. Krylov subspaces


[ 1 / 31 ] University of Cyprus
Matrix functions and their approximation using Krylov subspaces
Stefan Güttel, stefan@guettel.com
Nicosia, 24 January 2006

[ 2 / 31 ] Overview
1. Matrix functions: introduction, definitions, properties
2. Krylov subspaces: Arnoldi method, first error bounds

[ 3 / 31 ] What is a matrix function?
In general, f : D → R maps a domain D ⊆ C^(k×l) to a range R ⊆ C^(m×n).

                 | m = n = 1                   | m = 1 or n = 1              | m, n arbitrary
  k = l = 1      | scalar function of a        | vector function of a        | matrix-valued function of a
                 | single variable             | single variable             | single variable
  k = 1 or l = 1 | scalar function of a vector | vector field                | matrix function of a vector
  k, l arbitrary | scalar function             | vector function             | matrix function

Table: Classification of matrix functions

[ 4 / 31 ] Definition 1: Polynomial matrix functions
Given A ∈ C^(N×N) and a polynomial p(z) of degree m with complex coefficients, i.e.
    p(z) = α_m z^m + α_(m-1) z^(m-1) + ... + α_0.
Notation: p ∈ P_m(z). Since the powers I, A, A^2, ... exist, we may give the following

Definition.
    p(A) := α_m A^m + α_(m-1) A^(m-1) + ... + α_0 I ∈ C^(N×N).   (D1)
p is a polynomial matrix function.

We no longer have to distinguish between P_m(z) and the set of polynomials in A of degree ≤ m; we simply write P_m.
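Definition (D1) can be evaluated directly with Horner's scheme. A minimal NumPy sketch (not from the talk, whose snippets use MATLAB; the function name `polyvalm` and the coefficient ordering are our choice):

```python
import numpy as np

def polyvalm(coeffs, A):
    """Evaluate p(A) = α_m A^m + ... + α_0 I by Horner's scheme.
    coeffs = [α_m, α_(m-1), ..., α_0], highest degree first."""
    N = A.shape[0]
    P = np.zeros((N, N), dtype=np.result_type(A, *coeffs))
    for c in coeffs:
        P = P @ A + c * np.eye(N)
    return P

# p(z) = z^2 - 1 applied to a diagonal matrix acts on the eigenvalues:
A = np.diag([2.0, 3.0])
print(polyvalm([1.0, 0.0, -1.0], A))   # diag(3, 8)
```

Horner's scheme needs only m matrix products, instead of forming every power A^j separately.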

[ 5 / 31 ] Properties of polynomials in matrices
Lemma. Let p ∈ P_m be a polynomial, A ∈ C^(N×N) and A = T J T^(-1), where J = diag(J_1, J_2, ..., J_k) is block-diagonal. Then
1. p(A) = T p(J) T^(-1),
2. p(J) = diag(p(J_1), p(J_2), ..., p(J_k)),
3. if Av = λv, then p(A)v = p(λ)v,
4. given another polynomial q ∈ P_m, then p(A) q(A) = q(A) p(A).

[ 6 / 31 ] The Jordan canonical form
Every square matrix A is similar to a block-diagonal Jordan matrix J = diag(J_1, J_2, ..., J_k), where each Jordan block J_j = J_j(λ_j) ∈ C^(n_j × n_j) has entries λ_j on the main diagonal and ones on the first superdiagonal (j = 1, 2, ..., k):

    J_j(λ_j) := toep(λ_j, 1) = [ λ_j  1           ]
                               [      λ_j  ⋱      ]
                               [           ⋱   1  ]
                               [              λ_j ]

We say J = T^(-1) A T is a Jordan canonical form (JCF) of A. The columns of T are the generalized eigenvectors of A.

[ 7 / 31 ] The Jordan canonical form
Given a Jordan block J := toep(λ, 1) ∈ C^(n×n), let f(z) := z^m be the monomial of degree m. Then
    f(J) = toep( λ^m, m λ^(m-1), ..., C(m,i) λ^(m-i), ..., C(m, min{m,n-1}) λ^(m-min{m,n-1}) )
         = toep( f(λ), f'(λ), ..., f^(i)(λ)/i!, ..., f^(min{m,n-1})(λ)/min{m,n-1}! )
         = toep( f(λ), f'(λ), ..., f^(i)(λ)/i!, ..., f^(n-1)(λ)/(n-1)! ),
where C(m,i) denotes the binomial coefficient. f(J) is already defined if f, f', ..., f^(n-1) exist in an open subset of C containing λ.
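This upper triangular Toeplitz structure can be checked numerically for f = exp, whose derivatives are all exp itself. A sketch (the values of λ and n are chosen for illustration):

```python
import math
import numpy as np
from scipy.linalg import expm

lam, n = 0.5, 4
J = lam * np.eye(n) + np.diag(np.ones(n - 1), 1)   # Jordan block toep(λ, 1)

# Upper triangular Toeplitz with entries f^(i)(λ)/i! on the i-th superdiagonal;
# for f = exp every derivative equals exp(λ).
F = sum(math.exp(lam) / math.factorial(i) * np.diag(np.ones(n - i), i)
        for i in range(n))

print(np.allclose(F, expm(J)))   # True
```

The same construction works for any f with n derivatives at λ; only the row of values f^(i)(λ)/i! changes.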

[ 8 / 31 ] Definition 2
Definition. Given A ∈ C^(N×N) with a Jordan canonical form J = T^(-1) A T, where J = diag(J_1, J_2, ..., J_k) and J_j = J_j(λ_j) ∈ C^(n_j × n_j) (j = 1, 2, ..., k). Let U be an open subset of C such that {λ_1, λ_2, ..., λ_k} ⊆ U, and let f be a function f : U → D ⊆ C. Then f is defined on A if f(λ_j), f'(λ_j), ..., f^(d_(λ_j) - 1)(λ_j) exist, where d_(λ_j) := max{n_i : i = 1, 2, ..., k and λ_i = λ_j}. We set
    f(A) := T diag( f(J_1), f(J_2), ..., f(J_k) ) T^(-1),   (D2)
where
    f(J_j) := toep( f(λ_j), f'(λ_j), ..., f^(i)(λ_j)/i!, ..., f^(n_j - 1)(λ_j)/(n_j - 1)! ).

[ 9 / 31 ] Remarks
1. This definition is independent of the choice of J and T; hence f(A) is uniquely determined.
2. d_λ is the size of the largest Jordan block for the eigenvalue λ. By Λ(A) := {λ_1, λ_2, ..., λ_k} we denote the set of eigenvalues of A. The minimal polynomial of A is defined as
       ψ_A(z) := Π_(λ ∈ Λ(A)) (z - λ)^(d_λ).
3. If all the λ_j are pairwise distinct, then
       ψ_A(z) = Π_(j=1)^k (z - λ_j)^(n_j) = χ_A(z),
   where χ_A(z) is the characteristic polynomial of A. Matrices with ψ_A = χ_A are called nonderogatory.
4. For all p ∈ P_m there holds (D1) = (D2).

[ 10 / 31 ] Polynomial interpolation
Theorem.
1. There holds f(A) = p(A) if and only if
       f^(i)(λ) = p^(i)(λ),   λ ∈ Λ(A),   i = 0, 1, ..., d_λ - 1.   (HIP)
   These are d := deg(ψ_A) interpolation conditions on p.
2. There exists a uniquely determined polynomial p̂ ∈ P_(d-1) that satisfies (HIP); p̂ is the Hermite interpolation polynomial.
3. Assume p is another polynomial that satisfies (HIP). Then p(z) = p̂(z) + ψ_A(z) h(z) for some polynomial h(z), where ψ_A is the minimal polynomial of A.

[ 11 / 31 ] Example (1)
Let A = [α]. Then ψ_A(z) = z - α and deg(ψ_A) = 1. Therefore f(A) = p̂(A) with deg(p̂) = 0, namely p̂(A) = f(α) I. This is a degenerate case.

[ 12 / 31 ] Example (2)
Calculate p̂ for f(z) = exp(z) and the Jordan matrix
    A = J = diag( 1, toep(-1, 1), 0 ) ∈ C^(4×4),   ψ_A(z) = (z - 1)(z + 1)^2 z.
The interpolation conditions (HIP) are
    p(λ_1)  = p(1)   = exp(1)  = e,
    p(λ_2)  = p(-1)  = exp(-1) = 1/e,
    p'(λ_2) = p'(-1) = exp(-1) = 1/e,
    p(λ_3)  = p(0)   = exp(0)  = 1.
A solution is
    p(z) = (e^2 - 4e + 5)/(4e) z^3 + (e - 1)^2/(2e) z^2 + (e^2 + 4e - 7)/(4e) z + 1,
and there holds p(A) = f(A) = exp(A). Because deg(p) < deg(ψ_A), p = p̂.
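The coefficients of p̂ from example (2) can be verified numerically. A sketch (the 4×4 Jordan matrix J below is the one consistent with ψ_A(z) = (z-1)(z+1)^2 z; the transcription lost the displayed matrix, so it is reconstructed here):

```python
import numpy as np
from scipy.linalg import expm

e = np.e
J = np.array([[ 1.,  0., 0., 0.],
              [ 0., -1., 1., 0.],
              [ 0.,  0.,-1., 0.],
              [ 0.,  0., 0., 0.]])   # ψ_J(z) = (z-1)(z+1)^2 z

# Hermite interpolation polynomial p̂ from example (2)
c3 = (e**2 - 4*e + 5) / (4*e)
c2 = (e - 1)**2 / (2*e)
c1 = (e**2 + 4*e - 7) / (4*e)
P = c3*np.linalg.matrix_power(J, 3) + c2*(J @ J) + c1*J + np.eye(4)

print(np.allclose(P, expm(J)))   # True: p̂(J) = exp(J)
```

Since p̂ matches exp and exp' at -1 and exp at 1 and 0, the theorem guarantees p̂(J) = exp(J) exactly; the check only confirms the coefficients.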

[ 13 / 31 ] Remarks
1. Every function f that is defined on the spectrum of A ∈ C^(N×N) can be represented pointwise (i.e., for a concrete A) as a polynomial p(A), p ∈ P_(d-1), d = deg(ψ_A). Or we might say: f is a field of polynomials.
2. f(A) depends only on the values of f, f', ... on Λ(A). Thus f(A) and f(B) have the same polynomial representation if A and B have the same minimal polynomial (e.g. similar matrices).
3. If all Jordan blocks have size 1×1 and thus J is a diagonal matrix (e.g. for normal A), then (HIP) reduces to a Lagrange interpolation problem:
       f(λ) = p(λ),   λ ∈ Λ(A).   (LIP)

[ 14 / 31 ] The components of A
Let again ψ_A(z) = Π_(λ ∈ Λ(A)) (z - λ)^(d_λ) denote the minimal polynomial of A, d = deg(ψ_A).

Definition. Define H := {ϕ_(λ,i)(z) ∈ P_(d-1) : λ ∈ Λ(A), i = 0, 1, ..., d_λ - 1} such that
    ϕ_(λ,i)^(ν)(z) = 1 if z = λ and i = ν, and 0 otherwise,
for all z ∈ Λ(A). H is the Hermite basis of P_(d-1) with respect to ψ_A. (It has to be shown that all the ϕ_(λ,i) are linearly independent.)

[ 15 / 31 ] The components of A
[Figure: the Hermite basis polynomials ϕ_(λ,i) for example (2)]

[ 16 / 31 ] The components of A
Definition. The components C_(λ,i) of A are defined as C_(λ,i) := ϕ_(λ,i)(A).

Lemma.
1. {C_(λ,i) : λ ∈ Λ(A); i = 0, 1, ..., d_λ - 1} is a set of linearly independent matrices.
2. Spectral resolution of A for f:
       f(A) = Σ_(λ ∈ Λ(A)) Σ_(i=0)^(d_λ - 1) f^(i)(λ) C_(λ,i).   (SR)
3. Σ_(λ ∈ Λ(A)) C_(λ,0) = I and Σ_(λ ∈ Λ(A)) (λ C_(λ,0) + C_(λ,1)) = A.
4. C_(λ,i) C_(µ,j) = C_(µ,j) C_(λ,i).

[ 17 / 31 ] Cauchy integral formula
Let f(z) be analytic in a domain G and let γ be a closed path contained in G. Then Cauchy's integral formula asserts
    f^(i)(z) = i!/(2πi) ∫_γ f(ζ)/(ζ - z)^(i+1) dζ   (CIF)
for any z ∈ G with wind_z(γ) = 1 and i = 0, 1, ...

[ 18 / 31 ] The resolvent of A
Lemma. Let A ∈ C^(N×N) and ζ ∉ Λ(A), and let C_(λ,i) be the components of A. There holds
    R_ζ(A) := (ζ I - A)^(-1) = Σ_(λ ∈ Λ(A)) Σ_(i=0)^(d_λ - 1) i!/(ζ - λ)^(i+1) C_(λ,i).
R_ζ(A) is the resolvent of A at ζ.

Proof. For ζ ∉ Λ(A), (ζ I - A) is invertible because N(ζ I - A) = {0}. The spectral resolution (SR) of A for f_ζ(λ) = 1/(ζ - λ), which is defined for all λ ≠ ζ and satisfies f_ζ^(i)(λ) = i!/(ζ - λ)^(i+1), yields the desired identity.

[ 19 / 31 ]
Theorem. Let A ∈ C^(N×N), let γ be a closed path surrounding each λ ∈ Λ(A) once, and let f be analytic in int(γ) and extend continuously to it. Then
    f(A) = 1/(2πi) ∫_γ f(ζ)(ζ I - A)^(-1) dζ = 1/(2πi) ∫_γ f(ζ) R_ζ(A) dζ.   (D3)

Proof. Multiplying both sides of the expansion of R_ζ(A) by f(ζ)/(2πi) and integrating along γ, we get
    1/(2πi) ∫_γ f(ζ)(ζ I - A)^(-1) dζ = Σ_(λ ∈ Λ(A)) Σ_(i=0)^(d_λ - 1) ( i!/(2πi) ∫_γ f(ζ)/(ζ - λ)^(i+1) dζ ) C_(λ,i)
        (CIF) = Σ_(λ ∈ Λ(A)) Σ_(i=0)^(d_λ - 1) f^(i)(λ) C_(λ,i)
        (SR)  = f(A).
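Definition (D3) can be evaluated numerically by applying the trapezoidal rule to a circular contour enclosing Λ(A). A sketch for f = exp (the test matrix and the radius are chosen for illustration, not from the talk):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.3, 1.0],
              [0.0, -0.2]])        # Λ(A) = {0.3, -0.2}
n, r = 64, 2.0                     # quadrature points, contour radius

# Parametrize ζ = r e^(iθ); then (1/2πi) ∮ f(ζ)(ζI - A)^(-1) dζ
# becomes (1/n) Σ_k f(ζ_k) (ζ_k I - A)^(-1) ζ_k  on equispaced nodes.
F = np.zeros((2, 2), dtype=complex)
for k in range(n):
    zk = r * np.exp(2j * np.pi * k / n)
    F += np.exp(zk) * np.linalg.inv(zk * np.eye(2) - A) * zk
F /= n

print(np.allclose(F, expm(A)))     # True (imaginary part is roundoff)
```

For analytic integrands the trapezoidal rule on a circle converges exponentially in n, which is why 64 nodes already reach machine precision here.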

[ 20 / 31 ] Power series
Definition. Let f be analytic in an open set U ∋ 0 and let f(z) = Σ_(j=0)^∞ α_j z^j be the Taylor expansion of f at 0 with convergence radius τ ∈ (0, ∞]. Then f(A) is defined for every A with σ(A) < τ (σ(A) the spectral radius of A), and there holds
    f(A) = Σ_(j=0)^∞ α_j A^j = lim_(m→∞) Σ_(j=0)^m α_j A^j.   (D4)

Σ_(j=0)^∞ α_j A^j converges :⟺ for every ε > 0 there is an n_ε ∈ N_0 with ‖Σ_(j=n_ε)^∞ α_j A^j‖ < ε. Assume f has convergence radius τ (i.e., |f(z)| < ∞ for |z| < τ). Then
    ‖Σ_(j=n_ε)^∞ α_j A^j‖ ≤ Σ_(j=n_ε)^∞ |α_j| ‖A‖^j,
thus σ(A) ≤ ‖A‖ < τ is a sufficient criterion for convergence of Σ_(j=0)^∞ α_j A^j (Taylor series converge absolutely!).

[ 21 / 31 ] Power series
Example. Let f(z) = exp(z); f has convergence radius τ = ∞. Thus f(A) is defined for every A ∈ C^(N×N), and there holds
    f(A) = exp(A) = Σ_(j=0)^∞ A^j / j!.
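A sketch of (D4) for f = exp, accumulating the partial sums term by term (the example matrix is ours; note that truncated Taylor series are not a numerically robust way to compute exp(A) in general):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 0., 1.],
              [-1., 0.]])          # exp(tA) is a rotation by t radians
S = np.eye(2)                      # partial sum of Σ A^j / j!
term = np.eye(2)
for j in range(1, 30):
    term = term @ A / j            # A^j / j! built incrementally
    S = S + term

print(np.allclose(S, expm(A)))     # True
```

Here ‖A‖ = 1, so 30 terms are ample; for larger ‖A‖ the intermediate terms grow before they decay, and scaling-and-squaring (as in `expm`) is preferred.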

[ 22 / 31 ] Some facts
Because of
    f(z) = α ∈ C         ⇒  f(A) = α I,
    f(z) = z             ⇒  f(A) = A,
    f(z) = g(z) + h(z)   ⇒  f(A) = g(A) + h(A),
    f(z) = g(z) h(z)     ⇒  f(A) = g(A) h(A),
any rational identity in scalar functions of a complex variable is fulfilled by the corresponding matrix functions.

Examples: sin^2(A) + cos^2(A) = I,  exp(iA) = cos(A) + i sin(A),  (I - A)^(-1) = I + A + A^2 + ... (for σ(A) < 1).
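The first two identities can be checked with SciPy's matrix function routines (the test matrix below is arbitrary):

```python
import numpy as np
from scipy.linalg import sinm, cosm, expm

A = np.array([[1.0, 2.0],
              [0.5, 0.3]])

# sin^2(A) + cos^2(A) = I
print(np.allclose(sinm(A) @ sinm(A) + cosm(A) @ cosm(A), np.eye(2)))  # True

# exp(iA) = cos(A) + i sin(A)
print(np.allclose(expm(1j * A), cosm(A) + 1j * sinm(A)))              # True
```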

[ 23 / 31 ] Krylov subspaces
Problem. Given A ∈ C^(N×N), b ∈ C^N, and f defined on A, calculate f(A)b!

Definition. The m-th Krylov (sub)space of A and b is defined by
    K_m(A, b) = K_m := span{b, Ab, A^2 b, ..., A^(m-1) b}.

Lemma. There exists an index L = L(A, b) ≤ deg(ψ_A) such that
    K_1(A, b) ⊂ K_2(A, b) ⊂ ... ⊂ K_L(A, b) = K_(L+1)(A, b) = ...
Moreover, f(A)b ∈ K_L.

[ 24 / 31 ] The Arnoldi process
Task: generate an orthonormal basis of K_m, m ≤ L.

Algorithm.
    v_1 := b / ‖b‖
    for j = 2, 3, ..., m
        w_j := A v_(j-1)
        ṽ_j := w_j - Σ_(i=1)^(j-1) (w_j, v_i) v_i
        v_j := ṽ_j / ‖ṽ_j‖
    end

Output: a matrix V_m = [v_1, v_2, ..., v_m] ∈ C^(N×m) and an unreduced upper Hessenberg matrix H_m ∈ C^(m×m), whose entries are the orthogonalization coefficients h_(i,j-1) := (w_j, v_i) and h_(j,j-1) := ‖ṽ_j‖.
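The process above in a minimal NumPy sketch for real A (modified Gram-Schmidt; for complex A the inner products would need conjugation, and a breakdown check would handle m ≥ L):

```python
import numpy as np

def arnoldi(A, b, m):
    """m steps of the Arnoldi process.
    Returns V with orthonormal columns v_1, ..., v_(m+1) and the
    (m+1) x m Hessenberg matrix H with A V[:, :m] = V H."""
    N = b.shape[0]
    V = np.zeros((N, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # orthogonalize against v_1, ..., v_(j+1)
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]     # assumes no breakdown (m < L)
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
V, H = arnoldi(A, b, 8)
print(np.allclose(A @ V[:, :8], V @ H))   # Arnoldi decomposition holds
print(np.allclose(V.T @ V, np.eye(9)))    # columns are orthonormal
```

The square H_m of the slides is the leading m×m block H[:m, :m].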

[ 25 / 31 ] Arnoldi decomposition
Theorem. Let m < L. There exist orthonormal vectors v_1, v_2, ..., v_m, v_(m+1) ∈ C^N and an unreduced upper Hessenberg matrix H_m ∈ C^(m×m) such that
    A V_m = V_m H_m + h_(m+1,m) v_(m+1) e_m^T,
where V_m = [v_1, v_2, ..., v_m] and h_(m+1,m) ∈ C. For m = L there holds A V_m = V_m H_m.
[Diagram: A (N×N) times V_m (N×m) equals V_m (N×m) times H_m (m×m) plus the rank-one term h_(m+1,m) v_(m+1) e_m^T]

[ 26 / 31 ] Arnoldi approximation
Lemma. Let p(z) = α_m z^m + ... + α_1 z + α_0 ∈ P_m be a polynomial, 1 ≤ m < L. Then there holds
    p(A) b = ‖b‖ V_m p(H_m) e_1 + ‖b‖ α_m γ_m v_(m+1),   where γ_m = Π_(j=1)^m h_(j+1,j).
In particular, for p ∈ P_(m-1), there holds p(A) b = ‖b‖ V_m p(H_m) e_1.

Definition. We define the Arnoldi approximation from K_m(A, b) to f(A)b as
    f_m := ‖b‖ V_m f(H_m) e_1.
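A self-contained sketch of the Arnoldi approximation f_m = ‖b‖ V_m f(H_m) e_1 for f = exp (problem sizes and the random test matrix are chosen for illustration; the talk's own experiment uses MATLAB):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N, m = 200, 30
A = rng.standard_normal((N, N)) / np.sqrt(N)
b = rng.standard_normal(N)

# Arnoldi process (modified Gram-Schmidt), as on the previous slide
V = np.zeros((N, m + 1)); H = np.zeros((m + 1, m))
V[:, 0] = b / np.linalg.norm(b)
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w = w - H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

# f_m = ||b|| V_m exp(H_m) e_1 : one small (m x m) matrix exponential
fm = np.linalg.norm(b) * V[:, :m] @ expm(H[:m, :m])[:, 0]

exact = expm(A) @ b
err = np.linalg.norm(exact - fm) / np.linalg.norm(exact)
print(err < 1e-8)   # True: 30 Krylov steps suffice here
```

Only the small m×m exponential is computed; A enters solely through matrix-vector products, which is the point of the method.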

[ 27 / 31 ]
f(z) = exp(z), N = 500, A sparse with nz = 3106 (1.25 percent) and (0,1)-normally distributed entries, b full with (0,1)-normally distributed entries.
[Figure: error ‖f_m - f(A)b‖ versus m]
Execution speed: expm(A)*b ... s, f_m ... s.

[ 28 / 31 ] Krylov subspace methods?
Why use them?
- expm(A), logm(A), funm(A,@sin), etc. operate only on full matrices,
- Arnoldi methods involve only matrix-vector products Ab,
- speed and storage matter.
Why seek convergence estimates?
- Iterative method: what is the stopping condition? We only know m ≤ L, but L?
- In general, no residual is available!

[ 29 / 31 ] How good are Krylov approximations?
Remember: f_m = ‖b‖ V_m f(H_m) e_1.

The best approximation (in the 2-norm) f̃_m to f(A)b from K_m(A, b) is its orthogonal projection, i.e.
    f̃_m = V_m V_m^H f(A) b = ‖b‖ V_m [V_m^H f(A) V_m] e_1,
which is computationally unfeasible. Another representation of f̃_m is
    f̃_m = ‖b‖ V_m [V_m^H f(V_L H_L V_L^H) V_m] e_1
        = ‖b‖ V_m [V_m^H V_L f(H_L) V_L^H V_m] e_1
        = ‖b‖ V_m [I_m  O] f(H_L) [I_m  O]^T e_1
        = ‖b‖ [V_m  O] f(H_L) e_1.
Furthermore,
    f_m = ‖b‖ V_m f(V_m^H A V_m) e_1 = ‖b‖ V_m f([I_m  O] H_L [I_m  O]^T) e_1.

[ 30 / 31 ] How good are Krylov approximations?
Lemma. Let A be normal and Ω a compact set with Λ(A) ∪ Λ(H_m) ⊆ Ω. Then
    ‖f(A)b - f_m‖_2 ≤ 2 ‖b‖ min_(p ∈ P_(m-1)) max_(λ ∈ Ω) |f(λ) - p(λ)|.
To be continued...
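The lemma can be illustrated for a symmetric (hence normal) A with spectrum in Ω = [-1, 1] and f = exp: the min-max term is bounded above by the error of any particular degree m-1 polynomial, e.g. a Chebyshev interpolant. A sketch (matrix and sizes are our choice; Lanczos would exploit symmetry, plain Arnoldi is kept for clarity):

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb
from scipy.linalg import expm

rng = np.random.default_rng(2)
N, m = 300, 8
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = (Q * np.linspace(-1.0, 1.0, N)) @ Q.T    # symmetric, Λ(A) ⊂ [-1, 1]
b = rng.standard_normal(N); b /= np.linalg.norm(b)

# Arnoldi and f_m = ||b|| V_m exp(H_m) e_1 (here ||b|| = 1)
V = np.zeros((N, m + 1)); H = np.zeros((m + 1, m))
V[:, 0] = b
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w = w - H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]
fm = V[:, :m] @ expm(H[:m, :m])[:, 0]

err = np.linalg.norm(expm(A) @ b - fm)

# 2 ||b|| min max |f - p|  <=  2 max |exp - (Chebyshev interpolant of degree m-1)|
p = Cheb.Chebyshev(Cheb.chebinterpolate(np.exp, m - 1))
xs = np.linspace(-1.0, 1.0, 2001)
bound = 2.0 * np.max(np.abs(np.exp(xs) - p(xs)))

print(err <= bound)   # True
```

The bound is descriptive rather than a stopping criterion, but it explains the superlinear error decay observed for f = exp: the best polynomial approximation error on a compact set shrinks faster than geometrically in m.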


The Eigenvalue Problem: Perturbation Theory Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

Rational Krylov Decompositions: Theory and Applications. Berljafa, Mario. MIMS EPrint:

Rational Krylov Decompositions: Theory and Applications. Berljafa, Mario. MIMS EPrint: Rational Krylov Decompositions: Theory and Applications Berljafa, Mario 2017 MIMS EPrint: 2017.6 Manchester Institute for Mathematical Sciences School of Mathematics The University of Manchester Reports

More information

Key words. matrix approximation problems, Chebyshev polynomials, complex approximation theory, Krylov subspace methods, Arnoldi s method

Key words. matrix approximation problems, Chebyshev polynomials, complex approximation theory, Krylov subspace methods, Arnoldi s method ON CHEBYSHEV POLYNOMIALS OF MATRICES VANCE FABER, JÖRG LIESEN, AND PETR TICHÝ Abstract. The mth Chebyshev polynomial of a square matrix A is the monic polynomial that minimizes the matrix 2-norm of p(a)

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

More information

Basic Calculus Review

Basic Calculus Review Basic Calculus Review Lorenzo Rosasco ISML Mod. 2 - Machine Learning Vector Spaces Functionals and Operators (Matrices) Vector Space A vector space is a set V with binary operations +: V V V and : R V

More information

Eigenvalues, Eigenvectors, and Diagonalization

Eigenvalues, Eigenvectors, and Diagonalization Math 240 TA: Shuyi Weng Winter 207 February 23, 207 Eigenvalues, Eigenvectors, and Diagonalization The concepts of eigenvalues, eigenvectors, and diagonalization are best studied with examples. We will

More information

On the solution of large Sylvester-observer equations

On the solution of large Sylvester-observer equations NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 200; 8: 6 [Version: 2000/03/22 v.0] On the solution of large Sylvester-observer equations D. Calvetti, B. Lewis 2, and L. Reichel

More information

Symmetric and anti symmetric matrices

Symmetric and anti symmetric matrices Symmetric and anti symmetric matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if. A = A Because equal matrices have equal

More information

11. Spectral theory For operators on finite dimensional vectors spaces, we can often find a basis of eigenvectors (which we use to diagonalize the

11. Spectral theory For operators on finite dimensional vectors spaces, we can often find a basis of eigenvectors (which we use to diagonalize the 11. Spectral theory For operators on finite dimensional vectors spaces, we can often find a basis of eigenvectors (which we use to diagonalize the matrix). If the operator is symmetric, this is always

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

(VI.C) Rational Canonical Form

(VI.C) Rational Canonical Form (VI.C) Rational Canonical Form Let s agree to call a transformation T : F n F n semisimple if there is a basis B = { v,..., v n } such that T v = λ v, T v 2 = λ 2 v 2,..., T v n = λ n for some scalars

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Analysis Preliminary Exam Workshop: Hilbert Spaces

Analysis Preliminary Exam Workshop: Hilbert Spaces Analysis Preliminary Exam Workshop: Hilbert Spaces 1. Hilbert spaces A Hilbert space H is a complete real or complex inner product space. Consider complex Hilbert spaces for definiteness. If (, ) : H H

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

Algorithms that use the Arnoldi Basis

Algorithms that use the Arnoldi Basis AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to

More information

Math Ordinary Differential Equations

Math Ordinary Differential Equations Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x

More information

Rational Krylov methods for linear and nonlinear eigenvalue problems

Rational Krylov methods for linear and nonlinear eigenvalue problems Rational Krylov methods for linear and nonlinear eigenvalue problems Mele Giampaolo mele@mail.dm.unipi.it University of Pisa 7 March 2014 Outline Arnoldi (and its variants) for linear eigenproblems Rational

More information

LARGE SPARSE EIGENVALUE PROBLEMS. General Tools for Solving Large Eigen-Problems

LARGE SPARSE EIGENVALUE PROBLEMS. General Tools for Solving Large Eigen-Problems LARGE SPARSE EIGENVALUE PROBLEMS Projection methods The subspace iteration Krylov subspace methods: Arnoldi and Lanczos Golub-Kahan-Lanczos bidiagonalization General Tools for Solving Large Eigen-Problems

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Mathematics TUHH Heinrich Voss Preliminaries Eigenvalue problems 2012 1 / 14

More information

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics Research Matters February 25, 2009 The Nonlinear Eigenvalue Problem Nick Higham Part III Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester

More information

LARGE SPARSE EIGENVALUE PROBLEMS

LARGE SPARSE EIGENVALUE PROBLEMS LARGE SPARSE EIGENVALUE PROBLEMS Projection methods The subspace iteration Krylov subspace methods: Arnoldi and Lanczos Golub-Kahan-Lanczos bidiagonalization 14-1 General Tools for Solving Large Eigen-Problems

More information

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Antti Koskela KTH Royal Institute of Technology, Lindstedtvägen 25, 10044 Stockholm,

More information

Math 504 (Fall 2011) 1. (*) Consider the matrices

Math 504 (Fall 2011) 1. (*) Consider the matrices Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture

More information

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial

More information

Linear Algebra - Part II

Linear Algebra - Part II Linear Algebra - Part II Projection, Eigendecomposition, SVD (Adapted from Sargur Srihari s slides) Brief Review from Part 1 Symmetric Matrix: A = A T Orthogonal Matrix: A T A = AA T = I and A 1 = A T

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

1. Elements of linear algebra

1. Elements of linear algebra Elements of linear algebra Contents Solving systems of linear equations 2 Diagonal form of a square matrix 3 The Jordan normal form of a square matrix 4 The Gram-Schmidt orthogonalization process 5 The

More information

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms 4. Monotonicity of the Lanczos Method Michael Eiermann Institut für Numerische Mathematik und Optimierung Technische

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

Matrix Vector Products

Matrix Vector Products We covered these notes in the tutorial sessions I strongly recommend that you further read the presented materials in classical books on linear algebra Please make sure that you understand the proofs and

More information

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix

More information

November 18, 2013 ANALYTIC FUNCTIONAL CALCULUS

November 18, 2013 ANALYTIC FUNCTIONAL CALCULUS November 8, 203 ANALYTIC FUNCTIONAL CALCULUS RODICA D. COSTIN Contents. The spectral projection theorem. Functional calculus 2.. The spectral projection theorem for self-adjoint matrices 2.2. The spectral

More information

Jordan Normal Form and Singular Decomposition

Jordan Normal Form and Singular Decomposition University of Debrecen Diagonalization and eigenvalues Diagonalization We have seen that if A is an n n square matrix, then A is diagonalizable if and only if for all λ eigenvalues of A we have dim(u λ

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - 4 (Alberto Bressan, Spring 27) Review of complex numbers In this chapter we shall need to work with complex numbers z C These can be written in the form z = a+ib,

More information

Math 240 Calculus III

Math 240 Calculus III Generalized Calculus III Summer 2015, Session II Thursday, July 23, 2015 Agenda 1. 2. 3. 4. Motivation Defective matrices cannot be diagonalized because they do not possess enough eigenvectors to make

More information

The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English.

The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English. Chapter 4 EIGENVALUE PROBLEM The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English. 4.1 Mathematics 4.2 Reduction to Upper Hessenberg

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

Numerical Methods for Solving Large Scale Eigenvalue Problems

Numerical Methods for Solving Large Scale Eigenvalue Problems Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Numerical Solution of Linear Eigenvalue Problems

Numerical Solution of Linear Eigenvalue Problems Numerical Solution of Linear Eigenvalue Problems Jessica Bosch and Chen Greif Abstract We review numerical methods for computing eigenvalues of matrices We start by considering the computation of the dominant

More information

ANY FINITE CONVERGENCE CURVE IS POSSIBLE IN THE INITIAL ITERATIONS OF RESTARTED FOM

ANY FINITE CONVERGENCE CURVE IS POSSIBLE IN THE INITIAL ITERATIONS OF RESTARTED FOM Electronic Transactions on Numerical Analysis. Volume 45, pp. 133 145, 2016. Copyright c 2016,. ISSN 1068 9613. ETNA ANY FINITE CONVERGENCE CURVE IS POSSIBLE IN THE INITIAL ITERATIONS OF RESTARTED FOM

More information

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Krylov Space Methods Nonstationary sounds good Radu Trîmbiţaş Babeş-Bolyai University Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Introduction These methods are used both to solve

More information

Lec 2: Mathematical Economics

Lec 2: Mathematical Economics Lec 2: Mathematical Economics to Spectral Theory Sugata Bag Delhi School of Economics 24th August 2012 [SB] (Delhi School of Economics) Introductory Math Econ 24th August 2012 1 / 17 Definition: Eigen

More information

Section 6.4. The Gram Schmidt Process

Section 6.4. The Gram Schmidt Process Section 6.4 The Gram Schmidt Process Motivation The procedures in 6 start with an orthogonal basis {u, u,..., u m}. Find the B-coordinates of a vector x using dot products: x = m i= x u i u i u i u i Find

More information

Numerical Programming I (for CSE)

Numerical Programming I (for CSE) Technische Universität München WT 1/13 Fakultät für Mathematik Prof. Dr. M. Mehl B. Gatzhammer January 1, 13 Numerical Programming I (for CSE) Tutorial 1: Iterative Methods 1) Relaxation Methods a) Let

More information

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue

More information

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12 24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2

More information

Characterization of half-radial matrices

Characterization of half-radial matrices Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the

More information

Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007

Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007 Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007 You have 1 hour and 20 minutes. No notes, books, or other references. You are permitted to use Maple during this exam, but you must start with a blank

More information

MATH 205 HOMEWORK #3 OFFICIAL SOLUTION. Problem 1: Find all eigenvalues and eigenvectors of the following linear transformations. (a) F = R, V = R 3,

MATH 205 HOMEWORK #3 OFFICIAL SOLUTION. Problem 1: Find all eigenvalues and eigenvectors of the following linear transformations. (a) F = R, V = R 3, MATH 205 HOMEWORK #3 OFFICIAL SOLUTION Problem 1: Find all eigenvalues and eigenvectors of the following linear transformations. a F = R, V = R 3, b F = R or C, V = F 2, T = T = 9 4 4 8 3 4 16 8 7 0 1

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

Model reduction of large-scale dynamical systems

Model reduction of large-scale dynamical systems Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information