Multiparameter eigenvalue problem as a structured eigenproblem

1 Multiparameter eigenvalue problem as a structured eigenproblem
Bor Plestenjak, Department of Mathematics, University of Ljubljana
Joint work with M. E. Hochstenbach

2 Overview
- Introduction
- Backward error and condition numbers
- Pseudospectrum
- Distance to the nearest singular problem

3 Introduction
Two-parameter eigenvalue problem:
A_1 x = λ B_1 x + µ C_1 x
A_2 y = λ B_2 y + µ C_2 y,   (W)
where A_i, B_i, C_i are n_i × n_i matrices, λ, µ ∈ C, x ∈ C^{n_1}, and y ∈ C^{n_2}.
Eigenvalue: a pair (λ, µ) that satisfies (W) for nonzero x and y.
Eigenvector: the tensor product x ⊗ y.

4 Δu + νu = 0 on Ω, u|_{∂Ω} = 0
Rectangular membrane: Ω = [0, a] × [0, b] ⟹ two Sturm-Liouville equations (ν = λ + µ):
x'' + λx = 0, x(0) = x(a) = 0,
y'' + µy = 0, y(0) = y(b) = 0

5 Circular membrane: Ω = {x² + y² ≤ a²}, polar coordinates ⟹ a triangular situation:
Φ'' + λΦ = 0, Φ(0) = Φ(2π) = 0,
r⁻¹(rR')' + (ν − λr⁻²)R = 0, R(0) < ∞, R(a) = 0

6 Elliptical membrane: Ω = {(x/a)² + (y/b)² ≤ 1}, elliptic coordinates (c is the focus) ⟹ modified Mathieu's and Mathieu's DE (4λ = c²ν):
v_1'' + (2λ cosh(2y_1) − µ)v_1 = 0, v_1(0) = v_1(d) = 0,
v_2'' − (2λ cos(2y_2) − µ)v_2 = 0, v_2(0) = v_2(π/2) = 0

7 Tensor product approach
A_1 x = λ B_1 x + µ C_1 x
A_2 y = λ B_2 y + µ C_2 y   (W)
On C^{n_1} ⊗ C^{n_2}, of dimension n_1 n_2, we define
Δ_0 = B_1 ⊗ C_2 − C_1 ⊗ B_2
Δ_1 = A_1 ⊗ C_2 − C_1 ⊗ A_2
Δ_2 = B_1 ⊗ A_2 − A_1 ⊗ B_2
The two-parameter problem (W) is equivalent to a coupled GEP
Δ_1 z = λ Δ_0 z,
Δ_2 z = µ Δ_0 z,   (Δ)
where z = x ⊗ y.
(W) is nonsingular ⟺ Δ_0 is invertible; then Δ_0^{-1}Δ_1 and Δ_0^{-1}Δ_2 commute.
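This reduction is easy to try out numerically. The following is a minimal sketch (my own illustration on random data, not the talk's code): it forms Δ_0, Δ_1, Δ_2 with Kronecker products and recovers the pairs (λ, µ) from the coupled GEP, using the fact that Δ_0^{-1}Δ_1 and Δ_0^{-1}Δ_2 commute and therefore (generically) share eigenvectors.

```python
# Minimal sketch: solve a random nonsingular two-parameter problem via the
# operator determinants and the coupled generalized eigenproblem.
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 4, 5
A1, B1, C1 = (rng.standard_normal((n1, n1)) for _ in range(3))
A2, B2, C2 = (rng.standard_normal((n2, n2)) for _ in range(3))

# Operator determinants on the n1*n2-dimensional tensor product space
D0 = np.kron(B1, C2) - np.kron(C1, B2)
D1 = np.kron(A1, C2) - np.kron(C1, A2)
D2 = np.kron(B1, A2) - np.kron(A1, B2)

# D0^{-1} D1 and D0^{-1} D2 commute and share eigenvectors (simple eigenvalues assumed)
G1 = np.linalg.solve(D0, D1)
G2 = np.linalg.solve(D0, D2)
lam, Z = np.linalg.eig(G1)                                   # lambda's and eigenvectors z = x (x) y
mu = np.array([(z.conj() @ G2 @ z) / (z.conj() @ z) for z in Z.T])  # matching mu's via Rayleigh quotients

# Residual check of the first eigenpair in the coupled GEP form
z = Z[:, 0]
print(np.linalg.norm(D1 @ z - lam[0] * (D0 @ z)),
      np.linalg.norm(D2 @ z - mu[0] * (D0 @ z)))
```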

8 Right definite problem
(W)   A_1 x = λ B_1 x + µ C_1 x,   A_2 y = λ B_2 y + µ C_2 y
Δ_0 = B_1 ⊗ C_2 − C_1 ⊗ B_2,   Δ_1 = A_1 ⊗ C_2 − C_1 ⊗ A_2,   Δ_2 = B_1 ⊗ A_2 − A_1 ⊗ B_2
(Δ)   Δ_1 z = λ Δ_0 z,   Δ_2 z = µ Δ_0 z
(W) is right definite when A_i, B_i, C_i are Hermitian and Δ_0 is positive definite.
Atkinson (1972): Δ_0 is positive definite ⟺
det [ x*B_1x  x*C_1x ; y*B_2y  y*C_2y ] = (x ⊗ y)* Δ_0 (x ⊗ y) > 0 for all x, y ≠ 0.
If (W) is right definite then the eigenvalues are real, there exist n_1 n_2 linearly independent eigenvectors, and eigenvectors of distinct eigenvalues are Δ_0-orthogonal, i.e. (x_1 ⊗ y_1)* Δ_0 (x_2 ⊗ y_2) = 0.
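In practice right definiteness can be tested directly on Δ_0. A small sketch (the helper name is my own, not from the talk):

```python
# Sketch: for Hermitian A_i, B_i, C_i, right definiteness of the two-parameter
# problem amounts to Delta_0 = B1 (x) C2 - C1 (x) B2 being positive definite.
import numpy as np

def is_right_definite(B1, C1, B2, C2):
    D0 = np.kron(B1, C2) - np.kron(C1, B2)
    return np.linalg.eigvalsh(D0).min() > 0   # D0 is Hermitian for Hermitian data
```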

9 Some available numerical methods
- Blum, Curtis, Geltner (1978) and Browne, Sleeman (1982): gradient method
- Bohte (1980): Newton's method for eigenvalues
- Slivnik, Tomšič (1986): solving (Δ) with standard numerical methods for RD problems
- Ji, Jiang, Lee (1992): generalized Rayleigh quotient iteration
- Shimasaki (1995): continuation method for a special class of RD problems
- P. (1999): continuation method for RD problems, tensor Rayleigh quotient iteration
- P. (2000): continuation method for weakly elliptic problems
- Hochstenbach, P. (2002): Jacobi-Davidson type method for RD problems
- Hochstenbach, Košir, P. (2003): two-sided Jacobi-Davidson type method
- Hochstenbach, P. (submitted): harmonic Jacobi-Davidson type method

10 Multiparameter eigenvalue problems
V_10 x_1 = λ_1 V_11 x_1 + λ_2 V_12 x_1 + ⋯ + λ_k V_1k x_1
V_20 x_2 = λ_1 V_21 x_2 + λ_2 V_22 x_2 + ⋯ + λ_k V_2k x_2
⋮
V_k0 x_k = λ_1 V_k1 x_k + λ_2 V_k2 x_k + ⋯ + λ_k V_kk x_k,
where V_ij is an n_i × n_i matrix. A k-tuple λ = (λ_1, ..., λ_k) is an eigenvalue and x = x_1 ⊗ ⋯ ⊗ x_k is the corresponding eigenvector.

11 In the tensor product space of dimension N = n_1 ⋯ n_k we define the operator determinants Δ_0, Δ_1, ..., Δ_k: Δ_0 is the k × k operator determinant with (i, j) entry V_ij (i, j = 1, ..., k), and Δ_i is obtained from Δ_0 by replacing its i-th column with V_10, ..., V_k0, for i = 1, ..., k. The determinants are expanded formally, with V_ij acting on the tensor product space by V_ij (x_1 ⊗ ⋯ ⊗ x_i ⊗ ⋯ ⊗ x_k) = x_1 ⊗ ⋯ ⊗ V_ij x_i ⊗ ⋯ ⊗ x_k and by linearity.
A nonsingular MEP (i.e., Δ_0 nonsingular) is equivalent to a joint GEP
Δ_i x = λ_i Δ_0 x,   i = 1, ..., k,
for decomposable tensors x = x_1 ⊗ ⋯ ⊗ x_k, where the matrices Δ_0^{-1} Δ_i commute.

12 A MEP is right definite when all matrices V_ij are Hermitian and Δ_0 is positive definite.
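To make the formal expansion concrete, here is a small sketch (my own helpers with hypothetical names `perm_sign` and `operator_determinants`) that builds Δ_0, ..., Δ_k for general k by expanding the k × k operator determinant over permutations, with ⊗ realized by np.kron; for k = 2 it reproduces Δ_0 = B_1 ⊗ C_2 − C_1 ⊗ B_2 and its companions.

```python
import itertools
from functools import reduce
import numpy as np

def perm_sign(p):
    """Sign of the permutation p (tuple of 0-based indices), via inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def operator_determinants(V):
    """V[i][j] = V_{i+1, j} for j = 0, ..., k; returns [Delta_0, Delta_1, ..., Delta_k]."""
    k = len(V)

    def det_of_columns(cols):            # cols[r] = index j of the block placed in column r
        D = 0
        for p in itertools.permutations(range(k)):
            term = reduce(np.kron, (V[i][cols[p[i]]] for i in range(k)))
            D = D + perm_sign(p) * term
        return D

    deltas = [det_of_columns(list(range(1, k + 1)))]   # Delta_0: columns V_{.,1}, ..., V_{.,k}
    for i in range(1, k + 1):
        cols = list(range(1, k + 1))
        cols[i - 1] = 0                                # replace the i-th column by V_{.,0}
        deltas.append(det_of_columns(cols))
    return deltas
```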

13 Overview (next: Backward error and condition numbers)

14 Backward error
W_i(λ) := V_i0 − Σ_{j=1}^k λ_j V_ij   for i = 1, ..., k.
Let (x̃, λ̃) be an approximate eigenpair of W and let x̃ = x̃_1 ⊗ ⋯ ⊗ x̃_k be normalized, i.e., ‖x̃_i‖_2 = 1 for i = 1, ..., k. We define the normwise backward error of (x̃, λ̃) by
η(x̃, λ̃) := min{ ε : (W_i(λ̃) + ΔW_i(λ̃)) x̃_i = 0, ‖ΔV_ij‖ ≤ ε E_ij for all i, j },
where ΔW_i(λ) = ΔV_i0 − Σ_{j=1}^k λ_j ΔV_ij for i = 1, ..., k.

15 Can show:
η(x̃, λ̃) = max_{i=1,...,k} ‖r_i‖ / θ̃_i,
where r_i := W_i(λ̃) x̃_i and θ̃_i := E_i0 + Σ_{j=1}^k |λ̃_j| E_ij for i = 1, ..., k.
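A direct computation of this quantity is straightforward. Below is a minimal sketch (my own helper name; it assumes the common choice E_ij = ‖V_ij‖_2, which is one possible normalization rather than the only one):

```python
# Sketch: eta(x~, lambda~) = max_i ||W_i(lambda~) x~_i|| / theta_i, with E_ij = ||V_ij||_2.
import numpy as np

def backward_error(V, lam, x):
    """V[i][j] = V_{ij} (j = 0..k), lam = (lambda_1, ..., lambda_k), x[i] = unit vectors x~_i."""
    k = len(V)
    eta = 0.0
    for i in range(k):
        Wi = V[i][0] - sum(lam[j] * V[i][j + 1] for j in range(k))
        theta = np.linalg.norm(V[i][0], 2) + sum(abs(lam[j]) * np.linalg.norm(V[i][j + 1], 2)
                                                 for j in range(k))
        eta = max(eta, np.linalg.norm(Wi @ x[i]) / theta)
    return eta
```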

16 MEP vs GEP and PEP
GEP (V. Frayssé, V. Toumazou (1998); Higham, Higham (1998)): Ax = λBx
η(x̃, λ̃) = min{ ε : (A + ΔA)x̃ = λ̃(B + ΔB)x̃, ‖ΔA‖ ≤ εE, ‖ΔB‖ ≤ εF }
        = ‖Ax̃ − λ̃Bx̃‖ / (E + |λ̃| F)
PEP (Tisseur (2000)): P(λ)x = (λ^m A_m + λ^{m−1} A_{m−1} + ⋯ + A_0)x = 0
η(x̃, λ̃) = min{ ε : (P(λ̃) + ΔP(λ̃))x̃ = 0, ‖ΔA_i‖ ≤ ε E_i, i = 0, ..., m }
        = ‖P(λ̃)x̃‖ / Σ_{i=0}^m |λ̃|^i E_i
MEP (Hochstenbach, P. (2003)): W_i(λ)x_i = (V_i0 − Σ_{j=1}^k λ_j V_ij)x_i = 0 for i = 1, ..., k
η(x̃, λ̃) = min{ ε : (W_i(λ̃) + ΔW_i(λ̃)) x̃_i = 0, ‖ΔV_ij‖ ≤ ε E_ij for all i, j }
        = max_{i=1,...,k} ‖W_i(λ̃)x̃_i‖ / (E_i0 + Σ_{j=1}^k |λ̃_j| E_ij)

17 Backward error for a Hermitian MEP
If W is Hermitian then we consider perturbations where all ΔV_ij are Hermitian:
η^H(x̃, λ̃) := min{ ε : (W_i(λ̃) + ΔW_i(λ̃)) x̃_i = 0, ΔV_ij* = ΔV_ij, ‖ΔV_ij‖ ≤ ε E_ij, i = 1, ..., k; j = 0, ..., k }.
Clearly, η^H(x̃, λ̃) ≥ η(x̃, λ̃).
Can show: If W is Hermitian and λ̃ is real, then η^H(x̃, λ̃) = η(x̃, λ̃).

18 Backward error for an eigenvalue approximation
η(x̃, λ̃) = max_{i=1,...,k} ‖r_i‖ / θ̃_i,   r_i := W_i(λ̃) x̃_i,   θ̃_i := E_i0 + Σ_{j=1}^k |λ̃_j| E_ij for i = 1, ..., k.
If we are interested only in the approximate eigenvalue λ̃, then a more appropriate measure of the backward error may be
η(λ̃) := min{ η(x̃, λ̃) : x̃ normalized }.
Can show: η(λ̃) = max_{i=1,...,k} σ_min(W_i(λ̃)) / θ̃_i.
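The eigenvalue-only backward error needs one smallest singular value per equation. A short sketch (same E_ij = ‖V_ij‖_2 assumption and hypothetical helper name as above):

```python
# Sketch: eta(lambda~) = max_i sigma_min(W_i(lambda~)) / theta_i.
import numpy as np

def eigenvalue_backward_error(V, lam):
    k = len(V)
    vals = []
    for i in range(k):
        Wi = V[i][0] - sum(lam[j] * V[i][j + 1] for j in range(k))
        theta = np.linalg.norm(V[i][0], 2) + sum(abs(lam[j]) * np.linalg.norm(V[i][j + 1], 2)
                                                 for j in range(k))
        vals.append(np.linalg.svd(Wi, compute_uv=False)[-1] / theta)  # smallest singular value
    return max(vals)
```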

19 Alternative definition
η(x̃, λ̃) = min{ ε : (W_i(λ̃) + ΔW_i(λ̃)) x̃_i = 0, ‖ΔV_ij‖ ≤ ε E_ij for all i, j }
        = max_{i=1,...,k} ‖r_i‖ / (E_i0 + Σ_{j=1}^k |λ̃_j| E_ij)
Liu and Wang (2005): backward error in the Frobenius norm
η_F(x̃, λ̃) := min{ ( Σ_{i=1}^k Σ_{j=0}^k ‖ΔV_ij‖_F² / E_ij² )^{1/2} : (W_i(λ̃) + ΔW_i(λ̃)) x̃_i = 0 }
         = ( Σ_{i=1}^k ‖r_i‖² / (E_i0² + Σ_{j=1}^k |λ̃_j|² E_ij²) )^{1/2}

20 Can show:
η_F(x̃, λ̃)/k ≤ η(x̃, λ̃) ≤ η_F(x̃, λ̃)

21 Eigenvector backward error
If we assume that E_i0 = E_i1 = ⋯ = E_ik for i = 1, ..., k, then
η_F(x̃) := inf_{λ̃} η_F(x̃, λ̃)
        = inf_{λ̃} ‖ E^{-1} G(x̃) (1, −λ̃_1, ..., −λ̃_k)^T ‖ / ‖ (1, −λ̃_1, ..., −λ̃_k) ‖
        = σ_min( E^{-1} G(x̃) ),
where E := diag(E_10 I_{n_1}, ..., E_k0 I_{n_k}) and G(x̃) is the matrix with block rows [ V_i0 x̃_i   V_i1 x̃_i   ⋯   V_ik x̃_i ], i = 1, ..., k.
η_F(x̃)/k ≤ η(x̃) ≤ η_F(x̃)

22 The matrix B_0
Let x = x_1 ⊗ ⋯ ⊗ x_k and y = y_1 ⊗ ⋯ ⊗ y_k be the right and left eigenvectors corresponding to λ, respectively. We define
B_0 = [ y_1*V_11 x_1   y_1*V_12 x_1   ⋯   y_1*V_1k x_1
        y_2*V_21 x_2   y_2*V_22 x_2   ⋯   y_2*V_2k x_2
          ⋮                                 ⋮
        y_k*V_k1 x_k   y_k*V_k2 x_k   ⋯   y_k*V_kk x_k ].
Notice that det(B_0) = y* Δ_0 x.
Košir (1994): λ is algebraically simple ⟹ B_0 is nonsingular.

23 Eigenvalue condition number
Let λ be a nonzero algebraically simple eigenvalue of a nonsingular MEP W with normalized right eigenvector x and left eigenvector y. A normwise condition number of λ is
κ(λ) := lim_{ε→0} sup{ ‖Δλ‖ / (ε ‖λ‖) : (W_i(λ + Δλ) + ΔW_i(λ + Δλ))(x_i + Δx_i) = 0, ‖ΔV_ij‖ ≤ ε E_ij, i = 1, ..., k; j = 0, ..., k }.

24 Can show:
κ(λ) = ‖λ‖^{-1} ‖B_0^{-1}‖_θ,
where θ_i := E_i0 + Σ_{j=1}^k |λ_j| E_ij for i = 1, ..., k, and
‖B_0^{-1}‖_θ := max{ ‖B_0^{-1} z‖_2 : z ∈ C^k, |z_i| = θ_i for i = 1, ..., k }.

25 In particular, ‖B_0^{-1} θ‖_2 ≤ ‖B_0^{-1}‖_θ ≤ ‖B_0^{-1}‖_2 ‖θ‖_2, where θ = (θ_1, ..., θ_k).
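Evaluating κ(λ) requires B_0 and the θ-weighted norm of B_0^{-1}; the maximum over the torus |z_i| = θ_i has no simple closed form, so the sketch below (my own helper name, again with the E_ij = ‖V_ij‖_2 assumption) estimates it from below by random phase sampling; one could equally well return the upper bound ‖B_0^{-1}‖_2 ‖θ‖_2.

```python
# Sketch: kappa(lambda) = ||lambda||^{-1} * max{ ||B0^{-1} z||_2 : |z_i| = theta_i },
# with the torus maximum estimated by sampling random phases (a lower estimate).
import numpy as np

def condition_number_estimate(V, lam, xs, ys, samples=2000, seed=0):
    """xs[i], ys[i]: unit right/left eigenvector parts x_i, y_i; lam: the eigenvalue k-tuple."""
    k = len(V)
    B0 = np.array([[ys[i].conj() @ V[i][j + 1] @ xs[i] for j in range(k)] for i in range(k)])
    theta = np.array([np.linalg.norm(V[i][0], 2)
                      + sum(abs(lam[j]) * np.linalg.norm(V[i][j + 1], 2) for j in range(k))
                      for i in range(k)])
    B0inv = np.linalg.inv(B0)
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(samples):
        z = theta * np.exp(2j * np.pi * rng.random(k))   # a point on the torus |z_i| = theta_i
        best = max(best, np.linalg.norm(B0inv @ z))
    return best / np.linalg.norm(lam)
```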

26 Hermitian MEP
κ^H(λ) := lim_{ε→0} sup{ ‖Δλ‖ / (ε ‖λ‖) : (W_i(λ + Δλ) + ΔW_i(λ + Δλ))(x_i + Δx_i) = 0, ‖ΔV_ij‖ ≤ ε E_ij, ΔV_ij* = ΔV_ij, i = 1, ..., k; j = 0, ..., k }
Can show: If W is Hermitian and λ is real, then κ^H(λ) = κ(λ) = ‖λ‖^{-1} ‖B_0^{-1}‖_θ.

27 If the MEP is Hermitian, then B_0 is real and
‖B_0^{-1}‖_θ = max{ ‖B_0^{-1} z‖_2 : z ∈ R^k, |z_i| = θ_i for i = 1, ..., k }.

28 MEP vs GEP and PEP
GEP (V. Frayssé, V. Toumazou (1998); Higham, Higham (1998)): Ax = λBx
κ(λ) = lim_{ε→0} sup{ |Δλ| / (ε|λ|) : (A + ΔA)(x + Δx) = (λ + Δλ)(B + ΔB)(x + Δx), ‖ΔA‖ ≤ εE, ‖ΔB‖ ≤ εF }
     = (E + |λ| F) / (|λ| |y*Bx|)
PEP (Tisseur (2000)): P(λ)x = (λ^m A_m + λ^{m−1} A_{m−1} + ⋯ + A_0)x = 0
κ(λ) = lim_{ε→0} sup{ |Δλ| / (ε|λ|) : (P(λ + Δλ) + ΔP(λ + Δλ))(x + Δx) = 0, ‖ΔA_i‖ ≤ ε E_i }
     = (Σ_{i=0}^m |λ|^i E_i) / (|λ| |y*P'(λ)x|)
MEP (Hochstenbach, P. (2003)): W_i(λ)x_i = (V_i0 − Σ_{j=1}^k λ_j V_ij)x_i = 0 for i = 1, ..., k
κ(λ) = lim_{ε→0} sup{ ‖Δλ‖ / (ε‖λ‖) : (W_i(λ + Δλ) + ΔW_i(λ + Δλ))(x_i + Δx_i) = 0, ‖ΔV_ij‖ ≤ ε E_ij }
     = ‖λ‖_2^{-1} ‖B_0^{-1}‖_θ ≤ ‖θ‖_2 ‖B_0^{-1}‖_2 / ‖λ‖_2,  with B_0 = [y_i* V_ij x_i]_{i,j=1}^k

29 Overview (next: Pseudospectrum)

30 Pseudospectrum
We define the ε-pseudospectrum of W by
Λ_ε(W) = { λ ∈ C^k : (W_i(λ) + ΔW_i(λ)) x_i = 0 for some x_i ≠ 0, ‖ΔV_ij‖ ≤ ε E_ij, i = 1, ..., k; j = 0, ..., k }.

31 Can show:
Λ_ε(W) = { λ ∈ C^k : η(λ) ≤ ε }
       = { λ ∈ C^k : σ_min(W_i(λ)) ≤ ε θ_i for i = 1, ..., k }
       = { λ ∈ C^k : ‖W_i(λ)^{-1}‖ ≥ 1/(ε θ_i) for i = 1, ..., k }
       = { λ ∈ C^k : there exist u_i, ‖u_i‖ = 1, such that ‖W_i(λ) u_i‖ ≤ ε θ_i for i = 1, ..., k },
where θ_i = E_i0 + Σ_{j=1}^k |λ_j| E_ij for i = 1, ..., k.

32 Λ_ε(W) = Λ_ε(W_1) ∩ Λ_ε(W_2) ∩ ⋯ ∩ Λ_ε(W_k), where
Λ_ε(W_i) = { λ ∈ C^k : (W_i(λ) + ΔW_i(λ)) x_i = 0 for some x_i ≠ 0, ‖ΔV_ij‖ ≤ ε E_ij, j = 0, ..., k }.
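For k = 2 the characterization via σ_min(W_i(λ))/θ_i suggests a simple grid evaluation. The sketch below (my own illustration, a real (λ_1, λ_2) grid, E_ij = ‖V_ij‖_2) computes η(λ) on a rectangle, so a contour plot of the result at level ε outlines Λ_ε(W).

```python
# Sketch: evaluate eta(lambda) = max_i sigma_min(W_i(lambda)) / theta_i on a grid;
# the epsilon-pseudospectrum is the sublevel set {eta <= eps}.
import numpy as np

def eta_grid(V, l1_vals, l2_vals):
    """V[i] = (V_i0, V_i1, V_i2) for i = 0, 1 (a two-parameter problem)."""
    E = [[np.linalg.norm(Vij, 2) for Vij in Vi] for Vi in V]
    G = np.empty((len(l2_vals), len(l1_vals)))
    for p, l2 in enumerate(l2_vals):
        for q, l1 in enumerate(l1_vals):
            vals = []
            for i in range(2):
                Wi = V[i][0] - l1 * V[i][1] - l2 * V[i][2]
                theta = E[i][0] + abs(l1) * E[i][1] + abs(l2) * E[i][2]
                vals.append(np.linalg.svd(Wi, compute_uv=False)[-1] / theta)
            G[p, q] = max(vals)
    return G   # e.g. plt.contour(l1_vals, l2_vals, G, levels=[1e-2]) with matplotlib
```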

33 Example
W_1(λ) and W_2(λ) are given by small explicit matrices on the slide.
[Figure: ε-pseudospectrum in the (λ_1, λ_2) plane.]

34 Example 2
We discretize the two-parameter Sturm-Liouville problem from Binding, Browne, Ji (1993):
W_1(λ) x_1(t_1) = −x_1''(t_1) − (λ_1 + λ_2 cos 2t_1) x_1(t_1),
W_2(λ) x_2(t_2) = −x_2''(t_2) − λ_2 x_2(t_2),
with boundary conditions x_i(0) = x_i(π) = 0 for i = 1, 2.

35 This gives
V_10 = V_20 = −(1/h²) tridiag(1, −2, 1),   V_11 = I,   V_21 = 0,   V_22 = I,
V_12 = diag( cos(2π/(n+1)), cos(4π/(n+1)), ..., cos(2nπ/(n+1)) ).
[Figure: ε-pseudospectrum in the (λ_1, λ_2) plane.]
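The following sketch (my own construction, assuming a uniform grid t_j = jπ/(n+1) and standard second-order differences with the Dirichlet conditions eliminated) assembles the matrices listed above:

```python
# Sketch: finite-difference discretization of the two-parameter Sturm-Liouville example.
import numpy as np

def discretize(n):
    h = np.pi / (n + 1)
    t = np.arange(1, n + 1) * h
    # -d^2/dt^2 on the interior grid, i.e. -(1/h^2) * tridiag(1, -2, 1)
    T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    V10, V11, V12 = T, np.eye(n), np.diag(np.cos(2 * t))
    V20, V21, V22 = T.copy(), np.zeros((n, n)), np.eye(n)
    return (V10, V11, V12), (V20, V21, V22)
```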

36 Overview (next: Distance to the nearest singular problem)

37 Distance to the nearest singular problem
A MEP is singular ⟺ Δ_0 is singular.

38 Let k = 2. We define the distance to the nearest singular MEP as
δ(W) = min{ ‖ [ ‖ΔB_1‖_F  ‖ΔC_1‖_F ; ‖ΔB_2‖_F  ‖ΔC_2‖_F ] ‖_F : W + ΔW is singular }.

39 If W + ΔW is singular, then ‖ΔΔ_0‖ ≥ σ_min(Δ_0), where
ΔΔ_0 := (B_1 + ΔB_1) ⊗ (C_2 + ΔC_2) − (C_1 + ΔC_1) ⊗ (B_2 + ΔB_2) − Δ_0.

40 If we neglect second-order terms, it follows from ‖X ⊗ Y‖ = ‖X‖ ‖Y‖ that
‖ΔΔ_0‖ ≤ ‖ΔB_1‖ ‖C_2‖ + ‖B_1‖ ‖ΔC_2‖ + ‖ΔC_1‖ ‖B_2‖ + ‖C_1‖ ‖ΔB_2‖
        ≤ ‖ [ ‖ΔB_1‖_F  ‖ΔC_1‖_F ; ‖ΔB_2‖_F  ‖ΔC_2‖_F ] ‖_F · ‖ [ ‖B_1‖_F  ‖C_1‖_F ; ‖B_2‖_F  ‖C_2‖_F ] ‖_F,
and it follows that
δ(W) ≥ σ_min(Δ_0) / ‖ [ ‖B_1‖_F  ‖C_1‖_F ; ‖B_2‖_F  ‖C_2‖_F ] ‖_F.
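This first-order lower bound is cheap to compute. A small sketch (hypothetical helper name):

```python
# Sketch: first-order lower bound delta(W) >= sigma_min(Delta_0) / ||[B1 C1; B2 C2]||_F.
import numpy as np

def singularity_distance_bound(B1, C1, B2, C2):
    D0 = np.kron(B1, C2) - np.kron(C1, B2)
    sigma_min = np.linalg.svd(D0, compute_uv=False)[-1]
    alpha = np.sqrt(sum(np.linalg.norm(M, 'fro') ** 2 for M in (B1, C1, B2, C2)))
    return sigma_min / alpha
```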

41 Distance to the nearest singular problem for a right definite MEP
Atkinson (1972): For a Hermitian MEP, Δ_0 is positive definite ⟺ (x ⊗ y)* Δ_0 (x ⊗ y) > 0 for all x, y ≠ 0.

42 Corollary: A Hermitian MEP is not definite ⟺ there exist x, y ≠ 0 such that (x ⊗ y)* Δ_0 (x ⊗ y) = 0.

43 Let us define
M(x, y) = [ x*B_1x  x*C_1x ; y*B_2y  y*C_2y ]   and   ΔM(x, y) = [ x*ΔB_1x  x*ΔC_1x ; y*ΔB_2y  y*ΔC_2y ].
Since (x ⊗ y)* Δ_0 (x ⊗ y) = det M(x, y), we want to make M(x, y) singular. For ‖x‖ = ‖y‖ = 1,
‖ΔM(x, y)‖_F ≤ ‖ [ ‖ΔB_1‖_F  ‖ΔC_1‖_F ; ‖ΔB_2‖_F  ‖ΔC_2‖_F ] ‖_F,
and a necessary condition is
‖ [ ‖ΔB_1‖_F  ‖ΔC_1‖_F ; ‖ΔB_2‖_F  ‖ΔC_2‖_F ] ‖_F ≥ σ_2(M(x, y)).

44 The smallest perturbation that makes M(x, y) singular is ΔM(x, y) = −σ_2 u v^T, where σ_2 is the smallest singular value of M(x, y) with corresponding left and right singular vectors u and v. If we take
ΔB_1 = −σ_2 u_1 v_1 xx*,   ΔC_1 = −σ_2 u_1 v_2 xx*,   ΔB_2 = −σ_2 u_2 v_1 yy*,   ΔC_2 = −σ_2 u_2 v_2 yy*,
then M(x, y) + ΔM(x, y) is singular and ‖ [ ‖ΔB_1‖_F  ‖ΔC_1‖_F ; ‖ΔB_2‖_F  ‖ΔC_2‖_F ] ‖_F = σ_2(M(x, y)).

45 Thus,
δ(W) = min_{‖x‖ = ‖y‖ = 1} σ_2(M(x, y)).
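A crude way to evaluate the minimum over unit vectors is random sampling, which over-estimates δ(W); a proper approach would use optimization. The sketch below is my own illustration and assumes real symmetric B_i, C_i, so that real trial vectors suffice:

```python
# Sketch: estimate delta(W) = min over unit x, y of sigma_2(M(x, y)) by sampling
# (the sampled minimum is an upper estimate of the true minimum).
import numpy as np

def delta_estimate(B1, C1, B2, C2, samples=20000, seed=0):
    rng = np.random.default_rng(seed)
    n1, n2 = B1.shape[0], B2.shape[0]
    best = np.inf
    for _ in range(samples):
        x = rng.standard_normal(n1)
        x /= np.linalg.norm(x)
        y = rng.standard_normal(n2)
        y /= np.linalg.norm(y)
        M = np.array([[x @ B1 @ x, x @ C1 @ x],
                      [y @ B2 @ y, y @ C2 @ y]])
        best = min(best, np.linalg.svd(M, compute_uv=False)[-1])  # sigma_2(M(x, y))
    return best
```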

46 Distance to singular problems for the right definite MEP
α := ‖ [ B_1  C_1 ; B_2  C_2 ] ‖_F
δ(W) = min_{‖x‖ = ‖y‖ = 1} σ_2(M(x, y)) = min_{‖x‖ = ‖y‖ = 1} ‖M(x, y)^{-1}‖^{-1}
M(x, y)^{-1} = 1/((x*B_1x)(y*C_2y) − (x*C_1x)(y*B_2y)) · [ y*C_2y  −x*C_1x ; −y*B_2y  x*B_1x ]
‖M(x, y)^{-1}‖ ≤ α / ((x ⊗ y)* Δ_0 (x ⊗ y))
Thus,
δ(W) ≥ min_{‖x‖ = ‖y‖ = 1} (x ⊗ y)* Δ_0 (x ⊗ y) / α ≥ min_{‖z‖ = 1} z* Δ_0 z / α = σ_min(Δ_0) / α.

47 Bibliography
M. E. Hochstenbach, B. Plestenjak: Backward error, condition numbers, and pseudospectrum for the multiparameter eigenvalue problem, Linear Algebra Appl. 375 (2003).
