Solving large Hamiltonian eigenvalue problems


David S. Watkins, Department of Mathematics, Washington State University
Adaptivity and Beyond, Vancouver, August 2005

Some Collaborators

Volker Mehrmann, Thomas Apel, Peter Benner, Heike Faßbender, ...

Problem: Linear Elasticity

Elastic deformation (3D, anisotropic, composite materials)
Singularities at cracks, interfaces
Lamé equations (PDE, spherical coordinates)
Separate the radial variable to get a quadratic eigenvalue problem,

    (λ^2 M + λG + K)v = 0,    M = M^T > 0,  G = -G^T,  K = K^T < 0,

which we solve numerically.

Numerical Solution

Discretize using finite elements:

    (λ^2 M + λG + K)v = 0,    M^T = M > 0,  G^T = -G,  K^T = K < 0

a matrix quadratic eigenvalue problem (large, sparse). Find a few of the smallest eigenvalues (and corresponding eigenvectors).

Other Applications

Gyroscopic systems:  λ^2 M + λG + K

Quadratic regulator (optimal control), a symmetric/skew-symmetric pencil:

    [ C^T C    A^T  ]       [  0   E^T ]
    [ A      -B B^T ]  - λ  [ -E    0  ]

Higher-order systems:  λ^n A_n + λ^{n-1} A_{n-1} + ... + λ A_1 + A_0

Hamiltonian Structure

Our matrices are real. Eigenvalues occur in quadruples: λ, -λ, λ̄, -λ̄ occur together.

(figures: spectra of Hamiltonian matrices, symmetric about both the real and imaginary axes)
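
The quadruple symmetry is easy to observe numerically. A minimal NumPy sketch (all names hypothetical), using a random real Hamiltonian matrix in the block form [A K; N -A^T] with K, N symmetric:

```python
import numpy as np

# A real Hamiltonian matrix in block form [[A, K], [N, -A^T]], K and N symmetric.
rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
K = rng.standard_normal((n, n)); K = K + K.T
N = rng.standard_normal((n, n)); N = N + N.T
H = np.block([[A, K], [N, -A.T]])

lam = np.linalg.eigvals(H)
# For every eigenvalue, -λ and the complex conjugate λ̄ are eigenvalues too.
for l in lam:
    assert min(abs(lam + l)) < 1e-8              # -λ is in the spectrum
    assert min(abs(lam - l.conjugate())) < 1e-8  # λ̄ is in the spectrum
```

Note that trace(H) = trace(A) - trace(A^T) = 0, consistent with the eigenvalues canceling in ± pairs.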

Hamiltonian Matrices

H ∈ R^{2n×2n}. Let

    J = [  0  I ]
        [ -I  0 ]  ∈ R^{2n×2n}.

H is Hamiltonian iff JH is symmetric. Equivalently,

    H = [ A    K   ]
        [ N   -A^T ],   where K = K^T and N = N^T.
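
A quick structural check of this characterization (NumPy sketch; `make_J` and `is_hamiltonian` are hypothetical helper names):

```python
import numpy as np

def make_J(n):
    # J = [[0, I], [-I, 0]]
    Z = np.zeros((n, n))
    return np.block([[Z, np.eye(n)], [-np.eye(n), Z]])

def is_hamiltonian(H):
    # H is Hamiltonian iff J H is symmetric.
    n = H.shape[0] // 2
    JH = make_J(n) @ H
    return np.allclose(JH, JH.T)

# Any block matrix [[A, K], [N, -A^T]] with K, N symmetric qualifies.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
K = rng.standard_normal((n, n)); K = K + K.T
N = rng.standard_normal((n, n)); N = N + N.T
H = np.block([[A, K], [N, -A.T]])
print(is_hamiltonian(H))  # True
```

The identity matrix fails the test (J itself is skew-symmetric), which is a useful negative control.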

Reduction of Order

    λ^2 Mv + λGv + Kv = 0

Set w = λv, so that Mw = λMv. Then

    [ K  0 ] [ v ]       [ -G  -M ] [ v ]
    [ 0  M ] [ w ]  - λ  [  M   0 ] [ w ]  =  0,

i.e. Ax - λNx = 0 with A symmetric and N skew-symmetric. Structure has been preserved.
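
As a sanity check, the linearization can be verified on small random coefficients with the stated structure. A NumPy sketch; the sign placement follows the convention restored above, one consistent choice rather than the slides' exact typography:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# Model coefficients with the assumed structure:
# M symmetric positive definite, G skew-symmetric, K symmetric negative definite.
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)
G = rng.standard_normal((n, n)); G = G - G.T
Wk = rng.standard_normal((n, n)); K = -(Wk @ Wk.T) - n * np.eye(n)

Z = np.zeros((n, n))
A = np.block([[K, Z], [Z, M]])        # symmetric
N = np.block([[-G, -M], [M, Z]])      # skew-symmetric

# Eigenvalues of the pencil A - λN (dense N^{-1}A is fine for this small check).
lam = np.linalg.eigvals(np.linalg.solve(N, A))

# Each pencil eigenvalue makes λ^2 M + λG + K singular.
for l in lam:
    Q = l**2 * M + l * G + K
    sigma_min = np.linalg.svd(Q, compute_uv=False)[-1]
    assert sigma_min < 1e-8 * np.linalg.norm(Q)
```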

Reduction to Hamiltonian Matrix

A - λN (symmetric/skew-symmetric). Factor the skew-symmetric matrix:

    N = R^T J R        ( J = [  0  I ] )
                       (     [ -I  0 ] )

always possible, sometimes easy. Transform:

    A - λ R^T J R
    R^{-T} A R^{-1} - λJ
    J^{-1} R^{-T} A R^{-1} - λI

H = J^{-1} R^{-T} A R^{-1} is Hamiltonian.
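
The chain of transformations can be checked on the gyroscopic pencil. A NumPy sketch, assuming the factor R = [I 0; -G/2 -M] (signs restored under one consistent convention):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)     # s.p.d.
G = rng.standard_normal((n, n)); G = G - G.T                     # skew
Wk = rng.standard_normal((n, n)); K = -(Wk @ Wk.T) - n * np.eye(n)  # neg. def.

Z, I = np.zeros((n, n)), np.eye(n)
J = np.block([[Z, I], [-I, Z]])
A = np.block([[K, Z], [Z, M]])
N = np.block([[-G, -M], [M, Z]])

R = np.block([[I, Z], [-0.5 * G, -M]])
assert np.allclose(R.T @ J @ R, N)         # N = R^T J R

Rinv = np.linalg.inv(R)
H = np.linalg.solve(J, Rinv.T @ A @ Rinv)  # H = J^{-1} R^{-T} A R^{-1}
JH = J @ H
assert np.allclose(JH, JH.T)               # H is Hamiltonian

# H inherits the pencil's eigenvalues: A x = λ N x  ⇔  H (Rx) = λ (Rx)
lam_p = np.linalg.eigvals(np.linalg.solve(N, A))
lam_H = np.linalg.eigvals(H)
for l in lam_p:
    assert min(abs(lam_H - l)) < 1e-8
```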

Example (our application)

    [ K  0 ] [ v ]       [ -G  -M ] [ v ]
    [ 0  M ] [ w ]  - λ  [  M   0 ] [ w ]  =  0

    N = [ -G  -M ]
        [  M   0 ]

    N = R^T J R = [ I   G/2 ] [  0  I ] [   I     0 ]
                  [ 0   -M  ] [ -I  0 ] [ -G/2   -M ]

cost: zero flops

H = J^{-1} R^{-T} A R^{-1}

After some algebra...

    H = [  I    0 ] [ 0   -M^{-1} ] [  I    0 ]
        [ G/2   I ] [ K      0    ] [ G/2   I ]

Do not form the product explicitly. Krylov subspace methods just need to apply the operator: factor M = LL^T once (Cholesky), then each application of H costs a backsolve plus sparse matrix-vector products.
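
In a matrix-free Krylov code, H is applied one factor at a time; the only solve is with M, via its once-computed Cholesky factor. A small dense SciPy sketch (hypothetical names; signs follow the convention used here):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(4)
n = 3
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)     # s.p.d.
G = rng.standard_normal((n, n)); G = G - G.T                     # skew
Wk = rng.standard_normal((n, n)); K = -(Wk @ Wk.T) - n * np.eye(n)  # neg. def.

M_fac = cho_factor(M)        # M = L L^T, computed once

def apply_H(x):
    # H = [I 0; G/2 I] [0 -M^{-1}; K 0] [I 0; G/2 I], one factor at a time
    x1, x2 = x[:n], x[n:]
    y1, y2 = x1, 0.5 * (G @ x1) + x2               # right factor
    z1, z2 = -cho_solve(M_fac, y2), K @ y1         # middle factor: one backsolve
    return np.concatenate([z1, 0.5 * (G @ z1) + z2])  # left factor

# Check against the explicitly formed product (small dense case only).
Z, I = np.zeros((n, n)), np.eye(n)
F = np.block([[I, Z], [0.5 * G, I]])
H = F @ np.block([[Z, -np.linalg.inv(M)], [K, Z]]) @ F
x = rng.standard_normal(2 * n)
assert np.allclose(apply_H(x), H @ x)

J = np.block([[Z, I], [-I, Z]])
assert np.allclose((J @ H).T, J @ H)    # and H is indeed Hamiltonian
```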

However, we really want H^{-1}.

    H = [  I    0 ] [ 0   -M^{-1} ] [  I    0 ]
        [ G/2   I ] [ K      0    ] [ G/2   I ]

    H^{-1} = [   I    0 ] [  0   K^{-1} ] [   I    0 ]
             [ -G/2   I ] [ -M     0    ] [ -G/2   I ]

Since K < 0, factor -K = LL^T once and apply K^{-1} = -(-K)^{-1} by backsolves.
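
So H^{-1} is just as cheap to apply as H: invert each factor, with the one Cholesky factorization now done on -K. A dense SciPy sketch (hypothetical names; same sign convention as above):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(5)
n = 3
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)
G = rng.standard_normal((n, n)); G = G - G.T
Wk = rng.standard_normal((n, n)); K = -(Wk @ Wk.T) - n * np.eye(n)

negK_fac = cho_factor(-K)    # -K = L L^T is s.p.d.; factored once

def apply_Hinv(x):
    # H^{-1} = [I 0; -G/2 I] [0 K^{-1}; -M 0] [I 0; -G/2 I]
    x1, x2 = x[:n], x[n:]
    y1, y2 = x1, x2 - 0.5 * (G @ x1)
    z1 = -cho_solve(negK_fac, y2)        # K^{-1} y2 = -(-K)^{-1} y2
    z2 = -(M @ y1)
    return np.concatenate([z1, z2 - 0.5 * (G @ z1)])

# Round trip against H formed explicitly (small dense check).
Z, I = np.zeros((n, n)), np.eye(n)
F = np.block([[I, Z], [0.5 * G, I]])
H = F @ np.block([[Z, -np.linalg.inv(M)], [K, Z]]) @ F
x = rng.standard_normal(2 * n)
assert np.allclose(apply_Hinv(H @ x), x)
assert np.allclose(H @ apply_Hinv(x), x)
```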

Shift and Invert?

(H - τI)^{-1} is not Hamiltonian: structure is lost. How can we recover it?

Exploitable Structures

symplectic (first idea):       (H - τI)^{-1}(H + τI)
skew-Hamiltonian (easiest?):   H^2,   (H - τI)^{-1}(H + τI)^{-1}
Hamiltonian:                   H^{-1},   H^{-1}(H - τI)^{-1}(H + τI)^{-1}
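
Each claim in this list can be verified on a small random Hamiltonian H. A NumPy sketch (`hamil`, `skewh`, `sympl` are hypothetical helper names):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
A = rng.standard_normal((n, n))
K = rng.standard_normal((n, n)); K = K + K.T
Nb = rng.standard_normal((n, n)); Nb = Nb + Nb.T
H = np.block([[A, K], [Nb, -A.T]])           # Hamiltonian by construction
Z, I = np.zeros((n, n)), np.eye(n)
J = np.block([[Z, I], [-I, Z]])
I2 = np.eye(2 * n)

hamil = lambda X: np.allclose((J @ X).T, J @ X)      # J X symmetric
skewh = lambda X: np.allclose((J @ X).T, -(J @ X))   # J X skew-symmetric
sympl = lambda X: np.allclose(X.T @ J @ X, J)

tau = 0.5
Si = np.linalg.inv(H - tau * I2)             # plain shift-and-invert
assert not (hamil(Si) or skewh(Si) or sympl(Si))     # no structure at all

assert hamil(np.linalg.inv(H))                       # H^{-1} is Hamiltonian
assert skewh(H @ H)                                  # H^2 is skew-Hamiltonian
assert skewh(Si @ np.linalg.inv(H + tau * I2))       # (H-τI)^{-1}(H+τI)^{-1}
assert sympl(Si @ (H + tau * I2))                    # Cayley transform is symplectic
```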

Structured Lanczos Processes

Unsymmetric Lanczos process:

    u_{k+1} b_k d_k     = B u_k   - u_k a_k d_k - u_{k-1} b_{k-1} d_k
    w_{k+1} d_{k+1} b_k = B^T w_k - w_k d_k a_k - w_{k-1} d_{k-1} b_{k-1}
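
For reference, a minimal textbook two-sided (unsymmetric) Lanczos iteration; scaling conventions vary across the literature, and this standard variant is not necessarily the slides' scaled recurrence. There is no look-ahead, so it stops on serious breakdown:

```python
import numpy as np

def two_sided_lanczos(B, v0, w0, m):
    """Textbook unsymmetric Lanczos: biorthogonal V, W with W^T B V tridiagonal."""
    n = B.shape[0]
    v = v0 / np.linalg.norm(v0)
    w = w0 / (w0 @ v)                 # enforce w^T v = 1
    V, W = [v], [w]
    beta = delta = 0.0
    v_prev = np.zeros(n); w_prev = np.zeros(n)
    for j in range(m - 1):
        a = W[j] @ (B @ V[j])
        v_hat = B @ V[j] - a * V[j] - beta * v_prev
        w_hat = B.T @ W[j] - a * W[j] - delta * w_prev
        s = v_hat @ w_hat
        if abs(s) < 1e-14:
            raise RuntimeError("serious breakdown: look-ahead needed")
        delta = np.sqrt(abs(s))       # scale so w_{j+1}^T v_{j+1} = 1
        beta = s / delta
        v_prev, w_prev = V[j], W[j]
        V.append(v_hat / delta)
        W.append(w_hat / beta)
    return np.column_stack(V), np.column_stack(W)

rng = np.random.default_rng(7)
n, m = 20, 6
B = rng.standard_normal((n, n))
V, W = two_sided_lanczos(B, rng.standard_normal(n), rng.standard_normal(n), m)

T = W.T @ B @ V                       # tridiagonal projection of B
assert np.allclose(W.T @ V, np.eye(m), atol=1e-6)
off = np.abs(np.arange(m)[:, None] - np.arange(m)[None, :]) > 1
assert np.max(np.abs(T[off])) < 1e-6  # entries off the three diagonals vanish
```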

Hamiltonian Lanczos Process

    u_{k+1} b_{k+1} = H v_k - u_k a_k - u_{k-1} b_{k-1}
    v_{k+1} d_{k+1} = H u_{k+1}

Symplectic Lanczos Process

    v_{k+1} d_{k+1} b_k = S v_k - v_k d_k a_k - v_{k-1} d_{k-1} b_{k-1} + u_k d_k
    u_{k+1} d_{k+1} = S^{-1} v_{k+1}

many interesting relationships

Implicit Restarts

short Lanczos runs (breakdowns!!, no look-ahead)
Restart implicitly as in IRA (Sorensen 1991), ARPACK
Restart with HR, not QR

Remarks on Stability

Both the Hamiltonian and symplectic Lanczos processes are potentially unstable, and breakdowns can occur. Are the answers worth anything? Check the computed results:

right and left eigenvectors
residuals
condition numbers for eigenvalues

Don't skip these tests.
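
These acceptance tests are cheap once left and right eigenvector estimates are available: for an eigenpair (λ, x) with left vector y, check the residual ‖Bx - λx‖ and the eigenvalue condition number 1/|y^H x|. A dense sketch using LAPACK through SciPy (hypothetical setup, any square B):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(8)
B = rng.standard_normal((8, 8))

# LAPACK returns unit-norm right (VR) and left (VL) eigenvectors.
lam, VL, VR = eig(B, left=True, right=True)

for i in range(len(lam)):
    l, x, y = lam[i], VR[:, i], VL[:, i]
    resid = np.linalg.norm(B @ x - l * x) / np.linalg.norm(B)  # relative residual
    cond = 1.0 / abs(np.vdot(y, x))   # eigenvalue condition number 1/|y^H x|
    assert resid < 1e-12
    assert cond >= 1.0 - 1e-12        # always ≥ 1 for unit-norm x, y
```

For a structured Lanczos run, B would be replaced by the operator actually iterated with, and large condition numbers flag eigenvalues whose computed values should not be trusted.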

Example

    λ^2 Mv + λGv + Kv = 0,    n = 3423

    H = [  I    0 ] [ 0   -M^{-1} ] [  I    0 ]
        [ G/2   I ] [ K      0    ] [ G/2   I ]

    H^{-1} = [   I    0 ] [  0   K^{-1} ] [   I    0 ]
             [ -G/2   I ] [ -M     0    ] [ -G/2   I ]

Compare various approaches:

    Hamiltonian(1)   H^{-1}
    Hamiltonian(3)   H^{-1}(H - τI)^{-1}(H + τI)^{-1}
    symplectic       (H - τI)^{-1}(H + τI)
    unstructured     (H - τI)^{-1} + ordinary Lanczos with implicit restarts

Get the 6 smallest eigenvalues in the right half-plane. Tolerance = 10^{-8}. Take 20 steps and restart with 10.

No-Clue Case (τ = 0)

(table: Solves, Eigvals Found, Max. Resid. for Hamiltonian(1) and Unstructured; values lost in transcription)

The unstructured code must find everything twice.

Conservative Shift (τ = 0.5)

(table: Solves, Eigvals Found, Max. Resid. for Hamiltonian(1), Unstructured, Hamiltonian(3), Symplectic; values lost in transcription)

Aggressive Shift (τ = 1.5)

(table: Solves, Eigvals Found, Max. Resid. for Hamiltonian(1), Unstructured, Hamiltonian(3), Symplectic; values lost in transcription)

The Last Slide

We have developed structure-preserving implicitly restarted Lanczos methods for Hamiltonian and symplectic eigenvalue problems. The structure-preserving methods are more accurate than a comparable non-structured method. By exploiting structure we can solve our problems more economically.

Thank you for your attention.


More information

Background Mathematics (2/2) 1. David Barber

Background Mathematics (2/2) 1. David Barber Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Nonlinear Eigenvalue Problems: A Challenge for Modern Eigenvalue Methods

Nonlinear Eigenvalue Problems: A Challenge for Modern Eigenvalue Methods Nonlinear Eigenvalue Problems: A Challenge for Modern Eigenvalue Methods Volker Mehrmann Heinrich Voss November 29, 2004 Abstract We discuss the state of the art in numerical solution methods for large

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

Contents 1 Introduction and Preliminaries 1 Embedding of Extended Matrix Pencils 3 Hamiltonian Triangular Forms 1 4 Skew-Hamiltonian/Hamiltonian Matri

Contents 1 Introduction and Preliminaries 1 Embedding of Extended Matrix Pencils 3 Hamiltonian Triangular Forms 1 4 Skew-Hamiltonian/Hamiltonian Matri Technische Universitat Chemnitz Sonderforschungsbereich 393 Numerische Simulation auf massiv parallelen Rechnern Peter Benner Ralph Byers Volker Mehrmann Hongguo Xu Numerical Computation of Deating Subspaces

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Data Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples

Data Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples Data Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Solving Large Nonlinear Sparse Systems

Solving Large Nonlinear Sparse Systems Solving Large Nonlinear Sparse Systems Fred W. Wubs and Jonas Thies Computational Mechanics & Numerical Mathematics University of Groningen, the Netherlands f.w.wubs@rug.nl Centre for Interdisciplinary

More information

Eigenvalues and Eigenvectors. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Eigenvalues and Eigenvectors. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Eigenvalues and Eigenvectors Consider the equation A x = λ x, where A is an nxn matrix. We call x (must be non-zero) an eigenvector of A if this equation can be solved for some value of λ. We call λ an

More information

Math Camp Notes: Linear Algebra II

Math Camp Notes: Linear Algebra II Math Camp Notes: Linear Algebra II Eigenvalues Let A be a square matrix. An eigenvalue is a number λ which when subtracted from the diagonal elements of the matrix A creates a singular matrix. In other

More information

Final Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson

Final Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson Final Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson Name: TA Name and section: NO CALCULATORS, SHOW ALL WORK, NO OTHER PAPERS ON DESK. There is very little actual work to be done on this exam if

More information

Math 504 (Fall 2011) 1. (*) Consider the matrices

Math 504 (Fall 2011) 1. (*) Consider the matrices Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture

More information

1 Conjugate gradients

1 Conjugate gradients Notes for 2016-11-18 1 Conjugate gradients We now turn to the method of conjugate gradients (CG), perhaps the best known of the Krylov subspace solvers. The CG iteration can be characterized as the iteration

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Power iteration Notes for 2016-10-17 In most introductory linear algebra classes, one computes eigenvalues as roots of a characteristic polynomial. For most problems, this is a bad idea: the roots of

More information

4. Linear transformations as a vector space 17

4. Linear transformations as a vector space 17 4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation

More information

STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS

STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS Electronic Transactions on Numerical Analysis Volume 44 pp 1 24 215 Copyright c 215 ISSN 168 9613 ETNA STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS VOLKER MEHRMANN AND HONGGUO

More information

On the solution of large Sylvester-observer equations

On the solution of large Sylvester-observer equations NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 200; 8: 6 [Version: 2000/03/22 v.0] On the solution of large Sylvester-observer equations D. Calvetti, B. Lewis 2, and L. Reichel

More information

Krylov Subspaces. Lab 1. The Arnoldi Iteration

Krylov Subspaces. Lab 1. The Arnoldi Iteration Lab 1 Krylov Subspaces Lab Objective: Discuss simple Krylov Subspace Methods for finding eigenvalues and show some interesting applications. One of the biggest difficulties in computational linear algebra

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

A comparison of solvers for quadratic eigenvalue problems from combustion

A comparison of solvers for quadratic eigenvalue problems from combustion INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS [Version: 2002/09/18 v1.01] A comparison of solvers for quadratic eigenvalue problems from combustion C. Sensiau 1, F. Nicoud 2,, M. van Gijzen 3,

More information

Eigenvalue perturbation theory of classes of structured matrices under generic structured rank one perturbations

Eigenvalue perturbation theory of classes of structured matrices under generic structured rank one perturbations Eigenvalue perturbation theory of classes of structured matrices under generic structured rank one perturbations, VU Amsterdam and North West University, South Africa joint work with: Leonard Batzke, Christian

More information

1 Extrapolation: A Hint of Things to Come

1 Extrapolation: A Hint of Things to Come Notes for 2017-03-24 1 Extrapolation: A Hint of Things to Come Stationary iterations are simple. Methods like Jacobi or Gauss-Seidel are easy to program, and it s (relatively) easy to analyze their convergence.

More information

Definition (T -invariant subspace) Example. Example

Definition (T -invariant subspace) Example. Example Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

COMP6237 Data Mining Covariance, EVD, PCA & SVD. Jonathon Hare

COMP6237 Data Mining Covariance, EVD, PCA & SVD. Jonathon Hare COMP6237 Data Mining Covariance, EVD, PCA & SVD Jonathon Hare jsh2@ecs.soton.ac.uk Variance and Covariance Random Variables and Expected Values Mathematicians talk variance (and covariance) in terms of

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

A new block method for computing the Hamiltonian Schur form

A new block method for computing the Hamiltonian Schur form A new block method for computing the Hamiltonian Schur form V Mehrmann, C Schröder, and D S Watkins September 8, 28 Dedicated to Henk van der Vorst on the occasion of his 65th birthday Abstract A generalization

More information

The residual again. The residual is our method of judging how good a potential solution x! of a system A x = b actually is. We compute. r = b - A x!

The residual again. The residual is our method of judging how good a potential solution x! of a system A x = b actually is. We compute. r = b - A x! The residual again The residual is our method of judging how good a potential solution x! of a system A x = b actually is. We compute r = b - A x! which gives us a measure of how good or bad x! is as a

More information

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called

More information

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Krylov Space Methods Nonstationary sounds good Radu Trîmbiţaş Babeş-Bolyai University Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Introduction These methods are used both to solve

More information

5 More on Linear Algebra

5 More on Linear Algebra 14.102, Math for Economists Fall 2004 Lecture Notes, 9/23/2004 These notes are primarily based on those written by George Marios Angeletos for the Harvard Math Camp in 1999 and 2000, and updated by Stavros

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #2 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #2 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 3. M Test # Solutions. (8 pts) For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For this

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 19: More on Arnoldi Iteration; Lanczos Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 17 Outline 1

More information

L -Norm Computation for Large-Scale Descriptor Systems Using Structured Iterative Eigensolvers

L -Norm Computation for Large-Scale Descriptor Systems Using Structured Iterative Eigensolvers L -Norm Computation for Large-Scale Descriptor Systems Using Structured Iterative Eigensolvers Peter Benner Ryan Lowe Matthias Voigt July 5, 2016 Abstract In this article, we discuss a method for computing

More information

2 Eigenvectors and Eigenvalues in abstract spaces.

2 Eigenvectors and Eigenvalues in abstract spaces. MA322 Sathaye Notes on Eigenvalues Spring 27 Introduction In these notes, we start with the definition of eigenvectors in abstract vector spaces and follow with the more common definition of eigenvectors

More information

Linear Algebra - Part II

Linear Algebra - Part II Linear Algebra - Part II Projection, Eigendecomposition, SVD (Adapted from Sargur Srihari s slides) Brief Review from Part 1 Symmetric Matrix: A = A T Orthogonal Matrix: A T A = AA T = I and A 1 = A T

More information

Matrix stabilization using differential equations.

Matrix stabilization using differential equations. Matrix stabilization using differential equations. Nicola Guglielmi Universitá dell Aquila and Gran Sasso Science Institute, Italia NUMOC-2017 Roma, 19 23 June, 2017 Inspired by a joint work with Christian

More information

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 20 1 / 20 Overview

More information

Eigenvectors. Prop-Defn

Eigenvectors. Prop-Defn Eigenvectors Aim lecture: The simplest T -invariant subspaces are 1-dim & these give rise to the theory of eigenvectors. To compute these we introduce the similarity invariant, the characteristic polynomial.

More information

BALANCING-RELATED MODEL REDUCTION FOR DATA-SPARSE SYSTEMS

BALANCING-RELATED MODEL REDUCTION FOR DATA-SPARSE SYSTEMS BALANCING-RELATED Peter Benner Professur Mathematik in Industrie und Technik Fakultät für Mathematik Technische Universität Chemnitz Computational Methods with Applications Harrachov, 19 25 August 2007

More information

Math 205, Summer I, Week 4b:

Math 205, Summer I, Week 4b: Math 205, Summer I, 2016 Week 4b: Chapter 5, Sections 6, 7 and 8 (5.5 is NOT on the syllabus) 5.6 Eigenvalues and Eigenvectors 5.7 Eigenspaces, nondefective matrices 5.8 Diagonalization [*** See next slide

More information

Nonlinear Eigenvalue Problems: An Introduction

Nonlinear Eigenvalue Problems: An Introduction Nonlinear Eigenvalue Problems: An Introduction Cedric Effenberger Seminar for Applied Mathematics ETH Zurich Pro*Doc Workshop Disentis, August 18 21, 2010 Cedric Effenberger (SAM, ETHZ) NLEVPs: An Introduction

More information

Linear Least-Squares Data Fitting

Linear Least-Squares Data Fitting CHAPTER 6 Linear Least-Squares Data Fitting 61 Introduction Recall that in chapter 3 we were discussing linear systems of equations, written in shorthand in the form Ax = b In chapter 3, we just considered

More information

SKEW-HAMILTONIAN AND HAMILTONIAN EIGENVALUE PROBLEMS: THEORY, ALGORITHMS AND APPLICATIONS

SKEW-HAMILTONIAN AND HAMILTONIAN EIGENVALUE PROBLEMS: THEORY, ALGORITHMS AND APPLICATIONS SKEW-HAMILTONIAN AND HAMILTONIAN EIGENVALUE PROBLEMS: THEORY, ALGORITHMS AND APPLICATIONS Peter Benner Technische Universität Chemnitz Fakultät für Mathematik benner@mathematik.tu-chemnitz.de Daniel Kressner

More information

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix DIAGONALIZATION Definition We say that a matrix A of size n n is diagonalizable if there is a basis of R n consisting of eigenvectors of A ie if there are n linearly independent vectors v v n such that

More information

ON THE REDUCTION OF A HAMILTONIAN MATRIX TO HAMILTONIAN SCHUR FORM

ON THE REDUCTION OF A HAMILTONIAN MATRIX TO HAMILTONIAN SCHUR FORM ON THE REDUCTION OF A HAMILTONIAN MATRIX TO HAMILTONIAN SCHUR FORM DAVID S WATKINS Abstract Recently Chu Liu and Mehrmann developed an O(n 3 ) structure preserving method for computing the Hamiltonian

More information

Krylov Subspace Methods for Nonlinear Model Reduction

Krylov Subspace Methods for Nonlinear Model Reduction MAX PLANCK INSTITUT Conference in honour of Nancy Nichols 70th birthday Reading, 2 3 July 2012 Krylov Subspace Methods for Nonlinear Model Reduction Peter Benner and Tobias Breiten Max Planck Institute

More information

On the Computations of Eigenvalues of the Fourth-order Sturm Liouville Problems

On the Computations of Eigenvalues of the Fourth-order Sturm Liouville Problems Int. J. Open Problems Compt. Math., Vol. 4, No. 3, September 2011 ISSN 1998-6262; Copyright c ICSRS Publication, 2011 www.i-csrs.org On the Computations of Eigenvalues of the Fourth-order Sturm Liouville

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n

More information

Opportunities for ELPA to Accelerate the Solution of the Bethe-Salpeter Eigenvalue Problem

Opportunities for ELPA to Accelerate the Solution of the Bethe-Salpeter Eigenvalue Problem Opportunities for ELPA to Accelerate the Solution of the Bethe-Salpeter Eigenvalue Problem Peter Benner, Andreas Marek, Carolin Penke August 16, 2018 ELSI Workshop 2018 Partners: The Problem The Bethe-Salpeter

More information

MATH 1553 PRACTICE MIDTERM 3 (VERSION B)

MATH 1553 PRACTICE MIDTERM 3 (VERSION B) MATH 1553 PRACTICE MIDTERM 3 (VERSION B) Name Section 1 2 3 4 5 Total Please read all instructions carefully before beginning. Each problem is worth 10 points. The maximum score on this exam is 50 points.

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For

More information

PY 351 Modern Physics Short assignment 4, Nov. 9, 2018, to be returned in class on Nov. 15.

PY 351 Modern Physics Short assignment 4, Nov. 9, 2018, to be returned in class on Nov. 15. PY 351 Modern Physics Short assignment 4, Nov. 9, 2018, to be returned in class on Nov. 15. You may write your answers on this sheet or on a separate paper. Remember to write your name on top. Please note:

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

Numerical Methods for General and Structured Eigenvalue Problems

Numerical Methods for General and Structured Eigenvalue Problems Daniel Kressner Numerical Methods for General and Structured Eigenvalue Problems SPIN Springer s internal project number, if known Monograph April 4, 2005 Springer Berlin Heidelberg New York Hong Kong

More information