FAME-matlab Package: Fast Algorithm for Maxwell Equations Tsung-Ming Huang


1 FAME-matlab Package: Fast Algorithm for Maxwell Equations. Tsung-Ming Huang. Modelling, Simulation and Analysis of Nonlinear Optics, NUK, September 4-8.


3 FAME group:
- Wen-Wei Lin, Department of Applied Mathematics, National Chiao-Tung University
- Weichung Wang, Department of Mathematics, National Taiwan University
- Chien-Chih Huang (黃建智), Department of Mathematics, National Taiwan Normal University
- Han-En Hsieh (謝函恩), Department of Mathematics, National Taiwan University

4 Fast Algorithm for Maxwell Equations (FAME): FAME.m, FAME.GPU, FAME.mpi

5 FAME: eigen-solvers (JD, SIRA) for
- Photonic crystals: $\nabla\times\nabla\times E(r) = \lambda\,\varepsilon(r)E(r)$
- Dispersive metallic materials: $\nabla\times\nabla\times E(r) = \omega^2\varepsilon(r,\omega)E(r)$
- Complex materials: $\begin{bmatrix} 0 & \nabla\times \\ -\nabla\times & 0\end{bmatrix}\begin{bmatrix}E\\H\end{bmatrix} = \mathrm{i}\omega\begin{bmatrix}\varepsilon & \xi\\ \zeta & \mu\end{bmatrix}\begin{bmatrix}E\\H\end{bmatrix}$
Lattices: simple cubic (SC), face-centered cubic (FCC)

6 Generalized eigenvalue problems for 3D photonic crystals

7 Discretizing $\nabla\times\nabla\times E(r) = \omega^2\varepsilon(r)E(r)$: write it as $\nabla\times E(r) = H(r)$ and $\nabla\times H(r) = \omega^2\varepsilon(r)E(r)$, and apply central differences with the components of $E$ at central edge points and $H$ at central face points. The discrete equations are $Ce = h$ and $C^*h = \omega^2 B e$, where
$$C = \begin{bmatrix} 0 & -C_3 & C_2 \\ C_3 & 0 & -C_1 \\ -C_2 & C_1 & 0\end{bmatrix},\qquad C_1 = I_{n_2 n_3}\otimes K_1\in\mathbb{C}^{n\times n},\quad C_2 = I_{n_3}\otimes K_2\in\mathbb{C}^{n\times n},\quad C_3 = K_3\in\mathbb{C}^{n\times n},$$
and $B\in\mathbb{C}^{3n\times 3n}$ is diagonal (containing the permittivity). The resulting generalized eigenvalue problem is $C^*C\,e = \omega^2 B e$, i.e. $(A-\lambda B)x = 0$ with $A = C^*C$ and diagonal $B$.

8-10 Finite differences associated with the quasi-periodic condition $E(\mathbf{r}+\mathbf{a}_l) = e^{\imath 2\pi\mathbf{k}\cdot\mathbf{a}_l}E(\mathbf{r})$:
$$K_1 = \frac{1}{\delta_x}\begin{bmatrix} -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \\ e^{\imath 2\pi\mathbf{k}\cdot\mathbf{a}_1} & & & -1\end{bmatrix}\in\mathbb{C}^{n_1\times n_1},\qquad
K_2 = \frac{1}{\delta_y}\begin{bmatrix} -I_{n_1} & I_{n_1} & & \\ & \ddots & \ddots & \\ & & -I_{n_1} & I_{n_1} \\ e^{\imath 2\pi\mathbf{k}\cdot\mathbf{a}_2}J_2 & & & -I_{n_1}\end{bmatrix}\in\mathbb{C}^{(n_1 n_2)\times(n_1 n_2)},$$
$$K_3 = \frac{1}{\delta_z}\begin{bmatrix} -I_{n_1 n_2} & I_{n_1 n_2} & & \\ & \ddots & \ddots & \\ & & -I_{n_1 n_2} & I_{n_1 n_2} \\ e^{\imath 2\pi\mathbf{k}\cdot\mathbf{a}_3}J_3 & & & -I_{n_1 n_2}\end{bmatrix}\in\mathbb{C}^{n\times n}.$$

11 For the SC lattice: $J_2 = I_{n_1}$, $J_3 = I_{n_1 n_2}$.

12 For the FCC lattice:
$$J_2 = \begin{bmatrix} 0 & e^{\imath 2\pi\mathbf{k}\cdot\mathbf{a}_1} I_{n_1/2} \\ I_{n_1/2} & 0\end{bmatrix}\in\mathbb{C}^{n_1\times n_1},\qquad
J_3 = \begin{bmatrix} 0 & e^{\imath 2\pi\mathbf{k}\cdot\mathbf{a}_2}\,\big(I_{n_2/3}\otimes J_2\big) \\ I_{2n_2/3}\otimes I_{n_1} & 0\end{bmatrix}\in\mathbb{C}^{(n_1 n_2)\times(n_1 n_2)}.$$
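To make the discretization concrete, here is a minimal MATLAB sketch (not the FAME implementation) that assembles K1, K2, K3 and the discrete curl C for the SC lattice; the grid sizes, mesh widths, Bloch vector k and lattice vectors a1, a2, a3 are illustrative choices.

% Minimal sketch (not FAME code): K1, K2, K3 and C for the SC lattice.
n1 = 8; n2 = 8; n3 = 8; n = n1*n2*n3;
dx = 1/n1; dy = 1/n2; dz = 1/n3;               % mesh widths delta_x, delta_y, delta_z
a1 = [1;0;0]; a2 = [0;1;0]; a3 = [0;0;1];      % SC lattice vectors (unit cell)
k  = [0.1; 0.2; 0.3];                          % Bloch wave vector (example)

% K1: bidiagonal with the quasi-periodic phase in the corner
K1 = spdiags([-ones(n1,1) ones(n1,1)], [0 1], n1, n1);
K1(n1,1) = exp(1i*2*pi*(k.'*a1));
K1 = K1/dx;

% K2: block bidiagonal built from I_{n1} blocks (J2 = I_{n1} for SC)
K2 = kron(spdiags([-ones(n2,1) ones(n2,1)], [0 1], n2, n2), speye(n1));
K2(end-n1+1:end, 1:n1) = exp(1i*2*pi*(k.'*a2))*speye(n1);
K2 = K2/dy;

% K3: block bidiagonal built from I_{n1 n2} blocks (J3 = I_{n1 n2} for SC)
K3 = kron(spdiags([-ones(n3,1) ones(n3,1)], [0 1], n3, n3), speye(n1*n2));
K3(end-n1*n2+1:end, 1:n1*n2) = exp(1i*2*pi*(k.'*a3))*speye(n1*n2);
K3 = K3/dz;

C1 = kron(speye(n2*n3), K1);                   % C1 = I_{n2 n3} (x) K1
C2 = kron(speye(n3), K2);                      % C2 = I_{n3}    (x) K2
C3 = K3;                                       % C3 = K3
O  = sparse(n, n);
C  = [O -C3 C2; C3 O -C1; -C2 C1 O];           % discrete curl, size 3n x 3n
A  = C'*C;                                     % discrete double-curl operator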

13 Power method. Let $(\lambda_i, x_i)$, $i=1,\dots,n$, be the eigenpairs of $A$, where $\{x_1,\dots,x_n\}$ is linearly independent. Any nonzero vector $u$ can be written as $u = \alpha_1 x_1 + \cdots + \alpha_n x_n$. Since $A^k x_i = \lambda_i^k x_i$, we have
$$A^k u = \alpha_1\lambda_1^k x_1 + \cdots + \alpha_n\lambda_n^k x_n.$$
If $|\lambda_1| > |\lambda_i|$ for $i>1$ and $\alpha_1\neq 0$, then
$$\frac{1}{\lambda_1^k}A^k u = \alpha_1 x_1 + \Big(\frac{\lambda_2}{\lambda_1}\Big)^k\alpha_2 x_2 + \cdots + \Big(\frac{\lambda_n}{\lambda_1}\Big)^k\alpha_n x_n \;\to\; \alpha_1 x_1 \quad\text{as } k\to\infty.$$
Given a shift value $\sigma$,
$$\{(A-\sigma I)^{-1}\}^k u = \alpha_1\{(\lambda_1-\sigma)^{-1}\}^k x_1 + \cdots + \alpha_n\{(\lambda_n-\sigma)^{-1}\}^k x_n,$$
so the shifted-and-inverted iteration converges to the eigenvector whose eigenvalue is closest to $\sigma$.
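A minimal MATLAB sketch of this idea; the matrix A and shift sigma below are illustrative stand-ins, not FAME data.

% Minimal sketch of shift-and-invert power iteration (illustrative A, sigma).
A     = gallery('lehmer', 100);        % example symmetric matrix
sigma = 0.5;                           % shift near the wanted eigenvalue
u     = randn(100, 1);  u = u/norm(u);
[L, U, P] = lu(A - sigma*eye(100));    % factor (A - sigma*I) once, reuse it
for iter = 1:200
    w     = U \ (L \ (P*u));           % w = (A - sigma*I)^{-1} u
    u     = w/norm(w);
    theta = u'*(A*u);                  % Rayleigh quotient = eigenvalue estimate
    if norm(A*u - theta*u) <= 1e-10*abs(theta), break; end
end
fprintf('eigenvalue closest to sigma = %.2f: %.6f (%d iterations)\n', sigma, theta, iter);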

14 Solving $(A-\lambda B)x = 0$: use the shift-and-invert Lanczos method. In each iteration of the shift-and-invert Lanczos method we need to solve $(A-\sigma B)y = b$. How can this linear system be solved efficiently?
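The shift-and-invert Lanczos/Arnoldi strategy is also what MATLAB's eigs applies internally when a numeric shift is supplied; a minimal sketch with stand-in matrices (here A plays the role of C'*C and B of the diagonal permittivity matrix; both are illustrative assumptions):

% Minimal sketch: k eigenvalues of A x = lambda B x closest to the shift sigma.
A = gallery('poisson', 25);                  % example sparse symmetric matrix (625 x 625)
n = size(A, 1);
B = spdiags(1 + rand(n,1), 0, n, n);         % example diagonal s.p.d. B
sigma = 2.0;  k = 5;
[V, D] = eigs(A, B, k, sigma);               % shift-and-invert Lanczos/Arnoldi inside eigs
disp(diag(D));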

15 Solving the linear system $(A-\sigma B)y = b$

16 Solve $(A-\sigma B)y = b$.
- Direct method (Gaussian elimination): y = (A - sigma*B) \ b
- Iterative method: needs matrix-vector products with $A-\sigma B$ and a preconditioner $M$, e.g.
  sol = bicgstabl(coef_mtx, rhs, tol, diag_coef_mtx, lower_l);

17 Demo performance.
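A minimal comparison of the two options, with stand-in matrices and an incomplete-LU preconditioner; the matrices, the shift and the ILU choice are illustrative assumptions, not the FAME setup.

% Minimal sketch: direct vs. preconditioned iterative solution of (A - sigma*B) y = b.
A = gallery('poisson', 25);  n = size(A, 1);
B = spdiags(1 + rand(n,1), 0, n, n);
sigma = 0.5;  b = randn(n, 1);
S = A - sigma*B;

y_direct = S \ b;                            % direct method: sparse Gaussian elimination

[L, U] = ilu(S, struct('type', 'ilutp', 'droptol', 1e-4));   % preconditioner M = L*U
tol = 1e-10;  maxit = 200;
[y_iter, flag, relres, iter] = bicgstabl(S, b, tol, maxit, L, U);
fprintf('relative difference: %.2e (flag %d, relres %.2e)\n', ...
        norm(y_direct - y_iter)/norm(y_direct), flag, relres);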

18 Eigen-decomposition of $C_1, C_2, C_3$ for the SC lattice. Define
$$\theta_{a,m} = \frac{\imath 2\pi\,\mathbf{k}\cdot\mathbf{a}}{m},\qquad \theta_{m,i} = \frac{\imath 2\pi i}{m},\qquad
D_{a,m} = \mathrm{diag}\big(1, e^{\theta_{a,m}},\dots,e^{(m-1)\theta_{a,m}}\big),\qquad
\Lambda_{a,m} = \mathrm{diag}\big(e^{\theta_{m,1}+\theta_{a,m}}-1,\dots,e^{\theta_{m,m}+\theta_{a,m}}-1\big),$$
$$U_m = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ e^{\theta_{m,1}} & e^{\theta_{m,2}} & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ e^{(m-1)\theta_{m,1}} & e^{(m-1)\theta_{m,2}} & \cdots & 1 \end{bmatrix}\in\mathbb{C}^{m\times m}.$$
Define the unitary matrix $T$ as
$$T = \frac{1}{\sqrt{n}}\big(D_{a_3,n_3}\otimes D_{a_2,n_2}\otimes D_{a_1,n_1}\big)\big(U_{n_3}\otimes U_{n_2}\otimes U_{n_1}\big).$$
Then it holds that
$$C_1 T = \delta_x^{-1} T\big(I_{n_3}\otimes I_{n_2}\otimes\Lambda_{a_1,n_1}\big) \equiv T\Lambda_1,\qquad
C_2 T = \delta_y^{-1} T\big(I_{n_3}\otimes\Lambda_{a_2,n_2}\otimes I_{n_1}\big) \equiv T\Lambda_2,\qquad
C_3 T = \delta_z^{-1} T\big(\Lambda_{a_3,n_3}\otimes I_{n_2}\otimes I_{n_1}\big) \equiv T\Lambda_3.$$
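Because T is a product of diagonal phase factors and a Kronecker product of DFT-like matrices, T and T* are never formed explicitly; they are applied with 3D FFTs. A minimal sketch for the SC lattice, using the standard fft/ifft frequency ordering (which matches U_m up to a column permutation); n1, n2, n3, k and the lattice vectors are illustrative.

% Minimal sketch (SC lattice): apply T and T' with FFTs instead of forming T.
n1 = 16; n2 = 16; n3 = 16; n = n1*n2*n3;
a1 = [1;0;0]; a2 = [0;1;0]; a3 = [0;0;1];
k  = [0.1; 0.2; 0.3];

% diagonal phase factors D_{a_l,n_l} = diag(1, e^{theta}, ..., e^{(n_l-1)*theta})
phases = @(a, m) exp(1i*2*pi*(k.'*a)/m).^((0:m-1).');
d = reshape(phases(a1,n1), [n1 1 1]) .* reshape(phases(a2,n2), [1 n2 1]) ...
                                     .* reshape(phases(a3,n3), [1 1 n3]);

Tq  = @(q) sqrt(n) * reshape(d .* ifftn(reshape(q, n1, n2, n3)), [], 1);       % T*q  via ifftn
Thp = @(p) reshape(fftn(conj(d) .* reshape(p, n1, n2, n3)), [], 1) / sqrt(n);  % T'*p via fftn

q = randn(n,1) + 1i*randn(n,1);
fprintf('unitarity check ||T''(T q) - q|| = %.2e\n', norm(Thp(Tq(q)) - q));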

19-20 Eigen-decomposition of $C_1, C_2, C_3$ for the FCC lattice. Define $\psi_x = \imath 2\pi\,\mathbf{k}\cdot\mathbf{a}_1$ and
$$D_x = \mathrm{diag}\big(1, e^{\psi_x/n_1},\dots,e^{(n_1-1)\psi_x/n_1}\big),\qquad
D_{y,i} = \mathrm{diag}\big(1, e^{\psi_{y,i}},\dots,e^{(n_2-1)\psi_{y,i}}\big),\qquad
D_{z,i+j} = \mathrm{diag}\big(1, e^{\psi_{z,i+j}},\dots,e^{(n_3-1)\psi_{z,i+j}}\big),$$
where the phases $\psi_{y,i}$ and $\psi_{z,i+j}$ involve $\mathbf{k}\cdot\mathbf{a}_2$, $\mathbf{k}\cdot\mathbf{a}_3$, the column indices $i$, $j$, and the fractional offsets of the FCC lattice vectors. With
$$x_i = D_x U_{n_1}(:,i),\qquad y_{i,j} = D_{y,i} U_{n_2}(:,j),\qquad
T_{i,j} = \big(D_{z,i+j}U_{n_3}\big)\otimes y_{i,j}\otimes x_i,\qquad
T_i = \begin{bmatrix} T_{i,1} & T_{i,2} & \cdots & T_{i,n_2}\end{bmatrix}\in\mathbb{C}^{n\times(n_2 n_3)},$$
define the unitary matrix $T = \frac{1}{\sqrt{n}}\begin{bmatrix} T_1 & T_2 & \cdots & T_{n_1}\end{bmatrix}\in\mathbb{C}^{n\times n}$. Then it holds that
$$C_1 T = T\big(\Lambda_{n_1}\otimes I_{n_2 n_3}\big) \equiv T\Lambda_1,\qquad
C_2 T = T\Big(\big(\textstyle\bigoplus_{i=1}^{n_1}\Lambda_{i,n_2}\big)\otimes I_{n_3}\Big) \equiv T\Lambda_2,\qquad
C_3 T = T\Big(\textstyle\bigoplus_{i=1}^{n_1}\bigoplus_{j=1}^{n_2}\Lambda_{i,j,n_3}\Big) \equiv T\Lambda_3.$$
Demo performance.

21 CPU times (sec.) for computing $T^*p$ and $Tq$ with the FCC lattice in MATLAB, for $n$ up to about $10^7$: $T^*p$ is applied with fft, $Tq$ with ifft.

22 Solving the preconditioning linear system $(C^*C-\tau I)y = d$.

23 Define $G = \begin{bmatrix} C_1 \\ C_2 \\ C_3\end{bmatrix}$. Then
$$C^*C = I_3\otimes(G^*G) - GG^*,\qquad C = \begin{bmatrix} 0 & -C_3 & C_2 \\ C_3 & 0 & -C_1 \\ -C_2 & C_1 & 0\end{bmatrix}.$$

24 Rearranging, $\{I_3\otimes(G^*G) - \tau I\}y = d + GG^*y$.

25 Since $CG = 0$, multiplying $(C^*C-\tau I)y = d$ by $GG^*$ gives $GG^*y = -\tau^{-1}GG^*d$.

26 Hence $\{I_3\otimes(G^*G) - \tau I\}y = d - \tau^{-1}GG^*d$.

27 Using $C_1T = T\Lambda_1$, $C_2T = T\Lambda_2$, $C_3T = T\Lambda_3$ and $\Lambda_q = \Lambda_1^*\Lambda_1 + \Lambda_2^*\Lambda_2 + \Lambda_3^*\Lambda_3$,

28 the system diagonalizes:
$$\big(I_3\otimes\Lambda_q - \tau I\big)\tilde{y} = \left(I - \tau^{-1}\begin{bmatrix}\Lambda_1\\ \Lambda_2\\ \Lambda_3\end{bmatrix}\begin{bmatrix}\Lambda_1^* & \Lambda_2^* & \Lambda_3^*\end{bmatrix}\right)(I_3\otimes T)^*d,\qquad y = (I_3\otimes T)\tilde{y}.$$

29 Demo performance.
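Putting the pieces together, the preconditioner solve costs six FFTs plus a diagonal solve. A minimal sketch, assuming the FFT-based handles Tq and Thp from the sketch after slide 18 and the diagonals lam1, lam2, lam3 of Lambda_1, Lambda_2, Lambda_3 are available; all names here are illustrative, not the FAME interface.

% Minimal sketch: y = (C'*C - tau*I)^{-1} d via the diagonalization above.
function y = apply_Minv(d, lam1, lam2, lam3, tau, Tq, Thp, n)
    % d_tilde = (I_3 (x) T)' * d, applied blockwise via T'
    d1 = Thp(d(1:n));  d2 = Thp(d(n+1:2*n));  d3 = Thp(d(2*n+1:3*n));

    % right-hand side (I - tau^{-1} [Lam_1;Lam_2;Lam_3][Lam_1^* Lam_2^* Lam_3^*]) d_tilde
    s  = conj(lam1).*d1 + conj(lam2).*d2 + conj(lam3).*d3;
    r1 = d1 - (lam1.*s)/tau;  r2 = d2 - (lam2.*s)/tau;  r3 = d3 - (lam3.*s)/tau;

    % diagonal solve with I_3 (x) Lambda_q - tau*I, Lambda_q = sum_l Lambda_l^* Lambda_l
    lamq = abs(lam1).^2 + abs(lam2).^2 + abs(lam3).^2;
    y1 = r1./(lamq - tau);  y2 = r2./(lamq - tau);  y3 = r3./(lamq - tau);

    % back to the original coordinates: y = (I_3 (x) T) * y_tilde
    y = [Tq(y1); Tq(y2); Tq(y3)];
end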

30 Preconditioner $M = C^*C - \tau I$. Iterative solver with preconditioner $M$:
sol = bicgstabl(coef_mtx, rhs, tol, Lambda, tau, EigDecompDoubCurl_cell, fun_mtx_th_prod_vec, fun_mtx_t_prod_vec);

31 Since $M = A - \tau I$, we have
$$M^{-1}(A-\sigma B) = M^{-1}(A - \tau I + \tau I - \sigma B) = I + M^{-1}(\tau I - \sigma B),$$
so the preconditioned system is $\{I + M^{-1}(\tau I - \sigma B)\}y = M^{-1}b$. No matrix-vector multiplication involving $A$ is needed:
sol = bicgstabl(@(vec)mtx_prod_vec_shift_invert_ls(vec, tau, Lambda_new, EigDecompDoubCurl_cell, mtx_b_sigma, fun_mtx_th_prod_vec, fun_mtx_t_prod_vec), rhs, tol, maxit);
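With a hypothetical apply_Minv helper like the one sketched after slide 29, the whole preconditioned operator becomes a function handle and the shifted system is solved without ever forming A; here B, b, sigma, tau and the Lambda/FFT data are assumed to be in scope, and all names are illustrative.

% Minimal sketch of the matrix-free operator I + M^{-1}(tau*I - sigma*B).
op  = @(y) y + apply_Minv(tau*y - sigma*(B*y), lam1, lam2, lam3, tau, Tq, Thp, n);
rhs = apply_Minv(b, lam1, lam2, lam3, tau, Tq, Thp, n);     % M^{-1} b
tol = 1e-12;  maxit = 100;
[y, flag] = bicgstabl(op, rhs, tol, maxit);                 % no products with A needed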

32 Challenge in solving the linear system: SC lattice (dim = 46875) vs. FCC lattice.

33 Null-space free eigenvalue problem

34 Huge number of zero eigenvalues. $A = C^*C$ has the eigen-decomposition
$$\begin{bmatrix} Q_0 & Q\end{bmatrix}^* A \begin{bmatrix} Q_0 & Q\end{bmatrix} = \mathrm{diag}\big(0, \Lambda_q, \Lambda_q\big) \equiv \mathrm{diag}(0, \Lambda),$$
where $\begin{bmatrix} Q_0 & Q\end{bmatrix} := (I_3\otimes T)\begin{bmatrix}\Pi_0 & \Pi_1\end{bmatrix}$ is unitary,
$$\Pi_0 = \begin{bmatrix}\Pi_{0,1}\\ \Pi_{0,2}\\ \Pi_{0,3}\end{bmatrix},\qquad \Pi_1 = \begin{bmatrix}\Pi_{1,1} & \Pi_{1,2}\\ \Pi_{1,3} & \Pi_{1,4}\\ \Pi_{1,5} & \Pi_{1,6}\end{bmatrix},\qquad \Lambda_q = \Lambda_1^*\Lambda_1 + \Lambda_2^*\Lambda_2 + \Lambda_3^*\Lambda_3,$$
and in particular $Q^*AQ = \Lambda$.

35 In the spectrum of $Ax = \lambda Bx$ there are $n$ zero eigenvalues; the $k$ wanted eigenvalues are interior, sitting just above the zero cluster and below the rest of the spectrum.

36 Ritz values are dragged toward zero during the iteration.

37 Null-space free method. Theorem: with $Q^*AQ = \Lambda$,
$$\mathrm{span}\big(B^{-1}Q\Lambda^{1/2}\big) = \mathrm{span}\{x \mid Ax = \lambda Bx,\ \lambda\neq 0\}\quad\text{and}\quad
\{\lambda\neq 0 \mid Ax=\lambda Bx\} = \{\lambda \mid \Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2}u = \lambda u\}.$$

38 Null-space free SEP: $Ax = \lambda Bx \;\Longleftrightarrow\; Ku \equiv \big(\Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2}\big)u = \lambda u$.
- The dimensions of the GEP and the SEP are $3n$ and $2n$, respectively.
- The GEP and the SEP have the same $2n$ nonzero eigenvalues.
- The SEP has no zero eigenvalues.

39 In the spectrum of $Ku = \lambda u$ the $n$ zero eigenvalues are gone; only the $k$ wanted interior eigenvalues and the remaining eigenvalues are left.

40 Solving $\Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2}u = \lambda u$ by the inverse Lanczos method. In each step we need to solve a linear system $\Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2}v = b$.

41 Solve the linear system by the CG method:
sol = pcg(@(vec)nfsep_mtx_prod_vec_lambda(vec, EigDecompDoubCurl_cell, FFT_parameter), rhs, tol, maxit);

42 Demo performance.

43 Rewrite the linear system as $Q^*B^{-1}Q\tilde{v} = \Lambda^{-1/2}b$, $v = \Lambda^{-1/2}\tilde{v}$. The condition number is well behaved, $\kappa(Q^*B^{-1}Q)\le\kappa(B^{-1})$, so the rewritten system is solved efficiently by the CG method.
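A minimal sketch of the rewritten CG solve; Qv and Qhv are assumed function handles that apply Q and Q* (FFT-based in FAME), invB is the diagonal of B^{-1}, and lam is the diagonal of Lambda. All names here are illustrative assumptions.

% Minimal sketch: solve Q' B^{-1} Q v_tilde = Lambda^{-1/2} b by CG, then rescale.
applyQBQ = @(v) Qhv(invB .* Qv(v));                  % v -> Q' B^{-1} Q v (Hermitian positive definite)
rhs      = b ./ sqrt(lam);                           % Lambda^{-1/2} b
tol = 1e-12;  maxit = 300;
[v_tilde, flag, relres, iter] = pcg(applyQBQ, rhs, tol, maxit);
v = v_tilde ./ sqrt(lam);                            % v = Lambda^{-1/2} v_tilde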

44 CPU time comparison between solving the GEP $Ax = \lambda Bx$ and the null-space free SEP $\Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2}u = \lambda u$ (curves labeled SILM and IPLM; elapsed times in sec. plotted against $n\log(n)$), together with the band structure (frequency along X-U-L-G-X-W-K) of 3D photonic crystals with the FCC lattice: the tetragonal square spiral structure of circular cylinders and the diamond structure with an sp3-like configuration of dielectric spheres and connecting spheroids (after R. L. Chern et al.).

45 Shift-Invert Residual Arnoldi method

46 Shift-Invert Residual Arnoldi method (SIRA). For a given search subspace $V$, let $(\theta,\tilde{z})$ satisfy the projected problem $V^*\big(\Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2} - \theta I\big)V\tilde{z} = 0$ and let $\tilde{x} = V\tilde{z}$ be the associated Ritz vector. The new search direction $v$ is chosen as
$$v = \big(\Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2} - \sigma I\big)^{-1}\big(\Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2} - \theta I\big)\tilde{x} = \big(\Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2} - \sigma I\big)^{-1} r,$$
where $\sigma$ is a given shift value. After re-orthogonalizing $v$ against $V$, the vector is appended to $V$, and the process is repeated until $(\theta,\tilde{x})$ converges to the desired eigenpair.
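A minimal sketch of the SIRA loop for K u = lambda u, where applyK applies K = Lambda^{1/2} Q' B^{-1} Q Lambda^{1/2} and solve_shifted returns an (approximate) solution of (K - sigma*I) w = r. Both handles, the shift and the tolerances are assumptions for illustration, not the FAME implementation.

% Minimal sketch of the SIRA iteration for K u = lambda u.
function [theta, x] = sira_sketch(applyK, solve_shifted, n, sigma, tol, maxit)
    V = randn(n, 1);  V = V/norm(V);                 % initial search subspace
    for j = 1:maxit
        KV = zeros(n, size(V, 2));
        for c = 1:size(V, 2), KV(:, c) = applyK(V(:, c)); end
        [S, Theta] = eig(V'*KV);                     % small projected eigenproblem
        [~, idx]   = min(abs(diag(Theta) - sigma));  % Ritz value closest to the shift
        theta = Theta(idx, idx);  x = V*S(:, idx);   % Ritz pair (theta, x = V*z)
        r = applyK(x) - theta*x;                     % residual
        if norm(r) <= tol*abs(theta), return; end
        v = solve_shifted(r, sigma);                 % new direction ~ (K - sigma*I)^{-1} r
        v = v - V*(V'*v);  v = v - V*(V'*v);         % re-orthogonalize against V (twice)
        V = [V, v/norm(v)];                          % expand the search subspace
    end
end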

47 CPU time comparison for the null-space free SEP $\Lambda^{1/2}Q^*B^{-1}Q\Lambda^{1/2}u = \lambda u$, together with the computed band structure (frequency along X-U-L-G-X-W-K).
