Preconditioned inverse iteration and shift-invert Arnoldi method

1 Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical Systems Magdeburg 3rd May 2011 joint work with Alastair Spence (Bath)

2 1 Introduction 2 Inexact inverse iteration 3 Inexact Shift-invert Arnoldi method 4 Inexact Shift-invert Arnoldi method with implicit restarts 5 Conclusions

3 Outline 1 Introduction 2 Inexact inverse iteration 3 Inexact Shift-invert Arnoldi method 4 Inexact Shift-invert Arnoldi method with implicit restarts 5 Conclusions

4 Problem and iterative methods Find a small number of eigenvalues and eigenvectors of: Ax = λx, λ ∈ ℂ, x ∈ ℂ^n, where A is large, sparse and nonsymmetric.

5 Problem and iterative methods Find a small number of eigenvalues and eigenvectors of: Ax = λx, λ ∈ ℂ, x ∈ ℂ^n, where A is large, sparse and nonsymmetric. Iterative solves: Power method, Simultaneous iteration, Arnoldi method, Jacobi-Davidson method; repeated application of the matrix A to a vector. Generally convergence to the largest/outlying eigenvector.

6 Problem and iterative methods Find a small number of eigenvalues and eigenvectors of: Ax = λx, λ ∈ ℂ, x ∈ ℂ^n, where A is large, sparse and nonsymmetric. Iterative solves: Power method, Simultaneous iteration, Arnoldi method, Jacobi-Davidson method. The first three of these involve repeated application of the matrix A to a vector. Generally convergence to the largest/outlying eigenvector.

7 Shift-invert strategy Wish to find a few eigenvalues close to a shift σ. (Figure: spectrum λ_1, λ_2, λ_3, λ_4, …, λ_n in the complex plane, with the shift σ among them.)

8 Shift-invert strategy Wish to find a few eigenvalues close to a shift σ. The problem becomes
(A − σI)^{-1} x = 1/(λ − σ) x,
so each step of the iterative method involves repeated application of 𝒜 = (A − σI)^{-1} to a vector. Inner iterative solve: (A − σI) y = x using a Krylov method for linear systems, leading to an inner-outer iterative method.
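A minimal MATLAB sketch of one such inner solve (the shift, tolerance and preconditioner choice are illustrative, not taken from the talk): the action of (A − σI)^{-1} on a vector x is replaced by an ILU-preconditioned GMRES solve.
sigma = 0.5;                                    % illustrative shift
M = A - sigma*speye(size(A,1));                 % shifted matrix
[Lp,Up] = ilu(M, struct('type','nofill'));      % incomplete LU preconditioner
tau = 1e-8;                                     % inner solve tolerance
y = gmres(M, x, [], tau, 200, Lp, Up);          % y approximates (A - sigma*I)\x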

9 Shift-invert strategy This talk: inner iteration and preconditioning; fixed shifts only; inverse iteration and Arnoldi method.

10 Outline 1 Introduction 2 Inexact inverse iteration 3 Inexact Shift-invert Arnoldi method 4 Inexact Shift-invert Arnoldi method with implicit restarts 5 Conclusions

11 The algorithm: Inexact inverse iteration
for i = 1 to … do
  choose τ^{(i)}
  solve (A − σI) y^{(i)} = x^{(i)} inexactly, such that ‖(A − σI) y^{(i)} − x^{(i)}‖ = ‖d^{(i)}‖ ≤ τ^{(i)}
  rescale x^{(i+1)} = y^{(i)} / ‖y^{(i)}‖
  update λ^{(i+1)} = x^{(i+1)H} A x^{(i+1)}
  test: eigenvalue residual r^{(i+1)} = (A − λ^{(i+1)} I) x^{(i+1)}
end for

12 The algorithm: Inexact inverse iteration
for i = 1 to … do
  choose τ^{(i)}
  solve (A − σI) y^{(i)} = x^{(i)} inexactly, such that ‖(A − σI) y^{(i)} − x^{(i)}‖ = ‖d^{(i)}‖ ≤ τ^{(i)}
  rescale x^{(i+1)} = y^{(i)} / ‖y^{(i)}‖
  update λ^{(i+1)} = x^{(i+1)H} A x^{(i+1)}
  test: eigenvalue residual r^{(i+1)} = (A − λ^{(i+1)} I) x^{(i+1)}
end for
Convergence rates: if τ^{(i)} = C ‖r^{(i)}‖ then the convergence rate is linear (same convergence rate as for exact solves).
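A minimal MATLAB sketch of this loop, assuming a sparse matrix A, a unit-norm starting vector x and a fixed shift sigma are given; the constants, iteration limits and ILU preconditioner are illustrative.
sigma = 0.5;  C = 0.01;  n = size(A,1);
M = A - sigma*speye(n);
[Lp,Up] = ilu(M, struct('type','nofill'));        % preconditioner for the inner solves
lam = x'*(A*x);  r = A*x - lam*x;                 % initial Rayleigh quotient and residual
for i = 1:50
    tau = C*norm(r);                              % inner tolerance tau_i = C*||r_i||
    y = gmres(M, x, [], tau, 500, Lp, Up);        % inexact solve (A - sigma*I) y = x
    x = y/norm(y);                                % rescale
    lam = x'*(A*x);                               % update eigenvalue approximation
    r = A*x - lam*x;                              % eigenvalue residual
    if norm(r) < 1e-10, break, end
end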

13 The inner iteration for (A − σI) y = x. Standard GMRES theory for y_0 = 0 and A diagonalisable:
‖x − (A − σI) y_k‖ ≤ κ(W) min_{p ∈ P_k} max_{j=1,…,n} |p(λ_j)| ‖x‖,
where the λ_j are the eigenvalues of A − σI and A − σI = W Λ W^{-1}.

14 The inner iteration for (A − σI) y = x. Standard GMRES theory for y_0 = 0 and A diagonalisable:
‖x − (A − σI) y_k‖ ≤ κ(W) min_{p ∈ P_k} max_{j=1,…,n} |p(λ_j)| ‖x‖,
where the λ_j are the eigenvalues of A − σI and A − σI = W Λ W^{-1}. Number of inner iterations needed for ‖x − (A − σI) y_k‖ ≤ τ:
k ≈ C_1 + C_2 log(‖x‖/τ).

15 The inner iteration for (A − σI) y = x. More detailed GMRES theory for y_0 = 0:
‖x − (A − σI) y_k‖ ≤ κ(W) C(λ_1, λ_2) min_{p ∈ P_{k−1}} max_{j=2,…,n} |p(λ_j)| ‖Qx‖,
with a constant depending on λ_1 and λ_2, where the λ_j are the eigenvalues of A − σI. Number of inner iterations:
k ≈ C_1 + C_2 log(‖Qx‖/τ),
where Q projects onto the space not spanned by the sought eigenvector.

16 The inner iteration for (A − σI) y = x. Good news!
k^{(i)} ≈ C_1 + C_2 log(C_3 ‖r^{(i)}‖ / τ^{(i)}), where τ^{(i)} = C ‖r^{(i)}‖.
Iteration number approximately constant!

17 The inner iteration for (A − σI) y = x. Good news!
k^{(i)} ≈ C_1 + C_2 log(C_3 ‖r^{(i)}‖ / τ^{(i)}), where τ^{(i)} = C ‖r^{(i)}‖.
Iteration number approximately constant!
Bad news :-( For a standard preconditioner P, solving (A − σI) P^{-1} ỹ^{(i)} = x^{(i)} with P^{-1} ỹ^{(i)} = y^{(i)} gives
k^{(i)} ≈ C_1 + C_2 log(‖Q x^{(i)}‖ / τ^{(i)}) = C_1 + C_2 log(C̃ / τ^{(i)}), where τ^{(i)} = C ‖r^{(i)}‖.
Iteration number increases!

18 Convection-Diffusion operator Finite difference discretisation on a uniform grid of the convection-diffusion operator
−Δu + 5u_x + 5u_y = λu on (0,1)²,
with homogeneous Dirichlet boundary conditions. Smallest eigenvalue: λ_1 ≈ …. Preconditioned GMRES with tolerance τ^{(i)} = 0.01 ‖r^{(i)}‖, standard and tuned preconditioner (incomplete LU).
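A minimal MATLAB sketch of such a discretisation; the 32 x 32 interior grid is a hypothetical size chosen for illustration (the talk's grid and matrix dimensions are not recorded here).
m = 32;  h = 1/(m+1);                                 % illustrative grid size
e = ones(m,1);
D2 = spdiags([-e 2*e -e], -1:1, m, m)/h^2;            % 1D  -d^2/dx^2
D1 = spdiags([-e zeros(m,1) e], -1:1, m, m)/(2*h);    % 1D centred  d/dx
I  = speye(m);
A  = kron(I,D2) + kron(D2,I) + 5*(kron(I,D1) + kron(D1,I));   % -Laplacian + 5 u_x + 5 u_y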

19 Convection-Diffusion operator. Figure: Inner iterations vs outer iterations (right preconditioning, 569 iterations). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

20 The inner iteration for (A − σI) P^{-1} ỹ = x. How to overcome this problem: use a different preconditioner, namely one that satisfies
P_i x^{(i)} = A x^{(i)},   P_i := P + (A − P) x^{(i)} x^{(i)H};
a minor modification with minor extra computational cost, and [A P_i^{-1}] A x^{(i)} = A x^{(i)}.

21 The inner iteration for (A − σI) P^{-1} ỹ = x. How to overcome this problem: use a different preconditioner, namely one that satisfies
P_i x^{(i)} = A x^{(i)},   P_i := P + (A − P) x^{(i)} x^{(i)H};
a minor modification with minor extra computational cost, and [A P_i^{-1}] A x^{(i)} = A x^{(i)}.
Why does that work? Assume we have found the eigenvector x_1: then A x_1 = P_1 x_1 = λ_1 x_1, so
(A − σI) P_1^{-1} x_1 = ((λ_1 − σ)/λ_1) x_1,
and the Krylov method applied to (A − σI) P^{-1} ỹ = x_1 converges in one iteration. For general x^{(i)}:
k^{(i)} ≈ C_1 + C_2 log(C_3 ‖r^{(i)}‖ / τ^{(i)}), where τ^{(i)} = C ‖r^{(i)}‖.
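A minimal sketch of how the tuned preconditioner can be applied, assuming P is available as an incomplete LU factorisation P = Lp*Up and ‖x‖ = 1; the Sherman-Morrison formula keeps the cost close to that of the untuned P (all names are illustrative).
Pinv  = @(b) Up\(Lp\b);                           % action of P^{-1} (two triangular solves)
u     = A*x - Lp*(Up*x);                          % u = (A - P) x, so that P_i = P + u*x'
Pu    = Pinv(u);
alpha = 1 + x'*Pu;
Piinv = @(b) Pinv(b) - Pu*((x'*Pinv(b))/alpha);   % P_i^{-1} b via Sherman-Morrison
% Piinv can be handed to a Krylov solver as the preconditioner action.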

22 Convection-Diffusion operator Finite difference discretisation on a uniform grid of the convection-diffusion operator
−Δu + 5u_x + 5u_y = λu on (0,1)²,
with homogeneous Dirichlet boundary conditions. Smallest eigenvalue: λ_1 ≈ …. Preconditioned GMRES with tolerance τ^{(i)} = 0.01 ‖r^{(i)}‖, standard and tuned preconditioner (incomplete LU).

23 Convection-Diffusion operator. Figure: Inner iterations vs outer iterations (right preconditioning, 569 iterations). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

24 Convection-Diffusion operator. Figure: Inner iterations vs outer iterations (right preconditioning, 569 iterations; tuned right preconditioning, 115 iterations). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

25 Outline 1 Introduction 2 Inexact inverse iteration 3 Inexact Shift-invert Arnoldi method 4 Inexact Shift-invert Arnoldi method with implicit restarts 5 Conclusions

26 The algorithm: Arnoldi's method. The Arnoldi method constructs an orthogonal basis of the k-dimensional Krylov subspace
K_k(A, q^{(1)}) = span{q^{(1)}, A q^{(1)}, A² q^{(1)}, …, A^{k−1} q^{(1)}},
A Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H = Q_{k+1} [H_k ; h_{k+1,k} e_k^H],   Q_k^H Q_k = I.

27 The algorithm: Arnoldi's method. The Arnoldi method constructs an orthogonal basis of the k-dimensional Krylov subspace
K_k(A, q^{(1)}) = span{q^{(1)}, A q^{(1)}, A² q^{(1)}, …, A^{k−1} q^{(1)}},
A Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H = Q_{k+1} [H_k ; h_{k+1,k} e_k^H],   Q_k^H Q_k = I.
Eigenvalues of H_k are approximations to (outlying) eigenvalues of A:
‖r_k‖ = ‖A x − θ x‖ = ‖(A Q_k − Q_k H_k) u‖ = |h_{k+1,k} e_k^H u|, where x = Q_k u and (θ, u) is an eigenpair of H_k.

28 The algorithm: Arnoldi's method. The Arnoldi method constructs an orthogonal basis of the k-dimensional Krylov subspace
K_k(A, q^{(1)}) = span{q^{(1)}, A q^{(1)}, A² q^{(1)}, …, A^{k−1} q^{(1)}},
A Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H = Q_{k+1} [H_k ; h_{k+1,k} e_k^H],   Q_k^H Q_k = I.
Eigenvalues of H_k are approximations to (outlying) eigenvalues of A:
‖r_k‖ = ‖A x − θ x‖ = ‖(A Q_k − Q_k H_k) u‖ = |h_{k+1,k} e_k^H u|.
At each step, one application of A to q_k: A q_k = q̃_{k+1}.
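A minimal MATLAB sketch of the basic iteration (saved as arnoldi.m); the operator may equally be A, A^{-1} or (A − σI)^{-1} applied via an inner solve.
function [Q,H] = arnoldi(A, q1, k)
% k steps of Arnoldi: A*Q(:,1:k) = Q(:,1:k+1)*H with H a (k+1) x k upper Hessenberg matrix
n = length(q1);
Q = zeros(n,k+1);  H = zeros(k+1,k);
Q(:,1) = q1/norm(q1);
for j = 1:k
    w = A*Q(:,j);                    % apply the operator to q_j
    for i = 1:j                      % modified Gram-Schmidt orthogonalisation
        H(i,j) = Q(:,i)'*w;
        w = w - H(i,j)*Q(:,i);
    end
    H(j+1,j) = norm(w);
    Q(:,j+1) = w/H(j+1,j);
end
end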

29 Example: random complex matrix of dimension n = 144 generated in Matlab:
G = numgrid('N',14); B = delsq(G); A = sprandn(B) + 1i*sprandn(B)
Figure: eigenvalues of A.

30 Figure: Arnoldi after 5 steps.

31 Figure: Arnoldi after 10 steps.

32 Figure: Arnoldi after 15 steps.

33 Figure: Arnoldi after 20 steps.

34 Figure: Arnoldi after 25 steps.

35 Figure: Arnoldi after 30 steps.

36 The algorithm (take σ = 0): Shift-invert Arnoldi's method, 𝒜 := A^{-1}. The Arnoldi method constructs an orthogonal basis of the k-dimensional Krylov subspace
K_k(A^{-1}, q^{(1)}) = span{q^{(1)}, A^{-1} q^{(1)}, (A^{-1})² q^{(1)}, …, (A^{-1})^{k−1} q^{(1)}},
A^{-1} Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H = Q_{k+1} [H_k ; h_{k+1,k} e_k^H],   Q_k^H Q_k = I.
Eigenvalues of H_k are approximations to (outlying) eigenvalues of A^{-1}:
‖r_k‖ = ‖A^{-1} x − θ x‖ = ‖(A^{-1} Q_k − Q_k H_k) u‖ = |h_{k+1,k} e_k^H u|.
At each step, one application of A^{-1} to q_k: A^{-1} q_k = q̃_{k+1}.

37 Inexact solves (Simoncini 2005; Bouras and Frayssé 2000). Wish to solve A q̃_{k+1} = q_k inexactly:
‖q_k − A q̃_{k+1}‖ = ‖d_k‖ ≤ τ_k.

38 Inexact solves (Simoncini 2005; Bouras and Frayssé 2000). Wish to solve A q̃_{k+1} = q_k inexactly:
‖q_k − A q̃_{k+1}‖ = ‖d_k‖ ≤ τ_k.
This leads to the inexact Arnoldi relation
A^{-1} Q_k = Q_{k+1} [H_k ; h_{k+1,k} e_k^H] + D_k = Q_{k+1} [H_k ; h_{k+1,k} e_k^H] + [d_1 … d_k].

39 Inexact solves (Simoncini 2005; Bouras and Frayssé 2000). Wish to solve A q̃_{k+1} = q_k inexactly:
‖q_k − A q̃_{k+1}‖ = ‖d_k‖ ≤ τ_k.
This leads to the inexact Arnoldi relation
A^{-1} Q_k = Q_{k+1} [H_k ; h_{k+1,k} e_k^H] + D_k = Q_{k+1} [H_k ; h_{k+1,k} e_k^H] + [d_1 … d_k].
For u an eigenvector of H_k:
r_k = (A^{-1} Q_k − Q_k H_k) u = q_{k+1} h_{k+1,k} e_k^H u + D_k u.

40 Inexact solves (Simoncini 2005; Bouras and Frayssé 2000). Wish to solve A q̃_{k+1} = q_k inexactly:
‖q_k − A q̃_{k+1}‖ = ‖d_k‖ ≤ τ_k.
This leads to the inexact Arnoldi relation
A^{-1} Q_k = Q_{k+1} [H_k ; h_{k+1,k} e_k^H] + D_k = Q_{k+1} [H_k ; h_{k+1,k} e_k^H] + [d_1 … d_k].
For u an eigenvector of H_k:
r_k = (A^{-1} Q_k − Q_k H_k) u = q_{k+1} h_{k+1,k} e_k^H u + D_k u.
Linear combination of the columns of D_k:
D_k u = Σ_{l=1}^{k} d_l u_l; if |u_l| is small, then ‖d_l‖ is allowed to be large!

41 Inexact solves (Simoncini 2005; Bouras and Frayssé 2000). Linear combination of the columns of D_k:
D_k u = Σ_{l=1}^{k} d_l u_l; if |u_l| is small, then ‖d_l‖ is allowed to be large!
If ‖d_l‖ |u_l| ≤ ε/k then ‖D_k u‖ < ε, and since |u_l| ≤ C(l,k) ‖r_{l−1}‖ the solve tolerance can be relaxed: for the solve q_k − A q̃_{k+1} = d_k one may allow
‖d_k‖ ≤ C (1/‖r_{k−1}‖).
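A minimal MATLAB sketch of inexact shift-invert Arnoldi (σ = 0) with this relaxation strategy; the starting vector, the ILU preconditioner and the constants eps0 and m are illustrative, and the Ritz residual estimate |h_{k+1,k} e_k^H u| drives the inner tolerance.
eps0 = 1e-12;  m = 30;  n = size(A,1);
[Lp,Up] = ilu(A, struct('type','crout','droptol',1e-3));
Q = zeros(n,m+1);  H = zeros(m+1,m);
Q(:,1) = ones(n,1)/sqrt(n);
resnrm = 1;                                         % no Ritz residual estimate yet
for k = 1:m
    tau = min(eps0/resnrm, 0.1);                    % relaxed inner tolerance
    w = gmres(A, Q(:,k), [], tau, 500, Lp, Up);     % inexact solve  A*w = q_k
    for i = 1:k                                     % modified Gram-Schmidt
        H(i,k) = Q(:,i)'*w;  w = w - H(i,k)*Q(:,i);
    end
    H(k+1,k) = norm(w);  Q(:,k+1) = w/H(k+1,k);
    [W,D] = eig(H(1:k,1:k));                        % Ritz pairs of A^{-1}
    [~,j] = max(abs(diag(D)));                      % largest Ritz value ~ eigenvalue of A closest to 0
    resnrm = abs(H(k+1,k)*W(k,j));                  % |h_{k+1,k} e_k^H u|
end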

42 The inner iteration for A P^{-1} q̃_{k+1} = q_k: Preconditioning. GMRES convergence bound:
‖q_k − A P^{-1} q̃_{k+1}^{(l)}‖ ≤ κ min_{p ∈ Π_l} max_{i=1,…,n} |p(μ_i)| ‖q_k‖.

43 The inner iteration for A P^{-1} q̃_{k+1} = q_k: Preconditioning. GMRES convergence bound:
‖q_k − A P^{-1} q̃_{k+1}^{(l)}‖ ≤ κ min_{p ∈ Π_l} max_{i=1,…,n} |p(μ_i)| ‖q_k‖,
depending on the eigenvalue clustering of A P^{-1}, the condition number, and the right hand side (initial guess).

44 The inner iteration for A P^{-1} q̃_{k+1} = q_k: Preconditioning. Introduce a preconditioner P and solve
A P^{-1} q̃_{k+1} = q_k,   P^{-1} q̃_{k+1} = q_{k+1},
using GMRES.

45 The inner iteration for A P^{-1} q̃_{k+1} = q_k: Preconditioning. Introduce a preconditioner P and solve
A P^{-1} q̃_{k+1} = q_k,   P^{-1} q̃_{k+1} = q_{k+1},
using GMRES. Tuned preconditioner: use a tuned preconditioner for Arnoldi's method satisfying P_k Q_k = A Q_k, given by
P_k = P + (A − P) Q_k Q_k^H.

46 The inner iteration for A q̃ = q. Theorem (Properties of the tuned preconditioner). Let P with P = A + E be a preconditioner for A and assume k steps of Arnoldi's method have been carried out; then k eigenvalues of A P_k^{-1} are equal to one,
[A P_k^{-1}] A Q_k = A Q_k,
and the remaining n − k eigenvalues are close to the corresponding eigenvalues of A P^{-1}.

47 The inner iteration for A q̃ = q. Theorem (Properties of the tuned preconditioner). Let P with P = A + E be a preconditioner for A and assume k steps of Arnoldi's method have been carried out; then k eigenvalues of A P_k^{-1} are equal to one,
[A P_k^{-1}] A Q_k = A Q_k,
and the remaining n − k eigenvalues are close to the corresponding eigenvalues of A P^{-1}.
Implementation: Sherman-Morrison-Woodbury formula. Only minor extra costs (one back substitution per outer iteration).
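A minimal sketch of the Sherman-Morrison-Woodbury application of P_k^{-1}, assuming P = Lp*Up (incomplete LU) and that Qk holds the current Arnoldi basis with orthonormal columns; all names are illustrative.
Pinv   = @(B) Up\(Lp\B);                        % action of P^{-1}, applied columnwise
Uk     = A*Qk - Lp*(Up*Qk);                     % (A - P)*Q_k, so that P_k = P + Uk*Qk'
PUk    = Pinv(Uk);                              % computed once per outer step
S      = eye(size(Qk,2)) + Qk'*PUk;             % small k x k capacitance matrix
Pkinv  = @(b) Pinv(b) - PUk*(S\(Qk'*Pinv(b)));  % P_k^{-1} b via Sherman-Morrison-Woodbury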

48 Numerical example: sherman5.mtx, nonsymmetric matrix from the Matrix Market library. Smallest eigenvalue: λ_1 ≈ …. Preconditioned GMRES as inner solver (both fixed tolerance and relaxation strategy), standard and tuned preconditioner (incomplete LU).

49 No tuning and standard preconditioner. Figure: Inner iterations vs outer iterations (Arnoldi tolerance 10^{-14}/m). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

50 Tuning the preconditioner. Figure: Inner iterations vs outer iterations (Arnoldi tolerance 10^{-14}/m; Arnoldi tolerance 10^{-14}, tuned). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

51 Relaxation. Figure: Inner iterations vs outer iterations (Arnoldi tolerance 10^{-14}/m; Arnoldi relaxed tolerance; Arnoldi tolerance 10^{-14}, tuned). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

52 Tuning and relaxation strategy. Figure: Inner iterations vs outer iterations (Arnoldi tolerance 10^{-14}/m; Arnoldi relaxed tolerance; Arnoldi tolerance 10^{-14}/m, tuned; Arnoldi relaxed tolerance, tuned). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

53 Ritz values of exact and inexact Arnoldi. Table: Ritz values of exact Arnoldi's method and of inexact Arnoldi's method with the tuning strategy, compared to the exact eigenvalues closest to zero, after 14 shift-invert Arnoldi steps (columns: exact eigenvalues; Ritz values, exact Arnoldi; Ritz values, inexact Arnoldi with tuning).

54 Outline 1 Introduction 2 Inexact inverse iteration 3 Inexact Shift-invert Arnoldi method 4 Inexact Shift-invert Arnoldi method with implicit restarts 5 Conclusions

55 Implicitly restarted Arnoldi (Sorensen (1992)). Exact shifts: take a (k+p)-step Arnoldi factorisation
A Q_{k+p} = Q_{k+p} H_{k+p} + q_{k+p+1} h_{k+p+1,k+p} e_{k+p}^H,
compute Λ(H_{k+p}) and select p shifts for an implicit QR iteration; implicit restart with new starting vector
q̂^{(1)} = p(A) q^{(1)} / ‖p(A) q^{(1)}‖.

56 Implicitly restarted Arnoldi (Sorensen (1992)). Exact shifts: take a (k+p)-step Arnoldi factorisation
A Q_{k+p} = Q_{k+p} H_{k+p} + q_{k+p+1} h_{k+p+1,k+p} e_{k+p}^H,
compute Λ(H_{k+p}) and select p shifts for an implicit QR iteration; implicit restart with new starting vector
q̂^{(1)} = p(A) q^{(1)} / ‖p(A) q^{(1)}‖.
Aim of IRA:
A Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H with h_{k+1,k} → 0.
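A minimal MATLAB sketch of one such restart, written with explicit shifted QR steps for clarity (in practice the implicit bulge-chasing form is used). It assumes a length m = k+p factorisation A*Q = Q*H + f*e_m' is given in the workspace and, as one possible choice, discards the p Ritz values of smallest magnitude (appropriate when the largest-magnitude eigenvalues, e.g. of A^{-1}, are wanted).
m = size(H,1);  k = m - p;                  % H is m x m, Q is n x m, f = q_{m+1}*h_{m+1,m}
ritz = eig(H);
[~,idx] = sort(abs(ritz));                  % ascending magnitude
shifts = ritz(idx(1:p));                    % p unwanted Ritz values used as shifts
V = eye(m);
for j = 1:p
    [Qj,~] = qr(H - shifts(j)*eye(m));      % one explicit shifted QR step
    H = Qj'*H*Qj;  V = V*Qj;
end
Qplus = Q*V;
f = Qplus(:,k+1)*H(k+1,k) + f*V(m,k);       % residual of the truncated factorisation
Q = Qplus(:,1:k);  H = H(1:k,1:k);          % length-k Arnoldi factorisation, ready to extend
% the new starting vector Q(:,1) equals p(A)q^(1)/||p(A)q^(1)|| with p the shift polynomial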

57 Relaxation strategy for IRA. Theorem: for any given ε ∈ ℝ with ε > 0, assume that
‖d_l‖ ≤ ε C if l > k,   ‖d_l‖ ≤ ε / R_k otherwise;
then ‖A Q_m u − Q_m u θ‖ ≤ R_m ε. Very technical. The relaxation strategy also works for IRA!

58 Tuning for implicitly restarted Arnoldi's method. Introduce a preconditioner P and solve
A P_k^{-1} q̃_{k+1} = q_k,   P_k^{-1} q̃_{k+1} = q_{k+1},
using GMRES and a tuned preconditioner satisfying P_k Q_k = A Q_k, given by
P_k = P + (A − P) Q_k Q_k^H.

59 Tuning: why does tuning help? Assume we have found an approximate invariant subspace, that is
A^{-1} Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H with h_{k+1,k} ≈ 0.

60 Tuning: why does tuning help? Assume we have found an approximate invariant subspace, that is
A^{-1} Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H with h_{k+1,k} ≈ 0.
Let A^{-1} have the upper Hessenberg form
[Q_k  Q_k^⊥]^H A^{-1} [Q_k  Q_k^⊥] = [ H_k  T_{12} ; h_{k+1,k} e_1 e_k^H  T_{22} ],
where [Q_k  Q_k^⊥] is unitary and H_k ∈ ℂ^{k×k} and T_{22} ∈ ℂ^{(n−k)×(n−k)} are upper Hessenberg.

61 Tuning: why does tuning help? Assume we have found an approximate invariant subspace, that is
A^{-1} Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H with h_{k+1,k} ≈ 0.
Let A^{-1} have the upper Hessenberg form
[Q_k  Q_k^⊥]^H A^{-1} [Q_k  Q_k^⊥] = [ H_k  T_{12} ; h_{k+1,k} e_1 e_k^H  T_{22} ],
where [Q_k  Q_k^⊥] is unitary and H_k ∈ ℂ^{k×k} and T_{22} ∈ ℂ^{(n−k)×(n−k)} are upper Hessenberg.
If h_{k+1,k} ≠ 0 then
[Q_k  Q_k^⊥]^H A P_k^{-1} [Q_k  Q_k^⊥] = [ I + O(|h_{k+1,k}|)   Q_k^H A P_k^{-1} Q_k^⊥ ; O(|h_{k+1,k}|)   T_{22}^{-1} (Q_k^{⊥H} P Q_k^⊥)^{-1} + O(|h_{k+1,k}|) ].

62 Tuning: why does tuning help? Assume we have found an approximate invariant subspace, that is
A^{-1} Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H with h_{k+1,k} ≈ 0.
Let A^{-1} have the upper Hessenberg form
[Q_k  Q_k^⊥]^H A^{-1} [Q_k  Q_k^⊥] = [ H_k  T_{12} ; h_{k+1,k} e_1 e_k^H  T_{22} ],
where [Q_k  Q_k^⊥] is unitary and H_k ∈ ℂ^{k×k} and T_{22} ∈ ℂ^{(n−k)×(n−k)} are upper Hessenberg.
If h_{k+1,k} = 0 then
[Q_k  Q_k^⊥]^H A P_k^{-1} [Q_k  Q_k^⊥] = [ I   Q_k^H A P_k^{-1} Q_k^⊥ ; 0   T_{22}^{-1} (Q_k^{⊥H} P Q_k^⊥)^{-1} ].

63 Tuning: another advantage of tuning. The system to be solved at each step of Arnoldi's method is
A P_k^{-1} q̃_{k+1} = q_k,   P_k^{-1} q̃_{k+1} = q_{k+1}.

64 Tuning: another advantage of tuning. The system to be solved at each step of Arnoldi's method is
A P_k^{-1} q̃_{k+1} = q_k,   P_k^{-1} q̃_{k+1} = q_{k+1}.
Assuming an invariant subspace has been found (A^{-1} Q_k = Q_k H_k), then
A P_k^{-1} q_k = q_k.

65 Tuning: another advantage of tuning. The system to be solved at each step of Arnoldi's method is
A P_k^{-1} q̃_{k+1} = q_k,   P_k^{-1} q̃_{k+1} = q_{k+1}.
Assuming an invariant subspace has been found (A^{-1} Q_k = Q_k H_k), then
A P_k^{-1} q_k = q_k,
so the right hand side of the system is an eigenvector of the system matrix!

66 Tuning: another advantage of tuning. The system to be solved at each step of Arnoldi's method is
A P_k^{-1} q̃_{k+1} = q_k,   P_k^{-1} q̃_{k+1} = q_{k+1}.
Assuming an invariant subspace has been found (A^{-1} Q_k = Q_k H_k), then
A P_k^{-1} q_k = q_k,
so the right hand side of the system is an eigenvector of the system matrix, and Krylov methods converge in one iteration.

67 Tuning: another advantage of tuning. In practice
A^{-1} Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H and ‖A P_k^{-1} q_k − q_k‖ = O(|h_{k+1,k}|).

68 Tuning: another advantage of tuning. In practice
A^{-1} Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H and ‖A P_k^{-1} q_k − q_k‖ = O(|h_{k+1,k}|),
so the number of inner iterations decreases as the outer iteration proceeds.

69 Tuning: another advantage of tuning. In practice
A^{-1} Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H and ‖A P_k^{-1} q_k − q_k‖ = O(|h_{k+1,k}|),
so the number of inner iterations decreases as the outer iteration proceeds. Rigorous analysis is quite technical.

70 Numerical example: sherman5.mtx, nonsymmetric matrix from the Matrix Market library. k = 8 eigenvalues closest to zero, IRA with exact shifts, p = 4. Preconditioned GMRES as inner solver (fixed tolerance and relaxation strategy), standard and tuned preconditioner (incomplete LU).
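For orientation, a sketch of a comparable fixed-tolerance experiment using MATLAB's built-in implicitly restarted Arnoldi (eigs): the exact solve with A is replaced by an ILU-preconditioned GMRES solve wrapped in a function handle. This reproduces the flavour of the experiment, not the talk's own implementation; the names and tolerances are illustrative.
n = size(A,1);
[Lp,Up] = ilu(A, struct('type','crout','droptol',1e-3));   % incomplete LU preconditioner
solveA = @(x) gmres(A, x, [], 1e-12, 300, Lp, Up);         % inexact A\x, fixed tolerance
opts.p = 12;                                               % k + p = 8 + 4 basis vectors
d = eigs(solveA, n, 8, 0, opts);                           % 8 eigenvalues closest to 0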

71 No tuning and standard preconditioner. Figure: Inner iterations vs outer iterations (Arnoldi tolerance 10^{-12}). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

72 Tuning. Figure: Inner iterations vs outer iterations (Arnoldi tolerance 10^{-12}; Arnoldi 10^{-12}, tuned). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

73 Relaxation. Figure: Inner iterations vs outer iterations (Arnoldi tolerance 10^{-12}; Arnoldi relaxed tolerance; Arnoldi 10^{-12}, tuned). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

74 Tuning and relaxation strategy. Figure: Inner iterations vs outer iterations (Arnoldi tolerance 10^{-12}; Arnoldi relaxed tolerance; Arnoldi 10^{-12}, tuned; Arnoldi relaxed tolerance, tuned). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

75 Numerical example: qc2534.mtx, matrix from the Matrix Market library. k = 6 eigenvalues closest to zero, IRA with exact shifts, p = 4. Preconditioned GMRES as inner solver (fixed tolerance and relaxation strategy), standard and tuned preconditioner (incomplete LU).

76 Tuning and relaxation strategy. Figure: Inner iterations vs outer iterations (Arnoldi tolerance 10^{-12}; Arnoldi relaxed tolerance; Arnoldi 10^{-12}, tuned; Arnoldi relaxed tolerance, tuned). Figure: Eigenvalue residual norms ‖r^{(i)}‖ vs total number of inner iterations.

77 Outline 1 Introduction 2 Inexact inverse iteration 3 Inexact Shift-invert Arnoldi method 4 Inexact Shift-invert Arnoldi method with implicit restarts 5 Conclusions

78 Conclusions. For eigenvalue computations it is advantageous to consider small rank changes to the standard preconditioners. This works for any preconditioner and for shift-invert versions of the power method, simultaneous iteration and the Arnoldi method. Inexact inverse iteration with a special tuned preconditioner is equivalent to the Jacobi-Davidson method (without subspace expansion). For the Arnoldi method the best results are obtained when relaxation and tuning are combined.

79 References:
M. A. Freitag and A. Spence, Rayleigh quotient iteration and simplified Jacobi-Davidson method with preconditioned iterative solves.
M. A. Freitag and A. Spence, Convergence rates for inexact inverse iteration with application to preconditioned iterative solves, BIT, 47 (2007).
M. A. Freitag and A. Spence, A tuned preconditioner for inexact inverse iteration applied to Hermitian eigenvalue problems, IMA Journal of Numerical Analysis, 28 (2008), p. 522.
M. A. Freitag and A. Spence, Shift-invert Arnoldi's method with preconditioned iterative solves, SIAM Journal on Matrix Analysis and Applications, 31 (2009).
F. Xue and H. C. Elman, Convergence analysis of iterative solvers in inexact Rayleigh quotient iteration, SIAM J. Matrix Anal. Appl., 31 (2009).
F. Xue and H. C. Elman, Fast inexact subspace iteration for generalized eigenvalue problems with spectral transformation, Linear Algebra and its Applications (2010).
