Jacobi's Ideas on Eigenvalue Computation in a modern context

Henk van der Vorst (vorst@math.uu.nl)
Mathematical Institute, Utrecht University
June 3, 2006

General remarks

Ax = λx
Nonlinear problem: for n > 4 there is no explicit solution
Essentially iterative methods

Oldest methods: Leverrier 1840, Jacobi 1845-1846
No matrix notation at that time

Master's thesis of Anjet de Boer, 1991, Utrecht

Early paper by Leverrier (1811-1877)

"Sur les Variations séculaires des Éléments elliptiques des sept Planètes principales: Mercure, Vénus, la Terre, Mars, Jupiter, Saturne et Uranus", 1840
based on Laplace's work (1789)

Perturbations to the orbits of the planets caused by the presence of the other planets
Linear eigensystem from a system of 7 differential equations
Coefficients of the characteristic polynomial
He neglected some small elements: factors of degree 3 and 4

Papers by Jacobi (1804-1851)

"Über eine neue Auflösungsart der bei der Methode der kleinsten Quadrate vorkommenden linearen Gleichungen", 1845
Inspired by the work of Gauss (1823)
New method for the solution of symmetric linear systems; Jacobi rotations as a preconditioner for the Gauss-Jacobi method
He announces the application to eigenproblems

Second paper by Jacobi

"Über ein leichtes Verfahren, die in der Theorie der Säcularstörungen vorkommenden Gleichungen numerisch aufzulösen", 1846
based on earlier work of Lagrange (1778) and Cauchy (1829)

He applied his 1845 method to the system studied by Leverrier
Claim: an easier and more accurate method (unsupported); refers to Leverrier's work

Bodewig (1951): Jacobi knew his methods before 1840 (inconclusive)
Evidence: a letter from Schumacher to Gauss (1842)

Jacobi method (1)

The Jacobi rotations appear to have been forgotten until 1950
Whittaker (1924) described Gauss-Jacobi
Von Mises (1929) described the method without reference to Jacobi
The Jacobi (rotation) method was forgotten, but Jacobi had described the two methods as one single algorithm

Jacobi method (2)

In 1951 Goldstine presented the rotation method, joint work with Murray and Von Neumann
Ostrowski pointed out that they had reinvented Jacobi's method
Also Runge, Hessenberg, Krylov, Magnier and Bodewig knew the method
Bodewig (1950, 1951) described the full Jacobi method; he claimed the rediscovery

Jacobi in Matrix Notation

(1) First plane rotations to make A diagonally dominant
Suppose that a_{1,1} is the largest element; then λ ≈ a_{1,1} and x ≈ e_1 (Ax = λx)

(2) Consider the orthogonal complement of e_1:

    A [1; w] = [a_{1,1}  c^T; c  F] [1; w] = λ [1; w]

which leads to

    λ = a_{1,1} + c^T w
    (F - λI) w = -c

Jacobi (2)

Start with w = 0, θ = a_{1,1}
Solve w from (F - θI) w = -c with Gauss-Jacobi iterations
Jacobi applied 10 rotations before the switch to Gauss-Jacobi
He applied 2 Gauss-Jacobi steps before updating θ
Both decisions without further comment

Bodewig (1959) advocated this method (without success?)
Quadratic convergence of the Jacobi rotations already fast enough?
Goldstine suggested Jacobi's rotations only for proving that the eigenvalues are real
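
Read in modern matrix terms, Jacobi's 1846 scheme alternates a few Gauss-Jacobi sweeps on the correction equation with an update of the eigenvalue estimate. The NumPy sketch below is one possible reading of it; the function name, iteration counts and stopping test are illustrative choices (not Jacobi's), and the preliminary rotations that make A diagonally dominant are assumed to have been done already.

```python
import numpy as np

def jacobi_1846_sketch(A, n_outer=20, n_gj=2, tol=1e-10):
    """Illustrative sketch (not Jacobi's actual computation): assume A is
    already (nearly) diagonally dominant, take theta ~ a_{1,1}, x ~ e_1, and
    improve the complement part w via Gauss-Jacobi steps on (F - theta I) w = -c."""
    a11 = A[0, 0]
    c = A[1:, 0].astype(float)        # coupling between e_1 and its complement
    F = A[1:, 1:].astype(float)       # restriction of A to the complement of e_1
    dF = np.diag(F).copy()
    w = np.zeros_like(c)
    theta = a11
    for _ in range(n_outer):
        # a few Gauss-Jacobi sweeps on (F - theta*I) w = -c, with theta kept fixed
        for _ in range(n_gj):
            w = (-c - (F @ w - dF * w)) / (dF - theta)
        theta_new = a11 + c @ w       # theta = a_{1,1} + c^T w
        if abs(theta_new - theta) < tol:
            theta = theta_new
            break
        theta = theta_new
    x = np.concatenate(([1.0], w))
    return theta, x / np.linalg.norm(x)
```

On a symmetric, strongly diagonally dominant matrix this fixed-point iteration typically converges to the eigenvalue closest to a_{1,1}; it is the same correction idea that reappears later in the talk in the Jacobi-Davidson correction equation.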

Krylov subspaces (1)

Krylov suggested in 1931 the subspace

    K_m(A; x) = span{x, Ax, ..., A^{m-1} x}

for some convenient starting vector x, for the construction of the characteristic polynomial

Ill-conditioned basis, but in his case m = 6
How to make things work for large m?
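
To see why the raw power basis is problematic for larger m, a quick experiment (illustrative only, not from the talk; the test matrix is an arbitrary choice): even with normalized columns, the Krylov matrix [x, Ax, ..., A^{m-1}x] rapidly loses numerical rank.

```python
import numpy as np

n = 100
A = np.diag(np.linspace(1.0, 100.0, n))   # arbitrary test matrix with spread-out eigenvalues
x = np.ones(n)

vecs, v = [x], x
for m in range(2, 13):
    v = A @ v                             # next power: x, Ax, A^2 x, ...
    vecs.append(v)
    K = np.column_stack(vecs)
    K = K / np.linalg.norm(K, axis=0)     # normalize columns; conditioning is still bad
    print(m, np.linalg.cond(K))           # condition number grows rapidly with m
```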

Krylov subspaces (2)

In the early 1950s: orthogonal basis
It does not help to build the basis first

Start with v_1 = x / ||x||
Form A v_1 and orthogonalize with respect to v_1; normalize: v_2 (so far nothing new!)
Instead of A^2 v_1, compute A v_2
Orthogonalize with respect to v_1, v_2 and normalize: v_3

Krylov subspaces (3)

The general step is: form A v_i, orthogonalize with respect to v_1, ..., v_i, normalize: v_{i+1}
(modified) Gram-Schmidt orthogonalization
Results in a well-conditioned basis (Stewart, SIAM books)

Krylov methods (3)

Write V_m = [v_1 v_2 ... v_m]
Then Gram-Schmidt in matrix notation:

    A V_m = V_m H_m + c_m v_{m+1} e_m^T

Note that H_m = V_m^T A V_m
The eigenvalues θ of H_m: approximations for eigenvalues of A
If H_m y = θ y, then z = V_m y is an approximation for an eigenvector of A

A symmetric: LANCZOS METHOD (1950)
A unsymmetric: ARNOLDI METHOD (1951)
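
A compact sketch of this construction (a dense NumPy Arnoldi with modified Gram-Schmidt, no restarting, reorthogonalization or breakdown handling) could look as follows; it is meant only to make the recursion above concrete.

```python
import numpy as np

def arnoldi(A, x, m):
    """Orthonormal basis V of K_m(A; x) via modified Gram-Schmidt, plus the
    small projected matrix H = V^T A V (upper Hessenberg; tridiagonal if A = A^T)."""
    n = len(x)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = x / np.linalg.norm(x)
    for j in range(m):
        w = A @ V[:, j]                      # expand with A v_j (not A^j v_1)
        for i in range(j + 1):               # orthogonalize against v_1, ..., v_{j+1}
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # exact invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Ritz approximations: if H y = theta y, then theta approximates an eigenvalue of A
# and z = V y approximates a corresponding eigenvector, e.g.
#   theta, Y = np.linalg.eig(H);  Z = V @ Y
```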

Davidson's subspace

Krylov subspaces became popular after 1976 (Paige)

Davidson (1975) suggested another subspace:
Compute the residual r = A z - θ z
Precondition r (cf. inverse iteration): t = (D_A - θI)^{-1} r
Orthonormalize t and expand the subspace

Claim: Newton method (?) Preconditioned Arnoldi?
Davidson opened the way for other subspaces
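
A minimal sketch of Davidson's iteration as just described, for a symmetric A and targeting the largest eigenvalue (dense NumPy, no restarting; the function name and defaults are ad hoc choices, not Davidson's code):

```python
import numpy as np

def davidson(A, x0, n_iter=30, tol=1e-8):
    """Davidson (1975)-style subspace iteration with the diagonal
    preconditioner (D_A - theta I)^{-1}; symmetric A, largest eigenvalue."""
    V = (x0 / np.linalg.norm(x0))[:, None]
    d = np.diag(A)
    theta, z = 0.0, x0
    for _ in range(n_iter):
        H = V.T @ A @ V                     # projected matrix V^T A V
        vals, vecs = np.linalg.eigh(H)
        theta, y = vals[-1], vecs[:, -1]    # Ritz pair of interest
        z = V @ y
        r = A @ z - theta * z               # residual r = A z - theta z
        if np.linalg.norm(r) < tol:
            break
        t = r / (d - theta)                 # t = (D_A - theta I)^{-1} r
                                            # (unstable if theta hits a diagonal entry)
        t = t - V @ (V.T @ t)               # orthogonalize against the current basis
        V = np.hstack([V, (t / np.linalg.norm(t))[:, None]])
    return theta, z
```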

Davidson - numerical analysis

r = (A - θI) z
t = (D_A - θI)^{-1} r   versus   (A - θI)^{-1} r = z
With the preconditioner (A - θI)^{-1}: no expansion of the subspace

Insightful paper by Crouzeix, Philippe, Sadkane (1994)
Analysis for t = M_k^{-1} r
M_k should not be close to A - θI
Suspect! But successful for chemistry problems

Idea: apply a preconditioner instead of Jacobi rotations, and use Jacobi's idea for the new update of z

Jacobi-Davidson

In Jacobi's case: e_1 is the approximation for x
In a subspace method we have the approximation z
Jacobi computes the update in the subspace orthogonal to e_1
Sleijpen and Van der Vorst (1996): compute the update orthogonal to z

(A - θI) restricted to the orthogonal complement of z is given by

    B = (I - z z*)(A - θI)(I - z z*)

Expand the subspace with the (approximate) solution of B t = -r

Jacobi-Davidson method, SIMAX 1996
Newton method for the Rayleigh quotient
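
A correspondingly minimal Jacobi-Davidson sketch, again for a symmetric A and the largest eigenvalue. Here the projected correction equation B t = -r is solved with a dense least-squares solve (B is singular on span{z}); in practice one would use a few steps of a preconditioned iterative solver such as GMRES instead. All names and defaults are illustrative.

```python
import numpy as np

def jacobi_davidson(A, x0, n_iter=30, tol=1e-8):
    """Basic Jacobi-Davidson sketch (in the spirit of Sleijpen & van der Vorst, 1996):
    expand the search space with the solution of the projected correction
    equation (I - zz^T)(A - theta I)(I - zz^T) t = -r, with t orthogonal to z."""
    n = A.shape[0]
    I = np.eye(n)
    V = (x0 / np.linalg.norm(x0))[:, None]
    theta, z = 0.0, x0
    for _ in range(n_iter):
        H = V.T @ A @ V
        vals, vecs = np.linalg.eigh(H)
        theta, y = vals[-1], vecs[:, -1]            # Ritz pair (largest eigenvalue)
        z = V @ y
        r = A @ z - theta * z
        if np.linalg.norm(r) < tol:
            break
        P = I - np.outer(z, z)                      # projector onto the complement of z
        B = P @ (A - theta * I) @ P
        t = np.linalg.lstsq(B, -r, rcond=None)[0]   # min-norm solve; t lies in z-perp
        t = t - V @ (V.T @ t)                       # orthogonalize against the basis
        nt = np.linalg.norm(t)
        if nt < 1e-14:
            break
        V = np.hstack([V, (t / nt)[:, None]])
    return theta, z
```

The point of the projection is exactly the remark on the next slide: unlike A - θI, the projected operator has no eigenvalues close to zero, so a short inner iteration on the correction equation remains effective.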

Numerical example

n = 100, A = tridiag(1, 2.4, 1), x = (1, 1, ..., 1)^T

Davidson, M_k = A - θ_k I: stagnation
Jacobi-Davidson, M_k = A - θ_k I: 5 iterations
Davidson, preconditioned with GMRES(5) for (A - θ_k I) t = r: slow convergence (since θ_k ≈ λ)
Jacobi-Davidson, GMRES(5) for F t = -r with F = (I - z z^T)(A - θ_k I)(I - z z^T): 13 iterations
Note that F has no small eigenvalues
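
The test problem of this slide is easy to set up if one wants to try the sketches above; only the matrix and start vector are specified on the slide, the solver settings are not.

```python
import numpy as np

n = 100
A = (2.4 * np.eye(n)
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))      # A = tridiag(1, 2.4, 1)
x = np.ones(n)                           # x = (1, 1, ..., 1)^T

# For reference: the eigenvalues of tridiag(1, a, 1) are a + 2 cos(k*pi/(n+1)), k = 1..n.
exact = 2.4 + 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
print(exact.min(), exact.max())
```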

More practical example

Acoustics, attachment line: A x + λ B x + λ^2 C x = 0
For a problem coming from acoustics: A, C 19-diagonal, B complex, n = 136161

Results for an interior, isolated eigenvalue (resonance) on a CRAY T3D:

    Processors    Elapsed time (sec)
    16            206.4
    32            101.3
    64             52.1

For n = 274625, on 64 processors: 93.3 seconds (one invert step: 3 hours)