A Jacobi-Davidson method for two real parameter nonlinear eigenvalue problems arising from delay differential equations


Heinrich Voss (voss@tuhh.de), joint work with Karl Meerbergen (KU Leuven) and Christian Schröder (TU Berlin)
Hamburg University of Technology, Institute of Mathematics
TUHH, Beijing, May 2012.

Outline
1 Problem definition
2 Small critical delay problems
3 Iterative projection methods for nonlinear eigenproblems
4 A Jacobi-Davidson-type method for the two-real-parameter EVP
5 Numerical experience


Problem definition
Consider the linear time-invariant delay differential equation

Mẋ(t) + Ax(t) + Bx(t − τ) = 0

with given M, A, B ∈ C^{n×n}; τ ≥ 0 is the delay.

This system is (asymptotically) stable if, for every bounded initial condition, x(t) → 0 as t → ∞.

A necessary and sufficient condition is that the spectrum of the nonlinear eigenvalue problem

λMu + Au + e^{−τλ} Bu = 0

is contained in the open left half plane.
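To make the eigenvalue condition concrete, here is a minimal sketch for a hypothetical scalar example (n = 1, M = 1, A = −2, B = 0.5, τ = 1; these numbers are our own, not from the talk), where the nonlinear eigenvalue problem reduces to the scalar characteristic equation λ − 2 + 0.5·e^{−λ} = 0, solved by Newton's method:

```python
import numpy as np

# Hypothetical scalar example: M = 1, A = -2, B = 0.5, delay tau = 1.
# lambda*M*u + A*u + exp(-tau*lambda)*B*u = 0 becomes f(lambda) = 0 below.
a, b, tau = -2.0, 0.5, 1.0

def f(lam):
    return lam + a + b * np.exp(-tau * lam)

def fprime(lam):
    return 1.0 - tau * b * np.exp(-tau * lam)

lam = 1.0                       # rough starting guess
for _ in range(50):             # Newton iteration on the characteristic equation
    lam = lam - f(lam) / fprime(lam)

print(lam, abs(f(lam)))         # converged root and its residual
```

The computed root lies in the right half plane, so this particular toy system is unstable.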

Problem cnt.
Approach at hand: homotopy, i.e. follow the eigenvalues close to the imaginary axis as τ changes.

Problem: expensive and unreliable; a different eigenvalue curve may have crossed the imaginary axis before the followed one reaches it.

Wanted: critical delays τ where the system changes stability; a necessary condition is that the system has a purely imaginary eigenvalue.


Solving small critical delay problems
Recall the DEVP

(iωM + A + e^{−iωτ} B)u = 0

where M, A, B ∈ C^{n×n}. We are interested in solutions (ω, τ, u) ∈ R × R × C^n, u ≠ 0.

Introducing the parameter µ = e^{−iωτ} translates the problem into the two-parameter problem

iωMu + Au + µBu = 0.

Note that µ lies on the unit circle, thus µ̄ = µ^{−1}. Hence the complex conjugate equation reads

−iωM̄ū + Āū + µ^{−1} B̄ū = 0.

Solving small critical delay problems
Using Kronecker products we eliminate ω:

0 = (iωMu) ⊗ (M̄ū) + (Mu) ⊗ (−iωM̄ū)
  = −(Au + µBu) ⊗ (M̄ū) − (Mu) ⊗ (Āū + µ^{−1}B̄ū)
  = −((A + µB) ⊗ M̄ + M ⊗ (Ā + µ^{−1}B̄))(u ⊗ ū),

a rational eigenvalue problem in µ. Expanding and multiplying by µ yields the quadratic eigenvalue problem

µ²(B ⊗ M̄)z + µ(A ⊗ M̄ + M ⊗ Ā)z + (M ⊗ B̄)z = 0.

We have thus shown: if (ω, τ, u) is a solution of the DEVP and µ = e^{−iωτ} (which is on the unit circle), then (µ, u ⊗ ū) is an eigenpair of the quadratic eigenvalue problem (QEVP) above.
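The derivation can be checked numerically. The sketch below uses hypothetical random test data: A is constructed so that the two-parameter equation holds exactly for a chosen (ω, µ, u), and the residual then confirms that z = u ⊗ ū solves the QEVP:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Pick M, B, u, omega and a unimodular mu freely, then choose A so that
# i*omega*M u + A u + mu*B u = 0 holds exactly for this (omega, mu, u).
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
omega, mu = 0.7, np.exp(1j * 0.3)

R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
w = R @ u + 1j * omega * (M @ u) + mu * (B @ u)
A = R - np.outer(w, u.conj()) / (u.conj() @ u)   # rank-one correction of R
assert np.linalg.norm(1j * omega * M @ u + A @ u + mu * B @ u) < 1e-10

# QEVP coefficients: mu^2 (B x conj(M)) + mu (A x conj(M) + M x conj(A)) + M x conj(B)
A2 = np.kron(B, M.conj())
A1 = np.kron(A, M.conj()) + np.kron(M, A.conj())
A0 = np.kron(M, B.conj())

z = np.kron(u, u.conj())
res = (mu**2 * A2 + mu * A1 + A0) @ z
print(np.linalg.norm(res))    # tiny: z = u (x) conj(u) solves the QEVP
```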

Solving small critical delay problems
The converse is also true:

Theorem. Let M, A, B ∈ C^{n×n} with M nonsingular. Then any eigenvector z ∈ C^{n²} of the QEVP corresponding to a simple eigenvalue µ ∈ C can be written as z = α u₁ ⊗ u₂ for some vectors u₁, u₂ ∈ C^n and some α ∈ C. Moreover, if |µ| = 1 then u₁ = ū₂ =: u and there is an ω ∈ R such that the DEVP holds.

We have thus transformed the problem of finding solutions of the DEVP into the problem of finding eigenvalues of modulus one of a QEVP. This can be done by a structure-preserving method similar to the one presented in Fassbender, Mackey, Mackey, Schröder (2008).

Solving small critical delay problems
Once an eigenpair (µ, z) of the QEVP is known, u can be recovered from z = α u ⊗ ū. One possibility is to compute the dominant singular vector of the n × n matrix Z with z = vec(Z). Alternatively, one could choose u as a column of Z, scaled such that u has norm one.

Subsequently, ω can be obtained by projection, i.e.

ω = −Im((Mu)^H (Au + µBu)) / ‖Mu‖₂².

Finally, τ may be computed from µ and ω as τ = −Im(ln(µ)) / ω.
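A minimal sketch of these recovery steps, again on hypothetical constructed data with known solution (ω, τ) = (0.7, 1.3); the eigenvector is scaled by an arbitrary α to mimic what an eigensolver would return:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Constructed data: (i*omega*M + A + mu*B) u = 0 holds exactly by design.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u /= np.linalg.norm(u)
omega, tau = 0.7, 1.3
mu = np.exp(-1j * omega * tau)
R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
w = R @ u + 1j * omega * M @ u + mu * B @ u
A = R - np.outer(w, u.conj())              # u has unit norm

# A QEVP eigenvector is only known up to scale: z = alpha * (u (x) conj(u))
alpha = 2.0 - 0.5j
z = alpha * np.kron(u, u.conj())

# Recover u as dominant left singular vector of the n-by-n matricization of z
Z = z.reshape(n, n)
U_, s, Vh = np.linalg.svd(Z)
u_rec = U_[:, 0]
assert s[1] < 1e-12                        # Z has numerical rank one

# omega by projection, tau from the argument of mu
omega_rec = -np.imag((M @ u_rec).conj() @ (A @ u_rec + mu * B @ u_rec)) \
            / np.linalg.norm(M @ u_rec) ** 2
tau_rec = -np.imag(np.log(mu)) / omega_rec
print(omega_rec, tau_rec)
```

The unknown phase of the recovered singular vector cancels in the projection formula for ω, so (ω, τ) come out independent of the scaling α.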

Structure-preserving method for the QEVP

µ²(B ⊗ M̄)z + µ(A ⊗ M̄ + M ⊗ Ā)z + (M ⊗ B̄)z = 0.

Recall that the Kronecker product does, in general, not commute: A ⊗ B ≠ B ⊗ A. However, there is a symmetric permutation matrix P such that A ⊗ B = P(B ⊗ A)P. Hence, writing A₂ = B ⊗ M̄, A₁ = A ⊗ M̄ + M ⊗ Ā and A₀ = M ⊗ B̄, the QEVP can be rewritten as

(µ²A₂ + µA₁ + A₀)z = 0, where A₂ = PĀ₀P, A₁ = PĀ₁P,

with P = P^{−1} = P^T ∈ R^{n²×n²}. Such problems are called PCP-palindromic eigenvalue problems.
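The permutation P is the perfect-shuffle (commutation) matrix. A sketch constructing it and checking both the commutation identity and the PCP structure of the QEVP coefficients (the helper name `shuffle` is ours):

```python
import numpy as np

def shuffle(n):
    """Perfect-shuffle permutation P with kron(A, B) = P @ kron(B, A) @ P."""
    P = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            P[i * n + j, j * n + i] = 1.0
    return P

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = shuffle(n)

assert np.allclose(P, P.T) and np.allclose(P @ P, np.eye(n * n))  # symmetric involution
assert np.allclose(np.kron(A, B), P @ np.kron(B, A) @ P)          # commutation identity

# PCP structure of the QEVP coefficients: A2 = P conj(A0) P, A1 = P conj(A1) P
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A0 = np.kron(M, B.conj())
A2 = np.kron(B, M.conj())
A1 = np.kron(A, M.conj()) + np.kron(M, A.conj())
assert np.allclose(A2, P @ A0.conj() @ P)
assert np.allclose(A1, P @ A1.conj() @ P)
print("PCP structure verified")
```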

Structure-preserving method for the QEVP
The method can be divided into three steps.

First, the quadratic eigenvalue problem is reformulated as the linear pencil

( µ [0 P; P 0] [Ā₁−Ā₂ Ā₀; Ā₀ Ā₀] [0 P; P 0] + [A₁−A₂ A₀; A₀ A₀] ) [µz; z] = 0,

which is a linear PCP-palindromic problem, because with P also [0 P; P 0] is a real symmetric permutation.

Second, using the factorization [0 P; P 0] = UU^T, where

U = (1/√2) [I iP; P −iI],

we define

C := U^H [A₁−A₂ A₀; A₀ A₀] Ū.

Structure-preserving method for the QEVP
Since U is unitary, i.e. U^H U = I = U^T Ū, this pencil is equivalent to

U^H ( µ [0 P; P 0] [Ā₁−Ā₂ Ā₀; Ā₀ Ā₀] [0 P; P 0] + [A₁−A₂ A₀; A₀ A₀] ) Ū
  = µ U^T [Ā₁−Ā₂ Ā₀; Ā₀ Ā₀] U + U^H [A₁−A₂ A₀; A₀ A₀] Ū
  = µ C̄ + C.

Third, consider an eigenpair (θ, x) of the real generalized eigenvalue problem

Re(C) x = θ Im(C) x, or, equivalently, Cx + ((i + θ)/(i − θ)) C̄x = 0.

Note that µ := (i + θ)/(i − θ) is on the unit circle if and only if θ is real or θ = ∞ (the latter resulting in µ = −1). Note also that real simple eigenvalues of real pencils can be stably computed by, e.g., the real QZ algorithm.

Structure-preserving method for the QEVP
Require: M, A, B ∈ C^{n×n}
Ensure: solutions (ω_j, τ_j, u_j), j = 1, ..., of (iωM + A + e^{−iωτ}B)u = 0
1: A₀ = M ⊗ B̄, A₁ = A ⊗ M̄ + M ⊗ Ā
2: Construct the permutation P
3: C = (1/2) [I iP; P −iI]^H [A₁ − PĀ₀P A₀; A₀ A₀] [I −iP; P iI]
4: Compute all eigenpairs (θ_j, x_j) of Re(C)x = θ Im(C)x where θ_j is real
5: for j = 1, ... do
6:   µ_j = (i + θ_j)/(i − θ_j)
7:   z_j = [I −iP] x_j
8:   Compute u_j as the dominant singular vector of mat(z_j)
9:   ω_j = −Im((Mu_j)^H (Au_j + µ_j Bu_j)) / ‖Mu_j‖₂²
10:  τ_j = −Im(ln(µ_j)) / ω_j
11: end for

The dominant computational part is step 4, with a complexity of O(n⁶).
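A dense Python sketch of the whole procedure for small n, under the assumptions flagged in the comments (in particular that Im(C) is invertible, so the generalized problem can be reduced to a standard one); `critical_delays` and `shuffle` are our names, not from the talk. It is checked on a constructed example with known solution (ω, τ) = (0.7, 1.3):

```python
import numpy as np

def shuffle(n):
    """Perfect-shuffle permutation P with kron(A, B) = P @ kron(B, A) @ P."""
    P = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            P[i * n + j, j * n + i] = 1.0
    return P

def critical_delays(M, A, B):
    """Dense O(n^6) sketch of the structure-preserving method."""
    n = M.shape[0]
    N = n * n
    P = shuffle(n)
    A0 = np.kron(M, B.conj())
    A1 = np.kron(A, M.conj()) + np.kron(M, A.conj())
    A2 = P @ A0.conj() @ P                       # = kron(B, conj(M))
    W = np.block([[A1 - A2, A0], [A0, A0]])      # plain block of the palindromic pencil
    I = np.eye(N)
    U = np.block([[I, 1j * P], [P, -1j * I]]) / np.sqrt(2.0)
    C = U.conj().T @ W @ U.conj()                # C = U^H W conj(U)
    # Real pencil Re(C) x = theta Im(C) x; Im(C) is assumed invertible here.
    theta, X = np.linalg.eig(np.linalg.solve(C.imag, C.real))
    sols = []
    for t, x in zip(theta, X.T):
        if abs(t.imag) > 1e-8:                   # only real theta give |mu| = 1
            continue
        t = t.real
        mu = (1j + t) / (1j - t)
        z = np.hstack([I, -1j * P]) @ x          # proportional to u (x) conj(u)
        Uz, s, _ = np.linalg.svd(z.reshape(n, n))
        u = Uz[:, 0]                             # dominant singular vector
        Mu = M @ u
        omega = -np.imag(Mu.conj() @ (A @ u + mu * B @ u)) / np.linalg.norm(Mu) ** 2
        tau = -np.imag(np.log(mu)) / omega
        sols.append((omega, tau, u))
    return sols

# Constructed example with known solution (omega, tau) = (0.7, 1.3)
rng = np.random.default_rng(5)
n = 2
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u /= np.linalg.norm(u)
omega0, tau0 = 0.7, 1.3
mu0 = np.exp(-1j * omega0 * tau0)
R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
w = R @ u + 1j * omega0 * M @ u + mu0 * B @ u
A = R - np.outer(w, u.conj())                    # (i*omega0*M + A + mu0*B) u = 0

sols = critical_delays(M, A, B)
ok = any(abs(om - omega0) < 1e-6 and abs(t - tau0) < 1e-6 for om, t, _ in sols)
print("known solution recovered:", ok)
```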


Iterative projection methods
For linear sparse eigenproblems T(λ) = λB − A, very efficient methods are iterative projection methods (e.g. Lanczos, Arnoldi, and Jacobi-Davidson), where approximations to the wanted eigenvalues and eigenvectors are obtained from projections of the eigenproblem onto subspaces of small dimension, which are expanded in the course of the algorithm.

Generalizations to nonlinear sparse eigenproblems:
- nonlinear rational Krylov: Ruhe (2000, 2005), Jarlebring, V. (2005)
- Arnoldi method: quadratic problems: Meerbergen (2001); general problems: V. (2003, 2004), Liao, Bai, Lee, Ko (2006), Liao (2007)
- Jacobi-Davidson: polynomial problems: Sleijpen, Booten, Fokkema, van der Vorst (1996), Hwang, Lin, Wang, Wang (2004, 2005); general problems: T. Betcke, V. (2004), V. (2004, 2007), Schwetlick, Schreiber (2006), Schreiber (2008)

Iterative projection method
Require: initial basis V with V^H V = I; set m = 1
1: while m ≤ number of wanted eigenvalues do
2:   compute an eigenpair (µ, y) of the projected problem V^H T(λ)V y = 0
3:   determine the Ritz vector u = Vy, ‖u‖ = 1, and the residual r = T(µ)u
4:   if ‖r‖ < ε then
5:     accept the approximate eigenpair λ_m = µ, x_m = u; increase m ← m + 1
6:     reduce the search space V if necessary
7:     choose an approximation (λ_m, u) to the next eigenpair, and compute r = T(λ_m)u
8:   end if
9:   expand the search space V = [V, v_new]
10:  update the projected problem
11: end while

Main tasks: expand the search space; choose the eigenpair of the projected problem (locking, purging).
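A minimal sketch of this loop for the simplest possible instance, T(λ) = λI − A with A symmetric, expanding by the orthogonalized residual (a Davidson-style choice made here for simplicity; real applications use the nonlinear T and a smarter expansion, as discussed next):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                               # symmetric test matrix

v0 = rng.standard_normal(n)
V = (v0 / np.linalg.norm(v0)).reshape(n, 1)     # initial orthonormal basis

for _ in range(110):
    H = V.T @ A @ V                             # projected problem
    theta_all, Y = np.linalg.eigh(H)
    theta, y = theta_all[-1], Y[:, -1]          # target the largest eigenvalue
    u = V @ y                                   # Ritz vector, ||u|| = 1
    r = A @ u - theta * u                       # residual T(theta) u
    if np.linalg.norm(r) < 1e-10:
        break
    vnew = r - V @ (V.T @ r)                    # orthogonalize the new direction
    vnew -= V @ (V.T @ vnew)                    # ... and once more, for stability
    nrm = np.linalg.norm(vnew)
    if nrm < 1e-12:
        break
    V = np.hstack([V, (vnew / nrm).reshape(n, 1)])

print("search space dim:", V.shape[1], "residual:", np.linalg.norm(A @ u - theta * u))
```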

Expanding the subspace
Given a subspace V ⊂ C^n, expand V by a direction with high approximation potential for the next wanted eigenvector.

Let θ be an eigenvalue of the projected problem V^H T(λ)V y = 0 and x = Vy a corresponding Ritz vector. Then inverse iteration yields the suitable candidate

v := T(θ)^{−1} T′(θ) x.

BUT: in each step one has to solve a large linear system with a varying matrix, and in a truly large problem the vector v will not be accessible; only an inexact solution ṽ := v + e of T(θ)v = T′(θ)x is, and the next iterate will be a solution of the projection of T(λ)x = 0 onto the expanded space Ṽ := span{V, ṽ}.

Expansion of the search space ct.
We assume that x is already a good approximation to an eigenvector of T(·). Then v will be an even better approximation, and therefore the eigenvector we are looking for will be very close to the plane E := span{x, v}.

We therefore neglect the influence of the orthogonal complement of x in V on the next iterate and discuss the nearness of the planes E and Ẽ := span{x, ṽ}.

If the angle between these two planes is small, then the projection of T(λ) onto Ṽ should be similar to the one onto span{V, v}, and the approximation properties of inverse iteration should be maintained. If this angle can become large, then it is not surprising that the convergence properties of inverse iteration are not reflected by the projection method.

Theorem

Let $\varphi_0 = \arccos(x^T v)$ denote the angle between x and v, and denote the relative error of $\tilde v = v + e$ by $\varepsilon := \|e\|/\|v\|$. Then the maximal possible acute angle between the planes E and Ẽ is

$$\beta(\varepsilon) = \begin{cases} \arccos\sqrt{1 - \varepsilon^2/\sin^2\varphi_0} & \text{if } \varepsilon \le \sin\varphi_0,\\[2pt] \pi/2 & \text{if } \varepsilon \ge \sin\varphi_0. \end{cases}$$
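The theorem's bound can be checked numerically. The sketch below is an illustration, not part of the original slides: it samples random perturbations e with $\|e\| = \varepsilon\|v\|$ and compares the largest principal angle between E = span{x, v} and Ẽ = span{x, v + e} with β(ε).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.standard_normal(n); x /= np.linalg.norm(x)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
phi0 = np.arccos(abs(x @ v))          # angle between x and v

def beta(eps, phi0):
    # maximal acute angle between E and E~ according to the theorem
    if eps >= np.sin(phi0):
        return np.pi / 2
    return np.arccos(np.sqrt(1.0 - eps**2 / np.sin(phi0)**2))

def plane_angle(x, v, vt):
    # largest principal angle between span{x, v} and span{x, vt}
    Q1, _ = np.linalg.qr(np.column_stack([x, v]))
    Q2, _ = np.linalg.qr(np.column_stack([x, vt]))
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(s.min(), 0.0, 1.0))

eps = 0.1                             # relative error ||e|| / ||v||
angles = []
for _ in range(2000):
    e = rng.standard_normal(n)
    e *= eps * np.linalg.norm(v) / np.linalg.norm(e)
    angles.append(plane_angle(x, v, v + e))

max_angle, bound = max(angles), beta(eps, phi0)
```

Every sampled plane angle stays below β(ε), consistent with β being the maximal possible angle.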

Proof

Expansion by inexact inverse iteration

Obviously, for every $\alpha \in \mathbb{R}$, $\alpha \neq 0$, the plane E is also spanned by x and $x + \alpha v$.

If $\tilde E(\alpha)$ is the plane spanned by x and a perturbed realization $x + \alpha v + e$ of $x + \alpha v$, then by the same arguments as in the proof of the Theorem the maximal angle between E and $\tilde E(\alpha)$ is

$$\gamma(\alpha, \varepsilon) = \begin{cases} \arccos\sqrt{1 - \varepsilon^2/\sin^2\varphi(\alpha)} & \text{if } \varepsilon \le \sin\varphi(\alpha),\\[2pt] \pi/2 & \text{if } \varepsilon \ge \sin\varphi(\alpha), \end{cases}$$

where $\varphi(\alpha)$ denotes the angle between x and $x + \alpha v$.

Since the mapping $\varphi \mapsto \arccos\sqrt{1 - \varepsilon^2/\sin^2\varphi}$ decreases monotonically, the expansion of the search space by an inexact realization of $t := x + \alpha v$ is most robust with respect to small perturbations if $\alpha$ is chosen such that x and $x + \alpha v$ are orthogonal.
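The robustness claim can be illustrated numerically (an assumed setup with random x and v, not from the slides): γ(α, ε) is minimized when α is chosen so that $x + \alpha v \perp x$, because that choice maximizes $\sin\varphi(\alpha)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
x = rng.standard_normal(n)
v = rng.standard_normal(n)

# alpha making t = x + alpha*v orthogonal to x
alpha_star = -(x @ x) / (x @ v)
t = x + alpha_star * v

def gamma(alpha, eps):
    # maximal angle between E and E~(alpha) as a function of phi(alpha)
    t = x + alpha * v
    c = (x @ t) / (np.linalg.norm(x) * np.linalg.norm(t))
    s = np.sqrt(max(0.0, 1.0 - c**2))   # sin of the angle between x and t
    if eps >= s:
        return np.pi / 2
    return np.arccos(np.sqrt(1.0 - (eps / s)**2))

eps = 0.3
others = [gamma(a, eps) for a in np.linspace(alpha_star - 2, alpha_star + 2, 401)]
```

Since $\sin\varphi(\alpha_\ast) = 1$ is maximal, $\gamma(\alpha_\ast, \varepsilon)$ is the smallest value over all sampled α.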

Expansion by inexact inverse iteration ct.

$t := x + \alpha v$ is orthogonal to x iff

$$t = x - \frac{x^H x}{x^H T(\theta)^{-1} T'(\theta) x}\, T(\theta)^{-1} T'(\theta) x, \qquad (*)$$

which yields the maximal acute angle between E and $\tilde E(\alpha)$

$$\gamma(\alpha, \varepsilon) = \begin{cases} \arccos\sqrt{1 - \varepsilon^2} & \text{if } \varepsilon \le 1,\\[2pt] \pi/2 & \text{if } \varepsilon \ge 1. \end{cases}$$


Jacobi-Davidson method

The expansion

$$t = x - \frac{x^H x}{x^H T(\theta)^{-1} T'(\theta) x}\, T(\theta)^{-1} T'(\theta) x \qquad (*)$$

of the current search space V is the solution of the equation

$$\Big(I - \frac{T'(\theta) x x^H}{x^H T'(\theta) x}\Big)\, T(\theta)\, \Big(I - \frac{x x^H}{x^H x}\Big)\, t = -r, \qquad t \perp x.$$

This is the so-called correction equation of the Jacobi-Davidson method, which was derived by T. Betcke & Voss (2004), generalizing the approach of Sleijpen and van der Vorst (1996) for linear and polynomial eigenvalue problems.

Hence, the Jacobi-Davidson method is the most robust realization of an expansion of a search space such that the direction of inverse iteration is contained in the expanded space, in the sense that it is least sensitive to inexact solves of the linear systems $T(\theta) v = T'(\theta) x$.

Nonlinear Jacobi-Davidson

1: Start with orthonormal basis V; set m = 1
2: determine preconditioner $M \approx T(\sigma)^{-1}$, $\sigma$ close to first wanted eigenvalue
3: while m ≤ number of wanted eigenvalues do
4:   compute eigenpair $(\mu, y)$ of projected problem $V^H T(\lambda) V y = 0$
5:   determine Ritz vector $u = Vy$, $\|u\| = 1$, and residual $r = T(\mu)u$
6:   if $\|r\| < \varepsilon$ then
7:     accept approximate eigenpair $\lambda_m = \mu$, $x_m = u$; increase $m \leftarrow m + 1$
8:     reduce search space V if necessary
9:     choose new preconditioner $M \approx T(\mu)^{-1}$ if indicated
10:    choose approximation $(\lambda_m, u)$ to next eigenpair, and compute $r = T(\lambda_m)u$
11:  end if
12:  solve approximately correction equation
     $$\Big(I - \frac{T'(\mu) u u^H}{u^H T'(\mu) u}\Big)\, T(\mu)\, \Big(I - \frac{u u^H}{u^H u}\Big)\, t = -r, \qquad t \perp u$$
13:  $t = t - V V^H t$, $\tilde v = t/\|t\|$, reorthogonalize if necessary
14:  expand search space $V = [V, \tilde v]$
15:  update projected problem
16: end while
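As a concrete, hedged illustration of the algorithm above, here is a minimal dense sketch for the linear special case $T(\lambda) = A - \lambda I$ with symmetric A, targeting the largest eigenvalue. The correction equation is solved exactly via an equivalent bordered system instead of a preconditioned Krylov method, and the safeguards of the algorithm (restarts, preconditioner updates, deflation) are omitted.

```python
import numpy as np

def jacobi_davidson(A, tol=1e-10, maxit=50):
    """Minimal dense Jacobi-Davidson sketch for T(lambda) = A - lambda*I,
    A symmetric, targeting the largest eigenvalue."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n, 1))
    V /= np.linalg.norm(V)
    theta, u = 0.0, V[:, 0]
    for _ in range(maxit):
        # Ritz pair of the projected problem V^H (A - theta I) V y = 0
        w, Y = np.linalg.eigh(V.T @ A @ V)
        theta, y = w[-1], Y[:, -1]
        u = V @ y
        r = A @ u - theta * u
        if np.linalg.norm(r) < tol:
            break
        # correction equation (I - uu^H)(A - theta I)(I - uu^H) t = -r, t ⟂ u,
        # solved exactly via an equivalent bordered system
        B = np.block([[A - theta * np.eye(n), u[:, None]],
                      [u[None, :], np.zeros((1, 1))]])
        t = np.linalg.solve(B, np.concatenate([-r, [0.0]]))[:n]
        t -= V @ (V.T @ t)                  # orthogonalize against V
        nt = np.linalg.norm(t)
        if nt < 1e-14:
            break
        V = np.column_stack([V, t / nt])    # expand the search space
    return theta, u

A = np.diag(np.arange(1.0, 11.0))
A[0, 1] = A[1, 0] = 0.1
lam, xv = jacobi_davidson(A)
```

For this test matrix the largest eigenvalue is 10 (the last row and column are uncoupled), which the iteration recovers.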

Correction equation

The correction equation is solved approximately by a few steps of an iterative solver (GMRES or BiCGStab).

The operator $T(\sigma)$ is restricted to map the subspace $x^\perp$ into itself. Hence, if $M \approx T(\sigma)$ is a preconditioner of $T(\sigma)$, $\sigma \approx \mu$, then a preconditioner for an iterative solver of the correction equation should be modified correspondingly to

$$\tilde M := \Big(I - \frac{T'(\mu) x x^H}{x^H T'(\mu) x}\Big)\, M\, \Big(I - \frac{x x^H}{x^H x}\Big).$$

Taking into account the projectors in the preconditioner, i.e. using $\tilde M$ instead of $M$, raises the cost of the preconditioned Krylov solver only slightly (cf. Sleijpen, van der Vorst): only one additional linear solve with system matrix $M$ is required.
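The "one additional solve" remark can be made concrete. The sketch below uses random stand-ins for M, x, and $T'(\mu)x$ (not the slides' implementation): after caching $M^{-1}T'(\mu)x$ once per outer iteration, each application of the projected preconditioner costs a single solve with M.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
M = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stand-in preconditioner
x = rng.standard_normal(n); x /= np.linalg.norm(x)
w = rng.standard_normal(n)                          # plays the role of T'(mu) x

# setup, once per outer iteration: the one additional solve with M
Minv_w = np.linalg.solve(M, w)
denom = x @ Minv_w

def apply_projected_prec(y):
    """Return z ⟂ x with (I - w x^H/(x^H w)) M z = (I - w x^H/(x^H w)) y,
    i.e. apply the inverse of the projected preconditioner within x^⟂,
    at the cost of a single solve with M per application."""
    z = np.linalg.solve(M, y)
    return z - (x @ z) / denom * Minv_w

y = rng.standard_normal(n)
z = apply_projected_prec(y)
```

The update subtracts the cached direction so that z is orthogonal to x, and the left projector annihilates the w-component, which is why no second solve is needed per application.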

Common approach to Jacobi-Davidson

Expand the search space V by the direction defined by one step of Newton's method applied to

$$F(x, \theta) := \begin{pmatrix} T(\theta) x \\ x^T x - 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

i.e.

$$\begin{pmatrix} T(\theta) & T'(\theta) x \\ 2x^T & 0 \end{pmatrix} \begin{pmatrix} v \\ \alpha \end{pmatrix} = -\begin{pmatrix} T(\theta) x \\ 0 \end{pmatrix}$$

$$\Leftrightarrow\quad T(\theta) v + \alpha T'(\theta) x = -T(\theta) x, \qquad 2 x^T v = 0$$

$$\Leftrightarrow\quad v = -x - \alpha T(\theta)^{-1} T'(\theta) x, \qquad x^T v = 0$$

$$\Leftrightarrow\quad v = -x + \frac{x^T x}{x^T T(\theta)^{-1} T'(\theta) x}\, T(\theta)^{-1} T'(\theta) x,$$

and the quadratic convergence of Newton's method explains the fast convergence of the Jacobi-Davidson method.

BUT: the resulting correction equation is solved only very inexactly, which spoils the good approximation properties of Newton's method.

More general approach

Assume that we are given a base method

$$x^{k+1} = S(x^k, \theta^k), \qquad \theta^{k+1} = p(x^{k+1}, \theta^k)$$

which converges locally,

$$\|\hat x - x^{k+1}\| = O(\|\hat x - x^k\|^{q_1}), \qquad |\hat\lambda - \theta^{k+1}| = O(|\hat\lambda - \theta^k|^{q_2}).$$

Robustification:

(i) Find $t^k = x^k + \alpha S(x^k, \theta^k)$ with $(t^k)^H B x^k = 0$
(ii) Choose $x^{k+1} \in \operatorname{span}\{x^k, t^k\}$, for instance solving the projected problem $P_k^H T(\lambda) P_k z^k = 0$, $x^{k+1} = P_k z^k$, where $P_k$ is the orthogonal projector onto $\operatorname{span}\{x^k, t^k\}$
(iii) $\theta^{k+1} = p(x^{k+1}, \theta^k)$

Here B is a Hermitian positive definite matrix (for instance if it is known in advance that the eigenvectors are B-orthogonal).

A Jacobi-Davidson-type method for two-real-param. EVP

Outline
1 Problem definition
2 Small critical delay problems
3 Iterative projection methods for nonlinear eigenproblems
4 A Jacobi-Davidson-type method for two-real-param. EVP
5 Numerical Experience

A JD-type method for two-real-param. EVP

Solve the NEP in two real parameters

$$T(\omega, \tau) u := \big(i\omega M + A + e^{-i\omega\tau} B\big) u = 0$$

by a straightforward adaptation of the (nonlinear) JD method:

1 given an ansatz space span(V) of eigenvector approximations (known approximations of eigenvectors, or a random vector)
2 solve the projected problem $V^H T(\omega, \tau) V z = 0$ (use the presented method for small problems)
3 compute approximate eigenvectors $u_i = V z_i$ and residuals $r_i = T(\omega_i, \tau_i) u_i$
4 stop if enough eigentriples have converged (we use $\|r_i\|_2 \le$ tol)
5 compute a correction c of an approximate eigenvector $\hat u$, i.e., $\hat u + c$ is a better approximation (that is the interesting part)
6 expand $V \leftarrow \operatorname{orth}[V, c]$ and GOTO 2 (repeat as long as necessary)
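A small sketch of the two-parameter family and of the partial derivatives $T_\omega$ and $T_\tau$ needed later in the correction equation. The matrices are random stand-ins, and the delay sign convention $T(\omega, \tau) = i\omega M + A + e^{-i\omega\tau}B$ (consistent with $M\dot x + Ax + Bx(t-\tau) = 0$) is an assumption here; the derivative formulas are validated against central finite differences.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
Mm = np.eye(n)
A = rng.standard_normal((n, n))
B = 0.3 * rng.standard_normal((n, n))

def T(w, tau):
    # two-real-parameter NEP; e^{-i w tau} is the assumed delay sign convention
    return 1j * w * Mm + A + np.exp(-1j * w * tau) * B

def T_w(w, tau):
    # partial derivative with respect to omega
    return 1j * Mm - 1j * tau * np.exp(-1j * w * tau) * B

def T_tau(w, tau):
    # partial derivative with respect to tau
    return -1j * w * np.exp(-1j * w * tau) * B

# validate the derivatives by central finite differences
w, tau, h = 0.7, 1.3, 1e-6
fd_w = (T(w + h, tau) - T(w - h, tau)) / (2 * h)
fd_t = (T(w, tau + h) - T(w, tau - h)) / (2 * h)
```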

Correction equation - medium sized problems

Given: approx. eigentriple $(\hat\omega, \hat\tau, \hat u)$. Wanted: correction $(\delta, \varepsilon, c)$.

One step of Newton's method applied to

$$\begin{pmatrix} T(\hat\omega + \delta, \hat\tau + \varepsilon)(\hat u + c) \\ \hat u^H c \end{pmatrix} = 0$$

yields

$$\begin{pmatrix} T(\hat\omega, \hat\tau) & T_\omega(\hat\omega, \hat\tau)\hat u & T_\tau(\hat\omega, \hat\tau)\hat u \\ \hat u^H & 0 & 0 \end{pmatrix} \begin{pmatrix} c \\ \delta \\ \varepsilon \end{pmatrix} = \begin{pmatrix} -\hat r \\ 0 \end{pmatrix}.$$

This is a linear system of $n+1$ complex equations in $n$ complex and 2 real unknowns; no standard software is available for it. Appending the complex conjugate equations gives the square system

$$\begin{pmatrix} \hat T & 0 & \hat T_\omega \hat u & \hat T_\tau \hat u \\ 0 & \overline{\hat T} & \overline{\hat T_\omega \hat u} & \overline{\hat T_\tau \hat u} \\ \hat u^H & 0 & 0 & 0 \\ 0 & \hat u^T & 0 & 0 \end{pmatrix} \begin{pmatrix} c \\ d \\ \delta \\ \varepsilon \end{pmatrix} = \begin{pmatrix} -\hat r \\ -\overline{\hat r} \\ 0 \\ 0 \end{pmatrix}.$$

Lemma: $d = \bar c$, $\delta, \varepsilon \in \mathbb{R}$.

What if n is large and a preconditioner is available for $\hat T$ only?
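The doubled bordered system and the Lemma ($d = \bar c$, $\delta, \varepsilon \in \mathbb{R}$) can be verified on random data. The sketch below assembles the $(2n+2)$ system with the conjugated second block row and the constraint rows $\hat u^H c = 0$, $\hat u^T d = 0$; this constraint form is an assumption chosen to be consistent with the Lemma's conjugation symmetry.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))   # T-hat
tw = rng.standard_normal(n) + 1j * rng.standard_normal(n)            # T_omega-hat @ u-hat
tt = rng.standard_normal(n) + 1j * rng.standard_normal(n)            # T_tau-hat @ u-hat
u = rng.standard_normal(n) + 1j * rng.standard_normal(n); u /= np.linalg.norm(u)
r = rng.standard_normal(n) + 1j * rng.standard_normal(n)             # residual r-hat

Z = np.zeros((n, n))
z1 = np.zeros((1, 1))
M = np.block([
    [T, Z, tw[:, None], tt[:, None]],
    [Z, T.conj(), tw.conj()[:, None], tt.conj()[:, None]],
    [u.conj()[None, :], np.zeros((1, n)), z1, z1],
    [np.zeros((1, n)), u[None, :], z1, z1],
])
rhs = np.concatenate([-r, -r.conj(), [0.0, 0.0]])
sol = np.linalg.solve(M, rhs)
c, d = sol[:n], sol[n:2 * n]
delta, eps_ = sol[2 * n], sol[2 * n + 1]
```

Conjugating the system and swapping the two block rows maps a solution $(c, d, \delta, \varepsilon)$ to $(\bar d, \bar c, \bar\delta, \bar\varepsilon)$; by uniqueness these coincide, which is the Lemma.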

Correction equation - large size

With

$$T = \begin{pmatrix} \hat T & 0 \\ 0 & \overline{\hat T} \end{pmatrix}, \qquad K = \begin{pmatrix} \hat T_\omega \hat u & \hat T_\tau \hat u \\ \overline{\hat T_\omega \hat u} & \overline{\hat T_\tau \hat u} \end{pmatrix}, \qquad U = \begin{pmatrix} \hat u & 0 \\ 0 & \overline{\hat u} \end{pmatrix}$$

the system reads

$$\begin{pmatrix} T & K \\ U^H & 0 \end{pmatrix} \begin{pmatrix} c \\ \bar c \\ \delta \\ \varepsilon \end{pmatrix} = \begin{pmatrix} -\hat r \\ -\overline{\hat r} \\ 0 \\ 0 \end{pmatrix}.$$

Eliminating $(\delta, \varepsilon)$ yields

$$\big(I - K(U^H K)^{-1}U^H\big)\, T\, \big(I - UU^H\big) \begin{pmatrix} c \\ \bar c \end{pmatrix} = -\big(I - K(U^H K)^{-1}U^H\big) \begin{pmatrix} \hat r \\ \overline{\hat r} \end{pmatrix}.$$

This looks like other JD correction equations: T is complemented by projectors.

How does this help with the preconditioner?
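The elimination of $(\delta, \varepsilon)$ can be checked numerically: solving the bordered system and then applying the projectors reproduces the projected correction equation. Random stand-in data; T, K, U as on the slide.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
Th = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
tw = rng.standard_normal(n) + 1j * rng.standard_normal(n)
tt = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u = rng.standard_normal(n) + 1j * rng.standard_normal(n); u /= np.linalg.norm(u)
r = rng.standard_normal(n) + 1j * rng.standard_normal(n)

Zn = np.zeros((n, n))
zc = np.zeros((n, 1))
Tb = np.block([[Th, Zn], [Zn, Th.conj()]])                        # 2n x 2n
K = np.block([[tw[:, None], tt[:, None]],
              [tw.conj()[:, None], tt.conj()[:, None]]])          # 2n x 2
U = np.block([[u[:, None], zc], [zc, u.conj()[:, None]]])         # 2n x 2
rr = np.concatenate([r, r.conj()])

# bordered solve for (c, c-bar, delta, eps)
Mfull = np.block([[Tb, K], [U.conj().T, np.zeros((2, 2))]])
cc = np.linalg.solve(Mfull, np.concatenate([-rr, [0.0, 0.0]]))[:2 * n]

# eliminated form: with P = I - K (U^H K)^{-1} U^H it holds
#   P Tb (I - U U^H) cc = -P rr
P = np.eye(2 * n) - K @ np.linalg.solve(U.conj().T @ K, U.conj().T)
lhs = P @ Tb @ (np.eye(2 * n) - U @ U.conj().T) @ cc
```

Since $P K = 0$ and $U^H cc = 0$, applying P to the first block row of the bordered system gives exactly the projected equation.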

Preconditioning the correction equation

Correction equation:

$$\big(I - K(U^H K)^{-1}U^H\big)\, T\, \big(I - UU^H\big) \begin{pmatrix} c \\ \bar c \end{pmatrix} = -\big(I - K(U^H K)^{-1}U^H\big) \begin{pmatrix} \hat r \\ \overline{\hat r} \end{pmatrix}$$

Preconditioner:

$$\big(I - K(U^H K)^{-1}U^H\big)\, P\, \big(I - UU^H\big) \qquad \text{with} \qquad P = \begin{pmatrix} \hat P & 0 \\ 0 & \overline{\hat P} \end{pmatrix}.$$

The operator needed in a left-preconditioned Krylov solver is

$$(I - UU^H)\, P^{-1} \big(I - K(U^H P^{-1} K)^{-1} U^H P^{-1}\big) \big(I - K(U^H K)^{-1} U^H\big)\, T\, (I - UU^H).$$

An efficient implementation needs one application of $P^{-1}$ and of $\hat T$ per iteration, and additionally 2 $\hat T$ products and 3 $P$ solves, as in other JD variants.

Some details

Alternative expansion
- sometimes the projected problem has no solution ($\mathrm{Re}(C)x = \theta\, \mathrm{Im}(C)x$ has no real eigenvalues)
- then what to use as approximate Ritz triple for the correction equation?
- we use $\hat\omega = 0$, $e^{-i\hat\omega\hat\tau} := \sigma$ (given), and $\hat u \in \operatorname{span}(V)$ such that $\|T(\hat\omega, \hat\tau)\hat u\|_2 = \min$

Restart
- the cost grows like $O(k^6)$, so it becomes prohibitive for larger k
- restart with the space spanned by the converged eigenvectors and a few unconverged ones with the best residuals

Real problems
- if M, A, B are real, then eigentriples come in pairs $(\omega, \tau, u)$, $(-\omega, \tau, \bar u)$
- when an eigenvector u converges, add $\bar u$ to the search space

Numerical Experience

Outline
1 Problem definition
2 Small critical delay problems
3 Iterative projection methods for nonlinear eigenproblems
4 A Jacobi-Davidson-type method for two-real-param. EVP
5 Numerical Experience

Numerical example

Consider the parabolic problem

$$u_t - \nabla\cdot\big((1 + x^2 + y^2 + z^2)\nabla u\big) + [1, 0, 1]\cdot\nabla u + u - \alpha(1 + x^2)\, u(t - \tau) = 0 \qquad (1)$$

with spatial variables x, y and z on $\Omega = (0,1)\times(0,1)\times(0,1)$, with Dirichlet boundary condition $u = 0$ on $\partial\Omega$.

A discretization with piecewise quadratic ansatz functions on a tetrahedral grid using COMSOL yielded a delay problem

$$M\dot x(t) + A x(t) + B x(t - \tau) = 0$$

of dimension $n = 80623$.

For $\alpha = 100$ the problem has four pairs of eigentriples.

Numerical Example

[Figure: Convergence history for Example 3. Residual norms (logarithmic scale, tolerance $10^{-10}$) versus iteration number; 31 iterations altogether.]