HARMONIC RAYLEIGH RITZ EXTRACTION FOR THE MULTIPARAMETER EIGENVALUE PROBLEM

MICHIEL E. HOCHSTENBACH AND BOR PLESTENJAK

Abstract. We study harmonic and refined extraction methods for the multiparameter eigenvalue problem. These techniques are generalizations of their counterparts for the standard and generalized eigenvalue problem. The methods aim to approximate interior eigenpairs, generally more accurately than the standard extraction does. We study their properties and give Saad-type theorems. The processes can be combined with any subspace expansion approach, for instance a Jacobi-Davidson type technique, to form a subspace method for multiparameter eigenproblems of high dimension.

Key words. multiparameter eigenvalue problem, two-parameter eigenvalue problem, harmonic extraction, refined extraction, Rayleigh-Ritz, subspace method, Saad's theorem, Jacobi-Davidson

AMS subject classifications. 65F15, 65F50, 15A18, 15A69

(Received October; accepted for publication June; recommended by M. Hochbruck. M. E. Hochstenbach: Department of Mathematics and Computing Science, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands; the research of this author was supported in part by an NSF grant (DMS). B. Plestenjak: Department of Mathematics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia (www-lp.fmf.uni-lj.si/plestenjak/bor.htm).)

1. Introduction. We study harmonic and refined Rayleigh-Ritz techniques for the multiparameter eigenvalue problem (MEP). For ease of presentation we will focus on the two-parameter eigenvalue problem

(1.1)    A_1 x = λ B_1 x + µ C_1 x,
         A_2 y = λ B_2 y + µ C_2 y,

for given n_1 × n_1 (real or complex) matrices A_1, B_1, C_1 and n_2 × n_2 matrices A_2, B_2, C_2; we are interested in eigenpairs ((λ, µ), x ⊗ y), where x and y have unit norm. The approaches for general multiparameter eigenproblems are straightforward generalizations of the two-parameter case.

Multiparameter eigenvalue problems of this kind arise in a variety of applications [1], particularly in mathematical physics when the method of separation of variables is used to solve boundary value problems [27]; see [10] for several other applications.

Two-parameter problems can be expressed as two coupled generalized eigenvalue problems as follows. On the tensor product space C^{n_1} ⊗ C^{n_2} of dimension n_1 n_2, one defines the matrix determinants

(1.2)    Δ_0 = B_1 ⊗ C_2 − C_1 ⊗ B_2,
         Δ_1 = A_1 ⊗ C_2 − C_1 ⊗ A_2,
         Δ_2 = B_1 ⊗ A_2 − A_1 ⊗ B_2.

The MEP is called right definite if all the A_i, B_i, C_i, i = 1, 2, are Hermitian and Δ_0 is (positive or negative) definite; in this case the eigenvalues are real and the eigenvectors can be chosen to be real. The MEP is called nonsingular if Δ_0 is nonsingular (without further conditions on the A_i, B_i, C_i). A nonsingular two-parameter eigenvalue problem is equivalent to the coupled generalized eigenvalue problems

(1.3)    Δ_1 z = λ Δ_0 z,
         Δ_2 z = µ Δ_0 z,

where z = x ⊗ y, and Δ_0^{-1} Δ_1 and Δ_0^{-1} Δ_2 commute.
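For concreteness, the following small MATLAB sketch (random data, purely illustrative; this is not code from the paper) forms the matrix determinants (1.2) with kron and recovers all eigenvalues through the coupled problems (1.3), inserting each eigenvector of the first pencil into the second one to read off µ:

    % Minimal sketch: the matrix determinants (1.2) and the coupled
    % generalized eigenproblems (1.3) for a small random two-parameter
    % problem; all variable names are illustrative.
    n1 = 4; n2 = 3;
    A1 = randn(n1); B1 = randn(n1); C1 = randn(n1);
    A2 = randn(n2); B2 = randn(n2); C2 = randn(n2);
    Delta0 = kron(B1, C2) - kron(C1, B2);   % nonsingular for a nonsingular MEP
    Delta1 = kron(A1, C2) - kron(C1, A2);
    Delta2 = kron(B1, A2) - kron(A1, B2);
    [Z, D] = eig(Delta1, Delta0);           % Delta1 z = lambda Delta0 z
    lambda = diag(D);
    mu = zeros(n1*n2, 1);
    for j = 1:n1*n2                         % insert each z = x (x) y into
        z = Z(:, j);                        % the second pencil of (1.3)
        mu(j) = (z' * Delta2 * z) / (z' * Delta0 * z);
    end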

Because of the product dimension n_1 n_2, the multiparameter eigenvalue problem is computationally quite challenging. There exist several numerical methods for the MEP. Blum and colleagues [2, 3, 4], Bohte [5], and Browne and Sleeman [6] proposed methods for computing one eigenvalue, hopefully the one closest to a given approximation. There are also methods that determine all eigenvalues. The first class is formed by direct methods for right definite MEPs [...] and for non-right definite MEPs [10]; these methods are suitable for small (dense) matrices only, since their complexity is O((n_1 n_2)^3). The second class consists of continuation methods [...], which are asymptotically somewhat cheaper than direct methods, but so far often not very competitive for small problems in practice; for larger problems their computational cost is still enormous.

Fortunately, in applications often only a few relevant eigenpairs are of interest, for instance those corresponding to the largest eigenvalues, or to the eigenvalues closest to a given target. Recently, subspace methods for the MEP have been proposed [11, 10] that are suitable for finding selected eigenpairs. These methods combine a subspace approach with one of the mentioned dense methods as solver for the projected MEP. The approaches are also suitable for multiparameter problems where the matrices are large and sparse, although convergence to the wanted eigenpairs may sometimes remain an issue of concern. In particular, in [11] it was observed that finding interior eigenvalues was one of the challenges for the Jacobi-Davidson type method. It was left as an open question how to generalize the harmonic Rayleigh-Ritz approach to the MEP. This paper addresses this issue and also introduces a refined Ritz method.

The rest of the paper is organized as follows. In Section 2 we review the harmonic Rayleigh-Ritz method for the generalized eigenproblem, after which this method is generalized to the MEP in Section 3. In Section 4 we present two Saad-type theorems, for the standard and the harmonic extraction. Section 5 proposes a refined Rayleigh-Ritz method for the MEP. We conclude with experiments and a conclusion in Sections 6 and 7.

2. Harmonic Rayleigh-Ritz for the generalized eigenvalue problem. We first briefly review the harmonic Rayleigh-Ritz method for the generalized eigenvalue problem Ax = λBx. Suppose we would like to compute an approximation (θ, u) to the eigenpair (λ, x), where the approximate eigenvector u should be in a given search space U_k of low dimension k, and θ should be in the neighborhood of the target τ ∈ C. Since u ∈ U_k, we can write u = U_k c, where the columns of U_k form an orthonormal basis for U_k and c is a vector in C^k of unit norm. The standard Ritz-Galerkin condition on the residual r is (cf. [18])

    r := A u − θ B u ⊥ U_k,

which implies that (θ, c) should be a primitive Ritz pair (terminology from Stewart [26]), that is, an eigenpair of the projected generalized eigenproblem

    U_k^* A U_k c = θ U_k^* B U_k c.

It follows that if u^* B u ≠ 0, then θ = (u^* A u) / (u^* B u); the case u^* B u = 0 is an exceptional one, in which the Ritz value is infinite (if u^* A u ≠ 0) or undefined (if u^* A u = 0). If B is Hermitian positive definite, then we can define the B^{-1}-norm by ||z||_{B^{-1}}^2 = z^* B^{-1} z, and one can show that this θ minimizes ||r||_{B^{-1}}.
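For illustration, a minimal MATLAB sketch of this standard extraction on synthetic data (the matrices, the search space, and the target tau are all hypothetical) could read:

    % Standard Rayleigh-Ritz extraction for A x = lambda B x: project
    % onto a given search space and pick the Ritz value closest to tau.
    n = 1000; k = 10; tau = 0.5;
    A = sprandn(n, n, 0.01) + speye(n);
    B = speye(n);
    [Uk, ~] = qr(randn(n, k), 0);              % orthonormal basis of U_k
    [Ck, Th] = eig(Uk' * A * Uk, Uk' * B * Uk);  % projected pencil
    theta = diag(Th);
    [~, j] = min(abs(theta - tau));            % Ritz value closest to tau
    u = Uk * Ck(:, j); u = u / norm(u);        % Ritz vector
    resnorm = norm(A*u - theta(j)*B*u);        % may be large for interior tau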

However, the problem with this standard Rayleigh-Ritz approach is that, even if there is a Ritz value θ ≈ τ, we have no guarantee that the two-norm ||r|| is small, which reflects the fact that the approximate eigenvector may be poor. As a remedy, harmonic Rayleigh-Ritz was proposed by Morgan [16] and by Paige, Parlett, and Van der Vorst [17] for the standard eigenproblem, and by Stewart [26] for the generalized eigenproblem; see also Fokkema, Sleijpen, and Van der Vorst [9]. Assuming A − τB is nonsingular, the idea is to consider the spectral transformation

(2.1)    (A − τB)^{-1} B x = (λ − τ)^{-1} x.

Thus the interior eigenvalues λ ≈ τ are exterior eigenvalues of (A − τB)^{-1} B, for which a Galerkin condition usually works well in practice. To avoid working with (A − τB)^{-1}, the inverse of a large sparse matrix, we impose a Petrov-Galerkin condition

    (A − τB)^{-1} B u − (θ − τ)^{-1} u ⊥ (A − τB)^* (A − τB) U_k,

or, equivalently,

(2.2)    A u − θ B u = (A − τB) u − (θ − τ) B u ⊥ (A − τB) U_k,

leading to the projected eigenproblem

(2.3)    U_k^* (A − τB)^* (A − τB) U_k c = (θ − τ) U_k^* (A − τB)^* B U_k c.

Here we are interested in the primitive harmonic Ritz pair(s) (θ, c) with θ closest to τ. This approach has two motivations:
- if an exact eigenvector is in the search space, x = U_k c, then the eigenpair (λ, x) satisfies (2.3); this implies that exact eigenvectors in the search space will be detected, unless we are in the special circumstance of the presence of multiple harmonic Ritz values;
- a harmonic Ritz pair (θ, u) satisfies [26]

        ||A u − τ B u|| ≤ |θ − τ| ||B u|| ≤ |θ − τ| ||B U_k||,

  which motivates the choice of the harmonic Ritz value closest to τ.

The harmonic Rayleigh-Ritz approach was generalized to the polynomial eigenproblem in [13].
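Continuing the synthetic setting of the previous sketch (reusing A, B, Uk, and tau), the harmonic extraction (2.3) can be realized as follows; as in Section 3 below, a reduced QR factorization of (A − τB)U_k avoids forming the cross product (A − τB)^*(A − τB) explicitly:

    % Harmonic Rayleigh-Ritz extraction (2.2)-(2.3): with
    % (A - tau*B)*Uk = Q*R, the projected problem (2.3) reduces to
    % R c = xi * Q'*B*Uk c, where xi = theta - tau.
    [Q, R] = qr((A - tau*B) * Uk, 0);          % reduced QR
    [Ck, Xi] = eig(R, Q' * (B * Uk));
    xi = diag(Xi);
    [~, j] = min(abs(xi));                     % harmonic Ritz value closest to tau
    theta = tau + xi(j);
    u = Uk * Ck(:, j); u = u / norm(u);
    resnorm = norm(A*u - theta*B*u);           % small near tau by construction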

3. Harmonic Rayleigh-Ritz for the multiparameter eigenvalue problem. For the MEP (1.1) it is natural to make use of two search spaces, U_k and V_k, for the vectors x and y, respectively. Let the columns of U_k and V_k form orthonormal bases for U_k and V_k. We look for an approximate eigenpair ((θ, η), u ⊗ v) ≈ ((λ, µ), x ⊗ y), where u ⊗ v is of the form U_k c ⊗ V_k d and both c, d ∈ C^k are of unit norm. The standard extraction

(3.1)    (A_1 − θ B_1 − η C_1) U_k c ⊥ U_k,
         (A_2 − θ B_2 − η C_2) V_k d ⊥ V_k,

was introduced in [11]. As is also experienced for the standard eigenvalue problem, the standard extraction for the MEP works well for exterior eigenvalues, but is generally less favorable for interior ones [11].

Now suppose we are interested in a harmonic approach, to better approximate eigenpairs near the target (σ, τ). One obstacle is that MEPs do not seem to allow for a straightforward generalization of the spectral transformation (2.1). Therefore we generalize (2.2) and impose the two Galerkin conditions

(3.2)    (A_1 − θ B_1 − η C_1) u ⊥ (A_1 − σ B_1 − τ C_1) U_k,
         (A_2 − θ B_2 − η C_2) v ⊥ (A_2 − σ B_2 − τ C_2) V_k,

or, equivalently,

    (A_1 − σ B_1 − τ C_1) U_k c − (θ − σ) B_1 U_k c − (η − τ) C_1 U_k c ⊥ (A_1 − σ B_1 − τ C_1) U_k,
    (A_2 − σ B_2 − τ C_2) V_k d − (θ − σ) B_2 V_k d − (η − τ) C_2 V_k d ⊥ (A_2 − σ B_2 − τ C_2) V_k.

We call this the harmonic Rayleigh-Ritz extraction for the MEP. Introduce the reduced QR decompositions

(3.3)    (A_1 − σ B_1 − τ C_1) U_k = Q_1 R_1,
         (A_2 − σ B_2 − τ C_2) V_k = Q_2 R_2,

which we can compute incrementally, i.e., one column per step, during the subspace method. This is done for computational efficiency as well as for stability: cross products of the form (A_i − σ B_i − τ C_i)^* (A_i − σ B_i − τ C_i), with potentially a high condition number, are avoided. Computationally, the extraction then amounts to the projected two-parameter eigenproblem

    R_1 c = (θ − σ) Q_1^* B_1 U_k c + (η − τ) Q_1^* C_1 U_k c,
    R_2 d = (θ − σ) Q_2^* B_2 V_k d + (η − τ) Q_2^* C_2 V_k d.

We now compute the smallest eigenpair(s) ((ξ_1, ξ_2), c ⊗ d) (in the absolute value sense, that is, with minimal |ξ_1|^2 + |ξ_2|^2) of the low-dimensional MEP

(3.4)    R_1 c = ξ_1 Q_1^* B_1 U_k c + ξ_2 Q_1^* C_1 U_k c,
         R_2 d = ξ_1 Q_2^* B_2 V_k d + ξ_2 Q_2^* C_2 V_k d,

which can be solved by existing low-dimensional techniques, as mentioned in the introduction. As in the case of the generalized eigenproblem, there are two justifications for the harmonic approach for the MEP:
- we have the following upper bounds for the residual norms:

        ||(A_1 − σ B_1 − τ C_1) u|| ≤ |ξ_1| ||B_1 u|| + |ξ_2| ||C_1 u|| ≤ |ξ_1| ||B_1 U_k|| + |ξ_2| ||C_1 U_k||,
        ||(A_2 − σ B_2 − τ C_2) v|| ≤ |ξ_1| ||B_2 v|| + |ξ_2| ||C_2 v|| ≤ |ξ_1| ||B_2 V_k|| + |ξ_2| ||C_2 V_k||,

  so to obtain small residual norms it is clearly sensible to select the smallest (ξ_1, ξ_2);
- if the search spaces contain an eigenvector, x = U_k c and y = V_k d, then the pair ((λ, µ), x ⊗ y) satisfies (3.2); this means that this pair is a harmonic Ritz pair, unless the harmonic Ritz value (θ, η) is not simple. We will prove a more precise statement in the next section.

In conclusion, the harmonic approach for the MEP tries to combine two desirable properties: it will generally find an exact eigenpair present in the search spaces, while it will also try to detect approximate eigenpairs with small residual norm.
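The following MATLAB sketch assembles the harmonic extraction on synthetic data; for simplicity the small projected MEP (3.4) is solved here through its own matrix determinants (1.2)-(1.3), one of the low-dimensional options, assuming simple eigenvalues so that z decouples as c ⊗ d:

    % Harmonic extraction (3.3)-(3.4) for the MEP, on synthetic data.
    n1 = 800; n2 = 700; k = 8; sigma = 0.3; tau = -0.2;
    A1 = sprandn(n1,n1,0.01); B1 = sprandn(n1,n1,0.01); C1 = sprandn(n1,n1,0.01);
    A2 = sprandn(n2,n2,0.01); B2 = sprandn(n2,n2,0.01); C2 = sprandn(n2,n2,0.01);
    [Uk,~] = qr(randn(n1,k), 0); [Vk,~] = qr(randn(n2,k), 0);
    [Q1,R1] = qr((A1 - sigma*B1 - tau*C1) * Uk, 0);  % (3.3), incremental in practice
    [Q2,R2] = qr((A2 - sigma*B2 - tau*C2) * Vk, 0);
    B1p = Q1'*(B1*Uk); C1p = Q1'*(C1*Uk);            % projected blocks of (3.4)
    B2p = Q2'*(B2*Vk); C2p = Q2'*(C2*Vk);
    % Solve the k x k projected MEP (3.4) via its determinants (1.2)-(1.3):
    D0 = kron(B1p,C2p) - kron(C1p,B2p);
    D1 = kron(R1,C2p) - kron(C1p,R2);
    D2 = kron(B1p,R2) - kron(R1,B2p);
    [Z,X] = eig(D1, D0); xi1 = diag(X);
    xi2 = zeros(k^2,1);
    for j = 1:k^2
        z = Z(:,j); xi2(j) = (z'*D2*z) / (z'*D0*z);
    end
    [~, p] = min(abs(xi1).^2 + abs(xi2).^2);         % smallest (xi1, xi2)
    theta = sigma + xi1(p); eta = tau + xi2(p);      % harmonic Ritz value
    M = reshape(Z(:,p), k, k);                       % z = kron(c,d) => M = d*c.'
    [Wd, ~, Wc] = svd(M);
    d = Wd(:,1); c = conj(Wc(:,1));                  % best rank-one factors
    u = Uk*c; v = Vk*d;                              % harmonic Ritz vector u (x) v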

4. Saad-type theorems. In this section we derive Saad-type theorems for both the standard and the harmonic extraction for the MEP. This type of theorem expresses the quality of the approximate vectors in terms of the quality of the search spaces. The original theorem by Saad [21, Thm. 4.6] was for the standard extraction for the standard Hermitian eigenvalue problem. A generalization for non-Hermitian matrices and eigenspaces was given by Stewart [25], while an extension for the harmonic extraction for the standard eigenvalue problem was presented by Chen and Jia [8].

4.1. Saad-type theorem for the standard extraction. Let w := u ⊗ v be a Ritz vector corresponding to the Ritz value (θ, η), and let [w W W_⊥] be an orthonormal basis for C^{n_1 n_2} such that

    span([w W]) = U_k ⊗ V_k.

Define, for i = 0, 1, 2,

(4.1)    E_i = [w W]^* Δ_i [w W].

We assume that E_0 is invertible, which is guaranteed if Δ_0 is definite, as in the case of a right definite MEP. From (3.1) we have that the Ritz pairs are of the form ((θ, η), U_k c ⊗ V_k d), where ((θ, η), c ⊗ d) are eigenpairs of

    U_k^* A_1 U_k c = θ U_k^* B_1 U_k c + η U_k^* C_1 U_k c,
    V_k^* A_2 V_k d = θ V_k^* B_2 V_k d + η V_k^* C_2 V_k d.

The three matrix determinants of this projected MEP (cf. (1.2)) are of the form Q^* E_i Q, where Q is the orthonormal basis transformation matrix that maps U_k ⊗ V_k coordinates to [w W] coordinates: Q = [w W]^* (U_k ⊗ V_k). For instance, we have

    U_k^* B_1 U_k ⊗ V_k^* C_2 V_k − U_k^* C_1 U_k ⊗ V_k^* B_2 V_k = (U_k ⊗ V_k)^* Δ_0 (U_k ⊗ V_k) = Q^* [w W]^* Δ_0 [w W] Q = Q^* E_0 Q.

Therefore the components θ_j and η_j of the Ritz values (θ_j, η_j) are the eigenvalues of E_0^{-1} E_1 and E_0^{-1} E_2, respectively (cf. (1.3)). In particular, we know that (E_1 − θ E_0) e_1 = 0, so E_0^{-1} E_1 is of the form

(4.2)    E_0^{-1} E_1 = [ θ  f_1^* ; 0  G_1 ],

where the precise expression for G_1 is less important than the fact that its eigenvalues are the k^2 − 1 θ_j-values belonging to the pairs (θ_j, η_j) distinct from (θ, η). We hereby note that some of the first coordinates θ_j may still be equal to θ, even if (θ, η) is not a multiple Ritz value. Similarly, E_0^{-1} E_2 is of the form

(4.3)    E_0^{-1} E_2 = [ η  f_2^* ; 0  G_2 ],

where the eigenvalues of G_2 are the k^2 − 1 η_j-values belonging to the pairs (θ_j, η_j) distinct from (θ, η). Using these quantities we can now prove the following theorem, which extends [21, Thm. 4.6], [25, Thm. 2], and [8, Thm. 3].

THEOREM 4.1. Let ((θ, η), u ⊗ v) be a Ritz pair and ((λ, µ), x ⊗ y) an eigenpair. Let E_i = [w W]^* Δ_i [w W] for i = 0, 1, 2, and assume that E_0 is invertible. Then

    sin ∠(u ⊗ v, x ⊗ y) ≤ (1 + γ^2/δ^2)^{1/2} sin ∠(U_k ⊗ V_k, x ⊗ y),

where

    γ = ||E_0^{-1}|| ( ||P (Δ_1 − λ Δ_0) (I − P)||^2 + ||P (Δ_2 − µ Δ_0) (I − P)||^2 )^{1/2},
    δ = σ_min([ G_1 − λI ; G_2 − µI ]),

and P = P_{U_k ⊗ V_k} is the orthogonal projection onto U_k ⊗ V_k.

Proof. From Δ_1 z = λ Δ_0 z, where as before z = x ⊗ y, we get, with a change of variables,

    [w W W_⊥]^* (Δ_1 − λ Δ_0) [w W W_⊥] [a_1; a_2; a_3] = 0,

where [a_1; a_2; a_3] = [w W W_⊥]^* z. Writing out the first and second (block) equations gives

    (E_1 − λ E_0) [a_1; a_2] = −[w W]^* (Δ_1 − λ Δ_0) W_⊥ a_3.

Left-multiplying by E_0^{-1} and using (4.2) yields

    [ θ − λ  f_1^* ; 0  G_1 − λI ] [a_1; a_2] = −E_0^{-1} [w W]^* (Δ_1 − λ Δ_0) W_⊥ a_3.

Hence

(4.4)    ||(G_1 − λI) a_2|| ≤ ||E_0^{-1} [w W]^* (Δ_1 − λ Δ_0) W_⊥ a_3||.

Similarly, if we start from Δ_2 z = µ Δ_0 z and use (4.3), we get the bound

(4.5)    ||(G_2 − µI) a_2|| ≤ ||E_0^{-1} [w W]^* (Δ_2 − µ Δ_0) W_⊥ a_3||.

From (4.4) and (4.5) it follows that

    σ_min([ G_1 − λI ; G_2 − µI ]) ||a_2|| ≤ || [ (G_1 − λI) a_2 ; (G_2 − µI) a_2 ] ||
        ≤ || [ E_0^{-1} [w W]^* (Δ_1 − λ Δ_0) W_⊥ a_3 ; E_0^{-1} [w W]^* (Δ_2 − µ Δ_0) W_⊥ a_3 ] ||
        ≤ γ ||a_3||.

This gives us the bound ||a_2|| ≤ (γ/δ) ||a_3||. Since ||a|| = 1, we have

    |a_1|^2 = cos^2 ∠(u ⊗ v, x ⊗ y),
    |a_1|^2 + ||a_2||^2 = cos^2 ∠(U_k ⊗ V_k, x ⊗ y),
    ||a_2||^2 + ||a_3||^2 = 1 − cos^2 ∠(u ⊗ v, x ⊗ y),
    ||a_3||^2 = 1 − cos^2 ∠(U_k ⊗ V_k, x ⊗ y).

The result now follows from substituting the bound for ||a_2|| in terms of ||a_3|| into the expression ||a_2||^2 + ||a_3||^2.

The significance of the theorem is the following. If sin ∠(U_k ⊗ V_k, x ⊗ y) → 0, we know that sin ∠(u ⊗ v, x ⊗ y) → 0, so there is a Ritz vector u ⊗ v converging to the eigenvector x ⊗ y, unless δ is zero, which means that (λ, µ) coincides with one of the Ritz values distinct from (θ, η).

4.2. Saad-type theorem for the harmonic extraction. We have a similar theorem for the harmonic extraction, mutatis mutandis, which means that the harmonic extraction is also asymptotically accurate. Define the quantities

    Ã_i = (A_i − σ B_i − τ C_i)^* A_i,    Δ̃_0 = B̃_1 ⊗ C̃_2 − C̃_1 ⊗ B̃_2,
    B̃_i = (A_i − σ B_i − τ C_i)^* B_i,    Δ̃_1 = Ã_1 ⊗ C̃_2 − C̃_1 ⊗ Ã_2,
    C̃_i = (A_i − σ B_i − τ C_i)^* C_i,    Δ̃_2 = B̃_1 ⊗ Ã_2 − Ã_1 ⊗ B̃_2.

Let w̃ := ũ ⊗ ṽ be a harmonic Ritz vector corresponding to the harmonic Ritz value (θ̃, η̃), and let [w̃ W̃ W̃_⊥] be an orthonormal basis for C^{n_1 n_2} such that span([w̃ W̃]) = U_k ⊗ V_k. Similarly to (4.1), define Ẽ_i = [w̃ W̃]^* Δ̃_i [w̃ W̃] for i = 0, 1, 2. Then the components θ̃_j and η̃_j of the harmonic Ritz values (θ̃_j, η̃_j) are the eigenvalues of Ẽ_0^{-1} Ẽ_1 and Ẽ_0^{-1} Ẽ_2, respectively. Since (Ẽ_1 − θ̃ Ẽ_0) e_1 = 0, we know that Ẽ_0^{-1} Ẽ_1 is of the form

    Ẽ_0^{-1} Ẽ_1 = [ θ̃  f̃_1^* ; 0  G̃_1 ],

where the eigenvalues of G̃_1 are the k^2 − 1 θ̃_j-values belonging to the pairs (θ̃_j, η̃_j) distinct from (θ̃, η̃). Similarly, Ẽ_0^{-1} Ẽ_2 is of the form

    Ẽ_0^{-1} Ẽ_2 = [ η̃  f̃_2^* ; 0  G̃_2 ],

where the eigenvalues of G̃_2 are the k^2 − 1 η̃_j-values belonging to the pairs (θ̃_j, η̃_j) distinct from (θ̃, η̃). Analogously to Theorem 4.1 we can prove the following result.

THEOREM 4.2. Let ((θ̃, η̃), ũ ⊗ ṽ) be a harmonic Ritz pair and ((λ, µ), x ⊗ y) an eigenpair. Let Ẽ_i = [w̃ W̃]^* Δ̃_i [w̃ W̃] for i = 0, 1, 2, and assume that Ẽ_0 is invertible. Then

    sin ∠(ũ ⊗ ṽ, x ⊗ y) ≤ (1 + γ̃^2/δ̃^2)^{1/2} sin ∠(U_k ⊗ V_k, x ⊗ y),

where

    γ̃ = ||Ẽ_0^{-1}|| ( ||P (Δ̃_1 − λ Δ̃_0) (I − P)||^2 + ||P (Δ̃_2 − µ Δ̃_0) (I − P)||^2 )^{1/2},
    δ̃ = σ_min([ G̃_1 − λI ; G̃_2 − µI ]),

and P = P_{U_k ⊗ V_k}.

This means that if sin ∠(U_k ⊗ V_k, x ⊗ y) → 0, then there is a harmonic Ritz vector ũ ⊗ ṽ converging to the eigenvector x ⊗ y, unless (λ, µ) coincides with one of the harmonic Ritz values distinct from (θ̃, η̃). Comparing Theorems 4.1 and 4.2, we see that both the standard and the harmonic extraction are asymptotically accurate: up to the occurrence of multiple (harmonic) Ritz values, they will recognize an eigenvector present in the search space.

5. Refined extraction for the multiparameter problem. The refined extraction is an alternative approach that minimizes the residual norm over a given search space. It was popularized for the standard eigenvalue problem by Jia [15]. We now generalize this approach to the MEP. Given (σ, τ), for instance a tensor Rayleigh quotient [19] of the form

(5.1)    σ = ((u ⊗ v)^* Δ_1 (u ⊗ v)) / ((u ⊗ v)^* Δ_0 (u ⊗ v))
           = ((u^* A_1 u)(v^* C_2 v) − (u^* C_1 u)(v^* A_2 v)) / ((u^* B_1 u)(v^* C_2 v) − (u^* C_1 u)(v^* B_2 v)),
         τ = ((u ⊗ v)^* Δ_2 (u ⊗ v)) / ((u ⊗ v)^* Δ_0 (u ⊗ v))
           = ((u^* B_1 u)(v^* A_2 v) − (u^* A_1 u)(v^* B_2 v)) / ((u^* B_1 u)(v^* C_2 v) − (u^* C_1 u)(v^* B_2 v)),

or a target, the refined extraction determines an approximate eigenvector û ⊗ v̂ by minimizing the residual norms over U_k and V_k, respectively:

(5.2)    û = argmin_{u ∈ U_k, ||u|| = 1} ||(A_1 − σ B_1 − τ C_1) u||,
         v̂ = argmin_{v ∈ V_k, ||v|| = 1} ||(A_2 − σ B_2 − τ C_2) v||.

In practice it is often sensible to take the target in the beginning of the process and to switch to the Rayleigh quotient when the residual norm drops below a certain threshold, because this indicates that the Rayleigh quotient is of sufficient quality. If we always use the same target, then (5.2) is computationally best determined by the two incremental QR decompositions (3.3), followed by singular value decompositions of R_1 and R_2. If we vary σ and τ during the process, then we may incrementally compute QR-like decompositions

    A_1 U_k = Q_1 R_1A,  B_1 U_k = Q_1 R_1B,  C_1 U_k = Q_1 R_1C,
    A_2 V_k = Q_2 R_2A,  B_2 V_k = Q_2 R_2B,  C_2 V_k = Q_2 R_2C,

where Q_1 and Q_2 have 3k orthonormal columns and R_1A, R_1B, R_1C, R_2A, R_2B, and R_2C are 3k × k matrices. Such decompositions can be computed efficiently with a straightforward generalization of the approach presented in [22] for the case of two matrices. For each different value of σ and τ we then evaluate (5.2) by QR decompositions of the 3k × k matrices R_1A − σ R_1B − τ R_1C and R_2A − σ R_2B − τ R_2C, followed by singular value decompositions of k × k upper triangular matrices.
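In MATLAB, reusing the matrices and bases of the Section 3 sketch, the refined extraction for a fixed target amounts to two QR factorizations and two small SVDs (the minimizer of ||(A_1 − σB_1 − τC_1) U_k c|| = ||R_1 c|| over unit c is the right singular vector of the smallest singular value of R_1); the sketch below also evaluates the tensor Rayleigh quotient (5.1) of the refined pair:

    % Refined extraction (5.2) for a fixed target (sigma, tau); reuses
    % A1,...,C2, Uk, Vk, sigma, tau from the earlier sketch.
    [~, R1] = qr((A1 - sigma*B1 - tau*C1) * Uk, 0);
    [~, R2] = qr((A2 - sigma*B2 - tau*C2) * Vk, 0);
    [~, S1, W1] = svd(R1);  uhat = Uk * W1(:, end);  % right singular vector of
    [~, S2, W2] = svd(R2);  vhat = Vk * W2(:, end);  % the smallest singular value
    res1 = S1(end, end);    % = norm((A1 - sigma*B1 - tau*C1) * uhat)
    res2 = S2(end, end);    % = norm((A2 - sigma*B2 - tau*C2) * vhat)
    % Tensor Rayleigh quotient (5.1) of (uhat, vhat):
    a1 = uhat'*A1*uhat; b1 = uhat'*B1*uhat; c1 = uhat'*C1*uhat;
    a2 = vhat'*A2*vhat; b2 = vhat'*B2*vhat; c2 = vhat'*C2*vhat;
    den = b1*c2 - c1*b2;
    sigma_rq = (a1*c2 - c1*a2) / den;
    tau_rq   = (b1*a2 - a1*b2) / den;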

The following theorem is a generalization of [26, Thm. 4.10].

THEOREM 5.1. For the residuals of the refined Ritz vectors (5.2) we have

    ||(A_1 − σ B_1 − τ C_1) û|| ≤ ( |λ − σ| ||B_1|| + |µ − τ| ||C_1|| + ||A_1 − σ B_1 − τ C_1|| sin ∠(U_k, x) ) / (1 − sin^2 ∠(U_k, x))^{1/2},
    ||(A_2 − σ B_2 − τ C_2) v̂|| ≤ ( |λ − σ| ||B_2|| + |µ − τ| ||C_2|| + ||A_2 − σ B_2 − τ C_2|| sin ∠(V_k, y) ) / (1 − sin^2 ∠(V_k, y))^{1/2}.

Proof. Decompose x = γ_U x_U + σ_U e_U, where x_U := U_k U_k^* x / ||U_k U_k^* x|| is the normalized orthogonal projection of x onto U_k, ||x_U|| = ||e_U|| = 1, γ_U = cos ∠(U_k, x), and σ_U = sin ∠(U_k, x). Since x_U = (x − σ_U e_U)/γ_U, we have, by the definition (5.2) of a refined Ritz vector,

    ||(A_1 − σ B_1 − τ C_1) û|| ≤ ||(A_1 − σ B_1 − τ C_1) x_U||
        ≤ ( ||(λ − σ) B_1 x + (µ − τ) C_1 x|| + σ_U ||(A_1 − σ B_1 − τ C_1) e_U|| ) / γ_U,

from which the first result follows. The derivation of the second result is similar.

Similarly to the refined extraction for the standard eigenvalue problem, we see that, in contrast to the situation for the standard and harmonic extraction methods (Theorems 4.1 and 4.2), the conditions sin ∠(U_k, x) → 0 and sin ∠(V_k, y) → 0 are no longer sufficient for convergence of the refined vectors to eigenvectors; we also need σ → λ and τ → µ. Recall from Theorem 4.1 that, if an eigenvalue is simple, then sin ∠(U_k, x) → 0 and sin ∠(V_k, y) → 0 imply that there is a Ritz vector converging to the eigenvector corresponding to that eigenvalue. From this it follows in turn that the Ritz value converges to the eigenvalue (see, e.g., [11]). Therefore, if we are prepared to accept the additional computational cost of varying σ and τ, and asymptotically take the Ritz values as shifts, the refined Ritz vector will converge to the eigenvector if the eigenvalue is simple and sin ∠(U_k, x) → 0 and sin ∠(V_k, y) → 0.

6. Numerical experiments. The numerical results in this section were obtained with Matlab 7.0.

EXAMPLE 6.1. In the first example we consider random right definite two-parameter eigenvalue problems with known eigenpairs, which enables us to check the obtained results. We take

    A_i = S_i^* F_i S_i,  B_i = S_i^* G_i S_i,  C_i = S_i^* H_i S_i,

where F_i, G_i, and H_i are diagonal matrices and the S_i are banded matrices, generated in Matlab by 2*speye(1000)+triu(tril(sprandn(1000,1000,d),b),-b), where d is the density and b is the bandwidth, for i = 1, 2. We select the diagonal elements of F_1, F_2, G_2, and H_1 as normally distributed random numbers with mean zero and variance one, and the diagonal elements of G_1 and H_2 as normally distributed random numbers with mean 5 and variance one. In this way the problem is right definite and the eigenvalues can be computed exactly from the diagonal elements of F_i, G_i, and H_i; see [11] for details.
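A sketch of this construction (under the congruence reading A_i = S_i^* F_i S_i given above; the values of d and b vary per experiment):

    % Right definite test problem with known eigenvalues (Example 6.1).
    n = 1000; d = 0.02; b = 50;
    S1 = 2*speye(n) + triu(tril(sprandn(n,n,d), b), -b);
    S2 = 2*speye(n) + triu(tril(sprandn(n,n,d), b), -b);
    f1 = randn(n,1); g1 = 5 + randn(n,1); h1 = randn(n,1);    % mean 5 on G1
    f2 = randn(n,1); g2 = randn(n,1);     h2 = 5 + randn(n,1); % mean 5 on H2
    A1 = S1' * spdiags(f1,0,n,n) * S1;  B1 = S1' * spdiags(g1,0,n,n) * S1;
    C1 = S1' * spdiags(h1,0,n,n) * S1;
    A2 = S2' * spdiags(f2,0,n,n) * S2;  B2 = S2' * spdiags(g2,0,n,n) * S2;
    C2 = S2' * spdiags(h2,0,n,n) * S2;
    % Each eigenvalue (lambda, mu) solves the 2 x 2 diagonal system
    %   f1(j) = lambda*g1(j) + mu*h1(j),
    %   f2(l) = lambda*g2(l) + mu*h2(l),    j, l = 1, ..., n,
    % with eigenvectors x = S1 \ e_j and y = S2 \ e_l.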

We are interested in approximating the innermost eigenpair ((λ, µ), x ⊗ y), i.e., the pair for which the eigenvalue (λ, µ) is closest to the arithmetic mean of the eigenvalues. For the search space U we take the span of a perturbation of the eigenvector component x and of nine additional random vectors. The search space V is formed similarly. We test with different perturbations, which affect the quality of the 10-dimensional search spaces U and V. We compare the approximations for the innermost eigenpair obtained from the standard and the harmonic extraction. The results are in Table 6.1.

Let (θ, η) be a standard or harmonic Ritz value that approximates (λ, µ), and let u ⊗ v be the corresponding standard or harmonic Ritz vector. The rows in Table 6.1 are:
- subspace: ∠(x ⊗ y, U ⊗ V), the angle between the exact eigenvector x ⊗ y and the search space U ⊗ V; this quantity indicates the best result any extraction method can obtain.
- vector: ∠(u ⊗ v, x ⊗ y), the angle between the exact eigenvector and the (harmonic) Ritz vector.
- value: (|λ − θ|^2 + |µ − η|^2)^{1/2}, the difference between the (harmonic) Ritz value (θ, η) and the exact eigenvalue (λ, µ).
- residual: the norm of the residual of the (harmonic) Ritz pair,

        ( ||(A_1 − θ B_1 − η C_1) u||^2 + ||(A_2 − θ B_2 − η C_2) v||^2 )^{1/2}.

- RQ value: the difference between the Rayleigh quotient (5.1) of the (harmonic) Ritz vector and the exact eigenvalue (λ, µ). For the standard extraction this is the same as the value row, and it is therefore omitted.
- RQ residual: the norm of the residual as above, where we take the Rayleigh quotient of the (harmonic) Ritz vector instead of the (harmonic) Ritz value for (θ, η). For the standard extraction this is the same as the residual row.
- refined vector: ∠(û ⊗ v̂, x ⊗ y), the angle between the exact eigenvector and the refined vector û ⊗ v̂, which minimizes (5.2) with the Ritz value, respectively the Rayleigh quotient of the harmonic Ritz vector, as shift.
- refined residual: the norm of the residual as above, where we take the Ritz value, respectively the Rayleigh quotient of the harmonic Ritz vector, for (θ, η), and the refined vector û ⊗ v̂.

TABLE 6.1. A comparison of the standard and harmonic extraction from three different subspaces for a right definite two-parameter eigenvalue problem.

                      b = 50, d = 0.02       b = 40, d = 0.03       b = 50, d = 0.03
                      subspace = 3.6e-3      subspace = 4.0e-5      subspace = 4.2e-7
                      standard  harmonic     standard  harmonic     standard  harmonic
    vector            1.9e-2    4.4e-3       3.7e-4    4.2e-5       1.6e-6    4.3e-7
    value             1.2e-5    2.0e-2       1.2e-8    4.0e-5       2.5e-…    …e-8
    residual          1.4e-1    1.8e-1       2.6e-3    4.2e-4       1.2e-5    3.0e-6
    RQ value                    4.6e-6                 8.8e-9                 1.5e-14
    RQ residual                 3.4e-2                 3.9e-4                 3.0e-6
    refined vector    3.8e-3    3.8e-3       4.1e-5    4.1e-5       4.3e-7    4.3e-7
    refined residual  3.1e-2    3.1e-2       3.9e-4    3.9e-4       3.0e-6    3.0e-6

From Table 6.1 we see that the harmonic extraction returns eigenvector approximations that are almost optimal, i.e., very close to the orthogonal projections of the exact eigenvectors onto the search spaces. On the other hand, as is also usual for the standard eigenvalue problem, the harmonic Ritz values are less favorable approximations to the exact eigenvalues than the Ritz values. However, if we use the harmonic Ritz vector with its tensor Rayleigh quotient as an approximation to the eigenvalue, we get better approximations and smaller residuals than in the standard extraction.

If we apply the refined extraction, where we take the Rayleigh quotient for the approximation of the eigenvalue, we can further improve the results. The improvement is substantial for the standard extraction, but small for the harmonic extraction. Results after the refined extraction do not differ much, regardless of whether we start with the standard or the harmonic extraction.

EXAMPLE 6.2. In the second example we take a random non-right definite two-parameter eigenvalue problem of the same dimensions as in Example 6.1. Here we select the diagonal elements of F_1, F_2, G_1, G_2, H_1, and H_2 as complex numbers whose real and imaginary parts are uniformly distributed random numbers. Table 6.2 contains the results of numerical experiments similar to those in Example 6.1.

TABLE 6.2. A comparison of the standard and harmonic extraction from three different subspaces for a non-right definite two-parameter eigenvalue problem.

                      b = 50, d = 0.02       b = 40, d = 0.03       b = 50, d = 0.03
                      subspace = 2.6e-3      subspace = 4.8e-5      subspace = 5.0e-7
                      standard  harmonic     standard  harmonic     standard  harmonic
    vector            7.4e-3    2.6e-3       1.8e-4    4.9e-5       1.0e-5    5.1e-7
    value             2.7e-3    1.5e-4       4.9e-5    4.7e-6       1.4e-6    4.1e-8
    residual          1.8e-2    5.8e-3       4.5e-4    1.3e-4       2.9e-5    1.5e-6
    RQ value                    3.0e-3                 3.8e-5                 1.6e-6
    RQ residual                 7.5e-3                 1.3e-4                 3.1e-6
    refined vector    2.6e-3    2.6e-3       4.9e-5    4.9e-5       5.1e-7    5.1e-7
    refined residual  7.2e-3    7.5e-3       1.4e-4    1.3e-4       3.2e-6    3.1e-6

As in the previous example, the harmonic extraction returns almost optimal eigenvector approximations, which are clearly better than the results of the standard extraction. Since this problem is non-right definite, the error in the tensor Rayleigh quotient is linear in the eigenvector approximation error, compared to quadratic for the right definite problem in the previous example; see [11]. Again, we can improve the results using the refined extraction, in particular those of the standard extraction.

EXAMPLE 6.3. We take the right definite examples from Example 6.1 with density d = 0.01 and bandwidth b = 80. We compare the eigenvalues obtained by the Jacobi-Davidson method [11], where we apply the standard and the harmonic extraction, respectively. We start with the same initial vectors, and we test various numbers of inner GMRES steps for approximately solving the correction equation. We use the second-order correction equation with oblique projections; for the details see [11]. The maximum dimension of the search spaces is 14, after which we restart with three-dimensional spaces. For the target (σ, τ) we take the arithmetic mean of the eigenvalues. In the extraction phase the standard or harmonic Ritz value closest to the target is selected. Subsequently we take the corresponding eigenvector approximation and use its tensor Rayleigh quotient as an approximate eigenvalue. As one can see in Examples 6.1 and 6.2, this gives a better approximation to the eigenvalue when we use the harmonic extraction, whereas it naturally does not change the eigenvalue approximation in the standard extraction.

We compute 100 eigenvalues, where we note that, with a total number of 10^6 eigenvalues, this problem cannot be considered a toy problem. The criterion for convergence is that the norm of the residual is below a fixed tolerance. In the correction equation we use preconditioning with an incomplete LU factorization of the matrix A_i − σ B_i − τ C_i, with drop tolerance 10^{-3}, for i = 1, 2. If we use the standard extraction, then in each outer iteration the projected problem is right definite and we solve it using the algorithm in [24]. If we use the harmonic extraction, then the projected problem (3.4) is not definite.
We could solve it with the method described in [10], using the QZ algorithm on (1.3), but experiments showed that we obtain very similar results by applying a cheaper, but possibly unstable, approach: we first solve one of the projected generalized eigenvalue problems (1.3), i.e., the eigenvalue problem Δ_0^{-1} Δ_1 z = λ z, where Δ_0 and Δ_1 here denote the projected Δ_0 and Δ_1, and then insert the vector z into the second equation in order to obtain µ.
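In a MATLAB sketch (with D0, D1, D2 denoting the matrix determinants of the projected problem (3.4), as in the sketch at the end of Section 3), this cheaper route reads:

    % Possibly unstable but cheap solver for the projected MEP: solve the
    % first coupled pencil of (1.3) once, then insert each eigenvector z
    % into the second pencil to read off mu.
    [Z, L] = eig(D0 \ D1);                 % D0^{-1} D1 z = lambda z
    lam = diag(L);
    mu = zeros(size(lam));
    for j = 1:length(lam)
        z = Z(:, j);
        mu(j) = (z' * D2 * z) / (z' * D0 * z);
    end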

All of the above methods require O(k^6) flops to solve the projected two-parameter eigenvalue problem, where k is the size of the search spaces.

The values in Table 6.3 are:
- iter: the number of outer iterations;
- time: time in seconds;
- in 50, in 100: the number of computed eigenvalues that are among the 50 and 100 closest eigenvalues to the target, respectively.

TABLE 6.3. Comparison of the standard and harmonic extraction for the JD type method for a right definite two-parameter eigenvalue problem. (Columns: GMRES steps; iter, time, in 50, in 100, for the standard and for the harmonic extraction.)

The results in Table 6.3 show that the harmonic extraction is faster and only slightly less accurate than the standard extraction. Solving the projected problems is more expensive for the harmonic extraction, because they are not right definite, as opposed to the projected problems in the standard extraction. However, as the harmonic extraction requires far fewer outer iterations, it computes the eigenvalues faster than the standard extraction. On the other hand, the standard extraction returns somewhat more accurate results; for instance, in all cases we get all 50 closest eigenvalues to the target. Both methods are suitable for this right definite two-parameter eigenvalue problem and, based on the results, we give the harmonic extraction a slight preference over the standard extraction.

We also applied the refined extraction but, in spite of the results in Tables 6.1 and 6.2, the experiments did not show advantages of the refined extraction. We do not report the results, but we note that the refined method is more expensive and usually requires more outer iterations. As these remarks also apply to the remaining numerical examples in this section, we excluded the refined approach in the following. As one can see in the next example, the difference between the standard and the harmonic extraction may be much more in favor of the harmonic extraction when we consider a non-definite two-parameter eigenvalue problem.

EXAMPLE 6.4. In this example we take a random non-right definite two-parameter eigenvalue problem with matrices of size 1000 × 1000 from Example 6.2, with density d = 0.01 and bandwidth b = 80. We perform experiments similar to those in Example 6.3; the only difference is that now we use the two-sided Ritz extraction as well. As discussed in [10], this is a natural approach when we have a non-definite two-parameter eigenvalue problem. We limit the computation to 2500 outer iterations or to 50 extracted eigenvalues. Neither the one-sided standard nor the two-sided standard extraction is able to compute the required number of eigenvalues in the prescribed number of outer iterations (therefore we omit the number of iterations and the CPU time for these methods). This is not an issue for the harmonic extraction, which is a clear winner in this example. The results are presented in Table 6.4.

Figure 6.1 shows the convergence graphs for the two-sided extraction (a) and the harmonic extraction (b) for the first 40 outer iterations; in both cases we take 8 GMRES steps in the inner iteration.

TABLE 6.4. Comparison of the one-sided standard, two-sided standard, and harmonic extraction methods for the JD type method for a non-right definite two-parameter eigenvalue problem. (Columns: GMRES steps; eigs, in 10, in 30 for the one-sided and the two-sided standard extraction; iter, time, in 10, in 30 for the harmonic extraction.)

One can see that the convergence is more erratic when we use the standard extraction, and smoother (almost monotonous) when we use the harmonic extraction.

FIG. 6.1. Comparison of convergence using the two-sided Ritz extraction (a) and the harmonic extraction (b); both panels plot the log10 of the residual norm against the number of outer iterations.

EXAMPLE 6.5. Using the same non-right definite problem as in Example 6.4, we test how many eigenvalues we can extract with a limited number of matrix-vector multiplications, i.e., we fix the product of inner and outer iterations. The limit is 3200.

TABLE 6.5. Eigenvalues obtained using the harmonic extraction and 3200 inner iterations. (Columns: GMRES steps; all, in 25, in 50, time (sec).)

The results in Table 6.5 show that for a low number of inner iterations we get fewer eigenvalues and spend more time, but the possibility of computing an unwanted eigenvalue is smaller. If we use more inner iterations, we get many unwanted eigenvalues, but spend less time. The optimal combination is to take a moderate number of inner iterations; in this example this would be between 16 and 32 inner steps.

Besides the matrix-vector multiplications, the most time-consuming operation is solving the projected low-dimensional two-parameter eigenvalue problem in each outer step. If the search spaces are of size k, then we need O(k^6) flops to solve these projected problems. Since this is relatively expensive compared to the work for the matrix-vector multiplications, it is a good idea to use several GMRES steps (but not too many, to avoid convergence to an unwanted eigenvalue) to try to reduce the number of outer iterations.

EXAMPLE 6.6. In this example we take a non-right definite two-parameter eigenvalue problem where A_i, B_i, and C_i are random complex banded matrices of size 500 × 500, generated by the Matlab command

    M = sparse(triu(tril(randn(500)+i*randn(500),5),-5));

where M is respectively equal to A_1, B_1, C_1, A_2, B_2, and C_2.

We look for the eigenvalues closest to the origin, and for a preconditioner we take M_i = A_i for i = 1, 2. We limit the computation to 1000 outer iterations or 15 extracted eigenvalues.

TABLE 6.6. Comparison of the standard and harmonic extraction for a non-right definite two-parameter eigenvalue problem. (Columns: GMRES steps; eigs, time, in 5 for the one-sided and the two-sided standard extraction; iter, time, in 5 for the harmonic extraction.)

From the results in Table 6.6 one can clearly see that the harmonic extraction extracts more eigenvalues than the standard extraction. Both the one-sided and the two-sided standard Ritz extraction fail to compute 15 eigenvalues in 1000 outer iterations; therefore the number of iterations is displayed only for the harmonic extraction (the number of iterations for both the one-sided and the two-sided Ritz extraction is 1000). The one-sided standard extraction is particularly poor, in view of the fact that it manages to compute at most one eigenvalue. The two-sided Ritz extraction computes more eigenvalues, but falls considerably short of the required 15. For this example the harmonic extraction is clearly the suggested method.

EXAMPLE 6.7. We take the two-parameter eigenvalue problem from Example 8.4 in [10]. The problem is non-right definite. We used this problem in [10] to demonstrate that the two-sided Ritz extraction may give better results than the one-sided standard extraction. We limit the computation to 500 outer iterations or 30 extracted eigenvalues. For the target we take the arithmetic mean of the eigenvalues. The results in Table 6.7 show that the harmonic extraction is a substantial improvement over the standard extraction. As in the previous example, both the one-sided and the two-sided Ritz extraction fail to compute the required number of eigenvalues in the available number of outer iterations. The number of iterations is displayed in Table 6.7 only for the harmonic extraction; the number of iterations for the one-sided and the two-sided Ritz extraction is 500.

TABLE 6.7. Comparison of the standard and harmonic extraction for a non-right definite two-parameter eigenvalue problem. (Columns: GMRES steps; eigs, time, in 10 for the one-sided and the two-sided standard extraction; iter, time, in 10 for the harmonic extraction.)

7. Conclusions. It was observed in [11] that the multiparameter eigenvalue problem is a challenge, especially with respect to the task of finding interior eigenvalues. The concept of a harmonic extraction technique for the MEP was left there as an open question, and is dealt with in this paper.

We have seen that, although there seems to be no straightforward generalization of the spectral transformation (2.1) for the MEP, the harmonic approach can be generalized to the MEP, with a corresponding elegant and intuitive generalization of Saad's theorem. We also gave a generalization of the refined extraction, which seems to be less suited for this problem.

Based on the theory and the numerical results, our recommendations for the numerical computation of interior eigenvalues of a MEP are the following.
- For right definite MEPs we use the one-sided Jacobi-Davidson method [11] for the subspace expansion. The harmonic extraction presented in this paper is at least very competitive with the standard extraction described in [11].
- For non-right definite problems, the one-sided approach [11] combined with the harmonic extraction described in this paper is both faster and more accurate than the two-sided approach proposed in [10], which in its turn is more accurate than the one-sided approach with the standard extraction [11].
- For exterior eigenvalues we opt for the standard extraction, combined with a one-sided approach for right definite MEPs [11] or a two-sided approach for non-right definite MEPs [10].

It is important to realize that, for the MEP, solving the projected problems is itself already a computationally non-negligible task, in view of the O(k^6) costs. Therefore it is advisable to invest in solving the correction equations relatively accurately, to minimize these costs. Hence, although just a few steps of GMRES may give more accurate results, because we compute fewer unwanted eigenpairs, this may be much more time demanding.

Finally, we remark that a two-sided harmonic approach is possible, but much less effective, since the correspondence between right and left approximate eigenvectors is lost; we will not go into further details. The methods in this paper can be generalized to MEPs with more than two parameters in a straightforward way (see, e.g., [12] for some more details on these MEPs). Matlab code of the methods is available on request.

Acknowledgments. The authors thank the referees for their constructive and very helpful comments.

REFERENCES

[1] F. V. ATKINSON, Multiparameter spectral theory, Bull. Amer. Math. Soc., 74 (1968).
[2] E. K. BLUM AND A. F. CHANG, A numerical method for the solution of the double eigenvalue problem, J. Inst. Math. Appl., 22 (1978).
[3] E. K. BLUM AND A. R. CURTIS, A convergent gradient method for matrix eigenvector-eigentuple problems, Numer. Math., 31 (1978/79).
[4] E. K. BLUM AND P. B. GELTNER, Numerical solution of eigentuple-eigenvector problems in Hilbert spaces by a gradient method, Numer. Math., 31 (1978/79).
[5] Z. BOHTE, Numerical solution of some two-parameter eigenvalue problems, Slov. Acad. Sci. Art., Anton Kuhelj Memorial Volume (1982).
[6] P. J. BROWNE AND B. D. SLEEMAN, A numerical technique for multiparameter eigenvalue problems, IMA J. Numer. Anal., 2 (1982).
[7] A. BUNSE-GERSTNER, R. BYERS, AND V. MEHRMANN, Numerical methods for simultaneous diagonalization, SIAM J. Matrix Anal. Appl., 14 (1993).
[8] G. CHEN AND Z. JIA, An analogue of the results of Saad and Stewart for harmonic Ritz vectors, J. Comput. Appl. Math., 167 (2004).
[9] D. R. FOKKEMA, G. L. G. SLEIJPEN, AND H. A. VAN DER VORST, Jacobi-Davidson style QR and QZ algorithms for the reduction of matrix pencils, SIAM J. Sci. Comput., 20 (1998).
[10] M. E. HOCHSTENBACH, T. KOŠIR, AND B. PLESTENJAK, A Jacobi-Davidson type method for the nonsingular two-parameter eigenvalue problem, SIAM J. Matrix Anal. Appl., 26 (2005).
[11] M. E. HOCHSTENBACH AND B. PLESTENJAK, A Jacobi-Davidson type method for a right definite two-parameter eigenvalue problem, SIAM J. Matrix Anal. Appl., 24 (2002).
[12] M. E. HOCHSTENBACH AND B. PLESTENJAK, Backward error, condition numbers, and pseudospectrum for the multiparameter eigenvalue problem, Linear Algebra Appl., 375 (2003).

[13] M. E. HOCHSTENBACH AND G. L. G. SLEIJPEN, Harmonic and refined extraction methods for the polynomial eigenvalue problem, Numer. Linear Algebra Appl., 15 (2008).
[14] X. Z. JI, Numerical solution of joint eigenpairs of a family of commutative matrices, Appl. Math. Lett., 4 (1991).
[15] Z. JIA, Refined iterative algorithms based on Arnoldi's process for large unsymmetric eigenproblems, Linear Algebra Appl., 259 (1997).
[16] R. B. MORGAN, Computing interior eigenvalues of large matrices, Linear Algebra Appl., 154/156 (1991).
[17] C. C. PAIGE, B. N. PARLETT, AND H. A. VAN DER VORST, Approximate solutions and eigenvalue bounds from Krylov subspaces, Numer. Linear Algebra Appl., 2 (1995).
[18] B. N. PARLETT, The Symmetric Eigenvalue Problem, SIAM, Philadelphia, PA. Corrected reprint of the 1980 original.
[19] B. PLESTENJAK, A continuation method for a right definite two-parameter eigenvalue problem, SIAM J. Matrix Anal. Appl., 21 (2000).
[20] B. PLESTENJAK, A continuation method for a weakly elliptic two-parameter eigenvalue problem, IMA J. Numer. Anal., 21 (2001).
[21] Y. SAAD, Numerical Methods for Large Eigenvalue Problems, Manchester University Press, Manchester, UK.
[22] P. SPELLUCCI AND W. M. HARTMANN, A QR decomposition for matrix pencils, BIT, 40 (2000).
[23] M. SHIMASAKI, A homotopy algorithm for two-parameter eigenvalue problems, ZAMM Z. Angew. Math. Mech., 76 (1996), Suppl. 2.
[24] T. SLIVNIK AND G. TOMŠIČ, A numerical method for the solution of two-parameter eigenvalue problems, J. Comput. Appl. Math., 15 (1986).
[25] G. W. STEWART, A generalization of Saad's theorem on Rayleigh-Ritz approximations, Linear Algebra Appl., 327 (2001).
[26] G. W. STEWART, Matrix Algorithms, Vol. II, SIAM, Philadelphia, PA.
[27] H. VOLKMER, Multiparameter Eigenvalue Problems and Expansion Theorems, Springer-Verlag, Berlin, 1988.


More information

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii

More information

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems AMSC 663-664 Final Report Minghao Wu AMSC Program mwu@math.umd.edu Dr. Howard Elman Department of Computer

More information

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic joint work with Gérard

More information

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts

More information

Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations

Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations Elias Jarlebring a, Michiel E. Hochstenbach b,1, a Technische Universität Braunschweig,

More information

Reduction of nonlinear eigenproblems with JD

Reduction of nonlinear eigenproblems with JD Reduction of nonlinear eigenproblems with JD Henk van der Vorst H.A.vanderVorst@math.uu.nl Mathematical Institute Utrecht University July 13, 2005, SIAM Annual New Orleans p.1/16 Outline Polynomial Eigenproblems

More information

ABSTRACT. Professor G.W. Stewart

ABSTRACT. Professor G.W. Stewart ABSTRACT Title of dissertation: Residual Arnoldi Methods : Theory, Package, and Experiments Che-Rung Lee, Doctor of Philosophy, 2007 Dissertation directed by: Professor G.W. Stewart Department of Computer

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers MAX PLANCK INSTITUTE International Conference on Communications, Computing and Control Applications March 3-5, 2011, Hammamet, Tunisia. Model order reduction of large-scale dynamical systems with Jacobi-Davidson

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Universiteit-Utrecht. Department. of Mathematics. The convergence of Jacobi-Davidson for. Hermitian eigenproblems. Jasper van den Eshof.

Universiteit-Utrecht. Department. of Mathematics. The convergence of Jacobi-Davidson for. Hermitian eigenproblems. Jasper van den Eshof. Universiteit-Utrecht * Department of Mathematics The convergence of Jacobi-Davidson for Hermitian eigenproblems by Jasper van den Eshof Preprint nr. 1165 November, 2000 THE CONVERGENCE OF JACOBI-DAVIDSON

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

EINDHOVEN UNIVERSITY OF TECHNOLOGY Department ofmathematics and Computing Science

EINDHOVEN UNIVERSITY OF TECHNOLOGY Department ofmathematics and Computing Science EINDHOVEN UNIVERSITY OF TECHNOLOGY Department ofmathematics and Computing Science RANA03-15 July 2003 The application of preconditioned Jacobi-Davidson methods in pole-zero analysis by J. Rommes, C.W.

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 2009 On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Maxim Maumov

More information

Harmonic and refined extraction methods for the singular value problem, with applications in least squares problems

Harmonic and refined extraction methods for the singular value problem, with applications in least squares problems Harmonic and refined extraction methods for the singular value problem, with applications in least squares problems Michiel E. Hochstenbach December 17, 2002 Abstract. For the accurate approximation of

More information

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity

More information

KU Leuven Department of Computer Science

KU Leuven Department of Computer Science A Sylvester Arnoldi type method for the generalized eigenvalue problem with two-by-two operator determinants Karl Meerbergen Bor Plestenjak Report TW 653, August 2014 KU Leuven Department of Computer Science

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

MULTIGRID ARNOLDI FOR EIGENVALUES

MULTIGRID ARNOLDI FOR EIGENVALUES 1 MULTIGRID ARNOLDI FOR EIGENVALUES RONALD B. MORGAN AND ZHAO YANG Abstract. A new approach is given for computing eigenvalues and eigenvectors of large matrices. Multigrid is combined with the Arnoldi

More information

A Residual Inverse Power Method

A Residual Inverse Power Method University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR 2007 09 TR 4854 A Residual Inverse Power Method G. W. Stewart February 2007 ABSTRACT The inverse

More information

Polynomial optimization and a Jacobi Davidson type method for commuting matrices

Polynomial optimization and a Jacobi Davidson type method for commuting matrices Polynomial optimization and a Jacobi Davidson type method for commuting matrices Ivo W. M. Bleylevens a, Michiel E. Hochstenbach b, Ralf L. M. Peeters a a Department of Knowledge Engineering Maastricht

More information

LARGE SPARSE EIGENVALUE PROBLEMS. General Tools for Solving Large Eigen-Problems

LARGE SPARSE EIGENVALUE PROBLEMS. General Tools for Solving Large Eigen-Problems LARGE SPARSE EIGENVALUE PROBLEMS Projection methods The subspace iteration Krylov subspace methods: Arnoldi and Lanczos Golub-Kahan-Lanczos bidiagonalization General Tools for Solving Large Eigen-Problems

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

LARGE SPARSE EIGENVALUE PROBLEMS

LARGE SPARSE EIGENVALUE PROBLEMS LARGE SPARSE EIGENVALUE PROBLEMS Projection methods The subspace iteration Krylov subspace methods: Arnoldi and Lanczos Golub-Kahan-Lanczos bidiagonalization 14-1 General Tools for Solving Large Eigen-Problems

More information

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009)

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009) Iterative methods for Linear System of Equations Joint Advanced Student School (JASS-2009) Course #2: Numerical Simulation - from Models to Software Introduction In numerical simulation, Partial Differential

More information

Alternative correction equations in the Jacobi-Davidson method. Mathematical Institute. Menno Genseberger and Gerard L. G.

Alternative correction equations in the Jacobi-Davidson method. Mathematical Institute. Menno Genseberger and Gerard L. G. Universiteit-Utrecht * Mathematical Institute Alternative correction equations in the Jacobi-Davidson method by Menno Genseberger and Gerard L. G. Sleijpen Preprint No. 173 June 1998, revised: March 1999

More information

On the influence of eigenvalues on Bi-CG residual norms

On the influence of eigenvalues on Bi-CG residual norms On the influence of eigenvalues on Bi-CG residual norms Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic duintjertebbens@cs.cas.cz Gérard Meurant 30, rue

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

On the Ritz values of normal matrices

On the Ritz values of normal matrices On the Ritz values of normal matrices Zvonimir Bujanović Faculty of Science Department of Mathematics University of Zagreb June 13, 2011 ApplMath11 7th Conference on Applied Mathematics and Scientific

More information

SOLVING MESH EIGENPROBLEMS WITH MULTIGRID EFFICIENCY

SOLVING MESH EIGENPROBLEMS WITH MULTIGRID EFFICIENCY SOLVING MESH EIGENPROBLEMS WITH MULTIGRID EFFICIENCY KLAUS NEYMEYR ABSTRACT. Multigrid techniques can successfully be applied to mesh eigenvalue problems for elliptic differential operators. They allow

More information

Algorithms for Solving the Polynomial Eigenvalue Problem

Algorithms for Solving the Polynomial Eigenvalue Problem Algorithms for Solving the Polynomial Eigenvalue Problem Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/ Joint work with D. Steven Mackey

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

c 2009 Society for Industrial and Applied Mathematics

c 2009 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 31, No. 2, pp. 460 477 c 2009 Society for Industrial and Applied Mathematics CONTROLLING INNER ITERATIONS IN THE JACOBI DAVIDSON METHOD MICHIEL E. HOCHSTENBACH AND YVAN

More information

PROJECTED GMRES AND ITS VARIANTS

PROJECTED GMRES AND ITS VARIANTS PROJECTED GMRES AND ITS VARIANTS Reinaldo Astudillo Brígida Molina rastudillo@kuaimare.ciens.ucv.ve bmolina@kuaimare.ciens.ucv.ve Centro de Cálculo Científico y Tecnológico (CCCT), Facultad de Ciencias,

More information

c 2006 Society for Industrial and Applied Mathematics

c 2006 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 28, No. 4, pp. 1069 1082 c 2006 Society for Industrial and Applied Mathematics INEXACT INVERSE ITERATION WITH VARIABLE SHIFT FOR NONSYMMETRIC GENERALIZED EIGENVALUE PROBLEMS

More information

Semi-implicit Krylov Deferred Correction Methods for Ordinary Differential Equations

Semi-implicit Krylov Deferred Correction Methods for Ordinary Differential Equations Semi-implicit Krylov Deferred Correction Methods for Ordinary Differential Equations Sunyoung Bu University of North Carolina Department of Mathematics CB # 325, Chapel Hill USA agatha@email.unc.edu Jingfang

More information

ABSTRACT OF DISSERTATION. Ping Zhang

ABSTRACT OF DISSERTATION. Ping Zhang ABSTRACT OF DISSERTATION Ping Zhang The Graduate School University of Kentucky 2009 Iterative Methods for Computing Eigenvalues and Exponentials of Large Matrices ABSTRACT OF DISSERTATION A dissertation

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna. Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator

More information

Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis Rommes, J.; Vorst, van der, H.A.; ter Maten, E.J.W.

Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis Rommes, J.; Vorst, van der, H.A.; ter Maten, E.J.W. Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis Rommes, J.; Vorst, van der, H.A.; ter Maten, E.J.W. Published: 01/01/2003 Document Version Publisher s PDF, also known

More information

Math 504 (Fall 2011) 1. (*) Consider the matrices

Math 504 (Fall 2011) 1. (*) Consider the matrices Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture

More information

Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations

Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations Elias Jarlebring a, Michiel E. Hochstenbach b,1, a Technische Universität Braunschweig,

More information

A Model-Trust-Region Framework for Symmetric Generalized Eigenvalue Problems

A Model-Trust-Region Framework for Symmetric Generalized Eigenvalue Problems A Model-Trust-Region Framework for Symmetric Generalized Eigenvalue Problems C. G. Baker P.-A. Absil K. A. Gallivan Technical Report FSU-SCS-2005-096 Submitted June 7, 2005 Abstract A general inner-outer

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

Rational Krylov methods for linear and nonlinear eigenvalue problems

Rational Krylov methods for linear and nonlinear eigenvalue problems Rational Krylov methods for linear and nonlinear eigenvalue problems Mele Giampaolo mele@mail.dm.unipi.it University of Pisa 7 March 2014 Outline Arnoldi (and its variants) for linear eigenproblems Rational

More information

COMPUTING DOMINANT POLES OF TRANSFER FUNCTIONS

COMPUTING DOMINANT POLES OF TRANSFER FUNCTIONS COMPUTING DOMINANT POLES OF TRANSFER FUNCTIONS GERARD L.G. SLEIJPEN AND JOOST ROMMES Abstract. The transfer function describes the response of a dynamical system to periodic inputs. Dominant poles are

More information

A Refined Jacobi-Davidson Method and Its Correction Equation

A Refined Jacobi-Davidson Method and Its Correction Equation Available online at www.sciencedirectcom An Intemational Journal computers & oc,..c, mathematics with applications Computers and Mathematics with Applications 49 (2005) 417-427 www.elsevier.com/locate/camwa

More information

An Arnoldi Method for Nonlinear Symmetric Eigenvalue Problems

An Arnoldi Method for Nonlinear Symmetric Eigenvalue Problems An Arnoldi Method for Nonlinear Symmetric Eigenvalue Problems H. Voss 1 Introduction In this paper we consider the nonlinear eigenvalue problem T (λ)x = 0 (1) where T (λ) R n n is a family of symmetric

More information

PERTURBED ARNOLDI FOR COMPUTING MULTIPLE EIGENVALUES

PERTURBED ARNOLDI FOR COMPUTING MULTIPLE EIGENVALUES 1 PERTURBED ARNOLDI FOR COMPUTING MULTIPLE EIGENVALUES MARK EMBREE, THOMAS H. GIBSON, KEVIN MENDOZA, AND RONALD B. MORGAN Abstract. fill in abstract Key words. eigenvalues, multiple eigenvalues, Arnoldi,

More information

The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method. V. Simoncini. Dipartimento di Matematica, Università di Bologna

The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method. V. Simoncini. Dipartimento di Matematica, Università di Bologna The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information