
Universiteit Utrecht * Mathematical Institute

Using domain decomposition in the Jacobi-Davidson method

by Menno Genseberger, Gerard Sleijpen, and Henk Van der Vorst

Preprint No. October, 2000. Also available at URL


Using domain decomposition in the Jacobi-Davidson method

Menno Genseberger*, Gerard Sleijpen, and Henk Van der Vorst

October, 2000

Abstract

The Jacobi-Davidson method is suitable for computing solutions of large $n$-dimensional eigenvalue problems. It needs (approximate) solutions of specific $n$-dimensional linear systems. Here we propose a strategy based on a nonoverlapping domain decomposition technique in order to reduce the wall clock time and local memory requirements. For a model eigenvalue problem we derive optimal coupling parameters. Numerical experiments show the effect of this approach on the overall Jacobi-Davidson process. The implementation of the eventual process on a parallel computer is beyond the scope of this paper.

Keywords: Eigenvalue problems, domain decomposition, Jacobi-Davidson, Schwarz method, nonoverlapping, iterative methods.

2000 Mathematics Subject Classification: 65F15, 65N25, 65N55.

1 Introduction

The Jacobi-Davidson method [17] is a valuable approach for the solution of large (generalized) linear eigenvalue problems. The method reduces the large problem to a small one by projecting it on an appropriate low-dimensional subspace. Approximate solutions for eigenpairs of the large problem are obtained from the small problem by means of a Rayleigh-Ritz principle.

The heart of the Jacobi-Davidson method is how the subspace is expanded. To keep the dimension of the subspace, and consequently the size of the small problem, low, it is essential that all necessary information about the wanted eigenpair(s) is collected in the subspace after a small number of iterations. Therefore, the subspace should be expanded with a vector that contains important information not already present in the subspace. The correction equation of the Jacobi-Davidson method aims to prescribe such a vector. But in itself, the correction equation poses a large linear problem, with size equal to the size of the originating large eigenvalue problem. Because of this, most of the computational work of the Jacobi-Davidson method arises from solving the correction equation.

In practice the eigenvalue problem is often so large that an accurate solution of the correction equation is too expensive. However, approximate solutions of the correction equation often suffice to obtain sufficiently fast convergence of the Jacobi-Davidson method.

*Mathematical Institute, Utrecht University, and CWI, Amsterdam, The Netherlands. Mathematical Institute, Utrecht University, P.O. Box , 3508 TA Utrecht, The Netherlands. E-mail: genseber@math.uu.nl, sleijpen@math.uu.nl, vorst@math.uu.nl

The speed of this convergence depends on the accuracy of the approximate solution. Jacobi-Davidson lends itself to be used in combination with a preconditioned iterative solver for the correction equation. In such a case the quality of the preconditioner is critical.

Nonoverlapping domain decomposition methods for linear systems have been studied extensively in the literature. Because of the absence of overlapping regions they have computational advantages compared to domain decomposition methods with overlap. But much depends on the coupling, which should be chosen carefully. In this paper we will show how a nonoverlapping domain decomposition technique can be incorporated in the correction equation of Jacobi-Davidson, when applied to PDE-type eigenvalue problems. The technique is based on work by Tang and by Tan and Borsboom for linear systems. For a linear system, Tang [20] proposed to enhance the system with duplicates in order to enable an additive Schwarz method with minimal overlap (for more recent publications, see for example [7], [1] and [10]). Tan and Borsboom [19, 18] refined this idea by introducing more flexibility for the unknowns near the interfaces between the subdomains. In this way additional degrees of freedom are created, reflected by coupling equations for the unknowns near the interfaces and their virtual counterparts. Now, the key point is to tune these interface conditions for the given problem in order to improve the speed of convergence of the iterative solution method. This approach is very effective for classes of linear systems stemming from advection-diffusion problems [19, 18].

The operator in the correction equation involves the matrix of the large eigenvalue problem, shifted by an approximate eigenvalue. In the computational process, this shift will become arbitrarily close to the desired eigenvalue. This is a situation that requires special attention when applying the domain decomposition technique.

An eigenvalue problem is a mildly nonlinear problem. Therefore, for the computation of solutions to the eigenvalue problem one needs a nonlinear solver, for instance a Newton method. In fact, Jacobi-Davidson can be seen as an accelerated inexact Newton method [16]. Here we shall, as explained above, combine the Jacobi-Davidson method with a Krylov solver for the correction equation. A preconditioner for the Krylov solver is constructed with domain decomposition. A similar type of nesting, but for general nonlinear systems, can be found in the Newton-Krylov-Schwarz algorithms by Cai, Gropp, Keyes et al. in [4] and [5]. In those two papers the subdomains overlap; as a consequence, no analysis is given there for the tuning of the coupling between subdomains. Furthermore, the eigenvalue problem is nonlinear, but with a specific structure; we will exploit this structure.

Our paper is organized as follows. First, we recall the enhancement technique for domain decomposition in §2. Then, in §3, we discuss the Jacobi-Davidson method. We outline how the technique can be applied to the correction equation and how the projections in the correction equation should be handled. For a model eigenvalue problem we investigate, in §4, in detail how the coupling equations should be chosen for optimal performance. It will turn out that the shift plays a critical role. Section 5 gives a number of illustrative numerical examples.
2 Domain decomposition

2.1 Canonical enhancement of a linear system

Tang [20] has proposed the concept of matrix enhancement, which gives elegant possibilities for the formulation of effective domain decomposition of the underlying PDE problem. The idea is to decompose the grid into nonoverlapping subgrids and to expand the subgrids by introducing additional gridpoints and additional unknowns along the interfaces of the decomposition.

This approach artificially creates some overlap at the level of gridpoints, and the overlap is minimal. For hyperbolic systems of PDEs, this approach was further refined by Tan in [18] and by Tan and Borsboom in [19]. Discretization of the PDE leads to a linear system of equations. Tang duplicates and adjusts those equations in the system that couple across the interfaces. Tan and Borsboom introduce a double set of additional gridpoints along the interfaces in order to keep each equation confined to one expanded subgrid. As a consequence, none of the equations has to be adjusted. They then enhance the linear system with new equations that can be viewed as discretized boundary conditions for the internal boundaries (along the interfaces). Since the latter approach offers more flexibility, it is the one we follow.

We start with the linear nonsingular system

$$B\,y = d,$$ (1)

that results from discretization of a given PDE over some domain. Now we partition the matrix $B$, and the vectors $y$ and $d$ correspondingly:

$$B = \begin{pmatrix} B_{11} & B_{1\ell} & B_{1r} & B_{12}\\ B_{\ell 1} & B_{\ell\ell} & B_{\ell r} & B_{\ell 2}\\ B_{r1} & B_{r\ell} & B_{rr} & B_{r2}\\ B_{21} & B_{2\ell} & B_{2r} & B_{22}\end{pmatrix},\qquad y = \begin{pmatrix} y_1\\ y_\ell\\ y_r\\ y_2\end{pmatrix},\quad d = \begin{pmatrix} d_1\\ d_\ell\\ d_r\\ d_2\end{pmatrix}.$$

The labels are not chosen arbitrarily: we associate with label 1 (respectively 2) elements/operations of the linear system corresponding to subdomain 1 (respectively 2), and with label $\ell$ (respectively $r$) elements/operations corresponding to the left (respectively right) side of the interface between the two subdomains. The central blocks $B_{\ell\ell}$, $B_{\ell r}$, $B_{r\ell}$ and $B_{rr}$ are square matrices of equal size, say $n_i$ by $n_i$. They correspond to the unknowns along the interface. Since the number of unknowns along the interface will typically be much smaller than the total number of unknowns, $n_i$ will be much smaller than $n$, the size of $B$. For a typical discretization, the matrix $B$ is banded and the unknowns are only locally coupled. Therefore it is not unreasonable to assume that $B_{r1}$, $B_{21}$, $B_{12}$ and $B_{2\ell}$ are zero. For this situation, we define the canonical enhancement $B_I$ of $B$, $\underline{y}$ of $y$, and $\underline{d}$ of $d$, by

$$B_I \equiv \begin{pmatrix} B_{11} & B_{1\ell} & B_{1r} & 0 & 0 & 0\\ B_{\ell 1} & B_{\ell\ell} & B_{\ell r} & 0 & 0 & 0\\ 0 & I & 0 & -I & 0 & 0\\ 0 & 0 & -I & 0 & I & 0\\ 0 & 0 & 0 & B_{r\ell} & B_{rr} & B_{r2}\\ 0 & 0 & 0 & B_{2\ell} & B_{2r} & B_{22}\end{pmatrix},\qquad \underline{y} \equiv \begin{pmatrix} y_1\\ y_\ell\\ \widetilde y_r\\ \widetilde y_\ell\\ y_r\\ y_2\end{pmatrix},\quad \underline{d} \equiv \begin{pmatrix} d_1\\ d_\ell\\ 0\\ 0\\ d_r\\ d_2\end{pmatrix}.$$ (2)

One easily verifies that $B_I$ is also nonsingular and that $\underline{y}$ is the unique solution of

$$B_I\,\underline{y} = \underline{d},$$ (3)

with $\underline{y} = (y_1^T, y_\ell^T, y_r^T, y_\ell^T, y_r^T, y_2^T)^T$. With this linear system we can associate a simple iterative scheme for the two coupled subblocks:

$$\begin{pmatrix} B_{11} & B_{1\ell} & B_{1r}\\ B_{\ell 1} & B_{\ell\ell} & B_{\ell r}\\ 0 & I & 0\end{pmatrix}\begin{pmatrix} y_1^{(i+1)}\\ y_\ell^{(i+1)}\\ \widetilde y_r^{(i+1)}\end{pmatrix} = \begin{pmatrix} d_1\\ d_\ell\\ \widetilde y_\ell^{(i)}\end{pmatrix}$$

and

$$\begin{pmatrix} 0 & I & 0\\ B_{r\ell} & B_{rr} & B_{r2}\\ B_{2\ell} & B_{2r} & B_{22}\end{pmatrix}\begin{pmatrix}\widetilde y_\ell^{(i+1)}\\ y_r^{(i+1)}\\ y_2^{(i+1)}\end{pmatrix} = \begin{pmatrix}\widetilde y_r^{(i)}\\ d_r\\ d_2\end{pmatrix}.$$ (4)

These systems can be solved in parallel, and we can view this as a simple additive Schwarz iteration (with no overlap and Dirichlet-Dirichlet coupling). The extra unknowns $\widetilde y_\ell$ and $\widetilde y_r$ in the enhanced vector $\underline{y}$ serve for communication between the subdomains during the iterative solution process of the linear system.

After termination of the iterative process, we have to undo the enhancement. We could simply skip the values of the additional elements, but since these also carry information, one of the alternatives could be the following one. With an approximate solution $\underline{y}^{(i)} = (y_1^{(i)T}, y_\ell^{(i)T}, \widetilde y_r^{(i)T}, \widetilde y_\ell^{(i)T}, y_r^{(i)T}, y_2^{(i)T})^T$ of (3), we may associate the approximate solution $R\underline{y}^{(i)}$ of (1) given by

$$R\underline{y}^{(i)} \equiv \left( y_1^{(i)T},\ \tfrac12\bigl(y_\ell^{(i)} + \widetilde y_\ell^{(i)}\bigr)^T,\ \tfrac12\bigl(y_r^{(i)} + \widetilde y_r^{(i)}\bigr)^T,\ y_2^{(i)T} \right)^T,$$

that is, we simply average the two sets of unknowns that should have been equal to each other at full convergence.
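To fix the bookkeeping, here is a small numpy sketch of the enhancement and of the averaging restriction just described; the function names and the argument conventions are ours, not part of the paper.

```python
import numpy as np

def enhance_solution(y1, yl, yr, y2):
    # (y_1, y_l, y_r, y_2) -> (y_1, y_l, ~y_r, ~y_l, y_r, y_2), cf. (2);
    # for the exact solution the virtual values equal their originals.
    # (The enhanced right-hand side instead gets zeros in positions 3 and 4.)
    return np.concatenate([y1, yl, yr, yl, yr, y2])

def restrict_by_averaging(ybar, n1, ni):
    # Undo the enhancement: average each interface unknown with its
    # virtual counterpart.  n1 = len(y_1) + len(y_l), ni = interface size.
    y1   = ybar[:n1 - ni]
    yl   = ybar[n1 - ni:n1]
    yr_t = ybar[n1:n1 + ni]            # ~y_r
    yl_t = ybar[n1 + ni:n1 + 2*ni]     # ~y_l
    yr   = ybar[n1 + 2*ni:n1 + 3*ni]
    y2   = ybar[n1 + 3*ni:]
    return np.concatenate([y1, 0.5*(yl + yl_t), 0.5*(yr + yr_t), y2])
```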

2.2 Interface coupling matrix

From (2) we see that the interface unknowns and the additional interface unknowns are coupled in a straightforward way by

$$\begin{pmatrix} I & 0\\ 0 & -I\end{pmatrix}\begin{pmatrix} y_\ell\\ \widetilde y_r\end{pmatrix} = \begin{pmatrix} I & 0\\ 0 & -I\end{pmatrix}\begin{pmatrix}\widetilde y_\ell\\ y_r\end{pmatrix},$$ (5)

but, of course, we may replace the coupling matrix by any other nonsingular interface coupling matrix $C$:

$$C \equiv \begin{pmatrix} C_{\ell\ell} & C_{\ell r}\\ -C_{r\ell} & -C_{rr}\end{pmatrix}.$$ (6)

This leads to the following block system:

$$B_C\,\underline{y} \equiv \begin{pmatrix} B_{11} & B_{1\ell} & B_{1r} & 0 & 0 & 0\\ B_{\ell 1} & B_{\ell\ell} & B_{\ell r} & 0 & 0 & 0\\ 0 & C_{\ell\ell} & C_{\ell r} & -C_{\ell\ell} & -C_{\ell r} & 0\\ 0 & -C_{r\ell} & -C_{rr} & C_{r\ell} & C_{rr} & 0\\ 0 & 0 & 0 & B_{r\ell} & B_{rr} & B_{r2}\\ 0 & 0 & 0 & B_{2\ell} & B_{2r} & B_{22}\end{pmatrix}\begin{pmatrix} y_1\\ y_\ell\\ \widetilde y_r\\ \widetilde y_\ell\\ y_r\\ y_2\end{pmatrix} = \underline{d}.$$ (7)

In a domain decomposition context, we will have for the approximate solution $\underline{y}$ that $\widetilde y_r \approx y_r$ and $\widetilde y_\ell \approx y_\ell$. If we know some analytic properties of the local behavior of the true solution $y$ across the interface, for instance smoothness up to some degree, then we may try to identify a convenient coupling matrix $C$ that takes advantage of this knowledge. We preferably want a $C$ such that

$$C_{\ell\ell}\,\widetilde y_\ell + C_{\ell r}\,y_r \approx C_{\ell\ell}\,y_\ell + C_{\ell r}\,y_r \approx 0\quad\text{and}\quad C_{r\ell}\,y_\ell + C_{rr}\,\widetilde y_r \approx C_{r\ell}\,y_\ell + C_{rr}\,y_r \approx 0.$$

In that case (7) is almost decoupled into two independent smaller linear systems (the two subdomain blocks in (7)). We may expect fast convergence for the corresponding additive Schwarz iteration.
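The block structure of (7) translates directly into code. The following sketch (numpy, dense blocks; a sparse assembly would be the realistic variant, and the function itself is ours) follows the unknown ordering $(y_1, y_\ell, \widetilde y_r, \widetilde y_\ell, y_r, y_2)$.

```python
import numpy as np

def assemble_BC(B11, B1l, B1r, Bl1, Bll, Blr,
                Brl, Brr, Br2, B2l, B2r, B22,
                Cll, Clr, Crl, Crr):
    """Enhanced matrix B_C of (7).  Passing Cll = Crr = I and
    Clr = Crl = 0 reproduces the canonical enhancement B_I of (2)."""
    ni = Bll.shape[0]
    m1, m2 = B11.shape[0], B22.shape[0]
    z = lambda p, q: np.zeros((p, q))
    return np.block([
        [B11,        B1l,  B1r,  z(m1, ni),  z(m1, ni), z(m1, m2)],
        [Bl1,        Bll,  Blr,  z(ni, ni),  z(ni, ni), z(ni, m2)],
        [z(ni, m1),  Cll,  Clr,  -Cll,       -Clr,      z(ni, m2)],
        [z(ni, m1), -Crl, -Crr,   Crl,        Crr,      z(ni, m2)],
        [z(ni, m1), z(ni, ni), z(ni, ni), Brl, Brr, Br2],
        [z(m2, m1), z(m2, ni), z(m2, ni), B2l, B2r, B22],
    ])
```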

2.3 Solution of the coupled subproblems

The goal of the enhancement of the matrix of a given linear system, together with a convenient coupling matrix $C$, is to get two smaller, mildly coupled subsystems that can be solved in parallel. Additive Schwarz for the linear system (7) leads to the following iterative scheme:

$$\begin{pmatrix} B_{11} & B_{1\ell} & B_{1r}\\ B_{\ell 1} & B_{\ell\ell} & B_{\ell r}\\ 0 & C_{\ell\ell} & C_{\ell r}\end{pmatrix}\begin{pmatrix} y_1^{(i+1)}\\ y_\ell^{(i+1)}\\ \widetilde y_r^{(i+1)}\end{pmatrix} = \begin{pmatrix} d_1\\ d_\ell\\ g_r^{(i)}\end{pmatrix}\quad\text{and}\quad \begin{pmatrix} C_{r\ell} & C_{rr} & 0\\ B_{r\ell} & B_{rr} & B_{r2}\\ B_{2\ell} & B_{2r} & B_{22}\end{pmatrix}\begin{pmatrix}\widetilde y_\ell^{(i+1)}\\ y_r^{(i+1)}\\ y_2^{(i+1)}\end{pmatrix} = \begin{pmatrix} g_\ell^{(i)}\\ d_r\\ d_2\end{pmatrix},$$ (8)

with

$$g_r^{(i)} \equiv C_{\ell\ell}\,\widetilde y_\ell^{(i)} + C_{\ell r}\,y_r^{(i)},\qquad g_\ell^{(i)} \equiv C_{r\ell}\,y_\ell^{(i)} + C_{rr}\,\widetilde y_r^{(i)}.$$ (9)

The additive Schwarz method can be represented as a block Jacobi iteration method. To see this, consider the matrix splitting $B_C = M - N$, where

$$M \equiv \begin{pmatrix} M_1 & 0\\ 0 & M_2\end{pmatrix},$$

with $M_1$ the first matrix in (8) and $M_2$ the second one. We assume that $C$ is such that $M$ is nonsingular. The approximate solution $\underline{y}^{(i+1)}$ of (7) at step $i+1$ of the block Jacobi method,

$$\underline{y}^{(i+1)} = \underline{y}^{(i)} + M^{-1}\,\underline{r}^{(i)}\quad\text{with}\quad \underline{r}^{(i)} \equiv \underline{d} - B_C\,\underline{y}^{(i)},$$ (10)

corresponds to the approximate solutions at step $i+1$ of the additive Schwarz method. In view of the fact that one wants to have $g_r^{(i)}$ and $g_\ell^{(i)}$ as small as possible in norm, the starting value $\underline{y}^{(0)} = 0$ is convenient, but it is conceivable to construct other starting values for which the two vectors are small in norm (for instance, after a restart of some acceleration scheme).

Jacobi is a one-step method, and the updates from previous steps are discarded. The updates can also be stored in a space $\mathcal V_m$ and be used to obtain more accurate approximations. This leads to a subspace method that, at step $m$, searches for the approximate solution in the space $\mathcal V_m$, which is precisely equal to the Krylov subspace $\mathcal K_m(M^{-1}B_C;\, M^{-1}\underline{d})$. For instance, GMRES [14] finds the approximation in $\mathcal V_m$ with the smallest residual, and may be useful if only a few iterations are to be expected. Krylov subspace methods can be interpreted as accelerators of the domain decomposition method (10). The resulting method can also be seen as a preconditioned Krylov subspace method where, in this case, the preconditioner is based on domain decomposition: the matrix $M$. This preconditioning approach, where a system of the form $M^{-1}B_C\,\underline{x} = \underline{r}^{(0)}$ is solved, is referred to as left preconditioning. Here $\underline{r}^{(0)} \equiv M^{-1}(\underline{d} - B_C\,\underline{y}^{(0)})$ and $\underline{y} = \underline{y}^{(0)} + \underline{x}$. Since $M^{-1}B_C = I - M^{-1}N$, the search subspace $\mathcal V_m$ coincides with the Krylov subspace $\mathcal K_m(M^{-1}N;\, M^{-1}\underline{d})$. The rank of both $N$ and $M^{-1}N$ is equal to the dimension of $C$ which, in this case where $C$ is nonsingular, is $2n_i$. This shows that the dimension of $\mathcal V_m$ is at most $2n_i$. Therefore, the exact solution $\underline{y}$ of (7) belongs to $\mathcal V_m$ for $m \geq 2n_i$, and GMRES finds $\underline{y}$ in at most $2n_i$ steps. (For further discussion see, for instance, [3, §3.2], [2, §2], and [22].)
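In code, one block Jacobi step (10) and its Krylov acceleration look as follows. This is a dense-matrix sketch (scipy's `gmres` and `LinearOperator` are the real APIs; in practice $M_1$ and $M_2$ would be factored once and solved repeatedly, one per subdomain, in parallel).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jacobi_sweeps(BC, M1, M2, d, nsweeps):
    # Repeated step (10): y <- y + M^{-1}(d - B_C y), M = blockdiag(M1, M2).
    k = M1.shape[0]
    y = np.zeros(len(d))
    for _ in range(nsweeps):
        r = d - BC @ y
        y[:k] += np.linalg.solve(M1, r[:k])   # subdomain 1 solve
        y[k:] += np.linalg.solve(M2, r[k:])   # subdomain 2 solve
    return y

def gmres_accelerated(BC, M1, M2, d):
    # Left-preconditioned GMRES: solve M^{-1} B_C y = M^{-1} d (Section 2.3).
    k, n = M1.shape[0], BC.shape[0]
    def Minv(v):
        return np.concatenate([np.linalg.solve(M1, v[:k]),
                               np.linalg.solve(M2, v[k:])])
    A = LinearOperator((n, n), matvec=lambda v: Minv(BC @ v))
    y, info = gmres(A, Minv(d), atol=1e-10)
    return y
```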

2.4 Right preconditioning

We can also use $M$ as a right preconditioner. In that case the solution $\underline{y}$ of (7) is obtained as $\underline{y} = \underline{y}^{(0)} + M^{-1}\underline{x}$, where $\underline{x}$ is solved from

$$B_C M^{-1}\,\underline{x} = \underline{r}^{(0)}\quad\text{with}\quad \underline{r}^{(0)} \equiv \underline{d} - B_C\,\underline{y}^{(0)}.$$ (11)

Right preconditioning has some advantages for domain decomposition. To see this, first note that any vector of the form $N\underline{v}$ vanishes outside the artificial boundary; that is, only the $\widetilde y_\ell$- and $\widetilde y_r$-components of such a vector are nonzero. Since $B_C M^{-1} = I - NM^{-1}$, multiplication by this operator preserves the property of vanishing outside the artificial boundary. Moreover, if $\underline{y}^{(0)} \equiv M^{-1}\underline{d}$, then $\underline{r}^{(0)} = \underline{d} - B_C M^{-1}\underline{d} = NM^{-1}\underline{d}$ vanishes outside the artificial boundary. Therefore, if, for $\underline{y}^{(0)} \equiv M^{-1}\underline{d}$, equation (11) is solved with a Krylov subspace method with an initial guess that vanishes outside the artificial boundary, for instance $\underline{x}^{(0)} = 0$, then all the intermediate vectors also vanish outside the artificial boundary. Consequently, only vectors of size $2n_i$ have to be stored, and the vector updates and dot products are $2n_i$-dimensional operations.

For appropriate $\underline{y}^{(0)}$, the left preconditioned equation can also be formulated in a $2n_i$-dimensional subspace. However, with respect to the standard basis, it is not so easy to identify the corresponding subspace. We will use the $2n_i$-dimensional subspace characterized by right preconditioning, corresponding to the artificial boundary, for the derivation of properties of the eigensystem of the iteration matrix.
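The observation that all iteration vectors vanish outside the artificial boundary can be exploited directly: the Krylov solver only ever sees vectors of length $2n_i$. A sketch (scipy-based; `Minv` applies $M^{-1}$ and `iface` holds the indices of the $\widetilde y_r$- and $\widetilde y_\ell$-positions; both names are assumptions of this sketch):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_right_preconditioned(BC, Minv, d, iface):
    y0 = Minv(d)                        # ybar^(0) = M^{-1} dbar
    r0 = d - BC @ y0                    # = N M^{-1} dbar: zero off-interface
    def matvec(xs):
        x = np.zeros(len(d)); x[iface] = xs
        return (BC @ Minv(x))[iface]    # (I - N M^{-1}) x, restricted
    A = LinearOperator((len(iface), len(iface)), matvec=matvec)
    xs, info = gmres(A, r0[iface], atol=1e-10)   # 2*n_i-dimensional work
    x = np.zeros(len(d)); x[iface] = xs
    return y0 + Minv(x)                 # ybar = ybar^(0) + M^{-1} xbar
```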

2.5 Convergence analysis

As a consequence of (10), the errors $\underline{e}^{(i)} \equiv \underline{y} - \underline{y}^{(i)}$ in the block Jacobi method satisfy

$$\underline{e}^{(i+1)} = (I - M^{-1}B_C)\,\underline{e}^{(i)} = M^{-1}N\,\underline{e}^{(i)}.$$ (12)

Therefore, the convergence rate of Jacobi depends on the spectral properties of the error propagation matrix $M^{-1}N$. These properties also determine the convergence behavior of other Krylov subspace methods. With right preconditioning, we have to work with $\underline{x}^{(i)} - \underline{x}$, which would lead to the error propagation matrix $NM^{-1}$, but this matrix has the same eigenvalues as $M^{-1}N$, so we can analyze either of them with the same result. For the Jacobi iteration, the spectral radius of $M^{-1}N$ (or of $NM^{-1}$ in the right preconditioned situation) should be strictly less than 1. For other methods, such as GMRES, clustering of the eigenvalues of the error propagation matrix around 0 is a desirable property for fast convergence.

The kernel of $N$ forms the space of eigenvectors of $M^{-1}N$ that are associated with eigenvalue 0. Consider an eigenvalue $\lambda \neq 0$ of $M^{-1}N$ with eigenvector $\underline{z} \equiv (z_1^T, z_\ell^T, \widetilde z_r^T, \widetilde z_\ell^T, z_r^T, z_2^T)^T$:

$$M^{-1}N\,\underline{z} = \lambda\,\underline{z}.$$ (13)

Since $N$ maps all components, except for the $\widetilde y_\ell$- and $\widetilde y_r$-ones, to zero, we have that all components of $\lambda M\underline{z}$, except for the $\widetilde y_\ell$- and $\widetilde y_r$-components, are zero. The eigenvalue problem $\lambda M\underline{z} = N\underline{z}$ can be decomposed into two coupled problems:

$$\lambda\begin{pmatrix} B_{11} & B_{1\ell} & B_{1r}\\ B_{\ell 1} & B_{\ell\ell} & B_{\ell r}\\ 0 & C_{\ell\ell} & C_{\ell r}\end{pmatrix}\begin{pmatrix} z_1\\ z_\ell\\ \widetilde z_r\end{pmatrix} = \begin{pmatrix} 0\\ 0\\ g_r\end{pmatrix}\quad\text{and}\quad \lambda\begin{pmatrix} C_{r\ell} & C_{rr} & 0\\ B_{r\ell} & B_{rr} & B_{r2}\\ B_{2\ell} & B_{2r} & B_{22}\end{pmatrix}\begin{pmatrix}\widetilde z_\ell\\ z_r\\ z_2\end{pmatrix} = \begin{pmatrix} g_\ell\\ 0\\ 0\end{pmatrix},$$ (14)

with

$$g_r \equiv C_{\ell\ell}\,\widetilde z_\ell + C_{\ell r}\,z_r,\qquad g_\ell \equiv C_{r\ell}\,z_\ell + C_{rr}\,\widetilde z_r.$$ (15)

In the context of PDEs, the systems in (14) can be interpreted as representing homogeneous partial differential equations with inhomogeneous boundary conditions along the artificial boundary: the left system for domain 1, the right system for domain 2. The values $g_r$ and $g_\ell$ at the artificial boundaries are defined by (15): the value $g_r$ for domain 1 is determined by the solution of the PDE on domain 2, while the solution of the PDE on domain 1 determines the value at the internal boundary of domain 2.

We have the following properties, which help to identify the relevant part of the eigensystem:

(i) $N$ is an $n + 2n_i$ by $n + 2n_i$ matrix. Since $C$ is nonsingular, we have that $\operatorname{rank}(N) = 2n_i$, and it follows that $\dim(\ker(N)) = n$. Hence $\lambda = 0$ is an eigenvalue with geometric multiplicity $n$.

(ii) Since $\operatorname{rank}(N) = 2n_i$, there are at most $2n_i$ nonzero eigenvalues, counted according to algebraic multiplicity.

(iii) If $\lambda$ is a nonzero eigenvalue, then the corresponding components $g_r$ and $g_\ell$ are nonzero. To see this, take $g_r = 0$. Then from (14) we have that $(z_1^T, z_\ell^T, \widetilde z_r^T)^T = 0$. Hence $g_\ell = 0$, so that $\underline{z}$ would be zero.

(iv) If $\lambda$ is an eigenvalue with corresponding nonzero components $g_r$ and $g_\ell$, then $-\lambda$ is an eigenvalue as well, with an eigenvector with components $g_r$ and $-g_\ell$ (use (14) and (15)).

(v) The vector $\overline z_\ell \equiv (z_\ell^T, \widetilde z_r^T)^T$ is linearly independent of $\overline z_r \equiv (\widetilde z_\ell^T, z_r^T)^T$. To prove this, suppose that $\overline z_\ell = \gamma\,\overline z_r$ for some $\gamma$, $\gamma \neq 0$. Then from (14) it follows that $B\check z = 0$, where

$$\check z \equiv (\gamma z_1^T,\ z_\ell^T,\ \widetilde z_r^T,\ z_2^T)^T = (\gamma z_1^T,\ \gamma\widetilde z_\ell^T,\ \gamma z_r^T,\ z_2^T)^T.$$

As $B$ is nonsingular, we have $\check z = 0$. Hence $\underline{z} = 0$ and $\underline{z}$ is not an eigenvector.

Consequently, the value of $\lambda$ cannot be equal to 1. To prove this, suppose that $\lambda = 1$. Then, by combining the last row of the left part and the first row of the right part of (14) with (15), we find that $C(\overline z_\ell - \overline z_r) = 0$. Since $C$ is nonsingular, this implies that $\overline z_\ell = \overline z_r$, i.e. the vectors are linearly dependent. The value $-1$ for $\lambda$ is then excluded on account of property (iv).

The magnitude of $\lambda$ dictates the error reduction. From (14) and (15) it follows that

$$\lambda\,(C_{\ell\ell}\,z_\ell + C_{\ell r}\,\widetilde z_r) = g_r = C_{\ell\ell}\,\widetilde z_\ell + C_{\ell r}\,z_r,\qquad \lambda\,(C_{r\ell}\,\widetilde z_\ell + C_{rr}\,z_r) = g_\ell = C_{r\ell}\,z_\ell + C_{rr}\,\widetilde z_r,$$ (16)

which leads to

$$|\lambda|^2 = \frac{\bigl|(C_{\ell\ell}\,\widetilde z_\ell + C_{\ell r}\,z_r)\,(C_{r\ell}\,z_\ell + C_{rr}\,\widetilde z_r)\bigr|}{\bigl|(C_{\ell\ell}\,z_\ell + C_{\ell r}\,\widetilde z_r)\,(C_{r\ell}\,\widetilde z_\ell + C_{rr}\,z_r)\bigr|}.$$ (17)

From (16) we conclude that multiplying both $C_{\ell\ell}$ and $C_{\ell r}$ by a nonsingular matrix does not affect the value of $\lambda$. Likewise, both $C_{r\ell}$ and $C_{rr}$ may be multiplied by (another) nonsingular matrix with no effect on $\lambda$. This can be exploited to bring the matrices to some convenient form.
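Properties (i)-(iv) are easy to verify numerically for small test matrices. A sketch (numpy; dense eigendecomposition, only sensible for small $n$):

```python
import numpy as np

def nonzero_spectrum(BC, M, tol=1e-8):
    # Eigenvalues of the error propagation matrix M^{-1}N, with N = M - B_C.
    # Expected: at most 2*n_i nonzero eigenvalues, cf. (i)-(ii), occurring
    # in +/- pairs, cf. (iv); none equal to 1 when B is nonsingular, cf. (v).
    T = np.linalg.solve(M, M - BC)
    lam = np.linalg.eigvals(T)
    lam = lam[np.abs(lam) > tol]
    # pairing check: the nonzero spectrum is symmetric under negation
    assert np.allclose(np.sort_complex(lam), np.sort_complex(-lam))
    return lam
```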

The one-dimensional case. We first study the one-dimensional case, because this will not only give some insight in how to reduce $\lambda$, but it will also be useful to control local situations in the two-dimensional case.

In this situation the problem simplifies: the matrices $C_{\ell\ell}$, $C_{\ell r}$, $C_{r\ell}$, and $C_{rr}$ are scalars, and so are the vector parts $z_\ell$, $z_r$, $\widetilde z_\ell$, and $\widetilde z_r$. Because of the freedom to scale the matrices (scalars), we may take $C$ as

$$C = \begin{pmatrix} C_{\ell\ell} & C_{\ell r}\\ -C_{r\ell} & -C_{rr}\end{pmatrix} = \begin{pmatrix} 1 & \alpha_\ell\\ -\alpha_r & -1\end{pmatrix}.$$ (18)

With $\beta_\ell \equiv \widetilde z_r / z_\ell$ and $\beta_r \equiv \widetilde z_\ell / z_r$, we have from (17) that

$$|\lambda|^2 = \left|\frac{\beta_r + \alpha_\ell}{1 + \alpha_\ell\,\beta_\ell}\right|\,\left|\frac{\beta_\ell + \alpha_r}{1 + \alpha_r\,\beta_r}\right|.$$ (19)

The $\beta$-values will be interpreted as local growth factors at the artificial boundary: $\beta_\ell$ shows how $z$ changes at the artificial boundary of the left domain; $\beta_r$ shows the same for the right domain. Note that $\overline z_\ell$ depends linearly on $\overline z_r$ if $\beta_r\beta_\ell = 1$. Since this situation is excluded on account of property (v), we have that $\beta_r\beta_\ell \neq 1$.

The best choice for the minimization of $\lambda$ in (19) is obviously $\alpha_\ell = -\beta_r$ and $\alpha_r = -\beta_\ell$, leading to $\lambda = 0$, which gives optimal damping. The optimal choice for $\alpha_\ell$ and $\alpha_r$ results in a coupling that annihilates the outflow $g_r$ and $g_\ell$ of the two domains. This effectively leads to two uncoupled subdomains: an ideal situation.

More dimensions. In the realistic case of a higher-dimensional interface ($n_i > 1$), there is no choice of scalars $\alpha_\ell$ and $\alpha_r$ (i.e., $C_{\ell\ell} = I$, $C_{\ell r} = \alpha_\ell I$, etc.) that leads to an error reduction matrix with only trivial eigenvalues. But the conclusion that the outflow should be minimized, in some average sense, for the best error reduction is correct here as well. In our application in §4, we will identify coupling matrices that lead to a satisfactory clustering of most of the eigenvalues of the error propagation matrix around 0. We will do so by selecting the $\alpha_r$ and $\alpha_\ell$ as suitable averages of the local growth factors $\beta_r$ and $\beta_\ell$.

3 The eigenvalue problem

3.1 The Jacobi-Davidson method

For the computation of a solution to an eigenvalue problem, the Jacobi-Davidson method [17] is an iterative method that in each iteration:

1. computes an approximation for an eigenpair from a given subspace, using a Rayleigh-Ritz principle,
2. computes a correction for the eigenvector from a so-called correction equation,
3. expands the subspace with the computed correction.

The correction equation mentioned in step 2 is characteristic for the Jacobi-Davidson method; for comparison, the Arnoldi method [1, 13] simply expands the subspace with the residual for the approximate eigenpair, and the Davidson method [6] expands the subspace with a preconditioned residual. The success of the Jacobi-Davidson method depends on how fast good approximate solutions of the correction equation can be obtained, and it is for that purpose that we will try to exploit the enhancement techniques discussed in the previous section. Therefore, we will consider this correction equation in some more detail. We will do this for the standard eigenvalue problem

$$A\,x = \lambda\,x.$$ (20)
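For reference, a compact (and deliberately simplified) Jacobi-Davidson iteration implementing steps 1-3; it targets the largest eigenvalue of a symmetric $A$, and the hook `solve_correction` (our name) is where an approximate correction-equation solver, such as the domain decomposition solver developed below, would plug in. Falling back to $t = r$ reproduces an Arnoldi-like expansion.

```python
import numpy as np

def jacobi_davidson(A, v0, tol=1e-8, maxit=100, solve_correction=None):
    V = (v0 / np.linalg.norm(v0))[:, None]      # search subspace basis
    for _ in range(maxit):
        w, S = np.linalg.eigh(V.T @ (A @ V))    # step 1: Rayleigh-Ritz
        theta, s = w[-1], S[:, -1]              # largest Ritz pair
        u = V @ s
        r = theta * u - A @ u                   # residual, as in (21) below
        if np.linalg.norm(r) < tol:
            return theta, u
        if solve_correction is None:
            t = r                               # Arnoldi-like fallback
        else:
            t = solve_correction(A, theta, u, r)    # step 2
        t = t - V @ (V.T @ t)                   # step 3: orthogonal expansion
        V = np.hstack([V, (t / np.linalg.norm(t))[:, None]])
    return theta, u
```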

Given an approximate eigenpair $(\theta, u)$ (with residual $r \equiv \theta u - Au$) that is close to some wanted eigenpair $(\lambda, x)$, a correction $t$ for the normalized $u$ is computed from the correction equation

$$t \perp u,\qquad (I - uu^*)(A - \theta I)(I - uu^*)\,t = r,$$ (21)

or, in augmented formulation ([15, §3.4]),

$$\begin{pmatrix} A - \theta I & u\\ u^* & 0\end{pmatrix}\begin{pmatrix} t\\ \varepsilon\end{pmatrix} = \begin{pmatrix} r\\ 0\end{pmatrix}.$$ (22)

In many situations it is quite expensive to solve this correction equation accurately, and fortunately it is also not always necessary to do so. A common technique is to compute an approximation for $t$ by a few steps of a preconditioned iterative method, such as GMRES or Bi-CGSTAB. When a preconditioner $M$ for $A - \theta I$ is available, then $(I - uu^*)\,M\,(I - uu^*)$ can be used as a left preconditioner for (21). This leads to the linear system (see [17, §4])

$$P M^{-1}(A - \theta I)\,P\,t = P M^{-1} r,\quad\text{where}\quad P \equiv I - \frac{M^{-1}u\,u^*}{u^* M^{-1} u}.$$ (23)

The operator at the left-hand side of (23) involves two (skew) projections $P$. However, when we start the iterative solution process for (23) with initial guess 0, then $Pt$ may be replaced by $t$ at each iteration of a Krylov iteration method: the projection at the right can be skipped in each step of the Krylov subspace solver. Right preconditioning, which has advantages in the domain decomposition approach, can be carried out in a similar way, with similar reductions in the application of $P$, as we will see in §3.3 below. However, because the formulas with right preconditioning look slightly more complicated, we will present our arguments mainly for left preconditioning.

3.2 Enhancement of the correction equation

We use the domain decomposition approach as presented in §2 to solve the correction equation (21). Again, we will assume that we have two subdomains, and we will use the same notations for the enhanced vectors. With $B \equiv A - \theta I$, this leads to the enhanced Jacobi-Davidson correction equation

$$\underline{t} \perp \underline{u},\qquad (I - \underline{u}\,\underline{u}^*)\,B_C\,(I - \underline{u}\,\underline{u}^*)\,\underline{t} = \underline{r},$$ (24)

with $\underline{u} \equiv (u_1^T, u_\ell^T, 0^T, 0^T, u_r^T, u_2^T)^T$ and, likewise, $\underline{r} \equiv (r_1^T, r_\ell^T, 0^T, 0^T, r_r^T, r_2^T)^T$. The dimension of the zero parts, indicated by 0, is assumed to be the same as the dimension of $u_\ell$ (and $u_r$). To see why this is correct, apply the enhancements of §2 to the augmented formulation (22) of the correction equation, and use the fact that the augmented and the projected form are equivalent. We assume $u$ to be normalized; then $\underline{u}$ is normalized as well. With

$$(I - \underline{u}\,\underline{u}^*)\,M\,(I - \underline{u}\,\underline{u}^*)$$ (25)

as the left preconditioner, we obtain

$$\underline{P} M^{-1} B_C\,\underline{P}\,\underline{t} = \underline{P} M^{-1}\underline{r}\quad\text{with}\quad \underline{P} \equiv I - \frac{M^{-1}\underline{u}\,\underline{u}^*}{\underline{u}^* M^{-1}\underline{u}}.$$ (26)

In comparison with the error propagation (12) of the block Jacobi method for ordinary linear systems, the error propagation matrix $M^{-1}N$ is now embedded in the projections $\underline{P}$. These projections prevent the operator in the correction equation from becoming (nearly) singular: as $\theta$ approximates the wanted eigenvalue $\lambda$ (in the asymptotic case $\theta$ is even equal to $\lambda$), $B$ gets close to singular in the direction of the wanted eigenvector $x$. For ordinary linear systems this possibility is excluded by requiring $B$ to be nonsingular (see remark (v) in §2.5). Here we have to allow a singular $B$. In our analysis of the propagation matrix of the correction equation, for the model problem in §4.3, we will at first ignore the projections. Afterwards, we will justify this, both analytically (§4.3) and numerically (§5.2).
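Applied to a vector, the left preconditioned operator of (23)/(26) costs one application of $M^{-1}$, one of $A - \theta I$, and a single projection (the right projection being skipped for a zero initial guess). A sketch, with `Minv` a user-supplied action of $M^{-1}$ (an assumption of the sketch):

```python
import numpy as np

def projected_operator(A, theta, Minv, u):
    """Actions for (23): returns t -> P M^{-1}(A - theta I) t, and P itself.
    With initial guess 0, the right projection P t may be replaced by t."""
    z = Minv(u)                              # M^{-1} u, computed once
    mu = u @ z                               # u* M^{-1} u
    def P(v):                                # skew projection P of (23)
        return v - z * ((u @ v) / mu)
    def op(t):                               # t -> P M^{-1} (A - theta I) t
        return P(Minv(A @ t - theta * t))
    return op, P

# usage: solve op(t) = P(Minv(r)) for t with GMRES, initial guess 0
```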

Note. We have enhanced the correction equation. Another option is to start with an enhancement of the eigenvalue problem itself. However, this does not result in essential differences ([9]). If the correction equations for these two different approaches are solved exactly, then the approaches are even equivalent.

3.3 Right preconditioning

In §2.4 we showed that, without projections, right preconditioning for domain decomposition leads to an equation that is defined by its behavior on the artificial boundary only. Although the projections slightly complicate matters, the computations for the projected equation can also be restricted to vectors corresponding to the artificial boundary, as we will see below. Moreover, similar to the situation for left preconditioning, right preconditioning requires only one projection per iteration of a Krylov subspace method. In this section, we use the underscore notation for vectors in order to emphasize that they are defined in the enhanced space.

First we analyze the action of the right preconditioned matrix. The inverse on $\underline{u}^\perp$ of the projected preconditioner in (25) is equal to (cf. [15, §7.1.1] and [8])

$$\underline{P} M^{-1} = \left(I - \frac{M^{-1}\underline{u}\,\underline{u}^*}{\underline{u}^* M^{-1}\underline{u}}\right) M^{-1} = M^{-1}\left(I - \frac{\underline{u}\,\underline{u}^* M^{-1}}{\underline{u}^* M^{-1}\underline{u}}\right),$$ (27)

with $\underline{P}$ as in (26). This expression represents the Moore-Penrose inverse of the operator in (25) on the entire space. Note that $\underline{u}^*\underline{P} = 0$ (by definition of $\underline{P}$) and $\underline{u}^* N = 0$ (by definition of $\underline{u}$ and $N$). Therefore, for the operator that is involved in right preconditioning (cf. (11)), we have that

$$(I - \underline{u}\,\underline{u}^*)\,B_C\,(I - \underline{u}\,\underline{u}^*)\,\underline{P} M^{-1} = (I - \underline{u}\,\underline{u}^*)\,B_C\,\underline{P} M^{-1} = (I - \underline{u}\,\underline{u}^*)\,B_C M^{-1}\left(I - \frac{\underline{u}\,\underline{u}^* M^{-1}}{\underline{u}^* M^{-1}\underline{u}}\right) = I - \underline{u}\,\underline{u}^* - (I - \underline{u}\,\underline{u}^*)\,N\underline{P} M^{-1} = I - \underline{u}\,\underline{u}^* - N\underline{P} M^{-1}.$$ (28)

Hence, this operator maps a vector $\underline{v}$ that is orthogonal to $\underline{u}$ to the vector

$$(I - \underline{u}\,\underline{u}^*)\,B_C\,(I - \underline{u}\,\underline{u}^*)\,\underline{P} M^{-1}\underline{v} = \underline{v} - N\underline{P} M^{-1}\underline{v},$$

which is also orthogonal to $\underline{u}$. Therefore, right preconditioning for (24) can be carried out in the following steps (cf. §2.4); a code sketch follows after the list:

1. Compute $\underline{t}^{(0)} \equiv \underline{P} M^{-1}\underline{r}$ and $\underline{r}^{(0)} \equiv N\,\underline{t}^{(0)}$.
2. Compute an (approximate) solution $\underline{s}^{(m)}$ of
$$(I - N\underline{P} M^{-1})\,\underline{s} = \underline{r}^{(0)},$$
with ($m$ steps of) a Krylov subspace method with initial guess 0.
3. Update $\underline{t}^{(0)}$ to the (approximate) solution $\underline{t}$ of (24): $\underline{t} = \underline{t}^{(0)} + \underline{P} M^{-1}\underline{s}^{(m)}$.
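The three steps translate into a short routine. In this sketch (scipy GMRES; `PMinv` applies $\underline{P} M^{-1}$ of (27) and `N` is the splitting matrix, both assumptions of the sketch), the restriction to the artificial boundary of §2.4 is not made explicit, but all Krylov vectors vanish outside it:

```python
from scipy.sparse.linalg import LinearOperator, gmres

def solve_correction_rightprec(N, PMinv, r, m=20):
    t0 = PMinv(r)                            # step 1
    r0 = N @ t0
    n = len(r)
    A = LinearOperator((n, n), matvec=lambda s: s - N @ PMinv(s))
    s, info = gmres(A, r0, maxiter=m, atol=1e-10)   # step 2
    return t0 + PMinv(s)                     # step 3
```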

As in §2.4, the intermediate vectors in the solution process for the equation in step 2 vanish outside the artificial boundary. Therefore, for the solution of the right preconditioned enhanced correction equation, only $2n_i$-dimensional vectors have to be stored, and the vector updates and dot products are also for vectors of length $2n_i$.

4 Tuning of the coupling matrix for a model problem

Now we will address the question whether it is possible to reduce the computing time for the Jacobi-Davidson process by an appropriate choice of the coupling matrix $C$. In §2 we introduced the decomposition of a linear system into two coupled subsystems in an algebraic way. In this section we will demonstrate how knowledge of the physical equations from which the linear system originates can be used for tuning the coupling parameters.

4.1 The model problem

As a model problem we consider the two-dimensional advection-diffusion operator

$$\mathcal L(\widehat\varphi) \equiv a\,\frac{\partial^2\widehat\varphi}{\partial x^2} + b\,\frac{\partial^2\widehat\varphi}{\partial y^2} + u\,\frac{\partial\widehat\varphi}{\partial x} + v\,\frac{\partial\widehat\varphi}{\partial y} + c\,\widehat\varphi,$$ (29)

defined on the open domain $\Omega = (0, \omega_x)\times(0, \omega_y)$ in $\mathbb R^2$, with constants $a > 0$, $b \geq 0$, and $c$, $u$, $v$. We will further assume Dirichlet boundary conditions: $\widehat\varphi = 0$ on the boundary $\partial\Omega$ of $\Omega$. We are interested in some eigenvalue $\widehat\lambda$ and corresponding eigenfunction $\widehat\varphi$ of $\mathcal L$:

$$\mathcal L(\widehat\varphi) = \widehat\lambda\,\widehat\varphi\ \ \text{on}\ \Omega,\qquad \widehat\varphi = 0\ \ \text{on}\ \partial\Omega.$$ (30)

We will use the insights obtained with this simple model problem for the construction of couplings for more complicated partial differential operators.

Discretization. We discretize $\mathcal L$ with central differences, with stepsize $h = (h_x, h_y) = \bigl(\tfrac{\omega_x}{n_x+1}, \tfrac{\omega_y}{n_y+1}\bigr)$ for the second order part and stepsize $2h = (2h_x, 2h_y)$ for the first order part, where $n_x$ and $n_y$ are positive integers:

$$\mathcal L_h(\widehat\varphi) \equiv a\,(\delta_{x,h_x})^2\,\widehat\varphi + b\,(\delta_{y,h_y})^2\,\widehat\varphi + u\,\delta_{x,2h_x}\,\widehat\varphi + v\,\delta_{y,2h_y}\,\widehat\varphi + c\,\widehat\varphi.$$ (31)

The operator $\delta_{x,h_x}$ denotes the central difference operator, defined as

$$\delta_{x,h_x}\,\widehat\varphi(x, y) \equiv \frac{\widehat\varphi(x + \tfrac12 h_x,\, y) - \widehat\varphi(x - \tfrac12 h_x,\, y)}{h_x},$$

and $\delta_{y,h_y}$ is defined similarly. This leads to the discretized eigenvalue problem

$$\mathcal L_h(\varphi) = \lambda\,\varphi\ \ \text{on}\ \Omega_h,\qquad \varphi = 0\ \ \text{on}\ \partial\Omega_h,$$ (32)

where $\Omega_h$ and $\partial\Omega_h$ are the uniform rectangular grids of points $(j_x h_x, j_y h_y)$ in $\Omega$ and on $\partial\Omega$, respectively. We have dropped the hat in order to indicate that the functions are restricted to the appropriate grid, and that the operator $\mathcal L_h$ is restricted to grid functions. The vector $\varphi$ is defined on $\Omega_h\cup\partial\Omega_h$. We use the boundary conditions ($\varphi = 0$ on $\partial\Omega_h$) for the elimination of these values of $\varphi$ from $\mathcal L_h(\varphi) = \lambda\varphi$. Identification of grid functions with vectors, and of operators on grid functions with matrices, leads to an eigenvalue problem as in (20) of dimension $n \equiv n_x\,n_y$: the eigenvector $x$ corresponds to the eigenfunction $\varphi$ restricted to $\Omega_h$. The matrix $A$ corresponds to the operator $\mathcal L_h$ from which the boundary conditions have been eliminated. In our application, we obtain the corresponding vectors by enumeration of the grid points from bottom to top first (i.e., the $y$-coordinates first) and then from left to right ([1, §6.3]).
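The discretization is conveniently assembled from the tensor product structure that §4.3.2 will exploit. A scipy sketch (ours; constant coefficients, $y$-coordinates numbered first, as above):

```python
from scipy.sparse import identity, diags, kron

def advection_diffusion_matrix(nx, ny, wx, wy, a, b, u, v, c):
    """Matrix A of Section 4.1: central differences for (29) on the
    nx-by-ny interior grid, with the Dirichlet values eliminated."""
    hx, hy = wx / (nx + 1), wy / (ny + 1)
    def d2(n, h):   # second-order central difference, stepsize h
        return diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2
    def d1(n, h):   # first-order central difference, stepsize 2h
        return diags([-1, 1], [-1, 1], shape=(n, n)) / (2 * h)
    Lx = a * d2(nx, hx) + u * d1(nx, hx)
    Ly = b * d2(ny, hy) + v * d1(ny, hy)
    # tensor product structure, cf. (38): A = Lx (x) I + I (x) Ly + c I
    return kron(Lx, identity(ny)) + kron(identity(nx), Ly) \
           + c * identity(nx * ny)
```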

In our further analysis, we will switch from one representation to the other (grid function or vector), selecting the representation that is the most convenient at that moment.

4.2 Decomposition of the physical domain

For some $0 < \omega_{x1} < \omega_x$ we decompose the domain $\Omega$ into two subdomains $\Omega_1 \equiv (0, \omega_{x1}]\times(0, \omega_y)$ and $\Omega_2 \equiv (\omega_{x1}, \omega_x)\times(0, \omega_y)$. Let $n_{x1}$ be the number of grid points in the $x$ direction in $\Omega_1$. Then $\Omega_1\cap\Omega_h$ and $\Omega_2\cap\Omega_h$ are an $n_{x1}\times n_y$ and an $n_{x2}\times n_y$ grid, respectively, with $n_{x1} + n_{x2} = n_x$. To number the grid points in the $x$ direction, we use local indices $j_{x1}$, $1 \leq j_{x1}\leq n_{x1}$, and $j_{x2}$, $1\leq j_{x2}\leq n_{x2}$, in $\Omega_1$ and $\Omega_2$, respectively. Because of the 5-point-star discretization, the unknowns at the last row of grid points ($j_{x1} = n_{x1}$) in the $y$ direction in $\Omega_1$ are coupled with those at the first row of grid points ($j_{x2} = 1$) in the $y$ direction in $\Omega_2$, and vice versa. The unknowns for $j_{x1} = n_{x1}$ are denoted by the vector $y_\ell$, and the unknowns for $j_{x2} = 1$ are denoted by $y_r$, just as in §2.

Now we enhance the system with the unknowns $\widetilde y_r$ and $\widetilde y_\ell$, which, in grid terminology, correspond to a virtual new row of gridpoints to the right of $\Omega_1$ and to the left of $\Omega_2$, respectively. These new virtual gridpoints serve as boundary points for the subdomains $\Omega_1$ and $\Omega_2$. See Fig. 1 for an illustration. The vectors $y_\ell$, $y_r$, $\widetilde y_\ell$, and $\widetilde y_r$ are $n_y$-dimensional (the $n_i$ in §2.1 is now equal to $n_y$). The $2n_y$ by $2n_y$ matrix $C$ that couples $y_\ell$, $\widetilde y_r$, $\widetilde y_\ell$, and $y_r$ can be interpreted as discretized boundary conditions of the differential operator at the internal, newly created boundary between $\Omega_1$ and $\Omega_2$ [19, 18]. Note that the internal boundary conditions are explicitly expressed in the total system matrix $B_C$, through $C$, whereas the external boundary conditions have been used to eliminate the values at the external boundary (see §4.1).

4.3 Eigenvectors of the error propagation matrix

We will now analyze the eigensystem of the error reduction matrix $M^{-1}N$ (see §2.5) and discuss appropriate coupling conditions (that is, internal boundary conditions) as represented by the matrix $C$. Here, the matrices $M$ and $N$ are defined for $B \equiv A - \theta I$, as explained in §§2.2-2.3, for some approximate eigenvalue $\theta$ (cf. §§3.1-3.2). The matrix $A$ corresponds to $\mathcal L_h$, as explained in §4.1.

First, we will discuss in §4.3.1 the case of one spatial dimension (i.e., no $y$ variable). The results for the one-dimensional case are easy to interpret. Moreover, since the two-dimensional eigenvalue problem in (30) is a tensor product of two one-dimensional problems, the results for the one-dimensional case can conveniently be used for the analysis, in §4.3.2, of the two-dimensional problem.

FIGURE 1. Decomposition of the domain $\Omega$ into two subdomains $\Omega_1$ and $\Omega_2$. The bullets represent the grid points of the original grid. The circles represent the extra grid points at the internal boundary. The indices $j_x$ and $j_y$ refer to the numbering in the $x$ direction and $y$ direction, respectively, of the grid points: the pair $(j_x, j_y)$ corresponds to the point $(j_x h_x, j_y h_y)$ in $\Omega$. For the numbering of the grid points in the $x$ direction in the two subdomains a local index is used: $j_{x1} = j_x$ in $\Omega_1$ ($0\leq j_{x1}\leq n_{x1}+1$) and $j_{x2} = j_x - n_{x1}$ in $\Omega_2$ ($0\leq j_{x2}\leq n_{x2}+1$). [The figure itself, showing the original grid and the two expanded subgrids, is not reproduced here.]

4.3.1 The one-dimensional case

In this section, we will discuss the case of one spatial dimension: there is no $y$ variable. To simplify notation, we will skip the index $x$ for this case. Suppose that we have an approximate eigenvalue $\theta$ for some eigenvalue of $B$. To simplify formulas, we shift the approximate eigenvalue by $c$: with $\gamma \equiv \theta - c$, the matrix $B$ in §2.5 corresponds to the three-point stencil of the finite difference operator

$$a\,\delta_h^2 + u\,\delta_{2h} - \gamma.$$

For the eigensystem of $M^{-1}N$, we have to solve the systems in (14) for $\widetilde z_r \neq 0$ and $\widetilde z_\ell \neq 0$; that is, we have to compute solutions $\psi_1$ and $\psi_2$ of the discretized PDE on domain 1 and domain 2, respectively (cf. §2.5). The functions $\psi_1$ and $\psi_2$ should satisfy

$$\left[a\,\delta_h^2 + u\,\delta_{2h} - \gamma\right]\psi_p(j_p h) = 0\quad\text{for}\ 1\leq j_p\leq n_p\ \text{and}\ p = 1, 2.$$ (33)

The conditions on the external boundaries imply that $\psi_1(0) = 0$ and $\psi_2(n_2 h + h) = 0$. For the solutions of (33), we try functions of the form $\psi(jh) = \zeta^j$. Then $\zeta$ satisfies

$$\left(1 + \frac{uh}{2a}\right)\zeta - D + \left(1 - \frac{uh}{2a}\right)\zeta^{-1} = 0\quad\text{with}\quad D \equiv 2 + \frac{\gamma h^2}{a}.$$ (34)
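Equation (34) is a quadratic in $\zeta$. The following helper (ours) computes the two roots and will be reused below:

```python
import numpy as np

def characteristic_roots(a, u, h, gamma):
    # roots of (1 + uh/2a) zeta - D + (1 - uh/2a) / zeta = 0, cf. (34),
    # returned with |zeta_+| >= |zeta_-|
    D = 2.0 + gamma * h**2 / a
    zp, zm = np.roots([1.0 + u*h/(2*a), -D, 1.0 - u*h/(2*a)])
    return (zp, zm) if abs(zp) >= abs(zm) else (zm, zp)
```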

Let $\zeta_+$ and $\zeta_-$ denote the roots of this equation, such that $|\zeta_+| \geq |\zeta_-|$. In the regular case where $\zeta_+ \neq \zeta_-$, the solutions $\psi_1$ and $\psi_2$ are, apart from scaling, given by

$$\psi_1(j_1 h) = \zeta_+^{\,j_1} - \zeta_-^{\,j_1}\quad\text{and}\quad \psi_2(j_2 h) = \zeta_-^{\,j_2 - n_2 - 1} - \zeta_+^{\,j_2 - n_2 - 1}.$$

We distinguish three different situations:

(i) Harmonic behavior: $\zeta_- = \overline{\zeta_+} \notin \mathbb R$. If $\rho_0 \in \mathbb R$ and $\phi_0 \in [0, 2\pi)$ are such that $\zeta_+ = \rho_0\exp(i\phi_0)$, then, apart from scaling factors,

$$\psi_1(j_1 h) = \rho_0^{\,j_1}\sin(j_1\phi_0)\quad\text{and}\quad \psi_2(j_2 h) = \rho_0^{\,j_2}\sin\bigl(\phi_0\,(j_2 - n_2 - 1)\bigr).$$

(ii) Degenerate harmonic behavior: $\zeta_+ = \zeta_-$. In this case we have, apart from scaling factors,

$$\psi_1(j_1 h) = j_1\,\zeta_+^{\,j_1}\quad\text{and}\quad \psi_2(j_2 h) = (n_2 + 1 - j_2)\,\zeta_+^{\,j_2}.$$

(iii) Dominating behavior: $|\zeta_+| > |\zeta_-|$. Near the artificial boundary, that is, for $j_1 \approx n_1$ and $j_2 \approx 1$, we have, apart from scaling factors, that

$$\psi_1(j_1 h) = \zeta_+^{\,j_1}\left(1 - \Bigl(\frac{\zeta_-}{\zeta_+}\Bigr)^{j_1}\right) \approx \zeta_+^{\,j_1}\quad\text{and}\quad \psi_2(j_2 h) = \zeta_-^{\,j_2 - n_2 - 1}\left(1 - \Bigl(\frac{\zeta_+}{\zeta_-}\Bigr)^{n_2 + 1 - j_2}\right),$$

so that, apart from a scaling factor again, $\psi_2(j_2 h) \approx \zeta_-^{\,j_2}$. How accurate these approximations are depends on the ratio $|\zeta_-|/|\zeta_+|$ and on the sizes of $n_1$ and $n_2$.

The coupling matrix $C$ is 2 by 2 ($n_i = 1$). We consider a $C$ as in (18). Then, according to (19), the absolute value of the eigenvalue $\lambda$ is given by

$$|\lambda|^2 = \left|\frac{\beta_r + \alpha_\ell}{1 + \alpha_\ell\,\beta_\ell}\right|\,\left|\frac{\beta_\ell + \alpha_r}{1 + \alpha_r\,\beta_r}\right|,$$ (35)

where $\beta_\ell = \psi_1(n_1 h + h)/\psi_1(n_1 h)$ and $\beta_r = \psi_2(0)/\psi_2(h)$: $z_\ell$ in (14) corresponds to $\psi_1(n_1 h)$, $\widetilde z_r$ to $\psi_1(n_1 h + h)$, etcetera.

In the case of dominating behavior (cf. (iii)), we have that $\beta_\ell \approx \zeta_+$ and $\beta_r \approx 1/\zeta_-$. As observed in (iii), the accuracy of this approximation depends on the ratio $|\zeta_-|/|\zeta_+|$ and on the values of $n_1$ and $n_2$. But already for modest (and realistic) values of these quantities we obtain useful estimates, and we may expect a good error reduction for the choice $\alpha_\ell = -1/\zeta_-$ and $\alpha_r = -\zeta_+$. The parameters $\zeta_+$ and $\zeta_-$ would also appear in a local mode analysis: they depend neither on the external boundary conditions nor on the position of the artificial boundary.

The value of $|\lambda|$ in (35) is equal to one when $\beta_r = 1/\beta_\ell$, regardless of $\alpha_\ell$ and $\alpha_r$ (assuming these are real). If we were to follow the local mode approach in the situations (i) and (ii), that is, if we were to estimate $\beta_\ell$ by $\zeta_+$ and $\beta_r$ by $1/\zeta_-$, then we would encounter precisely such values of $\beta_\ell$ and $\beta_r$. In specific situations, we may do better by using the expressions for $\psi_1$ and $\psi_2$ in (i) and (ii); that is, we may find coupling parameters $\alpha_\ell$ and $\alpha_r$ that lead to an eigenvalue $\lambda$ with $|\lambda| < 1$. However, then we need information on the external boundary conditions and the position of the artificial boundary. Certainly in the case of a higher spatial dimension, this is undesirable. Moreover, if $\theta$ is an exact eigenvalue of $A$, then we are in situation (i): the functions $\psi_1$ and $\psi_2$ are multiples of the components on domain 1 and domain 2, respectively, of the eigenfunction, and $\lambda = 1$ (see (v) in §2.5 and the remark in §3.2). In this case there is no value of $\alpha_\ell$ and $\alpha_r$ for which $|\lambda| < 1$.

We define $\varepsilon \equiv (2a + uh)/(2a - uh)$. In order to simplify the forthcoming discussion for two spatial dimensions, observe that, in the case of dominating growth (iii), that is, $\beta_\ell \approx \zeta_+$ and $\beta_r \approx 1/\zeta_-$, (35) implies that

$$|\lambda|^2 \approx \left|\frac{\widetilde e_\ell + \widetilde e}{1 + \widetilde e_\ell\,\widetilde e}\right|\,\left|\frac{\widetilde e_r + \widetilde e}{1 + \widetilde e_r\,\widetilde e}\right|,\quad\text{where}\quad \widetilde e_\ell \equiv \frac{\alpha_\ell}{\sqrt\varepsilon},\quad \widetilde e_r \equiv \sqrt\varepsilon\,\alpha_r,\quad \widetilde e \equiv \sqrt\varepsilon\,\zeta_+.$$ (36)

Here we have used that $\zeta_+\zeta_- = 1/\varepsilon$, which follows from (34).

If, for the Laplace operator (where $u = 0$ and $c = 0$), we use Ritz values for the approximate eigenvalues $\theta$, then $\theta$ takes values between the smallest and the largest eigenvalue of $A$. Hence $\gamma \in (-4a/h^2, 0)$, and the roots $\zeta_+$ and $\zeta_-$ are always complex conjugate. We will see in the next subsections that, for two spatial dimensions, the Ritz values that are of interest lead to a dominant root, also for the Laplace operator, and we will see that local mode analysis is then a convenient tool for the identification of effective coupling parameters.
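Numerically, (35) and the local-mode choice read as follows (a sketch building on `characteristic_roots` above; the numbers are illustrative assumptions). For dominated behavior, the choice $\alpha_\ell = -1/\zeta_-$, $\alpha_r = -\zeta_+$ makes the numerator of (35) vanish:

```python
def damping_factor(beta_l, beta_r, alpha_l, alpha_r):
    # |lambda|^2 of (19)/(35)
    return abs((beta_r + alpha_l) * (beta_l + alpha_r)) \
         / abs((1 + alpha_l * beta_l) * (1 + alpha_r * beta_r))

# local-mode coupling for case (iii): beta_l ~ zeta_+, beta_r ~ 1/zeta_-
zp, zm = characteristic_roots(a=1.0, u=0.0, h=0.01, gamma=5.0)  # D > 2
print(damping_factor(zp, 1/zm, -1/zm, -zp))   # ~ 0 by construction
```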

4.3.2 Two dimensions

Similar to the one-dimensional case, we are interested in functions $\Psi_1$ and $\Psi_2$ such that

$$(\mathcal L_h - \theta)(\Psi_p) = 0\quad\text{on}\ \Omega_h\cap\Omega_p,\quad p = 1, 2,$$ (37)

and that satisfy the external boundary conditions. But now $\Psi_1$ and $\Psi_2$ are functions that depend on both the $x$ and the $y$ direction, whereas the operator $\mathcal L_h$ (here $\mathcal L_h$ is as introduced in §4.1) acts in these two directions. Since the finite difference operator $\delta_{x,h_x}$ acts only in the $x$ direction and $\delta_{y,h_y}$ acts only in the $y$ direction, their actions are independent of each other. Therefore, in this case of constant coefficients¹, we can write the operator $\mathcal L_h - \theta$ in equation (37) as a sum of tensor products of one-dimensional operators:

$$\mathcal L_h - \theta = L_x\otimes I + I\otimes L_y,$$ (38)

where

$$L_x \equiv a\,(\delta_{x,h_x})^2 + u\,\delta_{x,2h_x}\quad\text{and}\quad L_y \equiv b\,(\delta_{y,h_y})^2 + v\,\delta_{y,2h_y} + c - \theta.$$ (39)

$L_x$ and $L_y$ incorporate the action of $\mathcal L_h - \theta$ in the $x$ direction and the $y$ direction, respectively. Since the domain $\Omega$ is rectangular, and since on each of the four sides of $\partial\Omega$ we have the same type of boundary condition, the tensor product decomposition of $\mathcal L_h - \theta$ corresponds to a tensor product decomposition of the matrix $A - \theta I$.

We try to construct solutions of (37) from tensor product functions, that is, from functions $\Psi_p$ of the form

$$\Psi_p(j_{xp} h_x,\, j_y h_y) = \psi_p(j_{xp} h_x)\cdot\varphi(j_y h_y) = (\psi_p\otimes\varphi)(j_{xp} h_x,\, j_y h_y).$$

For $\varphi$ we select eigenfunctions $\varphi^{(l)}$ of the operator $L_y$ that satisfy the boundary conditions in the $y$ direction. Then

$$(\mathcal L_h - \theta)(\Psi_p) = (L_x\psi_p)\otimes\varphi^{(l)} + \psi_p\otimes(L_y\varphi^{(l)}) = \bigl((L_x + \nu^{(l)})(\psi_p)\bigr)\otimes\varphi^{(l)},$$

where $\nu^{(l)}$ is the eigenvalue of $L_y$ that corresponds to $\varphi^{(l)}$.

¹It is sufficient if $a$ and $u$ are constant as functions of $y$, $b$ and $v$ are constant as functions of $x$, and $c$ is a sum of a function of $x$ and a function of $y$.

Apparently, for each eigensolution of the $y$-operator $L_y$, the problem of finding solutions of (37) reduces to a one-dimensional problem as discussed in the previous subsection: find $\psi_p$ such that

$$(L_x + \nu^{(l)})(\psi_p) = \left[a\,(\delta_{x,h_x})^2 + u\,\delta_{x,2h_x} + \nu^{(l)}\right](\psi_p) = 0,$$ (40)

and such that $\psi_p$ satisfies the external boundary conditions in the $x$ direction. To express the dependency of the solutions $\psi_p$ on the selected eigenfunction of $L_y$, we denote the solution by $\psi_p^{(l)}$.

Now, consider matrix pairs $(C_{\ell r}, C_{\ell\ell})$ and $(C_{r\ell}, C_{rr})$ for which the eigenfunctions $\varphi^{(l)}$ of $L_y$ are also eigenvectors:

$$C_{\ell r}\,\varphi^{(l)} = \alpha_\ell^{(l)}\,C_{\ell\ell}\,\varphi^{(l)}\quad\text{and}\quad C_{r\ell}\,\varphi^{(l)} = \alpha_r^{(l)}\,C_{rr}\,\varphi^{(l)}.$$ (41)

Examples of such matrices are scalar multiples of the identity matrix (for instance, $C_{\ell r} = \alpha_\ell^{(l)} I$ and $C_{\ell\ell} = I$), but there are others as well, as we will see in §4.4. For such a $C$ there is a one-to-one correspondence for each function $\varphi^{(l)}$ on the two subdomains: a component in the direction of $\psi_1^{(l)}\otimes\varphi^{(l)}$ on subdomain 1 is transferred by $M^{-1}N$ to a component in the direction of $\psi_2^{(l)}\otimes\varphi^{(l)}$ on subdomain 2, and vice versa. More precisely, if $C$ is such that (41) holds, and if $\Psi^{(l)} \equiv (c_l\,\psi_1^{(l)}, \psi_2^{(l)})^T$ for some scalar $c_l$, then, by construction of $\Psi^{(l)}$, $M$ maps $\Psi^{(l)}\otimes\varphi^{(l)}$ onto a vector that is zero except for the $\widetilde y_\ell$- and $\widetilde y_r$-components (cf. (14)), which are equal to

$$c_l\left(\psi_1^{(l)}(n_{x1}h_x) + \alpha_\ell^{(l)}\,\psi_1^{(l)}(n_{x1}h_x + h_x)\right) C_{\ell\ell}\,\varphi^{(l)}$$ (42)

and

$$\left(\alpha_r^{(l)}\,\psi_2^{(l)}(0) + \psi_2^{(l)}(h_x)\right) C_{rr}\,\varphi^{(l)},$$ (43)

respectively. In its turn, $N$ maps $\Psi^{(l)}\otimes\varphi^{(l)}$ onto a vector that is zero except for the $\widetilde y_\ell$- and $\widetilde y_r$-components (cf. (14) and (15)), which are equal to

$$\left(\psi_2^{(l)}(0) + \alpha_\ell^{(l)}\,\psi_2^{(l)}(h_x)\right) C_{\ell\ell}\,\varphi^{(l)}$$ (44)

and

$$c_l\left(\alpha_r^{(l)}\,\psi_1^{(l)}(n_{x1}h_x) + \psi_1^{(l)}(n_{x1}h_x + h_x)\right) C_{rr}\,\varphi^{(l)},$$ (45)

respectively. By combining (42) with (44), and (43) with (45), respectively, one can check that, for an appropriate scalar $c_l$, $\Psi^{(l)}\otimes\varphi^{(l)}$ is an eigenvector of $M^{-1}N$ with corresponding eigenvalue $\lambda^{(l)}$ such that

$$|\lambda^{(l)}|^2 = \left|\frac{\beta_r^{(l)} + \alpha_\ell^{(l)}}{1 + \alpha_\ell^{(l)}\,\beta_\ell^{(l)}}\right|\,\left|\frac{\beta_\ell^{(l)} + \alpha_r^{(l)}}{1 + \alpha_r^{(l)}\,\beta_r^{(l)}}\right|,$$ (46)

where (here we assumed that $\psi_1^{(l)}(n_{x1}h_x) \neq 0$ and $\psi_2^{(l)}(h_x) \neq 0$)

$$\beta_\ell^{(l)} \equiv \psi_1^{(l)}(n_{x1}h_x + h_x)\,/\,\psi_1^{(l)}(n_{x1}h_x)\quad\text{and}\quad \beta_r^{(l)} \equiv \psi_2^{(l)}(0)\,/\,\psi_2^{(l)}(h_x).$$

Note that the expression for $\lambda^{(l)}$ does not involve the value of $c_l$. From property (iv) in §2.5 we know that $\Psi_-^{(l)}\otimes\varphi^{(l)}$, where $\Psi_-^{(l)} \equiv (c_l\,\psi_1^{(l)}, -\psi_2^{(l)})^T$, is also an eigenvector, with eigenvalue $-\lambda^{(l)}$. As

$$\operatorname{span}\{\Psi^{(l)}, \Psi_-^{(l)}\} = \operatorname{span}\left\{(\psi_1^{(l)}, 0)^T,\ (0, \psi_2^{(l)})^T\right\},$$

the functions $\Psi^{(l)}\otimes\varphi^{(l)}$ and $\Psi_-^{(l)}\otimes\varphi^{(l)}$ are linearly independent, and

$$\operatorname{span}\left\{\Psi^{(1)}\otimes\varphi^{(1)},\ \Psi_-^{(1)}\otimes\varphi^{(1)},\ \ldots,\ \Psi^{(n_y)}\otimes\varphi^{(n_y)},\ \Psi_-^{(n_y)}\otimes\varphi^{(n_y)}\right\} = \operatorname{span}\left\{\begin{pmatrix}\psi_1^{(1)}\\ 0\end{pmatrix}\otimes\varphi^{(1)},\ \begin{pmatrix}0\\ \psi_2^{(1)}\end{pmatrix}\otimes\varphi^{(1)},\ \ldots,\ \begin{pmatrix}\psi_1^{(n_y)}\\ 0\end{pmatrix}\otimes\varphi^{(n_y)},\ \begin{pmatrix}0\\ \psi_2^{(n_y)}\end{pmatrix}\otimes\varphi^{(n_y)}\right\}.$$
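Combining the pieces: for each $l$, the eigenvalue $\nu^{(l)}$ of $L_y$ is known, the dominant root of (34) with $\gamma = -\nu^{(l)}$ estimates the growth factors, and (46) yields $|\lambda^{(l)}|$. A sketch (reusing `characteristic_roots` and `damping_factor` from the earlier sketches; $v = c = 0$ is assumed in the $\nu^{(l)}$ formula, cf. (52) below):

```python
import numpy as np

def mode_damping_estimates(a, b, u, theta, hx, hy, ny, alpha_l, alpha_r):
    lams = []
    for l in range(1, ny + 1):
        nu = -2*b/hy**2 * (1 - np.cos(l*np.pi/(ny + 1))) - theta
        zp, zm = characteristic_roots(a, u, hx, -nu)
        beta_l, beta_r = zp, 1.0/zm        # dominated-growth estimates (iii)
        lams.append(np.sqrt(damping_factor(beta_l, beta_r, alpha_l, alpha_r)))
    return np.array(lams)                  # |lambda^(l)| estimates, l = 1..ny
```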

From this it follows that the total number of linearly independent eigenfunctions of the form $\Psi^{(l)}\otimes\varphi^{(l)}$ is equal to $2n_y$. Note that our approach with tensor product functions leads to the required result: once we know the $n_y$ functions $\varphi^{(1)}, \ldots, \varphi^{(n_y)}$, we can, up to scalars, construct all eigenvectors of $M^{-1}N$ that correspond to case (ii) in §2.5, i.e. the eigenvectors with, in general, nonzero eigenvalues.²

Apparently, the problem of finding the $2n_y$ nontrivial eigensolutions of $M^{-1}N$ breaks up into $n_y$ one-dimensional problems. For each $l$, the matrix $M^{-1}N$ has two eigenvectors $\Psi^{(l)}\otimes\varphi^{(l)}$ and $\Psi_-^{(l)}\otimes\varphi^{(l)}$ with components that, on domain $p$, correspond to a scalar multiple of $\psi_p^{(l)}\otimes\varphi^{(l)}$ ($p = 1, 2$).

Errors will be transferred in the iterative solution process of (7) from one subdomain to the other. These errors can be decomposed into eigenvectors of $M^{-1}N$; that is, on subdomain $p$ they can be expressed as linear combinations of the functions $\psi_p^{(l)}\otimes\varphi^{(l)}$. The component of the error on domain $p$ in the direction of $\psi_p^{(l)}\otimes\varphi^{(l)}$ is transferred in each step of the iteration process precisely to the component in the direction of $\psi_{3-p}^{(l)}\otimes\varphi^{(l)}$ on domain $3-p$. In the case of the block Jacobi method, this transfer damps the component by a factor $|\lambda^{(l)}|$.

Here, as in the case of one spatial dimension (§4.3.1), the size of the eigenvalues $\lambda^{(l)}$ is determined by the growth factors $\beta_\ell^{(l)}$ of $\psi_1^{(l)}$ and $\beta_r^{(l)}$ of $\psi_2^{(l)}$ in (46). In the case of dominated behavior, these factors can adequately be estimated by the dominant root of the appropriate characteristic equation (cf. (34)). The scalars $\alpha_\ell^{(l)}$ and $\alpha_r^{(l)}$, that is, the matrices $C_{\ell r}$ and $C_{r\ell}$, can be tuned to minimize the $|\lambda^{(l)}|$. This will be the subject of the next section. As we explained in §4.3.1, we see no practical way to tune our coefficients in the case of harmonic behavior. However, in our applications the number of eigenvalues that cannot be controlled is limited, as we will see in the next subsection. Except for a few eigenvalues, the eigenvalues of the error reduction matrix $M^{-1}N$ will be small in absolute value: the eigenvalues cluster around 0.

If $\theta$ is equal to an eigenvalue of $A$, then 1 is an eigenvalue of $M^{-1}N$ (see (v) in §2.5 and §3.2) and $M^{-1}B_C$ is singular. However, the projections that have been discussed in §3.2 will remove this singularity. An accurate approximation $\theta$ (a desirable situation) corresponds to a nearly singular matrix $M^{-1}B_C$, and there the projections will also improve the conditioning of the matrix.

4.4 Optimizing the coupling

In this section, we will discuss the construction of a coupling matrix $C$ that leads to a clustering of the eigenvalues $\lambda^{(l)}$ of $M^{-1}N$ around 0. We give details for the Laplace operator. We will concentrate on the error modes $\psi_p^{(l)}\otimes\varphi^{(l)}$ on domain $p$ with dominated growth in the $x$ direction, that is, modes for which $\psi_p^{(l)}$ exhibits the dominated behavior described in (iii) of §4.3.1. For these modes and for $C$ as in (18) and (41), we have that (cf. (36) and (46))

$$|\lambda^{(l)}|^2 \approx \left|\frac{\widetilde e_\ell^{(l)} + \widetilde e^{(l)}}{1 + \widetilde e_\ell^{(l)}\,\widetilde e^{(l)}}\right|\,\left|\frac{\widetilde e_r^{(l)} + \widetilde e^{(l)}}{1 + \widetilde e_r^{(l)}\,\widetilde e^{(l)}}\right|.$$ (47)

Here, for $\varepsilon \equiv (2a + uh_x)/(2a - uh_x)$, the quantities $\widetilde e_\ell^{(l)}$, $\widetilde e_r^{(l)}$ and $\widetilde e^{(l)}$ are defined as in (36): $\widetilde e_\ell^{(l)} \equiv \alpha_\ell^{(l)}/\sqrt\varepsilon$, $\widetilde e_r^{(l)} \equiv \sqrt\varepsilon\,\alpha_r^{(l)}$, and $\widetilde e^{(l)} \equiv \sqrt\varepsilon\,\zeta_+^{(l)}$, where $\zeta_+^{(l)}$ is the dominant root of (34) with $\gamma$ replaced by $-\nu^{(l)}$. Note that, in view of the symmetry in the expression for $|\lambda^{(l)}|$, it suffices to study a $C$ for which $\widetilde e_\ell^{(l)} = \widetilde e_r^{(l)}$.
Let $E$ be the set of $l$'s in $\{1, \ldots, n_y\}$ for which the $\psi_p^{(l)}$ exhibit dominated growth or, equivalently, for which the characteristic equation associated with the operator $L_x + \nu^{(l)}$ in (40) (cf. (34)) has a dominant root $\zeta_+^{(l)}$:

$$E \equiv \bigl\{\, l = 1, \ldots, n_y \;:\; |\zeta_+^{(l)}| > |\zeta_-^{(l)}| \,\bigr\}.$$

²For $\alpha_\ell^{(l)} = -\beta_r^{(l)}$ or $\alpha_r^{(l)} = -\beta_\ell^{(l)}$, one of the nonzero eigenvalues degenerates to a defective zero eigenvalue. But even then this construction yields all nonzero eigenvalues. To avoid a technical discussion we give no details here.

We are interested in the $\widetilde e \equiv \widetilde e_\ell^{(l)} = \widetilde e_r^{(l)}$ for which

$$\rho_{\mathrm{opt}} \equiv \max_{\widetilde e^{(l)}\in\widehat E}\left|\frac{\widetilde e + \widetilde e^{(l)}}{1 + \widetilde e\,\widetilde e^{(l)}}\right|,\quad\text{with}\quad \widehat E \equiv \bigl\{\sqrt\varepsilon\,\zeta_+^{(l)} \;:\; l\in E\bigr\},$$ (48)

is as small as possible.

Simple coupling. For the choice $C_{\ell r} = \sqrt\varepsilon\,\widetilde e\,I$ and $C_{r\ell} = (\widetilde e/\sqrt\varepsilon)\,I$, we can easily analyze the situation. Then $\widetilde e_\ell^{(l)} = \widetilde e_r^{(l)} = \widetilde e$ for all $l$, and we should find the $\widetilde e = \widetilde e_{\mathrm{opt}}$ that minimizes $\max_{\widehat E}\,|(\widetilde e + \widetilde e^{(l)})/(1 + \widetilde e\,\widetilde e^{(l)})|$. We assume that $|uh_x| < 2a$. Note that then the elements of $\widehat E$, i.e. $\sqrt\varepsilon$ times the dominant characteristic roots, are real and larger than 1. Therefore, the two extremal values determine the size of the maximum. This leads to

$$\mu \equiv \min\widehat E\quad\text{and}\quad M \equiv \max\widehat E,$$ (49)

$$\widetilde e_{\mathrm{opt}} = -\,\frac{1 + \mu M + \sqrt{(\mu^2 - 1)(M^2 - 1)}}{\mu + M},\qquad |\widetilde e_{\mathrm{opt}}| > 1,$$ (50)

and

$$\rho_{\mathrm{opt}} = \frac{\sqrt{M^2 - 1} - \sqrt{\mu^2 - 1}}{M\sqrt{\mu^2 - 1} + \mu\sqrt{M^2 - 1}} \geq 0.$$ (51)

Laplace operator. To get a feeling for what we can expect, we interpret and discuss the results for the Laplace operator; that is, we now take $u = v = c = 0$. Furthermore, we concentrate on the computation of (one of) the largest eigenvalues of $\mathcal L$, and we assume that $\theta$ is close to the target eigenvalue $\lambda$. Then

$$\nu^{(l)} = -\frac{2b}{h_y^2}\left(1 - \cos\frac{l\pi}{n_y + 1}\right) - \theta.$$ (52)

First we derive a lower bound for $\mu$ and an upper bound for $M$. For $D^{(l)} \equiv 2 - \frac{h_x^2}{a}\,\nu^{(l)}$ (cf. (34)), we have that $|D^{(l)}| > 2$, or, equivalently, $|\zeta_+^{(l)}| > |\zeta_-^{(l)}|$, if and only if $\nu^{(l)} < 0$. Hence $l_e \equiv \min E$ is the smallest integer $l$ for which $\nu^{(l)} < 0$, and $l_e = \lfloor\widetilde l_e\rfloor + 1$, where

$$\widetilde l_e \equiv \frac{2(n_y + 1)}{\pi}\,\arcsin\left(\frac{h_y}{2}\sqrt{\frac{-\theta}{b}}\right).$$

(The noninteger value $l = \widetilde l_e$ is the solution of $\nu^{(l)} = 0$.) For $h_y \ll 1$, $\widetilde l_e \approx \frac{\omega_y}{\pi}\sqrt{\frac{-\theta}{b}}$.

For an impression of the error reduction that can be achieved with a suitable coupling, we are interested in lower bounds for $\mu - 1$ that are as large as possible. With $\Delta \equiv \tfrac12 D^{(l_e)} - 1$ we have that $\mu - 1 = \Delta + \sqrt{\Delta^2 + 2\Delta}$. Therefore, we are interested in positive lower bounds for $\Delta$:

$$\Delta = \frac{b\,h_x^2}{a\,h_y^2}\left(\cos\frac{\widetilde l_e\,\pi}{n_y + 1} - \cos\frac{l_e\,\pi}{n_y + 1}\right) \approx \frac{b\,h_x^2}{a\,h_y^2}\cdot\frac{(l_e - \widetilde l_e)\,\pi}{n_y + 1}\,\sin\frac{l_e\,\pi}{n_y + 1} \gtrsim \frac{\pi^2 b}{a\,\omega_y^2}\,\widetilde l_e\,(l_e - \widetilde l_e)\,h_x^2,$$

where we have used that $n_y + 1 = \omega_y/h_y$. Here and below, $\sigma \equiv (h_x/h_y)\sqrt{b/a}$ denotes the ratio of the stepsizes.
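Formulas (50)-(51), as reconstructed here, solve the one-parameter min-max problem in closed form; in code (ours):

```python
import numpy as np

def optimal_simple_coupling(mu, M):
    # e_opt and rho_opt of (50)-(51), for Ehat contained in [mu, M], 1 < mu <= M;
    # e.g. optimal_simple_coupling(1.05, 3.0) ~ (-1.248, 0.638)
    s = np.sqrt((mu**2 - 1.0) * (M**2 - 1.0))
    e_opt = -(1.0 + mu*M + s) / (mu + M)
    rho_opt = (np.sqrt(M**2 - 1.0) - np.sqrt(mu**2 - 1.0)) \
            / (M*np.sqrt(mu**2 - 1.0) + mu*np.sqrt(M**2 - 1.0))
    return e_opt, rho_opt
```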

The bound for $\Delta$ depends on the distance of $\widetilde l_e$ to the integers, which can be arbitrarily small. This means that, even for the optimal coupling parameters, the (absolute value of the) eigenvalue $\lambda^{(l_e)}$ can be arbitrarily close to one. Since, for optimal coupling, the damping that we achieve for the smallest $l$ in $E$ is the same as for the largest, it seems undesirable to concentrate on damping the error modes associated with $l_e$ as much as possible. Therefore, we remove $l_e$ from the set $E$ and concentrate on damping the error modes associated with $l$ in $E' \equiv E\setminus\{l_e\}$. For the $\mu$ and $\Delta$ associated with this slightly reduced set $E'$ we have that

$$\mu - 1 \geq \sqrt{2\Delta} \gtrsim \kappa_1\,h_x,\quad\text{where}\quad \kappa_1 \equiv \frac{\pi}{\omega_y}\sqrt{\frac{2\,b\,\widetilde l_e}{a}}.$$ (53)

The lower bound for $\mu - 1$ is sharp for $h \to 0$ with $\sigma$ fixed; i.e., for given $\sigma$, $h = (h_x, h_y)$ is such that $h_x = \sigma\,h_y\sqrt{a/b}$.

An upper bound for $M$ follows from the observations that $-\theta > 0$ and that the cosine takes values between $-1$ and $1$: we have that $D^{(l)} \leq 2(1 + 2\sigma^2)$, so that $M$ remains bounded as $h \to 0$. Put

$$M_0 \equiv \sqrt{\frac{2(M - 1)}{M + 1}}.$$

Then, for $h \to (0, 0)$ such that $\sigma$ is fixed, we have that

$$-\widetilde e_{\mathrm{opt}} = 1 + M_0\sqrt{\kappa_1 h_x} + O(h_x)\quad\text{and}\quad 1 - \rho_{\mathrm{opt}} = \frac{2\sqrt{\kappa_1 h_x}}{M_0} + O(h_x).$$

Here we used that

$$-\widetilde e_{\mathrm{opt}} = 1 + \sqrt{\mu - 1}\,M_0 + O(\mu - 1)\quad\text{and}\quad 1 - \rho_{\mathrm{opt}} = \frac{2\sqrt{\mu - 1}}{M_0} + O(\mu - 1)\quad\text{for}\ \mu \to 1$$

(see (50) and (51)). So, for small stepsizes $h$, the best asymptotic error reduction factor $\rho_{\mathrm{opt}}$ is less than one, with a difference from one that is proportional to the square root of $h_y$.

We tried to cluster the eigenvalues of $M^{-1}B_C$ around one as much as possible. With $\widetilde e = \widetilde e_{\mathrm{opt}}$, at most $2l_e$ eigenvalues may be located outside the disk with radius $\rho_{\mathrm{opt}}$ and center one. After an initial $2l_e$ steps, we may expect the convergence of GMRES to be determined by $\rho_{\mathrm{opt}}$ (provided that the basis of eigenvectors is not too skew). Therefore, as long as $l_e$ is a modest integer, we expect GMRES to converge well in this situation.

We will now argue that, in realistic situations, $l_e$ will be modest compared to the index of the eigenvalue of $A$ in which we are interested. For clearness of argument, we assume the stepsizes to be small: $h \to (0, 0)$ with $\sigma$ fixed; then $\nu^{(l_e)} \approx -b\,(l_e\pi/\omega_y)^2 - \theta$. Suppose that, for some $\tau > 0$, we are interested in the smallest eigenvalue of $A$ that is larger than $-\tau$. Since, in the Jacobi-Davidson process, $\theta$ converges to $\lambda$, $\theta$ will eventually be larger than $-\tau$. We concentrate on this asymptotic situation.³

³In practice, the Jacobi-Davidson process can often be started with an approximate eigenvector that is already close to the wanted eigenvector. Then $\theta$ will be close to $\lambda$. For instance, if one is interested in a number of eigenvalues close to some target value, then the search for the second and following eigenvectors will be started with a search subspace that has been constructed for the first eigenvector. This search subspace will be rich in components in the direction of the eigenvectors that are wanted next (see [8, §3.4]).


Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

On the influence of eigenvalues on Bi-CG residual norms

On the influence of eigenvalues on Bi-CG residual norms On the influence of eigenvalues on Bi-CG residual norms Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic duintjertebbens@cs.cas.cz Gérard Meurant 30, rue

More information

Practical Linear Algebra: A Geometry Toolbox

Practical Linear Algebra: A Geometry Toolbox Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla

More information

1 Review of simple harmonic oscillator

1 Review of simple harmonic oscillator MATHEMATICS 7302 (Analytical Dynamics YEAR 2017 2018, TERM 2 HANDOUT #8: COUPLED OSCILLATIONS AND NORMAL MODES 1 Review of simple harmonic oscillator In MATH 1301/1302 you studied the simple harmonic oscillator:

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation

A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation Zih-Hao Wei 1, Feng-Nan Hwang 1, Tsung-Ming Huang 2, and Weichung Wang 3 1 Department of

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

A Jacobi Davidson Method for Nonlinear Eigenproblems

A Jacobi Davidson Method for Nonlinear Eigenproblems A Jacobi Davidson Method for Nonlinear Eigenproblems Heinrich Voss Section of Mathematics, Hamburg University of Technology, D 21071 Hamburg voss @ tu-harburg.de http://www.tu-harburg.de/mat/hp/voss Abstract.

More information

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system !"#$% "&!#' (%)!#" *# %)%(! #! %)!#" +, %"!"#$ %*&%! $#&*! *# %)%! -. -/ 0 -. 12 "**3! * $!#%+,!2!#% 44" #% &#33 # 4"!#" "%! "5"#!!#6 -. - #% " 7% "3#!#3! - + 87&2! * $!#% 44" ) 3( $! # % %#!!#%+ 9332!

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

The parallel computation of the smallest eigenpair of an. acoustic problem with damping. Martin B. van Gijzen and Femke A. Raeven.

The parallel computation of the smallest eigenpair of an. acoustic problem with damping. Martin B. van Gijzen and Femke A. Raeven. The parallel computation of the smallest eigenpair of an acoustic problem with damping. Martin B. van Gijzen and Femke A. Raeven Abstract Acoustic problems with damping may give rise to large quadratic

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-12 Large-Scale Eigenvalue Problems in Trust-Region Calculations Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug ISSN 1389-6520 Reports of the Department of

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems JAMES H. MONEY and QIANG YE UNIVERSITY OF KENTUCKY eigifp is a MATLAB program for computing a few extreme eigenvalues

More information

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008

More information

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Eun-Joo Lee Department of Computer Science, East Stroudsburg University of Pennsylvania, 327 Science and Technology Center,

More information

Algebraic Multigrid as Solvers and as Preconditioner

Algebraic Multigrid as Solvers and as Preconditioner Ò Algebraic Multigrid as Solvers and as Preconditioner Domenico Lahaye domenico.lahaye@cs.kuleuven.ac.be http://www.cs.kuleuven.ac.be/ domenico/ Department of Computer Science Katholieke Universiteit Leuven

More information

Indefinite and physics-based preconditioning

Indefinite and physics-based preconditioning Indefinite and physics-based preconditioning Jed Brown VAW, ETH Zürich 2009-01-29 Newton iteration Standard form of a nonlinear system F (u) 0 Iteration Solve: Update: J(ũ)u F (ũ) ũ + ũ + u Example (p-bratu)

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Solving Large Nonlinear Sparse Systems

Solving Large Nonlinear Sparse Systems Solving Large Nonlinear Sparse Systems Fred W. Wubs and Jonas Thies Computational Mechanics & Numerical Mathematics University of Groningen, the Netherlands f.w.wubs@rug.nl Centre for Interdisciplinary

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

Preface to the Second Edition. Preface to the First Edition

Preface to the Second Edition. Preface to the First Edition n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic joint work with Gérard

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Construction of a New Domain Decomposition Method for the Stokes Equations

Construction of a New Domain Decomposition Method for the Stokes Equations Construction of a New Domain Decomposition Method for the Stokes Equations Frédéric Nataf 1 and Gerd Rapin 2 1 CMAP, CNRS; UMR7641, Ecole Polytechnique, 91128 Palaiseau Cedex, France 2 Math. Dep., NAM,

More information

A Jacobi Davidson Method with a Multigrid Solver for the Hermitian Wilson-Dirac Operator

A Jacobi Davidson Method with a Multigrid Solver for the Hermitian Wilson-Dirac Operator A Jacobi Davidson Method with a Multigrid Solver for the Hermitian Wilson-Dirac Operator Artur Strebel Bergische Universität Wuppertal August 3, 2016 Joint Work This project is joint work with: Gunnar

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers MAX PLANCK INSTITUTE International Conference on Communications, Computing and Control Applications March 3-5, 2011, Hammamet, Tunisia. Model order reduction of large-scale dynamical systems with Jacobi-Davidson

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Block-tridiagonal matrices

Block-tridiagonal matrices Block-tridiagonal matrices. p.1/31 Block-tridiagonal matrices - where do these arise? - as a result of a particular mesh-point ordering - as a part of a factorization procedure, for example when we compute

More information

IDR(s) Master s thesis Goushani Kisoensingh. Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht

IDR(s) Master s thesis Goushani Kisoensingh. Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht IDR(s) Master s thesis Goushani Kisoensingh Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht Contents 1 Introduction 2 2 The background of Bi-CGSTAB 3 3 IDR(s) 4 3.1 IDR.............................................

More information

The Deflation Accelerated Schwarz Method for CFD

The Deflation Accelerated Schwarz Method for CFD The Deflation Accelerated Schwarz Method for CFD J. Verkaik 1, C. Vuik 2,, B.D. Paarhuis 1, and A. Twerda 1 1 TNO Science and Industry, Stieltjesweg 1, P.O. Box 155, 2600 AD Delft, The Netherlands 2 Delft

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

1 Extrapolation: A Hint of Things to Come

1 Extrapolation: A Hint of Things to Come Notes for 2017-03-24 1 Extrapolation: A Hint of Things to Come Stationary iterations are simple. Methods like Jacobi or Gauss-Seidel are easy to program, and it s (relatively) easy to analyze their convergence.

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition

More information

RANA03-02 January Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis

RANA03-02 January Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis RANA03-02 January 2003 Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis by J.Rommes, H.A. van der Vorst, EJ.W. ter Maten Reports on Applied and Numerical Analysis Department

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions

Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions Bernhard Hientzsch Courant Institute of Mathematical Sciences, New York University, 51 Mercer Street, New

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 21: Sensitivity of Eigenvalues and Eigenvectors; Conjugate Gradient Method Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis

More information

CHAPTER 3. Matrix Eigenvalue Problems

CHAPTER 3. Matrix Eigenvalue Problems A SERIES OF CLASS NOTES FOR 2005-2006 TO INTRODUCE LINEAR AND NONLINEAR PROBLEMS TO ENGINEERS, SCIENTISTS, AND APPLIED MATHEMATICIANS DE CLASS NOTES 3 A COLLECTION OF HANDOUTS ON SYSTEMS OF ORDINARY DIFFERENTIAL

More information

Incompatibility Paradoxes

Incompatibility Paradoxes Chapter 22 Incompatibility Paradoxes 22.1 Simultaneous Values There is never any difficulty in supposing that a classical mechanical system possesses, at a particular instant of time, precise values of

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

Index. higher order methods, 52 nonlinear, 36 with variable coefficients, 34 Burgers equation, 234 BVP, see boundary value problems

Index. higher order methods, 52 nonlinear, 36 with variable coefficients, 34 Burgers equation, 234 BVP, see boundary value problems Index A-conjugate directions, 83 A-stability, 171 A( )-stability, 171 absolute error, 243 absolute stability, 149 for systems of equations, 154 absorbing boundary conditions, 228 Adams Bashforth methods,

More information

Family Feud Review. Linear Algebra. October 22, 2013

Family Feud Review. Linear Algebra. October 22, 2013 Review Linear Algebra October 22, 2013 Question 1 Let A and B be matrices. If AB is a 4 7 matrix, then determine the dimensions of A and B if A has 19 columns. Answer 1 Answer A is a 4 19 matrix, while

More information

Universiteit-Utrecht. Department. of Mathematics. The convergence of Jacobi-Davidson for. Hermitian eigenproblems. Jasper van den Eshof.

Universiteit-Utrecht. Department. of Mathematics. The convergence of Jacobi-Davidson for. Hermitian eigenproblems. Jasper van den Eshof. Universiteit-Utrecht * Department of Mathematics The convergence of Jacobi-Davidson for Hermitian eigenproblems by Jasper van den Eshof Preprint nr. 1165 November, 2000 THE CONVERGENCE OF JACOBI-DAVIDSON

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Contents. 1 Repeated Gram Schmidt Local errors Propagation of the errors... 3

Contents. 1 Repeated Gram Schmidt Local errors Propagation of the errors... 3 Contents 1 Repeated Gram Schmidt 1 1.1 Local errors.................................. 1 1.2 Propagation of the errors.......................... 3 Gram-Schmidt orthogonalisation Gerard Sleijpen December

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

~ g-inverses are indeed an integral part of linear algebra and should be treated as such even at an elementary level.

~ g-inverses are indeed an integral part of linear algebra and should be treated as such even at an elementary level. Existence of Generalized Inverse: Ten Proofs and Some Remarks R B Bapat Introduction The theory of g-inverses has seen a substantial growth over the past few decades. It is an area of great theoretical

More information

Convergence Behavior of a Two-Level Optimized Schwarz Preconditioner

Convergence Behavior of a Two-Level Optimized Schwarz Preconditioner Convergence Behavior of a Two-Level Optimized Schwarz Preconditioner Olivier Dubois 1 and Martin J. Gander 2 1 IMA, University of Minnesota, 207 Church St. SE, Minneapolis, MN 55455 dubois@ima.umn.edu

More information

Optimal Interface Conditions for an Arbitrary Decomposition into Subdomains

Optimal Interface Conditions for an Arbitrary Decomposition into Subdomains Optimal Interface Conditions for an Arbitrary Decomposition into Subdomains Martin J. Gander and Felix Kwok Section de mathématiques, Université de Genève, Geneva CH-1211, Switzerland, Martin.Gander@unige.ch;

More information

Algorithms that use the Arnoldi Basis

Algorithms that use the Arnoldi Basis AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to

More information

Math 504 (Fall 2011) 1. (*) Consider the matrices

Math 504 (Fall 2011) 1. (*) Consider the matrices Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture

More information

Eigenvalues, Eigenvectors, and Diagonalization

Eigenvalues, Eigenvectors, and Diagonalization Week12 Eigenvalues, Eigenvectors, and Diagonalization 12.1 Opening Remarks 12.1.1 Predicting the Weather, Again Let us revisit the example from Week 4, in which we had a simple model for predicting the

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

Iterative Methods for Linear Systems of Equations

Iterative Methods for Linear Systems of Equations Iterative Methods for Linear Systems of Equations Projection methods (3) ITMAN PhD-course DTU 20-10-08 till 24-10-08 Martin van Gijzen 1 Delft University of Technology Overview day 4 Bi-Lanczos method

More information