
Universiteit Utrecht, Department of Mathematics

Maintaining convergence properties of BiCGstab methods in finite precision arithmetic

by Gerard L.G. Sleijpen and Henk A. Van der Vorst

Preprint nr. 861, July 1994; revised version February 1995


MAINTAINING CONVERGENCE PROPERTIES OF BICGSTAB METHODS IN FINITE PRECISION ARITHMETIC

GERARD L.G. SLEIJPEN AND HENK A. VAN DER VORST

Abstract. It is well-known that Bi-CG can be adapted so that hybrid methods with computational complexity almost similar to Bi-CG can be constructed, in which it is attempted to further improve the convergence behavior. In this paper we will study the class of BiCGstab methods. In many applications, the speed of convergence of these methods appears to be determined mainly by the incorporated Bi-CG process, and the problem is that the Bi-CG iteration coefficients have to be determined from the BiCGstab process. We will focus our attention on the accuracy of these Bi-CG coefficients, and on how rounding errors may affect the speed of convergence of the BiCGstab methods. We will propose a strategy for a more stable determination of the Bi-CG iteration coefficients, and by experiments we will show that this indeed may lead to faster convergence.

Key words. Non-symmetric linear systems, iterative solvers, Bi-CG, Bi-CGSTAB, BiCGstab(ℓ).

AMS subject classification. 65F10.

1. Introduction. The BiCGstab methods can be viewed as Bi-CG combined with repeated low degree GMRES processes, like GMRES(1) in Bi-CGSTAB. Therefore, we start with a brief overview of Bi-CG. Bi-CG [7, 12] is an iterative solution method for linear systems

(1.1)    $Ax = b$,

in which the $n \times n$ matrix $A$ is nonsingular. In typical applications $n$ will be large and $A$ will be sparse. For ease of presentation, we assume $A$ and $b$ to be real.

Starting with an initial guess $x_0$ for the solution $x$ and a "shadow" residual $\tilde r_0$ (most often one takes $\tilde r_0 = r_0$), Bi-CG produces sequences of approximations $x_k$, residuals $r_k$, and search directions $u_k$ by

(1.2)    $u_k = r_k - \beta_k u_{k-1}$,  $x_{k+1} = x_k + \alpha_k u_k$,  $r_{k+1} = r_k - \alpha_k A u_k$,

where the Bi-CG coefficients $\alpha_k$ and $\beta_k$ are such that $r_{k+1}$ and $A u_k$ are orthogonal to the shadow Krylov subspace $K_k(A^T, \tilde r_0)$.

In principle we are free to select any basis for the shadow Krylov subspace that suits our purposes. We will represent the basis vectors of this subspace in polynomial form. If $(\psi_k)$ is a sequence of polynomials of degree $k$ with a non-trivial leading coefficient $\gamma_k$, then the vectors $\psi_0(A^T)\tilde r_0, \ldots, \psi_{k-1}(A^T)\tilde r_0$ form a basis of $K_k(A^T, \tilde r_0)$, and we have (see [21] or [23]):

(1.3)    $\beta_k = \frac{\gamma_{k-1}}{\gamma_k}\,\frac{\rho_k}{\sigma_{k-1}}$  and  $\alpha_k = \frac{\rho_k}{\sigma_k}$,  where  $\rho_k := (r_k, \psi_k(A^T)\tilde r_0)$,  $\sigma_k := (A u_k, \psi_k(A^T)\tilde r_0)$.

In finite precision arithmetic, the computed values of the iteration coefficients depend quite critically on the choice of the basis vectors for the shadow Krylov subspace. For example, if we make the straightforward choice $\psi_k(t) = t^k$, then the basis vectors tend to be more and more in the direction of the dominating eigenvector of $A^T$. Depending on how well the dominant eigenvalue of $A^T$ is separated from the others, this would imply that eventually the new vectors in $K_k(A, r_0)$ are effectively made orthogonal only with respect to this dominating eigenvector. This would lead to a new vector $r_{k+1}$ that is almost orthogonal to the $k$-th basis vector for the shadow Krylov subspace, and hence we may expect large relative errors in the new iteration coefficients. Of course, even if all computational steps are done as accurately as possible (in finite precision), eventually the computed Bi-CG coefficients will differ in all digits from the exact ones.

(Mathematical Institute, University of Utrecht, P.O. Box 80.010, NL-3508 TA Utrecht, The Netherlands, sleijpen@math.ruu.nl, vorst@math.ruu.nl.)
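For readers who want to experiment with the recurrences (1.2), the following minimal Python/NumPy sketch implements classical Bi-CG with the bi-orthogonal choice $\psi_k = \varphi_k$, for which $\rho_k$ and $\sigma_k$ reduce to inner products with explicitly recurred shadow vectors. This is our illustration, not code from the paper; the function name, tolerance, and stopping rule are our own choices.

```python
import numpy as np

def bicg(A, b, x0, tol=1e-8, maxit=200):
    # Classical Bi-CG, following (1.2):
    #   u_k = r_k - beta_k u_{k-1},  x_{k+1} = x_k + alpha_k u_k,
    #   r_{k+1} = r_k - alpha_k A u_k,
    # with the analogous shadow recurrences run for A^T.
    x = x0.copy()
    r = b - A @ x
    rt = r.copy()                  # shadow residual r~_0 (common choice: r~_0 = r_0)
    u = np.zeros_like(b)
    ut = np.zeros_like(b)
    rho_old = 1.0
    for k in range(maxit):
        rho = rt @ r               # rho_k of (1.3) for the bi-orthogonal basis
        beta = -rho / rho_old      # sign convention of (1.2): u_k = r_k - beta_k u_{k-1}
        u, ut = r - beta * u, rt - beta * ut
        c = A @ u
        sigma = ut @ c             # sigma_k of (1.3)
        alpha = rho / sigma
        x += alpha * u
        r -= alpha * c
        rt -= alpha * (A.T @ ut)
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_old = rho
    return x, k
```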

As is well known for the Bi-CG process itself (cf. [6, 10, 14]), this can be attributed to a global loss of bi-orthogonality. Since Bi-CG seems to work rather well as long as some local bi-orthogonality is maintained (that means that the local bi-orthogonalization is done accurately enough, see also [9]), we expect to recover the convergence behavior of the incorporated Bi-CG process (in the BiCGstab methods) if we compute the iteration coefficients as accurately as possible. Therefore, we want to avoid all additional perturbations that might be introduced by an unfortunate choice of the polynomial process that is carried out on top of the Bi-CG process. In Sect. 2 we will study the choice of the set $(\psi_k)$, and we will identify polynomials that lead to sufficiently stable computation of the Bi-CG iteration coefficients.

The polynomials $\psi_k$ can also be used for a different purpose in the Bi-CG process. Sonneveld [24] was the first to suggest to rewrite the inner products, not only to avoid the operations with $A^T$, e.g.,

(1.4)    $\rho_k = (r_k, \psi_k(A^T)\tilde r_0) = (\psi_k(A)\,r_k, \tilde r_0) = (r_k^H, \tilde r_0)$,

but also to allow the construction of recursions for the vectors $r_k^H := \psi_k(A)\,r_k$. In this way the polynomials $\psi_k$ can be used for a further reduction of the residual in some norm. In fact Sonneveld suggested the specific choice $\psi_k = \varphi_k$, where $r_k = \varphi_k(A) r_0$ (i.e., $\varphi_k$ is the Bi-CG iteration polynomial), and this led to the well-known CGS method [24]. More recently, other hybrid Bi-CG methods have emerged as well. In all these approaches the Bi-CG iteration vectors are not computed explicitly. In the BiCGstab methods [25, 11, 23] $\psi_k$ is chosen as the product of low degree minimum residual (like GMRES) polynomials. We will study these choices in Sect. 3 in view of our new insights on the choice of the $\psi_k$. It will turn out that the quest for a stable computation of the iteration coefficients is not always in concordance with optimal residual reducing properties.

Conventions. Throughout this paper $\|\cdot\|$ will denote the Euclidean norm.

1. For the methods to be discussed, the residual $r_k$ at the $k$th step is in $K_{k+1}(A, r_0)$ and can be written as a $k$th degree polynomial in $A$ acting on $r_0$: $r_k = \varphi_k(A) r_0$ with $\varphi_k(0) = 1$. In connection with this we will call the polynomial $\varphi_k$ for method M the M-polynomial associated with $A$ and $r_0$ (see, e.g., [26]), or the M($k$)-polynomial if we want to specify the degree. In particular the OR-polynomial corresponds to the situation where the residual is orthogonal with respect to the Krylov subspace $K_k(A, r_0)$ (FOM [18] and GENCG [4] define implicitly OR-polynomials). The MR-polynomial defines the residual that is minimal in the Krylov subspace $K_{k+1}(A, r_0)$ (as in GMRES [19]).

2. We will often use phrases like "reduces the residual" or "small residual". This will always mean that these residual vectors are reduced (or are small) with respect to the Euclidean norm.

2. The Bi-CG iteration coefficients. To understand why the Bi-CG coefficients can be inaccurate, we concentrate first on the $\rho_k$ (see (1.3)). Then, as we will show, the effects of inaccurate computation of $\sigma_k$ can be understood in a similar way. The computed $\rho_k$ will be inaccurate if $r_k$ is nearly orthogonal to $\psi_k(A^T)\tilde r_0$. As is well known, this will happen if the incorporated Lanczos process nearly breaks down (i.e. $(\varphi(A) r_0, \psi_k(A^T)\tilde r_0) \approx 0$ for any polynomial $\varphi$ of exact degree $k$), and this may be attributed to an unlucky choice of $\tilde r_0$. This kind of breakdown may be circumvented by so-called look-ahead techniques (see, e.g., [17, 9]). However, apart from this, a bad choice of $\psi_k$ may lead to a small $\rho_k$ as well.

Here, we only consider how this choice of $\psi_k$ causes further instability in the computation of the iteration coefficients. We assume that the Lanczos process itself does not (nearly) break down. The relative error $\varepsilon_\rho$, due to rounding errors, in $\rho_k$ can be bounded sharply by (see, e.g., [6])

(2.1)    $|\varepsilon_\rho| \lesssim n\,\xi\,\frac{(|r_k|, |\psi_k(A^T)\tilde r_0|)}{|(r_k, \psi_k(A^T)\tilde r_0)|} \le n\,\xi\,\frac{\|r_k\|\,\|\psi_k(A^T)\tilde r_0\|}{|(r_k, \psi_k(A^T)\tilde r_0)|} = \frac{n\,\xi}{\hat\rho_k}$,

where

(2.2)    $\hat\rho_k := \frac{|(r_k, \psi_k(A^T)\tilde r_0)|}{\|r_k\|\,\|\psi_k(A^T)\tilde r_0\|}$,

$\xi$ is the relative machine precision and $n$ the dimension of the problem. For a small relative error we want to have $\hat\rho_k$ (the scaled $\rho_k$) as large as possible.
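The degeneration of the power basis described in the introduction is easy to observe numerically. Since $r_k \perp K_k(A^T, \tilde r_0)$, only the component of $\psi_k(A^T)\tilde r_0$ outside the shadow space contributes to $\rho_k$, so $\hat\rho_k$ of (2.2) is at most the relative size of that component. The sketch below (our illustration; the test matrix with one well-separated dominant eigenvalue is an arbitrary choice) tracks this bound for $\psi_k(t) = t^k$ and the resulting lower estimate of the error bound (2.1).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# diagonal test matrix: dominant eigenvalue 10, the rest clustered in [1, 2]
A = np.diag(np.r_[np.linspace(1.0, 2.0, n - 1), 10.0])
rt = rng.standard_normal(n)          # shadow residual r~_0
xi = np.finfo(float).eps             # relative machine precision

V = [rt / np.linalg.norm(rt)]        # orthonormal basis of K_k(A^T, r~_0)
v = rt.copy()
for k in range(1, 16):
    v = A.T @ v                      # power basis vector (A^T)^k r~_0
    Q = np.column_stack(V)
    w = v - Q @ (Q.T @ v)            # component outside K_k(A^T, r~_0)
    # rho_hat of (2.2) can be no larger than this ratio, so the error
    # bound n*xi/rho_hat of (2.1) is at least n*xi divided by it:
    bound_on_rho_hat = np.linalg.norm(w) / np.linalg.norm(v)
    print(k, bound_on_rho_hat, n * xi / bound_on_rho_hat)
    V.append(w / np.linalg.norm(w))
```

As $k$ grows, the new power-basis vector aligns with the dominant eigenvector, the ratio decays geometrically, and the bound on the attainable accuracy of $\rho_k$ explodes.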

Because of the orthogonality of $r_k$ with respect to $K_k(A^T, \tilde r_0)$ it follows that

(2.3)    $(r_k, \psi_k(A^T)\tilde r_0) = \gamma_k\,(r_k, (A^T)^k \tilde r_0)$,

where $\gamma_k$ is the leading coefficient of $\psi_k$: $\psi_k(t) = \sum_{j \le k} \gamma_j t^j$. Hence,

(2.4)    $\hat\rho_k = \frac{|\gamma_k|}{\|\psi_k(A^T)\tilde r_0\|} \cdot \frac{|(r_k, (A^T)^k \tilde r_0)|}{\|r_k\|}$.

The second quotient in this expression for $\hat\rho_k$ does not depend on $\psi_k$, and therefore the expression for $\hat\rho_k$ is maximal over all polynomials with fixed leading coefficient when $\psi_k$ is an appropriate multiple of the OR-polynomial $\varphi_k^{OR}$ associated with $A^T$ and $\tilde r_0$, that is,

$\varphi_k^{OR}(A^T)\tilde r_0 \perp K_k(A^T, \tilde r_0)$, and $\varphi_k^{OR}(0) = 1$.

The appropriate multiple takes care that the polynomial has leading coefficient $\gamma_k$, but since the expression for $\hat\rho_k$ is invariant under scaling of $\psi_k$, we conclude that the OR-polynomial is the polynomial that makes $\hat\rho_k$ maximal.

For $\sigma_k = (A u_k, \psi_k(A^T)\tilde r_0)$ we can follow the same line of reasoning. Since by construction in Bi-CG the vectors $A u_k$ are orthogonal to lower dimensional shadow Krylov subspaces, it follows that the relative error in $\sigma_k$, due to rounding errors, can be bounded by an expression which is also minimal for $\varphi_k^{OR}$. Note that the Bi-CG iteration coefficients are formed from ratios of $\rho$'s and $\sigma$'s, so that errors in each of these add to the inaccuracy in the $\alpha$'s and $\beta$'s.

Since Bi-CG is designed to avoid all the work for the construction of an orthogonal basis, it would be expensive to construct the $\varphi_k^{OR}$ as the basis generating polynomials for the shadow Krylov subspace. In Bi-CG a compromise is made by taking $\psi_k = \varphi_k$, which creates a bi-orthogonal basis. At least for (near) symmetric matrices this is (almost) optimal.

3. Small residuals and accurate Bi-CG coefficients. Now the question arises whether we can select polynomials $\psi_k$, with $\psi_k(0) = 1$, that satisfy the following requirements:
1. $\psi_k$ leads to sufficiently stable Bi-CG coefficients,
2. $\psi_k$ can be used to further reduce the Bi-CG residual, that is, $r_k^H = \psi_k(A) r_k$ is (much) smaller in norm than $r_k$,
3. the $\psi_k$ can be (implicitly) formed by short recurrences.

3.1. Some choices for the polynomial $\psi_k$. A choice that comes close, in many relevant cases, to fulfilling all these requirements is the one suggested by Sonneveld [24]: $\psi_k = \varphi_k$, which leads to CGS. However, there are two disadvantages associated with this choice. The first is that there is no reason why $\varphi_k$ should lead to a further reduction (and often it does not); the second is that all irregularities in the convergence behavior of Bi-CG are magnified in CGS (although the negative effects of this on the accuracy of the approximated solution can be largely reduced [5, 22]). An obvious alternative is to select $\psi_k$ as the product of first degree MR (or GMRES) polynomials, which leads to Bi-CGSTAB [25], or as $k/\ell$ factors of $\ell$th degree MR-polynomials [21, 23].

An obvious problem, when replacing the Bi-CG polynomial by other polynomials which are chosen so as to reduce the residual vector, is that these polynomials do not necessarily lead implicitly to an optimal basis for the shadow Krylov subspace. We will first concentrate on suitable (inexpensive) polynomial methods that help to further reduce the Bi-CG residual. For hybrid Bi-CG methods, where $r_k^H = \psi_k(A) r_k$, we have that $\rho_k$ is computed as $\rho_k = (r_k^H, \tilde r_0)$ (cf. (1.4)), and in finite precision arithmetic there will be a relative error $\varepsilon_H$ in the evaluation of this inner product that can be bounded as

(3.1)    $|\varepsilon_H| \lesssim n\,\xi\,\frac{(|r_k^H|, |\tilde r_0|)}{|(r_k^H, \tilde r_0)|} \le n\,\xi\,\frac{\|r_k^H\|\,\|\tilde r_0\|}{|(r_k^H, \tilde r_0)|} = \frac{n\,\xi}{\hat\rho_k^H}$,

where

(3.2)    $\hat\rho_k^H := \frac{|(r_k^H, \tilde r_0)|}{\|r_k^H\|\,\|\tilde r_0\|}$.
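Property (1.4) is what makes the hybrid methods transpose-free: the coefficient $\rho_k$ can be evaluated from vectors that are propagated with $A$ only. The toy check below (our illustration, with an arbitrary stand-in polynomial $\psi$) verifies the identity $(r, \psi(A^T)\tilde r) = (\psi(A) r, \tilde r)$ numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
r, rt = rng.standard_normal(n), rng.standard_normal(n)

def psi(M, v):
    # arbitrary stand-in polynomial psi(t) = 1 - 0.3 t + 0.02 t^2
    return v - 0.3 * (M @ v) + 0.02 * (M @ (M @ v))

lhs = r @ psi(A.T, rt)    # Bi-CG representation: (r_k, psi_k(A^T) r~_0)
rhs = psi(A, r) @ rt      # hybrid representation (1.4): (psi_k(A) r_k, r~_0)
print(lhs, rhs)           # agree up to rounding
```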

Fig. 3.1. Amplification effects of GMRES(1) and FOM(1). [Figure: $\hat s$ is the multiple of $A\hat r$ with $\|\hat s\| = \|\hat r\|$; $r^{MR} = \hat r - \hat\omega\,\hat s$ and $r^{OR} = \hat r - (1/\hat\omega)\,\hat s$, where $\hat\omega := (\hat r, A\hat r)/(\|\hat r\|\,\|A\hat r\|)$.]

For similar reasons as in Sect. 2, we have

(3.3)    $\hat\rho_k^H = \frac{|\gamma_k|}{\|\psi_k(A)\,r_k\|} \cdot \frac{|(A^k r_k, \tilde r_0)|}{\|\tilde r_0\|}$,

where $\gamma_k$ is the leading coefficient of $\psi_k$. Again, the OR-polynomial $\varphi_k^{OR}$ (associated with $A$ and $r_k$), which minimizes $\|\psi_k(A) r_k\| / |\gamma_k|$, would lead to a maximal value for $\hat\rho_k^H$. Of course, there is no guarantee or reason why this polynomial should also maximize the expression for $\hat\rho_k$ in the Bi-CG representation (2.2) of the inner product, but given the fact that we want the polynomial to act directly on $r_k$, this is the best we can do.

In the BiCGstab methods the $\psi_k$ is chosen as a product of MR-polynomials, for obvious reasons. The MR-polynomial (associated with $A$ and $r_k$) minimizes $\|r_k^H\| = \|\psi_k(A) r_k\|$ over all polynomials $\psi_k$ for which $\psi_k(0) = 1$, and hence seems to be more appropriate for creating small residuals $r_k^H$. But it would be too expensive to perform $k$ steps of FOM or GMRES in order to compute $r_k^H = \psi_k(A) r_k$ (using $r_k$ as initial residual): we not only strive for a large reduction but also for inexpensive steps. The product of MR(1)-polynomials (i.e. MR-polynomials of degree 1) as in Bi-CGSTAB is a compromise between the wish for small residuals and inexpensive steps. In view of our discussion above, however, we might also consider OR(1)-polynomials in order to achieve more stability in some cases, giving up some of the reduction.

With OR(1)-polynomials $(1 - \omega_k t)$ for which $(I - \omega_k A)\hat r \perp \hat r$, where $\hat r := \psi_{k-1}(A)\,r_k$, we compromise between (locally) more accurate coefficients and inexpensive steps. Although these polynomials occasionally cure stagnation of Bi-CGSTAB (see also [15]), they may also amplify residuals, which again leads to inaccurate approximations (as explained by (7) in [22]) or even to overflow. We will now show that, if the angle between $\hat r$ and $A\hat r$ is more than 45 degrees (i.e. $|(\hat r, A\hat r)| \le \frac{1}{\sqrt 2}\,\|\hat r\|\,\|A\hat r\|$), then the OR(1)-polynomial locally amplifies the residual, while the MR(1)-polynomial leads to a smaller value for $\hat\rho^H$ (cf. (3.3)).

This property is easily explained with Fig. 3.1, where $\hat s$ is a scalar multiple of $A\hat r$, scaled such that $\|\hat s\| = \|\hat r\|$. The residual $r^{OR}$ is obtained by applying the OR(1)-polynomial to $\hat r$; $r^{MR}$ results from the MR(1)-polynomial. Clearly, in the situation as sketched in the figure, $\|r^{MR}\| < \|\hat r\| < \|r^{OR}\|$, while scaling the polynomials such that the leading coefficient is identical (in the figure $\|\hat r\| / \|A\hat r\|$) changes the order: $\|r^{OR}\| / |1/\hat\omega| < \|\hat s\| < \|r^{MR}\| / |\hat\omega|$.
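The 45-degree criterion is easy to check in code. The sketch below (our own illustration) performs one MR(1) (GMRES(1)) step and one OR(1) (FOM(1)) step from the same residual $\hat r$ and reports the cosine of the angle between $\hat r$ and $A\hat r$; whenever this cosine is below $1/\sqrt 2$ in absolute value, the OR(1) residual is amplified ($\|r^{OR}\| = \tan(\theta)\,\|\hat r\| > \|\hat r\|$) while the MR(1) residual still shrinks.

```python
import numpy as np

def mr1_or1(A, r):
    # One MR(1) step: minimize ||r - w A r|| over w  (GMRES(1));
    # one OR(1) step: make r - w A r orthogonal to r (FOM(1)).
    Ar = A @ r
    w_mr = (Ar @ r) / (Ar @ Ar)
    w_or = (r @ r) / (r @ Ar)
    cos_angle = (r @ Ar) / (np.linalg.norm(r) * np.linalg.norm(Ar))
    return r - w_mr * Ar, r - w_or * Ar, cos_angle

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 40))
r = rng.standard_normal(40)
r_mr, r_or, c = mr1_or1(A, r)
print(np.linalg.norm(r), np.linalg.norm(r_mr), np.linalg.norm(r_or), c)
# For |c| < 1/sqrt(2) (angle > 45 degrees): ||r_or|| > ||r||, but ||r_mr|| < ||r||.
```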

We have to be careful with such amplifications when they are extremely large or when they occur in a consecutive number of iteration steps (as will be the case in a stagnation phase of the process). In such cases, either of the two choices may slow down the convergence: the OR(1)-polynomials because of amplifying $r_k^H$; the MR(1)-polynomials because of shrinking $\hat\rho_k^H$ and thus affecting the accuracy of the Bi-CG coefficients.

Apparently, Bi-CGSTAB may also be expected to converge poorly if in a number of consecutive steps the angle between $\hat r$ and $A\hat r$ is more than 45 degrees. Especially when, in Bi-CGSTAB, GMRES(1) stagnates in a consecutive number of steps, it is important to have accurate Bi-CG coefficients, because in such a case any convergence is to be expected only from the Bi-CG part of Bi-CGSTAB. Unfortunately, this is precisely the situation where the MR(1)-polynomials spoil the accuracy of the coefficients, while the OR(1)-polynomials spoil the accuracy of the approximations or lead to overflow. Therefore, we have to find a cure by other modifications. We will suggest other choices for $\omega_k$ (as in (3.20), see also [15]) which occasionally cure these problems. However, they often cannot completely prevent poor convergence. One reason is that, with these first degree factors, we are (implicitly) building a power basis for the shadow Krylov subspace (if the $\omega_k$ are close to each other), and we have seen in Sect. 1 that this is highly undesirable.

We may expect better convergence results by performing a composition of $\ell$ steps, using minimizing polynomials of degree $\ell$, with $\ell > 1$: that is, by taking $\psi_k$ as a product of MR($\ell$)-polynomials as in BiCGstab($\ell$) (see [11, 21, 23]), provided that the MR($\ell$)-polynomials lead to significant reductions. We will consider this approach in much more detail.

In BiCGstab($\ell$) the polynomial $\psi_k$ is constructed as a product of polynomials of degree $\ell$: for $k = m\ell$, $\psi_k = p_{m-1} \cdots p_0$, where each $p_j$ is of degree $\ell$. To investigate what properties these polynomial factors $p_j$ should have, we consider $k = m\ell$, concentrate on $p_m$, and we define

(3.4)    $\hat r := \psi_k(A)\,\varphi_{k+\ell}(A)\,r_0 = \psi_k(A)\,r_{k+\ell}$.

In BiCGstab($\ell$) the vector $\hat r$ is computed explicitly in the Bi-CG part of the algorithm. The new residual $r_{k+\ell}^H$ will be $r_{k+\ell}^H := p_m(A)\hat r$ (for implementational details, see [21, 23]).

3.2. Minimizing polynomials of low degree. For the derivations and results in this section we assume exact arithmetic. However, we expect that they also have some validity in finite precision arithmetic, and as we will see in Sect. 4, the experiments do largely confirm our expectations.

As before, we wish to maximize $\hat\rho^H$ for polynomials $p_m$ of degree $\ell$ for which $p_m(0) = 1$. As we have seen in Sect. 2, this means that we have to restrict ourselves to polynomials with a fixed leading coefficient $\gamma_\ell$. In order to make our formulas more readable, we define

(3.5)    $\|r\|_* := \frac{\|r\|}{|\gamma_\ell|}$,  where  $r := p(A)\hat r$  and  $p(t) = \sum_{j=0}^{\ell} \gamma_j t^j$.

In order to have $\hat\rho_{k+\ell}^H$,

(3.6)    $\hat\rho_{k+\ell}^H = \frac{|(r, \tilde r_0)|}{\|r\|\,\|\tilde r_0\|}$,

(approximately) maximal, we should have that

(3.7)    $\|r\|_* = \frac{\|p(A)\hat r\|}{|\gamma_\ell|}$ is (approximately) minimal.

As is well known, $r^{MR}$ solves

(3.8)    $\min\{\,\|r\|\;:\; r = p(A)\hat r,\ \mathrm{degree}(p) \le \ell,\ p(0) = 1\,\}$

if and only if $r^{MR} \perp A\hat r,\ A^2\hat r,\ \ldots,\ A^\ell\hat r$.
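In exact arithmetic, the characterization (3.8) turns the computation of $r^{MR}$ into a small least-squares problem on the Krylov matrix $[A\hat r\,|\,\cdots\,|\,A^\ell\hat r]$. The sketch below is our illustration of exactly this; it uses the numerically fragile explicit power basis for brevity, whereas an actual BiCGstab($\ell$) implementation proceeds differently (cf. [21, 23]).

```python
import numpy as np

def mr_ell_residual(A, r_hat, ell):
    # r^MR of (3.8): minimize ||p(A) r_hat|| over p with p(0) = 1, degree(p) <= ell.
    K = np.empty((len(r_hat), ell))
    v = r_hat
    for j in range(ell):               # columns A r_hat, A^2 r_hat, ..., A^ell r_hat
        v = A @ v
        K[:, j] = v
    c, *_ = np.linalg.lstsq(K, r_hat, rcond=None)
    return r_hat - K @ c               # orthogonal to A r_hat, ..., A^ell r_hat

# example: ell = 2 on a random system
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 30))
r = rng.standard_normal(30)
print(np.linalg.norm(r), np.linalg.norm(mr_ell_residual(A, r, 2)))
```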

In a similar way, one can show that $r^{OR}$ solves (cf. (3.7))

(3.9)    $\min\{\,\|r\|_*\;:\; r = p(A)\hat r,\ \mathrm{degree}(p) \le \ell,\ p(0) = 1\,\}$

if and only if $r^{OR} \perp \hat r,\ A\hat r,\ \ldots,\ A^{\ell-1}\hat r$.

The residual $r^{MR}$ is the $\ell$th residual of a minimal residual method (as GMRES), and $r^{OR}$ is the $\ell$th residual of an orthogonal residual method (as FOM), each for the problem $Ax = b$ with the same initial residual $\hat r$. The following theorem compares how good $r^{MR}$ is in maximizing $\hat\rho^H$ and how well $r^{OR}$ helps to reduce $\hat r$. A method like ORTHODIR produces explicitly an orthonormal basis to which Theorem 3.1 can be applied. We will call a sequence of vectors $\hat r_0, \ldots, \hat r_\ell$ a Krylov basis for $K_{\ell+1}(A, \hat r_0)$ if $\hat r_0, \ldots, \hat r_j$ and $\hat r_0, \ldots, A^j\hat r_0$ span the same space for each $j = 0, \ldots, \ell$.

Theorem 3.1. Let $\hat r_1, \ldots, \hat r_{\ell-1}$ be an orthonormal Krylov basis for $K_{\ell-1}(A, A\hat r)$. Let the vectors $\tilde e_0$ and $\tilde e_\ell$ be obtained by orthogonalizing $\hat r$ and $A^\ell\hat r$ with respect to $K_{\ell-1}(A, A\hat r)$:

$\tilde e_0 := \hat r - \sum_{j=1}^{\ell-1} (\hat r, \hat r_j)\,\hat r_j$  and  $\tilde e_\ell := A^\ell\hat r - \sum_{j=1}^{\ell-1} (A^\ell\hat r, \hat r_j)\,\hat r_j$.

Let $\varrho$ be defined as

(3.10)    $\varrho := \frac{(\tilde e_\ell, \tilde e_0)}{\|\tilde e_\ell\|\,\|\tilde e_0\|}$.

Then

(3.11)    $r^{MR} = \tilde e_0 - \varrho\,\frac{\|\tilde e_0\|}{\|\tilde e_\ell\|}\,\tilde e_\ell$  and  $r^{OR} = \tilde e_0 - \frac{1}{\varrho}\,\frac{\|\tilde e_0\|}{\|\tilde e_\ell\|}\,\tilde e_\ell$,

(3.12)    $\|r^{MR}\| = \sqrt{1 - \varrho^2}\,\|\tilde e_0\|$  and  $\|r^{OR}\| = \frac{\sqrt{1 - \varrho^2}}{|\varrho|}\,\|\tilde e_0\|$,

(3.13)    $\|r^{MR}\|_* = \frac{\sqrt{1 - \varrho^2}}{|\varrho|}\,\|\tilde e_\ell\|$  and  $\|r^{OR}\|_* = \sqrt{1 - \varrho^2}\,\|\tilde e_\ell\|$.

Proof. One may easily verify that $s := \tilde e_0 - \mu\,\tilde e_\ell = p(A)\hat r$ for some polynomial $p$ of degree $\ell$ with leading coefficient $-\mu$ and $p(0) = 1$. Moreover, since $\tilde e_0$ and $\tilde e_\ell$ are orthogonal to $A\hat r, \ldots, A^{\ell-1}\hat r$, the vector $s$ is orthogonal to these vectors as well. Define $\lambda := \|\tilde e_0\| / \|\tilde e_\ell\|$. With $\mu = \varrho\lambda$, $s$ is also orthogonal to $\tilde e_\ell$, and consequently orthogonal to $A^\ell\hat r$. By (3.8), $s = r^{MR}$ for this $\mu = \varrho\lambda$. Similarly, the choice $\mu = \lambda/\varrho$ makes $s$ orthogonal to $\tilde e_0$, which implies that $s = r^{OR}$ (cf. (3.9)), and this completes the proof of (3.11). The expressions in (3.12) for the norms of the optimal residuals follow from Pythagoras' theorem: $\tilde e_\ell \perp r^{MR}$ and $\tilde e_0 \perp r^{OR}$. Combining these expressions with the values for the leading coefficient ($-\mu = -\varrho\lambda$ and $-\mu = -\lambda/\varrho$, respectively) gives (3.13).

The vector $\tilde e_0$ in the theorem is precisely the residual in the $(\ell-1)$-th step of a minimal residual method. Therefore, if $\zeta$ is the residual reduction in step $\ell$ by this MR method (i.e. $\zeta := \|r_\ell^{MR}\| / \|r_{\ell-1}^{MR}\|$), then $|\varrho| = \sqrt{1 - \zeta^2}$ and we have the following corollary.

Corollary 3.2.

(3.14)    $\frac{\|r_\ell^{MR}\|}{\|r_\ell^{OR}\|} = |\varrho|$  and  $\frac{\|r_\ell^{MR}\|_*}{\|r_\ell^{OR}\|_*} = \frac{1}{|\varrho|}$,  where  $\sqrt{1 - \varrho^2} = \frac{\|r_\ell^{MR}\|}{\|r_{\ell-1}^{MR}\|}$.

Property (3.14) can also be found in [3], [26] (there, the authors focussed on GMRES, while our formulation follows the ORTHODIR approach).
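Theorem 3.1 is constructive, and the construction is short enough to state in code. The sketch below (our illustration, under the theorem's exact-arithmetic assumptions) builds the orthonormal Krylov basis, the vectors $\tilde e_0$, $\tilde e_\ell$ and the scalar $\varrho$, and returns both $r^{MR}$ (the GMRES($\ell$) residual) and $r^{OR}$ (the FOM($\ell$) residual) via (3.11).

```python
import numpy as np

def mr_or_via_theorem(A, r_hat, ell):
    # Orthonormal Krylov basis q_1, ..., q_{ell-1} of K_{ell-1}(A, A r_hat)
    Q = []
    v = A @ r_hat
    for _ in range(ell - 1):
        for q in Q:
            v = v - (q @ v) * q
        q = v / np.linalg.norm(v)
        Q.append(q)
        v = A @ q
    w = r_hat.copy()
    for _ in range(ell):               # w = A^ell r_hat
        w = A @ w
    e0, el = r_hat.copy(), w
    for q in Q:                        # orthogonalize r_hat and A^ell r_hat against the basis
        e0 = e0 - (q @ e0) * q
        el = el - (q @ el) * q
    rho = (el @ e0) / (np.linalg.norm(el) * np.linalg.norm(e0))    # (3.10)
    lam = np.linalg.norm(e0) / np.linalg.norm(el)
    r_mr = e0 - rho * lam * el         # (3.11): minimal residual
    r_or = e0 - (lam / rho) * el       # (3.11): orthogonal residual
    return r_mr, r_or, rho
```

For $\ell = 1$ this reduces to the GMRES(1)/FOM(1) comparison of Fig. 3.1, and one can verify (3.12) and (3.14) numerically: the returned residuals satisfy $\|r^{MR}\| / \|r^{OR}\| = |\varrho|$.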

3.3. Global effects. The concept of Lanczos breakdown is well known. We will see that it can be translated in terms of angles between the Krylov subspaces and their "shadow subspaces". It is less obvious what near-breakdown of the Lanczos process could mean. We will say that no near-breakdown takes place if the angle between the Krylov subspace $K_k := K_k(A, r_0)$ and the shadow Krylov subspace $\tilde K_k := K_k(A^T, \tilde r_0)$ is uniformly (in $k$) sufficiently smaller than $\pi/2$:

(3.15)    $\inf_k\,\cos\angle(K_k, \tilde K_k) = \inf_k\ \Big(\inf_{v \in K_k}\ \sup_{\tilde v \in \tilde K_k}\ \frac{|(v, \tilde v)|}{\|v\|\,\|\tilde v\|}\Big) > 0$.

In particular, we then have that

(3.16)    $\inf_k\ \sup_{\tilde v \in \tilde K_{k+1}}\ \frac{|(r_k, \tilde v)|}{\|r_k\|\,\|\tilde v\|} > 0$,

and, since $r_k \perp \tilde K_k$, we see that $\rho_k = (r_k, \psi_k(A^T)\tilde r_0) \ne 0$ (no breakdown of the Lanczos process). For hybrid Bi-CG methods as CGS, Bi-CGSTAB and BiCGstab($\ell$) we translate property (3.16) as

(3.17)    $\hat\rho_k^{opt} := \sup\Big\{\ \frac{|(\varphi(A) r_k, \tilde r_0)|}{\|\varphi(A) r_k\|\,\|\tilde r_0\|}\ \Big|\ \varphi\ \text{polynomial of degree}\ k\ \Big\} > 0$, for all $k$.

We may expect to obtain the most accurate $\rho_k$ by using the $k$th OR-polynomial $\varphi_k^{OR}$ of $k$ steps of an OR method with initial residual $r_k$. Then $\rho_k$ may be expected to be endowed with a relative error of size $n\xi / \hat\rho_k^{opt}$. In practice, if we use another polynomial $\psi_k$ of degree $k$, we may expect an error in $\rho_k$ that is larger than $n\xi / \hat\rho_k^{opt}$ by a multiplicative factor

(3.18)    $\frac{\|\psi_k(A)\,r_k\|_*}{\|\varphi_k^{OR}(A)\,r_k\|_*}$

(the norms scaled by the absolute values of the respective leading coefficients). If this factor is large, say $\approx 1/(n\xi)$, the scalar $\rho_k$, and hence the Bi-CG coefficients, cannot be expected to have any correct digit. It is difficult to analyze this factor as a function of $k$, since the initial residual $r_k$ changes for each $k$. Clearly, since $\psi_{k+\ell} = p_m\,\psi_k$, the factor (3.18) is built up per sweep:

(3.19)    $\frac{\|\psi_{k+\ell}(A)\,r_{k+\ell}\|_*}{\|\varphi_{k+\ell}^{OR}(A)\,r_{k+\ell}\|_*} = \frac{1}{\varrho_m}\ \frac{\|p_m^{OR}(A)\,\hat r\|_*}{\|\varphi_{k+\ell}^{OR}(A)\,r_{k+\ell}\|_*}$,  where  $\frac{1}{\varrho_m} := \frac{\|p_m(A)\,\hat r\|_*}{\|p_m^{OR}(A)\,\hat r\|_*}$

and $p_m^{OR}$ is the OR($\ell$)-polynomial associated with $A$ and $\hat r$ (for $p_m$ the MR($\ell$)-polynomial, $\varrho_m$ is the $|\varrho|$ of Theorem 3.1, cf. (3.14)). In our discussion, we assume that the factor in (3.18) has at least the order of magnitude of the product $\prod_{j=1}^m 1/\varrho_j$ (with $k = m\ell$) of the factors $1/\varrho_j$ (in (3.19)) per sweep. The conclusion that we should avoid $1/\varrho_j \approx 1/(n\xi)$ seems to be supported by results from numerical experiments (cf. Sect. 4).

3.4. Discussion. The OR($\ell$)-polynomial is the best choice for obtaining more accurate Bi-CG coefficients, but the MR($\ell$)-polynomial will do almost as well if there is a significant residual reduction at the $\ell$th step of GMRES (with initial residual $\hat r$) (cf. (3.13) and (3.14)). In that case the effect of OR($\ell$) is practically equivalent to the effect of MR($\ell$) (cf. (3.11) and (3.12)), and MR($\ell$)-polynomials may be slightly preferable. However, if GMRES (with initial residual $\hat r$) does not reduce the residual well at the $\ell$th step, then $|\varrho| \approx 0$ (cf. (3.14)), and hence the MR($\ell$)-polynomial may be expected to lead to an inaccurate $\rho_{k+\ell}$, while the OR($\ell$)-polynomial will enlarge the residual significantly. Likewise, if in a consecutive number of sweeps of BiCGstab($\ell$) the factor $|\varrho|$ is less than $\frac{1}{2}\sqrt 2$, say, then the choice of MR($\ell$)-polynomials may lead to inaccurate Bi-CG coefficients as well, because of an accumulation of rounding errors in $\rho$ (and $\sigma$). On the other hand, the OR($\ell$)-polynomials may lead to unacceptably large residuals, or even to overflow, after a consecutive number of amplifications of the residual.
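Returning for a moment to the near-breakdown measure (3.15): for moderate $n$ and $k$ it can be computed directly as the smallest singular value of $Q_k^T \tilde Q_k$, where $Q_k$ and $\tilde Q_k$ are orthonormal bases of the two Krylov subspaces. The diagnostic sketch below is our own and is not part of any BiCGstab implementation; it is far too expensive for production use.

```python
import numpy as np

def min_cos_angle(A, r0, rt0, k):
    # Smallest cosine over K_k(A, r0) of the best match in K_k(A^T, rt0),
    # cf. (3.15); values near zero signal Lanczos (near-)breakdown.
    def krylov_onb(M, v, k):
        Q = np.empty((len(v), k))
        w = v / np.linalg.norm(v)
        for j in range(k):
            Q[:, j] = w
            w = M @ w
            w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # Gram-Schmidt
            w = w / np.linalg.norm(w)
        return Q
    Q, Qt = krylov_onb(A, r0, k), krylov_onb(A.T, rt0, k)
    return np.linalg.svd(Q.T @ Qt, compute_uv=False).min()
```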

We propose to make a compromise by choosing some intermediate between the OR($\ell$)- and the MR($\ell$)-polynomial in case of a poor reduction by the latter one. This choice is inspired by equation (3.11):

(3.20)    $r_{k+\ell}^H = \tilde e_0 - \hat\varrho\,\frac{\|\tilde e_0\|}{\|\tilde e_\ell\|}\,\tilde e_\ell$,  where  $\hat\varrho := \frac{\varrho}{|\varrho|}\,\max(|\varrho|, \kappa)$,

with $\tilde e_0$, $\tilde e_\ell$, and $\varrho$ as in Theorem 3.1 and $\kappa \in [0, 1)$. In Sect. 3.4.1 we will present some strategies for the choice of $\kappa$ and we will comment on computational details. However, although this approach often helps to cure our instability problems, it is not a panacea which can always circumvent an occasionally poor convergence of BiCGstab($\ell$). In such a situation, increasing the value of $\ell$ may be an alternative.

We may expect better convergence for BiCGstab($\ell$) if we increase the value for $\ell$, and we will argue why. It helps for our discussion to compare BiCGstab($\ell$) with Bi-CGSTAB (= BiCGstab(1)); that is, we compare 1 sweep of BiCGstab($\ell$) with $\ell$ sweeps of Bi-CGSTAB. In $\ell$ sweeps of Bi-CGSTAB, $\ell$ times an MR(1)-polynomial is applied (each time for a different starting vector: $\psi_{k+j}(A)\,r_{k+j+1}$, $j = 0, \ldots, \ell-1$). By selecting higher degree MR($\ell$)-polynomials, as in BiCGstab($\ell$), we hope to profit for two different reasons:

1. One sweep of GMRES($\ell$) may be expected to result in a better residual reduction than $\ell$ steps of GMRES(1).
Remark. In the case that GMRES(1) reduces well, we do not have to fear a loss of speed of convergence due to inaccurate Bi-CG coefficients. However, since GMRES may accelerate (superlinear convergence behavior), we may expect to obtain a smaller residual $r_k^H$ with BiCGstab($\ell$) than with Bi-CGSTAB.

2. $\ell$ steps of GMRES(1) contribute $\ell$ times to a decrease of $\hat\rho$ (hence contributing $\ell$ times to increasingly larger rounding errors in $\rho$), while one sweep of GMRES($\ell$) contributes only once; the decreasing effect in each single step of $\ell$ steps of GMRES(1) may be expected to be comparable to, or worse than, the effect of only one sweep with GMRES($\ell$).
Remark. If the $\ell$th step of GMRES does not reduce well (i.e. we have a small $|\varrho|$), then the MR($\ell$)-polynomial "amplifies" the inaccuracy in $\rho_{k+\ell}$ by $1/|\varrho|$ (in comparison with the OR($\ell$)-polynomial). In such a situation we may not expect to obtain a significant reduction by any of the $\ell$ steps of GMRES(1). It is even worse: in $\ell$ steps of Bi-CGSTAB, we should expect an "amplification" by the GMRES(1) steps of the inaccuracy in $\rho_{k+\ell}$ by a factor like $(1/|\varrho|)^\ell$ or more. More specifically, in a stagnation phase of the iterative method, it is more likely to have rather accurate Bi-CG coefficients when we use BiCGstab($\ell$) than with Bi-CGSTAB.

In our discussion, the expected superlinear convergence behavior of GMRES plays a rather important role. However, as is well known, GMRES may as well converge slowly or even stagnate for quite a while in any phase of the iteration process. In order to profit from a possible good reduction by the MR($\ell-1$)-polynomial, in cases where the MR($\ell$)-polynomial gives only a poor additional reduction, we may use the modification as suggested in (3.20).

3.4.1. Computational details. The computation of the leading coefficient and of $\varrho$, as suggested in Theorem 3.1 and formula (3.20), can be done with relatively inexpensive operations involving $(\ell+1)$-vectors only. If $\hat r = \hat r_0, \hat r_1, \ldots, \hat r_\ell$ is a Krylov basis of $K_{\ell+1}(A, \hat r)$ and $R := [\hat r_0 \,|\, \cdots \,|\, \hat r_\ell]$, then $\tilde e_0 = R\tilde\gamma_0$ and $\tilde e_\ell = R\tilde\gamma_\ell$ for some $\tilde\gamma_0, \tilde\gamma_\ell \in \mathbb{R}^{\ell+1}$. Consider the $(\ell+1) \times (\ell+1)$-matrix $V := R^T R$, the inner product $\langle \tilde\gamma, \tilde\gamma' \rangle := \tilde\gamma^T V \tilde\gamma'$ and the norm $|\tilde\gamma| := \sqrt{\langle \tilde\gamma, \tilde\gamma \rangle}$. Then (cf. (3.20))

(3.21)    $r_{k+\ell}^H = R\,\Big(\tilde\gamma_0 - \hat\varrho\,\frac{|\tilde\gamma_0|}{|\tilde\gamma_\ell|}\,\tilde\gamma_\ell\Big)$,  with  $\hat\varrho := \frac{\varrho}{|\varrho|}\,\max(|\varrho|, \kappa)$,  $\varrho = \frac{\langle \tilde\gamma_\ell, \tilde\gamma_0 \rangle}{|\tilde\gamma_\ell|\,|\tilde\gamma_0|}$,

and some scalar $\kappa$ in $[0, 1)$.
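For $\ell = 1$, formula (3.20) amounts to a one-line change in Bi-CGSTAB's choice of $\omega$. The sketch below is our rendering of this strategy (variable names and the surrounding usage comment are hypothetical; $\kappa = 0.7$ is the value used in the experiments of Sect. 4).

```python
import numpy as np

def omega_modified(r_hat, Ar_hat, kappa=0.7):
    # (3.20) with ell = 1: keep the GMRES(1) coefficient omega = rho*||r||/||Ar||,
    # but never let the cosine rho of the angle between r_hat and A r_hat
    # fall below kappa in absolute value.
    nr, nAr = np.linalg.norm(r_hat), np.linalg.norm(Ar_hat)
    rho = (Ar_hat @ r_hat) / (nr * nAr)            # cf. (3.10)
    rho_hat = np.sign(rho) * max(abs(rho), kappa)
    return rho_hat * nr / nAr

# Inside a Bi-CGSTAB sweep one would replace (names here are hypothetical)
#   omega = (As @ s) / (As @ As)
# by
#   omega = omega_modified(s, As)
# and then update x += omega * s and r = s - omega * As as usual.
```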

With $\kappa = 0$ we have the MR($\ell$)-polynomial, which gives optimal reduction of $\hat r$ with respect to $\|\cdot\|$. For a very small $\varrho$, or for a consecutive number of non-large $\varrho$ (say, $|\varrho| < \frac{1}{2}\sqrt 2$), $\kappa = 0$ may result in inaccurate Bi-CG coefficients, since, as compared with the OR-polynomial, the MR-polynomial amplifies $\hat r$ with respect to $\|\cdot\|_*$ by $1/|\varrho|$ (i.e. $\|r^{MR}\|_* = \|r^{OR}\|_* / |\varrho|$): if in $m$ sweeps of BiCGstab($\ell$) each $\varrho$ satisfies $|\varrho| < \frac{1}{2}\sqrt 2$, we may expect an amplification of the relative rounding error by $(\sqrt 2)^m$ or more (comparing local effects of OR- and MR-polynomials and assuming that local differences accumulate). With $\hat\varrho = 1/\varrho$ we have the OR($\ell$)-polynomial. Although this polynomial gives optimal reduction of $\hat r$ with respect to $\|\cdot\|_*$, it may lead to large residuals. With $\kappa > 0$ we may avoid large amplifications of the inaccuracy in the Bi-CG coefficients.

The choice $\kappa = 1$ distributes the negative effects of a small $\varrho$ equally amongst residual reduction (amplifying $\|r^{MR}\|$ by at most $\sqrt 2$: $\|r_{k+\ell}^H\| \le \sqrt 2\,\|r^{MR}\|$) and rounding error amplification (amplifying $\|r^{OR}\|_*$ by at most $\sqrt 2$: $\|r_{k+\ell}^H\|_* \le \sqrt 2\,\|r^{OR}\|_*$). For $\ell = 1$, this choice amplifies both quantities whenever $|\varrho| < \frac12$ (that is, $\|r_{k+1}^H\| > \|\hat r\|$ and $\|r_{k+1}^H\|_* > \|A\hat r\|$), and reduces both quantities for other values of $\varrho$.

A significant amplification of the rounding error in a few steps need not worry us as long as the coefficients are still rather accurate (in 16-digit arithmetic a cumulative amplification by, say, $10^8$ may still be very acceptable). Therefore, the best choice of $\kappa$ will depend on the length of the stagnation phase: using (3.20), we expect a cumulative rounding error amplification by at most $(\sqrt{1 + \kappa^{-2}})^{L/(2\ell)}$, where $L$ is the number of MVs in the stagnation phase (again assuming that the local differences accumulate). Therefore, for larger $\ell$ a smaller $\kappa$ may be acceptable. Moreover, in this upper bound we did not take into account the reducing effect of the OR-polynomial itself: the amplification factor $1/|\varrho|$ of the MR-polynomial may be harmless if $\|r^{OR}\|_*$ is small. For instance, for $\ell = 1$ we have that $\|r^{MR}\|_* < \|A\hat r\|$ whenever $|\varrho| > \frac{1}{2}\sqrt 2$.

3.5. Minimizing polynomials and $\sigma_k$. Following the arguments for $\rho_k$, using the concept of near-breakdown of the LU-decomposition, we can see that, for an accurate $\sigma_k$, the $\|A\,\psi_k(A)\,u_k\|$ should be as small as possible. In the $m$th sweep, where $\psi_k = p_{m-1} \cdots p_0$ is given ($k = m\ell$) and $p_m$ is constructed, $\|A\,p_m(A)\,\hat u\|$, with $\hat u := \psi_k(A)\,u_{k+\ell}$, should be as small as possible. Unfortunately, the construction of $p_m$ is linked to the residual $\hat r$. Note that we do not have this problem in the Bi-CG process. For Bi-CG, the polynomial that will give the most accurate coefficients $\alpha_k$ and $\beta_k$ is linked to the initial shadow residual $\tilde r_0$ (see Sect. 2): $\|\psi_k(A^T)\tilde r_0\| / |\gamma_k|$ should be minimal.

The following observation links the scaled $\sigma_k$ (cf. (2.2)) to the scaled $\rho_k$ and the residuals $r_k^H$, and gives some theoretical support to the strategy of concentrating on non-small scaled $\rho$'s only. Since $A u_k = (r_k - r_{k+1})/\alpha_k$ and $(\psi_k(A)\,r_{k+1}, \tilde r_0) = 0$, we have

(3.22)    $\hat\sigma_k^H := \frac{|(A\,\psi_k(A)\,u_k, \tilde r_0)|}{\|A\,\psi_k(A)\,u_k\|\,\|\tilde r_0\|} = \frac{|(\psi_k(A)\,r_k, \tilde r_0)|}{\|\psi_k(A)\,r_k - \psi_k(A)\,r_{k+1}\|\,\|\tilde r_0\|}$.

Hence, with $r_k^H := \psi_k(A)\,r_k$, we can bound the scaled $\sigma_k$ by the scaled $\rho_k$ and the growth of the residual in step $k$:

(3.23)    $\frac{1}{\hat\sigma_k^H} \le \frac{\|r_k^H\|\,\|\tilde r_0\|}{|(r_k^H, \tilde r_0)|}\,\Big(1 + \frac{\|\psi_k(A)\,r_{k+1}\|}{\|r_k^H\|}\Big) = \frac{1}{\hat\rho_k^H}\,\Big(1 + \frac{\|\psi_k(A)\,r_{k+1}\|}{\|\psi_k(A)\,r_k\|}\Big)$.

If our strategy to choose the polynomial $\psi_k$ prevents $\hat\rho_k^H$ from becoming too small, and the residuals cannot grow much in one step, then our strategy works for $\sigma_k$ as well. One can show that (see [14])

(3.24)    $\frac{\|r_{k+1}\|}{\|r_k\|} \le \frac{\sigma_+^{(k)}}{\sigma_-^{(k)}}\,\frac{1}{\tau_{k+1}}$,  with  $\tau_{k+1} := \frac{|(r_{k+1}, \tilde r_{k+1})|}{\|r_{k+1}\|\,\|\tilde r_{k+1}\|}$,

where $\sigma_-^{(k)}$ and $\sigma_+^{(k)}$ are maximal and minimal, respectively, such that

(3.25)    $\sigma_-^{(k)} \le \sup_{\tilde v \in \tilde K_k}\ \frac{|(A v, \tilde v)|}{\|v\|\,\|\tilde v\|} \le \sigma_+^{(k)}$  for all $v \in K_{k+1}$.

The $\sigma_\pm^{(k)}$ are the singular values of $A$ projected on the Krylov subspace $K_{k+1}$ and the quotient space $\mathbb{R}^n / \tilde K_{k+1}$. The LU-decomposition in Bi-CG breaks down in step $k$ if and only if the smallest singular value $\sigma_-^{(k)}$ is zero. By scaling by $\sigma_+^{(k)}$, we obtain $\sigma_-^{(k)} / \sigma_+^{(k)}$, which can be viewed as a quantification of the near-breakdown of the LU-decomposition. ((3.25) can be related to the Babuska-Brezzi condition, well-known in mixed finite element theory, cf. [2].)

Apparently, (3.24) tells us that the growth of the Bi-CG residuals at step $k$ can be bounded in terms of the "distance" $\tau_{k+1}$ to Lanczos breakdown and the distance $\sigma_-^{(k)} / \sigma_+^{(k)}$ to LU-decomposition breakdown. If the Bi-CG process incorporated in the BiCGstab process suffers from near-breakdown neither of the Lanczos process nor of the LU-decomposition, then we expect (cf. (3.23)) the quotient $\|\psi_k(A)\,r_{k+1}\| / \|\psi_k(A)\,r_k\|$ to be of moderate size.
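Before turning to the experiments, the following sketch spells out the recipe of Sect. 3.4.1 (our illustration; an actual BiCGstab($\ell$) code would reuse quantities it already has): given a Krylov basis $R = [\hat r\,|\,A\hat r\,|\,\cdots\,|\,A^\ell\hat r]$ of the current sweep, all the work for $\tilde e_0$, $\tilde e_\ell$, $\varrho$ and the modified residual (3.21) happens in the small Gram matrix $V = R^T R$.

```python
import numpy as np

def modified_sweep_residual(R, kappa=0.7):
    # R has columns r_hat, A r_hat, ..., A^ell r_hat; implements (3.21).
    ell = R.shape[1] - 1
    V = R.T @ R                                  # (ell+1) x (ell+1) Gram matrix
    inner = slice(1, ell)                        # indices of A r_hat ... A^{ell-1} r_hat
    g0 = np.zeros(ell + 1); g0[0] = 1.0          # coordinates of e0~
    gl = np.zeros(ell + 1); gl[ell] = 1.0        # coordinates of e_ell~
    if ell > 1:                                  # orthogonalize (in the V-inner product)
        Vin = V[inner, inner]
        g0[inner] = -np.linalg.solve(Vin, V[inner, 0])
        gl[inner] = -np.linalg.solve(Vin, V[inner, ell])
    n0, nl = np.sqrt(g0 @ V @ g0), np.sqrt(gl @ V @ gl)
    rho = (gl @ V @ g0) / (nl * n0)              # cf. (3.10), in the V-inner product
    rho_hat = np.sign(rho) * max(abs(rho), kappa)
    g = g0 - rho_hat * (n0 / nl) * gl            # coordinates of r_{k+ell}, cf. (3.21)
    return R @ g
```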

4. Numerical experiments. In the previous sections we have focussed on the scaled $\rho_k$ (i.e. $\hat\rho_k^H$ as defined in (3.2)) and the scaled $\sigma_k$. Although these scaled values (where we have applied Cauchy-Schwarz) are actually smaller than the values that we display in the figures, they do not differ significantly in our numerical examples. The figures also show that the values of $\hat\rho_k^H$ and $\hat\sigma_k^H$ are rather close to each other, as they should be in view of our arguments in Sect. 3.5.

All figures show the log of the norm of the true residual $b - Ax_k$ for $k = m\ell$, the log of $|(r_k, \tilde r_0)| / (|r_k|, |\tilde r_0|)$, also for intermediate $k$, the log of $|(A u_k, \tilde r_0)| / (|A u_k|, |\tilde r_0|)$, and the $\log_{0.7}$ of $|\hat\varrho|$ (the $\times$'s in the figures), where $\hat\varrho$ is the scaled leading coefficient of the polynomial $p_m$ that we actually used (cf. Sect. 3.4.1), and $0.7$ is the value for $\kappa$ if we modify the methods by (3.20) (i.e., if $\kappa \ne 0$). We count the iteration phases by numbers of matrix-vector products (MV), since this makes it possible to compare different BiCGstab variants effectively.

The numerical results clearly indicate that limiting the relative size of the leading coefficient of the polynomial may help to cure the effects of stagnation of BiCGstab($\ell$), and that it may also help to increase the value of $\ell$ (our standard choice was $\ell = 2$ in these examples). In some situations one modification may help, while in other situations the other modification may help, or a combination of both.

4.1. Example 1. First we consider an advection dominated 2nd order partial differential equation, with Dirichlet boundary conditions, on the unit cube (this equation was taken from [23]):

$-u_{xx} - u_{yy} - u_{zz} + 1000\,u_x = f.$

The function $f$ is defined by the solution $u(x, y, z) = \exp(xyz)\,\sin(\pi x)\sin(\pi y)\sin(\pi z)$. This equation was discretized using finite volumes and central differences for $u_x$, resulting in a seven-diagonal linear system. No preconditioning has been used in this example (nor in the other examples, in order to make the differences between approaches more visible).

As we see, Bi-CGSTAB more or less stagnates (see Fig. 4.1), which creates the kind of situation that we were particularly interested in, and that we want to cure. After a number of MVs, the scalars $\rho_k$ and $\sigma_k$, and consequently the Bi-CG coefficients, do not have any correct digit. None of the scaled leading coefficients $\varrho$ is extremely small (see Fig. 4.1, where the $\times$'s show $\log_{0.7} |\varrho|$). Apparently, the accumulation of the amplifications of $\hat r$ (with respect to $\|\cdot\|_*$) by the MR(1)-polynomials leads to this situation. The graph of $|(r_k, \tilde r_0)| / (|r_k|, |\tilde r_0|)$ nicely shows the predicted exponential decrease in the initial phase; for larger $k$, these values are smaller than the machine precision.
Although the modification with $\kappa = 0.7$ improves the relative size of the $\rho_k$ and $\sigma_k$ (only much later do we lose all significant digits, cf. Fig. 4.2), we now even have divergence: the amplification of $\hat r$ per step is not compensated by better convergence behavior of the incorporated Bi-CG process.

[Fig. 4.1. Standard Bi-CGSTAB, Example 1. Fig. 4.2. Modified, $\kappa = 0.7$, Example 1. Fig. 4.3. Standard BiCGstab(2), Example 1. Fig. 4.4. Modified, $\kappa = 0.7$, Example 1.]

Note that now $|(r_k, \tilde r_0)| / (|r_k|, |\tilde r_0|)$ decreases proportionally to $(1/\sqrt{1 + \kappa^{-2}})^k$, with $1/\sqrt{1 + \kappa^{-2}} = 0.57$.

Using BiCGstab(2) improves the situation: both versions of BiCGstab($\ell$) converge nicely. According to Fig. 4.3, $|\varrho| \ge (0.7)^3 = 0.343$. During the first 60 MVs, we have to deal with these $\varrho$'s 15 times. This leads us to expect a decrease of $|(r_k, \tilde r_0)| / (|r_k|, |\tilde r_0|)$ by at most $(0.343)^{15} \approx 1.1 \cdot 10^{-7}$. This is confirmed by the numerical results. Here, BiCGstab(2) seems to be able to retain locally five correct digits of the Bi-CG coefficients, which is, apparently, enough to keep the incorporated Bi-CG process converging as it should. The modification with $\kappa = 0.7$ further improves the situation (but only slightly).
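For readers who want to reproduce a problem of the type used in Example 1, the sketch below assembles a standard central-difference analogue of that operator on a uniform grid. This is a sketch under our own assumptions: the grid size is our choice, and the paper's exact finite-volume details and problem size are not preserved in this transcription.

```python
import scipy.sparse as sp

def advection_diffusion_3d(m=20, beta=1000.0):
    # -u_xx - u_yy - u_zz + beta * u_x on the unit cube, Dirichlet boundary,
    # central differences on an m x m x m interior grid (seven-diagonal matrix).
    h = 1.0 / (m + 1)
    D2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], (m, m)) / h**2
    D1 = sp.diags([-1.0, 1.0], [-1, 1], (m, m)) / (2.0 * h)
    L = sp.kronsum(sp.kronsum(D2, D2), D2)      # 3D Laplacian (x is the fastest index)
    Dx = sp.kron(sp.identity(m**2), D1)         # d/dx acting on the fastest index
    return (L + beta * Dx).tocsr()
```

One can then compare, for instance, SciPy's built-in `scipy.sparse.linalg.bicgstab` on this matrix with the variants discussed here.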

[Fig. 4.5. Standard Bi-CGSTAB, Example 2. Fig. 4.6. Modified, $\kappa = 0.7$, Example 2.]

4.2. Example 2. For this comparison we have chosen the matrix that arises from a finite volume discretization of

(4.1)    $-u_{xx} - u_{yy} + \beta\,(x u_x + y u_y) + \gamma u = f$

on the unit square with Dirichlet boundary conditions, for certain constants $\beta$ and $\gamma$ ($\gamma < 0$). This example has been suggested in [8]. We have chosen the right hand side $b$ such that the solution $x$ of the equation $Ax = b$ is the vector $(1, 1, \ldots, 1)^T$. The zero vector was used as an initial guess.

Here Bi-CGSTAB stagnates also (see Fig. 4.5). After about 50 MVs, the Bi-CG coefficients do not have any correct digits left, due to the cumulative effect of non-small $|\varrho|$'s ($(0.7)^5 = 0.168$). The modification with $\kappa = 0.7$ (cf. Fig. 4.6) improves the relative size of the $\rho_k$ and $\sigma_k$ significantly, enough to survive the phase where the $|\varrho|$'s are $\lesssim 0.7$. As soon as the MR(1)-polynomial is more effective in reducing the norm of the residual (for #MV $\gtrsim 80$ we find $|\varrho| \gtrsim 0.7$), the relative size of the $\rho_k$ and $\sigma_k$ grows.

With BiCGstab(2) the situation does not seem to improve (see Fig. 4.7). In the initial phase, the values of $|\varrho|$ for the MR(2)-polynomials are much smaller than the values of $|\varrho|$ for the MR(1)-polynomials. Although, up to #MV $\approx 80$, the decrease of the scaled $\rho_k$ and $\sigma_k$ is a little bit better than with Bi-CGSTAB, this is apparently not enough to help survive the phase where the $|\varrho|$ is too small. However (not shown in the figure), BiCGstab(2) did eventually converge, needing more MVs than the modified Bi-CGSTAB; the difference corresponds to the length of the phase where the scaled $\rho_k$ and $\sigma_k$ are very small. The convergence behavior of the modified BiCGstab(2) with $\kappa = 0.7$ (see Fig. 4.8) is comparable to that of the modified Bi-CGSTAB (in Fig. 4.6).

4.3. Example 3. For this example we have selected a problem similar to the one in Example 2. Here, the matrix arises from a finite volume discretization of (4.1), now with other values for $\beta$ and $\gamma$, as in [13]. The example shows that a combination of our strategies for Bi-CGSTAB (increasing $\ell$ and limiting the size of the scaled leading coefficient $\hat\varrho$, cf. (3.20)) can cure the accuracy problems, whereas each of the strategies independently may fail.

As Fig. 4.10 shows, limiting the leading coefficient improves the accuracy of the Bi-CG coefficients in the initial phase of the process, but cannot prevent an eventual loss of all digits.

[Fig. 4.7. Standard BiCGstab(2), Example 2. Fig. 4.8. Modified, $\kappa = 0.7$, Example 2. Fig. 4.9. Standard Bi-CGSTAB, Example 3. Fig. 4.10. Modified, $\kappa = 0.7$, Example 3.]

Now the amplifying effect on $r_k^H$ of this choice of the polynomial can clearly be seen (over this phase the residual grows by a factor of roughly $10^8$).

5. Conclusions. In order to maintain the convergence properties of the Bi-CG component in hybrid Bi-CG methods, it is necessary to select polynomial methods for the hybrid part that permit the computation of the Bi-CG coefficients as accurately as possible. The (dynamical) combination of two strategies for the improvement of the local accuracy seems to be very attractive. These strategies are:
A. Take products of degree $\ell$ polynomials with $\ell > 1$, as in BiCGstab($\ell$), rather than of degree 1 polynomials as in Bi-CGSTAB.
B. Try to limit the size of the leading coefficient of these polynomials, by switching occasionally

[Fig. 4.11. Standard BiCGstab(2), Example 3. Fig. 4.12. Modified, $\kappa = 0.7$, Example 3.]

between FOM and GMRES processes.

This approach often leads to improved convergence and may help to overcome phases of stagnation. Our strategies are rather inexpensive, relative to the work per matrix-vector product.

Acknowledgement. We thank the referees for their suggestions to improve our presentation.

REFERENCES

[1] Z. Bai, Error analysis of the Lanczos algorithm for the nonsymmetric eigenvalue problem, Math. Comp., 62 (1994), pp. 209-226.
[2] R.E. Bank and T.F. Chan, An analysis of the composite step biconjugate gradient method, Numer. Math., 66 (1993), pp. 295-319.
[3] P.N. Brown, A theoretical comparison of the Arnoldi and GMRES algorithms, SIAM J. Sci. Stat. Comput., 12 (1991), pp. 58-78.
[4] S.C. Eisenstat, H.C. Elman and M.H. Schultz, Variational iterative methods for nonsymmetric systems of linear equations, SIAM J. Numer. Anal., 20 (1983), pp. 345-357.
[5] T.F. Chan, E. Gallopoulos, V. Simoncini, T. Szeto and C.H. Tong, A Quasi-Minimal Residual variant of the Bi-CGSTAB algorithm for nonsymmetric systems, SIAM J. Sci. Comput., 15 (1994), pp. 338-347.
[6] G.H. Golub and C.F. Van Loan, Matrix Computations, Second edition, The Johns Hopkins University Press, Baltimore and London, 1989.
[7] R. Fletcher, Conjugate gradient methods for indefinite systems, in Proc. of the Dundee Biennial Conference on Numerical Analysis, G. Watson, ed., Springer-Verlag, New York, 1975.
[8] R.W. Freund, A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems, SIAM J. Sci. Comput., 14 (1993), pp. 470-482.
[9] R.W. Freund, M.H. Gutknecht and N.M. Nachtigal, An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, SIAM J. Sci. Comput., 14 (1993), pp. 137-158.
[10] A. Greenbaum, Behavior of slightly perturbed Lanczos and conjugate gradient recurrences, Linear Algebra Appl., 113 (1989), pp. 7-63.
[11] M.H. Gutknecht, Variants of BiCGStab for matrices with complex spectrum, SIAM J. Sci. Comput., 14 (1993), pp. 1020-1033.
[12] C. Lanczos, Solution of systems of linear equations by minimized iterations, J. Res. Nat. Bur. Standards, 49 (1952), pp. 33-53.
[13] U. Meier Yang, Preconditioned Conjugate Gradient-Like Methods for Nonsymmetric Linear Systems, Preprint, Center for Research and Development, University of Illinois at Urbana-Champaign, 1994.
[14] J. Modersitzki and G.L.G. Sleijpen, An error analysis of CSBCG, in preparation.
[15] A. Neumaier, Oral presentation at the Oberwolfach meeting "Numerical Linear Algebra", Oberwolfach, April 1994.

[16] C. Paige, Accuracy and effectiveness of the Lanczos algorithm for the symmetric eigenproblem, Linear Algebra Appl., 34 (1980), pp. 235-258.
[17] B.N. Parlett, D.R. Taylor and Z.A. Liu, A look-ahead Lanczos algorithm for unsymmetric matrices, Math. Comp., 44 (1985), pp. 105-124.
[18] Y. Saad, Krylov subspace methods for solving large unsymmetric linear systems, Math. Comp., 37 (1981), pp. 105-126.
[19] Y. Saad and M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 856-869.
[20] Y. Saad, A flexible inner-outer preconditioned GMRES algorithm, SIAM J. Sci. Comput., 14 (1993), pp. 461-469.
[21] G.L.G. Sleijpen and D.R. Fokkema, BiCGstab(ℓ) for linear equations involving matrices with complex spectrum, ETNA, 1 (1993), pp. 11-32.
[22] G.L.G. Sleijpen and H.A. van der Vorst, Reliable updated residuals in hybrid Bi-CG methods, Preprint 886, Dept. Math., University Utrecht, 1994.
[23] G.L.G. Sleijpen, H.A. van der Vorst and D.R. Fokkema, BiCGstab(ℓ) and other hybrid Bi-CG methods, Numerical Algorithms, 7 (1994), pp. 75-109.
[24] P. Sonneveld, CGS, a fast Lanczos-type solver for nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 10 (1989), pp. 36-52.
[25] H.A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 13 (1992), pp. 631-644.
[26] H.A. van der Vorst and C. Vuik, The superlinear convergence behaviour of GMRES, J. Comput. Appl. Math., 48 (1993), pp. 327-341.
[27] D.M. Young and K.C. Jea, Generalized conjugate-gradient acceleration of nonsymmetrizable iterative methods, Linear Algebra Appl., 34 (1980), pp. 159-194.


Formulation of a Preconditioned Algorithm for the Conjugate Gradient Squared Method in Accordance with Its Logical Structure Applied Mathematics 215 6 1389-146 Published Online July 215 in SciRes http://wwwscirporg/journal/am http://dxdoiorg/14236/am21568131 Formulation of a Preconditioned Algorithm for the Conjugate Gradient

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Fast iterative solvers

Fast iterative solvers Utrecht, 15 november 2017 Fast iterative solvers Gerard Sleijpen Department of Mathematics http://www.staff.science.uu.nl/ sleij101/ Review. To simplify, take x 0 = 0, assume b 2 = 1. Solving Ax = b for

More information

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems Abstract and Applied Analysis Article ID 237808 pages http://dxdoiorg/055/204/237808 Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite

More information

A preconditioned Krylov subspace method for the solution of least squares problems

A preconditioned Krylov subspace method for the solution of least squares problems A preconditioned Krylov subspace method for the solution of least squares problems Report 94-56 Kees Vui Agur G.J. Sevin Gérard C. Herman Technische Universiteit Delft Delft University of Technology Faculteit

More information

On the loss of orthogonality in the Gram-Schmidt orthogonalization process

On the loss of orthogonality in the Gram-Schmidt orthogonalization process CERFACS Technical Report No. TR/PA/03/25 Luc Giraud Julien Langou Miroslav Rozložník On the loss of orthogonality in the Gram-Schmidt orthogonalization process Abstract. In this paper we study numerical

More information

arxiv: v2 [math.na] 1 Sep 2016

arxiv: v2 [math.na] 1 Sep 2016 The structure of the polynomials in preconditioned BiCG algorithms and the switching direction of preconditioned systems Shoji Itoh and Masaai Sugihara arxiv:163.175v2 [math.na] 1 Sep 216 Abstract We present

More information

Performance Evaluation of GPBiCGSafe Method without Reverse-Ordered Recurrence for Realistic Problems

Performance Evaluation of GPBiCGSafe Method without Reverse-Ordered Recurrence for Realistic Problems Performance Evaluation of GPBiCGSafe Method without Reverse-Ordered Recurrence for Realistic Problems Seiji Fujino, Takashi Sekimoto Abstract GPBiCG method is an attractive iterative method for the solution

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

arxiv: v1 [hep-lat] 2 May 2012

arxiv: v1 [hep-lat] 2 May 2012 A CG Method for Multiple Right Hand Sides and Multiple Shifts in Lattice QCD Calculations arxiv:1205.0359v1 [hep-lat] 2 May 2012 Fachbereich C, Mathematik und Naturwissenschaften, Bergische Universität

More information

Alternative correction equations in the Jacobi-Davidson method. Mathematical Institute. Menno Genseberger and Gerard L. G.

Alternative correction equations in the Jacobi-Davidson method. Mathematical Institute. Menno Genseberger and Gerard L. G. Universiteit-Utrecht * Mathematical Institute Alternative correction equations in the Jacobi-Davidson method by Menno Genseberger and Gerard L. G. Sleijpen Preprint No. 173 June 1998, revised: March 1999

More information

1 Conjugate gradients

1 Conjugate gradients Notes for 2016-11-18 1 Conjugate gradients We now turn to the method of conjugate gradients (CG), perhaps the best known of the Krylov subspace solvers. The CG iteration can be characterized as the iteration

More information

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012 On the Superlinear Convergence of MINRES Valeria Simoncini and Daniel B. Szyld Report 12-01-11 January 2012 This report is available in the World Wide Web at http://www.math.temple.edu/~szyld 0 Chapter

More information

SOR as a Preconditioner. A Dissertation. Presented to. University of Virginia. In Partial Fulllment. of the Requirements for the Degree

SOR as a Preconditioner. A Dissertation. Presented to. University of Virginia. In Partial Fulllment. of the Requirements for the Degree SOR as a Preconditioner A Dissertation Presented to The Faculty of the School of Engineering and Applied Science University of Virginia In Partial Fulllment of the Reuirements for the Degree Doctor of

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Universiteit-Utrecht. Department. of Mathematics. Jacobi-Davidson algorithms for various. eigenproblems. - A working document -

Universiteit-Utrecht. Department. of Mathematics. Jacobi-Davidson algorithms for various. eigenproblems. - A working document - Universiteit-Utrecht * Department of Mathematics Jacobi-Davidson algorithms for various eigenproblems - A working document - by Gerard L.G. Sleipen, Henk A. Van der Vorst, and Zhaoun Bai Preprint nr. 1114

More information

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

RANA03-02 January Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis

RANA03-02 January Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis RANA03-02 January 2003 Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis by J.Rommes, H.A. van der Vorst, EJ.W. ter Maten Reports on Applied and Numerical Analysis Department

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information

216 S. Chandrasearan and I.C.F. Isen Our results dier from those of Sun [14] in two asects: we assume that comuted eigenvalues or singular values are

216 S. Chandrasearan and I.C.F. Isen Our results dier from those of Sun [14] in two asects: we assume that comuted eigenvalues or singular values are Numer. Math. 68: 215{223 (1994) Numerische Mathemati c Sringer-Verlag 1994 Electronic Edition Bacward errors for eigenvalue and singular value decomositions? S. Chandrasearan??, I.C.F. Isen??? Deartment

More information

Recent computational developments in Krylov Subspace Methods for linear systems. Valeria Simoncini and Daniel B. Szyld

Recent computational developments in Krylov Subspace Methods for linear systems. Valeria Simoncini and Daniel B. Szyld Recent computational developments in Krylov Subspace Methods for linear systems Valeria Simoncini and Daniel B. Szyld A later version appeared in : Numerical Linear Algebra w/appl., 2007, v. 14(1), pp.1-59.

More information

Alternative correction equations in the Jacobi-Davidson method

Alternative correction equations in the Jacobi-Davidson method Chapter 2 Alternative correction equations in the Jacobi-Davidson method Menno Genseberger and Gerard Sleijpen Abstract The correction equation in the Jacobi-Davidson method is effective in a subspace

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

Algorithms that use the Arnoldi Basis

Algorithms that use the Arnoldi Basis AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-16 An elegant IDR(s) variant that efficiently exploits bi-orthogonality properties Martin B. van Gijzen and Peter Sonneveld ISSN 1389-6520 Reports of the Department

More information

STEEPEST DESCENT AND CONJUGATE GRADIENT METHODS WITH VARIABLE PRECONDITIONING

STEEPEST DESCENT AND CONJUGATE GRADIENT METHODS WITH VARIABLE PRECONDITIONING SIAM J. MATRIX ANAL. APPL. Vol.?, No.?, pp.?? c 2007 Society for Industrial and Applied Mathematics STEEPEST DESCENT AND CONJUGATE GRADIENT METHODS WITH VARIABLE PRECONDITIONING ANDREW V. KNYAZEV AND ILYA

More information

ETNA Kent State University

ETNA Kent State University Z Electronic Transactions on Numerical Analysis. olume 16, pp. 191, 003. Copyright 003,. ISSN 10689613. etnamcs.kent.edu A BLOCK ERSION O BICGSTAB OR LINEAR SYSTEMS WITH MULTIPLE RIGHTHAND SIDES A. EL

More information

Solving Large Nonlinear Sparse Systems

Solving Large Nonlinear Sparse Systems Solving Large Nonlinear Sparse Systems Fred W. Wubs and Jonas Thies Computational Mechanics & Numerical Mathematics University of Groningen, the Netherlands f.w.wubs@rug.nl Centre for Interdisciplinary

More information

AN ITERATIVE METHOD WITH ERROR ESTIMATORS

AN ITERATIVE METHOD WITH ERROR ESTIMATORS AN ITERATIVE METHOD WITH ERROR ESTIMATORS D. CALVETTI, S. MORIGI, L. REICHEL, AND F. SGALLARI Abstract. Iterative methods for the solution of linear systems of equations produce a sequence of approximate

More information

The Best Circulant Preconditioners for Hermitian Toeplitz Systems II: The Multiple-Zero Case Raymond H. Chan Michael K. Ng y Andy M. Yip z Abstract In

The Best Circulant Preconditioners for Hermitian Toeplitz Systems II: The Multiple-Zero Case Raymond H. Chan Michael K. Ng y Andy M. Yip z Abstract In The Best Circulant Preconditioners for Hermitian Toeplitz Systems II: The Multiple-ero Case Raymond H. Chan Michael K. Ng y Andy M. Yip z Abstract In [0, 4], circulant-type preconditioners have been proposed

More information

A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY

A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY RONALD B. MORGAN AND MIN ZENG Abstract. A restarted Arnoldi algorithm is given that computes eigenvalues

More information

1e N

1e N Spectral schemes on triangular elements by Wilhelm Heinrichs and Birgit I. Loch Abstract The Poisson problem with homogeneous Dirichlet boundary conditions is considered on a triangle. The mapping between

More information

Fast iterative solvers

Fast iterative solvers Solving Ax = b, an overview Utrecht, 15 november 2017 Fast iterative solvers A = A no Good precond. yes flex. precond. yes GCR yes A > 0 yes CG no no GMRES no ill cond. yes SYMMLQ str indef no large im

More information

Fraction-free Row Reduction of Matrices of Skew Polynomials

Fraction-free Row Reduction of Matrices of Skew Polynomials Fraction-free Row Reduction of Matrices of Skew Polynomials Bernhard Beckermann Laboratoire d Analyse Numérique et d Optimisation Université des Sciences et Technologies de Lille France bbecker@ano.univ-lille1.fr

More information

Incomplete Block LU Preconditioners on Slightly Overlapping. E. de Sturler. Delft University of Technology. Abstract

Incomplete Block LU Preconditioners on Slightly Overlapping. E. de Sturler. Delft University of Technology. Abstract Incomplete Block LU Preconditioners on Slightly Overlapping Subdomains for a Massively Parallel Computer E. de Sturler Faculty of Technical Mathematics and Informatics Delft University of Technology Mekelweg

More information

OUTLINE 1. Introduction 1.1 Notation 1.2 Special matrices 2. Gaussian Elimination 2.1 Vector and matrix norms 2.2 Finite precision arithmetic 2.3 Fact

OUTLINE 1. Introduction 1.1 Notation 1.2 Special matrices 2. Gaussian Elimination 2.1 Vector and matrix norms 2.2 Finite precision arithmetic 2.3 Fact Computational Linear Algebra Course: (MATH: 6800, CSCI: 6800) Semester: Fall 1998 Instructors: { Joseph E. Flaherty, aherje@cs.rpi.edu { Franklin T. Luk, luk@cs.rpi.edu { Wesley Turner, turnerw@cs.rpi.edu

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

Jos L.M. van Dorsselaer. February Abstract. Continuation methods are a well-known technique for computing several stationary

Jos L.M. van Dorsselaer. February Abstract. Continuation methods are a well-known technique for computing several stationary Computing eigenvalues occurring in continuation methods with the Jacobi-Davidson QZ method Jos L.M. van Dorsselaer February 1997 Abstract. Continuation methods are a well-known technique for computing

More information

1 Extrapolation: A Hint of Things to Come

1 Extrapolation: A Hint of Things to Come Notes for 2017-03-24 1 Extrapolation: A Hint of Things to Come Stationary iterations are simple. Methods like Jacobi or Gauss-Seidel are easy to program, and it s (relatively) easy to analyze their convergence.

More information

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic joint work with Gérard

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

The Tortoise and the Hare Restart GMRES

The Tortoise and the Hare Restart GMRES SIAM REVIEW Vol. 45,No. 2,pp. 259 266 c 2 Society for Industrial and Applied Mathematics The Tortoise and the Hare Restart GMRES Mar Embree Abstract. When solving large nonsymmetric systems of linear equations

More information

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 2005, DR RAPHAEL HAUSER 1. The Quasi-Newton Idea. In this lecture we will discuss

More information

Principles and Analysis of Krylov Subspace Methods

Principles and Analysis of Krylov Subspace Methods Principles and Analysis of Krylov Subspace Methods Zdeněk Strakoš Institute of Computer Science, Academy of Sciences, Prague www.cs.cas.cz/~strakos Ostrava, February 2005 1 With special thanks to C.C.

More information

Non-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error

Non-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error on-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error Divya Anand Subba and Murugesan Venatapathi* Supercomputer Education and Research

More information

Some minimization problems

Some minimization problems Week 13: Wednesday, Nov 14 Some minimization problems Last time, we sketched the following two-step strategy for approximating the solution to linear systems via Krylov subspaces: 1. Build a sequence of

More information

PROJECTED GMRES AND ITS VARIANTS

PROJECTED GMRES AND ITS VARIANTS PROJECTED GMRES AND ITS VARIANTS Reinaldo Astudillo Brígida Molina rastudillo@kuaimare.ciens.ucv.ve bmolina@kuaimare.ciens.ucv.ve Centro de Cálculo Científico y Tecnológico (CCCT), Facultad de Ciencias,

More information

HOW TO MAKE SIMPLER GMRES AND GCR MORE STABLE

HOW TO MAKE SIMPLER GMRES AND GCR MORE STABLE HOW TO MAKE SIMPLER GMRES AND GCR MORE STABLE PAVEL JIRÁNEK, MIROSLAV ROZLOŽNÍK, AND MARTIN H. GUTKNECHT Abstract. In this paper we analyze the numerical behavior of several minimum residual methods, which

More information