Report no. 97/16

The convergence of iterative solution methods for symmetric and indefinite linear systems

A.J. Wathen   B. Fischer   D.J. Silvester

Iterative solution methods provide the only feasible alternative to direct methods for very large scale linear systems such as those which derive from approximation of many partial differential equation problems. For symmetric and positive definite coefficient matrices the conjugate gradient method provides an efficient and popular solver, especially when employed with an appropriate preconditioner. Part of the success of this method is attributable to the rigorous and largely descriptive convergence theory which enables very large sized problems to be tackled with confidence. Here we describe some convergence results for symmetric and indefinite coefficient matrices which depend on an asymptotically small parameter such as the mesh size in a finite difference or finite element discretisation. These estimates are seen to be descriptive in numerical calculations.

Subject classifications: AMS(MOS): to be inserted
Key words and phrases: to be inserted

This research was supported by NATO Grant CRG

Oxford University Computing Laboratory
Numerical Analysis Group
Wolfson Building, Parks Road
Oxford, England OX1 3QD
wathen@comlab.oxford.ac.uk

October, 1997

Contents

1 Introduction
2 Optimal polynomials and convergence of the minimum residual method
3 Optimal polynomials
4 Ordinary Differential Equations
5 Asymptotic solutions
6 Conclusion

1 Introduction

Large matrix problems arise in different areas, the most common of which is perhaps the approximation of boundary value and initial boundary value problems for partial differential equations. For finite difference and finite element approximations, accuracy of the discrete solution is most often expressed in terms of an asymptotically small characteristic mesh size, h. A priori optimal error estimates will be descriptive of the observed accuracy in computations when the mesh size h is reduced. In direct correspondence with this is the recognition that the linear or linearised systems of equations which must be solved to yield the discrete solution belong to a family of matrix systems, parameterised by h, where the matrices become of larger dimension as h is reduced. Thus, for example, approximation of a 2-dimensional Dirichlet problem on $[0,1]\times[0,1]$ for the Laplacian on a regular square mesh with mesh size h yields coefficient matrices of dimension approximately $h^{-2}$. For direct solution methods this size (together, for example, with the bandwidth $h^{-1}$ for a natural numbering) will allow an estimate of the work required to solve a particular discrete problem; however, for iterative methods (for which the work at each iteration is usually fixed and easy to calculate) more crucial still is the way in which the convergence will depend on h. It is usually the case that not only does the work per iteration increase as h is reduced, but also that the number of iterations required to achieve a convergence tolerance will increase for larger problems (smaller h). For example, if the conjugate gradient method [13] is applied to the Laplacian problem mentioned above, then the iteration error will decrease at each step by a factor of $1 - O(h)$.
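To make the quoted $1 - O(h)$ contraction concrete, the following small script (our illustration, not part of the original report; it assumes NumPy and SciPy are available) applies unpreconditioned conjugate gradients to the 5-point Laplacian and shows the iteration count growing roughly like $h^{-1}$:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def laplacian_2d(n):
    # 5-point finite difference Laplacian on an n-by-n interior grid, h = 1/(n+1)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return sp.kron(I, T) + sp.kron(T, I)

for n in [16, 32, 64, 128]:
    A = laplacian_2d(n).tocsr()
    b = np.ones(A.shape[0])
    count = [0]
    # count CG iterations to the default tolerance via the callback
    x, info = cg(A, b, callback=lambda xk: count.__setitem__(0, count[0] + 1))
    print(f"h = 1/{n+1:4d}: {count[0]} CG iterations")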

For positive definite problems, particularly those which are symmetric, all this is well known, see for example [5] (or for more classical iteration methods see [24]), and there has been much effort directed at preconditioning to reduce the dependence of the convergence on h, or even in the best case to give a preconditioned iterative method with a convergence rate independent of h (see for example [4] or [21]).

For indefinite matrices the situation is quite different. There is a widely held view that iterative solution of indefinite systems is much less reliable and much less efficient in general, and that steps should be taken to make such systems positive definite. For example, in the case of saddle-point (or KKT) systems of the form

$$\begin{bmatrix} K & B^T \\ B & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} f \\ 0 \end{bmatrix}$$

(which are necessarily indefinite), provided that K is definite, block elimination yields a definite Schur complement $BK^{-1}B^T$ to which an iteration such as conjugate gradients, or one of the family of non-symmetric Krylov subspace methods (in the case of nonsymmetric K), can be applied. Even conversion of indefinite systems to normal equations by multiplying with the transpose of the coefficient matrix (which can be effective for well conditioned problems) is suggested by some authors: Yousef Saad for example says "When the original matrix is strongly indefinite, i.e. when it has eigenvalues spread on both sides of the imaginary axis, the usual Krylov subspace methods may fail. The conjugate gradient approach applied to the normal equations may then become a good alternative" ([21], p. 313).

Research in the last five years has provided a strong counter to this view, at least in the case of symmetric and indefinite systems: in general it is sensible to consider Krylov subspace iteration methods such as the method of minimum residuals, a stable implementation of which (MINRES) is described in the seminal paper by Paige and Saunders [18]. A detailed description of MINRES and alternative implementations (in particular, SYMMLQ) is given in [9]. (For a detailed roundoff error analysis of MINRES and SYMMLQ see [23].) Effective preconditioners with guaranteed rapid convergence rates have been derived for a number of important problems. For problems in incompressible fluid mechanics, see [20], [26], [22] or the review paper [7]; for problems in elasticity see [14], [3] and [19]; for problems in porous media flow, see [16]; and for an example in finite element/boundary element coupling for transmission problems, see [27].

In this paper we concentrate on convergence estimates for minimum residual iteration applied to a linear system Ax = b (so that A represents the preconditioned coefficient matrix if preconditioning is employed). In particular we generalise the results of [25] to establish rigorous convergence estimates for families of matrices which depend on an asymptotically small parameter $\epsilon$ (in applications $\epsilon$ is typically a positive power of the mesh size parameter h). These results prove the superiority of the minimum residual approach over the solution of normal equations for all except one very special type of symmetric and indefinite matrix. More background and an easy introduction to this problem can be found in [8].

In section 2 we give a brief description of the minimum residual iteration and show how convergence is described by polynomial approximation on the eigenvalues of the coefficient matrix. In section 3 we give some solutions of the approximation problem in terms of the classical Chebyshev polynomials, and show in section 4 how an ordinary differential equations approach can be used on these problems. In the rather more technical section 5 we establish asymptotic convergence estimates for small $\epsilon$ based upon asymptotic solution of the relevant ordinary differential equation initial value problem. In section 6 we summarise our conclusions.
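As a minimal sketch of the approach advocated above (ours, with arbitrary block sizes, not an example from the report), MINRES can be applied directly to a symmetric indefinite saddle-point system, with no need to form the Schur complement or the normal equations:

import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
n, m = 50, 20
C = rng.standard_normal((n, n))
K = C @ C.T + n * np.eye(n)          # symmetric positive definite block
B = rng.standard_normal((m, n))      # full-rank constraint block
A = np.block([[K, B.T], [B, np.zeros((m, m))]])   # symmetric, indefinite
rhs = np.concatenate([rng.standard_normal(n), np.zeros(m)])

z, info = minres(A, rhs)
print("MINRES converged:", info == 0,
      " residual:", np.linalg.norm(rhs - A @ z))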

2 Optimal polynomials and convergence of the minimum residual method

At a cost of one matrix-vector multiplication and a few vector operations per iteration, starting from $x_0$, the minimum residual method computes iterates $x_k$ from the shifted Krylov subspace $x_0 + \mathrm{span}\{r_0, Ar_0, \ldots, A^{k-1}r_0\}$ for which the residuals $r_k = b - Ax_k$ have minimal Euclidean norm. Equivalently, $r_k = p_k(A)r_0$ where $p_k$ is a polynomial of degree k satisfying $p_k(0) = 1$, and so the minimisation property yields

$$\|r_k\| = \min_{p\in\Pi_k,\,p(0)=1} \|p(A)r_0\| \le \min_{p\in\Pi_k,\,p(0)=1} \|p(A)\|\,\|r_0\| = \min_{p\in\Pi_k,\,p(0)=1} \|p(\Lambda)\|\,\|r_0\|$$

where $A = Q\Lambda Q^T$ is an orthogonal diagonalisation of A. Note that it is at this point that symmetry is important: the consequent orthogonality of the eigenvector matrix Q and the invariance of the Euclidean norm with respect to orthogonal transformations have been used. For a non-symmetric problem this is not possible in general [6]. By considering the Euclidean norm of the diagonal matrix $\Lambda$ one arrives at

$$\frac{\|r_k\|}{\|r_0\|} \le \min_{p\in\Pi_k,\,p(0)=1}\ \max_{\lambda}\ |p(\lambda)| \le \min_{p\in\Pi_k,\,p(0)=1}\ \max_{x\in E}\ |p(x)| \qquad (2.1)$$

where $\lambda$ are the eigenvalues of A and E is an inclusion set for these eigenvalues. In the case of a symmetric indefinite matrix, E comprises two real intervals on either side of the origin (which must be excluded from E because of the constraint p(0) = 1). Thus the sequence of polynomial approximation errors

$$\eta_k = \min_{p\in\Pi_k,\,p(0)=1}\ \max_{x\in E}\ |p(x)| \qquad (2.2)$$

describes the convergence of the minimum residual method.
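The bound (2.1) involves A only through its eigenvalues. A quick numerical confirmation (our sketch, assuming SciPy's MINRES): a diagonal matrix and any orthogonal similarity transformation of it produce identical residual histories once the right-hand side is rotated consistently:

import numpy as np
from scipy.linalg import qr
from scipy.sparse.linalg import minres

rng = np.random.default_rng(1)
n = 200
lam = np.concatenate([-1 - rng.random(n // 2), 1 + rng.random(n // 2)])
D = np.diag(lam)                          # indefinite diagonal matrix
Q, _ = qr(rng.standard_normal((n, n)))    # random orthogonal factor
A = Q @ D @ Q.T
b = rng.standard_normal(n)

def history(M, rhs):
    res = []
    minres(M, rhs, maxiter=30,
           callback=lambda xk: res.append(np.linalg.norm(rhs - M @ xk)))
    return res

hD, hA = history(D, Q.T @ b), history(A, b)
k = min(len(hD), len(hA))
print(max(abs(hD[i] - hA[i]) for i in range(k)))   # ~ round-off level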

A more convenient single quantity is the asymptotic convergence factor

$$\rho = \lim_{k\to\infty} \eta_k^{1/k},$$

which gives the average contraction per iteration: it is $\rho$ for which we will establish estimates. A comment is required at this stage: as with the conjugate gradient method, the minimum residual method would terminate with the exact solution in a number of iterations (steps) less than or equal to the dimension of A if exact arithmetic could be employed. However, the convergence estimate (2.1) and the corresponding value of $\rho$ describe convergence of the iteration for much smaller numbers of iterations. Indeed it is usually the case that it would be very expensive or even infeasible to iterate for as many as dim(A) iterations, so in general one seeks a rapid initial reduction of the norm of the residual.

3 Optimal polynomials

Existence of a unique solution of the approximation problem (2.2), the so-called optimal polynomial, is guaranteed and will not be discussed further here (see for example [17]). What is of concern to us is a characterisation of the optimal polynomial, since its exact form is not known in general. The solution of such minimax approximation problems is usually associated with `equioscillation' properties of the error, and so it is here. For the polynomial of degree k there are k + 1 distinct points lying in E at which the value $\eta_k$ is attained, and at successive such points the sign of the error alternates.

For the simple case where E is a single interval, say [c, d], it follows that the solution of (2.2) is explicitly given by suitably scaled and shifted Chebyshev polynomials,

$$p(x) = T_k\!\left(\frac{2x-d-c}{d-c}\right) \Big/ T_k\!\left(\frac{-d-c}{d-c}\right), \qquad (3.1)$$

and from this that

$$\eta_k \le 2\left(\frac{\sqrt{d}-\sqrt{c}}{\sqrt{d}+\sqrt{c}}\right)^{k} \qquad (3.2)$$

so that $\rho \le (\sqrt{d}-\sqrt{c})/(\sqrt{d}+\sqrt{c})$ (see for example [10], or for a derivation more in context with the present application see [8], section 7.4.5).

For the case of a pair of intervals

$$E = [-a, -b] \cup [c, d] \qquad (3.3)$$

where a, b, c and d are positive, the solution is simple to express only when the intervals are of equal length: again the solution is in terms of Chebyshev polynomials,

$$p(x) = T_{\lfloor k/2\rfloor}(q(x))\big/T_{\lfloor k/2\rfloor}(q(0)), \qquad q(x) = 1 + \frac{2(x+b)(x-c)}{bc-ad}, \qquad (3.4)$$

where $\lfloor k/2\rfloor$ denotes the integer part of k/2 (see [10] or [12], though the result is originally due to Lebedev [15]). The quadratic polynomial q simply maps the two intervals onto [-1, 1], so that p is symmetric about the central point $(c-b)/2 = (d-a)/2$. In this case

$$\eta_k \le 2\left(\frac{\sqrt{ad}-\sqrt{bc}}{\sqrt{ad}+\sqrt{bc}}\right)^{\lfloor k/2\rfloor} \qquad (3.5)$$

(see again [10]).

It is unfortunate that this easily accessible estimate based on `symmetric' Chebyshev polynomials is only descriptive when the eigenvalues of a matrix are spread in intervals of essentially equal length, since it is usually quoted as the estimate for the convergence of the minimum residual algorithm regardless of any special structure that the eigenvalue spectrum may possess. In particular, if the eigenvalues of a matrix are essentially symmetrically distributed about the origin, so that a = d and b = c, then (3.5) implies that the residual reduction after k steps is the same as would be achieved after $\lfloor k/2\rfloor$ steps on a positive definite problem with eigenvalues in $[c^2, d^2]$ (so that it is comparable with squaring the matrix to obtain the normal equations). Since any implementation of a Krylov subspace method for the normal equations must in some manner compute two matrix-vector products at each iteration, the work in $\lfloor k/2\rfloor$ steps of such a method compares with the work in k steps of the minimum residual method, and it follows that the overall work using either of these two approaches is comparable. We now embark on a different approach which, for many problems involving an asymptotically small parameter, yields tighter estimates on the convergence of the minimum residual method in all but this worst case.
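The factors in (3.2) and (3.5) are trivial to evaluate; the hypothetical helper below (ours, with illustrative values of the interval end points) also checks the symmetric worst case a = d, b = c just discussed, where the two-interval factor per iteration coincides with the normal-equations factor per matrix-vector product:

import math

def rho_one_interval(c, d):
    # asymptotic convergence factor for eigenvalues in [c, d], from (3.2)
    return (math.sqrt(d) - math.sqrt(c)) / (math.sqrt(d) + math.sqrt(c))

def rho_two_intervals(a, b, c, d):
    # per-iteration factor for eigenvalues in [-a,-b] U [c,d] (equal lengths),
    # from (3.5); the exponent floor(k/2) contributes the outer square root
    r = (math.sqrt(a * d) - math.sqrt(b * c)) / (math.sqrt(a * d) + math.sqrt(b * c))
    return math.sqrt(r)

a = d = 2.0; b = c = 0.1   # spectrum symmetric about the origin
print(rho_two_intervals(a, b, c, d))            # minimum residual, per iteration
print(math.sqrt(rho_one_interval(c**2, d**2)))  # normal equations, per matvec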

4 Ordinary Differential Equations

We have already mentioned the `equioscillation' characterisation of the minimax approximation problem (2.2) when E is a single real interval which does not contain the origin: for a degree k polynomial there exist k + 1 distinct points at which the maximal error $\eta_k$ is achieved with alternating sign. Moreover, the end points of the interval E are necessarily in this set of `extreme' points. The situation is not so straightforward in the indefinite case where E comprises two real intervals (except when there is an exact symmetry, as in the case of two intervals of equal length above). In particular, there is no guarantee that the four end points of the intervals will be extreme points. The issue is that, though the error alternates on either side of the constraint point (the origin), we would require that there be k + 2 extreme points to ensure that $\eta_k$ is attained at all four end points as well as at $k-2$ interior points.

If there are not k + 2 extreme points then it is not possible to compute the optimal polynomial explicitly (see [9]) and further analysis is impossible. Fortunately what we require is not impossible:

Proposition 4.1 If $E = [-a, -b] \cup [c, d]$ then the solution of the polynomial approximation problem (2.2) has k + 2 extremal points if and only if

$$\mathrm{sn}^{-1}\!\left(\sqrt{\frac{b+d}{a+d}}\,;\, m\right) = \frac{n}{k}\,K(m) \quad \text{for some } n \in \{1, 2, \ldots, k-1\}, \qquad m = \frac{(c+b)(a+d)}{(b+d)(a+c)}. \qquad (4.1)$$

This result, in which sn is the elliptic sine function with modulus m and the number K is the complete elliptic integral of the first kind (see [28] for a comprehensive description), was proved by Achieser [2]. Though this leads to an expression for $\eta_k$ in terms of the Jacobian theta function, for our present purposes the proposition most usefully tells us that, as the number of iterations k increases, there is a more and more dense discrete net of end points for which there are k + 2 extreme points including the interval end points themselves. That is, the end points of a pair of intervals are extreme points if and only if the inverse elliptic sine function for the particular given argument and modulus is an exact integer multiple of the complete elliptic integral of the modulus divided by k. Since the elliptic sine function is continuous and monotonic on the relevant domain, it follows that any given pair of intervals, E, can be embedded in a pair of generally larger intervals, $\hat E$ say, for which the optimal polynomial has k + 2 extreme points including all four interval end points and $k-2$ interior points. The minimax error on $\hat E$ must clearly be greater than or equal to the minimax error on E. Moreover, for larger values of k, $\hat E$ need only be larger than E by a smaller and smaller amount.

In this paper we are specifically concerned with indefinite systems which have eigenvalues contained in inclusion intervals of the form

$$E = [-a\epsilon^L, -b\epsilon^\ell] \cup [c\epsilon^r, d] \qquad (4.2)$$

where a, b, c and d are positive real numbers, $\epsilon$ is an asymptotically small quantity, and the exponents L, $\ell$ and r are real, with L non-negative and $\ell$ and r positive. We clearly must have $-a\epsilon^L \le -b\epsilon^\ell$, which for small enough $\epsilon$ implies that $L \le \ell$. Our analysis requires the further assumptions that in fact $L < \ell$ and also $L < r$.
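Condition (4.1) is easy to test numerically. The sketch below (ours, with arbitrary interval end points) uses SciPy's elliptic functions, which work with the parameter convention; if the m of Proposition 4.1 is a modulus in the classical sense, the second argument should be squared:

import numpy as np
from scipy.special import ellipk, ellipkinc

def extremal_condition(a, b, c, d, k):
    # how close sn^{-1}(sqrt((b+d)/(a+d)); m) * k / K(m) is to an integer;
    # sn^{-1}(s; m) = F(arcsin(s) | m), the incomplete integral of the 1st kind
    m = (c + b) * (a + d) / ((b + d) * (a + c))
    u = ellipkinc(np.arcsin(np.sqrt((b + d) / (a + d))), m)
    ratio = k * u / ellipk(m)
    return ratio, abs(ratio - round(ratio))

print(extremal_condition(1.0, 0.1, 0.2, 1.5, k=40))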

We note here that many other cases can be easily transformed into the form (4.2): under a simple linear change of variable, $x \mapsto ux + v$, the degree of a polynomial does not change, so the polynomial approximation problem (2.2) remains unchanged except for a shift of the constraint point if $v \neq 0$. The results we shall describe therefore apply to more general situations, such as for example where $\lambda \in [-\epsilon, -\epsilon^2] \cup [\epsilon^3, \epsilon^{-2}]$, when the choice $u = \epsilon^2$, $v = 0$ yields (4.2) with a = b = c = d = 1, L = 3, $\ell$ = 4 and r = 5. Similarly, by taking u = 1, v = 0 we cover cases where the positive upper bound in (4.2) depends asymptotically on $\epsilon$ whilst the negative lower bound is constant.

The first important consequence of the assumed asymptotic behaviour concerns Proposition 4.1. We are not guaranteed k + 2 extreme points on (4.2); however, we may embed E in a pair of intervals $\hat E$ for which the result of the proposition does hold, as described above. Now $\hat E$ can be selected so that, for large enough k, $\hat E \setminus E$ is as small as we desire. In particular its size (measure) can be made proportional to any positive power of the asymptotic parameter $\epsilon$. Thus, we may embed $E = [-a\epsilon^L, -b\epsilon^\ell] \cup [c\epsilon^r, d]$ in

$$\hat E = [-a\epsilon^L, -b\epsilon^\ell + \delta] \cup [c\epsilon^r, d + \delta] \qquad (4.3)$$

where $\delta = O(\epsilon^{r+\ell})$, with $\hat E$ satisfying the conditions of Proposition 4.1. The outcome is that, for a set such as (4.2), we may assume that the optimal polynomial has k + 2 extreme points by ignoring higher order asymptotic quantities when k is large enough. For a more precise mathematical description in a particular case, see [25].

[Figure 1: Form of the optimal polynomial]

Our analysis now takes an unusual turn: we derive an ordinary differential equation initial value problem satisfied by the optimal polynomial. The characterisation in terms of extreme points implies that the (degree k) optimal polynomial, p, has the form shown in figure 1, where a degree 7 polynomial is displayed.
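Since figure 1 itself is not reproduced here, the following sketch (ours, on an arbitrary pair of intervals) recovers the shape of the optimal polynomial numerically by discretising (2.2) and solving the resulting linear program; the printed grid points show the equioscillation pattern, with the maximal error attained at approximately k + 1 or k + 2 points depending on whether condition (4.1) holds:

import numpy as np
from scipy.optimize import linprog

k = 7
E = np.concatenate([np.linspace(-1.0, -0.2, 400), np.linspace(0.1, 1.2, 400)])
V = np.vander(E, k + 1, increasing=True)   # columns 1, x, ..., x^k

# variables: coefficients c_0..c_k and the bound t; minimise t subject to
# |p(x_i)| <= t on the grid and the constraint p(0) = c_0 = 1
A_ub = np.block([[ V, -np.ones((len(E), 1))],
                 [-V, -np.ones((len(E), 1))]])
b_ub = np.zeros(2 * len(E))
A_eq = np.zeros((1, k + 2)); A_eq[0, 0] = 1.0
cost = np.zeros(k + 2); cost[-1] = 1.0
sol = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None)] * (k + 2))
coeffs, eta_k = sol.x[:-1], sol.x[-1]
p = V @ coeffs
print("eta_k ~", eta_k)
print("approximate extremal abscissae:", E[np.abs(np.abs(p) - eta_k) < 1e-6])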

Noting that $p = \pm\eta_k$ at the $k-2$ interior points where the derivative $p'$ vanishes, as well as at the four end points, we have

$$k^2 (x-t)^2\,\big(p^2(x) - \eta_k^2\big) = (x + a\epsilon^L)(x - d)(x + b\epsilon^\ell)(x - c\epsilon^r)\,(p'(x))^2 \qquad (4.4)$$

where t is the local maximum between $-b\epsilon^\ell$ and $c\epsilon^r$, and the scaling factor of $k^2$ arises from matching the (leading) coefficient of $x^{2k+2}$. Taking the appropriate square root of (4.4) and dropping explicit dependencies, for $x \in (-b\epsilon^\ell, c\epsilon^r)$ we have

$$k(x-t)\,(p^2 - \eta_k^2)^{\frac12} = \big[(d-x)(x + a\epsilon^L)(x + b\epsilon^\ell)(c\epsilon^r - x)\big]^{\frac12}\left(-\frac{dp}{dx}\right). \qquad (4.5)$$

If we now rewrite this as

$$\frac{dp}{dx} = \frac{-k(x-t)\,(p^2 - \eta_k^2)^{\frac12}}{\big[(d-x)(x + a\epsilon^L)(x + b\epsilon^\ell)(c\epsilon^r - x)\big]^{\frac12}} = f(x,p), \qquad (4.6)$$

then f(x, p) is continuous in x and Lipschitz continuous in p for $x \in (-b\epsilon^\ell, c\epsilon^r)$, as $p > \eta_k$ on this interval. Thus, using the classical Picard existence theorem for ordinary differential equations, (4.5) has a unique solution in this interval which satisfies the `initial' condition p(0) = 1 for any bounded value of t. (Note also that the solution is a polynomial if and only if the condition of Proposition 4.1 is satisfied.)

Having derived the characterising differential equation, we now seek an asymptotic expression for $\rho$ (not p nor even $\eta_k$!) for small $\epsilon$. Our minimum residual convergence estimate will then follow using (2.1) and (2.2). For the analogous derivation of an ordinary differential equation in the two simpler cases of a single interval and equal length intervals as in the previous section, see [8]. We should also point out that the path followed above is a well-trodden one: Chebyshev originally computed "his polynomials" by solving the associated ordinary differential equation in the single interval case.

5 Asymptotic solutions

Our analysis now becomes more technical, though we are in fact simply attempting to find the solution of the first order ordinary differential equation (4.5) by using the elementary `separation of variables' technique. Directly this gives

$$\int_y^z \frac{(x-t)\,dx}{\big[(d-x)(x + a\epsilon^L)(x + b\epsilon^\ell)(c\epsilon^r - x)\big]^{\frac12}} = \frac{1}{k}\,\log_e \frac{\big|p(y) + [p(y)^2 - \eta_k^2]^{\frac12}\big|}{\big|p(z) + [p(z)^2 - \eta_k^2]^{\frac12}\big|} \qquad (5.1)$$

for any $-b\epsilon^\ell < y \le z < c\epsilon^r$.
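As a sanity check on this separable structure (our addition, not in the report): in the single interval case mentioned at the end of section 4, the analogue of (4.4) is the classical identity $k^2(1 - T_k(x)^2) = (1 - x^2)\,T_k'(x)^2$ satisfied by the Chebyshev polynomial, which is easily verified numerically:

import numpy as np
from numpy.polynomial import chebyshev as C

k = 7
Tk = C.Chebyshev.basis(k)
dTk = Tk.deriv()
x = np.linspace(-1, 1, 1001)
lhs = k**2 * (1 - Tk(x)**2)
rhs = (1 - x**2) * dTk(x)**2
print(np.max(np.abs(lhs - rhs)))   # ~ round-off level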

Throughout the analysis we will only be interested in the solution between these two interval end points nearest the origin, and so need not concern ourselves with the many singularities. We see that the left hand side of (5.1) is an elliptic integral involving the unknown point t. Our two problems now are therefore to estimate t and to simplify the elliptic integral using asymptotics.

To do the second of these we note that for any $x \in [-b\epsilon^\ell, c\epsilon^r]$

$$(d - c\epsilon^r)(a\epsilon^L - b\epsilon^\ell) \le (d-x)(x + a\epsilon^L) \le (d + b\epsilon^\ell)(a\epsilon^L + c\epsilon^r) \qquad (5.2)$$

so that, taking reciprocals and square roots and using simple binomial expansions, we have

$$1\big/\sqrt{(d-x)(x + a\epsilon^L)} = (ad\epsilon^L)^{-\frac12} + O\big(\epsilon^{\min\{\ell - 3L/2,\; r - 3L/2\}}\big) \qquad (5.3)$$

since both the upper and lower bounds in (5.2) have this asymptotic form. Note that our assumptions $L < \ell$, $L < r$ are necessary here to ensure that $(ad\epsilon^L)^{-\frac12}$ is the leading (lowest order) asymptotic term. The complexity of the higher order terms will not be important.

Use of (5.3) in (5.1) will leave integrals for which there is the following explicit formula (see [1]):

$$\int_y^z \frac{(x-t)\,dx}{\big[(x + b\epsilon^\ell)(c\epsilon^r - x)\big]^{\frac12}} = \big[(y + b\epsilon^\ell)(c\epsilon^r - y)\big]^{\frac12} - \big[(z + b\epsilon^\ell)(c\epsilon^r - z)\big]^{\frac12} + \left(\frac{c\epsilon^r - b\epsilon^\ell}{2} - t\right)\left(\arcsin\frac{c\epsilon^r - b\epsilon^\ell - 2y}{c\epsilon^r + b\epsilon^\ell} - \arcsin\frac{c\epsilon^r - b\epsilon^\ell - 2z}{c\epsilon^r + b\epsilon^\ell}\right). \qquad (5.4)$$

Now since $p \searrow \eta_k$ both as $y \searrow -b\epsilon^\ell$ and as $z \nearrow c\epsilon^r$, it follows from (5.1) that

$$\lim_{y\searrow -b\epsilon^\ell,\; z\nearrow c\epsilon^r}\ \int_y^z \frac{(x-t)\,dx}{\big[(d-x)(x + a\epsilon^L)(x + b\epsilon^\ell)(c\epsilon^r - x)\big]^{\frac12}} = 0. \qquad (5.5)$$

The asymptotic formula (5.3) together with (5.4) then gives

$$t = \tfrac12(c\epsilon^r - b\epsilon^\ell) + O(\epsilon^{r+\ell}) \qquad (5.6)$$

since in the limit the first two terms of (5.4) vanish and the trigonometric terms tend to $\pi$. We see that (5.6) is consistent with t = 0 in the case $r = \ell$ and b = c: that is, when the intervals are symmetrically placed about the origin, the turning point t coincides with the origin. (See the exact description (3.4) in terms of Chebyshev polynomials when also L = 0 and a = d.) In any case, provided that $L < \min\{\ell, r\}$, t lies approximately half way between $c\epsilon^r$ and $-b\epsilon^\ell$.
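The estimate (5.6) can be checked directly (our sketch, with illustrative constants and exponents): the substitution $x = \frac12(c\epsilon^r - b\epsilon^\ell) + \frac12(c\epsilon^r + b\epsilon^\ell)\sin\theta$ absorbs the square-root singularities in (5.5), leaving a smooth equation for t:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a = b = c = d = 1.0
L, l, r = 0.0, 1.0, 2.0
eps = 0.01
B, Cr = b * eps**l, c * eps**r
mid, w = (Cr - B) / 2, (Cr + B) / 2

def F(t):
    # condition (5.5) after the sine substitution: the factor
    # sqrt((x + B)(Cr - x)) becomes w*cos(theta) and cancels with dx
    def g(theta):
        x = mid + w * np.sin(theta)
        return (x - t) / np.sqrt((d - x) * (x + a * eps**L))
    return quad(g, -np.pi / 2, np.pi / 2)[0]

t = brentq(F, -B, Cr)
print(t, "predicted:", mid)   # agree up to the higher order correction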

Our analysis is completed by integrating (5.1) from the origin to the nearer of $c\epsilon^r$ or $-b\epsilon^\ell$. Strictly speaking, we integrate from 0 to z and take the limit as $z \nearrow c\epsilon^r$, or from y to 0 and take the limit as $y \searrow -b\epsilon^\ell$. Whichever of these two integrals we take, the result is

$$(ad\epsilon^L)^{-\frac12}\,(bc\,\epsilon^{r+\ell})^{\frac12}\,\big(1 + O(\epsilon^{r+\ell})\big) = \log_e \frac{\big|1 + [1 - \eta_k^2]^{\frac12}\big|^{\frac1k}}{\eta_k^{\frac1k}}. \qquad (5.7)$$

Now $\big|1 + [1 - \eta_k^2]^{\frac12}\big|^{\frac1k} \to 1$ as $k \to \infty$, since $0 \le 1 - \eta_k^2 \le 1$ for all k, so negating and exponentiating gives

$$\lim_{k\to\infty} \eta_k^{\frac1k} = \exp\left(-\epsilon^{(r+\ell-L)/2}\sqrt{bc/ad} + O(\epsilon^{r+\ell-L/2})\right).$$

The outcome is:

Theorem 5.1 The asymptotic convergence factor $\rho$ when the minimum residual method is applied to a symmetric indefinite linear system with eigenvalues lying in an inclusion region of the form (4.2) satisfies

$$\rho = 1 - \epsilon^{(r+\ell-L)/2}\sqrt{bc/ad} + O(\epsilon^{r+\ell-L/2}). \qquad (5.8)$$

In order to compare the result of Theorem 5.1 with the results of section 3, which involve Chebyshev polynomials, we must embed E in the union of (larger) equal length intervals so that (3.5) can be applied. If $L \neq 0$ one obvious way to do this is to take

$$E \subseteq \hat E = [-d - b\epsilon^\ell + c\epsilon^r, -b\epsilon^\ell] \cup [c\epsilon^r, d] \qquad (5.9)$$

for which (3.5) gives

$$\eta_k \le 2\left(\frac{\sqrt{d^2 + O(\epsilon^{\min\{\ell,r\}})} - \sqrt{bc\,\epsilon^{r+\ell}}}{\sqrt{d^2 + O(\epsilon^{\min\{\ell,r\}})} + \sqrt{bc\,\epsilon^{r+\ell}}}\right)^{\lfloor k/2\rfloor}. \qquad (5.10)$$

Taking the limit as $k \to \infty$ leads to a corresponding asymptotic convergence factor

$$\rho \le \left(1 - 2\sqrt{\frac{bc}{d^2}}\,\epsilon^{(r+\ell)/2} + O(\epsilon^{\min\{r+\ell/2,\; r/2+\ell\}})\right)^{\frac12} = 1 - \sqrt{\frac{bc}{d^2}}\,\epsilon^{(r+\ell)/2} + O(\epsilon^{\min\{r+\ell/2,\; r/2+\ell\}})$$

which is seen to be inferior to (5.8).
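To see the gap between the two estimates numerically (our illustration, using the exponents of the first example below with all constants a, b, c, d set to 1):

L, l, r = 3.0, 4.0, 5.0
for eps in [0.5, 0.3, 0.1]:
    dev_thm = eps ** ((r + l - L) / 2)   # 1 - rho from (5.8)
    dev_cheb = eps ** ((r + l) / 2)      # 1 - rho from the (3.5)-based bound
    print(f"eps = {eps}: 1 - rho = {dev_thm:.2e} (Thm 5.1) vs {dev_cheb:.2e} (3.5)")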

If L = 0 then it matters which of a or d is bigger, and the corresponding estimate is

$$\rho \le 1 - \sqrt{\frac{bc}{\max\{a^2, d^2\}}}\,\epsilon^{(r+\ell)/2} + O(\epsilon^{\min\{r+\ell/2,\; r/2+\ell\}}) \qquad (5.11)$$

which is seen to have the same asymptotic form as (5.8) but with a smaller constant.

We now consider two simple examples which illustrate the results given here. Firstly, for simple illustration, we use the example of an indefinite matrix whose eigenvalues satisfy $\lambda \in [-\epsilon, -\epsilon^2] \cup [\epsilon^3, \epsilon^{-2}]$. Applying the simple linear mapping illustrated in section 4, the convergence of the minimum residual method for a problem of this type will be identical (excluding roundoff effects) to that for a problem with $\lambda \in [-\epsilon^3, -\epsilon^4] \cup [\epsilon^5, 1]$. For such a problem Theorem 5.1 gives that the residual vectors satisfy

$$\frac{\|r_k\|}{\|r_0\|} \le \big(1 - O(\epsilon^3)\big)^k \qquad (5.12)$$

whereas use of (3.5) gives

$$\frac{\|r_k\|}{\|r_0\|} \le \big(1 - O(\epsilon^{9/2})\big)^k. \qquad (5.13)$$

There is of course an advantage of (3.5) in that it is based on an estimate which holds at every iteration, whereas (5.8) is only established for large k.

A more practical example comes from mixed finite element approximation of the Stokes problem, which describes the slow flow of an incompressible fluid. See for example [11] for a general description, or [8] for a treatment more closely related to the presentation given here. For a discretisation employing a stable element on a quasi-uniform grid in a 2-dimensional flow domain, we have that the eigenvalues of the unscaled Stokes matrix satisfy $\lambda \in [-ah, -bh^2] \cup [ch^2, d]$ for some positive constants a, b, c and d, where h is a representative mesh size. Use of Theorem 5.1 then bounds the convergence of the minimum residual method by

$$\frac{\|r_k\|}{\|r_0\|} \le \big(1 - O(h^{3/2})\big)^k \qquad (5.14)$$

whereas (3.5) gives

$$\frac{\|r_k\|}{\|r_0\|} \le \big(1 - O(h^2)\big)^k \qquad (5.15)$$

which is the same estimate as would be derived for iterative solution of the normal equations using the conjugate gradient method. The estimate (5.14) clearly demonstrates the superiority of the minimum residual method for such problems.
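The practical difference between (5.14) and (5.15) is easiest to see through the implied iteration counts for a fixed residual reduction; the following back-of-envelope sketch (ours, with all constants set to 1) uses $k \approx \log(\mathrm{tol})/\log(\rho)$:

import math
for h in [1/16, 1/32, 1/64, 1/128]:
    k_minres = math.log(1e-6) / math.log(1 - h**1.5)   # from (5.14)
    k_normal = math.log(1e-6) / math.log(1 - h**2)     # from (5.15)
    print(f"h = {h:.5f}: ~{k_minres:7.0f} vs ~{k_normal:8.0f} iterations")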

It should be mentioned in this example, however, that appropriate diagonal scaling of the Stokes matrix as described in [26] gives eigenvalues $\lambda \in [-a, -bh] \cup [ch^2, d]$, for which (5.8) and (3.5) yield similar results. See [25] for numerical results which demonstrate the descriptiveness of our convergence estimate for the Stokes problem.

6 Conclusion

We have derived convergence estimates for the minimum residual method applied to symmetric and indefinite matrix systems where the eigenvalues of the coefficient matrix depend on an asymptotically small parameter, such as the mesh size in a differential equation setting. Our estimates are an improvement on the commonly used estimate based on equal length intervals in many cases of practical interest.

References

[1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions," Dover, New York.
[2] N.I. Achieser, "Über einige Funktionen, welche in zwei gegebenen Intervallen am wenigsten von Null abweichen, I. Teil," Bull. Acad. Sci. URSS S.VII 9 (1932).
[3] D.N. Arnold, R.S. Falk and R. Winther, "Preconditioning in H(div) and applications," Math. Comput. 66 (1997).
[4] O. Axelsson, "Iterative Solution Methods," Cambridge University Press.
[5] O. Axelsson and V.A. Barker, "Finite Element Solution of Boundary Value Problems: Theory and Computation," Academic Press, New York.
[6] T.A. Driscoll, K.-C. Toh and L.N. Trefethen, "From potential theory to matrix iterations in six steps," SIAM Review, to appear.
[7] H.C. Elman, "Multigrid and Krylov subspace methods for the discrete Stokes equations," Int. J. Numer. Meth. Fluids (1996).
[8] H.C. Elman, D.J. Silvester and A.J. Wathen, "Iterative methods for problems in computational fluid dynamics," in `Iterative Methods in Scientific

Computing', Eds. R.H. Chan, T.F. Chan and G.H. Golub, Springer-Verlag, Singapore.
[9] B. Fischer, "Polynomial Based Iteration Methods for Symmetric Linear Systems," Wiley-Teubner, Chichester.
[10] A. Greenbaum, "Iterative Methods for Solving Linear Systems," SIAM, Philadelphia.
[11] M. Gunzburger, "Finite Element Methods for Viscous Incompressible Flows: A Guide to Theory, Practice and Algorithms," Academic Press, London.
[12] W. Hackbusch, "Iterative Solution of Large Sparse Systems of Equations," Springer-Verlag, Berlin.
[13] M.R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," J. Res. Nat. Bur. Stand. 49 (1952).
[14] A. Klawonn, "An optimal preconditioner for a class of saddle point problems with a penalty term," SIAM J. Sci. Comput., to appear.
[15] V.I. Lebedev, "An iteration method for the solution of operator equations with their spectrum lying on several intervals," USSR Comput. Math. and Math. Phys. 9 (1969).
[16] J. Maryska, M. Rozložník and M. Tuma, "The potential fluid flow problem and the convergence rate of the minimum residual method," Numer. Linear Alg. Appl. 3 (1996).
[17] G. Meinardus, "Approximation of Functions: Theory and Numerical Methods," Springer, New York.
[18] C.C. Paige and M.A. Saunders, "Solution of sparse indefinite systems of linear equations," SIAM J. Numer. Anal. 12 (1975).
[19] L.F. Pavarino, "Preconditioned mixed spectral element methods for elasticity and Stokes problems," SIAM J. Sci. Comput., to appear.
[20] T. Rusten and R. Winther, "A preconditioned iterative method for saddle point problems," SIAM J. Matrix Anal. Appl. 13 (1992).
[21] Y. Saad, "Iterative Methods for Sparse Linear Systems," PWS, Boston.
[22] D.J. Silvester and A.J. Wathen, "Fast iterative solution of stabilised Stokes systems Part II: using general block preconditioners," SIAM J. Numer. Anal. 31 (1994).

[23] G.L.G. Sleijpen, H.A. van der Vorst and J. Modersitzki, "The main effects of rounding errors in Krylov solvers for symmetric linear systems," University of Utrecht, Department of Mathematics, Preprint 1006 (1997).
[24] R.S. Varga, "Matrix Iterative Analysis," Prentice-Hall, Englewood Cliffs.
[25] A.J. Wathen, B. Fischer and D.J. Silvester, "The convergence rate of the minimum residual method for the Stokes problem," Numer. Math. 71 (1995).
[26] A.J. Wathen and D.J. Silvester, "Fast iterative solution of stabilised Stokes systems Part I: using simple diagonal preconditioners," SIAM J. Numer. Anal. 30 (1993).
[27] A.J. Wathen and E.P. Stephan, "Convergence of preconditioned minimum residual iteration for coupled finite element/boundary element computations," Bristol University, Mathematics Dept report AM (1994).
[28] E.T. Whittaker and G.N. Watson, "A Course of Modern Analysis," Cambridge University Press.

A.J. Wathen
Oxford University Computing Laboratory
Wolfson Building, Parks Road
Oxford OX1 3QD
United Kingdom

B. Fischer
Institute of Mathematics
Medical University of Luebeck
Wallstrasse
Luebeck, Germany

D.J. Silvester
Mathematics Department, UMIST
PO Box 88, Manchester M60 1QD
United Kingdom


Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Timo Heister, Texas A&M University 2013-02-28 SIAM CSE 2 Setting Stationary, incompressible flow problems

More information

On solving linear systems arising from Shishkin mesh discretizations

On solving linear systems arising from Shishkin mesh discretizations On solving linear systems arising from Shishkin mesh discretizations Petr Tichý Faculty of Mathematics and Physics, Charles University joint work with Carlos Echeverría, Jörg Liesen, and Daniel Szyld October

More information

Fast Solvers for Unsteady Incompressible Flow

Fast Solvers for Unsteady Incompressible Flow ICFD25 Workshop p. 1/39 Fast Solvers for Unsteady Incompressible Flow David Silvester University of Manchester http://www.maths.manchester.ac.uk/~djs/ ICFD25 Workshop p. 2/39 Outline Semi-Discrete Navier-Stokes

More information

A smooth variant of the fictitious domain approach

A smooth variant of the fictitious domain approach A smooth variant of the fictitious domain approach J. Haslinger, T. Kozubek, R. Kučera Department of umerical Mathematics, Charles University, Prague Department of Applied Mathematics, VŠB-TU, Ostrava

More information

Numerical Integration exact integration is not needed to achieve the optimal convergence rate of nite element solutions ([, 9, 11], and Chapter 7). In

Numerical Integration exact integration is not needed to achieve the optimal convergence rate of nite element solutions ([, 9, 11], and Chapter 7). In Chapter 6 Numerical Integration 6.1 Introduction After transformation to a canonical element,typical integrals in the element stiness or mass matrices (cf. (5.5.8)) have the forms Q = T ( )N s Nt det(j

More information

Termination criteria for inexact fixed point methods

Termination criteria for inexact fixed point methods Termination criteria for inexact fixed point methods Philipp Birken 1 October 1, 2013 1 Institute of Mathematics, University of Kassel, Heinrich-Plett-Str. 40, D-34132 Kassel, Germany Department of Mathematics/Computer

More information

Jos L.M. van Dorsselaer. February Abstract. Continuation methods are a well-known technique for computing several stationary

Jos L.M. van Dorsselaer. February Abstract. Continuation methods are a well-known technique for computing several stationary Computing eigenvalues occurring in continuation methods with the Jacobi-Davidson QZ method Jos L.M. van Dorsselaer February 1997 Abstract. Continuation methods are a well-known technique for computing

More information

Comparison of Fixed Point Methods and Krylov Subspace Methods Solving Convection-Diffusion Equations

Comparison of Fixed Point Methods and Krylov Subspace Methods Solving Convection-Diffusion Equations American Journal of Computational Mathematics, 5, 5, 3-6 Published Online June 5 in SciRes. http://www.scirp.org/journal/ajcm http://dx.doi.org/.436/ajcm.5.5 Comparison of Fixed Point Methods and Krylov

More information

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE BRANKO CURGUS and BRANKO NAJMAN Denitizable operators in Krein spaces have spectral properties similar to those of selfadjoint operators in Hilbert spaces.

More information

INCOMPLETE FACTORIZATION CONSTRAINT PRECONDITIONERS FOR SADDLE-POINT MATRICES

INCOMPLETE FACTORIZATION CONSTRAINT PRECONDITIONERS FOR SADDLE-POINT MATRICES INCOMPLEE FACORIZAION CONSRAIN PRECONDIIONERS FOR SADDLE-POIN MARICES H. S. DOLLAR AND A. J. WAHEN Abstract. We consider the application of the conjugate gradient method to the solution of large symmetric,

More information

Finite element approximation on quadrilateral meshes

Finite element approximation on quadrilateral meshes COMMUNICATIONS IN NUMERICAL METHODS IN ENGINEERING Commun. Numer. Meth. Engng 2001; 17:805 812 (DOI: 10.1002/cnm.450) Finite element approximation on quadrilateral meshes Douglas N. Arnold 1;, Daniele

More information

MULTIGRID PRECONDITIONING IN H(div) ON NON-CONVEX POLYGONS* Dedicated to Professor Jim Douglas, Jr. on the occasion of his seventieth birthday.

MULTIGRID PRECONDITIONING IN H(div) ON NON-CONVEX POLYGONS* Dedicated to Professor Jim Douglas, Jr. on the occasion of his seventieth birthday. MULTIGRID PRECONDITIONING IN H(div) ON NON-CONVEX POLYGONS* DOUGLAS N ARNOLD, RICHARD S FALK, and RAGNAR WINTHER Dedicated to Professor Jim Douglas, Jr on the occasion of his seventieth birthday Abstract

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

XIAO-CHUAN CAI AND MAKSYMILIAN DRYJA. strongly elliptic equations discretized by the nite element methods.

XIAO-CHUAN CAI AND MAKSYMILIAN DRYJA. strongly elliptic equations discretized by the nite element methods. Contemporary Mathematics Volume 00, 0000 Domain Decomposition Methods for Monotone Nonlinear Elliptic Problems XIAO-CHUAN CAI AND MAKSYMILIAN DRYJA Abstract. In this paper, we study several overlapping

More information

Two Results About The Matrix Exponential

Two Results About The Matrix Exponential Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about

More information

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, SERIES B Volume 5, Number 1-2, Pages 21 30 c 2014 Institute for Scientific Computing and Information A SPARSE APPROXIMATE INVERSE PRECONDITIONER

More information

STOPPING CRITERIA FOR MIXED FINITE ELEMENT PROBLEMS

STOPPING CRITERIA FOR MIXED FINITE ELEMENT PROBLEMS STOPPING CRITERIA FOR MIXED FINITE ELEMENT PROBLEMS M. ARIOLI 1,3 AND D. LOGHIN 2 Abstract. We study stopping criteria that are suitable in the solution by Krylov space based methods of linear and non

More information

Taylor series based nite dierence approximations of higher-degree derivatives

Taylor series based nite dierence approximations of higher-degree derivatives Journal of Computational and Applied Mathematics 54 (3) 5 4 www.elsevier.com/locate/cam Taylor series based nite dierence approximations of higher-degree derivatives Ishtiaq Rasool Khan a;b;, Ryoji Ohba

More information

Iterative Methods for Problems in Computational Fluid. Howard C. Elman. University of Maryland. David J. Silvester. University of Manchester

Iterative Methods for Problems in Computational Fluid. Howard C. Elman. University of Maryland. David J. Silvester. University of Manchester Report no. 96/19 Iterative Methods for Problems in Computational Fluid Dynamics Howard C. Elman University of Maryland David J. Silvester University of Manchester Andrew J. Wathen Oxford University We

More information