
SIAM J. MATRIX ANAL. APPL. Vol. 26, No. 2, pp. 377–389
© 2004 Society for Industrial and Applied Mathematics

SPECTRAL PROPERTIES OF THE HERMITIAN AND SKEW-HERMITIAN SPLITTING PRECONDITIONER FOR SADDLE POINT PROBLEMS*

VALERIA SIMONCINI† AND MICHELE BENZI‡

Abstract. In this paper we derive bounds on the eigenvalues of the preconditioned matrix that arises in the solution of saddle point problems when the Hermitian and skew-Hermitian splitting preconditioner is employed. We also give sufficient conditions for the eigenvalues to be real. A few numerical experiments are used to illustrate the quality of the bounds.

Key words. saddle point problems, iterative methods, preconditioning, eigenvalues

AMS subject classifications. 65F10, 65N22, 65F50, 15A06

DOI. 10.1137/S0895479803434926

*Received by the editors September 7, 2003; accepted for publication (in revised form) by D. Szyld December 9, 2003; published electronically November 7, 2004. http://www.siam.org/journals/simax/26-2/43492.html
†Dipartimento di Matematica, Università di Bologna, P.zza di Porta S. Donato 5, I-40127 Bologna, Italy, and IMATI-CNR, Pavia, Italy (valeria@dm.unibo.it).
‡Department of Mathematics and Computer Science, Emory University, Atlanta, GA 30322 (benzi@mathcs.emory.edu). The work of this author was supported in part by National Science Foundation grant DMS-0207599.

1. Introduction. We are given the saddle point problem

(1.1)  $\begin{bmatrix} A & B^T \\ -B & 0 \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix}$, or $\mathcal{A}x = b$,

with $A \in \mathbb{R}^{n\times n}$ symmetric positive semidefinite and $B \in \mathbb{R}^{m\times n}$ with $\mathrm{rank}(B) = m \le n$. We assume that the null spaces of $A$ and $B$ have trivial intersection, which implies that $\mathcal{A}$ is nonsingular. We set

$H = \begin{bmatrix} A & 0 \\ 0 & 0 \end{bmatrix}, \qquad S = \begin{bmatrix} 0 & B^T \\ -B & 0 \end{bmatrix},$

so that $\mathcal{A} = H + S$. We consider the preconditioner $P = \frac{1}{2\alpha}(H + \alpha I)(S + \alpha I)$, with real $\alpha > 0$, and we study the eigenvalue problem associated with the preconditioned matrix, that is,

(1.2)  $2\alpha\,(H+S)x = \eta\,(H+\alpha I)(S+\alpha I)x.$

This preconditioner has been studied in a somewhat more general setting in [4], motivated by the paper [1]. Letting $D(1,1) := \{z \in \mathbb{C} : |z-1| < 1\}$, it was shown in [4] that the spectrum of the preconditioned matrix satisfies $\sigma(P^{-1}\mathcal{A}) \subset \overline{D}(1,1)\setminus\{0\}$, where $\overline{D}(1,1)$ denotes the closure of $D(1,1)$. Furthermore, $\sigma(P^{-1}\mathcal{A}) \subset D(1,1)$ if $A$ is positive definite. Some rather special cases (including the case $A = I$) have been studied in [2, 3]. The purpose of this paper is to provide more refined inclusion regions for the spectrum of $P^{-1}\mathcal{A}$ for saddle point problems of the form (1.1). Most of our bounds are in terms of the extreme eigenvalues and singular values of the blocks $A$ and $B$, respectively. Although these quantities may be difficult to estimate, our results can be used to explain why small values of $\alpha$ usually give the best results in terms of convergence rates. For instance, we show

that sufficiently small values of $\alpha$ always result in preconditioned matrices having a real spectrum consisting of two tight clusters.

Throughout the paper, we write $M^T$ for the transpose of a matrix $M$ and $u^*$ for the conjugate transpose of a complex vector $u$. Also, $A > 0$ ($A \ge 0$) means that the matrix $A$ is symmetric positive definite (respectively, semidefinite).

2. Spectral bounds. In this section we provide bounds for the eigenvalues of the preconditioned matrix. In the following we shall use the fact that $A$ is symmetric positive semidefinite, so that

(2.1)  $0 \le \lambda_n \le \dfrac{u^*Au}{u^*u} \le \lambda_1, \qquad u \in \mathbb{C}^n,\ u \neq 0,$

where $\lambda_n, \lambda_1$ are the smallest and largest eigenvalues of $A$, respectively. Moreover, we denote by $\sigma_1 \ge \cdots \ge \sigma_m$ the decreasingly ordered singular values of $B$.

The spectrum of the preconditioned matrix can be more easily analyzed by means of a particular spectral mapping, which we introduce next. We shall then derive estimates for the location of the eigenvalues of (1.2). We first observe that $(H+\alpha I)(S+\alpha I) = HS + \alpha(H+S) + \alpha^2 I$. By collecting the terms with $H+S$ we can write the eigenvalue problem (1.2) as

(2.2)  $(2-\eta)\,\alpha(H+S)x = \eta\,(\alpha^2 I + HS)x.$

If $2-\eta = 0$, then $\eta = 2$. For $\eta \neq 2$ we set

(2.3)  $\theta := \dfrac{\alpha\eta}{2-\eta}$, from which $\eta = \dfrac{2\theta}{\theta+\alpha}$.

Therefore, (2.2) can be written as $(H+S)x = \theta\,(I + \alpha^{-2}HS)x$. By explicitly writing the term $HS$, the eigenproblem above becomes

$\begin{bmatrix} A & B^T \\ -B & 0 \end{bmatrix}x = \theta\begin{bmatrix} I & \alpha^{-2}AB^T \\ 0 & I \end{bmatrix}x$, or $\mathcal{A}x = \theta Gx$, where $G := \begin{bmatrix} I & \alpha^{-2}AB^T \\ 0 & I \end{bmatrix}$.

The equivalent eigenproblem $G^{-1}\mathcal{A}x = \theta x$ can be explicitly written as

(2.4)  $\begin{bmatrix} A + \alpha^{-2}AB^TB & B^T \\ -B & 0 \end{bmatrix}x = \theta x.$

Therefore, the two eigenproblems (1.2) and (2.4) have the same eigenvectors, while the eigenvalues are related by (2.3). Our spectral analysis aims at describing the behavior of the spectrum of $G^{-1}\mathcal{A}$, from which considerations on the spectrum of (1.2) can be derived. In the following, $\Im(\theta)$ and $\Re(\theta)$ denote the imaginary and real part of $\theta$, respectively.

Lemma 2.1. Assume $A$ is symmetric and positive semidefinite. Let $K := I + \alpha^{-2}B^TB$. For each eigenpair $(\eta, [u; v])$ of (1.2), either $\eta = 2$ or $\eta$ can be written as $\eta = 2\theta/(\theta+\alpha)$, where $\theta \neq 0$ satisfies the following:

1. If $\Im(\theta) \neq 0$, then

(2.5)  $\Re(\theta) = \dfrac{1}{2}\,\dfrac{u^*KAKu}{u^*Ku}, \qquad |\theta|^2 = \dfrac{u^*KB^TBu}{u^*Ku}.$

2. If $\Im(\theta) = 0$, then

$\min\left\{\lambda_n,\ \dfrac{\alpha^2\sigma_m^2}{\lambda_1(\alpha^2+\sigma_m^2)}\right\} \le \theta \le \rho$, where $\rho := \lambda_1\left(1 + \dfrac{\sigma_1^2}{\alpha^2}\right)$.

Proof. The first statement of the lemma was already shown by means of the mapping in (2.3). We are thus left with proving the estimates for $\theta$. First of all, note that $\theta \neq 0$, or else $\eta = 0$, which is not possible since $P^{-1}\mathcal{A}$ is nonsingular. Let $x = [u; v] \neq 0$ be the complex eigenvector associated with $\theta$. We explicitly observe that $K = I + \alpha^{-2}B^TB$ is symmetric positive definite and that $KB^TB$ is symmetric. We shall make use of the following properties of $K$:

(2.6)  $\lambda_{\max}(K) = 1 + \dfrac{\sigma_1^2}{\alpha^2}, \qquad \lambda_{\min}(K) \ge 1,$

where the inequality becomes an equality whenever $B$ is not square. In addition,

(2.7)  $\lambda_n \le \dfrac{u^*KAKu}{u^*K^2u} \le \lambda_1,$

and, using $KB^TB = \alpha^2(K^2 - K)$,

(2.8)  $0 \le \dfrac{u^*KB^TBu}{u^*K^2u} = \alpha^2\left(1 - \dfrac{u^*Ku}{u^*K^2u}\right) < \alpha^2.$

The two matrix equations in (2.4) are given by

(2.9)  $(A + \alpha^{-2}AB^TB)u + B^Tv = \theta u,$
(2.10)  $-Bu = \theta v.$

It must be $u \neq 0$; otherwise (2.10) would imply $\theta = 0$ or $v = 0$, neither of which can be satisfied. For $u \neq 0$ and $v = 0$, from (2.9), $\theta$ must satisfy $AKu = \theta u$ and $Bu = 0$. Since $K$ is symmetric and positive definite, we can write

$K^{1/2}AK^{1/2}\hat u = \theta \hat u, \qquad \hat u = K^{1/2}u,$

from which it follows that $\theta$ is real and satisfies $0 < \theta \le \lambda_1\lambda_{\max}(K) = \lambda_1\lambda_{\max}(I + \alpha^{-2}B^TB) = \lambda_1(1 + \sigma_1^2/\alpha^2) = \rho$. We now assume $u \neq 0 \neq v$. Using (2.10), we write $v = -\theta^{-1}Bu$, which, substituted into (2.9), yields

$\theta A(I + \alpha^{-2}B^TB)u - B^TBu = \theta^2 u.$

By multiplying this equation from the left by $u^*K$ we obtain

(2.11)  $\theta\, u^*KAKu - u^*KB^TBu = \theta^2\, u^*Ku.$
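The identity (2.11) can be checked numerically for every computed eigenpair of (2.4). The following is a small illustrative sketch (the sizes, the value of $\alpha$, and the random data are made up, and $A$ is taken positive definite for simplicity), not part of the paper's experiments:

```python
import numpy as np

# Sketch: verify identity (2.11) for each eigenpair of (2.4) on a
# small synthetic problem (hypothetical data, not the paper's tests).
rng = np.random.default_rng(3)
n, m, alpha = 6, 2, 0.4
M = rng.standard_normal((n, n))
A = M @ M.T + 0.1 * np.eye(n)            # symmetric positive definite
B = rng.standard_normal((m, n))
K = np.eye(n) + B.T @ B / alpha**2       # K = I + B^T B / alpha^2

T = np.zeros((n + m, n + m))             # matrix of eigenproblem (2.4)
T[:n, :n] = A @ K                        # A + alpha^{-2} A B^T B = A K
T[:n, n:] = B.T
T[n:, :n] = -B

theta, X = np.linalg.eig(T)
for j in range(n + m):
    t, u = theta[j], X[:n, j]
    lhs = t * (u.conj() @ K @ A @ K @ u) - u.conj() @ K @ B.T @ B @ u
    rhs = t**2 * (u.conj() @ K @ u)
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(rhs))   # (2.11) holds
```

Note that the $(1,1)$ block of (2.4) is exactly $AK$, which the sketch exploits.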

Let $\theta = \theta_R + \imath\theta_I$. For $A$ symmetric, the quadratic equation (2.11) has real coefficients, so that its roots are given by

(2.12)  $\theta_\pm = \dfrac{1}{2}\dfrac{u^*KAKu}{u^*Ku} \pm \dfrac{1}{2}\sqrt{\left(\dfrac{u^*KAKu}{u^*Ku}\right)^{2} - 4\,\dfrac{u^*KB^TBu}{u^*Ku}}.$

Eigenvalues with nonzero imaginary part arise if the discriminant is negative.

Case $\theta_I \neq 0$. It must be

(2.13)  $(u^*KAKu)^2 - 4\,(u^*Ku)(u^*KB^TBu) < 0,$

and from (2.12) we get $\theta_R = \frac{1}{2}\,u^*KAKu/(u^*Ku)$. By substituting $\theta_R$ in (2.11), we obtain $\theta_R^2 + \theta_I^2 = u^*KB^TBu/(u^*Ku)$.

Case $\theta_I = 0$. In this case, from (2.12) it follows that $\theta = \theta_R > 0$. For $Bu = 0$, from (2.10) it follows that $v = 0$ ($\theta \neq 0$), and the reasoning for $v = 0$ applies. We now assume that $Bu \neq 0$. We have

$\theta\,u^*KAKu - \theta^2\,u^*Ku = u^*KB^TBu > 0,$

where the last inequality follows from (2.8). Since $\theta > 0$, the inequality $\theta\,u^*KAKu - \theta^2\,u^*Ku > 0$ implies $u^*KAKu - \theta\,u^*Ku > 0$, hence $\theta < \lambda_1\lambda_{\max}(K) = \rho$.

To prove the lower bound on $\theta$, write the equation (2.9) as $(AK - \theta I)u = -B^Tv$. If $\theta$ is an eigenvalue of $AK$, then $\theta \ge \lambda_{\min}(AK) \ge \lambda_n\lambda_{\min}(K) \ge \lambda_n$. Otherwise, $AK - \theta I$ is invertible, so that $u = -(AK - \theta I)^{-1}B^Tv$, which, substituted into (2.10), yields

(2.14)  $B(AK - \theta I)^{-1}B^Tv = \theta v \iff BK^{-1}(A - \theta K^{-1})^{-1}B^Tv = \theta v.$

Let $B^T = [W_1, W_2]\begin{bmatrix}\Sigma\\ 0\end{bmatrix}Q^T$ be the singular value decomposition of $B^T$, and note that

$K^{-1} = [W_1, W_2]\begin{bmatrix}(I + \alpha^{-2}\Sigma^2)^{-1} & 0\\ 0 & I\end{bmatrix}[W_1, W_2]^T, \qquad BK^{-1} = Q\Sigma(I + \alpha^{-2}\Sigma^2)^{-1}W_1^T = QD^{-1}\Sigma W_1^T,$

where $D = I + \alpha^{-2}\Sigma^2$. Problem (2.14) can thus be written as $QD^{-1}\Sigma W_1^T(A - \theta K^{-1})^{-1}W_1\Sigma Q^Tv = \theta v$, or, equivalently,

$\Sigma W_1^T(A - \theta K^{-1})^{-1}W_1\Sigma w = \theta Dw, \quad w = Q^Tv,$

(2.15)  $W_1^T(A - \theta K^{-1})^{-1}W_1\hat w = \theta\,\Sigma^{-1}D\Sigma^{-1}\hat w, \quad \hat w = \Sigma w.$

We multiply both sides from the left by $\hat w^*$ and we notice that the left-hand side is positive for any $\hat w \neq 0$. If $\theta \ge \lambda_{\min}(AK) \ge \lambda_n$, then $\lambda_n$ is the sought-after lower bound. Assume now that $\theta < \lambda_{\min}(AK)$. Then the matrix $A - \theta K^{-1}$ is symmetric and positive definite. Therefore,

(2.16)  $\hat w^*W_1^T(A - \theta K^{-1})^{-1}W_1\hat w \ge \lambda_{\min}\big((A - \theta K^{-1})^{-1}\big)\|W_1\hat w\|^2 = \lambda_{\min}\big((A - \theta K^{-1})^{-1}\big)\|\hat w\|^2,$

and we have

$\lambda_{\min}\big((A - \theta K^{-1})^{-1}\big) = \dfrac{1}{\lambda_{\max}(A - \theta K^{-1})} \ge \dfrac{1}{\lambda_1 - \theta\lambda_{\min}(K^{-1})} = \dfrac{1}{\lambda_1 - \theta/\lambda_{\max}(K)} = \dfrac{1}{\lambda_1 - \theta/\tau},$

where $\tau := \lambda_{\max}(K) = 1 + \sigma_1^2/\alpha^2$. This, together with (2.16), provides a lower bound for the left-hand side of (2.15). Using

$\theta\,\hat w^*\Sigma^{-1}D\Sigma^{-1}\hat w = \theta\,\hat w^*(\Sigma^{-2} + \alpha^{-2}I)\hat w \le \theta\left(\sigma_m^{-2} + \alpha^{-2}\right)\|\hat w\|^2$

and recalling that $\lambda_1 - \theta/\tau > 0$, from (2.15) we obtain

$\dfrac{1}{\lambda_1 - \theta/\tau} \le \theta\left(\dfrac{1}{\sigma_m^2} + \dfrac{1}{\alpha^2}\right)$, i.e., $\alpha^2\sigma_m^2 \le \theta\,(\lambda_1 - \theta/\tau)(\alpha^2 + \sigma_m^2)$.

Since $\theta > 0$ and $\lambda_1 - \theta/\tau \le \lambda_1$, we get $\alpha^2\sigma_m^2 \le \theta\lambda_1(\alpha^2 + \sigma_m^2)$, and the final bound follows.

The quantities in part 1 of the lemma can also be bounded with techniques similar to those for the real case. However, in the next theorem, we derive sharper bounds for complex $\eta$ than those one would obtain by using estimates for complex $\theta$.

Theorem 2.2. Under the hypotheses and notation of Lemma 2.1, the eigenvalues $\eta$ of problem (1.2) are such that the following hold:

1. If $\Im(\eta) \neq 0$, then

(2.17)  $\dfrac{\lambda_n(2\alpha + \lambda_n)}{4\alpha^2} < \Re(\eta) < \min\left\{2,\ \dfrac{2\alpha + \lambda_n}{\alpha + \lambda_n}\right\},$

(2.18)  $\dfrac{4\lambda_n^2}{8\alpha^2 + \lambda_n^2} < |\eta|^2 \le \dfrac{4}{1 + \dfrac{\alpha^2}{\alpha^2 + \sigma_1^2} + \dfrac{\lambda_n}{\alpha}}.$

2. If $\Im(\eta) = 0$, then $\eta > 0$ and

(2.19)  $\min\left\{\dfrac{2\lambda_n}{\lambda_n + \alpha},\ \dfrac{2\varrho}{\varrho + \alpha}\right\} \le \eta \le \dfrac{2\rho}{\rho + \alpha} < 2$, where $\varrho := \dfrac{\alpha^2\sigma_m^2}{\lambda_1(\alpha^2 + \sigma_m^2)}$ and $\rho := \lambda_1\left(1 + \dfrac{\sigma_1^2}{\alpha^2}\right)$.

Proof. We have that $\eta$ is real if and only if $\theta$ is real. Assume $\Im(\eta) \neq 0$ and write $\theta = \theta_R + \imath\theta_I$. Recall that $\tau = 1 + \sigma_1^2/\alpha^2$. Using the definition of $\theta$ in (2.3) we obtain

$\Re(\eta) = \dfrac{2(|\theta|^2 + \alpha\theta_R)}{(\alpha + \theta_R)^2 + \theta_I^2}$, that is, $\big((\alpha + \theta_R)^2 + \theta_I^2\big)\Re(\eta) = 2(|\theta|^2 + \alpha\theta_R)$.

We substitute the quantities in (2.5) to get

$(\alpha^2\,u^*Ku + \alpha\,u^*KAKu + u^*KB^TBu)\,\Re(\eta) = \alpha\,u^*KAKu + 2\,u^*KB^TBu.$

Note that $\alpha^2 u^*Ku + u^*KB^TBu = \alpha^2 u^*K^2u$. We divide by $\alpha\,u^*K^2u > 0$ to obtain

$\left(\alpha + \dfrac{u^*KAKu}{u^*K^2u}\right)\Re(\eta) = \dfrac{u^*KAKu}{u^*K^2u} + \dfrac{2}{\alpha}\,\dfrac{u^*KB^TBu}{u^*K^2u}.$

We recall that for $\Im(\eta) \neq 0$ relation (2.13) holds, which implies, by (2.6) and (2.8),

(2.20)  $\left(\dfrac{u^*KAKu}{u^*K^2u}\right)^{2} < 4\,\dfrac{u^*Ku}{u^*K^2u}\,\dfrac{u^*KB^TBu}{u^*K^2u} \le \alpha^2$

and

(2.21)  $\dfrac{u^*KB^TBu}{u^*K^2u} > \dfrac{1}{4}\left(\dfrac{u^*KAKu}{u^*K^2u}\right)^{2}\dfrac{u^*K^2u}{u^*Ku} \ge \dfrac{\lambda_n^2}{4}.$

Therefore, by applying (2.7), (2.20), and (2.8), we obtain

$\Re(\eta) < \dfrac{u^*KAKu/u^*K^2u + 2\alpha}{u^*KAKu/u^*K^2u + \alpha} \le \dfrac{\lambda_n + 2\alpha}{\lambda_n + \alpha} \le 2,$

where we used that $t \mapsto (t + 2\alpha)/(t + \alpha)$ is decreasing and $u^*KAKu/u^*K^2u \ge \lambda_n$. By once more applying (2.20), (2.7), and (2.21), we also get $u^*KAKu/u^*K^2u < \alpha$, so that

$2\alpha\,\Re(\eta) > \lambda_n + \dfrac{\lambda_n^2}{2\alpha}$, i.e., $\Re(\eta) > \dfrac{\lambda_n(2\alpha + \lambda_n)}{4\alpha^2}$,

which provide the upper and lower bounds for $\Re(\eta)$. To complete the proof of the first statement, we write $\eta$ using (2.3) to obtain

$(\alpha^2 + 2\alpha\theta_R)\,|\eta|^2 = (4 - |\eta|^2)\,|\theta|^2.$

Substituting (2.5) as before and dividing by $u^*K^2u$, it yields

$\left(\alpha^2\dfrac{u^*Ku}{u^*K^2u} + \alpha\dfrac{u^*KAKu}{u^*K^2u}\right)|\eta|^2 = (4 - |\eta|^2)\,\dfrac{u^*KB^TBu}{u^*K^2u}.$

Note that $4 - |\eta|^2 > 0$. As before, we bound $|\eta|^2$ from both sides, keeping in mind (2.6), (2.7), (2.8), (2.21), and (2.20), to get

$\left(\dfrac{\alpha^2}{\tau} + \alpha\lambda_n\right)|\eta|^2 \le (4 - |\eta|^2)\,\alpha^2$, i.e., $|\eta|^2 \le \dfrac{4}{1 + \dfrac{\alpha^2}{\alpha^2 + \sigma_1^2} + \dfrac{\lambda_n}{\alpha}}$,

and

$2\alpha^2|\eta|^2 > (4 - |\eta|^2)\,\dfrac{\lambda_n^2}{4}$, i.e., $|\eta|^2 > \dfrac{4\lambda_n^2}{8\alpha^2 + \lambda_n^2}$.

This completes the proof of the first part. Assume now that $\eta$ is real. Then, from the corresponding bound for real $\theta$ in Lemma 2.1 and the fact that $\eta = \phi(\theta) = \frac{2\theta}{\theta + \alpha}$ is a strictly increasing function of its argument, we obtain the desired bounds on $\eta$.

A few comments are in order. We start by noticing that, in general, real eigenvalues $\eta$ may well cover the whole open interval $(0, 2)$, depending on the parameter $\alpha$. Our numerical experiments show that these bounds are indeed sharp for several values of $\alpha$; cf. section 4. Although much less sharp in general, we also found the bounds for eigenvalues with nonzero imaginary part of interest. The lower estimate for $|\eta|$ indicates that nonreal eigenvalues are not close to the origin, especially for small $\alpha$. In addition, they are located in a section of an annulus as in Figure 2.1. We will see in Theorem 3.1

Fig. 2.1. Inclusion region for the typical spectrum of the preconditioned matrix.

that complex eigenvalues cannot arise for values of $\alpha$ smaller than one half the smallest eigenvalue of $A$.

Remark 2.1. We note that when $A$ is positive definite, selecting $\alpha = \lambda_n$ provides constant bounds for the cluster of eigenvalues with nonzero imaginary part. Indeed, substituting $\alpha = \lambda_n$ in (2.17) and in (2.18) we obtain $\Re(\eta) < \frac{3}{2}$ and

$\dfrac{4}{9} < |\eta|^2 \le \dfrac{4\lambda_n^2 + 4\sigma_1^2}{3\lambda_n^2 + 2\sigma_1^2} < \dfrac{4\lambda_n^2 + 4\sigma_1^2}{2\lambda_n^2 + 2\sigma_1^2} = 2.$

For $\alpha \approx \lambda_n$ we expect to obtain similar bounds. This complex clustering seems to be relevant in the performance of the preconditioned iteration; cf. section 4.

3. Conditions for a real spectrum and clustering properties. We next show that, under suitable conditions, the spectrum of the nonsymmetric preconditioned matrix $P^{-1}\mathcal{A}$ is real. We stress the fact that a real spectrum is a welcome property, because it enables the efficient use of short-recurrence Krylov subspace methods such as Bi-CGSTAB; see, e.g., [11, p. 139].

Theorem 3.1. Assume the hypotheses and notation of Lemma 2.1 hold, and assume in addition that $A$ is symmetric positive definite. If $2\alpha \le \lambda_n$, then all eigenvalues $\eta$ are real.

Proof. We prove our assertion for the eigenvalues $\theta$, from which the statement for $\eta$ will follow. Let $x = [u; v]$ be an eigenvector associated with $\theta$. For $u \neq 0$, $v = 0$ we already showed that the spectrum is real, while $u = 0$ implies $v = 0$, a contradiction. We now assume $u \neq 0 \neq v$. The eigenvalues $\theta$ of (2.4) are the roots of equation (2.11), which can be expressed as in (2.12). These are all real if the discriminant is nonnegative. Equivalently, $\theta \in \mathbb{R}$ if

$(u^*KAKu)^2 - 4\,(u^*Ku)(u^*KB^TBu) \ge 0 \qquad \forall u \neq 0.$

Since $u^*K^2u > 0$ for $u \neq 0$, we write the problem above as: $\theta \in \mathbb{R}$ if

$\left(\dfrac{u^*KAKu}{u^*K^2u}\right)^{2} - 4\,\dfrac{u^*Ku}{u^*K^2u}\,\dfrac{u^*KB^TBu}{u^*K^2u} \ge 0 \qquad \forall u \neq 0.$

We have $\dfrac{u^*KAKu}{u^*K^2u} \ge \lambda_n$ and $\dfrac{u^*Ku}{u^*K^2u} \le \dfrac{1}{\lambda_{\min}(K)} \le 1$; see (2.6). Therefore, using (2.8), if $2\alpha \le \lambda_n$ we have

(3.1)  $\left(\dfrac{u^*KAKu}{u^*K^2u}\right)^{2} \ge \lambda_n^2 \ge 4\alpha^2 \ge 4\,\dfrac{u^*Ku}{u^*K^2u}\,\dfrac{u^*KB^TBu}{u^*K^2u}, \qquad \forall u \neq 0.$

The discriminant is nonnegative; therefore all roots of (2.11) are real, and so are the eigenvalues $\theta$.

The smallest eigenvalue of $A$ can be increased by suitable scalings, thus enlarging the interval of values of $\alpha$ leading to a real spectrum. Note, however, that multiplying (1.1) by a positive constant $\omega$ is equivalent to applying the Hermitian/skew-Hermitian splitting preconditioner with parameter $\hat\alpha := \alpha/\omega$ to the original, unscaled system. Under additional assumptions on the spectrum of the block matrices, it is possible to provide a less strict condition on $\alpha$. This is stated in the following corollary.

Corollary 3.2. Under the hypotheses and notation of Theorem 3.1, assume that $4\sigma_1^2 - \lambda_n^2 > 0$. If $\alpha \le \dfrac{\lambda_n\sigma_1}{\sqrt{4\sigma_1^2 - \lambda_n^2}}$, then all eigenvalues $\eta$ are real.

Proof. Using (2.8), we can write

$\dfrac{u^*KB^TBu}{u^*K^2u} = \alpha^2\left(1 - \dfrac{u^*Ku}{u^*K^2u}\right) \le \alpha^2\left(1 - \dfrac{\alpha^2}{\alpha^2 + \sigma_1^2}\right) = \dfrac{\alpha^2\sigma_1^2}{\alpha^2 + \sigma_1^2}.$

Therefore, if $\lambda_n^2 \ge \dfrac{4\alpha^2\sigma_1^2}{\alpha^2 + \sigma_1^2}$, the bound equivalent to (3.1) follows. Moreover, we note that, under the assumption that $4\sigma_1^2 - \lambda_n^2 > 0$,

$\lambda_n^2 \ge \dfrac{4\alpha^2\sigma_1^2}{\alpha^2 + \sigma_1^2} \iff \alpha \le \dfrac{\lambda_n\sigma_1}{\sqrt{4\sigma_1^2 - \lambda_n^2}},$

and $\dfrac{\lambda_n}{2} \le \dfrac{\lambda_n\sigma_1}{\sqrt{4\sigma_1^2 - \lambda_n^2}}$, so the condition on $\alpha$ is indeed less restrictive than that of Theorem 3.1.

It is interesting to observe that if $\sigma_1^2 = \lambda_1$, the condition $4\sigma_1^2 - \lambda_n^2 > 0$ corresponds to the inequality $\lambda_1 > \lambda_n^2/4$, which is easily satisfied since usually $\lambda_n$ is small and $\lambda_1$ is much bigger than $\lambda_n$. Note that such a setting is very common in the Stokes problem, where $A$ is a discretization of a (vector) Laplacian and $BB^T$ can also be regarded as a discrete Laplacian.

The following result shows that the eigenvalues form two tight clusters as $\alpha \to 0$. This is an important property from the point of view of convergence of preconditioned Krylov subspace methods. This result extends and sharpens the clustering result obtained in [3], using different tools, for the special case of Poisson's equation in saddle point form.

Proposition 3.3. Assume $A$ is symmetric and positive definite. For sufficiently small $\alpha > 0$, the eigenvalues of $P^{-1}\mathcal{A}$ cluster near zero and two.
More precisely, for small $\alpha > 0$, $\eta \in (0, \varepsilon_1) \cup (2 - \varepsilon_2, 2)$, with $\varepsilon_1, \varepsilon_2 > 0$ and $\varepsilon_1, \varepsilon_2 \to 0$ for $\alpha \to 0$.

Proof. We assume $\alpha$ is small, and in particular $2\alpha \le \lambda_n$; therefore all eigenvalues are real. Let $[u; v]$ be an eigenvector of (2.4) and let $\theta_\pm$ be the roots of equation (2.11). These are given by (2.12). Collecting $u^*K^2u$, and dividing and multiplying (2.12) by $u^*K^2u > 0$, we obtain

$\theta_\pm = \dfrac{u^*K^2u}{2\,u^*Ku}\left(\dfrac{u^*KAKu}{u^*K^2u} \pm \sqrt{\left(\dfrac{u^*KAKu}{u^*K^2u}\right)^{2} - 4\,\dfrac{u^*Ku}{u^*K^2u}\,\dfrac{u^*KB^TBu}{u^*K^2u}}\right) =: \dfrac{u^*K^2u}{u^*Ku}\,\nu_\pm.$

We recall the bounds in (2.7) and (2.8), while $\dfrac{u^*K^2u}{u^*Ku} \le \lambda_{\max}(K) = 1 + \dfrac{\sigma_1^2}{\alpha^2}$ for any $u \neq 0$, with $1 + \sigma_1^2/\alpha^2 = O(\alpha^{-2})$ as $\alpha \to 0$. Moreover, $0 \le \dfrac{u^*Ku}{u^*K^2u}\,\dfrac{u^*KB^TBu}{u^*K^2u} \le \dfrac{\alpha^2}{4}$, so that

$\dfrac{u^*Ku}{u^*K^2u}\,\dfrac{u^*KB^TBu}{u^*K^2u} \to 0 \quad \text{as } \alpha \to 0.$

We thus have $\nu_+ \to \dfrac{u^*KAKu}{u^*K^2u}$ as $\alpha \to 0$. Since this quantity is bounded independently of $\alpha$, we also obtain

$\nu_- = \dfrac{1}{\nu_+}\,\dfrac{u^*Ku}{u^*K^2u}\,\dfrac{u^*KB^TBu}{u^*K^2u} = O(\alpha^2) \quad \text{for } \alpha \to 0.$

Therefore, $\theta_+ = O\!\left(\dfrac{u^*K^2u}{u^*Ku}\right) = O(\alpha^{-2})$ as $\alpha \to 0$, whereas $\theta_- = O\!\left(\dfrac{u^*KB^TBu}{u^*K^2u}\right) = O(\alpha^2)$ as $\alpha \to 0$. It thus follows that

$\eta_+ = \dfrac{2\theta_+}{\theta_+ + \alpha} \to 2 \quad \text{and} \quad \eta_- = \dfrac{2\theta_-}{\theta_- + \alpha} \to 0 \quad \text{for } \alpha \to 0.$

We mention that the dependency of the optimal value of $\alpha$ on the mesh size $h$ has been discussed, using Fourier analysis, in [3] for the case of Poisson's equation in first order system form, and in [5] for the case of the Stokes problem. In the first case one can choose $\alpha$ so as to have $h$-independent convergence, whereas in the second case there is a moderate growth in the number of iterations as $h \to 0$.

It is important to remark that the occurrence of a gap in the spectrum for small $\alpha$ can be deduced from known results for overdamped systems. Indeed, equation (2.11) stems from the quadratic eigenvalue problem

$\theta^2 Ku - \theta KAKu + KB^TBu = 0.$

The eigenproblem above has $2n$ eigenvalues, $n - m$ of which are zero, corresponding to the dimension of the null space of $KB^TB$. The remaining $n + m$ eigenvalues coincide with the eigenvalues of our problem (2.4). By introducing $\hat\theta = -\theta$, we obtain the quadratic symmetric eigenproblem (see [6])

$\hat\theta^2 Ku + \hat\theta KAKu + KB^TBu = 0, \qquad K > 0,\ KAK > 0,\ KB^TB \ge 0.$

It can be shown (see, e.g., [6, Theorem 13.1]) that if the discriminant is positive, that is, if $(u^*KAKu)^2 - 4\,(u^*Ku)(u^*KB^TBu) > 0$ for any $u \neq 0$, then all eigenvalues $\hat\theta$ are real and nonpositive. Moreover, the spectrum is split in two parts, each of which contains $n$ eigenvalues. In our context, and in light of Proposition 3.3, the result above implies that $m$ eigenvalues $\eta$ will cluster towards zero, while $n$ eigenvalues $\eta$ will cluster around 2, for sufficiently small $\alpha$.

4. Numerical experiments.
In this section we present the results of a few numerical tests aimed at assessing the tightness of our bounds. The first problem we consider is a saddle point system arising from a finite element discretization of a model Stokes problem (leaky-lid driven cavity). This problem was generated using the IFISS software written by Howard Elman, Alison Ramage, and David Silvester [9]. Here $n = 578$, $m = 254$, $\lambda_n = 0.0763666$, $\lambda_1 = 3.94953$, $\sigma_1 = 0.4760666$, and $\sigma_m = 0.0053957$. Note that the $B$ matrices (discrete divergence operators) generated by this software are rank deficient; we obtained a full rank matrix by dropping the first two rows of $B$.

Note that in the statement of Theorem 13.1 in [6], the matrix $KB^TB$ is required to be positive definite rather than just semidefinite. However, the result is still true under the weaker assumption $KB^TB \ge 0$; see also the treatment in [10] and references therein.
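The qualitative behavior described so far can also be observed on a toy problem. The sketch below (hypothetical random data, not the IFISS matrices) checks the inclusion $\sigma(P^{-1}\mathcal{A}) \subset \overline{D}(1,1)\setminus\{0\}$ recalled in the introduction and the spectral mapping (2.3):

```python
import numpy as np

# Toy saddle point system (hypothetical data; not the IFISS Stokes problem).
rng = np.random.default_rng(0)
n, m, alpha = 8, 3, 0.5
M = rng.standard_normal((n, n))
A = M @ M.T + 0.1 * np.eye(n)           # A symmetric positive definite
B = rng.standard_normal((m, n))         # full row rank (generically)

N = n + m
Amat = np.zeros((N, N))                 # script-A = H + S
Amat[:n, :n] = A; Amat[:n, n:] = B.T; Amat[n:, :n] = -B
H = np.zeros((N, N)); H[:n, :n] = A     # Hermitian part
S = Amat - H                            # skew-Hermitian part
I = np.eye(N)

P = (H + alpha * I) @ (S + alpha * I) / (2 * alpha)   # HSS preconditioner
eta = np.linalg.eigvals(np.linalg.solve(P, Amat))

# Spectrum lies in the closed disk of center 1, radius 1, and avoids 0.
assert np.all(np.abs(eta - 1.0) <= 1.0 + 1e-8)
assert np.min(np.abs(eta)) > 1e-10

# The mapping eta = 2*theta/(theta + alpha) of (2.3) links (1.2) and (2.4).
G = I.copy(); G[:n, n:] = A @ B.T / alpha**2
theta = np.linalg.eigvals(np.linalg.solve(G, Amat))
eta_mapped = 2 * theta / (theta + alpha)
assert max(np.min(np.abs(eta - z)) for z in eta_mapped) < 1e-8
```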

Table 4.1. Real bounds in (2.19) vs. actual eigenvalues, Stokes problem.

α      Lower bound   η_min        η_max    Upper bound
0.001  0.0004890     0.0005069    1.9999   1.9999
0.01   0.00635       0.006974     1.9999   1.9999
0.1    0.000489      0.000355     1.999    1.999
0.2    0.0000760     0.00005      1.9608   1.9608
0.3    0.00004775    0.00007473   1.934    1.935
0.4    0.0000358     0.00005606   1.8633   1.8635
0.5    0.0000866     0.00004485   1.850    1.854
0.6    0.0000388     0.00003738   1.7696   1.770
0.7    0.0000047     0.0000304    1.77     1.778
0.8    0.000079      0.0000803    1.687    1.6880
0.9    0.000059      0.000049     1.6494   1.6504
1.0    0.0000433     0.000043     1.637    1.647
2.0    0.0000077     0.0000       1.337    1.3344
5.0    0.0000087     0.00000449   0.886    0.8838

Table 4.2. Bounds in (2.19) vs. actual real eigenvalues, groundwater flow problem.

α      Lower bound   η_min      η_max      Upper bound
0.001  0.883         0.888      2.000000   2.000000
0.01   0.8573        0.30869    1.999893   1.99997
0.05   0.06455       0.07048    1.985944   1.99634
0.1    0.03786       0.035865   0.3754     1.977
0.3    0.0049        0.0099     0.047856   1.437903
0.5    0.006644      0.00777    0.08988    0.733
1.0    0.00337       0.003645   0.04599    0.45003
3.0    0.000         0.007      0.004890   0.0648
5.0    0.000666      0.000730   0.00937    0.005078

In Table 4.1 we compare the lower and upper bounds given in Theorem 2.2 with the actual values of the smallest and largest eigenvalues of $P^{-1}\mathcal{A}$, which in this case are all real. One can see that the upper bound is always very tight and that the lower bound is fairly tight, especially for small values of $\alpha$. For $\alpha = 0.01$ or smaller, the eigenvalues form two tight clusters near 0 and 2, containing $m$ and $n$ eigenvalues, respectively, as predicted by Proposition 3.3.

Next, we consider a saddle point system arising from the discretization of a groundwater flow problem using mixed-hybrid finite elements [7]. In the example at hand, $n = 270$, $m = 207$, $n + m = 477$, and $A$ contains ,746 nonzeros. Here we have $\lambda_n = 0.007$, $\lambda_1 = 0.00$, $\sigma_1 = .6$, and $\sigma_m = 0.9743$. In this case there are nonreal eigenvalues, except for very small $\alpha$. In Table 4.2 we compare the lower and upper bounds given in Theorem 2.2
with the actual values of the smallest and largest real eigenvalues of $P^{-1}\mathcal{A}$, while in Tables 4.3 and 4.4 we provide the analogous results for the real part and modulus of the nonreal eigenvalues. One can see that the location of the real eigenvalues is well detected by our bounds. In particular, the lower bound is very sharp, whereas the upper bound gets looser when the whole spectrum becomes complex ($\alpha \ge 0.05$), providing again good estimates for large values of $\alpha$. The lower bounds suggest that the leftmost cluster will not be too close to zero, particularly for $\alpha$ between $10^{-3}$ and $10^{-2}$, and it turns out that these values of $\alpha$ yield the best results (see below).
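Theorem 3.1 and Proposition 3.3 can be illustrated numerically along the same lines. The following sketch uses made-up synthetic data (sizes and matrices are hypothetical, not the paper's test problems):

```python
import numpy as np

# Sketch of Theorem 3.1 and Proposition 3.3 on synthetic data.
rng = np.random.default_rng(4)
n, m = 20, 8
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                 # lambda_n >= 1
B = rng.standard_normal((m, n))

N = n + m
Amat = np.zeros((N, N))
Amat[:n, :n] = A; Amat[:n, n:] = B.T; Amat[n:, :n] = -B
H = np.zeros((N, N)); H[:n, :n] = A
S = Amat - H
I = np.eye(N)

def eig_hss(alpha):
    P = (H + alpha * I) @ (S + alpha * I) / (2 * alpha)
    return np.linalg.eigvals(np.linalg.solve(P, Amat))

lam_n = np.linalg.eigvalsh(A)[0]

# Theorem 3.1: 2*alpha <= lambda_n forces a real spectrum.
eta = eig_hss(0.3 * lam_n)
assert np.max(np.abs(eta.imag)) < 1e-6

# Proposition 3.3: for small alpha, m eigenvalues cluster near 0
# and n eigenvalues cluster near 2.
eta = eig_hss(1e-3)
assert np.sum(np.abs(eta) < 0.5) == m
assert np.sum(np.abs(eta - 2) < 0.5) == n
```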

Table 4.3. Bounds in (2.17) vs. actual real part of nonreal eigenvalues, groundwater flow problem.

α      Lower bound   min Re η   max Re η   Upper bound
0.001  –             –          –          –
0.01   –             –          –          –
0.05   0.096         1.83080    1.96387    2.000000
0.1    0.00560       1.57808    1.975776   2.000000
0.3    0.00857       0.608980   1.966375   2.000000
0.5    0.003         0.74840    1.94906    2.000000
1.0    0.000556      0.07855    1.7440     2.000000
3.0    0.00085       0.009779   0.86083    2.000000
5.0    0.000         0.00380    0.48775    2.000000

Table 4.4. Bounds in (2.18) vs. actual modulus of nonreal eigenvalues, groundwater flow problem.

α      Lower bound   min |η|    max |η|    Upper bound
0.001  –             –          –          –
0.01   –             –          –          –
0.05   0.0944        1.8603     1.963349   1.9679
0.1    0.0096        1.753875   1.97799    1.98
0.3    0.00307       1.0935     1.97900    1.98669
0.5    0.0094        0.73979    1.95973    1.96379
1.0    0.00096       0.386709   1.865509   1.88779
3.0    0.0003        0.360      1.3480     1.596393
5.0    0.0009        0.078883   0.95533    1.49650

Concerning nonreal eigenvalues, we observe that our bounds are generally not very sharp. The real part of the eigenvalues changes considerably as $\alpha$ varies, clustering on different regions of the interval $(0, 2)$. Our lower bounds on $\Re(\eta)$ are rather loose, although they get better for larger values of $\alpha$; conversely, the upper bounds are tight for small $\alpha$ and loose for large $\alpha$.

We conclude this section with the results of a few experiments that illustrate the convergence behavior of full GMRES [8] with Hermitian/skew-Hermitian splitting preconditioning; we refer to [4] for more extensive experimental results. The purpose of these experiments is to investigate the influence of the eigenvalue distribution, and in particular of the clustering that occurs as $\alpha \to 0$, on the convergence of GMRES. We also monitor the conditioning of the eigenvectors of the preconditioned matrix for different values of $\alpha$. In Table 4.5 we report a sample of results for both the Stokes and the groundwater flow problems, for different values of $\alpha$, from tiny to fairly large.
Here κ V := σ maxv σ minv denotes the spectral condition number of the matrix of normalized eigenvectors of P A, and Its denotes the corresponding number of preconditioned GMRES iterations matrix-vector products needed to reduce the initial residual by at least six orders of magnitude. For the Stokes problem, the condition number of the eigenvector matrix of the unpreconditioned A is κ V =6.94. Without preconditioning, full GMRES converges in 99 iterations. For the unpreconditioned groundwater flow problem, it is κ V =.37 and GMRES stagnates. Note that for both problems, the best results in terms of GMRES iterations are obtained for =0.005, with generally good convergence behavior for between 0 6 and 0. Good performance is observed in particular for λ n, for which nonreal eigenvalues, when they occur, lie in a small region in the disc D, cf. Remark..

Table 4.5. Conditioning of the eigenvectors and iteration count.

              Stokes              Groundwater flow
α             κ(V)      Its       κ(V)      Its
10^-12        0.8E+8    > 200     4.3E+09   5
10^-9         9.3E+0    45        .0E+08    7
10^-6         4.5E+08   4         .4E+7     7
10^-5         3.30E+04  40        5.69E+00  7
10^-4         9.65E+03  40        .3E+0     7
10^-3         .48E+03   40        8.0E+00   3
0.005         .6E+04    38        .3E+03
0.01          .8E+03    38        .57E+04   3
0.03          7.63E+0   40        .3E+0     7
0.05          .68E+0    44        6.79E+0   9
0.07          .6E+0     48        .9E+0     0
0.1           6.05E+0   54        .37E+0    6
0.3           3.55E+0   76        .76E+00   67
0.5           4.38E+0   88        .9E+00    09
0.7           .88E+0    97        8.87E+00  > 200
1.0           .77E+0    08        .56E+00   > 200
5.0           3.33E+0   57        .0E+00    > 200
10.0          6.44E+00  74        .90E+00   > 200

The convergence rate remains fairly stable even for smaller values of $\alpha$, but eventually it starts deteriorating as $\alpha$ approaches zero. It is likely that this is due to the fact that the preconditioner (and with it, the preconditioned matrix) becomes singular as $\alpha \to 0$. On the other hand, as $\alpha \to \infty$ the preconditioned matrix tends to the unpreconditioned one, and the preconditioner becomes ineffective. Note that somewhat better results can be obtained by a suitable diagonal scaling of $\mathcal{A}$ (see [4]); however, no scaling was used here. For both problems, $\kappa(V)$ appears to be very sensitive to changes in $\alpha$, at least when $\alpha$ is small. This is in stark contrast with the rather smooth variation in the number of GMRES iterations. Overall, the condition number of the eigenvector matrix does not seem to have much influence on the convergence of GMRES.

5. Conclusions. In this paper we have provided bounds and clustering results for the spectra of preconditioned matrices arising from the application of the Hermitian/skew-Hermitian splitting preconditioner to saddle point problems. Numerical experiments have been used to illustrate the capability of our estimates to locate the actual spectral region. We have also shown that for small $\alpha$, all the eigenvalues are real and fall in two clusters, one near 0 and the other near 2. Our bounds are especially sharp precisely for these values of $\alpha$, which are those of practical interest.
Indeed, our analysis suggests that the best value of $\alpha$ should be small enough that the spectrum is clustered, but not so small that the preconditioned matrix is close to being singular. Numerical experiments confirm this, and it appears that when $A$ is positive definite, $\alpha \approx \lambda_n(A)$ is generally a good choice. Finally, we found a connection with the quadratic eigenvalue problems arising in the theory of overdamped systems; it is possible that exploitation of this connection may lead to further insight into the spectral properties of preconditioned saddle point problems.

Acknowledgment. We would like to thank Martin Gander for useful comments on an earlier draft of the paper.

REFERENCES

[1] Z.-Z. Bai, G. H. Golub, and M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), pp. 603–626.
[2] Z.-Z. Bai, G. H. Golub, and J.-Y. Pan, Preconditioned Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Semidefinite Linear Systems, Technical Report SCCM-02-12, Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, Stanford, CA, 2002.
[3] M. Benzi, M. J. Gander, and G. H. Golub, Optimization of the Hermitian and skew-Hermitian splitting iteration for saddle-point problems, BIT, 43 (2003), pp. 881–900.
[4] M. Benzi and G. H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl., 26 (2004), pp. 20–41.
[5] M. Gander, Optimization of a Preconditioner for Its Performance with a Krylov Method, talk delivered at the Dagstuhl Seminar 03421 on Theoretical and Computational Properties of Matrix Algorithms, Dagstuhl, Germany, 2003; http://www.dagstuhl.de/03421/.
[6] I. Gohberg, P. Lancaster, and L. Rodman, Matrix Polynomials, Academic Press, New York, 1982.
[7] J. Maryška, M. Rozložník, and M. Tůma, Mixed-hybrid finite element approximation of the potential fluid flow problem, J. Comput. Appl. Math., 63 (1995), pp. 383–392.
[8] Y. Saad and M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869.
[9] D. Silvester, Private communication, 2002.
[10] F. Tisseur and K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev., 43 (2001), pp. 235–286.
[11] H. A. van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge Monogr. Appl. Comput. Math. 13, Cambridge University Press, Cambridge, UK, 2003.