Journal of Computational and Applied Mathematics. Optimization of the parameterized Uzawa preconditioners for saddle point matrices


Journal of Computational and Applied Mathematics 226 (2009) 136-154

Optimization of the parameterized Uzawa preconditioners for saddle point matrices

Zeng-Qi Wang

State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100080, PR China

Article history: Received 1 December 2007; received in revised form 6 February 2008.
Keywords: Saddle point problem; Parameterized Uzawa preconditioner; Optimal parameter

Abstract. The parameterized Uzawa preconditioners for saddle point problems are studied in this paper. The eigenvalues of the preconditioned matrix are located in (0, 2) by choosing suitable parameters. Furthermore, we give two strategies to optimize the rate of convergence by finding suitable values of the parameters. Numerical computations show that the parameterized Uzawa preconditioners can lead to practical and effective preconditioned GMRES methods for solving saddle point problems. © 2008 Elsevier B.V. All rights reserved.

1. Introduction

Let A ∈ R^{m×m} be a symmetric positive definite matrix and B ∈ R^{m×n} be a matrix of full column rank, where m ≥ n. Denote by B^T the transpose of the matrix B. Then the saddle point problem is of the form

    Az ≡ [ A  B ; B^T  0 ] [ x ; y ] = [ b ; q ] ≡ f,    (1)

where b ∈ R^m and q ∈ R^n are two given vectors. Such systems of linear equations (1) arise in many areas of scientific computing and engineering applications, such as mixed finite-element approximation of partial differential equations in elasticity and fluid dynamics, interior point and sequential quadratic programming algorithms for optimization, the solution of weighted least-squares problems and the modeling of statistical processes; see [5,20,21] and the references therein.

It is widely recognized that effective Krylov iterations for saddle point problems depend crucially on good preconditioners (see [23,31]), such as incomplete factorization preconditioners [1,4,6] and matrix splitting preconditioners (see [13]). The matrix splitting preconditioners are possibly obtained through the simple iterative methods (e.g., the Jacobi, symmetric Gauss-Seidel (SGS), successive overrelaxation (SOR) and symmetric successive overrelaxation (SSOR) preconditioners [13,16,17,32]) or the alternating direction iteration methods (e.g., the Hermitian and skew-Hermitian splitting (HSS) preconditioners [8,10,11,15,20]), and so on. In this paper we present a new type of preconditioner which results from the parameterized Uzawa (PU) method studied in [7], as follows.

Method 1.1 ([7] The PU Method for the Saddle Point Problem). Let Q ∈ R^{n×n} be a symmetric positive definite matrix. Given initial vectors x^(0) ∈ R^m and y^(0) ∈ R^n, and two relaxation factors ω and τ with ω, τ ≠ 0. For k = 0, 1, 2, ... until the iteration sequence {(x^(k)T, y^(k)T)^T} converges to the exact solution of the saddle point problem (1), compute

    x^(k+1) = (1 - ω)x^(k) + ωA^{-1}(b - By^(k)),
    y^(k+1) = y^(k) + τQ^{-1}(B^T x^(k+1) - q).

Here Q is assumed to be an approximate (or preconditioning) matrix of the Schur complement matrix B^T A^{-1} B.

Corresponding address: Department of Mathematics, Shanghai Jiaotong University, Shanghai 200240, PR China. E-mail address: wangzengqi@sjtu.edu.cn
0377-0427/$ - see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.cam.2008.05.019
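For concreteness, the following sketch implements the two update formulas of Method 1.1 in Python. It is only an illustration, assuming dense solves with A and Q; the function and variable names are ours, not the paper's.

```python
import numpy as np

def pu_iteration(A, B, Q, b, q, omega, tau, max_iter=500, tol=1e-7):
    """Minimal sketch of the PU method (Method 1.1); dense solves for clarity."""
    m, n = B.shape
    x, y = np.zeros(m), np.zeros(n)
    norm_f = np.sqrt(np.linalg.norm(b) ** 2 + np.linalg.norm(q) ** 2)
    for k in range(max_iter):
        x = (1.0 - omega) * x + omega * np.linalg.solve(A, b - B @ y)
        y = y + tau * np.linalg.solve(Q, B.T @ x - q)
        # relative residual of the saddle point system (1)
        res = np.sqrt(np.linalg.norm(b - A @ x - B @ y) ** 2
                      + np.linalg.norm(q - B.T @ x) ** 2) / norm_f
        if res <= tol:
            break
    return x, y, k + 1
```

In practice A and Q would be factorized once, or the two inner solves would be performed inexactly.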

The PU method is a stationary iterative method based on the matrix splitting A = M(ω,τ) - N(ω,τ), where

    M(ω,τ) = [ (1/ω)A  0 ; B^T  -(1/τ)Q ].    (2)

The corresponding iteration matrix is given by

    H(ω,τ) = I - M(ω,τ)^{-1} A,    (3)

where I is the identity matrix of suitable size. When the relaxation factors ω and τ satisfy

    0 < ω < 2,  0 < τ < 2(2 - ω)/(ω µ_max),    (4)

the spectral radius of H(ω,τ) is less than 1, i.e., the PU method is convergent. Here µ_max is the maximum eigenvalue of the matrix Q^{-1}B^T A^{-1}B; see [7].

In this paper we use the matrix M(ω,τ) in (2) as a preconditioner for the system of linear equations (1) and call it a parameterized Uzawa preconditioner, or PU preconditioner in short. Theoretical analyses show that the spectral distribution of the coefficient matrix in (1) is improved well by the PU preconditioner. All the eigenvalues of the preconditioned matrix are located in the interval (0, 2) when the parameters ω and τ satisfy (4). Moreover, there are quite a number of eigenvalues clustered around a point. To further improve the conditioning of the coefficient matrix A in (1), we give two strategies for optimizing the preconditioner. On the premise of confining the smallest eigenvalue away from the origin, the optimal parameters are chosen to minimize the measurement of the objective intervals of the spectrum. Although the convergence of nonsymmetric problems has no clear relationship with the eigenvalues when Krylov subspace methods such as GMRES are performed, intuitively a tight distribution of the eigenvalues (away from the origin) often results in rapid convergence [19,23]. We use numerical results to show the effectiveness of the PU preconditioners and the corresponding preconditioned GMRES iteration methods.

The paper is organized as follows. After introducing the PU preconditioner M(ω,τ), we analyze the spectral distribution of the preconditioned matrix M(ω,τ)^{-1}A in Section 2. Strategies and corresponding parameters for optimizing the preconditioning matrix are studied in Section 3, and numerical results are shown in Section 4. Finally, we end the paper with a brief conclusion.

2. The PU preconditioner

When the matrix M(ω,τ) in (2) is used as a preconditioner for the saddle point problem (1), the spectral distribution of the preconditioned matrix M(ω,τ)^{-1}A can be analyzed easily by (3) and the following lemma.

Lemma 2.1 ([7]). Let A ∈ R^{m×m} be symmetric positive definite, B ∈ R^{m×n} be of full column rank, and Q ∈ R^{n×n} be nonsingular and symmetric. Denote by µ an eigenvalue of the matrix J = Q^{-1}B^T A^{-1}B. Then the nonzero eigenvalues of the matrix H(ω,τ) are given by

    λ̃ = 1 - ω  and  λ̃ = ½[2 - ω - τωµ ± √((2 - ω - τωµ)² - 4(1 - ω))].

Furthermore, it can be proved that λ̃ = 1 - ω is an eigenvalue of multiplicity at least m - n, and that zero is not an eigenvalue of H(ω,τ) if ω ≠ 1.

Consequently, we get the following theorem.

Theorem 2.1. Let A ∈ R^{m×m} be symmetric positive definite, B ∈ R^{m×n} be of full column rank, and Q ∈ R^{n×n} be nonsingular and symmetric. Denote by µ an eigenvalue of the matrix J = Q^{-1}B^T A^{-1}B. Then the eigenvalues of M(ω,τ)^{-1}A, denoted by λ, are given by

    λ = ω  or  λ = ½[(ω + τωµ) ± √(ω²(1 + τµ)² - 4ωτµ)].

Moreover, there are at least m - n eigenvalues which are equal to ω.

Proof. From (3), the eigenvalues of M(ω,τ)^{-1}A and the eigenvalues of the iteration matrix H(ω,τ) have the relationship

    λ = 1 - λ̃.    (5)

The results of this theorem can be straightforwardly deduced from (5) and Lemma 2.1.
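The spectral claims of Lemma 2.1 and Theorem 2.1 are easy to check numerically. The snippet below assembles M(ω,τ) in the block form (2) used above (if the paper's sign convention for the (2,2) block differs, the same check goes through after flipping that sign) and compares the eigenvalues of M(ω,τ)^{-1}A with the closed formula; all names and the test data are illustrative.

```python
import numpy as np

def pu_preconditioner(A, B, Q, omega, tau):
    """Assemble M(omega, tau) of (2) densely, only for checking Section 2."""
    m, n = B.shape
    M = np.zeros((m + n, m + n))
    M[:m, :m] = A / omega
    M[m:, :m] = B.T
    M[m:, m:] = -Q / tau
    return M

rng = np.random.default_rng(0)
m, n = 20, 8
A = rng.standard_normal((m, m)); A = A @ A.T + m * np.eye(m)   # SPD
B = rng.standard_normal((m, n))                                 # full column rank (generically)
Q = B.T @ np.linalg.solve(A, B) + 0.1 * np.eye(n)               # rough Schur complement approximation

omega, tau = 1.2, 0.4
Ahat = np.block([[A, B], [B.T, np.zeros((n, n))]])
lam = np.linalg.eigvals(np.linalg.solve(pu_preconditioner(A, B, Q, omega, tau), Ahat))

mu = np.linalg.eigvals(np.linalg.solve(Q, B.T @ np.linalg.solve(A, B))).real
root = np.sqrt((omega**2 * (1 + tau * mu)**2 - 4 * omega * tau * mu).astype(complex))
pred = np.concatenate([0.5 * (omega * (1 + tau * mu) + root),
                       0.5 * (omega * (1 + tau * mu) - root),
                       omega * np.ones(m - n)])                  # Theorem 2.1
print(np.sort_complex(lam.round(8)))
print(np.sort_complex(pred.round(8)))
```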

Moreover, all eigenvalues of M(ω,τ)^{-1}A are located in the disk {z ∈ C : |z - 1| < 1} when the parameters ω and τ satisfy (4). The eigenvalues of the preconditioned matrix fall into two categories: one is ω, and the other is conditionally real or complex, depending on the discriminant

    Δ := Δ(µ) = ω²(1 + τµ)² - 4ωτµ,

where µ ∈ [µ_min, µ_max], and µ_min and µ_max are the minimum and the maximum eigenvalues of the matrix Q^{-1}B^T A^{-1}B, respectively. After straightforward derivation we have the following results:

(F_a) When ω ≥ 1, for any τ we have Δ(µ) ≥ 0, i.e., λ = ½[(ω + τωµ) ± √(ω²(1 + τµ)² - 4ωτµ)] is real, and all the eigenvalues of M(ω,τ)^{-1}A are real;

(F_b) When ω < 1 and τ ≥ (2 - ω + 2√(1 - ω))/(ωµ_min) or τ ≤ (2 - ω - 2√(1 - ω))/(ωµ_max), it holds that Δ(µ) ≥ 0 for any µ ∈ [µ_min, µ_max], so that λ = ½[(ω + τωµ) ± √(ω²(1 + τµ)² - 4ωτµ)] is real. Hence all the eigenvalues of M(ω,τ)^{-1}A are real;

(F_c) When ω < 1, for (2 - ω - 2√(1 - ω))/(ωµ) < τ < (2 - ω + 2√(1 - ω))/(ωµ) we have Δ(µ) < 0, and λ = ½[(ω + τωµ) ± √(ω²(1 + τµ)² - 4ωτµ)] is complex. Hence there are complex eigenvalues of M(ω,τ)^{-1}A.

Define the functions

    (a) f_1(µ,ω,τ) = ½[(ω + τωµ) + √(ω²(1 + τµ)² - 4ωτµ)],
    (b) f_2(µ,ω,τ) = ½[(ω + τωµ) - √(ω²(1 + τµ)² - 4ωτµ)],
    (c) f_3(µ,ω,τ) = √(τωµ).

We first analyze the monotonicity of these functions. According to the monotonicity we then define two intervals I_1(ω,τ) and I_2(ω,τ). The real spectrum of the preconditioned matrix lies in I_1(ω,τ) ∪ I_2(ω,τ), except for λ = ω.

Theorem 2.2. Consider the preconditioned matrix M(ω,τ)^{-1}A, in which the parameters ω and τ satisfy (4). Then:

(i) The real eigenvalues of the preconditioned matrix satisfy 0 < λ < 2, and all the eigenvalues are located in the unit disk {λ ∈ C : |λ - 1| < 1};

(ii) When ω ≥ 1, all the eigenvalues of the preconditioned matrix are real. Moreover, these eigenvalues are located in the union of the intervals I_1(ω,τ) ∪ I_2(ω,τ) ∪ {ω}, where

    I_1(ω,τ) = [f_1(µ_min,ω,τ), f_1(µ_max,ω,τ)],  I_2(ω,τ) = [f_2(µ_min,ω,τ), f_2(µ_max,ω,τ)];

(iii) When ω < 1 and τ ≥ (2 - ω + 2√(1 - ω))/(ωµ_min), the eigenvalues of the preconditioned matrix are all real. Moreover, they are located in the union of the intervals I_1(ω,τ) ∪ I_2(ω,τ) ∪ {ω}, where

    I_1(ω,τ) = [f_1(µ_min,ω,τ), f_1(µ_max,ω,τ)] and

    I_2(ω,τ) = [f_2(µ_max,ω,τ), f_2(µ_min,ω,τ)];

(iv) When ω < 1 and τ ≤ (2 - ω - 2√(1 - ω))/(ωµ_max), the eigenvalues of the preconditioned matrix are all real. Moreover, these eigenvalues are located in I_1(ω,τ) ∪ I_2(ω,τ) ∪ {ω}, where

    I_1(ω,τ) = [f_1(µ_max,ω,τ), f_1(µ_min,ω,τ)],  I_2(ω,τ) = [f_2(µ_min,ω,τ), f_2(µ_max,ω,τ)];

(v) When ω < 1 and (2 - ω - 2√(1 - ω))/(ωµ_max) < τ < (2 - ω + 2√(1 - ω))/(ωµ_min), conjugate complex eigenvalues of the preconditioned matrix exist. These complex eigenvalues satisfy

    ω(1 + τµ_min)/2 ≤ Re(λ) ≤ ω(1 + τµ_max)/2  and  √(ωτµ_min) ≤ |λ| ≤ √(ωτµ_max),

where Re(·) denotes the real part of the corresponding complex number.

Proof. From Theorem 2.1 we know that the spectral set of the preconditioned matrix M(ω,τ)^{-1}A consists of the following two types of eigenvalues:

    λ = ω  and  λ = ½[(ω + τωµ) ± √(ω²(1 + τµ)² - 4ωτµ)].

We can obtain the results in (i) straightforwardly, since ρ(H(ω,τ)) < 1 when ω and τ satisfy (4).

According to (F_a), all the eigenvalues λ are real when ω ≥ 1. In this case f_1(µ,ω,τ) and f_2(µ,ω,τ) are both monotonically increasing functions with respect to the variable µ, hence (ii) holds true.

When ω < 1 and τ ≥ (2 - ω + 2√(1 - ω))/(ωµ_min), f_1(µ,ω,τ) is an increasing function while f_2(µ,ω,τ) is a decreasing function with respect to µ, so (iii) holds true. When ω < 1 and τ ≤ (2 - ω - 2√(1 - ω))/(ωµ_max), f_1(µ,ω,τ) is a decreasing function while f_2(µ,ω,τ) is an increasing function with respect to µ, so (iv) holds true.

When ω ≤ 1 and (2 - ω - 2√(1 - ω))/(ωµ_max) < τ < (2 - ω + 2√(1 - ω))/(ωµ_min), according to (F_c) the complex eigenvalues

    λ = ½[(ω + τωµ) ± √(ω²(1 + τµ)² - 4ωτµ)] = ½[(ω + τωµ) ± i√(4ωτµ - ω²(1 + τµ)²)]

exist. It is easy to see that the real part of λ is monotonically increasing with respect to µ and is bounded as

    ω(1 + τµ_min)/2 ≤ Re(λ) ≤ ω(1 + τµ_max)/2,

and |λ| is given by

    |λ|² = ¼[(ω + τωµ)² + 4ωτµ - ω²(1 + τµ)²] = ωτµ.

It is bounded as √(τωµ_min) ≤ |λ| ≤ √(τωµ_max). Now the theorem is proved.

When the spectrum is real, some Krylov subspace methods become more attractive because of the short recurrence; see [8,12,31]. Hence, in the following we only consider the cases in which all the eigenvalues of M(ω,τ)^{-1}A are real. In those cases the eigenvalues of the preconditioned matrix are located in (0, 2) for the corresponding parameters ω and τ. In the next section we want to improve the conditioning of the preconditioned matrix by further selecting the parameters.

3. Strategies for optimizing the preconditioner

In this section we present two strategies to optimize the preconditioning matrix and compute the corresponding optimal parameters for the PU preconditioners under these strategies. To avoid confusion, we emphasize that the optimal parameters for the PU preconditioning matrix may be different from the optimal parameters for the PU iteration method; see [7].

We denote the measurements (lengths) of the intervals I_1(ω,τ) and I_2(ω,τ) by |I_1(ω,τ)| and |I_2(ω,τ)|, respectively. In the following two strategies we improve the conditioning of the coefficient matrix in two aspects: (i) compress the distribution of the eigenvalues; (ii) ensure that the eigenvalues are away from the origin.

Strategy A. Compress the eigenvalue distribution by reducing |I(ω,τ)| = max{|I_1(ω,τ)|, |I_2(ω,τ)|}. The parameter pair {ω_opt, τ_opt} is the solution of the minimization problem

    min_{ω,τ} |I(ω,τ)|  s.t.  min_µ f_2(µ,ω,τ) ≥ ε,    (6)

where min_µ f_2(µ,ω,τ) is the minimum eigenvalue of M(ω,τ)^{-1}A.

Strategy B. Compress the eigenvalue distribution by reducing the measurement of Ĩ(ω,τ) = [min_µ f_2(µ,ω,τ), max_µ f_1(µ,ω,τ)]. The parameter pair {ω^(opt), τ^(opt)} is the solution of the minimization problem

    min_{ω,τ} |Ĩ(ω,τ)|  s.t.  min_µ f_2(µ,ω,τ) ≥ ε,    (7)

where min_µ f_2(µ,ω,τ) is the minimum eigenvalue of M(ω,τ)^{-1}A.

The constraints in (6) and (7) are used to guarantee that the eigenvalues of the preconditioned matrix are away from zero. For a certain saddle point problem, ε is a constant less than 1 in general.

Theorem 3.1. In the different cases of ω and τ, the function |I(ω,τ)| is expressed as follows, where, as in Section 2, Δ(µ) = ω²(1 + τµ)² - 4ωτµ.

(i) When ω ≥ 1:
 (i1) for τ ≥ 2(2 - ω)/((µ_min + µ_max)ω),

    |I(ω,τ)| = |I_1(ω,τ)| = ½[ωτ(µ_max - µ_min) + √Δ(µ_max) - √Δ(µ_min)];    (8)

 (i2) for τ < 2(2 - ω)/((µ_min + µ_max)ω),

    |I(ω,τ)| = |I_2(ω,τ)| = ½[ωτ(µ_max - µ_min) - √Δ(µ_max) + √Δ(µ_min)].    (9)

(ii) When ω < 1:
 (ii1) for τ ≤ (2 - ω - 2√(1 - ω))/(ωµ_max),

    |I(ω,τ)| = |I_2(ω,τ)| = ½[ωτ(µ_max - µ_min) + √Δ(µ_min) - √Δ(µ_max)];    (10)

 (ii2) for τ ≥ (2 - ω + 2√(1 - ω))/(ωµ_min),

    |I(ω,τ)| = |I_1(ω,τ)| = ½[ωτ(µ_max - µ_min) + √Δ(µ_max) - √Δ(µ_min)].    (11)

Proof. When ω ≥ 1, it holds that

    |I_1(ω,τ)| = ½[ωτ(µ_max - µ_min) + √Δ(µ_max) - √Δ(µ_min)]  and
    |I_2(ω,τ)| = ½[ωτ(µ_max - µ_min) - √Δ(µ_max) + √Δ(µ_min)].

By straightforward calculations, for τ ≥ 2(2 - ω)/((µ_min + µ_max)ω) we have √Δ(µ_max) ≥ √Δ(µ_min), and for τ < 2(2 - ω)/((µ_min + µ_max)ω) we have √Δ(µ_max) < √Δ(µ_min). The result then follows directly from the above equations.

When ω < 1, all the eigenvalues are real if and only if τ ≤ (2 - ω - 2√(1 - ω))/(ωµ_max) or τ ≥ (2 - ω + 2√(1 - ω))/(ωµ_min). We first discuss the case τ ≤ (2 - ω - 2√(1 - ω))/(ωµ_max). It holds that

    |I_1(ω,τ)| = ½[-ωτ(µ_max - µ_min) + √Δ(µ_min) - √Δ(µ_max)]  and
    |I_2(ω,τ)| = ½[ωτ(µ_max - µ_min) + √Δ(µ_min) - √Δ(µ_max)].

It is clear that |I_2(ω,τ)| ≥ |I_1(ω,τ)|. For the case τ ≥ (2 - ω + 2√(1 - ω))/(ωµ_min), it holds that

    |I_1(ω,τ)| = ½[ωτ(µ_max - µ_min) + √Δ(µ_max) - √Δ(µ_min)]  and
    |I_2(ω,τ)| = ½[-ωτ(µ_max - µ_min) + √Δ(µ_max) - √Δ(µ_min)].

It is clear that |I_1(ω,τ)| ≥ |I_2(ω,τ)|.
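As a worked illustration of Theorems 2.2 and 3.1, the helper below evaluates f_1 and f_2 at µ_min and µ_max and returns the two real-spectrum intervals together with the measurement |I(ω,τ)| = max{|I_1|, |I_2|}. It presumes parameters in one of the all-real cases and uses our own (hypothetical) naming.

```python
import numpy as np

def spectral_intervals(omega, tau, mu_min, mu_max):
    """Intervals I1, I2 of Theorem 2.2 and |I| of Theorem 3.1 (sketch)."""
    def f1(mu):
        d = omega**2 * (1 + tau * mu)**2 - 4 * omega * tau * mu
        return 0.5 * (omega * (1 + tau * mu) + np.sqrt(d))

    def f2(mu):
        d = omega**2 * (1 + tau * mu)**2 - 4 * omega * tau * mu
        return 0.5 * (omega * (1 + tau * mu) - np.sqrt(d))

    I1 = tuple(sorted((f1(mu_min), f1(mu_max))))
    I2 = tuple(sorted((f2(mu_min), f2(mu_max))))
    length = max(I1[1] - I1[0], I2[1] - I2[0])
    return I1, I2, length

# Example: omega >= 1, so all eigenvalues are real (case (ii) of Theorem 2.2).
print(spectral_intervals(omega=1.2, tau=0.3, mu_min=0.1, mu_max=1.0))
```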

Theorem 3.2. Consider the PU preconditioning optimization with Strategy A. Let A ∈ R^{m×m} be symmetric positive definite, B ∈ R^{m×n} be of full column rank, and Q ∈ R^{n×n} be symmetric positive definite. Denote the smallest and the largest eigenvalues of the matrix J = Q^{-1}B^T A^{-1}B by µ_min and µ_max, and the condition number by κ = µ_max/µ_min. Let the constant ε be less than 2/κ. Then, when 2/(κ+1) < ε ≤ 2/κ, the optimal parameters are

    ω_opt = 1,  τ_opt = ε/µ_min,

and the corresponding minimum measurement of the interval is |I(ω_opt, τ_opt)| = κε - 1. When ε ≤ 2/(κ+1), the optimal parameters are

    ω_opt = [4(1-ε) + ε²(1+κ)]/[2 - ε(1-κ)],  τ_opt = 2ε(2-ε)/(µ_min[ε²(κ+1) + 4(1-ε)]),

and the corresponding minimum measurement of the interval is

    |I(ω_opt, τ_opt)| = ε(2-ε)(κ-1)/(2 - ε + εκ).

Proof. In order to demonstrate the results conveniently, we define the following variables:

    τ(ε) = ε(ω - ε)/((1 - ε)ωµ_min),  ω^(0) = (κε² - 4ε + 4)/(κε - 2ε + 2).

We declare that f_2(µ_min, ω, τ) ≥ ε if and only if τ ≥ τ(ε), and that τ(ε) ≤ 2(2-ω)/(ωµ_max) if and only if ω ≤ ω^(0). So it is reasonable to restrict our discussion to the scope ω ≤ ω^(0). We are going to fulfill the proof according to the following three cases with respect to ω and τ.

Case (a): ω ≥ 1 and κε ≤ 2. For this case, according to (F_a) all the eigenvalues of the preconditioned matrix are real for any τ. It is clear from Theorem 2.2 that the lower bound of I(ω,τ) is f_2(µ_min, ω, τ). In order to satisfy the constraint in (6) we request f_2(µ_min, ω, τ) ≥ ε; hence τ must satisfy τ ≥ τ(ε). Furthermore, the condition κε ≤ 2 is necessary: when κε > 2 it holds that ω^(0) < 1, which contradicts ω ≥ 1 and ω ≤ ω^(0).

According to Strategy A we want to minimize the function |I(ω,τ)|. From Theorem 3.1,

    |I(ω,τ)| = |I_1(ω,τ)| for τ ≥ τ̃  and  |I(ω,τ)| = |I_2(ω,τ)| for τ < τ̃,  where τ̃ = 2(2 - ω)/((µ_min + µ_max)ω).

Denote

    ω̂ = [4(1-ε) + ε²(1+κ)]/[2 - ε(1-κ)].

Then it holds that

    ω^(0) - ω̂ = 2ε(1-ε)(2-ε)/[(κε - 2ε + 2)(κε - ε + 2)] > 0,
    τ(ε) ≤ τ̃ for ε ≤ 2/(κ+1) and 1 ≤ ω ≤ ω̂,
    τ(ε) > τ̃ for ε ≤ 2/(κ+1) and ω̂ < ω < ω^(0), or 2/(κ+1) < ε < 2/κ and 1 ≤ ω < ω^(0).

We now prove Case (a) by considering the following two cases.

(a1) 2/(κ+1) < ε ≤ 2/κ and 1 ≤ ω < ω^(0). Since τ(ε) > τ̃, we consider the case τ(ε) ≤ τ < 2(2-ω)/(ωµ_max) only. It holds that |I_1(ω,τ)| > |I_2(ω,τ)|, and we want to minimize the function |I(ω,τ)| = |I_1(ω,τ)|; see (8). Since √Δ(µ_max) - √Δ(µ_min) is a monotonically increasing function with respect to τ, we declare that |I(ω,τ)| is a monotonically increasing function with respect to τ too. Hence |I(ω,τ)| attains its minimum at

    τ^(1) := τ(ε) = ε(ω - ε)/(ω(1 - ε)µ_min).

Substituting τ by τ^(1) in (8), we know that |I(ω, τ^(1))| is an increasing function with respect to ω, and it achieves its minimum at ω^(1) = 1. Correspondingly, we have

    τ^(1) = ε/µ_min,  |I|^(1) := |I(ω^(1), τ^(1))| = κε - 1.

(a2) ε ≤ 2/(κ+1) and 1 ≤ ω < ω^(0).

 (i) When ω̂ < ω < ω^(0), we have τ(ε) > τ̃. We consider the case τ(ε) ≤ τ < 2(2-ω)/(ωµ_max). The analysis is similar to (a1): since |I_1(ω,τ)| > |I_2(ω,τ)|, we want to minimize |I(ω,τ)| = |I_1(ω,τ)| in (8). Since √Δ(µ_max) - √Δ(µ_min) is monotonically increasing with respect to τ, |I(ω,τ)| is monotonically increasing with respect to τ too. Hence |I(ω,τ)| attains its minimum at

    τ^(2) := arg min_τ |I(ω,τ)| = τ(ε) = ε(ω - ε)/(ω(1 - ε)µ_min).

Substituting τ by τ^(2) in (8), we see that |I(ω, τ^(2))| is a monotonically increasing function with respect to ω, and it attains its minimum at ω^(2) = ω̂. Correspondingly, we obtain

    τ^(2) = 2ε(2-ε)/(µ_min[ε²(κ+1) + 4(1-ε)]),  |I|^(2) := |I(ω^(2), τ^(2))| = ε(2-ε)(κ-1)/(2 - ε + εκ).

 (ii) When 1 ≤ ω ≤ ω̂, we have τ(ε) ≤ τ̃. Consider the domain τ(ε) ≤ τ < 2(2-ω)/(ωµ_max).

  (1) In the case τ̃ ≤ τ < 2(2-ω)/(ωµ_max) we have |I_1(ω,τ)| ≥ |I_2(ω,τ)|. Now the analysis is similar to (a1): we want to minimize |I(ω,τ)| = |I_1(ω,τ)|. Since √Δ(µ_max) - √Δ(µ_min) is monotonically increasing with respect to τ, |I(ω,τ)| is monotonically increasing with respect to τ too. Hence |I(ω,τ)| attains its minimum at τ^(3) := τ̃. Therefore

    |I(ω, τ^(3))| = |I(ω, τ̃)| = (2 - ω)(κ - 1)/(κ + 1).

As |I(ω, τ̃)| is a decreasing function with respect to ω, its minimum is attained at ω^(3) = ω̂. Correspondingly, we have

    τ^(3) = 2ε(2-ε)(1+κ)/((µ_min + µ_max)[4(1-ε) + ε²(1+κ)]),  |I|^(3) = ε(2-ε)(κ-1)/(2 - ε + εκ).

  (2) In the case τ(ε) ≤ τ < τ̃ we have |I_2(ω,τ)| > |I_1(ω,τ)|. We are going to minimize |I(ω,τ)| = |I_2(ω,τ)| in (9). We introduce the auxiliary variable τ̂ = ωτ. Then

    |I(ω,τ)| = ½[τ̂(µ_max - µ_min) + √((ω + τ̂µ_min)² - 4τ̂µ_min) - √((ω + τ̂µ_max)² - 4τ̂µ_max)].

It is easy to verify that |I(ω,τ)| is a decreasing function with respect to the variable ω. Therefore, it achieves its minimum at ω^(4) = ω̂. When ω = ω̂, it holds that τ(ε) = τ̃ and ω̂τ(ε) = ω̂τ̃. Hence |I(ω^(4), τ)| = |I(ω^(4), τ̂)| achieves the minimum

    |I|^(4) = ε(2-ε)(κ-1)/(2 - ε + εκ)

at τ̂ = ω̂τ(ε) = ω̂τ̃, i.e.,

    τ^(4) = τ(ε) = τ̃ = 2ε(2-ε)(1+κ)/((µ_min + µ_max)[4(1-ε) + ε²(1+κ)]).

Case (b): ω ≤ 1 and τ ≥ (2 - ω + 2√(1-ω))/(ωµ_min). We declare that Case (b) is meaningful only when κ ≤ 2. If κ > 2, then

    (2 - ω + 2√(1-ω))/(ωµ_min) > 2(2 - ω)/(ωµ_max)

holds true, and there is no τ satisfying τ ∈ (0, 2(2-ω)/(ωµ_max)). When κ ≤ 2, f_2(µ,ω,τ) > ε holds for any eigenvalue µ, since

    τ ≥ (2 - ω + 2√(1-ω))/(ωµ_min) > ε(ω - ε)/(ω(1-ε)µ_min).

In this case |I_1(ω,τ)| ≥ |I_2(ω,τ)|, and we are going to minimize |I(ω,τ)| = |I_1(ω,τ)| in (11). Since

    ∂(f_1(µ_max,ω,τ) - f_1(µ_min,ω,τ))/∂τ ≥ 0,

|I(ω,τ)| attains its minimum at τ^(5) = (2 - ω + 2√(1-ω))/(ωµ_min). Moreover, since f_1(µ_max,ω,τ^(5)) - f_1(µ_min,ω,τ^(5)) is a monotonically decreasing function with respect to ω, we get the optimal parameter ω^(5) = 1. Correspondingly, we have

    τ^(5) = 1/µ_min,  |I|^(5) := |I(ω^(5), τ^(5))| = κ - 1.

Case (c): ω ≤ 1 and τ ≤ (2 - ω - 2√(1-ω))/(ωµ_max).

For this case |I_1(ω,τ)| ≤ |I_2(ω,τ)|, and we minimize the function |I(ω,τ)| = |I_2(ω,τ)|; see (10). We introduce the auxiliary parameter τ̂ = ωτ. Then τ̂ ≤ (2 - ω - 2√(1-ω))/µ_max and

    f_2(ω, τ̂, µ) := f_2(µ, ω, τ) = ½[ω + τ̂µ - √((ω + τ̂µ)² - 4τ̂µ)].

By straightforward calculation we have

    ∂²f_2(ω, τ̂, µ)/∂µ∂ω = -τ̂(ω - τ̂µ)/[(ω + τ̂µ)² - 4τ̂µ]^{3/2} < 0.

Hence

    ∂(f_2(ω, τ̂, µ_max) - f_2(ω, τ̂, µ_min))/∂ω < 0, or equivalently ∂|I(ω,τ)|/∂ω < 0.

We get the optimal parameter of ω in this case as ω^(6) = 1. Substituting ω^(6) into (10), we get

    |I(1,τ)| = τ̂(µ_max - µ_min) = τ(µ_max - µ_min).

Clearly |I(1,τ)| is a monotonically increasing function with respect to τ. We now discuss the cases κε ≤ 1 and κε > 1. For the former, since ε/µ_min ≤ τ ≤ 1/µ_max, the corresponding optimal parameter τ is τ^(6) = ε/µ_min and the corresponding minimal value is |I|^(6) = ε(κ - 1). When κε > 1, τ(ε) > (2 - ω - 2√(1-ω))/(ωµ_max) at ω = 1, so there is no suitable τ for this case.

Now we summarize Cases (a)-(c). The optimal parameters ω and τ depend strongly on κ and ε. When 2/(κ+1) < ε ≤ 2/κ, we choose the optimal pair of parameters from (ω^(1), τ^(1)) and (ω^(5), τ^(5)). From a straightforward calculation we get |I|^(5) > |I|^(1). Therefore, for this case

    ω_opt = ω^(1) = 1,  τ_opt = τ^(1) = ε/µ_min,

and the minimum of |I(ω,τ)| is |I|_opt = |I|^(1) = κε - 1.

When ε ≤ 2/(κ+1), we choose the optimal pair of parameters from (ω^(2), τ^(2)), (ω^(3), τ^(3)), (ω^(4), τ^(4)), (ω^(5), τ^(5)) and (ω^(6), τ^(6)). The corresponding values of the function |I(ω,τ)| are |I|^(2), |I|^(3), |I|^(4), |I|^(5) and |I|^(6). Obviously |I|^(2) is the minimum. Therefore, for this case

    ω_opt = ω^(2) = [4(1-ε) + ε²(1+κ)]/[2 - ε(1-κ)],  τ_opt = τ^(2) = 2ε(2-ε)/(µ_min[ε²(κ+1) + 4(1-ε)]),

and the minimum of |I(ω,τ)| is |I|_opt = |I|^(2) = ε(2-ε)(κ-1)/(2 - ε + εκ).
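Theorem 3.2 gives the Strategy A parameters in closed form, so they can be tabulated directly; the sketch below simply transcribes the two branches of the theorem as reconstructed above (the helper name is ours).

```python
def strategy_a_parameters(mu_min, mu_max, eps):
    """Optimal PU parameters of Strategy A as stated in Theorem 3.2 (sketch).

    mu_min, mu_max: extreme eigenvalues of Q^{-1} B^T A^{-1} B; requires eps < 2/kappa.
    Returns (omega_opt, tau_opt, interval_length).
    """
    kappa = mu_max / mu_min
    if not (0.0 < eps < 2.0 / kappa):
        raise ValueError("Theorem 3.2 assumes 0 < eps < 2/kappa")
    if eps > 2.0 / (kappa + 1.0):
        omega = 1.0
        tau = eps / mu_min
        length = kappa * eps - 1.0
    else:
        omega = (4.0 * (1 - eps) + eps**2 * (1 + kappa)) / (2.0 - eps * (1 - kappa))
        tau = 2.0 * eps * (2 - eps) / (mu_min * (eps**2 * (kappa + 1) + 4 * (1 - eps)))
        length = eps * (2 - eps) * (kappa - 1) / (2.0 - eps + eps * kappa)
    return omega, tau, length

print(strategy_a_parameters(mu_min=0.2, mu_max=1.0, eps=0.05))
```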

Theorem 3.3. Consider the PU preconditioning optimization with Strategy B. Let A ∈ R^{m×m} be symmetric positive definite, B ∈ R^{m×n} be of full column rank, and Q ∈ R^{n×n} be symmetric positive definite. Denote the smallest and the largest eigenvalues and the condition number of the matrix J = Q^{-1}B^T A^{-1}B by µ_min, µ_max and κ = µ_max/µ_min. Let the constant ε in (7) be less than 2/κ. Then, when κε ≤ 1, the optimal parameters are

    ω^(opt) = 1,  τ^(opt) = 1/µ_max,  min |Ĩ(ω,τ)| = 1 - 1/κ.

When 1 < κε ≤ 2, the optimal parameters are

    ω^(opt) = 1,  τ^(opt) = ε/µ_min,  min |Ĩ(ω,τ)| = ε(κ - 1).

Proof. When κε ≤ 2 it always holds that τ(ε) < 2(2 - ω)/(ωµ_max). We are going to fulfill the proof according to the following three cases with respect to the parameters ω and τ.

Case (a): 1 ≤ ω < 2 and τ(ε) ≤ τ < 2(2-ω)/(ωµ_max). For this case, f_1(µ,ω,τ) and f_2(µ,ω,τ) are both monotonically increasing functions with respect to the variable µ. According to Strategy B we are going to minimize the measurement of Ĩ(ω,τ) = [f_2(µ_min,ω,τ), f_1(µ_max,ω,τ)]. We replace ωτ by the auxiliary variable τ̂. Then the measurement of the interval Ĩ(ω,τ) is a function of the variables ω and τ̂, i.e.,

    |Ĩ(ω,τ)| = ½[τ̂(µ_max - µ_min) + √((ω + τ̂µ_max)² - 4τ̂µ_max) + √((ω + τ̂µ_min)² - 4τ̂µ_min)].    (12)

It is obvious that |Ĩ(ω,τ)| is a monotonically increasing function with respect to the variable ω. We fix the optimal parameter ω at ω^(a) = 1. Then (12) is simplified to

    |Ĩ(1,τ)| = ½[τ̂(µ_max - µ_min) + |τ̂µ_max - 1| + |τ̂µ_min - 1|].

It is easy to verify that:
 (a1) when τ̂ ≤ 1/µ_max, |Ĩ(1,τ)| = 1 - τ̂µ_min ≥ 1 - 1/κ, and the equality holds when τ = 1/µ_max;
 (a2) when 1/µ_max ≤ τ̂ < 1/µ_min, |Ĩ(1,τ)| = τ̂(µ_max - µ_min) ≥ 1 - 1/κ, and the equality holds when τ = 1/µ_max;
 (a3) when τ̂ ≥ 1/µ_min, |Ĩ(1,τ)| = τ̂µ_max - 1 ≥ κ - 1, and the equality holds when τ = 1/µ_min.

We carry on our discussion under the constraint (7), i.e., τ ≥ τ(ε). When κε > 1 it holds that τ̂(ε) = τ(ε) ≥ 1/µ_min. Hence we omit case (a1) and only consider Cases (a2) and (a3). It is clear that |Ĩ(1,τ)| achieves its minimum

    min_τ |Ĩ(1,τ)| = ε(κ - 1)  at  τ^(a) = τ(ε) = ε/µ_min.

When κε ≤ 1, τ̂(ε) = τ(ε) ≤ 1/µ_max, all three cases exist and |Ĩ(1,τ)| achieves its minimum

    min_τ |Ĩ(1,τ)| = 1 - 1/κ  at  τ^(a) = 1/µ_max.

Case (b): ω ≤ 1 and τ ≥ (2 - ω + 2√(1-ω))/(ωµ_min). We declare that this case exists only when κ ≤ 2. Otherwise it holds that

    (2 - ω + 2√(1-ω))/(ωµ_min) > 2(2 - ω)/(ωµ_max),

which is incompatible with 0 < τ < 2(2-ω)/(ωµ_max). In Case (b), f_1(µ,ω,τ) is a monotonically increasing function with respect to the variable µ, while f_2(µ,ω,τ) is a monotonically decreasing function with respect to the variable µ. According to Strategy B we are going to minimize the function |Ĩ(ω,τ)| = f_1(µ_max,ω,τ) - f_2(µ_max,ω,τ). We replace ωτ by the auxiliary variable τ̂. Then the measurement of the interval Ĩ(ω,τ) is a function of the variables ω and τ̂, and it satisfies

    |Ĩ(ω,τ̂)| = √((ω + τ̂µ_max)² - 4τ̂µ_max).

For this case |Ĩ(ω,τ̂)| is a monotonically increasing function with respect to the variable τ̂. We fix the parameter τ̂ at τ̂^(b) = (2 - ω + 2√(1-ω))/µ_min and substitute τ̂^(b) into the expression of |Ĩ(ω,τ̂)|. It is obvious that |Ĩ(ω,τ̂^(b))| is a monotonically decreasing function with respect to the variable ω. So the optimal parameters for this case are ω^(b) = 1 and τ^(b) = 1/µ_min, and the corresponding measurement of the interval is

    min_{ω,τ} |Ĩ(ω,τ)| = κ - 1.

Case (c): ω ≤ 1 and τ ≤ (2 - ω - 2√(1-ω))/(ωµ_max). For this case, f_1(µ,ω,τ) is a monotonically decreasing function while f_2(µ,ω,τ) is a monotonically increasing function with respect to the variable µ. According to Strategy B we are going to minimize the measurement of the interval Ĩ(ω,τ) = [f_2(µ_min,ω,τ), f_1(µ_min,ω,τ)]. We replace ωτ by the auxiliary variable τ̂. Then the measurement of the interval Ĩ(ω,τ) is a function of the variables ω and τ̂, and it satisfies

    |Ĩ(ω,τ̂)| = √((ω + τ̂µ_min)² - 4τ̂µ_min).

For this case |Ĩ(ω,τ̂)| is a monotonically decreasing function with respect to the variable τ̂. We fix the parameter τ̂ at

    τ̂^(c) = (2 - ω - 2√(1-ω))/µ_max

and substitute τ̂^(c) into the expression of |Ĩ(ω,τ̂)|. It can be verified that |Ĩ(ω,τ̂^(c))| is increasing with respect to ω when

    0 < ω ≤ [κ² - 10κ + (κ + 2)√(κ(κ + 8))]/(2(κ - 1)²)

and decreasing with respect to ω when

    [κ² - 10κ + (κ + 2)√(κ(κ + 8))]/(2(κ - 1)²) < ω ≤ 1.

So ω = 1 is a local minimum point. We abandon the other local minimum point, namely zero, since the preconditioned matrix will be nearly singular when ω → 0. When ω = 1, τ(ε) ≤ 1/µ_max if and only if κε ≤ 1. We consider this case under the assumption κε ≤ 1. The optimal parameters for this case are ω^(c) = 1 and τ^(c) = 1/µ_max, and the corresponding interval measurement is

    min_{ω,τ} |Ĩ(ω,τ)| = 1 - 1/κ.

By summarizing the aforementioned cases, we draw the following conclusion: when κε ≤ 1,

    ω^(opt) = 1,  τ^(opt) = 1/µ_max,  min_{ω,τ} |Ĩ(ω,τ)| = 1 - 1/κ;

when 1 < κε < 2,

    ω^(opt) = 1,  τ^(opt) = ε/µ_min,  min_{ω,τ} |Ĩ(ω,τ)| = ε(κ - 1).

Remark 1. The efficiency of Strategies A and B strongly depends on the condition number of the preconditioned Schur complement matrix J. In other words, Q should be a good approximation to B^T A^{-1} B. Several approximations were suggested in [5,7,18,26,27,29]. In particular, for the Stokes problem the pressure mass matrix will be a reliable candidate; see [22]. As revealed in the last two theorems, the condition number κ of the matrix J is closely related to the spectral distribution of the preconditioned matrix, the constant ε and the optimal parameters.

Remark 2. The optimal relaxation factors of the PU iteration method in [7] are

    ω* = 4√κ/(√κ + 1)²,  τ* = 1/√(µ_min µ_max).    (13)

They are different from the parameters chosen by either Strategy A or Strategy B. With the parameters ω* and τ*, the eigenvalues of the preconditioned matrix are

    λ = 4√κ/(√κ + 1)²  and  λ = ½[(ω* + ω*τ*µ) ± i√(4ω*τ*µ - (ω* + ω*τ*µ)²)].

The real parts Re(λ) of these eigenvalues are in the range [2/(1 + √κ), 2√κ/(1 + √κ)], and the moduli of the eigenvalues |λ| are in the range [2/(1 + √κ), 2√κ/(1 + √κ)]. We refer to [3] for some practical techniques that can be used to iteratively compute the optimal parameters of relaxed splitting methods such as the SOR.
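Theorem 3.3 and Remark 2 likewise give closed-form parameter choices. The helpers below transcribe them as reconstructed above: Strategy B, and the PU-iteration factors (13) that are used later for the PGMRES-C runs. Function names are ours, not the paper's.

```python
import math

def strategy_b_parameters(mu_min, mu_max, eps):
    """Optimal PU parameters of Strategy B as stated in Theorem 3.3 (sketch)."""
    kappa = mu_max / mu_min
    if kappa * eps <= 1.0:
        return 1.0, 1.0 / mu_max, 1.0 - 1.0 / kappa
    return 1.0, eps / mu_min, eps * (kappa - 1.0)

def pu_iteration_parameters(mu_min, mu_max):
    """Optimal relaxation factors (13) of the PU iteration from [7] (Remark 2)."""
    kappa = mu_max / mu_min
    omega = 4.0 * math.sqrt(kappa) / (math.sqrt(kappa) + 1.0) ** 2
    tau = 1.0 / math.sqrt(mu_min * mu_max)
    return omega, tau

print(strategy_b_parameters(0.2, 1.0, 0.05), pu_iteration_parameters(0.2, 1.0))
```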

Table 1. The corresponding parameters for Example 4.1 (columns: N = 187, 659, 2467, 9539; values as printed).
  ε = ε1: ε 005 005 005 005 | Strategy A: ω 156 156 156 156, τ 036 034 034 034 | Strategy B: ω 100 100 100 100, τ 035 034 033 033
  ε = ε2: ε 015 014 014 014 | Strategy A: ω 11 11 11 11, τ 100 097 096 096 | Strategy B: ω 100 100 100 100, τ 070 067 067 067
  ε = ε3: ε 00 019 019 019 | Strategy A: ω 100 100 100 100, τ 133 19 18 17 | Strategy B: ω 100 100 100 100, τ 070 067 067 067
  PGMRES-C: ω 074 073 073 073, τ 16 1 11 11

4. Numerical results

In this section we use examples to further examine the effectiveness of the parameterized Uzawa preconditioners for solving the saddle point problems (1), from the aspects of the number of iteration steps (denoted by IT), elapsed CPU time in seconds (denoted by CPU) and norm of relative residual vectors (denoted by RES). Here RES is defined by

    RES := √(‖b - Ax^(k) - By^(k)‖² + ‖q - B^T x^(k)‖²) / √(‖b‖² + ‖q‖²),

with (x^(k)T, y^(k)T)^T being the current approximate solution. In our computations, all runs of the Krylov subspace methods are started from the initial vector (x^(0)T, y^(0)T)^T = 0 and terminated if the current iterations satisfy RES ≤ 10^{-7} or if the prescribed number of iterations k_max = 500 is exceeded.

To investigate the influence of ε in (6) and (7) on Strategy A and Strategy B, we select the constant ε in different intervals, namely ε_1 ∈ (0, 1/κ), ε_2 ∈ [1/κ, 2/(κ+1)] and ε_3 ∈ [2/(κ+1), 2/κ], where κ is the condition number of the matrix Q^{-1}B^T A^{-1}B. The optimal parameters ω and τ are acquired according to Theorems 3.2 and 3.3 subsequently. We denote the PU preconditioned GMRES methods by PGMRES-A, PGMRES-B and PGMRES-C, since the ω and τ in them are advised by Strategy A, Strategy B and Remark 2, respectively. In the PGMRES-tri method the block triangular preconditioner

    M = [ A  0 ; B^T  -Q ]    (14)

is used. It is a special PU preconditioner with ω = 1 and τ = 1; see [14,24,28]. We compare these methods with GMRES without preconditioning for each example. The first example is generated by running the Incompressible Flow Iterative Solution Software (IFISS) introduced in [22].

Example 4.1. Consider the Stokes equation

    -∆u + ∇p = 0,  ∇·u = 0

in the square domain Ω = (-1, 1)², with the natural outflow boundary condition ∂u/∂n - np = s on ∂Ω. We discretize the Stokes equation by the Q2-Q1 approximation and obtain the linear system (1). The approximate matrix Q is the positive definite pressure mass matrix generated by the mixed-element discretization.

In Table 1 we list the optimal parameters from Theorems 3.2 and 3.3 and Remark 2 for the different choices of ε. For the different problem scales N and ε_i (i = 1, 2, 3) (taken as the midpoints of the corresponding intervals), the ω and τ of the two strategies are quite stable, due to the advisable choice of Q. In Table 2 we list the numerical results in terms of IT, CPU and RES for the testing methods for Example 4.1 with different problem sizes. From this table we see that all the PU preconditioned GMRES methods are faster than the GMRES method without preconditioning. In most of the cases PGMRES-A, PGMRES-B and PGMRES-tri all outperform PGMRES-C. The performance of PGMRES-A is comparable with PGMRES-tri when ε = ε_3. Compared to PGMRES-tri, PGMRES-A and PGMRES-B have no distinct advantage for this example.
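In practice the PU preconditioner is applied inside GMRES through two block solves rather than by forming M(ω,τ). The sketch below builds such an operator with SciPy and computes the stopping quantity RES defined above; it follows the block form adopted in Section 2, all names are illustrative, and the exact GMRES tolerance keyword (tol vs. rtol) depends on the SciPy version.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres
from scipy.linalg import lu_factor, lu_solve

def make_pu_operator(A, B, Q, omega, tau):
    """LinearOperator applying r -> M(omega, tau)^{-1} r by two block solves.

    With M = [[A/omega, 0], [B^T, -Q/tau]], a solve splits into
    x = omega * A^{-1} r1  and  y = tau * Q^{-1} (B^T x - r2).
    """
    m, n = B.shape
    A_lu, Q_lu = lu_factor(A), lu_factor(Q)

    def apply(r):
        r1, r2 = r[:m], r[m:]
        x = omega * lu_solve(A_lu, r1)
        y = tau * lu_solve(Q_lu, B.T @ x - r2)
        return np.concatenate([x, y])

    return LinearOperator((m + n, m + n), matvec=apply)

def res(A, B, b, q, x, y):
    """Relative residual RES used as the stopping criterion in Section 4."""
    num = np.linalg.norm(b - A @ x - B @ y) ** 2 + np.linalg.norm(q - B.T @ x) ** 2
    den = np.linalg.norm(b) ** 2 + np.linalg.norm(q) ** 2
    return np.sqrt(num / den)

# Usage (illustrative): with Ahat and f assembled as in (1),
#   M_op = make_pu_operator(A, B, Q, omega, tau)
#   z, info = gmres(Ahat, f, M=M_op, restart=30, maxiter=500)
```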

Table 2. IT, CPU and RES with different ε for Example 4.1 (columns: N = 187, 659, 2467, 9539; values as printed).
  PGMRES-A (ε = ε1): IT 4 8 7 9; CPU 013 018 093 496; RES 330e-08 37e-08 939e-08 33e-08
  PGMRES-B (ε = ε1): IT 13 16 16 19; CPU 00 011 057 33; RES 96e-08 163e-08 695e-08 148e-08
  PGMRES-A (ε = ε2): IT 18 0 0; CPU 003 013 070 381; RES 631e-08 383e-08 583e-08 308e-08
  PGMRES-B (ε = ε2): IT 13 16 17 19; CPU 00 011 060 334; RES 56e-08 56e-08 807e-08 50e-09
  PGMRES-A (ε = ε3): IT 14 16 18 18; CPU 003 011 063 316; RES 84e-09 45e-08 106e-08 863e-08
  PGMRES-B (ε = ε3): IT 13 16 17 19; CPU 00 011 060 334; RES 56e-08 56e-08 807e-08 50e-09
  PGMRES-C: IT 1 4 5 5; CPU 004 016 086 431; RES 869e-08 711e-08 408e-08 434e-08
  PGMRES-tri: IT 13 17 18 19; CPU 00 01 064 33; RES 800e-08 675e-09 90e-09 55e-08
  GMRES: IT 86 70 500 500; CPU 010 087 998 4873; RES 370e-08 995e-08 483e-07 388e-05

The reason is that we choose a very effective Q to approximate B^T A^{-1}B, so that all the eigenvalues of Q^{-1}B^T A^{-1}B are located in [0, 2] and the behavior of the preconditioner is not very sensitive with respect to τ. We plot the eigenvalues of the coefficient matrix and the preconditioned matrices in Figs. 1 and 2. In terms of the spectral distribution of the PU preconditioned method, Strategy B performs better than Strategy A in the case of ε_1 = 0.05, and they both outperform PGMRES-C with parameters ω* and τ*, since there are a number of complex eigenvalues in the curve PU-C. The distribution of eigenvalues affects the preconditioning performance. This is coincident with the result in Table 2.

Fig. 1. The spectrum of the coefficient matrix (left) and the preconditioned matrix with parameters in (13) (right) for Example 4.1 (N = 187).

Example 4.2 ([7,15]). Consider the augmented linear system (1), in which

    A = [ I⊗T + T⊗I   0 ; 0   I⊗T + T⊗I ] ∈ R^{2p²×2p²},  B = [ I⊗F ; F⊗I ] ∈ R^{2p²×p²},

Fig. 2. The spectrum of the PU preconditioned matrix (N = 187) for Example 4.1. PU-A: PU preconditioner of Strategy A; PU-B: PU preconditioner of Strategy B; PU-tri: PU preconditioner in (14).

Table 3. Choices of the matrix Q.
  Case no.  Matrix Q         Description
  I         B^T Â^{-1} B     Â = tridiag(A)
  II        B^T Â^{-1} B     Â = diag(A)

Table 4. Corresponding parameters of Case I for Example 4.2 (columns: N = 108, 588, 1452, 2700, 4332; values as printed).
  ε = ε1: ε 0056 0013 0006 0003 000 | Strategy A: ω 1557 1590 1596 1598 1598, τ 0104 006 001 0007 0004 | Strategy B: ω 1000 1000 1000 1000 1000, τ 010 006 001 0007 0004
  ε = ε2: ε 0158 0040 0018 0010 0006 | Strategy A: ω 1115 1136 1140 1141 114, τ 091 0078 0035 000 0013 | Strategy B: ω 1000 1000 1000 1000 1000, τ 004 005 003 0013 0008
  ε = ε3: ε 014 0053 003 0013 0008 | Strategy A: ω 1000 1000 1000 1000 1000, τ 0387 0103 0046 006 0017 | Strategy B: ω 1000 1000 1000 1000 1000, τ 004 005 003 0013 0008
  PGMRES-C: ω 0753 0484 0353 078 09, τ 0607 030 015 0161 019

where

    T = (1/h²)·tridiag(-1, 2, -1) ∈ R^{p×p},  F = (1/h)·tridiag(-1, 1, 0) ∈ R^{p×p},

with ⊗ being the Kronecker product symbol and h = 1/(p+1) the discretization mesh size. For this example we have m = 2p² and n = p², hence the total number of variables is m + n = 3p². We choose the matrix Q, the approximation to the matrix B^T A^{-1} B, as in the cases listed in Table 3; a construction of A and B is sketched after this paragraph.

In Tables 4 and 5 we list the optimal parameters of Strategy A and Strategy B and the optimal relaxation factors given in [7] for various problem sizes (m, n) and approximate matrices Q for Example 4.2. The corresponding numerical results are listed in Table 6 and Table 7. In the sense of iteration steps and CPU time, PGMRES-B is faster than the other preconditioned GMRES methods for each case of Q and ε. In the case of ε = ε_3 the performance of PGMRES-A is comparable with, and better than, PGMRES-tri, while in the other cases PGMRES-tri is faster than PGMRES-A and PGMRES-C. All of these PU preconditioned methods are more efficient than the GMRES method without preconditioning.
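Assuming T and F are the scaled tridiagonal difference matrices defined above, the test matrices of Example 4.2 can be assembled with Kronecker products as follows. This is an illustrative sketch under that assumption, not code from the paper.

```python
from scipy.sparse import identity, kron, diags, bmat

def build_example_42(p):
    """Assemble A and B of Example 4.2 for mesh size h = 1/(p+1) (sketch)."""
    h = 1.0 / (p + 1)
    I = identity(p, format="csr")
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(p, p), format="csr") / h**2
    F = diags([-1.0, 1.0], [-1, 0], shape=(p, p), format="csr") / h
    L = kron(I, T) + kron(T, I)                               # p^2 x p^2 block
    A = bmat([[L, None], [None, L]], format="csr")            # 2p^2 x 2p^2
    B = bmat([[kron(I, F)], [kron(F, I)]], format="csr")      # 2p^2 x p^2
    return A, B

A, B = build_example_42(p=6)   # m + n = 3*p^2 = 108, the smallest case in Tables 4-7
print(A.shape, B.shape)
```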

Table 5. Corresponding parameters of Case II for Example 4.2 (columns: N = 108, 588, 1452, 2700, 4332; values as printed).
  ε = ε1: ε 0031 0007 0003 000 0001 | Strategy A: ω 1577 1595 1598 1599 1599, τ 0059 0014 0006 0003 000 | Strategy B: ω 1000 1000 1000 1000 1000, τ 0058 0014 0006 0003 000
  ε = ε2: ε 0088 001 0009 0005 0003 | Strategy A: ω 118 1139 1141 114 114, τ 0169 0041 0018 0010 0006 | Strategy B: ω 1000 1000 1000 1000 1000, τ 0116 007 001 0007 0004
  ε = ε3: ε 0119 008 001 0007 0004 | Strategy A: ω 1000 1000 1000 1000 1000, τ 05 0054 004 0013 0008 | Strategy B: ω 1000 1000 1000 1000 1000, τ 0116 007 001 0007 0004
  PGMRES-C: ω 0636 0377 067 007 0168, τ 0469 033 0154 0115 009

Table 6. IT, CPU and RES of Case I for Example 4.2 (columns: N = 108, 588, 1452, 2700, 4332; values as printed).
  PGMRES-A (ε = ε1): IT 8 48 66 81 96; CPU 005 017 088 30 869; RES 543e-08 475e-08 519e-08 610e-08 617e-08
  PGMRES-B (ε = ε1): IT 17 8 38 47 56; CPU 001 010 051 185 505; RES 95e-08 756e-08 946e-08 969e-08 960e-08
  PGMRES-A (ε = ε2): IT 3 41 56 68 8; CPU 00 015 074 67 740; RES 955e-08 554e-08 60e-08 895e-08 698e-08
  PGMRES-B (ε = ε2): IT 16 7 38 47 55; CPU 001 010 051 185 496; RES 557e-08 956e-08 544e-08 619e-08 977e-08
  PGMRES-A (ε = ε3): IT 17 8 38 46 55; CPU 001 010 051 181 496; RES 97e-08 70e-08 70e-08 75e-08 595e-08
  PGMRES-B (ε = ε3): IT 16 7 38 47 55; CPU 001 010 051 185 496; RES 557e-08 956e-08 544e-08 619e-08 977e-08
  PGMRES-C: IT 3 43 6 79 94; CPU 00 015 083 311 851; RES 716e-08 733e-08 989e-08 993e-08 846e-08
  PGMRES-tri: IT 18 30 41 5 6; CPU 00 011 055 04 559; RES 10e-08 773e-08 856e-08 614e-08 565e-08

From Figs. 3 and 4 we see that the conditioning of the original problem is much worse than that of the preconditioned system. As far as the spectral distribution is concerned, the strategies for preconditioning optimization are successful.

From these examples we find that, in the case of ε ∈ [2/(κ+1), 2/κ], Strategy A is much more effective than in the other cases. Correspondingly, ω = 1 is obtained, and the difference between Strategy A and Strategy B then depends on the choice of τ. The performance of the two strategies is similar when the PU preconditioner does not depend on τ sensitively.

5. Concluding remarks

In recent years quite a few structured preconditioners have been studied for saddle point problems, e.g., the Hermitian and skew-Hermitian splitting preconditioners in [8,10,12,15,20,30], the constraint preconditioners in [25], the restrictive preconditioners in [5,18], and so on. Initially the HSS method was used as a stationary iterative method for non-Hermitian positive definite systems in [9,11,15], and the optimal parameters for the stationary iteration are found to accelerate the iteration [8,9,15]. But the work of finding the parameters for optimizing the preconditioning is more difficult [8,20]. In [30], Simoncini and Benzi presented that the eigenvalues of the preconditioned matrix are clustered when the parameter

Table 7. IT, CPU and RES of Case II for Example 4.2 (columns: N = 108, 588, 1452, 2700, 4332; values as printed).
  PGMRES-A (ε = ε1): IT 31 58 79 98 117; CPU 006 01 010 334 887; RES 738e-08 503e-08 81e-08 610e-08 705e-08
  PGMRES-B (ε = ε1): IT 19 34 47 59 69; CPU 00 013 058 198 516; RES 360e-08 844e-08 603e-08 730e-08 886e-08
  PGMRES-A (ε = ε2): IT 7 41 67 84 10; CPU 00 015 084 84 770; RES 39e-08 554e-08 91e-08 873e-08 541e-08
  PGMRES-B (ε = ε2): IT 19 50 46 58 68; CPU 00 019 057 195 509; RES 460e-08 315e-08 63e-08 734e-08 881e-08
  PGMRES-A (ε = ε3): IT 18 33 45 56 66; CPU 00 01 056 188 494; RES 839e-08 94e-08 757e-08 817e-08 938e-08
  PGMRES-B (ε = ε3): IT 19 33 46 58 68; CPU 00 01 057 195 509; RES 460e-08 94e-08 63e-08 734e-08 881e-08
  PGMRES-C: IT 8 55 78 100 1; CPU 00 00 098 341 97; RES 515e-08 36e-08 848e-08 94e-08 810e-08
  PGMRES-tri: IT 0 36 50 64 75; CPU 00 014 06 15 56; RES 393e-08 97e-08 964e-08 811e-08 964e-08
  GMRES: IT 67 18 301 430 500; CPU 006 045 1418 4873 3730; RES 437e-08 988e-08 979e-08 839e-08 14e-06

Fig. 3. The spectrum of the coefficient matrix (left) and the preconditioned matrix with parameters in (13) (right) for Example 4.2 (N = 108).

α → 0+. Unfortunately, the near singularity of the preconditioned matrix accompanies the clustering result, so that the Krylov subspace methods converge slowly. In this paper the parameters of the preconditioners are chosen so that the eigenvalues of the preconditioned matrix have a good distribution. We consider the eigenvalue clustering by compressing the distribution of the eigenvalues as well as by constraining the lower bound of the eigenvalues. The motivation behind the constraining is to ensure that all the eigenvalues of the preconditioned matrix are away from the origin. The strategy may be extended to choose the iteration parameters involved in the HSS [12], the NSS(1) [13], the PSS(2) [11] and the BTSS(3) [11] iteration methods, etc.

(1) NSS is the abbreviation of the term normal and skew-Hermitian splitting.
(2) PSS is the abbreviation of the term positive-definite and skew-Hermitian splitting.
(3) BTSS is the abbreviation of the term block triangular and skew-Hermitian splitting.

Fig. 4. The spectrum of the PU preconditioned matrix (N = 108) for Example 4.2. PU-A: PU preconditioner of Strategy A; PU-B: PU preconditioner of Strategy B; PU-tri: PU preconditioner in (14).

References

[1] O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, 1994.
[2] Z.-Z. Bai, Structured preconditioners for nonsingular matrices of block two-by-two structures, Math. Comput. 75 (2006) 791-815.
[3] Z.-Z. Bai, X.-B. Chi, Asymptotically optimal successive overrelaxation methods for systems of linear equations, J. Comput. Math. 21 (2003) 603-612.
[4] Z.-Z. Bai, I.S. Duff, A.J. Wathen, A class of incomplete orthogonal factorization methods. I: Methods and theories, BIT 41 (2001) 53-70.
[5] Z.-Z. Bai, G.-Q. Li, Restrictively preconditioned conjugate gradient methods for systems of linear equations, IMA J. Numer. Anal. 23 (2003) 561-580.
[6] Z.-Z. Bai, G.-Q. Li, L.-Z. Lu, Combinative preconditioners of modified incomplete Cholesky factorization and Sherman-Morrison-Woodbury update for self-adjoint elliptic Dirichlet-periodic boundary value problems, J. Comput. Math. 22 (2004) 833-856.
[7] Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1-38.
[8] Z.-Z. Bai, G.H. Golub, Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems, IMA J. Numer. Anal. 27 (2007) 1-23.
[9] Z.-Z. Bai, G.H. Golub, C.-K. Li, Optimal parameter in Hermitian and skew-Hermitian splitting method for certain two-by-two block matrices, SIAM J. Sci. Comput. 28 (2006) 583-603.
[10] Z.-Z. Bai, G.H. Golub, C.-K. Li, Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite matrices, Math. Comp. 76 (2007) 287-298.
[11] Z.-Z. Bai, G.H. Golub, L.-Z. Lu, J.-F. Yin, Block triangular and skew-Hermitian splitting methods for positive-definite linear systems, SIAM J. Sci. Comput. 26 (2005) 844-863.
[12] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603-626.
[13] Z.-Z. Bai, G.H. Golub, M.K. Ng, On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations, Numer. Linear Algebra Appl. 14 (2007) 319-335.
[14] Z.-Z. Bai, M.K. Ng, On inexact preconditioners for nonsymmetric matrices, SIAM J. Sci. Comput. 26 (2005) 1710-1724.
[15] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) 1-32.
[16] Z.-Z. Bai, J.-C. Sun, D.-R. Wang, A unified framework for the construction of various matrix multisplitting iterative methods for large sparse systems of linear equations, Comput. Math. Appl. 32 (1996) 51-76.
[17] Z.-Z. Bai, C.-L. Wang, On the convergence of nonstationary multisplitting two-stage iteration methods for Hermitian positive definite linear systems, J. Comput. Appl. Math. 138 (2002) 287-296.
[18] Z.-Z. Bai, Z.-Q. Wang, Restrictive preconditioners for conjugate gradient methods for symmetric positive definite linear systems, J. Comput. Appl. Math. 187 (2006) 202-226.
[19] M. Benzi, Preconditioning techniques for large linear systems: A survey, J. Comput. Phys. 182 (2002) 418-477.
[20] M. Benzi, G.H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004) 20-41.
[21] K. Chen, Matrix Preconditioning Techniques and Applications, Cambridge University Press, Cambridge, 2005.
[22] H.C. Elman, D.J. Silvester, A.J. Wathen, Finite Elements and Fast Iterative Solvers: With Applications in Incompressible Fluid Dynamics, Oxford University Press, Oxford, 2005.
[23] A. Greenbaum, Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, 1997.
[24] I.C.F. Ipsen, A note on preconditioning nonsymmetric matrices, SIAM J. Sci. Comput. 23 (2001) 1050-1051.
[25] C. Keller, N.I.M. Gould, A.J. Wathen, Constraint preconditioning for indefinite linear systems, SIAM J. Matrix Anal. Appl. 21 (2000) 1300-1317.
[26] D. Loghin, A.J. Wathen, Schur complement preconditioning for elliptic systems of partial differential equations, Numer. Linear Algebra Appl. 10 (2003) 423-443.
[27] D. Loghin, A.J. Wathen, Analysis of preconditioners for saddle-point problems, SIAM J. Sci. Comput. 25 (2004) 2029-2049.
[28] M.F. Murphy, G.H. Golub, A.J. Wathen, A note on preconditioning for indefinite linear systems, SIAM J. Sci. Comput. 21 (2000) 1969-1972.
[29] D.J. Silvester, A.J. Wathen, Fast iterative solution of stabilised Stokes systems. Part II: Using general block preconditioners, SIAM J. Numer. Anal. 31 (1994) 1352-1367.
[30] V. Simoncini, M. Benzi, Spectral properties of the Hermitian and skew-Hermitian splitting preconditioner for saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004) 377-389.
[31] H.A. van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge, 2003.
[32] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962.