ON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER *

Journal of Computational Mathematics, Vol. xx, No. x. http://www.global-sci.org/jcm  doi:??
* Received xxx / Revised version received xxx / Accepted xxx /

Davod Hezari
Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran. Email: d.hezari@gmail.com
Vahid Edalatpour
Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran. Email: vedalat.math@gmail.com
Hadi Feyzollahzadeh
Department of Mathematical and Computer Science, Technical Faculty, University of Bonab, Bonab, Iran. Email: hadi.feyz@yahoo.com
Davod Khojasteh Salkuyeh
Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran. Email: khojasteh@guilan.ac.ir

Abstract

For nonsymmetric saddle point problems, Huang et al. (Numer. Algor., doi:10.1007/s11075-016-0236-2) established a generalized variant of the deteriorated positive semi-definite and skew-Hermitian splitting (GVDPSS) preconditioner to expedite the convergence speed of Krylov subspace iteration methods such as the GMRES method. In this paper, some new convergence properties as well as some new numerical results are presented to validate the theoretical results.

Mathematics subject classification: 65F10, 65N22.
Key words: Saddle point problem, preconditioner, nonsymmetric, symmetric, positive definite, Krylov subspace method.

1. Introduction

Consider the solution of large sparse saddle point problems of the form
\[
\mathcal{A}u \equiv \begin{pmatrix} A & B^{T} \\ -B & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ -g \end{pmatrix} \equiv b, \tag{1.1}
\]
where A \in R^{n \times n}, the matrix B \in R^{m \times n} is of full row rank with m \le n, and B^{T} denotes the transpose of the matrix B. Moreover, x, f \in R^{n} and y, g \in R^{m}. We are especially interested in the cases where the matrix A is symmetric positive definite or nonsymmetric with positive definite symmetric part (i.e., A is real positive). When A = A^{T}, the linear system (1.1) is called a symmetric saddle point problem and, when A \neq A^{T}, it is called a nonsymmetric saddle point problem. According to Lemma 1.1 in [7], the matrix \mathcal{A} is nonsingular.

In the last decade, there have been tremendous efforts to develop fast solution methods for solving saddle point problems. As is well known, Krylov subspace methods [15] are

the most effective methods for solving saddle point problems of the form (1.1). But the convergence rate of these methods depends closely on the eigenvalues and eigenvectors of the coefficient matrix [1,15], and they tend to converge slowly when applied to the saddle point problem (1.1). In general, favourable convergence rates of Krylov subspace methods are associated with a well-clustered spectrum of the preconditioned matrix (away from zero). Therefore, many kinds of preconditioners have been studied in the literature for the saddle point matrix, e.g., HSS-based preconditioners [2,4,5,7], block diagonal preconditioners [8], block triangular preconditioners [3,8], shift-splitting preconditioners [6,10,16], and so on.

Zhang and Gu in [19] established a variant of the deteriorated positive semi-definite and skew-Hermitian splitting (VDPSS) preconditioner of the form
\[
M_{VDPSS} = \begin{pmatrix} A & \frac{1}{\alpha}AB^{T} \\ -B & \alpha I \end{pmatrix} \tag{1.2}
\]
for the problem (1.1). Recently, Huang et al. in [13] proposed a generalization of the VDPSS preconditioner (the GVDPSS preconditioner) of the form
\[
P_{GVDPSS} = \begin{pmatrix} A & \frac{1}{\alpha}AB^{T} \\ -B & \beta I \end{pmatrix}. \tag{1.3}
\]
The difference between P_{GVDPSS} and \mathcal{A} is given by
\[
R_{GVDPSS} = P_{GVDPSS} - \mathcal{A} = \begin{pmatrix} 0 & \frac{1}{\alpha}AB^{T} - B^{T} \\ 0 & \beta I \end{pmatrix}.
\]
It follows from the latter equation that as \beta \to 0^{+} the (2,2)-block of R_{GVDPSS} tends to the zero matrix, and as \alpha \to +\infty the (1,2)-block of R_{GVDPSS} tends to -B^{T}. So it seems that the GVDPSS preconditioner with properly chosen parameters \alpha and \beta is closer to the coefficient matrix \mathcal{A} than the VDPSS preconditioner, owing to the independence of the parameters \alpha and \beta, and, as a result, the corresponding preconditioned matrix will have a better clustered spectrum. It can be seen that, by choosing different values of the parameters \alpha and \beta, the GVDPSS preconditioner coincides with some existing preconditioners such as the RHSS preconditioner [11], the REHSS preconditioner [17], the RDPSS preconditioner [9] and the VDPSS preconditioner [19]. The GVDPSS preconditioner can be derived from the GVDPSS iteration method. Huang et al. have presented the convergence properties of the GVDPSS iteration method and the spectral properties of the corresponding preconditioned matrix in [13], but nothing about the optimal values of the involved parameters. In this paper, we present new convergence properties and the optimal parameters which minimize the spectral radius of the iteration matrix of the GVDPSS iteration method.

2. New convergence results for the GVDPSS iteration method

The preconditioner P_{GVDPSS} can be induced by a fixed-point iteration, which is based on the following splitting of the coefficient matrix \mathcal{A}:
\[
\mathcal{A} = P_{GVDPSS} - R_{GVDPSS} = \begin{pmatrix} A & \frac{1}{\alpha}AB^{T} \\ -B & \beta I \end{pmatrix} - \begin{pmatrix} 0 & \frac{1}{\alpha}AB^{T} - B^{T} \\ 0 & \beta I \end{pmatrix}. \tag{2.1}
\]

Based on this splitting, the GVDPSS iteration method can be constructed as
\[
\begin{pmatrix} A & \frac{1}{\alpha}AB^{T} \\ -B & \beta I \end{pmatrix} u^{(k+1)} = \begin{pmatrix} 0 & \frac{1}{\alpha}AB^{T} - B^{T} \\ 0 & \beta I \end{pmatrix} u^{(k)} + \begin{pmatrix} f \\ -g \end{pmatrix}, \tag{2.2}
\]
where \alpha > 0 and \beta \ge 0 are the iteration parameters and u^{(0)} is an initial guess. Note that the iteration method can also be written in the fixed-point form u^{(k+1)} = \Gamma u^{(k)} + c, where
\[
\Gamma = \begin{pmatrix} A & \frac{1}{\alpha}AB^{T} \\ -B & \beta I \end{pmatrix}^{-1} \begin{pmatrix} 0 & \frac{1}{\alpha}AB^{T} - B^{T} \\ 0 & \beta I \end{pmatrix} \tag{2.3}
\]
is the iteration matrix and c = P_{GVDPSS}^{-1} b.

Theorem 2.1. Let A \in R^{n \times n} be symmetric positive definite and let \Gamma be the iteration matrix (2.3) of the iteration method (2.2). Then \rho(\Gamma) < 1 if one of the following two conditions holds true.
(i) \alpha > 0 and \beta \ge \max\{\lambda_{\max}(Q), 0\}, where \lambda_{\max}(Q) is the largest eigenvalue of the matrix Q = B(\frac{1}{2}A^{-1} - \frac{1}{\alpha}I)B^{T}.
(ii) \beta \ge 0 and 0 < \alpha < 2\lambda_{\min}(A), where \lambda_{\min}(A) is the smallest eigenvalue of A.

Proof. From Lemma 2.1 in [13], we get
\[
\Gamma = P_{GVDPSS}^{-1}R_{GVDPSS} = I - P_{GVDPSS}^{-1}\mathcal{A}
= I - \begin{pmatrix} I & -\frac{1}{\alpha}B^{T} \\ 0 & I \end{pmatrix}
      \begin{pmatrix} A^{-1} & 0 \\ S^{-1}BA^{-1} & S^{-1} \end{pmatrix}
      \begin{pmatrix} A & B^{T} \\ -B & 0 \end{pmatrix}
= I - \begin{pmatrix} I & A^{-1}B^{T} - \frac{1}{\alpha}B^{T}S^{-1}BA^{-1}B^{T} \\ 0 & S^{-1}BA^{-1}B^{T} \end{pmatrix} \tag{2.4}
\]
\[
= \begin{pmatrix} 0 & -\tilde{A} \\ 0 & I - \hat{A} \end{pmatrix}, \tag{2.5}
\]
where S = \beta I + \frac{1}{\alpha}BB^{T}, \tilde{A} = A^{-1}B^{T} - \frac{1}{\alpha}B^{T}S^{-1}BA^{-1}B^{T} and \hat{A} = S^{-1}BA^{-1}B^{T}. Therefore, if \lambda is an eigenvalue of the iteration matrix \Gamma, then \lambda = 0 or \lambda = 1 - \mu, where \mu is a solution of the eigenvalue problem
\[
S^{-1}BA^{-1}B^{T}x = \mu x. \tag{2.6}
\]
Since S = \beta I + \frac{1}{\alpha}BB^{T}, Eq. (2.6) is equivalent to
\[
BA^{-1}B^{T}x = \mu\left(\beta I + \frac{1}{\alpha}BB^{T}\right)x. \tag{2.7}
\]
It is clear that x \neq 0. Without loss of generality, we suppose that \|x\|_{2} = 1. Since B^{T}x \neq 0, multiplying the above equation by x^{*} on both sides yields
\[
\mu = \frac{x^{*}BA^{-1}B^{T}x}{\beta + \frac{1}{\alpha}x^{*}BB^{T}x} > 0.
\]
Therefore, |\lambda| < 1 if and only if
\[
\frac{x^{*}BA^{-1}B^{T}x}{\beta + \frac{1}{\alpha}x^{*}BB^{T}x} < 2,
\]

which is equivalent to
\[
\beta > \frac{1}{2}x^{*}BA^{-1}B^{T}x - \frac{1}{\alpha}x^{*}BB^{T}x. \tag{2.8}
\]
To prove part (i), let \alpha be an arbitrary positive constant. From the above discussion, it is easy to see that a sufficient condition for |\lambda| < 1 is
\[
\beta \ge \lambda_{\max}(Q) = \max_{\|y\|_{2}=1} y^{*}Qy \ge x^{*}Qx = \frac{1}{2}x^{*}BA^{-1}B^{T}x - \frac{1}{\alpha}x^{*}BB^{T}x,
\]
which completes the proof of (i). Now, we prove part (ii). If the right-hand side of the inequality (2.8) is less than zero, then for every \beta \ge 0 the inequality (2.8) holds true and, as a result, we get \rho(\Gamma) < 1. Hence, the parameter \alpha must be chosen in such a way that
\[
\frac{1}{2}x^{*}BA^{-1}B^{T}x - \frac{1}{\alpha}x^{*}BB^{T}x < 0,
\]
which is equivalent to
\[
\alpha < \frac{2y^{*}y}{y^{*}A^{-1}y}, \quad \text{where } y = B^{T}x.
\]
For the above relation to hold, it is enough to have
\[
\alpha < 2\lambda_{\min}(A) = \frac{2}{\lambda_{\max}(A^{-1})} = 2\min_{y \neq 0}\frac{y^{*}y}{y^{*}A^{-1}y} \le \frac{2y^{*}y}{y^{*}A^{-1}y},
\]
which completes the proof.

Now, we consider the GVDPSS iteration method (2.2) from another point of view. Let \omega \ge 0 be a fixed constant and consider parameters \alpha > 0 and \beta \ge 0 such that \omega = \alpha\beta. In this case, the iteration matrix \Gamma of the GVDPSS iteration method (2.2) is given by
\[
\Gamma = \begin{pmatrix} 0 & -\tilde{A} \\ 0 & I - \alpha(\omega I + BB^{T})^{-1}BA^{-1}B^{T} \end{pmatrix}. \tag{2.9}
\]
It follows from Eq. (2.9) that the spectral radius of the iteration matrix \Gamma is
\[
\rho(\Gamma) = \max_{1 \le i \le m} |1 - \alpha\mu_{i}|, \tag{2.10}
\]
where \mu_{i}, i = 1, \ldots, m, are the eigenvalues of the matrix (\omega I + BB^{T})^{-1}BA^{-1}B^{T}. Since (\omega I + BB^{T})^{-1}BA^{-1}B^{T} is similar to (\omega I + BB^{T})^{-1/2}BA^{-1}B^{T}(\omega I + BB^{T})^{-1/2}, the spectral radius (2.10) is in fact the same as that of the stationary Richardson iteration applied to the linear system
\[
(\omega I + BB^{T})^{-1/2}BA^{-1}B^{T}(\omega I + BB^{T})^{-1/2}x = b.
\]
Therefore, it is expected that the convergence properties of the Richardson iteration method and of the GVDPSS iteration method (2.2) with \beta = \omega/\alpha are the same. Then, using the above results and an argument like that of Theorem 3.1 in [11], we can deduce the following theorem.

Theorem 2.2. Let A \in R^{n \times n} be symmetric positive definite and let \Gamma be the iteration matrix (2.3) of the GVDPSS iteration method (2.2). Let \omega \ge 0 be a constant and consider parameters \alpha > 0 and \beta \ge 0 such that \omega = \alpha\beta. Then
\[
\rho(\Gamma) = \max_{1 \le i \le m} |1 - \alpha\mu_{i}|,
\]

where \mu_{i}, i = 1, 2, \ldots, m, are the eigenvalues of (\omega I + BB^{T})^{-1}BA^{-1}B^{T}. Let \mu_{m} and \mu_{1} be the smallest and largest eigenvalues of (\omega I + BB^{T})^{-1}BA^{-1}B^{T}, respectively. For every \omega \ge 0, if 0 < \alpha < 2/\mu_{1} and \beta = \omega/\alpha, then the GVDPSS iteration method is convergent. The optimal value of \alpha which minimizes the spectral radius \rho(\Gamma) is given by \alpha_{opt} = 2/(\mu_{1} + \mu_{m}), which yields \beta_{opt} = \omega/\alpha_{opt}. The corresponding optimal convergence factor is given by
\[
\rho(\Gamma) = \frac{\mu_{1} - \mu_{m}}{\mu_{1} + \mu_{m}}.
\]

Remark 2.1. When \omega and \beta are chosen so that the preconditioner P_{GVDPSS} reduces to the RHSS preconditioner, the corresponding optimal value is exactly the same as that of the RHSS preconditioner (Theorem 3.1 in [11]).

Theorem 2.3. Let A \in R^{n \times n} be symmetric positive definite, let the preconditioner P_{GVDPSS} be defined in (1.3), and let \alpha > 0 and \beta \ge 0. Let \lambda_{\min} and \lambda_{\max} be the smallest and largest eigenvalues of A, respectively, and let \sigma_{m} and \sigma_{1} be the smallest and largest singular values of the matrix B, respectively. Then the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} has the eigenvalue 1 with algebraic multiplicity at least n. The remaining eigenvalues are real and positive and are the solutions of the generalized eigenvalue problem
\[
BA^{-1}B^{T}x = \mu\left(\beta I + \frac{1}{\alpha}BB^{T}\right)x.
\]
In addition, the non-unit eigenvalues \mu satisfy
\[
\frac{1}{\lambda_{\max}(A)\left(\beta + \frac{1}{\alpha}\right)} \le \mu \le \frac{1}{\lambda_{\min}(A)\left(\beta + \frac{1}{\alpha}\right)}. \tag{2.11}
\]

Proof. It follows from Eq. (2.4) that
\[
P_{GVDPSS}^{-1}\mathcal{A} = \begin{pmatrix} I & \tilde{A} \\ 0 & \hat{A} \end{pmatrix}, \tag{2.12}
\]
where S = \beta I + \frac{1}{\alpha}BB^{T}, \tilde{A} = A^{-1}B^{T} - \frac{1}{\alpha}B^{T}S^{-1}BA^{-1}B^{T} and \hat{A} = S^{-1}BA^{-1}B^{T}. It follows from Eq. (2.12) that the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} has the eigenvalue 1 with algebraic multiplicity at least n, and that the remaining eigenvalues are the solutions of the generalized eigenvalue problem
\[
BA^{-1}B^{T}x = \mu\left(\beta I + \frac{1}{\alpha}BB^{T}\right)x. \tag{2.13}
\]
Since BB^{T} and BA^{-1}B^{T} are symmetric positive definite, and \alpha > 0, \beta \ge 0, the solutions of the generalized eigenvalue problem (2.13) are real and positive. This completes the proof of the first part of the theorem. Let (\mu, x) be an eigenpair of the generalized eigenvalue problem (2.13). If we set \hat{x} = x/\|B^{T}x\|_{2}, then (\mu, \hat{x}) is also an eigenpair of (2.13). As seen in the proof of Theorem 2.1, we can write
\[
\mu = \frac{\hat{x}^{*}BA^{-1}B^{T}\hat{x}}{\beta + \frac{1}{\alpha}\hat{x}^{*}BB^{T}\hat{x}} = \frac{\hat{x}^{*}BA^{-1}B^{T}\hat{x}}{\beta + \frac{1}{\alpha}}. \tag{2.14}
\]
According to Theorem 1.22 of [15], we have
\[
\hat{x}^{*}BA^{-1}B^{T}\hat{x} \le \lambda_{\max}(A^{-1})\,\hat{x}^{*}BB^{T}\hat{x} = \frac{1}{\lambda_{\min}(A)},
\]

and
\[
\hat{x}^{*}BA^{-1}B^{T}\hat{x} \ge \lambda_{\min}(A^{-1})\,\hat{x}^{*}BB^{T}\hat{x} = \frac{1}{\lambda_{\max}(A)},
\]
which together prove (2.11).

Remark 2.2. It immediately follows from Eq. (2.11) that if \beta \to 0^{+} then the non-unit eigenvalues \mu of the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} satisfy \alpha/\lambda_{\max}(A) \le \mu \le \alpha/\lambda_{\min}(A), and if \alpha \to +\infty, then 1/(\beta\lambda_{\max}(A)) \le \mu \le 1/(\beta\lambda_{\min}(A)).

Theorem 2.4. Let A \in R^{n \times n} be nonsymmetric and positive definite and let \Gamma be the iteration matrix (2.3) of the GVDPSS iteration method (2.2). Let \omega \ge 0 be a fixed constant and consider parameters \alpha > 0 and \beta \ge 0 such that \omega = \alpha\beta. Then
\[
\rho(\Gamma) = \max_{1 \le j \le m} |1 - \alpha(\gamma_{j} + i\eta_{j})|,
\]
where \gamma_{j} + i\eta_{j}, j = 1, \ldots, m, are the eigenvalues of (\omega I + BB^{T})^{-1}BA^{-1}B^{T} and i = \sqrt{-1}. Let \gamma_{1} and \gamma_{m}, and \eta_{1} and \eta_{m}, be the upper and lower bounds of the real parts and of the absolute values of the imaginary parts of the eigenvalues of (\omega I + BB^{T})^{-1}BA^{-1}B^{T}, respectively. For every \omega \ge 0, if
\[
0 < \alpha < \begin{cases} \dfrac{2\gamma_{m}}{\gamma_{m}^{2} + \eta_{1}^{2}}, & \text{if } \eta_{1}^{2} \ge \gamma_{1}\gamma_{m}, \\[2mm] \dfrac{2\gamma_{1}}{\gamma_{1}^{2} + \eta_{1}^{2}}, & \text{if } \eta_{1}^{2} < \gamma_{1}\gamma_{m}, \end{cases}
\]
and \beta = \omega/\alpha, then the GVDPSS iteration method is convergent. The optimal value of \alpha which minimizes the spectral radius \rho(\Gamma) is given by
\[
\alpha_{opt} = \begin{cases} \dfrac{\gamma_{m}}{\gamma_{m}^{2} + \eta_{1}^{2}}, & \text{if } \eta_{1}^{2} \ge \gamma_{1}\gamma_{m}, \\[2mm] \dfrac{2}{\gamma_{1} + \gamma_{m}}, & \text{if } \eta_{1}^{2} < \gamma_{1}\gamma_{m}, \end{cases}
\]
which yields \beta_{opt} = \omega/\alpha_{opt}. The corresponding optimal convergence factor is given by
\[
\rho_{opt}(\Gamma) = \begin{cases} \dfrac{\eta_{1}}{\sqrt{\gamma_{m}^{2} + \eta_{1}^{2}}}, & \text{if } \eta_{1}^{2} \ge \gamma_{1}\gamma_{m}, \\[2mm] \dfrac{\sqrt{(\gamma_{1} - \gamma_{m})^{2} + 4\eta_{1}^{2}}}{\gamma_{1} + \gamma_{m}}, & \text{if } \eta_{1}^{2} < \gamma_{1}\gamma_{m}. \end{cases}
\]

Proof. As seen before, when \beta = \omega/\alpha, the iteration matrix \Gamma is of the form (2.9). It shows that the spectral radius of the iteration matrix \Gamma is
\[
\rho(\Gamma) = \max_{1 \le j \le m} |1 - \alpha(\gamma_{j} + i\eta_{j})|, \tag{2.15}
\]
where \gamma_{j} + i\eta_{j}, j = 1, \ldots, m, are the eigenvalues of (\omega I + BB^{T})^{-1}BA^{-1}B^{T}. Because the rest of the proof can be carried out by a procedure similar to that of Theorem 2.2 in [9], it is omitted.

Remark 2.3. If \omega = 0, then \beta = 0 and, as a result, the preconditioner P_{GVDPSS} reduces to the RDPSS preconditioner. Hence, the corresponding optimal value is exactly the same as that of the RDPSS preconditioner (Theorem 2.2 in [9]).
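In practice, P_{GVDPSS} is used as a preconditioner for GMRES, and each application of P_{GVDPSS}^{-1} amounts to one solve with A and one solve with S = \beta I + (1/\alpha)BB^{T} through the block factorization used in the proof of Theorem 2.1. The following Python/SciPy fragment is a minimal sketch of this procedure, assuming the block form (1.3) as written above; the function name gvdpss_operator and the sample parameter values are ours and are only illustrative (the experiments reported in Section 3 are carried out in MATLAB).

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gvdpss_operator(A, B, alpha, beta):
    """LinearOperator applying P_GVDPSS^{-1} for
    P_GVDPSS = [[A, (1/alpha) A B^T], [-B, beta I]]  (illustrative sketch).
    Uses the factorization P = [[A, 0], [-B, S]] [[I, (1/alpha) B^T], [0, I]]
    with S = beta I + (1/alpha) B B^T, so one solve with A and one with S
    are needed per application."""
    n, m = A.shape[0], B.shape[0]
    S = beta * sp.identity(m) + (1.0 / alpha) * (B @ B.T)
    solve_A = spla.factorized(sp.csc_matrix(A))  # sparse direct factorization of A
    solve_S = spla.factorized(sp.csc_matrix(S))  # sparse direct factorization of S

    def apply(r):
        r1, r2 = r[:n], r[n:]
        z1 = solve_A(r1)                      # block lower-triangular solve
        z2 = solve_S(r2 + B @ z1)
        z1 = z1 - (1.0 / alpha) * (B.T @ z2)  # unit upper-triangular solve
        return np.concatenate([z1, z2])

    return spla.LinearOperator((n + m, n + m), matvec=apply)

# usage: preconditioned GMRES on the saddle point system (1.1)
# A_blk = sp.bmat([[A, B.T], [-B, None]], format="csr")
# M = gvdpss_operator(A, B, alpha=1.0, beta=0.01)
# u, info = spla.gmres(A_blk, b, M=M)

Since S is symmetric positive definite whenever \beta \ge 0 and B has full row rank, both sub-solves can also be carried out with sparse Cholesky or LU factorizations, which is what is done in the MATLAB experiments below.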

3. Numerical Experiments

In this section, we use two examples to validate the theoretical analysis of the previous section and to illustrate the feasibility and effectiveness of the GVDPSS preconditioner (1.3) when it is applied to accelerate the convergence rate of Krylov subspace iteration methods such as GMRES for solving saddle point problems (1.1), from the point of view of both the number of iterations (denoted by IT) and the elapsed CPU time in seconds (denoted by CPU). As mentioned before, the GVDPSS preconditioner (1.3) covers the RHSS preconditioner [11] and the REHSS preconditioner [17] when it is applied to solve symmetric saddle point problems, as well as the RDPSS preconditioner [9] and the VDPSS preconditioner [19] when it is applied to solve nonsymmetric saddle point problems. The first and second examples lead to a symmetric and a nonsymmetric saddle point problem, respectively. The GMRES method incorporated with the RHSS preconditioner, the REHSS preconditioner, the GVDPSS preconditioner, the PHSS preconditioner [5] and the AHSS preconditioner [2] is applied to solve the symmetric saddle point problem in the first example, and the results are compared with each other. In addition, the RDPSS and VDPSS preconditioners as well as the GVDPSS preconditioner are applied to solve the nonsymmetric saddle point problem in the second example, and their results are compared with each other. All tests are performed in MATLAB on a laptop with an Intel Core i7 1.8 GHz CPU and 6GB RAM. In all tests, the right-hand side vector b is chosen so that the exact solution of the saddle point problem (1.1) is the vector of all ones. Moreover, all runs are started from the zero vector and are terminated when the current iterate satisfies \|r_{k}\|_{2} \le 10^{-6}\|r_{0}\|_{2}, where r_{k} = b - \mathcal{A}u^{(k)} is the residual at the k-th iteration. At each step of applying the preconditioners, it is required to solve sub-linear systems, which can be done by direct methods. In MATLAB, this corresponds to computing the Cholesky or LU factorization in combination with the AMD or column AMD reordering.

Fig. 3.1: Eigenvalue distribution of the saddle point matrix \mathcal{A} for Example 3.1 with q = 32 (left) and for Example 3.2 with a 32 x 32 grid (right).

Example 3.1. (See [5].) We consider the Stokes problem
\[
\begin{cases} -\mu\Delta u + \nabla p = f, & \text{in } \Omega, \\ \nabla\cdot u = g, & \text{in } \Omega, \\ u = 0, & \text{on } \partial\Omega, \\ \int_{\Omega} p\,dx = 0, \end{cases} \tag{3.1}
\]

where \Omega = (0,1) \times (0,1), \partial\Omega is the boundary of the domain \Omega, \Delta is the componentwise Laplace operator, u is a vector-valued function representing the velocity, and p is a scalar function representing the pressure. By discretizing (3.1) with the upwind scheme, we obtain the system of linear equations (1.1), in which
\[
A = \begin{pmatrix} I \otimes T + T \otimes I & 0 \\ 0 & I \otimes T + T \otimes I \end{pmatrix} \in R^{2q^{2} \times 2q^{2}}, \qquad B^{T} = \begin{pmatrix} I \otimes F \\ F \otimes I \end{pmatrix} \in R^{2q^{2} \times q^{2}},
\]
and
\[
T = \frac{\mu}{h^{2}}\,\mathrm{tridiag}(-1, 2, -1) \in R^{q \times q}, \qquad F = \frac{1}{h}\,\mathrm{tridiag}(-1, 1, 0) \in R^{q \times q},
\]
with \otimes being the Kronecker product symbol and h = 1/(q+1) the discretization meshsize. For this example, we have n = 2q^{2} and m = q^{2}. Hence, the total number of variables is m + n = 3q^{2}. Here, we consider \mu = 1.

Fig. 3.2: Eigenvalue distribution of the preconditioned matrix for Example 3.1 (q = 32); the four panels correspond to two choices of (\alpha, \beta), the REHSS preconditioner, and the RHSS preconditioner.

Table 3.1: The optimal parameters of the PHSS-GMRES (\alpha^{*}) and AHSS-GMRES ((\alpha^{*}, \beta^{*})) methods and the corresponding IT and CPU results for Example 3.1 with q = 16, 32, 48, 64.
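To make the construction of this test problem concrete, the following fragment assembles A and B from the Kronecker-product formulas above and builds the right-hand side corresponding to the all-ones exact solution; it is an illustrative Python/SciPy sketch (the paper's experiments are run in MATLAB), and the function name build_stokes_saddle_point as well as the values of q and mu are ours.

import numpy as np
import scipy.sparse as sp

def build_stokes_saddle_point(q, mu=1.0):
    """Assemble the blocks of the saddle point system (1.1) for Example 3.1:
    A = blkdiag(I x T + T x I, I x T + T x I),  B^T = [I x F; F x I],
    with T = (mu/h^2) tridiag(-1, 2, -1) and F = (1/h) tridiag(-1, 1, 0)."""
    h = 1.0 / (q + 1)
    I = sp.identity(q, format="csr")
    T = (mu / h**2) * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1],
                               shape=(q, q), format="csr")
    F = (1.0 / h) * sp.diags([-1.0, 1.0], [-1, 0],
                             shape=(q, q), format="csr")
    A1 = sp.kron(I, T) + sp.kron(T, I)          # q^2 x q^2 Laplacian-type block
    A = sp.block_diag((A1, A1), format="csr")   # n x n with n = 2 q^2
    Bt = sp.vstack([sp.kron(I, F), sp.kron(F, I)], format="csr")
    B = Bt.T.tocsr()                            # m x n with m = q^2
    return A, B

# saddle point matrix and a right-hand side whose exact solution is all ones
q = 16
A, B = build_stokes_saddle_point(q)
A_blk = sp.bmat([[A, B.T], [-B, None]], format="csr")
b = A_blk @ np.ones(A_blk.shape[0])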

Fig. 3.3: Non-unit eigenvalue distribution of the preconditioned matrix for Example 3.1 (q = 32); the panels correspond to two choices of (\alpha, \beta), the REHSS preconditioner, and the RHSS preconditioner.

Fig. 3.4: Eigenvalue distribution of the preconditioned matrix with optimal parameters for Example 3.1 (q = 32).

From [2], we know that the AHSS preconditioner is a matrix M(\alpha, \beta) in which C is an arbitrary Hermitian positive definite matrix. It is noted that the PHSS preconditioner is a special case of the AHSS preconditioner when \alpha = \beta. A natural choice of the matrix C is C = B\hat{A}^{-1}B^{T}, where \hat{A} is a good approximation of the matrix block A. For this example, we take \hat{A} = \frac{2\mu}{h^{2}}I + I \otimes T as an approximation of the diagonal blocks of the matrix A. Based on Theorem 3.2 in [5] and on [2], we can obtain the optimal parameters \alpha^{*} and (\alpha^{*}, \beta^{*}) of the PHSS and AHSS iteration methods, respectively. In Table 3.1, these optimal parameters and the corresponding numerical results with respect to iteration steps and CPU time, for various problem sizes q, are listed.

In Table 3.2, we list the optimal parameters \alpha^{*} and \beta^{*} of the GVDPSS preconditioner, determined by the formula given in Theorem 2.2, and the corresponding numerical results with respect to iteration steps and CPU time for different values of \omega and q. For each q, it is seen that, by changing \omega, the change of the corresponding optimal parameter \alpha^{*} in comparison with the optimal parameter

\beta^{*} is significantly slow for the new preconditioner. It should be noted that, for the value of \omega for which the preconditioner P_{GVDPSS} reduces to the RHSS preconditioner, the results are shown in boldface type; the corresponding \alpha^{*} plays the role of the optimal parameter of the RHSS preconditioner. As seen, the preconditioner P_{GVDPSS} with optimal parameters is more effective than the RHSS, PHSS and AHSS preconditioners in terms of both the iteration steps and the CPU time. Moreover, by increasing the value of \omega, the preconditioner P_{GVDPSS} with the corresponding optimal parameters achieves a considerable reduction in the number of iteration steps.

Table 3.2: The optimal parameters \alpha^{*} and \beta^{*} of the GVDPSS preconditioner and the numerical results (IT and CPU) for the corresponding preconditioned GMRES method for Example 3.1 with different \omega and q.

To further investigate the influence of the new parameter \beta on the convergence speed of the preconditioned GMRES method, we present in Table 3.3 the numerical results for different values of \alpha and \beta, with respect to iteration steps and CPU time (in parentheses). It should be noted that, for the choice of parameters for which the preconditioner P_{GVDPSS} reduces to the RHSS preconditioner, the results are shown in boldface type, and, for the choice for which it coincides with the REHSS preconditioner, the results are underlined. Table 3.3 indicates that the GVDPSS preconditioner is more effective than the RHSS, REHSS, PHSS and AHSS preconditioners. Especially for small values of \alpha, the use of the GVDPSS preconditioner with some large values of \beta results in a considerable reduction in the number of iteration steps.
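The optimal parameters reported in Tables 3.2 and 3.4 follow from Theorems 2.2 and 2.4 once the extreme eigenvalues of (\omega I + BB^{T})^{-1}BA^{-1}B^{T} are available. The following Python/SciPy fragment is a small sketch of this computation for the symmetric case of Theorem 2.2; the helper name gvdpss_optimal_parameters is ours, and in practice a shift-and-invert strategy may be preferable for the smallest eigenvalue.

import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gvdpss_optimal_parameters(A, B, omega):
    """Estimate alpha_opt = 2/(mu_1 + mu_m) and beta_opt = omega/alpha_opt
    (Theorem 2.2, symmetric positive definite A), where mu_1 and mu_m are the
    largest and smallest eigenvalues of (omega I + B B^T)^{-1} B A^{-1} B^T."""
    m = B.shape[0]
    solve_A = spla.factorized(sp.csc_matrix(A))
    solve_W = spla.factorized(sp.csc_matrix(omega * sp.identity(m) + B @ B.T))

    def matvec(x):
        return solve_W(B @ solve_A(B.T @ x))

    K = spla.LinearOperator((m, m), matvec=matvec)
    # extreme eigenvalues via ARPACK; the spectrum is real and positive here
    mu_1 = spla.eigs(K, k=1, which="LM", return_eigenvectors=False).real[0]
    mu_m = spla.eigs(K, k=1, which="SM", return_eigenvectors=False).real[0]
    alpha_opt = 2.0 / (mu_1 + mu_m)
    return alpha_opt, omega / alpha_opt

# usage (illustrative): alpha_opt, beta_opt = gvdpss_optimal_parameters(A, B, omega=1.0)

For the nonsymmetric case of Theorem 2.4 the same operator is used, but the real and imaginary parts of its eigenvalues enter the formulas for \alpha_{opt}.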

Table 3.3: Numerical results (IT, with CPU time in parentheses) for the GMRES method incorporated with the GVDPSS preconditioner for Example 3.1 with different values of \alpha and \beta and different q.

We plot the eigenvalue distribution of the matrix \mathcal{A} in Fig. 3.1 and that of the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} in Fig. 3.2. As mentioned, for one choice of the parameters the P_{GVDPSS} preconditioner reduces to the RHSS preconditioner and for another it reduces to the REHSS preconditioner. The non-unit eigenvalue distributions of the preconditioned matrices of Fig. 3.2 are drawn in Fig. 3.3. Fig. 3.4 depicts the eigenvalue distribution of the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} with optimal parameters. These figures show that the use of the preconditioner P_{GVDPSS} leads to a well-clustered spectrum away from zero, which in turn results in faster convergence of GMRES. In addition, Fig. 3.2 verifies our statement in Theorem 2.3 that the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} has at least n eigenvalues equal to 1 and that the remaining non-unit eigenvalues are real and positive.

Example 3.2. Consider the Oseen problem, which is obtained from the linearization of the following steady-state Navier-Stokes equation by the Picard iteration with suitable boundary

condition on \partial\Omega:
\[
\begin{cases} -\nu\Delta u + (w \cdot \nabla)u + \nabla p = f, & \text{in } \Omega, \\ \mathrm{div}\, u = 0, & \text{in } \Omega, \\ u = g, & \text{on } \partial\Omega, \end{cases} \tag{3.2}
\]
where \nu > 0, and \Delta, \nabla, \mathrm{div}, u and p stand for the Laplace operator, the gradient operator, the divergence, the velocity and the pressure of the fluid, respectively. Here the vector field w is the approximation of u from the previous Picard iteration. Many approximation schemes can be applied to discretize the Oseen problem (3.2), leading to a saddle point system of type (1.1). We consider a leaky two-dimensional lid-driven cavity problem discretized by Q2-P1 finite elements on uniform grids on the unit square. The test problem was generated by using the IFISS software package written by Elman et al. [12]. We use a fixed viscosity value \nu to generate linear systems corresponding to 16 x 16, 32 x 32, 64 x 64 and 128 x 128 meshes.

The numerical results for the optimal parameters \alpha^{*} and \beta^{*}, determined by the formula given in Theorem 2.4, corresponding to different values of \omega on the 16 x 16, 32 x 32 and 64 x 64 uniform grids are given in Table 3.4. For each grid, we can see that, by changing \omega, the corresponding optimal value \alpha^{*} is almost constant while the optimal value \beta^{*} increases. In addition, it is observed that the values of the optimal parameters for the 32 x 32 grid are significantly close to those for the 64 x 64 grid. It should be noted that if \omega = 0 then \beta = 0 and, as a result, the preconditioner P_{GVDPSS} reduces to the RDPSS preconditioner, whose results are shown in boldface type; the value \alpha^{*} corresponding to \omega = 0 plays the role of the optimal parameter of the RDPSS preconditioner. As seen, the performance of the preconditioner P_{GVDPSS} with optimal parameters is better than that of the RDPSS preconditioner in terms of both iteration steps and CPU time. Moreover, the efficiency of the preconditioner P_{GVDPSS} with the optimal parameters corresponding to nonzero \omega is often nearly the same.

Table 3.4: The optimal parameters \alpha^{*} and \beta^{*} of the GVDPSS preconditioner and the numerical results (IT and CPU) for the corresponding preconditioned GMRES method for Example 3.2 with different \omega on different uniform grids.

Further, in Table 3.5, we present the numerical results with respect to iteration steps and CPU time (in parentheses) for different values of \alpha and \beta, in order to analyze the influence of the GVDPSS parameter \beta on the convergence speed of the preconditioned GMRES method. It should be noted that, for the choice of parameters for which the preconditioner P_{GVDPSS} reduces to the RDPSS preconditioner, the results are shown in boldface type, and, for the choice for which it reduces to the VDPSS preconditioner, the results are underlined. This table indicates that the performance of the new preconditioner P_{GVDPSS} is often better than that of the RDPSS and VDPSS preconditioners. Especially in the case of small values of \alpha, the use of the preconditioner P_{GVDPSS} with some large values of \beta results in a considerable reduction in the number of iteration steps, while the other two preconditioners are difficult to implement efficiently. For large values of \beta, the GVDPSS preconditioner has the best performance and is no longer sensitive to the value of the parameter \alpha, in the sense that the number of iterations does not change drastically. It is also noted that the GVDPSS preconditioner is not competitive in the case of large \alpha.

We plot the eigenvalue distribution of the matrix \mathcal{A} in Fig. 3.1, the eigenvalue distribution of the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} in Fig. 3.5, and the eigenvalue distribution of the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} with optimal parameters in Fig. 3.6. As mentioned, for one choice of the parameters the preconditioner P_{GVDPSS} reduces to the RDPSS preconditioner and for another it reduces to the VDPSS preconditioner. It is evident that the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} has a well-clustered spectrum around (1, 0), away from zero. In addition, these figures verify our statement in Theorem ?? that the preconditioned matrix P_{GVDPSS}^{-1}\mathcal{A} has at least n eigenvalues equal to 1.

Table 3.5: Numerical results (IT, with CPU time in parentheses) for the GMRES method incorporated with the GVDPSS preconditioner for Example 3.2 with different values of \alpha and \beta on different uniform grids.

Fig. 3.5: Eigenvalue distribution of the preconditioned matrix for Example 3.2 (32 x 32 grid); the four panels correspond to two choices of (\alpha, \beta), the VDPSS preconditioner, and the RDPSS preconditioner.

Fig. 3.6: Eigenvalue distribution of the preconditioned matrix with optimal parameters for Example 3.2 (32 x 32 grid).

4. Conclusion

We have presented new convergence properties of the GVDPSS iteration method and the optimal parameters which minimize the spectral radius of the iteration matrix of the GVDPSS iteration method. The GVDPSS preconditioner, which involves two parameters, covers the relaxed versions of the HSS preconditioner [11,17] when it is applied to solve symmetric saddle point problems, as well as the RDPSS preconditioner [9] and the VDPSS preconditioner [19] when it is applied to solve nonsymmetric saddle point problems. Some numerical examples have been presented to validate the theoretical analysis and show the effectiveness of the method. The numerical results have shown that the GVDPSS preconditioner is superior to the RHSS, REHSS and RDPSS preconditioners at accelerating the convergence rate of Krylov subspace iteration methods such as GMRES. The performance of the GVDPSS preconditioner is better than that of the VDPSS preconditioner for small values of the parameter \alpha, but it is not competitive in the case of large \alpha.

Acknowledgments. The authors are grateful to the anonymous referees and the editor of the journal for their valuable comments and suggestions, which improved the quality of this paper. The work of Davod Khojasteh Salkuyeh is partially supported by University of Guilan.

References

[1] Z.-Z. Bai, Sharp error bounds of some Krylov subspace methods for non-Hermitian linear systems, Appl. Math. Comput., 109 (2000), 273-285.
[2] Z.-Z. Bai, G.H. Golub, Accelerated Hermitian and skew-Hermitian splitting methods for saddle-point problems, IMA J. Numer. Anal., 27 (2007), 1-23.

[3] Z.-Z. Bai, G.H. Golub, L.-Z. Lu, J.-F. Yin, Block triangular and skew-Hermitian splitting methods for positive-definite linear systems, SIAM J. Sci. Comput., 26 (2004), 844-863.
[4] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), 603-626.
[5] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math., 98 (2004), 1-32.
[6] Z.-Z. Bai, J.-F. Yin, Y.-F. Su, A shift-splitting preconditioner for non-Hermitian positive definite matrices, J. Comput. Math., 24 (2006), 539-552.
[7] M. Benzi, G.H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl., 26 (2004), 20-41.
[8] M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numer., 14 (2005), 1-137.

On the 5.4 4 x 3.3.2 = 2, = -2 3 2 = 2, =...2 2.3 3.4.5.5 2 4.2.4.6.8 4 x 6 VDPSS preconditioner 3 = 2, = 2 2 2 3 5 5 5 RDPSS preconditioner = 2, = 4.2.4.6.8 5 2 3 4 5 Fig. 3.5.: Eigenvalue distribution of the preconditioned matrix for Example 3.2 ( 32 32 grid..5.5 * =28, * =.8 * =64, * =.8.5.5.5 2.5.5.5 2 Fig. 3.6.: Eigenvalue distribution of the preconditioned matrix with optimal parameters for Example 3.2 ( 32 32 grid. (25, -37. [9] Y. Cao, J.-L. Dong, Y.-M. Wang, A relaxed deteriorated PSS preconditioner for nonsymmetric saddle point problems from the steady Navier-Stokes equation, J. Comput. Appl. Math., 273 (25, 4-6. [] Y. Cao, J. Du, Q. Niu, Shift-splitting preconditioners for saddle point problems, J. Comput. Appl.

Math., 272 (2014), 239-250.
[11] Y. Cao, L.-Q. Yao, M.-Q. Jiang, Q. Niu, A relaxed HSS preconditioner for saddle point problems from mesh-free discretization, J. Comput. Math., 31 (2013), 398-421.
[12] H.C. Elman, A. Ramage, D.J. Silvester, IFISS: A Matlab toolbox for modelling incompressible flow, ACM Trans. Math. Software, 33 (2007), Article 14.
[13] Z.-G. Huang, L.-G. Wang, Z. Xu, J.-J. Cui, A generalized variant of the deteriorated PSS preconditioner for nonsymmetric saddle point problems, Numer. Algor., 2016, doi:10.1007/s11075-016-0236-2.
[14] J.-Y. Pan, M.K. Ng, Z.-Z. Bai, New preconditioners for saddle point problems, Appl. Math. Comput., 172 (2006), 762-771.
[15] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, 2003.
[16] D.K. Salkuyeh, M. Masoudi, D. Hezari, On the generalized shift-splitting preconditioner for saddle point problems, Appl. Math. Lett., 48 (2015), 55-61.
[17] D.K. Salkuyeh, M. Masoudi, A new relaxed HSS preconditioner for saddle point problems, Numer. Algor., 2017.
[18] E.D. Sturler, J. Liesen, Block-diagonal and constraint preconditioners for nonsymmetric indefinite linear systems, SIAM J. Sci. Comput., 26 (2005), 1598-1619.
[19] J. Zhang, C. Gu, A variant of the deteriorated PSS preconditioner for nonsymmetric saddle point problems, BIT Numer. Math., 2016.