Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems


Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems

Zhong-Zhi Bai
State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100080, P.R. China. Email: bzz@lsec.cc.ac.cn

Gene H. Golub
Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, CA 94305-9025, USA. Email: golub@sccm.stanford.edu

Lin-Zhang Lu
Department of Mathematics, Xiamen University, Xiamen 361005, P.R. China. Email: lzlu@jingxian.xmu.edu.cn

Jun-Feng Yin
State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100080, P.R. China. Email: yinjf@lsec.cc.ac.cn

May 2, 2003

Abstract

By further generalizing the concept of Hermitian (or normal) and skew-Hermitian splitting for a non-Hermitian positive-definite matrix, we introduce a new splitting, called the positive-definite and skew-Hermitian (PS) splitting, and then establish a class of positive-definite and skew-Hermitian splitting (PSS) methods, similar to the Hermitian (or normal) and skew-Hermitian splitting (HSS or NSS) method, for iteratively solving positive definite systems of linear equations. Theoretical analysis shows that the PSS method converges

Footnotes: Subsidized by The Special Funds For Major State Basic Research Projects G1999032803. Research supported, in part, by DOE-FC02-01ER4177. Supported by the National Natural Science Foundation of China.

unconditionally to the exact solution of the linear system, with the upper bound of its convergence factor depending only on the spectrum of the positive-definite splitting matrix, and being independent of the spectrum of the skew-Hermitian splitting matrix as well as of the eigenvectors of all matrices involved. When we specialize the PS splitting to a block-triangular (or triangular) and skew-Hermitian (BTS, or TS) splitting, the PSS method naturally leads to a block-triangular (or triangular) and skew-Hermitian splitting (BTSS, or TSS) iteration method, which may be more practical and efficient than the HSS and the NSS iteration methods. Applications of the BTSS method to linear systems of block two-by-two structure are discussed in detail. Numerical experiments further show the effectiveness of our new methods and the correctness of the corresponding theories.

Keywords. Non-Hermitian matrix, positive definite matrix, triangular matrix, block triangular matrix, Hermitian and skew-Hermitian splitting, splitting iteration method.

AMS subject classifications: 65F10, 65F15, 65F50.

1 Introduction

We consider the iterative solution of the large sparse system of linear equations

Ax = b, A = (a_{k,j}) ∈ C^{n×n} nonsingular and b ∈ C^n, (1)

where A ∈ C^{n×n} is a positive definite complex matrix, which may be either Hermitian or non-Hermitian. Denote by H = (1/2)(A + A*) and S = (1/2)(A − A*) the Hermitian and the skew-Hermitian parts of the matrix A, respectively. Then it immediately holds that A = H + S, which naturally leads to the Hermitian and skew-Hermitian (HS) splitting of the matrix A [1, 7].
Based on the HS splitting and motivated by the classical alternating direction implicit (ADI) iteration technique [8], Bai, Golub and Ng [1] recently presented the following Hermitian and skew-Hermitian splitting (HSS) iteration method for solving the non-Hermitian positive definite system of linear equations (1):

(αI + H) x^(k+1/2) = (αI − S) x^(k) + b,
(αI + S) x^(k+1) = (αI − H) x^(k+1/2) + b,

where α is a given positive constant. Then, by further generalizing the HS splitting to the normal and skew-Hermitian (NS) splitting

A = N + S_o, N ∈ C^{n×n} normal and S_o ∈ C^{n×n} skew-Hermitian,

they obtained the normal and skew-Hermitian splitting (NSS) iteration method [2], which has a formula analogous to the above HSS scheme, except that the splitting matrices H and S in the HSS iteration are replaced by N and S_o, respectively.
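As an illustration only (not part of the paper), the two HSS half-steps above can be sketched in NumPy. The helper name `hss_solve`, the random test matrix, and the parameter choice α = sqrt(λ_min(H)·λ_max(H)) are our own assumptions; dense solves stand in for the sparse direct or inner iterative solvers one would use in practice.

```python
import numpy as np

def hss_solve(A, b, alpha, x0, tol=1e-10, max_iter=1000):
    """One HSS sweep per loop pass: a shifted Hermitian solve followed
    by a shifted skew-Hermitian solve (dense solves, for clarity)."""
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)   # Hermitian part of A
    S = 0.5 * (A - A.conj().T)   # skew-Hermitian part of A
    I = np.eye(n)
    x = np.asarray(x0, dtype=complex)
    for _ in range(max_iter):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# a small non-Hermitian test matrix whose Hermitian part is positive definite
rng = np.random.default_rng(0)
n = 16
B = rng.standard_normal((n, n))
H0 = B @ B.T + n * np.eye(n)     # Hermitian positive definite part
C = rng.standard_normal((n, n))
A = H0 + (C - C.T)               # non-Hermitian, positive definite
x_true = rng.standard_normal(n)
b = A @ x_true
lam = np.linalg.eigvalsh(H0)     # eigenvalues of H0, ascending
x = hss_solve(A, b, alpha=np.sqrt(lam[0] * lam[-1]), x0=np.zeros(n))
```

The α used here is the minimizer of the Hermitian-part bound discussed later in the paper; any positive α would also converge, only more slowly.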

It was demonstrated in [1, 2] that both the HSS and the NSS iteration methods converge unconditionally to the unique solution of the non-Hermitian positive definite system of linear equations (1), with bounds on their convergence about the same as those of the conjugate gradient [9] (for HSS) and GMRES [10, 11] (for NSS) methods when those are applied to Hermitian and normal matrices, respectively. Moreover, the upper bounds of their contraction factors depend only on the spectra of the Hermitian (for HSS) and the normal (for NSS) parts, and are independent of the spectra of the skew-Hermitian parts as well as of the eigenvectors of all matrices involved. Numerical results show that both the HSS and the NSS iteration methods are very effective and robust when they are used to solve large sparse positive definite systems of linear equations (1). A noticeable property of this class of methods is that, by making use of matrix splittings, they split a general non-Hermitian positive definite linear system into two special subsystems of linear equations which can be solved effectively by either direct methods (e.g., Cholesky factorization and Bunch decomposition [5, 6]) or iterative methods (e.g., Krylov subspace methods [10], multigrid and multilevel methods). Hence, the HSS and the NSS iteration techniques build a bridge between the classical splitting iteration methods and the modern subspace projection and grid correction methods.

More generally, we know that any Hermitian or non-Hermitian positive definite matrix A ∈ C^{n×n} also possesses a splitting of the form

A = P + S, P ∈ C^{n×n} positive definite and S ∈ C^{n×n} skew-Hermitian. (2)

That is to say, A has a positive-definite and skew-Hermitian splitting (2), or in short, a PS splitting.
By applying the technique used to construct the HSS and NSS iterations, we can establish a class of positive-definite and skew-Hermitian splitting (PSS) iteration methods for solving the positive definite system of linear equations (1). Different from both the HSS and the NSS methods, the new PSS method can be used to effectively compute iterative solutions of both Hermitian and non-Hermitian positive definite linear systems. Theoretical analysis shows that the PSS iteration method preserves all properties of both the HSS and the NSS iteration methods. By specializing the PS splitting to block-triangular (or triangular) and skew-Hermitian (BTS, or TS) splittings, the PSS method directly results in a class of block-triangular (or triangular) and skew-Hermitian splitting (BTSS, or TSS) iteration methods for solving the system of linear equations (1), which may be more practical and efficient than both the HSS and the NSS iteration methods: for the BTSS (or TSS) method we only need to solve block-triangular (or triangular) linear sub-systems in the first half of each iteration step, rather than invert shifted positive-definite matrices as in the PSS method, or shifted Hermitian (normal) positive-definite matrices as in the HSS (NSS) method. In addition, applications of the BTSS method to linear systems of block two-by-two structure are discussed in detail, and several numerical examples are used to show the effectiveness of our new methods and examine the correctness of the corresponding theories.

This paper is organized as follows: We establish the PSS iteration method and its convergence theory in Section 2, and describe the BTSS iteration methods in Section 3. Applications of the BTSS methods to linear systems of block two-by-two structure are discussed in Section 4, and numerical results are listed and analyzed in Section 5. Finally, in Section 6 we briefly give some concluding remarks.

Abbreviated Terms and the Descriptive Phrases

Notation  Description
HS        Hermitian and skew-Hermitian
HSS       Hermitian and skew-Hermitian splitting
NS        normal and skew-Hermitian
NSS       normal and skew-Hermitian splitting
PS        positive-definite and skew-Hermitian
PSS       positive-definite and skew-Hermitian splitting
TS        triangular and skew-Hermitian
TSS       triangular and skew-Hermitian splitting
BTS       block-triangular and skew-Hermitian
BTSS      block-triangular and skew-Hermitian splitting

2 PSS iteration method and its convergence

More precisely, the above-mentioned PSS iteration scheme based on the PS splitting (2) for solving the positive definite system of linear equations (1) can be described as follows:

The PSS Iteration Method. Given an initial guess x^(0) ∈ C^n. For k = 0, 1, 2, ... until {x^(k)} converges, compute

(αI + P) x^(k+1/2) = (αI − S) x^(k) + b,
(αI + S) x^(k+1) = (αI − P) x^(k+1/2) + b,

where α is a given positive constant.

Evidently, just like the HSS and NSS methods, each iterate of the PSS iteration alternates between the positive-definite matrix P and the skew-Hermitian matrix S, analogously to the classical ADI iteration for partial differential equations [8, 2]. In fact, we can reverse the roles of the matrices P and S in the above PSS iteration, so that we may first solve the linear system with coefficient matrix αI + S and then the linear system with coefficient matrix αI + P. We easily see that when P ∈ C^{n×n} is normal or Hermitian, the above PSS iteration method reduces to the NSS or the HSS iteration method, accordingly.

In matrix-vector form, the PSS iteration can be equivalently rewritten as

x^(k+1) = M(α) x^(k) + G(α) b, (3)

where

M(α) = (αI + S)^{-1} (αI − P)(αI + P)^{-1} (αI − S),
G(α) = 2α (αI + S)^{-1} (αI + P)^{-1}. (4)

Thus, M(α) is the iteration matrix of the PSS iteration. As a matter of fact, (3) may also result from the splitting

A = B(α) − C(α)

of the coefficient matrix A, with

B(α) = (1/(2α)) (αI + P)(αI + S) = G(α)^{-1},
C(α) = (1/(2α)) (αI − P)(αI − S) = G(α)^{-1} M(α).

To prove convergence of the PSS iteration method, we first demonstrate the following two lemmas.

Lemma 2.1. Let

V(α) = (αI − P)(αI + P)^{-1}. (5)

If P ∈ C^{n×n} is a positive definite matrix, then it holds that

||V(α)||_2 < 1, ∀ α > 0.

Proof. Because P ∈ C^{n×n} is a positive definite matrix, i.e., (1/2)(P + P*), the Hermitian part of P, is a Hermitian positive definite matrix, we know that for any y ∈ C^n \ {0},

2α ⟨(P + P*)y, y⟩ > 0, or ⟨(αP + αP*)y, y⟩ > ⟨(−αP − αP*)y, y⟩

holds. Here, ⟨·,·⟩ denotes the inner product in C^n. It then straightforwardly follows that

⟨(α²I + αP + αP* + P*P)y, y⟩ > ⟨(α²I − αP − αP* + P*P)y, y⟩,

or equivalently,

⟨(αI + P)y, (αI + P)y⟩ > ⟨(αI − P)y, (αI − P)y⟩. (6)

Let x = (αI + P)y. Then we have x ≠ 0, since (αI + P) is nonsingular and y ≠ 0. Now, (6) can be equivalently written as

⟨x, x⟩ > ⟨V(α)x, V(α)x⟩.

That is to say, it holds that

||V(α)x||_2 / ||x||_2 < 1, ∀ α > 0,

or in other words,

||V(α)||_2 < 1, ∀ α > 0.

Lemma 2.2. Let S ∈ C^{n×n} be a skew-Hermitian matrix. Then for any α > 0,

(a) αI + S is a non-Hermitian positive definite matrix; and
(b) the Cayley transform Q(α) = (αI − S)(αI + S)^{-1} of S is a unitary matrix.

Proof. See [1, 2].

With Lemmas 2.1 and 2.2, we are now ready to prove convergence of the PSS iteration method.
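Both lemmas are easy to check numerically. The following sketch (ours, with a random non-Hermitian P whose Hermitian part is positive definite) verifies ||V(α)||_2 < 1 for several α, and that the Cayley transform of a skew-Hermitian matrix is unitary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
P = B @ B.T + n * np.eye(n) + (C - C.T)   # positive definite, non-Hermitian
I = np.eye(n)

# Lemma 2.1: ||(alpha*I - P)(alpha*I + P)^{-1}||_2 < 1 for every alpha > 0
for alpha in (0.1, 1.0, 10.0, 100.0):
    V = (alpha * I - P) @ np.linalg.inv(alpha * I + P)
    assert np.linalg.norm(V, 2) < 1.0     # 2-norm = largest singular value

# Lemma 2.2: the Cayley transform of a skew-Hermitian S is unitary
S = C - C.T
alpha = 1.0
Q = (alpha * I - S) @ np.linalg.inv(alpha * I + S)
assert np.allclose(Q.conj().T @ Q, I)
```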

Theorem 2.1. Let A ∈ C^{n×n} be a positive definite matrix, let M(α) defined in (4) be the iteration matrix of the PSS iteration, and let V(α) be the matrix defined in (5). Then the spectral radius ρ(M(α)) of M(α) is bounded by ||V(α)||_2. Therefore, it holds that

ρ(M(α)) ≤ ||V(α)||_2 < 1, ∀ α > 0,

i.e., the PSS iteration converges to the exact solution x_* ∈ C^n of the system of linear equations (1).

Proof. Because S ∈ C^{n×n} is a skew-Hermitian matrix, from the definition of the PS splitting (2) we know that P + P* = A + A*. Hence, P ∈ C^{n×n} is also a positive definite matrix. Moreover, by Lemma 2.2 we see that Q(α) = (αI − S)(αI + S)^{-1} is a unitary matrix. Let

M̃(α) = V(α)(αI − S)(αI + S)^{-1}.

Then M̃(α) is similar to M(α). Therefore, by applying Lemma 2.1 we have

ρ(M(α)) = ρ(M̃(α)) ≤ ||M̃(α)||_2 = ||V(α)(αI − S)(αI + S)^{-1}||_2 ≤ ||V(α)||_2 ||Q(α)||_2 = ||V(α)||_2 < 1.

Theorem 2.1 shows that the PSS iteration method converges unconditionally to the exact solution of the positive definite system of linear equations (1). Moreover, the upper bound of its contraction factor depends only on the spectrum of the positive definite part P, and is independent of the spectrum of the skew-Hermitian part S as well as of the eigenvectors of the matrices P, S and A.

We should point out that two important problems need to be studied further for the PSS iteration method. One is the choice of the skew-Hermitian matrix S, or the splitting of the matrix A; the other is the choice of the acceleration parameter α. Theoretically, by Theorem 2.1 we can choose S to be any skew-Hermitian matrix such that the matrix P = A − S is positive definite, and α to be any positive constant. In practice, however, besides the above requirements we have to choose S to be a skew-Hermitian matrix such that the linear systems with coefficient matrices αI + P and αI + S can be solved easily and effectively, and to choose the positive constant α such that the PSS iteration converges fast.
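The bound of Theorem 2.1 can likewise be observed numerically. In this sketch (our own random instance, not the paper's), a PS splitting A = P + S is built explicitly, the iteration matrix M(α) of (4) is formed densely, and ρ(M(α)) is compared against ||V(α)||_2:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
B = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
P = B @ B.T + n * np.eye(n) + 0.5 * (K - K.T)  # positive definite, non-Hermitian
S = C - C.T                                    # skew-Hermitian
A = P + S                                      # the matrix being split
I = np.eye(n)

alpha = 5.0
# iteration matrix M(alpha) = (aI+S)^{-1}(aI-P)(aI+P)^{-1}(aI-S)
M = np.linalg.inv(alpha * I + S) @ (alpha * I - P) @ \
    np.linalg.inv(alpha * I + P) @ (alpha * I - S)
V = (alpha * I - P) @ np.linalg.inv(alpha * I + P)
rho = np.max(np.abs(np.linalg.eigvals(M)))
# Theorem 2.1: rho(M(alpha)) <= ||V(alpha)||_2 < 1
assert rho <= np.linalg.norm(V, 2) + 1e-12 < 1.0
```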
Evidently, these two problems may be very difficult and, usually, their solutions strongly depend on the concrete structure and properties of the coefficient matrix A as well as of the splitting matrices P and S.

For a positive definite matrix A ∈ C^{n×n}, whether Hermitian or non-Hermitian, one useful choice of its PS splitting matrices P and S is as follows: Let A = H + S be the HS splitting of the matrix A, and let H = D + L_H + L_H* be the (block) triangular splitting of the Hermitian positive definite matrix H, where D is the (block) diagonal matrix and L_H is the strictly (block) lower triangular matrix of H. Then we can define P and S to be

P = D + 2L_H and S = L_H* − L_H + S,

respectively. This PS splitting directly leads to a special PSS iteration method for solving the system of linear equations (1), which only solves (block) lower triangular linear systems at the

first half of each of its iteration steps. Alternatively, we can also define P and S to be

P = D + 2L_H* and S = L_H − L_H* + S,

respectively, which immediately leads to a similar PSS iteration method. In the next section, we will give another two practical choices of the PS splitting. These two special kinds of PS splittings are very basic; technical combinations of them can yield a variety of new positive-definite splittings and, hence, many practical PSS iteration methods.

As regards the positive constant α, if P ∈ C^{n×n} is a normal matrix, then we can compute α* = arg min_{α>0} ||V(α)||_2 by making use of the formula in Theorem 2.2 in [2]; if P ∈ C^{n×n} is a general positive definite matrix, we do not have any formula to compute a usable α* and, hence, the upper bound ||V(α*)||_2. Usually, it holds that

α* ≠ α_opt ≡ arg min_{α>0} ρ(M(α)) and ρ(M(α*)) > ρ(M(α_opt)).

Therefore, it is important to know how to compute an approximation of α_opt as accurately as possible in order to improve the convergence speed of the method; this is a hard task that needs further in-depth study from the viewpoint of both theory and computations.

3 BTSS iteration methods

Without loss of generality, we assume in this section that the coefficient matrix A ∈ C^{n×n} of the system of linear equations (1) is partitioned into the following block m-by-m form:

A = (A_{l,j})_{m×m} ∈ C^{n×n}, A_{l,j} ∈ C^{n_l×n_j}, l, j = 1, 2, ..., m,

where n_l, l = 1, 2, ..., m, are positive integers satisfying Σ_{l=1}^{m} n_l = n. Let D, L and U be the block diagonal, the strictly block lower triangular and the strictly block upper triangular parts of the block matrix A, respectively. Then we have

A = (L + D + U*) + (U − U*) ≡ T_1 + S_1
  = (L* + D + U) + (L − L*) ≡ T_2 + S_2. (7)

Clearly, T_1 and T_2 are block lower triangular and block upper triangular matrices, respectively, and both S_1 and S_2 are skew-Hermitian matrices.
We will call the two splittings in (7) the block-triangular and skew-Hermitian (BTS) splittings of the matrix A. We remark that these two splittings are both PS splittings, because T_l + T_l* = A + A* (l = 1, 2) and A ∈ C^{n×n} is positive definite. If we make technical combinations of the BTS splitting with the HS or the NS splitting, other interesting and practical cases of the PS splitting can be obtained. For example,

A = (L + (1/2)(D + D*) + U*) + ((1/2)(D − D*) + U − U*) ≡ T_3 + S_3
  = (L* + (1/2)(D + D*) + U) + ((1/2)(D − D*) + L − L*) ≡ T_4 + S_4 (8)

are two BTS splittings, which come from combinations of the BTS splittings of the matrix A in (7) and the HS splitting of the matrix D. Now, with the choices

P = T_l, S = S_l, l = 1, 2, 3, 4,

we can immediately define the corresponding block-triangular and skew-Hermitian splitting (BTSS) iteration methods for solving the positive definite system of linear equations (1). We note that for these four BTSS iteration methods, we only need to solve block-triangular linear sub-systems, rather than invert shifted positive-definite matrices as in the PSS iteration method or shifted Hermitian (normal) positive-definite matrices as in the HSS (NSS) iteration method. Moreover, the block-triangular linear sub-systems can be solved recursively through solutions of the systems of linear equations

(αI + A_{j,j}) x_j = r_j, j = 1, 2, ..., m (9)

for the BTSS iteration methods associated with the splittings in (7), and

(αI + (1/2)(A_{j,j} + A_{j,j}*)) x_j = r_j, j = 1, 2, ..., m (10)

for those associated with the splittings in (8). Because the splitting matrices T_l (l = 1, 2, 3, 4) are positive definite, the block sub-matrices A_{j,j} (j = 1, 2, ..., m) are also positive definite; in particular, the (1/2)(A_{j,j} + A_{j,j}*) (j = 1, 2, ..., m) are all Hermitian positive definite matrices. Therefore, we may employ another BTSS iteration to solve the linear sub-systems (9) and the conjugate gradient iteration to solve the linear sub-systems (10), if necessary. In addition, the matrices T_l, l = 1, 2, 3, 4, may be much sparser than the matrices H and N in the HSS and NSS methods. For instance, when the matrix A is an upper Hessenberg matrix, T_l and S_l, l = 1, 2, 3, 4, in the BTSS (or TSS) splittings are still very sparse, but H, S and N, S_o in the HS and the NS splittings may be very dense. Therefore, the BTSS iteration methods may save computing costs considerably compared with both the HSS and the NSS iteration methods.
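The first BTS splitting of (7) is simple to construct explicitly. The sketch below (our own illustration; the helper name `bts_splitting` and the test matrix are not from the paper) builds T_1 = L + D + U* and S_1 = U − U* for a given block partition and checks the defining properties:

```python
import numpy as np

def bts_splitting(A, block_sizes):
    """First BTS splitting of (7): A = T1 + S1, with
    T1 = L + D + U* (block lower triangular) and S1 = U - U* (skew-Hermitian).
    block_sizes lists the orders n_1, ..., n_m of the diagonal blocks."""
    offsets = np.cumsum([0] + list(block_sizes))
    U = np.zeros_like(A)                  # strictly block upper triangular part
    for l in range(len(block_sizes)):
        U[offsets[l]:offsets[l + 1], offsets[l + 1]:] = \
            A[offsets[l]:offsets[l + 1], offsets[l + 1]:]
    T1 = (A - U) + U.conj().T             # (L + D) + U*
    S1 = U - U.conj().T
    return T1, S1

rng = np.random.default_rng(3)
n = 9
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n) + (B - B.T)   # positive definite test matrix
T1, S1 = bts_splitting(A, [3, 3, 3])
assert np.allclose(T1 + S1, A)                        # a splitting of A
assert np.allclose(S1, -S1.conj().T)                  # S1 skew-Hermitian
assert np.allclose(T1 + T1.conj().T, A + A.conj().T)  # T1 is a PS splitting matrix
assert np.allclose(T1[:3, 3:], 0) and np.allclose(T1[3:6, 6:], 0)  # block lower triangular
```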
Another advantage of the BTSS iteration methods is that they can be used to solve both Hermitian and strongly non-Hermitian positive definite systems of linear equations more effectively than both the HSS and the NSS iteration methods. For example, consider the non-Hermitian positive definite system of linear equations

(αI + Ŝ) z = r

arising within the HSS, NSS and TSS iteration methods, where Ŝ ∈ C^{n×n} is a skew-Hermitian matrix, α is a positive constant, and r ∈ C^n is a given right-hand-side vector. Neither the HSS nor the NSS iteration method can be used to solve it; the BTSS iteration method, however, may solve it very efficiently. This shows that the BTSS iteration methods have a wide area of application.

When D, L and U are the (pointwise) diagonal, the (pointwise) strictly lower triangular and the (pointwise) strictly upper triangular parts of the matrix A, we call the BTS splitting a triangular and skew-Hermitian (TS) splitting, and the BTSS iteration method a triangular and skew-Hermitian splitting (TSS) iteration method. We remark that both the BTSS and the TSS iteration methods are, in general, different from the HSS and the NSS iteration methods. Only when D is Hermitian (normal) and L + U* = 0 do the BTSS and the TSS methods give the same scheme as the HSS (NSS) method.

The following investigation provides formulae for approximately estimating the acceleration parameter α for the BTSS iteration methods. For l = 1, 2, 3, 4, let T_l = D_l + G_l, where

D_l = D for l = 1, 2;  D_l = (1/2)(D + D*) for l = 3, 4,

and

G_l = L + U* for l = 1, 3;  G_l = L* + U for l = 2, 4.

Obviously, the D_l (l = 1, 2, 3, 4) are block diagonal matrices, and the G_l (l = 1, 2, 3, 4) are strictly block lower (for l = 1, 3) or strictly block upper (for l = 2, 4) triangular matrices. Therefore,

(αI + T_l)^{-1} = ((αI + D_l) + G_l)^{-1}
 = [(I + G_l (αI + D_l)^{-1})(αI + D_l)]^{-1}
 = [(αI + D_l)(I + (αI + D_l)^{-1} G_l)]^{-1}
 = (αI + D_l)^{-1} Σ_{j=0}^{m−1} (−1)^j [G_l (αI + D_l)^{-1}]^j
 = Σ_{j=0}^{m−1} (−1)^j [(αI + D_l)^{-1} G_l]^j (αI + D_l)^{-1}.

Here, the last two equalities hold since

[G_l (αI + D_l)^{-1}]^m = [(αI + D_l)^{-1} G_l]^m = 0.

It then follows that

V_l(α) ≡ (αI − T_l)(αI + T_l)^{-1} = I − 2T_l (αI + T_l)^{-1} = I − 2(D_l + G_l)(αI + T_l)^{-1}
 = (αI − D_l)(αI + D_l)^{-1} + 2α (αI + D_l)^{-1} Σ_{j=1}^{m−1} (−1)^j [G_l (αI + D_l)^{-1}]^j
 ≈ (αI − D_l)(αI + D_l)^{-1} (a first-order approximation),

and hence

||V_l(α)||_2 ≈ ||(αI − D_l)(αI + D_l)^{-1}||_2
 = max_{1≤j≤m} ||(αI − A_{j,j})(αI + A_{j,j})^{-1}||_2, for l = 1, 2;
 = max_{1≤j≤m} ||(αI − (1/2)(A_{j,j} + A_{j,j}*))(αI + (1/2)(A_{j,j} + A_{j,j}*))^{-1}||_2, for l = 3, 4.

For l = 3, 4, because the (1/2)(A_{j,j} + A_{j,j}*) (j = 1, 2, ..., m) are Hermitian matrices, we immediately

have

α* = arg min_{α>0} ||V_l(α)||_2 (l = 3, 4)
 ≈ arg min_{α>0} max_{1≤j≤m} ||(αI − (1/2)(A_{j,j} + A_{j,j}*))(αI + (1/2)(A_{j,j} + A_{j,j}*))^{-1}||_2
 = arg min_{α>0} max_{1≤j≤m} max_{1≤k≤n_j} |α − λ_k^{(j)}| / (α + λ_k^{(j)})
 = sqrt( λ_min^{(j_0)} λ_max^{(j_0)} ),

where the λ_k^{(j)} (k = 1, 2, ..., n_j) are the eigenvalues of the matrix (1/2)(A_{j,j} + A_{j,j}*), λ_min^{(j_0)} and λ_max^{(j_0)} are the smallest and the largest eigenvalues of the matrix (1/2)(A_{j_0,j_0} + A_{j_0,j_0}*), j_0 = arg max_{1≤j≤m} κ((1/2)(A_{j,j} + A_{j,j}*)), and κ(·) denotes the spectral condition number of the corresponding matrix. In particular, when m = n and the A_{j,j} (j = 1, 2, ..., m) are scalar entries, we have for l = 1, 2, 3, 4 that

α* = arg min_{α>0} max_{1≤j≤n} |α − a_{j,j}| / (α + a_{j,j}) = sqrt(a_min a_max),

where

a_min = min_{1≤j≤n} {a_{j,j}}, a_max = max_{1≤j≤n} {a_{j,j}}.

4 Applications

We now give applications of the BTSS iteration methods to systems of linear equations (1) whose coefficient matrices possess the block two-by-two structure

A = [ W  F ]
    [ E  N ],

where W ∈ C^{q×q} and N ∈ C^{(n−q)×(n−q)} are positive definite complex sub-matrices such that A ∈ C^{n×n} is a positive definite matrix. Without loss of generality, we may further assume that both W and N are Hermitian, because otherwise we can consider the BTSS iteration methods induced by the BTS splittings in (8). According to (7), the BTS splittings of the block two-by-two matrix A ∈ C^{n×n} are

A = [ W       0 ]  +  [ 0    F ]  ≡ T_1 + S_1
    [ E + F*  N ]     [ −F*  0 ]

  = [ W  E* + F ]  +  [ 0  −E* ]  ≡ T_2 + S_2
    [ 0  N      ]     [ E  0   ]

  = [ (1/2)(W + W*)  0             ]  +  [ (1/2)(W − W*)  F             ]  ≡ T_3 + S_3
    [ E + F*         (1/2)(N + N*) ]     [ −F*            (1/2)(N − N*) ]

  = [ (1/2)(W + W*)  E* + F        ]  +  [ (1/2)(W − W*)  −E*           ]  ≡ T_4 + S_4.
    [ 0              (1/2)(N + N*) ]     [ E              (1/2)(N − N*) ]
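The pointwise formula α* = sqrt(a_min a_max) above is easy to sanity-check. The sketch below (our own, with an arbitrary positive diagonal) compares it against a brute-force grid minimization of max_j |α − a_jj|/(α + a_jj):

```python
import numpy as np

diag = np.array([0.5, 1.0, 2.0, 8.0])      # positive diagonal entries a_jj
a_min, a_max = diag.min(), diag.max()
alpha_star = np.sqrt(a_min * a_max)        # = 2.0 for this diagonal

def phi(alpha):
    """The quantity being minimized: max_j |alpha - a_jj| / (alpha + a_jj)."""
    return np.max(np.abs(alpha - diag) / (alpha + diag))

alphas = np.linspace(0.01, 20.0, 20000)
best = alphas[np.argmin([phi(a) for a in alphas])]
assert abs(best - alpha_star) < 1e-2                       # grid minimizer agrees
assert phi(alpha_star) <= min(phi(a) for a in alphas) + 1e-12
```

The maximum over j is attained at the extreme entries a_min and a_max, and equalizing the two ratios gives the closed form sqrt(a_min a_max).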

Given an initial guess x^(0) ∈ C^n, the BTSS iterations compute the sequences {x^(k)} as follows:

(αI + T_l) x^(k+1/2) = (αI − S_l) x^(k) + b,
(αI + S_l) x^(k+1) = (αI − T_l) x^(k+1/2) + b,   l = 1, 2, 3, 4.

As an example, in the following we only investigate the BTSS iteration for the case l = 1, because the other three cases can be discussed analogously. Moreover, without loss of generality, we could assume that both W and N are Hermitian if necessary, because otherwise we can consider the BTSS iteration methods induced by the BTS splittings for l = 3, 4. Let x and b be correspondingly partitioned into blocks as

x = [ u ]   b = [ f ]
    [ p ],      [ g ],

where u, f ∈ C^q and p, g ∈ C^{n−q}. The first half-step of the BTSS iteration requires the solution of linear systems of the form

(αI + W) u^(k+1/2) = α u^(k) + f − F p^(k). (11)

Once the solution u^(k+1/2) of (11) has been obtained, we compute p^(k+1/2) from

(αI + N) p^(k+1/2) = −(E + F*) u^(k+1/2) + α p^(k) + g + F* u^(k). (12)

Note that the coefficient matrices in (11) and (12) are positive definite or Hermitian positive definite if the matrices W and N are, respectively. Therefore, the linear systems (11) and (12) can be solved by any algorithm for positive definite systems (e.g., a PSS method) when the matrices W and N are positive definite, or by any algorithm for Hermitian positive definite systems (e.g., a sparse Cholesky factorization or the conjugate gradient method [9]) when the matrices W and N are Hermitian positive definite. The second half-step of the BTSS iteration requires the solution of linear systems of the form

α u^(k+1) + F p^(k+1) = (αI − W) u^(k+1/2) + f,
−F* u^(k+1) + α p^(k+1) = −(E + F*) u^(k+1/2) + (αI − N) p^(k+1/2) + g. (13)

This linear system can be solved in various ways, including the CG-like method discussed in [7, ] and the BTSS (or TSS) method itself.
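The coefficient matrix of the second half-step has the shifted skew-Hermitian form [αI, F; −F*, αI]. As a generic illustration of the block-elimination idea (our own sketch, with the hypothetical helper name `solve_shifted_skew_block` and random right-hand sides rather than the specific ones of the iteration), one can eliminate u and solve a smaller Hermitian positive definite system for p:

```python
import numpy as np

def solve_shifted_skew_block(alpha, F, r_u, r_p):
    """Solve [alpha*I, F; -F*, alpha*I] [u; p] = [r_u; r_p] by elimination:
    (alpha^2 I + F* F) p = alpha*r_p + F* r_u, then u = (r_u - F p) / alpha."""
    q, m = F.shape
    Fh = F.conj().T
    p = np.linalg.solve(alpha**2 * np.eye(m) + Fh @ F,
                        alpha * r_p + Fh @ r_u)
    u = (r_u - F @ p) / alpha
    return u, p

rng = np.random.default_rng(4)
q, m = 7, 4
F = rng.standard_normal((q, m))
r_u = rng.standard_normal(q)
r_p = rng.standard_normal(m)
u, p = solve_shifted_skew_block(2.0, F, r_u, r_p)

# verify against a direct solve of the full block system
K = np.block([[2.0 * np.eye(q), F], [-F.T, 2.0 * np.eye(m)]])
z = np.linalg.solve(K, np.concatenate([r_u, r_p]))
assert np.allclose(np.concatenate([u, p]), z)
```

Since α²I + F*F is Hermitian positive definite, the reduced system can be solved by Cholesky factorization or conjugate gradients, which is exactly the structure the elimination in the text exploits.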
Besides, when n ≤ 2q, we may first solve the Hermitian positive definite system of linear equations

(α²I + F*F) p^(k+1) = −(αE + F*W) u^(k+1/2) + α(αI − N) p^(k+1/2) + F*f + αg,

and then compute

u^(k+1) = (1/α) ( −F p^(k+1) + (αI − W) u^(k+1/2) + f );

and when n ≥ 2q, we may first solve the Hermitian positive definite system of linear equations

(α²I + FF*) u^(k+1) = ( α(αI − W) + F(E + F*) ) u^(k+1/2) − F(αI − N) p^(k+1/2) + αf − Fg,

Z.-Z. Bai, G.H. Golub, L.-Z. Lu and J.-F. Yin

and then compute

    p^(k+1) = α^(−1) ( F* u^(k+1) − (E + F*) u^(k+1/2) + (αI − N) p^(k+1/2) + g ).

We remark that, unlike the HSS and the PHSS iteration methods in [3, 4], the BTSS iteration methods are divergent for any α > 0 when they are applied to solve the saddle-point problem

    Ax ≡ [  W   F ] [ u ]  =  [ f ] ≡ b,
         [ −F*  0 ] [ p ]     [ g ]

where W ∈ C^(q×q) is Hermitian positive definite, F ∈ C^(q×(n−q)) is of full column rank, and 2q ≥ n. More concretely, when W = I, following a derivation analogous to that in [3] we obtain that the eigenvalues of the BTSS iteration matrix

    M_1(α) = (αI + S_1)^(−1) (αI − T_1)(αI + T_1)^(−1) (αI − S_1)

are (α − 1)/(α + 1), with multiplicity 2q − n, and

    ( α(α² + 3σ_k²) ± sqrt( (α² + σ_k²)² + 4α²σ_k²(α² + 2σ_k²) ) ) / ( (α + 1)(α² + σ_k²) ),   k = 1, 2, ..., n − q,

where σ_k (k = 1, 2, ..., n − q) are the positive singular values of the matrix F. Therefore,

    ρ(M_1(α)) = max_{1 ≤ k ≤ n−q} ( α(α² + 3σ_k²) + sqrt( (α² + σ_k²)² + 4α²σ_k²(α² + 2σ_k²) ) ) / ( (α + 1)(α² + σ_k²) ) > 1

for any α > 0. However, all eigenvalues of the iteration matrix M_1(α) are real: the eigenvalue (α − 1)/(α + 1) and the eigenvalues

    ( α(α² + 3σ_k²) − sqrt( (α² + σ_k²)² + 4α²σ_k²(α² + 2σ_k²) ) ) / ( (α + 1)(α² + σ_k²) ),   k = 1, 2, ..., n − q,

lie within the interval (−1, 1), while the remaining eigenvalues

    ( α(α² + 3σ_k²) + sqrt( (α² + σ_k²)² + 4α²σ_k²(α² + 2σ_k²) ) ) / ( (α + 1)(α² + σ_k²) ),   k = 1, 2, ..., n − q,

lie within the interval (1, +∞).

5 Numerical experiments

We use three examples to examine numerically the feasibility and effectiveness of our new methods. All tests are started from random vectors, performed in MATLAB with machine precision 10^(−16), and terminated when the current iterate satisfies ||r^(k)||_2 / ||r^(0)||_2 < 10^(−5), where r^(k) is the residual of the current (say, the k-th) iteration. The right-hand-side vector b is computed from b = A x_*, where x_* is the exact solution of the system of linear equations (1), chosen as a randomly generated vector.
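The experimental protocol just described — a random exact solution x_*, the right-hand side b = A x_*, and the relative-residual stopping rule — can be sketched generically as below. A plain Richardson step stands in for an actual TSS/BTSS sweep, and all names (run_iteration, step, the toy matrix A) are ours, not the paper's:

```python
import numpy as np

def run_iteration(step, A, b, x0, tol=1e-5, maxit=5000):
    """Run a stationary iteration x -> step(x) for A x = b, stopping when
    ||r^(k)||_2 / ||r^(0)||_2 < tol, as in the experiments described above."""
    x = x0.copy()
    r0 = np.linalg.norm(b - A @ x)
    for k in range(1, maxit + 1):
        x = step(x)
        if np.linalg.norm(b - A @ x) / r0 < tol:
            return x, k
    return x, maxit

# Test protocol of the experiments: random exact solution x_*, then b = A x_*.
rng = np.random.default_rng(1)
n = 50
B = rng.standard_normal((n, n))
A = 2.0 * np.eye(n) + (B - B.T) / np.sqrt(n)   # positive definite (Hermitian part 2I)
x_star = rng.standard_normal(n)
b = A @ x_star

# Richardson stand-in for a TSS/BTSS sweep (damping 0.1 chosen for this toy A).
x, its = run_iteration(lambda x: x + 0.1 * (b - A @ x), A, b, np.zeros(n))
```

For this toy A the iterate reaches the 10^(−5) relative residual in a modest number of steps; in the paper's experiments `step` would be one TSS or BTSS sweep.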

Example 5.1 [] Consider the system of linear equations (1), for which A ∈ R^(n×n) is the upwind difference matrix of the two-dimensional convection-diffusion equation

    −(u_xx + u_yy) + q exp(x + y)(x u_x + y u_y) = f(x, y)

on the unit square Ω = [0, 1] × [0, 1] with homogeneous Dirichlet boundary conditions. The stepsizes along the x and y directions are the same, i.e., h = 1/(m + 1). In our computations, we use the TSS iteration resulting from the TS splitting A = T + S in (7) as the test method. In Figures 1 and 2, we plot the eigenvalues of the iteration matrices of both the HSS and the TSS methods when q = 1 and m = 24, 32, respectively. It is clear that the eigenvalue distributions of the HSS and the TSS iteration matrices are quite different: the eigenvalues of the HSS iteration matrix are tightly clustered around the real axis and a circular arc in the complex plane, while those of the TSS iteration matrix cluster closely around the real axis and a circle; however, the distribution domains of the eigenvalues of the two iteration matrices are very similar. This observation is further confirmed by Figures 3 and 4, for which m = 32 and q = 6, 9, respectively. In particular, these figures show that as q becomes larger, the shapes and domains of the eigenvalue distributions of the HSS and the TSS iteration matrices become more similar. In Table 1, we list the experimentally optimal parameters α_exp and the corresponding spectral radii ρ(M(α_exp)) of the iteration matrices M(α_exp) of both the TSS and the HSS methods for several m when q = 1. The data show that as m increases, α_exp decreases while ρ(M(α_exp)) increases, for both the TSS and the HSS methods. Both α_exp and ρ(M(α_exp)) of TSS are correspondingly larger than those of HSS for all tested m. This straightforwardly implies that the number of iteration steps (IT) of TSS may be larger than that of HSS.
However, because TSS has a much smaller computational workload than HSS at each iteration step, the actual computing time (CPU) of TSS may be less than that of HSS. These facts are confirmed by the numerical results in Table 2. In fact, the speed-up of TSS with respect to HSS is quite noticeable, where we define

    speed-up = (CPU of HSS method) / (CPU of TSS (or BTSS) method).

In Table 2, the speed-up is at least 1.52 for m = 64, and it even achieves 3.76 for m = 24.

Table 1: α_exp versus ρ(M(α_exp)) for Example 5.1 (q = 1)

    m                   8       16      24      32      64
    TSS  α_exp          0.8     0.69    0.424   0.322   0.163
         ρ(M(α_exp))    0.723   0.858   0.905   0.929   0.964
    HSS  α_exp          0.54    0.595   0.43    0.36    0.163
         ρ(M(α_exp))    0.76    0.837   0.882   0.909   0.953
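The experimentally optimal α_exp reported in tables like Table 1 comes from a parameter scan rather than a formula; a minimal grid-search sketch over the spectral radius (dense matrices for illustration, helper names ours) is:

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def iteration_matrix(T, S, alpha):
    """M(alpha) = (alpha I + S)^(-1) (alpha I - T)(alpha I + T)^(-1) (alpha I - S)."""
    I = np.eye(T.shape[0])
    return np.linalg.solve(alpha * I + S,
                           (alpha * I - T) @ np.linalg.solve(alpha * I + T, alpha * I - S))

def alpha_exp(T, S, grid):
    """Pick the grid point minimizing the spectral radius of M(alpha)."""
    radii = [spectral_radius(iteration_matrix(T, S, a)) for a in grid]
    k = int(np.argmin(radii))
    return float(grid[k]), radii[k]
```

For the HSS special case one takes T = (A + A*)/2 and S = (A − A*)/2; for a positive definite A every grid point then already yields ρ(M(α)) < 1, and the scan merely locates the flattest point of the curve.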

[Figure 1: The eigenvalue distributions of the HSS iteration matrices when q = 1, and m = 24 (left) and m = 32 (right), for Example 5.1.]

[Figure 2: The eigenvalue distributions of the TSS iteration matrices when q = 1, and m = 24 (left) and m = 32 (right), for Example 5.1.]

As we have mentioned in Sections 2 and 3, computing the optimal parameter α_opt for both HSS and TSS is very difficult, and it is usually problem-dependent. In Figure 5 we depict the curves of ρ(M(α)) with respect to α and of ρ(M(α_exp)) with respect to q, and in Figure 6 we depict the curves of α_exp with respect to q, when m = 32, for both the TSS and the HSS iteration methods, to show these functional relationships intuitively. Evidently, from Figure 5 we see that ρ(M(α)) attains its minimum at about α = α_exp, that ρ(M(α_exp)) monotonically decreases as q increases, and that both ρ(M(α)) and ρ(M(α_exp)) for TSS are larger than those for HSS for all α and q, respectively. These facts are further confirmed by the numerical results listed in Table 3. It then follows that IT of TSS will be larger than that of HSS. However, because TSS has a much smaller computational workload than HSS at each iteration step, the actual CPU of TSS may be less than that of HSS, which straightforwardly implies that TSS is, practically, more efficient than HSS. These facts are confirmed by the numerical results in Table 4. In fact, in Table 4 the speed-up of TSS with respect to HSS is quite noticeable; it is at least 1.78 for all

tested values of q except for q = 5, whose speed-up is 1.5, and it even achieves 2.8 for q = 3. In addition, from Figure 6 we observe that α_exp monotonically increases as q increases for both the TSS and the HSS iteration methods, that α_exp for TSS is larger than that for HSS for all q, and that the gap between the α_exp's for TSS and HSS widens as q becomes larger. In Figure 7 we depict the curves of IT and CPU with respect to q for both the TSS and the HSS iteration methods. We see that for small q, say q < 4, IT of TSS is less than that of HSS, while for large q, say q ≥ 4, the situation is reversed. However, the CPU of TSS is always less than that of HSS for all tested q. This clearly shows that TSS is much more effective than HSS in actual computations.

[Figure 3: The eigenvalue distributions of the HSS iteration matrices when m = 32, and q = 6 (left) and q = 9 (right), for Example 5.1.]

[Figure 4: The eigenvalue distributions of the TSS iteration matrices when m = 32, and q = 6 (left) and q = 9 (right), for Example 5.1.]

Example 5.2 Consider the system of linear equations (1), for which A = (a_{k,j}) ∈ R^(n×n) is

defined as follows:

    a_{k,j} = 1 / ((k + j − 1)(|k − j| + 1)),   for max{1, k − l_b} ≤ j ≤ min{n, k + l_u},
    a_{k,j} = 0,                                otherwise,          k, j = 1, 2, ..., n,

where l_b and l_u are the lower and the upper half-bandwidths of the matrix A.

Table 2: IT and CPU for Example 5.1 (q = 1)

    m              8       16      24      32      64
    TSS  IT        3       56      83      3       234
         CPU       0.9     0.453   4.78    48.95   75.948
    HSS  IT        24      5       82      8       24
         CPU       0.9     0.825   17.954  87.96   115.66
    speed-up       2.0     1.82    3.76    1.83    1.52

[Figure 5: Curves of ρ(M(α)) versus α with q = 1 (left) and ρ(M(α_exp)) versus q (right) for the HSS and the TSS iteration matrices when m = 32, for Example 5.1.]

Note that A ∈ R^(n×n) is a truncated Hilbert-like matrix, and it is non-symmetric positive definite when l_b ≠ l_u. When l_b > l_u (or l_b < l_u), we use the TSS iteration method induced by the TS splitting with l = 1 (or l = 2) in (7) to solve the system of linear equations (1). In our computations, we choose l_u = 1 and l_b = n − 1, so that the coefficient matrix A is an n-by-n lower Hessenberg matrix. In Figures 8 and 9, we plot the eigenvalues of the iteration matrices of both the HSS and the TSS methods when n = 400 and n = 800, respectively. Again, it is clear that the eigenvalue distributions of the HSS and the TSS iteration matrices are quite different: the eigenvalues of the HSS iteration matrix are clustered around the real axis and a circle in the complex plane, while those of the TSS iteration matrix are clustered around the real axis and several circles; however, the distribution domain of the eigenvalues of the TSS iteration matrix is considerably

smaller than that of the HSS iteration matrix, in particular along the direction of the imaginary axis.

[Figure 6: Curves of α_exp versus q for the HSS and the TSS iteration matrices when m = 32, for Example 5.1.]

Table 3: α_exp and ρ(M(α_exp)) when m = 32 for Example 5.1

    q                   1       2       3       4       5       6       7       8       9
    TSS  α_exp          0.322   0.379   0.43    0.474   0.52    0.546   0.576   0.605   0.63
         ρ(M(α_exp))    0.929   0.92    0.912   0.903   0.895   0.887   0.88    0.874   0.868
    HSS  α_exp          0.36    0.286   0.285   0.298   0.35    0.33    0.346   0.36    0.373
         ρ(M(α_exp))    0.909   0.903   0.892   0.882   0.87    0.862   0.853   0.845   0.837

In Table 5, we list the experimentally optimal parameters α_exp and the corresponding spectral radii ρ(M(α_exp)) of the iteration matrices M(α_exp) of both the TSS and the HSS methods for several n. The data show that as n increases, α_exp decreases while ρ(M(α_exp)) increases, for both the TSS and the HSS methods. ρ(M(α_exp)) of TSS is slightly larger than that of HSS, correspondingly, for all tested n. This implies that IT of TSS may be comparable with that of HSS. Because TSS has a much smaller computational workload than HSS at each iteration step, the actual CPU of TSS may be much less than that of HSS; therefore, TSS will be much more efficient than HSS. This fact is further confirmed by the numerical results in Table 6, where the speed-up is at least 1.45 for n = 800, and it achieves 1.77 for n = 100. In Figure 10 we depict the curves of ρ(M(α)) with respect to α when n = 800, for both the TSS and the HSS iteration methods, to show this functional relationship intuitively. Evidently, we see that ρ(M(α)) attains its minimum at about α = α_exp, and that the spectral radius of the TSS iteration matrix is almost equal to that of the HSS iteration matrix once α becomes larger than the α_exp of TSS.
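The Example 5.2 coefficient matrix can be assembled directly from its definition; the entry formula 1/((k + j − 1)(|k − j| + 1)) below is our reading of a garbled display, so treat it as an assumption:

```python
import numpy as np

def hilbert_like(n, lb, lu):
    """Truncated Hilbert-like matrix of Example 5.2 (1-based indices k, j):
    a[k, j] = 1/((k + j - 1) * (|k - j| + 1)) for max(1, k - lb) <= j <= min(n, k + lu),
    and a[k, j] = 0 otherwise."""
    A = np.zeros((n, n))
    for k in range(1, n + 1):
        for j in range(max(1, k - lb), min(n, k + lu) + 1):
            A[k - 1, j - 1] = 1.0 / ((k + j - 1) * (abs(k - j) + 1))
    return A

# The choice used in the experiments: one superdiagonal (l_u = 1) and a full
# lower part (l_b = n - 1), giving a nonsymmetric lower Hessenberg matrix.
A = hilbert_like(6, lb=5, lu=1)
```

With this choice, every entry above the first superdiagonal vanishes while the full lower triangle is kept, so A is lower Hessenberg and nonsymmetric, matching the description in the text.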

Table 4: IT and CPU when m = 32 for Example 5.1

    q              1       2       3       4       5       6       7       8       9
    TSS  IT        3       98      95      9       88      83      79      79      76
         CPU       48.95   38.567  39.224  35.889  35.6    18.6    17.74   17.379  16.33
    HSS  IT        8       6       99      9       82      83      7       72      65
         CPU       87.96   91.372  85.389  72.429  37.56   38.673  32.98   31.427  29.3
    speed-up       1.83    2.37    2.8     2.2     1.5     2.3     1.86    1.8     1.78

[Figure 7: Curves of IT versus q (left) and CPU versus q (right) for the HSS and the TSS iteration methods when α = α_exp and m = 32, for Example 5.1.]

Example 5.3 Consider the system of linear equations (1), for which

    A = [  W     F Ω ]
        [ −F^T   N   ],

where W ∈ R^(q×q) and N, Ω ∈ R^((n−q)×(n−q)), with 2q > n. We define the matrices W = (w_{k,j}), N = (n_{k,j}), F = (f_{k,j}) and Ω = diag(ω_1, ..., ω_{n−q}) as follows:

    w_{k,j} = k + 1 for j = k;   w_{k,j} = 1 for |k − j| = 1;   w_{k,j} = 0 otherwise   (k, j = 1, 2, ..., q),

    n_{k,j} = k + 1 for j = k;   n_{k,j} = 1 for |k − j| = 1;   n_{k,j} = 0 otherwise   (k, j = 1, 2, ..., n − q),

    f_{k,j} = j for k = j + 2q − n;   f_{k,j} = 0 otherwise   (k = 1, 2, ..., q;  j = 1, 2, ..., n − q),

and ω_k = 1/k, k = 1, 2, ..., n − q. Note that A ∈ R^(n×n) is a non-symmetric positive definite matrix. We use the BTSS iteration method defined by (11)-(13) to solve the system of linear equations (1). In our computations, we choose q = 9n/10.

[Figure 8: The eigenvalue distributions of the HSS iteration matrices when n = 400 (left) and n = 800 (right) for Example 5.2.]

[Figure 9: The eigenvalue distributions of the TSS iteration matrices when n = 400 (left) and n = 800 (right) for Example 5.2.]

In Figures 11 and 12, we plot the eigenvalues of the iteration matrices of both the HSS and the BTSS methods when n = 400 and n = 800, respectively. Analogously, it is clear that the eigenvalue distributions of the HSS and the BTSS iteration matrices are quite different: the eigenvalues of the HSS iteration matrix are clustered on the real axis and a circular arc in the complex plane, while those of the BTSS iteration matrix are clustered on the real axis and a circular arc as well as located in a triangular area; however, the distribution domain of the eigenvalues

of the BTSS iteration matrix is considerably smaller than that of the HSS iteration matrix, in particular along the direction of the imaginary axis.

Table 5: α_exp and ρ(M(α_exp)) for Example 5.2

    n                   100     200     400     800     1600
    TSS  α_exp          1.27    0.8     0.3     0.18    0.15
         ρ(M(α_exp))    0.883   0.858   0.944   0.962   0.978
    HSS  α_exp          1.23    0.595   0.1     0.07    0.04
         ρ(M(α_exp))    0.868   0.832   0.936   0.956   0.974

Table 6: IT and CPU for Example 5.2

    n              100     200     400     800     1600
    TSS  IT        54      7       3       2       326
         CPU       0.78    0.499   3.59    37.97   220.35
    HSS  IT        56      8       4       7       254
         CPU       1.38    0.854   5.643   55.26   327.958
    speed-up       1.77    1.7     1.6     1.45    1.49

In Table 7, we list the experimentally optimal parameters α_exp and the corresponding spectral radii ρ(M(α_exp)) of the iteration matrices M(α_exp) of both the BTSS and the HSS methods for several n. The data show that as n increases, both α_exp and ρ(M(α_exp)) increase, for the BTSS and the HSS methods alike. ρ(M(α_exp)) of BTSS is slightly larger than that of HSS, correspondingly, for all tested n. This straightforwardly implies that IT of BTSS may be comparable to that of HSS. Because BTSS has a much smaller computational workload than HSS at each iteration step, the actual CPU of BTSS may be much less than that of HSS; therefore, BTSS will be much more efficient than HSS. This fact is further confirmed by the numerical results in Table 8, where the speed-up is at least 1.75 for n = 800, and it even achieves 2.4 for n = 200.

Table 7: α_exp and ρ(M(α_exp)) for Example 5.3

    n                   100     200     400     800     1600
    BTSS  α_exp         4.865   6.874   9.73    13.733  19.48
          ρ(M(α_exp))   0.9     0.929   0.949   0.964   0.974
    HSS   α_exp         4.476   6.35    8.999   12.736  18.8
          ρ(M(α_exp))   0.896   0.924   0.946   0.96    0.972

In Figure 13 we depict the curves of ρ(M(α)) with respect to α when n = 800, for both the BTSS and the HSS iteration methods, to show this functional relationship intuitively. Evidently, we see that ρ(M(α)) attains its minimum at about α = α_exp, and that the spectral radius of the BTSS

iteration matrix is almost equal to that of the HSS iteration matrix once α becomes larger than the α_exp of BTSS.

[Figure 10: Curves of ρ(M(α)) versus α for the HSS and the TSS iteration matrices when n = 800, for Example 5.2.]

Table 8: IT and CPU for Example 5.3

    n               100     200     400     800     1600
    BTSS  IT        66              45      2       293
          CPU       0.64    0.402   2.99    39.869  208.375
    HSS   IT        7       97      34      92      269
          CPU       1.33    0.967   6.635   69.744  394.76
    speed-up        2.08    2.4     2.22    1.75    1.89

6 Concluding remarks

We have further developed the HSS and the NSS iteration methods, and established a more general framework of iteration methods based on the PS splitting of the positive definite coefficient matrix of the system of linear equations (1). We have proved the convergence of the PSS iteration method and shown that it inherits all advantages of both the HSS and the NSS iteration methods. Several special cases of the PSS method, namely the TSS (or BTSS) methods, have been described, and numerical examples have been implemented to show that the TSS/BTSS iteration methods are much more practical and effective than the HSS iteration method in terms of both computer storage and CPU time. We should mention, however, that how to choose the optimal parameters for these iteration methods is a very practical and interesting problem that needs further in-depth study.

[Figure 11: The eigenvalue distributions of the HSS iteration matrices when n = 400 (left) and n = 800 (right) for Example 5.3.]

[Figure 12: The eigenvalue distributions of the BTSS iteration matrices when n = 400 (left) and n = 800 (right) for Example 5.3.]

[Figure 13: Curves of ρ(M(α)) versus α for the HSS and the BTSS iteration matrices when n = 800, for Example 5.3.]

References

[1] Z.-Z. Bai, G.H. Golub and M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24:3 (2003), 603-626. Also available at http://www-sccm.stanford.edu/
[2] Z.-Z. Bai, G.H. Golub and M.K. Ng, On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iteration, BIT, submitted. Also available at http://www-sccm.stanford.edu/
[3] Z.-Z. Bai, G.H. Golub and J.-Y. Pan, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Technical Report SCCM-2-2, Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, 2002.
[4] M. Benzi and G.H. Golub, An iterative method for generalized saddle-point problems, Technical Report SCCM-2-4, Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, 2002.
[5] J.R. Bunch, A note on the stable decomposition of skew-symmetric matrices, Math. Comput., 38:158 (1982), 475-479.
[6] J.R. Bunch, Stable algorithms for solving symmetric and skew-symmetric systems, Bull. Austral. Math. Soc., 26:1 (1982), 107-119.
[7] P. Concus and G.H. Golub, A generalized conjugate gradient method for non-symmetric systems of linear equations, in Computing Methods in Applied Sciences and Engineering, Lecture Notes in Econom. and Math. Systems 134, R. Glowinski and J.L. Lions, eds., Springer-Verlag, Berlin, 1976, pp. 56-65. Also available at http://www-sccm.stanford.edu/
[8] J. Douglas, Jr. and H. Rachford, Jr., Alternating direction methods for three space variables, Numer. Math., 4 (1956), 41-63.
[9] G.H. Golub and C.F. Van Loan, Matrix Computations, 3rd edition, Johns Hopkins University Press, Baltimore, MD, 1996.
[10] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, 1996.
[11] Y. Saad and M.H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (1986), 856-869.
[12] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962.