Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems

Applied Mathematical Sciences, Vol. 10, 2016, no. 30, 1477-1488
HIKARI Ltd, www.m-hikari.com
http://dx.doi.org/10.12988/ams.2016.6269

Jae Heon Yun
Department of Mathematics, College of Natural Sciences
Chungbuk National University, Cheongju 28644, Korea

Copyright (c) 2016 Jae Heon Yun. This article is distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we first review the PU and Uzawa-SAOR relaxation methods with singular or nonsingular preconditioning matrices for solving singular saddle point problems, and then we provide numerical experiments to compare the performance of the relaxation iterative methods using nonsingular preconditioners with those using singular preconditioners.

Mathematics Subject Classification: 65F10, 65F50

Keywords: Relaxation iterative methods, Singular saddle point problem, Semi-convergence, Drazin inverse, Moore-Penrose inverse

1 Introduction

We consider the following large sparse augmented linear system

\[
\begin{pmatrix} A & B \\ -B^T & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ -g \end{pmatrix},
\tag{1}
\]

where A ∈ R^{m×m} is a symmetric positive definite matrix and B ∈ R^{m×n} is a rank-deficient matrix with m ≥ n. In this case, the coefficient matrix of (1) is singular, and so problem (1) is called a singular saddle point problem.

This type of problem appears in many scientific applications, such as constrained optimization [10], the finite element approximation of the Navier-Stokes equations [6], and constrained and generalized least squares problems [1, 13]. Recently, several authors have studied the semi-convergence of relaxation iterative methods for solving the singular saddle point problem (1). Zheng et al. [17] studied semi-convergence of the PU (Parameterized Uzawa) method, Li and Huang [7] examined semi-convergence of the GSSOR method, Zhang et al. [14] provided a semi-convergence analysis of the inexact Uzawa method, Zhang and Wang [16] studied semi-convergence of the GPIU method, Chao and Chen [5] provided a semi-convergence analysis of the Uzawa-SOR method, Yun [11] and Liang and Zhang [9] studied convergence or semi-convergence of the Uzawa-SAOR method, and Zhou and Zhang [18] studied semi-convergence of the GMSSOR (Generalized Modified SSOR) method.

The purpose of this paper is to provide performance comparison results for relaxation iterative methods with singular or nonsingular preconditioning matrices for solving the singular saddle point problem (1). This paper is organized as follows. In Section 2, we provide preliminary results for semi-convergence of the basic iterative methods. In Section 3, we present the PU and Uzawa-SAOR methods with nonsingular preconditioners. In Section 4, we present the PU and Uzawa-SAOR methods with singular preconditioners. In Section 5, we provide numerical experiments comparing the performance of the PU and Uzawa-SAOR methods using nonsingular preconditioners with those using singular preconditioners. Lastly, some conclusions are drawn.
2 Preliminaries for Semi-convergence

For the coefficient matrix of the singular saddle point problem (1), we consider the following splitting

\[
\mathcal{A} = \begin{pmatrix} A & B \\ -B^T & 0 \end{pmatrix} = D - L - U,
\tag{2}
\]

where

\[
D = \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix}, \quad
L = \begin{pmatrix} 0 & 0 \\ B^T & 0 \end{pmatrix}, \quad
U = \begin{pmatrix} P - A & -B \\ 0 & Q \end{pmatrix},
\tag{3}
\]

where P ∈ R^{m×m} is an SPD (Symmetric Positive Definite) matrix which approximates A, and Q ∈ R^{n×n} is an SPD matrix or a singular symmetric positive semi-definite matrix which approximates the approximate Schur complement matrix B^T P^{-1} B. Several authors have presented semi-convergence analyses for relaxation iterative methods corresponding to the splitting (2) and (3), such as
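As a quick sanity check on the splitting (2)-(3), the following NumPy sketch builds D, L and U for a tiny made-up problem (the blocks A and B and the illustrative choices P = A, Q = I are assumptions for illustration, not the paper's test data) and verifies that D - L - U reproduces the coefficient matrix of (1):

```python
import numpy as np

m, n = 3, 2
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])          # small SPD (1,1) block
B = np.array([[1.0, 1.0],
              [0.0, 0.0],
              [1.0, 1.0]])               # rank-deficient (1,2) block

# Coefficient matrix of (1)
A_blk = np.block([[A, B], [-B.T, np.zeros((n, n))]])

P, Q = A, np.eye(n)                      # illustrative SPD choices
D = np.block([[P, np.zeros((m, n))], [np.zeros((n, m)), Q]])
L = np.block([[np.zeros((m, m)), np.zeros((m, n))], [B.T, np.zeros((n, n))]])
U = np.block([[P - A, -B], [np.zeros((n, m)), Q]])

ok = np.allclose(A_blk, D - L - U)       # the splitting (2) holds
```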

the PU method, the PIU method, the GSSOR method, the inexact Uzawa method, the Uzawa-SOR method, the Uzawa-AOR method, the Uzawa-SAOR method, and the GMSSOR method [2, 5, 7, 11, 12, 14, 16, 17, 18].

For simplicity of exposition, some notation and definitions are presented. For a Hermitian matrix H, λ_min(H) and λ_max(H) denote the minimum and maximum eigenvalues of H, respectively. For a square matrix G, N(G) denotes the null space of G and ρ(G) denotes the spectral radius of G.

Let us recall some useful results on iterative methods for solving singular linear systems based on matrix splittings. For a matrix E ∈ R^{n×n}, the Drazin inverse [3] of E is the unique matrix E^D which satisfies

\[
E^D E E^D = E^D, \quad E^D E = E E^D, \quad E^{k+1} E^D = E^k,
\]

where k = index(E) is the size of the largest Jordan block corresponding to the zero eigenvalue of E. Let A = M - N be a splitting of a singular matrix A, where M is nonsingular. Then the iterative method corresponding to this splitting for solving a singular linear system Ax = b is given by

\[
x_{i+1} = M^{-1} N x_i + M^{-1} b \quad \text{for } i = 0, 1, \ldots.
\tag{4}
\]

Definition 2.1 The iterative method (4) is semi-convergent if for any initial guess x_0, the iteration sequence {x_i} produced by (4) converges to a solution x_* of the singular linear system Ax = b.

It is well known that if A is nonsingular, then the iterative method (4) is convergent if and only if ρ(M^{-1}N) < 1. Since A is singular, the iteration matrix M^{-1}N has an eigenvalue 1, and thus ρ(M^{-1}N) cannot be less than 1. Thus, we introduce the pseudo-spectral radius ν(M^{-1}N):

\[
\nu(M^{-1}N) = \max\{\, |\lambda| : \lambda \in \sigma(M^{-1}N) \setminus \{1\} \,\},
\]

where σ(M^{-1}N) is the set of eigenvalues of M^{-1}N. Notice that a matrix T is called semi-convergent if lim_{k→∞} T^k exists, or equivalently index(I - T) = 1 and ν(T) < 1 [3].
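The semi-convergence criterion can be checked numerically; a minimal sketch, assuming a made-up singular A and splitting matrix M (the rank test rank(I - T) = rank((I - T)^2) stands in for the condition index(I - T) = 1):

```python
import numpy as np

A = np.diag([1.0, 1.0, 0.0])             # singular coefficient matrix
M = np.diag([2.0, 2.0, 1.0])             # nonsingular splitting matrix
N = M - A
T = np.linalg.solve(M, N)                # iteration matrix M^{-1} N

# pseudo-spectral radius: largest |lambda| over eigenvalues different from 1
eigs = np.linalg.eigvals(T)
nu = max(abs(lam) for lam in eigs if abs(lam - 1.0) > 1e-12)

# index(I - T) = 1 iff rank(I - T) equals rank((I - T)^2)
E = np.eye(3) - T
index_one = np.linalg.matrix_rank(E) == np.linalg.matrix_rank(E @ E)

semi_convergent = index_one and nu < 1   # here nu = 0.5, so semi-convergent
```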
Theorem 2.2 ([3]) The iterative method (4) is semi-convergent if and only if index(I - M^{-1}N) = 1 and ν(M^{-1}N) < 1, i.e., M^{-1}N is semi-convergent. In this case, the iteration sequence {x_i} converges to

\[
x_* = (I - M^{-1}N)^D M^{-1} b + \left( I - (I - M^{-1}N)^D (I - M^{-1}N) \right) x_0.
\]

The Moore-Penrose inverse [3] of a singular matrix E ∈ R^{n×n} is the unique matrix E^† which satisfies

\[
E E^\dagger E = E, \quad E^\dagger E E^\dagger = E^\dagger, \quad
(E E^\dagger)^T = E E^\dagger, \quad (E^\dagger E)^T = E^\dagger E.
\]

Let A = M - N be a splitting of a singular matrix A, where M is singular. Then the iterative method corresponding to this singular splitting for solving a singular linear system Ax = b is given by

\[
x_{i+1} = (I - M^\dagger A) x_i + M^\dagger b \quad \text{for } i = 0, 1, \ldots.
\tag{5}
\]

As in Definition 2.1, the iterative method (5) is called semi-convergent if for any initial guess x_0, the iteration sequence {x_i} produced by (5) converges to a solution x_* of the singular linear system Ax = b.

Theorem 2.3 ([4]) The iterative method (5) is semi-convergent if and only if index(M^† A) = 1, ν(I - M^† A) < 1, and N(M^† A) = N(A), i.e., I - M^† A is semi-convergent and N(M^† A) = N(A).

3 Relaxation methods with a nonsingular preconditioner Q

We first present the Uzawa-SAOR method with nonsingular preconditioners for solving the singular saddle point problem (1) [11]. Let A be decomposed as A = D - L - U, where D = diag(A) is the diagonal matrix, and L and U are strictly lower and strictly upper triangular matrices, respectively. Let ω, s and τ be positive parameters. Then the Uzawa-SAOR method with an SPD matrix Q can be written as

\[
\begin{pmatrix} x_{k+1} \\ y_{k+1} \end{pmatrix}
= H_1(\omega, s, \tau) \begin{pmatrix} x_k \\ y_k \end{pmatrix}
+ M_1(\omega, s, \tau) \begin{pmatrix} f \\ -g \end{pmatrix}, \quad k = 0, 1, 2, \ldots,
\tag{6}
\]

where

\[
H_1(\omega, s, \tau) = \begin{pmatrix} M(\omega, s) & 0 \\ -\tau B^T & Q \end{pmatrix}^{-1}
\begin{pmatrix} N(\omega, s) & -\omega B \\ 0 & Q \end{pmatrix}, \quad
M_1(\omega, s, \tau) = \begin{pmatrix} M(\omega, s) & 0 \\ -\tau B^T & Q \end{pmatrix}^{-1}
\begin{pmatrix} \omega I & 0 \\ 0 & \tau I \end{pmatrix},
\]

\[
\begin{aligned}
M(\omega, s) &= (D - sL)\, C(\omega, s)^{-1} (D - sU), \\
N(\omega, s) &= \big((1-\omega)D + (\omega-s)U + \omega L\big)\, C(\omega, s)^{-1} \big((1-\omega)D + (\omega-s)L + \omega U\big), \\
C(\omega, s) &= (2-\omega)D + (\omega-s)(L+U).
\end{aligned}
\tag{7}
\]

Notice that C(ω, s) is symmetric positive definite if 0 < ω ≤ s < 2 [11]. From (6) and (7), the Uzawa-SAOR method with an SPD matrix Q can be rewritten as

Algorithm 1: Uzawa-SAOR method with nonsingular Q
  Choose ω, s, τ, and initial vectors x_0, y_0
  C(ω, s) = (2-ω)D + (ω-s)(L+U)

  For k = 0, 1, ..., until convergence
    x_{k+1} = x_k + ω (D - sU)^{-1} C(ω, s) (D - sL)^{-1} (f - A x_k - B y_k)
    y_{k+1} = y_k + τ Q^{-1} (B^T x_{k+1} - g)
  End For

Liang and Zhang [9] obtained the following semi-convergence results for the Uzawa-SAOR method with a nonsingular preconditioner Q from the convergence results of the Uzawa-SAOR method presented in [11].

Theorem 3.1 Suppose that Q is an SPD matrix. If 0 < ω ≤ s < 2 and

\[
0 < \tau < \frac{2\, \lambda_{\min}\big(M(\omega,s) + N(\omega,s)\big)}{\omega\, \rho(B Q^{-1} B^T)},
\]

then ν(H_1(ω, s, τ)) < 1 and thus the Uzawa-SAOR method is semi-convergent.

Corollary 3.2 Suppose that Q is an SPD matrix. If 0 < ω ≤ s < 2 and 0 < τ < 2λ_min(A) / ρ(B Q^{-1} B^T), then the Uzawa-SAOR method is semi-convergent.

We next present the PU (Parameterized Uzawa) method using nonsingular preconditioners for the singular saddle point problem, which was studied in [17]. The PU method with an SPD matrix Q can be written as

Algorithm 2: PU method with nonsingular Q
  Choose ω, τ and initial vectors x_0, y_0
  For k = 0, 1, ..., until convergence
    x_{k+1} = (1 - ω) x_k + ω A^{-1} (f - B y_k)
    y_{k+1} = y_k + τ Q^{-1} (B^T x_{k+1} - g)
  End For

where ω > 0 and τ > 0 are relaxation parameters. Zheng et al. [17] proved the following theorem about the optimal parameters and the corresponding optimal semi-convergence factor for the PU method with a nonsingular Q.

Theorem 3.3 Suppose that Q is an SPD matrix. Let µ_min and µ_max be the smallest and largest nonzero eigenvalues of the matrix Q^{-1} B^T A^{-1} B, respectively. Then the optimal parameters ω_o and τ_o are given by

\[
\omega_o = \frac{4\sqrt{\mu_{\min}\, \mu_{\max}}}{\big(\sqrt{\mu_{\min}} + \sqrt{\mu_{\max}}\big)^2}
\quad \text{and} \quad
\tau_o = \frac{1}{\sqrt{\mu_{\min}\, \mu_{\max}}},
\]

and the corresponding optimal semi-convergence factor is given by

\[
\frac{\sqrt{\mu_{\max}} - \sqrt{\mu_{\min}}}{\sqrt{\mu_{\max}} + \sqrt{\mu_{\min}}}.
\]
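To illustrate Algorithm 2 together with Theorem 3.3, the sketch below runs the PU iteration on a small made-up consistent singular problem, with Q = I and the parameters computed from the nonzero eigenvalues of Q^{-1} B^T A^{-1} B (all problem data are assumptions for illustration, not the paper's test problem):

```python
import numpy as np

m, n = 6, 3
A = np.diag(np.arange(1.0, m + 1))            # simple SPD (1,1) block
B = np.zeros((m, n))
B[0, 0], B[1, 1] = 1.0, 2.0
B[:, 2] = B[:, 0] + B[:, 1]                   # dependent third column: rank(B) = 2

x_true, y_true = np.ones(m), np.ones(n)
f = A @ x_true + B @ y_true
g = B.T @ x_true                              # consistent right-hand side

Q = np.eye(n)
S = np.linalg.solve(Q, B.T @ np.linalg.solve(A, B))
mu = sorted(lam.real for lam in np.linalg.eigvals(S) if abs(lam) > 1e-10)
mu_min, mu_max = mu[0], mu[-1]
omega = 4 * np.sqrt(mu_min * mu_max) / (np.sqrt(mu_min) + np.sqrt(mu_max)) ** 2
tau = 1.0 / np.sqrt(mu_min * mu_max)          # optimal parameters of Theorem 3.3

x, y = np.zeros(m), np.zeros(n)
for _ in range(200):                          # Algorithm 2
    x = (1 - omega) * x + omega * np.linalg.solve(A, f - B @ y)
    y = y + tau * np.linalg.solve(Q, B.T @ x - g)

res = np.hypot(np.linalg.norm(f - A @ x - B @ y), np.linalg.norm(g - B.T @ x))
res /= np.hypot(np.linalg.norm(f), np.linalg.norm(g))
```

With the optimal parameters the asymptotic contraction factor is (√µ_max - √µ_min)/(√µ_max + √µ_min) (about 0.32 for this data), so 200 steps drive the relative residual RES down to roundoff level.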

4 Relaxation methods with a singular preconditioner Q

We first present the Uzawa-SAOR method with singular preconditioners for solving the singular saddle point problem (1). Let Q be chosen as Q = B^T M^{-1} B, where M is an SPD matrix which approximates A. Then the Uzawa-SAOR method with the singular matrix Q is defined by

\[
\begin{pmatrix} x_{k+1} \\ y_{k+1} \end{pmatrix}
= H_3(\omega, s, \tau) \begin{pmatrix} x_k \\ y_k \end{pmatrix}
+ M_3(\omega, s, \tau) \begin{pmatrix} f \\ -g \end{pmatrix}, \quad k = 0, 1, 2, \ldots,
\tag{8}
\]

where, with Ω = diag(ω I_m, τ I_n),

\[
H_3(\omega, s, \tau) = I - M(\omega, s, \tau)^\dagger\, \Omega \mathcal{A}, \quad
M_3(\omega, s, \tau) = M(\omega, s, \tau)^\dagger\, \Omega, \quad
M(\omega, s, \tau) = \begin{pmatrix} M(\omega, s) & 0 \\ -\tau B^T & Q \end{pmatrix},
\]

and M(ω, s) and N(ω, s) are defined as in (7). Since Q Q^† B^T = B^T, a simple calculation yields

\[
M(\omega, s, \tau)^\dagger = \begin{pmatrix} M(\omega, s)^{-1} & 0 \\ \tau Q^\dagger B^T M(\omega, s)^{-1} & Q^\dagger \end{pmatrix}
\tag{9}
\]

and

\[
H_3(\omega, s, \tau) = \begin{pmatrix}
I_m - \omega M(\omega, s)^{-1} A & -\omega M(\omega, s)^{-1} B \\
-\omega\tau Q^\dagger B^T M(\omega, s)^{-1} A + \tau Q^\dagger B^T & I_n - \omega\tau Q^\dagger B^T M(\omega, s)^{-1} B
\end{pmatrix}.
\tag{10}
\]

From (8), (9) and (10), the Uzawa-SAOR method with the singular matrix Q can be rewritten as

Algorithm 3: Uzawa-SAOR method with singular Q
  Choose ω, s, τ, and initial vectors x_0, y_0
  C(ω, s) = (2-ω)D + (ω-s)(L+U)
  For k = 0, 1, ..., until convergence
    x_{k+1} = x_k + ω (D - sU)^{-1} C(ω, s) (D - sL)^{-1} (f - A x_k - B y_k)
    y_{k+1} = y_k + τ Q^† (B^T x_{k+1} - g)
  End For

Here, Q^† (the Moore-Penrose inverse of the matrix Q) is computed only once to reduce the computational cost, and is then stored for later use. Liang and Zhang [9] also obtained the following semi-convergence results for the Uzawa-SAOR method with the singular preconditioner Q from the convergence results of the Uzawa-SAOR method presented in [11].
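A small sketch of this setup (made-up A and B; M = diag(A) as in case V of Table 2): Q = B^T M^{-1} B is singular because B is rank-deficient, its Moore-Penrose inverse is computed once with pinv (here NumPy's, standing in for the MATLAB function used in the experiments), and the identity Q Q^† B^T = B^T used in deriving (9)-(10) can be verified directly, since range(Q) = range(B^T):

```python
import numpy as np

m, n = 6, 3
A = 4 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)   # SPD tridiagonal stand-in
B = np.zeros((m, n))
B[0, 0], B[1, 1] = 1.0, 2.0
B[:, 2] = B[:, 0] + B[:, 1]                            # rank-deficient B

M = np.diag(np.diag(A))                                # M = diag(A), case V
Q = B.T @ np.linalg.solve(M, B)                        # singular preconditioner
Q_pinv = np.linalg.pinv(Q)                             # computed once, then reused

rank_Q = np.linalg.matrix_rank(Q)                      # 2 < n, so Q is singular
identity_holds = np.allclose(Q @ Q_pinv @ B.T, B.T)    # Q Q^+ B^T = B^T
```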

Performance of relaxation methods for singular saddle point problems 1483 Theorem 4.1 Let Q be chosen as Q = B T M 1 B, where M is a SPD matrix which approximates A. Then the Uzawa-SAOR method for solving the singular saddle point problem (1) is semi-convergent if 0 < ω s < 2 and 0 < τ < 2 λ min (M(ω, s) + N(ω, s)). ω ρ(bq B T ) Corollary 4.2 Let Q be chosen as Q = B T M 1 B, where M is a SPD matrix which approximates A. If 0 < ω s < 2 and 0 < τ < 2 λ min(a) ρ(bq B T ), then the Uzawa-SAOR method for solving the singular saddle point problem (1) is semi-convergennt. We next provide the PU method with the singular preconditioner Q for solving the singular saddle point problem (1), which can be written as Algorithm 4: PU Method with singular Q Choose ω, τ and initial vectors x 0, y 0 For k = 0, 1,..., until convergence x k+1 = (1 ω) x k + ωa 1 (f By k ) y k+1 = y k + τq (B T x k+1 g) End For where ω > 0 and τ > 0 are relaxation parameters. From Theorems 2.3 and 3.3, we can easily obtain the following theorem about the optimal parameters and the corresponding optimal semi-convergence factor for the PU method with the singular matrix Q. Theorem 4.3 Let Q be chosen as Q = B T M 1 B, where M is a SPD matrix which approximates A, and let µ min and µ max be the smallest and largest nonzero eigenvalues of the matrix Q B T A 1 B, respectively. Then the optimal parameters ω o and τ o are given by ω o = 4 µ min µ max ( µ min + µ max ) 2 and τ o = 1 µmin µ max and the corresponding optimal semi-convergence factor is given by µmax µ min µmax + µ min.

5 Numerical results

In this section, we provide numerical experiments to compare the performance of the PU and Uzawa-SAOR methods using nonsingular preconditioners with those using singular preconditioners for solving the singular saddle point problem (1). In Tables 3 and 4, Iter denotes the number of iteration steps and CPU denotes the elapsed CPU time in seconds. In Table 4, CPU1 denotes the elapsed CPU time excluding the time for computing Q^†.

Example 5.1 ([17]) We consider the saddle point problem (1), in which

\[
A = \begin{pmatrix} I \otimes T + T \otimes I & 0 \\ 0 & I \otimes T + T \otimes I \end{pmatrix} \in R^{2p^2 \times 2p^2},
\]

\[
B = (\hat{B} \;\; \bar{B}) = (\hat{B} \;\; b_1 \;\; b_2) \in R^{2p^2 \times (p^2+2)}, \quad
\hat{B} = \begin{pmatrix} I \otimes F \\ F \otimes I \end{pmatrix} \in R^{2p^2 \times p^2},
\]

\[
b_1 = \hat{B} \begin{pmatrix} e_{p^2/2} \\ 0 \end{pmatrix}, \quad
b_2 = \hat{B} \begin{pmatrix} 0 \\ e_{p^2/2} \end{pmatrix}, \quad
e_{p^2/2} = (1, 1, \ldots, 1)^T \in R^{p^2/2},
\]

\[
T = \frac{1}{h^2}\, \mathrm{tridiag}(-1, 2, -1) \in R^{p \times p}, \quad
F = \frac{1}{h}\, \mathrm{tridiag}(-1, 1, 0) \in R^{p \times p},
\]

with ⊗ denoting the Kronecker product and h = 1/(p+1) the discretization mesh size. For this example, m = 2p^2 and n = p^2 + 2, so the total number of variables is 3p^2 + 2. We choose the right-hand side vector (f^T, g^T)^T such that the exact solution of the saddle point problem (1) is (x^T, y^T)^T = (1, 1, ..., 1)^T ∈ R^{m+n}.

Numerical results for this example are listed in Tables 3 and 4. For the nonsingular case of Q, the preconditioning matrices Q are chosen as in Table 1, where Q̂ denotes the block diagonal matrix consisting of the two submatrices B̂^T Â^{-1} B̂ and B̄^T B̄. For the singular case of Q, the preconditioning matrices Q are chosen as in Table 2. Notice that Q^† is computed only once using the MATLAB function pinv with a drop tolerance of 10^{-13}, and is then stored for later use.

In all experiments, the initial vector was set to the zero vector. Let ‖·‖ denote the L_2-norm. The iterations of the relaxation iterative methods are terminated when the current iterate satisfies RES < 10^{-6}, where

\[
\mathrm{RES} = \frac{\sqrt{\|f - A x_k - B y_k\|^2 + \|g - B^T x_k\|^2}}{\sqrt{\|f\|^2 + \|g\|^2}}.
\]

All numerical tests were carried out on a PC equipped with an Intel Core i5-4570 3.2GHz CPU and 8GB RAM using MATLAB R2013b. In Tables 3 and 4, we report the numerical results for two different pairs of m and n and six choices of the matrix Q. The relaxation iterative methods considered in this paper

depend on the parameters to be used. For the PU method, the parameters ω and τ are chosen as the optimal parameters stated in Theorems 3.3 and 4.3. For the Uzawa-SAOR method, no optimality result is available, so the parameters are chosen as the best ones found by trial.

Table 1. Choices of the nonsingular matrix Q with Q̂ = Diag(B̂^T Â^{-1} B̂, B̄^T B̄)

  Case    Q             Description
  I       Q̂             Â = diag(A)
  II      Q̂             Â = tridiag(A)
  III     tridiag(Q̂)    Â = tridiag(A)
  IV      tridiag(Q̂)    Â = A

Table 2. Choices of the singular matrix Q

  Case    Q             Description
  V       B^T M^{-1} B  M = diag(A)
  VI      B^T M^{-1} B  M = tridiag(A)

Table 3. Performance of the PU and Uzawa-SAOR methods with nonsingular Q

                      |            PU              |        Uzawa-SAOR
  m     n      Q      | ω       τ       Iter  CPU  | ω     s     τ     Iter  CPU
  1152  578    I      | 0.2489  0.1423  131  0.175 | 0.90  1.58  0.50  107   0.060
               II     | 0.3307  0.1985   90  0.232 | 0.90  1.55  1.00  105   0.182
               III    | 0.5622  2.9447   44  0.048 | 0.85  1.59  1.40   98   0.024
               IV     | 0.6199  3.3734   37  0.042 | 0.86  1.59  1.35   95   0.023
  2048  1026   I      | 0.1956  0.1084  174  0.415 | 0.93  1.57  0.48  150   0.147
               II     | 0.2635  0.1519  120  0.701 | 0.90  1.55  1.00  156   0.675
               III    | 0.5115  3.3270   52  0.099 | 0.85  1.60  1.42  132   0.048
               IV     | 0.5697  3.8505   43  0.082 | 0.86  1.59  1.40  124   0.048

The relaxation iterative methods with a nonsingular preconditioner Q perform better than those with a singular Q (see Tables 3 and 4). The reason is as follows. For nonsingular Q, Q^{-1}b is computed using the Cholesky factorization of Q without constructing Q^{-1} explicitly, so the computational cost is low. For singular Q, Q^†b is computed by a matrix-vector product after constructing Q^† explicitly, which is very time-consuming: Q^† is constructed using the singular value decomposition of Q, which requires a large amount of computation. If we exclude the time for constructing Q^†, then the relaxation iterative methods with singular Q are comparable to those with nonsingular Q (see CPU in Table 3 and CPU1 in Table 4).
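The test matrices of Example 5.1 can be assembled directly with Kronecker products; a hedged NumPy sketch (small p and dense storage for brevity; the exact block layout follows the definitions given above), checking the stated dimensions and the rank deficiency of B:

```python
import numpy as np

p = 4
h = 1.0 / (p + 1)
I = np.eye(p)
T = (2 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)) / h**2   # tridiag(-1,2,-1)/h^2
F = (np.eye(p) - np.eye(p, k=-1)) / h                           # tridiag(-1,1,0)/h

A11 = np.kron(I, T) + np.kron(T, I)
A = np.block([[A11, np.zeros_like(A11)],
              [np.zeros_like(A11), A11]])                       # 2p^2 x 2p^2
Bhat = np.vstack([np.kron(I, F), np.kron(F, I)])                # 2p^2 x p^2

half = p * p // 2
e = np.ones(half)
b1 = Bhat @ np.concatenate([e, np.zeros(p * p - half)])
b2 = Bhat @ np.concatenate([np.zeros(p * p - half), e])
B = np.hstack([Bhat, b1[:, None], b2[:, None]])                 # 2p^2 x (p^2 + 2)

m, n = 2 * p * p, p * p + 2
dims_ok = A.shape == (m, m) and B.shape == (m, n)
rank_deficient = np.linalg.matrix_rank(B) < n                   # b1, b2 lie in range(Bhat)
```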

Table 4. Performance of the PU and Uzawa-SAOR methods with singular Q

                      |               PU                  |          Uzawa-SAOR
  m     n      Q      | ω       τ       Iter  CPU   CPU1  | ω     s     τ     Iter  CPU    CPU1
  1152  578    V      | 0.2489  0.1423  131  0.214  0.129 | 0.90  1.58  0.50  107   0.104  0.019
               VI     | 0.3307  0.1985   90  0.175  0.090 | 0.90  1.55  1.00  105   0.105  0.020
  2048  1026   V      | 0.1956  0.1084  174  0.767  0.362 | 0.93  1.58  0.47  149   0.495  0.090
               VI     | 0.2635  0.1519  120  0.657  0.252 | 0.90  1.55  1.00  156   0.503  0.098

For both nonsingular and singular Q, the Uzawa-SAOR method with parameters chosen by trial performs much better than the PU method with optimal parameters. However, we do not have a formula for finding optimal or near-optimal parameters of the Uzawa-SAOR method; this should be done in future work.

6 Conclusions

We have provided performance comparison results for relaxation iterative methods with singular or nonsingular preconditioners for solving the singular saddle point problem (1). Numerical experiments showed that the relaxation iterative methods with a nonsingular preconditioning matrix Q perform better than those with a singular Q. The reason is that the methods with singular Q spend a lot of CPU time constructing Q^†. If we exclude the time for constructing Q^†, then the relaxation iterative methods with singular Q are comparable to those with nonsingular Q. For both singular and nonsingular Q, the Uzawa-SAOR method with parameters chosen by trial performs much better than the PU method with optimal parameters, so finding optimal or near-optimal parameters of the Uzawa-SAOR method is a challenging problem for future work. For the singular preconditioning matrix Q, we only considered the form Q = B^T M^{-1} B, where M is an SPD matrix which approximates A, which restricts the choices of Q.
Further research is needed on the semi-convergence analysis for other forms of singular matrix Q, so that many different kinds of Q can be tried in order to achieve the best possible performance of relaxation iterative methods.

Acknowledgements. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2005722).

References

[1] M. Arioli, I.S. Duff, P.P.M. de Rijk, On the augmented system approach to sparse least squares problems, Numer. Math., 55 (1989), 667-684. http://dx.doi.org/10.1007/bf01389335

[2] Z.Z. Bai, Z.-Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl., 428 (2008), 2900-2932. http://dx.doi.org/10.1016/j.laa.2008.01.018

[3] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.

[4] Z.H. Cao, On the convergence of general stationary linear iterative methods for singular linear systems, SIAM J. Matrix Anal. Appl., 29 (2008), 1382-1388. http://dx.doi.org/10.1137/060671243

[5] Z. Chao, G. Chen, Semi-convergence analysis of the Uzawa-SOR methods for singular saddle point problems, Appl. Math. Lett., 35 (2014), 52-57. http://dx.doi.org/10.1016/j.aml.2014.04.014

[6] H.C. Elman, D.J. Silvester, Fast nonsymmetric iteration and preconditioning for Navier-Stokes equations, SIAM J. Sci. Comput., 17 (1996), 33-46. http://dx.doi.org/10.1137/0917004

[7] J.I. Li, T.Z. Huang, The semi-convergence of generalized SSOR method for singular augmented systems, in: High Performance Computing and Applications, Lecture Notes in Computer Science, Vol. 5938, Springer, Berlin, Heidelberg, 2010, 230-235. http://dx.doi.org/10.1007/978-3-642-11842-5_31

[8] S.L. Wu, T.Z. Huang, X.L. Zhao, A modified SSOR iterative method for augmented systems, J. Comput. Appl. Math., 228 (2009), 424-433. http://dx.doi.org/10.1016/j.cam.2008.10.006

[9] Z.Z. Liang, G.F. Zhang, On semi-convergence of a class of Uzawa methods for singular saddle point problems, Appl. Math. Comput., 247 (2014), 397-409. http://dx.doi.org/10.1016/j.amc.2014.08.103

[10] S. Wright, Stability of augmented system factorization in interior point methods, SIAM J. Matrix Anal. Appl., 18 (1997), 191-222. http://dx.doi.org/10.1137/s0895479894271093

[11] J.H. Yun, Variants of the Uzawa method for saddle point problem, Comput. Math. Appl., 65 (2013), 1037-1046. http://dx.doi.org/10.1016/j.camwa.2013.01.037

[12] J.H. Yun, Semi-convergence of the parameterized inexact Uzawa method for singular saddle point problems, Bull. Korean Math. Soc., 52 (2015), 1669-1681. http://dx.doi.org/10.4134/bkms.2015.52.5.1669

[13] J.Y. Yuan, A.N. Iusem, Preconditioned conjugate gradient methods for generalized least squares problem, J. Comput. Appl. Math., 71 (1996), 287-297. http://dx.doi.org/10.1016/0377-0427(95)00239-1

[14] N.M. Zhang, T.T. Lu, Y. Wei, Semi-convergence analysis of Uzawa methods for singular saddle point problems, J. Comput. Appl. Math., 255 (2014), 334-345. http://dx.doi.org/10.1016/j.cam.2013.05.015

[15] N.M. Zhang, P. Shen, Constraint preconditioners for solving singular saddle point problems, J. Comput. Appl. Math., 238 (2013), 116-125. http://dx.doi.org/10.1016/j.cam.2012.08.025

[16] G.F. Zhang, S.S. Wang, A generalization of parameterized inexact Uzawa method for singular saddle point problems, Appl. Math. Comput., 219 (2013), 4225-4231. http://dx.doi.org/10.1016/j.amc.2012.10.116

[17] B. Zheng, Z.Z. Bai, X. Yang, On semi-convergence of parameterized Uzawa methods for singular saddle point problems, Linear Algebra Appl., 431 (2009), 808-817. http://dx.doi.org/10.1016/j.laa.2009.03.033

[18] L. Zhou, N.M. Zhang, Semi-convergence analysis of GMSSOR methods for singular saddle point problems, Comput. Math. Appl., 68 (2014), 596-605. http://dx.doi.org/10.1016/j.camwa.2014.07.003

Received: March 9, 2016; Published: April 21, 2016