
Electronic Transactions on Numerical Analysis. Volume 16, pp. 129-142, 2003. Copyright 2003, Kent State University. ISSN 1068-9613. etna.mcs.kent.edu

A BLOCK VERSION OF BICGSTAB FOR LINEAR SYSTEMS WITH MULTIPLE RIGHT-HAND SIDES

A. EL GUENNOUNI, K. JBILOU, AND H. SADOK

Received April 1, 2001. Accepted for publication October 8, 2002. Recommended by M. Gutknecht. Laboratoire d'Analyse Numérique et d'Optimisation, Université des Sciences et Technologies de Lille, France (elguenna@ano.univ-lille1.fr). Université du Littoral, zone universitaire de la Mi-Voix, bâtiment H. Poincaré, 50 rue F. Buisson, BP 699, 62228 Calais Cedex, France (jbilou@lmpa.univ-littoral.fr, sadok@calais.univ-littoral.fr).

Abstract. We present a new block method for solving large nonsymmetric linear systems of equations with multiple right-hand sides. We first give the matrix polynomial interpretation of the classical block biconjugate gradient (Bl-BCG) algorithm using formal matrix-valued orthogonal polynomials. This allows us to derive a block version of BiCGSTAB. Numerical examples and comparisons with other block methods are given to illustrate the effectiveness of the proposed method.

Key words. block Krylov subspace, block methods, Lanczos method, multiple right-hand sides, nonsymmetric linear systems.

AMS subject classifications. 65F10.

1. Introduction. Many applications, such as electromagnetic scattering problems and problems in structural mechanics, require the solution of several linear systems of equations with the same coefficient matrix and different right-hand sides. This problem can be written as

(1.1)  A X = B,

where A is an n x n nonsingular and nonsymmetric real matrix, and B and X are n x s rectangular matrices whose columns are b^(1), ..., b^(s) and x^(1), ..., x^(s), respectively; the number s of right-hand sides is of moderate size, with s << n.

When n is small, one can use direct methods to solve the given linear systems. We can compute the LU decomposition of A at a cost of O(n^3) operations and then solve the system for each right-hand side at a cost of O(n^2) operations. However, for large n, direct methods may become very expensive. Instead of solving each of the s linear systems by some iterative method, it is more efficient to use a block version and generate iterates for all the systems simultaneously. Starting from an initial guess X_0 in R^{n x s} with residual R_0 = B - A X_0, block Krylov subspace methods determine, at step k, an approximation X_k of the form X_k = X_0 + Z_k, where Z_k belongs to the block Krylov subspace

K_k(A, R_0) = span{R_0, A R_0, ..., A^{k-1} R_0}.

We use a minimization property or an orthogonality relation to determine the correction Z_k. For symmetric positive definite problems, the block conjugate gradient (Bl-CG) algorithm [15] and its variants [14] are useful for solving the linear system (1.1). When the matrix A is nonsymmetric, the block biconjugate gradient (Bl-BCG) algorithm [15] (see [20] for a stabilized version), the block generalized minimal residual (Bl-GMRES) algorithm introduced in [24] and studied in [21], and the block quasi-minimal residual (Bl-QMR) algorithm [7] (see also [13]) are the best known block Krylov subspace methods. The purpose of these block methods is to provide the solutions of a multiple right-hand side system faster than their single right-hand side counterparts.
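To make the cost argument above concrete, the following is a minimal sketch (an illustration, not part of the paper; matrix, sizes, and seed are placeholders) of the direct approach: A is factored once at O(n^3) cost, and the LU factors are then reused for every right-hand side at O(n^2) cost each.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n, s = 500, 8                                    # s right-hand sides, s << n
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant, hence nonsingular
B = rng.standard_normal((n, s))

lu, piv = lu_factor(A)            # one O(n^3) LU factorization of A
X = lu_solve((lu, piv), B)        # one O(n^2) triangular solve per column of B
print(np.linalg.norm(B - A @ X))  # residual check
```

For large sparse A it is precisely this factorization that becomes prohibitive, which motivates the block iterative methods discussed next.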

We note that block solvers are effective, compared to their single right-hand side versions, when the matrix is relatively dense. They are also attractive when a preconditioner is added to the block solver.

In [9] and [10] we proposed new methods called global GMRES and global Lanczos-based methods. These methods are obtained by projecting the initial block residual onto a matrix Krylov subspace. Another approach for solving the problem (1.1), developed in the last few years, consists in selecting one seed system, building the corresponding Krylov subspace, and projecting the residuals of the other systems onto this Krylov subspace. The process is repeated with another seed system until all the systems are solved. This procedure has been used in [23] and [4] for the conjugate gradient method and in [22], when the matrix is nonsymmetric, for the GMRES algorithm [18]. This approach is also effective when the right-hand sides of (1.1) are not available at the same time (see [16], [17] and [26]). Note also that block methods such as the block Lanczos method [8], [5] and the block Arnoldi method [19] are used for solving large eigenvalue problems.

The Bl-BCG algorithm uses a short three-term recurrence formula, but in many situations the algorithm exhibits a very irregular convergence behavior. This problem can be overcome by using a block smoothing technique as defined in [11] or a block QMR procedure [7]. A stabilized version of the Bl-BCG algorithm has been proposed in [20]. This method converges quite well but is in general more expensive than Bl-BCG. As for single right-hand side Lanczos-based methods, it cannot be excluded that breakdowns occur in the block Lanczos-based methods. We note that some block methods include a deflation procedure in order to detect and delete linearly or almost linearly dependent vectors in the block Krylov subspaces generated during the iterations. This technique has been used in [7] for the Bl-QMR algorithm. See also [14] for a detailed discussion of the loss of rank in the iteration matrices of the block conjugate gradient algorithm.

In the present paper, we define a new transpose-free block algorithm which is a generalization of the well-known single right-hand side BiCGSTAB algorithm [25]. This new method is named block BiCGSTAB (Bl-BiCGSTAB). Numerical examples show that the new method is more efficient than the Bl-BCG, Bl-QMR and Bl-GMRES algorithms.

The remainder of the paper is organized as follows. In Section 2, we give a matrix polynomial interpretation of Bl-BCG using right matrix orthogonal polynomials. In Section 3, we define the Bl-BiCGSTAB algorithm for solving (1.1). Finally, we report in Section 4 some numerical examples.

Throughout this paper, we use the following notations. For two n x s matrices X and Y, we consider the inner product <X, Y>_F = tr(X^T Y), where tr(Z) denotes the trace of the square matrix Z. The associated norm is the Frobenius norm, denoted by ||.||_F. We use the notation K_k(A, R_0) for the block Krylov subspace span{R_0, A R_0, ..., A^{k-1} R_0}. Finally, 0_{n x s} and I_s will denote the zero and the identity matrices.

2. Matrix polynomial interpretation of the block BCG algorithm.

2.1. The block BCG algorithm. The block biconjugate gradient (Bl-BCG) algorithm was first proposed by O'Leary [15] for solving the problem (1.1). This algorithm is a generalization of the well-known BCG algorithm [6]. Bl-BCG computes two sets of direction matrices {P_k} and {P̃_k} that span the block Krylov subspaces K_k(A, R_0) and K_k(A^T, R̃_0), where R̃_0 is an arbitrary n x s matrix.
The algorithm can be summarized as follows [15]:

ALGORITHM 1: Block BCG

X_0 is an initial guess; R_0 = B - A X_0.
R̃_0 is an arbitrary n x s matrix; P_0 = R_0, P̃_0 = R̃_0.
For k = 0, 1, ... compute
    α_k = (P̃_k^T A P_k)^{-1} (R̃_k^T R_k),
    X_{k+1} = X_k + P_k α_k,
    R_{k+1} = R_k - A P_k α_k,
    α̃_k = (P_k^T A^T P̃_k)^{-1} (R_k^T R̃_k),
    R̃_{k+1} = R̃_k - A^T P̃_k α̃_k,
    β_k = (R̃_k^T R_k)^{-1} (R̃_{k+1}^T R_{k+1}),
    P_{k+1} = R_{k+1} + P_k β_k,
    β̃_k = (R_k^T R̃_k)^{-1} (R_{k+1}^T R̃_{k+1}),
    P̃_{k+1} = R̃_{k+1} + P̃_k β̃_k,
end.

The algorithm breaks down if the matrices P̃_k^T A P_k or R̃_k^T R_k are singular. Note also that the s x s coefficient matrices computed in the algorithm are solutions of linear systems. The matrices of these systems could be very ill-conditioned, and this would affect the iterates computed by Bl-BCG. Bl-BCG can also exhibit very irregular behavior of the residual norms. To remedy this problem one can use a block smoothing technique [11] or a block quasi-minimization procedure (see [7] and [20]).

The matrix residuals and the matrix directions generated by ALGORITHM 1 satisfy the following properties.

PROPOSITION 1. [15] If no breakdown occurs, the matrices computed by the Bl-BCG algorithm satisfy the relations

    R̃_j^T R_k = 0  and  P̃_j^T A P_k = 0,  for j ≠ k;

in particular, the columns of R_k are orthogonal to the block Krylov subspace K_k(A^T, R̃_0), and the columns of R̃_k are orthogonal to K_k(A, R_0).

From now on, we assume that no breakdown occurs in the Bl-BCG algorithm. In the following subsection, we use matrix-valued polynomials to give a new expression of the iterates computed by Bl-BCG. This will allow us to define the block BiCGSTAB algorithm.
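For illustration, here is a minimal dense NumPy transcription of ALGORITHM 1 (a sketch, not the authors' Matlab code); breakdown detection and safeguards against ill-conditioned s x s systems are deliberately omitted.

```python
import numpy as np

def bl_bcg(A, B, X0, Rt0, maxit=200, tol=1e-10):
    """Minimal dense sketch of ALGORITHM 1 (Bl-BCG); no breakdown handling."""
    X = X0.copy()
    R = B - A @ X                 # block residual
    Rt = Rt0.copy()               # shadow block residual (arbitrary n-by-s)
    P, Pt = R.copy(), Rt.copy()
    for _ in range(maxit):
        RtR = Rt.T @ R
        alpha  = np.linalg.solve(Pt.T @ (A @ P), RtR)      # s-by-s system
        alphat = np.linalg.solve(P.T @ (A.T @ Pt), RtR.T)
        X  = X + P @ alpha
        R  = R - A @ P @ alpha
        Rt = Rt - A.T @ Pt @ alphat
        if np.linalg.norm(R, 'fro') <= tol:
            break
        beta  = np.linalg.solve(RtR, Rt.T @ R)             # uses old and new residuals
        betat = np.linalg.solve(RtR.T, R.T @ Rt)
        P  = R + P @ beta
        Pt = Rt + Pt @ betat
    return X
```

On small examples, the biorthogonality relations of Proposition 1 can be verified numerically, e.g. by checking that R̃_0^T R_k vanishes (to round-off) for k >= 1.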

2.2. Connection with matrix-valued polynomials. In what follows, a matrix-valued polynomial of degree k is a polynomial of the form

(2.1)  ψ_k(t) = Ω_0 + t Ω_1 + ... + t^k Ω_k,  where Ω_i ∈ R^{s x s} and t ∈ R.

We use the notation (used in [21]) for the product

(2.2)  ψ_k(A) ∘ R = Σ_{i=0}^{k} A^i R Ω_i,

where A is an n x n matrix and R is an n x s matrix. For an s x s matrix C, we define the matrix-valued polynomial ψ_k C by

(2.3)  (ψ_k C)(t) = Σ_{i=0}^{k} t^i (Ω_i C).

With these definitions, we have the following properties.

PROPOSITION 2. Let φ and ψ be two matrix-valued polynomials and let C and D be two matrices of dimensions s x s and n x s, respectively. Then from (2.2) and (2.3) it follows that

    (ψ C)(A) ∘ D = (ψ(A) ∘ D) C  and  φ(A) ∘ (ψ(A) ∘ D) = (ψ φ)(A) ∘ D.

Proof. Let ψ be the matrix polynomial of degree k defined by (2.1). Then, using (2.2) and (2.3),

    (ψ C)(A) ∘ D = Σ_{i=0}^{k} A^i D (Ω_i C) = (Σ_{i=0}^{k} A^i D Ω_i) C = (ψ(A) ∘ D) C.

The second relation is easy to verify in the same way. Note also that if ψ is a scalar polynomial (i.e., Ω_i = ω_i I_s), then ψ(A) ∘ D = ψ(A) D.

When solving a single right-hand side linear system A x = b, it is well known that the residual r_k and the direction p_k computed by the BCG algorithm can be expressed as r_k = φ_k(A) r_0 and p_k = π_k(A) r_0. These polynomials are related by some recurrence formulas [12]. Using matrix-valued polynomials, we give in the following proposition new expressions for the residuals and the matrix directions generated by Bl-BCG.

PROPOSITION 3. Let R_k and P_k be the sequences of matrices generated by the Bl-BCG algorithm. Then there exist two families of matrix-valued polynomials φ_k and π_k of degree at most k such that

(2.4)  R_k = φ_k(A) ∘ R_0,
(2.5)  P_k = π_k(A) ∘ R_0.

These matrix polynomials are also related by the recurrence formulas

(2.6)  φ_{k+1}(t) = φ_k(t) - t π_k(t) α_k,
(2.7)  π_{k+1}(t) = φ_{k+1}(t) + π_k(t) β_k,

with φ_0(t) = π_0(t) = I_s, where α_k and β_k are the s x s coefficient matrices of ALGORITHM 1.

Proof. The case k = 0 is obvious. Assume that the relations hold for k. The residual computed by the Bl-BCG algorithm is given by R_{k+1} = R_k - A P_k α_k. Hence, by the induction hypothesis and Proposition 2, R_{k+1} = φ_k(A) ∘ R_0 - A ((π_k(A) ∘ R_0) α_k) = (φ_k - t π_k α_k)(A) ∘ R_0, which proves (2.4) with φ_{k+1} given by (2.6). The matrix direction P_{k+1} = R_{k+1} + P_k β_k is treated in the same way, which proves (2.5) with π_{k+1} given by (2.7).
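The ∘-product is straightforward to evaluate by accumulating powers of A applied to R. The sketch below (circ is a hypothetical helper name, not notation from the paper) also checks the identity behind the recurrence (2.6): multiplying a matrix-valued polynomial by the scalar variable t corresponds to one extra application of A.

```python
import numpy as np

def circ(coeffs, A, R):
    """Evaluate psi(A) ∘ R = sum_i A^i R Omega_i per definition (2.2);
    coeffs[i] is the s-by-s coefficient Omega_i."""
    out = np.zeros_like(R)
    P = R.copy()                 # holds A^i R
    for Om in coeffs:
        out += P @ Om
        P = A @ P
    return out

rng = np.random.default_rng(1)
n, s, k = 6, 2, 3
A = rng.standard_normal((n, n))
R = rng.standard_normal((n, s))
Om = [rng.standard_normal((s, s)) for _ in range(k + 1)]

# (t*psi)(A) ∘ R == A (psi(A) ∘ R): shift coefficients by one power of t
lhs = circ([np.zeros((s, s))] + Om, A, R)
rhs = A @ circ(Om, A, R)
print(np.linalg.norm(lhs - rhs))   # ~ machine precision
```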

Let Φ be the functional defined on the set of matrix-valued polynomials with coefficients in R^{s x s} by

(2.8)  Φ(ψ) = R̃_0^T (ψ(A) ∘ R_0).

We also define the functional Φ^(1) by

(2.9)  Φ^(1)(ψ) = R̃_0^T A (ψ(A) ∘ R_0).

With these definitions, it is easy to prove the following relations.

PROPOSITION 4. The functional Φ defined above satisfies the following properties: for matrix-valued polynomials ψ and χ and for any C ∈ R^{s x s},

    Φ(ψ + χ) = Φ(ψ) + Φ(χ)  and  Φ(ψ C) = Φ(ψ) C.

The same relations are also satisfied by Φ^(1). This result shows that the functionals Φ and Φ^(1) are linear.

The matrix polynomials associated by (2.4) and (2.5) with the residual and the direction polynomials generated by BCG belong to families of formal orthogonal polynomials with respect to Φ and Φ^(1) (see [1]). These results are generalized in the next proposition.

PROPOSITION 5. Let φ_k and π_k be the sequences of matrix-valued polynomials defined in Proposition 3, and let Θ_{k-1} be an arbitrary matrix-valued polynomial of degree at most k-1. Then we have the following orthogonality properties:

(2.10)  Φ(φ_k Θ_{k-1}) = 0,
(2.11)  Φ^(1)(π_k Θ_{k-1}) = 0.

Proof. We have seen (Proposition 1) that at step k the columns of the residual R_k produced by Bl-BCG are orthogonal to the block Krylov subspace K_k(A^T, R̃_0). This can be expressed as

(2.12)  R̃_0^T A^j R_k = 0,  for j = 0, ..., k-1.

Using (2.4) of Proposition 3, it follows that

    Φ(φ_k t^j C) = R̃_0^T A^j (φ_k(A) ∘ R_0) C = (R̃_0^T A^j R_k) C = 0,  j = 0, ..., k-1,

for any s x s matrix C. By the linearity of Φ (expressed in Proposition 4) we conclude that if Θ_{k-1} is a matrix-valued polynomial of degree at most k-1, then Φ(φ_k Θ_{k-1}) = 0, which is (2.10).

To prove (2.11), we use the relation R_{k+1} = R_k - A P_k α_k to get A P_k α_k = R_k - R_{k+1}. Using (2.12) at steps k and k+1, we obtain R̃_0^T A^j (A P_k) α_k = 0 for j = 0, ..., k-1, and, since α_k is nonsingular when no breakdown occurs, Φ^(1)(π_k t^j) = R̃_0^T A^j (A P_k) = 0 for j = 0, ..., k-1. The conclusion again follows by linearity.

The results of Proposition 5 show that φ_k and π_k are the matrix-valued polynomials of degree at most k belonging to the families of formal matrix orthogonal polynomials with respect to Φ and Φ^(1), respectively, normalized by the conditions φ_k(0) = π_k(0) = I_s. Note that if χ is a scalar polynomial of degree at most k-1, we also have Φ(χ φ_k) = 0 and Φ^(1)(χ π_k) = 0, since scalar polynomials commute with φ_k and π_k. In general, this is not true in the block case. Using these matrix-valued polynomials, we will see in the next section how to define the block BiCGSTAB algorithm.

" " H " H " H H Q " Q etnamcs.kent.edu A. EL Guennouni, K.Jbilou, H. Sadok 135 3. The block BiCGSTAB algorithm. The single righth side BiCGSTAB algorithm [5] is a transposefree a smoother variant of the BCG algorithm. The residuals produced by BiCGSTAB are defined by where is the scalar residual polynomial associated with the BCG algorithm is another polynomial of degree updated from step to step by multiplication with a new linear factor with the goal of stabilizing the convergence behavior of the BCG algorithm: The selected parameter is determined by a local residual minimization condition. The search direction H is defined by L P where is the search direction polynomial associated with the BCG algorithm. In detail the BiCGSTAB algorithm for solving a single righth side linear system is defined as follows [5]: ALGORITHM : BiCGSTAB an initial guess H is an arbitrary P vector satisfying for A L 3 L $A $ $ $ ( $ L end. J H We will see later that the coefficient used in ALGORITHM can be computed by a simpler formula. Techniques for curing breakdowns in BiCGSTAB are given in [3]. We define now a new block method for solving (1.1), which is a transposefree a smoother converging variant of the BlBCG algorithm. The new method is named block BiCGSTAB (BlBiCGSTAB) since it is a generalization of the single righth side BiCGSTAB algorithm. We have seen that the residual Q by the BlBCG algorithm are such that the matrix direction OL computed

We have seen that the residuals R_k^{BCG} and the matrix directions P_k^{BCG} computed by the Bl-BCG algorithm are such that R_k^{BCG} = φ_k(A) ∘ R_0 and P_k^{BCG} = π_k(A) ∘ R_0, where φ_k and π_k are the matrix-valued polynomials satisfying the relations (2.6) and (2.7). The Bl-BiCGSTAB algorithm produces iterates whose residual matrices are of the form

(3.1)  R_k = ψ_k(A) (φ_k(A) ∘ R_0),

where ψ_k is still a scalar polynomial, defined recursively at each step to stabilize the convergence behavior of the original Bl-BCG algorithm. Specifically, ψ_k is defined by the recurrence formula

(3.2)  ψ_{k+1}(t) = (1 - ω_k t) ψ_k(t).

As will be seen later, the scalar ω_k is selected by imposing a minimization of the Frobenius norm of the residual. It is also possible to choose a different ω_k for each right-hand side; in all our numerical tests, we obtained similar results for these two cases.

We want now to define a recurrence formula for the new residuals defined by (3.1). Set P_k = ψ_k(A) (π_k(A) ∘ R_0) and S_k = ψ_k(A) (φ_{k+1}(A) ∘ R_0). From (2.6), we immediately obtain

(3.3)  S_k = R_k - A P_k α_k.

Using (2.7) and (3.2) we get

(3.4)  ψ_{k+1}(A) (π_{k+1}(A) ∘ R_0) = R_{k+1} + (I - ω_k A) P_k β_k.

We also have

(3.5)  R_{k+1} = ψ_{k+1}(A) (φ_{k+1}(A) ∘ R_0) = (I - ω_k A) S_k.

We now define the new matrix direction P_{k+1} by

(3.6)  P_{k+1} = R_{k+1} + (P_k - ω_k A P_k) β_k,

and we set

(3.7)  T_k = A S_k.

Then, using these formulas, the iterates X_{k+1} and the residuals R_{k+1} are computed by the following recurrence formulas:

(3.8)  X_{k+1} = X_k + P_k α_k + ω_k S_k,
(3.9)  R_{k+1} = S_k - ω_k T_k.

The scalar parameter ω_k is chosen to minimize the Frobenius norm of the residual R_{k+1} = S_k - ω_k T_k. This implies that

    ω_k = <T_k, S_k>_F / <T_k, T_k>_F = tr(T_k^T S_k) / tr(T_k^T T_k).

Finally, we have to compute the s x s matrix coefficients α_k and β_k needed in the recurrence formulas. This is done by using the fact that the polynomials φ_k and π_k belong to the families of formal matrix orthogonal polynomials with respect to the functionals Φ and Φ^(1), respectively. Hence, using the relations (2.6) and (2.7), Proposition 5 and the fact that ψ_k is a scalar polynomial of degree k, we have

(3.10)  Φ(ψ_k φ_{k+1}) = 0,
(3.11)  Φ^(1)(ψ_k π_{k+1}) = 0.

Therefore, by the definitions of the functionals Φ and Φ^(1), and since Φ(ψ_k φ_k) = R̃_0^T R_k, Φ^(1)(ψ_k π_k) = R̃_0^T A P_k, and R̃_0^T S_k = Φ(ψ_k φ_{k+1}) = 0, the relations (3.10) and (3.11) become

(3.12)  (R̃_0^T A P_k) α_k = R̃_0^T R_k,
(3.13)  (R̃_0^T A P_k) β_k = (1/ω_k) R̃_0^T R_{k+1}.

So, the s x s matrices α_k and β_k can be computed by solving two s x s linear systems with the same coefficient matrix R̃_0^T A P_k. They will be solved by computing the LU factorization of this matrix. Putting all these relations together, the Bl-BiCGSTAB algorithm can be summarized as follows.

ALGORITHM 3: Bl-BiCGSTAB

X_0 is an initial guess; R_0 = B - A X_0;
R̃_0 is an arbitrary n x s matrix; P_0 = R_0.
For k = 0, 1, ...
    V_k = A P_k,
    solve (R̃_0^T V_k) α_k = R̃_0^T R_k,
    S_k = R_k - V_k α_k,
    T_k = A S_k,
    ω_k = tr(T_k^T S_k) / tr(T_k^T T_k),
    X_{k+1} = X_k + P_k α_k + ω_k S_k,
    R_{k+1} = S_k - ω_k T_k,
    solve (R̃_0^T V_k) β_k = (1/ω_k) R̃_0^T R_{k+1},
    P_{k+1} = R_{k+1} + (P_k - ω_k V_k) β_k,
end.

Note that when a single linear system is solved (s = 1), ALGORITHM 3 reduces to BiCGSTAB. However, the coefficient β_k computed by ALGORITHM 3 with s = 1 is given by β_k = (r̃_0^T r_{k+1}) / (ω_k r̃_0^T A p_k). This expression for β_k is simpler than the one given in ALGORITHM 2 but requires an extra inner product.
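A minimal dense NumPy sketch of ALGORITHM 3 follows (again a transcription for illustration, not the authors' code). In practice the LU factorization of the s x s matrix R̃_0^T V_k would be computed once per iteration and reused for both α_k and β_k.

```python
import numpy as np

def bl_bicgstab(A, B, X0, Rt0, tol=1e-10, maxit=500):
    """Minimal dense sketch of ALGORITHM 3 (Bl-BiCGSTAB).
    Breakdown and restart safeguards are omitted."""
    X = X0.copy()
    R = B - A @ X
    P = R.copy()
    normB = np.linalg.norm(B, 'fro')
    for _ in range(maxit):
        V = A @ P
        K = Rt0.T @ V                                   # s-by-s; factor once, reuse twice
        alpha = np.linalg.solve(K, Rt0.T @ R)           # (3.12)
        S = R - V @ alpha
        T = A @ S
        omega = np.trace(T.T @ S) / np.trace(T.T @ T)   # Frobenius-norm minimization
        X = X + P @ alpha + omega * S
        R = S - omega * T
        if np.linalg.norm(R, 'fro') <= tol * normB:
            break
        beta = np.linalg.solve(K, Rt0.T @ R) / omega    # (3.13)
        P = R + (P - omega * V) @ beta
    return X
```

Called with X0 = rand(n, s) and R̃_0 = R_0, this mirrors the setting of the experiments in Section 4; with s = 1 it reduces, up to round-off, to ALGORITHM 2 with the simpler β_k formula noted above.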

The Bl-BiCGSTAB algorithm will also suffer from a breakdown when the matrix R̃_0^T A P_k is singular. In situations where this matrix is nonsingular but very ill-conditioned, the computation of the iterates will also be affected. To overcome these problems, one can restart with a different R̃_0. Look-ahead strategies could also be defined, but this will not be treated in this paper.

Due to the local Frobenius norm minimization steps, Bl-BiCGSTAB has a smoother convergence behavior than Bl-BCG. However, the norms of the residuals produced by Bl-BiCGSTAB may oscillate for some problems. In this case, we can use the block smoothing procedure defined in [11] to get nonincreasing Frobenius norms of the residuals.

For solving the linear system (1.1), Bl-BiCGSTAB requires per iteration the evaluation of 2s matrix-vector products with A. This is to be compared to s matrix-vector products with A and s matrix-vector products with A^T needed at each iteration for Bl-BCG; in both algorithms the remaining multiplication counts per iteration are of the same order. Storage requirements (excluding those of A, X and B) and the major computational costs per iteration for the Bl-BCG and the Bl-BiCGSTAB algorithms are summarized in Table 1.

Table 1
Memory requirements and computational costs per iteration for Bl-BCG and Bl-BiCGSTAB.

                      Bl-BCG        Bl-BiCGSTAB
Matvecs with A        s             2s
Matvecs with A^T      s             0
Multiplications       O(n s^2)      O(n s^2)
Memory locations      O(n s)        O(n s)

The costs of Bl-BCG and Bl-BiCGSTAB rapidly increase with the number s of right-hand sides. However, these block solvers converge in fewer iterations than their single right-hand side counterparts, since the dimensions of the search spaces increase by s and 2s, respectively, in each step.

4. Numerical examples. In this section we give some experimental results. Our examples have been coded in Matlab and have been executed on a SUN SPARC workstation. We compare the performance of Bl-BiCGSTAB, Bl-QMR, Bl-GMRES and Bl-BCG for solving the multiple linear system (1.1). For the experiments with Bl-QMR, we used the algorithm that generates a banded matrix, with a deflation procedure to eliminate nearly linearly dependent columns of the search space [7]. In all but the last experiment, the initial guess was X_0 = rand(n, s), where the function rand creates a random matrix with coefficients uniformly distributed in [0, 1].

Example 1. In this example, we use two sets of experiments. The tests were stopped as soon as max_i ||R_k(:, i)|| / ||R_0(:, i)|| dropped below a fixed tolerance.

Experiment 1: The first test matrix represents the five-point discretization of a convection-diffusion operator on the unit square [0,1] x [0,1] with Dirichlet boundary conditions.

The discretization was performed on a uniform grid. The resulting matrix is nonsymmetric and has a positive definite symmetric part. We set R̃_0 = R_0 in the Bl-BiCGSTAB and the Bl-BCG algorithms. Figure 1 reports on the convergence history for Bl-BiCGSTAB, Bl-QMR, Bl-BCG and Bl-GMRES(10). In this figure, we plotted the maximum of the residual norms, max_i ||R_k(:, i)||, on a logarithmic scale, versus the flops (number of arithmetic operations).

Figure 1. Convergence history (max_i ||R_k(:, i)|| versus flops) for Bl-BiCGSTAB, Bl-QMR, Bl-GMRES(10) and Bl-BCG.

In Figure 2, we plotted for the same example only the results obtained with the Bl-BiCGSTAB and the Bl-QMR algorithms. As observed from Figures 1 and 2, Bl-BiCGSTAB returns the best results.

Figure 2. Convergence history (max_i ||R_k(:, i)|| versus flops) for Bl-BiCGSTAB and Bl-QMR on the problem of Experiment 1.
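Readers wishing to reproduce a qualitatively similar test problem can assemble a five-point convection-diffusion matrix as sketched below. This uses a model operator -Δu + c (u_x + u_y) with an assumed coefficient c and grid size; it is not necessarily the operator used by the authors in Experiment 1, but, like their matrix, it is nonsymmetric with a symmetric positive definite symmetric part, since the centered convection term is skew-symmetric.

```python
import numpy as np
import scipy.sparse as sp

def convection_diffusion(m, c=20.0):
    """Five-point discretization of -Δu + c (u_x + u_y) on (0,1)^2,
    homogeneous Dirichlet BCs, m interior points per direction (n = m*m)."""
    h = 1.0 / (m + 1)
    D2 = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m)) / h**2   # 1D second difference
    D1 = sp.diags([-1, 0, 1], [-1, 0, 1], shape=(m, m)) / (2 * h) # centered first difference
    I = sp.identity(m)
    L = sp.kron(I, D2) + sp.kron(D2, I)   # 2D Laplacian (SPD)
    C = sp.kron(I, D1) + sp.kron(D1, I)   # u_x + u_y (skew-symmetric)
    return (L + c * C).tocsr()            # nonsymmetric; symmetric part is L

A = convection_diffusion(30)              # n = 900
s = 4
rng = np.random.default_rng(0)
B = rng.random((A.shape[0], s))           # random right-hand sides, as in Section 4
```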

Experiment 2: For the second experiment, we used the test matrix pde900 from the Harwell-Boeing collection (the nonsymmetric matrix has size n = 900). Figure 3 reports on the results obtained for Bl-BiCGSTAB, Bl-QMR, Bl-BCG and Bl-GMRES(10). As can be seen from Figure 3, Bl-BiCGSTAB requires fewer flops for convergence when compared to the other three block methods. The Bl-BCG algorithm failed to converge.

Figure 3. Convergence history (max_i ||R_k(:, i)|| versus flops) for Bl-BiCGSTAB, Bl-QMR, Bl-GMRES(10) and Bl-BCG on the matrix pde900.

Example 2. For the second set of experiments, we used matrices from the Harwell-Boeing collection, using ILUT as a preconditioner with a small drop tolerance.

We compared the performance (in terms of flops) of the Bl-BiCGSTAB algorithm, the Bl-GMRES(10) algorithm and the BiCGSTAB algorithm applied to each single right-hand side. For all the tests, the matrix R̃_0 was an n x s random matrix. Two different numbers of right-hand sides were used. The iterations were stopped when max_i ||R_k(:, i)|| / ||R_0(:, i)|| dropped below a fixed tolerance. In Table 2 we list the effectiveness of Bl-BiCGSTAB measured by the ratios

    t1 = flops(Bl-BiCGSTAB) / flops(BiCGSTAB),
    t2 = flops(Bl-BiCGSTAB) / flops(Bl-GMRES(10)),

where flops(BiCGSTAB) corresponds to the flops required for solving the s linear systems by applying the BiCGSTAB algorithm s times. Table 2 (effectiveness of Bl-BiCGSTAB as compared to Bl-GMRES(10) and BiCGSTAB using the ILUT preconditioner) reports these ratios, together with the number nnz(A) of nonzero entries of each matrix, for the Harwell-Boeing matrices UTM1700a (n = 1700), SAYLR4 (n = 3564), SHERMAN5 (n = 3312) and SHERMAN3 (n = 5005).

As shown in Table 2, Bl-BiCGSTAB is less expensive than Bl-GMRES and than BiCGSTAB applied to each single right-hand side linear system, except for the last example with the larger number of right-hand sides, for which t1 exceeds one. The relative density of the matrices and the use of preconditioners make the multiple right-hand side solvers less expensive, and hence more effective.

5. Conclusion. We have proposed in this paper a new block BiCGSTAB method for nonsymmetric linear systems with multiple right-hand sides. To define this new method, we used formal matrix-valued orthogonal polynomials.

Acknowledgements. We would like to thank the referees for their helpful remarks and valuable suggestions.

REFERENCES

[1] C. BREZINSKI AND H. SADOK, Lanczos-type methods for solving systems of linear equations, Appl. Numer. Math., 11 (1993), pp. 443-473.
[2] C. BREZINSKI, M. REDIVO-ZAGLIA AND H. SADOK, A breakdown-free Lanczos-type algorithm for solving linear systems, Numer. Math., 63 (1992), pp. 29-38.
[3] C. BREZINSKI AND M. REDIVO-ZAGLIA, Look-ahead in Bi-CGSTAB and other methods for linear systems, BIT, 35 (1995), pp. 169-201.

[4] T. CHAN AND W. WAN, Analysis of projection methods for solving linear systems with multiple right-hand sides, SIAM J. Sci. Comput., 18 (1997), pp. 1698-1721.
[5] G. H. GOLUB AND R. R. UNDERWOOD, A block Lanczos algorithm for computing the q algebraically largest eigenvalues and a corresponding eigenspace of large, sparse, real symmetric matrices, in Proc. of the 1974 IEEE Conference on Decision and Control, Phoenix, 1974.
[6] R. FLETCHER, Conjugate gradient methods for indefinite systems, in G. A. Watson, ed., Proceedings of the Dundee Biennial Conference on Numerical Analysis 1974, Springer-Verlag, New York, 1975, pp. 73-89.
[7] R. FREUND AND M. MALHOTRA, A block-QMR algorithm for non-Hermitian linear systems with multiple right-hand sides, Linear Algebra Appl., 254 (1997), pp. 119-157.
[8] G. H. GOLUB AND R. R. UNDERWOOD, The block Lanczos method for computing eigenvalues, in Mathematical Software III, J. R. Rice, ed., Academic Press, New York, 1977, pp. 364-377.
[9] K. JBILOU, A. MESSAOUDI AND H. SADOK, Global FOM and GMRES algorithms for matrix equations, Appl. Numer. Math., 31 (1999), pp. 49-63.
[10] K. JBILOU AND H. SADOK, Global Lanczos-based methods with applications, Technical Report LMA, Université du Littoral, Calais, France, 1997.
[11] K. JBILOU, Smoothing iterative block methods for linear systems with multiple right-hand sides, J. Comput. Appl. Math., 107 (1999), pp. 97-109.
[12] C. LANCZOS, Solution of systems of linear equations by minimized iterations, J. Research Nat. Bur. Standards, 49 (1952), pp. 33-53.
[13] M. MALHOTRA, R. FREUND AND P. M. PINSKY, Iterative solution of multiple radiation and scattering problems in structural acoustics using a block quasi-minimal residual algorithm, Comput. Methods Appl. Mech. Engrg., 146 (1997), pp. 173-196.
[14] A. NIKISHIN AND A. YEREMIN, Variable block CG algorithms for solving large sparse symmetric positive definite linear systems on parallel computers, I: General iterative scheme, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 1135-1153.
[15] D. O'LEARY, The block conjugate gradient algorithm and related methods, Linear Algebra Appl., 29 (1980), pp. 293-322.
[16] B. PARLETT, A new look at the Lanczos algorithm for solving symmetric systems of linear equations, Linear Algebra Appl., 29 (1980), pp. 323-346.
[17] Y. SAAD, On the Lanczos method for solving symmetric linear systems with several right-hand sides, Math. Comp., 48 (1987), pp. 651-662.
[18] Y. SAAD AND M. H. SCHULTZ, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856-869.
[19] M. SADKANE, Block Arnoldi and Davidson methods for unsymmetric large eigenvalue problems, Numer. Math., 64 (1993), pp. 687-706.
[20] V. SIMONCINI, A stabilized QMR version of block BiCG, SIAM J. Matrix Anal. Appl., 18 (1997), pp. 419-434.
[21] V. SIMONCINI AND E. GALLOPOULOS, Convergence properties of block GMRES and matrix polynomials, Linear Algebra Appl., 247 (1996), pp. 97-119.
[22] V. SIMONCINI AND E. GALLOPOULOS, An iterative method for nonsymmetric systems with multiple right-hand sides, SIAM J. Sci. Comput., 16 (1995), pp. 917-933.
[23] C. SMITH, A. PETERSON AND R. MITTRA, A conjugate gradient algorithm for the treatment of multiple incident electromagnetic fields, IEEE Trans. Antennas Propagation, 37 (1989), pp. 1490-1493.
[24] B. VITAL, Etude de quelques méthodes de résolution de problèmes linéaires de grande taille sur multiprocesseur, Ph.D. thesis, Université de Rennes, Rennes, France, 1990.
[25] H. VAN DER VORST, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 13 (1992), pp. 631-644.
[26] H. VAN DER VORST, An iterative solution method for solving f(A)x = b, using Krylov subspace information obtained for the symmetric positive definite matrix A, J. Comput. Appl. Math., 18 (1987), pp. 249-263.