PROJECTED GMRES AND ITS VARIANTS

Reinaldo Astudillo (rastudillo@kuaimare.ciens.ucv.ve)
Brígida Molina (bmolina@kuaimare.ciens.ucv.ve)
Centro de Cálculo Científico y Tecnológico (CCCT), Facultad de Ciencias, Universidad Central de Venezuela (UCV), Ciudad Universitaria, Av. Los Estadios, Los Chaguaramos, Caracas, Venezuela.

Abstract: In this work we propose a new Krylov iterative method for solving systems of linear equations. The method is a variant of the well-known GMRES, based on a modification of the constraints imposed on the residual vector: the residual is projected onto another subspace and the constraints are imposed on this projection. For this reason we call the method Projected GMRES (PRGMRES). Additionally, we develop two versions of PRGMRES: PRGMRES with biorthogonalization (BPRGMRES) and Inexact PRGMRES (IPRGMRES). Experimental results are presented to show the good performance of the new methods compared to FOM(m) and GMRES(m).

Keywords: restarted GMRES, Krylov subspace methods, Petrov-Galerkin conditions, unsymmetric linear systems.

1. INTRODUCTION

In a variety of engineering and scientific applications we need to solve systems of differential equations, which are solved numerically after discretization by finite differences or finite element methods. The discretization process, in general, leads to a linear system of the form

    Ax = b                                                                    (1)

where A ∈ R^{n×n} is a sparse, unsymmetric, nonsingular matrix, b ∈ R^n is the right-hand side, and x ∈ R^n is the unknown vector.

Due to computational and memory costs, computing x by factorization methods such as LU or QR can be very expensive for large values of n; moreover, these methods can be numerically unstable (see [1] and [2]). In this paper we first present a brief overview of general projection methods, including the Krylov methods. In Section 3 we discuss the modification of the constraints of general projection methods that produces the projected Krylov methods. Section 4 presents preliminary numerical experiments and, finally, Section 5 gives concluding remarks.

2. PROJECTION METHODS

Definition: A projection method for solving (1) onto the search subspace K and orthogonal to the subspace of constraints L is a process that finds an approximate solution x̃ by imposing the conditions that x̃ belongs to K and that the new residual vector is orthogonal to L (Petrov-Galerkin conditions). This can be written as

    x̃ ∈ K,    b − Ax̃ ⊥ L.                                                    (2)

Let V = [v_1, ..., v_m] be an n × m matrix whose columns form a basis of K and, similarly, W = [w_1, ..., w_m] an n × m matrix whose columns form a basis of L. A prototype projection method can then be described as follows (see [3] and [4]):

Algorithm 1: Prototype Projection Method
 1: while no convergence do
 2:   choose V = [v_1, ..., v_m] and W = [w_1, ..., w_m] for K and L
 3:   r ← b − Ax
 4:   y ← (W^T A V)^{-1} W^T r
 5:   x ← x + V y
 6: end while

2.1 Krylov Methods

The Krylov methods are projection methods in which the search subspace K is the Krylov subspace

    K_m ≡ K_m(A, r_0) = span{r_0, A r_0, A^2 r_0, ..., A^{m−1} r_0}.

GMRES [5] is a Krylov method that computes an approximate solution x_m at step m by an oblique projection onto the Krylov subspace K_m(A, r_0) of dimension m, that is,

    x_m ∈ K_m,    b − A x_m ⊥ A K_m.                                          (3)

Through the Arnoldi process [6] one obtains a matrix V_m = [v_1, ..., v_m] whose columns are an orthonormal basis of K_m(A, r_0) and an upper Hessenberg matrix H_m.
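As a concrete illustration of Algorithm 1, the following Python/NumPy fragment performs one projection step. It is a minimal sketch only: the test matrix, the choice V = W = I, and the random data are made up for the example and are not taken from the paper (the paper's experiments used MATLAB).

```python
import numpy as np

def projection_step(A, b, x, V, W):
    """One step of the prototype projection method (Algorithm 1):
    update x along range(V) so that the new residual is orthogonal to range(W)."""
    r = b - A @ x                                   # line 3: current residual
    y = np.linalg.solve(W.T @ A @ V, W.T @ r)       # line 4: small projected system
    return x + V @ y                                # line 5: corrected iterate

# Tiny illustration: with K = L = R^n (V = W = I) one step solves Ax = b exactly.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)     # made-up well-conditioned matrix
b = rng.standard_normal(n)
V = W = np.eye(n)
x = projection_step(A, b, np.zeros(n), V, W)
assert np.allclose(A @ x, b)
```

In practice V and W have far fewer columns than n, so the projected system in line 4 is a small m × m solve.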

These matrices satisfy the well-known relations

    A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T = V_{m+1} H̄_m,
    V_m^T A V_m = H_m.

Then equation (3) can be written as

    x_m = x_0 + V_m y_m,   where   y_m = argmin_{y ∈ R^m} ‖β e_1^{(m+1)} − H̄_m y‖_2          (4)

with β = ‖r_0‖_2. FOM [7], on the other hand, is an orthogonal projection method in which equation (4) is replaced by H_m y_m = β e_1^{(m)}, while the rest of the algorithm remains unchanged.

3. A NEW PROTOTYPE PROJECTION METHOD

If a subspace Z of dimension j contains the residual vector b − Ax̃, the expression (2) remains unchanged if we write

    x_k ∈ K,    P_Z (b − A x_k) ⊥ L,                                                        (5)

where P_Z is the orthogonal projector onto Z. We propose to solve

    x_k ∈ K,    P_Z (b − A x_k) ⊥ L,                                                        (6)

where, in contrast with (5), the projector P_Z in (6) maps onto a subspace Z that does not necessarily contain the residual vector b − Ax̃. Thus (6) is a generalization of (5).

3.1 Projected GMRES (PRGMRES)

From (6), we choose K = K_m(A, r_k), L = A K_m(A, r_k) (as in GMRES) and Z = K_{m+1}(A, r_{k−1}) (the previous Krylov subspace), which can be written as

    x_k = x_0 + V_m y_m,    (A V_m)^T Z_{m+1} Z_{m+1}^T r_k = 0,                            (7)

where V_m is the matrix whose columns form the orthonormal Krylov basis, Z_{m+1} holds the previous Krylov basis, and x_0 is an initial guess of the solution. Finally, after some algebraic manipulation, we rewrite (7) as

    x_k = x_0 + V_m y_m,    y_m = argmin_{y ∈ R^m} ‖Z_{m+1}^T (r_0 − V_{m+1} H̄_m y)‖_2,     (8)

and with the conditions (8) the PRGMRES algorithm can be described as follows:
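To make the GMRES construction above concrete, here is a minimal Python/NumPy sketch of the Arnoldi process and one GMRES(m) cycle solving the least-squares problem (4). The dense test matrix and tolerances are assumptions for illustration only; the paper's experiments used sparse matrices in MATLAB.

```python
import numpy as np

def arnoldi(A, r0, m):
    """Arnoldi process: returns V_{m+1} with orthonormal columns spanning
    K_{m+1}(A, r0) and the (m+1) x m Hessenberg matrix H satisfying
    A V_m = V_{m+1} H (no breakdown handling in this sketch)."""
    n = len(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)         # assumed nonzero here
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def gmres_cycle(A, b, x0, m):
    """One GMRES(m) cycle: x_m = x0 + V_m y_m with
    y_m = argmin_y || beta e_1 - H y ||_2, i.e. equation (4)."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V, H = arnoldi(A, r0, m)
    rhs = np.zeros(m + 1)
    rhs[0] = beta                               # beta * e_1
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return x0 + V[:, :m] @ y

# Made-up well-conditioned test problem.
rng = np.random.default_rng(1)
n, m = 20, 6
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x1 = gmres_cycle(A, b, np.zeros(n), m)
assert np.linalg.norm(b - A @ x1) < 1e-3 * np.linalg.norm(b)
```

Note that `np.linalg.lstsq` solves the small (m+1) × m least-squares problem directly; production GMRES codes instead update a QR factorization of the Hessenberg matrix with Givens rotations as the basis grows.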

Algorithm 2: PRGMRES(m) with modified Arnoldi
 1: choose Z_{m+1}, an orthogonal basis
 2: r_0 ← b − A x_0, β ← ‖r_0‖_2 and v_1 ← r_0 / β
 3: for j ← 1, ..., m do
 4:   w_j ← A v_j
 5:   for i ← 1, ..., j do
 6:     h_{ij} ← ⟨w_j, v_i⟩
 7:     w_j ← w_j − h_{ij} v_i
 8:   end for
 9:   h_{j+1,j} ← ‖w_j‖_2
10:   if h_{j+1,j} = 0 then
11:     stop
12:   end if
13:   v_{j+1} ← w_j / h_{j+1,j}
14: end for
15: y_m ← argmin_y ‖Z_{m+1}^T (r_0 − V_{m+1} H̄_m y)‖_2
16: x_m ← x_0 + V_m y_m
17: Z_{m+1} ← V_{m+1}
18: x_0 ← x_m; go to 2

3.2 PRGMRES(m) with biorthogonalization (BPRGMRES)

In Algorithm 2 we have to solve

    y_m = argmin_{y ∈ R^m} ‖Z_{m+1}^T (r_0 − V_{m+1} H̄_m y)‖_2.

One option is to apply a biorthogonalization process between the bases Z_{m+1} and V_{m+1} such that

    ⟨v_i, z_j⟩ ≠ 0 if i = j,    ⟨v_i, z_j⟩ = 0 if i ≠ j.

Then the expression (8) can be written as

    y_m = argmin_y ‖Z_{m+1}^T (β v_1 − V_{m+1} H̄_m y)‖_2
        = argmin_y ‖β Z_{m+1}^T v_1 − Z_{m+1}^T V_{m+1} H̄_m y‖_2
        = argmin_y ‖β d_{11} e_1 − D_{m+1} H̄_m y‖_2,

where D_{m+1} is a diagonal matrix.

3.3 Inexact PRGMRES(m) (IPRGMRES)

Another idea regarding the conditions (8) is to suppose that Z_{m+1} and V_{m+1} are biorthogonal bases, that is,

    Z_{m+1}^T V_{m+1} = D_{m+1},    Z_{m+1}^T r_0 = β d_{11} e_1,

where D_{m+1} is a diagonal matrix with d_{ii} = ⟨z_i, v_i⟩ for i = 1, ..., m+1.
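The restart cycle of Algorithm 2 can be sketched as follows in Python/NumPy. This is a minimal illustration under several assumptions: dense matrices, no breakdown handling, a made-up test problem, and a first cycle that takes Z_{m+1} = V_{m+1} (which makes it coincide with plain GMRES(m)); it is not the authors' MATLAB implementation.

```python
import numpy as np

def arnoldi(A, r0, m):
    """Arnoldi process (lines 3-14 of Algorithm 2): V_{m+1} orthonormal,
    H of size (m+1) x m upper Hessenberg, with A V_m = V_{m+1} H."""
    n = len(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)         # assumed nonzero (no breakdown)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def prgmres(A, b, x0, m, tol=1e-8, maxcycles=300):
    """PRGMRES(m) sketch: each restart solves condition (8),
    y_m = argmin_y || Z_{m+1}^T (r - V_{m+1} H y) ||_2,
    where Z_{m+1} is the Krylov basis of the previous cycle."""
    x = x0.copy()
    Z = None
    for _ in range(maxcycles):
        r = b - A @ x
        if np.linalg.norm(r) / np.linalg.norm(b) < tol:
            break
        V, H = arnoldi(A, r, m)
        if Z is None:
            Z = V                               # first cycle: reduces to GMRES(m)
        y, *_ = np.linalg.lstsq(Z.T @ (V @ H), Z.T @ r, rcond=None)
        x = x + V[:, :m] @ y                    # line 16
        Z = V                                   # line 17: Z_{m+1} <- V_{m+1}
    return x

# Made-up well-conditioned test problem.
rng = np.random.default_rng(2)
n, m = 30, 20
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = prgmres(A, b, np.zeros(n), m)
assert np.linalg.norm(b - A @ x) < 1e-6 * np.linalg.norm(b)
```

The only change relative to restarted GMRES is the projected least-squares problem: the (m+1)-dimensional residual is first multiplied by Z_{m+1}^T before being minimized, so the minimization is carried out over the previous cycle's Krylov subspace.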

4. NUMERICAL RESULTS

We compare the classical restarted versions of FOM and GMRES with the projected versions presented in this work, on the following nonsymmetric matrices from the Harwell-Boeing library [8]:

- Matrix BCSSTK14 (n = 1806, cond(A) = 1.3 × 10^10)
- Matrix FS 680 3 (n = 680, cond(A) = 4.2 × 10^6)
- Matrix Orsirr 1 (n = 1030, cond(A) = 1.7 × 10^5)
- Matrix Sherman 1 (n = 1000, cond(A) = 2.3 × 10^4)

In all cases we ran the experiments without preconditioning techniques. All experiments were run on a Pentium IV 1.8 GHz with 256 MB RAM, using MATLAB 7.0, and the process was stopped when

    ‖b − Ax‖_2 / ‖b‖_2 < (1/2) × 10^{-8}.                                     (9)

Results are shown in Tables 1 to 4 for each matrix studied. The parameter m is the positive integer used to restart the algorithms. In all cases the initial guess was chosen as x_0 = (0, ..., 0)^T. For each test matrix the right-hand-side vector was selected so that the solution vector is x = (1, 1, ..., 1)^T. A maximum of 3000 iterations was allowed for all methods. The symbol **** means that the associated method failed to satisfy (9) within the maximum number of iterations.

Table 1. Number of iterations for the matrix BCSSTK14

 m   FOM   GMRES  PRGMRES  BPRGMRES  IPRGMRES
 3   ****  ****   ****     2114      2359
 4   ****  ****   1975     1594      2053
 5   ****  ****   2276     2114      1301
 6   ****  ****   1605     1132      1190
 7   ****  ****   1130      948       944
 8   ****  ****   1963      879       745
 9   ****  ****   1312      641       682
10   2965  2426   1547      544       635
15   1083  1264    823      362       408
20    814   728    305      307       246
25    463   383    604      207       227
30    432   364    233      222       187
35    401   199    267      132       139
40    316   227    149      141       142
45    253   200    230      122       104

In general, we observe very competitive behavior in the numerical tests between the proposed algorithms and classical restarted FOM and GMRES. However, we would like to point out some observations and conclusions about these numerical tests:

Table 2. Number of iterations for the matrix FS 680 3

 m   FOM   GMRES  PRGMRES  BPRGMRES  IPRGMRES
 9   ****  ****   ****     2636      ****
10   ****  ****   ****     2508      ****
15   ****  ****   2615     1156      1378
20   ****  ****   1680      649       849
25   2428  2577    828      476       569
30   1660  1511    427      368       363
35    977   739    480      301       313
40    643   480    272      202       193
45    382   410    275      168       187

Table 3. Number of iterations for the matrix Orsirr 1

 m   FOM   GMRES  PRGMRES  BPRGMRES  IPRGMRES
 4   ****  ****   ****     2625      2329
 5   ****  ****   ****     1777      2032
 6   ****  ****   ****     1527      1392
 7   ****  ****   1224     1071       803
 8   2060  ****   1083      759       826
 9   2462  ****   1376      805       686
10   1751  ****   1243      594       637
15    753   574    386      280       302
20    357   555    449      193       221
25    201   282    286      151       164
30    175   167    192      161       151
35    121   105    174       97       100
40     91    73    155       81        92
45     54    56     81       77        78

Table 4. Number of iterations for the matrix Sherman 1

 m   FOM   GMRES  PRGMRES  BPRGMRES  IPRGMRES
 2   ****  ****   ****      587      1013
 3   ****  ****   ****      493       774
 4   ****  ****    852      417       531
 5   ****  2778    534      422       442
 6   2119  1846    511      291       252
 7   1637  1386    352      231       257
 8   1265  1089    310      231       239
 9    998   847    431      250       245
10    817   702    292      175       182
15    357   317    227      106       108
20    197   176    114       81        86
25    123   113     68       73        53
30     85    81     86       49        61
35     69    59     46       53        56
40     55    45     32       36        40
45     45    37     36       41        41

- The methods PRGMRES, BPRGMRES, and IPRGMRES have nonmonotone behavior (see Figure 1).
- For small values of m, the proposed algorithms outperform the classic restarted versions of FOM and GMRES. This is certainly an interesting feature of the projected versions for very large problems where storage is crucial.
- For large values of m the new methods converge in about the same number of iterations as GMRES.
- In our experiments the classic versions failed to converge more frequently than the projected versions.
- We have no theoretical explanation for the interesting results reported in this section.

[Figure 1. Evolution of the residual norm for the different methods (GMRES, PRGMRES, BPRGMRES, IPRGMRES) with m = 15, matrix Orsirr 1.]

5. FINAL REMARKS

In this work we have proposed a new family of methods (projected methods) for solving systems of linear equations based on equation (6), and developed three new Krylov methods that, in the numerical tests, show competitive behavior with respect to FOM and GMRES. In the near future we would like to study the performance of the new methods from a theoretical point of view, and to extend the projected approach to biorthogonal-type methods such as QMR and TFQMR.

Acknowledgements

This work was partially supported by FONACIT under the Programme of Human Resources Formation and by CDCH-UCV project 03-00-5594-2004.

REFERENCES

[1] B. N. Datta. Numerical Linear Algebra and Applications. Brooks/Cole Publishing Company, Pacific Grove, California, 1995.
[2] G. H. Golub and C. F. Van Loan. Matrix Computations, third edition. The Johns Hopkins University Press, Baltimore, 1996.
[3] C. Brezinski. Multiparameter descent methods. Linear Algebra Appl., 296:113-142, 1999.
[4] Y. Saad. Iterative Methods for Sparse Linear Systems, second edition. SIAM, Philadelphia, PA, 2003.
[5] Y. Saad and M. H. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., 7:856-869, 1986.
[6] W. E. Arnoldi. The principle of minimized iterations in the solution of the matrix eigenvalue problem. Quart. Appl. Math., 9:17-29, 1951.
[7] Y. Saad. Krylov subspace methods for solving large unsymmetric linear systems. Mathematics of Computation, 37:105-126, 1981.
[8] I. Duff, R. Grimes, and J. Lewis. User's guide for the Harwell-Boeing sparse matrix collection (Release I). Technical Report TR-PA-92-86, CERFACS, Toulouse, France, 1992.