ETNA Kent State University
Electronic Transactions on Numerical Analysis. Volume 16, pp. 129-142, 2003.
Copyright 2003, Kent State University. ISSN 1068-9613. etna@mcs.kent.edu

A BLOCK VERSION OF BICGSTAB FOR LINEAR SYSTEMS WITH MULTIPLE RIGHT-HAND SIDES

A. EL GUENNOUNI, K. JBILOU, AND H. SADOK

Abstract. We present a new block method for solving large nonsymmetric linear systems of equations with multiple right-hand sides. We first give the matrix polynomial interpretation of the classical block biconjugate gradient (Bl-BCG) algorithm using formal matrix-valued orthogonal polynomials. This allows us to derive a block version of BiCGSTAB. Numerical examples and comparisons with other block methods are given to illustrate the effectiveness of the proposed method.

Key words. block Krylov subspace, block methods, Lanczos method, multiple right-hand sides, nonsymmetric linear systems.

AMS subject classifications. 65F10.

1. Introduction. Many applications, such as electromagnetic scattering problems and problems in structural mechanics, require the solution of several linear systems of equations with the same coefficient matrix but different right-hand sides. This problem can be written as

(1.1)  A X = B,

where A is an n x n nonsingular nonsymmetric real matrix, and B and X are n x s rectangular matrices whose columns are b^(1), ..., b^(s) and x^(1), ..., x^(s), respectively; the number s of right-hand sides is of moderate size, with s << n. When n is small, one can use direct methods to solve the given linear systems: we can compute the LU decomposition of A at a cost of O(n^3) operations and then solve the system for each right-hand side at a cost of O(n^2) operations. However, for large n, direct methods may become very expensive. Instead of solving each of the s linear systems by some iterative method, it is more efficient to use a block version and generate iterates for all the systems simultaneously. Starting from an initial guess X_0 in R^{n x s}, block Krylov subspace methods determine, at step k, an approximation X_k of the form X_k = X_0 + Z_k, where Z_k belongs to the block Krylov subspace

K_k(A, R_0) = span{R_0, A R_0, ..., A^{k-1} R_0},  with R_0 = B - A X_0.
We use a minimization property or an orthogonality relation to determine the correction Z_k. For symmetric positive definite problems, the block conjugate gradient (Bl-CG) algorithm [15] and its variants [14] are useful for solving the linear system (1.1). When the matrix A is nonsymmetric, the block biconjugate gradient (Bl-BCG) algorithm [15] (see [20] for a stabilized version), the block generalized minimal residual (Bl-GMRES) algorithm introduced in [24] and studied in [21], and the block quasi-minimum residual (Bl-QMR) algorithm [7] (see also [13]) are the best known block Krylov subspace methods. The purpose of these block methods is to provide the solutions of a multiple right-hand side system faster than their single right-hand side counterparts. We note that block solvers are

Received April 1, 2001. Accepted for publication October 8, 2002. Recommended by M. Gutknecht.
Laboratoire d'Analyse Numérique et d'Optimisation, Université des Sciences et Technologies de Lille, France (elguennouni@ano.univ-lille1.fr).
Université du Littoral, zone universitaire de la Mi-Voix, bâtiment H. Poincaré, 50 rue F. Buisson, BP 699, 62228 Calais Cedex, France (jbilou@lmpa.univ-littoral.fr, sadok@calais.univ-littoral.fr).
effective compared to their single right-hand side versions when the matrix A is relatively dense. They are also attractive when a preconditioner is added to the block solver. In [9] and [10] we proposed new methods, called global GMRES and global Lanczos-based methods, obtained by projecting the initial block residual onto a matrix Krylov subspace. Another approach for solving the problem (1.1), developed in the last few years, consists in selecting one seed system and the corresponding Krylov subspace and projecting the residuals of the other systems onto this Krylov subspace. The process is repeated with another seed system until all the systems are solved. This procedure has been used in [23] and [4] for the conjugate gradient method and in [22], when the matrix is nonsymmetric, for the GMRES algorithm [18]. This approach is also effective when the right-hand sides of (1.1) are not available at the same time (see [16], [17] and [26]). Note also that block methods such as the block Lanczos method [8], [5] and the block Arnoldi method [19] are used for solving large eigenvalue problems.

The Bl-BCG algorithm uses a short three-term recurrence formula, but in many situations the algorithm exhibits very irregular convergence behavior. This problem can be overcome by using a block smoothing technique as defined in [11] or a block QMR procedure [7]. A stabilized version of the Bl-BCG algorithm has been proposed in [20]. This method converges quite well but is in general more expensive than Bl-BCG. As for single right-hand side Lanczos-based methods, it cannot be excluded that breakdowns occur in the block Lanczos-based methods. We note that some block methods include a deflation procedure in order to detect and delete linearly or almost linearly dependent vectors in the block Krylov subspaces generated during the iterations. This technique has been used in [7] for the Bl-QMR algorithm.
See also [14] for a detailed discussion on loss of rank in the iteration matrices for the block conjugate gradient algorithm.

In the present paper, we define a new transpose-free block algorithm which is a generalization of the well-known single right-hand side BiCGSTAB algorithm [25]. This new method is named block BiCGSTAB (Bl-BiCGSTAB). Numerical examples show that the new method is more efficient than the Bl-BCG, Bl-QMR and Bl-GMRES algorithms.

The remainder of the paper is organized as follows. In Section 2, we give a matrix polynomial interpretation of Bl-BCG using right matrix orthogonal polynomials. In Section 3, we define the Bl-BiCGSTAB algorithm for solving (1.1). Finally, we report in Section 4 some numerical examples.

Throughout this paper, we use the following notations. For two n x s matrices X and Y, we consider the inner product <X, Y>_F = tr(X^T Y), where tr(Z) denotes the trace of the square matrix Z. The associated norm is the Frobenius norm, denoted by ||.||_F. We use <.,.>_2 for the usual inner product in R^n. For an n x s matrix V, the block Krylov subspace K_k(A, V) is the subspace generated by the columns of the matrices V, A V, ..., A^{k-1} V. Finally, 0_s and I_s will denote the zero and the identity matrices in R^{s x s}.

2. Matrix polynomial interpretation of the block BCG algorithm.

2.1. The block BCG algorithm. The block biconjugate gradient (Bl-BCG) algorithm was first proposed by O'Leary [15] for solving the problem (1.1). This algorithm is a generalization of the well-known BCG algorithm [6]. Bl-BCG computes two sets of direction matrices P_k and P~_k that span the block Krylov subspaces K_k(A, R_0) and K_k(A^T, R~_0), where R~_0 is an arbitrary n x s matrix. The algorithm can be summarized as follows [15]:
ALGORITHM 1: Block BCG

X_0 is an initial guess, R_0 = B - A X_0;
R~_0 is an arbitrary n x s matrix;
P_0 = R_0, P~_0 = R~_0;
for k = 0, 1, ..., compute
    alpha_k   = (P~_k^T A P_k)^{-1} (R~_k^T R_k),
    alpha~_k  = (P_k^T A^T P~_k)^{-1} (R_k^T R~_k),
    X_{k+1}   = X_k + P_k alpha_k,
    R_{k+1}   = R_k - A P_k alpha_k,
    R~_{k+1}  = R~_k - A^T P~_k alpha~_k,
    beta_k    = (R~_k^T R_k)^{-1} (R~_{k+1}^T R_{k+1}),
    beta~_k   = (R_k^T R~_k)^{-1} (R_{k+1}^T R~_{k+1}),
    P_{k+1}   = R_{k+1} + P_k beta_k,
    P~_{k+1}  = R~_{k+1} + P~_k beta~_k,
end.

The algorithm breaks down if the matrices P~_k^T A P_k or R~_k^T R_k are singular. Note also that the s x s coefficient matrices computed in the algorithm are solutions of s x s linear systems. The matrices of these systems could be very ill-conditioned, and this would affect the iterates computed by Bl-BCG. Bl-BCG can also exhibit very irregular behavior of the residual norms. To remedy this problem one can use a block smoothing technique [11] or a block quasi-minimization procedure (see [7] and [20]).

The matrix residuals and the matrix directions generated by ALGORITHM 1 satisfy the following properties.

PROPOSITION 1. [15] If no breakdown occurs, the matrices computed by the Bl-BCG algorithm satisfy the following relations: R~_j^T R_k = 0_s and P~_j^T A P_k = 0_s for j != k; span{R_0, ..., R_{k-1}} = K_k(A, R_0) and span{R~_0, ..., R~_{k-1}} = K_k(A^T, R~_0); and the columns of R_k are orthogonal to K_k(A^T, R~_0).

From now on, we assume that no breakdown occurs in the Bl-BCG algorithm. In the following subsection, we use matrix-valued polynomials to give a new expression of the iterates computed by Bl-BCG. This will allow us to define the block BiCGSTAB algorithm.

2.2. Connection with matrix-valued polynomials. In what follows, a matrix-valued polynomial of degree k is a polynomial of the form

(2.1)  P(t) = sum_{i=0}^{k} t^i Omega_i,  where Omega_i in R^{s x s}.

We use the notation (used in [21]) for the product

(2.2)  P(A) o V = sum_{i=0}^{k} A^i V Omega_i,  for any n x s matrix V.
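Before developing the polynomial formalism further, it may help to see ALGORITHM 1 in executable form. The following is a minimal dense-matrix sketch in Python/NumPy; the function name, the stopping test, and the demo problem (a well-conditioned SPD matrix with R~_0 = R_0) are our own illustrative choices, not part of the paper.

```python
import numpy as np

def block_bcg(A, B, X0, Rt0, maxit=200, tol=1e-10):
    # Dense-matrix sketch of ALGORITHM 1 (Bl-BCG), assuming no breakdown.
    X = X0.copy()
    R = B - A @ X                    # block residual R_0
    Rt = Rt0.copy()                  # "shadow" residual R~_0
    P, Pt = R.copy(), Rt.copy()      # direction matrices P_0, P~_0
    nB = np.linalg.norm(B)
    for _ in range(maxit):
        if np.linalg.norm(R) <= tol * nB:
            break
        alpha = np.linalg.solve(Pt.T @ (A @ P), Rt.T @ R)       # s x s system
        alpha_t = np.linalg.solve(P.T @ (A.T @ Pt), R.T @ Rt)
        X = X + P @ alpha
        R_new = R - A @ P @ alpha
        Rt_new = Rt - A.T @ Pt @ alpha_t
        beta = np.linalg.solve(Rt.T @ R, Rt_new.T @ R_new)      # s x s system
        beta_t = np.linalg.solve(R.T @ Rt, R_new.T @ Rt_new)
        P = R_new + P @ beta
        Pt = Rt_new + Pt @ beta_t
        R, Rt = R_new, Rt_new
    return X, R

# Demo on a small, well-conditioned SPD problem with R~_0 = R_0 (then the
# two sequences coincide and the method behaves like block CG).
rng = np.random.default_rng(0)
n, s = 30, 2
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)
B = rng.standard_normal((n, s))
X, R = block_bcg(A, B, np.zeros((n, s)), B.copy())
res = np.linalg.norm(B - A @ X) / np.linalg.norm(B)
```

In practice A is sparse and the products A @ P and A.T @ Pt dominate the cost; the four s x s solves are cheap when s << n.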
For any s x s matrix Omega, we also define the matrix polynomial P Omega by

(2.3)  (P Omega)(t) = sum_{i=0}^{k} t^i (Omega_i Omega).

With these definitions, we have the following properties.

PROPOSITION 2. Let P be a matrix-valued polynomial and let V and Omega be two matrices of dimensions n x s and s x s, respectively. Then we have

(P(A) o V) Omega = (P Omega)(A) o V,  and  A (P(A) o V) = (t P)(A) o V.

Proof. Let P be the matrix polynomial of degree k defined by (2.1). Then, using (2.2) and (2.3), it follows that (P(A) o V) Omega = sum_i A^i V Omega_i Omega = (P Omega)(A) o V. The second relation is easy to verify.

When solving a single right-hand side linear system A x = b, it is well known that the residual r_k and the direction p_k computed by the BCG algorithm can be expressed as r_k = R_k(A) r_0 and p_k = P_k(A) r_0 for scalar polynomials R_k and P_k. These polynomials are related by some recurrence formulas [12]. Using matrix-valued polynomials, we give in the following proposition new expressions for the residuals and the matrix directions generated by Bl-BCG.

PROPOSITION 3. Let R_k and P_k be the sequences of matrices generated by the Bl-BCG algorithm. Then there exist two families of matrix-valued polynomials R_k and P_k of degree at most k such that

(2.4)  R_k = R_k(A) o R_0,
(2.5)  P_k = P_k(A) o R_0.

These matrix polynomials are also related by the recurrence formulas

(2.6)  R_{k+1}(t) = R_k(t) - t P_k(t) alpha_k,
(2.7)  P_{k+1}(t) = R_{k+1}(t) + P_k(t) beta_k,

with R_0(t) = P_0(t) = I_s and alpha_k, beta_k in R^{s x s}.

Proof. The case k = 0 is obvious. Assume that the relations hold for k. The residual computed by the Bl-BCG algorithm is given by R_{k+1} = R_k - A P_k alpha_k. Hence, by the induction hypothesis, we get

R_{k+1} = R_k(A) o R_0 - A ((P_k(A) o R_0) alpha_k).
Then, using Proposition 2, we obtain R_{k+1} = (R_k - t P_k alpha_k)(A) o R_0 = R_{k+1}(A) o R_0, where R_{k+1} is defined by (2.6). This proves the first part of the proposition. The matrix direction computed by Bl-BCG is given by P_{k+1} = R_{k+1} + P_k beta_k; the same arguments show that P_{k+1} = P_{k+1}(A) o R_0, where the matrix-valued polynomial P_{k+1} is defined by (2.7).

Let Phi be the functional defined on the set of matrix-valued polynomials with coefficients in R^{s x s} by

(2.8)  Phi(P) = R~_0^T (P(A) o R_0).

We also define the functional Phi^(1) by

(2.9)  Phi^(1)(P) = R~_0^T (A (P(A) o R_0)) = Phi(t P).

With these definitions, it is easy to prove the following relations.

PROPOSITION 4. The functional Phi defined above satisfies the following properties: Phi(P Omega) = Phi(P) Omega for every Omega in R^{s x s}, and Phi(P + Q) = Phi(P) + Phi(Q). The same relations are also satisfied by Phi^(1).

This result shows that the functionals Phi and Phi^(1) are right-linear. The matrix polynomials associated by (2.4) and (2.5) with the residual and the direction polynomials generated by Bl-BCG belong to families of formal orthogonal polynomials with respect to Phi and Phi^(1) (see [21]). These results are stated in the next proposition.

PROPOSITION 5. Let R_k and P_k be the sequences of matrix-valued polynomials defined in Proposition 3. Then we have the orthogonality properties

(2.10)  Phi(t^j R_k) = 0_s,      for j = 0, ..., k-1,
(2.11)  Phi^(1)(t^j P_k) = 0_s,  for j = 0, ..., k-1.
Proof. We have seen (Proposition 1) that at step k the columns of the residual R_k produced by Bl-BCG are orthogonal to the block Krylov subspace K_k(A^T, R~_0). This can be expressed as

(2.12)  R~_0^T A^j R_k = 0_s,  for j = 0, ..., k-1.

Using (2.4) and Proposition 2, it follows that Phi(t^j R_k) = R~_0^T A^j (R_k(A) o R_0) = R~_0^T A^j R_k = 0_s, which proves (2.10). To prove (2.11), we use the relation (2.6), which gives t P_k alpha_k = R_k - R_{k+1}. Hence, by the right-linearity of Phi (Proposition 4) and by (2.10),

Phi^(1)(t^j P_k) alpha_k = Phi(t^{j+1} P_k alpha_k) = Phi(t^j R_k) - Phi(t^j R_{k+1}) = 0_s,  for j = 0, ..., k-1.

Since no breakdown occurs, alpha_k is nonsingular, and thus Phi^(1)(t^j P_k) = 0_s.

The results of Proposition 5 show that R_k and P_k are matrix-valued polynomials of degree at most k belonging to families of formal matrix orthogonal polynomials with respect to Phi and Phi^(1), respectively, with R_k normalized by the condition R_k(0) = I_s. Note that if psi is a scalar polynomial of degree at most k-1, we also have Phi(psi R_k) = 0_s and Phi^(1)(psi P_k) = 0_s. In general, this is not true for matrix-valued polynomials in place of psi. Using these matrix-valued polynomials, we will see in the next section how to define the block BiCGSTAB algorithm.
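The property that drives the derivation in the next section is that a scalar polynomial psi commutes past the o-product: psi(A) (P(A) o V) = (psi P)(A) o V, which follows from Proposition 2 and linearity. A small numerical check of this identity (all function names and test data below are ours, chosen for illustration):

```python
import numpy as np

def circ(A, omegas, V):
    # P(A) o V = sum_i A^i V Omega_i for the matrix-valued polynomial
    # P(t) = sum_i t^i Omega_i with s x s coefficients Omega_i (eq. (2.2)).
    out = np.zeros_like(V)
    Ai_V = V.copy()                       # holds A^i V
    for Om in omegas:
        out += Ai_V @ Om
        Ai_V = A @ Ai_V
    return out

rng = np.random.default_rng(1)
n, s = 5, 2
A = rng.standard_normal((n, n))
V = rng.standard_normal((n, s))
omegas = [rng.standard_normal((s, s)) for _ in range(3)]   # degree-2 P
c = [0.7, -1.3]                                            # psi(t) = 0.7 - 1.3 t

# Left side: psi(A) applied to P(A) o V.
psiA = c[0] * np.eye(n) + c[1] * A
lhs = psiA @ circ(A, omegas, V)

# Right side: o-product with the coefficients of the product polynomial
# psi * P, i.e. Theta_m = sum_{i+j=m} c_j Omega_i.
theta = [np.zeros((s, s)) for _ in range(len(omegas) + len(c) - 1)]
for j, cj in enumerate(c):
    for i, Om in enumerate(omegas):
        theta[i + j] += cj * Om
rhs = circ(A, theta, V)
err = np.linalg.norm(lhs - rhs)
```

The two sides agree to rounding error, as the identity predicts; the same computation with an s x s matrix coefficient in place of the scalars c would fail, which is why psi_k below must be scalar.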
3. The block BiCGSTAB algorithm. The single right-hand side BiCGSTAB algorithm [25] is a transpose-free and smoother variant of the BCG algorithm. The residuals produced by BiCGSTAB are defined by

r_k = psi_k(A) phi_k(A) r_0,

where phi_k is the scalar residual polynomial associated with the BCG algorithm and psi_k is another polynomial of degree k, updated from step to step by multiplication with a new linear factor, with the goal of stabilizing the convergence behavior of the BCG algorithm:

psi_k(t) = (1 - omega_k t) psi_{k-1}(t).

The parameter omega_k is determined by a local residual minimization condition. The search direction p_k is defined by p_k = psi_k(A) pi_k(A) r_0, where pi_k is the search direction polynomial associated with the BCG algorithm. In detail, the BiCGSTAB algorithm for solving a single right-hand side linear system A x = b is defined as follows [25]:

ALGORITHM 2: BiCGSTAB

x_0 an initial guess, r_0 = b - A x_0;
r~_0 is an arbitrary vector satisfying r~_0^T r_0 != 0; p_0 = r_0;
for k = 0, 1, ...
    alpha_k     = (r~_0^T r_k) / (r~_0^T A p_k),
    s_k         = r_k - alpha_k A p_k,
    omega_{k+1} = ((A s_k)^T s_k) / ((A s_k)^T A s_k),
    x_{k+1}     = x_k + alpha_k p_k + omega_{k+1} s_k,
    r_{k+1}     = s_k - omega_{k+1} A s_k,
    beta_k      = ((r~_0^T r_{k+1}) / (r~_0^T r_k)) (alpha_k / omega_{k+1}),
    p_{k+1}     = r_{k+1} + beta_k (p_k - omega_{k+1} A p_k),
end.

We will see later that the coefficient beta_k used in ALGORITHM 2 can be computed by a simpler formula. Techniques for curing breakdowns in BiCGSTAB are given in [3].

We now define a new block method for solving (1.1), which is a transpose-free and smoother-converging variant of the Bl-BCG algorithm. The new method is named block BiCGSTAB (Bl-BiCGSTAB) since it is a generalization of the single right-hand side BiCGSTAB algorithm. We have seen that the residual R_k and the matrix direction P_k computed by the Bl-BCG algorithm are such that R_k = R_k(A) o R_0 and P_k = P_k(A) o R_0.
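For reference, ALGORITHM 2 can be sketched as follows; the stopping test and the demo matrix are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bicgstab(A, b, x0, rtilde, maxit=200, tol=1e-10):
    # Sketch of ALGORITHM 2 (van der Vorst's BiCGSTAB) for a single
    # right-hand side; assumes no breakdown occurs.
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rho = rtilde @ r
    nb = np.linalg.norm(b)
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol * nb:
            break
        v = A @ p
        alpha = rho / (rtilde @ v)
        s = r - alpha * v                 # intermediate residual
        t = A @ s
        omega = (t @ s) / (t @ t)         # local residual minimization
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho_new = rtilde @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        rho = rho_new
    return x, r

# Demo on a diagonally dominant nonsymmetric matrix (illustrative data).
rng = np.random.default_rng(2)
n = 40
A = n * np.eye(n) + rng.standard_normal((n, n))
b = rng.standard_normal(n)
x, r = bicgstab(A, b, np.zeros(n), b.copy())
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Each iteration needs two matrix-vector products with A and none with A^T, which is the transpose-free property the block version below inherits.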
Here R_k and P_k are the matrix-valued polynomials satisfying the relations (2.6) and (2.7). The Bl-BiCGSTAB algorithm produces iterates whose residual matrices are of the form

(3.1)  R_k = psi_k(A) (R_k(A) o R_0),

where psi_k is still a scalar polynomial, defined recursively at each step to stabilize the convergence behavior of the original Bl-BCG algorithm. Specifically, psi_k is defined by the recurrence formula

(3.2)  psi_{k+1}(t) = (1 - omega_{k+1} t) psi_k(t),  psi_0(t) = 1.

As will be seen later, the scalar omega_{k+1} is selected by imposing a minimization of the Frobenius residual norm. It is also possible to choose a different omega for each right-hand side; in all our numerical tests, we obtained similar results for these two choices.

We want now to define a recurrence formula for the new residuals defined by (3.1). From (2.6), we immediately obtain

(3.3)  psi_k(A) (R_{k+1}(A) o R_0) = psi_k(A) (R_k(A) o R_0) - A psi_k(A) (P_k(A) o R_0) alpha_k.

Using (2.7) and (3.2), we get

(3.4)  psi_{k+1}(A) (P_{k+1}(A) o R_0) = psi_{k+1}(A) (R_{k+1}(A) o R_0) + (I - omega_{k+1} A) psi_k(A) (P_k(A) o R_0) beta_k.

We also have

(3.5)  R_{k+1} = psi_{k+1}(A) (R_{k+1}(A) o R_0) = (I - omega_{k+1} A) psi_k(A) (R_{k+1}(A) o R_0).

We now define the new matrix direction

(3.6)  P_k = psi_k(A) (P_k(A) o R_0),

and we set

(3.7)  S_k = psi_k(A) (R_{k+1}(A) o R_0) = R_k - A P_k alpha_k.

Then, using these formulas, the iterates X_{k+1}, the residuals R_{k+1} and the directions P_{k+1} are computed by the following recurrence formulas:

(3.8)  X_{k+1} = R_k? no:  X_{k+1} = X_k + P_k alpha_k + omega_{k+1} S_k,  R_{k+1} = S_k - omega_{k+1} A S_k,
(3.9)  P_{k+1} = R_{k+1} + (P_k - omega_{k+1} A P_k) beta_k.
The scalar parameter omega_{k+1} is chosen to minimize the Frobenius norm of the residual R_{k+1} = S_k - omega A S_k over omega. This implies that

omega_{k+1} = <A S_k, S_k>_F / <A S_k, A S_k>_F = tr((A S_k)^T S_k) / tr((A S_k)^T A S_k).

Finally, we have to compute the s x s matrix coefficients alpha_k and beta_k needed in the recurrence formulas. This is done by using the fact that the polynomials R_k and P_k belong to the families of formal matrix orthogonal polynomials with respect to the functionals Phi and Phi^(1), respectively. Hence, using the relations (2.6) and (2.7), Proposition 5, and the fact that psi_k is a scalar polynomial of degree k, we have

(3.10)  Phi(psi_k R_{k+1}) = Phi(psi_k R_k) - Phi^(1)(psi_k P_k) alpha_k = 0_s,
(3.11)  Phi^(1)(psi_k P_{k+1}) = Phi^(1)(psi_k R_{k+1}) + Phi^(1)(psi_k P_k) beta_k = 0_s.

Therefore, since Phi(psi_k R_k) = R~_0^T R_k, Phi^(1)(psi_k P_k) = R~_0^T A P_k, and Phi^(1)(psi_k R_{k+1}) = R~_0^T A S_k by the definitions of the functionals, the relations (3.10) and (3.11) become

(3.12)  (R~_0^T A P_k) alpha_k = R~_0^T R_k,
(3.13)  (R~_0^T A P_k) beta_k = - R~_0^T A S_k.

So, the s x s matrices alpha_k and beta_k can be computed by solving two s x s linear systems with the same coefficient matrix R~_0^T A P_k. They will be solved by computing the LU factorization of this matrix. Putting all these relations together, the Bl-BiCGSTAB algorithm can be summarized as follows.

ALGORITHM 3: Bl-BiCGSTAB

X_0 an initial guess, R_0 = B - A X_0;
R~_0 an arbitrary n x s matrix; P_0 = R_0;
for k = 0, 1, ...
    V_k = A P_k;
    solve (R~_0^T V_k) alpha_k = R~_0^T R_k;
    S_k = R_k - V_k alpha_k;
    W_k = A S_k;
    omega_{k+1} = tr(W_k^T S_k) / tr(W_k^T W_k);
    X_{k+1} = X_k + P_k alpha_k + omega_{k+1} S_k;
    R_{k+1} = S_k - omega_{k+1} W_k;
    solve (R~_0^T V_k) beta_k = - R~_0^T W_k;
    P_{k+1} = R_{k+1} + (P_k - omega_{k+1} V_k) beta_k;
end.

Note that when a single linear system is solved (s = 1), ALGORITHM 3 reduces to BiCGSTAB. However, the coefficient beta_k computed by ALGORITHM 3 with s = 1 is given by beta_k = - (r~_0^T A s_k) / (r~_0^T A p_k). This expression for beta_k is simpler than the one given in ALGORITHM 2 but requires an extra inner product.
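The whole of ALGORITHM 3 fits in a few lines of dense-matrix Python; here the shared LU factorization of R~_0^T A P_k is emulated by two solves against the same matrix C, and the stopping test and demo problem are our own illustrative choices.

```python
import numpy as np

def bl_bicgstab(A, B, X0, Rt0, maxit=200, tol=1e-10):
    # Sketch of ALGORITHM 3 (Bl-BiCGSTAB), assuming no breakdown.
    X = X0.copy()
    R = B - A @ X
    P = R.copy()
    nB = np.linalg.norm(B)
    for _ in range(maxit):
        if np.linalg.norm(R) <= tol * nB:
            break
        V = A @ P
        C = Rt0.T @ V                          # s x s matrix shared by (3.12)-(3.13)
        alpha = np.linalg.solve(C, Rt0.T @ R)
        S = R - V @ alpha
        W = A @ S
        omega = np.trace(W.T @ S) / np.trace(W.T @ W)   # Frobenius minimization
        X = X + P @ alpha + omega * S
        R = S - omega * W
        beta = np.linalg.solve(C, -(Rt0.T @ W))
        P = R + (P - omega * V) @ beta
    return X, R

# Demo on a diagonally dominant nonsymmetric matrix with s = 3 right-hand
# sides and R~_0 = R_0 (illustrative data, not the paper's experiments).
rng = np.random.default_rng(3)
n, s = 40, 3
A = n * np.eye(n) + rng.standard_normal((n, n))
B = rng.standard_normal((n, s))
X, R = bl_bicgstab(A, B, np.zeros((n, s)), B.copy())
res = np.linalg.norm(B - A @ X) / np.linalg.norm(B)
```

Note the transpose-free character: only products with A appear (V and W), two block matrix-vector products per iteration, plus two cheap s x s solves with the same coefficient matrix C.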
The Bl-BiCGSTAB algorithm will also suffer from a breakdown when the matrix R~_0^T A P_k is singular. In situations where this matrix is nonsingular but very ill-conditioned, the computation of the iterates will also be affected. To overcome these problems, one can restart with a different R~_0. Look-ahead strategies could also be defined, but this will not be treated in this paper.

Due to the local Frobenius norm minimization steps, Bl-BiCGSTAB has smoother convergence behavior than Bl-BCG. However, the norms of the residuals produced by Bl-BiCGSTAB may oscillate for some problems. In this case, we can use the block smoothing procedure defined in [11] to get nonincreasing Frobenius norms of the residuals.

For solving the linear system (1.1), Bl-BiCGSTAB requires per iteration the evaluation of 2s matrix-vector products with A. This is to be compared to s matrix-vector products with A and s matrix-vector products with A^T needed at each iteration of Bl-BCG. Both algorithms in addition store a fixed small number of n x s blocks (the iterates, residuals and directions), so memory grows linearly with s. The matrix-vector work per iteration is summarized in Table 1.

Table 1
Matrix-vector products per iteration for Bl-BCG and Bl-BiCGSTAB.

                    Bl-BCG    Bl-BiCGSTAB
Matvecs with A        s           2s
Matvecs with A^T      s           0

The costs of Bl-BCG and Bl-BiCGSTAB increase rapidly with the number s of right-hand sides. However, these block solvers converge in fewer iterations than their single right-hand side counterparts, since the dimension of the search space K_k(A, R_0) increases by s in each step.

4. Numerical examples. In this section we give some experimental results. Our examples have been coded in Matlab and executed on a SUN SPARC workstation. We compare the performance of Bl-BiCGSTAB, Bl-QMR, Bl-GMRES and Bl-BCG for solving the multiple linear system (1.1).
For the experiments with Bl-QMR, we used the algorithm that generates a banded block tridiagonal matrix, with a deflation procedure to eliminate nearly linearly dependent columns of the search space [7]. In all but the last experiment, the initial guess was generated with the Matlab function rand, which creates an n x s matrix with coefficients uniformly distributed in [0, 1].

Example 1. In this example, we use two sets of experiments. The tests were stopped as soon as

max_i ||R_k(:, i)||_2 <= epsilon

for a fixed tolerance epsilon.

Experiment 1: The first test matrix represents the five-point discretization of a two-dimensional convection-diffusion operator (4.1)
on the unit square [0,1] x [0,1] with Dirichlet boundary conditions. The discretization was performed on a uniform grid. The matrix is nonsymmetric and has a positive definite symmetric part. We chose R~_0 = R_0 in the Bl-BiCGSTAB and Bl-BCG algorithms. Figure 1 reports the convergence history for Bl-BiCGSTAB, Bl-QMR, Bl-BCG and Bl-GMRES(10). In this figure, we plotted the maximum of the residual norms, max_i ||R_k(:, i)||, on a logarithmic scale, versus the flops (number of arithmetic operations).

Figure 1: max_i ||R_k(:, i)|| versus flops for Bl-BiCGSTAB, Bl-QMR, Bl-GMRES(10) and Bl-BCG.

In Figure 2, we plotted for the same example only the results obtained with the Bl-BiCGSTAB and the Bl-QMR algorithms. As observed from Figure 1 and Figure 2, Bl-BiCGSTAB gives the best results.
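A matrix with the properties described in Experiment 1 can be generated as follows; the convection coefficient c and the grid size are illustrative assumptions (the paper's exact operator coefficients and grid are not reproduced here), but the construction yields exactly the stated structure: a nonsymmetric five-point matrix whose symmetric part is positive definite.

```python
import numpy as np

def five_point_matrix(m, c=10.0):
    # Five-point discretization of -u_xx - u_yy + c*u_x on (0,1)^2 with
    # Dirichlet boundary conditions on a uniform m x m interior grid.
    # Centered differences make the convection part skew-symmetric, so the
    # symmetric part of A is the (positive definite) discrete Laplacian.
    h = 1.0 / (m + 1)
    I = np.eye(m)
    T = 2.0 * I - np.eye(m, k=1) - np.eye(m, k=-1)    # 1-D Laplacian * h^2
    D = (np.eye(m, k=1) - np.eye(m, k=-1)) / 2.0      # centered d/dx * h
    A = (np.kron(I, T) + np.kron(T, I)) / h**2 + c * np.kron(I, D) / h
    return A

A = five_point_matrix(6)                              # order m^2 = 36
sym_part_min_eig = np.linalg.eigvalsh((A + A.T) / 2).min()
```

Such a matrix can be passed directly to the solver sketches above (with a random B) to reproduce the qualitative behavior discussed in this section.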
Figure 2: max_i ||R_k(:, i)|| versus flops for Bl-BiCGSTAB and Bl-QMR.

Experiment 2: For the second experiment, we used the test matrix pde900 from the Harwell-Boeing collection (a nonsymmetric matrix of size n = 900). Figure 3 reports the results obtained for Bl-BiCGSTAB, Bl-QMR, Bl-BCG and Bl-GMRES(10). As can be seen from Figure 3, Bl-BiCGSTAB requires fewer flops for convergence than the other three block methods. The Bl-BCG algorithm failed to converge.

Figure 3: pde900: max_i ||R_k(:, i)|| versus flops for Bl-BiCGSTAB, Bl-QMR, Bl-GMRES(10) and Bl-BCG.

Example 2. For the second set of experiments, we used matrices from the Harwell-Boeing
collection, using ILUT as a preconditioner with a small drop tolerance. We compared the performance (in terms of flops) of the Bl-BiCGSTAB algorithm, the Bl-GMRES(10) algorithm, and the BiCGSTAB algorithm applied to each single right-hand side. For all the tests, the matrix B was an n x s random matrix, and two different numbers s of right-hand sides were used. The iterations were stopped when max_i ||R_k(:, i)||_2 was small enough.

In Table 2 we list the effectiveness of Bl-BiCGSTAB, measured by the ratios r1 = flops(Bl-BiCGSTAB)/flops(BiCGSTAB) and r2 = flops(Bl-BiCGSTAB)/flops(Bl-GMRES(10)). We note that flops(BiCGSTAB) corresponds to the flops required for solving the s linear systems by applying the BiCGSTAB algorithm s times.

Table 2
Effectiveness of Bl-BiCGSTAB as compared to Bl-GMRES(10) and BiCGSTAB, using the ILUT preconditioner. The matrices are from the Harwell-Boeing collection: UTM1700a (n = 1700), SAYLR4, SHERMAN5 and SHERMAN3; nnz(A) denotes the number of nonzero entries of A.

As shown in Table 2, Bl-BiCGSTAB is less expensive than Bl-GMRES(10) and than BiCGSTAB applied to each single right-hand side linear system, except for the last example with the larger number of right-hand sides. The relative density of the matrices and the use of preconditioners make the multiple right-hand side solvers less expensive and therefore more effective.

5. Conclusion. We have proposed in this paper a new block BiCGSTAB method for nonsymmetric linear systems with multiple right-hand sides. To define this new method, we used formal matrix-valued orthogonal polynomials.

Acknowledgements. We would like to thank the referees for their helpful remarks and valuable suggestions.

REFERENCES

[1] C. Brezinski and H. Sadok, Lanczos-type methods for solving systems of linear equations, Appl. Numer. Math., 11 (1993).
[2] C. Brezinski, M. Redivo-Zaglia and H. Sadok, A breakdown-free Lanczos-type algorithm for solving linear systems, Numer. Math., 63 (1992).
[3] C. Brezinski and M.
Redivo-Zaglia, Look-ahead in BiCGSTAB and other methods for linear systems, BIT, 35 (1995).
[4] T. Chan and W. Wan, Analysis of projection methods for solving linear systems with multiple right-hand sides, SIAM J. Sci. Comput., 18 (1997).
[5] G. H. Golub and R. R. Underwood, A block Lanczos algorithm for computing the q algebraically largest eigenvalues and a corresponding eigenspace of large, sparse, real symmetric matrices, Proc. of the 1974 IEEE Conference on Decision and Control, Phoenix.
[6] R. Fletcher, Conjugate gradient methods for indefinite systems, in Proceedings of the Dundee Biennial Conference on Numerical Analysis, G. A. Watson, ed., Springer-Verlag, New York.
[7] R. Freund and M. Malhotra, A block-QMR algorithm for non-Hermitian linear systems with multiple right-hand sides, Linear Algebra Appl., 254 (1997).
[8] G. H. Golub and R. R. Underwood, The block Lanczos method for computing eigenvalues, in Mathematical Software III, J. R. Rice, ed., Academic Press, New York, 1977.
[9] K. Jbilou, A. Messaoudi and H. Sadok, Global FOM and GMRES algorithms for matrix equations, Appl. Numer. Math., 31 (1999).
[10] K. Jbilou and H. Sadok, Global Lanczos-based methods with applications, Technical Report, LMA, Université du Littoral, Calais, France.
[11] K. Jbilou, Smoothing iterative block methods for linear systems with multiple right-hand sides, J. Comput. Appl. Math., 107 (1999).
[12] C. Lanczos, Solution of systems of linear equations by minimized iterations, J. Research Nat. Bur. Standards, 49 (1952).
[13] M. Malhotra, R. Freund and P. M. Pinsky, Iterative solution of multiple radiation and scattering problems in structural acoustics using a block quasi-minimal residual algorithm, Comput. Methods Appl. Mech. Engrg., 146 (1997).
[14] A. Nikishin and A. Yeremin, Variable block CG algorithms for solving large sparse symmetric positive definite linear systems on parallel computers, I: General iterative scheme, SIAM J. Matrix Anal. Appl., 16 (1995).
[15] D.
O'Leary, The block conjugate gradient algorithm and related methods, Linear Algebra Appl., 29 (1980).
[16] B. Parlett, A new look at the Lanczos algorithm for solving symmetric systems of linear equations, Linear Algebra Appl., 29 (1980).
[17] Y. Saad, On the Lanczos method for solving symmetric linear systems with several right-hand sides, Math. Comp., 48 (1987).
[18] Y. Saad and M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986).
[19] M. Sadkane, Block Arnoldi and Davidson methods for unsymmetric large eigenvalue problems, Numer. Math., 64 (1993).
[20] V. Simoncini, A stabilized QMR version of block BICG, SIAM J. Matrix Anal. Appl., 18 (1997).
[21] V. Simoncini and E. Gallopoulos, Convergence properties of block GMRES and matrix polynomials, Linear Algebra Appl., 247 (1996).
[22] V. Simoncini and E. Gallopoulos, An iterative method for nonsymmetric systems with multiple right-hand sides, SIAM J. Sci. Comput., 16 (1995).
[23] C. Smith, A. Peterson and R. Mittra, A conjugate gradient algorithm for the treatment of multiple incident electromagnetic fields, IEEE Trans. Antennas Propagation, 37 (1989).
[24] B. Vital, Etude de quelques méthodes de résolution de problèmes linéaires de grande taille sur multiprocesseur, Ph.D. thesis, Université de Rennes, Rennes, France.
[25] H. van der Vorst, Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 13 (1992).
[26] H. van der Vorst, An iterative solution method for solving f(A)x = b, using Krylov subspace information obtained for the symmetric positive definite matrix A, J. Comput. Appl. Math., 18 (1987).
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationA short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering
A short course on: Preconditioned Krylov subspace methods Yousef Saad University of Minnesota Dept. of Computer Science and Engineering Universite du Littoral, Jan 19-3, 25 Outline Part 1 Introd., discretization
More informationA SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS
INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, SERIES B Volume 5, Number 1-2, Pages 21 30 c 2014 Institute for Scientific Computing and Information A SPARSE APPROXIMATE INVERSE PRECONDITIONER
More informationNumerical behavior of inexact linear solvers
Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth
More informationArnoldi Methods in SLEPc
Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,
More informationMultigrid absolute value preconditioning
Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical
More informationFEM and sparse linear system solving
FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD
More information6.4 Krylov Subspaces and Conjugate Gradients
6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P
More informationPROJECTED GMRES AND ITS VARIANTS
PROJECTED GMRES AND ITS VARIANTS Reinaldo Astudillo Brígida Molina rastudillo@kuaimare.ciens.ucv.ve bmolina@kuaimare.ciens.ucv.ve Centro de Cálculo Científico y Tecnológico (CCCT), Facultad de Ciencias,
More informationLinear Solvers. Andrew Hazel
Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction
More informationUpdating the QR decomposition of block tridiagonal and block Hessenberg matrices generated by block Krylov space methods
Updating the QR decomposition of block tridiagonal and block Hessenberg matrices generated by block Krylov space methods Martin H. Gutknecht Seminar for Applied Mathematics, ETH Zurich, ETH-Zentrum HG,
More informationETNA Kent State University
Electronic ransactions on Numerical Analysis. Volume 2, pp. 57-75, March 1994. Copyright 1994,. ISSN 1068-9613. ENA MINIMIZAION PROPERIES AND SHOR RECURRENCES FOR KRYLOV SUBSPACE MEHODS RÜDIGER WEISS Dedicated
More informationA MULTIPRECONDITIONED CONJUGATE GRADIENT ALGORITHM
SIAM J. MATRIX ANAL. APPL. Vol. 27, No. 4, pp. 1056 1068 c 2006 Society for Industrial and Applied Mathematics A MULTIPRECONDITIONED CONJUGATE GRADIENT ALGORITHM ROBERT BRIDSON AND CHEN GREIF Abstract.
More informationThe solution of the discretized incompressible Navier-Stokes equations with iterative methods
The solution of the discretized incompressible Navier-Stokes equations with iterative methods Report 93-54 C. Vuik Technische Universiteit Delft Delft University of Technology Faculteit der Technische
More informationResearch Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems
Abstract and Applied Analysis Article ID 237808 pages http://dxdoiorg/055/204/237808 Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite
More informationVariants of BiCGSafe method using shadow three-term recurrence
Variants of BiCGSafe method using shadow three-term recurrence Seiji Fujino, Takashi Sekimoto Research Institute for Information Technology, Kyushu University Ryoyu Systems Co. Ltd. (Kyushu University)
More informationAlgorithms that use the Arnoldi Basis
AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to
More information9.1 Preconditioned Krylov Subspace Methods
Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete
More informationOn the Preconditioning of the Block Tridiagonal Linear System of Equations
On the Preconditioning of the Block Tridiagonal Linear System of Equations Davod Khojasteh Salkuyeh Department of Mathematics, University of Mohaghegh Ardabili, PO Box 179, Ardabil, Iran E-mail: khojaste@umaacir
More informationPreconditioned inverse iteration and shift-invert Arnoldi method
Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical
More informationarxiv: v1 [hep-lat] 2 May 2012
A CG Method for Multiple Right Hand Sides and Multiple Shifts in Lattice QCD Calculations arxiv:1205.0359v1 [hep-lat] 2 May 2012 Fachbereich C, Mathematik und Naturwissenschaften, Bergische Universität
More informationKrylov Subspace Methods that Are Based on the Minimization of the Residual
Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean
More informationIncomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices
Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Eun-Joo Lee Department of Computer Science, East Stroudsburg University of Pennsylvania, 327 Science and Technology Center,
More informationBlock Bidiagonal Decomposition and Least Squares Problems
Block Bidiagonal Decomposition and Least Squares Problems Åke Björck Department of Mathematics Linköping University Perspectives in Numerical Analysis, Helsinki, May 27 29, 2008 Outline Bidiagonal Decomposition
More informationANY FINITE CONVERGENCE CURVE IS POSSIBLE IN THE INITIAL ITERATIONS OF RESTARTED FOM
Electronic Transactions on Numerical Analysis. Volume 45, pp. 133 145, 2016. Copyright c 2016,. ISSN 1068 9613. ETNA ANY FINITE CONVERGENCE CURVE IS POSSIBLE IN THE INITIAL ITERATIONS OF RESTARTED FOM
More informationFinite-choice algorithm optimization in Conjugate Gradients
Finite-choice algorithm optimization in Conjugate Gradients Jack Dongarra and Victor Eijkhout January 2003 Abstract We present computational aspects of mathematically equivalent implementations of the
More informationA HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY
A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY RONALD B. MORGAN AND MIN ZENG Abstract. A restarted Arnoldi algorithm is given that computes eigenvalues
More informationDeflation for inversion with multiple right-hand sides in QCD
Deflation for inversion with multiple right-hand sides in QCD A Stathopoulos 1, A M Abdel-Rehim 1 and K Orginos 2 1 Department of Computer Science, College of William and Mary, Williamsburg, VA 23187 2
More informationIntroduction. Chapter One
Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 20 1 / 20 Overview
More informationFormulation of a Preconditioned Algorithm for the Conjugate Gradient Squared Method in Accordance with Its Logical Structure
Applied Mathematics 215 6 1389-146 Published Online July 215 in SciRes http://wwwscirporg/journal/am http://dxdoiorg/14236/am21568131 Formulation of a Preconditioned Algorithm for the Conjugate Gradient
More informationLast Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection
Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue
More informationIterative Methods for Sparse Linear Systems
Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University
More informationChapter 7 Iterative Techniques in Matrix Algebra
Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition
More informationA stable variant of Simpler GMRES and GCR
A stable variant of Simpler GMRES and GCR Miroslav Rozložník joint work with Pavel Jiránek and Martin H. Gutknecht Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic miro@cs.cas.cz,
More informationA Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers
Applied and Computational Mathematics 2017; 6(4): 202-207 http://www.sciencepublishinggroup.com/j/acm doi: 10.11648/j.acm.20170604.18 ISSN: 2328-5605 (Print); ISSN: 2328-5613 (Online) A Robust Preconditioned
More informationBLOCK KRYLOV SPACE METHODS FOR LINEAR SYSTEMS WITH MULTIPLE RIGHT-HAND SIDES: AN INTRODUCTION
BLOCK KRYLOV SPACE METHODS FOR LINEAR SYSTEMS WITH MULTIPLE RIGHT-HAND SIDES: AN INTRODUCTION MARTIN H. GUTKNECHT / VERSION OF February 28, 2006 Abstract. In a number of applications in scientific computing
More informationUpdating the QR decomposition of block tridiagonal and block Hessenberg matrices
Applied Numerical Mathematics 58 (2008) 871 883 www.elsevier.com/locate/apnum Updating the QR decomposition of block tridiagonal and block Hessenberg matrices Martin H. Gutknecht a,, Thomas Schmelzer b
More informationKrylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration
Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration D. Calvetti a, B. Lewis b and L. Reichel c a Department of Mathematics, Case Western Reserve University,
More informationKey words. conjugate gradients, normwise backward error, incremental norm estimation.
Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322
More informationA Novel Approach for Solving the Power Flow Equations
Vol.1, Issue.2, pp-364-370 ISSN: 2249-6645 A Novel Approach for Solving the Power Flow Equations Anwesh Chowdary 1, Dr. G. MadhusudhanaRao 2 1 Dept.of.EEE, KL University-Guntur, Andhra Pradesh, INDIA 2
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)
AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical
More informationOn the loss of orthogonality in the Gram-Schmidt orthogonalization process
CERFACS Technical Report No. TR/PA/03/25 Luc Giraud Julien Langou Miroslav Rozložník On the loss of orthogonality in the Gram-Schmidt orthogonalization process Abstract. In this paper we study numerical
More informationDomain decomposition on different levels of the Jacobi-Davidson method
hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional
More informationSOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA
1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization
More informationANONSINGULAR tridiagonal linear system of the form
Generalized Diagonal Pivoting Methods for Tridiagonal Systems without Interchanges Jennifer B. Erway, Roummel F. Marcia, and Joseph A. Tyson Abstract It has been shown that a nonsingular symmetric tridiagonal
More informationLecture 18 Classical Iterative Methods
Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,
More informationPeter Deuhard. for Symmetric Indenite Linear Systems
Peter Deuhard A Study of Lanczos{Type Iterations for Symmetric Indenite Linear Systems Preprint SC 93{6 (March 993) Contents 0. Introduction. Basic Recursive Structure 2. Algorithm Design Principles 7
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD
More informationKrylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17
Krylov Space Methods Nonstationary sounds good Radu Trîmbiţaş Babeş-Bolyai University Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Introduction These methods are used both to solve
More informationInexactness and flexibility in linear Krylov solvers
Inexactness and flexibility in linear Krylov solvers Luc Giraud ENSEEIHT (N7) - IRIT, Toulouse Matrix Analysis and Applications CIRM Luminy - October 15-19, 2007 in honor of Gérard Meurant for his 60 th
More informationGeneralized MINRES or Generalized LSQR?
Generalized MINRES or Generalized LSQR? Michael Saunders Systems Optimization Laboratory (SOL) Institute for Computational Mathematics and Engineering (ICME) Stanford University New Frontiers in Numerical
More informationA DISSERTATION. Extensions of the Conjugate Residual Method. by Tomohiro Sogabe. Presented to
A DISSERTATION Extensions of the Conjugate Residual Method ( ) by Tomohiro Sogabe Presented to Department of Applied Physics, The University of Tokyo Contents 1 Introduction 1 2 Krylov subspace methods
More informationA Block Compression Algorithm for Computing Preconditioners
A Block Compression Algorithm for Computing Preconditioners J Cerdán, J Marín, and J Mas Abstract To implement efficiently algorithms for the solution of large systems of linear equations in modern computer
More informationPreconditioned Conjugate Gradient-Like Methods for. Nonsymmetric Linear Systems 1. Ulrike Meier Yang 2. July 19, 1994
Preconditioned Conjugate Gradient-Like Methods for Nonsymmetric Linear Systems Ulrike Meier Yang 2 July 9, 994 This research was supported by the U.S. Department of Energy under Grant No. DE-FG2-85ER25.
More informationConjugate Gradient (CG) Method
Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous
More informationAMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1.
J. Appl. Math. & Computing Vol. 15(2004), No. 1, pp. 299-312 BILUS: A BLOCK VERSION OF ILUS FACTORIZATION DAVOD KHOJASTEH SALKUYEH AND FAEZEH TOUTOUNIAN Abstract. ILUS factorization has many desirable
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationRecent computational developments in Krylov Subspace Methods for linear systems. Valeria Simoncini and Daniel B. Szyld
Recent computational developments in Krylov Subspace Methods for linear systems Valeria Simoncini and Daniel B. Szyld A later version appeared in : Numerical Linear Algebra w/appl., 2007, v. 14(1), pp.1-59.
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-16 An elegant IDR(s) variant that efficiently exploits bi-orthogonality properties Martin B. van Gijzen and Peter Sonneveld ISSN 1389-6520 Reports of the Department
More informationKrylov Space Solvers
Seminar for Applied Mathematics ETH Zurich International Symposium on Frontiers of Computational Science Nagoya, 12/13 Dec. 2005 Sparse Matrices Large sparse linear systems of equations or large sparse
More informationMath 504 (Fall 2011) 1. (*) Consider the matrices
Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture
More informationNumerical linear algebra
Numerical linear algebra Purdue University CS 51500 Fall 2017 David Gleich David F. Gleich Call me Prof Gleich Dr. Gleich Please not Hey matrix guy! Huda Nassar Call me Huda Ms. Huda Please not Matrix
More informationAugmented GMRES-type methods
Augmented GMRES-type methods James Baglama 1 and Lothar Reichel 2, 1 Department of Mathematics, University of Rhode Island, Kingston, RI 02881. E-mail: jbaglama@math.uri.edu. Home page: http://hypatia.math.uri.edu/
More informationarxiv: v2 [math.na] 1 Sep 2016
The structure of the polynomials in preconditioned BiCG algorithms and the switching direction of preconditioned systems Shoji Itoh and Masaai Sugihara arxiv:163.175v2 [math.na] 1 Sep 216 Abstract We present
More informationReduced Synchronization Overhead on. December 3, Abstract. The standard formulation of the conjugate gradient algorithm involves
Lapack Working Note 56 Conjugate Gradient Algorithms with Reduced Synchronization Overhead on Distributed Memory Multiprocessors E. F. D'Azevedo y, V.L. Eijkhout z, C. H. Romine y December 3, 1999 Abstract
More informationETNA Kent State University
C 8 Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-2, 2004. Copyright 2004,. ISSN 1068-613. etnamcs.kent.edu STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Abstract.
More informationEXISTENCE VERIFICATION FOR SINGULAR ZEROS OF REAL NONLINEAR SYSTEMS
EXISTENCE VERIFICATION FOR SINGULAR ZEROS OF REAL NONLINEAR SYSTEMS JIANWEI DIAN AND R BAKER KEARFOTT Abstract Traditional computational fixed point theorems, such as the Kantorovich theorem (made rigorous
More informationResidual iterative schemes for largescale linear systems
Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz
More information