Comparison of Fixed Point Methods and Krylov Subspace Methods Solving Convection-Diffusion Equations


American Journal of Computational Mathematics, 2015, 5.
Published Online June 2015 in SciRes. http://www.scirp.org/journal/ajcm

Xijian Wang
School of Mathematics and Computational Science, Wuyi University, Jiangmen, China
Email: wangxj9846@63.com

Received March 2015; accepted June 2015; published June 2015

Copyright 2015 by author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

Abstract

The paper first introduces the two-dimensional convection-diffusion equation with a boundary value condition, then uses the finite difference method to discretize the equation and analyzes the positive definiteness, diagonal dominance and symmetry of the discretization matrix. Finally, the paper uses fixed point methods and Krylov subspace methods to solve the linear system and compares the convergence speeds of these two classes of methods.

Keywords

Finite Difference Method, Convection-Diffusion Equation, Discretization Matrix, Iterative Method, Convergence Speed

1. Introduction

For a linear system Ax = b, the two main classes of iterative methods are the stationary iterative methods (fixed point methods) [1]-[3] and the more general Krylov subspace methods [4]-[13]. When these two classical kinds of iterative methods suffer from slow convergence for problems that arise from typical applications such as fluid dynamics or electronic device simulation, preconditioning [14]-[16] is a key ingredient for the success of the convergence process. The goal of this paper is to find an efficient iterative method, combined with preconditioning, for the solution of the linear system Ax = b related to the following two-dimensional boundary value problem (BVP) [17]:
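As a toy illustration of the two classes (not taken from the paper; the 1-D model matrix and tolerances below are hypothetical choices), one can solve a small system both with a Jacobi fixed point iteration and with SciPy's GMRES:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Hypothetical 1-D model problem: tridiag(-1, 4, -1), exact solution = ones.
n = 50
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = A @ np.ones(n)

# Fixed point (stationary) method: Jacobi, x <- x + D^{-1} (b - A x).
d = A.diagonal()
x = np.zeros(n)
for k in range(1, 201):
    r = b - A @ x
    x = x + r / d
    if np.linalg.norm(b - A @ x) <= 1e-8 * np.linalg.norm(b):
        break

# Krylov subspace method: GMRES with SciPy's default relative tolerance.
x_kr, info = gmres(A, b)

print("Jacobi iterations:", k, " GMRES exit flag:", info)
```

For this well-conditioned, diagonally dominant matrix both approaches converge quickly; the paper's point is that the gap between the two classes widens on harder convection-diffusion problems.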

\[
-\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) + \gamma\left(\frac{\partial u}{\partial x} + \frac{\partial u}{\partial y}\right) = f(x, y), \quad (x, y) \in \Omega = (0, 1)^2, \qquad u = 0 \ \text{on} \ \partial\Omega. \tag{1}
\]

For the parameter γ = 0, 16, 64 we use the finite difference method to discretize Equation (1). Take n + 2 points in both the x- and the y-direction and number the related degrees of freedom first left-right and then bottom-top. We take n = 32, so the grid size is Δx = Δy = h = 1/(n + 1) = 1/33.

For γ = 0 the convection term vanishes; applying central differences to the diffusion term (and multiplying by h²) gives the discretization matrix of Equation (1)
\[
A_1 = \begin{pmatrix} B & -I & & \\ -I & B & -I & \\ & \ddots & \ddots & \ddots \\ & & -I & B \end{pmatrix}, \qquad
B = \begin{pmatrix} 4 & -1 & & \\ -1 & 4 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 4 \end{pmatrix}. \tag{2}
\]

For γ = 16 we use central differences for the diffusion term and a central difference scheme for the convection term as well:
\[
\frac{-u_{i-1,j} + 2u_{ij} - u_{i+1,j}}{h^2} + \frac{-u_{i,j-1} + 2u_{ij} - u_{i,j+1}}{h^2} + \gamma\left(\frac{u_{i+1,j} - u_{i-1,j}}{2h} + \frac{u_{i,j+1} - u_{i,j-1}}{2h}\right) = f(x_i, y_j), \tag{3}
\]
and we obtain the discretization matrix of Equation (1)
\[
A_2 = \begin{pmatrix} B & -(1-c)I & & \\ -(1+c)I & B & -(1-c)I & \\ & \ddots & \ddots & \ddots \\ & & -(1+c)I & B \end{pmatrix}, \tag{4}
\]
\[
B = \begin{pmatrix} 4 & -(1-c) & & \\ -(1+c) & 4 & -(1-c) & \\ & \ddots & \ddots & \ddots \\ & & -(1+c) & 4 \end{pmatrix}, \qquad c = \frac{\gamma h}{2} = \frac{8}{33}. \tag{5}
\]

For γ = 64 we use central differences for the diffusion term and an upwind difference scheme for the convection term:
\[
\frac{-u_{i-1,j} + 2u_{ij} - u_{i+1,j}}{h^2} + \frac{-u_{i,j-1} + 2u_{ij} - u_{i,j+1}}{h^2} + \gamma\left(\frac{u_{ij} - u_{i-1,j}}{h} + \frac{u_{ij} - u_{i,j-1}}{h}\right) = f(x_i, y_j), \tag{6}
\]
and we obtain the discretization matrix of Equation (1)
\[
A_3 = \begin{pmatrix} B & -I & & \\ -(1+c)I & B & -I & \\ & \ddots & \ddots & \ddots \\ & & -(1+c)I & B \end{pmatrix}, \qquad
B = \begin{pmatrix} 4+2c & -1 & & \\ -(1+c) & 4+2c & -1 & \\ & \ddots & \ddots & \ddots \\ & & -(1+c) & 4+2c \end{pmatrix}, \qquad c = \gamma h = \frac{64}{33}.
\]
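The block-tridiagonal matrices above can be assembled from 1-D stencils with Kronecker products. The sketch below uses Python/SciPy rather than the paper's MATLAB, and assumes the sub-/super-diagonal sign convention written above:

```python
import scipy.sparse as sp

n = 32
h = 1.0 / (n + 1)          # h = 1/33, as in the paper

def cd_matrix(gamma, scheme="central"):
    """n^2 x n^2 convection-diffusion matrix (already multiplied by h^2)."""
    if scheme == "central":
        c = gamma * h / 2.0                       # c = 8/33 for gamma = 16
        T = sp.diags([-(1 + c), 2.0, -(1 - c)], [-1, 0, 1], shape=(n, n))
    else:                                         # first-order upwind
        c = gamma * h                             # c = 64/33 for gamma = 64
        T = sp.diags([-(1 + c), 2.0 + c, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    # 2-D operator: sum of the 1-D operator acting in each direction
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

A1 = cd_matrix(0.0)                 # symmetric, diagonal = 4
A2 = cd_matrix(16.0)                # central convection
A3 = cd_matrix(64.0, "upwind")      # upwind convection, diagonal = 4 + 2c
```

The Kronecker form makes the properties of the next section easy to check by machine: A1 is symmetric while A2 and A3 are not.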

2. Properties of the Discretization Matrix

In this section we first compute the eigenvalues of the discretization matrices A_1, A_2 and A_3 of Equation (1), and then analyze their positive definiteness, diagonal dominance and symmetry.

2.1. Eigenvalues

Using MATLAB, the eigenvalues of the discretization matrices A_1, A_2 and A_3 of Equation (1) are the following:
\[
\begin{aligned}
\lambda_{pq}(A_1) &= 4 - 2\left(\cos\frac{p\pi}{n+1} + \cos\frac{q\pi}{n+1}\right),\\
\lambda_{pq}(A_2) &= 4 - 2\sqrt{1-c^2}\left(\cos\frac{p\pi}{n+1} + \cos\frac{q\pi}{n+1}\right),\\
\lambda_{pq}(A_3) &= 4 + 2c - 2\sqrt{1+c}\left(\cos\frac{p\pi}{n+1} + \cos\frac{q\pi}{n+1}\right),
\end{aligned}
\qquad p, q = 1, 2, \ldots, n. \tag{7}
\]

2.2. Positive Definiteness

A matrix A_i is positive definite if and only if its symmetric part
\[
A_i^S = \frac{A_i + A_i^T}{2}
\]
is positive definite. The eigenvalues of A_i^S (i = 1, 2, 3) are the following:
\[
\begin{aligned}
\lambda_{pq}(A_1^S) = \lambda_{pq}(A_2^S) &= 4 - 2\left(\cos\frac{p\pi}{n+1} + \cos\frac{q\pi}{n+1}\right) > 0,\\
\lambda_{pq}(A_3^S) &= 4 + 2c - (2+c)\left(\cos\frac{p\pi}{n+1} + \cos\frac{q\pi}{n+1}\right) > 0,
\end{aligned}
\qquad p, q = 1, 2, \ldots, n, \tag{8}
\]
where the first identity holds because the central convection contribution is skew-symmetric, so A_2^S = A_1. Therefore A_i^S (i = 1, 2, 3) is positive definite, which means that the discretization matrices of Equation (1) are positive definite.
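These closed-form eigenvalues can be sanity-checked numerically. The sketch below (Python/NumPy, using the Kronecker construction of A_2 and the formula for its eigenvalues as reconstructed above) compares the formula against numerically computed eigenvalues and tests the symmetric part:

```python
import numpy as np

n = 32
h = 1.0 / (n + 1)
gamma = 16.0
c = gamma * h / 2.0                       # c = 8/33 (central scheme)

# Dense 1-D operator and the 2-D matrix A2 = kron(I, T) + kron(T, I).
T = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -(1.0 - c)), 1)
     + np.diag(np.full(n - 1, -(1.0 + c)), -1))
A2 = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))

# Eigenvalues from the closed-form expression above.
k = np.arange(1, n + 1)
mu = 2.0 - 2.0 * np.sqrt(1.0 - c * c) * np.cos(k * np.pi * h)   # 1-D factor
lam = np.sort((mu[:, None] + mu[None, :]).ravel())              # pairwise sums

lam_num = np.sort(np.linalg.eigvals(A2).real)
err = np.max(np.abs(lam - lam_num))

# Positive definiteness via the symmetric part (A2 + A2^T)/2.
lam_sym_min = np.linalg.eigvalsh((A2 + A2.T) / 2.0).min()
print(err, lam_sym_min)
```

The pairwise-sum structure works because kron(I, T) and kron(T, I) commute, so the 2-D eigenvalues are exactly sums of the 1-D ones.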

2.3. Diagonal Dominance

For each row j of the discretization matrices A_1, A_2 and A_3 of Equation (1), the off-diagonal row sums satisfy
\[
\begin{aligned}
\sum_{i \ne j} |a_{ij}(A_1)| &\le 1 + 1 + 1 + 1 = 4 = a_{jj}(A_1),\\
\sum_{i \ne j} |a_{ij}(A_2)| &\le (1+c) + (1-c) + (1+c) + (1-c) = 4 = a_{jj}(A_2),\\
\sum_{i \ne j} |a_{ij}(A_3)| &\le (1+c) + 1 + (1+c) + 1 = 4 + 2c = a_{jj}(A_3),
\end{aligned}
\]
with strict inequality in the rows corresponding to grid points next to the boundary. Therefore the discretization matrices A_1, A_2 and A_3 of Equation (1) are diagonally dominant.

2.4. Symmetry

It is easy to see that only A_1 is symmetric.

3. Stationary Iteration Methods and Krylov Subspace Methods

The goal of this section is to find an efficient iterative method for the solution of the linear systems A_i x = b, i = 1, 2, 3. Since the A_i (i = 1, 2, 3) are positive definite and diagonally dominant, we may use both fixed point methods and Krylov subspace methods. In this section we first determine a suitable convergence tolerance and then use numerical experiments to compare the convergence speeds of the various iteration methods.

3.1. Convergence Tolerance

Without loss of generality, take b = A_i x^* with x^* = (1, 1, \ldots, 1)^T, and assume the computed iterate x_i satisfies \|A_i x_i - b\| / \|b\| \le \delta. Then
\[
\|x_i - x^*\| = \|A_i^{-1}(A_i x_i - b)\| \le \|A_i^{-1}\| \, \|A_i x_i - b\| \le \delta \, \|b\| \, \|A_i^{-1}\|.
\]
In order to achieve \|x_i - x^*\| \le 10^{-3} in the numerical experiments, we therefore set the convergence tolerance to
\[
\delta = \frac{10^{-3}}{\|b\| \, \|A_i^{-1}\|}.
\]

3.2. Numerical Experiments

Computational results using fixed point methods such as Jacobi, Gauss-Seidel and SOR, and projection methods such as PCG, BICG, BICGSTAB, CGS, GMRES and QMR, are shown in Figures 1-21 for all three values γ = 0, 16, 64. The projection methods are performed with different preconditioners: Jacobi preconditioning, luinc (incomplete LU) and cholinc (incomplete Cholesky). Tables 1 and 2 containing the relevant computational details are also given below.
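A rough Python analogue of these experiments can be sketched with SciPy: `spilu` stands in for MATLAB's luinc, diagonal scaling stands in for Jacobi preconditioning, and default solver tolerances are used, so the iteration counts will not match the paper's tables:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 32
h = 1.0 / (n + 1)
c = 64.0 * h                                  # upwind scheme, gamma = 64

T = sp.diags([-(1 + c), 2.0 + c, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()
b = A @ np.ones(n * n)                        # exact solution: all ones

iters = {}
def make_counter(name):
    iters[name] = 0
    def cb(_xk):                              # called once per iteration
        iters[name] += 1
    return cb

# BiCGSTAB with Jacobi (diagonal) preconditioning. Note the diagonal of A
# is constant here, so this is only a scaling and gives no real speedup.
M_jac = sp.diags(1.0 / A.diagonal())
x_j, flag_j = spla.bicgstab(A, b, M=M_jac, callback=make_counter("jacobi"))

# BiCGSTAB with incomplete-LU preconditioning.
ilu = spla.spilu(A, drop_tol=1e-3)
M_ilu = spla.LinearOperator(A.shape, ilu.solve)
x_i, flag_i = spla.bicgstab(A, b, M=M_ilu, callback=make_counter("ilu"))

print(iters)   # ILU typically needs far fewer iterations than Jacobi here
```

This mirrors the qualitative finding of the tables below: an incomplete factorization preconditioner beats simple diagonal preconditioning by a wide margin.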
Figure 1. Comparison of convergence speed using fixed point methods when γ = 0.
Figure 2. Comparison of convergence speed using fixed point methods when γ = 16.
Figure 3. Comparison of convergence speed using fixed point methods when γ = 64.
Figure 4. PCG with different preconditionings when γ = 0.
Figure 5. BICG with different preconditionings when γ = 0.
Figure 6. BICGSTAB with different preconditionings when γ = 0.
Figure 7. CGS with different preconditionings when γ = 0.
Figure 8. GMRES with different preconditionings when γ = 0.
Figure 9. QMR with different preconditionings when γ = 0.
Figure 10. PCG with different preconditionings when γ = 16.
Figure 11. BICG with different preconditionings when γ = 16.
Figure 12. BICGSTAB with different preconditionings when γ = 16.
Figure 13. CGS with different preconditionings when γ = 16.
Figure 14. GMRES with different preconditionings when γ = 16.
Figure 15. QMR with different preconditionings when γ = 16.
Figure 16. PCG with different preconditionings when γ = 64.
Figure 17. BICG with different preconditionings when γ = 64.
Figure 18. BICGSTAB with different preconditionings when γ = 64.
Figure 19. CGS with different preconditionings when γ = 64.
Figure 20. GMRES with different preconditionings when γ = 64.
Figure 21. QMR with different preconditionings when γ = 64.

Table 1. Computational details of different projection methods with different preconditioning.

Gamma = 0 (tolerance delta = .55E-6):
  Method    Preconditioning  Iterations  Relres
  PCG       jacobi           5           .3E-6
  PCG       luinc            3           .38E-6
  PCG       cholinc          5           9.5E-7
  BICG      jacobi           5           .3E-6
  BICG      luinc            3           .38E-6
  BICG      cholinc          5           9.5E-7
  BICGSTAB  jacobi           4.5         .4E-6
  BICGSTAB  luinc            5.5         9.36E-7
  BICGSTAB  cholinc          6           .54E-6
  CGS       jacobi           43          .E-6
  CGS       luinc            6           8.7E-7
  CGS       cholinc          6           .8E-6
  GMRES     jacobi           49          .3E-6
  GMRES     luinc            3           9.E-7
  GMRES     cholinc          4           .E-6
  QMR       jacobi           5           .5E-6
  QMR       luinc            3           .E-6
  QMR       cholinc          4           .37E-6

Gamma = 16 (tolerance delta = .4E-5):
  Method    Preconditioning  Iterations  Relres
  PCG       jacobi           -           4.48367757 (no convergence)
  PCG       luinc            -           .34539 (no convergence)
  PCG       cholinc          -           .3856367 (no convergence)
  BICG      jacobi           89          7.33E-6
  BICG      luinc            5           .E-5
  BICG      cholinc          35          4.37E-6
  BICGSTAB  jacobi           56.5        .3E-5
  BICGSTAB  luinc            6.5         6.3E-6
  BICGSTAB  cholinc          3.5         3.98E-6
  CGS       jacobi           6           6.6E-6
  CGS       luinc            -           .E-8
  CGS       cholinc          7           5.9E-6
  GMRES     jacobi           68          9.7E-6
  GMRES     luinc            -           7.8E-6
  GMRES     cholinc          3           .E-5
  QMR       jacobi           89          6.43E-6
  QMR       luinc            -           5.64E-6
  QMR       cholinc          33          9.57E-6

Gamma = 64 (tolerance delta = 4.3E-5):
  Method    Preconditioning  Iterations  Relres
  PCG       jacobi           -           .797837 (no convergence)
  PCG       luinc            -           .54757 (no convergence)
  PCG       cholinc          -           .666555 (no convergence)
  BICG      jacobi           7           3.5E-5
  BICG      luinc            -           .E-5
  BICG      cholinc          44          3.47E-5
  BICGSTAB  jacobi           59.5        .E-5
  BICGSTAB  luinc            6.5         5.7E-6
  BICGSTAB  cholinc          7.5         .6E-5
  CGS       jacobi           87          .3357 (no convergence)
  CGS       luinc            9           6.36E-7
  CGS       cholinc          35          3.4E-6
  GMRES     jacobi           63          .66E-5
  GMRES     luinc            9           .5E-5
  GMRES     cholinc          37          3.E-5
  QMR       jacobi           69          3.56E-5
  QMR       luinc            -           .5E-5
  QMR       cholinc          45          3.3E-5

Table 2. Computational details of different fixed point methods.

  Fixed point method       Iterations (Gamma = 0)  Iterations (Gamma = 16)  Iterations (Gamma = 64)
  Jacobi                   46                      459                      9
  Gauss-Seidel             4                       3                        97
  SOR                      37                      68                       54
  Backward Gauss-Seidel    4                       3                        97
  Backward SOR             37                      68                       54
  SSOR                     37                      4                        -

4. Conclusions

From the figures (Figures 1-21) and the tables (Table 1 and Table 2), we obtain the following conclusions:
- The convergence speeds of SOR, backward SOR and SSOR are higher than those of Gauss-Seidel (GS) and backward GS, while GS and backward GS converge faster than Jacobi.
- If the matrix A is symmetric (γ = 0), SOR, backward SOR and SSOR converge equally fast; otherwise (γ = 16, 64), SSOR is faster than backward SOR and SOR.
- SOR and backward SOR converge equally fast, as do GS and backward GS.
- From Table 2, for all fixed point methods both the iteration count and the run time are smaller in the case γ = 64 than in the case γ = 16, and smaller in the case γ = 16 than in the case γ = 0.
- The upwind difference method is better suited to convection-dominated problems than the central difference method.
- For the six projection methods PCG, BICG, BICGSTAB, CGS, GMRES and QMR, convergence under luinc preconditioning is faster than under cholinc preconditioning, which in turn is faster than under Jacobi preconditioning.
- All six projection methods under Jacobi, luinc and cholinc preconditioning converge when γ = 0; however, for γ = 16, 64 the PCG method does not converge, and the CGS method under Jacobi preconditioning does not converge when γ = 64.

Acknowledgements

I thank the editor and the referee for their comments. I would like to express deep gratitude to my supervisor, Prof. Dr. Mark A. Peletier, whose guidance and support were crucial for the successful completion of this paper. This work was completed with the financial support of the Foundation of Guangdong Educational Committee (4KQNCX6, 4KQNCX6).

References

[1] Saad, Y. (2003) Iterative Methods for Sparse Linear Systems. 2nd Edition, SIAM, Philadelphia.
[2] Varga, R.S. (2009) Matrix Iterative Analysis. 2nd Edition, Springer, Heidelberg.
[3] Young, D.M. (2014) Iterative Solution of Large Linear Systems. Elsevier, Amsterdam.
[4] Lanczos, C. (1952) Solution of Systems of Linear Equations by Minimized Iterations. Journal of Research of the National Bureau of Standards, 49, 33-53.
[5] Hestenes, M.R. and Stiefel, E. (1952) Methods of Conjugate Gradients for Solving Linear Systems. Journal of Research of the National Bureau of Standards, 49, 409-436.
[6] Walker, H.F. (1988) Implementation of the GMRES Method Using Householder Transformations. SIAM Journal on Scientific and Statistical Computing, 9, 152-163.
[7] Sonneveld, P. (1989) CGS, a Fast Lanczos-Type Solver for Nonsymmetric Linear Systems. SIAM Journal on Scientific and Statistical Computing, 10, 36-52.
[8] Freund, R.W. and Nachtigal, N.M. (1991) QMR: A Quasi-Minimal Residual Method for Non-Hermitian Linear Systems. Numerische Mathematik, 60, 315-339.
[9] van der Vorst, H.A. (1992) Bi-CGSTAB: A Fast and Smoothly Converging Variant of Bi-CG for the Solution of Nonsymmetric Linear Systems. SIAM Journal on Scientific and Statistical Computing, 13, 631-644.
[10] Brezinski, C., Zaglia, M.R. and Sadok, H. (1992) A Breakdown-Free Lanczos Type Algorithm for Solving Linear Systems. Numerische Mathematik, 63, 29-38.
[11] Chan, T.F., Gallopoulos, E., Simoncini, V., Szeto, T. and Tong, C.H. (1994) A Quasi-Minimal Residual Variant of the Bi-CGSTAB Algorithm for Nonsymmetric Systems. SIAM Journal on Scientific Computing, 15, 338-347.
[12] Gutknecht, M.H. (1992) A Completed Theory of the Unsymmetric Lanczos Process and Related Algorithms, Part I. SIAM Journal on Matrix Analysis and Applications, 13, 594-639.
[13] Gutknecht, M.H. (1994) A Completed Theory of the Unsymmetric Lanczos Process and Related Algorithms, Part II. SIAM Journal on Matrix Analysis and Applications, 15, 15-58.
[14] Eisenstat, S.C. (1981) Efficient Implementation of a Class of Preconditioned Conjugate Gradient Methods. SIAM Journal on Scientific and Statistical Computing, 2, 1-4.
[15] Meijerink, J.A. and van der Vorst, H.A. (1977) An Iterative Solution Method for Linear Systems of Which the Coefficient Matrix Is a Symmetric M-Matrix. Mathematics of Computation, 31, 148-162.
[16] Ortega, J.M. (1988) Efficient Implementations of Certain Iterative Methods. SIAM Journal on Scientific and Statistical Computing, 9, 882-891.
[17] Fowler, A.C. (1997) Mathematical Models in the Applied Sciences. Cambridge University Press, Cambridge.