Iterative Solvers in the Finite Element Solution of Transient Heat Conduction


Mile R. Vujičić, PhD student
Steve G.R. Brown, Senior Lecturer
Materials Research Centre, School of Engineering, University of Wales Swansea, Swansea SA2 8PP, Wales, UK

In this paper three-dimensional transient heat conduction in homogeneous materials is studied using different iterative solvers based on the conjugate gradient method. The methods used are the preconditioned conjugate gradient, least squares conjugate gradient, conjugate gradient squared, preconditioned conjugate gradient squared, bi-conjugate gradient, preconditioned bi-conjugate gradient, bi-conjugate gradient stabilized and preconditioned bi-conjugate gradient stabilized methods, as well as a Gauss elimination method with incomplete Cholesky factorisation for comparison. Time dependence is solved using both finite difference and finite element schemes. The finite difference schemes include the backward and forward Euler methods and the θ (Crank-Nicolson) method. The finite element schemes include the weighted residual method and the least squares method. To analyse the dependence between the numerical scheme and the obtained temperatures, a brick divided into different numbers of 8-noded or 20-noded hexahedral elements with Dirichlet boundary conditions is used.

Keywords: transient heat conduction, time stepping, iterative solvers, residual, inner iterations.

1. INTRODUCTION

The analysis of heat conduction (steady and transient) is well established and there are several different approaches or techniques that can be used (finite difference, boundary element, finite element and finite volume methods). The finite element method has been developed in parallel with the introduction of powerful and fast computers during the last forty years. It started in the field of structural analysis, mainly for stress calculations, between 1940 and 1960.
Later, the concept of dividing a structure into small substructures of various shapes that are re-assembled after each substructure has been analysed became a very useful tool for heat transfer calculations, fluid mechanics problems, electromagnetic calculations, etc. Discretization of the relevant partial differential equation gives a set of linear or nonlinear algebraic equations:

Ax = b (1.1)

Generally speaking, there are several families of methods that can be used for solving systems of algebraic equations, such as direct (Gauss elimination), multigrid and iterative methods. As problem sizes grow, the storage requirement becomes a burden even on a modern computer. For this reason iterative methods, which require less memory, have been developed as alternative solution strategies.

Received: September 2004, accepted: November 2004. Correspondence to: Mile Vujičić, Materials Research Centre, School of Engineering, University of Wales Swansea, Singleton Park, Swansea SA2 8PP, Wales, UK. E-mail: 29397@swansea.ac.uk

2. TRANSIENT HEAT CONDUCTION AND THE FINITE ELEMENT METHOD

The principle of conservation of heat energy over a fixed volume V of a medium bounded by a closed surface S can be used to derive the equation that governs heat conduction. Usually heat transfer is treated in a domain Ω bounded by a closed surface Γ. The classical statement of this can be written as:

∂T/∂t = (1/ρc_p) [div(k grad T) + Q] in Ω (2.1)

T = f(x) for x on Γ_1 (2.2)

q(x) = −(k grad T)·n = −k ∂T/∂n for x on Γ_2 (2.3)

where k is the thermal conductivity and Q is a source or sink of heat energy. Equation (2.1) is the well-known Fourier law of conduction. Applying the finite element technique, in which the Galerkin method is used for the weighting functions, we obtain the following matrix equation system:

M Ṫ + K T = f (2.4)

FME Transactions (2004) 32, 61-68. Faculty of Mechanical Engineering, Belgrade. All rights reserved.
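To make the time discretization of the next section concrete, the following sketch advances the semi-discrete system (2.4) with the θ-method on a small illustrative problem. The matrices, time step and function names here are our own, not taken from the paper:

```python
import numpy as np

def theta_step(M, K, f, T, dt, theta):
    """One step of the theta-method for M dT/dt + K T = f.

    theta = 0   -> explicit (forward) Euler
    theta = 1   -> implicit (backward) Euler
    theta = 0.5 -> Crank-Nicolson
    """
    A = M + theta * dt * K
    b = (M - (1.0 - theta) * dt * K) @ T + dt * f
    return np.linalg.solve(A, b)

# Tiny 1-D conduction analogue: lumped mass, standard stiffness, zero source.
n = 5
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.zeros(n)

T = np.ones(n)                    # uniform initial temperature
for _ in range(400):
    T = theta_step(M, K, f, T, dt=0.1, theta=0.5)

# With zero source and a dissipative K, the solution decays toward zero.
print(T.max() < 1e-3)
```

Swapping `theta` between 0, 1 and 0.5 reproduces the three schemes (2.5)–(2.7) discussed below.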

where M is the capacitance (mass) matrix, K is the conductance (stiffness) matrix and Ṫ is the temperature differentiated with respect to time.

2.1. Finite difference time stepping

There are several finite difference time stepping schemes; here the explicit (forward Euler) method, the implicit (backward Euler) method and the Crank-Nicolson form of the θ-method are used.

Explicit (forward Euler) method

This is a first-order method and can be derived from a Taylor series expansion. Applying it to equation (2.4) we obtain:

M T^(n+1) = (M − Δt K) T^n + Δt f^n (2.5)

Implicit (backward Euler) method

The implicit method requires more storage than the explicit method, but it offers the potential advantage of using a large time step, which may result in a more efficient procedure. The matrix equation system (2.4) for this method becomes:

(M + Δt K) T^(n+1) = M T^n + Δt f^n (2.6)

θ (Crank-Nicolson) method

The Crank-Nicolson method is an implicit method with second-order accuracy and is a special case of the θ-method. The matrix equation system (2.4) becomes:

(M + θ Δt K) T^(n+1) = (M − (1−θ) Δt K) T^n + Δt f^n (2.7)

Different values of θ define different methods. For θ = 0 equation (2.7) is identical to equation (2.5); for θ = 1 it is identical to equation (2.6). The Crank-Nicolson method is defined by θ = 0.5.

2.2. Finite element time stepping

Finite element time stepping uses the same technique for the time discretization as is used for the discretization of the domain [6]. Two different techniques are described below: a) the weighted residual method; and b) the least squares method.

Weighted residual method

After time discretization of equation (2.4) using the weighted residual method we have:

(M/Δt + θ K) T^(n+1) = (M/Δt − (1−θ) K) T^n + f̂ (2.8)

where

θ = ∫ W_j ξ dξ / ∫ W_j dξ and f̂ = ∫ W_j f dξ / ∫ W_j dξ.

The difference between the finite element and finite difference approaches is that θ in the finite difference algorithm is chosen between 0 and 1, while in the finite element algorithm it depends on the weighting function W_j. This means that all the methods mentioned earlier are just special cases of the weighted residual method.

Least squares method

The main idea of this method is that the error with respect to T^(n+1) is minimized to produce a least squares algorithm:

[MᵀM/Δt² + (KᵀM + MᵀK)/(2Δt) + KᵀK/3] T^(n+1) = [MᵀM/Δt² + (KᵀM − MᵀK)/(2Δt) − KᵀK/6] T^n + (Mᵀ/Δt) ∫ f dξ + Kᵀ ∫ ξ f dξ (2.9)

3. ITERATIVE SOLVERS

3.1. Conjugate gradient method (CG)

This is an old and well-known non-stationary method discovered independently by Hestenes and Stiefel in the early 1950s [1]. It is the most popular iterative method for solving large symmetric positive definite systems of linear equations; in fact, it can only be used for such systems. Historically, the conjugate gradient method came after the method of steepest descent and the method of conjugate directions. In essence, the CG method is a conjugate direction method in which the search directions are constructed by conjugation of the residuals. The conjugate gradient strategy is presented in the following pseudo code:

1. initial guess x^(0) (for example x^(0) = 0)
2. r^(0) = b − A x^(0)
3. p^(0) = r^(0)
do for i = 1, 2, ..., iteration limit
    q = A p^(i−1)
    α = (r^(i−1), r^(i−1)) / (p^(i−1), q)
    x^(i) = x^(i−1) + α p^(i−1)
    r^(i) = r^(i−1) − α q
    β = (r^(i), r^(i)) / (r^(i−1), r^(i−1))
    p^(i) = r^(i) + β p^(i−1)
    check convergence, continue if necessary

3.2. Preconditioned conjugate gradient method (PCG)

Increasing the number of elements obtained after a discretization causes the standard CG method to

converge more slowly. This can be overcome by replacing the problem whose solution we seek by another one that has the same solution but converges much faster. There are several different techniques for this type of preconditioning, three of which are mentioned here: splitting, left preconditioning and right preconditioning. The splitting technique includes Jacobi, Gauss-Seidel, successive over-relaxation, etc. The pseudo code for the PCG method incorporating the Jacobi method for preconditioning is presented below:

1. initial guess x^(0) (for example x^(0) = 0)
2. r^(0) = b − A x^(0)
3. d^(0) = M⁻¹ r^(0), p^(0) = d^(0)
do for i = 1, 2, ..., iteration limit
    q = A p^(i−1)
    α = (d^(i−1), r^(i−1)) / (p^(i−1), q)
    x^(i) = x^(i−1) + α p^(i−1)
    r^(i) = r^(i−1) − α q
    d^(i) = M⁻¹ r^(i)
    β = (d^(i), r^(i)) / (d^(i−1), r^(i−1))
    p^(i) = d^(i) + β p^(i−1)
    check convergence, continue if necessary

The matrix M is the preconditioning matrix.

3.3. Least squares conjugate gradient method (LSCG)

If the matrix A is non-symmetric we cannot apply the CG method directly. Multiplying both sides of equation (1.1) by Aᵀ yields:

Aᵀ A x = Aᵀ b (3.1)

The new matrix AᵀA is always symmetric. The pseudo code for LSCG is:

1. initial guess x^(0) (for example x^(0) = 0)
2. r^(0) = b − A x^(0)
3. s^(0) = Aᵀ r^(0), p^(0) = s^(0)
do for i = 1, 2, ..., iteration limit
    q = A p^(i−1)
    α = (s^(i−1), s^(i−1)) / (q, q)
    x^(i) = x^(i−1) + α p^(i−1)
    r^(i) = r^(i−1) − α q
    s^(i) = Aᵀ r^(i)
    β = (s^(i), s^(i)) / (s^(i−1), s^(i−1))
    p^(i) = s^(i) + β p^(i−1)
    check convergence, continue if necessary

3.4. Bi-conjugate gradient method (BCG)

Instead of solving the system of equations (1.1) alone, this method solves the pair of systems

A x = b, Aᵀ x̃ = b̃ (3.2)

where two sequences of residuals are updated,

r^(i) = r^(i−1) − α A p^(i), r̃^(i) = r̃^(i−1) − α Aᵀ p̃^(i) (3.3)

and two sequences of search directions,

p^(i) = r^(i−1) + β p^(i−1), p̃^(i) = r̃^(i−1) + β p̃^(i−1) (3.4)

The pseudo code is given below:

1. initial guess x^(0) (for example x^(0) = 0)
2. r^(0) = b − A x^(0)
3. choose r̃^(0) (for example r̃^(0) = r^(0))
4. ρ_0 = 1, p^(0) = p̃^(0) = 0
do for i = 1, 2, ..., iteration limit
    ρ_i = (r̃^(i−1), r^(i−1))
    β = ρ_i / ρ_(i−1)
    p^(i) = r^(i−1) + β p^(i−1)
    p̃^(i) = r̃^(i−1) + β p̃^(i−1)
    σ = (p̃^(i), A p^(i))
    α = ρ_i / σ
    x^(i) = x^(i−1) + α p^(i)
    r^(i) = r^(i−1) − α A p^(i)
    r̃^(i) = r̃^(i−1) − α Aᵀ p̃^(i)
    check convergence, continue if necessary
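The CG pseudo code of Section 3.1 translates almost line for line into the following sketch; the test matrix is an illustrative 1-D Laplacian chosen by us, not the paper's system:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Plain CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x            # initial residual
    p = r.copy()             # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        q = A @ p
        alpha = rs_old / (p @ q)
        x += alpha * p
        r -= alpha * q
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# SPD test matrix: tridiagonal 1-D Laplacian
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b) < 1e-8)
```

In exact arithmetic CG terminates in at most n iterations; the convergence check here uses the 2-norm of the recursively updated residual.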

3.5. Preconditioned bi-conjugate gradient method (PBCG)

The difference between this method and the BCG method is two additional steps that involve the preconditioner:

1. initial guess x^(0) (for example x^(0) = 0)
2. r^(0) = b − A x^(0)
3. choose r̃^(0) (for example r̃^(0) = r^(0))
4. ρ_0 = 1, p^(0) = p̃^(0) = 0
do for i = 1, 2, ..., iteration limit
    solve M z^(i−1) = r^(i−1)
    solve Mᵀ z̃^(i−1) = r̃^(i−1)
    ρ_i = (z^(i−1), r̃^(i−1))
    β = ρ_i / ρ_(i−1)
    p^(i) = z^(i−1) + β p^(i−1)
    p̃^(i) = z̃^(i−1) + β p̃^(i−1)
    σ = (p̃^(i), A p^(i))
    α = ρ_i / σ
    x^(i) = x^(i−1) + α p^(i)
    r^(i) = r^(i−1) − α A p^(i)
    r̃^(i) = r̃^(i−1) − α Aᵀ p̃^(i)
    check convergence, continue if necessary

3.6. Conjugate gradient squared method (CGS)

Developed by Sonneveld [3], this method is derived from the BCG method in a way that avoids using the transpose of A, thereby obtaining faster convergence for roughly the same computational cost. The pseudo code is given below:

1. initial guess x^(0) (for example x^(0) = 0)
2. r^(0) = b − A x^(0)
3. choose r̃ (for example r̃ = r^(0))
4. q^(0) = p^(0) = 0
5. ρ_0 = 1
do for i = 1, 2, ..., iteration limit
    ρ_i = (r̃, r^(i−1))
    β = ρ_i / ρ_(i−1)
    z = r^(i−1) + β q^(i−1)
    p^(i) = z + β (q^(i−1) + β p^(i−1))
    v = A p^(i)
    σ = (r̃, v)
    α = ρ_i / σ
    q^(i) = z − α v
    r^(i) = r^(i−1) − α A (z + q^(i))
    x^(i) = x^(i−1) + α (z + q^(i))
    check convergence, continue if necessary

3.7. Preconditioned conjugate gradient squared method (PCGS)

The pseudo code is given below without any theoretical explanation:

1. initial guess x^(0) (for example x^(0) = 0)
2. r^(0) = b − A x^(0)
3. choose r̃ (for example r̃ = r^(0))
4. q^(0) = p^(0) = 0
5. ρ_0 = 1
do for i = 1, 2, ..., iteration limit
    ρ_i = (r̃, r^(i−1))
    β = ρ_i / ρ_(i−1)
    z = r^(i−1) + β q^(i−1)
    p^(i) = z + β (q^(i−1) + β p^(i−1))
    solve M p̂ = p^(i)
    v̂ = A p̂
    σ = (r̃, v̂)
    α = ρ_i / σ
    q^(i) = z − α v̂
    solve M ẑ = z + q^(i)
    x^(i) = x^(i−1) + α ẑ
    q̂ = A ẑ
    r^(i) = r^(i−1) − α q̂
    check convergence, continue if necessary
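The effect of the simplest splitting preconditioner, the Jacobi (diagonal) method of Section 3.2, can be sketched as below. The badly scaled test system is our own illustration; with M = diag(A), the solve M d = r reduces to a componentwise division:

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=1000):
    """CG with diagonal (Jacobi) preconditioning M = diag(A)."""
    Minv = 1.0 / np.diag(A)        # applying M^-1 is just elementwise scaling
    x = np.zeros_like(b)
    r = b - A @ x
    d = Minv * r                   # preconditioned residual
    p = d.copy()
    rd_old = r @ d
    for _ in range(max_iter):
        q = A @ p
        alpha = rd_old / (p @ q)
        x += alpha * p
        r -= alpha * q
        if np.linalg.norm(r) < tol:
            break
        d = Minv * r
        rd_new = r @ d
        p = d + (rd_new / rd_old) * p
        rd_old = rd_new
    return x

# Badly scaled SPD system, where Jacobi scaling helps most
n = 40
A = np.diag(np.linspace(1.0, 100.0, n)) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))
b = np.ones(n)
x = pcg_jacobi(A, b)
print(np.linalg.norm(A @ x - b) < 1e-8)
```

More powerful preconditioners (incomplete Cholesky, SSOR) only change how the solve M d = r is carried out; the outer iteration is unchanged.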

3.8. Bi-conjugate gradient stabilized method (BCGSTAB)

This method was developed by Van der Vorst [4] and is a modification of the CGS method. The pseudo code is given below:

1. initial guess x^(0) (for example x^(0) = 0)
2. r^(0) = b − A x^(0)
3. choose r̃ (for example r̃ = r^(0))
4. p^(0) = v^(0) = 0
5. ρ_0 = α = ω = 1
do for i = 1, 2, ..., iteration limit
    ρ_i = (r̃, r^(i−1))
    β = (ρ_i / ρ_(i−1)) (α / ω)
    p^(i) = r^(i−1) + β (p^(i−1) − ω v^(i−1))
    v^(i) = A p^(i)
    α = ρ_i / (r̃, v^(i))
    s = r^(i−1) − α v^(i)
    t = A s
    ω = (t, s) / (t, t)
    x^(i) = x^(i−1) + α p^(i) + ω s
    r^(i) = s − ω t
    check convergence, continue if necessary

3.9. Preconditioned bi-conjugate gradient stabilized method (PBCGSTAB)

1. initial guess x^(0) (for example x^(0) = 0)
2. r^(0) = b − A x^(0)
3. choose r̃ (for example r̃ = r^(0))
4. p^(0) = v^(0) = 0
5. ρ_0 = α = ω = 1
do for i = 1, 2, ..., iteration limit
    ρ_i = (r̃, r^(i−1))
    β = (ρ_i / ρ_(i−1)) (α / ω)
    p^(i) = r^(i−1) + β (p^(i−1) − ω v^(i−1))
    solve M p̂ = p^(i)
    v^(i) = A p̂
    α = ρ_i / (r̃, v^(i))
    s = r^(i−1) − α v^(i)
    solve M ŝ = s
    t = A ŝ
    ω = (t, s) / (t, t)
    x^(i) = x^(i−1) + α p̂ + ω ŝ
    r^(i) = s − ω t
    check convergence, continue if necessary

4. NUMERICAL RESULTS AND DISCUSSION

The analysis, as mentioned earlier, considers a brick (Fig. 1) that is divided into 8, 27, 64 and 125 hexahedral elements. The back face of the brick is kept at 100 °C while the initial temperature was 0 °C.

Figure 1. Geometry with boundary conditions

The analysis makes several comparisons in the following order: type of finite element, time stepping and iterative solvers. It also considers the temperature dependence on different meshes for the chosen time stepping (Crank-Nicolson). Fig. 2 presents the transient temperature through the brick for 8-noded and 20-noded hexahedron elements. From Fig. 2 we can see that the 8-noded elements give better results than the 20-noded elements. Fig. 3 presents results using different methods for time stepping.
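The Bi-CGSTAB iteration of Section 3.8 can be sketched as follows; the nonsymmetric tridiagonal test matrix is an illustrative convection-diffusion analogue of our own choosing:

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned Bi-CGSTAB for a nonsymmetric matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_tilde = r.copy()            # fixed shadow residual
    rho = alpha = omega = 1.0
    p = np.zeros_like(b)
    v = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = r_tilde @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_tilde @ v)
        s = r - alpha * v          # intermediate residual
        t = A @ s
        omega = (t @ s) / (t @ t)  # local minimisation step
        x += alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
    return x

# Nonsymmetric, diagonally dominant tridiagonal test matrix
n = 60
A = 3.0 * np.eye(n) - 1.3 * np.eye(n, k=-1) - 0.7 * np.eye(n, k=1)
b = np.ones(n)
x = bicgstab(A, b)
print(np.linalg.norm(A @ x - b) < 1e-8)
```

Unlike BCG, no product with Aᵀ appears; the ω-step smooths the erratic residual behaviour that CGS can exhibit.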

Figure 2. Transient temperature for 8- and 20-node elements

Figure 3. Transient temperatures for different time stepping

To make the comparison between the different time stepping schemes clearer, the temperature changes occurring at a monitored node are shown in Fig. 4. At the same time, several different meshes were studied for the chosen time stepping (Crank-Nicolson); the temperatures obtained at the same node are presented in Fig. 5.

Figure 4. Transient temperatures at the monitored node

Figure 5. Transient temperatures at the monitored node for different meshes

In the following figures (Fig. 6, Fig. 7 and Fig. 8) the decreasing trend of the absolute values of the residuals (a characteristic of iterative solvers) is shown.

Figure 6. Residuals at time t = 0.1 for the monitored node

Figure 7. Residuals at time t = 0.5 for the monitored node

Figure 8. Residuals at time t = 1.0 for the monitored node
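Residual histories like those plotted in Figures 6-8 are obtained simply by recording the residual norm at every inner iteration. The sketch below does this for the LSCG method of Section 3.3 (CG applied to the normal equations), whose true residual norm is minimised at every step and therefore can never rise; the test matrix is our own illustration:

```python
import numpy as np

def lscg_with_history(A, b, tol=1e-10, max_iter=500):
    """Least squares CG (CG on A^T A x = A^T b), recording ||b - A x||
    at every inner iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    s = A.T @ r                    # normal-equations residual
    p = s.copy()
    ss = s @ s
    history = [np.linalg.norm(r)]
    for _ in range(max_iter):
        q = A @ p
        alpha = ss / (q @ q)
        x += alpha * p
        r -= alpha * q
        history.append(np.linalg.norm(r))
        s = A.T @ r
        ss_new = s @ s
        if np.sqrt(ss_new) < tol:
            break
        p = s + (ss_new / ss) * p
        ss = ss_new
    return x, history

# Nonsymmetric system: LSCG applies where plain CG does not
n = 30
A = 2.0 * np.eye(n) - 1.2 * np.eye(n, k=-1) - 0.8 * np.eye(n, k=1)
b = np.ones(n)
x, hist = lscg_with_history(A, b)
# The recorded residual curve is (numerically) monotone decreasing
print(all(h2 <= h1 + 1e-10 for h1, h2 in zip(hist, hist[1:])))
```

Squaring the condition number through AᵀA is also why this monotone method converges slowly, consistent with the behaviour reported below for LSCG.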

Figure 9. Residuals of the last iteration in time

Finally, Fig. 9 gives the absolute value of the residual at the last iteration of each time step. From the diagrams it can be seen that the least squares conjugate gradient method (LSCG) has a monotonically decreasing residual, but its convergence is the slowest of all the methods presented in this paper. The number of inner iterations is another useful parameter for comparison. According to the number of inner iterations, as seen in Fig. 10, the LSCG method requires the highest number of inner iterations while the preconditioned conjugate gradient method requires the least.

Figure 10. Number of inner iterations in time

The last method considered is based on Gauss elimination with preconditioning, for which an incomplete Cholesky factorisation is used. Increasing the number of elements causes the CPU time to increase rapidly. Table 2 presents the CPU times for the PCG and Gauss methods.

Table 2. CPU time for different meshes

mesh      PCG       Gauss
2x2x2     62.44     77.92
3x3x3     33.8      998.8
4x4x4     927.2     no point to run
5x5x5     259.42    no point to run

An analysis of the selected solvers shows that the most time-consuming operation is the matrix-vector product (the MATMUL intrinsic), which is used at least twice during each calculation. The results presented in the table above were obtained using MATMUL as a matrix-vector product. Using a matrix-matrix product instead accelerates the calculation by almost a factor of three; for example, the CPU time for the PCG method then becomes .957 s and for the BCG method 9.25335 s.

5. CONCLUSION

Sometimes taking the number of inner iterations as the main assessment criterion can lead to incorrect conclusions. For example, the computational (CPU) time required to solve the problem using the BCG method is twice that spent using the CG method, although both require the same number of inner iterations. Computational time is therefore a very important parameter in any analysis of iterative solvers. Table 1 shows the CPU times for the solvers; all these results are for the numerical scheme with 10x10x10 element divisions.

Table 1. CPU time for chosen solvers

solvers     CPU [s]
CG          .464
PCG         3.264694
LSCG        55.6998
CGS         5.87282
PCGS        6.4926
BCG         26.4889
PBCG        4.356
BCGSTAB     34.98
PBCGSTAB    6.9722
Gauss       6.569446

Iterative solvers have great advantages over direct solvers [5] when one has to solve industrial problems with hundreds of thousands of elements or more. In terms of CPU time, 259.42 s were needed to solve our problem with the slower version of the PCG method (the faster one gives 72 s), while there was no point in even attempting the Gauss method. When solving symmetric positive definite matrix systems the PCG method appears to be the fastest. Unfortunately, not all industrially relevant partial differential equations are of Laplace or Poisson type. When one has to deal with a non-symmetric matrix system (e.g. the finite element discretization of the Navier-Stokes equations), other conjugate gradient methods such as BCG, BCGSTAB, CGS and LSCG appear to be very useful tools.
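The memory argument made above for iterative solvers rests on storing only the nonzeros of A and performing the matrix-vector product on them. The following sketch of a compressed sparse row (CSR) product illustrates the idea; the format and helper names are our own, not from the paper:

```python
import numpy as np

def to_csr(A):
    """Compress a dense matrix to CSR arrays (values, column indices, row pointers)."""
    data, indices, indptr = [], [], [0]
    for row in A:
        nz = np.nonzero(row)[0]
        data.extend(row[nz])
        indices.extend(nz)
        indptr.append(len(indices))
    return np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = A x using only the stored nonzeros."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal: 3n - 2 nonzeros
data, indices, indptr = to_csr(A)
x = np.random.default_rng(0).standard_normal(n)

print(np.allclose(csr_matvec(data, indices, indptr, x), A @ x))  # same product
print(len(data))   # 298 stored entries instead of 10000
```

For a finite element matrix the work per matrix-vector product is proportional to the number of nonzeros rather than to n², which is what keeps the per-iteration cost of the CG family low on large meshes.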

REFERENCES

[1] Hestenes, M., Stiefel, E.: Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Stand., Vol. 49, pp. 409-436, 1952.
[2] Fletcher, R.: Conjugate gradient methods for indefinite systems, Lecture Notes in Mathematics, Vol. 506, Springer, Berlin, pp. 73-89, 1976.
[3] Sonneveld, P.: CGS, a fast Lanczos-type solver for nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., Vol. 10, pp. 36-52, 1989.
[4] Van der Vorst, H.: Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., Vol. 13, pp. 631-644, 1992.
[5] Irons, B.: A frontal solution program for finite element analysis, Int. J. Num. Meth. Eng., Vol. 2, pp. 5-32, 1970.
[6] Lewis, R.W., Morgan, K., Thomas, H.R., Seetharamu, K.N.: The Finite Element Method in Heat Transfer Analysis, John Wiley and Sons, Chichester, 1996.
[7] Smith, I.M., Griffiths, D.V.: Programming the Finite Element Method, John Wiley and Sons, Chichester, 1998.
[8] Golub, G., Van Loan, C.: Matrix Computations, Johns Hopkins University Press, 1996.
[9] Saad, Y.: Iterative Methods for Sparse Linear Systems, SIAM, 2003 (PWS, 1996).
[10] Lavery, N.P., Taylor, C.: Iterative and multigrid methods in the finite element solution of incompressible and turbulent fluid flow, Int. J. Num. Meth. Fluids, Vol. 30, No. 6, pp. 609-634, 1999.