Iterative Methods and Multigrid


Iterative Methods and Multigrid, Part 3: Preconditioning
Eric de Sturler

Preconditioning

The general idea behind preconditioning is that the convergence of some method for the linear system Ax = b can be improved by applying the method to a preconditioned system:

1) P^{-1} A x = P^{-1} b, or
2) A P^{-1} y = b with x = P^{-1} y, or
3) P_1^{-1} A P_2^{-1} y = P_1^{-1} b with x = P_2^{-1} y.

Although this is not strictly necessary, one easy way to think about preconditioning is that if P ≈ A (cases 1 and 2) or P_1 P_2 ≈ A (case 3), then the preconditioned system is close to the identity (in some appropriate sense) and convergence will be rapid.
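As a concrete illustration (not taken from the slides), here is a minimal numpy sketch that forms all three preconditioned systems for a small dense example and checks that each recovers the solution of Ax = b. The choice P = diag(A), and its split P = P_1 P_2 with P_1 = P_2 = diag(A)^{1/2}, is only an illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    A = np.diag(np.arange(2.0, 2.0 + n)) + 0.1 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    x_true = np.linalg.solve(A, b)

    P = np.diag(np.diag(A))                  # simple (Jacobi) preconditioner
    P1 = P2 = np.diag(np.sqrt(np.diag(A)))   # split version, P = P1 @ P2

    # 1) left preconditioning: P^{-1} A x = P^{-1} b
    x1 = np.linalg.solve(np.linalg.solve(P, A), np.linalg.solve(P, b))

    # 2) right preconditioning: A P^{-1} y = b, then x = P^{-1} y
    y = np.linalg.solve(A @ np.linalg.inv(P), b)
    x2 = np.linalg.solve(P, y)

    # 3) split preconditioning: P1^{-1} A P2^{-1} y = P1^{-1} b, then x = P2^{-1} y
    y = np.linalg.solve(np.linalg.solve(P1, A) @ np.linalg.inv(P2), np.linalg.solve(P1, b))
    x3 = np.linalg.solve(P2, y)

    for x in (x1, x2, x3):
        assert np.allclose(x, x_true)

Of course, in practice one never forms P^{-1} explicitly; Krylov solvers only need the action of P^{-1} (a solve with P) on a vector.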

Preconditioned Krylov methods

For non-Hermitian problems the Krylov method can be applied immediately to the preconditioned problem. For Hermitian problems and the corresponding solvers, the preconditioned system must also be Hermitian (positive definite). There are two ways to achieve this:

1) Symmetric (split) preconditioning: solve P^{-1} A P^{-H} y = P^{-1} b and set x = P^{-H} y.

2) Use the inner product based on the (HPD) preconditioner, <x, y>_P = y^H P x. Then P^{-1} A is Hermitian (self-adjoint) with respect to this inner product:

   <P^{-1} A x, y>_P = y^H P P^{-1} A x = y^H A x = y^H A^H x = y^H A^H P^{-H} P x = <x, P^{-1} A y>_P,

using that A and P are Hermitian. Now apply CG to P^{-1} A, but with the P-inner product.

Preconditioners from splittings (1)

One way to derive preconditioners is to use basic iterative methods (fixed-point methods). Consider Ax = b and the matrix splitting A = M - N:

   M x_{k+1} = N x_k + b   ⟺   x_{k+1} = M^{-1} N x_k + M^{-1} b.

This iteration converges (for every starting guess) iff ρ(M^{-1} N) < 1, where ρ denotes the spectral radius. Its fixed point satisfies

   x = M^{-1} N x + M^{-1} b   ⟺   (I - M^{-1} N) x = M^{-1} b.

Here I - M^{-1} N = M^{-1} A is the preconditioned matrix, and (I - M^{-1} N) x = M^{-1} A x = M^{-1} b is the preconditioned system.
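The splitting iteration itself is easy to state in code. Below is a minimal sketch (not from the slides) of the fixed-point iteration M x_{k+1} = N x_k + b, written in its equivalent residual-update form, with Jacobi (M = diag(A)) on a small diagonally dominant tridiagonal test matrix; the function and matrix are illustrative only.

    import numpy as np

    def splitting_iteration(A, b, M_solve, x0=None, tol=1e-8, maxit=500):
        # Fixed-point iteration x_{k+1} = x_k + M^{-1} (b - A x_k),
        # algebraically identical to M x_{k+1} = N x_k + b with N = M - A.
        x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
        for k in range(maxit):
            r = b - A @ x
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                return x, k
            x = x + M_solve(r)
        return x, maxit

    # Example: Jacobi splitting, M = diag(A).
    n = 50
    A = (np.diag(4.0 * np.ones(n))
         + np.diag(-1.0 * np.ones(n - 1), 1)
         + np.diag(-1.0 * np.ones(n - 1), -1))
    b = np.ones(n)
    d = np.diag(A)
    x, iters = splitting_iteration(A, b, lambda r: r / d)

The same M_solve routine can be handed to a Krylov method as a preconditioner, which is exactly the point of this slide: the splitting supplies M, and the Krylov method replaces the fixed-point iteration.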

Preconditioners from splittings (2)

Some well-known choices of M, for A partitioned into blocks

   A = [ A_11  A_12  A_13 ]
       [ A_21  A_22  A_23 ]
       [ A_31  A_32  A_33 ],

are:

   M = diag(A_11, A_22, A_33)                        (block) Jacobi

   M = [ A_11              ]
       [ A_21  A_22        ]
       [ A_31  A_32  A_33  ]                         (block) Gauss-Seidel

   M = [ ω^{-1} A_11                            ]
       [ A_21         ω^{-1} A_22               ]
       [ A_31         A_32         ω^{-1} A_33  ]    (block) SOR

with relaxation parameter ω; equivalently, x_{k+1}^{SOR} = ω x_{k+1}^{GS} + (1 - ω) x_k^{SOR}, where the Gauss-Seidel update is computed from the current SOR iterates.

Preconditioners from splittings (3)

A properly chosen ω yields a greatly improved convergence rate. For example, for the Poisson equation -Δu = f the convergence factor improves from 1 - O(h^2) for ω = 1 (Gauss-Seidel) to 1 - O(h) for the optimal ω. For the analysis, see the book. For special problems SOR (as a method in itself) is still very popular.

For symmetric problems we need a symmetric preconditioner. With D = diag(A_11, A_22, A_33), L the strictly lower block-triangular part of A, and U the strictly upper block-triangular part, this is the (block) SSOR preconditioner

   M = (2 ω^{-1} - 1)^{-1} (ω^{-1} D + L) (ω^{-1} D)^{-1} (ω^{-1} D + U).
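As a sketch of how such a splitting is used as a preconditioner in practice, the following point (rather than block) SSOR preconditioner applies M^{-1} by one forward and one backward triangular solve and wraps it as a scipy LinearOperator for use with CG. This is an illustrative sketch based on the SSOR formula above, not code from the course.

    import numpy as np
    from scipy.sparse import csr_matrix, diags, tril, triu
    from scipy.sparse.linalg import LinearOperator, spsolve_triangular, cg

    def ssor_preconditioner(A, omega=1.5):
        # Point SSOR: M = (2/omega - 1)^{-1} (D/omega + L) (D/omega)^{-1} (D/omega + U),
        # where A = D + L + U (diagonal, strictly lower, strictly upper parts).
        A = csr_matrix(A)
        d = A.diagonal()
        lower = (diags(d / omega) + tril(A, k=-1)).tocsr()
        upper = (diags(d / omega) + triu(A, k=1)).tocsr()
        scale = 2.0 / omega - 1.0

        def apply_Minv(r):
            # M^{-1} r = scale * (D/omega + U)^{-1} (D/omega) (D/omega + L)^{-1} r
            y = spsolve_triangular(lower, r, lower=True)
            y = (d / omega) * y
            z = spsolve_triangular(upper, y, lower=False)
            return scale * z

        return LinearOperator(A.shape, matvec=apply_Minv)

    # Usage for an SPD matrix A and right-hand side b:
    #   x, info = cg(A, b, M=ssor_preconditioner(A, omega=1.5))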

Diagonal preconditioning

If A is HPD and has the two-by-two block form

   A = [ I_{n1}   B      ]
       [ B^H      I_{n2} ],

then κ(A) ≤ κ(D^H A D) for any nonsingular block-diagonal D = diag(D_{n1}, D_{n2}) with the same block sizes; here κ denotes the condition number.

If an HPD matrix has all diagonal elements equal, then κ(A) ≤ m · min_D κ(D^H A D) over all diagonal, HPD matrices D, where m is the maximum number of nonzeros in any row of A.

If A is HPD with identity diagonal blocks,

   A = [ I_{n1}     A_{12}   ...   A_{1m}  ]
       [ A_{12}^H   I_{n2}   ...   A_{2m}  ]
       [ ...                                ]
       [ A_{1m}^H   A_{2m}^H  ...   I_{nm} ],

then κ(A) ≤ m · min_D κ(D^H A D) over all nonsingular block-diagonal D (with the same block sizes as A), where m is the number of diagonal blocks of A.

[Figure: unpreconditioned convergence histories, log10 ||r||_2 versus number of iterations (matvecs), for GMRES, restarted GMRES, QMR, and BiCG on a model problem (parameters p = q = 1, r = s = 7; boundary values u_s = u_w = 0, u_n = u_e = 1), with a table of matvec counts and run times that also includes BiCGSTAB.]
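The first two results above say that the trivial Jacobi (diagonal) scaling is already close to the best possible diagonal scaling for an HPD matrix. A minimal sketch of this symmetric scaling (illustrative, not from the slides):

    import numpy as np

    def jacobi_scaled_solve(A, b):
        # Symmetric diagonal scaling: A_hat = D^{-1/2} A D^{-1/2} with D = diag(A),
        # so A_hat has unit diagonal. Solve A_hat y = D^{-1/2} b, then x = D^{-1/2} y.
        s = 1.0 / np.sqrt(np.diag(A))
        A_hat = A * np.outer(s, s)
        y = np.linalg.solve(A_hat, s * b)
        return s * y

For an HPD matrix with at most m nonzeros per row, the condition number of A_hat is within a factor m of the best achievable by any diagonal scaling of A.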

[Figure: convergence histories with a diagonal preconditioner and with a tridiagonal preconditioner for the same problem, comparing BiCG, GMRES, and restarted GMRES.]

[Figure: unpreconditioned convergence histories for a harder model problem (parameters p = 1, q = 1, r = 2, s = -5; boundary values u_s = u_w = 0, u_n = u_e = 1). One plot, log10 ||r||_2 versus number of iterations (matvecs), compares CGS, BiCG, GMRES, QMR, and BiCGSTAB; a second plot compares full GMRES with restarted GMRES (e.g. GMRES(25)). The accompanying tables of matvec counts and run times illustrate the trade-off between iteration count and cost per iteration.]

Incomplete decompositions

An important class of preconditioners is based on incomplete decompositions. Consider the linear system Ax = b and carry out an LU decomposition (for example); however, whenever the elimination would introduce a nonzero in a position where A has a zero coefficient, we simply ignore (drop) it. Let R = LU - A. This gives A = LU - R, which is a matrix splitting. Clearly we do not get the exact LU decomposition this way, but it turns out that for M-matrices this splitting has nice properties.

Theorem: Let A be an M-matrix. Then for every subset P of off-diagonal indices there exist a lower triangular matrix L with unit diagonal and an upper triangular matrix U such that A = LU - R with

   l_ij = 0 if (i, j) ∈ P,   u_ij = 0 if (i, j) ∈ P,   r_ij = 0 if (i, j) ∉ P.

The factors L and U are unique, and A = LU - R is a regular splitting. Proof: see the book.

If A is a symmetric M-matrix and (j, i) ∈ P ⟺ (i, j) ∈ P, then we obtain an incomplete Cholesky decomposition A = LL^T - R, which is also a regular splitting.

These incomplete decompositions can be used as preconditioners. If we allow no fill-in, the preconditioners are called ILU(0) and IC(0), respectively.
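A minimal dense sketch of ILU(0) (illustrative only; real implementations work directly on sparse storage): an LU elimination sweep that simply skips every update falling outside the sparsity pattern of A, so L and U inherit A's pattern, followed by the two triangular solves that apply the preconditioner.

    import numpy as np

    def ilu0(A):
        # ILU(0): Gaussian elimination restricted to the nonzero pattern of A.
        # Returns L and U packed in one matrix (unit diagonal of L not stored).
        n = A.shape[0]
        LU = A.astype(float)
        nz = (A != 0)
        for i in range(1, n):
            for k in range(i):
                if not nz[i, k]:
                    continue
                LU[i, k] /= LU[k, k]              # multiplier l_ik
                for j in range(k + 1, n):
                    if nz[i, j]:                  # drop fill-in outside the pattern
                        LU[i, j] -= LU[i, k] * LU[k, j]
        return LU

    def ilu0_solve(LU, r):
        # Apply the preconditioner: solve L y = r, then U z = y.
        n = LU.shape[0]
        y = np.zeros(n)
        for i in range(n):
            y[i] = r[i] - LU[i, :i] @ y[:i]
        z = np.zeros(n)
        for i in range(n - 1, -1, -1):
            z[i] = (y[i] - LU[i, i + 1:] @ z[i + 1:]) / LU[i, i]
        return z

Because no fill-in is allowed, L + U has the same number of nonzeros as A, so one preconditioner solve costs roughly as much as one matrix-vector product with A.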

For hard problems it may be useful to allow more fill-in (nonzeros in L or U at positions that are zero in A). The corresponding preconditioners are referred to as ILU(k) and IC(k), where k indicates the level of fill-in allowed: level-1 fill-in is fill-in caused by the nonzero coefficients of A; level-2 fill-in is fill-in caused by level-1 fill-in (that is, by nonzeros not originally in A); and so on.

Another variation of ILU is to use a drop tolerance: fill-in is kept only if, in absolute value, it is larger than the drop tolerance times the norm of the corresponding column. The two approaches can also be mixed.

The IC preconditioner tends to reduce the number of iterations significantly. However, as the mesh width h goes to zero, the condition number of the preconditioned elliptic problem still behaves as O(h^{-2}), just as for the unpreconditioned problem, and hence the number of CG iterations still behaves as O(h^{-1}). It turns out that we can improve the condition number of the preconditioned matrix by a factor of h, so that it behaves as O(h^{-1}); consequently, the number of CG iterations behaves as O(h^{-1/2}). For details see the book. This type of preconditioner is referred to as MIC (or MILU), for Modified Incomplete Cholesky (Modified ILU). Often a parametrized form is used in which, depending on the parameter, the preconditioner varies between IC (ILU) and MIC (MILU).
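In scipy, a threshold-based incomplete LU is available through SuperLU (scipy.sparse.linalg.spilu), which can serve as a drop-tolerance ILU preconditioner of the kind described above; the sketch below wraps it for use with GMRES. The drop_tol and fill_factor values are only illustrative, and note that this is a threshold/fill-ratio variant rather than the level-of-fill ILU(k).

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import spilu, LinearOperator, gmres

    def ilu_gmres(A, b, drop_tol=1e-3, fill_factor=10):
        # Incomplete LU with a drop tolerance (SuperLU's ILU), used as a
        # preconditioner: M^{-1} r is applied via the incomplete factors.
        A = csc_matrix(A)
        ilu = spilu(A, drop_tol=drop_tol, fill_factor=fill_factor)
        M = LinearOperator(A.shape, matvec=ilu.solve)
        return gmres(A, b, M=M, restart=50)

Tightening drop_tol (or raising fill_factor) gives a more accurate but more expensive preconditioner; loosening it moves back toward the cheap, sparse end of the trade-off.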

MIC/MILU have about the same cost to compute and to apply as IC/ILU. It is important to keep in mind that preconditioning introduces extra costs: (1) the cost of computing the incomplete decomposition, and (2) the cost of applying the preconditioner at each iteration. The second is typically the more important one; for example, ILU(0) introduces floating-point overhead roughly equal to another matrix-vector product. In practice, especially on parallel (and vector) computers, the extra cost in run time can be even worse, because the solution of triangular systems is hard to parallelize (vectorize). Hence the preconditioner must reduce the number of iterations sufficiently, and it usually does.

On parallel computers we often use incomplete block (diagonal) decompositions, which decouple parts of the domain. Unfortunately, this generally increases the number of iterations again, so we must look for a balance. A variety of strategies to prevent deterioration of the number of iterations has been developed; some of these are related to the so-called domain decomposition preconditioners (the next topic), sometimes referred to as interface preconditioners. Nevertheless, incomplete decompositions are among the most popular preconditioners today, largely because they can more or less be used as black-box preconditioners and are still very effective for a wide range of problems.

[Figure: convergence histories with the ILU(0) preconditioner for the same model problem (p = 1, q = 1, r = 2, s = -5), log10 ||r||_2 versus number of iterations (matvecs), comparing GMRES, restarted GMRES, QMR, BiCG, BiCGSTAB, and CGS, with a table of matvec counts and run times; a companion plot shows the corresponding results with the MILU(0) preconditioner.]

[Figure: convergence of CGS for various mesh widths h, on three successively refined grids, with a table of matvec counts and run times showing how the cost grows as the mesh is refined.]