AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra)
Lecture 24: Preconditioning and Multigrid Solver
Xiangmin Jiao, SUNY Stony Brook

Preconditioning

- Motivation: the convergence of iterative methods depends heavily on the eigenvalues or singular values of the matrix
- Main idea of preconditioning: introduce a nonsingular matrix M such that M^{-1}A has better properties than A; thereafter, solve M^{-1}Ax = M^{-1}b, which has the same solution as Ax = b
- Criteria for M:
  - Good approximation of A, in a sense depending on the iterative solver
  - Ease of inversion (systems of the form Mz = r must be cheap to solve)
- Typically, a preconditioner M is good if M^{-1}A is not too far from normal and its eigenvalues are clustered

Left, Right, and Hermitian Preconditioners

- Left preconditioning: left-multiply by M^{-1} and solve M^{-1}Ax = M^{-1}b
- Right preconditioning: right-multiply by M^{-1} and solve AM^{-1}y = b, with x = M^{-1}y
- However, if A is Hermitian, M^{-1}A and AM^{-1} break the symmetry. How to resolve this problem?
- Suppose M is Hermitian positive definite, with M = CC* for some C. Then Ax = b is equivalent to [C^{-1}AC^{-*}](C*x) = C^{-1}b, where C^{-1}AC^{-*} is Hermitian positive definite; it is similar to C^{-*}C^{-1}A = M^{-1}A and hence has the same eigenvalues as M^{-1}A
- An example of M = CC* is the Cholesky factorization M = R*R, where R is upper triangular (take C = R*)
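This equivalence of spectra is easy to check numerically. Below is a small NumPy sketch (the 3x3 test matrix and the Jacobi choice of M are arbitrary illustrations): it forms the symmetrically preconditioned matrix C^{-1}AC^{-T} from a Cholesky factor of M and compares its eigenvalues with those of the nonsymmetric M^{-1}A.

```python
import numpy as np

# SPD test matrix A and SPD preconditioner M (illustrative choices)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
M = np.diag(np.diag(A))          # Jacobi preconditioner as a simple SPD M

# Factor M = C C^T (Cholesky) and form the symmetrically preconditioned matrix
C = np.linalg.cholesky(M)
Cinv = np.linalg.inv(C)
A_sym = Cinv @ A @ Cinv.T        # C^{-1} A C^{-T}: Hermitian positive definite

# Its eigenvalues agree with those of the (nonsymmetric) M^{-1} A
ev_sym  = np.sort(np.linalg.eigvalsh(A_sym))
ev_left = np.sort(np.linalg.eigvals(np.linalg.solve(M, A)).real)
print(np.allclose(ev_sym, ev_left))   # True
```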

Preconditioned Conjugate Gradient

- When preconditioning a symmetric matrix, use an SPD preconditioner M, factored as M = RR^T
- In practice, the algorithm can be organized so that only M^{-1} (instead of R^{-1}) appears

Algorithm: Preconditioned Conjugate Gradient Method
  x_0 = 0, r_0 = b, p_0 = M^{-1}r_0, z_0 = p_0
  for n = 1, 2, 3, ...
      α_n = (r_{n-1}^T z_{n-1}) / (p_{n-1}^T A p_{n-1})   (step length)
      x_n = x_{n-1} + α_n p_{n-1}                         (approximate solution)
      r_n = r_{n-1} − α_n A p_{n-1}                       (residual)
      z_n = M^{-1} r_n                                    (preconditioning)
      β_n = (r_n^T z_n) / (r_{n-1}^T z_{n-1})             (improvement this step)
      p_n = z_n + β_n p_{n-1}                             (search direction)
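The algorithm above transcribes almost line by line into Python/NumPy. The sketch below is illustrative: `solve_M` applies M^{-1} as a black box, and the 2x2 test system with a Jacobi preconditioner is an arbitrary example.

```python
import numpy as np

def pcg(A, b, solve_M, tol=1e-10, maxiter=200):
    """Preconditioned CG; solve_M(r) applies M^{-1} to a vector r."""
    x = np.zeros_like(b)
    r = b.copy()
    z = solve_M(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step length
        x = x + alpha * p              # approximate solution
        r = r - alpha * Ap             # residual
        if np.linalg.norm(r) < tol:
            break
        z = solve_M(r)                 # preconditioning
        rz_new = r @ z
        beta = rz_new / rz             # improvement this step
        p = z + beta * p               # search direction
        rz = rz_new
    return x

# Example: Jacobi-preconditioned CG on a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
print(np.allclose(A @ x, b))  # True
```

Note that only applications of M^{-1} appear, as the slide states: the factor R is never needed explicitly.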

Commonly Used Preconditioners

- Jacobi preconditioning: M = diag(A). Very simple and cheap; may improve convergence for some problems but is usually insufficient
- Block-Jacobi preconditioning: let M be block-diagonal instead of diagonal
- Classical iterative methods: precondition by applying one step of Jacobi, Gauss-Seidel, SOR, or SSOR
- Incomplete factorizations: perform Gaussian elimination or Cholesky factorization but ignore fill (ILU, incomplete Cholesky)
- Multigrid (coarse-grid approximations): for a PDE discretized on a grid, a preconditioner can be formed by restricting the problem to a coarser grid, solving the smaller problem, then transferring the result back. When applicable, this is often the most efficient approach
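Jacobi preconditioning helps most when A is badly scaled. A small illustrative sketch (the matrices are arbitrary choices): a well-conditioned SPD matrix S is rescaled into an ill-conditioned A, and symmetric Jacobi preconditioning essentially undoes the damage.

```python
import numpy as np

S = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])       # well-conditioned SPD matrix
D = np.diag([100.0, 1.0, 0.01])
A = D @ S @ D                         # same problem, badly scaled rows/columns

d = np.sqrt(np.diag(A))
A_prec = A / np.outer(d, d)           # symmetric Jacobi-preconditioned matrix

print(np.linalg.cond(A))              # very large
print(np.linalg.cond(A_prec))         # small, comparable to cond(S)
```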

Multigrid Methods
(Michael T. Heath, Scientific Computing, chapter on Partial Differential Equations)

- Whether a component of the error is smooth or oscillatory is relative to the mesh on which the solution is defined
- A component that appears smooth on a fine grid may appear oscillatory when sampled on a coarser grid
- If we apply the smoother on the coarser grid, we may make rapid progress in reducing this (now oscillatory) component of the error
- After a few iterations of the smoother, the result can be interpolated back to the fine grid to produce a solution in which both the higher-frequency and lower-frequency components of the error are reduced
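The frequency-dependence of the smoother can be observed numerically. The sketch below (grid size, modes, and damping parameter are illustrative choices) applies a few sweeps of weighted Jacobi to the homogeneous system Ae = 0 for a 1D Poisson matrix, starting from a smooth and an oscillatory error mode:

```python
import numpy as np

n = 63
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson (unscaled)
x = np.arange(1, n + 1) / (n + 1)                    # interior grid points

def jacobi_smooth(e, sweeps, omega=2/3):
    # weighted Jacobi applied to A e = 0; diag(A) = 2
    for _ in range(sweeps):
        e = e - omega * (A @ e) / 2.0
    return e

smooth_err = np.sin(np.pi * x)          # lowest-frequency error mode
osc_err    = np.sin(60 * np.pi * x)     # high-frequency error mode

r_s = np.linalg.norm(jacobi_smooth(smooth_err, 3)) / np.linalg.norm(smooth_err)
r_o = np.linalg.norm(jacobi_smooth(osc_err, 3)) / np.linalg.norm(osc_err)
print(r_s, r_o)   # smooth mode barely reduced; oscillatory mode reduced sharply
```

Three sweeps leave the smooth mode nearly untouched while cutting the oscillatory mode by more than an order of magnitude, which is exactly why the smooth remainder is handed to a coarser grid.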

Multigrid Methods, continued

- This idea can be extended to multiple levels of grids, so that error components of various frequencies can be reduced rapidly, each at the appropriate level
- Transition from a finer grid to a coarser grid involves restriction or injection
- Transition from a coarser grid to a finer grid involves interpolation or prolongation

Residual Equation

- If x̂ is an approximate solution to Ax = b, with residual r = b − Ax̂, then the error e = x − x̂ satisfies the equation Ae = r
- Thus, in improving the approximate solution, we can work with just this residual equation, involving the error and the residual, rather than the solution and the original right-hand side
- One advantage of the residual equation is that zero is a reasonable starting guess for its solution
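A minimal numerical illustration of the residual equation (the 2x2 system and the approximate solution are arbitrary choices): solving Ae = r and adding the correction recovers the exact solution.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_exact = np.linalg.solve(A, b)

x_hat = np.array([0.1, 0.5])            # some approximate solution
r = b - A @ x_hat                       # residual
e = np.linalg.solve(A, r)               # solve the residual equation A e = r
print(np.allclose(x_hat + e, x_exact))  # True: correction recovers the solution
```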

Two-Grid Algorithm

1. On the fine grid, use a few iterations of the smoother to compute an approximate solution x̂ of the system Ax = b
2. Compute the residual r = b − Ax̂
3. Restrict the residual to the coarse grid
4. On the coarse grid, use a few iterations of the smoother on the residual equation to obtain a coarse-grid approximation to the error
5. Interpolate the coarse-grid correction to the fine grid to obtain an improved approximate solution on the fine grid
6. Apply a few iterations of the smoother to the corrected solution on the fine grid

Multigrid Methods, continued

- A multigrid method results from recursion in Step 4: the coarse-grid correction is itself improved by using a still coarser grid, and so on, down to some bottom level
- The computations become progressively cheaper on coarser and coarser grids because the systems become successively smaller
- In particular, a direct method may be feasible on the coarsest grid if the system is small enough

Cycling Strategies

[Figure: common strategies for cycling through grid levels]

Cycling Strategies, continued

- The V-cycle starts with the finest grid, goes down through successive levels to the coarsest grid, and then back up again to the finest grid
- The W-cycle zig-zags among the lower-level grids before moving back up to the finest grid, to get more benefit from the coarser grids, where computations are cheaper
- Full multigrid starts at the coarsest level, where a good initial solution is easier to come by (perhaps by a direct method), then bootstraps this solution up through the grid levels, ultimately reaching the finest grid

Multigrid Methods, continued

- By exploiting the strengths of the underlying iterative smoothers and avoiding their weaknesses, multigrid methods are capable of extraordinarily good performance: linear in the number of grid points in the best case
- At each level, the smoother rapidly reduces the oscillatory component of the error, at a rate independent of the mesh size h, so only a few iterations of the smoother, often just one, are performed at each level
- Since all components of the error appear oscillatory at some level, the convergence rate of the entire multigrid scheme should be rapid and independent of the mesh size, in contrast to other iterative methods

Multigrid Methods, continued

- Moreover, the cost of an entire multigrid cycle is only a modest multiple of the cost of a single sweep on the finest grid
- As a result, multigrid methods are among the most powerful methods available for solving sparse linear systems arising from PDEs
- They are capable of converging to within the truncation error of the discretization at a cost comparable to that of fast direct methods, although the latter are much less broadly applicable