Introduction to Multigrid Methods: Sampling Theory and Elements of Multigrid

Gustaf Söderlind, Numerical Analysis, Lund University

Contents (V3.15)
- What is a multigrid method?
- Sampling theory
- Digital filters
- Downsampling and aliasing
- Anti-aliasing
- A two-grid algorithm
- Spectral analysis
- Error estimation
- A convection-diffusion problem

1. What is a multigrid method?

Recall that standard iterative methods
$u^{m+1} = u^m - M^{-1}(T_h u^m - f)$
are slow unless the preconditioner $M^{-1} \approx T_h^{-1}$. In a multigrid method, the preconditioner is constructed from an inverse of $T_H$ on a coarse grid. At least two grids are used, a fine grid $h$ and a coarse grid $H$. The objective is to achieve a convergence rate independent of $h$.

Solution, error and residual

Given a linear system $T_h u = f$, construct a sequence $u^m \to u$. Recall the following definitions (the good, the bad and the ugly):
- The solution is defined by $T_h u = f$
- The error is defined by $e^m = u^m - u$
- The residual is defined by $r^m = T_h u^m - f$
The error equation relates error to residual: $T_h e^m = r^m$

Error (defect) correction

Note that $u = u^m - e^m$. If we can estimate the error, $\tilde e \approx e^m$, then $\tilde u = u^m - \tilde e$ is an improved solution, so we can put $u^{m+1} \leftarrow u^m - \tilde e$.

Question: Is it simpler to solve $T_h e^m = r^m$ than $T_h u = f$?

Answer: YES! Because we only need moderate accuracy, we can solve the error equation on the coarse grid. This will still recover the persistent low-frequency modes!

Elements of multigrid

MG is a highly efficient iterative method for various operator equations, such as the Poisson equation $Lu = f$.

Key assumptions:
- $L$ is elliptic: $\langle u, Lu\rangle \le M_2[L]\,\|u\|_2^2 < 0$
- $L^{-1}$ is regularizing: $u$ is smoother than the data $f$

Basic idea: solve $Lu = f$ on (at least) two grids,
$L_h u_h = f_h \qquad L_H u_H = f_H$
with coarse grid mesh width $H$, fine grid mesh width $h$, and mesh ratio $\rho = H/h$.

Multigrid pattern algorithm

Using at least two grids, iterate
$u_h^{m+1} = u_h^m - P_H^h L_H^{-1} R_h^H (L_h u_h^m - f_h)$
For the preconditioner $M^{-1} = P_H^h L_H^{-1} R_h^H$ we require $\|I - M^{-1} L_h\| < 1$, so the operator $P_H^h L_H^{-1} R_h^H L_h$ is key. As $L_h$ and $L_H$ are given, success depends crucially on the operators $R_h^H$, which should be smoothing, and $P_H^h$: downsampling ($R_h^H$) and oversampling ($P_H^h$).

Eigenvalues and eigenvectors

Discretize $u'' = f$ as $T_h u = f$. To introduce multigrid, we return to damped Jacobi:
$r^m = T_h u^m - f, \qquad u^{m+1} = u^m - \gamma D^{-1} r^m$

Theorem. Let $T_h$ with homogeneous Dirichlet conditions have eigenvectors given by $T_h v^k = \lambda_k v^k$. Then the eigenvalues and eigenvectors of the Jacobi matrix $P_J(\gamma) = (1-\gamma)I + \gamma P_J$ are
$v^k = \sin k\pi x_h, \qquad \lambda_k[P_J(\gamma)] = 1 - \gamma + \gamma\cos k\pi h, \qquad k = 1 : N$
where grid and mesh width are given by $x_h = \{jh\}_1^N \subset (0,1)$.
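The theorem is easy to check numerically. A minimal Matlab sketch (the matrix and grid are as defined above, with N = 31 as in the figure below):

N = 31; h = 1/(N+1); gamma = 2/3;
T = toeplitz([-2 1 zeros(1,N-2)])/h^2;     % T_h = tridiag(1 -2 1)/h^2
PJ = eye(N) - gamma*(diag(diag(T))\T);     % damped Jacobi iteration matrix
lam = 1 - gamma + gamma*cos((1:N)'*pi*h);  % predicted eigenvalues
err = norm(sort(eig(PJ)) - sort(lam))      % ~ machine precision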

High frequency damping in Jacobi iteration

[Figure: eigenvalues of the damped Jacobi matrix versus frequency, for N = 31 and damping parameters γ = 1, 2/3, 1/2, 1/3.]

Effect of damped Jacobi in the frequency domain

Error recursion $e^{m+1} = P_J(1/2)\, e^m$, with
$F_\pi = P_J(1/2) = \frac14\begin{pmatrix} 2 & 1 & & \\ 1 & 2 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & 2 \end{pmatrix}$
Apply this to the eigenvector $v^k = \sin k\pi x_h$ of normalized frequency $\theta_k = k\pi h \in (0,\pi)$ to get
$P_J(1/2)\, v^k = \lambda_k[P_J(1/2)]\, v^k = \left(\tfrac12 + \tfrac12\cos k\pi h\right) v^k = \cos^2\frac{k\pi h}{2}\, v^k$

Theorem. The discrete Fourier transform of $P_J(1/2)$ is $\hat F_\pi(\theta) = \cos^2\frac{\theta}{2}$.

Separate treatment of HF and LF in multigrid

A few damped Jacobi iterations (γ = 1/2) suppress HF error components, while LF errors remain unaffected.
- Evaluate the residual $r_h = T_h u^m - f$ and restrict it to the coarse grid, $r_H \leftarrow R_h^H r_h$ (downsampling)
- Solve the error equation $T_H e_H = r_H$ for the error estimate $e_H$, and note that $e_H = T_H^{-1} r_H$ is smoother than $r_H$
- Prolong to the fine grid, $\tilde e_h \leftarrow P_H^h e_H$ (oversampling), e.g. by using linear interpolation
- Remove the LF error, $u^{m+1} \leftarrow u^m - \tilde e_h$, on the fine grid and repeat

A simple two-grid algorithm: solve $T_h u_h = f_h$

Solve $T_h u_h = f_h$ iteratively using the following algorithm:
1. Run a few damped (γ = 1/2) Jacobi iterations on the fine grid: $u_h^{m+1} \leftarrow u_h^m - \gamma D_h^{-1} r_h^m$, for $m = 0 : M-1$
2. Compute the residual and restrict: $r_H \leftarrow I_h^H F_\pi (T_h u_h^M - f_h)$
3. Solve the error equation $T_H e_H = r_H$
4. Prolong: $e_h^M \leftarrow I_H^h e_H$
5. Correct: $u_h^{M+1} \leftarrow u_h^M - e_h^M$
6. Run one damped Jacobi iteration on the fine grid
7. Repeat from 1 until convergence

Matlab code segment: the two-grid V step

rf = Tdx*v - f;          % Compute residual
v = v - gamma*(ddx\rf);  % Pre-smoothing Jacobi
rf = Tdx*v - f;          % Compute residual
rf = lowpass(rf);        % Anti-alias filter
rc = FMGrestrict(rf);    % Restrict to coarse grid
ec = Tdxc\rc;            % Solve error equation
ef = FMGprolong(ec);     % Prolong to fine grid
v = v - ef;              % Correct: remove error
rf = Tdx*v - f;          % Compute residual
v = v - gamma*(ddx\rf);  % Post-smoothing Jacobi
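The helper functions lowpass, FMGrestrict and FMGprolong are referenced but not listed on the slides. Minimal sketches (e.g. in files of their own) consistent with the operators defined later in this lecture, namely the 3-node kernel F_π = [1 2 1]/4, plain injection at even interior nodes, and linear-interpolation prolongation, might look as follows; the function bodies are my assumptions, not the original code:

function u = lowpass(v)
% Anti-aliasing filter F_pi: convolution with the kernel [1 2 1]/4
u = conv(v,[1 2 1]/4,'same');
end

function vc = FMGrestrict(vf)
% Plain restriction (injection): keep every second interior node
vc = vf(2:2:end);
end

function vf = FMGprolong(vc)
% Linear interpolation: coarse values at even fine nodes, averages at odd
n = 2*length(vc) + 1;
vf = zeros(n,1);
vf(2:2:end) = vc;
vpad = [0; vc(:); 0];                          % homogeneous Dirichlet ends
vf(1:2:end) = (vpad(1:end-1) + vpad(2:end))/2; % new in-between values
end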

Key points to analyze in MG algorithms

There is a large number of technical issues to consider in MG:
1. Sampling theory
2. Aliasing
3. Digital filters and anti-aliasing
4. Downsampling (restriction)
5. Oversampling (prolongation)
6. Fourier analysis of error
7. Regularizing effect of $T_H^{-1}$
8. Grid resonance and AA-filter design

2. Sampling theory

Fine grid: $x_h = \{x_j^h\}_{j=0}^{N_f+1} = \{jh\}_{j=0}^{N_f+1}$ with $h = 1/(N_f+1)$
Coarse grid: $x_H = \{x_j^H\}_{j=0}^{N_c+1}$ with $H = 2h$
Gridpoint relation: $x_j^H = x_j^{2h} = x_{2j}^h$ for $j = 0(1)N_c+1$
Discrete Fourier modes: $v^k(x_j^h) = \sin k\pi x_j^h = \sin \omega_k jh = \sin \theta_k^h j$
- Wave number $k$, for $k = 1(1)N_f$
- Frequency $\omega_k = k\pi$
- Normalized frequency $\theta_k^h = h\omega_k \in [0, \pi]$

Nyquist frequency $h\omega = \pi$

Define the Nyquist wave number $k_f = N_f + 1 = 1/h$.
Note: as $h\omega_{k_f} = \pi$, $\sin \omega_{k_f} jh \equiv 0$, so the Nyquist mode cannot be represented on the fine grid.
$\omega^h := \pi/h = \omega_{k_f}$ is called the (fine grid) Nyquist frequency; the normalized Nyquist frequency is $\theta^h = h\omega^h = \pi$.
Since $H\omega^H = \pi$, if the coarse mesh width is $H = 2h$, then $\omega^h = 2\omega^H$.

Downsampling LF modes at N_f = 31

[Figure: LF modes k = 1 (top) and k = 3 (bottom) on the fine grid.]

Downsampling LF modes at N_c = 15

[Figure: LF modes k = 1 (top) and k = 3 (bottom) on the coarse grid.]

HF-to-LF aliasing: HF mode at N_f = 31

[Figure: HF mode k = 31 on the fine grid.]

HF-to-LF aliasing: HF mode at N_c = 15

[Figure: the same mode appears as the LF mode k = 1 on the coarse grid.]

Nyquist frequency $\omega^h$ at N_f = 31

[Figure: the Nyquist HF mode k = 32 vanishes at every fine grid point.]

Aliasing under downsampling $h \to H = 2h$

Due to the Nyquist-Shannon sampling theorem,
$\sin 2\omega^H x_{2j}^h = \sin \omega^h x_{2j}^h \equiv 0, \qquad \cos 2\omega^H x_{2j}^h = \cos \omega^h x_{2j}^h \equiv 1$
Hence, using $\sin(\alpha - \beta) = \sin\alpha\cos\beta - \cos\alpha\sin\beta$,
$\sin\big((\omega^H + \omega_m) x_{2j}^h\big) = \sin\big((2\omega^H - (\omega^H - \omega_m)) x_{2j}^h\big) = -\sin\big((\omega^H - \omega_m) x_j^H\big)$
HF modes $\omega^H + \omega_m > \omega^H$ are aliased to LF modes $\omega^H - \omega_m < \omega^H$.
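This fold-over is easy to verify numerically. A minimal sketch with N_f = 31 and N_c = 15, where the coarse grid consists of the even-indexed fine-grid nodes:

Nf = 31; Nc = 15; h = 1/(Nf+1);
xh = (1:Nf)'*h;                  % fine-grid interior nodes
xH = xh(2:2:end);                % coarse-grid nodes (even fine nodes)
kc = Nc + 1; m = 3;              % coarse Nyquist wave number and offset
vHF = sin((kc+m)*pi*xh);         % HF mode on the fine grid
vLF = sin((kc-m)*pi*xH);         % LF mode on the coarse grid
err = norm(vHF(2:2:end) + vLF)   % ~ 0: the HF mode folds over to -LF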

Aliasing under downsampling $h \to H = 2h$

[Figure: LF + HF spectral content on the fine grid.]

[Figure: downsampling causes HF fold-over into the LF band.]

[Figure: the HF fold-over corrupts the spectral content on the coarse grid.]

Linear filter theory: analogue and digital filters

An analogue linear filter is an integral operator $F : v \mapsto u$,
$u = Fv, \qquad u(x) = \int_0^1 f(x,y)\, v(y)\, dy$
Its discrete counterpart is a digital filter,
$u = Fv, \qquad u_i = \sum_{j=1}^{N} f_{i,j}\, v_j$
The linear operator $F$, as defined by its kernel $f(x,y)$ or the matrix $F = \{f_{i,j}\}$, determines the filter characteristics.

Toeplitz operators and convolution filters

A convolution filter is an integral operator $F : v \mapsto u$,
$u = Fv, \qquad u(x) = \int_0^1 f(x - y)\, v(y)\, dy$
A discrete convolution is of the form
$u = Fv, \qquad u_i = \sum_{j=1}^{N} f_{i-j}\, v_j$
$F$ is a Toeplitz matrix, as $i - j = \text{const}$ implies $f_{i,j} = f_{i-j} = \text{const}$. Toeplitz filters are discrete convolutions. Boundary conditions may apply.

Anti-aliasing

HF content must be suppressed before downsampling. One (inefficient) way to do this in MG is to apply a smoother, often chosen as a few damped Jacobi iterations. Better is to use digital filter theory and apply an anti-aliasing filter, $u = F_\theta v$. Here $F_\theta$ is a linear operator designed to suppress HF presence in $v$, and it can be specifically designed to quench a given frequency $\theta \in [0, \pi]$.

Example: the filter $F_\pi$ is designed to block the frequency $\theta = \pi$, corresponding to the undesired HF oscillation $e^{i\pi n} = (-1)^n$.

The lowpass filter $F_\pi$ (note that $F_\pi = P_J(1/2)$ for $T_h$)

To filter out HF, repeated averaging can be used:
$u = \frac14\begin{pmatrix} 2 & 1 & & \\ 1 & 2 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & 2 \end{pmatrix} v = F_\pi v$
Then $u$ is smoother than $v$, and $F_\pi$ is a 2nd order lowpass filter with DFT $\hat F_\pi(\theta) = \cos^2\frac\theta2$:
- $\hat F_\pi(\pi) = 0$: $F_\pi$ removes the HF oscillation $(-1)^n$
- $\hat F_\pi(0) = 1$: $F_\pi$ lets LF through virtually unchanged

The lowpass filter $F_\pi$... a continuous DFT

Let $v = \sum_k a_k v^k$, where $v^k = \sin k\pi x_h$. Reflecting that $a_k$ is associated with the normalized frequency $\theta_k$, we write $a_k = a(\theta_k)$ and consider the function $a(\theta)$ for $\theta \in (0,\pi)$ to represent the spectral content of $v$, with DFT $\hat v(\theta) = a(\theta)$. Then
$u = F_\pi v = \sum_k a(\theta_k)\, F_\pi \sin k\pi x_h = \sum_k a(\theta_k) \cos^2\frac{\theta_k}{2}\, v^k$
Thus, if the spectral content of $v$ is $\hat v(\theta) = a(\theta)$, then $u = F_\pi v$ has spectral content $\hat u(\theta) = \cos^2\frac\theta2\, \hat v(\theta)$.

Convolutions in space are multiplications in the frequency domain.

Matlab matrix-free implementation of convolutions

Given a vector v and the convolution filter kernel $f_\pi = \frac14[1\ 2\ 1]$, the operation $u = F_\pi v$ is implemented by the Matlab commands

ker = [1 2 1]/4;
u = conv(v,ker,'same');

Likewise,

ker = [1 -2 1]/dx2;
r = conv(u,ker,'same') - f;

computes the residual $r = T_h u - f$ without ever using matrices.
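As a quick sanity check, the filter annihilates the Nyquist oscillation $(-1)^n$ while passing smooth modes. A sketch (with 'same', the endpoint values are affected by the implicit zero padding, so the statements hold in the interior):

n = 32;
ker = [1 2 1]/4;
uHF = conv((-1).^(1:n)',ker,'same');         % ~ 0 in the interior
uLF = conv(sin(pi*(1:n)'/(n+1)),ker,'same'); % ~ cos^2(theta/2) * input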

Downsampling with and without anti-aliasing filter

Due to aliasing, HF folds back into LF and must be suppressed by the AA-filter before downsampling. Essential in MG is to obtain the error estimate $e_H$ from the error equation $L_H e_H = r_H$, where $r_H$ is downsampled from the fine-grid residual $r_h$. Without HF suppression the error estimate $e_H$ will be poor, causing slow convergence or even divergence. Unlike smoothers, anti-aliasing is only grid dependent.

[Figure: downsampling without anti-aliasing (retaining even nodes only) corrupts the spectral content on the coarse grid.]

[Figure: the doubly averaging AA-filter F_π (3-node kernel, 2nd order filter) gives good HF suppression with little effect on LF.]

[Figure: an AA convolution filter with 5-node kernel support (4th order) gives strong HF suppression, with some effect also on LF.]

Fine grid, coarse grid

Fine grid with an odd number N of internal grid points:
xh = linspace(0,1,N+2), with $h = 1/(N+1)$
Coarse grid with $(N-1)/2$ internal grid points:
xH = linspace(0,1,(N+3)/2), with $H = 2/(N+1)$
Mesh widths $h$ and $H = 2h$, respectively.

Downsampling from fine grid to coarse

Example: $N_h = 7$ internal points on the fine grid $x_h$ map to $N_H = 3$ internal points on the coarse grid via the restriction map
$v_H = \begin{pmatrix} 0&1&0&0&0&0&0 \\ 0&0&0&1&0&0&0 \\ 0&0&0&0&0&1&0 \end{pmatrix} v_h = I_h^H v_h$
The restriction drops every second element. An alternative is a smoothing map, aka weighted restriction:
$v_H = \begin{pmatrix} 1/4&1/2&1/4&0&0&0&0 \\ 0&0&1/4&1/2&1/4&0&0 \\ 0&0&0&0&1/4&1/2&1/4 \end{pmatrix} v_h$

Weighted restriction = plain restriction after AA-filter: $I_h^H F_\pi$

$\begin{pmatrix} 1/4&1/2&1/4&0&0&0&0 \\ 0&0&1/4&1/2&1/4&0&0 \\ 0&0&0&0&1/4&1/2&1/4 \end{pmatrix} = \begin{pmatrix} 0&1&0&0&0&0&0 \\ 0&0&0&1&0&0&0 \\ 0&0&0&0&0&1&0 \end{pmatrix} \cdot \frac14 \begin{pmatrix} 2&1&&&&& \\ 1&2&1&&&& \\ &1&2&1&&& \\ &&1&2&1&& \\ &&&1&2&1& \\ &&&&1&2&1 \\ &&&&&1&2 \end{pmatrix}$

This is proper downsampling with the necessary anti-aliasing.
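The factorization can be verified directly; a minimal sketch for N_h = 7, N_H = 3:

Nh = 7; Nc = 3;
Fpi = toeplitz([2 1 zeros(1,Nh-2)])/4;           % AA-filter F_pi
Iinj = zeros(Nc,Nh); Iinj(:,2:2:end) = eye(Nc);  % plain restriction
Rw = Iinj*Fpi                                    % rows [1 2 1]/4: the weighted restriction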

From fine grid to coarse, including boundaries

Example: $N_h = 3$ internal points on the fine grid $x_h$ map to $N_H = 1$ internal point on the coarse grid $x_H$ via the restriction map
$v_H = \begin{pmatrix} 1&0&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&0&1 \end{pmatrix} v_h = \bar I_h^H v_h$
A weighted counterpart, with unchanged boundaries, is
$v_H = \begin{pmatrix} 1&0&0&0&0 \\ 0&1/4&1/2&1/4&0 \\ 0&0&0&0&1 \end{pmatrix} v_h$

Oversampling from coarse grid to fine

Note: the maps $I_h^H$ do not have inverses (information loss). $I_H^h$ is an interpolation operator, generating new vector elements.

Left and right inverses: $I_H^h I_h^H \ne I$, while $I_h^H I_H^h = I$. The second identity holds if $I_h^H$ is a plain restriction, but not if it is a weighted restriction (smoothing operator).

Prolongation through linear interpolation

Example: the interpolation operator $I_H^h$ for piecewise linear interpolation can be defined by
$v_{j+1/2} \leftarrow (v_j + v_{j+1})/2$
This defines a prolongation of the grid function by inserting an interpolated value between previous coarse grid points. Advanced interpolation (splines) is possible but often too expensive. Choose linear interpolation, matching linear FEM basis functions.

Matrix representation of the prolongation

Example: prolongation from 3 internal points to 7,
$v_h = \begin{pmatrix} 1/2&0&0 \\ 1&0&0 \\ 1/2&1/2&0 \\ 0&1&0 \\ 0&1/2&1/2 \\ 0&0&1 \\ 0&0&1/2 \end{pmatrix} v_H = I_H^h v_H$
Compare with the weighted restriction: $I_H^h = 2 F_\pi (I_h^H)^T$.

Tools for restriction $R_h^H$ and prolongation $P_H^h$

Two tools are needed: the Toeplitz AA-filter $F_\pi = \frac14\,\mathrm{tridiag}(1\ 2\ 1)$ and the plain restriction ("injection")
$I_h^H = \begin{pmatrix} 0&1&0&0&0&0&0 \\ 0&0&0&1&0&0&0 \\ 0&0&0&0&0&1&0 \end{pmatrix}$
General restriction map: $R_h^H = I_h^H F_\pi$
Linear interpolation prolongation map: $P_H^h = 2 F_\pi (I_h^H)^T$

Complete multigrid pattern algorithm for $L_h u_h = f_h$

1. Pre-smoothing: $u_h \leftarrow u_h - \gamma D_h^{-1}(L_h u_h - f_h)$
2. Restriction: $r_H \leftarrow R_h^H (L_h u_h - f_h)$
3. Solve for error: $L_H e_H = r_H$
4. Prolongation: $e_h \leftarrow P_H^h e_H$
5. Correction: $u_h \leftarrow u_h - e_h$
6. Post-smoothing: $u_h \leftarrow u_h - \gamma D_h^{-1}(L_h u_h - f_h)$

How does $L_h$ act on $e_h \in \mathrm{Im}(P_H^h)$? Galerkin condition: $L_H = R_h^H L_h P_H^h$ with $P_H^h = \rho^d (R_h^H)^T$. The congruence transformation preserves ellipticity on all grids.
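For the 1D model problem, the Galerkin condition can be checked directly: with $R_h^H = I_h^H F_\pi$ and $P_H^h = 2F_\pi(I_h^H)^T$ as above, $R_h^H T_h P_H^h$ reproduces the coarse-grid operator exactly. A minimal sketch:

Nh = 7; Nc = 3; h = 1/(Nh+1); H = 2*h;
Th = toeplitz([-2 1 zeros(1,Nh-2)])/h^2;         % fine-grid operator
TH = toeplitz([-2 1 zeros(1,Nc-2)])/H^2;         % coarse-grid operator
Fpi = toeplitz([2 1 zeros(1,Nh-2)])/4;           % AA-filter
Iinj = zeros(Nc,Nh); Iinj(:,2:2:end) = eye(Nc);  % injection
R = Iinj*Fpi; P = 2*Fpi*Iinj';                   % restriction, prolongation
err = norm(R*Th*P - TH)                          % ~ 0: Galerkin condition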

4. Spectral analysis of MG: $u'' = f$ on $[0,1]$

A 2nd order FDM discretization of $u'' = f$ yields $T_h u = f$, with
$T_h = \frac{1}{h^2}\begin{pmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 \end{pmatrix}$
The $N_f \times N_f$ matrix is negative definite, with eigenvalues
$\lambda_k[T_h] = -\frac{4}{h^2}\sin^2\frac{\theta_k^h}{2}, \qquad k = 1 : N_f$
and Fourier mode eigenvectors $v_j^k[T_h] = \sin\omega_k x_j^h = \sin\theta_k^h j$.

Eigenvalue locations

$\lambda_k[T_h] = -\frac{2}{h^2} + \frac{2}{h^2}\cos\theta_k^h = -\frac{4}{h^2}\sin^2\frac{\theta_k^h}{2}$

MG algorithm

Consider $T_H u = f$ on the coarse grid and $T_h u = f$ on the fine grid:
1. Jacobi smoother: $u \leftarrow u - \gamma D_h^{-1}(T_h u - f)$
2. Restriction: $r_H \leftarrow R_h^H (T_h u - f)$
3. Solve for error: $T_H e_H = r_H$
4. Prolongation: $e_h \leftarrow 2(R_h^H)^T e_H$
5. Correction: $u \leftarrow u - e_h$
6. Jacobi smoother: $u \leftarrow u - \gamma D_h^{-1}(T_h u - f)$

Damped Jacobi and Galerkin-condition downsampling share eigenvectors with $T_h$, so spectral analysis is possible.

Error analysis in the frequency domain

Assume the fine grid error is a linear combination of eigenvectors,
$e_h = \sum_{k=1}^{2N_c+1} \hat e(\theta_k^h)\, v^k[T_h]$
Derive an expression for the computable coarse grid error estimate
$e_H = \sum_{k=1}^{N_c} \tilde e(\theta_k^H)\, A(\theta_k^H)\, v^k[T_H]$
with $\tilde e(\theta_k^H) \approx \hat e(\theta_k^H)$, and where $A(\theta) \approx 1$ at least for $\theta \ll \pi$.
Accuracy: $A(\theta) \approx 1 \Rightarrow e_H \approx e_h$.

Step 1. Damped Jacobi smoother

Let $D_h = \mathrm{diag}(T_h)$ and iterate $u^{m+1} = u^m - \gamma D_h^{-1}(T_h u^m - f)$ with damping $0 < \gamma \le 1$, to get $u^{m+1} = P_J^h(\gamma)\, u^m + \gamma D_h^{-1} f$ with
$P_J^h(\gamma) = \begin{pmatrix} 1-\gamma & \gamma/2 & & \\ \gamma/2 & 1-\gamma & \gamma/2 & \\ & \ddots & \ddots & \ddots \\ & & \gamma/2 & 1-\gamma \end{pmatrix}$
This is symmetric Toeplitz, with eigenvalues $\lambda_k[P_J^h(\gamma)] = 1 - \gamma + \gamma\cos\theta_k^h$, $k = 1 : N_f$.

Frequency response: damped Jacobi

[Figure: frequency response $1 - \gamma + \gamma\cos\theta$ for $\gamma = 0.5(0.1)1$.]

Error after one damped Jacobi iteration

Denote the damped Jacobi frequency response by $\delta_\gamma(\theta) = 1 - \gamma + \gamma\cos\theta$. After one damped Jacobi iteration, the error is
$\bar e_h := P_J^h(\gamma)\, e_h = \sum_{k=1}^{2N_c+1} \hat e(\theta_k^h)\, \delta_\gamma(\theta_k^h)\, v^k[T_h]$

Step 2a. Evaluate residual

Because $r_h = T_h \bar e_h$, the residual is
$r_h = \sum_{k=1}^{2N_c+1} \hat e(\theta_k^h)\, \lambda_k[T_h]\, \delta_\gamma(\theta_k^h)\, v^k[T_h]$
Note that $|\lambda_k[T_h]|$ is huge: $\lambda_k[T_h] = -\frac{4}{h^2}\sin^2\frac{\theta_k^h}{2}$

Step 2b. Apply the Galerkin-condition anti-aliasing filter

Apply the doubly averaging AA-filter, $\bar r_h := F_\pi r_h$, before downsampling:
$F_\pi = \frac14\begin{pmatrix} 2 & 1 & & \\ 1 & 2 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & 2 \end{pmatrix}$
This is a symmetric Toeplitz convolution filter, identical to damped Jacobi ($\gamma = 1/2$), with frequency response $\sigma_\pi(\theta) = \delta_{1/2}(\theta) = \cos^2\frac\theta2$.

Doubly averaged residual and downsampling

After application of $F_\pi$ the residual is
$\bar r_h = \sum_{k=1}^{2N_c+1} \hat e(\theta_k^h)\, \sigma_\pi(\theta_k^h)\, \lambda_k[T_h]\, \delta_\gamma(\theta_k^h)\, v^k[T_h]$
Downsampling $h \to H$ retains only the even vector components:
$(\bar r_h)_{2j} = \sum_{k=1}^{2N_c+1} \hat e(\theta_k^h)\, \sigma_\pi(\theta_k^h)\, \lambda_k[T_h]\, \delta_\gamma(\theta_k^h)\, v_{2j}^k[T_h]$

Interlude: aliasing in terms of vectors

Recall HF aliasing to LF according to
$\sin\big((\omega^H+\omega_m)x_{2j}^h\big) = \sin\big((2\omega^H - (\omega^H - \omega_m))x_{2j}^h\big) = -\sin\big((\omega^H-\omega_m)x_j^H\big)$
In terms of the Fourier mode eigenvector representation,
$v_{2j}^{k_c+m}[T_h] = -v_j^{k_c-m}[T_{2h}]$
with Nyquist wave numbers $k_c = N_c + 1$ and $k_f = 2k_c$, corresponding to the Nyquist frequencies $\omega^H$ and $\omega^h$, respectively.

Step 2c. Aliasing due to downsampling

Eigenvector aliasing $v_{2j}^{k_c+m}[T_h] = -v_j^{k_c-m}[T_{2h}] = -v_j^{k_c-m}[T_H]$ implies that the coarse grid residual contains LF as well as the offending fold-over HF content:
$r_H = \sum_{k=1}^{N_c} \hat e(\theta_k^h)\, \sigma_\pi(\theta_k^h)\, \lambda_k[T_h]\, \delta_\gamma(\theta_k^h)\, v^k[T_H] \;-\; \sum_{k=1}^{N_c} \hat e(\theta_{k_f-k}^h)\, \sigma_\pi(\theta_{k_f-k}^h)\, \lambda_{k_f-k}[T_h]\, \delta_\gamma(\theta_{k_f-k}^h)\, v^k[T_H]$

Step 3. Solving for the coarse grid error

$e_H = T_H^{-1} r_H = \sum_{k=1}^{N_c} \hat e(\theta_k^h)\, \sigma_\pi(\theta_k^h)\, \delta_\gamma(\theta_k^h)\, \frac{\lambda_k[T_h]}{\lambda_k[T_H]}\, v^k[T_H] \;-\; \sum_{k=1}^{N_c} \hat e(\theta_{k_f-k}^h)\, \sigma_\pi(\theta_{k_f-k}^h)\, \delta_\gamma(\theta_{k_f-k}^h)\, \frac{\lambda_{k_f-k}[T_h]}{\lambda_k[T_H]}\, v^k[T_H]$
The attenuation factors depend on eigenvalue ratios:
$A_k^{LF} = \sigma_\pi(\theta_k^h)\, \delta_\gamma(\theta_k^h)\, \frac{\lambda_k[T_h]}{\lambda_k[T_H]}, \qquad A_{k_f-k}^{HF} = \sigma_\pi(\pi - \theta_k^h)\, \delta_\gamma(\pi - \theta_k^h)\, \frac{\lambda_{k_f-k}[T_h]}{\lambda_k[T_H]}$

Interlude: eigenvalue ratios

In the LF band we have, for $H = 2h$,
$\frac{\lambda_k[T_h]}{\lambda_k[T_H]} = \left(\frac{4}{h^2}\right)\left(\frac{H^2}{4}\right)\frac{\sin^2(\theta_k^h/2)}{\sin^2(\theta_k^H/2)} = \frac{4\sin^2(\theta_k^h/2)}{\sin^2\theta_k^h} = \frac{1}{\cos^2(\theta_k^h/2)}$
Likewise, in the HF band we have
$\frac{\lambda_{k_f-k}[T_h]}{\lambda_k[T_H]} = \frac{1}{\cos^2((\pi - \theta_k^h)/2)}$
Note the resonance at $\theta = \pi/2$.

[Figure: eigenvalue locations of $T_h$.]

[Figure: eigenvalue locations of $T_h$ and $T_H$.]

Attenuation factors on $(0, \pi/2)$ and full-band attenuation on $(0, \pi)$

$A^{LF}(\theta) = \frac{\sigma_\pi(\theta)\,\delta_\gamma(\theta)}{\cos^2(\theta/2)}, \qquad A^{HF}(\pi - \theta) = \frac{\sigma_\pi(\pi-\theta)\,\delta_\gamma(\pi-\theta)}{\cos^2((\pi-\theta)/2)}$
Note that $A^{HF}(\pi - \theta)$ equals the fold-over of $A^{LF}(\theta)$ for $\theta \in (\pi/2, \pi)$. Unfold $A^{HF}$ to study the full-band attenuation
$A(\theta) = \frac{\sigma_\pi(\theta)\,\delta_\gamma(\theta)}{\cos^2(\theta/2)}, \qquad \theta \in (0, \pi)$
of the coarse grid error estimate, $e_H = \sum_{k=1}^{N_f} \hat e(\theta_k^h)\, A(\theta_k^h)\, v^k[T_h]$.

Full-band attenuation on $(0, \pi)$

The full-band attenuation factor is
$A(\theta) = \frac{\sigma_\pi(\theta)\,\delta_\gamma(\theta)}{\cos^2(\theta/2)}$
The double pole at $\theta = \pi$ must be annihilated by $\sigma_\pi(\pi)$ or $\delta_\gamma(\pi)$. A damped Jacobi smoother helps iff $\gamma = 1/2$. Better: design a dedicated AA-filter, putting a sufficiently heavy zero on top of $\theta = \pi$ to make $A(\pi) = 0$.

However... it's your lucky day!

The double averaging used in the Galerkin condition restriction has $\sigma_\pi(\theta) = \cos^2\frac\theta2$, so
$A(\theta) = \frac{\sigma_\pi(\theta)\,\delta_\gamma(\theta)}{\cos^2(\theta/2)} = \delta_\gamma(\theta) = 1 - \gamma(1 - \cos\theta)$
Hence $A(0) = 1$ always, but $A(\pi) = 1 - 2\gamma = 0$ requires $\gamma = 1/2$. The coarse grid error estimate $e_H$ has accurate LF content if $\gamma = 1/2$ and the AA-filter $F_\pi$ is used.
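The full-band attenuation can be tabulated directly; a sketch (note that A(θ) reduces to δ_γ(θ) once the F_π filter is in place):

theta = linspace(0,pi,201);
sigma = cos(theta/2).^2;                   % AA-filter response
for gamma = [1/2 2/3 1]
    A = sigma.*(1-gamma+gamma*cos(theta))./cos(theta/2).^2;
    fprintf('gamma = %4.2f: A(0) = %4.1f, A(pi) = %5.1f\n', ...
            gamma, A(1), A(end));
end
% gamma = 1/2: A(pi) = 0; gamma = 1: A(pi) = -1, so aliasing corrupts LF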

But... wait a minute!

It is not the smoother, but the AA-filter, that does the trick!

Multigrid algorithm design:
- Looking for a (problem dependent) smoother is likely looking the wrong way
- Designing a (grid dependent) AA-filter is necessary to guarantee success

The AA-filter is necessary to suppress eigenvalue grid resonance, but the wrong smoother ($\gamma \ne 1/2$) will still corrupt the LF error, as $A(\pi) \ne 0$ causes aliasing effects.

Key ingredients that make MG work

1. $L$ is elliptic: this guarantees that grid resonances occur at regular intervals
2. $L^{-1}$ is regularizing: this implies that the error is smoother than the residual
3. A standard, smoothing iterative method reduces HF error: this suppresses HF error on the fine grid
4. Anti-aliasing is used before downsampling: this guarantees that the coarse grid error estimate is accurate

Eigenvalue locations $T_h$, $T_H$: non-elliptic Helmholtz equation

[Figure: non-elliptic case $u'' + \beta u = 0$ with $\beta > \pi^2$ (principle sketch).]

Conclusions

- Analysis of the damaging effects of fold-over in the residual
- The specific design of anti-aliasing filters is key to MG
- Smoothers are of secondary importance
- Arbitrary downsampling ratios are possible
- A multitude of Toeplitz AA-filters is available
- Works on non-selfadjoint operators as well
- Works even when eigenmodes are not of Fourier type
- Works on regular meshes (tensor product in 2D)
- Can be adapted to non-uniform meshes if smooth

Numerical test: two-grid method, N = 2048

[Figure, four panels: exact (g) & initial approximation (b); error estimate progression; remaining errors; exact (g) & numerical (b) solution.]

4 V-cycles on $u'' = f$ on a fine grid with 2048 points, error estimation on 1024 points, single $P_J(1/2)$ pre- and post-smoothing.

Numerical test: two-grid method, N = 256

[Figure, four panels: exact (g) & initial approximation (b); error estimate progression; remaining errors; exact (g) & numerical (b) solution.]

4 V-cycles on $u'' = f$ on a fine grid with 256 points, error estimation on 128 points, single $P_J(1/2)$ pre- and post-smoothing.

Numerical test: two-grid method, N = 32

[Figure, four panels: exact (g) & initial approximation (b); error estimate progression; remaining errors; exact (g) & numerical (b) solution.]

4 V-cycles on $u'' = f$ on a fine grid with 32 points, error estimation on 16 points, single $P_J(1/2)$ pre- and post-smoothing.

Observations in the numerical tests

- One can use almost the same, small number of iterations in every case
- By and large, convergence is unaffected by the mesh width as $h \to 0$
- One can solve the problem to high accuracy, so that only the global error remains
- The basic iterative method must be smoothing, but is less crucial
- Large initial errors are not a problem
- The method is robust, even with respect to noise

Why is the multigrid method faster?

All iterative methods work with the residual, $y^{m+1} = y^m + \gamma_m r^m$. The difference is that multigrid estimates the error:
- The residual is large and nonsmooth
- The error is moderate and smooth
Basic residual-reducing iterative methods fail to eliminate low frequency error components. Therefore only error estimating methods can be fast.

6. Multigrid on nested grids: the V-cycle

We want to solve $T_K u_K = f_K$ on a fine grid $\Omega_K$ of mesh width $h_K$. Construct an embedding of grids $\Omega_0 \subset \Omega_1 \subset \cdots \subset \Omega_K$ with mesh widths $h_0 > h_1 > \cdots > h_K$. On regular grids one typically takes $h_k = 2h_{k+1}$ to make restrictions and prolongations simple. We want to formulate multigrid methods for solving $T_K u_K = f_K$ on $\Omega_K$ using all available grids.

Assumptions

- There are restriction operators $R_k^{k-1} : \Omega_k \to \Omega_{k-1}$
- There are prolongation operators $P_k^{k+1} : \Omega_k \to \Omega_{k+1}$
- The restrictions and prolongations must be very fast and run in $O(N)$ operations
- There is an iterative method with the smoothing property on every grid $\Omega_k$, i.e., it must attenuate high frequencies by a constant factor independent of $h_k$
- $T_0 u_0 = f_0$ can be solved fast, to provide a crude estimate $u_0$

The operators $\{T_k\}$

It may be difficult to construct the $T_k$ needed on the different grids. Standard approach: if possible, use the same discretization. Otherwise, use the restriction and prolongation operators!

Definition: the operator triple $(T_k, R_k^{k-1}, P_{k-1}^k)$ satisfies the Galerkin condition if
$T_{k-1} = R_k^{k-1} T_k P_{k-1}^k$
where $R_k^{k-1}$ is the restriction and $P_{k-1}^k$ is the prolongation. The Galerkin condition preserves ellipticity on all grids.

The full, recursive multigrid V-cycle

1. Procedure $v := \mathrm{FMGV}(v_k, T_k, f_k)$
2. If $k = 0$: return $v \leftarrow T_0 \backslash f_0$ and STOP; else:
3. Pre-smoothing: $v_k \leftarrow v_k - \gamma D_k^{-1}(T_k v_k - f_k)$
4. Restrict residual to coarse grid: $r_{k-1} \leftarrow R_k^{k-1}(T_k v_k - f_k)$
5. Recursive error estimation: $e_{k-1} \leftarrow \mathrm{FMGV}(0, T_{k-1}, r_{k-1})$
6. Prolong error to fine grid: $e_k \leftarrow P_{k-1}^k e_{k-1}$
7. Correct by removing error: $v_k \leftarrow v_k - e_k$
8. Post-smoothing: $v_k \leftarrow v_k - \gamma D_k^{-1}(T_k v_k - f_k)$
9. Output: $v \leftarrow v_k$

Starting procedure

Obtain an initial approximation $v_0 = T_0\backslash f_0$ using a direct solver on the coarsest grid, then sweep down on successively finer grids:
1. Compute $v_0 = T_0\backslash f_0$
2. Prolong to the next finer grid: $v_k \leftarrow P_{k-1}^k v_{k-1}$
3. Compute the residual $r_k = T_k v_k - f_k$
4. Smoothing iteration(s): $v_k \leftarrow v_k - \gamma D_k^{-1} r_k$
5. Prolong to the next grid, etc., until the first $v_K$ has been obtained

Matlab code segment: the recursive V-cycle

rf = Tdx*v - f;          % Compute residual
v = v - gamma*(ddx\rf);  % Pre-smoothing Jacobi
rf = Tdx*v - f;          % Compute residual
rf = lowpass(rf);        % Anti-alias filter
rc = FMGrestrict(rf);    % Restrict to coarse grid
ec = FMGV(0,T2dx,rc);    % Solve error equation recursively
ef = FMGprolong(ec);     % Prolong to fine grid
v = v - ef;              % Remove error estimate
rf = Tdx*v - f;          % Compute residual
v = v - gamma*(ddx\rf);  % Post-smoothing Jacobi
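Wrapped as a recursive Matlab function with the k = 0 base case from the algorithm above, a minimal self-contained sketch could read as follows. The argument list is my adaptation, not the original code: T is assumed to be a cell array of grid operators T{1}, ..., T{K+1}, and the helper routines are as before.

function v = FMGV(v, T, f, k, gamma)
% Recursive multigrid V-cycle for T{k+1}*u = f on grid level k (sketch)
if k == 0
    v = T{1}\f;                            % direct solve on coarsest grid
    return
end
D = diag(diag(T{k+1}));                    % Jacobi preconditioner
v = v - gamma*(D\(T{k+1}*v - f));          % pre-smoothing
rc = FMGrestrict(lowpass(T{k+1}*v - f));   % AA-filter + restriction
ec = FMGV(zeros(size(rc)), T, rc, k-1, gamma);  % recursive error estimate
v = v - FMGprolong(ec);                    % prolong and correct
v = v - gamma*(D\(T{k+1}*v - f));          % post-smoothing
end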

Numerical test: full MG V-cycles, $N = 2^{10} - 1$

[Figure: error progression; solution: exact (g), numerical (b), err/dx²/10 (r).]

10 V-cycles on $u'' = f$ on a fine grid with 1023 points; coarsest grid with 255 points and $\gamma = 1/2, 2/3, 1$ Jacobi.

Numerical test: full MG V-cycles, $N = 2^{14} - 1$

[Figure: error progression; solution: exact (g), numerical (b), err/dx²/10 (r).]

10 V-cycles on $u'' = f$ on a fine grid with 16k points; coarsest grid with 255 points and $\gamma = 1/2, 2/3, 1$ Jacobi.

Numerical test: full MG V-cycles, $N = 2^{18} - 1$

[Figure: error progression; solution: exact (g), numerical (b), err/dx²/10 (r).]

10 V-cycles on $u'' = f$ on a fine grid with 256k points; coarsest grid with 255 points and $\gamma = 1/2, 2/3, 1$ Jacobi.

7. Efficiency: memory and execution time

Let the $K$th grid have $N$ internal points. Then grid $K-1$ has $(N+1)/2 - 1 < N/2$ internal points. The total storage requirement is therefore
$S = N(1 + 2^{-1} + \cdots + 2^{-K}) < 2N$
So, independent of the recursion depth, the memory requirement does not exceed that of using a single finer mesh.

Memory requirement in d dimensions

Let the $K$th grid have $N^d$ points (curse of dimension). Then grid $K-1$ has $(N-1)^d/2^d < (N/2)^d$ points. The total storage requirement in $d$ dimensions is
$S_d < N^d(1 + 2^{-d} + \cdots + 2^{-dK}): \qquad S_1 < 2N, \quad S_2 < \tfrac{4N^2}{3}, \quad S_3 < \tfrac{8N^3}{7}$
So, for once, the curse of dimension works our way! Multigrid is always cheaper than using a single finer grid.

Execution time in theory

The work unit $U_K$ is the cost of one sweep on the fine $K$th grid; $W_0 \sim 2^{-dK} N^{2d}$ is the cost of the direct solution on the coarsest grid. Neglecting restriction and prolongation costs, the cost $W$ of a V-cycle with one pre- and one post-smoothing satisfies
$W_d < 2U_K(1 + 2^{-d} + \cdots + 2^{-dK}) + 2W_0$
$W_1 < 4U_K + 2W_0, \qquad W_2 < \tfrac{8}{3}U_K + 2W_0, \qquad W_3 < \tfrac{16}{7}U_K + 2W_0$
The execution time is less than that of a single sweep on the next finer grid.

So it's cheap; what accuracy do we get?

Solving $T_h u_h = f$ we obtain $v_h$ with:
- Algebraic error: $e_h = v_h - u_h$
- Global error: $g_h = u_h - u(x_h)$
- Total error: $\delta_h = v_h - u(x_h)$
Note that $\delta_h = e_h + g_h$, therefore
$\|v_h - u(x_h)\| \le \|v_h - u_h\| + \|u_h - u(x_h)\| \le \varepsilon_{MG} + C h^2$
Try to make $\varepsilon_{MG} \lesssim C h^2$, so that the global error dominates.

Convergence analysis: a brief sketch

Assume a linear convergence rate $\rho$ for each V-cycle. Running $M$ V-cycles reduces the error to the global error level if $\rho^M \sim h^2 \sim N^{-2}$ for a 2nd order method, implying a number of iterations $M = O(\log N)$. Because the cost of each V-cycle is $U_K = O(N^d)$, the total cost of reducing the error to the global error level is
$O(N^d \log N) = C \cdot \mathrm{cost(FFT)}$

8. Convection-diffusion $u_t = u_{xx} + \alpha u_x - f$

Stationary problem, homogeneous Dirichlet conditions on $[0,1]$:
$L_\alpha u = u'' + \alpha u' = f$
Convection dominated for Péclet numbers $\alpha \gg 1$; boundary layer at $x = 0$ for $\alpha > 0$, otherwise at $x = 1$.

$L_\alpha$ is not self-adjoint, but $w(x) = e^{\alpha x}$ is a symmetrizer:
$w L_\alpha u = w u'' + \alpha w u' = (w u')'$
so $(w L_\alpha)^* = w L_\alpha$. Pre- and post-multiplying by $w^{\pm 1/2}$ produces
$A_\alpha = w^{1/2} L_\alpha w^{-1/2} = w^{-1/2} L_\alpha^* w^{1/2} = (w^{1/2} L_\alpha w^{-1/2})^* = A_\alpha^*$

Eigenfunctions: $L_\alpha u = \lambda u$

Put $u = w^{-1/2} v$. Then $L_\alpha w^{-1/2} v = \lambda w^{-1/2} v$, and
$L_\alpha w^{-1/2} v = (w^{-1/2} v)'' + \alpha (w^{-1/2} v)' = \lambda w^{-1/2} v$
simplifies to
$v'' = \left(\lambda + \frac{\alpha^2}{4}\right) v$
so
$\lambda_k[L_\alpha] = -(k\pi)^2 - \frac{\alpha^2}{4}, \qquad u_k(x) = e^{-\alpha x/2} \sin k\pi x$
Note that $L_\alpha$ is equi-elliptic with respect to the Péclet number, as $M_2[L_\alpha] \le -\pi^2$.

FDM discretization: convection-diffusion

$N \times N$ Toeplitz matrix $T_h$ with $h = 1/(N+1)$:
$T_h = \frac{1}{h^2}\begin{pmatrix} -2 & 1 + h\alpha/2 & & \\ 1 - h\alpha/2 & -2 & 1 + h\alpha/2 & \\ & \ddots & \ddots & \ddots \\ & & 1 - h\alpha/2 & -2 \end{pmatrix}$
This gives an unsymmetric system $T_h u_h = f$. A discrete diagonal exponential symmetrizer exists.

Eigenvalues of unsymmetric Toeplitz matrices

An $N \times N$ Toeplitz matrix
$T = \begin{pmatrix} d & b & & \\ a & d & b & \\ & \ddots & \ddots & \ddots \\ & & a & d \end{pmatrix}$
has eigenvalues
$\lambda_k[T] = d + 2\sqrt{ab}\,\cos\frac{k\pi}{N+1}, \qquad k = 1 : N$
Note: real eigenvalues if $ab > 0$, otherwise complex conjugate pairs.

Discrete eigenvalues: the mesh Péclet number

For the FDM approximation of the convection-diffusion operator $L_\alpha$, the discrete eigenvalues are
$\lambda_k[T_h] = -\frac{2}{h^2} + \frac{2}{h^2}\sqrt{1 - \frac{h^2\alpha^2}{4}}\,\cos\theta_k^h, \qquad k = 1 : N_f$
Note that $\lambda_k[L_\alpha]$ is always real, but $\lambda_k[T_h]$ is real only if $h\alpha < 2$. The mesh Péclet number limits $H$, which must satisfy $H\alpha < 2$.
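The mesh Péclet condition is easy to observe numerically; a sketch comparing hα above and below 2 for α = 200, as in the experiments below:

alpha = 200;
for N = [49 199]                                 % h*alpha = 4 and 1
    h = 1/(N+1);
    Th = (diag(-2*ones(N,1)) + diag((1+h*alpha/2)*ones(N-1,1),1) ...
                             + diag((1-h*alpha/2)*ones(N-1,1),-1))/h^2;
    fprintf('h*alpha = %g: max |Im lambda| = %.2e\n', ...
            h*alpha, max(abs(imag(eig(Th)))));
end
% h*alpha > 2 gives complex eigenvalues; h*alpha < 2 keeps them real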

Computational experiment: high Péclet numbers

[Figure, three panels: original C-grid error (exact); C-grid error estimate; exact F-grid error (r) and scaled, exact C-grid error (b).]

Classical MG, $\alpha = 200$, $H = 1/500$, $\rho = 2$.

Computational experiment: high Péclet numbers

[Figure: convergence history, discretization (r) and algebraic (b) errors versus iteration number.]

Classical MG, $\alpha = 200$, $H = 1/500$, $\rho = 2$.

Eigenvalue locations $T_h$, $T_H$: convection-diffusion

[Figure: mesh Péclet condition satisfied (principle sketch).]