Multigrid Methods for Discretized PDE Problems

Towards Multigrid Methods for Discretized PDE Problems
Institute for Applied Mathematics, University of Heidelberg, Feb 1-5, 2010

Outline
- A model problem
- Solution of very large linear systems
- Iterative methods
- Two-grid and multigrid methods
- Preconditioning, and what to do when the problem gets a problem

Summary: Basic Methods

Direct Methods: LR Decomposition
A = LR, with L a lower triangular matrix (unit diagonal) and R an upper triangular matrix.
Direct solver for the system Ax = b. Very slow, large memory usage:
E_LR = O(N^3), M_LR = O(N^2)
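As a small illustration (not from the slides), a direct solve via an LU/LR factorization with SciPy; the one-time O(N^3) factorization is exactly what becomes prohibitive for large N:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
N = 500
A = rng.standard_normal((N, N)) + N * np.eye(N)   # diagonally dominant test matrix
b = rng.standard_normal(N)

lu, piv = lu_factor(A)            # factorization A = LR: O(N^3) work, O(N^2) memory
x = lu_solve((lu, piv), b)        # forward/backward substitution: O(N^2) work
print(np.linalg.norm(A @ x - b))  # residual near machine precision
```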

Basic Iterative Methods
With initial guess x^0 in R^N:
x^{t+1} = x^t + θ B (b − A_h x^t),   d^t := b − A_h x^t  (the defect)
Convergence depends on the iteration matrix B; analyze the spectral norm:
‖e^{t+1}‖ ≤ ‖I − B A_h‖ ‖e^t‖,   ‖I − B A_h‖_2 = λ_max(I − B A_h)
Problem: very bad convergence rate, depending on the mesh size h: ‖I − B A_h‖_2 ≈ 1 − h².
Many steps are necessary to solve the system up to a tolerance ε; the overall complexity is bad:
N_iter = O(N),   E_Jacobi = O(N^2),   M_Jacobi = O(N)
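A minimal sketch of this fixed-point iteration for the 1D Laplace model problem, with B = D^{-1} (damped Jacobi); the helper names are ours, not from the slides:

```python
import numpy as np

def laplace_1d(n):
    """1D Laplace matrix with stencil [-1, 2, -1] (n interior points)."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def jacobi(A, b, x0, theta=1.0, steps=500):
    """x^{t+1} = x^t + theta * D^{-1} (b - A x^t), i.e. B = D^{-1}."""
    d_inv = 1.0 / np.diag(A)
    x = x0.copy()
    for _ in range(steps):
        x = x + theta * d_inv * (b - A @ x)   # defect d^t = b - A x^t
    return x

n = 63                                  # mesh size h = 1/(n+1)
A = laplace_1d(n)
b = np.full(n, 1.0 / (n + 1) ** 2)      # right-hand side f = 1, scaled by h^2
x = jacobi(A, b, np.zeros(n))
print(np.linalg.norm(b - A @ x))        # residual decays only at rate ~ 1 - h^2
```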

Gradient Methods
Instead of solving Ax = b, minimize the functional E(x) = 1/2 ⟨Ax, x⟩ − ⟨b, x⟩.
In every step t, choose a search direction r^t and make the optimal step:
min_{s ∈ R} E(x^t + s r^t),   s = ⟨d^t, r^t⟩ / ⟨A r^t, r^t⟩
The optimal search direction would be r^t = x − x^t. Since x is not available, use
r^t = b − A x^t = Ax − A x^t = A(x − x^t).
Problem: the search directions are bad, many steps are necessary:
‖e^{t+1}‖ ≤ ((1 − 1/κ)/(1 + 1/κ))^t ‖e^0‖,   (1 − 1/κ)/(1 + 1/κ) ≈ 1 − 1/κ = 1 − h²   (since κ ≈ h^{-2})
Bad overall complexity (due to the many steps): N_iter = O(N), E_GM = O(N^2), M_GM = O(N)
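A hedged sketch of the gradient (steepest descent) method described above, again for the 1D Laplace model problem; the function name is ours:

```python
import numpy as np

def gradient_method(A, b, x0, steps=500):
    """Steepest descent: r^t = b - A x^t, optimal step s = <r, r> / <A r, r>."""
    x = x0.copy()
    for _ in range(steps):
        r = b - A @ x                      # search direction = defect
        s = (r @ r) / (r @ (A @ r))        # optimal one-dimensional step length
        x = x + s * r
    return x

n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplace again
b = np.full(n, 1.0 / (n + 1) ** 2)
x = gradient_method(A, b, np.zeros(n))
print(np.linalg.norm(b - A @ x))   # converges, but the rate degrades like 1 - h^2
```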

Method of Conjugate Gradients
Use better search directions. [Figure: search directions d^1, d^2 of the gradient method vs. d^1, ..., d^4 of CG]
Orthogonalize the directions r^k, k = 0, ..., t, in the A-inner product:
⟨r^k, r^l⟩_A = ⟨A r^k, r^l⟩ = 0   (k ≠ l)
Since A is symmetric, the orthogonalization can be done in one step:
d^{t+1} = b − A x^t,   r^{t+1} = d^{t+1} − (⟨d^{t+1}, r^t⟩_A / ⟨r^t, r^t⟩_A) r^t
The convergence rate depends on the square root of the condition number:
‖e^{t+1}‖ ≤ ((1 − 1/√κ)/(1 + 1/√κ))^t ‖e^0‖,   (1 − 1/√κ)/(1 + 1/√κ) ≈ 1 − 1/√κ = 1 − h
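A minimal sketch of the textbook CG recursion, which is algebraically equivalent to the A-orthogonalization of the search directions described above; names are ours:

```python
import numpy as np

def conjugate_gradients(A, b, x0, tol=1e-12, max_steps=10_000):
    """Textbook CG; the update of d keeps the search directions A-orthogonal."""
    x = x0.copy()
    r = b - A @ x          # residual / defect
    d = r.copy()           # first search direction
    rr = r @ r
    for _ in range(max_steps):
        Ad = A @ d
        alpha = rr / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        d = r + (rr_new / rr) * d   # new direction, A-orthogonal to the old one
        rr = rr_new
    return x

n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.full(n, 1.0 / (n + 1) ** 2)
x = conjugate_gradients(A, b, np.zeros(n))
print(np.linalg.norm(b - A @ x))   # far fewer steps than Jacobi or gradient (rate ~ 1 - h)
```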

Comparison
Solve the 2D Laplace equation, problem size N = M², h = 1/M, h² = 1/N.

Method              | E_iter | conv. rate | N_iter | E_all      | time for N = 10^6
LR                  | N^3    | 0          | 1      | N^3        | 30 years
Jacobi              | 5 N    | 1 - h^2    | N      | 5 N^2      | 2 hours
Gradient            | 10 N   | 1 - h^2    | N      | 10 N^2     | 3 hours
Conjugate Gradients | 20 N   | 1 - h      | √N     | 20 N^(3/2) | 1 min

Towards Multigrid Methods

What Can We Do Better?
Optimal numerical complexity would require:
- Numerical complexity in every step E_step = O(N): given by Jacobi, Gradient, Conjugate Gradients.
- Convergence rate / number of steps: to reduce the error by ε, N_iter = O(1): not the case for Jacobi (O(N)), Gradient (O(N)), Conjugate Gradients (O(√N)).

Dependence of the Convergence Rate on the Mesh Size
‖e^{t+1}‖ ≤ ρ ‖e^t‖
Jacobi:             ρ = λ_max(I − B A_h) = λ_max(D^{-1}(L + R)) ≈ 1 − h²
Gradient:           ρ = (1 − 1/κ)/(1 + 1/κ) ≈ 1 − 1/κ ≈ 1 − h²
Conjugate Gradient: ρ = (1 − 1/√κ)/(1 + 1/√κ) ≈ 1 − 1/√κ ≈ 1 − h
Other interpretation: on a fixed mesh Ω_h, where h is a constant, we already have an optimal solver!

Why Does Convergence Depend on the Mesh Size?
A detailed analysis of the Jacobi iteration for the 1D Laplace problem. Start with a random initial guess x^0; the error in the beginning is randomly distributed. Perform some steps of Jacobi.

[Figure slides: Jacobi Solution, steps 0 to 9 - the iterate after each Jacobi step]

Solution Behavior
The solution changes very rapidly in the beginning (the first two or three steps); then nearly nothing happens. Compare the solution in steps 0-2 with steps 2-9.

[Figure slides: Jacobi Error, steps 0 to 9 - the error after each Jacobi step]

Error Behavior
The overall error amplitude does not change a lot, but the error frequencies change!

The Concept of Smoothing
Jacobi is very bad and slow as a solver, but high frequencies are smoothed quickly.

Smoothing Property of the Richardson Iteration - I
x^{(t+1)} = x^{(t)} + θ (b − A_h x^{(t)})
With e^{(t)} := x^{(t)} − x:
e^{(t+1)} = e^{(t)} + θ (A_h x − A_h x^{(t)}) = e^{(t)} − θ A_h e^{(t)} = [I − θ A_h] e^{(t)}
Eigenvalues & eigenvectors. Let A_h ω_h^i = λ_i ω_h^i, i = 1, ..., N, with λ_min(A_h) = λ_1 ≤ ... ≤ λ_N = λ_max(A_h).
The eigenvectors are orthonormal: ⟨ω_h^i, ω_h^j⟩ = 0 if i ≠ j, and 1 if i = j.
We know the eigenvalues and eigenvectors of the iteration matrix:
[I − θ A_h] ω_h^i = ω_h^i − θ A_h ω_h^i = ω_h^i − θ λ_i ω_h^i = (1 − θ λ_i) ω_h^i

Smoothing Property of the Richardson Iteration - II
We have e^{(t+1)} = [I − θ A_h] e^{(t)} = [I − θ A_h]^{t+1} e^{(0)}.
Develop the error in the eigenbasis, e^{(0)} = Σ_{i=1}^N ε_i ω_h^i; the error propagation gives
e^{(t)} = [I − θ A_h]^t e^{(0)} = Σ_{i=1}^N ε_i [I − θ A_h]^t ω_h^i = Σ_{i=1}^N ε_i (1 − θ λ_i)^t ω_h^i.
Since the eigenvectors are orthonormal (⟨ω_h^i, ω_h^j⟩ = δ_ij),
‖e^{(t)}‖² = ⟨e^{(t)}, e^{(t)}⟩ = Σ_{i,j=1}^N ε_i ε_j (1 − θ λ_i)^t (1 − θ λ_j)^t ⟨ω_h^i, ω_h^j⟩ = Σ_{i=1}^N ε_i² (1 − θ λ_i)^{2t}.
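A quick numerical check of this expansion (our own sketch, choosing θ = 1/λ_max as on the following slides): the coefficients of the high-frequency half of the spectrum shrink by at least a factor of 2 per Richardson step, while the low-frequency coefficients barely move:

```python
import numpy as np

n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplace
lam, W = np.linalg.eigh(A)                             # eigenvalues ascending, eigenvectors as columns
theta = 1.0 / lam[-1]                                  # Richardson step length 1 / lambda_max

e = np.random.default_rng(0).standard_normal(n)        # random initial error e^(0)
for t in range(6):
    eps = W.T @ e                                      # coefficients eps_i in the eigenbasis
    print(t, np.abs(eps[: n // 2]).max(), np.abs(eps[n // 2 :]).max())
    e = e - theta * (A @ e)                            # one Richardson step applied to the error
```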

Smoothing Property of the Richardson Iteration - III
We have ‖e^{(t)}‖² = Σ_{i=1}^N ε_i² (1 − θ λ_i)^{2t}.
Convergence, if |1 − θ λ_i| < 1 for all i, i.e. for 0 < θ < 2/λ_max(A_h).

Smoothing Property of the Richardson Iteration - IV: Eigenvalues
A closer look at the eigenvalues for the 1D Laplace matrix, i = 1, ..., N:
λ_i = 2 − 2 cos(iπ/(N+1)),   ω_h^i = (sin(ikπ/(N+1)))_{k=1}^N
[Figure: distribution of the eigenvalues]
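A small sanity check (ours) of these closed-form eigenpairs against a numerical eigendecomposition:

```python
import numpy as np

n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
i = np.arange(1, n + 1)
lam_formula = 2 - 2 * np.cos(i * np.pi / (n + 1))           # lambda_i = 2 - 2 cos(i*pi/(N+1))
print(np.max(np.abs(np.sort(lam_formula) - np.linalg.eigvalsh(A))))   # ~ machine precision

k = np.arange(1, n + 1)
w4 = np.sin(4 * k * np.pi / (n + 1))                        # the 4th eigenvector omega^4
print(np.linalg.norm(A @ w4 - lam_formula[3] * w4))         # ~ machine precision
```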

Smoothing Property of the Richardson Iteration - V: Eigenvectors
[Figures: eigenvectors for λ_1 ≈ 0.081, λ_2 ≈ 0.317, λ_4 ≈ 1.169, λ_8 ≈ 3.310]

Smoothing Property of the Richardson Iteration - V: Eigenvalues of the Smoothing Operator
For θ = λ_max(A_h)^{-1}:   ρ_i := λ_i(I − θ A_h) = 1 − θ λ_i.
For i = N/2: ...

Smoothing Property of the Richardson Iteration - VI
For i = N/2 (and θ = 1/λ_max ≈ 1/4):
ρ_{N/2} = 1 − (1/4)(2 − 2 cos(π(N/2)/(N+1))) ≈ 1 − (1/4)(2 − 2 cos(π/2)) = 1/2
For i > N/2: ρ_i < 1/2.
Error contributions of the large eigenvalues: ẽ^{(0)} := Σ_{i=N/2}^N ε_i ω_h^i,
‖ẽ^{(t)}‖² = Σ_{i=N/2}^N ε_i² (1 − λ_i/4)^{2t} ≤ Σ_{i=N/2}^N ε_i² (1/2)^{2t}.

Smoothing Property of the Richardson Iteration - Summary
The Richardson iteration is a very bad and slow solver, but a very good smoother for high frequencies.
The high frequencies are all ω_h^i with i > N/2, i.e. i h > 1/2.
High frequencies are only visible on the finest mesh Ω_h.

Hierarchical Approach
Smooth the high frequencies on the fine mesh Ω_h, N = 40 (a few steps). Transfer to the coarse mesh, N = 20. Treat the coarse problem on the mesh Ω_{2h}.

Hierarchical Approach
Low frequencies on the fine mesh Ω_h are high frequencies on the coarse mesh Ω_{2h}. Treat the coarse problem on the mesh Ω_{2h}.

Possible Benefits
Only a few (a fixed number, O(1)?) operations on the fine mesh Ω_h. The coarse mesh problem on Ω_{2h} is smaller (for the 2D model problem by a factor of 4); perhaps a direct solver is possible? A nested approach is possible: Ω_h, Ω_{2h}, Ω_{4h}, ..., Ω_H.

Analysis of the Smoothing Property (Richardson)
Error contributions of the large eigenvalues: ẽ^{(0)} := Σ_{i=N/2}^N ε_i ω_h^i,
‖ẽ^{(t)}‖² = Σ_{i=N/2}^N ε_i² (1 − λ_i/4)^{2t} ≤ Σ_{i=N/2}^N ε_i² (1/2)^{2t}.
To reduce the high-frequency error by ε: (1/2)^t ≤ ε, i.e. t ≥ log(ε)/log(1/2).
A fixed number of iterations, independent of h and of N!

The Two-Grid Iteration

The Basic 2-Grid Iteration
2-grid method for solving A_h x_h = b_h. Initial guess x_h^{(0)}, iterate for t ≥ 0:
1. Smooth:               y_h^{(t)}   := S(A_h, b_h, x_h^{(t)})
2. Defect:               d_h^{(t)}   := b_h − A_h y_h^{(t)}
3. Restrict:             d_H^{(t)}   := R d_h^{(t)}
4. Coarse mesh solution: x_H^{(t)}   := A_H^{-1} d_H^{(t)}
5. Prolongate:           x_h^{(t+1)} := y_h^{(t)} + P x_H^{(t)}
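A compact sketch (ours) of one such 2-grid cycle for the 1D Laplace model problem, with a Richardson smoother, linear interpolation as prolongation and R = P^T; the individual steps are detailed on the following slides. One assumption of ours: the coarse operator is formed variationally as A_H = R A_h P rather than by re-discretizing on Ω_{2h}:

```python
import numpy as np

def laplace_1d(n):
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def prolongation(n_coarse):
    """Linear interpolation: n_coarse coarse points -> 2*n_coarse + 1 fine points."""
    P = np.zeros((2 * n_coarse + 1, n_coarse))
    for j in range(n_coarse):
        P[2 * j : 2 * j + 3, j] = [0.5, 1.0, 0.5]
    return P

def two_grid_step(A_h, b_h, x, P, R, A_H, theta, nu=3):
    for _ in range(nu):                       # 1. smooth (nu Richardson steps)
        x = x + theta * (b_h - A_h @ x)
    d_h = b_h - A_h @ x                       # 2. defect
    d_H = R @ d_h                             # 3. restrict
    x_H = np.linalg.solve(A_H, d_H)           # 4. coarse mesh solution (direct solve)
    return x + P @ x_H                        # 5. prolongate and correct

n_c = 31
A_h = laplace_1d(2 * n_c + 1)
P = prolongation(n_c)
R = P.T                                       # restriction = adjoint of the prolongation
A_H = R @ A_h @ P                             # Galerkin coarse operator (our choice here)
theta = 1.0 / np.linalg.eigvalsh(A_h)[-1]     # Richardson step length 1 / lambda_max
b = np.full(2 * n_c + 1, 1.0 / (2 * n_c + 2) ** 2)

x = np.zeros_like(b)
for t in range(10):
    x = two_grid_step(A_h, b, x, P, R, A_H, theta)
    print(t, np.linalg.norm(b - A_h @ x))     # residual drops by a roughly mesh-independent factor
```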

The Basic 2-Grid Iteration - Smoothing
1. Smooth: y_h^{(t)} := S(A_h, b_h, x_h^{(t)})
An easy and fast iterative scheme (Jacobi, Richardson, Gauss-Seidel); just a few steps, N_iter = 2-5.
Smooths the high frequencies: ‖ẽ^{(t)}‖ ≤ ρ^t ‖ẽ^{(0)}‖ with ρ ≤ 1/2 for the components i > N/2.

The Basic 2-Grid Iteration - Restriction
3. Restrict: d_H^{(t)} := R d_h^{(t)}   ... see later ...

The Basic 2-Grid Iteration - Prolongation
5. Prolongate: x_h^{(t+1)} := y_h^{(t)} + P x_H^{(t)}
Transfer the solution x_H from Ω_H to x_h on Ω_h, by simple (linear) interpolation. In matrix notation:
P = ( 1    0    0   ...
      1/2  1/2  0   ...
      0    1    0   ...
      0    1/2  1/2 ...
      ...               )

The Basic 2-Grid Iteration - Restriction
3. Restrict: d_H^{(t)} := R d_h^{(t)}
Transfer the defect d_h from Ω_h to d_H on Ω_H. Use the adjoint of the prolongation, R = P^T. Example, in matrix notation:
R = ( 1  1/2  0  0   ...
      0  1/2  1  1/2 ...
      ...                )
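To make the matrix notation concrete, a tiny sketch (ours) that materializes P and R = P^T; each column of P carries the interpolation weights 1/2, 1, 1/2, and the exact row pattern depends on how the coarse points are numbered relative to the fine ones, so it may differ slightly from the matrices shown on the slides:

```python
import numpy as np

def prolongation(n_coarse):
    """Columns carry the interpolation weights 1/2, 1, 1/2 around each coarse point."""
    P = np.zeros((2 * n_coarse + 1, n_coarse))
    for j in range(n_coarse):
        P[2 * j : 2 * j + 3, j] = [0.5, 1.0, 0.5]
    return P

P = prolongation(3)
R = P.T                  # restriction as the adjoint of the prolongation
print(P)
print(R)
```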

The Basic 2-Grid Iteration - Coarse Mesh Solution
4. Coarse mesh solution: x_H^{(t)} := A_H^{-1} d_H^{(t)}
Direct solution of the system A_H x_H = d_H. For the 2D model problem: N_H = N_h/4.
Solution with LR: E_LR(N_H) = N_H^3 = (1/64) E_LR(N_h). Better, but still too much!

Two-Grid Iteration: Compact Notation I
Coarse mesh problem: A_H x_H = d_H. Write as an iterative method on the fine mesh Ω_h:
x_h^{(t+1)} = x_h^{(t)} + P A_H^{-1} R (b_h − A_h x_h^{(t)}) = [I − P A_H^{-1} R A_h] x_h^{(t)} + P A_H^{-1} R b_h,
with B_CM := I − P A_H^{-1} R A_h.
For the error e_h^{(t+1)}, using b_h = A_h x_h:
e_h^{(t+1)} = x_h^{(t+1)} − x_h = B_CM x_h^{(t)} + P A_H^{-1} R A_h x_h − x_h = B_CM e_h^{(t)}.

Two-Grid Iteration: Compact Notation II
Coarse mesh solution: e_h^{(t+1)} = B_CM e_h^{(t)}. Smoothing operation: e_h^{(t+1)} = B_SM e_h^{(t)}.
Two-grid iteration: initial guess x_h^{(0)}, iterate
e_h^{(t+1)} = B_SM^{ν₂} B_CM B_SM^{ν₁} e_h^{(t)}.
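As a numerical illustration (our own sketch, with a Richardson smoother and a Galerkin coarse operator A_H = R A_h P), one can assemble B_SM, B_CM and the two-grid iteration matrix explicitly for a small 1D Laplace problem and watch its spectral radius shrink as the number of smoothing steps ν grows, in line with the bound ρ(ν) ≤ c/ν derived on the next slides:

```python
import numpy as np

n_c = 15
n_f = 2 * n_c + 1
A_h = 2 * np.eye(n_f) - np.eye(n_f, k=1) - np.eye(n_f, k=-1)
P = np.zeros((n_f, n_c))
for j in range(n_c):
    P[2 * j : 2 * j + 3, j] = [0.5, 1.0, 0.5]
R = P.T
A_H = R @ A_h @ P                                       # Galerkin coarse operator
I = np.eye(n_f)

B_SM = I - (1.0 / np.linalg.eigvalsh(A_h)[-1]) * A_h    # Richardson smoother
B_CM = I - P @ np.linalg.solve(A_H, R @ A_h)            # coarse mesh correction

for nu in (1, 2, 4, 8):
    B_TG = np.linalg.matrix_power(B_SM, nu) @ B_CM @ np.linalg.matrix_power(B_SM, nu)
    rho = np.max(np.abs(np.linalg.eigvals(B_TG)))
    print(nu, rho)                                      # shrinks as nu grows
```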

Two-Grid Iteration: Convergence I
Analyze the iteration matrix B_TG = B_CM B_SM^ν.
Coarse mesh solution: B_CM = I − P A_H^{-1} R A_h.   Richardson iteration: B_SM = I − (1/λ_max) A_h.
The two-grid iteration:
B_TG = [I − P A_H^{-1} R A_h] [I − (1/λ_max) A_h]^ν = [A_h^{-1} − P A_H^{-1} R] · A_h [I − (1/λ_max) A_h]^ν,
where the first factor is the coarse mesh approximation part and the second factor the smoothing part.
Show: ‖B_TG‖ ≤ ρ(ν) = c ν^{-1} < 1 for ν > ν_0.

Two-Grid Iteration: Convergence Proof
Proof:
1. Split the error influences: ‖e^{(t+1)}‖ = ‖B_TG e^{(t)}‖ ≤ ‖A_h^{-1} − P A_H^{-1} R‖ · ‖A_h [I − (1/λ_max) A_h]^ν‖ · ‖e^{(t)}‖
2. Show that for the smoothing operator it holds: ‖A_h [I − (1/λ_max) A_h]^ν v‖ ≤ c_s ν^{-1} h^{-2} ‖v‖
3. Show that for the coarse mesh approximation it holds: ‖(A_h^{-1} − P A_H^{-1} R) v‖ ≤ c_a h² ‖v‖
4. Put it all together, with ρ(ν) := c_s c_a ν^{-1} < 1 for ν large enough.

Two-Grid Iteration: Convergence Proof I, the Smoothing Operator
Step 2: show that ‖A_h [I − (1/λ_max) A_h]^ν v‖ ≤ c_s ν^{-1} h^{-2} ‖v‖.
Done above for the Richardson iteration and the Laplace matrix. Must be done separately for every matrix (partial differential equation)! A separate analysis is necessary for different smoothers (Jacobi, Gauss-Seidel, Richardson, ILU). Difficult for complex equations and complex smoothing operators.

Two-Grid Iteration: Convergence Proof II, the Coarse Mesh Approximation Operator
Step 3: show that ‖(A_h^{-1} − P A_H^{-1} R) v‖ ≤ c_a h² ‖v‖.
We must use properties of the partial differential equation and the discretization!
Use a priori error estimates: ‖u_h − u‖ ≤ c h² ‖f‖,   ‖u_H − u‖ ≤ c H² ‖f‖.
Then, using the triangle inequality:
‖u_h − u_H‖ ≤ ‖u_h − u‖ + ‖u_H − u‖ ≤ c (h² + H²) ‖f‖.
Assume geometric mesh growth, h < H ≤ C h (e.g. H = 2h): ‖u_h − u_H‖ ≤ c (1 + C²) h² ‖f‖.
Use the special choice v := A_h u_h = f_h above; then A_h^{-1} v = u_h and P A_H^{-1} R v = u_H (the prolongated coarse solution).
Finally: ‖(A_h^{-1} − P A_H^{-1} R) v‖ = ‖u_h − u_H‖ ≤ c (1 + C²) h² ‖f‖, i.e. c_a = c (1 + C²).

Two-Grid Iteration: Convergence Proof III, Putting It Together
Step 4: combine Step 2 and Step 3:
‖B_TG‖ ≤ ‖A_h^{-1} − P A_H^{-1} R‖ · ‖A_h [I − (1/λ_max) A_h]^ν‖ ≤ c (1 + C²) h² · c_s ν^{-1} h^{-2} = c_s c (1 + C²) ν^{-1}.
For ν > ν_0: ρ(ν) = c_s c (1 + C²) ν^{-1} < 1.   q.e.d.

Two-Grid Iteration: Numerical Complexity
Good convergence: to reduce the error by ε, only t = O(|log ε|) iterations are needed, independently of h.
But a large effort per step:
E_step = E_smooth + E_coarse mesh = 5 ν N + E_LR(N/4) ≈ 5 ν N + (N/4)³ = O(N³),
i.e. the exact coarse solve still dominates.