Geometric Multigrid Methods Susanne C. Brenner Department of Mathematics and Center for Computation & Technology Louisiana State University IMA Tutorial: Fast Solution Techniques November 28, 2010
Outline
1 Ideas
2 Set-Up
3 Multigrid Algorithms
4 Smoothing and Approximation
5 Convergence of W-Cycle
6 Convergence of V-Cycle
7 Multiplicative Theory
8 Additive Theory
9 Other Algorithms
General References
1 W. Hackbusch, Multi-grid Methods and Applications, Springer-Verlag, 1985.
2 J.H. Bramble, Multigrid Methods, Longman Scientific & Technical, 1993.
3 J.H. Bramble and X. Zhang, The Analysis of Multigrid Methods, in Handbook of Numerical Analysis VII, North-Holland, 2000.
4 U. Trottenberg, C. Oosterlee and A. Schüller, Multigrid, Academic Press, 2001.
Ideas
Let $A$ be an SPD matrix. Suppose we solve
(L) $Ax = b$
by an iterative method (Jacobi, Gauss-Seidel, etc.). After $m$ iterations (starting with some initial guess), we obtain an approximate solution $x_m$. The error $e_m = x - x_m$ then satisfies the residual equation
(RE) $A e_m = r_m$, where $r_m = b - A x_m$ is the (computable) residual.
If we can solve (RE) exactly, then we can recover the exact solution of (L) by the relation
(C) $x = x_m + (x - x_m) = x_m + e_m$
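The iterate-then-correct idea above can be sketched in a few lines of NumPy. This is an illustrative example, not code from the lecture: the small 1D Poisson matrix, the damping factor, and the iteration count are all assumptions chosen only to show that an exact solve of the residual equation recovers the exact solution.

```python
import numpy as np

# Assumed setup: solve Ax = b by a damped Richardson iteration, then recover
# the exact solution from the residual equation A e = r (steps (RE) and (C)).
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal model
b = np.ones(n)
x_exact = np.linalg.solve(A, b)

x = np.zeros(n)                      # initial guess
omega = 0.5                          # damping factor (illustrative)
for _ in range(5):                   # m = 5 iterations
    x = x + omega * (b - A @ x)      # Richardson step: x <- x + omega * r

r = b - A @ x                        # computable residual r_m = b - A x_m
e = np.linalg.solve(A, r)            # solve the residual equation exactly
x_corrected = x + e                  # correction (C): recovers x exactly

assert np.allclose(x_corrected, x_exact)
```

Since the residual equation is solved exactly here, the correction reproduces the exact solution regardless of how crude the iterate $x_m$ is; multigrid replaces this exact solve with a cheap approximate one.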
Ideas
In reality we will only solve (RE) approximately, obtaining an approximation $\tilde e_m$ of $e_m$. Then, hopefully, the correction
(C$'$) $\tilde x = x_m + \tilde e_m$
will give a better approximation of $x$.
In the context of finite element equations
(FE$_h$) $A_h x_h = f_h$
there is a natural way to carry out this idea.
Smoothing Step Apply $m$ iterations of a classical iterative method to obtain an approximation $x_{m,h}$ of $x_h$; the error $e_{m,h} = x_h - x_{m,h}$ and the residual $r_{m,h} = f_h - A_h x_{m,h}$ satisfy the residual equation
(RE$_h$) $A_h e_{m,h} = r_{m,h}$
Ideas
Correction Step Instead of solving (RE$_h$), we solve a related equation on a coarser grid $T_{2h}$ (assuming that $T_h$ is obtained from $T_{2h}$ by uniform refinement):
(RE$_{2h}$) $A_{2h} e_{2h} = r_{2h}$
where $r_{2h}$ is the projection of $r_{m,h}$ onto the coarse grid space and $A_{2h}$ is the stiffness matrix for the coarse grid.
We then use a transfer operator $I_{2h}^{h}$ to move $e_{2h}$ to the fine grid $T_h$ and obtain the final output
$x_{m+1,h} = x_{m,h} + I_{2h}^{h} e_{2h}$
This is known as the two-grid algorithm.
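A minimal two-grid sketch for the 1D Poisson model, under assumptions not stated in the slides: linear-interpolation prolongation, its scaled transpose as restriction, a Galerkin coarse matrix, and damped Jacobi smoothing with illustrative parameters.

```python
import numpy as np

def poisson(n):
    """1D Dirichlet Poisson stiffness matrix on n interior nodes."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def prolongation(nc):
    """Linear interpolation from nc coarse to 2*nc + 1 fine interior nodes."""
    nf = 2 * nc + 1
    P = np.zeros((nf, nc))
    for j in range(nc):              # coarse node j sits at fine node 2j + 1
        P[2 * j, j] = 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] = 0.5
    return P

def two_grid(A, b, x, m=3, omega=2 / 3):
    """One two-grid cycle: m damped-Jacobi smoothings + exact coarse solve."""
    P = prolongation((A.shape[0] - 1) // 2)
    A2 = P.T @ A @ P                 # coarse-grid (Galerkin) stiffness matrix
    for _ in range(m):               # smoothing: damped Jacobi, diag(A) = 2
        x = x + omega * (b - A @ x) / 2.0
    r2 = P.T @ (b - A @ x)           # restrict the residual
    e2 = np.linalg.solve(A2, r2)     # coarse residual equation (RE_2h)
    return x + P @ e2                # correct on the fine grid

n = 31
A, b = poisson(n), np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = two_grid(A, b, x)
assert np.linalg.norm(b - A @ x) < 1e-6 * np.linalg.norm(b)
```

Each cycle reduces the residual by a mesh-independent factor, which is the behavior the smoothing-plus-correction argument below explains.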
Ideas
The smoothing steps damp out the highly oscillatory part of the error, so that the correction step can capture $e_h$ accurately on the coarser grid. Together they produce a good approximate solution of (FE$_h$). Moreover, it is cheaper to solve the coarse grid residual equation (RE$_{2h}$).
Of course we do not have to solve (RE$_{2h}$) exactly. Instead we can apply the same idea recursively to (RE$_{2h}$). The resulting algorithm is a multigrid algorithm.
Set-Up
Model Problem Find $u \in H^1_0(\Omega)$ such that
$a(u, v) = \int_\Omega f v \, dx \quad \forall v \in H^1_0(\Omega)$, where $a(w, v) = \int_\Omega \nabla w \cdot \nabla v \, dx$
Let $T_0$ be an initial triangulation of $\Omega$ and let $T_k$ ($k \ge 1$) be obtained from $T_{k-1}$ by a refinement process. $V_k$ ($k \ge 0$) is the corresponding finite element space.
Set-Up
$k$th Level Finite Element Problem Find $u_k \in V_k$ such that
$a_k(u_k, v) = \int_\Omega f v \, dx \quad \forall v \in V_k$
The bilinear form $a_k(\cdot,\cdot)$ is an approximation of the bilinear form $a(\cdot,\cdot)$ for the continuous problem. We can take $a_k(\cdot,\cdot)$ to be $a(\cdot,\cdot)$ for conforming finite element methods, but in general $a_k(\cdot,\cdot)$ is a modification of $a(\cdot,\cdot)$ according to the choice of the finite element method.
The problem can be written as
$A_k u_k = \phi_k$
where $A_k : V_k \to V_k'$ and $\phi_k \in V_k'$ are defined by
$\langle A_k w, v \rangle = a_k(w, v) \quad \forall v, w \in V_k$
$\langle \phi_k, v \rangle = \int_\Omega f v \, dx \quad \forall v \in V_k$
Two key ingredients in defining multigrid algorithms:
a good smoother for the equation $A_k z = \gamma$
intergrid transfer operators to move functions between consecutive levels
Set-Up
Smoothing step for $A_k z = \gamma$ ($z \in V_k$, $\gamma \in V_k'$):
(S) $z_{\mathrm{new}} = z_{\mathrm{old}} + B_k^{-1}(\gamma - A_k z_{\mathrm{old}})$
where $B_k : V_k \to V_k'$ is SPD,
$\rho(B_k^{-1} A_k) \le 1$ and $\langle B_k v, v \rangle \approx h_k^{-2} \|v\|_{L_2(\Omega)}^2 \quad \forall v \in V_k$
Example (Richardson relaxation scheme)
$V_k\ (\subset H^1_0(\Omega))$ = $P_1$ Lagrange finite element space
$\langle B_k w, v \rangle = \lambda \sum_{p \in \mathcal{V}_k} w(p) v(p)$
$\mathcal{V}_k$ = the set of interior vertices of $T_k$, $\lambda$ = a (constant) damping factor
Set-Up
Intergrid Transfer Operators
The coarse-to-fine operator $I_{k-1}^k$ is a linear operator from $V_{k-1}$ to $V_k$. The fine-to-coarse operator $I_k^{k-1} : V_k' \to V_{k-1}'$ is the transpose of $I_{k-1}^k$, i.e.,
$\langle I_k^{k-1} \alpha, v \rangle = \langle \alpha, I_{k-1}^k v \rangle \quad \forall \alpha \in V_k',\ v \in V_{k-1}$
Example
$V_k\ (\subset H^1_0(\Omega))$ = $P_1$ Lagrange finite element space, $V_0 \subset V_1 \subset \cdots$
$I_{k-1}^k$ = natural injection
Multigrid Algorithms
V-Cycle Algorithm for $A_k z = \gamma$ with initial guess $z_0$: Output = $MG_V(k, \gamma, z_0, m)$
For $k = 0$, we solve $A_0 z = \gamma$ exactly to obtain $MG_V(0, \gamma, z_0, m) = A_0^{-1} \gamma$.
For $k \ge 1$, we compute the multigrid output recursively in 3 steps.
Pre-smoothing Step For $1 \le j \le m$, compute
$z_j = z_{j-1} + B_k^{-1}(\gamma - A_k z_{j-1})$
Multigrid Algorithms
Correction Step Transfer the residual $\gamma - A_k z_m \in V_k'$ to the coarse grid using $I_k^{k-1}$ and solve the coarse-grid residual equation
$A_{k-1} e_{k-1} = I_k^{k-1}(\gamma - A_k z_m)$
by applying the $(k-1)$st level algorithm with $0$ as the initial guess, i.e., we compute
$q = MG_V(k-1, I_k^{k-1}(\gamma - A_k z_m), 0, m)$
as an approximation to $e_{k-1}$. Then we make the correction
$z_{m+1} = z_m + I_{k-1}^k q$
Post-smoothing Step For $m + 2 \le j \le 2m + 1$, compute
$z_j = z_{j-1} + B_k^{-1}(\gamma - A_k z_{j-1})$
Final Output
$MG_V(k, \gamma, z_0, m) = z_{2m+1}$
[Scheduling diagram of the V-cycle algorithm ($p = 1$)]
Multigrid Algorithms
W-Cycle Algorithm for $A_k z = \gamma$ with initial guess $z_0$: Output = $MG_W(k, \gamma, z_0, m)$
The W-cycle differs from the V-cycle only in the Correction Step, where the coarse grid algorithm is applied twice:
$q_1 = MG_W(k-1, I_k^{k-1}(\gamma - A_k z_m), 0, m)$
$q_2 = MG_W(k-1, I_k^{k-1}(\gamma - A_k z_m), q_1, m)$
[Scheduling diagram of the W-cycle algorithm ($p = 2$)]
Multigrid Algorithms
Operation Count
$n_k = \dim V_k$ ($n_k \approx 4^k n_0$ in 2D)
$W_k$ = number of flops for the $k$th level multigrid algorithm
$m$ = number of smoothing steps, $p = 1$ (V-cycle) or $p = 2$ (W-cycle)
$W_k \le C m n_k + p W_{k-1}$
$\le C m n_k + p (C m n_{k-1}) + p^2 (C m n_{k-2}) + \cdots + p^{k-1} (C m n_1) + p^k W_0$
$\le C m 4^k + p\, C m 4^{k-1} + p^2 C m 4^{k-2} + \cdots + p^{k-1}(C m 4) + p^k W_0$
$= C m 4^k \Big( 1 + \frac{p}{4} + \frac{p^2}{4^2} + \cdots + \frac{p^{k-1}}{4^{k-1}} \Big) + p^k W_0$
$\le \frac{C m 4^k}{1 - p/4} + p^k W_0 \le C' 4^k \le C'' n_k$
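The geometric-series bound above can be checked numerically. The constants here are illustrative assumptions; the point is only that the work-per-unknown ratio converges to $1/(1 - p/4)$ for $p = 1, 2$, so the total work is $O(n_k)$.

```python
# Sketch of the work estimate: with n_k ~ 4^k unknowns per level (2D uniform
# refinement) and the recursion W_k <= C*m*4^k + p*W_{k-1}, the unrolled sum
# is geometric with ratio p/4, so W_k / 4^k stays bounded for p = 1 and p = 2.
def work(k, p, C=1.0, m=1, W0=1.0):
    if k == 0:
        return W0
    return C * m * 4**k + p * work(k - 1, p, C, m, W0)

for p in (1, 2):
    ratios = [work(k, p) / 4**k for k in range(5, 15)]
    # the work-per-unknown ratio approaches the limit 1/(1 - p/4)
    assert abs(ratios[-1] - 1 / (1 - p / 4)) < 0.01
```

For $p = 4$ the ratio would grow linearly in $k$, which is why cycles with many recursive coarse solves lose optimal complexity.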
Multigrid Algorithms
Error Propagation Operators
Let $E_k : V_k \to V_k$ be the error propagation operator that maps the initial error $z - z_0$ to the final error $z - MG_V(k, \gamma, z_0, m)$. We want to develop a recursive relation between $E_k$ and $E_{k-1}$.
It follows from (S) and $A_k z = \gamma$ that
$z - z_{\mathrm{new}} = z - z_{\mathrm{old}} - B_k^{-1}(\gamma - A_k z_{\mathrm{old}}) = (\mathrm{Id}_k - B_k^{-1} A_k)(z - z_{\mathrm{old}})$
where $\mathrm{Id}_k$ is the identity operator on $V_k$. Therefore the effect of one smoothing step is measured by the operator
$R_k = \mathrm{Id}_k - B_k^{-1} A_k$
Multigrid Algorithms
Let $P_k^{k-1} : V_k \to V_{k-1}$ be the transpose of the coarse-to-fine operator $I_{k-1}^k$ with respect to the variational forms, i.e.,
$a_{k-1}(P_k^{k-1} v, w) = a_k(v, I_{k-1}^k w) \quad \forall v \in V_k,\ w \in V_{k-1}$
Recall the coarse grid residual equation $A_{k-1} e_{k-1} = I_k^{k-1}(\gamma - A_k z_m)$. For any $v \in V_{k-1}$,
$a_{k-1}(e_{k-1}, v) = \langle A_{k-1} e_{k-1}, v \rangle = \langle I_k^{k-1}(\gamma - A_k z_m), v \rangle = \langle \gamma - A_k z_m, I_{k-1}^k v \rangle$
$= \langle A_k(z - z_m), I_{k-1}^k v \rangle = a_k(z - z_m, I_{k-1}^k v) = a_{k-1}(P_k^{k-1}(z - z_m), v)$
Multigrid Algorithms
Hence
$e_{k-1} = P_k^{k-1}(z - z_m)$
Recall that $q = MG_V(k-1, I_k^{k-1}(\gamma - A_k z_m), 0, m)$ is the approximate solution of the coarse grid residual equation obtained by the $(k-1)$st level V-cycle algorithm with initial guess $0$, so
$e_{k-1} - q = E_{k-1}(e_{k-1} - 0) \implies q = (\mathrm{Id}_{k-1} - E_{k-1}) e_{k-1}$
Therefore
$z - z_{m+1} = z - (z_m + I_{k-1}^k q) = z - z_m - I_{k-1}^k(\mathrm{Id}_{k-1} - E_{k-1}) e_{k-1}$
$= z - z_m - I_{k-1}^k(\mathrm{Id}_{k-1} - E_{k-1}) P_k^{k-1}(z - z_m)$
$= (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1} + I_{k-1}^k E_{k-1} P_k^{k-1})(z - z_m)$
$= (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1} + I_{k-1}^k E_{k-1} P_k^{k-1}) R_k^m (z - z_0)$
Multigrid Algorithms
$z - MG_V(k, \gamma, z_0, m) = z - z_{2m+1} = R_k^m (z - z_{m+1})$
$= R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1} + I_{k-1}^k E_{k-1} P_k^{k-1}) R_k^m (z - z_0)$
Recursive Relation for the V-Cycle
$E_k = R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1} + I_{k-1}^k E_{k-1} P_k^{k-1}) R_k^m, \qquad E_0 = 0$
Recursive Relation for the W-Cycle
$E_k = R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1} + I_{k-1}^k E_{k-1}^2 P_k^{k-1}) R_k^m, \qquad E_0 = 0$
Smoothing and Approximation
It is clear from the recursive relation
$E_k = R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1} + I_{k-1}^k E_{k-1}^p P_k^{k-1}) R_k^m$
that we need to understand the operators $R_k^m$ and $\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}$.
The effect of $R_k^m$ is measured by the smoothing property, while the effect of $\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}$ is measured by the approximation property. These properties involve certain mesh-dependent norms.
Let the inner product $(\cdot,\cdot)_k$ on $V_k$ be defined by
$(v, w)_k = h_k^2 \langle B_k v, w \rangle$
Then the operator $B_k^{-1} A_k : V_k \to V_k$ is SPD with respect to $(\cdot,\cdot)_k$.
Smoothing and Approximation
Mesh-Dependent Norms
$\|v\|_{t,k} = h_k^{-t} \sqrt{\big( (B_k^{-1} A_k)^t v, v \big)_k} \quad \forall v \in V_k,\ t \in \mathbb{R}$
In particular,
$\|v\|_{0,k} = \sqrt{(v, v)_k} = \sqrt{h_k^2 \langle B_k v, v \rangle} \approx \|v\|_{L_2(\Omega)}$
$\|v\|_{1,k} = h_k^{-1} \sqrt{\big( (B_k^{-1} A_k) v, v \big)_k} = \sqrt{\langle A_k v, v \rangle} = \sqrt{a_k(v, v)} = \|v\|_{a_k}$
$\rho(B_k^{-1} A_k) \le 1 \implies \|R_k v\|_{t,k} \le \|v\|_{t,k}$
Generalized Cauchy-Schwarz Inequality
$a_k(v, w) \le \|v\|_{1+t,k} \, \|w\|_{1-t,k} \quad \forall t \in \mathbb{R}$
Smoothing and Approximation
Duality
$\|v\|_{1+t,k} = \max_{w \in V_k \setminus \{0\}} \frac{a_k(v, w)}{\|w\|_{1-t,k}} \quad \forall v \in V_k,\ t \in \mathbb{R}$
Smoothing Property For $0 \le s \le t \le 2$ and $k, m = 1, 2, \ldots$,
$\|R_k^m v\|_{t,k} \le C m^{-(t-s)/2} h_k^{s-t} \|v\|_{s,k} \quad \forall v \in V_k$
The proof is based on the spectral theorem, $\rho(B_k^{-1} A_k) \le 1$, and calculus.
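The mechanism behind the smoothing property can be seen spectrally on the 1D Poisson model, whose eigenpairs are known in closed form. This is an illustrative assumption (a Richardson smoother $R = I - A/\lambda_{\max}$ on a model matrix), not the general setting of the slides.

```python
import numpy as np

# For the 1D Poisson stiffness matrix, lambda_j = 2 - 2*cos(j*pi/(n+1)) with
# sine eigenvectors, so R = I - A/lambda_max acts diagonally with per-mode
# factor (1 - lambda_j/lambda_max).  After m sweeps the oscillatory modes are
# essentially annihilated while the smooth modes survive almost unchanged,
# which is exactly why the remaining error is well captured on a coarser grid.
n, m = 63, 10
j = np.arange(1, n + 1)
lam = 2 - 2 * np.cos(j * np.pi / (n + 1))   # eigenvalues of A, increasing
factor = np.abs(1 - lam / lam[-1]) ** m     # per-mode damping factor of R^m

assert factor[0] > 0.9          # lowest-frequency mode barely changed
assert factor[n // 2] < 1e-2    # mid frequencies already strongly damped
assert factor[-2] < 1e-20       # high frequencies essentially annihilated
```

The trade-off quantified by the smoothing property is visible here: $R_k^m$ gains powers of $m^{-1}$ on the oscillatory (high-norm) part of the error at the price of negative powers of $h_k$.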
Smoothing and Approximation
Approximation Property There exists $\alpha \in (\tfrac12, 1]$ such that
$\|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{1-\alpha,k} \le C h_k^{2\alpha} \|v\|_{1+\alpha,k} \quad \forall v \in V_k,\ k = 1, 2, \ldots$
(The index $\alpha$ is related to elliptic regularity.)
The proof is based on elliptic regularity, duality arguments, and relations between the mesh-dependent norms $\|\cdot\|_{s,k}$ and the Sobolev norms $\|\cdot\|_{H^s(\Omega)}$.
Smoothing and Approximation
Example (convex $\Omega$)
$V_k\ (\subset H^1_0(\Omega))$ = $P_1$ Lagrange finite element space
$a_k(w, v) = \int_\Omega \nabla w \cdot \nabla v \, dx = a(w, v)$
$V_0 \subset V_1 \subset \cdots$, and $I_{k-1}^k : V_{k-1} \to V_k$ is the natural injection
$\langle B_k w, v \rangle = \lambda \sum_{p \in \mathcal{V}_k} w(p) v(p) \quad \forall v \in V_k$
$\|v\|_{0,k} = \sqrt{h_k^2 \langle B_k v, v \rangle} = \sqrt{\lambda h_k^2 \sum_{p \in \mathcal{V}_k} v^2(p)} \approx \|v\|_{L_2(\Omega)} \quad \forall v \in V_k$
$\|v\|_{1,k} = \|v\|_{a_k} = |v|_{H^1(\Omega)} \quad \forall v \in V_k$
Smoothing and Approximation
The operator $P_k^{k-1} : V_k \to V_{k-1}$ satisfies
$a(P_k^{k-1} v, w) = a(v, I_{k-1}^k w) = a(v, w) \quad \forall v \in V_k,\ w \in V_{k-1}$
$\implies a((\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v, w) = a(v - P_k^{k-1} v, w) = 0 \quad \forall v \in V_k,\ w \in V_{k-1}$
Duality Argument Let $v \in V_k$ be arbitrary and let $\zeta \in H^1_0(\Omega)$ satisfy
$a(w, \zeta) = \int_\Omega w \, (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v \, dx \quad \forall w \in H^1_0(\Omega)$
i.e., $\zeta$ is the solution of the boundary value problem $-\Delta \zeta = (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v$ in $\Omega$ and $\zeta = 0$ on $\partial\Omega$.
Smoothing and Approximation
Since $\Omega$ is convex, $\zeta \in H^2(\Omega)$ and
$\|\zeta\|_{H^2(\Omega)} \le C_\Omega \|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{L_2(\Omega)}$
Writing $\Pi_{k-1} \zeta \in V_{k-1}$ for an interpolant of $\zeta$ and using the Galerkin orthogonality above,
$\|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{L_2(\Omega)}^2 = a((\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v, \zeta) = a((\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v, \zeta - \Pi_{k-1} \zeta)$
$\le |(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v|_{H^1(\Omega)} \, |\zeta - \Pi_{k-1} \zeta|_{H^1(\Omega)}$
$\le |(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v|_{H^1(\Omega)} \, \big( C h_k \|\zeta\|_{H^2(\Omega)} \big)$
$\le |(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v|_{H^1(\Omega)} \, \big( C h_k \|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{L_2(\Omega)} \big)$
$\implies \|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{L_2(\Omega)} \le C h_k |(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v|_{H^1(\Omega)}$
$\implies \|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{0,k} \approx \|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{L_2(\Omega)} \le C h_k |(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v|_{H^1(\Omega)} = C h_k \|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{1,k}$
Smoothing and Approximation
In particular,
$\|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{0,k} \le C h_k \|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{1,k} \le C h_k \|v\|_{1,k}$
Duality
$\|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{1,k} = \max_{w \in V_k \setminus \{0\}} \frac{a((\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v, w)}{\|w\|_{1,k}} = \max_{w \in V_k \setminus \{0\}} \frac{a(v, (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) w)}{\|w\|_{1,k}}$
$\le \|v\|_{2,k} \max_{w \in V_k \setminus \{0\}} \frac{\|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) w\|_{0,k}}{\|w\|_{1,k}} \le C h_k \|v\|_{2,k}$
Approximation Property with $\alpha = 1$
$\|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) v\|_{0,k} \le C h_k^2 \|v\|_{2,k} \quad \forall v \in V_k$
W-Cycle Convergence
Two-Grid Analysis
$E_k^{TG} = R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) R_k^m$
$\|E_k^{TG} v\|_{1,k} = \|R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) R_k^m v\|_{1,k}$
$\le C m^{-\alpha/2} h_k^{-\alpha} \|(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) R_k^m v\|_{1-\alpha,k}$
$\le C m^{-\alpha/2} h_k^{-\alpha} \, h_k^{2\alpha} \|R_k^m v\|_{1+\alpha,k}$
$\le C m^{-\alpha/2} h_k^{\alpha} \, m^{-\alpha/2} h_k^{-\alpha} \|v\|_{1,k} = C_\star m^{-\alpha} \|v\|_{1,k}$
The convergence of the W-cycle algorithm can then be established by a perturbation argument under the condition
$\|I_{k-1}^k v\|_{1,k} \le C \|v\|_{1,k-1} \quad \forall v \in V_{k-1},\ k = 1, 2, \ldots$
which implies by duality
$\|P_k^{k-1} v\|_{1,k-1} \le C \|v\|_{1,k} \quad \forall v \in V_k,\ k = 1, 2, \ldots$
W-Cycle Convergence
Suppose for some $\delta > 0$ we have
$\|E_{k-1} v\|_{1,k-1} \le \delta \|v\|_{1,k-1} \quad \forall v \in V_{k-1}$
Then
$\|E_k v\|_{1,k} = \|R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1} + I_{k-1}^k E_{k-1}^2 P_k^{k-1}) R_k^m v\|_{1,k}$
$\le \|R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) R_k^m v\|_{1,k} + \|R_k^m I_{k-1}^k E_{k-1}^2 P_k^{k-1} R_k^m v\|_{1,k}$
$\le C_\star m^{-\alpha} \|v\|_{1,k} + C \|E_{k-1}^2 P_k^{k-1} R_k^m v\|_{1,k-1}$
$\le C_\star m^{-\alpha} \|v\|_{1,k} + C \delta^2 \|P_k^{k-1} R_k^m v\|_{1,k-1}$
$\le C_\star m^{-\alpha} \|v\|_{1,k} + C_2 \delta^2 \|v\|_{1,k} = (C_\star m^{-\alpha} + C_2 \delta^2) \|v\|_{1,k}$
Hence $\|E_k v\|_{1,k} \le \delta \|v\|_{1,k}$ for all $v \in V_k$ provided
$(\star) \qquad C_\star m^{-\alpha} + C_2 \delta^2 = \delta$
W-Cycle Convergence
Solving $(\star)$ we find
$\delta = \Big( 1 - \sqrt{1 - 4 C_2 C_\star m^{-\alpha}} \Big) \big/ (2 C_2) < 1$
provided
$(\star\star) \qquad 4 C_2 C_\star m^{-\alpha} < 1$
Therefore, by mathematical induction, $\|E_k v\|_{1,k} \le \delta \|v\|_{1,k}$ for $k \ge 1$, and the W-cycle algorithm is a contraction under the condition $(\star\star)$. Moreover
$\delta \approx C_\star m^{-\alpha}$ as $m \to \infty$
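The induction above amounts to iterating the map $\delta \mapsto C_\star m^{-\alpha} + C_2 \delta^2$ from $\delta = 0$ (since $E_0 = 0$) and checking it stays below the smaller root of $(\star)$. A quick numerical check, with purely illustrative constants:

```python
import math

# Illustrative constants (assumptions): eta plays the role of C_star * m^{-alpha}.
C2, eta = 1.5, 0.1                  # 4*C2*eta = 0.6 < 1, so (**) holds
delta_star = (1 - math.sqrt(1 - 4 * C2 * eta)) / (2 * C2)

delta = 0.0                          # level 0: E_0 = 0
for _ in range(50):
    delta = eta + C2 * delta**2      # induction step across one more level
    assert delta <= delta_star + 1e-12   # contraction numbers stay bounded

assert abs(delta - delta_star) < 1e-6    # and approach the fixed point
```

The level-$k$ contraction numbers increase monotonically toward the fixed point $\delta$, which is why the bound is uniform in the number of levels.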
W-Cycle Convergence
Theorem The W-cycle algorithm is a contraction with contraction number independent of the grid levels, provided the number $m$ of smoothing steps is greater than a number $m_\star$ that is also independent of the grid levels.
The convergence analysis of the W-cycle is based on the work of Bank and Dupont (originally for conforming methods). It is a robust approach that works for problems without full elliptic regularity ($\alpha < 1$) and also for nonconforming methods.
References
1 R.E. Bank and T.F. Dupont, An optimal order process for solving finite element equations, Math. Comp., 1981.
2 S.C. Brenner, Convergence of nonconforming multigrid methods without full elliptic regularity, Math. Comp., 1999.
Remark For conforming finite element methods with nested finite element spaces, the W-cycle algorithm is a contraction for $m = 1$. This result can be deduced from the corresponding result for the V-cycle algorithm.
V-Cycle Convergence
Model Problem Poisson problem on a polygonal domain discretized by a conforming Lagrange finite element method.
With Full Elliptic Regularity ($\Omega$ convex): $\|u\|_{H^2(\Omega)} \le C_\Omega \|f\|_{L_2(\Omega)}$
1983 Braess-Hackbusch
$\|z - MG_V(k, \gamma, z_0, m)\|_{a_k} \le \frac{C}{C + m} \|z - z_0\|_{a_k}$ for $m, k \ge 1$,
where $C$ is independent of $m$ and $k$. In particular, the V-cycle is a contraction with only one smoothing step.
V-Cycle Convergence
Without Full Elliptic Regularity ($\Omega$ nonconvex): $\|u\|_{H^{1+\alpha}(\Omega)} \le C_\Omega \|f\|_{L_2(\Omega)}$ for $\tfrac12 < \alpha < 1$
1987 Bramble-Pasciak, 1988 Decker-Mandel-Parter
$\|z - MG_V(k, \gamma, z_0, 1)\|_{a_k} \le \Big( 1 - \frac{1}{C k^{(1-\alpha)/\alpha}} \Big) \|z - z_0\|_{a_k}$ for $k \ge 1$
1991 Bramble-Pasciak-Wang-Xu (no regularity assumption)
$\|z - MG_V(k, \gamma, z_0, 1)\|_{a_k} \le \Big( 1 - \frac{1}{C k} \Big) \|z - z_0\|_{a_k}$ for $k \ge 1$
V-Cycle Convergence
1992 Zhang, Xu; 1993 Bramble-Pasciak There exists $\delta \in (0, 1)$ such that
$\|z - MG_V(k, \gamma, z_0, m)\|_{a_k} \le \delta \|z - z_0\|_{a_k}$ for $m, k \ge 1$.
In particular, the V-cycle is a contraction with one smoothing step. Compare with the Braess-Hackbusch estimate for the full-regularity case,
$\|z - MG_V(k, \gamma, z_0, m)\|_{a_k} \le \frac{C}{C + m} \|z - z_0\|_{a_k}$ for $m, k \ge 1$,
where the contraction number improves as $m$ increases.
V-Cycle Convergence
2002 Brenner
$\|z - MG_V(k, \gamma, z_0, m)\|_{a_k} \le \frac{C}{C + m^\alpha} \|z - z_0\|_{a_k}$ for $m, k \ge 1$,
where $C$ is independent of $m$ and $k$.
This is a complete generalization of the Braess-Hackbusch result (the case $\alpha = 1$):
$\|z - MG_V(k, \gamma, z_0, m)\|_{a_k} \le \frac{C}{C + m} \|z - z_0\|_{a_k}$ for $m, k \ge 1$.
In particular, the V-cycle is a contraction with only one smoothing step.
Multiplicative Theory
Model Problem $P_1$ Lagrange finite element method for the Poisson problem.
$T_k$ is generated from $T_0$ by uniform refinement. $V_k\ (\subset H^1_0(\Omega))$ is the $P_1$ Lagrange finite element space associated with $T_k$, so $V_0 \subset V_1 \subset \cdots$
$I_{k-1}^k : V_{k-1} \to V_k$ is the natural injection.
$a_k(w, v) = \int_\Omega \nabla w \cdot \nabla v \, dx = a(w, v)$
Multiplicative Theory
Recursive Relation
$E_k = R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1} + I_{k-1}^k E_{k-1} P_k^{k-1}) R_k^m, \qquad E_0 = 0$
Notation ($j \le l$)
$I_j^l : V_j \to V_l$ is the natural injection
$P_l^j : V_l \to V_j$ is the transpose of $I_j^l$ with respect to the variational form, i.e.,
$a(P_l^j v, w) = a(v, I_j^l w) \quad \forall v \in V_l,\ w \in V_j$
$I_j^j = \mathrm{Id}_j = P_j^j$
Multiplicative Theory
Properties of $I_j^l$ and $P_l^j$ For $j \le i \le l$:
$I_j^l = I_i^l I_j^i$
$P_l^j = P_i^j P_l^i$
$I_j^i = P_l^i I_j^l$ (in particular $\mathrm{Id}_j = I_j^j = P_l^j I_j^l$)
$P_i^j = P_l^j I_i^l$
$(I_j^l P_l^j)^2 = I_j^l P_l^j$
$(\mathrm{Id}_l - I_j^l P_l^j)^2 = \mathrm{Id}_l - I_j^l P_l^j$
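The idempotency identities can be verified numerically: for nested spaces with the natural injection, $I_j^l P_l^j$ is the $a$-orthogonal projection onto the coarse space. The matrices below (1D Poisson, linear-interpolation injection) are illustrative assumptions.

```python
import numpy as np

# In matrix form, with I the injection and Ac = I^T A I the coarse stiffness
# matrix, P = Ac^{-1} I^T A realizes a_c(P v, w) = a(v, I w).  Then I P is
# the a-orthogonal projection onto the coarse space, hence idempotent.
n = 15
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
nc = (n - 1) // 2
I = np.zeros((n, nc))
for j in range(nc):                 # linear-interpolation injection
    I[2 * j, j] = 0.5
    I[2 * j + 1, j] = 1.0
    I[2 * j + 2, j] = 0.5
Ac = I.T @ A @ I
P = np.linalg.solve(Ac, I.T @ A)    # the variational transpose of I

IP = I @ P
assert np.allclose(IP @ IP, IP)           # (I P)^2 = I P
assert np.allclose(P @ I, np.eye(nc))     # P I = Id on the coarse space
```

For nonconforming methods the injection-based identities fail, which is exactly the obstruction noted later for the multiplicative theory.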
Multiplicative Theory
Key Observation
$(\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) + I_{k-1}^k E_{k-1} P_k^{k-1}$
$= (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) + I_{k-1}^k \big[ R_{k-1}^m (\mathrm{Id}_{k-1} - I_{k-2}^{k-1} P_{k-1}^{k-2} + I_{k-2}^{k-1} E_{k-2} P_{k-1}^{k-2}) R_{k-1}^m \big] P_k^{k-1}$
$= \big[ (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) + I_{k-1}^k R_{k-1}^m P_k^{k-1} \big] \big[ (\mathrm{Id}_k - I_{k-2}^k P_k^{k-2}) + I_{k-2}^k E_{k-2} P_k^{k-2} \big] \big[ (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) + I_{k-1}^k R_{k-1}^m P_k^{k-1} \big]$
$= \big[ (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) + I_{k-1}^k R_{k-1}^m P_k^{k-1} \big] \big[ (\mathrm{Id}_k - I_{k-2}^k P_k^{k-2}) + I_{k-2}^k R_{k-2}^m P_k^{k-2} \big] \big[ (\mathrm{Id}_k - I_{k-3}^k P_k^{k-3}) + I_{k-3}^k E_{k-3} P_k^{k-3} \big] \big[ (\mathrm{Id}_k - I_{k-2}^k P_k^{k-2}) + I_{k-2}^k R_{k-2}^m P_k^{k-2} \big] \big[ (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) + I_{k-1}^k R_{k-1}^m P_k^{k-1} \big]$
and so on, down to level $0$.
Multiplicative Theory
$E_k = R_k^m \big[ (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) + I_{k-1}^k E_{k-1} P_k^{k-1} \big] R_k^m$
$= R_k^m \big[ (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) + I_{k-1}^k R_{k-1}^m P_k^{k-1} \big] \cdots \big[ (\mathrm{Id}_k - I_1^k P_k^1) + I_1^k R_1^m P_k^1 \big] (\mathrm{Id}_k - I_0^k P_k^0) \big[ (\mathrm{Id}_k - I_1^k P_k^1) + I_1^k R_1^m P_k^1 \big] \cdots \big[ (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) + I_{k-1}^k R_{k-1}^m P_k^{k-1} \big] R_k^m$
Notation For $1 \le j \le k$,
$T_j \stackrel{\mathrm{def}}{=} I_j^k (\mathrm{Id}_j - R_j^m) P_k^j$
In particular
$T_k = I_k^k (\mathrm{Id}_k - R_k^m) P_k^k = \mathrm{Id}_k - R_k^m \implies R_k^m = \mathrm{Id}_k - T_k$
$(\mathrm{Id}_k - I_j^k P_k^j) + I_j^k R_j^m P_k^j = \mathrm{Id}_k - I_j^k (\mathrm{Id}_j - R_j^m) P_k^j = \mathrm{Id}_k - T_j$
$T_0 \stackrel{\mathrm{def}}{=} I_0^k P_k^0 \implies \mathrm{Id}_k - I_0^k P_k^0 = \mathrm{Id}_k - T_0$
Multiplicative Theory
Multiplicative Expression for $E_k$
$E_k = (\mathrm{Id}_k - T_k)(\mathrm{Id}_k - T_{k-1}) \cdots (\mathrm{Id}_k - T_1)(\mathrm{Id}_k - T_0)(\mathrm{Id}_k - T_1) \cdots (\mathrm{Id}_k - T_{k-1})(\mathrm{Id}_k - T_k)$
Strengthened Cauchy-Schwarz Inequality For $0 \le j \le k$,
$a(v_j, v_k) \le C \, 2^{-(k-j)/2} |v_j|_{H^1(\Omega)} \, h_k^{-1} \|v_k\|_{L_2(\Omega)} \quad \forall v_j \in V_j,\ v_k \in V_k$
Compare with the standard Cauchy-Schwarz inequality
$a(v_j, v_k) \le \|v_j\|_a \|v_k\|_a = |v_j|_{H^1(\Omega)} |v_k|_{H^1(\Omega)} \quad \forall v_j \in V_j,\ v_k \in V_k$
which only implies $a(v_j, v_k) \le C |v_j|_{H^1(\Omega)} h_k^{-1} \|v_k\|_{L_2(\Omega)}$, without the gain $2^{-(k-j)/2}$.
Multiplicative Theory
Theorem There exists $\delta \in (0, 1)$ such that
$\|z - MG_V(k, \gamma, z_0, m)\|_a \le \delta \|z - z_0\|_a$ for $m, k \ge 1$.
Details can be found in the book by Bramble and the survey article by Bramble and Zhang. A refinement of the multiplicative theory can be found in the paper by Xu and Zikatanov.
Reference J. Xu and L. Zikatanov, The method of alternating projections and the method of subspace corrections in Hilbert space, J. Amer. Math. Soc., 2002.
The multiplicative theory cannot be applied to nonconforming finite element methods in general, since many of the algebraic relations are no longer valid. For example,
$(\mathrm{Id}_l - I_j^l P_l^j)^2 \ne \mathrm{Id}_l - I_j^l P_l^j$
Additive Theory
Recursive Relation
$E_k = R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1} + I_{k-1}^k E_{k-1} P_k^{k-1}) R_k^m, \qquad E_0 = 0$
Expanding the recursion additively,
$E_k = R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) R_k^m + R_k^m I_{k-1}^k R_{k-1}^m (\mathrm{Id}_{k-1} - I_{k-2}^{k-1} P_{k-1}^{k-2} + I_{k-2}^{k-1} E_{k-2} P_{k-1}^{k-2}) R_{k-1}^m P_k^{k-1} R_k^m$
$= R_k^m (\mathrm{Id}_k - I_{k-1}^k P_k^{k-1}) R_k^m$
$\quad + R_k^m I_{k-1}^k R_{k-1}^m (\mathrm{Id}_{k-1} - I_{k-2}^{k-1} P_{k-1}^{k-2}) R_{k-1}^m P_k^{k-1} R_k^m$
$\quad + R_k^m I_{k-1}^k R_{k-1}^m I_{k-2}^{k-1} R_{k-2}^m (\mathrm{Id}_{k-2} - I_{k-3}^{k-2} P_{k-2}^{k-3}) R_{k-2}^m P_{k-1}^{k-2} R_{k-1}^m P_k^{k-1} R_k^m + \cdots$
$= \sum_{j=1}^{k} \big( R_k^m I_{k-1}^k R_{k-1}^m \cdots R_{j+1}^m I_j^{j+1} \big) \big[ R_j^m (\mathrm{Id}_j - I_{j-1}^j P_j^{j-1}) R_j^m \big] \big( P_{j+1}^j R_{j+1}^m \cdots P_k^{k-1} R_k^m \big)$
Ideas Additive Theory The operator R m j (Id j I j 1 P j 1 j )R m j has already been analyzed in the two-grid analysis.
Ideas Additive Theory The operator R m j (Id j I j 1 P j 1 j )R m j has already been analyzed in the two-grid analysis. The ey is to analyze (for 0 j ) the multi-level operator T def,j = R m I 1 Rm j+1 Ij+1 j : V j V and its transpose with respect to the variational forms T def j, = P j j+1 Rm j P 1 R m : V V j
Ideas Additive Theory The operator R m j (Id j I j 1 P j 1 j )R m j has already been analyzed in the two-grid analysis. The ey is to analyze (for 0 j ) the multi-level operator T def,j = R m I 1 Rm j+1 Ij+1 j : V j V and its transpose with respect to the variational forms T def j, = P j j+1 Rm j P 1 R m : V V j We will need a strengthened Cauchy-Schwarz inequality with smoothing and estimates that compare the meshdependent norms on consecutive levels.
The operator R_j^m (Id_j − I_{j−1}^j P_j^{j−1}) R_j^m has already been analyzed in the two-grid analysis.

The key is to analyze (for 0 ≤ j ≤ ℓ) the multi-level operator

  T_{ℓ,j} := R_ℓ^m I_{ℓ−1}^ℓ ⋯ R_{j+1}^m I_j^{j+1} : V_j → V_ℓ

and its transpose with respect to the variational forms

  T_{j,ℓ} := P_{j+1}^j R_{j+1}^m ⋯ P_ℓ^{ℓ−1} R_ℓ^m : V_ℓ → V_j

We will need a strengthened Cauchy-Schwarz inequality with smoothing, and estimates that compare the mesh-dependent norms on consecutive levels.

We also need to circumvent the fact that for nonconforming methods, in general,

  (Id_ℓ − I_{ℓ−1}^ℓ P_ℓ^{ℓ−1})² ≠ Id_ℓ − I_{ℓ−1}^ℓ P_ℓ^{ℓ−1}
Strengthened Cauchy-Schwarz Inequality with Smoothing

Let 0 ≤ j, l ≤ ℓ, v_j ∈ V_j and v_l ∈ V_l. Then

  a_ℓ(T_{ℓ,j} R_j^m v_j, T_{ℓ,l} R_l^m v_l) ≤ C m^{−α} δ^{|l−j|} ‖v_j‖_{1−α,j} ‖v_l‖_{1−α,l}

where C is a positive constant, 0 < δ < 1, and α ∈ (1/2, 1] is the index of elliptic regularity, provided the number of smoothing steps m is sufficiently large.

A Nonconforming Estimate

  ‖(Id_{ℓ−1} − P_ℓ^{ℓ−1} I_{ℓ−1}^ℓ) v‖_{1−α,ℓ−1} ≤ C h_ℓ^α ‖v‖_{1,ℓ−1}   ∀ v ∈ V_{ℓ−1}

(This will allow us to handle (Id_ℓ − I_{ℓ−1}^ℓ P_ℓ^{ℓ−1})² ≠ Id_ℓ − I_{ℓ−1}^ℓ P_ℓ^{ℓ−1}.)
Two-Level Estimates (0 < θ < 1)

  ‖I_{ℓ−1}^ℓ v‖²_{1,ℓ} ≤ (1 + θ²) ‖v‖²_{1,ℓ−1} + C θ^{−2} h_ℓ^{2α} ‖v‖²_{1+α,ℓ−1}   ∀ v ∈ V_{ℓ−1}

  ‖I_{ℓ−1}^ℓ v‖²_{1−α,ℓ} ≤ (1 + θ²) ‖v‖²_{1−α,ℓ−1} + C θ^{−2} h_ℓ^{2α} ‖v‖²_{1,ℓ−1}   ∀ v ∈ V_{ℓ−1}

  ‖P_ℓ^{ℓ−1} v‖²_{1−α,ℓ−1} ≤ (1 + θ²) ‖v‖²_{1−α,ℓ} + C θ^{−2} h_ℓ^{2α} ‖v‖²_{1,ℓ}   ∀ v ∈ V_ℓ

Important aspect: the constant C is independent of ℓ and θ.

The parameter θ calibrates the meaning of high/low frequency. The freedom to choose a different θ on each level allows us to build multi-level estimates from these two-level estimates.
Theorem. There exists a positive constant C, independent of ℓ and m, such that

  ‖z − MG_V(ℓ, γ, z₀, m)‖_{a_ℓ} ≤ C m^{−α} ‖z − z₀‖_{a_ℓ}

provided that the number of smoothing steps m is larger than a number m_* which is independent of ℓ. In particular, the V-cycle algorithm is a contraction with contraction number uniformly bounded away from 1 if m is sufficiently large.

This result holds for both conforming and nonconforming finite element methods.

In the conforming case we can combine this with the result from the multiplicative theory to show that

  ‖z − MG_V(ℓ, γ, z₀, m)‖_{a_ℓ} ≤ [C/(C + m^α)] ‖z − z₀‖_{a_ℓ}
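The level-independent contraction can be observed numerically. Below is a minimal V-cycle for the 1D model problem −u″ = f with homogeneous Dirichlet boundary conditions, built from standard components chosen by me for illustration (weighted Jacobi smoothing, full-weighting restriction, linear interpolation); it is a sketch, not the algorithm analyzed in these slides. One V(2,2)-cycle reduces a random error by a factor well below 1, and the factor barely changes as the number of levels grows.

```python
import numpy as np

def smooth(u, f, h, m, omega=2/3):
    """m sweeps of weighted Jacobi for the 1D Laplacian (Dirichlet b.c.)."""
    for _ in range(m):
        v = u.copy()
        v[1:-1] = (1 - omega)*u[1:-1] + omega*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
        u = v
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:])/(h*h)
    return r

def restrict(r):
    """Full weighting onto the next coarser grid."""
    N = len(r) - 1
    rc = np.zeros(N//2 + 1)
    rc[1:-1] = 0.25*r[1:N-1:2] + 0.5*r[2:N-1:2] + 0.25*r[3:N:2]
    return rc

def prolong(uc):
    """Linear interpolation to the next finer grid."""
    u = np.zeros(2*(len(uc) - 1) + 1)
    u[0::2] = uc
    u[1::2] = 0.5*(uc[:-1] + uc[1:])
    return u

def vcycle(u, f, h, m):
    if len(u) == 3:                        # coarsest grid: one unknown, solve exactly
        u[1] = 0.5*h*h*f[1]
        return u
    u = smooth(u, f, h, m)                 # pre-smoothing
    ec = vcycle(np.zeros((len(u) - 1)//2 + 1), restrict(residual(u, f, h)), 2*h, m)
    u = u + prolong(ec)                    # coarse-grid correction
    return smooth(u, f, h, m)              # post-smoothing

# Measure the error reduction of one V-cycle for f = 0 (exact solution 0),
# starting from a random error, on successively deeper grid hierarchies.
rng = np.random.default_rng(1)
factors = []
for k in (5, 7, 9):                        # n = 32, 128, 512 intervals
    n = 2**k
    u0 = np.zeros(n + 1)
    u0[1:-1] = rng.standard_normal(n - 1)
    u1 = vcycle(u0.copy(), np.zeros(n + 1), 1.0/n, m=2)
    factors.append(np.linalg.norm(u1)/np.linalg.norm(u0))
print([round(x, 3) for x in factors])      # all well below 1, nearly level-independent
```

The measured factors stay bounded away from 1 uniformly in the number of levels, which is the qualitative content of the theorem (the theorem, of course, asserts this in the energy norm with explicit dependence on m).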
References

1. S.C. Brenner, Convergence of the multigrid V-cycle algorithm for second order boundary value problems without full elliptic regularity, Math. Comp., 2002.
2. S.C. Brenner, Convergence of nonconforming V-cycle and F-cycle multigrid algorithms for second order elliptic boundary value problems, Math. Comp., 2004.
3. S.C. Brenner, Smoothers, mesh dependent norms, interpolation and multigrid, Appl. Numer. Math., 2002.
4. S.C. Brenner and L.-Y. Sung, Multigrid algorithms for C⁰ interior penalty methods, SIAM J. Numer. Anal., 2006.
Other Algorithms

F-Cycle Algorithm for A_ℓ z = γ with initial guess z₀; Output = MG_F(ℓ, γ, z₀, m)

Correction Step (coarse-grid algorithm followed by V-cycle):

  q ← MG_F(ℓ−1, I_ℓ^{ℓ−1}(γ − A_ℓ z_m), 0, m)
  q ← MG_V(ℓ−1, I_ℓ^{ℓ−1}(γ − A_ℓ z_m), q, m)

[Figure: scheduling diagram for the F-cycle algorithm on levels ℓ = 0, 1, 2, 3]
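The scheduling diagram can be reproduced symbolically. This sketch (my own illustration) records the sequence of levels visited by the V-, F-, and W-cycles; counting visits per level shows the F-cycle's work sitting between the other two.

```python
from collections import Counter

def v(l):
    """V-cycle schedule: one descent and one ascent."""
    return [0] if l == 0 else [l] + v(l - 1) + [l]

def w(l):
    """W-cycle schedule: two recursive coarse-grid calls."""
    return [0] if l == 0 else [l] + w(l - 1) + w(l - 1) + [l]

def f(l):
    """F-cycle schedule: recursive F-cycle followed by a V-cycle."""
    return [0] if l == 0 else [l] + f(l - 1) + v(l - 1) + [l]

L = 4
cv, cf, cw = Counter(v(L)), Counter(f(L)), Counter(w(L))
for j in range(L, 0, -1):
    print(j, cv[j], cf[j], cw[j])
# Level j >= 1 is visited 2 times by the V-cycle, 2*(L - j + 1) times by the
# F-cycle, and 2**(L - j + 1) times by the W-cycle, so V <= F <= W in cost.
```

Since the visit count of the F-cycle grows only linearly toward the coarse levels (versus exponentially for the W-cycle), its total cost stays proportional to the number of fine-grid unknowns while it revisits coarse grids more often than the V-cycle does, which is consistent with the robustness remarks on the next slide.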
Convergence of the F-cycle algorithm follows from the convergence of the V-cycle algorithm by a perturbation argument. The computational cost of the F-cycle algorithm is more than that of the V-cycle algorithm but less than that of the W-cycle algorithm. For nonconforming methods the F-cycle is more robust than the V-cycle (i.e., it requires a smaller number of smoothing steps) and its performance is almost identical to that of the W-cycle. Nonconforming F-cycle algorithms have been used extensively in CFD computations by Rannacher and Turek.
Variable V-Cycle

This is the V-cycle algorithm in which the number of smoothing steps can vary from level to level.

Suppose we want to solve the finite element equation on level ℓ. Then m_j, the number of smoothing steps on level j, is chosen according to the rule

  β₁ m_j ≤ m_{j−1} ≤ β₂ m_j   for 1 ≤ j ≤ ℓ,

where 1 < β₁ ≤ β₂.

The variable V-cycle algorithm is mostly used as an optimal preconditioner.
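For instance (my own illustration, under the common assumption n_j ≈ 4^j unknowns on level j in two dimensions), the choice m_j = m₀·2^{ℓ−j} satisfies the rule with β₁ = β₂ = 2, and the total smoothing work still remains proportional to the number of unknowns on the finest level:

```python
ell, m0 = 8, 1
m = [m0 * 2**(ell - j) for j in range(ell + 1)]   # smoothing steps double per coarser level

# the rule beta1*m_j <= m_{j-1} <= beta2*m_j holds with beta1 = beta2 = 2
ok = all(2*m[j] <= m[j - 1] <= 2*m[j] for j in range(1, ell + 1))
print(ok)

n = [4**j for j in range(ell + 1)]                # unknowns per level in 2D
work = sum(mj*nj for mj, nj in zip(m, n))
print(work / n[ell])                              # geometric series: bounded by 2*m0
```

The doubling of smoothing steps toward coarse grids is absorbed by the factor-of-4 decrease in unknowns, so the work ratio work/n_ℓ = Σ_j 2^{j−ℓ} stays below 2 for every ℓ.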
Full Multigrid

The ℓ-th level multigrid algorithm solves the equation A_ℓ z = γ with an (arbitrary) initial guess z₀.

When we are solving the finite element equation

  ∫_Ω ∇u · ∇v dx = ∫_Ω f v dx   ∀ v ∈ V_ℓ,

the finite element solutions u_ℓ on different levels are related, because they are all approximations of the same function u. Therefore the initial guess for the ℓ-th level multigrid algorithm should come from the solution on the (ℓ−1)-st level.
Full Multigrid Algorithm

Finite Element Equation:  A_ℓ u_ℓ = φ_ℓ,  where ⟨φ_ℓ, v⟩ = ∫_Ω f v dx  ∀ v ∈ V_ℓ

For ℓ = 0:  û₀ = A₀^{−1} φ₀

For ℓ ≥ 1:
  u_ℓ^0 = I_{ℓ−1}^ℓ û_{ℓ−1}
  u_ℓ^l = MG(ℓ, φ_ℓ, u_ℓ^{l−1}, m)   for 1 ≤ l ≤ r
  û_ℓ = u_ℓ^r
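A minimal full multigrid sketch for the 1D model problem −u″ = π²sin(πx) on (0,1), u(0) = u(1) = 0 (my own illustrative example, built from standard components: weighted Jacobi smoothing, full weighting, linear interpolation). Each level's initial guess is the interpolated coarse solution, followed by r V-cycles; the result is then compared against the exact discrete solution's accuracy.

```python
import numpy as np

def smooth(u, f, h, m, omega=2/3):
    for _ in range(m):
        v = u.copy()
        v[1:-1] = (1 - omega)*u[1:-1] + omega*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
        u = v
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:])/(h*h)
    return r

def restrict(r):
    N = len(r) - 1
    rc = np.zeros(N//2 + 1)
    rc[1:-1] = 0.25*r[1:N-1:2] + 0.5*r[2:N-1:2] + 0.25*r[3:N:2]
    return rc

def prolong(uc):
    u = np.zeros(2*(len(uc) - 1) + 1)
    u[0::2] = uc
    u[1::2] = 0.5*(uc[:-1] + uc[1:])
    return u

def vcycle(u, f, h, m):
    if len(u) == 3:                                   # coarsest grid: solve exactly
        u[1] = 0.5*h*h*f[1]
        return u
    u = smooth(u, f, h, m)
    ec = vcycle(np.zeros((len(u) - 1)//2 + 1), restrict(residual(u, f, h)), 2*h, m)
    return smooth(u + prolong(ec), f, h, m)

def fmg(L, r, m, f_rhs):
    """Nested iteration: exact solve on level 0, then interpolate + r V-cycles."""
    u = np.zeros(3)
    u[1] = 0.5*(0.5**2)*f_rhs(np.array([0.5]))[0]     # level 0: n = 2 intervals
    n = 2
    for l in range(1, L + 1):
        n *= 2
        x = np.linspace(0.0, 1.0, n + 1)
        u = prolong(u)                                # initial guess from level l-1
        for _ in range(r):
            u = vcycle(u, f_rhs(x), 1.0/n, m)
    return u

f_rhs = lambda x: np.pi**2 * np.sin(np.pi*x)
L = 6                                                 # finest grid: n = 128 intervals
u = fmg(L, r=2, m=2, f_rhs=f_rhs)
n = len(u) - 1
x = np.linspace(0.0, 1.0, n + 1)
err_fmg = np.max(np.abs(u - np.sin(np.pi*x)))

# exact discrete solution of the same linear system, for comparison
A = (np.diag(2*np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) * n**2
ud = np.zeros(n + 1)
ud[1:-1] = np.linalg.solve(A, f_rhs(x[1:-1]))
err_disc = np.max(np.abs(ud - np.sin(np.pi*x)))
print(err_fmg <= 2*err_disc)   # FMG accuracy comparable to the exact discrete solution
```

With r = 2 the algebraic error left by the nested iteration is already a small fraction of the discretization error, which is the behavior the full multigrid theory predicts for sufficiently large r.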
Suppose the multigrid algorithm is uniformly convergent. For a sufficiently large r, the full multigrid algorithm, which is a nested iteration of the ℓ-th level multigrid algorithms, produces an approximate solution of the continuous problem that is accurate to the same order as the exact solution of the finite element equation. Moreover, the computational cost of the full multigrid algorithm remains proportional to the number of unknowns.
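The cost claim follows from summing geometric series. A small arithmetic check (a sketch under my assumption that one V-cycle on level j costs a constant times Σ_{i≤j} n_i, with n_j ≈ 4^j unknowns in 2D):

```python
d, r, L = 2, 2, 10
n = [2**(d*j) for j in range(L + 1)]               # unknowns per level (2D: ~4^j)
cost_v = [sum(n[:j + 1]) for j in range(L + 1)]    # one V-cycle on level j
cost_fmg = sum(r*c for c in cost_v)                # r V-cycles on every level 0..L
print(cost_fmg / n[L])                             # bounded as L grows
```

Both sums are geometric, so cost_fmg/n_L approaches the constant r·(4/3)² = 32/9 from below; the total work of full multigrid is therefore O(n_L), the same order as a single V-cycle on the finest grid.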