An Introduction to Multigrid Methods for Large-Scale Computation


1 An Introduction to Multigrid Methods for Large-Scale Computation. Chin-Tien Wu, National Center for Theoretical Sciences, National Tsing-Hua University, 2005.

2 How Large Are the Real Simulations? Large-scale simulation of polymer electrolyte fuel cells by parallel computing (Hua Meng and Chao-Yang Wang, 2004): FEM model with O(10^6) nodes. Three-dimensional finite element modeling of the human ear for sound transmission (R. Z. Gan, B. Feng and Q. Sun, 2004): FEM model with 10^5 ~ 10^6 nodes. Car engine: O(10^5) nodes. Commercial aircraft: 10^7 nodes. FinFET transistor: 10^5 nodes.

3 What do we need in order to simulate? Our goal is to introduce multigrid methods for solving sparse linear systems. Why multigrid? 1. The computation cost of multigrid is proportional to the problem size. 2. Multigrid is easy to parallelize.

4 Outline: Stationary Iterative Methods; Some Finite Element Error Estimates; Multigrid; Algebraic Multigrid; Nonlinear Multigrid (FAS); Multigrid Parallelization. References: 1. An Introduction to Multilevel Methods (Jinchao Xu). 2. Multigrid Methods (Stephen F. McCormick). 3. A Multigrid Tutorial (William L. Briggs). 4. Matrix Iterative Analysis (Richard S. Varga). 5. The Mathematical Theory of Finite Element Methods (Brenner and Scott). 6. Introduction to Algebraic Multigrid (Christian Wagner).

5 Solving the Linear System Ax = b by Iterative Methods. Stationary methods: Jacobi, Gauss-Seidel (GS), SOR. Krylov subspace methods: conjugate gradient, GMRES (Saad and Schultz 1986), and MINRES (Paige and Saunders). Multigrid methods: geometric multigrid (MG), by Fedorenko 1961, and algebraic multigrid (AMG), by Ruge and Stüben 1985.

6 Basic Questions and Some Definitions. Basic questions: 1. How do we iterate? 2. For which classes of matrices A does the iteration converge? 3. What is the convergence rate? Some definitions: A is irreducible if there is no permutation P such that P^T A P = [ A_{1,1}  A_{1,2} ; 0  A_{2,2} ]. A is non-negative (denoted A >= 0) if a_{i,j} >= 0 for all 1 <= i,j <= n. A is an M-matrix if A is nonsingular, a_{i,j} <= 0 for i != j, and A^{-1} >= 0. A is irreducibly diagonally dominant if A is irreducible, diagonally dominant, and |a_{i,i}| > sum_{j != i} |a_{i,j}| for some i. A = M - N is a regular splitting of A if M is nonsingular and M^{-1} >= 0.

7 Stationary Iterative Methods. 1. r_old = f - A u_old. 2. Solve B e = r_old. 3. Update u_new = u_old + e. The error satisfies e_new = e_old - B^{-1}(f - A u_old) = e_old - B^{-1} A (u - u_old) = (I - B^{-1}A) e_old. B is called an iterator or preconditioner of A, and E_B = I - B^{-1}A is called the error reduction operator of the iterator B. Perron-Frobenius Theorem: Let A >= 0 be an irreducible matrix. Then 1. A has a positive real eigenvalue equal to its spectral radius rho(A). 2. There is an eigenvector x > 0 corresponding to rho(A). 3. rho(A) increases when any entry of A increases. 4. rho(A) is a simple eigenvalue of A.

8 Some Well-Known Iterative Methods. Suppose A = D - L - U, where D is the diagonal and L and U are the lower and upper triangular parts, respectively. Richardson: B = (1/omega) I, where 0 < omega < 2/rho(A). Jacobi: B = D. Damped Jacobi: B = (1/omega) D, where 0 < omega < 2/rho(D^{-1}A). Gauss-Seidel: B = D - L. SOR: B = (1/omega)(D - omega L), where 0 < omega < 2.
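To make the splittings concrete, here is a minimal sketch (my own illustration, not part of the lecture) that builds the iterator B for each method and runs the generic iteration u <- u + B^{-1}(f - Au):

```python
import numpy as np

def iterator(A, method, omega=1.0):
    """Return the iterator (preconditioner) B for the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                      # L: negated strictly lower part
    if method == "richardson":               # needs 0 < omega < 2/rho(A)
        return np.eye(len(A)) / omega
    if method == "jacobi":
        return D
    if method == "damped_jacobi":            # needs 0 < omega < 2/rho(D^{-1}A)
        return D / omega
    if method == "gauss_seidel":
        return D - L
    if method == "sor":                      # needs 0 < omega < 2
        return (D - omega * L) / omega
    raise ValueError(method)

def stationary_solve(A, f, method, omega=1.0, tol=1e-10, maxit=1000):
    B = iterator(A, method, omega)
    u = np.zeros_like(f)
    for m in range(maxit):
        u = u + np.linalg.solve(B, f - A @ u)   # u_new = u_old + B^{-1} r_old
        if np.linalg.norm(f - A @ u) < tol:
            return u, m + 1
    return u, maxit
```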

9 Jacobi and Gauss-Seidel. Jacobi: x_i^(m+1) = - sum_{j != i} (a_{i,j}/a_{i,i}) x_j^(m) + r_i/a_{i,i}. Gauss-Seidel: x_i^(m+1) = - sum_{j < i} (a_{i,j}/a_{i,i}) x_j^(m+1) - sum_{j > i} (a_{i,j}/a_{i,i}) x_j^(m) + r_i/a_{i,i}. HW1: Write down a formula for SOR. HW2: Write a program to solve the 4x4 system with A = tridiag[-1, 2, -1] and right-hand side b = [1, 0, 0, 1]^T by Jacobi and Gauss-Seidel, starting with the initial guess x^(0) = [0, 0, 0, 0]^T.
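A possible sketch for HW2 (assuming, as reconstructed above, that the 4x4 system is tridiag[-1, 2, -1] with b = [1, 0, 0, 1]^T, so the exact solution is [1, 1, 1, 1]^T):

```python
import numpy as np

A = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
b = np.array([1., 0., 0., 1.])

def jacobi_sweep(x):
    y = np.empty_like(x)
    for i in range(4):
        s = sum(A[i, j] * x[j] for j in range(4) if j != i)
        y[i] = (b[i] - s) / A[i, i]           # only old values on the right
    return y

def gs_sweep(x):
    x = x.copy()
    for i in range(4):
        s = sum(A[i, j] * x[j] for j in range(4) if j != i)
        x[i] = (b[i] - s) / A[i, i]           # new values used as soon as available
    return x

xj = xg = np.zeros(4)
for m in range(1, 21):
    xj, xg = jacobi_sweep(xj), gs_sweep(xg)
    print(m, abs(1 - xj).max(), abs(1 - xg).max())  # GS error ~ square of Jacobi's
```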

10 Let E_J = I - D^{-1}A and E_GS = I - (D - L)^{-1}A. Since the solution of HW2 is x = [1,1,1,1]^T and e_0 = x - x^(0) = [1,1,1,1]^T, we have e_J^m = (E_J)^m e_0 and e_GS^m = (E_GS)^m e_0. Computing a few iterations, one finds ||e_J^m|| > ||e_GS^m||; in fact, for this (consistently ordered) A, rho(E_GS) = rho(E_J)^2, so Gauss-Seidel converges about twice as fast. You might get the feeling that the Gauss-Seidel method is faster than the Jacobi method. Stein-Rosenberg Theorem: Let B_J = L' + U' be the Jacobi matrix and B_GS = (I - L')^{-1} U' be the Gauss-Seidel matrix, where L' = D^{-1}L and U' = D^{-1}U. Then one and only one of the following relations is valid: 1) rho(B_J) = rho(B_GS) = 0. 2) 0 < rho(B_GS) < rho(B_J) < 1. 3) rho(B_J) = rho(B_GS) = 1. 4) 1 < rho(B_J) < rho(B_GS).

11 Convergence of the Jacobi, Gauss-Seidel and SOR Iterative Methods. Lemma 1: If A = (a_{i,j}) >= 0 is irreducible, then either all row sums of A are equal to rho(A), or (1): min_{1<=i<=n} sum_{j=1}^n a_{i,j} < rho(A) < max_{1<=i<=n} sum_{j=1}^n a_{i,j}. Proof: Case (1): all row sums of A are equal (= sigma). Let zeta = [1,1,...,1]^T. Clearly A zeta = sigma zeta, so sigma <= rho(A). However, Gershgorin's theorem implies rho(A) <= sigma. Hence rho(A) = sigma. Case (2): not all row sums are equal. Construct B = (b_{i,j}) >= 0 and C = (c_{i,j}) >= 0 by decreasing and increasing some entries of A, respectively, so that all row sums of B equal alpha = min_i sum_j a_{i,j} and all row sums of C equal beta = max_i sum_j a_{i,j}. By the Perron-Frobenius theorem, rho(B) <= rho(A) <= rho(C). From the result of Case (1), the inequality (1) holds. Lemma 2: Let A and B be two matrices with 0 <= B <= A. Then rho(B) <= rho(A).

12 Theorem 1: Let A = (a_{i,j}) be a strictly or irreducibly diagonally dominant matrix. Then the Jacobi and Gauss-Seidel iterative methods converge. Proof: Recall that E_J = I - D^{-1}A = D^{-1}(L + U) = (b_{i,j}), where b_{i,j} = 0 if i = j and b_{i,j} = -a_{i,j}/a_{i,i} if i != j. From Lemma 2, rho(B) <= rho(|B|). Since A is strictly diagonally dominant, sum_j |b_{i,j}| < 1 for all 1 <= i <= n, so Lemma 1 implies rho(|B|) < 1. As a result, rho(B) < 1 and the Jacobi iterative method converges. Now, E_GS = I - (D-L)^{-1}A = (D-L)^{-1}U = (I - L')^{-1} U', where L' = D^{-1}L and U' = D^{-1}U. We have |(I - L')^{-1} U'| <= (I + |L'| + |L'|^2 + ... + |L'|^{n-1}) |U'| = (I - |L'|)^{-1} |U'|. Now consider B_J = |L'| + |U'| and B_GS = (I - |L'|)^{-1} |U'|. Since we have already shown rho(B_J) < 1, the Stein-Rosenberg theorem implies rho(B_GS) < 1. Therefore, the Gauss-Seidel iterative method converges.

13 Theorem 2: Let A = D - E - E* and D be Hermitian matrices, where D is positive definite and D - omega E is nonsingular for 0 <= omega <= 2. Let E_SOR = I - omega (D - omega E)^{-1} A. Then rho(E_SOR) < 1 if and only if A is positive definite and 0 < omega < 2. Proof: First, assume e_0 is a nonzero vector. The SOR iteration can be written as (D - omega E) e_{m+1} = (omega E* + (1 - omega) D) e_m ---- (2). Let delta_m = e_m - e_{m+1}. Subtracting (D - omega E) e_m from both sides of (2) gives (D - omega E) delta_m = omega A e_m ---- (3), and subtracting (omega E* + (1 - omega) D) e_{m+1} from both sides of (2) gives omega A e_{m+1} = ((1 - omega) D + omega E*) delta_m ---- (4). From e_m* (3) minus e_{m+1}* (4) and simplifying the expression (HW3), one has (2 - omega) delta_m* D delta_m = omega { e_m* A e_m - e_{m+1}* A e_{m+1} } ---- (5). Assume A is positive definite and 0 < omega < 2, and let e_0 be any eigenvector of E_SOR. We have e_1 = lambda e_0 and delta_0 = (1 - lambda) e_0, and (5) reduces to (2 - omega) |1 - lambda|^2 e_0* D e_0 = omega (1 - |lambda|^2) e_0* A e_0 ---- (6). Now, lambda != 1: otherwise delta_0 = 0, so A e_0 = 0 (by (3)) and e_0 = 0, a contradiction. Since A and D are positive definite and 0 < omega < 2, (6) implies 1 - |lambda|^2 > 0. Therefore rho(E_SOR) < 1. Using similar arguments, one can show that the converse is also true.

14 For SOR, it is possible to determine the optimal value of omega for a special type of matrices (p-cyclic). The optimal value omega_b is precisely specified as the unique positive root in (1, p/(p-1)) of the equation (rho(E_J) omega_b)^p = p^p (p-1)^{1-p} (omega_b - 1) (Varga 1959). For p = 2, omega_b = 2 / (1 + sqrt(1 - rho(E_J)^2)) (Young 1950). Semi-iterative method: y_m = sum_{j=0}^m v_j(m) x_j, where sum_{j=0}^m v_j(m) = 1. We have e_m = sum_{j=0}^m v_j(m) e_j. In general, e_m = P_m(E_S) e_0, where P_m is a polynomial and E_S is the error reduction operator of an iterator S. This is the so-called polynomial acceleration method. The most important choice is the Chebyshev polynomials.

15 Chebyshev Semi-Iterative Method. Algorithm: y_{m+1} = omega_{m+1} { E_S y_m + c - y_{m-1} } + y_{m-1} for m >= 1, where c = B^{-1}f is the constant term of the underlying iteration x_{m+1} = E_S x_m + c, omega_{m+1} = 1 + C_{m-1}(1/rho) / C_{m+1}(1/rho), rho = rho(E_S), y_0 = x_0, and y_1 = E_S y_0 + c. Here the C_m are the Chebyshev polynomials: C_0 = 1, C_1 = x, C_{m+1} = 2x C_m - C_{m-1}. Convergence: ||e_m|| <= [ 2 (omega_b - 1)^{m/2} / (1 + (omega_b - 1)^m) ] ||e_0||, where omega_b = 2 / (1 + sqrt(1 - rho^2)). The asymptotic convergence factor improves from rho to sqrt(omega_b - 1) = rho / (1 + sqrt(1 - rho^2)) < rho. Remark: there are cases where polynomial acceleration does not improve the asymptotic rate of convergence.
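A runnable sketch (my own demo problem, not the lecture's) of Chebyshev acceleration wrapped around Jacobi for the 1-D Laplacian, using the recurrence for C_m(1/rho) above:

```python
import numpy as np

N = 50
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
f = np.ones(N)
E = np.eye(N) - A / 2.0               # Jacobi: E_S = I - D^{-1}A with D = 2I
c = f / 2.0                           # c = D^{-1} f
rho = np.cos(np.pi / (N + 1))         # rho(E_S) for this model problem
t = 1.0 / rho

C_prev, C_cur = 1.0, t                # C_0(1/rho), C_1(1/rho)
y_prev = np.zeros(N)                  # y_0 = x_0
y = E @ y_prev + c                    # y_1
for m in range(1, 300):
    C_next = 2 * t * C_cur - C_prev               # C_{m+1} = 2x C_m - C_{m-1}
    omega = 1.0 + C_prev / C_next                 # omega_{m+1}
    y, y_prev = omega * (E @ y + c - y_prev) + y_prev, y
    C_prev, C_cur = C_cur, C_next
print(np.linalg.norm(f - A @ y))      # far smaller than 300 plain Jacobi sweeps give
```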

16 Some PDE and Finite Element Analysis. Finite element solutions: let I_h be a given triangulation, V_h = { v in H^1_0 : v|_T in P_1(T) for all T in I_h }, and let pi_h : H^1 -> V_h be the interpolation defined by pi_h(u)(N_i) = u(N_i). Consider the weak solution u in H^1_0 satisfying a(u, v) = (f, v) for all v in H^1_0; a finite element solution u_h in V_h satisfies a(u_h, v) = (f, v) for all v in V_h. Interpolation errors: ||v - pi_h v||_1 <= C h^r ||v||_{r+1} and ||v - pi_h v||_0 <= C h^{r+1} ||v||_{r+1}. H^2-regularity: a(.,.) is said to be H^2-regular if there exists a constant C such that ||u||_2 <= C ||f||_0 for all f in L^2.

17 The Finite Element Solution is Quasi-Optimal. Cea's Theorem: ||u - u_h||_{H^1} <= (C/alpha) min_{v in V_h} ||u - v||_{H^1}, where C is the continuity constant and alpha is the coercivity constant of a(.,.). Proof: Step 1: a(u - u_h, v) = 0 for all v in V_h (Galerkin orthogonality). Step 2: alpha ||u - u_h||^2_{H^1} <= a(u - u_h, u - u_h) = a(u - u_h, u - v) + a(u - u_h, v - u_h) = a(u - u_h, u - v) <= C ||u - u_h||_{H^1} ||u - v||_{H^1}. HW4: If a(.,.) is self-adjoint, show that ||u - u_h||_A = min_{v in V_h} ||u - v||_A, where ||v||_A = sqrt(a(v,v)) is the energy norm. Remark: the finite element solution is the orthogonal projection of the exact solution with respect to the energy norm.

18 FEM Error Estimation. Theorem 3: Assume the interpolation error estimates hold for the given I_h and a(.,.) is H^2-regular. The following estimates hold: ||u - u_h||_{H^1} <= C h^r ||u||_{r+1} (i) and ||u - u_h||_{L^2} <= C h^{r+1} ||u||_{r+1} (ii). Proof: From the interpolation estimates and Cea's theorem, (i) is immediate. To prove (ii), we use the duality argument. Let w be the solution of the adjoint problem a(v, w) = (u - u_h, v) for all v in H^1_0. Choosing v = u - u_h, we have (u - u_h, u - u_h) = a(u - u_h, w) = a(u - u_h, w - w_h) for any w_h in V_h, which is <= C ||u - u_h||_{H^1} ||w - w_h||_{H^1} <= C h ||u - u_h||_{H^1} ||w||_2 <= C h ||u - u_h||_{H^1} ||u - u_h||_{L^2}. Therefore ||u - u_h||_{L^2} <= C h ||u - u_h||_{H^1} <= C h^{r+1} ||u||_{r+1}.

19 Definition: mesh-dependent norms ||v||_{k,h} = (A_h^k v, v)_h^{1/2} for v in V_h, k = 0, 1, 2, where (v, w)_h = h^2 sum_i v(N_i) w(N_i). Clearly, ||v||_{0,h} is equivalent to ||v||_0 and ||v||_{1,h} = ||v||_A. Lemma 3: Lambda(A_h) <= C h^{-2}. Proof: Let lambda be an eigenvalue of A_h with eigenvector phi. Then a(phi, phi) = (A_h phi, phi)_h = lambda (phi, phi)_h = lambda ||phi||^2_{0,h}, so lambda = ||phi||^2_A / ||phi||^2_{0,h} <= C ||phi||^2_A / ||phi||^2_0 <= C h^{-2} by the inverse inequality. Lemma 4 (generalized Cauchy-Schwarz inequality): a(v, w) <= ||v||_{1+t,h} ||w||_{1-t,h} for all v, w in V_h and t in R.

20 Multigrid Methods. Ideas: approximate the solution on the fine grid using iterative methods; correct the remaining errors from coarse grids. [Figure: mesh refinement hierarchy with prolongation I_{k-1}^k and restriction I_k^{k-1}; MG coarsening and the MG V-cycle.]

21 Why Does Multigrid Work? 1. Relaxation methods converge slowly but smooth the error quickly. Ex1: consider Lu = -u'' = lambda u; finite differences give (-u_{j-1} + 2u_j - u_{j+1}) / h^2 = lambda u_j. The eigenvalues are lambda_k = (4/h^2) sin^2( kπ / (2(N+1)) ) with eigenvectors phi_k = sin( kjπ / (N+1) ); here k = 1, ..., N is the wave number and j is the node number. Richardson relaxation: E_R = I - (1/sigma) A_h, where A_h = (1/h^2) tridiag[-1 2 -1]. Fourier analysis: choosing sigma = 4/h^2 (the largest eigenvalue), after m relaxations e_m = sum_{k=1}^N (1 - lambda_k/sigma)^m a_k phi_k = sum_{k=1}^N (1 - sin^2( kπ / (2(N+1)) ))^m a_k phi_k = sum_{k=1}^N alpha_k^m a_k phi_k, where the a_k are the Fourier coefficients of e_0. alpha_k^m -> 0 more quickly for k close to N.

22 Damped Jacobi: e_m = (I - omega D^{-1} A)^m e_0; take, for example, an initial error combining Fourier modes sin( kjπ / (N+1) ) of low and high wave numbers. 2. Smooth error modes are more oscillatory on coarse grids, so smooth errors can be better corrected by relaxation on coarser grids. [Figures: the same smooth error on the fine grid and on the coarse grid.] Relaxation convergence rate on the fine grid: 1 - O(h^2); on the coarse grid: 1 - O(4h^2). Remember: alpha_1 = 1 - sin^2( π / (2(N+1)) ) = 1 - O(h^2) for N >> 1.
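A numerical illustration (my own sketch) of the smoothing effect: five damped-Jacobi sweeps on the 1-D Laplacian nearly eliminate an oscillatory mode while leaving a smooth mode almost untouched.

```python
import numpy as np

N = 63
h = 1.0 / (N + 1)
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
j = np.arange(1, N + 1)
phi = lambda k: np.sin(k * j * np.pi / (N + 1))   # eigenvector with wave number k

e = phi(3) + phi(50)                  # smooth mode + oscillatory mode
omega, Dinv = 2.0 / 3.0, h**2 / 2.0
for _ in range(5):
    e = e - omega * Dinv * (A @ e)    # e <- (I - omega D^{-1} A) e

coef = lambda k: abs(e @ phi(k)) / ((N + 1) / 2)  # Fourier coefficient of mode k
print(coef(3), coef(50))              # ~0.96 vs ~2e-4: mode 50 is smoothed away
```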

23 3. The smooth error is corrected by the coarse-grid correction operator: E_c = I - I_H^h A_H^{-1} I_h^H A_h = (A_h^{-1} - I_H^h A_H^{-1} I_h^H) A_h, where I_h^H and I_H^h are called the restriction and prolongation operators, respectively. A_H can be obtained from discretization on the coarse grid, or from the Galerkin formulation A_H = I_h^H A_h I_H^h with I_h^H = c (I_H^h)^T. Under the Galerkin formulation, E_c is an A_h-orthogonal projection: (A_h E_c e, I_H^h q) = 0 for every coarse-grid vector q, N(E_c) = R(I_H^h), R(E_c) = N(I_h^H A_h), and E_c is the identity on N(I_h^H A_h).

24 A Picture That Shows How Multigrid Works. [Figure: the fine-grid error is split along R(I_H^h) and N(I_h^H A_h); relaxation damps the component in N(I_h^H A_h), and coarse-grid correction eliminates the component in R(I_H^h).]

25 Consider I_H^h = [1/2, 1, 1/2]^T (linear interpolation) and I_h^H = (1/2)[1/2, 1, 1/2] (full weighting). It is easy to check that A_H = I_h^H A_h I_H^h is the discretization of L on I_H. Now, for any v in V_h, let f_v = A_h v. One can consider v and v_H = I_H^h A_H^{-1} I_h^H f_v as finite element approximations of v^, the solution of a(v^, w) = (f_v, w). Then, from the FEM error estimates and H^2-regularity, we have ||E_c(v)||_k = ||(A_h^{-1} - I_H^h A_H^{-1} I_h^H) A_h v||_k = ||(v^ - v_H) - (v^ - v)||_k <= C h^{2-k} ||v^||_2 <= C h^{2-k} ||f_v||_0 = C h^{2-k} ||A_h v||_0. Consider an eigenfunction phi_k with k << N (phi_k is also an eigenfunction of A_H): we have ||E_c(phi_k)||_0 <= C h^2 lambda_k << 1. This shows the coarse-grid correction fixes the low-frequency errors. For k near N, lambda_k is close to 4/h^2, so the bound becomes ||E_c(phi_k)||_0 <= 4C = O(1): the high-frequency errors can even be amplified by coarse-grid correction.

26 Multigrid Algorithm. MG_k(w_k, g_k): 1. x_k = w_k. 2. (pre-smoothing) x_k = x_k + M_k^{-1}(g_k - A_k x_k). 3. (restriction) g_{k-1} = I_k^{k-1}(g_k - A_k x_k). 4. (correction) q_i = MG_{k-1}(q_{i-1}, g_{k-1}) for 1 <= i <= m, m = 1 or 2, with q_0 = 0. 5. (prolongation) q = I_{k-1}^k q_m. 6. Set x_k = x_k + q. 7. (post-smoothing) x_k = x_k + M_k^{-1}(g_k - A_k x_k). 8. Set MG_k(w_k, g_k) = x_k. MG error reduction operator: with pre-smoothing only, E_mg = (I - I_H^h A_H^{-1} I_h^H A_h) E_s; with post-smoothing only, E_mg = E_s (I - I_H^h A_H^{-1} I_h^H A_h).
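A compact, runnable sketch (my own, following the algorithm above with m = 1) of the V-cycle for -u'' = f on (0,1), with a damped-Jacobi smoother (M_k = (1/omega) D_k), full-weighting restriction, and linear prolongation:

```python
import numpy as np

def Av(u, h):                          # A_k u for A_k = (1/h^2) tridiag[-1 2 -1]
    r = 2 * u.copy()
    r[:-1] -= u[1:]
    r[1:] -= u[:-1]
    return r / h**2

def smooth(u, g, h, sweeps=2, omega=2/3):
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2) * (g - Av(u, h))
    return u

def restrict(r):                       # full weighting, fine -> coarse
    return 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])

def prolong(q, n):                     # linear interpolation back to n fine points
    u = np.zeros(n)
    u[1:-1:2] = q
    u[2:-2:2] = 0.5 * (q[:-1] + q[1:])
    u[0], u[-1] = 0.5 * q[0], 0.5 * q[-1]
    return u

def mg(u, g, h):                       # V-cycle (m = 1)
    n = len(u)
    if n <= 3:                         # coarsest grid: solve directly
        A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        return np.linalg.solve(A, g)
    u = smooth(u, g, h)                              # pre-smoothing
    q = mg(np.zeros((n - 1) // 2), restrict(g - Av(u, h)), 2 * h)
    u = u + prolong(q, n)                            # coarse-grid correction
    return smooth(u, g, h)                           # post-smoothing

n = 2**7 - 1
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
g = np.pi**2 * np.sin(np.pi * x)       # so that u = sin(pi x)
u = np.zeros(n)
for cycle in range(10):
    u = mg(u, g, h)
print(abs(u - np.sin(np.pi * x)).max())  # reaches discretization accuracy quickly
```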

27 Multigrid Cycles: multigrid V-cycle (m = 1), multigrid cycle with m = 2 (W-cycle), and the full multigrid cycle (FMG). [Figure: cycle diagrams.]

28 Results provided by ... at NCTU. [Figure: numerical results.]

29 MG Convergence. Smoothing property: ||A_l E_s^m|| <= eta(m) ||A_l|| for all 0 <= m < infinity and l > 0, with eta(m) -> 0 as m -> infinity. Approximation property: ||A_l^{-1} - I_{l-1}^l A_{l-1}^{-1} I_l^{l-1}|| <= C_A / ||A_l|| for all l > 0. The idea for proving the approximation property is shown on slide 25 (*). Proof of the smoothing property: consider E_S = E_R = I - (1/Lambda) A_h with Lambda >= Lambda(A_h). Let v in V_h have the Fourier expansion v = sum_k v_k phi_k. Then A_h E_R^m(v) = sum_k lambda_k (1 - lambda_k/Lambda)^m v_k phi_k, so ||A_h E_R^m(v)||_0 <= Lambda sup_{0<=x<=1} { (1 - x)^m x } ||v||_0 <= (Lambda / (m+1)) ||v||_0. Since Lambda <= C h^{-2} <= C' ||A_h|| for the Richardson iteration, clearly ||A_h E_R^m|| <= eta(m) ||A_h|| with eta(m) -> 0 as m -> infinity. HW5: Prove that MG with the Richardson smoother converges in the 2-norm.

30 Choices of Interpolations and Coarse Grids. Linear interpolation: m_p = m_r = 2. Operator-dependent interpolation: De Zeeuw 1990. The rule in choosing interpolation and restriction: m_p + m_r > m (Brandt 1977), where m is the order of the PDE, m_p - 1 is the degree of the polynomials exactly interpolated by I_H^h, and m_r - 1 is the degree of the polynomials exactly interpolated by (I_h^H)^T. Coarse-grid selection: regular coarsening, semi-coarsening, algebraic coarsening.

31 Choices of Smoothers. Stationary iterative methods: Jacobi, Gauss-Seidel, SOR, and block-type stationary iterative methods (the blocks can be determined by the way we number the nodes): vertical line ordering, horizontal line ordering, red-black ordering. Matrix of the 2-D Laplacian for line ordering: A = block-tridiag[-I, T, -I] with T = tridiag[-1, 4, -1]. [Figures: matrix patterns for line ordering and for red-black ordering.]

32 HW6: Write an MG code for solving the 1-D problem -u'' = f, u(0) = u(1) = 0, using the linear interpolant I_H^h, the restriction I_h^H = 0.5 (I_H^h)^T, and the red-black Gauss-Seidel smoother: update the even (red) points first, x_i = 0.5 (f_i + x_{i-1} + x_{i+1}), then the odd (black) points, x_{i+1} = 0.5 (f_{i+1} + x_i + x_{i+2}) (here f absorbs the h^2 factor); see the sketch below. Brandt's Local Mode Analysis: for analyzing the robustness of a smoother, Brandt's local mode analysis is a useful tool. Here we demonstrate the method by considering the Jacobi and Gauss-Seidel relaxations for the 2-D Laplace equation with periodic boundary conditions.
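A small sketch of the red-black sweep for HW6 (assuming, as above, that f already carries the h^2 factor); the two half-sweeps are embarrassingly parallel, which is the point of the ordering:

```python
import numpy as np

def rb_gs_sweep(x, f):
    """One red-black Gauss-Seidel sweep for tridiag[-1 2 -1] x = f, zero BCs."""
    n = len(x)                                  # interior unknowns x_1..x_n
    xp = np.concatenate(([0.0], x, [0.0]))      # pad boundary values
    for i in range(2, n + 1, 2):                # red (even) points: independent
        xp[i] = 0.5 * (f[i - 1] + xp[i - 1] + xp[i + 1])
    for i in range(1, n + 1, 2):                # black (odd) points: independent
        xp[i] = 0.5 * (f[i - 1] + xp[i - 1] + xp[i + 1])
    return xp[1:-1]
```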

33 Brandt's Smoothing Factor. Let eps be the error before relaxation. From discrete Fourier transform theory, eps can be written as eps_{i,j} = sum_{theta in Theta_n} eps^_theta phi_{i,j}(theta) ---- (i), where theta = (theta_1, theta_2), phi_{i,j}(theta) = exp( iota (i theta_1 + j theta_2) ) with iota = sqrt(-1), eps^_theta = (1/(n+1)^2) sum_{k,l} eps_{k,l} conj(phi_{k,l}(theta)), and Theta_n consists of the discrete frequencies theta_i = 2π k_i / (n+1) with -(n-1)/2 <= k_i <= (n+1)/2 (n odd). Similarly, the error eps' obtained after relaxation can be written as eps'_{i,j} = sum_{theta in Theta_n} eps'^_theta phi_{i,j}(theta) ---- (ii). Writing eps'^_theta = lambda(theta) eps^_theta, Brandt's smoothing factor is defined as rho = sup { |lambda(theta)| : π/2 <= max_{k=1,2} |theta_k| <= π }.

34 Smoothing Factor of the Damped Jacobi Iteration. Recall that eps'_{i,j} = eps_{i,j} - (omega/4)(4 eps_{i,j} - eps_{i+1,j} - eps_{i-1,j} - eps_{i,j+1} - eps_{i,j-1}). Plugging (i) and (ii) into this and matching the coefficients of phi_{i,j}(theta) (note phi_{i±1,j}(theta) = phi_{i,j}(theta) e^{±iota theta_1} and phi_{i,j±1}(theta) = phi_{i,j}(theta) e^{±iota theta_2}), we obtain lambda(theta) = 1 - omega (1 - (cos theta_1 + cos theta_2)/2). It is easy to see that rho = max { |1 - omega/2|, |1 - omega|, |1 - 3 omega/2|, |1 - 2 omega| }, attained at theta = (π/2, 0), (π/2, π/2), (π/2, π), (π, π). The optimal omega that minimizes rho is omega = 4/5, and the smoothing factor is rho = 0.6 for this omega. HW7: Show that the smoothing factor of the Gauss-Seidel iteration is 0.5.
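A quick numerical check (my own sketch) of the formula: sample lambda(theta) over the high-frequency range π/2 <= max|theta_k| <= π and confirm that omega = 4/5 gives rho = 0.6.

```python
import numpy as np

def smoothing_factor(omega, m=401):
    t = np.linspace(-np.pi, np.pi, m)
    T1, T2 = np.meshgrid(t, t)
    lam = np.abs(1 - omega * (1 - 0.5 * (np.cos(T1) + np.cos(T2))))
    high = np.maximum(np.abs(T1), np.abs(T2)) >= np.pi / 2   # oscillatory modes
    return lam[high].max()

for omega in (1.0, 2/3, 0.8):
    print(omega, smoothing_factor(omega))
# omega = 1 gives 1.0 (the (pi,pi) mode is not damped); omega = 0.8 gives 0.6
```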

35 Convergence: How Much Does Multigrid Cost? Convergence factors: stationary methods, 1 - O(kappa^{-1}) ~ 1 - h^2; conjugate gradient, 1 - O(kappa^{-1/2}) ~ 1 - h; multigrid, O(1), independent of h. How much does each MG step cost? Ignoring the cost associated with inter-grid transfers (typically within 10-20%), the computation cost of one MG V-cycle is c n^d (1 + 2^{-d} + 2^{-2d} + ...) = c n^d / (1 - 2^{-d}), where n^d is the total number of points, d is the dimension of the problem, c is the cost of updating a single unknown, and c n^d is the cost per relaxation sweep.

36 Standard MG Can Fail! The original PDE may have poor coercivity or regularity (for example, crack problems, convection-diffusion problems, etc.): relaxation may not smooth the error, and the coarse-grid correction can capture only a small portion of the error, or even make it worse. [Figure: a sketch of relaxation and correction along R(I_H^h) and N(I_h^H A_h), illustrating why MG convergence slows down.] Next, let's consider the following example: -(c(x) u')' = f(x), u(0) = u(1) = 0 ---- (+), where c(x) = eps for 0 <= x <= i_0 h, c(x) = 1 for i_0 h < x <= i_1 h, and c(x) = eps for i_1 h < x <= 1.

37 Discrete matrix of (+) and the damped Jacobi error reduction matrix E_DJ. The rows of A_h are [-eps 2eps -eps] inside the two eps-regions, [-1 2 -1] inside the middle region, and [-eps 1+eps -1] (and its mirror image) at the two interfaces. Correspondingly, E_DJ = I - omega D^{-1} A_h, where the rows of D^{-1} A_h are [-1/2 1 -1/2] away from the interfaces and [-eps/(1+eps) 1 -1/(1+eps)] at the interfaces.

38 For eps -> 0, the eigenvector corresponding to the largest eigenvalue lambda^(0) of E_DJ converges to the vector e^(0), while lambda^(0) -> 1, where e^(0)(x) = x/(i_0 h) for 0 <= x <= i_0 h; 1 for i_0 h < x <= i_1 h; (1 - x)/(1 - i_1 h) for i_1 h < x <= 1. [Figure: the piecewise-linear "tent" profile of e^(0).] Damped Jacobi fails to smooth this error, which is high-frequency at the interfaces! The MG convergence deteriorates as eps -> 0. A remedy is to use operator-dependent interpolation. Constructing such interpolation is not easy, but there is an easier and better way to do it!
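A numerical confirmation (my own sketch; eps, n and omega are arbitrary choices) that the dominant eigenvector of E_DJ approaches the tent-shaped e^(0) with eigenvalue near 1:

```python
import numpy as np

n, eps, omega = 59, 1e-6, 2/3
# midpoint coefficients c_{i+1/2}: eps on the outer thirds, 1 in the middle
idx = np.arange(n + 1)
c = np.where((idx < n // 3) | (idx >= 2 * n // 3), eps, 1.0)
A = np.diag(c[:-1] + c[1:]) - np.diag(c[1:-1], 1) - np.diag(c[1:-1], -1)
E = np.eye(n) - omega * (A / np.diag(A)[:, None])    # E_DJ = I - omega D^{-1} A

w, V = np.linalg.eig(E)
k = np.argmax(np.abs(w))
v = V[:, k].real
print(w[k].real)                                      # close to 1: Jacobi stalls
print(np.round(v / v[np.abs(v).argmax()], 2)[::6])    # samples of the tent profile
```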

39 Algebraic Multigrid. MG: 1. A priori generated coarse grids are needed; the coarse grids are generated from geometric information about the domain. 2. Interpolation operators are defined independently of the coarsening process. 3. The smoother is not always fixed. AMG: 1. A priori generated coarse grids are not needed; the coarse grids are generated by algebraic coarsening from the matrix on the fine grid. 2. Interpolation operators are defined dynamically in the coarsening process. 3. The smoother is fixed. Ideas: fix the smoothing operator; carefully select the coarse grids and define the interpolation weights.

40 AMG Convergence. Smoothing assumption: there is alpha > 0 with ||E_s e||_1^2 <= ||e||_1^2 - alpha ||e||_2^2 for all e in V_h. Approximation assumption: min_{e_H} ||e - I_H^h e_H||_0^2 <= beta ||e||_1^2, where beta is independent of e. Here ||v||_0^2 = (Dv, v), ||v||_1^2 = (Av, v), ||v||_2^2 = (D^{-1}Av, Av) for v in V_h. From the approximation assumption, ||E_c e||_1^2 = (A E_c e, E_c e - I_H^h e_H) <= ||E_c e||_2 ||E_c e - I_H^h e_H||_0 <= sqrt(beta) ||E_c e||_2 ||E_c e||_1, hence ||E_c e||_1 <= sqrt(beta) ||E_c e||_2. Combining with the smoothing assumption, ||E_s E_c e||_1^2 <= ||E_c e||_1^2 - alpha ||E_c e||_2^2 <= (1 - alpha/beta) ||E_c e||_1^2 <= (1 - alpha/beta) ||e||_1^2. AMG works when A is a symmetric positive definite M-matrix. In the following, we assume that A is also weakly diagonally dominant.

41 What Does the Smoothing Assumption Tell Us? Smooth error is characterized by ||E_s e_s||_1 close to ||e_s||_1, so by the smoothing assumption ||e_s||_2 is very small. Since ||e||_1^2 = (Ae, e) = (D^{-1/2}Ae, D^{1/2}e) <= ||e||_2 ||e||_0, it follows that ||e||_1 << ||e||_0. Now (Ae, e) = (1/2) sum_{i,j} (-a_{i,j})(e_i - e_j)^2 + sum_i (sum_j a_{i,j}) e_i^2, so ||e||_1 << ||e||_0 implies sum_{i,j} (-a_{i,j})(e_i - e_j)^2 << sum_i a_{i,i} e_i^2, i.e., on average, (-a_{i,j}/a_{i,i})(e_i - e_j)^2 << e_i^2. Smooth errors vary slowly in the direction of strong connections, i.e., from e_i to e_j where -a_{i,j}/a_{i,i} is large. AMG coarsening should be done in the direction of the strong connections. In the coarsening process, the interpolation weights are computed so that the approximation assumption is satisfied (for details see Ruge and Stüben 1985).

42 What Does the Approximation Assumption Tell Us? The approximation assumption min_{e_H} ||e - I_H^h e_H||_0^2 <= beta ||e||_1^2 holds if, for each fine node i in F, a_{i,i} (e_i - sum_{k in C_i} w_{i,k} e_k)^2 <= beta [ sum_j (-a_{i,j})(e_i - e_j)^2 + (sum_j a_{i,j}) e_i^2 ] ---- (Θ), where w_{i,k} > 0 is the interpolation weight from node k to node i and s_i = sum_{k in C_i} w_{i,k} < 1. Since e_i - sum_{k in C_i} w_{i,k} e_k = sum_{k in C_i} w_{i,k}(e_i - e_k) + (1 - s_i) e_i, the Cauchy-Schwarz inequality gives (e_i - sum_k w_{i,k} e_k)^2 <= sum_k w_{i,k}(e_i - e_k)^2 + (1 - s_i) e_i^2. Hence, for (Θ) to hold, we can simply require 0 <= a_{i,i} w_{i,k} <= beta (-a_{i,k}) for k in C_i and 0 <= a_{i,i}(1 - s_i) <= beta sum_j a_{i,j} ---- (Ξ).

43 Lemma 5: Given beta >= 1, suppose the coarse grid C is selected such that a_{i,i} + sum_{j not in C_i, j != i} a_{i,j} >= (1/beta) a_{i,i} ---- (Φ), where C_i = N_i ∩ C, C is the coarse grid, and N_i is the set of neighbors of the i-th node. Then the approximation assumption holds if the interpolation weights are defined as w_{i,k} = -a_{i,k} / (a_{i,i} + sum_{j not in C_i, j != i} a_{i,j}). Indeed, a_{i,i} w_{i,k} = -a_{i,k} a_{i,i} / (a_{i,i} + sum_{j not in C_i, j != i} a_{i,j}) <= beta (-a_{i,k}) by (Φ), and a_{i,i}(1 - s_i) = a_{i,i} (sum_j a_{i,j}) / (a_{i,i} + sum_{j not in C_i, j != i} a_{i,j}) <= beta sum_j a_{i,j}. Therefore (Ξ) holds, and from the argument on the previous page we can conclude that the approximation assumption holds.

44 The Smoothing Property Holds for GS. Recall E_S = I - B^{-1}A. We have ||E_GS e||_1^2 = (A(I - B^{-1}A)e, (I - B^{-1}A)e) = (Ae, e) - (Ae, B^{-1}Ae) - (AB^{-1}Ae, e) + (AB^{-1}Ae, B^{-1}Ae) = ||e||_1^2 - ((B^T + B - A) B^{-1}Ae, B^{-1}Ae). So the smoothing assumption is equivalent to alpha ||e||_2^2 <= ((B^T + B - A) B^{-1}Ae, B^{-1}Ae) ---- (Θ). Let e~ = B^{-1}Ae. Since ||e||_2^2 = (D^{-1}Ae, Ae) = (D^{-1}B e~, B e~), (Θ) follows from alpha (D^{-1}B e~, B e~) <= ((B^T + B - A) e~, e~) ---- (ΘΘ). Now consider B = D - L. Since B^T + B - A = D, (ΘΘ) becomes alpha (D^{-1}B e~, B e~) <= (D e~, e~); and since (D^{-1}B e~, B e~) = (D^{-1/2} B^T D^{-1} B D^{-1/2} D^{1/2} e~, D^{1/2} e~) <= rho(D^{-1} B^T D^{-1} B) (D e~, e~), it suffices that alpha <= 1 / rho(D^{-1} B^T D^{-1} B) ---- (ΘΘΘ).

45 Therefore, the smoothing assumption holds for the Gauss-Seidel iteration. If A is a weakly diagonally dominant M-matrix, we can estimate alpha as follows: since rho(D^{-1} B^T D^{-1} B) <= rho(I - D^{-1}L^T) rho(I - D^{-1}L) <= (1 + rho(D^{-1}L^T))(1 + rho(D^{-1}L)) and rho(D^{-1}L) <= max_{1<=i<=n} sum_{j != i} |a_{i,j}| / a_{i,i} <= 1, we clearly have rho(D^{-1} B^T D^{-1} B) <= 4. Therefore, the Gauss-Seidel iteration satisfies the smoothing property with alpha = 1/4. The coarse-grid operator is taken as the Galerkin product A_H = (I_H^h)^T A_h I_H^h, with the coarse grid and weights chosen to satisfy (Ξ) and (Φ).

46 AMG Coarsening Criteria. First, let's define the following sets: N_i^S = { j : -a_{i,j} >= gamma max_{m != i} (-a_{i,m}) }, 0 < gamma < 1, and (N_i^S)^T = { j : i in N_j^S }. Here N_i^S is the set of nodes that node i strongly connects to, and (N_i^S)^T is the set of nodes strongly connected to node i. The C_i-nodes should be chosen from N_i^S. From the convergence result, we want beta close to 1; this suggests we need a larger set C_i (we need to choose a small gamma). We don't want C to be empty, but we want C as small as possible. Criterion 1: for each node i in F, each node j in N_i^S should be either in C or strongly connected to at least one node in C_i. Criterion 2: C should be a maximal subset of all nodes with the property that no two C-nodes are strongly connected to each other. A sketch of a greedy coarsening pass follows.
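A minimal greedy pass (my own sketch) in the spirit of Criteria 1-2; the full Ruge-Stüben algorithm adds a second pass that enforces Criterion 1 more carefully:

```python
import numpy as np

def strong_connections(A, gamma=0.25):
    """S[i] = set of j != i with -a_ij >= gamma * max_m(-a_im)."""
    n = len(A)
    S = []
    for i in range(n):
        off = -A[i].copy(); off[i] = 0.0
        thresh = gamma * off.max()
        S.append({j for j in range(n) if j != i and off[j] >= thresh > 0})
    return S

def coarsen(A, gamma=0.25):
    n = len(A)
    S = strong_connections(A, gamma)
    ST = [{i for i in range(n) if j in S[i]} for j in range(n)]  # (N_j^S)^T
    weight = np.array([len(t) for t in ST], dtype=float)  # influence measure
    C, F, undecided = set(), set(), set(range(n))
    while undecided:
        i = max(undecided, key=lambda k: weight[k])       # most influential -> C
        C.add(i); undecided.discard(i)
        for j in ST[i] & undecided:                       # its dependents -> F
            F.add(j); undecided.discard(j)
            for k in S[j] & undecided:                    # make their helpers likelier C
                weight[k] += 1
    return sorted(C), sorted(F)

# 1-D Laplacian: every other point becomes a C-point, as expected.
A = 2 * np.eye(9) - np.eye(9, k=1) - np.eye(9, k=-1)
print(coarsen(A))
```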

47 AMG Coarsening (I). [Figure: coarsening of a model stencil, showing the sets N_i^S and (N_i^S)^T, C-points and fine points.]

48 AMG Coarsening (II)

49 AMG Coarsening: Example 1. The Laplace operator from a Galerkin FEM discretization. [Figure: coarsening pattern.] A very good MG and AMG tutorial resource is by Van Emden Henson.

50 AMG Coarsening: Example 2. Convection-diffusion with characteristic and downstream layers: -eps Δu + u_y = 0, with u = 1 on the boundary parts where (y = -1 and x > 0) or x = 1, and u = 0 otherwise, on Ω = [-1,1] x [-1,1]. [Figures: solution from the Galerkin discretization on a 32x32 grid; solution from the SDFEM discretization on a 32x32 grid.]

51 SDFEM discretization with delta_T = h yields a nine-point stencil whose entries are built from eps/3, eps/6, h/3, and h/6. [Stencil figure.] AMG coarsening with strong-connection parameter eps/h << beta << 0.5. [Figures: C-points; coarse grids from GMG coarsening and from AMG coarsening.]

52 Example 2: GMG vs. AMG. [Table: iteration counts per level for GMG and AMG, for eps = 10^-2, 10^-3, 10^-4, on the uniform mesh and on the adaptive mesh.]

53 Nonlinear Multigrid. Nonlinear problems: L(u) = f. After discretization, one needs to solve A_h(u_h) = f_h, i.e., the system a_i(u_1, u_2, ..., u_n) = f_i for i = 1, 2, ..., n. A Newton-type update reads u_j <- u_j + ( [D_u A_h(u)]^{-1} (f - A_h(u)) )_j.

54 Nonlinear Relaxation. Jacobi: solve a_i(u_1^old, ..., u_{i-1}^old, u_i^new, u_{i+1}^old, ..., u_n^old) = f_i for all i = 1, 2, ..., n. Gauss-Seidel: solve a_i(u_1^new, ..., u_{i-1}^new, u_i^new, u_{i+1}^old, ..., u_n^old) = f_i for all i = 1, 2, ..., n. The local nonlinear problems are solved iteratively. Example (nonlinear Gauss-Seidel): discretize, then apply a Newton iteration for each j, starting from j = 1.

55 Nonlinear defect correction. In the linear case: r_h^(n) = A_h u_h - A_h u_h^(n) = A_h(u_h - u_h^(n)). In the nonlinear case: r_h^(n) = A_h(u_h) - A_h(u_h^(n)) != A_h(u_h - u_h^(n)), so solving A_H e_H = I_h^H r_h^(n) does not give an approximation to e_h = u_h - u_h^(n). Instead, consider e_h = u_h - u_h^(n), where u_h satisfies r_h^(n) = A_h(u_h) - A_h(u_h^(n)), and e_H = u_H - I_h^H u_h^(n), where u_H satisfies I_h^H r_h^(n) = A_H(u_H) - A_H(I_h^H u_h^(n)). Observe that u_h^(n) close to u_h implies u_H close to I_h^H u_h, hence e_H close to I_h^H e_h. (Here I_h^H can simply be an injection.) From this point of view, e_H is a reasonable approximation of e_h. Now we can write down the FAS algorithm. FAS: 1. Nonlinear relaxation. 2. Restrict: r_H = I_h^H r_h^(n) and v = I_h^H u_h^(n). 3. Solve A_H(u_H) = r_H + A_H(v). 4. Compute e_H = u_H - v. 5. Update u_h^(n) <- u_h^(n) + I_H^h e_H. A two-grid sketch follows the next slide's example.
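A runnable two-grid FAS sketch (my own illustration: the 1-D analogue -u'' + gamma u e^u = f with zero boundary values; injection for u, full weighting for r, and a Newton coarse solve are my choices):

```python
import numpy as np

gamma = 10.0

def Ah(u, h):                                   # nonlinear operator A_h(u)
    up = np.concatenate(([0.0], u, [0.0]))
    return (2*up[1:-1] - up[:-2] - up[2:]) / h**2 + gamma * u * np.exp(u)

def ngs(u, g, h, sweeps=3):                     # nonlinear Gauss-Seidel smoother
    up = np.concatenate(([0.0], u, [0.0]))
    for _ in range(sweeps):
        for i in range(1, len(up) - 1):
            r = (2*up[i] - up[i-1] - up[i+1]) / h**2 \
                + gamma * up[i] * np.exp(up[i]) - g[i-1]
            up[i] -= r / (2 / h**2 + gamma * (1 + up[i]) * np.exp(up[i]))
    return up[1:-1]

def coarse_solve(v, g, h, iters=20):            # Newton on the coarse problem
    T = (2*np.eye(len(v)) - np.eye(len(v), k=1) - np.eye(len(v), k=-1)) / h**2
    for _ in range(iters):
        J = T + np.diag(gamma * (1 + v) * np.exp(v))
        v = v - np.linalg.solve(J, Ah(v, h) - g)
    return v

def fas(u, f, h):
    u = ngs(u, f, h)                            # 1. nonlinear relaxation
    r = f - Ah(u, h)
    v = u[1::2].copy()                          # 2. v = I_h^H u (injection)
    rH = 0.25 * (r[0:-2:2] + 2*r[1:-1:2] + r[2::2])    #    r_H (full weighting)
    uH = coarse_solve(v.copy(), rH + Ah(v, 2*h), 2*h)  # 3. A_H(u_H) = r_H + A_H(v)
    eH = uH - v                                 # 4. e_H = u_H - v
    e = np.zeros_like(u)                        # 5. prolongate e_H and update
    e[1::2] = eH
    e[2:-1:2] = 0.5 * (eH[:-1] + eH[1:])
    e[0], e[-1] = 0.5 * eH[0], 0.5 * eH[-1]
    return ngs(u + e, f, h)

n, h = 63, 1.0 / 64
x = np.linspace(h, 1 - h, n)
u_exact = (x - x**2) * np.sin(3 * np.pi * x)    # borrow the 2-D slide's profile
f = Ah(u_exact, h)
u = np.zeros(n)
for cycle in range(8):
    u = fas(u, f, h)
print(abs(u - u_exact).max())                   # converges to the discrete solution
```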

56 Example: -Δu(x,y) + gamma u(x,y) e^{u(x,y)} = f(x,y) in [0,1] x [0,1], with exact solution u(x,y) = (x - x^2) sin(3π y). Discretization: finite differences. Interpolation and restriction: linear interpolation and full weighting (m_p = m_r = 2). Relaxation: nonlinear Gauss-Seidel, u_{i,j} <- u_{i,j} - [ (4u_{i,j} - u_{i-1,j} - u_{i+1,j} - u_{i,j-1} - u_{i,j+1}) / h^2 + gamma u_{i,j} e^{u_{i,j}} - f_{i,j} ] / [ 4/h^2 + gamma (1 + u_{i,j}) e^{u_{i,j}} ], sweeping from (i,j) = (1,1), (1,2), ..., (2,1), (2,2), ..., (n,n).
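The update above, written out as a sweep (a minimal sketch; the zero-boundary padding convention is mine):

```python
import numpy as np

def nonlinear_gs_sweep(u, f, h, gamma):
    """One lexicographic nonlinear GS sweep for -Lap(u) + gamma*u*exp(u) = f."""
    n = u.shape[0]                       # u: n x n interior values, zero boundary
    up = np.pad(u, 1)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            num = (4*up[i,j] - up[i-1,j] - up[i+1,j]
                   - up[i,j-1] - up[i,j+1]) / h**2 \
                  + gamma * up[i,j] * np.exp(up[i,j]) - f[i-1,j-1]
            den = 4 / h**2 + gamma * (1 + up[i,j]) * np.exp(up[i,j])
            up[i,j] -= num / den         # one scalar Newton step per point
    return up[1:-1, 1:-1]
```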

57 An example taken from A Multigrid Tutorial (Briggs). Which is better, Newton-MG or FAS? Hard to say in general, but FAS is popular in CFD.

58 Multigrid Parallelization. Parallelization: using multiple computers to do the job! What needs to be done? [Figure: processors P with local memories M connected by a communication network.] Multigrid is a scalable algorithm! (Jim E. Jones, CASC, Lawrence Livermore National Laboratory)

59 Domain Decomposition: FEM assembly in the subdomains D_G and D_B can be done simultaneously. The matrix-vector product Ax can be computed independently in each subdomain; the interface values x_G and x_B are then exchanged between the subdomains. Jacobi and red-black Gauss-Seidel relaxations can be done in parallel. Grid coarsening and refinement can be done in parallel (not quite easy: one needs to keep tracking the grid topology). Interpolations can be parallelized too.

60 Scalability. T(N, P): time to solve a problem with N unknowns on P processors. Speedup: S(N, P) = T(N, 1)/T(N, P); perfect if S(N, P) = P. Scaled efficiency: E(N, P) = T(N, 1)/T(NP, P); perfect if E(N, P) = 1. Assume a 2-D problem of size (pn)^2 is distributed to p^2 processors, so the number of unknowns in each processor is N = n^2. Let f be the time for one floating-point operation and T_comm(n) = alpha + beta n the communication time for transmitting n doubles to one processor, where alpha is the startup time and beta the time to transfer a single double. With 5-point stencils, a relaxation sweep on grid level k costs T_k = 4 T_comm(sqrt(N_k)) + 5 N_k f; with two sweeps per level, the time for a V-cycle is T_v = sum_k T_k = 8 alpha L + 16 beta sqrt(N) + (40/3) N f, where L is the number of levels.

61 Since MG has an O(1) convergence rate, we can analyze the scaled efficiency as follows: T_v(N, 1) = (40/3) N f, while T_v((pn)^2, p^2) = 8 alpha log_2(pn) + 16 beta sqrt(N) + (40/3) N f. Hence E(N, P) -> O(1) as N -> infinity, and E(N, P) = O(1/log p) as p -> infinity. We need to be careful: on the IBM SP, alpha and beta are much larger than f. Communication is expensive!
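A back-of-the-envelope evaluation of the model (my sketch; alpha, beta and f below are invented machine constants, not the IBM SP values):

```python
import numpy as np

alpha, beta, f = 5e-6, 8e-9, 1e-9     # invented: startup, per-double, per-flop times

def T_v(N, p):
    """Model V-cycle time: N unknowns per processor, p^2 processors (2-D)."""
    comp = (40.0 / 3.0) * N * f
    if p == 1:
        return comp                    # no communication on one processor
    levels = int(np.log2(p * np.sqrt(N)))
    return 8 * alpha * levels + 16 * beta * np.sqrt(N) + comp

N = 256**2                             # unknowns per processor held fixed
for p in (2, 4, 16, 64, 256):
    print(p, T_v(N, 1) / T_v(N, p))    # scaled efficiency decays only like 1/log p
```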
