Geometric Multigrid Methods for the Helmholtz equations


1 Geometric Multigrid Methods for the Helmholtz equations. Ira Livshits, Ball State University. RICAM, Linz, 4, 6 November 20. Ira Livshits (BSU) - November 20, Linz, 1 / 83

2 Multigrid Methods. Aim: to understand geometric multigrid for the Helmholtz equation (difficulties and some solutions) and to give a deeper understanding of the reasoning that led to the various choices made. Contents: basics of multigrid; basics of local Fourier analysis; nonlinear multigrid (FAS); the Helmholtz equation.

3 Helmholtz equation: Lu = −Δu(x) − k²u(x), x ∈ Ω ⊂ R^d, with large wave numbers k: 2π/k ≪ diam(Ω).

4 A Comment on Terminology. The indefinite Helmholtz equation Lu = (−Δ − k²)u = f stands in contrast to (−Δ + η)u = f, η > 0, which is often also called the Helmholtz equation, the Helmholtz equation with the good sign, or the positive definite Helmholtz equation. The subject today is the indefinite Helmholtz equation.

5 The 2D discrete Poisson equation in the unit square. We consider a stationary diffusion equation with diffusion in two coordinate directions: −∂²u/∂x² − ∂²u/∂y² = f(x, y) for (x, y) ∈ Ω = (0, 1)², with boundary conditions u(x, y) = g(x, y) for (x, y) ∈ ∂Ω. The functions f ∈ C(Ω) and g ∈ C(∂Ω) are known. The Laplace operator is commonly written as Δ := ∂²/∂x² + ∂²/∂y².

6 With mesh size h = 1/n, a grid is defined as Ω̄_h = {(x_i, y_j) = (ih, jh) ∈ R² : i, j = 0, 1, ..., n}. In analogy to the 1D case we approximate the partial derivatives; for the second derivative in the y-direction we obtain ∂²u/∂y²|_{i,j} ≈ (u_{i,j−1} − 2u_{i,j} + u_{i,j+1})/h_y². The accuracy in the x- and y-directions is O(h_x²), O(h_y²), or, with h_x = h_y = h, O(h²).

7 Gauss-Seidel and Jacobi for the Poisson equation. The discrete Laplace operator reads Δ_h u_h = h⁻²[u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1} − 4u_{i,j}], or Δ_h u_h(x, y) = h⁻²[u_h(x−h, y) + u_h(x+h, y) + u_h(x, y−h) + u_h(x, y+h) − 4u_h(x, y)]. Jacobi iteration reads u_h^{m+1}(x_i, y_j) = ¼[h² f_h(x_i, y_j) + u_h^m(x_i−h, y_j) + u_h^m(x_i+h, y_j) + u_h^m(x_i, y_j−h) + u_h^m(x_i, y_j+h)] (with (x_i, y_j) ∈ Ω_h). Gauss-Seidel iteration reads u_h^{m+1}(x_i, y_j) = ¼[h² f_h(x_i, y_j) + u_h^{m+1}(x_i−h, y_j) + u_h^m(x_i+h, y_j) + u_h^{m+1}(x_i, y_j−h) + u_h^m(x_i, y_j+h)].
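As a minimal sketch (the function name and grid layout are mine, not from the slides), the lexicographic Gauss-Seidel update above reads:

```python
import numpy as np

def gauss_seidel_sweep(u, f, h):
    """One lexicographic Gauss-Seidel sweep for the 5-point discretization
    of -Laplace(u) = f; boundary entries of u are held fixed."""
    n = u.shape[0] - 1
    for i in range(1, n):
        for j in range(1, n):
            # already-updated neighbors (i-1, j) and (i, j-1) are the "new" values
            u[i, j] = 0.25 * (h * h * f[i, j]
                              + u[i - 1, j] + u[i + 1, j]
                              + u[i, j - 1] + u[i, j + 1])
    return u
```

Jacobi is the same formula with all four neighbors taken from the previous iterate.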

8 Linear systems of equations in matrix form. For the solution of a linear system of equations Au = b, an iterative method reads u^{m+1} = Qu^m + s (m = 0, 1, 2, ...), chosen so that the fixed-point system u = Qu + s is equivalent to the original problem. Matrix Q is usually not computed explicitly!

9 From the matrix equation Au = b we construct a splitting A = B + (A − B), where B is easily invertible. With a general nonsingular N × N matrix B we obtain such iteration procedures from the equation Bu + (A − B)u = b: setting Bu^{m+1} + (A − B)u^m = b, or, rewritten for u^{m+1}, u^{m+1} = u^m − B⁻¹(Au^m − b) = (I − B⁻¹A)u^m + B⁻¹b. We will see that it is important to have the eigenvalues of Q = I − B⁻¹A as small as possible; this is the more likely, the better B resembles A.

10 Linear systems of equations in matrix form. Definition: An iterative process u^{m+1} = Qu^m + s is called consistent if the matrix I − Q is nonsingular and (I − Q)⁻¹s = A⁻¹b. If the iteration is consistent, the equation (I − Q)u = s has exactly one solution u*, namely the solution of Au = b. Definition: An iteration is called convergent if the sequence u⁰, u¹, u², ... converges to a limit independent of u⁰.

11 The spectral radius. Definition: The spectral radius ρ(Q) of an N × N matrix Q is ρ(Q) := max{|λ| : λ eigenvalue of Q}. Lemma: The spectral radius is the infimum over all (vector-norm induced) matrix norms: ρ(Q) = inf{‖Q‖}. Theorem: The iterative method u^{m+1} = Qu^m + s, m = 0, 1, 2, ..., yields a convergent sequence {u^m} for arbitrary u⁰, s ("the method converges") if ρ(Q) < 1. In the case ρ(Q) < 1, the matrix I − Q is regular, and for each s there is exactly one fixed point of the iterative scheme.
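A quick numerical illustration of the theorem, assuming the Jacobi splitting B = diag(A) for the 1D model Laplacian (a sketch; all names are mine):

```python
import numpy as np

def jacobi_iteration_matrix(A):
    """Q = I - B^{-1} A for the Jacobi splitting B = diag(A)."""
    return np.eye(A.shape[0]) - A / np.diag(A)[:, None]

def spectral_radius(Q):
    return max(abs(np.linalg.eigvals(Q)))

# 1D Laplacian with N interior points: eigenvalues of Q are cos(j*pi/(N+1))
N = 10
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
rho = spectral_radius(jacobi_iteration_matrix(A))
```

For this A, ρ(Q) = cos(π/(N+1)) < 1, so the iteration converges, but ever more slowly as N grows.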

12 Error reduction. General splitting of A (another way of writing): Bu^{m+1} + (A − B)u^m = b and Au* = Bu* + (A − B)u* give B(u^{m+1} − u*) = (B − A)(u^m − u*), so u^{m+1} − u* = Q(u^m − u*). By induction we obtain u^m − u* = Q^m(u⁰ − u*). Assume ρ(Q) < 1; then the error tends to zero.

13 Efficient solution methods for discrete equations: hierarchy. One-level methods, like the Jacobi and Gauss-Seidel iterations, are far from optimal: O(N^α) with α > 1. O(N) complexity requires hierarchical approaches; multigrid is the classical hierarchical approach. Approximation on different scales: compute only those parts of a solution on fine scales that really require the fine resolution; compute/correct the remaining parts on coarser scales. This gives O(N) complexity, the advantages increase with increasing problem size, and the idea behind multigrid is very general.

14 Relaxation of −Δu = f: (a) random initial error; (b) error after 10 GS iterations; (c) error after 10 more GS iterations.

15 Convergence factor vs smoothing factor. The convergence factor (GS, Jacobi, weighted Jacobi, SOR) is ρ = 1 − O(h²). The smoothing factor is the convergence factor for the oscillatory components only, and it is good. The remaining error components are smooth.

16 Motivation to use coarse grids. The error remaining after relaxation (aka smoothing) has a good coarse-grid representation.

17 Detour to local Fourier analysis, with a hint of trouble brewing for the Helmholtz equation. The basic idea of any multigrid algorithm is to annihilate the high-frequency error components efficiently on a fine grid by relaxation (smoothing), while the low frequencies are approximated from coarser grids. Smoothing analysis is used to analyze quantitatively this reduction of the high-frequency error components on the fine grid. Assuming the ideal situation, in which relaxation does not affect the low frequencies, all low frequencies are approximated more or less exactly on the coarse grid, and there is no interaction between high- and low-frequency error components, this already leads to an estimate, or prediction, of the two-level convergence factor.

18 Assumptions. For a discretized PDE L_h U_h = F_h, the basic assumptions of any local Fourier analysis are: linearize the operator L_h and/or freeze coefficients to local values; neglect boundaries and boundary conditions; consider the frozen, linearized operator on an infinite grid G_h; apply linear constant-coefficient local Fourier analysis.

19 Components, the 1D case. Infinite uniform mesh with mesh size h: G_h = {kh : k ∈ Z}. 1D Fourier components: e^{iωx} = cos ωx + i sin ωx. On G_h only the Fourier components e^{iθx/h} with θ ∈ (−π, π] are visible. Definition: A component e^{iθx/h} is visible on G_h if there is no frequency θ₀ with |θ₀| < |θ| such that e^{iθ₀x/h} = e^{iθx/h} for all x ∈ G_h. A visible component e^{iθx/h} does not coincide with any lower frequency on the grid G_h. Here θ is the frequency and 2πh/|θ| the period.

20 Components, the 2D case. Infinite uniform mesh with mesh sizes h = (h_x, h_y) in the x- and y-directions: G_h = {(kh_x, lh_y) : k, l ∈ Z}; θ = (θ₁, θ₂) is the frequency. On G_h, only the Fourier components e^{iθ·x/h} := e^{iθ₁x/h_x} e^{iθ₂y/h_y} with |θ| := max(|θ₁|, |θ₂|) ≤ π are visible. For higher frequencies |θ| > π, the Fourier components coincide with a lower frequency in (−π, π]. For higher dimensions: analogously.

21 Difference operators. Let G_h be an infinite uniform mesh with mesh size h; u_h a grid function on G_h; I = {−ν, ..., 0, ..., ν} a set of indices. Difference operators L_h in 2D, x = (x, y) ∈ G_h: L_h u_h(x) = Σ_{μ₁∈I} Σ_{μ₂∈I} a_{μ₁μ₂} u_h(x + μ₁h_x, y + μ₂h_y).

22 The stencil notation, examples. 2D finite differences for Δu, I = {−1, 0, 1}: L_h u_h(x) = [a_{−1,1} a_{0,1} a_{1,1}; a_{−1,0} a_{0,0} a_{1,0}; a_{−1,−1} a_{0,−1} a_{1,−1}] u_h(x). For the Laplacian this is the 5-point stencil Δ_h = h⁻² [0 1 0; 1 −4 1; 0 1 0]. For the biharmonic operator Δ², with I = {−2, −1, 0, 1, 2}, Δ_h Δ_h u_h(x) = h⁻⁴ [0 0 1 0 0; 0 2 −8 2 0; 1 −8 20 −8 1; 0 2 −8 2 0; 0 0 1 0 0] u_h(x), a 13-point stencil. Here x = (x, y) ∈ G_h and h_x = h_y = h.

23 Motivation to consider the biharmonic operator: Kaczmarz relaxation. Kaczmarz relaxation is equivalent to GS applied to AAᵀ. Implementation: consider the i-th equation A_i x = b_i and a current approximation x, where A_i is the i-th row of A. A new approximation is given by x ← x + r_i A_i, where the normalized residual is computed as r_i = (b_i − A_i x)/(A_i A_iᵀ). Kaczmarz relaxation always converges (indeed, AAᵀ is always SPD), but it is often slow (much slower than GS on the original matrix) and thus should be used only as a last resort (µ = 0.8 for the Laplacian vs GS's µ = 0.25).
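A minimal sketch of one Kaczmarz sweep (my own naming; the projection form matches the update above):

```python
import numpy as np

def kaczmarz_sweep(A, x, b):
    """One Kaczmarz sweep: successively project x onto each hyperplane
    A_i x = b_i (equivalent to Gauss-Seidel applied to A A^T)."""
    for i in range(A.shape[0]):
        a = A[i]
        # normalized residual of equation i, then correct along row a = A_i
        x += ((b[i] - a @ x) / (a @ a)) * a
    return x
```

Convergence is unconditional for a consistent system, but, as the slide warns, typically much slower than GS on A itself.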

24 Difference operators applied to Fourier components. On an infinite grid G_h, the Fourier components e^{iθx/h} are eigenfunctions of any scalar difference operator L_h with constant coefficients. 1D: L_h e^{iθx/h} = (Σ_{μ∈I} a_μ e^{iθμ}) e^{iθx/h} =: L̂_h(θ) e^{iθx/h}. 2D: L_h e^{iθ·x/h} = (Σ_{μ₁∈I} Σ_{μ₂∈I} a_{μ₁μ₂} e^{iθ₁μ₁} e^{iθ₂μ₂}) e^{iθ·x/h} =: L̂_h(θ) e^{iθ·x/h}. L̂_h(θ) is the eigenvalue, also called the Fourier symbol of L_h.

25 Examples. 1D: L_h = h⁻² [1 −2 1] gives L_h e^{iθx/h} = h⁻²(e^{−iθ} − 2 + e^{iθ}) e^{iθx/h} = (2/h²)(cos θ − 1) e^{iθx/h}, so L̂_h(θ) = (2/h²)(cos θ − 1). 2D, θ = (θ₁, θ₂): L_h = Δ_h = h⁻² [0 1 0; 1 −4 1; 0 1 0] gives L̂_h(θ) = (2/h²)(cos θ₁ + cos θ₂ − 2). For the biharmonic operator L_h = Δ_h Δ_h (13-point stencil): L̂_h(θ) = (4/h⁴)(cos θ₁ + cos θ₂ − 2)².

26 Smoothing analysis; a general class of relaxations. Consider a linear problem L_h U_h(x) = F_h on the grid G_h. Many (not all!) relaxation schemes for this problem are based on an operator splitting L_h = A_h + B_h, where A_h and B_h are again difference operators. A relaxation sweep starting from the initial approximation u_h ("old values") produces a new approximation ū_h ("new values") by solving A_h u_h(x) + B_h ū_h(x) = F_h(x) at each grid point x ∈ G_h.

27 Smoothing analysis; a general class of relaxations. What does that mean for the errors before and after the relaxation sweep? Let e_h = U_h − u_h be the error before and ē_h = U_h − ū_h the error after the sweep. Then A_h e_h(x) + B_h ē_h(x) = 0 at each grid point x ∈ G_h. How do these relaxations act on Fourier components? Let e_h = A e^{iθx/h} be the error before relaxation; then ē_h(x) = Ā e^{iθx/h} is the error after relaxation. A, Ā ∈ C are the error amplitudes before and after relaxation.

28 Smoothing analysis; a general class of relaxations. From A_h e_h(x) + B_h ē_h(x) = 0 follows Â_h(θ)A + B̂_h(θ)Ā = 0 and therefore Ā = −(Â_h(θ)/B̂_h(θ)) A, where Â_h(θ), B̂_h(θ) ∈ C are the Fourier symbols of A_h, B_h. µ(θ) := |Â_h(θ)/B̂_h(θ)| is the amplification factor of the component θ.

29 1D Example: Gauss-Seidel relaxation. Corresponding splitting of L_h: h⁻²[1 −2 1] = h⁻²[0 0 1] + h⁻²[1 −2 0], i.e., L_h = A_h + B_h with A_h = h⁻²[0 0 1] (old values) and B_h = h⁻²[1 −2 0] (new values). Then at a grid point x ∈ G_h, for the errors: Ā e^{iθ(x−h)/h} − 2Ā e^{iθx/h} + A e^{iθ(x+h)/h} = 0, i.e., (e^{−iθ} − 2) Ā + e^{iθ} A = 0, where e^{−iθ} − 2 = B̂_h(θ) and e^{iθ} = Â_h(θ). This yields Ā = −(e^{iθ}/(e^{−iθ} − 2)) A.
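The resulting amplification factor |Ā|/|A| = |e^{iθ}/(e^{−iθ} − 2)| is easy to evaluate numerically (a sketch; the function name is mine):

```python
import numpy as np

def mu_gs_1d(theta):
    """Amplification factor of lexicographic GS for the 1D stencil [1 -2 1]:
    mu(theta) = |e^{i theta} / (e^{-i theta} - 2)|."""
    return abs(np.exp(1j * theta) / (np.exp(-1j * theta) - 2.0))
```

Maximizing over the high frequencies π/2 ≤ θ ≤ π gives 1/√5 ≈ 0.447, the familiar 1D GS smoothing factor, while µ(θ) → 1 as θ → 0.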

30 Questions. What is smoothing? How can the smoothing property of a given relaxation scheme be measured quantitatively? How does the smoothing property (or its quantitative measure) influence the performance of multigrid cycling? Can the performance of a multigrid cycle be predicted in terms of the smoothing measures? Note: in a multigrid cycle, relaxations are used to smooth the highly oscillatory error components, which cannot be approximated on coarser grids. Therefore, smoothing roughly means convergence of high-frequency error components.

31 What are high and low frequencies? Let G_h be a fine grid and G_H a coarse grid. A Fourier component e^{iθx/h} on the grid G_h is called a high frequency (with respect to G_H) if its restriction (injection) to the coarse grid G_H is not visible there. Otherwise it is called a low frequency. Take one-dimensional standard coarsening: G_h = {kh : k ∈ Z}, G_H := G_{2h} = {2kh : k ∈ Z}. Let |θ| ≤ π; then by injection from G_h to G_{2h}: e^{iθx/h} = e^{i2θx/2h}. This component e^{i2θx/2h} is visible on G_{2h} only if |2θ| ≤ π, i.e., |θ| ≤ π/2.

32 Hence the high frequencies on G_h (with respect to G_{2h}) are those with π/2 ≤ |θ| ≤ π. Fourier components in 2D: e^{iθ·x/h} = e^{iθ₁x/h} e^{iθ₂y/h}, θ = (θ₁, θ₂) with |θ| := max(|θ₁|, |θ₂|) ≤ π. Standard coarsening: G_h = {(kh, lh) : k, l ∈ Z}, G_H = G_{2h} = {(2kh, 2lh) : k, l ∈ Z}. High frequencies: π/2 ≤ |θ| ≤ π. Low frequencies: |θ| < π/2. Note: the definition of high and low frequencies depends on the coarsening.

33 What is smoothing? Smoothing stands for convergence of the high-frequency error components, which cannot be approximated from the coarse grids in a multigrid cycle. For a relaxation scheme with amplification factors µ(θ) of the Fourier components e^{iθx/h}, the smoothing rate µ is defined by µ := max{|µ(θ)| : θ high frequency}. Note: this definition of µ depends on the coarsening. The amplification (or reduction) factors µ(θ) are also called smoothing factors.

34 Example: Gauss-Seidel relaxation for Poisson's equation Δ_h U_h = F_h, Δ_h = h⁻² [0 1 0; 1 −4 1; 0 1 0]. Relaxation in error terms: A_h e_h(x) + B_h ē_h(x) = 0, with the new-value part B_h = h⁻² [0 0 0; 1 −4 0; 0 1 0] and the old-value part A_h = h⁻² [0 1 0; 0 0 1; 0 0 0]. Symbols, θ = (θ₁, θ₂): B_h e^{iθ·x/h} = h⁻²(e^{−iθ₁} + e^{−iθ₂} − 4) e^{iθ·x/h}, so B̂_h(θ) = h⁻²(e^{−iθ₁} + e^{−iθ₂} − 4); A_h e^{iθ·x/h} = h⁻²(e^{iθ₁} + e^{iθ₂}) e^{iθ·x/h}, so Â_h(θ) = h⁻²(e^{iθ₁} + e^{iθ₂}).

35 Amplification factors: µ(θ) = |Â_h(θ)/B̂_h(θ)| = |e^{iθ₁} + e^{iθ₂}| / |e^{−iθ₁} + e^{−iθ₂} − 4|. Smoothing rate: µ = max{µ(θ) : π/2 ≤ |θ| ≤ π} = 0.5. The high-frequency error components are reduced by a factor of (at least) µ per relaxation sweep. Note: for low frequencies, the reduction per relaxation sweep is much worse: µ(θ) → 1 as θ → 0.
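The quoted value µ = 0.5 can be reproduced by brute force over the high-frequency region (a sketch; the names and grid resolution are mine):

```python
import numpy as np

def mu_gs_2d(t1, t2):
    """GS amplification factor for the 5-point Laplacian."""
    num = np.exp(1j * t1) + np.exp(1j * t2)
    den = np.exp(-1j * t1) + np.exp(-1j * t2) - 4.0
    return np.abs(num / den)

t = np.linspace(-np.pi, np.pi, 401)
T1, T2 = np.meshgrid(t, t)
high = np.maximum(np.abs(T1), np.abs(T2)) >= np.pi / 2 - 1e-12  # high frequencies
mu = mu_gs_2d(T1, T2)[high].max()       # classical result: mu = 0.5
```

The maximum is attained at θ = (π/2, arccos(4/5)), a standard local Fourier analysis result.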

36 Now to Helmholtz: the stencils and the relaxations. 1D finite differences for Lu = u'' + k²u: L_h = h⁻² [1, −2 + (kh)², 1]. 2D finite differences for Lu = Δu + k²u: L_h = h⁻² [0 1 0; 1, −4 + (kh)², 1; 0 1 0], a 5-point stencil. Here x = (x, y) ∈ G_h, h_x = h_y = h.

37 Fourier symbols. 1D: L_h = h⁻²[1, −2 + k²h², 1] gives L_h e^{iθx/h} = h⁻²(2cos θ − 2 + k²h²) e^{iθx/h}, so L̂_h(θ) = h⁻²(2cos θ − 2 + k²h²). 2D, θ = (θ₁, θ₂): L̂_h(θ) = h⁻²(2cos θ₁ + 2cos θ₂ − 4 + k²h²). For the biharmonic-type normal operator (think Kaczmarz!): L̂_h(θ) = h⁻⁴(2cos θ₁ + 2cos θ₂ − 4 + k²h²)².

38 1D Example: Gauss-Seidel relaxation of h⁻²[1, −2 + k²h², 1] U_h(x) = F_h(x). Relaxation: ū_h(x−h) − (2 − k²h²) ū_h(x) + u_h(x+h) = h² F_h(x). Corresponding splitting of L_h: h⁻²[1, −2 + k²h², 1] = h⁻²[0 0 1] + h⁻²[1, −2 + k²h², 0], with A_h = h⁻²[0 0 1] (old) and B_h = h⁻²[1, −2 + k²h², 0] (new). Then at a grid point x ∈ G_h, for the errors: Ā e^{iθ(x−h)/h} − (2 − k²h²) Ā e^{iθx/h} + A e^{iθ(x+h)/h} = 0.

39 This gives (e^{−iθ} − 2 + k²h²) Ā + e^{iθ} A = 0, where e^{−iθ} − 2 + k²h² = B̂_h(θ) and e^{iθ} = Â_h(θ), which yields Ā = −(e^{iθ}/(e^{−iθ} − (2 − k²h²))) A. What is the difference compared to Laplace? What happens when θ = 0? What happens when |θ| is between π/2 and π? What happens when θ ≈ ±kh? What happens when kh grows, i.e., k is fixed and h is coarsened?

40 Answers. If kh ≪ 1, i.e., exp(±ikx) are smooth: other smooth components θ ≈ 0 remain almost unchanged by relaxation; for θ = ±kh there is a slight divergence; oscillatory components π/2 < |θ| ≤ π converge almost as fast as for the Laplacian (convergence does depend on kh); if kh = O(1), the divergence worsens! Results for kh = 0.1 and kh = 1.
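These answers can be checked directly from the amplification factor derived above (a sketch; the function name is mine):

```python
import numpy as np

def mu_gs_helmholtz_1d(theta, kh):
    """GS amplification factor for the 1D Helmholtz stencil
    (1/h^2)[1, -2 + (kh)^2, 1]."""
    return abs(np.exp(1j * theta) / (np.exp(-1j * theta) - (2.0 - kh * kh)))
```

For kh < 1 the factor exceeds 1 near θ = 0 (smooth components slightly diverge), while the oscillatory range still converges; setting kh = 0 recovers the Laplace values.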

41 More examples: 2D Helmholtz equation, Gauss-Seidel. Relaxation in error terms: A_h e_h(x) + B_h ē_h(x) = 0, with B_h = h⁻² [0 0 0; 1, −4 + k²h², 0; 0 1 0] and A_h = h⁻² [0 1 0; 0 0 1; 0 0 0]. Symbols, θ = (θ₁, θ₂): B̂_h(θ) = h⁻²(e^{−iθ₁} + e^{−iθ₂} − 4 + k²h²); Â_h(θ) = h⁻²(e^{iθ₁} + e^{iθ₂}).

42 Amplification factors: µ(θ) = |Â_h(θ)/B̂_h(θ)| = |e^{iθ₁} + e^{iθ₂}| / |e^{−iθ₁} + e^{−iθ₂} − 4 + k²h²|. Smoothing rate (what is smooth here?): µ ≈ µ_Laplace for kh ≪ 1; µ deteriorates (grows) as kh grows.

43 Questions. What is smooth? Is θ = 0 smooth? What is the smoothing factor? What do we want from relaxation?

44 Kaczmarz relaxation. With c = 4 − k²h², the normal operator (the 5-point Helmholtz stencil applied twice, i.e., GS on AAᵀ) is the 13-point stencil h⁻⁴ [0 0 1 0 0; 0 2 −2c 2 0; 1 −2c 4+c² −2c 1; 0 2 −2c 2 0; 0 0 1 0 0]. As before, lexicographic relaxation splits it into a new-value part B_h and an old-value part A_h, and the error equation is A_h e_h(x) + B_h ē_h(x) = 0.

45 Symbols and amplification. Summing a_μ e^{iθ·μ} over the new-value entries of the 13-point stencil gives B̂_h(θ) = h⁻⁴[(e^{−2iθ₁} + e^{−2iθ₂}) + 2(e^{−i(θ₁+θ₂)} + e^{i(θ₁−θ₂)}) − 2c(e^{−iθ₁} + e^{−iθ₂}) + (4 + c²)], and over the old-value entries Â_h(θ) = h⁻⁴[(e^{2iθ₁} + e^{2iθ₂}) + 2(e^{i(θ₁+θ₂)} + e^{i(−θ₁+θ₂)}) − 2c(e^{iθ₁} + e^{iθ₂})]; then µ(θ) = |Â_h(θ)/B̂_h(θ)|.

46 Good news: µ(θ) ≤ 1 for any θ. Q: But for which θ does it work best? Results for kh = 1 (high frequencies and overall convergence).

47 More on convergence. Consider ordered eigenpairs (λ_j, v_j) of A: Av_j = λ_j v_j with λ₁ ≤ λ₂ ≤ ... ≤ λ_N. Represent the errors as e^{m+1} = Σ_n α_n^{m+1} v_n and e^m = Σ_n α_n^m v_n. Due to linearity (of everything), α_n^{m+1} v_n = α_n^m Q v_n, and thus the overall convergence is defined by max_n |α_n^{m+1}/α_n^m|. The smoothing factor is a convergence factor for α_{N/2}, ..., α_N, i.e., for the high-energy eigenfunctions v_{N/2}, ..., v_N. The remaining non-reduced low-energy eigenfunctions v₁, ..., v_{N/2} are smooth.

48 Reason for using coarse grids. Coarse grids can be used to compute an improved initial guess for the fine-grid relaxation. This is advantageous because: relaxation on the coarse grid is much cheaper (1/2 as many points in 1D, 1/4 in 2D, 1/8 in 3D); relaxation on the coarse grid has a marginally better convergence rate, for example 1 − O(4h²) instead of 1 − O(h²); and smooth eigenfunctions on the finest grid become more oscillatory on the coarse grid.

49 Multigrid components. Relaxation (smoother): efficiently damps high-energy error components (eigenfunctions with large eigenvalues); leaves the low eigenmodes essentially unchanged. Coarse-grid system operator: must accurately resolve the low eigenmodes. Restriction operator (averaging): transfers fine-grid information (the residual) to the coarse grid. Prolongation operator (interpolation): transfers the coarse-grid correction back to the fine grid; has to be accurate for the low eigenmodes.

50 Nice example: −Δu = f. Relaxation: Gauss-Seidel. Coarse-grid operator: discretization of the differential Laplace operator on the coarse scale (2h, 4h, ...). Restriction operator: full weighting of the residual (annihilates the remaining oscillatory components), transferring only smooth ones to coarser grids. Interpolation: polynomial (linear), which is accurate for smooth components. This works great because the lowest eigenmodes are physically smooth.
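A compact two-grid version of this nice example in 1D (a sketch under my own naming; the coarse problem is solved exactly, with linear interpolation and full weighting as above):

```python
import numpy as np

def relax(u, f, h, sweeps):
    """Gauss-Seidel for -u'' = f with fixed endpoint values."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def restrict(r):
    """Full weighting (1, 2, 1)/4 to every second grid point."""
    rc = np.zeros(len(r) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    return rc

def prolong(e):
    """Linear interpolation back to the fine grid."""
    ef = np.zeros(2 * (len(e) - 1) + 1)
    ef[::2] = e
    ef[1::2] = 0.5 * (e[:-1] + e[1:])
    return ef

def two_grid(u, f, h, nu1=2, nu2=1):
    u = relax(u, f, h, nu1)                 # pre-smoothing
    rc = restrict(residual(u, f, h))        # coarse-grid residual
    nc = len(rc) - 1
    Ac = (2 * np.eye(nc - 1) - np.eye(nc - 1, k=1)
          - np.eye(nc - 1, k=-1)) / (2 * h)**2
    ec = np.zeros(nc + 1)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])  # exact coarse solve
    u += prolong(ec)                          # coarse-grid correction
    return relax(u, f, h, nu2)                # post-smoothing
```

One such cycle typically reduces the error by a factor of about 0.1, independent of h.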

51 One picture is worth a thousand words.

52 Back to 2D Helmholtz: discretization error. Discrete wave number vs exact wave number: k_h ≈ k(1 + k²h²/γ), 24 ≤ γ ≤ 48. The total (accumulated) phase error is E(kd, kh) = kd · k²h²/(2πγ). For the discrete solutions to be accurate approximations of the differential solutions one needs E(kd, kh) ≪ 1. For fixed k, d this means small h and a large problem size N.
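For the 1D 3-point stencil the discrete wave number can be computed exactly, and expanding it reproduces k_h ≈ k(1 + k²h²/γ) with γ = 24, consistent with the quoted range (a sketch; the function name is mine):

```python
import numpy as np

def discrete_wavenumber(k, h):
    """k_h such that exp(i k_h x) is in the kernel of the 1D stencil
    (1/h^2)[1, -2 + (kh)^2, 1]: solves 2 cos(k_h h) = 2 - (kh)^2."""
    return np.arccos(1.0 - 0.5 * (k * h) ** 2) / h

# leading-order expansion: k_h = k (1 + (kh)^2 / 24 + ...), i.e. gamma = 24
```

Since k_h > k, the discrete wave lags in phase, and the lag accumulates linearly over the domain size d.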

53 Bottlenecks for multigrid. Dominant phase discretization error; the quality of the standard coarse-grid and prolongation operators deteriorates on coarse grids; poor multigrid relaxation: the standard fast relaxation schemes diverge.

54 Lowest eigenmodes for k = 6π and k = 8π.

55 More bottlenecks for multigrid. The low eigenmodes are oscillatory; there are many of them; they are different from each other; and on sufficiently coarse grids there is no single prolongation or coarse-grid operator of reasonable sparsity that works for all such eigenmodes.

56 One-dimensional Helmholtz equation: slow-to-converge components, aka lowest eigenmodes. (a) a random initial error; (b)-(d) examples of remaining errors, obtained by applying three multigrid V-cycles to the homogeneous Helmholtz equation (periodic boundary conditions).

57 Assumption on the error. The error not reduced by standard multigrid is e(x) = Σ_{|w|≈k} a_w exp(iwx) = Σ_{w≈k} α_w exp(iwx) + Σ_{w≈k} β_w exp(−iwx), where the range of w is determined by the relaxation history. One can drive it to be (1 − γ₁)k ≤ w ≤ (1 + γ₂)k, where typical values are γ_j ≈ 0.3.

58 Dual representation. If we agreed on the previous page, we can also agree that e(x) = e₁(x)exp(ikx) + e₂(x)exp(−ikx), where e₁(x) = Σ_{w≈k} α_w exp(i(w−k)x) and e₂(x) = Σ_{w≈k} β_w exp(−i(w−k)x). Obviously, the envelope functions e₁ and e₂ are smooth (compared to exp(±ikx)); indeed, their highest frequencies satisfy |w − k| ≤ γ_j k. Remark: the representation is not unique; one has to work to guarantee the statement above. The idea here is simple: let's try to approximate the smooth e₁(x) and e₂(x) instead of the oscillatory e(x) (which we cannot do well anyway), and hope for the best.

59 Defect correction. If we assume the error in the form e(x) = e₁(x)exp(ikx) + e₂(x)exp(−ikx), then naturally r(x) = Le(x) = L(e₁(x)exp(ikx) + e₂(x)exp(−ikx)) = L(e₁(x)exp(ikx)) + L(e₂(x)exp(−ikx)) = exp(ikx)L₁e₁(x) + exp(−ikx)L₂e₂(x).

60 Differential operators and residual representation. Two conclusions from the previous page. First, we use the differential Helmholtz operator and its kernel components, resulting in differential equations for the envelope functions e₁ and e₂, with L₁u = u'' + 2iku' and L₂u = u'' − 2iku'. Trick! The operator L has two kernel components (exp(±ikx)), and so does each L_j (the constant 1 and exp(∓2ikx)), but we consider grids where only one of them (the constant) is visible! Second, given the form of the e_j and L_j, we can claim that r(x) = r₁(x)exp(ikx) + r₂(x)exp(−ikx), where the r_j are smooth (as smooth as the e_j, in fact).

61 From Waves to Rays. To reduce the irreducible wave error e(x) in Le = r, we approximate two ray errors, L₁e₁ = r₁ and L₂e₂ = r₂, and then reconstruct back to the wave: e(x) = e₁(x)exp(ikx) + e₂(x)exp(−ikx). Where is the gain? To compute the ray functions one can start on a grid with kh ≳ π (very coarse). To accelerate, one may use standard multigrid on each L_j e_j = r_j if one wishes to go coarser.

62 Residual separation, and why at kH_s ≈ π/2. Consider the wave (Helmholtz) residual r(x) = r₁(x)exp(ikx) + r₂(x)exp(−ikx) with r₁ and r₂ smooth. Clearly, such a representation is not unique, and other representations r(x) = r̃₁(x)exp(ikx) + r̃₂(x)exp(−ikx) with oscillatory r̃_j are possible. We want to approximate smooth ray residuals (on coarse grids).

63 Separation (implementation, considered on grids). Compute the fine-grid wave residual r^h = f^h − L^h u^h (h fine enough for a small phase error, kd(kh)² ≪ 1). Transfer it to the grid with kH_s ≈ π/2 using full weighting (which annihilates the physically oscillatory error components), resulting in r^{H_s} = r₁^{H_s} exp(ikx) + r₂^{H_s} exp(−ikx). To approximate r₁, multiply by exp(−ikx): exp(−ikx) r^{H_s} = r₁^{H_s} + r₂^{H_s} exp(−2ikx). Note that the first term is smooth and the second one is extremely oscillatory, since 2kH_s ≈ π. Applying one more full weighting results in the desired ray residual: I_{H_s}^{2H_s}(exp(−ikx) r^{H_s}) = I_{H_s}^{2H_s}(r₁^{H_s}) + I_{H_s}^{2H_s}(r₂^{H_s} exp(−2ikx)) ≈ r₁^{2H_s}.
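The separation steps can be illustrated numerically on a grid with kH_s = π/2, where exp(−2ikx) alternates sign from point to point and is annihilated by the (1, 2, 1)/4 stencil (a sketch; the particular smooth r₁, r₂ and all names are mine):

```python
import numpy as np

h = 1.0
n = 256
k = np.pi / (2 * h)                     # separation grid: k * H_s = pi/2
x = h * np.arange(n + 1)

r1 = 1.0 + 0.3 * np.cos(2 * np.pi * x / (n * h))   # smooth ray residuals
r2 = 0.5 - 0.2 * np.sin(2 * np.pi * x / (n * h))
r = r1 * np.exp(1j * k * x) + r2 * np.exp(-1j * k * x)   # wave residual

w = np.exp(-1j * k * x) * r             # = r1 + r2 * exp(-2ikx); since
                                        # 2kH_s = pi, the second term alternates
def full_weight(v):
    """(1, 2, 1)/4 full weighting to every second grid point."""
    return 0.25 * (v[1:-2:2] + 2 * v[2:-1:2] + v[3::2])

r1_coarse = full_weight(w)              # oscillatory part annihilated
```

The recovered coarse-grid values agree with r₁ up to the O(H²) smoothing error of the full-weighting stencil.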

64 Ray operator. The discrete operators for the ray functions are derived from the differential operators, L₁u = u'' + 2iku' = f₁, considered on grids with kH ≳ π. The choice of discretization is defined by the principality of terms: the second-order term scales like 1/H², the first-order term like 2k/H. If kH ≳ π, the first-order term is principal and should be accurately discretized; thus staggered grids are used. With upwind (backward) differencing of the first-order term, L₁^H = [1/H² − 2ik/H, 2ik/H − 2/H², 1/H²].

65 Wave-ray cycle. Wave cycle: reduces all but the problematic characteristic components; produces the residual for the ray cycles (on one of the fine grids with small phase error). Finest grid: (kd)(kh)² ≪ 1. Coarsest grid: kh > π (used to effectively wipe out constants and the like, stencil [1, k²H² − 2, 1]). Correction Scheme (CS); GS with one pre- and one post-relaxation sweep on the fine grid and on the coarsest grid; Kaczmarz with four pre- and one post-relaxation sweeps everywhere else; linear interpolation and full weighting. Two ray cycles to reduce the characteristic components.

66 Ray cycles. Staggered grid, short central differences for u_x; a marching scheme in the propagation direction; a CS or FAS multigrid scheme, depending on interpretation; natural introduction of boundary conditions in terms of rays.

67 Before we proceed, a detour to FAS, the Full Approximation Scheme. It allows a coarse-grid approximation of the solution u^H rather than of the correction e^H; it considers the full equation L^H u^H = f^H rather than the residual equation L^H e^H = r^h; it was developed for solving nonlinear equations.

68 FAS details. Consider ũ^h, a current fine-grid solution, and r^h = f^h − L^h ũ^h, its corresponding residual. CS: coarse-grid variable: the correction v^H; coarse-grid system: L^H v^H = R^H = I_h^H r^h; coarse-grid correction: ũ^h_NEW = ũ^h + I_H^h v^H. FAS: coarse-grid variable: the solution ũ^H = Ĩ_h^H ũ^h + v^H; coarse-grid system: L^H u^H = f^H := L^H(Ĩ_h^H ũ^h) + R^H; coarse-grid correction: ũ^h_NEW = ũ^h + I_H^h(ũ^H_NEW − Ĩ_h^H ũ^h). If L^H resolves all solution components, then ũ^h_NEW = I_H^h(ũ^H_NEW).
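For a linear L^H the FAS correction coincides with the CS correction, which is easy to verify directly (a sketch for the 1D Laplacian; all names, sizes and the random test iterate are mine):

```python
import numpy as np

nf, h = 8, 1.0 / 8                      # 8 fine intervals on (0, 1)
nc = nf // 2
Ah = (2 * np.eye(nf - 1) - np.eye(nf - 1, k=1) - np.eye(nf - 1, k=-1)) / h**2
AH = (2 * np.eye(nc - 1) - np.eye(nc - 1, k=1) - np.eye(nc - 1, k=-1)) / (2 * h)**2

R = np.zeros((nc - 1, nf - 1))          # full weighting I_h^H
J = np.zeros((nc - 1, nf - 1))          # injection (solution transfer, I-tilde)
for i in range(nc - 1):
    R[i, 2 * i : 2 * i + 3] = [0.25, 0.5, 0.25]
    J[i, 2 * i + 1] = 1.0
P = 2.0 * R.T                           # linear interpolation I_H^h

f = np.ones(nf - 1)
u = np.random.default_rng(0).random(nf - 1)   # some fine-grid approximation
r = f - Ah @ u

e_cs = np.linalg.solve(AH, R @ r)                  # CS: correction
uH = np.linalg.solve(AH, AH @ (J @ u) + R @ r)     # FAS: full coarse solution
e_fas = uH - J @ u                                 # FAS correction
```

The two corrections agree to roundoff; the schemes differ only when L^H is nonlinear (or when the solution itself is needed on the coarse grid, as in the wave-ray setting below).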

69 τ-correction. Consider the coarse-grid equation as L^H u^H = f^H + τ_h^H, where f^H = I_h^H f^h and τ_h^H = L^H(Ĩ_h^H u^h) − I_h^H(L^h u^h). The term τ_h^H is a fine-to-coarse defect correction. Nonlinear equations: the correction equation is L^h(ũ^h + v^h) − L^h ũ^h = r^h. On the coarse grid it reads L^H(Ĩ_h^H ũ^h + v^H) − L^H(Ĩ_h^H ũ^h) = I_h^H r^h, which is exactly the top equation.

70 Rays, directions and BC.

71 Back to boundary conditions. Assumptions: away from the source (near the boundary) the solution has a pure ray character, so it can be described using ray operators. The typical boundary conditions are radiation BC, sometimes combined with Dirichlet boundary conditions. The rays can enter the computational domain from infinity and leave it freely unless there is an obstacle, a media change, etc. The entering part is defined by Dirichlet boundary conditions at the entrance (for each ray); the exiting part is controlled by Sommerfeld boundary conditions at the exit (the latter can be replaced by an appropriately discretized interior equation). If no rays (waves) enter from infinity, the Dirichlet boundary conditions are homogeneous.

72 Full ray formulation (for two-sided Sommerfeld BC). Exiting Sommerfeld BC: at x = 0, u' + 2iku = 0, and at x = d, u' − 2iku = 0. Positively propagating ray (corresponding to exp(ikx)): L₁a₁(x) = a₁''(x) + 2ika₁'(x) = f₁(x), a₁(0) = a_{1,0}, a₁'(d) = 0. Negatively propagating ray (corresponding to exp(−ikx)): L₂a₂(x) = a₂''(x) − 2ika₂'(x) = f₂(x), a₂(d) = a_{2,0}, a₂'(0) = 0. The two systems are solved independently; if there is a Dirichlet BC on one side, the two have to be combined (the incident wave defines the amplitude of the reflected wave).

73 Wave-ray interaction. Because of the BC, the ray cycles are implemented in FAS (solution representation). In the interior, regular FAS correction: ũ^h_NEW = ũ^h + exp(ikx) I_H^h(ã^H_{1,NEW} − ã^H₁) + exp(−ikx) I_H^h(ã^H_{2,NEW} − ã^H₂). At the boundary, direct replacement: ũ^h_NEW = exp(ikx) I_H^h ã^H_{1,NEW} + exp(−ikx) I_H^h ã^H_{2,NEW}.

74 Conclusions in 1D. The solver is a combination of the wave and the ray approaches; its complexity is similar to that for the Laplace equation, i.e., O(N); it worked for jumping and slightly varying wave numbers.

75 Wave-Ray algorithm. A geometric solver for the 2D Helmholtz equation with constant wave numbers. Brandt, Livshits: Wave-ray multigrid method for solving standing wave equations, 1997. Livshits, Brandt: Accuracy Properties of the Wave-Ray Multigrid Algorithm for Helmholtz Equations, 2006. Special feature: separate treatment of the oscillatory kernel e^{ik·x}, |k| = k, on coarse grids.

76 Directions of propagation e^{ik·x}.

77 Discretization of the propagation directions: a few chosen directions k_l.

78 Wave-ray approach. Components not changed (at best) by standard multigrid: exp(iw·x), |w| ≈ k, w, x ∈ R². Ray representation of such components of u(x) and of the residual r(x): u(x) = Σ_l u_l(x)exp(ik_l·x) and r(x) = Σ_l r_l(x)exp(ik_l·x). Instead of the oscillatory u(x), compute the smooth u_l(x); the differential operators for the u_l(x) are obtained directly from L.

79 Phase error: Waves vs Rays. Waves (as discussed): E(kd, kh) ≈ kd·k²h²/(2πγ), 24 ≤ γ ≤ 48. Rays: E(kd, L) ≈ kd·β/(2πL²), β ≈ 0.2. This assumes very aggressive coarsening with L_n = 2^{n+2}: in the propagation direction ξ, h^n_ξ = (L_n)²/(16k); in the orthogonal direction η, h^n_η = C L_n/(4k). For L = 8, kh_ξ ≈ 4 and kh_η ≈ 2.

80 Main features. Smaller wave computational domain; larger ray domains; grids aligned with the propagation directions; aggressive ray coarsening; ray equations a_ξξ + a_ηη + 2ika_ξ, ξ ∥ k_l; ray residual separation using consecutive full weighting; FAS for both waves and rays (boundary values for waves come directly from rays).

81 More about rays.

82 Algorithm. Repeat until happy: a standard CS wave V-cycle; an FAS ray cycle: computation of the wave residual (kh ≪ 1); separation into the ray residuals (starting at kh ≈ π/2); forming the FAS ray right-hand sides, line relaxation (kh_ξ ≈ 4, kh_η ≈ 2); reconstruction to the wave form (kh ≈ π/2); interpolation (with relaxation) to the finest wave grid (kh ≪ 1).

83 Requirements and advantages. Requirements: a Helmholtz PDE; structured grids; analytical knowledge of the solutions of Lu = 0. Advantages: local processes via the wave representation on fine grids; geometrical-optics processes via the ray representation on coarse grids and larger domains; natural introduction of radiation boundary conditions.


More information

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58 Background C. T. Kelley NC State University tim kelley@ncsu.edu C. T. Kelley Background NCSU, Spring 2012 1 / 58 Notation vectors, matrices, norms l 1 : max col sum... spectral radius scaled integral norms

More information

Algebraic Multigrid as Solvers and as Preconditioner

Algebraic Multigrid as Solvers and as Preconditioner Ò Algebraic Multigrid as Solvers and as Preconditioner Domenico Lahaye domenico.lahaye@cs.kuleuven.ac.be http://www.cs.kuleuven.ac.be/ domenico/ Department of Computer Science Katholieke Universiteit Leuven

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

Solving PDEs with Multigrid Methods p.1

Solving PDEs with Multigrid Methods p.1 Solving PDEs with Multigrid Methods Scott MacLachlan maclachl@colorado.edu Department of Applied Mathematics, University of Colorado at Boulder Solving PDEs with Multigrid Methods p.1 Support and Collaboration

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

Multigrid absolute value preconditioning

Multigrid absolute value preconditioning Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical

More information

Multigrid Methods and their application in CFD

Multigrid Methods and their application in CFD Multigrid Methods and their application in CFD Michael Wurst TU München 16.06.2009 1 Multigrid Methods Definition Multigrid (MG) methods in numerical analysis are a group of algorithms for solving differential

More information

Elliptic Problems / Multigrid. PHY 604: Computational Methods for Physics and Astrophysics II

Elliptic Problems / Multigrid. PHY 604: Computational Methods for Physics and Astrophysics II Elliptic Problems / Multigrid Summary of Hyperbolic PDEs We looked at a simple linear and a nonlinear scalar hyperbolic PDE There is a speed associated with the change of the solution Explicit methods

More information

Multigrid finite element methods on semi-structured triangular grids

Multigrid finite element methods on semi-structured triangular grids XXI Congreso de Ecuaciones Diferenciales y Aplicaciones XI Congreso de Matemática Aplicada Ciudad Real, -5 septiembre 009 (pp. 8) Multigrid finite element methods on semi-structured triangular grids F.J.

More information

Multigrid Solution of the Debye-Hückel Equation

Multigrid Solution of the Debye-Hückel Equation Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 5-17-2013 Multigrid Solution of the Debye-Hückel Equation Benjamin Quanah Parker Follow this and additional works

More information

INTRODUCTION TO MULTIGRID METHODS

INTRODUCTION TO MULTIGRID METHODS INTRODUCTION TO MULTIGRID METHODS LONG CHEN 1. ALGEBRAIC EQUATION OF TWO POINT BOUNDARY VALUE PROBLEM We consider the discretization of Poisson equation in one dimension: (1) u = f, x (0, 1) u(0) = u(1)

More information

Introduction to Multigrid Method

Introduction to Multigrid Method Introduction to Multigrid Metod Presented by: Bogojeska Jasmina /08/005 JASS, 005, St. Petersburg 1 Te ultimate upsot of MLAT Te amount of computational work sould be proportional to te amount of real

More information

9. Iterative Methods for Large Linear Systems

9. Iterative Methods for Large Linear Systems EE507 - Computational Techniques for EE Jitkomut Songsiri 9. Iterative Methods for Large Linear Systems introduction splitting method Jacobi method Gauss-Seidel method successive overrelaxation (SOR) 9-1

More information

Von Neumann Analysis of Jacobi and Gauss-Seidel Iterations

Von Neumann Analysis of Jacobi and Gauss-Seidel Iterations Von Neumann Analysis of Jacobi and Gauss-Seidel Iterations We consider the FDA to the 1D Poisson equation on a grid x covering [0,1] with uniform spacing h, h (u +1 u + u 1 ) = f whose exact solution (to

More information

Partial Differential Equations

Partial Differential Equations Partial Differential Equations Introduction Deng Li Discretization Methods Chunfang Chen, Danny Thorne, Adam Zornes CS521 Feb.,7, 2006 What do You Stand For? A PDE is a Partial Differential Equation This

More information

6. Multigrid & Krylov Methods. June 1, 2010

6. Multigrid & Krylov Methods. June 1, 2010 June 1, 2010 Scientific Computing II, Tobias Weinzierl page 1 of 27 Outline of This Session A recapitulation of iterative schemes Lots of advertisement Multigrid Ingredients Multigrid Analysis Scientific

More information

Bootstrap AMG. Kailai Xu. July 12, Stanford University

Bootstrap AMG. Kailai Xu. July 12, Stanford University Bootstrap AMG Kailai Xu Stanford University July 12, 2017 AMG Components A general AMG algorithm consists of the following components. A hierarchy of levels. A smoother. A prolongation. A restriction.

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

A fast method for the solution of the Helmholtz equation

A fast method for the solution of the Helmholtz equation A fast method for the solution of the Helmholtz equation Eldad Haber and Scott MacLachlan September 15, 2010 Abstract In this paper, we consider the numerical solution of the Helmholtz equation, arising

More information

Chapter 5. Methods for Solving Elliptic Equations

Chapter 5. Methods for Solving Elliptic Equations Chapter 5. Methods for Solving Elliptic Equations References: Tannehill et al Section 4.3. Fulton et al (1986 MWR). Recommended reading: Chapter 7, Numerical Methods for Engineering Application. J. H.

More information

Numerical Programming I (for CSE)

Numerical Programming I (for CSE) Technische Universität München WT 1/13 Fakultät für Mathematik Prof. Dr. M. Mehl B. Gatzhammer January 1, 13 Numerical Programming I (for CSE) Tutorial 1: Iterative Methods 1) Relaxation Methods a) Let

More information

AMG for a Peta-scale Navier Stokes Code

AMG for a Peta-scale Navier Stokes Code AMG for a Peta-scale Navier Stokes Code James Lottes Argonne National Laboratory October 18, 2007 The Challenge Develop an AMG iterative method to solve Poisson 2 u = f discretized on highly irregular

More information

Notes for CS542G (Iterative Solvers for Linear Systems)

Notes for CS542G (Iterative Solvers for Linear Systems) Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 11 Partial Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002.

More information

Geometric Multigrid Methods

Geometric Multigrid Methods Geometric Multigrid Methods Susanne C. Brenner Department of Mathematics and Center for Computation & Technology Louisiana State University IMA Tutorial: Fast Solution Techniques November 28, 2010 Ideas

More information

Poisson Equation in 2D

Poisson Equation in 2D A Parallel Strategy Department of Mathematics and Statistics McMaster University March 31, 2010 Outline Introduction 1 Introduction Motivation Discretization Iterative Methods 2 Additive Schwarz Method

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

MULTIGRID METHODS FOR NONLINEAR PROBLEMS: AN OVERVIEW

MULTIGRID METHODS FOR NONLINEAR PROBLEMS: AN OVERVIEW MULTIGRID METHODS FOR NONLINEAR PROBLEMS: AN OVERVIEW VAN EMDEN HENSON CENTER FOR APPLIED SCIENTIFIC COMPUTING LAWRENCE LIVERMORE NATIONAL LABORATORY Abstract Since their early application to elliptic

More information

An Introduction of Multigrid Methods for Large-Scale Computation

An Introduction of Multigrid Methods for Large-Scale Computation An Introduction of Multigrid Methods for Large-Scale Computation Chin-Tien Wu National Center for Theoretical Sciences National Tsing-Hua University 01/4/005 How Large the Real Simulations Are? Large-scale

More information

Comparison of V-cycle Multigrid Method for Cell-centered Finite Difference on Triangular Meshes

Comparison of V-cycle Multigrid Method for Cell-centered Finite Difference on Triangular Meshes Comparison of V-cycle Multigrid Method for Cell-centered Finite Difference on Triangular Meshes Do Y. Kwak, 1 JunS.Lee 1 Department of Mathematics, KAIST, Taejon 305-701, Korea Department of Mathematics,

More information

Multigrid Methods for Discretized PDE Problems

Multigrid Methods for Discretized PDE Problems Towards Metods for Discretized PDE Problems Institute for Applied Matematics University of Heidelberg Feb 1-5, 2010 Towards Outline A model problem Solution of very large linear systems Iterative Metods

More information

Robust solution of Poisson-like problems with aggregation-based AMG

Robust solution of Poisson-like problems with aggregation-based AMG Robust solution of Poisson-like problems with aggregation-based AMG Yvan Notay Université Libre de Bruxelles Service de Métrologie Nucléaire Paris, January 26, 215 Supported by the Belgian FNRS http://homepages.ulb.ac.be/

More information

Iterative Methods and Multigrid

Iterative Methods and Multigrid Iterative Methods and Multigrid Part 1: Introduction to Multigrid 2000 Eric de Sturler 1 12/02/09 MG01.prz Basic Iterative Methods (1) Nonlinear equation: f(x) = 0 Rewrite as x = F(x), and iterate x i+1

More information

Math 46, Applied Math (Spring 2009): Final

Math 46, Applied Math (Spring 2009): Final Math 46, Applied Math (Spring 2009): Final 3 hours, 80 points total, 9 questions worth varying numbers of points 1. [8 points] Find an approximate solution to the following initial-value problem which

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.

More information

Constrained Minimization and Multigrid

Constrained Minimization and Multigrid Constrained Minimization and Multigrid C. Gräser (FU Berlin), R. Kornhuber (FU Berlin), and O. Sander (FU Berlin) Workshop on PDE Constrained Optimization Hamburg, March 27-29, 2008 Matheon Outline Successive

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

Introduction to Scientific Computing II Multigrid

Introduction to Scientific Computing II Multigrid Introduction to Scientific Computing II Multigrid Miriam Mehl Slide 5: Relaxation Methods Properties convergence depends on method clear, see exercises and 3), frequency of the error remember eigenvectors

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Sparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations

Sparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations Sparse Linear Systems Iterative Methods for Sparse Linear Systems Matrix Computations and Applications, Lecture C11 Fredrik Bengzon, Robert Söderlund We consider the problem of solving the linear system

More information

HW4, Math 228A. Multigrid Solver. Fall 2010

HW4, Math 228A. Multigrid Solver. Fall 2010 HW4, Math 228A. Multigrid Solver Date due 11/30/2010 UC Davis, California Fall 2010 Nasser M. Abbasi Fall 2010 Compiled on January 20, 2019 at 4:13am [public] Contents 1 Problem 1 3 1.1 Restriction and

More information

On nonlinear adaptivity with heterogeneity

On nonlinear adaptivity with heterogeneity On nonlinear adaptivity with heterogeneity Jed Brown jed@jedbrown.org (CU Boulder) Collaborators: Mark Adams (LBL), Matt Knepley (UChicago), Dave May (ETH), Laetitia Le Pourhiet (UPMC), Ravi Samtaney (KAUST)

More information

Introduction to Multigrid Methods

Introduction to Multigrid Methods Introduction to Multigrid Methods Chapter 9: Multigrid Methodology and Applications Gustaf Söderlind Numerical Analysis, Lund University Textbooks: A Multigrid Tutorial, by William L Briggs. SIAM 1988

More information

Separation of Variables in Linear PDE: One-Dimensional Problems

Separation of Variables in Linear PDE: One-Dimensional Problems Separation of Variables in Linear PDE: One-Dimensional Problems Now we apply the theory of Hilbert spaces to linear differential equations with partial derivatives (PDE). We start with a particular example,

More information

Iterative Methods. Splitting Methods

Iterative Methods. Splitting Methods Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition

More information

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method PRECONDITIONED CONJUGATE GRADIENT METHOD FOR OPTIMAL CONTROL PROBLEMS WITH CONTROL AND STATE CONSTRAINTS ROLAND HERZOG AND EKKEHARD SACHS Abstract. Optimality systems and their linearizations arising in

More information

University of Illinois at Urbana-Champaign. Multigrid (MG) methods are used to approximate solutions to elliptic partial differential

University of Illinois at Urbana-Champaign. Multigrid (MG) methods are used to approximate solutions to elliptic partial differential Title: Multigrid Methods Name: Luke Olson 1 Affil./Addr.: Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 email: lukeo@illinois.edu url: http://www.cs.uiuc.edu/homes/lukeo/

More information

An Accurate Fourier-Spectral Solver for Variable Coefficient Elliptic Equations

An Accurate Fourier-Spectral Solver for Variable Coefficient Elliptic Equations An Accurate Fourier-Spectral Solver for Variable Coefficient Elliptic Equations Moshe Israeli Computer Science Department, Technion-Israel Institute of Technology, Technion city, Haifa 32000, ISRAEL Alexander

More information

Introduction to Multigrid Methods Sampling Theory and Elements of Multigrid

Introduction to Multigrid Methods Sampling Theory and Elements of Multigrid Introduction to Multigrid Methods Sampling Theory and Elements of Multigrid Gustaf Söderlind Numerical Analysis, Lund University Contents V3.15 What is a multigrid method? Sampling theory Digital filters

More information

1 Distributions (due January 22, 2009)

1 Distributions (due January 22, 2009) Distributions (due January 22, 29). The distribution derivative of the locally integrable function ln( x ) is the principal value distribution /x. We know that, φ = lim φ(x) dx. x ɛ x Show that x, φ =

More information

First-order overdetermined systems. for elliptic problems. John Strain Mathematics Department UC Berkeley July 2012

First-order overdetermined systems. for elliptic problems. John Strain Mathematics Department UC Berkeley July 2012 First-order overdetermined systems for elliptic problems John Strain Mathematics Department UC Berkeley July 2012 1 OVERVIEW Convert elliptic problems to first-order overdetermined form Control error via

More information

Preconditioned Locally Minimal Residual Method for Computing Interior Eigenpairs of Symmetric Operators

Preconditioned Locally Minimal Residual Method for Computing Interior Eigenpairs of Symmetric Operators Preconditioned Locally Minimal Residual Method for Computing Interior Eigenpairs of Symmetric Operators Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 06-09 Multigrid for High Dimensional Elliptic Partial Differential Equations on Non-equidistant Grids H. bin Zubair, C.W. Oosterlee, R. Wienands ISSN 389-6520 Reports

More information

Introduction and Stationary Iterative Methods

Introduction and Stationary Iterative Methods Introduction and C. T. Kelley NC State University tim kelley@ncsu.edu Research Supported by NSF, DOE, ARO, USACE DTU ITMAN, 2011 Outline Notation and Preliminaries General References What you Should Know

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Scientific Computing I

Scientific Computing I Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Neckel Winter 2013/2014 Module 8: An Introduction to Finite Element Methods, Winter 2013/2014 1 Part I: Introduction to

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Some definitions. Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization. A-inner product. Important facts

Some definitions. Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization. A-inner product. Important facts Some definitions Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 A matrix A is SPD (Symmetric

More information

A greedy strategy for coarse-grid selection

A greedy strategy for coarse-grid selection A greedy strategy for coarse-grid selection S. MacLachlan Yousef Saad August 3, 2006 Abstract Efficient solution of the very large linear systems that arise in numerical modelling of real-world applications

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Multigrid Method for 2D Helmholtz Equation using Higher Order Finite Difference Scheme Accelerated by Krylov Subspace

Multigrid Method for 2D Helmholtz Equation using Higher Order Finite Difference Scheme Accelerated by Krylov Subspace 201, TextRoad Publication ISSN: 2090-27 Journal of Applied Environmental and Biological Sciences www.textroad.com Multigrid Method for 2D Helmholtz Equation using Higher Order Finite Difference Scheme

More information

AMS 529: Finite Element Methods: Fundamentals, Applications, and New Trends

AMS 529: Finite Element Methods: Fundamentals, Applications, and New Trends AMS 529: Finite Element Methods: Fundamentals, Applications, and New Trends Lecture 3: Finite Elements in 2-D Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Finite Element Methods 1 / 18 Outline 1 Boundary

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V Part I: Introduction to Finite Element Methods Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Necel Winter 4/5 The Model Problem FEM Main Ingredients Wea Forms and Wea

More information

Aggregation-based algebraic multigrid

Aggregation-based algebraic multigrid Aggregation-based algebraic multigrid from theory to fast solvers Yvan Notay Université Libre de Bruxelles Service de Métrologie Nucléaire CEMRACS, Marseille, July 18, 2012 Supported by the Belgian FNRS

More information

Iterative Methods for Ax=b

Iterative Methods for Ax=b 1 FUNDAMENTALS 1 Iterative Methods for Ax=b 1 Fundamentals consider the solution of the set of simultaneous equations Ax = b where A is a square matrix, n n and b is a right hand vector. We write the iterative

More information

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative

More information

Motivation: Sparse matrices and numerical PDE's

Motivation: Sparse matrices and numerical PDE's Lecture 20: Numerical Linear Algebra #4 Iterative methods and Eigenproblems Outline 1) Motivation: beyond LU for Ax=b A little PDE's and sparse matrices A) Temperature Equation B) Poisson Equation 2) Splitting

More information

Research Article Evaluation of the Capability of the Multigrid Method in Speeding Up the Convergence of Iterative Methods

Research Article Evaluation of the Capability of the Multigrid Method in Speeding Up the Convergence of Iterative Methods International Scholarly Research Network ISRN Computational Mathematics Volume 212, Article ID 172687, 5 pages doi:1.542/212/172687 Research Article Evaluation of the Capability of the Multigrid Method

More information

A Generalized Eigensolver Based on Smoothed Aggregation (GES-SA) for Initializing Smoothed Aggregation Multigrid (SA)

A Generalized Eigensolver Based on Smoothed Aggregation (GES-SA) for Initializing Smoothed Aggregation Multigrid (SA) NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2007; 07: 6 [Version: 2002/09/8 v.02] A Generalized Eigensolver Based on Smoothed Aggregation (GES-SA) for Initializing Smoothed Aggregation

More information

1 Non-negative Matrix Factorization (NMF)

1 Non-negative Matrix Factorization (NMF) 2018-06-21 1 Non-negative Matrix Factorization NMF) In the last lecture, we considered low rank approximations to data matrices. We started with the optimal rank k approximation to A R m n via the SVD,

More information

Adaptive Multigrid for QCD. Lattice University of Regensburg

Adaptive Multigrid for QCD. Lattice University of Regensburg Lattice 2007 University of Regensburg Michael Clark Boston University with J. Brannick, R. Brower, J. Osborn and C. Rebbi -1- Lattice 2007, University of Regensburg Talk Outline Introduction to Multigrid

More information

Iterative Solution methods

Iterative Solution methods p. 1/28 TDB NLA Parallel Algorithms for Scientific Computing Iterative Solution methods p. 2/28 TDB NLA Parallel Algorithms for Scientific Computing Basic Iterative Solution methods The ideas to use iterative

More information

A multigrid method for large scale inverse problems

A multigrid method for large scale inverse problems A multigrid method for large scale inverse problems Eldad Haber Dept. of Computer Science, Dept. of Earth and Ocean Science University of British Columbia haber@cs.ubc.ca July 4, 2003 E.Haber: Multigrid

More information

Optimal Interface Conditions for an Arbitrary Decomposition into Subdomains

Optimal Interface Conditions for an Arbitrary Decomposition into Subdomains Optimal Interface Conditions for an Arbitrary Decomposition into Subdomains Martin J. Gander and Felix Kwok Section de mathématiques, Université de Genève, Geneva CH-1211, Switzerland, Martin.Gander@unige.ch;

More information

Preliminary Examination, Numerical Analysis, August 2016

Preliminary Examination, Numerical Analysis, August 2016 Preliminary Examination, Numerical Analysis, August 2016 Instructions: This exam is closed books and notes. The time allowed is three hours and you need to work on any three out of questions 1-4 and any

More information

Notes on Multigrid Methods

Notes on Multigrid Methods Notes on Multigrid Metods Qingai Zang April, 17 Motivation of multigrids. Te convergence rates of classical iterative metod depend on te grid spacing, or problem size. In contrast, convergence rates of

More information

Spectral element agglomerate AMGe
