Multigrid Methods for Discretized PDE Problems
- Anthony Owens
- 5 years ago
1 Towards Multigrid Methods for Discretized PDE Problems. Institute for Applied Mathematics, University of Heidelberg, Feb 1-5, 2010
2 Outline. A model problem. Solution of very large linear systems. Iterative methods. Two-grid and multigrid methods. Preconditioning, and what to do when the problem gets a problem.
3 Summary: Basic Methods
4 Direct Methods. LR-decomposition A = LR (i.e. LU), with L = ... lower and R = ... upper triangular: a direct solver for the system Ax = b. Very slow, large memory usage: E_LR = O(N^3), M_LR = O(N^2).
5 Basic Iterative Methods. With initial guess x^0 in R^N: x^{t+1} = x^t + θ B (b - A x^t), with defect d^t = b - A x^t. Convergence depends on the iteration matrix: analyze the spectral norm, ‖e^{t+1}‖ ≤ ‖I - BA‖ ‖e^t‖, ‖I - BA‖_2 = λ_max(I - BA). Problem: very bad convergence rate (depends on the mesh size h), ‖I - BA‖ is close to 1. Many steps are necessary to solve the system up to ε. Overall complexity is bad: N_iter = O(N), E_Jacobi = O(N^2), M_Jacobi = O(N).
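To make the slow convergence concrete, here is a minimal sketch (not from the slides) of the Jacobi iteration on the 1D Laplace model matrix; the function names and the tolerance are illustrative choices. The iteration count needed to reach a fixed tolerance grows like h^{-2} as the mesh is refined.

```python
import numpy as np

def laplace_1d(n):
    """1D Laplace model matrix tridiag(-1, 2, -1) (Dirichlet boundary)."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def jacobi_count(n, tol=1e-2):
    """Jacobi steps x <- x + D^{-1}(b - A x) until the relative error < tol."""
    A = laplace_1d(n)
    x_exact = np.ones(n)
    b = A @ x_exact
    x = np.zeros(n)
    for t in range(1, 200000):
        x = x + (b - A @ x) / np.diag(A)
        if np.linalg.norm(x - x_exact) < tol * np.linalg.norm(x_exact):
            return t
    return -1
```

Doubling the number of unknowns roughly quadruples the iteration count, matching a convergence rate of the form 1 - O(h^2).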
6 Gradient Method. Instead of solving Ax = b, minimize the functional E(x) = 1/2 ⟨Ax, x⟩ - ⟨b, x⟩. In every step t, set a search direction r^t and make the optimal step: min_{s in R} E(x^t + s r^t), s = ⟨d^t, r^t⟩ / ⟨A r^t, r^t⟩. The optimal search direction would be r^t = x - x^t. Since x is not available, use r^t = b - A x^t = Ax - A x^t = A(x - x^t). Problem: the search directions are bad, many steps are necessary: ‖e^{t+1}‖ ≤ ((1 - 1/κ)/(1 + 1/κ))^t ‖e^0‖, κ = cond(A) = 1/h^2. Bad overall complexity (due to the many steps): N_iter = O(N), E_GM = O(N^2), M_GM = O(N).
7 Method of Conjugate Gradients. Use better search directions: orthogonalize the r^k, k = 0, ..., t, in the A-inner product: ⟨r^k, r^l⟩_A = ⟨A r^k, r^l⟩ = 0 (k ≠ l). Since A is symmetric, the orthogonalization can be done in one step: d^{t+1} = b - A x^t, r^{t+1} = d^{t+1} - (⟨d^{t+1}, r^t⟩_A / ⟨r^t, r^t⟩_A) r^t. The convergence rate depends on the square root of the condition number: ‖e^{t+1}‖ ≤ ((1 - 1/√κ)/(1 + 1/√κ))^t ‖e^0‖, κ = 1/h^2.
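A sketch of this conjugate gradient recursion (new defect d, new search direction r made A-orthogonal to the previous one); the names are illustrative and no preconditioning is used.

```python
import numpy as np

def cg(A, b, x0, steps):
    """Conjugate gradients for SPD A, in the one-step Gram-Schmidt form above."""
    x = x0.copy()
    d = b - A @ x        # defect
    r = d.copy()         # first search direction
    for _ in range(steps):
        Ar = A @ r
        s = (d @ r) / (r @ Ar)              # optimal step length
        x = x + s * r
        d = b - A @ x                       # new defect
        r = d - ((d @ Ar) / (r @ Ar)) * r   # A-orthogonalize against previous r
    return x
```

In exact arithmetic CG terminates after at most N steps; in practice O(√κ) steps suffice to reduce the error by a fixed factor.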
8 Comparison. Solve the 2D Laplace equation, problem size N = M^2, h = 1/M, h^2 = 1/N.
Method | E_step | conv. rate | N_iter | E_all | time for N = 10^6
LR | - | - | 1 | N^3 | 30 years
Jacobi | 5N | 1 - h^2 | N | 5N^2 | 2 hours
Gradient | 10N | 1 - h^2 | N | 10N^2 | 3 hours
Conjugate Gradients | 20N | 1 - h | √N | 20N^{3/2} | 1 min
9 Towards Multigrid Methods
10 What can we do better? Optimal numerical complexity: numerical complexity in every step E_step = O(N); this is given by Jacobi, Gradient, Conjugate Gradient. Convergence rate / number of steps: to reduce the error by ε, we want N_iter = O(1); not the case for Jacobi (O(N)), Gradient (O(N)), Conjugate Gradients (O(√N)).
11 Dependence of the convergence rate on the mesh size: ‖e^{t+1}‖ ≤ ρ ‖e^t‖. Jacobi: ρ = λ_max(I - BA) = λ_max(D^{-1}(L + R)) = 1 - O(h^2). Gradient: ρ = (1 - 1/κ)/(1 + 1/κ) = 1 - O(h^2), κ = 1/h^2. Conjugate Gradient: ρ = (1 - 1/√κ)/(1 + 1/√κ) = 1 - O(h). Other interpretation: on a fixed mesh Ω_h, where h is a constant, we already have an optimal solver!
12 Why does the convergence depend on the mesh size? A detailed analysis of the Jacobi iteration, 1D Laplace. Start with a random initial guess x^0; the error in the beginning is randomly distributed. Perform some steps of Jacobi.
13-22 Jacobi Solution, steps 0 to 9 (plots of the iterates).
23 Solution Behavior. The solution changes very rapidly in the beginning (the first two or three steps); then nearly nothing happens. Plots: solution in steps 0-2 and steps 2-9.
24-33 Jacobi Error, steps 0 to 9 (plots of the error).
34 Error Behavior. The overall error amplitude does not change a lot, but the error frequencies change!
35 The Concept of Smoothing. Jacobi is very bad and slow as a solver, but high frequencies are smoothed quickly.
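A small numerical illustration of this smoothing effect (assumptions, not from the slides: the 1D Laplace matrix, damped Jacobi with damping factor 1/2, and an error made of one low- and one high-frequency sine mode):

```python
import numpy as np

n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
k = np.arange(1, n + 1)

def mode(i):
    """i-th discrete sine mode on the mesh."""
    return np.sin(i * k * np.pi / (n + 1))

# error with one low-frequency and one high-frequency component
e = mode(2) + mode(50)

# five steps of damped Jacobi applied to the error: e <- e - (1/2) D^{-1} A e
for _ in range(5):
    e = e - 0.5 * (A @ e) / np.diag(A)

# remaining amplitude of each component (the sine modes are orthogonal)
low = abs(e @ mode(2)) / (mode(2) @ mode(2))
high = abs(e @ mode(50)) / (mode(50) @ mode(50))
```

After five sweeps the high-frequency component is essentially gone while the low-frequency one is almost untouched; this is why the remaining error looks smooth.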
36 Smoothing Property of the Richardson Iteration - I. The iteration is x^{(t+1)} = x^{(t)} + θ(b - A x^{(t)}). For the error e^{(t)} := x^{(t)} - x: e^{(t+1)} = e^{(t)} + θ(Ax - A x^{(t)}) = e^{(t)} - θ A_h e^{(t)} = [I - θ A_h] e^{(t)}. Eigenvalues & eigenvectors: let A_h ω_h^i = λ_i ω_h^i, i = 1, ..., N, with λ_min(A_h) = λ_1 ≤ ... ≤ λ_N = λ_max(A_h). The eigenvectors are orthonormal: ⟨ω_h^i, ω_h^j⟩ = 0 if i ≠ j, 1 if i = j. We therefore know the eigenvalues and eigenvectors of the iteration matrix: [I - θ A_h] ω_h^i = ω_h^i - θ λ_i ω_h^i = (1 - θ λ_i) ω_h^i.
37 Smoothing Property of the Richardson Iteration - II. We have e^{(t+1)} = [I - θ A_h] e^{(t)}, hence e^{(t)} = [I - θ A_h]^t e^{(0)}. Develop the initial error in the eigenvector basis: e^{(0)} = Σ_{i=1}^N ε_i ω_h^i. Error propagation: e^{(t)} = [I - θ A_h]^t e^{(0)} = Σ_{i=1}^N ε_i [I - θ A_h]^t ω_h^i = Σ_{i=1}^N ε_i (1 - θ λ_i)^t ω_h^i. Since the eigenvectors are orthonormal: ‖e^{(t)}‖^2 = ⟨e^{(t)}, e^{(t)}⟩ = Σ_{i,j=1}^N ε_i ε_j (1 - θ λ_i)^t (1 - θ λ_j)^t ⟨ω_h^i, ω_h^j⟩ = Σ_{i=1}^N ε_i^2 (1 - θ λ_i)^{2t}, using ⟨ω_h^i, ω_h^j⟩ = δ_ij.
38 Smoothing Property of the Richardson Iteration - III. We have ‖e^{(t)}‖^2 = Σ_{i=1}^N ε_i^2 (1 - θ λ_i)^{2t}. Convergence if |1 - θ λ_i| < 1 for all i, i.e. 0 < θ < 2/λ_max(A_h).
39 Smoothing Property of the Richardson Iteration - IV: Eigenvalues. A closer look at the eigenvalues for the 1D Laplace matrix, i = 1, ..., N: λ_i = 2 - 2 cos(iπ/(N+1)), ω_h^i = (sin(ikπ/(N+1)))_{k=1,...,N}. Plot: distribution of the eigenvalues.
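These formulas are easy to verify numerically; a sketch, assuming the unscaled stencil tridiag(-1, 2, -1) that matches the eigenvalue formula above:

```python
import numpy as np

n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplace stencil
k = np.arange(1, n + 1)

lam = [2 - 2 * np.cos(i * np.pi / (n + 1)) for i in range(1, n + 1)]
omega = [np.sin(i * k * np.pi / (n + 1)) for i in range(1, n + 1)]

# residuals of the eigenpair relation A omega_i = lambda_i omega_i
residuals = [np.linalg.norm(A @ omega[i] - lam[i] * omega[i]) for i in range(n)]
```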
40 Smoothing Property of the Richardson Iteration - V: Eigenvectors (plots of eigenvectors for several λ).
41 Smoothing Property of the Richardson Iteration - V: Eigenvalues. Eigenvalues of the smoothing operator: for θ = 1/λ_max(A_h), ρ_i := λ_i(I - θ A_h) = 1 - θ λ_i. For i = N/2: ...
42 Smoothing Property of the Richardson Iteration - VI. For i = N/2: ρ_{N/2} = 1 - (1/4)(2 - 2 cos(π(N/2)/(N+1))) ≈ 1 - (1/4)(2 - 2 cos(π/2)) = 1/2. For i > N/2: ρ_i < 1/2. Error contributions of the large eigenvalues: ẽ^{(0)} := Σ_{i=N/2}^N ε_i ω_h^i, ‖ẽ^{(t)}‖^2 = Σ_{i=N/2}^N ε_i^2 (1 - λ_i/4)^{2t} ≤ Σ_{i=N/2}^N ε_i^2 (1/2)^{2t}.
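The bound ρ_i ≤ 1/2 on the upper half of the spectrum can be checked directly; a sketch, where θ = 1/4 stands in for 1/λ_max since λ_max ≈ 4 for this stencil:

```python
import numpy as np

n = 64
theta = 1.0 / 4.0    # ~ 1/lambda_max for the 1D Laplace stencil tridiag(-1,2,-1)
lam = [2 - 2 * np.cos(i * np.pi / (n + 1)) for i in range(1, n + 1)]
rho = [abs(1 - theta * l) for l in lam]

rho_half = rho[n // 2 - 1]    # damping factor at i = N/2
rho_high = max(rho[n // 2:])  # worst damping factor over i > N/2
```

The low frequencies, in contrast, are barely damped at all (rho[0] is very close to 1), which is why Richardson is a smoother but not a solver.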
43 Smoothing Property of the Richardson Iteration - Summary. The Richardson iteration is a very bad and slow solver. The Richardson iteration is a very good smoother for high frequencies. High frequencies are all ω_h^i with i > N/2. High frequencies are only visible on the finest mesh Ω_h.
44 Hierarchical Approach. Smooth the high frequencies on the fine mesh Ω_h, N = 40 (a few steps). Transfer to the coarse mesh, N = 20. Treat the coarse problem on the mesh Ω_{2h}.
45 Hierarchical Approach. Low frequencies on the fine mesh Ω_h are high frequencies on the coarse mesh Ω_{2h}. Treat the coarse problem on the mesh Ω_{2h}.
46 Possible Benefits. Only a few (a fixed number, O(1)?) operations on the fine mesh Ω_h. The coarse mesh problem on Ω_{2h} is smaller (for the 2D model problem by a factor of 4): perhaps a direct solver is possible? A nested approach is possible: Ω_h, Ω_{2h}, Ω_{4h}, ..., Ω_H.
47 Analysis of the Smoothing Property of Richardson. Error contributions of the large eigenvalues: ẽ^{(0)} := Σ_{i=N/2}^N ε_i ω_h^i, ‖ẽ^{(t)}‖^2 = Σ_{i=N/2}^N ε_i^2 (1 - λ_i/4)^{2t} ≤ Σ_{i=N/2}^N ε_i^2 (1/2)^{2t}. Reduce the high-frequency error by ε: (1/2)^t ≤ ε, i.e. t = log(ε)/log(1/2). A fixed number of iterations, independent of h and of N!
48 The Two-Grid Iteration
49 The Basic 2-Grid Iteration. 2-grid method for solving A_h x_h = b_h. Initial guess x_h^{(0)}, iterate for t ≥ 0:
1. Smooth: y_h^{(t)} := S(A_h, b_h, x_h^{(t)})
2. Defect: d_h^{(t)} := b_h - A_h y_h^{(t)}
3. Restrict: d_H^{(t)} := R d_h^{(t)}
4. Coarse mesh solution: x_H^{(t)} := A_H^{-1} d_H^{(t)}
5. Prolongate: x_h^{(t+1)} := y_h^{(t)} + P x_H^{(t)}
50 The Basic 2-Grid Iteration - Smoothing. 1. Smooth: y_h^{(t)} := S(A_h, b_h, x_h^{(t)}). An easy and fast iterative scheme (Jacobi, Richardson, Gauss-Seidel), just a few steps, N_iter = 2-5. Smooths the high frequencies: ‖ẽ^{(t)}‖ ≤ ρ^t ‖ẽ^{(0)}‖, ρ < 1, for the components i > N/2.
51 The Basic 2-Grid Iteration - Restriction. 3. Restrict: d_H^{(t)} := R d_h^{(t)} ... see later ...
52 The Basic 2-Grid Iteration - Prolongation. 5. Prolongate: x_h^{(t+1)} := y_h^{(t)} + P x_H^{(t)}. Transfer the coarse solution x_H from Ω_H to x_h on Ω_h by simple interpolation. Matrix notation: P = ...
53 The Basic 2-Grid Iteration - Restriction. 3. Restrict: d_H^{(t)} := R d_h^{(t)}. Transfer the defect d_h from Ω_h to d_H on Ω_H. Use the adjoint of the prolongation: R = P^T. Example matrix notation: R = ...
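A sketch of these transfer operators in 1D, assuming linear interpolation for P and its scaled adjoint (the usual full-weighting convention) for R; the sizes are illustrative:

```python
import numpy as np

n_c = 3                      # coarse mesh points on Omega_H
n_f = 2 * n_c + 1            # fine mesh points on Omega_h

# prolongation P: linear interpolation Omega_H -> Omega_h
P = np.zeros((n_f, n_c))
for j in range(n_c):
    P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]

# restriction R as (scaled) adjoint of P: full weighting Omega_h -> Omega_H
R = 0.5 * P.T
```

Each row of R averages a coarse point's fine-mesh neighbors with weights 1/4, 1/2, 1/4.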
54 The Basic 2-Grid Iteration - Coarse Mesh Solution. 4. Coarse mesh solution: x_H^{(t)} := A_H^{-1} d_H^{(t)}. Direct solution of the system A_H x_H = d_H. For the 2D model problem N_H = N/4, so a solution with LR costs E_LR(N_H) = N_H^3 = (1/64) E_LR(N). Better, but still too much!
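Putting the five steps together, a compact sketch of the two-grid cycle in 1D (assumptions, not fixed by the slides: damped Jacobi as smoother S, linear interpolation P, full weighting R = 0.5 P^T, and the Galerkin coarse matrix A_H = R A_h P; all names are illustrative):

```python
import numpy as np

def laplace_1d(n):
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def transfer(n_c):
    """Linear interpolation P and full-weighting restriction R = 0.5 P^T."""
    P = np.zeros((2 * n_c + 1, n_c))
    for j in range(n_c):
        P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
    return P, 0.5 * P.T

def two_grid_step(A, b, x, A_c, P, R, nu=3):
    for _ in range(nu):                      # 1. smooth (damped Jacobi)
        x = x + 0.5 * (b - A @ x) / np.diag(A)
    d_c = R @ (b - A @ x)                    # 2. defect, 3. restrict
    x_c = np.linalg.solve(A_c, d_c)          # 4. coarse mesh solve (direct)
    return x + P @ x_c                       # 5. prolongate and correct

n_c = 15
A = laplace_1d(2 * n_c + 1)
P, R = transfer(n_c)
A_c = R @ A @ P                              # Galerkin coarse mesh matrix

rng = np.random.default_rng(0)
x_exact = rng.standard_normal(2 * n_c + 1)
b = A @ x_exact
x = np.zeros_like(x_exact)
for _ in range(10):
    x = two_grid_step(A, b, x, A_c, P, R)
```

A handful of cycles reduces the error by many orders of magnitude, even though each cycle does only a few cheap sweeps on the fine mesh.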
55 Two-Grid Iteration: Compact Notation I. Coarse mesh problem: A_H x_H = d_H. Write it as an iterative method on the fine mesh Ω_h: x_h^{(t+1)} = x_h^{(t)} + P A_H^{-1} R (b_h - A_h x_h^{(t)}) = [I - P A_H^{-1} R A_h] x_h^{(t)} + P A_H^{-1} R b_h, with B_CM := I - P A_H^{-1} R A_h. For the error: e_h^{(t+1)} = x_h^{(t+1)} - x_h = B_CM x_h^{(t)} + P A_H^{-1} R A_h x_h - x_h = B_CM e_h^{(t)}.
58 Two-Grid Iteration: Compact Notation II. Coarse mesh solution: e^{(t+1)} = B_CM e^{(t)}. Smoothing operation: e^{(t+1)} = B_SM e^{(t)}. Two-grid iteration: initial guess x^{(0)}, iterate e^{(t+1)} = B_SM^{ν_2} B_CM B_SM^{ν_1} e^{(t)} (ν_1 pre-smoothing and ν_2 post-smoothing steps).
59 Two-Grid Iteration: Convergence I. Analyze the iteration matrix B_TG = B_CM B_SM^ν. Coarse mesh correction: B_CM = I - P A_H^{-1} R A_h. Richardson iteration: B_SM = I - (1/λ_max) A_h. The two-grid iteration: B_TG = [I - P A_H^{-1} R A_h][I - (1/λ_max) A_h]^ν = [A_h^{-1} - P A_H^{-1} R] (approximation) · A_h [I - (1/λ_max) A_h]^ν (smoothing). Show: ‖B_TG‖ ≤ ρ(ν) = c ν^{-1} < 1 for ν > ν_0.
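The claimed h-independent contraction can be observed numerically; a 1D sketch with Richardson smoothing at θ = 1/4 ≈ 1/λ_max, linear interpolation, full weighting, and a Galerkin coarse matrix (all of these are illustrative choices, not fixed by the slides):

```python
import numpy as np

def two_grid_matrix(n_c, nu=3):
    """Iteration matrix B_TG = B_CM B_SM^nu for the 1D Laplace stencil."""
    n = 2 * n_c + 1
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    P = np.zeros((n, n_c))
    for j in range(n_c):
        P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
    R = 0.5 * P.T
    A_c = R @ A @ P
    B_cm = np.eye(n) - P @ np.linalg.solve(A_c, R @ A)   # coarse mesh correction
    B_sm = np.eye(n) - 0.25 * A                          # Richardson, theta = 1/4
    return B_cm @ np.linalg.matrix_power(B_sm, nu)

rho_coarse = max(abs(np.linalg.eigvals(two_grid_matrix(15))))
rho_fine = max(abs(np.linalg.eigvals(two_grid_matrix(63))))
```

Both spectral radii are well below 1 and nearly equal, i.e. the convergence rate does not deteriorate under mesh refinement.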
63 Two-Grid Iteration: Convergence Proof. Proof:
1. Split the error influences: ‖B_TG‖ ≤ ‖A_h^{-1} - P A_H^{-1} R‖ · ‖A_h [I - (1/λ_max) A_h]^ν‖.
2. Show that for the smoothing operator it holds: ‖A_h [I - (1/λ_max) A_h]^ν v‖ ≤ c_s ν^{-1} h^{-2} ‖v‖.
3. Show that for the coarse mesh approximation it holds: ‖(A_h^{-1} - P A_H^{-1} R) v‖ ≤ c_a h^2 ‖v‖.
4. Put it all together, with ρ(ν) := c_s c_a ν^{-1} < 1 for ν large enough.
67 Two-Grid Iteration: Convergence Proof I, the Smoothing Operator. Step 2: show that ‖A_h [I - (1/λ_max) A_h]^ν v‖ ≤ c_s ν^{-1} h^{-2} ‖v‖. This has been done here for Richardson and the Laplace matrix. It must be done separately for every matrix (partial differential equation)! A separate analysis is necessary for different smoothers (Jacobi, Gauss-Seidel, Richardson, ILU). Difficult for complex equations and complex smoothing operators.
68 Two-Grid Iteration: Convergence Proof II, the Coarse Mesh Approximation Operator. Step 3: show that ‖(A_h^{-1} - P A_H^{-1} R) v‖ ≤ c_a h^2 ‖v‖. We must use properties of the partial differential equation and the discretization! Use a priori error estimates: ‖u_h - u‖ ≤ c h^2 ‖f‖, ‖u_H - u‖ ≤ c H^2 ‖f‖. Then, using the triangle inequality: ‖u_h - u_H‖ ≤ ‖u_h - u‖ + ‖u_H - u‖ ≤ c (h^2 + H^2) ‖f‖. Assume geometric mesh growth, h < H ≤ C h, so that ‖u_h - u_H‖ ≤ c (1 + C^2) h^2 ‖f‖. Use the special v := A_h u_h = f_h above: then A_h^{-1} v = u_h and P A_H^{-1} R v = u_H. Finally: ‖(A_h^{-1} - P A_H^{-1} R) v‖ = ‖u_h - u_H‖ ≤ c (1 + C^2) h^2 ‖f‖, i.e. c_a = c (1 + C^2).
72 Two-Grid Iteration: Convergence Proof III, Putting It Together. Step 4, combining Step 2 and Step 3: ‖B_TG‖ ≤ ‖A_h^{-1} - P A_H^{-1} R‖ · ‖A_h [I - (1/λ_max) A_h]^ν‖ ≤ c (1 + C^2) h^2 · c_s ν^{-1} h^{-2} = c_s c (1 + C^2) ν^{-1}. For ν > ν_0: c_s c (1 + C^2) ν^{-1} < 1. q.e.d.
73 Two-Grid Iteration: Numerical Complexity. Good convergence: to reduce the error by ε, t = O(log(1/ε)) steps. But large effort per step: E_step = E_smooth + E_coarse mesh = 5νN + C (N/4)^3 = O(N^3), since the coarse mesh problem is still solved directly.
More informationThe Triangle Algorithm: A Geometric Approach to Systems of Linear Equations
: A Geometric Approach to Systems of Linear Equations Thomas 1 Bahman Kalantari 2 1 Baylor University 2 Rutgers University July 19, 2013 Rough Outline Quick Refresher of Basics Convex Hull Problem Solving
More informationRobust approximation error estimates and multigrid solvers for isogeometric multi-patch discretizations
www.oeaw.ac.at Robust approximation error estimates and multigrid solvers for isogeometric multi-patc discretizations S. Takacs RICAM-Report 2017-32 www.ricam.oeaw.ac.at Robust approximation error estimates
More informationOn the best approximation of function classes from values on a uniform grid in the real line
Proceedings of te 5t WSEAS Int Conf on System Science and Simulation in Engineering, Tenerife, Canary Islands, Spain, December 16-18, 006 459 On te best approximation of function classes from values on
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationExercise 19 - OLD EXAM, FDTD
Exercise 19 - OLD EXAM, FDTD A 1D wave propagation may be considered by te coupled differential equations u x + a v t v x + b u t a) 2 points: Derive te decoupled differential equation and give c in terms
More informationMath 577 Assignment 7
Math 577 Assignment 7 Thanks for Yu Cao 1. Solution. The linear system being solved is Ax = 0, where A is a (n 1 (n 1 matrix such that 2 1 1 2 1 A =......... 1 2 1 1 2 and x = (U 1, U 2,, U n 1. By the
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.
More informationCS522 - Partial Di erential Equations
CS5 - Partial Di erential Equations Tibor Jánosi April 5, 5 Numerical Di erentiation In principle, di erentiation is a simple operation. Indeed, given a function speci ed as a closed-form formula, its
More information9.1 Preconditioned Krylov Subspace Methods
Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete
More informationCopyright c 2008 Kevin Long
Lecture 4 Numerical solution of initial value problems Te metods you ve learned so far ave obtained closed-form solutions to initial value problems. A closedform solution is an explicit algebriac formula
More informationLecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines
Lecture 5 Interpolation II Introduction In te previous lecture we focused primarily on polynomial interpolation of a set of n points. A difficulty we observed is tat wen n is large, our polynomial as to
More informationMaster Thesis Literature Study Presentation
Master Thesis Literature Study Presentation Delft University of Technology The Faculty of Electrical Engineering, Mathematics and Computer Science January 29, 2010 Plaxis Introduction Plaxis Finite Element
More informationAMG for a Peta-scale Navier Stokes Code
AMG for a Peta-scale Navier Stokes Code James Lottes Argonne National Laboratory October 18, 2007 The Challenge Develop an AMG iterative method to solve Poisson 2 u = f discretized on highly irregular
More information2.11 That s So Derivative
2.11 Tat s So Derivative Introduction to Differential Calculus Just as one defines instantaneous velocity in terms of average velocity, we now define te instantaneous rate of cange of a function at a point
More informationA First-Order System Approach for Diffusion Equation. I. Second-Order Residual-Distribution Schemes
A First-Order System Approac for Diffusion Equation. I. Second-Order Residual-Distribution Scemes Hiroaki Nisikawa W. M. Keck Foundation Laboratory for Computational Fluid Dynamics, Department of Aerospace
More informationNumerical Methods I Eigenvalue Problems
Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture
More informationLecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.
Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationClick here to see an animation of the derivative
Differentiation Massoud Malek Derivative Te concept of derivative is at te core of Calculus; It is a very powerful tool for understanding te beavior of matematical functions. It allows us to optimize functions,
More informationIntroduction to Scientific Computing
(Lecture 5: Linear system of equations / Matrix Splitting) Bojana Rosić, Thilo Moshagen Institute of Scientific Computing Motivation Let us resolve the problem scheme by using Kirchhoff s laws: the algebraic
More informationAdaptive algebraic multigrid methods in lattice computations
Adaptive algebraic multigrid methods in lattice computations Karsten Kahl Bergische Universität Wuppertal January 8, 2009 Acknowledgements Matthias Bolten, University of Wuppertal Achi Brandt, Weizmann
More informationRegularized Regression
Regularized Regression David M. Blei Columbia University December 5, 205 Modern regression problems are ig dimensional, wic means tat te number of covariates p is large. In practice statisticians regularize
More informationNumerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems
Applied Matematics, 06, 7, 74-8 ttp://wwwscirporg/journal/am ISSN Online: 5-7393 ISSN Print: 5-7385 Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for
More informationSIMG-713 Homework 5 Solutions
SIMG-73 Homework 5 Solutions Spring 00. Potons strike a detector at an average rate of λ potons per second. Te detector produces an output wit probability β wenever it is struck by a poton. Compute te
More informationNumerical Programming I (for CSE)
Technische Universität München WT 1/13 Fakultät für Mathematik Prof. Dr. M. Mehl B. Gatzhammer January 1, 13 Numerical Programming I (for CSE) Tutorial 1: Iterative Methods 1) Relaxation Methods a) Let
More informationLecture 18 Classical Iterative Methods
Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,
More informationTHE STURM-LIOUVILLE-TRANSFORMATION FOR THE SOLUTION OF VECTOR PARTIAL DIFFERENTIAL EQUATIONS. L. Trautmann, R. Rabenstein
Worksop on Transforms and Filter Banks (WTFB),Brandenburg, Germany, Marc 999 THE STURM-LIOUVILLE-TRANSFORMATION FOR THE SOLUTION OF VECTOR PARTIAL DIFFERENTIAL EQUATIONS L. Trautmann, R. Rabenstein Lerstul
More informationOUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU
Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative
More informationMAT244 - Ordinary Di erential Equations - Summer 2016 Assignment 2 Due: July 20, 2016
MAT244 - Ordinary Di erential Equations - Summer 206 Assignment 2 Due: July 20, 206 Full Name: Student #: Last First Indicate wic Tutorial Section you attend by filling in te appropriate circle: Tut 0
More information1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x)
Calculus. Gradients and te Derivative Q f(x+) δy P T δx R f(x) 0 x x+ Let P (x, f(x)) and Q(x+, f(x+)) denote two points on te curve of te function y = f(x) and let R denote te point of intersection of
More information