Multigrid Methods for Obstacle Problems


Multigrid Methods for Obstacle Problems

by

Cunxiao Wu

A Research Paper presented to the University of Waterloo in partial fulfillment of the requirement for the degree of Master of Mathematics in Computational Mathematics

Supervisor: Prof. Justin W.L. Wan

Waterloo, Ontario, Canada, 2013

© Cunxiao Wu 2013

I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public.

Abstract

Obstacle problems can be posed as elliptic variational inequalities, complementarity problems and Hamilton-Jacobi-Bellman (HJB) partial differential equations (PDEs). In this paper, we propose a multigrid algorithm based on the full approximation scheme (FAS) for solving the membrane constrained obstacle problem and the minimal surface obstacle problem in the formulation of HJB equations. A special coarse grid operator based on the Galerkin operator is proposed for the membrane constrained obstacle problem. Compared with the standard FAS using the direct discretization coarse grid operator, the FAS with the proposed operator converges faster. Due to the nonlinearity of the minimal surface operator, the Galerkin operator is not accurate for the minimal surface obstacle problem, so we introduce the direct discretization operator for that problem. A special prolongation operator based on bilinear interpolation is proposed to interpolate functions from the coarse grid to the fine grid. At the boundary between the active set and the inactive set, the proposed prolongation operator can capture the active grid points and assign accurate values at these points. We demonstrate the fast convergence of the proposed multigrid method for solving obstacle problems by comparing it with other multigrid methods.

Acknowledgements

I would like to thank my supervisor Prof. Justin W.L. Wan for his help and patience throughout the year. I also want to thank all of my classmates for their support.

Dedication

This is dedicated to my lovely mom and my sweet dad.

Table of Contents

List of Tables
List of Figures

1 Introduction
1.1 Overview: Methods to Solve Obstacle Problems
1.2 Objective

2 Obstacle Problem
2.1 Deformation of a Membrane Constrained by an Obstacle
2.2 Elliptic Variational Inequality Formulation of Membrane Constrained Obstacle Problem
2.3 Linear Complementarity Formulation of Membrane Constrained Obstacle Problem
2.4 HJB Equation Formulation of Membrane Constrained Obstacle Problem
2.5 Minimal Surfaces with Obstacles
2.6 Finite Difference Discretization of Membrane Constrained Obstacle Problems
2.7 Finite Difference Discretization of Minimal Surface Obstacle Problem
2.8 HJB Equation of Discretized Obstacle Problems

3 Multigrid Methods
3.1 Pre-smoothing
3.1.1 Jacobi Iteration
3.1.2 Gauss-Seidel Iteration
3.2 Coarse Grid Correction
3.2.1 Multigrid Operators
3.2.2 Coarse Grid Correction
3.3 Post-smoothing
3.4 Two-grid V-cycle
3.5 Multigrid V-cycle
3.6 Other Multigrid Cycles
3.6.1 W-cycle
3.6.2 F-cycle

4 Multigrid Method for Obstacle Problem
4.1 Model Problem
4.2 Nonlinear Problem
4.3 Operators for FAS
4.3.1 Pre-Smoothing and Post-Smoothing
4.3.2 Restriction Operator R and Prolongation Operator P
4.3.3 Nonlinear Operator N_H on Coarse Grid
4.4 FAS Applied on Linear Problem

5 Numerical Results
5.1 1D Block Obstacle Problem with Laplacian Operator
5.2 1D Dome Obstacle Problem with Laplacian Operator
5.3 2D Square Obstacle Problem with Laplacian Operator
5.4 2D Dome Obstacle Problem with Laplacian Operator
5.5 2D Dome Obstacle Problem with Minimal Surface Operator

6 Conclusion

References

List of Tables

3.1 The computational work of one complete multigrid cycle
5.1 Iterations of the FAS for the 1D block problem
5.2 Iterations of the FAS for the 1D dome problem
5.3 Iterations of the FAS for the 2D square problem
5.4 Iterations of the FAS for the 2D dome problem
5.5 Comparison of iterative errors
5.6 Comparison of the convergence rate between the F-cycle and the FAS
5.7 Iterations of the FAS for the 2D dome problem with the minimal surface operator
5.8 Residuals after each iteration of different methods
5.9 Convergence rates for the domain decomposition method in [2] (Column 2 to Column 5) and the FAS (last Column)

List of Figures

2.1 An obstacle problem example, (left) the obstacle, (right) the solution
3.1 Illustration of high frequency errors reduced by stationary iterative methods. (a) Initial errors with some random initial value, (b) errors after pre-smoother
3.2 Illustration of a hierarchy of grids starting with grid size h
3.3 A fine grid with symbols presenting the bilinear interpolation in (3.22) used for the interpolation from the coarse grid
3.4 Structure of various multigrid cycles for different numbers of grids (smoothing, exact solution, \ fine grid to coarse grid, / coarse grid to fine grid)
3.5 Illustration of both high and low frequency errors reduced by the coarse grid correction and the post-smoothing, (a) errors after the coarse grid correction, (b) errors after the post-smoothing
4.1 1D obstacle problem example to show the drawback of the linear interpolation
5.1 Pictures of different obstacles
5.2 1D block obstacle result, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the obstacle and the numerical solution
5.3 1D dome obstacle result, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the numerical solution and the obstacle
5.4 2D square obstacle result, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the numerical solution and the obstacle

5.5 2D dome obstacle result, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the numerical solution and the obstacle
5.6 2D circle obstacle with minimal surface operator, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the numerical solution and the obstacle

Chapter 1

Introduction

The obstacle problem is the problem of finding the equilibrium position of an elastic membrane that has a fixed boundary and is constrained to lie below and/or above given obstacles. The problem can be posed as a partial differential equation (PDE). It is regarded as a model example in the study of elliptic variational inequalities and free boundary problems. Many problems in real life, such as porous flow through a dam, the minimal surface over an obstacle, American options and the elastic-plastic cylinder, can be reformulated as elliptic variational inequalities [5][20]. A free boundary problem is a partial differential equation that has to be solved for both an unknown function and an unknown domain [7]. An instance of the free boundary problem is the melting of ice. Given a block of ice, one can solve the heat equation with an appropriate initial guess and given boundary conditions. However, if the temperature of a specific region is greater than the melting point of ice, this domain of ice will melt and be occupied by liquid water. The solution to the PDE gives the dynamic location of the interface between the ice and the liquid. We call these obstacle type problems. Obstacle problems can also be formulated as linear complementarity problems and Hamilton-Jacobi-Bellman (HJB) equations []. However, it is generally difficult to compute the exact solutions of these problems, so we focus on numerical solutions in this paper. The way to turn the continuous obstacle problem into a discrete problem is through discretization. There are several techniques that may be used to discretize a PDE, for instance, finite difference, finite element and finite volume methods [4]. Finite difference methods are widely used, since they are intuitive and easy to implement. Finite element methods are commonly used with domain decomposition and subspace correction methods.

1.1 Overview: Methods to Solve Obstacle Problems

Many methods have been introduced to solve elliptic variational inequality PDEs. Existing approaches include projected relaxation methods, classical stationary iterative methods, conjugate gradient methods, multigrid methods and so on [][6][7]. The classical stationary iterative methods are easy to implement, but their convergence is slow when the problem is large. Projected relaxation methods are regarded as standard techniques to solve elliptic variational inequalities [8][9]. They are known to be easy to implement and convergent. However, the drawback of this class of methods is that it depends strongly on the choice of the relaxation parameter and has a slow asymptotic convergence rate. Various forms of preconditioned conjugate gradient (PCG) algorithms for solving nonlinear variational inequalities are presented in [6]. The conjugate gradient (CG) method is an iterative method designed to solve certain systems of linear or nonlinear equations; PCG accelerates its convergence through preconditioning. It is more efficient than the projected relaxation method. However, the convergence rate still depends on the size of the problem; when the grid is refined, a PCG method may not be very efficient. A multigrid method is introduced in [] to solve the finite difference discretized PDE in the form of the minimal surface obstacle problem. Two phases are introduced in that paper. The aim of the first phase, in which the computations are not expensive, is to get a good initial guess for the iterative second phase. In the second phase, the problem is solved by a W-cycle multigrid method, and a cutting function is applied after the coarse grid correction. The convergence is proven to be better than PCG. However, the application of the cutting function and of phase one leads to expensive computations overall. The elliptic variational inequalities can be reformulated as linear complementarity problems, and it is easy to write the linear complementarity problems as PDEs.
A multigrid method, namely the projected full approximation scheme (PFAS), is proposed in [3] to solve linear complementarity problems arising from free boundary problems. The proposed multigrid method is based on the full approximation scheme (FAS), which is often used for solving nonlinear PDEs. The multigrid method is built on a generalization of the projected SOR. Two further algorithms established on PFAS are introduced, PFASMD and PFMG, both of which are faster than PFAS. These methods are better than the method in [] in the sense of the convergence rate. The multigrid PFAS for solving the American style option problem in the linear complementarity form is introduced in [7], where a Fourier analysis of the smoother is provided.

The American style option problem is an obstacle type problem which can be formulated as a PDE. The F-cycle multigrid method is applied, and a comparison between the F-cycle and the V-cycle for solving linear complementarity problems is shown. The F-cycle is faster than the V-cycle, as mentioned in [7]. However, the F-cycle requires more computation in each iteration; the additional steps are recombination operations on the fine grid, so it is relatively expensive. The elliptic variational inequalities can also be reformulated as Hamilton-Jacobi-Bellman (HJB) equations. A multigrid algorithm which involves an outer and an inner loop is developed in [2]. The active and inactive sets of all grid levels are computed and stored in the outer loop. The W-cycle FAS multigrid method is applied to solve a linear PDE in the inner loop, which is nested in the outer loop. An iterative step is adopted to give a good initial guess, starting from the coarsest grid up to the fine grid. Both [] and [2] use the active and inactive set strategy to separate the obstacles from the solution of the direct PDEs. The computational complexity largely depends on the stopping criterion for the inner loop; if the stopping criterion is not chosen wisely, the computations can be very expensive. A multilevel domain decomposition and subspace correction algorithm is applied in [2] to solve the HJB equations for obstacle problems over a minimal surface. A special interpolator is chosen to solve the obstacle problem over a convex space. A proof of the linear rate of convergence of the proposed algorithms is provided. A so-called monotone multigrid method and a truncated version are introduced in [3] to solve the obstacle problem. The convergence rate of the proposed method for solving the minimal surface obstacle problem is proved to have the same efficiency as the classical multigrid method for the unconstrained case. The finite element method is used for discretization.

1.2 Objective

We will consider two kinds of obstacle problems.
They share the same boundary conditions and obstacle constraints, but have different partial differential operators: one has the Laplacian operator, and the other has what we call the minimal surface operator. The obstacle problems can be reformulated as HJB equations, and the equations are discretized by a finite difference method. The goal of this research paper is to develop an efficient multigrid method for solving the discrete HJB equations. The main results of this research paper are:

- The FAS multigrid method with the Galerkin coarse grid operator is introduced and applied to the membrane constrained obstacle problem.

- The FAS multigrid method with the direct discretization operator is presented and used to solve the minimal surface obstacle problem.
- The performance of the FAS method for solving both obstacle problems is analyzed.

This research paper is arranged as follows: the obstacle problems, their mathematical derivations and their discretization are given in Chapter 2. Chapter 3 presents multigrid methods, in which the V-cycle is shown and the corresponding algorithm is given; a linear problem is considered as an example to introduce multigrid methods. The FAS V-cycle for solving obstacle problems is presented in Chapter 4, where the algorithms for the two kinds of obstacle problems are introduced. Numerical experiments are discussed in Chapter 5, and we summarize the work of this research paper in Chapter 6.

Chapter 2

Obstacle Problem

The obstacle problem discussed in this research paper was originally introduced for finding the equilibrium position of a membrane above the constraint of an obstacle, with fixed boundary. In classical linear elasticity theory, the obstacle problem is regarded as one of the simplest unilateral problems, and also as a geometrical problem [20]. Obstacle problems can be modeled as elliptic variational inequalities and free boundary problems [5]. Many variational inequality problems, for example the elastic-plastic torsion problem and the cylindrical bar, are regarded as obstacle type problems by means of the membrane analogy. Another example is a cavitation problem in the theory of lubrication. The solution to the obstacle problem can be separated into two different regions. One region is where the solution equals the obstacle value, also known as the active set, and the other region is where the solution is above the obstacle. The interface between the two regions is called the free boundary, so the obstacle problem is also studied as a free boundary problem [5]. Figure 2.1 shows an example of the obstacle (left) and the solution to the obstacle problem (right). In this chapter, we will derive the formulations of two kinds of obstacle problems: the membrane constrained and the minimal surface obstacle problems.

2.1 Deformation of a Membrane Constrained by an Obstacle

A membrane considered in classical elasticity theory is a thin plate which has no resistance to bending, but acts in tension [20]. We assume a membrane in a domain Ω ⊂ R² in the plane Oxy is equally stretched by a uniform tension, and also loaded with an external

Figure 2.1: An obstacle problem example, (left) the obstacle, (right) the solution

force f, which is also uniformly distributed. We assume each point (x, y) of the membrane is displaced by an amount u(x, y) perpendicular to the domain Ω. The boundary of the domain Ω is denoted by ∂Ω. For simplicity, from now on, we use X to stand for the two dimensional point (x, y). The membrane problem considered here is defined on a convex domain, with the function u(x) and boundary conditions u|_∂Ω = g(x), with some obstacle ψ(x) in Ω. To be more precise, we are given

(a) a convex domain Ω,
(b) u(x) = g(x) on the boundary ∂Ω; to be specific, g(x) = 0 is adopted in this paper,
(c) an obstacle function ψ(x).

According to the position of the obstacle, we can classify the obstacle problem into three classes: (a) the lower obstacle problem, in which the solution u(x) lies above the obstacle, (b) the upper obstacle problem, in which the solution u(x) lies below the obstacle, and (c) the two sided obstacle problem, in which the solution u(x) lies between the two obstacles. In this chapter, we will take the lower obstacle problem as an example to derive the mathematical formulations. Here we define a closed convex set K to be

    K = {u ∈ V : u|_∂Ω = g(x), u ≥ ψ},    (2.1)

where V stands for the lower obstacle problem space, which is the H¹ space considered here. In the following, we will give three different formulations of the membrane constrained obstacle problem: an elliptic variational inequality, a linear complementarity problem and an HJB equation.

2.2 Elliptic Variational Inequality Formulation of Membrane Constrained Obstacle Problem

The potential energy of the membrane can be expressed as

    D(u) = (1/2) ∫_Ω |∇u|² dx,    (2.2)

where u ∈ K, and ∇u stands for the gradient of u. There may be external forces f, whose energy is given by

    F(u) = ∫_Ω f u dx.    (2.3)

Together, (2.2) and (2.3) give the total potential energy E = D − F, which is

    E(u) = (1/2) ∫_Ω |∇u|² dx − ∫_Ω f u dx.    (2.4)

The membrane constrained problem is to find u such that it minimizes the potential energy,

    min_u E(u),  u ∈ K,    (2.5)

subject to the obstacle constraint given by the convex set K. In order to find the solution, we need u to satisfy the Euler equation of (2.5), which turns out to be Poisson's equation,

    −Δu ≡ −(u_xx + u_yy) = f.    (2.6)

Let ū be the solution to (2.5), that is,

    E(ū) ≤ E(v),  ∀v ∈ K.    (2.7)

Since K is a convex set, ū + λ(v − ū) ∈ K for λ ∈ [0, 1]. Define Φ(λ) ≡ E(ū + λ(v − ū)). When λ = 0, Φ(λ) is minimized, and hence Φ′(0) ≥ 0. It follows that

    Φ′(0) = lim_{λ→0⁺} (1/λ) {E(ū + λ(v − ū)) − E(ū)} ≥ 0.    (2.8)

Substituting (2.4) into (2.8), we obtain

    Φ′(0) = ∫_Ω ∇ū · ∇(v − ū) dx − ∫_Ω f (v − ū) dx ≥ 0.    (2.9)

Thus, if ū is a solution to (2.5) (or (2.7)), then ū also satisfies

    ū ∈ K : ∫_Ω ∇ū · ∇(v − ū) dx ≥ ∫_Ω f (v − ū) dx,  ∀v ∈ K,    (2.10)

which is called an elliptic variational inequality.

Another way to see that the solution to (2.10) is also the solution to (2.5) (or (2.7)) is that if u satisfies (2.10), then for v ∈ K,

    E(v) = E(u + (v − u))
         = E(u) + ∫_Ω ∇u · ∇(v − u) dx − ∫_Ω f (v − u) dx + (1/2) ∫_Ω |∇(v − u)|² dx.    (2.11)

Since u satisfies (2.10), ∫_Ω ∇u · ∇(v − u) dx − ∫_Ω f (v − u) dx ≥ 0. Hence we have

    E(v) ≥ E(u) + (1/2) ∫_Ω |∇(v − u)|² dx ≥ E(u).    (2.12)

It implies that u is also a solution to (2.5) (or (2.7)).

2.3 Linear Complementarity Formulation of Membrane Constrained Obstacle Problem

In order to derive the linear complementarity formulation, we define two sets

    K₁ = {u ∈ K : u(x) = ψ(x)},    (2.13)

and

    K₂ = {u ∈ K : u(x) > ψ(x)}.    (2.14)

Take an arbitrary φ supported where ū > ψ, such that φ(x) = 0 on ∂Ω. There exists ε₀ > 0 such that v = ū ± εφ ∈ K for 0 < ε ≤ ε₀. Substituting v = ū ± εφ into (2.10), we get

    ∫_Ω ∇ū · ∇(±εφ) dx − ∫_Ω f (±εφ) dx ≥ 0.    (2.15)

Rearranging (2.15), we obtain

    ±ε ∫_Ω (∇ū · ∇φ − fφ) dx ≥ 0.    (2.16)

Integrating by parts, (2.16) becomes

    ±ε { [ (∂ū/∂n) φ ]_∂Ω + ∫_Ω (−Δū − f) φ dx } ≥ 0.    (2.17)

Since φ(x) = 0 on ∂Ω, we have

    ±ε ∫_Ω (−Δū − f) φ dx ≥ 0.    (2.18)

Since the inequality holds for both +ε and −ε, we can conclude that

    −Δū = f,  where ū(x) > ψ(x).    (2.19)

In the other case, ū(x) is not strictly greater than ψ(x). We let φ ≥ 0 with φ = 0 on ∂Ω, and let v = ū + φ ∈ K. Then (2.10) becomes

    ∫_Ω ∇ū · ∇φ dx − ∫_Ω f φ dx ≥ 0.    (2.20)

Following the same algebra from (2.15) to (2.18), we have

    ∫_Ω (−Δū − f) φ dx ≥ 0,    (2.21)

for any φ ≥ 0. This implies that

    −Δū − f ≥ 0,  where ū(x) ≥ ψ(x).    (2.22)

From (2.19) and (2.22), we can conclude that −Δū − f ≥ 0 holds almost everywhere in Ω, except on ∂Ω, where the values are given by the Dirichlet boundary conditions. The two cases (2.19) and (2.22) can be written as a complementarity problem:

    ū − ψ ≥ 0,
    −Δū − f ≥ 0,    (2.23)
    (ū − ψ)(−Δū − f) = 0.
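The complementarity system (2.23) can be solved, after discretization, by projected relaxation. The following is a minimal 1D sketch of projected Gauss-Seidel (our own illustration with a hypothetical parabolic obstacle, not an algorithm from this paper), which checks the three conditions of (2.23) componentwise at the end:

```python
# Minimal 1D sketch (ours, not the paper's method): projected Gauss-Seidel for
# the discrete complementarity system of (2.23) with -u'' = f, f = 0, zero
# Dirichlet boundary values and a lower obstacle psi.
import numpy as np

N = 63
h = 1.0 / (N + 1)
x = np.arange(1, N + 1) * h
psi = 0.8 - 8.0 * (x - 0.5) ** 2      # hypothetical parabolic obstacle
f = np.zeros(N)

u = np.maximum(psi, 0.0)              # feasible initial guess
for _ in range(20000):
    for i in range(N):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < N - 1 else 0.0
        gs = 0.5 * (left + right + h * h * f[i])  # unconstrained GS update
        u[i] = max(gs, psi[i])                    # project back onto u >= psi

# Check the three complementarity conditions of (2.23) componentwise.
up = np.concatenate(([0.0], u, [0.0]))
res = -(up[:-2] - 2.0 * u + up[2:]) / h**2 - f    # discrete -u'' - f
assert np.all(u - psi >= 0.0)
assert np.all(res >= -1e-8)
assert np.max(np.abs((u - psi) * res)) < 1e-6
```

The projection step is what enforces the constraint ū ≥ ψ; on the active set the residual stays nonnegative while the solution sits exactly on the obstacle.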

2.4 HJB Equation Formulation of Membrane Constrained Obstacle Problem

Equation (2.23) implies that if ū > ψ, or ū − ψ > 0, then −Δū − f = 0. Otherwise, if ū − ψ = 0, then −Δū − f ≥ 0. Hence, the linear complementarity formulation (2.23) is equivalent to

    min(−Δū − f, ū − ψ) = 0,  on Ω,    (2.24)

which is called a Hamilton-Jacobi-Bellman (HJB) equation. Here, ū ∈ K, where K is defined in (2.1). This formulation is only for the lower obstacle problem. For the upper obstacle problem, the corresponding HJB equation is given by

    max(−Δū − f, ū − ψ₂) = 0,  on Ω.    (2.25)

Here, ū ∈ K_u, where K_u = {u ∈ V : u ≤ ψ₂}, and V is the H¹ space. For the two sided obstacle problem, we assume ū ∈ K_two = {u ∈ V : ψ₁ ≤ u ≤ ψ₂}. The HJB equation is given by

    max_{1≤µ≤4} [L^µ ū − f^µ] = 0,    (2.26)

where

    L^µ u = [sgn(u − ψ_µ)](−Δu),  f^µ = [sgn(u − ψ_µ)] f,  µ = 1, 2,
    L^µ u = (−1)^µ u,  f^µ = (−1)^µ ψ_{µ−2},  µ = 3, 4.    (2.27)

2.5 Minimal Surfaces with Obstacles

In this section, we introduce another form of the obstacle problem, i.e., the minimal surface with obstacles. It describes the equilibrium shape of a mass bounded by a surface. We assume that the superficial tension exerts a pressure that is proportional to the mean curvature at each point. The pressure applied is the same in all directions, such that every point on the surface shares the constant mean curvature. The minimal surface problem is of great interest in mathematical studies [20]. Here, we only consider a special case, namely the membrane problem in which the minimal surface can be represented as a function u = u(x) over a smooth subset Ω of R². The boundary condition is again given by g(x) = 0 on ∂Ω.

We assume that the potential energy of the membrane is proportional to its area, which is given by

    ∫_Ω √(1 + |∇u|²) dx,  u ∈ K.    (2.28)

The minimal surface obstacle problem is to find the minimal potential energy of the membrane, and can be stated in the form

    u ∈ K : ∫_Ω √(1 + |∇u|²) dx ≤ ∫_Ω √(1 + |∇v|²) dx,  ∀v ∈ K.    (2.29)

The minimal surface obstacle problem is related to the membrane constrained obstacle problem. Using a Taylor expansion, the integrand of (2.28) can be written as

    √(1 + |∇u|²) = 1 + (1/2)|∇u|² + O(|∇u|⁴).    (2.30)

For small ∇u, the O(|∇u|⁴) term can be discarded. The minimization of 1 + (1/2)|∇u|² is the same as minimizing (1/2)|∇u|², since the constant term does not affect the minimization. By dropping the high order term O(|∇u|⁴), the minimal surface problem becomes the membrane constrained obstacle problem in (2.5). We can also apply external forces f here, whose energy is given in (2.3). Similar to the derivation in Section 2.2, (2.29) can be formulated as a variational inequality:

    u ∈ K : ∫_Ω (∇u · ∇(v − u)) / √(1 + |∇u|²) dx − ∫_Ω f (v − u) dx ≥ 0,  ∀v ∈ K.    (2.31)

With the same reasoning as in (2.19) and (2.22), we conclude that u must satisfy the following equations:

    Au ≡ −∇ · ( ∇u / √(1 + |∇u|²) ) = f,  for u(x) > ψ(x),
    Au ≡ −∇ · ( ∇u / √(1 + |∇u|²) ) ≥ f,  for u(x) ≥ ψ(x).    (2.32)

The HJB equations for the minimal surface problem can be obtained by replacing −Δu and L^µ in (2.24), (2.25) and (2.26). More precisely, the equations for the lower and upper minimal surface obstacle problems are

    min(Aū − f, ū − ψ₁) = 0,  on Ω,
    max(Aū − f, ū − ψ₂) = 0,  on Ω,    (2.33)

respectively. For the two sided minimal surface obstacle problem, the corresponding HJB equation is given by

    max_{1≤µ≤4} [G^µ ū − f^µ] = 0,    (2.34)

where

    G^µ u = [sgn(u − ψ_µ)](Au),  f^µ = [sgn(u − ψ_µ)] f,  µ = 1, 2,
    G^µ u = (−1)^µ u,  f^µ = (−1)^µ ψ_{µ−2},  µ = 3, 4.    (2.35)

The term Au is defined in (2.32) for all three of these HJB equations.

The minimal surface obstacle problem is related to the membrane constrained obstacle problem, as mentioned above. However, the minimal surface operator A in (2.32) and the Laplacian operator are quite different. The Laplacian operator does not depend on the function u; it is a linear operator. But the minimal surface operator A in (2.32) is nonlinear and depends strongly on u.

2.6 Finite Difference Discretization of Membrane Constrained Obstacle Problems

It is difficult to solve the obstacle problem in HJB form analytically, so we will consider numerical solutions based on a finite difference discretization. Finite difference is a numerical method that approximates derivatives of a function by algebraic formulas derived from Taylor polynomials. We first introduce some notation. Consider Poisson's equation

    −u_xx − u_yy = f  in Ω = (0, 1)².    (2.36)

Denote the grid points by (x_i, y_j), where x_i = iΔx, y_j = jΔy, 0 ≤ i, j ≤ N + 1, and Δx = Δy = h = 1/(N + 1) is the grid size. Let u_{i,j} be an approximation to u(x_i, y_j). Then a finite difference discretization of (2.36) can be written as

    −(u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² − (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/h² = f_{i,j},  1 ≤ i, j ≤ N.    (2.37)

Thus, we have a set of N² linear equations. We can rewrite the linear system with an N² × N² matrix A^M as

    A^M = (1/h²) ·
        [ D  B          ]
        [ B  D  B       ]
        [    B  D  B    ]    (2.38)
        [       …  …  … ]
        [          B  D ]

where D and B are both N × N matrices defined as

    D = [  4 −1          ]
        [ −1  4 −1       ]    (2.39)
        [     …  …  …    ]
        [         −1   4 ]

and

    B = −I.    (2.40)

The superscript M in A^M here stands for the membrane constrained obstacle problem.

2.7 Finite Difference Discretization of Minimal Surface Obstacle Problem

Similarly, we apply a finite difference method to discretize the operator in (2.32):

    A^{MS} u_{i,j} = AC·u_{i,j} + AW·u_{i−1,j} + AE·u_{i+1,j} + AS·u_{i,j−1} + AN·u_{i,j+1}.    (2.41)

The superscript MS in A^{MS} here stands for the minimal surface obstacle problem. AC, AW, AE, AS and AN can be understood as the weights of the stencil at the center, west,

east, south and north, respectively. They are given by

    AW = −(1/(2h²)) [ 1/√(1 + ((u_{i,j} − u_{i−1,j})/h)² + ((u_{i−1,j} − u_{i−1,j−1})/h)²)
                    + 1/√(1 + ((u_{i,j} − u_{i−1,j})/h)² + ((u_{i−1,j+1} − u_{i−1,j})/h)²) ],

    AE = −(1/(2h²)) [ 1/√(1 + ((u_{i+1,j} − u_{i,j})/h)² + ((u_{i+1,j} − u_{i+1,j−1})/h)²)
                    + 1/√(1 + ((u_{i+1,j} − u_{i,j})/h)² + ((u_{i+1,j+1} − u_{i+1,j})/h)²) ],

    AS = −(1/(2h²)) [ 1/√(1 + ((u_{i,j} − u_{i,j−1})/h)² + ((u_{i,j−1} − u_{i−1,j−1})/h)²)
                    + 1/√(1 + ((u_{i,j} − u_{i,j−1})/h)² + ((u_{i+1,j−1} − u_{i,j−1})/h)²) ],

    AN = −(1/(2h²)) [ 1/√(1 + ((u_{i,j+1} − u_{i,j})/h)² + ((u_{i,j+1} − u_{i−1,j+1})/h)²)
                    + 1/√(1 + ((u_{i,j+1} − u_{i,j})/h)² + ((u_{i+1,j+1} − u_{i,j+1})/h)²) ],

and AC = −(AW + AE + AS + AN). A^{MS} can also be written in matrix form, which is not given here.

It would be desirable for the discretization of the minimal surface problem to be monotone, so that the numerical solution converges to the viscosity solution. However, it is not obvious that one can prove that the given finite difference discretization is actually a monotone scheme. Perhaps it is possible to apply forward or backward differencing to the discretization of the ∇u term in such a way that the resulting discretization method is monotone.

However, it is not clear how this can be done, and we will just focus on the given discretization when we apply our multigrid method.

2.8 HJB Equation of Discretized Obstacle Problems

Now we consider the discretization of the HJB equations for both obstacle problems, the membrane constrained obstacle problem and the minimal surface obstacle problem. We give several notations here that we will use in the following. We order the grid points (X₁, X₂, X₃, ..., X_{N²}) lexicographically and denote u_h, f_h as vectors in the same lexicographical order. Thus A_h(u_h) is a vector, and A_h(u_h)_i represents the i-th item of the vector. The i-th grid point is called an inactive point if A_h(u_h)_i − f_{h,i} < u_{h,i} − ψ_{h,i}; otherwise it is called an active point. All the active points build up the active set, and the inactive points form the inactive set. Depending on the type of constraints, we divide the obstacle problem into three types: (1) the lower obstacle, with the set K_h^L = {v ∈ R^{N²} : v ≥ ψ_{h,1}}, (2) the upper obstacle, with the set K_h^U = {v ∈ R^{N²} : v ≤ ψ_{h,2}}, and (3) the two sided obstacle, with K_h^{two} = {v ∈ R^{N²} : ψ_{h,1} ≤ v ≤ ψ_{h,2}}. The HJB equations are given as

    min[A_h(u_h) − f_h, u_h − ψ_{h,1}] = 0    (2.42)

for the lower obstacle K_h = K_h^L,

    max[A_h(u_h) − f_h, u_h − ψ_{h,2}] = 0    (2.43)

for the upper obstacle K_h = K_h^U, and

    max_{1≤µ≤4} [A_h^µ(u_h) − f_h^µ] = 0    (2.44)

for the two sided obstacle problem K_h = K_h^{two}, where

    A_h^µ(u_h) = [sgn(u_h − ψ_{h,µ})] A_h(u_h),  f_h^µ = [sgn(u_h − ψ_{h,µ})] f_h,  µ = 1, 2,
    A_h^µ(u_h) = (−1)^µ u_h,  f_h^µ = (−1)^µ ψ_{h,µ−2},  µ = 3, 4.    (2.45)
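To make the discrete objects concrete, the following small sketch (ours, not from the paper) assembles the membrane matrix A^M of (2.38) via a Kronecker product and evaluates the componentwise residual of the discrete HJB equation (2.42) at a candidate vector; the cone obstacle is a hypothetical example:

```python
# Hedged sketch (ours): assemble A^M of (2.38) and evaluate the residual of
# (2.42), min[A_h(u_h) - f_h, u_h - psi_h], at a crude feasible candidate.
import numpy as np

N = 31
h = 1.0 / (N + 1)
T = np.diag(2.0 * np.ones(N)) + np.diag(-np.ones(N - 1), 1) + np.diag(-np.ones(N - 1), -1)
I = np.eye(N)
# Block tridiagonal form: diagonal blocks D = tridiag(-1, 4, -1), off-diagonal B = -I.
A = (np.kron(I, T) + np.kron(T, I)) / h**2

xg = np.arange(1, N + 1) * h
X, Y = np.meshgrid(xg, xg, indexing="ij")
psi = (0.5 - np.hypot(X - 0.5, Y - 0.5)).ravel()  # hypothetical cone obstacle
f = np.zeros(N * N)

# A crude feasible candidate: project the unconstrained solution onto u >= psi.
u = np.maximum(np.linalg.solve(A, f), psi)
hjb_residual = np.minimum(A @ u - f, u - psi)
# u solves the discrete obstacle problem only when this residual vanishes;
# the crude projection generally leaves it nonzero near the free boundary.
```

The Kronecker assembly reproduces exactly the block structure of (2.38): the diagonal blocks are D and the off-diagonal blocks are −I.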

Here A_h = A_h^M, defined in (2.38), for the membrane constrained obstacle problem, and A_h = A_h^{MS}, given by (2.41), for the minimal surface obstacle problem. Note that the HJB equations should be understood componentwise. In this research paper, we are going to solve the discrete HJB equation with lower obstacles, i.e., (2.42).

The discrete HJB equation is difficult to solve numerically because of its nonlinearity. Any grid point, say the i-th grid point, is active or inactive depending on both A_h(u_h)_i − f_{h,i} and u_{h,i} − ψ_{h,i}. However, we do not know this in advance; it depends on the approximate solution u_h and the obstacle function ψ_h. We need to update the active and inactive sets whenever we update the approximate solution u_h. The active and inactive sets make this problem nonlinear and difficult to solve numerically. For the minimal surface obstacle problem, the operator A_h^{MS} itself is highly nonlinear, so the HJB equation for the minimal surface obstacle problem is even more difficult to solve.

One way to solve the HJB equation is the policy iteration []. The policy iteration consists of two loops, i.e., the outer loop and the inner loop. In the outer loop, one fixes the active set and inactive set. In the inner loop, some iterative method is applied to solve the resulting linear problem. A W-cycle multigrid method can be taken as the inner loop solver [2]. In practice, the policy iteration converges fast. However, the computation in the inner loop can be costly, so the total computational work may still be expensive.
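The two-loop structure just described can be sketched in a few lines; the following 1D illustration is our own (the stopping test, obstacle, and direct inner solve are assumptions for brevity; the source cited above uses a W-cycle multigrid as the inner solver, and this paper's proposed method is the FAS, not policy iteration):

```python
# Small sketch of policy iteration for the lower obstacle HJB equation (2.42),
# min[A u - f, u - psi] = 0, in 1D. Obstacle and stopping test are our own
# illustrative assumptions.
import numpy as np

N = 63
h = 1.0 / (N + 1)
x = np.arange(1, N + 1) * h
A = (np.diag(2.0 * np.ones(N)) + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1)) / h**2
f = np.zeros(N)
psi = 0.8 - 8.0 * (x - 0.5) ** 2

u = np.maximum(psi, 0.0)
for _ in range(50):
    # Outer loop: fix the policy, i.e. decide per grid point which branch
    # of the min is currently active.
    active = (u - psi) < (A @ u - f)
    # Inner loop: solve the linear problem with the policy frozen
    # (a direct solve here; an iterative or multigrid solver also works).
    M = A.copy()
    rhs = f.copy()
    M[active, :] = 0.0
    M[active, active] = 1.0       # active rows enforce u_i = psi_i
    rhs[active] = psi[active]
    u_new = np.linalg.solve(M, rhs)
    if np.allclose(u_new, u, atol=1e-12):
        u = u_new
        break
    u = u_new

# At the fixed point u satisfies the discrete HJB equation componentwise.
assert np.all(u - psi >= -1e-8)
assert np.min(np.minimum(A @ u - f, u - psi)) > -1e-6
```

The outer loop terminates when the active set stops changing; each outer step costs one full linear solve, which is why the inner-loop work dominates the total cost.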

Chapter 3

Multigrid Methods

This chapter provides several important and basic concepts in multigrid methods. Multigrid methods, which were originally applied as a way to solve elliptic boundary value problems, are regarded as a class of efficient iterative methods for solving various PDEs. Typically their iteration counts are constant and do not depend on the grid size. Problems that can be efficiently solved by multigrid methods arise in almost all fields of physics and the engineering sciences, for instance elliptic PDEs such as the Poisson problem, convection diffusion equations and hyperbolic problems, as well as clustering and eigenvalue problems [4][9][22]. For these problems it is difficult to derive analytic solutions. A practical way to overcome the difficulty is to compute numerical solutions by a suitable discretization and to solve the corresponding linear or nonlinear large sparse system. In this chapter, we will focus on the discrete Poisson equation with Dirichlet boundary condition as our model problem:

    −Δu = f,    (3.1)

in the square Ω with h = 1/(N + 1), N ∈ N. We define the finite difference domain Ω_h as

    Ω_h = Ω ∩ G_h,    (3.2)

where

    Ω = (0, 1)² ⊂ R²  and  G_h ≡ {(x, y) : x = ih, y = jh; i, j ∈ Z}.    (3.3)

Let L_h be the standard five-point approximation of the partial differential operator −Δ; its stencil is given by

    L_h = (1/h²) [  0 −1  0 ]
                 [ −1  4 −1 ]    (3.4)
                 [  0 −1  0 ].

If we order the unknowns from the lower left corner to the upper right corner, i.e., natural ordering, then the resulting matrix will be a block tridiagonal sparse matrix as in (2.38). There exist various methods to solve the matrix problem, for example the Richardson method, the Gauss-Seidel method, the Jacobi method and the successive over-relaxation method. For the model problem (3.1), the convergence rate of the Richardson method is given by ρ = (K(A_h) − 1)/(K(A_h) + 1), where K(A_h) = 4/(π²h²) is the condition number of the matrix A_h in (2.38) with grid size h. The Richardson method converges faster as the condition number gets closer to 1. When the grid is refined, the condition number becomes large; thus, the Richardson method is inefficient. The Jacobi and Gauss-Seidel methods have an asymptotic convergence rate of 1 − O(h²). The optimal successive over-relaxation method is one order of magnitude faster than the Jacobi method, with rate 1 − O(h). In any case, the convergence of the stationary iterative methods deteriorates as the grid is refined [10].

The slow convergence of the stationary iterative methods is mainly due to the difficulty in reducing the low frequency components of the errors (we will explain more about high and low frequencies later in this chapter). However, the high frequency errors are damped rapidly by these methods [4]. Figure 3.1 demonstrates the error given by the Gauss-Seidel method applied to the model problem (3.1). The initial error with a random initial guess, shown in Figure 3.1(a), is highly oscillatory. Two Gauss-Seidel iterations efficiently damp the high frequency components of the errors, as shown in Figure 3.1(b), in which the errors are relatively smooth.

(a) Initial Errors (b) Error after Pre-smoothing

Figure 3.1: Illustration of high frequency errors reduced by stationary iterative methods. (a) Initial errors with some random initial value, (b) errors after pre-smoother.
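The smoothing behaviour shown in Figure 3.1 is easy to reproduce numerically. The following 1D experiment is our own illustration (the mode numbers and sweep count are arbitrary choices, not taken from the paper): a few Gauss-Seidel sweeps on the homogeneous problem damp an oscillatory error mode much faster than a smooth one.

```python
# Numerical illustration (ours) of the smoothing claim: Gauss-Seidel sweeps on
# the 1D problem -u'' = 0 reduce high-frequency error far faster than
# low-frequency error.
import numpy as np

N = 63
h = 1.0 / (N + 1)
x = np.arange(1, N + 1) * h

def gs_sweeps(e, n):
    """Apply n lexicographic Gauss-Seidel sweeps to the error equation."""
    e = e.copy()
    for _ in range(n):
        for i in range(N):
            left = e[i - 1] if i > 0 else 0.0
            right = e[i + 1] if i < N - 1 else 0.0
            e[i] = 0.5 * (left + right)
    return e

low = np.sin(np.pi * x)            # smooth error mode (k = 1)
high = np.sin(48.0 * np.pi * x)    # oscillatory error mode (k = 48)

r_low = np.linalg.norm(gs_sweeps(low, 4)) / np.linalg.norm(low)
r_high = np.linalg.norm(gs_sweeps(high, 4)) / np.linalg.norm(high)
assert r_high < 0.25 < r_low       # high frequencies die fast, low ones persist
```

This gap between r_high and r_low is exactly the observation that motivates the coarse grid correction described next: the surviving smooth error is well represented, and cheaply reduced, on a coarser grid.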

In contrast with the standard stationary iterative methods, the convergence rate of multigrid methods is typically independent of the size of the finest grid. The main idea behind multigrid methods is to reduce the low frequency errors by employing a hierarchy of coarse grids. Multigrid methods consist of three important procedures: (1) pre-smoothing, typically an iterative method applied on the current grid to smooth the error, (2) coarse grid correction, which computes the error on a coarse grid in order to update the solution on the current grid, and (3) post-smoothing, which is usually the same as the pre-smoothing [4]. In this chapter, we give an overview and examples of each element of multigrid methods (pre-smoothing, the coarse grid correction procedure, coarsening, the coarse grid operator, the restriction operator, the prolongation operator, post-smoothing) and demonstrate how these parts are organized together.

3.1 Pre-smoothing

The pre-smoothing operator plays a critical role in multigrid methods, since it reduces the high frequency errors significantly; the smoothed errors are then handled on the coarse grids recursively. Stationary iterative methods are commonly used as smoothers in multigrid methods, for instance the Gauss-Seidel method and the damped Jacobi method.

3.1.1 Jacobi Iteration

Consider the model problem (3.1). Let û^k_{h,i,j} be the approximate solution after the k-th iteration of the Jacobi method at grid point (i, j). The (k+1)-st iteration is defined as:

    û^{k+1}_{h,i,j} = (1/4) [h^2 f_{h,i,j} + û^k_{h,i-1,j} + û^k_{h,i+1,j} + û^k_{h,i,j-1} + û^k_{h,i,j+1}],    1 ≤ i, j ≤ N.    (3.5)

Let û^k_h be the vector of the û^k_{h,i,j}, 1 ≤ i, j ≤ N, in lexicographic order. The Jacobi iteration can be expressed as

    û^{k+1}_h = S_h û^k_h + (h^2/4) f_h,

with the operator

    S_h = I_h - (h^2/4) L_h,

where I_h is the identity operator.

If we introduce a damping factor ω, the damped Jacobi iteration is given by:

    u^{k+1}_h = û^k_h + ω (û^{k+1}_h - û^k_h),    (3.6)

where û^{k+1}_h is defined in (3.5). The operator of the damped Jacobi iteration becomes

    S_h(ω) = I_h - (ω h^2 / 4) L_h.    (3.7)

The stencil of S_h(ω) is given by

    (ω/4) [ 0        1         0
            1   4(1/ω - 1)     1
            0        1         0 ].    (3.8)

Note that when the damping factor ω = 1, the damped Jacobi iteration reduces to the Jacobi iteration.

In order to measure the convergence properties of the damped Jacobi method, we first look at the eigenfunctions of S_h(ω). They are the same as those of L_h, namely

    φ^{k,l}_h(x, y) = sin kπx sin lπy    ((x, y) ∈ Ω_h; k, l = 1, 2, ..., N),    (3.9)

with the corresponding eigenvalues

    χ^{k,l}_h = χ^{k,l}_h(ω) = 1 - (ω/2) (2 - cos kπh - cos lπh).    (3.10)

Let ū_h be the exact solution of (3.1). The errors after the k-th and (k+1)-st iterations are ε^k_h and ε^{k+1}_h, respectively:

    ε^k_h = ū_h - û^k_h,    ε^{k+1}_h = ū_h - û^{k+1}_h.

Using the eigenfunctions and eigenvalues of S_h, ε^k_h can be written as:

    ε^k_h = Σ_{k,l=1}^{N} α_{k,l} φ^{k,l}_h.    (3.11)

With some easy algebra, we obtain ε^{k+1}_h = S_h ε^k_h. Thus we can write ε^{k+1}_h as:

    ε^{k+1}_h = Σ_{k,l=1}^{N} χ^{k,l}_h α_{k,l} φ^{k,l}_h.    (3.12)
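The eigenfunction relations (3.9)-(3.12) can be checked directly. The sketch below (illustrative code, not from the thesis) implements one damped Jacobi step on the 2D model problem and verifies that a discrete eigenfunction is scaled exactly by the eigenvalue χ in (3.10).

```python
import numpy as np

def damped_jacobi(u, f, h, omega):
    """One damped Jacobi step (3.6) for the five-point Laplacian.
    u and f are (N, N) interior grid arrays; values outside the
    grid are zero (Dirichlet boundary)."""
    up = np.pad(u, 1)                                  # embed zero boundary
    u_jac = 0.25 * (h * h * f + up[:-2, 1:-1] + up[2:, 1:-1]
                    + up[1:-1, :-2] + up[1:-1, 2:])
    return u + omega * (u_jac - u)                     # omega = 1 gives plain Jacobi

# Verify the eigenvalue formula (3.10) on a discrete eigenfunction.
N = 15
h = 1.0 / (N + 1)
grid = np.arange(1, N + 1) * h
X, Y = np.meshgrid(grid, grid, indexing="ij")
k, l, omega = 3, 5, 0.8
phi = np.sin(k * np.pi * X) * np.sin(l * np.pi * Y)
phi1 = damped_jacobi(phi, np.zeros((N, N)), h, omega)
chi = 1 - 0.5 * omega * (2 - np.cos(k * np.pi * h) - np.cos(l * np.pi * h))
```

After one step, phi1 equals chi times phi to machine precision, confirming (3.12) mode by mode.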

We call φ^{k,l}_h an eigenfunction of

    low frequency,   if max(k, l) < N/2,
    high frequency,  if N/2 ≤ max(k, l) ≤ N.

In order to study the smoothing properties of the damped Jacobi method, we introduce the smoothing factor μ(h; ω) of S_h(ω) in (3.8) and its supremum μ* over h:

    μ(h; ω) := max{ |χ^{k,l}_h(ω)| : N/2 ≤ max(k, l) ≤ N },
    μ*(ω) := sup_{h ∈ H} μ(h; ω),    (3.13)

where H = {h = 1/(N+1) : N ∈ N, N ≥ 4}. Substituting (3.10) into (3.13), we get

    μ(h; ω) = max{ |1 - (ω/2)(2 - cos kπh - cos lπh)| : N/2 ≤ max(k, l) ≤ N },
    μ*(ω) = max{ |1 - ω/2|, |1 - 2ω| }.    (3.14)

This shows that there are no smoothing properties for ω > 1 or ω ≤ 0. When ω = 1, μ*(ω) = 1, while the choice ω = 4/5 gives μ*(4/5) = 3/5. This means all the high frequency error components are reduced by at least a factor of 3/5 [22], which explains why the damped Jacobi iteration is efficient at reducing the high frequency error components.

3.1.2 Gauss-Seidel Iteration

The Gauss-Seidel (GS) method is widely used as the smoothing operator in multigrid methods. Consider the same problem (3.1), and again let û^k_h be the numerical approximation after the k-th iteration. The next iterate is:

    û^{k+1}_{h,i,j} = (1/4) [h^2 f_{h,i,j} + û^{k+1}_{h,i-1,j} + û^k_{h,i+1,j} + û^{k+1}_{h,i,j-1} + û^k_{h,i,j+1}].    (3.15)

Applying the same idea as for damped Jacobi, we can introduce a damping factor ω for the GS iteration. The step becomes:

    u^{k+1}_{h,i,j} = û^k_{h,i,j} + ω (û^{k+1}_{h,i,j} - û^k_{h,i,j}),    (3.16)

where û^{k+1}_{h,i,j} is defined in (3.15). We call this algorithm ω-GS in the following. For the model problem, the smoothing factor of the GS method is μ(GS) = 0.5 for ω = 1. This implies that GS is a better smoother than the Jacobi method. However, the introduction of a relaxation parameter does not improve the smoothing properties of GS [22].
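The smoothing factors in (3.14) can be verified numerically. The sketch below (illustrative code) evaluates μ(h; ω) by scanning |χ^{k,l}_h(ω)| over the high-frequency modes; for ω = 4/5 the value is close to 3/5, while for ω = 1 there is essentially no smoothing.

```python
import numpy as np

def smoothing_factor(omega, N=64):
    """mu(h; omega) of (3.13): the largest |chi| over the
    high-frequency modes N/2 <= max(k, l) <= N for the
    damped Jacobi operator S_h(omega)."""
    h = 1.0 / (N + 1)
    k = np.arange(1, N + 1)
    K, L = np.meshgrid(k, k, indexing="ij")
    chi = np.abs(1 - 0.5 * omega * (2 - np.cos(K * np.pi * h)
                                    - np.cos(L * np.pi * h)))
    high = np.maximum(K, L) >= N / 2
    return chi[high].max()
```

For a moderately fine grid, smoothing_factor(0.8) is within about one percent of the limit value 3/5, and smoothing_factor(1.0) is close to 1.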

3.2 Coarse Grid Correction

3.2.1 Multigrid Operators

In this section, we introduce, and give examples of, the main components of a multigrid method.

Coarse Grids

Figure 3.2 shows a hierarchy of coarse grids under standard coarsening, with finest grid size h = 1/32 and coarse grid sizes 1/16, 1/8 and 1/4, respectively. We assume that h = 2^{-p}, p ∈ N. Then we can form the grid sequence

    Ω_h, Ω_{2h}, Ω_{4h}, Ω_{8h}, ..., Ω_{h_0},

where h_0 is the grid size of the coarsest grid. Specifically, in this paper, the coarse grid domain Ω_H has coarse grid size H = 2h. There are also other coarsening strategies, for instance semicoarsening [4].

Restriction Operator R and Prolongation Operator P

In order to transfer a grid function from the fine grid to the coarse grid, and to interpolate a coarse grid function back to the fine grid, we need a restriction operator R and an interpolation (prolongation) operator P, respectively. These two operators are defined as:

    R : Ω_h → Ω_H,    P : Ω_H → Ω_h.    (3.17)

Here we give two examples of restriction operators R: injection and averaging. The injection operator transfers values from the fine grid to the coarse grid by simply taking the value at the coinciding fine grid point, i.e.

    u_{H,i,j} = R_injection u_h = u_{h,2i,2j},    (3.18)

where (iH, jH) = (2ih, 2jh) ∈ Ω_H ∩ Ω_h. The stencil of R_injection is:

    [ 0 0 0
      0 1 0
      0 0 0 ]_h^H.    (3.19)

The averaging operator computes the value at a coarse grid point as a weighted average of the corresponding fine grid point and its eight neighbours. For the grid point (iH, jH) = (2ih, 2jh) ∈ Ω_H ∩ Ω_h, the formula is:

    u_{H,i,j} = R_averaging u_h
              = (1/16) [ u_{h,2i-1,2j-1} + 2u_{h,2i-1,2j} + u_{h,2i-1,2j+1}
                       + 2u_{h,2i,2j-1} + 4u_{h,2i,2j} + 2u_{h,2i,2j+1}
                       + u_{h,2i+1,2j-1} + 2u_{h,2i+1,2j} + u_{h,2i+1,2j+1} ].    (3.20)

Figure 3.2: Illustration of a hierarchy of grids starting with h = 1/32. (a) Fine grid with grid size 1/32, (b) coarser grid with grid size 1/16, (c) coarser grid with grid size 1/8, (d) coarsest grid with grid size 1/4.

The stencil of R_averaging is:

    (1/16) [ 1 2 1
             2 4 2
             1 2 1 ]_h^H.    (3.21)

Here we introduce the bilinear interpolation (prolongation) operator P_bilinear, which maps grid functions from the coarse grid Ω_H to the fine grid Ω_h. The formula reads

    u_{h,i',j'} = P_bilinear u_H =
        u_{H,i,j},                                                    for i' = 2i, j' = 2j,
        (1/2) [u_{H,i,j} + u_{H,i,j+1}],                              for i' = 2i, j' = 2j+1,
        (1/2) [u_{H,i,j} + u_{H,i+1,j}],                              for i' = 2i+1, j' = 2j,
        (1/4) [u_{H,i,j} + u_{H,i+1,j} + u_{H,i,j+1} + u_{H,i+1,j+1}], for i' = 2i+1, j' = 2j+1,    (3.22)

where (i', j') ∈ Ω_h. Figure 3.3 presents a fine grid with symbols for both the fine and the coarse grid points referred to by (3.22).

Figure 3.3: A fine grid with symbols presenting the bilinear interpolation in (3.22) used for the interpolation from the coarse grid.

The stencil of P_bilinear is given by:

    (1/4) ] 1 2 1
            2 4 2
            1 2 1 [_H^h.    (3.23)
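The transfer operators (3.20)-(3.23) can be sketched as follows (illustrative code; the index convention is one possible choice, with the fine grid holding N_f = 2N_c + 1 interior points and zero Dirichlet values outside). A standard consistency check is the variational relation P = 4 R^T between full weighting and bilinear interpolation, verified below.

```python
import numpy as np

def restrict_fw(uf):
    """Full-weighting (averaging) restriction (3.21). uf holds the fine
    interior values; fine point (2I, 2J) (1-based) coincides with
    coarse point (I, J); zero Dirichlet values outside the grid."""
    nf = uf.shape[0]
    nc = (nf - 1) // 2
    p = np.pad(uf, 1)                 # p[g, d] = fine point (g, d), g, d = 1..nf
    uc = np.zeros((nc, nc))
    for I in range(1, nc + 1):
        for J in range(1, nc + 1):
            g, d = 2 * I, 2 * J
            uc[I - 1, J - 1] = (4 * p[g, d]
                                + 2 * (p[g - 1, d] + p[g + 1, d]
                                       + p[g, d - 1] + p[g, d + 1])
                                + p[g - 1, d - 1] + p[g - 1, d + 1]
                                + p[g + 1, d - 1] + p[g + 1, d + 1]) / 16.0
    return uc

def prolong_bilinear(uc):
    """Bilinear interpolation (3.22) from the coarse to the fine grid."""
    nc = uc.shape[0]
    nf = 2 * nc + 1
    p = np.pad(uc, 1)                 # p[I, J] = coarse point (I, J), I, J = 1..nc
    uf = np.zeros((nf, nf))
    for i in range(1, nf + 1):
        for j in range(1, nf + 1):
            if i % 2 == 0 and j % 2 == 0:      # coincides with a coarse point
                uf[i - 1, j - 1] = p[i // 2, j // 2]
            elif i % 2 == 0:                   # average of two coarse neighbours
                uf[i - 1, j - 1] = 0.5 * (p[i // 2, (j - 1) // 2]
                                          + p[i // 2, (j + 1) // 2])
            elif j % 2 == 0:
                uf[i - 1, j - 1] = 0.5 * (p[(i - 1) // 2, j // 2]
                                          + p[(i + 1) // 2, j // 2])
            else:                              # average of four coarse neighbours
                uf[i - 1, j - 1] = 0.25 * (p[(i - 1) // 2, (j - 1) // 2]
                                           + p[(i - 1) // 2, (j + 1) // 2]
                                           + p[(i + 1) // 2, (j - 1) // 2]
                                           + p[(i + 1) // 2, (j + 1) // 2])
    return uf
```

Building the matrix representations column by column and comparing confirms that the two stencils are transposes of each other up to the factor 4.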

3.2.2 Coarse Grid Correction

In this section we introduce the idea of the coarse grid correction. Given the approximate solution ũ^k_h after k smoothing steps, we have:

    ū_h = ũ^k_h + ε^k_h,    (3.24)

where ε^k_h is the error. The residual is defined as:

    r^k_h := f_h - A_h ũ^k_h = A_h ū_h - A_h ũ^k_h = A_h (ū_h - ũ^k_h) = A_h ε^k_h.    (3.25)

Clearly, by solving

    A_h ε^k_h = r^k_h    (3.26)

exactly, we obtain the error ε^k_h, and adding it to ũ^k_h gives the exact solution. However, solving (3.26) exactly has the same computational complexity as solving the original linear system. One way to reduce the cost is to solve the problem on the coarse grid:

    A_H ε^k_H = r^k_H,    (3.27)

where A_H is an approximation of A_h on the coarse grid Ω_H and r^k_H = R r^k_h is the restricted residual. This makes sense because ε^k_h is already smooth after pre-smoothing. With the coarse error ε^k_H, we can interpolate it back to the fine grid and update ũ^k_h. The coarse grid correction algorithm is given in Algorithm 1.

Input: ũ^k_h, A_h, f_h
Output: ũ^{k+1}_h
    Compute the residual: r^k_h = f_h - A_h ũ^k_h
    Restrict the residual to the coarse grid Ω_H: r^k_H = R r^k_h
    Obtain ε^k_H by approximately solving the coarse grid problem: A_H ε^k_H = r^k_H
    Interpolate the error to the fine grid Ω_h: ε^k_h = P ε^k_H
    Obtain the updated solution: ũ^{k+1}_h = ũ^k_h + ε^k_h
Algorithm 1: Coarse grid correction
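Algorithm 1 can be sketched in a few lines (illustrative code; the coarse problem is solved exactly here, and the Galerkin operator A_H = R A_h P introduced just below is used as the coarse grid operator). A single correction step applied to a smooth error already removes most of it.

```python
import numpy as np

def coarse_grid_correction(u, A, f, R, P):
    """One coarse grid correction (Algorithm 1): restrict the residual,
    solve the Galerkin coarse problem exactly, interpolate the coarse
    error and correct the fine grid approximation."""
    r = f - A @ u                    # residual (3.25)
    AH = R @ A @ P                   # Galerkin coarse grid operator (3.28)
    eH = np.linalg.solve(AH, R @ r)  # coarse grid problem (3.27)
    return u + P @ eH

# 1D Poisson example: n = 7 fine points, 3 coarse points (h -> 2h).
n, h = 7, 1.0 / 8
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
R = np.zeros((3, n))
for I in range(3):
    g = 2 * I + 1                    # 0-based fine index of coarse point I
    R[I, g - 1], R[I, g], R[I, g + 1] = 0.25, 0.5, 0.25   # 1D full weighting
P = 2 * R.T                          # 1D linear interpolation
f = np.ones(n)
u = coarse_grid_correction(np.zeros(n), A, f, R, P)
```

Starting from the zero guess, whose error is the (smooth) exact solution itself, one correction step captures most of the error; only the high-frequency interpolation error remains, which the smoother would remove.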

Coarse Grid Operator

When solving (3.27) on the coarse grid Ω_H, we have different choices for the coarse grid operator A_H. One is the direct discretization on Ω_H, done the same way as for A_h on the fine grid. Another choice is the Galerkin operator, defined as:

    A_H = R A_h P,    (3.28)

where R and P are the restriction and interpolation operators introduced above.

3.3 Post-smoothing

The post-smoothing is usually the same operator as the pre-smoothing, but applied after the coarse grid correction in order to obtain a symmetric algorithm.

3.4 Two-grid V-cycle

Given an approximate solution u^k_h after the k-th iteration, the two-grid V-cycle computes u^{k+1}_h by combining the pre-smoothing, the coarse grid correction and the post-smoothing. The algorithm is given in Algorithm 2. We call it a V-cycle because the structure of one two-grid cycle looks like the letter V, as shown in Figure 3.4(a). The multigrid method reduces both high and low frequency errors. Figure 3.5 illustrates the error after the coarse grid correction and the error after the post-smoothing; together with Figure 3.1, it shows the performance of one iteration of the two-grid V-cycle.

3.5 Multigrid V-cycle

We described the two-grid V-cycle in the previous section. In practice, two-grid methods are seldom used, since the problem on the coarse grid is still large. However, they are the basis of multigrid methods. By applying the two-grid V-cycle recursively, we obtain a multigrid V-cycle with a hierarchy of coarse grid levels. The algorithm is given in Algorithm 3. The level with the largest grid size is called the coarsest level, on which we solve the problem exactly.

Input: u^k_h, A_h, f_h
Output: u^{k+1}_h = TwoGridVCycle(u^k_h, A_h, f_h)
if coarsest grid then
    Solve the problem with a stationary iterative method: u^{k+1}_h = S(u^k_h, A_h, f_h);
    return u^{k+1}_h;
end
else
    (1) Pre-smoothing
        Compute ũ^k_h by applying ν_1 smoothing iterations to u^k_h: ũ^k_h = S^{ν_1}(u^k_h, A_h, f_h)
    (2) Coarse Grid Correction
        Compute the residual: r^k_h = f_h - A_h ũ^k_h
        Restrict the residual to the coarse grid: r^k_H = R r^k_h
        Compute the error on the coarse grid Ω_H by solving: A_H ε^k_H = r^k_H
        Interpolate the correction: ε^k_h = P ε^k_H
        Correct the approximation after pre-smoothing: û^k_h = ũ^k_h + ε^k_h
    (3) Post-smoothing
        Compute u^{k+1}_h by applying ν_2 smoothing iterations to û^k_h: u^{k+1}_h = S^{ν_2}(û^k_h, A_h, f_h)
    return u^{k+1}_h
end
Algorithm 2: Two-grid V-cycle

Solving "exactly" here still means computing a numerical solution rather than the true solution: the coarsest problem is solved with stationary or other iterative methods to a relatively accurate numerical solution, using the same stopping criterion as on the fine grid. Note that the parameter γ appears twice in (3.29) in Algorithm 3, as γ and γ_H. γ_H indicates the number of cycles to be carried out on the coarser grid, while γ specifies the number of cycles employed on the current grid [22]. If γ = 1 on all grid levels, the algorithm is a V-cycle multigrid. The structure of a five-grid V-cycle is shown in Figure 3.4(b).

Figure 3.4: Structure of various multigrid cycles for different numbers of grids (smoothing steps, the exact solve on the coarsest grid, \ fine grid to coarse grid, / coarse grid to fine grid). (a) Two-grid V-cycle, (b) five-grid V-cycle, (c) four-grid W-cycle, (d) five-grid F-cycle.

Input: ν_1, ν_2, γ, u^k_h, A_h, f_h
Output: u^{k+1}_h = MultigridVCycle(ν_1, ν_2, γ, u^k_h, A_h, f_h)
if coarsest grid then
    Solve the problem with a stationary iterative method: u^{k+1}_h = S(u^k_h, A_h, f_h);
    return u^{k+1}_h;
end
else
    (1) Pre-smoothing
        Compute ũ^k_h by applying ν_1 smoothing iterations to u^k_h: ũ^k_h = S^{ν_1}(u^k_h, A_h, f_h)
    (2) Coarse Grid Correction
        Compute the residual: r^k_h = f_h - A_h ũ^k_h
        Restrict the residual to the coarse grid: r^k_H = R r^k_h
        Give an initial value ε^k_H for the error on the coarse grid
        Compute the error on the coarse grid Ω_H by γ recursive cycles:

            ε^k_H = MultigridVCycle^γ(ν_1, ν_2, γ_H, ε^k_H, A_H, r^k_H)    (3.29)

        Interpolate the correction: ε^k_h = P ε^k_H
        Correct the approximate solution: û^k_h = ũ^k_h + ε^k_h
    (3) Post-smoothing
        Compute u^{k+1}_h by applying ν_2 smoothing iterations to û^k_h: u^{k+1}_h = S^{ν_2}(û^k_h, A_h, f_h)
    return u^{k+1}_h
end
Algorithm 3: Multigrid V-cycle
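Algorithm 3 with γ = 1 can be sketched recursively as below (illustrative 1D code: Gauss-Seidel smoothing, full weighting, linear interpolation and direct coarse discretization; the coarsest problem is "solved" by many smoothing sweeps, as discussed above). The residual then drops by a roughly constant factor per cycle, independently of h.

```python
import numpy as np

def vcycle(u, f, h, nu1=2, nu2=2):
    """Recursive multigrid V-cycle (Algorithm 3 with gamma = 1) for the
    1D Poisson problem with zero Dirichlet boundary values."""
    n = u.size

    def smooth(u, sweeps):
        for _ in range(sweeps):
            for i in range(n):
                left = u[i - 1] if i > 0 else 0.0
                right = u[i + 1] if i < n - 1 else 0.0
                u[i] = 0.5 * (h * h * f[i] + left + right)
        return u

    if n <= 3:                        # coarsest grid: iterate until accurate
        return smooth(u, 50)
    u = smooth(u, nu1)                # (1) pre-smoothing
    up = np.pad(u, 1)
    r = f - (2 * u - up[:-2] - up[2:]) / h**2
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]   # full weighting
    ec = vcycle(np.zeros((n - 1) // 2), rc, 2 * h, nu1, nu2)   # (2) recursion
    ef = np.zeros(n)                  # linear interpolation of the correction
    ef[1::2] = ec
    ep = np.pad(ec, 1)
    ef[0::2] = 0.5 * (ep[:-1] + ep[1:])
    return smooth(u + ef, nu2)        # (3) post-smoothing

n, h = 63, 1.0 / 64
f = np.ones(n)
u = np.zeros(n)
for _ in range(10):
    u = vcycle(u, f, h)
```

Ten cycles starting from the zero guess reduce the relative residual by many orders of magnitude; the grid hierarchy here is 1/64, 1/32, 1/16, 1/8, 1/4.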

3.6 Other Multigrid Cycles

3.6.1 W-cycle

If we set γ = 2 on all grid levels in Algorithm 3, the algorithm becomes a W-cycle multigrid algorithm. This means we carry out two cycles of computation on each grid level. The W-cycle is named after the shape of its structure, shown in Figure 3.4(c). Since the W-cycle visits the coarse grids more often, it generally converges faster than the V-cycle. However, the computation in one W-cycle is also more expensive than that in one V-cycle.

3.6.2 F-cycle

It is convenient and easy to implement a fixed γ in Algorithm 3, and the V-cycle (γ = 1) and the W-cycle (γ = 2) are widely used in practice. However, γ need not be fixed; certain combinations of γ = 1 and γ = 2 can be chosen. Here we introduce the so-called F-cycle, whose structure is illustrated in Figure 3.4(d): when the algorithm goes down from the fine grid to the coarse grid, γ = 2; when it goes back from the coarse grid to the fine grid, γ = 1. Hence we can regard the F-cycle as a cycle with 1 < γ < 2.

We mentioned that the convergence rate of the multigrid method is independent of the size of its finest grid. However, this says nothing about efficiency unless we take the computational work into account. Table 3.1 shows the computational work for different cycles; in the table, C is a small constant and N_L is the total number of unknowns on the finest grid. From the table, we conclude that the computational work in one W-cycle is about 1.5 times that in one V-cycle, and the work of one F-cycle lies between (4/3) C N_L (V-cycle) and 2 C N_L (W-cycle) [22].

    γ        Computational work W_L
    γ = 1    (4/3) C N_L
    γ = 2    2 C N_L
    γ = 3    4 C N_L
    γ = 4    O(N_L log N_L)

Table 3.1: The computational work of one complete multigrid cycle.

Figure 3.5: Illustration of both high and low frequency errors reduced by the coarse grid correction and the post-smoothing. (a) Errors after the coarse grid correction (CGC), (b) errors after the post-smoothing.

Chapter 4  Multigrid Method for Obstacle Problems

In this chapter, we introduce a multigrid method, the full approximation scheme (FAS), which can be used to solve nonlinear PDEs.

4.1 Model Problem

Consider the lower obstacle problem, with the HJB equation given in (2.42), as our model problem. Here we either let A_h(u_h) = A^M_h(u_h) be the Laplacian operator in (3.4), the matrix derived from the membrane constrained obstacle problem, or let A_h(u_h) = A^{MS}_h(u_h) be the minimal surface operator in (2.41). We assume there is no external force, hence f_h is the zero vector for both obstacle problems. In this chapter, we multiply both sides of the equation by h^2. For example, the Laplacian operator A^M_h becomes

    A^M_h = [ D  B
              B  D  B
                 B  D  B
                    ...
                       B  D ],    (4.1)

where D and B are defined as in (2.39) and (2.40), respectively.

For simplicity, we can express our model problem (2.42) as

    N_h u_h = b_h.    (4.2)

The matrix N_h can be viewed as a row-wise merging of two matrices: one is A_h, the other is the identity matrix I_h of the same size as A_h. If the grid point with index i is active, which means A_h(u_h)_i - f_{h,i} ≥ u_{h,i} - ψ_{h,i}, we take the i-th row from I_h and set the right hand side b_{h,i} = ψ_{h,i}; the HJB equation becomes N_h(u_h)_i = u_{h,i} = b_{h,i} = ψ_{h,i}, that is, u_{h,i} = ψ_{h,i}. Otherwise, we take the i-th row from A_h and set b_{h,i} = 0, so the problem becomes A_h(u_h)_i = 0. More precisely, the matrix N_h and the right hand side b_h are defined as

    N_h(i, :) = { A_h(i, :), if the i-th grid point is inactive,
                  I_h(i, :), otherwise,    (4.3)

    b_{h,i} = { 0,       if the i-th grid point is inactive,
                ψ_{h,i}, otherwise,    (4.4)

where (i, :) denotes the i-th row of a matrix. We also define the operator N_h(u_h) := N_h u_h. Note that the matrix N_h, and hence the operator N_h(·), depends on the approximate solution u_h; in general N_h(v_h) ≠ N_h(w_h) if v_h ≠ w_h. Also note that while b_{h,i} = 0 at inactive points on the fine grid, it is in general nonzero on the coarse grids.

4.2 Nonlinear Problem

The model problem (2.42) cannot be solved by the multigrid methods in Algorithm 2 and Algorithm 3, mainly because the multigrid V-cycle assumes (3.25), whereas in the nonlinear case

    N_h ū_h - N_h ũ^k_h ≠ N_h (ū_h - ũ^k_h)    (4.5)

in general. We cannot compute the error directly on the coarse grid as in the V-cycle algorithm, so there is no obvious way to compute ε^k_h, which makes the V-cycle algorithm inapplicable here. To solve nonlinear problems, we introduce another algorithm: the FAS. First, we consider

    r^k_h = b_h - N_h(ũ^k_h),    (4.6)

where r^k_h is the residual at the k-th iteration. Substituting N_h(ū_h) = b_h into (4.6), we obtain

    r^k_h = N_h(ū_h) - N_h(ũ^k_h).    (4.7)

Rearranging (4.7), the equation reads

    N_h(ū_h) = r^k_h + N_h(ũ^k_h).    (4.8)

Introducing a restriction operator R, we can consider a similar problem on the coarse grid:

    N_H(ǔ_H) = R r^k_h + N_H(R ũ^k_h).    (4.9)

Writing the right hand side R r^k_h + N_H(R ũ^k_h) as b_H, we have

    N_H(ǔ_H) = b_H    (4.10)

on the coarse grid Ω_H. With the coarse solution ǔ_H, we can compute the error ε_H on the current coarse grid by ε_H = ǔ_H - R ũ^k_h. The remaining steps are similar to the coarse grid correction in the V-cycle. The algorithm is described in Algorithm 4.

Similar to Algorithm 3, the FAS algorithm also consists of three main components: pre-smoothing, coarse grid correction and post-smoothing. However, the smoothing operator differs from the one in the V-cycle, since the GS method is not applicable to nonlinear problems. Moreover, in the coarse grid correction, instead of computing the error on the coarse grid, the FAS solves for the solution on the coarse grid. Note that the parameter γ in the FAS again determines different cycles, such as the V-cycle FAS (γ = 1) and the W-cycle FAS (γ = 2).

4.3 Operators for FAS

As with linear multigrid methods, we have different choices for the operators in the FAS. In this section, we give our choice of operators.
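The matrix N_h and right hand side b_h of (4.3)-(4.4) can be assembled from a current iterate as in the sketch below (illustrative code; `assemble_hjb` is a hypothetical helper name, and a general f is allowed even though f = 0 in our model problem).

```python
import numpy as np

def assemble_hjb(A, f, psi, u):
    """Build N_h and b_h of (4.3)-(4.4) for the iterate u: rows at
    active points (where A u - f >= u - psi) are replaced by identity
    rows, with the obstacle value on the right hand side."""
    n = u.size
    active = (A @ u - f) >= (u - psi)
    Nh = A.copy()
    Nh[active, :] = np.eye(n)[active, :]
    b = np.where(active, psi, f)
    return Nh, b, active

# Small 1D example: a block obstacle of height 1 in the interior.
n, h = 5, 1.0 / 6
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
psi = np.array([0.0, 1.0, 1.0, 1.0, 0.0])
f = np.zeros(n)
Nh, b, active = assemble_hjb(A, f, psi, psi)   # linearize about u = psi
x = np.linalg.solve(Nh, b)
```

At the three active middle points the solve returns the obstacle values, while the two inactive end points satisfy the discrete Laplace equation, which gives the value 0.5 there.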

4.3.1 Pre-smoothing and Post-smoothing

As the smoothing operator of the FAS, we choose the Newton-Gauss-Seidel (NGS) method instead of the Gauss-Seidel (GS) method: the HJB equation we are solving is in general nonlinear, and GS is not directly applicable to a nonlinear system. Before introducing the NGS method, we first recall Newton's method, the well known method to solve F(x) = 0 [5]. Here F : R^n → R^n is a continuously differentiable (smooth) operator and x is a vector in R^n. The operator F may depend on the vector x.

Input: û^k_h, N_h, b_h
Output: û^{k+1}_h = FAS(û^k_h, N_h, b_h)
(1) Pre-smoothing
    Compute ũ^k_h by applying ν_1 smoothing iterations to û^k_h: ũ^k_h = S^{ν_1}(û^k_h, N_h, b_h)
(2) Coarse Grid Correction
    Compute the residual: r^k_h = b_h - N_h ũ^k_h
    Restrict the residual to the coarse grid: r^k_H = R_1 r^k_h
    Restrict the approximate solution to the coarse grid: ũ^k_H = R_2 ũ^k_h
    Compute the right hand side: b_H = r^k_H + N_H(ũ^k_H)
    if Ω_H is the coarsest grid then
        Solve N_H(ǔ^k_H) = b_H exactly
    end
    else
        Solve N_H(ǔ^k_H) = b_H approximately by applying γ cycles (γ = 1 for V-cycle, γ = 2 for W-cycle) of ǔ^k_H = FAS(ũ^k_H, N_H, b_H) recursively
    end
    Compute the correction: ε^k_H = ǔ^k_H - ũ^k_H
    Interpolate the correction: ε^k_h = P ε^k_H
    Correct the approximation after pre-smoothing: û^{k,CGC}_h = ũ^k_h + ε^k_h
(3) Post-smoothing
    Compute û^{k+1}_h by applying ν_2 smoothing iterations to û^{k,CGC}_h: û^{k+1}_h = S^{ν_2}(û^{k,CGC}_h, N_h, b_h)
return û^{k+1}_h
Algorithm 4: FAS algorithm

One step of Newton's method is given by

    x^{k+1} = x^k - F'(x^k)^{-1} F(x^k),    (k = 0, 1, 2, 3, ...).    (4.11)

However, if F is not a smooth operator but a locally Lipschitzian one, then (4.11) can no longer be used. Let ∂F(x^k) be the generalized Jacobian of F at x^k, as defined in [6]. One can write (4.11) as

    x^{k+1} = x^k - V_k^{-1} F(x^k),    (4.12)

where V_k ∈ ∂F(x^k). Newton's method extends to the non-smooth case by using the generalized Jacobian V_k instead of the derivative F'(x^k) [9]. We can rewrite (4.12) as

    V_k x^{k+1} = V_k x^k - F(x^k).    (4.13)

It is shown in [9] that local and global convergence results hold for (4.13) when F is semismooth. Solving the linear system (4.13) exactly can be expensive; the NGS method solves (4.13) approximately by GS instead. In general, the NGS method used here can be viewed as a non-smooth version of standard NGS.

One iteration of NGS for the model problem consists of three steps. Assume û^k_h is the iterate after the k-th NGS iteration. The first step of the (k+1)-st iteration is to determine the active and inactive sets based on û^k_h; we update the matrix N_h and the operator N_h(û^k_h) accordingly. The second step is to compute the Jacobian matrix N'_h(û^k_h), which is simply N_h. The last step is to obtain û^{k+1}_h by solving N'_h(û^k_h) û^{k+1}_h = N'_h(û^k_h) û^k_h - (N_h(û^k_h) - b_h) with the GS method.

4.3.2 Restriction Operator R and Prolongation Operator P

In standard FAS, the restriction operator is the averaging operator; in Algorithm 4 we choose the averaging restriction operator for both R_1 and R_2. The bilinear interpolation operator is chosen as the prolongation operator in standard FAS. However, this operator does not work well for our model problem. Figure 4.1 shows a 1D obstacle problem that illustrates the drawback of linear interpolation. In the figure, the grid points x_i and x_{i+2} are coarse grid points, and x_i, x_{i+1} and x_{i+2} are all points on the fine grid. The rectangle in red is the obstacle under consideration.
Let ũ_H and ũ_h be the approximate solutions on the coarse grid and the fine grid, respectively.

Figure 4.1: 1D obstacle problem example showing the drawback of linear interpolation.

We assume that x_{i+1} and x_{i+2} are active points on the fine grid, so ũ_{h,i+1} and ũ_{h,i+2} should equal the obstacle value, which is 1 in our example. We also assume ũ_{H,i} = 0. With linear interpolation, ũ_{h,i+1} gets the value at point c instead of 1, which is not accurate. To solve this problem, we introduce an additional step after the bilinear interpolation. When we update the approximate solution by adding the error, i.e. the step û^{k,CGC}_h = ũ^k_h + ε^k_h in Algorithm 4, we enforce ε^k_{h,i} = 0 if the i-th grid point is active. In other words, in the coarse grid correction, the bilinear interpolation is applied only to unknowns at the inactive points. This step results in better convergence compared with pure bilinear interpolation (for details, see M6 in Section 5 of [3]).

4.3.3 Nonlinear Operator N_H on the Coarse Grid

N_H for the Membrane Constrained Obstacle Problem

The standard coarse grid operator of the FAS is the direct discretization. However, this operator results in slow convergence. From Chapter 3 we know that the Galerkin operator works well for linear problems. Since the Laplacian operator in the membrane constrained problem, i.e. A^M_h in (2.42), is also linear, we can make use of the Galerkin

operator in the nonlinear problem here. This choice results in a faster convergence rate than the direct discretization. We write the coarse grid HJB equation as

    min[A_H u_H, u_H - ψ_H] = R r^k_h + N_H R ũ^k_h.    (4.14)

The right hand side of this HJB equation comes from (4.9). For simplicity, the problem can be expressed as

    N_H u_H = b_H,    (4.15)

where N_H is the matrix on the coarse grid, obtained by merging A_H and I_H (the identity matrix) in the same way as (4.3). In order to form N_H, we need to compute A_H. The Galerkin operator (3.28) gives A_H = R A_h P, where A_h is the fine grid matrix. Similarly, we define the matrix A_H in (4.14) as

    A_H = R N_h P,    (4.16)

where N_h is the fine grid matrix in (4.3). The definition of the active and inactive points is the same as on the fine grid. If the i-th point is active, the left hand side of (4.15) becomes N_H(u_H)_i = I_H(u_H)_i = u_{H,i}; to keep (4.14) and (4.15) consistent, we let b_{H,i} = ψ_{H,i} + (R r̂^k_h + N_H R û^k_h)_i, and the equation becomes u_{H,i} = ψ_{H,i} + (R r̂^k_h + N_H R û^k_h)_i. If the point is inactive, the left hand side of (4.15) is A_H(u_H)_i, and we let b_{H,i} = (R r̂^k_h + N_H R û^k_h)_i, so the equation reads A_H(u_H)_i = (R r̂^k_h + N_H R û^k_h)_i. We obtain a hierarchy of coarse grid operators by calling (4.16) recursively, which forms a multigrid method. The operator introduced here is quite different from the direct discretization operator: the matrix A_H of the direct discretization is independent of the fine grid matrix N_h, whereas A_H in (4.16) depends on N_h.

N_H for the Minimal Surface Obstacle Problem

Since the Galerkin operator results in fast convergence for the membrane constrained obstacle problem, it is natural to apply it to the minimal surface obstacle problem as well. However, this operator does not work well there. The Laplacian operator A^M_h in (2.38) for the membrane constrained obstacle problem is independent of the approximate solution u_h, so A^M_H should also be independent of u_H on

the coarse grid. We choose the Galerkin operator and define A^M_H = R N_h P; this choice preserves the independence of A^M_H from u_H. On the other hand, the minimal surface operator A^{MS}_h in (2.41) depends on the approximate solution u_h, so the operator A^{MS}_H on the coarse grid should also be determined by u_H. However, the Galerkin operator gives A^{MS}_H = R N_h P, which is independent of u_H, so the A^{MS}_H given by the Galerkin operator is not accurate for the minimal surface obstacle problem. Instead, we use the direct discretization operator as the coarse grid operator. This operator depends on u_H on the coarse grid and turns out to converge fast.

4.4 FAS Applied to a Linear Problem

Here we show that the FAS in Algorithm 4 is equivalent to the V-cycle in Algorithm 3 when solving linear problems. Assume we solve the linear problem (3.1) with the FAS; N_h and N_H are then A_h and A_H, respectively, where A_h is the linear operator defined in (2.38). The step

    b_H = r^k_H + N_H(ũ^k_H)    (4.17)

in the FAS is now equivalent to

    b_H = r^k_H + A_H(ũ^k_H).    (4.18)

Following the FAS, we obtain A_H(ǔ^k_H) = r^k_H + A_H(ũ^k_H). Rearranging, we get A_H(ǔ^k_H - ũ^k_H) = r^k_H. Since we update the coarse grid error by ε^k_H = ǔ^k_H - ũ^k_H, this is the same as solving A_H ε^k_H = r^k_H, which is exactly the coarse grid step of the V-cycle in Algorithm 3. With the coarse grid error, the remaining steps of the FAS and the V-cycle are the same. This proves that applying the FAS to a linear problem is the same as the V-cycle.

Chapter 5  Numerical Results

In this chapter, we demonstrate numerical solutions of one-dimensional (1D) and two-dimensional (2D) obstacle problems solved by the V-cycle FAS scheme proposed in Chapter 4. Two pre-smoothing and two post-smoothing iterations are applied. The stopping criterion for all the obstacle problems in this chapter is that the norm of the residual at the inactive grid points is less than 10^{-6}. More precisely, consider the nonlinear problem N_h(u_h) = b_h and the residual r_h = N_h(û_h) - b_h, and let

    r^{in}_{h,i} = { r_{h,i}, if the i-th grid point is inactive,
                     0,       otherwise.    (5.1)

The stopping criterion is ||r^{in}_h|| < 10^{-6}. All the 1D examples in this chapter share the same domain Ω = (0, 1) ⊂ R, and all the 2D examples are on Ω = (0, 1)^2 ⊂ R^2. Dirichlet boundary conditions are used for all examples. The obstacles considered are the square obstacle (or block obstacle) and the circle obstacle (or dome obstacle); Figure 5.1 shows examples of 1D and 2D obstacles.

5.1 1D Block Obstacle Problem with Laplacian Operator

We consider a 1D lower obstacle problem subject to a lower block obstacle

    u(x) ≥ ψ(x) = { 1, 11/32 ≤ x ≤ 21/32,
                    0, otherwise.    (5.2)

As shown in Chapter 2, the HJB equation corresponding to the 1D discrete obstacle problem can be written as

    min[A_h u_h - f_h, u_h - ψ_h] = 0,    (5.3)

Figure 5.1: Pictures of different obstacles. (a) 1D block obstacle, (b) 1D dome obstacle, (c) 2D block obstacle, (d) 2D dome obstacle.

Table 5.1: Iterations of the FAS for the 1D block problem.

where A_h is the tridiagonal symmetric positive definite matrix

    A_h = [  2 -1
            -1  2 -1
                 ...
                -1  2 ],    (5.4)

u_h is the discretized approximate solution and f_h is the zero vector. We apply the V-cycle FAS (given in Algorithm 4) to solve the HJB equation. Table 5.1 shows the FAS iteration counts for different grid sizes and numbers of levels. The number of levels here means the total number of fine plus coarse grids used in the whole computation; for example, grid size 1/64 with 5 levels means we have a total of 5 grids, of sizes 1/64, 1/32, 1/16, 1/8 and 1/4. Figure 5.2 shows the shape of the obstacle and the numerical solution of this 1D obstacle problem.

Figure 5.2: 1D block obstacle result, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the obstacle and the numerical solution.
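As an independent check of the solution structure in Figure 5.2, the sketch below solves the same discrete problem by projected Gauss-Seidel iteration alone (illustrative code; the contact interval [11/32, 21/32] is the block of (5.2) as reconstructed here, with f = 0). The converged membrane equals 1 on the block and is piecewise linear on either side.

```python
import numpy as np

# 1D block obstacle problem (5.3) with f = 0, solved by plain
# projected Gauss-Seidel (no multigrid), for reference.
n, h = 63, 1.0 / 64
x = np.arange(1, n + 1) * h
psi = np.where((x >= 11 / 32) & (x <= 21 / 32), 1.0, 0.0)
u = np.zeros(n)
for _ in range(5000):
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        u[i] = max(0.5 * (left + right), psi[i])   # GS step, then projection
```

On the contact set u equals the obstacle, and outside it u is the discrete harmonic (linear) extension to the zero boundary values, exactly the shape plotted in Figure 5.2.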

Table 5.2: Iterations of the FAS for the 1D dome problem.

5.2 1D Dome Obstacle Problem with Laplacian Operator

The HJB equation of the 1D dome obstacle problem is the same as for the 1D block obstacle except for the obstacle function. For the block obstacle, the membrane contacts the obstacle exactly on the surface of the obstacle; for the dome obstacle this is not the case. Figure 5.3 (right) shows the difference between the numerical solution and the obstacle, from which we can see that the membrane does not stick to the obstacle everywhere, leaving several grid points on the border of the obstacle inactive. Since the active and inactive sets depend not only on the shape of the dome obstacle but also on the approximate solution, the obstacle problem with the dome obstacle is more difficult than the one with the block obstacle. The problem we consider here has the constraint

    u(x) ≥ ψ(x) = { 1 - 16(x - 0.5)^2, 31/128 ≤ x ≤ 97/128,
                    0,                 otherwise.    (5.5)

The dome obstacle problem shares the HJB equation (5.3), except that the obstacle function ψ is given by (5.5). Using the V-cycle FAS, the iteration counts are presented in Table 5.2. The obstacle, the numerical solution and the difference between the two are illustrated in Figure 5.3; the difference indicates the contact region between the obstacle and the approximate solution.

Table 5.1 and Table 5.2 show that the FAS iteration counts for the 1D block and dome obstacle problems are independent of the finest grid size and close to constant. For instance, for the 1D block obstacle problem, the iteration counts for different grid sizes are about 4 to 5.

Figure 5.3: 1D dome obstacle result, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the numerical solution and the obstacle.

Table 5.3: Iterations of the FAS for the 2D square problem.

5.3 2D Square Obstacle Problem with Laplacian Operator

The 2D obstacle problem considered here is the model problem (3.1). The obstacle is

    u(x, y) ≥ ψ(x, y) = { 1, 13/32 ≤ x ≤ 19/32, 13/32 ≤ y ≤ 19/32,
                          0, otherwise.    (5.6)

Using the same discretization, the matrix form (4.1) and the HJB equation (2.42), we apply the FAS to solve this problem iteratively; the iteration counts are given in Table 5.3. Figure 5.4 shows the 2D square obstacle, the numerical solution and the contact information between the numerical solution and the obstacle.

Figure 5.4: 2D square obstacle result, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the numerical solution and the obstacle.

5.4 2D Dome Obstacle Problem with Laplacian Operator

In this section, we give numerical examples for the 2D dome obstacle problem. Similar to the 2D square obstacle problem, we use the same HJB equation, but the constraint for the dome obstacle is

    u(x, y) ≥ ψ(x, y) = max{0, 1 - ((x - 0.5)^2 + (y - 0.5)^2)}.    (5.7)

The FAS iteration counts are similar to those of the square obstacle problem, as shown in Table 5.4. Figure 5.5 shows the obstacle, the solution and the difference for the 2D dome obstacle problem.

We compare our method with the methods proposed in [3] and [7]. Both solve the elasto-plastic torsion problem, which is similar to our numerical example here: the problem considered in both papers is on the domain Ω = (0, 1)^2 with the same right hand side. The only difference of the elasto-plastic torsion problem compared with our example is the shape of the obstacle; the obstacle considered in both papers leaves the inactive set a small cross-shaped region aligned with the two diagonals. In [3], finite element methods are used for the discretization and monotone multigrid methods are applied. To compare our method with that of [3], we define the iterative error as ε^k_h = ū_h - û^k_h, where û^k_h is the approximate solution after the k-th iteration.

Grid Size (h) | Levels | FAS Iterations
Table 5.4: Iterations of the FAS for the 2D dome problem

Figure 5.5: 2D Dome obstacle result, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the numerical solution and the obstacle.

Here ū is the exact numerical solution, computed beforehand by some method with stopping criterion ‖r‖∞ < 10^-9. The comparison of the numbers of iterations needed to achieve different iterative errors with the FAS and with the TRCKH method of [3] is shown in Table 5.5. From the table, the FAS requires fewer iterations than the TRCKH; roughly half the iterations of the TRCKH suffice to achieve the same iterative error. Since the V-cycle multigrid method is used in both the TRCKH and our FAS, the computational cost per iteration of the two methods is of the same order. We can conclude that the V-cycle FAS is better than the TRCKH at reducing the iterative error.

In [7], a multigrid F-cycle is presented. Three iterations of recombination are applied after each F-cycle. In each recombination, a linear combination of the six latest approximate solutions û_{k−i}, i = 0, 1, 2, 3, 4, 5, is computed with special parameters, determined in such a way that the residual of the current problem is minimized. The recombination procedure is not expensive if the parameters are chosen

properly. However, the computation in each F-cycle is more expensive than that in one V-cycle, as explained in Chapter 3.

Iterative Error | Number of Iterations for TRCKH in [3] | Number of Iterations for FAS
Table 5.5: Comparison of iterative errors

Grid Size (h) | F-cycle in [7] | FAS
Table 5.6: Comparison of the convergence rate between the F-cycle and the FAS

Table 5.6 shows the convergence rates for the F-cycle and our V-cycle FAS, where h is the grid size. We can see from the table that, for both grid sizes, the FAS converges faster than the F-cycle in [7].

5.5 2D Dome Obstacle Problem with Minimal Surface Operator

Now we consider an obstacle problem with the minimal surface operator. The obstacle considered here is the same as for the 2D dome obstacle problem with the Laplacian operator, that is, (5.7). The convergence results for solving the corresponding HJB equations with the V-cycle FAS algorithm are given in Table 5.7. Figure 5.6 shows the obstacle, the numerical solution and the difference between them. From the figure, we can see that the solution of the minimal surface obstacle problem contacts the obstacle more tightly. Table 5.8 shows the residuals in each iteration computed by the FAS for grid sizes h = 1/16 and h = 1/32. For comparison, we have listed the residuals reported in [1] and [2]. It is noticeable that the residuals in both [1] and [2] are reduced faster than with the FAS method. However, the W-cycle multigrid is applied in both papers. Considering that the computation in each W-cycle is about 1.5 times that in one V-cycle (cf. Table 3.1), the total computational work of 9 iterations of our FAS is about the same as that of 6 iterations of the W-cycle. From the table, we see that at the 9th iteration of our method, the residual is reduced to 8.0E-9 (for grid size 1/32). However, for the methods in [1] and [2], the residual after

Figure 5.6: 2D circle obstacle with minimal surface operator, with grid size 1/64 and 5 levels. (left) Obstacle, (middle) numerical solution, (right) difference between the numerical solution and the obstacle.

Grid Size (h) | Levels | FAS Iterations
Table 5.7: Iterations of the FAS for the 2D dome problem with the minimal surface operator

the 6th iteration is reduced only to 5.9E-7 and 2.2E-7, respectively. From the table, we can conclude that the V-cycle FAS converges faster than the W-cycle in [1] and [2].

It is claimed in [2] that any choice of decomposition of the obstacle function ψ ends up with the same convergence rate. However, the number of domain decompositions may affect the rate of convergence. Table 5.9 presents the convergence rates for the FAS we used to solve the minimal surface problem and for the space decomposition and subspace correction algorithms used in [2]. In the table, J is the number of decompositions in the domain decomposition methods. We note that the numerical experiment given in [2] is not exactly the same as ours: a slightly different obstacle on the domain Ω = (−2, 2)² is used in [2]. However, it is quite similar to our example because of the same shape of the obstacle and the same size of the fine grid, that is, h = L/128 with L = 4 here. From the table, we can conclude that the FAS converges faster than the domain decomposition method in [2].
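The nonlinear operator behind this section, N(u) = −∇·(∇u / √(1 + |∇u|²)), can be evaluated on a uniform grid with central differences. The sketch below is a simple collocation-style evaluation using np.gradient — an assumed, generic discretization for illustration, not necessarily the one used in the thesis:

```python
import numpy as np

def minimal_surface_operator(u, h):
    """Pointwise evaluation of N(u) = -div( grad u / sqrt(1 + |grad u|^2) )
    using central differences (np.gradient) on a uniform grid of spacing h."""
    ux, uy = np.gradient(u, h, h)            # first derivatives
    w = np.sqrt(1.0 + ux ** 2 + uy ** 2)     # surface-area weight
    px, _ = np.gradient(ux / w, h, h)        # x-derivative of the x-flux
    _, qy = np.gradient(uy / w, h, h)        # y-derivative of the y-flux
    return -(px + qy)

# Sanity check: planes are minimal surfaces, so N(u) vanishes for affine u.
n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = 0.3 * X + 0.2 * Y
print(float(np.abs(minimal_surface_operator(u, h)).max()) < 1e-10)  # True
```

The gradient-dependent weight w is what makes the operator nonlinear; for small |∇u| it tends to 1 and N(u) approaches −Δu, which is why a Galerkin coarse operator that is adequate for the Laplacian case loses accuracy here.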

Introduction to Multigrid Method

Introduction to Multigrid Method Introduction to Multigrid Metod Presented by: Bogojeska Jasmina /08/005 JASS, 005, St. Petersburg 1 Te ultimate upsot of MLAT Te amount of computational work sould be proportional to te amount of real

More information

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx. Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions

More information

A h u h = f h. 4.1 The CoarseGrid SystemandtheResidual Equation

A h u h = f h. 4.1 The CoarseGrid SystemandtheResidual Equation Capter Grid Transfer Remark. Contents of tis capter. Consider a grid wit grid size and te corresponding linear system of equations A u = f. Te summary given in Section 3. leads to te idea tat tere migt

More information

Chapter 5 FINITE DIFFERENCE METHOD (FDM)

Chapter 5 FINITE DIFFERENCE METHOD (FDM) MEE7 Computer Modeling Tecniques in Engineering Capter 5 FINITE DIFFERENCE METHOD (FDM) 5. Introduction to FDM Te finite difference tecniques are based upon approximations wic permit replacing differential

More information

New Streamfunction Approach for Magnetohydrodynamics

New Streamfunction Approach for Magnetohydrodynamics New Streamfunction Approac for Magnetoydrodynamics Kab Seo Kang Brooaven National Laboratory, Computational Science Center, Building 63, Room, Upton NY 973, USA. sang@bnl.gov Summary. We apply te finite

More information

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems 5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we

More information

Multigrid Methods for Discretized PDE Problems

Multigrid Methods for Discretized PDE Problems Towards Metods for Discretized PDE Problems Institute for Applied Matematics University of Heidelberg Feb 1-5, 2010 Towards Outline A model problem Solution of very large linear systems Iterative Metods

More information

lecture 26: Richardson extrapolation

lecture 26: Richardson extrapolation 43 lecture 26: Ricardson extrapolation 35 Ricardson extrapolation, Romberg integration Trougout numerical analysis, one encounters procedures tat apply some simple approximation (eg, linear interpolation)

More information

Finite Difference Method

Finite Difference Method Capter 8 Finite Difference Metod 81 2nd order linear pde in two variables General 2nd order linear pde in two variables is given in te following form: L[u] = Au xx +2Bu xy +Cu yy +Du x +Eu y +Fu = G According

More information

arxiv: v1 [math.na] 3 Nov 2011

arxiv: v1 [math.na] 3 Nov 2011 arxiv:.983v [mat.na] 3 Nov 2 A Finite Difference Gost-cell Multigrid approac for Poisson Equation wit mixed Boundary Conditions in Arbitrary Domain Armando Coco, Giovanni Russo November 7, 2 Abstract In

More information

Notes on Multigrid Methods

Notes on Multigrid Methods Notes on Multigrid Metods Qingai Zang April, 17 Motivation of multigrids. Te convergence rates of classical iterative metod depend on te grid spacing, or problem size. In contrast, convergence rates of

More information

Differentiation in higher dimensions

Differentiation in higher dimensions Capter 2 Differentiation in iger dimensions 2.1 Te Total Derivative Recall tat if f : R R is a 1-variable function, and a R, we say tat f is differentiable at x = a if and only if te ratio f(a+) f(a) tends

More information

Numerical Differentiation

Numerical Differentiation Numerical Differentiation Finite Difference Formulas for te first derivative (Using Taylor Expansion tecnique) (section 8.3.) Suppose tat f() = g() is a function of te variable, and tat as 0 te function

More information

Polynomial Interpolation

Polynomial Interpolation Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximatinga function fx, wose values at a set of distinct points x, x, x,, x n are known, by a polynomial P x suc

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

MANY scientific and engineering problems can be

MANY scientific and engineering problems can be A Domain Decomposition Metod using Elliptical Arc Artificial Boundary for Exterior Problems Yajun Cen, and Qikui Du Abstract In tis paper, a Diriclet-Neumann alternating metod using elliptical arc artificial

More information

Notes on wavefunctions II: momentum wavefunctions

Notes on wavefunctions II: momentum wavefunctions Notes on wavefunctions II: momentum wavefunctions and uncertainty Te state of a particle at any time is described by a wavefunction ψ(x). Tese wavefunction must cange wit time, since we know tat particles

More information

c 2006 Society for Industrial and Applied Mathematics

c 2006 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 27, No. 4, pp. 47 492 c 26 Society for Industrial and Applied Matematics A NOVEL MULTIGRID BASED PRECONDITIONER FOR HETEROGENEOUS HELMHOLTZ PROBLEMS Y. A. ERLANGGA, C. W. OOSTERLEE,

More information

Crouzeix-Velte Decompositions and the Stokes Problem

Crouzeix-Velte Decompositions and the Stokes Problem Crouzeix-Velte Decompositions and te Stokes Problem PD Tesis Strauber Györgyi Eötvös Loránd University of Sciences, Insitute of Matematics, Matematical Doctoral Scool Director of te Doctoral Scool: Dr.

More information

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines Lecture 5 Interpolation II Introduction In te previous lecture we focused primarily on polynomial interpolation of a set of n points. A difficulty we observed is tat wen n is large, our polynomial as to

More information

Polynomial Interpolation

Polynomial Interpolation Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximating a function f(x, wose values at a set of distinct points x, x, x 2,,x n are known, by a polynomial P (x

More information

Preconditioning in H(div) and Applications

Preconditioning in H(div) and Applications 1 Preconditioning in H(div) and Applications Douglas N. Arnold 1, Ricard S. Falk 2 and Ragnar Winter 3 4 Abstract. Summarizing te work of [AFW97], we sow ow to construct preconditioners using domain decomposition

More information

Finite Difference Methods Assignments

Finite Difference Methods Assignments Finite Difference Metods Assignments Anders Söberg and Aay Saxena, Micael Tuné, and Maria Westermarck Revised: Jarmo Rantakokko June 6, 1999 Teknisk databeandling Assignment 1: A one-dimensional eat equation

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

Chapter 4: Numerical Methods for Common Mathematical Problems

Chapter 4: Numerical Methods for Common Mathematical Problems 1 Capter 4: Numerical Metods for Common Matematical Problems Interpolation Problem: Suppose we ave data defined at a discrete set of points (x i, y i ), i = 0, 1,..., N. Often it is useful to ave a smoot

More information

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations International Journal of Applied Science and Engineering 2013. 11, 4: 361-373 Parameter Fitted Sceme for Singularly Perturbed Delay Differential Equations Awoke Andargiea* and Y. N. Reddyb a b Department

More information

Research Article Cubic Spline Iterative Method for Poisson s Equation in Cylindrical Polar Coordinates

Research Article Cubic Spline Iterative Method for Poisson s Equation in Cylindrical Polar Coordinates International Scolarly Researc Network ISRN Matematical Pysics Volume 202, Article ID 2456, pages doi:0.5402/202/2456 Researc Article Cubic Spline Iterative Metod for Poisson s Equation in Cylindrical

More information

Superconvergence of energy-conserving discontinuous Galerkin methods for. linear hyperbolic equations. Abstract

Superconvergence of energy-conserving discontinuous Galerkin methods for. linear hyperbolic equations. Abstract Superconvergence of energy-conserving discontinuous Galerkin metods for linear yperbolic equations Yong Liu, Ci-Wang Su and Mengping Zang 3 Abstract In tis paper, we study superconvergence properties of

More information

Convergence and Descent Properties for a Class of Multilevel Optimization Algorithms

Convergence and Descent Properties for a Class of Multilevel Optimization Algorithms Convergence and Descent Properties for a Class of Multilevel Optimization Algoritms Stepen G. Nas April 28, 2010 Abstract I present a multilevel optimization approac (termed MG/Opt) for te solution of

More information

Combining functions: algebraic methods

Combining functions: algebraic methods Combining functions: algebraic metods Functions can be added, subtracted, multiplied, divided, and raised to a power, just like numbers or algebra expressions. If f(x) = x 2 and g(x) = x + 2, clearly f(x)

More information

How to Find the Derivative of a Function: Calculus 1

How to Find the Derivative of a Function: Calculus 1 Introduction How to Find te Derivative of a Function: Calculus 1 Calculus is not an easy matematics course Te fact tat you ave enrolled in suc a difficult subject indicates tat you are interested in te

More information

Order of Accuracy. ũ h u Ch p, (1)

Order of Accuracy. ũ h u Ch p, (1) Order of Accuracy 1 Terminology We consider a numerical approximation of an exact value u. Te approximation depends on a small parameter, wic can be for instance te grid size or time step in a numerical

More information

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER*

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER* EO BOUNDS FO THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BADLEY J. LUCIE* Abstract. Te expected error in L ) attimet for Glimm s sceme wen applied to a scalar conservation law is bounded by + 2 ) ) /2 T

More information

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems Applied Matematics, 06, 7, 74-8 ttp://wwwscirporg/journal/am ISSN Online: 5-7393 ISSN Print: 5-7385 Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for

More information

Robotic manipulation project

Robotic manipulation project Robotic manipulation project Bin Nguyen December 5, 2006 Abstract Tis is te draft report for Robotic Manipulation s class project. Te cosen project aims to understand and implement Kevin Egan s non-convex

More information

Efficient algorithms for for clone items detection

Efficient algorithms for for clone items detection Efficient algoritms for for clone items detection Raoul Medina, Caroline Noyer, and Olivier Raynaud Raoul Medina, Caroline Noyer and Olivier Raynaud LIMOS - Université Blaise Pascal, Campus universitaire

More information

The Laplace equation, cylindrically or spherically symmetric case

The Laplace equation, cylindrically or spherically symmetric case Numerisce Metoden II, 7 4, und Übungen, 7 5 Course Notes, Summer Term 7 Some material and exercises Te Laplace equation, cylindrically or sperically symmetric case Electric and gravitational potential,

More information

232 Calculus and Structures

232 Calculus and Structures 3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE

More information

arxiv: v1 [math.na] 7 Mar 2019

arxiv: v1 [math.na] 7 Mar 2019 Local Fourier analysis for mixed finite-element metods for te Stokes equations Yunui He a,, Scott P. MacLaclan a a Department of Matematics and Statistics, Memorial University of Newfoundland, St. Jon

More information

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x)

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x) Calculus. Gradients and te Derivative Q f(x+) δy P T δx R f(x) 0 x x+ Let P (x, f(x)) and Q(x+, f(x+)) denote two points on te curve of te function y = f(x) and let R denote te point of intersection of

More information

1. Fast Iterative Solvers of SLE

1. Fast Iterative Solvers of SLE 1. Fast Iterative Solvers of crucial drawback of solvers discussed so far: they become slower if we discretize more accurate! now: look for possible remedies relaxation: explicit application of the multigrid

More information

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016.

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016. Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals Gary D. Simpson gsim1887@aol.com rev 1 Aug 8, 216 Summary Definitions are presented for "quaternion functions" of a quaternion. Polynomial

More information

4.2 - Richardson Extrapolation

4.2 - Richardson Extrapolation . - Ricardson Extrapolation. Small-O Notation: Recall tat te big-o notation used to define te rate of convergence in Section.: Definition Let x n n converge to a number x. Suppose tat n n is a sequence

More information

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4. December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need

More information

Aspects of Multigrid

Aspects of Multigrid Aspects of Multigrid Kees Oosterlee 1,2 1 Delft University of Technology, Delft. 2 CWI, Center for Mathematics and Computer Science, Amsterdam, SIAM Chapter Workshop Day, May 30th 2018 C.W.Oosterlee (CWI)

More information

Poisson Equation in Sobolev Spaces

Poisson Equation in Sobolev Spaces Poisson Equation in Sobolev Spaces OcMountain Dayligt Time. 6, 011 Today we discuss te Poisson equation in Sobolev spaces. It s existence, uniqueness, and regularity. Weak Solution. u = f in, u = g on

More information

HOMEWORK HELP 2 FOR MATH 151

HOMEWORK HELP 2 FOR MATH 151 HOMEWORK HELP 2 FOR MATH 151 Here we go; te second round of omework elp. If tere are oters you would like to see, let me know! 2.4, 43 and 44 At wat points are te functions f(x) and g(x) = xf(x)continuous,

More information

MATH745 Fall MATH745 Fall

MATH745 Fall MATH745 Fall MATH745 Fall 5 MATH745 Fall 5 INTRODUCTION WELCOME TO MATH 745 TOPICS IN NUMERICAL ANALYSIS Instructor: Dr Bartosz Protas Department of Matematics & Statistics Email: bprotas@mcmasterca Office HH 36, Ext

More information

Symmetry Labeling of Molecular Energies

Symmetry Labeling of Molecular Energies Capter 7. Symmetry Labeling of Molecular Energies Notes: Most of te material presented in tis capter is taken from Bunker and Jensen 1998, Cap. 6, and Bunker and Jensen 2005, Cap. 7. 7.1 Hamiltonian Symmetry

More information

The Verlet Algorithm for Molecular Dynamics Simulations

The Verlet Algorithm for Molecular Dynamics Simulations Cemistry 380.37 Fall 2015 Dr. Jean M. Standard November 9, 2015 Te Verlet Algoritm for Molecular Dynamics Simulations Equations of motion For a many-body system consisting of N particles, Newton's classical

More information

Gradient Descent etc.

Gradient Descent etc. 1 Gradient Descent etc EE 13: Networked estimation and control Prof Kan) I DERIVATIVE Consider f : R R x fx) Te derivative is defined as d fx) = lim dx fx + ) fx) Te cain rule states tat if d d f gx) )

More information

Exam 1 Review Solutions

Exam 1 Review Solutions Exam Review Solutions Please also review te old quizzes, and be sure tat you understand te omework problems. General notes: () Always give an algebraic reason for your answer (graps are not sufficient),

More information

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY (Section 3.2: Derivative Functions and Differentiability) 3.2.1 SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY LEARNING OBJECTIVES Know, understand, and apply te Limit Definition of te Derivative

More information

Solving PDEs with Multigrid Methods p.1

Solving PDEs with Multigrid Methods p.1 Solving PDEs with Multigrid Methods Scott MacLachlan maclachl@colorado.edu Department of Applied Mathematics, University of Colorado at Boulder Solving PDEs with Multigrid Methods p.1 Support and Collaboration

More information

Mass Lumping for Constant Density Acoustics

Mass Lumping for Constant Density Acoustics Lumping 1 Mass Lumping for Constant Density Acoustics William W. Symes ABSTRACT Mass lumping provides an avenue for efficient time-stepping of time-dependent problems wit conforming finite element spatial

More information

(4.2) -Richardson Extrapolation

(4.2) -Richardson Extrapolation (.) -Ricardson Extrapolation. Small-O Notation: Recall tat te big-o notation used to define te rate of convergence in Section.: Suppose tat lim G 0 and lim F L. Te function F is said to converge to L as

More information

A Hybrid Mixed Discontinuous Galerkin Finite Element Method for Convection-Diffusion Problems

A Hybrid Mixed Discontinuous Galerkin Finite Element Method for Convection-Diffusion Problems A Hybrid Mixed Discontinuous Galerkin Finite Element Metod for Convection-Diffusion Problems Herbert Egger Joacim Scöberl We propose and analyse a new finite element metod for convection diffusion problems

More information

Material for Difference Quotient

Material for Difference Quotient Material for Difference Quotient Prepared by Stepanie Quintal, graduate student and Marvin Stick, professor Dept. of Matematical Sciences, UMass Lowell Summer 05 Preface Te following difference quotient

More information

Derivatives of Exponentials

Derivatives of Exponentials mat 0 more on derivatives: day 0 Derivatives of Eponentials Recall tat DEFINITION... An eponential function as te form f () =a, were te base is a real number a > 0. Te domain of an eponential function

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL IFFERENTIATION FIRST ERIVATIVES Te simplest difference formulas are based on using a straigt line to interpolate te given data; tey use two data pints to estimate te derivative. We assume tat

More information

AN EFFICIENT AND ROBUST METHOD FOR SIMULATING TWO-PHASE GEL DYNAMICS

AN EFFICIENT AND ROBUST METHOD FOR SIMULATING TWO-PHASE GEL DYNAMICS AN EFFICIENT AND ROBUST METHOD FOR SIMULATING TWO-PHASE GEL DYNAMICS GRADY B. WRIGHT, ROBERT D. GUY, AND AARON L. FOGELSON Abstract. We develop a computational metod for simulating models of gel dynamics

More information

3.4 Worksheet: Proof of the Chain Rule NAME

3.4 Worksheet: Proof of the Chain Rule NAME Mat 1170 3.4 Workseet: Proof of te Cain Rule NAME Te Cain Rule So far we are able to differentiate all types of functions. For example: polynomials, rational, root, and trigonometric functions. We are

More information

Department of Mathematical Sciences University of South Carolina Aiken Aiken, SC 29801

Department of Mathematical Sciences University of South Carolina Aiken Aiken, SC 29801 RESEARCH SUMMARY AND PERSPECTIVES KOFFI B. FADIMBA Department of Matematical Sciences University of Sout Carolina Aiken Aiken, SC 29801 Email: KoffiF@usca.edu 1. Introduction My researc program as focused

More information

Solving Continuous Linear Least-Squares Problems by Iterated Projection

Solving Continuous Linear Least-Squares Problems by Iterated Projection Solving Continuous Linear Least-Squares Problems by Iterated Projection by Ral Juengling Department o Computer Science, Portland State University PO Box 75 Portland, OR 977 USA Email: juenglin@cs.pdx.edu

More information

Dedicated to the 70th birthday of Professor Lin Qun

Dedicated to the 70th birthday of Professor Lin Qun Journal of Computational Matematics, Vol.4, No.3, 6, 4 44. ACCELERATION METHODS OF NONLINEAR ITERATION FOR NONLINEAR PARABOLIC EQUATIONS Guang-wei Yuan Xu-deng Hang Laboratory of Computational Pysics,

More information

A First-Order System Approach for Diffusion Equation. I. Second-Order Residual-Distribution Schemes

A First-Order System Approach for Diffusion Equation. I. Second-Order Residual-Distribution Schemes A First-Order System Approac for Diffusion Equation. I. Second-Order Residual-Distribution Scemes Hiroaki Nisikawa W. M. Keck Foundation Laboratory for Computational Fluid Dynamics, Department of Aerospace

More information

EXTENSION OF A POSTPROCESSING TECHNIQUE FOR THE DISCONTINUOUS GALERKIN METHOD FOR HYPERBOLIC EQUATIONS WITH APPLICATION TO AN AEROACOUSTIC PROBLEM

EXTENSION OF A POSTPROCESSING TECHNIQUE FOR THE DISCONTINUOUS GALERKIN METHOD FOR HYPERBOLIC EQUATIONS WITH APPLICATION TO AN AEROACOUSTIC PROBLEM SIAM J. SCI. COMPUT. Vol. 26, No. 3, pp. 821 843 c 2005 Society for Industrial and Applied Matematics ETENSION OF A POSTPROCESSING TECHNIQUE FOR THE DISCONTINUOUS GALERKIN METHOD FOR HYPERBOLIC EQUATIONS

More information

2.8 The Derivative as a Function

2.8 The Derivative as a Function .8 Te Derivative as a Function Typically, we can find te derivative of a function f at many points of its domain: Definition. Suppose tat f is a function wic is differentiable at every point of an open

More information

Finding and Using Derivative The shortcuts

Finding and Using Derivative The shortcuts Calculus 1 Lia Vas Finding and Using Derivative Te sortcuts We ave seen tat te formula f f(x+) f(x) (x) = lim 0 is manageable for relatively simple functions like a linear or quadratic. For more complex

More information

1 Upwind scheme for advection equation with variable. 2 Modified equations: numerical dissipation and dispersion

1 Upwind scheme for advection equation with variable. 2 Modified equations: numerical dissipation and dispersion 1 Upwind sceme for advection equation wit variable coefficient Consider te equation u t + a(x)u x Applying te upwind sceme, we ave u n 1 = a (un u n 1), a 0 u n 1 = a (un +1 u n ) a < 0. CFL condition

More information

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t).

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t). . Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd, periodic function tat as been sifted upwards, so we will use

More information

FEM solution of the ψ-ω equations with explicit viscous diffusion 1

FEM solution of the ψ-ω equations with explicit viscous diffusion 1 FEM solution of te ψ-ω equations wit explicit viscous diffusion J.-L. Guermond and L. Quartapelle 3 Abstract. Tis paper describes a variational formulation for solving te D time-dependent incompressible

More information

Quantum Theory of the Atomic Nucleus

Quantum Theory of the Atomic Nucleus G. Gamow, ZP, 51, 204 1928 Quantum Teory of te tomic Nucleus G. Gamow (Received 1928) It as often been suggested tat non Coulomb attractive forces play a very important role inside atomic nuclei. We can

More information

Stability properties of a family of chock capturing methods for hyperbolic conservation laws

Stability properties of a family of chock capturing methods for hyperbolic conservation laws Proceedings of te 3rd IASME/WSEAS Int. Conf. on FLUID DYNAMICS & AERODYNAMICS, Corfu, Greece, August 0-, 005 (pp48-5) Stability properties of a family of cock capturing metods for yperbolic conservation

More information

The Priestley-Chao Estimator

The Priestley-Chao Estimator Te Priestley-Cao Estimator In tis section we will consider te Pristley-Cao estimator of te unknown regression function. It is assumed tat we ave a sample of observations (Y i, x i ), i = 1,..., n wic are

More information

2.1 THE DEFINITION OF DERIVATIVE

2.1 THE DEFINITION OF DERIVATIVE 2.1 Te Derivative Contemporary Calculus 2.1 THE DEFINITION OF DERIVATIVE 1 Te grapical idea of a slope of a tangent line is very useful, but for some uses we need a more algebraic definition of te derivative

More information

AMS 147 Computational Methods and Applications Lecture 09 Copyright by Hongyun Wang, UCSC. Exact value. Effect of round-off error.

AMS 147 Computational Methods and Applications Lecture 09 Copyright by Hongyun Wang, UCSC. Exact value. Effect of round-off error. Lecture 09 Copyrigt by Hongyun Wang, UCSC Recap: Te total error in numerical differentiation fl( f ( x + fl( f ( x E T ( = f ( x Numerical result from a computer Exact value = e + f x+ Discretization error

More information

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note

More information

Click here to see an animation of the derivative

Click here to see an animation of the derivative Differentiation Massoud Malek Derivative Te concept of derivative is at te core of Calculus; It is a very powerful tool for understanding te beavior of matematical functions. It allows us to optimize functions,

More information

Regularized Regression

Regularized Regression Regularized Regression David M. Blei Columbia University December 5, 205 Modern regression problems are ig dimensional, wic means tat te number of covariates p is large. In practice statisticians regularize

More information

Simulation and verification of a plate heat exchanger with a built-in tap water accumulator

Simulation and verification of a plate heat exchanger with a built-in tap water accumulator Simulation and verification of a plate eat excanger wit a built-in tap water accumulator Anders Eriksson Abstract In order to test and verify a compact brazed eat excanger (CBE wit a built-in accumulation

More information

Numerical Solution to Parabolic PDE Using Implicit Finite Difference Approach

Numerical Solution to Parabolic PDE Using Implicit Finite Difference Approach Numerical Solution to arabolic DE Using Implicit Finite Difference Approac Jon Amoa-Mensa, Francis Oene Boateng, Kwame Bonsu Department of Matematics and Statistics, Sunyani Tecnical University, Sunyani,


CS522 - Partial Di erential Equations

CS522 - Partial Di erential Equations CS5 - Partial Di erential Equations Tibor Jánosi April 5, 5 Numerical Di erentiation In principle, di erentiation is a simple operation. Indeed, given a function speci ed as a closed-form formula, its


Derivatives. By: OpenStaxCollege

The average teen in the United States opens a refrigerator door an estimated 25 times per day. Supposedly, this average is up from 10 years ago when the average teenager opened a refrigerator


Some Review Problems for First Midterm Mathematics 1300, Calculus 1

Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t). This function appears to be an odd,


Preface. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.

Here are my online notes for my course that I teach here at Lamar University. Despite the fact that these are my class notes, they should be accessible to anyone wanting to learn or needing a refresher


1. Introduction. We consider the model problem: seeking an unknown function u satisfying

A DISCONTINUOUS LEAST-SQUARES FINITE ELEMENT METHOD FOR SECOND ORDER ELLIPTIC EQUATIONS. Xiu Ye and Shangyou Zhang. Abstract: In this paper, a discontinuous least-squares (DLS) finite element method is introduced


A = h w (1) Error Analysis Physics 141

Introduction: In all branches of physical science and engineering one deals constantly with numbers which result more or less directly from experimental observations. Experimental observations always have inaccuracies.


The derivative function

Roberto's Notes on Differential Calculus, Chapter: Definition of derivative. Section: The derivative function. What you need to know already: what the derivative of f is at a point on its graph and how to compute it. What the derivative


Variational Localizations of the Dual Weighted Residual Estimator

Published in Journal for Computational and Applied Mathematics, pp. 192-208, 2015. Variational Localizations of the Dual Weighted Residual Estimator. Thomas Richter, Thomas Wick. The dual weighted residual method (DWR)


Continuity and Differentiability Worksheet

(Be sure that you can also do the graphical exercises from the text - these were not included below!) Typical problems are like problems -3, p. 6; -3, p. 7; 33-34, p. 7;


INTRODUCTION AND MATHEMATICAL CONCEPTS

Chapter 1: INTRODUCTION AND MATHEMATICAL CONCEPTS. PREVIEW: This chapter introduces you to the basic mathematical tools for doing physics. You will study units and converting between units, the trigonometric relationships


arxiv: v1 [math.na] 9 Sep 2015

arXiv:1509.02595v1 [math.NA] 9 Sep 2015. An Expandable Local and Parallel Two-Grid Finite Element Scheme. Yanren Hou, GuangZhi Du. Abstract: An expandable local and parallel two-grid finite element scheme based on


Jian-Guo Liu 1 and Chi-Wang Shu 2

Journal of Computational Physics 160, 577-596 (2000). doi:10.1006/jcph.2000.6475, available online at http://www.idealibrary.com. Jian-Guo Liu and Chi-Wang Shu, Institute for Physical Science and Technology and Department


GRID CONVERGENCE ERROR ANALYSIS FOR MIXED-ORDER NUMERICAL SCHEMES

Christopher J. Roy, Sandia National Laboratories, P. O. Box 5800, MS 085, Albuquerque, NM 8785-085. AIAA Paper 00-606. Abstract: New developments


MA455 Manifolds Solutions 1 May 2008

MA455 Manifolds Solutions 1 May 2008 MA455 Manifolds Solutions 1 May 2008 1. (i) Given real numbers a < b, find a diffeomorpism (a, b) R. Solution: For example first map (a, b) to (0, π/2) and ten map (0, π/2) diffeomorpically to R using


Blanca Bujanda, Juan Carlos Jorge NEW EFFICIENT TIME INTEGRATORS FOR NON-LINEAR PARABOLIC PROBLEMS

Blanca Bujanda, Juan Carlos Jorge NEW EFFICIENT TIME INTEGRATORS FOR NON-LINEAR PARABOLIC PROBLEMS Opuscula Matematica Vol. 26 No. 3 26 Blanca Bujanda, Juan Carlos Jorge NEW EFFICIENT TIME INTEGRATORS FOR NON-LINEAR PARABOLIC PROBLEMS Abstract. In tis work a new numerical metod is constructed for time-integrating


Inf sup testing of upwind methods

Inf sup testing of upwind methods INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING Int. J. Numer. Met. Engng 000; 48:745 760 Inf sup testing of upwind metods Klaus-Jurgen Bate 1; ;, Dena Hendriana 1, Franco Brezzi and Giancarlo


Robust approximation error estimates and multigrid solvers for isogeometric multi-patch discretizations

Robust approximation error estimates and multigrid solvers for isogeometric multi-patch discretizations www.oeaw.ac.at Robust approximation error estimates and multigrid solvers for isogeometric multi-patc discretizations S. Takacs RICAM-Report 2017-32 www.ricam.oeaw.ac.at Robust approximation error estimates


Taylor Series and the Mean Value Theorem of Derivatives

The numerical solution of engineering and scientific problems described by mathematical models often requires solving differential equations. Differential
