Numerical Solution of Differential Equations


Numerical Solution of Differential Equations

This introduction to finite difference and finite element methods is aimed at advanced undergraduate and graduate students who need to solve differential equations. The prerequisites are few (basic calculus, linear algebra, and ordinary and partial differential equations), so the book is accessible and useful to readers from a range of disciplines across science and engineering. Part I begins with finite difference methods; finite element methods are then introduced in Part II. In each part, the authors begin with a comprehensive discussion of one-dimensional problems before proceeding to two or higher dimensions. An emphasis is placed on numerical algorithms, related mathematical theory, and essential details of the implementation, while some useful packages are also introduced. The authors also provide well-tested Matlab codes, all available online.

ZHILIN LI is a tenured full professor at the Center for Scientific Computation & Department of Mathematics at North Carolina State University. His research area is applied mathematics in general, particularly numerical analysis for partial differential equations, moving interface/free boundary problems, irregular domain problems, computational mathematical biology, and scientific computing and simulations for interdisciplinary applications. Li has authored one monograph, The Immersed Interface Method, and has also edited several books and proceedings.

ZHONGHUA QIAO is an associate professor in the Department of Applied Mathematics at the Hong Kong Polytechnic University.

TAO TANG is a professor in the Department of Mathematics at Southern University of Science and Technology, China.


Numerical Solution of Differential Equations: Introduction to Finite Difference and Finite Element Methods

ZHILIN LI, North Carolina State University, USA
ZHONGHUA QIAO, Hong Kong Polytechnic University, China
TAO TANG, Southern University of Science and Technology, China

University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, New York, USA
477 Williamstown Road, Port Melbourne, Australia
Ansari Road, Daryaganj, Delhi, India
Anson Road, Singapore

Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

© Zhilin Li, Zhonghua Qiao, and Tao Tang 2018

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2018. Printed in the United Kingdom by Clays, St Ives plc.

A catalogue record for this publication is available from the British Library.

ISBN (Hardback); ISBN (Paperback)

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Table of Contents

Preface

1 Introduction
1.1 Boundary Value Problems of Differential Equations
1.2 Further Reading

PART I  FINITE DIFFERENCE METHODS

2 Finite Difference Methods for 1D Boundary Value Problems
2.1 A Simple Example of a Finite Difference Method
2.2 Fundamentals of Finite Difference Methods
2.3 Deriving FD Formulas Using the Method of Undetermined Coefficients
2.4 Consistency, Stability, Convergence, and Error Estimates of FD Methods
2.5 FD Methods for 1D Self-adjoint BVPs
2.6 FD Methods for General 1D BVPs
2.7 The Ghost Point Method for Boundary Conditions Involving Derivatives
2.8 An Example of a Nonlinear BVP
2.9 The Grid Refinement Analysis Technique
2.10 *1D IIM for Discontinuous Coefficients
Exercises

3 Finite Difference Methods for 2D Elliptic PDEs
3.1 Boundary and Compatibility Conditions
3.2 The Central Finite Difference Method for Poisson Equations
3.3 The Maximum Principle and Error Analysis
3.4 Finite Difference Methods for General Second-order Elliptic PDEs
3.5 Solving the Resulting Linear System of Algebraic Equations
3.6 A Fourth-order Compact FD Scheme for Poisson Equations
3.7 A Finite Difference Method for Poisson Equations in Polar Coordinates
3.8 Programming of 2D Finite Difference Methods
Exercises

4 FD Methods for Parabolic PDEs
4.1 The Euler Methods
4.2 The Method of Lines
4.3 The Crank–Nicolson Scheme
4.4 Stability Analysis for Time-dependent Problems
4.5 FD Methods and Analysis for 2D Parabolic Equations
4.6 The ADI Method
4.7 An Implicit–Explicit Method for Diffusion and Advection Equations
4.8 Solving Elliptic PDEs Using Numerical Methods for Parabolic PDEs
Exercises

5 Finite Difference Methods for Hyperbolic PDEs
5.1 Characteristics and Boundary Conditions
5.2 Finite Difference Schemes
5.3 The Modified PDE and Numerical Diffusion/Dispersion
5.4 The Lax–Wendroff Scheme and Other FD Methods
5.5 Numerical Boundary Conditions
5.6 Finite Difference Methods for Second-order Linear Hyperbolic PDEs
5.7 Some Commonly Used FD Methods for Linear Systems of Hyperbolic PDEs
5.8 Finite Difference Methods for Conservation Laws
Exercises

PART II  FINITE ELEMENT METHODS

6 Finite Element Methods for 1D Boundary Value Problems
6.1 The Galerkin FE Method for the 1D Model
6.2 Different Mathematical Formulations for the 1D Model
6.3 Key Components of the FE Method for the 1D Model
6.4 Matlab Programming of the FE Method for the 1D Model Problem
Exercises

7 Theoretical Foundations of the Finite Element Method
7.1 Functional Spaces
7.2 Spaces for Integral Forms, L²(Ω) and Lᵖ(Ω)
7.3 Sobolev Spaces and Weak Derivatives
7.4 FE Analysis for 1D BVPs
7.5 Error Analysis of the FE Method
Exercises

8 Issues of the FE Method in One Space Dimension
8.1 Boundary Conditions
8.2 The FE Method for Sturm–Liouville Problems
8.3 High-order Elements
8.4 A 1D Matlab FE Package
8.5 The FE Method for Fourth-order BVPs in 1D
8.6 The Lax–Milgram Lemma and the Existence of FE Solutions
8.7 *1D IFEM for Discontinuous Coefficients
Exercises

9 The Finite Element Method for 2D Elliptic PDEs
9.1 The Second Green's Theorem and Integration by Parts in 2D
9.2 Weak Form of Second-order Self-adjoint Elliptic PDEs
9.3 Triangulation and Basis Functions
9.4 Transforms, Shape Functions, and Quadrature Formulas
9.5 Some Implementation Details
9.6 Simplification of the FE Method for Poisson Equations
9.7 Some FE Spaces in H¹(Ω) and H²(Ω)
9.8 The FE Method for Parabolic Problems
Exercises

Appendix: Numerical Solutions of Initial Value Problems
A.1 System of First-order ODEs of IVPs
A.2 Well-posedness of an IVP
A.3 Some Finite Difference Methods for Solving IVPs
A.4 Solving IVPs Using the Matlab ODE Suite
A.5 Exercises

References
Index


Preface

The purpose of this book is to provide an introduction to finite difference and finite element methods for solving ordinary and partial differential equations of boundary value problems. The book is designed for beginning graduate students, upper level undergraduate students, and students from interdisciplinary areas, including engineers and others who need to obtain such numerical solutions. The prerequisite is a basic knowledge of calculus, linear algebra, and ordinary differential equations. Some knowledge of numerical analysis and partial differential equations would also be helpful but is not essential. The emphasis is on the understanding of finite difference and finite element methods and the essential details of their implementation, with a reasonable amount of mathematical theory.

Part I considers finite difference methods, and Part II is about finite element methods. In each part, we start with a comprehensive discussion of one-dimensional problems before proceeding to consider two or higher dimensions. We also list some useful references for those who wish to know more in related areas. The material of this textbook can in general be covered in an academic year. The two parts of the book are essentially independent, so it is possible to use only one part for a class.

This is a textbook based on materials that the authors have used, some of which are from Dr. Randall J. LeVeque's notes, in teaching graduate courses on the numerical solution of differential equations. Most sample computer programs are written in Matlab. Some advantages of Matlab are its simplicity, a wide range of library subroutines, double precision accuracy, and many existing and emerging tool-boxes. A website (www4.ncsu.edu/~zhilin/) has been set up to post or link computer codes accompanying this textbook. We would like to thank Dr. Roger Hoskin, Lilian Wang, Peiqi Huang, and Hongru Chen for proofreading the book, or for providing Matlab code.


1 Introduction

1.1 Boundary Value Problems of Differential Equations

We discuss numerical solutions of problems involving ordinary differential equations (ODEs) or partial differential equations (PDEs), especially linear first- and second-order ODEs and PDEs, and problems involving systems of first-order differential equations.

A differential equation involves derivatives of an unknown function of one independent variable (say u(x)), or the partial derivatives of an unknown function of more than one independent variable (say u(x, y), or u(t, x), or u(t, x, y, z), etc.). Differential equations have been used extensively to model many problems in daily life, such as pendulums, Newton's law of cooling, resistor and inductor circuits, population growth or decay, fluid and solid mechanics, biology, material sciences, economics, ecology, kinetics, thermodynamics, sports, and computer sciences. Examples include the Laplace equation for potentials, the Navier–Stokes equations in fluid dynamics, biharmonic equations for stresses in solid mechanics, and Maxwell's equations in electromagnetics. (There are other models in practice, for example, statistical models.) For more examples and for the mathematical theory of PDEs, we refer the reader to Evans (1998) and references therein.

However, although differential equations have such wide applications, too few can be solved exactly in terms of elementary functions such as polynomials, log x, e^x, trigonometric functions (sin x, cos x, ...), etc. and their combinations. Even if a differential equation can be solved analytically, considerable effort and sound mathematical theory are often needed, and the closed form of the solution may even turn out to be too messy to be useful.

Figure 1.1. A flowchart of a problem-solving process.

If the analytic solution of the differential equation is unavailable or too difficult to obtain, or takes some complicated form that is unhelpful to use, we may try to find an approximate solution. There are two traditional approaches:

1. Semi-analytic methods. Sometimes we can use series, integral equations, perturbation techniques, or asymptotic methods to obtain an approximate solution expressed in terms of simpler functions.
2. Numerical solutions. Discrete numerical values may represent the solution to a certain accuracy. Nowadays, these number arrays (and associated tables or plots) are obtained using computers, to provide effective solutions of many problems that were impossible to obtain before.

In this book, we mainly adopt the second approach and focus on numerical solutions using computers, especially the use of finite difference (FD) or finite element (FE) methods for differential equations. In Figure 1.1, we show a flowchart of the problem-solving process.

Some examples of ODEs/PDEs are as follows.

1. Initial value problems (IVP). A canonical first-order system is
$$\frac{dy}{dt} = f(t, y), \qquad y(t_0) = y_0; \qquad (1.1)$$
and a single higher-order differential equation may be rewritten as a first-order system. For example, a second-order ODE
$$u''(t) + a(t)u'(t) + b(t)u(t) = f(t), \qquad u(0) = u_0, \quad u'(0) = v_0, \qquad (1.2)$$

can be converted into a first-order system by setting $y_1(t) = u$ and $y_2(t) = u'(t)$. An ODE IVP can often be solved using Runge–Kutta methods with adaptive time steps. In Matlab, there is the ODE Suite, which includes ode45, ode23, ode23s, ode15s, etc. For a stiff ODE system, either ode23s or ode15s is recommended; see the Appendix for more details.

2. Boundary value problems (BVP). An example of an ODE BVP is
$$u''(x) + a(x)u'(x) + b(x)u(x) = f(x), \quad 0 < x < 1, \qquad u(0) = u_0, \quad u(1) = u_1; \qquad (1.3)$$
and a PDE BVP example is
$$\begin{aligned} u_{xx} + u_{yy} &= f(x,y), && (x,y) \in \Omega,\\ u(x,y) &= u_0(x,y), && (x,y) \in \partial\Omega, \end{aligned} \qquad (1.4)$$
where $u_{xx} = \partial^2 u/\partial x^2$ and $u_{yy} = \partial^2 u/\partial y^2$, in a domain $\Omega$ with boundary $\partial\Omega$. The above PDE is linear and classified as elliptic; there are two other classifications for linear PDEs, namely parabolic and hyperbolic, as briefly discussed below.

3. BVP and IVP, e.g.,
$$\begin{aligned} &u_t = a u_{xx} + f(x,t),\\ &u(0,t) = g_1(t), \quad u(1,t) = g_2(t), && \text{BC},\\ &u(x,0) = u_0(x), && \text{IC}, \end{aligned} \qquad (1.5)$$
where BC and IC stand for boundary condition(s) and initial condition, respectively, and $u_t = \partial u/\partial t$.

4. Eigenvalue problems, e.g.,
$$u''(x) = \lambda u(x), \qquad u(0) = 0, \quad u(1) = 0. \qquad (1.6)$$
In this example, both the function $u(x)$ (the eigenfunction) and the scalar $\lambda$ (the eigenvalue) are unknowns.

5. Diffusion and reaction equations, e.g.,
$$\frac{\partial u}{\partial t} = \nabla\cdot(\beta \nabla u) + \mathbf{a}\cdot\nabla u + f(u), \qquad (1.7)$$
where $\mathbf{a}$ is a vector, $\nabla\cdot(\beta\nabla u)$ is a diffusion term, $\mathbf{a}\cdot\nabla u$ is called an advection term, and $f(u)$ a reaction term.

6. Systems of PDEs. The incompressible Navier–Stokes model is an important nonlinear example:
$$\rho\left(\mathbf{u}_t + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu \Delta\mathbf{u} + \mathbf{F}, \qquad \nabla\cdot\mathbf{u} = 0. \qquad (1.8)$$

In this book, we will consider BVPs of differential equations in one dimension (1D) or two dimensions (2D). A linear second-order PDE has the following general form:
$$a(x,y)u_{xx} + b(x,y)u_{xy} + c(x,y)u_{yy} + d(x,y)u_x + e(x,y)u_y + g(x,y)u(x,y) = f(x,y), \qquad (1.9)$$
where the coefficients are independent of $u(x,y)$, so the equation is linear in $u$ and its partial derivatives. The solution of the 2D linear PDE is sought in some bounded domain $\Omega$, and the classification of the PDE form (1.9) is:

Elliptic if $b^2 - 4ac < 0$ for all $(x,y) \in \Omega$,
Parabolic if $b^2 - 4ac = 0$ for all $(x,y) \in \Omega$, and
Hyperbolic if $b^2 - 4ac > 0$ for all $(x,y) \in \Omega$.

The appropriate solution method typically depends on the equation class. For the first-order system
$$\frac{\partial \mathbf{u}}{\partial t} = A(x)\,\frac{\partial \mathbf{u}}{\partial x}, \qquad (1.10)$$
the classification is determined from the eigenvalues of the coefficient matrix $A(x)$.

Finite difference and finite element methods are suitable techniques to solve differential equations (ODEs and PDEs) numerically. There are other methods as well, for example, finite volume methods, collocation methods, spectral methods, etc.

Some Features of Finite Difference and Finite Element Methods

Many problems can be solved numerically by some finite difference or finite element methods. We strongly believe that any numerical analyst should be familiar with both methods and with the important features listed below.

Finite difference methods:

- Often relatively simple to use, and quite easy to understand.
- Easy to implement for regular domains, e.g., rectangular domains in Cartesian coordinates, and circular or annular domains in polar coordinates.
- Their discretization and approximate solutions are pointwise, and the fundamental mathematical tool is the Taylor expansion.
- There are many fast solvers and packages for regular domains, e.g., the Poisson solvers Fishpack (Adams et al.) and Clawpack (LeVeque, 1998).
- Difficult to implement for complicated geometries.
- Have strong regularity requirements (the existence of high-order derivatives).

Finite element methods:

- Very successful for structural (elliptic type) problems.
- A suitable approach for problems with complicated boundaries.
- Sound theoretical foundation, at least for elliptic PDEs, using Sobolev space theory.
- Weaker regularity requirements.
- Many commercial packages, e.g., Ansys, the Matlab PDE Tool-Box, Triangle, and PLTMG.
- Usually coupled with multigrid solvers.
- Mesh generation can be difficult, but there are now many packages that do this, e.g., Matlab, Triangle, Pltmg, Fidap, and Ansys.

1.2 Further Reading

This textbook provides an introduction to finite difference and finite element methods. There are many other books for readers who wish to become expert in finite difference and finite element methods. For FD methods, we recommend Iserles (2008), LeVeque (2007), Morton and Mayers (1995), Strikwerda (1989), and Thomas (1995). The textbooks Strikwerda (1989) and Thomas (1995) are classical, while Iserles (2008), LeVeque (2007), and Morton and Mayers (1995) are relatively new. With LeVeque (2007), readers can find the accompanying Matlab code from the author's website.

A classic book on FE methods is Ciarlet (2002), while Johnson (1987) and Strang and Fix (1973) have been widely used as graduate textbooks. The series by Carey and Oden (1983) not only presents the mathematical background of FE methods, but also gives some details on FE method programming in Fortran. Newer textbooks include Braess (2007) and Brenner and Scott (2008).


Part I  Finite Difference Methods


2 Finite Difference Methods for 1D Boundary Value Problems

2.1 A Simple Example of a Finite Difference Method

Let us consider a model problem
$$u''(x) = f(x), \quad 0 < x < 1, \qquad u(0) = u_a, \quad u(1) = u_b,$$
to illustrate the general procedure of a finite difference method, as follows.

1. Generate a grid. A grid is a finite set of points on which we seek the function values that represent an approximate solution to the differential equation. For example, given an integer parameter $n > 0$, we can use a uniform Cartesian grid
$$x_i = i\,h, \quad i = 0, 1, \ldots, n, \qquad h = \frac{1}{n}.$$
The parameter $n$ can be chosen according to the accuracy requirement. If we wish the approximate solution to have roughly four significant digits, then we can take $n = 100$ or larger, for example.

2. Represent the derivative by some finite difference formula at every grid point where the solution is unknown, to get an algebraic system of equations. Note that for a twice differentiable function $\phi(x)$, we have
$$\phi''(x) = \lim_{\Delta x \to 0} \frac{\phi(x - \Delta x) - 2\phi(x) + \phi(x + \Delta x)}{(\Delta x)^2}.$$
Thus at a grid point $x_i$, we can approximate $u''(x_i)$ using nearby function values to get a finite difference formula for the second-order derivative,
$$u''(x_i) \approx \frac{u(x_i - h) - 2u(x_i) + u(x_i + h)}{h^2},$$

with some error in the approximation. In the finite difference method, we replace the differential equation at each grid point $x_i$ by
$$\frac{u(x_i - h) - 2u(x_i) + u(x_i + h)}{h^2} = f(x_i) + \text{error},$$
where the error is called the local truncation error and will be reconsidered later. Thus we define the finite difference (FD) solution (an approximation) for $u(x)$ at all $x_i$ as the solution $U_i$ (if it exists) of the following linear system of algebraic equations:
$$\begin{aligned}
\frac{u_a - 2U_1 + U_2}{h^2} &= f(x_1),\\
\frac{U_1 - 2U_2 + U_3}{h^2} &= f(x_2),\\
\frac{U_2 - 2U_3 + U_4}{h^2} &= f(x_3),\\
&\;\;\vdots\\
\frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} &= f(x_i),\\
&\;\;\vdots\\
\frac{U_{n-3} - 2U_{n-2} + U_{n-1}}{h^2} &= f(x_{n-2}),\\
\frac{U_{n-2} - 2U_{n-1} + u_b}{h^2} &= f(x_{n-1}).
\end{aligned}$$
Note that the finite difference equation at each grid point involves solution values at three grid points, i.e., at $x_{i-1}$, $x_i$, and $x_{i+1}$. The set of these three grid points is called the finite difference stencil.

3. Solve the system of algebraic equations to get an approximate solution at each grid point. The system of algebraic equations can be written in matrix-vector form as
$$\begin{bmatrix}
-\frac{2}{h^2} & \frac{1}{h^2} & & & \\
\frac{1}{h^2} & -\frac{2}{h^2} & \frac{1}{h^2} & & \\
 & \ddots & \ddots & \ddots & \\
 & & \frac{1}{h^2} & -\frac{2}{h^2} & \frac{1}{h^2}\\
 & & & \frac{1}{h^2} & -\frac{2}{h^2}
\end{bmatrix}
\begin{bmatrix} U_1\\ U_2\\ \vdots\\ U_{n-2}\\ U_{n-1} \end{bmatrix}
=
\begin{bmatrix} f(x_1) - u_a/h^2\\ f(x_2)\\ \vdots\\ f(x_{n-2})\\ f(x_{n-1}) - u_b/h^2 \end{bmatrix}. \qquad (2.1)$$

The tridiagonal system of linear equations above can be solved efficiently in $O(Cn)$ operations by the Crout or Cholesky algorithm, see for example Burden and Faires, where $C$ is a constant, typically $C = 5$ in this case.

4. Implement and debug the computer code. Run the program to get the output. Analyze and visualize the results (tables, plots, etc.).

5. Error analysis. Algorithmic consistency and stability implies convergence of the finite difference method, which will be discussed later. The convergence is pointwise, i.e., $\lim_{h\to 0} \max_i |u(x_i) - U_i| = 0$. The finite difference method requires the solution $u(x)$ to have up to second-order continuous derivatives.

2.1.1 A Matlab Code for the Model Problem

Below we show a Matlab function called two_point.m for the model problem, and use this Matlab function to illustrate how to convert the algorithm to a computer code.
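The listing of two_point.m is not reproduced in this transcription. The following is only a minimal sketch of what such a function might look like; the signature and variable names are our assumptions, not necessarily those of the book's code. It assembles the tridiagonal system (2.1) in sparse storage and solves for the interior values.

function [x,U] = two_point(a,b,ua,ub,f,n)
% Sketch of a solver for u''(x) = f(x), a < x < b, u(a) = ua, u(b) = ub,
% using the three-point central difference scheme on a uniform grid.
% f is a vectorized function handle.
  h = (b-a)/n;
  x = a + (1:n-1)'*h;                 % interior grid points x_1,...,x_{n-1}
  m = n - 1;
  e = ones(m,1);
  A = spdiags([e -2*e e]/h^2, -1:1, m, m);   % tridiagonal matrix of (2.1)
  F = f(x);
  F(1)   = F(1)   - ua/h^2;           % move boundary values to the RHS
  F(end) = F(end) - ub/h^2;
  U = A\F;                            % solve the tridiagonal system
end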

We can call the Matlab function two_point directly in a Matlab command window, but a better way is to put all Matlab commands in a Matlab file (called an M-file), referred to here as main.m. The advantage of this is to keep a record, and we can also revisit or modify the file whenever we want.

To illustrate, suppose the differential equation is defined in the interval $(0, 1)$, with $f(x) = -\pi^2\cos(\pi x)$, $u(0) = 1$, and $u(1) = -1$; it is easy to check that the exact solution of this BVP is $\cos(\pi x)$. A sample Matlab M-file is then as follows.
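The sample M-file itself is missing from this transcription. A minimal sketch of such a driver, assuming the two_point function sketched above (the names are ours), is:

% main.m -- sketch of a driver script for the example above
a = 0; b = 1; n = 40;
ua = 1; ub = -1;                      % u(0) = cos(0), u(1) = cos(pi)
f  = @(x) -pi^2*cos(pi*x);            % so that the exact solution is cos(pi*x)
[x,U] = two_point(a,b,ua,ub,f,n);     % FD solution at interior grid points
ue = cos(pi*x);                       % exact solution at the grid points
plot(x,U,'o',x,ue,'-'); xlabel('x');  % computed (o) versus exact (line)
figure; plot(x,U-ue); title('error'); % error at the grid points
max(abs(U-ue))                        % infinity norm of the error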

Figure 2.1. (a) A plot of the computed solution (little o's) with $n = 40$, and the exact solution (solid line). (b) The plot of the error.

If we plot the computed solution, the finite difference approximation to the true solution at the grid points (using plot(x, u, 'o')), and the exact solution represented by the solid line in Figure 2.1(a), the difference at the grid points is not too evident. However, if we plot the difference between the computed solution and the exact solution, which we call the error, we see that there is indeed a small difference of $O(10^{-3})$, cf. Figure 2.1(b), but in practice we may nevertheless be content with the accuracy of the computed numerical solution.

Questions one may ask from this example:

- Are there other finite difference formulas to approximate derivatives? If so, how do we derive them? The reader may have already encountered other formulas in an elementary numerical analysis textbook.
- How do we know whether a finite difference method works or not? If it works, how accurate is it? Specifically, what is the error of the computed solution?
- Do round-off errors affect the computed solution? If so, by how much?
- How do we deal with boundary conditions other than Dirichlet conditions (involving only function values) as above, notably Neumann conditions (involving derivatives) or mixed boundary conditions?
- Do we need different finite difference methods for different problems? If so, are the procedures similar?
- How do we know that we are using the most efficient method? What are the criteria for implementing finite difference methods efficiently?

We will address these questions in the next few chapters.

2.2 Fundamentals of Finite Difference Methods

The Taylor expansion is the most important tool in the analysis of finite difference methods. It may be written as an infinite series
$$u(x+h) = u(x) + h u'(x) + \frac{h^2}{2} u''(x) + \cdots + \frac{h^k}{k!} u^{(k)}(x) + \cdots \qquad (2.2)$$
if $u(x)$ is analytic (differentiable to any order), or as a finite sum
$$u(x+h) = u(x) + h u'(x) + \frac{h^2}{2} u''(x) + \cdots + \frac{h^k}{k!} u^{(k)}(\xi), \qquad (2.3)$$
where $x < \xi < x + h$ (or $x + h < \xi < x$ if $h < 0$), if $u(x)$ is differentiable up to $k$-th order. The second form of the Taylor expansion is sometimes called the extended mean value theorem. As indicated earlier, we may represent derivatives of a differential equation by finite difference formulas at grid points to get a linear or nonlinear algebraic system. There are several kinds of finite difference formulas to consider, but in general their accuracy is directly related to the magnitude of $h$ (typically small).

2.2.1 Forward, Backward, and Central Finite Difference Formulas for u'(x)

Let us first consider the first derivative $u'(x)$ of $u(x)$ at a point $\bar{x}$ using the nearby function values $u(\bar{x} \pm h)$, where $h$ is called the step size. There are three commonly used formulas:
$$\begin{aligned}
\text{Forward FD:}\quad & \Delta_+ u(\bar{x}) = \frac{u(\bar{x}+h) - u(\bar{x})}{h} \approx u'(\bar{x}), & (2.4)\\
\text{Backward FD:}\quad & \Delta_- u(\bar{x}) = \frac{u(\bar{x}) - u(\bar{x}-h)}{h} \approx u'(\bar{x}), & (2.5)\\
\text{Central FD:}\quad & \delta u(\bar{x}) = \frac{u(\bar{x}+h) - u(\bar{x}-h)}{2h} \approx u'(\bar{x}). & (2.6)
\end{aligned}$$
Below we derive these finite difference formulas from geometric intuition and calculus. From calculus, we know that
$$u'(\bar{x}) = \lim_{h \to 0} \frac{u(\bar{x}+h) - u(\bar{x})}{h}.$$
Assume $h$ is small and $u'(x)$ is continuous; then we expect that $\bigl(u(\bar{x}+h) - u(\bar{x})\bigr)/h$ is close to, but usually not exactly equal to, $u'(\bar{x})$.

Thus an approximation to the first derivative at $\bar{x}$ is the forward finite difference, denoted and defined by
$$\Delta_+ u(\bar{x}) = \frac{u(\bar{x}+h) - u(\bar{x})}{h} \approx u'(\bar{x}), \qquad (2.7)$$
where an error is introduced and $h > 0$ is called the step size, the distance between the two points. Geometrically, $\Delta_+ u(\bar{x})$ is the slope of the secant line that connects the two points $(\bar{x}, u(\bar{x}))$ and $(\bar{x}+h, u(\bar{x}+h))$, and in calculus we recognize that it tends to the slope of the tangent line at $\bar{x}$ in the limit $h \to 0$. To determine how closely $\Delta_+ u(\bar{x})$ represents $u'(\bar{x})$, if $u(x)$ has second-order continuous derivatives we can invoke the extended mean value theorem (Taylor series), such that
$$u(\bar{x}+h) = u(\bar{x}) + u'(\bar{x})\,h + \frac{u''(\xi)}{2}\, h^2, \qquad (2.8)$$
where $\bar{x} < \xi < \bar{x}+h$. Thus we obtain the error estimate
$$E_f(h) = \frac{u(\bar{x}+h) - u(\bar{x})}{h} - u'(\bar{x}) = \frac{u''(\xi)}{2}\, h = O(h), \qquad (2.9)$$
so the error, defined as the difference between the approximate value and the exact one, is proportional to $h$, and the discretization (2.7) is called first-order accurate. In general, if the error has the form
$$E(h) = C h^p, \quad p > 0, \qquad (2.10)$$
then the method is called $p$-th order accurate.

Similarly, we can analyze the backward finite difference formula
$$\Delta_- u(\bar{x}) = \frac{u(\bar{x}) - u(\bar{x}-h)}{h}, \quad h > 0, \qquad (2.11)$$
for approximating $u'(\bar{x})$, where the error estimate is
$$E_b(h) = \frac{u(\bar{x}) - u(\bar{x}-h)}{h} - u'(\bar{x}) = -\frac{u''(\xi)}{2}\, h = O(h), \qquad (2.12)$$
so this formula is also first-order accurate. Geometrically (see Figure 2.2), one may expect the slope of the secant line that passes through $(\bar{x}+h, u(\bar{x}+h))$ and $(\bar{x}-h, u(\bar{x}-h))$ to be a better approximation to the slope of the tangent line of $u(x)$ at $(\bar{x}, u(\bar{x}))$, suggesting that the corresponding central finite difference formula
$$\delta u(\bar{x}) = \frac{u(\bar{x}+h) - u(\bar{x}-h)}{2h}, \quad h > 0, \qquad (2.13)$$
for approximating the first-order derivative may be more accurate.

Figure 2.2. Geometric illustration of the forward, backward, and central finite difference formulas for approximating $u'(\bar{x})$.

In order to get the relevant error estimate, we need to retain more terms in the Taylor expansion:
$$\begin{aligned}
u(x+h) &= u(x) + h u'(x) + \frac{h^2}{2} u''(x) + \frac{h^3}{6} u'''(x) + \frac{h^4}{24} u^{(4)}(x) + \cdots,\\
u(x-h) &= u(x) - h u'(x) + \frac{h^2}{2} u''(x) - \frac{h^3}{6} u'''(x) + \frac{h^4}{24} u^{(4)}(x) - \cdots,
\end{aligned}$$
which leads to
$$E_c(h) = \frac{u(\bar{x}+h) - u(\bar{x}-h)}{2h} - u'(\bar{x}) = \frac{h^2}{6} u'''(\bar{x}) + \cdots = O(h^2), \qquad (2.14)$$
where $\cdots$ stands for higher-order terms, so the central finite difference formula is second-order accurate. It is easy to show that (2.13) can be rewritten as
$$\delta u(\bar{x}) = \frac{u(\bar{x}+h) - u(\bar{x}-h)}{2h} = \frac{1}{2}\bigl(\Delta_+ + \Delta_-\bigr) u(\bar{x}).$$
There are other higher-order accurate formulas too, e.g., the third-order accurate finite difference formula
$$\delta_3 u(\bar{x}) = \frac{2u(\bar{x}+h) + 3u(\bar{x}) - 6u(\bar{x}-h) + u(\bar{x}-2h)}{6h}. \qquad (2.15)$$

2.2.2 Verification and Grid Refinement Analysis

Suppose now that we have learned or developed a numerical method and the associated analysis. If we proceed to write a computer code to implement the method, how do we know that our code is bug-free and our analysis is correct? One way is by a grid refinement analysis.

Grid refinement analysis can be illustrated by a case where we know the exact solution. (Of course, we usually do not know it, and there are other techniques to validate a computed solution, to be discussed later.) Starting with a fixed $h$, say $h = 0.1$, we decrease $h$ by half to see how the error changes.

Figure 2.3. A plot of a grid refinement analysis of the forward, backward, and central finite difference formulas for $u'(x)$ using a log–log plot. The curves for the forward and backward finite differences are almost identical and have slope one. The central formula is second-order accurate and the slope of its plot is two. As $h$ gets smaller, round-off errors become evident and eventually dominant.

For a first-order method, the error should decrease by a factor of two, cf. (2.9), and for a second-order method the error should decrease by a factor of four, cf. (2.14), etc. We can plot the errors versus $h$ on a log–log scale, where the slope is the order of convergence if the scales are identical on both axes. The forward, backward, and central finite difference formulas rendered in a Matlab script file compare.m are shown below. For example, consider the function $u(x) = \sin x$ at $x = 1$, where the exact derivative is of course $\cos 1$. We plot the errors versus $h$ on a log–log scale in Figure 2.3, where we see that the slopes do indeed correctly produce the convergence order. As $h$ decreases further, the round-off errors become dominant, which affects the actual errors. Thus for finite difference methods, we cannot take $h$ arbitrarily small in the hope of improving the accuracy.

For this example, the best $h$ that provides the smallest error is $h \approx \sqrt{\epsilon} \approx 10^{-8}$ for the forward and backward formulas, while it is $h \approx \sqrt[3]{\epsilon} \approx 10^{-5}$ for the central formula, where $\epsilon$ is the machine precision, which is around $10^{-16}$ in Matlab for most computers. The best $h$ can be estimated by balancing the formula error and the round-off errors. For the central formula, they are $O(h^2)$ and $\epsilon/h$, respectively. The best $h$ is then estimated by $h^2 = \epsilon/h$, i.e., $h = \sqrt[3]{\epsilon}$.

The following is a Matlab script file called compare.m that generates Figure 2.3.
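The compare.m listing itself is not reproduced in this transcription. A minimal sketch of a script that performs this grid refinement comparison (variable names and plotting details are our assumptions) is:

% compare.m (sketch): grid refinement analysis of the forward, backward,
% and central FD approximations to u'(x) for u(x) = sin(x) at x = 1.
x0 = 1; ue = cos(x0);                         % exact derivative
k  = (1:16)';
h  = 1./2.^k;                                 % halve the step size each time
ef = abs((sin(x0+h)-sin(x0))./h      - ue);   % forward difference error
eb = abs((sin(x0)-sin(x0-h))./h      - ue);   % backward difference error
ec = abs((sin(x0+h)-sin(x0-h))./(2*h) - ue);  % central difference error
loglog(h,ef,'-o',h,eb,'-s',h,ec,'-*'); axis equal
xlabel('step size h'); ylabel('error');
legend('forward','backward','central','Location','SouthEast');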

2.3 Deriving FD Formulas Using the Method of Undetermined Coefficients

Sometimes we need a one-sided finite difference, for example to approximate a first derivative at some boundary value $x = b$, and such approximations may also be used more generally. Thus, to approximate a first derivative to second-order accuracy, we may anticipate a formula involving the values $u(\bar{x})$, $u(\bar{x}-h)$, and $u(\bar{x}-2h)$ and use the method of undetermined coefficients, in which we write
$$u'(\bar{x}) \approx \gamma_1 u(\bar{x}) + \gamma_2 u(\bar{x}-h) + \gamma_3 u(\bar{x}-2h).$$
Invoking the Taylor expansion at $\bar{x}$ yields
$$\begin{aligned}
\gamma_1 u(\bar{x}) &+ \gamma_2 u(\bar{x}-h) + \gamma_3 u(\bar{x}-2h)\\
&= \gamma_1 u(\bar{x}) + \gamma_2 \left( u(\bar{x}) - h u'(\bar{x}) + \frac{h^2}{2} u''(\bar{x}) - \frac{h^3}{6} u'''(\bar{x}) \right)\\
&\quad + \gamma_3 \left( u(\bar{x}) - 2h u'(\bar{x}) + \frac{4h^2}{2} u''(\bar{x}) - \frac{8h^3}{6} u'''(\bar{x}) \right) + O\!\left(\max_k |\gamma_k|\, h^4\right),
\end{aligned}$$
which should approximate $u'(\bar{x})$ if we ignore the high-order terms. So we set
$$\gamma_1 + \gamma_2 + \gamma_3 = 0, \qquad -h\gamma_2 - 2h\gamma_3 = 1, \qquad h^2\gamma_2 + 4h^2\gamma_3 = 0.$$
It is easy to show that the solution of this linear system is
$$\gamma_1 = \frac{3}{2h}, \qquad \gamma_2 = -\frac{2}{h}, \qquad \gamma_3 = \frac{1}{2h},$$
and hence we obtain the one-sided finite difference scheme
$$u'(\bar{x}) = \frac{3}{2h}\, u(\bar{x}) - \frac{2}{h}\, u(\bar{x}-h) + \frac{1}{2h}\, u(\bar{x}-2h) + O(h^2). \qquad (2.16)$$
Another one-sided finite difference formula is immediately obtained by substituting $-h$ for $h$, namely,
$$u'(\bar{x}) = -\frac{3}{2h}\, u(\bar{x}) + \frac{2}{h}\, u(\bar{x}+h) - \frac{1}{2h}\, u(\bar{x}+2h) + O(h^2). \qquad (2.17)$$

One may also differentiate a polynomial interpolation formula to get a finite difference scheme. For example, given a sequence of points $(x_i, u(x_i))$, $i = 0, 1, \ldots, n$, the Lagrange interpolating polynomial is
$$p_n(x) = \sum_{i=0}^n l_i(x)\, u(x_i), \qquad \text{where} \quad l_i(x) = \prod_{j=0,\, j\neq i}^n \frac{x - x_j}{x_i - x_j},$$
suggesting that $u'(\bar{x})$ can be approximated by
$$u'(\bar{x}) \approx p_n'(\bar{x}) = \sum_{i=0}^n l_i'(\bar{x})\, u(x_i).$$
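As a quick sanity check (not from the book), one can verify numerically that the one-sided formula (2.16) is indeed second-order accurate, here for u(x) = e^x at the point 1:

% Check that the one-sided formula (2.16) is second-order accurate
% for u(x) = exp(x) at xb = 1 (exact derivative exp(1)).
xb  = 1; ue = exp(xb);
h   = 0.1./2.^(0:6)';
d   = (3*exp(xb) - 4*exp(xb-h) + exp(xb-2*h))./(2*h);   % formula (2.16)
err = abs(d - ue);
order = log2(err(1:end-1)./err(2:end))   % should approach 2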

2.3.1 FD Formulas for Second-order Derivatives

We can apply finite difference operators twice to get finite difference formulas that approximate the second-order derivative $u''(\bar{x})$, e.g., the central finite difference formula
$$\begin{aligned}
\delta^2 u(\bar{x}) = \Delta_+\Delta_- u(\bar{x}) &= \Delta_+\left(\frac{u(\bar{x}) - u(\bar{x}-h)}{h}\right)\\
&= \frac{1}{h}\left(\frac{u(\bar{x}+h) - u(\bar{x})}{h} - \frac{u(\bar{x}) - u(\bar{x}-h)}{h}\right)\\
&= \frac{u(\bar{x}-h) - 2u(\bar{x}) + u(\bar{x}+h)}{h^2} \qquad (2.18)
\end{aligned}$$
approximates $u''(\bar{x})$ to $O(h^2)$. Using the same finite difference operator twice produces a one-sided finite difference formula, e.g.,
$$\begin{aligned}
\Delta_+\Delta_+ u(\bar{x}) &= \Delta_+\left(\frac{u(\bar{x}+h) - u(\bar{x})}{h}\right)\\
&= \frac{1}{h}\left(\frac{u(\bar{x}+2h) - u(\bar{x}+h)}{h} - \frac{u(\bar{x}+h) - u(\bar{x})}{h}\right)\\
&= \frac{u(\bar{x}) - 2u(\bar{x}+h) + u(\bar{x}+2h)}{h^2}, \qquad (2.19)
\end{aligned}$$
which also approximates $u''(\bar{x})$, but only to first-order accuracy $O(h)$.

In a similar way, finite difference operators can be used to derive approximations for partial derivatives.

We obtain similar forms not only for partial derivatives $u_x$, $u_{xx}$, etc., but also for mixed partial derivatives, e.g.,
$$\delta_x \delta_y u(\bar{x}, \bar{y}) = \frac{u(\bar{x}+h, \bar{y}+h) + u(\bar{x}-h, \bar{y}-h) - u(\bar{x}+h, \bar{y}-h) - u(\bar{x}-h, \bar{y}+h)}{4h^2} \approx \frac{\partial^2 u}{\partial x\, \partial y}(\bar{x}, \bar{y}), \qquad (2.20)$$
if we adopt a uniform step size in both the $x$ and $y$ directions. Here we use the $x$ subscript on $\delta_x$ to denote the central finite difference operator in the $x$ direction, and so on.

2.3.2 FD Formulas for Higher-order Derivatives

We can likewise apply either lower-order finite difference formulas or the method of undetermined coefficients to obtain finite difference formulas for approximating third-order derivatives. For example,
$$\begin{aligned}
\Delta_+ \delta^2 u(\bar{x}) &= \Delta_+ \frac{u(\bar{x}-h) - 2u(\bar{x}) + u(\bar{x}+h)}{h^2}\\
&= \frac{-u(\bar{x}-h) + 3u(\bar{x}) - 3u(\bar{x}+h) + u(\bar{x}+2h)}{h^3}\\
&= u'''(\bar{x}) + \frac{h}{2}\, u^{(4)}(\bar{x}) + \cdots
\end{aligned}$$
is first-order accurate. If we use the central formula
$$\frac{-u(\bar{x}-2h) + 2u(\bar{x}-h) - 2u(\bar{x}+h) + u(\bar{x}+2h)}{2h^3} = u'''(\bar{x}) + \frac{h^2}{4}\, u^{(5)}(\bar{x}) + \cdots,$$
then we have a second-order accurate scheme.

In practice, we seldom need more than fourth-order derivatives. For higher-order differential equations, we usually convert them to first- or second-order systems.

2.4 Consistency, Stability, Convergence, and Error Estimates of FD Methods

When a finite difference method is used to solve a differential equation, it is important to know how accurate the resulting approximate solution is compared to the true solution.

2.4.1 Global Error

If $U = [U_1, U_2, \ldots, U_{n-1}]^T$ denotes the approximate solution generated by a finite difference scheme with no round-off errors, and $u = [u(x_1), u(x_2), \ldots, u(x_{n-1})]^T$ is the exact solution at the grid points $x_1, x_2, \ldots, x_{n-1}$, then the global error vector is defined as $E = U - u$. Naturally, we seek the smallest upper bound for the error vector, which is commonly measured using one of the following norms:

- The maximum or infinity norm $\|E\|_\infty = \max_i |e_i|$. If the error is large at even one grid point then the maximum norm is also large, so this norm is regarded as the strongest measurement.
- The 1-norm, an average norm defined as $\|E\|_1 = \sum_i h_i |e_i|$, analogous to the $L^1$ norm $\int |e(x)|\,dx$, where $h_i = x_{i+1} - x_i$.
- The 2-norm, another average norm defined as $\|E\|_2 = \left(\sum_i h_i |e_i|^2\right)^{1/2}$, analogous to the $L^2$ norm $\left(\int |e(x)|^2\,dx\right)^{1/2}$.

If $\|E\| \le Ch^p$, $p > 0$, we call the finite difference method $p$-th order accurate. We prefer to use a reasonably high-order accurate method while keeping the computational cost low.

Definition 2.1. A finite difference method is called convergent if $\lim_{h\to 0} \|E\| = 0$.

2.4.2 Local Truncation Errors

Local truncation errors refer to the differences between the original differential equation and its finite difference approximations at the grid points. Local truncation errors measure how well a finite difference discretization approximates the differential equation.

For example, for the two-point BVP
$$u''(x) = f(x), \quad 0 < x < 1, \qquad u(0) = u_a, \quad u(1) = u_b,$$
the local truncation error of the finite difference scheme
$$\frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} = f(x_i)$$
at $x_i$ is
$$T_i = \frac{u(x_i - h) - 2u(x_i) + u(x_i + h)}{h^2} - f(x_i), \quad i = 1, 2, \ldots, n-1.$$
Thus, on moving the right-hand side to the left-hand side, we obtain the local truncation error by rearranging or rewriting the finite difference equation to resemble the original differential equation, and then substituting the true solution $u(x_i)$ for $U_i$.

We define the local truncation error as follows. Let $P\!\left(\frac{d}{dx}\right)$ denote a differential operator on $u$ in a linear differential equation, e.g.,
$$P u = f \ \text{ represents } \ u''(x) = f(x) \quad \text{if} \quad P = \frac{d^2}{dx^2};$$
and $P u = f$ represents $u''' + a u'' + b u' + c u = f(x)$ if
$$P = \frac{d^3}{dx^3} + a(x)\frac{d^2}{dx^2} + b(x)\frac{d}{dx} + c(x).$$
Let $P_h$ be a corresponding finite difference operator, e.g., for the second-order differential equation $u''(x) = f(x)$, a possible finite difference operator is
$$P_h u(x) = \frac{u(x-h) - 2u(x) + u(x+h)}{h^2}.$$
More examples will be considered later. In general, the local truncation error is then defined as
$$T(x) = P_h u - P u, \qquad (2.21)$$
where it is noted that $u$ is the exact solution. For example, for the differential equation $u''(x) = f(x)$ and the three-point central difference scheme (2.18), the local truncation error is
$$T(x) = P_h u - P u = \frac{u(x-h) - 2u(x) + u(x+h)}{h^2} - u''(x) = \frac{u(x-h) - 2u(x) + u(x+h)}{h^2} - f(x). \qquad (2.22)$$
Note that local truncation errors depend on the solution in the finite difference stencil (three points in this example) but not on the solution globally (far away), hence the local tag.

Definition 2.2. A finite difference scheme is called consistent if
$$\lim_{h\to 0} T(x) = \lim_{h\to 0} (P_h u - P u) = 0. \qquad (2.23)$$

Usually we should use consistent finite difference schemes. If $|T(x)| \le Ch^p$, $p > 0$, then we say that the discretization is $p$-th order accurate, where $C = O(1)$ is the error constant, dependent on the solution $u(x)$. To check whether or not a finite difference scheme is consistent, we Taylor expand all the terms in the local truncation error at a master grid point $x_i$.

For example, the three-point central finite difference scheme for $u''(x) = f(x)$ produces
$$T(x) = \frac{u(x-h) - 2u(x) + u(x+h)}{h^2} - u''(x) = \frac{h^2}{12}\, u^{(4)}(x) + \cdots = O(h^2),$$
such that $|T(x)| \le Ch^2$ with $C = \frac{1}{12}\max_x |u^{(4)}(x)|$, i.e., the finite difference scheme is consistent and the discretization is second-order accurate.

Now let us examine another finite difference scheme for $u''(x) = f(x)$, namely,
$$\begin{aligned}
\frac{U_i - 2U_{i+1} + U_{i+2}}{h^2} &= f(x_i), \quad i = 1, 2, \ldots, n-2,\\
\frac{U_{n-2} - 2U_{n-1} + u(b)}{h^2} &= f(x_{n-1}).
\end{aligned}$$
The discretization at $x_{n-1}$ is second-order accurate since $T(x_{n-1}) = O(h^2)$, but the local truncation error at all other grid points is
$$T(x_i) = \frac{u(x_i) - 2u(x_{i+1}) + u(x_{i+2})}{h^2} - f(x_i) = O(h),$$
i.e., at all grid points where the solution is unknown. We have $\lim_{h\to 0} T(x_i) = 0$, so the finite difference scheme is consistent. However, if we implement this finite difference scheme we may get weird results, because it does not use the boundary condition at $x = a$, which is obviously wrong. Thus consistency cannot guarantee the convergence of a scheme, and we need to satisfy another condition, namely stability.

Consider the representation
$$A u = F + T, \qquad A U = F \quad\Longrightarrow\quad A(u - U) = T = -AE, \qquad (2.24)$$
where $E = U - u$, $A$ is the coefficient matrix of the finite difference equations, $F$ is the modified source term that takes the boundary conditions into account, and $T$ is the local truncation error vector at the grid points where the solution is unknown. Thus, if $A$ is nonsingular, then
$$\|E\| = \|A^{-1} T\| \le \|A^{-1}\|\, \|T\|.$$
However, if $A$ is singular, then $\|E\|$ may become arbitrarily large, so the finite difference method may not converge; this is the case in the example above. In contrast, for the central finite difference scheme (2.1) we have
$$\|E\| \le \|A^{-1}\|\, Ch^2,$$
and we can prove that $\|A^{-1}\|$ is bounded by a constant. Note that the global error depends on both $\|A^{-1}\|$ and the local truncation error vector $T$.

Definition 2.3. A finite difference method for BVPs is stable if $A$ is invertible and
$$\|A^{-1}\| \le C, \quad \text{for all } 0 < h < h_0, \qquad (2.25)$$
where $C$ and $h_0$ are two constants that are independent of $h$.

From the definitions of consistency and stability, and the discussion above, we reach the following theorem:

Theorem 2.4. A consistent and stable finite difference method is convergent.

Usually it is easy to prove consistency but more difficult to prove stability. To prove the convergence of the central finite difference scheme (2.1) for $u''(x) = f(x)$, we can apply the following lemma:

Lemma 2.5. Consider a symmetric tridiagonal matrix $A \in \mathbb{R}^{n\times n}$ whose main diagonal and off-diagonals are two constants, $d$ and $\alpha$, respectively. Then the eigenvalues of $A$ are
$$\lambda_j = d + 2\alpha \cos\left(\frac{\pi j}{n+1}\right), \quad j = 1, 2, \ldots, n, \qquad (2.26)$$
and the corresponding eigenvectors are
$$x^j_k = \sin\left(\frac{\pi k j}{n+1}\right), \quad k = 1, 2, \ldots, n. \qquad (2.27)$$

The lemma can be proved by direct verification (from $A x^j = \lambda_j x^j$). We also note that the eigenvectors $x^j$ are mutually orthogonal in the $\mathbb{R}^n$ vector space.

Theorem 2.6. The central finite difference method for $u''(x) = f(x)$ with a Dirichlet boundary condition is convergent, with $\|E\|_\infty \le \|E\|_2 \le Ch^{3/2}$.

Proof. From the finite difference method, we know that the finite difference coefficient matrix $A \in \mathbb{R}^{(n-1)\times(n-1)}$ is tridiagonal with $d = -2/h^2$ and $\alpha = 1/h^2$, so the eigenvalues of $A$ are
$$\lambda_j = -\frac{2}{h^2} + \frac{2}{h^2}\cos\left(\frac{\pi j}{n}\right) = -\frac{2}{h^2}\bigl(1 - \cos(\pi j h)\bigr).$$
Noting that the eigenvalues of $A^{-1}$ are $1/\lambda_j$ and that $A^{-1}$ is also symmetric, we have
$$\|A^{-1}\|_2 = \frac{1}{\min_j |\lambda_j|} = \frac{h^2}{2\bigl(1 - \cos(\pi h)\bigr)} \le \frac{4}{\pi^2}.$$
(We can use the identity $1 - \cos(\pi h) = 2\sin^2(\pi h/2)$ to get $\|A^{-1}\|_2 = h^2/\bigl(4\sin^2(\pi h/2)\bigr) \le 4/\pi^2$; as $h \to 0$ this quantity approaches $1/\pi^2$.) Using the inequality $\|E\|_\infty \le \|E\|_2$, we therefore have
$$\|E\|_\infty \le \|E\|_2 \le \|A^{-1}\|_2\, \|T\|_2 \le \sqrt{n-1}\;\|A^{-1}\|_2\, \|T\|_\infty \le \frac{4\sqrt{n-1}}{\pi^2}\,\bar{C} h^2 \le C h^{3/2},$$
since $n = O(1/h)$. The error bound is overestimated, since we can also prove that the infinity norm of the error is also proportional to $h^2$ using the maximum principle or the Green's function approach (cf. LeVeque, 2007).

Remark 2.7. The eigenvectors and eigenvalues of the coefficient matrix in (2.1) can be obtained by considering the Sturm–Liouville eigenvalue problem
$$u''(x) + \lambda u = 0, \qquad u(0) = u(1) = 0. \qquad (2.28)$$
It is easy to check that the eigenvalues are
$$\lambda_k = (k\pi)^2, \quad k = 1, 2, \ldots, \qquad (2.29)$$
and the corresponding eigenfunctions are
$$u_k(x) = \sin(k\pi x). \qquad (2.30)$$
The discrete form at a grid point is
$$u_k(x_i) = \sin(k\pi i h), \quad i = 1, 2, \ldots, n-1, \qquad (2.31)$$
which is one of the eigenvectors of the coefficient matrix in (2.1). The corresponding eigenvalue can be found using the definition $Ax = \lambda x$.

2.4.3 The Effect of Round-off Errors

From knowledge of numerical linear algebra, we know that
$$\|A\|_2 = \max_j |\lambda_j| = \frac{2}{h^2}\bigl(1 - \cos(\pi (n-1) h)\bigr) \le \frac{4}{h^2} = 4n^2,$$
and therefore the condition number of $A$ satisfies $\kappa(A) = \|A\|_2\,\|A^{-1}\|_2 = O(n^2)$. The relative error of the computed solution $U$ for a stable scheme satisfies
$$\begin{aligned}
\frac{\|U - u\|}{\|u\|} &\le \text{local truncation error} + \text{round-off error in solving } AU = F\\
&\le \|A^{-1}\|\,\|T\| + \bar{C} g(n)\,\kappa(A)\,\epsilon \;\le\; Ch^2 + \bar{C} g(n)\,\frac{\epsilon}{h^2},
\end{aligned}$$
where $g(n)$ is the growth factor of the algorithm for solving the linear system of equations and $\epsilon$ is the machine precision. For most computers, $\epsilon \approx 10^{-8}$ when we use single precision and $\epsilon \approx 10^{-16}$ for double precision.

Usually the global error decreases as $h$ decreases, but in the presence of round-off errors it may actually increase if $h$ is too small! We can roughly estimate such a critical $h$.

To keep the discussion simple, assume that $C \sim O(1)$ and $g(n) \sim O(1)$; then roughly the critical $h$ occurs when the local truncation error is about the same as the round-off error (in magnitude), i.e.,
$$h^2 \approx \frac{\epsilon}{h^2}, \quad \text{i.e.,} \quad h = \epsilon^{1/4},$$
which is about $10^{-2}$ for single precision (machine precision about $10^{-8}$) and $10^{-4}$ for double precision (machine precision about $10^{-16}$). Consequently, if we use single precision there is no point in taking more than about 100 grid points, and we get roughly four significant digits at best, so we usually use double precision to solve BVPs. Note that Matlab uses double precision by default.

2.5 FD Methods for 1D Self-adjoint BVPs

Consider 1D self-adjoint BVPs of the form
$$\bigl(p(x)\, u'(x)\bigr)' - q(x)\, u(x) = f(x), \quad a < x < b, \qquad (2.32)$$
$$u(a) = u_a, \quad u(b) = u_b, \quad \text{or other BC}. \qquad (2.33)$$
This is also called a Sturm–Liouville problem. The existence and uniqueness of the solution is assured by the following theorem.

Theorem 2.8. If $p(x) \in C^1(a, b)$, $q(x) \in C^0(a, b)$, $f(x) \in C^0(a, b)$, $q(x) \ge 0$, and there is a positive constant $p_0$ such that $p(x) \ge p_0 > 0$, then there is a unique solution $u(x) \in C^2(a, b)$.

Here $C^0(a, b)$ is the space of all continuous functions on $[a, b]$, $C^1(a, b)$ is the space of all functions that have a continuous first-order derivative on $[a, b]$, and so on. We will see later that there are weaker conditions for finite element methods, where integral forms are used. The proof of this theorem is usually given in advanced differential equations courses. Let us assume the solution exists and focus on the finite difference method for such a BVP, which involves the following steps.

Step 1: Generate a grid. For simplicity, consider the uniform Cartesian grid
$$x_i = a + i h, \quad h = \frac{b-a}{n}, \quad i = 0, 1, \ldots, n,$$
where in particular $x_0 = a$ and $x_n = b$. Sometimes an adaptive grid may be preferred, but not for the central finite difference scheme.

Step 2: Substitute derivatives with finite difference formulas at each grid point where the solution is unknown. This step is also called the discretization.

Define $x_{i+\frac{1}{2}} = x_i + h/2$, so that $x_{i+\frac{1}{2}} - x_{i-\frac{1}{2}} = h$. Using the central finite difference formula at a typical grid point $x_i$ with the half grid size, we obtain
$$\frac{p_{i+\frac{1}{2}}\, u'(x_{i+\frac{1}{2}}) - p_{i-\frac{1}{2}}\, u'(x_{i-\frac{1}{2}})}{h} - q_i\, u(x_i) = f(x_i) + E_i^1,$$
where $p_{i\pm\frac{1}{2}} = p(x_{i\pm\frac{1}{2}})$, $q_i = q(x_i)$, $f_i = f(x_i)$, and $|E_i^1| \le Ch^2$. Applying the central finite difference scheme for the first-order derivative then gives
$$\frac{1}{h}\left( p_{i+\frac{1}{2}}\, \frac{u(x_{i+1}) - u(x_i)}{h} - p_{i-\frac{1}{2}}\, \frac{u(x_i) - u(x_{i-1})}{h} \right) - q_i\, u(x_i) = f(x_i) + E_i^1 + E_i^2,$$
for $i = 1, 2, \ldots, n-1$. The consequent finite difference solution $U_i \approx u(x_i)$ is then defined as the solution of the linear system of equations
$$\frac{p_{i+\frac{1}{2}}\, U_{i+1} - \bigl(p_{i+\frac{1}{2}} + p_{i-\frac{1}{2}}\bigr) U_i + p_{i-\frac{1}{2}}\, U_{i-1}}{h^2} - q_i\, U_i = f_i, \qquad (2.34)$$
for $i = 1, 2, \ldots, n-1$. In matrix-vector form, this linear system can be written as $AU = F$, where
$$A = \frac{1}{h^2}\begin{bmatrix}
-\bigl(p_{\frac12}+p_{\frac32}\bigr) - q_1 h^2 & p_{\frac32} & & \\
p_{\frac32} & -\bigl(p_{\frac32}+p_{\frac52}\bigr) - q_2 h^2 & p_{\frac52} & \\
 & \ddots & \ddots & \ddots\\
 & & p_{n-\frac32} & -\bigl(p_{n-\frac32}+p_{n-\frac12}\bigr) - q_{n-1} h^2
\end{bmatrix},$$
$$U = \begin{bmatrix} U_1\\ U_2\\ \vdots\\ U_{n-1} \end{bmatrix}, \qquad
F = \begin{bmatrix} f(x_1) - p_{\frac12}\, u_a/h^2\\ f(x_2)\\ \vdots\\ f(x_{n-2})\\ f(x_{n-1}) - p_{n-\frac12}\, u_b/h^2 \end{bmatrix}.$$
It is important to note that $A$ is symmetric, negative definite, weakly diagonally dominant, and an M-matrix. Those properties guarantee that $A$ is nonsingular.

The differential equation may also be written in the nonconservative form
$$p(x)\, u'' + p'(x)\, u' - q\, u = f(x),$$
where second-order finite difference formulas can be applied. However, the derivative of $p(x)$ or its finite difference approximation is needed, and the coefficient matrix of the corresponding finite difference equations is no longer symmetric, nor negative definite, nor diagonally dominant. Consequently, we tend to avoid using the nonconservative form if possible.

The local truncation error of the conservative finite difference scheme is
$$T_i = \frac{p_{i+\frac{1}{2}}\, u(x_{i+1}) - \bigl(p_{i+\frac{1}{2}} + p_{i-\frac{1}{2}}\bigr) u(x_i) + p_{i-\frac{1}{2}}\, u(x_{i-1})}{h^2} - q_i\, u(x_i) - f_i. \qquad (2.35)$$
Note that $P(d/dx) = \frac{d}{dx}\bigl(p\,\frac{d}{dx}\bigr) - q$ is the differential operator here. It is easy to show that $|T_i| \le Ch^2$, but it is more difficult to show that $\|A^{-1}\| \le C$. However, we can use the maximum principle to prove second-order convergence of the finite difference scheme, as explained later.
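A minimal sketch of how the conservative scheme (2.34) might be assembled and solved in Matlab is given below; the function name and interface are our assumptions (not the book's code), and p, q, f are vectorized function handles.

function [x,U] = sturm_liouville_fd(a,b,ua,ub,p,q,f,n)
% Sketch of the conservative scheme (2.34) for
%   (p(x)u')' - q(x)u = f(x),  u(a) = ua, u(b) = ub, on a uniform grid.
  h  = (b-a)/n;
  x  = a + (1:n-1)'*h;                      % interior grid points
  pm = p(x-h/2);  pp = p(x+h/2);            % p_{i-1/2} and p_{i+1/2}
  B  = [[pm(2:end);0], -(pm+pp)-h^2*q(x), [0;pp(1:end-1)]]/h^2;
  A  = spdiags(B, -1:1, n-1, n-1);          % symmetric tridiagonal matrix
  F  = f(x);
  F(1)   = F(1)   - pm(1)*ua/h^2;           % boundary contributions
  F(end) = F(end) - pp(end)*ub/h^2;
  U  = A\F;
end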

2.6 FD Methods for General 1D BVPs

Consider the problem
$$p(x)\, u''(x) + r(x)\, u'(x) - q(x)\, u(x) = f(x), \quad a < x < b, \qquad (2.36)$$
$$u(a) = u_a, \quad u(b) = u_b, \quad \text{or other BC}. \qquad (2.37)$$
There are two different discretization techniques that we can use, depending on the magnitude of $r(x)$.

1. Central finite difference discretization for all derivatives:
$$p_i\,\frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} + r_i\,\frac{U_{i+1} - U_{i-1}}{2h} - q_i\, U_i = f_i, \qquad (2.38)$$
for $i = 1, 2, \ldots, n-1$, where $p_i = p(x_i)$ and so on. An advantage of this discretization is that the method is second-order accurate; a disadvantage is that the coefficient matrix may not be diagonally dominant even if $q(x) \ge 0$ and $p(x) > 0$. If $u$ denotes the velocity in some applications, then $r(x)u'(x)$ is often called an advection term. When the advection is strong (i.e., $|r(x)|$ is large), the central finite difference approximation is likely to exhibit nonphysical oscillations, e.g., when $|r_i|\,h/(2p_i) > 1$.

2. The upwind discretization for the first-order derivative and the central finite difference scheme for the diffusion term:
$$p_i\,\frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} + r_i\,\frac{U_{i+1} - U_i}{h} - q_i\, U_i = f_i, \quad \text{if } r_i \ge 0,$$
$$p_i\,\frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} + r_i\,\frac{U_i - U_{i-1}}{h} - q_i\, U_i = f_i, \quad \text{if } r_i < 0.$$
This scheme increases the diagonal dominance of the finite difference coefficient matrix if $q(x) \ge 0$, but it is only first-order accurate; a code sketch of this choice is given at the end of this section. It is often easier and more accurate to solve a linear system of equations with a diagonally dominant matrix. If $r(x)$ is very large (say $|r(x)| \sim 1/h$), the finite difference solution using the upwind scheme will not have the nonphysical oscillations seen with the central finite difference scheme.

Note that if $p(x) = 1$, $r(x) = 0$, and $q(x) \le 0$, then the BVP is a 1D Helmholtz equation that may be difficult to solve if $|q(x)|$ is large, say of order $1/h^2$.
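Below is a minimal sketch (assumed names, not the book's code) of the upwind discretization above: central differencing for the diffusion term and a one-sided difference for r(x)u'(x), with the direction chosen by the sign of r at each grid point.

function [x,U] = upwind_bvp(a,b,ua,ub,p,r,q,f,n)
% Sketch: first-order upwind treatment of r u' combined with central
% differencing of p u'' in  p u'' + r u' - q u = f,  u(a)=ua, u(b)=ub.
  h = (b-a)/n;  x = a + (1:n-1)'*h;  m = n-1;
  A = sparse(m,m);  F = f(x);
  for i = 1:m
    cl = p(x(i))/h^2;  cr = p(x(i))/h^2;       % from the diffusion term
    cd = -2*p(x(i))/h^2 - q(x(i));
    if r(x(i)) >= 0                            % upwind uses (U_{i+1}-U_i)/h
      cr = cr + r(x(i))/h;  cd = cd - r(x(i))/h;
    else                                       % upwind uses (U_i-U_{i-1})/h
      cl = cl - r(x(i))/h;  cd = cd + r(x(i))/h;
    end
    A(i,i) = cd;
    if i > 1, A(i,i-1) = cl; else, F(i) = F(i) - cl*ua; end
    if i < m, A(i,i+1) = cr; else, F(i) = F(i) - cr*ub; end
  end
  U = A\F;
end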

2.7 The Ghost Point Method for Boundary Conditions Involving Derivatives

In this section, we discuss how to treat Neumann and mixed (Robin) boundary conditions. Let us first consider the problem
$$u''(x) = f(x), \quad a < x < b, \qquad u'(a) = \alpha, \quad u(b) = u_b,$$
where the solution at $x = a$ is unknown. If we use a uniform Cartesian grid $x_i = a + ih$, then $U_0$ is one component of the solution. We can still use the central finite difference discretization at the interior grid points,
$$\frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} = f_i, \quad i = 1, 2, \ldots, n-1,$$
but we need an additional equation at $x_0 = a$ given the Neumann boundary condition there. One approach is to take
$$\frac{U_1 - U_0}{h} = \alpha, \quad \text{or} \quad \frac{-U_0 + U_1}{h^2} = \frac{\alpha}{h}, \qquad (2.39)$$
and the resulting linear system of equations is again tridiagonal and symmetric negative definite:
$$\begin{bmatrix}
-\frac{1}{h^2} & \frac{1}{h^2} & & & \\
\frac{1}{h^2} & -\frac{2}{h^2} & \frac{1}{h^2} & & \\
 & \ddots & \ddots & \ddots & \\
 & & \frac{1}{h^2} & -\frac{2}{h^2} & \frac{1}{h^2}\\
 & & & \frac{1}{h^2} & -\frac{2}{h^2}
\end{bmatrix}
\begin{bmatrix} U_0\\ U_1\\ \vdots\\ U_{n-2}\\ U_{n-1} \end{bmatrix}
=
\begin{bmatrix} \alpha/h\\ f(x_1)\\ \vdots\\ f(x_{n-2})\\ f(x_{n-1}) - u_b/h^2 \end{bmatrix}. \qquad (2.40)$$
However, this approach is only first-order accurate if $\alpha \neq 0$.

To maintain second-order accuracy, the ghost point method is recommended, where a ghost grid point $x_{-1} = x_0 - h = a - h$ is added and the solution is extended to the interval $[a-h, a]$. Then the central finite difference scheme can be used at all grid points where the solution is unknown, i.e., for $i = 0, 1, \ldots, n-1$. However, we now have $n$ equations and $n+1$ unknowns including $U_{-1}$, so one more equation is needed to close the system. The additional equation is the central finite difference equation for the Neumann boundary condition,
$$\frac{U_1 - U_{-1}}{2h} = \alpha, \qquad (2.41)$$
which yields $U_{-1} = U_1 - 2h\alpha$. Inserting this into the central finite difference equation at $x = a$, i.e., at $x_0$, now treated as an interior grid point, we have
$$\frac{U_{-1} - 2U_0 + U_1}{h^2} = f_0, \quad\Longrightarrow\quad \frac{U_1 - 2h\alpha - 2U_0 + U_1}{h^2} = f_0, \quad\Longrightarrow\quad \frac{-U_0 + U_1}{h^2} = \frac{f_0}{2} + \frac{\alpha}{h},$$
where the coefficient matrix is precisely the same as for (2.40), and the only difference in this second-order method is the component $f_0/2 + \alpha/h$ in the vector on the right-hand side, rather than $\alpha/h$ in the previous first-order method.

To discuss the stability, we can use the eigenvalues of the coefficient matrix, and hence the 2-norm of its inverse, $\|A^{-1}\|_2$. The eigenvalues can again be associated with the related continuous problem
$$u'' + \lambda u = 0, \qquad u'(0) = 0, \quad u(1) = 0. \qquad (2.42)$$
It is easy to show that the eigenfunctions are
$$u_k(x) = \cos\left(\frac{\pi x}{2} + k\pi x\right), \qquad (2.43)$$
corresponding to the eigenvalues $\lambda_k = (\pi/2 + k\pi)^2$, from which we can conclude that the ghost point approach is stable. The convergence follows by combining the stability and the consistency.

We compare the two finite difference methods in Figure 2.4, where the differential equation $u''(x) = f(x)$ is subject to a Dirichlet boundary condition at $x = 0$ and a Neumann boundary condition at $x = 0.5$. When $f(x) = -\pi^2\cos(\pi x)$, $u(0) = 1$, and $u_x(0.5) = -\pi$, the exact solution is $u(x) = \cos(\pi x)$.

Figure 2.4. (a) A grid refinement analysis of the ghost point method and the first-order method; the slopes of the curves are the order of convergence. (b) The error plot of the computed solution from the ghost point method.

Figure 2.4(a) shows the grid refinement analysis using both the backward finite difference method (bw_at_b.m) and the ghost point method (ghost_at_b.m). The error in the second-order method is evidently much smaller than that in the first-order method. In Figure 2.4(b), we show the error plot of the ghost point method, and note that the error at $x = b$ is no longer zero.

2.7.1 A Matlab Code of the Ghost Point Method
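The listing is not reproduced in this transcription. The following is only a minimal sketch of a function playing the role of the book's ghost_at_b.m, applying the ghost point idea at the right endpoint x = b (Dirichlet at a, Neumann at b, matching the driver below); the interface and names are our assumptions.

function [x,U] = ghost_at_b(a,b,ua,uxb,f,n)
% Sketch: u''(x) = f(x), u(a) = ua (Dirichlet), u'(b) = uxb (Neumann),
% using a ghost point x_{n+1} = b + h to keep second-order accuracy.
  h = (b-a)/n;
  x = a + (1:n)'*h;                   % unknowns at x_1, ..., x_n = b
  e = ones(n,1);
  A = spdiags([e -2*e e]/h^2, -1:1, n, n);
  F = f(x);
  F(1) = F(1) - ua/h^2;               % Dirichlet condition at x = a
  % Eliminating the ghost value U_{n+1} = U_{n-1} + 2*h*uxb from the
  % central FD equation at x_n gives the last (halved) equation:
  A(n,n-1) = 1/h^2;  A(n,n) = -1/h^2;
  F(n) = f(b)/2 - uxb/h;
  U = A\F;
end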

The Matlab Driver Program

Below is a Matlab driver code to solve the two-point BVP
$$u''(x) = f(x), \quad a < x < b, \qquad u(a) = ua, \quad u'(b) = uxb.$$
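The driver listing is likewise missing from this transcription. A minimal sketch, using the ghost_at_b function sketched above and the test problem of Figure 2.4 (the script structure and names are our assumptions), is:

% Sketch of a driver for u'' = -pi^2*cos(pi*x), u(0) = 1, u'(0.5) = -pi,
% whose exact solution is u(x) = cos(pi*x).
a = 0; b = 0.5; ua = 1; uxb = -pi;
f = @(x) -pi^2*cos(pi*x);
err = zeros(5,1); h = zeros(5,1);
for k = 1:5
  n = 10*2^(k-1);
  [x,U] = ghost_at_b(a,b,ua,uxb,f,n);
  err(k) = max(abs(U - cos(pi*x)));   % infinity norm of the error
  h(k) = (b-a)/n;
end
loglog(h,err,'-o'); xlabel('h'); ylabel('error');  % slope should be 2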

2.7.2 Dealing with Mixed Boundary Conditions

The ghost point method can be used to discretize a mixed boundary condition. Suppose that $\alpha u'(a) + \beta u(a) = \gamma$ at $x = a$, where $\alpha \neq 0$. Then we can discretize the boundary condition by
$$\alpha\,\frac{U_1 - U_{-1}}{2h} + \beta U_0 = \gamma, \quad \text{or} \quad U_{-1} = U_1 + \frac{2\beta h}{\alpha}\, U_0 - \frac{2h\gamma}{\alpha},$$
and substitute this into the central finite difference equation at $x = x_0$ to get
$$\left(-\frac{2}{h^2} + \frac{2\beta}{\alpha h}\right) U_0 + \frac{2}{h^2}\, U_1 = f_0 + \frac{2\gamma}{\alpha h}, \qquad (2.44)$$
or
$$\left(-\frac{1}{h^2} + \frac{\beta}{\alpha h}\right) U_0 + \frac{1}{h^2}\, U_1 = \frac{f_0}{2} + \frac{\gamma}{\alpha h}, \qquad (2.45)$$
yielding a symmetric coefficient matrix.

2.8 An Example of a Nonlinear BVP

Discretizing a nonlinear differential equation generally produces a nonlinear algebraic system. Furthermore, if we can solve the nonlinear system then we can get an approximate solution. We present an example in this section to illustrate the procedure.

Consider the following nonlinear (quasilinear) BVP:
$$\frac{d^2 u}{dx^2} - u^2 = f(x), \quad 0 < x < \pi, \qquad u(0) = 0, \quad u(\pi) = 0. \qquad (2.46)$$

If we apply the central finite difference scheme, then we obtain the system of nonlinear equations
$$\frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} - U_i^2 = f(x_i), \quad i = 1, 2, \ldots, n-1. \qquad (2.47)$$
This nonlinear system of equations can be solved in several ways:

- Approximate the nonlinear ODE using a linearization process. Unfortunately, not all linearization processes will work.
- A substitution method, where the nonlinear term is approximated in each iteration using the previous approximation. For example, given an initial guess $U^{(0)}(x)$, we obtain a new approximation using
$$\frac{U^{(k+1)}_{i-1} - 2U^{(k+1)}_i + U^{(k+1)}_{i+1}}{h^2} - U^{(k)}_i U^{(k+1)}_i = f(x_i), \quad k = 0, 1, \ldots, \qquad (2.48)$$
which involves solving a linear two-point BVP at each iteration. The main concerns are then whether or not the method converges, and the rate of convergence if it does.
- Solve the nonlinear system of equations using advanced methods, i.e., Newton's method as explained below, or one of its variations.

In general, a nonlinear system of equations $F(U) = 0$ is obtained if we discretize a nonlinear ODE or PDE, i.e.,
$$\begin{aligned}
F_1(U_1, U_2, \ldots, U_m) &= 0,\\
F_2(U_1, U_2, \ldots, U_m) &= 0,\\
&\;\;\vdots\\
F_m(U_1, U_2, \ldots, U_m) &= 0,
\end{aligned} \qquad (2.49)$$
where for the example above we have $m = n - 1$ and
$$F_i(U_1, U_2, \ldots, U_m) = \frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} - U_i^2 - f(x_i), \quad i = 1, 2, \ldots, n-1.$$
The system of nonlinear equations can generally be solved by Newton's method or some variant of it. Given an initial guess $U^{(0)}$, the Newton iteration is
$$U^{(k+1)} = U^{(k)} - \bigl(J(U^{(k)})\bigr)^{-1} F(U^{(k)}), \qquad (2.50)$$
or
$$\begin{cases}
J(U^{(k)})\,\Delta U^{(k)} = -F(U^{(k)}),\\
U^{(k+1)} = U^{(k)} + \Delta U^{(k)},
\end{cases} \qquad k = 0, 1, 2, \ldots,$$

where $J(U)$ is the Jacobian matrix, defined as
$$J(U) = \begin{bmatrix}
\dfrac{\partial F_1}{\partial U_1} & \dfrac{\partial F_1}{\partial U_2} & \cdots & \dfrac{\partial F_1}{\partial U_m}\\[2mm]
\dfrac{\partial F_2}{\partial U_1} & \dfrac{\partial F_2}{\partial U_2} & \cdots & \dfrac{\partial F_2}{\partial U_m}\\
\vdots & \vdots & \ddots & \vdots\\
\dfrac{\partial F_m}{\partial U_1} & \dfrac{\partial F_m}{\partial U_2} & \cdots & \dfrac{\partial F_m}{\partial U_m}
\end{bmatrix}.$$
For the example problem, we have
$$J(U) = \begin{bmatrix}
-\frac{2}{h^2} - 2U_1 & \frac{1}{h^2} & & \\
\frac{1}{h^2} & -\frac{2}{h^2} - 2U_2 & \frac{1}{h^2} & \\
 & \ddots & \ddots & \ddots\\
 & & \frac{1}{h^2} & -\frac{2}{h^2} - 2U_{n-1}
\end{bmatrix}.$$

Figure 2.5. (a) Plot of the initial and consecutive approximations to the nonlinear system of equations and (b) the error plot.

We implemented Newton's method in a Matlab code, non_tp.m. In Figure 2.5(a), we show the initial and consecutive approximations to the nonlinear BVP using Newton's method with the initial guess $U^{(0)}_i = x_i(\pi - x_i)$. With an $n = 40$ mesh and the tolerance $tol = 10^{-8}$, it takes only 6 iterations to converge. The errors of the computed solution at the grid points are plotted in Figure 2.5(b).

It is not always easy to find $J(U)$, and it can be computationally expensive to do so. Furthermore, Newton's method is only locally convergent, i.e., it requires a close enough initial guess to guarantee its convergence. Quasi-Newton methods, such as the Broyden and BFGS rank-one and rank-two update methods, and also conjugate gradient methods, can avoid evaluating the Jacobian matrix. The main issues remain global convergence, the convergence order (Newton's method is quadratically convergent locally), and computational issues (storage, etc.). A well-known software package called MINPACK is available through netlib (cf. Dennis and Schnabel, 1996 for more complete discussions).
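The non_tp.m listing is not reproduced in this transcription. The following is a minimal sketch of a Newton solver for the discrete system (2.47) with the tridiagonal Jacobian above; the function name, interface, and iteration limit are our assumptions.

function [x,U] = nonlinear_bvp(f,n,tol)
% Sketch: Newton's method for (U_{i-1}-2U_i+U_{i+1})/h^2 - U_i^2 = f(x_i)
% on (0, pi) with U_0 = U_n = 0; f is a vectorized function handle.
  h = pi/n;  x = (1:n-1)'*h;  m = n-1;
  U = x.*(pi - x);                          % initial guess
  e = ones(m,1);
  L = spdiags([e -2*e e]/h^2, -1:1, m, m);  % linear (diffusion) part
  for k = 1:50
    G  = L*U - U.^2 - f(x);                 % residual F(U)
    J  = L - spdiags(2*U, 0, m, m);         % Jacobian of F at U
    dU = -J\G;                              % Newton correction
    U  = U + dU;
    if norm(dU,inf) < tol, break; end
  end
end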

2.9 The Grid Refinement Analysis Technique

After we have learned or developed a numerical method, together with its convergence analysis (consistency, stability, order of convergence, and computational complexity such as operation counts and storage), we need to validate and confirm the analysis numerically. The algorithmic behavior becomes clearer through the numerical results, and there are several ways to proceed.

- Analyze the output. Examine the boundary conditions and the maximum/minimum values of the numerical solutions, to see whether they agree with the ODE or PDE theory and your intuition.
- Compare the numerical solutions with experimental data, with sufficient parameter variations.
- Do a grid refinement analysis, whether an exact solution is known or not.

Let us now explain the grid refinement analysis when there is an exact solution. Assume a method is $p$-th order accurate, such that $\|E_h\| \approx Ch^p$ if $h$ is small enough, or
$$\log \|E_h\| \approx \log C + p \log h. \qquad (2.51)$$
Thus, if we plot $\log \|E_h\|$ against $\log h$ using the same scale on both axes, then the slope $p$ is the convergence order. Furthermore, if we divide $h$ by half to get $\|E_{h/2}\|$, then we have the following relations:
$$\text{ratio} = \frac{\|E_h\|}{\|E_{h/2}\|} \approx \frac{Ch^p}{C(h/2)^p} = 2^p, \qquad (2.52)$$
$$p \approx \frac{\log\bigl(\|E_h\|/\|E_{h/2}\|\bigr)}{\log 2} = \frac{\log(\text{ratio})}{\log 2}. \qquad (2.53)$$

For a first-order method (p = 1), the ratio approaches the number two as h approaches zero. For a second-order method (p = 2), the ratio approaches the number four as h approaches zero, and so on. Incidentally, the method is called superlinearly convergent if p is some number between one and two.

For a simple problem, we may come up with an exact solution easily. For example, for most single linear ODEs or PDEs, we can simply set an exact solution $u_e(x)$ and hence determine other functions and parameters, such as the source term f(x), boundary and initial conditions, etc. For more complicated systems of ODEs or PDEs, the exact solution is often difficult if not impossible to construct, but one can search the literature to see whether there are similar examples, e.g., some may have become benchmark problems. Any new method may then be compared with the benchmark problem results.

If we do not have the exact solution, the order of convergence can still be estimated by comparing a numerical solution with one obtained from a finer mesh. Suppose the numerical solution converges and satisfies

$$u_h = u_e + C h^p + \cdots, \qquad (2.54)$$

where $u_h$ is the numerical solution and $u_e$ is the true solution, and let $u_{h_*}$ be the solution obtained from the finest mesh. Thus we have

$$u_{h_*} = u_e + C h_*^p + \cdots. \qquad (2.55)$$

From the estimates above, we obtain

$$u_h - u_{h_*} \approx C\,(h^p - h_*^p), \qquad (2.56)$$

$$u_{h/2} - u_{h_*} \approx C\,\bigl((h/2)^p - h_*^p\bigr), \qquad (2.57)$$

and the ratio

$$\frac{u_h - u_{h_*}}{u_{h/2} - u_{h_*}} \approx \frac{h^p - h_*^p}{(h/2)^p - h_*^p} = \frac{2^p\bigl(1 - (h_*/h)^p\bigr)}{1 - 2^p (h_*/h)^p}, \qquad (2.58)$$

from which we can estimate the order of accuracy p. For example, on doubling the number of grid points successively we have

$$\frac{h_*}{h} = \frac{1}{2^k}, \quad k = 2, 3, \ldots, \qquad (2.59)$$

and then the ratio in (2.58) is

$$\frac{\tilde u(h) - \tilde u(h_*)}{\tilde u(h/2) - \tilde u(h_*)} = \frac{2^p\,(1 - 2^{-kp})}{1 - 2^{p(1-k)}}. \qquad (2.60)$$

In particular, for a first-order method (p = 1) this becomes

$$\frac{\tilde u(h) - \tilde u(h_*)}{\tilde u(h/2) - \tilde u(h_*)} = \frac{2\,(1 - 2^{-k})}{1 - 2^{1-k}} = \frac{2^k - 1}{2^{k-1} - 1}.$$

If we take k = 2, 3, ..., then the ratios above are 3, 7/3 ≈ 2.33, 15/7 ≈ 2.14, 31/15 ≈ 2.07, .... Similarly, for a second-order method (p = 2), (2.60) becomes

$$\frac{\tilde u(h) - \tilde u(h_*)}{\tilde u(h/2) - \tilde u(h_*)} = \frac{4\,(1 - 4^{-k})}{1 - 4^{1-k}} = \frac{4^k - 1}{4^{k-1} - 1},$$

and the ratios are 5, 63/15 = 4.2, 255/63 ≈ 4.05, 1023/255 ≈ 4.01, ... when k = 2, 3, ....
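As a quick illustration of the exact-solution case, the following Matlab sketch performs a grid refinement analysis for the three-point scheme applied to u'' = f with a manufactured solution; the model problem and the grid sizes are illustrative choices.

```matlab
% Grid refinement analysis for the three-point central scheme applied to
%   u'' = -sin(x),  u(0) = u(pi) = 0,  whose exact solution is u = sin(x).
% For this second-order method the ratios should approach 4 and the
% estimated order p should approach 2, cf. (2.52)-(2.53).
N = [10 20 40 80 160];  E = zeros(size(N));
for s = 1:length(N)
    n = N(s);  h = pi/n;  x = (1:n-1)'*h;
    e = ones(n-1,1);
    A = spdiags([e -2*e e], -1:1, n-1, n-1)/h^2;   % discrete u''
    U = A\(-sin(x));                               % finite difference solution
    E(s) = norm(U - sin(x), inf);                  % infinity norm of the error
end
ratio = E(1:end-1)./E(2:end)        % tends to 2^p
p     = log(ratio)/log(2)           % estimated order of accuracy
```

If no exact solution is available, the same script can be reused with E(s) replaced by the difference between the solution on the current mesh and the one on the finest mesh, restricted to common grid points, as in (2.58).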

To do the grid refinement analysis for 1D problems, we can take n = 10, 20, 40, ..., 640, depending on the size of the problem and the computer speed; for 2D problems (10, 10), (20, 20), ..., (640, 640), or (16, 16), (32, 32), ..., (512, 512); and for 3D problems (8, 8, 8), (16, 16, 16), ..., (128, 128, 128), if the computer used has enough memory. To present the grid refinement analysis, we can tabulate the grid size n and the ratio or order, so that the order of convergence can be seen immediately. Another way is to plot the error versus the step size h in a log-log scale, with the same scale on both the horizontal and vertical axes; then the slope of the approximate line is the order of convergence.

2.10 * 1D IIM for Discontinuous Coefficients

In some applications, the coefficient of a differential equation can have a finite discontinuity. Examples include composite materials, two-phase flows such as ice and water, etc. Assume that we have a two-point BVP,

$$(p u')' - q u = f(x), \quad 0 < x < 1, \qquad u(0) = u_0, \quad u(1) = u_1.$$

Assume that the coefficient p(x) is a piecewise constant,

$$p(x) = \begin{cases} \beta^- & \text{if } 0 < x < \alpha, \\ \beta^+ & \text{if } \alpha < x < 1, \end{cases} \qquad (2.61)$$

where $0 < \alpha < 1$ is called an interface, and $\beta^-$ and $\beta^+$ are two positive but different constants. For simplicity, we assume that both q and f are continuous in the domain (0, 1). In a number of applications, u stands for the temperature, which should be continuous physically; this means [u] = 0 across the interface, where

$$[\,u\,] = \lim_{x \to \alpha^+} u(x) - \lim_{x \to \alpha^-} u(x) = u^+ - u^- \qquad (2.62)$$

denotes the jump of u(x) at the interface α. The quantity

$$[\,\beta u_x\,] = \lim_{x \to \alpha^+} \beta(x) u'(x) - \lim_{x \to \alpha^-} \beta(x) u'(x) = \beta^+ u_x^+ - \beta^- u_x^- \qquad (2.63)$$

is called the jump in the flux. If there is no source at the interface, then the flux should also be continuous, which leads to another jump condition $[\beta u_x] = 0$. The two jump conditions

$$[\,u\,] = 0, \qquad [\,\beta u_x\,] = 0 \qquad (2.64)$$

are called the natural jump conditions. Note that since β has a finite jump at α, so does $u_x$, unless $u_x^- = 0$ and $u_x^+ = 0$, which is unlikely.

Using a finite difference method, there are several commonly used approaches to deal with the discontinuity in the coefficients:

- Direct discretization if $x_{i\pm 1/2} \ne \alpha$ for all i, since $p_{i\pm 1/2}$ is then well defined. If the interface $\alpha = x_{j+1/2}$ for some j, then we can define the value of p(x) at $x_{j+1/2}$ as the average, that is, $p_{j+1/2} = (\beta^- + \beta^+)/2$.
- The smoothing method using

$$\beta_\epsilon(x) = \beta^-(x) + \bigl(\beta^+(x) - \beta^-(x)\bigr) H_\epsilon(x - \alpha), \qquad (2.65)$$

where $H_\epsilon$ is a smoothed Heaviside function,

$$H_\epsilon(x) = \begin{cases} 0, & \text{if } x < -\epsilon, \\[2pt] \dfrac{1}{2}\left(1 + \dfrac{x}{\epsilon} + \dfrac{1}{\pi}\sin\dfrac{\pi x}{\epsilon}\right), & \text{if } |x| \le \epsilon, \\[2pt] 1, & \text{if } x > \epsilon, \end{cases} \qquad (2.66)$$

and often ϵ is taken as h or Ch for some constant C in a finite difference discretization.
- Harmonic averaging of p(x), defined as

$$p_{i+1/2} = \left[\frac{1}{h}\int_{x_i}^{x_{i+1}} \frac{dx}{p(x)}\right]^{-1} \qquad (2.67)$$

(a short sketch of computing this average for the piecewise constant coefficient is given below).
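As an illustration, here is a small Matlab helper that evaluates the harmonic average (2.67) for the piecewise constant coefficient (2.61); since p is piecewise constant, the integral has a closed form. The function name and arguments are illustrative.

```matlab
% Harmonic average (2.67) of the piecewise constant coefficient (2.61)
% over the cell [xi, xip1]; alpha is the interface location, betam = beta^-,
% betap = beta^+.
function pa = harmonic_avg(xi, xip1, alpha, betam, betap)
  h = xip1 - xi;
  if alpha <= xi           % cell lies entirely in the beta^+ region
      pa = betap;
  elseif alpha >= xip1     % cell lies entirely in the beta^- region
      pa = betam;
  else                     % interface cuts the cell: closed form of the integral
      pa = h/((alpha - xi)/betam + (xip1 - alpha)/betap);
  end
end
```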

51 c 7/9/6 7:58 page 4 #35. * D IIM for Discontinuous Coefficients 4 For the natural jump conditions, the harmonic averaging method provides second-order accurate solution in the maximum norm due to error cancellations even though the finite difference discretization may not be consistent. The methods mentioned above are simple but work only with natural jump conditions. The first two methods are less accurate than the third one. The error analysis for these methods are not straightforward. The second and third approaches cannot be directly generalized to D or 3D problems with general interfaces. We now explain the Immersed Interface Method (IIM) for this problem which can be applied for more general jump conditions [βu x ] = c and even with discontinuous solutions ([u] ). We refer the reader to Li and Ito (6) for more details about the method. Assume we have a mesh x i, i =,,..., n. Then there is an integer j such that x j α < x j+. Except for grid points x j and x j+, other grid points are called regular since the standard three-point finite stencil does not contain the interface α. The standard finite difference scheme is still used at regular grid points. At irregular grid points x j and x j+, the finite difference approximations need to be modified to take the discontinuity in the coefficient into account. Note that when f (x) is continuous, we also have β + u + xx q + u + = β u xx q u. Since we assume that q + = q, and u + = u, we can express the limiting quantities from + side in terms of those from the side to get, u + = u, u + x = β β + u x + c β +, u+ xx = β β + u xx. (.68) The finite difference equations are determined from the method of undetermined coefficients: γ j, u j + γ j, u j + γ j,3 u j+ q j u j = f j + C j, γ j+, u j + γ j+, u j+ + γ j+,3 u j+ q j+ u j+ = f j+ + C j+. (.69) For the simple model problem, the coefficients of the finite difference scheme have the following closed form: γ j, = ( β [β](x j α)/h ) /D j, γ j+, = β /D j+, γ j, = ( β + [β](x j α)/h ) /D j, γ j+, = ( β + + [β](x j+ α)/h ) /D j+, γ j,3 = β + /D j, γ j+,3 = ( β + [β](x j+ α)/h ) /D j+,

52 c 7/9/6 7:58 page 4 #36 4 Finite Difference Methods for D Boundary Value Problems where D j = h + [β](x j α)(x j α)/β, D j+ = h [β](x j+ α)(x j+ α)/β +. It has been been shown in Huang and Li (999) and Li (994) that D j and D j+ if β β + >. The correction terms are: C j = γ j,3 (x j+ α) c β +, C j+ = γ j+, (α x j+ ) c β. (.7) Remark.9. If β + = β, i.e., the coefficient is continuous, then the coefficients of the finite difference scheme are the same as that from the standard central finite difference scheme as if there was no interface. The correction term is not zero if c is not zero corresponding to a singular source cδ(x α). On the other hand, if c =, then the correction terms are also zero. But the coefficients are changed due to the discontinuity in the coefficient of the ODE BVP... A Brief Derivation of the Finite Difference Scheme at an Irregular Grid Point We illustrate the idea of the IIM in determining the finite difference coefficients γ j,, γ j, and γ j,3 in (.69). We want to determine the coefficients so that the local truncation error is as small as possible in the magnitude. The main tool is the Taylor expansion in expanding u(x j ), u(x j ), and u(x j+ ) from each side of the interface α. After the expansions, then we can use the interface relations (.68) to express the quantities of u ±, u ± x, and u ± xx in terms of the quantities from one particular side. It is reasonable to assume that the u(x) has up to third-order derivatives in (, α) and (α, ) excluding the interface. Using the Tailor expansion for u(x j+ ) at α, we have u(x j+ ) = u + (α) + (x j+ α) u + x (α) + (x j+ α) u + xx(α) + O(h 3 ). Using the jump relation (.68), the expression above can be written as ( β u(x j+ ) = u (α) + (x j+ α) β + u x (α) + c ) β + + (x j+ α) β β + u xx(α) + O(h 3 ).

53 c 7/9/6 7:58 page 43 #37. * D IIM for Discontinuous Coefficients 43 The Taylor expansions of u(x j ) and u(x j ) at α from the left hand side have the following expression u(x l ) = u (α) + (x l α)u x (α) + (x l α) u xx(α) + O(h 3 ), l = j, j. Therefore we have the following γ j, u(x j ) + γ j, u(x j ) + γ j,3 u(x j+ ) = (γ j, + γ j, + γ j,3 )u (α) ) + ((x j α)γ j, + (x j α)γ j, + β β + (x j+ α)γ j,3 u x (α) + γ j,3 (x j+ α) c + β + ) ((x j α) γ j, + (x j α) γ j, + β β + (x j+ α) u xx(α) + O(max γ j,l h 3 ), l after the Taylor expansions and collecting terms for u (α), u x (α) and u xx(α). By matching the finite difference approximation with the differential equation at α from the side, 3 we get the system of equations for the coefficients γ j s below: γ j, + γ j, + γ j,3 = (α x j ) γ j, (α x j ) γ j, + β β + (x j+ α)γ j,3 = (α x j ) γ j, + (α x j) γ j, + β β + (x j+ α) γ j,3 = β. (.7) It is easy to verify that the γ j s in the left column in the previous page satisfy the system above. Once those γ j s have been computed, it is easy to set the correction term C j to match the remaining leading terms of the differential equation. 3 It is also possible to further expand at x = x j to match the differential equation at x = x j. The order of convergence will be the same.

54 c 7/9/6 7:58 page 44 #38 44 Finite Difference Methods for D Boundary Value Problems Exercises. When dealing with irregular boundaries or using adaptive grids, nonuniform grids are needed. Derive the finite difference coefficients for the following: (A) : u ( x) α u( x h ) + α u( x) + α 3 u( x + h ), (B) : u ( x) α u( x h ) + α u( x) + α 3 u( x + h ), (C) : u ( x) α u( x h ) + α u( x) + α 3 u( x + h ). Are they consistent? In other words, as h = max{h, h } approaches zero, does the error also approach zero? If so, what are the orders of accuracy? Do you see any potential problems with the schemes you have derived?. Consider the following finite difference scheme for solving the two-point BVP u (x) = f (x), a < x < b, u(a) = u a and u(b) = u b : U i U i + U i+ h = f (x i ), i =, 3,..., n, (.7) where x i = a + ih, i =,,..., n, h = (b a)/n. At i =, the finite difference scheme is U U + U 3 h = f (x ). (.73) (a) Find the local truncation errors of the finite difference scheme at x i, i =, 3,..., n, and x. Is this scheme consistent? (b) Does this scheme converge? Justify your answer. 3. Program the central finite difference method for the self-adjoint BVP (β(x)u ) γ(x)u(x) = f (x), < x <, u() = u a, au() + bu () = c, using a uniform grid and the central finite difference scheme β i+ (U i+ U i )/h β i (U i U i )/h γ(x i )U i = f (x i ). (.74) h Test your code for the case where β(x) = + x, γ(x) = x, a =, b = 3, (.75) and the other functions or parameters are determined from the exact solution u(x) = e x (x ). (.76) Plot the computed solution and the exact solution, and the error for a particular grid n = 8. Do the grid refinement analysis, to determine the order of accuracy of the global solution. Also try to answer the following questions: Can your code handle the case when a = or b =? If the central finite difference scheme is used for the equivalent differential equation what are the advantages or disadvantages? βu + β u γu = f, (.77)

55 c 7/9/6 7:58 page 45 #39 Exercises Consider the finite difference scheme for the D steady state convection diffusion equation (a) Verify the exact solution is ϵu u =, < x <, (.78) u() =, u() = 3. (.79) u(x) = + x + ( ) e x/ϵ. (.8) e /ϵ (b) Compare the following two finite difference methods for ϵ =.3,.,.5, and.5. () Central finite difference scheme: ϵ U i U i + U i+ h () Central-upwind finite difference scheme: ϵ U i U i + U i+ h U i+ U i h U i U i h =. (.8) =. (.8) Do the grid refinement analysis for each case to determine the order of accuracy. Plot the computed solution and the exact solution for h =., h = /5, and h =.. You can use the Matlab command subplot to put several graphs together. (c) From your observations, in your opinion which method is better? 5. (*) For the BVP u = f, < x <, (.83) u() =, u () = σ, (.84) show that the finite difference method using the central formula and the ghost point method at x = are stable and consistent. Find their convergence order and prove it. 6. For the set of points (x i, u i ), i =,,..., N, find the Lagrange interpolation polynomial from the formula p(x) = N l i (x) u i, l i (x) = i= N j=,j i x x j x i x j. (.85) By differentiating p(x) with respect to x, one can get different finite difference formulas for approximating different derivatives. Assuming a uniform mesh x i+ x i = x i x i = = x x = h, derive a central finite difference formula for u (4) and a one-sided finite difference formula for u (3) with N = Derive the finite difference method for u (x) q(x)u(x) = f (x), a < x < b, (.86) u(a) = u(b), periodic BC, (.87) using the central finite difference scheme and a uniform grid. Write down the system of equations A h U = F. How many unknowns are there (ignoring redundancies)? Is the coefficient matrix A h tridiagonal?

56 c 7/9/6 7:58 page 46 #4 46 Finite Difference Methods for D Boundary Value Problems Hint: Note that U = U n, and set unknowns as U, U,..., U n. If q(x) =, does the solution exist? Derive a compatibility condition for which the solution exists. If the solution exists, is it unique? How do we modify the finite difference method to make the solution unique? 8. Modify the Matlab code non_tp.m to apply the finite difference method and Newton s nonlinear solver to find a numerical solution to the nonlinear pendulum model d θ + K sin θ =, dt < θ < π, θ() = θ, θ(π) = θ, (.88) where K, θ and θ are parameters. Compare the solution with the linearized model d θ + Kθ =. dt

57 c3 7/9/6 4:8 page 47 # 3 Finite Difference Methods for D Elliptic PDEs There are many important applications of elliptic PDEs, see page 4 for the definition of elliptic PDEs. Below we give some examples of linear and nonlinear equations of elliptic PDEs. Laplace equations in D, u xx + u yy =. (3.) The solution u is sometimes called a potential function, because a conservative vector field v (i.e., such that v = ) is given by v = u (or alternatively v = u), where is the gradient operator that acts as a vector. In D, the gradient operator is = [ x, y ]T. If u is a scalar, the u = [ u x, u y ]T is the gradient vector of u, and if v is a vector, then v = div(v) is the divergence of the vector v. The scalar u = u xx + u yy is the Laplacian of u, which is denoted as u literally from its definition. It is also common to use the notation of u = u for the Laplacian of u. If the conservative vector field v is also divergence free, (i.e., div(v) = v =, then we have v = u = u =, that is, the potential function is the solution of a Laplace equation. Poisson equations in D, Generalized Helmholtz equations, u xx + u yy = f. (3.) u xx + u yy λ u = f. (3.3) Many incompressible flow solvers are based on solving one or several Poisson or Helmholtz equations, e.g., the projection method for solving incompressible Navier Stokes equations for flow problems (Chorin, 968; Li and Lai, 47

58 c3 7/9/6 4:8 page 48 # 48 Finite Difference Methods for D Elliptic PDEs ; Minion, 996) at low or modest Reynolds number, or the stream vorticity formulation method for large Reynolds number (Calhoun, ; Li and Wang, 3). In particular, there are some fast Poisson solvers available for regular domains, e.g., in Fishpack (Adams et al.). Helmholtz equations, u xx + u yy + λ u = f. (3.4) The Helmholtz equation arises in scattering problems, when λ is a wave number, and the corresponding problem may not have a solution if λ is an eigenvalue of the corresponding BVP. Furthermore, the problem is hard to solve numerically if λ is large. General self-adjoint elliptic PDEs, (a(x, y) u(x, y)) q(x, y)u = f (x, y) (3.5) or (au x ) x + (au y ) y q(x, y)u = f (x, y). (3.6) We assume that a(x, y) does not change sign in the solution domain, e.g., a(x, y) a >, where a is a constant, and q(x, y) to guarantee that the solution exists and it is unique. General elliptic PDEs (diffusion and advection equations), a(x, y)u xx + b(x, y)u xy + c(x, y)u yy + d(x, y)u x + e(x, y)u y + g(x, y)u(x, y) = f (x, y), (x, y) Ω, if b ac < for all (x, y) Ω. This equation can be rewritten as (a(x, y) u(x, y)) + w(x, y) u + c(x, y)u = f (x, y) (3.7) after a transformation, where w(x, y) is a vector. Diffusion and reaction equation, (a(x, y) u(x, y)) = f (u). (3.8) Here (a(x, y) u(x, y)) is called a diffusion term, the nonlinear term f (u) is called a reaction term, and if a(x, y) the PDE is a nonlinear Poisson equation. p-laplacian equation, ( ) u p u =, p, (3.9) where u = u x + u y in D.

59 c3 7/9/6 4:8 page 49 #3 3. Boundary and Compatibility Conditions 49 n Ω n Ω Figure 3.. A diagram of a D domain Ω, its boundary Ω, and its unit normal direction. Ω Minimal surface equation, ( ) u =. (3.) + u We note that an elliptic PDE (P( x, y )u = ) can be regarded as the steady state solution of a corresponding parabolic PDE (u t = P( x, y )u). Furthermore, if a linear PDE is defined on a rectangle domain then a finite difference approximation (in each dimension) can be used for both the equation and the boundary conditions, but the more difficult part is to solve the resulting linear system of algebraic equations efficiently. 3. Boundary and Compatibility Conditions Let us consider a D second-order elliptic PDE on a domain Ω, with boundary Ω whose unit normal direction is n according to the right side rule (cf. Figure 3.). Some common boundary conditions are as follows. Dirichlet boundary condition: the solution is known on the boundary, u(x, y) Ω = u (x, y). Neumann or flux boundary condition: the normal derivative is given along the boundary, u n n u = u n = u x n x + u y n y = g(x, y), where n = (n x, n y ) (n x + n y = ) is the unit normal direction.

Mixed boundary condition:

$$\left(\alpha(x, y)\,u(x, y) + \beta(x, y)\,\frac{\partial u}{\partial n}\right)\bigg|_{\partial\Omega} = \gamma(x, y)$$

is given along the boundary ∂Ω. In some cases, a boundary condition is periodic, e.g., for Ω = [a, b] × [c, d], u(a, y) = u(b, y) is periodic in the x-direction, and u(x, c) = u(x, d) is periodic in the y-direction.

There can be different boundary conditions on different parts of the boundary, e.g., for a channel flow in a domain (a, b) × (c, d), the flux boundary condition may apply at x = a, and a no-slip boundary condition u = 0 at the boundaries y = c and y = d. It is challenging to set up a correct boundary condition at x = b (outflow). One approximation to the outflow boundary condition is to set $u_x = 0$ there.

For a Poisson equation with a purely Neumann boundary condition, there is no solution unless a compatibility condition is satisfied. Consider the following problem:

$$\Delta u = f(x, y), \quad (x, y) \in \Omega, \qquad \frac{\partial u}{\partial n}\bigg|_{\partial\Omega} = g(x, y).$$

Integrating over the domain Ω and applying Green's theorem gives

$$\iint_\Omega \Delta u \, dx\,dy = \oint_{\partial\Omega} \frac{\partial u}{\partial n}\, ds,$$

so we have the compatibility condition

$$\oint_{\partial\Omega} g \, ds = \iint_\Omega f(x, y)\, dx\,dy \qquad (3.11)$$

for the solution to exist. If the compatibility condition is satisfied and ∂Ω is smooth, then the solution does exist, but it is not unique. Indeed, u(x, y) + C is a solution for an arbitrary constant C if u(x, y) is a solution, but we can specify the solution at a particular point (e.g., $u(x_0, y_0) = 0$) to render it well-defined.
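The compatibility condition can also be checked numerically. The following Matlab sketch does so on the unit square for the illustrative choice u = x^3 + y^2, for which f = Δu = 6x + 2 and g = ∂u/∂n on each side; both integrals evaluate to 5.

```matlab
% Numerical check of the compatibility condition (3.11) on (0,1)x(0,1)
% for u = x^3 + y^2, so that f = 6x + 2 and g = du/dn on each side.
f = @(x,y) 6*x + 2;
volume_int   = integral2(f, 0, 1, 0, 1);                  % integral of f over Omega
boundary_int = integral(@(y) 3*ones(size(y)), 0, 1) ...   % x = 1:  g =  u_x = 3
             + integral(@(y) 0*y,             0, 1) ...   % x = 0:  g = -u_x = 0
             + integral(@(x) 2*ones(size(x)), 0, 1) ...   % y = 1:  g =  u_y = 2
             + integral(@(x) 0*x,             0, 1);      % y = 0:  g = -u_y = 0
[volume_int, boundary_int]       % both equal 5, so (3.11) is satisfied
```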

61 c3 7/9/6 4:8 page 5 #5 3. The Central Finite Difference Method for Poisson Equations 5 3. The Central Finite Difference Method for Poisson Equations Let us now consider the following problem, involving a Poisson equation and a Dirichlet BC: u xx + u yy = f (x, y), (x, y) Ω = (a, b) (c, d), (3.) u(x, y) Ω = u (x, y). (3.3) If f C(Ω), then the solution u(x, y) C (Ω) exists and it is unique. Later on, we can relax the condition f C(Ω) if the finite element method is used in which we seek a weak solution. An analytic solution is often difficult to obtain, and a finite difference approximation can be obtained through the following procedure. Step : Generate a grid. For example, a uniform Cartesian grid can be generated with two given parameters m and n: x i = a + ih x, i =,,,..., m, h x = b a m, (3.4) y j = c + jh y, j =,,,..., n, h y = d c n. (3.5) In seeking an approximate solution U ij at the grid points (x i, y j ) where u(x, y) is unknown, there are (m )(n ) unknowns. Step : Approximate the partial derivatives at grid points with finite difference formulas involving the function values at nearby grid points. For example, if we adopt the three-point central finite difference formula for second-order partial derivatives in the x- and y-directions, respectively, then u(x i, y j ) u(x i, y j ) + u(x i+, y j ) (h x ) + u(x i, y j ) u(x i, y j ) + u(x i, y j+ ) (h y ) = f ij + T ij, i =,..., m, j =,..., n, (3.6) where f ij = f (x i, y j ). The local truncation error satisfies where T ij (h x) 4 u ( ) (h y ) 4 u ( ) xi x 4, y j + xi y 4, y j + O(h 4 ), (3.7) h = max{ h x, h y }. (3.8) We ignore the error term in (3.6) and replace the exact solution values u(x i, y j ) at the grid points with the approximate solution values U ij obtained

62 c3 7/9/6 4:8 page 5 #6 5 Finite Difference Methods for D Elliptic PDEs from solving the linear system of algebraic equations, i.e., U i, j + U i+,j (h x ) + U ( i, j + U i, j+ (h y ) (h x ) + ) (h y ) U ij = f ij, (3.9) i =,,..., m, j =,,..., n. The finite difference equation at a grid point (x i, y j ) involves five grid points in a five-point stencil, (x i, y j ), (x i+, y j ), (x i, y j ), (x i, y j+ ), and (x i, y j ). The grid points in the finite difference stencil are sometimes labeled east, north, west, south, and the center in the literature. The center (x i, y j ) is called the master grid point, where the finite difference equation is used to approximate the PDE. It is obvious that the finite difference discretization is second-order accurate and consistent since lim T ij =, and lim T =, (3.) h h where T is the local truncation error matrix formed by {T ij }. Solve the linear system of algebraic equations (3.9), to get the approximate values for the solution at all of the grid points. Error analysis, implementation, visualization, etc. 3.. The Matrix Vector Form of the FD Equations In solving the algebraic system of finite difference equations by a direct method such as Gaussian elimination or some sparse matrix technique, knowledge of the matrix structure is important, although less so for an iterative solver such as the Jacobi, Gauss Seidel, or SOR(ω) methods. In the matrix-vector form AU = F, the unknown U is a D array. From D Poisson equations the 3 unknowns U ij are a D array, but we can order it to get a D array. We also need to order the finite difference equations, and it is a common practice to use the same ordering for the equations as for the unknown array. There are two commonly used orderings, namely, the natural ordering, a natural choice for sequential computing, and red black ordering, considered to be a good choice for parallel computing The Natural Row Ordering In the natural row ordering, we order the unknowns and equations row by row. Thus the k-th finite difference equation corresponding to (i, j) has the following

63 c3 7/9/6 4:8 page 53 #7 3. The Central Finite Difference Method for Poisson Equations 53 (a) (b) Figure 3.. (a) The natural ordering and (b) the red black ordering. relation: k = i + (m )( j ), i =,,..., m, j =,,..., n (3.) (see the left diagram in Figure 3.). Referring to Figure 3., suppose that h x = h y = h, m = n = 4. Then there are nine equations and nine unknowns, so the coefficient matrix is 9 by 9. To write down the matrix-vector form, use a D array x to express the unknown U ij according to the ordering, we should have x = U, x = U, x 3 = U 3, x 4 = U, x 5 = U, x 6 = U 3, x 7 = U 3, x 8 = U 3, x 9 = U 33. (3.) Now if the algebraic equations are ordered in the same way as the unknowns, the nine equations from the standard central finite difference scheme using the five-point stencil are Eqn. : Eqn. : Eqn.3 : Eqn.4 : h ( 4x + x + x 4 ) = f u + u h h (x 4x + x 3 + x 5 ) = f u h h (x 4x 3 + x 6 ) = f 3 u 3 + u 4 h h (x 4x 4 + x 5 + x 7 ) = f u h

64 c3 7/9/6 4:8 page 54 #8 54 Finite Difference Methods for D Elliptic PDEs Eqn.5 : Eqn.6 : Eqn.7 : Eqn.8 : h (x + x 4 4x 5 + x 6 + x 8 ) = f h (x 3 + x 5 4x 6 + x 9 ) = f 3 u 4 h h (x 4 4x 7 + x 8 ) = f 3 u 3 + u 4 h h (x 5 + x 7 4x 8 + x 9 ) = f 3 u 4 h Eqn.9 : h (x 6 + x 8 4x 9 ) = f 33 u 34 + u 43 h. The corresponding coefficient matrix is block tridiagonal, A = h B I I B I I B where I is the 3 3 identity matrix and 4 B = 4. 4, (3.3) In general, for an n + by n + grid we obtain B I 4 A = I B I 4 h , B = I B 4 n n n n Since A is symmetric positive definite and weakly diagonally dominant, the coefficient matrix A is a nonsingular, and hence the solution of the system of the finite difference equations is unique. The matrix-vector form is useful to understand the structure of the linear system of algebraic equations, and as mentioned it is required when a direct method (such as Gaussian elimination or a sparse matrix technique) is used to solve the system. However, it can sometimes be more convenient to use a twoindex system, especially when an iterative method is preferred but also as more intuitive and to visualize the data. The eigenvalues and eigenvectors of A can.

65 c3 7/9/6 4:8 page 55 #9 3.3 The Maximum Principle and Error Analysis 55 also be indexed by two parameters p and k, corresponding to wave numbers in the x and y directions. The (p, k)-th eigenvector u p,k has n components, u p,k ij = sin(pπih) sin(kπjh), i, j =,,..., n (3.4) 6 for p, k =,,..., n; and the corresponding (p, k)-th eigenvalue is λ p,k = ) (cos(pπh) h ) + cos(kπh) ). (3.5) The least dominant (smallest magnitude) eigenvalue is λ, = π + O(h ), (3.6) obtained from the Taylor expansion of (3.5) in terms of h /n; and the dominant (largest magnitude) eigenvalue is λ int(n/),int(n/) 8 h. (3.7) It is noted that the dominant and least dominant eigenvalues are twice the magnitude of those in D representation, so we have the following estimates: A max λ p,k = 8 h, A = cond (A) = A A 4 π h = O(n ). min λ p,k π, (3.8) Note that the condition number is about the same order magnitude as that in the D case; and since it is large, double precision is recommended to reduce the effect of round-off errors. 3.3 The Maximum Principle and Error Analysis Consider an elliptic differential operator L = a + b x x y + c y, b ac <, for (x, y) Ω, and without loss of generality assume that a >, c >. The maximum principle is given in the following theorem. Theorem 3.. If u(x, y) C 3 (Ω) satisfies Lu(x, y) in a bounded domain Ω, then u(x, y) has its maximum on the boundary of the domain.

66 c3 7/9/6 4:8 page 56 # 56 Finite Difference Methods for D Elliptic PDEs Proof If the theorem is not true, then there is an interior point (x, y ) Ω such that u(x, y ) u(x, y) for all (x, y) Ω. The necessary condition for a local extremum (x, y ) is u x (x, y ) =, u y (x, y ) =. Now since (x, y ) is not on the boundary of the domain and u(x, y) is continuous, there is a neighborhood of (x, y ) within the domain Ω where we have the Taylor expansion, u(x + x, y + y) = u(x, y ) + ( ) ( x) u xx + x yu xy + ( y) u yy + O(( x) 3, ( y) 3 ), with superscript of indicating that the functions are evaluated at (x, y ), i.e., u xx = u (x x, y ) evaluated at (x, y ), and so on. Since u(x + x, y + y) u(x, y ) for all sufficiently small x and y, ( ) ( x) u xx + x yu xy + ( y) u yy. (3.9) On the other hand, from the given condition Lu = a u xx + b u xy + c u yy, (3.3) where a = a(x, y ) and so forth. In order to match the Taylor expansion to get a contradiction, we rewrite the inequality above as ( ) a a u b ( b xx + M M a M u xy + u a M) yy + u yy M (c (b ) a ), (3.3) where M > is a constant. The role of M is to make some choices of x and y that are small enough. Let us now set x = From (3.9), we know that a b, y = M a M. a M u xx + b M u xy + b a M u yy. (3.3)

67 c3 7/9/6 4:8 page 57 # 3.3 The Maximum Principle and Error Analysis 57 Now we take x =, y = ( c (b ) a ) /M ; and from (3.9) again, ( y) u yy = (c (b ) ) u yy. (3.33) M Thus from (3.3) and (3.33), the left-hand side of (3.3) should not be positive, which contradicts the condition a Lu = a u xx + b u xy + c u yy, and with this the proof is completed. On the other hand, if Lu then the minimum value of u is on the boundary of Ω. For general elliptic equations the maximum principle is as follows. Let Lu = au xx + bu xy + cu yy + d u x + d u y + eu =, (x, y) Ω, b ac <, a >, c >, e, where Ω is a bounded domain. Then from Theorem 3., u(x, y) cannot have a positive local maximum or a negative local minimum in the interior of Ω The Discrete Maximum Principle Theorem 3.. Consider a grid function U ij, i =,,..., m, j =,,,..., n. If the discrete Laplacian operator (using the central five-point stencil) satisfies h U ij = U i,j + U i+, j + U i, j + U i, j+ 4U ij h, i =,,..., m, j =,,..., n, (3.34) then U ij attains its maximum on the boundary. On the other hand, if h U ij then U ij attains its minimum on the boundary. Proof Assume that the theorem is not true, so U ij has its maximum at an interior grid point (i, j ). Then U i, j U i, j for all i and j, and therefore U i,j 4 ( ) Ui, j + U i +, j + U i, j + U i, j +. On the other hand, from the condition h U ij U i,j 4 ( ) Ui, j + U i +, j + U i, j + U i, j +,

68 c3 7/9/6 4:8 page 58 # 58 Finite Difference Methods for D Elliptic PDEs in contradiction to the inequality above unless all U ij at the four neighbors of (i, j ) have the same value U(i, j ). This implies that neighboring U i, j is also a maximum, and the same argument can be applied enough times until the boundary is reached. Then we would also know that U,j is a maximum. Indeed, if U ij has its maximum in interior it follows that U ij is a constant. Finally, if h U ij then we consider U ij to complete the proof Error Estimates of the Finite Difference Method for Poisson Equations With the discrete maximum principle, we can easily get the following lemma. Lemma 3.3. Let U ij be a grid function that satisfies h U ij = U i, j + U i+, j + U i,j + U i, j+ 4U ij h = f ij, (3.35) i, j =,,..., n with an homogeneous boundary condition. Then we have U = max i,j n U ij 8 max hu ij = i, j n 8 max f ij. (3.36) i, j n Proof Define a grid function ( ( w ij = x i ) ( + y j ) ), (3.37) 4 where x i = ih, y j = jh, i, j =,,..., n, h = n, corresponding to the continuous function w(x) = ( 4 (x /) + (y /) ). Then ( h w ij = (w xx + w yy ) + h 4 ) w (xi,y j ) x w y 4 =, (3.38) (x i,y j ) where (x i, y j ) is some point near (x i, y j ), and consequently h ( Uij f w ij ) = h U ij f = f ij f, h ( Uij + f w ij ) = h U ij + f = f ij + f. (3.39)

69 c3 7/9/6 4:8 page 59 #3 3.3 The Maximum Principle and Error Analysis 59 From the discrete maximum principle, U ij + f w ij has its maximum on the boundary, while U ij f w ij has its minimum on the boundary, i.e., ( ) min Uij f w ij Uij f w ij Ω ( ) and U ij + f w ij max Uij + f w ij, Ω for all i and j. Since U ij is zero on the boundary and f w ij, we immediately have the following, It is easy to check that and therefore f min w ij Ω U ij f w ij U ij, and U ij U ij + f w ij f max w ij Ω. which completes the proof. w ij Ω = 8, 8 f U ij 8 f, (3.4) Theorem 3.4. Let U ij be the solution of the finite difference equations using the standard central five-point stencil, obtained for a Poisson equation with a Dirichlet boundary condition. Assume that u(x, y) C 4 (Ω), then the global error E satisfies: E = U u = max U ij u(x i, y j ) ij h ( ) max u xxxx + max u yyyy, 96 where max u xxxx = max 4 u (x, y) (x,y) D x4, and so on. Proof We know that h U ij = f ij + T ij, h E ij = T ij, where T ij is the local truncation error at (x i, y j ) and satisfies T ij h ( ) max u xxxx + max u yyyy, so from lemma 3.3 E 8 T h ( ) max u xxxx + max u yyyy. 96 (3.4)
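The error estimate above can be verified numerically by a grid refinement study in two dimensions. The following Matlab sketch solves u_xx + u_yy = f on the unit square with the five-point scheme assembled via Kronecker products; the exact solution u = sin(πx) sin(πy) is an illustrative choice, and the printed ratio E/h^2 should level off at a constant, consistent with second-order accuracy.

```matlab
% Numerical illustration of the O(h^2) estimate for the five-point scheme:
%   u_xx + u_yy = f on (0,1)x(0,1), u = 0 on the boundary,
% with the manufactured solution u = sin(pi x) sin(pi y).
for n = [16 32 64 128]
    h = 1/n;  x = (1:n-1)*h;
    T = spdiags(ones(n-1,1)*[1 -2 1], -1:1, n-1, n-1)/h^2;
    A = kron(speye(n-1), T) + kron(T, speye(n-1));   % 2D discrete Laplacian
    [X, Y] = meshgrid(x);                            % interior grid points
    F = -2*pi^2*sin(pi*X).*sin(pi*Y);                % f for the manufactured u
    U = A\F(:);                                      % solve the linear system
    E = norm(U - reshape(sin(pi*X).*sin(pi*Y), [], 1), inf);
    fprintf('n = %4d   E = %.3e   E/h^2 = %.3f\n', n, E, E/h^2);
end
```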

3.4 Finite Difference Methods for General Second-order Elliptic PDEs

If the domain of interest is a rectangle [a, b] × [c, d] and there is no mixed derivative term $u_{xy}$ in the PDE, then the PDE can be discretized dimension by dimension. Consider the following example:

$$\nabla\cdot\bigl(p(x, y)\,\nabla u\bigr) - q(x, y)\,u = f(x, y), \quad \text{or} \quad (p u_x)_x + (p u_y)_y - q u = f,$$

with a Dirichlet boundary condition at x = b, y = c, and y = d, but a Neumann boundary condition $u_x = g(y)$ at x = a. For simplicity, let us adopt a uniform Cartesian grid again:

$$x_i = a + i h_x, \quad i = 0, 1, \ldots, m, \quad h_x = \frac{b - a}{m}, \qquad y_j = c + j h_y, \quad j = 0, 1, \ldots, n, \quad h_y = \frac{d - c}{n}.$$

If we discretize the PDE dimension by dimension, at a typical grid point $(x_i, y_j)$ the finite difference equation is

$$\frac{p_{i+\frac12,j}U_{i+1,j} - (p_{i+\frac12,j} + p_{i-\frac12,j})U_{ij} + p_{i-\frac12,j}U_{i-1,j}}{(h_x)^2} + \frac{p_{i,j+\frac12}U_{i,j+1} - (p_{i,j+\frac12} + p_{i,j-\frac12})U_{ij} + p_{i,j-\frac12}U_{i,j-1}}{(h_y)^2} - q_{ij}U_{ij} = f_{ij} \qquad (3.42)$$

for i = 1, 2, ..., m-1 and j = 1, 2, ..., n-1, where $p_{i\pm\frac12,j} = p(x_i \pm h_x/2,\, y_j)$ and so on.

For the indices i = 0, j = 1, 2, ..., n-1, we can use the ghost point method to deal with the Neumann boundary condition. Using the central finite difference scheme for the flux boundary condition,

$$\frac{U_{1,j} - U_{-1,j}}{2 h_x} = g(y_j), \quad \text{or} \quad U_{-1,j} = U_{1,j} - 2 h_x\, g(y_j), \quad j = 1, 2, \ldots, n-1,$$

on substituting into the finite difference equation at (0, j), we obtain

$$\frac{(p_{\frac12,j} + p_{-\frac12,j})U_{1,j} - (p_{\frac12,j} + p_{-\frac12,j})U_{0,j}}{(h_x)^2} + \frac{p_{0,j+\frac12}U_{0,j+1} - (p_{0,j+\frac12} + p_{0,j-\frac12})U_{0,j} + p_{0,j-\frac12}U_{0,j-1}}{(h_y)^2} - q_{0j}U_{0j} = f_{0j} + \frac{2\,p_{-\frac12,j}\,g(y_j)}{h_x}. \qquad (3.43)$$
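To see the ghost point idea in isolation, here is a minimal Matlab sketch for a 1D analogue of (3.43): the self-adjoint problem (p u')' - q u = f on (0, 1) with u'(0) = g and u(1) = u_b. The coefficient p = 1 + x, q = 1, and the manufactured solution u = sin x are illustrative choices; the assembly is done row by row for clarity rather than efficiency.

```matlab
% 1D analogue of the ghost point treatment in (3.43):
%   (p u')' - q u = f on (0,1),  u'(0) = g (Neumann),  u(1) = ub (Dirichlet),
% with p = 1 + x, q = 1, and the manufactured solution u = sin(x).
n = 40;  h = 1/n;  x = (0:n)'*h;
p = @(x) 1 + x;  q = 1;
f = @(x) cos(x) - (1 + x).*sin(x) - sin(x);   % f = (p u')' - q u for u = sin x
g = 1;  ub = sin(1);                          % u'(0) = 1,  u(1) = sin(1)
m = n;                                        % unknowns U_0, ..., U_{n-1}
A = sparse(m, m);  F = f(x(1:m));
for i = 2:m                                   % rows for the interior grid points
    pm = p(x(i) - h/2);  pp = p(x(i) + h/2);
    A(i, i-1) = pm/h^2;   A(i, i) = -(pm + pp)/h^2 - q;
    if i < m,  A(i, i+1) = pp/h^2;  else,  F(i) = F(i) - pp*ub/h^2;  end
end
pm = p(x(1) - h/2);  pp = p(x(1) + h/2);      % boundary row at x = 0 (ghost point)
A(1, 1) = -(pm + pp)/h^2 - q;   A(1, 2) = (pm + pp)/h^2;
F(1) = F(1) + 2*pm*g/h;                       % the flux g moves to the right-hand side
U = A\F;
err = norm(U - sin(x(1:m)), inf)              % second-order accurate
```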

71 c3 7/9/6 4:8 page 6 #5 3.5 Solving the Resulting Linear System of Algebraic Equations 6 For a general second-order elliptic PDE with no mixed derivative term u xy, i.e., (p(x, y) u) + w u q(x, y)u = f (x, y), the central finite difference scheme when w /h can be used, but an upwinding scheme may be preferred to deal with the advection term w u A Finite Difference Formula for Approximating the Mixed Derivative u xy If there is a mixed derivative term u xy, we cannot proceed dimension by dimension but a centered finite difference scheme (.) for u xy can be used, i.e., 3 u xy (x i, y j ) u(x i, y j ) + u(x i+, y j+ ) u(x i+, y j ) (x 4 i, y j+ ). 4h x h y (3.44) 5 6 From a Taylor 7 expansion at (x, y), 8 this finite difference formula can be shown to be consistent and the discretization is second-order accurate, and the consequent central finite difference formula for a second-order linear PDE involves nine grid points. The resulting linear system of algebraic equations for PDE is more difficult to solve, because it is no longer symmetric nor diagonally dominant. Furthermore, there is no known upwinding scheme to deal with the PDE with mixed derivatives. 3.5 Solving the Resulting Linear System of Algebraic Equations The linear systems of algebraic equations resulting from finite difference discretizations for D or higher-dimensional problems are often very large, e.g., the linear system from an n n grid for an elliptic PDE has O(n ) equations, so the coefficient matrix is O(n n ). Even for n =, a modest number, the O( 4 4 ) matrix cannot be stored in most modern computers if the desirable double precision is used. However, the matrix from a self-adjoint elliptic PDE is sparse since the nonzero entries are about O(5n ), so an iterative method or sparse matrix technique may be used. For an elliptic PDE defined on a rectangle domain or a disk, frequently used methods are listed below. Fast Poisson solvers such as the fast Fourier transform (FFT) or cyclic reduction (Adams et al.). Usually the implementation is not so easy, and the use of existing software packages is recommended, e.g., Fishpack, written in Fortran and free on the Netlib.

- Multigrid solvers, either structured multigrid, e.g., MGD9V (De Zeeuw, 1990), which uses a nine-point stencil, or AMGs (algebraic multigrid solvers).
- Sparse matrix techniques.
- Simple iterative methods such as Jacobi, Gauss-Seidel, and SOR(ω). They are easy to implement, but often slow to converge.
- Other iterative methods such as the conjugate gradient (CG) or preconditioned conjugate gradient (PCG) methods, the generalized minimum residual (GMRES) method, and the biconjugate gradient (BICG) method for nonsymmetric systems of equations. We refer the reader to Saad (1986) for more information and references.

An important advantage of an iterative method is that zero entries play no role in the matrix-vector multiplications involved, and there is no need to manipulate the matrix and vector forms, as the algebraic equations in the system are used directly. Assume that we are given a linear system of equations Ax = b, where A is nonsingular (det(A) ≠ 0). If A can be written as A = M - N, where M is an invertible matrix, then (M - N)x = b, or Mx = Nx + b, or $x = M^{-1}Nx + M^{-1}b$. We may iterate, starting from an initial guess $x^0$, via

$$x^{k+1} = M^{-1} N x^k + M^{-1} b, \quad k = 0, 1, 2, \ldots, \qquad (3.45)$$

and the iteration converges or diverges depending on the spectral radius $\rho(M^{-1}N) = \max_i |\lambda_i(M^{-1}N)|$. Incidentally, if $T = M^{-1}N$ is a constant matrix, the iterative method is called stationary.

3.5.1 The Jacobi Iterative Method

The idea of the Jacobi iteration is to solve for the variables on the diagonal and then form the iteration. Solving for $x_1$ from the first equation in the algebraic system, $x_2$ from the second, and so forth, we have

$$\begin{aligned} x_1 &= \frac{1}{a_{11}}\bigl(b_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n\bigr), \\ x_2 &= \frac{1}{a_{22}}\bigl(b_2 - a_{21}x_1 - a_{23}x_3 - \cdots - a_{2n}x_n\bigr), \\ &\;\;\vdots \\ x_i &= \frac{1}{a_{ii}}\bigl(b_i - a_{i1}x_1 - a_{i2}x_2 - \cdots - a_{i,i-1}x_{i-1} - a_{i,i+1}x_{i+1} - \cdots - a_{in}x_n\bigr), \\ &\;\;\vdots \\ x_n &= \frac{1}{a_{nn}}\bigl(b_n - a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n,n-1}x_{n-1}\bigr). \end{aligned}$$

Given some initial guess $x^0$, the corresponding Jacobi iterative method is

$$\begin{aligned} x_1^{k+1} &= \frac{1}{a_{11}}\bigl(b_1 - a_{12}x_2^k - a_{13}x_3^k - \cdots - a_{1n}x_n^k\bigr), \\ x_2^{k+1} &= \frac{1}{a_{22}}\bigl(b_2 - a_{21}x_1^k - a_{23}x_3^k - \cdots - a_{2n}x_n^k\bigr), \\ &\;\;\vdots \\ x_i^{k+1} &= \frac{1}{a_{ii}}\bigl(b_i - a_{i1}x_1^k - a_{i2}x_2^k - \cdots - a_{i,i-1}x_{i-1}^k - a_{i,i+1}x_{i+1}^k - \cdots - a_{in}x_n^k\bigr), \\ &\;\;\vdots \\ x_n^{k+1} &= \frac{1}{a_{nn}}\bigl(b_n - a_{n1}x_1^k - a_{n2}x_2^k - \cdots - a_{n,n-1}x_{n-1}^k\bigr). \end{aligned}$$

It can be written compactly as

$$x_i^{k+1} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j=1,\,j\ne i}^{n} a_{ij}\,x_j^k\Bigr), \quad i = 1, 2, \ldots, n, \qquad (3.46)$$

which is the basis for easy programming. Thus for the finite difference equations

$$\frac{U_{i-1} - 2U_i + U_{i+1}}{h^2} = f_i$$

with the Dirichlet boundary conditions $U_0 = u_a$ and $U_n = u_b$, we have

$$\begin{aligned} U_1^{k+1} &= \frac{u_a + U_2^k}{2} - \frac{h^2 f_1}{2}, \\ U_i^{k+1} &= \frac{U_{i-1}^k + U_{i+1}^k}{2} - \frac{h^2 f_i}{2}, \quad i = 2, 3, \ldots, n-2, \\ U_{n-1}^{k+1} &= \frac{U_{n-2}^k + u_b}{2} - \frac{h^2 f_{n-1}}{2}; \end{aligned}$$

and for a 2D Poisson equation,

$$U_{ij}^{k+1} = \frac{U_{i-1,j}^k + U_{i+1,j}^k + U_{i,j-1}^k + U_{i,j+1}^k}{4} - \frac{h^2 f_{ij}}{4}, \quad i, j = 1, 2, \ldots, n-1.$$
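The 2D update above is easy to vectorize. Here is a minimal Matlab sketch for the unit square with homogeneous Dirichlet boundary conditions; the manufactured solution sin(πx) sin(πy), the mesh size, and the stopping test are illustrative choices.

```matlab
% Jacobi sweeps for u_xx + u_yy = f on (0,1)x(0,1) with u = 0 on the
% boundary; the exact solution sin(pi x) sin(pi y) is used only to check
% the final answer.
n = 32;  h = 1/n;  [X, Y] = meshgrid((0:n)*h);
F = -2*pi^2*sin(pi*X).*sin(pi*Y);
U = zeros(n+1);                                   % initial guess; boundary stays 0
for k = 1:5000
    Uold = U;
    U(2:n,2:n) = ( Uold(1:n-1,2:n) + Uold(3:n+1,2:n) ...
                 + Uold(2:n,1:n-1) + Uold(2:n,3:n+1) )/4 - h^2*F(2:n,2:n)/4;
    if max(max(abs(U - Uold))) < 1e-8, break, end % stop when the update is tiny
end
err = max(max(abs(U - sin(pi*X).*sin(pi*Y))))     % discretization plus iteration error
```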

3.5.2 The Gauss-Seidel Iterative Method

The idea of the Gauss-Seidel iteration is to solve for the variables on the diagonal, then form the iteration, and use the most updated information. In the Jacobi iterative method, all components of $x^{k+1}$ are updated based on $x^k$, whereas in the Gauss-Seidel iterative method the most updated information is used as follows:

$$\begin{aligned} x_1^{k+1} &= \frac{1}{a_{11}}\bigl(b_1 - a_{12}x_2^k - a_{13}x_3^k - \cdots - a_{1n}x_n^k\bigr), \\ x_2^{k+1} &= \frac{1}{a_{22}}\bigl(b_2 - a_{21}x_1^{k+1} - a_{23}x_3^k - \cdots - a_{2n}x_n^k\bigr), \\ &\;\;\vdots \\ x_i^{k+1} &= \frac{1}{a_{ii}}\bigl(b_i - a_{i1}x_1^{k+1} - \cdots - a_{i,i-1}x_{i-1}^{k+1} - a_{i,i+1}x_{i+1}^k - \cdots - a_{in}x_n^k\bigr), \\ &\;\;\vdots \\ x_n^{k+1} &= \frac{1}{a_{nn}}\bigl(b_n - a_{n1}x_1^{k+1} - a_{n2}x_2^{k+1} - \cdots - a_{n,n-1}x_{n-1}^{k+1}\bigr), \end{aligned}$$

or in a compact form,

$$x_i^{k+1} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j=1}^{i-1} a_{ij}\,x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij}\,x_j^k\Bigr), \quad i = 1, 2, \ldots, n. \qquad (3.47)$$

Below is a pseudo-code in which the Gauss-Seidel iterative method is used to solve the finite difference equations for the Poisson equation, assuming u is an initial guess; note that this framework is generally suitable for iterative methods.
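A minimal Matlab-style rendering of such a Gauss-Seidel sweep for the five-point scheme is the following; the problem data, loop structure, and stopping test are illustrative choices, not necessarily those of the listing in the book. The relaxation line already anticipates the SOR(ω) method of the next subsection.

```matlab
% Gauss-Seidel (omega = 1) or SOR(omega) sweeps for u_xx + u_yy = f on
% (0,1)x(0,1) with zero Dirichlet boundary conditions.
n = 32;  h = 1/n;  [X, Y] = meshgrid((0:n)*h);
f = -2*pi^2*sin(pi*X).*sin(pi*Y);     % so the exact solution is sin(pi x)sin(pi y)
u = zeros(n+1);                       % initial guess; boundary rows/columns stay 0
omega = 1;                            % omega = 1: Gauss-Seidel;  1 < omega < 2: SOR
tol = 1e-6;  err = 1;
while err > tol
    err = 0;
    for i = 2:n
        for j = 2:n
            gs   = ( u(i-1,j) + u(i+1,j) + u(i,j-1) + u(i,j+1) - h^2*f(i,j) )/4;
            unew = (1 - omega)*u(i,j) + omega*gs;   % the only line that changes for SOR
            err  = max(err, abs(unew - u(i,j)));
            u(i,j) = unew;            % the most recent values are used immediately
        end
    end
end
```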

3.5.3 The Successive Overrelaxation Method SOR(ω)

The idea of the successive overrelaxation SOR(ω) iteration is based on an extrapolation technique. Suppose $x^{k+1}_{GS}$ denotes the update from $x^k$ in the Gauss-Seidel method. Intuitively, one may anticipate that the update

$$x^{k+1} = (1 - \omega)\,x^k + \omega\, x^{k+1}_{GS}, \qquad (3.48)$$

a linear combination of $x^k$ and $x^{k+1}_{GS}$, may give a better approximation for a suitable choice of the relaxation parameter ω. If the parameter 0 < ω < 1, the combination above is an interpolation, and if ω > 1 it is an extrapolation or overrelaxation. For ω = 1, we recover the Gauss-Seidel method. For elliptic problems, we usually choose 1 ≤ ω < 2.

In component form, the SOR(ω) method can be represented as

$$x_i^{k+1} = (1 - \omega)\,x_i^k + \frac{\omega}{a_{ii}}\Bigl(b_i - \sum_{j=1}^{i-1} a_{ij}\,x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij}\,x_j^k\Bigr), \qquad (3.49)$$

for i = 1, 2, ..., n. Therefore, only one line in the pseudo-code of the Gauss-Seidel method above needs to be changed, namely the update of the unknown, which becomes the weighted average in (3.49).

The convergence of the SOR(ω) method depends on the choice of ω. For the linear system of algebraic equations obtained from the standard five-point stencil applied to a Poisson equation with $h = h_x = h_y = 1/n$, it can be shown that the optimal ω is

$$\omega_{opt} = \frac{2}{1 + \sin(\pi/n)} \approx \frac{2}{1 + \pi/n}, \qquad (3.50)$$

which approaches two as n approaches infinity. Although the optimal ω is unknown for general elliptic PDEs, we can use the optimal ω for the Poisson equation as a trial value, and in fact larger rather than smaller ω values are recommended. If ω is so large that the iterative method diverges, this is soon evident because the solution will blow up. Incidentally, the optimal choice of ω is independent of the right-hand side.

3.5.4 Convergence of Stationary Iterative Methods

For a stationary iterative method, the following theorem provides a necessary and sufficient condition for convergence.

Theorem 3.5. Given a stationary iteration

$$x^{k+1} = T x^k + c, \qquad (3.51)$$

76 c3 7/9/6 4:8 page 66 # 66 Finite Difference Methods for D Elliptic PDEs where T is a constant matrix and c is a constant vector, the vector sequence {x k } converges for arbitrary x if and only if ρ(t) < where ρ(t) is the spectral radius of T defined as i.e., the largest magnitude of all the eigenvalues of T. ρ(t) = max λ i (T), (3.5) Another commonly used sufficient condition to check the convergence of a stationary iterative method is given in the following theorem. Theorem 3.6. If there is a matrix norm such that T <, then the stationary iterative method converges for arbitrary initial guess x. We often check whether T p < for p =,,, and if there is just one norm such that T <, then the iterative method is convergent. However, if T there is no conclusion about the convergence. Let us now briefly discuss the convergence of the Jacobi, Gauss Seidel, and SOR(ω) methods. Given a linear system Ax = b, let D denote the diagonal matrix formed from the diagonal elements of A, L the lower triangular part of A, and U the upper triangular part of A. The iteration matrices for the three basic iteration methods are thus Jacobi method: T = D (L + U), c = D b. Gauss Seidel method: T = (D L) U, c = (D L) b. SOR(ω) method: T = (I ωd L) ( ( ω)i + ωd U ), c = ω(i ω L) D b. Theorem 3.7. If A is strictly row diagonally dominant, i.e., a ii > n j=, j n a ij, (3.53) then both the Jacobi and Gauss Seidel iterative methods converge. The conclusion is also true when (): A is weakly row diagonally dominant a ii n j=, j n a ij ; (3.54) (): the inequality holds for at least one row; (3) A is irreducible. We refer the reader to Golub and Van Loan (989) for the definition of irreducibility. From this theorem, both the Jacobi and Gauss Seidel iterative

77 c3 7/9/6 4:8 page 67 # 3.6 A Fourth-Order Compact FD Scheme for Poisson Equations 67 methods converge when they are applied to the linear system of algebraic equations obtained from the standard central finite difference method for Poisson equations. In general, Jacobi and Gauss Seidel methods need O(n ) number of iterations for solving the resulting linear system of finite difference equations for a Poisson equations, while it is O(n) for the SOR(ω) method with the optimal choice of ω. 3.6 A Fourth-Order Compact FD Scheme for Poisson Equations A compact fourth-order accurate scheme ( u U Ch 4 ) can be applied to Poisson equations, using a nine-point discrete Laplacian. An advantage of a higher-order method is that fewer grid points are used for the same order accuracy as a lower-order method; therefore a smaller resulting system of algebraic equations needs to be solved. A disadvantage is that the resulting system of algebraic equations is denser. Although other methods may be used, let us follow a symbolic derivation from the second-order central scheme for u xx. Recalling that, (cf. (.8) on page ) δxxu = u x + h 4 u x 4 + O(h4 ) = ( + h ) x x u + O(h4 ), (3.55) and substituting the operator relation into (3.55), we obtain δxxu = ( + h = from which we further have x = It is notable that x = δ xx + O(h ) ( ) ) δxx + O(h ) ) ( + h δ xx x u + O(h4 ), ( + h δ xx x u + O(h4 ) ) ) δxxu + ( + h δ xx O(h 4 ). ) ( + h δ xx = h δ xx + O(h 4 ),

78 c3 7/9/6 4:8 page 68 # 68 Finite Difference Methods for D Elliptic PDEs if h is sufficiently small. Thus we have the symbolic relation ) ( x = + h δ xx δxx + O(h 4 ), or x = ( h δ xx ) δ xx + O(h 4 ). On a Cartesian grid and invoking this fourth-order operator, the Poisson equation u = f can be approximated by ( ) ( ) + h x δ xx δxxu + + h y δ yy δyyu = f (x, y) + O(h 4 ), where h = max(h x, h y ). On multiplying this by ( ) ( ) + h x δ xx + h y δ yy and using the commutativity ( + ( x) δ xx we get: ( + h y δ yy ) ) ( + ( y) δ yy ) ) ) = ( + ( y) δ yy ( + ( x) δ xx, ( ) ( ) ( ) δxxu + + h x δ xx δyyu = + h x δ xx + h y δ yy f (x, y) + O(h 4 ) = ( + h x δ xx + h y δ yy ) f (x, y) + O(h 4 ). Expanding this expression above and ignoring the high-order terms, we obtain the nine-point scheme ( ) ( ) + h y δ yy δxxu ij + + h y Ui, j U ij + U i+, j δ yy (h x ) = U i, j U ij + U i+,j ( (h x ) + (h y ) U i, j, U i, j + U i, j+ U i, j + 4U ij U i, j+ + U i+,j U i+, j + U i+, j+ ). For the special case when h x = h y = h, the finite difference coefficients and linear combination of f are given in Figure 3.3. The above discretization is called

79 c3 7/9/6 4:8 page 69 #3 3.7 A Finite Difference Method for Poisson Equations in Polar Coordinates 69 u(x,y) f(x,y) 4 6h Figure 3.3. The coefficients of the finite difference scheme using the compact nine-point stencil (a) and the linear combination of f (b). the nine-point discrete Laplacian, and it is a fourth-order compact scheme because the distance is the least between the grid points in the finite difference stencil and the master grid point (of all fourth-order finite difference schemes). The advantages and disadvantages of nine-point finite difference schemes for Poisson equations include: It is fourth-order accurate and it is still compact. The coefficient matrix is still block tridiagonal. Less grid orientation effects compared with the standard five-point finite difference scheme. It seems that there are no FFT-based fast Poisson solvers for the fourth-order compact finite difference scheme. Incidentally, if we apply ) ( x = h δ xx δxxu + O(h 4 ) to the Poisson equation directly, we obtain another nine-point finite difference scheme, which is not compact, and has stronger grid orientation effects. 3.7 A Finite Difference Method for Poisson Equations in Polar Coordinates If the domain of interest is a circle, an annulus, or a fan, etc. (cf. Figure 3.4 for some illustrations), it is natural to use plane polar coordinates x = r cos θ, y = r sin θ. (3.56)

80 c3 7/9/6 4:8 page 7 #4 7 Finite Difference Methods for D Elliptic PDEs BC BC r θ BC r r r BC BC Periodic Periodic BC BC NoBC π θ BC π θ BC θ Figure 3.4. Diagrams of domains and boundary conditions that may be better solved in polar coordinates. The Poisson equation in the polar coordinates is or r r ( r u ) r + u r = f (r, θ), θ u r + u r r + u r = f (r, θ), θ conservative form nonconservative form. (3.57) For < R r R and θ l θ θ r, where the origin is not in the domain of interest, using a uniform grid in the polar coordinates r i = R + i r, i =,,..., m, r = R R m, 3 θ j = θ l + j θ, j =,,..., N, θ = θ r θ l 4 5 N, the discretized finite difference equation (in conservative form) is r i r i U i, j (r i + r i+ ( r) )U ij + r i+ U i+, j 6 + r i U i, j U ij + U i, j+ ( θ) = f (r i, θ j ), (3.58) where again U ij is an approximation to the solution u(r i, θ j ).

81 c3 7/9/6 4:8 page 7 #5 3.7 A Finite Difference Method for Poisson Equations in Polar Coordinates Treating the Polar Singularity If the origin is within the domain and θ < π, we have a periodic boundary condition in the θ direction (i.e., u(r, θ) = u(r, θ + π)), but in the radial (r) direction the origin R = needs special attention. The PDE is singular at r =, which is called a pole singularity. There are different ways of dealing with a singularity at the origin, but some methods lead to an undesirable structure in the coefficient matrix from the finite difference equations. One approach discussed is to use a staggered grid: ( r i = i ) r, r = R m, i =,,..., m, (3.59) where r = r/ and r m = R. Except for i = (i.e., at i =,..., m ), the conservative form of the finite difference discretization can be used. At i =, the following nonconservative form is used to deal with the pole singularity at r = : U j U j + U j ( r) + r U j U j r + r U, j U j + U, j+ ( θ) = f (r, θ j ). Note that r = r/ and r = r/. The coefficient of U j in the above finite difference equation, the approximation at the ghost point r, is zero! The above finite difference equation simplifies to U j + U j ( r) + U j r r + U, j U j + U, j+ r ( θ) = f (r, θ j ), and we still have a diagonally dominant system of linear algebraic equations Using the FFT to Solve Poisson Equations in Polar Coordinates When the solution u(r, θ) is periodic in θ, we can approximate u(r, θ) by the truncated Fourier series u(r, θ) = N/ n= N/ u n (r)e inθ, (3.6) where i = and u n (r) is the complex Fourier coefficient given by u n (r) = N u(r, θ)e inkθ. (3.6) N k=

82 c3 7/9/6 4:8 page 7 #6 7 Finite Difference Methods for D Elliptic PDEs Substituting (3.6) into the Poisson equation (in polar coordinates) (3.57) gives ( ) u n n r r r r r u n = f n (r), n = N/,..., N/, (3.6) where f n (r) is the n-th coefficient of the Fourier series of f (r, θ) defined in (3.6). For each n, we can discretize the above ODE in the r direction using the staggered grid, to get a tridiagonal system of equations that can be solved easily. With a Dirichlet boundary condition u(r max, θ) = u BC (θ) at r = r max, the Fourier transform u BC n (r max ) = N u BC (θ)e inkθ (3.63) N k= can be invoked to find u BC n (r max ), the corresponding boundary condition for the ODE. After the Fourier coefficient u n is obtained, the inverse Fourier transform (3.6) can be applied to get an approximate solution to the problem. 3.8 Programming of D Finite Difference Methods After discretization of an elliptic PDE, there are a variety of approaches that can be used to solve the system of the finite difference equations. Below we list some of them according to our knowledge. Sparse matrix techniques. In Matlab, one can form the coefficient matrix, A and the right-hand side F, then get the finite difference solution using U = A\F. Note that, the solution will be expressed as a D array. For the visualization purpose, it is desirable to convert between the D vector and the D array. Fast Poisson solvers. For Poisson equations defined on rectangular domains and linear boundary conditions (Dirichlet, Neumann, Robin) on four sides of the domain, one can apply a fast Poisson solver, for example, the Fishpack (Adams et al.). The Fishpack includes a variety of solvers for Cartesian, polar, or cylindrical coordinates. Iterative solvers. One can apply stationary iterative methods such as Jacobi, Gauss Seidel, SOR(ω) for the linear system of equations obtained for general elliptic PDEs. The programming is easy and requires least storage. However, the convergence speed is often slow and the number of iterations is O((mn) ) = O(N) 3 4 (N = mn) assuming that m and n are the number of grid lines in each coordinate direction. One can also apply more advanced iterative methods such as the CG method, PCG method if the coefficient

3.8.1 A Matlab Code for Poisson Equations using A\F

The accompanying Matlab code, poisson_matlab.m, solves the Poisson equation on a rectangular domain [a, b] × [c, d] with a Neumann boundary condition on x = a and Dirichlet boundary conditions on the other three sides. The mesh parameters are m and n, and the total number of unknowns is M = (n - 1)m. The conversion between the 1D solution U(k) and the 2D array using the natural row ordering is k = i + (j - 1)m. The files include poisson_matlab.m (the main code), f.m (the function f(x, y)), ux.m (the Neumann boundary condition $u_x(a, y)$ at x = a), and ue.m (the true solution, for testing purposes). In Figure 3.5, we show a mesh plot of the solution of the finite difference method and its error plot.

Figure 3.5. (a) The mesh plot of the computed finite difference solution and (b) the error plot. Note that the errors are zero along the Dirichlet boundaries, while they are not zero along the Neumann boundary at x = a.
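The index conversion mentioned above can be checked with a few lines of Matlab; the sizes m and n below are illustrative, and the vector U merely stands in for a computed solution.

```matlab
% Conversion between the 1D solution U(k), stored in the natural row
% ordering k = i + (j-1)*m, and the 2D array U2(i,j).
m = 5;  n = 4;  M = (n-1)*m;
U  = (1:M)';                      % stands in for the solution U = A\F
U2 = reshape(U, m, n-1);          % U2(i,j) = U(i + (j-1)*m)
i = 3;  j = 2;
[U2(i,j), U(i + (j-1)*m)]         % the two indexings agree
```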


85 c3 7/9/6 4:8 page 75 #9 Exercises A Matlab Code for Poisson Equations using the SOR(ω) Iteration The accompanying Matlab code, poisson_jacobi.m, and poisson_sor.m, provide interactive Jacobi and SOR(ω) iterative methods for the Poisson equation on a square domain [a b] 34 [c d] 5 with (b a) = (d c) and Dirichelt boundary conditions on all sides. When ω =, the SOR(ω) method becomes the Gauss- Seidel iteration. The other Matlab functions involved are fcn.m, the source term f (x, y), and uexact.m, the solution for the testing purpose. We test an example with true solution u(x, y) = e x sin(πy), the source term then is f (x, y) = e x 6 sin(πy)( π ). We take m = n = 4, and the domain is 7 [ ] 89 [ ]. Thus h = /4 and h = So we take the tolerance as tol = 5. The Jacobi iteration takes 5 iterations, the Gauss Seidel takes 69 iterations, the SOR(ω) with the optimal ω = /( + sin π/n)) =.8545 takes 95 iterations, SOR(.8) takes 58 iterations, and SOR(.9) takes 33 iterations. Usually we would rather take ω larger than smaller in the range ω <. Exercises. State the maximum principle for D elliptic problems. Use the maximum principle to show that the three-point central finite difference scheme for u (x) = f (x) with a Dirichlet boundary condition is second-order accurate in the maximum norm.. Write down the coefficient matrix of the finite difference method using the standard central five-point stencil with both the Red Black and the Natural row ordering for the Poisson equation defined on the rectangle [a, b] [c, d]. Take m = n = 3 and assume a Dirichlet

86 c3 7/9/6 4:8 page 76 #3 76 Finite Difference Methods for D Elliptic PDEs boundary condition at x = a, y = c and y = d, and a Neumann boundary condition u n = g(y) at x = b. Use the ghost point method to deal with the Neumann boundary condition. 3. Implement and compare the Gauss Seidel method and the SOR method (trying to find the best ω by testing), for the elliptic equation subject to the boundary conditions u xx + p(x, y)u yy + r(x, y)u(x, y) = f (x, y) a < x < b, c < y < d, u(a, y) =, u(x, c) =, u(x, d) =, Test and debug your code for the case: < x, y <, and u (b, y) = π sin(πy). x p(x, y) = ( + x + y ), r(x, y) = xy. The source term f (x, y) is determined from the exact solution u(x, y) = sin(πx) sin(πy). Do the grid refinement analysis for n = 6, n = 3, and n = 64 (if possible) in the infinity norm (Hint: In Matlab, use max(max(abs(e)))). Take the tolerance as 5. Does the method behave like a second-order method? Compare the number of iterations, and test the optimal relaxation factor ω. Plot the solution and the error for n = 3. Having ensured your code is working correctly, introduce a point source f (x, y) = δ(x.5)δ(y.5) and u x = at x =, with p(x, y) = and r(x, y) =. 3 The 4 u(x, y) can be interpreted as the steady state temperature distribution in a room with an insulated wall on three sides, a constant heat flow on one side, and a point source such as a heater in the room. Note that the heat source can be expressed as f (n/, n/) = /h, and f (i, j) = for other grid points. Use the mesh and contour plots to visualize the solution for n = 36 (mesh(x, y, u), contour(x, y, u, 3)). 4. (a) Show the eigenvalues and eigenvectors for the Laplace equation are u + λu =, < x, y <, u =, on the boundaries, λ k,l = π ( l + k ), l, k =,,...,, u k,l (x, y) = sin(kπx) sin(lπy). (b) Show that the standard central finite difference scheme using the five-point stencil is stable for the Poisson equation. Hint: The eigenvectors for A h U h = F (grid functions) are u k,l,i, j = sin(ilπ/n) sin( jkπ/n), i, j, l, k =,,..., N. The -norm of A h is / min{ λ i (A h ) }. 5. Modify your SOR code for Problem 4, so that either the mixed boundary condition used at x = b or the periodic boundary condition is used at x = a and x = b.

6. Modify the Matlab code poisson_matlab.m for Poisson equations to solve the general self-adjoint elliptic PDE

∇·(p∇u) − qu = f(x, y),  a < x < b,  c < y < d.

Validate your code with the analytic solutions u(x, y) = x + y and u(x, y) = cos x sin y, with p(x, y) = 1 and p(x, y) = e^{x+y}, and q(x, y) = 0 and q(x, y) = x + y. Analyze your numerical results and plot the absolute and relative errors.

7. Search the Internet to find a fast Poisson solver or multigrid method, and test it.
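As a starting point for Exercise 3, and for experimenting with the dependence on ω discussed above, here is one way to organize an SOR(ω) sweep for the five-point discretization of u_xx + u_yy = f with Dirichlet boundary conditions. It is a minimal illustrative sketch; the function and variable names are not taken from the accompanying poisson_sor.m.

function [u, iter] = sor_poisson(f, u, h, omega, tol, maxit)
% Minimal SOR(omega) sketch for u_xx + u_yy = f on a square grid with the
% Dirichlet boundary values already stored in the boundary entries of u.
% f and u are (m+1) x (n+1) arrays of grid values; h is the grid spacing.
% Illustrative only; the accompanying poisson_sor.m may differ in detail.
[mp1, np1] = size(u);
for iter = 1:maxit
    du = 0;                                    % largest update in this sweep
    for j = 2:np1-1
        for i = 2:mp1-1
            unew = ( u(i-1,j) + u(i+1,j) + u(i,j-1) + u(i,j+1) ...
                     - h^2*f(i,j) ) / 4;       % Gauss-Seidel value
            unew = (1-omega)*u(i,j) + omega*unew;   % SOR relaxation
            du   = max(du, abs(unew - u(i,j)));
            u(i,j) = unew;
        end
    end
    if du < tol, break; end                    % stop when the sweep changes little
end
end

With ω = 1 this sweep reduces to Gauss–Seidel; for the model problem the iteration count is smallest near ω = 2/(1 + sin(π/n)).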

4
FD Methods for Parabolic PDEs

A linear PDE of the form

u_t = Lu,  (4.1)

where t usually denotes the time and L is a linear elliptic differential operator in one or more spatial variables, is called parabolic. Furthermore, the second-order canonical form

a(x, t) u_tt + b(x, t) u_xt + c(x, t) u_xx + lower-order terms = f(x, t)

is parabolic if b² − 4ac ≡ 0 in the entire x and t domain. Note that we can transform this second-order PDE into a system of two PDEs by setting v = u_t, in which the t-derivative is first order. Some important parabolic PDEs are as follows.

1D heat equation with a source: u_t = u_xx + f(x, t). The dimension refers to the space variable (the x direction).

General heat equation:

u_t = ∇·(β∇u) + f(x, t),  (4.2)

where β is the diffusion coefficient and f(x, t) is the source (or sink) term.

Diffusion–advection equation: u_t = ∇·(β∇u) + w·∇u + f(x, t), where ∇·(β∇u) is the diffusion term and w·∇u is the advection term.

Canonical form of a diffusion–reaction equation: u_t = ∇·(β∇u) + f(x, t, u). The nonlinear source term f(x, t, u) is a reaction term.

The steady-state solutions (when u_t = 0) are the solutions of the corresponding elliptic PDEs, i.e., ∇·(β∇u) + f(x, u) = 0 for the last case, assuming that lim_{t→∞} f(x, t, u) = f(x, u) exists.

Initial and Boundary Conditions

In time-dependent problems, there is an initial condition, usually specified at t = 0, i.e., u(x, 0) = u₀(x) for the PDEs above, in addition to relevant boundary conditions. If the initial condition is given at t = T₀ instead, it can of course be rendered at t = 0 by the translation t̄ = t − T₀. Thus for the 1D heat equation u_t = u_xx on a < x < b, for example, we expect to have an initial condition at t = 0 in addition to boundary conditions at x = a and x = b, say. Note that the boundary conditions at t = 0 may or may not be consistent with the initial condition; e.g., if a Dirichlet boundary condition is prescribed at x = a and x = b such that u(a, t) = g₁(t) and u(b, t) = g₂(t), then for consistency u₀(a) = g₁(0) and u₀(b) = g₂(0).

Dynamical Stability

The fundamental solution u(x, t) = e^{−x²/(4t)}/√(4πt) of the 1D heat equation u_t = u_xx is uniformly bounded. However, for the backward heat equation u_t = −u_xx, if u(x, 0) ≢ 0 then lim_{t→∞} ‖u(·, t)‖ = ∞. A solution is said to be dynamically unstable if it is not uniformly bounded, i.e., if there is no constant C > 0 such that ‖u(x, t)‖ ≤ C. Some applications are dynamically unstable and blow up, but we do not discuss how to solve such dynamically unstable problems in this book, i.e., we only consider the numerical solution of dynamically stable problems.

Some Commonly Used FD Methods

We discuss the following finite difference methods for parabolic PDEs in this chapter: the forward and backward Euler methods; the Crank–Nicolson and θ-methods; the method of lines (MOL), provided a good ODE solver can be applied; and the alternating direction implicit (ADI) method, for high-dimensional problems.

Figure 4.1. Diagram of the finite difference stencils for the forward and backward Euler methods, and the MOL.

Finite difference methods applicable to elliptic PDEs can be used to treat the spatial discretization and the boundary conditions, so let us focus on the time discretization and the initial condition(s). To consider the stability of the resulting numerical methods, we invoke a Fourier transformation and von Neumann stability analysis.

4.1 The Euler Methods

For the following problem involving the heat equation with a source term,

u_t = βu_xx + f(x, t),  a < x < b,  t > 0,
u(a, t) = g₁(t),  u(b, t) = g₂(t),  u(x, 0) = u₀(x),

let us seek a numerical solution for u(x, t) at a particular time T > 0 or at certain times in the interval 0 < t < T. As the first step, we generate a grid

x_i = a + ih,  i = 0, 1, ..., m,  h = (b − a)/m,
t^k = kΔt,  k = 0, 1, ..., n,  Δt = T/n.

It turns out that we cannot use an arbitrary Δt (even if it is small) for explicit methods, because of numerical instability concerns. The second step is to approximate the derivatives with finite difference approximations. Since we already know how to discretize the spatial derivatives, let us focus on possible finite difference formulas for the time derivative. In Figure 4.1, we sketch the stencils of several finite difference methods.

4.1.1 Forward Euler Method (FW-CT)

At a grid point (x_i, t^k), k > 0, using the forward finite difference approximation for u_t and the central finite difference approximation for u_xx, we have

(u(x_i, t^k + Δt) − u(x_i, t^k))/Δt = β (u(x_{i−1}, t^k) − 2u(x_i, t^k) + u(x_{i+1}, t^k))/h² + f(x_i, t^k) + T(x_i, t^k).

The local truncation error is

T(x_i, t^k) = −(βh²/12) u_xxxx(x_i, t^k) + (Δt/2) u_tt(x_i, t^k) + ⋯,

where the dots denote higher-order terms, so the discretization is O(h² + Δt): first order in time and second order in space. The corresponding finite difference equation is

(U_i^{k+1} − U_i^k)/Δt = β (U_{i−1}^k − 2U_i^k + U_{i+1}^k)/h² + f_i^k,  (4.3)

where f_i^k = f(x_i, t^k), with U_i^k again denoting the approximate value of the true solution u(x_i, t^k). When k = 0, U_i^0 is the initial condition at the grid point (x_i, 0); and from the values U_i^k at time level k, the solution of the finite difference equation at the next time level k + 1 is

U_i^{k+1} = U_i^k + Δt ( β (U_{i−1}^k − 2U_i^k + U_{i+1}^k)/h² + f_i^k ),  i = 1, 2, ..., m − 1.  (4.4)

The solution of the finite difference equations is thereby obtained directly from the approximate solution at previous time steps, and we do not need to solve a system of algebraic equations, so the method is called explicit. Indeed, we successively compute the solution at t¹ from the initial condition at t⁰, and then at t² using the approximate solution at t¹. Such an approach is often called a time marching method.

Remark 4.1. The local truncation error of the FW-CT finite difference scheme under our definition is

T(x, t) = (u(x, t + Δt) − u(x, t))/Δt − β (u(x − h, t) − 2u(x, t) + u(x + h, t))/h² − f(x, t) = O(h² + Δt).

In passing, we note an alternative definition of the truncation error in the literature,

T̃(x, t) = u(x, t + Δt) − u(x, t) − Δt ( β (u(x − h, t) − 2u(x, t) + u(x + h, t))/h² + f(x, t) ) = O(Δt (h² + Δt)),

which introduces an additional factor Δt, so it is one order higher in Δt.

Remark 4.2. If f(x, t) ≡ 0 and β is a constant, then from u_t = βu_xx and u_tt = β ∂u_xx/∂t = β ∂²u_t/∂x² = β² u_xxxx, the local truncation error is

T(x, t) = (βh²/12) (6βΔt/h² − 1) u_xxxx + O((Δt)² + h⁴).  (4.5)

Thus if β is constant we can choose Δt = h²/(6β) to get O(h⁴ + (Δt)²) = O(h⁴), i.e., the local truncation error is fourth-order accurate without further computational complexity, which is significant for an explicit method.

It is easy to implement the forward Euler method compared with other methods. The Matlab file FW_Euler_heat.m, which accompanies the book, implements this scheme and also plots the history of the solution; a minimal sketch of such a time-marching loop is listed below.

On testing the forward Euler method with different Δt and checking the error for a problem with a known exact solution, we find that the method works well when 0 < Δt ≤ h²/(2β) but blows up when Δt > h²/(2β).
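The following is a minimal sketch of such a forward Euler time-marching loop, written for the test problem used below (β = 1, exact solution u = cos t sin(πx)). It is illustrative only and is not copied from FW_Euler_heat.m.

% Minimal forward Euler (FW-CT) sketch for u_t = beta*u_xx + f(x,t) on (0,1)
% with homogeneous Dirichlet BCs; test problem u = cos(t)*sin(pi*x).
% Illustrative only; FW_Euler_heat.m that accompanies the book may differ.
beta = 1; a = 0; b = 1; T = 0.5;
m  = 40; h = (b - a)/m; x = (a:h:b)';
f  = @(x,t) -sin(t)*sin(pi*x) + pi^2*cos(t)*sin(pi*x);  % source for this test
dt = h^2/(2*beta);               % largest stable explicit step (CFL bound)
n  = ceil(T/dt); dt = T/n;       % integer number of steps
u  = sin(pi*x);                  % initial condition u(x,0) = cos(0)*sin(pi*x)
for k = 0:n-1
    t = k*dt;
    u(2:m) = u(2:m) + dt*( beta*(u(1:m-1) - 2*u(2:m) + u(3:m+1))/h^2 ...
                           + f(x(2:m), t) );
    % the boundary values u(1) and u(m+1) remain zero for this test problem
end
err = max(abs(u - cos(T)*sin(pi*x)))   % compare with the exact solution

Rerunning the loop with dt = h^2 instead of dt = h^2/(2*beta) reproduces the blow-up discussed next.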

Figure 4.2. (a) Plot of the computed solutions using two different time step sizes and the exact solution at some time for the test problem. (b) Error plots of the computed solutions using the two different step sizes: one is stable and the error is small; the other is unstable and the error grows rapidly.

Since the method is consistent, we anticipate that this is a question of numerical stability. Intuitively, to prevent the errors in U_i^k from being amplified, one can require

0 < 2βΔt/h² ≤ 1,  or  0 < Δt ≤ h²/(2β).  (4.6)

This is a time step constraint, often called the CFL (Courant–Friedrichs–Lewy) stability condition, which can be verified numerically. In Figure 4.2, we plot the computed solutions and errors for a test problem with β = 1 and

f(x, t) = −sin t sin(πx) + π² cos t sin(πx).

The true solution is u(x, t) = cos t sin(πx). We take time marching steps using two different time steps: one is Δt = h²/2 (stable), and the other is Δt = h² (unstable). The left plot shows the true and computed solutions at the final time. The gray lines are the history of the solution computed using Δt = h², and the asterisks mark the computed solution at the grid points for the final step; we see that the solution begins to grow and oscillate. The black line is the true solution at the final time, with the little circles marking the finite difference solution at the grid points, which is first-order accurate. The right plot shows the errors: the black curve (the error is small) is for the computed solution using Δt = h²/2, while the gray curve is for the computed solution using Δt = h², whose error grows rapidly and begins to oscillate. If the final time T gets larger, so does the error, a phenomenon we call blow-up due to the instability of the algorithm. The stability and the CFL time step constraint are very important for explicit or semi-explicit numerical algorithms.

4.1.2 The Backward Euler Method (BW-CT)

If the backward finite difference formula is used for u_t and the central finite difference approximation for u_xx at (x_i, t^k), we get

(U_i^k − U_i^{k−1})/Δt = β (U_{i−1}^k − 2U_i^k + U_{i+1}^k)/h² + f_i^k,  k = 1, 2, ...,

which is conventionally reexpressed as

(U_i^{k+1} − U_i^k)/Δt = β (U_{i−1}^{k+1} − 2U_i^{k+1} + U_{i+1}^{k+1})/h² + f_i^{k+1},  k = 0, 1, ....  (4.7)

The backward Euler method is also consistent, and the discretization error is again O(Δt + h²). With the backward Euler method we cannot obtain U_i^{k+1} with a few simple algebraic operations, because all of the U_i^{k+1} are coupled together. Thus we need to solve the following tridiagonal system of equations in order to get the approximate solution at time level k + 1:

(1 + 2µ) U_1^{k+1} − µ U_2^{k+1} = U_1^k + Δt f_1^{k+1} + µ g_1^{k+1},
−µ U_{i−1}^{k+1} + (1 + 2µ) U_i^{k+1} − µ U_{i+1}^{k+1} = U_i^k + Δt f_i^{k+1},  i = 2, ..., m − 2,
−µ U_{m−2}^{k+1} + (1 + 2µ) U_{m−1}^{k+1} = U_{m−1}^k + Δt f_{m−1}^{k+1} + µ g_2^{k+1},  (4.8)

where µ = βΔt/h² and f_i^{k+1} = f(x_i, t^{k+1}). Note that we can use f(x_i, t^k) instead of f(x_i, t^{k+1}), since the method is first-order accurate in time. Such a numerical method is called implicit, because the unknowns at time level k + 1 are coupled together. The advantage of the backward Euler method is that it is stable for any choice of Δt. For 1D problems, the computational cost is only slightly more than that of the explicit Euler method if we use an efficient tridiagonal solver, such as the Crout factorization method at a cost of O(5n) operations (cf. Burden and Faires, for example).
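A minimal sketch of how one backward Euler step (4.7)–(4.8) can be coded is given below, assembling the tridiagonal matrix with spdiags and solving with Matlab's backslash. The function and variable names are illustrative, and a production code would typically use a dedicated tridiagonal (Crout/Thomas) solver.

function u = backward_euler_step(u, x, t, dt, h, beta, f, g1, g2)
% Advance u_t = beta*u_xx + f(x,t) by one backward Euler step (4.7)-(4.8).
% u holds the interior values U_1,...,U_{m-1} at time t; x holds the interior
% nodes; g1, g2 are the Dirichlet boundary functions at x = a and x = b.
% Illustrative sketch only; not taken from the book's codes.
m1 = numel(u);                        % number of interior unknowns, m-1
mu = beta*dt/h^2;
e  = ones(m1,1);
A  = spdiags([-mu*e, (1+2*mu)*e, -mu*e], -1:1, m1, m1);   % matrix in (4.8)
b  = u + dt*f(x, t+dt);               % U^k + dt*f^{k+1}
b(1)   = b(1)   + mu*g1(t+dt);        % boundary contribution at x = a
b(end) = b(end) + mu*g2(t+dt);        % boundary contribution at x = b
u = A\b;                              % tridiagonal solve
end

Calling this function n times with dt = T/n advances the solution to time T; unlike the forward Euler method, any dt > 0 may be used.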

4.2 The Method of Lines

With a good solver for ODEs or systems of ODEs, we can use the MOL to solve parabolic PDEs. In Matlab, we can use the ODE Suite to solve a system of ODEs. The ODE Suite contains Matlab functions such as ode23, ode23s, ode15s, ode45, and others. The Matlab function ode23 uses a combination of Runge–Kutta methods of orders 2 and 3 with an adaptive time step size. The Matlab function ode23s is designed for stiff systems of ODEs.

Consider a general parabolic equation of the form

u_t(x, t) = Lu(x, t) + f(x, t),

where L is an elliptic operator. Let L_h be a corresponding finite difference operator acting on a grid x_i = a + ih. We can form a semidiscrete system of ODEs

dU_i(t)/dt = L_h U_i(t) + f_i(t),

where U_i(t) ≈ u(x_i, t) is the spatial discretization of u(x, t) along the line x = x_i, i.e., we only discretize the spatial variable. For example, the heat equation with a source, u_t = βu_xx + f, where L = β ∂²/∂x² is represented by L_h = β δ²_xx, produces the discretized system of ODEs

dU_1(t)/dt = β (−2U_1(t) + U_2(t))/h² + β g_1(t)/h² + f(x_1, t),
dU_i(t)/dt = β (U_{i−1}(t) − 2U_i(t) + U_{i+1}(t))/h² + f(x_i, t),  i = 2, 3, ..., m − 2,
dU_{m−1}(t)/dt = β (U_{m−2}(t) − 2U_{m−1}(t))/h² + β g_2(t)/h² + f(x_{m−1}, t),  (4.9)

and the initial condition is

U_i(0) = u(x_i, 0),  i = 1, 2, ..., m − 1.  (4.10)

The ODE system can be written in the vector form

dy/dt = f(y, t),  y(0) = y₀.  (4.11)

The MOL is especially useful for nonlinear PDEs of the form u_t = f(∂/∂x, u, t). For linear problems, we typically have dy/dt = Ay + c, where A is a matrix and c is a vector; both A and c may depend on t. There are many efficient solvers for systems of ODEs. Most are based on high-order Runge–Kutta methods with adaptive time steps, e.g., the ODE suite in Matlab, or dsode.f available through Netlib. However, it is important to recognise that the ODE system obtained from the MOL is typically stiff, i.e., the eigenvalues of A have very different scales; for example, for the heat equation the magnitudes of the eigenvalues range from O(1) to O(1/h²).

In Matlab, we call an ODE solver by passing it the function that defines the semidiscrete system, the time interval, and the initial condition; the approximate solution ysol at the final time t = t_final is then extracted from the last row of the returned solution array. To define the ODE system of the MOL, we can create a Matlab file, say yfun_mol.m, that evaluates the right-hand side of (4.9), where g1(t) and g2(t) are two Matlab functions for the boundary conditions at x = a and x = b, and f(t, x) is the source term. The initial condition is defined by evaluating a Matlab function u0(x) of the initial condition at the interior grid points. A minimal sketch of these pieces is given below.
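As an illustration, the following self-contained sketch sets up the semidiscrete system (4.9) for a simple test problem and solves it with ode15s. The names yfun_mol, g1, g2, and the test problem itself are placeholders; the organization of the book's own MOL code may differ.

% Minimal MOL sketch for u_t = beta*u_xx + f(x,t) on (0,1) with homogeneous
% Dirichlet BCs, solved with the stiff solver ode15s.  Illustrative only.
beta = 1; m = 40; h = 1/m; x = (h:h:1-h)';      % interior grid points
f  = @(t,x) -sin(t)*sin(pi*x) + pi^2*cos(t)*sin(pi*x);   % test source term
g1 = @(t) 0;  g2 = @(t) 0;                      % boundary conditions
y0 = sin(pi*x);                                 % initial condition u(x,0)
yfun_mol = @(t,y) beta*([y(2:end); g2(t)] - 2*y + [g1(t); y(1:end-1)])/h^2 ...
                  + f(t,x);                     % right-hand side of (4.9)
[t, y] = ode15s(yfun_mol, [0, 0.5], y0);        % integrate the stiff system
ysol = y(end, :)';                              % solution at t_final = 0.5
err  = max(abs(ysol - cos(0.5)*sin(pi*x)))      % compare with exact solution

Because the semidiscrete system is stiff, a stiff solver such as ode15s (or ode23s) is usually much more efficient here than ode45.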

4.3 The Crank–Nicolson Scheme

The time step constraint Δt ≤ h²/(2β) for the explicit Euler method is generally considered to be a severe restriction; the number of time steps needed to reach a fixed final time T grows like 2βT/h², which becomes very large as the spatial grid is refined. The backward Euler method does not have the time step constraint, but it is only first-order accurate in time, so if we want second-order accuracy O(h²) we need to take Δt = O(h²). One finite difference scheme that is second-order accurate in both space and time, without compromising stability and computational complexity, is the Crank–Nicolson scheme.

The Crank–Nicolson scheme is based on the following lemma, which can be proved easily using the Taylor expansion.

Lemma 4.3. Let ϕ(t) be a function with continuous first- and second-order derivatives, i.e., ϕ(t) ∈ C². Then

ϕ(t) = ½ ( ϕ(t − Δt/2) + ϕ(t + Δt/2) ) − ((Δt)²/8) ϕ″(t) + h.o.t.  (4.12)

Intuitively, the Crank–Nicolson scheme approximates the PDE u_t = (βu_x)_x + f(x, t) at (x_i, t^k + Δt/2) by averaging the time levels t^k and t^{k+1} of the spatial derivative (βu_x)_x and of f(x, t). Thus it has the form

(U_i^{k+1} − U_i^k)/Δt = ( β_{i−1/2}^k U_{i−1}^k − (β_{i−1/2}^k + β_{i+1/2}^k) U_i^k + β_{i+1/2}^k U_{i+1}^k )/(2h²)
  + ( β_{i−1/2}^{k+1} U_{i−1}^{k+1} − (β_{i−1/2}^{k+1} + β_{i+1/2}^{k+1}) U_i^{k+1} + β_{i+1/2}^{k+1} U_{i+1}^{k+1} )/(2h²)
  + ½ ( f_i^k + f_i^{k+1} ).  (4.13)

The discretization is second order in time (central at t + Δt/2 with step size Δt/2) and second order in space. This can easily be seen using the following

relations, taking β = 1 for simplicity:

(u(x, t + Δt) − u(x, t))/Δt = u_t(x, t + Δt/2) + ((Δt)²/24) u_ttt(x, t + Δt/2) + O((Δt)⁴),

(u(x − h, t) − 2u(x, t) + u(x + h, t))/h² = u_xx(x, t) + O(h²),

(u(x − h, t + Δt) − 2u(x, t + Δt) + u(x + h, t + Δt))/h² = u_xx(x, t + Δt) + O(h²),

½ ( u_xx(x, t) + u_xx(x, t + Δt) ) = u_xx(x, t + Δt/2) + O((Δt)²),

½ ( f(x, t) + f(x, t + Δt) ) = f(x, t + Δt/2) + O((Δt)²).

At each time step, we need to solve a tridiagonal system of equations to get U_i^{k+1}. The computational cost is only slightly more than that of the explicit Euler method in one space dimension, and we can take Δt ∼ h and still have second-order accuracy. Although the Crank–Nicolson scheme is an implicit method, it is much more efficient than the explicit Euler method, since it is second-order accurate in both time and space with essentially the same computational complexity. A sample Matlab code crank.m accompanies the book. If we use a fixed time step Δt = h, then given a final time T we can take the number of time marching steps as N_T = int(T/h), as is done in crank.m. In the next section, we will prove that the scheme is unconditionally stable for the heat equation.

4.3.1 A Class of One-Step FD Methods: The θ-Method

The θ-method for the heat equation u_t = u_xx + f(x, t) has the form

(U_i^{k+1} − U_i^k)/Δt = θ δ²_xx U_i^k + (1 − θ) δ²_xx U_i^{k+1} + θ f_i^k + (1 − θ) f_i^{k+1}.

When θ = 1, the method is the explicit Euler method; when θ = 0, it is the backward Euler method; and when θ = 1/2, it is the Crank–Nicolson scheme. If 0 ≤ θ ≤ 1/2, the method is unconditionally stable; otherwise it is conditionally stable, i.e., there is a time step constraint. The θ-method is generally first order in time and second order in space, except for θ = 1/2. The accompanying Matlab codes for this chapter include the Euler, Crank–Nicolson, ADI, and MOL methods.
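For a constant β, the Crank–Nicolson update can be organized as one tridiagonal solve per step. The following self-contained sketch does this for the same test problem as before (β = 1, exact solution u = cos t sin(πx)); it is illustrative only and is not the accompanying crank.m.

% Minimal Crank-Nicolson sketch for u_t = beta*u_xx + f(x,t) on (0,1) with
% homogeneous Dirichlet BCs and constant beta.  Illustrative only.
beta = 1; m = 40; h = 1/m; x = (h:h:1-h)';
f  = @(x,t) -sin(t)*sin(pi*x) + pi^2*cos(t)*sin(pi*x);   % test source term
u  = sin(pi*x);                          % initial condition u(x,0)
T  = 0.5; dt = h; n = round(T/dt);       % dt ~ h is enough for 2nd order
mu = beta*dt/(2*h^2);
e  = ones(m-1,1);
D2 = spdiags([e, -2*e, e], -1:1, m-1, m-1);   % second-difference matrix
A  = speye(m-1) - mu*D2;                 % implicit (level k+1) side
B  = speye(m-1) + mu*D2;                 % explicit (level k) side
for k = 0:n-1
    t = k*dt;
    rhs = B*u + dt/2*( f(x,t) + f(x,t+dt) );  % boundary terms vanish here
    u = A\rhs;                                % tridiagonal solve
end
err = max(abs(u - cos(T)*sin(pi*x)))          % compare with exact solution

With Δt = h, halving h should reduce err by roughly a factor of four, consistent with the O((Δt)² + h²) accuracy.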

99 c4 7/9/6 4:33 page 89 # 4.4 Stability Analysis for Time-dependent Problems Stability Analysis for Time-dependent Problems A standard approach to stability analysis of finite difference methods for timedependent problems is named after John von Neumann and based on the discrete Fourier transform (FT) Review of the Fourier Transform Let us first consider the Fourier transform in continuous space. Consider u(x) L (, ), i.e., as u dx < or u <. The Fourier transform is defined û(ω) = π e iωx u(x)dx (4.4) where i =, mapping u(x) in the space domain into û(ω) in the frequency domain. Note that if a function is defined in the domain (, ) instead of (, ), we can use the Laplace transform. The inverse Fourier transform is u(x) = π e iωx û(ω)dω. (4.5) Parseval s relation: Under the Fourier transform, we have û = u or û dω = u dx. (4.6) From the definition of the Fourier transform we have ( ) û u = ixu, = iωû. (4.7) ω x To show this we invoke the inverse Fourier transform u(x) x = π iωx u e x dω so that, since u(x) and û(ω) are both in L (, ), on taking the partial derivative of the inverse Fourier transform with respect to x we have u(x) x = π ( e iωx û ) dω = iωûe iωx dω. x π Then as the Fourier transform and its inverse are unique, u/ x = iωû. The proof of the first equality is left as an exercise. It is easy to generalize the

100 c4 7/9/6 4:33 page 9 #3 9 FD Methods for Parabolic PDEs equality, to get m u x m = (iω)m û (4.8) i.e., we remove the derivatives of one variable. The Fourier transform is a powerful tool to solve PDEs, as illustrated below. Example 4.4. Consider u t + au x =, < x <, t >, u(x, ) = u (x) which is called an advection equation, or a one-way wave equation. This is a Cauchy problem since the spatial variable is defined in the entire space and t. On applying the FT to the equation and the initial condition, û t + âu x =, or û t + aiωû =, û(ω, ) = û (ω) i.e., we get an ODE for û(ω) whose solution is û(ω, t) = û(ω, ) e iaωt = û (ω) e iaωt. The solution to the original advection equation is thus u(x, t) = π = π = u(x at, ), e iωx û (ω) e iaωt dω e iω(x at) û (ω) dω on taking the inverse Fourier transform. It is notable that the solution for the advection equation does not change shape, but simply propagates along the characteristic line x at =, and that Example 4.5. Consider u = û = û(ω, )e iaωt = û(ω, ) = u. u t = βu xx, < x <, t >, u(x, ) = u (x), lim x u =, involving the heat (or diffusion) equation. On again applying the Fourier transform to the PDE and the initial condition, û t = βu xx, or û t = β(iω) û = βω û, û(ω, ) = û (ω), and the solution of this ODE is û(ω, t) = û(ω, ) e βωt.

101 c4 7/9/6 4:33 page 9 #4 4.4 Stability Analysis for Time-dependent Problems 9 Consequently, if β >, from the Parseval s relation, we have u = û = û(ω, )e βωt u. Actually, it can be seenthat lim t u = and the second-order partial derivative term is call a diffusion or dissipative. If β <, then lim t u =, the PDE is dynamically unstable. Example 4.6. Dispersive waves. Consider u t = m+ u x m+ + m u + l.o.t., xm where m is a nonnegative integer. For the simplest case u t = u xxx, we have and the solution of this ODE is Therefore, û t = βu xxx, or û t = β(iω) 3 û = iω 3 û, û(ω, t) = û(ω, ) e iω3t. u = û = û(ω, ) = u(ω, ), and the solution to the original PDE can be expressed as u(x, t) = π = π e iωx û (ω) e iω3t dω e iω(x ω t) û (ω) dω. Evidently, the Fourier component with wave number ω propagates with velocity ω, so waves mutually interact but there is no diffusion. Example 4.7. PDEs with higher-order 3 derivatives. Consider u t = α m u x m + m u + l.o.t., xm where m is a nonnegative integer. The Fourier transform yields { αω mû û t = α(iω) m + if m = k +, û + = αω m û + if m = k,

102 c4 7/9/6 4:33 page 9 #5 9 FD Methods for Parabolic PDEs hence û = { û(ω, ) e αiω mt + if m = k +, û(ω, ) e αiωmt + if m = k, such that u t = u xx and u t = u xxxx are dynamically stable, whereas u t = u xx and u t = u xxxx are dynamically unstable The Discrete Fourier Transform Motivations to study a discrete Fourier transform include the stability analysis of finite difference schemes, data analysis in the frequency domain, filtering techniques, etc. Definition 4.8. If..., v, v, v, v, v,... denote the values of a continuous function v(x) at x i = i h, the discrete Fourier transform is defined as Remark 4.9. ˆv(ξ) = π j= h e iξ jh v j. (4.9) The definition is a quadrature approximation to the continuous case, i.e., we approximate by, and replace dx by h. ˆv(ξ) is a continuous and periodic function of ξ with period π/h, since e ijh(ξ+π/h) = e ijhξ e ijπ = e iξ jh, (4.) so we can focus on ˆv(ξ) in the interval [ π/h, π/h], and consequently have the following definition. Definition 4.. The inverse discrete Fourier transform is v j = π π/h π/h Given any finite sequence not involving h, v, v,..., v M, e iξ jh ˆv(ξ) dξ. (4.) we can extend the finite sequence according to the following...,,, v, v,..., v M,,,...,

103 c4 7/9/6 4:33 page 93 #6 4.4 Stability Analysis for Time-dependent Problems 93 and alternatively define the discrete Fourier and inverse Fourier transform as ˆv(ξ) = π j= v j = π π π We also define the discrete norm as v h = e iξ j v j = M e iξ j v j, (4.) j= e iξ j ˆv(ξ) dξ. (4.3) j= v j h, (4.4) which is often denoted by v. Parseval s relation is also valid, i.e., π/h ˆv h = ˆv(ξ) dξ = π/h j= h v j = v h. (4.5) Definition of the Stability of a FD Scheme A finite difference scheme P t,h v k j = is stable in a stability region Λ if for any positive time T there is an integer J and a constant C T independent of t and h such that J v n h C T v j h, (4.6) j= for any n that satisfies n t T with ( t, h) Λ. Remark 4... The stability is usually independent of source terms.. A stable finite difference scheme means that the growth of the solution is at most a constant multiple of the sum of the norms of the solution at the first J + steps. 3. The stability region corresponds to all possible t and h for which the finite difference scheme is stable. The following theorem provides a simple way to check the stability of any finite difference scheme. Theorem 4.. If v k+ h v k h is true for any k, then the finite difference scheme is stable.

104 c4 7/9/6 4:33 page 94 #7 94 FD Methods for Parabolic PDEs Proof From the condition, we have v n h v n h v h v h, and hence stability for J = and C T = The von Neumann Stability Analysis for FD Methods The von Neumann stability analysis of a finite difference scheme can be sketched briefly as Discrete scheme = discrete Fourier transform = growth factor g(ξ) = stability ( g(ξ)?). We will also explain a simplification of the von Neumann analysis. Example 4.3. The forward Euler method (FW-CT) for the heat equation u t = βu xx is ( ) Ui k+ = U k i + µ U k i Uk i + U k i+, µ = β t h. (4.7) From the discrete Fourier transform, we have the following U k j = π π/h π/h U k j+ = π π/h and similarly π/h e iξ jh Û k (ξ)dξ, (4.8) e iξ(j+)h Û k (ξ)dξ = π π/h U k j = π π/h π/h π/h e iξ jh e iξh Û k (ξ)dξ, (4.9) e iξ jh e iξh Û k (ξ)dξ. (4.3) Substituting these relations into the forward Euler finite difference scheme, we obtain U k+ i = π π/h π/h e iξ jh ( + µ(e iξh + e iξh )) Ûk (ξ)dξ. (4.3) On the other hand, from the definition of the discrete Fourier transform, we also know that U k+ i = π π/h π/h e iξ jh Û k+ (ξ)dξ. (4.3) The discrete Fourier transform is unique, which implies ( Û k+ (ξ) = + µ(e iξh + e )) iξh Ûk (ξ) = g(ξ)û k (ξ), (4.33)

105 c4 7/9/6 4:33 page 95 #8 4.4 Stability Analysis for Time-dependent Problems 95 where g(ξ) = + µ(e iξh + e iξh ) (4.34) is called the growth factor. If g(ξ), then Û k+ Û k and thus Û k+ h Û k h, so the finite difference scheme is stable. Let us examine g(ξ) now. We have g(ξ) = + µ (cos( ξh) i sin(ξh) + cos(ξh) + i sin(ξh)) = + µ (cos(ξh) ) = 4µ sin (ξh)/, (4.35) but we need to know when g(ξ), or g(ξ). Note that 4µ 4µ sin (ξh)/ = g(ξ), (4.36) so on taking 4µ we can guarantee that g(ξ), which implies the stability. Thus a sufficient condition for the stability of the forward Euler method is 4µ or 4µ, or t h β. (4.37) Although we cannot claim what will happen if this condition is violated, it provides an upper bound for the stability Simplification of the von Neumann Stability Analysis for One-Step Time Marching Methods Consider the one-step time marching method U k+ = f (U k, U k+ ). The following theorem provides a simple way to determine the stability. Theorem 4.4. Let θ = hξ. A one-step finite difference scheme (with constant coefficients) is stable if and only if there is a constant K (independent of θ, t, and h) and some positive grid spacing t and h such that g(θ, t, h) + K t (4.38) for all θ and < h h. If g(θ, t, h) is independent of h and t, then the stability condition (4.38) can be replaced by g(θ. (4.39) Thus only the amplification factor g(hξ) = g(θ) needs to be considered, as observed by von Neumann.

The von Neumann stability analysis usually involves the following steps:

1. set U_j^k = e^{ijhξ} and substitute it into the finite difference scheme;
2. express U_j^{k+1} as U_j^{k+1} = g(ξ) e^{ijhξ}, etc.;
3. solve for g(ξ) and determine whether or when |g(ξ)| ≤ 1 (for stability); but note that
4. if there are some ξ such that |g(ξ)| > 1, then the method is unstable.

Example 4.15. The stability of the backward Euler method for the heat equation u_t = βu_xx. The scheme is

U_i^{k+1} = U_i^k + µ ( U_{i−1}^{k+1} − 2U_i^{k+1} + U_{i+1}^{k+1} ),  µ = βΔt/h².  (4.40)

Following the procedure mentioned above, we have

g(ξ) e^{ijhξ} = e^{ijhξ} + µ ( e^{iξ(j−1)h} − 2e^{iξjh} + e^{iξ(j+1)h} ) g(ξ)
             = e^{ijhξ} + µ ( e^{−iξh} − 2 + e^{iξh} ) e^{ijhξ} g(ξ),  (4.41)

with solution

g(ξ) = 1/( 1 − µ(e^{−iξh} − 2 + e^{iξh}) ) = 1/( 1 + 2µ(1 − cos(hξ)) ) = 1/( 1 + 4µ sin²(hξ/2) ),  (4.42)

for any h and Δt > 0. Obviously 0 < g(ξ) ≤ 1, so |g(ξ)| ≤ 1 and the backward Euler method is unconditionally stable, i.e., there is no constraint on Δt for stability.

Example 4.16. The leapfrog scheme (a two-stage method) for the heat equation u_t = u_xx is

(U_i^{k+1} − U_i^{k−1})/(2Δt) = (U_{i−1}^k − 2U_i^k + U_{i+1}^k)/h²,  (4.43)

involving the central finite difference formula in both time and space. This method is unconditionally unstable! To show this, we use U_j^{k−1} = e^{ijhξ}/g(ξ) and set µ = 2Δt/h² to get

g(ξ) e^{ijhξ} = (1/g(ξ)) e^{ijhξ} + µ ( e^{−iξh} − 2 + e^{iξh} ) e^{ijhξ} = (1/g(ξ)) e^{ijhξ} − 4µ sin²(hξ/2) e^{ijhξ},

yielding a quadratic equation for g(ξ):

(g(ξ))² + 4µ sin²(hξ/2) g(ξ) − 1 = 0.  (4.44)

The two roots are

g(ξ) = −2µ sin²(hξ/2) ± √( 4µ² sin⁴(hξ/2) + 1 ),

and the root g(ξ) = −2µ sin²(hξ/2) − √( 4µ² sin⁴(hξ/2) + 1 ) has magnitude |g(ξ)| ≥ 1. Thus there are ξ such that |g(ξ)| > 1, so the method is unstable.
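These growth factors are easy to examine numerically. The following few lines of Matlab (an illustrative script, not part of the book's codes) evaluate max_θ |g(θ)| for the three schemes discussed above and confirm the conclusions.

% Numerical check of the growth factors: forward Euler g = 1 - 4*mu*sin(theta/2)^2,
% backward Euler g = 1/(1 + 4*mu*sin(theta/2)^2), and for the leapfrog scheme the
% root of g^2 + 4*mu*sin(theta/2)^2*g - 1 = 0 with the larger magnitude.
theta = linspace(-pi, pi, 1001);
s2    = sin(theta/2).^2;
for mu = [0.4, 0.6]                            % below and above the bound 1/2
    gFW = 1 - 4*mu*s2;
    gBW = 1./(1 + 4*mu*s2);
    gLF = -2*mu*s2 - sqrt(4*mu^2*s2.^2 + 1);   % root with the larger magnitude
    fprintf('mu = %.2f:  max|g| FW = %.3f, BW = %.3f, leapfrog = %.3f\n', ...
            mu, max(abs(gFW)), max(abs(gBW)), max(abs(gLF)));
end

The output shows max|g| ≤ 1 for the forward Euler method only when µ ≤ 1/2, max|g| ≤ 1 for the backward Euler method always, and max|g| > 1 for the leapfrog scheme for every µ > 0.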

4.5 FD Methods and Analysis for 2D Parabolic Equations

The general form of a parabolic PDE is

u_t + a₁u_x + a₂u_y = (βu_x)_x + (βu_y)_y + κu + f(x, y, t),

with boundary conditions and an initial condition. We need β ≥ β₀ > 0 for dynamic stability. The PDE can be written as u_t = Lu + f, where L is the spatial differential operator. The MOL can be used provided there is a good solver for the stiff ODE system. Note that the system is large, O(mn), if the numbers of grid lines are O(m) and O(n) in the x- and y-directions, respectively.

For simplicity, let us consider the heat equation u_t = ∇·(β∇u) + f(x, y, t) and assume β is a constant. The simplest method is the forward Euler method:

U_{lj}^{k+1} = U_{lj}^k + µ ( U_{l−1,j}^k + U_{l+1,j}^k + U_{l,j−1}^k + U_{l,j+1}^k − 4U_{lj}^k ) + Δt f_{lj}^k,

where µ = βΔt/h². The method is first order in time and second order in space, and it is conditionally stable.
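A minimal sketch of this explicit update on the unit square with homogeneous Dirichlet boundary conditions is given below; the test problem (exact solution u = e^{−t} sin(πx) sin(πy)) and all names are illustrative assumptions, not taken from the book's codes.

% Minimal 2D forward Euler sketch for u_t = beta*(u_xx + u_yy) + f on the unit
% square with homogeneous Dirichlet BCs.  Illustrative only.
beta = 1; m = 32; h = 1/m;
[X, Y] = meshgrid(0:h:1, 0:h:1);          % grid, including boundary points
f  = @(X,Y,t) exp(-t)*sin(pi*X).*sin(pi*Y)*(2*pi^2 - 1);  % test source term
U  = sin(pi*X).*sin(pi*Y);                % initial condition
T  = 0.1; dt = h^2/(4*beta);              % largest explicitly stable step
n  = ceil(T/dt); dt = T/n;
for k = 0:n-1
    t = k*dt;
    L = zeros(size(U));
    L(2:end-1,2:end-1) = ( U(1:end-2,2:end-1) + U(3:end,2:end-1) ...
                         + U(2:end-1,1:end-2) + U(2:end-1,3:end) ...
                         - 4*U(2:end-1,2:end-1) )/h^2;    % five-point Laplacian
    U = U + dt*( beta*L + f(X,Y,t) );     % explicit update; boundary stays zero
end
err = max(max(abs(U - exp(-T)*sin(pi*X).*sin(pi*Y))))     % compare with exact

The time step is set to the largest explicitly stable value, which is derived next.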

The stability condition is

Δt ≤ h²/(4β).  (4.45)

Note that the factor is now 4, instead of 2 for 1D problems. To show stability using the von Neumann analysis with f = 0, set

U_{lj}^k = e^{i(l h_x ξ₁ + j h_y ξ₂)} = e^{i ξ·x},  (4.46)

where ξ = [ξ₁, ξ₂]^T and x = [h_x l, h_y j]^T, and

U_{lj}^{k+1} = g(ξ₁, ξ₂) e^{i ξ·x}.  (4.47)

Note that the spatial index in the x-direction is l instead of i, to avoid confusion with the imaginary unit i = √−1. Substituting these expressions into the finite difference scheme, we obtain

g(ξ₁, ξ₂) = 1 − 4µ ( sin²(ξ₁h/2) + sin²(ξ₂h/2) ),

where h_x = h_y = h for simplicity. Since

0 ≤ 4µ ( sin²(ξ₁h/2) + sin²(ξ₂h/2) ) ≤ 8µ,

we have 1 − 8µ ≤ g(ξ₁, ξ₂) ≤ 1, and on taking 8µ ≤ 2 we can guarantee that |g(ξ₁, ξ₂)| ≤ 1, which implies stability. Thus, a sufficient condition for the stability of the forward Euler method in 2D is 8βΔt ≤ 2h², or Δt ≤ h²/(4β), in addition to the condition Δt > 0.

4.5.1 The Backward Euler Method (BW-CT) in 2D

The backward Euler scheme can be written as

(U_{ij}^{k+1} − U_{ij}^k)/Δt = ( U_{i−1,j}^{k+1} + U_{i+1,j}^{k+1} + U_{i,j−1}^{k+1} + U_{i,j+1}^{k+1} − 4U_{ij}^{k+1} )/h² + f_{ij}^{k+1},  (4.48)

109 c4 7/9/6 4:33 page 99 # 4.6 The ADI Method 99 which is first order in time and second order in space, and it is unconditionally stable. The coefficient matrix for the unknown Uij k+ is block tridiagonal, and strictly row diagonally dominant if the natural row ordering is used to index the Uij k+ and the finite difference equations The Crank Nicolson (C N) Scheme in D The Crank Nicolson scheme can be written as U k+ ij t U k ij = ( U k+ i, j + U k+ i+, j + U k+ i, j + Uk+ i, j+ 4Uk+ ij h + fij k+ + Uk i, j + Uk i+, j + U k i, j + U k i, j+ 4U k ij h + f k ij ). (4.49) Both the local truncation error and global error are O ( ( t) + h ). The scheme is unconditionally stable for linear problems. However, we need to solve a system of equations with a strictly row diagonally dominant and block tridiagonal coefficient matrix, if we use the natural row ordering for both the equations and unknowns. A structured multigrid method can be applied to solve the linear system of equations from the backward Euler method or the Crank Nicolson scheme. 4.6 The ADI Method The ADI is a time splitting or fractional step method. The idea is to use an implicit discretization in one direction and an explicit discretization in another direction. For the heat equation u t = u xx + u yy + f (x, y, t), the ADI method is U k+ ij Uij k ( t)/ Uij k+ U k+ ij ( t)/ = U k+ i, j Uk+ ij + U k+ i+,j h + Uk i, j Uk ij + U i, k j+ x h + f k+ ij, y = U k+ i, j Uk+ ij + U k+ i+,j h + Uk+ x i, j Uk+ ij + U k+ h y i,j+ + f k+ ij, (4.5) which is second order in time and in space if u(x, y, t) C 4 (Ω), where Ω is the bounded domain where the PDE is defined. It is unconditionally stable for linear problems. We can use symbolic expressions to discuss the method,

110 c4 7/9/6 4:33 page #3 FD Methods for Parabolic PDEs rewritten as 3 U k+ ij = U k ij + t δ xxu k+ ij Uij k+ = U k+ ij + t δ xxu k+ ij + t δ yyuij k + t f k+ + t δ yyuij k+ + t ij, f k+ ij. (4.5) Thus on moving unknowns to the left-hand side, in matrix-vector form we have ( I t D x ( I t D y ) U k+ = ( I + t D y ) ( U k+ = I + t D x ) U k + t F k+, ) U k+ t + F k+, (4.5) leading to a simple analytically convenient result as follows. From the first equation we get ( U k+ = I t ) ( D x I + t ) ( D y U k + I t ) t D x Fk+, and substituting into the second equation to have ( I t ) ( D y U k+ = I + t ) ( D x I t ) ( D x I + t ) D y U k We can go further to get ( + I + t D x ) ( I t D x ( I t ) ( D x I t ) ( D y U k+ = I + t ) ( D x I + t ) t F k+ + t F k+. D y ) U k ( + I + t ) t D x Fk+ t + Fk+. This is the equivalent one step time marching form of the ADI method, which will be use to show the stability of the ADI method later. Note that in this derivation we have used ( I + t D x ) ( I + t D y and other commutative operations. ) ( = I + t ) ( D y I + t ) D x

111 c4 7/9/6 4:33 page #4 4.6 The ADI Method 4.6. Implementation of the ADI Algorithm The key idea of the ADI method is to use the implicit discretization dimension by dimension by taking advantage of fast tridiagonal solvers. In the x-direction, the finite difference approximation is U k+ ij = U k ij + t δ xxu k+ ij + t δ yyu k ij + t f k+ ij. For a fixed j, we get a tridiagonal system of equations for U k+ j, U k+ j,..., U k+ m, j, assuming a Dirichlet boundary condition at x = a and x = b. The system of equations in matrix-vector form is + µ µ µ + µ µ µ + µ µ µ + µ µ µ + µ U k+ j U k+ j U k+ 3j. U k+ m, j U k+ m, j = F, where U, k j + t f k+ j + µ u bc (a, y j ) k+ ( + µ U k, j U, k j + U k ), j+ U, k j + t f k+ j + µ ( U k, j U k, j + U k ), j+ U 3j k + t F = f k+ 3j + µ ( U k 3, j U k 3, j + U k ) 3, j+,. Um, k j + t f k+ m, j + µ ( U k m, j U k m, j + U k ) m, j+ Um, k j + t f k+ m, j + µ ( U k m, j U k m, j + U k ) m, j+ + µ ubc (b, y j ) k+ and µ = β t, and f k+ h i = f (x i, t k+ ). For each j, we need to solve a symmetric tridiagonal system of equations. The cost for the x-sweep is about O(5mn).

4.6.2 A Pseudo-code of the ADI Method in Matlab

A Matlab test code adi.m can be found in the depository of Matlab codes for this chapter; a rough sketch of the structure of one ADI step is given below.
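The sketch below advances the heat equation u_t = u_xx + u_yy + f by the two half steps of the ADI scheme (4.50), performing the x-sweep for all rows at once and then the y-sweep for all columns at once. The test problem (exact solution u = e^{−t} sin(πx) sin(πy)) and all names are illustrative assumptions, and the accompanying adi.m may be organized differently, e.g., sweeping line by line with a tridiagonal solver.

% Minimal ADI (Peaceman-Rachford) sketch for u_t = u_xx + u_yy + f on the unit
% square with homogeneous Dirichlet BCs.  Illustrative only.
m = 32; h = 1/m; dt = h; T = 0.5;
x = (h:h:1-h)'; y = (h:h:1-h)';
[X, Y] = ndgrid(x, y);                    % U(i,j) ~ u(x_i, y_j), interior only
f  = @(t) exp(-t)*sin(pi*X).*sin(pi*Y)*(2*pi^2 - 1);     % test source term
U  = sin(pi*X).*sin(pi*Y);                % initial condition
e  = ones(m-1,1);
D2 = spdiags([e, -2*e, e], -1:1, m-1, m-1)/h^2;          % 1D second difference
I  = speye(m-1);
Ax = I - dt/2*D2;  Bx = I + dt/2*D2;      % x-direction operators
Ay = I - dt/2*D2;  By = I + dt/2*D2;      % y-direction operators
for k = 0:round(T/dt)-1
    t = k*dt;  F = f(t + dt/2);
    % x-sweep: implicit in x (first index), explicit in y (second index)
    Uh = Ax \ ( U*By + dt/2*F );          % U^{k+1/2}
    % y-sweep: implicit in y, explicit in x
    U  = ( Ay \ (Bx*Uh + dt/2*F).' ).';   % U^{k+1}
end
err = max(max(abs(U - exp(-T)*sin(pi*X).*sin(pi*Y))))    % compare with exact

Each half step requires only tridiagonal solves, which is what makes the ADI method attractive in two dimensions.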

113 c4 7/9/6 4:33 page 3 #6 4.6 The ADI Method Consistency of the ADI Method Adding the two equations in (4.5) together, we get Uij k ( t)/ U k+ ij = δ xxu k+ ij + δ yy ( ) Uij k+ + Uij k + f k+ ij ; (4.53) and if we subtract the first equation from the second equation, we get ( ) ( ) 4U k+ ij = Uij k+ + Uij k tδyy Uij k+ Uij k. (4.54) Substituting this into (4.53) we get ( + ( t) 4 δ xxδ yy ) U k+ ij Uij k t ( ) U k+ = δxx + δyy ij U34 ij k + f k+ ij, (4.55) and we can clearly see that the discretization is second-order accurate in both space and time, i.e., T k ij = O(( t) + h ). Taking f = and setting Stability Analysis of the ADI Method U k lj = ei(ξ h l+ξ h j), U k+ lj = g(ξ, ξ ) e i(ξ h l+ξ h j), (4.56) on using the operator form we have ( t ) ( δ xx t ) ( δ yy U k+ jl = + t ) ( δ xx + t ) δ yy Ujl k, which yields, ( t ) ( δ xx t = ( + t δ xx After some manipulations, we get g(ξ, ξ ) = δ yy ) g(ξ, ξ ) e i(ξ h l+ξ h j) ) ( + t δ yy ) e i(ξ h l+ξ h j). ( ) ( ) 4µ sin (ξ h/) 4µ sin (ξ h/) ( ) ( ), + 4µ sin (ξ h/) + 4µ sin (ξ h/)

114 c4 7/9/6 4:33 page 4 #7 4 FD Methods for Parabolic PDEs where µ = t h and for simplicity we have set h x = h y = h. Thus g(ξ, ξ ), no matter what t and h are, so the ADI method is unconditionally stable for linear heat equations. 4.7 An Implicit explicit Method for Diffusion and Advection Equations Consider a diffusion and advection PDE in D u t + w u = (β u) + f (x, y, t) where w is a D vector, and is the gradient operator in D, see page 48. In this case, it is not so easy to get a second-order implicit scheme such that the coefficient matrix is diagonally dominant or symmetric positive or negative definite, due to the advection term w u. One approach is to use an implicit scheme for the diffusion term and an explicit scheme for the advection term, of the following form from time level t k to t k+ : u k+ u k where t + (w h u) k+ ( = ( h β h u) k + ( h β h u) k+) + f k+, (4.57) (w h u) k+ = 3 (w hu) k (w hu) k, (4.58) where h u = [δ x u, δ y u] T, and at a grid point (x i, y j ), they are δ x u = u i+, j u i, j h x ; δ x u = u i, j+ u i, j h y. (4.59) We treat the advection term explicitly, since the term only contains the firstorder partial derivatives and the CFL constraint is not a main concern unless w is very large. The time step constraint is t h w. (4.6) At each time step, we need to solve a generalized Helmholtz equation ( β u) k+ uk+ t = uk t + (w u) k+ ( β u) k f k+. (4.6)

115 c4 7/9/6 4:33 page 5 #8 Exercises 5 We need u to get the scheme above started. We can use the explicit Euler method (FW-CT) to approximate u, as this should not affect the stability and global error O(( t) + h ). 4.8 Solving Elliptic PDEs using Numerical Methods for Parabolic PDEs We recall the steady-state solution of a parabolic PDE is the solution of the corresponding elliptic PDE, e.g., the steady-state solution of the parabolic PDE is the solution to the elliptic PDE if the limit u t = (β u) + w u + f (x, t) (β u) + w u + f(x) =, f(x) = lim t f (x, t) exists. The initial condition is irrelevant to the steady-state solution, but the boundary condition is relevant. This approach has some advantages, especially for nonlinear problems where the solution is not unique. We can control the variation of the intermediate solutions, and the linear system of equations is more diagonally dominant. Since we only require the steady-state solution, we prefer to use implicit methods with large time steps since the accuracy in time is unimportant.. Show that a scheme for Exercises u t = β u xx (4.6) of the form U k+ i = αu k i + α ( ) Ui+ k + Ui k where α = βµ, µ = t/h is consistent with the heat equation (4.6). Find the order of the discretization.. Show that the implicit scheme ( tβ ) ( U k+ δ i Ui k xx t ) ) = β ( h δ xx δxxu i k (4.63)

116 c4 7/9/6 4:33 page 6 #9 6 FD Methods for Parabolic PDEs for the heat equation (4.6) has order of accuracy (( t), h 4 ), where δ xxu i = U i U i + U i+ h, and t = O(h ). Compare this method with FW-CT, BW-CT, and Crank Nicolson schemes and explain the advantages and limitations. (Note: The stability condition of the scheme is βµ 3 ). 3. For the implicit Euler method applied to the heat equation u t = u xx, is it possible to choose t such that the discretization is O(( t) + h 4 )? 4. Consider the diffusion and advection equation u t + u x = βu xx, β >. (4.64) Use the von Neumann analysis to derive the time step restriction for the scheme U k+ i t U k i + Uk i+ Ui k = β U i k Ui k + Ui+ k. h h 5. Implement and compare the Crank Nicolson and the MOL methods using Matlab for the heat equation: u t = βu xx + f (x, t), a < x < b, t, u(x, ) = u (x); u(a, t) = g (t); u x (b, t) = g (t), where β is a constant. Use u(x, t) = (cos t) x sin(πx), < x <, tfinal =. to test and debug your code. Write a short report about these two methods. Your discussion should include the grid refinement analysis, error and solution plots for m = 8, comparison of cputime, and any conclusions you can draw from your results. You can use Matlab code ode5s or ode3s to solve the semidiscrete ODE system. Assume that u is the temperature of a thin rod with one end (x = b) just heated. The other end of the rod has a room temperature (7 C). Solve the problem and find the history of the solution. Roughly how long does it take for the temperature of the rod to reach the steady state? What is the exact solution of the steady state? Hint: Take the initial condition as u(x, ) = T e (x b) /γ, where T and γ are two constants, f (x, t) =, and the Neumann boundary condition u x (b, t) =. 6. Carry out the von Neumann analysis to determine the stability of the θ method U (n+) j U n j k for the heat equation u t = bu xx, where ( = b θ δxu (n) j ) + ( θ) δxu (n+) j (4.65) δ xu j = U j U j + U j+ h and θ. 7. Modify the Crank Nicolson Matlab code for the backward Euler method and for variable β(x, t) s in one space dimensions. Validate your code. 8. Implement and compare the ADI and Crank Nicolson methods with the SOR(ω) (try to test optimal ω) for the following problem involving the D heat equation: u t = u xx + u yy + f (t, x, y), a < x < b, c y d, t, u(, x, y) = u (x, y),

117 c4 7/9/6 4:33 page 7 #3 Exercises 7 and Dirichlet boundary conditions. Choose two examples with known exact solutions to test and debug your code. Write a short report about the two methods. Your discussion should include the grid refinement analysis (with a fixed final time, say T =.5), error and solution plots, comparison of cpu time and flops, and any conclusions you can draw from your results. 9. Extra credit: Modify the ADI Matlab code for variable heat conductivity β(x, y).

118 c5 7/9/6 4:49 page 8 # 5 Finite Difference Methods for Hyperbolic PDEs In this chapter, we discuss finite difference methods for hyperbolic PDEs (see page 6 for the definition of hyperbolic PDEs). Let us first list a few typical model problems involving hyperbolic PDEs. Advection equation (one-way wave equation): u t + au x =, < x <, u(x, ) = η(x), IC, u(, t) = g l (t) if a, or u(, t) = g r (t) if a. (5.) Here g l and g r are prescribed boundary conditions from the left and right, respectively. Second-order linear wave equation: u tt = au xx, < x <, u(x, ) = η(x), u (x, ) = v(x), t IC, u(, t) = g l (t), u(, t) = g r (t), BC. (5.) Linear first-order hyperbolic system: u t = Au x + f(x, t), (5.3) where u and f are two vectors and A is a matrix. The system is called hyperbolic if A is diagonalizable, i.e., if there is a nonsingular matrix T such that A = TDT, and all eigenvalues of A are real numbers. 8

119 c5 7/9/6 4:49 page 9 # 5. Characteristics and Boundary Conditions 9 Nonlinear hyperbolic equation or system, notably conservation laws: ( ) u u t + f(u) x =, e.g., Burgers equation u x + = ; (5.4) x u t + f x + g y =. (5.5) For nonlinear hyperbolic PDE, shocks (a discontinuous solution) can develop even if the initial data is smooth. 5. Characteristics and Boundary Conditions We know the exact solution for the one-way wave equation u t + au x =, < x <, u(x, ) = η(x), t > is u(x, t) = η(x at). If the domain is finite, we can also find the exact solution. We can solve the model problem u t + au x =, < x <, u(x, ) = η(x), t >, u(, t) = g l (t) if a > by the method of characteristics since the solution is constant along the characteristics. For any point (x, t) we can readily trace the solution back to the initial data. In fact, for the characteristic z(s) = u(x + ks, t + s) (5.6) along which the solution is a constant (z (s) ), on substituting into the PDE we get z (s) = u t + ku x =, which is always true if k = a. The solution at (x + ks, t + s) is the same as at (x, t), so we can solve the problem by tracing back until the line hits the boundary, i.e., u( x, t) = u(x + as, t + s) = u(x at, ) if x at, on tracing back to the initial condition. If x at <, we can only trace back to x = or s = x/a and t = x/a, and the solution is u( x, t) = u(, t x/a) = g l (t x/a). The solution for the case a can therefore be written as η(x at) if x at, u(x, t) = ( g l t x ) (5.7) if x < at. a

120 c5 7/9/6 4:49 page #3 Finite Difference Methods for Hyperbolic PDEs Now we can see why we have to prescribe a boundary condition at x =, but we cannot have any boundary condition at x =. It is important to have correct boundary conditions for hyperbolic problems! The one-way wave equation is often used as a benchmark problem for different numerical methods for hyperbolic problems. 5. Finite Difference Schemes Simple numerical methods for hyperbolic problems include: Lax Friedrichs method; Upwind scheme; Leap-frog method (note it does not work for the heat equation but works for linear hyperbolic equations); Box scheme; Lax Wendroff method; Crank Nicolson scheme (not recommended for hyperbolic problems, since there are no severe time step size constraints); and Beam Warming method (one-sided second-order upwind scheme if the solution is smooth). There are also some high-order methods in the literature. For linear hyperbolic problems, if the initial data is smooth (no discontinuities), it is recommended to use second-order accurate methods such as the Lax Wendroff method. However, care has to be taken if the initial data has finite discontinuities, called shocks, as second- or high-order methods often lead to oscillations near the discontinuities (Gibbs phenomena). Some of the methods are the bases for numerical methods for a conservation law, a special conservative nonlinear hyperbolic system, for which shocks may develop in finite time even if the initial data is smooth. Also for hyperbolic differential equations, usually there is no strict time step constraint as for parabolic problems. Often explicit methods are preferred. 5.. Lax Friedrichs Method Consider the one-way wave equation u t + au x =, and the simple finite difference scheme U k+ j Uj k t + a h or U k+ j = U k j µ ( ) Uj+ k U j k =, ( U k j+ U k j ),

where µ = aΔt/(2h). The scheme has O(Δt + h²) local truncation error, but the method is unconditionally unstable according to the von Neumann stability analysis: the growth factor for this FW-CT finite difference scheme is

g(θ) = 1 − µ ( e^{ihξ} − e^{−ihξ} ) = 1 − 2µ i sin(hξ),

where θ = hξ, so

|g(θ)|² = 1 + 4µ² sin²(hξ) ≥ 1.

In the Lax–Friedrichs scheme, we replace U_j^k by the average of U_{j−1}^k and U_{j+1}^k to get

U_j^{k+1} = ½ ( U_{j−1}^k + U_{j+1}^k ) − µ ( U_{j+1}^k − U_{j−1}^k ).

The local truncation error is O(Δt + h) if Δt ∼ h. The growth factor is

g(θ) = ½ ( e^{ihξ} + e^{−ihξ} ) − µ ( e^{ihξ} − e^{−ihξ} ) = cos(hξ) − 2µ sin(hξ) i,

so

|g(θ)|² = cos²(hξ) + 4µ² sin²(hξ) = 1 − sin²(hξ) + 4µ² sin²(hξ) = 1 − (1 − 4µ²) sin²(hξ),

and we conclude that |g(θ)| ≤ 1 if 4µ² ≤ 1, or (aΔt/h)² ≤ 1, which implies that Δt ≤ h/|a|. This is the CFL (Courant–Friedrichs–Lewy) condition. For the Lax–Friedrichs scheme, we need a numerical boundary condition (NBC) at the outflow boundary, as explained later. The Lax–Friedrichs scheme is the basis for several other popular schemes. A Matlab code called lax_fred.m can be found in the Matlab programming collection that accompanies the book.
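A minimal Lax–Friedrichs sketch for u_t + au_x = 0 with a > 0 is given below; the initial profile, the inflow data, and the simple copy-out numerical boundary condition at the outflow are illustrative choices and are not those of lax_fred.m.

% Minimal Lax-Friedrichs sketch for u_t + a*u_x = 0 on (0,1), a > 0, with an
% inflow condition at x = 0 and a simple NBC (copy the neighbor) at x = 1.
a = 1; m = 100; h = 1/m; x = (0:h:1)';
dt = 0.8*h/a;                      % satisfies the CFL condition dt <= h/|a|
mu = a*dt/(2*h);
u  = max(0, 1 - 10*abs(x - 0.3));  % initial hat-shaped profile
g  = @(t) 0;                       % inflow boundary condition at x = 0
for k = 1:round(0.5/dt)
    t = k*dt;
    u(2:m) = 0.5*(u(1:m-1) + u(3:m+1)) - mu*(u(3:m+1) - u(1:m-1));
    u(1)   = g(t);                 % physical (inflow) boundary condition
    u(m+1) = u(m);                 % numerical boundary condition at the outflow
end
plot(x, u)                         % the hat has moved right by about a*T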

5.2.2 The Upwind Scheme

The upwind scheme for u_t + au_x = 0 is

(U_j^{k+1} − U_j^k)/Δt = −(a/h)(U_j^k − U_{j−1}^k)  if a ≥ 0,
(U_j^{k+1} − U_j^k)/Δt = −(a/h)(U_{j+1}^k − U_j^k)  if a < 0,  (5.8)

which is first-order accurate in time and in space. To find the CFL constraint, we conduct the von Neumann stability analysis. The growth factor for the case a ≥ 0 is

g(θ) = 1 − µ ( 1 − e^{−ihξ} ) = 1 − µ(1 − cos(hξ)) − iµ sin(hξ),  µ = aΔt/h,

with magnitude

|g(θ)|² = (1 − µ + µ cos(hξ))² + µ² sin²(hξ)
        = (1 − µ)² + 2(1 − µ)µ cos(hξ) + µ²
        = 1 − 2(1 − µ)µ (1 − cos(hξ)),

so if µ ≤ 1, i.e., Δt ≤ h/a, we have |g(θ)| ≤ 1. Note that no NBC is needed for the upwind scheme, and there is no severe time step restriction, since Δt ≤ h/a. If a = a(x, t) is a variable coefficient that does not change sign, then the CFL condition is

0 < Δt ≤ h / max |a(x, t)|.

However, the upwind scheme is only first order in time and space, and there are some higher-order schemes. A Matlab code called upwind.m can be found in the Matlab programming collection accompanying this book. The main structure of the code is sketched below:
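The following is a minimal sketch of such a main loop, written for the example discussed next (a = 1, inflow condition u(0, t) = sin t). It illustrates the structure only and is not a copy of upwind.m.

% Minimal upwind sketch for u_t + a*u_x = 0 (a > 0) on (0,1) with the inflow
% condition u(0,t) = sin(t).  Illustrative only.
a = 1; m = 100; h = 1/m; x = (0:h:1)';
dt = h/a;                              % satisfies the CFL condition dt <= h/a
u  = double(x < 0.5);                  % discontinuous initial condition
for k = 1:round(1/dt)
    t = k*dt;
    u(2:m+1) = u(2:m+1) - a*dt/h*(u(2:m+1) - u(1:m));  % one-sided difference
    u(1) = sin(t);                     % inflow boundary condition; no NBC needed
    plot(x, u, 'o-'); axis([0 1 -1.1 1.1]); pause(0.01);  % solution history
end

With Δt = h/a (CFL number equal to one) the upwind scheme reproduces the exact solution at the grid points for this example.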

Figure 5.1. Plot of the initial condition and of consecutive approximations produced by the upwind method for an advection equation. (a) The history of the FD solution with time step Δt = h; the scheme is stable. (b) The time step is Δt = 1.5h and the scheme is unstable, which leads to a blowing-up quantity.

In Figure 5.1(a), we show the initial data and several consecutive finite difference approximations of the upwind scheme applied to the advection equation u_t + u_x = 0 in the domain 0 < x < 1. The initial condition is

u(x, 0) = u₀(x) = 1 if 0 < x < 1/2,  u₀(x) = 0 if 1/2 ≤ x < 1.

The boundary condition is u(0, t) = sin t. The analytic solution is

u(x, t) = u₀(x − t) if 0 ≤ t < x < 1,  u(x, t) = sin(t − x) if 0 < x ≤ t.

If we take Δt = h, the scheme works well and we obtain the exact solution for this example (see Figure 5.1(a)). However, if we take Δt > h, say Δt = 1.5h as in the plot of Figure 5.1(b), the solution blows up quickly since the scheme is unstable. Once again, this shows the importance of the time step constraint.

124 c5 7/9/6 4:49 page 4 #7 4 Finite Difference Methods for Hyperbolic PDEs 5..3 The Leap-Frog Scheme The leap-frog scheme for u t + au x = is U k+ j U k j t or Uj k+ = Uj k µ + a ( Uj+ k h U j k ( ) Uj+ k U j k, ) =, (5.9) where µ = a t/(h). The discretization is second-order in time and in space. It requires an NBC at one end and needs U j to get started. We know that the leapfrog scheme is unconditionally unstable for the heat equation. Let us consider the stability for the advection equation through the von Neumann analysis. Substituting U k j = e ijξ, U k+ j = g(ξ)e ijξ, U k j = g(ξ) eijξ into the leap-frog scheme, we get g + µ(e ihξ e ihξ )g =, or g + µi sin(hξ) g =, with solution g ± = iµ sin(hξ) ± µ sin (hξ). (5.) We distinguish three different cases.. If µ >, then there are ξ such that at least one of g > or g + > holds, so the scheme is unstable!. If µ <, then µ sin (hξ) such that g ± = µ sin (hξ) + µ sin (hξ) =. However, since it is a two-stage method, we have to be careful about the stability. From linear finite difference theory, we know the general solution is U k = C g k + C g+ k ( ) U k max{c, C } g k + g+ k max{c, C }, so the scheme is neutral stable according to the definition U k C T J j= U j.

125 c5 7/9/6 4:49 page 5 #8 5.3 The Modified PDE and Numerical Diffusion/Dispersion 5 3. If µ =, we still have g ± =, but we can find ξ such that µ sin(hξ) = and g + = g = i, i.e., i is a double root of the characteristic polynomial. The solution of the finite difference equation therefore has the form U k j = C ( i) k + C k( i) k, where the possibly complex numbers C and C are determined from the initial conditions. Thus there are solutions such that U k k which are unstable (slow growing). In conclusion, the leap-frog scheme is stable if t < h a. Note that we can use the upwind or other scheme (even unstable ones) to initialize the leap-frog scheme to get U j. We call a numerical scheme (such as the Lax Friedrichs and upwind schemes) dissipative if g(ξ) <, and otherwise (such as the leap-frog scheme) it is nondissipative. 5.3 The Modified PDE and Numerical Diffusion/Dispersion A modified PDE is the PDE that a finite difference equation satisfies exactly at grid points. Consider the upwind method for the advection equation u t + au x = in the case a >, U k+ j Uj k t + a h ( U k j U k j ) =. The derivation of a modified PDE is similar to computing the local truncation error, only now we insert v(x, t) into the finite difference equation to derive a PDE that v(x, t) satisfies exactly, thus v(x.t + t) v(x, t) t + a (v(x, t) v(x h, t)) =. h Expanding the terms in Taylor series about (x, t) and simplifying yields v t + ( t v tt + + a v x hv xx + ) 6 h v xxx + =, which can be rewritten as v t + av x = (ahv xx tv tt ) 6 (ah v xxx + ( t) v tt ) +,

126 c5 7/9/6 4:49 page 6 #9 6 Finite Difference Methods for Hyperbolic PDEs which is the PDE that v satisfies. Consequently, v tt = av xt + (ahv xxt tv ttt ) = av xt + O( t, h) = a x ( av x + O( t, h) so the leading modified PDE is v t + av x = ( ah a t ) v xx. (5.) h This is an advection diffusion equation. The grid values Uj n can be viewed as giving a second-order accurate approximation to the true solution of this equation, whereas they only give a first-order accurate approximation to the true solution of the original problem. From the modified equation, we can conclude that: the computed solution smooths out discontinuities because of the diffusion term (the second-order derivative term is called numerical dissipation, or numerical viscosity); if a is a constant and t = h/a, then a t/h = (we have second-order accuracy); we can add the correction term to offset the leading error term to render a higher-order accurate method, but the stability needs to be checked. For instance, we can modify the upwind scheme to get U k+ j Uj k t + a U k j U k j h ), = ( ah a t ) U k j Uj k + Uj+ k h h, which is second-order accurate if t h; from the modified equation, we can see why some schemes are unstable, e.g., the leading term of the modified PDE for the unstable scheme is U k+ j Uj k t + a U k j+ Uk j h = (5.) v t + av x = a t v xx, (5.3) where the highest derivative is similar to the backward heat equation that is dynamically unstable!

5.4 The Lax–Wendroff Scheme and Other FD Methods

To derive the Lax–Wendroff scheme, we note that

(u(x, t + Δt) − u(x, t))/Δt = u_t + (Δt/2) u_tt + O((Δt)²) = u_t + (a²Δt/2) u_xx + O((Δt)²).

We can add the numerical viscosity term (a²Δt/2) u_xx to improve the accuracy in time, to get the Lax–Wendroff scheme

(U_j^{k+1} − U_j^k)/Δt + a (U_{j+1}^k − U_{j−1}^k)/(2h) = (a²Δt/2) (U_{j−1}^k − 2U_j^k + U_{j+1}^k)/h²,  (5.14)

which is second-order accurate in both time and space. To show this, we investigate the corresponding local truncation error

T(x, t) = (u(x, t + Δt) − u(x, t))/Δt + a (u(x + h, t) − u(x − h, t))/(2h)
          − (a²Δt/2) (u(x − h, t) − 2u(x, t) + u(x + h, t))/h²
        = u_t + (Δt/2) u_tt + a u_x − (a²Δt/2) u_xx + O((Δt)² + h²)
        = O((Δt)² + h²),

since u_t = −a u_x and u_tt = −a u_xt = −a ∂u_t/∂x = a² u_xx.

To get the CFL condition for the Lax–Wendroff scheme, we carry out the von Neumann stability analysis. The growth factor of the Lax–Wendroff scheme, with µ = aΔt/h, is

g(θ) = 1 − (µ/2)(e^{ihξ} − e^{−ihξ}) + (µ²/2)(e^{ihξ} − 2 + e^{−ihξ}) = 1 − µ i sin θ − 2µ² sin²(θ/2),

128 c5 7/9/6 4:49 page 8 # 8 Finite Difference Methods for Hyperbolic PDEs where again θ = hξ, so ( ) g(θ) = µ sin θ + µ sin θ = 4µ sin θ + 4µ4 sin 4 θ + 4µ sin θ = 4µ ( µ ) sin 4 θ 4µ ( µ ). ( ) sin θ We conclude g(θ) if µ, i.e., t h/ a. If t > h/ a, there are ξ such that g(θ) > so the scheme is unstable. The leading modified PDE for the Lax Wendroff method is v t + av x = ( ) ) a t ( 6 ah v xxx (5.5) h which is a dispersive equation. The group velocity for the wave number ξ is c g = a ( ) ) a t ( ah ξ, (5.6) h which is less than a for all wave numbers. Consequently, the numerical result can be expected to develop a train of oscillations behind the peak, with high wave numbers lagging farther behind the correct location (cf. Strikwerda, 989 for more details). If we retain one more term in the modified equation for the Lax Wendroff scheme, we get ( (a t v t + av x = ) 6 ah ) v xxx ϵv xxxx, (5.7) h where the ϵ in the fourth-order dissipative term is O(h 3 ) and positive when the stability bound holds. This high-order dissipation causes the highest wave number to be damped, so that the oscillations are limited The Beam Warming Method The Beam Warming method is a one-sided finite difference scheme for the modified equation v t + av x = a t v xx.

129 c5 7/9/6 4:49 page 9 # 5.4 The Lax Wendroff Scheme and Other FD methods 9 Recall the one-sided finite difference formulas, cf. page u (x) = u (x) = 3u(x) 4u(x h) + u(x h) h + O(h ), u(x) u(x h) + u(x h) h + O(h). The Beam Warming method for u t + au x = for a > is Uj k+ = Uj k a t ( 3Uj k h ) + (a t) 4Uj k + U j k Uj k Uj k + U j k (5.8) which is second-order accurate in time and space if t h. The CFL constraint is h ( ), < t h a. (5.9) For this method, we do not require an NBC at x =, but we need a scheme to compute the solution U j. The leading terms of the modified PDE for the Beam Warming method are ( (a t v t + av x = ) 6 ah ) v xxx. (5.) h In this case, the group velocity is greater than a for all wave numbers when a t/h, so initial oscillations would move ahead of the main hump. On the other hand, if a t/h the group velocity is less than a, so the oscillations fall behind The Crank Nicolson Scheme The Crank Nicolson scheme for the advection equation u t + au x = f is U k+ j Uj k t + a Uk j+ U j k + Uk+ j+ Uk+ j = f k+ j, (5.) 4h which is second-order accurate in time and in space, and unconditionally stable. An NBC is needed at x =. This method is effective for the D problem, since it is easy to solve the resulting tridiagonal system of equations. For higher-dimensional problems, the method is not recommended in general as for hyperbolic equations the time step constraint t h is not a major concern.

130 c5 7/9/6 4:49 page #3 Finite Difference Methods for Hyperbolic PDEs The Method of Lines Different method of lines (MOL) methods can be used, depending on how the spatial derivative term is discretized. For the advection equation u t + au x =, if we use U i + a U i+ U i = (5.) t h the ODE solver is likely to be implicit, since the leap-frog method is unstable! 5.5 Numerical Boundary Conditions We need a numerical boundary condition (NBC) at one end for the one-way wave equation when we use any of the Lax Friedrichs, Lax Wendroff, or leapfrog schemes. There are several possible approaches. Extrapolation. One simplest first-order approximation is U k+ M = U k+ M. To get a second-order approximation, recall the Lagrange interpolation formula f(x) f(x ) x x x x + f(x ) x x x x. We can use the same time level for the interpolation to get U k+ M = U k+ M x M x M + U k+ x M x M x M x M. M x M x M If a uniform grid is used with spatial step size h, this formula becomes UM k+ = Uk+ M + Uk+ M. Quasi-characteristics. If we use previous time levels for the interpolation, we get UM k+ = U M k, first order, U k+ M = U M k x M x M + UM k x M x M x M x M, x M x M second order. We may use schemes that do not need an NBC at or near the boundary, e.g., the upwind scheme or the Beam Warming method to provide the boundary conditions. The accuracy and stability of numerical schemes usually depend upon the NBC s used. Usually, the main scheme and the scheme for an NBC s should both be stable.

131 c5 7/9/6 4:49 page #4 5.6 Finite Difference Methods for Second-Order Linear Hyperbolic PDEs 5.6 Finite Difference Methods for Second-Order Linear Hyperbolic PDEs In reality, a D sound wave propagates in two directions and can be modeled by the wave equation u tt = a u xx, (5.3) where a > is the wave speed. We can find the general solution by changing variables as follows, { ξ = x at x = ξ + η or η = x + at t = η ξ (5.4) a and using the chain-rule, we have u t = au ξ + au η, u tt = a u ξξ a u ξη + a u ηη, u x = u ξ + u η, u xx = u ξξ + u ξη + u ηη. Substituting these relations into the wave equation, we get which simplifies to yielding the solution u ξξ a a u ξη + a u ηη = a (u ξξ + u ξη + u ηη ), 4a u ξη =, u ξ = F(ξ), = u(x, t) = F(ξ) + G(η), u(x, t) = F(x at) + G(x + at), where F(ξ) and G(η) are two differential functions of one variable. The two functions are determined by initial and boundary conditions. With the general solution above, we can get the analytic solution to the Cauchy problem below: u tt = a u xx, < x <, u(x, ) = u (x), u t (x, ) = g(x),

as

u(x, t) = ( u_0(x − at) + u_0(x + at) )/2 + (1/(2a)) ∫_{x−at}^{x+at} g(s) ds.   (5.25)

The solution is called d'Alembert's formula. In particular, if u_t(x, 0) = 0, then the solution is

u(x, t) = ( u(x − at, 0) + u(x + at, 0) )/2,

demonstrating that a signal (wave) propagates along each characteristic x − at and x + at with speed a at half of its original strength. The solution u(x, t) at a point (x_0, t_0) depends on the initial conditions only in the interval (x_0 − at_0, x_0 + at_0). The initial values in (x_0 − at_0, x_0 + at_0) not only determine the solution value of u(x, t) at (x_0, t_0) but also all the values of u(x, t) in the triangle formed by the three vertices (x_0, t_0), (x_0 − at_0, 0), and (x_0 + at_0, 0). This domain is called the domain of dependence (see Figure 5.(a)). Also we see that, given any point (x_0, 0), any solution value u(x, t), t > 0, in the cone formed by the characteristic lines x + at = x_0 and x − at = x_0 depends on the initial value at (x_0, 0). The domain formed by the cone is called the domain of influence (see Figure 5.(b)).

Figure 5.. A diagram of the domain of dependence (a) and the domain of influence (b).

An FD Method (CT CT) for Second-Order Wave Equations

Now we discuss how to solve boundary value problems for which the analytic solution is difficult to obtain:

u_tt = a^2 u_xx,   0 < x < 1,
IC:  u(x, 0) = u_0(x),   u_t(x, 0) = u_1(x),
BC:  u(0, t) = g_1(t),   u(1, t) = g_2(t).

133 c5 7/9/6 4:49 page 3 #6 5.6 Finite Difference Methods for Second-Order Linear Hyperbolic PDEs 3 We can use the central finite difference discretization both in time and space to get Uj k+ Uj k + Uj k ( t) = a U j k U j k + Uj+ k h, (5.6) which is second-order accurate both in time and space (( t) + h ). The CFL constraint for this method is t h a, as verified through the following discussion. The scheme above cannot be used to obtain the values of U j since Uj u(x j, t) is not explicitly defined. There are several ways to jump-start the process. We list two commonly used ones below. Apply the forward Euler method to the boundary condition u t (x, ) = u (x) to get U j = U j + t u (x j ). The finite difference solution in a finite time t = T will still be second-order accurate. Apply the ghost point method using Uj = U j t u (x j ) to get ( ) a U j = Uj ( ) + t u (x j ) + Uj h U j + Uj+. (5.7) The von Neumann analysis gives The Stability Analysis g + /g ( t) = a e ihξ + e ihξ h. When µ = a t/h, using cos(hξ) = sin (hξ/), this equation becomes ( ) g g + = 4µ sin θ g, or ( ) g 4µ sin θ g + =, where θ = hξ/, with solution g = µ sin θ ± ( µ sin θ). Note that µ sin θ. If we also have µ sin θ <, then one of the roots is g = µ sin θ ( µ sin θ) < so g > for some θ, such that the scheme is unstable.
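Before completing the stability analysis, it may help to see the central-in-time, central-in-space scheme and its ghost-point start-up spelled out in code. The Matlab sketch below is illustrative only: the function name, the step-size choice, and the Dirichlet boundary treatment are assumptions, not the book's code.

% A sketch of the CT-CT scheme for u_tt = a^2 u_xx on 0 < x < 1 with
% u(0,t) = g1(t), u(1,t) = g2(t), using the ghost-point start-up.
function [x, U1] = wave_ctcs(a, m, tfinal, u0, u1, g1, g2)
  h  = 1/m;  x = (0:m)*h;
  dt = 0.9*h/a;                              % within the CFL bound dt <= h/a
  n  = ceil(tfinal/dt);  dt = tfinal/n;
  mu2 = (a*dt/h)^2;
  U0 = u0(x);                                % solution at t = 0
  j  = 2:m;
  U1 = U0;                                   % solution at t = dt via ghost-point start-up
  U1(j) = U0(j) + dt*u1(x(j)) + 0.5*mu2*(U0(j-1) - 2*U0(j) + U0(j+1));
  U1(1) = g1(dt);  U1(m+1) = g2(dt);
  for k = 2:n                                % three-level (leap-frog in time) update
     U2 = U1;
     U2(j) = 2*U1(j) - U0(j) + mu2*(U1(j-1) - 2*U1(j) + U1(j+1));
     U2(1) = g1(k*dt);  U2(m+1) = g2(k*dt);
     U0 = U1;  U1 = U2;
  end
end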

134 c5 7/9/6 4:49 page 4 #7 4 Finite Difference Methods for Hyperbolic PDEs To have a stable scheme, we require µ sin θ, or µ sin θ, which can be guaranteed if µ or t h/ a. This is the CFL condition expected. Under this CFL constraint, ( ) ( ( ) ) g = µ sin θ + µ sin θ = since the second part in the expression of g is imaginary, so the scheme is neutrally stable. Recall a finite difference scheme for a second-order PDE (in time) P t,h v k j = is stable in a stability region Λ if there is an integer J such that for any positive time T there is a constant C T independent of t and h, such that J v n h + n C T v j h (5.8) for any n that satisfies n t T with ( t, h) Λ. The definition allows linear growth in time. Once again, a finite difference scheme converges if it is consistent and stable. j= 5.6. Transforming the Second-order Wave Equation to a First-Order System Although we can solve the second-order wave equation directly, in this section, let us discuss how to change this equation into a first-order system. The firstorder linear hyperbolic system of interest has the form u t = (Au) x = Au x, which is a special case of D conservation laws u t + (f(u)) x =. To transfer the second-order wave equation to a first-order system, let us consider { p = ut u tt = p t, q x = u xx, q = u x, then we have { pt = u tt = u xx = q x q t = u xt = (u t ) x = p x

135 c5 7/9/6 4:49 page 5 #8 5.6 Finite Difference Methods for Second-Order Linear Hyperbolic PDEs 5 or in matrix-vector form [ ] [ ] [ ] p p = q q and the eigenvalues of A are and. t x, (5.9) Initial and Boundary Conditions for the System From the given boundary conditions for u(x, t), we get u(, t) = g (t), u t (, t) = g (t) = p(, t), u(, t) = g (t), u t (, t) = g (t) = p(, t), and there is no boundary condition for q(x, t). The initial conditions are p(x, ) = u t (x, ) = u (x), q(x, ) = u x (x, ) = x u(x, ) = u (x), known, known. To solve the first-order system u t = Au x numerically, we usually diagonalize the system (corresponding to characteristic directions) and then determine the boundary conditions, and apply an appropriate numerical method (e.g., the upwind method). Thus we write A = T DT, where D = diag(λ, λ,..., λ n ) is the matrix containing the eigenvalues of A on the diagonal and T is a nonsingular matrix. From the following u t = Au x, Tu t = TAT Tu x, (Tu) t = D (Tu) x, and writing ũ = Tu, we get the new first-order system ũ t = Dũ x or (ũ i ) t = λ i (ũ i ) x, i =,,..., n, a simple system of equations that we can solve one by one. We also know at which end we should have a boundary condition, depending on the sign of λ i. For the second-order wave equation, let us recall that the eigenvalues are and. The unit eigenvector (such that x = ) corresponding to the eigenvalue, found by solving Ax = x, is x = [, ] T /. Similarly, the unit eigenvector corresponding to the eigenvalue is x = [, ] T /, so T =, T =.

136 c5 7/9/6 4:49 page 6 #9 6 Finite Difference Methods for Hyperbolic PDEs The transformed result is thus [ ] p = q t [ ] [ ] p q x = [ ] [ ] p q x, or in equivalent component form ( p ) q By setting we get ( p + ) q t t ( = p ) q ( = p + ) q. x y = p q, y = p + q, t y = x y, t y = x y, i.e., two separate one-way wave equations, for which we can use various numerical methods. We already know the initial conditions, but need to determine a boundary condition for y at x = and a boundary condition for y at x =. Note that y (, t) = p(, t) q(, t), y (, t) = p(, t) + q(, t), x

137 c5 7/9/6 4:49 page 7 # 5.8 Finite Difference Methods for Conservation Laws 7 and q(, t) is unknown. We do know that, however, y (, t) + y (, t) = p(, t), and can use the following steps to determine the boundary condition at x = :. update (y ) k+ first, for which we do not need a boundary condition; and. use (y ) k+ = p k+ (y ) k+ to get the boundary condition for y at x =. Similar steps likewise can be applied to the boundary condition at x = as well. 5.7 Some Commonly Used FD Methods for Linear System of Hyperbolic PDEs We now list some commonly used finite difference methods for solving a linear hyperbolic system of PDEs where A is a matrix with real eigenvalues. The Lax Friedrichs scheme U k+ j = u t + Au x =, u(x, ) = u (x), (5.3) ( U k j+ + U k j ) t ( ) h A Uj+ k U j k. (5.3) The leap-frog scheme Uj k+ = Uj k t ( ) h A Uj+ k U j k. (5.3) The Lax Wendroff scheme Uj k+ = Uj k t ( ) h A Uj+ k U j k + ( t) ( ) h A Uj k Uk j + Uj+ k. (5.33) To determine correct boundary conditions, we usually need to find the diagonal form A = T DT and the new system ũ t = Dũ x with ũ = Tu. 5.8 Finite Difference Methods for Conservation Laws The canonical form for the D conservation law is u t + f(u) x =, (5.34)

138 c5 7/9/6 4:49 page 8 # 8 Finite Difference Methods for Hyperbolic PDEs and one famous benchmark problem is Burgers equation ( ) u u t + =, (5.35) x in which f(u) = u /. The term f(u) is often called the flux. This equation can be written in the nonconservative form u t + uu x =, (5.36) and the solution likely develops shock(s) where the solution is discontinuous, even if the initial condition is arbitrarily differentiable, i.e., u (x) = sin x. We can use the upwind scheme to solve Burgers equation. From the nonconservative form, we obtain U k+ j U k+ j Uj k t + U k j Uj k + Uj k t or from the conservative form U k+ j U k+ j Uj k t U k j Uj k =, if Uj k, h U k j+ Uk j h + (U k j ) (U k j ) h =, if U k j <, =, if U k j, Uj k + (U j+ k ) (Uj k) =, if Uj k <. t h If the solution is smooth, both methods work well (first-order accurate). However, if shocks develop the conservative form gives much better results than that of the nonconservative form. We can derive the Lax Wendroff scheme using the modified equation of the nonconservative form. Since u t = uu x, u tt = u t u x uu tx = uu x + u(uu x ) x ) = uu x + u (u x + uu xx = uu x + u u xx, so the leading term of the modified equation for the first-order method is u t + uu x = t ) (uu x + u u xx, (5.37) There is no classical solution to the PDE when shocks develop because u x is not well defined. We need to look for weak solutions.

139 c5 7/9/6 4:49 page 9 # 5.8 Finite Difference Methods for Conservation Laws 9 and the nonconservative Lax Wendroff scheme for Burgers equation is Uj k+ = Uj k tuj k = + ( t) Uj+ k U j k h ( ) U k U j k j+ Uj k + (Uj k ) U j k U j k + Uj+ k. h h 5.8. Conservative FD Methods for Conservation Laws Consider the conservation law u t + f(u) x =, and let us seek a numerical scheme of the form where u k+ j = u k j t h is called the numerical flux, satisfying ( g k g k j+ j ( ) g j+ = g uj p+ k, u j p+ k,..., uk j+q+ ), (5.38) g(u, u,..., u) = f(u). (5.39) Such a scheme is called conservative. For example, we have g(u) = u / for Burgers equation. We can derive general criteria which g should satisfy, as follows.. Integrate the equation with respect to x from x j to x j+, to get xj+ u t dx = xj+ f(u) x dx x j ( = x j f(u(x j+ ), t)) f(u(x j, t)).. Integrate the equation above with respect to t from t k to t k+, to get xj+ x j t k+ t k xj+ x j u t dx dt = ( u(x, t k+ ) u(x, t k) dx = t k+ t k t k+ t k ( ) f(u(x j+, t)) f(u(x j, t)) dt, ( ) f(u(x j+, t)) f(u(x j, t)) dt.

140 c5 7/9/6 4:49 page 3 #3 3 Finite Difference Methods for Hyperbolic PDEs Define the average of u(x, t) as ū k j = h xj+ x j u(x, t k )dx, (5.4) which is the cell average of u(x, t) over the cell (x j, x j+ ) at the time level k. The expression that we derived earlier can therefore be rewritten as ( ) t k+ ū k+ j = ū k j h where = ū k j t h = ū k j t h ( f(u(x j+, t))dt t k t ( g j+ t k+ t k+ f(u(x j+, t))dt t k t g j+ g j+ = t ), t k+ f(u(x j, t))dt t k f(u(x j+, t))dt. t k t k+ f(u(x j, t))dt t k Different conservative schemes can be obtained, if different approximations are used to evaluate the integral. ) 5.8. Some Commonly Used Numerical Scheme for Conservation Laws Some commonly used schemes are: Lax Friedrichs scheme Uj k+ = ( ) Uj+ k + U j k t ) (f(u j+ k h ) f(u j k ) ; (5.4) Lax Wendroff scheme Uj k+ = Uj k t ) (f(u j+ k h ) f(u j k ) where A j+ + ( t) { h A j+ = Df(u(x j+ ( ) )} f(uj+ k ) f(u j k ) A j (f(u j k ) f(uj k ),, t)) is the Jacobian matrix of f(u) at u(x j+ (5.4), t).
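As an illustration of the flux form (5.38)-(5.39), the following Matlab sketch advances Burgers' equation using the numerical flux g(u_l, u_r) = ( f(u_l) + f(u_r) )/2 − h (u_r − u_l)/(2Δt), which reproduces the Lax-Friedrichs scheme listed in the next subsection. The periodic boundary treatment and the CFL-type step-size control are assumptions made only for this illustration.

% A sketch of a conservative scheme  U_j <- U_j - dt/h*( g_{j+1/2} - g_{j-1/2} )
% for Burgers' equation, f(u) = u^2/2, with the Lax-Friedrichs numerical flux.
f  = @(u) 0.5*u.^2;
m  = 200;  h = 2*pi/m;  x = (0:m-1)*h;       % periodic grid on [0, 2*pi)
U  = sin(x);                                 % smooth data that develops a shock
t  = 0;  tfinal = 1.5;
while t < tfinal
   dt = 0.5*h/max(abs(U) + eps);             % CFL-type step-size control
   if t + dt > tfinal, dt = tfinal - t; end
   Ur = U([2:end, 1]);                       % right state U_{j+1} (periodic)
   g  = 0.5*(f(U) + f(Ur)) - 0.5*h/dt*(Ur - U);    % numerical flux g_{j+1/2}
   U  = U - dt/h*(g - g([end, 1:end-1]));    % g_{j+1/2} - g_{j-1/2}
   t  = t + dt;
end
plot(x, U)                                   % the shock is smeared but in the right place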

141 c5 7/9/6 4:49 page 3 #4 A modified version U k+ = ( U j+ j k + Uj+ k Uj k+ = Uj k t ( h Exercises 3 ) t ( h f(u k+ j+ ) f(uj+ k ) f(u j k ) ), ) f(u k+ ) j (5.43) called the Lax Wendroff Richtmyer scheme, does not need the Jacobian matrix. Exercises. Show that the following scheme is consistent with the PDE u t + cu tx + au x = f: U n+ i k U n i + c U n+ i+ U n+ i U i+ n + Ui n + a U i+ n U n kh h i = f n i. Discuss also the stability, as far as you can.. Implement and test the upwind and the Lax Wendroff schemes for the one-way wave equation u t + u x =. Assume the domain is x, and t final =. Test your code for the following parameters: (a) u(t, ) =, and u(, x) = (x + )e x/. if x < /, (b) u(t, ) =, and u(, x) = if x /, if x > /. Do the grid refinement analysis at t final = for case (a) where the exact solution is available, take m =,, 4, and 8. For problem (b), use m = 4. Plot the solution at t final = for both cases. 3. Use the upwind and Lax Wendroff schemes for Burgers equation ( ) u u t + = x with the same initial and boundary conditions as in problem. 4. Solve the following wave equation numerically using a finite difference method. u t = u c + f(x, t), x < x <, { x/4 if x < /, u(, t) = u(, t) =, u(x, ) = ( x)/4 if / x. (a) Test your code and convergence order using a problem that has the exact solution. (b) Test your code again by setting f(x, t) =. (c) Modify and validate your code to the PDE with a damping term u t β u t = u c + f(x, t). x

142 c5 7/9/6 4:49 page 3 #5 3 Finite Difference Methods for Hyperbolic PDEs (d) Modify and validate your code to the PDE with an advection term u t = u c x + β u + f(x, t), x where c and β are positive constants. 5. Download the Clawpack for conservation laws from the Internet. Run a test problem in D. Write to 3 pages to detail the package and the numerical results.

Part II

Finite Element Methods


145 c6 7/9/6 8: page 35 #3 6 Finite Element Methods for D Boundary Value Problems The finite element (FE) method was developed to solve complicated problems in engineering, notably in elasticity and structural mechanics modeling involving elliptic PDEs and complicated geometries. But nowadays the range of applications is quite extensive. We will use the following D and D model problems to introduce the finite element method: D: u (x) = f(x), < x <, u() =, u() = ; D: ( u xx + u yy ) = f(x, y), (x, y) Ω, u(x, y) Ω =, where Ω is a bounded domain in (x, y) plane with the boundary Ω. 6. The Galerkin FE Method for the D Model We illustrate the finite element method for the D two-point BVP u (x) = f(x), < x <, u() =, u() =, using the Galerkin finite element method described in the following steps.. Construct a variational or weak formulation, by multiplying both sides of the differential equation by a test function v(x) satisfying the boundary conditions (BC) v() =, v() = to get u v = fv, 35

146 c6 7/9/6 8: page 36 #4 36 Finite Element Methods for D Boundary Value Problems and then integrating from to (using integration by parts) to have the following, ( u v) dx = u v + u v dx = = u v dx = u v dx fv dx, the weak form.. Generate a mesh, e.g., a uniform Cartesian mesh x i = i h, i =,,..., n, where h = /n, defining the intervals (x i, x i ), i =,,..., n. 3. Construct a set of basis functions based on the mesh, such as the piecewise linear functions (i =,,..., n ) x x i if x i x < x i, h ϕ i (x) = x i+ x if x i x < x i+, h otherwise, x i x i x i+ often called the hat functions, see the right diagram for a hat function. 4. Represent the approximate (FE) solution by a linear combination of the basis functions n u h (x) = c j ϕ j (x), j= where the coefficients c j are the unknowns to be determined. On assuming the hat basis functions, obviously u h (x) is also a piecewise linear function, although this is not usually the case for the true solution u(x). Other basis functions are considered later. We then derive a linear system of equations for the coefficients by substituting the approximate solution u h (x) for the exact solution u(x) in the weak form u v dx = fv dx, i.e., = n u h v dx = n c j ϕ jv dx = c j ϕ jv dx j= = j= fvdx, (noting that errors are introduced!) fv dx.

147 c6 7/9/6 8: page 37 #5 6. The Galerkin FE Method for the D Model 37 Next, choose the test function v(x) as ϕ, ϕ,..., ϕ n successively, to get the system of linear equations (noting that further errors are introduced): ( ) ( ) ϕ ϕ dx c + + ϕ ϕ n dx c n = ( ) ( ) ϕ ϕ dx c + + ϕ ϕ n dx c n = ( ) ( ) ϕ iϕ dx c + + ϕ iϕ n dx c n = ( ) ( ) ϕ n ϕ dx c + + ϕ n ϕ n dx c n = fϕ dx fϕ dx fϕ i dx fϕ n dx, or in the matrix-vector form: a(ϕ, ϕ ) a(ϕ, ϕ ) a(ϕ, ϕ n ) a(ϕ, ϕ ) a(ϕ, ϕ ) a(ϕ, ϕ n ).... a(ϕ n, ϕ ) a(ϕ n, ϕ ) a(ϕ n, ϕ n ) c c. c n ( f, ϕ ) ( f, ϕ ) =,. ( f, ϕ n ) where a(ϕ i, ϕ j ) = ϕ iϕ jdx, ( f, ϕ i ) = fϕ i dx. The term a(u, v) is called a bilinear form since it is linear with each variable ( function), and ( f, v) is called a linear form. If ϕ i are the hat functions, then

148 c6 7/9/6 8: page 38 #6 38 Finite Element Methods for D Boundary Value Problems in particular we get h h h h h h h h h h h h h c c c 3. c n c n = fϕ dx fϕ dx fϕ 3dx. fϕ n dx fϕ n dx 5. Solve the linear system of equations for the coefficients and hence obtain the approximate solution u h (x) = i c iϕ i (x). 6. Carry out the error analysis (a priori or a posteriori error analysis). Questions are often raised about how to appropriately represent ODE or PDE problems in a weak form; choose the basis functions ϕ, e.g., in view of ODE/PDE, mesh, and the boundary conditions, etc.; implement the finite element method; solve the linear system of equations; and carry out the error analysis, which will be addressed in subsequent chapters.. 6. Different Mathematical Formulations for the D Model Let us consider the D model again, u (x) = f(x), < x <, u() =, u() =. There are at least three different formulations to consider for this problem:. the (D)-form, the original differential equation;. the (V)-form, the variational form or weak form u v dx = (6.) fv dx (6.) for any test function v H (, ), the Sobolev space for functions in integral forms like the C space for functions (see later), and as indicated above, the

149 c6 7/9/6 8: page 39 #7 6. Different Mathematical Formulations for the D Model 39 f (x) x = x x + x Figure 6.. and force. u(x) x u(x) u(x + x) A diagram of elastic string with two ends fixed: the displacement corresponding finite element method is often called the Galerkin method; and 3. the (M)-form, the minimization form min v(x) H (,) { ( ) } (v ) fv dx, (6.3) when the corresponding finite element method is often called the Ritz method. As discussed in subsequent subsections, under certain assumptions these three different formulations are equivalent. 6.. A Physical Example From the viewpoint of mathematical modeling, both the variational (or weak) form and the minimization form are more natural than the differential formulation. For example, suppose we seek the equilibrium position of an elastic string of unit length, with two ends fixed and subject to an external force. The equilibrium is the state that minimizes the total energy. Let u(x) be the displacement of the string at a point x, and consider the deformation of an element of the string in the interval (x, x + x) (see Figure 6. for an illustration). The potential energy of the deformed element is τ increase in the element length ( ) = τ (u(x + x) u(x)) + ( x) x ( = τ u(x) + ux(x) x + ) u xx(x)( x) + u(x) + ( x) x ( ) τ [ + u x(x)] ( x) x τu x(x) x,

150 c6 7/9/6 8: page 4 #8 4 Finite Element Methods for D Boundary Value Problems where τ is the coefficient of the elastic tension that we assume to be constant. If the external force is denoted by f(x), the work done by this force is f(x)u(x) at every point x. Consequently, the total energy of the string (over < x < ) is F(u) = τu x(x) dx f(x)u(x) dx, from the work energy principle: the change in the kinetic energy of an object is equal to the net work done on the object. Thus to minimize the total energy, we seek the extremal u such that F(u ) F(u) for all admissible u(x), i.e., the minimizer u of the functional F(u) (a function of functions). Using the principle of virtual work, we also have u v dx = fvdx for any admissible function v(x). On the other hand, the force balance yields the relevant differential equation. The external force f(x) is balanced by the tension of the elastic string given by Hooke s law, see Figure 6. for an illustration, such that or τ (u x (x + x) u x (x)) f(x) x τ u x(x + x) u x (x) x f(x), thus, for x we get the PDE τu xx = f(x), along with the boundary condition u() = and u() = since the string is fixed at the two ends. The three formulations are equivalent representations of the same problem. We show the mathematical equivalence in the next subsection.

151 c6 7/9/6 8: page 4 #9 6. Different Mathematical Formulations for the D Model Mathematical Equivalence At the beginning of this chapter, we proved that (D) is equivalent to (V) using integration by parts. Let us now prove that under certain conditions (V) is equivalent to (D), and that (V) is equivalent to (M), and that (M) is equivalent (V). Theorem 6.. (V) (D). If u xx exists and is continuous, then u v dx = implies that u xx = f(x). fvdx, v() = v() =, v H (, ), Recall that H (, ) denotes a Sobolev space, which here we can regard as the space of all functions that have a first-order derivative. Proof From integration by parts, we have or = u v dx = u v u v dx, u vdx = ( u + f ) v dx =. fv dx, Since v(x) is arbitrary and continuous, and u and f are continuous, we must have u + f =, i.e., u = f. Theorem 6.. (V) (M). Suppose u (x) satisfies u v dx = vfdx for any v(x) H (, ), and v() = v() =. Then F(u ) F(u) (u ) x dx fu dx or u xdx fudx.

152 c6 7/9/6 8: page 4 # 4 Finite Element Methods for D Boundary Value Problems Proof F(u) = F(u + u u ) = F(u + w) (where w = u u, w() = w() = ), ( ) = (u + w) x (u + w)f dx ( ) (u ) x = + w x + (u ) x w x u f wf dx = = = F(u ) + F(u ). ( ) (u ) x u f ( ) (u ) x u f The proof is completed. w xdx dx + w xdx + dx + w xdx + ((u ) x w x fw) dx Theorem 6.3. (M) (V). If u (x) is the minimizer of F(u ), then (u ) x v x dx = for any v() = v() = and v H (, ). 3 4 Proof Consider the auxiliary function: g(ϵ) = F(u + ϵv). fvdx Since F(u ) F(u + ϵv) for any ϵ, g() is a global or local minimum such that g () =. To obtain the derivative of g(ϵ), we have { } g(ϵ) = (u + ϵv) x (u + ϵv)f dx { ( = (u ) x + (u ) x v x ϵ + v xϵ ) } u f ϵvf dx ( ) = (u ) x u f dx + ϵ ((u ) x v x fv) dx + ϵ v xdx. Thus we have g (ϵ) = ((u ) x v x fv) dx + ϵ v xdx

153 c6 7/9/6 8: page 43 # and 6.3 Key Components of the FE Method for the D Model 43 g () = ((u ) x v x fv) dx = since v(x) is arbitrary, i.e., the weak form is satisfied. However, the three different formulations may not be equivalent for some problems, depending on the regularity of the solutions. Thus, although (D) = (M) = (V), in order for (V) to imply (M), the differential equation is usually required to be self-adjoint, and for (M) or (V) to imply (D); the solution of the differential equation must have continuous second-order derivatives. 6.3 Key Components of the FE Method for the D Model In this section, we discuss the model problem (6.) using the following methods: Galerkin method for the variational or weak formulation; Ritz method for the minimization formulation. We also discuss another important aspect of finite element methods, namely, how to assemble the stiffness matrix using the element by element approach. The first step is to choose an integral form, usually the weak form, say u v dx = fv dx for any v(x) in the Sobolev space H (, ) with v() = v() = Mesh and Basis Functions For a D problem, a mesh is a set of points in the interval of interest, say, x =, x, x,..., x M = (see Figure 6. for an illustration). Let h i = x i+ x i, i =,,..., M, then x i is called a node, or nodal point. (x i, x i+ ) is called an element. h = max { h i} is the mesh size, a measure of how fine the partition is. i M Define a Finite Dimensional Space on the Mesh Let the solution be in the space V, which is H (, ) in the model problem. Based on the mesh, we wish to construct a subspace V h (a finite dimensional space) V (the solution space), such that the discrete problem is contained in the continuous problem.

154 c6 7/9/6 8: page 44 # 44 Finite Element Methods for D Boundary Value Problems f f f i f M x = x x x i x M = Figure 6.. Diagram of a mesh and hat basis functions. Any such finite element method is called a conforming one. Different finite dimensional spaces generate different finite element solutions. Since V h has finite dimension, we can find a set of basis functions that are linearly independent, i.e., if ϕ, ϕ,..., ϕ M V h M j= α j ϕ j =, then α = α = = α M =. Thus V h is the space spanned by the basis functions: M V h = v h(x), v h (x) = α j ϕ j. The simplest finite dimensional space is the piecewise continuous linear function space defined over the mesh: { } V h = v h (x), v h (x) is continuous piecewise linear, v h () = v h () =. It is easy to show that V h has a finite dimension, even though there are an infinite number of elements in V h. j= Find the Dimension of V h A linear function l(x) in an interval (x i, x i+ ) is uniquely determined by its values at x i and x i+ : l(x) = l(x i ) x x i+ x i x i+ + l(x i+ ) x x i x i+ x i. There are M nodal values l(x i ) s, l(x ), l(x ),..., l(x M ) for a piecewise continuous linear function over the mesh, in addition to l(x ) = l(x M ) =. Given a vector [ l(x ), l(x ),..., l(x M )] T R M, we can construct a v h (x) V h by taking v h (x i ) = l(x i ), i =,..., M. On the other hand, given v h (x)

155 c6 7/9/6 8: page 45 #3 6.3 Key Components of the FE Method for the D Model 45 V h, we get a vector [v(x ), v(x ),..., v(x M )] T R M. Thus there is a one to one relation between V h and R M, so V h has the finite dimension M. Consequently, V h is considered to be equivalent to R M Find a Set of Basis Functions The finite dimensional space can be spanned by a set of basis functions. There are infinitely many sets of basis functions, but we should choose one that: is simple; has compact (minimum) support, i.e., zero almost everywhere except for a small region; and meets the regularity requirement, i.e., continuous and differentiable, except at nodal points. The simplest is the set of hat functions ϕ (x ) =, ϕ (x j ) =, j =,, 3,..., M, ϕ (x ) =, ϕ (x j ) =, j =,, 3,..., M, ϕ i (x i ) =, ϕ i (x j ) =, j =,,... i, i +,..., M, ϕ M (x M ) =, ϕ M (x j ) =, j =,,..., M, M. They can be represented simply as ϕ i (x j ) = δ j i, i.e., {, if i = j, ϕ i (x j ) =, otherwise. The analytic form of the hat functions for i =,,..., n is, if x < x i, x x i, if x i x < x i, h i ϕ i (x) = x i+ x, if x i x < x i+, h i+, if x i+ x ; and the finite element solution sought is (6.4) (6.5) M u h (x) = α j ϕ j (x), (6.6) j=

156 c6 7/9/6 8: page 46 #4 46 Finite Element Methods for D Boundary Value Problems and either the minimization form (M) or the variational or weak form (V) can be used to derive a linear system of equations for the coefficients α j. On using the hat functions, we have M u h (x i ) = α j ϕ j (x i ) = α i ϕ i (x i ) = α i, (6.7) j= so α i is an approximate solution to the exact solution at x = x i The Ritz Method Although not every problem has a minimization form, the Ritz method was one of the earliest and has proven to be one of the most successful. For the model problem (6.), the minimization form is min v H (,) F(v) : F(v) = (v x ) dx fv dx. (6.8) M As before, we look for an approximate solution of the form u h (x) = α j ϕ j (x). Substituting this into the functional form gives F(u h ) = M α j ϕ j(x) j= M f j= α j ϕ j (x)dx, (6.9) which is a multivariate function of α, α,..., α M and can be written as j= F(v h ) = F (α, α,..., α M ). The necessary condition for a global minimum (also a local minimum) is F α =, F α =,, F α i =,, Thus taking the partial derivatives with respect to α j we have F M = α j ϕ j ϕ α dx fϕ dx = F α i = j= M α j ϕ j ϕ idx j= F α M =. fϕ i dx =, i =,,..., M,

157 c6 7/9/6 8: page 47 #5 6.3 Key Components of the FE Method for the D Model 47 and on exchanging the order of integration and summation: ( M ) ϕ jϕ idx α j = fϕ i dx, i =,,..., M. j= This is the same system of equations that follow from the Galerkin method with the weak form, i.e., u v dx = M α j ϕ j ϕ idx = j= fv dx immediately gives fϕ i dx, i =,,..., M Comparison of the Ritz and the Galerkin FE Methods For many problems, the Ritz and Galerkin methods are theoretically equivalent. The Ritz method is based on the minimization form, and optimization techniques can be used to solve the problem. The Galerkin method usually has weaker requirements than the Ritz method. Not every problem has a minimization form, whereas almost all problems have some kind of weak form. How to choose a suitable weak form and the convergence of different methods are all important issues for finite element methods Assembling the Stiffness Matrix Element by Element Given a problem, say the model problem, after we have derived the minimization or weak form and constructed a mesh and a set of basis functions we need to form: the coefficient matrix A = {a ij } = { ϕ i ϕ jdx}, often called the stiffness matrix for the first-order derivatives, and the right-hand side vector F = { f i } = { f iϕ i dx}, often called the load vector. The procedure to form A and F is a crucial part in the finite element method. For the model problem, one way is by assembling element by element: (x, x ), (x, x ), (x i, x i ) (x M, x M ), Ω, Ω, Ω i, Ω M.

158 c6 7/9/6 8: page 48 #6 48 Finite Element Methods for D Boundary Value Problems The idea is to break up the integration element by element, so that for any integrable function g(x) we have g(x) dx = M k= xk x k g(x) dx = The stiffness matrix can then be written A = = M k= (ϕ ) dx ϕ ϕ dx ϕ ϕ dx (ϕ ) dx Ω k g(x) dx. ϕ ϕ M dx ϕ ϕ M dx.... ϕ M ϕ dx ϕ M ϕ dx (ϕ M ) dx x x x ϕ ϕ dx x x (ϕ ) dx x ϕ ϕ M dx x x ϕ ϕ dx x x (ϕ ) x dx x ϕ ϕ M dx.... x x ϕ M ϕ dx x x ϕ M ϕ dx x x (ϕ M ) dx + x x x ϕ ϕ dx x x (ϕ ) dx x ϕ ϕ M dx x x ϕ ϕ dx x x (ϕ ) x dx x ϕ ϕ M dx.... x x ϕ M ϕ dx x x ϕ M ϕ dx x x (ϕ M ) dx + + xm x M (ϕ ) dx xm x M ϕ ϕ dx. xm x M ϕ M ϕ dx xm x M ϕ ϕ dx xm x M ϕ ϕ M dx xm x M (ϕ ) xm dx x M ϕ ϕ M dx.... x M x M ϕ M ϕ dx xm x M (ϕ M ) dx

159 c6 7/9/6 8: page 49 #7 6.3 Key Components of the FE Method for the D Model 49 For the hat basis functions, it is noted that each interval has only two nonzero basis functions (cf. Figure 6.3). This leads to x x (ϕ ) dx x (ϕ ) dx x x ϕ ϕ dx x x A = + ϕ ϕ dx x x (ϕ ) dx x x 3 x (ϕ ) dx x 3 x ϕ ϕ 3 dx + x 3 x ϕ 3 ϕ dx x 3 x (ϕ 3 ) dx xm x M (ϕ M ) dx The nonzero contribution from a particular element is xi+ [ xi+ K e x i (ϕ i i = ) dx x i ϕ i ϕ i+ dx ] xi+ x i ϕ i+ ϕ i dx, x i+ x i (ϕ i+ ) dx the two by two local stiffness matrix. Similarly, the local load vector is [ xi+ ] F e x i fϕ i dx i = xi+, fϕ i+ dx and the global load vector can also be assembled element by element: F = x x fϕ dx +. x x x x x i fϕ dx fϕ dx +. x3 x x3 x fϕ dx fϕ 3 dx xm x M fϕ M dx.

Figure 6.3. Continuous piecewise linear basis functions φ_i for a four-element mesh, generated by linear shape functions ψ^e_1, ψ^e_2 defined over each element. On each interior element, there are only two nonzero basis functions. The figure is adapted from Carey and Oden (1983).

161 c6 7/9/6 8: page 5 #9 6.3 Key Components of the FE Method for the D Model Computing Local Stiffness Matrix K e i and Local Load Vector F e i In the element (x i, x i+ ), there are only two nonzero hat functions centered at x i and x i+ respectively: ψ e i (x) = x i+ x x i+ x i, ψ e i+ (x) = x x i x i+ x i, (ψ e i ) = h i, (ψ e i+ ) = h i, where ψi e and ψi+ e are defined only on one particular element. We can concentrate on the corresponding contribution to the stiffness matrix and load vector from the two nonzero hat functions. It is easy to verify that xi+ xi+ (ψ i) dx = x i x i h dx = xi+, ψ i h iψ i+ dx = i x i xi+ xi+ (ψ i+ ) dx = x i x i h dx =. i h i xi+ x i h i dx = h i, The local stiffness matrix K e i is therefore K e i = h i h i, h i h i and the stiffness matrix A is assembled as follows: h h + h h A = (M ) (M ), A = h, A = h, h + h h h h + h h A = h h.....

162 c6 7/9/6 8: page 5 # 5 Finite Element Methods for D Boundary Value Problems Thus we finally assemble the tridiagonal matrix h + h h h h + h h h h + h 3 h 3 A = h M 3 h M 3 + h M h M h M h M + h M Remark 6.4. For a uniform mesh x i = ih, h = /M, i =,,..., M and the integral approximated by the mid-point rule xi+ xi+ f(x)ϕ i (x)dx = f(x)ϕ i (x) f(x i )ϕ i (x)dx x i x i xi+ = f(x i ) x i ϕ i (x)dx = f(x i ), the resulting system of equations for the model problem from the finite element method is identical to that obtained from the FD method Matlab Programming of the FE Method for the D Model Problem Matlab code to solve the D model problem u (x) = f(x), a < x < b; u(a) = u(b) = (6.) using the hat basis functions is available either through the link www4.ncsu.edu/ zhilin/fd FEM book or by request to the authors. The Matlab code includes the following Matlab functions: U = femd(x) is the main subroutine of the finite method using the hat basis functions. The input x is the vector containing the nodal points. The output U, U() = U(M) = is the finite element solution at the nodal points, where M + is the total nodal points. y = hat(x, x, x) is the local hat function in the interval [x, x] which takes one at x = x and zero at x = x.

y = hat2(x1, x2, x) is the local hat function in the interval [x1, x2] which takes one at x = x2 and zero at x = x1.

y = int_hat1_f(x1, x2) computes the integral ∫_{x1}^{x2} f(x) hat1(x1, x2, x) dx using the Simpson rule.

y = int_hat2_f(x1, x2) computes the integral ∫_{x1}^{x2} f(x) hat2(x1, x2, x) dx using the Simpson quadrature rule.

The main function is drive.m, which solves the problem and plots the solution and the error.

y = f(x) is the right-hand side of the differential equation.

y = soln(x) is the exact solution of the differential equation.

y = fem_soln(x, U, xp) evaluates the finite element solution at an arbitrary point xp in the solution domain.

We explain some of these Matlab functions in the following subsections.

6.4.1 Define the Basis Functions

In an element [x1, x2] there are two nonzero basis functions: one is

ψ^e_1(x) = (x2 − x)/(x2 − x1),   (6.11)

whose Matlab code is the file hat1.m, and the other is

ψ^e_2(x) = (x − x1)/(x2 − x1),   (6.12)

whose Matlab code is the file hat2.m.

6.4.2 Define f(x)
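A minimal sketch of f.m, consistent with its description above, is given below. The particular choice f(x) = 1 is an assumption tied to the test example of a later subsection; it is not the only possibility.

% f.m -- the right-hand side of the differential equation; here f(x) = 1,
% which matches the test example -u'' = 1 on (0,1).
function y = f(x)
  y = ones(size(x));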

The Main FE Routine
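A sketch of the main routine, consistent with its description at the start of this section, is given below; it assembles the stiffness matrix and the load vector element by element, as in Section 6.3, and imposes the homogeneous Dirichlet boundary conditions by solving only for the interior nodal values. The routine name fem1d and the quadrature helpers int_hat1_f and int_hat2_f (written here as local functions for brevity) follow that description and may differ from the authors' distributed code; f.m is assumed to be on the path.

% fem1d.m -- a sketch of the main FE routine using the hat basis functions.
% Input  x : nodal points x(1) < x(2) < ... < x(M+1), with x(1) = a and x(M+1) = b.
% Output U : the FE solution at the nodes, with U(1) = U(M+1) = 0.
function U = fem1d(x)
  M = length(x) - 1;                       % number of elements
  A = sparse(M+1, M+1);
  F = zeros(M+1, 1);
  for e = 1:M                              % assemble element by element
     x1 = x(e);  x2 = x(e+1);  he = x2 - x1;
     Ke = [ 1/he, -1/he; -1/he, 1/he ];    % local stiffness matrix
     Fe = [ int_hat1_f(x1, x2); int_hat2_f(x1, x2) ];   % local load vector
     idx = [e, e+1];
     A(idx, idx) = A(idx, idx) + Ke;
     F(idx) = F(idx) + Fe;
  end
  U = zeros(M+1, 1);                       % impose u(a) = u(b) = 0
  U(2:M) = A(2:M, 2:M) \ F(2:M);           % solve the reduced linear system
end

% Simpson's rule for the local load integrals over [x1, x2]; the hat-function
% values 1, 1/2, 0 (and 0, 1/2, 1) at x1, (x1+x2)/2, x2 are used directly.
function y = int_hat1_f(x1, x2)
  xm = 0.5*(x1 + x2);
  y  = (x2 - x1)/6*( f(x1) + 2*f(xm) );
end

function y = int_hat2_f(x1, x2)
  xm = 0.5*(x1 + x2);
  y  = (x2 - x1)/6*( 2*f(xm) + f(x2) );
end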

A Test Example

Let us consider the test example

f(x) = 1,   a = 0,   b = 1.

The exact solution is

u(x) = x(1 − x)/2.   (6.13)

A sample Matlab drive code is listed below.

Figure 6.4 shows the plots produced by running the code. Figure 6.4(a) shows both the true solution (the solid line) and the finite element solution (the dashed line). The little o's are the finite element solution values at the nodal points. Figure 6.4(b) shows the error between the true and the finite element solutions at a few selected points (zero at the nodal points in this example, although this may not be so in general).
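A drive script along the following lines produces plots of the kind shown in Figure 6.4. It is an illustrative sketch only: it assumes fem1d and f as in the previous sketches, and uses interp1 to evaluate the piecewise linear FE solution between nodes, which plays the role of the fem_soln function described earlier.

% drive.m -- a sketch of a driver for the test example f = 1 on (0, 1).
x  = 0:0.1:1;                        % a uniform mesh with ten elements
U  = fem1d(x);                       % nodal values of the FE solution
xp = linspace(0, 1, 101);            % evaluation points
ue = xp.*(1 - xp)/2;                 % exact solution u(x) = x(1-x)/2
uh = interp1(x, U, xp);              % the piecewise linear FE solution at xp
subplot(1,2,1)
plot(xp, ue, '-', xp, uh, '--', x, U, 'o')
title('Exact (solid) and FE (dashed) solutions')
subplot(1,2,2)
plot(xp, ue(:) - uh(:))
title('Error at selected points')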

166 c6 7/9/6 8: page 56 #4 56 Finite Element Methods for D Boundary Value Problems.4 (a) Solid line: Exact solution, Dotted line: FEM solution (b) 3 Error plot u(x) and u fem (x) u u fem x x Figure 6.4. (a) Plot of the true solution (solid line) and the finite element solution (the dashed line) and (b) the error plot at some selected points.. Consider the following BVP: Exercises u (x) + u(x) = f(x), < x <, u() = u() =. (a) Show that the weak form (variational form) is where (u, v ) + (u, v) = ( f, v), v(x) H (, ), (u, v) = H (, ) = u(x)v(x)dx, { v(x), v() = v() =, } v dx <. (b) Derive the linear system of the equations for the finite element approximation 3 u h = α j ϕ j (x), with the following information: j= f(x) = ; the nodal points and the elements are indexed as x =, x = 4, x 3 =, x = 3 4, x 4 =. Ω = [x 3, x ], Ω = [x, x 4 ], Ω 3 = [x, x 3 ], Ω 4 = [x, x ] ; the basis functions are the hat functions {, if i = j, ϕ i (x j ) =, otherwise, and do not reorder the nodal points and elements; and assemble the stiffness matrix and the load vector element by element.

167 c6 7/9/6 8: page 57 #5 Exercises 57. (This problem involves modifying drive.m, f.m and soln.m.) Use the Matlab code to solve u (x) = f(x), < x <, u() = u() =. Try two different meshes: (a) the one given in drive.m; (b) the uniform mesh x i = i h, h = /M, i =,,..., M. Take M =, done in Matlab using the command: x = :. :. Use the two meshes to solve the problem for the following f(x) or exact u(x): (a) given u(x) = sin(πx), what is f(x)? (b) given f(x) = x 3, what is u(x)? (c) (extra credit) given f(x) = δ(x /), where δ(x) is the Dirac delta function, what is u(x)? Hint: The Dirac delta function is defined as a distribution satisfying b f(x)δ(x)dx = a f() for any function f(x) C(a, b) if x = is in the interior of the integration. Ensure that the errors are reasonably small. 3. (This problem involves modifying femd.m, drive.m, f.m and soln.m.) Assume that xi+ x i ϕ i (x)ϕ i+ (x)dx = h 6, where h = x i+ x i, and ϕ i and ϕ i+ are the hat functions centered at x i and x i+ respectively. Use the Matlab code to solve u (x) + u(x) = f(x), < x <, u() = u() =. Try to use the uniform grid x = :. : in Matlab, for the following exact u(x): (a) u(x) = sin(πx), what is f(x)? (b) u(x) = x( x)/, what is f(x)?

168 c7 7/9/6 5:5 page 58 # 7 Theoretical Foundations of the Finite Element Method Using finite element methods, we need to answer these questions: What is the appropriate functional space V for the solution? What is the appropriate weak or variational form of a differential equation? What kind of basis functions or finite element spaces should we choose? How accurate is the finite element solution? We briefly address these questions in this chapter. Recalling that finite element methods are based on integral forms and not on the pointwise sense as in finite difference methods, we will generalize the theory corresponding to the pointwise form to deal with integral forms. 7. Functional Spaces A functional space is a set of functions with operations. For example, { } C(Ω) = C (Ω) = u(x), u(x) is continuous on Ω (7.) is a linear space that contains all continuous functions on Ω, the domain where the functions are defined, i.e., Ω = [, ]. The space is linear because for any real numbers α and β and u C(Ω) and u C(Ω), we have αu + βu C(Ω). The functional space with first-order continuous derivatives in D is C (Ω) = { u(x), u(x), u (x) are continuous on Ω }, (7.) and similarly C m (Ω) = { u(x), } u(x), u (x),..., u (m) are continuous on Ω. (7.3) Obviously, C C C m. (7.4) 58

169 c7 7/9/6 5:5 page 59 # Then as m, we define 7. Functional Spaces 59 C (Ω) = {u(x), u(x) is indefinitely differentiable on Ω}. (7.5) For example, e x, sin x, cos x, and polynomials, are in C (, ), but some other elementary functions such as log x, tan x, cot x are not if x = is in the domain. 7.. Multidimensional Spaces and Multi-index Notations Let us now consider multidimensional functions u(x) = u(x, x,..., x n ), x R n, and a corresponding multi-index notation that simplifies expressions for partial derivatives. We can write α = (α, α,..., α n ), α i for an integer vector in R n, e.g., if n = 5, then α = (,,,, ) is one of possible vectors. We can readily represent a partial derivative as D α α u u(x) = x α xα, α = α xα n + α + + α n, α i (7.6) n which is the so-called multi-index notation. Example 7.. For n = and u(x) = u(x, x ), all possible D α u when α = are α = (, ), D α u = u x, α = (, ), D α u = u, x x α = (, ), D α u = u x. With the multi-index notation, the C m space in a domain Ω R n can be defined as C m (Ω) = {u(x, x,..., x n ), D α u are continuous on Ω, α m} (7.7) i.e., all possible derivatives up to order m are continuous on Ω. Example 7.. For n = and m = 3, we have u, u x, u y, u xx, u xy, u yy, u xxx, u xxy, u xyy and u yyy all continuous on Ω if u C 3 (Ω), or simply D α u C 3 (Ω) for α 3. Note that C m (Ω) has infinite dimensions. The distance in C (Ω) is defined as d(u, v) = max u(x) v(x) x Ω

170 c7 7/9/6 5:5 page 6 #3 6 Theoretical Foundations of the Finite Element Method with the properties (), d(u, v) ; (), d(u, v) = if and only if u v ; and (3), d(u, v + w) d(u, v) + d(u, w), the triangle inequality. A linear space with a distance defined is called a metric space. A norm in C (Ω) is a nonnegative function of u that satisfies u(x) = d(u, θ) = max u(x), where θ is the zero element, x Ω with the properties (), u(x), and u(x) = if and only if u ; (), αu(x) = α u(x), where α is a number ; (3), u(x) + v(x) u(x) + v(x), the triangle inequality. A linear space with a norm defined is called a normed space. In C m (Ω), the distance and the norm are defined as d(u, v) = u(x) = max max α m x Ω Dα u(x) D α v(x), (7.8) max max α m x Ω Dα u. (7.9) 7. Spaces for Integral Forms, L (Ω) and L p (Ω) In analogy to pointwise spaces C m (Ω), we can define Sobolev spaces H m (Ω) in integral forms. The square-integrable space H (Ω) = L (Ω) is defined as { } L (Ω) = u(x), u (x) dx < (7.) corresponding to the pointwise C (Ω) space. It is easy to see that C(, ) = C 3 (, ) L 4 (, ). Example 7.3. It is easy to verify that y(x) = /x /4 / C (, ), but ( ) x /4 dx = dx = < x so that y(x) L (, ). But it is obvious that y(x) is not in C(, ) since y(x) blows up as x from x >, see Figure 7. in which we also show that a piecewise constant function and in (, ) is in L (, ) but not in C(, ) since the function is discontinuous (nonremovable discontinuity) at x =.5. Ω The distance in L (Ω) is defined as { / d( f, g) = f g dx}, (7.) Ω

171 c7 7/9/6 5:5 page 6 #4 7. Spaces for Integral Forms, L (Ω) and L p (Ω) y=/x /4.5 y=.5 y= Figure 7.. Plot of two functions that are in L (, ) but not in C[, ]. One function is y(x) = 4 x. The other one is y(x) = in [, /) and y(x) = in [/, ]. which satisfies the three conditions of the distance definition; so L (Ω) is a metric space. We say that two functions f and g are identical (f g) in L (Ω) if d( f, g) =. For example, the following two functions are identical in L (, ): f(x) = {, if x <,, if x, g(x) = {, if x,, if < x. The norm in the L (Ω) space is defined as { / u L = u = u dx}. (7.) Ω It is straightforward to prove that the usual properties for the distance and the norm hold. We say that L (Ω) is a complete space, meaning that any Cauchy sequence {f n (Ω)} in L has a limit in L (Ω), i.e., there is a function f L (Ω) such that lim f n f n L =, or lim f n = f. n A Cauchy sequence is a sequence that satisfies the property that, for any given positive number ϵ, no matter how small it is, there is an integer N such that f n f m L < ϵ, if m N, n N.

172 c7 7/9/6 5:5 page 6 #5 6 Theoretical Foundations of the Finite Element Method A complete normed space is called a Banach space (a Cauchy sequence converges in terms of the norm), so L (Ω) is a Banach space. For any two vectors 7.. The Inner Product in L x y x y x =, y =.. x n y n in R n, we recall that the inner product is (x, y) = x T y = n x i y i = x y + x y + + x n y n. i= Similarly, the inner product in L (Ω) space in R, the real number space, is defined as ( f, g) = f(x)g(x) dx, for any f and g L (Ω), (7.3) Ω and satisfies the familiar properties ( f, g) = (g, f ), (αf, g) = ( f, αg) = α( f, g), α R, ( f, g + w) = ( f, g) + ( f, w) for any f, g, and w L (Ω). The norm, distance, and inner product in L (Ω) are related as follows: u = u L (Ω) = { / (u, u) = d(u, θ) = u dx}. (7.4) Ω With the L (, ) inner product, for the simple model problem we can rewrite the weak form as u = f, < x <, u() = u() =, (u, v ) = ( f, v), v H (, ),

173 c7 7/9/6 5:5 page 63 #6 and the minimization form as 7. Spaces for Integral Forms, L (Ω) and L p (Ω) 63 min F(v) : F(v) = v H (,) (v, v ) ( f, v). 7.. The Cauchy Schwarz Inequality in L (Ω) For a Hilbert space with the norm u = (u, u), the Cauchy Schwarz inequality is (u, v) u v. (7.5) Examples of the Cauchy Schwarz inequality corresponding to inner products in R n and in L spaces are { n n } / { n } / x i y i x i y i, i= i= i= n x i { n } / n x i, Ω i= { fgdx where V is the volume of Ω. Ω { fdx Ω Ω i= } / { / f dx g dx}, Ω f dx} / V, 7... A Proof of the Cauchy Schwarz Inequality Noting that (u, u) = u, we construct a quadratic function of α given u and v: f(α) = (u + αv, u + αv) = (u, u) + α(u, v) + α (v, v). The quadratic function is nonnegative; hence the discriminant of the quadratic form satisfies = b 4ac, i.e., 4(u, v) 4(u, u)(v, v) or (u, v) (u, u)(v, v), yielding the Cauchy Schwarz inequality (u, v) u v, on taking the square root of both sides. A complete Banach space with an inner product defined is called a Hilbert space. Hence, L (Ω) is a Hilbert space (linear space, inner product, complete).
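A quick numerical illustration of the Cauchy-Schwarz inequality in L^2(0, 1) can be obtained with a few lines of Matlab; the particular functions u and v below are arbitrary choices made only for this check.

% Verify |(u,v)| <= ||u|| ||v|| in L^2(0,1) by numerical quadrature.
u  = @(x) sin(pi*x);
v  = @(x) x.^2;
ip = integral(@(x) u(x).*v(x), 0, 1);          % the inner product (u, v)
nu = sqrt(integral(@(x) u(x).^2, 0, 1));       % ||u||
nv = sqrt(integral(@(x) v(x).^2, 0, 1));       % ||v||
fprintf('|(u,v)| = %.6f <= ||u||*||v|| = %.6f\n', abs(ip), nu*nv);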

174 c7 7/9/6 5:5 page 64 #7 64 Theoretical Foundations of the Finite Element Method 7... Relationships Between the Spaces The relationships (and relevant additional properties) in the hierarchy of defined spaces may be summarized diagrammatically: Metric Space (distance) = Normed Space (norm) = Banach space (complete) = Hilbert space (inner product) L p (Ω) Spaces An L p (Ω) ( < p < ) space is defined as { } L p (Ω) = u(x), u(x) p dx <, (7.6) Ω and the distance in L p (Ω) is defined as { /p d( f, g) = f g dx} p. (7.7) Ω An L p (Ω) space has a distance and is complete, so it is a Banach space. However, it is not a Hilbert space, because no corresponding inner product is defined unless p =. 7.3 Sobolev Spaces and Weak Derivatives Similar to C m (Ω) spaces, we use Sobolev spaces H m (Ω) to define function spaces with derivatives involving integral forms. If there is no derivative, then the relevant Sobolev space is { } H (Ω) = L (Ω) = v(x), v dx <. (7.8) Ω 7.3. Definition of Weak Derivatives If u(x) C [, ], then for any function ϕ C (, ) such that ϕ() = ϕ() = we recall u (x)ϕ(x) = uϕ u(x)ϕ (x) dx = u(x)ϕ (x) dx, (7.9) where ϕ(x) is a test function in C (, ). The first-order weak derivative of u(x) L (Ω) = H (Ω) is defined to be a function v(x) satisfying v(x)ϕ(x)dx = u(x)ϕ (x)dx (7.) Ω Ω

175 c7 7/9/6 5:5 page 65 #8 7.3 Sobolev Spaces and Weak Derivatives 65 for all ϕ(x) C (Ω) with ϕ() = ϕ() =. If such a function exists, then we write v(x) = u (x). Example 7.4. Consider the following function u(x) x, if x <, u(x) = x, if x. It is obvious that u(x) C[, ]) but u (x) C(, ) since the classic derivative does not exist at x =. Let ϕ(x) 3 C (, ) be any function that vanishes at two ends, i.e., ϕ() = ϕ() =, and has first-order continuous derivative on (, ). We carry out the following integration by parts. ϕ udx = ϕ udx + = ϕ(x) u(x) ϕ udx + ϕ(x) u(x) ( = ϕ(/) u(/ ) u(/+) = ψ(x)ϕ(x)dx, ϕu dx + ) ϕ(x) dx ϕu dx ϕ(x) dx where we have used the property that ϕ() = ϕ() = and ψ(x) is defined as, if < x <, ψ(x) =, if < x <, which is what we would expect. In other words, we define the weak derivative of u(x) as u (x) = ψ(x) which is an H (, ) but not a C(, ) function. Similarly, the m-th order weak derivative of u(x) H (Ω) is defined as a function v(x) satisfying v(x)ϕ(x)dx = ( ) m u(x)ϕ (m) (x)dx (7.) Ω for all ϕ(x) C m (Ω) with ϕ(x) = ϕ (x) = = ϕ (m ) (x) = for all x Ω. If such a function exists, then we write v(x) = u (m) (x). Ω
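The defining identity of the weak derivative can also be checked numerically. The Matlab fragment below does this for the example above with the test function φ(x) = sin(2πx), which vanishes at x = 0 and x = 1; the choice of test function and the use of built-in quadrature are illustrative assumptions, not part of the book's codes.

% Check  int(psi*phi) = -int(u*phi')  for u(x) = x (x < 1/2), 1 - x (x >= 1/2),
% whose weak derivative is psi(x) = 1 (x < 1/2), -1 (x >= 1/2).
u    = @(x) x.*(x < 0.5) + (1 - x).*(x >= 0.5);
psi  = @(x) (x < 0.5) - (x >= 0.5);
phi  = @(x) sin(2*pi*x);
dphi = @(x) 2*pi*cos(2*pi*x);
lhs  = integral(@(x) psi(x).*phi(x), 0, 1, 'Waypoints', 0.5);
rhs  = -integral(@(x) u(x).*dphi(x), 0, 1, 'Waypoints', 0.5);
fprintf('int(psi*phi) = %.8f,  -int(u*dphi) = %.8f\n', lhs, rhs);   % both equal 2/pi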

176 c7 7/9/6 5:5 page 66 #9 66 Theoretical Foundations of the Finite Element Method 7.3. Definition of Sobolev Spaces H m (Ω) The Sobolev space H (Ω) defined as { } H (Ω) = v(x), D α v L (Ω), α (7.) involves first-order derivatives, e.g., { H (a, b) = v(x), a < x < b, b a v dx <, b a (v ) dx < and in two space dimensions, H (Ω) is defined as { H (Ω) = v(x, y), v L (Ω), v x L (Ω), v } y L (Ω). The extension is immediate to the Sobolev space of general dimension { } H m (Ω) = v(x), D α v L (Ω), α m. (7.3) }, Inner Products in H m (Ω) Spaces The inner product in H (Ω) is the same as that in L (Ω), i.e., (u, v) H (Ω) = (u, v) = uv dx. The inner product in H (a, b) is defined as (u, v) H (a,b) = (u, v) = b a Ω ( uv + u v ) dx; the inner product in H (Ω) of two variables is defined as ( (u, v) H (Ω) = (u, v) = uv + u v x x + u ) v dxdy ; y y Ω the inner product in H m (Ω) (of general dimension) is (u, v) H m (Ω) = (u, v) m = (D α u(x)) (D α v(x)) dx. (7.4) Ω α m The norm in H m (Ω) (of general dimension) is u H m (Ω) = u m = Ω α m D α u(x) dx / and, (7.5)

177 c7 7/9/6 5:5 page 67 # 7.3 Sobolev Spaces and Weak Derivatives 67 therefore, H m (Ω) is a Hilbert space. A norm can be defined from the inner product, e.g., in H (a, b), the norm is u = { b a ( u + u ) dx} /. The distance in H m (Ω) (of general dimension) is defined as d(u, v) m = u v m. (7.6) Relations Between C m (Ω) and H m (Ω) The Sobolev Embedding Theorem In D spaces, we have H (Ω) C (Ω), H (Ω) C (Ω),..., H +j (Ω) C j (Ω). Theorem 7.5. The Sobolev embedding theorem: If m > n, then H m+j C j, j =,,..., (7.7) where n is the dimension of the independent variables of the elements in the Sobolev space. Example 7.6. In D spaces, we have n =. The condition m > n means that m >. From the embedding theorem, we have H +j C j, j = = H C, j = = H 3 C,.... (7.8) If u(x, y) H, which means that u, u x, u y, u xx, u xy, and u yy all belong to L, then u(x, y) is continuous, but u x and u y may not be continuous! Example 7.7. In 3D spaces, we have n = 3 and the condition m > n means that m > 3/ whose closest integer is number two, leading to the same result as in D: H +j C j, j = = H C, j = = H 3 C,.... (7.9) We regard the regularity of a solution as the degree of smoothness for a class of problems measured in C m or H m space. Thus, for u(x) H m or u(x) C m, the larger the m the smoother the function.

178 c7 7/9/6 5:5 page 68 # 68 Theoretical Foundations of the Finite Element Method For the simple D model problem we know that the weak form is 7.4 FE Analysis for D BVPs u = f, < x <, u() = u() =, u v dx = fv dx or (u, v ) = ( f, v). Intuitively, because v is arbitrary we can take v = f or v = u to get u v dx = u dx, fv dx = f dx, so u, u, f, v, and v should belong to L (, ), i.e., we have u H (, ) and v H (, ); so the solution is in the Sobolev space H (, ). We should also take v in the same space for a conforming finite element method. From the Sobolev embedding theorem, we also know that H C, so the solution is continuous Conforming FE Methods Definition 7.8. If the finite element space is a subspace of the solution space, then the finite element space is called a conforming finite element space, and the finite element method is called a conforming FE method. For example, the piecewise linear function over a given mesh is a conforming finite element space for the model problem. We mainly discuss conforming finite element methods in this book. On including the boundary condition, we define the solution space as } H {v(x) (, ) =, v() = v() =, v H (, ). (7.3) When we look for a finite element solution in a finite-dimensional space V h, it should be a subspace of H (, ) for a conforming finite element method. For example, given a mesh for the D model, we can define a finite-dimensional space using piecewise continuous linear functions over the mesh: V h = {v h, v h () = v h () =, v h is continuous piecewise linear}. The finite element solution would be chosen from the finite-dimensional space V h, a subspace of H (, ). If the solution of the weak form is in H (, ) but not in the V h space; then an error is introduced on replacing the solution space with the finite-dimensional space. Nevertheless, the finite element solution is the best approximation in V h in some norm, as discussed later.

179 c7 7/9/6 5:5 page 69 # 7.4 FE Analysis for D BVPs FE Analysis for D Sturm Liouville Problems A D Sturm Liouville problem on (x l, x r ) with a Dirichlet boundary condition at two ends is (p(x)u (x)) + q(x)u(x) = f(x), x l < x < x r, u(x l ) =, u(x r ) =, p(x) p min >, q(x) q min. (7.3) The conditions on p(x) and q(x) guarantee the problem is well-posed, such that the weak form has a unique solution. It is convenient to assume p(x) C(x l, x r ) and q(x) C(x l, x r ). Later we will see that these conditions together with f(x) L (x l, x r ), guarantee the unique solution to the weak form of the problem. To derive the weak form, we multiply both sides of the equation by a test function v(x), v(x l ) = v(x r ) = and integrate from x l to x r to get xr x l ( (p(x)u ) + qu ) v dx = pu v = xr ( = pu v + quv ) dx = x l xr x l xr x r + x l fv dx xr x l ( pu v + quv ) dx x l fv dx, v H (x l, x r ) or a(u, v) = L(v). The integral The Bilinear Form xr a(u, v) = ( pu v + quv ) dx (7.3) x l is a bilinear form, because it is linear for both u and v from the following a(αu + βw, v) = = α xr x l xr x l ( p(αu + βw )v + q(αu + βw)v ) dx ( pu v + quv ) xr dx + β x l = αa(u, v) + βa(w, v), where α and β are scalars; and similarly, a(u, αv + βw) = αa(u, v) + βa(u, w). ( w v + qwv ) dx

180 c7 7/9/6 5:5 page 7 #3 7 Theoretical Foundations of the Finite Element Method It is noted that this bilinear form is an inner product, usually different from the L and H inner products, but if p and q then a(u, v) = (u, v). Since a(u, v) is an inner product, under the conditions: p(x) p min >, q(x), we can define the energy norm as u a = { xr ( a(u, u) = p(u ) + qu ) } dx, (7.33) x l where the first term may be interpreted as the kinetic energy and the second term as the potential energy. The Cauchy Schwarz inequality implies a(u, v) u a v a. The bilinear form combined with linear form often simplifies the notation for the weak and minimization forms, e.g., for the above Sturm Liouville problem the weak form becomes and the minimization form is a(u, v) = L(v), v H (x l, x r ), (7.34) min F(v) = v H (x l,x r ) min v H (x l,x r ) { } a(v, v) L(v). (7.35) Later we will see that all self-adjoint differential equations have both weak and minimization forms, and that the finite element method using the Ritz form is the same as with the Galerkin form The FE Method for D Sturm Liouville Problems Using Hat Basis Functions Consider any finite-dimensional space V h H (x l, x r ) with the basis that is, ϕ (x) H (x l, x r ), ϕ (x) H (x l, x r ),..., ϕ M (x) H (x l, x r ), V h = span {ϕ, ϕ,..., ϕ M } { } M = v s, v s = α i ϕ i H (x l, x r ). The Galerkin finite method assumes the approximate solution to be u s (x) = i= M α j ϕ j (x), (7.36) j=

181 c7 7/9/6 5:5 page 7 #4 7.4 FE Analysis for D BVPs 7 and the coefficients {α j } are chosen such that the weak form a(u s, v s ) = ( f, v s ), v s V h is satisfied. Thus we enforce the weak form in the finite-dimensional space V h instead of the solution space H (x l, x r ), which introduces some error. Since any element in the space is a linear combination of the basis functions, we have or or a (u s, ϕ i ) = ( f, ϕ i ), i =,,..., M, M a α j ϕ j, ϕ i = ( f, ϕ i ), i =,,..., M, j= M a(ϕ j, ϕ i ) α j = ( f, ϕ i ), i =,,..., M. j= In the matrix-vector form AX = F, this system of algebraic equations for the coefficients is a(ϕ, ϕ ) a(ϕ, ϕ ) a(ϕ, ϕ M ) α ( f, ϕ ) a(ϕ, ϕ ) a(ϕ, ϕ ) a(ϕ, ϕ M ) α ( f, ϕ ) =, a(ϕ M, ϕ ) a(ϕ M, ϕ ) a(ϕ M, ϕ M ) ( f, ϕ M ) and the system has some attractive properties. The coefficient matrix A is symmetric, i.e., {a ij } = {a ji } or A = A T, since a(ϕ i, ϕ j ) = a(ϕ j, ϕ i ). Note that this is only true for a self-adjoint problem such as the above, with the second-order ODE (pu ) + qu = f. α M For example, the similar problem involving the ODE u + u = f is not self-adjoint; and the Galerkin finite element method using the corresponding weak form (u, v ) + (u, v) = ( f, v) or (u, v ) + (u, v ) = ( f, v)

182 c7 7/9/6 5:5 page 7 #5 7 Theoretical Foundations of the Finite Element Method produces terms such as (ϕ i, ϕ j) that differ from (ϕ j, ϕ i), so that the coefficient matrix A is not symmetric. A is positive definite, i.e., x T A x > if x, and all eigenvalues of A are positive. Proof For any η, we show that η T A η > as follows since v s = η T A η = η T (Aη) = = = = M i= M i= i= η i η i M j= M j= M i= η i M j= a(ϕ i, ϕ j )η j a(ϕ i, η j ϕ j ) a ij η j M M η i a ϕ i, η j ϕ j i= j= M M = a η i ϕ i, η j ϕ j j= = a (v s, v s ) = v s a >, i= M η i ϕ i because η is a nonzero vector and the {ϕ i } s are linearindependent Local Stiffness Matrix and Load Vector Using the Hat Basis Functions The local stiffness matrix using the hat basis functions is a matrix of the following, [ xi+ K e x i p(x) (ϕ i i = ) xi+ dx x i p(x) ϕ i ϕ i+ dx ] xi+ x i p(x) ϕ i+ ϕ i dx x i+ x i p(x) (ϕ i+ ) dx [ xi+ x i q(x) ϕ i + dx xi+ ] x i q(x) ϕ i ϕ i+ dx xi+ xi+ q(x) ϕ i+ ϕ i dx q(x) ϕ i+ dx, x i x i

183 c7 7/9/6 5:5 page 73 #6 7.5 Error Analysis of the FE Method 73 and the local load vector is [ xi+ ] Fi e x i fϕ i dx = xi+. fϕ i+ dx x i The global stiffness matrix and load vector can be assembled element by element. 7.5 Error Analysis of the FE Method Error analysis for finite element methods usually includes two parts:. error estimates for an intermediate function in V h, often the interpolation function; and. convergence analysis, a limiting process that shows the finite element solution converges to the true solution of the weak form in some norm, as the mesh size h approaches zero. We first recall some notations and setting up:. Given a weak form a(u, v) = L(v) and a space V, which usually has infinite dimension, the problem is to find a u V such that the weak form is satisfied for any v V. Then u is called the solution of the weak form.. A finite-dimensional subspace of V denoted by V h (i.e., V h V) is adopted for a conforming finite element method and it does not have to depend on h, however. 3. The solution of the weak form in the subspace V h is denoted by u h, i.e., we require a(u h, v h ) = L(v h ) for any v h V h. 4. The global error is defined by e h = u(x) u h (x), and we seek a sharp upper bound for e h using certain norm. It was noted that error is introduced when the finite-dimensional space replaces the solution space, as the weak form is usually only satisfied in the subspace V h and not in the solution space V. However, we can prove that the solution satisfying the weak form in the subspace V h is the best approximation to the exact solution u in the finite-dimensional space in the energy norm. Theorem 7.9. With the notations above, we have. u h is the projection of u onto V h through the inner product a(u, v), i.e., u u h V h or u u h ϕ i, i =,,..., M, (7.37) a(u u h, v h ) = v h V h or a(u u h, ϕ i ) =, i =,,..., M, (7.38) where {ϕ i } s are the basis functions.

184 c7 7/9/6 5:5 page 74 #7 74 Theoretical Foundations of the Finite Element Method. u h is the best approximation in the energy norm, i.e., u u h a u v h a, v h V h. Proof a(u, v) = ( f, v), v V, a(u, v h ) = ( f, v h ), v h V h since V h V, a(u h, v h ) = ( f, v h ), v h V h since u h is the solution in V h, subtract a(u u h, v h ) = or a(e h, v h ) =, v h V h. Now we prove that u h is the best approximation in V h. u v h a = a(u v h, u v h ) = a(u u h + u h v h, u u h + u h v h ) = a(u u h + w h, u u h + w h ), on letting w h = u h v h V h, = a(u u h, u u h + w h ) + a(w h, u u h + w h ) = a(u u h, u u h ) + a(u u h, w h ) + a(w h, u u h ) + a(w h, w h ) = u u h a w h a, since, a(e h, u h ) = u u h a i.e., u u h a u v h a. Figure 7. is a diagram to illustrate the theorem. u u u h u u u h u h V h Figure 7.. A diagram of FE approximation properties. The finite element solution is the best approximation to the solution u in the finite-dimensional space V h in the energy norm; and the error u u h is perpendicular to the finitedimensional space V h in the inner product of a(u, v). V h u h

185 c7 7/9/6 5:5 page 75 #8 7.5 Error Analysis of the FE Method 75 Example: For the Sturm Liouville problem, u u h a = b a ( p(x) (u u h ) + q(x) (u u h ) ) dx b b p max (u u h ) dx + q max (u u h ) dx a max{p max, q max } = C u u h, b a a ( (u u h ) + (u u h ) ) dx where C = max{p max, q max }. Thus, we obtain u u h a Ĉ u u h, u u h a u v h a C u v h Interpolation Functions and Error Estimates Usually the solution is unknown; so in order to get the error estimate we choose a special v h V h, for which we can get a good error estimate. We may then use the error estimate u u h a u v h a to get an error estimate for the finite element solution (may be overestimated). Usually we can choose a piecewise interpolation function for this purpose. That is another reason that we choose piecewise linear, quadratic, or cubic functions over the given mesh in finite element methods Linear D Piecewise Interpolation Function Given a mesh x, x, x,..., x M, the linear D piecewise interpolation function is defined as u I (x) = x x i x i x i u(x i ) + x x i x i x i u(x i ), x i x x i. It is obvious that u I (x) V h, where V h H is the set of continuous piecewise linear functions that have the first-order weak derivative, so u u h a u u I a. Since u(x) is unknown, then so is u I (x). Nevertheless, we know the upper error bound of the interpolation functions.

186 c7 7/9/6 5:5 page 76 #9 76 Theoretical Foundations of the Finite Element Method Theorem 7.. Given a function u(x) C [a, b] and a mesh x, x, x,..., x M, the continuous piecewise linear function u I has the error estimates u u I = max x [a,b] u(x) u I(x) h 8 u, (7.39) u (x) u I(x) L (a,b) h b a u. (7.4) Proof If ẽ h = u(x) u I (x), then ẽ h (x i ) = ẽ h (x i ) =. From Rolle s theorem, there must be at least one point z i between x i and x i such that ẽ h (z i ) =, hence ẽ h (x) = = = x z i x ẽ h (t) dt ( u (t) u I (t) ) dt z i x z i u (t) dt. Therefore, we obtain the error estimates below ẽ h (x) x z i ẽ h L (a,b) = ẽ h u (t) dt u { u b a x z i dt u h, and h dt } b a u h ; so we have proved the second inequality. To prove the first, assume that x i + h/ z i x i, otherwise we can use the other half interval. From the Taylor expansion ẽ h (x) = ẽ h (z i + x z i ), assuming x i x x i, so at x = x i we have = ẽ h (z i ) + ẽ h (z i )(x z i ) + ẽh (ξ)(x z i ), x i ξ x i, = ẽ h (z i ) + ẽh (ξ)(x z i ), = ẽ h (x i ) = ẽ h (z i ) + ẽh (ξ)(x i z i ), ẽ h (z i ) = ẽh (ξ)(x i z i ), ẽ h (z i ) u (x i z i ) h 8 u. Note that the largest value of ẽ h (x) has to be the z i where the derivative is zero.
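As a quick sanity check of the bound ‖u − u_I‖_∞ ≤ (h²/8)‖u″‖_∞, the estimate can be verified numerically with piecewise linear interpolation. The following short Matlab script is our own illustration (the test function u(x) = sin x and the interval [0, π] are chosen only for convenience and are not from the text):

```matlab
% Numerically check the piecewise linear interpolation error bound of the
% theorem above, ||u - u_I||_inf <= h^2/8 ||u''||_inf, for u(x) = sin(x).
u = @(x) sin(x);  upp_max = 1;          % max |u''| = 1 for sin(x)
a = 0;  b = pi;
for M = [10 20 40 80]
  xg  = linspace(a, b, M+1);            % interpolation nodes
  h   = (b - a)/M;
  xs  = linspace(a, b, 10*M+1);         % fine sampling points
  uI  = interp1(xg, u(xg), xs, 'linear');   % piecewise linear interpolant
  err = max(abs(u(xs) - uI));
  fprintf('M = %3d   error = %.3e   bound h^2/8 = %.3e\n', M, err, h^2/8*upp_max);
end
```

Halving h should reduce the measured error by roughly a factor of four, consistent with the O(h²) bound.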

187 c7 7/9/6 5:5 page 77 # 7.5 Error Analysis of the FE Method Error Estimates of the Finite Element Methods Using the Interpolation Function Theorem 7.. For the D Sturm Liouville problem, the following error estimates hold, where C and Ĉ are two constants. u u h a Ch u, u u h Ĉh u, Proof u u h a u u I a b ( p(x) (u u I) + q(x) (u u I ) ) dx a max {p max, q max } b max {p max, q max } u Ch u. a ( (u u I) + (u u I ) ) dx b a ( ) h + h 4 /64 dx The second inequality is obtained because a and are equivalent, so c v a v C v a, ĉ v v a Ĉ v Error Estimate in the Pointwise Norm We can easily prove the following error estimate. Theorem 7.. For the D Sturm Liouville problem, u u h Ch u, (7.4) where C is a constant. The estimate is not sharp (or optimal); or simply it is overestimated. We note that u h is discontinuous at nodal points, and the infinity norm u u h can only be defined for continuous functions.

188 c7 7/9/6 5:5 page 78 # 78 Theoretical Foundations of the Finite Element Method Proof e h (x) = u(x) u h (x) = e h (x) b a e h (t) dt x a e h (t)dt { } / { b b e h dt dt a b a { b a b a e h a p min b a ẽ h a p min Ch u. a } / } / p e h p dt min Remark 7.3. Actually, we can prove a better inequality u u h Ch u, so the finite element method is second-order accurate. This is an optimal (sharp) error estimate. Exercises. Assuming the number of variables n = 3, describe the Sobolev space H 3 (Ω) (i.e., for m = 3) in terms of L (Ω), retaining all the terms but not using the multi-index notation. Then using the multi-index notation when applicable, represent the inner product, the norm, the Schwarz inequality, the distance, and the Sobolev embedding theorem in this space.. Consider the function v(x) = x α on Ω = (, ) with α R. For what values of α is v H (Ω)? (Consider both positive and negative α.) For what values is v H (Ω)? in H m (Ω)? For what values of α is v C m (Ω)?

189 c7 7/9/6 5:5 page 79 # Exercises 79 Hint: Make use of the following x α = { x α if x ( x) α if x <, and when k is a nonnegative integer note that { x k if α = k x α = if α = ; also if α > lim x x α = if α = if α < and if α > x α dx = α + if α. 3. Are each of the following statements true or false? Justify your answers. (a) If u H (, ), then u and u are both continuous functions. (b) If u(x, y) H (Ω), then u(x, y) may not have continuous partial derivatives u/ x and u/ y. Does u(x, y) have first- and second-order weak derivatives? Is u(x, y) continuous in Ω? 4. Consider the Sturm Liouville problem ( p(x)u(x) ) + q(x)u(x) = f(x), < x < π, αu() + βu () = γ, u (π) = u b, where < p min p(x) p max <, q min q(x) q max < q. (a) Derive the weak form for the problem. Define a bilinear form a(u, v) and a linear form L(v) to simplify the weak form. What is the energy norm? (b) What kind of restrictions should we have for α, β, and γ in order that the weak form has a solution? (c) Determine the space where the solution resides under the weak form. (d) If we look for a finite element solution in a finite-dimensional space V h using a conforming finite element method, should V h be a subspace of C, or C, or C? (e) Given a mesh x = < x < x < < x M < x M = π, if the finite-dimensional space is generated by the hat functions, what kind of structure do the local and global stiffness matrix and the load vector have? Is the resulting linear system of equations formed by the global stiffness matrix and the load vector symmetric, positive definite and banded?

190 c7 7/9/6 5:5 page 8 #3 8 Theoretical Foundations of the Finite Element Method 5. Consider the two-point BVP u (x) = f(x), a < x < b, u(a) = u(b) =. Let u h (x) be the finite element solution using the piecewise linear space (in H (a, b)) spanned by a mesh {x i }. Show that u u h Ch, where C is a constant. Hint: First show u h u I a =, where u I (x) is the interpolation function in V h.

191 c8 7/9/6 7: page 8 # 8 Issues of the FE Method in One Space Dimension 8. Boundary Conditions For a second-order two-point BVP, typical boundary conditions (BC) include one of the following at each end, say at x = x l,. a Dirichlet condition, e.g., u(x l ) = u l is given;. a Neumann condition, e.g., u (x l ) is given; or 3. a Robin (mixed) condition, e.g., αu(x l ) + βu (x l ) = γ is given, where α, β, and γ are known but u(x l ) and u (x l ) are both unknown. Boundary conditions affect the bilinear and linear form, and the solution space. Example 8.. For example, u = f, < x <, u() =, u () =, involves a Dirichlet BC at x = and a Neumann BC at x =. To derive the weak form, we again follow the familiar procedure: u ()v() + u ()v() + u vdx = u v + u v dx = u v dx = fv dx, fv dx, fv dx. For a conforming finite element method, the solution function of u and the test functions v should be in the same space. So it is natural to require that the test functions v satisfy the same homogeneous Dirichlet BC, i.e., we require v() = ; and the Dirichlet condition is therefore called an essential boundary 8

192 c8 7/9/6 7: page 8 # 8 Issues of the FE Method in One Space Dimension condition. On the other hand, since u () =, the first term in the final expression is zero, so it does not matter what v() is, i.e., there is no constraint on v(); so the Neumann BC is called a natural boundary condition. It is noted that u() is unknown. The weak form of this example is the same as before for homogeneous Dirichlet BC at both ends; but now the solution space is different: (u, v ) = ( f, v), v H E(, ), { } where H E(, ) = v(x), v() =, v H (, ), where we use the subscript E in H E (, ) to indicated an essential boundary condition. Consider a Sturm Liouville problem 8.. Mixed Boundary Conditions (pu ) + qu = f, x l < x < x r, p(x) p min >, q(x), (8.) u(x l ) =, αu(x r ) + βu (x r ) = γ, β, α, (8.) β where α, β, and γ are known constants but u(x r ) and u (x r ) are unknown. Integration by parts again gives p(x r )u (x r )v(x r ) + p(x l )u (x l )v(x l ) + xr x l ( pu v + quv ) dx = xr x l fv dx. (8.3) As explained earlier, we set v(x l ) = (an essential BC). Now we reexpress the mixed BC as and substitute this into (8.3) to obtain or equivalently xr p(x r )v(x r ) γ αu(x r) β u (x r ) = γ αu(x r), (8.4) β + xr x l ( pu v + quv ) dx + α β p(x r)u(x r )v(x r ) = x l ( pu v + quv ) dx = xr xr x l fv dx x l fv dx + γ β p(x r)v(x r ), (8.5)

193 c8 7/9/6 7: page 83 #3 8. Boundary Conditions 83 which is the corresponding weak (variational) form of the Sturm Liouville problem. We define a(u, v) = xr x l ( pu v + quv ) dx + α β p(x r)u(x r )v(x r ), the bilinear form, (8.6) L(v) = ( f, v) + γ β p(x r)v(x r ), the linear form. (8.7) We can prove that:. a(u, v) = a(v, u), i.e., a(u, v) is symmetric;. a(u, v) is a bilinear form, i.e., a(ru + sw, v) = ra(u, v) + sa(w, v), a(u, rv + sw) = ra(u, v) + sa(u, w), for any real numbers r and s; and 3. a(u, v) is an inner product, and the corresponding energy norm is u a = { xr a(u, u) = (pu ) + qu dx + α } x l β p(x r)u(x r ). It is now evident why we require β, and α/β. Using the inner product, the solution of the weak form u(x) satisfies a(u, v) = L(v), v H E(x l, x r ), (8.8) { } H E(x l, x r ) = v(x), v(x l ) =, v H (x l, x r ), (8.9) and we recall that there is no restriction on v(x r ). The boundary condition is essential at x = x l, but natural at x = x r. The solution u is also the minimizer of the functional in the H E (x l, x r ) space. F(v) = a(v, v) L(v) 8.. Nonhomogeneous Dirichlet Boundary Conditions Suppose now that u(x l ) = u l in (8.). In this case, the solution can be decomposed as the sum of the particular solution (pu ) + qu =, x l < x < x r, u (x l ) = u l, αu (x r ) + βu (x r) =, β, α β, (8.)

194 c8 7/9/6 7: page 84 #4 84 Issues of the FE Method in One Space Dimension and (pu ) + qu = f, x l < x < x r, u (x l ) =, αu (x r ) + βu (x r) = γ, β, α β. (8.) We can use the weak form to find the solution u (x) corresponding to the homogeneous Dirichlet BC at x = x l. If we can find a particular solution u (x) of (8.), then the solution to the original problem is u(x) = u (x) + u (x). Another simple way is to choose a function u (x), u (x) H (x l, x r ) that satisfies u (x l ) = u l, αu (x r ) + βu (x r) =. For example, the function u (x) = u l ϕ (x) would be such a function, where ϕ (x) is the hat function centered at x l if a mesh {x i } is given. Then û(x) = u(x) u (x) would satisfy a homogeneous Dirichlet BC at x l and the following S-L problem: (pû ) + qû = f (x) + (pu ) qu, x l < x < x r, û(x l ) =, αû (x r ) + βû(x r ) = γ. We can apply the finite element method for û(x) previously discussed with the modified source term f (x), where the weak form u(x) after substituting back is the same as before: where a (û, v) = L (v) = = a (û, v) = L (v), v(x) H E(x l, x r ), xr x l ( pû v + qûv ) dx + α β p(x r)û(x r )v(x r ) xr x l fvdx + γ β p(x r)v(x r ) + xr x l fvdx + γ β p(x r)v(x r ) α β p(x r)u (x r )v(x r ). xr x l xr x l ( (pu ) v qu v ) dx ( pu v + qu v ) dx

195 c8 7/9/6 7: page 85 #5 8. The FE Method for Sturm Liouville Problems 85 If we define a(u, v) = xr x l ( pu v + quv ) dx + α β p(x r)u(x r )v(x r ), L(v) = as before, then we have xr x l fvdx + γ β p(x r)v(x r ), a (u u, v) = a(u u, v) = L (v) = L(v) a(u, v), or a(u, v) = L(v). (8.) While we still have a(u, v) = L(v), the solution is not in H E (x l, x r ) space due to the nonhomogeneous Dirichlet boundary condition. Nevertheless u u is in H E (x l, x r ). The formula above is also the basis of the numerical treatment of Dirichlet boundary conditions later. 8. The FE Method for Sturm Liouville Problems Let us now consider the finite element method using the piecewise linear function over a mesh x = x l, x,..., x M = x r (see a diagram in Figure 8.) for the Sturm Liouville problem (pu ) + qu = f, x l < x < x r, u(x l ) = u l, αu(x r ) + βu (x r ) = γ, β, We again use the hat functions as the basis such that u h (x) = M α i ϕ i (x), i= α β. and now focus on the treatment of the BC. The solution is unknown at x = x r, so it is not surprising to have ϕ M (x) for the natural BC. The first term ϕ (x) is the function used as u (x), to deal with ϕ ϕ ϕ i M ϕ x = x x x i x M = Figure 8.. Diagram of a D mesh where x l = and x r =.

196 c8 7/9/6 7: page 86 #6 86 Issues of the FE Method in One Space Dimension the nonhomogeneous Dirichlet BC. The local stiffness matrix is xi+ xi+ [ ] a(ϕi, ϕ Ki e i ) a(ϕ i, ϕ i+ ) pϕ i dx pϕ iϕ i+ dx x i x i = = a(ϕ i+, ϕ i ) a(ϕ i+, ϕ i+ ) xi+ xi+ (x i,x i+ ) pϕ i+ ϕ i dx dx + xi+ x i xi+ x i qϕ i dx qϕ i+ ϕ i dx xi+ x i xi+ x i x i qϕ i ϕ i+ dx qϕ i+ dx [ + α β p(x ϕ ] i (x r ) ϕ i (x r )ϕ i+ (x r ) r) ϕ i+ (x r )ϕ i (x r ) ϕ i+ (x, r) and the local load vector is [ ] L(ϕi ) Fi e = = L(ϕ i+ ) xi+ x i xi+ x i x i pϕ i+ fϕ i dx [ ] + γ ϕi β p(x (x r ) r). ϕ fϕ i+ dx i+ (x r ) We can see clearly the contributions from the BC; and in particular, that the only nonzero contribution of the BC to the stiffness matrix and the load vector is from the last element (x M, x M ) due to the compactness of the hat functions. 8.. Numerical Treatments of Dirichlet BC The finite element solution defined on the mesh can be written as u h (x) = u l ϕ (x) + M α j ϕ j (x), where α j, j =,,..., M are the unknowns. Note that u l ϕ (x) is an approximate particular solution in H (x l, x r ), and satisfies the Dirichlet boundary condition at x = x l and homogeneous Robin boundary condition at x = x r. To use the finite element method to determine the coefficients, we enforce the weak form for u h (x) u a ϕ (x) for the modified differential problem, j= (pu ) + qu = f + u l (p ϕ ) u l q ϕ, x l < x < x r, u(x l ) =, αu(x r ) + βu α (x r ) = γ, β, β. (8.3)

197 c8 7/9/6 7: page 87 #7 8. The FE Method for Sturm Liouville Problems 87 Thus the system of linear equations is â (u h (x), ϕ i (x)) = ˆL(ϕ i ), i =,,..., M, where â(:, :) and ˆL(:) are the bilinear and linear for the BVP above, or equivalently a(ϕ, ϕ )α + a(ϕ, ϕ )α + + a(ϕ, ϕ M )α M = L(ϕ ) a(ϕ, ϕ ) u l a(ϕ, ϕ )α + a(ϕ, ϕ )α + + a(ϕ, ϕ M )α M = L(ϕ ) a(ϕ, ϕ ) u l = a(ϕ M, ϕ )α + a(ϕ M, ϕ )α + + a(ϕ M, ϕ M )α M = L(ϕ M ) a(ϕ, ϕ M ) u l, where the bilinear and linear forms are still since a(u, v) = L(v) = xr xr x l ( pu v + quv ) dx + α β p(x r)u(x r )v(x r ); xr x l fvdx + γ β p(x r)v(x r ), x l ( ul (p ϕ ) u l q ϕ ) ϕi (x)dx = u l a(ϕ, ϕ i ). After moving the a(ϕ i, u l ϕ (x)) = a(ϕ i, ϕ (x)) u l to the right-hand side, we get the matrix-vector form a(ϕ, ϕ ) a(ϕ, ϕ M ).... a(ϕ M, ϕ ) a(ϕ M, ϕ M ) α α. α M u l L(ϕ ) a(ϕ, ϕ )u l =.. L(ϕ M ) a(ϕ M, ϕ )u l This outlines one method to deal with a nonhomogeneous Dirichlet boundary condition. 8.. Contributions from Neumann or Mixed BC The contribution of mixed boundary condition at x r using the hat basis functions is zero until the last element [x M, x M ], where ϕ M (x r ) is not zero.

The local stiffness matrix of the last element is
$$
K^e_M =
\begin{bmatrix}
\int_{x_{M-1}}^{x_M}\!\big(p\,(\varphi_{M-1}')^2 + q\,\varphi_{M-1}^2\big)\,dx &
\int_{x_{M-1}}^{x_M}\!\big(p\,\varphi_{M-1}'\varphi_M' + q\,\varphi_{M-1}\varphi_M\big)\,dx \\[4pt]
\int_{x_{M-1}}^{x_M}\!\big(p\,\varphi_M'\varphi_{M-1}' + q\,\varphi_M\varphi_{M-1}\big)\,dx &
\int_{x_{M-1}}^{x_M}\!\big(p\,(\varphi_M')^2 + q\,\varphi_M^2\big)\,dx + \dfrac{\alpha}{\beta}\,p(x_r)
\end{bmatrix},
$$
and the local load vector is
$$
F^e_M =
\begin{bmatrix}
\int_{x_{M-1}}^{x_M} f(x)\,\varphi_{M-1}(x)\,dx \\[4pt]
\int_{x_{M-1}}^{x_M} f(x)\,\varphi_M(x)\,dx + \dfrac{\gamma}{\beta}\,p(x_r)
\end{bmatrix}.
$$

8.2.3 Pseudo-code of the FE Method for 1D Sturm–Liouville Problems Using the Hat Basis Functions

Initialize: A = 0, F = 0.

Assemble the coefficient matrix element by element:

for i = 1, M
    A(i-1, i-1) = A(i-1, i-1) + ∫_{x_{i-1}}^{x_i} ( p (φ'_{i-1})² + q φ_{i-1}² ) dx
    A(i-1, i)   = A(i-1, i)   + ∫_{x_{i-1}}^{x_i} ( p φ'_{i-1} φ'_i + q φ_{i-1} φ_i ) dx
    A(i, i-1)   = A(i-1, i)
    A(i, i)     = A(i, i)     + ∫_{x_{i-1}}^{x_i} ( p (φ'_i)² + q φ_i² ) dx
    F(i-1)      = F(i-1)      + ∫_{x_{i-1}}^{x_i} f(x) φ_{i-1}(x) dx
    F(i)        = F(i)        + ∫_{x_{i-1}}^{x_i} f(x) φ_i(x) dx
end

Deal with the Dirichlet BC:

A(0,0) = 1;  F(0) = u_a;
for i = 1, M
    F(i) = F(i) − A(i,0) u_a;   A(i,0) = 0;   A(0,i) = 0;
end

Deal with the mixed BC at x = b:

A(M,M) = A(M,M) + (α/β) p(b);   F(M) = F(M) + (γ/β) p(b).

Solve AU = F.

Carry out the error analysis.
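The pseudo-code above can be turned into a short runnable Matlab driver. The following is a minimal sketch of ours (not the package code described later): it uses hat basis functions with one-point midpoint quadrature on each element, and the function name as well as the coefficient functions pfun, qfun, ffun are placeholders.

```matlab
% Minimal sketch (ours, not the package code): hat basis functions with
% one-point midpoint quadrature per element; Dirichlet BC u(a) = ua at the
% left end and mixed BC alpha*u(b) + beta*u'(b) = gamma at the right end.
% pfun, qfun, ffun are placeholder coefficient functions.
function U = fem1d_hat(x, pfun, qfun, ffun, ua, alpha, beta, gamma)
  M = length(x) - 1;                    % x(1..M+1) holds the nodes x_0..x_M
  A = zeros(M+1);  F = zeros(M+1, 1);
  for i = 1:M                           % element (x_{i-1}, x_i)
    h  = x(i+1) - x(i);
    xm = (x(i) + x(i+1))/2;             % midpoint quadrature point
    Ke = pfun(xm)/h*[1 -1; -1 1] + qfun(xm)*h/4*[1 1; 1 1];
    Fe = ffun(xm)*h/2*[1; 1];
    A(i:i+1, i:i+1) = A(i:i+1, i:i+1) + Ke;
    F(i:i+1)        = F(i:i+1) + Fe;
  end
  % Dirichlet BC at the left end (node x_0, Matlab index 1)
  F = F - A(:,1)*ua;   A(1,:) = 0;   A(:,1) = 0;   A(1,1) = 1;   F(1) = ua;
  % Mixed BC at the right end (node x_M, Matlab index M+1)
  A(M+1,M+1) = A(M+1,M+1) + (alpha/beta)*pfun(x(end));
  F(M+1)     = F(M+1)     + (gamma/beta)*pfun(x(end));
  U = A \ F;                            % nodal values of the FE solution
end
```

For instance, a call such as U = fem1d_hat(linspace(0,1,41), @(x) 1+x.^2, @(x) x.^2, @(x) ones(size(x)), 0, 1, 2, 0) returns the vector of nodal values for one particular choice of data; note that the Matlab indices are shifted by one relative to the 0-based node numbering of the pseudo-code.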

8.3 High-order Elements

To solve the Sturm–Liouville or other problems involving second-order differential equations, we can use the piecewise linear finite-dimensional space over a mesh. The error is usually O(h) in the energy and H¹ norms, and O(h²) in the L² and L∞ norms. If we want to improve the accuracy, we can choose to: refine the mesh, i.e., decrease h; or use more accurate (higher-order) and larger finite-dimensional spaces, i.e., the piecewise quadratic or piecewise cubic basis functions. Let us use the Sturm–Liouville problem
$$-(p\,u')' + q\,u = f, \quad x_l < x < x_r, \qquad u(x_l) = 0, \quad u(x_r) = 0,$$
again as the model problem for the discussion here. The other boundary conditions can be treated in a similar way, as discussed before. We assume a given mesh x_0 = x_l, x_1, ..., x_M = x_r, and the elements Ω_1 = (x_0, x_1), Ω_2 = (x_1, x_2), ..., Ω_M = (x_{M−1}, x_M), and consider piecewise quadratic and piecewise cubic functions, but still require the finite-dimensional spaces to be in H¹(x_l, x_r) so that the finite element methods are conforming.

200 c8 7/9/6 7: page 9 # 9 Issues of the FE Method in One Space Dimension Define V h = 8.3. Piecewise Quadratic Basis Functions {v(x), where v(x) is continuous piecewise quadratic in H }, over a given mesh. The piecewise linear finite-dimensional space is obviously a subspace of the space defined above, so the finite element solution is expected to be more accurate than the one obtained using the piecewise linear functions. To use the Galerkin finite element method, we need to know the dimension of the space V h of the piecewise quadratic functions in order to choose a set of basis functions. The dimension of a finite-dimensional space is sometimes called the the degree of freedom (DOF). Given a function ϕ(x) in V h, on each element a quadratic function has the form ϕ(x) = a i x + b i x + c i, x i x < x i+, so there are three parameters to determine a quadratic function in the interval (x i, x i+ ). In total, there are M elements, and so 3M parameters. However, they are not totally free, because they have to satisfy the continuity condition lim ϕ(x) = lim ϕ(x) x x i x x i + for x, x,..., x M, or more precisely, a i x i + b i x i + c i = a i x i + b i x i + c i, i =,,..., M. There are M interior nodal points, so there are M constraints, and ϕ(x) should also satisfy the BC ϕ(x l ) = ϕ(x r ) =. Thus the total degree of freedom, the dimension of the finite element space, is 3M (M ) = M. We now know that the dimension of V h is M. If we can construct M basis functions that are linearly independent, then all of the functions in V h can be expressed as linear combinations of them. The desired properties are similar to those of the hat basis functions; and they should be continuous piecewise quadratic functions; have minimum support, i.e., be zero almost everywhere; and be determined by the values at nodal points (we can choose the nodal values to be unity at one point and zero at the other nodal points). Since the degree of freedom is M and there are only M interior nodal points, we add M auxiliary points (not nodal points) between x i and

201 c8 7/9/6 7: page 9 # x i+ and define 8.3 High-order Elements 9 z i = x i, nodal points, (8.4) z i+ = x i + x i+, auxiliary points. (8.5) For instance, if the nodal points are x =, x = π/, x = π, then z = x, z = π/4, z = x = π/, z 3 = 3π/4, z 4 = x = π. Note that in general all the basis functions should be one piece in one element (z k, z k+ ), k =,,..., M. Now we can define the piecewise quadratic basis functions as ϕ i (z j ) = { if i = j, otherwise. (8.6) We can derive analytic expressions of the basis functions using the properties of quadratic polynomials. As an example, let us consider how to construct the basis functions in the first element (x, x ) corresponding to the interval (z, z ). In this element, z is the boundary point, z is a nodal point, and z = (z + z )/ is the mid-point (the auxiliary point). For ϕ (x), we have ϕ (z ) =, ϕ (z ) = and ϕ (z ) =, ϕ (z j ) =, j = 3,..., M ; so in the interval (z, z ), ϕ (x) has the form ϕ (x) = C(x z )(x z ), because z and z are roots of ϕ (x). We choose C such that ϕ (z ) =, so ϕ (z ) = C(z z )(z z ) =, = C = and the basis function ϕ (x) is (z z )(z z ) (x z )(x z ) if z x < z, ϕ (x) = (z z )(z z ) otherwise. It is easy to verify that ϕ (x) is a continuous piecewise quadratic function in the domain (x, x M ). Similarly, we have ϕ (x) = (z z )(z z ) (z z )(z z ).
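To visualize the construction, the two pieces just derived can be evaluated directly. The following few Matlab lines are our own illustration, using the example points z_0 = 0, z_1 = π/4, z_2 = π/2 quoted above:

```matlab
% Evaluate and plot the two quadratic basis functions on the first element
% (z0, z2), using the nodal/auxiliary points from the example in the text.
z0 = 0;  z1 = pi/4;  z2 = pi/2;
x  = linspace(z0, z2, 200);
phi0 = (x - z1).*(x - z2)/((z0 - z1)*(z0 - z2));   % equals 1 at z0, 0 at z1, z2
phi1 = (x - z0).*(x - z2)/((z1 - z0)*(z1 - z2));   % equals 1 at z1, 0 at z0, z2
plot(x, phi0, x, phi1);  legend('\phi_0', '\phi_1');
```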

202 c8 7/9/6 7: page 9 # 9 Issues of the FE Method in One Space Dimension Generally, the three global basis functions that are nonzero in the element (x i, x i+ ) have the following forms: if z < x i (z z i )(z z i ) if x i z < x i (z i z i )(z i z i ) ϕ i (z) = (z z i+ )(z z i+ ) if x i z < x i+ (z i z i+ )(z i z i+ ) if x i+ < z. if x < x i (z z i )(z z i+ ) ϕ i+ (z) = if x i z < x i+ (z i+ z i )(z i+ z i+ ) if x i+ < z. if x < x i (z z i )(z z i+ ) if x i z < x i+ (z i+ z i )(z i+ z i+ ) ϕ i+ (z) = (z z i+3 )(z z i+4 ) if x i z < x i+ (z i+ z i+3 )(z i+ z i+4 ) if x i+ < z. In Figure 8., we plot some quadratic basis functions in H. Figure 8.(a) is the plot of the shape functions, that is, the nonzero basis functions defined in the interval (, ). In Figure 8.(b), we plot all the basis functions over a three-node mesh in (, ). In Figure 8.(c), we 5 plot some basis functions over the entire domain, ϕ (x), ϕ (x), ϕ (x), ϕ6 4 (x), ϕ 4 (x), where ϕ (x) is centered at the auxiliary point z and nonzero at only one element while ϕ (x), ϕ 4 (x) are nonzero at two elements Assembling the Stiffness Matrix and the Load Vector The finite element solution can be written as M u h (x) = α i ϕ i (x). i= The entries of the coefficient matrix are {a ij } = a(ϕ i, ϕ j ) and the load vector is F i = L(ϕ i ). On each element (x i, x i+ ), or (z i, z i+ ), there are three nonzero

Figure 8.2. Quadratic basis functions in H¹: (a) the shape functions, i.e., the nonzero basis functions on the reference interval; (b) all the basis functions over a three-node mesh; and (c) some basis functions plotted over the entire domain, where the basis function centered at an auxiliary point is nonzero in only one element, while the basis functions centered at nodal points are nonzero in two elements.

basis functions: φ_{2i}, φ_{2i+1}, and φ_{2i+2}. Thus the local stiffness matrix is
$$
K^e_i =
\begin{bmatrix}
a(\varphi_{2i},\varphi_{2i}) & a(\varphi_{2i},\varphi_{2i+1}) & a(\varphi_{2i},\varphi_{2i+2}) \\
a(\varphi_{2i+1},\varphi_{2i}) & a(\varphi_{2i+1},\varphi_{2i+1}) & a(\varphi_{2i+1},\varphi_{2i+2}) \\
a(\varphi_{2i+2},\varphi_{2i}) & a(\varphi_{2i+2},\varphi_{2i+1}) & a(\varphi_{2i+2},\varphi_{2i+2})
\end{bmatrix}_{(x_i,\,x_{i+1})}
\qquad (8.7)
$$
and the local load vector is
$$
L^e_i =
\begin{bmatrix}
L(\varphi_{2i}) \\ L(\varphi_{2i+1}) \\ L(\varphi_{2i+2})
\end{bmatrix}_{(x_i,\,x_{i+1})},
\qquad (8.8)
$$
see the diagram in Figure 8.3 for an illustration.

Figure 8.3. Assembling the stiffness matrix using piecewise quadratic basis functions.

The stiffness matrix is still symmetric positive definite, but denser than that with the hat basis functions. It is still a banded matrix, with band width five, i.e., a penta-diagonal matrix. The advantage in using quadratic basis functions is that the finite element solution is more accurate than that obtained using the linear basis functions with the same mesh.

205 c8 7/9/6 7: page 95 #5 8.4 A D Matlab FE Package The Cubic Basis Functions in H (x l, x r ) Space We can also construct piecewise cubic basis functions in H (x l, x r ). On each element (x i, x i+ ), a cubic function has the form ϕ(x) = a i x 3 + b i x + c i x + d i, i =,,..., M. There are four parameters; and the total degree of freedom is 3M if a Dirichlet boundary condition is imposed at both ends. To construct cubic basis functions with properties similar to the piecewise linear and quadratic basis functions, we need to add two auxiliary points between x i and x i+. The local stiffness matrix is then a 4 4 matrix. We leave the construction of the basis functions, and the application to the Sturm Liouville BVPs as a project for students. 8.4 A D Matlab FE Package A general D Matlab package has been written and is available at the book s depository, www4.ncsu.edu/ zhilin/fd_fen_book/matlab/d or upon request. The code can be used to solve a general Sturm Liouville problem (p(x)u ) + c(x)u + q(x)u = f (x), a < x < b, with a Dirichlet, Neumann, or mixed boundary condition at x = a and x = b. We use conforming finite element methods. The mesh is x = a < x < x < x M = b, as elaborated again later. The finite element spaces can be piecewise linear, quadratic, or cubic functions over the mesh. The integration formulas are the Gaussian quadrature of order,, 3, or 4. The matrix assembly is element by element Gaussian Quadrature Formulas In a finite element method, we typically need to evaluate integrals such as b a p(x)ϕ i (x)ϕ j (x) dx, b a q(x)ϕ i(x)ϕ j (x) dx and b a f (x)ϕ i(x) dx over some In the package, p(x) is expressed as k(x).

206 c8 7/9/6 7: page 96 #6 96 Issues of the FE Method in One Space Dimension intervals (a, b) such as (x i, x i ). Although the functions involved may be arbitrary, it is usually neither practical nor necessary to find the exact integrals. A standard approach is to transfer the interval of integration to the interval (, ) as follows where In this way, we have b a b a f (x) dx = f(ξ) dξ, (8.9) ξ = x a b a + x b b a or x = a + b a ( ) + ξ, = dξ = b a dx or dx = b a dξ. f (x) dx = b a ( f a + b a ( + ξ) ) dξ = b a f(ξ) dξ, (8.) (8.) where f(ξ) = f(a + b a ( + ξ)); and then to use a Gaussian quadrature formula to approximate the integral. The general quadrature formula can be written g(ξ)dξ N w i g(ξ i ), where the Gaussian points ξ i and weights w i are chosen so that the quadrature is as accurate as possible. In the Newton Cotes quadrature formulas such as the mid-point rule, the trapezoidal rule, and the Simpson methods, the ξ i are predefined independent of w i. In Gaussian quadrature formulas, all ξ i s and w i s are unknowns, and are determined simultaneously such that the quadrature formula is exact for g(x) =, x,..., x N. The number N is called the algebraic precision of the quadrature formula. Gaussian quadrature formulas have the following features: accurate with the best possible algebraic precision using the fewest points; open with no need to use two end points where some kind of discontinuities may occur, e.g., the discontinuous derivatives of the piecewise linear functions at nodal points; no recursive relations for the Gaussian points ξ i s and the weights w i s; accurate enough for finite element methods, because b a h is generally small and only a few points ξ i s are needed. We discuss some Gaussian quadrature formulas below. i=

Gaussian Quadrature of Order 1 (One Point): With only one point, the Gaussian quadrature can be written as
$$\int_{-1}^{1} g(\xi)\,d\xi \approx w_1\, g(\xi_1).$$
We choose ξ_1 and w_1 such that the quadrature formula has the highest possible algebraic precision (N = 1 when only one point is used). Thus we choose g(ξ) = 1 and g(ξ) = ξ to have the following:
for g(ξ) = 1,  ∫_{-1}^{1} g(ξ) dξ = 2 = w_1;
for g(ξ) = ξ,  ∫_{-1}^{1} g(ξ) dξ = 0 = w_1 ξ_1.
Thus we get w_1 = 2 and ξ_1 = 0. The quadrature formula is simply the mid-point rule.

Gaussian Quadrature of Order 2 (Two Points): With two points, the Gaussian quadrature can be written as
$$\int_{-1}^{1} g(\xi)\,d\xi \approx w_1\, g(\xi_1) + w_2\, g(\xi_2).$$
We choose ξ_1, ξ_2 and w_1, w_2 such that the quadrature formula has the highest possible algebraic precision (N = 3 when two points are used). Thus we choose g(ξ) = 1, g(ξ) = ξ, g(ξ) = ξ², and g(ξ) = ξ³ to have the following:
for g(ξ) = 1,   ∫_{-1}^{1} g(ξ) dξ = 2   = w_1 + w_2;
for g(ξ) = ξ,   ∫_{-1}^{1} g(ξ) dξ = 0   = w_1 ξ_1 + w_2 ξ_2;
for g(ξ) = ξ²,  ∫_{-1}^{1} g(ξ) dξ = 2/3 = w_1 ξ_1² + w_2 ξ_2²;
for g(ξ) = ξ³,  ∫_{-1}^{1} g(ξ) dξ = 0   = w_1 ξ_1³ + w_2 ξ_2³.
On solving this nonlinear system of four equations by taking advantage of the symmetry, we get w_1 = w_2 = 1, ξ_1 = −1/√3, and ξ_2 = 1/√3.

So the Gaussian quadrature formula of order 2 is
$$\int_{-1}^{1} g(\xi)\,d\xi \approx g\!\left(-\tfrac{1}{\sqrt{3}}\right) + g\!\left(\tfrac{1}{\sqrt{3}}\right).$$
Higher-order Gaussian quadrature formulas are likewise obtained, and for efficiency we can prestore the Gaussian points and weights in two separate matrices, one for the ξ_i and one for the w_i. Up to order 4 these are the standard Gauss–Legendre values:

order 1:  ξ_i = 0;                                    w_i = 2;
order 2:  ξ_i = ±1/√3;                                w_i = 1, 1;
order 3:  ξ_i = 0, ±√(3/5);                           w_i = 8/9, 5/9, 5/9;
order 4:  ξ_i = ±0.3399810436, ±0.8611363116;         w_i = 0.6521451549, 0.3478548451.

Below is a Matlab code setint.m to store the Gaussian points and weights up to order 4.
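Since the original setint.m listing is not reproduced here, the following is a minimal sketch of such a routine (the 4 × 4 storage layout is our assumption): row n of each matrix holds the n-point Gauss–Legendre rule.

```matlab
% Minimal sketch of a setint-like routine: prestore Gauss-Legendre points
% and weights up to order 4; row n holds the n-point rule, unused entries
% stay zero.
function [xi, w] = setint()
  xi = zeros(4,4);  w = zeros(4,4);
  xi(1,1)   = 0;                          w(1,1)   = 2;
  xi(2,1:2) = [-1 1]/sqrt(3);             w(2,1:2) = [1 1];
  xi(3,1:3) = [-sqrt(3/5) 0 sqrt(3/5)];   w(3,1:3) = [5/9 8/9 5/9];
  xi(4,1:4) = [-0.8611363116 -0.3399810436 0.3399810436 0.8611363116];
  w(4,1:4)  = [ 0.3478548451  0.6521451549 0.6521451549 0.3478548451];
end
```

With this layout, ∫_a^b g(x) dx ≈ ((b − a)/2) Σ_{i=1}^{n} w(n,i) g(a + (b − a)(1 + ξ(n,i))/2).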

209 c8 7/9/6 7: page 99 #9 8.4 A D Matlab FE Package Shape Functions Similar to transforming an integral over some arbitrary interval to the integral over the standard interval between and, it is easier to evaluate the basis functions and their derivatives in the standard interval (, ). Basis functions in the standard interval (, ) are called shape functions and often have analytic forms. Using the transform between x and ξ in (8.9) (8.) for each element, on assuming c(x) = we have xi+ x i ( p(x)ϕ i ϕ j + q(x)ϕ i ϕ j ) dx = xi+ x i f (x)ϕ i (x) dx which is transformed to x i+ x i ( p(ξ)ψ i ψ j + q(ξ)ψ i ψ j ) dξ = x i+ x i f(ξ)ψi dξ where ( p(ξ) = p x i + x ) i+ x i ( + ξ), and so on. Here ψ i and ψ j are the local basis functions under the new variables, i.e., the shape functions and their derivatives. For piecewise linear functions,

there are only two nonzero shape functions
$$\psi_1 = \frac{1 - \xi}{2}, \qquad \psi_2 = \frac{1 + \xi}{2}, \qquad (8.3)$$
with derivatives
$$\psi_1' = -\frac{1}{2}, \qquad \psi_2' = \frac{1}{2}. \qquad (8.4)$$
There are three nonzero quadratic shape functions
$$\psi_1 = \frac{\xi(\xi - 1)}{2}, \qquad \psi_2 = 1 - \xi^2, \qquad \psi_3 = \frac{\xi(\xi + 1)}{2}, \qquad (8.5)$$
with derivatives
$$\psi_1' = \xi - \frac{1}{2}, \qquad \psi_2' = -2\xi, \qquad \psi_3' = \xi + \frac{1}{2}. \qquad (8.6)$$
These hat (linear) and quadratic shape functions are plotted in Figure 8.4.

Figure 8.4. Plot of some shape functions: (a) the hat (linear) functions and (b) the quadratic functions.

It is noted that there is an extra factor in the derivatives with respect to x, due to the transform:
$$\frac{d\varphi_i}{dx} = \frac{d\psi_i}{d\xi}\,\frac{d\xi}{dx} = \frac{2}{x_{i+1} - x_i}\,\psi_i'.$$
The shape functions can be defined in a Matlab function [psi, dpsi] = shape(xi, n), where n = 1 renders the linear basis function, n = 2 the quadratic basis function, and n = 3 the cubic basis function values. For example, with n = 2 the outputs are
psi(1), psi(2), psi(3), three basis function values,
dpsi(1), dpsi(2), dpsi(3), three derivative values.

The Matlab subroutine is as follows.
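The listing of shape.m is not reproduced here; the following sketch follows the calling convention [psi, dpsi] = shape(xi, n) described above for the linear and quadratic cases (the cubic case is omitted in this sketch):

```matlab
% Sketch of a shape-function routine with the interface described above:
% n = 1 gives the linear (hat) shape functions, n = 2 the quadratic ones,
% on the master element (-1, 1); psi holds values, dpsi holds derivatives
% with respect to xi.
function [psi, dpsi] = shape(xi, n)
  if n == 1
    psi  = [(1 - xi)/2, (1 + xi)/2];
    dpsi = [-1/2, 1/2];
  elseif n == 2
    psi  = [xi*(xi - 1)/2, 1 - xi^2, xi*(xi + 1)/2];
    dpsi = [xi - 1/2, -2*xi, xi + 1/2];
  else
    error('cubic shape functions are not included in this sketch');
  end
end
```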

212 c8 7/9/6 7: page # Issues of the FE Method in One Space Dimension e e e3 e4 x x3 x4 x x5 e e x x3 x4 x x5 Figure 8.5. Example of the relation between nodes and elements: (a) linear basis functions and (b) quadratic basis functions The Main Data Structure In one space dimension, a mesh is a set of ordered points as described below. Nodal points: x = a, x,..., x nnode = b. The number of total nodal points plus the auxiliary points is nnode. Elements: Ω, Ω,..., Ω nelem. The number of elements is nelem. Connection between the nodal points and the elements: nodes(nnode, nelem), where nodes(j, i) is the j-th index of the nodes in the i-th element. For the linear basis function, j =, since there are two nodes in an element; for the quadratic basis function, j =,, 3 since there are two nodes and an auxiliary point. Example. Given the mesh and the indexing of the nodal points and the elements in Figure 8.5, for linear basis functions, we have nodes(, ) =, nodes(, ) = 3, nodes(, ) = 3, nodes(, ) = 4, nodes(, 3) = 4, nodes(, 4) =, nodes(, 3) =, nodes(, 4) = 5. Example. Given the mesh and the indexing of the nodal points and the elements in Figure 8.5, for quadratic basis functions, we have nodes(, ) = 4, nodes(, ) =, nodes(3, ) = 5, nodes(, ) =, nodes(, ) = 3, nodes(3, ) = Outline of the Algorithm

Assembling Element by Element

The Matlab code is formkf.m.
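The formkf.m listing is not reproduced here either; the following is a minimal sketch of element-by-element assembly in the same spirit, using the setint and shape sketches given earlier. The function name, the assumption c(x) = 0, and the coefficient functions pfun, qfun, ffun are ours, not the package's.

```matlab
% Sketch of element-by-element assembly of the global stiffness matrix gk
% and load vector gf for -(p u')' + q u = f (c = 0 assumed), using the
% connectivity array nodes(:,:), the n-point Gauss rule nint, and the
% element order k (1 = linear, 2 = quadratic).
function [gk, gf] = formkf_sketch(x, nodes, k, nint, pfun, qfun, ffun)
  [xi, w] = setint();
  n  = length(x);
  gk = zeros(n, n);  gf = zeros(n, 1);
  for e = 1:size(nodes, 2)
    idx = nodes(:, e);                       % global indices of this element
    xa  = x(idx(1));  xb = x(idx(end));
    J   = (xb - xa)/2;                       % Jacobian dx/dxi
    for g = 1:nint                           % loop over Gauss points
      [psi, dpsi] = shape(xi(nint, g), k);
      xg   = xa + (xb - xa)*(1 + xi(nint, g))/2;
      dphi = dpsi/J;                         % derivatives with respect to x
      gk(idx, idx) = gk(idx, idx) + w(nint, g)*J* ...
                     (pfun(xg)*(dphi'*dphi) + qfun(xg)*(psi'*psi));
      gf(idx)      = gf(idx) + w(nint, g)*J*ffun(xg)*psi';
    end
  end
end
```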

Evaluation of Local Stiffness Matrix and the Load Vector

The Matlab code is elem.m.

Global Assembling

The Matlab code is assemb.m.

Input Data

The Matlab code is propset.m. The relation between the number of nodes nnode and the number of elements nelem is:

Linear: nelem = nnode − 1.
Quadratic: nelem = (nnode − 1)/2.
Cubic: nelem = (nnode − 1)/3.
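For a uniform mesh these relations can be generated automatically. The following small sketch (ours, not datain.m) builds the nodal points and a nodes(:,:) connectivity array for element order k = 1, 2, or 3, numbering the points from left to right (which differs from the reordered numbering in the example of Figure 8.5):

```matlab
% Sketch: uniform mesh and connectivity for element order k, consistent with
% the relation nelem = (nnode - 1)/k stated above.
function [x, nodes] = uniform_mesh(a, b, nelem, k)
  nnode = k*nelem + 1;                 % total points, including auxiliary ones
  x = linspace(a, b, nnode)';
  nodes = zeros(k+1, nelem);
  for e = 1:nelem
    nodes(:, e) = (k*(e-1) + 1 : k*e + 1)';   % the k+1 points of element e
  end
end
```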

215 c8 7/9/6 7: page 5 #5 8.4 A D Matlab FE Package 5 Nodes arranged in ascendant order. Equally spaced points are grouped together. The Matlab code is datain.m function [data] = datain(a,b,nnode,nelem) The output data has nrec groups data(i, ) = n, data(i, ) = n, data(i, 3) = x(n), data(i, 4) = x(n + n), The simple case is index of the beginning of nodes. number of points in this group. the first nodal point. the last nodal point in this group. data(i, ) = i, data(i, ) =, data(i, 3) = x(i), data(i, 4) = x(i). The basis functions to be used in each element: Input Boundary Conditions The Matlab code aplybc.m involves an array of two elements kbc() and a data array vbc(, ). At the left boundary kbc() =, vbc(, ) = u a, Dirichlet BC at the left end; kbc() =, vbc(, ) = p(a)u (a), Neumann BC at the left end; kbc() = 3, vbc(, ) = uxma, vbc(, ) = uaa, Mixed BC of the form: p(a)u (a) = uxma(u(a) uaa). The BC will affect the stiffness matrix and the load vector and are handled in Matlab codes aplybc.m and drchlta.m. Dirichlet BC u(a) = u a = vbc(, ). where gk is the global stiffness matrix and gf is the global load vector.

216 c8 7/9/6 7: page 6 #6 6 Issues of the FE Method in One Space Dimension Neumann BC u (a) = u xa. The boundary condition can be rewritten as p(a)u (a) = p(a)u xa = vbc(, ). We only need to change the load vector. Mixed BC αu(a) + βu (a) = γ, β. The BC can be rewritten as p(a)u (a) = α ( β p(a) u(a) γ ) α = u xma (u(a) u aa ) = vbc(, ) (u(a) vbc(, )). We need to change both the global stiffness matrix and the global load vector. Examples.. u(a) =, we should set kbc() = and vbc(, ) =.. k(x) = + x, a =, u () =. Since p(a) = p() = 6 and p(a)u (a) =, we should set kbc() = and vbc(, ) =. 3. p(x) = + x, a =, u(a) + 3u (a) =. Since 3u (a) = u(a) +, 6u (a) = 4u(a) + = 4 ( u(a) ), we should set kbc() = 3, vbc(, ) = 4 and vbc(, ) = /. Similarly, at the right BC x = b we should have kbc() =, vbc(, ) = u b, Dirichlet BC at the right end; kbc() =, vbc(, ) = p(b)u (b), Neumann BC at the right end; kbc() = 3, vbc(, ) = uxmb, vbc(, ) = ubb, Mixed BC of the form p(b)u (b) = uxmb(u(b) ubb). The BC will affect the stiffness matrix and the load vector and are handled in Matlab codes aplybc.m and drchlta.m. Dirichlet BC u(b) = u b = vbc(, ).

217 c8 7/9/6 7: page 7 #7 8.4 A D Matlab FE Package 7 Neumann BC u (b) = u xb is given. The boundary condition can be rewritten as p(b)u (b) = p(b)u xb = vbc(, ). We only need to change the load vector. Mixed BC αu(b) + βu (b) = γ, β. The BC can be re-written as p(b)u (b) = α ( β p(b) u(b) γ ) α = u xmb (u(b) u bb ) = vbc(, ) (u(b) vbc(, )). We need to change both the global stiffness matrix and the global load vector A Testing Example To check the code, we often try to compare the numerical results with some known exact solution, e.g., we can choose If we set the material parameters as u(x) = sin x, a x b. p(x) = + x, c(x) = cos x, q(x) = x, then the right-hand side can be calculated as f (x) = (pu ) + cu + qu = ( + x) sin x cos x + cos x + x sin x. These functions are defined in the Matlab code getmat.m The mesh is defined in the Matlab code datain.m. All other parameters used for the finite element method are defined in the Matlab code propset.m, including the following: The boundary x = a and x = b, e.g., a =, b = 4. The number of nodal points, e.g., nnode = 4. The choice of basis functions. If we use the same basis function, then for example we can set inf_ele =, which is the quadratic basis function kind(i) = inf_ele. The number of elements. If we use uniform elements, then nelem = (nnode )/inf_ele. We need to make it an integer.

218 c8 7/9/6 7: page 8 #8 8 Issues of the FE Method in One Space Dimension (a) a=, b=4, nnode = 4, inf e le =, nint=, kbc()=, kbc() = 3 (b) a=, b=4, nnode = 4, inf e le =, nint=4, kbc()=3, kbc() = Figure 8.6. Error plots of the FE solutions at nodal points. (a) The result is obtained with piecewise linear basis function and Gaussian quadrature of order in the interval [, 4]. Dirichlet BC at x = a and mixed BC 3u(b) + 4u (b) = γ from the exact solution u(x) = sin x are used. The magnitude of the error is O( 4 ). (b) Mixed BC 3u(a) + 4u (a) = γ at x = a and the Neumann BC at x = b from the exact solution are used. The result is obtained with piecewise quadratic basis functions and Gaussian quadrature of order 4. The magnitude of the error is O( 6 ). The choice of Gaussian quadrature formula, e.g., nint(i) = 4. The order of the Gaussian quadrature formula should be the same or higher than the order of the basis functions, for otherwise it may not converge! For example, if we use linear elements (i.e., inf_ele = ), then we can choose nint(i) = or nint(i) =, etc. Determine the BC kbc() and kbc(), and vbc(i, j), i, j =,. Note that the linear system of equations is singular if both BC are Neumann, for the solution either does not exist or is not unique. To run the program, simply type the following into the Matlab: To find out the detailed usage of the finite element code, read README carefully. Figure 8.6 gives the error plots for two different boundary conditions. 8.5 The FE Method for Fourth-Order BVPs in D Let us now discuss how to solve fourth-order differential equations using the finite element method. An important fourth-order differential equation is the

219 c8 7/9/6 7: page 9 #9 8.5 The FE Method for Fourth-Order BVPs in D 9 biharmonic equation, such as in the model problem u + q(x)u = f (x), < x <, subject to the BC I : u() = u () =, u() = u () = ; or II : u() = u () =, u() =, u () = ; or III : u() = u () =, u () =, u () =. Note that there is no negative sign in the highest derivative term. To derive the weak form, we again multiply by a test function v(x) V and integrate by parts to get (u + q(x)u)v dx = u v u v dx + quv dx = fv dx, fv dx, u v u v + ( u v + quv ) dx = fv dx, u ()v() u ()v() u ()v () + u ()v () + = fv dx. ( u v + quv ) dx For u() = u () =, u() = u () =, they are essential boundary conditions, thus we set The weak form is v() = v () = v() = v () =. (8.7) where the bilinear form and the linear form are a(u, v) = L(v) = a(u, v) = f (v), (8.8) ( u v + quv ) dx, (8.9) fv dx. (8.3) Since the weak form involves second-order derivatives, the solution space is H (, ) = {v(x), v() = v () = v() = v () =, v, v and v L }, (8.3)

220 c8 7/9/6 7: page #3 Issues of the FE Method in One Space Dimension and from the Sobolev embedding theorem we know that H C. For the boundary conditions u() = u () =, we still have v() =, but there is no restriction on v () and the solution space is { } H E = v(x), v() = v () = v() =, v H (, ). (8.3) For the boundary conditions u () = u () =, there are no restrictions on both v() and v () and the solution space is { } H E = v(x), v() = v () =, v H (, ). (8.33) For nonhomogeneous natural or mixed boundary conditions, the weak form and the linear form may be different. For homogeneous essential BC, the weak form and the linear form will be the same. We often need to do something to adjust the essential boundary conditions. Given a mesh 8.5. The Finite Element Discretization = x < x < x < < x M =, we want to construct a finite-dimensional space V h. For conforming finite element methods we have V h H (, ), therefore we cannot use the piecewise linear functions since they are in the Sobolev space H (, ) but not in H (, ). For piecewise quadratic functions, theoretically we can find a finitedimensional space that is a subset of H (, ); but this is not practical as the basis functions would have large support and involve at least six nodes. The most practical conforming finite-dimensional space in D is the piecewise cubic functions over the mesh } V h = {v(x), v(x) is a continuous piecewise cubic function, v H (, ). (8.34) The degree of freedom. On each element, we need four parameters to determine a cubic function. For essential boundary conditions at both x = a and x = b, there are 4M parameters for M elements; and at each interior nodal point, the cubic and its derivative are continuous and there are four boundary conditions, so the dimension of the finite element space is 4M (M ) 4 = (M ).

221 c8 7/9/6 7: page #3 8.5 The FE Method for Fourth-Order BVPs in D Construct the Basis Functions in H in D Since the derivative has to be continuous, we can use the piecewise Hermite interpolation and construct the basis functions in two categories. The first category is ϕ i (x j ) = { if i = j, otherwise, and ϕ i(x j ) =, for any x j, (8.35) i.e., the basis functions in this group have unity at one node and are zero at other nodes, and the derivatives are zero at all nodes. To construct the local basis function in the element (x i, x i+ ), we can set ϕ i (x) = (x x i+) (a(x x i ) + ) (x i x i+ ). It is obvious that ϕ i (x i ) = and ϕ i (x i+ ) = ϕ i (x i+) =, i.e., x i+ is a double root of the polynomial. We use ϕ (x i ) = to find the coefficient a, to finally obtain ( ) (x (x x i+ ) xi ) (x ϕ i (x) = i+ x i ) + (x i x i+ ). (8.36) The global basis function can thus be written as if x x i, ( ) (x (x x i ) xi ) (x i x i ) + (x i x i ) if x i x x i, ϕ i (x) = ( ) (x (x x i+ ) xi ) (x i+ x i ) + (x i x i+ ), if x i x x i+, if x i+ x. (8.37) There are M such basis functions. The second group of basis functions satisfy ϕ i(x j ) = { if i = j, otherwise, and ϕi (x j ) =, for any x j, (8.38)

222 c8 7/9/6 7: page #3 Issues of the FE Method in One Space Dimension i.e., the basis functions in this group are zero at all nodes, and the derivatives are unity at one node and zero at other nodes. To construct the local basis functions in an element (x i, x i+ ), we can set ϕ i (x) = C(x x i )(x x i+ ), since x i and x i+ are zeros of the cubic and x i+ is a double root of the cubic. The constant C is chosen such that ψ i (x i) =, so we finally obtain ϕ i (x) = (x x i)(x x i+ ) (x i x i+ ). (8.39) The global basis function for this category is thus if x x i, (x x i )(x x i ) (x i x i ) ϕ if x i x x i, i (x) = (x x i )(x x i+ ) (x i+ x i ) if x i x x i+, if x i+ x. (8.4) 8.5. The Shape Functions There are four shape functions in the interval (, ), namely, ψ (ξ) = (ξ ) (ξ + ), 4 ψ (ξ) = (ξ + ) ( ξ + ), 4 ψ 3 (ξ) = (ξ ) (ξ + ), 4 ψ 4 (ξ) = (ξ + ) (ξ ). 4 In Figure 8.7, we show the shape functions and some global basis functions from the Hermite cubic interpolation. It is noted that there are two basis functions centered at each node, so-called a double node. The finite element solution can be written as M M u h (x) = α j ϕ j (x) + β j ϕj (x), (8.4) j= j=

Figure 8.7. (a) Hermite cubic shape functions on a master element and (b) corresponding global functions at a node i in a mesh.
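For reference, the four Hermite cubic shape functions listed above and their ξ-derivatives can be evaluated as in the following sketch (the function name is ours); when mapping to a physical element, the derivative degrees of freedom pick up the usual Jacobian factor, as in the quadratic case.

```matlab
% Sketch: the four Hermite cubic shape functions of the master element (-1, 1)
% and their derivatives with respect to xi.
function [psi, dpsi] = hermite_shape(xi)
  psi  = [ (xi - 1)^2*(xi + 2)/4, ...   % value 1, slope 0 at xi = -1
           (xi + 1)^2*(2 - xi)/4, ...   % value 1, slope 0 at xi = +1
           (xi - 1)^2*(xi + 1)/4, ...   % value 0, slope 1 at xi = -1
           (xi + 1)^2*(xi - 1)/4 ];     % value 0, slope 1 at xi = +1
  dpsi = [ 3*(xi^2 - 1)/4, ...
          -3*(xi^2 - 1)/4, ...
           (3*xi^2 - 2*xi - 1)/4, ...
           (3*xi^2 + 2*xi - 1)/4 ];
end
```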

224 c8 7/9/6 7: page 4 #34 4 Issues of the FE Method in One Space Dimension and after the coefficients α j and β j are found we have u h (x j ) = α j, u h (x j) = β j. There are four nonzero basis functions on each element (x i, x i+ ); and on adopting the order ϕ, ϕ, ψ, ψ, ϕ 3, ϕ 3, ψ 3,..., the local stiffness matrix has the form a(ϕ i, ϕ i ) a(ϕ i, ϕ i+ ) a(ϕ i, ϕ i ) a(ϕ i, ϕ i+ ) a(ϕ i+, ϕ i ) a(ϕ i+, ϕ i+ ) a(ϕ i+, ϕ i ) a(ϕ i+, ϕ i+ ) a( ϕ i, ϕ i ) a( ϕ i, ϕ i+ ) a( ϕ i, ϕ i ) a( ϕ i, ϕ. i+ ) a( ϕ i+, ϕ i ) a( ϕ i+, ϕ i+ ) a( ϕ i+, ϕ i ) a( ϕ i+, ϕ i+ ) 3 (x i,x i+ ) This global stiffness matrix is still banded, and has band width six. 8.6 The Lax Milgram Lemma and the Existence of FE Solutions One of the most important issues is whether the weak form has a solution, and if so under what assumptions. Further, if the solution does exist, is it unique, and how close is it to the solution of the original differential equations? Answers to these questions are based on the Lax Milgram Lemma General Settings: Assumptions and Conditions Let V be a Hilbert space with an inner product (u, v) V and the norm u V = (u, u)v, e.g., C m, the Sobolev spaces H and H, etc. Assume there is a bilinear form and a linear form that satisfy the following conditions: a(u, v), V V R, L(v), V R,. a(u, v) is symmetric, i.e., a(u, v) = a(v, u);. a(u, v) is continuous in both u and v, i.e., there is a constant γ such that a(u, v) γ u V v V, for any u and v V; the norm of the operator a(u, v) ; 4 If this condition is true, then a(u, v) is called a bounded operator and the least 56 lower bound of such a γ > is called the norm of a(u, v).

225 c8 7/9/6 7: page 5 # The Lax Milgram Lemma and the Existence of FE Solutions 5 3. a(u, v) is V-elliptic, i.e., there is a constant α such that a(v, v) α v V for any v V (alternative terms are coercive, or inf sup condition); and 4. L is continuous, i.e., there is a constant Λ such that for any v V. L(v) Λ v V, 8.6. The Lax Milgram Lemma Theorem 8.. Under the above conditions to 4, there exists a unique element u V such that a(u, v) = L(v), v V. Furthermore, if the condition is also true, i.e., a(u, v) is symmetric, then. u V Λ α and. u is the unique global minimizer of F(v) = a(v, v) L(v). Sketch of the proof. The proof exploits the Riesz representation theorem from functional analysis. Since L(v) is a bounded linear operator in the Hilbert space V with the inner product a(u, v), there is unique element u in V such that L(v) = a(u, v), v V. Next we show that the a-norm is equivalent to V norm. From the continuity condition of a(u, v), we get u a = a(u, u) γ u V = γ u V. From the V-elliptic condition, we have u a = a(u, u) α u V = α u V, therefore α u V u a γ u V,

226 c8 7/9/6 7: page 6 #36 6 Issues of the FE Method in One Space Dimension or u a u V u a. γ α Often u a is called the energy norm. Now we show that F(u ) the global minimizer. For any v V, if a(u, v) = a(v, u), then F(v) = F(u + v u ) = F(u + w) = a(u + w, u + w) L(u + w) = (a(u + w, u ) + a(u + w, w)) L(u ) L(w) = (a(u, u ) + a(w, u ) + a(u, w) + a(w, w)) L(u ) L(w) = a(u, u ) L(u ) + a(w, w) + a(u, w) L(w) = F(u ) + a(w, w) F(u ). Finally we show the proof of the stability. We have therefore α u V a(u, u ) = L(u ) Λ u V, α u V Λ u V = u V Λ α. Remark: The Lax Milgram Lemma is often used to prove the existence and uniqueness of the solutions of ODEs/PDEs An Example using the Lax Milgram Lemma Let us consider the D Sturm Liouville problem once again: The bilinear form is (pu ) + qu = f, a < x < b, u(a) =, αu(b) + βu (b) = γ, β, α β. a(u, v) = and the linear form is b a ( pu v + quv ) dx + α β p(b)u(b)v(b), L(v) = ( f, v) + γ β p(b)v(b).

227 c8 7/9/6 7: page 7 # The Lax Milgram Lemma and the Existence of FE Solutions 7 The space is V = H E (a, b). To consider the conditions of the Lax Milgram theorem, we need the Poincaré inequality: Theorem 8.3. If v(x) H and v(a) =, then b a b v dx (b a) v (x) dx or a b a v (x) dx (b a) b a v dx. (8.4) Proof We have so that = b a v(x) = = v(x) x a x a v (t) dt { x v (t) dt { } / b b a v (t), v (x) (b a) v (x) dx (b a) This completes the proof. b a b a a a v (t) dt b v (t) dt a } / { x } / v (t) dt dt a b dx (b a) v (x) dx. We now verify the Lax Milgram Lemma conditions for the Sturm Liouville problem. Obviously a(u, v) = a(v, u). The bilinear form is continuous: b ( a(u, v) = pu v + quv ) dx + α β p(b)u(b)v(b) a ( b ( max{ p max, q max } u v + uv ) ) dx + α β u(b)v(b) ( b max{ p max, q max } u v dx + a a b ( max{ p max, q max } u v + α β ) u(b)v(b). a a uv dx + α β u(b)v(b) )

228 c8 7/9/6 7: page 8 #38 8 Issues of the FE Method in One Space Dimension From the inequality b u(b)v(b) = u (x) dx a b b a v (x) dx b (b a) a b u (x) dx a v (x) dx b (b a) (b a) u v, a ( u (x) + u(x) ) dx a ( v (x) + v(x) ) dx we get ( a(u, v) max{ p max, q max } + α β ) (b a) u v i.e., the constant γ can be determined as ( γ = max{ p max, q max } + α β ) (b a). a(v, v) is V-elliptic. We have a(v, v) = b a b a ( p(v ) + qv ) dx + α β p(b)v(b) p(v ) dx b p min (v ) dx a ( b = p min (v ) dx + ) b (v ) dx a a ( b p min (b a) v dx + ) b (v ) dx a a { = p min min (b a), } v, i.e., the constant α can be determined as { α = p min min (b a), }.

229 c8 7/9/6 7: page 9 # The Lax Milgram Lemma and the Existence of FE Solutions 9 L(v) is continuous because b L(v) = f (x)v(x) dx + γ a β p(b)v(b) L(v) ( f, v ) + γ β p(b) b a v f v + γ β p(b) b a v ( ) f + γ β p(b) b a v, i.e., the constant Λ can be determined as Λ = f + γ β p(b) b a. Thus we have verified the conditions of the Lax Milgram lemma under certain assumptions such as p(x) p min >, q(x), etc., and hence conclude that there is the unique solution in H e(a, b) to the original differential equation. The solution also satisfies f + γ/ β p(b) u { }. p min min (b a), Abstract FE Methods In the same setting, let us assume that V h is a finite-dimensional subspace of V and that {ϕ, ϕ,..., ϕ M } is a basis for V h. We can formulate the following abstract finite element method using the finite-dimensional subspace V h. We seek u h V h such that or equivalently a(u h, v) = L(v), v V h, (8.43) F(u h ) F(v), v V h. (8.44) We apply the weak form in the finite-dimensional V h : a(u h, ϕ i ) = L(ϕ i ), i =,..., M. (8.45)

230 c8 7/9/6 7: page #4 Issues of the FE Method in One Space Dimension Let the finite element solution u h be u h = M α j ϕ j. Then from the weak form in V h we get M M a α j ϕ j, ϕ i = α j a(ϕ j, ϕ i ) = L(ϕ i ), i =,..., M, j= j= which in the matrix-vector form is j= AU = F, where U R M, F R M with F(i) = L(ϕ i ) and A is an M M matrix with entries a ij = a(ϕ j, ϕ i ). Since any element in V h can be written as i= v = M η i ϕ i, i= we have M M a(v, v) = a η i ϕ i, η j ϕ j = j= M η i a(ϕ i, ϕ j )η j = η T Aη > provided η T = {η,..., η M } =. Consequently, A is symmetric positive definite. The minimization form using V h is ( ) UT AU F T U = min η R M ηt Aη F T η. (8.46) The existence and uniqueness of the abstract FE method. Since the matrix A is symmetric positive definite and it is invertible, so there is a unique solution to the discrete weak form. Also from the conditions of Lax Milgram lemma, we have whence i,j= α u h V a(u h, u h ) = L(u h ) Λ u h V, u h V Λ α.

231 c8 7/9/6 7: page #4 8.7 *D IFEM for Discontinuous Coefficients Error estimates. If e h = u u h is the error, then: a(e h, v h ) = (e h, v h ) a =, v h V h ; u u h a = a(e h, e h ) u v h a, v h V h, i.e., u h is the best approximation to u in the energy norm; and u u h V γ α u v h V, v h V h, which gives the error estimates in the V norm. Sketch of the proof: From the weak form, we have a(u, v h ) = L(v h ), a(u h, v h ) = L(v h ) = a(u u h, v h ) =. This means the finite element solution is the projection of u onto the space V h. It is the best solution in V h in the energy norm, because u v h a = a(u v h, u v h ) = a(u u h + w h, u u h + w h ) = a(u u h, u u h ) + a(u u h, w h ) + a(w h, u u h ) + a(w h, w h ) = a(u u h, u u h ) + a(w h, w h ) u u h a, where w h = u h v h V h. Finally, from the condition 3, we have α u u h V a(u u h, u u h ) = a(u u h, u u h ) + a(u u h, w h ) = a(u u h, u u h + w h ) = a(u u h, u v h ) γ u u h V u v h V. The last inequality is obtained from condition. 8.7 *D IFEM for Discontinuous Coefficients Now we revisit the D interface problems discussed in Section. (pu ) = f (x), < x <, u() =, u() =, (8.47) and consider the case in which the coefficient has a finite jump, { β (x) if < x < α, p(x) = (8.48) β + (x) if α < x <. The theoretical analysis about the solution still holds if the natural jump conditions [u] α =, [ β u ] =, (8.49) α are satisfied, where [u] α means the jump defined at α.

232 c8 7/9/6 7: page #4 Issues of the FE Method in One Space Dimension Given a uniform mesh x i, i =,,..., n, x i+ x i = h. Unless the interface α in (8.47) itself is a node, the solution obtained from the standard finite element method using the linear basis functions is only first-order accurate in the maximum norm. In Li (998), modified basis functions that are defined below, {, if k = i, ϕ i (x k ) =, otherwise, (8.5) [ϕ i ] α =, [β ϕ i] α =, (8.5) are proposed. Obviously, if x j α < x j+, then only ϕ j and ϕ j+ need to be changed to satisfy the second jump condition. Using the method of undetermined coefficients, that is, we look for the basis function ϕ j (x) in the interval (x j, x j+ ) as ϕ j (x) = { a + a x if x j x < α, b + b x if α x x j+, (8.5) which should satisfy ϕ j (x j ) =, ϕ j (x j+ ) =, ϕ j (α ) = ϕ j (α+), and β ϕ j (α ) = β + ϕ j (α+). There are four unknowns and four conditions. It has been proved in Li (998) that the coefficients are unique determined and have the following closed form if β is a piecewise constant and β β + >,, x < x j,, x < x j, x x j, x j x < x j, h x j x ϕ j (x) = D +, x j x < α, ρ (x j+ x), α x < x j+, D, x j+ x, where ρ = β β +, x x j D, x j x < α, ρ (x x ϕ j+ (x) = j+ ) +, α x < x j+, D x j+ x, x j+ x x j+, h, x j+ x. D = h β+ β β + (x j+ α). Figure 8.8 shows several plots of the modified basis functions ϕ j (x), ϕ j+ (x), and some neighboring basis functions, which are the standard hat functions. At the interface α, we can clearly see kinks in the basis functions which reflect the natural jump condition. Using the modified basis functions, it has been shown in Li (998) that the finite element solution obtained from the Galerkin finite method with the new basis functions is second-order accurate in the maximum norm.
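As an illustration of the closed form above (piecewise constant β, with the interface α inside the element (x_j, x_{j+1})), the two modified basis functions can be evaluated as in the following sketch; the function name and argument list are ours, and the formulas are transcribed from the expressions given above.

```matlab
% Sketch: evaluate the two modified basis functions on the interface element
% (xj, xjp1) containing alpha, for piecewise constant coefficients bm = beta^-
% and bp = beta^+ (the other elements use the standard hat functions).
function [phij, phijp1] = ifem_basis(x, xj, xjp1, alpha, bm, bp)
  h   = xjp1 - xj;
  rho = bm/bp;
  D   = h - (1 - rho)*(xjp1 - alpha);
  if x < alpha                          % left of the interface
    phij   = 1 - (x - xj)/D;
    phijp1 = (x - xj)/D;
  else                                  % right of the interface
    phij   = rho*(xjp1 - x)/D;
    phijp1 = rho*(x - xjp1)/D + 1;
  end
end
```

One can check directly that both functions are continuous at α and that β⁻ times the left derivative equals β⁺ times the right derivative, i.e., the natural jump conditions are built into the basis.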

233 c8 7/9/6 7: page 3 #43 Exercises 3 N=4,β =,β+ =5,α=/3 N=4,β =5,β+ =,α=/ N=4, β =,β+ =,α=/3 N=4,β =,β+ =,α=/ Figure 8.8. Plot of some basis function near the interface with different β and β +. The interface is at α = 3. For D interface problems, the finite difference and finite element methods are not much different. The finite element method likely performs better for selfadjoint problems, while the finite difference method is more flexible for general elliptic interface problems. Exercises. (Purpose: Review abstract FE methods.) Consider the Sturm Liouville problem Let V f be the finite-dimensional space u + u = f, < x < π, u() =, u(π) + u (π) =. V f = span { x, sin(x), sin(x) }. Find the best approximation to the solution of the weak form from V f in the energy norm ( : a = a(:, :) ). You can use either analytic derivation or computer software packages

(e.g., Maple, Matlab, SAS, etc.). Take $f = \ldots$ for the computation. Compare this approach with the finite element method using three hat basis functions. Find the true solution, and plot the solution and the error of the finite element solution.

2. Consider the Sturm–Liouville problem
$-((1 + x)u')' + xu = f$, $0 < x < 1$, $u(0) = 0$.
Transform the problem to a problem with homogeneous boundary conditions, and write down the weak form, for each of the following cases of the boundary condition at $x = 1$:
(a) $u(1) = 3$. Hint: Construct a function $u_0(x) \in H^1$ such that $u_0(1) = 3$ and $u_0(0) = 0$.
(b) $u'(1) = 3$. Hint: Construct a function $u_0(x) \in H^1$ such that $u_0(0) = 0$ and $u_0'(1) = 3$.
(c) $u(1) + u'(1) = 3$. Hint: Construct a function $u_0(x) \in H^1$ such that $u_0(0) = 0$ and $u_0(1) + u_0'(1) = 3$.

3. Consider the Sturm–Liouville problem
$-(pu')' + qu = f$, $a < x < b$, $u(a) = 0$, $u(b) = 0$.
Consider a mesh $a = x_0 < x_1 < \cdots < x_M = b$ and the finite element space
$V_h = \{v(x) \in H^1(a,b):\ v(x)$ is a piecewise cubic function over the mesh$\}$.
(a) Find the dimension of $V_h$.
(b) Find all nonzero shape functions $\psi_i(\xi)$ on $0 \le \xi \le 1$, and plot them.
(c) What is the size of the local stiffness matrix and the local load vector? Sketch the assembling process.
(d) List some advantages and disadvantages of this finite element space compared with the space of continuous piecewise linear functions (the hat functions).

4. Download the files of the 1D finite element Matlab package. Consider the following analytic solution and parameters,
$u(x) = e^{x}\sin x$, $\quad p(x) = 1 + x$, $\quad q(x) = e^{x}$, $\quad c(x) = 1$,
and $f(x)$ determined from the differential equation
$-(pu')' + c(x)u' + qu = f$, $a < x < b$.
Use this example to become familiar with the 1D finite element Matlab package, by trying the following boundary conditions:
(a) Dirichlet BC at $x = a$ and $x = b$;
(b) Neumann BC at $x = a$ and Dirichlet BC at $x = b$;
(c) Mixed BC $3u(a) - 5u'(a) = \gamma$ at $x = a$, and Neumann BC at $x = b$.
Using linear, quadratic, and cubic basis functions, tabulate the errors in the infinity norm at the nodes and auxiliary points,
$e_M = \max_{0\le i\le M}|u(x_i) - U_i|$,
in a table with columns $M$, basis, Gaussian points, error $e_M$, and the ratio $e_M/e_{2M}$,

for different $M = 4, 8, 16, 32, 64$ (nnode $= M + 1$), or the closest integers if necessary. What are the respective convergence orders? (Note: The method is second-, third-, or fourth-order convergent if the ratio $e_M/e_{2M}$ approaches 4, 8, or 16, respectively.) For the last case: (1) Print out the stiffness matrix for the linear basis function with $M = 5$. Is it symmetric? (2) Plot the computed solution against the exact one, and plot the error, for the case of the linear basis function. Take enough points to plot the exact solution to see the whole picture. (3) Plot the error versus $h = 1/M$ in log–log scale for the three different bases. The slope of such a plot is the convergence order of the method employed. For this problem, you will only produce five plots for the last case. Find the energy norm, $H^1$ norm, and $L^2$ norm of the error and do the grid refinement analysis.

5. Use the Lax–Milgram Lemma to show whether the following two-point boundary value problem has a unique solution:
$-u'' + q(x)u = f$, $0 < x < 1$, $u'(0) = u'(1) = 0$,   (8.53)
where $q(x) \in C(0,1)$, $q(x) \ge q_{\min} > 0$. What happens if we relax the condition to $q(x) \ge 0$? Give counterexamples if necessary.

6. Consider the general fourth-order two-point BVP with the mixed BC
$a_4 u'''' + a_3 u''' + a_2 u'' + a_1 u' + a_0 u = f$, $a < x < b$,
$u'''(a) - u''(a) + \gamma_1 u'(a) + \rho_1 u(a) = \delta_1$,   (8.54)
$u'''(a) + u''(a) + \gamma_2 u'(a) + \rho_2 u(a) = \delta_2$,   (8.55)
$u(b) = 0$, $\quad u'(b) = 0$.
Derive the weak form for this problem. Hint: Solve for $u'''(a)$ and $u''(a)$ from (8.54) and (8.55). The weak form should only involve derivatives up to second order.

7. (An eigenvalue problem.) Consider
$-(pu')' + qu - \lambda u = 0$, $0 < x < \pi$,   (8.56)
$u(0) = 0$, $u(\pi) = 0$.   (8.57)
(a) Find the weak form of the problem.
(b) Check whether the conditions of the Lax–Milgram Lemma are satisfied. Which condition is violated? Is the solution unique for arbitrary $\lambda$? Note: It is obvious that $u \equiv 0$ is a solution. For some $\lambda$, we can find nontrivial solutions $u(x)$. Such a $\lambda$ is an eigenvalue of the system, and the nonzero solution is an eigenfunction corresponding to that eigenvalue. The problem of finding the eigenvalues and the eigenfunctions is called an eigenvalue problem.
(c) Find all the eigenvalues and eigenfunctions when $p(x) = 1$ and $q(x) = 0$. Hint: $\lambda = 1$ and $u(x) = \sin(x)$ is one such pair.

8. Use the 1D finite element package with linear basis functions and a uniform grid to solve the eigenvalue problem
$-(pu')' + qu - \lambda u = 0$, $0 < x < \pi$,
$u(0) = 0$, $\quad u'(\pi) + \alpha u(\pi) = 0$,
where $p(x) \ge p_{\min} > 0$, $q(x) \ge 0$, $\alpha \ge 0$, in each of the following two cases:
(a) $p(x) = \ldots$, $q(x) = \ldots$, $\alpha = \ldots$;
(b) $p(x) = 1 + x$, $q(x) = x$, $\alpha = 3$.
Try to solve the eigenvalue problem with $M = 5$ and $M = \ldots$ Print out the eigenvalues but not the eigenfunctions. Plot all the eigenfunctions in a single plot for $M = 5$, and plot two typical eigenfunctions for the larger $M$ (6 plots in total). Hint: The approximate eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_M$ and the eigenfunctions $u_{\lambda_i}(x)$ are obtained from the generalized eigenvalue problem $Ax = \lambda Bx$, where $A$ is the stiffness matrix and $B = \{b_{ij}\}$ with $b_{ij} = \int_0^\pi \phi_i(x)\phi_j(x)\,dx$. You can generate the matrix $B$ either numerically or analytically; in Matlab you can use [V, D] = eig(A, B) to find the generalized eigenvalues and the corresponding eigenvectors. For a computed eigenvalue $\lambda_i$, the corresponding eigenfunction is
$u_{\lambda_i}(x) = \sum_{j=1}^{M}\alpha_{i,j}\,\phi_j(x)$,
where $[\alpha_{i,1}, \alpha_{i,2}, \ldots, \alpha_{i,M}]^T$ is the eigenvector corresponding to the generalized eigenvalue $\lambda_i$. Note: if we can find the eigenvalues and corresponding eigenfunctions, the solution to the differential equation can be expanded in terms of the eigenfunctions, similar to a Fourier series.

9. (An application.) Consider a nuclear fuel element of spherical form, consisting of a sphere of fissionable material surrounded by a spherical shell of aluminum cladding as shown in the figure. We wish to determine the temperature distribution in the nuclear fuel element and the aluminum cladding. The governing equations for the two regions are the same, except that there is no heat source term for the aluminum cladding. Thus
$-\frac{1}{r^2}\frac{d}{dr}\left(r^2 k_1 \frac{dT_1}{dr}\right) = q$, $\quad 0 \le r \le R_F$,
$-\frac{1}{r^2}\frac{d}{dr}\left(r^2 k_2 \frac{dT_2}{dr}\right) = 0$, $\quad R_F \le r \le R_C$,
where the subscripts 1 and 2 refer to the nuclear fuel element and the cladding, respectively. The heat generation in the nuclear fuel element is assumed to be of the form
$q = q_0\left[1 + c\left(\frac{r}{R_F}\right)^2\right]$,
where $q_0$ and $c$ are constants depending on the nuclear material. The BC are
$k_1 r^2\,\frac{dT_1}{dr} = 0$ at $r = 0$ (natural BC), and $T_2 = T_0$ at $r = R_C$,
where $T_0$ is a constant. Note that the temperature at $r = R_F$ is continuous.

Derive a weak form for this problem. (Hint: First multiply both sides by $r^2$.) Use two linear elements, one on $[0, R_F]$ and one on $[R_F, R_C]$, to determine the finite element solution. Compare the nodal temperatures $T(0)$ and $T(R_F)$ with the values from the exact solution
$T_1 = T_0 + \frac{q_0 R_F^2}{6k_1}\left\{\left[1 - \left(\frac{r}{R_F}\right)^2\right] + \frac{3c}{10}\left[1 - \left(\frac{r}{R_F}\right)^4\right]\right\} + \frac{q_0 R_F^2}{3k_2}\left(1 + \frac{3}{5}c\right)\left(1 - \frac{R_F}{R_C}\right)$,
$T_2 = T_0 + \frac{q_0 R_F^2}{3k_2}\left(1 + \frac{3}{5}c\right)\left(\frac{R_F}{r} - \frac{R_F}{R_C}\right)$.
Take $T_0 = 8$, $q_0 = 5$, $k_1 = \ldots$, $k_2 = 5$, $R_F = 0.5$, $R_C = \ldots$, $c = \ldots$ for plotting and comparison.

9 The Finite Element Method for 2D Elliptic PDEs

The procedure of the finite element method for solving 2D problems is the same as that for 1D problems, as the flow chart below demonstrates:

PDE → (integration by parts) → weak form in a space $V$: $a(u,v) = L(v)$, or $\min_{v\in V} F(v)$ → $V_h$ (finite-dimensional space and basis functions): $a(u_h, v_h) = L(v_h)$ → $u_h$ and error analysis.

9.1 The Second Green's Theorem and Integration by Parts in 2D

Let us first recall the 2D version of the well-known divergence theorem in Cartesian coordinates.

Theorem 9.1. If $\mathbf{F} \in H^1(\Omega)\times H^1(\Omega)$ is a vector function in 2D, then
$\iint_\Omega \nabla\cdot\mathbf{F}\,dxdy = \oint_{\partial\Omega}\mathbf{F}\cdot\mathbf{n}\,ds$,   (9.1)
where $\mathbf{n}$ is the unit normal direction pointing outward at the boundary $\partial\Omega$ with line element $ds$, and $\nabla$ is the gradient operator, $\nabla = [\partial/\partial x,\ \partial/\partial y]^T$.

The second Green's theorem is a corollary of the divergence theorem if we set
$\mathbf{F} = v\,\nabla u = \left[v\,\frac{\partial u}{\partial x},\ v\,\frac{\partial u}{\partial y}\right]^T$.
Thus, since
$\nabla\cdot\mathbf{F} = \frac{\partial}{\partial x}\left(v\,\frac{\partial u}{\partial x}\right) + \frac{\partial}{\partial y}\left(v\,\frac{\partial u}{\partial y}\right) = \frac{\partial u}{\partial x}\frac{\partial v}{\partial x} + v\,\frac{\partial^2 u}{\partial x^2} + \frac{\partial u}{\partial y}\frac{\partial v}{\partial y} + v\,\frac{\partial^2 u}{\partial y^2} = \nabla u\cdot\nabla v + v\,\Delta u$,

where $\Delta u = \nabla\cdot\nabla u = u_{xx} + u_{yy}$, we obtain
$\iint_\Omega\nabla\cdot\mathbf{F}\,dxdy = \iint_\Omega(\nabla u\cdot\nabla v + v\,\Delta u)\,dxdy = \oint_{\partial\Omega}\mathbf{F}\cdot\mathbf{n}\,ds = \oint_{\partial\Omega}v\,\nabla u\cdot\mathbf{n}\,ds = \oint_{\partial\Omega}v\,\frac{\partial u}{\partial n}\,ds$,
where $\mathbf{n} = (n_x, n_y)$ (with $n_x^2 + n_y^2 = 1$) is the unit normal direction, and
$\frac{\partial u}{\partial n} = \nabla u\cdot\mathbf{n} = n_x\frac{\partial u}{\partial x} + n_y\frac{\partial u}{\partial y}$
is the normal derivative of $u$; see Figure 9.1 for an illustration. This result immediately yields the formula for integration by parts in 2D.

Figure 9.1. A diagram of a 2D domain Ω, its boundary ∂Ω, and its unit normal direction.

Theorem 9.2. If $u(x,y)\in H^2(\Omega)$ and $v(x,y)\in H^1(\Omega)$, where $\Omega$ is a bounded domain, then
$\iint_\Omega v\,\Delta u\,dxdy = \oint_{\partial\Omega}v\,\frac{\partial u}{\partial n}\,ds - \iint_\Omega\nabla u\cdot\nabla v\,dxdy$.   (9.2)

Note: The normal derivative $\partial u/\partial n$ is sometimes written more concisely as $u_n$.

Some important elliptic PDEs in 2D Cartesian coordinates are:
$u_{xx} + u_{yy} = 0$, the Laplace equation,
$-u_{xx} - u_{yy} = f(x,y)$, the Poisson equation,
$-u_{xx} - u_{yy} + \lambda u = f$, the generalized Helmholtz equation,
$u_{xxxx} + 2u_{xxyy} + u_{yyyy} = 0$, the biharmonic equation.
When $\lambda > 0$, the generalized Helmholtz equation is easier to solve than when $\lambda < 0$. Incidentally, the expressions involved in these PDEs may also be abbreviated using the gradient operator, e.g., $u_{xx} + u_{yy} = \nabla\cdot\nabla u = \Delta u$, as

mentioned before. We also recall that a general linear second-order elliptic PDE has the form
$a(x,y)u_{xx} + b(x,y)u_{xy} + c(x,y)u_{yy} + d(x,y)u_x + e(x,y)u_y + g(x,y)u = f(x,y)$
with discriminant $b^2 - 4ac < 0$. A second-order self-adjoint elliptic PDE has the form
$-\nabla\cdot(p(x,y)\nabla u) + q(x,y)u = f(x,y)$.   (9.3)

9.1.1 Boundary Conditions

In 2D, the domain boundary ∂Ω is one or several curves. We consider the following linear boundary conditions.

Dirichlet boundary condition on the entire boundary, i.e., $u(x,y)|_{\partial\Omega} = u_0(x,y)$ is given.

Neumann boundary condition on the entire boundary, i.e., $\partial u/\partial n|_{\partial\Omega} = g(x,y)$ is given. In this case, the solution to a Poisson equation may not be unique or may not even exist, depending upon whether a compatibility condition is satisfied. Integrating the Poisson equation over the domain, we have
$\iint_\Omega f\,dxdy = -\iint_\Omega\Delta u\,dxdy = -\oint_{\partial\Omega}\frac{\partial u}{\partial n}\,ds = -\oint_{\partial\Omega}g(x,y)\,ds$,   (9.4)
which is the compatibility condition to be satisfied for a solution to exist. If a solution does exist, it is not unique, as it is determined only up to an arbitrary constant.

Mixed boundary condition on the entire boundary, i.e.,
$\alpha(x,y)u(x,y) + \beta(x,y)\frac{\partial u}{\partial n} = \gamma(x,y)$
is given, where $\alpha(x,y)$, $\beta(x,y)$, and $\gamma(x,y)$ are known functions.

Dirichlet, Neumann, and mixed boundary conditions on different parts of the boundary.

9.2 Weak Form of Second-Order Self-Adjoint Elliptic PDEs

Now we derive the weak form of the self-adjoint PDE (9.3) with a homogeneous Dirichlet boundary condition on part of the boundary, $u|_{\partial\Omega_D} = 0$, and a Neumann boundary condition on the rest of the boundary $\partial\Omega_N = \partial\Omega\setminus\partial\Omega_D$, $\partial u/\partial n|_{\partial\Omega_N} = g$. Multiplying the equation (9.3) by a test function $v(x,y)\in H^1(\Omega)$ that vanishes on $\partial\Omega_D$, we have
$\iint_\Omega\{-\nabla\cdot(p(x,y)\nabla u) + q(x,y)u\}\,v\,dxdy = \iint_\Omega fv\,dxdy$;
and on using the formula for integration by parts the left-hand side becomes
$\iint_\Omega(p\,\nabla u\cdot\nabla v + quv)\,dxdy - \oint_{\partial\Omega}p\,v\,u_n\,ds$,
so the weak form is
$\iint_\Omega(p\,\nabla u\cdot\nabla v + quv)\,dxdy = \iint_\Omega fv\,dxdy + \int_{\partial\Omega_N}p\,g(x,y)\,v(x,y)\,ds$ for all $v(x,y)\in V$,   (9.5)
where $\partial\Omega_N$ is the part of the boundary where the Neumann boundary condition is applied, and the solution space is
$V = \{v(x,y)\in H^1(\Omega):\ v(x,y) = 0,\ (x,y)\in\partial\Omega_D\}$,   (9.6)
where $\partial\Omega_D$ is the part of the boundary where the Dirichlet boundary condition is applied.

9.2.1 Verification of the Conditions of the Lax–Milgram Lemma

The bilinear form for (9.3) is
$a(u,v) = \iint_\Omega(p\,\nabla u\cdot\nabla v + quv)\,dxdy$,   (9.7)
and the linear form is
$L(v) = \iint_\Omega fv\,dxdy$   (9.8)
for a Dirichlet BC on the entire boundary. As before, we assume that
$0 < p_{\min} \le p(x,y) \le p_{\max}$, $\quad 0 \le q(x,y) \le q_{\max}$, $\quad p\in C(\Omega)$, $\quad q\in C(\Omega)$.
We need the Poincaré inequality to prove the V-ellipticity condition.

Theorem 9.3. If $v(x,y)\in H_0^1(\Omega)$, $\Omega\subset\mathbb{R}^2$, i.e., $v(x,y)\in H^1(\Omega)$ and $v$ vanishes on the boundary ∂Ω (this can be relaxed to part of the boundary), then
$\iint_\Omega v^2\,dxdy \le C\iint_\Omega|\nabla v|^2\,dxdy$,   (9.9)
where $C$ is a constant.

Now we are ready to check the conditions of the Lax–Milgram Lemma.

1. It is obvious that $a(u,v) = a(v,u)$.

2. It is easy to see that
$|a(u,v)| \le \max\{p_{\max}, q_{\max}\}\iint_\Omega(|\nabla u\cdot\nabla v| + |uv|)\,dxdy \le \max\{p_{\max}, q_{\max}\}\,(\|\nabla u\|_0\|\nabla v\|_0 + \|u\|_0\|v\|_0) \le \max\{p_{\max}, q_{\max}\}\,\|u\|_1\,\|v\|_1$,
so $a(u,v)$ is a continuous and bounded bilinear operator.

3. From the Poincaré inequality,
$a(v,v) = \iint_\Omega(p\,|\nabla v|^2 + q\,v^2)\,dxdy \ge p_{\min}\iint_\Omega|\nabla v|^2\,dxdy = \frac{p_{\min}}{2}\iint_\Omega|\nabla v|^2\,dxdy + \frac{p_{\min}}{2}\iint_\Omega|\nabla v|^2\,dxdy \ge \frac{p_{\min}}{2}\iint_\Omega|\nabla v|^2\,dxdy + \frac{p_{\min}}{2C}\iint_\Omega v^2\,dxdy \ge \frac{p_{\min}}{2}\,\min\left\{1, \frac{1}{C}\right\}\|v\|_1^2$,
therefore $a(u,v)$ is V-elliptic.

4. Finally, we show that $L(v)$ is continuous:
$|L(v)| = |(f, v)| \le \|f\|_0\,\|v\|_0 \le \|f\|_0\,\|v\|_1$.

Consequently, the solutions to the weak form and the minimization form are unique and bounded in $H^1(\Omega)$.

9.3 Triangulation and Basis Functions

The general procedure of the finite element method is the same for any dimension, and the Galerkin finite element method involves the following main steps.

Generate a triangulation over the domain. Usually the triangulation is composed of either triangles or rectangles. There are a number of mesh generation software packages available, e.g., the Matlab PDE toolbox from Mathworks, Triangle from Carnegie Mellon University, etc. Some are available through the Internet.

Construct basis functions over the triangulation. We mainly consider the conforming finite element method in this book.

Assemble the stiffness matrix and the load vector element by element, using either the Galerkin finite element method (the weak form) or the Ritz finite element method (the minimization form).

Solve the system of equations.

Do the error analysis.

In Figure 9.2, we show a diagram of a simple mesh generation process. The circular domain is approximated by a polygon with five vertices (selected points on the boundary). We then connect the five vertices and an interior point to obtain an initial coarse mesh of five triangles (solid lines). We can refine the mesh using the so-called middle point rule, connecting all the middle points of the sides of all triangles in the initial mesh, to obtain a finer mesh (solid and dashed lines).

Figure 9.2. A diagram of a simple mesh generation process and the middle point rule.

9.3.1 Triangulation and Mesh Parameters

Given a general domain, we can approximate the domain by a polygon and then generate a triangulation over the polygon, and we can refine the triangulation if necessary.

A simple approach is the mid-point rule: connect the middle points of the three sides of each existing triangle to obtain a refined mesh. A triangulation is usually described with the following mesh parameters:

$\Omega_p$: the polygonal region, $\Omega_p = K_1\cup K_2\cup K_3\cup\cdots\cup K_{nelem}$;
$K_j$: nonoverlapping triangles, $j = 1, 2, \ldots, nelem$;
$N_i$: nodal points, $i = 1, 2, \ldots, nnode$;
$h_j$: the longest side of $K_j$;
$\rho_j$: the diameter of the circle inscribed in $K_j$ (the incircle);
$h$: the largest of all $h_j$, $h = \max\{h_j\}$;
$\rho$: the smallest of all $\rho_j$, $\rho = \min\{\rho_j\}$;
$\rho_j/h_j \ge \beta > 0$, where the constant $\beta$ is a measurement of the triangulation quality (see Figure 9.7 for an illustration of such $\rho$'s and $h$'s). The larger the $\beta$, the better the quality of the triangulation.

Given a triangulation, each node is a vertex of every triangle that contains it. We do not discuss hanging nodes here.
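As a small illustration of the mesh parameters above, the following Matlab sketch (our own, assuming the array format of Section 9.5 with p holding coordinates and t the connectivity) computes $h_j$, $\rho_j$, and the quality ratio $\rho_j/h_j$ for each triangle; $\beta$ can then be estimated as the minimum ratio.

% Sketch: per-triangle mesh quality rho_j/h_j for a mesh given by
% p (2 x nnode, coordinates) and t (3 x nelem, connectivity).
function q = mesh_quality(p, t)
  nelem = size(t, 2);  q = zeros(1, nelem);
  for e = 1:nelem
    xy = p(:, t(1:3, e));                              % 2 x 3 vertex coordinates
    d  = [norm(xy(:,2)-xy(:,3)), norm(xy(:,3)-xy(:,1)), norm(xy(:,1)-xy(:,2))];
    hj = max(d);                                       % longest side h_j
    s  = sum(d)/2;                                     % half the perimeter
    area = 0.5*abs(det([xy(:,2)-xy(:,1), xy(:,3)-xy(:,1)]));
    rhoj = 2*area/s;                                   % diameter of the incircle
    q(e) = rhoj/hj;                                    % quality ratio, >= beta > 0
  end
end
% beta is approximately min(mesh_quality(p, t)) for a given triangulation.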

Figure 9.3. A diagram of a triangle with three vertices $a_1$, $a_2$, and $a_3$; an adjacent triangle with a common side; and the local coordinate system in which $a_2$ is the origin and $a_2a_3$ is the $\eta$-axis.

9.3.2 The FE Space of Piecewise Linear Functions over a Triangulation

For linear second-order elliptic PDEs, we know that the solution space is $H^1(\Omega)$. Unlike the 1D case, an element $v(x,y)$ of $H^1(\Omega)$ may not be continuous under the Sobolev embedding theorem. However, in practice most solutions are indeed continuous, especially for second-order PDEs with certain regularity. Thus, we still look for a solution in the continuous function space $C^0(\Omega)$.

Let us first consider how to construct piecewise linear functions over a triangulation with the Dirichlet BC $u(x,y)|_{\partial\Omega} = 0$. Given a triangulation, we define
$V_h = \{v(x,y)$ is continuous in $\Omega$ and piecewise linear over each $K_j$, $v(x,y)|_{\partial\Omega} = 0\}$.   (9.10)
We need to determine the dimension of this space and construct a set of basis functions. On each triangle, a linear function has the form
$v_h(x,y) = \alpha + \beta x + \gamma y$,   (9.11)
where $\alpha$, $\beta$, and $\gamma$ are constants (three free parameters). Let
$P_k = \{p(x,y):\ p$ is a polynomial of degree no more than $k\}$.   (9.12)
We have the following theorem.

Theorem 9.4.
1. A linear function $p_1(x,y) = \alpha + \beta x + \gamma y$ defined on a triangle is uniquely determined by its values at the three vertices.
2. If $p_1(x,y)\in P_1$ and $p_2(x,y)\in P_1$ are such that $p_1(A) = p_2(A)$ and $p_1(B) = p_2(B)$, where $A$ and $B$ are two distinct points in the $xy$-plane, then
$p_1(x,y)\equiv p_2(x,y)$, $\quad (x,y)\in I_{AB}$,
where $I_{AB}$ is the line segment between $A$ and $B$.

Proof Assume the vertices of the triangle are $(x_i, y_i)$, $i = 1, 2, 3$, and the linear function takes the value $v_i$ at the vertices, i.e., $p(x_i, y_i) = v_i$, so we have the three equations
$\alpha + \beta x_1 + \gamma y_1 = v_1$, $\quad\alpha + \beta x_2 + \gamma y_2 = v_2$, $\quad\alpha + \beta x_3 + \gamma y_3 = v_3$.

The determinant of this linear algebraic system is
$\det\begin{pmatrix}1 & x_1 & y_1\\ 1 & x_2 & y_2\\ 1 & x_3 & y_3\end{pmatrix} = \pm\,2\,(\text{area of the triangle}) \ne 0$, since $\rho_j/h_j \ge \beta > 0$,   (9.13)
hence the linear system of equations has a unique solution.

Now let us prove the second part of the theorem. Suppose that the equation of the line segment is
$l_1 x + l_2 y + l_3 = 0$, $\quad l_1^2 + l_2^2 \ne 0$.
We can solve for $x$ or for $y$:
$x = -\frac{l_2 y + l_3}{l_1}$ if $l_1\ne 0$, or $y = -\frac{l_1 x + l_3}{l_2}$ if $l_2\ne 0$.
Without loss of generality, let us assume $l_2\ne 0$, such that
$p_1(x,y) = \alpha_1 + \beta_1 x + \gamma_1 y = \alpha_1 + \beta_1 x - \frac{l_1 x + l_3}{l_2}\gamma_1 = \left(\alpha_1 - \frac{l_3}{l_2}\gamma_1\right) + \left(\beta_1 - \frac{l_1}{l_2}\gamma_1\right)x = \bar\alpha_1 + \bar\beta_1 x$.
Similarly, we have $p_2(x,y) = \bar\alpha_2 + \bar\beta_2 x$. Since $p_1(A) = p_2(A)$ and $p_1(B) = p_2(B)$,
$\bar\alpha_1 + \bar\beta_1 x_A = p(A)$, $\quad\bar\alpha_1 + \bar\beta_1 x_B = p(B)$, $\quad\bar\alpha_2 + \bar\beta_2 x_A = p(A)$, $\quad\bar\alpha_2 + \bar\beta_2 x_B = p(B)$,
where both of the linear systems of algebraic equations have the same coefficient matrix
$\begin{pmatrix}1 & x_A\\ 1 & x_B\end{pmatrix}$,

which is nonsingular since $x_A\ne x_B$ (because the points $A$ and $B$ are distinct, and the line is not vertical when $l_2\ne 0$). Thus we conclude that $\bar\alpha_1 = \bar\alpha_2$ and $\bar\beta_1 = \bar\beta_2$, so the two linear functions have the same expression along the line segment, i.e., they are identical along the line segment.

Corollary 9.5. A piecewise linear function in $C^0(\Omega)\cap H^1(\Omega)$ over a triangulation (a set of nonoverlapping triangles) is uniquely determined by its values at the vertices.

Theorem 9.6. The dimension of the finite-dimensional space composed of piecewise linear functions in $C^0(\Omega)\cap H^1(\Omega)$ over a triangulation for (9.3) is the number of interior nodal points plus the number of nodal points on the boundary where natural BCs (Neumann and mixed boundary conditions) are imposed.

Example 9.7. Given the triangulation shown in Figure 9.4, a continuous piecewise linear function $v_h(x,y)$ is determined by its values at the vertices of all the triangles; more precisely, $v_h(x,y)$ is determined from
$(0, 0, v(N_1))$ on $K_1$, $(0, v(N_1), v(N_2))$ on $K_2$, $(0, 0, v(N_2))$ on $K_3$, $(0, 0, v(N_2))$ on $K_4$, $(0, v(N_3), v(N_2))$ on $K_5$, $(0, 0, v(N_3))$ on $K_6$, $(0, v(N_1), v(N_3))$ on $K_7$, $(v(N_1), v(N_2), v(N_3))$ on $K_8$.
Note that although some triangles share the same three vertex values, like $K_3$ and $K_4$, their geometries are different; hence the functions will generally have different expressions on different triangles.

Figure 9.4. A diagram of a simple triangulation with a homogeneous boundary condition.
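The first part of Theorem 9.4 can be checked directly. The following Matlab sketch (an illustration of ours, with assumed vertex coordinates and nodal values) recovers the coefficients $\alpha$, $\beta$, $\gamma$ by solving the 3 × 3 system from the proof.

% Sketch: recover alpha + beta*x + gamma*y on a triangle from its vertex values.
xv = [0; 1; 0];   yv = [0; 0; 1];      % assumed vertices (x_i, y_i)
vv = [2; 5; 3];                        % assumed nodal values v_1, v_2, v_3
A  = [ones(3,1), xv, yv];              % det(A) = +-2*(area of the triangle) ~= 0
c  = A \ vv;                           % c = [alpha; beta; gamma]
p1 = @(x, y) c(1) + c(2)*x + c(3)*y;
disp([p1(xv(1),yv(1)), p1(xv(2),yv(2)), p1(xv(3),yv(3))])   % reproduces 2 5 3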

9.3.3 Global Basis Functions

A global basis function of the piecewise linear functions in $C^0(\Omega)\cap H^1(\Omega)$ can be defined as
$\phi_i(N_j) = 1$ if $i = j$, and $0$ otherwise,   (9.14)
where the $N_j$ are the nodal points. The shape (mesh plot) of $\phi_i$ looks like a tent without a door, and the support of $\phi_i$ is the union of the triangles surrounding the node $N_i$ (cf. Figure 9.5, where Figure 9.5(a) is the mesh plot of the global basis function and Figure 9.5(b) is the plot of a triangulation together with the contour plot of the global basis function centered at a node). The basis function is piecewise linear and is supported only in the surrounding triangles. It is almost impossible to give a closed form of a global basis function except for some very special geometries (cf. the example in the next section). However, it is much easier to write down the shape functions.

Figure 9.5. A global basis function $\phi_j$: (a) the mesh plot of the global function and (b) the triangulation and the contour plot of the global basis function.

Example 9.8. Let us consider a Poisson equation and a uniform mesh as an example to demonstrate the piecewise linear basis functions and the finite element method:
$-(u_{xx} + u_{yy}) = f(x,y)$, $\quad (x,y)\in(a,b)\times(c,d)$,
$u(x,y)|_{\partial\Omega} = 0$.
We know how to use the standard central finite difference scheme with the five-point stencil to solve the Poisson equation.

With some manipulations, the linear system of equations obtained using the finite element method with a uniform triangulation (cf. Figure 9.6) proves to be the same as that obtained from the finite difference method.

Figure 9.6. A uniform triangulation defined on a rectangular domain.

Given a uniform triangulation as shown in Figure 9.6, with nodes $(x_i, y_j)$, $x_i = ih$, $y_j = jh$, $i = 0, 1, \ldots, m$, $j = 0, 1, \ldots, n$, if we use row-wise natural ordering for the nodal points, then the global basis function centered at $(x_i, y_j) = (ih, jh)$ is
$\phi(x,y) = \begin{cases}
\frac{x - (i-1)h + y - (j-1)h}{h} - 1, & \text{Region 1},\\
\frac{y - (j-1)h}{h}, & \text{Region 2},\\
\frac{h - (x - ih)}{h}, & \text{Region 3},\\
1 - \frac{x - ih + y - jh}{h}, & \text{Region 4},\\
\frac{h - (y - jh)}{h}, & \text{Region 5},\\
\frac{x - (i-1)h}{h}, & \text{Region 6},\\
0, & \text{otherwise},
\end{cases}$
where Regions 1-6 are the six triangles surrounding the node $(ih, jh)$ in Figure 9.6.

If $m = n = 3$, there are nine interior nodal points, so the stiffness matrix is a $9\times 9$ matrix whose sparsity pattern is that of the five-point stencil; the entries that couple diagonally adjacent nodes happen to be zero for Poisson equations. Generally, the stiffness matrix is block tridiagonal:
$A = \begin{pmatrix} B & -I & \\ -I & B & -I\\ & -I & B\end{pmatrix}$, where $B = \begin{pmatrix} 4 & -1 & \\ -1 & 4 & -1\\ & -1 & 4\end{pmatrix}$
and $I$ is the $3\times 3$ identity matrix. The component $F_i$ of the load vector can be approximated as
$\iint_D f(x,y)\,\phi_i\,dxdy \approx f_{ij}\iint_D\phi_i\,dxdy = h^2 f_{ij}$,
so after dividing by $h^2$ we get the same system of equations as in the finite difference scheme, namely,
$-\frac{U_{i-1,j} + U_{i+1,j} + U_{i,j-1} + U_{i,j+1} - 4U_{ij}}{h^2} = f_{ij}$,
with the same ordering.
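A quick way to see the equivalence with the finite difference method is to build the block tridiagonal matrix directly. The Matlab sketch below (our own) does so with Kronecker products for the m = n = 3 case described above.

% Sketch: assemble the 9 x 9 block tridiagonal stiffness matrix of Example 9.8.
m = 3;  n = 3;                                        % 3 x 3 interior nodes
B = diag(4*ones(m,1)) - diag(ones(m-1,1),1) - diag(ones(m-1,1),-1);
I = eye(m);
A = kron(eye(n), B) ...
  - kron(diag(ones(n-1,1), 1), I) - kron(diag(ones(n-1,1), -1), I);
% Dividing the system A*U = h^2*f by h^2 recovers the five-point difference scheme.
spy(A)                                                % sparsity pattern of A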

9.3.4 The Interpolation Function and Error Analysis

We know that the finite element solution $u_h$ is the best solution in the finite-dimensional space $V_h$ in terms of the energy norm, i.e., $\|u - u_h\|_a \le \|u - v_h\|_a$ for all $v_h\in V_h$, assuming that $u$ is the solution to the weak form. However, this does not give a quantitative estimate for the finite element solution, and we may wish to have a more precise error estimate in terms of the solution information and the mesh size $h$. This can be done through the interpolation function, for which an error estimate is often available from approximation theory. Note that the solution information appears as part of the error constants in the error estimates, even though the solution is unknown. We will use the mesh parameters defined in Section 9.3.1 in the discussion here.

Definition 9.9. Given a triangulation $T_h$, let $K\in T_h$ be a triangle with vertices $a_i$, $i = 1, 2, 3$. The interpolation function of a function $v(x,y)$ on the triangle is defined as
$v_I(x,y) = \sum_{i=1}^{3}v(a_i)\,\phi_i(x,y)$, $\quad (x,y)\in K$,   (9.15)
where $\phi_i(x,y)$ is the piecewise linear function that satisfies $\phi_i(a_j) = \delta_i^j$ (with $\delta_i^j$ being the Kronecker delta). A global interpolation function is defined as
$v_I(x,y) = \sum_{i=1}^{nnode}v(a_i)\,\phi_i(x,y)$, $\quad (x,y)\in T_h$,   (9.16)
where the $a_i$ are all the nodal points and $\phi_i(x,y)$ is the global basis function centered at $a_i$.

Theorem 9.10. If $v(x,y)\in C^2(K)$, then we have the error estimate for the interpolation function on a triangle $K$,
$\|v - v_I\|_\infty \le 2h^2\max_{|\alpha|=2}\|D^\alpha v\|_\infty$,   (9.17)
where $h$ is the longest side of $K$. Furthermore, we have
$\max_{|\alpha|=1}\|D^\alpha(v - v_I)\|_\infty \le \frac{8h^2}{\rho}\max_{|\alpha|=2}\|D^\alpha v\|_\infty$.   (9.18)

Proof From the definition of the interpolation function and the Taylor expansion of $v(a_i)$ about $(x, y)$, we have
$v_I(x,y) = \sum_{i=1}^{3}v(a_i)\phi_i(x,y)$
$= \sum_{i=1}^{3}\phi_i(x,y)\Big(v(x,y) + v_x(x,y)(x_i - x) + v_y(x,y)(y_i - y) + \tfrac{1}{2}v_{xx}(\xi,\eta)(x_i - x)^2 + v_{xy}(\xi,\eta)(x_i - x)(y_i - y) + \tfrac{1}{2}v_{yy}(\xi,\eta)(y_i - y)^2\Big)$
$= \sum_{i=1}^{3}\phi_i(x,y)\,v(x,y) + \sum_{i=1}^{3}\phi_i(x,y)\big(v_x(x,y)(x_i - x) + v_y(x,y)(y_i - y)\big) + R(x,y)$,
where $(\xi,\eta)$ is a point in the triangle $K$ and $(x_i, y_i)$ are the coordinates of $a_i$. It is easy to show that
$|R(x,y)| \le 2h^2\max_{|\alpha|=2}\|D^\alpha v\|_\infty\sum_{i=1}^{3}\phi_i(x,y) = 2h^2\max_{|\alpha|=2}\|D^\alpha v\|_\infty$,
since $0\le\phi_i(x,y)\le 1$ and $\sum_{i=1}^{3}\phi_i(x,y) = 1$.

If we take $v(x,y) = 1$, which is a linear function, then $\partial v/\partial x = \partial v/\partial y = 0$ and $\max_{|\alpha|=2}\|D^\alpha v\| = 0$. The interpolation is simply the function itself, since it is uniquely determined by the values at the vertices of $K$, hence
$v_I(x,y) = v(x,y) = \sum_{i=1}^{3}v(a_i)\phi_i(x,y) = \sum_{i=1}^{3}\phi_i(x,y) = 1$.   (9.19)
If we take $v(x,y) = d_1 x + d_2 y$, which is also a linear function, then $\partial v/\partial x = d_1$, $\partial v/\partial y = d_2$, and $\max_{|\alpha|=2}\|D^\alpha v\| = 0$. The interpolation is again simply the function itself, since it is uniquely determined by the values at the vertices of $K$. Thus from the previous Taylor expansion and the identity $\sum_{i=1}^{3}\phi_i(x,y) = 1$, we have
$v_I(x,y) = v(x,y) = v(x,y) + \sum_{i=1}^{3}\phi_i(x,y)\big(d_1(x_i - x) + d_2(y_i - y)\big)$,   (9.20)
hence $\sum_{i=1}^{3}\phi_i(x,y)\big(d_1(x_i - x) + d_2(y_i - y)\big) = 0$ for any $d_1$ and $d_2$, i.e., the linear part in the expansion is the interpolation function itself. Consequently,

for a general function $v(x,y)\in C^2(K)$ we have
$v_I(x,y) = v(x,y) + R(x,y)$, $\quad\|v - v_I\|_\infty \le 2h^2\max_{|\alpha|=2}\|D^\alpha v\|_\infty$,
which completes the proof of the first part of the theorem.

To prove the second part, concerning the error estimate for the gradient, choose a point $(x_1, y_1)$ inside the triangle $K$ and apply the Taylor expansion at $(x_1, y_1)$ to get
$v(x,y) = v(x_1,y_1) + v_x(x_1,y_1)(x - x_1) + v_y(x_1,y_1)(y - y_1) + R_1(x,y) = p_1(x,y) + R_1(x,y)$,
$|R_1(x,y)| \le 2h^2\max_{|\alpha|=2}\|D^\alpha v\|_\infty$.
Rewriting the interpolation function $v_I(x,y)$ as
$v_I(x,y) = v(x_1,y_1) + v_x(x_1,y_1)(x - x_1) + v_y(x_1,y_1)(y - y_1) + \bar R_1(x,y)$,
where $\bar R_1(x,y)$ is a linear function of $x$ and $y$, we have
$v_I(a_i) = p_1(a_i) + \bar R_1(a_i)$, $\quad i = 1, 2, 3$,
from the definition above. On the other hand, $v_I(x,y)$ is the interpolation function, so that also
$v_I(a_i) = v(a_i) = p_1(a_i) + R_1(a_i)$, $\quad i = 1, 2, 3$.
Since $p_1(a_i) + \bar R_1(a_i) = p_1(a_i) + R_1(a_i)$, it follows that $\bar R_1(a_i) = R_1(a_i)$, i.e., $\bar R_1(x,y)$ is the interpolation function of $R_1(x,y)$ in the triangle $K$, and we have
$\bar R_1(x,y) = \sum_{i=1}^{3}R_1(a_i)\,\phi_i(x,y)$.
With this equality, and on differentiating
$v_I(x,y) = v(x_1,y_1) + v_x(x_1,y_1)(x - x_1) + v_y(x_1,y_1)(y - y_1) + \bar R_1(x,y)$
with respect to $x$, we get
$\frac{\partial v_I}{\partial x}(x,y) = v_x(x_1,y_1) + \frac{\partial\bar R_1}{\partial x}(x,y) = v_x(x_1,y_1) + \sum_{i=1}^{3}R_1(a_i)\,\frac{\partial\phi_i}{\partial x}(x,y)$.

Figure 9.7. A diagram used to prove Theorem 9.10.

Applying the Taylor expansion for $\partial v(x,y)/\partial x$ at $(x_1, y_1)$ gives
$\frac{\partial v}{\partial x}(x,y) = v_x(x_1,y_1) + v_{xx}(\bar x,\bar y)(x - x_1) + v_{xy}(\bar x,\bar y)(y - y_1)$,
where $(\bar x,\bar y)$ is a point in the triangle $K$. From the last two equalities, we obtain
$\frac{\partial v}{\partial x} - \frac{\partial v_I}{\partial x} = v_{xx}(\bar x,\bar y)(x - x_1) + v_{xy}(\bar x,\bar y)(y - y_1) - \sum_{i=1}^{3}R_1(a_i)\,\frac{\partial\phi_i}{\partial x}$,
$\left|\frac{\partial v}{\partial x} - \frac{\partial v_I}{\partial x}\right| \le \max_{|\alpha|=2}\|D^\alpha v\|_\infty\left(2h + 2h^2\sum_{i=1}^{3}\left|\frac{\partial\phi_i}{\partial x}\right|\right)$.
It remains to prove that $|\partial\phi_i/\partial x| \le 1/\rho$, $i = 1, 2, 3$. We take $i = 1$ as an illustration, and use a shift and rotation coordinate transform such that $a_2$ is the origin and $a_2a_3$ is the $\eta$-axis (cf. Figure 9.7):
$\xi = (x - x_2)\cos\theta + (y - y_2)\sin\theta$, $\quad\eta = -(x - x_2)\sin\theta + (y - y_2)\cos\theta$.
Then $\phi_1(x,y) = \bar\phi_1(\xi,\eta) = C\xi = \xi/\xi_1$, where $\xi_1$ is the $\xi$ coordinate of $a_1$ in the $(\xi,\eta)$ coordinate system, such that
$\left|\frac{\partial\phi_1}{\partial x}\right| = \left|\frac{\partial\bar\phi_1}{\partial\xi}\cos\theta - \frac{\partial\bar\phi_1}{\partial\eta}\sin\theta\right| = \frac{|\cos\theta|}{\xi_1} \le \frac{1}{\xi_1} \le \frac{1}{\rho}$.
The same estimate applies to $\partial\phi_i/\partial x$, $i = 2, 3$, so finally we have
$\left|\frac{\partial v}{\partial x} - \frac{\partial v_I}{\partial x}\right| \le \max_{|\alpha|=2}\|D^\alpha v\|_\infty\left(2h + \frac{6h^2}{\rho}\right) \le \frac{8h^2}{\rho}\max_{|\alpha|=2}\|D^\alpha v\|_\infty$,

from the fact that $\rho\le h$. Similarly, we may obtain the same error estimate for $\partial v_I/\partial y$.

Corollary 9.11. Given a triangulation $T_h$, we have the following error estimates for the interpolation function:
$\|v - v_I\|_{L^2(T_h)} \le C_1 h^2\,\|v\|_{H^2(T_h)}$, $\quad\|v - v_I\|_{H^1(T_h)} \le C_2 h\,\|v\|_{H^2(T_h)}$,   (9.21)
where $C_1$ and $C_2$ are constants.

9.3.5 Error Estimates of the FE Solution

Let us now recall the 2D Sturm–Liouville problem in a bounded domain $\Omega$:
$-\nabla\cdot(p(x,y)\nabla u(x,y)) + q(x,y)u(x,y) = f(x,y)$, $\quad (x,y)\in\Omega$,
$u(x,y)|_{\partial\Omega} = u_0(x,y)$,
where $u_0(x,y)$ is a given function, i.e., a Dirichlet BC is prescribed. If we assume that
$p,\ q\in C(\Omega)$, $\quad p(x,y)\ge p_0 > 0$, $\quad q(x,y)\ge 0$, $\quad f\in L^2(\Omega)$,
and the boundary $\partial\Omega$ is smooth, then we know that the weak form has a unique solution, that the energy norm $\|v\|_a$ is equivalent to the $H^1$ norm $\|v\|_1$, and furthermore that the solution satisfies $u(x,y)\in H^2(\Omega)$.

Given a triangulation $T_h$ with a polygonal approximation to the outer boundary $\partial\Omega$, let $V_h$ be the piecewise linear function space over the triangulation $T_h$, and $u_h$ the finite element solution. With these assumptions, we have the following theorem for the error estimates.

Theorem 9.12.
$\|u - u_h\|_a \le C_1 h\,\|u\|_{H^2(T_h)}$, $\quad\|u - u_h\|_{H^1(T_h)} \le C_2 h\,\|u\|_{H^2(T_h)}$,   (9.22)
$\|u - u_h\|_{L^2(T_h)} \le C_3 h^2\,\|u\|_{H^2(T_h)}$, $\quad\|u - u_h\|_\infty \le C_4 h^2\,\|u\|_{H^2(T_h)}$,   (9.23)
where the $C_i$ are constants.

Sketch of the proof. Since the finite element solution is the best solution in the energy norm, we have
$\|u - u_h\|_a \le \|u - u_I\|_a \le C\,\|u - u_I\|_{H^1(T_h)} \le C\,C_2 h\,\|u\|_{H^2(T_h)}$,
because the energy norm is equivalent to the $H^1$ norm. Furthermore, because of the equivalence we get the estimate for the $H^1$ norm as well. The error estimates for the $L^2$ and $L^\infty$ norms are not trivial in 2D, and the reader may care to consult other advanced textbooks on finite element methods.
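Theorem 9.10 is easy to check numerically. The short Matlab sketch below (our own check, with an assumed smooth test function) interpolates v linearly on a shrinking triangle and reports the ratio of successive maximum errors, which should approach 4, i.e., O(h^2).

% Sketch: observed order of the linear interpolation error on one triangle.
v = @(x,y) sin(pi*x).*cos(pi*y);                     % assumed smooth function
err = zeros(1,5);
for k = 1:5
  h  = 2^(-k);
  a1 = [0 0];  a2 = [h 0];  a3 = [0 h];              % triangle K of size ~ h
  V  = [v(a1(1),a1(2)); v(a2(1),a2(2)); v(a3(1),a3(2))];
  [l2, l3] = meshgrid(0:0.05:1);                     % barycentric samples
  keep = l2 + l3 <= 1;  l2 = l2(keep);  l3 = l3(keep);  l1 = 1 - l2 - l3;
  xs = l1*a1(1) + l2*a2(1) + l3*a3(1);
  ys = l1*a1(2) + l2*a2(2) + l3*a3(2);
  vI = l1*V(1) + l2*V(2) + l3*V(3);                  % linear interpolant (9.15)
  err(k) = max(abs(v(xs,ys) - vI));
end
disp(err(1:end-1)./err(2:end))                       % ratios approach 4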

Figure 9.8. The linear transform from an arbitrary triangle to the standard triangle (master element) and the inverse map.

9.4 Transforms, Shape Functions, and Quadrature Formulas

Any triangle with nonzero area can be transformed to the right isosceles master triangle, or standard triangle (cf. Figure 9.8). There are three nonzero basis functions over this standard triangle, namely,
$\psi_1(\xi,\eta) = 1 - \xi - \eta$,   (9.24)
$\psi_2(\xi,\eta) = \xi$,   (9.25)
$\psi_3(\xi,\eta) = \eta$.   (9.26)
The linear transform from a triangle with vertices $(x_1,y_1)$, $(x_2,y_2)$, and $(x_3,y_3)$, arranged in the counterclockwise direction, to the master triangle is
$x = \sum_{j=1}^{3}x_j\,\psi_j(\xi,\eta)$, $\quad y = \sum_{j=1}^{3}y_j\,\psi_j(\xi,\eta)$,   (9.27)
or, inversely,
$\xi = \frac{1}{2A_e}\big((y_3 - y_1)(x - x_1) - (x_3 - x_1)(y - y_1)\big)$,   (9.28)
$\eta = \frac{1}{2A_e}\big(-(y_2 - y_1)(x - x_1) + (x_2 - x_1)(y - y_1)\big)$,   (9.29)
where $A_e$ is the area of the triangle, which can be calculated using the determinant formula as in (9.13).
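The following Matlab sketch (ours, with assumed vertices) implements the map (9.27) and its inverse (9.28)-(9.29) and checks them on the centroid of the master element.

% Sketch: affine map between the master triangle and a physical triangle.
xv = [1 4 2];  yv = [1 2 5];                                   % assumed vertices, counterclockwise
Ae = 0.5*((xv(2)-xv(1))*(yv(3)-yv(1)) - (xv(3)-xv(1))*(yv(2)-yv(1)));   % area
psi   = @(xi,eta) [1-xi-eta; xi; eta];                         % shape functions (9.24)-(9.26)
xy    = @(xi,eta) [xv*psi(xi,eta); yv*psi(xi,eta)];            % map (9.27)
xieta = @(x,y) [ (yv(3)-yv(1))*(x-xv(1)) - (xv(3)-xv(1))*(y-yv(1));
                -(yv(2)-yv(1))*(x-xv(1)) + (xv(2)-xv(1))*(y-yv(1))]/(2*Ae);   % (9.28)-(9.29)
pt = xy(1/3, 1/3);                                             % image of the master centroid
disp(xieta(pt(1), pt(2)))                                      % returns [1/3; 1/3]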

Figure 9.9. A diagram of the quadrature formulas in 2D with one, three, and four quadrature points, respectively.

Table 9.1. Quadrature points and weights corresponding to the geometry in Figure 9.9

L = 1: point $a = (\frac{1}{3}, \frac{1}{3})$, weight $w_1 = \frac{1}{2}$.
L = 3: points $a = (\frac{1}{2}, 0)$, $b = (0, \frac{1}{2})$, $c = (\frac{1}{2}, \frac{1}{2})$, weights $w_k = \frac{1}{6}$ each.
L = 4: points $a = (\frac{1}{3}, \frac{1}{3})$, $b = (\frac{1}{5}, \frac{1}{5})$, $c = (\frac{3}{5}, \frac{1}{5})$, $d = (\frac{1}{5}, \frac{3}{5})$, weights $-\frac{27}{96}$, $\frac{25}{96}$, $\frac{25}{96}$, $\frac{25}{96}$.

9.4.1 Quadrature Formulas

In the assembling process, we need to evaluate double integrals such as
$\iint_{\Omega_e}q(x,y)\,\phi_i(x,y)\,\phi_j(x,y)\,dxdy = \iint_S \bar q(\xi,\eta)\,\psi_i(\xi,\eta)\,\psi_j(\xi,\eta)\left|\frac{\partial(x,y)}{\partial(\xi,\eta)}\right|d\xi d\eta$,

$\iint_{\Omega_e}f(x,y)\,\phi_j(x,y)\,dxdy = \iint_S \bar f(\xi,\eta)\,\psi_j(\xi,\eta)\left|\frac{\partial(x,y)}{\partial(\xi,\eta)}\right|d\xi d\eta$,
$\iint_{\Omega_e}p(x,y)\,\nabla\phi_i\cdot\nabla\phi_j\,dxdy = \iint_S \bar p(\xi,\eta)\,\nabla_{(x,y)}\psi_i\cdot\nabla_{(x,y)}\psi_j\left|\frac{\partial(x,y)}{\partial(\xi,\eta)}\right|d\xi d\eta$,
in which, for example, $\bar q(\xi,\eta)$ should really be $q(x(\xi,\eta), y(\xi,\eta)) = \bar q(\xi,\eta)$, and so on. To simplify the notation, we omit the bar. A quadrature formula has the form
$\iint_S g(\xi,\eta)\,d\xi d\eta \approx \sum_{k=1}^{L}w_k\,g(\xi_k,\eta_k)$,   (9.30)
where $S$ is the standard right triangle and $L$ is the number of points involved in the quadrature. In Table 9.1 we list some commonly used quadrature formulas in 2D using one, three, and four points. The geometry of the points is illustrated in Figure 9.9, and the coordinates of the points and the weights are given in Table 9.1. It is noted that only the three-point quadrature formula is closed, since its three points are on the boundary of the triangle; the other quadrature formulas are open.
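The rules of Table 9.1 are easy to code. The Matlab sketch below (ours, using the weights as reconstructed in the table above) applies them to g(ξ,η) = ξη, whose exact integral over S is 1/24 by (9.36).

% Sketch: the one-, three- and four-point rules on the standard triangle S.
Q1 = @(g) 0.5*g(1/3, 1/3);
Q3 = @(g) (g(1/2, 0) + g(0, 1/2) + g(1/2, 1/2))/6;
Q4 = @(g) -27/96*g(1/3,1/3) + 25/96*(g(1/5,1/5) + g(3/5,1/5) + g(1/5,3/5));
g  = @(xi, eta) xi.*eta;                 % exact integral over S is 1/24
disp([Q1(g), Q3(g), Q4(g)])              % Q3 and Q4 reproduce 1/24 exactly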

9.5 Some Implementation Details

The procedure is essentially the same as in the 1D case, but some details are slightly different.

9.5.1 Description of a Triangulation

A triangulation is determined by its elements and nodal points. We use the following notation:

Nodal points: $N_i$, $(x_1, y_1)$, $(x_2, y_2)$, ..., $(x_{nnode}, y_{nnode})$; i.e., we assume there are nnode nodal points.

Elements: $K_1$, $K_2$, ..., $K_{nelem}$; i.e., we assume there are nelem elements.

A 2D array nodes is used to describe the relation between the nodal points and the elements: nodes(3, nelem). The first index is the index of a nodal point within an element, usually in the counterclockwise direction, and the second index is the index of the element.

Figure 9.10. A simple triangulation with the row-wise natural ordering.

Example 9.13. Below we show the relation between the indices of the nodal points and of the elements for the first two elements of the triangulation in Figure 9.10:
nodes(1,1) = 5, $(x_5, y_5) = (0, h)$;  nodes(2,1) = 1, $(x_1, y_1) = (0, 0)$;  nodes(3,1) = 6, $(x_6, y_6) = (h, h)$;
nodes(1,2) = 7, $(x_7, y_7) = (2h, h)$;  nodes(2,2) = 2, $(x_2, y_2) = (h, 0)$;  nodes(3,2) = 6, $(x_6, y_6) = (h, h)$.

9.5.2 Outline of the FE Algorithm Using Piecewise Linear Basis Functions

The main part of the algorithm is the assembling process, a loop over the elements in which we compute the local stiffness matrix and the local load vector and add them to the global ones.

Note that psi has three values corresponding to the three nonzero basis functions, and dpsi is a $3\times 2$ matrix that contains the partial derivatives $\partial\psi_i/\partial\xi$ and $\partial\psi_i/\partial\eta$. The evaluation of the stiffness term is
$\iint_{\Omega_e}p(x,y)\,\nabla\phi_i\cdot\nabla\phi_j\,dxdy = \iint_S p(\xi,\eta)\left(\frac{\partial\psi_i}{\partial x}\frac{\partial\psi_j}{\partial x} + \frac{\partial\psi_i}{\partial y}\frac{\partial\psi_j}{\partial y}\right)|J|\,d\xi d\eta$,
where $J = \frac{\partial(x,y)}{\partial(\xi,\eta)}$ is the Jacobian of the transform. We need to calculate $\partial\psi_i/\partial x$ and $\partial\psi_i/\partial y$ in terms of $\xi$ and $\eta$. Notice that
$\frac{\partial\psi_i}{\partial x} = \frac{\partial\psi_i}{\partial\xi}\frac{\partial\xi}{\partial x} + \frac{\partial\psi_i}{\partial\eta}\frac{\partial\eta}{\partial x}$, $\qquad\frac{\partial\psi_i}{\partial y} = \frac{\partial\psi_i}{\partial\xi}\frac{\partial\xi}{\partial y} + \frac{\partial\psi_i}{\partial\eta}\frac{\partial\eta}{\partial y}$.
Since
$\xi = \frac{1}{2A_e}\big((y_3 - y_1)(x - x_1) - (x_3 - x_1)(y - y_1)\big)$, $\qquad\eta = \frac{1}{2A_e}\big(-(y_2 - y_1)(x - x_1) + (x_2 - x_1)(y - y_1)\big)$,
we obtain the partial derivatives
$\frac{\partial\xi}{\partial x} = \frac{y_3 - y_1}{2A_e}$, $\quad\frac{\partial\xi}{\partial y} = -\frac{x_3 - x_1}{2A_e}$, $\quad\frac{\partial\eta}{\partial x} = -\frac{y_2 - y_1}{2A_e}$, $\quad\frac{\partial\eta}{\partial y} = \frac{x_2 - x_1}{2A_e}$.
Once the local stiffness matrix and load vector have been computed, we add them to the global stiffness matrix and the global load vector.
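Putting the chain rule above into code, the Matlab sketch below (our own, not the package listing) computes the local stiffness matrix of one triangle for $-\nabla\cdot(p\nabla u)$, using a one-point quadrature at the centroid for p.

% Sketch: local 3 x 3 stiffness matrix on one triangle (counterclockwise vertices).
function ke = local_stiffness(xv, yv, p)
  Ae   = 0.5*((xv(2)-xv(1))*(yv(3)-yv(1)) - (xv(3)-xv(1))*(yv(2)-yv(1)));  % signed area
  dpsi = [-1 -1; 1 0; 0 1];                        % d psi_i/d xi and d psi_i/d eta
  Jinv = [ yv(3)-yv(1), -(xv(3)-xv(1));            % [dxi/dx, dxi/dy; deta/dx, deta/dy]
          -(yv(2)-yv(1)),  xv(2)-xv(1)]/(2*Ae);
  G    = dpsi*Jinv;                                % row i = gradient of psi_i in (x,y)
  ke   = p(mean(xv), mean(yv))*(G*G')*Ae;          % constant gradients; integral of |J| over S = Ae
end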

Solve the system of equations gk U = gf, where gk is the global stiffness matrix and gf is the global load vector. Options include: a direct method, e.g., Gaussian elimination; sparse matrix techniques, e.g., A = sparse(M, M) in Matlab; or an iterative method plus preconditioning, e.g., Jacobi, Gauss–Seidel, SOR(ω), conjugate gradient methods, etc.

Error analysis. This involves constructing interpolation functions, obtaining error estimates for the interpolation functions, and using the fact that the finite element solution is the best approximation in the finite element space in the energy norm.

9.6 Simplification of the FE Method for Poisson Equations

With constant coefficients, there is a closed form for the local stiffness matrix in terms of the coordinates of the nodal points, so the finite element algorithm can be simplified. We now introduce the simplified finite element algorithm. A good reference is White (1985): An Introduction to the Finite Element Method with Applications to Nonlinear Problems by R.E. White, John Wiley & Sons.

Let us consider the Poisson equation
$-\Delta u = f(x,y)$, $\quad (x,y)\in\Omega$,
$u(x,y) = g(x,y)$, $\quad (x,y)\in\partial\Omega_D$,
$\frac{\partial u}{\partial n} = 0$, $\quad (x,y)\in\partial\Omega_N$,
where $\Omega$ is an arbitrary but bounded domain. We can use the Matlab PDE Toolbox to generate a triangulation of the domain $\Omega$. The weak form is
$\iint_\Omega\nabla u\cdot\nabla v\,dxdy = \iint_\Omega fv\,dxdy$.
With the piecewise linear basis functions defined on a triangulation of $\Omega$, we can derive analytic expressions for the basis functions and the entries of the local stiffness matrix.

Theorem 9.14. Consider a triangle determined by $(x_1,y_1)$, $(x_2,y_2)$, and $(x_3,y_3)$. Let
$a_i = x_j y_m - x_m y_j$,   (9.31)
$b_i = y_j - y_m$,   (9.32)
$c_i = x_m - x_j$,   (9.33)

where $(i, j, m)$ is a positive (cyclic) permutation of $(1, 2, 3)$, e.g., $i = 1$, $j = 2$, $m = 3$; $i = 2$, $j = 3$, $m = 1$; and $i = 3$, $j = 1$, $m = 2$. Then the corresponding three nonzero basis functions are
$\psi_i(x,y) = \frac{1}{2\Delta}\,(a_i + b_i x + c_i y)$, $\quad i = 1, 2, 3$,   (9.34)
where $\psi_i(x_i, y_i) = 1$, $\psi_i(x_j, y_j) = 0$ if $i\ne j$, and
$2\Delta = \det\begin{pmatrix}1 & x_1 & y_1\\ 1 & x_2 & y_2\\ 1 & x_3 & y_3\end{pmatrix} = \pm\,2\,(\text{area of the triangle})$.   (9.35)

We prove the theorem for $\psi_1(x,y)$. Substituting $a_1$, $b_1$, and $c_1$ in terms of the $x_i$ and $y_i$ into the definition of $\psi_1$, we have
$\psi_1(x,y) = \frac{1}{2\Delta}\,(a_1 + b_1 x + c_1 y) = \frac{1}{2\Delta}\,\big((x_2y_3 - x_3y_2) + (y_2 - y_3)x + (x_3 - x_2)y\big)$,
so
$\psi_1(x_1, y_1) = \frac{1}{2\Delta}\,\big((x_2y_3 - x_3y_2) + (y_2 - y_3)x_1 + (x_3 - x_2)y_1\big) = \frac{2\Delta}{2\Delta} = 1$,
$\psi_1(x_2, y_2) = \frac{1}{2\Delta}\,\big((x_2y_3 - x_3y_2) + (y_2 - y_3)x_2 + (x_3 - x_2)y_2\big) = 0$,
$\psi_1(x_3, y_3) = \frac{1}{2\Delta}\,\big((x_2y_3 - x_3y_2) + (y_2 - y_3)x_3 + (x_3 - x_2)y_3\big) = 0$.
We can prove the same feature for $\psi_2$ and $\psi_3$. We also have the following theorem, which is essential for the simplified finite element method.

Theorem 9.15. With the same notation as in Theorem 9.14, and with $\Delta$ the area of the triangle, we have
$\iint_{\Omega_e}(\psi_1)^m(\psi_2)^n(\psi_3)^l\,dxdy = \frac{m!\,n!\,l!}{(m+n+l+2)!}\,2\Delta$,   (9.36)
$\iint_{\Omega_e}\nabla\psi_i\cdot\nabla\psi_j\,dxdy = \frac{b_i b_j + c_i c_j}{4\Delta}$,
$F_1^e = \iint_{\Omega_e}\psi_1\,f(x,y)\,dxdy \approx \Delta\left(\frac{f_1}{6} + \frac{f_2}{12} + \frac{f_3}{12}\right)$,
$F_2^e = \iint_{\Omega_e}\psi_2\,f(x,y)\,dxdy \approx \Delta\left(\frac{f_1}{12} + \frac{f_2}{6} + \frac{f_3}{12}\right)$,
$F_3^e = \iint_{\Omega_e}\psi_3\,f(x,y)\,dxdy \approx \Delta\left(\frac{f_1}{12} + \frac{f_2}{12} + \frac{f_3}{6}\right)$,
where $f_i = f(x_i, y_i)$.

The proof is straightforward since we have the analytic form of the $\psi_i$. We approximate $f(x,y)$ using
$f(x,y)\approx f_1\psi_1 + f_2\psi_2 + f_3\psi_3$,   (9.37)
and therefore
$F_1^e = \iint_{\Omega_e}\psi_1 f\,dxdy \approx f_1\iint_{\Omega_e}\psi_1^2\,dxdy + f_2\iint_{\Omega_e}\psi_1\psi_2\,dxdy + f_3\iint_{\Omega_e}\psi_1\psi_3\,dxdy$.   (9.38)
Note that the integrals in the last expression can be obtained from the formula (9.36). The error from approximating $f(x,y)$ in this way is negligible compared with the error of the finite element approximation itself, which comes from seeking an approximate solution only in the space $V_h$ instead of $H^1(\Omega)$. Similarly we can obtain the approximations $F_2^e$ and $F_3^e$.
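For the Poisson equation, the closed forms of Theorems 9.14-9.15 make the local computations one-liners. The Matlab sketch below (ours; the function name local_poisson is our own) returns the local stiffness matrix and load vector of one triangle.

% Sketch: closed-form local stiffness matrix and load vector on one triangle.
function [Ke, Fe] = local_poisson(x, y, f)
  % x, y: coordinates of the three vertices; f: values of f at the vertices
  b = [y(2)-y(3); y(3)-y(1); y(1)-y(2)];                       % b_i = y_j - y_m
  c = [x(3)-x(2); x(1)-x(3); x(2)-x(1)];                       % c_i = x_m - x_j
  Delta = 0.5*abs((x(2)-x(1))*(y(3)-y(1)) - (x(3)-x(1))*(y(2)-y(1)));   % area
  Ke = (b*b' + c*c')/(4*Delta);                                % entries (b_i b_j + c_i c_j)/(4*Delta)
  Fe = Delta/12*[2 1 1; 1 2 1; 1 1 2]*f(:);                    % from (9.36)-(9.38)
end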

9.6.1 A Pseudo-code of the Simplified FE Method

Assume that we have a triangulation, e.g., a mesh generated and then saved (exported) from Matlab. Then we have the array p,
p(1,1), p(1,2), ..., p(1,nnode): the x coordinates of the nodal points,
p(2,1), p(2,2), ..., p(2,nnode): the y coordinates of the nodal points;
the array t (the nodes array in our earlier notation),
t(1,1), t(1,2), ..., t(1,nelem): the index of the first node of each element,
t(2,1), t(2,2), ..., t(2,nelem): the index of the second node of each element,
t(3,1), t(3,2), ..., t(3,nelem): the index of the third node of each element;
and the array e describing the nodal points on the boundary,
e(1,1), e(1,2), ..., e(1,nbc): the index of the beginning node of each boundary edge,
e(2,1), e(2,2), ..., e(2,nbc): the index of the end node of each boundary edge.
A Matlab code for the simplified finite element method is provided with the book.
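The book's full listing is not reproduced here; the sketch below (our own code, not the authors') shows the same idea: assemble and solve $-\Delta u = f$ with Dirichlet data g on the whole boundary from the exported arrays p, t, e, using local_poisson from the previous sketch. The functions f and g are assumed to accept vector arguments elementwise.

% Sketch: a simplified FE solver for -Laplace(u) = f, u = g on the boundary.
function u = simple_fem(p, t, e, f, g)
  nnode = size(p, 2);  K = sparse(nnode, nnode);  F = zeros(nnode, 1);
  for el = 1:size(t, 2)
    idx = t(1:3, el);  xv = p(1, idx);  yv = p(2, idx);
    fv  = f(xv, yv) + zeros(1, 3);                 % nodal values of f
    [Ke, Fe] = local_poisson(xv, yv, fv);          % closed forms of Theorem 9.15
    K(idx, idx) = K(idx, idx) + Ke;                % add to the global stiffness matrix
    F(idx)      = F(idx) + Fe;                     % add to the global load vector
  end
  bnd = unique(e(1:2, :));                         % boundary nodes (Dirichlet)
  u = zeros(nnode, 1);
  u(bnd) = g(p(1, bnd), p(2, bnd)).' + zeros(numel(bnd), 1);
  F = F - K(:, bnd)*u(bnd);                        % move known values to the right-hand side
  int = setdiff(1:nnode, bnd);
  u(int) = K(int, int) \ F(int);                   % solve the reduced system
end

For Example 9.16 below one would call, e.g., u = simple_fem(p, t, e, @(x,y) -4 + 0*x, @(x,y) x.^2 + y.^2), and compare u with the exact nodal values.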


Example 9.16. We test the simplified finite element method for solving a Poisson equation on the following example:
Domain: the unit square with a hole (cf. Figure 9.11).
Exact solution: $u(x,y) = x^2 + y^2$, for which $f(x,y) = -4$ in $-\Delta u = f$.
BC: Dirichlet condition on the whole boundary.
We use the Matlab PDE Toolbox to generate the initial mesh and then export it. Figure 9.11 shows the domain and the mesh generated by the Matlab PDE Toolbox. Figure 9.12(a) is the mesh plot of the finite element solution, and Figure 9.12(b) is the corresponding error plot (the magnitude of the error is $O(h^2)$).

Figure 9.11. A mesh generated from Matlab.

Figure 9.12. (a) A plot of the finite element solution when $f(x,y) = -4$, and (b) the corresponding error plot.

9.7 Some FE Spaces in $H^1(\Omega)$ and $H^2(\Omega)$

Given a triangulation (triangles, rectangles, quadrilaterals, etc.), let us construct different finite element spaces of finite dimension. There are several reasons to do so, including: better accuracy of the finite element solution, with piecewise higher-order polynomial basis functions; and allowing for higher-order derivatives in higher-order PDEs, e.g., in solving the biharmonic equation in the $H^2$ space. As previously mentioned, we consider conforming piecewise polynomial finite element spaces. The set of polynomials of degree no more than $k$ in the $xy$-plane is denoted by
$P_k = \Big\{v(x,y):\ v(x,y) = \sum_{0\le i+j\le k}a_{ij}\,x^i y^j\Big\}$.
Below we list some examples:
$P_1 = \{v(x,y):\ v(x,y) = a_{00} + a_{10}x + a_{01}y\}$,
$P_2 = \{v(x,y):\ v(x,y) = a_{00} + a_{10}x + a_{01}y + a_{20}x^2 + a_{11}xy + a_{02}y^2\}$,
$P_3 = \{P_2 + a_{30}x^3 + a_{21}x^2y + a_{12}xy^2 + a_{03}y^3\}$, and so on.

Degree of freedom of $P_k$. For any fixed power $x^i$, the possible accompanying $y^j$ terms in a $p_k(x,y)\in P_k$ are $y^0, y^1, \ldots, y^{k-i}$, i.e., $j$ ranges from $0$ to $k - i$. Thus there are $k - i + 1$ such terms for each $i$, so the dimension of $P_k$ is
$\sum_{i=0}^{k}(k - i + 1) = \frac{(k+1)(k+2)}{2}$.

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 11 Partial Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002.

More information

Numerical Analysis for Engineers and Scientists

Numerical Analysis for Engineers and Scientists Numerical Analysis for Engineers and Scientists Striking a balance between theory and practice, this graduate-level text is perfect for students in the applied sciences. The author provides a clear introduction

More information

Index. higher order methods, 52 nonlinear, 36 with variable coefficients, 34 Burgers equation, 234 BVP, see boundary value problems

Index. higher order methods, 52 nonlinear, 36 with variable coefficients, 34 Burgers equation, 234 BVP, see boundary value problems Index A-conjugate directions, 83 A-stability, 171 A( )-stability, 171 absolute error, 243 absolute stability, 149 for systems of equations, 154 absorbing boundary conditions, 228 Adams Bashforth methods,

More information

Preface. 2 Linear Equations and Eigenvalue Problem 22

Preface. 2 Linear Equations and Eigenvalue Problem 22 Contents Preface xv 1 Errors in Computation 1 1.1 Introduction 1 1.2 Floating Point Representation of Number 1 1.3 Binary Numbers 2 1.3.1 Binary number representation in computer 3 1.4 Significant Digits

More information

ALGEBRA AND GEOMETRY. Cambridge University Press Algebra and Geometry Alan F. Beardon Frontmatter More information

ALGEBRA AND GEOMETRY. Cambridge University Press Algebra and Geometry Alan F. Beardon Frontmatter More information ALGEBRA AND GEOMETRY This text gives a basic introduction and a unified approach to algebra and geometry. It covers the ideas of complex numbers, scalar and vector products, determinants, linear algebra,

More information

An Introduction to Numerical Methods for Differential Equations. Janet Peterson

An Introduction to Numerical Methods for Differential Equations. Janet Peterson An Introduction to Numerical Methods for Differential Equations Janet Peterson Fall 2015 2 Chapter 1 Introduction Differential equations arise in many disciplines such as engineering, mathematics, sciences

More information

Finite Difference Methods for Boundary Value Problems

Finite Difference Methods for Boundary Value Problems Finite Difference Methods for Boundary Value Problems October 2, 2013 () Finite Differences October 2, 2013 1 / 52 Goals Learn steps to approximate BVPs using the Finite Difference Method Start with two-point

More information

Partial Differential Equations

Partial Differential Equations Partial Differential Equations Introduction Deng Li Discretization Methods Chunfang Chen, Danny Thorne, Adam Zornes CS521 Feb.,7, 2006 What do You Stand For? A PDE is a Partial Differential Equation This

More information

Computation Fluid Dynamics

Computation Fluid Dynamics Computation Fluid Dynamics CFD I Jitesh Gajjar Maths Dept Manchester University Computation Fluid Dynamics p.1/189 Garbage In, Garbage Out We will begin with a discussion of errors. Useful to understand

More information

Cambridge IGCSE and O Level Additional Mathematics Coursebook

Cambridge IGCSE and O Level Additional Mathematics Coursebook Cambridge IGCSE and O Level Additional Mathematics Coursebook Second edition University Printing House, Cambridge CB2 8BS, United Kingdom One Liberty Plaza, 20th Floor, New York, NY 10006, USA 477 Williamstown

More information

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES)

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) RAYTCHO LAZAROV 1 Notations and Basic Functional Spaces Scalar function in R d, d 1 will be denoted by u,

More information

Math 7824 Spring 2010 Numerical solution of partial differential equations Classroom notes and homework

Math 7824 Spring 2010 Numerical solution of partial differential equations Classroom notes and homework Math 7824 Spring 2010 Numerical solution of partial differential equations Classroom notes and homework Jan Mandel University of Colorado Denver May 12, 2010 1/20/09: Sec. 1.1, 1.2. Hw 1 due 1/27: problems

More information

Applied Mathematics 205. Unit III: Numerical Calculus. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit III: Numerical Calculus. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit III: Numerical Calculus Lecturer: Dr. David Knezevic Unit III: Numerical Calculus Chapter III.3: Boundary Value Problems and PDEs 2 / 96 ODE Boundary Value Problems 3 / 96

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University The Implicit Schemes for the Model Problem The Crank-Nicolson scheme and θ-scheme

More information

Part 1. The diffusion equation

Part 1. The diffusion equation Differential Equations FMNN10 Graded Project #3 c G Söderlind 2016 2017 Published 2017-11-27. Instruction in computer lab 2017-11-30/2017-12-06/07. Project due date: Monday 2017-12-11 at 12:00:00. Goals.

More information

Numerical Methods for Engineers and Scientists

Numerical Methods for Engineers and Scientists Numerical Methods for Engineers and Scientists Second Edition Revised and Expanded Joe D. Hoffman Department of Mechanical Engineering Purdue University West Lafayette, Indiana m MARCEL D E К К E R MARCEL

More information

x n+1 = x n f(x n) f (x n ), n 0.

x n+1 = x n f(x n) f (x n ), n 0. 1. Nonlinear Equations Given scalar equation, f(x) = 0, (a) Describe I) Newtons Method, II) Secant Method for approximating the solution. (b) State sufficient conditions for Newton and Secant to converge.

More information

An Introduction to Numerical Analysis

An Introduction to Numerical Analysis An Introduction to Numerical Analysis Endre Süli and David F. Mayers University of Oxford published by the press syndicate of the university of cambridge The Pitt Building, Trumpington Street, Cambridge,

More information

7 Hyperbolic Differential Equations

7 Hyperbolic Differential Equations Numerical Analysis of Differential Equations 243 7 Hyperbolic Differential Equations While parabolic equations model diffusion processes, hyperbolic equations model wave propagation and transport phenomena.

More information

STOCHASTIC PROCESSES FOR PHYSICISTS. Understanding Noisy Systems

STOCHASTIC PROCESSES FOR PHYSICISTS. Understanding Noisy Systems STOCHASTIC PROCESSES FOR PHYSICISTS Understanding Noisy Systems Stochastic processes are an essential part of numerous branches of physics, as well as biology, chemistry, and finance. This textbook provides

More information

AM 205: lecture 14. Last time: Boundary value problems Today: Numerical solution of PDEs

AM 205: lecture 14. Last time: Boundary value problems Today: Numerical solution of PDEs AM 205: lecture 14 Last time: Boundary value problems Today: Numerical solution of PDEs ODE BVPs A more general approach is to formulate a coupled system of equations for the BVP based on a finite difference

More information

Basic Aspects of Discretization

Basic Aspects of Discretization Basic Aspects of Discretization Solution Methods Singularity Methods Panel method and VLM Simple, very powerful, can be used on PC Nonlinear flow effects were excluded Direct numerical Methods (Field Methods)

More information

AIMS Exercise Set # 1

AIMS Exercise Set # 1 AIMS Exercise Set #. Determine the form of the single precision floating point arithmetic used in the computers at AIMS. What is the largest number that can be accurately represented? What is the smallest

More information

Chapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs

Chapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs Chapter Two: Numerical Methods for Elliptic PDEs Finite Difference Methods for Elliptic PDEs.. Finite difference scheme. We consider a simple example u := subject to Dirichlet boundary conditions ( ) u

More information

Finite Differences for Differential Equations 28 PART II. Finite Difference Methods for Differential Equations

Finite Differences for Differential Equations 28 PART II. Finite Difference Methods for Differential Equations Finite Differences for Differential Equations 28 PART II Finite Difference Methods for Differential Equations Finite Differences for Differential Equations 29 BOUNDARY VALUE PROBLEMS (I) Solving a TWO

More information

Introduction to Partial Differential Equations

Introduction to Partial Differential Equations Introduction to Partial Differential Equations Philippe B. Laval KSU Current Semester Philippe B. Laval (KSU) Key Concepts Current Semester 1 / 25 Introduction The purpose of this section is to define

More information

Index. C 2 ( ), 447 C k [a,b], 37 C0 ( ), 618 ( ), 447 CD 2 CN 2

Index. C 2 ( ), 447 C k [a,b], 37 C0 ( ), 618 ( ), 447 CD 2 CN 2 Index advection equation, 29 in three dimensions, 446 advection-diffusion equation, 31 aluminum, 200 angle between two vectors, 58 area integral, 439 automatic step control, 119 back substitution, 604

More information

Finite difference method for elliptic problems: I

Finite difference method for elliptic problems: I Finite difference method for elliptic problems: I Praveen. C praveen@math.tifrbng.res.in Tata Institute of Fundamental Research Center for Applicable Mathematics Bangalore 560065 http://math.tifrbng.res.in/~praveen

More information

Numerical Methods for PDEs

Numerical Methods for PDEs Numerical Methods for PDEs Partial Differential Equations (Lecture 1, Week 1) Markus Schmuck Department of Mathematics and Maxwell Institute for Mathematical Sciences Heriot-Watt University, Edinburgh

More information

Computational Nanoscience

Computational Nanoscience Computational Nanoscience Applications for Molecules, Clusters, and Solids Computer simulation is an indispensable research tool in modeling, understanding, and predicting nanoscale phenomena. However,

More information

Numerical Analysis of Differential Equations Numerical Solution of Elliptic Boundary Value

Numerical Analysis of Differential Equations Numerical Solution of Elliptic Boundary Value Numerical Analysis of Differential Equations 188 5 Numerical Solution of Elliptic Boundary Value Problems 5 Numerical Solution of Elliptic Boundary Value Problems TU Bergakademie Freiberg, SS 2012 Numerical

More information

DISCRETE INVERSE AND STATE ESTIMATION PROBLEMS

DISCRETE INVERSE AND STATE ESTIMATION PROBLEMS DISCRETE INVERSE AND STATE ESTIMATION PROBLEMS With Geophysical The problems of making inferences about the natural world from noisy observations and imperfect theories occur in almost all scientific disciplines.

More information

KINGS COLLEGE OF ENGINEERING DEPARTMENT OF MATHEMATICS ACADEMIC YEAR / EVEN SEMESTER QUESTION BANK

KINGS COLLEGE OF ENGINEERING DEPARTMENT OF MATHEMATICS ACADEMIC YEAR / EVEN SEMESTER QUESTION BANK KINGS COLLEGE OF ENGINEERING MA5-NUMERICAL METHODS DEPARTMENT OF MATHEMATICS ACADEMIC YEAR 00-0 / EVEN SEMESTER QUESTION BANK SUBJECT NAME: NUMERICAL METHODS YEAR/SEM: II / IV UNIT - I SOLUTION OF EQUATIONS

More information

Syllabus for Applied Mathematics Graduate Student Qualifying Exams, Dartmouth Mathematics Department

Syllabus for Applied Mathematics Graduate Student Qualifying Exams, Dartmouth Mathematics Department Syllabus for Applied Mathematics Graduate Student Qualifying Exams, Dartmouth Mathematics Department Alex Barnett, Scott Pauls, Dan Rockmore August 12, 2011 We aim to touch upon many topics that a professional

More information

in this web service Cambridge University Press

in this web service Cambridge University Press CONTINUUM MECHANICS This is a modern textbook for courses in continuum mechanics. It provides both the theoretical framework and the numerical methods required to model the behavior of continuous materials.

More information

A Student s Guide to Waves

A Student s Guide to Waves A Student s Guide to Waves Waves are an important topic in the fields of mechanics, electromagnetism, and quantum theory, but many students struggle with the mathematical aspects. Written to complement

More information

2.29 Numerical Fluid Mechanics Spring 2015 Lecture 9

2.29 Numerical Fluid Mechanics Spring 2015 Lecture 9 Spring 2015 Lecture 9 REVIEW Lecture 8: Direct Methods for solving (linear) algebraic equations Gauss Elimination LU decomposition/factorization Error Analysis for Linear Systems and Condition Numbers

More information

FDM for parabolic equations

FDM for parabolic equations FDM for parabolic equations Consider the heat equation where Well-posed problem Existence & Uniqueness Mass & Energy decreasing FDM for parabolic equations CNFD Crank-Nicolson + 2 nd order finite difference

More information

The method of lines (MOL) for the diffusion equation

The method of lines (MOL) for the diffusion equation Chapter 1 The method of lines (MOL) for the diffusion equation The method of lines refers to an approximation of one or more partial differential equations with ordinary differential equations in just

More information

An Efficient Algorithm Based on Quadratic Spline Collocation and Finite Difference Methods for Parabolic Partial Differential Equations.

An Efficient Algorithm Based on Quadratic Spline Collocation and Finite Difference Methods for Parabolic Partial Differential Equations. An Efficient Algorithm Based on Quadratic Spline Collocation and Finite Difference Methods for Parabolic Partial Differential Equations by Tong Chen A thesis submitted in conformity with the requirements

More information

Foundations and Applications of Engineering Mechanics

Foundations and Applications of Engineering Mechanics Foundations and Applications of Engineering Mechanics 4843/24, 2nd Floor, Ansari Road, Daryaganj, Delhi - 110002, India Cambridge University Press is part of the University of Cambridge. It furthers the

More information

Finite Difference and Finite Element Methods

Finite Difference and Finite Element Methods Finite Difference and Finite Element Methods Georgy Gimel farb COMPSCI 369 Computational Science 1 / 39 1 Finite Differences Difference Equations 3 Finite Difference Methods: Euler FDMs 4 Finite Element

More information

PART IV Spectral Methods

PART IV Spectral Methods PART IV Spectral Methods Additional References: R. Peyret, Spectral methods for incompressible viscous flow, Springer (2002), B. Mercier, An introduction to the numerical analysis of spectral methods,

More information

ALGEBRAIC SHIFT REGISTER SEQUENCES

ALGEBRAIC SHIFT REGISTER SEQUENCES ALGEBRAIC SHIFT REGISTER SEQUENCES Pseudo-random sequences are essential ingredients of every modern digital communication system including cellular telephones, GPS, secure internet transactions, and satellite

More information

INTRODUCTION TO PDEs

INTRODUCTION TO PDEs INTRODUCTION TO PDEs In this course we are interested in the numerical approximation of PDEs using finite difference methods (FDM). We will use some simple prototype boundary value problems (BVP) and initial

More information

CLASSICAL MECHANICS. The author

CLASSICAL MECHANICS.  The author CLASSICAL MECHANICS Gregory s Classical Mechanics is a major new textbook for undergraduates in mathematics and physics. It is a thorough, self-contained and highly readable account of a subject many students

More information

METHODS OF ENGINEERING MATHEMATICS

METHODS OF ENGINEERING MATHEMATICS METHODS OF ENGINEERING MATHEMATICS Edward J. Hang Kyung K. Choi Department of Mechanical Engineering College of Engineering The University of Iowa Iowa City, Iowa 52242 METHODS OF ENGINEERING MATHEMATICS

More information

Chapter 3 Second Order Linear Equations

Partial Differential Equations (Math 3303), 2014-2015. Second-order partial differential equations for an unknown function u(x,

Numerical Analysis and Methods for PDE I

Numerical Analysis and Methods for PDE I Numerical Analysis and Methods for PDE I A. J. Meir Department of Mathematics and Statistics Auburn University US-Africa Advanced Study Institute on Analysis, Dynamical Systems, and Mathematical Modeling

More information

Fundamentals of Geophysics

This second edition of Fundamentals of Geophysics has been completely revised and updated, and is the ideal geophysics textbook

An Analysis of Five Numerical Methods for Approximating Certain Hypergeometric Functions in Domains within their Radii of Convergence

John Pearson, MSc Special Topic. Abstract: Numerical approximations of

Numerical Methods - PROBLEMS

The Taylor series, about the origin, for log(1 + x) is x - x^2/2 + x^3/3 - x^4/4 + ... Find an upper bound on the magnitude of the truncation error on the interval x <= 0.5 when log(1 + x)
3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)
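The divided-difference problem above can be illustrated with a minimal MATLAB sketch of Newton's divided-difference table and nested evaluation of the interpolating polynomial; the data points are made up for illustration:

% Newton divided differences and nested evaluation (illustrative data)
x = [0 1 2 4];  y = [1 1 2 5];          % assumed sample data
n = numel(x);
c = y;                                   % c(k) will hold the divided difference f[x_1,...,x_k]
for j = 2:n
    for i = n:-1:j
        c(i) = (c(i) - c(i-1)) / (x(i) - x(i-j+1));
    end
end
% Evaluate the Newton form p(t) = c1 + c2*(t-x1) + c3*(t-x1)*(t-x2) + ...
t = 3;  p = c(n);
for i = n-1:-1:1
    p = c(i) + (t - x(i))*p;             % nested (Horner-like) evaluation
end
p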

4 Stability analysis of finite-difference methods for ODEs

MATH 337, by T. Lakoba, University of Vermont. 4.1 Consistency, stability, and convergence of a numerical method; Main Theorem. In this Lecture
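A minimal MATLAB sketch of the kind of stability experiment such notes discuss: forward Euler applied to the test equation y' = lambda*y is stable only when |1 + h*lambda| <= 1; the value of lambda and the step sizes are assumptions for illustration:

% Forward Euler on y' = lambda*y: stable iff |1 + h*lambda| <= 1
lambda = -50;
for h = [0.01 0.05]                     % 1 + h*lambda = 0.5 (stable) and -1.5 (unstable)
    N = round(1/h);  y = 1;
    for n = 1:N
        y = y + h*lambda*y;             % forward Euler step
    end
    fprintf('h = %.2f:  y(1) approx %.3e  (exact %.3e)\n', h, y, exp(lambda));
end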

NUMERICAL METHODS for CHEMICAL ENGINEERS Using Excel, VBA, and MATLAB

Victor J. Law. CRC Press, Taylor & Francis Group, Boca Raton, London, New York. CRC Press is an imprint of the Taylor & Francis Group,

Lecture 42 Determining Internal Node Values

Lecture 42 Determining Internal Node Values Lecture 42 Determining Internal Node Values As seen in the previous section, a finite element solution of a boundary value problem boils down to finding the best values of the constants {C j } n, which

More information

NUMERICAL METHODS FOR ENGINEERING APPLICATION

NUMERICAL METHODS FOR ENGINEERING APPLICATION NUMERICAL METHODS FOR ENGINEERING APPLICATION Second Edition JOEL H. FERZIGER A Wiley-Interscience Publication JOHN WILEY & SONS, INC. New York / Chichester / Weinheim / Brisbane / Singapore / Toronto

More information

A Simple Compact Fourth-Order Poisson Solver on Polar Geometry

Journal of Computational Physics 182, 337-345 (2002), doi:10.1006/jcph.2002.7172. Ming-Chih Lai, Department of Applied Mathematics, National

Tutorial 2. Introduction to numerical schemes

236861 Numerical Geometry of Images, (c) 2012. Classifying PDEs: looking at the PDE A u_xx + 2B u_xy + C u_yy + D u_x + E u_y + F u + ... = 0, and its discriminant, B^2
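A minimal MATLAB sketch of the classification test quoted above, using the discriminant B^2 - AC of A u_xx + 2B u_xy + C u_yy + ... = 0; the coefficients are illustrative assumptions:

% Classify A*u_xx + 2*B*u_xy + C*u_yy + ... = 0 via its discriminant B^2 - A*C
A = 1; B = 0; C = 1;                    % e.g. the Laplace equation (illustrative)
d = B^2 - A*C;
if d > 0
    disp('hyperbolic')
elseif d == 0
    disp('parabolic')
else
    disp('elliptic')
end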

NONLINEAR STRUCTURAL DYNAMICS USING FE METHODS

NONLINEAR STRUCTURAL DYNAMICS USING FE METHODS NONLINEAR STRUCTURAL DYNAMICS USING FE METHODS Nonlinear Structural Dynamics Using FE Methods emphasizes fundamental mechanics principles and outlines a modern approach to understanding structural dynamics.

More information

A Computational Approach to Study a Logistic Equation

Communications in Mathematical Analysis, Volume 1, Number 2, pp. 75-84, 2006. ISSN 0973-3841, (c) 2006 Research India Publications. G. A. Afrouzi and S. Khademloo

Partial Differential Equations

Partial Differential Equations Partial Differential Equations Xu Chen Assistant Professor United Technologies Engineering Build, Rm. 382 Department of Mechanical Engineering University of Connecticut xchen@engr.uconn.edu Contents 1

More information

Thermal Physics. Energy and Entropy

Thermal Physics. Energy and Entropy Thermal Physics Energy and Entropy Written by distinguished physics educator, this fresh introduction to thermodynamics, statistical mechanics and the study of matter is ideal for undergraduate courses.

More information

Partial Differential Equations with Numerical Methods

Stig Larsson, Vidar Thomée, May 2, 2003, Springer, Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo. Preface: Our purpose in this

Numerical Methods for Chemical Engineering

Suitable for a first-year graduate course, this textbook unites the applications of numerical mathematics and scientific computing to the practice of chemical

MATHEMATICAL MODELLING IN ONE DIMENSION

African Institute of Mathematics Library Series. The African Institute of Mathematical Sciences (AIMS), founded in 2003 in Muizenberg, South Africa, provides a one-year

ESSENTIAL Mathematical Methods 1&2 CAS. MICHAEL EVANS, KAY LIPSON, DOUG WALLACE

TI-Nspire and Casio ClassPad material prepared in collaboration with Jan Honnens, David Hibbard. CAMBRIDGE UNIVERSITY PRESS, Cambridge,

Numerical Solutions of Partial Differential Equations

Numerical Solutions of Partial Differential Equations Numerical Solutions of Partial Differential Equations Dr. Xiaozhou Li xiaozhouli@uestc.edu.cn School of Mathematical Sciences University of Electronic Science and Technology of China Introduction Overview

More information

Simple Examples on Rectangular Domains

Chapter 5. In this chapter we consider simple elliptic boundary value problems in rectangular domains in R^2 or R^3; our prototype example is the Poisson equation
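A minimal MATLAB sketch of the prototype problem mentioned here, the Poisson equation on the unit square with homogeneous Dirichlet data and the standard five-point stencil; the right-hand side is chosen so the exact solution is known, and all sizes are assumptions for illustration:

% Five-point finite difference solve of -Laplace(u) = f on (0,1)^2, u = 0 on the boundary
m = 40; h = 1/m;
[x, y] = meshgrid(h:h:1-h);             % interior nodes
f = 2*pi^2*sin(pi*x).*sin(pi*y);        % so that u_exact = sin(pi x) sin(pi y)
e = ones(m-1,1);
T = spdiags([-e 2*e -e], -1:1, m-1, m-1);
I = speye(m-1);
A = (kron(I,T) + kron(T,I)) / h^2;      % 2D discrete negative Laplacian
u = A \ f(:);                           % sparse direct solve
err = max(abs(u - reshape(sin(pi*x).*sin(pi*y), [], 1)))   % O(h^2) error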

[2] (a) Develop and describe the piecewise linear Galerkin finite element approximation of,

C. Vese, Practice problems. [1] Write the differential equation Delta u + u = f(x, y) for (x, y) in Omega; u = 1 for (x, y) on Omega_1; du/dn + u = x for (x, y) on Omega_2; where Omega = {(x, y) : x^2 + y^2 < 1}, Omega_1 = {(x, y) : x^2 + y^2 = 1, x >= 0}, Omega_2 = {(x,

Numerical Methods for PDEs

Numerical Methods for PDEs Numerical Methods for PDEs Problems 1. Numerical Differentiation. Find the best approximation to the second drivative d 2 f(x)/dx 2 at x = x you can of a function f(x) using (a) the Taylor series approach

More information
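The first problem above can be illustrated with a minimal MATLAB sketch of the centered difference (f(x0-h) - 2 f(x0) + f(x0+h))/h^2, whose error is O(h^2) by Taylor expansion; the test function and point are assumptions for illustration:

% Centered second-difference approximation of f''(x0), error O(h^2)
f = @(x) exp(x);  x0 = 1;               % illustrative test function and point
for h = 10.^(-1:-1:-5)
    d2 = (f(x0-h) - 2*f(x0) + f(x0+h)) / h^2;
    fprintf('h = %.0e   error = %.3e\n', h, abs(d2 - exp(x0)));
end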

Marching on the BL equations

Marching on the BL equations Marching on the BL equations Harvey S. H. Lam February 10, 2004 Abstract The assumption is that you are competent in Matlab or Mathematica. White s 4-7 starting on page 275 shows us that all generic boundary

More information

Numerical Methods for Partial Differential Equations CAAM 452. Spring 2005

Numerical Methods for Partial Differential Equations CAAM 452. Spring 2005 Numerical Methods for Partial Differential Equations Instructor: Tim Warburton Class Location: Duncan Hall 1046 Class Time: 9:5am to 10:40am Office Hours: 10:45am to noon in DH 301 CAAM 45 Spring 005 Homeworks

More information

Numerical Methods. King Saud University

Numerical Methods. King Saud University Numerical Methods King Saud University Aims In this lecture, we will... find the approximate solutions of derivative (first- and second-order) and antiderivative (definite integral only). Numerical Differentiation

More information
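For the definite-integral part of this entry, a minimal MATLAB sketch of the composite trapezoidal rule; the integrand and interval are assumptions for illustration:

% Composite trapezoidal rule for a definite integral (sketch)
f = @(x) 1./(1 + x.^2);                 % illustrative integrand
a = 0; b = 1; n = 100;
x = linspace(a, b, n+1);  h = (b - a)/n;
T = h*(sum(f(x)) - (f(a) + f(b))/2);    % trapezoidal sum
abs(T - pi/4)                            % exact integral is pi/4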

Finite difference methods for the diffusion equation

Finite difference methods for the diffusion equation Finite difference methods for the diffusion equation D150, Tillämpade numeriska metoder II Olof Runborg May 0, 003 These notes summarize a part of the material in Chapter 13 of Iserles. They are based

More information

An Introduction to Celestial Mechanics

An Introduction to Celestial Mechanics An Introduction to Celestial Mechanics This accessible text on classical celestial mechanics the principles governing the motions of bodies in the solar system provides a clear and concise treatment of

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 9 Initial Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 9

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 9 Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 9 Initial Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction

More information

Degree Master of Science in Mathematical Modelling and Scientific Computing Mathematical Methods I Thursday, 12th January 2012, 9:30 a.m.- 11:30 a.m.

Candidates should submit answers to a maximum of four

Lecture Introduction

Lecture Introduction Lecture 1 1.1 Introduction The theory of Partial Differential Equations (PDEs) is central to mathematics, both pure and applied. The main difference between the theory of PDEs and the theory of Ordinary

More information

MATH-UA 263 Partial Differential Equations Recitation Summary

MATH-UA 263 Partial Differential Equations Recitation Summary MATH-UA 263 Partial Differential Equations Recitation Summary Yuanxun (Bill) Bao Office Hour: Wednesday 2-4pm, WWH 1003 Email: yxb201@nyu.edu 1 February 2, 2018 Topics: verifying solution to a PDE, dispersion

More information

Preliminary Examination, Numerical Analysis, August 2016

Preliminary Examination, Numerical Analysis, August 2016 Preliminary Examination, Numerical Analysis, August 2016 Instructions: This exam is closed books and notes. The time allowed is three hours and you need to work on any three out of questions 1-4 and any

More information

Ordinary Differential Equations

Ordinary Differential Equations Ordinary Differential Equations We call Ordinary Differential Equation (ODE) of nth order in the variable x, a relation of the kind: where L is an operator. If it is a linear operator, we call the equation

More information

Introduction to Finite and Spectral Element Methods Using MATLAB, Second Edition. C. Pozrikidis, University of Massachusetts Amherst, USA

C. Pozrikidis, University of Massachusetts Amherst, USA. (c) CRC Press, Taylor & Francis Group, Boca Raton, London, New York. CRC

Outline. 1 Boundary Value Problems. 2 Numerical Methods for BVPs

Michael T. Heath, Scientific Computing, 2/45.
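A minimal MATLAB sketch of a finite difference solve of a two-point boundary value problem, here -u'' = f on (0,1) with u(0) = u(1) = 0; the right-hand side is an assumption for illustration:

% Finite differences for the two-point BVP -u'' = f, u(0) = u(1) = 0 (sketch)
m = 100; h = 1/m; x = (h:h:1-h)';
f = pi^2*sin(pi*x);                      % so that u_exact = sin(pi x)
e = ones(m-1,1);
A = spdiags([-e 2*e -e], -1:1, m-1, m-1) / h^2;
u = A \ f;
max(abs(u - sin(pi*x)))                  % second-order accurate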

Ordinary Differential Equations: Initial Value problems (IVP)

Many engineering applications can be modeled as differential equations (DE). In this book, our emphasis is on how to use computer
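A minimal MATLAB sketch of the simplest one-step method for such an IVP, forward Euler for y' = f(t, y), y(0) = y0; the model problem below is an assumption for illustration:

% Forward Euler for y' = f(t,y), y(0) = y0 (sketch)
f  = @(t,y) -2*t*y^2;                   % illustrative model problem, y_exact = 1/(1 + t^2)
y0 = 1;  T = 2;  N = 200;  h = T/N;
t = 0;  y = y0;
for n = 1:N
    y = y + h*f(t, y);                   % Euler step
    t = t + h;
end
[y, 1/(1+T^2)]                           % numerical vs exact value at t = T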

Lectures - Week 11: General First Order ODEs & Numerical Methods for IVPs

Integrating dy/y^2 = -2t dt gives 1/y = t^2 + C, which implies C = 1 and the solution is y = 1/(t^2 + 1). In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear

MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING CAMBRIDGE, MASSACHUSETTS NUMERICAL FLUID MECHANICS FALL 2011

QUIZ 2. The goals of this quiz 2 are to: (i) ask some general

PDEs, part 1: Introduction and elliptic PDEs

PDEs, part 1: Introduction and elliptic PDEs PDEs, part 1: Introduction and elliptic PDEs Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Partial di erential equations The solution depends on several variables,

More information

Selected HW Solutions

Selected HW Solutions Selected HW Solutions HW1 1 & See web page notes Derivative Approximations. For example: df f i+1 f i 1 = dx h i 1 f i + hf i + h h f i + h3 6 f i + f i + h 6 f i + 3 a realmax 17 1.7014 10 38 b realmin

More information

13 PDEs on spatially bounded domains: initial boundary value problems (IBVPs)

A prototypical problem we will discuss in detail is the 1D diffusion equation u_t = D u_xx, 0 < x < l, t > 0, for a finite-length rod, u(x,
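A minimal MATLAB sketch of the simplest explicit scheme for this IBVP, forward differencing in time and centered differencing in space, which needs the stability restriction D*dt/h^2 <= 1/2; the grid, D and initial data are assumptions for illustration:

% Explicit FTCS scheme for u_t = D*u_xx, u(0,t) = u(l,t) = 0 (sketch)
D = 1; l = 1; m = 50; h = l/m; x = (0:h:l)';
dt = 0.4*h^2/D;                          % respects the stability bound D*dt/h^2 <= 1/2
u = sin(pi*x/l);                         % assumed initial condition
for n = 1:round(0.1/dt)
    u(2:m) = u(2:m) + D*dt/h^2 * (u(3:m+1) - 2*u(2:m) + u(1:m-1));
    % boundary values u(1) and u(m+1) stay zero
end
plot(x, u)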

PDE Solvers for Fluid Flow

PDE Solvers for Fluid Flow PDE Solvers for Fluid Flow issues and algorithms for the Streaming Supercomputer Eran Guendelman February 5, 2002 Topics Equations for incompressible fluid flow 3 model PDEs: Hyperbolic, Elliptic, Parabolic

More information

ADVANCED ENGINEERING MATHEMATICS

ADVANCED ENGINEERING MATHEMATICS ADVANCED ENGINEERING MATHEMATICS DENNIS G. ZILL Loyola Marymount University MICHAEL R. CULLEN Loyola Marymount University PWS-KENT O I^7 3 PUBLISHING COMPANY E 9 U Boston CONTENTS Preface xiii Parti ORDINARY

More information

AMS527: Numerical Analysis II

AMS527: Numerical Analysis II AMS527: Numerical Analysis II Supplementary Material on Finite Different Methods for Two-Point Boundary-Value Problems Xiangmin Jiao Stony Brook University Xiangmin Jiao Stony Brook University AMS527:

More information

Poisson Solvers. William McLean. April 21, Return to Math3301/Math5315 Common Material.

Poisson Solvers. William McLean. April 21, Return to Math3301/Math5315 Common Material. Poisson Solvers William McLean April 21, 2004 Return to Math3301/Math5315 Common Material 1 Introduction Many problems in applied mathematics lead to a partial differential equation of the form a 2 u +

More information

From Completing the Squares and Orthogonal Projection to Finite Element Methods

Mo Mu. Background: In scientific computing, it is important to start with an appropriate model in order to design effective

Differential equations, comprehensive exam topics and sample questions

Topics covered, ODEs: Chapters -5, 7, from Elementary Differential Equations by Edwards and Penney, 6th edition. Exact solutions

Numerical Analysis of Differential Equations Numerical Solution of Parabolic Equations

Numerical Analysis of Differential Equations Numerical Solution of Parabolic Equations Numerical Analysis of Differential Equations 215 6 Numerical Solution of Parabolic Equations 6 Numerical Solution of Parabolic Equations TU Bergakademie Freiberg, SS 2012 Numerical Analysis of Differential

More information

Applied Numerical Analysis

Applied Numerical Analysis Applied Numerical Analysis Using MATLAB Second Edition Laurene V. Fausett Texas A&M University-Commerce PEARSON Prentice Hall Upper Saddle River, NJ 07458 Contents Preface xi 1 Foundations 1 1.1 Introductory

More information

Chapter 2 Boundary and Initial Data

Chapter 2 Boundary and Initial Data Chapter 2 Boundary and Initial Data Abstract This chapter introduces the notions of boundary and initial value problems. Some operator notation is developed in order to represent boundary and initial value

More information

CS 450 Numerical Analysis. Chapter 9: Initial Value Problems for Ordinary Differential Equations

CS 450 Numerical Analysis. Chapter 9: Initial Value Problems for Ordinary Differential Equations Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information