CS 542G: The Poisson Problem, Finite Differences


Robert Bridson

November 10, 2008

1  The Poisson Problem

At the end last time, we noticed that the gravitational potential has a zero Laplacian except at the mass points, where it is undefined, leading to the possibility of considering it in terms of a partial differential equation (PDE).

Let's first look at what really happens at the singularity, where the potential has an undefined Laplacian: it turns out we can define it using the Dirac delta function. The claim is that ∇²(1/‖x‖) is proportional to the Dirac delta δ(x). We already saw that ∇²(1/‖x‖) = 0 for x ≠ 0, so we just have to check the other defining property of the Dirac delta: that its integral is 1. We'll integrate our Laplacian using the Divergence Theorem [1] to avoid dealing with the unpleasantness of the singularity at the origin. In particular, take Ω to be the volume of the unit sphere centred on the origin, with boundary ∂Ω (the surface of the sphere). Then:

    ∫_Ω ∇²(1/‖x‖) dx = ∫_∂Ω ∇(1/‖x‖) · n̂ dS

Before, we worked out that the gradient here is just −x/‖x‖³, and on the surface of this unit sphere it's also obvious that n̂ = x and ‖x‖ = 1. Therefore ∇(1/‖x‖) = −x, and the dot-product with the outward normal is just −x · x = −1. The integral simplifies to

    ∫_∂Ω (−1) dS

which is just the negative of the surface area of the sphere:

    ∫_Ω ∇²(1/‖x‖) dx = −4π

[1] Remember the Divergence Theorem is just one of the multidimensional generalizations of the Fundamental Theorem of Calculus, which says that the integral of the derivative of a function over an interval is the difference of the function values at the endpoints. For the Divergence Theorem, the integral of the divergence of a vector-valued function g(x) over some volume, ∫_Ω ∇ · g(x) dx, is equal to the boundary integral with the outward normal, ∫_∂Ω g(x) · n̂ dS.
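As a quick numerical sanity check on this computation (my own sketch, not part of the original notes), we can evaluate the boundary integral ∫_∂Ω ∇(1/‖x‖) · n̂ dS by simple quadrature in spherical coordinates and compare against −4π:

```python
import math

# Numerically check the divergence-theorem calculation above: integrate
# grad(1/||x||) . n over the unit sphere and compare with -4*pi.
# (A sanity-check sketch; the function names are my own.)

def grad_inv_norm(x, y, z):
    """Gradient of 1/||x||, i.e. -x / ||x||^3."""
    r3 = (x * x + y * y + z * z) ** 1.5
    return (-x / r3, -y / r3, -z / r3)

def surface_integral(n_theta=200, n_phi=400):
    """Midpoint-rule quadrature of grad(1/||x||) . n over the unit sphere."""
    total = 0.0
    dth = math.pi / n_theta
    dph = 2.0 * math.pi / n_phi
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            # Point on the unit sphere; the outward normal n equals x here.
            x = math.sin(th) * math.cos(ph)
            y = math.sin(th) * math.sin(ph)
            z = math.cos(th)
            gx, gy, gz = grad_inv_norm(x, y, z)
            flux = gx * x + gy * y + gz * z           # integrand (equals -1 here)
            total += flux * math.sin(th) * dth * dph  # dS = sin(theta) dtheta dphi
    return total

result = surface_integral()
print(result, -4.0 * math.pi)
```

On the unit sphere the integrand is identically −1, so the quadrature just recovers minus the surface area, matching the hand calculation.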

Since this integral is a nonzero, finite value, we conclude that we do have something proportional to the Dirac delta:

    ∇²(1/‖x‖) = −4π δ(x)

We can then rephrase the gravity problem as the following PDE:

    ∇²u(x) = −4π ∑_{i=1}^n m_i δ(x − x_i)

    lim_{‖x‖→∞} u(x) = 0    (a so-called boundary condition)

The actual acceleration on a point mass at x due to the n point masses is then G∇u.

This in turn allows us to fairly easily generalize the n-body gravity problem to a continuum problem, where instead of having a finite set of point masses we view mass as spread out continuously through space. Conceptually, we could divide up space into tiny little regions and in each put a point mass with mass equal to the integral of density over that volume; in the limit as the little regions become infinitesimal, we are left with the PDE

    ∇²u(x) = −4π ρ(x)

    lim_{‖x‖→∞} u(x) = 0

where ρ(x) is the mass density at any point in space. We no longer have Dirac deltas or gigantic sums over points (with n large), so in many respects this problem is simpler than the original straightforward point-mass problem!

This is an example of the Poisson problem, a special PDE which at its simplest requires the Laplacian of the unknown function u to equal some specified function (in this case −4πρ(x)). The Poisson problem in different guises comes up in a ridiculous number of different applications: gravity, electrostatics (just replace mass with charge in the above!), many different parts of fluid mechanics, heat diffusion problems, image processing, geometry processing, ... The special case where the specified function is zero comes up in enough places that it gets its own name, the Laplace equation.

A typical PDE such as the Poisson or Laplace problem also needs boundary conditions to be well-posed. In the case of gravity, this took the form of requiring the potential u to decay to zero out at infinity.
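The continuum limit described above can be checked directly (a sketch under my own choices of test case and names, not from the notes): approximate a uniform-density unit ball by one point mass per small grid cell, sum the point-mass potentials at an exterior point, and compare with the exact continuum potential M/r given by the shell theorem.

```python
import math

# Approximate a unit ball of uniform density rho = 1 by one point mass per
# grid cell (mass = density * cell volume), and compare the summed potential
# at an outside point with the exact continuum value M / r, where M = 4*pi/3
# is the total mass. (An illustrative sketch, not code from the notes.)

def potential_from_cells(h, eval_point):
    """Sum m_c / ||x - x_c|| over cells of size h whose centres lie in the
    unit ball, with m_c = rho * h^3 and rho = 1."""
    n = int(round(2.0 / h))          # cells spanning [-1, 1] along each axis
    px, py, pz = eval_point
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        for j in range(n):
            y = -1.0 + (j + 0.5) * h
            for k in range(n):
                z = -1.0 + (k + 0.5) * h
                if x * x + y * y + z * z <= 1.0:   # cell centre inside the ball
                    d = math.sqrt((px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2)
                    total += h ** 3 / d            # m_c = rho * h^3, rho = 1
    return total

approx = potential_from_cells(0.1, (2.0, 0.0, 0.0))
exact = (4.0 * math.pi / 3.0) / 2.0                # M / r with r = 2
print(approx, exact)
```

As the cells shrink, the discrete sum converges to the continuum potential, which is the limit the PDE formulation captures.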
More commonly, the PDE is only satisfied on some bounded domain Ω of space, and the boundary conditions arise at the boundary ∂Ω of that region. For example, in image processing the domain is probably the rectangle of the image, and the boundary is the outer edge of that rectangle. Usually either the value of the function is specified there, called a Dirichlet condition (our gravity problem essentially has a Dirichlet condition), or the normal component of the gradient is specified, called a Neumann condition. They could even be mixed:

    ∇²u(x) = f    in Ω

    u(x) = g    for x ∈ ∂Ω_D, the Dirichlet part of ∂Ω

    ∇u(x) · n̂ = h    for x ∈ ∂Ω_N, the Neumann part of ∂Ω

Other more complicated, possibly nonlinear, boundary conditions occasionally crop up, but we won't get into them. In this course we'll focus on just the Dirichlet case, which in some ways is the simplest.

Getting back to gravity, this provides us with an alternative method of approximating gravitational acceleration, and extends numerical methods to the continuum case. We split up space into a grid or some other nice division; estimate the density of matter ρ in each grid cell (perhaps just by summing the masses of the points in each cell and dividing by the volume of the cell); then approximately solve the PDE on that grid. This can get around having to deal with unstructured point sets (we transfer the problem to a nice grid of our own choosing instead) and the whole business of building trees for Barnes-Hut etc. This can be thought of as a Particle-in-Cell method: we start with particles (mass points), transfer their properties (mass) to a grid, solve a problem on the grid, then use the result to update the particles (find the gradient to get their accelerations). We mentioned the Fast Multipole Method before as a generalization of Barnes-Hut: one of the other tricks commonly used with FMM is exactly this sort of thing, solving the problem of interacting distant clusters on a regular grid instead of on unstructured trees.

Going the other way, the derivation from gravity gives us another tool: if we have a Poisson problem from some other application, we could solve it by approximating the right-hand side of the PDE with a sum over a set of n chosen mass points, and then use something like Barnes-Hut to get an approximate answer efficiently. This sidesteps the need to create a grid or mesh, which can sometimes be painful.
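The mass-to-grid transfer step of the Particle-in-Cell idea above can be sketched in a few lines (the flat-array grid layout and the function name here are illustrative assumptions, not from the notes):

```python
# Sketch of the particle-to-grid transfer step described above: estimate the
# density rho in each cell of a regular grid by summing the masses of the
# particles that land in the cell and dividing by the cell volume.
# (Grid layout and names are illustrative assumptions, not from the notes.)

def density_on_grid(points, masses, n, dx):
    """points: list of (x, y, z) in [0, n*dx)^3; masses: list of floats.
    Returns a flat n*n*n list of cell densities."""
    rho = [0.0] * (n * n * n)
    vol = dx ** 3
    for (x, y, z), m in zip(points, masses):
        i = min(int(x / dx), n - 1)   # clamp to guard against x == n*dx
        j = min(int(y / dx), n - 1)
        k = min(int(z / dx), n - 1)
        rho[(i * n + j) * n + k] += m / vol
    return rho

# Two particles of mass 1.0 and 3.0 land in the same cell of a 4^3 grid
# with dx = 0.5, so that cell's density is (1 + 3) / 0.5^3 = 32.
rho = density_on_grid([(0.1, 0.1, 0.1), (0.2, 0.3, 0.4)], [1.0, 3.0], 4, 0.5)
```

Summing density times cell volume over all cells recovers the total particle mass, which is the property the transfer is meant to conserve.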
2  Finite Differences

Now that we've argued it can be worthwhile to directly approximate the solution of a PDE, we're left with the question of how to do it. The simplest tool at our disposal is something called finite differences, which are derived directly from Taylor series.

We begin by laying down a regular Cartesian grid covering the domain. For the moment we'll assume the domain is in fact rectangular, but there are ways to work around the cases where it's not, which we'll mention later in passing. Through an appropriate translation and/or rotation of coordinates, the coordinates of grid point (i, j, k) will be x_{ijk} = (iΔx, jΔx, kΔx), where Δx is the grid spacing (which

I'm assuming is equal on each axis, so grid cells are cubes). We'll store approximate values of the solution u at these grid points, letting us represent u in a 3D array, for example; we can always use interpolation to estimate u between grid points. We'll call u_{ijk} the approximate value at x_{ijk}.

(Aside on terminology: this process of taking a continuous function in the original PDE and replacing it with a finite set of discrete values is called discretization. The term also refers to the whole process that follows of taking the continuous PDE and replacing it with a finite set of discrete equations.)

Writing the Laplacian operator out with partial derivatives, the Poisson problem is:

    ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = f

plus boundary conditions. We need some way to approximate these derivatives just from the values of u on the grid. Let's focus on the first term, the second derivative of u with respect to x, and forget about the other dimensions for the time being.

As our first try, let's estimate the first derivative u′ in 1D, from grid points u_i = u(x_i) = u(iΔx). Taylor series show:

    u_{i+1} = u_i + u_i′ Δx + (1/2) u_i″ Δx² + O(Δx³)

Rearranging this and simplifying the error term gives:

    u_i′ = (u_{i+1} − u_i)/Δx + O(Δx)

This is a finite difference: the quantity (u_{i+1} − u_i)/Δx, which only depends on grid values, is a first-order accurate estimate of the first derivative at x_i. We say first-order because the exponent of Δx in the O(Δx) error term is one. A similar Taylor series,

    u_{i−1} = u_i + u_i′ (−Δx) + (1/2) u_i″ (−Δx)² + O(Δx³)

shows that another first-order finite difference is:

    u_i′ = (u_i − u_{i−1})/Δx + O(Δx)

Note that both differences we've seen are one-sided: we used the value of u at x_i and to one side, either u_{i+1} or u_{i−1}, to approximate the derivative at x_i. We can get somewhat higher accuracy with a central finite difference, where we use values from both sides to estimate the derivative in the centre. Subtracting the two Taylor series above gives:

    u_{i+1} − u_{i−1} = 2 u_i′ Δx + O(Δx³)

with the second-order term cancelling out (it's easy to see the third-order term unfortunately doesn't cancel). This cancellation of terms is common with central differences, usually lending them higher accuracy. Rearranging, we get a more accurate approximation of the first derivative:

    u_i′ = (u_{i+1} − u_{i−1})/(2Δx) + O(Δx²)

This is second-order accurate, since the exponent in the error term is two.

Let's now turn to estimating the second derivative, u″. Adding the two Taylor series from above gives a different cancellation:

    u_{i+1} + u_{i−1} = 2 u_i + u_i″ Δx² + O(Δx⁴)

We haven't written it out, but the third-order terms (in fact, all the odd-order terms) cancel as well, giving the O(Δx⁴) remainder here. Rearranging this gives:

    u_i″ = (u_{i+1} − 2 u_i + u_{i−1})/Δx² + O(Δx²)

This is a second-order accurate central finite difference estimate.

Finally, putting this together in multiple dimensions, we get an approximation for the Poisson problem at grid point x_{ijk}:

    (u_{i+1,j,k} − 2u_{ijk} + u_{i−1,j,k})/Δx² + (u_{i,j+1,k} − 2u_{ijk} + u_{i,j−1,k})/Δx² + (u_{i,j,k+1} − 2u_{ijk} + u_{i,j,k−1})/Δx² = f_{ijk} + O(Δx²)

That is, if we take the exact solution and plug it into this finite difference equation, we're left with an error term of O(Δx²). In the limit as we refine the grid, letting Δx → 0, the error goes to zero: we call such a method consistent.

The finite difference method for solving the PDE turns this around. We'll set

    (u_{i+1,j,k} − 2u_{ijk} + u_{i−1,j,k})/Δx² + (u_{i,j+1,k} − 2u_{ijk} + u_{i,j−1,k})/Δx² + (u_{i,j,k+1} − 2u_{ijk} + u_{i,j,k−1})/Δx² = f_{ijk}

without the O(Δx²) error term, as a set of equations that our unknown grid values of u must satisfy at every grid point, and hope that the discrete solution is an accurate approximation to the true solution. Right now this is just a hope: the fact that the true solution approximately satisfies the discrete equations does not imply the discrete solution has to approximate the true solution, though next time we will prove this is indeed the case for this particular scheme.

We said this equation should hold at every grid point, i.e. for every i, j, k.
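The error orders derived above can be seen concretely (a quick sketch; the choice of u = sin x as the test function is my own): halving Δx should roughly halve the one-sided difference's error and quarter the central second-difference's error.

```python
import math

# Empirically check the convergence orders derived above, using u = sin
# (so u' = cos and u'' = -sin) as an arbitrary smooth test function at x = 1.
# Halving dx should roughly halve the one-sided error (first order) and
# quarter the central second-difference error (second order).

def forward_diff_error(dx, x=1.0):
    approx = (math.sin(x + dx) - math.sin(x)) / dx
    return abs(approx - math.cos(x))

def central_second_diff_error(dx, x=1.0):
    approx = (math.sin(x + dx) - 2.0 * math.sin(x) + math.sin(x - dx)) / dx ** 2
    return abs(approx - (-math.sin(x)))

dx = 1e-2
r1 = forward_diff_error(dx) / forward_diff_error(dx / 2)              # ~2, first order
r2 = central_second_diff_error(dx) / central_second_diff_error(dx / 2)  # ~4, second order
print(r1, r2)
```

The ratios land near 2 and 4 respectively, matching the O(Δx) and O(Δx²) error terms from the Taylor series.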
However, we'll obviously have a bit of a problem at the edges of the grid. If i = 0, for example, the equation refers to u_{i−1,j,k}, which doesn't lie in the grid anymore. It's here that the boundary conditions enter into the discretization. For Dirichlet conditions, at least treated in the simplest possible way, we simply replace every instance of a

discrete u which is not in the actual domain (either off the edge of the grid, or in the grid but not in the domain because the domain isn't rectangular) with the value prescribed to it by the Dirichlet boundary condition. We only write down equations at grid points where we have an unknown variable to solve for. This gives one equation for each unknown, and inspection of the equation above shows they are all linear equations. This boils down to a linear system, which we again hope will be solvable: next time we'll try to establish this properly.
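To make this concrete, here is a minimal 1D version of the whole setup (my own sketch, not code from the notes): the stencil equations with Dirichlet values moved to the right-hand side form a tridiagonal linear system, which the standard Thomas algorithm solves directly. With f = 2 and boundary values u(0) = 0, u(1) = 1, the exact solution u = x² is a quadratic, so the second-order stencil reproduces it up to roundoff.

```python
# Minimal 1D illustration of the discretization described above: the equation
# (u[i+1] - 2 u[i] + u[i-1]) / dx^2 = f[i] at interior grid points, with the
# Dirichlet values at the two boundary points moved to the right-hand side.
# Solved directly with the Thomas (tridiagonal) algorithm.
# (A sketch of the idea; the names are my own.)

def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub/diag/sup diagonals)."""
    n = len(rhs)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def poisson_1d(n, f, u_left, u_right):
    """Solve u'' = f on (0, 1) with u(0) = u_left, u(1) = u_right,
    using n cells, i.e. n - 1 interior unknowns."""
    dx = 1.0 / n
    m = n - 1
    sub = [1.0] * m
    diag = [-2.0] * m
    sup = [1.0] * m
    rhs = [dx * dx * f(i * dx) for i in range(1, n)]
    rhs[0] -= u_left        # known boundary values move to the right-hand side
    rhs[-1] -= u_right
    return solve_tridiagonal(sub, diag, sup, rhs)

# f = 2 with u(0) = 0, u(1) = 1 has exact solution u = x^2; since that's a
# quadratic, the three-point stencil reproduces it exactly (up to roundoff).
u = poisson_1d(10, lambda x: 2.0, 0.0, 1.0)
err = max(abs(u[i] - ((i + 1) * 0.1) ** 2) for i in range(9))
print(err)
```

In 2D or 3D the same assembly produces a sparse system that is no longer tridiagonal, which is where the solvers discussed next time come in.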