Lecture 5: Finite Difference Methods for Differential Equations

1 Application: Boundary Value Problems

Example 1 (Dirichlet boundary conditions). Suppose we want to solve the differential equation

    u''(x) = g(x),    x ∈ [0, 1],

for the unknown function u, subject to the Dirichlet boundary conditions

    u(0) = u(1) = 0.

One common approach to such problems is to approximate the solution u on a uniform grid of points

    0 = x_0 < x_1 < ··· < x_n = 1,

with x_j = j/n and spacing h = 1/n. We seek to approximate the solution u(x) at each of the grid points x_0, ..., x_n. The Dirichlet boundary conditions give the end values immediately:

    u(x_0) = u(x_n) = 0.

At each of the interior grid points we require a local approximation of the equation

    u''(x_j) = g(x_j),    j = 1, ..., n − 1.

For each, we will (implicitly) construct the quadratic interpolant p_j to u(x) at the points x_{j−1}, x_j, and x_{j+1}, and then approximate

    p_j''(x_j) ≈ u''(x_j) = g(x_j).

Aside from some index shifting, we have already constructed p_j'' in the previous lecture:

    p_j''(x) = (u(x_{j−1}) − 2u(x_j) + u(x_{j+1})) / h².

Just one small caveat remains: we cannot construct p_j''(x), because we do not know the values of u(x_{j−1}), u(x_j), and u(x_{j+1}): finding those values is the point of our entire endeavor. Thus we define approximate values u_j ≈ u(x_j), j = 1, ..., n − 1, and will instead use the polynomial p̂_j that interpolates u_{j−1}, u_j, and u_{j+1}, giving

(1)    p̂_j''(x) = (u_{j−1} − 2u_j + u_{j+1}) / h².
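Formula (1) is the classic centered second difference. As a quick numerical sanity check (my addition, not part of the lecture), the following Python sketch applies the quotient to a function with a known second derivative and shows the error shrinking like h²:

    import numpy as np

    # Centered second difference: (u(x-h) - 2u(x) + u(x+h)) / h^2
    def second_difference(u, x, h):
        return (u(x - h) - 2.0 * u(x) + u(x + h)) / h**2

    u = np.sin             # test function; u''(x) = -sin(x)
    exact = -np.sin(0.5)   # exact second derivative at x = 0.5

    for h in [1e-1, 1e-2, 1e-3]:
        err = abs(second_difference(u, 0.5, h) - exact)
        print(f"h = {h:.0e}   error = {err:.2e}")

Each tenfold reduction of h cuts the error by roughly a factor of 100, consistent with O(h²).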

Let us accumulate all our equations for j = 0, ..., n:

    u_0 = 0
    (u_0 − 2u_1 + u_2) / h² = g(x_1)
    (u_1 − 2u_2 + u_3) / h² = g(x_2)
        ⋮
    (u_{n−3} − 2u_{n−2} + u_{n−1}) / h² = g(x_{n−2})
    (u_{n−2} − 2u_{n−1} + u_n) / h² = g(x_{n−1})
    u_n = 0.

Notice that this is a system of n + 1 linear equations in the n + 1 variables u_0, ..., u_n. Thus we can arrange it in matrix form as

(2)    [ 1               ] [ u_0     ]   [ 0             ]
       [ 1 −2  1         ] [ u_1     ]   [ h² g(x_1)     ]
       [    1 −2  1      ] [ u_2     ] = [ h² g(x_2)     ]
       [      ⋱  ⋱  ⋱    ] [  ⋮      ]   [  ⋮            ]
       [        1 −2  1  ] [ u_{n−1} ]   [ h² g(x_{n−1}) ]
       [              1  ] [ u_n     ]   [ 0             ]

where the blank entries are zero. Notice that the first and last equations are trivial, u_0 = u_n = 0, and so we can trim them off to yield the slightly simpler system

(3)    [ −2  1          ] [ u_1     ]   [ h² g(x_1)     ]
       [  1 −2  1       ] [ u_2     ]   [ h² g(x_2)     ]
       [     ⋱  ⋱  ⋱    ] [  ⋮      ] = [  ⋮            ]
       [        1 −2  1 ] [ u_{n−2} ]   [ h² g(x_{n−2}) ]
       [           1 −2 ] [ u_{n−1} ]   [ h² g(x_{n−1}) ]

Solve this (n − 1) × (n − 1) linear system of equations using Gaussian elimination. One can show that the approximate solution inherits the accuracy of the interpolant: the error |u(x_j) − u_j| behaves like O(h²) as h → 0. Ideally, use an efficient version of Gaussian elimination that exploits the banded structure of this matrix to give a solution in O(n) operations.
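In Python, one might implement Example 1 as in the following sketch (my code, not the lecture's; the scipy banded solver performs the O(n) structure-exploiting elimination mentioned above):

    import numpy as np
    from scipy.linalg import solve_banded

    def dirichlet_bvp(g, n):
        """Solve u'' = g on [0,1] with u(0) = u(1) = 0 via system (3)."""
        h = 1.0 / n
        x = np.linspace(0.0, 1.0, n + 1)

        # Banded storage: row 0 = superdiagonal, row 1 = main diagonal, row 2 = subdiagonal.
        ab = np.zeros((3, n - 1))
        ab[0, 1:] = 1.0
        ab[1, :] = -2.0
        ab[2, :-1] = 1.0

        rhs = h**2 * g(x[1:-1])          # h^2 g(x_j) at the interior points
        u = np.zeros(n + 1)              # u_0 = u_n = 0 from the boundary conditions
        u[1:-1] = solve_banded((1, 1), ab, rhs)
        return x, u

    # Test: g(x) = -pi^2 sin(pi x) has exact solution u(x) = sin(pi x).
    x, u = dirichlet_bvp(lambda t: -np.pi**2 * np.sin(np.pi * t), n=100)
    print(np.max(np.abs(u - np.sin(np.pi * x))))   # error ~ O(h^2)

The test problem is my choice; any g whose exact solution is known would serve equally well.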

Example 2 (Mixed boundary conditions). Modify the last example to keep the same differential equation,

    u''(x) = g(x),    x ∈ [0, 1],

but now use the mixed boundary conditions

    u'(0) = 0,    u(1) = 0.

The derivative condition on the left is the only modification; we must change the first row of equation (2) to encode this condition. One might be tempted to use a simple linear interpolant to approximate the boundary condition on the left side, adapting the first-order difference formula from the previous lecture to give

(4)    (u_1 − u_0) / h = 0.

This equation makes intuitive sense: it forces u_1 = u_0, so the approximate solution will have zero slope on its left end. This gives the n × n system

(5)    [ −1  1          ] [ u_0     ]   [ 0             ]
       [  1 −2  1       ] [ u_1     ]   [ h² g(x_1)     ]
       [     ⋱  ⋱  ⋱    ] [  ⋮      ] = [  ⋮            ]
       [        1 −2  1 ] [ u_{n−2} ]   [ h² g(x_{n−2}) ]
       [           1 −2 ] [ u_{n−1} ]   [ h² g(x_{n−1}) ]

where we have trimmed off the elements associated with u_n = 0. The approximation (1) of the second derivative is accurate up to O(h²), whereas the estimate (4) of u'(0) is only O(h) accurate. Will it matter if we compromise accuracy that little bit, if only in one of the n equations in (5)?

What if instead we approximate u'(0) to second-order accuracy? The previous lecture derived three formulas that approximate the first derivative to second order. Which one is appropriate in this setting? The right-looking formula gives the approximation

(6)    u'(0) ≈ (−3u_0 + 4u_1 − u_2) / (2h),

which involves the variables u_0, u_1, and u_2 that we are already considering. In contrast, the centered formula needs an estimate of u(−h), and the left-looking formula needs u(−h) and u(−2h). Since these values of u fall outside the domain [0, 1] of u, the centered and left-looking formulas would not work.
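To confirm that (6) really is second-order accurate, here is the standard Taylor-expansion check (my addition):

    % Expand u(h) and u(2h) about x = 0:
    %   u(h)  = u(0) +  h u'(0) + \tfrac{h^2}{2} u''(0) + O(h^3),
    %   u(2h) = u(0) + 2h u'(0) +        2h^2   u''(0) + O(h^3),
    % so the u(0) and u''(0) terms cancel exactly:
    \frac{-3u(0) + 4u(h) - u(2h)}{2h}
      = \frac{2h\,u'(0) + O(h^3)}{2h}
      = u'(0) + O(h^2).

By contrast, the analogous expansion for (4) leaves an uncancelled h u''(0)/2 term, which is why (4) is only O(h) accurate.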

Combining the right-looking formula (6) with the boundary condition u'(0) = 0 gives

    −3u_0 + 4u_1 − u_2 = 0,

with which we replace the first row of (5) to obtain

(7)    [ −3  4 −1       ] [ u_0     ]   [ 0             ]
       [  1 −2  1       ] [ u_1     ]   [ h² g(x_1)     ]
       [     ⋱  ⋱  ⋱    ] [  ⋮      ] = [  ⋮            ]
       [        1 −2  1 ] [ u_{n−2} ]   [ h² g(x_{n−2}) ]
       [           1 −2 ] [ u_{n−1} ]   [ h² g(x_{n−1}) ]

Is this O(h²) accurate approach at the boundary worth the (rather minimal) extra effort? Let us investigate with an example. Set the right-hand side of the differential equation to

    g(x) = cos(πx/2),

which corresponds to the exact solution

    u(x) = −(4/π²) cos(πx/2).

Verify that u satisfies the boundary conditions u'(0) = 0 and u(1) = 0. Figure 1 compares the solutions obtained by solving (5) and (7) for a small value of n. (Indeed, we use a small value of n because it is difficult to see the difference between the exact solution and the approximation from (7) for larger n.) Clearly the simple adjustment that gave the O(h²) approximation to u'(0) makes quite a difference! This figure shows that the solutions from (5) and (7) differ, but plots like this are not the best way to understand how the approximations compare as n → ∞. Instead, compute the maximum error at the interpolation points,

    max_{0 ≤ j ≤ n} |u(x_j) − u_j|,

for various values of n.

[Figure 1: Approximate solutions to u''(x) = cos(πx/2) with u'(0) = 0, u(1) = 0. The black curve shows the exact solution u(x). The red approximation is obtained by solving (5), which uses the O(h) approximation of u'(0); the blue approximation is from (7), with the O(h²) approximation of u'(0). Both approximations use the same small value of n.]
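The experiment behind Figures 1 and 2 is easy to reproduce. The sketch below (my reconstruction, not the lecture's original code) assembles (5) or (7), solves with a dense solver for simplicity, and reports the maximum nodal error:

    import numpy as np

    g = lambda x: np.cos(np.pi * x / 2)                          # right-hand side
    u_exact = lambda x: -(4 / np.pi**2) * np.cos(np.pi * x / 2)  # exact solution

    def mixed_bvp(n, second_order_bc):
        """Solve u'' = g on [0,1], u'(0) = 0, u(1) = 0; unknowns u_0, ..., u_{n-1}."""
        h = 1.0 / n
        x = np.linspace(0.0, 1.0, n + 1)
        A = np.zeros((n, n))
        b = np.zeros(n)
        if second_order_bc:
            A[0, 0:3] = [-3.0, 4.0, -1.0]    # first row of (7): O(h^2) boundary row
        else:
            A[0, 0:2] = [-1.0, 1.0]          # first row of (5): O(h) boundary row
        for j in range(1, n):                # interior rows from (1), with u_n = 0
            A[j, j - 1] = 1.0
            A[j, j] = -2.0
            if j + 1 < n:
                A[j, j + 1] = 1.0
            b[j] = h**2 * g(x[j])
        u = np.append(np.linalg.solve(A, b), 0.0)    # append u_n = 0
        return np.max(np.abs(u - u_exact(x)))        # max nodal error

    for n in [16, 32, 64, 128, 256]:
        print(n, mixed_bvp(n, False), mixed_bvp(n, True))

Doubling n should roughly halve the first column of errors and quarter the second. (A banded solver would still work, with upper bandwidth 2 because of the first row of (7); a dense solve just keeps the sketch short.)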

[Figure 2: Convergence of approximate solutions to u''(x) = cos(πx/2) with u'(0) = 0, u(1) = 0, plotting the maximum error over the grid points against n. The red line shows the approximation from (5); it converges like O(h) as h → 0. The blue line shows the approximation from (7), which converges like O(h²).]

Figure 2 shows the results of such experiments for a wide range of n. Notice that this figure is a log-log plot; on such a scale the errors fall on straight lines, and from the slopes of these lines one can determine the order of convergence. The slope of the red line is −1, so the approximations from (5) are O(n⁻¹) = O(h) accurate. The slope of the blue line is −2, so (7) gives an O(n⁻²) = O(h²) accurate approximation. How large would n need to be for the O(h) approximation to attain the accuracy that the O(h²) approximation delivers at the largest n shown? Extrapolation of the red line suggests we would need roughly n ≈ 10⁸.

This example illustrates a general lesson: when constructing finite difference approximations to differential equations, one must ensure that the approximations to the boundary conditions have the same order of accuracy as the approximation of the differential equation itself. Such boundary formulas can be nicely constructed from derivatives of polynomial interpolants of appropriate degree.
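A footnote on that extrapolation (my arithmetic, under the assumption, consistent with the plot, that each error curve follows its observed slope):

    % If the first-order error is e_1(n) \approx C_1 n^{-1} and the
    % second-order error is e_2(n) \approx C_2 n^{-2}, then matching
    % e_1(n) = e_2(N) requires
    C_1 n^{-1} = C_2 N^{-2}
    \quad\Longrightarrow\quad
    n = \frac{C_1}{C_2}\, N^{2}.

Up to the constant C₁/C₂, the first-order method needs the square of the second-order method's grid size: matching an O(h²) run with N = 10⁴, say, would demand n on the order of 10⁸.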