NUMERICAL DIFFERENTIATION


1 Introduction

Differentiation is a method to compute the rate at which a dependent output y changes with respect to a change in the independent input x. This rate of change is called the derivative of y with respect to x. In more precise language, the dependence of y upon x means that y is a function of x. This functional relationship is often denoted y = f(x), where f denotes the function. If x and y are real numbers, and if the graph of y is plotted against x, the derivative measures the slope of this graph at each point.

When the functional dependence is given as a simple mathematical expression, the derivative can be determined analytically. When analytical differentiation of the expression is difficult or not possible, numerical differentiation has to be used. When the functional dependence is specified as a set of discrete points, differentiation is performed using a numerical method.

For a given set of points, two approaches can be used to calculate a numerical approximation of the derivative at one of the points.

Finite difference approximation: In this approach we approximate the derivative from the values of points in the neighborhood of the point. The accuracy of a finite difference approximation depends on the accuracy of the data points, the spacing between the points, and the specific formula used for the approximation. For example, the first derivative at x_i can be approximated by the slope of the line connecting the points adjacent to x_i. This approximation is called the two-point central difference approximation:

$$\left.\frac{df}{dx}\right|_{x=x_i} = \frac{f(x_{i+1}) - f(x_{i-1})}{x_{i+1} - x_{i-1}}$$

Two additional two-point finite difference approximations are:

Two-point forward difference approximation, where the first derivative at x_i is approximated by the slope of the line connecting x_i and x_{i+1}:

$$\left.\frac{df}{dx}\right|_{x=x_i} = \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i}$$

Two-point backward difference approximation, where the first derivative at x_i is approximated by the slope of the line connecting x_{i-1} and x_i:

$$\left.\frac{df}{dx}\right|_{x=x_i} = \frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}}$$

The second approach is to approximate the points with an analytical expression that can be easily differentiated, and then to calculate the derivative by differentiating the analytical expression. The approximate analytical expression can be derived using curve fitting.

Note: When the measured data contain scatter, using one of the two-point finite difference approximations may be erroneous, and better results may be obtained with a higher-order finite difference approximation (as discussed later in this chapter) or with a curve fit. This eliminates the problem of wrongly amplified slopes between successive points.
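As a minimal illustration of the three two-point formulas above, the sketch below (the helper name two_point_derivatives is illustrative; it assumes the samples are stored in NumPy arrays and that the point of interest has a neighbor on each side) evaluates the approximations directly from the data:

import numpy as np

def two_point_derivatives(x, y, i):
    """Two-point finite difference estimates of dy/dx at index i.
    x, y are arrays of sample points; x need not be evenly spaced."""
    forward  = (y[i + 1] - y[i]) / (x[i + 1] - x[i])          # uses x_i and x_{i+1}
    backward = (y[i] - y[i - 1]) / (x[i] - x[i - 1])          # uses x_{i-1} and x_i
    central  = (y[i + 1] - y[i - 1]) / (x[i + 1] - x[i - 1])  # uses x_{i-1} and x_{i+1}
    return forward, backward, central

# Example: samples of f(x) = sin(x); the exact derivative at x[2] is cos(x[2]).
x = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
y = np.sin(x)
print(two_point_derivatives(x, y, 2), np.cos(x[2]))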

2 Finite difference formulas using Taylor series expansion

2.1 Taylor series expansion of a function

The Taylor series expansion of a function is a way to find the value of a function near a known point, that is, a point where the value of the function is known. The function is represented by a sum of terms of a convergent series. In some cases (if the function is a polynomial), the Taylor series can give the exact value of the function. In most cases, however, a sum of an infinite number of terms is required for the exact value. If only a few terms are used, the value of the function obtained from the Taylor series is an approximation.

Given a function that is differentiable (n+1) times in an interval containing a point x_0, Taylor's theorem states that for each x in the interval, there exists a value η between x and x_0 such that:

$$f(x) = f(x_0) + (x - x_0)\left.\frac{df}{dx}\right|_{x=x_0} + \frac{(x - x_0)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x=x_0} + \dots + \frac{(x - x_0)^n}{n!}\left.\frac{d^n f}{dx^n}\right|_{x=x_0} + R_n(\eta)$$

where the remainder $R_n(\eta)$ is given by

$$R_n(\eta) = \frac{(x - x_0)^{n+1}}{(n+1)!}\left.\frac{d^{n+1} f}{dx^{n+1}}\right|_{x=\eta}$$

2.2 Finite difference formulas of first derivatives

Several finite difference formulas, including the two-point finite difference approximations, can be derived from the Taylor series expansion of the function. These approximations vary in the number of points involved, the spacing between the points, and the accuracy of the measured values. We assume the spacing between adjacent points to be fixed and equal to h.

Two-point forward difference approximation

$$f(x_{i+1}) = f(x_i) + (x_{i+1} - x_i)\left.\frac{df}{dx}\right|_{x=x_i} + \frac{(x_{i+1} - x_i)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x=\eta}$$

$$f(x_{i+1}) = f(x_i) + h f'(x_i) + \frac{h^2}{2!} f''(\eta)$$

$$f'(x_i) = \frac{f(x_{i+1}) - f(x_i)}{h} - \frac{h}{2!} f''(\eta)$$

If the second term is ignored, the previous expression reduces to the two-point forward difference approximation.

Ignoring the second term introduces a truncation error that is proportional to h. The truncation error is said to be on the order of h:

$$\text{truncation error} = -\frac{h}{2!} f''(\eta) = O(h)$$

Two-point backward difference approximation

$$f(x_{i-1}) = f(x_i) + (x_{i-1} - x_i)\left.\frac{df}{dx}\right|_{x=x_i} + \frac{(x_{i-1} - x_i)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x=\eta}$$

$$f(x_{i-1}) = f(x_i) - h f'(x_i) + \frac{h^2}{2!} f''(\eta)$$

$$f'(x_i) = \frac{f(x_i) - f(x_{i-1})}{h} + \frac{h}{2!} f''(\eta)$$

If the second term is ignored, the previous expression reduces to the two-point backward finite difference approximation. Ignoring the second term introduces a truncation error that is proportional to h. The truncation error is said to be on the order of h:

$$\text{truncation error} = \frac{h}{2!} f''(\eta) = O(h)$$

Two-point central difference approximation

$$f(x_{i+1}) = f(x_i) + (x_{i+1} - x_i)\left.\frac{df}{dx}\right|_{x=x_i} + \frac{(x_{i+1} - x_i)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x=x_i} + \frac{(x_{i+1} - x_i)^3}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x=\eta_1}$$

$$f(x_{i-1}) = f(x_i) + (x_{i-1} - x_i)\left.\frac{df}{dx}\right|_{x=x_i} + \frac{(x_{i-1} - x_i)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x=x_i} + \frac{(x_{i-1} - x_i)^3}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x=\eta_2}$$

With a fixed spacing h these become:

$$f(x_{i+1}) = f(x_i) + h f'(x_i) + \frac{h^2}{2!} f''(x_i) + \frac{h^3}{3!} f'''(\eta_1)$$

$$f(x_{i-1}) = f(x_i) - h f'(x_i) + \frac{h^2}{2!} f''(x_i) - \frac{h^3}{3!} f'''(\eta_2)$$

Subtracting the second equation from the first and solving for $f'(x_i)$ gives:

$$f'(x_i) = \frac{f(x_{i+1}) - f(x_{i-1})}{2h} - \frac{h^2}{2 \cdot 3!}\left[f'''(\eta_1) + f'''(\eta_2)\right]$$

If the second term is ignored, the previous expression reduces to the two-point central finite difference approximation. Ignoring the second term introduces a truncation error that is proportional to h^2. The truncation error is said to be on the order of h^2:

$$\text{truncation error} = -\frac{h^2}{2 \cdot 3!}\left[f'''(\eta_1) + f'''(\eta_2)\right] = O(h^2)$$

A comparison between the last three approximations shows that for small h, the central difference approximation gives a better approximation than the forward or backward approximations. The central difference approximation is useful only for interior points and not for the end points x_1 or x_n.
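The difference in error order can be checked numerically. The following sketch, assuming f(x) = e^x as a test function so that the exact derivative is known, halves the step size repeatedly; the forward difference error shrinks roughly in proportion to h, while the central difference error shrinks roughly in proportion to h^2:

import numpy as np

f, dfdx, x0 = np.exp, np.exp, 1.0  # test function and its exact derivative

for h in [0.1, 0.05, 0.025]:
    forward = (f(x0 + h) - f(x0)) / h             # O(h)
    central = (f(x0 + h) - f(x0 - h)) / (2 * h)   # O(h^2)
    print(f"h={h:.3f}  forward error={abs(forward - dfdx(x0)):.2e}  "
          f"central error={abs(central - dfdx(x0)):.2e}")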

Three-point forward difference approximation

$$f(x_{i+1}) = f(x_i) + h f'(x_i) + \frac{h^2}{2!} f''(x_i) + \frac{h^3}{3!} f'''(\eta_1)$$

$$f(x_{i+2}) = f(x_i) + 2h f'(x_i) + \frac{(2h)^2}{2!} f''(x_i) + \frac{(2h)^3}{3!} f'''(\eta_2)$$

Multiplying the first equation by 4, subtracting the second equation to eliminate $f''(x_i)$, and solving for $f'(x_i)$ gives:

$$f'(x_i) = \frac{-3f(x_i) + 4f(x_{i+1}) - f(x_{i+2})}{2h} - \frac{2h^2}{3!} f'''(\eta_1) + \frac{4h^2}{3!} f'''(\eta_2)$$

If the error terms are ignored, the previous expression reduces to the three-point forward finite difference approximation. Ignoring these terms introduces a truncation error that is proportional to h^2. The truncation error is said to be on the order of h^2:

$$\text{truncation error} = -\frac{2h^2}{3!} f'''(\eta_1) + \frac{4h^2}{3!} f'''(\eta_2) = O(h^2)$$

The three-point forward difference approximation is only useful for points i = 1 to i = n-2.

Three-point backward difference approximation

$$f(x_{i-1}) = f(x_i) - h f'(x_i) + \frac{h^2}{2!} f''(x_i) - \frac{h^3}{3!} f'''(\eta_1)$$

$$f(x_{i-2}) = f(x_i) - 2h f'(x_i) + \frac{(2h)^2}{2!} f''(x_i) - \frac{(2h)^3}{3!} f'''(\eta_2)$$

$$f'(x_i) = \frac{3f(x_i) - 4f(x_{i-1}) + f(x_{i-2})}{2h} - \frac{2h^2}{3!} f'''(\eta_1) + \frac{4h^2}{3!} f'''(\eta_2)$$

If the error terms are ignored, the previous expression reduces to the three-point backward finite difference approximation. Ignoring these terms introduces a truncation error that is proportional to h^2. The truncation error is said to be on the order of h^2:

$$\text{truncation error} = -\frac{2h^2}{3!} f'''(\eta_1) + \frac{4h^2}{3!} f'''(\eta_2) = O(h^2)$$

The three-point backward difference approximation is only useful for points i = 3 to i = n.

The same approach can be used to derive the four-point central difference approximation:

$$f'(x_i) = \frac{f(x_{i-2}) - 8f(x_{i-1}) + 8f(x_{i+1}) - f(x_{i+2})}{12h} + O(h^4)$$
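As a sketch of how these higher-order formulas are applied on an evenly spaced grid (the function name and the 0-based array indexing are illustrative; the usable index ranges shift accordingly from the 1-based ranges quoted above):

import numpy as np

def first_derivative(y, h, i, scheme="central4"):
    """Higher-order first-derivative formulas on an evenly spaced grid.
    y: array of samples, h: spacing, i: index of the target point (0-based)."""
    if scheme == "forward3":    # O(h^2), usable for i = 0 .. n-3
        return (-3*y[i] + 4*y[i+1] - y[i+2]) / (2*h)
    if scheme == "backward3":   # O(h^2), usable for i = 2 .. n-1
        return (3*y[i] - 4*y[i-1] + y[i-2]) / (2*h)
    if scheme == "central4":    # O(h^4), needs two neighbors on each side
        return (y[i-2] - 8*y[i-1] + 8*y[i+1] - y[i+2]) / (12*h)
    raise ValueError(f"unknown scheme: {scheme}")

h = 0.1
x = np.arange(0.0, 1.0 + h, h)
y = np.sin(x)
print(first_derivative(y, h, 5, "central4"), np.cos(x[5]))  # compare with the exact value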

2.3 Finite difference formulas of second derivatives

Three-point forward difference approximation

$$f(x_{i+1}) = f(x_i) + h f'(x_i) + \frac{h^2}{2!} f''(x_i) + \frac{h^3}{3!} f'''(\eta_1)$$

$$f(x_{i+2}) = f(x_i) + 2h f'(x_i) + \frac{(2h)^2}{2!} f''(x_i) + \frac{(2h)^3}{3!} f'''(\eta_2)$$

Subtracting twice the first equation from the second eliminates $f'(x_i)$ and gives:

$$f''(x_i) = \frac{f(x_{i+2}) - 2f(x_{i+1}) + f(x_i)}{h^2} - \frac{8h}{3!} f'''(\eta_2) + \frac{2h}{3!} f'''(\eta_1)$$

If the error terms are ignored, the previous expression reduces to the three-point forward finite difference approximation. Ignoring these terms introduces a truncation error that is proportional to h. The truncation error is said to be on the order of h:

$$\text{truncation error} = -\frac{8h}{3!} f'''(\eta_2) + \frac{2h}{3!} f'''(\eta_1) = O(h)$$

Three-point backward difference approximation

$$f(x_{i-1}) = f(x_i) - h f'(x_i) + \frac{h^2}{2!} f''(x_i) - \frac{h^3}{3!} f'''(\eta_1)$$

$$f(x_{i-2}) = f(x_i) - 2h f'(x_i) + \frac{(2h)^2}{2!} f''(x_i) - \frac{(2h)^3}{3!} f'''(\eta_2)$$

$$f''(x_i) = \frac{f(x_{i-2}) - 2f(x_{i-1}) + f(x_i)}{h^2} + \frac{8h}{3!} f'''(\eta_2) - \frac{2h}{3!} f'''(\eta_1)$$

If the error terms are ignored, the previous expression reduces to the three-point backward finite difference approximation.

Ignoring these terms introduces a truncation error that is proportional to h. The truncation error is said to be on the order of h:

$$\text{truncation error} = \frac{8h}{3!} f'''(\eta_2) - \frac{2h}{3!} f'''(\eta_1) = O(h)$$

Three-point central difference approximation

$$f(x_{i+1}) = f(x_i) + h f'(x_i) + \frac{h^2}{2!} f''(x_i) + \frac{h^3}{3!} f'''(x_i) + \frac{h^4}{4!} f^{(4)}(\eta_1)$$

$$f(x_{i-1}) = f(x_i) - h f'(x_i) + \frac{h^2}{2!} f''(x_i) - \frac{h^3}{3!} f'''(x_i) + \frac{h^4}{4!} f^{(4)}(\eta_2)$$

Adding the two equations and solving for $f''(x_i)$ gives:

$$f''(x_i) = \frac{f(x_{i+1}) - 2f(x_i) + f(x_{i-1})}{h^2} - \frac{h^2}{4!}\left[f^{(4)}(\eta_1) + f^{(4)}(\eta_2)\right]$$

If the second term is ignored, the previous expression reduces to the three-point central difference approximation. Ignoring the second term introduces a truncation error that is proportional to h^2. The truncation error is said to be on the order of h^2:

$$\text{truncation error} = -\frac{h^2}{4!}\left[f^{(4)}(\eta_1) + f^{(4)}(\eta_2)\right] = O(h^2)$$

The same procedure can be used to develop higher-order finite difference approximations; the formulas for the third and fourth derivatives are summarized in the next section.
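A minimal sketch of the second-derivative formulas, assuming an evenly spaced grid stored in a NumPy array; the central formula applies to interior points and the forward formula can be used at the left end:

import numpy as np

def second_derivative_central(y, h, i):
    """Three-point central estimate of f''(x_i), O(h^2), for interior points."""
    return (y[i+1] - 2*y[i] + y[i-1]) / h**2

def second_derivative_forward(y, h, i):
    """Three-point forward estimate of f''(x_i), O(h), usable at the left boundary."""
    return (y[i+2] - 2*y[i+1] + y[i]) / h**2

h = 0.05
x = np.arange(0.0, 1.0 + h, h)
y = np.exp(x)
print(second_derivative_central(y, h, 10), np.exp(x[10]))  # exact second derivative is e^x
print(second_derivative_forward(y, h, 0), np.exp(x[0]))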

2.4 Finite difference formulas of third and fourth derivatives

[Summary table of finite difference formulas for the third and fourth derivatives.]

3 Richardson's extrapolation

To this point, we have seen that there are two ways to improve derivative estimates when employing finite divided differences: (1) decrease the step size, or (2) use a higher-order formula that employs more points. A third approach, based on Richardson extrapolation, uses two derivative estimates to compute a third, more accurate approximation.

In general terms, consider the value D of a derivative (unknown) that is calculated by the difference formula:

$$D = D(h) + k_2 h^2 + k_4 h^4$$

where D(h) is a function that approximates the value of the derivative, and k_2 h^2 and k_4 h^4 are error terms in which the coefficients k_2 and k_4 are independent of the spacing h. Using the same formula to calculate the value of D, but with a spacing of h/2, gives:

$$D = D\!\left(\frac{h}{2}\right) + k_2 \left(\frac{h}{2}\right)^2 + k_4 \left(\frac{h}{2}\right)^4$$

Combining the last two equations (multiplying the second by 4, subtracting the first, and dividing by 3) we get:

$$D = \frac{1}{3}\left[4 D\!\left(\frac{h}{2}\right) - D(h)\right] - \frac{k_4}{4} h^4 = \frac{1}{3}\left[4 D\!\left(\frac{h}{2}\right) - D(h)\right] + O(h^4)$$

This means that an approximate value of D with error O(h^4) is obtained from two lower-order approximations, D(h) and D(h/2), that were calculated with an error O(h^2).

Example:

$$f(x_{i+1}) = f(x_i) + h f'(x_i) + \frac{h^2}{2!} f''(x_i) + \frac{h^3}{3!} f'''(x_i) + \frac{h^4}{4!} f^{(4)}(x_i) + \frac{h^5}{5!} f^{(5)}(\eta_1)$$

$$f(x_{i-1}) = f(x_i) - h f'(x_i) + \frac{h^2}{2!} f''(x_i) - \frac{h^3}{3!} f'''(x_i) + \frac{h^4}{4!} f^{(4)}(x_i) - \frac{h^5}{5!} f^{(5)}(\eta_2)$$

Subtracting the second equation from the first gives:

$$f(x_{i+1}) - f(x_{i-1}) = 2h f'(x_i) + \frac{2h^3}{3!} f'''(x_i) + \frac{h^5}{5!}\left[f^{(5)}(\eta_1) + f^{(5)}(\eta_2)\right]$$

$$f'(x_i) = \frac{f(x_{i+1}) - f(x_{i-1})}{2h} - \frac{h^2}{3!} f'''(x_i) - \frac{1}{2}\left[f^{(5)}(\eta_1) + f^{(5)}(\eta_2)\right]\frac{h^4}{5!}$$

The error has the form k_2 h^2 + k_4 h^4, so Richardson extrapolation can be applied with

$$D(h) = \frac{f(x_i + h) - f(x_i - h)}{2h}$$

which gives

$$D = \frac{1}{3}\left[4\,\frac{f(x_i + h/2) - f(x_i - h/2)}{h} - \frac{f(x_i + h) - f(x_i - h)}{2h}\right] + O(h^4)$$

4 Error in numerical differentiation

Throughout this chapter, expressions have been given for the truncation error, also known as the discretization error. These expressions are generated by the particular numerical scheme used to derive a specific finite difference formula for estimating the derivative. In each case, the truncation error depends on h (the spacing between the points) raised to some power. Clearly, the implication is that as h is made smaller and smaller, the error could be made arbitrarily small.

When the function to be differentiated is specified as a set of discrete data points, the spacing is fixed, and the truncation error cannot be reduced by reducing the size of h. In this case, a smaller truncation error can be obtained by using a finite difference formula that has a higher-order truncation error.

When the function being differentiated is given by a mathematical expression, the spacing h for the points used in the finite difference formulas can be defined by the user. It might appear then that h can be made arbitrarily small and there is no limit to how small the error can be made. This, however, is not true, because the total error is composed of two parts. One is the truncation error arising from the numerical method (the specific finite difference formula) that is used. The second part is a round-off error arising from the finite precision of the particular computer used. Therefore, even if the truncation error can be made vanishingly small by choosing smaller and smaller values of h, the round-off error remains, and can even grow as h is made smaller and smaller.
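The following sketch, assuming f(x) = sin x as a test function, applies Richardson extrapolation to the two-point central difference and also illustrates the round-off effect discussed above: below a certain h the error stops decreasing and begins to grow:

import numpy as np

def central(f, x, h):
    """Two-point central difference, O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """Richardson extrapolation of the central difference: (4*D(h/2) - D(h)) / 3, O(h^4)."""
    return (4 * central(f, x, h / 2) - central(f, x, h)) / 3

f, dfdx, x0 = np.sin, np.cos, 1.0
for h in [1e-1, 1e-3, 1e-6, 1e-9]:
    err_c = abs(central(f, x0, h) - dfdx(x0))
    err_r = abs(richardson(f, x0, h) - dfdx(x0))
    # For very small h the round-off error dominates and both estimates degrade.
    print(f"h={h:.0e}  central error={err_c:.2e}  Richardson error={err_r:.2e}")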

5 Numerical partial differentiation

For a function of several independent variables, the partial derivative of the function with respect to one of the variables represents the rate of change of the value of the function with respect to this variable, while all the other variables are kept constant. For a function f(x, y) of two independent variables, the partial derivatives with respect to x and y at the point (a, b) are defined as:

$$\left.\frac{\partial f}{\partial x}\right|_{x=a,\,y=b} = \lim_{x \to a} \frac{f(x, b) - f(a, b)}{x - a}$$

$$\left.\frac{\partial f}{\partial y}\right|_{x=a,\,y=b} = \lim_{y \to b} \frac{f(a, y) - f(a, b)}{y - b}$$

This means that the finite difference formulas used for approximating the derivatives of functions with one independent variable can be adopted for calculating partial derivatives. The formulas are applied to one of the variables, while the other variables are kept constant.

Two-point forward difference approximation:

$$\frac{\partial f}{\partial x} \approx \frac{f(x_{i+1}, y_j) - f(x_i, y_j)}{x_{i+1} - x_i} \qquad \frac{\partial f}{\partial y} \approx \frac{f(x_i, y_{j+1}) - f(x_i, y_j)}{y_{j+1} - y_j}$$

Two-point backward difference approximation:

$$\frac{\partial f}{\partial x} \approx \frac{f(x_i, y_j) - f(x_{i-1}, y_j)}{x_i - x_{i-1}} \qquad \frac{\partial f}{\partial y} \approx \frac{f(x_i, y_j) - f(x_i, y_{j-1})}{y_j - y_{j-1}}$$

Two-point central difference approximation:

$$\frac{\partial f}{\partial x} \approx \frac{f(x_{i+1}, y_j) - f(x_{i-1}, y_j)}{x_{i+1} - x_{i-1}} \qquad \frac{\partial f}{\partial y} \approx \frac{f(x_i, y_{j+1}) - f(x_i, y_{j-1})}{y_{j+1} - y_{j-1}}$$

The second partial derivatives with the three-point central difference formula are:

$$\frac{\partial^2 f}{\partial x^2} \approx \frac{f(x_{i-1}, y_j) - 2f(x_i, y_j) + f(x_{i+1}, y_j)}{(x_{i+1} - x_i)^2}$$

$$\frac{\partial^2 f}{\partial y^2} \approx \frac{f(x_i, y_{j-1}) - 2f(x_i, y_j) + f(x_i, y_{j+1})}{(y_{j+1} - y_j)^2}$$

Mixed derivative:

$$\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right) \approx \frac{\partial}{\partial x}\left(\frac{f(x, y_{j+1}) - f(x, y_{j-1})}{y_{j+1} - y_{j-1}}\right) \approx \frac{f(x_{i+1}, y_{j+1}) - f(x_{i+1}, y_{j-1}) - f(x_{i-1}, y_{j+1}) + f(x_{i-1}, y_{j-1})}{(y_{j+1} - y_{j-1})(x_{i+1} - x_{i-1})}$$
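As a closing sketch, assuming the function values are tabulated on an evenly spaced rectangular grid with F[i, j] = f(x_i, y_j) (the helper name is illustrative), the central difference formulas above translate directly into array operations; the point (i, j) must be an interior grid point:

import numpy as np

def partials_central(F, dx, dy, i, j):
    """Central-difference estimates of the first, second, and mixed partial
    derivatives at grid point (i, j) on an evenly spaced grid."""
    fx  = (F[i+1, j] - F[i-1, j]) / (2 * dx)
    fy  = (F[i, j+1] - F[i, j-1]) / (2 * dy)
    fxx = (F[i+1, j] - 2*F[i, j] + F[i-1, j]) / dx**2
    fyy = (F[i, j+1] - 2*F[i, j] + F[i, j-1]) / dy**2
    fxy = (F[i+1, j+1] - F[i+1, j-1] - F[i-1, j+1] + F[i-1, j-1]) / (4 * dx * dy)
    return fx, fy, fxx, fyy, fxy

# Example on f(x, y) = x^2 * y, where f_x = 2xy, f_y = x^2, and f_xy = 2x.
dx = dy = 0.1
x = np.arange(0.0, 1.0 + dx, dx)
y = np.arange(0.0, 1.0 + dy, dy)
X, Y = np.meshgrid(x, y, indexing="ij")
F = X**2 * Y
print(partials_central(F, dx, dy, 5, 5))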