Integration, differentiation, and root finding. Phys 420/580 Lecture 7


Numerical integration

Compute an approximation to the definite integral I = ∫_a^b f(x) dx, i.e., find the area under the curve f(x) in the interval [a, b].

Trapezoid Rule: the simplest geometric approximation,

  I ≈ (b − a) · [f(a) + f(b)]/2

[Figure: the area under f(x) between a and b approximated by a single trapezoid.]
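
A minimal sketch of the single-interval trapezoid estimate, assuming Python (the lecture does not prescribe a language); the integrand and interval are chosen only for illustration.

```python
import math

def trapezoid(f, a, b):
    """Single-interval trapezoid estimate of the integral of f over [a, b]."""
    return (b - a) * (f(a) + f(b)) / 2.0

# Example: a crude one-trapezoid estimate of the integral of exp(-x^2) over [0, 1].
print(trapezoid(lambda x: math.exp(-x * x), 0.0, 1.0))
```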

Numerical integration

What is the quality of the approximation? Construct a power series from the left endpoint:

  f(x) = f(a) + f′(a)(x − a) + [f″(a)/2!](x − a)^2 + [f‴(a)/3!](x − a)^3 + ...
       = f(a) + f′(a)(x − a) + [f″(ξ)/2!](x − a)^2   (exact equality, true for some choice of ξ ∈ [a, b])

Numerical integration

Integrating term by term (with H = b − a) gives

  I = f(a)H + [f′(a)/2!] H^2 + [f″(ξ)/3!] H^3

  I_trap = [f(a)/2] H + [f(b)/2] H
         = [f(a)/2] H + (H/2)[f(a) + f′(a)H + (f″(ξ)/2!) H^2]
         = f(a)H + [f′(a)/2] H^2 + [f″(ξ)/4] H^3

  error: I − I_trap = O(H^3)

Numerical integration

How to improve the estimate?

1. Find an approximation that matches to higher order: e.g., the trapezoid rule is the best linear fit; Simpson's rule is the best quadratic fit through three points (sketch below),

     I_Simp = (H/6)[f(a) + 4 f((a + b)/2) + f(b)],    I − I_Simp = O(H^5)

2. Make H small by subdividing the interval.
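
A matching sketch of Simpson's rule (Python assumed, example integrand chosen for illustration); with a single quadratic panel it already lands close to the exact value 0.746824... for this integrand.

```python
import math

def simpson(f, a, b):
    """Simpson's rule on [a, b]: exact for polynomials up to degree 3."""
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

print(simpson(lambda x: math.exp(-x * x), 0.0, 1.0))
```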

Polynomial fitting

Newton-Cotes formulas:

  (H/2)[f(a) + f(b)]
  (H/6)[f(a) + 4 f((a + b)/2) + f(b)]
  (H/8)[f(a) + 3 f((2a + b)/3) + 3 f((a + 2b)/3) + f(b)]
  (H/90)[7 f(a) + 32 f((3a + b)/4) + 12 f((a + b)/2) + 32 f((a + 3b)/4) + 7 f(b)]

Polynomial fitting over a uniformly spaced set of points. At high order, prone to Runge's phenomenon.

Runge's phenomenon

[Figure: an order-9 polynomial fit through 10 evenly spaced points of f(x) = 2/(2 + (2x − 11)^2), illustrating Runge's phenomenon.]

Interpolation error

Suppose n + 1 ordered points x_0 < x_1 < ... < x_n evaluate to f(x_i) = y_i. The order-n interpolating polynomial satisfying P(x_i) = y_i is related to the original function by

  f(x) = P(x) + [f^(n+1)(ξ)/(n + 1)!] ∏_{i=0}^{n} (x − x_i)

for some ξ ∈ [x_0, x_n].

Interpolation error

The overall fitting error is controlled by

  E = ∫_{x_0}^{x_n} dx Π(x)^2,    where Π(x) = ∏_{i=0}^{n} (x − x_i).

Formal minimization, ∂E/∂x_i = 0, gives

  x_i = [∫_{x_0}^{x_n} dx x ∏_{j≠i} (x − x_j)^2] / [∫_{x_0}^{x_n} dx ∏_{k≠i} (x − x_k)^2].

Chebyshev nodes provide a good approximate solution:

  x_i = (1/2)[x_0 + x_n − (x_n − x_0) cos((i + 1/2)π/(n + 1))]

Integrating piecewise

Break the integral into N disjoint intervals covering [a, b]:

  I = ∫_a^b f(x) dx = ∫_a^{x_1} f(x) dx + ∫_{x_1}^{x_2} f(x) dx + ... + ∫_{x_{N−1}}^{b} f(x) dx

Treat the integrals piecewise using, e.g., the trapezoid rule. Global error:

  N · O((H/N)^3) = O(1/N^2)

Becomes exact in the limit N → ∞.
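
A sketch of the composite (piecewise) trapezoid rule under the same assumptions; doubling N should cut the error by roughly a factor of four, consistent with the O(1/N^2) estimate above.

```python
import math

def composite_trapezoid(f, a, b, N):
    """Composite trapezoid rule on N equal subintervals of [a, b]."""
    h = (b - a) / N
    total = 0.5 * (f(a) + f(b))
    for k in range(1, N):
        total += f(a + k * h)
    return h * total

for N in (4, 8, 16, 32):
    print(N, composite_trapezoid(lambda x: math.exp(-x * x), 0.0, 1.0, N))
```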

Romberg integration

Break the interval [a, b] of width H = b − a into 2^n subintervals of width h_n = H/2^n. By recursive construction, define

  R_{0,0} = (H/2)[f(a) + f(b)]

  R_{n,0} = (1/2) R_{n−1,0} + h_n ∑_{k=1}^{2^{n−1}} f(a + (2k − 1) h_n)

  R_{n,m} = [4^m R_{n,m−1} − R_{n−1,m−1}] / (4^m − 1)

Romberg integration

The entries form a triangular tableau:

  R_{0,0}
  R_{1,0}  R_{1,1}
  R_{2,0}  R_{2,1}  R_{2,2}
  R_{3,0}  R_{3,1}  R_{3,2}  R_{3,3}

Example: (2/√π) ∫_0^1 dx e^{−x^2} ≐ 0.8427007929497149...

  0.77174333
  0.82526296  0.84310283
  0.83836778  0.84273605  0.84271160
  0.84161922  0.84270304  0.84270083  0.84270066
  0.84243051  0.84270093  0.84270079  0.84270079  0.84270079
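
A sketch of the recursion (Python assumed) that builds this tableau; applied to (2/√π) ∫_0^1 e^{−x^2} dx it reproduces numbers like those above.

```python
import math

def romberg(f, a, b, n_max):
    """Build the Romberg tableau R[n][m] up to refinement level n_max."""
    H = b - a
    R = [[0.0] * (n + 1) for n in range(n_max + 1)]
    R[0][0] = 0.5 * H * (f(a) + f(b))
    for n in range(1, n_max + 1):
        h = H / 2**n
        # Trapezoid refinement: reuse R[n-1][0] and add the new midpoints.
        R[n][0] = 0.5 * R[n - 1][0] + h * sum(
            f(a + (2 * k - 1) * h) for k in range(1, 2**(n - 1) + 1))
        # Extrapolate across the row.
        for m in range(1, n + 1):
            R[n][m] = (4**m * R[n][m - 1] - R[n - 1][m - 1]) / (4**m - 1)
    return R

f = lambda x: 2.0 / math.sqrt(math.pi) * math.exp(-x * x)
for row in romberg(f, 0.0, 1.0, 4):
    print("  ".join(f"{entry:.8f}" for entry in row))
```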

Troublesome cases

There are additional complications if the region of integration is infinite or semi-infinite, or if the integrand is otherwise badly behaved: e.g.,
1. it diverges
2. it has discontinuities
3. it oscillates infinitely often in some finite region

Semi-infinite integral

Choose a monotonic increasing function φ : [0, ∞] → [0, 1], a one-to-one map between R^+ and the unit interval. E.g.,

  y = φ(x) = x/(1 + x),    x = φ^{−1}(y) = y/(1 − y)

Conventional integration in transformed coordinates:

  ∫_0^∞ dx f(x) = ∫_0^1 dy f(φ^{−1}(y)) (φ^{−1})′(y)

[Figure: the map carries points x_i ∈ R^+ to points y_i ∈ [0, 1].]
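
A sketch of this change of variables (Python assumed), using ∫_0^∞ e^{−x} dx = 1 as a test case; the guard at y = 1 encodes the fact that, for this integrand, the exponential decay beats the (1 − y)^{−2} Jacobian at the endpoint.

```python
import math

def phi_inv(y):
    """Inverse map x = y / (1 - y), taking [0, 1) onto [0, infinity)."""
    return y / (1.0 - y)

def transformed(f, y):
    """f(phi^{-1}(y)) times the Jacobian d(phi^{-1})/dy = 1/(1 - y)^2."""
    if y >= 1.0:
        return 0.0  # limit value when f decays faster than (1 - y)^{-2} blows up
    return f(phi_inv(y)) / (1.0 - y) ** 2

def composite_trapezoid(g, a, b, N):
    h = (b - a) / N
    return h * (0.5 * g(a) + sum(g(a + k * h) for k in range(1, N)) + 0.5 * g(b))

f = lambda x: math.exp(-x)
print(composite_trapezoid(lambda y: transformed(f, y), 0.0, 1.0, 200))  # approaches 1
```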

Nonuniform mesh

Many problems can be addressed by the proper choice of mesh; the points a, x_1, x_2, ..., x_{N−1}, b do not have to be uniformly spaced:

- Gaussian quadrature: ∫ dx f(x) ≐ ∑_i f(x_i) w_i (sketch below)
- Clenshaw-Curtis quadrature
- choose an adaptive mesh to keep the piecewise areas roughly constant
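
For the Gaussian-quadrature item, a sketch using NumPy's Gauss-Legendre nodes and weights (NumPy is an assumed dependency, not something the lecture specifies):

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature of f on [a, b]."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b + a) + 0.5 * (b - a) * t       # map to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))     # Jacobian (b - a)/2

# A handful of points is already very accurate for a smooth integrand.
print(gauss_legendre(lambda x: np.exp(-x * x), 0.0, 1.0, 5))
```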

Adaptive mesh

Adaptive mesh near a singularity. [Figure: f(x) on [a, b] with mesh points clustered where the integrand varies rapidly.]

Rule of thumb: (x_{i+1} − x_i) · (1/2)[f(x_i) + f(x_{i+1})] ≈ const.

Numerical Calculus

Most physical phenomena evolve continuously and are described in terms of time rates of change and spatial gradients:

  ∂/∂t,  ∂^2/∂t^2,  ∇,  ∇^2,  ...

On the computer we have only floating-point approximations to real numbers and no proper sense of a continuous function and its derivatives.

Numerical Calculus

Recall: integration can be implemented as a limiting process of ever smaller discretization. Numerical differentiation can be done in similar fashion. E.g., breaking the real line into a fine grid x_1, x_2, x_3, ... leads to the lowest-order approximation

  f′(x_i) = lim_{δx → 0} [f(x_i + δx) − f(x_i)]/δx ≈ [f(x_{i+1}) − f(x_i)]/(x_{i+1} − x_i)

Numerical Calculus

We must always consider how the finite-difference scheme scales with the grid spacing h = x_{i+1} − x_i. Taylor expansion around x_i yields

  f(x_{i+1}) − f(x_i) = f′(x_i) h + (1/2!) f″(x_i) h^2 + (1/3!) f‴(x_i) h^3 + ...

We find that the error of this asymmetric formula scales rather badly:

  f′(x_i) = [f(x_{i+1}) − f(x_i)]/h + O(h)

Numerical Calculus

Instead, expand to the right and left around x_i:

  f(x_{i+1}) = f(x_i) + f′(x_i) h + (1/2!) f″(x_i) h^2 + ...
  f(x_{i−1}) = f(x_i) − f′(x_i) h + (1/2!) f″(x_i) h^2 − ...

The difference gives the symmetric three-point formula

  f′(x_i) = [f(x_{i+1}) − f(x_{i−1})]/(2h) + O(h^2)
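
A side-by-side sketch (Python assumed) of the one-sided and symmetric formulas, using sin at x = 1 so the exact derivative cos(1) is available for comparison:

```python
import math

def forward_diff(f, x, h):
    """One-sided first derivative, error O(h)."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Symmetric three-point first derivative, error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

for h in (1e-1, 1e-2, 1e-3):
    print(h, forward_diff(math.sin, 1.0, h) - math.cos(1.0),
          central_diff(math.sin, 1.0, h) - math.cos(1.0))
```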

Numerical Calculus

The procedure can be generalized to higher order. A five-point formula can be derived by including expansions for f(x_{i+2}), f(x_{i−2}):

  f(x_{i+1}) − f(x_{i−1}) = 2h f′(x_i) + (1/3) f‴(x_i) h^3 + O(h^5)
  f(x_{i+2}) − f(x_{i−2}) = 4h f′(x_i) + (8/3) f‴(x_i) h^3 + O(h^5)

Eliminating the f‴ term yields

  f′(x_i) = (1/12h)[f_{i−2} − 8 f_{i−1} + 8 f_{i+1} − f_{i+2} + O(h^5)]
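
The five-point formula in the same style (Python assumed); its error should drop by roughly 10^4 when h shrinks by a factor of 10, until round-off takes over.

```python
import math

def five_point_diff(f, x, h):
    """Five-point first derivative, error O(h^4)."""
    return (f(x - 2 * h) - 8.0 * f(x - h) + 8.0 * f(x + h) - f(x + 2 * h)) / (12.0 * h)

for h in (1e-1, 1e-2):
    print(h, five_point_diff(math.sin, 1.0, h) - math.cos(1.0))
```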

Richardson extrapolation

A method to automate these order-by-order improvements. Numerical derivatives are built from function evaluations at x, x + h, x + h/ξ, x + h/ξ^2, ... for ξ > 1. Recursive definition:

  D^(1)(h) = [f(x + h) − f(x)]/h

  D^(n+1)(h) = [ξ^n D^(n)(h/ξ) − D^(n)(h)] / (ξ^n − 1)
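
A sketch of the D^(n) recursion (Python assumed); the number of levels and the choice ξ = 2 are arbitrary illustration parameters.

```python
import math

def richardson_derivative(f, x, h, n_levels, xi=2.0):
    """Refine a forward-difference derivative with the D^(n) recursion above."""
    # D^(1) evaluated at h, h/xi, h/xi^2, ...
    D = [(f(x + h / xi**k) - f(x)) / (h / xi**k) for k in range(n_levels)]
    for n in range(1, n_levels):
        # D^(n+1)(h) = [xi^n D^(n)(h/xi) - D^(n)(h)] / (xi^n - 1)
        D = [(xi**n * D[k + 1] - D[k]) / (xi**n - 1) for k in range(len(D) - 1)]
    return D[0]

print(richardson_derivative(math.sin, 1.0, 0.1, 4) - math.cos(1.0))  # small residual
```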

Numerical Calculus

The three-point formula is often good enough. Sometimes it is necessary to take h small rather than to use a high-order multi-point finite-difference formula. A common problem is that high-order formulas are ill-defined near boundaries.

[Figure: a wide stencil of grid points x_{i−4}, ..., x_{i+4} around x_i used to evaluate f′(x_i).]

Numerical Calculus

Finite difference is a well-defined shift-and-subtract operation. Derivatives at arbitrary order have weights that are binomial coefficients:

  Δ^n[f(x_i)] = ∑_{k=0}^{n} (−1)^k (n choose k) f(x_{i+n−2k})

[Figure: the stencil x_{i−3}, ..., x_{i+3} and the first rows of Pascal's triangle (1; 1 1; 1 2 1; 1 3 3 1) supplying the weights.]
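
A sketch of the shift-and-subtract weights (Python assumed); the sampled grid and spacing are invented for the example, and the n-th derivative is recovered by dividing the n-th difference by (2h)^n, since the stencil steps by two grid points at a time.

```python
import math

def nth_difference(samples, i, n):
    """Shift-and-subtract n-th difference about index i, with binomial weights."""
    return sum((-1)**k * math.comb(n, k) * samples[i + n - 2 * k] for k in range(n + 1))

# Example: tabulate sin on a fine grid and recover its second derivative at x = 1.
h = 0.01
grid = [1.0 + (j - 10) * h for j in range(21)]      # 21 points centered on x = 1 (index 10)
samples = [math.sin(x) for x in grid]
print(nth_difference(samples, 10, 2) / (2 * h)**2)  # close to -sin(1) = -0.8415
```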

Root finding

How to find the solution(s) of the equation f(x) = 0? The choice of method depends on whether f′(x) is known analytically. If we have knowledge of the derivative, a common and efficient scheme is the Newton-Raphson method.

Root finding

Suppose there is a root at x* and we guess that its position is x_n. Expansion in terms of the deviation Δx_n = x_n − x* yields

  f(x*) = f(x_n) − f′(x_n) Δx_n + O([Δx_n]^2) = 0

View the approximate solution as a refined guess x_{n+1}:

  x* ≈ x_{n+1} = x_n − f(x_n)/f′(x_n)
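
A minimal Newton-Raphson sketch (Python assumed); the test function x^2 − 2 and the tolerance are illustration choices.

```python
def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge; try a closer initial guess")

print(newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))  # ~1.4142135623730951
```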

Newton-Raphson iterations

Example of a successful Newton-Raphson search. Each iteration brings us closer to the true root. If f(x) is smooth and the initial guess is close to the root, convergence is very fast:

  lim_{n → ∞} x_n = x*

[Figure: f(x) with successive tangent-line iterates x_1, x_2, x_3 approaching the root x*.]

Convergence region

Failure to find the root can result if the initial guess is not sufficiently close to the true root. Linear extrapolation from the local slope may be a bad approximation.

[Figure: f(x) with iterates x_1, x_2 moving away from the root x*.]

Convergence region

The region in which initial guesses converge to the correct solution can be expanded with a quadratic estimate of the y-axis crossing:

  x_{n+1} := x_n − [f(x_n)/f′(x_n)] · [1 − f″(x_n) f(x_n) / (2 (f′(x_n))^2)]^{−1}

[Figure: f(x) with iterates x_1, x_2 and the root x*.]

Common variants

Infer the direction from the slope but use a much smaller step size than demanded by Newton-Raphson (a sketch follows),

  x_{n+1} := x_n − sgn[f(x_n)/f′(x_n)] · h,    h ≪ |f(x_n)/f′(x_n)|

or use a randomly generated distribution of step sizes.
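
A sketch of the first variant (Python assumed): the slope sets only the direction, the walk proceeds in fixed steps of size h, and ordinary Newton steps finish the job once the full Newton step is already smaller than h. The cubic test function and the step size are illustration choices.

```python
def damped_newton(f, fprime, x0, h=0.01, tol=1e-10, max_iter=10000):
    """Step toward the root by +/- h in the Newton direction; finish with Newton."""
    x = x0
    for _ in range(max_iter):
        full_step = f(x) / fprime(x)
        if abs(full_step) < h:          # close to the root: take the full Newton step
            x -= full_step
            if abs(full_step) < tol:
                return x
        else:                           # far away: move a cautious fixed distance
            x -= h if full_step > 0 else -h
    raise RuntimeError("did not converge")

print(damped_newton(lambda x: x**3 - 2.0 * x - 5.0, lambda x: 3.0 * x**2 - 2.0, 10.0))
```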