Lecture Notes on Introduction to Numerical Computation 1


Wen Shen, Fall

These notes are provided to students as a supplement to the textbook. They contain mainly examples that we cover in class. The explanations are not very wordy; that part you will get by attending the class.

Contents

1 Computer arithmetic
    Introduction
    Representation of numbers in different bases
    Floating point representation
    Loss of significance
    Review of Taylor Series
    Numerical differentiations (move to Chapter 1.)
2 Polynomial interpolation
    Introduction
    Lagrange interpolation
    Newton's divided differences
    Errors in Polynomial Interpolation
    Convergence for polynomial interpolation
3 Splines
    Introduction
    Linear Spline
    Quadratic spline
    Natural cubic spline
4 Numerical integration
    Introduction
    Trapezoid rule
    Simpson's rule
    Recursive trapezoid rule
    Romberg Algorithm
    Adaptive Simpson's quadrature scheme
    Gaussian quadrature formulas
5 Numerical solution of nonlinear equations
    Introduction
    Bisection method
    Fixed point iterations
    Newton's method
    Secant method
    Systems of non-linear equations (optional)
6 Direct methods for linear systems
    Introduction
    Naive Gaussian elimination, simplest version
    Gaussian elimination with scaled partial pivoting (optional)
    LU-factorization (optional)
    Tridiagonal and banded systems
    Review of linear algebra
7 Fixed point iterative solvers for linear systems
    Iterative solvers: general introduction
    Jacobi iterations
    Gauss-Seidel iterations
    SOR
    Writing all methods in matrix-vector form
    Analysis of errors and convergence
8 Least Squares
    Problem description
    Linear regression and basic derivation
    LSM with parabola
    LSM with non-polynomial functions
    General linear LSM
    Non-linear LSM
    Least squares for continuous functions
9 ODEs
    Introduction
    Taylor series methods for ODE
    Runge-Kutta methods
    An adaptive Runge-Kutta-Fehlberg method
    Multi-step methods
    A case study for a scalar ODE, solved by various methods in Matlab
        (Euler's method, Heun's method, RK4 method, RKF5 method, comparison)
    Methods for first order systems of ODE
    Higher order equations and systems
    A case study for a system of ODEs by various methods
    Stiff systems
10 Numerical Methods for Some Differential Equations
    Two-point boundary value problems: introduction; shooting method; FDM (finite difference methods)
    Laplace Equation in 2D: finite difference methods
    Heat Equation in 1D
A Homework sets (Homework Set No. 1 through No. 10)

List of Figures

1.1 The big picture
1.2 32-bit computer with single precision
1.3 Mean Value Theorem
1.4 Finite differences to approximate derivatives
3.1 Linear splines
Trapezoid rule: straight line approximation in each sub-interval
Simpson's rule: quadratic polynomial approximation (thick line) in each sub-interval
Simpson's rule: adding the constants at each node
Recursive division of intervals, first few levels
Romberg triangle
Newton's method: linearize f(x) at x_k
Splitting of A
Linear regression
A.1 Construction

Chapter 1 Computer arithmetic

1.1 Introduction

What are numerical methods? They are algorithms that compute approximate solutions to equations and similar problems. Such algorithms are meant to be implemented (programmed) on a computer. Figure 1.1 gives an overview of how things are related.

[Figure 1.1: The big picture. A flow chart connecting: physical model -> mathematical model -> solve the model with numerical methods -> presentation and visualization of results -> physical explanation of the results -> verification; supported by mathematical theorems, numerical analysis, and computer programming.]

Keep in mind that numerical methods are not about numbers; they are about mathematical insight. We will study some basic classical types of problems:

- development of algorithms;
- implementation;

- a little bit of analysis, including error estimates, convergence, stability, etc.

We will use Matlab throughout the course for programming purposes.

1.2 Representation of numbers in different bases

Some bases for numbers:

- 10: decimal, daily use;
- 2: binary, computer use;
- 8: octal;
- 16: hexadecimal (also used in ancient China, e.g. for weights);
- 20: used in ancient France (traces remain in how numbers are counted in French, e.g. 80 is quatre-vingts, "four twenties");
- etc.

In principle, one can use any number β > 1 as the base:

(a_n a_{n-1} ... a_1 a_0 . b_1 b_2 b_3 ...)_β
    = a_n β^n + a_{n-1} β^{n-1} + ... + a_1 β + a_0        (integer part)
    + b_1 β^{-1} + b_2 β^{-2} + b_3 β^{-3} + ...           (fractional part)

Converting between different bases:

Example 1. Octal -> decimal:

(45.12)_8 = 4·8^1 + 5·8^0 + 1·8^{-1} + 2·8^{-2} = (37.15625)_{10}

Example 2. Octal <-> binary. Observe:

(1)_8 = (1)_2      (2)_8 = (10)_2     (3)_8 = (11)_2     (4)_8 = (100)_2
(5)_8 = (101)_2    (6)_8 = (110)_2    (7)_8 = (111)_2    (8)_8 = (1000)_2

Then, grouping three binary digits per octal digit,

(5034)_8 = (101 000 011 100)_2

and in the other direction

(110 010 111)_2 = (627)_8

Example 3. Decimal -> binary: write (12.45)_{10} in binary base.

Answer. Since the computer uses the binary base, how would the number (12.45)_{10} look in binary? This conversion takes two steps.

First, we treat the integer part. The procedure is to keep dividing by 2 and store the remainder at each step, until one cannot divide anymore:

12 / 2 = 6, remainder 0
 6 / 2 = 3, remainder 0
 3 / 2 = 1, remainder 1
 1 / 2 = 0, remainder 1

Reading the remainders from bottom to top: (12)_{10} = (1100)_2.

For the fractional part, we multiply by 2 repeatedly and store the integer carries:

0.45 × 2 = 0.9  -> 0
0.9  × 2 = 1.8  -> 1
0.8  × 2 = 1.6  -> 1
0.6  × 2 = 1.2  -> 1
0.2  × 2 = 0.4  -> 0
0.4  × 2 = 0.8  -> 0
(and then the block 1100 repeats)

so (0.45)_{10} = (0.011100110011...)_2. Put together:

(12.45)_{10} = (1100.011100110011...)_2

Note that a simple finite-length decimal number such as 12.45 can have an infinitely long fractional part in binary form! How does a computer store such a number?

1.3 Floating point representation

Recall normalized scientific notation:

Decimal: x = ±r × 10^n, 1 ≤ r < 10. (Example: 2345 = 2.345 × 10^3.)
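The two-step procedure above (repeated division by the base for the integer part, repeated multiplication for the fractional part) can be sketched in Python. The function name `to_base` is mine, not part of the notes:

```python
def to_base(x, beta, frac_digits=12):
    """Convert a non-negative number to base beta (2 <= beta <= 10).

    Integer part: repeated division by beta, collecting remainders.
    Fractional part: repeated multiplication by beta, collecting carries.
    """
    n = int(x)
    frac = x - n
    # Integer part: keep dividing, store remainders (read them in reverse).
    digits = []
    while n > 0:
        n, r = divmod(n, beta)
        digits.append(str(r))
    int_part = "".join(reversed(digits)) or "0"
    # Fractional part: keep multiplying by beta, store the integer carries.
    bits = []
    for _ in range(frac_digits):
        frac *= beta
        d = int(frac)
        bits.append(str(d))
        frac -= d
    return int_part + "." + "".join(bits)

print(to_base(12.45, 2))      # 1100.011100110011  (the block 1100 repeats)
print(to_base(37.15625, 8))   # 45.12, followed by trailing zeros
```

Note that the fractional loop is truncated after `frac_digits` steps; for 12.45 it never terminates on its own, which is exactly the point of the example.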

Binary: x = ±r × 2^n, 1 ≤ r < 2.
Octal: x = ±r × 8^n, 1 ≤ r < 8.

Here r is the normalized mantissa; for binary numbers, r = 1.(fractional digits). n is the exponent: if n > 0 then |x| > 1; if n < 0 then |x| < 1.

The computer uses the binary version of the number system, and it represents numbers with finite length. These are called machine numbers. Each bit can store the value 0 or 1. In a 32-bit computer with single precision, a number is stored as: 1 bit for the sign s of the mantissa, 8 bits for the biased exponent c, and 23 bits for the mantissa f (the radix point sits after the implicit leading 1).

[Figure 1.2: 32-bit computer with single precision: s (sign of mantissa, 1 bit) | c (biased exponent, 8 bits) | f (mantissa, 23 bits).]

The exponent: 2^8 = 256 possible values of c, so c − 127 can represent exponents from −127 to 128. The value of the number is

(−1)^s × 2^{c−127} × (1.f)_2

This is called single-precision IEEE standard floating point. The smallest and largest representable positive numbers are approximately

x_min ≈ 1.2 × 10^{−38},    x_max ≈ 3.4 × 10^{38}.

Computers can only handle numbers with absolute value between x_min and x_max. We say that x underflows if |x| < x_min; in this case we consider x = 0. We say that x overflows if |x| > x_max; in this case we consider x = ±∞.

Let fl(x) denote the floating point representation of the number x. It contains error:

fl(x) = x(1 + δ)
relative error:  δ = (fl(x) − x)/x
absolute error:  Δ = fl(x) − x = δ·x

We know that |δ| ≤ ε, where ε is called the machine epsilon: the smallest positive number detectable by the computer, such that fl(1 + ε) > 1. In a 32-bit computer, ε = 2^{−23} ≈ 1.2 × 10^{−7}.

Computer errors in representing numbers:
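The characterization fl(1 + ε) > 1 can be tested directly. A small sketch; note that Python floats are IEEE double precision (52-bit mantissa), not the 32-bit single precision described above, so the epsilon found here is 2^−52:

```python
import sys

# Halve eps until 1 + eps/2 is no longer distinguishable from 1 in floating point.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)                     # 2.220446049250313e-16 = 2^-52 (double precision)
print(sys.float_info.epsilon)  # the same value, as reported by the runtime
```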

- relative error in rounding: |δ| ≤ (1/2)·2^{−23} = 2^{−24};
- relative error in chopping: |δ| ≤ 2^{−23}.

Error propagation (through arithmetic operations)

Example 1. Consider an addition z = x + y done in a computer. How do the errors propagate? To fix the idea, let x > 0, y > 0, and let

fl(x) = x(1 + δ_x),    fl(y) = y(1 + δ_y)

where δ_x and δ_y are the errors in the floating point representations of x and y, respectively. Then

fl(z) = fl(fl(x) + fl(y))
      = (x(1 + δ_x) + y(1 + δ_y))(1 + δ_z)
      = (x + y) + x(δ_x + δ_z) + y(δ_y + δ_z) + (x δ_x δ_z + y δ_y δ_z)
      ≈ (x + y) + x(δ_x + δ_z) + y(δ_y + δ_z)

Here δ_z is the round-off error made in the floating point representation of z, and the quadratic terms in the δ's are neglected. Then we have

absolute error = fl(z) − (x + y)
               = x(δ_x + δ_z) + y(δ_y + δ_z)
               = [x δ_x + y δ_y] + (x + y) δ_z
                 (propagated error: abs. err. for x and y)  (round-off error)

relative error = (fl(z) − (x + y))/(x + y)
               = (x δ_x + y δ_y)/(x + y) + δ_z
                 (propagated error)        (round-off error)

1.4 Loss of significance

This typically happens when one gets too few significant digits in a subtraction. For example, in an 8-digit number

x = 0.d_1 d_2 d_3 ... d_8 × 10^a

d_1 is the most significant digit, and d_8 is the least significant digit. Let y = 0.b_1 b_2 b_3 ... b_8 × 10^a. We want to compute x − y.

If b_1 = d_1, b_2 = d_2, b_3 = d_3, then

x − y = 0.000 c_4 c_5 c_6 c_7 c_8 × 10^a

We lose 3 significant digits.

Example 1. Find the roots of x^2 − 40x + 2 = 0. Use 4 significant digits in the computation.

Answer. The roots of ax^2 + bx + c = 0 are

r_{1,2} = (1/(2a)) (−b ± sqrt(b^2 − 4ac)).

In our case, we have

x_{1,2} = 20 ± sqrt(398) ≈ 20 ± 19.95

so

x_1 = 39.95    (OK)
x_2 = 0.05     (not OK: lost 3 significant digits)

To avoid this, change the algorithm. Observe that x_1 x_2 = c/a. Then

x_2 = c/(a x_1) = 2/39.95 = 0.05006.

We get back 4 significant digits in the result.

Example 2. Compute the function f(x) = sqrt(x^2 + 2x) − x − 1 on a computer. Explain what problem you might run into in certain cases, and find a way to fix the difficulty.

Answer. For large values of x with x > 0, the values sqrt(x^2 + 2x) and x + 1 are very close to each other, so in the subtraction we will lose many significant digits. To avoid this problem, we manipulate f(x) into an equivalent form that avoids the subtraction:

f(x) = (sqrt(x^2 + 2x) − x − 1) · (sqrt(x^2 + 2x) + x + 1) / (sqrt(x^2 + 2x) + x + 1)
     = (x^2 + 2x − (x + 1)^2) / (sqrt(x^2 + 2x) + x + 1)
     = −1 / (sqrt(x^2 + 2x) + x + 1).
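The effect in Example 1 can be reproduced by simulating a 4-significant-digit machine. The `round4` helper and this rounding model are my own illustration, not from the notes:

```python
import math

def round4(v):
    """Round v to 4 significant digits, mimicking a 4-digit computer.
    (Helper name and rounding model are mine, for illustration only.)"""
    if v == 0.0:
        return 0.0
    d = 3 - math.floor(math.log10(abs(v)))
    return round(v, d)

# x^2 - 40x + 2 = 0, worked in 4 significant digits
a, b, c = 1.0, -40.0, 2.0
sq = round4(math.sqrt(b * b - 4 * a * c))   # sqrt(1592) -> 39.90

x1 = round4((-b + sq) / (2 * a))            # 39.95: fine
x2_naive = round4((-b - sq) / (2 * a))      # 0.05: only 1 significant digit survives
x2_stable = round4(c / (a * x1))            # 0.05006: 4 digits recovered

print(x1, x2_naive, x2_stable)  # 39.95 0.05 0.05006
```

The naive formula subtracts two nearly equal 4-digit numbers; the stable formula replaces that subtraction with a division, exactly as the notes suggest.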

1.5 Review of Taylor Series

Given a smooth function f(x), expand it at a point x = c:

f(x) = f(c) + f'(c)(x−c) + (1/2!) f''(c)(x−c)^2 + (1/3!) f'''(c)(x−c)^3 + ⋯

or, using the summation sign,

f(x) = Σ_{k=0}^∞ (1/k!) f^{(k)}(c) (x−c)^k.

This is called the Taylor series of f at the point c. The special case c = 0 is called the Maclaurin series:

f(x) = f(0) + f'(0)x + (1/2!) f''(0)x^2 + (1/3!) f'''(0)x^3 + ⋯ = Σ_{k=0}^∞ (1/k!) f^{(k)}(0) x^k.

Some familiar examples:

e^x     = Σ_{k=0}^∞ x^k/k!                 = 1 + x + x^2/2! + x^3/3! + ⋯,          |x| < ∞
sin x   = Σ_{k=0}^∞ (−1)^k x^{2k+1}/(2k+1)! = x − x^3/3! + x^5/5! − x^7/7! + ⋯,    |x| < ∞
cos x   = Σ_{k=0}^∞ (−1)^k x^{2k}/(2k)!     = 1 − x^2/2! + x^4/4! − x^6/6! + ⋯,    |x| < ∞
1/(1−x) = Σ_{k=0}^∞ x^k                     = 1 + x + x^2 + x^3 + x^4 + ⋯,          |x| < 1

etc. This is actually how computers calculate many functions! For example,

e^x ≈ Σ_{k=0}^N x^k/k!

for some large integer N such that the error is sufficiently small.

Example 1. Compute e to 6-digit accuracy.

Answer. We have

e = e^1 = 1 + 1 + 1/2! + 1/3! + 1/4! + 1/5! + ⋯

And so

1/2! = 0.5
1/3! ≈ 0.166667
1/4! ≈ 0.041667
  ⋮
1/9! ≈ 0.0000028    (can stop here)

e ≈ 1 + 1 + 1/2! + 1/3! + 1/4! + 1/5! + ⋯ + 1/9! ≈ 2.71828

Error and convergence: Assume f^{(k)}(x) (0 ≤ k ≤ n) are continuous functions. Call

f_n(x) = Σ_{k=0}^n (1/k!) f^{(k)}(c)(x−c)^k

the sum of the first n+1 terms of the Taylor series. Then the error is

E_{n+1} = f(x) − f_n(x) = Σ_{k=n+1}^∞ (1/k!) f^{(k)}(c)(x−c)^k = (1/(n+1)!) f^{(n+1)}(ξ)(x−c)^{n+1}

where ξ is some value between x and c. This says that, if the infinite tail sum converges, it is dominated by its first term.

Observation: A Taylor series converges rapidly if x is near c, and slowly (or not at all) if x is far away from c.

Special case n = 0: we get the Mean Value Theorem. If f is smooth on the interval (a,b), then

f(b) − f(a) = (b − a) f'(ξ),    for some ξ in (a,b).

See Figure 1.3. This implies

f'(ξ) = (f(b) − f(a))/(b − a).

So, if a and b are close to each other, this quotient can be used as an approximation for f'. Given h > 0 sufficiently small, we have

f'(x) ≈ (f(x+h) − f(x))/h
f'(x) ≈ (f(x) − f(x−h))/h
f'(x) ≈ (f(x+h) − f(x−h))/(2h)
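The partial-sum computation of e above takes only a few lines; a sketch:

```python
import math

# e = 1 + 1 + 1/2! + 1/3! + ...: add terms until they drop below the accuracy goal.
total, term, k = 0.0, 1.0, 0      # term = 1/0!
while term > 1e-7:                # 1/10! ~ 2.8e-7 is the last term added
    total += term
    k += 1
    term /= k                     # 1/k! computed incrementally, no factorials
print(round(total, 6))            # 2.718282
print(round(math.e, 6))           # 2.718282
```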

[Figure 1.3: Mean Value Theorem.]

Another way of writing the Taylor series:

f(x+h) = Σ_{k=0}^∞ (1/k!) f^{(k)}(x) h^k = Σ_{k=0}^n (1/k!) f^{(k)}(x) h^k + E_{n+1}    (1.1)

where

E_{n+1} = Σ_{k=n+1}^∞ (1/k!) f^{(k)}(x) h^k = (1/(n+1)!) f^{(n+1)}(ξ) h^{n+1}           (1.2)

for some ξ that lies between x and x+h. This form of the Taylor series, (1.1)-(1.2), is the basic formula for deriving error estimates!

1.6 Numerical differentiations (move to Chapter 1.)

Finite differences:

(1) f'(x) ≈ (1/h)(f(x+h) − f(x))
(2) f'(x) ≈ (1/h)(f(x) − f(x−h))
(3) f'(x) ≈ (1/(2h))(f(x+h) − f(x−h))    (central difference)
    f''(x) ≈ (1/h^2)(f(x+h) − 2f(x) + f(x−h))

Truncation errors via Taylor expansion:

f(x+h) = f(x) + h f'(x) + (1/2)h^2 f''(x) + (1/6)h^3 f'''(x) + O(h^4)
f(x−h) = f(x) − h f'(x) + (1/2)h^2 f''(x) − (1/6)h^3 f'''(x) + O(h^4)

[Figure 1.4: Finite differences to approximate derivatives.]

Then,

(f(x+h) − f(x))/h = f'(x) + (1/2)h f''(x) + O(h^2) = f'(x) + O(h)    (1st order)

similarly

(f(x) − f(x−h))/h = f'(x) − (1/2)h f''(x) + O(h^2) = f'(x) + O(h)    (1st order)

and

(f(x+h) − f(x−h))/(2h) = f'(x) + (1/6)h^2 f'''(x) + O(h^4) = f'(x) + O(h^2)    (2nd order)

finally

(f(x+h) − 2f(x) + f(x−h))/h^2 = f''(x) + (1/12)h^2 f^{(4)}(x) + O(h^4) = f''(x) + O(h^2)    (2nd order)
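These orders of accuracy can be observed numerically, for instance with f(x) = sin x at x = 1 (my choice of test function); halving h should roughly halve the first-order error and quarter the second-order errors:

```python
import math

f, x = math.sin, 1.0
exact1, exact2 = math.cos(x), -math.sin(x)   # f'(1) and f''(1)

for h in (0.1, 0.05, 0.025):
    fwd = (f(x + h) - f(x)) / h                    # forward difference: O(h)
    cen = (f(x + h) - f(x - h)) / (2 * h)          # central difference: O(h^2)
    sec = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # second derivative: O(h^2)
    print(h, abs(fwd - exact1), abs(cen - exact1), abs(sec - exact2))
# Halving h roughly halves the forward-difference error
# and quarters the two second-order errors.
```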

Chapter 2 Polynomial interpolation

2.1 Introduction

Problem description: Given (n+1) points (x_i, y_i), i = 0,1,2,...,n, with distinct x_i, possibly sorted as x_0 < x_1 < x_2 < ... < x_n, find a polynomial of degree ≤ n, call it P_n(x),

P_n(x) = a_0 + a_1 x + a_2 x^2 + ⋯ + a_n x^n,

such that it interpolates these points:

P_n(x_i) = y_i,    i = 0,1,2,...,n.

The goal is to determine the coefficients a_0, a_1, ..., a_n. Note that the number of points is one larger than the degree of the polynomial.

Why should we do this? Here are some reasons:

- to find values between the points of a discrete data set;
- to approximate a (possibly complicated) function by a polynomial, which makes computations such as differentiation and integration much easier.

We start with a simple example.

Example 1. Given the table

x_i : 0    1    2/3
y_i : 1    0    0.5

interpolate the data set with a polynomial of degree 2.

Note that this data set satisfies y_i = cos(π x_i / 2).

Answer. Let

P_2(x) = a_0 + a_1 x + a_2 x^2.

We need to find the coefficients a_0, a_1, a_2. By the interpolating property, we have

x = 0,   y = 1:    P_2(0)   = a_0 = 1
x = 1,   y = 0:    P_2(1)   = a_0 + a_1 + a_2 = 0
x = 2/3, y = 0.5:  P_2(2/3) = a_0 + (2/3)a_1 + (4/9)a_2 = 0.5

Here we have 3 equations for 3 unknowns. In matrix-vector form:

| 1   0    0  | | a_0 |   | 1   |
| 1   1    1  | | a_1 | = | 0   |
| 1  2/3  4/9 | | a_2 |   | 0.5 |

This is easy to solve in Matlab (see Homework 1):

a_0 = 1,    a_1 = −1/4,    a_2 = −3/4.

Then

P_2(x) = 1 − (1/4)x − (3/4)x^2.

The general case with (n+1) points, P_n(x_i) = y_i, i = 0,1,2,...,n, gives (n+1) equations:

P_n(x_0) = y_0:   a_0 + x_0 a_1 + x_0^2 a_2 + ⋯ + x_0^n a_n = y_0
P_n(x_1) = y_1:   a_0 + x_1 a_1 + x_1^2 a_2 + ⋯ + x_1^n a_n = y_1
  ⋮
P_n(x_n) = y_n:   a_0 + x_n a_1 + x_n^2 a_2 + ⋯ + x_n^n a_n = y_n

In matrix-vector form:

| 1  x_0  x_0^2 ⋯ x_0^n | | a_0 |   | y_0 |
| 1  x_1  x_1^2 ⋯ x_1^n | | a_1 | = | y_1 |
| ⋮                      | |  ⋮  |   |  ⋮  |
| 1  x_n  x_n^2 ⋯ x_n^n | | a_n |   | y_n |
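The same 3-by-3 system can be set up and solved with NumPy, where np.vander plays the role of Matlab's vander; a sketch (note that increasing=True orders the columns as 1, x, x^2, matching the ordering of a_0, a_1, a_2 above):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0 / 3.0])
y = np.array([1.0, 0.0, 0.5])

# Vandermonde matrix with columns 1, x, x^2 (increasing powers).
X = np.vander(x, increasing=True)
a = np.linalg.solve(X, y)
print(a)   # approximately [1, -0.25, -0.75] -> P_2(x) = 1 - x/4 - (3/4) x^2
```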

or, in compact notation, X a = y, where

X : (n+1) × (n+1) matrix, given (the Vandermonde matrix);
a : unknown vector of length (n+1);
y : given vector of length (n+1).

Known: if the x_i's are distinct, then X is invertible, and therefore a has a unique solution. In Matlab, the command vander([x_1, x_2, ..., x_n]) gives this matrix.

Bad news: X has a very large condition number, so this is not an effective way to solve the problem when n is large. Other more efficient and elegant methods include:

- Lagrange polynomials;
- Newton's divided differences.

2.2 Lagrange interpolation

Given points x_0, x_1, ..., x_n, define the cardinal functions l_0, l_1, ..., l_n ∈ P_n (polynomials of degree n) satisfying the properties

l_i(x_j) = δ_ij = { 1, i = j;  0, i ≠ j },    i = 0,1,...,n.

In words: the cardinal function l_i(x) takes the value 1 at x = x_i, and the value 0 at every other interpolation point x_j with j ≠ i. For x between the interpolation points, it sets no restrictions.

The Lagrange form of the interpolation polynomial is

P_n(x) = Σ_{i=0}^n l_i(x) y_i.

We check the interpolating property:

P_n(x_j) = Σ_{i=0}^n l_i(x_j) y_i = y_j,    for every j.

The cardinal functions l_i(x) can be written as

l_i(x) = Π_{j=0, j≠i}^n (x − x_j)/(x_i − x_j)
       = (x − x_0)/(x_i − x_0) ⋯ (x − x_{i−1})/(x_i − x_{i−1}) · (x − x_{i+1})/(x_i − x_{i+1}) ⋯ (x − x_n)/(x_i − x_n).

One can easily check that l_i(x_i) = 1 and l_i(x_k) = 0 for k ≠ i, i.e., l_i(x_k) = δ_ik.

Example 2. Consider again the same data as in Example 1:

x_i : 0    2/3    1
y_i : 1    0.5    0

Write the Lagrange polynomial.

Answer. The data set corresponds to

x_0 = 0, x_1 = 2/3, x_2 = 1;    y_0 = 1, y_1 = 0.5, y_2 = 0.

We first compute the cardinal functions:

l_0(x) = (x − x_1)(x − x_2) / ((x_0 − x_1)(x_0 − x_2)) = (x − 2/3)(x − 1) / ((0 − 2/3)(0 − 1)) = (3/2)(x − 2/3)(x − 1)
l_1(x) = (x − x_0)(x − x_2) / ((x_1 − x_0)(x_1 − x_2)) = x(x − 1) / ((2/3 − 0)(2/3 − 1)) = −(9/2) x(x − 1)
l_2(x) = (x − x_0)(x − x_1) / ((x_2 − x_0)(x_2 − x_1)) = x(x − 2/3) / ((1 − 0)(1 − 2/3)) = 3x(x − 2/3)

so

P_2(x) = l_0(x) y_0 + l_1(x) y_1 + l_2(x) y_2
       = (3/2)(x − 2/3)(x − 1) − (9/2)x(x − 1)·(0.5) + 0
       = −(3/4)x^2 − (1/4)x + 1.

This is the same as in Example 1.

Pros and cons of the Lagrange polynomial:

- elegant formula (+);
- slow to compute, each l_i(x) is different (−);
- not flexible: if one changes a point x_j, or adds an additional point x_{n+1}, one must recompute all the l_i's (−).

2.3 Newton's divided differences

Given a data set

x_i : x_0  x_1  ⋯  x_n
y_i : y_0  y_1  ⋯  y_n

we will design an algorithm in recursive form.
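The Lagrange form is straightforward to evaluate directly from its definition; a sketch, where the function name `lagrange_eval` is mine:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x:
    P_n(x) = sum_i l_i(x) * y_i, with l_i the cardinal functions."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        li = 1.0                  # build l_i(x) as a product of linear factors
        for j in range(n):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += li * ys[i]
    return total

xs, ys = [0.0, 2 / 3, 1.0], [1.0, 0.5, 0.0]
print(lagrange_eval(xs, ys, 1 / 3))   # ~0.8333 = 1 - (1/4)(1/3) - (3/4)(1/9)
```

At a node, e.g. x = 2/3, it returns the data value 0.5, as the interpolating property requires.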

n = 0:  P_0(x) = y_0.

n = 1:  P_1(x) = P_0(x) + a_1(x − x_0). To determine a_1, set x = x_1: then P_1(x_1) = P_0(x_1) + a_1(x_1 − x_0), so y_1 = y_0 + a_1(x_1 − x_0), and we get

a_1 = (y_1 − y_0)/(x_1 − x_0).

n = 2:  P_2(x) = P_1(x) + a_2(x − x_0)(x − x_1). Set x = x_2: then y_2 = P_1(x_2) + a_2(x_2 − x_0)(x_2 − x_1), so

a_2 = (y_2 − P_1(x_2)) / ((x_2 − x_0)(x_2 − x_1)).

General expression for a_n: Assume that P_{n−1}(x) interpolates (x_i, y_i) for i = 0,1,...,n−1. We will find P_n(x) that interpolates (x_i, y_i) for i = 0,1,...,n, in the form

P_n(x) = P_{n−1}(x) + a_n (x − x_0)(x − x_1) ⋯ (x − x_{n−1})                        (2.1)

where

a_n = (y_n − P_{n−1}(x_n)) / ((x_n − x_0)(x_n − x_1) ⋯ (x_n − x_{n−1})).            (2.2)

One can easily check that this polynomial does the interpolating job! (Detail: for i = 0,1,...,n−1 we have P_n(x_i) = P_{n−1}(x_i) = y_i, since the last term in (2.1) is 0. For i = n, we have P_n(x_n) = y_n, since a_n is chosen in (2.2) precisely to guarantee this interpolating property.)

This gives the Newton form of the interpolating polynomial:

P_n(x) = a_0 + a_1(x − x_0) + a_2(x − x_0)(x − x_1) + ⋯ + a_n(x − x_0)(x − x_1) ⋯ (x − x_{n−1}),

where the coefficients a_n can be determined by (2.2). There is a more elegant way of computing these coefficients, which we study now. The constants a_i are called divided differences, written as

a_0 = f[x_0],    a_1 = f[x_0, x_1],    ...,    a_i = f[x_0, x_1, ..., x_i].

These divided differences can be computed recursively:

f[x_0, x_1, ..., x_k] = (f[x_1, x_2, ..., x_k] − f[x_0, x_1, ..., x_{k−1}]) / (x_k − x_0).    (2.3)

Proof of (2.3) (optional): The proof is by induction. The formula clearly holds for n = 0 and n = 1. Assume it holds for n = k−1, i.e., one can use it to write the interpolation polynomial of degree k−1, through k points, in

Newton form. We now show that it also holds for n = k; by induction, it then holds for all n.

Let P_{k−1}(x) interpolate the points (x_i, y_i), i = 0,...,k−1, and let q(x) interpolate the points (x_i, y_i), i = 1,...,k. Note that, compared to P_{k−1}, the function q(x) does not interpolate (x_0, y_0); instead, it interpolates the extra point (x_k, y_k). Both P_{k−1} and q(x) are polynomials of degree k−1. By the induction assumption, formula (2.3) holds for them in Newton form, i.e.,

P_{k−1}(x) = P_{k−2}(x) + f[x_0,...,x_{k−1}](x − x_0)(x − x_1) ⋯ (x − x_{k−2})
           = f[x_0,...,x_{k−1}] x^{k−1} + (l.o.t.)    (i.e. lower order terms)    (2.4)

q(x) = f[x_1,...,x_k] x^{k−1} + (l.o.t.)                                          (2.5)

We now set

P_k(x) = q(x) + ((x − x_k)/(x_k − x_0)) (q(x) − P_{k−1}(x)).

We claim that P_k(x) interpolates all the points (x_i, y_i), i = 0,...,k. To check this claim, we go through all the points x_i:

i = 1,...,k−1:  q(x_i) = P_{k−1}(x_i) = y_i, so P_k(x_i) = y_i;
i = 0:          P_k(x_0) = q(x_0) + ((x_0 − x_k)/(x_k − x_0))(q(x_0) − y_0) = q(x_0) − (q(x_0) − y_0) = y_0;
i = k:          P_k(x_k) = q(x_k) + 0 = y_k.

Using (2.4)-(2.5), we can now write

P_k(x) = f[x_1,...,x_k] x^{k−1} + (l.o.t.) + ((x − x_k)/(x_k − x_0)) [ (f[x_1,...,x_k] − f[x_0,...,x_{k−1}]) x^{k−1} + (l.o.t.) ]
       = ((f[x_1,...,x_k] − f[x_0,...,x_{k−1}])/(x_k − x_0)) x^k + (l.o.t.)

Comparing this to the Newton form of P_k,

P_k(x) = P_{k−1}(x) + f[x_0,...,x_k](x − x_0) ⋯ (x − x_{k−1}) = f[x_0,...,x_k] x^k + (l.o.t.),

and since these are the same polynomial (uniqueness of interpolating polynomials, coming later), they must have matching coefficients for the leading term x^k, i.e.,

f[x_0,...,x_k] = (f[x_1,...,x_k] − f[x_0,...,x_{k−1}]) / (x_k − x_0),

proving (2.3).

Computation of the divided differences: we compute the f's through the following table:

x_0   f[x_0] = y_0
x_1   f[x_1] = y_1   f[x_0,x_1] = (f[x_1] − f[x_0])/(x_1 − x_0)
x_2   f[x_2] = y_2   f[x_1,x_2] = (f[x_2] − f[x_1])/(x_2 − x_1)   f[x_0,x_1,x_2] = ⋯
 ⋮
x_n   f[x_n] = y_n   f[x_{n−1},x_n] = (f[x_n] − f[x_{n−1}])/(x_n − x_{n−1})   f[x_{n−2},x_{n−1},x_n] = ⋯   ⋯   f[x_0,x_1,...,x_n]

Example: Use Newton's divided differences to write the polynomial that interpolates the data

x_i : 0    1    2/3    1/3
y_i : 1    0    0.5    0.8660

(taking y_3 = cos(π/6) ≈ 0.8660, following the same rule y_i = cos(π x_i / 2) as before).

Answer. Set up the triangular table for the computation:

0      1
1      0        −1
2/3    0.5      −1.5       −0.75
1/3    0.8660   −1.0981    −0.6029    0.4413

So we have

a_0 = 1,  a_1 = −1,  a_2 = −0.75,  a_3 = 0.4413,
x_0 = 0,  x_1 = 1,  x_2 = 2/3,  x_3 = 1/3,

therefore the interpolating polynomial in Newton form is

P_3(x) = 1 − x − 0.75 x(x − 1) + 0.4413 x(x − 1)(x − 2/3).

Flexibility of the Newton form: it is easy to add additional points to interpolate. For example, if you add one more point, say (x_4, y_4) = (0.5, 1), you can keep all the work done so far and just add one more line in the table to get a_4. Try it yourself!

Nested form:

P_n(x) = a_0 + a_1(x − x_0) + a_2(x − x_0)(x − x_1) + ⋯ + a_n(x − x_0)(x − x_1) ⋯ (x − x_{n−1})
       = a_0 + (x − x_0)(a_1 + (x − x_1)(a_2 + (x − x_2)(a_3 + ⋯ + a_n(x − x_{n−1}) ⋯ )))

This is effective to compute in a program: given the data x_i and a_i for i = 0,1,...,n, one can evaluate the Newton polynomial p = P_n(x) by the following algorithm (pseudocode):

p = a_n
for k = n−1, n−2, ..., 0

    p = p(x − x_k) + a_k
end

Existence and Uniqueness theorem for polynomial interpolation: Given (x_i, y_i), i = 0,...,n, with the x_i's distinct, there exists one and only one polynomial P_n(x) of degree ≤ n such that

P_n(x_i) = y_i,    i = 0,1,...,n.

Proof. The existence of such a polynomial is obvious, since we can construct it using one of our methods. Regarding uniqueness: assume we have two polynomials, call them p(x) and q(x), both of degree ≤ n, that both interpolate the data, i.e.,

p(x_i) = y_i,    q(x_i) = y_i,    i = 0,1,...,n.

Now let g(x) = p(x) − q(x), which is a polynomial of degree ≤ n. Furthermore,

g(x_i) = p(x_i) − q(x_i) = y_i − y_i = 0,    i = 0,1,...,n,

so g(x) has n+1 zeros. Since its degree is at most n, we must have g(x) ≡ 0, therefore p(x) ≡ q(x).

2.4 Errors in Polynomial Interpolation

Given a function f(x) on the interval a ≤ x ≤ b, and a set of distinct points x_i ∈ [a,b], i = 0,1,...,n, let P_n(x) be the polynomial of degree ≤ n that interpolates f(x) at the x_i, i.e.,

P_n(x_i) = f(x_i),    i = 0,1,...,n.

Define the error

e(x) = f(x) − P_n(x),    x ∈ [a,b].

Theorem. For each x ∈ [a,b] there exists some value ξ ∈ [a,b] such that

e(x) = (1/(n+1)!) f^{(n+1)}(ξ) Π_{i=0}^n (x − x_i).

Proof. If f ∈ P_n, then f(x) = P_n(x), and the claim is trivial. Now assume f ∉ P_n. For x = x_i we have e(x_i) = f(x_i) − P_n(x_i) = 0, OK. Now fix a point a with a ≠ x_i for any i. We define

W(x) = Π_{i=0}^n (x − x_i) ∈ P_{n+1}

and a constant

c = (f(a) − P_n(a)) / W(a),

and another function

φ(x) = f(x) − P_n(x) − c·W(x).

Now we find all the zeros of this function φ:

φ(x_i) = f(x_i) − P_n(x_i) − c·W(x_i) = 0,    i = 0,1,...,n,

and

φ(a) = f(a) − P_n(a) − c·W(a) = 0.

So φ has at least (n+2) zeros. Here goes our deduction:

φ(x) has at least n+2 zeros
φ'(x) has at least n+1 zeros
φ''(x) has at least n zeros
  ⋮
φ^{(n+1)}(x) has at least 1 zero; call it ξ.

Using W^{(n+1)} = (n+1)!, we get

φ^{(n+1)}(ξ) = f^{(n+1)}(ξ) − 0 − c·W^{(n+1)}(ξ) = 0,

so

f^{(n+1)}(ξ) = c·W^{(n+1)}(ξ) = ((f(a) − P_n(a))/W(a)) · (n+1)!.

So we have

f(a) − P_n(a) = (1/(n+1)!) f^{(n+1)}(ξ) W(a).

Changing a into x, we get

e(x) = f(x) − P_n(x) = (1/(n+1)!) f^{(n+1)}(ξ) W(x) = (1/(n+1)!) f^{(n+1)}(ξ) Π_{i=0}^n (x − x_i).

Example. For n = 1, x_0 = a, x_1 = b, b > a, we have an upper bound for the error: for x ∈ [a,b],

|e(x)| = (1/2) |f''(ξ)| |(x − a)(x − b)| ≤ (1/8) max|f''| (b − a)^2.

Observation: Different distributions of the nodes x_i give different errors.

Uniform nodes: equally distributed in space. Consider an interval [a,b], and distribute n+1 nodes uniformly as

x_i = a + i·h,    h = (b − a)/n,    i = 0,1,...,n.

One can show that

Π_{i=0}^n |x − x_i| ≤ (1/4) h^{n+1} n!

(Try to prove it!) This gives the error estimate

|e(x)| ≤ (1/(4(n+1))) max_{x∈[a,b]} |f^{(n+1)}(x)| h^{n+1} = (M_{n+1}/(4(n+1))) h^{n+1},

where M_{n+1} = max_{x∈[a,b]} |f^{(n+1)}(x)|.

Example. Consider interpolating f(x) = sin(πx) with a polynomial on the interval [−1,1] with uniform nodes. Give an upper bound for the error, and show how it is related to the total number of nodes with some numerical simulations.

Answer. We have |f^{(n+1)}(x)| ≤ π^{n+1}, so the upper bound for the error is

|e(x)| = |f(x) − P_n(x)| ≤ (π^{n+1}/(4(n+1))) (2/n)^{n+1}.

[Table: error bound versus measured error from simulations, for various n.]

Problem with uniform nodes: peaks of error near the boundaries. See plots.

Chebyshev nodes: equally distributing the error.

Type I: including the endpoints. For the interval [−1,1]:

x_i = cos(iπ/n),    i = 0,1,...,n.

For the interval [a,b]:

x_i = (1/2)(a + b) + (1/2)(b − a) cos(iπ/n),    i = 0,1,...,n.

With this choice of nodes, one can show that

max_x Π_{k=0}^n |x − x_k| = 2^{−n} ≤ max_x Π_{k=0}^n |x − x̄_k|

where x̄_k is any other choice of nodes. This gives the error bound

|e(x)| ≤ (1/(n+1)!) max|f^{(n+1)}(x)| · 2^{−n}.

Example. Consider the same example as with uniform nodes, f(x) = sin(πx). With Chebyshev nodes, we have

|e(x)| ≤ (π^{n+1}/(n+1)!) 2^{−n}.

[Table: error bound versus measured error, for various n.] The errors are much smaller than with uniform nodes!

Type II: Chebyshev nodes can also be chosen strictly inside the interval [a,b]:

x_i = (1/2)(a + b) + (1/2)(b − a) cos((2i+1)π/(2n+2)),    i = 0,1,...,n.

See slides for examples.

Theorem. If P_n(x) interpolates f(x) at x_i ∈ [a,b], i = 0,1,...,n, then

f(x) − P_n(x) = f[x_0, x_1, ..., x_n, x] Π_{i=0}^n (x − x_i),    x ≠ x_i.

Proof. Let a ≠ x_i, and let q(x) be the polynomial that interpolates f(x) at x_0, x_1, ..., x_n, a. The Newton form gives

q(x) = P_n(x) + f[x_0, x_1, ..., x_n, a] Π_{i=0}^n (x − x_i).

Since q(a) = f(a), we get

f(a) = q(a) = P_n(a) + f[x_0, x_1, ..., x_n, a] Π_{i=0}^n (a − x_i).

Switching a to x, we prove the Theorem.
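The comparison between uniform and Chebyshev nodes for f(x) = sin(πx) on [−1,1] can be reproduced; a sketch, where the helper name and the sampling grid used to measure the max error are my own choices:

```python
import math

def interp_max_error(nodes, f, m=2001):
    """Approximate max |f(x) - P_n(x)| on [-1,1], with P_n evaluated
    in Lagrange form on the given nodes, sampled on an m-point grid."""
    ys = [f(t) for t in nodes]
    n = len(nodes)
    def p(x):
        s = 0.0
        for i in range(n):
            li = 1.0
            for j in range(n):
                if j != i:
                    li *= (x - nodes[j]) / (nodes[i] - nodes[j])
            s += li * ys[i]
        return s
    pts = [-1 + 2 * k / (m - 1) for k in range(m)]
    return max(abs(f(t) - p(t)) for t in pts)

f = lambda t: math.sin(math.pi * t)
for n in (4, 8, 12):
    uni = [-1 + 2 * i / n for i in range(n + 1)]
    che = [math.cos(i * math.pi / n) for i in range(n + 1)]  # type I nodes
    print(n, interp_max_error(uni, f), interp_max_error(che, f))
# The Chebyshev-node errors come out markedly smaller than the uniform-node errors.
```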

As a consequence, we have:

f[x_0, x_1, ..., x_n] = (1/n!) f^{(n)}(ξ),    ξ ∈ [a,b].

Proof. Let P_{n−1}(x) interpolate f(x) at x_0, ..., x_{n−1}. The error formula gives

f(x_n) − P_{n−1}(x_n) = (1/n!) f^{(n)}(ξ) Π_{i=0}^{n−1} (x_n − x_i),    ξ ∈ (a,b).

From above we know

f(x_n) − P_{n−1}(x_n) = f[x_0, ..., x_n] Π_{i=0}^{n−1} (x_n − x_i).

Comparing the right-hand sides of these two equations, we get the result.

Observation: Newton's divided differences are related to derivatives:

n = 1:  f[x_0, x_1] = f'(ξ),    ξ ∈ (x_0, x_1).
n = 2:  f[x_0, x_1, x_2] = (1/2) f''(ξ). With x_0 = x−h, x_1 = x, x_2 = x+h:

f[x_0, x_1, x_2] = (1/(2h^2)) [f(x+h) − 2f(x) + f(x−h)] = (1/2) f''(ξ),    ξ ∈ [x−h, x+h].

2.5 Convergence for polynomial interpolation

Here we briefly discuss the convergence issue. The main question: as n → +∞, does the polynomial P_n(x) converge to the function f(x)? Convergence may be understood in different ways. Let e(x) = f(x) − P_n(x) be the error function. Some notions of convergence:

uniform convergence:  lim_{n→+∞} max_{a≤x≤b} |e(x)| = 0
L^1 convergence:      lim_{n→+∞} ∫_a^b |e(x)| dx = 0
L^2 convergence:      lim_{n→+∞} ∫_a^b |e(x)|^2 dx = 0

For uniform nodes, it is known that for some functions f(x) the error grows unbounded as n → +∞. We have observed this in our simulations, and it is very bad news.

It is possible to restore convergence: for each function f(x), there is a way of designing a sequence of nodes {x_i}, i = 0,...,n, such that e(x) → 0 as n → +∞. But the sequence is different for each function, and with the wrong sequence there is no convergence. This is not practical!
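The divergence for uniform nodes is classically demonstrated with Runge's function f(x) = 1/(1 + 25x^2); this particular example is not spelled out in the notes above, but it is the standard illustration of the phenomenon:

```python
import math

def max_interp_error(nodes, f, m=1001):
    """Approximate max |f(x) - P_n(x)| on [-1,1], Lagrange form on the nodes
    (helper name is mine)."""
    ys = [f(t) for t in nodes]
    n = len(nodes)
    def p(x):
        s = 0.0
        for i in range(n):
            li = 1.0
            for j in range(n):
                if j != i:
                    li *= (x - nodes[j]) / (nodes[i] - nodes[j])
            s += li * ys[i]
        return s
    pts = [-1 + 2 * k / (m - 1) for k in range(m)]
    return max(abs(f(t) - p(t)) for t in pts)

f = lambda t: 1.0 / (1.0 + 25.0 * t * t)   # Runge's function
for n in (4, 8, 12, 16):
    uni = [-1 + 2 * i / n for i in range(n + 1)]
    print(n, max_interp_error(uni, f))
# The max error GROWS with n: the oscillations near the endpoints blow up.
```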

Chapter 3 Piecewise polynomial interpolation. Splines

3.1 Introduction

Usage:

- visualization of discrete data;
- graphic design (e.g. VW car design).

Requirements:

- interpolation;
- a certain degree of smoothness.

Disadvantages of polynomial interpolation with P_n(x):

- P_n is n-times differentiable; we do not need such high smoothness;
- big error in certain intervals (especially near the ends);
- no convergence result in general;
- heavy to compute for large n.

Suggestion: use piecewise polynomial interpolation.

Problem setting: Given a set of data

x : t_0  t_1  ⋯  t_n
y : y_0  y_1  ⋯  y_n

find a function S(x) which interpolates the points (t_i, y_i), i = 0,...,n. The points t_0 < t_1 < ⋯ < t_n are called knots. Note that they need to be ordered.

S(x) consists of piecewise polynomials:

S(x) = S_0(x),        t_0 ≤ x ≤ t_1
       S_1(x),        t_1 ≤ x ≤ t_2
        ⋮
       S_{n−1}(x),    t_{n−1} ≤ x ≤ t_n

S(x) is called a spline of degree k if:

- each S_i(x) is a polynomial of degree k;
- S(x) is (k−1) times continuously differentiable, i.e., for i = 1,2,...,n−1 we have

  S_{i−1}(t_i) = S_i(t_i),
  S'_{i−1}(t_i) = S'_i(t_i),
    ⋮
  S^{(k−1)}_{i−1}(t_i) = S^{(k−1)}_i(t_i).

Commonly used ones:

- degree 1: linear spline (simplest);
- degree 2: quadratic spline (less popular);
- degree 3: cubic spline (most used).

If you are given a function which is piecewise polynomial, can you check whether it is a spline of a certain degree? All you need to do is check whether all the conditions are satisfied.

Example 1. Determine whether this function is a first-degree spline function:

S(x) = x,         x ∈ [−1,0]
       1 − x,     x ∈ (0,1)
       2x − 2,    x ∈ [1,2]

Answer. Check all the properties of a linear spline.

- Linear polynomial on each piece: OK.
- Is S(x) continuous at the inner knots? At x = 0, S(x) is discontinuous, because from the left we get 0 and from the right we get 1.

Therefore this is NOT a linear spline.

Example 2. Determine whether the following function is a quadratic spline:

S(x) = x^2,       x ∈ [−10,0]
       −x^2,      x ∈ (0,1)
       1 − 2x,    x ≥ 1

Answer. Let's label the pieces: Q_0(x) = x^2, Q_1(x) = −x^2, Q_2(x) = 1 − 2x. We now check all the conditions, i.e., the continuity of Q and Q' at the inner knots 0 and 1:

Q_0(0) = 0,    Q_1(0) = 0:     OK
Q_1(1) = −1,   Q_2(1) = −1:    OK
Q'_0(0) = 0,   Q'_1(0) = 0:    OK
Q'_1(1) = −2,  Q'_2(1) = −2:   OK

It passes all the tests, so it is a quadratic spline.

3.2 Linear Spline

We consider splines of degree 1: piecewise linear interpolation, i.e., a straight line between each pair of neighboring points. See Figure 3.1.

[Figure 3.1: Linear splines.]

So

S_i(x) = a_i + b_i x,    i = 0,1,...,n−1.

Requirements:

S_0(t_0) = y_0
S_{i−1}(t_i) = S_i(t_i) = y_i,    i = 1,2,...,n−1
S_{n−1}(t_n) = y_n.

These are easy to satisfy: write the equation of the line through the two points (t_i, y_i) and (t_{i+1}, y_{i+1}):

S_i(x) = y_i + ((y_{i+1} − y_i)/(t_{i+1} − t_i)) (x − t_i),    i = 0,1,...,n−1.

Accuracy Theorem for linear splines: Assume t_0 < t_1 < t_2 < ⋯ < t_n, and let

h = max_i (t_{i+1} − t_i).

Let f(x) be a given function, and let S(x) be the linear spline that interpolates f(x), i.e.,

S(t_i) = f(t_i),    i = 0,1,...,n.

Then we have the following, for x ∈ [t_0, t_n]:

(1) If f' exists and is continuous, then |f(x) − S(x)| ≤ (1/2) h max_x |f'(x)|.
(2) If f'' exists and is continuous, then |f(x) − S(x)| ≤ (1/8) h^2 max_x |f''(x)|.

To minimize the error, it is clear that one should add more knots where the function has a large first or second derivative.

3.3 Quadratic spline

This type of spline is not much used; the cubic spline is usually favored for its minimum-curvature property (see next section). Given a set of knots t_0, t_1, ..., t_n and the data y_0, y_1, ..., y_n, we seek a piecewise polynomial representation

Q(x) = Q_0(x),        t_0 ≤ x ≤ t_1
       Q_1(x),        t_1 ≤ x ≤ t_2
        ⋮
       Q_{n−1}(x),    t_{n−1} ≤ x ≤ t_n

where the Q_i(x) (i = 0,1,...,n−1) are quadratic polynomials; in general, Q_i(x) = a_i x^2 + b_i x + c_i. Total number of unknowns: 3n.

Conditions we impose on the Q_i:

Q_i(t_i) = y_i,  Q_i(t_{i+1}) = y_{i+1},    i = 0,1,...,n−1:   2n conditions;
Q'_{i−1}(t_i) = Q'_i(t_i),                  i = 1,2,...,n−1:   n−1 conditions.

Total number of conditions: 2n + (n−1) = 3n − 1. One extra condition can be imposed, for example Q'_0(t_0) = 0 or Q''_0(t_0) = 0, depending on the specific problem.

Construction of the Q_i: Since Q' is continuous, we set

z_i = Q'(t_i).

We do not know these z_i yet; they are the unknowns, to be computed later. Each Q_i must satisfy the conditions

Q_i(t_i) = y_i,  Q'_i(t_i) = z_i,  Q'_i(t_{i+1}) = z_{i+1},  Q_i(t_{i+1}) = y_{i+1}.    (3.1)

Using the first 3 conditions, we obtain the polynomials

Q_i(x) = ((z_{i+1} − z_i)/(2(t_{i+1} − t_i))) (x − t_i)^2 + z_i (x − t_i) + y_i,    0 ≤ i ≤ n−1.    (3.2)

It is easy to verify the first 3 conditions in (3.1). To find the values of the z_i, we now use the 4th condition in (3.1), which gives

z_{i+1} = −z_i + 2 (y_{i+1} − y_i)/(t_{i+1} − t_i),    0 ≤ i ≤ n−1.    (3.3)

Given z_0, all the z_i can now be constructed. We now summarize the algorithm:

- given z_0, compute the z_i using (3.3);
- compute the Q_i using (3.2).

3.4 Natural cubic spline

Given t_0 < t_1 < ⋯ < t_n, we define the cubic spline S(x) = S_i(x) for t_i ≤ x ≤ t_{i+1}. We require that S, S', S'' all be continuous. If in addition we require S''_0(t_0) = S''_{n−1}(t_n) = 0, then it is called the natural cubic spline. Write

S_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i,    i = 0,1,...,n−1.

Total number of unknowns: 4n.

Equations we have:

(1) S_i(t_i) = y_i,                         i = 0,...,n−1:   n equations
(2) S_i(t_{i+1}) = y_{i+1},                 i = 0,...,n−1:   n
(3) S'_i(t_{i+1}) = S'_{i+1}(t_{i+1}),      i = 0,...,n−2:   n−1
(4) S''_i(t_{i+1}) = S''_{i+1}(t_{i+1}),    i = 0,...,n−2:   n−1
(5) S''_0(t_0) = 0:   1
(6) S''_{n−1}(t_n) = 0:   1

Total: 4n.

How to compute the S_i(x)? We know: S_i is a polynomial of degree 3, S'_i of degree 2, and S''_i of degree 1. Procedure:

- start with the S''_i(x): they are all linear, so one can use the Lagrange form;
- integrate S''_i(x) twice to get S_i(x); this introduces 2 integration constants;
- determine these constants using (2) and (1); various tricks on the way...

Details: Define z_i as

z_i = S''(t_i),    i = 1,2,...,n−1,    z_0 = z_n = 0.

NB! These z_i are our unknowns. Introduce the notation h_i = t_{i+1} − t_i. The Lagrange form of S''_i is

S''_i(x) = (z_{i+1}/h_i)(x − t_i) − (z_i/h_i)(x − t_{i+1}).

Then

S'_i(x) = (z_{i+1}/(2h_i))(x − t_i)^2 − (z_i/(2h_i))(x − t_{i+1})^2 + C_i − D_i
S_i(x)  = (z_{i+1}/(6h_i))(x − t_i)^3 − (z_i/(6h_i))(x − t_{i+1})^3 + C_i(x − t_i) − D_i(x − t_{i+1}).

(You can check for yourself that these S'_i, S''_i are correct.)

Interpolating properties:

(1) S_i(t_i) = y_i gives

y_i = −(z_i/(6h_i))(−h_i)^3 − D_i(−h_i) = (1/6) z_i h_i^2 + D_i h_i,

so

D_i = y_i/h_i − (h_i/6) z_i.

(2) S_i(t_{i+1}) = y_{i+1} gives

  y_{i+1} = z_{i+1}/(6h_i) h_i^3 + C_i h_i,  so  C_i = y_{i+1}/h_i - (h_i/6) z_{i+1}.

We see that, once the z_i's are known, the (C_i, D_i)'s are known, and so S_i, S'_i are known:

  S_i(x) = z_{i+1}/(6h_i) (x - t_i)^3 - z_i/(6h_i) (x - t_{i+1})^3
           + (y_{i+1}/h_i - (h_i/6) z_{i+1}) (x - t_i)
           - (y_i/h_i - (h_i/6) z_i) (x - t_{i+1}),                    (3.4)

  S'_i(x) = z_{i+1}/(2h_i) (x - t_i)^2 - z_i/(2h_i) (x - t_{i+1})^2
            + (y_{i+1} - y_i)/h_i - (z_{i+1} - z_i) h_i / 6.

How to compute the z_i's? The last condition not used yet is the continuity of S'(x), i.e.,

  S'_{i-1}(t_i) = S'_i(t_i),  i = 1,2,...,n-1.

Writing b_i = (y_{i+1} - y_i)/h_i, we have

  S'_i(t_i) = -z_i/(2h_i) (-h_i)^2 + b_i - (z_{i+1} - z_i) h_i/6 = -(1/6) h_i z_{i+1} - (1/3) h_i z_i + b_i,
  S'_{i-1}(t_i) = (1/6) h_{i-1} z_{i-1} + (1/3) h_{i-1} z_i + b_{i-1}.

Setting them equal to each other, we get

  h_{i-1} z_{i-1} + 2(h_{i-1} + h_i) z_i + h_i z_{i+1} = 6(b_i - b_{i-1}),  i = 1,2,...,n-1,
  z_0 = z_n = 0.

In matrix-vector form:

  H z = b,   (3.5)

where

      | 2(h_0+h_1)   h_1                                           |
      | h_1          2(h_1+h_2)   h_2                              |
  H = |              h_2          2(h_2+h_3)   h_3                 |
      |                 ...           ...          ...             |
      |                 h_{n-3}   2(h_{n-3}+h_{n-2})   h_{n-2}     |
      |                           h_{n-2}   2(h_{n-2}+h_{n-1})     |

and

  z = (z_1, z_2, z_3, ..., z_{n-2}, z_{n-1})^T,
  b = (6(b_1 - b_0), 6(b_2 - b_1), 6(b_3 - b_2), ..., 6(b_{n-2} - b_{n-3}), 6(b_{n-1} - b_{n-2}))^T.

Here, H is a tridiagonal matrix, symmetric, and diagonally dominant (since 2(h_{i-1} + h_i) > h_{i-1} + h_i), which implies a unique solution for z. Summarizing the algorithm: Solve for the z_i from (3.5); compute S_i(x) using (3.4). See slides for Matlab codes and solution graphs.

Theorem on smoothness of cubic splines. If S is the natural cubic spline function that interpolates a twice-continuously differentiable function f at knots a = t_0 < t_1 < ... < t_n = b, then

  ∫_a^b [S''(x)]^2 dx <= ∫_a^b [f''(x)]^2 dx.

Note that (f'')^2 is related to the curvature of f. The cubic spline gives the least curvature, i.e., it is the smoothest interpolant, so it is the best choice.

Proof. Let

  g(x) = f(x) - S(x),  so  g(t_i) = 0, i = 0,1,...,n,

and f'' = S'' + g''. Then

  (f'')^2 = (S'')^2 + (g'')^2 + 2 S'' g'',

so

  ∫_a^b (f'')^2 dx = ∫_a^b (S'')^2 dx + ∫_a^b (g'')^2 dx + 2 ∫_a^b S'' g'' dx.

Claim that

  ∫_a^b S'' g'' dx = 0;

then this would imply

  ∫_a^b (f'')^2 dx >= ∫_a^b (S'')^2 dx,

and we are done. Proof of the claim: Using integration by parts,

  ∫_a^b S'' g'' dx = S'' g' |_a^b - ∫_a^b S''' g' dx.

Since S''(a) = S''(b) = 0 for the natural spline, the first term is 0. For the second term, note that S''' is piecewise constant; call c_i = S'''(x) for x in [t_i, t_{i+1}]. Then

  ∫_a^b S''' g' dx = Σ_{i=0}^{n-1} c_i ∫_{t_i}^{t_{i+1}} g'(x) dx = Σ_{i=0}^{n-1} c_i [g(t_{i+1}) - g(t_i)] = 0   (because g(t_i) = 0).
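The full natural-cubic-spline algorithm (solve the tridiagonal system (3.5) for the z_i, then evaluate with (3.4)) can be sketched as follows. This is a Python sketch rather than the Matlab used elsewhere in these notes; the function name is ours, and the tridiagonal solve is the standard forward-elimination/back-substitution (Thomas) procedure, which exploits the banded structure discussed in the linear-systems chapter.

```python
import bisect

def natural_cubic_spline(t, y):
    """Natural cubic spline: solve (3.5) for z_1..z_{n-1} (z_0 = z_n = 0),
    then evaluate a piece with formula (3.4)."""
    n = len(t) - 1
    h = [t[i + 1] - t[i] for i in range(n)]
    slope = [(y[i + 1] - y[i]) / h[i] for i in range(n)]   # the b_i's

    m = n - 1                                   # unknowns z_1 .. z_{n-1}
    diag = [2.0 * (h[k] + h[k + 1]) for k in range(m)]
    rhs = [6.0 * (slope[k + 1] - slope[k]) for k in range(m)]
    for k in range(1, m):                       # forward elimination (Thomas)
        w = h[k] / diag[k - 1]                  # H is symmetric: off-diagonals are h_k
        diag[k] -= w * h[k]
        rhs[k] -= w * rhs[k - 1]
    zi = [0.0] * m
    for k in range(m - 1, -1, -1):              # back substitution
        upper = h[k + 1] * zi[k + 1] if k < m - 1 else 0.0
        zi[k] = (rhs[k] - upper) / diag[k]
    z = [0.0] + zi + [0.0]                      # natural end conditions

    def S(x):
        i = min(max(bisect.bisect_right(t, x) - 1, 0), n - 1)
        return (z[i + 1] / (6 * h[i]) * (x - t[i]) ** 3
                - z[i] / (6 * h[i]) * (x - t[i + 1]) ** 3
                + (y[i + 1] / h[i] - h[i] * z[i + 1] / 6) * (x - t[i])
                - (y[i] / h[i] - h[i] * z[i] / 6) * (x - t[i + 1]))

    return S

# straight-line data: all z_i = 0, and S reproduces the line
S = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Because H is diagonally dominant, the elimination above needs no pivoting.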

Chapter 4 Numerical integration

4.1 Introduction

Problem: Given a function f(x) defined on an interval [a, b], we want to find an approximation to the integral

  I(f) = ∫_a^b f(x) dx.

Main idea: Cut up [a, b] into smaller sub-intervals; in each sub-interval, find a polynomial p_i(x) ≈ f(x); integrate p_i(x) on each sub-interval, and sum them up.

4.2 Trapezoid rule

The grid: We cut up [a, b] into n sub-intervals:

  x_0 = a,  x_i < x_{i+1},  x_n = b.

On the interval [x_i, x_{i+1}], approximate f(x) by a linear polynomial that interpolates the 2 end points, i.e.,

  p_i(x_i) = f(x_i),  p_i(x_{i+1}) = f(x_{i+1}).

See Figure 4.1. We see that on each sub-interval, the integral of p_i equals the area of a trapezoid:

  ∫_{x_i}^{x_{i+1}} p_i(x) dx = (1/2) (f(x_i) + f(x_{i+1})) (x_{i+1} - x_i).

Now, we use

  ∫_{x_i}^{x_{i+1}} f(x) dx ≈ ∫_{x_i}^{x_{i+1}} p_i(x) dx = (1/2) (f(x_i) + f(x_{i+1})) (x_{i+1} - x_i),

Figure 4.1: Trapezoid rule: straight line approximation in each sub-interval.

and we sum up over all the sub-intervals:

  ∫_a^b f(x) dx = Σ_{i=0}^{n-1} ∫_{x_i}^{x_{i+1}} f(x) dx
               ≈ Σ_{i=0}^{n-1} ∫_{x_i}^{x_{i+1}} p_i(x) dx
               = Σ_{i=0}^{n-1} (1/2) (f(x_i) + f(x_{i+1})) (x_{i+1} - x_i).

We now consider a uniform grid, and set

  h = (b - a)/n,  x_{i+1} - x_i = h.

In this case, we have

  ∫_a^b f(x) dx ≈ Σ_{i=0}^{n-1} (h/2) (f(x_i) + f(x_{i+1}))
    = (h/2) [(f(x_0) + f(x_1)) + (f(x_1) + f(x_2)) + ... + (f(x_{n-1}) + f(x_n))]
    = h [ (1/2) f(x_0) + Σ_{i=1}^{n-1} f(x_i) + (1/2) f(x_n) ] =: T(f;h),

so we can write ∫_a^b f(x) dx ≈ T(f;h).
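The bracketed formula for T(f;h) is a one-liner in code. As a small Python sketch (the notes' own samples below are in Matlab; the function name trapezoid is ours):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule T(f;h), h = (b-a)/n:
    h * [f(x_0)/2 + f(x_1) + ... + f(x_{n-1}) + f(x_n)/2]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):          # interior nodes get weight 1
        s += f(a + i * h)
    return h * s
```

Since the rule integrates the piecewise linear interpolant, it is exact for any linear f, whatever n is.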

Example 1: Let f(x) = x^2 + 1, and we want to compute I = ∫_{-1}^{1} f(x) dx by the trapezoid rule. If we take n = 10, we can set up the data (x_i = -1 + 0.2 i, f_i = f(x_i)):

  i    0     1     2     3     4     5     6     7     8     9     10
  x_i  -1.0  -0.8  -0.6  -0.4  -0.2  0.0   0.2   0.4   0.6   0.8   1.0
  f_i  2.00  1.64  1.36  1.16  1.04  1.00  1.04  1.16  1.36  1.64  2.00

Here h = 2/10 = 0.2. By the formula, we get

  T = h [ (f_0 + f_10)/2 + Σ_{i=1}^{9} f_i ] = 2.68.

Sample codes. Here are some possible ways to program the trapezoid rule in Matlab. Let a, b, n be given. The function f(x) is also defined, such that it takes a vector x and returns a vector of the same size as x. For example, f(x) = x^2 + sin(x) could be defined as:

function v=func(x)
  v=x.^2 + sin(x);
end

In the following code, the integral value is stored in the variable T:

h=(b-a)/n;
T = (func(a)+func(b))/2;
for i=1:n-1
  x = a+i*h;
  T = T + func(x);
end
T = T*h;

Or, one may directly use the Matlab vector function sum, and the code can be very short:

h=(b-a)/n;
x=[a+h:h:b-h];   % inner points
T = ((func(a)+func(b))/2 + sum(func(x)))*h;

Error estimates. We define the error

  E_T(f;h) = I(f) - T(f;h) = Σ_{i=0}^{n-1} ∫_{x_i}^{x_{i+1}} [f(x) - p_i(x)] dx = Σ_{i=0}^{n-1} E_{T,i}(f;h),

where

  E_{T,i}(f;h) = ∫_{x_i}^{x_{i+1}} [f(x) - p_i(x)] dx,  i = 0,1,...,n-1,

is the error on each sub-interval. We know from polynomial interpolation that

  f(x) - p_i(x) = (1/2) f''(ξ_i) (x - x_i)(x - x_{i+1}),  x_i < ξ_i < x_{i+1}.

Error estimate on each sub-interval:

  E_{T,i}(f;h) = (1/2) f''(ξ_i) ∫_{x_i}^{x_{i+1}} (x - x_i)(x - x_{i+1}) dx = -(1/12) h^3 f''(ξ_i).

(You may work out the details of the integral!) The total error is

  E_T(f;h) = Σ_{i=0}^{n-1} E_{T,i}(f;h) = -(1/12) h^3 Σ_{i=0}^{n-1} f''(ξ_i)
           = -(1/12) h^3 · n · [ (1/n) Σ_{i=0}^{n-1} f''(ξ_i) ],

where n = (b-a)/h and the average in the bracket equals f''(ξ) for some ξ in (a,b), which gives

  E_T(f;h) = -((b-a)/12) h^2 f''(ξ),  ξ in (a,b).

Error bound:

  |E_T(f;h)| <= ((b-a)/12) h^2 max_{x in (a,b)} |f''(x)|.

Example 2. Consider the function f(x) = e^x and the integral

  I(f) = ∫_0^2 e^x dx.

What is the minimum number of points to be used in the trapezoid rule to ensure an error <= 0.5 × 10^{-4}?

Answer. We have

  f'(x) = e^x,  f''(x) = e^x,  a = 0,  b = 2,  so  max_{x in (a,b)} |f''(x)| = e^2.

By the error bound, it is sufficient to require

  (1/6) h^2 e^2 <= 0.5 × 10^{-4}  ⇒  h <= 0.00637  ⇒  n = 2/h >= 313.9.

We need at least 314 points.

4.3 Simpson's rule

We now explore the possibility of using higher order polynomials. We cut up [a,b] into 2n equal sub-intervals:

  x_0 = a,  x_{2n} = b,  h = (b-a)/(2n),  x_{i+1} - x_i = h.

Consider the interval [x_{2i}, x_{2i+2}]. We will find a 2nd order polynomial that interpolates f(x) at the points

  x_{2i},  x_{2i+1},  x_{2i+2}.

See Figure 4.2. Note that in each sub-interval there is a point in the interior, namely x_{2i+1}.

Figure 4.2: Simpson's rule: quadratic polynomial approximation (thick line) in each sub-interval.

We now use the Lagrange form, and get

  p_i(x) = f(x_{2i}) (x - x_{2i+1})(x - x_{2i+2}) / [(x_{2i} - x_{2i+1})(x_{2i} - x_{2i+2})]
         + f(x_{2i+1}) (x - x_{2i})(x - x_{2i+2}) / [(x_{2i+1} - x_{2i})(x_{2i+1} - x_{2i+2})]
         + f(x_{2i+2}) (x - x_{2i})(x - x_{2i+1}) / [(x_{2i+2} - x_{2i})(x_{2i+2} - x_{2i+1})].

With uniform nodes, this becomes

  p_i(x) = 1/(2h^2) f(x_{2i}) (x - x_{2i+1})(x - x_{2i+2})
         - 1/h^2 f(x_{2i+1}) (x - x_{2i})(x - x_{2i+2})
         + 1/(2h^2) f(x_{2i+2}) (x - x_{2i})(x - x_{2i+1}).

We work out the integrals (try to fill in the details yourself!):

  ∫_{x_{2i}}^{x_{2i+2}} (x - x_{2i+1})(x - x_{2i+2}) dx = (2/3) h^3,
  ∫_{x_{2i}}^{x_{2i+2}} (x - x_{2i})(x - x_{2i+2}) dx = -(4/3) h^3,
  ∫_{x_{2i}}^{x_{2i+2}} (x - x_{2i})(x - x_{2i+1}) dx = (2/3) h^3.

Then

  ∫_{x_{2i}}^{x_{2i+2}} p_i(x) dx = 1/(2h^2) f(x_{2i}) · (2/3) h^3
                                  - 1/h^2 f(x_{2i+1}) · (-(4/3) h^3)
                                  + 1/(2h^2) f(x_{2i+2}) · (2/3) h^3,

so we have

  ∫_{x_{2i}}^{x_{2i+2}} p_i(x) dx = (h/3) [f(x_{2i}) + 4 f(x_{2i+1}) + f(x_{2i+2})].

We now sum them up:

  ∫_a^b f(x) dx ≈ S(f;h) = Σ_{i=0}^{n-1} ∫_{x_{2i}}^{x_{2i+2}} p_i(x) dx = (h/3) Σ_{i=0}^{n-1} [f(x_{2i}) + 4 f(x_{2i+1}) + f(x_{2i+2})].

Figure 4.3: Simpson's rule: adding the constants (1, 4, 2, 4, ..., 2, 4, 1) at each node.

See Figure 4.3 for the counting of coefficients on each node. We see that for x_0, x_{2n} we get 1, for odd indices we get 4, and for all remaining even indices we get 2. The algorithm looks like:

  S(f;h) = (h/3) [ f(x_0) + 4 Σ_{i=1}^{n} f(x_{2i-1}) + 2 Σ_{i=1}^{n-1} f(x_{2i}) + f(x_{2n}) ].
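The 1-4-2-...-4-1 weight pattern is easy to mirror in code. A Python sketch (the notes' samples are in Matlab; the name simpson is ours):

```python
def simpson(f, a, b, n):
    """Composite Simpson rule S(f;h) with 2n sub-intervals, h = (b-a)/(2n):
    (h/3) [f(x_0) + 4*(odd nodes) + 2*(interior even nodes) + f(x_2n)]."""
    h = (b - a) / (2 * n)
    s = f(a) + f(b)
    for i in range(1, 2 * n):
        s += (4.0 if i % 2 == 1 else 2.0) * f(a + i * h)
    return h * s / 3.0
```

A useful sanity check: Simpson's rule is exact not only for quadratics but also for cubics, thanks to symmetry.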

Example 3: Let f(x) = x^2 + 1, and we want to compute I = ∫_{-1}^{1} f(x) dx by Simpson's rule. If we take n = 5, which means we take 2n+1 = 11 points, we can set up the same data as in Example 1. Here h = 2/10 = 0.2. By the formula, we get

  S(f;0.2) = (h/3) [f_0 + 4(f_1 + f_3 + f_5 + f_7 + f_9) + 2(f_2 + f_4 + f_6 + f_8) + f_10] = 2.6667.

(This is somewhat smaller than the number we get with the trapezoid rule, and it is actually more accurate. Could you intuitively explain that for this particular example?)

Sample codes: Let a, b, n be given, and let the function func be defined. To find the integral with Simpson's rule, one could follow this algorithm:

h=(b-a)/2/n;
xodd=[a+h:2*h:b-h];        % x_i with odd indices
xeven=[a+2*h:2*h:b-2*h];   % x_i with even indices
SI=(h/3)*(func(a)+4*sum(func(xodd))+2*sum(func(xeven))+func(b));

Error estimate. One can prove that the basic error on each sub-interval is (see the proof below)

  E_{S,i}(f;h) = -(1/90) h^5 f^{(4)}(ξ_i),  ξ_i in (x_{2i}, x_{2i+2}).   (4.1)

Then the total error is

  E_S(f;h) = I(f) - S(f;h) = -(1/90) h^5 Σ_{i=0}^{n-1} f^{(4)}(ξ_i) = -((b-a)/180) h^4 f^{(4)}(ξ),  ξ in (a,b),

using n = (b-a)/(2h). This gives us the error bound

  |E_S(f;h)| <= ((b-a)/180) h^4 max_{x in (a,b)} |f^{(4)}(x)|.
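The two error bounds can be turned around to pick the grid size for a prescribed tolerance, as in the worked examples. A Python sketch (helper names ours; each formula is just the corresponding bound solved for n):

```python
import math

def min_n_trapezoid(a, b, M2, tol):
    """Smallest n with (b-a)/12 * h^2 * M2 <= tol, where h = (b-a)/n
    and M2 bounds |f''| on (a,b)."""
    return math.ceil(math.sqrt((b - a) ** 3 * M2 / (12.0 * tol)))

def min_n_simpson(a, b, M4, tol):
    """Smallest n with (b-a)/180 * h^4 * M4 <= tol, where h = (b-a)/(2n)
    and M4 bounds |f^(4)| on (a,b)."""
    h = (180.0 * tol / ((b - a) * M4)) ** 0.25
    return math.ceil((b - a) / (2.0 * h))

# f(x) = e^x on [0,2] with tolerance 0.5e-4, as in Example 2:
# the trapezoid bound gives n = 314 sub-intervals, Simpson's gives n = 7 panels
n_trap = min_n_trapezoid(0.0, 2.0, math.exp(2.0), 0.5e-4)
n_simp = min_n_simpson(0.0, 2.0, math.exp(2.0), 0.5e-4)
```

Note these counts come from the worst-case bounds; the actual error is usually smaller.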

Proof of (4.1): (optional) For notational simplicity, let's consider an interval [a, a+2h], and approximate the integral by

  ∫_a^{a+2h} f(x) dx ≈ (h/3) [f(a) + 4 f(a+h) + f(a+2h)].

Use Taylor expansions for f:

  f(a+h) = f(a) + h f'(a) + (h^2/2) f''(a) + (h^3/6) f'''(a) + (h^4/24) f^{(4)}(a) + ...
  f(a+2h) = f(a) + 2h f'(a) + 2h^2 f''(a) + (4h^3/3) f'''(a) + (2h^4/3) f^{(4)}(a) + ...

We get

  f(a) + 4 f(a+h) + f(a+2h) = 6 f(a) + 6h f'(a) + 4h^2 f''(a) + 2h^3 f'''(a) + (5/6) h^4 f^{(4)}(a) + ...,

therefore

  (h/3) [f(a) + 4 f(a+h) + f(a+2h)]
    = 2h f(a) + 2h^2 f'(a) + (4/3) h^3 f''(a) + (2/3) h^4 f'''(a) + (5/18) h^5 f^{(4)}(a) + ...

Now we go back to the original integral. We observe that

  ∫_a^{a+2h} f(x) dx = ∫_0^{2h} f(a+s) ds.

Using the Taylor expansion

  f(a+s) = f(a) + s f'(a) + (s^2/2) f''(a) + (s^3/6) f'''(a) + (s^4/24) f^{(4)}(a) + ...

and integrating term by term, we get

  ∫_0^{2h} f(a+s) ds = 2h f(a) + f'(a) ∫_0^{2h} s ds + f''(a) ∫_0^{2h} (s^2/2) ds + f'''(a) ∫_0^{2h} (s^3/6) ds + f^{(4)}(a) ∫_0^{2h} (s^4/24) ds + ...
    = 2h f(a) + 2h^2 f'(a) + (4/3) h^3 f''(a) + (2/3) h^4 f'''(a) + (4/15) h^5 f^{(4)}(a) + ...

Comparing this with Simpson's rule, we get the error

  E_{S,i} = [(4/15) - (5/18)] h^5 f^{(4)}(a) + ... = -(1/90) h^5 f^{(4)}(a) + ... = -(1/90) h^5 f^{(4)}(ξ),  ξ in (a, a+2h),

which proves (4.1).
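The -(1/90) h^5 f^{(4)} prediction can be checked numerically. In this Python sketch (our own check, not part of the notes) we take one panel of f(x) = e^x on [0, 2h], where f^{(4)}(0) = 1, so the error should be very close to -h^5/90:

```python
import math

def simpson_panel(f, a, h):
    """One Simpson panel on [a, a+2h]."""
    return h / 3.0 * (f(a) + 4.0 * f(a + h) + f(a + 2.0 * h))

h = 0.01
exact = math.exp(2.0 * h) - 1.0                 # ∫_0^{2h} e^x dx
err = exact - simpson_panel(math.exp, 0.0, h)
# err should be negative and agree with -h^5/90 up to higher-order terms
```

Halving h should shrink this panel error by roughly a factor of 2^5 = 32, which is a quick way to confirm the h^5 rate experimentally.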

Example 4. With f(x) = e^x on [0,2], we now use Simpson's rule. In order to achieve an error <= 0.5 × 10^{-4}, how many points must we take?

Answer. We have

  |E_S(f;h)| <= (1/90) h^4 e^2 <= 0.5 × 10^{-4}  ⇒  h^4 <= 45 × 10^{-4} / e^2  ⇒  h <= 0.157  ⇒  n = (b-a)/(2h) >= 6.4.

We need n = 7, i.e., at least 2n+1 = 15 points. Recall: with the trapezoid rule, we need at least 314 points. Simpson's rule uses much fewer points.

4.4 Recursive trapezoid rule

These are also called composite schemes. Divide [a,b] into 2^n equal sub-intervals (see Figure 4.4), so

  h_n = (b-a)/2^n,  h_{n+1} = (1/2) h_n.

Figure 4.4: Recursive division of intervals, first few levels (n = 1, 2, ..., 7).

Then

  T(f;h_n) = h_n [ (1/2) f(a) + (1/2) f(b) + Σ_{i=1}^{2^n - 1} f(a + i h_n) ],
  T(f;h_{n+1}) = h_{n+1} [ (1/2) f(a) + (1/2) f(b) + Σ_{i=1}^{2^{n+1} - 1} f(a + i h_{n+1}) ].

We can re-arrange the terms in T(f;h_{n+1}): the even-indexed points of level n+1 are exactly the points of level n, and the odd-indexed points are new, so

  T(f;h_{n+1}) = h_{n+1} [ (1/2) f(a) + (1/2) f(b) + Σ_{i=1}^{2^n - 1} f(a + i h_n) + Σ_{j=0}^{2^n - 1} f(a + (2j+1) h_{n+1}) ]
               = (1/2) T(f;h_n) + h_{n+1} Σ_{j=0}^{2^n - 1} f(a + (2j+1) h_{n+1}).
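The last recursion is all one needs to implement: each refinement halves the old sum and adds f only at the new odd-indexed points. A Python sketch (function name ours):

```python
def refine(f, a, b, T_n, n):
    """Recursive trapezoid step: given T(f; h_n) with h_n = (b-a)/2^n,
    return T(f; h_{n+1}), evaluating f only at the 2^n new (odd) points."""
    h_next = (b - a) / 2.0 ** (n + 1)
    new = sum(f(a + (2 * j + 1) * h_next) for j in range(2 ** n))
    return 0.5 * T_n + h_next * new

# example: f(x) = x^3 + x on [0,1]
f = lambda x: x ** 3 + x
T0 = 0.5 * (f(0.0) + f(1.0))      # level 0: single trapezoid, h_0 = 1
T1 = refine(f, 0.0, 1.0, T0, 0)   # level 1: adds f(0.5)
T2 = refine(f, 0.0, 1.0, T1, 1)   # level 2: adds f(0.25), f(0.75)
```

Each level costs only 2^n new function evaluations, exactly the efficiency advantage described below.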

Advantages:

1. One can keep the computation from a level n. If this turns out to be not accurate enough, then add one more level to get a better approximation: flexibility.

2. This formula allows us to compute a sequence of approximations to a definite integral using the trapezoid rule without re-evaluating the integrand at points where it has already been evaluated: efficiency.

4.5 Romberg Algorithm

Assume now we have generated a sequence of approximations, say by the trapezoid rule, with different values of h; call them T(f;h), T(f;h/2), T(f;h/4), ... One can combine these numbers in particular ways to get much higher order approximations. The particular form of the eventual algorithm depends on the error formula.

Consider the trapezoid rule. If f^{(n)} exists and is bounded, then one can prove that the error satisfies the Euler-Maclaurin formula

  E(f;h) = I(f) - T(f;h) = a_2 h^2 + a_4 h^4 + a_6 h^6 + ... + a_n h^n + ...

Here the a_k depend on the derivatives f^{(k)}. Note that we only have the even power terms of h in the error! Due to the symmetry of the method, all terms with h^k where k is odd cancel out in the error formula. When we halve the grid size h, the error formula becomes

  E(f;h/2) = I(f) - T(f;h/2) = a_2 (h/2)^2 + a_4 (h/2)^4 + a_6 (h/2)^6 + ...

We have

  (1) I(f) = T(f;h) + a_2 h^2 + a_4 h^4 + a_6 h^6 + ...
  (2) I(f) = T(f;h/2) + a_2 (h/2)^2 + a_4 (h/2)^4 + a_6 (h/2)^6 + ...

The goal is to use the 2 approximations T(f;h) and T(f;h/2) to get one that is more accurate, i.e., we wish to cancel the leading error term, the one with h^2. Multiplying (2) by 4 and subtracting (1), we get

  3 I(f) = 4 T(f;h/2) - T(f;h) + ã_4 h^4 + ã_6 h^6 + ...,

so

  I(f) = (4/3) T(f;h/2) - (1/3) T(f;h) + ã_4 h^4 + ã_6 h^6 + ... =: U(h) + ã_4 h^4 + ã_6 h^6 + ...

U(h) is of 4th order accuracy -- much better than T(f;h). We can also write

  U(h) = T(f;h/2) + [T(f;h/2) - T(f;h)] / 3.

This idea is called Richardson extrapolation. We could then compute U(h/2), U(h/4), U(h/8), ..., and go to the next level:

  (3) I(f) = U(h) + ã_4 h^4 + ã_6 h^6 + ...
  (4) I(f) = U(h/2) + ã_4 (h/2)^4 + ã_6 (h/2)^6 + ...

To cancel the term with h^4, we compute 2^4 · (4) - (3):

  (2^4 - 1) I(f) = 2^4 U(h/2) - U(h) + â_6 h^6 + ...

Let

  V(h) = [2^4 U(h/2) - U(h)] / (2^4 - 1) = U(h/2) + [U(h/2) - U(h)] / (2^4 - 1).

Then

  I(f) = V(h) + â_6 h^6 + ...,

so V(h) is even better than U(h). One can keep doing this for several layers, until the desired accuracy is reached. This gives the Romberg Algorithm: set H = b - a, and define

  R(0,0) = T(f;H) = (H/2)(f(a) + f(b)),
  R(1,0) = T(f;H/2),
  R(2,0) = T(f;H/2^2),
  ...
  R(n,0) = T(f;H/2^n).

Here the R(n,0)'s are computed by the recursive trapezoid formula. Romberg triangle: see Figure 4.5.

  R(0,0)
  R(1,0)  R(1,1)
  R(2,0)  R(2,1)  R(2,2)
  R(3,0)  R(3,1)  R(3,2)  ...
  R(n,0)  R(n,1)  R(n,2)  ...  R(n,n)

Figure 4.5: Romberg triangle.
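Putting the recursive trapezoid rule and Richardson extrapolation together fills in the triangle. In this Python sketch (function name ours) each row applies the update R(n,m) = R(n,m-1) + [R(n,m-1) - R(n-1,m-1)] / (4^m - 1), which is exactly the U and V construction above (m = 1 gives U, m = 2 gives V):

```python
def romberg(f, a, b, levels):
    """Build the Romberg triangle as a list of rows; R[n][0] comes from the
    recursive trapezoid rule, the rest from Richardson extrapolation."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]       # R(0,0) = T(f;H)
    for n in range(1, levels + 1):
        h = (b - a) / 2.0 ** n
        # recursive trapezoid: halve the old value, add the new odd points
        T = 0.5 * R[n - 1][0] + h * sum(f(a + (2 * j + 1) * h)
                                        for j in range(2 ** (n - 1)))
        row = [T]
        for m in range(1, n + 1):               # Richardson extrapolation
            row.append(row[m - 1] + (row[m - 1] - R[n - 1][m - 1]) / (4.0 ** m - 1.0))
        R.append(row)
    return R

R = romberg(lambda x: x ** 3, 0.0, 1.0, 2)
# R[n][1] is Simpson's rule, which is exact for cubics: ∫_0^1 x^3 dx = 1/4
```

The diagonal entries R(n,n) are the highest-order approximations; in practice one stops when two successive diagonal entries agree to the desired accuracy.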

Introduction to Numerical Analysis

Introduction to Numerical Analysis Introduction to Numerical Analysis S. Baskar and S. Sivaji Ganesh Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai 400 076. Introduction to Numerical Analysis Lecture Notes

More information

Jim Lambers MAT 460/560 Fall Semester Practice Final Exam

Jim Lambers MAT 460/560 Fall Semester Practice Final Exam Jim Lambers MAT 460/560 Fall Semester 2009-10 Practice Final Exam 1. Let f(x) = sin 2x + cos 2x. (a) Write down the 2nd Taylor polynomial P 2 (x) of f(x) centered around x 0 = 0. (b) Write down the corresponding

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Introductory Numerical Analysis

Introductory Numerical Analysis Introductory Numerical Analysis Lecture Notes December 16, 017 Contents 1 Introduction to 1 11 Floating Point Numbers 1 1 Computational Errors 13 Algorithm 3 14 Calculus Review 3 Root Finding 5 1 Bisection

More information

Chapter 4: Interpolation and Approximation. October 28, 2005

Chapter 4: Interpolation and Approximation. October 28, 2005 Chapter 4: Interpolation and Approximation October 28, 2005 Outline 1 2.4 Linear Interpolation 2 4.1 Lagrange Interpolation 3 4.2 Newton Interpolation and Divided Differences 4 4.3 Interpolation Error

More information

We consider the problem of finding a polynomial that interpolates a given set of values:

We consider the problem of finding a polynomial that interpolates a given set of values: Chapter 5 Interpolation 5. Polynomial Interpolation We consider the problem of finding a polynomial that interpolates a given set of values: x x 0 x... x n y y 0 y... y n where the x i are all distinct.

More information

Applied Numerical Analysis (AE2220-I) R. Klees and R.P. Dwight

Applied Numerical Analysis (AE2220-I) R. Klees and R.P. Dwight Applied Numerical Analysis (AE0-I) R. Klees and R.P. Dwight February 018 Contents 1 Preliminaries: Motivation, Computer arithmetic, Taylor series 1 1.1 Numerical Analysis Motivation..........................

More information

Homework and Computer Problems for Math*2130 (W17).

Homework and Computer Problems for Math*2130 (W17). Homework and Computer Problems for Math*2130 (W17). MARCUS R. GARVIE 1 December 21, 2016 1 Department of Mathematics & Statistics, University of Guelph NOTES: These questions are a bare minimum. You should

More information

MAT 460: Numerical Analysis I. James V. Lambers

MAT 460: Numerical Analysis I. James V. Lambers MAT 460: Numerical Analysis I James V. Lambers January 31, 2013 2 Contents 1 Mathematical Preliminaries and Error Analysis 7 1.1 Introduction............................ 7 1.1.1 Error Analysis......................

More information

Integration, differentiation, and root finding. Phys 420/580 Lecture 7

Integration, differentiation, and root finding. Phys 420/580 Lecture 7 Integration, differentiation, and root finding Phys 420/580 Lecture 7 Numerical integration Compute an approximation to the definite integral I = b Find area under the curve in the interval Trapezoid Rule:

More information

TABLE OF CONTENTS INTRODUCTION, APPROXIMATION & ERRORS 1. Chapter Introduction to numerical methods 1 Multiple-choice test 7 Problem set 9

TABLE OF CONTENTS INTRODUCTION, APPROXIMATION & ERRORS 1. Chapter Introduction to numerical methods 1 Multiple-choice test 7 Problem set 9 TABLE OF CONTENTS INTRODUCTION, APPROXIMATION & ERRORS 1 Chapter 01.01 Introduction to numerical methods 1 Multiple-choice test 7 Problem set 9 Chapter 01.02 Measuring errors 11 True error 11 Relative

More information

CS 257: Numerical Methods

CS 257: Numerical Methods CS 57: Numerical Methods Final Exam Study Guide Version 1.00 Created by Charles Feng http://www.fenguin.net CS 57: Numerical Methods Final Exam Study Guide 1 Contents 1 Introductory Matter 3 1.1 Calculus

More information

Midterm Review. Igor Yanovsky (Math 151A TA)

Midterm Review. Igor Yanovsky (Math 151A TA) Midterm Review Igor Yanovsky (Math 5A TA) Root-Finding Methods Rootfinding methods are designed to find a zero of a function f, that is, to find a value of x such that f(x) =0 Bisection Method To apply

More information

Numerical Methods. Aaron Naiman Jerusalem College of Technology naiman

Numerical Methods. Aaron Naiman Jerusalem College of Technology   naiman Numerical Methods Aaron Naiman Jerusalem College of Technology naiman@math.jct.ac.il http://math.jct.ac.il/ naiman based on: Numerical Mathematics and Computing by Cheney & Kincaid, c 1994 Brooks/Cole

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Numerical Methods. Aaron Naiman Jerusalem College of Technology naiman

Numerical Methods. Aaron Naiman Jerusalem College of Technology  naiman Numerical Methods Aaron Naiman Jerusalem College of Technology naiman@jct.ac.il http://jct.ac.il/ naiman based on: Numerical Mathematics and Computing by Cheney & Kincaid, c 1994 Brooks/Cole Publishing

More information

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by 1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How

More information

COURSE Numerical integration of functions (continuation) 3.3. The Romberg s iterative generation method

COURSE Numerical integration of functions (continuation) 3.3. The Romberg s iterative generation method COURSE 7 3. Numerical integration of functions (continuation) 3.3. The Romberg s iterative generation method The presence of derivatives in the remainder difficulties in applicability to practical problems

More information

Scientific Computing

Scientific Computing 2301678 Scientific Computing Chapter 2 Interpolation and Approximation Paisan Nakmahachalasint Paisan.N@chula.ac.th Chapter 2 Interpolation and Approximation p. 1/66 Contents 1. Polynomial interpolation

More information

5. Hand in the entire exam booklet and your computer score sheet.

5. Hand in the entire exam booklet and your computer score sheet. WINTER 2016 MATH*2130 Final Exam Last name: (PRINT) First name: Student #: Instructor: M. R. Garvie 19 April, 2016 INSTRUCTIONS: 1. This is a closed book examination, but a calculator is allowed. The test

More information

An Introduction to Numerical Analysis. James Brannick. The Pennsylvania State University

An Introduction to Numerical Analysis. James Brannick. The Pennsylvania State University An Introduction to Numerical Analysis James Brannick The Pennsylvania State University Contents Chapter 1. Introduction 5 Chapter 2. Computer arithmetic and Error Analysis 7 Chapter 3. Approximation and

More information

Mathematics for Engineers. Numerical mathematics

Mathematics for Engineers. Numerical mathematics Mathematics for Engineers Numerical mathematics Integers Determine the largest representable integer with the intmax command. intmax ans = int32 2147483647 2147483647+1 ans = 2.1475e+09 Remark The set

More information

CHAPTER 4. Interpolation

CHAPTER 4. Interpolation CHAPTER 4 Interpolation 4.1. Introduction We will cover sections 4.1 through 4.12 in the book. Read section 4.1 in the book on your own. The basic problem of one-dimensional interpolation is this: Given

More information

Interpolation. Chapter Interpolation. 7.2 Existence, Uniqueness and conditioning

Interpolation. Chapter Interpolation. 7.2 Existence, Uniqueness and conditioning 76 Chapter 7 Interpolation 7.1 Interpolation Definition 7.1.1. Interpolation of a given function f defined on an interval [a,b] by a polynomial p: Given a set of specified points {(t i,y i } n with {t

More information

Notes on floating point number, numerical computations and pitfalls

Notes on floating point number, numerical computations and pitfalls Notes on floating point number, numerical computations and pitfalls November 6, 212 1 Floating point numbers An n-digit floating point number in base β has the form x = ±(.d 1 d 2 d n ) β β e where.d 1

More information

MA2501 Numerical Methods Spring 2015

MA2501 Numerical Methods Spring 2015 Norwegian University of Science and Technology Department of Mathematics MA5 Numerical Methods Spring 5 Solutions to exercise set 9 Find approximate values of the following integrals using the adaptive

More information

AIMS Exercise Set # 1

AIMS Exercise Set # 1 AIMS Exercise Set #. Determine the form of the single precision floating point arithmetic used in the computers at AIMS. What is the largest number that can be accurately represented? What is the smallest

More information

INTRODUCTION TO COMPUTATIONAL MATHEMATICS

INTRODUCTION TO COMPUTATIONAL MATHEMATICS INTRODUCTION TO COMPUTATIONAL MATHEMATICS Course Notes for CM 271 / AMATH 341 / CS 371 Fall 2007 Instructor: Prof. Justin Wan School of Computer Science University of Waterloo Course notes by Prof. Hans

More information

Review. Numerical Methods Lecture 22. Prof. Jinbo Bi CSE, UConn

Review. Numerical Methods Lecture 22. Prof. Jinbo Bi CSE, UConn Review Taylor Series and Error Analysis Roots of Equations Linear Algebraic Equations Optimization Numerical Differentiation and Integration Ordinary Differential Equations Partial Differential Equations

More information

Floating Point Number Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le

Floating Point Number Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le Floating Point Number Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview Real number system Examples Absolute and relative errors Floating point numbers Roundoff

More information

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation Outline Interpolation 1 Interpolation 2 3 Michael T. Heath Scientific Computing 2 / 56 Interpolation Motivation Choosing Interpolant Existence and Uniqueness Basic interpolation problem: for given data

More information

Additional exercises with Numerieke Analyse

Additional exercises with Numerieke Analyse Additional exercises with Numerieke Analyse March 10, 017 1. (a) Given different points x 0, x 1, x [a, b] and scalars y 0, y 1, y, z 1, show that there exists at most one polynomial p P 3 with p(x i )

More information

Physics 115/242 Romberg Integration

Physics 115/242 Romberg Integration Physics 5/242 Romberg Integration Peter Young In this handout we will see how, starting from the trapezium rule, we can obtain much more accurate values for the integral by repeatedly eliminating the leading

More information

Lecture Note 3: Interpolation and Polynomial Approximation. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 3: Interpolation and Polynomial Approximation. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 3: Interpolation and Polynomial Approximation Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 10, 2015 2 Contents 1.1 Introduction................................ 3 1.1.1

More information

Math Numerical Analysis Mid-Term Test Solutions

Math Numerical Analysis Mid-Term Test Solutions Math 400 - Numerical Analysis Mid-Term Test Solutions. Short Answers (a) A sufficient and necessary condition for the bisection method to find a root of f(x) on the interval [a,b] is f(a)f(b) < 0 or f(a)

More information

1 Solutions to selected problems

1 Solutions to selected problems Solutions to selected problems Section., #a,c,d. a. p x = n for i = n : 0 p x = xp x + i end b. z = x, y = x for i = : n y = y + x i z = zy end c. y = (t x ), p t = a for i = : n y = y(t x i ) p t = p

More information

Numerical Methods of Approximation

Numerical Methods of Approximation Contents 31 Numerical Methods of Approximation 31.1 Polynomial Approximations 2 31.2 Numerical Integration 28 31.3 Numerical Differentiation 58 31.4 Nonlinear Equations 67 Learning outcomes In this Workbook

More information

Numerical Analysis and Computing

Numerical Analysis and Computing Numerical Analysis and Computing Lecture Notes #02 Calculus Review; Computer Artihmetic and Finite Precision; and Convergence; Joe Mahaffy, mahaffy@math.sdsu.edu Department of Mathematics Dynamical Systems

More information

Lecture Note 3: Polynomial Interpolation. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 3: Polynomial Interpolation. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 3: Polynomial Interpolation Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 24, 2013 1.1 Introduction We first look at some examples. Lookup table for f(x) = 2 π x 0 e x2

More information

Preliminary Examination in Numerical Analysis

Preliminary Examination in Numerical Analysis Department of Applied Mathematics Preliminary Examination in Numerical Analysis August 7, 06, 0 am pm. Submit solutions to four (and no more) of the following six problems. Show all your work, and justify

More information

Infinite series, improper integrals, and Taylor series

Infinite series, improper integrals, and Taylor series Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions

More information

Virtual University of Pakistan

Virtual University of Pakistan Virtual University of Pakistan File Version v.0.0 Prepared For: Final Term Note: Use Table Of Content to view the Topics, In PDF(Portable Document Format) format, you can check Bookmarks menu Disclaimer:

More information

NUMERICAL METHODS. x n+1 = 2x n x 2 n. In particular: which of them gives faster convergence, and why? [Work to four decimal places.

NUMERICAL METHODS. x n+1 = 2x n x 2 n. In particular: which of them gives faster convergence, and why? [Work to four decimal places. NUMERICAL METHODS 1. Rearranging the equation x 3 =.5 gives the iterative formula x n+1 = g(x n ), where g(x) = (2x 2 ) 1. (a) Starting with x = 1, compute the x n up to n = 6, and describe what is happening.

More information

(0, 0), (1, ), (2, ), (3, ), (4, ), (5, ), (6, ).

(0, 0), (1, ), (2, ), (3, ), (4, ), (5, ), (6, ). 1 Interpolation: The method of constructing new data points within the range of a finite set of known data points That is if (x i, y i ), i = 1, N are known, with y i the dependent variable and x i [x

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Computer Representation of Numbers Counting numbers (unsigned integers) are the numbers 0,

More information

Numerical Methods for Ordinary Differential Equations

Numerical Methods for Ordinary Differential Equations Numerical Methods for Ordinary Differential Equations Answers of the exercises C Vuik, S van Veldhuizen and S van Loenhout 08 Delft University of Technology Faculty Electrical Engineering, Mathematics

More information

Numerical methods. Examples with solution

Numerical methods. Examples with solution Numerical methods Examples with solution CONTENTS Contents. Nonlinear Equations 3 The bisection method............................ 4 Newton s method.............................. 8. Linear Systems LU-factorization..............................

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

1 ERROR ANALYSIS IN COMPUTATION

1 ERROR ANALYSIS IN COMPUTATION 1 ERROR ANALYSIS IN COMPUTATION 1.2 Round-Off Errors & Computer Arithmetic (a) Computer Representation of Numbers Two types: integer mode (not used in MATLAB) floating-point mode x R ˆx F(β, t, l, u),

More information

Principles of Scientific Computing Local Analysis

Principles of Scientific Computing Local Analysis Principles of Scientific Computing Local Analysis David Bindel and Jonathan Goodman last revised January 2009, printed February 25, 2009 1 Among the most common computational tasks are differentiation,

More information

NUMERICAL MATHEMATICS AND COMPUTING

NUMERICAL MATHEMATICS AND COMPUTING NUMERICAL MATHEMATICS AND COMPUTING Fourth Edition Ward Cheney David Kincaid The University of Texas at Austin 9 Brooks/Cole Publishing Company I(T)P An International Thomson Publishing Company Pacific

More information

n 1 f n 1 c 1 n+1 = c 1 n $ c 1 n 1. After taking logs, this becomes

n 1 f n 1 c 1 n+1 = c 1 n $ c 1 n 1. After taking logs, this becomes Root finding: 1 a The points {x n+1, }, {x n, f n }, {x n 1, f n 1 } should be co-linear Say they lie on the line x + y = This gives the relations x n+1 + = x n +f n = x n 1 +f n 1 = Eliminating α and

More information

Lectures 9–10: Polynomial and piecewise polynomial interpolation

Let f be a function which is known only at the nodes x_1, x_2, …, x_n; i.e., all we know about f are its values y_j = f(x_j), j …

Unit IV: Numerical Integration and Differentiation

Plan of attack: numerical integration — quadrature, classical formulas for equally spaced nodes, improper integrals, Gaussian quadrature and orthogonal …

Interpolation Theory

Numerical Analysis, Massoud Malek. The concept of interpolation is to select a function P(x) from a given class of functions in such a way that the graph of y = P(x) passes through the …

Exam in TMA4215, December 7th 2012

Norwegian University of Science and Technology, Department of Mathematical Sciences. Contact during the exam: Elena Celledoni, tlf. 7359354, cell phone 48238584. Allowed …

Lecture 28: The Main Sources of Error

Truncation error is defined as the error caused directly by an approximation method. For instance, all numerical integration methods are approximations …

NUMERICAL ANALYSIS I

Martin Lotz, School of Mathematics, The University of Manchester, May 2016. Contents — Week 1: Computational Complexity; Accuracy; …

Introduction: CSE 541

Numerical methods: solving scientific/engineering problems using computers. Root finding (Chapter 3); polynomial interpolation (Chapter 4); differentiation (Chapter 4); integration (Chapters …

Number Systems III, MA1S1

Tristan McLoughlin, December 4, 2013. References: http://en.wikipedia.org/wiki/Binary_numeral_system, http://accu.org/index.php/articles/1558, http://www.binaryconvert.com, http://en.wikipedia.org/wiki/ASCII

Solution of Algebraic & Transcendental Equations

Contents: Introduction; Evaluation of Polynomials by Horner's Method; Methods of solving nonlinear equations; Bracketing Methods; Bisection …

BACHELOR OF COMPUTER APPLICATIONS (BCA) (Revised) Term-End Examination, December 2015. BCS-054: COMPUTER ORIENTED NUMERICAL TECHNIQUES

No. of Printed Pages: 5. Time: 3 hours. Maximum marks …

∫ 2t dt gives 1/y = t² + C, so y = 1/(t² + C), which implies C = 1 and the solution is y = 1/(1 + t²)

Lectures – Week 11: General First Order ODEs & Numerical Methods for IVPs. In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear …

Engg. Math. II (Unit IV): Numerical Analysis

Dr. Satish Shukla. Syllabus: Interpolation and Curve Fitting — Introduction to Interpolation; Calculus of Finite Differences; Finite Difference and Divided …

ROOT FINDING REVIEW — MICHELLE FENG

1. Root Finding Methods. 1.1 Bisection Method: (1) a very naive approach based on the Intermediate Value Theorem; (2) you need to be looking in an interval with only one …

1 Solutions to selected problems

Section …, #a, c, d. a. p = a_n; for i = n−1 : 0, p = x·p + a_i; end. b. z = x, y = x; for i = 2 : n, y = y + x_i, z = z·y; end. c. y = (t − x_1), p(t) = a_1; for i = 2 : n, y = y·(t − x_i), p(t) = p …

Chapter 5: Numerical Integration and Differentiation

PART I: Numerical Integration. Newton–Cotes integration formulas: the idea of Newton–Cotes formulas is to replace a complicated function or tabulated …

Scientific Computing: Numerical Integration

Aleksandar Donev, Courant Institute, NYU (donev@courant.nyu.edu). Course MATH-GA.2043 / CSCI-GA.2112, Fall 2015, Nov 5th, 2015.

9.4 Cubic spline interpolation

Instead of going to higher and higher order, there is another way of creating a smooth function that interpolates data points. A cubic spline is a piecewise continuous curve …

Outline: 1. Numerical Integration; 2. Numerical Differentiation; 3. Richardson Extrapolation

Michael T. Heath, Scientific Computing. Main ideas — quadrature based on polynomial interpolation: …

Scientific Computing: An Introductory Survey — Chapter 7: Interpolation

Prof. Michael T. Heath, Department of Computer Science, University of Illinois at Urbana-Champaign. Copyright © 2002. Reproduction permitted …

Computational Methods in Civil Engineering

Nir Krakauer, CUNY City College. City University of New York (CUNY), CUNY Academic Works, Open Educational Resources, City College of New York, 2018. How does access …

Math 411 Preliminaries

A list of preliminary vocabulary and concepts: Newton's method, Taylor series expansion (for single and multiple variables), eigenvalue, eigenvector, vector …


1 Lecture 8: Interpolating polynomials

1.1 Horner's method. Before turning to the main idea of this part of the course, we consider how to evaluate a polynomial. Recall that a polynomial is an expression …

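Horner's method, as introduced in that excerpt, evaluates a degree-n polynomial with only n multiplications by nesting: a_0 + x(a_1 + x(a_2 + …)). A short Python sketch (mine, not from the excerpted lecture):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n using Horner's nested form.

    coeffs = [a_0, a_1, ..., a_n], lowest-degree coefficient first.
    """
    result = 0.0
    # work from the highest-degree coefficient inward
    for a in reversed(coeffs):
        result = result * x + a
    return result

# p(x) = 1 + 2x + 3x^2, so p(2) = 1 + 4 + 12 = 17
print(horner([1, 2, 3], 2.0))  # 17.0
```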
Chapter 11: ORDINARY DIFFERENTIAL EQUATIONS

The general form of a first-order differential equation is y′ = f(x, y) with initial condition y(a) = y_a. We seek the solution y = y(x) for x > a. This is shown …

Method 1: bisection. The bisection method starts from two points a_0 and b_0 such that …

Chapter 4: Nonlinear equations. 4.1 Root finding. Consider the problem of solving any nonlinear relation g(x) = h(x) in the real variable x. We rephrase this problem as one of finding the zero (root) of a …

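The bisection idea sketched above — start from a_0, b_0 with a sign change and repeatedly halve the bracket — can be written in a few lines. A Python sketch (mine, not from the excerpted chapter):

```python
def bisection(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:   # root lies in the left half
            b = m
        else:              # root lies in the right half
            a, fa = m, fm
    return (a + b) / 2

# root of x^2 - 2 on [1, 2] is sqrt(2)
print(bisection(lambda x: x * x - 2, 1.0, 2.0))
```

Each iteration halves the bracket, so the error after k steps is at most (b − a)/2^k.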
Numerical Analysis Exam with Solutions

Richard T. Bumby, Fall 2000; June 13, 2001. You are expected to have books, notes and calculators available, but computers or telephones are not to be used during the …

Previous Year Questions & Detailed Solutions

1. The rate of convergence in the Gauss–Seidel method is ___ as fast as in the Gauss–Jacobi method: 1) thrice 2) half-times 3) twice 4) three by two times. 2. In application …

Example 1: Which of these functions are polynomials in x? In the case(s) where f is a polynomial, …

1. Polynomials. A polynomial in x is a function of the form p(x) = a_0 + a_1 x + a_2 x² + … + a_n xⁿ (a_n ≠ 0, n a non-negative integer), where a_0, a_1, a_2, …, a_n are constants. We say that this polynomial …

Numerical Analysis: A Comprehensive Introduction

H. R. Schwarz, University of Zürich, Switzerland, with a contribution by J. Waldvogel, Swiss Federal Institute of Technology, Zürich. John Wiley & Sons, Chichester.

Department of Applied Mathematics and Theoretical Physics — AMA 204 Numerical Analysis, Exam Winter 2004

The best six answers will be credited. All questions carry equal marks. Answer all parts of each question.

3.1 Interpolation and the Lagrange Polynomial

MATH 4073, Chapter 3: Interpolation and Polynomial Approximation (Fall 2003). Consider a sample of nodes x_0, x_1, …, x_n with values y_0, y_1, …, y_n. Can we get a function out of the discrete data above that gives a reasonable estimate …

MTH603 FAQ + Short Questions Answers

Absolute error; accuracy. The absolute error denotes the actual value of a quantity less its rounded value: if x and x* are respectively the rounded and actual values of a quantity, then the absolute …

Numerical Methods in Physics and Astrophysics

Kostas Kokkotas (http://www.tat.physik.uni-tuebingen.de/~kokkotas), October 17, 2017. Topics: 1. Solving nonlinear equations; 2. Solving linear systems of equations; 3. Interpolation, approximation …

11.10a Taylor and Maclaurin Series

Let y = f(x) be a differentiable function at x = a. In first-semester calculus we saw that (1) f(x) ≈ f(a) + f′(a)(x − a) for all x near a. The right-hand side of …

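The first-order Taylor approximation f(x) ≈ f(a) + f′(a)(x − a) from that excerpt is easy to check numerically. A small Python sketch (mine, not from the excerpted section):

```python
import math

def taylor1(f, df, a, x):
    """Linear (first-order Taylor) approximation of f near a."""
    return f(a) + df(a) * (x - a)

# f = exp has f' = exp, so near a = 0:  e^x ≈ 1 + x.
# e^0.1 ≈ 1.1, against the exact value 1.10517...
approx = taylor1(math.exp, math.exp, 0.0, 0.1)
print(approx, math.exp(0.1))
```

The gap between the two printed values is the second-order remainder, roughly f″(a)(x − a)²/2.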
Numerical Methods – Preliminaries

Y. K. Goh, Universiti Tunku Abdul Rahman, 2013. Table of contents: 1. Introduction to Numerical Methods; numerical …

INTERPOLATION

Interpolation is a process of finding a formula (often a polynomial) whose graph will pass through a given set of points (x, y). As an example, consider defining x_0 = 0, x_1 = π/4, … and y_i = cos x_i, i = 0, 1, 2. This gives us three points. Now find a quadratic polynomial p(x) = a_0 + a_1 x + a_2 x².

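The quadratic fit described there amounts to solving a 3×3 Vandermonde system for a_0, a_1, a_2. A Python sketch (mine, not from the excerpted notes; the third node is taken as x_2 = π/2 for illustration, since the excerpt truncates before giving it):

```python
import math

# Nodes and data y_i = cos(x_i); x_2 = pi/2 is an assumed third node.
xs = [0.0, math.pi / 4, math.pi / 2]
ys = [math.cos(x) for x in xs]

# Vandermonde system: rows [1, x_i, x_i^2], unknowns [a0, a1, a2]
A = [[1.0, x, x * x] for x in xs]
b = ys[:]
n = 3
# Gaussian elimination with partial pivoting
for k in range(n):
    piv = max(range(k, n), key=lambda i: abs(A[i][k]))
    A[k], A[piv] = A[piv], A[k]
    b[k], b[piv] = b[piv], b[k]
    for i in range(k + 1, n):
        m = A[i][k] / A[k][k]
        for j in range(k, n):
            A[i][j] -= m * A[k][j]
        b[i] -= m * b[k]
# back substitution
a = [0.0] * n
for i in range(n - 1, -1, -1):
    a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, n))) / A[i][i]

p = lambda x: a[0] + a[1] * x + a[2] * x * x
print(a)                                   # a0 = 1 since p(0) = cos 0
print(p(math.pi / 6), math.cos(math.pi / 6))  # close between the nodes
```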
SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS — BISECTION METHOD

If a function f(x) is continuous between a and b, and f(a) and f(b) are of opposite signs, then there exists at least one root between a and b. This is shown graphically as follows: let f(a) be negative …

Chapter 1: Mathematical Preliminaries and Error Analysis

Numerical Analysis (Math 3313), 2018–2019. Intended learning outcomes: upon successful completion of this chapter, a student will be able to (1) list …

Numerical Analysis Preliminary Exam — 10 am to 1 pm, August 20, 2018

Instructions: You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start …

Notes for Chapter 1 of Scientific Computing with Case Studies

Dianne P. O'Leary, SIAM Press, 2008. Mathematical modeling; computer arithmetic; errors. © 1999–2008 Dianne P. O'Leary. Arithmetic and error: what …

Exact and Approximate Numbers

The numbers that arise in technical applications are better described as exact numbers because there is not the sort of uncertainty in their values that was described above.

Numerical techniques to solve equations

Programming for Applications in Geomatics, Physical Geography and Ecosystem Science (NGEN13). Vaughan Phillips (vaughan.phillips@nateko.lu.se), Associate Professor, …

Math 20B Supplement

Bill Helton, September 23, 2004. 1. Supplement to Appendix G and Chapters 7 and 9 of Stewart Calculus, Edition 5 (August 2003). Contents: 1. Complex Exponentials (for Appendix …

PART I: Lecture Notes on Numerical Solution of Root Finding Problems, MATH 435

Professor Biswa Nath Datta, Department of Mathematical Sciences, Northern Illinois University, DeKalb, IL 60115, USA. E-mail: dattab@math.niu.edu

5. Numerical Integration — Review of Interpolation

Find p_n(x) with p_n(x_j) = y_j, j = 0, 1, 2, …, n. Solution: p_n(x) = y_0 ℓ_0(x) + y_1 ℓ_1(x) + … + y_n ℓ_n(x), where ℓ_k(x) = ∏_{j=0, j≠k}^{n} (x − x_j)/(x_k − x_j). Theorem: Let …

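The Lagrange form p_n(x) = Σ y_k ℓ_k(x) reviewed in that excerpt translates directly into code. A Python sketch (mine, not from the excerpted notes):

```python
def lagrange(xs, ys, x):
    """Evaluate p_n(x) = sum_k y_k * l_k(x), where
    l_k(x) = prod over j != k of (x - x_j) / (x_k - x_j)."""
    n = len(xs)
    total = 0.0
    for k in range(n):
        lk = 1.0
        for j in range(n):
            if j != k:
                lk *= (x - xs[j]) / (xs[k] - xs[j])
        total += ys[k] * lk
    return total

# Four samples of the cubic x^3 - x are reproduced exactly everywhere,
# since the degree-3 interpolant of a cubic is the cubic itself.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 3 - x for x in xs]
print(lagrange(xs, ys, 1.5))  # 1.5**3 - 1.5 = 1.875
```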
COURSE 6 — 3. Numerical integration of functions

The need: evaluating definite integrals of functions that have no explicit antiderivatives, or whose antiderivatives are not easy to obtain. Let f : [a, …

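When no antiderivative is available, as the excerpt notes, a quadrature rule approximates the integral from samples alone. A minimal composite trapezoid rule in Python (mine, not from the excerpted course):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n equal subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))       # endpoint samples get half weight
    for i in range(1, n):
        s += f(a + i * h)          # interior samples get full weight
    return h * s

# Integral of sin over [0, pi] is exactly 2; the error decays like O(h^2).
print(trapezoid(math.sin, 0.0, math.pi, 100))
```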
Chapter 1: Mathematical Preliminaries and Error Analysis

Per-Olof Persson (persson@berkeley.edu), Department of Mathematics, University of California, Berkeley. Math 128A Numerical Analysis. Limits and continuity …