
1. QUESTION (a) Given an nth-degree Taylor polynomial P_n(x) of a function f(x), expanded about x = x_0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How would you obtain an upper bound on the magnitude of the error? (b) Given f(x) = (x^2 + 1) ln(x), obtain the cubic Taylor polynomial P_3(x) about x_0 = 1, and hence obtain an upper bound to the error, as a function of x, if |x - x_0| < 1/2. Use this information to estimate an upper bound to

∫_{0.5}^{1.5} (f(x) - P_3(x)) dx.

SOLUTION (a) The Lagrange formula for the error is given by

E(x) = (x - x_0)^{n+1} f^{(n+1)}(α) / (n+1)!,

where α is an unknown point between x and x_0: either x_0 < α < x or x < α < x_0. If you can find upper and lower bounds for f^{(n+1)}(α), then you can find an upper bound for the magnitude of the error.

(b) The cubic Taylor polynomial is given by

P_3(x) = f(x_0) + (x - x_0) f'(x_0) + (x - x_0)^2 f''(x_0)/2 + (x - x_0)^3 f'''(x_0)/6.

Since our function is f(x) = (x^2 + 1) ln(x), it follows that

f'(x) = 2x ln(x) + x + 1/x,
f''(x) = 2 ln(x) + 3 - 1/x^2,
f'''(x) = 2/x + 2/x^3.

Substituting our choice x_0 = 1, we thus find that f(1) = 0, f'(1) = 2, f''(1) = 2, f'''(1) = 4. The Taylor polynomial P_3(x) is thus given by

P_3(x) = 2(x - 1) + (x - 1)^2 + 2(x - 1)^3/3.

In order to find a bound for the error, we need to calculate the fourth derivative of f(x):

f^{(4)}(x) = -2/x^2 - 6/x^4.

We need to find a bound for the case |x - x_0| < 1/2. The largest value of |f^{(4)}(x)| on the interval 1/2 < x < 3/2 is found at x = 1/2: |f^{(4)}(1/2)| = 104. The upper bound for the error in our Taylor polynomial is thus given by

|E(x)| <= (x - x_0)^4 (104) / 4!.

For x = 1/2, this gives |E| <= 0.271. Hence

∫_{0.5}^{1.5} |f(x) - P_3(x)| dx <= (104/24) ∫_{0.5}^{1.5} (x - 1)^4 dx = [ (x - 1)^5 (104/120) ]_{0.5}^{1.5},

so that

∫_{0.5}^{1.5} |f(x) - P_3(x)| dx <= 0.054.
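As a quick numerical sanity check (a sketch with illustrative helper names, not part of the original exam), the Taylor polynomial and the error bound derived above can be verified directly:

```python
import math

# f(x) = (x^2 + 1) ln(x) and its cubic Taylor polynomial about x0 = 1,
# P3(x) = 2(x-1) + (x-1)^2 + (2/3)(x-1)^3, as derived in the solution.
def f(x):
    return (x**2 + 1) * math.log(x)

def p3(x):
    return 2*(x - 1) + (x - 1)**2 + (2.0/3.0)*(x - 1)**3

def error_bound(x):
    # |E(x)| <= |x-1|^4 / 4! * max|f''''| on [1/2, 3/2]; the max is 104 at x = 1/2
    return (x - 1)**4 / 24.0 * 104.0

# the bound should hold at every point of the interval [0.5, 1.5]
for xi in [0.5, 0.75, 1.25, 1.5]:
    assert abs(f(xi) - p3(xi)) <= error_bound(xi)
```

Running the loop confirms that the actual error never exceeds the Lagrange bound on the interval.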

2. QUESTION (a) Derive the Newton-Raphson method for solving equations in one variable. Compared to the false-position method, name an advantage and two disadvantages of the Newton-Raphson method. (b) Use the Newton-Raphson method to find an estimate for √3 with 6-figure accuracy by solving the equation f(x) = (x^2 - 3)(x^2 - 15) = 0 using an initial guess p_0 = 2. Why does the Newton-Raphson method converge faster to √3 for solving f(x) than for solving g(x) = (x^2 - 3) = 0?

SOLUTION (a) In the Newton-Raphson method for finding a root of a function f(x) = 0, we start with an initial guess p_0 for the root. We approximate the function f(x) by its degree-1 Taylor polynomial P_1(x) around p_0 and take the root of this Taylor polynomial as an improved guess for the root of f(x). We then iterate until we achieve convergence. Around the point p_0, the degree-1 Taylor polynomial is given by

P_1(x) = f(p_0) + (x - p_0) f'(p_0).

The root of this Taylor polynomial is then given by

x - p_0 = -f(p_0)/f'(p_0), or x = p_0 - f(p_0)/f'(p_0).

Compared to the false-position method, the Newton-Raphson method converges significantly faster. The drawbacks of the method are that convergence is no longer guaranteed, and that we need to be able to evaluate the derivative of the function we're solving for.

(b) To apply the Newton-Raphson method, we need the derivative of f(x):

f'(x) = 2x(x^2 - 15) + 2x(x^2 - 3) = 4x(x^2 - 9).

Our initial guess is p_0 = 2. The next guess thus becomes

p_1 = p_0 - f(p_0)/f'(p_0) = 2 - (-11)/(-40) = 2 - 11/40 = 1.725.

The next guess becomes

p_2 = p_1 - f(p_1)/f'(p_1) = 1.725 - 0.2930941/(-41.5681875) = 1.7320509.

The next guess becomes

p_3 = p_2 - f(p_2)/f'(p_2) = 1.7320509 - 0.00000484956/41.5692194 = 1.7320508.

The last two guesses differ by about 1.1 × 10^{-7}, and 6-figure accuracy has thus been achieved.

The Newton-Raphson method is based on Taylor polynomials. This means that the more closely the function resembles a degree-1 Taylor polynomial near the root, the faster the convergence. This closeness can be estimated by looking at the degree-2 Taylor polynomial, i.e. at the second derivatives of the two functions at √3:

g''(x) = 2,    f''(x) = 12x^2 - 36.

For f(x), the second derivative is 0 at √3. Close to √3, f(x) therefore resembles a degree-1 Taylor polynomial much better than g(x) does.
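The iteration above can be sketched in a few lines of Python (illustrative names; the stopping rule and tolerance are choices made here, not part of the exam):

```python
# Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n) for
# f(x) = (x^2 - 3)(x^2 - 15), starting from p0 = 2 as in part (b).
def newton(f, fprime, p0, tol=1e-10, maxit=50):
    p = p0
    for _ in range(maxit):
        p_new = p - f(p) / fprime(p)
        if abs(p_new - p) < tol:   # stop when successive guesses agree
            return p_new
        p = p_new
    return p

f = lambda x: (x**2 - 3) * (x**2 - 15)
fp = lambda x: 4 * x * (x**2 - 9)

root = newton(f, fp, 2.0)
assert abs(root - 3 ** 0.5) < 1e-9   # converges to sqrt(3)
```

The first iterates produced by this loop (1.725, 1.7320509, ...) match the hand computation above.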

3. QUESTION (a) What is meant by LU decomposition of a matrix? When solving a system of linear equations using LU decomposition, what advantages does LU decomposition have over Gauss elimination? (b) Use LU decomposition to solve

[ 0.5  1    1   ] [x_1]   [ 2]
[ 1    4   -2   ] [x_2] = [-1]
[ 1    1    3.5 ] [x_3]   [ 2]

SOLUTION (a) LU decomposition is the decomposition of a matrix A into two matrices L and U, which are lower-triangular and upper-triangular, respectively. Since the matrices L and U together have n^2 + n unknowns and there are only n^2 equations, we have the freedom to choose n coefficients. The most common choices are:
- the diagonal elements of L are all equal to 1;
- the diagonal elements of U are all equal to 1;
- the diagonal elements of L and U are equal to each other.

The advantage of LU decomposition over Gauss elimination for solving a system of linear equations is that the LU decomposition needs to be performed only once. Thus, if we need to solve Ax = b for many different b at different times, we do the decomposition once, and the solution for any b vector can then be obtained directly using forward- and back-substitution. One calculation is of O(n^3), while all other calculations are O(n^2). For Gauss elimination, we need to repeat the entire procedure whenever a new vector b is presented; the previous calculations cannot be reused. Every calculation is thus of order O(n^3).

(b) First we need to decompose A into the separate matrices L and U. We choose n = 3 coefficients: the diagonal elements of U are equal to 1. We must then solve the following equation:

[ 0.5  1    1   ]   [ α_11  0     0    ] [ 1  β_12  β_13 ]
[ 1    4   -2   ] = [ α_21  α_22  0    ] [ 0  1     β_23 ]
[ 1    1    3.5 ]   [ α_31  α_32  α_33 ] [ 0  0     1    ]

The first column of A gives the following equations: α_11 = 0.5, α_21 = 1, and

α_31 = 1. The remaining elements of the first row of A then give 1 = 0.5 β_12, or β_12 = 2, and 1 = 0.5 β_13, or β_13 = 2. Focusing our attention on the remaining elements of the second column of A:

4 = α_21 β_12 + α_22 = 2 + α_22. Thus α_22 = 2.
1 = α_31 β_12 + α_32 = 2 + α_32. Thus α_32 = -1.

The last element of the second row: -2 = α_21 β_13 + α_22 β_23 = 2 + 2 β_23. Thus β_23 = -2.
The last element of the third row: 3.5 = α_31 β_13 + α_32 β_23 + α_33 = 2 + 2 + α_33. Thus α_33 = -0.5.

So we have

[ 0.5  1    1   ]   [ 0.5  0    0    ] [ 1  2  2  ]
[ 1    4   -2   ] = [ 1    2    0    ] [ 0  1  -2 ]
[ 1    1    3.5 ]   [ 1   -1   -0.5  ] [ 0  0  1  ]

Now we have to solve the equation Ly = b, or

[ 0.5  0    0    ] [y_1]   [ 2]
[ 1    2    0    ] [y_2] = [-1]
[ 1   -1   -0.5  ] [y_3]   [ 2]

Solving the equations using forward-substitution, we find

0.5 y_1 = 2, or y_1 = 4;
y_1 + 2 y_2 = -1, or y_2 = -2.5;
y_1 - y_2 - 0.5 y_3 = 2, or y_3 = 9.

Now we use back-substitution to solve Ux = y and find x:

[ 1  2  2  ] [x_1]   [ 4   ]
[ 0  1  -2 ] [x_2] = [-2.5 ]
[ 0  0  1  ] [x_3]   [ 9   ]

This gives the following equations:

x_3 = 9;
x_2 - 2 x_3 = -2.5, or x_2 = 15.5;
x_1 + 2 x_2 + 2 x_3 = 4, or x_1 = 4 - 31 - 18 = -45.

If we substitute this answer back into the original equation, we obtain

[ 0.5  1    1   ] [-45  ]   [ 2]
[ 1    4   -2   ] [ 15.5] = [-1]
[ 1    1    3.5 ] [ 9   ]   [ 2]

and our answer is thus correct.
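The hand decomposition above uses the Crout convention (unit diagonal on U). A minimal sketch of that convention in Python, with illustrative function names, reproduces both the factors and the solution:

```python
def crout_lu(A):
    # Crout decomposition: A = L U with a unit diagonal on U,
    # matching the convention chosen in part (b).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        U[i][i] = 1.0
    for j in range(n):
        for i in range(j, n):        # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):    # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def solve_lu(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):               # forward substitution: L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):     # back substitution: U x = y (unit diagonal)
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

A = [[0.5, 1, 1], [1, 4, -2], [1, 1, 3.5]]
b = [2, -1, 2]
L, U = crout_lu(A)
x = solve_lu(L, U, b)
assert [round(v, 9) for v in x] == [-45.0, 15.5, 9.0]
```

Once `crout_lu` has been run, `solve_lu` can be reused for any new right-hand side b, which is exactly the advantage over Gauss elimination discussed in part (a).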

4. QUESTION (a) Why are polynomials often used in interpolation? Why are Taylor polynomials not useful for interpolation? What is meant by Lagrangian interpolation? (b) Employ Neville's algorithm to obtain f(1) for the function tabulated below. Use the four most appropriate data points.

x     0.7     0.9     1.1     1.3     1.5     1.7
f(x)  0.3567  0.1054  0.0953  0.2624  0.4055  0.5306

SOLUTION (a) Polynomials are often used in interpolation because of the Weierstrass theorem, which states that for every function f(x) defined and continuous on [a, b] and for every given ε > 0, there exists a polynomial P such that |f(x) - P(x)| < ε for all x in [a, b]. Moreover, polynomials are easy to evaluate, differentiate and integrate. Taylor polynomials are not useful for interpolation, since all their information is concentrated at a single point x_0: the approximation is excellent near x_0, but the error increases rapidly as you move away from x_0. In Lagrangian interpolation, we approximate a function f(x) by a polynomial P(x) whose values agree with f(x) at a set of distinct points {x_k}: f(x_k) = P(x_k).

(b) Neville's algorithm involves the repeated linear interpolation of two polynomials:

P_{mab}(x) = [ (x - x_a) P_{mb}(x) - (x - x_b) P_{ma}(x) ] / (x_b - x_a),

where the subscripts indicate the k-values at which P(x_k) = f(x_k), and m indicates a set of x_k's common to both polynomials involved in the interpolation. For the present purpose, we need the 4 tabulated points closest to x = 1: 0.7, 0.9, 1.1, 1.3. In Neville's algorithm, we combine nearest neighbours. Thus we need to find P_{1234}(1).

P_12(1) = [ (1 - 0.7)(0.1054) - (1 - 0.9)(0.3567) ] / (0.9 - 0.7) = -0.0203
P_23(1) = [ (1 - 0.9)(0.0953) - (1 - 1.1)(0.1054) ] / (1.1 - 0.9) = 0.1004
P_34(1) = [ (1 - 1.1)(0.2624) - (1 - 1.3)(0.0953) ] / (1.3 - 1.1) = 0.0118

P_123(1)  = [ (1 - 0.7)(0.1004) - (1 - 1.1)(-0.0203) ] / (1.1 - 0.7) = 0.0702
P_234(1)  = [ (1 - 0.9)(0.0118) - (1 - 1.3)(0.1004) ] / (1.3 - 0.9) = 0.0783
P_1234(1) = [ (1 - 0.7)(0.0783) - (1 - 1.3)(0.0702) ] / (1.3 - 0.7) = 0.0743
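Neville's repeated linear interpolation is naturally expressed as a small in-place table update. This sketch (illustrative names, table values used exactly as tabulated) reproduces the hand computation:

```python
# Neville's algorithm: each pass replaces p[i] with the linear interpolation
# of two neighbouring polynomials, P_{i..i+level}(x).
def neville(xs, fs, x):
    p = list(fs)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            p[i] = ((x - xs[i]) * p[i + 1] - (x - xs[i + level]) * p[i]) \
                   / (xs[i + level] - xs[i])
    return p[0]

# the four tabulated points closest to x = 1
value = neville([0.7, 0.9, 1.1, 1.3], [0.3567, 0.1054, 0.0953, 0.2624], 1.0)
assert abs(value - 0.0742) < 1e-3   # hand calculation gives 0.0743 with rounding
```

At iteration `i` the entry `p[i+1]` still holds its previous-level value, so a single list suffices instead of the full triangular tableau.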

5. QUESTION (a) Show that the error e^(k) in the k-th iteration of the Jacobi method for solving the set of simultaneous linear equations Ax = b satisfies

e^(k) = P^k e^(0),

where the iteration matrix P is independent of k and e^(0) is the initial error vector. Show that the method converges for any e^(0) if Σ_j |P_ij| < 1 for all i. (b) Apply three iterations of the Jacobi method to the following system of equations:

3x_1 + x_2 - x_3 = -1
-x_2 + 3x_3 = 5
-4x_1 + 6x_2 - 2x_3 = 6

starting from (0, 0, 0) and using 4 significant figures. Identify the matrix P and show that convergence is guaranteed.

SOLUTION (a) In the Jacobi method, the matrix equation Ax = b is transformed to an equation x = Px + d, where P = -D^{-1}(L + U) and d = D^{-1}b. Here D contains the diagonal elements of A, while L and U are the strictly lower-triangular and upper-triangular parts of A, respectively. In the Jacobi method, we iteratively calculate new guesses by

x^(k) = P x^(k-1) + d,

while the exact solution satisfies

x = P x + d.

Subtracting the two equations gives for the error term

x^(k) - x = P x^(k-1) - P x = P (x^(k-1) - x).

Thus,

e^(k) = P e^(k-1) = P^k e^(0).

The Jacobi method converges if the norm of the error vector goes to zero as k goes to infinity. This happens if the spectral radius of the matrix P is smaller than 1. The spectral radius is a lower bound for every matrix norm, so if we find any norm of P that is smaller than 1, then we know that the spectral radius is smaller than 1. For example, the Chebyshev (maximum row-sum) norm is given by

‖A‖_∞ = max_i Σ_j |A_ij|.

If this norm is less than 1, i.e. Σ_j |P_ij| < 1 for all i, the spectral radius is less than 1 and the method converges.

(b) The system of equations is first transformed to the Jacobi update equations (using the third equation for x_2 and the second for x_3, so that each unknown is updated from the equation in which it has the dominant coefficient):

x_1^(k+1) = ( -1 - x_2^(k) + x_3^(k) ) / 3
x_2^(k+1) = ( 6 + 4 x_1^(k) + 2 x_3^(k) ) / 6
x_3^(k+1) = ( 5 + x_2^(k) ) / 3

Our initial guess is x^(0) = (0, 0, 0). The first guess is x^(1) = (-0.3333, 1, 1.667). The second guess is x^(2) = (-0.1110, 1.333, 2.000). The third guess is x^(3) = (-0.1110, 1.593, 2.111).

P = [ 0    -1/3   1/3 ]
    [ 2/3   0     1/3 ]
    [ 0     1/3   0   ]

The norms ‖P‖_1 = 2/3 and ‖P‖_E = √(8/9) ≈ 0.94 are both < 1. The spectral radius is thus < 1 and convergence is assured.
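The three hand iterations can be checked with a short loop. This is a sketch under the assumption that the update equations are as written above (the signs of the original system were reconstructed from the printed iterates); names are illustrative:

```python
# One Jacobi sweep x^(k+1) = P x^(k) + d for the system
#   3x1 + x2 - x3 = -1,  -4x1 + 6x2 - 2x3 = 6,  -x2 + 3x3 = 5,
# with each unknown updated from its diagonally dominant equation.
def jacobi_step(x):
    x1, x2, x3 = x
    return ((-1 - x2 + x3) / 3.0,
            (6 + 4 * x1 + 2 * x3) / 6.0,
            (5 + x2) / 3.0)

x = (0.0, 0.0, 0.0)
iterates = []
for _ in range(3):
    x = jacobi_step(x)
    iterates.append(x)
# third iterate is approximately (-0.1111, 1.5926, 2.1111)
```

Letting the loop run further drives the iterates toward the exact solution (-1/7, 23/14, 31/14), consistent with the convergence argument from the norm of P.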

6. QUESTION (a) The spectrum of an isolated narrow resonance can be regarded as a smooth background signal plus a highly-localized disturbance on the signal. Suppose we make a finite-difference table of such a spectrum, and that only one point is affected by the localized disturbance. Explain how we can use the finite-difference table to locate the element in the table that is influenced by the disturbance, and how we can estimate the size of the disturbance. (b) Use finite-difference methods up to third differences to estimate f(1.1) and f(1.95), given the following values of f(x):

x     1.0     1.2     1.4     1.6     1.8     2.0
f(x)  0.0000  0.1813  0.3302  0.4529  0.5545  0.6390

SOLUTION (a) A finite-difference table starts from an ordered list of data points separated by a fixed step size h. We add another column to the table by calculating the differences between consecutive points in the first column, and then continue by iteratively calculating differences between consecutive entries of each new column. When a function is smooth, the entries decrease in magnitude with increasing order of differences. A localized disturbance can be considered an error on a single data point. In the case of an error, the entries grow with increasing order of differences, and in fact they grow according to the binomial coefficients, spreading out in a cone through the table. In our table, we thus have to look for a set of entries that increases with increasing order of differences according to the binomial coefficients. The centre of the cone gives the position of the error, while we can estimate the size of the error by dividing the n-th difference at the centre by the appropriate binomial coefficient.

(b) First of all, we need to make a finite-difference table. Since we want to go up to third differences, we need to determine 3 difference columns.

  x     f        Δf        Δ²f       Δ³f
1 1.0   0.0000
                 0.1813
2 1.2   0.1813             -0.0324
                 0.1489               0.0062
3 1.4   0.3302             -0.0262
                 0.1227               0.0051
4 1.6   0.4529             -0.0211
                 0.1016               0.0040
5 1.8   0.5545             -0.0171
                 0.0845
6 2.0   0.6390

For f(1.1), we need Newton's forward formula with p = (1.1 - 1.0)/0.2 = 1/2:

f(1 + ph) = (1 + Δ)^p f_1 = [ 1 + pΔ + p(p-1)/2! Δ² + p(p-1)(p-2)/3! Δ³ ] f_1.

Thus,

f(1.1) = 0 + (1/2)(0.1813) - (1/8)(-0.0324) + (1/16)(0.0062) = 0.0951.

For f(1.95), we need Newton's backward formula with p = (1.95 - 2.0)/0.2 = -1/4:

f(2 + ph) = (1 - ∇)^{-p} f_6 = [ 1 + p∇ + p(p+1)/2! ∇² + p(p+1)(p+2)/3! ∇³ ] f_6.

Thus,

f(1.95) = 0.6390 + (-1/4)(0.0845) + (-1/4)(3/4)/2 (-0.0171) + (-1/4)(3/4)(7/4)/6 (0.0040) = 0.6193.
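Both the difference table and the two Newton formulas are easy to mechanize. A minimal sketch (illustrative names; the binomial coefficients are built incrementally rather than with factorials):

```python
# Forward-difference table and Newton forward/backward interpolation
# for the tabulated values in part (b).
xs = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]
fs = [0.0000, 0.1813, 0.3302, 0.4529, 0.5545, 0.6390]
h = 0.2

# table[k] holds the k-th difference column
table = [fs]
while len(table[-1]) > 1:
    prev = table[-1]
    table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])

def newton_forward(x, order=3):
    # f(x_1 + p h) ~ sum_k C(p, k) Δ^k f_1, taken from the top of each column
    p = (x - xs[0]) / h
    total, coeff = 0.0, 1.0
    for k in range(order + 1):
        if k > 0:
            coeff *= (p - k + 1) / k   # updates C(p, k-1) -> C(p, k)
        total += coeff * table[k][0]
    return total

def newton_backward(x, order=3):
    # f(x_6 + p h) ~ sum_k C(p + k - 1, k) ∇^k f_6, from the bottom of each column
    p = (x - xs[-1]) / h
    total, coeff = 0.0, 1.0
    for k in range(order + 1):
        if k > 0:
            coeff *= (p + k - 1) / k
        total += coeff * table[k][-1]
    return total

assert abs(newton_forward(1.1) - 0.0951) < 1e-3
assert abs(newton_backward(1.95) - 0.6193) < 1e-3
```

The top entry of each column supplies the forward differences Δ^k f_1, and the bottom entry the backward differences ∇^k f_6, which is why the same table serves both formulas.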

7. QUESTION (a) What is meant by the L_p and Chebyshev norms of a function? Use these to define the least-squares and minimax approximations to a function, and discuss the advantages and disadvantages of the two approximations. (b) The 6th-order Taylor polynomial for the function f(x) = e^{cos(x) - 1} around 0 is given by

P_6(x) = 1 - x^2/2 + x^4/6 - 31x^6/720.

Find a minimax quadratic approximation Q(x) on the interval [-0.5, 0.5] to the fourth-order Taylor polynomial P_4(x) of f(x) = e^{cos(x) - 1} around 0, using the Chebyshev polynomial T_4(x) = 8x^4 - 8x^2 + 1. Can Q(x) be considered a near-minimax quadratic approximation to f(x), and why?

SOLUTION (a) The L_p norm of a function f(x) on an interval [a, b] is given by

‖f‖_p = [ ∫_a^b |f(x)|^p dx ]^{1/p}.

The Chebyshev norm of a function f(x) on an interval [a, b] is given by

‖f‖_∞ = max_{a ≤ x ≤ b} |f(x)|.

The least-squares approximation to a function f(x) is a function F(x) such that the L_2 norm of f(x) - F(x) is a minimum. The minimax (Chebyshev) approximation to a function f(x) is a function F(x) such that the Chebyshev norm of f(x) - F(x) is a minimum. The advantage of the least-squares approximation is that it is easy to calculate; a disadvantage is that |f(x) - F(x)| may be large for some x. The advantage of the Chebyshev approximation is that you know the maximum error of the approximation F(x); the drawback is that the approximation is difficult to calculate.

(b) We need to economize P_4(x) using T_4(x):

P_4(x) = 1 - x^2/2 + x^4/6.

In economizing, we first need to transform our interval [-0.5, 0.5] to the interval [-1, 1]. We can achieve this by the transformation t = 2x. We find a new polynomial P_4(t),

P_4(t) = P_4(t/2) = 1 - t^2/8 + t^4/96.

We can now economize P_4(t) to give us a quadratic Q(t) by subtracting (1/96)(1/8) T_4(t):

Q(t) = P_4(t) - (1/768) T_4(t) = 1 - t^2/8 + t^4/96 - t^4/96 + t^2/96 - 1/768 = 767/768 - 11t^2/96.

To get our minimax approximation, we need to transform Q(t) back to Q(x) using t = 2x. Thus

Q(x) = 767/768 - 11x^2/24.

In order to decide whether Q(x) is a near-minimax approximation to f(x), we need to compare the error made in the Taylor approximation with that made in the economization. The error in the economization is 1/768 ≈ 0.0013. The error in the Taylor approximation can be estimated from the neglected term in P_6(x); this term is largest at the boundaries x = ±1/2, where its size is approximately 31(1/2)^6/720 ≈ 0.00067 (the actual error |f(0.5) - P_4(0.5)| is about 0.00063). The error in the economization thus dominates the error of the Taylor series, and Q(x) can be regarded as a near-minimax quadratic approximation to f(x) on [-0.5, 0.5].
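The claimed error budget (≈ 0.0013 from the economization plus ≈ 0.00067 from the truncated Taylor series) can be checked by brute force over a fine grid. A minimal sketch, with illustrative names:

```python
import math

# Confirm that Q(x) = 767/768 - 11 x^2 / 24 stays within roughly
# 0.002 of f(x) = exp(cos(x) - 1) everywhere on [-0.5, 0.5].
f = lambda x: math.exp(math.cos(x) - 1)
Q = lambda x: 767.0 / 768.0 - 11.0 * x * x / 24.0

grid = [-0.5 + i / 1000.0 for i in range(1001)]
max_err = max(abs(f(x) - Q(x)) for x in grid)

# combined bound: 1/768 (economization) + ~0.00067 (Taylor truncation)
assert max_err < 0.0021
```

The observed maximum error lands well inside the combined bound, which supports calling Q(x) a near-minimax quadratic approximation on this interval.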

8. QUESTION (a) What is composite quadrature? Write down the composite trapezoidal rule, indicating how the error varies with the step size h. (b) Evaluate

∫_{-1}^{1} cos(e^x) dx

using the composite trapezoidal rule and four significant figures. Use four different step sizes: 2, 1, 0.5, and end with a step size of 0.25. Use these results to eliminate the leading term in the error expansion.

SOLUTION (a) In composite quadrature, the interval of integration is split into subintervals, and the same low-order quadrature rule, e.g. the trapezoidal rule, is applied to each subinterval. The composite trapezoidal rule is

∫_a^b f(x) dx = (h/2) [ f_0 + 2 Σ_{k=1}^{n-1} f_k + f_n ] - (b - a) h^2 f''(µ) / 12,

where h = (b - a)/n and µ is an unknown point in the interval (a, b). The error term has the form

E(h) = Σ_{n=1}^∞ a_n h^{2n},

i.e. it contains only even powers of h. This known error dependence can be exploited using Richardson extrapolation, which gives the Romberg method.

(b) We have to calculate I = ∫_{-1}^{1} cos(e^x) dx. We approximate this integral by I(h), where h is the step size.

I(2) = (2/2)[ f(-1) + f(1) ] = 0.9331 - 0.9117 = 0.0214
I(1) = I(2)/2 + 1 · f(0) = 0.0107 + 0.5403 = 0.5510
I(0.5) = I(1)/2 + 0.5 [ f(-0.5) + f(0.5) ] = 0.2755 + 0.3719 = 0.6474
I(0.25) = I(0.5)/2 + 0.25 [ f(-0.75) + f(-0.25) + f(0.25) + f(0.75) ]

I(0.25) = 0.3237 + 0.3415 = 0.6652

The leading term in the error is a_1 h^2. Richardson extrapolation tells us that the leading term a h^n can be eliminated, using step sizes h and αh, following

I_1(αh) = [ I_0(αh) - α^n I_0(h) ] / (1 - α^n).

In the present example α = 1/2 and n = 2. Thus

I_1(1) = [ 0.5510 - 0.25(0.0214) ] / (1 - 0.25) = 0.7275
I_1(0.5) = [ 0.6474 - 0.25(0.5510) ] / (1 - 0.25) = 0.6795
I_1(0.25) = [ 0.6652 - 0.25(0.6474) ] / (1 - 0.25) = 0.6711
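The trapezoidal values and the first Richardson (Romberg) column can be reproduced with a short script; the function names are illustrative, not part of the exam:

```python
import math

# Composite trapezoidal rule on [a, b] with n subintervals.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return h * s

# Eliminate the leading O(h^n) error term from results at step sizes h and alpha*h.
def richardson(coarse, fine, n=2, alpha=0.5):
    return (fine - alpha ** n * coarse) / (1 - alpha ** n)

f = lambda x: math.cos(math.exp(x))
I = {h: trapezoid(f, -1.0, 1.0, int(2 / h)) for h in (2, 1, 0.5, 0.25)}

I1 = richardson(I[1], I[0.5])          # first Romberg column, coarser pair
I1_best = richardson(I[0.5], I[0.25])  # first Romberg column, finest pair
```

With halved step sizes (α = 1/2, n = 2) the extrapolation is the familiar (4·fine − coarse)/3, and the resulting values match the hand calculation to four figures.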

9. QUESTION (a) Given the first-order ordinary differential equation dy/dt = f(t, y(t)), derive the general 2nd-order Runge-Kutta formula, and indicate how it can be used to derive the midpoint method. (b) Given dy/dx = -y^2 + x/2 + 1 and y(1) = 0, use the midpoint method to obtain a value for y(2) using a step size of 0.25.

SOLUTION (a) In the second-order Runge-Kutta method, we approximate the function value y(t + h) using a second-order Taylor polynomial expanded around y(t):

y(t + h) = y(t) + h y'(t) + (h^2/2) y''(t).

The first and second terms on the right-hand side are known, since y(t) is given and y'(t) = f(t, y(t)) is our differential equation. For the last term,

y'' = df/dt = ∂f/∂t + (∂f/∂y)(dy/dt) = f_t + f_y f.

If we now approximate f(t + αh, y + αh f(t, y)) by a first-order Taylor polynomial around f(t, y), we find

f(t + αh, y + αh f(t, y)) ≈ f(t, y) + αh f_t + αh f f_y = f(t, y) + αh y''.

Thus,

y'' ≈ [ f(t + αh, y + αh f(t, y)) - f(t, y) ] / (αh).

If we substitute this expression into the second-order Taylor polynomial, we find

y(t + h) = y(t) + h f(t, y) + (h/(2α)) [ f(t + αh, y + αh f(t, y)) - f(t, y) ]

or

y(t + h) = y(t) + h (1 - 1/(2α)) f(t, y) + (h/(2α)) f(t + αh, y + αh f(t, y)).

(b) In this last expression, we can choose any value for α; the midpoint method corresponds to choosing α = 1/2:

y(t + h) = y(t) + h f(t + h/2, y + (h/2) f(t, y)).

We have dy/dx = -y^2 + x/2 + 1, y(1) = 0, a step size of 0.25, and we need to use the midpoint method. So, we start with (x, y) = (1, 0):

y(1.25) = y(1) + 0.25 f(1.125, 0 + 0.125 f(1, 0));       f(1, 0) = 1.5
y(1.25) = 0 + 0.25 f(1.125, 0.1875) = 0.25 × 1.527 = 0.3818

y(1.5) = y(1.25) + 0.25 f(1.375, 0.3818 + 0.125 f(1.25, 0.3818));   f(1.25, 0.3818) = 1.479
y(1.5) = 0.3818 + 0.25 f(1.375, 0.5667) = 0.3818 + 0.25 × 1.366 = 0.7234

y(1.75) = y(1.5) + 0.25 f(1.625, 0.7234 + 0.125 f(1.5, 0.7234));    f(1.5, 0.7234) = 1.227
y(1.75) = 0.7234 + 0.25 f(1.625, 0.8767) = 0.7234 + 0.25 × 1.044 = 0.9844

y(2) = y(1.75) + 0.25 f(1.875, 0.9844 + 0.125 f(1.75, 0.9844));     f(1.75, 0.9844) = 0.9060
y(2) = 0.9844 + 0.25 f(1.875, 1.098) = 0.9844 + 0.25 × 0.7319 = 1.167
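The four midpoint steps can be marched programmatically. A minimal sketch, assuming the right-hand side f(x, y) = -y^2 + x/2 + 1 reconstructed from the printed intermediate values (names are illustrative):

```python
# Midpoint method (RK2 with alpha = 1/2) for y' = -y^2 + x/2 + 1, y(1) = 0,
# step size h = 0.25, marching from x = 1 to x = 2 as in part (b).
f = lambda x, y: -y * y + x / 2.0 + 1.0

x, y, h = 1.0, 0.0, 0.25
for _ in range(4):
    y_mid = y + (h / 2.0) * f(x, y)          # Euler half-step to the midpoint
    y = y + h * f(x + h / 2.0, y_mid)        # full step with the midpoint slope
    x = x + h
# y is now approximately 1.1675, matching the hand value y(2) = 1.167
```

The intermediate iterates produced by this loop (0.3818, 0.7234, 0.9844) agree with the worked solution to the four significant figures carried by hand.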