Department of Applied Mathematics and Theoretical Physics

AMA 204 Numerical Analysis, Exam, Winter 2004

The best six answers will be credited. All questions carry equal marks; answer all parts of each question. 24 marks are available for each question.

For two-part questions, part (a) is bookwork and part (b) an unseen problem. For question 3, part (a) is bookwork and parts (b) and (c) are unseen problems. For questions 5 and 8, parts (a) and (b) are bookwork and part (c) an unseen problem.
Page 2 of 5 110AMA204

1. (a) Given an $n$th degree Taylor polynomial $P_n(x)$ of a function $f(x)$, expanded about $x = x_0$, write down the Lagrange formula for the truncation error, carefully describing any features whose meaning is not conveyed directly by the notation used. How might you be able to obtain an upper bound on the magnitude of the error?

(b) Given $f(x) = x\ln(x) - x$, obtain the cubic Taylor polynomial $P_3(x)$ about $x_0 = 1$. Show that $|f^{(4)}(x)| < 16$ if $|x - 1| < 0.5$, and hence obtain an upper bound on the error, as a function of $x$, if $|x - 1| < 0.5$. Use this information to estimate an upper bound on $\int_{0.5}^{1.5} (f(x) - P_3(x))\,dx$.

SOLUTION

(a) The Lagrange formula for the error is
$$E(x) = \frac{(x - x_0)^{n+1}}{(n+1)!}\, f^{(n+1)}(\alpha),$$
where $\alpha$ is an unknown point between $x$ and $x_0$: either $x_0 < \alpha < x$ or $x < \alpha < x_0$. If you can find upper and lower bounds for $f^{(n+1)}(\alpha)$, then you can find an upper bound on the magnitude of the error.

(b) The cubic Taylor polynomial is given by
$$P_3(x) = f(x_0) + (x - x_0) f'(x_0) + \frac{(x - x_0)^2}{2} f''(x_0) + \frac{(x - x_0)^3}{6} f'''(x_0).$$
Since our function is $f(x) = x\ln(x) - x$, it follows that
$$f'(x) = \ln(x), \qquad f''(x) = \frac{1}{x}, \qquad f'''(x) = -\frac{1}{x^2}.$$
Substituting our choice $x_0 = 1$, we thus find that
$$f(1) = -1, \qquad f'(1) = 0, \qquad f''(1) = 1, \qquad f'''(1) = -1.$$
The Taylor polynomial $P_3(x)$ is thus given by
$$P_3(x) = -1 + \frac{(x-1)^2}{2} - \frac{(x-1)^3}{6}.$$
In order to find a bound for the error, we need to calculate the fourth derivative of $f(x)$:
$$f^{(4)}(x) = \frac{2}{x^3}.$$
We need a bound for the case $|x - 1| < 1/2$. The maximum value of the derivative is obtained when $x$ is set to $1/2$, giving a maximum value of $|f^{(4)}(x)| = 16$ on the interval $1/2 \le x \le 3/2$. The upper bound for the error in our Taylor polynomial is thus
$$|E(x)| < \frac{16\,(x-1)^4}{24}.$$
The upper bound on the error in the integral is then found as follows:
$$\left| \int_{0.5}^{1.5} (f(x) - P_3(x))\,dx \right| < \int_{0.5}^{1.5} \frac{16\,(x-1)^4}{24}\,dx = \left[ \frac{16\,(x-1)^5}{120} \right]_{0.5}^{1.5} = \frac{1}{120},$$
so
$$\left| \int_{0.5}^{1.5} (f(x) - P_3(x))\,dx \right| < 0.0083.$$
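The bound can be checked numerically. The following Python sketch (the names `f`, `P3` and `error_bound` are ours, chosen for illustration) verifies that the true remainder stays below the bound at sample points inside $(0.5, 1.5)$:

```python
import math

def f(x):
    # f(x) = x*ln(x) - x
    return x * math.log(x) - x

def P3(x):
    # Cubic Taylor polynomial about x0 = 1: -1 + (x-1)^2/2 - (x-1)^3/6
    return -1 + (x - 1)**2 / 2 - (x - 1)**3 / 6

def error_bound(x):
    # |E(x)| <= 16*(x-1)^4/24, valid for |x-1| < 0.5
    return 16 * (x - 1)**4 / 24

# The Lagrange bound should dominate the actual truncation error
for x in [0.6, 0.9, 1.1, 1.4]:
    assert abs(f(x) - P3(x)) <= error_bound(x)
```

Integrating `error_bound` over $[0.5, 1.5]$ reproduces the $1/120 \approx 0.0083$ estimate above.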
2. (a) Derive the Newton-Raphson method for solving equations in one variable. Compared to the false-position method, name one advantage and two disadvantages of the Newton-Raphson method.

(b) Use three iterations of the Newton-Raphson method to approximate the solution of $f(x) = e^x - \tan x = 0$ in the interval $(-\pi/2, \pi/2)$. A good initial guess for this problem is $x \approx 4/3$. How accurate do you estimate the final answer to be?

SOLUTION

(a) In the Newton-Raphson method for finding the root of a function $f(x) = 0$, we start with an initial guess $p_0$ for the root. We approximate the function $f(x)$ by its Taylor polynomial of degree 1, $P_1(x)$, around $p_0$, and take the root of this Taylor polynomial as an improved guess for the root of $f(x)$. We then iterate until we achieve convergence. Around the point $p_0$, the Taylor polynomial of degree 1 is given by
$$P_1(x) = f(p_0) + (x - p_0) f'(p_0).$$
The root of this Taylor polynomial, $P_1(x) = 0$, is then given by
$$x - p_0 = -\frac{f(p_0)}{f'(p_0)} \qquad \text{or} \qquad x = p_0 - \frac{f(p_0)}{f'(p_0)}.$$
Compared to the false-position method, the Newton-Raphson method converges significantly faster: the convergence rate is quadratic rather than linear. The drawbacks of the method are that convergence is no longer guaranteed, and that we need to be able to evaluate the derivative of the function we are solving for.

(b) To apply the Newton-Raphson method, we need the derivative of $f(x)$:
$$f'(x) = e^x - \frac{1}{\cos^2 x}.$$
Our initial guess is $p_0 = 4/3 = 1.33333$. The next guess thus becomes
$$p_1 = p_0 - \frac{f(p_0)}{f'(p_0)} = 1.33333 - \frac{-0.338061}{-14.2775} = 1.309655.$$
The next guess becomes
$$p_2 = p_1 - \frac{f(p_1)}{f'(p_1)} = 1.309655 - \frac{-0.0370087}{-11.2970} = 1.30638.$$
The next guess becomes
$$p_3 = p_2 - \frac{f(p_2)}{f'(p_2)} = 1.30638 - \frac{-0.000575124}{-10.9481} = 1.30633.$$
One way of estimating the accuracy of the guess is to examine the difference between consecutive guesses. This suggests that the result is accurate to within $5 \times 10^{-5}$. (Alternatively, one can estimate from the function value and the derivative an accuracy of $2 \times 10^{-8}$, although this corresponds to an additional Newton-Raphson step.)
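The three iterations above can be reproduced with a short Python sketch (the helper `newton_raphson` is illustrative, not a library routine):

```python
import math

def newton_raphson(f, fprime, p0, n_steps):
    """Iterate p_{k+1} = p_k - f(p_k) / f'(p_k)."""
    p = p0
    for _ in range(n_steps):
        p = p - f(p) / fprime(p)
    return p

# f(x) = e^x - tan(x) on (-pi/2, pi/2), initial guess 4/3
f = lambda x: math.exp(x) - math.tan(x)
fp = lambda x: math.exp(x) - 1 / math.cos(x)**2

root = newton_raphson(f, fp, 4/3, 3)
print(round(root, 5))  # 1.30633, matching p_3 above
```

Running one extra step changes the estimate only at the $10^{-8}$ level, consistent with the accuracy estimate in the solution.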
3. (a) Why is pivoting important in finding the solution of a system of linear equations? What is meant by LU decomposition? How can you use LU decomposition to determine the determinant of a matrix?

(b) Use LU decomposition to solve the following system of linear equations:
$$\begin{pmatrix} 2 & 4 & -4 \\ 1 & 1 & -1 \\ 2 & 4 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ 4 \end{pmatrix}.$$

(c) Determine $\|A\|_\infty$, $\|L\|_\infty$ and $\|U\|_\infty$, where $A = LU$ are the matrix in part (b) and its decomposition. Are these norms consistent with each other? Provide reasons for your answer.

SOLUTION

(a) The essential matrix element in step $i$ is the element $a_{ii}$, the element of equation $i$ on the diagonal, also known as the pivot. In pivoting, one aims to make the pivot the largest element in the column (excluding the part of the column above the diagonal). Pivoting is important, since the accuracy of the calculations is improved when this element is as large as possible; a small pivot may lead to significant round-off error.

In LU decomposition, the matrix $A$ is split into a lower triangular matrix $L$ and an upper triangular matrix $U$, such that $LU = A$. Since the matrices $L$ and $U$ together have $n^2 + n$ unknown entries and the matrix $A$ has $n^2$ known entries, we are free to choose $n$ entries. A common choice is to set the diagonal entries of either $L$ or $U$ equal to 1. LU decomposition is useful in calculating the determinant of $A$, since the determinant of $A$ is the product of all the elements on the diagonals of $L$ and $U$. If you chose all diagonal elements of $U$ equal to 1, then you simply need to multiply only the diagonal elements of $L$.

(b) If we want to solve the set of equations $Ax = b$ using LU decomposition, we first decompose the matrix $A$ into two matrices $L$ and $U$, so that $LU = A$. We then solve $Ly = b$ using forward substitution, followed by $Ux = y$ using back substitution. So, first we need to decompose $A$:
$$\begin{pmatrix} 2 & 4 & -4 \\ 1 & 1 & -1 \\ 2 & 4 & -2 \end{pmatrix} = \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix} \begin{pmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{pmatrix},$$
where we have chosen the diagonal elements of $U$ to be equal to 1.
The first row of $A$ gives the following set of equations:
$$l_{11} = 2, \qquad l_{11} u_{12} = 4 \;\Rightarrow\; u_{12} = 2, \qquad l_{11} u_{13} = -4 \;\Rightarrow\; u_{13} = -2.$$
The second row of $A$ gives the following set of equations:
$$l_{21} = 1, \qquad l_{21} u_{12} + l_{22} = 1 \;\Rightarrow\; l_{22} = -1, \qquad l_{21} u_{13} + l_{22} u_{23} = -1 \;\Rightarrow\; u_{23} = -1.$$
The third row of $A$ gives the following set of equations:
$$l_{31} = 2, \qquad l_{31} u_{12} + l_{32} = 4 \;\Rightarrow\; l_{32} = 0, \qquad l_{31} u_{13} + l_{32} u_{23} + l_{33} = -2 \;\Rightarrow\; l_{33} = 2.$$
Now we have to solve $Ly = b$:
$$\begin{pmatrix} 2 & 0 & 0 \\ 1 & -1 & 0 \\ 2 & 0 & 2 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ 4 \end{pmatrix}.$$
Hence
$$y_1 = 2/2 = 1, \qquad y_2 = -(0 - y_1) = 1, \qquad y_3 = (4 - 2y_1)/2 = 1.$$
Now we have to solve $Ux = y$:
$$\begin{pmatrix} 1 & 2 & -2 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$
Hence
$$x_3 = 1, \qquad x_2 = 1 + x_3 = 2,$$
$$x_1 = 1 + 2x_3 - 2x_2 = -1.$$
Substituting the answer back into the original problem gives
$$\begin{pmatrix} 2 & 4 & -4 \\ 1 & 1 & -1 \\ 2 & 4 & -2 \end{pmatrix} \begin{pmatrix} -1 \\ 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ 4 \end{pmatrix},$$
and hence the answer is correct.

(c) The norm is given by the maximum sum of absolute values along a row,
$$\|A\|_\infty = \max_i \sum_j |a_{ij}|.$$
Hence $\|A\|_\infty = 10$. Similarly, $\|L\|_\infty = 4$ and $\|U\|_\infty = 5$. These norms are consistent with each other, since $\|A\|_\infty \le \|L\|_\infty \, \|U\|_\infty$.
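The hand calculation can be reproduced with a small Crout-style decomposition (unit diagonal on $U$, the convention chosen above). This is a sketch for this exercise, not a production routine; it performs no pivoting:

```python
def crout_lu(A):
    """LU decomposition with unit diagonal on U (the convention used above)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        U[i][i] = 1.0
    for j in range(n):
        # Column j of L, then row j of U
        for i in range(j, n):
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def solve_lu(L, U, b):
    n = len(b)
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Back substitution: U x = y (unit diagonal on U)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

A = [[2, 4, -4], [1, 1, -1], [2, 4, -2]]
b = [2, 0, 4]
L, U = crout_lu(A)
x = solve_lu(L, U, b)
print(x)  # [-1.0, 2.0, 1.0]
```

The determinant follows as the product of the diagonal of $L$: $2 \cdot (-1) \cdot 2 = -4$.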
4. (a) Briefly describe Neville's algorithm to perform an interpolation of a tabulated function. What advantage does Neville's method have over computing the Lagrange polynomial directly?

(b) In inverse interpolation, one asks the following question: given a function $y = f(x)$ tabulated at a discrete set of points, at what value of $x$ does $y$ assume a specified value? By reversing the roles of the dependent variable $y$ and independent variable $x$, use inverse interpolation with Neville's algorithm to find the value of $x$ for which $y(x) = 0$, given the data points in the table below. Use the 4 most appropriate data points.

x_k |      -5 |      -3 |      -1 |       1 |      3 |      5 |       7
y_k | -6.8021 | -6.0833 | -4.6562 | -2.3333 | 1.0729 | 5.7500 | 11.8854

SOLUTION

(a) Neville's algorithm constructs the Lagrange interpolating polynomial through iterated interpolation. Suppose $P_{ms}$ and $P_{mt}$ are two polynomials: $P_{ms}$ agrees with a function $f(x)$ at a set of points $\{x_m\}$ and a point $x_s$, and $P_{mt}$ agrees with $f(x)$ at the identical set of points $\{x_m\}$ and a point $x_t$. We can then construct a polynomial that agrees with $f(x)$ at the set $\{x_m\}$, $x_s$ and $x_t$ by applying the formula
$$P_{mst}(x) = \frac{(x - x_s) P_{mt}(x) - (x - x_t) P_{ms}(x)}{x_t - x_s}.$$
The advantage of Neville's algorithm, when we want to know the function at one particular point, is that if we want to improve the approximation by adding data points, the additional data points can be incorporated with little extra work, without recomputing the earlier entries.

(b) Let us start by rewriting the table, renaming $x_k$ as $Y_k$ and $y_k$ as $X_k$:

X_k | -6.8021 | -6.0833 | -4.6562 | -2.3333 | 1.0729 | 5.7500 | 11.8854
Y_k |      -5 |      -3 |      -1 |       1 |      3 |      5 |       7

Now we have a table to which we can apply Neville's method in the regular fashion. The four most appropriate points are those closest to 0: $-4.6562$, $-2.3333$, $1.0729$ and $5.7500$. Hence $P_1 = -1$, $P_2 = 1$, $P_3 = 3$ and $P_4 = 5$.
We first determine $P_{12}$, $P_{23}$ and $P_{34}$ using Neville's algorithm with $X = 0$,
$$P_{stu} = \frac{(X - X_s) P_{tu} - (X - X_t) P_{su}}{X_t - X_s}.$$
This gives
$$P_{12} = \frac{(0 - (-4.6562)) \cdot 1 - (0 - (-2.3333)) \cdot (-1)}{-2.3333 - (-4.6562)} = 3.0090,$$
$$P_{23} = \frac{(0 - (-2.3333)) \cdot 3 - (0 - 1.0729) \cdot 1}{1.0729 - (-2.3333)} = 2.3700,$$
$$P_{34} = \frac{(0 - 1.0729) \cdot 5 - (0 - 5.75) \cdot 3}{5.75 - 1.0729} = 2.5412.$$
$P_{123}$ and $P_{234}$ are then given by
$$P_{123} = \frac{(0 - (-4.6562)) \cdot 2.3700 - (0 - 1.0729) \cdot 3.0090}{1.0729 - (-4.6562)} = 2.4897,$$
$$P_{234} = \frac{(0 - (-2.3333)) \cdot 2.5412 - (0 - 5.75) \cdot 2.3700}{5.75 - (-2.3333)} = 2.4194.$$
Finally, $P_{1234}$ is given by
$$P_{1234} = \frac{(0 - (-4.6562)) \cdot 2.4194 - (0 - 5.75) \cdot 2.4897}{5.75 - (-4.6562)} = 2.4583.$$
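The tableau above can be generated with a compact sketch of Neville's algorithm (the function name `neville` is ours); for the inverse interpolation we simply pass the $y$-values as abscissae:

```python
def neville(xs, ys, x):
    """Neville's iterated interpolation: evaluate the interpolating
    polynomial through (xs[i], ys[i]) at the point x."""
    p = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            # Combine two overlapping lower-order estimates
            p[i] = ((x - xs[i]) * p[i + 1] - (x - xs[i + level]) * p[i]) \
                   / (xs[i + level] - xs[i])
    return p[0]

# Inverse interpolation: swap the roles of x and y, then evaluate at y = 0
Y = [-4.6562, -2.3333, 1.0729, 5.7500]   # tabulated y-values (now abscissae)
X = [-1, 1, 3, 5]                         # corresponding x-values (now ordinates)
print(round(neville(Y, X, 0.0), 4))  # ≈ 2.4583
```

The first pass of the inner loop reproduces $P_{12}$, $P_{23}$ and $P_{34}$; the final entry is $P_{1234}$.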
5. In the fixed-point method, one tries to find the root of a given equation by rewriting it in the form $x = f(x)$. Under certain conditions on $f'(x)$, one can obtain an improved estimate $x_1$ from an initial guess $x_0$ by calculating $x_1 = f(x_0)$. In these circumstances, repeated calculation of $x_{n+1} = f(x_n)$ gives the root of the equation. The fixed-point method converges linearly.

(a) Give the definition of the convergence rate.

(b) Linearly convergent methods can be accelerated using Aitken's $\Delta^2$ method. Derive Aitken's $\Delta^2$ method from the definition of the convergence rate.

(c) Using an initial guess of $x_0 = 0$ for the problem $x = \cos(x)$ gives an improved guess of $x_1 = \cos(x_0) = \cos(0) = 1$. Apply the fixed-point method for a further three steps. Estimate from these three steps how many iterations you would need to achieve an accuracy of $10^{-6}$.

(d) Apply Aitken's $\Delta^2$ algorithm to the three new estimates from part (c) to find an improved estimate of the solution of $x = \cos(x)$.

SOLUTION

(a) The convergence rate $\alpha$ is the particular value of $\alpha$ for which
$$\lim_{n \to \infty} \frac{|p - p_{n+1}|}{|p - p_n|^{\alpha}} = C,$$
where $p$ is the exact answer, $p_n$ a sequence of guesses converging to $p$, and $C$ a constant.

(b) Linear convergence means $\alpha = 1$. If the signs of $p - p_n$ either alternate or remain the same, we can write
$$C \approx \frac{p - p_{n+1}}{p - p_n} \approx \frac{p - p_{n+2}}{p - p_{n+1}}.$$
If we now assume the difference between the two approximations to be negligible, we get
$$(p - p_{n+1})^2 = (p - p_n)(p - p_{n+2}).$$
Hence
$$p^2 - 2p\,p_{n+1} + p_{n+1}^2 = p^2 - p\,(p_n + p_{n+2}) + p_n\,p_{n+2},$$
or
$$p\,(p_n - 2p_{n+1} + p_{n+2}) = p_n\,p_{n+2} - p_{n+1}^2 = p_n\,(p_n - 2p_{n+1} + p_{n+2}) - p_n^2 + 2p_n\,p_{n+1} - p_{n+1}^2.$$
Thus
$$p = p_n - \frac{(p_n - p_{n+1})^2}{p_n - 2p_{n+1} + p_{n+2}}.$$
(c) We have $x_1 = 1$, and we need three additional iterations. Following the scheme outlined in the question,
$$x_2 = \cos x_1 = \cos 1 = 0.5403,$$
$$x_3 = \cos x_2 = \cos 0.5403 = 0.8576,$$
$$x_4 = \cos x_3 = \cos 0.8576 = 0.6543.$$
To estimate the number of iterations needed for a precision of $10^{-6}$, we consider the difference between consecutive guesses: $|x_3 - x_2| = 0.3173$ and $|x_4 - x_3| = 0.2033$. We want this difference to become $10^{-6}$. Assuming linear convergence, each iteration reduces the difference between consecutive guesses by a factor of about $0.64$. In order to reach an accuracy of $10^{-6}$, we would need a further $N$ guesses, where $N$ satisfies
$$0.2 \times 0.64^N < 10^{-6}, \qquad \text{giving } N \approx 28.$$

(d) The acceleration using Aitken's algorithm then gives us
$$x = x_2 - \frac{(x_2 - x_3)^2}{x_2 - 2x_3 + x_4} = 0.7337.$$
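Parts (c) and (d) can be sketched in a few lines of Python (the helper names `fixed_point` and `aitken` are ours):

```python
import math

def fixed_point(g, x0, n):
    """Return [x0, g(x0), g(g(x0)), ...] with n iterations."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

def aitken(p0, p1, p2):
    # Aitken's Delta^2: p ≈ p0 - (p1 - p0)^2 / (p0 - 2 p1 + p2)
    return p0 - (p1 - p0)**2 / (p0 - 2 * p1 + p2)

xs = fixed_point(math.cos, 0.0, 4)   # x0 .. x4
# xs[1:] ≈ [1, 0.5403, 0.8576, 0.6543] (to 4 d.p.)
acc = aitken(xs[2], xs[3], xs[4])
print(round(acc, 4))  # ≈ 0.7337
```

The accelerated value 0.7337 is already much closer to the true fixed point $0.7391\ldots$ than $x_4 = 0.6543$.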
6. (a) Define the finite difference operators $E$, $\Delta$ and $D$. Derive the Newton forward difference formula for interpolation. Derive the relationship between $D$ and $\Delta$,
$$hD = \ln(1 + \Delta),$$
with $h$ the step size.

(b) Given the tabulated function $f(x)$ below, construct a difference table up to 3rd differences. Use this difference table to find an approximation to $f(0.15)$ and $f(0.88)$, employing the most appropriate finite-difference formula for interpolation in each case. Justify your choice of finite-difference formula.

x_k |     0.1 |    0.2 |    0.3 |    0.4 |    0.5 |    0.6 |    0.7 |    0.8 |    0.9
f_k | -0.0356 | 0.1861 | 0.4386 | 0.7227 | 1.0393 | 1.3893 | 1.7733 | 2.1922 | 2.6467

Also use the forward-difference formula for the derivative at a point $p$,
$$f'_p = \frac{1}{h}\left( \Delta_p - \frac{\Delta^2_p}{2} + \frac{\Delta^3_p}{3} - \dots \right), \qquad (1)$$
to estimate the derivative of $f(x)$ at $x = 0.2$.

SOLUTION

(a) The finite difference operators are defined as follows:
$$E f(x) = f(x + h),$$
$$\Delta f(x) = f(x + h) - f(x) = (E - 1) f(x),$$
$$D f(x) = f'(x).$$
This leads to the following expression for $E$: $E = 1 + \Delta$. If we want to know the value of $f(x)$ at a point $x = x_0 + ph$, then we have
$$f(x) = f(x_0 + ph) = E^p f(x_0) = (1 + \Delta)^p f(x_0) = \left( 1 + p\Delta + \frac{p(p-1)}{2}\Delta^2 + \frac{p(p-1)(p-2)}{6}\Delta^3 + \dots \right) f(x_0)$$
$$= f(x_0) + p\,\Delta_0 + \frac{p(p-1)}{2}\,\Delta^2_0 + \frac{p(p-1)(p-2)}{6}\,\Delta^3_0 + \dots
$$
In order to relate $\Delta$ and $D$, we can use Taylor's polynomial. On the one hand,
$$f(x + h) = E f(x) = (\Delta + 1) f(x).$$
On the other hand,
$$f(x + h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \dots = f(x) + hD\,f(x) + \frac{h^2 D^2}{2!} f(x) + \dots = e^{hD} f(x).$$
Hence
$$\Delta + 1 = e^{hD} \qquad \text{or} \qquad hD = \ln(\Delta + 1).$$

(b) First, we have to make a difference table.

x     f        Δ        Δ²       Δ³
0.1  -0.0356
              0.2217
0.2   0.1861          0.0308
              0.2525           0.0008
0.3   0.4386          0.0316
              0.2841           0.0009
0.4   0.7227          0.0325
              0.3166           0.0009
0.5   1.0393          0.0334
              0.3500           0.0006
0.6   1.3893          0.0340
              0.3840           0.0009
0.7   1.7733          0.0349
              0.4189           0.0007
0.8   2.1922          0.0356
              0.4545
0.9   2.6467

In order to determine $f(0.15)$, we use the forward difference formula, starting from $x_0 = 0.1$ with $p = 0.5$, since this formula uses the most accurate data near the top of the table:
$$f(0.15) = f(0.1) + 0.5\,\Delta f(0.1) - 0.125\,\Delta^2 f(0.1) + 0.0625\,\Delta^3 f(0.1) = -0.0356 + 0.1109 - 0.0039 + 0.0000 = 0.0714.$$
In order to determine $f(0.88)$, we use the backward difference formula,
$$f(x_0 + ph) = f(x_0) + p\,\nabla_0 + \frac{p(p+1)}{2}\,\nabla^2_0 + \frac{p(p+1)(p+2)}{6}\,\nabla^3_0 + \dots,$$
starting from $x_0 = 0.9$ with $p = -0.2$, since this formula uses the most accurate data at the end of the table:
$$f(0.88) = f(0.9) - 0.2\,\nabla f(0.9) - 0.08\,\nabla^2 f(0.9) - 0.048\,\nabla^3 f(0.9) = 2.6467 - 0.0909 - 0.0028 - 0.0000 = 2.5530.$$
To evaluate the derivative at $x = 0.2$, we use the given equation
$$f'_p = \frac{1}{h}\left( \Delta_p - \frac{\Delta^2_p}{2} + \frac{\Delta^3_p}{3} - \dots \right).$$
The step size used in the table is $h = 0.1$. For $x = 0.2$, the finite-difference table gives $\Delta_2 = 0.2525$, $\Delta^2_2 = 0.0316$ and $\Delta^3_2 = 0.0009$. Hence the derivative at $x = 0.2$ is given by
$$f'_2 = 10\,(0.2525 - 0.0158 + 0.0003) = 2.370.$$
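The table construction and the forward interpolation can be sketched as follows (the helper names are ours). Note that the hand calculation above rounds each term to four decimals and obtains 0.0714, while full precision gives about 0.0715:

```python
def difference_table(fs):
    """Forward-difference table: table[j][i] = Delta^j f_i."""
    table = [list(fs)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(fs, h, x0, x, order=3):
    """Newton forward-difference interpolation from x0, truncated at `order`."""
    table = difference_table(fs)
    p = (x - x0) / h
    result, coeff = 0.0, 1.0   # coeff holds the binomial factor C(p, j)
    for j in range(order + 1):
        result += coeff * table[j][0]
        coeff *= (p - j) / (j + 1)
    return result

fs = [-0.0356, 0.1861, 0.4386, 0.7227, 1.0393, 1.3893, 1.7733, 2.1922, 2.6467]
print(newton_forward(fs, 0.1, 0.1, 0.15))  # ≈ 0.0715 (0.0714 with rounded terms)
```

The same `difference_table` supplies $\Delta_2$, $\Delta^2_2$ and $\Delta^3_2$ for the derivative estimate.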
7. (a) What is meant by the least-squares polynomial approximation to a function $f(x)$, and what is meant by the minimax approximation to a function $f(x)$? Explain why the Legendre polynomials are useful in obtaining least-squares polynomial approximations.

(b) Obtain the quadratic least-squares approximation to the function $f(x) = e^{-x}$ in the interval $[0, 2]$ using the Legendre polynomials.

SOLUTION

(a) The least-squares polynomial approximation to a function $f(x)$ is the polynomial $P_n(x)$ of degree $n$ which minimizes the $L_2$ norm of $f(x) - P_n(x)$ over a given range $[a, b]$. The minimax polynomial approximation to a function $f(x)$ is the polynomial $P_n(x)$ of degree $n$ which minimizes the maximum value of $|f(x) - P_n(x)|$ over a given range $[a, b]$.

The Legendre polynomials are well suited for least-squares fitting, since they are orthogonal functions over $[-1, 1]$ with weight function $w(x) = 1$: $(P_i, P_j) = 0$ if $i \ne j$. In least-squares polynomial fitting using functions $\phi_i(x)$, one needs to solve a set of equations for $a$: $Ua = b$, where $U_{ij} = (\phi_i, \phi_j)$ and $b_i = (\phi_i, f)$. The matrix $U$ becomes a diagonal matrix when orthogonal polynomials are used. The coefficients $a$ can then be found directly by
$$a_i = \frac{(\phi_i, f)}{(\phi_i, \phi_i)}.$$

(b) In order to obtain the least-squares approximation, we use the Legendre polynomials up to order 2. These polynomials give the least-squares approximation on the interval $[-1, 1]$ following
$$a_k = \frac{2k + 1}{2} \int_{-1}^{1} f(x)\,P_k(x)\,dx.$$
First we need to transform the interval $[0, 2]$, over which we want to obtain the least-squares approximation, to the interval $[-1, 1]$. This can be achieved by the transformation $t = x - 1$, or $x = t + 1$. The transformed function $F(t)$ is thus $F(t) = e^{-t-1}$.
Now we can apply the formula for the coefficients and the Legendre polynomials:
$$P_0(t) = 1, \qquad P_1(t) = t, \qquad P_2(t) = \tfrac{1}{2}(3t^2 - 1).$$
So
$$a_0 = \frac{1}{2} \int_{-1}^{1} e^{-t-1}\,dt = \frac{1}{2} \left[ -e^{-t-1} \right]_{-1}^{1} = \frac{1 - e^{-2}}{2},$$
$$a_1 = \frac{3}{2} \int_{-1}^{1} t\,e^{-t-1}\,dt = \frac{3}{2} \left( \left[ -t\,e^{-t-1} \right]_{-1}^{1} + \int_{-1}^{1} e^{-t-1}\,dt \right) = \frac{3}{2} \left( -e^{-2} - 1 + 1 - e^{-2} \right) = -3e^{-2},$$
$$a_2 = \frac{5}{4} \int_{-1}^{1} (3t^2 - 1)\,e^{-t-1}\,dt.$$
For the first part of $a_2$,
$$\int_{-1}^{1} 3t^2\,e^{-t-1}\,dt = \left[ -3t^2\,e^{-t-1} \right]_{-1}^{1} + 6 \int_{-1}^{1} t\,e^{-t-1}\,dt = -3e^{-2} + 3 - 12e^{-2} = 3 - 15e^{-2},$$
so that
$$a_2 = \frac{5}{4} \left( 3 - 15e^{-2} - 1 + e^{-2} \right) = \frac{5}{2} \left( 1 - 7e^{-2} \right).$$
Thus the least-squares approximation is given by
$$Q(t) = \frac{1 - e^{-2}}{2} - 3e^{-2}\,t + \frac{5}{2}\left( 1 - 7e^{-2} \right) \frac{3t^2 - 1}{2}.$$
Transforming this back into a least-squares fit on the original interval $[0, 2]$, we use $t = x - 1$ and obtain
$$q(x) = \frac{1 - e^{-2}}{2} - 3e^{-2}(x - 1) + \frac{5}{4}\left( 1 - 7e^{-2} \right)\left( 3(x-1)^2 - 1 \right) = 3 - 15e^{-2} + \left( -7.5 + 49.5e^{-2} \right)x + \left( 3.75 - 26.25e^{-2} \right)x^2.$$
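The closed-form coefficients can be checked by numerical quadrature. The sketch below (helper names ours) compares a composite-Simpson estimate of each $a_k$ with the values derived above:

```python
import math

def legendre(k, t):
    # P0, P1, P2 on [-1, 1]
    return [1.0, t, 0.5 * (3 * t * t - 1)][k]

def simpson_integral(g, a, b, n=200):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# f(x) = e^{-x} on [0, 2], shifted to F(t) = e^{-t-1} on [-1, 1]
F = lambda t: math.exp(-t - 1)
a = [(2 * k + 1) / 2 * simpson_integral(lambda t: F(t) * legendre(k, t), -1, 1)
     for k in range(3)]

# Closed-form values derived above
e2 = math.exp(-2)
exact = [(1 - e2) / 2, -3 * e2, 2.5 * (1 - 7 * e2)]
for ak, ex in zip(a, exact):
    assert abs(ak - ex) < 1e-8
```

The agreement to eight decimal places confirms the integration by parts carried out above.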
8. (a) What is meant by adaptive quadrature?

(b) Suppose that $S_0$, $S_1$, $S_2$ are the approximations to the integral of a function $f(x)$ using Simpson's method over the intervals $[a, b]$, $[a, c]$ and $[c, b]$ respectively, with $c$ the midpoint of the interval $[a, b]$. Since the error in Simpson's method is given by
$$\int_a^b f(x)\,dx - S_0 = -\frac{h^5}{90}\,f^{(4)}(\mu), \qquad h = (b - a)/2,$$
show that
$$\left| \int_a^b f(x)\,dx - S_1 - S_2 \right| < \epsilon$$
if, approximately,
$$|S_0 - S_1 - S_2| < 15\epsilon.$$

(c) Use adaptive quadrature and Simpson's rule to calculate $\int_0^{\pi/2} \sin x\,dx$ with an (absolute) precision of $5 \times 10^{-4}$.

SOLUTION

(a) In adaptive quadrature, the step size used in the quadrature is varied throughout the interval of integration, so that the approximation error is evenly distributed over the interval. Generally this means a smaller step size where the function varies rapidly and a larger step size where the function varies slowly.

(b) We use two methods to evaluate the integral. Let $h = (b - a)/2$. Then
$$\int_a^b f(x)\,dx = S_0 - \frac{h^5}{90}\,f^{(4)}(\mu),$$
and
$$\int_a^b f(x)\,dx = S_1 - \frac{h^5}{2^5 \cdot 90}\,f^{(4)}(\mu') + S_2 - \frac{h^5}{2^5 \cdot 90}\,f^{(4)}(\mu'').$$
We can find a $\mu$ so that
$$\frac{h^5}{2^5 \cdot 90}\,f^{(4)}(\mu') + \frac{h^5}{2^5 \cdot 90}\,f^{(4)}(\mu'') = \frac{2h^5}{2^5 \cdot 90}\,f^{(4)}(\mu).$$
We now make the approximation that $\mu' = \mu'' = \mu$, and use the fact that $c$ is the midpoint of $[a, b]$:
$$\int_a^b f(x)\,dx \approx S_1 + S_2 - \frac{2h^5}{32 \cdot 90}\,f^{(4)}(\mu). \qquad (2)$$
If we now set the two expressions for the integral equal to each other, we get
$$S_0 - \frac{h^5}{90}\,f^{(4)}(\mu) = S_1 + S_2 - \frac{2h^5}{32 \cdot 90}\,f^{(4)}(\mu),$$
or
$$S_0 - S_1 - S_2 = \frac{15}{16} \cdot \frac{h^5}{90}\,f^{(4)}(\mu).$$
If we substitute this into equation (2), we obtain
$$\int_a^b f(x)\,dx \approx S_1 + S_2 - \frac{1}{15}\left( S_0 - S_1 - S_2 \right),$$
which demonstrates what we needed to show.

(c) First we use Simpson's rule,
$$\int_a^b f(x)\,dx \approx \frac{b - a}{6} \left( f(a) + 4 f\!\left( \frac{a + b}{2} \right) + f(b) \right),$$
over the entire interval:
$$\int_0^{\pi/2} \sin x\,dx \approx \frac{\pi}{12} \left( 0 + 4\sin(\pi/4) + \sin(\pi/2) \right) = 1.00228.$$
Then we use Simpson's rule over the subintervals $[0, \pi/4]$ and $[\pi/4, \pi/2]$ separately:
$$\int_0^{\pi/4} \sin x\,dx \approx \frac{\pi}{24} \left( 0 + 4\sin(\pi/8) + \sin(\pi/4) \right) = 0.29293,$$
$$\int_{\pi/4}^{\pi/2} \sin x\,dx \approx \frac{\pi}{24} \left( \sin(\pi/4) + 4\sin(3\pi/8) + \sin(\pi/2) \right) = 0.70720.$$
Adding the two integrals together gives 1.00013. We now have to check whether the difference of the two calculations for the integral from 0 to $\pi/2$ is less than 10 times the specified tolerance, $5 \times 10^{-4}$. (We use 10 rather than 15 as a safety margin: $\mu' = \mu''$ is only an assumption.) The difference is $2.2 \times 10^{-3}$, which is below $5 \times 10^{-3}$. Therefore, within the specified precision of $5 \times 10^{-4}$,
$$\int_0^{\pi/2} \sin x\,dx = 1.00013.$$
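The procedure generalizes to a recursive sketch (helper names ours); here we use the $|S_0 - S_1 - S_2| < 15\epsilon$ acceptance test from part (b) directly, rather than the safety-margin factor of 10 used in the hand calculation:

```python
import math

def simpson(f, a, b):
    """Simpson's rule on [a, b]."""
    c = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(c) + f(b))

def adaptive_simpson(f, a, b, eps):
    """Recursive adaptive Simpson using the |S0 - S1 - S2| < 15*eps test."""
    c = (a + b) / 2
    s0 = simpson(f, a, b)
    s1, s2 = simpson(f, a, c), simpson(f, c, b)
    if abs(s0 - s1 - s2) < 15 * eps:
        return s1 + s2
    # Halve the tolerance for each subinterval and recurse
    return adaptive_simpson(f, a, c, eps / 2) + adaptive_simpson(f, c, b, eps / 2)

val = adaptive_simpson(math.sin, 0.0, math.pi / 2, 5e-4)
print(val)  # ≈ 1.00013, accepted at the first level as in the solution
```

For $\sin x$ on $[0, \pi/2]$ the first-level test already passes, so the recursion stops immediately and returns $S_1 + S_2 = 1.00013$.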
9. (a) Explain how you can use the (Runge-Kutta) midpoint approach to solve the second-order differential equation
$$\frac{d^2 y}{dt^2} = \frac{dy}{dt} + y + 2t,$$
when you know $y$ and $y'$ at $t = 0$. Give the equation(s) to which you apply the midpoint method, and indicate the order in which the calculations need to be performed. Note: you do not need to solve this equation.

(b) Given
$$\frac{dy}{dt} = -2y + t, \qquad y(1) = 2,$$
use the midpoint method to obtain a value for $y(2)$, using a step size of 0.25 and four significant figures.

SOLUTION

(a) In order to apply the midpoint approach to a second-order differential equation, you transform the differential equation into a set of two first-order equations. Define
$$u_1 = y, \qquad u_2 = y'.$$
Then, obviously,
$$u_1' = f_1 = u_2,$$
and, less obviously,
$$u_2' = f_2 = u_2 + u_1 + 2t.$$
When we apply the midpoint method to this system, we first calculate the derivatives of both $u_1$ and $u_2$ at $t = t_0$, multiplied by the step size:
$$k_{1,u_1} = h f_1(u_1, u_2, t_0), \qquad k_{1,u_2} = h f_2(u_1, u_2, t_0).$$
Using these derivatives, we can then estimate the derivatives at the midpoint:
$$k_{2,u_1} = h f_1(u_1 + k_{1,u_1}/2,\; u_2 + k_{1,u_2}/2,\; t_0 + h/2),$$
$$k_{2,u_2} = h f_2(u_1 + k_{1,u_1}/2,\; u_2 + k_{1,u_2}/2,\; t_0 + h/2).$$
It is essential that this second step is performed only after we have completed the first step. Once we have calculated these midpoint derivatives, we can propagate the solution:
$$u_{1,1} = u_{1,0} + k_{2,u_1}, \qquad u_{2,1} = u_{2,0} + k_{2,u_2}.$$
Now we just repeat the midpoint method until we have reached the end of the propagation interval.

(b) We have
$$\frac{dy}{dt} = f(t, y) = -2y + t, \qquad y(1) = 2,$$
a step size of 0.25, and we need to use the midpoint method:
$$y(t + h) = y(t) + h f\!\left( t + h/2,\; y + h f(t, y)/2 \right).$$
So, we start with $(t, y) = (1, 2)$:
$$y(1.25) = y(1) + 0.25\,f(1.125,\; 2 + 0.125\,f(1, 2)), \qquad f(1, 2) = -3,$$
$$y(1.25) = 2 + 0.25\,f(1.125, 1.625) = 2 + 0.25 \times (-2.125) = 1.4688.$$
$$y(1.5) = y(1.25) + 0.25\,f(1.375,\; 1.4688 + 0.125\,f(1.25, 1.4688)), \qquad f(1.25, 1.4688) = -1.6876,$$
$$y(1.5) = 1.4688 + 0.25\,f(1.375, 1.2579) = 1.4688 + 0.25 \times (-1.1408) = 1.1836.$$
$$y(1.75) = y(1.5) + 0.25\,f(1.625,\; 1.1836 + 0.125\,f(1.5, 1.1836)), \qquad f(1.5, 1.1836) = -0.8672,$$
$$y(1.75) = 1.1836 + 0.25\,f(1.625, 1.0752) = 1.1836 + 0.25 \times (-0.5254) = 1.0522.$$
$$y(2) = y(1.75) + 0.25\,f(1.875,\; 1.0522 + 0.125\,f(1.75, 1.0522)), \qquad f(1.75, 1.0522) = -0.3544,$$
$$y(2) = 1.0522 + 0.25\,f(1.875, 1.0079) = 1.0522 + 0.25 \times (-0.1408) = 1.0170.$$
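The four steps above can be sketched as a loop (the function name `midpoint_ode` is ours):

```python
def midpoint_ode(f, t0, y0, h, n_steps):
    """Midpoint (RK2) method: y_{n+1} = y_n + h f(t + h/2, y + (h/2) f(t, y))."""
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)                       # slope at the start of the step
        y = y + h * f(t + h / 2, y + h / 2 * k1)  # advance using midpoint slope
        t = t + h
    return y

# dy/dt = -2y + t, y(1) = 2, step 0.25, four steps to reach t = 2
f = lambda t, y: -2 * y + t
print(round(midpoint_ode(f, 1.0, 2.0, 0.25, 4), 4))  # ≈ 1.017
```

For the second-order equation in part (a), the same loop applies with `y` replaced by the pair $(u_1, u_2)$ and `f` returning both derivatives.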