Department of Applied Mathematics
Preliminary Examination in Numerical Analysis
August 8, 2013

Solutions

1 Root Finding

(a) Let the root be x = α. We subtract α from both sides of x_{n+1} = x_n − f(x_n)/f'(x_n) and write x_n − α = ε_n to obtain

    ε_{n+1} = ε_n − f(α + ε_n)/f'(α + ε_n) = ε_n − (f(α) + ε_n f'(α) + O(ε_n^2)) / (f'(α) + O(ε_n)).

Using f(α) = 0, the RHS simplifies to ε_n − (ε_n + O(ε_n^2)) = O(ε_n^2).

(b) Let the update in the iteration step be x_{n+1} − x_n = Δx_n. We want to achieve

    0 = f(x_n + Δx_n) = f(x_n) + Δx_n f'(x_n) + (1/2)(Δx_n)^2 f''(x_n) + ⋯    (1)

Ignoring the last term and then solving for Δx_n gives the standard Newton iteration: Δx_n = −f(x_n)/f'(x_n). Since this is a very good approximation to the ideal Δx_n, substituting it into the last term of (1) is much better than ignoring that term. Doing this and then solving equation (1) again for Δx_n gives the desired formula.

2 Numerical quadrature

(a) On each subinterval [x_k, x_{k+1}], the interpolation error in the trapezoidal approximation can (by the error formula for polynomial interpolation, in the case of a linear interpolant) be estimated by

    |f(x) − p_1(x)| = |(x − x_k)(x − x_{k+1})/2! · f''(ξ)| = O(h^2)

(since both x − x_k and x − x_{k+1} are of size O(h)). Over an interval of length b − a, the integration error can then be no more than (b − a) · O(h^2), which is still O(h^2).

(b) In the expansion e^{cos x} = Σ_{k=0}^∞ a_k cos kx, the trapezoidal rule is exact for the first (constant) term and then gives the correct (zero) value for all further cosine modes until we reach the first mode (following the constant one) that takes the value one at all the node points. This mode contributes the erroneous amount 2π a_n, which will dominate the total error. With n = 6, the error will thus be approximately 2π · 2(1/2)^6/6! = π/11520 ≈ 3 · 10^{−4}.
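A quick numerical check of part 1(b) is a sketch under one assumption: that the "desired formula" obtained by substituting the Newton step into the quadratic term of (1) is x_{n+1} = x_n − f/f' − f^2 f''/(2 f'^3). Function names here are illustrative, not from the exam.

```python
# Compare plain Newton with the corrected update from part 1(b).
# Assumption: the "desired formula" is x_{n+1} = x_n - f/f' - f^2 f'' / (2 f'^3),
# found by inserting the Newton step into the quadratic term of (1).

def newton(x, f, fp, steps):
    for _ in range(steps):
        x = x - f(x) / fp(x)
    return x

def corrected_newton(x, f, fp, fpp, steps):
    for _ in range(steps):
        dx = -f(x) / fp(x)                            # ordinary Newton step
        dx = -(f(x) + 0.5 * dx**2 * fpp(x)) / fp(x)   # re-solve (1) with it
        x = x + dx
    return x

# Test on f(x) = x^3 - 2, whose root is 2^(1/3).
f   = lambda x: x**3 - 2.0
fp  = lambda x: 3.0 * x**2
fpp = lambda x: 6.0 * x
root = 2.0 ** (1.0 / 3.0)

print(abs(newton(1.0, f, fp, 3) - root))                 # quadratic decay
print(abs(corrected_newton(1.0, f, fp, fpp, 3) - root))  # much smaller: cubic decay
```

After the same three steps, the corrected iteration is many digits closer to the root, consistent with its higher order of convergence.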
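The estimate in part 2(b) can also be checked numerically; the sketch below (helper name illustrative) applies the n-point trapezoidal rule on [0, 2π] to e^{cos x} and compares the n = 6 error against π/11520.

```python
import numpy as np

# Check of part 2(b): for a 2*pi-periodic integrand the n-point trapezoidal
# rule equals 2*pi times the equispaced mean, and its error for exp(cos x)
# at n = 6 should be roughly 2*pi*a_6 = pi/11520 ~ 3e-4.

def trap_periodic(g, n):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return 2.0 * np.pi * np.mean(g(x))

g = lambda x: np.exp(np.cos(x))
exact = trap_periodic(g, 200)            # converged to machine precision
err6 = abs(trap_periodic(g, 6) - exact)
print(err6, np.pi / 11520)               # both around 3e-4
```

The measured error is slightly above π/11520 because that value keeps only the leading term of the Bessel-type coefficient a_6.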
3 Interpolation/Approximation

(a) p_n(x) = Σ_{k=0}^n L_k(x) f_k, where

    L_k(x) = [(x − x_0) ⋯ (x − x_{k−1})(x − x_{k+1}) ⋯ (x − x_n)] / [(x_k − x_0) ⋯ (x_k − x_{k−1})(x_k − x_{k+1}) ⋯ (x_k − x_n)].

(b) Suppose there are two different polynomials p_n(x) and q_n(x) that both take the values f_k at the node locations x_k, k = 0, 1, …, n. The difference p_n(x) − q_n(x) is again a polynomial of degree at most n, but with n + 1 zeros, showing that it must be identically zero, in conflict with the assumption that p_n(x) and q_n(x) were different.

(c) Each of the following three approaches will show that, for n + 1 nodes, the polynomial degree will be 2n + 1.

(i) Direct solution of a linear system: Let the Hermite polynomial be H_{2n+1}(x) = a_0 + a_1 x + a_2 x^2 + ⋯ + a_{2n+1} x^{2n+1}. Imposing all the 2n + 2 requirements gives a square (2n+2) × (2n+2) linear system of the following structure for the coefficients:

    [ 1   x_0   x_0^2   x_0^3   ⋯ ] [ a_0      ]   [ f_0  ]
    [ ⋮                           ] [ a_1      ]   [ ⋮    ]
    [ 1   x_n   x_n^2   x_n^3   ⋯ ] [ ⋮        ] = [ f_n  ]
    [ 0   1     2x_0    3x_0^2  ⋯ ] [          ]   [ f_0' ]
    [ ⋮                           ] [          ]   [ ⋮    ]
    [ 0   1     2x_n    3x_n^2  ⋯ ] [ a_{2n+1} ]   [ f_n' ]

(ii) Based on Lagrange interpolation: With L_k(x) denoting the Lagrange cardinal functions, the polynomials

    ĥ_i(x) = (x − x_i) L_i(x)^2   and   h_i(x) = [1 − 2 L_i'(x_i)(x − x_i)] L_i(x)^2

have the properties that ĥ_i(x_j) = h_i'(x_j) = 0 for 0 ≤ i, j ≤ n, while h_i(x_j) = ĥ_i'(x_j) = δ_ij (equal to 0 if i ≠ j and to 1 if i = j). The Hermite interpolation polynomial is then given by

    H(x) = Σ_{i=0}^n h_i(x) f_i + Σ_{i=0}^n ĥ_i(x) f_i'.

(iii) Based on Newton interpolation: One can generalize the standard divided-difference layout for Newton's interpolation method by duplicating each node's information (location and function value) and, in the next column, inserting the derivative information. Following that, one proceeds as usual. In the case of just two nodes, the divided-difference tables for regular interpolation and for Hermite interpolation take the forms

    x_0   f_0
                ·
    x_1   f_1

vs.

    x_0   f_0
                f_0'
    x_0   f_0          ·
                ·             ·
    x_1   f_1          ·
                f_1'
    x_1   f_1

where the entries marked by dots are completed in the regular manner. In either case, the interpolation polynomial is created from the resulting numbers in the top right diagonal. This Hermite extension works by
the principle that each node can be seen as the limit of two nodes coming together, with the function values approaching each other in just the right way that the derivative information is also satisfied.
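Approach (iii) can be sketched in code (an illustrative sketch, not the exam's reference implementation; function names are assumptions): the divided-difference table is built with each node duplicated, and derivative values fill the slots where the usual formula would produce 0/0.

```python
import numpy as np

def hermite_divided_differences(x, f, fp):
    """Newton divided differences with each node duplicated.

    x, f, fp: nodes, function values, derivative values (each length n+1).
    Returns the Newton-form coefficients and the duplicated node list z.
    """
    n = len(x)
    z = np.repeat(x, 2)                   # each node listed twice
    q = np.repeat(f, 2).astype(float)     # column of function values
    col = np.empty(2 * n - 1)             # first divided-difference column
    for i in range(2 * n - 1):
        if z[i + 1] == z[i]:
            col[i] = fp[i // 2]           # derivative replaces the 0/0 entry
        else:
            col[i] = (q[i + 1] - q[i]) / (z[i + 1] - z[i])
    coeffs = [q[0], col[0]]
    prev = col
    for k in range(2, 2 * n):             # remaining columns, as usual
        cur = np.array([(prev[i + 1] - prev[i]) / (z[i + k] - z[i])
                        for i in range(2 * n - k)])
        coeffs.append(cur[0])
        prev = cur
    return np.array(coeffs), z

def newton_eval(coeffs, z, t):
    """Evaluate the Newton-form polynomial at t (nested multiplication)."""
    p = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        p = p * (t - z[k]) + coeffs[k]
    return p

# Hermite-interpolate sin(x) from values and derivatives at two nodes.
x = np.array([0.0, 1.0])
f, fp = np.sin(x), np.cos(x)
c, z = hermite_divided_differences(x, f, fp)
print(abs(newton_eval(c, z, 0.5) - np.sin(0.5)))  # small cubic-Hermite error
```

With two nodes the result is the cubic Hermite interpolant, so the mid-interval error is of size |f''''(ξ)| (t − x_0)^2 (t − x_1)^2 / 4!.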
4 Linear algebra

(a) Since A is an antisymmetric matrix, its eigenvalues are purely imaginary or zero. Since it is a matrix with real entries, the roots of the characteristic polynomial come in complex-conjugate pairs (if they are complex-valued). For an odd-sized matrix, these two conditions force at least one of the eigenvalues to be zero.

(b) For an even-sized matrix, the product of a pair of complex-conjugate eigenvalues, (iβ)(−iβ) = β^2, is always non-negative, and the conclusion follows.

(c) For the non-zero eigenvalues, the limit matrix will have a block-diagonal structure with two-by-two blocks.

5 ODEs

(a) A general multistep method with s steps to solve

    y' = f(t, y),   y(0) = y_0

is of the form

    Σ_{m=0}^{s} a_m y_{n+m} = h Σ_{m=0}^{s} b_m f(t_{n+m}, y_{n+m}),   n = 0, 1, 2, …,

where a_s = 1. For the test problem y' = λy, y(0) = y_0, consider solutions to the recurrence

    Σ_{m=0}^{s} a_m y_{n+m} = λh Σ_{m=0}^{s} b_m y_{n+m},   n = 0, 1, 2, ….

The region of absolute stability of a method is a domain λh ∈ D ⊂ ℂ such that the solution of the corresponding linear recurrence is bounded. The method is stable if zero belongs to the region of absolute stability. Replacing all computed values y_n by the values of the solution y(t_n) and using the ODE to evaluate the local error, we define the order of the method as the largest p ≥ 1 (if it exists) such that

    Σ_{m=0}^{s} a_m y(t + mh) − h Σ_{m=0}^{s} b_m y'(t + mh) = O(h^{p+1})   as h → 0.

In fact, for a multistep method, one can find the order by taking y to be a polynomial and finding the largest degree for which

    Σ_{m=0}^{s} a_m y(t + mh) − h Σ_{m=0}^{s} b_m y'(t + mh) = 0.

The method is consistent if its order p ≥ 1. These easily verified algebraic properties of the method, namely its consistency and stability, imply its convergence (a property which otherwise requires a fairly lengthy derivation).
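Parts 4(a)-(b) can be illustrated numerically; this sketch assumes the conclusion in (b) is that the determinant is non-negative (the exam's exact wording of the conclusion is not shown here).

```python
import numpy as np

# Illustration of 4(a)-(b): a real antisymmetric matrix has purely
# imaginary eigenvalues; odd dimension forces a zero eigenvalue; for
# even dimension the determinant is non-negative (assumed conclusion).
rng = np.random.default_rng(0)

for n in (4, 5):
    B = rng.standard_normal((n, n))
    A = B - B.T                                      # antisymmetric: A^T = -A
    lam = np.linalg.eigvals(A)
    assert np.allclose(lam.real, 0.0, atol=1e-10)    # purely imaginary
    if n % 2 == 1:
        assert np.min(np.abs(lam)) < 1e-10           # odd size: zero eigenvalue
    else:
        assert np.linalg.det(A) >= -1e-10            # even size: det >= 0

print("antisymmetric eigenvalue properties confirmed")
```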
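The polynomial order test described in 5(a) is easy to automate. The sketch below uses exact rational arithmetic; the 2-step Adams-Bashforth coefficients serve purely as an illustrative example and are not taken from the exam.

```python
from fractions import Fraction as F

# Order test from 5(a): substitute y(t) = t^d into
# sum_m a_m y(t + m h) - h sum_m b_m y'(t + m h)
# and find the largest d for which it vanishes. Testing at t = 0, h = 1
# checks the coefficient C_d = sum_m a_m m^d - d sum_m b_m m^(d-1),
# which is exactly the d-th order condition.

def multistep_order(a, b, max_d=8):
    s = len(a) - 1
    order = 0
    for d in range(1, max_d + 1):
        expr = sum(a[m] * F(m) ** d for m in range(s + 1)) \
             - sum(b[m] * d * F(m) ** (d - 1) for m in range(s + 1))
        if expr != 0:
            break
        order = d
    return order

# Example: 2-step Adams-Bashforth, y_{n+2} = y_{n+1} + h(3/2 f_{n+1} - 1/2 f_n)
a = [F(0), F(-1), F(1)]
b = [F(-1, 2), F(3, 2), F(0)]
print(multistep_order(a, b))   # 2: AB2 is second order
```

Forward Euler (a = [-1, 1], b = [1, 0]) comes out as order 1 under the same test, as expected.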
(b) For an explicit multistep method, the equation for the roots of the characteristic polynomial has the form

    φ(u) = u^s + lower-order terms = 0.

Since the polynomial can be written in terms of its roots as φ(u) = (u − u_1)(u − u_2) ⋯ (u − u_s), and in the region of absolute stability all roots satisfy |u_k| ≤ 1, we conclude that, in that region, all coefficients of φ are bounded (independently of λh). However, if the region of absolute stability were unbounded, some of the coefficients of φ would become arbitrarily large, since they depend linearly on λh. Hence the region of absolute stability of an explicit method must be bounded.

6 PDEs

The Lax-Wendroff method for u_t + c u_x = 0 is second order in both the space and time variables. To see this, we write the scheme as

    (u_j^{n+1} − u_j^n)/h_t = −c (u_{j+1}^n − u_{j−1}^n)/(2 h_x) + (1/2) c^2 h_t (u_{j+1}^n − 2 u_j^n + u_{j−1}^n)/h_x^2

and consider the truncation error

    ψ(t, x) = (u(t + h_t, x) − u(t, x))/h_t + c (u(t, x + h_x) − u(t, x − h_x))/(2 h_x)
            − (1/2) c^2 h_t (u(t, x + h_x) − 2 u(t, x) + u(t, x − h_x))/h_x^2.

We have

    (u(t + h_t, x) − u(t, x))/h_t = u_t + (1/2) u_tt h_t + O(h_t^2),
    (u(t, x + h_x) − u(t, x − h_x))/(2 h_x) = u_x(t, x) + O(h_x^2),
    (u(t, x + h_x) − 2 u(t, x) + u(t, x − h_x))/h_x^2 = u_xx(t, x) + O(h_x^2),

and obtain

    ψ(t, x) = u_t + (1/2) u_tt h_t + c u_x(t, x) − (1/2) c^2 u_xx(t, x) h_t + O(h_t^2) + O(h_x^2).

From the equation, we have u_x = −(1/c) u_t and u_xx = (1/c^2) u_tt, so that

    ψ(t, x) = u_t + (1/2) u_tt h_t − u_t − (1/2) u_tt h_t + O(h_t^2) + O(h_x^2) = O(h_t^2) + O(h_x^2).

For the stability analysis, we write the scheme as

    u_j^{n+1} = A u_{j+1}^n − B u_j^n + C u_{j−1}^n,
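The second-order claim for Lax-Wendroff can be checked empirically. This is a sketch: the wave speed c = 1, the smooth initial profile sin x, and the CFL number 0.5 are illustrative choices; halving the grid spacing (with h_t/h_x fixed) should reduce the error by roughly a factor of four.

```python
import numpy as np

def lax_wendroff_error(N, c=1.0, T=1.0, cfl=0.5):
    """Advect u_t + c u_x = 0 on [0, 2*pi) with periodic BCs; return the
    max-norm error at time T against the exact solution sin(x - c*T)."""
    hx = 2.0 * np.pi / N
    ht = cfl * hx / c
    steps = int(round(T / ht))
    ht = T / steps                      # land exactly on time T
    mu = ht / hx
    x = hx * np.arange(N)
    u = np.sin(x)
    for _ in range(steps):
        up = np.roll(u, -1)             # u_{j+1} (periodic)
        um = np.roll(u, +1)             # u_{j-1} (periodic)
        u = u - 0.5 * c * mu * (up - um) \
              + 0.5 * (c * mu) ** 2 * (up - 2 * u + um)
    return np.max(np.abs(u - np.sin(x - c * T)))

e1, e2 = lax_wendroff_error(64), lax_wendroff_error(128)
print(e1 / e2)   # close to 4: halving h cuts the error ~4x (second order)
```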
where μ = h_t/h_x and

    A = (1/2)(cμ)^2 − (1/2) cμ,   B = (cμ)^2 − 1,   C = (1/2)(cμ)^2 + (1/2) cμ.

Using {e^{ijkh_x}}_{j=0}^{N−1} as an eigenvector (with index k = 0, …, N − 1), we compute

    A e^{i(j+1)kh_x} − B e^{ijkh_x} + C e^{i(j−1)kh_x} = e^{ijkh_x} (A e^{ikh_x} − B + C e^{−ikh_x})
        = e^{ijkh_x} [1 − (cμ)^2 + (cμ)^2 cos(kh_x) − i cμ sin(kh_x)].

Computing the absolute value of the eigenvalue λ_k = 1 − (cμ)^2 + (cμ)^2 cos(kh_x) − i cμ sin(kh_x), we have

    |λ_k|^2 = [1 − (cμ)^2 + (cμ)^2 cos(kh_x)]^2 + (cμ)^2 sin^2(kh_x)
            = [1 − 2(cμ)^2 sin^2(kh_x/2)]^2 + (cμ)^2 sin^2(kh_x).

Setting a = (cμ)^2, a > 0, and x = sin^2(kh_x/2), 0 ≤ x ≤ 1, we have, as a function of x,

    (1 − 2ax)^2 + 4ax(1 − x) = 1 − 4ax^2 + 4a^2x^2.

The condition a ≤ 1 implies that 1 − 4ax^2 + 4a^2x^2 = 1 − 4a(1 − a)x^2 ≤ 1. Thus, we obtain stability under the CFL condition |cμ| ≤ 1, i.e., h_t/h_x ≤ 1/|c|.
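The amplification factor derived above can also be scanned numerically (a small sketch; the function name is illustrative): |λ_k| stays at or below 1 for every Fourier mode exactly when |cμ| ≤ 1.

```python
import numpy as np

# Scan the Lax-Wendroff amplification factor over theta = k*h_x in [0, 2*pi]:
# lambda_k = 1 - (c*mu)^2 + (c*mu)^2 cos(theta) - 1j*c*mu*sin(theta).

def max_amplification(cmu, ntheta=1000):
    theta = np.linspace(0.0, 2.0 * np.pi, ntheta)
    lam = 1 - cmu**2 + cmu**2 * np.cos(theta) - 1j * cmu * np.sin(theta)
    return np.max(np.abs(lam))

print(max_amplification(0.8))   # <= 1: stable under the CFL condition
print(max_amplification(1.2))   # > 1: unstable once the CFL condition fails
```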