Numerical Methods for Ordinary Differential Equations


Numerical Methods for Ordinary Differential Equations
Answers of the exercises

C. Vuik, S. van Veldhuizen and S. van Loenhout

08

Delft University of Technology
Faculty of Electrical Engineering, Mathematics and Computer Science
Delft Institute of Applied Mathematics

Copyright 08 by Delft Institute of Applied Mathematics, Delft, The Netherlands No part of this work may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission from Delft Institute of Applied Mathematics, Delft University of Technology, The Netherlands

Contents

1 Introduction
  1.1 Solutions
2 Interpolation
  2.1 Solutions
3 Numerical differentiation
  3.1 Solutions
4 Nonlinear equations
  4.1 Solutions
5 Numerical integration
  5.1 Solutions
6 Numerical time integration of initial-value problems
  6.1 Solutions
7 The finite-difference method for boundary-value problems
  7.1 Solutions
8 The instationary heat equation
  8.1 Solutions

Chapter 1 Introduction

1.1 Solutions

1. We notice that the n-th order Taylor polynomial about the point x = c of an (n+1) times differentiable function is given by

P_n(x) = f(c) + (x-c)f'(c) + (x-c)^2/2! f''(c) + ... + (x-c)^n/n! f^(n)(c).

The upper bound for the error in x = d is |P_n(d) - f(d)| = |R_n(d)|, in which, for a ξ between x and c,

R_n(x) = (x-c)^(n+1)/(n+1)! f^(n+1)(ξ).

The second order Taylor polynomial of f(x) = x^3 about the point x = 1 is then given by

P_2(x) = f(1) + (x-1)f'(1) + (x-1)^2/2! f''(1) = 1 + 3(x-1) + 3(x-1)^2.

It now follows that P_2(0.5) = 0.25. The upper bound for the error in x = 0.5 is given by

|R_2(0.5)| = |(0.5-1)^3/3! f'''(ξ)| = |(0.5-1)^3/3!| · 6 = 0.125.

The actual error can easily be calculated with |P_2(0.5) - f(0.5)| = |0.25 - 0.125| = 0.125. Note that for this problem the actual error is equal to the upper bound of the error.
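As a quick numerical check of this result, the short Python sketch below (an illustration only, the function names are free choices) evaluates P_2 and f in x = 0.5 and compares the actual error with the error bound.

```python
# Exercise 1: second order Taylor polynomial of f(x) = x^3 about c = 1,
# evaluated in x = 0.5, compared with the Lagrange error bound.

def f(x):
    return x ** 3

def p2(x):
    # P_2(x) = 1 + 3(x - 1) + 3(x - 1)^2
    return 1 + 3 * (x - 1) + 3 * (x - 1) ** 2

x = 0.5
actual_error = abs(p2(x) - f(x))        # |0.25 - 0.125| = 0.125
# |R_2(0.5)| = |x - 1|^3 / 3! * max|f'''(xi)|, with f'''(xi) = 6 for all xi
error_bound = abs(x - 1) ** 3 / 6 * 6   # = 0.125

print(actual_error, error_bound)        # both print 0.125
```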

2. First, note that f^(i)(x) = e^x for all i in N. So the n-th order Taylor polynomial of f(x) = e^x about the point x = 0 is given by

P_n(x) = 1 + x + x^2/2 + ... + x^n/n!.

The error R_n(x) for x in [0, 0.5] must be smaller than 10^-6. So we need an n such that |R_n(x)| <= 10^-6 for x in [0, 0.5], in which, for a ξ between 0 and x,

R_n(x) = x^(n+1)/(n+1)! e^ξ.

To make sure that R_n(x) is smaller than 10^-6 on the entire interval, we need to find an n such that

(0.5)^(n+1)/(n+1)! e^0.5 <= 10^-6.   (1.1)

Conclude this yourself. Trying some values of n, for example n = 5, 6, 7, gives that n = 7 is the smallest value which satisfies condition (1.1).

3. We use the polynomial P(x) = 1 - x^2/2 to approximate f(x) = cos x. It appears that if we take the third order Taylor polynomial about the point x = 0, this is equal to P(x). The first, second and third derivatives of f(x) are given by

f(x) = cos x,  f'(x) = -sin x,  f''(x) = -cos x,  f'''(x) = sin x.

The third order Taylor polynomial of f(x) = cos x about the point x = 0 is given by

P_3(x) = cos 0 - x sin 0 - x^2/2 cos 0 + x^3/3! sin 0 = 1 - x^2/2 = P(x).

An upper bound for the error is

|R_4(x)| = |x^4/4! f^(iv)(ξ)| = |x^4/4! cos ξ|,

for a ξ between 0 and x. We need to find an upper bound of R_4(x). Notice that |cos ξ| <= 1 and x^4 <= (1/2)^4 on the given interval [-1/2, 1/2]. So an upper bound for R_4(x) is (1/2)^4/4! ≈ 0.0026.

4. It is given that x = 1/3, so we know that fl(x) = 0.333 · 10^0, because we need to calculate with a precision of three digits. In the same way it follows from y = 5/7 that fl(y) = 0.714 · 10^0. To give an example we will calculate fl(fl(x)+fl(y)) and x+y, and also the rounding error:

fl(x) + fl(y) = (0.333 + 0.714) · 10^0 = 1.047 · 10^0.

So fl(fl(x)+fl(y)) is equal to 1.05 · 10^0, because we need to calculate with a precision of three digits. The rounding error, also called the absolute error, is given by

|(x+y) - fl(fl(x)+fl(y))| = |22/21 - 1.05| ≈ 2.38 · 10^-3.

The other calculations with the floating point numbers can be done in the same way. The answers are given in the table below.

  operation   fl(x) o fl(y)    x o y    absolute error
     +         0.105 · 10^1    22/21    2.38 · 10^-3
     -        -0.381 · 10^0    -8/21    0.48 · 10^-4
     *         0.238 · 10^0     5/21    0.95 · 10^-4
     /         0.466 · 10^0     7/15    0.66 · 10^-3

5. An example program is:

mu=1;
while 1+mu>1
  mu=mu/2;
end

The end value of µ is then the machine precision of your system.
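The same loop can be written in Python; the sketch below is only an illustration, and the printed values assume IEEE double precision.

```python
# Halve mu until 1 + mu is no longer distinguishable from 1 in floating point.
mu = 1.0
while 1.0 + mu > 1.0:
    mu = mu / 2.0

print(mu)  # about 1.1e-16 (2^-53) in IEEE double precision

import sys
print(sys.float_info.epsilon)  # 2^-52, i.e. twice the value found by the loop
```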

Chapter Interpolation Solutions (a) Theorem gives the following equation and we need to prove that f(x) L (x) = (x x 0)(x x )f (ξ) () f(x) L (x) 8 (x x 0 ) max ξ (a,b) f (ξ) () We use the fact that (x x 0)(x x ) has a maximum in x = x 0+x and we do following steps f(x) L (x) = (x x 0)(x x )f (ξ) (x 0 +x x 0 )( x 0 +x x ) max ξ (a,b) f (ξ) = 8 (x x 0 ) max ξ (a,b) f (ξ) (b) The difference between the exact polynomial L and the perturbed linear interpolation polynomial ˆL is bounded by ( ) L (x) ˆL x x + x x 0 (x) ε x x 0 using the proof of Theorem 3 For x [x 0,x we can easily see that ( ) ( ) L (x) ˆL x x + x x 0 x x+x x 0 (x) ε = ε = ε x x 0 x x 0 because x x > 0 and x x 0 > 0 For x x we obtain the following ( ) L (x) ˆL x x + x x 0 (x) ε = x x 0 4 ( x x +x x 0 x x 0 ) ε

( ) ( (x x 0 )+(x x ) = ε = + x x ) ε x x 0 x x 0 because x x < 0 and x x 0 > 0 In x = 3, our function f(x) = x has value f(3) = 3 = 0333333 We are going to approximate f(3) with the second degree Lagrange polynomial, using nodes x 0 =,x = 5 en x = 4 The second degree Lagrange polynomial of f(x) is given by L (x) = f(x k )L k (x) = = L 0(x)+ 5 L (x)+ 4 L (x) The approximation of f(3) is equal to L (3), which is given by L (3) = L 0(3)+ 5 L (3)+ 4 L (3) k=0 Following the theorem of Lagrange, we can calcute L i (x), i = 0,, in x = 3: So L (3) is equal to L 0 (3) = (3 x )(3 x ) (x 0 x )(x 0 x ) = (3 5)(3 4) ( 5)( 4) = 05, L (3) = (3 x 0)(3 x ) (x x 0 )(x x ) = (3 )(3 4) (5 )(5 4) = 4 3, and L (3) = (3 x 0)(3 x ) (x x 0 )(x x ) = (3 )(3 5) (4 )(4 5) = 6 L (3) = ( 05)+ 5 4 3 + 4 6 = 035 3 (i) The cubic spline s of the function f(x) = x is an interpolation spline of degree 3 with nodes x 0 = 0,x = en x = With the definition of the natural spline of degree 3 it follows that s is a third degree polynomial on each subinterval [0, and [,, in which there is a special condition in x = for the connection We would like to determine s( ) Note that it is sufficient to determine the spline s 0(x) In Section 5 is shown that we need to solve the following equation for b ( f f (h 0 +h )b = 3 f ) f 0 h h 0 (3) You can also see in this section that b 0 = b = 0 With the knowledge that h i = x i+ x i and f i = f(x i ) we obtain ( (+)b = 3 0 ), 5

and so we know that b = 0 With formula a 0 = 3h 0 (b b 0 ) we obtain a 0 = 0 With the next formula we got that c 0 = c 0 = f f 0 b 0 +b h 0 h 0 3 The value of d 0 is given by d j = f j which is 0 for d 0 The spline s 0 (x) on the interval [0, is then given by s 0 (x) = a 0 (x x 0 ) 3 +b 0 (x x 0 ) +c 0 (x x 0 )+d 0 = x We conclude that de spline s in x = is exact, namely s( ) = (ii) In this case the function f is given by f(x) = x So we need to solve the following equation ( f f (h 0 +h )b = 3 f ) f 0 h h 0 After substituting the known values, we obtain 4b = 6, so b = 5 We know that b 0 = b = 0 and also that The coefficient c 0 is equal to a 0 = 3h 0 (b b 0 ) = 3 (5 0) = c 0 = f f 0 b 0 +b h 0 h 0 3 = We also know that d 0 = f 0 = 0 The spline s 0 (x) on the interval [0, is then given by Substituting the value of x gives s 0 (x) = x3 + x s 0 ( ) = 5 6 We conclude that the cubical spline is not exact for the function f(x) = x ; f( ) = 4 4 (a) The Lagrange polynomial is given by L 3 (x) = 0 (x )(x )(x 3) (0 )(0 )(0 3) + (x 0)(x )(x 3) ( 0)( )( 3) +05 (x 0)(x )(x 3) ( 0)( )( 3) + (x 0)(x )(x ) (3 0)(3 )(3 ) 6

This can be simplified to L 3 (x) = x3 4 x + 3 x In order to find the speed we take the derivative L 3 (x) = 4 x x+ 3 and to determine the accelaration we differentiate again L 3(x) = x The acceleration is zero at x =, but this corresponds to a minimum acceleration The maximum acceleration is achieved at x = 3 and is the corresponding speed is 7 km/min (b) Following the procedure in the book, we find the following spline: s (x) = 8 5 x 30 x3, 0 x ; s (x) = + 3 30 (x ) 0 (x ) + 6 (x )3, x ; s 3 (x) = + 5 (x )+ 5 (x ) 4 30 (x )3, x 3 For the velocity we get v (x) = 8 5 0 x, 0 x ; v (x) = 3 30 5 (x )+ (x ), x ; v 3 (x) = 5 + 4 5 (x ) 5 (x ), x 3 The maximum speed is achieved at the end point and is 7 5 km/h km/min, which is 68 7
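Exercise 2 of this chapter, the Lagrange interpolation of f(x) = 1/x with nodes x_0 = 2, x_1 = 2.5 and x_2 = 4 evaluated in x = 3, can be checked with the small Python sketch below (the helper name lagrange_eval is only a free choice for illustration).

```python
# Second degree Lagrange interpolation of f(x) = 1/x at x = 3,
# using the nodes from exercise 2.

def lagrange_eval(nodes, values, x):
    """Evaluate the Lagrange interpolation polynomial through (nodes, values) at x."""
    total = 0.0
    for k, (xk, fk) in enumerate(zip(nodes, values)):
        lk = 1.0
        for j, xj in enumerate(nodes):
            if j != k:
                lk *= (x - xj) / (xk - xj)
        total += fk * lk
    return total

nodes = [2.0, 2.5, 4.0]
values = [1.0 / xk for xk in nodes]

print(lagrange_eval(nodes, values, 3.0))  # 0.325, while f(3) = 0.333333...
```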

Chapter 3 Numerical differentiation

3.1 Solutions

1. First we notice that f in C^3[x-h, x+h] means that f is three times continuously differentiable on [x-h, x+h]. So the first, second and third derivative of f are continuous on [x-h, x+h]. We need to prove:

f'(x) - Q(h) = O(h^2),

in which Q(h) is given by

Q(h) = (f(x+h) - f(x-h)) / (2h).

The Taylor expansion of f(x+h) about x with truncation error O(h^3) is

f(x+h) = f(x) + hf'(x) + h^2/2 f''(x) + O(h^3).   (3.1)

We have the following expansion for f(x-h) about x with truncation error O(h^3):

f(x-h) = f(x) - hf'(x) + h^2/2 f''(x) + O(h^3).   (3.2)

Subtracting (3.2) from (3.1) gives us

f(x+h) - f(x-h) = 2hf'(x) + O(h^3),   (3.3)

which is the same as

(f(x+h) - f(x-h)) / (2h) = f'(x) + O(h^2).

The order of the truncation error is now given by

f'(x) - Q(h) = f'(x) - (f(x+h) - f(x-h)) / (2h) = f'(x) - f'(x) - O(h^2) = O(h^2).
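A short numerical illustration of this second order behaviour, sketched in Python: halving h should reduce the error of Q(h) by roughly a factor of four. The test function sin x is a free choice.

```python
import math

def q(f, x, h):
    # Central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
exact = math.cos(x)  # derivative of sin
for h in [0.1, 0.05, 0.025, 0.0125]:
    print(h, abs(exact - q(math.sin, x, h)))
# The error decreases by about a factor 4 per halving of h, so Q(h) is O(h^2).
```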

The position of the ship during start up is given by S(t) = 05at The velocity is approximated by a backward difference with step size h, so S b S(t) S(t h) (t) h To determine the truncation error we expand S(t h) about point t with a Taylor polynomial of order ; S(t h) = S(t) hs b h (t)+ S (ξ), (34) with ξ (t h,t) Given the fact that S (t) = a, we can re-order (34) to So the truncation error is ah S b S(t) S(t h) (t) = h ah We can determine the measurement error in the speed as follows; we already now that the measurement error is less than 0 meters So the approximated speed with measurement errors is given by; S b S(t)±0 S(t h)±0 (x) h This means that the measurement error in the approximated speed is bounded by 0 h The total error in the approximated speed is the sum of the truncation error and the measurement error With a = 0004 the total error (te) is given by te(h) = 000h+ 0 h The error in the calculated velocity is minimal in the minimum of the function te For positive step size h, the minimum is equal to h = 00 This can be obtained by setting te (h) = 000 0 equal to zero The total error in the speed is then equal to h te(00) = 04m/s 3 (a) An upper bound for the truncation error is h f(4) (ξ) h because f (4) (ξ) = sinξ An upper bound for the rounding error is 4ǫ h So an upper bound for the total error E tot is given by E tot h + 4ǫ h To minimize this upper bound, we need to calculte when the derivative is equal to zero h 6 8ǫ h 3 = 0 h opt = (48ǫ) 4 6 0 4 9

h Q (h) sin() Q (h) -077364 006786 0-084077 000070099 00-08446 70e-06 000-08447 70083e-08 0000-08447 305e-09 e-05-08447 75798e-07 e-06-08455 78068e-05 e-07-08367 00088037 e-08-0 06875 (b) The table below gives Q (h) and the error for different values of h The experiments show that the optimal value of h is about 0 4 This value is close to the value determined in part (a) 4 The central difference formula Q(h), approaches the unknown value M, in this case f (), in which the error has the form M Q(h) = c p h p Given that the error in the central difference approximation is of order two, it follows that p = f(x) = sinx, h = 0 and f (x) is approximated by a central difference, so f () Q(0) = sin() sin(09) 0 = 05394 Conclude for yourself that the error, cos sin() sin(09) 0, is equal to 09 0 3 For h = 0 we obtain that Q(0) = 05376 With the formula c p = Q(h) Q(h) (h) (( )p ), we found that with p = and h = 0 that c p = 009 Now it follows that M Q(h) = c p h p = 09 0 3 This is a good estimate of the error by Richardson s extreapolation 5 We look for a Q(h) such that f (x) Q(h) = O(h k ), with k maximal, with f(x), f(x+h) and f(x+h) given When the error is minimal, the k is maximal Notice that Q(h) is given by Q(h) = α 0 h f(x)+ α h f(x+h)+ α f(x+h) (35) h 0

Taylor expansion of f(x), f(x+h) and f(x+2h) about x gives

f(x) = f(x),   (3.6)
f(x+h) = f(x) + hf'(x) + h^2/2 f''(x) + O(h^3),   (3.7)
f(x+2h) = f(x) + 2hf'(x) + 4h^2/2 f''(x) + O(h^3).   (3.8)

The values of α_0, α_1 and α_2 can be obtained by substituting (3.6), (3.7) and (3.8) into (3.5). Note that the coefficients in front of f(x) and f''(x) must be equal to zero and that the coefficient in front of f'(x) must be equal to one. This gives the following 3x3 system

f(x):   α_0/h + α_1/h + α_2/h = 0,
f'(x):  α_1 + 2α_2 = 1,   (3.9)
f''(x): (h/2)α_1 + 2hα_2 = 0.

Solving (3.9) yields

α_0 = -3/2,  α_1 = 2,  α_2 = -1/2.

Such that

Q(h) = (-3f(x) + 4f(x+h) - f(x+2h)) / (2h).

The truncation error to approximate f'(x) is given by O(h^2), because

f'(x) - Q(h) = f'(x) - f'(x) + O(h^2) = O(h^2).
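The derived one-sided formula can be checked numerically in the same spirit; the Python sketch below applies it to f(x) = e^x (a free choice of test function) and shows the error decreasing roughly as h^2.

```python
import math

def q_onesided(f, x, h):
    # One-sided second order approximation of f'(x) derived in exercise 5
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

x = 1.0
exact = math.exp(x)  # derivative of exp
for h in [0.1, 0.05, 0.025, 0.0125]:
    print(h, abs(exact - q_onesided(math.exp, x, h)))
# Errors decrease roughly by a factor 4 per halving of h, confirming O(h^2).
```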

Chapter 4 Nonlinear equations

4.1 Solutions

1. We want to determine p_3 on the interval [-2, 1.5]. First we need to check if f(a) = f(-2) and f(b) = f(1.5) have opposite signs. With f(x) = 3(x+1)(x-1/2)(x-1) we find that

f(a) = f(-2) = -22.5,  f(b) = f(1.5) = 3.75.

So they have opposite signs. To start, we take a_0 = a and b_0 = b, and the new approximation of the zero is computed by

p_0 = 1/2 (a_0 + b_0) = -0.25.

The stopping criterion is not satisfied, because f(p_0) ≠ 0. So we construct a new interval [a_{n+1}, b_{n+1}]. f(p_0)f(a_0) < 0, so we choose a_1 = a_0 and b_1 = p_0. This leads to p_1 = -1.125 and f(p_1) = -1.295. Now we have that f(p_1)f(a_1) > 0, so we choose a_2 = p_1 and b_2 = b_1. This gives the value p_2 = -0.6875 and f(p_2) = 1.879. So f(p_2)f(a_2) < 0, so we choose a_3 = a_2 and b_3 = p_2, such that p_3 = -0.90625. We can determine p_3 in the same way for the interval [-1.5, 1.5].

2. (i) We start with the following fixed-point method:

p_n = (20 p_{n-1} + 2/p_{n-1}^2) / 21.   (4.1)

Define the function g as

g(x) = (20x + 2/x^2) / 21.

Recursion (4.1) can now be written as p_n = g(p_{n-1}). The fixed point is the value of p for which g(p) = p. This leads to the following equation

p = (20p + 2/p^2) / 21.

Which is equivalent to Rewriting leads to p 3 = and so p = 0p+ p p = 3 The fixed point is p = 3 The convergence speed is given by g (p) Conclude yourself that g (x) = 0 x 3, and so g (p) = 8 0857 If p 0 is equal to then with p = g(p 0 ) we find p = With p = g(p ) we have 0p 0 + p 0 = 0+ = 4 = 954 In the same way we have p = 0 954+ 954 p 3 48 8 (ii) For the next fixed-point method, we define g(x) as g(x) = x x3 3x, so p n = g(p n ) Conclude yourself that the fixed point p = g(p) is obtained by solving p 3 = 0 where we assume that 3p 0 Again we find p = 3, note that our assumption is satisfied The convergence speed is given by g (p) The first derivative of g(x) is g (x) = 3 4 x 3 So we found g (p) = 3 4 = 0 With Theorem 44 we can see that the second method converges faster With p 0 = we get the following values of p i, i =,,3 p = 76667, p = 530 and p 3 = 3747 3

3. (a) The linear interpolation polynomial l(x) is given by

l(x) = f(p_0) + (x - p_0)/(p_1 - p_0) (f(p_1) - f(p_0)).

See also Chapter 2.

(b) The new point p_2 is the point where l(p_2) = 0. With (a) we have to solve p_2 from

f(p_0) + (p_2 - p_0)/(p_1 - p_0) (f(p_1) - f(p_0)) = 0.

Solving this gives

p_2 = p_0 + (p_1 - p_0) f(p_0) / (f(p_0) - f(p_1)).

A general method for computing p_n from p_{n-1} and p_{n-2} can be obtained in the same way. Then we have the following equation

p_n = p_{n-2} + (p_{n-1} - p_{n-2}) f(p_{n-2}) / (f(p_{n-2}) - f(p_{n-1})).

(c) With the formula from (b) it follows that

p_2 = 1 + (2 - 1) f(1) / (f(1) - f(2)) = 4/3.

So we have the following expression for p_3:

p_3 = 2 + (4/3 - 2) f(2) / (f(2) - f(4/3)) = 7/5 = 1.4.

4. We determine an approximation of the zero of f(x) = x - cos x on the interval [0, π/2] using the Newton-Raphson method. With start approximation p_0 = 0 we can generate the sequence {p_n}_{n=0}^N, with N such that the error is smaller than 10^-4, using

p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1}).

We take the following stop criterion: stop if the fourth decimal does not change in two successive iterations. Newton-Raphson has quadratic convergence, so this means that the error is smaller than 0.0001. Every iterate p_n should be determined with four decimals. Starting with p_0 = 0 we obtain that

p_1 = 0 - (0 - cos 0) / (1 + sin 0) = 1.

p_2 is given by

p_2 = 1 - (1 - cos 1) / (1 + sin 1) ≈ 0.7504.

Notice that we approximate the solution of the equation x = cos x.

In the same way we obtained

p_3 = 0.7391,  p_4 = 0.7391.

p_3 and p_4 are identical in four decimals, which means that the error is smaller than 0.0001.

5. The system that needs to be solved is

x^2 - y - 3 = 0,
-x + y^2 + 1 = 0.

In vector notation, this is equal to F(x) = 0 with x = [x, y]^T. The Jacobian matrix of F(x) is

J(x) = [ 2x  -1 ;  -1  2y ].

With starting vector x^(0) = [1, 1]^T the first iteration is given by

x^(1) = x^(0) - J(x^(0))^(-1) F(x^(0)),

with F(x^(0)) = [-3, 1]^T and J(x^(0))^(-1) = (1/3) [ 2  1 ;  1  2 ], so

x^(1) = [1, 1]^T - (1/3) [ 2  1 ;  1  2 ] [-3, 1]^T = [1, 1]^T - (1/3) [-5, -1]^T = [8/3, 4/3]^T.

In the same way we found

x^(2) = [2.0980, 1.0784]^T.

We used the following formula for calculating the inverse of a matrix. If

A = [ a  b ;  c  d ]

and ad - bc ≠ 0, then A is invertible and A^(-1) is given by

A^(-1) = 1/(ad - bc) [ d  -b ;  -c  a ].

This is also known as Cramer's rule for matrices.
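The Newton-Raphson iteration of exercise 5 can be reproduced with a few lines of Python; the sketch below is only an illustration, with the system and starting vector taken from the exercise above.

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 - y - 3.0, -x + y**2 + 1.0])

def J(v):
    x, y = v
    return np.array([[2.0 * x, -1.0], [-1.0, 2.0 * y]])

v = np.array([1.0, 1.0])                   # starting vector x^(0)
for k in range(5):
    v = v - np.linalg.solve(J(v), F(v))    # Newton-Raphson step
    print(k + 1, v)
# The first two iterates are [8/3, 4/3] and approximately [2.0980, 1.0784];
# the iteration converges to the root (2, 1).
```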

Chapter 5 Numerical integration 5 Solutions We want to compute the following integral b a f(x)dx, with a =, b = and f(x) = (0x) 3 +000 We assume that the relative rounding error in the function values is less than ε: f(x) ˆf(x) f(x) ε, Now we can determine the following upperbound for the relative rounding: b a f(x)dx b ˆf(x)dx a b Kε, a f(x)dx in which K is called the condition number of the integral The condition number is defined by b a K = f(x) dx b a f(x)dx See page 59 of the textbook (a) The relative rounding error is Kε To calculate the condition number we need to determine the following two integrals: ( (0x) 3 +000 ) dx and The first integral gives us (0x) 3 +000 dx ( (0x) 3 +000 ) dx = 000 6

We need to split the second integral into two parts So we obtain (0x) 3 +000 00 dx = (0x) 3 000dx + 00 (0x) 3 +000dx = 500 The relative rounding error is equal to 500 000 ε = 5 05 ε (b) The relative rounding error in the function values is less than (or equal to) ε, so the absolute rounding error is equal to f(x) ˆf(x) f(x) ε With f(x) = (0x) 3 + 000 it follows that the absolute rounding error of the integral when we choose ε = 4 0 8 is given by ε(0x) 3 dx = 0 ε(0x) 3 dx = 0 5, The truncation error of the Midpoint rule is given by b a 4 h M, in which M is the maximum of f (x), x [, The second derivative of f is f (x) = 6000x So the maximal truncation error is equal to 500h We have a good assumption for the intergral if the rounding error and the truncation error are in balance We have to notice that both errors can t be avoided (in our case) and so we have a good estimation if the errors are equal So the step size h must fulfil 500h = 0 5, so h 0 4 In this exercise we determine or estimate the integral 05 x4 dx with the following methods: (i) Exact (ii) Trapezoidal (iii) Composite Trapezoidal When 05 x4 dx is calculated exact, we obtain 05 x 4 dx = ( ) 5 x5 = 05 5 ( (05)5 ) = 09375 7

With the Trapezoidal rule we receive 05 x 4 dx 05 ( (05) 4 4) = 06565 An estimate of the error can be determined with Theorem 533 of the book The estimate of the error with f (x) = x is given by ( 05) 3 ( ) max x [x R,x L f (x) = (05) 3 = 05 The real error is 09375-06565 = 0079 Thesameintegral iscalculated withthecompositetrapezoidal rule 3 We have h = 05, so x 0 = 05, x = 075 en x =, the integral is approximated by ( x 4 dx 05 (05)4 +(075) 4 + ) 4 = 09 05 The error is estimated by Richardson s method So we must also approximate the integral with the composite Trapezoidal rule with h = 05 This approximation is 0983 From the error estimation follows that α =, so c p = Q(h) Q(h) h ( ) = 090 See Chapter 3 It follows that the estimate of the error is equal to M N(h) = c p (h) p = 008 The real error is 008, so the estimate with Richardson is very good 3 For the Trapezoidal rule we use intervals with the same lengths, so h = 05 and x 0 =, x = 5 and x = 5 Using the composite Trapezoidal rule, we obtain 5 ( x 7 dx 05 ()7 +(5) 7 + ) (5)7 = 345 Trapezoidal rule: xr x L f(x)dx xr xl (f(x L)+f(x R)) Theorem 533 For m = max x [xl,x R f (x) holds xr xr xl f(x)dx (f(x L)+f(x R)) x L m(xr xl)3 3 Composite Trapezoidal: I T = h n ( (f(x k )+f(x k )) = h f(xl)+f(xl +h)+ +f(xr h)+ ) f(xr), k= in which h = x R x L and x n k = x L +kh For the error follows: xr f(x)dx I T M(xR xl)h where,again, M = max x [xl,x x L R f (x) 8

The error for this rule is calculated with f (x) = 4x 5, (5 ) 3 ( ) max x [x R,x L f (x) = (05) 33894 = 33 Using Simpson s rule 4 gives us 5 ( x 7 dx 05 6 7 + 3 (5)7 + 3 (5)7 + 3 (375)7 + ) 6 (5)7 = 30798 The error is calculated with Theorem 534 of the book 5 The estimate of the error with f (4) = 840x 3 is given by (5 ) 5 880 ( ) max x [x R,x L f(4) (x) = (05) 5835 880 = 003 If we compare our results with Table 53 of the book, we can see that the Simpson s rule is better than the -point Gauss quadrature rule 4 Simpson s rule n ( f(x k )+4f ( xk +x ) ) k +f(x k ) I S = h 6 k= ( = h 6 f(a)+ 3 f(a+ h)+ 3 f(a+h)+ 3 f(a+ 3 h)+ + 3 f(b h)+ 6 f(b) 5 Theorem 534 For m 4 = max x [xl,x R f (4) (x) holds xr xr xl f(x)dx (f(x L)+4f(x M)+f(x R)) x L 6 880 m4(xr xl)5 ) 9
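As an illustration of the composite Trapezoidal rule used in exercise 2 of this chapter, the Python sketch below (the helper name composite_trapezoid is a free choice) recomputes the approximations of the integral of x^4 over [0.5, 1] for 1, 2 and 4 subintervals.

```python
def composite_trapezoid(f, a, b, n):
    # Composite Trapezoidal rule with n subintervals of length h = (b - a)/n
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return h * total

f = lambda x: x ** 4
exact = (1.0 - 0.5 ** 5) / 5.0            # 0.19375
for n in (1, 2, 4):
    approx = composite_trapezoid(f, 0.5, 1.0, n)
    print(n, approx, abs(exact - approx))
# n = 1 gives 0.265625, n = 2 gives 0.21191..., n = 4 gives 0.19830...
```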

Chapter 6 Numerical time integration of initial-value problems

6.1 Solutions

1. The Modified Euler method approximates the solution to an initial-value problem y' = f(t,y) in the following way. Suppose that the numerical approximation w_n of y(t_n) is known. Then we can compute the following predictor and corrector:

predictor:  w̄_{n+1} = w_n + Δt f(t_n, w_n),
corrector:  w_{n+1} = w_n + Δt/2 (f(t_n, w_n) + f(t_{n+1}, w̄_{n+1})).

The initial-value problem in this exercise is given by

y' = 1 + (t - y)^2,  2 <= t <= 3,
y(2) = 1.

The solution is given by y(t) = t + 1/(1-t). With a time step Δt = 0.5 we need to determine the following values of w for 2 <= t <= 3: w_0 = y(2), w_1 ≈ y(2.5) and w_2 ≈ y(3). From the initial condition we obtain that w_0 = y(2) = 1, so the error in this point is equal to zero.

The approximation of y(t) at the time t = 2.5 follows with the Modified Euler method,

w̄_1 = w_0 + Δt f(t_0, w_0) = 1 + 0.5 (1 + (2-1)^2) = 1 + 0.5 · 2 = 2,

w_1 = w_0 + Δt/2 (f(t_0, w_0) + f(t_1, w̄_1)) = 1 + 0.25 (2 + (1 + (2.5-2)^2)) = 1 + 0.25 · 3.25 = 1.8125.

The solution y(t) gives the value 1.8333 in t = 2.5, so the error in the numerical approximation is 0.0208. In the second step of the Modified Euler method we obtain

w̄_2 = 2.5488,  w_2 = 2.4816.

The solution in t = 3 gives y(3) = 2.5, so the error in the numerical approximation is 0.0184. A summary of the results is shown in the table below.

  i   time t   y(t)     w_i      error
  0   2        1        1        0
  1   2.5      1.8333   1.8125   0.0208
  2   3        2.5      2.4816   0.0184

2. The local truncation error τ_{n+1} of the Midpoint method is defined by

τ_{n+1} = (y_{n+1} - z_{n+1}) / Δt,

in which

z_{n+1} = y_n + Δt f(t_n + Δt/2, y_n + Δt/2 f(t_n, y_n)).

Taylor expansion of y_{n+1} about y_n gives

y_{n+1} = y_n + Δt y'_n + Δt^2/2 y''_n + O(Δt^3).

Using this in τ_{n+1} gives

τ_{n+1} = y'_n + Δt/2 y''_n - f(t_n + Δt/2, y_n + Δt/2 f(t_n, y_n)) + O(Δt^2).   (6.1)

The second step is to expand f(t_n + Δt/2, y_n + Δt/2 f(t_n, y_n)) in a Taylor series about (t_n, y_n). This gives

f(t_n + Δt/2, y_n + Δt/2 f(t_n, y_n)) = f(t_n, y_n) + Δt/2 f_t(t_n, y_n) + Δt/2 f_y(t_n, y_n) y'_n + O(Δt^2).

We will use that y = f(t,y) The second derivative in time of y is then given by y = f t +f y y So and y n = f(t n,y n ) (6) y n = (f t +f y y ) n = f t (t n,y n )+f y (t n,y n )y n (63) Using the Taylor series of f(t n + t,y n + t f(t n,y n )) in (6) gives us τ n+ = y n + t y n f(t n,y n ) t f t(t n,y n ) t f y(t n,y n )y n +O( t ) (64) To show that the Midpoint rule is of order two, it is sufficient to see that y n + t y n f(t n,y n ) t f t(t n,y n ) t f y(t n,y n )y n = 0 (65) With (6) and (63) we see that (65) holds The conclusion is that the Midpoint rule is of order two 3 (i) The Trapezoidal method to solve an initial-value problem is given by w n+ = w n + t (f(t n,w n )+f(t n+,w n+ )) (66) Note that the Trapezoidal method is an implicit method to approximate the solution of the initial-value problem y = f(t,y) The amplification factor Q(λ t) is determined in the following way; consider the test equation y = λy Using the Trapeziodal method on this test equation gives a Q(λ t) such that w n+ = Q(λ t)w n Using (66) on the test equation y = λy gives: w n+ = w n + t(λw n +λw n+ ) (67) Restructuring of w n+ and w n in (67) gives ( ) λ t w n+ = (+ ) λ t w n So and thus w n+ = + λ t λ tw n, Q(λ t) = + λ t λ t

(ii) In paragraph 64 in the book on page 74 we find that the truncation error is given by τ n+ = y n+ Q(λ t)y n t The exact solution of the test equation is y n+ = y n e λ t When we combine these results we can obtain that the truncation error of the test equation is determined by the difference of the exponential function and the amplification factor Q(λ t): τ n+ = eλ t Q(λ t) y n (68) t We determine the difference between the exponential function and the amplification factor in the following way: () Taylor expansion of e λ t () Taylor expansion of λ t multiplied by + λ t (3) Subtract: () () The Taylor expansion of e λ t around 0 is: e λ t = +λ t+ (λ t) +O(h 3 ) (69) The Taylor expansion of around 0 is: λ t λ t = + λ t+ 4 λ t +O( t 3 ) (60) With (60) we obtain that + λ t is equal to λ t + λ t λ t = +λ t+ (λ t) +O( t 3 ) (6) To determine e λ t Q(λ t), we need to substract (6) of (69) e λ t Q(λ t) = O( t 3 ) (6) The truncation error is now given by using (6) in (68) We obtain that τ n+ = O( t ) (iii) In paragraph 64 on page 7 it is shown that a numerical method is stable if and only if Q(λ t) 3

The amplification factor of the Trapezoidal method is Q(λ t) = + λ t λ t We need to show that if λ 0 then it follows that Q(λ t) for all step sizes t > 0 From Q(λ t) we know that the following inequalty must hold + λ t λ t Multiplying this inequality with λ t gives + λ t + λ t λ t (63) Note that ( λ t) 0, so the sign in the inequality remains the same To show stability for all t we have to check that (63) holds for all t We start with the left-hand inequality (63), + λ t + λ t This inequality holds for all t > 0 if λ 0 We receive the same result for the right-hand inequality (63), + λ t λ t This inequality holds for every t > 0 The conclusion is that the method is stable for all t > 0 if λ 0 4 We consider the nonlinear initial-value problem y = f(t,y) with f(t,y) = +(t y) To investigate the stability of Modified Euler at the point (t =,y = ) we need the following results: The amplification factor of Modified Euler Linearisation of f(t, y) The amplification factor of Modified Euler is See paragraph 64 from the book Q(λ t) = +λ t+ λ ( t) The linearisation of f(t,y) about the point (t,y) = (,) is f(t,y) f(,)+(t ) f (,)+(y ) f t y (,) = = +(t ) (y ) = y +t = y +g(t) The initial-value problem after linearisation is given by y = y +g(t) 4

For stability it is sufficient to consider y = y Note that this is the test equation with λ = Modified Euler is stable if Q(λ t) with λ = So the method is stable if for the step size t the following holds t+ t The left-hand inequality t+ t holds if t 0 The right-hand inequality t+ t if So the stability condition is given by t 0 t 5 (a) The method in this exercise is defined by w n+ = w n +β tf(t n,w n ) (64) w n+ = w n+ +( β) tf(t n +β t, w n+ ) (65) We follow the steps as in determining the local truncation error in the Modified Euler method in paragraph 64 in the book We start with replacing the numerical approximation w n by the exact value y n, so z n+ = y n +β tf(t n,y n ) (66) z n+ = z n+ +( β) tf(t n +β t, z n+ ) (67) We start with the Taylor expansion of f(t n +β t, z n+ ) about the point (t n,y n ): f(t n +β t, z n+ ) = f(t n,y n )+β t(f t ) n +( z n+ y(t n ))(f y ) n + β t (f tt ) n + + ( z n+ y(t n )) (f yy ) n +β t( z n+ y(t n ))(f ty ) n (68) With equation (66) we obtain that (68) is equal to f(t n +β t, z n+ ) = f(t n,y n )+β t(f t +ff y )+O( t ) (69) Subsituting (69) in (65) gives us z n+ = y n +β tf(t n,y n )+( β) tf(t n,y n )+( β)β t (f t +ff y ) n +O( t 3 ) = = y n + tf(t n,y n )+( β)β t (f t +ff y ) n +O( t 3 ) The Taylor expansion of y n+ is given by y n+ = y n + ty n + t y n +O( t 3 ) 5

Using the fact that y = f(t,y) we can obtain that y n+ z n+ is equal to ( ) y n+ z n+ = t β( β) t y n +O( t 3 ) = O( t ) The last equality follows from the fact that β( β) is not equal to zero for every β Now it s easy to see that the local truncation error τ n+ is given by y n+ z n+ t = O( t) (b) The amplification factor is determined by using the method on the test equation y = λy With f(t n,w n ) equal to λw n in (64) and (65) we can obtain: Substituting w n+ in (60) gives us w n+ = w n +βλ tw n w n+ = w n+ +( β)λ t w n+ (60) w n+ = w n +βλ tw n +( β)λ t(w n +βλ tw n ) = = ( +βλ t+( β)λ t+β( β)(λ t) ) w n = = ( +λ t+β( β)(λ t) ) w n = = Q(λ t)w n The amplification error is Q(λ t) = +λ t+β( β)(λ t) (c) The nonlinear differential equation y = f(y) with f(y) = y 4y need to be linearized about y = We obtain that the linearisation of f(y) about y = is given by f(y) f( )+(y )f ( ) = y + So the linearized differential equation is equal to y = y + For stability it is sufficient to check the equation y = y This is the test equation with λ = The test equation is stable if the following holds: Q(λ t) voor λ = en β = Using the value of Q(λ t) gives that for step size t the inequality must satisfy t+ t The right inequality holds if t The left inequality also satisfies this condition (and we can t find a more strict condition) The conclusion is that the method is stable in this point if t 6 The vector notation for the Forward Euler method is given by w n+ = w n + tf(t n,w n ) 6

In our case we have the value t =, and f(w n ) = [ w n = [ w (n) w (n) 4w (n) w (n) +e tn 3w (n) +w (n) With the initial values we know the vector w 0 : [ 0 w 0 = So the first step of Forward Euler is w = w 0 + tf(w 0 ) = [ 0 = +0 [ 03 = [ 3 7 We use Paragraph 68 of the book In this exercise we have a second order initial value problem We first consider a simplified initial-value problem for a linear differential equation So we rewrite our differential equation to a system of two first order differential equations: { y = y y = y +y +te t t with initial values { y (0) = 0 y (0) = 0 Using Euler Forward with t = 0 gives us [ [ [ y () y (0) y () = y (0) +0 = [ 0 0 The numerical solution is zero after one time step = y (0) y (0) +y (0) +0 e 0 0 The solution of this problem is 6 t3 e t te t +e t t At t = 0, the solution has the value y(0) = 8939 0 6 The error is thus equal to the value of the solution 8 Stable integration of a system of differential equations, written as y = Ay, is only possible if all eigenvalues of the matrix A have a real part smaller or equal to zero = 7

In this excersie we consider a mathematical pendulum The angle as a function of t satisfies: φ + g φ = 0 (6) L This second order differential equation can also be written as a system of two first order differential equations So (6) is equivalent to { φ = φ φ = g L φ (6) If we define vector ψ as then (6) can be written as ψ = [ φ φ ψ = Sψ met S = The eigenvalues of S are calculated by setting, S λi = 0 equal to zero The eigenvalues are the roots of λ + g L = 0 [ 0 g L 0 It follows that λ, = ±i stable g L The real part of both eigenvalues is zero, so the system is 9 We consider the system { y = 95y +995y y = 97y 997y, with initial values { y (0) = y (0) = We define the following vector on time t = n t [ w (n) w n+ = w (n) Euler Forward is given by and Euler Backward by with f(w n ) = w n+ = w n + tf(w n ), w n+ = w n + tf(w n+ ), [ 95w (n) +995w (n) 97w (n) 997w (n) 8

[ (a) Choose step size t = 0, with w 0 = w is given by we obtain that for Euler Forward w = w 0 + tf(w 0 ) [ = +0 [ 640 = 6368 [ 95 +995 ( ) 97 997 ( ) = Euler Backward gives us the following linear system that needs to be solved: [ w () [ [ [ 95 995 w () w () = +0 97 997 w () (63) Reordering in (63) gives [ w () w () = [ [ 95 995 I 0 97 997 = [ 07 995 97 97 85 [ = [ 83 = 490 [ Notice that the inverse matrix can be determined with Kramer s rule The solution in t = 0 is given by [ [ y (0) 887 = y (0) 49 (b) Note that system can be written as { y = 95y +995y y = 97y 997y, y = in which the vector y is definied as [ 95 995 97 997 y = [ y y y = Ay, As we can see in Paragraph 683, Euler Forward is stable if the following condition holds for all eigenvalues: +λ i t <, i =,, (64) with t the step size We start with determining the eigenvalues of A So the eigenvalues are determined by setting 95 λ 995 97 997 λ = 0 9

So the following equation should be solved λ +80λ + 600 = 0 (λ+)(λ + 800) = 0 λ = of λ = 800 To satisfy (64) it is sufficient to sattisfy this for λ = 800 Note yourself that with λ = we also automatically fullfill (64) So we look for all t such that We obtain that t < 0005 800 t < (c) We need to do the same calculations as in (a) For Euler Forward we have w = w 0 + tf(w 0 ) [ = [ 64 = 36 +0000 [ 95 +995 ( ) 97 997 ( ) = For Euler Backwards we solve the system [ w () [ [ 095 0995 w () = + 097 0997 Note that the solution can be determined by [ w () [ 997 0995 w () = 0957 097 08805 The solution on t = 0000 is given by [ y (0000) y (0000) = [ 6 39 [ [ w () w () = [ 60 4 (65) With this step size, both methods give good results However, Euler Backward is more expensive in calculation time Conclusion: With a stiff system, Euler Backward gives a good answer for all step sizes t, while Euler Forward only gives good results with (very small) step sizes, that has to satisfy the stability condition 0 The solution of the differential equation y = y t + in t = 0 is approximated in this exercise by Euler Forward and Runge Kutta 4 (RK4) We have already seen the Euler Forward in previous excersises The RK4 method is explained in Paragraph 65 and is definied as follows w n+ = w n + 6 [k +k +k 3 +k 4, (66) 30

in which the predictors k,,k 4 are given by k = tf(t n,w n ) (67) k = tf(t n + t,w n + k ) (68) k 3 = tf(t n + t,w n + k ) (69) k 4 = tf(t n + t,w n +k 3 ) (630) (i) In the first part we will approximate y(0) with the Euler Forward method with step size h = 005 This means that we need to use four steps of the system w n+ = w n + t(w n t n +) With initial value y(0) = w 0 =, we obtain the following w i s: w = w 0 + t(w 0 0 +) = = +005 = 05375, w = w +005(w 005 +) = 057599 w 3 = w +005(w 0050 +) = 065574 w 4 = w 3 +005(w 3 0075 +) = 0655498 The approximation of y(0) is given by w 4 = 0655498 (ii)in this part we are going to approximate y(0) with the RK4 method, explained by (66) t/m (630), with step size t = 0 Note that in (67) t/m (630) f(t n,w n ) = w n t n + We start with calculating the four predictors, in which y(0) = w 0 = So So for w we obtain: k = tf(t 0,w 0 ) = = t(w 0 t 0 +) = = 0( 0 +) = 05 k = 0(w 0 + k 005 +) = 0575 k 3 = 0(w 0 + k 005 +) = 05765 k 4 = 0(w 0 +k 3 0 +) = 064763 w = w 0 + 6 [k +k +k 3 +k 4 = = + 09444863 = 0657444 6 With RK4, the approximaton of y(0) is given by w = 0657444 3

The solution of the differential equation is y(t) = et +t +t+ At t = 0 is the solution thus given by y(0) = 0657445 Conclusion: We obtain that the approximations of both methods are good and that the work we need to do is almost the same While the error in the approximation of RK4 is equal to 0 7 and of Euler Forward is equal to 0 3, we prefer RK4 To determine the order of the error, we use the formula given in Paragraph 66 of the book: Error estimate if p is unknown: wn/ t w4 t N/4 wn t w t N/ = p In which wn/ t and w t N denote the approximation for y(t) = y(nδt) using N/ time steps of length t and usingntime steps of length t, respectively Inthe firstcolumn we obtain 0750686 075790 075080 0750686 = 458, so the order of Method is For the second column we obtain so the order of Method is 07309 07079 0740587 07309 = 0797, 3

Chapter 7 The finite-difference method for boundary-value problems 7 Solutions The condition number κ(a) of a symmetric matrix A is the quotient of the largest and smallest absolute eigenvalue of A as in Paragraph 73 of the book The matrix A is de symmetric matrix A = [ The eigenvalues of A can be easily calculated by λ λ = λ 4λ+3 = (λ 3)(λ ) (7) Setting (7) equal to zero gives us the eigenvalues λ max = 3 and λ min = It follows that k(a ) = 3 The matrix A is the symmetric matrix [ 00 99 A = 99 00 The eigenvalues of A are λ max = 99 and λ min = Thus the condition number of A is equal to k(a ) = 99 The solution of the system A w = f with f = [99,99 T is easy to see, namely x = [, T We take f = [,0 T as an error in the right-hand side In Paragraph 73 we obtain the following approximation for the relative error: w κ(a ) f f The norm of the vector is w = [, T is w = + = 33 w (7)

In the same way we obtain f = 99 en f = Substituting in (7) gives us w 0707 We will now determine w and then we will compare w with the estimated upper bound If the right-hand side f has error f = [,0 T, then we can determine w with A (w+ w) = (f + f), (73) because w is known Reorder known and unknown in (73) gives A w = (f + f) A w Wit Gauss elimination it follows that w is equal to [ 0505 w = 04975 The norm of w is w = (0505) + ( 04975) 05000 So x is indeed smaller than 0707, as predicted The boundary-value probem can be rewritten as { y (x)+y(x) = x,x [0,, y(0) = y() = 0 (a) The interval [0, can be divided into n + equidistant subintervals with length x = n+ The nodes are given by x j = j x for j = 0,,,n + The numerical approximation of y j = y(x j ) is denoted by w j, which is computed using the differential equation in x j : y j +y j = x j voor j n (74) The finite-difference method approximates the second derivative in (74) by a central-difference formula to obtain When j =, equation (75) is given by w j w j +w j+ x +w j = j x (75) w 0 w +w x +w = x, (76) 34

in which w 0 = 0 because of the boundary condition y(0) = 0 For j = we have the following equation w w x +w = x (77) (75) stays the same for all < j < n For j = n we obtain w n w n x +w n = n x (78) We can solve this system of equations with m equations and m unknown values The system consists of linear equations, so we can rewrite it in the form Aw = f With we obtain and A = x + x x w = x + f = x w w n x 4 x, (n ) x n x x x + x x + x (b) Note that A is symmetric This means that all eigenvalues of A are real values To determine the largest and smallest eigenvalues of A we use Gershgorin s theorem Theorem 7 (Gershgorin circle theorem) The eigenvalues of a general n n matrix A are located in the complex plane in the union of circles where z C z a ii n a ij, Using Gershgorin on row k of A, for < k < n, gives that the eigenvalues are in the union of circles z x x 35 j i j=

For rows and n of A z x x So all eigenvalues of A lie in the union of circles z x x and (79) z x x, (70) in the complex plane We note that the smallest (real) eigenvalue is equal to λ min = and the biggest λ max = + 4 x (c) Since A is symmetric, the condition number of A is definied as In Paragraph 73 it is shown that Together with we obtain 3 The differential equation κ(a) = λ max λ min w w κ(a) f f f f 0 4, w w κ(a) f f + 4 x 0 4 = y (x) = sinx x [0,π, integrated two times gives the general solution Using the boundary conditions y(0) = y(π) = 0 gives ( + ) x 0 4 y(x) = sinx+c x+c (7) y(0) = c = 0 y(π) = sinπ+c π +c = 0, which gives us c = c = 0 The solution of the boundary-value problem is thus y(x) = sinx 36

The numerical solution is determined by taking N = and approximating the second derrivative with central differences With N =, we divide the area in 3 equivalent intervals The numerical approximation consists of the following points y(0) w 0 = 0 becauseoftheboundaryvalue, y( π 3 ) w y( π 3 ) w y(π) w 3 = 0 becauseoftheboundaryvalue So we need to solve the system { w w = sin( π 3 ) x w +w = sin( π 3 ) x, in which x = π 3 With Gauss elimination the solution is given by u = 09497 u = 09497 The global error, y j w j is equal to -00837 for j =, 4 We obtain the boundary-value problem y (x)+y(x) = 0,x [0, y(0) = 0 y () = 0, The grid points are given by x j = jh with x = 7 equation in the point x j is given by for j = 0,,,4 The differential Discretization of the equation above gives in which w j y(x j ) y (x j )+y(x j ) = 0 w j +w j w j+ x +w j = 0, j 3 (7) For j = in (7) we obtain with the boundary value w 0 = w w x +w = x The boundary condition y () = 0 is discretizated by: This results for j = 3 in (7) in y () = 0 w 4 w 3 x = 0 w +w 3 x +w 3 = 0 37

If we define the vector of unknowns by w = and taking the right-hand equal to the zero vector, the disretization of this boundary problem is given by Aw = f, with A = + x 0 x + x 0 + x The matrix A is thus symmetric, which means that all eigenvalues are real Using Gershgorin s circle theorem gives us the fact that the eigenvalues are in the union of circles C = 3 i= C i, with ( ) C : z x + x ( ) C : z x + x en ( ) C 3 : z x + x All eigenvalues lie in the union of C and C 3 All (real) eigenvalues in the circles are greater than, so positive We conclude that A has only positive real eigenvalues w w w 3, 38

Chapter 8 The instationary heat equation

8.1 Solutions