MA3241/EC3121b COURSE KIT fall 2014.


PREAMBLE: DELETIONS AND ADDITIONS. These electronic notes are a rough approximation to what I will cover in class (MA3241). You also need Chap. 8 (to be sent later) but do NOT need Chap. 13. The printed course kit (from the book-store) is a better approximation, but this e-version is of course storable in e-readers, as requested by at least one student.

Prof. J. M. McNamee

CHAPTER 1. INTRODUCTION. ERRORS.

SEC. 1. INTRODUCTION.

Frequently, in fact most commonly, practical problems do not have neat analytical solutions. As examples of analytical solutions to mathematical problems, consider the following 5 cases:-

(i) ∫_0^1 x^2 dx = [x^3/3]_0^1 = 1/3
(ii) d/dx (x^4) at x = 1; solution = 4x^3 = 4
(iii) Evaluate e^1 (solution from calculator).
(iv) Find the roots of x^2 + 5x + 5 = 0 [solutions x = (−5 ± √5)/2].
(v) Solve dy/dx + 2xy = 0 for y = 1 when x = 0 [solution y = e^(−x^2)]

Now consider the following rather similar problems:-

(vi) ∫_0^1 e^(−x^2) cos x dx.
(vii) Find the derivative at x = 1 of the function given by the following table:- x … f(x) …
(viii) Evaluate J_0(1.2345), using tables at intervals of .01;
(ix) Find the roots of x^5 + x^4 + 3x^3 + 2x^2 + x + 1 = 0;
(x) Solve (dy/dx)^2 = x^2 + y^2;

The last 5 problems cannot be solved by the standard methods of Calculus, Algebra or Differential Equations, at least not by any obvious method. However, they can be solved by methods which deal entirely with actual numbers, avoiding direct use of the rules of Calculus etc., although they are based on quite advanced Calculus, etc. These methods are therefore called NUMERICAL METHODS. Sometimes a problem can be solved by analytical methods, but can also be solved more easily by numerical methods. For example a cubic polynomial can be solved algebraically, i.e. by a formula involving the coefficients, but the resulting expressions are so cumbersome that a numerical method such as Newton's would be much quicker. Moreover, even a so-called analytic method often reduces to some kind of numerical method in the end, e.g. ∫_0^10° cos x dx = sin 10° = .1736 by tables or calculator, but the table for sin x did not write itself; it had to be

calculated numerically by a formula such as sin x = x − x^3/6 + x^5/120 − … for each value of x (and likewise for a calculator). Again, many functions in real life (e.g. Engineering and Science) are given only as a table, making analytical methods impossible. The above problems illustrate most of the methods we shall describe, such as integration, differentiation, interpolation, solution of non-linear equations, and differential equations.

To show how numerical methods may be derived, consider ex. (vi) above. This is ∫_0^1 y dx, where y = e^(−x^2) cos x. We may form a table of y for x = 0.0, .1, .2, ..., 1.0, and hence draw a graph of y.

[Figure: graph of y = e^(−x^2) cos x over [0,1], with circumscribed rectangles ABFE, XCGF, ... above the curve and inscribed rectangles UXFE, VYGF, ... below it]

Now ∫ y dx = area under the graph < (area of rectangles ABFE + XCGF + ... etc. = S_U) but > (sum of rectangles UXFE + VYGF + ... etc. = S_L) [provided the curve is smooth]. As we can calculate all these areas, we can get an upper and a lower bound on our integral. A good approximation is the average of these two bounds, (1/2)(S_U + S_L). Also the error ≤ area ABXU etc. = (S_U − S_L). By taking sufficiently small steps in the x-direction, we may estimate our integral as closely as we like.

Similarly we may find the derivative in (vii) by drawing a graph and measuring the slope of the line between f at .9 and 1.1, for remember dy/dx = lim_{Δx→0} Δy/Δx ≈ BC/AC = .210/.2 = 1.05; or we may draw a tangent at P; then dy/dx = tan θ = FE/DE.

[Figure: graph of f against x, showing the chord through A, B with horizontal leg AC, and the tangent at P with angle θ and triangle FDE]

CLASS EXERCISE. Find the derivative of e^x by graphical methods, given the following table:- x … e^x … [HINT. Draw a graph near the point concerned].
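The rectangle bounds just described can be computed directly. A minimal sketch (in Python, not part of the original notes): since the integrand of (vi) decreases on [0,1], left endpoints give the upper sum S_U and right endpoints the lower sum S_L.

```python
# Upper and lower rectangle bounds for I = integral_0^1 exp(-x^2) cos(x) dx,
# using 10 steps of width .1; the integrand is decreasing on [0,1].
import math

def y(x):
    return math.exp(-x * x) * math.cos(x)

h = 0.1
S_U = sum(y(0.1 * r) * h for r in range(10))        # left endpoints (upper)
S_L = sum(y(0.1 * (r + 1)) * h for r in range(10))  # right endpoints (lower)
estimate = 0.5 * (S_U + S_L)

print(S_L, S_U)   # bounds on I
print(estimate)   # averaged estimate; error at most (S_U - S_L)/2
```

Halving the step h halves the gap S_U − S_L, so the estimate can be made as accurate as we like, exactly as the text claims.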

An example from algebra. Solve

a_1 x + b_1 y = c_1
a_2 x + b_2 y = c_2

(i) We may give a general (analytic) solution by determinants (Cramer's rule):

x = |c_1 b_1; c_2 b_2| / |a_1 b_1; a_2 b_2|, etc.

This gives a formula for the solution in terms of the coefficients.

(ii) For particular numerical values of a_i, b_i, c_i we may eliminate x to give an equation in y, solve for y and then x; e.g.:-

2x + 3y = 5
3x + 2y = 5

Hence:-

6x + 9y = 15
6x + 4y = 10

hence 5y = 5, hence y = 1, hence 2x = 5 − 3y = 2, hence x = 1.

In this method we have no general formula for the solution, although we do have a general prescription, or method, for finding the numerical solution, dealing always with numbers as opposed to symbols. Hence it is commonly called a numerical method. The distinction between numerical and analytical methods is not always clear, but is often a matter of convention. An example from calculus illustrates the difference: a pure mathematician might tell you that

e^x = 1 + x + x^2/2! + ... + x^n/n! + ... (to ∞) = Σ_{n=0}^∞ x^n/n!

and consider he has defined e^x. A numerical analyst however might take the same formula, but would seek to know how many terms must be included, for a given x, to calculate e^x to a specified degree of accuracy (i.e. correct to a certain number of decimal places). Furthermore he would seek if possible the most efficient formula to evaluate e^x, i.e. the one with the fewest possible terms.
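The numerical analyst's question about the e^x series can itself be answered numerically. A minimal sketch (the stopping rule and tolerance are my own illustrative choices, not from the notes): stop when the next term falls below the required tolerance.

```python
# How many terms of e^x = sum x^n/n! are needed for a given accuracy?
# For x > 0 the omitted tail is of the same order as the first term dropped.
import math

def exp_series(x, tol=0.5e-6):     # tol = half a unit in the 6th decimal
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tol:
        n += 1
        term *= x / n              # next term x^n/n! from the previous one
        total += term
    return total, n + 1            # value, and number of terms used

val, terms = exp_series(1.0)
print(val, terms)                  # compare with math.exp(1.0)
```

For x = 1 and 6-decimal accuracy this uses 11 terms; for larger x more terms are needed, which is one reason a numerical analyst looks for more efficient formulas.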

SEC. 2. TYPES OF ERRORS.

In all numerical work, it is essential to know how accurate (or inaccurate) our result is; or, putting it another way, we must know what steps to take in order to get an answer within a specified tolerance (e.g. to 3 significant figures). There are at least 4 kinds of errors arising in numerical work:-

1. Plain mistakes, when a problem is done by hand; e.g. copying incorrectly, reversal of digits, wrong signs or decimal point. Such mistakes can be detected as they occur by means of careful checking procedures (NOT based on repetition, which usually just repeats the mistake); e.g. in adding a column of figures I always add it from the top down and again from the bottom up. In spite of the great availability of cheap and high-speed computers, manual work is still necessary before testing programs and for small one-shot jobs. Also manual calculation greatly helps in understanding or finding pitfalls in a method which is going to be tried on a computer. (By manual we mean with the aid of a hand-held calculator only.)

2. Rounding error. Numbers in a computer or calculator can only be stored to a finite number of significant figures (usually about 8, although in double precision we can have 15). Whenever two numbers are added, subtracted, multiplied or divided, an additional error may be introduced, called rounding error. E.g. suppose we add together .9237 and .2347, in a computer which has only space for 4 decimal (base 10) digits per number. The correct answer is 1.1584, but since we have only room for 4 digits we must drop the last digit, introducing a rounding error of .0004.

3. Intrinsic error. If data is obtained by experimental observation, it is usually not very accurate (say it is known to only 2 or 3 significant figures). Even if our initial data is known exactly, it can only be stored in the computer to a finite (say 8) number of figures. Similarly, binary to decimal conversion gives an error up to half a unit in the last (binary) place.
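A 4-digit machine like the one in the rounding-error example can be simulated with Python's decimal module. A sketch (the quadratic x^2 − 1000x + 1 = 0 at the end is an illustrative equation of my own, not from the notes; it previews how subtraction of nearly equal numbers destroys accuracy):

```python
# Simulate 4-significant-digit decimal arithmetic.
from decimal import Decimal, getcontext

getcontext().prec = 4                  # 4 significant digits per operation

s = Decimal('.9237') + Decimal('.2347')
print(s)                               # 1.158, not the true 1.1584

# Cancellation: small root of x^2 - 1000x + 1 = 0 (true root ~ .001)
b, c = Decimal(1000), Decimal(1)
root_disc = (b * b - 4 * c).sqrt()     # sqrt(999996) rounds to 1000
bad = (b - root_disc) / 2              # all significant figures cancel
good = 2 * c / (b + root_disc)         # correct to 4 figures
print(bad, good)
```

The "bad" formula returns 0 while the rearranged one returns .001, the theme taken up again under cancellation below.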

Note that a rounding error in one operation could be regarded as an intrinsic error in the following one. Rounding and intrinsic errors can be made much worse by subtraction of nearly equal numbers. This is disastrous, e.g. given two 8-decimal accurate numbers which agree in their first 7 figures, their difference has only one significant figure correct. That is to say, the relative error (= actual error / actual number) is greatly increased, although the actual error is still about the same. Usually it is the relative error which is important, e.g. if we are wrong by 1/2 a meter in measuring a room size it is quite serious, but not if we are measuring the distance between Toronto and Ottawa. Another example:- a root of a quadratic is usually given by x = (−b + √(b^2 − 4ac)) / (2a); this leads to heavy cancellation if 4ac ≪ b^2 (in that case it is much better to use x = 2c / (−b − √(b^2 − 4ac)), which is mathematically equivalent to the usual formula ... but not numerically).

CLASS EXERCISE
(i) If the three numbers 12.91, .3281, and 1001., all accurate to 4 significant figures (i.e. error up to .5 in the 4th figure, counting from the first non-zero figure in going from left to right), are added together, what is the maximum error in the result?
(ii) Solve x^2 − …x + .01 = 0 using 5 sig. figs. (a) by x = (−b + √(b^2 − 4ac)) / (2a), (b) by x = 2c / (−b − √(b^2 − 4ac)).

Generally, suppose two numbers x and y are represented with errors ǫ and η, i.e. their representations x̄, ȳ are given by x̄ = x + ǫ and ȳ = y + η; then x̄ + ȳ = x + y + ǫ + η + r_a, i.e. error in sum = sum of separate errors + new rounding error. For multiplication, x̄ ȳ = xy + ǫy + ηx + r_m (neglecting the term ǫη), hence error in product = ǫy + ηx + r_m, hence relative error = error/xy = ǫ/x + η/y + r_m/xy = sum of relative errors + relative rounding error.

use an approximate formula to represent some mathematical function or operation such as integration; e.g. (i) if we represent sin x by x − x^3/3! + x^5/5! there will be an error of about x^7/7!, due to the chopping off (truncation) of the infinite series. (ii) When we represent ∫_0^1 f(x) dx by .1 Σ_{r=0}^{9} f(.1r) there will be an error. This kind of error is also often called discretization error.

SEC. 3. PROPAGATION OF ERROR.

If a numerical process is repeated many times (say n), the error produced at an early stage will likely be magnified. If it grows in proportion to the actual solution the process is usually said to be stable, but if it grows faster than the true solution it is unstable. Such a situation is usually disastrous, as the error will eventually become greater than the actual result itself.

IGNORE REST OF THIS SECTION

An Example. Suppose we wish to compute, for n = 0, 1, 2, ..., 5, the integral

y_n = ∫_0^1 x^n/(x+5) dx     (1)

One way to do this is to derive a recurrence relation connecting y_n for different values of n, as follows:-

y_n + 5y_{n−1} = ∫_0^1 (x^n + 5x^{n−1})/(x+5) dx = ∫_0^1 x^{n−1} dx = 1/n     (2)

and

y_0 = [ln(x+5)]_0^1 = ln 6 − ln 5 = .182     (3)

Using the recurrence relation (2), and the value obtained for y_0, and working to 3 decimal places, we get:-

y_1 = 1 − 5y_0 = 1 − .910 = .090;  y_2 = 1/2 − 5y_1 = .050
y_3 = 1/3 − 5y_2 = .083;  y_4 = 1/4 − 5y_3 = −.165

This is obviously quite wrong since y_n > 0 for all n.
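The failure of the forward recurrence, and the cure of running it backwards (which the notes describe next), can both be reproduced directly. A sketch, assuming Python floats and 3-decimal rounding of the starting value as in the text:

```python
# Forward vs. backward use of the recurrence y_n + 5*y_{n-1} = 1/n
# for y_n = integral_0^1 x^n/(x+5) dx. Forward multiplies the initial
# rounding error by -5 each step; backward divides it by 5.
import math

# Forward, starting from y_0 rounded to 3 decimals:
y = round(math.log(6 / 5), 3)            # .182
forward = [y]
for n in range(1, 6):
    y = 1 / n - 5 * y
    forward.append(y)
print(forward)                           # y_4 even comes out negative

# Backward from the crude guess y_10 = 0:
y = 0.0
for n in range(10, 5, -1):
    y = (1 / n - y) / 5                  # y_{n-1} = (1/n - y_n)/5
print(y)                                 # accurate y_5
```

Even though y_10 = 0 is a poor starting value, the backward loop returns y_5 correct to about 4 decimals, because each step divides the error by 5.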

The reason for this totally wrong result is that the error in y_{n−1} (due to rounding) is multiplied by −5 at each stage. Thus after 4 stages, error ≈ (−5)^4 [error in y_0] = +5^4 (.0003) = .187 (apart from additional rounding errors at each stage), whereas the true result is .034.

The problem can be avoided by working backwards, starting with (say) n = 10 and using y_{n−1} = 1/(5n) − y_n/5, and taking y_10 = 0. N.B. since 5 ≤ x+5 ≤ 6 on [0,1], y_10 lies between ∫_0^1 x^10/6 dx = .015 and ∫_0^1 x^10/5 dx = .018, i.e. y_10 ≈ .017. We get:

y_9 = 1/50 − 0 = .020;  y_8 = 1/45 − y_9/5 = .022 − .004 = .018
y_7 = 1/40 − y_8/5 = .025 − .004 = .021;  y_6 = 1/35 − y_7/5 = .029 − .004 = .024
y_5 = 1/30 − y_6/5 = .033 − .005 = .0285

Calculating y_5 by a different method we may proceed thus: with x + 5 = u,

y_5 = ∫_0^1 x^5/(x+5) dx = ∫_5^6 (u−5)^5/u du

Expanding (u−5)^5 by the binomial theorem and integrating each term separately gives y_5 = .0284, in good agreement with the previous method by recurrence relations.

The reason the backward recurrence relation method is so accurate is that the errors are divided by 5 at each stage; thus they are rapidly diminished. E.g. the error ≈ .02 at n = 10 is reduced by a factor 5^5 = 3125 at n = 5, i.e. it becomes about .02/3125 ≈ .000006.

SEC. 4. ILL-CONDITIONING.

In real life we usually start with inaccurate data. In some problems the errors in the data may be greatly magnified by the computational process, even if we work to infinite precision; e.g. if we are solving a set of simultaneous equations, and the determinant of the coefficients is small compared to the coefficients themselves, our solution is probably rubbish. (In 2 dimensions,

this corresponds to finding the intersection of two nearly parallel lines; a small change in slope of the lines means a large change in the position of their intersection.) This situation is known as ill-conditioning. A similar situation arises in finding roots of polynomials; sometimes a small change in the coefficients produces a large change in the computed roots.
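The nearly-parallel-lines picture is easy to reproduce numerically. A sketch with illustrative coefficients of my own (the system is solved "exactly" by Cramer's rule; only the data changes):

```python
# Ill-conditioning sketch: two nearly parallel lines. A change of .001
# in one right-hand side moves the intersection from (2, 0) to (1, 1).

def solve2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x+b1*y=c1, a2*x+b2*y=c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

print(solve2(1.0, 1.0, 2.0, 1.0, 1.001, 2.0))    # intersection near (2, 0)
print(solve2(1.0, 1.0, 2.0, 1.0, 1.001, 2.001))  # moves to near (1, 1)
```

The determinant here is .001, small compared to the coefficients, which is exactly the warning sign the text describes.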

EXERCISE 1.

QU. 1. Find graphically (i.e. using a graph to help you know what to do) the derivative of e^x cos x at x = .1. N.B. Measure x in radians.

QU. 2. Find graphically ∫_0^1 e^(−x^2) cos x dx, by splitting the range of x into 10 equal steps of length .1 each, and adding the areas of the resulting rectangles. Give a bound to the error in your solution.

QU. 3. Find the smaller root of x^2 − …x + … = 0 using (i) x = (−b + √(b^2 − 4ac)) / (2a) and (ii) x = 2c / (−b − √(b^2 − 4ac)), rounding all numbers obtained at each step to 3 significant figures (NOT the same as 3 decimal places). Compare the results with the true solution x = … .

QU. 4. Write a program to add together the 1000 numbers .1, .101, .102, .103, ..., 1.098, 1.099, (a) using single precision (i.e. REAL in Fortran), (b) using double precision (i.e. DOUBLE or REAL*8 in Fortran). What is the effect of rounding error in (a)?

CHAPTER 2. REVIEW OF CALCULUS.

Although numerical methods do not use algebra or calculus directly, the formulas used rely heavily on a number of definitions of the calculus for their derivation. Some of these will now be reviewed.

Derivative. Suppose that as x gets closer and closer to a value a ("tends to a"), the function G(x) gets closer and closer to a value g ("tends to g"). In this case we say that the Limit of the function G(x) as x → a (x tends to a) is g; or more concisely:

lim_{x→a} G(x) = g

This leads to the definition of the Derivative:

f'(x) = df/dx = lim_{Δx→0} Δf/Δx = Limit, as Δx approaches 0 (i.e. as the change in x gets smaller and smaller), of (change in f)/(change in x)

[Figure: a curve through A and the points B(0), B(1), B(2), with horizontal legs AC(0), AC(1), AC(2), the tangent AT at A, and base points Q(0), Q(1), Q(2) approaching P]

df/dx = Limit, as Δx → 0, i.e. as Q(n) → P, of C B(n) / A C(n)   (n = 0, 1, 2, ...)

As P Q(n) gets smaller and smaller, the chords A B(n) (n = 0, 1, 2, ...) get closer to the tangent to the curve at A. So,

df/dx = slope of tangent = T C(0) / A C(0)

Example. Suppose f = x^n; then

df/dx = lim_{Δx→0} Δf/Δx = lim_{Δx→0} [(x + Δx)^n − x^n] / Δx
      = lim_{Δx→0} [x^n + n x^(n−1) Δx + n(n−1)/2 · x^(n−2) (Δx)^2 + ... − x^n] / Δx
      = lim_{Δx→0} (n x^(n−1) + n(n−1)/2 · x^(n−2) Δx + ...) = n x^(n−1)

The Integral of f, ∫ f(x) dx, is defined as F(x) where F'(x) or dF/dx = f. E.g. ∫ x^n dx = x^(n+1)/(n+1) (for then d/dx [x^(n+1)/(n+1)] = (n+1) x^((n+1)−1) / (n+1) = x^n). Also

∫_a^b f(x) dx = area under the curve y = f(x) between x = a and x = b.

[Figure: the area under y = f(x) between x = a and x = b]
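The chord slopes C B(n) / A C(n) above really do approach the tangent slope as the step shrinks. A small numerical sketch (my own example, f = x^3 at x = 1, where the exact derivative is 3x^2 = 3):

```python
# Chord slopes over shrinking steps approach the tangent slope.
def f(x):
    return x ** 3

for dx in (0.1, 0.01, 0.001):
    slope = (f(1 + dx) - f(1)) / dx   # chord slope over step dx
    print(dx, slope)                  # approaches 3 as dx -> 0
```

The printed slopes home in on 3, one extra correct digit per tenfold reduction of the step, matching the n(n−1)/2 · x^(n−2) Δx term in the expansion above.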

DERIVATIVES OF SIMPLE FUNCTIONS.

Function y      dy/dx
x^n             n x^(n−1)
e^x             e^x
sin(x)          cos(x)
cos(x)          −sin(x)
f(g(x))         (df/dg)(dg/dx)   e.g. d/dx (1+x^2)^2 = 2(1+x^2) · 2x
u·v             u dv/dx + v du/dx

CLASS EXERCISE. Find the derivative of the following functions:- (i) x^3.5 (ii) 1/x (iii) sin(3x) (iv) x sin(x) (v) sin(1+x^2)
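Any entry in the table can be checked numerically against a small difference quotient. A sketch for the chain-rule entry (the test point x = 0.7 is an arbitrary choice of mine):

```python
# Check d/dx sin(1+x^2) = cos(1+x^2) * 2x with a central difference.
import math

def g(x):
    return math.sin(1 + x * x)

x, h = 0.7, 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)   # central difference
formula = math.cos(1 + x * x) * 2 * x       # chain-rule entry from the table
print(numeric, formula)
```

The two values agree to about 10 figures, which is as good as a step of 1e-6 allows in double precision.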

EXERCISE 2

Find the derivatives of the following functions:
1) x^5 − x   2) 1/x − x   3) cos(5x)   4) x e^x   5) cos(2 + x^3)

CHAPTER 4. ROUNDING ERRORS.

SEC. 1. REPRESENTATION OF FLOATING POINT NUMBERS.

In most Mathematical or Scientific-Engineering computations on automatic computers, arithmetic is carried out using the floating point representation of numbers. (Integers are used mostly for index purposes, and no rounding error results when they are operated upon, while if they become too big an error message will result, at least on a good compiler.) The floating point representation is of the form:-

x = f · b^e     (1)

where f is a fraction (called the MANTISSA), normalized so that its leading digit is non-zero, i.e. 1 > |f| ≥ 1/b, and b is the number BASE. Usually b = 2, but on hand-held calculators it is in effect 10. In our manual examples we shall take b = 10. e is called the EXPONENT: on Intel-type machines +38 ≥ e ≥ −38, while in hand calculators and in our manual examples +100 > e > −100. f contains about 8 decimal places on Intel (actually 24 binary places), and 8 in most hand calculators.

EXAMPLES. The number … is represented as …; … is recorded as …; while … is written as … .

CLASS EXAMPLE. Express (i) … (ii) … in floating point; express (iii) … (iv) … in fixed point.
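The normalization condition 1/b ≤ |f| < 1 determines f and e uniquely for any non-zero x. A sketch in base 10 (the helper name to_float_rep is my own, not from the notes):

```python
# Normalized representation x = f * b**e with 1/b <= |f| < 1, base b = 10.
import math

def to_float_rep(x, b=10):
    """Return (f, e) with x = f * b**e and 1/b <= |f| < 1."""
    if x == 0:
        return 0.0, 0
    e = math.floor(math.log(abs(x), b)) + 1
    return x / b ** e, e

print(to_float_rep(273.15))   # mantissa .27315, exponent 3
print(to_float_rep(0.0042))   # mantissa .42, exponent -2
```

A real machine does the same thing in base 2, storing the mantissa and exponent in fixed-width fields of one word.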

SEC. 2. FLOATING POINT ARITHMETIC IN A COMPUTER.

Most computers contain a double-length accumulator, or at least a few digits more than single-length, in which the sum, product, etc. of numbers is originally formed; when this has been done, the new number is rounded or chopped to fit into a single-length word. In fact on Intel-type machines arithmetic is done in an 80-bit register, and afterwards rounded to 64 bits (double) or 32 bits (real).

(a) Addition (or subtraction). Ex. suppose we have to add the two numbers x_1 = … and x_2 = … in a computer having 4 decimal places in the mantissa and one in the exponent. The steps are as follows:-

(i) The mantissa of x_2 is shifted 2 places to the right, so that the exponents of both numbers become the same, i.e. we re-write x_2 as … (of course x_2 is no longer normalized).

(ii) We add x_1 and x_2, using the double-length (8 dec. place) accumulator, thus: x_1 + x_2 = x_3 = f · 10^e. (On Intel machines we would have even more extra space.)

(iii) We normalize x_3, i.e. shift its mantissa to left or right with corresponding changes in e, so that 1 > f ≥ 1/10. In the example no shifting is necessary.

(iv) We drop the last 4 or more digits, increasing the 4th digit by one unit if the dropped digits are greater than half a unit in the 4th place. This is known as ROUNDING. Thus in the example we obtain …, with a rounding error of .031 of a unit in the last digit of the first word. In general, the error is at most 1/2 a unit in the last digit preserved. The relative error in our example was … .

In general if x_1 = f_1 b^(e_1), x_2 = f_2 b^(e_2) and e_1 > e_2, then for x_3 = f_3 b^(e_3) we have 1/b ≤ f_3 < 1 after normalization, and rounding error ≤ (1/2) b^(−t+e_3), where t is the number of digits in the mantissa of a single-length word. So

max relative error = (1/2) b^(−t+e_3) / [f_min b^(e_3)] = (1/2) b^(−t) / f_min = (1/2) b^(1−t) = 2^(−t)

(in a binary machine), = (1/2) · 10^(1−t) (in a decimal machine). We may say

computed (x_1 + x_2) = [true (x_1 + x_2)](1 + ǫ), where |ǫ| ≤ (1/2) b^(1−t)     (2)

CLASS EXERCISE. If the pairs of numbers

(i) 10^6(.3127) … (.4153)
(ii) 10^4(.6314) + 10^1(.3865)
(iii) 10^4(.7418) + 10^4(.6158)
(iv) 10^4(.7617) − 10^4(.7613)
(v) 10^4(.1000) − 10^3(.9999)
(vi) 10^4(.1000) − 10^2(.9999)

were added on a computer having 4 decimal digits in the mantissa and an 8-digit accumulator, what would be (a) the result, (b) the absolute rounding error, (c) the relative rounding error, in each case (assuming rounding)? Note that in cases (iv) and (v) severe cancellation takes place but no NEW rounding error occurs. However in (iv), if x_1 and x_2 contain absolute errors of up to .5 (half a unit in the last mantissa digit), then the relative error in x_1 + x_2 may be as great as (.5 + .5)/4 = .25.

It may be shown that for multiplication and division (using a double-length accumulator) the same result applies; namely, if the computed value is written (x_1 x_2)_c and the true value (x_1 x_2)_t, etc., we have:-

(x_1 + x_2)_c = (x_1 + x_2)_t (1 + ǫ)     (3)
(x_1 × x_2)_c = (x_1 × x_2)_t (1 + ǫ)     (4)
(x_1 / x_2)_c = (x_1 / x_2)_t (1 + ǫ)     (5)

where in each case

|ǫ| < (1/2) b^(1−t)     (6)

CLASS EXERCISE. On a 3-decimal-digit floating-point computer, what is (a) the result and (b) the relative rounding error in the following operations?:- (i) 10^3(.121) × 10^4(.171) (ii) 10^4(.124) × 10^6(.987) (iii) …
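The bound |ǫ| ≤ (1/2) b^(1−t) can be checked experimentally. A sketch simulating a t = 3 digit decimal machine with Python's decimal module (the random 3-digit data are illustrative, and the "true" sums are exact here because the inputs have only 3 digits each):

```python
# Check the relative-error bound 0.5 * 10**(1-t) for t = 3 digit addition.
from decimal import Decimal, getcontext
import random

getcontext().prec = 3                     # 3-digit mantissa
bound = 0.5 * 10 ** (1 - 3)               # = .005

random.seed(1)
worst = 0.0
for _ in range(1000):
    x1 = Decimal(random.randint(100, 999)).scaleb(-3)   # .100 to .999
    x2 = Decimal(random.randint(100, 999)).scaleb(-3)
    computed = x1 + x2                    # rounded to 3 digits
    true = float(x1) + float(x2)          # exact sum of the 3-digit data
    eps = abs(float(computed) - true) / true
    worst = max(worst, eps)

print(worst, bound)                       # worst observed never exceeds bound
```

Most sums have much smaller relative error than .005; the bound is attained only when the rounded sum has its smallest possible mantissa, which is the f_min in the derivation above.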

SEC. 3. PROPAGATION OF ROUNDING ERRORS.

When a whole string of operations is performed, the rounding error produced in an early stage is carried through and perhaps magnified at later stages. Considering the effect of many operations (e.g. n multiplications) we may seek:-

i) BOUNDS on (i.e. the maximum possible value of) the total error, or
ii) ESTIMATES (i.e. most probable values) of the total error.

For i), it may be shown (see Wilkinson, "Rounding Errors in Algebraic Processes") that for n consecutive multiplications

(x_1 x_2 ... x_n)^C = (x_1 x_2 ... x_n)^T (1 + E)     (7)

where

|E| ≤ (n − 1) · (1/2) b^(1−t)     (8)

N.B.1 E = (p_n^C − p_n^T) / p_n^T = relative error.
N.B.2 Here the superscript C means "computed" and T means "true".

CLASS EXERCISE. What is the maximum relative error if 1,000 single-length numbers are multiplied together on an Intel-type machine?

For the addition of n numbers it may be shown that:-

S_n^C ≡ (x_1 + x_2 + ... + x_n)^C = x_1(1 + η_1) + x_2(1 + η_2) + ... + x_n(1 + η_n)     (9)

so

error = x_1 η_1 + x_2 η_2 + ... + x_n η_n     (10)

where

|η_r| ≤ (n + 1 − r) · (1/2) b^(1−t)     (11)

e.g. |η_1| ≤ n · (1/2) b^(1−t) and |η_n| ≤ 1 · (1/2) b^(1−t). Thus η_1 ≫ η_n for n large, i.e. the error bound depends on the order of the summation: the bound is smallest if smaller-magnitude numbers are summed first, since the largest η goes with x_1.

CLASS EXERCISE. If x_1 = 1000, x_2 = x_3 = ... = x_1000 = .01 are added in the order x_1 + x_2 + ... + x_1000, on an Intel/RS6000, what is the

maximum rounding error in the sum? What if 1000 is added last?

The bounds given above are very pessimistic, i.e. they are hardly ever reached. More realistic are probabilistic estimates; e.g. Kuki and Cody, in Communications of the ACM, Vol. 16, pp. …, give average errors for the sum of 1024 numbers, obtained both theoretically and by experiment on simulated machines similar to an IBM mainframe (which chops, using 7 main digits and a guard digit) and others which round. For chopping they obtained relative errors of about …, while for rounding relative errors were only about 10^−6 for all positive numbers, and 4 times that for sums of mixed sign. In comparison, the maximum relative errors are, for numbers all ≈ 1: … for chopping, i.e. 10 times the average errors. For rounding the maximum error is …, i.e. 250 times bigger than the average error.

For multiplication of 20 numbers they found average relative errors of … for chopping (compare max …), and … for rounding (compare maximum ≈ 10^−5). Note how much better rounding is than chopping.

An article in Theoretical Computer Science V. 162, p. 151 (1996) says that the average error for addition of n numbers all close to X is of order √n · 2^(−t) · X.

CLASS EX. See how the above formula works for Cody and Kuki's data.
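The order-of-summation effect from (9)-(11) is easy to see in ordinary double precision. A sketch (the data, 1.0 followed by ten thousand copies of 1e-16, are an illustrative extreme of my own):

```python
# Order of summation matters: add the large number first vs. last.
big = 1.0
small = [1e-16] * 10000

large_first = big
for s in small:
    large_first += s          # each 1e-16 vanishes against 1.0

small_first = 0.0
for s in small:
    small_first += s          # small numbers accumulate safely first
small_first += big

print(large_first)            # exactly 1.0: every small term was lost
print(small_first)            # about 1.000000000001
```

Adding the small numbers first preserves their combined contribution of about 1e-12, exactly as the bound (11) predicts: the largest η multiplies the first summand.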

SEC. 4. EFFECT OF INTRINSIC ERRORS.

Let the original data x_1, x_2 contain intrinsic relative errors δ_1, δ_2, etc., i.e.

x_1^R (recorded) = x_1^T (1 + δ_1) = x_1^T + x_1^T δ_1, so rel. err. = δ_1 = (x_1^R − x_1^T) / x_1^T     (12)

Then

(x_1^R + x_2^R)^C = (x_1^T + x_1^T δ_1 + x_2^T + x_2^T δ_2)(1 + ǫ)

Hence

error in (x_1^R + x_2^R)^C = x_1 δ_1 + x_2 δ_2 + (x_1 + x_2)ǫ + (very small terms)     (13)

while

error in (x_1^R x_2^R)^C = x_1 x_2 (δ_1 + δ_2 + ǫ)     (14)

For addition of n numbers, the effect of rounding errors is as before, while the additional effect of intrinsic errors is

Σ_{i=1}^n x_i δ_i     (15)

If in particular all the intrinsic relative errors are equal to

δ = 5 × 10^(−s)     (16)

(i.e. data correct to s sig. figs.; for example, s = 3, x^T = .1015, x^R = .101, error = .0005, rel. err. = .0005/.101 ≈ .005), then

total error ≤ δ Σ|x_i| + [n x_1 + (n−1) x_2 + ... + (n−r+1) x_r + ... + x_n] · (1/2) b^(1−t)     (17)

CLASS EXERCISES
1. If 1000 numbers, all approximately equal to 1, and given correct to 4 sig. figs., are added together using 6 sig. fig. (base 10) rounding arithmetic, what is the maximum error in the sum (approximately)?
2. 100 numbers, approximately equal respectively to 100, 99, 98, etc., and known to 5 sig. figs., are added on an Intel machine. What is the maximum error in the sum (roughly)?
HINT: Σ_{i=1}^n i^2 = (1/6) n(n+1)(2n+1) ≈ n^3/3

EXERCISE 4

QU. 1. Add the numbers 10^3(.4462), 10^3(.6412), 10^3(.2413), 10^0(.1234), (a) in the order given, (b) in the reverse order, chopping the partial sums to 4 significant figures (NOT decimal places) after each addition. What is the actual error in the sum in each case?

QU. 2. If 3 numbers x, y, z have relative intrinsic errors up to (1/2) · 10^(1−t), where t is the number of decimal places in the mantissa, what is the maximum possible error in yz + x? What if x ≈ 10, y ≈ 2, z ≈ 5, t = 8?

QU. 3. The volume of water in a lake is given by the expression V = 3000(E − 1000), where E is the elevation of the lake surface. Write a computer program to produce a table showing values of E and V for E = 1005 to 1010 at steps of .01, with V correct to the nearest unit. Minimize output of paper, i.e. print 4 values of E and V per line.

QU. 4. One hundred numbers, all approximately equal to 10, but only known correct to 3 sig. figs., have to be added in 8-decimal-digit floating point arithmetic (e.g. on an Intel-type machine). What is the maximum error in the sum?

CHAPTER 5. METHODS OF REDUCING ROUNDING ERROR IN SUMMATION

SEC. 1. A LIST OF METHODS.

In SIAM J. Scientific Computing, Vol. 14 (1993), pp. … ("The Accuracy of Floating Point Summation"), Higham compares several methods of adding sets of numbers, including the ones mentioned below:-

i) Where possible use integers (integer variables in Fortran), e.g. in tables at equal intervals. These have no rounding error at all.
ii) Add small numbers first.
iii) Use double or higher precision. But suppose you want your result correct to double precision?
iv) Kahan's method. Suppose we are adding a set of n positive numbers x_i; let u = Σ_{i=1}^k x_i ≡ S_k, and v = x_{k+1} (generally v ≪ u). We form S_{k+1} = S_k + x_{k+1} = u + v as follows:-
a) Let w = u ⊕ v, so w = (u + v) + r ≠ u + v, since much of v is lost due to rounding. Here ⊕, ⊖ mean machine addition or subtraction, as opposed to exact.
b) Compute r_{k+1} = (w ⊖ u) ⊖ v. Since w ≈ u and w − u ≈ v, this gives r ≈ w − (u + v) fairly accurately, i.e. with little new error.
c) Before adding the next number x_{k+2} we subtract r_{k+1} from it. (Alternatively we may sum all the r's at each step and subtract Σ r_i from S_n at the end.)

EXAMPLE. Add the following numbers by various methods, assuming a 3-figure mantissa with rounding:

… = x_1 + x_2 + ... + x_8

True result (double precision) = …
Using 3 sig. figs. (naive method) we get 159.
Adding small numbers first we get 161.
Kahan's method with rounding:

u_2 = x_1 = 123, w_2 = 124, r_2 = .23, v_3 = x_3 − r_2 = 4.39
u_3 = w_2 = 124, w_3 = 128; r_3 = −.39, v_4 = 7.68
u_4 = w_3 = 128, w_4 = 136; r_4 = (136 − 128) − 7.68 = +.32, v_5 = x_5 − .32 = 8.99
u_5 = 136, w_5 = 145; r_5 = (145 − 136) − 8.99 = +.01; v_6 = 2.33
w_6 = 147; r_6 = −.33; v_7 = 5.80
w_7 = 153; r_7 = +.20; v_8 = 8.11
w_8 = 161, exact to 3 sig. figs.

CLASS EXERCISE. Add the following numbers i) by the normal way in 3 sig. figs. with chopping, ii) by the normal way in 6 sig. figs., iii) by Kahan's method in 3 sig. figs.: …

SEC. 2. THE METHOD OF CASCADES

This is another method which Higham describes. A series of accumulators is set up, and the next x_i is added to the accumulator which corresponds to the exponent of x_i. The accumulators are of higher precision than the x_i, so that many x_i can be added without overflow. Finally the accumulators are added in decreasing order of absolute value. For example, Malcolm in Communications of the ACM, Vol. 14, pp. …, gives a FORTRAN program for the IBM mainframe.

SEC. 3. SOME TESTS

… iii) The x_i are evenly spaced in [1,2]. Kahan's method gives 0 error in all cases. [Question:- would Kahan's method work well for case ii) if we summed + and − values separately?] He explains why Kahan's method works so well in this case. The effect of each step in his algorithm is shown in the following picture:-

(each word shown as upper half | lower half)

u = Ŝ_i          : [ u_1 | u_2 ]
v = x_{i+1}      : [ v_1 | v_2 ]
w = Ŝ_{i+1}      : [ u_1 | u_2 + v_1 ]   (err = v_2)
w ⊖ u            : [ v_1 | 0 ]
(w ⊖ u) ⊖ v = r  : [ −v_2 | 0 ]
x_{i+2} ⊖ r      : [ c_1 | c_2 + v_2 ]

Thus the lower part of x_{i+2} ⊖ r contains the previous error added in, and this remains true for x_{i+3}, ..., x_n. He suggests subtracting the final e from S.

Kahan's method works well if the numbers are all the same sign. However double precision is equally accurate when we are summing a moderately large set of same-sign numbers in single precision. By moderately large we mean n < … . In cases where double precision accuracy is required, or for a VERY large set of numbers, other methods are needed, such as those described in the next section.

SEC. 4. TWO PERFECT METHODS

In the years 2009-2010 at least 3 papers were published describing methods of summation which give exact results, i.e. correct to 1/2 unit in the last binary place. Also, both groups claimed to have the fastest known methods (and it turns out that they are both right, in a sense). The first is by Rump, "Ultimately Fast Accurate Summation", in SIAM J. Sci. Comp. 31 (2009), pp. … . The other two, both by Zhu and Hayes, are (1) in the above journal, Vol. 31 (2009), pp. …, and (2) in ACM Trans. Math. Softw. V. 37 (2010), article 37.

IGNORE REST OF THIS SECTION

Zhu and Hayes claim that their method is faster than any other accurate one, but Rump states that he could not reproduce their timing results. Experiments by this author on the CSE computer (Intel Xeon) reveal that the Zhu/Hayes method is faster for n = 10,000, but Rump's method is faster for n = 10,000,000.

We will describe the Zhu and Hayes method in detail. It consists of two algorithms: OnlineExactSum (OES) and iFastSum (iFS). OES calls iFS, although for smaller sets it is quicker to call iFS directly (by itself). The algorithm for OES follows.

Let t = length of mantissa (e.g. 53), and l = length of exponent (usually 11). Let N = β^⌊t/2⌋ = 2^26 usually.

1. Create 4 arrays a_1, a_2, b_1, b_2, each with β^l (2048) numbers.
2. Initialize all numbers in the above arrays to 0.
3. Set i = 0 (i counts how many summands have been put into a_1).
4. (1) Get the next number x in the list; if none, go to step 5.
(2) j = exp(x) + β^(l−1). [In a computer the negative exponents go down to −(β^(l−1) − 1); for example if l = 3 the exponents are −3, −2, −1, 0, 1, ..., 4. By adding β^(l−1) (4 in the example) we get 1..8, which can be used as indices into the arrays a_1 etc.]

For the next step and later we need a mini-algorithm called AddTwo(a, b). This takes as input two numbers a and b, and outputs their sum s = fl(a+b) as found by the computer (often not the true sum), and the rounding error e. The relation a + b = s + e is satisfied exactly. It consists of three steps:
i) If exp(a) < exp(b) then swap a and b
ii) s = fl(a + b)
iii) e = fl(fl(a − s) + b)
[Note: this is very similar to Kahan's method.] As mentioned, a + b = s + e exactly (but we can't just add s and e in the computer, as the word-length is not long enough).

4 (3) Call AddTwo(a_1j, x) to give a new a_1j = s = fl(a_1j + x), and the error e in that addition. [Note: this is similar to the cascade method in that a_1j is a register for adding all the numbers whose exponents are j.]
4 (4) Call AddTwo(a_2j, e) to give new a_2j = s = fl(a_2j + e). [There will be no additional rounding error, since a_2j only contains about 25 non-zero bits; each add increases a_2j by at most 1 unit in the 54th bit, so a_2j may accumulate a great many such errors without overflow or rounding error. Thus there is plenty of room to accumulate all the errors.]
4 (5) i = i + 1

4 (6) If i ≥ N (2^26) then
(a) Set all of b_1, b_2 to 0
(b) for each number y in a_1 and a_2:
(i) j = exp(y) + β^(l−1)
(ii) AddTwo(b_1j, y) gives b_1j = s = fl(b_1j + y), e = error in the above
(iii) AddTwo(b_2j, e) gives new b_2j [no new rounding error, see above]
(c) swap pointers to a_1, a_2 with those of b_1, b_2.
(d) i = 2β^l [the new a_1 has already received 2β^l summands, i.e. the y's from the old a_1, a_2].
(e) go to 4 (1)
Else, if i < N, go to 4 (1) without steps (a)-(d).
5. Join a_1 and a_2 to give a, and remove zeros.
6. Call iFastSum to give the exact sum of the numbers in a.
7. END.

EXAMPLE. Assume t = 3 decimal digits (about 10 binary), l = 3 binary, hence N = 2^5 (but for the example we will pretend it is 2), and β^(l−1) = 4.

data = {.898, .987, .0123, .0978, .00234}

1. a1(i) = a2(i) = 0, i = 1, ..., 8
3. i = 0
4. (1) x = .898 (2) j = 0 + 4 = 4 (3) a1(4) = .898 (4) a2(4) = 0 (5) i = i + 1 = 1 (6) i < N, so go to 4 (1)
4. (1) x = .987 (2) j = 0 + 4 = 4 (3) a1(4) = s = 1.88 (4) a2(4) = 1.885 − 1.88 = .005 (5) i = 1 + 1 = 2
4 (6) i = N = 2, so:
(a) b1(I) = b2(I) = 0 (I = 1, ..., 8)
(b) y = a1(4) = 1.88: (i) j = 1 + 4 = 5 (ii) b1(5) = 0 + 1.88 = 1.88 (iii) b2(5) = 0
y = a2(4) = .005: (i) j = −2 + 4 = 2 (ii) b1(2) = .005 (iii) b2(2) = 0
(c) (swap a's and b's)
a1(2) = .005, a2(2) = 0, b1(2) = b2(2) = 0
a1(4) = a2(4) = 0, b1(4) = 1.88, b2(4) = .005
a1(5) = 1.88, a2(5) = 0, b1(5) = b2(5) = 0
(d) i = 16 (we reset it to 0 to make this artificial example work)

(e) go to 4 (1)

CLASS EXERCISE. Add the next number (i.e. the third), according to the method. In the homework exercise you will complete the addition of the last but one number (.0978), and I have added the last. If we complete the final addition by iFastSum, we get 2.00. This is also what we get by adding in infinite precision (as many places as we need), which gives 1.99744, and then rounding to 3 sig. figs. For lack of time we will not describe iFastSum, but we give the algorithm from Zhu and Hayes' paper.
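The two building blocks of this chapter can be sketched for IEEE doubles. This is not the authors' code: add_two follows steps i)-iii) of AddTwo above (swap so the larger magnitude comes first, then s = fl(a+b), e = fl(fl(a−s)+b), giving a + b = s + e exactly), and kahan_sum follows steps a)-c) of Kahan's method from SEC. 1; the function names are mine.

```python
# AddTwo (error-free transformation) and Kahan's compensated summation.
from fractions import Fraction

def add_two(a, b):
    """Return (s, e) with s = fl(a+b) and a + b = s + e exactly."""
    if abs(a) < abs(b):
        a, b = b, a
    s = a + b
    e = (a - s) + b            # exact rounding error of the addition
    return s, e

def kahan_sum(xs):
    """Compensated summation: carry the rounding error r forward."""
    total, r = 0.0, 0.0
    for x in xs:
        total, e = add_two(total, x - r)   # subtract the previous error
        r = -e                             # error to remove next time
    return total

data = [1.0] + [1e-16] * 10000
print(sum(data))           # naive: the 1e-16 terms are all lost
print(kahan_sum(data))     # compensated: close to 1 + 1e-12
```

The Fraction class gives a convenient way to verify the exactness claim a + b = s + e, since it represents each double without error.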

EXERCISE 5
QU. 1. Sum the set of numbers given below by various methods, i.e.
a) The normal method (first to last) using 6 sig. figs.
b) The normal method using 3 sig. figs.
c) Small numbers first using 3 figs.
d) Kahan's method using 3 figs.
e) The Cascade method using one double-precision register (6 sig. figs) for each base-10 exponent.
The numbers are:

QU. 2. Sum the numbers by the following methods:
a) By the normal method in 4 sig. figs.
b) By the normal method in 8 sig. figs.
c) By Kahan's method in 4 figs.
d) By adding the largest numbers first (in 4 figs).
e) By the Cascade method in 4 figs with double-precision registers having 8 sig. figs.

QU. 3. Complete the next step of the example done in class by the method of Zhu and Hayes, i.e. add the 4th number (in 3 figs).
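Kahan's method, asked for in the exercises above, carries a running compensation term: after each addition it estimates what was just rounded away and corrects the next summand by it. A Python sketch (in double precision rather than the 3-4 significant figures of the exercises):

```python
def kahan_sum(xs):
    """Kahan's compensated summation: s is the running sum,
    c the running estimate of the low-order bits lost so far."""
    s = 0.0
    c = 0.0
    for x in xs:
        y = x - c          # corrected summand
        t = s + y          # new rounded sum
        c = (t - s) - y    # what the addition just lost (negated)
        s = t
    return s

data = [0.1] * 10
print(sum(data))        # 0.9999999999999999 (plain summation drifts)
print(kahan_sum(data))  # 1.0
```

The correction costs three extra operations per summand but keeps the error bounded independently of the number of terms.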

CHAPTER 7. SOLUTION OF LINEAR EQUATIONS

SEC. 1. REVIEW OF VECTORS AND MATRICES

DEFINITIONS
A vector is an array of numbers, written either as

r^T = [x_1, x_2, ..., x_n]   (a row vector)

or as

c = [y_1; y_2; ...; y_n]   (a column vector; semicolons separate the rows).

The scalar product r^T c = [x_1, x_2, ..., x_n][y_1; y_2; ...; y_n] = x_1 y_1 + x_2 y_2 + ... + x_n y_n, i.e. the sum of products of corresponding elements.

A matrix is a double array of numbers

    [a_11 a_12 ... a_1n]
A = [a_21 a_22 ... a_2n]
    [ ...              ]
    [a_n1 a_n2 ... a_nn]

where a_ij means the element in row i, column j.

The product of a matrix by a column vector, Ac, is defined as another column vector p, where p_i = scalar product of (row i of A) with c, i.e. p_i = a_i1 c_1 + a_i2 c_2 + ... + a_in c_n.

The product of two matrices A and B is a new matrix C in which element

c_ij (row i, column j) = [row i of A] x [col j of B] (here the x refers to the scalar product)
= [a_i1, a_i2, ..., a_in][b_1j; b_2j; ...; b_nj]
= a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj = SUM_{k=1}^{n} a_ik b_kj

CLASS EXERCISE
What are the ?? in  ?? = ?? 14 ??

SEC. 2. TRIANGULAR DECOMPOSITION
In this method we first of all factorize ("decompose") A in the form

         [1               ] [u_11 u_12 ... u_1n]
A = LU = [l_21 1          ] [     u_22 ... u_2n]
         [l_31 l_32 1     ] [          ... u_3n]   (1)
         [l_n1 l_n2 ... 1 ] [              u_nn]

L, U are called lower and upper triangular matrices (L is called unit lower triangular since its diagonal elements are all units). The elements of L and U may be determined in n steps, in the r-th of which we determine the r-th column of L and the r-th row of U. For suppose we have already determined the first r-1 rows of U and columns of L; then we have

from (1) (since the element in row r, column i of LU is [l_r1, l_r2, ..., 1, 0, ..., 0]·[u_1i; u_2i; ...; u_ii; ...] for i >= r):

a_ri = l_r1 u_1i + l_r2 u_2i + ... + l_{r,r-1} u_{r-1,i} + 1·u_ri   (2)

Note that the 1 which multiplies u_ri is actually l_rr. Also, since r <= i, we run out of l's before we run out of u's. From the above we have:

u_ri = a_ri - SUM_{k=1}^{r-1} l_rk u_ki   (i = r, r+1, ..., n)   (3)

All terms on the right here are already known from the previous steps. Next we have, from row i, column r of (1) (for i >= r+1), [l_i1, l_i2, ..., l_ir, ..., l_ii, 0, ..., 0]·[u_1r; u_2r; ...; u_rr; 0; ...], hence

l_i1 u_1r + l_i2 u_2r + ... + l_{i,r-1} u_{r-1,r} + l_ir u_rr = a_ir   (i = r+1, ..., n)   (4)

Hence

l_ir = (a_ir - SUM_{k=1}^{r-1} l_ik u_kr) / u_rr   (5)

Again, all terms on the right are already known. Thus row r of U and column r of L are found.
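Formulas (3) and (5) translate almost line-for-line into code. A minimal Python sketch of the decomposition without pivoting (the test matrix below is ours, chosen so that everything works out in integers):

```python
def lu_doolittle(A):
    """Triangular decomposition A = LU without pivoting:
    at step r, find row r of U by eq. (3), then column r of L by eq. (5)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for r in range(n):
        L[r][r] = 1.0                              # unit lower triangular
        for i in range(r, n):                      # eq. (3): row r of U
            U[r][i] = A[r][i] - sum(L[r][k] * U[k][i] for k in range(r))
        for i in range(r + 1, n):                  # eq. (5): column r of L
            L[i][r] = (A[i][r]
                       - sum(L[i][k] * U[k][r] for k in range(r))) / U[r][r]
    return L, U

L, U = lu_doolittle([[4, 3, 2], [8, 11, 10], [12, 29, 34]])
# element by element (as floats):
# L = [[1, 0, 0], [2, 1, 0], [3, 4, 1]],  U = [[4, 3, 2], [0, 5, 6], [0, 0, 4]]
```

Each inner sum uses only entries already computed at earlier steps, exactly as the text observes for equations (3) and (5).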

But the first row of U is given by u_1i = a_1i (i = 1, ..., n), since

a_1i = [1, 0, ..., 0]·[u_1i; u_2i; ...; u_ii] = 1·u_1i

Similarly the first column of L is given by l_i1 = a_i1/u_11 (i = 1, ..., n), assuming u_11 is not 0, since

a_i1 = [l_i1, l_i2, ...]·[u_11; 0; ...] = l_i1 u_11

Hence we can find the second and third rows and columns, and so on in turn. For example consider the 3x3 matrix, where we have:

[a_11 a_12 a_13]   [1           ] [u_11 u_12 u_13]
[a_21 a_22 a_23] = [l_21 1      ] [     u_22 u_23]
[a_31 a_32 a_33]   [l_31 l_32 1 ] [          u_33]

Hence at the first stage u_11 = a_11, u_12 = a_12, u_13 = a_13, l_21 = a_21/u_11, l_31 = a_31/u_11.
Then at stage 2 we use a_22 = l_21 u_12 + u_22; a_23 = l_21 u_13 + u_23; hence
u_22 = a_22 - l_21 u_12, u_23 = a_23 - l_21 u_13;
a_32 = l_31 u_12 + l_32 u_22, hence l_32 = (a_32 - l_31 u_12)/u_22.

CLASS EXERCISE
Find u_33.

SEC. 3. BACK-SUBSTITUTION.
When the triangular decomposition has been completed, we find our solution in two easy stages: we have L(Ux) = b. Hence if we set Ux = y, we have

Ly = b, where Ux = y   (6)

Hence we first solve Ly = b, or

y_1                       = b_1
l_21 y_1 + y_2            = b_2
l_31 y_1 + l_32 y_2 + y_3 = b_3
 ...                      = b_n

by forward substitution:
y_1 = b_1; y_2 = b_2 - l_21 y_1; y_3 = b_3 - l_31 y_1 - l_32 y_2; and in general

y_r = b_r - SUM_{k=1}^{r-1} l_rk y_k   (r = 2, ..., n)   (7)

Then we solve Ux = y, or

u_11 x_1 + u_12 x_2 + ... + u_1n x_n = y_1
 ...
u_{n-1,n-1} x_{n-1} + u_{n-1,n} x_n = y_{n-1}
u_nn x_n = y_n

by back substitution (as in Gaussian elimination; in fact U = the triangular matrix obtained in Gaussian elimination). Thus:

x_n = y_n/u_nn, x_{n-1} = (y_{n-1} - u_{n-1,n} x_n)/u_{n-1,n-1}, ..., and in general

x_r = (y_r - SUM_{k=r+1}^{n} u_rk x_k)/u_rr   (r = n-1, n-2, ..., 2, 1)   (8)

EXAMPLE
Solve
4x_1 + 3x_2 + 2x_3 = 2
8x_1 + 11x_2 + 10x_3 = -2
12x_1 + 29x_2 + 34x_3 = -22

We express

[4  3  2 ]   [1           ] [u_11 u_12 u_13]
[8  11 10] = [l_21 1      ] [     u_22 u_23]
[12 29 34]   [l_31 l_32 1 ] [          u_33]

Hence u_11 = 4, u_12 = 3, u_13 = 2, l_21 = 8/4 = 2, l_31 = 12/4 = 3;
l_21 u_12 + u_22 = a_22, so u_22 = 11 - 2·3 = 5
l_21 u_13 + u_23 = a_23, so u_23 = 10 - 2·2 = 6
l_31 u_12 + l_32 u_22 = a_32, so l_32 = (29 - 3·3)/5 = 4
l_31 u_13 + l_32 u_23 + u_33 = a_33, so u_33 = 34 - 3·2 - 4·6 = 4

i.e. L = [1 0 0; 2 1 0; 3 4 1] and U = [4 3 2; 0 5 6; 0 0 4]
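Equations (7) and (8) likewise go straight into code. A Python sketch of the two substitution stages, applied to triangular factors and a right-hand side of the kind derived above:

```python
def forward_sub(L, b):
    """Solve Ly = b by eq. (7): y_r = b_r - sum l_rk y_k."""
    n = len(b)
    y = []
    for r in range(n):
        y.append(b[r] - sum(L[r][k] * y[k] for k in range(r)))
    return y

def back_sub(U, y):
    """Solve Ux = y by eq. (8): x_r = (y_r - sum u_rk x_k) / u_rr."""
    n = len(y)
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (y[r] - sum(U[r][k] * x[k] for k in range(r + 1, n))) / U[r][r]
    return x

L = [[1, 0, 0], [2, 1, 0], [3, 4, 1]]
U = [[4, 3, 2], [0, 5, 6], [0, 0, 4]]
b = [2, -2, -22]
y = forward_sub(L, b)
x = back_sub(U, y)
print(y, x)   # [2, -6, -4] [1.0, 0.0, -1.0]
```

Each stage costs only about n^2/2 multiplications, so once the decomposition is done, extra right-hand sides are cheap to solve.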

For the next stage we solve Ly = b, i.e.

[1      ] [y_1]   [  2]
[2 1    ] [y_2] = [ -2]
[3 4 1  ] [y_3]   [-22]

i.e.
y_1 = 2
y_2 = -2 - 2y_1 = -6
y_3 = -22 - 3y_1 - 4y_2 = -4

while for the 3rd stage we solve Ux = y, i.e.

[4 3 2] [x_1]   [ 2]
[0 5 6] [x_2] = [-6]
[0 0 4] [x_3]   [-4]

i.e.
x_3 = -4/4 = -1
x_2 = (-6 - 6x_3)/5 = 0
x_1 = (2 - 3x_2 - 2x_3)/4 = 1

CLASS EXERCISE
Solve
3x_1 + 2x_2 + x_3 = 2
12x_1 + 9x_2 + 6x_3 = 12
9x_1 + 8x_2 + …x_3 = 17

SEC. 4. THE NUMBER OF OPERATIONS
can be shown to be

2n^3/3 - 2n/3   (9)

For large n the n^3 term dominates. This is the most efficient simple direct method known for general (i.e. non-sparse) matrices.

IGNORE THIS SECTION 5

SEC. 5. PIVOTING FOR SIZE.
We see that the calculation of l_ir by (5) involves dividing by u_rr. If this is zero, the procedure breaks down. Even if it is not zero but is very small, we will be in trouble, as is seen from the following example:

10^-6(0.3)x_1 + x_2 = 0.7
          x_1 + x_2 = 0.9

where we are using 5-place floating-point arithmetic. The decomposition yields u_11 = 10^-6(0.3), u_12 = 1; l_21 = 1/u_11 = 10^6(3.3333); u_22 = a_22 - l_21 u_12 = 1 - 10^6(3.3333) = -10^6(3.3333).

Here the 1 is lost due to using only 5-figure arithmetic. The solution of Ly = b, or

[1            0] [y_1]   [.7]
[10^6(3.3333) 1] [y_2] = [.9]

is y_1 = .7; y_2 = .9 - 10^6(3.3333)(.7) = -10^6(2.3333) (again the .9 is lost); while the solution of Ux = y, i.e.

[10^-6(.3)  1            ] [x_1]   [.7           ]
[0          -10^6(3.3333)] [x_2] = [-10^6(2.3333)]

is x_2 = .7; x_1 = (.7 - .7)/10^-6(.3) = 0. But the correct solution is x_2 = .7; x_1 = .2. Thus a completely erroneous solution for x_1 is obtained. However, if we interchange the two rows before commencing the decomposition, we will obtain the correct solution.

CLASS EXERCISE
Verify the above statement.

To avoid the above escalation of errors, we may adopt the method of partial pivoting by rows, i.e. at the r-th step we compute in double precision

S_i = a_ir - SUM_{k=1}^{r-1} l_ik u_kr   (10)

for i = r, ..., n. This is what we would get for u_rr if we first interchanged row r and row i. For note that

u_rr = a_rr - SUM_{k=1}^{r-1} l_rk u_kr   (11)

If we replace r by i in the very first subscript of each term (as if we put row i in place of row r), we will get S_i. Then if S_r' = Max |S_i|, we interchange rows r and r' of the whole array, including S_r, S_r' and b_r, b_r' (we have stored the first r-1 rows of U and columns of L in the positions formerly occupied by the corresponding elements of a_ij). Now, if we still refer to the new element in the (i,j) position as l_ij or a_ij etc., we have:

u_rr = S_r', l_ir = S_i/u_rr   (i = r+1, ..., n)   (12)

(u_rr is now as large as possible);

u_ri = a_ri - SUM_{k=1}^{r-1} l_rk u_ki   (i = r+1, ..., n)   (13)

(compute the sum in double precision). Interchanging equations gives an equivalent system. Then note that |l_ir| <= 1 by definition of r'. The above method is due to Doolittle.

EXAMPLE (n=5, r=3)

[u_11 u_12 u_13 u_14 u_15 | 0  ]
[l_21 u_22 u_23 u_24 u_25 | 0  ]
[l_31 l_32 a_33 a_34 a_35 | S_3]
[l_41 l_42 a_43 a_44 a_45 | S_4]
[l_51 l_52 a_53 a_54 a_55 | S_5]

EXAMPLE OF PIVOTING
Solve
4x_1 + 2x_2 + 3x_3 = 9
2x_1 + 1x_2 + 4x_3 = 7
6x_1 + 4x_2 + 6x_3 = 16

i) Without pivoting:
u_11 = 4, u_12 = 2, u_13 = 3, l_21 = 2/4 = 1/2, l_31 = 6/4 = 3/2,
u_22 = a_22 - l_21 u_12 = 1 - (1/2)(2) = 0 ... the method breaks down!

ii) With pivoting (we carry columns for S_i and b_i alongside the array):
S_1 = 4, S_2 = 2, S_3 = 6. Max |S_i| = S_3 = 6, so swap rows 1, 3.
u_11 = S_1 = 6, l_21 = S_2/u_11 = 2/6 = 1/3,

l_31 = S_3/u_11 = 4/6 = 2/3
u_12 = a_12 = 4, u_13 = a_13 = 6

r = 2:
S_2 = a_22 - l_21 u_12 = 1 - (1/3)(4) = -1/3
S_3 = a_32 - l_31 u_12 = 2 - (2/3)(4) = -2/3
Max |S_i| = |S_3| = 2/3, so swap rows 2, 3 (the l's in rows 2 and 3 are interchanged along with the rest of the rows):
u_22 = S_2 = -2/3, l_32 = S_3/u_22 = (-1/3)/(-2/3) = 1/2
u_23 = a_23 - l_21 u_13 = 3 - (2/3)(6) = -1

r = 3:
S_3 = a_33 - l_31 u_13 - l_32 u_23 = 4 - (1/3)(6) - (1/2)(-1) = 5/2
No swap. u_33 = S_3 = 5/2.
END

CLASS EXERCISE
Solve
.001x_1 + .1x_2 + x_3 = …
…x_1 + …x_2 + 9x_3 = …
…x_1 + …x_2 + …x_3 = 110
Try it a) without pivoting, b) with, working to 3 sig. figs.

SEC. 6. NUMERICAL PACKAGES
There are a number of pre-written packages of programs available for solving problems in Numerical Methods. Some are commercial, i.e. they

cost quite a lot of money, while many are free. Some attempt to cover the whole range of problems amenable to numerical solution (including Statistics in some cases), while others cover a special area in much greater depth.

GENERAL-PURPOSE PACKAGES include the following:
1) The IMSL Library (International Mathematical and Statistical Library). Web:
2) MATLAB. Web: (available at York: see later)
3) MATHEMATICA. Web:
4) SLATEC (free). Download via NETLIB (see later).
Mathematica contains symbolic math features as well as numerical methods.

SPECIAL-PURPOSE PACKAGES include:
LINPACK. A package for solving linear systems of equations. This is available through NETLIB (see later). The solution of linear equations occurs in the majority of problems in Physics, Engineering, Economics, etc.
EISPACK is a package for solving matrix eigenvalue and related problems. It is available through NETLIB.
LAPACK is a combination of the above 2 packages (frequently updated), especially for use on supercomputers (parallel processors).
Other special-purpose packages will be mentioned later in connection with NETLIB. Programs published in the journals Communications of the ACM and Transactions on Mathematical Software are also available from NETLIB.
If you or your company is short of money, you may be able to obtain most of the software you need by use of NETLIB (especially SLATEC), rather than purchasing the complete IMSL library.

OBTAINING NUMERICAL PACKAGES VIA NETLIB
Netlib is a node of the Internet from which various numerical packages, or parts of them (i.e. individual programs or subroutines), can be obtained.

To find which programs are available for a particular task, do the following:
1) Google "netlib".
2) Click on "Search the Netlib Repository".
3) Enter your request in the search field, e.g. "LU factorization".
4) Change "Any of the words" to "All of the words".
5) Click GO.
6) Scroll down to "lapack/double/dgesvx.f solve general system for x" (it is item 57 on page 6 of the list as of 20 Jul 2014).
7) Click on "plus dependencies" (this will load the other routines needed by dgesvx.f).
8) Click on "Plain text" and "Start download". It will download quickly.
9) Click on "File", "Save as"; it will normally be saved to the Desktop.
10) Use WinSCP to move it to Red.

The most useful libraries are as follows:

NAME       PURPOSE
BMP        Multiple-precision package.
EISPACK    Eigenvalues and eigenvectors.
LAPACK     Linear equations, least squares, eigenvalue and eigenvector problems. Designed for supercomputers.
LINPACK    Linear equations and least squares, including general, banded, symmetric, triangular, and tridiagonal systems.
MATLAB     Matrix manipulation simplified.
SLATEC     General numerical and statistical library (like IMSL but free!).
SPARSPACK  Large sparse positive definite linear systems.
TOMS       Collected Algorithms from the ACM (i.e. published in ACM Transactions on Mathematical Software).

USE OF MATLAB TO SOLVE SYSTEMS OF LINEAR EQUATIONS
1) Go to one of the Acad Lab sites, such as Steacie library, or the Commons. Or, for remote access, go to the web-site

2) Click "Connect to Webfas" (the first time it will prompt you to download Citrix, a communication software; follow the prompts).
3) Enter user name, password and click Logon.
4) Click on the MATLAB r2013a icon at left.
5) Enter commands at top middle (>> prompt).

EXAMPLE of linear equations. To solve
x_1 + 3x_2 + 3x_3 = 1
x_1 + 3x_2 + 4x_3 = 4
x_1 + 4x_2 + 3x_3 = 1

Type A=[1 3 3;1 3 4;1 4 3] (note elements separated by blanks, rows by ;).
Type b = [1 4 1]' (note ' means transpose, i.e. turn the row vector into a column vector).
Type x = A\b (in effect x = A^-1 b, although that is not how it is done).
You will see the solution
x =
  -8
   0
   3
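For a general square system, the backslash operation in effect performs the triangular decomposition of Sec. 2 with the partial pivoting of Sec. 5, followed by the two substitution stages of Sec. 3. A self-contained Python sketch of that computation (an illustration, not MATLAB's actual code), applied to the same system:

```python
def solve(A, b):
    """Solve Ax = b: elimination with partial pivoting by rows,
    with forward substitution applied to b on the fly, then
    back substitution as in eq. (8)."""
    n = len(A)
    A = [row[:] for row in A]           # work on copies; L and U overwrite A
    b = b[:]
    for r in range(n):
        # pivot: bring the row with the largest column-r entry up to row r
        p = max(range(r, n), key=lambda i: abs(A[i][r]))
        A[r], A[p] = A[p], A[r]
        b[r], b[p] = b[p], b[r]
        for i in range(r + 1, n):       # eliminate below the pivot
            m = A[i][r] / A[r][r]       # multiplier = l_ir, |m| <= 1
            A[i][r] = m
            for j in range(r + 1, n):
                A[i][j] -= m * A[r][j]
            b[i] -= m * b[r]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):      # back substitution
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

A = [[1, 3, 3], [1, 3, 4], [1, 4, 3]]
b = [1, 4, 1]
print(solve(A, b))   # [-8.0, 0.0, 3.0]
```

Note that the pivoting step also rescues this particular system: after the first elimination a zero appears in the (2,2) position, and the row interchange steps over it.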

EXERCISE 7.
QU. 1. Solve by triangular decomposition (without pivoting):
9x + 8y + 7z + 6w = 2
54x + 53y + 46z + 39w = 8
45x + 55y + 49z + 39w = 4
36x + 42y + 38z + 31w = 2
N.B. All numbers obtained should be integers.

QU. 2. Solve by triangular decomposition, with pivoting by rows, and rounding all intermediate results to 2 significant figures (NOT 2 decimal places):
3x_1 + 6x_2 + 2x_3 = 1
5x_1 + 4x_2 + 3x_3 = 2
x_1 + 2x_2 + 4x_3 = 3

QU. 3. Solve the following equations (a) without pivoting, (b) with pivoting by rows. In each case, round all intermediate results to 3 sig. figs.:
10^-4(.6)x_1 + 3x_2 = 1.2
3x_1 + x_2 = 1.3

QU. 4. Solve the following problem in 3 different ways:
a) Manually, i.e. using a hand-held calculator.
b) By a Fortran program (which should be written for general data, but then you input the data for the particular problem).
c) Using Matlab.
Use the results of your manual calculation to debug or test your program. (BUT first use the MATLAB result to make sure your manual test is correct; these are just as likely to be buggy as programs.) The problem is to solve
x_1 + 2x_2 + 3x_3 = 10   (1)
2x_1 + 5x_2 + 4x_3 = 20   (2)
3x_1 + 4x_2 + 5x_3 = 22   (3)

CHAPTER 9. INTERPOLATION

SEC. 1. INTRODUCTION
We have seen that many real-life functions are not given by a neat formula, but in the form of a table at fixed intervals (or unequal intervals in some cases). The process of evaluating a tabular function at a point between the tabular intervals is called interpolation. This is done by finding some simple function which fits a few of the tabular points exactly, and hopefully gives a good approximation to the actual function in between these points of exact agreement. The most commonly used approximating functions are polynomials, although trig functions, exponentials and rational functions are also used. We shall consider only polynomial interpolation.

The simplest polynomial which can be used is that of the first degree (ax + b). That is, given values y_0, y_1 of a function at x_0, x_1 respectively, we find a function y = ax + b to pass through (x_0, y_0) and (x_1, y_1). Then the value of y at any point x between x_0 and x_1 is found by means of this formula y = ax + b. Since this is the equation of a straight line, this process is known as linear interpolation.

[Figure: the chord through (x_0, y_0) and (x_1, y_1), making angle θ at A = (x_0, y_0); B and D are the points below x and x_1 on the horizontal through A, and C and E are the corresponding points on the chord.]

To pass through the points (x_0, y_0), (x_1, y_1), our function must satisfy

y_0 = ax_0 + b   (14)
y_1 = ax_1 + b   (15)

Subtracting (15) from (14) gives:

y_0 - y_1 = a(x_0 - x_1), so a = (y_0 - y_1)/(x_0 - x_1)

Hence, from (14),

b = y_0 - ax_0 = y_0 - [(y_0 - y_1)/(x_0 - x_1)]x_0 = (y_1 x_0 - y_0 x_1)/(x_0 - x_1)

Hence

y = [(y_0 - y_1)x + (y_1 x_0 - y_0 x_1)]/(x_0 - x_1) = y_0 + (y_1 - y_0)(x - x_0)/(x_1 - x_0)   (16)

This last formula expresses the fact that

y - y_0 = CB = AB tan θ = AB·(DE/AD) = (x - x_0)(y_1 - y_0)/(x_1 - x_0)

We may also write (16) as y = y_0 + (Δy_0/Δx_0)(x - x_0). This is useful in tables where differences are given.

EXAMPLE: from the table below find f(.75) by linear interpolation:

x  | .7     .8
y  | 2.785  …

SOLUTION
Take x_0 and x_1 so that x_0 < x < x_1 (this should always be done if possible), i.e. x_0 = .7, y_0 = 2.785, x_1 = .8, y_1 = … Then for x = .75, (16) gives

y = 2.785 + (y_1 - 2.785)(.75 - .7)/(.8 - .7) = …
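Formula (16) in code is a one-liner; a Python sketch. Since the full table is not reproduced above, the second tabular value used below (y_1 = 2.830) is made up purely for illustration:

```python
def lerp(x0, y0, x1, y1, x):
    """Linear interpolation, formula (16): y = y0 + (y1-y0)(x-x0)/(x1-x0)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# x0 = .7, y0 = 2.785 as in the example; y1 = 2.830 is a hypothetical value
y = lerp(0.7, 2.785, 0.8, 2.830, 0.75)
print(y)   # about 2.8075, halfway between the two tabular values
```

Note the formula also extrapolates without complaint for x outside [x0, x1] (as in the class exercise at x = .82), though, as the next section shows, the error then grows quickly.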

CLASS EXERCISE. From the above table find y(.82).

SEC. 2. ERROR IN INTERPOLATION FORMULAS.
Often linear interpolation is not sufficiently accurate, so that we need to fit a polynomial of higher degree, e.g. quadratic, cubic, etc. Now it may be shown that there is a unique polynomial of degree n passing through n+1 points (x_i, y_i) (i = 0, 1, ..., n). For example we have a linear (first-degree) polynomial through the two points (x_0, y_0) and (x_1, y_1). In most cases the fitted polynomial will not agree exactly with the true function, and it is desirable to have an estimate of the difference between the two (the truncation error). It may be shown that this is given by:

y(x) = p(x) + [ψ(x)/(n+1)!] y^(n+1)(ξ)   (17)

where x_0 <= ξ <= x_n and

ψ(x) = (x - x_0)(x - x_1)...(x - x_n)   (18)

We do not know ξ exactly, only a range of values, but often we can find a bound on its maximum value, which can be very useful.

EXAMPLE: n = 1 (linear interpolation). We have seen that

p_1(x) = y_0 + (x - x_0)(y_1 - y_0)/(x_1 - x_0)

Then

e_1(x) = error in p_1(x) = [(x - x_0)(x - x_1)/2!] y^(2)(ξ), where x_0 <= ξ <= x_1

Now ψ(x) = (x - x_0)(x - x_1) has its maximum magnitude in this range when x = (x_0 + x_1)/2, and Max |ψ| = (x_1 - x_0)^2/4. Hence

|e_1(x)| <= [(x_1 - x_0)^2/8] M, where M = Max_{x_0 <= x <= x_1} |y^(2)(x)|

CLASS EXAMPLE. What is the largest interval which can be used in a table of sin x so that linear interpolation gives values correct to 4D?
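The bound just derived is easy to evaluate numerically. A Python sketch, applied to the class example under the assumptions that M = max |y''| = max |sin x| <= 1 and that "correct to 4D" means an error of at most 0.5 x 10^-4:

```python
import math

def lerp_error_bound(h, M):
    """Bound on the linear-interpolation error: |e1| <= h^2/8 * M,
    where h = x1 - x0 and M = max |y''| on [x0, x1]."""
    return h * h / 8.0 * M

# Inverting the bound: the largest spacing h with h^2/8 * M <= tol is
# h = sqrt(8*tol/M).  For a sine table (M <= 1) and tol = 0.5e-4:
h = math.sqrt(8 * 0.5e-4 / 1.0)
print(round(h, 4))   # 0.02
```

So (under those assumptions) a sine table tabulated at intervals of 0.02 radians suffices for 4D accuracy with linear interpolation.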


More information

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS FLOATING POINT ARITHMETHIC - ERROR ANALYSIS Brief review of floating point arithmetic Model of floating point arithmetic Notation, backward and forward errors Roundoff errors and floating-point arithmetic

More information

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix

More information

Exponential and logarithm functions

Exponential and logarithm functions ucsc supplementary notes ams/econ 11a Exponential and logarithm functions c 2010 Yonatan Katznelson The material in this supplement is assumed to be mostly review material. If you have never studied exponential

More information

Arithmetic and Error. How does error arise? How does error arise? Notes for Part 1 of CMSC 460

Arithmetic and Error. How does error arise? How does error arise? Notes for Part 1 of CMSC 460 Notes for Part 1 of CMSC 460 Dianne P. O Leary Preliminaries: Mathematical modeling Computer arithmetic Errors 1999-2006 Dianne P. O'Leary 1 Arithmetic and Error What we need to know about error: -- how

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

Linear Systems of n equations for n unknowns

Linear Systems of n equations for n unknowns Linear Systems of n equations for n unknowns In many application problems we want to find n unknowns, and we have n linear equations Example: Find x,x,x such that the following three equations hold: x

More information

Scientific Computing

Scientific Computing 2301678 Scientific Computing Chapter 2 Interpolation and Approximation Paisan Nakmahachalasint Paisan.N@chula.ac.th Chapter 2 Interpolation and Approximation p. 1/66 Contents 1. Polynomial interpolation

More information

Limit. Chapter Introduction

Limit. Chapter Introduction Chapter 9 Limit Limit is the foundation of calculus that it is so useful to understand more complicating chapters of calculus. Besides, Mathematics has black hole scenarios (dividing by zero, going to

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

Math 411 Preliminaries

Math 411 Preliminaries Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

INTRODUCTION TO COMPUTATIONAL MATHEMATICS

INTRODUCTION TO COMPUTATIONAL MATHEMATICS INTRODUCTION TO COMPUTATIONAL MATHEMATICS Course Notes for CM 271 / AMATH 341 / CS 371 Fall 2007 Instructor: Prof. Justin Wan School of Computer Science University of Waterloo Course notes by Prof. Hans

More information

printing Three areas: solid calculus, particularly calculus of several

printing Three areas: solid calculus, particularly calculus of several Math 5610 printing 5600 5610 Notes of 8/21/18 Quick Review of some Prerequisites Three areas: solid calculus, particularly calculus of several variables. linear algebra Programming (Coding) The term project

More information

A primer on matrices

A primer on matrices A primer on matrices Stephen Boyd August 4, 2007 These notes describe the notation of matrices, the mechanics of matrix manipulation, and how to use matrices to formulate and solve sets of simultaneous

More information

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.

More information

Linear System of Equations

Linear System of Equations Linear System of Equations Linear systems are perhaps the most widely applied numerical procedures when real-world situation are to be simulated. Example: computing the forces in a TRUSS. F F 5. 77F F.

More information

Numerical Analysis Exam with Solutions

Numerical Analysis Exam with Solutions Numerical Analysis Exam with Solutions Richard T. Bumby Fall 000 June 13, 001 You are expected to have books, notes and calculators available, but computers of telephones are not to be used during the

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

NUMERICAL METHODS C. Carl Gustav Jacob Jacobi 10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

NUMERICAL METHODS C. Carl Gustav Jacob Jacobi 10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING 0. Gaussian Elimination with Partial Pivoting 0.2 Iterative Methods for Solving Linear Systems 0.3 Power Method for Approximating Eigenvalues 0.4 Applications of Numerical Methods Carl Gustav Jacob Jacobi

More information

Reference Material /Formulas for Pre-Calculus CP/ H Summer Packet

Reference Material /Formulas for Pre-Calculus CP/ H Summer Packet Reference Material /Formulas for Pre-Calculus CP/ H Summer Packet Week # 1 Order of Operations Step 1 Evaluate expressions inside grouping symbols. Order of Step 2 Evaluate all powers. Operations Step

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS FLOATING POINT ARITHMETHIC - ERROR ANALYSIS Brief review of floating point arithmetic Model of floating point arithmetic Notation, backward and forward errors 3-1 Roundoff errors and floating-point arithmetic

More information

Practical Algebra. A Step-by-step Approach. Brought to you by Softmath, producers of Algebrator Software

Practical Algebra. A Step-by-step Approach. Brought to you by Softmath, producers of Algebrator Software Practical Algebra A Step-by-step Approach Brought to you by Softmath, producers of Algebrator Software 2 Algebra e-book Table of Contents Chapter 1 Algebraic expressions 5 1 Collecting... like terms 5

More information

Singular Value Decompsition

Singular Value Decompsition Singular Value Decompsition Massoud Malek One of the most useful results from linear algebra, is a matrix decomposition known as the singular value decomposition It has many useful applications in almost

More information

STEP Support Programme. Pure STEP 1 Questions

STEP Support Programme. Pure STEP 1 Questions STEP Support Programme Pure STEP 1 Questions 2012 S1 Q4 1 Preparation Find the equation of the tangent to the curve y = x at the point where x = 4. Recall that x means the positive square root. Solve the

More information

Lecture 28 The Main Sources of Error

Lecture 28 The Main Sources of Error Lecture 28 The Main Sources of Error Truncation Error Truncation error is defined as the error caused directly by an approximation method For instance, all numerical integration methods are approximations

More information

Lecture Notes in Mathematics. Arkansas Tech University Department of Mathematics. The Basics of Linear Algebra

Lecture Notes in Mathematics. Arkansas Tech University Department of Mathematics. The Basics of Linear Algebra Lecture Notes in Mathematics Arkansas Tech University Department of Mathematics The Basics of Linear Algebra Marcel B. Finan c All Rights Reserved Last Updated November 30, 2015 2 Preface Linear algebra

More information

What Every Programmer Should Know About Floating-Point Arithmetic DRAFT. Last updated: November 3, Abstract

What Every Programmer Should Know About Floating-Point Arithmetic DRAFT. Last updated: November 3, Abstract What Every Programmer Should Know About Floating-Point Arithmetic Last updated: November 3, 2014 Abstract The article provides simple answers to the common recurring questions of novice programmers about

More information

Roundoff Analysis of Gaussian Elimination

Roundoff Analysis of Gaussian Elimination Jim Lambers MAT 60 Summer Session 2009-0 Lecture 5 Notes These notes correspond to Sections 33 and 34 in the text Roundoff Analysis of Gaussian Elimination In this section, we will perform a detailed error

More information

1 Lecture 8: Interpolating polynomials.

1 Lecture 8: Interpolating polynomials. 1 Lecture 8: Interpolating polynomials. 1.1 Horner s method Before turning to the main idea of this part of the course, we consider how to evaluate a polynomial. Recall that a polynomial is an expression

More information

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4 Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math Week # 1 Saturday, February 1, 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x

More information

Getting Started with Communications Engineering. Rows first, columns second. Remember that. R then C. 1

Getting Started with Communications Engineering. Rows first, columns second. Remember that. R then C. 1 1 Rows first, columns second. Remember that. R then C. 1 A matrix is a set of real or complex numbers arranged in a rectangular array. They can be any size and shape (provided they are rectangular). A

More information

Practice problems from old exams for math 132 William H. Meeks III

Practice problems from old exams for math 132 William H. Meeks III Practice problems from old exams for math 32 William H. Meeks III Disclaimer: Your instructor covers far more materials that we can possibly fit into a four/five questions exams. These practice tests are

More information

Binary floating point

Binary floating point Binary floating point Notes for 2017-02-03 Why do we study conditioning of problems? One reason is that we may have input data contaminated by noise, resulting in a bad solution even if the intermediate

More information

Differentiation 1. The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996.

Differentiation 1. The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996. Differentiation 1 The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996. 1 Launch Mathematica. Type

More information

k-protected VERTICES IN BINARY SEARCH TREES

k-protected VERTICES IN BINARY SEARCH TREES k-protected VERTICES IN BINARY SEARCH TREES MIKLÓS BÓNA Abstract. We show that for every k, the probability that a randomly selected vertex of a random binary search tree on n nodes is at distance k from

More information

Integrals. D. DeTurck. January 1, University of Pennsylvania. D. DeTurck Math A: Integrals 1 / 61

Integrals. D. DeTurck. January 1, University of Pennsylvania. D. DeTurck Math A: Integrals 1 / 61 Integrals D. DeTurck University of Pennsylvania January 1, 2018 D. DeTurck Math 104 002 2018A: Integrals 1 / 61 Integrals Start with dx this means a little bit of x or a little change in x If we add up

More information

Laboratory #3: Linear Algebra. Contents. Grace Yung

Laboratory #3: Linear Algebra. Contents. Grace Yung Laboratory #3: Linear Algebra Grace Yung Contents List of Problems. Introduction. Objectives.2 Prerequisites 2. Linear Systems 2. What is a Matrix 2.2 Quick Review 2.3 Gaussian Elimination 2.3. Decomposition

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Taylor s Theorem Can often approximate a function by a polynomial The error in the approximation

More information

4.2 Floating-Point Numbers

4.2 Floating-Point Numbers 101 Approximation 4.2 Floating-Point Numbers 4.2 Floating-Point Numbers The number 3.1416 in scientific notation is 0.31416 10 1 or (as computer output) -0.31416E01..31416 10 1 exponent sign mantissa base

More information

What is it we are looking for in these algorithms? We want algorithms that are

What is it we are looking for in these algorithms? We want algorithms that are Fundamentals. Preliminaries The first question we want to answer is: What is computational mathematics? One possible definition is: The study of algorithms for the solution of computational problems in

More information

NUMERICAL MATHEMATICS & COMPUTING 6th Edition

NUMERICAL MATHEMATICS & COMPUTING 6th Edition NUMERICAL MATHEMATICS & COMPUTING 6th Edition Ward Cheney/David Kincaid c UT Austin Engage Learning: Thomson-Brooks/Cole www.engage.com www.ma.utexas.edu/cna/nmc6 September 1, 2011 2011 1 / 42 1.1 Mathematical

More information

Chapter 11 - Sequences and Series

Chapter 11 - Sequences and Series Calculus and Analytic Geometry II Chapter - Sequences and Series. Sequences Definition. A sequence is a list of numbers written in a definite order, We call a n the general term of the sequence. {a, a

More information

Linear Algebra. PHY 604: Computational Methods in Physics and Astrophysics II

Linear Algebra. PHY 604: Computational Methods in Physics and Astrophysics II Linear Algebra Numerical Linear Algebra We've now seen several places where solving linear systems comes into play Implicit ODE integration Cubic spline interpolation We'll see many more, including Solving

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University February 6, 2018 Linear Algebra (MTH

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information