Math 348: Numerical Methods with application in finance and economics. Boualem Khouider University of Victoria


Lecture notes: Last updated September 2016

Contents

1 Basics of numerical analysis
    Notion of rounding and truncation errors
    Bounding the truncation error
    Computer representation of numbers: floating-point arithmetics
        Representation of numbers in an arbitrary base
    Floating-point numbers
        The smallest and largest fl-pt numbers
        Distance between two successive floating-point numbers, the machine epsilon
        Precision and round-off errors
    Floating-point arithmetics and the danger of cancellation of significant digits
    Stability and Conditioning
    Problems

2 Direct methods for linear systems
    Introduction
    Lower and upper triangular matrices
    Gauss elimination
        Operation count
        LU factorization
        Partial pivoting or row interchange
        Cholesky's LL^T factorization
        LU factorization in Matlab
    Vector and matrix norms
    Condition number and conditioning
    Problems

3 Nonlinear equations: F(X) = 0
    Introduction
    The bisection method
        Convergence of the bisection method
    Newton's method
        Local convergence of Newton's method
        Rate of convergence
        Advantages and disadvantages of Newton's method
    Secant method
        Convergence or stopping criteria
        Function fzero of Matlab
    Fixed-point iterations
        The method of fixed-point iterations
        Newton's method as a fixed-point iteration
        The chord method
        Application: The Black-Scholes formula
    Problems

4 Function approximation
    Curve fitting
        Least square approximation in Matlab
    Interpolation polynomial
        Lagrange interpolation polynomial
        The interpolation polynomial of a known function
        Interpolation error
    Piecewise polynomial interpolation (introduction to splines)
        Splines in Matlab
    Theory of least square approximation
        Notion of basis and generalization of linear least square approximation
        General least squares in Matlab
    Newton divided differences
    Problems

5 Numerical integration
    Newton-Cotes integration
        Integration error
        Order of accuracy
    Composite integration rules
        Error analysis for the composite integration rules
    Gauss integration
    Problems

6 Monte Carlo integration
    Introduction: Probability distributions and random variables
        Definition and examples
        Mean, variance, and expectation
        Conditional probability and notion of dependent and independent random variables
    Crude Monte Carlo integration
    Random number generators: pseudo-random numbers
        Linear congruential generators (LCG)
        Inverse transform method
        Acceptance-rejection method
        Polar approach for generating normal variates
    Controlling the sampling error and variance reduction techniques
    Problems

7 Optimization and multidimensional Newton's method
    Unconstrained optimization in one dimension
    Unconstrained multivariable smooth optimization and Newton's method for systems
    One-variable unconstrained optimization: Golden section search method
    Multivariable unconstrained optimization
        Introduction to convex optimization
        The method of steepest descent
    Constrained optimization
        Equality constraints and Lagrange multipliers method
        Penalty method
    Unconstrained optimization in Matlab: fminunc
    Inequality constraint and the barrier function method
    Problems

8 Ordinary differential equations
    Euler's method
        Error analysis
        Round-off errors
    Higher order and implicit methods
        One-step methods
        Implicit and multistep methods
    Differential equations in Matlab
        Illustrative example
    Problems

9 Finite difference methods for partial differential equations
    Introduction to partial differential equations
        Classification of PDEs
        Initial and boundary conditions
    Finite differencing
    Finite difference schemes for the heat equation
        Finite difference approximation of second order derivative
    Consistency, order of accuracy, and convergence
    Crank-Nicolson method
    Reading homework: Application to the Black-Scholes equation

Preliminary recommendations and guidelines

To solve some questions in the problem sets, you need to use MATLAB. Below are a few hints and recommendations on how to use Matlab.

Normally, both the Windows and the UNIX versions of MATLAB are installed on many of the machines on campus; for example, check the computer lab downstairs in the Clearihue building. For more information on how to install the software on your own laptop, how to access online help, etc., visit the UVic Matlab resources website. There are also many free manuals and tutorials that you can find by searching online, for example: Controlled%20Materials%20OPEN/matlab_tutorials/MatlabManual.pdf?

For help with a specific Matlab command, e.g. fzero, type in the Matlab command window (once you've launched Matlab):

>>help fzero

Submitting your HOMEWORK

**********************************************************

FOR ALL ASSIGNMENTS IN WHICH MATLAB IS USED, HAND IN THE FOLLOWING: a printed copy of the MATLAB statements (including M-files) that you use to solve a problem, any output that is generated on your screen, and any graphs that are plotted. Either edit your saved files to eliminate any statements that are not part of your final solution, or cross them out, so that the marker can easily find your answers.

CLEARLY IDENTIFY AND LABEL YOUR SOLUTIONS.

Using diary to save your work under Matlab

The simplest way to save a copy of the MATLAB commands that you type into the Command Window, and of the MATLAB output that you generate, is to use the diary command. (See help diary.) Everything that appears on the MATLAB screen after entering the command

diary('filename')

will be saved in a file called filename, until you enter

diary off

Windows version of the diary command: enter (including the quotes)

>> diary('c:\students\filename') %(check with your system administrator)

UNIX version of the diary command: enter (including the quotes)

>> diary('filename')

To print a file created using diary in the Windows version: select File -> Open. At the bottom of this window, under "Files of type", select "All files", and near the top of this window, under "Look in", select Local Disk (C:) -> Students.

Then select (highlight) the file you want to open, and select Open. This will cause the contents of the file to appear in an Editor window. Then select File and Print. Then close the Editor window.

To print a file created using diary in the UNIX version: select File -> Open and, under "Files of Type", select "All files (*.*)". Then select the appropriate filename. This will cause the contents of the file to appear in an Editor window. Under File in this window, select Print to print the file, or choose the print icon. Then close the Editor window.

Note: if you don't specify a file name and enter the diary command without the parentheses, >>diary, your work will be saved in a file named diary by default.

**********************************************************

Chapter 1

Basics of numerical analysis

1.1 Notion of rounding and truncation errors

Because computer resources (memory and disk space) are finite, when real numbers and mathematical operations are used on a computer, two kinds of errors are introduced: truncation errors and rounding (or round-off) errors.

Truncation errors occur when, for example, the mathematical quantity under consideration involves an infinite number of arithmetic operations (i.e., additions and multiplications; subtractions and divisions are considered as additions and multiplications, respectively). It must then be approximated by a truncated expression having a finite number of terms before it is implemented on a computer. Numerical series and power series are the simplest examples.

Let $S = \sum_{n=1}^{\infty} 1/n^2$. One way to estimate the value of S on a computer is to use its partial sums. For sufficiently large N, we have
$$S \approx S_N = \sum_{n=1}^{N} \frac{1}{n^2}.$$
The difference $E = S - S_N = \sum_{n=N+1}^{\infty} 1/n^2$ is the truncation error.

An expression such as $e^x$ can be estimated by using its Taylor (or power) series:
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} \approx 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^N}{N!}$$
for N sufficiently large. The truncation error in this case, given by $E = \sum_{n=N+1}^{\infty} x^n/n!$, is also known as the remainder in the theory of Taylor expansions.

Rounding or round-off errors occur because a computer uses a finite set of rational numbers to represent (approximately) the whole real line: the rational numbers that are exactly represented on a computer are known as floating-point (fl-pt) numbers. All other real numbers are rounded to the nearest floating-point number. Very large and very small numbers are treated as infinity

and as zero, respectively. Floating-point numbers and round-off errors are discussed in detail below, but we first discuss truncation errors, in the simple case of Taylor expansions.

1.2 Bounding the truncation error

In practical problems, truncation errors, when introduced, are impossible to estimate directly without further approximations. The numerical analyst often relies on very sophisticated mathematical theories (when available) to find upper bounds for the truncation error, so as to verify a priori that the truncated quantity is an acceptable approximation for the original expression.

As an example, assume we want to approximate the integral
$$I = \int_1^x \ln(t)\,dt, \quad \text{for all } x \in [1, 1.5],$$
to within an error of at most 0.002, using a Taylor expansion of the function ln(t) about $x_0 = 1$. How many terms in the Taylor expansion do we need to keep?

Recall Taylor's expansion:
$$f(x) = f(x_0) + (x - x_0) f'(x_0) + \frac{(x-x_0)^2}{2!} f''(x_0) + \cdots + \frac{(x-x_0)^n}{n!} f^{(n)}(x_0) + R_n(x),$$
where
$$R_n(x) = \frac{(x-x_0)^{n+1}}{(n+1)!} f^{(n+1)}(\xi), \quad \xi \text{ lies somewhere between } x \text{ and } x_0.$$
If the remainder is dropped, then we get the Taylor approximation $f(x) \approx P_n(x)$, where
$$P_n(x) = f(x_0) + (x - x_0) f'(x_0) + \frac{(x-x_0)^2}{2!} f''(x_0) + \cdots + \frac{(x-x_0)^n}{n!} f^{(n)}(x_0)$$
is known as the Taylor polynomial of f about $x_0$ of order (or degree) n. Notice that a polynomial involves only a finite number of arithmetic operations (additions and multiplications) and therefore can be directly evaluated on a computer. The truncation error in this case is given by the remainder, which satisfies $R_n(x) = f(x) - P_n(x)$.

Note that $R_n(x_0) = 0$, and recall that for well-behaved functions, if x is sufficiently close to $x_0$, then $R_n(x)$ decreases to zero as n goes to infinity. Therefore, as n gets larger and larger, the Taylor polynomial $P_n(x)$ gets closer and closer to f(x): the larger n is, the better the approximation.

Back to our example of the integral I above. Let $f(x) = \int_1^x \ln t \, dt$.
Then
$$f'(x) = \ln(x), \quad f''(x) = \frac{1}{x}, \quad f'''(x) = -\frac{1}{x^2}, \quad f^{(4)}(x) = \frac{2}{x^3}, \quad f^{(5)}(x) = -\frac{6}{x^4}.$$

A Taylor approximation of order 3, about $x_0 = 1$, yields
$$\int_1^x \ln t\,dt = f(1) + (x-1) f'(1) + \frac{(x-1)^2}{2!} f''(1) + \frac{(x-1)^3}{3!} f'''(1) + R_3(x) = 0 + 0 + \frac{(x-1)^2}{2} - \frac{(x-1)^3}{6} + R_3(x),$$
where
$$R_3(x) = \frac{(x-1)^4}{24} \cdot \frac{2}{\xi^3}, \quad 1 \le \xi \le x \le 1.5.$$
In order for this approximation to be acceptable, we need to verify whether the truncation error satisfies $|R_3(x)| \le 0.002$ for all $x \in [1, 1.5]$. We have
$$|R_3(x)| = \frac{(x-1)^4}{12} \cdot \frac{1}{\xi^3}, \quad x \in [1, 1.5], \ \xi \in [1, x],$$
and
$$\max |R_3(x)| \le \frac{(1.5-1)^4}{12}\Big|_{\xi = 1} = \frac{(0.5)^4}{12} \approx 0.0052 > 0.002,$$
i.e., the 3rd-order Taylor approximation is not acceptable for the given error tolerance of 0.002.

We improve the Taylor approximation of f(x) by using instead an expansion of higher order. With n = 4, we get
$$P_4(x) = \frac{(x-1)^2}{2} - \frac{(x-1)^3}{6} + \frac{(x-1)^4}{12}$$
and
$$R_4(x) = -\frac{(x-1)^5}{120} \cdot \frac{6}{\xi^4}, \quad x \in [1, 1.5], \ \xi \in [1, x].$$
This satisfies
$$|R_4(x)| \le \frac{(0.5)^5}{120} \cdot 6 \approx 0.0016 < 0.002, \quad \forall x \in [1, 1.5], \ \xi \in [1, x],$$
because $|x - 1| \le 0.5$ and $1/\xi^4 \le 1$ for $x \in [1, 1.5]$ and $\xi \in [1, x]$. We used the fact that $\max |f(x) g(x)| \le \max |f(x)| \cdot \max |g(x)|$.

Thus, to within an error of 0.002, we have
$$\int_1^x \ln t\,dt \approx P_4(x) = \frac{1}{12}\left(x^4 - 6x^3 + 18x^2 - 22x + 9\right).$$
The last expression can be implemented on a computer to provide acceptable approximations for the given integral for any given value of $x \in [1, 1.5]$.

Below, we use the Matlab language to plot the function $f(x) = \int_1^x \ln t\,dt = x \ln x - x + 1$ (solid line) and the Taylor polynomial $P_4(x)$ (dashed line) on top of each other, for comparison, together with the error $|f(x) - P_4(x)|$ for $x \in [1, 1.5]$ (bottom panel). As expected, because of the relatively small error, the two curves on the top panel are almost indistinguishable, except for values of x near 1.5. The error plot on the bottom panel provides extra evidence that the truncation error is within the tolerance level of 0.002: indeed, the error remains below the upper bound value of about $1.6 \times 10^{-3}$ computed above. Also note that the error increases rapidly as x moves away from $x_0 = 1$. This is typical of Taylor approximations.
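The error bound for $P_4$ can also be checked numerically. Here is a short sketch (written in Python for brevity, while the notes otherwise use Matlab), using the closed form $f(x) = x \ln x - x + 1$ of the integral:

```python
import math

def f(x):
    # closed form of the integral: f(x) = x ln(x) - x + 1
    return x * math.log(x) - x + 1

def p4(x):
    # 4th-order Taylor polynomial of f about x0 = 1
    return (x**4 - 6 * x**3 + 18 * x**2 - 22 * x + 9) / 12

# sample the truncation error on a fine grid of [1, 1.5]
xs = [1 + 0.5 * i / 1000 for i in range(1001)]
max_err = max(abs(f(x) - p4(x)) for x in xs)
print(max_err)  # about 1.2e-3: below both the bound 1.6e-3 and the tolerance 0.002
```

The sampled maximum error is attained at x = 1.5, consistent with the remark that the Taylor error grows as x moves away from $x_0$.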

[Figure: top panel, $x\ln(x) - x + 1$ (solid) and $P_4(x)$ (dashed); bottom panel, the error $|f(x) - P_4(x)|$ (scale $\times 10^{-3}$) versus x, for $x \in [1, 1.5]$.]

Here are the Matlab commands used to produce these plots. (All the statements that are preceded by a % are user comments that are not seen by Matlab.)

>>f = inline('x*log(x)-x + 1') %predefines the function f(x)
>>figure %opens a new figure window
>>subplot(2,1,1) %divides the figure window into two subwindows
                 %and sets the pointer on the top panel.
                 %Use help subplot to learn more.
>> fplot(@(x)f(x),[1,1.5]) %creates a graphic for the function f(x)
>> hold on %so we can graph a new function on top of the previous
>> p4 = inline('(x.^4-6*x.^3+18*x.^2-22*x+9)/12') %defines the polynomial p4 as indicated
>> fplot(@(x)p4(x),[1,1.5],'r--') %graphs p4 on top of f(x)
                                  %using a red dashed line (thus the 'r--' option).
>> legend('x ln(x)-x+1','P_4(x)',0) %puts a legend box
>> subplot(2,1,2) %creates the second panel
>> fplot(@(x)abs(f(x) - p4(x)),[1,1.5]) %abs stands for absolute value
>> xlabel('x') %puts a label on the x axis
>> ylabel('Error') %puts a label on the y axis
>> print -depsc taylorapproximation.eps %saves a hard copy of the figure
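To close this section, the truncation error of the series example from Section 1.1 can be observed directly. The following sketch is in Python for illustration (the notes otherwise use Matlab) and uses the known closed form $S = \pi^2/6$ of the series:

```python
import math

# Truncation error when approximating S = sum_{n>=1} 1/n^2 = pi^2/6
# by the partial sum S_N.
def partial_sum(N):
    return sum(1.0 / n**2 for n in range(1, N + 1))

S = math.pi**2 / 6          # exact value of the series (Basel problem)
for N in (10, 100, 1000):
    E = S - partial_sum(N)  # truncation error E = S - S_N
    print(N, E)
```

Since $\sum_{n > N} 1/n^2$ is squeezed between $1/(N+1)$ and $1/N$, the truncation error decays only like 1/N: roughly a thousand terms are needed for three correct digits.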

1.3 Computer representation of numbers: floating-point arithmetics

1.3.1 Representation of numbers in an arbitrary base

The number 3490 in the decimal base is interpreted as
$$3490 = 3 \times 10^3 + 4 \times 10^2 + 9 \times 10^1 + 0 \times 10^0.$$
The power of ten determines the order of magnitude, and the coefficient in front, which can be any digit from 0 to 9, gives the actual term. Similarly, we can convert any given number, given for example in base 10, to an arbitrary base b. The commonly used bases are b = 2 (binary; coefficients are 0 or 1), b = 8 (octal; coefficients are digits from 0 to 7), and b = 16 (hexadecimal; coefficients are in {0, 1, 2, ..., 9, A, B, C, D, E, F}).

For example, the number three in the decimal base satisfies $3 = 2 + 1 = 1 \times 2^1 + 1 \times 2^0$ and thus should be written as 11 in the binary base, i.e., we have $(3)_{10} = (11)_2$, where the subscript refers to the base in which the number is represented.

Other examples: $(21.5)_{10} = 16 + 4 + 1 + 1/2 = 2^4 + 2^2 + 2^0 + 2^{-1} = (10101.1)_2$.

Verify that $(12)_{10} = (C)_{16} = (14)_8 = (1100)_2$.

Exercise: Write $(21.5)_{10}$ in both the octal and hexadecimal bases.

General representation of integers in the binary base

Let N be an integer. Then there exists a finite sequence of coefficients $b_0, b_1, \ldots, b_k$ such that
$$N = b_k 2^k + b_{k-1} 2^{k-1} + \cdots + b_1 2 + b_0.$$
To compute the coefficients $b_0, b_1, \ldots, b_k$, we first note that
$$\frac{N}{2} = b_k 2^{k-1} + b_{k-1} 2^{k-2} + \cdots + b_1 + \frac{b_0}{2}.$$
Let $Q_0$ be the integer such that $N = 2 Q_0 + b_0$, i.e., $b_0$ is the remainder of the division of N by 2; $b_0 = 1$ or 0 depending on whether N is odd or even. Similarly, consider $Q_1$ such that $Q_0 = 2 Q_1 + b_1$,

i.e., $b_1$ is the remainder of the division of $Q_0$ by 2, which again is either 0 or 1. Iterating this process over and over yields an effective algorithm for computing the coefficients $b_0, b_1, \ldots, b_k$. The corresponding Matlab program is given next.

%M-file DecToBinary.m
function b=DecToBinary(N)
%Matlab program converting any given decimal number to binary:
%input: an integer number N in decimal base
%output: a vector b--sequence of zeros and ones--the binary representation of N
%%Initialization
N0 = N;
i=1;
%%while loop
while(N0>0)
  N1 = floor(N0/2); %floor(X) returns the largest integer <= X.
  b(i) = N0-N1*2;   %store the remainder of the division of N0 by 2
  i=i+1;
  N0=N1;
end

Type in this little program, save it as an M-file, and execute it for a few integers of your choice. For example, 3 and 13 should yield the following outputs (note that the digits are stored from least to most significant):

>>DecToBinary(3)
1 1
>>DecToBinary(13)
1 0 1 1

Now execute the above algorithm by hand for these two examples of converting 3 and 13 from the decimal to the binary base, to confirm the Matlab results.

Exercise: a) Find the binary representation of the decimal number 123. b) Explain how you would adapt the algorithm above to convert integers from base 10 to base 8. Write the corresponding Matlab program and find the representation of the decimal number 123 in base 8.

Binary representation of real numbers

Let $X \ge 0$ be a non-negative real number. Then X can be decomposed into its integer and fraction parts: $X = N + R$, $0 \le R < 1$, with
$$N = \sum_{j=0}^{k} b_j 2^j \quad \text{and} \quad R = \sum_{j=1}^{\infty} d_j 2^{-j}.$$

Example: $R = (0.7)_{10} = (0.1\overline{0110})_2$, where the overline means that the pattern 0110 is repeated infinitely many times. Note that 0.7 has an exact (finite) representation in base 10 but not in base 2; i.e., on a binary computer, the number 0.7 cannot be represented exactly.

Here is how this sequence has been computed. According to the expression of R, we note that
$$2R = d_1 + \sum_{j=2}^{\infty} d_j 2^{-j+1} = d_1 + \sum_{j=1}^{\infty} d_{j+1} 2^{-j},$$
because multiplying each term in the expansion of R by 2 amounts to augmenting the power of two by 1: $2 \cdot 2^{-j} = 2^{-j+1}$. Therefore, $d_1$ is the integer part of 2R, and subsequently, if we set $F_1 = 2R - d_1$ (i.e., the fraction part of 2R), then $d_2$ is the integer part of $2F_1$, etc. Thus, we have the following algorithm.

Set $F_0 = R$. Then for $k = 1, 2, \ldots$:
    Set $d_k$ = integer part of $2F_{k-1}$, namely
    $$d_k = \begin{cases} 1 & \text{if } 2F_{k-1} \ge 1, \\ 0 & \text{if } 2F_{k-1} < 1, \end{cases}$$
    and $F_k$ = fraction part of $2F_{k-1}$: $F_k = 2F_{k-1} - d_k$.
    Stop if $F_k = 0$ or when the desired precision (i.e., number of coefficients) is achieved.
End

For R = 0.7, we have
$2R = 1.4 \Rightarrow d_1 = 1, \ F_1 = 0.4$
$2F_1 = 0.8 \Rightarrow d_2 = 0, \ F_2 = 0.8$
$2F_2 = 1.6 \Rightarrow d_3 = 1, \ F_3 = 0.6$
$2F_3 = 1.2 \Rightarrow d_4 = 1, \ F_4 = 0.2$
$2F_4 = 0.4 \Rightarrow d_5 = 0, \ F_5 = 0.4$
But $F_5 = F_1 \Rightarrow d_6 = d_2, \ d_7 = d_3, \ldots$

Exercise: (a) Write the Matlab code to find the binary (b = 2) representation of any fraction F and, ultimately, of any real number X. (b) Adapt the algorithm in (a) to the case of the octal (base 8) representation of real numbers.

1.4 Floating-point numbers

In a representation system of numbers with a given base b, we call a floating-point number any rational number X that can be written exactly in the form
$$X = \pm 0.d_1 d_2 d_3 \cdots d_k \times b^e = \pm m\, b^e \quad (1.1)$$

where $m = 0.d_1 d_2 d_3 \cdots d_k$ is called the mantissa (or the fraction) and e is the exponent. Note that m is viewed as a fraction in a numeral system with base b. The $d_j$'s are digits between 0 and b - 1 if $b \le 10$, and digits from 0 to 9 plus a sequence of letters if $b \ge 11$. k is an integer that limits the size of the mantissa, while e satisfies $E_m \le e \le E_M$, where $E_m < 0$ and $E_M > 0$ are respectively the smallest and largest exponents, which limit the magnitude of the smallest and largest numbers represented by the given floating-point system.

Remark: Note that, according to the example above, 0.7 (in decimal base) is a floating-point number in the decimal representation but not in the binary representation.

Example: In the binary system adopted by most of today's computers, the following two sets of parameters are used, commonly known as the single (or short) and double (or long) precision representations (taking into account the hidden bit):

Mode       b    k    E_m      E_M     Precision
32 bits    2    24   -125     128     single/short precision
64 bits    2    53   -1021    1024    double/long precision

Normalized floating-point numbers

Whenever possible, a floating-point number is normalized so that $d_1 \ne 0$ (for normalized floating-point numbers in base 2, we have $d_1 = 1$). A floating-point number that can be normalized is called normal; otherwise it is called subnormal. The normalized representation of a normal floating-point number is unique.

1.4.1 The smallest and largest fl-pt numbers

The smallest positive normal floating-point number, $X_m$, is reached when $e = E_m$, $d_1 = 1$, and $d_j = 0$ for $j = 2, \ldots, k$, which yields
$$X_m = (0.1)_b \times b^{E_m} = b^{E_m - 1}.$$
In Matlab, this number is known as the smallest real number and is denoted by realmin. Given that Matlab is by default configured to use double precision, we have realmin $= 2^{-1022} \approx 2.2251 \times 10^{-308}$.

Exercise: Type >>realmin in the Matlab command window and compare the output to the number above.

Characterization of subnormal numbers: A non-negative floating-point number X is subnormal if and only if $X < b^{E_m - 1}$. Therefore, by convention (of the IEEE: Institute of Electrical and Electronics Engineers), subnormal numbers take the form
$$X = 0.d_2 d_3 \cdots d_k \times b^{E_m - 1}. \quad (1.2)$$
Note that the size of the new mantissa is reduced by one, because the original number was shifted to the right by one bit.

The number zero is treated as a special number and is represented by $0 = (0.00\cdots0) \times b^{E_m}$.

The smallest (subnormal) positive fl-pt number (let us call it $X_{min}$, to distinguish it from $X_m$, realmin in Matlab, which is the smallest normal number) is achieved when $d_j = 0$, $j = 2, \ldots, k-1$, and $d_k = 1$ in (1.2). This leads to
$$X_{min} = b^{-k+1} \cdot b^{E_m - 1}.$$
In a 64-bit environment (such as Matlab), the smallest representable number is given by $X_{min} = 2^{-52} \times 2^{-1022} = 2^{-1074} \approx 4.94 \times 10^{-324}$, which is the smallest number before zero in Matlab.

Exercise: To compute the smallest subnormal number of Matlab, execute the following Matlab program:

>>x=eps
>>while(x>0), x=x/2, end

Explain why this program yields the smallest positive number in Matlab (give a different explanation than the one above). Here eps is a predefined Matlab constant known as the machine epsilon; see the next subsection.

The largest representable (floating-point) number is achieved when $e = E_M$ and $d_j = b - 1$, $j = 1, 2, \ldots, k$. In the 64-bit mode this is equivalent to $(1 - 2^{-53}) \times 2^{1024} \approx 1.7977 \times 10^{308}$. It is called realmax in Matlab.

Exercise: Go to Matlab and type >>realmax and >>2^(1023). Compare the two numbers and explain. Execute the following Matlab commands:

Type >>realmax
Type >>realmax * 2
Type >>(realmax + 10*eps) - realmax

Observe the output of each command and explain the results.

Overflow and underflow

If a number exceeds the largest fl-pt number, it is said to overflow, and it is treated as either infinity or the largest fl-pt number, depending on the rounding mode. Similarly, any number that is smaller in magnitude than the smallest fl-pt number is said to underflow, and it is treated as either zero or the smallest fl-pt number.
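These extreme values are easy to inspect. A quick check in Python (whose floats are the same IEEE double-precision numbers that Matlab uses by default):

```python
import sys

# IEEE double-precision limits, as exposed by Python
print(sys.float_info.max)      # largest finite double: Matlab's realmax
print(sys.float_info.min)      # smallest positive normal double: Matlab's realmin
print(5e-324)                  # smallest positive subnormal double, 2**(-1074)
print(sys.float_info.epsilon)  # machine epsilon 2**(-52): Matlab's eps

# overflow beyond realmax gives inf; underflow below 2**(-1074) gives exactly 0
print(sys.float_info.max * 2)  # inf
print(5e-324 / 2)              # 0.0
```

The last two lines reproduce the overflow and underflow behaviour described above: the halving loop of the exercise terminates precisely because dividing the smallest subnormal by 2 underflows to zero.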

1.4.2 Distance between two successive floating-point numbers, the machine epsilon

Normalized floating-point numbers with a common exponent t are found in the interval $[b^{t-1}, b^t)$. Note that $b^{t-1} = (0.1)_b \times b^t$. There are exactly $(b-1) b^{k-1}$ (normalized) floating-point numbers in the interval $[b^{t-1}, b^t)$: $b^{t-1}$ is the smallest and $0.(b-1)(b-1)\cdots(b-1) \times b^t$ is the largest. Floating-point numbers within the interval $[b^{t-1}, b^t)$ are uniformly distributed, and the distance between two consecutive fl-pt numbers within this interval is
$$D = \frac{b^t - b^{t-1}}{(b-1) b^{k-1}} = b^{t-k}. \quad (1.3)$$
The distance between 1 and the next floating-point number is called the machine epsilon or machine precision and is often denoted by $\epsilon$. Since $1 = (0.1)_b \times b^1$, we get $\epsilon = b^{-k+1}$ (t = 1 in the above expression). In double precision, the machine epsilon is given by $\epsilon = 2^{-52} \approx 2.2204 \times 10^{-16}$, which is also predefined in Matlab. Try >>eps in the Matlab command window.

Exercise: Write a small Matlab program to find the machine epsilon, without using the formula (1.3) above.

1.4.3 Precision and round-off errors

Definition 1: Let $\bar p$ denote an approximation to a given number p. Then the quantity $|p - \bar p|$ is called the absolute error and, if $p \ne 0$, the quantity $|p - \bar p| / |p|$ is called the relative error.

When using floating-point numbers, two modes of approximation are often used to find the nearest fl-pt number to a given real number: chopping and rounding. Chopping consists of simply chopping off all the digits that exceed the size of the mantissa in the normalized floating-point system, and rounding consists of rounding to the nearest fl-pt number, i.e., the one that minimizes the absolute error.

Example: Find the fl-pt approximation and the associated absolute and relative errors for x = 2/3, using a floating-point system with base 10 and precision k = 4, in both the chopping and rounding modes.

Solution:

Mode        fl-pt approximation    absolute error              relative error
chopping    0.6666                 $\approx 0.67 \times 10^{-4}$    $10^{-4}$
rounding    0.6667                 $\approx 0.33 \times 10^{-4}$    $0.5 \times 10^{-4}$
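The chopping and rounding modes of this example can be imitated with Python's decimal module, which lets us set up a base-10 system with a 4-digit mantissa (an illustrative sketch; Matlab itself always works in binary double precision):

```python
from decimal import Decimal, Context, ROUND_DOWN, ROUND_HALF_UP

# two 4-digit decimal floating-point systems, differing only in rounding mode
chopping = Context(prec=4, rounding=ROUND_DOWN)     # drop the extra digits
rounding = Context(prec=4, rounding=ROUND_HALF_UP)  # round to nearest

x = Decimal(2) / Decimal(3)  # 2/3 computed at the default 28-digit precision
print(chopping.plus(x))      # 0.6666
print(rounding.plus(x))      # 0.6667
```

Context.plus simply re-rounds its argument to the context's precision, which is exactly the fl(.) operation of the text for the chosen mode.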

Link between the relative error and precision

Fact: The relative error specifies the number of correct digits of a given approximation. To see how this works, let us consider the following example.

Example: Consider the number π with a series of approximations:

Approximation of π    # of correct digits    Relative error
3                     1                      $\approx 4.5 \times 10^{-2}$
3.1                   2                      $\approx 1.3 \times 10^{-2}$
3.14                  3                      $\approx 5.1 \times 10^{-4}$
3.141                 4                      $\approx 1.9 \times 10^{-4}$
3.1415                5                      $\approx 2.9 \times 10^{-5}$

If $r \ge 0$ is an integer such that $|p - \bar p| / |p| < 5 \times 10^{-r}$, then $\bar p$ is said to approximate p to r significant digits.

Definition 2 (Maximum relative error in fl-pt representation): First, assume the chopping mode is used. Let p > 0 be a real number and let r be an integer such that $b^{r-1} \le p \le b^r$. Let $\bar p$ be the fl-pt approximation of p. Then
$$\frac{|p - \bar p|}{|p|} \le \frac{b^{r-k}}{p} \le \frac{b^{r-k}}{b^{r-1}} = b^{-(k-1)} = \epsilon,$$
the machine epsilon introduced above. The machine epsilon $\epsilon$ defines the unit round-off error for the chopping mode. In the derivation above, we used the fact that $|p - \bar p|$ is necessarily smaller than the distance $b^{r-k}$ between two successive fl-pt numbers within the interval $[b^{r-1}, b^r)$, and that $p \ge b^{r-1}$ by the definition of r. If the rounding mode is used instead, then $|p - \bar p|$ is smaller than half the distance between two successive fl-pt numbers, yielding a unit round-off error equal to $b^{-(k-1)}/2 = \epsilon/2$ for the rounding mode.

1.5 Floating-point arithmetics and the danger of cancellation of significant digits

When arithmetic operations such as additions and multiplications are performed in a floating-point system, in the ideal situation the arithmetic operations are performed exactly, one at a time, and the result is rounded to the nearest fl-pt number after each operation. The fl-pt sum of three floating-point numbers x, y, z, for example, satisfies
$$\bar p = fl(x + y + z) = fl(fl(x + y) + z).$$
Note that, accordingly, $fl(x + y + z) \ne fl(x + z + y)$ in general, as is demonstrated by the following example.
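Before the worked 4-digit example, note that the same order-dependence is visible in ordinary double precision (a quick Python check; Matlab behaves identically with these values):

```python
# Floating-point addition is not associative: the result is rounded
# after each individual operation, so the order of the operands matters.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)  # 0.6000000000000001
print(a + (b + c))  # 0.6
print((a + b) + c == a + (b + c))  # False
```

None of 0.1, 0.2, 0.3 is exactly representable in binary (recall the expansion of 0.7 above), so each intermediate sum is rounded differently depending on the grouping.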

Example: Assume x, y, and z are three given numbers in a 4-digit floating-point system with decimal base b = 10, using the chopping rounding mode, chosen so that the intermediate sum fl(x+y) nearly cancels z. We then have
$$fl(x + y + z) = fl(fl(x + y) + z) = fl(0.0001),$$
while the exact value is p = x + y + z, with a relative error
$$\frac{|p - \bar p|}{|p|} \approx 0.31, \quad \text{or } 31\%.$$
But $fl(x + z + y) = fl(fl(x + z) + y)$ yields a much smaller relative error, only a tiny fraction of a percent.

What happened here is known as dangerous cancellation of significant digits. It occurs when we compute the difference between two fl-pt numbers that are too close to each other; in fact, this is the case for the sum of fl(x+y) and z in this example. Starting with the sum x + z is just one way to get around the problem. To minimize round-off errors, suspicious expressions should be rewritten in a mathematically equivalent way that is more stable, in order to avoid the danger of the cancellation of significant digits.

Examples:

1) $fl(\sqrt{x^2 + 1} - x)$ will be inaccurate if x is large and positive. Let us for example assume b = 10, k = 4, x = 65.43, and the rounding mode of approximation. Then
$$fl(x^2) = fl(4281.0849) = 4281$$
$$fl(x^2 + 1) = 4282$$
$$fl(\sqrt{x^2 + 1}) = fl(65.437\ldots) = 65.44$$
$$fl(\sqrt{x^2 + 1} - x) = 65.44 - 65.43 = 0.01 = 0.1000 \times 10^{-1}.$$
The exact value is $\sqrt{x^2 + 1} - x = 0.0076413\ldots$, i.e., the fl-pt calculation has no significant digits. A better approximation is obtained when the given quantity is written in a mathematical expression that is more stable under fl-pt operations. Multiplying and dividing by the conjugate expression, we get
$$\sqrt{x^2 + 1} - x = \left(\sqrt{x^2 + 1} - x\right) \frac{\sqrt{x^2 + 1} + x}{\sqrt{x^2 + 1} + x} = \frac{(x^2 + 1) - x^2}{\sqrt{x^2 + 1} + x} = \frac{1}{\sqrt{x^2 + 1} + x}.$$
Now, for x = 65.43,
$$fl\left(\frac{1}{\sqrt{x^2 + 1} + x}\right) = fl\left(\frac{1}{fl(65.44 + 65.43)}\right) = fl\left(\frac{1}{130.9}\right) = 0.7639 \times 10^{-2}.$$
This last approximation has technically 4 significant digits, for a relative error of about $3 \times 10^{-4}$ (which is technically the best one can hope for in a 4-digit-precision system).
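Example 1 is not an artifact of the 4-digit system: the same cancellation occurs in double precision once x is large enough. A Python sketch (the value x = 1e8 is chosen here for illustration):

```python
import math

x = 1e8  # large enough that x*x + 1 rounds to x*x in double precision
naive  = math.sqrt(x * x + 1) - x        # suffers catastrophic cancellation
stable = 1 / (math.sqrt(x * x + 1) + x)  # mathematically equivalent rewrite
print(naive)   # 0.0: every significant digit has been lost
print(stable)  # 5e-09: correct to full precision
```

The conjugate rewrite replaces a subtraction of two nearly equal numbers by an addition, which is harmless in floating-point arithmetic.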

2) To avoid dangerous cancellation when evaluating the expression x - sin(x) when x is near zero (because sin(x) ≈ x near zero), one can use the Taylor approximation of sin(x) near zero:
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots + (-1)^p \frac{x^{2p+1}}{(2p+1)!} + R_p(x).$$
As for the Taylor series of $e^{-5.5}$ given in Example 3 of the next section, in any given fl-pt system and for sufficiently large p, the remainder $R_p$ can be dropped and the fl-pt value of x - sin(x) will be exactly equal to the fl-pt value of the corresponding Taylor polynomial, i.e.,
$$fl(x - \sin(x)) = fl\left(\frac{x^3}{3!} - \frac{x^5}{5!} + \cdots - (-1)^p \frac{x^{2p+1}}{(2p+1)!}\right)$$
for sufficiently large p. Therefore, the last expression should be used instead of x - sin(x) for fl-pt arithmetics when x is close to zero; it is more stable for fl-pt computations.

3) Also, 1 - sin(x) is problematic for x near π/2. For this one, we do the following:
$$1 - \sin(x) = (1 - \sin(x)) \cdot \frac{1 + \sin(x)}{1 + \sin(x)} = \frac{1 - \sin^2(x)}{1 + \sin(x)} = \frac{\cos^2(x)}{1 + \sin(x)}.$$
The last expression should be used for fl-pt operations instead of the original when $x \approx \pi/2$. Note that, conversely, the expression $\cos^2(x)/(1 + \sin(x))$ should be avoided if $x \approx -\pi/2$, where the denominator itself suffers cancellation.

1.6 Stability and Conditioning

The aim of numerical methods is to solve mathematical problems on a digital computer. To do so, we first need to design or use an algorithm (or numerical method) for the given problem, which often provides only an approximate solution to the problem at hand, in place of the full or exact solution. It is highly desirable to design algorithms that lead to the most accurate approximations possible. Here, we present some of the common problems that may arise and hinder this goal and restrict the accuracy of the numerical solution. We will provide some necessary conditions that guarantee a fair approximation: for this we need both a stable algorithm and a well-conditioned problem, whose precise definitions are given next.

Definition 3: A given mathematical problem is said to be ill-conditioned if small changes in the data produce large deviations in the result.
Data                    Solution
X                 ->    S
X̂ = X + ǫ         ->    Ŝ

The problem is well-conditioned if $|\epsilon| \ll 1$ implies $|S - \hat S|/|S| \ll 1$.

Definition 4: An algorithm is said to be stable if its approximate solution is close to the exact solution of the original problem with slightly perturbed data.

Exact data X            ->  Algorithm  ->  Approximate solution $S_n$
Perturbed data X̂ = X + ǫ  ->  exact solution $\hat S$

The algorithm is stable if there exists a small perturbation ǫ such that $|S_n - \hat S|/|S_n|$ is small.

Example 1: Hilbert matrix.
$$H_{ij} = \frac{1}{i + j - 1}, \quad 1 \le i, j \le n.$$
Consider the linear system HX = b with n = 3:
$$\begin{pmatrix} 1 & 1/2 & 1/3 \\ 1/2 & 1/3 & 1/4 \\ 1/3 & 1/4 & 1/5 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 11/6 \\ 13/12 \\ 47/60 \end{pmatrix}.$$
The exact solution is $X = (1, 1, 1)^T$. Consider the perturbed problem obtained by rounding the entries of both the matrix H and the right-hand-side vector b to three significant digits. Let $\hat H$ and $\hat b$ be the truncated matrix and truncated right-hand-side vector, respectively:
$$\hat H = \begin{pmatrix} 1 & 0.500 & 0.333 \\ 0.500 & 0.333 & 0.250 \\ 0.333 & 0.250 & 0.200 \end{pmatrix}, \quad \hat b = \begin{pmatrix} 1.83 \\ 1.08 \\ 0.783 \end{pmatrix}.$$
The (exact) solution to the perturbed problem $\hat H \hat X = \hat b$ is $\hat X = (1.0895, 0.4880, 1.4910)^T$. Let us compare the perturbed solution to the solution of the original problem. The absolute error, using the $L^1$ norm, is
$$\|X - \hat X\|_1 = |x - \hat x| + |y - \hat y| + |z - \hat z| \approx 1.0925,$$
and the corresponding relative error is
$$\frac{\|X - \hat X\|_1}{\|X\|_1} \approx \frac{1.0925}{3} \approx 0.364 = 36.4\%.$$
A small perturbation of the original problem (on the order of 1/1000) resulted in a deviation of 36% in the solution. The problem HX = b is therefore ill-conditioned. We will see later in the course that this is an issue with the Hilbert matrix itself: it is ill-conditioned.

Stability: Now, assume that we use Gauss elimination with a 3-digit-precision calculator (a very bad one indeed; just imagine that they still exist) to approximate the solution of the system HX = b. This is our algorithm. The question is whether this algorithm is stable or not.
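As an aside, the ill-conditioning of this 3-by-3 Hilbert system is easy to reproduce. The sketch below (in Python; the helper solve3 is an ad hoc elimination routine written for this illustration, not a library function) solves the exact system in rational arithmetic and then the 3-digit rounded system in double precision:

```python
from fractions import Fraction

def solve3(A, b):
    # naive Gaussian elimination without pivoting (adequate for this matrix)
    A = [row[:] + [rhs] for row, rhs in zip(A, b)]  # augmented matrix
    n = 3
    for j in range(n):
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            A[i] = [aik - m * ajk for aik, ajk in zip(A[i], A[j])]
    x = [0] * n
    for i in reversed(range(n)):
        x[i] = (A[i][n] - sum(A[i][k] * x[k] for k in range(i + 1, n))) / A[i][i]
    return x

# exact Hilbert system, H_ij = 1/(i+j-1) in 1-based indexing
H = [[Fraction(1, i + j + 1) for j in range(3)] for i in range(3)]
b = [Fraction(11, 6), Fraction(13, 12), Fraction(47, 60)]
print(solve3(H, b))   # exact solution [1, 1, 1]

# the same system with all entries rounded to 3 significant digits
Hp = [[1.0, 0.5, 0.333], [0.5, 0.333, 0.25], [0.333, 0.25, 0.2]]
bp = [1.83, 1.08, 0.783]
print(solve3(Hp, bp))  # roughly [1.09, 0.49, 1.49]: about 36% away from (1,1,1)
```

A perturbation in the fourth digit of the data moves the solution in its first digit, which is precisely the signature of ill-conditioning.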

The approximate solution obtained by this algorithm (we will see later in the course what this means exactly) is given by $X_n = (0.480, 1.88, 1.22)^T$. First note that $X_n \ne \hat X$ (do you know why?). In fact, the two solutions are very far apart from each other, another pitfall of ill-conditioning. Ill-conditioned problems are very sensitive to round-off errors and thus are very tricky to handle in a fl-pt environment.

The question is whether the solution $X_n$ is close to the exact solution of a perturbed problem; this is what stability (of the algorithm) means. In other words, can we find a perturbation matrix E and a perturbation vector e to the original (truncated) matrix $\hat H$ and truncated vector $\hat b$ such that the solution of
$$(\hat H + E) X_p = \hat b + e$$
is close to $X_n$? It is easy to check that, for a suitable choice of E and e whose entries are small perturbations (less than 1%) of those of $\hat H$ and $\hat b$, the solution is $X_p = (0.4650, 1.800, 1.1700)^T$, which is indeed fairly close to $X_n$. Therefore, our algorithm is stable.

Example 2: Consider the problem of computing the quantity
$$w = \frac{1000\,x}{x - y - z}$$
for given data values x, y, z, whose difference x - y - z is subject to cancellation. The exact solution is $w \approx 2.55 \times 10^5$. To check whether the problem is ill-conditioned or not, we consider a small perturbation of the data: $\hat x$ and $\hat z$ are obtained by slightly perturbing the last digits of x and z, while $\hat y = y$. The perturbed solution is
$$\hat w = \frac{1000\,\hat x}{\hat x - \hat y - \hat z} \approx 4.25 \times 10^5.$$
The perturbed solution has no significant digits compared to the original solution. Thus, the problem is ill-conditioned.

Consider the algorithm of using fl-pt arithmetic to compute w in base b = 10 with precision k = 4 and chopping mode. We have
$$\bar w = fl\left(\frac{1000\,x}{x - y - z}\right) = 3.19 \times 10^5.$$

(The details of the fl-pt calculation of w̃ are left as an exercise.) Can we find a perturbation of the data so that the exact solution of the perturbed problem is close to w̃? Let x̄ = x and ȳ = y, and perturb z slightly to a value z̄; this is a small perturbation of the original data, for which

    1000x̄ / (x̄ − ȳ − z̄) = 319,000 ≈ w̃.

The algorithm is thus stable.

Example 3: Consider the approximation

    e^x ≈ 1 + x + x^2/2! + ⋯ + x^n/n!,    x = −5.5.

Using n = 24 we get y ≈ 0.0040868 ≈ e^(−5.5). Consider the perturbed data x̂ = x + ǫ. Then

    e^x̂ = e^(x+ǫ) = e^x e^ǫ = e^x (1 + ǫ + ǫ^2/2! + ⋯) ≈ e^x (1 + ǫ)
        ≈ (1 + ǫ)[1 + x + x^2/2! + ⋯ + x^n/n!] = ŷ,

so that |ŷ − y|/|y| ≈ |ǫ|, which is small. The problem is well-conditioned.

Assume we use 5 digits (base 10) in a rounding mode to evaluate the given Taylor approximation. Is this stable? Note that all terms (−5.5)^n/n! with n ≥ 25 add no further change (improvement) to the Taylor approximation in this fl-pt arithmetic. (In fact 5.5^26/26! ≈ 4.4 × 10^-8 is rounded to zero in 5 digit precision when added to the sum of the first 25 terms, y_n ≈ 0.0026433.) This algorithm yields an approximate solution y_n ≈ 0.0026433 for all n ≥ 25. This approximate solution has no significant digits:

    |y − y_n| / |y| ≈ 0.3532 = 35.32%!

Can we find a small perturbation x̂ of the original data x so that ŷ = e^x̂ ≈ y_n? The answer is no. Otherwise, the solution ŷ would be close to y, because we know that this problem is well-conditioned: |ŷ − y|/|y| ≈ |ǫ|. If we suppose that in addition ŷ is close to y_n, we get a contradiction. Suppose |ŷ − y_n|/|y_n| < δ. Then

    |y − y_n|/|y| = |y − ŷ + ŷ − y_n|/|y| ≤ |y − ŷ|/|y| + |ŷ − y_n|/|y|
                  ≤ |ǫ| + (|ŷ − y_n|/|y_n|) (|y_n|/|y|) ≤ |ǫ| + δ |y_n|/|y| = α.

Here α is small, given that both ǫ and δ are small and that y_n/y is order one. This is a contradiction with the fact that |y − y_n|/|y| ≈ 0.3532, which is very large compared to the unit round-off, on the order of 5 × 10^-5. In fact, if we attempt to compute a perturbation ǫ that yields the solution y_n (exactly or approximately), we find

    e^(−5.5+ǫ) = y_n  ⟹  −5.5 + ǫ = ln(y_n)  ⟹  ǫ = 5.5 + ln(y_n) ≈ −0.436,

which is clearly a large perturbation of the data: |ǫ|/5.5 ≈ 0.08 = 8%. The algorithm is therefore unstable. Clearly this is due to a cancellation of significant digits in the Taylor expansion. This can be dealt with, for instance, by changing the order of summation in the Taylor series.

1.7 Problems

1. Provide an algorithm and write a Matlab program to convert any given decimal number to the octal base. To keep it simple, you can assume that the number in question is an INTEGER. Also, you can use the DecToBinary.m routine given earlier as a template to produce a DecToOctal.m version.

2. (a) Determine the second order (n = 2) Taylor polynomial approximation for f(x) = √(x+1) expanded about x_0 = 0. Include the remainder term.
(b) Use the polynomial approximation in (a) (without the remainder term) to approximate √… . Give all 10 correct significant digits. The exact value of √… is approximately … ; use this value to compute the absolute error of your computed approximation.
(c) Determine a good upper bound for the truncation error of the Taylor polynomial approximation in (b) by bounding the remainder term. Note that the absolute error in (b) should be smaller than this upper bound.
(d) Determine a good upper bound for the truncation error of the Taylor polynomial approximation of order 2 of f(x) = √(x+1) for all values of x such that −0.05 ≤ x ≤ 0.05. Use Matlab to plot the error |f(x) − P_2(x)| and verify that it stays below the computed upper bound. Use the Matlab instructions at the end of Section 1.2 as an example to do this part.

3. Consider the evaluation, in floating point arithmetic, of
    f(x) = 1/(1 − x) − 1/(2 − x),    x ≠ 1 and x ≠ 2.

(a) Use 4 decimal digit, idealized, chopping floating point arithmetic to evaluate fl(f(2.001)). Compute the relative error.
(b) Repeat (a) for fl(f(−1234)).
(c) Based on your results in (a) and (b), and other calculations you may want to try, specify which of the following ranges of values of x give rise to inaccurate computation of f(x) in floating point arithmetic. Explain why.

(i) x is close to 1. (ii) x is close to 2. (iii) x is close to 0. (iv) x is negative and large in magnitude. (v) x is positive and large in magnitude.

4. For each of the functions below, find another expression that is mathematically identical to the given function and that is more accurate when using floating-point arithmetic.
(i) f(x,y) = x − √(x^2 − y) when x is positive and x is much larger than y.
(ii) g(x) = sin x / (1 + cos x) when x is close to π.
(iii) h(x) = (√x − 1)/(√(1+x) − 1) when x is very large in magnitude.
(iv) 1/(x − 1) − 1/(x − 2) when x is very large.

5. Use 4 decimal digit, idealized, chopping arithmetic to evaluate

    ŵ = fl( 1000x / (x − y − z) )

for the same data x, y, z as in Example 2 of the previous section. Find the absolute and relative errors if the exact arithmetic value is w ≈ 255,251. How many correct digits does the fl-pt computation have?

6. Consider the Taylor polynomial approximation for e^x,

    e^x ≈ P_n(x) = 1 + x + x^2/2! + ⋯ + x^n/n!,    n ≥ 1.

Assume we use this polynomial to approximate e^(−5.5) using floating point arithmetic with b = 10 and k = 5 (decimal base and 5 digit precision) in rounding mode. Show that fl(P_n(−5.5)) is the same for all n ≥ 25.

7. Let f(x) = ln(x), x > 0. Show that the problem of evaluating f(x) is ill-conditioned for values of x close to 1.

8. Consider

    f(x) = (1 + cos x) / (x − π)^2,    x ≠ π   (x is in radians).

(a) Evaluate fl(f(3.129)) using 4 decimal digit, idealized, chopping floating-point arithmetic. Note fl(π) = 3.141. The exact value of f(3.129) is approximately 0.49999. Compute the relative error. The latter should be larger than 35%.
(b) Determine the fourth order Taylor polynomial approximation for g(x) = 1 + cos x about x_0 = π, expressed in terms of powers (x − π)^k (without the remainder).
(c) Use the Taylor polynomial in (b) to derive an approximation for f(x) in terms of powers (x − π)^k. Use this approximation to deduce that the computation in (a) is unstable.

9.
Using idealized, chopping floating-point arithmetic in base 10 and precision k = 4, the evaluation of fl(fl(w·x) − fl(y·z)) for w = 3.456, x = 12.34, y = 23.45, z = … gives a result of … . Show that this computation is stable.

10. The quadratic formula states that the roots of ax^2 + bx + c = 0, when a ≠ 0, are

    x_1 = (−b + √(b^2 − 4ac)) / (2a)   and   x_2 = (−b − √(b^2 − 4ac)) / (2a).

(a) Use four-digit rounding fl-arithmetic, in base 10, to find the fl-pt approximations x̂_1 and x̂_2 to the roots x_1, x_2 of x^2 + …x + 1 = 0. Find the associated relative errors if the exact roots are approximately x_1 = … and x_2 = … . Compare, and note which one is not accurate.
(b) Show that x_1 can be expressed as

    x_1 = −2c / (b + √(b^2 − 4ac)).

Repeat (a) using this new expression for x_1 instead. Compare the relative errors and notice the difference. Explain what went wrong in the previous case.
(c) Similarly, show that

    x_2 = 2c / (−b + √(b^2 − 4ac)).

For which situation will this new expression for x_2 be more accurate in fl-pt arithmetic than the original one? Hint: Repeat (a) for x^2 − …x + 1 = 0 using both the new expression for x_2 and the original one.

11. Find a good upper bound for the truncation error associated with the approximation

    e^x ≈ Σ_{n=0}^{N} x^n/n!,

for any given values of x and N > 0. Estimate the minimum number of terms N needed to guarantee a relative error not larger than a given tolerance ε, i.e., find N_ε such that

    | e^x − Σ_{n=0}^{N} x^n/n! | / e^x ≤ ε   for all N ≥ N_ε

(x is fixed, i.e., N_ε may also depend on x).
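To close the chapter, the idealized base-10 arithmetic used in Example 3 and in several of the problems above can be simulated in a few lines. This is a sketch in Python (the course's language is Matlab): the helper `fl` below is our own construction, and the order in which operations are rounded is one of several reasonable choices, so the sketch reproduces the phenomena (chopping, stagnation of the sum, catastrophic cancellation) rather than the exact digits quoted in the text. The reciprocal-of-e^{5.5} computation at the end is one standard remedy, alongside the reordering of the summation mentioned above.

```python
import math

def fl(v, k, mode="round"):
    """Simulate idealized base-10 fl-pt with k significant digits.
    mode is "round" or "chop". Helper name and design are our own;
    borderline binary-vs-decimal cases are ignored in this sketch."""
    if v == 0.0:
        return 0.0
    d = math.floor(math.log10(abs(v)))   # exponent of the leading digit
    if mode == "round":
        return round(v, k - 1 - d)
    scale = 10.0 ** (d - k + 1)          # chop: truncate the extra digits
    return math.trunc(v / scale) * scale

fl_pi = fl(math.pi, 4, "chop")           # 3.141, matching fl(pi) in Problem 8

# Example 3 / Problem 6: sum the Taylor series of e^x at x = -5.5,
# rounding to 5 significant digits after every operation.
s, t, history = fl(1.0, 5), 1.0, []
for n in range(1, 31):
    t = fl(fl(t * -5.5, 5) / n, 5)       # next term (-5.5)^n / n!
    s = fl(s + t, 5)
    history.append(s)

true = math.exp(-5.5)                    # 0.00408677...
rel = abs(true - s) / true               # tens of percent: cancellation between
                                         # the large alternating terms

# Remedy: sum the all-positive series for e^{+5.5}, then take the reciprocal.
s2, t2 = fl(1.0, 5), 1.0
for n in range(1, 31):
    t2 = fl(fl(t2 * 5.5, 5) / n, 5)
    s2 = fl(s2 + t2, 5)
recip = fl(1.0 / s2, 5)
rel2 = abs(true - recip) / true          # small: no cancellation occurs
```

Note also that the sum stagnates once the terms fall below the last retained digit of the running sum, exactly as claimed in Example 3.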

Chapter 2

Direct methods for linear systems

2.1 Introduction

In this chapter we are concerned with the solution of linear systems of the form AX = b, where A is a square matrix of dimension n, b is a vector in R^n, and X is the unknown solution vector, which is also in R^n. Let us illustrate this through an example.

Example: Consider the system of three equations and three unknowns

    4x_1 − 9x_2 + 2x_3 = 2
    2x_1 − 4x_2 + 4x_3 = 3
    −x_1 + 2x_2 + 2x_3 = 1.

In matrix form this system reduces to

    [  4  −9  2 ] [x_1]   [ 2 ]
    [  2  −4  4 ] [x_2] = [ 3 ]
    [ −1   2  2 ] [x_3]   [ 1 ]

or simply AX = b, where

    A = [  4  −9  2 ]        [x_1]        [ 2 ]
        [  2  −4  4 ],  X =  [x_2],  b =  [ 3 ].
        [ −1   2  2 ]        [x_3]        [ 1 ]

Recall from linear algebra courses that for a square linear system AX = b, the following statements are equivalent.

The system AX = b has a unique solution for any right hand side b.
The only solution of the system AX = 0 is the trivial solution X = 0.

The matrix A is non-singular, i.e., the matrix A has an inverse A^-1 such that A^-1 A = A A^-1 = I, where I is the identity matrix.
The determinant of A is non-zero: det(A) ≠ 0.

Matlab, whose name is short for matrix laboratory, is perhaps the best mathematical software to operate on and handle matrices. The following three commands are very useful.

>> det(A) %returns the determinant of a predefined matrix A
>> inv(A) % returns the matrix inverse of A
>> X = A\b % returns the solution X of AX=b

2.2 Lower and Upper triangular matrices

A matrix A is said to be upper triangular if all its entries below the diagonal are zero: A = (a_ij), 1 ≤ i,j ≤ n, with a_ij = 0 for i > j. A is said to be lower triangular if a_ij = 0 for j > i. A matrix which is both lower triangular and upper triangular is said to be diagonal: a_ij = 0 for i ≠ j.

Upper triangular:
    A = [ a_11  a_12  ...  a_1n ]
        [  0    a_22  ...  a_2n ]
        [  .     .     .     .  ]
        [  0     0    ...  a_nn ]

Lower triangular:
    A = [ a_11   0    ...   0   ]
        [ a_21  a_22  ...   0   ]
        [  .     .     .    .   ]
        [ a_n1  a_n2  ...  a_nn ]

Diagonal:
    A = [ a_11   0    ...   0   ]
        [  0    a_22  ...   0   ]
        [  .     .     .    .   ]
        [  0     0    ...  a_nn ]

The good thing about triangular (upper or lower) matrices is that the solution of the linear system AX = b is straightforward and can be readily programmed on a computer with very little effort. The methods of choice are, respectively, the backward and forward substitutions.

First note that the determinant of a triangular matrix is given by the product of its diagonal elements:

    if A is triangular, then det(A) = a_11 a_22 ⋯ a_nn = Π_{i=1}^{n} a_ii.

Thus, a triangular matrix is non-singular if and only if all its diagonal entries are non-zero. To illustrate, let us consider the following 3 × 3 upper triangular system

    [ 1  0.5  2 ] [x]   [ 1 ]                x + 0.5y + 2z = 1
    [ 0   3   1 ] [y] = [ 2 ]      or             3y +  z = 2
    [ 0   0   5 ] [z]   [ 3 ]                          5z = 3.

The backward substitution method consists in solving the last equation, which involves z alone, then replacing z by its value in the second equation and solving for y, and so on. More precisely, we have

    5z = 3  ⟹  z = 3/5;
    3y = 2 − z = 2 − 3/5 = 7/5  ⟹  y = 7/15;
    x = 1 − 0.5y − 2z = 1 − 7/30 − 6/5 = −13/30;

or X = (−13/30, 7/15, 3/5)^T.

General procedure: forward and backward substitution

Consider the upper triangular system of arbitrary size

    [ a_11  a_12  ...  a_1n ] [x_1]   [b_1]
    [  0    a_22  ...  a_2n ] [x_2] = [b_2]
    [  .     .     .     .  ] [ . ]   [ . ]
    [  0     0    ...  a_nn ] [x_n]   [b_n]

Assume the triangular matrix is non-singular, i.e., a_ii ≠ 0 for i = 1, ..., n. Then we have the following backward substitution algorithm for the solution of AX = b:

    x_n = b_n / a_nn,  and for i = n−1, n−2, ..., 1:
    x_i = ( b_i − Σ_{j=i+1}^{n} a_ij x_j ) / a_ii.

Similarly, for a lower triangular system

    [ a_11   0    ...   0   ] [x_1]   [b_1]
    [ a_21  a_22  ...   0   ] [x_2] = [b_2]
    [  .     .     .    .   ] [ . ]   [ . ]
    [ a_n1  a_n2  ...  a_nn ] [x_n]   [b_n]

the forward substitution method gives

    x_1 = b_1 / a_11,  and for i = 2, 3, ..., n:
    x_i = ( b_i − Σ_{j=1}^{i−1} a_ij x_j ) / a_ii.

As mentioned earlier, one advantage of the forward and backward substitution methods is that they can be easily implemented on a computer using an advanced programming language. The Matlab program for the backward substitution applied to an upper triangular matrix is given below. Below, we denote by U an arbitrary upper triangular matrix with entries u_ij, 1 ≤ i ≤ j ≤ n.

%%M-file: Usolve.m
function X = Usolve(U,b)

%input: upper triangular matrix U, right hand side vector b
%output: solution X
n = max(size(U)); %determines the dimension of the matrix U
X(n) = b(n)/U(n,n);
for k = n-1:-1:1 %Backward for loop
    X(k) = ( b(k) - sum( U(k,k+1:n).*X(k+1:n) ) )/U(k,k);
end
%%%%%%%%
%To run in the command window, follow this example:
>> U = [5,0,3,2; 0,2.5,0,2; 0,0,1.5,3; 0,0,0,2];
>> b = [2; 0; 1; -1];
>> X = Usolve(U,b)
X =
   -0.4000    0.4000    1.6667   -0.5000
%%We now check if this answer is correct using the
%% Matlab built-in backslash division:
>> U\b
ans =
   -0.4000
    0.4000
    1.6667
   -0.5000

Exercise: Write a Matlab program to solve the lower triangular system LX = b, where L is a lower triangular matrix with entries l_ij, 1 ≤ j ≤ i ≤ n, using the forward substitution method.

2.3 Gauss elimination

The Gauss elimination method is one of the most robust and universal (i.e., it can be used for a wide range of problems) numerical methods for solving linear systems AX = b, where A is a non-singular square matrix. It consists of reducing the full system to an upper triangular one

that can then be easily solved by the method of backward substitution. Let us start with a simple example for illustration. Consider the three-by-three system:

    4x_1 − 9x_2 + 2x_3 = 2
    2x_1 − 4x_2 + 4x_3 = 3
    −x_1 + 2x_2 + 2x_3 = 1.

In matrix form,

    [  4  −9  2 ] [x_1]   [ 2 ]
    [  2  −4  4 ] [x_2] = [ 3 ]
    [ −1   2  2 ] [x_3]   [ 1 ]

Consider the augmented matrix, formed by the main matrix and the right hand side vector:

    [  4  −9  2 | 2 ]
    [  2  −4  4 | 3 ]
    [ −1   2  2 | 1 ]

Subtract 1/2 times the first row from the second and −1/4 times the first row from the third, and substitute these differences for the second and third rows, respectively. In mathematical words, let E_j denote row j = 1, 2, 3 of the augmented matrix. We make the following row operations:

    new E_2 = E_2 − (1/2) E_1   and   new E_3 = E_3 − (−1/4) E_1.

This yields

    [ 4  −9     2  |  2  ]
    [ 0  1/2    3  |  2  ]
    [ 0  −1/4  5/2 | 3/2 ]

This makes two zeros appear below the diagonal, in the first column of the matrix. To obtain an upper triangular system, we need to further eliminate the entry below the diagonal in the second column. For this we multiply the second row by −1/2 and subtract it from the third row (new E_3 = E_3 + (1/2) E_2) to yield

    [ 4  −9   2 |  2  ]
    [ 0  1/2  3 |  2  ]
    [ 0   0   4 | 5/2 ]

This corresponds to the triangular system

    [ 4  −9   2 ] [x_1]   [  2  ]
    [ 0  1/2  3 ] [x_2] = [  2  ]
    [ 0   0   4 ] [x_3]   [ 5/2 ]

whose solution is given by

    x_3 = 5/8;   x_2 = (2 − 3 · 5/8)/(1/2) = 1/4;   x_1 = (2 + 9 · 1/4 − 2 · 5/8)/4 = 3/4.
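The elimination steps above are easy to mirror in code. Below is a sketch in Python (the course itself uses Matlab); `gauss_solve` is our own helper name, exact rational arithmetic via the standard `fractions` module reproduces (3/4, 1/4, 5/8) exactly, and no pivoting is performed (partial pivoting is discussed later in this chapter).

```python
from fractions import Fraction as F

def gauss_solve(A, b):
    """Naive Gauss elimination (no pivoting) followed by backward substitution."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):              # zero out column k below the diagonal
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]       # multipliers: 1/2 and -1/4 in the example
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [F(0)] * n                      # backward substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[F(4), F(-9), F(2)], [F(2), F(-4), F(4)], [F(-1), F(2), F(2)]]
b = [F(2), F(3), F(1)]
print(gauss_solve(A, b))   # [Fraction(3, 4), Fraction(1, 4), Fraction(5, 8)]
```

In floating point the same loop runs with ordinary floats; exact fractions are used here only so the check against the hand computation is unambiguous.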


More information

ESO 208A: Computational Methods in Engineering. Saumyen Guha

ESO 208A: Computational Methods in Engineering. Saumyen Guha ESO 208A: Computational Methods in Engineering Introduction, Error Analysis Saumyen Guha Department of Civil Engineering IIT Kanpur What is Computational Methods or Numerical Methods in Engineering? Formulation

More information

Virtual University of Pakistan

Virtual University of Pakistan Virtual University of Pakistan File Version v.0.0 Prepared For: Final Term Note: Use Table Of Content to view the Topics, In PDF(Portable Document Format) format, you can check Bookmarks menu Disclaimer:

More information

1 Floating point arithmetic

1 Floating point arithmetic Introduction to Floating Point Arithmetic Floating point arithmetic Floating point representation (scientific notation) of numbers, for example, takes the following form.346 0 sign fraction base exponent

More information

PowerPoints organized by Dr. Michael R. Gustafson II, Duke University

PowerPoints organized by Dr. Michael R. Gustafson II, Duke University Part 1 Chapter 4 Roundoff and Truncation Errors PowerPoints organized by Dr. Michael R. Gustafson II, Duke University All images copyright The McGraw-Hill Companies, Inc. Permission required for reproduction

More information

Lecture 7. Floating point arithmetic and stability

Lecture 7. Floating point arithmetic and stability Lecture 7 Floating point arithmetic and stability 2.5 Machine representation of numbers Scientific notation: 23 }{{} }{{} } 3.14159265 {{} }{{} 10 sign mantissa base exponent (significand) s m β e A floating

More information

Math 128A: Homework 2 Solutions

Math 128A: Homework 2 Solutions Math 128A: Homework 2 Solutions Due: June 28 1. In problems where high precision is not needed, the IEEE standard provides a specification for single precision numbers, which occupy 32 bits of storage.

More information

Order of convergence. MA3232 Numerical Analysis Week 3 Jack Carl Kiefer ( ) Question: How fast does x n

Order of convergence. MA3232 Numerical Analysis Week 3 Jack Carl Kiefer ( ) Question: How fast does x n Week 3 Jack Carl Kiefer (94-98) Jack Kiefer was an American statistician. Much of his research was on the optimal design of eperiments. However, he also made significant contributions to other areas of

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations

More information

Exact and Approximate Numbers:

Exact and Approximate Numbers: Eact and Approimate Numbers: The numbers that arise in technical applications are better described as eact numbers because there is not the sort of uncertainty in their values that was described above.

More information

Introduction to Scientific Computing

Introduction to Scientific Computing (Lecture 2: Machine precision and condition number) B. Rosić, T.Moshagen Institute of Scientific Computing General information :) 13 homeworks (HW) Work in groups of 2 or 3 people Each HW brings maximally

More information

Introduction to Finite Di erence Methods

Introduction to Finite Di erence Methods Introduction to Finite Di erence Methods ME 448/548 Notes Gerald Recktenwald Portland State University Department of Mechanical Engineering gerry@pdx.edu ME 448/548: Introduction to Finite Di erence Approximation

More information

The purpose of computing is insight, not numbers. Richard Wesley Hamming

The purpose of computing is insight, not numbers. Richard Wesley Hamming Systems of Linear Equations The purpose of computing is insight, not numbers. Richard Wesley Hamming Fall 2010 1 Topics to Be Discussed This is a long unit and will include the following important topics:

More information

Numerical Analysis Exam with Solutions

Numerical Analysis Exam with Solutions Numerical Analysis Exam with Solutions Richard T. Bumby Fall 000 June 13, 001 You are expected to have books, notes and calculators available, but computers of telephones are not to be used during the

More information

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu

More information

Introduction, basic but important concepts

Introduction, basic but important concepts Introduction, basic but important concepts Felix Kubler 1 1 DBF, University of Zurich and Swiss Finance Institute October 7, 2017 Felix Kubler Comp.Econ. Gerzensee, Ch1 October 7, 2017 1 / 31 Economics

More information

SOLVING LINEAR SYSTEMS

SOLVING LINEAR SYSTEMS SOLVING LINEAR SYSTEMS We want to solve the linear system a, x + + a,n x n = b a n, x + + a n,n x n = b n This will be done by the method used in beginning algebra, by successively eliminating unknowns

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

TABLE OF CONTENTS INTRODUCTION, APPROXIMATION & ERRORS 1. Chapter Introduction to numerical methods 1 Multiple-choice test 7 Problem set 9

TABLE OF CONTENTS INTRODUCTION, APPROXIMATION & ERRORS 1. Chapter Introduction to numerical methods 1 Multiple-choice test 7 Problem set 9 TABLE OF CONTENTS INTRODUCTION, APPROXIMATION & ERRORS 1 Chapter 01.01 Introduction to numerical methods 1 Multiple-choice test 7 Problem set 9 Chapter 01.02 Measuring errors 11 True error 11 Relative

More information

Computer Arithmetic. MATH 375 Numerical Analysis. J. Robert Buchanan. Fall Department of Mathematics. J. Robert Buchanan Computer Arithmetic

Computer Arithmetic. MATH 375 Numerical Analysis. J. Robert Buchanan. Fall Department of Mathematics. J. Robert Buchanan Computer Arithmetic Computer Arithmetic MATH 375 Numerical Analysis J. Robert Buchanan Department of Mathematics Fall 2013 Machine Numbers When performing arithmetic on a computer (laptop, desktop, mainframe, cell phone,

More information

CHAPTER 3. Iterative Methods

CHAPTER 3. Iterative Methods CHAPTER 3 Iterative Methods As we have seen in the previous two chapters, even for problems, which are theoretically well understood, such as computing the square root, one cannot provide the perfect answer

More information

MATH ASSIGNMENT 03 SOLUTIONS

MATH ASSIGNMENT 03 SOLUTIONS MATH444.0 ASSIGNMENT 03 SOLUTIONS 4.3 Newton s method can be used to compute reciprocals, without division. To compute /R, let fx) = x R so that fx) = 0 when x = /R. Write down the Newton iteration for

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 3 Lecture 3 3.1 General remarks March 4, 2018 This

More information

Infinite series, improper integrals, and Taylor series

Infinite series, improper integrals, and Taylor series Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions

More information

Lecture 28 The Main Sources of Error

Lecture 28 The Main Sources of Error Lecture 28 The Main Sources of Error Truncation Error Truncation error is defined as the error caused directly by an approximation method For instance, all numerical integration methods are approximations

More information

Non-polynomial Least-squares fitting

Non-polynomial Least-squares fitting Applied Math 205 Last time: piecewise polynomial interpolation, least-squares fitting Today: underdetermined least squares, nonlinear least squares Homework 1 (and subsequent homeworks) have several parts

More information

Process Model Formulation and Solution, 3E4

Process Model Formulation and Solution, 3E4 Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn dunnkg@mcmasterca Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October

More information

Solution of Algebric & Transcendental Equations

Solution of Algebric & Transcendental Equations Page15 Solution of Algebric & Transcendental Equations Contents: o Introduction o Evaluation of Polynomials by Horner s Method o Methods of solving non linear equations o Bracketing Methods o Bisection

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

What Every Programmer Should Know About Floating-Point Arithmetic DRAFT. Last updated: November 3, Abstract

What Every Programmer Should Know About Floating-Point Arithmetic DRAFT. Last updated: November 3, Abstract What Every Programmer Should Know About Floating-Point Arithmetic Last updated: November 3, 2014 Abstract The article provides simple answers to the common recurring questions of novice programmers about

More information

CS412: Introduction to Numerical Methods

CS412: Introduction to Numerical Methods CS412: Introduction to Numerical Methods MIDTERM #1 2:30PM - 3:45PM, Tuesday, 03/10/2015 Instructions: This exam is a closed book and closed notes exam, i.e., you are not allowed to consult any textbook,

More information

NUMERICAL METHODS C. Carl Gustav Jacob Jacobi 10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

NUMERICAL METHODS C. Carl Gustav Jacob Jacobi 10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING 0. Gaussian Elimination with Partial Pivoting 0.2 Iterative Methods for Solving Linear Systems 0.3 Power Method for Approximating Eigenvalues 0.4 Applications of Numerical Methods Carl Gustav Jacob Jacobi

More information

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its

More information

1 GSW Sets of Systems

1 GSW Sets of Systems 1 Often, we have to solve a whole series of sets of simultaneous equations of the form y Ax, all of which have the same matrix A, but each of which has a different known vector y, and a different unknown

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

Gaussian Elimination for Linear Systems

Gaussian Elimination for Linear Systems Gaussian Elimination for Linear Systems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 3, 2011 1/56 Outline 1 Elementary matrices 2 LR-factorization 3 Gaussian elimination

More information