Notes: Introduction to Numerical Methods


J.C. Chrispell, Department of Mathematics, Indiana University of Pennsylvania, Indiana, PA, 15705, USA. November 17, 2015


Preface

These notes will serve as an introduction to numerical methods for scientific computing. From the IUP course catalog: This course will cover solving mathematical problems using computer algorithms; in particular, root finding methods, direct and iterative methods for linear systems, nonlinear systems, eigenvalue problems, and differential equations. Material presented in the course will tend to follow the presentation of Heath in the text Scientific Computing: An Introductory Survey (second edition) [?]. Relevant course material will start in chapter 1 of the text, and selected chapters will be covered as time in the course permits. I will supplement the Heath text with additional material from other popular books on numerical methods: Numerical Mathematics and Computing by Cheney and Kincaid [?], and Numerical Analysis by Burden and Faires [?]. My apologies in advance for any typographical errors or mistakes that are present in this document. That said, I will do my very best to update and correct the document if I am made aware of these inaccuracies. -John Chrispell


Contents

1 Introduction and Review
  1.1 Errors
    1.1.1 Accurate and Precise
    1.1.2 Error in a Computation
    1.1.3 Forward and Backward Error Analysis
    1.1.4 Horner's Algorithm
  1.2 Activities
  1.3 Taylor's Theorem
    1.3.1 Taylor's Theorem using h
  1.4 Floating Point Representation
  1.5 Gaussian Elimination
    1.5.1 Assessment of Algorithm
    1.5.2 Matrix Norms
  1.6 Residuals
  1.7 Improving Gaussian Elimination

2 Methods for Finding Zeros
  2.0.1 Bisection Algorithm
  2.0.2 Newton's Method
  2.1 Newton's Method for Vector Functions

3 The Heat Equation
  3.1 Numerical Solution
    3.1.1 Taylor's Theorem For Approximations
    3.1.2 Discretizing
  Implicit Time Stepping
  Tri-Diagonal Systems
  Order of Accuracy

Numerical Integration
  Trapezoid Rule
  Newton-Cotes Quadrature
  Gaussian Quadrature
  Composite Quadrature

Polynomial Interpolation
  Error in Polynomial Interpolation
  Highlights
  Cubic Splines
  The Finite Element Method

Appendices

Bibliography

Chapter 1

Introduction and Review

She's a woman, you're a dude. You're not supposed to understand her. That's not what she's after... She doesn't want you to understand her. She knows that's impossible. She just wants you to understand yourself. Everything else is negotiable. Neal Stephenson, Snow Crash

Before We Start: SAGE

It may also be useful to use SAGE for some of the computations in this course. The following link is for the IUP sage server; it has functionality similar to that of MATLAB and Mathematica, and will allow a lot of quick mathematical computations to be accomplished. Interested users may also wish to use sage math cloud or the Enthought Canopy software to do some computations.

What is Scientific Computing?

The major theme of this class will be solving scientific problems using computers. Many of the examples considered will be smaller parts that can be thought of as tools for implementing or examining larger computational problems of interest.

We will take advantage of replacing a difficult mathematical problem with simpler problems that are easier to handle. Using the smaller parts, insight will be gained into the larger problem of interest. In this class the methods and algorithms underlying computational tools you already use will be examined.

Scientific Computing: Deals with computing continuous quantities in science and engineering (time, distance, velocity, temperature, density, pressure, stress) that can not be solved exactly or analytically in a finite number of steps. Typically we are numerically solving problems that involve integrals, derivatives, and nonlinearities.

Numerical Analysis: An area of mathematics where concern is placed on the design and implementation of algorithms to solve scientific problems.

In general for solving a problem you will:

- Develop a model (expressed by equations) for a phenomenon or system of interest.
- Find/Develop an algorithm to solve the system.
- Develop a computational implementation.
- Run your implementation.
- Post process your results (graphs, tables, charts).
- Interpret and validate your results.

Problems are well posed provided:

1. A solution to the problem of interest exists.
2. The solution is unique.
3. The solution depends continuously on the data.

The last item here is important, as problems that are ill conditioned have large changes in output with small changes in the initial conditions or data. This can be troubling for numerical methods, and is not always avoidable. In general we will use some standard techniques to attack the problems presented: replacing an unsolvable problem by a problem that is close to it in some sense, and then looking at the closely related solution. We replace infinite dimensional spaces with finite ones, and infinite processes with finite processes:

- Integrals with sums
- Derivatives with finite differences

- Nonlinear problems with linear ones
- Complicated functions with simple ones (polynomials)
- General matrices with simpler matrices

With all of this replacement and simplification, the sources of error and approximation need to be accounted for. How good is the approximated solution?

Significant Digits

The significant digits in a computation start with the leftmost nonzero digit, and end with the rightmost correct digit (including final zeros that are correct).

Example: Let's consider calculating the surface area of the Earth. The area of a sphere is:

A = 4πr²

The radius of the Earth (r ≈ 6370 km) is itself an approximation, the value for π is rounded at some point, and the numerical computation will be rounded at some point. All of these assumptions will come into play. How many digits are significant?

Figure 1.0.1: Here the intersection of two nearly parallel lines is compared with an error range of size ɛ. Note the closer the two lines are to parallel, the more ill conditioned finding the intersection will become.

Example: Consider solving a system of two linear equations in x and y, where you can only keep three significant digits. Keeping only three significant digits in all computations gives an answer of x ≈ 29.0 and y ≈ 19.0, while solving the problem using sage gives x ≈ … and y ≈ …. Note that the example in the Cheney text is far more dramatic, and the potential for error when truncating grows dramatically if the two lines of interest are nearly parallel.

1.1 Errors

If two values are considered, one taken to be true and the other an approximation, then the Error is given by:

Error = True − Approximation

The Absolute Error of using the approximation is

Absolute Error = |True − Approximation|

and we denote

Relative Error = |True − Approximation| / |True|

The Relative Error is usually more useful than the Absolute Error. The Relative Error is not defined if the true value we are looking for is zero.

Example: Consider the case where we are approximating and have:

True = … and Approximation = …

Here we have the following:

Error = 0.01
Absolute Error = 0.01
Relative Error = …

Note that the approximation has 4 significant digits.

Example: Consider the case where we are approximating and have:

True = … and Approximation = …

Here we have the following:

Error = …
Absolute Error = …
Relative Error = 1

Here relative error is a much better indicator of how well the approximation fits the true value.

1.1.1 Accurate and Precise

When a computation is accurate to n decimal places, then we can trust n digits to the right of the decimal place. Similarly, when a computation is said to be accurate to n significant digits, then the computation is meaningful for n places beginning with the leftmost nonzero digit given. The classic example here is a meter stick. The user can consider it accurate to the level of graduation on the meter stick. A second example would be the mileage on your car. It usually displays in tenth of a mile increments, so you could use your car to measure distances accurate to within two tenths of a mile.

Precision is a different game. Consider adding the following values:

3.4 + 5.67 = 9.07

The second digit in 3.4 could come from rounding any of the following to two significant digits: 3.41, …, 3.44, 3.36, 3.399, 3.38. So there can only be two significant digits in the answer. The results from multiplication and division can be even more misleading. Computers will in some cases allow a user to decide if they would like to use rounding or chopping. Note there may be several different schemes for rounding values (especially when it comes to rounding values ending with a 5).

1.1.2 Error in a Computation

The error in a computation may be considered in two parts. Let x be the exact information or input, and let x̂ be the approximate information. Then for some real process f the approximating procedure is given by f̂. To get a handle on the error:

Total Error = f(x) − f̂(x̂)
            = [f(x) − f(x̂)] + [f(x̂) − f̂(x̂)]
            = Data Error + Computational Error

Note that the algorithm used does not affect the Data Error being propagated. The computational error comes in two forms:

Truncation or Discretization Error: comes from truncating infinite series, replacing derivatives with finite differences, or terminating an algorithm prior to full convergence. Fix with more accurate approximations.

Rounding Error: comes from using finite precision arithmetic. The solution is to use higher precision arithmetic.

Let's consider estimation of a population based on the birth and death rates. Here we define P(t) as the population at time t, and estimate a time step of size Δt into the future based on birth and death rates B and D by:

P(t + Δt) = P(t) + (B − D)P(t)Δt

Note that we can rearrange the model such that:

(P(t + Δt) − P(t)) / Δt = (B − D)P(t)

and taking the limit,

P′(t) = lim_{Δt→0} (P(t + Δt) − P(t)) / Δt = lim_{Δt→0} (B − D)P(t) = (B − D)P(t)
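The discrete update P(t + Δt) = P(t) + (B − D)P(t)Δt can be stepped forward directly and compared against the exponential solution of the limiting differential equation, making the discretization error visible. A minimal sketch (the parameter values are made up for illustration):

```python
import math

def simulate_population(P0, B, D, t_end, dt):
    """Repeatedly apply P(t + dt) = P(t) + (B - D) * P(t) * dt up to t_end."""
    steps = round(t_end / dt)
    P = P0
    for _ in range(steps):
        P += (B - D) * P * dt
    return P

# Exact solution of the limiting ODE: P(t) = P(0) * exp((B - D) t)
P0, B, D, T = 100.0, 0.03, 0.01, 10.0
exact = P0 * math.exp((B - D) * T)
for dt in (1.0, 0.1, 0.01):
    approx = simulate_population(P0, B, D, T, dt)
    print(dt, abs(exact - approx))
```

The discretization error shrinks as Δt does, which is the truncation error discussed above in action.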

With the standard solution that comes to mind when considering population growth:

P(t) = P(0)e^{(B−D)t}

Here the model is an approximation that has some initial parameters: P(0), B, and D. These are all approximated with some degree of uncertainty. Note that garbage into any algorithm leads to garbage coming out.

1.1.3 Forward and Backward Error Analysis

The idea with forward and backward error values is to get a handle on how well a model is doing.

Forward Error looks at how close a model is to the value expected.

Backward Error shows how perturbed the input to a given model is: how far away the problem being solved is from the problem of interest.

Consider the one-dimensional model with f : R → R such that y = f(x). From the model we obtain ŷ, and

Forward Error = Δy = ŷ − y

This can be difficult to get a handle on, as y may not be known. Backward Error assumes that the ŷ obtained is the exact solution to a modified (nearby) problem, and we see how much the initial data needs to be changed to obtain the given result:

Backward Error = Δx = x̂ − x, with f(x̂) = ŷ

Example: Try to compute the relative forward and backward error values for y = π² assuming that ŷ = 9.

Forward Error: Δy = ŷ − y = 9 − π². This yields a relative forward error of approximately 9%.

Backward Error: Note that 3² = 9, so Δx = x̂ − x = 3 − π, yielding a relative backward error of approximately 4.5%.
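The π² example can be checked directly; a quick sketch:

```python
import math

y = math.pi ** 2      # true value of the problem of interest
y_hat = 9.0           # the computed approximation

rel_forward = abs(y_hat - y) / abs(y)     # how far the output is off
# Backward view: 9 is the exact square of 3, so the perturbed input is 3
x, x_hat = math.pi, 3.0
rel_backward = abs(x_hat - x) / abs(x)    # how far the input was perturbed

print(rel_forward, rel_backward)   # about 0.088 and 0.045
```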

1.1.4 Horner's Algorithm

In general it is a good idea to complete most computations using a minimum number of floating point operations. Consider evaluating polynomials. For example, given

f(x) = a₀ + a₁x + a₂x² + ⋯ + a_{n−1}x^{n−1} + a_n x^n

it would not be wise to compute x², then x³, and so on. Writing the polynomial as:

f(x) = a₀ + x(a₁ + x(a₂ + x(⋯ x(a_{n−1} + x(a_n)) ⋯)))

will efficiently evaluate the polynomial without ever having to use exponentiation. Note efficient evaluation of polynomials in this manner is Horner's Algorithm and is accomplished using synthetic division.

1.2 Activities

Using Sage complete the following:

- Write a piece of code that uses the limit definition of the derivative to evaluate the following derivative: d/dx sin(x) at x = 0.5.
- Write a piece of code that will find numerically the approximate value of the smallest value (not zero) on the machine in front of you. This is called machine precision or the machine epsilon.
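The nested form from Section 1.1.4 maps directly onto a single loop, one multiplication and one addition per coefficient; a sketch:

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1 x + ... + a_n x^n, with coeffs = [a_0, ..., a_n]."""
    result = 0.0
    for a in reversed(coeffs):   # work from a_n inward, as in the nested form
        result = result * x + a
    return result

# f(x) = 2 + 3x + x^2 at x = 4: 2 + 12 + 16 = 30
print(horner([2.0, 3.0, 1.0], 4.0))   # 30.0
```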

1.3 Taylor's Theorem

There are several useful forms of Taylor's Theorem, and it can be argued that it is the most important theorem for the study of numerical methods.

Theorem. If the function f possesses continuous derivatives of orders 0, 1, 2, ..., (n+1) in a closed interval I = [a, b], then for any c and x in I,

f(x) = Σ_{k=0}^{n} f^{(k)}(c)/k! (x − c)^k + E_{n+1}

where the error term E_{n+1} can be given in the form

E_{n+1} = f^{(n+1)}(η)/(n+1)! (x − c)^{n+1}.

Here η is a point that lies between c and x and depends on both.

Note we can use Taylor's Theorem to come up with useful series expansions.

Example: Use Taylor's Theorem to find a series expansion for e^x.

Here we need to evaluate the n-th derivative of e^x. We also need to pick a point of expansion or value for c. We will choose c to be zero, and recall that the derivative of e^x is such that

d/dx e^x = e^x.

Thus, for Taylor's Theorem we need:

f(0) = e⁰ = 1
f′(0) = e⁰ = 1
f″(0) = e⁰ = 1

I see a pattern! So we then have:

e^x = f(0)/0! x⁰ + f′(0)/1! x¹ + f″(0)/2! x² + f‴(0)/3! x³ + ⋯
    = 1 + x + x²/2! + x³/3! + ⋯
    = Σ_{k=0}^{∞} x^k/k!   for |x| < ∞.

Note we should be a little more careful here, and prove that the series truly does converge to e^x by using the full definition given in Taylor's Theorem.

In this case we have:

e^x = Σ_{k=0}^{n} x^k/k! + e^η/(n+1)! x^{n+1}     (1.3.1)

which incorporates the error term. We now look at values of x in some interval around the origin, say −a ≤ x ≤ a. Then |η| ≤ a and we know e^η ≤ e^a. Then the remainder or error term is such that:

lim_{n→∞} | e^η/(n+1)! x^{n+1} | ≤ lim_{n→∞} e^a/(n+1)! a^{n+1} = 0

Then when the limit is taken of both sides of (1.3.1) it can be seen that:

e^x = Σ_{k=0}^{∞} x^k/k!

Taylor's theorem can be useful to find approximations to hard-to-compute values.

Example: Use the first five terms in a Taylor series expansion to approximate the value of e:

e ≈ 1 + 1 + 1/2! + 1/3! + 1/4! ≈ 2.7083

Example: In the special case of n = 0, Taylor's theorem is known as the Mean Value Theorem.

Theorem. If f is a continuous function on the closed interval [a, b] and possesses a derivative at each point in the open interval (a, b), then

f(b) = f(a) + (b − a)f′(η)

for some η in (a, b). Notice that this can be rearranged so that:

f′(η) = (f(b) − f(a)) / (b − a)

The right hand side here is an approximation of the derivative for any x ∈ (a, b).
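The partial sums of the series can be evaluated without computing factorials explicitly, by building each term from the previous one; a sketch (the function name is mine):

```python
import math

def exp_taylor(x, n):
    """Partial sum sum_{k=0}^{n} x^k / k! of the Taylor series for e^x."""
    total, term = 0.0, 1.0   # term starts at x^0 / 0! = 1
    for k in range(n + 1):
        total += term
        term *= x / (k + 1)  # x^{k+1}/(k+1)! from x^k/k!
    return total

# First five terms (n = 4) at x = 1 approximate e
print(exp_taylor(1.0, 4), math.e)   # 2.7083..., 2.71828...
```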

1.3.1 Taylor's Theorem using h

There is a more useful form of Taylor's Theorem:

Theorem. If the function f possesses continuous derivatives of order 0, 1, 2, ..., (n+1) in a closed interval I = [a, b], then for any x in I,

f(x + h) = f(x) + f′(x)h + f″(x)h²/2! + ⋯ = Σ_{k=0}^{n} f^{(k)}(x)/k! h^k + E_{n+1}

where h is any value such that x + h is in I, and where

E_{n+1} = f^{(n+1)}(η)/(n+1)! h^{n+1}

for some η between x and x + h.

Note that the error term E_{n+1} will depend on h in two ways: explicitly through the h^{n+1} factor, and through the point η, which generally depends on h. Note as h converges to zero we see the error term converges to zero at a rate proportional to h^{n+1}. Thus, we typically write:

E_{n+1} = O(h^{n+1})

as h goes to zero. This is shorthand for

|E_{n+1}| ≤ C|h^{n+1}|

where C is an upper bounding constant. We additionally note that Taylor's Theorem in terms of h may be written down specifically for any value of n, and thus represents a family of theorems, each with a specific order of h approximation:

f(x + h) = f(x) + O(h)
f(x + h) = f(x) + f′(x)h + O(h²)
f(x + h) = f(x) + f′(x)h + f″(x)h²/2 + O(h³)
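The O(h) statement can be observed numerically: the forward difference (f(x+h) − f(x))/h has error proportional to h by the n = 1 form of the theorem. A sketch using f = sin at x = 0.5 (the derivative from the Section 1.2 activity):

```python
import math

def forward_difference(f, x, h):
    """First-order approximation of f'(x); error is O(h) by Taylor's Theorem."""
    return (f(x + h) - f(x)) / h

x, exact = 0.5, math.cos(0.5)
errors = [abs(forward_difference(math.sin, x, h) - exact)
          for h in (1e-1, 1e-2, 1e-3)]
print(errors)   # each error is roughly 10x smaller than the previous one
```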

1.4 Floating Point Representation

Numbers when entered into a computational machine are typically broken into two parts: an integer portion and a fractional portion, with these two parts being separated by a decimal point. A second form that is used is normalized scientific notation, or normalized floating-point representation. Here the decimal point is shifted so the number is written as a fraction multiplied by some power of 10, where the leading digit of the fraction is nonzero. Any decimal in the floating point system may be written in this manner:

x = ±0.d₁d₂…dₙ × 10ⁿ

with d₁ not equal to zero. More generally we write

x = ±r × 10ⁿ with 1/10 ≤ r < 1.

Here r is the mantissa and n is the exponent. If we are looking at numbers in a binary system then

x = ±q × 2ⁿ with 1/2 ≤ q < 1.

Computers work exactly like this; however, on a computer we have the issue of needing to use a finite word length. This means a couple of things:

- No representation for irrational numbers.
- No representation for numbers that do not fit into a finite format.
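The machine epsilon activity from Section 1.2 can be done with a simple halving loop: keep shrinking eps until adding it to 1.0 no longer changes the result. A sketch:

```python
import sys

eps = 1.0
while 1.0 + eps / 2.0 > 1.0:   # stop once eps/2 is lost when added to 1.0
    eps /= 2.0

print(eps, sys.float_info.epsilon)   # both 2^-52 for IEEE double precision
```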

Activity

Numbers that can be expressed on a computer are called its Machine Numbers, and they vary depending on the computational system being used. If we consider a binary computational system where numbers must be expressed using normalized scientific notation in the form:

x = ±(0.b₁b₂b₃)₂ × 2^{±k}

where b₁, b₂, b₃, and k ∈ {0, 1}, what are all the possible numbers in this computational system? What additional observations can be made about the system?

We shall consider here only the positive numbers (normalization forces b₁ = 1):

(0.100)₂ × 2⁻¹ = 1/4    (0.100)₂ × 2⁰ = 1/2    (0.100)₂ × 2¹ = 1
(0.101)₂ × 2⁻¹ = 5/16   (0.101)₂ × 2⁰ = 5/8    (0.101)₂ × 2¹ = 5/4
(0.110)₂ × 2⁻¹ = 3/8    (0.110)₂ × 2⁰ = 3/4    (0.110)₂ × 2¹ = 3/2
(0.111)₂ × 2⁻¹ = 7/16   (0.111)₂ × 2⁰ = 7/8    (0.111)₂ × 2¹ = 7/4

Note there is a hole in the number system near zero. Note there is also uneven spacing of the numbers we do have. Numbers smaller than the smallest representable number are considered underflow and typically treated as zero. Numbers larger than the largest representable number are considered overflow and will typically throw an error.
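The toy system is small enough to enumerate exhaustively; a sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Normalized mantissas 0.1b2b3 in binary, exponents 2^k for k = -1, 0, 1
numbers = set()
for b2 in (0, 1):
    for b3 in (0, 1):
        mantissa = Fraction(1, 2) + Fraction(b2, 4) + Fraction(b3, 8)
        for k in (-1, 0, 1):
            numbers.add(mantissa * Fraction(2) ** k)

print(sorted(numbers))   # 12 positive machine numbers, from 1/4 up to 7/4
```

Printing the sorted list makes the hole near zero and the uneven spacing easy to see.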

For number representations on computers the IEEE-754 standard has been accepted:

Precision     Bits   Sign   Exponent   Mantissa
Single          32      1          8         23
Double          64      1         11         52
Long Double     80      1         15         64

Note that the mantissa length gives us the ballpark for machine precision when a computation is done using a given number of bits: for double precision, 2⁻⁵² ≈ 2.2 × 10⁻¹⁶.

1.5 Gaussian Elimination

In the previous section we considered the numbers that are available for our use on a computer. We made note that there are many numbers (especially near zero) that are not machine numbers, and when used in a computation these numbers result in numerical round-off error, as the computation will use the closest available machine number. Let's now look at how this round-off error can come into play when we are solving the familiar linear system:

Ax = b

The normal approach would be to compute A⁻¹ and then use that to find x. However, there are other questions that can come into play:

- How do we store a large system of this form on a computer?
- How do we know that the answer we receive is correct?
- Can the algorithm we use fail?
- How long will it take to compute the answer? What is the operation count?
- Will the algorithm be unstable for certain systems of equations? Can we modify the algorithm to control instabilities?
- What is the best algorithm for the task at hand?
- Matrix conditioning issues?

Let's start by considering the system of equations Ax = b with entries

A_{i,j} = (1 + i)^{j−1}

and the right hand side such that

b_i = Σ_{j=1}^{n} A_{i,j}

is the sum of any given row. Note then that the solution to the system will trivially be a column of ones. Here A is a well-known and poorly conditioned Vandermonde matrix.

It may be useful to use the sum of a geometric series when coding this, so that any row i would look like:

Σ_{j=1}^{n} (1 + i)^{j−1} x_j = ((1 + i)^n − 1) / i

The following is pseudocode for a Gaussian elimination procedure. Much like you would do by hand, our goal will be to implement and test this in some computing language.

Listing 1.1: Straight Gaussian Elimination

    % Forward Elimination.
    for k = 1 to (n-1)
        for i = (k+1) to n
            xmult = A(i,k)/A(k,k);
            A(i,k) = xmult;
            for j = (k+1) to n
                A(i,j) = A(i,j) - (xmult)*A(k,j);
            end
            b(i,1) = b(i,1) - (xmult)*b(k,1);
        end
    end

    % Backward Substitution.
    x(n,1) = b(n,1)/A(n,n);
    for i = (n-1) to 1
        sum = b(i,1);
        for j = (i+1) to n
            sum = sum - A(i,j)*x(j,1);
        end
        x(i,1) = sum/A(i,i);
    end

Write a piece of code that implements this algorithm.
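A direct Python translation of Listing 1.1 (no pivoting yet, and without storing the multipliers) might look like:

```python
def gauss_solve(A, b):
    """Solve Ax = b by forward elimination and back substitution.
    A is a list of row lists and b a list; both are modified in place."""
    n = len(A)
    # Forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            xmult = A[i][k] / A[k][k]
            for j in range(k + 1, n):
                A[i][j] -= xmult * A[k][j]
            b[i] -= xmult * b[k]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s -= A[i][j] * x[j]
        x[i] = s / A[i][i]
    return x

print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))   # [1.0, 3.0]
```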

1.5.1 Assessment of Algorithm

In order to see how well our algorithm is performing, the error can be considered. There are several ways of computing the error of a vector solution. The first is to consider a straightforward vector of the difference between the computed solution x_h and the true solution x:

e = x_h − x.

A second method, used when the true solution to a given problem is unknown, is to consider a residual vector:

r = A x_h − b

Note the residual vector will be all zeros when the true solution is obtained. In order to get a handle on the size of either the residual vector or the error vector, norms are often used. A vector norm is any mapping from Rⁿ to R that satisfies the following properties:

- ‖x‖ > 0 if x ≠ 0.
- ‖αx‖ = |α| ‖x‖.
- ‖x + y‖ ≤ ‖x‖ + ‖y‖ (triangle inequality).

where x and y are vectors in Rⁿ, and α ∈ R. Examples of vector norms include:

The ℓ₁ vector norm:  ‖x‖₁ = Σ_{i=1}^{n} |x_i|

The Euclidean/ℓ₂ vector norm:  ‖x‖₂ = ( Σ_{i=1}^{n} x_i² )^{1/2}

The ℓ_p vector norm:  ‖x‖_p = ( Σ_{i=1}^{n} |x_i|^p )^{1/p}

Note there are also norms for matrices. Different norms of the residual and error vectors allow for a single value to be assessed rather than an entire vector.

1.5.2 Matrix Norms

The idea of a matrix norm is to assess the size of a matrix. In general

‖A‖ = max_{x ≠ 0} ‖Ax‖ / ‖x‖

is the matrix norm induced by the vector norm on x, and measures in some sense the maximum stretching the matrix A does to the vector x. Some matrix norms are easier to compute than others; examples include:

The maximum absolute column sum of A:  ‖A‖₁ = max_j Σ_i |a_{ij}|

The maximum absolute row sum of A:  ‖A‖_∞ = max_i Σ_j |a_{ij}|

Matrix norms have the properties:

1. ‖A‖ > 0 if A ≠ 0.
2. ‖γA‖ = |γ| ‖A‖ for any scalar γ.
3. ‖A + B‖ ≤ ‖A‖ + ‖B‖
4. ‖AB‖ ≤ ‖A‖ ‖B‖
5. ‖Ax‖ ≤ ‖A‖ ‖x‖ for any vector x

Properties 1, 2, and 3 hold for the general definition of a matrix norm. Properties 4 and 5 hold for matrix norms that are induced by p-norms. The condition number of a matrix is defined to be:

cond(A) = ‖A‖ ‖A⁻¹‖

where cond(A) = ∞ if A is singular. Note this gives a nice way of quantifying how difficult a system will be to work with.
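Both of the easy-to-compute norms above are short loops; a sketch:

```python
def norm_1(A):
    """Maximum absolute column sum of A (a list of row lists)."""
    n = len(A[0])
    return max(sum(abs(row[j]) for row in A) for j in range(n))

def norm_inf(A):
    """Maximum absolute row sum of A."""
    return max(sum(abs(a) for a in row) for row in A)

A = [[1.0, -7.0], [-2.0, -3.0]]
print(norm_1(A), norm_inf(A))   # 10.0 8.0
```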

Properties of the Condition Number

1. cond(A) ≥ 1
2. cond(I) = 1
3. cond(γA) = cond(A) for any scalar γ
4. For any diagonal matrix D = diag(d_i),  cond(D) = max|d_i| / min|d_i|

Note the condition number measures how close a system is to being singular. Compare this to the determinant of a matrix:

det(A) = 0 ⟹ singular matrix

det(αI) = αⁿ, which can be made arbitrarily small if |α| < 1, even though αI is perfectly well conditioned. The condition number of a system can be used to give estimates on the bounds of solutions to the systems.

1.6 Residuals

What is a residual? Let x̂ denote the approximate solution to a given system. Then r will be the residual, denoted by:

r = b − Ax̂

Note that Δx = x̂ − x = 0 if and only if r = 0. This allows us to consider:

Δx = x̂ − x = A⁻¹(Ax̂ − b) = −A⁻¹r

so that

‖Δx‖ ≤ ‖A⁻¹‖ ‖r‖

Multiplying both sides by 1/‖x̂‖ we obtain:

‖Δx‖/‖x̂‖ ≤ ‖A⁻¹‖ ‖r‖ / ‖x̂‖ = cond(A) ‖r‖ / (‖A‖ ‖x̂‖)

showing that the relative error is bounded by the condition number times the relative residual. Thus, a small residual by itself doesn't tell us anything about how an algorithm is behaving.

1.7 Improving Gaussian Elimination

For notes here we will follow Cheney's presentation. The algorithm that we have implemented will not always work! To see this consider the following example:

0x₁ + x₂ = 1
 x₁ + x₂ = 2

The solution to this system is clearly x₁ = 1 and x₂ = 1; however, our Gaussian elimination algorithm will fail! (Division by zero.) When algorithms fail, this tells us to be skeptical of the results for values near the failure. If we apply the Gaussian elimination algorithm to the following system, what happens?

ɛx₁ + x₂ = 1
 x₁ + x₂ = 2

After step one:

ɛx₁ + x₂ = 1
(1 − ɛ⁻¹)x₂ = 2 − ɛ⁻¹

Doing the back solve yields:

x₂ = (2 − ɛ⁻¹)/(1 − ɛ⁻¹)

However we make note that the value of ɛ is very small, and thus ɛ⁻¹ is very large, so in floating point arithmetic

x₂ = (2 − ɛ⁻¹)/(1 − ɛ⁻¹) ≈ 1    and    x₁ = ɛ⁻¹(1 − x₂) ≈ 0.

These values are not correct, as we would expect to obtain values of

x₁ = 1/(1 − ɛ) ≈ 1    and    x₂ = (1 − 2ɛ)/(1 − ɛ) ≈ 1.

How could we fix the system/algorithm? Note that if we had attacked the problem considering the second equation first, there would have been no difficulty with division by zero. A second issue comes from the coefficient ɛ being very small compared with the other coefficients in the row.
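The ɛ example above can be reproduced in double precision: with ɛ = 10⁻²⁰ the quantities 1 − ɛ⁻¹ and 2 − ɛ⁻¹ both round to −ɛ⁻¹, and the computed x₁ collapses to zero. A sketch:

```python
def solve_no_pivot(eps):
    """Eliminate eps*x1 + x2 = 1, x1 + x2 = 2 without interchanging rows."""
    xmult = 1.0 / eps
    x2 = (2.0 - xmult) / (1.0 - xmult)   # rounds to exactly 1.0 for tiny eps
    x1 = (1.0 - x2) / eps                # catastrophic: the true x1 is near 1
    return x1, x2

def solve_pivoted(eps):
    """Same system, but eliminate using the second row (larger pivot) first."""
    x2 = (1.0 - 2.0 * eps) / (1.0 - eps)
    x1 = 2.0 - x2
    return x1, x2

print(solve_no_pivot(1e-20))   # (0.0, 1.0): x1 is completely wrong
print(solve_pivoted(1e-20))    # both components close to 1.0
```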

At the kth step in the Gaussian elimination process, the entry a_{kk} is known as the pivot element or pivot. The process of interchanging rows or columns of a matrix is known as pivoting and alters the pivot element. We aim to improve the numerical stability of the algorithm: many different operations may be algebraically equivalent, yet produce different numerical results when implemented numerically.

The idea becomes to swap the rows of the system matrix so that the entry with the largest value is used to zero out the entries in the column associated with that variable during Gaussian elimination. This is known as partial pivoting and is accomplished by interchanging two rows in the system.

Gaussian elimination with full pivoting or complete pivoting would select the pivot entry to be the largest entry in the sub-matrix of the system, and reorder both rows and columns to make that element the pivot element. Seeking the largest value possible hopes to make the pivot element as numerically stable as possible, making the process less susceptible to roundoff errors. However, the large amount of work is usually not seen as worth the extra effort when compared with partial pivoting.

An even more sophisticated method would be scaled partial pivoting. Here the largest entry s_i in each row is used when picking the initial pivot equation. The pivot entry is selected by dividing current column entries (for the current variable) by the scaling value s_i for each row, and taking the largest as the pivot row (see the Cheney text for an example and the pseudocode). This simulates full pivoting by using an index vector containing information about the relative sizes of the elements in each row.

The idea here is that these changes to the Gaussian elimination algorithm will allow zero pivots and small pivots to be avoided. Gaussian elimination is numerically stable without pivoting for diagonally dominant matrices or matrices that are symmetric positive definite.
The Matlab backslash operator attempts to use the best or most numerically stable algorithm available.
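Partial pivoting is a small change to the elimination loop of Listing 1.1: before eliminating column k, swap in the row with the largest magnitude entry in that column. A sketch:

```python
def gauss_solve_pivot(A, b):
    """Gaussian elimination with partial pivoting; A and b modified in place."""
    n = len(A)
    for k in range(n - 1):
        # Swap in the row with the largest magnitude entry in column k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            xmult = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= xmult * A[k][j]
            b[i] -= xmult * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

# The system that broke the naive algorithm (zero pivot) is now handled:
print(gauss_solve_pivot([[0.0, 1.0], [1.0, 1.0]], [1.0, 2.0]))   # [1.0, 1.0]
```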


Chapter 2

Methods for Finding Zeros

Four quiet hours is a resource that I can put to good use. Two slabs of time, each two hours long, might add up to the same four hours, but are not nearly as productive as an unbroken four. If I know that I am going to be interrupted, I can't concentrate, and if I suspect that I might be interrupted, I can't do anything at all. Neal Stephenson, Why I'm a Bad Correspondent

There are lots of different methods for finding the roots or zeros of a function; more methods than could probably be listed in a reasonable space. The importance of finding zeros of functions can be seen by considering that any equation may be written in an equivalent form with a zero on one side of the equal sign. In general, methods for finding the roots of a function make a couple of assumptions. We will assume that on the domain over which the root is to be found:

- The function is continuous.
- The function is differentiable.

With these assumptions we can now look at several methods to find the roots of functions numerically, which are especially useful when analytic methods for finding roots are not possible. In order to find a zero of a function, most root finding methods make use of the intermediate value theorem: for a continuous function f and real values a < b such that

f(a)f(b) < 0

there will be a root in the interval (a, b).

2.0.1 Bisection Algorithm

The bisection method looks for the root between the end points of the search interval a and b by:

1. Looking at the midpoint c = (a + b)/2.
2. Computing f(c).
3. Seeing if f(a)f(c) < 0, and if so looking in the interval (a, c).
4. Else seeing if f(b)f(c) < 0, and if so looking in the interval (c, b).

Class coding exercise: Write a piece of code that can be used to find the root of a specified function on a given interval in SAGE, MATLAB or Python.

Convergence Analysis

At this junction it would be a good idea to take stock of how well the bisection algorithm is performing. After the n-th iteration of the algorithm, the distance from the root r to the center of the interval considered will be:

|r − c_n| ≤ (b_n − a_n)/2 ≤ (b − a)/2^{n+1} < ɛ_tol     (2.0.1)

The denominator in (2.0.1) has a factor of 2^{n+1}, as the guess for the root will be at the center of the new interval (a, b). How many iterations will it take for the error to be less than a given tolerance?

(b − a)/2^{n+1} < ɛ_tol
⟹ b − a < 2^{n+1} ɛ_tol = 2ⁿ · 2ɛ_tol
⟹ ln((b − a)/(2ɛ_tol)) < n ln 2
⟹ n > ln((b − a)/(2ɛ_tol)) / ln 2

The bisection method works in the same manner as a binary search that some may have seen in a data structures course.
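The four steps above can be sketched as follows (the test function and interval come from the later Newton exercise, where the root is π):

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Bisection: assumes f is continuous and f(a)*f(b) < 0."""
    fa = f(a)
    while (b - a) / 2.0 > tol:
        c = (a + b) / 2.0
        fc = f(c)
        if fc == 0.0:
            return c
        if fa * fc < 0:
            b = c            # root lies in (a, c)
        else:
            a, fa = c, fc    # root lies in (c, b)
    return (a + b) / 2.0

print(bisect(math.sin, 2.0, 4.0))   # approximates pi
```

Each pass halves the interval, matching the 2^{n+1} factor in the convergence analysis.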

False Position Method

A modification of the bisection method that can be used to find the zeros of a function is the false position method. Here, instead of using the midpoint of the interval (a, b) as the new decision point, a secant line between (a, f(a)) and (b, f(b)) is constructed, and the point at which the secant line crosses the x-axis is taken as the new decision point. Using the slope of the line segment it can be seen that:

c = (a f(b) − b f(a)) / (f(b) − f(a))

and the algorithm carries on in the same manner as the bisection method.

2.0.2 Newton's Method

Newton's method, or Newton-Raphson iteration, is a second way to find the root of a function. Note that presented here is Newton's method for a single variable function; however, more general versions of Newton's method may be used to solve systems of equations. As with the bisection method, Newton's method assumes that our function f is continuous. Additionally it is assumed that the function f is differentiable. Using the fact that the function is differentiable allows for use of the tangent line at a given point to find an approximate value for the root of the function. Consider the following figure: the initial guess for the root, x₀, of the function f is updated to x₁ using the zero of the tangent line of f at the point x₀. Using point slope form of a line gives

y = f′(x₀)(x − x₀) + f(x₀)     (2.0.2)

as the equation of the tangent line of the function f at x₀. Solving (2.0.2) for its root gives x₁, a hopefully better approximation for the root of f:

0 = f′(x₀)(x₁ − x₀) + f(x₀)
⟹ −f(x₀) = f′(x₀)x₁ − x₀f′(x₀)
⟹ x₀f′(x₀) − f(x₀) = f′(x₀)x₁
⟹ x₁ = x₀ − f(x₀)/f′(x₀)

Extending this to successive values allows for a sequence of approximations to the root of f(x) to be found, where x_{n+1} is found from x_n as:

x_{n+1} = x_n − f(x_n)/f′(x_n)

The algorithm should terminate when successive approximating values come within a defined tolerance of one another. We should examine whether or not

lim_{n→∞} x_n = r

for r the root of f.

Coding exercise: Use Newton's Method to find the root of f(x) = sin(x) between 2 and 4. Note this will approximate π.
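The coding exercise can be sketched as:

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:   # successive values within tolerance
            break
    return x

print(newton(math.sin, math.cos, 3.0))   # approximates pi
```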

Alternatives to Newton's Method

One of the drawbacks of using Newton's Method for finding the zeros of a function is that you need to know the derivative of the function you are considering. This may not always be available. To work around the need for an analytic derivative, the derivative may be approximated, provided two values are given for an initial guess. Replacing f′ with a forward difference allows us to derive the secant method for approximating roots of nonlinear functions. The convenience of not needing an analytic derivative comes at the cost of a slower rate of convergence: Newton's method truncates at second order in the Taylor series expansion, and the order of convergence of the secant method falls off some from that.
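Replacing f′(xₙ) in the Newton update with the difference quotient (f(xₙ) − f(xₙ₋₁))/(xₙ − xₙ₋₁) gives the secant iteration; a sketch:

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton's update with a difference-quotient derivative."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break   # flat secant line; cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

print(secant(math.sin, 2.0, 4.0))   # approximates pi
```

Note only one new function evaluation is needed per iteration, unlike Newton's method, which needs both f and f′.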

2.1 Newton's Method for Vector Functions Here the goal is to consider finding the zeros of functions of the form f : R^n → R^n. Following our nose and using a Taylor series expansion truncated after the first-order term: f(x + s) ≈ f(x) + J_f(x) s. Note that J_f(x) is the Jacobian matrix, where {J_f(x)}_{i,j} = ∂f_i(x)/∂x_j. Making note that we want to use an initial guess x and advance to a zero of f approximated at x + s, we can use f(x + s) = 0 in our Taylor expansion to obtain: J_f(x) s = -f(x), a linear matrix system where the next guess is given by x_new = x + s. Example: Consider solving the following system of nonlinear equations with Newton's method, using the initial guess (1, 1, 1)^T: 16x^4 + y^4 + z^4 = 16, x^2 + y^2 + z^2 = 3, x^3 - y = 0.
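The update rule J_f(x)s = -f(x), x ← x + s can be sketched for a small system. The system below (x² + y² = 4, xy = 1) is a hypothetical illustration chosen so the 2×2 linear solve can be done by Cramer's rule; it is not the notes' example system:

```python
def newton_system(F, J, x, tol=1e-12, max_iter=50):
    """Newton's method for F : R^2 -> R^2: solve J_F(x) s = -F(x)
    by Cramer's rule, then update x <- x + s."""
    for _ in range(max_iter):
        f1, f2 = F(x)
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        s1 = (-f1 * d - (-f2) * b) / det   # Cramer's rule for J s = -F
        s2 = (a * (-f2) - (-f1) * c) / det
        x = [x[0] + s1, x[1] + s2]
        if abs(s1) + abs(s2) < tol:
            return x
    return x

# Hypothetical 2x2 system: x^2 + y^2 = 4,  x*y = 1
F = lambda v: (v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0)
J = lambda v: ((2 * v[0], 2 * v[1]), (v[1], v[0]))
sol = newton_system(F, J, [2.0, 0.5])
```

For larger systems the linear solve would be done with Gaussian elimination rather than Cramer's rule.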

Chapter 3 The Heat Equation "It is what you don't expect... that most needs looking for." — Neal Stephenson, Anathem. Basic Notation The heat equation is a fundamental partial differential equation that is used to describe how the temperature of a defined domain changes over time. In order to correctly write the heat equation, observe that the value of the temperature of the object at any specified point will depend not only on the observed location, but also on the time of the observation. Denoting a position in space by x, where x = (x, y, z)^T, and time using the variable t, we have the temperature function u(t, x) for x ∈ Ω, where Ω describes the domain of interest. This could be a beam, a room, a part for an engine (I'll leave this to the reader's imagination for now). In order to describe how the temperature u is changing with respect to space and time we need to have a notation to describe changes in the temperature with respect to these different quantities (that is, take derivatives). The mathematical notation that allows for taking the derivative of a multi-variable function with respect to a specified variable is: ∂u(t, x)/∂x := partial derivative of u with respect to x.

Here the derivative of u is taken with respect to the spatial variable x, treating all other variables (t, y, z) as constants. The partial derivative of u may be taken with respect to t, x, y, and z, and would be denoted as ∂u/∂t, ∂u/∂x, ∂u/∂y, and ∂u/∂z respectively. Note that it has been established that u is a function of time and space, so we may write u(t, x) simply as u. In order to simplify notation, operators that combine the different spatial derivatives are often used. The gradient operator is defined as: ∇ := (∂/∂x, ∂/∂y, ∂/∂z)^T. This is the multi-dimensional equivalent of a first derivative, and is a vector. Recall the dot product vector operation: if two vectors w = (w_1, w_2, w_3)^T and v = (v_1, v_2, v_3)^T are dotted with each other, w · v = w_1 v_1 + w_2 v_2 + w_3 v_3. This allows for the definition of other operators based on the gradient operator. In order to define the heat equation the divergence of a vector needs to be considered. The divergence operator, denoted by div or ∇·, is defined such that: div w = ∇ · w = ∂w_1/∂x + ∂w_2/∂y + ∂w_3/∂z. Taking the divergence of the gradient vector yields the Laplace or Laplacian operator, denoted by Δ: Δ = ∇ · ∇ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z². Some texts use the notation ∇² for the Laplacian operator as well.

Armed with a bunch of new notation we can now write down the heat equation that models the change of temperature on a given domain with respect to time and space. Defining the temperature function u(t, x) we have: ∂u/∂t - cΔu = 0 on Ω, where c ∈ R is a diffusion coefficient. To complete the problem definition an initial condition should be given as well as a description of the boundary conditions. Specifically, let's consider the problem in a single dimension. For instance we may desire to model the temperature of a beam or wire that has an initial heat profile along its length. We can consider submerging the ends of the wire into an ice bath; this will keep them consistently at a temperature of 0°C. Let's also consider our wire to be of length 2π units, having an initial profile of sin(x). The heat equation under these assumptions reduces to: ∂u/∂t - c ∂²u/∂x² = 0 for x ∈ [0, 2π] (governing PDE), u(t, 0) = 0 (boundary condition), u(t, 2π) = 0 (boundary condition), u(0, x) = sin(x) (initial condition). 3.1 Numerical Solution The goal now becomes to model the PDE description of the heat equation using a numerical implementation. To do this we will need a way to approximate the different differential operators, as well as a discrete approximation to the problem not only in space but also with respect to time! 3.1.1 Taylor's Theorem for Approximations In order to get an approximation for the different differential operators that are involved in the PDE we turn to Taylor's theorem. Consider the following two Taylor series expansions: f(x + h) = f(x) + f'(x)h + (1/2)f''(x)h² + (1/6)f'''(x)h³ + O(h⁴) (3.1.1) and f(x - h) = f(x) - f'(x)h + (1/2)f''(x)h² - (1/6)f'''(x)h³ + O(h⁴). (3.1.2) By adding expression (3.1.1) to (3.1.2) we obtain: f(x + h) + f(x - h) = 2f(x) + f''(x)h² + O(h⁴). (3.1.3)

Solving (3.1.3) for f''(x), an approximation of the second derivative of f(x) is found using close points a distance of h on either side of x. Specifically, f''(x) = (f(x - h) - 2f(x) + f(x + h))/h² + O(h²). (3.1.4) Expression (3.1.4) is called a second-order centered difference approximation of f''(x). The approximation uses only values of the function f to approximate the second derivative. This is especially useful as an analytic expression for f may not always be known. Class Exercise: Using the Taylor series expansions for f(x + h) and f(x - h), create an approximation for the first derivative. Consider the order of the approximation method you have obtained. Implement a simple test program to verify the order of accuracy of your method on several different functions. Be sure to try different test functions: f(x) = sin(x), g(x) = cos(x). How do you compute the error between your approximation and the true solution?
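A short check that (3.1.4) really is second order: halving h should shrink the error by roughly a factor of 4 (the test point x = 1 and step sizes are arbitrary choices):

```python
import math

def second_derivative(f, x, h):
    """Second-order centered difference (3.1.4):
    f''(x) ~ (f(x - h) - 2 f(x) + f(x + h)) / h^2."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2

# For f = sin, the exact second derivative at x = 1 is -sin(1).
exact = -math.sin(1.0)
err_h = abs(second_derivative(math.sin, 1.0, 1e-2) - exact)
err_h2 = abs(second_derivative(math.sin, 1.0, 5e-3) - exact)
ratio = err_h / err_h2   # should be close to 2^2 = 4
```

The same ratio test answers the class exercise's question about verifying the order of accuracy.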

Listing 3.1: Reading and Writing Files

% Script to write data to a file.
n = 100;
a = 0.0;
b = 1.0;   % upper endpoint (original value lost in transcription)
h = (b - a)/(n - 1);

x = linspace(a, b, n);
y = (x.^2).*sin(x);   % .^ is the element-wise square function.

fileOut = fopen('Data.txt', 'w');
% First line of data.
fprintf(fileOut, '%6s %12s\n', 'x', 'f(x)');
fprintf(fileOut, '%6.2f %12.8f\n', [x; y]);
fclose(fileOut);

fileIn = fopen('Data.txt', 'rt');
Data = textscan(fileIn, '%f%f', 'HeaderLines', 1, 'CollectOutput', 1);
fclose(fileIn);
YourData = Data{1};
clear Data

3.1.2 Discretizing In order to approximate the true solution u(t, x) for the temperature of the wire, a discrete solution is considered. A grid of points is placed on the problem domain Ω, allowing an approximate solution to the value of u(t, x) to be considered at these discrete points. [Figure: a uniform grid x_1, ..., x_{i-1}, x_i, x_{i+1}, ..., x_m with spacing h.] The domain has been divided into m - 1 equal-length segments using m discrete points. The length of these segments is h = (2π - 0)/(m - 1), and the approximate solution will be obtained at m points in the spatial domain: x_i = (i - 1)h for i ∈ {1, 2, ..., m}. This gives x_1 = 0 and x_m = 2π. Let's denote our discrete approximation to the temperature as u_h(t, x). Note we will also need to look at the problem using a discrete time step too. If we consider some final time of interest to be T, a discrete time step can be defined as: Δt = T/k, where k is the total number of time steps to be taken during the simulation. By defining t_n := nΔt we can use the following notation to describe the approximation of the temperature: u_h(t_n, x_i) = u_i^n.

Explicit Time Stepping Using a finite difference for the temporal derivative and temporally lagging the derived centered difference formula for the second derivative yields the following discrete expression for the heat equation: (u_i^{n+1} - u_i^n)/Δt - c(u_{i-1}^n - 2u_i^n + u_{i+1}^n)/h² = 0. Note if we solve the given expression for u_i^{n+1} we have: u_i^{n+1} = (cΔt/h²)u_{i-1}^n + (1 - 2cΔt/h²)u_i^n + (cΔt/h²)u_{i+1}^n. Considering this for all values of i allows for a matrix system to be written that can advance the solution from t_n to time t_{n+1}. Here we have: A u_h^n = u_h^{n+1}, where the entries in A are given by a_{i,i} = 1 - 2cΔt/h² and a_{i,i-1} = a_{i,i+1} = cΔt/h² for i ∈ {2, 3, ..., m - 1}. Leaving the non-temporal derivative terms to be evaluated at time n in the method described above is known as a Forward Euler or explicit time-stepping technique. The Forward Euler scheme for the heat equation has a time step restriction such that Δt ≤ (1/2)h². This is known as the CFL condition or stability condition. The explicit Forward Euler time-stepping scheme is unstable when this condition is not satisfied. Larger time steps may be taken provided the numerical method is modified. Boundary Conditions For the first and last row we set the values of a_{1,1} and a_{m,m} equal to 1, as they are determined by our boundary condition. The values of u are known on the boundary for all time t ∈ [0, T]. Boundary conditions for PDE systems where the values are set to known or specified values on the boundary are called Dirichlet boundary conditions. Note that the values in the system form a tri-diagonal matrix system, and that to advance the solution from one discrete time step to the next we only need to do a matrix multiply. This ease of solving the system comes at a cost! The solution's advancement suffers from small time step restrictions. This is typical for explicit time-stepping schemes like the Forward Euler technique described here.
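The explicit sweep can be sketched without any matrix machinery, updating each interior point in place. This is a plain-Python sketch for the model problem with c = 1; the grid size, time step (chosen to satisfy Δt ≤ h²/2), and final time are arbitrary choices, and the result is checked against the exact solution e^{-t} sin(x):

```python
import math

# Forward Euler for u_t = c u_xx on [0, 2*pi], u = 0 at both ends,
# u(0, x) = sin(x).  Exact solution: e^{-c t} sin(x).
c = 1.0
m = 41                           # number of grid points (a choice)
h = 2.0 * math.pi / (m - 1)
dt = 0.25 * h * h                # satisfies the restriction dt <= h^2/2
u = [math.sin(i * h) for i in range(m)]

t = 0.0
while t < 0.5 - 1e-12:
    lam = c * dt / h**2
    u_new = [0.0] * m            # Dirichlet end values stay at 0
    for i in range(1, m - 1):
        u_new[i] = lam * u[i-1] + (1.0 - 2.0 * lam) * u[i] + lam * u[i+1]
    u = u_new
    t += dt

max_err = max(abs(u[i] - math.exp(-t) * math.sin(i * h))
              for i in range(m))
```

Doubling dt past h²/2 in this sketch makes the computed profile blow up, which is exactly the instability the CFL condition describes.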

3.2 Implicit Time Stepping To improve on the time-step restriction of our numerical method we can consider discretizing the heat equation as: (u_i^{n+1} - u_i^n)/Δt - c(u_{i-1}^{n+1} - 2u_i^{n+1} + u_{i+1}^{n+1})/h² = 0. (3.2.5) The finite difference approximation of the second derivative in (3.2.5) is now considered at the current time step instead of at the known, lagged time step. Finding the value of u_h^{n+1} now requires solving a system instead of a matrix multiply. The scheme in (3.2.5) is a Backward Euler or implicit time-stepping scheme, and has a larger range of stable time steps, with Δt on the order of h. Rearranging the terms in (3.2.5) we can work to set up the following matrix system for advancing the solution temporally: u_i^{n+1} - (cΔt/h²)(u_{i-1}^{n+1} - 2u_i^{n+1} + u_{i+1}^{n+1}) = u_i^n. Here we can define: γ = cΔt/h². This gives: -γu_{i-1}^{n+1} + (1 + 2γ)u_i^{n+1} - γu_{i+1}^{n+1} = u_i^n. Thus, A u_h^{n+1} = u_h^n with a_{i,i-1} = -γ, a_{i,i} = 1 + 2γ, and a_{i,i+1} = -γ. Make note that the Dirichlet boundary conditions here may be set in the same manner as with the explicit time-stepping method already discussed; specifically a_{1,1} = 1 and a_{m,m} = 1. Dirichlet boundary conditions, being known, may also be taken out of the system completely. This is done by adjusting the values on the right-hand-side vector (u_h^n in our current example). 3.2.1 Tri-Diagonal Systems The matrix system created in order to solve (3.2.5) is a tri-diagonal or banded system, and can be solved readily using a tridiagonal solver.

Consider the system:

[ d_1  c_1                  ] [ x_1 ]   [ b_1 ]
[ a_2  d_2  c_2             ] [ x_2 ]   [ b_2 ]
[      a_3  d_3  c_3        ] [  .  ] = [  .  ]   (3.2.6)
[           ..   ..   ..    ] [  .  ]   [  .  ]
[                a_m  d_m   ] [ x_m ]   [ b_m ]

The two general steps in solving the system can be thought of as:

1. Forward elimination (step 1): subtract a_2/d_1 times row 1 from row 2, creating a 0 in the a_2 position. In general this modifies the system values as follows (for i ∈ {2, 3, ..., m}):

   d_i = d_i - (a_i/d_{i-1}) c_{i-1}
   b_i = b_i - (a_i/d_{i-1}) b_{i-1}

2. Back substitution (step 2): In this portion of the solver the values x_i are computed starting with x_m = b_m/d_m, and in general as:

   x_i = (b_i - c_i x_{i+1}) / d_i
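The two steps above can be sketched directly on the three diagonals of (3.2.6), stored as lists (a plain-Python sketch; the small test system with 2 on the diagonal and -1 off it is an arbitrary choice):

```python
def tri_solve(a, d, c, b):
    """Solve the tridiagonal system (3.2.6).  a is the sub-diagonal
    (a[0] unused), d the diagonal, c the super-diagonal (c[-1] unused),
    b the right-hand side.  Returns x with Ax = b."""
    m = len(d)
    d, b = d[:], b[:]                    # work on copies
    # Forward elimination
    for i in range(1, m):
        mult = a[i] / d[i-1]
        d[i] -= mult * c[i-1]
        b[i] -= mult * b[i-1]
    # Back substitution
    x = [0.0] * m
    x[m-1] = b[m-1] / d[m-1]
    for i in range(m - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i+1]) / d[i]
    return x

# Small check: the system with d = 2 and a = c = -1 applied to
# x = (1, 1, 1, 1) gives b = (1, 0, 0, 1).
a = [0.0, -1.0, -1.0, -1.0]
d = [2.0, 2.0, 2.0, 2.0]
c = [-1.0, -1.0, -1.0, 0.0]
x = tri_solve(a, d, c, [1.0, 0.0, 0.0, 1.0])
```

Both sweeps are O(m), which is why banded solvers are so much cheaper than general Gaussian elimination.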

The following listing is a simple tri-diagonal solver.

Listing 3.2: A Tri-Diagonal Solver

function [x] = TriSolve(A, b)
% TriSolve: a simple tri-diagonal solver
% (John Chrispell, MATH 640)
% This simple function takes a tridiagonal system
% and returns a solution vector x
% that satisfies Ax = b.

n = length(b(:,1));

% Forward elimination
for i = 2:n
    xmult = A(i, i-1)/A(i-1, i-1);
    A(i, i) = A(i, i) - xmult*A(i-1, i);
    b(i, 1) = b(i, 1) - xmult*b(i-1, 1);
end

% Back substitution
x(n, 1) = b(n, 1)/A(n, n);
for i = (n-1):(-1):1
    x(i, 1) = (b(i, 1) - A(i, i+1)*x(i+1, 1))/A(i, i);
end

Use the code above for a tri-diagonal solver to implement a Backward Euler time-stepping scheme for the heat equation.

3.3 Order of Accuracy For the different methods we have discussed it is often necessary to confirm that the methods have been implemented correctly. Consider adding a forcing function to the heat equation such that we can drive the solution to a known given function value. For example, if we desire the true solution to be u(t, x) = e^{-t} sin(x) cos(x), then by setting f(t, x) = 4c e^{-t} sin(x) cos(x) - e^{-t} sin(x) cos(x)

and adding f as a right-hand-side forcing function in our governing PDE we have the system: ∂u/∂t - c ∂²u/∂x² = f for x ∈ [0, 2π] (governing PDE), u(t, 0) = 0 (boundary condition), u(t, 2π) = 0 (boundary condition), u(0, x) = sin(x) cos(x) (initial condition). This allows us to know the true solution, so the error can be examined. Specifically we can look at the norm of the error between the computed solution and the true solution for any computational grid. If the approximated solution were continuous in space and time, the error between the true solution, denoted by u(t, x), and the approximated solution, u_h(t, x), would be written as: error = ( ∫_0^T ∫_0^{2π} (u(t, x) - u_h(t, x))² dx dt )^{1/2}. (3.3.7) Since the solution u(t, x) is being approximated using m discrete points with separation h, and advanced in time using k time steps of size Δt, we can compute a discrete approximation to (3.3.7) as: error_{h,Δt} = ( Σ_{j=1}^{k} Σ_{i=1}^{m} (u(t_j, x_i) - u_h(t_j, x_i))² h Δt )^{1/2}, where t_j := jΔt and x_i = (i - 1)h. Note that this discrete norm mimics the continuous error given by the integral expression of the error in (3.3.7). With a method of computing the error in our approximation at hand, the order of accuracy of a computational method may be examined discretely.
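The Backward Euler exercise from the tri-diagonal section can be sketched in plain Python, with the accuracy checked against a known solution. This sketch uses the unforced problem, whose exact solution is e^{-t} sin(x); the grid size, time step, and final time are arbitrary choices, and the tridiagonal solve is inlined rather than calling the MATLAB TriSolve:

```python
import math

# Backward Euler for u_t = c u_xx on [0, 2*pi], u = 0 at both ends,
# u(0, x) = sin(x).  Exact solution: e^{-t} sin(x) for c = 1.
c = 1.0
m = 81
h = 2.0 * math.pi / (m - 1)
dt = 0.01                       # no CFL-type restriction is needed here
gamma = c * dt / h**2
steps = 50

u = [math.sin(i * h) for i in range(m)]
for _ in range(steps):
    # Tridiagonal system: identity rows at the boundaries, and
    # -gamma u_{i-1} + (1 + 2 gamma) u_i - gamma u_{i+1} = u_i^n inside.
    sub = [0.0] + [-gamma] * (m - 2) + [0.0]
    dia = [1.0] + [1.0 + 2.0 * gamma] * (m - 2) + [1.0]
    sup = [0.0] + [-gamma] * (m - 2) + [0.0]
    b = u[:]
    b[0] = 0.0
    b[m - 1] = 0.0              # Dirichlet values
    for i in range(1, m):                   # forward elimination
        mult = sub[i] / dia[i - 1]
        dia[i] -= mult * sup[i - 1]
        b[i] -= mult * b[i - 1]
    u[m - 1] = b[m - 1] / dia[m - 1]        # back substitution
    for i in range(m - 2, -1, -1):
        u[i] = (b[i] - sup[i] * u[i + 1]) / dia[i]

t = steps * dt
max_err = max(abs(u[i] - math.exp(-t) * math.sin(i * h))
              for i in range(m))
```

Note that dt here is larger than the explicit scheme's limit h²/2 for this grid, yet the computed profile stays stable and accurate to first order in time.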


Chapter 4 Numerical Integration "On Monday in math class Mrs. Fibonacci says, 'You know, you can think of almost everything as a math problem.' On Tuesday I start having problems." — Jon Scieszka and Lane Smith, MATH CURSE. In calculus one of the fundamental topics discussed is integration. The indefinite integral of a function is also a function, or class of functions. The definite integral of a function over a fixed interval is a number. Example: Consider the function f(x) = x³. Indefinite integral: F(x) = ∫ x³ dx = x⁴/4 + C. Definite integral: ∫_0^3 x³ dx = [x⁴/4]_0^3 = 81/4. Example: Consider finding the indefinite integral of f(x) = e^{x²}. That is, ∫ e^{x²} dx = ?

Using u-substitution doesn't work, and computer-algebra systems like Sage give answers such as ∫ e^{x²} dx = -(1/2) i √π erf(ix), as no elementary function of x has a derivative that is simply e^{x²}. The definite integral ∫_a^b f(x) dx is a representation of the area under the curve f(x) between a and b. There should be a way to get a handle on this value for f(x) = e^{x²}. Consider the interval of interest to be between 0 and 1. Then ∫_0^1 e^{x²} dx = Area. How do we find the area when we don't know the function F needed in the Fundamental Theorem of Calculus? Theorem (Fundamental Theorem of Calculus): If f is continuous on the interval [a, b] and F is an antiderivative of f, then ∫_a^b f(x) dx = F(b) - F(a).

4.1 Trapezoid Rule Consider dividing the domain of interest [a, b] into sections such that: a = x_0 ≤ x_1 ≤ x_2 ≤ ... ≤ x_n = b. Then the area under the curve f on each of the sub-intervals [x_i, x_{i+1}] is approximated using a trapezoid with a base of x_{i+1} - x_i and average height of (1/2)(f(x_i) + f(x_{i+1})). Thus, ∫_{x_i}^{x_{i+1}} f(x) dx ≈ (1/2)(x_{i+1} - x_i)(f(x_i) + f(x_{i+1})), and the full definite integral is approximated as: ∫_a^b f(x) dx ≈ (1/2) Σ_{i=0}^{n-1} (x_{i+1} - x_i)(f(x_i) + f(x_{i+1})). Note if a uniform spacing of the sub-intervals of size h is used, the above estimate of the definite integral simplifies to: ∫_a^b f(x) dx ≈ (h/2) Σ_{i=0}^{n-1} (f(x_i) + f(x_{i+1})), and several computations may be saved if the definite integral is written as: ∫_a^b f(x) dx ≈ (h/2)(f(x_0) + f(x_n)) + h Σ_{i=1}^{n-1} f(x_i). Computational Exercise: Using the trapezoid rule and a uniformly spaced set of points a distance h apart, estimate the definite integral ∫_0^1 (sin(x)/x) dx. Assuming the true value of the definite integral is known, compute an estimate for the convergence rate of the trapezoid rule with respect to refinement of the mesh spacing h.
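The computation-saving form of the rule and the exercise's convergence check can be sketched as follows. Since the notes leave the true value of the integral unstated, this sketch uses a very fine trapezoid evaluation as a stand-in reference, and patches the removable singularity of sin(x)/x at 0 with its limit value 1:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n uniform sub-intervals:
    (h/2)(f(x_0) + f(x_n)) + h * (sum of interior values)."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# Computational exercise: integrand sin(x)/x on [0, 1].
f = lambda x: math.sin(x) / x if x != 0.0 else 1.0
coarse = trapezoid(f, 0.0, 1.0, 50)
fine = trapezoid(f, 0.0, 1.0, 100)
ref = trapezoid(f, 0.0, 1.0, 20000)   # stand-in for the true value
ratio = abs(coarse - ref) / abs(fine - ref)   # ~ 2^2 = 4 for O(h^2)
```

The error ratio near 4 under halving of h is the second-order convergence the exercise asks you to estimate.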


Notes on floating point number, numerical computations and pitfalls

Notes on floating point number, numerical computations and pitfalls Notes on floating point number, numerical computations and pitfalls November 6, 212 1 Floating point numbers An n-digit floating point number in base β has the form x = ±(.d 1 d 2 d n ) β β e where.d 1

More information

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix

More information

1.4 Techniques of Integration

1.4 Techniques of Integration .4 Techniques of Integration Recall the following strategy for evaluating definite integrals, which arose from the Fundamental Theorem of Calculus (see Section.3). To calculate b a f(x) dx. Find a function

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 2 Systems of Linear Equations Copyright c 2001. Reproduction permitted only for noncommercial,

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 3 Lecture 3 3.1 General remarks March 4, 2018 This

More information

CS227-Scientific Computing. Lecture 4: A Crash Course in Linear Algebra

CS227-Scientific Computing. Lecture 4: A Crash Course in Linear Algebra CS227-Scientific Computing Lecture 4: A Crash Course in Linear Algebra Linear Transformation of Variables A common phenomenon: Two sets of quantities linearly related: y = 3x + x 2 4x 3 y 2 = 2.7x 2 x

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

Numerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018

Numerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018 Numerical Analysis Preliminary Exam 1 am to 1 pm, August 2, 218 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start

More information

Outline. Math Numerical Analysis. Intermediate Value Theorem. Lecture Notes Zeros and Roots. Joseph M. Mahaffy,

Outline. Math Numerical Analysis. Intermediate Value Theorem. Lecture Notes Zeros and Roots. Joseph M. Mahaffy, Outline Math 541 - Numerical Analysis Lecture Notes Zeros and Roots Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research

More information

Numerical Methods - Preliminaries

Numerical Methods - Preliminaries Numerical Methods - Preliminaries Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Preliminaries 2013 1 / 58 Table of Contents 1 Introduction to Numerical Methods Numerical

More information

5 Finding roots of equations

5 Finding roots of equations Lecture notes for Numerical Analysis 5 Finding roots of equations Topics:. Problem statement. Bisection Method 3. Newton s Method 4. Fixed Point Iterations 5. Systems of equations 6. Notes and further

More information

Math Numerical Analysis

Math Numerical Analysis Math 541 - Numerical Analysis Lecture Notes Zeros and Roots Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research Center

More information

Chapter 1: Preliminaries and Error Analysis

Chapter 1: Preliminaries and Error Analysis Chapter 1: Error Analysis Peter W. White white@tarleton.edu Department of Tarleton State University Summer 2015 / Numerical Analysis Overview We All Remember Calculus Derivatives: limit definition, sum

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Computer Representation of Numbers Counting numbers (unsigned integers) are the numbers 0,

More information

Lecture 7. Gaussian Elimination with Pivoting. David Semeraro. University of Illinois at Urbana-Champaign. February 11, 2014

Lecture 7. Gaussian Elimination with Pivoting. David Semeraro. University of Illinois at Urbana-Champaign. February 11, 2014 Lecture 7 Gaussian Elimination with Pivoting David Semeraro University of Illinois at Urbana-Champaign February 11, 2014 David Semeraro (NCSA) CS 357 February 11, 2014 1 / 41 Naive Gaussian Elimination

More information

Number Systems III MA1S1. Tristan McLoughlin. December 4, 2013

Number Systems III MA1S1. Tristan McLoughlin. December 4, 2013 Number Systems III MA1S1 Tristan McLoughlin December 4, 2013 http://en.wikipedia.org/wiki/binary numeral system http://accu.org/index.php/articles/1558 http://www.binaryconvert.com http://en.wikipedia.org/wiki/ascii

More information

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0 Numerical Analysis 1 1. Nonlinear Equations This lecture note excerpted parts from Michael Heath and Max Gunzburger. Given function f, we seek value x for which where f : D R n R n is nonlinear. f(x) =

More information

Chapter 1 Error Analysis

Chapter 1 Error Analysis Chapter 1 Error Analysis Several sources of errors are important for numerical data processing: Experimental uncertainty: Input data from an experiment have a limited precision. Instead of the vector of

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Roberto s Notes on Linear Algebra Chapter 0: Eigenvalues and diagonalization Section Eigenvalues and eigenvectors What you need to know already: Basic properties of linear transformations. Linear systems

More information

Numerical Methods. King Saud University

Numerical Methods. King Saud University Numerical Methods King Saud University Aims In this lecture, we will... Introduce the topic of numerical methods Consider the Error analysis and sources of errors Introduction A numerical method which

More information

Review for Exam 2 Ben Wang and Mark Styczynski

Review for Exam 2 Ben Wang and Mark Styczynski Review for Exam Ben Wang and Mark Styczynski This is a rough approximation of what we went over in the review session. This is actually more detailed in portions than what we went over. Also, please note

More information

Exact and Approximate Numbers:

Exact and Approximate Numbers: Eact and Approimate Numbers: The numbers that arise in technical applications are better described as eact numbers because there is not the sort of uncertainty in their values that was described above.

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Weekly Activities Ma 110

Weekly Activities Ma 110 Weekly Activities Ma 110 Fall 2008 As of October 27, 2008 We give detailed suggestions of what to learn during each week. This includes a reading assignment as well as a brief description of the main points

More information

Floating Point Number Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le

Floating Point Number Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le Floating Point Number Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview Real number system Examples Absolute and relative errors Floating point numbers Roundoff

More information

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that Chapter 4 Nonlinear equations 4.1 Root finding Consider the problem of solving any nonlinear relation g(x) = h(x) in the real variable x. We rephrase this problem as one of finding the zero (root) of a

More information

x n+1 = x n f(x n) f (x n ), n 0.

x n+1 = x n f(x n) f (x n ), n 0. 1. Nonlinear Equations Given scalar equation, f(x) = 0, (a) Describe I) Newtons Method, II) Secant Method for approximating the solution. (b) State sufficient conditions for Newton and Secant to converge.

More information

Chapter 2 - Linear Equations

Chapter 2 - Linear Equations Chapter 2 - Linear Equations 2. Solving Linear Equations One of the most common problems in scientific computing is the solution of linear equations. It is a problem in its own right, but it also occurs

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence Chapter 6 Nonlinear Equations 6. The Problem of Nonlinear Root-finding In this module we consider the problem of using numerical techniques to find the roots of nonlinear equations, f () =. Initially we

More information

GENG2140, S2, 2012 Week 7: Curve fitting

GENG2140, S2, 2012 Week 7: Curve fitting GENG2140, S2, 2012 Week 7: Curve fitting Curve fitting is the process of constructing a curve, or mathematical function, f(x) that has the best fit to a series of data points Involves fitting lines and

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

An Introduction to Numerical Analysis. James Brannick. The Pennsylvania State University

An Introduction to Numerical Analysis. James Brannick. The Pennsylvania State University An Introduction to Numerical Analysis James Brannick The Pennsylvania State University Contents Chapter 1. Introduction 5 Chapter 2. Computer arithmetic and Error Analysis 7 Chapter 3. Approximation and

More information

1 Backward and Forward Error

1 Backward and Forward Error Math 515 Fall, 2008 Brief Notes on Conditioning, Stability and Finite Precision Arithmetic Most books on numerical analysis, numerical linear algebra, and matrix computations have a lot of material covering

More information

Chapter 11 ORDINARY DIFFERENTIAL EQUATIONS

Chapter 11 ORDINARY DIFFERENTIAL EQUATIONS Chapter 11 ORDINARY DIFFERENTIAL EQUATIONS The general form of a first order differential equations is = f(x, y) with initial condition y(a) = y a We seek the solution y = y(x) for x > a This is shown

More information

Errors. Intensive Computation. Annalisa Massini 2017/2018

Errors. Intensive Computation. Annalisa Massini 2017/2018 Errors Intensive Computation Annalisa Massini 2017/2018 Intensive Computation - 2017/2018 2 References Scientific Computing: An Introductory Survey - Chapter 1 M.T. Heath http://heath.cs.illinois.edu/scicomp/notes/index.html

More information

Floating-point Computation

Floating-point Computation Chapter 2 Floating-point Computation 21 Positional Number System An integer N in a number system of base (or radix) β may be written as N = a n β n + a n 1 β n 1 + + a 1 β + a 0 = P n (β) where a i are

More information

Elements of Floating-point Arithmetic

Elements of Floating-point Arithmetic Elements of Floating-point Arithmetic Sanzheng Qiao Department of Computing and Software McMaster University July, 2012 Outline 1 Floating-point Numbers Representations IEEE Floating-point Standards Underflow

More information

Math 128A: Homework 2 Solutions

Math 128A: Homework 2 Solutions Math 128A: Homework 2 Solutions Due: June 28 1. In problems where high precision is not needed, the IEEE standard provides a specification for single precision numbers, which occupy 32 bits of storage.

More information

Applied Numerical Analysis (AE2220-I) R. Klees and R.P. Dwight

Applied Numerical Analysis (AE2220-I) R. Klees and R.P. Dwight Applied Numerical Analysis (AE0-I) R. Klees and R.P. Dwight February 018 Contents 1 Preliminaries: Motivation, Computer arithmetic, Taylor series 1 1.1 Numerical Analysis Motivation..........................

More information

Differential Equations

Differential Equations This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

15 Nonlinear Equations and Zero-Finders

15 Nonlinear Equations and Zero-Finders 15 Nonlinear Equations and Zero-Finders This lecture describes several methods for the solution of nonlinear equations. In particular, we will discuss the computation of zeros of nonlinear functions f(x).

More information

Integration, differentiation, and root finding. Phys 420/580 Lecture 7

Integration, differentiation, and root finding. Phys 420/580 Lecture 7 Integration, differentiation, and root finding Phys 420/580 Lecture 7 Numerical integration Compute an approximation to the definite integral I = b Find area under the curve in the interval Trapezoid Rule:

More information

Bindel, Fall 2011 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Jan 9

Bindel, Fall 2011 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Jan 9 Problem du jour Week 3: Wednesday, Jan 9 1. As a function of matrix dimension, what is the asymptotic complexity of computing a determinant using the Laplace expansion (cofactor expansion) that you probably

More information

Math 123, Week 2: Matrix Operations, Inverses

Math 123, Week 2: Matrix Operations, Inverses Math 23, Week 2: Matrix Operations, Inverses Section : Matrices We have introduced ourselves to the grid-like coefficient matrix when performing Gaussian elimination We now formally define general matrices

More information

Math 12: Discrete Dynamical Systems Homework

Math 12: Discrete Dynamical Systems Homework Math 12: Discrete Dynamical Systems Homework Department of Mathematics, Harvey Mudd College Several of these problems will require computational software to help build our insight about discrete dynamical

More information

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS FLOATING POINT ARITHMETHIC - ERROR ANALYSIS Brief review of floating point arithmetic Model of floating point arithmetic Notation, backward and forward errors Roundoff errors and floating-point arithmetic

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Taylor s Theorem Can often approximate a function by a polynomial The error in the approximation

More information