Notes: Introduction to Numerical Methods

J.C. Chrispell
Department of Mathematics
Indiana University of Pennsylvania
Indiana, PA, 15705, USA
E-mail: john.chrispell@iup.edu
http://www.math.iup.edu/~jchrispe

February 26, 2018


Preface

These notes will serve as an introduction to numerical methods for scientific computing. From the IUP course catalog the course will contain:

    Algorithmic methods for function evaluation, roots of equations, solutions to systems of linear equations, function interpolation, numerical differentiation; and use of spline functions for curve fitting. Focus on managing and measuring errors in computation. Also offered as COSC 250; either COSC 250 or MATH 250 may be substituted for the other and may be used interchangeably for D or F repeats but may not be counted for duplicate credit.

Material presented in the course will tend to follow the presentation of Cheney and Kincaid in their text Numerical Mathematics and Computing (seventh edition) [2]. Relevant course material will start in chapter 1 of the text, and selected chapters will be covered as time in the course permits. I will supplement the Cheney and Kincaid text with additional material from other popular books on numerical methods:

- Scientific Computing: An Introductory Survey by Heath [3]
- Numerical Analysis by Burden and Faires [1]

My apologies in advance for any typographical errors or mistakes that are present in this document. That said, I will do my very best to update and correct the document if I am made aware of these inaccuracies.

-John Chrispell


Contents

1 Introduction and Review
  1.1 Errors
    1.1.1 Accurate and Precise
    1.1.2 Horner's Algorithm
  1.2 Floating Point Representation
  1.3 Activities
  1.4 Taylor's Theorem
    1.4.1 Taylor's Theorem using h
  1.5 Gaussian Elimination
    1.5.1 Assessment of Algorithm
  1.6 Improving Gaussian Elimination

2 Methods for Finding Zeros
  2.0.1 Bisection Algorithm
  2.0.2 Newton's Method

3 Numerical Integration
  3.1 Trapezoid Rule

4 Appendices

Bibliography

Chapter 1

Introduction and Review

    I have never listened to anyone who criticized my taste in space travel, sideshows or gorillas. When this occurs, I pack up my dinosaurs and leave the room.
    - Ray Bradbury, Zen in the Art of Writing

What is Scientific Computing?

The major theme of this class will be solving scientific problems using computers. Many of the examples considered will be smaller parts that can be thought of as tools for implementing or examining larger computational problems of interest. We will take advantage of replacing a difficult mathematical problem with simpler problems that are easier to handle. Using the smaller parts, insight will be gained into the larger problem of interest. In this class the methods and algorithms underlying computational tools you already use will be examined.

Scientific Computing: Deals with computing continuous quantities in science and engineering (time, distance, velocity, temperature, density, pressure, stress) that can not be found exactly or analytically in a finite number of steps. Typically we are numerically solving problems that involve integrals, derivatives, and nonlinearities.

Numerical Analysis: An area of mathematics where concern is placed on the design and implementation of algorithms to solve scientific problems.

In general for solving a problem you will:

- Develop a model (expressed by equations) for a phenomenon or system of interest.
- Find or develop an algorithm to solve the system.
- Develop a computational implementation.

- Run your implementation.
- Post-process your results (graphs, tables, charts).
- Interpret and validate your results.

Problems are well posed provided:

1. A solution to the problem of interest exists.
2. The solution is unique.
3. The solution depends continuously on the data.

The last item here is important, as problems that are ill conditioned have large changes in output with small changes in the initial conditions or data. This can be troubling for numerical methods, and is not always avoidable.

In general we will use some standard techniques to attack the problems presented: replacing an unsolvable problem by a problem that is close to it in some sense, and then looking at the closely related solution. We replace infinite dimensional spaces with finite ones, and infinite processes with finite processes:

- Integrals with sums.
- Derivatives with finite differences.
- Nonlinear problems with linear ones.
- Complicated functions with simple ones (polynomials).
- General matrices with simpler matrices.

With all of this replacement and simplification the sources of error and approximation need to be accounted for. How good is the approximated solution?

Significant Digits

The significant digits in a computation start with the leftmost nonzero digit, and end with the rightmost correct digit (including final zeros that are correct).

Example: Let's consider calculating the surface area of the Earth. The area of a sphere is:

    A = 4 \pi r^2

The radius of the Earth (r \approx 6370 km) is an approximation. The value of \pi \approx 3.141592653 is rounded at some point. The numerical computation itself will be rounded at some point. All of these assumptions will come into play: how many digits of the computed area are significant?

Figure 1.0.1: Here the intersection of two nearly parallel lines is compared with an error range of size \epsilon. Note the closer the two lines are to parallel, the more ill conditioned finding the intersection becomes. See www.math.iup.edu/~jchrispe/math_250/eps_error.html

Example: Consider solving the following system of equations,

    0.1243x + 0.2345y = 0.8723
    0.3237x + 0.5431y = 0.9321

where you can only keep three significant digits. Keeping only three significant digits in all computations gives an answer of

    x \approx -29.0    and    y \approx 19.0

Solving the problem using sage:

    x \approx -30.3760666260334    and    y \approx 19.8210877680851

Note that the example in the Cheney text is far more dramatic, and the potential for error when truncating grows dramatically if the two lines of interest are nearly parallel.

1.1 Errors

If two values are considered, one taken to be true and the other an approximation, then the error is given by:

    Error = True - Approximation

The absolute error of using the approximation is

    Absolute Error = |True - Approximation|

and we denote

    Relative Error = |True - Approximation| / |True|

The relative error is usually more useful than the absolute error. The relative error is not defined if the true value we are looking for is zero.

Example: Consider the case where we are approximating and have:

    True = 12.34    and    Approximation = 12.35

Here we have the following:

    Error = -0.01
    Absolute Error = 0.01
    Relative Error = 0.0008103727714748612

Note that the approximation has 4 significant digits.

Example: Consider the case where we are approximating and have:

    True = 0.001    and    Approximation = 0.002

Here we have the following:

    Error = -0.001
    Absolute Error = 0.001
    Relative Error = 1

Here relative error is a much better indicator of how well the approximation fits the true value.
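
A quick Python check of the arithmetic in the first example above (a minimal sketch; the variable names are our own):

    true_val, approx = 12.34, 12.35
    print(true_val - approx)                       # Error: -0.01 (up to roundoff)
    print(abs(true_val - approx))                  # Absolute Error: 0.01
    print(abs(true_val - approx) / abs(true_val))  # Relative Error: ~0.00081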

1.1.1 Accurate and Precise

When a computation is accurate to n decimal places then we can trust n digits to the right of the decimal place. Similarly, when a computation is said to be accurate to n significant digits, then the computation is meaningful for n places beginning with the leftmost nonzero digit given.

The classic example here is a meter stick. The user can consider it accurate to the level of graduation on the meter stick. A second example would be the mileage on your car. It usually displays in tenth of a mile increments, so you could use your car to measure distances accurate to within two tenths of a mile.

Precision is a different game. Consider adding the following values:

    3.4 + 5.67 = 9.07

The second digit in 3.4 could be from rounding any of the following: 3.41, 3.4256, 3.44, 3.36, 3.399, 3.38 to two significant digits. So there can only be two significant digits in the answer. The results from multiplication and division can be even more misleading.

Computers will in some cases allow a user to decide if they would like to use rounding or chopping. Note there may be several different schemes for rounding values (especially when it comes to rounding values ending with a 5).

1.1.2 Horner's Algorithm

In general it is a good idea to complete most computations using a minimum number of floating point operations. Consider evaluating polynomials. Given

    f(x) = a_0 + a_1 x + a_2 x^2 + ... + a_{n-1} x^{n-1} + a_n x^n

it would not be wise to compute x^2, then x^3, and so on. Writing the polynomial as

    f(x) = a_0 + x(a_1 + x(a_2 + x( ... x(a_{n-1} + x a_n) ... )))

will efficiently evaluate the polynomial without ever having to use exponentiation. This nested evaluation of polynomials is Horner's Algorithm, and it is accomplished using synthetic division. A sketch is given below.
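
A minimal Python sketch of Horner's rule; the function name horner and the example polynomial are our own choices, not from the text:

    def horner(coeffs, x):
        """Evaluate a_0 + a_1*x + ... + a_n*x**n by Horner's rule.

        coeffs is [a_0, a_1, ..., a_n]; no exponentiation is used,
        just n multiplications and n additions.
        """
        result = 0.0
        for a in reversed(coeffs):
            result = result * x + a
        return result

    # f(x) = 1 + 2x + 3x^2 evaluated at x = 4: 1 + 8 + 48 = 57
    print(horner([1.0, 2.0, 3.0], 4.0))   # 57.0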

1.2 Floating Point Representation

Numbers when entered into a computational machine are typically broken into two parts, an integer portion and a fractional portion, with these two parts separated by a decimal point:

    123.456,    0.0000123

A second form that is used is normalized scientific notation, or normalized floating-point representation. Here the decimal point is shifted so the number is written as a fraction multiplied by some power of 10, where the leading digit of the fraction is nonzero:

    0.0000123 = 0.123 \times 10^{-4}

Any decimal in the floating point system may be written in this manner:

    x = \pm 0.d_1 d_2 d_3 ... \times 10^n

with d_1 not equal to zero. More generally we write

    x = \pm r \times 10^n    with    1/10 <= r < 1.

Here r is the mantissa and n is the exponent. If we are looking at numbers in a binary system then

    x = \pm q \times 2^n    with    1/2 <= q < 1.

Computers work exactly like this; however, on a computer we have the issue of needing to use a finite word length (no more ...). This means a couple of things:

- No representation for irrational numbers.
- No representation for numbers that do not fit into a finite format.

Activity

Numbers that can be expressed on a computer are called its machine numbers, and they vary depending on the computational system being used. Consider a binary computational system where numbers must be expressed using normalized scientific notation in the form:

    x = \pm (0.b_1 b_2 b_3)_2 \times 2^{\pm k}

where the values of b_1, b_2, b_3, and k are in {0, 1}. What are all the possible numbers in this computational system? What additional observations can be made about the system?

We shall consider here only the positive numbers:

    (0.100)_2 \times 2^{-1} = 1/4     (0.100)_2 \times 2^0 = 1/2     (0.100)_2 \times 2^1 = 1
    (0.101)_2 \times 2^{-1} = 5/16    (0.101)_2 \times 2^0 = 5/8     (0.101)_2 \times 2^1 = 5/4
    (0.110)_2 \times 2^{-1} = 3/8     (0.110)_2 \times 2^0 = 3/4     (0.110)_2 \times 2^1 = 3/2
    (0.111)_2 \times 2^{-1} = 7/16    (0.111)_2 \times 2^0 = 7/8     (0.111)_2 \times 2^1 = 7/4

Note there is a hole in the number system near zero. Note there is also uneven spacing of the numbers we do have. Numbers smaller than the smallest representable number are considered underflow and typically treated as zero. Numbers larger than the largest representable number are considered overflow and will typically throw an error.
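
A short Python sketch that enumerates the positive numbers of this toy system, using exact rational arithmetic so the fractions above can be checked directly; the variable names are our own:

    from fractions import Fraction

    # Normalized mantissas (0.1 b2 b3)_2 with exponents 2^{-1}, 2^0, 2^1.
    machine = set()
    for b2 in (0, 1):
        for b3 in (0, 1):
            mantissa = Fraction(1, 2) + Fraction(b2, 4) + Fraction(b3, 8)
            for exp in (-1, 0, 1):
                machine.add(mantissa * Fraction(2) ** exp)

    print(", ".join(str(m) for m in sorted(machine)))
    # 1/4, 5/16, 3/8, 7/16, 1/2, 5/8, 3/4, 7/8, 1, 5/4, 3/2, 7/4
    # Note the gap between 0 and 1/4, and the uneven spacing.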

For number representations on computers the IEEE-754 standard has been widely adopted.

    Precision     Bits   Sign   Exponent   Mantissa
    Single        32     1      8          23
    Double        64     1      11         52
    Long Double   80     1      15         64

Note that

    2^{-23} \approx 1.2 \times 10^{-7}
    2^{-52} \approx 2.2 \times 10^{-16}
    2^{-64} \approx 5.4 \times 10^{-20}

gives us the ballpark for machine precision when a computation is done using a given number of bits.
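
A small Python check of the double-precision figure in the table; sys.float_info.epsilon is the standard library's name for the gap between 1 and the next representable double:

    import sys

    # Double precision: 52 mantissa bits, so eps = 2**-52 ~ 2.2e-16.
    print(sys.float_info.epsilon)   # 2.220446049250313e-16
    print(2.0 ** -52)               # the same value

    # One symptom of finite precision: 1 + eps/2 rounds back to 1.
    print(1.0 + 2.0 ** -53 == 1.0)  # True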

1.3 Activities

The limit

    e = \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n

defines the number e in calculus. Estimate e by taking the value of this expression for n = 8, 8^2, 8^3, ..., 8^{10}. Compare with e obtained from the exponential function on your machine. Interpret the results. A small script for experimenting is sketched below.
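
One possible script for the activity, assuming Python is the language of choice:

    import math

    # (1 + 1/n)**n for n = 8, 8**2, ..., 8**10.  In exact arithmetic this
    # approaches e, but in floating point 1/n is rounded, and for huge n
    # that rounding error is amplified by the exponent n.
    for p in range(1, 11):
        n = 8.0 ** p
        print(p, (1.0 + 1.0 / n) ** n, math.e)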

1.4 Taylor's Theorem

There are several useful forms of Taylor's Theorem, and it can be argued that it is the most important theorem for the study of numerical methods.

Theorem 1.4.1 If the function f possesses continuous derivatives of orders 0, 1, 2, ..., (n+1) in a closed interval I = [a, b], then for any c and x in I,

    f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(c)}{k!} (x - c)^k + E_{n+1}

where the error term E_{n+1} can be given in the form

    E_{n+1} = \frac{f^{(n+1)}(\eta)}{(n+1)!} (x - c)^{n+1}.

Here \eta is a point that lies between c and x and depends on both.

Note we can use Taylor's Theorem to come up with useful series expansions.

Example: Use Taylor's Theorem to find a series expansion for e^x.

Here we need to evaluate the n-th derivative of e^x. We also need to pick a point of expansion, or value for c. We will choose c to be zero, and recall that the derivative of e^x is such that

    \frac{d}{dx} e^x = e^x.

Thus, for Taylor's Theorem we need:

    f(0) = e^0 = 1
    f'(0) = e^0 = 1
    f''(0) = e^0 = 1    (I see a pattern!)

So we then have:

    e^x = \frac{f(0)}{0!} x^0 + \frac{f'(0)}{1!} x^1 + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + ...
        = \frac{1}{0!} + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + ...
        = \sum_{k=0}^{\infty} \frac{x^k}{k!}    for |x| < \infty.

Note we should be a little more careful here, and prove that the series truly does converge to e^x by using the full definition given in Taylor's Theorem.

In this case we have:

    e^x = \sum_{k=0}^{n} \frac{x^k}{k!} + \frac{e^\eta}{(n+1)!} x^{n+1}    (1.4.1)

which incorporates the error term. We now look at values of x in some interval around the origin; consider -a <= x <= a. Then |\eta| <= a and we know e^\eta <= e^a, so the remainder or error term is such that:

    \lim_{n \to \infty} \left| \frac{e^\eta}{(n+1)!} x^{n+1} \right| <= \lim_{n \to \infty} \frac{e^a}{(n+1)!} a^{n+1} = 0

Then when the limit is taken of both sides of (1.4.1) it can be seen that:

    e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}

Taylor's theorem can be useful to find approximations to hard to compute values.

Example: Use the first five terms in a Taylor series expansion to approximate the value of e.

    e \approx 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} = 2.70833333333

Example: In the special case of n = 0, Taylor's theorem is known as the Mean Value Theorem.

Theorem 1.4.2 If f is a continuous function on the closed interval [a, b] and possesses a derivative at each point in the open interval (a, b), then

    f(b) = f(a) + (b - a) f'(\eta)

for some \eta in (a, b). Notice that this can be rearranged so that:

    f'(\eta) = \frac{f(b) - f(a)}{b - a}

The right hand side here is an approximation of the derivative f'(x) for any x in (a, b).
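
A small Python sketch that builds the partial sums of the series at x = 1 and watches the error shrink, as the error term predicts:

    import math

    # Partial sums of sum_{k=0}^{n} x**k / k! at x = 1; the error should
    # shrink roughly like 1/(n+1)!.
    x, term, s = 1.0, 1.0, 1.0
    for k in range(1, 10):
        term *= x / k          # x**k / k! built incrementally
        s += term
        print(k, s, abs(math.e - s))

At k = 4 this reproduces the five-term estimate 2.70833... from the example above.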

1.4.1 Taylor's Theorem using h

There is a more useful form of Taylor's Theorem:

Theorem 1.4.3 If the function f possesses continuous derivatives of orders 0, 1, 2, ..., (n+1) in a closed interval I = [a, b], then for any x in I,

    f(x + h) = f(x) + f'(x) h + \frac{1}{2} f''(x) h^2 + \frac{1}{6} f'''(x) h^3 + ... + E_{n+1}
             = \sum_{k=0}^{n} \frac{f^{(k)}(x)}{k!} h^k + E_{n+1}

where h is any value such that x + h is in I, and where

    E_{n+1} = \frac{f^{(n+1)}(\eta)}{(n+1)!} h^{n+1}

for some \eta between x and x + h.

Note that the error term E_{n+1} will depend on h in two ways:

- Explicitly on h through the h^{n+1} factor.
- The point \eta generally depends on h.

Note as h converges to zero the error term converges to zero at a rate proportional to h^{n+1}. Thus, we typically write

    E_{n+1} = O(h^{n+1})

as h goes to zero. This is shorthand for

    |E_{n+1}| <= C |h|^{n+1}

where C is an upper bounding constant.

We additionally note that Taylor's Theorem in terms of h may be written down specifically for any value of n, and thus represents a family of theorems, each with a specific order of h approximation:

    f(x + h) = f(x) + O(h)
    f(x + h) = f(x) + f'(x) h + O(h^2)
    f(x + h) = f(x) + f'(x) h + \frac{1}{2} f''(x) h^2 + O(h^3)

1.5 Gaussian Elimination

In the previous section we considered the numbers that are available for our use on a computer. We made note that there are many numbers (especially near zero) that are not machine numbers; when such a number arises in a computation, the computation uses the closest available machine number, and this results in numerical roundoff error. Let's now look at how this roundoff error can come into play when we are solving the familiar linear system:

    Ax = b

The normal approach would be to compute A^{-1} and then use that to find x. However, there are other questions that come into play:

- How do we store a large system of this form on a computer?
- How do we know that the answer we receive is correct?
- Can the algorithm we use fail?
- How long will it take to compute the answer? What is the operation count?
- Will the algorithm be unstable for certain systems of equations?
- Can we modify the algorithm to control instabilities?
- What is the best algorithm for the task at hand?
- Are there matrix conditioning issues?

Let's start by considering the system of equations Ax = b with

    A = [ 1   2      4       ...  2^{n-1}      ]
        [ 1   3      9       ...  3^{n-1}      ]
        [ 1   4      16      ...  4^{n-1}      ]
        [ ...                                  ]
        [ 1   n+1   (n+1)^2  ...  (n+1)^{n-1}  ]

and the right hand side such that

    b_i = \sum_{j=1}^{n} A_{i,j}

is the sum of any given row. Note then that the solution to the system will trivially be a column of ones. Here A is a well known and poorly conditioned Vandermonde matrix.

It may be useful to use the sum of a geometric series when coding this, so that any row i (with each x_j = 1) would look like:

    \sum_{j=1}^{n} (1 + i)^{j-1} x_j = \frac{1}{i} \left( (1 + i)^n - 1 \right)

The following is pseudocode for a Gaussian elimination procedure, much like you would do by hand. Our goal will be to implement and test this in MATLAB.

Listing 1.1: Straight Gaussian Elimination

    % Forward elimination.
    for k = 1:(n-1)
        for i = (k+1):n
            xmult = A(i,k) / A(k,k);
            A(i,k) = xmult;
            for j = (k+1):n
                A(i,j) = A(i,j) - xmult * A(k,j);
            end
            b(i,1) = b(i,1) - xmult * b(k,1);
        end
    end

    % Backward substitution.
    x(n,1) = b(n,1) / A(n,n);
    for i = (n-1):-1:1
        s = b(i,1);
        for j = (i+1):n
            s = s - A(i,j) * x(j,1);
        end
        x(i,1) = s / A(i,i);
    end

Write a piece of code that implements this algorithm; one possible translation is sketched below.
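
One possible Python/NumPy translation of Listing 1.1, tested on the Vandermonde system above; the function name gauss_solve is our own:

    import numpy as np

    def gauss_solve(A, b):
        # Naive Gaussian elimination with no pivoting, following Listing 1.1.
        # Works on copies of A and b; returns the solution vector x.
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        n = len(b)
        # Forward elimination.
        for k in range(n - 1):
            for i in range(k + 1, n):
                xmult = A[i, k] / A[k, k]
                A[i, k + 1:] -= xmult * A[k, k + 1:]
                b[i] -= xmult * b[k]
        # Backward substitution.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    # Test on the Vandermonde system above; the exact solution is all ones.
    n = 8
    A = np.array([[(1 + i) ** j for j in range(n)] for i in range(1, n + 1)],
                 dtype=float)
    b = A.sum(axis=1)
    print(gauss_solve(A, b))   # close to [1, 1, ..., 1], up to roundoff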

1.5.1 Assessment of Algorithm

In order to see how well our algorithm performs, the error can be considered. There are several ways of quantifying the error of a vector solution. The first is to consider the straightforward difference between the computed solution x_h and the true solution x:

    e = x_h - x.

A second method, used when the true solution to a given problem is unknown, is to consider a residual vector:

    r = A x_h - b

Note the residual vector will be all zeros when the true solution is obtained.

In order to get a handle on the size of either the residual vector or the error vector, norms are often used. A vector norm is any mapping from R^n to R that satisfies the following properties:

- ||x|| > 0 if x != 0.
- ||\alpha x|| = |\alpha| ||x||.
- ||x + y|| <= ||x|| + ||y||    (triangle inequality)

where x and y are vectors in R^n, and \alpha is in R. Examples of vector norms include:

The l_1 vector norm:

    ||x||_1 = \sum_{i=1}^{n} |x_i|

The Euclidean or l_2 vector norm:

    ||x||_2 = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}

The l_p vector norm:

    ||x||_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}

Note there are also norms for matrices; more on this when the condition number of a matrix is discussed. Different norms of the residual and error vectors allow a single value to be assessed rather than an entire vector.
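
Continuing the gauss_solve sketch above, NumPy's norm routine gives the error and residual norms directly:

    import numpy as np

    # Norms of the error and residual vectors for the Vandermonde solution.
    x_h = gauss_solve(A, b)
    e = x_h - np.ones(n)    # error vector (the true solution is all ones)
    r = A @ x_h - b         # residual vector
    print(np.linalg.norm(e, 1), np.linalg.norm(e, 2), np.linalg.norm(e, np.inf))
    print(np.linalg.norm(r, 2))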

1.6 Improving Gaussian Elimination

For notes here we will follow Cheney's presentation. The algorithm that we have implemented will not always work! To see this consider the following example:

    0 x_1 + x_2 = 1
      x_1 + x_2 = 2

The solution to this system is clearly x_1 = 1 and x_2 = 1; however, our Gaussian elimination algorithm will fail! (Division by zero.) When algorithms fail this tells us to be skeptical of the results for values near the failure. If we apply the Gaussian elimination algorithm to the following system, what happens?

    \epsilon x_1 + x_2 = 1
             x_1 + x_2 = 2

After step one:

    \epsilon x_1 + x_2 = 1
    (1 - \epsilon^{-1}) x_2 = 2 - \epsilon^{-1}

Doing the back solve yields:

    x_2 = \frac{2 - \epsilon^{-1}}{1 - \epsilon^{-1}}

However we make note that the value of \epsilon is very small, and thus \epsilon^{-1} is very large, so

    x_2 = \frac{2 - \epsilon^{-1}}{1 - \epsilon^{-1}} \approx 1    and    x_1 = \epsilon^{-1}(1 - x_2) \approx 0.

These values are not correct, as we would expect in the real world to obtain values of

    x_1 = \frac{1}{1 - \epsilon}    and    x_2 = \frac{1 - 2\epsilon}{1 - \epsilon}

both of which are approximately 1. How could we fix the system/algorithm?

Note that if we had attacked the problem considering the second equation first, there would have been no difficulty with division by zero. A second issue comes from the coefficient \epsilon being very small compared with the other coefficients in its row. A short demonstration of the failure is sketched below.
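
A short demonstration, reusing the naive gauss_solve sketch from Section 1.5 with a tiny \epsilon:

    # Solve  eps*x1 + x2 = 1,  x1 + x2 = 2  naively, then with rows swapped.
    eps = 1.0e-20
    print(gauss_solve([[eps, 1.0], [1.0, 1.0]], [1.0, 2.0]))  # ~[0, 1]: wrong
    # Considering the second equation first avoids the trouble:
    print(gauss_solve([[1.0, 1.0], [eps, 1.0]], [2.0, 1.0]))  # ~[1, 1]: correct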

At the k-th step in the Gaussian elimination process, the entry a_{kk} is known as the pivot element, or pivot. The process of interchanging rows or columns of a matrix is known as pivoting, and it alters the pivot element. We aim to improve the numerical stability of the algorithm: many different operations may be algebraically equivalent, yet produce different numerical results when implemented numerically.

The idea is to swap the rows of the system matrix so that the entry with the largest magnitude is used to zero out the entries in the column associated with the current variable during Gaussian elimination. This is known as partial pivoting, and it is accomplished by interchanging two rows in the system.

Gaussian elimination with full pivoting (or complete pivoting) would select the pivot entry to be the largest entry in the remaining sub-matrix of the system, and reorder both rows and columns to make that element the pivot element. Seeking the largest value possible aims to make the pivot element as numerically stable as possible, which makes the process less susceptible to roundoff errors. However, the large amount of extra work is usually not seen as worth the effort when compared with partial pivoting.

An even more sophisticated method is scaled partial pivoting. Here the largest entry s_i in each row i is used when picking the initial pivot equation. The pivot entry is selected by dividing the current column entries (for the current variable) by the scaling value s_i for each row, and taking the largest as the pivot row (see the Cheney text for an example and the pseudocode). This simulates full pivoting by using an index vector containing information about the relative sizes of the elements in each row.

The idea is that these changes to the Gaussian elimination algorithm will allow zero pivots and small pivots to be avoided. Gaussian elimination is numerically stable without pivoting for diagonally dominant matrices and for matrices that are symmetric positive definite. The MATLAB backslash operator attempts to use the best or most numerically stable algorithm available. A sketch of partial pivoting is given below.
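
A minimal Python sketch of Gaussian elimination with partial pivoting (plain partial pivoting, not the scaled-pivoting pseudocode from the Cheney text); the function name is our own:

    import numpy as np

    def gauss_solve_pp(A, b):
        # Gaussian elimination with partial pivoting: at step k, swap in
        # the row whose entry in column k has the largest magnitude.
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        n = len(b)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))   # pivot row for column k
            if p != k:
                A[[k, p]] = A[[p, k]]             # swap rows k and p
                b[[k, p]] = b[[p, k]]
            for i in range(k + 1, n):
                xmult = A[i, k] / A[k, k]
                A[i, k:] -= xmult * A[k, k:]
                b[i] -= xmult * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    # The troublesome epsilon system is now handled correctly.
    print(gauss_solve_pp([[1.0e-20, 1.0], [1.0, 1.0]], [1.0, 2.0]))  # ~[1, 1]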


Chapter 2

Methods for Finding Zeros

    Four quiet hours is a resource that I can put to good use. Two slabs of time, each two hours long, might add up to the same four hours, but are not nearly as productive as an unbroken four. If I know that I am going to be interrupted, I can't concentrate, and if I suspect that I might be interrupted, I can't do anything at all.
    - Neal Stephenson, Why I'm a Bad Correspondent

There are lots of different methods for finding the roots or zeros of a function; more methods than could probably be listed in a reasonable space. The importance of finding zeros of functions can be seen by considering that any equation may be written in an equivalent form with a zero on one side of the equal sign.

In general, methods for finding the roots of a function make a couple of assumptions about the domain of the function over which the root is to be found:

- The function is continuous.
- The function is differentiable on the domain considered.

With these assumptions we can now look at several methods to find the roots of functions numerically; these are especially useful when analytic methods for finding roots are not possible.

In order to find a zero of a function, most root finding methods make use of the intermediate value theorem: for a continuous function f and real values a < b such that

    f(a) f(b) < 0

there will be a root in the interval (a, b).

2.0.1 Bisection Algorithm

The bisection method looks for a root between the end points of the search interval a and b by:

1. Looking at the midpoint c = (a + b)/2.
2. Computing f(c).
3. Seeing if f(a) f(c) < 0, and if so looking in the interval (a, c).
4. Else seeing if f(c) f(b) < 0, and if so looking in the interval (c, b).

Class coding exercise: Write a piece of code that can be used to find the root of a specified function on a given interval in SAGE, Python, or MATLAB. A Python sketch follows the convergence analysis below.

Convergence Analysis

At this junction it would be a good idea to take stock of how well the bisection algorithm performs. After the n-th iteration of the algorithm, the distance from the root r to the center c_n of the interval considered will be:

    |r - c_n| < \frac{b_n - a_n}{2} = \frac{b - a}{2^{n+1}} < \epsilon_{tol}.    (2.0.1)

The denominator in (2.0.1) has a factor of 2^{n+1} as the guess for the root will be at the center of the new interval. How many iterations will it take for the error to be less than a given tolerance?

    \frac{|b - a|}{2^{n+1}} < \epsilon_{tol}
    =>  |b - a| < 2^n \cdot 2 \epsilon_{tol}
    =>  \ln\left( \frac{|b - a|}{2 \epsilon_{tol}} \right) < n \ln(2)
    =>  n > \frac{\ln\left( |b - a| / (2 \epsilon_{tol}) \right)}{\ln(2)}

The bisection method works in the same manner as the binary search method that some may have seen in a data structures course.
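
One possible Python solution to the coding exercise; the function name bisect and the cosine test problem are our own choices:

    import math

    def bisect(f, a, b, tol=1e-10, max_iter=200):
        # Bisection: assumes f is continuous and f(a)*f(b) < 0.
        fa, fb = f(a), f(b)
        if fa * fb >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            c = (a + b) / 2.0
            fc = f(c)
            if fc == 0.0 or (b - a) / 2.0 < tol:
                return c
            if fa * fc < 0:       # root lies in (a, c)
                b, fb = c, fc
            else:                 # root lies in (c, b)
                a, fa = c, fc
        return (a + b) / 2.0

    # cos changes sign on [1, 2], so this converges to pi/2.
    print(bisect(math.cos, 1.0, 2.0))   # 1.5707963...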

False Position Method

A modification of the bisection method that can be used to find the zeros of a function is the false position method. Here, instead of using the midpoint of the interval (a, b) as the new end point of the search interval, a secant line between (a, f(a)) and (b, f(b)) is constructed, and the point at which the secant line crosses the x-axis is used as the new decision point.

Using the slope of the secant line it can be seen that the crossing point is

    c = \frac{a f(b) - b f(a)}{f(b) - f(a)}

and the algorithm carries on in the same manner as the bisection method did.

2.0.2 Newton's Method

Newton's method, or Newton-Raphson iteration, is a second way to find the root of a function. Note that presented here is Newton's method for a single variable function; however, more general versions of Newton's method may be used to solve systems of equations.

As with the bisection method, Newton's method assumes that our function f is continuous. Additionally it is assumed that the function f is differentiable. Using the fact that the function is differentiable allows for use of the tangent line at a given point to find an approximate value for the root of the function: the initial guess for the root, x_0, of the function f is updated to x_1 using the zero of the tangent line of f at the point x_0. Using point slope form of a line gives

    y = f'(x_0)(x - x_0) + f(x_0)    (2.0.2)

as the equation of the tangent line of the function f at x_0. Solving (2.0.2) for its root gives x_1, a hopefully better approximation for the root of f:

    0 = f'(x_0)(x_1 - x_0) + f(x_0)
    =>  f'(x_0) x_1 = f'(x_0) x_0 - f(x_0)
    =>  x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}

Extending this to successive values allows a sequence of approximations to the root of f(x) to be found, where x_{n+1} is found from x_n as:

    x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

The algorithm should terminate when successive approximating values come within a defined tolerance of one another. We should examine whether or not

    \lim_{n \to \infty} x_n = r

for r the root of f.

Coding Exercise: Use Newton's method to find the root of f(x) = sin(x) between 2 and 4. Note this will approximate \pi. A sketch is given below.
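
One possible Python solution to the exercise; the function name newton is our own:

    import math

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        # Newton iterations x_{n+1} = x_n - f(x_n)/f'(x_n), stopping when
        # successive iterates agree to within tol.
        x = x0
        for _ in range(max_iter):
            x_new = x - f(x) / fprime(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # The coding exercise: the root of sin(x) between 2 and 4 is pi.
    print(newton(math.sin, math.cos, 3.0))   # 3.141592653589793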


Chapter 3

Numerical Integration

    On Monday in math class Mrs. Fibonacci says, "You know, you can think of almost everything as a math problem." On Tuesday I start having problems.
    - Jon Scieszka and Lane Smith, MATH CURSE

In Calculus one of the fundamental topics discussed is integration. The indefinite integral of a function is itself a function (really a class of functions, one for each constant of integration). The definite integral of a function over a fixed interval is a number.

Example: Consider the function f(x) = x^3.

Indefinite integral:

    F(x) = \int x^3 \, dx = \frac{x^4}{4} + C

Definite integral:

    \int_0^3 x^3 \, dx = \left. \frac{x^4}{4} \right|_0^3 = \frac{81}{4}

Example: Consider finding the indefinite integral of f(x) = e^{x^2}. That is,

    \int e^{x^2} \, dx = ?

Using u-substitution doesn't work, and computer algebra systems like sage give answers such as

    \int e^{x^2} \, dx = -\frac{1}{2} i \sqrt{\pi} \, \mathrm{erf}(ix)

as no elementary function of x has a derivative that is simply e^{x^2}. The definite integral

    \int_a^b f(x) \, dx

represents the area under the curve f(x) between a and b. There should be a way to get a handle on this value for f(x) = e^{x^2}. Consider the interval of interest to be between 0 and 1. Then,

    \int_0^1 e^{x^2} \, dx = Area

How do we find the area when we don't know the function F needed in the Fundamental Theorem of Calculus?

Theorem 3.0.1 (Fundamental Theorem of Calculus) If f is continuous on the interval [a, b] and F is an antiderivative of f, then

    \int_a^b f(x) \, dx = F(b) - F(a)

3.1 Trapezoid Rule

Consider dividing the domain of interest [a, b] into sections such that:

    a = x_0 <= x_1 <= x_2 <= ... <= x_n = b

Then the area under the curve f on each of the sub-intervals [x_i, x_{i+1}] is approximated using a trapezoid with a base of x_{i+1} - x_i and average height of

    \frac{1}{2} \left( f(x_i) + f(x_{i+1}) \right)

Thus,

    \int_{x_i}^{x_{i+1}} f(x) \, dx \approx \frac{1}{2} (x_{i+1} - x_i) \left( f(x_i) + f(x_{i+1}) \right)

and the full definite integral is approximated as:

    \int_a^b f(x) \, dx \approx \frac{1}{2} \sum_{i=0}^{n-1} (x_{i+1} - x_i) \left( f(x_i) + f(x_{i+1}) \right)

Note if a uniform spacing of the sub-intervals of size h is used, the above estimate of the definite integral simplifies to:

    \int_a^b f(x) \, dx \approx \frac{h}{2} \sum_{i=0}^{n-1} \left( f(x_i) + f(x_{i+1}) \right)

and several computations may be saved if the definite integral is written as:

    \int_a^b f(x) \, dx \approx \frac{h}{2} \left( f(x_0) + f(x_n) \right) + h \sum_{i=1}^{n-1} f(x_i)

Computational Exercise: Using the trapezoid rule and a uniformly spaced set of points a distance h apart, estimate the following definite integral:

    \int_0^1 \frac{\sin(x)}{x} \, dx

Assuming that the true solution to the definite integral is 0.946083070367, compute an estimate for the convergence rate of the trapezoid rule with respect to refinement of the mesh spacing h. A sketch is given below.
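
One possible Python sketch for the exercise; note sin(x)/x has a removable singularity at x = 0, which we patch with the limiting value 1:

    import math

    def trapezoid(f, a, b, n):
        # Composite trapezoid rule with n uniform subintervals of width h.
        h = (b - a) / n
        s = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            s += f(a + i * h)
        return h * s

    f = lambda x: math.sin(x) / x if x != 0.0 else 1.0

    true = 0.946083070367
    for n in (4, 8, 16, 32):
        err = abs(trapezoid(f, 0.0, 1.0, n) - true)
        print(n, err)   # errors drop by ~4x per halving of h, i.e. O(h^2)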


Chapter 4

Appendices


Bibliography

[1] R. Burden and J. Faires. Numerical Analysis. Brooks/Cole, Boston, ninth edition, 2011.

[2] W. Cheney and D. Kincaid. Numerical Mathematics and Computing. Brooks/Cole, Boston, seventh edition, 2012.

[3] M.T. Heath. Scientific Computing: An Introductory Survey. McGraw-Hill, New York, second edition, 2002.
