
Numerical Mathematical Analysis
Catalin Trenchea
Department of Mathematics, University of Pittsburgh
September 20, 2010
Math 1070

Outline
1 Introduction, Matlab notes
2 Chapter 1: Taylor polynomials
3 Chapter 2: Error and Computer Arithmetic
4 Chapter 4: 4.1-4.3 Interpolation
5 Chapter 5: Numerical integration and differentiation
6 Chapter 3: Rootfinding
7 Chapter 4: Approximation of functions

> Introduction Numerical Analysis Numerical Analysis: This refers to the analysis of mathematical problems by numerical means, especially mathematical problems arising from models based on calculus. Effective numerical analysis requires several things: An understanding of the computational tool being used, be it a calculator or a computer. An understanding of the problem to be solved. Construction of an algorithm which will solve the given mathematical problem to a given desired accuracy and within the limits of the resources (time, memory, etc) that are available. 1. Introduction, Matlab notes Math 1070

> Introduction
This is a complex undertaking. Numerous people make this their life's work, usually working on only a limited variety of mathematical problems. Within this course, we attempt to show the spirit of the subject. Most of our time will be taken up with looking at algorithms for solving basic problems such as rootfinding and numerical integration; but we will also look at the structure of computers and the implications of using them in numerical calculations. We begin by looking at the relationship of numerical analysis to the larger world of science and engineering.

> Introduction > Science Traditionally, engineering and science had a two-sided approach to understanding a subject: the theoretical and the experimental. More recently, a third approach has become equally important: the computational. Traditionally we would build an understanding by building theoretical mathematical models, and we would solve these for special cases. For example, we would study the flow of an incompressible irrotational fluid past a sphere, obtaining some idea of the nature of fluid flow. But more practical situations could seldom be handled by direct means, because the needed equations were too difficult to solve. Thus we also used the experimental approach to obtain better information about the flow of practical fluids. The theory would suggest ideas to be tried in the laboratory, and the experimental results would often suggest directions for a further development of theory. 1. Introduction, Matlab notes Math 1070

> Introduction > Science With the rapid advance in powerful computers, we now can augment the study of fluid flow by directly solving the theoretical models of fluid flow as applied to more practical situations; and this area is often referred to as computational fluid dynamics. At the heart of computational science is numerical analysis; and to effectively carry out a computational science approach to studying a physical problem, we must understand the numerical analysis being used, especially if improvements are to be made to the computational techniques being used. 1. Introduction, Matlab notes Math 1070

> Introduction > Science Mathematical Models
A mathematical model is a mathematical description of a physical situation. By means of studying the model, we hope to understand more about the physical situation. Such a model might be very simple. For example,
A = 4*pi*R_e^2,   R_e ≈ 6371 km
is a formula for the surface area of the earth. How accurate is it?
1 First, it assumes the earth is a sphere, which is only an approximation. At the equator, the radius is approximately 6378 km; and at the poles, the radius is approximately 6357 km.
2 Next, there is experimental error in determining the radius; and in addition, the earth is not perfectly smooth. Therefore, there are limits on the accuracy of this model for the surface area of the earth.

> Introduction > Science An infectious disease model
For rubella measles, we have the following model for the spread of the infection in a population (subject to certain assumptions):
ds/dt = -a*s*i
di/dt = a*s*i - b*i
dr/dt = b*i
In this, s, i, and r refer, respectively, to the proportions of a total population that are susceptible, infectious, and removed (from the susceptible and infectious pool of people). All variables are functions of time t. The constants can be taken as a = 6.8/11, b = 1/11.
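As a side illustration (not part of the original slides), the system above can be integrated numerically in MATLAB with ode45; the initial proportions [s; i; r] = [0.95; 0.05; 0] and the time interval are hypothetical choices made only for this sketch.
a = 6.8/11;  b = 1/11;                           % constants as above
rhs = @(t, y) [ -a*y(1)*y(2);                    % ds/dt = -a*s*i
                 a*y(1)*y(2) - b*y(2);           % di/dt =  a*s*i - b*i
                 b*y(2) ];                       % dr/dt =  b*i
[t, y] = ode45(rhs, [0 100], [0.95; 0.05; 0]);   % assumed initial state [s; i; r]
plot(t, y(:,1), t, y(:,2), t, y(:,3))
legend('susceptible', 'infectious', 'removed')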

> Introduction > Science Mathematical Models The same model works for some other diseases (e.g. flu), with a suitable change of the constants a and b. Again, this is an approximation of reality (and a useful one). But it has its limits. Solving a bad model will not give good results, no matter how accurately it is solved; and the person solving this model and using the results must know enough about the formation of the model to be able to correctly interpret the numerical results. 1. Introduction, Matlab notes Math 1070

> Introduction > Science The logistic equation
This is the simplest model for population growth. Let N(t) denote the number of individuals in a population (rabbits, people, bacteria, etc.). Then we model its growth by
N'(t) = c*N(t),   t ≥ 0,   N(t_0) = N_0
The constant c is the growth constant, and it usually must be determined empirically. Over short periods of time, this is often an accurate model for population growth. For example, it accurately models the growth of the US population over the period 1790 to 1860, with c = 0.02975.

> Introduction > Science The predator-prey model
Let F(t) denote the number of foxes at time t, and let R(t) denote the number of rabbits at time t. A simple model for these populations is called the Lotka-Volterra predator-prey model:
dR/dt = a*[1 - b*F(t)]*R(t)
dF/dt = c*[-1 + d*R(t)]*F(t)
with a, b, c, d positive constants. If one looks carefully at this, then one can see how it is built from the logistic equation. In some cases, this is a very useful model and agrees with physical experiments. Of course, we can substitute other interpretations, replacing foxes and rabbits with other predator and prey. The model will fail, however, when there are other populations that affect the first two populations in a significant way.


> Introduction > Science Newton's second law
Newton's second law states that the force acting on an object is directly proportional to the product of its mass and acceleration,
F ∝ m*a
With a suitable choice of physical units, we usually write this in its scalar form as F = m*a.
Newton's law of gravitation for a two-body situation, say the earth and an object moving about the earth, is then
m * d^2 r(t)/dt^2 = - [G*m*m_e / |r(t)|^2] * r(t)/|r(t)|
with r(t) the vector from the center of the earth to the center of the object moving about the earth. The constant G is the gravitational constant, not dependent on the earth; and m and m_e are the masses, respectively, of the object and the earth.

> Introduction > Science
This is an accurate model for many purposes. But what are some physical situations under which it will fail?
When the object is very close to the surface of the earth and does not move far from one spot, we take |r(t)| to be the radius of the earth. We obtain the new model
m * d^2 r(t)/dt^2 = -m*g*k
with k the unit vector directly upward from the earth's surface at the location of the object, and the gravitational constant g ≈ 9.8 meters/second^2.
Again this is a model; it is not physical reality.

> Matlab notes
Matlab is designed for numerical computing. It is strongly oriented towards the use of arrays, one and two dimensional. It has excellent graphics that are easy to use, and powerful interactive facilities; programs can also be written in it. It is a procedural language, not an object-oriented language. It has facilities for working with both Fortran and C language programs.

> Matlab notes At the prompt in Unix or Linux, type Matlab. Or click the Red Hat, then DIVMS, then Mathematics, then Matlab. Run the demo program (simply type demo). Then select one of the many available demos. To seek help on any command, simply type help command or use the online Help command. To seek information on Matlab commands that involve a given word in their description, type lookfor word Look at the various online manuals available thru the help page. 1. Introduction, Matlab notes Math 1070

> Matlab notes
MATLAB is an interactive computer language. For example, to evaluate
y = 6 - 4x + 7x^2 - 3x^5 + 3/(x + 2)
use
y = 6 - 4*x + 7*x^2 - 3*x^5 + 3/(x + 2);
There are many built-in functions, e.g. exp(x), cos(x), sqrt(x), log(x). The default arithmetic used in MATLAB is double precision (16 decimal digits and magnitude range 10^-308 to 10^+308) and real. However, complex arithmetic appears automatically when needed: sqrt(-4) results in an answer of 2i.

> Matlab notes The default output to the screen is to have 4 digits to the right of the decimal point. To control the formatting of output to the screen, use the command format. The default formatting is obtained using format short To obtain the full accuracy available in a number, you can use format long The commands format short e format long e will use scientific notation for the output. Other format options are also available. 1. Introduction, Matlab notes Math 1070

> Matlab notes
SEE plot_trig.m
MATLAB works very efficiently with arrays, and many tasks are best done with arrays. For example, to plot sin x and cos x on the interval 0 ≤ x ≤ 10:
t = 0 : .1 : 10;
x = cos(t);
y = sin(t);
plot(t, x, t, y, 'LineWidth', 4)
[Figure: plot of cos(t) and sin(t) for 0 ≤ t ≤ 10]

> Matlab notes
The statement
t = a : h : b;
with h > 0 creates a row vector of the form t = [a, a+h, a+2h, ...], giving all values a + jh that do not exceed b. When h is omitted, it is assumed to be 1. Thus n = 1 : 5 creates the row vector n = [1, 2, 3, 4, 5].

> Matlab notes Arrays
b = [1, 2, 3] creates a row vector of length 3.
A = [1 2 3; 4 5 6; 7 8 9] creates the square matrix
A =
  1 2 3
  4 5 6
  7 8 9
Spaces or commas can be used as delimiters in giving the components of an array, and a semicolon separates the various rows of a matrix. For a column vector,
b = [1; 3; 6]
results in the column vector with entries 1, 3, 6.

> Matlab notes Array Operations
Addition: componentwise addition.
A = [1, 2; -3, 2; 6, -1];
B = [2, 3; 3, -2; -2, 2];
C = A + B;
results in the answer
C =
   3   5
   0   0
   4   1
Multiplication by a constant: multiply the constant times each component of the array.
D = 2*A;
results in the answer
D =
   2   4
  -6   4
  12  -2

> Matlab notes Array Operations
Matrix multiplication: this has the standard meaning.
E = [1, -2; 2, -1; 3, -2];
F = [2, -1, 3; -1, 2, 3];
G = E*F;
results in the answer
G =
   4  -5  -3
   5  -4   3
   8  -7   3
A nonstandard notation:
H = 3 + F;
results in the computation
H = [3 3 3; 3 3 3] + [2 -1 3; -1 2 3] = [5 2 6; 2 5 6]

> Matlab notes Componentwise operations
Matlab also has componentwise operations for multiplication, division and exponentiation. These three operations are denoted by using a period to precede the usual symbol for the operation. With
a = [1 2 3];  b = [2, 1, 4];
we have
a.*b = [2 2 12]
a./b = [0.5 2.0 0.75]
a.^3 = [1 8 27]
2.^a = [2 4 8]
b.^a = [2 1 64]
The expression
y = 6 - 4x + 7x^2 - 3x^5 + 3/(x + 2)
can be evaluated at all of the elements of an array x using the command
y = 6 - 4*x + 7*x.*x - 3*x.^5 + 3./(x + 2);
The output y is then an array of the same size as x.

> Matlab notes Special arrays
A = zeros(2, 3) produces an array with 2 rows and 3 columns, with all components set to zero:
  0 0 0
  0 0 0
B = ones(2, 3) produces an array with 2 rows and 3 columns, with all components set to 1:
  1 1 1
  1 1 1
eye(3) results in the 3 x 3 identity matrix:
  1 0 0
  0 1 0
  0 0 1

> Matlab notes Array functions
There are many MATLAB commands that operate on arrays; we include only a very few here. For a vector x, row or column, of length n, we have the following functions.
max(x) = maximum component of x
min(x) = minimum component of x
abs(x) = vector of absolute values of the components of x
sum(x) = sum of the components of x
norm(x) = sqrt(x_1^2 + ... + x_n^2)

> Matlab notes Script files
A list of interactive commands can be stored as a script file. For example, store
t = 0 : .1 : 10;
x = cos(t);
y = sin(t);
plot(t, x, t, y)
with the file name plot_trig.m. Then to run the program, give the command plot_trig. The variables used in the script file will be stored locally, and parameters given locally are available for use by the script file.

> Matlab notes Functions
To create a function, we proceed similarly, but now there are input and output parameters. Consider a function for evaluating the polynomial
p(x) = a_1 + a_2*x + a_3*x^2 + ... + a_n*x^(n-1)
(MATLAB does not allow zero subscripts for arrays.) The following function would be stored under the name polyeval.m. The coefficients {a_j} are given to the function in the array named coeff, and the polynomial is to be evaluated at all of the components of the array x.

> Matlab notes Functions
function value = polyeval(x,coeff);
%
% function value = polyeval(x,coeff)
%
% Evaluate a polynomial at the points given in x.
% The coefficients are to be given in coeff.
% The constant term in the polynomial is coeff(1).
n = length(coeff)
value = coeff(n)*ones(size(x));
for i = n-1:-1:1
    value = coeff(i) + x.*value;
end
For example,
>> polyeval(3,[1,2])
yields n = 2 and ans = 7.

> 1. Taylor polynomials
Taylor polynomials
1 The Taylor polynomial
2 Error in Taylor's polynomial
3 Polynomial evaluation

> 1. Taylor polynomials > 1.1 The Taylor polynomial
Let f(x) be a given function, for example e^x, sin x, log(x). The Taylor polynomial mimics the behavior of f(x) near x = a:
T(x) ≈ f(x),   for all x close to a.
Example
Find a linear polynomial p_1(x) for which
p_1(a) = f(a),   p_1'(a) = f'(a).
p_1 is uniquely given by
p_1(x) = f(a) + (x - a)f'(a).
The graph of y = p_1(x) is tangent to that of y = f(x) at x = a.

> 1. Taylor polynomials > 1.1 The Taylor polynomial
Example
Let f(x) = e^x, a = 0. Then p_1(x) = 1 + x.
[Figure: Linear Taylor approximation of e^x, showing y = e^x and y = p_1(x)]

> 1. Taylor polynomials > 1.1 The Taylor polynomial
Example
Find a quadratic polynomial p_2(x) to approximate f(x) near x = a. Since p_2(x) = b_0 + b_1*x + b_2*x^2, we impose three conditions on p_2(x) to determine the coefficients. To better mimic f(x) at x = a we require
p_2(a) = f(a)
p_2'(a) = f'(a)
p_2''(a) = f''(a)
p_2 is uniquely given by
p_2(x) = f(a) + (x - a)f'(a) + (1/2)(x - a)^2 f''(a).

> 1. Taylor polynomials > 1.1 The Taylor polynomial
Example
Let f(x) = e^x, a = 0. Then p_2(x) = 1 + x + (1/2)x^2.
[Figure: Linear and quadratic Taylor approximations of e^x, showing y = e^x, y = p_1(x), y = p_2(x) (see eval_exp_simple.m)]

> 1. Taylor polynomials > 1.1 The Taylor polynomial
Let p_n(x) be a polynomial of degree n that mimics the behavior of f(x) at x = a. We require
p_n^(j)(a) = f^(j)(a),   j = 0, 1, ..., n
where f^(j) is the j-th derivative of f(x). The Taylor polynomial of degree n for the function f(x) at the point a is
p_n(x) = f(a) + (x - a)f'(a) + (x - a)^2/2! f''(a) + ... + (x - a)^n/n! f^(n)(a)
       = sum_{j=0}^{n} (x - a)^j/j! f^(j)(a).     (3.1)
Recall the notations: f^(0)(a) = f(a), and the factorial
j! = 1 for j = 0,   j! = j*(j-1)*...*2*1 for j = 1, 2, 3, 4, ...

> 1. Taylor polynomials > 1.1 The Taylor polynomial
Example
Let f(x) = e^x, a = 0. Since f^(j)(x) = e^x and f^(j)(0) = 1 for all j ≥ 0, then
p_n(x) = 1 + x + (1/2!)x^2 + ... + (1/n!)x^n = sum_{j=0}^{n} x^j/j!     (3.2)
For a fixed x, the accuracy improves as the degree n increases. For a fixed degree n, the accuracy improves as x gets close to a = 0.
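A small MATLAB sketch (not from the slides) of how (3.2) can be evaluated by accumulating the terms x^j/j!, for comparison with exp(x); the sample points and degree are arbitrary choices.
x = -1 : 0.5 : 1;              % sample points
n = 3;                         % degree of the Taylor polynomial
p = ones(size(x));             % j = 0 term
term = ones(size(x));
for j = 1:n
    term = term .* x / j;      % update x^j/j! from x^(j-1)/(j-1)!
    p = p + term;
end
disp([x' p' exp(x)'])          % compare p_n(x) with the true value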

> 1. Taylor polynomials > 1.1 The Taylor polynomial
Table. Taylor approximations to e^x
   x      p_1(x)   p_2(x)   p_3(x)    e^x
 -1.0      0.0      0.500   0.33333   0.36788
 -0.5      0.5      0.625   0.60417   0.60653
 -0.1      0.9      0.905   0.90483   0.90484
  0        1.0      1.000   1.00000   1.00000
  0.1      1.1      1.105   1.10517   1.10517
  0.5      1.5      1.625   1.64583   1.64872
  1.0      2.0      2.500   2.66667   2.71828

> 1. Taylor polynomials > 1.1 The Taylor polynomial
Example
Let f(x) = e^x, with a arbitrary, not necessarily 0. Since f^(j)(x) = e^x and f^(j)(a) = e^a for all j ≥ 0, then
p_n(x; a) = e^a * (1 + (x - a) + (1/2!)(x - a)^2 + ... + (1/n!)(x - a)^n)
          = e^a * sum_{j=0}^{n} (x - a)^j/j!

> 1. Taylor polynomials > 1.1 The Taylor polynomial
Example
Let f(x) = ln(x), a = 1. Since f(1) = ln(1) = 0, and
f^(j)(x) = (-1)^(j-1) * (j-1)!/x^j,   f^(j)(1) = (-1)^(j-1) * (j-1)!
then the Taylor polynomial is
p_n(x) = (x - 1) - (1/2)(x - 1)^2 + (1/3)(x - 1)^3 - ... + (-1)^(n-1) * (1/n)(x - 1)^n
       = sum_{j=1}^{n} (-1)^(j-1)/j * (x - 1)^j

> 1. Taylor polynomials > 1.1 The Taylor polynomial
[Figure: Taylor approximations of ln(x) about x = 1, showing y = ln(x), y = p_1(x), y = p_2(x), y = p_3(x) (see plot_log.m)]

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Theorem (Lagrange's form)
Assume f ∈ C^(n+1)[α, β] and x, a ∈ [α, β]. The remainder R_n(x) = f(x) - p_n(x), i.e. the error in approximating f(x) by p_n(x) = sum_{j=0}^{n} (x - a)^j/j! f^(j)(a), satisfies
R_n(x) = (x - a)^(n+1)/(n+1)! * f^(n+1)(c_x),   α ≤ x ≤ β     (3.3)
where c_x is an unknown point between a and x.
[Exercise.] Derive the formal Taylor series for f(x) = ln(1 + x) at a = 0, and determine the range of positive x for which the series represents the function.
Hint: f^(k)(x) = (-1)^(k-1) * (k-1)!/(1 + x)^k and f^(k)(0) = (-1)^(k-1) * (k-1)!, so
ln(1 + x) = sum_{k=1}^{n} (-1)^(k-1) x^k/k + (-1)^n/(n+1) * x^(n+1)/(1 + c_x)^(n+1),
and 0 ≤ x/(1 + c_x) ≤ x ≤ 1 if x ∈ [0, 1].

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Example
Let f(x) = e^x and a = 0. Recall that p_n(x) = sum_{j=0}^{n} x^j/j! = 1 + x + (1/2!)x^2 + ... + (1/n!)x^n.
The approximation error is
e^x - p_n(x) = x^(n+1)/(n+1)! * e^c,   n ≥ 0     (3.4)
with c between 0 and x. [Exercise.]
It can be proved that for each fixed x, lim_{n→∞} R_n(x) = 0, i.e.,
e^x = lim_{n→∞} sum_{k=0}^{n} x^k/k! = sum_{k=0}^{∞} x^k/k!.
(See the case |x| ≤ 1.) For each fixed n, |R_n| becomes larger as x moves away from 0.

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
[Figure: Errors y = R_1(x), R_2(x), R_3(x), R_4(x) in the Taylor polynomial approximations to e^x (see plot_exp_simple.m)]

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Example
Let x = 1 in (3.4), so from (3.2):
e ≈ p_n(1) = 1 + 1 + 1/2! + 1/3! + ... + 1/n!
and from (3.4)
R_n(1) = e - p_n(1) = e^c/(n+1)!,   0 < c < 1.
Since e^0 ≤ e^c ≤ e^1 and e < 3, we have
1/(n+1)! ≤ R_n(1) ≤ e/(n+1)! < 3/(n+1)!

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Suppose we want to approximate e by p_n(1) with R_n(1) ≤ 10^-9. We have to take n ≥ 12 to guarantee 3/(n+1)! ≤ 10^-9, i.e., to have p_12(1) as a sufficiently good approximation to e.
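A quick check of this bound in MATLAB (a sketch, not part of the text):
n = 12;
p = sum(1 ./ factorial(0:n));    % p_n(1) = sum_{j=0}^{n} 1/j!
err = exp(1) - p;                % actual error
bound = 3 / factorial(n+1);      % error bound from the slide
fprintf('error = %.3e, bound = %.3e\n', err, bound)
Both numbers come out below 10^-9.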

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Exercise
Expand sqrt(1 + h) in powers of h, then compute sqrt(1.00001).
If f(x) = x^(1/2), then f'(x) = (1/2)x^(-1/2), f''(x) = -(1/4)x^(-3/2), f'''(x) = (3/8)x^(-5/2), ..., so
sqrt(1 + h) = 1 + (1/2)h - (1/8)h^2 + (1/16)h^3 * ε^(-5/2),   1 < ε < 1 + h, if h > 0.
Let h = 10^-5. Then
sqrt(1.00001) ≈ 1 + 0.5*10^-5 - 0.125*10^-10 = 1.00000 49999 87500
Since 1 < ε < 1 + h, the absolute error does not exceed
(1/16)h^3 * ε^(-5/2) < (1/16)*10^-15 = 0.00000 00000 00000 0625
and the numerical value is correct to all 15 decimal places shown.

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Approximations and remainder formulae
e^x = 1 + x + x^2/2! + ... + x^n/n! + x^(n+1)/(n+1)! * e^c     (3.5)
sin(x) = x - x^3/3! + x^5/5! - ... + (-1)^(n-1) x^(2n-1)/(2n-1)! + (-1)^n x^(2n+1)/(2n+1)! * cos(c)     (3.6)
cos(x) = 1 - x^2/2! + x^4/4! - ... + (-1)^n x^(2n)/(2n)! + (-1)^(n+1) x^(2n+2)/(2n+2)! * cos(c)     (3.7)
1/(1 - x) = 1 + x + x^2 + ... + x^n + x^(n+1)/(1 - x),   x ≠ 1     (3.8)
(1 + x)^α = 1 + C(α,1)x + C(α,2)x^2 + ... + C(α,n)x^n + C(α,n+1) x^(n+1) (1 + c)^(α-n-1),   α ∈ R     (3.9)
Recall that c is between 0 and x, and the binomial coefficients are
C(α,k) = α(α - 1)...(α - k + 1)/k!

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Example
Approximate cos(x) for |x| ≤ π/4 with an error no greater than 10^-5. Since |cos(c)| ≤ 1, we must have
|R_{2n+1}(x)| ≤ |x|^(2n+2)/(2n+2)! ≤ (π/4)^(2n+2)/(2n+2)! ≤ 10^-5
which is satisfied when n ≥ 3. Hence
cos(x) ≈ 1 - x^2/2! + x^4/4! - x^6/6!

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Indirect construction of a Taylor polynomial approximation
Remark
From (3.5), by replacing x with -t^2, we obtain
e^(-t^2) = 1 - t^2 + t^4/2! - t^6/3! + ... + (-1)^n t^(2n)/n! + (-1)^(n+1) t^(2n+2)/(n+1)! * e^c,   -t^2 ≤ c ≤ 0.     (3.10)
If we attempt to construct the Taylor approximation directly, the derivatives of e^(-t^2) quickly become too complicated.

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Indirect construction of a Taylor polynomial approximation
From (3.8) we can easily get
int_0^t dx/(1 - x) = -ln(1 - t)
  = int_0^t (1 + x + x^2 + ... + x^n) dx + int_0^t x^(n+1)/(1 - x) dx
so that
-ln(1 - t) = t + (1/2)t^2 + ... + (1/(n+1))t^(n+1) + 1/(1 - c_t) * t^(n+2)/(n+2)     (3.11)
where c_t is a number between 0 and t, for -1 ≤ t < 1. The last integral was evaluated with the following result.
Theorem (Integral Mean Value Theorem)
Let w(x) be a nonnegative integrable function on (a, b), and f ∈ C[a, b]. Then there is at least one point c ∈ [a, b] for which
int_a^b f(x)w(x) dx = f(c) int_a^b w(x) dx     (3.12)

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Infinite Series
By rearranging the terms in (3.8) we obtain the sum of a finite geometric series
1 + x + x^2 + ... + x^n = (1 - x^(n+1))/(1 - x),   x ≠ 1.     (3.13)
For |x| < 1, letting n → ∞ we obtain the infinite geometric series
1/(1 - x) = 1 + x + x^2 + x^3 + ... = sum_{j=0}^{∞} x^j,   |x| < 1.     (3.14)

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Infinite Series
Definition
The infinite series sum_{j=0}^{∞} c_j is convergent if the partial sums
S_n = sum_{j=0}^{n} c_j,   n ≥ 0
form a convergent sequence, i.e.,
S = lim_{n→∞} S_n
exists, and we then write
S = sum_{j=0}^{∞} c_j

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Infinite Series
For the infinite series (3.14) with x ≠ 1, the partial sums are given by (3.13):
S_n = (1 - x^(n+1))/(1 - x)
For |x| < 1, S_n → 1/(1 - x) as n → ∞.
S_n is divergent when |x| > 1.
What happens when |x| = 1?

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Infinite Series
Definition
Assume that f(x) has derivatives of any order at x = a. The infinite series
sum_{j=0}^{∞} (x - a)^j/j! f^(j)(a)
is called the Taylor series expansion of the function f(x) about the point x = a. The partial sum
sum_{j=0}^{n} (x - a)^j/j! f^(j)(a)
is simply the Taylor polynomial p_n(x).

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Infinite Series
If the sequence {p_n(x)} has the limit f(x), i.e., the error tends to zero as n → ∞,
lim_{n→∞} (f(x) - p_n(x)) = 0,
then we can write
f(x) = sum_{j=0}^{∞} (x - a)^j/j! f^(j)(a)

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Infinite series
Actually, it can be shown that the error terms in (3.5)-(3.9) and (3.11) tend to 0 as n → ∞ for suitable values of x. Hence we have the Taylor expansions
e^x = sum_{j=0}^{∞} x^j/j!,   -∞ < x < ∞
sin x = sum_{j=0}^{∞} (-1)^j x^(2j+1)/(2j+1)!,   -∞ < x < ∞
cos x = sum_{j=0}^{∞} (-1)^j x^(2j)/(2j)!,   -∞ < x < ∞     (3.15)
(1 + x)^α = sum_{j=0}^{∞} C(α,j) x^j,   -1 < x < 1
ln(1 - t) = -sum_{j=1}^{∞} t^j/j,   -1 ≤ t < 1

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Infinite series
Definition
Infinite series of the form
sum_{j=0}^{∞} a_j (x - a)^j     (3.16)
are called power series. They can arise from Taylor's formulae or in some other way. Their convergence can be examined directly.
Theorem (Comparison criterion)
Assume the series (3.16) converges for some value x_0. Then the series (3.16) converges for all x satisfying |x - a| < |x_0 - a|.
Theorem (Quotient criterion)
For the series (3.16), assume that the limit R = lim_{n→∞} |a_{n+1}/a_n| exists. Then for x satisfying |x - a| < 1/R, the series (3.16) converges to a limit S(x). When R = 0, the series (3.16) converges for any x ∈ R.

> 1. Taylor polynomials > 1.2 The error in Taylor's polynomial
Infinite series
Example
Consider the cosine series in (3.15). Letting t = x^2, we obtain the series
sum_{j=0}^{∞} (-1)^j t^j/(2j)!     (3.17)
Applying the quotient criterion with a_j = (-1)^j/(2j)!, we find R = 0, so the series (3.17) converges for any value of t; hence the series in the formula (3.15) converges for any value of x.

> 1. Taylor polynomials > 1.3 Polynomial evaluation
Consider the evaluation of the polynomial
p(x) = 3 - 4x - 5x^2 - 6x^3 + 7x^4 - 8x^5
1 Simplest method: compute each term c*x^k independently, as c*x*x*...*x, yielding 1 + 2 + 3 + 4 + 5 = 15 multiplications.
2 A more efficient method: compute each power of x using the preceding one:
x^3 = x*(x^2),   x^4 = x*(x^3),   x^5 = x*(x^4)     (3.18)
Since each term takes two multiplications for k > 1, the result is 1 + 2 + 2 + 2 + 2 = 9 multiplications.
3 Nested multiplication:
p(x) = 3 + x*(-4 + x*(-5 + x*(-6 + x*(7 - 8x))))
with only 5 multiplications.

> 1. Taylor polynomials > 1.3 Polynomial evaluation
Consider now the general polynomial of degree n:
p(x) = a_0 + a_1*x + ... + a_n*x^n,   a_n ≠ 0
If we use the 2nd method, with the powers of x computed as in (3.18), the number of multiplications in evaluating p(x) is 2n - 1. For the nested multiplication, we write and evaluate p(x) in the form
p(x) = a_0 + x*(a_1 + x*(a_2 + ... + x*(a_{n-1} + a_n*x)...))     (3.19)
using only n multiplications, saving about 50% over the 2nd method. All methods use n additions.

> 1. Taylor polynomials > 1.3 Polynomial evaluation
Example
Evaluate the Taylor polynomial p_5(x) for ln(x) about a = 1. A general formula is (3.11), with t replaced by 1 - x. Writing w = x - 1,
p_5(x) = (x - 1) - (1/2)(x - 1)^2 + (1/3)(x - 1)^3 - (1/4)(x - 1)^4 + (1/5)(x - 1)^5
       = w*(1 + w*(-1/2 + w*(1/3 + w*(-1/4 + (1/5)*w))))

> 1. Taylor polynomials > 1.3 Polynomial evaluation
A more formal algorithm than (3.19)
Suppose we want to evaluate p(x) at the number z. Define the sequence of coefficients b_i as:
b_n = a_n     (3.20)
b_{n-1} = a_{n-1} + z*b_n     (3.21)
b_{n-2} = a_{n-2} + z*b_{n-1}     (3.22)
...     (3.23)
b_0 = a_0 + z*b_1     (3.24)
Now the nested multiplication is Horner's method:
p(z) = a_0 + z*(a_1 + z*(a_2 + ... + z*(a_{n-1} + a_n*z)...))
where the innermost parenthesis evaluates to b_{n-1}, the next one to b_{n-2}, and so on, until the whole expression evaluates to b_0.

> 1. Taylor polynomials > 1.3 Polynomial evaluation
Hence p(z) = b_0. With the coefficients from (3.20)-(3.24), define the polynomial
q(x) = b_1 + b_2*x + b_3*x^2 + ... + b_n*x^(n-1)
It can be shown that
p(x) = b_0 + (x - z)*q(x),
i.e., q(x) is the quotient from dividing p(x) by x - z, and b_0 is the remainder.
Remark
We will use this property later in a rootfinding method for polynomials, to reduce the degree of a polynomial when a root z has been found, since then b_0 = 0 and p(x) = (x - z)*q(x). A sketch of the recurrence is given below.
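A minimal MATLAB sketch of the recurrence (3.20)-(3.24), written in the style of polyeval.m; the file name horner_deflate.m is hypothetical and not part of the text. It returns p(z) = b_0 together with the coefficients of the quotient q(x).
function [p, q] = horner_deflate(coeff, z)
% coeff = [a_0, a_1, ..., a_n], with the constant term in coeff(1)
n = length(coeff) - 1;
b = zeros(1, n+1);                    % b(k+1) stores b_k
b(n+1) = coeff(n+1);                  % b_n = a_n
for k = n-1 : -1 : 0
    b(k+1) = coeff(k+1) + z*b(k+2);   % b_k = a_k + z*b_{k+1}
end
p = b(1);                             % p(z) = b_0
q = b(2:end);                         % q(x) = b_1 + b_2*x + ... + b_n*x^(n-1)
For example, horner_deflate([-1, 3, -3, 1], 1) applied to p(x) = (x - 1)^3 returns p = 0 and q = [1, -2, 1], i.e., q(x) = (x - 1)^2.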

> 2. Error and Computer Arithmetic Numerical analysis is concerned with how to solve a problem numerically, i.e., how to develop a sequence of numerical calculations to get a satisfactory answer. Part of this process is the consideration of the errors that arise in these calculations, from the errors in the arithmetic operations or from other sources. 2. Error and Computer Arithmetic Math 1070

> 2. Error and Computer Arithmetic
Computers use binary arithmetic, representing each number as a binary number: a finite sum of integer powers of 2. Some numbers can be represented exactly, but others, such as 1/10, 1/100, 1/1000, ..., cannot. For example,
2.125 = 2^1 + 2^-3
has an exact finite representation in binary (base 2), but
3.1 ≈ 2^1 + 2^0 + 2^-4 + 2^-5 + 2^-8 + ...
does not. And, of course, there are the transcendental numbers like π that have no finite representation in either the decimal or the binary number system.

> 2. Error and Computer Arithmetic
Computers use 2 formats for numbers.
Fixed-point numbers are used to store integers. Typically, each number is stored in a computer word of 32 binary digits (bits) with values of 0 and 1, so at most 2^32 different numbers can be stored. If we allow for negative numbers, we can represent integers in the range -2^31 ≤ x ≤ 2^31 - 1, since there are 2^32 such numbers. Since 2^31 ≈ 2.1 * 10^9, the range for fixed-point numbers is too limited for scientific computing:
we always get an integer answer;
the numbers that we can store are equally spaced;
the range of numbers is very limited.
Therefore they are used mostly for indices and counters.
As an alternative to fixed-point, floating-point numbers approximate real numbers:
the numbers that we can store are NOT equally spaced;
a wide range of variably-spaced numbers can be represented exactly.

> 2. Error and Computer Arithmetic > 2.1 Floating-point numbers
Numbers must be stored and used for arithmetic operations. Storing:
1 integer format
2 floating-point format
Definition (decimal floating-point representation)
Let us consider x ≠ 0 written in the decimal system. Then it can be written uniquely as
x = σ * x̄ * 10^e     (4.1)
where
σ = +1 or -1 is the sign,
e is an integer, the exponent,
1 ≤ x̄ < 10 is the significand or mantissa.
Example
124.62 = (1.2462) * 10^2:
σ = +1, the exponent e = 2, the significand x̄ = 1.2462.

> 2. Error and Computer Arithmetic > 2.1 Floating-point numbers
The decimal floating-point representation of x ∈ R is given in (4.1), with limitations on
1 the number of digits in the mantissa x̄
2 the size of e
Example
Suppose we limit
1 the number of digits in x̄ to 4
2 -99 ≤ e ≤ 99
We say that a computer with such a representation has four-digit decimal floating point arithmetic. This implies that we cannot store accurately more than the first four digits of a number; and even the fourth digit may be changed by rounding.
What is the next smallest number bigger than 1? What is the next smallest number bigger than 100? What are the errors and relative errors? What is the smallest positive number?

> 2. Error and Computer Arithmetic > 2.1 Floating-point numbers
Definition (floating-point representation of a binary number x)
Let us consider x written in binary format. Analogous to (4.1),
x = σ * x̄ * 2^e     (4.2)
where
σ = +1 or -1 is the sign,
e is an integer, the exponent,
x̄ is a binary fraction satisfying (1)_2 ≤ x̄ < (10)_2 (in decimal: 1 ≤ x̄ < 2).
For example, if x = (11011.0111)_2, then σ = +1, e = 4 = (100)_2 and x̄ = (1.10110111)_2.

> 2. Error and Computer Arithmetic > 2.1 Floating-point numbers
The floating-point representation of a binary number x is given by (4.2) with a restriction on
1 the number of digits in x̄: the precision of the binary floating-point representation of x
2 the size of e
The IEEE floating-point arithmetic standard is the format for floating-point numbers used in almost all computers.
The IEEE single precision floating-point representation of x has a precision of 24 binary digits, and the exponent e is limited by -126 ≤ e ≤ 127:
x = σ * (1.a_1 a_2 ... a_23)_2 * 2^e
where, in binary, -(1111110)_2 ≤ e ≤ (1111111)_2.
The IEEE double precision floating-point representation of x has a precision of 53 binary digits, and the exponent e is limited by -1022 ≤ e ≤ 1023:
x = σ * (1.a_1 a_2 ... a_52)_2 * 2^e

> 2. Error and Computer Arithmetic > 2.1 Floating-point numbers
The IEEE single precision floating-point representation
x = σ * (1.a_1 a_2 ... a_23)_2 * 2^e,   -126 ≤ e ≤ 127
is stored in 4 bytes (32 bits): bit b_1 stores the sign σ, bits b_2 ... b_9 store the biased exponent E = e + 127, and bits b_10 ... b_32 store the fraction digits a_1 ... a_23 of x̄.
The IEEE double precision floating-point representation
x = σ * (1.a_1 a_2 ... a_52)_2 * 2^e,   -1022 ≤ e ≤ 1023
is stored in 8 bytes (64 bits): bit b_1 stores the sign σ, bits b_2 ... b_12 store the biased exponent E = e + 1023, and bits b_13 ... b_64 store the fraction digits a_1 ... a_52 of x̄.
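As an aside (a sketch, not from the slides), the stored bits of a double can be inspected in MATLAB through num2hex, which returns the 16 hexadecimal digits of the IEEE representation; the value -124.62 is just an example.
x = -124.62;
h = num2hex(x);                              % 16 hex characters = 64 bits
bits = '';
for k = 1:length(h)
    bits = [bits dec2bin(hex2dec(h(k)), 4)]; % expand each hex digit to 4 bits
end
sigma = bits(1);                             % sign bit b_1
e = bin2dec(bits(2:12)) - 1023;              % exponent recovered from E = e + 1023
frac = bits(13:64);                          % fraction bits a_1 ... a_52
fprintf('sign bit %s, exponent e = %d\n', sigma, e)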

> 2. Error and Computer Arithmetic > 2.1.1 Accuracy of floating-point representation
Machine epsilon
How accurately can a number be stored in the floating-point representation? How can this be measured? (1) Machine epsilon.
Machine epsilon: for any format, the machine epsilon is the difference between 1 and the next larger number that can be stored in that format.
In single precision IEEE, the next larger binary number is
(1.00000000000000000000001)_2 = 1 + 2^-23
(with a_23 = 1; the number 1 + 2^-24 cannot be stored exactly). Then the machine epsilon in single precision IEEE format is
2^-23 ≈ 1.19 * 10^-7
i.e., we can store approximately 7 decimal digits of a number x given in decimal format.

> 2. Error and Computer Arithmetic > 2.1.1 Accuracy of floating-point representation
Similarly, the machine epsilon in double precision IEEE format is
2^-52 ≈ 2.22 * 10^-16
so the IEEE double precision format can be used to store approximately 16 decimal digits of a number x given in decimal format.
In MATLAB, the machine epsilon is available as the constant eps.

> 2. Error and Computer Arithmetic > 2.1.1 Accuracy of floating-point representation
Another way to measure the accuracy of a floating-point format:
(2) look for the largest integer M such that any integer x with 0 ≤ x ≤ M can be stored and represented exactly in floating-point form.
If n is the number of binary digits in the significand x̄, all integers less than or equal to
(1.11...1)_2 * 2^(n-1) = (1 + 1/2 + 1/2^2 + ... + 1/2^(n-1)) * 2^(n-1) = (1 - (1/2)^n)/(1 - 1/2) * 2^(n-1) = 2^n - 1
can be represented exactly, and so can 2^n itself, but not 2^n + 1.
In IEEE single precision format, M = 2^24 = 16777216, and all 7-digit decimal integers will store exactly.
In IEEE double precision format, M = 2^53 ≈ 9.0 * 10^15, and all 15-digit decimal integers and most 16-digit ones will store exactly.
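A sketch of this boundary in MATLAB (not part of the text): every integer up to 2^53 is exact in double precision, but 2^53 + 1 is not, so it rounds back to 2^53.
x = 2^53;
disp(x - 1 == x)     % prints 0 (false): 2^53 - 1 is representable
disp(x + 1 == x)     % prints 1 (true): 2^53 + 1 rounds back to 2^53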

> 2. Error and Computer Arithmetic > 2.1.2 Rounding and chopping
Suppose that a number x has the significand
x̄ = (1.a_1 a_2 ... a_{n-1} a_n a_{n+1} ...)_2
but the floating-point representation may contain only n binary digits. Then x must be shortened when stored.
Definition
We denote the machine floating-point version of x by fl(x). There are two ways to produce it:
1 truncate or chop x to n binary digits, ignoring the remaining digits;
2 round x to n binary digits, based on the size of the part of x following digit n:
(a) if digit n+1 is 0, chop x to n digits;
(b) if digit n+1 is 1, chop x to n digits and add 1 to the last digit of the result.

> 2. Error and Computer Arithmetic > 2.1.2 Rounding and chopping
It can be shown that
fl(x) = x * (1 + ε)     (4.3)
where ε is a small number depending on x.
(a) If chopping is used,
-2^(-n+1) ≤ ε ≤ 0     (4.4)
(b) If rounding is used,
-2^(-n) ≤ ε ≤ 2^(-n)     (4.5)
Characteristics of chopping:
1 the worst possible error is twice as large as when rounding is used;
2 the sign of the error x - fl(x) is the same as the sign of x.
The worst of the two: no possibility of cancellation of errors.
Characteristics of rounding:
1 the worst possible error is only half as large as when chopping is used;
2 more important, the error x - fl(x) is negative for only half the cases, which leads to better error-propagation behavior.

> 2. Error and Computer Arithmetic > 2.1.2 Rounding and chopping
For single precision IEEE floating-point arithmetic (there are n = 24 digits in the significand):
1 chopping ("rounding towards zero"): -2^-23 ≤ ε ≤ 0     (4.6)
2 standard rounding: -2^-24 ≤ ε ≤ 2^-24     (4.7)
For double precision IEEE floating-point arithmetic:
1 chopping: -2^-52 ≤ ε ≤ 0     (4.8)
2 rounding: -2^-53 ≤ ε ≤ 2^-53     (4.9)

> 2. Error and Computer Arithmetic > Consequences for programming with floating-point arithmetic
Numbers that have finite decimal expressions may have infinite binary expansions. For example,
(0.1)_10 = (0.000110011001100110011...)_2
Hence (0.1)_10 cannot be represented exactly in binary floating-point arithmetic. Possible problems:
running into infinite loops;
pay attention to the language used: Fortran and C have both single and double precision, so specify double precision constants correctly;
MATLAB does all computations in double precision.
A small illustration follows.
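This sketch (not from the slides) shows two typical symptoms of the inexact storage of 0.1: a failed exact-equality test and a loop condition that is never met exactly. The step count guard of 20 is an arbitrary safety choice.
disp(0.1 + 0.2 == 0.3)            % prints 0 (false) in double precision
x = 0;  count = 0;
while x ~= 1 && count < 20        % the count guard avoids an infinite loop
    x = x + 0.1;
    count = count + 1;
end
fprintf('stopped after %d steps with x = %.17f\n', count, x)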

> 2. Error and Computer Arithmetic > 2.2 Errors: Definitions, Sources and Examples
Definition
The error in a computed quantity is defined as
Error(x_A) = x_T - x_A
where x_T = true value and x_A = approximate value. This is also called the absolute error.
The relative error Rel(x_A) is a measure of error related to the size of the true value:
Rel(x_A) = error/(true value) = (x_T - x_A)/x_T
For example, for the approximation π ≈ 22/7 we have x_T = π = 3.14159265... and x_A = 22/7 = 3.1428571, so
Error(22/7) = π - 22/7 = -0.00126
Rel(22/7) = (π - 22/7)/π = -0.00042

> 2. Error and Computer Arithmetic > 2.2 Errors: Definitions, Sources and Examples
The notion of relative error is a more intrinsic error measure.
1 The exact distance between two cities is x1_T = 100 km and the measured distance is x1_A = 99 km:
Error(x1_A) = x1_T - x1_A = 1 km
Rel(x1_A) = Error(x1_A)/x1_T = 0.01 = 1%
2 The exact distance between two cities is x2_T = 2 km and the measured distance is x2_A = 1 km:
Error(x2_A) = x2_T - x2_A = 1 km
Rel(x2_A) = Error(x2_A)/x2_T = 0.5 = 50%

> 2. Error and Computer Arithmetic > 2.2 Errors: Definitions, Sources and Examples
Definition (significant digits)
The number of significant digits in x_A is the number of its leading digits that are correct relative to the corresponding digits in the true value x_T.
More precisely, write x_A and x_T in decimal form and compute the error:
x_T       = a_1 a_2 . a_3 ... a_m a_{m+1} a_{m+2} ...
x_T - x_A = 0   0 . 0   ... 0   b_{m+1}  b_{m+2} ...
If the error is at most 5 units in the (m+1)-th digit of x_T, counting rightward from the first nonzero digit, then we say that x_A has, at least, m significant digits of accuracy relative to x_T.
Example
1 x_A = 0.222, x_T = 2/9:
x_T       = 0.222222...
x_T - x_A = 0.000222...
so x_A has 3 significant digits of accuracy.

> 2. Error and Computer Arithmetic > 2.2 Errors: Definitions, Sources and Examples
Example
1 x_A = 23.496, x_T = 23.494: the error x_T - x_A = -0.002 is 2 units in the fifth digit of x_T, so x_A has 4 significant digits of accuracy.
2 x_A = 0.02138, x_T = 0.02144: the error x_T - x_A = 0.00006 is 6 units in the fourth digit of x_T (counting from the first nonzero digit), so x_A has only 3 significant digits of accuracy.
3 x_A = 22/7 = 3.1428571..., x_T = π = 3.14159265...: the error x_T - x_A = -0.00126448... is about 1.3 units in the fourth digit of x_T, so x_A has 3 significant digits of accuracy.

> 2. Error and Computer Arithmetic > 2.2.1 Sources of error
Errors in a scientific-mathematical-computational problem:
1 Original errors
(E1) Modeling errors
(E2) Blunders and mistakes
(E3) Physical measurement errors
(E4) Machine representation and arithmetic errors
(E5) Mathematical approximation errors
2 Consequences of errors
(F1) Loss-of-significance errors
(F2) Noise in function evaluation
(F3) Underflow and overflow errors

> 2. Error and Computer Arithmetic > 2.2.1 Sources of error
(E1) Modeling errors: mathematical equations are used to represent physical reality - a mathematical model.
Malthusian growth model (it can be accurate for some stages of growth of a population with unlimited resources):
N(t) = N_0 * e^(kt),   N_0, k ≥ 0
where N(t) = population at time t. For large t the model overestimates the actual population: it accurately models the growth of the US population for 1790 ≤ t ≤ 1860, with k = 0.02975 and N_0 = 3,929,000 * e^(-1790k), but considerably overestimates the actual population for t ≥ 1870.

> 2. Error and Computer Arithmetic > 2.2.1 Sources of error
(E2) Blunders and mistakes: mostly programming errors.
Test by using cases where you know the solution; break the program into small subprograms that can be tested separately.
(E3) Physical measurement errors. For example, the speed of light in vacuum is
c = (2.997925 + ε) * 10^10 cm/sec,   |ε| ≤ 0.000003
Due to the error in the data, the calculations will contain the effect of this observational error. Numerical analysis cannot remove the error in the data, but it can look at its propagated effect in a calculation and suggest the best form for a calculation that will minimize the propagated effect of errors in data.

> 2. Error and Computer Arithmetic > 2.2.1 Sources of error
(E4) Machine representation and arithmetic errors. For example, errors from rounding and chopping.
They are inevitable when using floating-point arithmetic, and they form the main source of errors for some problems (e.g., solving systems of linear equations). We will look at the effect of rounding errors for some summation procedures.
(E5) Mathematical approximation errors: the major form of error that we will look at. For example, when evaluating the integral
I = int_0^1 e^(-x^2) dx,
since there is no elementary antiderivative for e^(-x^2), I cannot be evaluated explicitly. Instead, we approximate it with a quantity that can be computed.

> 2. Error and Computer Arithmetic > 2.2.1 Sources of error
Using the Taylor approximation
e^(-x^2) ≈ 1 - x^2 + x^4/2! - x^6/3! + x^8/4!
we can easily evaluate
I ≈ int_0^1 (1 - x^2 + x^4/2! - x^6/3! + x^8/4!) dx
with the truncation error being estimated via (3.10).
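As a sketch (not from the text), the term-by-term integral of the polynomial above can be compared in MATLAB with the adaptive quadrature routine integral(), used here only as a reference value.
I_taylor = 1 - 1/3 + 1/(5*factorial(2)) - 1/(7*factorial(3)) + 1/(9*factorial(4));
I_quad   = integral(@(x) exp(-x.^2), 0, 1);      % reference value
fprintf('Taylor: %.8f   integral(): %.8f\n', I_taylor, I_quad)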

> 2. Error and Computer Arithmetic > 2.2.2 Loss-of-significance errors
Loss of significant digits
Example
Consider the evaluation of
f(x) = x*[sqrt(x + 1) - sqrt(x)]
for an increasing sequence of values of x.
      x       Computed f(x)   True f(x)
      1         0.414210       0.414214
     10         1.54340        1.54347
    100         4.99000        4.98756
   1000        15.8000        15.8074
  10000        50.0000        49.9988
 100000       100.000        158.113
Table: results of using a 6-digit decimal calculator.
As x increases, there are fewer digits of accuracy in the computed value f(x).

> 2. Error and Computer Arithmetic > 2.2.2 Loss-of-significance errors
What happens? For x = 100:
sqrt(100) = 10.0000 (exact),   sqrt(101) = 10.0499 (rounded)
where sqrt(101) is correctly rounded to 6 significant digits of accuracy. Then
sqrt(x + 1) - sqrt(x) = sqrt(101) - sqrt(100) = 0.0499000
while the true value should be 0.0498756. The calculation has a loss-of-significance error. Three digits of accuracy in sqrt(x + 1) = sqrt(101) were canceled by subtraction of the corresponding digits in sqrt(x) = sqrt(100). The loss of accuracy was a by-product of the form of f(x) and the finite precision 6-digit decimal arithmetic being used.

> 2. Error and Computer Arithmetic > 2.2.2 Loss-of-significance errors
For this particular f, there is a simple way to reformulate it and avoid the loss-of-significance error:
f(x) = x / (sqrt(x + 1) + sqrt(x))
which on a 6-digit decimal calculator gives the correct answer to six digits:
f(100) = 4.98756
A sketch of the same effect in single precision is given below.
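A minimal MATLAB sketch, using IEEE single precision in place of the 6-digit decimal calculator of the table; the qualitative behavior (the naive form losing digits as x grows) is the same. The sample values of x are chosen only for illustration.
x = single([1 10 100 1000 1e4 1e5 1e6]);
f_naive = x .* (sqrt(x + 1) - sqrt(x));     % subtraction of nearly equal quantities
f_safe  = x ./ (sqrt(x + 1) + sqrt(x));     % reformulated version
disp([x' f_naive' f_safe'])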

> 2. Error and Computer Arithmetic > 2.2.2 Loss-of-significance errors
Example
Consider the evaluation of
f(x) = (1 - cos(x))/x^2
for a sequence of values of x approaching 0.
    x        Computed f(x)    True f(x)
  0.1        0.4995834700     0.4995834722
  0.01       0.4999960000     0.4999958333
  0.001      0.5000000000     0.4999999583
  0.0001     0.5000000000     0.4999999996
  0.00001    0.0              0.5000000000
Table: results of using a 10-digit decimal calculator.

> 2. Error and Computer Arithmetic > 2.2.2 Loss-of-significance errors
Look at the calculation when x = 0.01:
cos(0.01) = 0.9999500004   (= 0.999950000416665)
has nine significant digits of accuracy, being off in the tenth digit by two units. Next,
1 - cos(0.01) = 0.0000499996   (= 4.999958333495869e-05)
which has only five significant digits, with four digits being lost in the subtraction.
To avoid the loss of significant digits due to the subtraction of nearly equal quantities, we use the Taylor approximation (3.7) for cos(x) about x = 0:
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + R_6(x)
R_6(x) = x^8/8! * cos(ξ),   ξ an unknown number between 0 and x.

> 2. Error and Computer Arithmetic > 2.2.2 Loss-of-significance errors
Hence
f(x) = (1/x^2) * {1 - [1 - x^2/2! + x^4/4! - x^6/6! + R_6(x)]}
     = 1/2! - x^2/4! + x^4/6! - x^6/8! * cos(ξ)
giving f(0) = 1/2. For |x| ≤ 0.1,
|x^6/8! * cos(ξ)| ≤ 10^-6/8! ≈ 2.5 * 10^-11
Therefore, with this accuracy,
f(x) ≈ 1/2! - x^2/4! + x^4/6!,   |x| ≤ 0.1

> 2. Error and Computer Arithmetic > 2.2.2 Loss-of-significance errors Remark When two nearly equal quantities are subtracted, leading significant digits will be lost. In the previous two examples, this was easy to recognize and we found ways to avoid the loss of significance. More often, the loss of significance is subtle and difficult to detect, as in calculating sums (for example, in approximating a function f(x) by a Taylor polynomial). If the value of the sum is relatively small compared to the terms being summed, then there are probably some significant digits of accuracy being lost in the summation process. 2. Error and Computer Arithmetic Math 1070

> 2. Error and Computer Arithmetic > 2.2.2 Loss-of-significance errors
Example
Consider using the Taylor series approximation for e^x to evaluate e^-5:
e^-5 = 1 + (-5)/1! + (-5)^2/2! + (-5)^3/3! + (-5)^4/4! + ...
Degree    Term         Sum      |  Degree    Term           Sum
   0       1.000        1.000   |    13      -0.1960        -0.04230
   1      -5.000       -4.000   |    14       0.7001E-1      0.02771
   2      12.50         8.500   |    15      -0.2334E-1      0.004370
   3     -20.83       -12.33    |    16       0.7293E-2      0.01166
   4      26.04        13.71    |    17      -0.2145E-2      0.009518
   5     -26.04       -12.33    |    18       0.5958E-3      0.01011
   6      21.70         9.370   |    19      -0.1568E-3      0.009957
   7     -15.50        -6.130   |    20       0.3920E-4      0.009996
   8       9.688        3.558   |    21      -0.9333E-5      0.009987
   9      -5.382       -1.824   |    22       0.2121E-5      0.009989
  10       2.691        0.8670  |    23      -0.4611E-6      0.009989
  11      -1.223       -0.3560  |    24       0.9607E-7      0.009989
  12       0.5097       0.1537  |    25      -0.1921E-7      0.009989
Table. Calculation of e^-5 = 0.006738 using four-digit decimal arithmetic.

> 2. Error and Computer Arithmetic > 2.2.2 Loss-of-significance errors
There are loss-of-significance errors in the calculation of the sum. Avoiding the loss of significance is simple in this case: compute e^5 with the series (whose terms are all positive) and then form
e^-5 = 1/e^5
so that no cancellation of positive and negative terms occurs. A sketch comparing the two approaches is given below.
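A MATLAB sketch of the comparison (not part of the text); single precision and 26 terms are arbitrary choices that make the cancellation visible.
j = 0:25;
direct      = sum(single((-5).^j ./ factorial(j)));     % alternating series: cancellation
via_inverse = 1 / sum(single(5.^j ./ factorial(j)));    % all-positive series for e^5
fprintf('direct: %.7e   1/e^5: %.7e   true: %.7e\n', direct, via_inverse, exp(-5))
The direct alternating sum loses several digits, while the reciprocal of the e^5 series agrees with exp(-5) to nearly full single precision.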

> 2. Error and Computer Arithmetic > 2.2.3 Noise in function evaluation
Consider evaluating a continuous function f for all x ∈ [a, b]. The graph is a continuous curve. When evaluating f on a computer using floating-point arithmetic (with rounding or chopping), the errors from arithmetic operations cause the graph to cease being a continuous curve. Let us look at
f(x) = (x - 1)^3 = -1 + x*(3 + x*(-3 + x))
on [0, 2].

> 2. Error and Computer Arithmetic > 2.2.3 Noise in function evaluation
[Figure: f(x) = x^3 - 3x^2 + 3x - 1 on [0, 2]]

> 2. Error and Computer Arithmetic > 2.2.3 Noise in function evaluation
[Figure: Detailed graph of f(x) = x^3 - 3x^2 + 3x - 1 near x = 1]
Here is a plot of the computed values of f(x) for 81 evenly spaced values of x ∈ [0.99998, 1.00002]. A rootfinding program might consider f(x) to have a very large number of solutions near 1, based on the many sign changes!
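The noise plot can be reproduced with a few lines of MATLAB (a sketch, not the original plotting script):
x = linspace(0.99998, 1.00002, 81);          % 81 evenly spaced points near 1
f = -1 + x.*(3 + x.*(-3 + x));               % nested form of (x - 1)^3
plot(x, f, '.')
xlabel('x'), ylabel('f(x)')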

> 2. Error and Computer Arithmetic > 2.2.4 Underflow and overflow errors
From the definition of floating-point numbers, there are upper and lower limits for the magnitudes of the numbers that can be expressed in floating-point form. Attempts to create numbers
that are too small lead to underflow errors: the default option is to set the number to zero and proceed;
that are too large lead to overflow errors: these are generally fatal errors on most computers. With the IEEE floating-point format, overflow errors can be carried along as having a value of ±∞ or NaN, depending on the context. Usually, an overflow error is an indication of a more significant problem or error in the program.

> 2. Error and Computer Arithmetic > 2.2.4 Underflow and overflow errors
Example (underflow errors)
Consider evaluating f(x) = x^10 for x near 0. With IEEE single precision arithmetic, the smallest nonzero positive number expressible in normalized floating-point form is
m = 2^-126 ≈ 1.18 * 10^-38
So f(x) is set to zero if
|x|^10 < m  ⟺  |x| < m^(1/10) ≈ 1.61 * 10^-4,
i.e., for -0.000161 < x < 0.000161.

> 2. Error and Computer Arithmetic > 2.2.4 Underflow and overflow errors
Sometimes it is possible to eliminate the overflow error by just reformulating the expression being evaluated.
Example (overflow errors)
Consider evaluating z = sqrt(x^2 + y^2). If x or y is very large, then x^2 + y^2 might create an overflow error, even though z might be within the floating-point range of the machine. Instead, write
z = |x| * sqrt(1 + (y/x)^2),   0 ≤ |y| ≤ |x|, x ≠ 0
z = |y| * sqrt(1 + (x/y)^2),   0 ≤ |x| ≤ |y|, y ≠ 0
In both cases, the argument of sqrt(1 + w^2) has |w| ≤ 1, which will not cause any overflow error (except when z itself is too large to be expressed in the floating-point format used).
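A sketch in single precision (not from the text), with hypothetical values of x and y chosen so that the naive form overflows while the rescaled form does not:
x = single(3e20);  y = single(4e20);
naive = sqrt(x^2 + y^2);                 % x^2 = 9e40 overflows single precision to Inf
if abs(x) >= abs(y)
    z = abs(x) * sqrt(1 + (y/x)^2);
else
    z = abs(y) * sqrt(1 + (x/y)^2);
end
disp([naive z])                          % Inf versus 5e20
MATLAB's built-in hypot(x, y) performs a similar rescaling.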

> 2. Error and Computer Arithmetic > 2.3 Propagation of Error
When doing calculations with numbers that contain an error, the result will be affected by these errors. If x_A, y_A denote numbers used in a calculation, corresponding to x_T, y_T, we wish to bound the propagated error
E = (x_T ω y_T) - (x_A ω y_A)
where ω denotes one of the arithmetic operations +, -, *, /. The first technique used to bound E is interval arithmetic. Suppose we know bounds on x_T - x_A and y_T - y_A. Using these bounds and x_A ω y_A, we look for an interval guaranteed to contain x_T ω y_T.