Chapter 1: Introduction and mathematical preliminaries

Contents
1.1 Motivation
1.2 Finite-digit arithmetic
1.3 Errors in numerical calculations
1.4 Goals for a good algorithm

1.1 Motivation

Most of the problems that you have encountered so far are unusual in that they can be solved analytically (e.g. integrals, differential equations). This is somewhat contrived: in real life, analytic solutions are rather rare, so we must devise ways of approximating solutions. For example, while the integral $\int_1^2 e^{x}\,dx$ has a well-known analytic solution, $\int_1^2 e^{x^2}\,dx$ can only be expressed in terms of special functions, and $\int_1^2 e^{x^3}\,dx$ has no analytic solution at all. These integrals nevertheless exist as the areas under the curves $y = e^{x^2}$ and $y = e^{x^3}$, so we can obtain numerical approximations by estimating these areas, e.g. by dividing them into strips and using the trapezium rule (a short sketch of this idea follows below).

Numerical analysis is the part of mathematics concerned with (i) devising methods, called numerical algorithms, for obtaining approximate numerical solutions to mathematical problems and, importantly, (ii) being able to estimate the error involved. Traditionally, numerical algorithms are built upon the simplest arithmetic operations ($+$, $-$, $\times$ and $\div$).
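As a concrete illustration of the trapezium-rule idea above, the following sketch (not part of the original notes; the function names and the choice of strip counts are ours) approximates $\int_1^2 e^{x^3}\,dx$ using nothing more than basic arithmetic and evaluations of the integrand.

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule: approximate the integral of f over [a, b] with n strips."""
    h = (b - a) / n                     # strip width
    total = 0.5 * (f(a) + f(b))         # the two endpoints carry weight 1/2
    for i in range(1, n):
        total += f(a + i * h)           # interior points carry weight 1
    return h * total

f = lambda x: math.exp(x ** 3)          # integrand with no elementary antiderivative

for n in (10, 100, 1000):
    print(n, trapezium(f, 1.0, 2.0, n))
```

Successive refinements settle towards a common value; halving the strip width roughly quarters the error, the second-order behaviour expected of the trapezium rule.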

Interestingly, digital computers can only perform these very basic operations (although very sophisticated programs such as Maple are built on top of them). However, they are very fast and have therefore led to major advances in applied mathematics over the last 60 years.

1.2 Finite-digit arithmetic

1.2.1 Computer arithmetic

Floating point numbers. Computers can store integers exactly, but not real numbers in general. Instead, they approximate them as floating point numbers. A decimal floating point number (or machine number) is a number of the form
$$ \pm\,\underbrace{0.d_1 d_2 \ldots d_k}_{m} \times 10^{n}, \qquad 0 \le d_i \le 9, \quad d_1 \ne 0, $$
where the significand or mantissa $m$ (i.e. the fractional part) and the exponent $n$ are fixed-length integers. (The mantissa cannot start with a zero, hence $d_1 \ne 0$.) In fact, computers use binary numbers (base 2) rather than decimal numbers (base 10), but the same principle applies (see Appendix A).

Machine $\varepsilon$. Consider a simple computer where $m$ is 3 digits long and $n$ is one digit long. The smallest positive number this computer can store is $0.1 \times 10^{-9}$ and the largest is $0.999 \times 10^{9}$. Thus the length of the exponent determines the range of numbers that can be stored. However, not all values in this range can be distinguished: numbers can only be recorded to a certain relative accuracy $\varepsilon$. For example, on our simple computer, the next floating point number after $1 = 0.100 \times 10^{1}$ is $0.101 \times 10^{1} = 1.01$. The quantity $\varepsilon_{\text{machine}} = 0.01$ (machine $\varepsilon$) is the worst relative uncertainty in the floating point representation of a number.

1.2.2 Chopping and rounding

There are two ways of terminating the mantissa of the $k$-digit decimal machine number approximating
$$ 0.d_1 d_2 \ldots d_k d_{k+1} d_{k+2} \ldots \times 10^{n}, \qquad 0 \le d_i \le 9, \quad d_1 \ne 0: $$
i. chopping: chop off the digits $d_{k+1}, d_{k+2}, \ldots$ to get $0.d_1 d_2 \ldots d_k \times 10^{n}$;
ii. rounding: add $5 \times 10^{n-(k+1)}$ and then chop off the digits $d_{k+1}, d_{k+2}, \ldots$ (so if $d_{k+1} \ge 5$ we add 1 to $d_k$ before chopping).
Rounding is more accurate than chopping.

Example 1.1 The five-digit floating-point form of $\pi = 3.14159265359\ldots$ is $0.31415 \times 10^{1}$ using chopping and $0.31416 \times 10^{1}$ using rounding. Similarly, the five-digit floating-point form of $2/3 = 0.6666666\ldots$ is $0.66666$ using chopping and $0.66667$ using rounding, but that of $1/3 = 0.3333333\ldots$ is $0.33333$ using either chopping or rounding. (A short sketch of $k$-digit chopping and rounding follows at the end of this section.)

1.3 Errors in numerical calculations

Much of numerical analysis is concerned with controlling the size of errors in calculations. These errors, quantified in two different ways, arise from two distinct sources.
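To make the finite-digit arithmetic of the previous section concrete, here is a small sketch of $k$-digit chopping and rounding for positive decimal numbers, reproducing Example 1.1. It is an illustration only (the helper names are ours), and it is built on top of ordinary binary floating point, whereas a real machine works in base 2 throughout.

```python
import math

def chop(x, k):
    """Keep the first k significant decimal digits of x > 0 and discard the rest."""
    n = math.floor(math.log10(x)) + 1        # exponent n such that x = 0.d1d2... * 10**n
    scale = 10.0 ** (k - n)
    return math.floor(x * scale) / scale

def round_k(x, k):
    """k-digit rounding: add 5 * 10**(n - (k + 1)) and then chop."""
    n = math.floor(math.log10(x)) + 1
    return chop(x + 5 * 10.0 ** (n - (k + 1)), k)

for x in (math.pi, 2 / 3, 1 / 3):
    print(x, "chopped:", chop(x, 5), "rounded:", round_k(x, 5))
```

Running this prints 3.1415 / 3.1416 for $\pi$, 0.66666 / 0.66667 for $2/3$, and 0.33333 / 0.33333 for $1/3$, in agreement with Example 1.1.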

1.3.1 Measures of the error

Let $p^*$ be the result of a numerical calculation and $p$ the exact answer (i.e. $p^*$ is an approximation to $p$). We define two measures of the error:
i. absolute error: $E = |p - p^*|$;
ii. relative error: $E_r = |p - p^*| / |p|$ (provided $p \ne 0$), which takes the size of the value into consideration.

Example 1.2 If $p = 2$ and $p^* = 2.1$, the absolute error is $E = 0.1$; if $p = 2 \times 10^{-3}$ and $p^* = 2.1 \times 10^{-3}$, then $E = 10^{-4}$ is smaller; and if $p = 2 \times 10^{3}$ and $p^* = 2.1 \times 10^{3}$, then $E = 10^{2}$ is larger. In all three cases, however, the relative error remains the same, $E_r = 5 \times 10^{-2}$.

1.3.2 Round-off errors

These are caused by the imprecision of using finite-digit arithmetic in practical calculations (e.g. floating point numbers).

Example 1.3 The 4-digit representation of $x = \sqrt{2} = 1.4142136\ldots$ is $x^* = 1.414 = 0.1414 \times 10^{1}$. Using 4-digit arithmetic, we evaluate $(x^*)^2 = 1.999 \ne 2$, due to round-off errors.

Significant digits. The number $p^*$ approximates $p$ to $k$ significant digits if $k$ is the largest non-negative integer such that
$$ \frac{|p - p^*|}{|p|} \le 5 \times 10^{-k}. $$
This is the bound on the relative error when using $k$-digit rounding arithmetic. For example, the number $p^*$ approximates $p = 100$ to 4 significant digits if $99.95 < p^* < 100.05$.

Magnification of the error. Computers store numbers to a relative accuracy $\varepsilon$. Thus, the true value of a floating point number $x$ could be anywhere between $x(1 - \varepsilon)$ and $x(1 + \varepsilon)$. Now, if we add two numbers together, $x + y$, the true value lies in the interval
$$ \bigl(x + y - \varepsilon(|x| + |y|),\; x + y + \varepsilon(|x| + |y|)\bigr). $$
Thus the absolute error is the sum of the absolute errors in $x$ and $y$, $E = \varepsilon(|x| + |y|)$, but the relative error of the answer is
$$ E_r = \varepsilon\, \frac{|x| + |y|}{|x + y|}. $$
If $x$ and $y$ have the same sign, the relative accuracy remains equal to $\varepsilon$, but if they have opposite signs the relative error will be larger. This magnification becomes particularly significant when two very close numbers are subtracted.

Example 1.4 Recall that the exact solutions of the quadratic equation $ax^2 + bx + c = 0$ are
$$ x_1 = \frac{-b - \sqrt{b^2 - 4ac}}{2a}, \qquad x_2 = \frac{-b + \sqrt{b^2 - 4ac}}{2a}. $$

Using 4-digit rounding arithmetic, solve the quadratic equation $x^2 + 62x + 1 = 0$, whose roots are $x_1 \approx -61.9838670$ and $x_2 \approx -0.016133230$. Here $\sqrt{b^2 - 4ac} = \sqrt{3840} \approx 61.97$ is close to $b = 62$. Thus $x_1^* = (-62 - 61.97)/2 = -124.0/2 = -62.00$, with a relative error $E_r \approx 2.6 \times 10^{-4}$, but $x_2^* = (-62 + 61.97)/2 = -0.0150$, with a much larger relative error $E_r \approx 7.0 \times 10^{-2}$.

Similarly, division by small numbers (or, equivalently, multiplication by large numbers) magnifies the absolute error while leaving the relative error unchanged. Round-off errors can be minimised by reducing the number of arithmetic operations, particularly those that magnify errors (e.g. by using the nested form of polynomials).

1.3.3 Truncation errors

These are caused by the approximations in the computational algorithm itself. (An algorithm only gives an approximate solution to a mathematical problem, even if the arithmetic is exact.)

Example 1.5 Calculate the derivative of a function $f(x)$ at the point $x_0$. Recall the definition of the derivative,
$$ \frac{df}{dx}(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}. $$
However, on a computer we cannot take $h \to 0$ (there exists a smallest positive floating point number), so $h$ must take a finite value. Using Taylor's theorem, $f(x_0 + h) = f(x_0) + h f'(x_0) + \frac{h^2}{2} f''(\xi)$, where $x_0 < \xi < x_0 + h$. Therefore,
$$ \frac{f(x_0 + h) - f(x_0)}{h} = f'(x_0) + \frac{h}{2} f''(\xi) \approx f'(x_0) + \frac{h}{2} f''(x_0). \qquad (1.1) $$
Consequently, using a finite value of $h$ leads to a truncation error of size $\frac{h}{2}|f''(x_0)|$. (It is called a first-order error since it is proportional to $h$.) Clearly, small values of $h$ make the truncation error smaller.

However, because of round-off errors, $f(x_0)$ is only known to a relative accuracy $\varepsilon$. The absolute round-off error in $f(x_0 + h) - f(x_0)$ is $2\varepsilon|f(x_0)|$, and that in the derivative $(f(x_0 + h) - f(x_0))/h$ is $\frac{2\varepsilon}{h}|f(x_0)|$ (assuming $h$ is exact). Note that this error increases as $h$ decreases. The relative accuracy of the calculation of $f'(x_0)$ (i.e. the sum of the relative truncation and round-off errors) is
$$ E_r = \frac{h}{2}\left|\frac{f''}{f'}\right| + \frac{2\varepsilon}{h}\left|\frac{f}{f'}\right|, $$
which has a minimum, $\min(E_r) = 2\sqrt{\varepsilon |f f''|}\,/\,|f'|$, at $h_m = 2\sqrt{\varepsilon |f/f''|}$.

(Figure: $E_r$ as a function of $h$; the round-off contribution $\propto 1/h$ dominates for $h < h_m$ and the truncation contribution $\propto h$ for $h > h_m$, with a minimum at $h = h_m$.)

(Two short numerical sketches follow: one reproducing the cancellation of Example 1.4, the other the round-off/truncation trade-off just described.)
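The cancellation in Example 1.4 is easy to reproduce in ordinary double-precision arithmetic by pushing the coefficients further apart. The sketch below is ours, not part of the original notes: it contrasts the formula for $x_2$ as written above with the standard cancellation-free rearrangement $x_2 = c/(a x_1)$ (obtained by multiplying numerator and denominator by the conjugate), using $x^2 + bx + 1 = 0$ with $b = 10^8$ so that the digit loss is clearly visible.

```python
import math

def roots_naive(a, b, c):
    """Quadratic formula exactly as written in Example 1.4 (prone to cancellation)."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b - d) / (2 * a), (-b + d) / (2 * a)

def roots_stable(a, b, c):
    """Compute the large-magnitude root first, then use x1 * x2 = c / a for the small one."""
    d = math.sqrt(b * b - 4 * a * c)
    x1 = (-b - d) / (2 * a) if b >= 0 else (-b + d) / (2 * a)
    return x1, c / (a * x1)

b = 1.0e8                         # the small root is approximately -1/b = -1e-8
print(roots_naive(1.0, b, 1.0))   # the small root loses most of its significant digits
print(roots_stable(1.0, b, 1.0))  # the small root is accurate to full precision
```

The subtraction $-b + \sqrt{b^2 - 4ac}$ wipes out the leading digits shared by the two operands, exactly the mechanism described under "Magnification of the error".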
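The trade-off analysed in Example 1.5 can also be observed directly by differentiating a known function with the forward-difference formula (1.1) over a range of step sizes. This is an illustrative sketch under our own choice of test function ($f = \exp$ at $x_0 = 1$, for which $|f/f''| = 1$); in double precision ($\varepsilon \approx 2.2 \times 10^{-16}$) the error should be smallest near $h_m = 2\sqrt{\varepsilon} \approx 3 \times 10^{-8}$ and grow again for smaller $h$. The central-difference formula (1.2), introduced just below, is included for comparison.

```python
import math

def forward_diff(f, x0, h):
    """First-order forward difference, formula (1.1)."""
    return (f(x0 + h) - f(x0)) / h

def central_diff(f, x0, h):
    """Second-order central difference, formula (1.2)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

x0 = 1.0
exact = math.exp(x0)              # the derivative of exp is exp itself

for k in range(1, 13):
    h = 10.0 ** (-k)
    e_fwd = abs(forward_diff(math.exp, x0, h) - exact) / exact
    e_cen = abs(central_diff(math.exp, x0, h) - exact) / exact
    print(f"h = 1e-{k:02d}   forward E_r = {e_fwd:.1e}   central E_r = {e_cen:.1e}")
```

The forward-difference error falls roughly like $h$ until $h \approx 10^{-8}$ and then rises again as round-off dominates, tracing out the curve sketched above; the central difference reaches its best accuracy at a larger value of $h$.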

To improve the computation of the derivative we can use the central difference approximation (again from Taylor's theorem),
$$ \frac{f(x_0 + h) - f(x_0 - h)}{2h} = f'(x_0) + \frac{h^2}{6} f'''(\xi), \qquad x_0 - h < \xi < x_0 + h. \qquad (1.2) $$
Its truncation error scales with $h^2$; this second-order method is therefore more accurate than (1.1) for small values of $h$.

1.4 Goals for a good algorithm

1.4.1 Accuracy

All numerical solutions involve some error, so an important goal for any algorithm is that this error be small. It is equally important, however, to have bounds on the error so that we can have confidence in the answers. How accurate an answer needs to be depends upon the application: an error of 0.1% may be more than enough in many cases but hopelessly inaccurate for landing a probe on a planet.

1.4.2 Efficiency

An efficient algorithm is one that minimises the amount of computation time, which usually means the number of arithmetic operations. Whether this matters depends on the size of the calculation. If a calculation only takes a few seconds or minutes, it is not worth spending a lot of time making it more efficient. However, a very large calculation can take weeks to run, so efficiency becomes important, especially if it needs to be run many times.

1.4.3 Stability and robustness

As we have seen, all calculations involve round-off errors. Although the error in each individual operation is small, it is possible for the errors to grow exponentially as the calculation proceeds, until the error becomes as large as the answer itself. When this happens, the method is described as unstable. A robust algorithm is one that is stable for a wide range of problems.
