
Notes for Chapter 1 of Scientific Computing with Case Studies. Dianne P. O'Leary, SIAM Press, 2008. Mathematical modeling. Computer arithmetic. Errors. 1999-2008 Dianne P. O'Leary 1

Arithmetic and Error What we need to know about error:
-- how does error arise
-- how machines do arithmetic
-- fixed point arithmetic
-- floating point arithmetic
-- how errors are propagated in calculations
-- how to measure error
1999-2008 Dianne P. O'Leary 2

How does error arise? 1999-2008 Dianne P. O'Leary 3

How does error arise? Example: An engineer wants to study the stresses in a bridge. 1999-2008 Dianne P. O'Leary 4

Measurement error Step 1: Gather lengths, angles, etc. for girders and wires. 1999-2008 Dianne P. O'Leary 5

Modeling error Step 2: Approximate system by finite elements. 1999-2008 Dianne P. O'Leary 6

Truncation error Step 4: A numerical analyst develops an algorithm: the stress can be computed as the limit (as n becomes infinite) of some function G(n). We can't take this limit on a computer, so we decide to use G(150). G(1), G(2), G(3), G(4), G(5), ... 1999-2008 Dianne P. O'Leary 7
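
To make the idea concrete, here is a small Python sketch (not from the slides; the G below is a hypothetical stand-in, since the slides leave G unspecified): take G(n) = (1 + 1/n)^n, whose limit as n becomes infinite is e, and see how far G(150) is from that limit.

import math

def G(n):
    # Hypothetical example sequence: G(n) = (1 + 1/n)**n, which converges to e.
    return (1 + 1 / n) ** n

print(G(150))                  # about 2.709276
print(math.e)                  # 2.718281828...
print(abs(G(150) - math.e))    # truncation error of stopping at n = 150: about 9e-3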

Roundoff error Step 4: The algorithm is programmed and run on a computer. We need π. Approximate it by 3.1415926. (π = 3.1415926535...) 1999-2008 Dianne P. O'Leary 8

Sources of error 1. Measurement error 2. Modeling error 3. Truncation error 4. Rounding error 1999-2008 Dianne P. O'Leary 9

No mistakes! Note: No mistakes: the engineer did not misread the ruler, the model was good, the programmer did not make a typo in the definition of π, and the computer worked flawlessly. But the engineer will want to know what the final answer has to do with the stresses on the real bridge! 1999-2008 Dianne P. O'Leary 10

What does a numerical analyst do?
-- design algorithms and analyze them.
-- develop mathematical software.
-- answer questions about how accurate the final answer is.
1999-2008 Dianne P. O'Leary 11

What does a computational scientist do?
-- works as part of an interdisciplinary team.
-- intelligently uses mathematical software to analyze mathematical models.
1999-2008 Dianne P. O'Leary 12

How machines do arithmetic 1999-2008 Dianne P. O'Leary 13

Machine Arithmetic: Fixed Point How integers are stored in computers: Each word (storage location) in a machine contains a fixed number of digits. Example: A machine with a 6-digit word might represent 1985 as 0 0 1 9 8 5 1999-2008 Dianne P. O'Leary 14

Fixed Point: Decimal vs. Binary 0 0 1 9 8 5 Most calculators use decimal (base 10) representation. Each digit is an integer between 0 and 9. The value of the number is 1 x 10^3 + 9 x 10^2 + 8 x 10^1 + 5 x 10^0. 1999-2008 Dianne P. O'Leary 15

Fixed Point: Decimal vs. Binary 0 1 0 1 1 0 Most computers use binary (base 2) representation. Each digit is the integer 0 or 1. If the number above is binary, its value is 1 x 2^4 + 0 x 2^3 + 1 x 2^2 + 1 x 2^1 + 0 x 2^0 (or 22 in base 10). 1999-2008 Dianne P. O'Leary 16
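
These positional expansions are easy to check in Python (a small sketch, not part of the slides):

def value(digits, base):
    # Evaluate a digit string in the given base: sum of digit * base**position.
    total = 0
    for d in digits:
        total = total * base + int(d)
    return total

print(value("001985", 10))          # 1985
print(value("010110", 2))           # 22
print(int("010110", 2))             # 22, the same thing using Python's built-in parser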

Example of binary addition
Rules for adding binary digits:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10 (binary) = 2
1 + 1 + 1 = 11 (binary) = 3
  0 0 0 1 1
+ 0 1 0 1 0
  0 1 1 0 1
Note the carry here!
1999-2008 Dianne P. O'Leary 17

Example of binary addition
  0 0 0 1 1
+ 0 1 0 1 0
  0 1 1 0 1
In decimal notation, 3 + 10 = 13. Note the carry here!
1999-2008 Dianne P. O'Leary 18
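
As a quick check of the carry arithmetic (a sketch, not from the slides), Python can parse and print binary strings directly:

a = int("00011", 2)                 # 3
b = int("01010", 2)                 # 10
print(a + b)                        # 13
print(format(a + b, "05b"))         # 01101, the 5-bit binary sum from the slide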

Representing negative numbers Computers represent negative numbers using one's complement, two's complement, or sign-magnitude representation. Sign-magnitude is easiest, and enough for us: if the first bit is zero, then the number is positive. Otherwise, it is negative. So 0 1 0 1 1 denotes +11, and 1 1 0 1 1 denotes -11. 1999-2008 Dianne P. O'Leary 19

Range of fixed point numbers Largest 5-digit (5 bit) binary number: 0 1 1 1 1 = 15 1999-2008 Dianne P. O'Leary 20

Range of fixed point numbers Largest 5-digit (5 bit) binary number: 0 1 1 1 1 = 15 Smallest: 1999-2008 Dianne P. O'Leary 21

Range of fixed point numbers Largest 5-digit (5 bit) binary number: 0 1 1 1 1 = 15. Smallest: 1 1 1 1 1 = -15. Smallest positive: 1999-2008 Dianne P. O'Leary 22

Range of fixed point numbers Largest 5-digit (5 bit) binary number: 0 1 1 1 1 = 15. Smallest: 1 1 1 1 1 = -15. Smallest positive: 0 0 0 0 1 = 1. 1999-2008 Dianne P. O'Leary 23

Overflow If we try to add these numbers:
  0 1 1 1 1 = 15
+ 0 1 0 0 0 =  8
we get
  1 0 1 1 1 = -7.
We call this overflow: the answer is too large to store, since it is outside the range of this number system.
1999-2008 Dianne P. O'Leary 24
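
A small sketch of this overflow (mine, not from the slides): keep only the 5 stored bits of the sum, then read them back as a sign bit plus a 4-bit magnitude, exactly as the slide does.

def decode_sign_magnitude_5bit(bits):
    # Interpret 5 bits as sign-magnitude: high bit is the sign, low 4 bits the magnitude.
    sign = -1 if bits & 0b10000 else 1
    return sign * (bits & 0b01111)

stored = (15 + 8) & 0b11111                 # true sum is 23 = 10111; only 5 bits are kept
print(format(stored, "05b"))                # 10111
print(decode_sign_magnitude_5bit(stored))   # -7, the nonsense answer caused by overflow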

Features of fixed point arithmetic Easy: always get an integer answer. Either we get exactly the right answer for addition, subtraction, or multiplication, or we can detect overflow. The numbers that we can store are equally spaced. Disadvantage: very limited range of numbers. 1999-2008 Dianne P. O'Leary 25

Floating point arithmetic If we wanted to store 15 x 2^11, we would need 16 bits: 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 Instead, let's agree to code numbers as two fixed point binary numbers: z x 2^p, with z = 15 saved as 01111 and p = 11 saved as 01011. Now we can have fractions, too: binary 0.101 = 1 x 2^-1 + 0 x 2^-2 + 1 x 2^-3. 1999-2008 Dianne P. O'Leary 26

Floating point arithmetic Jargon: in ± z x 2^p, z is called the mantissa or significand, and p is called the exponent. To make the representation unique (since, for example, 2 x 2^1 = 4 x 2^0), we normalize to make 1 ≤ z < 2. We store d digits for the mantissa, and limit the range of the exponent to m ≤ p ≤ M, for some integers m and M. 1999-2008 Dianne P. O'Leary 27

Floating point representation Example: Suppose we have a machine with d = 5, m = -15, M = 15.
15 x 2^10 = 1111 (binary) x 2^10 = 1.111 (binary) x 2^13: mantissa z = +1.1110, exponent p = +1101 (binary).
15 x 2^-10 = 1111 (binary) x 2^-10 = 1.111 (binary) x 2^-7: mantissa z = +1.1110, exponent p = -0111 (binary).
1999-2008 Dianne P. O'Leary 28
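
Python's math.frexp performs essentially this decomposition (a sketch of mine; frexp normalizes to 0.5 ≤ m < 1, so a small adjustment gives the slides' convention 1 ≤ z < 2):

import math

def normalize(x):
    # Return (z, p) with x = z * 2**p and 1 <= z < 2, assuming x > 0.
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= m < 1
    return 2 * m, e - 1

print(normalize(15 * 2 ** 10))      # (1.875, 13): 1.875 is 1.111 in binary
print(normalize(15 * 2 ** -10))     # (1.875, -7)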

Floating point standard Up until the mid-1980s, each computer manufacturer had a different choice for d, m, and M, and even a different way to select answers to arithmetic problems. A program written for one machine often would not compute the same answers on other machines. The situation improved somewhat with the introduction in 1987 of IEEE standard floating point arithmetic. 1999-2008 Dianne P. O'Leary 29

Floating point standard On most machines today, single precision: d = 24, m = -126, M = 127; double precision: d = 53, m = -1022, M = 1023. (And the representation is 2's complement, not sign-magnitude, so that the number -x is stored as 2^d - x.) 1999-2008 Dianne P. O'Leary 30
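
You can read the double-precision parameters of the machine you are using from Python's sys.float_info (a sketch; note that Python reports exponents in the convention x = m * 2^e with 0.5 ≤ m < 1, so max_exp and min_exp are each one larger than the M and m quoted above):

import sys

print(sys.float_info.mant_dig)      # 53, the d of the slides
print(sys.float_info.max_exp)       # 1024, corresponding to M = 1023
print(sys.float_info.min_exp)       # -1021, corresponding to m = -1022
print(sys.float_info.epsilon)       # 2**-52, the double-precision machine epsilon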

Floating point addition Machine arithmetic is more complicated for floating point. Example: In fixed point, we added 3 + 10. Here it is in floating point:
3 = 11 (binary) = 1.100 x 2^1, so z = 1.100, p = 1.
10 = 1010 (binary) = 1.010 x 2^3, so z = 1.010, p = 11 (binary).
1. Shift the smaller number so that the exponents are equal: z = 0.0110, p = 11.
2. Add the mantissas: 0.0110 + 1.010 = 1.1010, p = 11.
3. Shift if necessary to normalize.
1999-2008 Dianne P. O'Leary 31

Roundoff in Floating point addition Sometimes we cannot store the exact answer. Example: 1.1001 x 2^0 + 1.0001 x 2^-1.
1. Shift the smaller number so that the exponents are equal: z = 0.10001, p = 0.
2. Add the mantissas: 0.10001 + 1.1001 = 10.00011, p = 0.
3. Shift if necessary to normalize: 1.000011 x 2^1. But we can only store 1.0000 x 2^1! The error is called roundoff.
1999-2008 Dianne P. O'Leary 32
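
The same thing happens in ordinary double precision; a minimal sketch (not the slides' 5-digit toy system):

print(1.0 + 2.0 ** -53 == 1.0)      # True: the exact sum 1 + 2^-53 rounds back to 1.0
print(0.1 + 0.2 == 0.3)             # False: none of these decimals is exactly representable
print(0.1 + 0.2)                    # 0.30000000000000004, the rounded result we actually get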

Underflow, overflow. Convince yourself that roundoff cannot occur in fixed point. Other floating point troubles: Overflow: exponent grows too large. Underflow: exponent grows too small. 1999-2008 Dianne P. O'Leary 33

Range of floating point Example: Suppose that d = 5 and exponents range between -15 and 15. Smallest positive normalized number: 1.0000 (binary) x 2^-15 (since the mantissa needs to be normalized). Largest positive number: 1.1111 (binary) x 2^15. 1999-2008 Dianne P. O'Leary 34
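
For reference, here are those extremes evaluated numerically, alongside the corresponding limits for double precision (a sketch):

import sys

print(1.0 * 2 ** -15)               # 3.0517578125e-05, smallest positive normalized (toy system)
print((2 - 2 ** -4) * 2 ** 15)      # 63488.0, i.e. 1.1111 (binary) x 2^15 (toy system)
print(sys.float_info.min)           # about 2.2e-308, smallest positive normalized double
print(sys.float_info.max)           # about 1.8e+308, largest double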

Rounding IEEE standard arithmetic uses rounding. Rounding: Store x as r, where r is the machine number closest to x. 1999-2008 Dianne P. O'Leary 35

An important number: machine epsilon Machine epsilon is defined to be the gap between 1 and the next larger number that can be represented exactly on the machine. Example: Suppose that d = 5 and exponents range between -15 and 15. What is machine epsilon in this case? Note: Machine epsilon depends on d, but does not depend on m or M! 1999-2008 Dianne P. O'Leary 36
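
For the toy system with d = 5, the gap is 2^-4; for IEEE double precision with d = 53, it is 2^-52. A common experimental sketch for finding it on the machine at hand:

eps = 1.0
while 1.0 + eps / 2 > 1.0:          # halve until adding to 1 no longer changes it
    eps /= 2
print(eps)                          # 2.220446049250313e-16 = 2**-52 in double precision

import sys
print(sys.float_info.epsilon)       # the same value, as reported by Python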

Features of floating point arithmetic The numbers that we can store are not equally spaced. (Try to draw them on a number line.) A wide range of variably-spaced numbers can be represented exactly. For addition, subtraction, and multiplication, either we get exactly the right answer or a rounded version of it, or we can detect underflow or overflow. 1999-2008 Dianne P. O'Leary 37

How errors are propagated 1999-2008 Dianne P. O'Leary 38

Numerical Analysis vs. Analysis Mathematical analysis works with computations involving real or complex numbers. Computers do not work with these; for instance, they do not have a representation for the numbers π or e or even 0.1. Dealing with the finite approximations called floating point numbers means that we need to understand error and its propagation. 1999-2008 Dianne P. O'Leary 39

Absolute vs. relative errors Absolute error in c as an approximation to x: |x - c|. Relative error in c as an approximation to nonzero x: |x - c| / |x|. 1999-2008 Dianne P. O'Leary 40
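
In Python (a small sketch, not from the slides), using the earlier approximation of π:

import math

def absolute_error(x, c):
    return abs(x - c)

def relative_error(x, c):
    return abs(x - c) / abs(x)      # x must be nonzero

print(absolute_error(math.pi, 3.1415926))   # about 5.4e-08
print(relative_error(math.pi, 3.1415926))   # about 1.7e-08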

Error Analysis Errors can be magnified during computation. Example:
  2.003 x 10^0 (suppose ± .001, or .05% error)
- 2.000 x 10^0 (suppose ± .001, or .05% error)
Result of subtraction: 0.003 x 10^0, but the true answer could be as small as 2.002 - 2.001 = 0.001, or as large as 2.004 - 1.999 = 0.005!
1999-2008 Dianne P. O'Leary 41

Error Analysis Errors can be magnified during computation. Example:
  2.003 x 10^0 (suppose ± .001, or .05% error)
- 2.000 x 10^0 (suppose ± .001, or .05% error)
Result of subtraction: 0.003 x 10^0 (± .002, or 200% error if the true answer is 0.001). Catastrophic cancellation, or loss of significance.
1999-2008 Dianne P. O'Leary 42
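
A quick sketch of the same interval reasoning in Python (the ± .001 uncertainties are the slide's; the code itself is mine):

x, y = 2.003, 2.000                     # measured values, each uncertain by +/- 0.001
computed = x - y                        # about 0.003
smallest = (x - 0.001) - (y + 0.001)    # about 0.001: the true difference could be this small
largest  = (x + 0.001) - (y - 0.001)    # about 0.005: or this large
print(computed, smallest, largest)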

Error Analysis We could generalize this example to prove a theorem: When adding or subtracting, the bounds on absolute errors add. 1999-2008 Dianne P. O'Leary 43

Error Analysis What if we multiply or divide? Suppose x and y are the true values, and X and Y are our approximations to them. If X = x(1 - r) and Y = y(1 - s), then r is the relative error in x and s is the relative error in y. Since XY = xy(1 - r)(1 - s) = xy(1 - r - s + rs), you could show that |xy - XY| / |xy| ≤ |r| + |s| + |r s|. 1999-2008 Dianne P. O'Leary 44
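
A numerical spot-check of this bound (a sketch with made-up values of r and s):

import random

random.seed(0)
for _ in range(5):
    x, y = random.uniform(1, 10), random.uniform(1, 10)
    r, s = random.uniform(-1e-3, 1e-3), random.uniform(-1e-3, 1e-3)
    X, Y = x * (1 - r), y * (1 - s)
    lhs = abs(x * y - X * Y) / abs(x * y)
    rhs = abs(r) + abs(s) + abs(r * s)
    print(lhs <= rhs + 1e-15)       # True each time (tiny slack for roundoff in the check itself)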

Error Analysis Therefore, When adding or subtracting, the bounds on absolute errors add. When multiplying or dividing, the bounds on relative errors add (approximately). But we may also have additional error -- for example, from chopping or rounding the answer. Error bounds are useful, but they can be pessimistic. Backward error analysis, discussed later, is an alternative. 1999-2008 Dianne P. O'Leary 45

Avoiding error build-up Sometimes error can be avoided by clever tricks. As an example, consider catastrophic cancellation that can arise when solving for the roots of a quadratic polynomial. 1999-2008 Dianne P. O'Leary 46

Cancellation example Example: Find the roots of x^2 - 56x + 1 = 0. Usual algorithm:
x_1 = 28 + sqrt(783) = 28 + 27.982 (± .0005) = 55.982 (± .0005)
x_2 = 28 - sqrt(783) = 28 - 27.982 (± .0005) = 0.018 (± .0005)
The absolute error bounds are the same, but the relative error bounds are 10^-5 vs. .02!
1999-2008 Dianne P. O'Leary 47

Avoiding cancellation Three tricks: 1) Use an alternate formula. The product of the roots equals the low-order term in the polynomial. So x_2 = 1 / x_1 = .0178629 (± 2 x 10^-7) by our error propagation formula. 1999-2008 Dianne P. O'Leary 48
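
A sketch (mine, not from the slides) comparing the usual algorithm with trick 1, where rounding sqrt(783) to 27.982 stands in for the slides' 5-significant-digit machine:

import math

true_x2 = 28 - math.sqrt(783)       # about 0.01786, computed in full double precision
s = round(math.sqrt(783), 3)        # 27.982, the 5-significant-digit value from the slide
x1 = 28 + s                         # 55.982
x2_usual = 28 - s                   # 0.018: cancellation leaves only about two correct digits
x2_trick = 1 / x1                   # about 0.01786: the alternate formula keeps its accuracy
print(true_x2, x2_usual, x2_trick)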

Avoiding cancellation 2) Rewrite the formula:
sqrt(x+e) - sqrt(x) = [sqrt(x+e) - sqrt(x)] [sqrt(x+e) + sqrt(x)] / [sqrt(x+e) + sqrt(x)]
                    = (x + e - x) / [sqrt(x+e) + sqrt(x)]
                    = e / [sqrt(x+e) + sqrt(x)],
so x_2 = 28 - sqrt(783) = sqrt(784) - sqrt(783) = 1 / [sqrt(784) + sqrt(783)].
1999-2008 Dianne P. O'Leary 49
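
The effect of trick 2 is easiest to see in a more extreme sketch (the values of x and e below are chosen by me, not taken from the slides):

import math

x, e = 1.0e8, 1.0e-3
naive     = math.sqrt(x + e) - math.sqrt(x)        # subtraction of nearly equal numbers
rewritten = e / (math.sqrt(x + e) + math.sqrt(x))  # algebraically equal, no cancellation
print(naive)                        # near 5e-08, but the trailing digits are wrong
print(rewritten)                    # 5e-08 to essentially full double precision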

Avoiding cancellation 3) Use Taylor series. Let f(x) = sqrt(x). Then f(x+a) - f(x) = f'(x) a + (1/2) f''(x) a^2 + ... 1999-2008 Dianne P. O'Leary 50
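
For f(x) = sqrt(x) this gives sqrt(x+a) - sqrt(x) ≈ a / (2 sqrt(x)) - a^2 / (8 x^(3/2)), which again avoids the subtraction; a sketch using the same x and a as above:

import math

x, a = 1.0e8, 1.0e-3
taylor = a / (2 * math.sqrt(x)) - a ** 2 / (8 * x ** 1.5)
print(taylor)                       # about 5e-08, matching the rewritten formula above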

How errors are measured 1999-2008 Dianne P. O'Leary 51

Error analysis Error analysis determines the cumulative effects of error. Two approaches: Forward error analysis Backward error analysis 1999-2008 Dianne P. O'Leary 52

1) Forward error analysis This is the way we have been discussing. Find an estimate for the answer, and bounds on the error. [Diagram: the space of problems on the left, the space of answers on the right.] 1999-2008 Dianne P. O'Leary 53

1) Forward error analysis This is the way we have been discussing. Find an estimate for the answer, and bounds on the error. [Diagram: the true problem (known, in the space of problems) maps to the true solution (unknown, in the space of answers).] 1999-2008 Dianne P. O'Leary 54

1) Forward error analysis This is the way we have been discussing. Find an estimate for the answer, and bounds on the error. [Diagram: the true problem (known) maps to the true solution (unknown); the computed solution is marked nearby in the space of answers.] 1999-2008 Dianne P. O'Leary 55

1) Forward error analysis This is the way we have been discussing. Find an estimate for the answer, and bounds on the error. [Diagram: around the computed solution lies a known region in the space of answers that is guaranteed to contain the true solution.] 1999-2008 Dianne P. O'Leary 56

1) Forward error analysis Report computed solution and location of region to the user. [Diagram: the computed solution (known) and the region guaranteed to contain the true solution, in the space of answers.] 1999-2008 Dianne P. O'Leary 57

2) Backward error analysis Given an answer, determine how close the problem actually solved is to the given problem. [Diagram: the true problem (known) in the space of problems; the true solution (unknown) and the computed solution in the space of answers.] 1999-2008 Dianne P. O'Leary 58

2) Backward error analysis Given an answer, determine how close the problem actually solved is to the given problem. [Diagram: in the space of problems, the true problem (known) and the problem we solved (unknown); in the space of answers, the computed solution and the true solution (unknown).] 1999-2008 Dianne P. O'Leary 59

2) Backward error analysis Report computed solution and location of region to the user. [Diagram: in the space of problems, a region containing both the true problem and the solved problem; in the space of answers, the computed solution.] 1999-2008 Dianne P. O'Leary 60

Arithmetic and Error Summary:
-- how does error arise
-- how machines do arithmetic
-- fixed point arithmetic
-- floating point arithmetic
-- how errors are propagated in calculations
-- how to measure error
1999-2008 Dianne P. O'Leary 61