ROUNDOFF ERRORS; BACKWARD STABILITY

SECTION 1.5  ROUNDOFF ERRORS; BACKWARD STABILITY

ROUNDOFF ERROR -- error due to the finite representation (usually in floating-point form) of real (and complex) numbers in digital computers.

FLOATING-POINT NUMBER SYSTEM -- a finite approximation to the real number system, in which numbers are represented in the form

    ± 0.d1 d2 ... dk × β^e,    with d1 ≠ 0

(that is, the floating-point numbers are "normalized"), where

    k is called the precision (that is, the number of significant digits),
    β is the base,
    e is the exponent, which lies in some range m ≤ e ≤ M.

Two ways to represent a real number in floating-point: for example, suppose that x = π = 3.14159265..., with k = 4 and β = 10. Then

    with ROUNDING:  fl(x) = +0.3142 × 10^1
    with CHOPPING:  fl(x) = +0.3141 × 10^1

Definition of IDEALIZED FLOATING-POINT ARITHMETIC. If x and y denote floating-point numbers, then fl(x ± y), fl(x × y) and fl(x / y) are computed by performing exact arithmetic on the values x and y, and then rounding or chopping that result to k digits. Also, for example, fl(cos(x)) and fl(√x) are defined similarly.

BASIC BOUNDS ON ROUNDOFF ERROR. If x is a real number, then

    fl(x) = x(1 + ε),    (*)

where |ε| ≤ u.

Here

    u = (1/2) β^(1−k)   using rounding,
    u = β^(1−k)         using chopping.

The number u is called the UNIT ROUNDOFF (see the textbook).

NOTE. From (*) above,

    ε = (fl(x) − x) / x,

that is, ε is the relative error in the approximation fl(x) to x.

PROOF that (*) is correct: suppose that β^(t−1) ≤ x < β^t for some integer t. Then it can be shown that the distance between every two adjacent floating-point numbers in the interval [β^(t−1), β^t] is β^(t−k). Thus, with chopping, the relative error is

    |fl(x) − x| / x ≤ β^(t−k) / β^(t−1) = β^(1−k)

(and with rounding, a factor of 1/2 is included).

A BOUND ON THE ROUNDOFF ERROR OF IDEALIZED FLOATING-POINT ARITHMETIC. Let ∘ denote any one of +, −, ×, /. Then if x and y are floating-point numbers, by the above result and using idealized floating-point arithmetic,

    fl(x ∘ y) = (x ∘ y)(1 + ε),    where |ε| ≤ u.

(See (1.5.3) in the textbook.) Thus, the relative error in doing one floating-point arithmetic operation is small.

NOTE. If x̂ and ŷ are real numbers but are not floating-point numbers, then the relative error in the computation of fl(x̂ + ŷ) = fl(fl(x̂) + fl(ŷ)) may not be small (although this involves only 3 roundoff errors). See the bottom of page 43 of the textbook. The reason for this is possible numeric cancellation.
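The definitions above can be made concrete in a few lines of Python. The decimal module works in base β = 10 with a chosen precision, so a Context with prec=4 and a rounding mode simulates the idealized k = 4 arithmetic of this section; this is a sketch, and the sample values (π, 0.123456, −0.123455) are illustrative choices, not prescribed by the notes.

```python
from decimal import Decimal, Context, ROUND_DOWN, ROUND_HALF_UP

chop = Context(prec=4, rounding=ROUND_DOWN)     # chopping, beta = 10, k = 4
rnd = Context(prec=4, rounding=ROUND_HALF_UP)   # rounding, beta = 10, k = 4

x = Decimal("3.14159265")
print(rnd.plus(x), chop.plus(x))   # 3.142 and 3.141, as in the pi example

# fl(x) = x(1 + eps), with |eps| <= u = (1/2)*10**(1-4) for rounding
# and |eps| <= u = 10**(1-4) for chopping:
assert abs(rnd.plus(x) - x) / x <= Decimal("0.0005")
assert abs(chop.plus(x) - x) / x <= Decimal("0.001")

# One operation on machine numbers also has relative error at most u, but
# cancellation can ruin a sum of real numbers xh, yh that must first be
# represented: each of fl(xh), fl(yh) is accurate to within u, yet
# fl(fl(xh) + fl(yh)) has NO correct digits.
xh, yh = Decimal("0.123456"), Decimal("-0.123455")
s = chop.add(chop.plus(xh), chop.plus(yh))   # fl(0.1234 - 0.1234) = 0
exact = xh + yh                              # 0.000001
print(abs(s - exact) / abs(exact))           # 1, i.e. 100% relative error
```

The decimal Context applies its rounding after every operation, which is exactly the "exact arithmetic, then round or chop to k digits" rule of idealized floating-point arithmetic.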

ANOTHER WAY OF VIEWING THE ABOVE POINT: if x, y and z are all floating-point numbers, then the relative error of fl(x + y + z) may be large (as this involves two roundoff errors, and there may be cancellation).

EXAMPLE. Let

    x = +0.3141 × 10^1,
    y = +0.5600 × 10^(−3),
    z = −0.3140 × 10^1,

and use idealized chopping floating-point arithmetic with β = 10 and k = 4 to evaluate fl(x + y + z). The result obtained is 0.00100; however, the exact value is 0.00156. Thus the relative error is

    (0.00156 − 0.00100) / 0.00156 = 0.359,

or 35.9%.

There are two TYPES OF ANALYSES of roundoff error:

(i) forward (or direct) analysis
    -- determine a bound on |exact solution − computed solution|
    -- requires that one determine a bound on the maximum error in every calculation of the computation
    -- this is difficult to do, and the results are often overly pessimistic (for example, the early work of von Neumann and Goldstine)

Example of such an analysis: using the result fl(x + y) = (x + y)(1 + ε), one obtains

    fl(x + y + z) = fl(fl(x + y) + z)
                  = ((x + y)(1 + ε1) + z)(1 + ε2)
                  = (x + y)(1 + ε1 + ε2 + ε1 ε2) + z(1 + ε2).

Therefore

    fl(x + y + z) − (x + y + z) = (x + y)(ε1 + ε2 + ε1 ε2) + z ε2,

and dividing both sides by x + y + z gives the relative error.
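The example's arithmetic can be replayed step by step. Below is a minimal sketch using Python's decimal module to simulate the 4-digit chopping arithmetic; the data x = +0.3141 × 10^1, y = +0.5600 × 10^(−3), z = −0.3140 × 10^1 are one consistent choice of values reproducing the 35.9% relative error.

```python
from decimal import Decimal, Context, ROUND_DOWN

C = Context(prec=4, rounding=ROUND_DOWN)   # idealized chopping, beta = 10, k = 4

x = Decimal("0.3141E1")     # =  3.141
y = Decimal("0.5600E-3")    # =  0.00056
z = Decimal("-0.3140E1")    # = -3.140

s1 = C.add(x, y)            # fl(x + y) = fl(3.14156) = 3.141 (digits chopped)
s2 = C.add(s1, z)           # fl(3.141 - 3.140) = 0.001

exact = Decimal("0.00156")  # exact value of x + y + z
print(s2)                   # 0.001
print((exact - s2) / exact) # ~0.359, i.e. 35.9% relative error
```

The y contribution is chopped away entirely in fl(x + y), and the subsequent cancellation between x and z then magnifies that small absolute error into a large relative error.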

(ii) backward (or inverse) analysis

Given a set of data z1, z2, ..., zm, denote some computation on this data by C(z1, z2, ..., zm). A backward error analysis requires that one prove that there exist values z̃1, z̃2, ..., z̃m (small perturbations of the values z1, z2, ..., zm) for which

    fl(C(z1, z2, ..., zm)) = C(z̃1, z̃2, ..., z̃m).

That is, the result of the floating-point computation with the values z1, z2, ..., zm is equal to the exact value of the computation of C using the perturbed values z̃1, z̃2, ..., z̃m.

EXAMPLE. Proving a result of the form

    fl(x + y + z) = x̃ + ỹ + z̃,    where x̃ ≈ x, ỹ ≈ y and z̃ ≈ z,

would be a backward error analysis result.

A backward error analysis has a close relationship to stability: if the perturbations are small, then the computation of C is stable. Sometimes such a computation is said to be backward stable.

NOTE. To prove that an algorithm is stable, one needs to do a backward error analysis. But this does not guarantee that a computed solution, say x̂, is accurate (close to the exact solution). To determine this, you need to determine the condition of the problem. If the algorithm is stable and if the problem is well-conditioned, then the computed solution is accurate. So in this case, one doesn't need a forward error analysis result. See page 46 for a discussion of this in terms of solving Ax = b.

*************************************************

Small residual implies backward stability (pages 46-47)

The development of backward error analyses is due to James Wilkinson, in the 1950s and 1960s. Usually if you do a backward error analysis of an algorithm (such as Gaussian elimination) it will show that the algorithm is backward stable only for certain sets of input data. That is, the perturbations are small only for certain sets of input data, and not for all possible sets of input data.
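For a three-term sum, the perturbed data can be exhibited concretely. Writing fl(fl(x + y) + z) = ((x + y)(1 + ε1) + z)(1 + ε2), the choices x̃ = x(1 + ε1)(1 + ε2), ỹ = y(1 + ε1)(1 + ε2), z̃ = z(1 + ε2) satisfy fl(x + y + z) = x̃ + ỹ + z̃ exactly. The sketch below verifies this, assuming the same example data as above and using exact rational arithmetic to recover the rounding factors; the variable names are illustrative.

```python
from decimal import Decimal, Context, ROUND_DOWN
from fractions import Fraction

C = Context(prec=4, rounding=ROUND_DOWN)   # idealized chopping, beta = 10, k = 4

# Floating-point data (the values from the example above).
x, y, z = Decimal("0.3141E1"), Decimal("0.5600E-3"), Decimal("-0.3140E1")

s1 = C.add(x, y)    # fl(x + y)         = (x + y)(1 + e1)
s2 = C.add(s1, z)   # fl(fl(x + y) + z) = (fl(x + y) + z)(1 + e2)

# Recover e1 and e2 exactly with rational arithmetic.
X, Y, Z, S1, S2 = (Fraction(v) for v in (x, y, z, s1, s2))
e1 = S1 / (X + Y) - 1
e2 = S2 / (S1 + Z) - 1

# Perturbed data for which the computed result is the EXACT answer.
xt = X * (1 + e1) * (1 + e2)
yt = Y * (1 + e1) * (1 + e2)
zt = Z * (1 + e2)

print(xt + yt + zt == S2)    # True: fl(x + y + z) = xt + yt + zt exactly
print(float(e1), float(e2))  # each perturbation factor is tiny (|e| <= u)
```

Note that the perturbations are small (|ε1|, |ε2| ≤ u = 10^(−3)) even though, as the example showed, the relative error of the computed sum is 35.9%: the computation is backward stable, but the sum itself is ill-conditioned for this data.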

For the problem of solving a fixed linear system Ax = b, there is a simple a posteriori method of checking the backward stability of the computation of a computed solution: use the residual vector. If x̂ denotes any computed solution to Ax = b, let r̂ = b − Ax̂. As noted in Section 1.4, x̂ is the exact solution of the linear system

    Az = b + δb,    where δb = −r̂.

Thus, if ‖δb‖ = ‖r̂‖ is small, then the algorithm that was used to compute x̂ is backward stable (for this particular input data). Exercise 1.5.6 on page 47 gives a similar result for a perturbation in A rather than b.
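This residual check is easy to carry out in code. A minimal sketch, using a made-up 2×2 system solved by Gaussian elimination in ordinary IEEE double precision:

```python
# Solve a small 2x2 system Ax = b by Gaussian elimination, then check
# backward stability a posteriori via the residual rhat = b - A*xhat.
A = [[3.0, 1.0],
     [1.0, 2.0]]
b = [5.0, 5.0]          # exact solution is x = (1, 2)

# Forward elimination (no pivoting needed: |A[0][0]| is the column maximum).
m = A[1][0] / A[0][0]
a22 = A[1][1] - m * A[0][1]
b2 = b[1] - m * b[0]

# Back substitution.
x2 = b2 / a22
x1 = (b[0] - A[0][1] * x2) / A[0][0]
xhat = [x1, x2]

# xhat is the exact solution of Az = b + db with db = -rhat, so a small
# relative residual certifies backward stability for this input data.
rhat = [b[i] - (A[i][0] * xhat[0] + A[i][1] * xhat[1]) for i in range(2)]
rel_residual = max(abs(r) for r in rhat) / max(abs(v) for v in b)
print(xhat)             # very close to [1.0, 2.0]
print(rel_residual)     # tiny => this computation was backward stable
```

As the NOTE above cautions, a tiny residual certifies backward stability only; closeness of x̂ to the true solution additionally requires that the system be well-conditioned.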