Math 471. Numerical methods Introduction


Sections 1.1-1.4 of Bradie

1.1 Algorithms

Here is an analogy between numerical methods and gastronomy:

    Calculus, Lin. Alg., Diff. Eq.   <->   Ingredients
    Algorithm                        <->   Recipe
    Coding                           <->   Cooking

Algorithms bridge abstract mathematical ideas and the practical implementation of these ideas. They are often written in pseudo-code and require translation into computer code; one becomes comfortable with this conversion after some practice. What is mathematically more important, however, is the set of theoretical principles that drive these algorithms. Studying them is the main task of this course. The name Numerical Analysis is often used in lieu of Numerical Methods to emphasize the theoretical flavor of this subject.

Example. Find the cube root of a real number a using the iterative scheme

    x_{n+1} = (1/2) (x_n + a / x_n^2).    (1)

The pseudo-code will look like

    ************ Algorithm 1 *************
    Input a
    x_1 = initial guess
    while (stop criterion is not met)
        use (1): x_{n+1} = 0.5 * (x_n + a / x_n / x_n)
    end while
    Output x_n
    ********* end of Algorithm 1 *********

Questions to ask before coding:

1. Is this the right scheme? Translated into mathematical terms: can we prove the convergence lim_{n→∞} x_n = a^{1/3} for any a?
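As a quick sanity check on Algorithm 1, here is a minimal sketch in Python (the course codes in Matlab, but the translation is mechanical). The function name cube_root, the update-size stop criterion, and the default initial guess are illustrative choices, not prescribed by the notes:

```python
def cube_root(a, x0=1.0, tol=1e-12, max_iter=200):
    """Iterate scheme (1): x_{n+1} = (x_n + a / x_n^2) / 2.
    Assumes a > 0 and x0 > 0; stops when successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = 0.5 * (x + a / (x * x))   # use (1)
        if abs(x_new - x) < tol:          # stop criterion: small update
            return x_new
        x = x_new
    return x

print(cube_root(8.0))    # should approach 8^(1/3) = 2
print(cube_root(27.0))   # should approach 27^(1/3) = 3
```

Watching the iterates for a = 8 already hints at answers to the questions here: the sequence oscillates around 2, and the error appears to be roughly halved at each step.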

2. What initial guess should we use for x_1? Certainly not 0, nor anything too close to 0. Will the initial guess affect the convergence?

3. How fast does x_n converge to a^{1/3}, if it converges at all?

4. What is the stop criterion?

1.2 Convergence

Convergence is a central tool for analyzing the performance of an algorithm. To quantify it, one uses the convergence rate: we say the sequence {x_n} converges to x at rate O(r_n) if

    |x_n - x| <= C r_n

for some fixed constant C and all sufficiently large n. Here we do require lim_{n→∞} r_n = 0 so that, by the comparison method, the sequence of interest {x_n} also converges, i.e. lim_{n→∞} |x_n - x| = 0.

Remark: the phrase "for sufficiently large n" is understood as "for all n >= N", with N a fixed number.

Usually, the convergence rate r_n is a simpler expression than x_n itself. It gives us a certain confidence that the error |x_n - x| is at worst C r_n. Also, the actual value of C is less important and is often omitted in discussions.

Example. sin(1/n) converges to 0 at rate O(1/n). Why? Because |sin(x)| <= |x| for real x.

Example. Given the following identity (the proof of which is nontrivial already!),

    π = 4 Σ_{n=0}^{∞} (-1)^n / (2n+1),    (2)

one can use the partial sum of the first m+1 terms,

    S_m = 4 Σ_{n=0}^{m} (-1)^n / (2n+1),

to approximate the numerical value of π. Obviously, the larger m is, the closer S_m is to π. But how sure are you about the accuracy of S_m? The answer: S_m converges to π at rate O(1/m).
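The O(1/m) rate is easy to probe numerically. A small Python sketch (the helper name leibniz_partial_sum is mine, not from the notes):

```python
import math

def leibniz_partial_sum(m):
    """S_m = 4 * sum_{n=0}^{m} (-1)^n / (2n+1), the partial sum of (2)."""
    return 4.0 * sum((-1) ** n / (2 * n + 1) for n in range(m + 1))

for m in (10, 100, 1000):
    err = abs(math.pi - leibniz_partial_sum(m))
    print(m, err, err * m)   # err * m stays bounded if err = O(1/m)
```

The last column stays near 1, consistent with an error of order 1/m, and it does not shrink toward 0, so the rate is not better than O(1/m) either.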

Why? The alternating series we learned in Calculus turn out to be handy here. Writing (2) out,

    π = 4 (1 - 1/3 + 1/5 - 1/7 + 1/9 - ...),

which is an alternating series. The difference π - S_m, also called the truncation error, is

    π - S_m = 4 Σ_{n=m+1}^{∞} (-1)^n / (2n+1)
            = 4 ( (-1)^{m+1}/(2m+3) + (-1)^{m+2}/(2m+5) + (-1)^{m+3}/(2m+7) + ... ).

So it boils down to showing that this error is bounded by a constant times 1/m. To do this, we factor out (-1)^{m+1}:

    π - S_m = 4 (-1)^{m+1} ( 1/(2m+3) - 1/(2m+5) + 1/(2m+7) - ... ).

Remember the trick for dealing with alternating series: pair up adjacent terms so that each pair has a definite sign; then leave the first term alone and do the pairing again. Pairing from the first term,

    (-1)^{m+1} (π - S_m) / 4 = ( 1/(2m+3) - 1/(2m+5) ) + ( 1/(2m+7) - 1/(2m+9) ) + ...,

where every parenthesized pair is positive, which implies (-1)^{m+1} (π - S_m) / 4 > 0. Now do another pairing, but leave the first term out:

    (-1)^{m+1} (π - S_m) / 4 = 1/(2m+3) - ( 1/(2m+5) - 1/(2m+7) ) - ( 1/(2m+9) - 1/(2m+11) ) - ...,

where again every parenthesized pair is positive, which implies (-1)^{m+1} (π - S_m) / 4 < 1/(2m+3). Now that we have a lower AND an upper bound for π - S_m, we can estimate

    0 < (-1)^{m+1} (π - S_m) < 4/(2m+3) < 2/m,

which, by the definition of convergence rate, leads to the desired conclusion |π - S_m| = O(1/m).

Another quantification of convergence is the order of convergence: we say {x_n} converges to x with order α and asymptotic error constant λ if

    lim_{n→∞} |x_{n+1} - x| / |x_n - x|^α = λ.

Here, α and λ are both positive. We will elaborate on this concept later, when we come to fixed point iteration. For a simple example, look at the geometric sequence {2^{-n}}: it converges to 0 with order α = 1 and asymptotic error constant λ = 1/2.

α = 1: linear convergence; α = 2: quadratic convergence; α = 3: cubic convergence; ...

1.3 Floating point number system

β: base. n: number of significant digits. k: exponent. A floating point number has the form

    ±(0.d_1 d_2 ... d_n)_β × β^k,    d_1 ≠ 0 except for the number zero,

where 0.d_1 d_2 ... d_n is the significand. (Optional: given a number x.y, how does one calculate its floating point representation in base β?)

Roundoff error is introduced by converting a real number to its floating point equivalent.

IEEE standard, e.g. double precision: β = 2 and each number occupies 64 bits, accounted for as one sign for the number and one sign for the exponent:

    64 bits = 2 sign bits + 52 significand bits + 10 exponent bits.

(In the actual IEEE 754 layout these 64 bits are 1 sign bit, an 11-bit biased exponent, and a 52-bit fraction.)

Sources of error: theory, model, algorithm (numerical method), roundoff error, experimental data, ...

Example. An object has measured volume = 5 ± 2 m^3 and mass = 1 ± 0.001 kg. Find the range of its density.

Example. In a number system with β = 10 and n = 4, the numbers a = 1 and b = 1.0001 are considered the same due to roundoff. The computed difference a - b then becomes 0 rather than -0.0001, and any calculation that divides by a - b becomes N/A. This is called cancellation error: subtraction of nearly equal numbers should be computed in other ways (see the next section).

1.4 Finite precision arithmetic

The finite precision of computer arithmetic is due to the finite number of digits used in a floating point system. A major pitfall of a floating point system is cancellation error. It happens in a calculation b - a with |b - a| ≪ |b + a| (decrease of magnitude → loss of significant digits).
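The β = 10, n = 4 cancellation example can be mimicked in Python by rounding every stored value to 4 significant digits. The helper fl4 is a toy stand-in for such a number system, not a real library routine:

```python
def fl4(x):
    """Round x to 4 significant decimal digits: a toy model of a
    floating point system with beta = 10 and n = 4."""
    return float(f"{x:.3e}")   # 1 digit + 3 after the point = 4 significant digits

a, b = 1.0, 1.0001
print(fl4(a), fl4(b))      # both are stored as 1.0
print(fl4(b) - fl4(a))     # 0.0: the true difference 1e-4 is lost entirely
```

All significant digits of the difference cancel, which is why the next section reformulates such subtractions instead of computing them directly.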

Quadratic formula. For ax^2 + bx + c = 0,

    x = (-b ± √(b^2 - 4ac)) / (2a).

The cancellation error from the numerator may cause an inaccurate significand, especially when one root is close to zero and the other is not (an ill conditioned case).

Example. Consider 0.2x^2 - 47.91x + 6 = 0. To 10 significant digits (higher precision), the roots are 0.2394246996 × 10^3 and 0.1253003555. But 4-digit (lower precision) arithmetic gives 0.2394 × 10^3 (ok) and 0.15 (loss of precision). Note:

    √(47.91^2 - 4 × 0.2 × 6) = √(2295 - 4.8) = √2290 = 47.85,
    x_{1,2} = (47.91 ± 47.85) / 0.4,

so there is cancellation error in the small root.

To avoid the subtraction of nearly equal numbers, reformulate the problem by rationalizing the numerator:

    (-b - √(b^2 - 4ac)) / (2a) = 2c / (-b + √(b^2 - 4ac)) = 2 × 6 / (47.91 + 47.85) = 0.1253. ok!

Finite difference. Heuristic: f'(x) = lim_{y→x} (f(y) - f(x)) / (y - x). Thus

    f'(x) ≈ D_+ f(x) = (f(x + h) - f(x)) / h    for small h > 0,    (3)

called the forward difference approximation. There are also backward differences, centered differences, ...

The error comes from two sources:

1. Truncation error, due to discretization (derivative → finite difference). By the Taylor series

    f(x + h) = f(x) + f'(x) h + f''(x) h^2/2 + ... = f(x) + f'(x) h + f''(ξ) h^2/2    for some ξ ∈ [x, x + h],

we have

    D_+ f(x) = (f(x + h) - f(x)) / h = f'(x) + f''(ξ) h / 2 = f'(x) + O(h).

So the truncation error is at most C h with constant C = max |f''(ξ)| / 2.

2. Roundoff error: the machine precision introduces an error of about 2^{-52} |f(x)| in computing f(x + h) - f(x). This error is then amplified by the factor 1/h in (3), giving

    2^{-52} |f(x)| / h.
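The rationalized formula can be packaged as a stable quadratic solver. A Python sketch (assuming real roots and a ≠ 0; the name stable_roots is mine, not from the notes):

```python
import math

def stable_roots(a, b, c):
    """Roots of a x^2 + b x + c = 0, assuming b^2 - 4ac >= 0 and a != 0.
    The sqrt is combined with b so that their signs agree: the numerator
    never subtracts nearly equal numbers, and the delicate root comes
    from the rationalized form 2c / (-b -/+ sqrt(b^2 - 4ac))."""
    disc = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(disc, b))   # no cancellation here
    return q / a, c / q                        # the two roots

r1, r2 = stable_roots(0.2, -47.91, 6.0)
print(r1, r2)   # approximately 239.4246996 and 0.1253003555
```

Note that c/q is the product-of-roots identity x_1 x_2 = c/a in disguise: the small root is recovered by a division rather than by a cancelling subtraction.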

So, the total error is majorized by one of the two sources, depending on the size of h: the truncation bound C h dominates when h is larger than the crossover value where C h ≈ 2^{-52}/h (roughly h ≈ 2^{-26}), and the roundoff term dominates when h is smaller.

Example. If f(x) = e^x and x = 1, then f'(1) = e = 2.71828... is the exact value. Setting, for instance, h = 0.05, 0.025, 0.0125 and h = 2^{-28}, 2^{-35}, 2^{-48}, we calculate the following table:

    h        D_+f      D_+f - f'(x)    (D_+f - f'(x))/h
    0.05     2.7874    0.0691          1.3821
    0.025    2.7525    0.0343          1.3705
    0.0125   2.7353    0.0171          1.3648

    (As h → 0, the tendency of convergence seems to be:)
    0        e         0               e/2

    (however)
    2^{-28}  2.718     3.666 × 10^-8   9.841
    2^{-35}  2.718     1.041 × 10^-5   3.576 × 10^5
    2^{-48}  2.75      0.0317          8.928 × 10^12

Each line in this table can be calculated using the Matlab code

    h=0.05;
    a(1)=h; a(2)=(exp(1+h)-exp(1))/h; a(3)=a(2)-exp(1); a(4)=a(3)/h;
    a

The last line, without a semicolon, prints out the value of a in the command window. To make the above code reusable, we encapsulate it in a function Ch1F1:

    function a=Ch1F1(h)
    a(1)=h; a(2)=(exp(1+h)-exp(1))/h; a(3)=a(2)-exp(1); a(4)=a(3)/h;

Store this code in a file named Ch1F1.m (the function name must match the file name) and call this function from the command window or from within another function.

Tip: use format short e before displaying an array so that each entry of the array has its own exponent. Try format short and see the difference.
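For reference, the same experiment can be sketched in Python rather than Matlab (same f(x) = e^x at x = 1; the helper name forward_diff is mine):

```python
import math

def forward_diff(f, x, h):
    """Forward difference D_+ f(x) = (f(x + h) - f(x)) / h, formula (3)."""
    return (f(x + h) - f(x)) / h

for h in (0.05, 0.025, 0.0125, 2.0 ** -28, 2.0 ** -35, 2.0 ** -48):
    d = forward_diff(math.exp, 1.0, h)
    err = d - math.e
    print(f"{h:10.3e}  {d:8.4f}  {err:11.4e}  {err / h:11.4e}")
```

For the moderate h the last column stays near f''(1)/2 = e/2 ≈ 1.359, confirming the O(h) truncation error; for the tiny h roundoff dominates and D_+f drifts away from e, as in the table above.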