Numerical Methods
Aaron Naiman, Jerusalem College of Technology
naiman@jct.ac.il, http://jct.ac.il/~naiman
Based on: Numerical Mathematics and Computing by Cheney & Kincaid, © 1994 Brooks/Cole Publishing Company, ISBN 0-534-20112-1
Copyright © 2014 by A. E. Naiman

Taylor Series (outline): Definitions and Theorems, Examples, Proximity of x to c, Additional Notes

Motivation
Sought: cos(0.1)
Missing: calculator or lookup table
Known: cos at another (nearby) value, i.e., at 0
Also known: lots of (all) derivatives at 0
Can we use them to approximate cos(0.1)?
What will be the worst error of our approximation?
These techniques are used by computers, calculators, tables.

Taylor Series
[Plot: $\cos x$ on $-4 \le x \le 4$ together with the Taylor polynomials $1$, $1 - \frac{x^2}{2}$, $1 - \frac{x^2}{2} + \frac{x^4}{4!}$ and $1 - \frac{x^2}{2} + \frac{x^4}{4!} - \frac{x^6}{6!}$.]
Better and better approximation, near $c$, and away.

Taylor Series
Series definition: if $f^{(k)}(c)$ exists, $k = 0,1,2,\ldots$, then:
$$f(x) \approx f(c) + f'(c)(x-c) + \frac{f''(c)}{2!}(x-c)^2 + \cdots = \sum_{k=0}^{\infty} \frac{f^{(k)}(c)}{k!}(x-c)^k$$
$c$ is a constant and much is known about it (the $f^{(k)}(c)$)
$x$ is a variable near $c$, and $f(x)$ is sought
With $c = 0$: Maclaurin series
What is the maximum error if we stop after $n$ terms?
Real life: crowd estimation: 100K $\pm$ 10K vs. 100K $\pm$ 1K
Key NM questions: What is the estimate? What is its max error?

Taylor's Theorem
Theorem: if $f \in C^{n+1}[a,b]$ then
$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(c)}{k!}(x-c)^k + \frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x-c)^{n+1}$$
where $x, c \in [a,b]$, and $\xi(x)$ lies in the open interval between $x$ and $c$.
Notes:
$f \in C(X)$ means $f$ is continuous on $X$
$f \in C^k(X)$ means $f, f', f'', f^{(3)}, \ldots, f^{(k)}$ are continuous on $X$
$\xi = \xi(x)$, i.e., a point whose position is a function of $x$
The error term is just like the other terms, with $k := n+1$
The $\xi$-term is the truncation error, due to series termination

Taylor Series Procedure
Setup: writing it out, step-by-step:
write the formula for $f^{(k)}(x)$
choose $c$ (if not already specified)
write out the summation and the error term
note: sometimes easier to write out a few terms
Things to (possibly) prove by analyzing the worst case $\xi$:
analytically: letting $n \to \infty$, the LHS remains $f(x)$ and the summation becomes the infinite Taylor series; if the error term $\to 0$, the infinite Taylor series represents $f(x)$
numerically: for a given $n$, what is the max of the error term?

Taylor Series: Examples

Taylor Series: $e^x$
$f(x) = e^x$, $|x| < \infty$; $f^{(k)}(x) = e^x$ $\forall k$; choose $c := 0$
We have
$$e^x = \sum_{k=0}^{n} \frac{x^k}{k!} + \frac{e^{\xi(x)}}{(n+1)!} x^{n+1}$$
As $n \to \infty$: take the worst case $\xi$ ($x > 0$: just less than $x$; $x < 0$?); the error term $\to 0$ (why?) $\Rightarrow$
$$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$

Taylor Series: $\sin x$
$f(x) = \sin x$, $|x| < \infty$; $f^{(k)}(x) = \sin\left(x + \frac{\pi k}{2}\right)$ $\forall k$; $c := 0$
We have
$$\sin x = \sum_{k=0}^{n} \frac{\sin\left(\frac{\pi k}{2}\right)}{k!} x^k + \frac{\sin\left(\xi(x) + \frac{\pi(n+1)}{2}\right)}{(n+1)!} x^{n+1}$$
Error term $\to 0$ as $n \to \infty$
Even-$k$ terms are zero $\Rightarrow$ with $l = 0,1,2,\ldots$, and $k = 2l+1$:
$$\sin x = \sum_{l=0}^{\infty} \frac{\sin\left(\frac{\pi(2l+1)}{2}\right)}{(2l+1)!} x^{2l+1} = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{(2k+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$$

Taylor Series: $\cos x$
$f(x) = \cos x$, $|x| < \infty$; $f^{(k)}(x) = \cos\left(x + \frac{\pi k}{2}\right)$ $\forall k$; $c := 0$
We have
$$\cos x = \sum_{k=0}^{n} \frac{\cos\left(\frac{\pi k}{2}\right)}{k!} x^k + \frac{\cos\left(\xi(x) + \frac{\pi(n+1)}{2}\right)}{(n+1)!} x^{n+1}$$
Error term $\to 0$ as $n \to \infty$
Odd-$k$ terms are zero $\Rightarrow$ with $l = 0,1,2,\ldots$, and $k = 2l$:
$$\cos x = \sum_{l=0}^{\infty} \frac{\cos\left(\frac{\pi(2l)}{2}\right)}{(2l)!} x^{2l} = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k}}{(2k)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$$

Numerical Example: cos(0.1)
We have 1) $f(x) = \cos x$ and 2) $c = 0$ $\Rightarrow$ obtain the series: $\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$
Actual value: $\cos(0.1) = 0.99500416527803\ldots$
With 3) $x = 0.1$ and 4) specific $n$'s, from Taylor approximations:

n    | approximation    | max error
0, 1 | 1                | 0.01/2!
2, 3 | 0.995            | 0.0001/4!
4, 5 | 0.99500416       | 0.000001/6!
6, 7 | 0.99500416527778 | 0.00000001/8!

(each pair includes the odd $k$, whose term is zero)
Can also ask: given tolerance $\tau$, what $n$ is needed?
Obtain an accurate approximation easily and quickly.
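A minimal Python sketch (not from the slides) of this table: it sums the series term by term and compares each truncation against the first omitted term, which bounds the error by the alternating series theorem discussed later.

```python
import math

x = 0.1
actual = math.cos(x)
s = 0.0
for k in range(4):  # partial sums 1, 1 - x^2/2!, 1 - x^2/2! + x^4/4!, ...
    s += (-1) ** k * x ** (2 * k) / math.factorial(2 * k)
    bound = x ** (2 * k + 2) / math.factorial(2 * k + 2)  # first omitted term
    print(f"n = {2*k},{2*k+1}: approx = {s:.14f}, "
          f"error = {abs(actual - s):.2e} <= {bound:.2e}")
```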

Taylor Series: $(1-x)^{-1}$
$f(x) = (1-x)^{-1}$; $f(1)$ does not exist $\Rightarrow 1 \notin [a,b]$; below: error $\to 0$ only on part of $|x| < 1$
$f^{(k)}(x) = \frac{k!}{(1-x)^{k+1}}$ $\forall k$; choose $c := 0$
We have
$$\frac{1}{1-x} = \sum_{k=0}^{n} x^k + \frac{(n+1)!}{(1-\xi(x))^{n+2}} \cdot \frac{x^{n+1}}{(n+1)!} = \sum_{k=0}^{n} x^k + \left(\frac{x}{1-\xi(x)}\right)^{n+1} \frac{1}{1-\xi(x)}$$
Why bother, with the LHS so simple? Ideas?
Problem: actually $\xi = \xi(n)$; $1 - \xi(x) \to$ ? Solution: later, $x$ bounded away from 1
Sufficient: $\left(\frac{x}{1-\xi(x)}\right)^{n+1} \to 0$ as $n \to \infty$
For what range of $x$ is this satisfied? Need to determine the radius of convergence.

$(1-x)^{-1}$ Range of Convergence
Sufficient: $\left|\frac{x}{1-\xi(x)}\right| < 1$
Approach:
get the variable $x$ in the middle of the sufficiency inequality
transform the range-of-$\xi$ inequality to the LHS and RHS of the sufficiency inequality
require a restriction on $x$, but check if it is already satisfied
$\xi < 1 \Rightarrow 1 - \xi > 0 \Rightarrow$ sufficient: $-(1-\xi) < x < 1-\xi$

$(1-x)^{-1}$ Range of Convergence (cont.)
Case $x < \xi < 0$:
LHS: $-(1-x) < -(1-\xi) < -1$; require: $-1 \le x$
RHS: $1 < 1-\xi < 1-x$; require: $x \le 1$
Case $0 < \xi < x$:
LHS: $-1 < -(1-\xi) < -(1-x)$; require: $-(1-x) \le x$, or: $-1 < 0$ (always holds)
RHS: $1-x < 1-\xi < 1$; require: $x \le 1-x$, or: $x \le \frac{1}{2}$
Therefore, for $-1 < x \le \frac{1}{2}$:
$$\frac{1}{1-x} = \sum_{k=0}^{\infty} x^k = 1 + x + x^2 + x^3 + \cdots \qquad \left(\text{Zeno: } x = \tfrac{1}{2}, \ldots\right)$$
Need more analysis for the whole range $|x| < 1$.

Taylor Series: $\ln x$
$f(x) = \ln x$, $0 < x \le 2$
$f^{(k)}(x) = (-1)^{k-1} \frac{(k-1)!}{x^k}$, $k \ge 1$
Choose $c := 1$
We have
$$\ln x = \sum_{k=1}^{n} (-1)^{k-1} \frac{(x-1)^k}{k} + (-1)^n \frac{(x-1)^{n+1}}{(n+1)\,\xi^{n+1}(x)}$$
Sufficient: $\left(\frac{x-1}{\xi(x)}\right)^{n+1} \to 0$ as $n \to \infty$
Again, for what range of $x$ is this satisfied?

$\ln x$ Range of Convergence
Sufficient: $\left|\frac{x-1}{\xi(x)}\right| < 1$ $\ldots$ $1-\xi < x < 1+\xi$
Case $1 < \xi < x$:
LHS: $1-x < 1-\xi < 0$; require: $0 \le x$
RHS: $2 < 1+\xi < 1+x$; require: $x \le 2$
Case $x < \xi < 1$:
LHS: $0 < 1-\xi < 1-x$; require: $1-x \le x$, or: $\frac{1}{2} \le x$
RHS: $1+x < 1+\xi < 2$; require: $x \le 1+x$ (always holds)
Therefore, for $\frac{1}{2} \le x \le 2$:
$$\ln x = \sum_{k=1}^{\infty} (-1)^{k-1} \frac{(x-1)^k}{k} = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \cdots$$
Again, need more analysis for the entire range of $x$.

Theorem: Ratio Test, and $\ln x$ Revisited
Ratio test: $\left|\frac{a_{n+1}}{a_n}\right| \to r < 1$ $\Rightarrow$ the partial sums converge
$\ln x$: ratio of adjacent summand terms (not the error term):
$$\left|\frac{a_{n+1}}{a_n}\right| = \frac{n}{n+1} |x-1| \to |x-1|$$
Obtain convergence of the partial sums for $0 < x < 2$
Note: not looking at $\xi$ and the error term
$x = 2$: $1 - \frac{1}{2} + \frac{1}{3} - \cdots$, which is convergent (why?)
$x = 0$: same series, all terms of the same sign $\Rightarrow$ divergent harmonic series
$\Rightarrow$ we have $0 < x \le 2$

$(1-x)^{-1}$ Revisited
Letting $x \to -x$:
$$\ln(1-x) = -\left(x + \frac{x^2}{2} + \frac{x^3}{3} + \cdots\right), \quad -1 \le x < 1$$
$\frac{d}{dx}$: LHS $= -\frac{1}{1-x}$ and RHS $= -(1 + x + x^2 + x^3 + \cdots)$
(!): no "=" for $x = -1$, as the RHS oscillates (note: correct average value)
$|x| < 1$: we have (also with the ratio test)
$$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots$$

Taylor Series: Proximity of x to c

Proximity of x to c
Problem: approximate $\ln 2$
Solution 1: Taylor of $\ln(1+x)$ around 0, with $x = 1$:
$$\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \frac{1}{7} - \frac{1}{8} + \cdots$$
Solution 2: Taylor of $\ln\frac{1+x}{1-x}$ around 0, with $x = \frac{1}{3}$:
$$\ln 2 = 2\left(\frac{1}{3} + \frac{1}{3^3 \cdot 3} + \frac{1}{3^5 \cdot 5} + \frac{1}{3^7 \cdot 7} + \cdots\right)$$

Proximity of x to c (cont.)
Approximated values, rounded:
Solution 1, first 8 terms: 0.63452
Solution 2, first 4 terms: 0.69313
Actual value, rounded: 0.69315
$\Rightarrow$ importance of proximity of the evaluation and expansion points
Accuracy $\leftrightarrow$ proximity, and for varying $c$
This error is in addition to the truncation error.
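A quick sketch reproducing the comparison (rounding as on the slide):

```python
import math

s1 = sum((-1) ** (k + 1) / k for k in range(1, 9))              # 8 terms
s2 = 2 * sum((1/3) ** (2*k + 1) / (2*k + 1) for k in range(4))  # 4 terms
print(round(s1, 5), round(s2, 5), round(math.log(2), 5))
# 0.63452  0.69313  0.69315 -- the closer expansion point converges far faster
```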

Taylor Series: Additional Notes

Polynomials and a Second Form
Polynomials $\in C^{\infty}(-\infty,\infty)$, have a finite number of non-zero derivatives; the Taylor series around any $c$ is the original polynomial, i.e., error $= 0$:
$$f(x) = 3x^2 + 1, \ldots, \quad f(x) = \sum_{k=0}^{2} \frac{f^{(k)}(0)}{k!} x^k = 1 + 0 \cdot x + 3x^2$$
Taylor's Theorem can be used for fewer terms, e.g.: approximate a $P_{17}$ near $c$ by a $P_3$
Taylor's Theorem, second form ($x$ = constant expansion point, $h$ = distance, $x+h$ = variable evaluation point): if $f \in C^{n+1}[a,b]$ then
$$f(x+h) = \sum_{k=0}^{n} \frac{f^{(k)}(x)}{k!} h^k + \frac{f^{(n+1)}(\xi(h))}{(n+1)!} h^{n+1}$$
$x, x+h \in [a,b]$, $\xi(h)$ in the open interval between $x$ and $x+h$

Taylor Approximation: $(1+3h)^{4/5}$
Define: $f(z) \equiv z^{4/5}$; $x = 1$ is the constant expansion point
Derivatives: $f'(z) = \frac{4}{5} z^{-1/5}$, $f''(z) = -\frac{4}{5^2} z^{-6/5}$, $f'''(z) = \frac{24}{5^3} z^{-11/5}$, ...
$$(x+h)^{4/5} = x^{4/5} + \frac{4}{5} x^{-1/5} h - \frac{4}{2!\,5^2} x^{-6/5} h^2 + \frac{24}{3!\,5^3} x^{-11/5} h^3 - \cdots$$
$$(x+3h)^{4/5} = x^{4/5} + \frac{4}{5} x^{-1/5}\,3h - \frac{4}{2!\,5^2} x^{-6/5}\,9h^2 + \frac{24}{3!\,5^3} x^{-11/5}\,27h^3 - \cdots$$
$$(1+3h)^{4/5} = 1 + \frac{4}{5}\,3h - \frac{4}{2!\,5^2}\,9h^2 + \frac{24}{3!\,5^3}\,27h^3 - \cdots = 1 + \frac{12}{5} h - \frac{18}{25} h^2 + \frac{108}{125} h^3 - \cdots$$

Second Form: $\ln(e+h)$
Evaluation of interest: $\ln(e+h)$, for $-e < h \le e$
Define: $f(z) \equiv \ln z$ ($z > 0$); $x = e$ is the constant expansion point
Derivatives:
$f(z) = \ln z$, so $f(e) = 1$
$f'(z) = z^{-1}$, so $f'(e) = e^{-1}$
$f''(z) = -z^{-2}$, so $f''(e) = -e^{-2}$
$f'''(z) = 2z^{-3}$, so $f'''(e) = 2e^{-3}$
$f^{(n)}(z) = (-1)^{n-1}(n-1)!\,z^{-n}$, so $f^{(n)}(e) = (-1)^{n-1}(n-1)!\,e^{-n}$

$\ln(e+h)$ Expansion and Convergence
Expansion (recall: $x = e$):
$$\ln(e+h) = f(x+h) = 1 + \sum_{k=1}^{n} (-1)^{k-1}(k-1)!\,e^{-k}\,\frac{h^k}{k!} + (-1)^n n!\,\xi(h)^{-(n+1)}\,\frac{h^{n+1}}{(n+1)!}$$
or
$$\ln(e+h) = 1 + \sum_{k=1}^{n} \frac{(-1)^{k-1}}{k}\left(\frac{h}{e}\right)^k + \frac{(-1)^n}{n+1}\left(\frac{h}{\xi(h)}\right)^{n+1}$$
Range of convergence, sufficient (for variable $h$): $-\xi < h < \xi$
case $e+h < \xi < e$: ... $-\frac{e}{2} \le h$; case $e < \xi < e+h$: ... $h \le e$
Can prove with: $\ln(e+h) = \ln\left(e\left(1 + \frac{h}{e}\right)\right) = 1 + \ln\left(1 + \frac{h}{e}\right)$, and the previous section

O() Notation and MVT
As $h \to 0$, we write the speed of $f(h) \to 0$; e.g., $f(h)$: $h$, $\frac{1}{1000} h$, $h^2$; let $h = \frac{1}{10}, \frac{1}{100}, \frac{1}{1000}, \ldots$
$$f(h) = O\left(h^k\right) \iff |f(h)| \le C\,|h|^k$$
Taylor truncation error $= O\left(h^{n+1}\right)$; if for a given $n$ the max exists, then $C := \max_{\xi(h)} \left|f^{(n+1)}(\xi(h))\right|/(n+1)!$
Mean value theorem (Taylor, $n = 0$): if $f \in C^1[a,b]$ then
$$f(b) = f(a) + (b-a) f'(\xi), \quad \xi \in (a,b), \qquad \text{or:} \quad f'(\xi) = \frac{f(b)-f(a)}{b-a}$$

Alternating Series Theorem
Alternating series theorem: if $a_k > 0$, $a_k \ge a_{k+1}$ $\forall k \ge 0$, and $a_k \to 0$, then
$$\sum_{k=0}^{\infty} (-1)^k a_k = S \quad \text{and} \quad |S - S_n| \le a_{n+1}$$
Intuitively understood
Note: the direction of the error is also known for a specific $n$
We had this with sin and cos
Another useful method for max truncation error estimation
Max truncation error estimation without $\xi$-analysis

$\ln(e+h)$ Max Truncation Error Estimate
What is the max error after $n+1$ terms?
The max error estimate also depends on proximity $\Rightarrow$ the size of $h$
From Taylor: obtain an $O\left(h^{n+1}\right)$ bound: error $\le \frac{1}{n+1}\,|h|^{n+1}\,\max_{\xi}\left|\frac{1}{\xi}\right|^{n+1}$
From AST (check the conditions!): also obtain $O\left(h^{n+1}\right)$, with a different constant: error $\le \frac{1}{n+1}\left|\frac{h}{e}\right|^{n+1}$
E.g.: $h = -\frac{e}{2}$:
$$\ln\frac{e}{2} = 1 - \frac{1}{2} - \frac{1}{2}\cdot\frac{1}{2^2} - \frac{1}{3}\cdot\frac{1}{2^3} - \frac{1}{4}\cdot\frac{1}{2^4} - \cdots$$
Taylor max error (occurs as $\xi \to e$): $\frac{1}{n+1}\cdot\frac{1}{2^{n+1}}$; AST max error: $\frac{1}{n+1}\cdot\frac{1}{2^{n+1}}$
Note: same $\ln$ estimate as before; in this case only, the same max error estimate as well (not in general)

Base Representations (outline): Definitions, Conversions, Computer Representation, Loss of Significant Digits

Number Representation
Simple representation in one base $\not\Rightarrow$ simple representation in another base, e.g., base 10: $(0.1)_{10} = (0.0\,0011\,0011\,0011\ldots)_2$
$$37294 = 4 + 90 + 200 + 7000 + 30000 = 4 \cdot 10^0 + 9 \cdot 10^1 + 2 \cdot 10^2 + 7 \cdot 10^3 + 3 \cdot 10^4$$
In general: $a_n \ldots a_0 = \sum_{k=0}^{n} a_k 10^k$

Fractions and Irrationals
Base 10 fraction: $0.7217 = 7 \cdot 10^{-1} + 2 \cdot 10^{-2} + 1 \cdot 10^{-3} + 7 \cdot 10^{-4}$
In general, for real numbers:
$$a_n \ldots a_0 . b_1 \ldots = \sum_{k=0}^{n} a_k 10^k + \sum_{k=1}^{\infty} b_k 10^{-k}$$
Note: $\exists$ numbers, i.e., irrationals, such that an infinite number of digits is required, in any rational base, e.g., $e$, $\pi$, $\sqrt{2}$
Needing an infinite number of digits in a base $\not\Rightarrow$ irrational: $(0.333\ldots)_{10}$, but $\frac{1}{3}$ is not irrational
But, if rational, the digits will repeat, e.g.: $0.312\overline{461}$

Other Bases
Base 8: no 8 or 9, using octal digits:
$$(21467)_8 = 2 \cdot 8^4 + 1 \cdot 8^3 + 4 \cdot 8^2 + 6 \cdot 8 + 7 = (9015)_{10}$$
$$(0.36207)_8 = 8^{-5}\left(3 \cdot 8^4 + 6 \cdot 8^3 + 2 \cdot 8^2 + 0 \cdot 8 + 7\right) = \frac{15495}{32768} = (0.47286\ldots)_{10}$$
Base 16: 0, 1, ..., 9, A (10), B (11), C (12), D (13), E (14), F (15)
Base $\beta$:
$$(a_n \ldots a_0 . b_1 \ldots)_\beta = \sum_{k=0}^{n} a_k \beta^k + \sum_{k=1}^{\infty} b_k \beta^{-k}$$
Base 2: just 0 and 1, or for computers: off and on; bit = binary digit

Base Representations: Conversions

Conversion: Base 10 to Bases 2, 8
Basic idea:
$$3781 = 1 + 10(8 + 10(7 + 10 \cdot 3))$$
with $10 = (1010)_2$, $8 = (1000)_2$, etc., evaluate in base 2: $3781 = (111\,011\,000\,101)_2$
Easy for a computer, but by hand, for $(3781.372)_{10}$:
integer part, repeated division; the remainders give the digits: $3781 \div 2 = 1890$ r $1 = a_0$; $1890 \div 2 = 945$ r $0 = a_1$; ...
fraction part, repeated multiplication: $0.372 \cdot 2 = 0.744 \Rightarrow b_1 = 0$; $0.744 \cdot 2 = 1.488 \Rightarrow b_2 = 1$ (drop the 1); ...
Valid from/to any base; one digit for each $\div$ or $\times$
Usually from base 10, with familiar $\div$ and $\times$
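A runnable sketch of the hand algorithm (the function name and digit-list output are mine, not the slides'):

```python
def to_base(n, frac, base=2, places=9):
    int_digits = []
    while n:                      # repeated division: remainders are a0, a1, ...
        n, r = divmod(n, base)
        int_digits.append(r)
    frac_digits = []
    for _ in range(places):       # repeated multiplication: carries are b1, b2, ...
        frac *= base
        frac_digits.append(int(frac))
        frac -= int(frac)
    return int_digits[::-1] or [0], frac_digits

print(to_base(3781, 0.372))
# ([1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1], [0, 1, 0, 1, 1, 1, 1, 1, 0])
```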

Base 8 Shortcut
Base 2 to/from base 8, trivial:
$$(551.624)_8 = (101\,101\,001.110\,010\,100)_2$$
3 bits for every 1 octal digit
One digit produced for every step in (hand) conversion $\Rightarrow$ base 10 $\to$ base 8 $\to$ base 2
For base 8 to/from base 16: via base 2

Base Representations: Computer Representation

Computer Representation
Scientific notation: $32.213 \to 0.32213 \cdot 10^2$
In general $x = \pm 0.d_1 d_2 \ldots \cdot 10^n$, $d_1 \ne 0$, or: $x = \pm r \cdot 10^n$; we have a sign, mantissa $r$ and exponent $n$, with $\frac{1}{10} \le r < 1$
On the computer, base 2 is represented: $x = \pm 0.b_1 b_2 \ldots \cdot 2^n$, $b_1 \ne 0$, or: $x = \pm r \cdot 2^n$, $\frac{1}{2} \le r < 1$
Finite number of mantissa digits, therefore roundoff or truncation error

Base Representations: Loss of Significant Digits

LSD Addition
$(a+b)+c = a+(b+c)$ on the computer?
Six decimal digits for the mantissa; because $1{,}000{,}000. + 1. = 1{,}000{,}000.$:
$$0.100000 \cdot 10^7 + 0.100000 \cdot 10^1 = 0.100000 \cdot 10^7$$
$$1{,}000{,}000. + \underbrace{1. + \cdots + 1.}_{\text{million times}} = 1{,}000{,}000. \qquad \text{but} \qquad \underbrace{1. + \cdots + 1.}_{\text{million times}} + 1{,}000{,}000. = 2{,}000{,}000.$$
$\Rightarrow$ Add numbers in size order.
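The six-digit experiment can be replayed with Python's decimal module standing in for the slide's machine (a sketch; the precision and loop count follow the slide):

```python
from decimal import Decimal, getcontext

getcontext().prec = 6            # six-digit decimal mantissa
big, one = Decimal(1_000_000), Decimal(1)

s = big
for _ in range(1_000_000):
    s += one                     # each 1. is rounded away
print(s)                         # 1.00000E+6

s = Decimal(0)
for _ in range(1_000_000):
    s += one                     # small numbers first: exact
print(s + big)                   # 2.00000E+6
```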

LSD Subtraction
Assume 10 decimal digits of precision (N[] or vpa())
E.g.: $x - \sin x$ for $x$'s close to zero; $x = \frac{1}{15}$ (radians):
$x = 0.66666\,66667 \cdot 10^{-1}$
$\sin x = 0.66617\,29492 \cdot 10^{-1}$
$x - \sin x = 0.00049\,37175 \cdot 10^{-1} = 0.49371\,75000 \cdot 10^{-4}$
Note: still have $10^{-10}$ precision (because no more info), but can we rework the calculation for $10^{-13}$ precision?
$\Rightarrow$ Avoid subtraction of close numbers.

LSD Avoidance for Subtraction
$x - \sin x$ for $x \approx 0$: use the Taylor series $\Rightarrow$ no subtraction of close numbers; e.g., 3 terms: $0.49371\,74328 \cdot 10^{-4}$ (actual: $0.49371\,74327 \cdot 10^{-4}$)
$e^x - e^{-2x}$ for $x \approx 0$: use the Taylor series twice and add common powers
$\sqrt{x^2+1} - 1$ for $x \approx 0$: $\to \frac{x^2}{\sqrt{x^2+1}+1}$
$\cos^2 x - \sin^2 x$ for $x \approx \frac{\pi}{4}$: $\to \cos 2x$
$\ln x - 1$ for $x \approx e$: $\to \ln\frac{x}{e}$
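A sketch of the first avoidance above, with the slide's three-term series $\frac{x^3}{3!} - \frac{x^5}{5!} + \frac{x^7}{7!}$:

```python
import math

x = 1 / 15
naive = x - math.sin(x)                  # subtraction of close numbers
series = sum((-1) ** k * x ** (2*k + 3) / math.factorial(2*k + 3)
             for k in range(3))          # three terms of x - sin x
print(naive, series)                     # ~0.4937174e-4 either way, but the
                                         # series never cancels leading digits
```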

Nonlinear Equations (outline): Motivation, Bisection Method, Newton's Method, Secant Method, Summary

Motivation
For a given function $f(x)$, find its root(s), i.e.: find $x$ (or $r$ = root) such that $f(x) = 0$
BVP: dipping of a suspended power cable; what is $\lambda$?
$$\lambda \cosh\frac{50}{\lambda} - \lambda - 10 = 0$$
(Some) simple equations $\Rightarrow$ solve analytically:
$$6x^2 - 7x + 2 = 0 \;\Rightarrow\; (3x-2)(2x-1) = 0 \;\Rightarrow\; x = \frac{2}{3}, \frac{1}{2}$$
$$\cos 3x - \cos 7x = 0 \;\Rightarrow\; 2\sin 5x \sin 2x = 0 \;\Rightarrow\; x = \frac{n\pi}{5}, \frac{n\pi}{2}, \; n \in \mathbb{Z}$$

Motivation (cont.)
In general, we cannot exploit the function, e.g.:
$$2^{x^2} - 10x + 1 = 0 \qquad \text{and} \qquad \cosh\left(\sqrt{x^2+1}\right) - e^x + \log|\sin x| = 0$$
Note: at times multiple roots (e.g., the previous parabola and cosine); we want at least one; we may only get one (for each search)
Need a general, function-independent algorithm.

Nonlinear Equations: Bisection Method

Bisection Method Example
[Plot: bisection iterates $x_0, x_1, x_2, x_3$ closing in on the root between $a$ and $b$.]
Intuitive, like guessing a number in [0, 100].

Restrictions and Max Error Estimate
Restrictions of the bisection method:
function slices the x-axis at the root
start with two points $a$ and $b$ $\ni$ $f(a)f(b) < 0$
a graphing tool (e.g., Matlab) can help to find $a$ and $b$
require $C^0[a,b]$ (why? note: not a big deal)
Max error estimate: after $n$ steps, guess the midpoint of the current range; error: $\epsilon \le \frac{b-a}{2^{n+1}}$ (think of $n = 0, 1, 2$)
Note: the error is in $x$; can also look at the error in $f(x)$, or a combination $\Rightarrow$ enters the entire world of stopping criteria
Question: given a tolerance (in $x$), what is $n$? ... (see the sketch below)
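A minimal bisection sketch under exactly these restrictions (the test function and interval are my examples):

```python
def bisect(f, a, b, tau=1e-6):
    assert f(a) * f(b) < 0               # the bracketing requirement
    while (b - a) / 2 > tau:             # max error estimate (b - a)/2^(n+1)
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2                   # midpoint guess: error below tau

print(bisect(lambda x: x * x - 2, 0.0, 2.0))   # ~1.4142136
```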

Convergence Rate
Given tolerance $\tau$ (e.g., $10^{-6}$), how many steps are needed?
Tolerance restriction ($\epsilon$ from before):
$$\epsilon \le \frac{b-a}{2^{n+1}} < \tau$$
1) $\times 2$, 2) log (any base):
$$\log(b-a) - n\log 2 < \log 2\tau \qquad \text{or} \qquad n > \frac{\log(b-a) - \log 2\tau}{\log 2}$$
Rate is independent of the function.

Convergence Rate (cont.)
Base 2 (i.e., bits of accuracy):
$$n > \log_2(b-a) - 1 - \log_2 \tau$$
i.e., the number of steps is a constant plus one step per bit
Linear convergence rate: $\exists C \in [0,1) \ni |x_{n+1} - r| \le C\,|x_n - r|$, $\forall n \ge 0$, i.e., monotonically decreasing error at every step, and $|x_{n+1} - r| \le C^{n+1}\,|x_0 - r|$
Bisection convergence is not linear (examples?), but compared to the initial max error it has a similar form: $|x_{n+1} - r| \le C^{n+1}(b-a)$, with $C = \frac{1}{2}$
Okay, but restrictive and slow.

Nonlinear Equations: Newton's Method

Newton's Method Example
[Plot: Newton iterates $x_0, x_1, x_2, x_3$ converging to the root along successive tangents.]

Newton's Method Definition
Approximate $f(x)$ near $x_0$ by the tangent $l(x)$:
$$f(x) \approx f(x_0) + f'(x_0)(x - x_0) \equiv l(x)$$
Want $l(r) = 0$:
$$r = x_0 - \frac{f(x_0)}{f'(x_0)}, \quad x_1 := r, \quad \text{likewise:} \quad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
Alternatively (Taylor's): have $x_0$; for what $h$ is $f(\underbrace{x_0 + h}_{x_1}) = 0$?
$$f(x_0 + h) \approx f(x_0) + h f'(x_0) \qquad \text{or} \qquad h = -\frac{f(x_0)}{f'(x_0)}$$

Convergence Rate
English: with enough continuity and proximity $\Rightarrow$ quadratic convergence!
Theorem: with the following three conditions: 1) $f(r) = 0$, 2) $f'(r) \ne 0$, 3) $f \in C^2\left(B\left(r,\hat\delta\right)\right)$, then $\exists \delta \ni \forall x_0 \in B(r,\delta)$ and $\forall n$ we have
$$|x_{n+1} - r| \le C(\delta)\,|x_n - r|^2$$
For a given $\delta$, $C$ is a constant (not necessarily $< 1$)
Note: again, use a graphing tool to seed $x_0$
Newton's method can be very fast.

Convergence Rate Example
$f(x) = x^3 - 2x^2 + x - 3$, $x_0 = 4$:

n | x_n              | f(x_n)
0 | 4                | 33
1 | 3                | 9
2 | 2.4375           | 2.036865234375
3 | 2.21303271631511 | 0.256363385061418
4 | 2.17555493872149 | 0.00646336148881306
5 | 2.17456010066645 | 4.47906804996122e-06
6 | 2.17455941029331 | 2.15717547991101e-12

Stopping criteria: the theorem uses $x$; above: uses $f(x)$, often all we have; possibilities: absolute/relative, size/change, $x$ or $f(x)$ (combos, ...)
But the proximity issue can bite, ...
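The table can be reproduced in a few lines (a sketch; the stopping rule here is simply a fixed iteration count):

```python
f = lambda x: x**3 - 2*x**2 + x - 3
df = lambda x: 3*x**2 - 4*x + 1

x = 4.0
for n in range(7):
    print(n, x, f(x))            # correct digits roughly double per iteration
    x -= f(x) / df(x)            # x_{n+1} = x_n - f(x_n)/f'(x_n)
```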

Sample Newton Failure #1
[Plot: iterates $x_n$ running off to infinity.]
Runaway process

Sample Newton Failure #2
[Plot: an iterate $x_n$ landing where the tangent is horizontal.]
Division by zero derivative (recall the algorithm)

Sample Newton Failure #3
[Plot: iterates $x_n$, $x_{n+1}$ bouncing between each other.]
Loop-d-loop (can happen over $m$ points)

Nonlinear Equations: Secant Method

Secant Method Definition
Motivation: avoid derivatives
Taylor (or the derivative definition):
$$f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}$$
$$\Rightarrow \quad x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})} \qquad \text{(secant method)}$$
Bisection requirements comparison: 2 previous points, but no $f(a)f(b) < 0$
Additional advantage vs. Newton: only one (new) function evaluation per iteration
Superlinear convergence:
$$|x_{n+1} - r| \le C\,|x_n - r|^{1.618\ldots}$$
(recognize the exponent?)
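A secant sketch on the same cubic as the Newton example; note the single new $f$ evaluation per step:

```python
f = lambda x: x**3 - 2*x**2 + x - 3

x0, x1 = 4.0, 3.0
f0, f1 = f(x0), f(x1)
for n in range(8):
    x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)   # secant step
    f0, f1 = f1, f(x1)                             # one new evaluation
    print(n, x1, f1)
```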

Nonlinear Equations: Summary

Root Finding Summary
Performance and requirements:

method    | f in C^2 nbhd(r) | init. pts. | speedy | f evals/iter
bisection |                  | 2 *        |        | 1
Newton    | yes              | 1          | yes    | 2
secant    | yes              | 2          | yes    | 1

* with the requirement that $f(a)f(b) < 0$
Often methods are combined (how?), with restarts for divergence or cycles
Recall: use a graphing tool to seed $x_0$ (and $x_1$)

Interpolation and Approximation (outline): Motivation, Polynomial Interpolation, Numerical Differentiation, Additional Notes

Motivation
Three sample problems:
given $\{(x_i, y_i) \mid i = 0,\ldots,n\}$ ($x_i$ distinct), want a simple (e.g., polynomial) $p(x)$ $\ni$ $y_i = p(x_i)$, $i = 0,\ldots,n$ $\Rightarrow$ interpolation
assume the data includes errors; relax equality, but still close, ... $\Rightarrow$ least squares
replace a complicated $f(x)$ with a simple $p(x) \approx f(x)$
Interpolation similar to the English term (contrast: extrapolation); for now: polynomial; later: splines
Use $p(x)$ for $p(x_{\text{new}})$, $\int p(x)\,dx$, ...

Interpolation and Approximation: Polynomial Interpolation

Constant and Linear Interpolation
[Plot: $p_0(x)$ and $p_1(x)$ through $(x_0, y_0)$ and $(x_1, y_1)$.]
$n = 0$: $p(x) = y_0$
$n = 1$: $p(x) = y_0 + g(x)(y_1 - y_0)$, $g(x) \in P_1$, and $g(x_0) = 0$, $g(x_1) = 1$ $\Rightarrow$ $g(x) = \frac{x - x_0}{x_1 - x_0}$
$n = 2$: more complicated, ...

Lagrange Polynomials
Given: $x_i$, $i = 0,\ldots,n$; Kronecker delta: $\delta_{ij} = \begin{cases} 0, & i \ne j, \\ 1, & i = j \end{cases}$
Lagrange polynomials: $l_i(x) \in P_n$, $l_i(x_j) = \delta_{ij}$, $i = 0,\ldots,n$; independent of any $y_i$ values
E.g., $n = 2$: [Plot: $l_0(x)$, $l_1(x)$, $l_2(x)$ on the nodes $x_0, x_1, x_2$.]

Lagrange Interpolation
We have
$$l_0(x) = \frac{x - x_1}{x_0 - x_1}\cdot\frac{x - x_2}{x_0 - x_2}, \quad l_1(x) = \frac{x - x_0}{x_1 - x_0}\cdot\frac{x - x_2}{x_1 - x_2}, \quad l_2(x) = \frac{x - x_0}{x_2 - x_0}\cdot\frac{x - x_1}{x_2 - x_1}$$
$$y_0\,l_0(x_j) = y_0\,\delta_{0j}, \qquad y_1\,l_1(x_j) = y_1\,\delta_{1j}, \qquad y_2\,l_2(x_j) = y_2\,\delta_{2j}$$
$\exists!\,p(x) \in P_2$, with $p(x_j) = y_j$, $j = 0,1,2$:
$$p(x) = \sum_{i=0}^{2} y_i\,l_i(x)$$
In general:
$$l_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}, \quad i = 0,\ldots,n$$
Great! What could be wrong? Easy functions (polynomials), interpolation (error $= 0$ at the $x_i$) ... but what about $p(x_{\text{new}})$?
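A direct sketch of the general formula (evaluation point by point; nothing clever):

```python
def lagrange(xs, ys, x):
    """p(x) = sum_i y_i l_i(x), with l_i(x) = prod_{j != i} (x-x_j)/(x_i-x_j)."""
    p = 0.0
    for i in range(len(xs)):
        li = 1.0
        for j in range(len(xs)):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        p += ys[i] * li
    return p

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
print([lagrange(xs, ys, t) for t in xs])  # recovers ys: error = 0 at the nodes
print(lagrange(xs, ys, 1.5))              # a new point, p in P_2
```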

Interpolation Error and the Runge Function
$\{(x_i, f(x_i)) \mid i = 0,\ldots,n\}$: $f(x) \approx p(x)$?
Runge function: $f_R(x) = \left(1 + x^2\right)^{-1}$, $x \in [-5,5]$, and a uniform mesh $\Rightarrow$! $p(x)$'s wrong shape and high oscillations ("Runge"):
$$\lim_{n \to \infty} \max_{-5 \le x \le 5} |f_R(x) - p_n(x)| = \infty$$
[Plot: $f_R$ and its interpolant on the uniform nodes $x_8 = -5, \ldots, x_4 = 0, \ldots, x_0 = 5$.]

Error Theorem
Theorem: ..., $f \in C^{n+1}[a,b]$, $\forall x \in [a,b]$, $\exists \xi \in (a,b)$ $\ni$
$$f(x) - p(x) = \frac{1}{(n+1)!}\,f^{(n+1)}(\xi) \prod_{i=0}^{n} (x - x_i)$$
Max error:
with the $x_i$ and $x$: still need $\max_{(a,b)} \left|f^{(n+1)}(\xi)\right|$
with the $x_i$ only: also need the max over $x$ of $\left|\prod_{i=0}^{n}(x - x_i)\right|$
without the $x_i$:
$$\max_{(a,b)} \left|\prod_{i=0}^{n} (x - x_i)\right| \le (b-a)^{n+1}$$

Chebyshev Points
[Plot: Chebyshev nodes $x_0 = 1, \ldots, x_4 = 0, \ldots, x_8 = -1$, concentrated at the edges.]
Chebyshev points on $[-1,1]$:
$$x_i = \cos\left[\left(\frac{i}{n}\right)\pi\right], \quad i = 0,\ldots,n$$
In general on $[a,b]$:
$$x_i = \frac{1}{2}(a+b) + \frac{1}{2}(b-a)\cos\left[\left(\frac{i}{n}\right)\pi\right], \quad i = 0,\ldots,n$$
Points concentrated at the edges

Runge Function with Chebyshev Points
[Plot: $f_R$ and its degree-8 interpolant on Chebyshev nodes; the wild oscillations are gone.]
Is this good interpolation?

Chebyshev Interpolation
Same interpolation method; different interpolation points
Minimizes $\max\left|\prod_{i=0}^{n}(x - x_i)\right|$
Periodic behavior $\Rightarrow$ interpolate with sines/cosines instead of $P_n$ $\Rightarrow$ a uniform mesh minimizes the max error
Note: uniform partition with spacing $= \text{cheb}_1 - \text{cheb}_0$ $\Rightarrow$ num. points $\uparrow$ $\Rightarrow$ polynomial degree $\uparrow$ $\Rightarrow$ oscillations $\uparrow$
Note: the shape is still wrong ... see splines later (comparison sketch below)
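A sketch comparing the max error of uniform vs. Chebyshev nodes on the Runge function ($n = 8$; the helper repeats the Lagrange form from the earlier sketch):

```python
import math

def lagrange(xs, ys, x):                       # Lagrange form, as above
    return sum(ys[i] * math.prod((x - xs[j]) / (xs[i] - xs[j])
                                 for j in range(len(xs)) if j != i)
               for i in range(len(xs)))

fR = lambda x: 1 / (1 + x * x)
n = 8
uniform = [-5 + 10 * i / n for i in range(n + 1)]
cheb = [5 * math.cos(i * math.pi / n) for i in range(n + 1)]
grid = [-5 + 10 * k / 1000 for k in range(1001)]
for xs in (uniform, cheb):
    ys = [fR(t) for t in xs]
    print(max(abs(fR(t) - lagrange(xs, ys, t)) for t in grid))
# the uniform-mesh error is several times the Chebyshev error already at n = 8
```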

Interpolation and Approximation: Numerical Differentiation

Numerical Differentiation
Note: until now, approximating $f(x)$; now $f'(x)$:
$$f'(x) \approx \frac{f(x+h) - f(x)}{h}$$
Error $=$ ? Taylor: $f(x+h) = f(x) + h f'(x) + h^2 \frac{f''(\xi)}{2}$ $\Rightarrow$
$$f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{1}{2} h f''(\xi)$$
I.e., truncation error: $O(h)$
Can we do better?

Numerical Differentiation, Take Two
Taylor for $+h$ and $-h$:
$$f(x \pm h) = f(x) \pm h f'(x) + h^2 \frac{f''(x)}{2!} \pm h^3 \frac{f'''(x)}{3!} + h^4 \frac{f^{(4)}(x)}{4!} \pm h^5 \frac{f^{(5)}(x)}{5!} + \cdots$$
Subtracting:
$$f(x+h) - f(x-h) = 2h f'(x) + 2h^3 \frac{f'''(x)}{3!} + 2h^5 \frac{f^{(5)}(x)}{5!} + \cdots$$
$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{1}{6} h^2 f'''(x) - \cdots$$
Note: for "=" there is either a $\xi$-term or "$\cdots$", not both!
Note: $f^{(n)}(\xi_1)$ cannot cancel $f^{(n)}(\xi_2)$
We gained $O(h)$ to $O\left(h^2\right)$. However, ...

Richardson Extrapolation, Take Three
We have
$$f'(x) = \underbrace{\frac{f(x+h) - f(x-h)}{2h}}_{\phi(h)} + a_2 h^2 + a_4 h^4 + a_6 h^6 + \cdots$$
Halving the stepsize:
$$\phi(h) = f'(x) - a_2 h^2 - a_4 h^4 - a_6 h^6 - \cdots$$
$$\phi\left(\frac{h}{2}\right) = f'(x) - a_2\left(\frac{h}{2}\right)^2 - a_4\left(\frac{h}{2}\right)^4 - a_6\left(\frac{h}{2}\right)^6 - \cdots$$
$$4\phi\left(\frac{h}{2}\right) - \phi(h) = 3f'(x) + \frac{3}{4} a_4 h^4 + \frac{15}{16} a_6 h^6 + \cdots$$
Q: So what? A: The $h^2$ term disappeared!

Richardson, Take Three (cont.)
Divide by 3 and write:
$$f'(x) = \frac{4}{3}\phi\left(\frac{h}{2}\right) - \frac{1}{3}\phi(h) - \frac{1}{4} a_4 h^4 - \frac{5}{16} a_6 h^6 - \cdots$$
$$f'(x) = \underbrace{\phi\left(\frac{h}{2}\right) + \frac{1}{3}\left[\phi\left(\frac{h}{2}\right) - \phi(h)\right]}_{\text{only uses old and current information}} + O\left(h^4\right)$$
We gained $O\left(h^2\right)$ to $O\left(h^4\right)$!!
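A sketch of the gain, differentiating $\sin$ at $x = 1$ (the function and stepsizes are my choices):

```python
import math

f, x = math.sin, 1.0
phi = lambda h: (f(x + h) - f(x - h)) / (2 * h)   # central difference
for h in (0.1, 0.05, 0.025):
    d2 = phi(h)                                   # O(h^2)
    d4 = phi(h / 2) + (phi(h / 2) - phi(h)) / 3   # Richardson: O(h^4)
    print(h, abs(d2 - math.cos(x)), abs(d4 - math.cos(x)))
# halving h cuts the first error ~4x, the extrapolated error ~16x
```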

Interpolation and Approximation: Additional Notes

Additional Notes
Three $f'(x)$ formulae: used additional points, vs. Taylor's more derivatives at the same point
Similar for $f''(x)$:
$$f(x \pm h) = f(x) \pm h f'(x) + h^2 \frac{f''(x)}{2!} \pm h^3 \frac{f'''(x)}{3!} + h^4 \frac{f^{(4)}(x)}{4!} \pm h^5 \frac{f^{(5)}(x)}{5!} + \cdots$$
Adding:
$$f(x+h) + f(x-h) = 2f(x) + h^2 f''(x) + \frac{1}{12} h^4 f^{(4)}(x) + \cdots$$
or:
$$f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} - \frac{1}{12} h^2 f^{(4)}(x) - \cdots, \qquad \text{error} = O\left(h^2\right)$$

Numerical Quadrature (outline): Introduction, Riemann Integration, Composite Trapezoid Rule, Composite Simpson's Rule, Gaussian Quadrature

Numerical Quadrature Interpretation
$f(x) \ge 0$ on $[a,b]$ bounded $\Rightarrow$ $\int_a^b f(x)\,dx$ is the area under $f(x)$
[Plot: the area under the curve between $a$ and $b$.]

Numerical Quadrature Motivation
Analytical solutions rare:
$$\int_0^{\pi/2} \sin x\,dx = -\cos x\,\Big|_0^{\pi/2} = -(0 - 1) = 1$$
In general:
$$\int_0^{\pi/2} \left(1 - a^2\sin^2\theta\right)^{1/3} d\theta \; ?$$
Need a general numerical technique.

Lower Sum Interpretation
[Plot: inscribed rectangles on the partition $x_0 (= a), x_1, x_2, x_3, x_4 (= b)$.]
Clearly a lower bound of the integral estimate, and ...

Upper Sum Interpretation
[Plot: circumscribed rectangles on the same partition.]
... an upper bound. What is the max error?

Definitions
Mesh: $P \equiv \{a = x_0 < x_1 < \cdots < x_n = b\}$, $n$ subintervals ($n+1$ points)
Infima and suprema (or minima and maxima):
$$m_i \equiv \inf\{f(x) : x_i \le x \le x_{i+1}\}, \qquad M_i \equiv \sup\{f(x) : x_i \le x \le x_{i+1}\}$$
Two methods (i.e., integral estimates): lower and upper sums:
$$L(f;P) \equiv \sum_{i=0}^{n-1} m_i\left(x_{i+1} - x_i\right), \qquad U(f;P) \equiv \sum_{i=0}^{n-1} M_i\left(x_{i+1} - x_i\right)$$
Or combining, ... (how?)

Lower and Upper Sums Example
Third method, use the lower and upper sums: $(L+U)/2$
$f(x) = x^2$, $[a,b] = [0,1]$ and $P = \left\{0, \frac{1}{4}, \frac{1}{2}, \frac{3}{4}, 1\right\}$ $\Rightarrow$ ..., $L = \frac{7}{32}$, $U = \frac{15}{32}$
Split the difference: estimate $\frac{11}{32}$ (actual: $\frac{1}{3}$)
Bottom line: naive approach, low $n$ $\Rightarrow$ still an error of only $\frac{1}{96}$. (!)
Max error: $(U - L)/2 = \frac{1}{8}$
Is this good enough?

Numerical Quadrature Rethinking
Perhaps lower and upper sums are enough? The error seems small; the work seems small as well
But: the estimate of the max error was not small ($\frac{1}{8}$)
Do they converge to the integral as $n \to \infty$?
Will the extrema always be easy to calculate? Accurately? (Probably not!)
Proceed in theoretical and practical directions.

Numerical Quadrature: Riemann Integration

Riemann Integrability
$f \in C^0[a,b]$, $[a,b]$ bdd $\Rightarrow$ $f$ is Riemann integrable
When integrable, and the max subinterval in $P \to 0$ ($|P| \to 0$):
$$\lim_{|P| \to 0} L(f;P) = \int_a^b f(x)\,dx = \lim_{|P| \to 0} U(f;P)$$
Counter-example: the Dirichlet function
$$d(x) = \begin{cases} 0, & x \text{ rational}, \\ 1, & x \text{ irrational} \end{cases} \quad \Rightarrow \quad L = 0, \; U = b - a$$

Challenge: Estimate n for the Third Method
Current restrictions for the $n$ estimate: monotone functions; uniform partition
Challenge: estimate $\int_0^{\pi} e^{\cos x}\,dx$, error tolerance $= \frac{1}{2} \cdot 10^{-3}$, using $L$ and $U$ $\Rightarrow$ $n = ?$

Estimate n Solution
$f(x) = e^{\cos x}$ is decreasing on $[0,\pi]$ $\Rightarrow$ $m_i = f(x_{i+1})$ and $M_i = f(x_i)$
$$L(f;P) = h \sum_{i=0}^{n-1} f(x_{i+1}) \quad \text{and} \quad U(f;P) = h \sum_{i=0}^{n-1} f(x_i), \qquad h = \frac{\pi}{n}$$
Want $\frac{1}{2}(U - L) < \frac{1}{2} \cdot 10^{-3}$, or $\frac{\pi}{n}\left(e^1 - e^{-1}\right) < 10^{-3}$ $\Rightarrow$ ... $n \ge 7385$ (!!)
(note for later: max error estimate $= O(h)$)
Number of $f(x)$ evaluations: 2 for the $(U-L)$ max error calculation; $> 7000$ for either $L$ or $U$
We need something better.

Numerical Quadrature: Composite Trapezoid Rule

Composite Trapezoid Rule Interpretation
[Plot: trapezoids on the partition $x_0 (= a), \ldots, x_4 (= b)$.]
Almost always better than the first three methods. (When not?)

Composite Trapezoid Rule (CTR)
Each area: $\frac{1}{2}\left(x_{i+1} - x_i\right)\left[f(x_i) + f(x_{i+1})\right]$
Rule:
$$T(f;P) \equiv \frac{1}{2} \sum_{i=0}^{n-1} \left(x_{i+1} - x_i\right)\left[f(x_i) + f(x_{i+1})\right]$$
Note: for monotone functions and any given mesh (why?): $T = (L+U)/2$
Pro: no need for extrema calculations
Con: adding new points to existing ones (for a non-monotonic function):
$T$: a bad point (last function, $x_i = \{-5,5\}$, $x_2 = 0$) $\Rightarrow$ no monotonic improvement (necessarily)
$L$, $U$ and $(L+U)/2$: look for extrema on $[x_i, x_{i+1}]$ $\Rightarrow$ monotonic improvement

Uniform Mesh and Associated Error
Constant stepsize: $h = \frac{b-a}{n}$; total weights $\frac{1}{2}, 1, 1, \ldots, 1, \frac{1}{2}$ on $x_0, x_1, \ldots, x_n$ (each interior node is counted once from each neighboring trapezoid):
$$T(f;P) = h\left[\sum_{i=1}^{n-1} f(x_i) + \frac{1}{2}\left(f(x_0) + f(x_n)\right)\right]$$
Theorem: $f \in C^2[a,b]$ $\Rightarrow$ $\exists \xi \in (a,b)$ $\ni$
$$\int_a^b f(x)\,dx - T(f;P) = -\frac{1}{12}(b-a) h^2 f''(\xi) = O\left(h^2\right)$$
Note: leads to the popular Romberg algorithm (built on Richardson extrapolation)
How many steps does $T(f;P)$ require?

$e^{\cos x}$ Revisited Using CTR
Challenge: $\int_0^{\pi} e^{\cos x}\,dx$, error tolerance $= \frac{1}{2} \cdot 10^{-3}$, $n = ?$
$f(x) = e^{\cos x}$ $\Rightarrow$ $f'(x) = -e^{\cos x}\sin x$ $\Rightarrow$ ... $\Rightarrow$ $|f''(x)| \le e$ on $(0,\pi)$
$$\text{error} \le \frac{1}{12}\,\pi\,(\pi/n)^2\,e \le \frac{1}{2} \cdot 10^{-3} \quad \Rightarrow \quad \ldots \quad n \ge 119$$
Recall the perennial two questions/calculations of NM
Monotonic $\Rightarrow$ the estimate of $T$ produces the same $(L+U)/2$, but the previous max error estimate was less exact ($O(h)$)
A better estimate of the max error $\Rightarrow$ a better estimate of $n$

Another CTR Example
Challenge: $\int_0^1 e^{-x^2}\,dx$, error tolerance $= \frac{1}{2} \cdot 10^{-4}$, $n = ?$
$f(x) = e^{-x^2}$, $f'(x) = -2x e^{-x^2}$ and $f''(x) = \left(4x^2 - 2\right) e^{-x^2}$
$|f''(x)| \le 2$ on $(0,1)$ $\Rightarrow$ error $\le \frac{1}{6} h^2 \le \frac{1}{2} \cdot 10^{-4}$
We have: $n^2 \ge \frac{1}{3} \cdot 10^4$, or $n \ge 58$ subintervals
How can we do better? (see the sketch below)
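A sketch checking that estimate (the ctr() helper and the fine-mesh reference value are mine):

```python
import math

def ctr(f, a, b, n):
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + (f(a) + f(b)) / 2)

f = lambda x: math.exp(-x * x)
ref = ctr(f, 0.0, 1.0, 100_000)           # fine-mesh reference value
print(abs(ctr(f, 0.0, 1.0, 58) - ref))    # ~1.8e-5, within the 0.5e-4 tolerance
```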

Numerical Quadrature: Composite Simpson's Rule

Trapezoid Rule as Linear Interpolant
Linear interpolant, one subinterval:
$$p_1(x) = f(a)\,\frac{x - b}{a - b} + f(b)\,\frac{x - a}{b - a}$$
Intuitively: $\int_a^b p_1(x)\,dx = \frac{b-a}{2}\left(f(a) + f(b)\right)$; indeed:
$$\int_a^b p_1(x)\,dx = \frac{f(a)}{a-b}\int_a^b (x-b)\,dx + \frac{f(b)}{b-a}\int_a^b (x-a)\,dx$$
$$= \frac{f(a)}{a-b}\left[\frac{b^2 - a^2}{2} - b(b-a)\right] + \frac{f(b)}{b-a}\left[\frac{b^2 - a^2}{2} - a(b-a)\right]$$
$$= f(a)\left[b - \frac{a+b}{2}\right] + f(b)\left[\frac{a+b}{2} - a\right] = f(a)\left(\frac{b-a}{2}\right) + f(b)\left(\frac{b-a}{2}\right) = \frac{b-a}{2}\left(f(a) + f(b)\right)$$
CTR is the integral of the composite linear interpolant.

CTR for Two Equal Subintervals
$n = 2$ (i.e., 3 points):
$$T(f) = \frac{b-a}{2}\left\{f\left(\frac{a+b}{2}\right) + \frac{1}{2}\left[f(a) + f(b)\right]\right\} = \frac{b-a}{4}\left[f(a) + 2f\left(\frac{a+b}{2}\right) + f(b)\right]$$
with error $= O\left(\left(\frac{b-a}{2}\right)^3\right)$
(Previously: CTR error $= O\left(h^2\right)$ $=$ TR error $O\left(h^3\right)$ $\times$ $n$ subintervals $O\left(\frac{1}{h}\right)$)
Deficiency: each subinterval ignores the other
How can we take the entire picture into account?

Simpson's Rule
Motivation: use $p_2(x)$ over the two equal subintervals
Similar analysis actually loses $O(h)$, but ... due to canceling terms, $\exists \xi \in (a,b)$:
$$\int_a^b f(x)\,dx = \frac{b-a}{6}\left[f(a) + 4f\left(\frac{a+b}{2}\right) + f(b)\right] - \frac{1}{90}\left(\frac{b-a}{2}\right)^5 f^{(4)}(\xi)$$
Similar to CTR, but weights the midpoint more
Note: for each method, denominator $= \sum$ coefficients
Each method multiplies the width by a weighted average of the height.

Composite Simpson's Rule (CSR)
For an even number of subintervals $n$, $h = \frac{b-a}{n}$, $\exists \xi \in (a,b)$; total weights $1, 4, 2, 4, 2, \ldots, 2, 4, 1$ on $x_0, x_1, \ldots, x_n$:
$$\int_a^b f(x)\,dx = \frac{h}{3}\Bigg\{f(a) + f(b) + 4\underbrace{\sum_{i=1}^{n/2} f(a + (2i-1)h)}_{\text{odd nodes}} + 2\underbrace{\sum_{i=1}^{(n-2)/2} f(a + 2ih)}_{\text{even nodes}}\Bigg\} - \frac{b-a}{180}\,h^4 f^{(4)}(\xi)$$
Note: denominator $= \sum$ coefficients $= 3n$, but only $n+1$ function evaluations
Pretty low error
Can we do better than $O\left(h^4\right)$?
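A CSR sketch with the 1, 4, 2, ..., 2, 4, 1 weights, on the trapezoid example's integrand:

```python
import math

def csr(f, a, b, n):                    # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes
    return h * s / 3

f = lambda x: math.exp(-x * x)
for n in (4, 8, 16):
    print(n, csr(f, 0.0, 1.0, n))       # errors drop ~16x per halving: O(h^4)
```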

Integration Introspection
SR beat TR because of the heavier-weighted midpoint
But CSR, like CTR, suffers at the subinterval-pair boundaries (weight $= 2$ vs. 4 for no reason)
All composite rules ignore other areas $\Rightarrow$ patch together local calculations $\Rightarrow$ will suffer from this
Error: CTR: $O\left(h^2 f^{(2)}(\xi)\right)$; CSR: $O\left(h^4 f^{(4)}(\xi)\right)$
Note also, error $= 0$: CTR: $\forall f \in P_1$; CSR: $\forall f \in P_3$
What about using all $n+1$ nodes and $n$-degree interpolation?
Since error $= 0$ $\forall f \in P_n$ $\Rightarrow$ lower error for general $f$?

Numerical Quadrature: Gaussian Quadrature

Interpolatory Quadrature
$$\forall x_i: \quad l_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}, \quad i = 0,\ldots,n; \qquad p(x) = \sum_{i=0}^{n} f(x_i)\,l_i(x)$$
($a = x_0$ and $b = x_n$ no longer required)
$$\int_a^b p(x)\,dx = \sum_{i=0}^{n} f(x_i) \underbrace{\int_a^b l_i(x)\,dx}_{A_i} = \sum_{i=0}^{n} A_i f(x_i)$$
($A_i = A_i\left(a, b; \{x_j\}_{j=0}^{n}\right)$, but $A_i \ne A_i(f)$!)
$\forall f \in P_n$: $f(x) = p(x)$, and
$$f \in P_n \;\Rightarrow\; \int_a^b f(x)\,dx = \sum_{i=0}^{n} A_i f(x_i), \quad \text{i.e., error} = 0$$
(Endpoints, nodes) $\Rightarrow$ $A_i$ $\Rightarrow$
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} A_i f(x_i)$$

Interpolatory Quadrature Error Analysis
Uniform mesh: Newton-Cotes quadrature (TR: $n = 1$, SR: $n = 2$)
Is $\int_a^b f(x)\,dx \approx \int_a^b p(x)\,dx$?
Non-composite error: $n$ even: $O\left(h^{n+3} f^{(n+2)}(\xi)\right)$; $n$ odd: $O\left(h^{n+2} f^{(n+1)}(\xi)\right)$
Often a high order of $h$ is good enough, therefore popular
What about the Runge phenomenon: oscillations drive $f^{(k)}$?
What if we choose $n+1$ specific nodes (with weights, total: $2(n+1)$ choices)? Can we get error $= 0$ $\forall f \in P_{2n+1}$?

Gaussian Quadrature (GQ) Theorem
Let $q(x) \in P_{n+1}$ $\ni$
$$\int_a^b x^k q(x)\,dx = 0, \quad k = 0,\ldots,n$$
i.e., $q(x) \perp$ all polynomials of lower degree
note: $n+2$ coefficients, $n+1$ conditions $\Rightarrow$ unique up to a constant multiplier
$x_i$, $i = 0,\ldots,n$ $\ni$ $q(x_i) = 0$, i.e., the $x_i$ are the zeros of $q(x)$
Then $\forall f \in P_{2n+1}$, even though $f(x) \ne p(x)$ ($f \in P_m$, $m > n$):
$$\int_a^b f(x)\,dx = \sum_{i=0}^{n} A_i f(x_i)$$
We jumped from $P_n$ to $P_{2n+1}$!

Gaussian Quadrature Proof
Let $f \in P_{2n+1}$, and divide by $q$: $f = sq + r$, $s, r \in P_n$
We have (note: until the last step, the $x_i$ can be arbitrary):
$$\int_a^b f(x)\,dx = \int_a^b s(x) q(x)\,dx + \int_a^b r(x)\,dx \qquad \text{(division above)}$$
$$= \int_a^b r(x)\,dx \qquad (\perp\text{-ity of } q(x))$$
$$= \sum_{i=0}^{n} A_i\,r(x_i) \qquad (r \in P_n)$$
$$= \sum_{i=0}^{n} A_i\left[f(x_i) - s(x_i) q(x_i)\right] \qquad \text{(division above)}$$
$$= \sum_{i=0}^{n} A_i f(x_i) \qquad (x_i \text{ are zeros of } q(x))$$

GQ Additional Notes
Gaussian quadrature: 1) $\forall P_{2n+1}$ $\Rightarrow$ lower $n$ with the same accuracy $\Rightarrow$ fewer oscillations; 2) distributed like Chebyshev points $\Rightarrow$ tame oscillations
Example $q_n(x)$: Legendre polynomials: for $[a,b] = [-1,1]$ and $q_n(1) = 1$ ($\exists$ a 3-term recurrence formula):
$$q_0(x) = 1, \quad q_1(x) = x, \quad q_2(x) = \frac{3}{2} x^2 - \frac{1}{2}, \quad q_3(x) = \frac{5}{2} x^3 - \frac{3}{2} x, \ldots$$
Use $q_{n+1}(x)$ (why?); depends only on $a$, $b$ and $n$
Gaussian nodes $\in (a,b)$ $\Rightarrow$ good if $f(a) = \pm\infty$ and/or $f(b) = \pm\infty$ (e.g., $\int_0^1 \frac{1}{\sqrt{x}}\,dx$)
More general: with a weight function $w(x)$ in the original integral $\Rightarrow$ $q(x)$ orthogonality, weights $A_i$
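A sketch using numpy, whose leggauss routine returns these zeros of $q_{n+1}$ and the weights $A_i$ on $[-1,1]$; the linear map to $[a,b]$ and the test integrand are my choices:

```python
import numpy as np

f = lambda x: np.exp(-x * x)
a, b = 0.0, 1.0
for npts in (2, 3, 5):                  # npts = n + 1 nodes
    x, A = np.polynomial.legendre.leggauss(npts)
    val = (b - a) / 2 * np.sum(A * f((b - a) / 2 * x + (a + b) / 2))
    print(npts, val)                    # accuracy improves very rapidly
```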

Numerical Quadrature Summary
$n+1$ function evaluations each:

method  | composite? | node placement    | error = 0 for all P_k, k <=
CTR     | yes        | uniform (usually) | 1
CSR     | yes        | uniform (usually) | 3
interp. | no         | any (distinct)    | n
GQ      | no         | zeros of q(x)     | 2n+1

P.S. There are also powerful adaptive quadrature methods

Linear Systems (outline): Introduction, Naive Gaussian Elimination, Limitations, Operation Counts, Additional Notes

What Are Linear Systems (LS)?
$$\begin{aligned}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\
&\;\;\vdots \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m
\end{aligned}$$
Dependence on the unknowns: powers of degree $\le 1$
Summation form: $\sum_{j=1}^{n} a_{ij} x_j = b_i$, $1 \le i \le m$, i.e., $m$ equations
Presently: $m = n$, i.e., square systems (later: $m \ne n$)
Q: How to solve for $[x_1\ x_2\ \ldots\ x_n]^T$? A: ...

Linear Systems: Naive Gaussian Elimination

Overall Algorithm and Definitions
Currently: direct methods only (later: iterative methods)
General idea:
generate an upper triangular system ("forward elimination")
easily calculate the unknowns in reverse order ("backward substitution")
Pivot row = current one being processed; pivot = diagonal element of the pivot row
"Naive" = no reordering of rows
Steps applied to the RHS as well.

Forward Elimination
Generate zero columns below the diagonal; process rows downward:

for each row i := 1, n-1 {                 // the pivot row
    for each row k := i+1, n {             // rows below pivot
        multiply pivot row by a_ki / a_ii  // Gaussian multipliers
        subtract pivot row from row k      // now a_ki = 0
    }                                      // now column below a_ii is zero
}                                          // now a_ij = 0, i > j

Note: the multiplication is temporary; the pivot row is not changed
Obtain a triangular system
Let's work an example, ...

Compact Form of LS
$$\begin{aligned}
6x_1 - 2x_2 + 2x_3 + 4x_4 &= 16 \\
12x_1 - 8x_2 + 6x_3 + 10x_4 &= 26 \\
3x_1 - 13x_2 + 9x_3 + 3x_4 &= -19 \\
-6x_1 + 4x_2 + 1x_3 - 18x_4 &= -34
\end{aligned}
\qquad
\left[\begin{array}{rrrr|r}
6 & -2 & 2 & 4 & 16 \\
12 & -8 & 6 & 10 & 26 \\
3 & -13 & 9 & 3 & -19 \\
-6 & 4 & 1 & -18 & -34
\end{array}\right]$$
Proceeding with the forward elimination, ...

Forward Elimination Example
Subtract $2R_1$, $\frac{1}{2}R_1$, $(-1)R_1$ from rows 2, 3, 4:
$$\left[\begin{array}{rrrr|r}
6 & -2 & 2 & 4 & 16 \\
12 & -8 & 6 & 10 & 26 \\
3 & -13 & 9 & 3 & -19 \\
-6 & 4 & 1 & -18 & -34
\end{array}\right]
\;\to\;
\left[\begin{array}{rrrr|r}
6 & -2 & 2 & 4 & 16 \\
0 & -4 & 2 & 2 & -6 \\
0 & -12 & 8 & 1 & -27 \\
0 & 2 & 3 & -14 & -18
\end{array}\right]$$
Subtract $3R_2$, $\left(-\frac{1}{2}\right)R_2$ from rows 3, 4; then $2R_3$ from row 4:
$$\to\;
\left[\begin{array}{rrrr|r}
6 & -2 & 2 & 4 & 16 \\
0 & -4 & 2 & 2 & -6 \\
0 & 0 & 2 & -5 & -9 \\
0 & 0 & 4 & -13 & -21
\end{array}\right]
\;\to\;
\left[\begin{array}{rrrr|r}
6 & -2 & 2 & 4 & 16 \\
0 & -4 & 2 & 2 & -6 \\
0 & 0 & 2 & -5 & -9 \\
0 & 0 & 0 & -3 & -3
\end{array}\right]$$
Matrix is upper triangular.

Backward Substitution
$$\left[\begin{array}{rrrr|r}
6 & -2 & 2 & 4 & 16 \\
0 & -4 & 2 & 2 & -6 \\
0 & 0 & 2 & -5 & -9 \\
0 & 0 & 0 & -3 & -3
\end{array}\right]$$
Last equation: $-3x_4 = -3$ $\Rightarrow$ $x_4 = 1$
Second-to-last equation: $2x_3 - 5\underbrace{x_4}_{=1} = 2x_3 - 5 = -9$ $\Rightarrow$ $x_3 = -2$
... second equation ... $x_2 = \ldots = 1$ ... $\Rightarrow$ $[x_1\ x_2\ x_3\ x_4]^T = [3\ 1\ {-2}\ 1]^T$
For small problems, check the solution in the original system. (A runnable sketch follows.)
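A sketch of the whole naive procedure, run on the worked example (no pivoting, so it relies on the nonzero pivots seen above):

```python
def naive_gauss(A, b):
    n = len(b)
    for i in range(n - 1):                   # pivot row i
        for k in range(i + 1, n):            # rows below the pivot
            m = A[k][i] / A[i][i]            # Gaussian multiplier
            for j in range(i, n):
                A[k][j] -= m * A[i][j]
            b[k] -= m * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # backward substitution
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
b = [16, 26, -19, -34]
print(naive_gauss(A, b))                     # [3.0, 1.0, -2.0, 1.0]
```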

Linear Systems: Limitations

Zero Pivots
Clearly, zero pivots prevent forward elimination!
Zero pivots can appear along the way
Later: when are we guaranteed no zero pivots?
All pivots $\ne 0$? $\Rightarrow$ we are safe
Experiment with a system with a known solution.

Vandermonde Matrix
$$\begin{bmatrix}
1 & 2 & 4 & 8 & \cdots & 2^{n-1} \\
1 & 3 & 9 & 27 & \cdots & 3^{n-1} \\
1 & 4 & 16 & 64 & \cdots & 4^{n-1} \\
\vdots & & & & & \vdots \\
1 & n+1 & (n+1)^2 & (n+1)^3 & \cdots & (n+1)^{n-1}
\end{bmatrix}$$
Want row sums on the RHS $\Rightarrow$ $x_i = 1$, $i = 1,\ldots,n$
Geometric series: $1 + t + t^2 + \cdots + t^{n-1} = \frac{t^n - 1}{t - 1}$
We obtain $b_i$, for row $i = 1,\ldots,n$:
$$\sum_{j=1}^{n} \underbrace{(1+i)^{j-1}}_{a_{ij}}\,\underbrace{1}_{x_j} = \frac{(1+i)^n - 1}{(1+i) - 1} = \underbrace{\frac{1}{i}\left[(1+i)^n - 1\right]}_{b_i}$$
System is ready to be tested.

Vandermonde Test
Platform with 7 significant (decimal) digits:
$n = 1,\ldots,8$: expected results
$n = 9$: error $> 16{,}000\%$!!
Questions: What happened? Why so sudden? Can anything be done?
Answer: the matrix is ill-conditioned
Sensitivity to roundoff errors
Leads to error propagation and magnification
First, how to assess vector errors.

Errors
Given system: $Ax = b$ and solution estimate $\tilde{x}$
Residual (error): $r \equiv A\tilde{x} - b$
Absolute error (if $x$ is known): $e \equiv \tilde{x} - x$
Norm taken of $r$ or $e$: vector $\to$ scalar quantity (more on norms later)
Relative errors: $\|r\|/\|b\|$ and $\|e\|/\|x\|$
Back to ill-conditioning, ...

Ill-conditioning
$$\left.\begin{aligned} 0 \cdot x_1 + x_2 &= 1 \\ x_1 + x_2 &= 2 \end{aligned}\right\} \quad 0 \text{ pivot}$$
General rule: if 0 is problematic $\Rightarrow$ numbers near 0 are problematic:
$$\left.\begin{aligned} \epsilon x_1 + x_2 &= 1 \\ x_1 + x_2 &= 2 \end{aligned}\right\} \quad \ldots \quad x_2 = \frac{2 - 1/\epsilon}{1 - 1/\epsilon} \quad \text{and} \quad x_1 = \frac{1 - x_2}{\epsilon}$$
$\epsilon$ small (e.g., $\epsilon = 10^{-9}$ with 8 significant digits) $\Rightarrow$ $x_2 = 1$ and $x_1 = 0$ $\Rightarrow$ wrong!
What can be done?

Pivoting
Switch the order of the equations: non-naive Gaussian elimination
Moving the offending element off the diagonal:
$$\left.\begin{aligned} x_1 + x_2 &= 2 \\ \epsilon x_1 + x_2 &= 1 \end{aligned}\right\}, \quad x_2 = \frac{1 - 2\epsilon}{1 - \epsilon} \quad \text{and} \quad x_1 = 2 - x_2 = \frac{1}{1 - \epsilon}$$
This is correct, even for small $\epsilon$ (or even $\epsilon = 0$)
Compare the size of the diagonal (pivot) elements above, to $\epsilon$
Ratio of the first row of the Vandermonde matrix $= 1 : 2^{n-1}$
The issue is relative size, not absolute size.

Scaled Partial Pivoting
Also called row pivoting (vs. column pivoting)
Instability source: subtracting large values: $a_{kj}$ -= $a_{ij}\,\frac{a_{ki}}{a_{ii}}$
W.l.o.g.: $n$ rows, and choosing which row to be first
Find $i$ $\ni$ $\forall$ rows $k \ne i$, $\forall$ columns $j > 1$: minimize $\left|a_{ij}\,\frac{a_{k1}}{a_{i1}}\right|$ $\Rightarrow$ $O\left(n^3\right)$ calculations!
Simplify (remove $k$), imagine: $a_{k1} = 1$ $\Rightarrow$ find $i$ $\ni$ $\forall$ columns $j > 1$: $\min_i \left|\frac{a_{ij}}{a_{i1}}\right|$
Still 1) $O\left(n^2\right)$ calculations, 2) how to minimize over each row?
Find $i$: $\min_i \max_j \left|\frac{a_{ij}}{a_{i1}}\right|$, or: $\max_i \frac{|a_{i1}|}{\max_j |a_{ij}|}$ (e.g., the first matrix)

Linear Systems: Operation Counts

How Much Work on A?
Real life: crowd estimation costs? (will depend on accuracy)
Counting $\times$ and $\div$ (i.e., long operations) only
Pivoting: row decision amongst $k$ rows $= k$ ratios
First row: $n$ ratios (for the choice of pivot row); $n-1$ Gaussian multipliers; $(n-1)^2$ multiplications; total: $\approx n^2$ operations
Forward elimination operations (for large $n$):
$$\sum_{k=2}^{n} k^2 = \frac{n(n+1)(2n+1)}{6} - 1 \approx \frac{n^3}{3}$$
How about the work on $b$?

Rest of the Work
Forward elimination work on the RHS: $\sum_{k=2}^{n} (k-1) = \frac{n(n-1)}{2}$
Backward substitution: $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$
Total: $n^2$ operations $\Rightarrow$ $O(n)$ fewer operations than forward elimination on $A$
Important for multiple RHSs known from the start:
do not repeat the $O\left(n^3\right)$ work for each
rather, line them up, and process simultaneously
Can we do better at times?

Sparse Systems
E.g., a tridiagonal system (half bandwidth $= 1$):
$$\begin{bmatrix}
\times & \times & 0 & \cdots & 0 \\
\times & \times & \times & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & \times & \times & \times \\
0 & \cdots & 0 & \times & \times
\end{bmatrix}$$
note: $a_{ij} = 0$ for $|i - j| > 1$
Opportunities for savings: storage, computations
Both are $O(n)$

Linear Systems: Additional Notes

Pivot-Free Guarantee
When are we guaranteed non-zero pivots?
Diagonal dominance (just like it sounds):
$$|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}|, \quad i = 1,\ldots,n$$
(Or $>$ in one row, and $\ge$ in the remaining)
Many finite difference and finite element problems $\Rightarrow$ diagonally dominant systems
Occurs often enough to justify individual study.

LU Decomposition
E.g.: same $A$, many $b$'s of a time-dependent problem; not all $b$'s are known from the start
Want $A = LU$ for decreased work later
Then define $y$: $L\underbrace{Ux}_{y} = b$ $\Rightarrow$ solve $Ly = b$ for $y$; solve $Ux = y$ for $x$
$U$ is upper triangular, the result of Gaussian elimination
$L$ is unit lower triangular: 1's on the diagonal and the Gaussian multipliers below, e.g. (from the worked example):
$$L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ 1/2 & 3 & 1 & 0 \\ -1 & -1/2 & 2 & 1 \end{bmatrix}$$
For small systems, verify (even by hand): $A = LU$
Each new RHS is $n^2$ work, instead of $O\left(n^3\right)$
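A Doolittle-style sketch: factor once, then solve each new RHS with two triangular sweeps ($\approx n^2$ work):

```python
def lu(A):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for i in range(n - 1):
        for k in range(i + 1, n):
            L[k][i] = U[k][i] / U[i][i]      # store the Gaussian multiplier
            for j in range(i, n):
                U[k][j] -= L[k][i] * U[i][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                       # forward: L y = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # backward: U x = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
L, U = lu(A)
print(L[3])                                  # [-1.0, -0.5, 2.0, 1.0], as above
print(lu_solve(L, U, [16, 26, -19, -34]))    # [3.0, 1.0, -2.0, 1.0]
```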

Approximation by Splines (outline): Motivation, Linear Splines, Quadratic Splines, Cubic Splines, Summary

Motivation
[Plot: scattered data points on $-4 \le x \le 4$.]
Given: many points (e.g., $n > 10$), or a very involved function
Want: a simple representative function for analysis or manufacturing (e.g., designing a car body with splines)
Any suggestions?

Let's Try Interpolation
[Plot: the high-degree interpolant through the data.]
Disadvantages:
values outside the $x$-range diverge quickly (interp(10) = 1592)
numerical instabilities of high-degree polynomials

Runge Function: Two Interpolations
[Plot: uniform-node vs. Chebyshev-node interpolation of the Runge function.]
More disadvantages:
within the $x$-range, often high oscillations
even Chebyshev points $\Rightarrow$ often uncharacteristic oscillations
still a high degree polynomial

Splines
Given a domain $[a,b]$, a spline $S(x)$:
is defined on the entire domain
provides a certain amount of smoothness
$\exists$ a partition of knots (= where the spline can change form) $\{a = t_0, t_1, t_2, \ldots, t_n = b\}$ such that $S(x)$ is piecewise polynomial:
$$S(x) = \begin{cases} S_0(x), & x \in [t_0, t_1], \\ S_1(x), & x \in [t_1, t_2], \\ \quad\vdots \\ S_{n-1}(x), & x \in [t_{n-1}, t_n] \end{cases}$$

Interpolatory Splines
Note: splines split up the range $[a,b]$: the opposite of the CTR, CSR, GQ development
"Spline" implies no interpolation, not even any $y$-values
If given points $\{(t_0,y_0), (t_1,y_1), (t_2,y_2), \ldots, (t_n,y_n)\}$ $\Rightarrow$ an interpolatory spline traverses these as well
Splines = nice, analytical functions

Approximation by Splines: Linear Splines

Linear Splines
Given a domain $[a,b]$, a linear spline $S(x)$:
is defined on the entire domain
provides continuity, i.e., is $C^0[a,b]$
$\exists$ a partition of knots $\{a = t_0, t_1, t_2, \ldots, t_n = b\}$ such that
$$S_i(x) = a_i x + b_i \in P_1\left(\left[t_i, t_{i+1}\right]\right), \quad i = 0,\ldots,n-1$$
Recall: no $y$-values or interpolation yet

Linear Spline Examples
[Plot: a linear spline on $[a,b]$, contrasted with rejected curves: a discontinuous piecewise-linear function, a nonlinear part, an undefined part.]
Definition outside of $[a,b]$ is arbitrary

Interpolatory Linear Splines
Given points $\{(t_0,y_0), (t_1,y_1), (t_2,y_2), \ldots, (t_n,y_n)\}$ $\Rightarrow$ the spline must interpolate as well
Is $S(x)$ (with no additional knots) unique?
Coefficients: $a_i x + b_i$, $i = 0,\ldots,n-1$ $\Rightarrow$ total $= 2n$
Conditions: 2 prescribed interpolation points for each $S_i(x)$, $i = 0,\ldots,n-1$ (includes the continuity condition) $\Rightarrow$ total $= 2n$
Obtain:
$$S_i(x) = a_i x + (y_i - a_i t_i), \qquad a_i = \frac{y_{i+1} - y_i}{t_{i+1} - t_i}, \quad i = 0,\ldots,n-1$$
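A sketch of these formulas (the knot lookup via bisect is my choice):

```python
from bisect import bisect_right

def linear_spline(t, y):
    a = [(y[i+1] - y[i]) / (t[i+1] - t[i]) for i in range(len(t) - 1)]
    def S(x):
        i = min(max(bisect_right(t, x) - 1, 0), len(a) - 1)  # find [t_i, t_{i+1}]
        return a[i] * x + (y[i] - a[i] * t[i])               # S_i(x)
    return S

S = linear_spline([0.0, 1.0, 2.0, 4.0], [1.0, 3.0, 2.0, 2.0])
print(S(0.5), S(1.0), S(3.0))   # 2.0 3.0 2.0 -- interpolates, C^0 at the knots
```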

Interpolatory Linear Splines Example
[Plot: the linear spline through the motivation data on $-4 \le x \le 4$.]
Discontinuous derivatives at the knots are unpleasing, ...

Approximation by Splines: Quadratic Splines

Quadratic Splines
Given a domain $[a,b]$, a quadratic spline $S(x)$:
is defined on the entire domain
provides continuity of the zeroth and first derivatives, i.e., is $C^1[a,b]$
$\exists$ a partition of knots $\{a = t_0, t_1, t_2, \ldots, t_n = b\}$ such that
$$S_i(x) = a_i x^2 + b_i x + c_i \in P_2\left(\left[t_i, t_{i+1}\right]\right), \quad i = 0,\ldots,n-1$$
Again no $y$-values or interpolation yet