Numerical Methods. Aaron Naiman Jerusalem College of Technology naiman


1 Numerical Methods, Aaron Naiman, Jerusalem College of Technology. Based on: Numerical Mathematics and Computing by Cheney & Kincaid, © 1994 Brooks/Cole Publishing Company, ISBN. Copyright © 2010 by A. E. Naiman

2 Taylor Series Definitions and Theorems Examples Proximity of x to c Additional Notes Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 1

3 Motivation Sought: cos(0.1) Missing: calculator or lookup table Known: cos for another (nearby) value, i.e., at 0 Also known: lots of (all) derivatives at 0 Can we use them to approximate cos(0.1)? What will be the worst error of our approximation? These techniques are used by computers, calculators, tables. Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 2

4 Taylor Series Series definition: If f^(k)(c) exists, k = 0,1,2,..., then: f(x) ≈ f(c) + f′(c)(x − c) + f″(c)(x − c)²/2! + ⋯ = Σ_{k=0}^∞ f^(k)(c)(x − c)^k/k! c is a constant and much is known about it (the f^(k)(c)) x is a variable near c, and f(x) is sought With c = 0: Maclaurin series What is the maximum error if we stop after n terms? Real life: crowd estimation: 100K ±10K vs. 100K ±1K Key NM questions: What is the estimate? What is its max error? Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 3

5 Taylor Series: cos x [Plot: cos x together with its Taylor approximations 1, 1 − x²/2, 1 − x²/2 + x⁴/4!, and 1 − x²/2 + x⁴/4! − x⁶/6!] Better and better approximation, near c and away. Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 4

6 Taylor's Theorem Theorem: If f ∈ C^{n+1}[a,b] then f(x) = Σ_{k=0}^n f^(k)(c)/k! (x − c)^k + f^(n+1)(ξ(x))/(n+1)! (x − c)^{n+1} where x, c ∈ [a,b], ξ(x) ∈ open interval between x and c Notes: f ∈ C(X) means f is continuous on X f ∈ C^k(X) means f, f′, f″, f^(3), ..., f^(k) are continuous on X ξ = ξ(x), i.e., a point whose position is a function of x Error term is just like the other terms, with k := n+1 The ξ-term is the truncation error, due to series termination Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 5

7 Taylor Series Procedure Writing it out, step-by-step: write the formula for f^(k)(x) choose c (if not already specified) write out the summation and error term note: sometimes easier to write out a few terms Things to (possibly) prove by analyzing the worst case ξ: letting n → ∞: LHS remains f(x), summation becomes the infinite Taylor series if the error term → 0, the infinite Taylor series represents f(x) for a given n, we can estimate the max of the error term Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 6

8 Taylor Series Definitions and Theorems Examples Proximity of x to c Additional Notes Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 7

9 Taylor Series: e^x f(x) = e^x, |x| < ∞ f^(k)(x) = e^x, ∀k Choose c := 0 We have e^x = Σ_{k=0}^n x^k/k! + e^{ξ(x)}/(n+1)! x^{n+1} As n → ∞: take worst case ξ (just less than |x|): error term → 0 (why?) ⟹ e^x = Σ_{k=0}^∞ x^k/k! = 1 + x + x²/2! + x³/3! + ⋯ Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 8
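A minimal sketch of this partial sum in Python (the function name and loop structure are mine, not from the slides); the worst-case remainder e^ξ·x^{n+1}/(n+1)! from the slide is used as the error bound:

```python
import math

def exp_series(x, n):
    """Partial sum sum_{k=0}^{n} x^k/k! of the Maclaurin series for e^x."""
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k          # turns x^(k-1)/(k-1)! into x^k/k!
        total += term
    return total

# Truncation error is e^xi * x^(n+1)/(n+1)! for some xi between 0 and x;
# for x = 1 the worst case is xi just below 1, so e/(n+1)! bounds it.
x, n = 1.0, 10
bound = math.e / math.factorial(n + 1)
print(abs(exp_series(x, n) - math.exp(x)), bound)
```

The actual error sits comfortably under the bound, as the theorem promises.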

10 Taylor Series: sin x f(x) = sin x, |x| < ∞ f^(k)(x) = sin(x + πk/2), ∀k, c := 0 We have sin x = Σ_{k=0}^n sin(πk/2)/k! x^k + sin(ξ(x) + π(n+1)/2) x^{n+1}/(n+1)! Error term → 0 as n → ∞ Even-k terms are zero ⟹ with l = 0,1,2,... and k = 2l+1: sin x = Σ_{l=0}^∞ sin(π(2l+1)/2) x^{2l+1}/(2l+1)! = Σ_{k=0}^∞ (−1)^k x^{2k+1}/(2k+1)! = x − x³/3! + x⁵/5! − ⋯ Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 9

11 Taylor Series: cos x f(x) = cos x, |x| < ∞ f^(k)(x) = cos(x + πk/2), ∀k, c := 0 We have cos x = Σ_{k=0}^n cos(πk/2)/k! x^k + cos(ξ(x) + π(n+1)/2) x^{n+1}/(n+1)! Error term → 0 as n → ∞ Odd-k terms are zero ⟹ with l = 0,1,2,... and k = 2l: cos x = Σ_{l=0}^∞ cos(πl) x^{2l}/(2l)! = Σ_{k=0}^∞ (−1)^k x^{2k}/(2k)! = 1 − x²/2! + x⁴/4! − ⋯ Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 10

12 Numerical Example: cos(0.1) We have 1) f(x) = cos x and 2) c = 0 ⟹ obtain series: cos x = 1 − x²/2! + x⁴/4! − ⋯ Actual value: cos(0.1) = 0.9950041653... With 3) x = 0.1 and 4) specific n's, from Taylor approximations: n = 0: approximation 1, error ≤ 0.1²/2! = 0.005 n = 2: approximation 0.995, error ≤ 0.1⁴/4! ≈ 4.2×10⁻⁶ n = 4: approximation 0.9950041667, error ≤ 0.1⁶/6! ≈ 1.4×10⁻⁹ n = 6: approximation 0.9950041653, error ≤ 0.1⁸/8! ≈ 2.5×10⁻¹³ (each approximation includes the odd k, whose terms are zero) Obtain accurate approximation easily and quickly. Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 11
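The table above can be reproduced with a short script (the function name is mine); each error is checked against the first omitted term of the series:

```python
import math

def cos_series(x, n):
    """Taylor terms of cos x up to degree n (odd-degree terms are zero)."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n // 2 + 1))

x = 0.1
for n in (0, 2, 4, 6):
    approx = cos_series(x, n)
    err = abs(approx - math.cos(x))
    bound = x ** (n + 2) / math.factorial(n + 2)  # first omitted term
    print(n, approx, err, bound)
```

For x = 0.1 each extra pair of terms buys roughly three more correct digits.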

13 Taylor Series: (1 − x)^{−1} f(x) = (1 − x)^{−1}, |x| < 1 f^(k)(x) = k!(1 − x)^{−(k+1)}, ∀k, choose c := 0 We have 1/(1 − x) = Σ_{k=0}^n x^k + (n+1)!(1 − ξ(x))^{−(n+2)}/(n+1)! x^{n+1} = Σ_{k=0}^n x^k + (x/(1 − ξ(x)))^{n+1} · 1/(1 − ξ(x)) Why bother, with the LHS so simple? Ideas? Sufficient: |x/(1 − ξ(x))|^{n+1} → 0 as n → ∞ For what range of x is this satisfied? Need to determine the radius of convergence. Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 12

14 (1 − x)^{−1} Range of Convergence Sufficient: |x/(1 − ξ(x))| < 1 Approach: get the variable x into the middle of the sufficiency inequality transform the range-of-ξ inequality to the LHS and RHS of the sufficiency inequality require a restriction on x, but check if already satisfied ξ < 1 ⟹ 1 − ξ > 0 ⟹ sufficient: −(1 − ξ) < x < 1 − ξ Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 13

15 (1 − x)^{−1} Range of Convergence (cont.) case x < ξ < 0: LHS: −(1 − x) < −(1 − ξ) < −1, require: −1 ≤ x RHS: 1 < 1 − ξ < 1 − x, require: x ≤ 1 case 0 < ξ < x: LHS: −1 < −(1 − ξ) < −(1 − x), require: −(1 − x) ≤ x, or: −1 ≤ 0 ✓ RHS: 1 − x < 1 − ξ < 1, require: x ≤ 1 − x, or: x ≤ 1/2 Therefore, for −1 ≤ x ≤ 1/2: 1/(1 − x) = Σ_{k=0}^∞ x^k = 1 + x + x² + x³ + ⋯ (Zeno: x = 1/2, ...) Need more analysis for the whole range |x| < 1. Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 14

16 Taylor Series: ln x f(x) = ln x, 0 < x ≤ 2 f^(k)(x) = (−1)^{k−1}(k − 1)! x^{−k}, k ≥ 1 Choose c := 1 We have ln x = Σ_{k=1}^n (−1)^{k−1}(x − 1)^k/k + (−1)^n (x − 1)^{n+1}/((n+1) ξ^{n+1}(x)) Sufficient: |(x − 1)/ξ(x)|^{n+1} → 0 as n → ∞ Again, for what range of x is this satisfied? Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 15

17 ln x Range of Convergence Sufficient: |(x − 1)/ξ(x)| < 1, i.e., 1 − ξ < x < 1 + ξ case 1 < ξ < x: LHS: 1 − x < 1 − ξ < 0, require: 0 ≤ x RHS: 2 < 1 + ξ < 1 + x, require: x ≤ 2 case x < ξ < 1: LHS: 0 < 1 − ξ < 1 − x, require: 1 − x ≤ x, or: 1/2 ≤ x RHS: 1 + x < 1 + ξ < 2, require: x ≤ 1 + x ✓ Therefore, for 1/2 ≤ x ≤ 2: ln x = Σ_{k=1}^∞ (−1)^{k−1}(x − 1)^k/k = (x − 1) − (x − 1)²/2 + (x − 1)³/3 − ⋯ Again, need more analysis for the entire range of x. Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 16

18 Theorem: Ratio Test and ln x Revisited |a_{n+1}/a_n| → r (< 1) ⟹ partial sums converge ln x: ratio of adjacent summand terms (not the error term): |a_{n+1}/a_n| = n/(n+1) |x − 1| → |x − 1| Obtain convergence of the partial sums for 0 < x < 2 Note: not looking at ξ and the error term x = 2: 1 − 1/2 + 1/3 − 1/4 + ⋯, which is convergent (why?) x = 0: same series, all terms the same sign ⟹ divergent harmonic series ⟹ we have 0 < x ≤ 2 Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 17

19 Letting x → −x: (1 − x)^{−1} Revisited ln(1 − x) = −(x + x²/2 + x³/3 + ⋯), −1 ≤ x < 1 d/dx: LHS = −1/(1 − x) and RHS = −(1 + x + x² + x³ + ⋯)! : no "=" for x = −1, as the RHS oscillates (note: correct avg value) |x| < 1: we have (also with the ratio test) 1/(1 − x) = 1 + x + x² + x³ + ⋯ Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 18

20 Taylor Series Definitions and Theorems Examples Proximity of x to c Additional Notes Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 19

21 Proximity of x to c Problem: Approximate ln 2 Solution 1: Taylor of ln(1 + x) around 0, with x = 1: ln 2 = 1 − 1/2 + 1/3 − 1/4 + ⋯ Solution 2: Taylor of ln((1 + x)/(1 − x)) around 0, with x = 1/3: ln 2 = 2(1/3 + (1/3)³/3 + (1/3)⁵/5 + ⋯) Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 20

22 Proximity of x to c (cont.) Approximated values, rounded: Solution 1, first 8 terms: 0.634524 Solution 2, first 4 terms: 0.693135 Actual value, rounded: 0.693147 ⟹ importance of the proximity of the evaluation and expansion points This error is in addition to the truncation error. Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 21
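A quick check of both series, counting nonzero terms as on the slide:

```python
import math

# Solution 1: ln(1+x) at x = 1, evaluation point far from the expansion point 0
s1 = sum((-1) ** (k - 1) / k for k in range(1, 9))                # first 8 terms

# Solution 2: ln((1+x)/(1-x)) at x = 1/3, much closer to 0
x = 1.0 / 3.0
s2 = 2.0 * sum(x ** (2 * k + 1) / (2 * k + 1) for k in range(4))  # first 4 terms

print(s1, s2, math.log(2.0))  # s2 is far more accurate with half the terms
```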

23 Taylor Series Definitions and Theorems Examples Proximity of x to c Additional Notes Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 22

24 Polynomials and a Second Form Polynomials ∈ C^∞(−∞,∞), have a finite number of non-zero derivatives, Taylor series ∀c ... the original polynomial, i.e., error = 0 f(x) = 3x² + 1, ...: f(x) = Σ_{k=0}^2 f^(k)(0)/k! x^k = 1 + 0 + 3x² Taylor's Theorem can be used for fewer terms e.g.: approximate a p ∈ P₁₇ near c by a p ∈ P₃ Taylor's Theorem, second form (x = constant expansion point, h = distance, x + h = variable evaluation point): If f ∈ C^{n+1}[a,b] then f(x + h) = Σ_{k=0}^n f^(k)(x)/k! h^k + f^(n+1)(ξ(h))/(n+1)! h^{n+1} x, x+h ∈ [a,b], ξ(h) ∈ open interval between x and x + h Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 23

25 Taylor Approximate: (1 + 3h)^{4/5} Define: f(z) ≡ z^{4/5}; x = 1 is the constant expansion point Derivs: f′(z) = (4/5) z^{−1/5}, f″(z) = −(4/5²) z^{−6/5}, f‴(z) = (4·6/5³) z^{−11/5}, ... ⟹ (x + h)^{4/5} = x^{4/5} + (4/5) x^{−1/5} h − (4/(5²·2!)) x^{−6/5} h² + (4·6/(5³·3!)) x^{−11/5} h³ − ⋯ (x + 3h)^{4/5} = x^{4/5} + (4/5) x^{−1/5}·3h − (4/(5²·2!)) x^{−6/5}·9h² + (4·6/(5³·3!)) x^{−11/5}·27h³ − ⋯ (1 + 3h)^{4/5} = 1 + (12/5)h − (18/25)h² + (108/125)h³ − ⋯ Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 24

26 Second Form: ln(e + h) Evaluation of interest: ln(e + h), for −e < h ≤ e Define: f(z) ≡ ln(z), z > 0 x = e is the constant expansion point Derivatives: f(z) = ln z, f(e) = 1 f′(z) = z^{−1}, f′(e) = e^{−1} f″(z) = −z^{−2}, f″(e) = −e^{−2} f‴(z) = 2z^{−3}, f‴(e) = 2e^{−3} f^(n)(z) = (−1)^{n−1}(n − 1)! z^{−n}, f^(n)(e) = (−1)^{n−1}(n − 1)! e^{−n} Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 25

27 ln(e + h) Expansion and Convergence Expansion (recall: x = e): ln(e + h) = f(x + h) = 1 + Σ_{k=1}^n (−1)^{k−1}(k − 1)! e^{−k} h^k/k! + (−1)^n n! ξ(h)^{−(n+1)} h^{n+1}/(n+1)! or ln(e + h) = 1 + Σ_{k=1}^n (−1)^{k−1}/k (h/e)^k + (−1)^n/(n+1) (h/ξ(h))^{n+1} Range of convergence, sufficient (for variable h): −ξ < h < ξ case e + h < ξ < e: ... ⟹ −e/2 ≤ h case e < ξ < e + h: ... ⟹ h ≤ e Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 26

28 O() Notation and MVT As h → 0, we write the speed of f(h) → 0: f(h) = O(h^k) ⟺ |f(h)| ≤ C|h|^k e.g., compare h vs. h²; let h = 1/10, 1/100, 1/1000, ... Taylor truncation error = O(h^{n+1}); if for a given n the max exists, then C := max_{ξ(h)} |f^(n+1)(ξ(h))|/(n+1)! Mean value theorem (Taylor, n = 0): If f ∈ C¹[a,b] then f(b) = f(a) + (b − a)f′(ξ), ξ ∈ (a,b) or: f′(ξ) = (f(b) − f(a))/(b − a) Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 27

29 Alternating Series Theorem Alternating series theorem: If a_k > 0, a_k ≥ a_{k+1} ∀k ≥ 0, and a_k → 0, then Σ_{k=0}^∞ (−1)^k a_k = S exists and |S − S_n| ≤ a_{n+1} Intuitively understood Note: the direction of the error is also known for a specific n We had this with sin and cos Another useful method for max truncation error estimation Max truncation error estimation without ξ-analysis Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 28

30 ln(e + h) Max Trunc. Error Estimate What is the max error after n + 1 terms? Max error estimate also depends on proximity, i.e., the size of h from Taylor: obtain O(h^{n+1}): |error| ≤ 1/(n+1) |h|^{n+1} max_ξ |1/ξ|^{n+1} from AST (check the conditions!): also obtain O(h^{n+1}), with a different constant: |error| ≤ 1/(n+1) |h/e|^{n+1} E.g.: h = e/2: ln(3e/2) = Taylor max error (occurs as ξ → e): 1/(n+1) · (1/2)^{n+1} AST max error: 1/(n+1) · (1/2)^{n+1} ⟹ note the same max error estimate here; but they can be very different Copyright c 2010 by A. E. Naiman NM Slides Taylor Series, p. 29

31 Base Representations Definitions Conversions Computer Representation Loss of Significant Digits Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 1

32 Number Representation Simple representation in one base ⇏ simple representation in another base, e.g. Base 10: (0.1)₁₀ = (0.000110011001100...)₂, a finite fraction in base 10 but infinite in base 2 in general: a_n...a_0 = Σ_{k=0}^n a_k 10^k Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 2

33 Fractions and Irrationals Base 10 fraction: in general, for real numbers: a_n...a_0.b_1b_2... = Σ_{k=0}^n a_k 10^k + Σ_{k=1}^∞ b_k 10^{−k} Note: ∃ numbers, i.e., irrationals, such that an infinite number of digits is required, in any rational base, e.g., e, π, √2 Need infinite number of digits in a base ⇏ irrational: (0.333...)₁₀, but 1/3 is not irrational Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 3

34 Other Bases Base 8: no 8 or 9, using octal digits (21467)₈ = 2·8⁴ + 1·8³ + 4·8² + 6·8 + 7 = (9015)₁₀ or, nested: (21467)₈ = 8(8(8(8·2 + 1) + 4) + 6) + 7 = (9015)₁₀ Base 16: 0, 1, ..., 9, A (10), B (11), C (12), D (13), E (14), F (15) Base β: (a_n...a_0.b_1...)_β = Σ_{k=0}^n a_k β^k + Σ_{k=1}^∞ b_k β^{−k} Base 2: just 0 and 1, or for computers: off and on, bit = binary digit Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 4

35 Base Representations Definitions Conversions Computer Representation Loss of Significant Digits Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 5

36 Conversion: Base 10 → Base 2 Basic idea: 3781 = 1 + 10(8 + 10(7 + 10(3))), with 10 = (1010)₂ and each digit in base 2, e.g., 8 = (1000)₂ ⟹ (3781)₁₀ = (111011000101)₂ Easy for a computer, but by hand, divide repeatedly by 2 and keep the remainders: 3781 ÷ 2 = 1890 remainder 1 = a₀ 1890 ÷ 2 = 945 remainder 0 = a₁ 945 ÷ 2 = 472 remainder 1 = a₂ ... For fractions: repeatedly multiply by 2; the integer parts are b₁, b₂, ... (drop the integer part each time). Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 6
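The repeated-division scheme, sketched in Python (the function name is mine):

```python
def to_base2(n):
    """Base-10 positive integer -> base-2 digit string, by repeated division by 2."""
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # the remainder is the next bit, least significant first
        bits.append(str(r))
    return "".join(reversed(bits)) if bits else "0"

print(to_base2(3781))  # '111011000101'
```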

37 Base 8 Shortcut Base 2 ↔ base 8 is trivial: ( )₈ = ( )₂, 3 bits for every 1 octal digit One digit produced for every step in (hand) conversion ⟹ convert base 10 → base 8 → base 2 Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 7

38 Base Representations Definitions Conversions Computer Representation Loss of Significant Digits Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 8

39 Computer Representation Scientific notation: In general x = ±0.d₁...d_n × 10^n, d₁ ≠ 0, or: x = ±r·10^n; we have a sign, mantissa r and exponent n, with 1/10 ≤ r < 1 On the computer, base 2 is represented: x = ±0.b₁...b_n × 2^n, b₁ ≠ 0, or: x = ±r·2^n, 1/2 ≤ r < 1 Finite number of mantissa digits, therefore roundoff or truncation error Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 9

40 Base Representations Definitions Conversions Computer Representation Loss of Significant Digits Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 10

41 LSD Addition (a + b) + c = a + (b + c) on the computer? Six decimal digits for the mantissa ⟹ 1,000,000. + 1. + 1. + ⋯ + 1. (a million times) = 1,000,000. but 1. + 1. + ⋯ + 1. (a million times) + 1,000,000. = 2,000,000. Add numbers in size order. Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 11

42 LSD Subtraction E.g.: x − sin x for x's close to zero x = 1/15 (radians): x = 0.0666667 sin x = 0.0666173 x − sin x = 0.0000494 Note: we still have precision, but can we retain the 3 lost digits of precision? Avoid subtraction of close numbers. Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 12

43 LSD Avoidance for Subtraction x − sin x for x ≈ 0: use the Taylor series ⟹ no subtraction of close numbers, e.g., 3 terms: x − sin x ≈ x³/3! − x⁵/5! + x⁷/7! e^x − e^{−2x} for x ≈ 0: use the Taylor series twice and add common powers √(x² + 1) − 1 for x ≈ 0: multiply and divide by √(x² + 1) + 1 ⟹ x²/(√(x² + 1) + 1) cos²x − sin²x for x ≈ π/4: use cos 2x ln x − 1 for x ≈ e: use ln(x/e) Copyright c 2010 by A. E. Naiman NM Slides Base Representations, p. 13
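In double precision the effect is mild, but the same comparison illustrates the idea (series truncated after three terms, as above):

```python
import math

x = 1.0 / 15.0
naive = x - math.sin(x)                        # subtracts two nearly equal numbers
series = x**3 / 6 - x**5 / 120 + x**7 / 5040   # x - sin x = x^3/3! - x^5/5! + x^7/7! - ...
print(naive, series)  # agree closely; on a 6-digit machine only the series keeps full precision
```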

44 Nonlinear Equations Motivation Bisection Method Newton s Method Secant Method Summary Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 1

45 Motivation For a given function f(x), find its root(s), i.e.: find x (or r = root) such that f(x) = 0 BVP: dipping of a suspended power cable. What is λ? λ cosh(50/λ) − λ − 10 = 0 (Some) simple equations ⟹ solve analytically: 6x² − 7x + 2 = 0 ⟹ (3x − 2)(2x − 1) = 0 ⟹ x = 2/3, 1/2 cos 3x − cos 7x = 0 ⟹ 2 sin 5x sin 2x = 0 ⟹ x = nπ/5, nπ/2, n ∈ Z Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 2

46 Motivation (cont.) In general, we cannot exploit the function, e.g.: 2^{x²} − 10x + 1 = 0 and cosh(√(x² + 1)) − e^x + log|sin x| = 0 Note: at times multiple roots, e.g., the previous parabola and cosine we want at least one we may only get one (for each search) Need a general, function-independent algorithm. Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 3

47 Nonlinear Equations Motivation Bisection Method Newton s Method Secant Method Summary Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 4

48 Bisection Method Example [Plot: function crossing the x-axis, with bracketing points a, x₂, x₃, x₁, x₀, b] Intuitive, like guessing a number in [0, 100]. Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 5

49 Restrictions and Max Error Estimate Restrictions the function slices the x-axis at the root start with two points a and b ∋ f(a)f(b) < 0 a graphing tool (e.g., Matlab) can help to find a and b require f ∈ C⁰[a,b] (why? note: not a big deal) Max error estimate after n steps, guess the midpoint of the current range error: ǫ ≤ (b − a)/2^{n+1} (think of n = 0, 1, 2) note: the error is in x; can also look at the error in f(x) or a combination ⟹ enters the entire world of stopping criteria Question: Given a tolerance (in x), what is n? ... Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 6

50 Convergence Rate Given a tolerance τ (e.g., 10⁻⁶), how many steps are needed? Tolerance restriction (ǫ from before): ǫ ≤ (b − a)/2^{n+1} < τ 1) ×2, 2) log (any base): log(b − a) − n log 2 < log 2τ or n > (log(b − a) − log 2τ)/log 2 The rate is independent of the function. Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 7

51 Convergence Rate (cont.) Base 2 (i.e., bits of accuracy): n > log₂(b − a) − 1 − log₂ τ i.e., the number of steps is a constant plus one step per bit Linear convergence rate: ∃C ∈ [0,1) ∋ |x_{n+1} − r| ≤ C|x_n − r|, ∀n ≥ 0 i.e., monotonically decreasing error at every step, and |x_{n+1} − r| ≤ C^{n+1}|x_0 − r| Bisection convergence is not linear (examples?), but compared to the initial max error it has a similar form: |x_{n+1} − r| ≤ C^{n+1}(b − a), with C = 1/2 Okay, but restrictive and slow. Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 8
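A sketch of the method with the midpoint guess and the (b − a)/2^{n+1} error logic (names are mine):

```python
def bisect(f, a, b, tol):
    """Find a root of f in [a,b]; requires f continuous and f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change on [a,b]"
    while (b - a) / 2 > tol:          # midpoint guess is within (b-a)/2 of a root
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:              # root is in [a, m]
            b, fb = m, fm
        else:                         # root is in [m, b]
            a, fa = m, fm
    return (a + b) / 2

print(bisect(lambda x: x * x - 2.0, 0.0, 2.0, 1e-8))  # ~ sqrt(2)
```

Halving the bracket each step gives exactly the "one step per bit" behavior derived above.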

52 Nonlinear Equations Motivation Bisection Method Newton s Method Secant Method Summary Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 9

53 Newton's Method Definition Approximate f(x) near x₀ by the tangent l(x): f(x) ≈ f(x₀) + f′(x₀)(x − x₀) ≡ l(x) Want l(r) = 0 ⟹ r = x₀ − f(x₀)/f′(x₀), x₁ := r; likewise: x_{n+1} = x_n − f(x_n)/f′(x_n) Alternatively (Taylor's): have x₀; for what h is f(x₀ + h) = 0 (then x₁ := x₀ + h)? f(x₀ + h) ≈ f(x₀) + h f′(x₀) = 0 or h = −f(x₀)/f′(x₀) Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 10

54 Newton's Method Example [Plot: tangent-line iterates x₃, x₂, x₁, x₀ descending to the root] Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 11

55 Convergence Rate English: With enough continuity and proximity ⟹ quadratic convergence! Theorem: With the following three conditions: 1) f(r) = 0, 2) f′(r) ≠ 0, 3) f ∈ C²(B(r, δ̂)) ⟹ ∃δ ∋ ∀x₀ ∈ B(r,δ) and ∀n we have |x_{n+1} − r| ≤ C(δ)|x_n − r|² for a given δ, C is a constant (not necessarily < 1) Note: again, use a graphing tool to seed x₀ Newton's method can be very fast. Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 12

56 Convergence Rate Example f(x) = x³ − 2x² + x − 3, x₀ = 4 [Table: x_n and f(x_n) per iteration; f(x_n) drops to roughly 10⁻¹² within a few steps] Stopping criteria theorem: uses x; above: uses f(x), often all we have possibilities: absolute/relative, size/change, x or f(x) (combos, ...) But the proximity issue can bite, ... Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 13
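The iteration itself is tiny; here is a sketch on a simple test function of my choosing, stopping on the change in x (one of the criteria listed above):

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)       # fails if f'(x_n) = 0 -- failure mode #2 below
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence; reseed from a graphing tool")

r = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 4.0)
print(r)  # ~ sqrt(2)
```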

57 Sample Newton Failure #1 [Plot: iterates x_n running off away from the root] Runaway process Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 14

58 Sample Newton Failure #2 [Plot: iterate x_n landing where the tangent is flat] Division by a zero derivative, recall the algorithm Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 15

59 Sample Newton Failure #3 [Plot: x_{n+1} and x_n bouncing between each other] Loop-d-loop (can happen over m points) Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 16

60 Nonlinear Equations Motivation Bisection Method Newton s Method Secant Method Summary Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 17

61 Secant Method Definition Motivation: avoid derivatives Taylor (or derivative definition): f′(x_n) ≈ (f(x_n) − f(x_{n−1}))/(x_n − x_{n−1}) ⟹ x_{n+1} = x_n − f(x_n)(x_n − x_{n−1})/(f(x_n) − f(x_{n−1})) Bisection requirements comparison: 2 previous points, but no f(a)f(b) < 0 Additional advantage vs. Newton: only one function evaluation per iteration Superlinear convergence: |x_{n+1} − r| ≤ C|x_n − r|^{(1+√5)/2} (recognize the exponent?) Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 18
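A sketch of the iteration (names mine); note that only one new f evaluation happens per step, the other value is reused:

```python
def secant(f, x0, x1, tol=1e-12, maxit=60):
    """Secant method: the derivative is replaced by a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # one new f evaluation per step
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            return x1
    raise RuntimeError("no convergence")

print(secant(lambda x: x * x - 2.0, 1.0, 2.0))  # ~ sqrt(2)
```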

62 Nonlinear Equations Motivation Bisection Method Newton s Method Secant Method Summary Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 19

63 Root Finding Summary Performance and requirements: bisection: no smoothness needed, 2 initial points (with the requirement that f(a)f(b) < 0), 1 function evaluation per iteration, not speedy Newton: f ∈ C² nbhd(r), 1 initial point, 2 function evaluations per iteration (f and f′), speedy secant: f ∈ C² nbhd(r), 2 initial points, 1 function evaluation per iteration, speedy Often methods are combined (how?), with restarts for divergence or cycles Recall: use a graphing tool to seed x₀ (and x₁) Copyright c 2010 by A. E. Naiman NM Slides Nonlinear Equations, p. 20

64 Interpolation and Approximation Motivation Polynomial Interpolation Numerical Differentiation Additional Notes Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 1

65 Motivation Three sample problems: {(x_i, y_i) | i = 0,...,n}, (x_i distinct), want simple (e.g., polynomial) p(x) ∋ y_i = p(x_i), i = 0,...,n ⟹ interpolation Assume the data includes errors; relax equality, but still close, ... ⟹ least squares Replace complicated f(x) with simple p(x) ≈ f(x) Interpolation similar to the English term (contrast: extrapolation) for now: polynomial; later: splines Use p(x) for p(x_new), ∫p(x)dx, ... Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 2

66 Interpolation and Approximation Motivation Polynomial Interpolation Numerical Differentiation Additional Notes Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 3

67 Constant and Linear Interpolation [Plot: p₀(x) and p₁(x) through (x₀,y₀) and (x₁,y₁)] n = 0: p(x) = y₀ n = 1: p(x) = y₀ + g(x)(y₁ − y₀), g(x) ∈ P₁, with g(x₀) = 0, g(x₁) = 1 ⟹ g(x) = (x − x₀)/(x₁ − x₀) n = 2: more complicated, ... Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 4

68 Lagrange Polynomials Given: x_i, i = 0,...,n; Kronecker delta: δ_ij = 0 for i ≠ j, 1 for i = j Lagrange polynomials: l_i(x) ∈ P_n ∋ l_i(x_j) = δ_ij, i = 0,...,n independent of any y_i values E.g., n = 2: [Plot: l₀(x), l₁(x), l₂(x) on nodes x₀, x₁, x₂] Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 5

69 Lagrange Interpolation We have l₀(x) = (x − x₁)/(x₀ − x₁) · (x − x₂)/(x₀ − x₂), l₁(x) = (x − x₀)/(x₁ − x₀) · (x − x₂)/(x₁ − x₂), l₂(x) = (x − x₀)/(x₂ − x₀) · (x − x₁)/(x₂ − x₁) y₀ l₀(x_j) = y₀ δ_0j = y₀ for j = 0, else 0; y₁ l₁(x_j) = y₁ δ_1j; y₂ l₂(x_j) = y₂ δ_2j ⟹ ∃! p(x) ∈ P₂ with p(x_j) = y_j, j = 0,1,2: p(x) = Σ_{i=0}^2 y_i l_i(x) In general: l_i(x) = Π_{j=0, j≠i}^n (x − x_j)/(x_i − x_j), i = 0,...,n Great! What could be wrong? Easy functions (polynomials), interpolation (error = 0 at the x_i) ... but what about p(x_new)? Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 6
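The general formula translates directly into code (names and the sample data are mine):

```python
def lagrange(xs, ys):
    """Return p(x) = sum_i ys[i] * l_i(x) for distinct nodes xs."""
    def p(x):
        total = 0.0
        for i, xi in enumerate(xs):
            li = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    li *= (x - xj) / (xi - xj)   # l_i(x_j) = delta_ij by construction
            total += ys[i] * li
        return total
    return p

# nodes sampled from f(x) = x^2 + x + 1; the unique P_2 interpolant reproduces f
p = lagrange([0.0, 1.0, 2.0], [1.0, 3.0, 7.0])
print(p(1.5))  # 4.75
```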

70 Interpolation Error & the Runge Function {(x_i, f(x_i)) | i = 0,...,n}, f(x) ≈ p(x)? Runge function: f_R(x) = (1 + x²)^{−1}, x ∈ [−5,5], and a uniform mesh ⟹! p(x) has the wrong shape and high oscillations: lim_{n→∞} max_{−5≤x≤5} |f_R(x) − p_n(x)| = ∞ [Plot: f_R and its degree-8 interpolant on uniform nodes x₀ = 5, ..., x₄ = 0, ..., x₈ = −5] Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 7

71 Error Theorem Theorem: ..., f ∈ C^{n+1}[a,b] ⟹ ∀x ∈ [a,b] ∃ξ ∈ (a,b) ∋ f(x) − p(x) = 1/(n+1)! f^{(n+1)}(ξ) Π_{i=0}^n (x − x_i) Max error: with the x_i and x, still need max_{(a,b)} |f^{(n+1)}(ξ)| with the x_i only, also need the max of |Π_{i=0}^n (x − x_i)| without the x_i: max_{(a,b)} |Π_{i=0}^n (x − x_i)| ≤ (b − a)^{n+1} Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 8

72 Chebyshev Points [Plot: Chebyshev nodes x₀ = 1, ..., x₄ = 0, ..., x₈ = −1] Chebyshev points on [−1,1]: x_i = cos((i/n)π), i = 0,...,n In general on [a,b]: x_i = (a + b)/2 + ((b − a)/2) cos((i/n)π), i = 0,...,n Points concentrated at the edges Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 9

73 Runge Function with Chebyshev Points [Plot: f_R and its degree-8 interpolant on Chebyshev nodes x₀ = 5, ..., x₄ = 0, ..., x₈ = −5] Is this good interpolation? Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 10
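A sketch comparing the max error on a fine grid, uniform vs. Chebyshev nodes for f_R (the degree 10 and grid spacing are my choices, not from the slides):

```python
import math

runge = lambda x: 1.0 / (1.0 + x * x)

def interp_eval(xs, x):
    """Evaluate the Lagrange interpolant of the Runge function at x, for nodes xs."""
    return sum(runge(xi) * math.prod((x - xj) / (xi - xj)
                                     for j, xj in enumerate(xs) if j != i)
               for i, xi in enumerate(xs))

n = 10
uniform = [-5.0 + 10.0 * i / n for i in range(n + 1)]
cheb = [5.0 * math.cos(i * math.pi / n) for i in range(n + 1)]  # [a,b] = [-5,5]

grid = [-5.0 + 0.01 * k for k in range(1001)]
err_u = max(abs(interp_eval(uniform, x) - runge(x)) for x in grid)
err_c = max(abs(interp_eval(cheb, x) - runge(x)) for x in grid)
print(err_u, err_c)  # uniform nodes oscillate badly near the edges
```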

74 Chebyshev Interpolation Same interpolation method Different interpolation points ⟹ minimizes max |Π_{i=0}^n (x − x_i)| Periodic behavior ⟹ interpolate with sines/cosines instead of P_n ⟹ there a uniform mesh minimizes the max error Note: a uniform partition with spacing = cheb₁ − cheb₀ ⟹ many more points ⟹ higher polynomial degree ⟹ oscillations Note: the shape is still wrong ... see splines later Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 11

75 Interpolation and Approximation Motivation Polynomial Interpolation Numerical Differentiation Additional Notes Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 12

76 Numerical Differentiation Note: until now, approximating f(x); now f′(x) f′(x) ≈ (f(x + h) − f(x))/h Error =? Taylor: f(x + h) = f(x) + h f′(x) + h² f″(ξ)/2 ⟹ f′(x) = (f(x + h) − f(x))/h − (1/2) h f″(ξ) I.e., truncation error: O(h) Can we do better? Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 13

77 Numerical Differentiation Take Two Taylor for +h and −h: f(x ± h) = f(x) ± h f′(x) + h² f″(x)/2! ± h³ f‴(x)/3! + h⁴ f⁽⁴⁾(x)/4! ± h⁵ f⁽⁵⁾(x)/5! + ⋯ Subtracting: f(x + h) − f(x − h) = 2h f′(x) + 2h³ f‴(x)/3! + 2h⁵ f⁽⁵⁾(x)/5! + ⋯ ⟹ f′(x) = (f(x + h) − f(x − h))/(2h) − (1/6) h² f‴(ξ) We gained: O(h) to O(h²). However, ... Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 14

78 Richardson Extrapolation Take Three We have f′(x) = (f(x + h) − f(x − h))/(2h) + a₂h² + a₄h⁴ + a₆h⁶ + ⋯, i.e., φ(h) ≡ (f(x + h) − f(x − h))/(2h) = f′(x) − a₂h² − a₄h⁴ − a₆h⁶ − ⋯ Halving the stepsize: φ(h/2) = f′(x) − a₂(h/2)² − a₄(h/2)⁴ − a₆(h/2)⁶ − ⋯ ⟹ φ(h) − 4φ(h/2) = −3f′(x) − (3/4)a₄h⁴ − (15/16)a₆h⁶ − ⋯ Q: So what? A: The h² term disappeared! Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 15

79 Richardson Take Three (cont.) Divide by −3 and write: f′(x) = (4/3)φ(h/2) − (1/3)φ(h) − (1/4)a₄h⁴ − (5/16)a₆h⁶ − ⋯ or f′(x) = φ(h/2) + (1/3)[φ(h/2) − φ(h)] + O(h⁴) (⋆) (⋆) only uses old and current information We gained: O(h²) to O(h⁴)!! Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 16
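The one-step extrapolation is easy to check numerically (function and step choices are mine):

```python
import math

def phi(f, x, h):
    """Central difference: f'(x) + O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h):
    """One extrapolation step: phi(h/2) + (phi(h/2) - phi(h))/3 = f'(x) + O(h^4)."""
    return phi(f, x, h / 2) + (phi(f, x, h / 2) - phi(f, x, h)) / 3.0

x, h = 1.0, 0.1
print(abs(phi(math.sin, x, h) - math.cos(x)))         # ~ 9e-4
print(abs(richardson(math.sin, x, h) - math.cos(x)))  # orders of magnitude smaller
```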

80 Interpolation and Approximation Motivation Polynomial Interpolation Numerical Differentiation Additional Notes Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 17

81 Additional Notes The three f′(x) formulae: used additional points vs. Taylor, more derivatives at the same point Similar for f″(x): f(x ± h) = f(x) ± h f′(x) + h² f″(x)/2! ± h³ f‴(x)/3! + h⁴ f⁽⁴⁾(x)/4! ± h⁵ f⁽⁵⁾(x)/5! + ⋯ Adding: f(x + h) + f(x − h) = 2f(x) + h² f″(x) + (1/12) h⁴ f⁽⁴⁾(x) + ⋯ or: f″(x) = (f(x + h) − 2f(x) + f(x − h))/h² − (1/12) h² f⁽⁴⁾(ξ), error = O(h²) Copyright c 2010 by A. E. Naiman NM Slides Interpolation and Approximation, p. 18

82 Numerical Quadrature Introduction Riemann Integration Composite Trapezoid Rule Composite Simpson s Rule Gaussian Quadrature Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 1

83 Numerical Quadrature Interpretation f(x) ≥ 0 on [a,b] bounded ⟹ ∫_a^b f(x)dx is the area under f(x) [Plot: area under f between a and b] Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 2

84 Numerical Quadrature Motivation Analytical solutions are rare: ∫₀^{π/2} sin x dx = −cos x |₀^{π/2} = −(0 − 1) = 1 In general: ∫₀^{π/2} (1 − a² sin²θ)^{1/3} dθ = ? Need a general numerical technique. Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 3

85 Definitions Mesh: P ≡ {a = x₀ < x₁ < ⋯ < x_n = b}, n subintervals (n+1 points) Infima and suprema: m_i ≡ inf{f(x) : x_i ≤ x ≤ x_{i+1}}, M_i ≡ sup{f(x) : x_i ≤ x ≤ x_{i+1}} Two methods (i.e., integral estimates): lower and upper sums L(f;P) ≡ Σ_{i=0}^{n−1} m_i(x_{i+1} − x_i) ≤ Σ_{i=0}^{n−1} M_i(x_{i+1} − x_i) ≡ U(f;P) For example, ... Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 4

86 Lower Sum Interpretation [Plot: lower-sum rectangles on x₀ = a, ..., x₄ = b] Clearly a lower bound of the integral estimate, and ... Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 5

87 Upper Sum Interpretation [Plot: upper-sum rectangles on x₀ = a, ..., x₄ = b] ... an upper bound. What is the max error? Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 6

88 Lower and Upper Sums Example Third method, use the lower and upper sums: (L + U)/2 f(x) = x², [a,b] = [0,1] and P = {0, 1/4, 1/2, 3/4, 1} ⟹ ..., L = 7/32, U = 15/32 Split the difference: estimate 11/32 ≈ 0.344 (actual: 1/3) Bottom line: naive approach, low n ⟹ still an error of ≈ 0.01 (!) Max error: (U − L)/2 = 1/8 Is this good enough? Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 7

89 Numerical Quadrature Rethinking Perhaps lower and upper sums are enough? Error seems small Work seems small as well But: estimate of max error was not small ( 1 8 ) Do they converge to integral as n? Will the extrema always be easy to calculate? Accurately? (Probably not!) Proceed in theoretical and practical directions. Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 8

90 Numerical Quadrature Introduction Riemann Integration Composite Trapezoid Rule Composite Simpson s Rule Gaussian Quadrature Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 9

91 Riemann Integrability f ∈ C⁰[a,b], [a,b] bdd ⟹ f is Riemann integrable When integrable, and the max subinterval in P → 0 (‖P‖ → 0): lim_{‖P‖→0} L(f;P) = ∫_a^b f(x)dx = lim_{‖P‖→0} U(f;P) Counter example: Dirichlet function d(x) = 0 for x rational, 1 for x irrational ⟹ L = 0, U = b − a Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 10

92 Challenge: Estimate n for Third Method Current restrictions for the n estimate: Monotone functions Uniform partition Challenge: estimate ∫₀^π e^{cos x} dx error tolerance = (1/2)×10⁻³ using L and U n = ? Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 11

93 Estimate n Solution f(x) = e^{cos x} is decreasing on [0,π] ⟹ m_i = f(x_{i+1}) and M_i = f(x_i) L(f;P) = h Σ_{i=0}^{n−1} f(x_{i+1}) and U(f;P) = h Σ_{i=0}^{n−1} f(x_i), h = π/n Want (1/2)(U − L) < (1/2)×10⁻³, or (π/n)(e − e⁻¹) < 10⁻³ ⟹ ... n ≥ 7385 (!!) (note for later: max error estimate = O(h)) Number of f(x) evaluations: 2 for the (U − L) max error calculation, > 7000 for either L or U We need something better. Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 12

94 Numerical Quadrature Introduction Riemann Integration Composite Trapezoid Rule Composite Simpson s Rule Gaussian Quadrature Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 13

95 Composite Trapezoid Rule (CTR) Each area: (1/2)(x_{i+1} − x_i)[f(x_i) + f(x_{i+1})] Rule: T(f;P) ≡ (1/2) Σ_{i=0}^{n−1} (x_{i+1} − x_i)[f(x_i) + f(x_{i+1})] Note: for monotone functions and any given mesh (why?): T = (L + U)/2 Pro: no need for extrema calculations Con: adding new points to existing ones (for a non-monotonic function): T can land on a bad point ⟹ no monotonic improvement (necessarily) L and U look for extrema on [x_i, x_{i+1}] ⟹ monotonic improvement Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 14

96 CTR Interpretation [Plot: trapezoids on x₀ = a, ..., x₄ = b] Almost always better than L or U. (When not?) Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 15

97 Uniform Mesh and Associated Error Constant stepsize h = (b − a)/n: T(f;P) ≡ h [Σ_{i=1}^{n−1} f(x_i) + (1/2)(f(x₀) + f(x_n))] Theorem: f ∈ C²[a,b] ⟹ ∃ξ ∈ (a,b) ∋ ∫_a^b f(x)dx − T(f;P) = −(1/12)(b − a)h²f″(ξ) = O(h²) Note: leads to the popular Romberg algorithm (built on Richardson extrapolation) How many steps does T(f;P) require? Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 16
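The uniform-mesh rule and its O(h²) behavior can be sketched as follows (test integral is my choice):

```python
import math

def ctr(f, a, b, n):
    """Composite trapezoid rule on a uniform mesh, h = (b-a)/n."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return h * (inner + 0.5 * (f(a) + f(b)))

# O(h^2): doubling n should cut the error by about 4
e10 = abs(ctr(math.sin, 0.0, math.pi / 2, 10) - 1.0)
e20 = abs(ctr(math.sin, 0.0, math.pi / 2, 20) - 1.0)
print(e10, e20, e10 / e20)  # ratio close to 4
```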

98 e^{cos x} Revisited Using CTR Challenge: ∫₀^π e^{cos x} dx, error tolerance = (1/2)×10⁻³, n = ? f(x) = e^{cos x} ⟹ f′(x) = −e^{cos x} sin x ⟹ ... ⟹ |f″(x)| ≤ e on (0,π) ⟹ error ≤ (1/12)π(π/n)² e ⟹ n ≥ 119 Recall the perennial two questions/calculations of NM monotonic ⟹ estimate of T produces the same (L + U)/2, but the previous max error estimate was less exact (O(h)) Better estimate of the max error ⟹ better estimate of n Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 17

99 Another CTR Example Challenge: ∫₀¹ e^{−x²} dx, error tolerance = (1/2)×10⁻⁴, n = ? f(x) = e^{−x²}, f′(x) = −2x e^{−x²} and f″(x) = (4x² − 2) e^{−x²} ⟹ |f″(x)| ≤ 2 on (0,1) ⟹ error ≤ (1/6)h² We have: (1/6)h² ≤ (1/2)×10⁻⁴ ⟹ h ≤ 0.0173, or n ≥ 58 subintervals How can we do better? Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 18

100 Numerical Quadrature Introduction Riemann Integration Composite Trapezoid Rule Composite Simpson s Rule Gaussian Quadrature Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 19

101 Trapezoid Rule as Linear Interpolant Linear interpolant, one subinterval: p₁(x) = f(a)(x − b)/(a − b) + f(b)(x − a)/(b − a) intuitively: ∫_a^b p₁(x)dx = f(a)/(a − b) ∫_a^b (x − b)dx + f(b)/(b − a) ∫_a^b (x − a)dx = f(a)(b − a)/2 + f(b)(b − a)/2 = (b − a)/2 · (f(a) + f(b)) CTR is the integral of the composite linear interpolant. Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 20

102 CTR for Two Equal Subintervals n = 2 (i.e., 3 points): T(f) = (b − a)/2 · {f((a + b)/2) + (1/2)[f(a) + f(b)]} = (b − a)/4 · [f(a) + 2f((a + b)/2) + f(b)] with error = O(((b − a)/2)³) (Previously, CTR error = O(h²) = TR error per subinterval O(h³) × number of subintervals O(1/h)) Deficiency: each subinterval ignores the other How can we take the entire picture into account? Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 21

103 Simpson's Rule

Motivation: use p_2(x) over the two equal subintervals

Similar analysis actually loses O(h), but ... ∃ ξ ∈ (a,b):

    ∫_a^b f(x) dx = (b - a)/6 [ f(a) + 4 f((a + b)/2) + f(b) ] - (1/90) ((b - a)/2)^5 f^(4)(ξ)

Similar to CTR, but weights midpoint more

Note: for each method, denominator = Σ coefficients

Each method multiplies width by weighted average of height.

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 22

104 Composite Simpson's Rule (CSR)

For an even number of subintervals n, h = (b - a)/n, ∃ ξ ∈ (a,b):

    ∫_a^b f(x) dx = (h/3) { [f(a) + f(b)] + 4 Σ_{i=1}^{n/2} f[a + (2i - 1)h]        (odd nodes)
                                          + 2 Σ_{i=1}^{(n-2)/2} f(a + 2ih) }        (even nodes)
                    - (b - a)/180 h^4 f^(4)(ξ)

Note: denominator = Σ coefficients = 3n, but only n + 1 function evaluations

Can we do better than O(h^4)?

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 23
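The CSR formula above, as a sketch (again with an arbitrary test integrand, not from the slides):

```python
import math

# Composite Simpson's rule exactly as written above; n must be even.
def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

# O(h^4): doubling n should cut the error by about 2^4 = 16.
exact = 2.0  # integral of sin over [0, pi]
e1 = abs(simpson(math.sin, 0.0, math.pi, 10) - exact)
e2 = abs(simpson(math.sin, 0.0, math.pi, 20) - exact)
print(e1 / e2)  # close to 16
```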

105 Evaluating the Error

Another important accuracy angle:

until now: error = O(h^α)
from now on, looking at f^(β): error = 0 ∀ f ∈ P_{β-1}

With higher β, p_β(x) can approximate any f(x) better

Define ǫ(x) ≡ f(x) - p_β(x); f = p_β + ǫ ⇒ method(f) = method(p_β) + method(ǫ) = ∫ p_β + method(ǫ)

As β → ∞: ǫ(x) → 0, method(ǫ) → 0 ⇒ method(f) → ∫ f

Can we do better than Simpson's P_3?

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 24

106 Integration Introspection

Simpson beat CTR because of its more heavily weighted midpoint

But CSR similarly suffers at subinterval-pair boundaries (weight = 2 vs. 4, for no reason)

All composite rules ignore other areas ⇒ patch together local calculations ⇒ will suffer from this

What about using all nodes and higher degree interpolation?

Also note: we can choose weights and location of calculation nodes

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 25

107 Numerical Quadrature

Introduction
Riemann Integration
Composite Trapezoid Rule
Composite Simpson's Rule
Gaussian Quadrature

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 26

108 Interpolatory Quadrature

Given nodes x_i, let

    l_i(x) = Π_{j=0, j≠i}^n (x - x_j)/(x_i - x_j), i = 0,...,n;    p(x) = Σ_{i=0}^n f(x_i) l_i(x)

If f(x) ≈ p(x), hopefully ∫_a^b f(x) dx ≈ ∫_a^b p(x) dx

    ∫_a^b p(x) dx = ∫_a^b Σ_{i=0}^n f(x_i) l_i(x) dx = Σ_{i=0}^n f(x_i) ∫_a^b l_i(x) dx = Σ_{i=0}^n A_i f(x_i)
                                                                          (≡ A_i)

( A_i = A_i(a, b, {x_j}_{j=0}^n), but A_i ≠ A_i(f)! )

(Endpoints, nodes) ⇒ A_i

    ∫_a^b f(x) dx ≈ Σ_{i=0}^n A_i f(x_i).

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 27

109 Interp. Quad. Error Analysis

f ∈ P_n ⇒ f(x) = p(x), and

    f ∈ P_n ⇒ ∫_a^b f(x) dx = Σ_{i=0}^n A_i f(x_i),  i.e., error = 0

n + 1 weights determined by nodes x_i (and a and b)

True for any choice of n + 1 nodes x_i

What if we choose n + 1 specific nodes (with weights, total: 2(n + 1) choices)?

Can we get error = 0 ∀ f ∈ P_{2n+1}?

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 28

110 Gaussian Quadrature (GQ) Theorem

Let q(x) ∈ P_{n+1} satisfy

    ∫_a^b x^k q(x) dx = 0,  k = 0,...,n

i.e., q(x) ⊥ all polynomials of lower degree
note: n + 2 coefficients, n + 1 conditions ⇒ unique up to a constant multiplier

Let x_i, i = 0,...,n, satisfy q(x_i) = 0, i.e., the x_i are the zeros of q(x)

Then ∀ f ∈ P_{2n+1}, even though f(x) ≠ p(x) (f ∈ P_m, m > n):

    ∫_a^b f(x) dx = Σ_{i=0}^n A_i f(x_i)

We jumped from P_n to P_{2n+1}!

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 29

111 Gaussian Quadrature Proof

Let f ∈ P_{2n+1}, and divide by q: f = sq + r, s,r ∈ P_n

We have (note: until the last step, the x_i can be arbitrary):

    ∫_a^b f(x) dx = ∫_a^b s(x) q(x) dx + ∫_a^b r(x) dx      (division above)
                  = ∫_a^b r(x) dx                           (⊥-ity of q(x))
                  = Σ_{i=0}^n A_i r(x_i)                    (r ∈ P_n)
                  = Σ_{i=0}^n A_i [f(x_i) - s(x_i) q(x_i)]  (division above)
                  = Σ_{i=0}^n A_i f(x_i)                    (x_i are zeros of q(x))

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 30

112 GQ Additional Notes

Example q_n(x): Legendre polynomials: for [a,b] = [-1,1] and q_n(1) = 1 (∃ a 3-term recurrence formula):

    q_0(x) = 1, q_1(x) = x, q_2(x) = (3/2)x^2 - 1/2, q_3(x) = (5/2)x^3 - (3/2)x, ...

Use q_{n+1}(x) (why?), depends only on a, b and n

Gaussian nodes ∈ (a,b) ⇒ good if f(a) = ±∞ and/or f(b) = ±∞ (e.g., an integrand unbounded at an endpoint)

More general: with weight function w(x) in original integral ⇒ q(x) orthogonality, weights A_i

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 31
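For n = 1 the zeros of q_2 above give the classical two-point Gauss–Legendre rule on [-1,1] (nodes ±1/√3, both weights 1). A sketch showing it is exact on P_{2n+1} = P_3, i.e., for any cubic, with only two evaluations:

```python
import math

def gauss2(f):
    # zeros of q_2(x) = (3/2)x^2 - 1/2 are +-1/sqrt(3); both weights equal 1
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

p = lambda t: 7 * t**3 - 3 * t**2 + 2 * t - 5   # an arbitrary cubic
# by hand: integral of p over [-1,1] = 0 - 2 + 0 - 10 = -12
print(gauss2(p))  # -12.0 up to rounding
```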

113 Numerical Quadrature Summary

(n + 1 function evaluations)

    method  | composite? | node placement    | error = 0 ∀ f ∈
    CTR     | yes        | uniform (usually) | P_1
    CSR     | yes        | uniform (usually) | P_3
    interp. |            | any (distinct)    | P_n
    GQ      |            | zeros of q(x)     | P_{2n+1}

P.S. There are also powerful adaptive quadrature methods

Copyright c 2010 by A. E. Naiman NM Slides Numerical Quadrature, p. 32

114 Linear Systems

Introduction
Naive Gaussian Elimination
Limitations
Operation Counts
Additional Notes

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 1

115 What Are Linear Systems (LS)?

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

Dependence on unknowns: powers of degree ≤ 1

Summation form: equations Σ_{j=1}^n a_ij x_j = b_i, 1 ≤ i ≤ m, i.e., m equations

Presently: m = n, i.e., square systems (later: m ≠ n)

Q: How to solve for [x_1 x_2 ... x_n]^T?  A: ...

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 2

116 Linear Systems

Introduction
Naive Gaussian Elimination
Limitations
Operation Counts
Additional Notes

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 3

117 Overall Algorithm and Definitions

Currently: direct methods only (later: iterative methods)

General idea:
  Generate upper triangular system ("forward elimination")
  Easily calculate unknowns in reverse order ("backward substitution")

Pivot row = current one being processed
pivot = diagonal element of pivot row

Steps applied to RHS as well.

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 4

118 Forward Elimination

Generate zero columns below diagonal
Process rows downward

    for each row i := 1, n-1 {        // the pivot row
        for each row k := i+1, n {    // rows below pivot
            multiply pivot row so that a_ii matches a_ki (i.e., by a_ki / a_ii)
            subtract pivot row from row k    // now a_ki = 0
        }                             // now column below a_ii is zero
    }                                 // now a_ij = 0, ∀ i > j

Obtain triangular system

Let's work an example, ...

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 5

119 Compact Form of LS

     6x_1 -  2x_2 + 2x_3 +  4x_4 =  16
    12x_1 -  8x_2 + 6x_3 + 10x_4 =  26
     3x_1 - 13x_2 + 9x_3 +  3x_4 = -19
    -6x_1 +  4x_2 + 1x_3 - 18x_4 = -34

Proceeding with the forward elimination, ...

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 6

120 Forward Elimination Example Matrix is upper triangular. Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 7

121 Backward Substitution

Last equation: -3x_4 = -3 ⇒ x_4 = 1

Second-to-last equation: 2x_3 - 5x_4 = -9 ⇒ 2x_3 - 5 = -9 ⇒ x_3 = -2

... second equation ... x_2 = ... = 1 ...

    [x_1 x_2 x_3 x_4]^T = [3 1 -2 1]^T

For small problems, check solution in original system.

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 8
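The whole procedure — naive forward elimination plus backward substitution — fits in a few lines. A sketch (the helper name `solve` is mine), rerun on the 4×4 example system with the signs as in Cheney & Kincaid's classic example:

```python
# Naive Gaussian elimination followed by backward substitution (no pivoting).
def solve(A, b):
    n = len(A)
    A = [row[:] for row in A]         # work on copies
    b = b[:]
    for i in range(n - 1):            # pivot row
        for k in range(i + 1, n):     # rows below pivot
            m = A[k][i] / A[i][i]     # multiplier
            for j in range(i, n):
                A[k][j] -= m * A[i][j]
            b[k] -= m * b[i]          # same steps applied to RHS
    x = [0.0] * n                     # backward substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[ 6.0,  -2.0, 2.0,   4.0],
     [12.0,  -8.0, 6.0,  10.0],
     [ 3.0, -13.0, 9.0,   3.0],
     [-6.0,   4.0, 1.0, -18.0]]
b = [16.0, 26.0, -19.0, -34.0]
print(solve(A, b))  # [3.0, 1.0, -2.0, 1.0]
```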

122 Linear Systems

Introduction
Naive Gaussian Elimination
Limitations
Operation Counts
Additional Notes

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 9

123 Zero Pivots

Clearly, zero pivots prevent forward elimination!
Zero pivots can appear along the way
Later: when are we guaranteed no zero pivots?
All pivots ≠ 0? ⇒ we are safe

Experiment with system with known solution.

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 10

124 Vandermonde Matrix

    | 1    2       2^2      ...   2^{n-1}     |
    | 1    3       3^2      ...   3^{n-1}     |
    | ...                                     |
    | 1  n+1  (n+1)^2  (n+1)^3 ... (n+1)^{n-1} |

Want row sums on RHS ⇒ x_i = 1, i = 1,...,n

Geometric series: 1 + t + t^2 + ... + t^{n-1} = (t^n - 1)/(t - 1)

We obtain b_i, for row i = 1,...,n:

    Σ_{j=1}^n (1+i)^{j-1} · 1 = [(1+i)^n - 1] / [(1+i) - 1] = (1/i)[(1+i)^n - 1] = b_i
         (a_ij)     (x_j)

System is ready to be tested.

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 11

125 Vandermonde Test

Platform with 7 significant (decimal) digits
n = 1,...,8 ⇒ expected results
n = 9: error > 16,000%!!

Questions: What happened? Why so sudden? Can anything be done?

Answer: matrix is ill-conditioned
Sensitivity to roundoff errors
Leads to error propagation and magnification

First, how to assess vector errors.

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 12

126 Errors

Given system: Ax = b and solution estimate x̃

Residual (error): r ≡ A x̃ - b
Absolute error (if x is known): e ≡ x̃ - x

Norm taken of r or e: vector → scalar quantity (more on norms later)

Relative errors: ‖r‖/‖b‖ and ‖e‖/‖x‖

Back to ill-conditioning, ...

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 13

127 Ill-conditioning

    0·x_1 + x_2 = 1
      x_1 + x_2 = 2    } 0 pivot

General rule: if 0 is problematic ⇒ numbers near 0 are problematic

    ǫ x_1 + x_2 = 1
      x_1 + x_2 = 2    } ... x_2 = (2 - 1/ǫ)/(1 - 1/ǫ) and x_1 = (1 - x_2)/ǫ

ǫ small (e.g., ǫ = 10^{-9} with 8 significant digits) ⇒ x_2 = 1 and x_1 = 0 — wrong!

What can be done?

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 14

128 Pivoting

Switch order of equations, moving offending element off diagonal:

      x_1 + x_2 = 2
    ǫ x_1 + x_2 = 1    } ⇒ x_2 = (1 - 2ǫ)/(1 - ǫ) and x_1 = 2 - x_2 = 1/(1 - ǫ)

This is correct, even for small ǫ (or even ǫ = 0)

Compare size of diagonal (pivot) elements to ǫ

Ratio of first row of Vandermonde matrix = 1 : 2^{n-1}

Issue is relative size, not absolute size.

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 15
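The ǫ-system above can be run in double precision; a sketch (the value ǫ = 10^{-20} is my choice, small enough to swamp the 16-digit arithmetic):

```python
eps = 1e-20

# naive order: eliminate x1 using the tiny pivot eps
x2 = (2 - 1 / eps) / (1 - 1 / eps)   # rounds to exactly 1.0
x1 = (1 - x2) / eps                  # catastrophic cancellation: 0.0
print(x1, x2)                        # 0.0 1.0 -- x1 is wrong

# pivoted order: swap the two equations first
x2 = (1 - 2 * eps) / (1 - eps)       # still ~1.0, now harmlessly
x1 = 2 - x2                          # 1.0, correct
print(x1, x2)                        # 1.0 1.0
```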

129 Scaled Partial Pivoting

Also called row pivoting (vs. column pivoting)

Instability source: subtracting large values: a_kj -= (a_ki / a_ii) a_ij

W.l.o.g.: n rows, and choosing first row

Find i: ∀ rows k ≠ i, columns j > 1: minimize |a_ij a_k1 / a_i1| ⇒ O(n^3) calculations!

Simplify (remove k), imagine a_k1 = 1: find i: ∀ columns j > 1: min_i |a_ij / a_i1|

Still: 1) O(n^2) calculations, 2) how to minimize over each row?

Find i: min_i [ max_j |a_ij| / |a_i1| ], or: max_i [ |a_i1| / max_j |a_ij| ]

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 16

130 Linear Systems

Introduction
Naive Gaussian Elimination
Limitations
Operation Counts
Additional Notes

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 17

131 How Much Work on A?

Real life: crowd estimation costs? (will depend on accuracy)

Counting × and ÷ (i.e., long operations) only

Pivoting: row decision amongst k rows = k ratios

First row: n ratios (for choice of pivot row)
           n - 1 multipliers
           (n - 1)^2 multiplications
           total: ≈ n^2 operations

Forward elimination operations (for large n):

    Σ_{k=2}^n k^2 = n(n + 1)(2n + 1)/6 - 1 ≈ n^3/3

How about the work on b?

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 18

132 Rest of the Work

Forward elimination work on RHS: Σ_{k=2}^n (k - 1) = n(n - 1)/2

Backward substitution: Σ_{k=1}^n k = n(n + 1)/2

Total: ≈ n^2 operations — a factor of O(n) fewer operations than forward elimination on A

Important for multiple RHSs known from the start:
  do not repeat the O(n^3) work for each
  rather, line them up, and process simultaneously

Can we do better at times?

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 19

133 Sparse Systems

Above, e.g., tridiagonal system (half bandwidth = 1)
note: a_ij = 0 for |i - j| > 1

Opportunities for savings: storage and computations — both are O(n)

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 20
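One concrete O(n) instance is elimination specialized to a tridiagonal matrix; the sketch below (the algorithm is not named on the slide — this is the standard "Thomas algorithm", and it assumes no zero pivots arise, e.g., by diagonal dominance):

```python
# O(n) solve for a tridiagonal system stored as three bands:
# sub[i] = a[i][i-1], diag[i] = a[i][i], sup[i] = a[i][i+1].
def thomas(sub, diag, sup, b):
    n = len(diag)
    d = diag[:]
    rhs = b[:]
    for i in range(1, n):            # forward elimination: one multiplier per row
        m = sub[i] / d[i - 1]
        d[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n                    # back substitution
    x[-1] = rhs[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / d[i]
    return x

# -1/2/-1 model problem; the exact solution is all ones.
x = thomas([0.0, -1.0, -1.0, -1.0],
           [2.0, 2.0, 2.0, 2.0],
           [-1.0, -1.0, -1.0, 0.0],
           [1.0, 0.0, 0.0, 1.0])
print(x)  # approximately [1.0, 1.0, 1.0, 1.0]
```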

134 Linear Systems

Introduction
Naive Gaussian Elimination
Limitations
Operation Counts
Additional Notes

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 21

135 Pivot-Free Guarantee

When are we guaranteed non-zero pivots?

Diagonal dominance (just like it sounds):

    |a_ii| > Σ_{j=1, j≠i}^n |a_ij|,  i = 1,...,n

(Or > in one row, and ≥ in the remaining)

Many finite difference and finite element problems ⇒ diagonally dominant systems

Occurs often enough to justify individual study.

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 22

136 LU Decomposition

E.g.: same A, many b's of time-dependent problem — not all b's are known from the start

Want A = LU for decreased work later

Then, defining y ≡ Ux, we have L(Ux) = Ly = b:
    solve Ly = b for y
    solve Ux = y for x

U is upper triangular, result of Gaussian elimination
L is unit lower triangular: 1's on diagonal and Gaussian multipliers below

For small systems, verify (even by hand): A = LU

Each new RHS is ≈ n^2 work, instead of O(n^3)

Copyright c 2010 by A. E. Naiman NM Slides Linear Systems, p. 23
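A minimal sketch of the factor-once, solve-many pattern (no pivoting; the helper names `lu` and `lu_solve` and the 2×2 example are mine):

```python
# A = LU: U is the eliminated matrix, L holds 1's on the diagonal and the
# Gaussian multipliers below; each extra RHS then costs ~n^2, not O(n^3).
def lu(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for i in range(n - 1):
        for k in range(i + 1, n):
            m = U[k][i] / U[i][i]     # Gaussian multiplier
            L[k][i] = m
            for j in range(i, n):
                U[k][j] -= m * U[i][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n                     # forward: L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n                     # backward: U x = y
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L, U = lu([[4.0, 3.0], [6.0, 3.0]])
print(lu_solve(L, U, [10.0, 12.0]))   # [1.0, 2.0]
print(lu_solve(L, U, [7.0, 9.0]))     # [1.0, 1.0] -- factorization reused
```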

137 Approximation by Splines

Motivation
Linear Splines
Quadratic Splines
Cubic Splines
Summary

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 1

138 Motivation

[plot: scattered data points]

Given: set of many points, or perhaps very involved function
Want: simple representative function for analysis or manufacturing

Any suggestions?

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 2

139 Let's Try Interpolation

[plot: high-degree interpolant through the data]

Disadvantages:
  Values outside x-range diverge quickly (interp(10) = 1592)
  Numerical instabilities of high-degree polynomials

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 3

140 Runge Function — Two Interpolations

[plot: uniform nodes x_0,...,x_8 vs. Chebyshev nodes c_0,...,c_8 on [-5, 5]]

More disadvantages:
  Within x-range, often high oscillations
  Even Chebyshev points ⇒ often uncharacteristic oscillations

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 4

141 Splines

Given domain [a,b], a spline S(x):
  Is defined on entire domain
  Provides a certain amount of smoothness
  ∃ partition of knots (= where spline can change form)
      {a = t_0, t_1, t_2, ..., t_n = b}
  such that S(x) is piecewise polynomial:

      S(x) = S_0(x),      x ∈ [t_0, t_1],
             S_1(x),      x ∈ [t_1, t_2],
             ...
             S_{n-1}(x),  x ∈ [t_{n-1}, t_n]

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 5

142 Interpolatory Splines

Note: splines split up range [a,b] — opposite of the CTR → CSR → GQ development

"Spline" implies no interpolation, not even any y-values

If given points {(t_0,y_0), (t_1,y_1), (t_2,y_2), ..., (t_n,y_n)} ⇒ an interpolatory spline traverses these as well

Splines = nice, analytical functions

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 6

143 Approximation by Splines

Motivation
Linear Splines
Quadratic Splines
Cubic Splines
Summary

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 7

144 Linear Splines

Given domain [a,b], a linear spline S(x):
  Is defined on entire domain
  Provides continuity, i.e., is C^0[a,b]
  ∃ partition of knots {a = t_0, t_1, t_2, ..., t_n = b}
  such that

      S_i(x) = a_i x + b_i ∈ P_1([t_i, t_{i+1}]),  i = 0,...,n-1

Recall: no y-values or interpolation yet

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 8

145 Linear Spline Examples

[plot on [a,b]: a linear spline, plus non-examples — an undefined part, a discontinuous part, a nonlinear part]

Definition outside of [a,b] is arbitrary

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 9

146 Interpolatory Linear Splines

Given points {(t_0,y_0), (t_1,y_1), (t_2,y_2), ..., (t_n,y_n)} ⇒ spline must interpolate as well

Are the S_i(x) (with no additional knots) unique?

Coefficients: a_i x + b_i, i = 0,...,n-1 ⇒ total = 2n
Conditions: 2 prescribed interpolation points for S_i(x), i = 0,...,n-1 (includes continuity condition) ⇒ total = 2n

Obtain:

    S_i(x) = a_i x + (y_i - a_i t_i),  a_i = (y_{i+1} - y_i)/(t_{i+1} - t_i),  i = 0,...,n-1

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 10
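The closed form above evaluates directly once the containing knot interval is found; a sketch (the bisect-based lookup and the tiny data set are my choices):

```python
from bisect import bisect_right

# Evaluate the interpolatory linear spline S_i(x) = a_i x + (y_i - a_i t_i)
# on the piece [t_i, t_{i+1}] containing x.
def linear_spline(t, y, x):
    i = min(max(bisect_right(t, x) - 1, 0), len(t) - 2)
    a_i = (y[i + 1] - y[i]) / (t[i + 1] - t[i])
    return a_i * x + (y[i] - a_i * t[i])

t = [0.0, 1.0, 3.0]
y = [1.0, 2.0, 0.0]
print(linear_spline(t, y, 2.0))  # 1.0: midpoint of the second (downhill) piece
```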

147 Interpolatory Linear Splines — Example

[plot: linear spline through sample data]

Discontinuous derivatives at knots are unpleasing, ...

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 11

148 Approximation by Splines

Motivation
Linear Splines
Quadratic Splines
Cubic Splines
Summary

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 12

149 Quadratic Splines

Given domain [a,b], a quadratic spline S(x):
  Is defined on entire domain
  Provides continuity of zeroth and first derivatives, i.e., is C^1[a,b]
  ∃ partition of knots {a = t_0, t_1, t_2, ..., t_n = b}
  such that

      S_i(x) = a_i x^2 + b_i x + c_i ∈ P_2([t_i, t_{i+1}]),  i = 0,...,n-1

Again no y-values or interpolation yet

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 13

150 Quadratic Spline Example

    f(x) = { -x^2,    x ≤ 0,
             x^2,     0 ≤ x ≤ 1,
             2x - 1,  x ≥ 1 }

Defined on domain (-∞, ∞)

f(x) =? a quadratic spline

Continuity (clearly okay away from x = 0 and 1):
  Zeroth derivative: f(0^-) = f(0^+) = 0, f(1^-) = f(1^+) = 1
  First derivative:  f'(0^-) = f'(0^+) = 0, f'(1^-) = f'(1^+) = 2

Each part of f(x) is ∈ P_2

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 14

151 Interpolatory Quadratic Splines

Given points {(t_0,y_0), (t_1,y_1), (t_2,y_2), ..., (t_n,y_n)} ⇒ spline must interpolate as well

Are the S_i(x) unique (same knots)?

Coefficients: a_i x^2 + b_i x + c_i, i = 0,...,n-1 ⇒ total = 3n
Conditions: 2 prescribed interpolation points for S_i(x), i = 0,...,n-1 (includes continuity of function condition)
            + (n - 1) C^1 continuities
            ⇒ total = 3n - 1

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 15

152 Interpolatory Quadratic Splines (cont.)

Underdetermined system ⇒ need to add one condition

Define (as yet to be determined) z_i ≡ S'(t_i), i = 0,...,n

Write

    S_i(x) = (z_{i+1} - z_i) / [2(t_{i+1} - t_i)] (x - t_i)^2 + z_i (x - t_i) + y_i

therefore

    S'_i(x) = (z_{i+1} - z_i)/(t_{i+1} - t_i) (x - t_i) + z_i

Need to verify continuity and interpolatory conditions ⇒ determine z_i

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 16

153 Checking Interpolatory Quadratic Splines

Check four continuity (and interpolatory) conditions:

  (i)  S_i(t_i) = y_i            (iii) S'_i(t_i) = z_i
  (ii) S_i(t_{i+1}) = (below)    (iv)  S'_i(t_{i+1}) = z_{i+1}

(ii):

    S_i(t_{i+1}) = (z_{i+1} - z_i)/2 (t_{i+1} - t_i) + z_i (t_{i+1} - t_i) + y_i
                 = (z_{i+1} + z_i)/2 (t_{i+1} - t_i) + y_i   set= y_{i+1}

therefore (n equations, n + 1 unknowns):

    z_{i+1} = 2 (y_{i+1} - y_i)/(t_{i+1} - t_i) - z_i,  i = 0,...,n-1

Choose any 1 z_i and the remaining n are determined.

Copyright c 2010 by A. E. Naiman NM Slides Approximation by Splines, p. 17
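The recurrence and the piecewise form above can be sketched together (helper names and the three-point data set are mine; z_0 = 0 is the one free choice):

```python
# Slopes z_i for the interpolatory quadratic spline:
# choose z_0, then z_{i+1} = 2 (y_{i+1} - y_i)/(t_{i+1} - t_i) - z_i.
def quad_spline_slopes(t, y, z0=0.0):
    z = [z0]
    for i in range(len(t) - 1):
        z.append(2 * (y[i + 1] - y[i]) / (t[i + 1] - t[i]) - z[i])
    return z

# Evaluate S_i(x) = (z_{i+1}-z_i)/(2 dt) (x-t_i)^2 + z_i (x-t_i) + y_i.
def quad_spline_eval(t, y, z, x):
    i = 0
    while i < len(t) - 2 and x > t[i + 1]:   # find piece with t_i <= x <= t_{i+1}
        i += 1
    dt = t[i + 1] - t[i]
    return (z[i + 1] - z[i]) / (2 * dt) * (x - t[i]) ** 2 + z[i] * (x - t[i]) + y[i]

t, y = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
z = quad_spline_slopes(t, y, z0=0.0)
print(z)                             # [0.0, 2.0, -4.0]
print(quad_spline_eval(t, y, z, 1.0))  # 1.0: interpolates the middle point
```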


More information

Mathematics for Engineers. Numerical mathematics

Mathematics for Engineers. Numerical mathematics Mathematics for Engineers Numerical mathematics Integers Determine the largest representable integer with the intmax command. intmax ans = int32 2147483647 2147483647+1 ans = 2.1475e+09 Remark The set

More information

MA2501 Numerical Methods Spring 2015

MA2501 Numerical Methods Spring 2015 Norwegian University of Science and Technology Department of Mathematics MA5 Numerical Methods Spring 5 Solutions to exercise set 9 Find approximate values of the following integrals using the adaptive

More information

Lecture Note 3: Polynomial Interpolation. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 3: Polynomial Interpolation. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 3: Polynomial Interpolation Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 24, 2013 1.1 Introduction We first look at some examples. Lookup table for f(x) = 2 π x 0 e x2

More information

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu

More information

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T Heath Chapter 5 Nonlinear Equations Copyright c 2001 Reproduction permitted only for noncommercial, educational

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 7 Interpolation Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

MATH 118, LECTURES 27 & 28: TAYLOR SERIES

MATH 118, LECTURES 27 & 28: TAYLOR SERIES MATH 8, LECTURES 7 & 8: TAYLOR SERIES Taylor Series Suppose we know that the power series a n (x c) n converges on some interval c R < x < c + R to the function f(x). That is to say, we have f(x) = a 0

More information

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS BISECTION METHOD

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS BISECTION METHOD BISECTION METHOD If a function f(x) is continuous between a and b, and f(a) and f(b) are of opposite signs, then there exists at least one root between a and b. It is shown graphically as, Let f a be negative

More information

Numerical Methods. King Saud University

Numerical Methods. King Saud University Numerical Methods King Saud University Aims In this lecture, we will... find the approximate solutions of derivative (first- and second-order) and antiderivative (definite integral only). Numerical Differentiation

More information

Solution of Algebric & Transcendental Equations

Solution of Algebric & Transcendental Equations Page15 Solution of Algebric & Transcendental Equations Contents: o Introduction o Evaluation of Polynomials by Horner s Method o Methods of solving non linear equations o Bracketing Methods o Bisection

More information

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes)

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes) AMSC/CMSC 460 Computational Methods, Fall 2007 UNIT 5: Nonlinear Equations Dianne P. O Leary c 2001, 2002, 2007 Solving Nonlinear Equations and Optimization Problems Read Chapter 8. Skip Section 8.1.1.

More information

Math 473: Practice Problems for Test 1, Fall 2011, SOLUTIONS

Math 473: Practice Problems for Test 1, Fall 2011, SOLUTIONS Math 473: Practice Problems for Test 1, Fall 011, SOLUTIONS Show your work: 1. (a) Compute the Taylor polynomials P n (x) for f(x) = sin x and x 0 = 0. Solution: Compute f(x) = sin x, f (x) = cos x, f

More information

Outline. Math Numerical Analysis. Intermediate Value Theorem. Lecture Notes Zeros and Roots. Joseph M. Mahaffy,

Outline. Math Numerical Analysis. Intermediate Value Theorem. Lecture Notes Zeros and Roots. Joseph M. Mahaffy, Outline Math 541 - Numerical Analysis Lecture Notes Zeros and Roots Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research

More information

CHAPTER 10 Zeros of Functions

CHAPTER 10 Zeros of Functions CHAPTER 10 Zeros of Functions An important part of the maths syllabus in secondary school is equation solving. This is important for the simple reason that equations are important a wide range of problems

More information

Math Numerical Analysis

Math Numerical Analysis Math 541 - Numerical Analysis Lecture Notes Zeros and Roots Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research Center

More information

Zeros of Functions. Chapter 10

Zeros of Functions. Chapter 10 Chapter 10 Zeros of Functions An important part of the mathematics syllabus in secondary school is equation solving. This is important for the simple reason that equations are important a wide range of

More information

Math Numerical Analysis

Math Numerical Analysis Math 541 - Numerical Analysis Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Research Center San Diego State University

More information

15 Nonlinear Equations and Zero-Finders

15 Nonlinear Equations and Zero-Finders 15 Nonlinear Equations and Zero-Finders This lecture describes several methods for the solution of nonlinear equations. In particular, we will discuss the computation of zeros of nonlinear functions f(x).

More information

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that Chapter 4 Nonlinear equations 4.1 Root finding Consider the problem of solving any nonlinear relation g(x) = h(x) in the real variable x. We rephrase this problem as one of finding the zero (root) of a

More information

6 Lecture 6b: the Euler Maclaurin formula

6 Lecture 6b: the Euler Maclaurin formula Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Fall 217 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 217 March 26, 218 6 Lecture 6b: the Euler Maclaurin formula

More information

NUMERICAL METHODS. x n+1 = 2x n x 2 n. In particular: which of them gives faster convergence, and why? [Work to four decimal places.

NUMERICAL METHODS. x n+1 = 2x n x 2 n. In particular: which of them gives faster convergence, and why? [Work to four decimal places. NUMERICAL METHODS 1. Rearranging the equation x 3 =.5 gives the iterative formula x n+1 = g(x n ), where g(x) = (2x 2 ) 1. (a) Starting with x = 1, compute the x n up to n = 6, and describe what is happening.

More information

Lecture 7. Root finding I. 1 Introduction. 2 Graphical solution

Lecture 7. Root finding I. 1 Introduction. 2 Graphical solution 1 Introduction Lecture 7 Root finding I For our present purposes, root finding is the process of finding a real value of x which solves the equation f (x)=0. Since the equation g x =h x can be rewritten

More information

Interpolation. 1. Judd, K. Numerical Methods in Economics, Cambridge: MIT Press. Chapter

Interpolation. 1. Judd, K. Numerical Methods in Economics, Cambridge: MIT Press. Chapter Key References: Interpolation 1. Judd, K. Numerical Methods in Economics, Cambridge: MIT Press. Chapter 6. 2. Press, W. et. al. Numerical Recipes in C, Cambridge: Cambridge University Press. Chapter 3

More information

Unit 2: Solving Scalar Equations. Notes prepared by: Amos Ron, Yunpeng Li, Mark Cowlishaw, Steve Wright Instructor: Steve Wright

Unit 2: Solving Scalar Equations. Notes prepared by: Amos Ron, Yunpeng Li, Mark Cowlishaw, Steve Wright Instructor: Steve Wright cs416: introduction to scientific computing 01/9/07 Unit : Solving Scalar Equations Notes prepared by: Amos Ron, Yunpeng Li, Mark Cowlishaw, Steve Wright Instructor: Steve Wright 1 Introduction We now

More information

Numerical methods. Examples with solution

Numerical methods. Examples with solution Numerical methods Examples with solution CONTENTS Contents. Nonlinear Equations 3 The bisection method............................ 4 Newton s method.............................. 8. Linear Systems LU-factorization..............................

More information

Infinite series, improper integrals, and Taylor series

Infinite series, improper integrals, and Taylor series Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions

More information

Extrapolation in Numerical Integration. Romberg Integration

Extrapolation in Numerical Integration. Romberg Integration Extrapolation in Numerical Integration Romberg Integration Matthew Battaglia Joshua Berge Sara Case Yoobin Ji Jimu Ryoo Noah Wichrowski Introduction Extrapolation: the process of estimating beyond the

More information

Midterm Review. Igor Yanovsky (Math 151A TA)

Midterm Review. Igor Yanovsky (Math 151A TA) Midterm Review Igor Yanovsky (Math 5A TA) Root-Finding Methods Rootfinding methods are designed to find a zero of a function f, that is, to find a value of x such that f(x) =0 Bisection Method To apply

More information

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation Outline Interpolation 1 Interpolation 2 3 Michael T. Heath Scientific Computing 2 / 56 Interpolation Motivation Choosing Interpolant Existence and Uniqueness Basic interpolation problem: for given data

More information

February 13, Option 9 Overview. Mind Map

February 13, Option 9 Overview. Mind Map Option 9 Overview Mind Map Return tests - will discuss Wed..1.1 J.1: #1def,2,3,6,7 (Sequences) 1. Develop and understand basic ideas about sequences. J.2: #1,3,4,6 (Monotonic convergence) A quick review:

More information

Solution of Nonlinear Equations

Solution of Nonlinear Equations Solution of Nonlinear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 14, 017 One of the most frequently occurring problems in scientific work is to find the roots of equations of the form f(x) = 0. (1)

More information

Convergence of sequences and series

Convergence of sequences and series Convergence of sequences and series A sequence f is a map from N the positive integers to a set. We often write the map outputs as f n rather than f(n). Often we just list the outputs in order and leave

More information

ECE133A Applied Numerical Computing Additional Lecture Notes

ECE133A Applied Numerical Computing Additional Lecture Notes Winter Quarter 2018 ECE133A Applied Numerical Computing Additional Lecture Notes L. Vandenberghe ii Contents 1 LU factorization 1 1.1 Definition................................. 1 1.2 Nonsingular sets

More information

NUMERICAL MATHEMATICS AND COMPUTING

NUMERICAL MATHEMATICS AND COMPUTING NUMERICAL MATHEMATICS AND COMPUTING Fourth Edition Ward Cheney David Kincaid The University of Texas at Austin 9 Brooks/Cole Publishing Company I(T)P An International Thomson Publishing Company Pacific

More information

Numerical Methods in Informatics

Numerical Methods in Informatics Numerical Methods in Informatics Lecture 2, 30.09.2016: Nonlinear Equations in One Variable http://www.math.uzh.ch/binf4232 Tulin Kaman Institute of Mathematics, University of Zurich E-mail: tulin.kaman@math.uzh.ch

More information

Exact and Approximate Numbers:

Exact and Approximate Numbers: Eact and Approimate Numbers: The numbers that arise in technical applications are better described as eact numbers because there is not the sort of uncertainty in their values that was described above.

More information

1 Solutions to selected problems

1 Solutions to selected problems Solutions to selected problems Section., #a,c,d. a. p x = n for i = n : 0 p x = xp x + i end b. z = x, y = x for i = : n y = y + x i z = zy end c. y = (t x ), p t = a for i = : n y = y(t x i ) p t = p

More information

Lecture 28 The Main Sources of Error

Lecture 28 The Main Sources of Error Lecture 28 The Main Sources of Error Truncation Error Truncation error is defined as the error caused directly by an approximation method For instance, all numerical integration methods are approximations

More information

Numerical Analysis and Computing

Numerical Analysis and Computing Numerical Analysis and Computing Lecture Notes #02 Calculus Review; Computer Artihmetic and Finite Precision; and Convergence; Joe Mahaffy, mahaffy@math.sdsu.edu Department of Mathematics Dynamical Systems

More information

Numerical Methods in Physics and Astrophysics

Numerical Methods in Physics and Astrophysics Kostas Kokkotas 2 October 17, 2017 2 http://www.tat.physik.uni-tuebingen.de/ kokkotas Kostas Kokkotas 3 TOPICS 1. Solving nonlinear equations 2. Solving linear systems of equations 3. Interpolation, approximation

More information

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0 Numerical Analysis 1 1. Nonlinear Equations This lecture note excerpted parts from Michael Heath and Max Gunzburger. Given function f, we seek value x for which where f : D R n R n is nonlinear. f(x) =

More information

Chapter 1. Root Finding Methods. 1.1 Bisection method

Chapter 1. Root Finding Methods. 1.1 Bisection method Chapter 1 Root Finding Methods We begin by considering numerical solutions to the problem f(x) = 0 (1.1) Although the problem above is simple to state it is not always easy to solve analytically. This

More information

COURSE Iterative methods for solving linear systems

COURSE Iterative methods for solving linear systems COURSE 0 4.3. Iterative methods for solving linear systems Because of round-off errors, direct methods become less efficient than iterative methods for large systems (>00 000 variables). An iterative scheme

More information

n 1 f n 1 c 1 n+1 = c 1 n $ c 1 n 1. After taking logs, this becomes

n 1 f n 1 c 1 n+1 = c 1 n $ c 1 n 1. After taking logs, this becomes Root finding: 1 a The points {x n+1, }, {x n, f n }, {x n 1, f n 1 } should be co-linear Say they lie on the line x + y = This gives the relations x n+1 + = x n +f n = x n 1 +f n 1 = Eliminating α and

More information

CALCULUS JIA-MING (FRANK) LIOU

CALCULUS JIA-MING (FRANK) LIOU CALCULUS JIA-MING (FRANK) LIOU Abstract. Contents. Power Series.. Polynomials and Formal Power Series.2. Radius of Convergence 2.3. Derivative and Antiderivative of Power Series 4.4. Power Series Expansion

More information

M.SC. PHYSICS - II YEAR

M.SC. PHYSICS - II YEAR MANONMANIAM SUNDARANAR UNIVERSITY DIRECTORATE OF DISTANCE & CONTINUING EDUCATION TIRUNELVELI 627012, TAMIL NADU M.SC. PHYSICS - II YEAR DKP26 - NUMERICAL METHODS (From the academic year 2016-17) Most Student

More information

Math 20B Supplement. Bill Helton. September 23, 2004

Math 20B Supplement. Bill Helton. September 23, 2004 Math 0B Supplement Bill Helton September 3, 004 1 Supplement to Appendix G Supplement to Appendix G and Chapters 7 and 9 of Stewart Calculus Edition 5: August 003 Contents 1 Complex Exponentials: For Appendix

More information

Spring 2012 Introduction to numerical analysis Class notes. Laurent Demanet

Spring 2012 Introduction to numerical analysis Class notes. Laurent Demanet 18.330 Spring 2012 Introduction to numerical analysis Class notes Laurent Demanet Draft April 25, 2014 2 Preface Good additional references are Burden and Faires, Introduction to numerical analysis Suli

More information

Numerical Integration

Numerical Integration Numerical Integration Sanzheng Qiao Department of Computing and Software McMaster University February, 2014 Outline 1 Introduction 2 Rectangle Rule 3 Trapezoid Rule 4 Error Estimates 5 Simpson s Rule 6

More information

Mathematical Methods for Physics and Engineering

Mathematical Methods for Physics and Engineering Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory

More information

Numerical Methods in Physics and Astrophysics

Numerical Methods in Physics and Astrophysics Kostas Kokkotas 2 October 20, 2014 2 http://www.tat.physik.uni-tuebingen.de/ kokkotas Kostas Kokkotas 3 TOPICS 1. Solving nonlinear equations 2. Solving linear systems of equations 3. Interpolation, approximation

More information