Part IB Numerical Analysis

Theorems with proof
Based on lectures by G. Moore
Notes taken by Dexter Chua
Lent 2016

These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures. They are nowhere near accurate representations of what was actually lectured, and in particular, all errors are almost surely mine.

Polynomial approximation
Interpolation by polynomials. Divided differences of functions and relations to derivatives. Orthogonal polynomials and their recurrence relations. Least squares approximation by polynomials. Gaussian quadrature formulae. Peano kernel theorem and applications. [6]

Computation of ordinary differential equations
Euler's method and proof of convergence. Multistep methods, including order, the root condition and the concept of convergence. Runge-Kutta schemes. Stiff equations and A-stability. [5]

Systems of equations and least squares calculations
LU triangular factorization of matrices. Relation to Gaussian elimination. Column pivoting. Factorizations of symmetric and band matrices. The Newton-Raphson method for systems of non-linear algebraic equations. QR factorization of rectangular matrices by Gram-Schmidt, Givens and Householder techniques. Application to linear least squares calculations. [5]

Contents

0 Introduction
1 Polynomial interpolation
  1.1 The interpolation problem
  1.2 The Lagrange formula
  1.3 The Newton formula
  1.4 A useful property of divided differences
  1.5 Error bounds for polynomial interpolation
2 Orthogonal polynomials
  2.1 Scalar product
  2.2 Orthogonal polynomials
  2.3 Three-term recurrence relation
  2.4 Examples
  2.5 Least-squares polynomial approximation
3 Approximation of linear functionals
  3.1 Linear functionals
  3.2 Gaussian quadrature
4 Expressing errors in terms of derivatives
5 Ordinary differential equations
  5.1 Introduction
  5.2 One-step methods
  5.3 Multi-step methods
  5.4 Runge-Kutta methods
6 Stiff equations
  6.1 Introduction
  6.2 Linear stability
7 Implementation of ODE methods
  7.1 Local error estimation
  7.2 Solving for implicit methods
8 Numerical linear algebra
  8.1 Triangular matrices
  8.2 LU factorization
  8.3 A = LU for special A
9 Linear least squares

0 Introduction

1 Polynomial interpolation

1.1 The interpolation problem

1.2 The Lagrange formula

Theorem. The interpolation problem has exactly one solution.

Proof. We define p ∈ P_n[x] by
$$p(x) = \sum_{k=0}^n f_k \ell_k(x).$$
Evaluating at x_j gives
$$p(x_j) = \sum_{k=0}^n f_k \ell_k(x_j) = \sum_{k=0}^n f_k \delta_{jk} = f_j.$$
So we get existence.

For uniqueness, suppose p, q ∈ P_n[x] are solutions. Then the difference r = p − q ∈ P_n[x] satisfies r(x_j) = 0 for all j, i.e. it has n + 1 roots. However, a non-zero polynomial of degree n can have at most n roots. So in fact p − q is zero, i.e. p = q.

1.3 The Newton formula

Theorem (Recurrence relation for Newton divided differences). For 0 ≤ j < k ≤ n, we have
$$f[x_j, \dots, x_k] = \frac{f[x_{j+1}, \dots, x_k] - f[x_j, \dots, x_{k-1}]}{x_k - x_j}.$$

Proof. The key to proving this is to relate the interpolating polynomials. Let q_0, q_1 ∈ P_{k−j−1}[x] and q_2 ∈ P_{k−j}[x] satisfy
q_0(x_i) = f_i for i = j, …, k − 1,
q_1(x_i) = f_i for i = j + 1, …, k,
q_2(x_i) = f_i for i = j, …, k.
We now claim that
$$q_2(x) = \frac{x - x_j}{x_k - x_j} q_1(x) + \frac{x_k - x}{x_k - x_j} q_0(x).$$
We can check directly that the expression on the right correctly interpolates the points x_i for i = j, …, k. By uniqueness, the two expressions agree. Since f[x_j, …, x_k], f[x_{j+1}, …, x_k] and f[x_j, …, x_{k−1}] are the leading coefficients of q_2, q_1, q_0 respectively, the result follows.
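The Lagrange formula and the divided-difference recurrence above can both be sketched directly in code; this is a minimal illustration (the function names and the quadratic test data f(x) = x² are my own, not from the notes):

```python
def lagrange_eval(xs, fs, x):
    """Evaluate the interpolant via the Lagrange formula p(x) = sum_k f_k l_k(x)."""
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        lk = 1.0                       # cardinal polynomial l_k evaluated at x
        for j, xj in enumerate(xs):
            if j != k:
                lk *= (x - xj) / (xk - xj)
        total += fk * lk
    return total

def divided_difference(xs, fs):
    """f[x_0, ..., x_n] via the recurrence
    f[x_j, ..., x_k] = (f[x_{j+1}, ..., x_k] - f[x_j, ..., x_{k-1}]) / (x_k - x_j)."""
    table = list(fs)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            table[i] = (table[i + 1] - table[i]) / (xs[i + level] - xs[i])
    return table[0]
```

For f(x) = x² on the nodes 0, 1, 2, the top divided difference f[x_0, x_1, x_2] is the leading coefficient 1, and the interpolant reproduces x² exactly.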

1.4 A useful property of divided differences

Lemma. Let g ∈ C^m[a, b] have continuous mth derivative. Suppose g is zero at m + l distinct points. Then g^{(m)} has at least l distinct zeros in [a, b].

Proof. This is a repeated application of Rolle's theorem. We know that between every two zeros of g, there is at least one zero of g′ ∈ C^{m−1}[a, b]. So by differentiating once, we have lost at most 1 zero. So after differentiating m times, g^{(m)} has lost at most m zeros. So it still has at least l zeros.

Theorem. Let {x_i}_{i=0}^n ⊆ [a, b] and f ∈ C^n[a, b]. Then there exists some ξ ∈ (a, b) such that
$$f[x_0, \dots, x_n] = \frac{1}{n!} f^{(n)}(\xi).$$

Proof. Consider e = f − p_n ∈ C^n[a, b]. This has at least n + 1 distinct zeros in [a, b]. So by the lemma, e^{(n)} = f^{(n)} − p_n^{(n)} must vanish at some ξ ∈ (a, b). But p_n^{(n)} = n! f[x_0, …, x_n] constantly. So the result follows.

1.5 Error bounds for polynomial interpolation

Theorem. Assume {x_i}_{i=0}^n ⊆ [a, b] and f ∈ C[a, b]. Let x̄ ∈ [a, b] be a non-interpolation point. Then
$$e_n(\bar{x}) = f[x_0, x_1, \dots, x_n, \bar{x}]\, \omega(\bar{x}),$$
where
$$\omega(x) = \prod_{i=0}^n (x - x_i).$$

Proof. We think of x̄ = x_{n+1} as a new interpolation point, so that
$$p_{n+1}(x) - p_n(x) = f[x_0, \dots, x_n, \bar{x}]\, \omega(x)$$
for all x ∈ ℝ. In particular, putting x = x̄, we have p_{n+1}(x̄) = f(x̄), and we get the result.

Theorem. If in addition f ∈ C^{n+1}[a, b], then for each x ∈ [a, b], we can find ξ_x ∈ (a, b) such that
$$e_n(x) = \frac{1}{(n+1)!} f^{(n+1)}(\xi_x)\, \omega(x).$$

Proof. The statement is trivial if x is an interpolation point: pick an arbitrary ξ_x, and both sides are zero. Otherwise, this follows directly from the last two theorems.

Corollary. For all x ∈ [a, b], we have
$$|f(x) - p_n(x)| \le \frac{1}{(n+1)!} \|f^{(n+1)}\|_\infty\, |\omega(x)|.$$

Lemma (3-term recurrence relation). The Chebyshev polynomials satisfy the recurrence relation
$$T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)$$
with initial conditions T_0(x) = 1, T_1(x) = x.

Proof. This is the identity cos((n + 1)θ) + cos((n − 1)θ) = 2 cos θ cos(nθ), with x = cos θ.

Theorem (Minimal property for n ≥ 1). On [−1, 1], among all polynomials p ∈ P_n[x] with leading coefficient 1, the rescaled Chebyshev polynomial 2^{1−n} T_n minimizes ‖p‖_∞. Thus, the minimum value is 2^{1−n}.

Proof. We proceed by contradiction. Suppose there is a polynomial q_n ∈ P_n[x] with leading coefficient 1 such that ‖q_n‖_∞ < 2^{1−n}. Define a new polynomial
$$r = 2^{1-n} T_n - q_n.$$
This is, by assumption, non-zero. Since both polynomials have leading coefficient 1, the difference must have degree at most n − 1, i.e. r ∈ P_{n−1}[x]. At the extrema X_k = cos(kπ/n), k = 0, …, n, we have 2^{1−n} T_n(X_k) = ±2^{1−n}, while |q_n(X_k)| < 2^{1−n} by assumption. So r alternates in sign between these n + 1 points. But then by the intermediate value theorem, r has at least n zeros. This is a contradiction, since r has degree at most n − 1 and is non-zero.

Corollary. Consider
$$\omega(x) = \prod_{i=0}^n (x - x_i) \in P_{n+1}[x]$$
for any distinct points {x_i}_{i=0}^n ⊆ [−1, 1]. Then
$$\min \|\omega\|_\infty = \frac{1}{2^n}.$$
This minimum is achieved by picking the interpolation points to be the zeros of T_{n+1}, namely
$$x_k = \cos\left(\frac{2k+1}{2n+2}\pi\right), \quad k = 0, \dots, n.$$

Theorem. For f ∈ C^{n+1}[−1, 1], the Chebyshev choice of interpolation points gives
$$\|f - p_n\|_\infty \le \frac{1}{2^n (n+1)!} \|f^{(n+1)}\|_\infty.$$
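The three-term recurrence and the formula for the Chebyshev nodes can be checked numerically; a small sketch (function names are my own):

```python
import math

def chebyshev_T(n, x):
    """T_n(x) via the recurrence T_{n+1} = 2x T_n - T_{n-1}, T_0 = 1, T_1 = x."""
    t_prev, t_cur = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

def chebyshev_nodes(n):
    """Zeros of T_{n+1}: x_k = cos((2k+1)pi/(2n+2)), k = 0, ..., n."""
    return [math.cos((2 * k + 1) * math.pi / (2 * n + 2)) for k in range(n + 1)]
```

The recurrence agrees with the defining identity T_n(cos θ) = cos(nθ), and T_{n+1} vanishes at each of the n + 1 nodes.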

2 Orthogonal polynomials

2.1 Scalar product

2.2 Orthogonal polynomials

Theorem. Given a vector space V of functions and an inner product ⟨·, ·⟩, there exists a unique monic orthogonal polynomial for each degree n ≥ 0. In addition, {p_k}_{k=0}^n form a basis for P_n[x].

Proof. This is a big induction proof over both parts of the theorem. We induct over n. For the base case, we pick p_0(x) = 1, which is the only degree-zero monic polynomial.

Now suppose we already have {p_k}_{k=0}^n satisfying the induction hypothesis. Pick any monic q_{n+1} ∈ P_{n+1}[x], e.g. x^{n+1}. We now construct p_{n+1} from q_{n+1} by the Gram-Schmidt process. We define
$$p_{n+1} = q_{n+1} - \sum_{k=0}^n \frac{\langle q_{n+1}, p_k \rangle}{\langle p_k, p_k \rangle} p_k.$$
This is again monic since q_{n+1} is, and we have ⟨p_{n+1}, p_m⟩ = 0 for all m ≤ n, and hence ⟨p_{n+1}, p⟩ = 0 for all p ∈ P_n[x] = ⟨p_0, …, p_n⟩.

To obtain uniqueness, assume p_{n+1}, p̂_{n+1} ∈ P_{n+1}[x] are both monic orthogonal polynomials. Then r = p_{n+1} − p̂_{n+1} ∈ P_n[x]. So
$$\langle r, r \rangle = \langle r, p_{n+1} - \hat{p}_{n+1} \rangle = \langle r, p_{n+1} \rangle - \langle r, \hat{p}_{n+1} \rangle = 0 - 0 = 0.$$
So r = 0, i.e. p_{n+1} = p̂_{n+1}.

Finally, we have to show that p_0, …, p_{n+1} form a basis for P_{n+1}[x]. Note that every p ∈ P_{n+1}[x] can be written uniquely as p = c p_{n+1} + q, where q ∈ P_n[x]. But {p_k}_{k=0}^n is a basis for P_n[x]. So q can be uniquely decomposed as a linear combination of p_0, …, p_n.

Alternatively, this follows from the fact that any set of non-zero orthogonal vectors must be linearly independent, and since there are n + 2 of these vectors and P_{n+1}[x] has dimension n + 2, they must be a basis.

Proof. By inspection, the p_1 given is monic and satisfies ⟨p_1, p_0⟩ = 0. Using q_{n+1} = x p_n in the Gram-Schmidt process gives
$$p_{n+1} = x p_n - \sum_{k=0}^n \frac{\langle x p_n, p_k \rangle}{\langle p_k, p_k \rangle} p_k = x p_n - \sum_{k=0}^n \frac{\langle p_n, x p_k \rangle}{\langle p_k, p_k \rangle} p_k.$$
We notice that ⟨p_n, x p_k⟩ vanishes whenever x p_k has degree less than n. So we are left with
$$p_{n+1} = x p_n - \frac{\langle x p_n, p_n \rangle}{\langle p_n, p_n \rangle} p_n - \frac{\langle p_n, x p_{n-1} \rangle}{\langle p_{n-1}, p_{n-1} \rangle} p_{n-1} = (x - \alpha_n) p_n - \frac{\langle p_n, x p_{n-1} \rangle}{\langle p_{n-1}, p_{n-1} \rangle} p_{n-1}.$$
Now we notice that x p_{n−1} is a monic polynomial of degree n, so we can write it as x p_{n−1} = p_n + q with q ∈ P_{n−1}[x]. Thus
$$\langle p_n, x p_{n-1} \rangle = \langle p_n, p_n + q \rangle = \langle p_n, p_n \rangle.$$
Hence the coefficient of p_{n−1} is indeed the β_n we defined.
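The three-term recurrence can be run in exact arithmetic. The following sketch (my own construction, for the weight w ≡ 1 on [−1, 1], where the monic orthogonal polynomials are the monic Legendre polynomials) represents polynomials as coefficient lists, lowest degree first:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def inner(p, q):
    """<p, q> = integral of p(x) q(x) over [-1, 1], computed exactly:
    the integral of x^n is 2/(n+1) for even n and 0 for odd n."""
    r = poly_mul(p, q)
    return sum(2 * c / Fraction(n + 1) for n, c in enumerate(r) if n % 2 == 0)

def monic_orthogonal(nmax):
    """Generate monic orthogonal polynomials p_0, ..., p_nmax by
    p_{k+1} = (x - alpha_k) p_k - beta_k p_{k-1}."""
    polys = [[Fraction(1)]]
    for k in range(nmax):
        pk = polys[-1]
        xpk = [Fraction(0)] + pk                      # multiply p_k by x
        alpha = inner(xpk, pk) / inner(pk, pk)
        nxt = list(xpk)
        for i, c in enumerate(pk):
            nxt[i] -= alpha * c                       # subtract alpha_k p_k
        if k >= 1:
            prev = polys[-2]
            beta = inner(pk, pk) / inner(prev, prev)
            for i, c in enumerate(prev):
                nxt[i] -= beta * c                    # subtract beta_k p_{k-1}
        polys.append(nxt)
    return polys
```

Running this reproduces the monic Legendre polynomials p_2 = x² − 1/3 and p_3 = x³ − (3/5)x, and the generated polynomials are pairwise orthogonal by construction.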

2.4 Examples

2.5 Least-squares polynomial approximation

Theorem. If {p_k}_{k=0}^n are orthogonal polynomials with respect to ⟨·, ·⟩, then the choice of c_k such that
$$p = \sum_{k=0}^n c_k p_k$$
minimizes ‖f − p‖² is given by
$$c_k = \frac{\langle f, p_k \rangle}{\|p_k\|^2},$$
and the formula for the error is
$$\|f - p\|^2 = \|f\|^2 - \sum_{k=0}^n \frac{\langle f, p_k \rangle^2}{\|p_k\|^2}.$$

Proof. We consider a general polynomial
$$p = \sum_{k=0}^n c_k p_k.$$
We substitute this in to obtain
$$\langle f - p, f - p \rangle = \langle f, f \rangle - 2 \sum_{k=0}^n c_k \langle f, p_k \rangle + \sum_{k=0}^n c_k^2 \|p_k\|^2.$$
Note that there are no cross terms between the different coefficients. We minimize this quadratic by setting the partial derivatives to zero:
$$0 = \frac{\partial}{\partial c_k} \langle f - p, f - p \rangle = -2 \langle f, p_k \rangle + 2 c_k \|p_k\|^2.$$
To check this is indeed a minimum, note that the Hessian matrix is diagonal with positive entries 2‖p_k‖², hence positive definite. So this is really a minimum. So we get the formula for the c_k's as claimed, and substituting the formula for c_k gives the error formula.
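Both formulas of this theorem can be verified numerically. In this sketch (my own choices: f = e^x, w ≡ 1 on [−1, 1], Legendre polynomials as the orthogonal family, degree-3 truncation, with inner products evaluated by a high-order Gauss rule), the error computed from the theorem's formula matches the directly computed ‖f − p‖²:

```python
import numpy as np

qnodes, qweights = np.polynomial.legendre.leggauss(30)   # for the inner products

def inner(f, g):
    """<f, g> = integral of f(x) g(x) over [-1, 1] (weight w = 1)."""
    return float(np.dot(qweights, f(qnodes) * g(qnodes)))

f = np.exp
basis = [np.polynomial.legendre.Legendre.basis(k) for k in range(4)]
coeffs = [inner(f, p) / inner(p, p) for p in basis]      # c_k = <f, p_k> / ||p_k||^2

def best_approx(x):
    return sum(c * p(x) for c, p in zip(coeffs, basis))

# error formula from the theorem: ||f - p||^2 = ||f||^2 - sum_k c_k^2 ||p_k||^2
err_sq = inner(f, f) - sum(c * c * inner(p, p) for c, p in zip(coeffs, basis))
direct = inner(lambda x: f(x) - best_approx(x), lambda x: f(x) - best_approx(x))
```

For the Legendre polynomial P_1(x) = x one can also check c_1 = ⟨e^x, x⟩/‖x‖² = 3/e by hand.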

3 Approximation of linear functionals

3.1 Linear functionals

3.2 Gaussian quadrature

Proposition. There is no choice of ν weights and nodes such that the approximation of ∫_a^b w(x) f(x) dx is exact for all f ∈ P_{2ν}[x].

Proof. Define
$$q(x) = \prod_{k=1}^\nu (x - c_k) \in P_\nu[x].$$
Then we know
$$\int_a^b w(x) q^2(x)\, dx > 0,$$
since q² is always non-negative and has finitely many zeros. However,
$$\sum_{k=1}^\nu b_k q^2(c_k) = 0.$$
So the approximation cannot be exact for q² ∈ P_{2ν}[x].

Theorem (Ordinary quadrature). For any distinct {c_k}_{k=1}^ν ⊆ [a, b], let {ℓ_k}_{k=1}^ν be the Lagrange cardinal polynomials with respect to {c_k}_{k=1}^ν. Then by choosing
$$b_k = \int_a^b w(x) \ell_k(x)\, dx,$$
the approximation
$$L(f) = \int_a^b w(x) f(x)\, dx \approx \sum_{k=1}^\nu b_k f(c_k)$$
is exact for f ∈ P_{ν−1}[x]. We call this method ordinary quadrature.

Theorem. For ν ≥ 1, the zeros of the orthogonal polynomial p_ν are real, distinct and lie in (a, b).

Proof. First we show there is at least one root. Notice that p_0 = 1. Thus for ν ≥ 1, by orthogonality, we know
$$\int_a^b w(x) p_\nu(x) p_0(x)\, dx = \int_a^b w(x) p_\nu(x)\, dx = 0.$$
So there is at least one sign change in (a, b). We have already got the result we need for ν = 1, since we only need one zero in (a, b).

Now for ν > 1, suppose {ξ_j}_{j=1}^m are the places where the sign of p_ν changes in (a, b) (which is a subset of the roots of p_ν). We define
$$q(x) = \prod_{j=1}^m (x - \xi_j) \in P_m[x].$$

Since this changes sign at the same places as p_ν, we know q p_ν maintains the same sign in (a, b). Now if we had m < ν, then orthogonality gives
$$\langle q, p_\nu \rangle = \int_a^b w(x) q(x) p_\nu(x)\, dx = 0,$$
which is impossible, since q p_ν does not change sign. Hence we must have m = ν.

Theorem. In the ordinary quadrature, if we pick {c_k}_{k=1}^ν to be the roots of p_ν(x), then we get exactness for f ∈ P_{2ν−1}[x]. In addition, the weights {b_k}_{k=1}^ν are all positive.

Proof. Let f ∈ P_{2ν−1}[x]. Then by polynomial division, we get
$$f = q p_\nu + r,$$
where q, r are polynomials of degree at most ν − 1. We apply orthogonality to get
$$\int_a^b w(x) f(x)\, dx = \int_a^b w(x) \big(q(x) p_\nu(x) + r(x)\big)\, dx = \int_a^b w(x) r(x)\, dx.$$
Also, since each c_k is a root of p_ν, we get
$$\sum_{k=1}^\nu b_k f(c_k) = \sum_{k=1}^\nu b_k \big(q(c_k) p_\nu(c_k) + r(c_k)\big) = \sum_{k=1}^\nu b_k r(c_k).$$
But r has degree at most ν − 1, and the formula is exact for polynomials in P_{ν−1}[x]. Hence we know
$$\int_a^b w(x) f(x)\, dx = \int_a^b w(x) r(x)\, dx = \sum_{k=1}^\nu b_k r(c_k) = \sum_{k=1}^\nu b_k f(c_k).$$

To show the weights are positive, we simply pick special f. Consider f ∈ {ℓ_k²}_{k=1}^ν ⊆ P_{2ν−2}[x], for ℓ_k the Lagrange cardinal polynomials for {c_k}_{k=1}^ν. Since the quadrature is exact for these, we get
$$0 < \int_a^b w(x) \ell_k^2(x)\, dx = \sum_{j=1}^\nu b_j \ell_k^2(c_j) = \sum_{j=1}^\nu b_j \delta_{jk} = b_k.$$
Since this is true for all k = 1, …, ν, we get the desired result.
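The exactness degree 2ν − 1, its sharpness (the proposition above), and the positivity of the weights can all be observed with a standard Gauss-Legendre rule (the case w ≡ 1 on [−1, 1]); a minimal sketch using NumPy's `leggauss`, whose nodes are the zeros of the degree-ν Legendre polynomial:

```python
import numpy as np

nu = 4
nodes, weights = np.polynomial.legendre.leggauss(nu)

def gauss(f):
    """Approximate the integral of f over [-1, 1] by the nu-point Gauss rule."""
    return float(np.dot(weights, f(nodes)))

# Exact for every polynomial of degree <= 2*nu - 1 = 7:
approx6 = gauss(lambda x: x**6)   # true value 2/7
# ... but not for degree 2*nu = 8, matching the proposition above:
approx8 = gauss(lambda x: x**8)   # true value 2/9
```

The weights come out strictly positive, as the theorem guarantees.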

4 Expressing errors in terms of derivatives

Theorem (Peano kernel theorem). If λ annihilates all polynomials of degree k or less, then
$$\lambda(f) = \frac{1}{k!} \int_a^b K(\theta) f^{(k+1)}(\theta)\, d\theta$$
for all f ∈ C^{k+1}[a, b], where K(θ) = λ[(x − θ)_+^k] is the Peano kernel of λ.

5 Ordinary differential equations

5.1 Introduction

5.2 One-step methods

Theorem (Convergence of Euler's method).

(i) For all t ∈ [0, T], we have
$$\lim_{\substack{h \to 0 \\ nh \to t}} \|y_n - y(t)\| = 0.$$

(ii) Let λ be the Lipschitz constant of f. Then there exists c ≥ 0 such that
$$\|e_n\| \le ch\, \frac{e^{\lambda T} - 1}{\lambda}$$
for all 0 ≤ n ≤ [T/h], where e_n = y_n − y(t_n).

Proof. There are two parts to proving this. We first look at the local truncation error. This is the error we would get at each step assuming we got the previous steps right. More precisely, we write
$$y(t_{n+1}) = y(t_n) + h f(t_n, y(t_n)) + R_n,$$
and R_n is the local truncation error. For Euler's method, it is easy to get R_n, since f(t_n, y(t_n)) = y′(t_n) by definition. So this is just the Taylor series expansion of y. We can write R_n as the integral remainder of the Taylor series,
$$R_n = \int_{t_n}^{t_{n+1}} (t_{n+1} - \theta)\, y''(\theta)\, d\theta.$$
By some careful analysis, we get
$$\|R_n\| \le ch^2,$$
where c = ½‖y″‖_∞. This is the easy part, and tends to go rather smoothly even for more complicated methods.

Once we have bounded the local truncation error, we patch the errors together to get the actual error. We can write
$$e_{n+1} = y_{n+1} - y(t_{n+1}) = y_n + h f(t_n, y_n) - \big(y(t_n) + h f(t_n, y(t_n)) + R_n\big) = \big(y_n - y(t_n)\big) + h\big(f(t_n, y_n) - f(t_n, y(t_n))\big) - R_n.$$
Taking the infinity norm, we get
$$\|e_{n+1}\| \le \|y_n - y(t_n)\| + h\|f(t_n, y_n) - f(t_n, y(t_n))\| + \|R_n\| \le \|e_n\| + h\lambda\|e_n\| + ch^2 = (1 + \lambda h)\|e_n\| + ch^2.$$

This is valid for all n ≥ 0. We also know ‖e_0‖ = 0. Doing some algebra, we get
$$\|e_n\| \le ch^2 \sum_{j=0}^{n-1} (1 + h\lambda)^j \le \frac{ch}{\lambda}\big((1 + h\lambda)^n - 1\big).$$
Finally, we have
$$(1 + h\lambda) \le e^{\lambda h},$$
since 1 + λh is the first two terms of the Taylor series of e^{λh}, and the other terms are positive. So (1 + hλ)^n ≤ e^{λhn} ≤ e^{λT}. So we obtain the bound
$$\|e_n\| \le ch\, \frac{e^{\lambda T} - 1}{\lambda}.$$
This tends to 0 as we take h → 0. So the method converges.

5.3 Multi-step methods

Theorem. An s-step method has order p (p ≥ 1) if and only if
$$\sum_{l=0}^s \rho_l = 0$$
and
$$\sum_{l=0}^s \rho_l l^k = k \sum_{l=0}^s \sigma_l l^{k-1}$$
for k = 1, …, p, where 0^0 = 1.

Proof. The local truncation error is
$$\sum_{l=0}^s \rho_l y(t_{n+l}) - h \sum_{l=0}^s \sigma_l y'(t_{n+l}).$$
We now expand the y and y′ about t_n, and obtain
$$\left(\sum_{l=0}^s \rho_l\right) y(t_n) + \sum_{k=1}^\infty \frac{h^k}{k!} \left(\sum_{l=0}^s \rho_l l^k - k \sum_{l=0}^s \sigma_l l^{k-1}\right) y^{(k)}(t_n).$$
This is O(h^{p+1}) under the given conditions.

Theorem. A multi-step method has order p (with p ≥ 1) if and only if
$$\rho(e^x) - x\sigma(e^x) = O(x^{p+1})$$
as x → 0.

Proof. We expand
$$\rho(e^x) - x\sigma(e^x) = \sum_{l=0}^s \rho_l e^{lx} - x \sum_{l=0}^s \sigma_l e^{lx}.$$

We now expand the e^{lx} in Taylor series about x = 0. This comes out as
$$\left(\sum_{l=0}^s \rho_l\right) + \sum_{k=1}^\infty \frac{1}{k!} \left(\sum_{l=0}^s \rho_l l^k - k \sum_{l=0}^s \sigma_l l^{k-1}\right) x^k.$$
So the result follows.

Theorem (Dahlquist equivalence theorem). A multi-step method is convergent if and only if
(i) The order p is at least 1; and
(ii) The root condition holds.

Lemma. An s-step backward differentiation method of order s is obtained by choosing
$$\rho(w) = \sigma_s \sum_{l=1}^s \frac{1}{l} w^{s-l}(w - 1)^l,$$
with σ_s chosen such that ρ_s = 1, namely
$$\sigma_s = \left(\sum_{l=1}^s \frac{1}{l}\right)^{-1}.$$

Proof. We need to construct ρ so that
$$\rho(w) = \sigma_s w^s \log w + O(|w - 1|^{s+1}).$$
This is easy, if we write
$$\log w = -\log\left(\frac{1}{w}\right) = -\log\left(1 - \frac{w - 1}{w}\right) = \sum_{l=1}^\infty \frac{1}{l}\left(\frac{w - 1}{w}\right)^l.$$
Multiplying by σ_s w^s and truncating the series gives the desired result.

5.4 Runge-Kutta methods
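The first-order error bound of the Euler convergence theorem in section 5.2 can be observed numerically: halving h should roughly halve the error. A minimal sketch (the model problem y′ = y with y(0) = 1 is my own choice of example):

```python
import math

def euler(f, y0, T, n):
    """Explicit Euler with constant step h = T/n: y_{k+1} = y_k + h f(t_k, y_k)."""
    h = T / n
    t, y = 0.0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1, so the exact solution at T = 1 is e.
err_coarse = abs(euler(lambda t, y: y, 1.0, 1.0, 100) - math.e)
err_fine = abs(euler(lambda t, y: y, 1.0, 1.0, 200) - math.e)
ratio = err_coarse / err_fine   # close to 2 for a first-order method
```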

6 Stiff equations

6.1 Introduction

6.2 Linear stability

Theorem (Maximum principle). Let g be analytic and non-constant in an open set Ω ⊆ ℂ. Then |g| has no maximum in Ω.
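The point of linear stability can be seen on the standard model problem y′ = λy with λ < 0: forward Euler gives y_{n+1} = (1 + hλ)y_n, which decays only when |1 + hλ| < 1, while backward Euler gives y_{n+1} = y_n/(1 − hλ), which decays for every h > 0 (A-stability). A minimal sketch (the parameter values are my own):

```python
# Model problem y' = lam * y, y(0) = 1, with lam < 0, so the true solution decays.
lam, h, steps = -50.0, 0.1, 20     # h*lam = -5: outside forward Euler's interval
y_fwd, y_bwd = 1.0, 1.0
for _ in range(steps):
    y_fwd = (1 + h * lam) * y_fwd      # forward Euler update
    y_bwd = y_bwd / (1 - h * lam)      # backward Euler update
```

With this step size the forward Euler iterates blow up while the backward Euler iterates decay to zero, like the exact solution.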

7 Implementation of ODE methods

7.1 Local error estimation

7.2 Solving for implicit methods

8 Numerical linear algebra

8.1 Triangular matrices

8.2 LU factorization

8.3 A = LU for special A

Theorem. A sufficient condition for both the existence and uniqueness of the factorization A = LU is that det(A_k) ≠ 0 for k = 1, …, n − 1.

Proof. Straightforward induction.

Theorem. If det(A_k) ≠ 0 for all k = 1, …, n, then A ∈ ℝ^{n×n} has a unique factorization of the form
$$A = LD\hat{U},$$
where D is a non-singular diagonal matrix, and both L and Û are unit triangular (L lower, Û upper).

Proof. From the previous theorem, A = LU exists. Since A is non-singular, U is non-singular. So we can write it as U = DÛ, where D consists of the diagonal entries of U and Û = D^{−1}U is unit upper triangular.

Theorem. Let A ∈ ℝ^{n×n} be symmetric with det(A_k) ≠ 0 for all k = 1, …, n. Then there is a unique symmetric factorization
$$A = LDL^T,$$
with L unit lower triangular and D diagonal and non-singular.

Proof. From the previous theorem, we can factorize A uniquely as A = LDÛ. We take the transpose to obtain
$$A = A^T = \hat{U}^T D L^T.$$
This is a factorization of the form unit lower triangular times diagonal times unit upper triangular. By uniqueness, we must have Û = L^T. So done.

Theorem. Let A ∈ ℝ^{n×n} be a positive-definite matrix. Then det(A_k) ≠ 0 for all k = 1, …, n.

Proof. First consider k = n. To show A is non-singular, it suffices to show that Ax = 0 implies x = 0. But we can multiply the equation by x^T to obtain x^T Ax = 0. By positive-definiteness, we must have x = 0. So done.

Now suppose A_k y = 0 for k < n and y ∈ ℝ^k. Then y^T A_k y = 0. We invent a new x ∈ ℝ^n by taking y and padding it with zeros. Then x^T Ax = 0. By positive-definiteness, we know x = 0. Then in particular y = 0.
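An LU factorization without pivoting, valid under the leading-minor condition of the theorem above, can be sketched directly (a minimal Doolittle-style implementation; the function name and test matrix are my own):

```python
def lu_factorize(A):
    """LU factorization without pivoting: A = L U with L unit lower triangular
    and U upper triangular. Valid when det(A_k) != 0 for k = 1, ..., n-1."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

For example, [[4, 3], [6, 3]] factors as L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]], and multiplying L by U recovers A.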

Theorem. A symmetric matrix A ∈ ℝ^{n×n} is positive-definite if and only if we can factor it as
$$A = LDL^T,$$
where L is unit lower triangular, D is diagonal and D_{kk} > 0.

Proof. First suppose such a factorization exists. Then
$$x^T A x = x^T L D L^T x = (L^T x)^T D (L^T x).$$
We let y = L^T x. Note that y = 0 if and only if x = 0, since L is invertible. So
$$x^T A x = y^T D y = \sum_k y_k^2 D_{kk} > 0$$
if y ≠ 0.

Now if A is positive definite, it has an LU factorization, and since A is symmetric, we can write it as A = LDL^T, where L is unit lower triangular and D is diagonal. Now we have to show D_{kk} > 0. We define y_k such that L^T y_k = e_k, which exists, since L is invertible. Then clearly y_k ≠ 0. Then we have
$$D_{kk} = e_k^T D e_k = y_k^T L D L^T y_k = y_k^T A y_k > 0.$$
So done.

Proposition. If a band matrix A has band width r and an LU factorization A = LU, then L and U are both band matrices of width r.

Proof. Straightforward verification.

9 Linear least squares

Theorem. A vector x ∈ ℝ^n minimizes ‖Ax − b‖² if and only if
$$A^T(Ax - b) = 0.$$

Proof. A minimizer x, by definition, minimizes
$$f(x) = \langle Ax - b, Ax - b \rangle = x^T A^T A x - 2 x^T A^T b + b^T b.$$
Then as a function of x, the partial derivatives of this must vanish. We have
$$\nabla f(x) = 2 A^T (Ax - b).$$
So a necessary condition is A^T(Ax − b) = 0.

Now suppose our x* satisfies A^T(Ax* − b) = 0. Then for all x ∈ ℝ^n, we write x = x* + y, and then we have
$$\|Ax - b\|^2 = \|A(x^* + y) - b\|^2 = \|Ax^* - b\|^2 + 2y^T A^T(Ax^* - b) + \|Ay\|^2 = \|Ax^* - b\|^2 + \|Ay\|^2 \ge \|Ax^* - b\|^2.$$
So x* must minimize the Euclidean norm.

Corollary. If A ∈ ℝ^{m×n} is a full-rank matrix, then there is a unique solution to the least squares problem.

Proof. We know all minimizers are solutions to
$$(A^T A)x = A^T b.$$
The matrix A being full rank means that y ≠ 0 ∈ ℝ^n implies Ay ≠ 0 ∈ ℝ^m. Hence A^T A ∈ ℝ^{n×n} is positive definite (and in particular non-singular), since
$$x^T A^T A x = (Ax)^T(Ax) = \|Ax\|^2 > 0$$
for x ≠ 0. So we can invert A^T A and find a unique solution x.

Proposition. A matrix A ∈ ℝ^{m×n} can be transformed into upper-triangular form by applying n Householder reflections, namely
$$H_n \cdots H_1 A = R,$$
where each H_k introduces zeros into column k and leaves the previously created zeros alone.

Lemma. Let a, b ∈ ℝ^m with a ≠ b, but ‖a‖ = ‖b‖. Then if we pick u = a − b, we have H_u a = b.

Proof. We just do it:
$$H_u a = a - \frac{2\langle a, a - b \rangle}{\|a - b\|^2}(a - b) = a - \frac{2\langle a, a - b \rangle}{2\langle a, a - b \rangle}(a - b) = a - (a - b) = b,$$
where we used the fact that ‖a‖ = ‖b‖ gives ‖a − b‖² = 2⟨a, a − b⟩.

Lemma. If the first k − 1 components of u are zero, then
(i) For every x ∈ ℝ^m, H_u x does not alter the first k − 1 components of x.
(ii) If the last (m − k + 1) components of y ∈ ℝ^m are zero, then H_u y = y.

Lemma. Let a, b ∈ ℝ^m, with
$$(a_k, \dots, a_m) \ne (b_k, \dots, b_m), \quad \text{but} \quad \sum_{j=k}^m a_j^2 = \sum_{j=k}^m b_j^2.$$
Suppose we pick
$$u = (0, 0, \dots, 0, a_k - b_k, \dots, a_m - b_m)^T.$$
Then we have
$$H_u a = (a_1, \dots, a_{k-1}, b_k, \dots, b_m)^T.$$
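The first lemma, which is the basic step of the Householder QR factorization above, is easy to check numerically; a minimal sketch (the reflection H_u = I − 2uu^T/(u^T u); vector values are my own example):

```python
import math

def householder_apply(u, x):
    """Apply the reflection H_u = I - 2 u u^T / (u^T u) to the vector x."""
    uu = sum(c * c for c in u)
    ux = sum(a * b for a, b in zip(u, x))
    return [xi - (2 * ux / uu) * ui for xi, ui in zip(x, u)]

# The lemma: with ||a|| = ||b|| and u = a - b, the reflection maps a to b.
a = [3.0, 4.0]
b = [math.hypot(*a), 0.0]          # same norm as a, zeros below the first entry
u = [ai - bi for ai, bi in zip(a, b)]
image = householder_apply(u, a)    # should equal b = [5.0, 0.0]
```

Choosing b with zeros below the first entry, as here, is exactly how a reflection introduces zeros into a column of A.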