Numerical Methods for Differential Equations: Mathematical and Computational Tools

Gustaf Söderlind, Numerical Analysis, Lund University

Contents (V4.16), Part 1: Vector norms, matrix norms and logarithmic norms

- Vector norms
- Matrix norms
- Inner products
- The logarithmic norm
- Logarithmic norm properties
- Applications

1. Vector norms

Definition. A vector norm $\|\cdot\| : X \to \mathbb{R}$ satisfies

1. $\|u\| \geq 0$; $\quad \|u\| = 0 \Leftrightarrow u = 0$
2. $\|\alpha u\| = |\alpha| \, \|u\|$
3. $\big|\, \|u\| - \|v\| \,\big| \leq \|u \pm v\| \leq \|u\| + \|v\|$

A norm generalizes the notion of distance between points.

Vector norms

Definition. The $l_p$ norms are defined by
$$\|x\|_p = \Big( \sum_{k=1}^{N} |x_k|^p \Big)^{1/p}$$

[Figure: unit circles in $\mathbb{R}^2$ for the $l_p$ norms, with $p = 1$, $p = 2$ (Euclidean norm), and $p = \infty$]

2. Matrix norms

Definition. The operator norm associated with the vector norm $\|\cdot\|$ is defined by
$$\|A\| = \sup_{x \neq 0} \frac{\|Ax\|}{\|x\|}$$

For every vector norm there is a corresponding matrix norm.

Note
1. $\|Ax\| \leq \|A\| \, \|x\|$
2. $\|AB\| \leq \|A\| \, \|B\|$

Vector and matrix norms: computation

Vector norm                              Matrix norm
$\|x\|_1 = \sum_i |x_i|$                 $\|A\|_1 = \max_j \sum_i |a_{ij}|$
$\|x\|_2 = (\sum_i |x_i|^2)^{1/2}$       $\|A\|_2 = \sqrt{\rho[A^H A]}$
$\|x\|_\infty = \max_i |x_i|$            $\|A\|_\infty = \max_i \sum_j |a_{ij}|$

Definition. The spectral radius of a matrix is defined by $\rho[A] = \max |\lambda[A]|$.
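The table entries are directly computable. A minimal NumPy sketch (the matrix A is a made-up example, not taken from the notes) evaluates the three matrix-norm formulas from their definitions and checks them against NumPy's built-in operator norms:

```python
import numpy as np

# Hypothetical example matrix (an assumption for illustration)
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# Matrix norms from the table
norm1 = max(np.sum(np.abs(A), axis=0))        # ||A||_1: max column sum
norm_inf = max(np.sum(np.abs(A), axis=1))     # ||A||_inf: max row sum
# ||A||_2 = sqrt(rho[A^H A]), the spectral norm
norm2 = np.sqrt(max(np.linalg.eigvalsh(A.conj().T @ A)))

# They agree with NumPy's operator norms
assert np.isclose(norm1, np.linalg.norm(A, 1))
assert np.isclose(norm2, np.linalg.norm(A, 2))
assert np.isclose(norm_inf, np.linalg.norm(A, np.inf))

# Spectral radius rho[A] = max |lambda[A]|
rho = max(abs(np.linalg.eigvals(A)))
```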

3. Inner products

Definition. A sesquilinear form $\langle \cdot, \cdot \rangle : X \times X \to \mathbb{C}$ satisfying

1. $\langle u, u \rangle \geq 0$; $\quad \langle u, u \rangle = 0 \Leftrightarrow u = 0$
2. $\langle u, v \rangle = \overline{\langle v, u \rangle}$
3. $\langle u, \alpha v \rangle = \alpha \langle u, v \rangle$
4. $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$

An inner product generates the Euclidean norm: $\langle u, u \rangle = \|u\|^2$.

An inner product generalizes the notion of scalar product.

Inner products: examples

1. Scalar product in $\mathbb{R}^m$: $\langle u, v \rangle = u^T v$, with corresponding Euclidean vector norm
$$\|u\|_2^2 = \sum_{k=1}^{m} u_k^2$$

2. General inner product in $\mathbb{C}^m$: $\langle u, v \rangle_G = u^H G v$ for any Hermitian positive definite matrix $G$

3. Inner product in $L^2[0,1]$: $\langle u, v \rangle = \int_0^1 u v \, dx$, with corresponding $L^2$ norm of functions
$$\|u\|_{L^2}^2 = \int_0^1 u^2 \, dx$$

Inner products and operator norms

Theorem (Cauchy-Schwarz inequality)
$$-\|u\| \, \|v\| \leq \langle u, v \rangle \leq \|u\| \, \|v\|$$

Definition. The operator norm associated with $\langle \cdot, \cdot \rangle$ is
$$\|A\|^2 = \sup_{u \neq 0} \frac{\langle Au, Au \rangle}{\|u\|^2}$$

Hence $\langle Au, Au \rangle \leq \|A\|^2 \, \|u\|^2$.

Interlude: the problem of stability

Classical stability for ODEs:
$$\dot{x} = Ax; \quad x(0) = x_0$$

Characterization: elementary stability conditions

- $\mathrm{Re}\, \lambda_k < 0$ $\quad$ ($\|e^{tA}\| \to 0$ as $t \to \infty$)
- $\|e^{tA}\| \leq C$ for all $t \geq 0$
- $d\|e^{tA}\|/dt \leq 0$ $\quad$ ($\|e^{tA}\| \leq 1$ for all $t \geq 0$)

Time-dependent linear systems

Consider the non-autonomous linear system
$$\dot{x} = A(t)x; \quad x(0) = x_0$$

Stability is no longer characterized by eigenvalues. Even with constant eigenvalues in the left half plane the system can be unstable (Petrowski & Hoppenstedt).

Problem: Under what conditions does $x(t)$ remain bounded as $t \to \infty$?

The logarithmic norm: differential equations

Note that for an inner product,
$$\frac{d\|x\|_2^2}{dt} = \frac{d\langle x, x \rangle}{dt} = 2 \langle x, \dot{x} \rangle$$
and that if $\dot{x} = Ax$, then
$$\langle x, \dot{x} \rangle = \langle x, Ax \rangle \leq \mu_2[A] \, \langle x, x \rangle$$

Definition. The logarithmic norm is defined by
$$\mu_2[A] = \sup_{x \neq 0} \frac{\langle x, Ax \rangle}{\|x\|^2}$$

4. The logarithmic norm $\mu[A]$

Definition. For general matrix norms the logarithmic norm is defined by
$$\mu[A] = \lim_{h \to 0^+} \frac{\|I + hA\| - 1}{h}$$

If $\dot{x} = Ax$, the following differential inequality holds:
$$\frac{d\|x\|}{dt} \leq \mu[A] \, \|x\|$$

Note: The logarithmic norm may be negative.

Solution bound: $\|x(t)\| \leq e^{t\mu[A]} \|x(0)\|$ for $t \geq 0$.
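The limit definition can be checked numerically: for small $h$, $(\|I + hA\| - 1)/h$ approximates $\mu[A]$. A short sketch (the matrix is a hypothetical example), using the infinity norm where the log norm has a simple closed form:

```python
import numpy as np

# Hypothetical example matrix (assumption, not from the notes)
A = np.array([[-3.0, 1.0],
              [ 2.0, -5.0]])

def mu_limit(A, norm_ord, h=1e-8):
    # Finite-h approximation of mu[A] = lim_{h->0+} (||I + hA|| - 1)/h
    I = np.eye(A.shape[0])
    return (np.linalg.norm(I + h * A, norm_ord) - 1.0) / h

# In the infinity norm the closed form gives
# max_i [a_ii + sum_{j != i} |a_ij|] = max(-3 + 1, -5 + 2) = -2
assert abs(mu_limit(A, np.inf) - (-2.0)) < 1e-6
```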

Why the logarithmic norm?

Crude estimate: if $\dot{x} = Ax$, then
$$\frac{d\|x\|}{dt} \leq \|\dot{x}\| = \|Ax\| \leq \|A\| \, \|x\|$$

Exponentially growing bound: $\|x(t)\| \leq e^{t\|A\|} \|x(0)\|$

Note: Because $\mu[A] \leq \|A\|$, we always have $e^{t\mu[A]} \leq e^{t\|A\|}$.

Matrix and logarithmic norms: computation

Vector norm         Matrix norm                              Log norm $\mu[A]$
$\|x\|_1$           $\max_j \sum_i |a_{ij}|$                 $\max_j [\,\mathrm{Re}\, a_{jj} + \sum_{i \neq j} |a_{ij}|\,]$
$\|x\|_2$           $\sqrt{\rho[A^H A]}$                     $\alpha[(A + A^H)/2]$
$\|x\|_\infty$      $\max_i \sum_j |a_{ij}|$                 $\max_i [\,\mathrm{Re}\, a_{ii} + \sum_{j \neq i} |a_{ij}|\,]$

Definition. Spectral abscissa: $\alpha[A] = \max \mathrm{Re}\, \lambda[A]$
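The table formulas can be evaluated directly. A sketch for a hypothetical test matrix, also checking the basic property $\mu[A] \leq \|A\|$ in each norm:

```python
import numpy as np

# Hypothetical test matrix (assumption, not from the notes)
A = np.array([[-3.0, 1.0],
              [ 2.0, -5.0]])
n = A.shape[0]

# Logarithmic norms from the table
mu1 = max(A[j, j].real + sum(abs(A[i, j]) for i in range(n) if i != j)
          for j in range(n))
mu2 = max(np.linalg.eigvalsh((A + A.conj().T) / 2))   # spectral abscissa of the symmetric part
mu_inf = max(A[i, i].real + sum(abs(A[i, j]) for j in range(n) if j != i)
             for i in range(n))

# Basic property: mu[A] <= ||A|| in every norm
assert mu1 <= np.linalg.norm(A, 1)
assert mu2 <= np.linalg.norm(A, 2)
assert mu_inf <= np.linalg.norm(A, np.inf)
```

Note that all three log norms come out negative here even though the matrix norms are of course positive.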

5. Logarithmic norm properties

Theorem. The logarithmic norm has the following basic properties, which hold for all matrices A and B:

1. $\mu[A] \leq \|A\|$
2. $\mu[A + zI] = \mu[A] + \mathrm{Re}\, z$
3. $\mu[\alpha A] = \alpha \, \mu[A]$ for $\alpha \geq 0$
4. $\mu[A + B] \leq \mu[A] + \mu[B]$
5. $\|e^{tA}\| \leq e^{t\mu[A]}$ for $t \geq 0$
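Property 5 is the one that drives the stability applications below; it can be spot-checked numerically. A sketch with a hypothetical diagonalizable matrix, computing $e^{tA}$ by eigendecomposition (a small tolerance absorbs rounding):

```python
import numpy as np

# Hypothetical example matrix (assumption); it happens to be diagonalizable
A = np.array([[-3.0, 1.0],
              [ 2.0, -5.0]])

mu2 = max(np.linalg.eigvalsh((A + A.T) / 2))   # mu_2[A] = alpha[(A + A^T)/2]

# e^{tA} via eigendecomposition: A = V diag(w) V^{-1}
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
for t in [0.1, 0.5, 1.0, 2.0]:
    etA = (V * np.exp(t * w)) @ Vinv           # columns of V scaled by e^{t w_k}
    # Property 5: ||e^{tA}||_2 <= e^{t mu_2[A]}
    assert np.linalg.norm(etA.real, 2) <= np.exp(t * mu2) + 1e-12
```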

Uniform Monotonicity Theorem

The condition $\mu[A] < 0$ is akin to A being negative definite. Then A has a bounded inverse. More precisely:

Theorem (Uniform Monotonicity Theorem). If $\mu[A] < 0$, then A is nonsingular and
$$\|A^{-1}\| \leq \frac{1}{|\mu[A]|}$$

Proof (here only for the Euclidean norm). Note that for all x,
$$x^T A x \leq \mu_2[A] \, x^T x$$

Proof...

Suppose $\mu_2[A] < 0$. By the Cauchy-Schwarz inequality,
$$-\|x\|_2 \, \|Ax\|_2 \leq x^T A x \leq \mu_2[A] \, \|x\|_2^2 < 0$$
for all $x \neq 0$. Hence $Ax \neq 0$, so $A^{-1}$ exists! Moreover, dividing by $-\|x\|_2$ gives
$$\|Ax\|_2 \geq |\mu_2[A]| \, \|x\|_2$$

Put $x = A^{-1}y$ and rearrange to get
$$\|y\|_2 \geq |\mu_2[A]| \, \|A^{-1}y\|_2 \quad \Rightarrow \quad \frac{\|A^{-1}y\|_2}{\|y\|_2} \leq \frac{1}{|\mu_2[A]|}$$

Take the maximum over y to see that $\|A^{-1}\|_2 \leq 1/|\mu_2[A]|$.
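The bound is easy to test numerically. A sketch with a hypothetical matrix whose Euclidean log norm is negative:

```python
import numpy as np

# Hypothetical matrix with mu_2[A] < 0 (assumption for illustration)
A = np.array([[-4.0, 1.0],
              [ 0.5, -3.0]])

mu2 = max(np.linalg.eigvalsh((A + A.T) / 2))
assert mu2 < 0                      # hypothesis of the theorem

# Conclusion: ||A^{-1}||_2 <= 1/|mu_2[A]|
inv_norm = np.linalg.norm(np.linalg.inv(A), 2)
assert inv_norm <= 1.0 / abs(mu2)
```

For this matrix the two sides are close (about 0.38 each), so the bound is quite sharp.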

Application: the convergence of Explicit Euler

Consider Explicit Euler for $\dot{x} = Ax + f(t)$:
$$x_{n+1} = x_n + hAx_n + hf(t_n)$$
The exact solution satisfies
$$x(t_{n+1}) = x(t_n) + hAx(t_n) + hf(t_n) - h^2 r_n$$

Global error $e_n = x_n - x(t_n)$:
$$e_{n+1} = e_n + hAe_n + h^2 r_n$$
$$\|e_{n+1}\| \leq \|e_n\| + h\|A\| \, \|e_n\| + h^2 \|r_n\|$$

Classical convergence analysis

Recall:

Lemma. If $u_{n+1} \leq (1 + h\mu)u_n + ch^2$ with $u_0 = 0$, then
$$u_n \leq \frac{ch}{\mu}\big[(1 + h\mu)^n - 1\big] \leq ch \, \frac{e^{\mu t_n} - 1}{\mu}$$
if $h\mu > 0$. In case $-1 < h\mu < 0$, we have
$$\max_n u_n \leq \frac{ch}{|\mu|}$$
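The lemma can be verified numerically by iterating the recurrence with equality, in which case $u_n$ matches the geometric-sum bound exactly. A sketch with made-up values of $h$, $\mu$, $c$:

```python
import math

# Hypothetical parameters (assumption): u_{n+1} = (1 + h*mu) u_n + c h^2, u_0 = 0
h, mu, c, N = 0.01, 2.0, 3.0, 100

u = 0.0
for n in range(N):
    u = (1 + h * mu) * u + c * h**2

t_N = N * h
bound = c * h / mu * ((1 + h * mu)**N - 1)          # first bound in the lemma
assert u <= bound + 1e-12                           # recurrence stays below it
# (1 + h*mu)^N <= e^{mu t_N}, so the exponential bound dominates
assert bound <= c * h * (math.exp(mu * t_N) - 1) / mu + 1e-12
```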

Classical convergence analysis...

$$\|e_{n+1}\| \leq (1 + h\|A\|)\|e_n\| + h^2\|r_n\|$$

The lemma applies with $\mu = \|A\|$ and $c = \max_n \|r_n\|$:
$$\|e_n\| \leq h \max_n \|r_n\| \, \frac{e^{\|A\| t_n} - 1}{\|A\|}$$

Convergence, but an exponentially growing bound (hopeless!)

Modern convergence analysis

Explicit Euler for $\dot{x} = Ax + f(t)$:
$$x_{n+1} = x_n + hAx_n + hf(t_n)$$
$$x(t_{n+1}) = x(t_n) + hAx(t_n) + hf(t_n) - h^2 r_n$$

Global error $e_n = x_n - x(t_n)$:
$$e_{n+1} = e_n + hAe_n + h^2 r_n$$
$$\|e_{n+1}\| \leq \|I + hA\| \, \|e_n\| + h^2 \|r_n\|$$

Modern convergence analysis...

Note that, using the log norm
$$\mu[A] = \lim_{h \to 0^+} \frac{\|I + hA\| - 1}{h}$$
we have
$$\|I + hA\| = 1 + h\mu[A] + O(h^2) \quad \text{as } h\|A\| \to 0$$

Modern convergence analysis...

Now the lemma applies with $\mu = \mu[A]$ and $c = \max_n \|r_n\|$:
$$\|e_n\| \leq h \max_n \|r_n\| \, \frac{e^{\mu[A] t_n} - 1}{\mu[A]}; \quad \mu[A] > 0$$
and in case $-1 < h\mu[A] < 0$ we have
$$\max_n \|e_n\| \leq \frac{h \max_n \|r_n\|}{|\mu[A]|}$$

Convergence, with a realistic error bound, since $\mu[A] \leq \|A\|$.

Application: solvability in Implicit Euler

Consider Implicit Euler for $\dot{x} = Ax + f(t)$:
$$x_{n+1} = x_n + hAx_{n+1} + hf(t_{n+1})$$
$$x(t_{n+1}) = x(t_n) + hAx(t_{n+1}) + hf(t_{n+1}) - h^2 r_n$$

When is it possible to solve $(I - hA)x_{n+1} = x_n + hf(t_{n+1})$?

The uniform monotonicity theorem guarantees a unique solution if
$$\mu[hA - I] < 0 \quad \Leftrightarrow \quad \mu[hA] < 1$$

Easily satisfied, even without a bound on $h\|A\|$.
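The point is that the condition $\mu[hA] < 1$ does not restrict the step size for dissipative problems, even when $\|A\|$ is huge. A sketch with a hypothetical stiff diagonal matrix:

```python
import numpy as np

# Hypothetical stiff example (assumption): mu_2[A] < 0 but ||A|| is huge
A = np.array([[-1.0, 0.0],
              [ 0.0, -1e6]])

for h in [1e-3, 1.0, 1e3]:
    # mu_2[hA] = h * mu_2[A] < 0 < 1 for every h > 0
    mu2_hA = max(np.linalg.eigvalsh(h * (A + A.T) / 2))
    assert mu2_hA < 1
    # so I - hA is nonsingular and the Implicit Euler step is solvable
    M = np.eye(2) - h * A
    x = np.linalg.solve(M, np.ones(2))    # no singular-matrix error
```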

Contents (V4.15), Part 2: Interpolation

- Polynomial interpolation
- Lagrange interpolation
- Basis functions
- Numerical integration

1. What is interpolation?

Interpolation is the opposite of discretization.

Problem: Given a discrete grid function (a vector) $F = \{f_j\}_0^N$ defined on a grid $\{x_j\}_0^N$, find a continuous function $f(x)$ with the interpolating property $f(x_j) = f_j$.

Compare digital-to-analog conversion.

Typically the function f is sought among polynomials or among trigonometric functions (Fourier analysis).

Naïve polynomial interpolation

$$P_n(x) = c_0 + c_1 x + c_2 x^2 + \dots + c_n x^n$$

$n + 1$ coefficients, $n + 1$ interpolation conditions $P_n(x_j) = f_j$:

$$\begin{pmatrix} 1 & x_0 & x_0^2 & \dots & x_0^n \\ 1 & x_1 & x_1^2 & \dots & x_1^n \\ \vdots & & & & \vdots \\ 1 & x_n & x_n^2 & \dots & x_n^n \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{pmatrix}$$

The Vandermonde matrix is nonsingular if $x_i \neq x_j$ for $i \neq j$, so the solution is unique. A tedious and often ill-conditioned approach.
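The Vandermonde approach translates directly into code. A minimal NumPy sketch (the data points are made up for illustration):

```python
import numpy as np

# Hypothetical data points (assumption; any distinct x_j work)
x = np.array([0.0, 1.0, 2.0, 3.0])
f = np.array([1.0, 2.0, 0.0, 5.0])

# Vandermonde system V c = f, with columns 1, x, x^2, ..., x^n
V = np.vander(x, increasing=True)
c = np.linalg.solve(V, f)

# The interpolation conditions P_n(x_j) = f_j hold
P = np.polynomial.polynomial.polyval(x, c)
assert np.allclose(P, f)
```

For larger n or clustered nodes the condition number of V grows rapidly, which is exactly the ill-conditioning the slide warns about.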

2. Lagrange interpolation: basis functions

On a grid $\{x_0, x_1, \dots, x_k\}$, construct a degree-$k$ polynomial basis $\{\varphi_i(x)\}_{i=0}^{k}$ such that $\varphi_i(x_j) = \delta_{ij}$ (the Kronecker delta).

Theorem. If the values $f_j = f(x_j)$ are known for the function $f(x)$, then the degree-$k$ polynomial
$$P(x) = \sum_{j=0}^{k} \varphi_j(x) f_j$$
interpolates $f(x)$ on the grid: $P(x_j) = f_j$, with $P(x) \approx f(x)$ for all $x$.

3. Basis functions: 2nd degree Lagrange

$$\varphi_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}$$

[Figure: graph of the quadratic basis polynomial $\varphi_1$, equal to 1 at $x_1$ and 0 at $x_0$ and $x_2$]

Table of Lagrange basis polynomials

x      φ₀   φ₁   φ₂   f     P₂
x₀     1    0    0    f₀    f₀
x₁     0    1    0    f₁    f₁
x₂     0    0    1    f₂    f₂

$$P_2(x) = f_0 \varphi_0(x) + f_1 \varphi_1(x) + f_2 \varphi_2(x)$$
$$P_2(x) = f_0 \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} + f_1 \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} + f_2 \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}$$
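The construction above can be sketched in a few lines of Python (nodes and values here are hypothetical):

```python
def lagrange_basis(xs, i, x):
    """phi_i(x) for nodes xs: product over j != i of (x - x_j)/(x_i - x_j)."""
    phi = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            phi *= (x - xj) / (xs[i] - xj)
    return phi

# Hypothetical nodes and values (assumption for illustration)
xs = [0.0, 1.0, 3.0]
fs = [2.0, -1.0, 4.0]

def P(x):
    # P(x) = sum_j phi_j(x) f_j
    return sum(fs[i] * lagrange_basis(xs, i, x) for i in range(len(xs)))

# The Kronecker property phi_i(x_j) = delta_ij gives P(x_j) = f_j
assert all(abs(P(xj) - fj) < 1e-12 for xj, fj in zip(xs, fs))
```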

4. Numerical integration

Numerical integration is the approximation of definite integrals
$$I(f) = \int_a^b f(x) \, dx$$

Problem: For many functions no primitive function is known; the integral cannot be calculated analytically.

Note: For polynomials an integral $\int_a^b P(x) \, dx$ can always be computed analytically.

Idea: Approximate $f(x) \approx P(x)$ and compute $\int_a^b P(x) \, dx$.

Numerical integration...

Approximate $f \approx P$ and replace the integral by a finite sum: the integrand is sampled at a finite number of points,
$$I(f) = \sum_{i=0}^{n} w_i f(x_i) + R_n$$
Here $R_n = I(f) - \sum_{i=0}^{n} w_i f(x_i)$ is the integration error.

Numerical integration method:
$$I(f) \approx \sum_{i=0}^{n} w_i f(x_i)$$

Numerical integration...

Approximate using the Lagrange 2nd-degree interpolant:
$$\int_a^b f(x)\,dx \approx \int_a^b \sum_{i=0}^{2} f(x_i)\varphi_i(x)\,dx = \sum_{i=0}^{2} f(x_i) \int_a^b \varphi_i(x)\,dx$$

The weights $w_i = \int_a^b \varphi_i(x)\,dx$ can be computed once and for all.

Numerical integration method:
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{2} w_i f(x_i)$$
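A sketch of the weight computation, assuming equidistant nodes on $[a, b]$ (an assumption the slide does not make): integrating each quadratic basis polynomial then recovers the classical Simpson weights $(b-a)/6 \cdot [1, 4, 1]$.

```python
import numpy as np
from numpy.polynomial import polynomial as Pnp

a, b = 0.0, 1.0
xs = np.array([a, (a + b) / 2, b])           # three equidistant nodes (assumption)

# w_i = int_a^b phi_i(x) dx, computed once and for all
w = []
for i in range(3):
    e = np.zeros(3); e[i] = 1.0              # phi_i takes value delta_ij at the nodes
    c = Pnp.polyfit(xs, e, 2)                # coefficients of the quadratic phi_i
    C = Pnp.polyint(c)                       # antiderivative coefficients
    w.append(Pnp.polyval(b, C) - Pnp.polyval(a, C))

# These are Simpson's weights
assert np.allclose(w, [(b - a) / 6, 4 * (b - a) / 6, (b - a) / 6])

# Apply to f(x) = x^3; Simpson's rule is exact for cubics by symmetry
f = lambda x: x**3
I = sum(wi * f(xi) for wi, xi in zip(w, xs))
assert abs(I - 0.25) < 1e-12
```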

Contents (V4.15), Part 3: Nonlinear equations

- Solving nonlinear equations
- Fixed points
- Newton's method
- Application: Newton vs fixed point in Implicit Euler

1. Solving nonlinear equations $f(x) = 0$

We can have a single equation
$$x - \cos x = 0$$
but in general we have systems
$$4x^2 - y^2 = 0$$
$$4xy^2 - x = 1$$

Nonlinear equations may have no solution, one solution, any finite number of solutions, or infinitely many solutions.

Iteration and convergence $f(x) = 0$

Nonlinear equations are solved by iteration, computing a sequence $\{x^{[k]}\}$ of approximations to the root $x^*$.

Definition. The error is defined by $e^{[k]} = x^{[k]} - x^*$.

Definition. The method converges if $\lim_{k \to \infty} \|e^{[k]}\| = 0$.

Definition. The convergence is
- linear if $\|e^{[k+1]}\| \leq c \, \|e^{[k]}\|$ with $0 < c < 1$
- quadratic if $\|e^{[k+1]}\| \leq c \, \|e^{[k]}\|^p$ with $p = 2$
- superlinear if $p > 1$; cubic if $p = 3$, etc.

2. Fixed points

Definition. $x^*$ is called a fixed point of the function g if $x^* = g(x^*)$.

Definition. A function g is called contractive if
$$\|g(x) - g(y)\| \leq L[g] \, \|x - y\|$$
with $L[g] < 1$ for all x, y in the domain of g.

A contraction map reduces the distance between points.

Fixed Point Theorem

Theorem. Assume that g is Lipschitz continuous on the compact interval I. Further:
- If $g : I \to I$, there exists an $x^* \in I$ such that $x^* = g(x^*)$.
- If in addition $L[g] < 1$ on I, then $x^*$ is unique, and $x_{n+1} = g(x_n)$ converges to the fixed point $x^*$ for all $x_0 \in I$.

Note: Both conditions are absolutely essential!

Fixed Point Theorem: existence and uniqueness

[Figure: three plots of g on [0, 1]]
Left: no condition satisfied, no $x^*$.
Center: first condition satisfied, maybe multiple $x^*$.
Right: both conditions satisfied, unique $x^*$.

Error bound in fixed point iteration

By the Lipschitz condition applied to
$$x^{[k+1]} - x^* = g(x^{[k]}) - g(x^*) = g(x^{[k]}) - g(x^{[k+1]}) + g(x^{[k+1]}) - g(x^*)$$
we have
$$\|x^{[k+1]} - x^*\| \leq L \, \|x^{[k]} - x^{[k+1]}\| + L \, \|x^{[k+1]} - x^*\|$$

Theorem. If $L[g] < 1$, then the error in fixed point iteration is bounded by
$$\|x^{[k+1]} - x^*\| \leq \frac{L[g]}{1 - L[g]} \, \|x^{[k]} - x^{[k+1]}\|$$
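The a posteriori bound is computable during the iteration, since it only uses consecutive iterates. A sketch for $x = \cos x$ (choice of equation and starting value are assumptions):

```python
import math

g = math.cos                     # x = cos x has a unique fixed point in [0, 1]
L = math.sin(1.0)                # |g'(x)| = |sin x| <= sin 1 < 1 on [0, 1]

# Reference fixed point, obtained by iterating to machine precision
xstar = 0.5
for _ in range(200):
    xstar = g(xstar)

# Check the a posteriori bound |x_new - x*| <= L/(1-L) |x - x_new| at each step
x = 0.5
for _ in range(20):
    x_new = g(x)
    bound = L / (1 - L) * abs(x - x_new)
    assert abs(x_new - xstar) <= bound + 1e-15
    x = x_new
```

The bound requires no knowledge of $x^*$, which is what makes it useful as a stopping criterion.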

3. Newton's method

Newton's method solves $f(x) = 0$ using repeated linearizations. Linearize at the point $(x^{[k]}, f(x^{[k]}))$.

[Figure: tangent line to f at $(x^{[k]}, f(x^{[k]}))$ crossing the x-axis at $x^{[k+1]}$]

Newton's method...

Straight line equation:
$$y - f(x^{[k]}) = f'(x^{[k]}) \, (x - x^{[k]})$$
Define $x = x^{[k+1]}$ by $y = 0$, so that
$$-f(x^{[k]}) = f'(x^{[k]}) \, (x^{[k+1]} - x^{[k]})$$
Solve for $x = x^{[k+1]}$ to get Newton's method:
$$x^{[k+1]} = x^{[k]} - \frac{f(x^{[k]})}{f'(x^{[k]})}$$
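A minimal sketch for the scalar equation $x - \cos x = 0$ from the earlier slide (the starting value is an assumption):

```python
import math

f = lambda x: x - math.cos(x)          # the single equation from the slides
fp = lambda x: 1 + math.sin(x)         # f'(x)

x = 1.0                                # hypothetical starting guess
for _ in range(10):
    x = x - f(x) / fp(x)               # x_{k+1} = x_k - f(x_k)/f'(x_k)

assert abs(f(x)) < 1e-14               # root x* ~ 0.7390851 after a few steps
```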

Newton's method... Systems of equations

Expand $f(x^{[k+1]})$ in a Taylor series around $x^{[k]}$:
$$f(x^{[k+1]}) = f\big(x^{[k]} + (x^{[k+1]} - x^{[k]})\big) \approx f(x^{[k]}) + f'(x^{[k]}) \, (x^{[k+1]} - x^{[k]}) := 0$$
$$x^{[k+1]} = x^{[k]} - \big(f'(x^{[k]})\big)^{-1} f(x^{[k]})$$

Definition. $f'(x^{[k]})$ is the Jacobian matrix of f, defined by
$$f'(x) = \Big\{ \frac{\partial f_i}{\partial x_j} \Big\}$$
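A sketch of Newton's method on the example system from the earlier slide, $4x^2 - y^2 = 0$, $4xy^2 - x = 1$ (the starting guess is an assumption); in practice one solves the linear system $J s = F$ rather than forming the inverse Jacobian:

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([4 * x**2 - y**2,
                     4 * x * y**2 - x - 1])

def J(v):                                # Jacobian {df_i/dx_j}
    x, y = v
    return np.array([[8 * x,        -2 * y],
                     [4 * y**2 - 1,  8 * x * y]])

v = np.array([0.5, 1.0])                 # hypothetical starting guess
for _ in range(20):
    v = v - np.linalg.solve(J(v), F(v))  # solve J s = F instead of inverting J

assert np.linalg.norm(F(v)) < 1e-12
```

From this starting guess the iteration converges to the root near $(0.45, 0.90)$ on the branch $y = 2x$.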

Newton's method... Convergence

Write Newton's method as a fixed point iteration $x^{[k+1]} = g(x^{[k]})$ with iteration function
$$g(x) := x - f(x)/f'(x)$$

Note: Newton's method converges fast if $f'(x^*) \neq 0$, because
$$g'(x^*) = f(x^*) f''(x^*) / f'(x^*)^2 = 0$$

Expand $g(x)$ in a Taylor series around $x^*$:
$$g(x^{[k]}) - g(x^*) \approx g'(x^*)(x^{[k]} - x^*) + \frac{g''(x^*)}{2}(x^{[k]} - x^*)^2$$
$$x^{[k+1]} - x^* \approx \frac{g''(x^*)}{2}(x^{[k]} - x^*)^2$$

Newton's method... Convergence

Define the error by $\varepsilon^{[k]} = x^{[k]} - x^*$; then
$$\varepsilon^{[k+1]} \approx \frac{g''(x^*)}{2} \big(\varepsilon^{[k]}\big)^2$$
Newton's method is quadratically convergent.

Fixed point iterations are typically only linearly convergent:
$$\varepsilon^{[k+1]} \approx g'(x^*) \, \varepsilon^{[k]}$$

A problem with Newton's method is that starting values need to be close enough to the root.

Convergence order and rate

Definition. The convergence order is p, with (asymptotic) error constant $C_p$, if
$$0 < \lim_{k \to \infty} \frac{\|\varepsilon^{[k+1]}\|}{\|\varepsilon^{[k]}\|^p} = C_p < \infty$$

Special cases:
- $p = 1$: linear convergence; fixed point iteration, $C_1 = |g'(x^*)|$
- $p = 2$: quadratic convergence; Newton iteration, $C_2 = \Big|\dfrac{f''(x^*)}{2 f'(x^*)}\Big|$

Application: Newton vs fixed point in Implicit Euler

As $y_{n+1} = y_n + h f(y_{n+1})$, we need to solve an equation of the form
$$y = h f(y) + \psi$$

Note: All implicit methods lead to an equation of this form.

Theorem. Fixed point iterations converge if $L[hf] < 1$, restricting the step size to $h < 1/L[f]$.

Note: For stiff equations $L[hf] \gg 1$, so fixed point iterations will not converge; it is necessary to use Newton's method.
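The contrast can be sketched on the stiff linear test equation $\dot{y} = \lambda y$ (the values of $\lambda$, $h$, $\psi$ are assumptions): the Implicit Euler equation $y = h\lambda y + \psi$ has $L[hf] = |h\lambda| = 10$, so fixed point iteration diverges, while Newton, the equation being linear, solves it in one step.

```python
lam, h, psi = -100.0, 0.1, 1.0     # hypothetical stiff parameters
exact = psi / (1 - h * lam)        # the Implicit Euler update in closed form

# Fixed point iteration y <- h*lam*y + psi: L[hf] = |h*lam| = 10 >> 1
y = psi
for _ in range(5):
    y = h * lam * y + psi
assert abs(y - exact) > 1.0        # the iterates blow up

# Newton on F(y) = y - h*lam*y - psi: F'(y) = 1 - h*lam, one exact step
y = psi
y = y - (y - h * lam * y - psi) / (1 - h * lam)
assert abs(y - exact) < 1e-14
```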