Lecture 10: Finite Differences for ODEs & Nonlinear Equations


J.K. Ryan@tudelft.nl
WI3097TU, Delft Institute of Applied Mathematics, Delft University of Technology
21 November 2012

Outline

1. Review
   - Neumann BC
   - Convection-diffusion equation
   - Stability of finite differences
2. Consistency, Stability, Convergence
3. Nonlinear equations
   - Bisection method
   - Fixed point iteration

Neumann BC

$$-y'' + q(x)y = f(x), \qquad x_L = 0 < x < x_R = 1,$$
$$y(0) = 0 \quad \text{(Dirichlet)}, \qquad \frac{dy}{dx}(1) = 0 \quad \text{(Neumann)}.$$

Similar to the previous case, we have:

Step 1: Subdivide the domain, $h = \frac{x_R - x_L}{N+1} = \frac{1}{N+1}$.
Step 2: Define the nodal points, $x_j = x_L + jh = jh$, $j = 0, \ldots, N+1$.
Step 3: Discretize the equation: $y''_j \approx \frac{w_{j+1} - 2w_j + w_{j-1}}{h^2}$.

Neumann BC

Changing the right boundary condition from Dirichlet ($y(x_R) = 0$) to Neumann ($\frac{dy}{dx}(x_R) = 0$) changes our approximation through:
- adding a "ghost" point,
- approximating the right boundary condition using central differences.

Vectors increase in length to $N+1$ and matrices increase in size to $(N+1) \times (N+1)$.
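To make the ghost-point construction concrete, here is a minimal assembly sketch in Python/NumPy for the model problem above ($-y'' + q(x)y = f(x)$, $y(0) = 0$, $y'(1) = 0$). The function name and the sample data $q$, $f$ at the end are illustrative assumptions, not the lecture's code.

```python
import numpy as np

def assemble_neumann_bvp(N, q, f):
    """Assemble A w = rhs for -y'' + q(x) y = f(x), y(0) = 0, y'(1) = 0.

    Unknowns are w_1, ..., w_{N+1} (the Neumann end x = 1 is now unknown),
    so A is (N+1) x (N+1).  Sketch only.
    """
    h = 1.0 / (N + 1)
    x = np.linspace(0.0, 1.0, N + 2)   # nodes x_0, ..., x_{N+1}
    xi = x[1:]                         # nodes that carry unknowns
    A = np.zeros((N + 1, N + 1))
    rhs = f(xi)

    for i in range(N + 1):
        A[i, i] = 2.0 / h**2 + q(xi[i])
        if i > 0:
            A[i, i - 1] = -1.0 / h**2
        if i < N:
            A[i, i + 1] = -1.0 / h**2

    # Ghost point x_{N+2}: the central difference for y'(1) = 0 gives
    # w_{N+2} = w_N, so the last row becomes (-2 w_N + 2 w_{N+1})/h^2 + q w_{N+1} = f.
    A[N, N - 1] = -2.0 / h**2
    return A, rhs

# Illustrative data (an assumption): q(x) = 1, f(x) = 1.
A, rhs = assemble_neumann_bvp(50, lambda x: np.ones_like(x), lambda x: np.ones_like(x))
w = np.linalg.solve(A, rhs)
```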

Convection-diffusion equation

$$-y'' + \nu y' = 0, \qquad x_L \le x \le x_R, \qquad y(x_L) = 0, \quad y(x_R) = \alpha.$$

We need to approximate both the first and the second derivative.

Convection-diffusion equation

Using central differences for both derivatives, define
$$a_1 = -\left(1 + \frac{\nu h}{2}\right) \quad \text{and} \quad b_1 = -\left(1 - \frac{\nu h}{2}\right).$$
Then
$$A = \frac{1}{h^2}\begin{pmatrix} 2 & b_1 & 0 & \cdots & 0 \\ a_1 & 2 & b_1 & & \\ 0 & a_1 & 2 & \ddots & \\ & & \ddots & \ddots & b_1 \\ 0 & & & a_1 & 2 \end{pmatrix}.$$
$A$ is no longer symmetric.
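A quick numerical sanity check (a sketch, assuming the sign convention above for $a_1$ and $b_1$) that the convection term destroys symmetry:

```python
import numpy as np

def convection_diffusion_matrix(N, nu, h):
    """Tridiagonal matrix A = (1/h^2) tridiag(a1, 2, b1) from central differences."""
    a1 = -(1.0 + nu * h / 2.0)   # subdiagonal entry (sign convention assumed above)
    b1 = -(1.0 - nu * h / 2.0)   # superdiagonal entry
    A = (2.0 * np.eye(N)
         + a1 * np.eye(N, k=-1)
         + b1 * np.eye(N, k=1)) / h**2
    return A

A = convection_diffusion_matrix(N=5, nu=1.0, h=0.1)
print(np.allclose(A, A.T))   # False: A is not symmetric once nu != 0
```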

Condition number

Definition (Condition number): The condition number of a matrix $A$ is given by $\kappa(A) = \|A\|\,\|A^{-1}\|$.

This is useful in determining the stability of our system of equations.

Condition number

For symmetric matrices the condition number is
$$\kappa(A) = \|A\|\,\|A^{-1}\| = \frac{|\lambda|_{\max}}{|\lambda|_{\min}}.$$
For stability,
$$\frac{\|\Delta w\|}{\|w\|} \le \kappa(A)\,\frac{\|\Delta f\|}{\|f\|}.$$
Small $\kappa(A)$ $\Rightarrow$ the effect of perturbations is small.
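A small illustration (with a made-up symmetric matrix, not one from the lecture) that the relative perturbation of the solution is bounded by $\kappa(A)$ times the relative perturbation of the data:

```python
import numpy as np

# Hypothetical symmetric positive definite matrix and right-hand side.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
f = np.array([1.0, 2.0, 3.0])
df = 1e-6 * np.array([1.0, -1.0, 1.0])      # small perturbation of the data

w = np.linalg.solve(A, f)
dw = np.linalg.solve(A, f + df) - w

kappa = np.linalg.cond(A, 2)                 # = lambda_max / lambda_min for SPD A
lhs = np.linalg.norm(dw) / np.linalg.norm(w)
rhs = kappa * np.linalg.norm(df) / np.linalg.norm(f)
assert lhs <= rhs                            # the perturbation bound holds
```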

Gerschgorin's Theorem

Theorem (Gerschgorin): The eigenvalues of the matrix $A$ lie in the union of the discs
$$|z - a_{ii}| \le \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}| = R_i,$$
where $z$ is a complex number, the $a_{ii}$ are the diagonal entries and the $a_{ij}$, $j \ne i$, are the off-diagonal entries. Each set $|z - a_{ii}| \le R_i$ is a closed disc centered at $a_{ii}$ with radius $R_i$.
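A short sketch of how the Gerschgorin discs can be computed; for a symmetric matrix they reduce to real intervals (the 3x3 example matrix is illustrative):

```python
import numpy as np

def gershgorin_discs(A):
    """Return (centers, radii) of the Gerschgorin discs of a real matrix A."""
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # off-diagonal row sums
    return centers, radii

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
c, r = gershgorin_discs(A)
lam = np.linalg.eigvalsh(A)
print(c, r)      # centers 2, radii 1 or 2
print(lam)       # all eigenvalues lie in [min(c - r), max(c + r)] = [0, 4]
```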

Consistency

Recall that for consistency the local truncation error should go to zero as the step size gets smaller:
$$\lim_{h \to 0} \|\tau\| = 0.$$
But now $\tau$ is a vector, so we need to look at the individual entries:
$$\tau_{j+1} = \text{Exact ODE} - \text{Approximating ODE}.$$

Consistency

Consider the equation $-y'' + qy = f$. At the grid points this is
$$-\frac{d^2 y}{dx^2}\Big|_{x = x_j} + q_j y_j = f_j.$$
The discretization of this equation is
$$-\frac{1}{h^2}\left(y_{j+1} - 2y_j + y_{j-1}\right) + q_j y_j = f_j.$$

Consistency

The componentwise truncation error is then
$$\tau_j = \left(-\frac{d^2 y}{dx^2}\Big|_{x = x_j} + q_j y_j\right) - \left(-\frac{1}{h^2}(y_{j+1} - 2y_j + y_{j-1}) + q_j y_j\right) = (-y''_j) - \left(-\frac{1}{h^2}(y_{j+1} - 2y_j + y_{j-1})\right),$$
where the last bracket is the approximation to the second derivative. Recall that
$$\frac{d^2 y}{dx^2} - \frac{1}{h^2}(y_{j+1} - 2y_j + y_{j-1}) = O(h^2),$$
so that $\|\tau\| = O(h^2)$.

Consistency

The local truncation error is second order. This is because of how we chose to approximate $y''$. So we have consistency:
$$\lim_{h \to 0} \|\tau\| = \lim_{h \to 0} O(h^2) = 0.$$

Stability

Definition (Stability for finite differences): A finite difference scheme is stable if there exists a constant $M$, independent of $h$, such that
$$\|A^{-1}\| \le M \quad \text{as } h \to 0.$$
In particular, $A$ is then invertible, so the system has a unique solution.

Stability

For the equation $-y''_j + q_j y_j = f_j$, the approximating system is given by
$$Aw = f, \qquad A = \frac{1}{h^2}K + M,$$
where $w = (w_1, w_2, \ldots, w_{N-1}, w_N)^T$ and $f = (f_1, f_2, \ldots, f_{N-1}, f_N)^T$.

Stability

Here $A = \frac{1}{h^2}K + M$ with
$$M = \operatorname{diag}(q_1, q_2, \ldots, q_N)$$
and $K$ the symmetric tridiagonal matrix
$$K = \begin{pmatrix} 2 & -1 & 0 & & 0 \\ -1 & 2 & -1 & & \\ 0 & -1 & 2 & \ddots & \\ & & \ddots & \ddots & -1 \\ 0 & & & -1 & 2 \end{pmatrix},$$
which means that $A$ is symmetric.

Stability

Since $A$ is symmetric, $A$ has real eigenvalues. It also means that we have an exact expression for the matrix norms:
$$\|A\|_2 = |\lambda|_{\max}, \qquad \|A^{-1}\|_2 = \frac{1}{|\lambda|_{\min}}.$$
But what are the eigenvalues?

Stability

Case 1: $q = 0$ for all $x$. Then $A = \frac{1}{h^2}K$ with
$$K = \begin{pmatrix} 2 & -1 & 0 & & 0 \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ 0 & & -1 & 2 & -1 \\ 0 & & & -1 & 2 \end{pmatrix}.$$

Stability

$A$ is a symmetric matrix with eigenvalues
$$\lambda_j = \frac{1}{h^2}\bigl(2 - 2\cos(jh\pi)\bigr), \qquad j = 1, \ldots, N.$$
This means that
$$\lambda_{\min} = \frac{1}{h^2}\bigl(2 - 2\cos(h\pi)\bigr) = \frac{4}{h^2}\sin^2\!\left(\frac{h\pi}{2}\right) = \pi^2\,\underbrace{\frac{\sin^2(h\pi/2)}{(h\pi/2)^2}}_{\to\, 1} \quad\Rightarrow\quad \lambda_{\min} \approx \pi^2.$$

Stability

For the maximum eigenvalue of $A$ we look at this in another way:
$$\lambda_{\max} = \frac{1}{h^2}\bigl(2 - 2\cos(Nh\pi)\bigr) = \frac{4}{h^2}\,\underbrace{\sin^2\!\left(\frac{Nh\pi}{2}\right)}_{\le\, 1} \quad\Rightarrow\quad \lambda_{\max} \le \frac{4}{h^2}.$$

Stability

Thus the scheme is stable for $q = 0$, and the condition number is
$$\kappa(A) = \|A\|\,\|A^{-1}\| \approx \frac{4}{h^2}\cdot\frac{1}{\pi^2} = \left(\frac{2}{h\pi}\right)^2.$$
Notice that the scheme is stable because $\|A^{-1}\| \approx \frac{1}{\pi^2}$ is bounded, even though the condition number depends on $h$.
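These bounds are easy to check numerically; a sketch (with an illustrative value of $N$) comparing the computed eigenvalues of $A = K/h^2$ with $\pi^2$, $4/h^2$ and $(2/(h\pi))^2$:

```python
import numpy as np

N = 100
h = 1.0 / (N + 1)
K = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A = K / h**2

lam = np.linalg.eigvalsh(A)
print(lam.min(), np.pi**2)                           # lambda_min is close to pi^2
print(lam.max(), 4.0 / h**2)                         # lambda_max stays below 4/h^2
print(np.linalg.cond(A), (2.0 / (h * np.pi))**2)     # kappa(A) is roughly (2/(h*pi))^2
```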

Stability

Case 2: $q(x) \ne 0$. We can estimate the eigenvalues using Gerschgorin's theorem. Assume $0 < q_{\min} \le q(x) \le q_{\max}$. The diagonal entries of $A$ are $\frac{2}{h^2} + q_i$ and the off-diagonal row sums are at most $\frac{2}{h^2}$, so
$$0 < q_{\min} \le \lambda_j \le q_{\max} + \frac{4}{h^2}, \qquad j = 1, \ldots, N,$$
because $A$ is symmetric (its eigenvalues are real). Hence $\|A^{-1}\| \le \frac{1}{q_{\min}}$ $\Rightarrow$ stable scheme.

Stability

What happens with small perturbations? From
$$A(w + \Delta w) = f + \Delta f$$
we get
$$\Delta w = A^{-1}\Delta f \quad\Rightarrow\quad \|\Delta w\| \le \frac{1}{\lambda_{\min}}\|\Delta f\|.$$

Stability

The relative error satisfies
$$\frac{\|\Delta w\|}{\|w\|} \le \frac{1}{\lambda_{\min}}\,\frac{\|\Delta f\|}{\|w\|}.$$
Using $\|f\|$,
$$\frac{\|\Delta w\|}{\|w\|} \le \underbrace{\frac{1}{\lambda_{\min}}\,\frac{\|f\|}{\|w\|}}_{\text{effective condition number}}\,\frac{\|\Delta f\|}{\|f\|},$$
where $\lambda_{\min} \approx \pi^2$ and $\frac{\|f\|}{\|w\|}$ is usually bounded.

Convergence

For convergence we need
$$\lim_{h \to 0}\|y - w\| = 0.$$
Is this the case? Since $Ay = f + \tau$ and $Aw = f$,
$$A(y - w) = \tau \quad\Rightarrow\quad \|y - w\| \le \|A^{-1}\|\,\|\tau\| \approx \frac{1}{\pi^2}\|\tau\|.$$
But $\|\tau\| = O(h^2)$. Therefore $\|y - w\| = O(h^2)$.
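The $O(h^2)$ convergence can also be observed numerically. A sketch using the test problem $-y'' = \pi^2\sin(\pi x)$, $y(0) = y(1) = 0$ (an assumed example with exact solution $y = \sin(\pi x)$, not one from the lecture):

```python
import numpy as np

def max_error(N):
    """Solve -y'' = pi^2 sin(pi x), y(0) = y(1) = 0, and return ||y - w||_inf."""
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1.0 - h, N)                   # interior nodes x_1, ..., x_N
    K = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    w = np.linalg.solve(K / h**2, np.pi**2 * np.sin(np.pi * x))
    return np.max(np.abs(w - np.sin(np.pi * x)))

for N in (9, 19, 39, 79):                            # h = 1/10, 1/20, 1/40, 1/80
    print(N, max_error(N))                           # the error drops by about 4 per halving of h
```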

Consistency + Stability = Convergence

Consistency: $\tau_{j+1} = \text{Exact ODE} - \text{Approximating ODE}$.
Stability: we also need $\|A^{-1}\| \le M$ as $h \to 0$, and $\kappa(A) = \|A\|\,\|A^{-1}\|$ should not be too large.
Convergence: $\|y - w\| \le M\,\|\tau\|$.

Nonlinear Equations

Given a nonlinear equation $f(x) = 0$, we want to determine for which $x$ this equation is satisfied.

Methods:
- Bisection method
- Fixed point iteration
- Newton-Raphson method

Bisection method

This idea is based upon the intermediate value theorem.

Theorem (Intermediate value theorem): Assume $f \in C[a,b]$. Let $f(a) \ne f(b)$ and let $F$ be a number between $f(a)$ and $f(b)$. Then there exists a number $c \in (a,b)$ such that $f(c) = F$.

Bisection method

Given $f(x) \in C[a,b]$ such that $f(a)f(b) < 0$, we know that $f(x)$ changes sign on $[a,b]$, so $f(x) = 0$ has a root in the interval $[a,b]$.

Bisection method

Idea: repeatedly halve the interval, keeping the half where the change of sign occurs.

Bisection method: Algorithm

Let $x = \alpha$ be the root of $f(x) = 0$ and let $\epsilon > 0$ be the error tolerance.

1. Let $a_0 = a$ and $b_0 = b$.
2. Define $c_j = \frac{1}{2}(a_j + b_j)$.
3. If $b_j - c_j < \epsilon$, STOP: $\alpha \approx c_j$.
4. Else, if $f(b_j)f(c_j) \le 0$ then $a_{j+1} = c_j$, $b_{j+1} = b_j$; otherwise $a_{j+1} = a_j$, $b_{j+1} = c_j$.
5. Set $j = j + 1$.
6. Return to step 2.

Notice: in each loop the interval is halved. A short implementation sketch is given below.
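A minimal Python sketch of the algorithm above; the test function $f(x) = x^3 + 3x - 4$ and the interval $[0, 2]$ are illustrative choices (the same $f$ reappears in the fixed point example later):

```python
def bisection(f, a, b, eps):
    """Bisection following the algorithm above; assumes f(a) * f(b) < 0 (sketch only)."""
    if f(a) * f(b) >= 0:
        raise ValueError("f must change sign on [a, b]")
    while True:
        c = 0.5 * (a + b)
        if b - c < eps:
            return c                  # |alpha - c| <= b - c < eps
        if f(b) * f(c) <= 0:
            a = c                     # the sign change is in [c, b]
        else:
            b = c                     # the sign change is in [a, c]

# Example: f(x) = x^3 + 3x - 4 has the root x = 1 in [0, 2].
print(bisection(lambda x: x**3 + 3*x - 4, 0.0, 2.0, 1e-8))
```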

Bisection method: Properties

- Easy to implement.
- Always converges to a solution (given a sign change on $[a,b]$).
- Convergence is slow.
- Often used to generate a good starting point for other methods.

Bisection method: Error bounds

Let $a_n$, $b_n$, $c_n$ be the $n$th computed values. Then
$$b_{n+1} - a_{n+1} = \frac{1}{2}(b_n - a_n) = \frac{1}{4}(b_{n-1} - a_{n-1}) = \cdots = \frac{1}{2^{n+1}}(b - a),$$
that is, $b_n - a_n = \frac{1}{2^n}(b - a)$ for $n \ge 0$.

Bisection method: Error bounds

What is the error? The root $\alpha$ is either in $[a_n, c_n]$ or in $[c_n, b_n]$, so
$$|\alpha - c_n| \le c_n - a_n = b_n - c_n = \frac{1}{2}(b_n - a_n) \le \frac{1}{2^n}(b - a),$$
hence
$$|\alpha - c_n| \le \frac{1}{2^n}(b - a).$$

Bisection method: Error bounds

Suppose we want $|\alpha - c_n| < \epsilon$. How many iterations do we need? We require
$$\frac{1}{2^n}(b - a) < \epsilon \quad\Leftrightarrow\quad \frac{1}{2^n} < \frac{\epsilon}{b - a} \quad\Leftrightarrow\quad 2^n > \frac{b - a}{\epsilon}.$$

Bisection method: Error bounds

$$\ln(2^n) = n\ln 2 > \ln\!\left(\frac{b - a}{\epsilon}\right) = \ln(b - a) - \ln\epsilon
\quad\Rightarrow\quad n > \frac{\ln(b - a) - \ln\epsilon}{\ln 2}$$
iterations are needed to converge to error $\epsilon$.

Note: we need to make sure $\epsilon$ is larger than the rounding error.
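As an illustration (numbers chosen here, not taken from the lecture): for $[a,b] = [0,2]$ and $\epsilon = 10^{-8}$ the bound gives $n > \frac{\ln 2 - \ln 10^{-8}}{\ln 2} \approx 27.6$, so $n = 28$ bisection steps are guaranteed to reach the tolerance.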

Fixed Point Iteration

Definition: A fixed point of a given function $g(x)$ is a number $\alpha$ such that $g(\alpha) = \alpha$.

Goal: find the solution $\alpha$.

Lemma: Let $g(x) \in C[a,b]$ be such that $a \le g(x) \le b$ for all $a \le x \le b$. Then $x = g(x)$ has at least one solution $\alpha \in [a,b]$.

Fixed Point Iteration

Theorem (Contraction mapping theorem): Assume $g(x) \in C^1[a,b]$ with $a \le g(x) \le b$ for all $a \le x \le b$. Further assume that
$$\max_{x \in [a,b]} |g'(x)| \le \lambda < 1.$$
Then:
1. There exists a unique solution $\alpha$ of $x = g(x)$ in $[a,b]$.
2. For any initial estimate $x_0 \in [a,b]$, the iterates $x_{n+1} = g(x_n)$ satisfy $x_n \to \alpha$.
3. $|\alpha - x_n| \le \frac{\lambda^n}{1 - \lambda}\,|x_1 - x_0|$, $n \ge 0$.
4. $\lim_{n \to \infty} \frac{\alpha - x_{n+1}}{\alpha - x_n} = g'(\alpha)$.

Fixed Point Iteration

$|g'(\alpha)|$ determines the convergence rate:
- If $|g'(\alpha)| \ll 1$, we have fast convergence.
- If $|g'(\alpha)| \approx 1$ (say $1 - \epsilon$), we have slow convergence.
- If $|g'(\alpha)| > 1$, we do not have convergence.

Fixed Point Iteration: Algorithm

1. Choose a starting value $x_0$.
2. Compute $x_n = g(x_{n-1})$.
3. If the iterates have converged (e.g. $|x_n - x_{n-1}|$ is below a tolerance), STOP: $\alpha \approx x_n$.
4. Else, increase $n$ and go to step 2.

Fixed Point Iteration: Example

Let $f(x) = x^3 + 3x - 4$. Find $g(x)$.

Root finding asks for $f(x) = 0$. To use fixed point iteration, we must rewrite this in the form $x = g(x)$.

Fixed Point Iteration

Therefore
$$f(x) = 0 \;\Leftrightarrow\; x^3 + 3x - 4 = 0 \;\Leftrightarrow\; x(x^2 + 3) = 4 \;\Leftrightarrow\; x = \frac{4}{x^2 + 3},$$
so
$$g(x) = \frac{4}{x^2 + 3}.$$
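A small sketch of the iteration applied to this $g$; the starting value $x_0 = 2$ and the tolerance are illustrative assumptions. Note that $|g'(1)| = 1/2 < 1$, so the contraction mapping theorem applies near the fixed point $\alpha = 1$.

```python
def fixed_point(g, x0, eps, max_iter=100):
    """Fixed point iteration x_n = g(x_{n-1}) with a simple stopping test (sketch)."""
    x_prev = x0
    for n in range(1, max_iter + 1):
        x = g(x_prev)
        if abs(x - x_prev) < eps:
            return x, n
        x_prev = x
    raise RuntimeError("no convergence within max_iter iterations")

# g(x) = 4 / (x^2 + 3); its fixed point x = 1 is the root of x^3 + 3x - 4 = 0.
alpha, n = fixed_point(lambda x: 4.0 / (x**2 + 3.0), x0=2.0, eps=1e-10)
print(alpha, n)
```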

Fixed Point Iteration

Theorem: Under the assumptions above, the fixed point iteration converges.

Proof. Recall that $g(x) \in C^1[a,b]$, $a \le g(x) \le b$, and $\max_{x \in [a,b]} |g'(x)| \le \lambda < 1$. By the mean value theorem there is a $\xi$ between $x_{n-1}$ and $\alpha$ such that
$$|x_n - \alpha| = |g(x_{n-1}) - g(\alpha)| = |g'(\xi)|\,|x_{n-1} - \alpha| \le \lambda\,|x_{n-1} - \alpha| \le \cdots \le \lambda^n\,|x_0 - \alpha|,$$
so
$$\lim_{n \to \infty} |x_n - \alpha| \le \lim_{n \to \infty} \lambda^n\,|x_0 - \alpha| = 0. \qquad \square$$

Material addressed

1. Review
   - Neumann BC
   - Convection-diffusion equation
   - Stability of finite differences
2. Consistency, Stability, Convergence
3. Nonlinear equations
   - Bisection method
   - Fixed point iteration

Material in book: Chapter 7, Sections 1-6 and 8; Chapter 4, Sections 1-3.
Useful exercises: Chapter 7: 1-4; Chapter 4: 1-3.