STABILITY FOR PARABOLIC SOLVERS


School of Mathematics, Semester 1, 2008

OUTLINE

1. Review
2. Stability: Explicit Method
   - Explicit Method as a Matrix Equation
   - Growing Errors
   - Stability Constraint
3. Stability: Crank-Nicolson
4. Summary

REVIEW: FIRST AND SECOND ORDER METHODS FOR PARABOLIC PDES

- The explicit method is first order and has a stability constraint.
- The implicit method is first order and unconditionally stable, but it requires a direct solver.
- The Crank-Nicolson scheme is the average of the two schemes above. It is second order and unconditionally stable.
- Multidimensional schemes require careful consideration. An ADI scheme may be preferred to Crank-Nicolson since it only requires a 1D direct solver.

EXPLICIT METHOD AS A MATRIX EQUATION

Consider the first order explicit scheme, which can be written as

$$w_j^{k+1} = \beta w_{j-1}^k + (1 - 2\beta) w_j^k + \beta w_{j+1}^k, \qquad 1 \le j \le n-1,$$

with $w_0$, $w_n$ given. We can write the above in matrix form as $\mathbf{w}^{k+1} = A \mathbf{w}^k$, where

$$A = \begin{pmatrix}
1-2\beta & \beta    &        &          &          \\
\beta    & 1-2\beta & \beta  &          &          \\
         & \ddots   & \ddots & \ddots   &          \\
         &          & \beta  & 1-2\beta & \beta    \\
         &          &        & \beta    & 1-2\beta
\end{pmatrix}.$$
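As a quick illustration, the matrix form above can be assembled and applied directly. This is a minimal sketch: the function name, the small sizes, and the homogeneous-boundary assumption (so the boundary terms drop out) are mine, not from the slides.

```python
import numpy as np

def explicit_matrix(n, beta):
    """Iteration matrix of the explicit scheme on the n-1 interior points,
    assuming homogeneous boundary values w_0 = w_n = 0."""
    m = n - 1
    return ((1 - 2 * beta) * np.eye(m)
            + beta * np.eye(m, k=1)
            + beta * np.eye(m, k=-1))

A = explicit_matrix(5, 0.4)   # illustrative small example: n = 5, beta = 0.4
w = np.ones(4)                # interior values w_1, ..., w_{n-1}
w_next = A @ w                # one explicit time step: w^{k+1} = A w^k
```

Each interior row of $A$ sums to $\beta + (1-2\beta) + \beta = 1$, which is why a constant interior profile is unchanged away from the boundaries.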

SOME USEFUL RESULTS FOR MATRIX NORMS

1. $\|cA\| = |c| \, \|A\|$ for a scalar $c$.
2. $\|AB\| \le \|A\| \, \|B\|$.
3. Gerschgorin's first theorem: the largest of the moduli of the eigenvalues of the square matrix $A$ cannot exceed the largest sum of the moduli of the elements along any row or column, i.e. $\rho(A) \le \|A\|_1$ or $\|A\|_\infty$.
4. Gerschgorin's circle theorem and the norm of the matrix $A$: if the eigenvalues $\lambda_s$ are estimated by the circle theorem, then the condition $|\lambda_s| \le 1$ is equivalent to $\|A\|_1 \le 1$ or $\|A\|_\infty \le 1$.
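The bound $\rho(A) \le \|A\|_\infty$ in result 3 is easy to check numerically. A sketch, with an illustrative matrix size and $\beta$ of my choosing:

```python
import numpy as np

# Check rho(A) <= ||A||_inf on the explicit-scheme matrix (illustrative sizes).
beta, m = 0.4, 6
A = (1 - 2 * beta) * np.eye(m) + beta * np.eye(m, k=1) + beta * np.eye(m, k=-1)

rho = np.max(np.abs(np.linalg.eigvals(A)))     # spectral radius
norm_inf = np.max(np.sum(np.abs(A), axis=1))   # largest row sum of moduli
print(rho, norm_inf)                           # rho should not exceed norm_inf
```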

THE GROWTH OF ERRORS - EIGENVALUES

Now we have $\mathbf{w}^{k+1} = A \mathbf{w}^k$. Recall that if we begin with some error $\mathbf{e}^0$, the error at the $k$th step can be written $\mathbf{e}^k = A^k \mathbf{e}^0$. Now express the initial error $\mathbf{e}^0$ as a linear combination of the eigenvectors of $A$:

$$\mathbf{e}^0 = \sum_{s=1}^{n-1} c_s \mathbf{v}_s.$$

Then we may write the error at the $k$th step as

$$\mathbf{e}^k = \sum_{s=1}^{n-1} c_s \lambda_s^k \mathbf{v}_s,$$

where the $\lambda_s$ are the eigenvalues of the matrix $A$. This shows that errors will not increase exponentially if

$$\max_s |\lambda_s| \le 1, \qquad s = 1, 2, \dots, n-1,$$

which is equivalent to the condition $\|A\| \le 1$.

STABILITY CONSTRAINT

Stability requires $\|A\|_\infty \le 1$ for the explicit method. For an $n \times n$ matrix $A$, the infinity norm is defined as the largest row sum of moduli,

$$\|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{i,j}|.$$

For the explicit method we have

$$\|A\|_\infty = |\beta| + |1 - 2\beta| + |\beta| = 2\beta + |1 - 2\beta|.$$

The scheme is therefore stable when $2\beta + |1 - 2\beta| \le 1$.

We can evaluate the inequality $2\beta + |1 - 2\beta| \le 1$: for $\beta \le \frac{1}{2}$ we have $|1 - 2\beta| = 1 - 2\beta$, so the left-hand side equals exactly 1, while for $\beta > \frac{1}{2}$ it equals $4\beta - 1 > 1$. Hence $\|A\|_\infty \le 1$ exactly when

$$\beta \le \frac{1}{2}.$$

The size of the timestep must therefore satisfy

$$\Delta t \le \frac{\Delta x^2}{2\kappa}.$$

STABILITY: CRANK-NICOLSON

The Crank-Nicolson scheme may be written as

$$-\beta w_{j-1}^{k+1} + (2 + 2\beta) w_j^{k+1} - \beta w_{j+1}^{k+1} = \beta w_{j-1}^k + (2 - 2\beta) w_j^k + \beta w_{j+1}^k$$

for $j = 1, 2, \dots, n-1$. We can express this in matrix form as

$$B \mathbf{w}^{k+1} = A \mathbf{w}^k,$$

so that the iteration matrix is defined from $\mathbf{w}^{k+1} = B^{-1} A \mathbf{w}^k$.

THE MATRICES A AND B

$$A = \begin{pmatrix}
2-2\beta & \beta    &        &          \\
\beta    & 2-2\beta & \beta  &          \\
         & \ddots   & \ddots & \ddots   \\
         &          & \beta  & 2-2\beta
\end{pmatrix},
\qquad
B = \begin{pmatrix}
2+2\beta & -\beta   &        &          \\
-\beta   & 2+2\beta & -\beta &          \\
         & \ddots   & \ddots & \ddots   \\
         &          & -\beta & 2+2\beta
\end{pmatrix}.$$

We may also write them as

$$A = 2I_{n-1} + \beta S_{n-1}, \qquad B = 2I_{n-1} - \beta S_{n-1},$$

where

$$S = \begin{pmatrix}
-2 & 1  &        &    &    \\
1  & -2 & 1      &    &    \\
   & \ddots & \ddots & \ddots & \\
   &    & 1      & -2 & 1  \\
   &    &        & 1  & -2
\end{pmatrix}.$$
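The decomposition above translates directly into code. A sketch with illustrative sizes; the function name and the zero-boundary assumption are mine:

```python
import numpy as np

def cn_matrices(n, beta):
    """A = 2I + beta*S and B = 2I - beta*S on the n-1 interior points."""
    m = n - 1
    S = -2 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    return 2 * np.eye(m) + beta * S, 2 * np.eye(m) - beta * S

A, B = cn_matrices(5, 1.0)           # illustrative: n = 5, beta = 1
w = np.ones(4)
w_next = np.linalg.solve(B, A @ w)   # one Crank-Nicolson step: B w^{k+1} = A w^k
```

Note that each step requires solving a tridiagonal system with $B$, which is the direct-solver cost mentioned in the review.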

The Crank-Nicolson scheme is defined by the equation $\mathbf{w}^{k+1} = B^{-1} A \mathbf{w}^k$, so the scheme will be stable if $\|B^{-1}A\| \le 1$, or equivalently $\max_s |\mu_s| \le 1$, where the $\mu_s$ are the eigenvalues of $B^{-1}A$. This time we will work with the eigenvalues of the matrices, rather than norms. We need to show that the maximum modulus of the eigenvalues of $B^{-1}A$ does not exceed unity. How do we find the eigenvalues?

EIGENVALUES AND POLYNOMIALS

Recall that $\lambda$ is an eigenvalue of the matrix $S$, and $\mathbf{x}$ a corresponding eigenvector, if

$$S\mathbf{x} = \lambda \mathbf{x}.$$

Thus for any integer $p$,

$$S^p \mathbf{x} = S^{p-1} S \mathbf{x} = S^{p-1} \lambda \mathbf{x} = \dots = \lambda^p \mathbf{x}.$$

Hence the eigenvalues of $S^p$ are $\lambda^p$, with the same eigenvector $\mathbf{x}$.

Extending this result, if $P(S)$ is the matrix polynomial

$$P(S) = a_0 S^n + a_1 S^{n-1} + \dots + a_n I,$$

then

$$P(S)\mathbf{x} = P(\lambda)\mathbf{x}, \qquad P^{-1}(S)\mathbf{x} = \frac{1}{P(\lambda)}\mathbf{x}.$$

Finally, if $Q(S)$ is any other polynomial in $S$, then we see that

$$P^{-1}(S) Q(S) \mathbf{x} = \frac{Q(\lambda)}{P(\lambda)} \mathbf{x}.$$

BACK TO CRANK-NICOLSON...

If we let

$$P = B(S_{n-1}) = 2I_{n-1} - \beta S_{n-1}, \qquad Q = A(S_{n-1}) = 2I_{n-1} + \beta S_{n-1},$$

then the eigenvalues of the matrix $B^{-1}A$ are given by

$$\mu = \frac{2 + \beta\lambda}{2 - \beta\lambda},$$

where $\lambda$ is an eigenvalue of the matrix $S_{n-1}$.

EIGENVALUES OF A TRIDIAGONAL MATRIX

It can be shown that the generic $n \times n$ tridiagonal matrix

$$T = \begin{pmatrix}
b & c &        &   &   \\
a & b & c      &   &   \\
  & \ddots & \ddots & \ddots & \\
  &   & a      & b & c \\
  &   &        & a & b
\end{pmatrix}$$

has the eigenvalues

$$\lambda_s = b + 2\sqrt{ac}\,\cos\left(\frac{s\pi}{n+1}\right), \qquad s = 1, 2, \dots, n.$$
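This formula is easy to verify numerically. A sketch, with illustrative values of $a$, $b$, $c$ and $n$ chosen by me (a symmetric case, so a real-symmetric eigensolver applies):

```python
import numpy as np

a, b, c, n = 1.0, -2.0, 1.0, 6     # illustrative symmetric example
T = b * np.eye(n) + c * np.eye(n, k=1) + a * np.eye(n, k=-1)

numeric = np.sort(np.linalg.eigvalsh(T))
s = np.arange(1, n + 1)
formula = np.sort(b + 2 * np.sqrt(a * c) * np.cos(s * np.pi / (n + 1)))
print(np.max(np.abs(numeric - formula)))   # should be at roundoff level
```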

EIGENVALUES OF B⁻¹A

Hence the eigenvalues of $S_{n-1}$ are

$$\lambda_s = -4\sin^2\left(\frac{s\pi}{2n}\right), \qquad s = 1, 2, \dots, n-1,$$

and so the eigenvalues of $B^{-1}A$ are

$$\mu_s = \frac{2 - 4\beta\sin^2\left(\frac{s\pi}{2n}\right)}{2 + 4\beta\sin^2\left(\frac{s\pi}{2n}\right)}, \qquad s = 1, 2, \dots, n-1.$$
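The closed form for $\mu_s$ can likewise be checked against a direct eigenvalue computation. A sketch with made-up $n$ and $\beta$ (deliberately taking $\beta > \frac{1}{2}$, well beyond the explicit-scheme limit):

```python
import numpy as np

n, beta = 8, 2.0                     # made-up values; beta > 1/2 on purpose
m = n - 1
S = -2 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
A = 2 * np.eye(m) + beta * S
B = 2 * np.eye(m) - beta * S

# eigenvalues of B^{-1} A computed directly ...
mu_numeric = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)

# ... and from mu_s = (2 - 4 beta sin^2(s pi/2n)) / (2 + 4 beta sin^2(s pi/2n))
s = np.arange(1, n)
sin2 = np.sin(s * np.pi / (2 * n)) ** 2
mu_formula = np.sort((2 - 4 * beta * sin2) / (2 + 4 * beta * sin2))

print(np.max(np.abs(mu_numeric - mu_formula)))   # agree to roundoff
print(np.max(np.abs(mu_numeric)))                # < 1 even for large beta
```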

CONSTRAINT ON CRANK-NICOLSON

For stability, we require $\max_s |\mu_s| \le 1$. Since $\beta > 0$ and $\sin^2(s\pi/2n) > 0$, the numerator $2 - 4\beta\sin^2(s\pi/2n)$ is always smaller in modulus than the denominator $2 + 4\beta\sin^2(s\pi/2n)$, so $|\mu_s| < 1$ for any choice of $\beta$. Hence the scheme is unconditionally stable. However, if $\max_s |\mu_s|$ is close to 1 (the $\mu_s$ approach $-1$ as $\beta$ grows large), the solution may be prone to ringing.

SUMMARY

Errors will propagate through the solution: $\mathbf{e}^k = A^k \mathbf{e}^0$. We can bound the errors by the equivalent conditions $\|A\| \le 1$ or $\max_s |\lambda_s| \le 1$. Using the norm condition, we find the explicit scheme requires $\beta \le \frac{1}{2}$, while the eigenvalue analysis shows the Crank-Nicolson scheme has no restriction on $\beta$.