Analysis of Transient Behavior in Linear Differential Algebraic Equations

Analysis of Transient Behavior in Linear Differential Algebraic Equations

Dr. Mark Embree and Blake Keeler

May 6, 2015

Abstract

The linearized Navier-Stokes equations form a type of system called a differential algebraic equation (DAE). These equations impose both algebraic and differential constraints on the solution, and they present significant challenges to numerical solution. This paper discusses a method for solving DAEs via the Schur factorization, avoiding some of the obstacles that are typically present. We also investigate the transient behavior of the solution using results from pseudospectral theory, which helps us understand whether the solution to the linearized Navier-Stokes system will become turbulent if its steady state is disturbed. If a flow does indeed grow substantially in energy, then we must be cautious about using the linearized model; having information about transient behavior is therefore quite instructive.

1 Introduction

The Navier-Stokes equations are a system of highly nonlinear PDEs that describe how the velocity and pressure fields of a fluid behave. Given a particular fluid flow problem, one often seeks to understand whether that flow will be stable (laminar) or unstable (turbulent) if it is perturbed from its steady state. One of the most common ways to answer this question is to perform a finite element discretization in space, resulting in a system of linear ODEs in time. That system of ODEs turns out to be a linear differential algebraic equation (DAE), a system in which the coefficient matrix of the derivative vector is singular. This makes solving these equations much more complex, as the generalized eigenvalue problem becomes more difficult to interpret and connect to the question of stability. In particular, there will be infinite eigenvalues that must be dealt with. We will see that these infinite eigenvalues impose algebraic constraints on the solution, forcing it to evolve on a lower-dimensional subspace of R^n.
Hence the name differential algebraic equation. This paper develops a method by which these equations may be solved without any direct need for the generalized eigenvalue problem, by surgically removing the infinite eigenvalues from the system without disturbing the dynamics, resulting in a physically meaningful solution in exponential form. We then investigate how to analyze the transient behavior of the solution. The asymptotic properties of the solutions to differential equations are well understood. It is commonly known that if a solution to a system of differential equations x′(t) = Ax(t) has the form

x(t) = e^{tA} x(0), then the eigenvalues of A are the determining factors in the behavior of the solution as t → ∞. However, this tells us nothing of what happens between the initial state and the inevitable decay at infinity. It is quite possible for the solution to grow by several orders of magnitude before settling down and decaying to zero. This behavior is referred to as transient growth, and it is a consequence of the departure of the matrix A from normality. One manifestation of this phenomenon is eigenvectors of A associated with distinct eigenvalues being nearly parallel. Unfortunately, the amount of information gained from inspecting the eigenvectors alone is quite limited: they will not tell us how much growth will occur, only that such growth is likely. We can, however, obtain such information by looking not at the spectrum of the matrix, but at the pseudospectra instead. The second portion of this paper applies results from pseudospectral theory to analyze transient behavior in the Navier-Stokes problem.

2 Differential Algebraic Equations

Systems of linear differential equations of the form

B x′(t) = A x(t) (2.1)

are commonplace in many applications, and are quite easy to solve when B is invertible: we can simply invert B and use the matrix exponential to find x(t) = e^{B^{-1}A t} x(0). For more practical formulations, one can seek the generalized eigenvalues and eigenvectors of the matrix pencil (A, B) and see that any function of the form x(t) = e^{µt} v is a solution to the differential equation for any eigenpair (µ, v). However, this paper is primarily concerned with the more interesting case where B is singular, making the system a DAE. Problems of this type are much more difficult to solve, since the pencil (A, B) now possesses infinite eigenvalues. This can be seen by writing down the generalized eigenvalue problem in the reciprocal form

µ A v = B v (2.2)

and noting that since B is singular, there exist nonzero vectors v for which Bv = 0.
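As a concrete illustration of such infinite eigenvalues, the following sketch computes the generalized eigenvalues of a hypothetical 3 × 3 pencil (A, B) with singular B (the matrices here are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.linalg import eig

# Toy pencil (A, B) with a singular B: the third diagonal entry of B is
# zero, so there are nonzero v with Bv = 0, i.e. infinite pencil eigenvalues.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 4.0]])
B = np.diag([1.0, 1.0, 0.0])          # rank 2, hence singular

eigvals = eig(A, B, right=False)
# QZ reports infinite eigenvalues as inf (or, with rounding, huge values).
n_infinite = int(np.sum(~np.isfinite(eigvals) | (np.abs(eigvals) > 1e8)))
print(n_infinite)
```

For this pencil, det(A − λB) has degree two, so exactly one of the three eigenvalues is infinite.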
For these v, taking µ = 0 satisfies the eigenvalue problem, making (µ, v) an infinite eigenpair of the matrix pencil, since the corresponding pencil eigenvalue is 1/µ. So how shall we handle such an equation? We clearly cannot write x(t) = e^{∞t} v as a solution. The following section details a method by which we may construct a sensible solution to (2.1) using the Schur decomposition, thereby avoiding the generalized eigenvalue problem entirely.

2.1 Solving a DAE by the Schur Decomposition

Details about the solution of DAEs can be found in [ ]. In this section we discuss one particular method that employs the Schur decomposition. Consider (2.1) where B is singular, but assume that (A, B) is a nonsingular pencil, so that there exists some real number λ for which A − λB is invertible. We define A_λ = (A − λB)^{-1}A and B_λ = (A − λB)^{-1}B. Then if we multiply (2.1) by (A − λB)^{-1}, we obtain

B_λ x′(t) = A_λ x(t). (2.3)
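The shifted matrices just defined are easy to form numerically. This sketch, on the same hypothetical 3 × 3 pencil used above, verifies that the chosen shift makes A − λB invertible and that A_λ − λB_λ = I:

```python
import numpy as np

# Hypothetical 3x3 pencil (illustrative data, not from the paper).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 4.0]])
B = np.diag([1.0, 1.0, 0.0])
lam = 1.0
shifted = A - lam * B                       # invertible for this shift

A_lam = np.linalg.solve(shifted, A)         # A_lam = (A - lam*B)^{-1} A
B_lam = np.linalg.solve(shifted, B)         # B_lam = (A - lam*B)^{-1} B

# The identity A_lam - lam*B_lam = I from the text:
residual = np.linalg.norm(A_lam - lam * B_lam - np.eye(3))
print(residual)   # ~ 0
```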

Since (A λb) is nonsingular, (.3) is equivalent to (.). Notice also that A λ λb λ = (A λb) A λ(a λb) B = (A λb) (A λb) = I. Therefore, A λ = I + λb λ, so (.3) can be written as Now we write the Schur decomposition of B λ as B λ x (t) = (I + λb λ )x(t). (.4) B λ = QTQ, where Q is a unitary matrix, and T is an upper triangular matrix in which the eigenvalues of B λ may appear in any order. Suppose the pencil has an infinite eigenvalue of algebraic multiplicity m, so that B λ has a zero eigenvalue of algebraic multiplicity m. Then it is convenient to order the eigenvalues so that [ G S T =, N where N is an m m upper triangular matrix corresponding to the zero eigenvalues of B λ (i.e. its diagonal elements are all zero), G is the (n m) (n m) upper triangular matrix corresponding to the nonzero eigenvalues (so G is invertible), and U is some (n m) m matrix. Hence, if we premultiply (.3) by Q, we obtain which can be rewritten as [ G S N Q B λ QQ x (t) = Q (I + λb λ )QQ x(t), (.5) [ I + λg λs Q x (t) = I + λn Q x(t). (.6) Define y(t) = Q x(t), where y(t) = [y, y T with y of length n m and y of length m, so we have [ [ [ [ G S y I + λg λs y =. (.7) N I + λn y From the second block-row, we have Ny = (I + λn)y. Since the diagonal of N is zero, it must be a nilpotent matrix. Let us assume that it has index k, that is N k =, but N k. Therefore, premultiplying by N k we find (N k + λn k )y = N k y =. So N k y =. Then if we premultiply by N k we have (N k + λn k )y = N k y = (N k y ) =. So N k y =. This process can be continued inductively to obtain N y =. 3 y

Therefore, y_2 = 0. Let us consider for a moment what this implies about the set of valid initial conditions x(0). Let us first partition Q = [Q_1, Q_2] so that Q_1 ∈ C^{n×(n−m)} and Q_2 ∈ C^{n×m}. Then since x(t) = Q y(t) with y_2(t) = 0, we have that

x(0) = Q_1 y_1(0). (2.8)

In other words, the only initial conditions that satisfy the algebraic constraints of the DAE are those in Ran(Q_1). Similarly, the relationship x(t) = Q_1 y_1(t) implies that the solution evolves on a subspace of R^n of dimension n − m. Hence, the infinite eigenvalues of the pencil (A, B) serve to constrain the solution to a subspace of lower dimension. We will see a very clear example of this in section 3.2. Continuing from (2.7) with the knowledge that y_2(t) = 0, we have G y_1′ = (I + λG) y_1, and so the solution is

y_1(t) = e^{(G^{-1} + λI) t} y_1(0).

The full solution to (2.7) can now be written as

[ y_1(t) ] = [ e^{(G^{-1}+λI)t}  0 ] [ y_1(0) ]. (2.9)
[ y_2(t) ]   [ 0                 0 ] [ y_2(0) ]

Notice that the entries in the (1,2) and (2,2) blocks of the above matrix are irrelevant, but 0 makes a convenient choice. We can then write

x(t) = Q [ y_1(t) ] = Q exp( t [ G^{-1} + λI  0 ] ) Q* x(0).
         [ y_2(t) ]            [ 0             0 ]

For simplicity, we will write the solution to (2.1) as

x(t) = exp( t Q [ M  0 ] Q* ) x(0), (2.10)
                [ 0  0 ]

where M = G^{-1} + λI. Thus, we have formulated a solution to (2.1) that maintains the standard exponential form, despite the fact that the pencil (A, B) has infinite eigenvalues. The next few sections outline some of the interesting properties of this solution.

2.2 The Eigenvalues of M

As mentioned previously, if we have a standard system of equations like (2.1) with B nonsingular, we can simply write the solution in terms of the eigenvalues of the pencil (A, B). For a DAE this is complicated by the existence of infinite eigenvalues, but it is reasonable to expect that the finite eigenvalues of the pencil will still dictate the behavior of the solution. In the following, we prove that this is indeed the case: the eigenvalues of M in (2.10) are the same as the finite eigenvalues of the pencil.
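The whole construction above can be condensed into a few lines. This is a minimal sketch on the hypothetical 3 × 3 pencil used earlier, using scipy's sorted complex Schur form to order the nonzero eigenvalues of B_λ first, and then checking that the resulting exponential-form solution really satisfies B x′(t) = A x(t):

```python
import numpy as np
from scipy.linalg import schur, expm

# Hypothetical 3x3 pencil (illustrative data, not from the paper).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 4.0]])
B = np.diag([1.0, 1.0, 0.0])          # singular mass matrix
lam = 1.0                              # any shift with A - lam*B invertible
B_lam = np.linalg.solve(A - lam * B, B)

# Complex Schur form with the nonzero eigenvalues of B_lam ordered first;
# sdim is the size of the leading (invertible) block G.
T, Z, sdim = schur(B_lam, output='complex', sort=lambda z: abs(z) > 1e-10)
G, Q1 = T[:sdim, :sdim], Z[:, :sdim]
M = np.linalg.inv(G) + lam * np.eye(sdim)

def x(t, y10):
    # Solution x(t) = Q1 exp(tM) y1(0); admissible x(0) lie in Ran(Q1).
    return Q1 @ expm(t * M) @ y10

# DAE residual B x'(t) - A x(t) along the trajectory (x' = Q1 M e^{tM} y1(0)).
y10 = np.array([1.0, -0.5])
t = 0.7
xprime = Q1 @ M @ expm(t * M) @ y10
residual = np.linalg.norm(B @ xprime - A @ x(t, y10))
print(residual)   # ~ 0
```

Here the pencil has one infinite eigenvalue, so sdim = 2 and the solution evolves on a two-dimensional subspace.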

Theorem.. If A, B, M, and λ are as in section., then the spectrum of M is the set of all finite eigenvalues of the pencil (A, B). Proof. Suppose η is a finite eigenvalue of the pencil (A, B). nonzero x. So we can subtract λbx to obtain Then Ax = ηbx for some (A λb)x = (η λ)bx. Since A λb is invertible, η λ must be nonzero. We can rearrange the above to obtain (A λb) Bx = η λ x. Hence is a nonzero eigenvalue of B η λ λ = (A λb) B. If the Schur decomposition of B λ is as in section., then is an eigenvalue of G. Hence, η λ is an eigenvalue of η λ G. Finally, this implies that η is an eigenvalue of M = G + λi. The reverse inclusion follows the same argument, and so the theorem follows. This result, while it is what we had hoped for, is interesting in that no matter what shift λ is applied to make A λb invertible, the key matrix in the solution always has the same eigenvalues. In fact, as we shall see, the solution is entirely independent of λ..3 The Solution s Independence of λ In this section, we will prove that our solution (.) is independent of the parameter λ, whose only purpose was to make the pencil (A, B) nonsingular. We begin with the following lemmas. Lemma.. Let A, B C n n and suppose λ and µ are any real numbers such that A λb and A µb are invertible. Define B λ = (A λb) B and B µ = (A µb) B. Then η σ(b λ ) = η + (λ µ)η σ(b µ). Proof. Suppose η σ(b λ ). Then (A λb) Bx = ηx for some nonzero vector x. Premultiplying by (A λb) and adding and subtracting ηµbx from the right gives A rearrangement yields Bx = η(a µb)x + η(µ λ)bx. ( + η(λ µ))bx = η(a µb)x. If η, then since A µb is invertible, + η(λ µ) must be nonzero. If η =, then clearly + η(λ µ). Therefore, we can write Bx = η (A µb)x. + η(λ µ) Premultiplying by (A µb) demonstrates the result. 5

Lemma.3. Let A, B, B λ and B µ be as in Lemma. and suppose the pencil (A, B) has an infinite eigenvalue of algebraic multiplicity m, so that B λ and B µ each have a zero eigenvalue of algebraic multiplicity m. Let the Schur decompositions of B λ and B µ be given by [ Gλ S B λ = Q λ N λ Q [ Gµ S B µ = U µ N µ U, respectively, where Q, U are unitary, N λ, N µ R m m have zero diagonal entries, and G λ, G µ R (n m) (n m) are invertible. Partition Q = [Q, Q and U = [U, U so that Q, U R n (n m). Then Ran(Q ) = Ran(U ). Proof. Let us write B λ in Jordan form as B λ X = XJ where the nonzero eigenvalues are written in the upper left (n m) (n m) block of J. If we partition X = [X, X in the same way that we partitioned Q, then Ran(X ) is the invariant subspace of B λ associated with the nonzero eigenvalues. Observe that Ran(Q ) is also an invariant subspace of B λ associated with the nonzero eigenvalues. Hence Ran(X ) = Ran(Q ). Next we can premultiply B λ X = XJ by A λb and add and subtract µbxj on the right to obtain Simple rearrangement yields BX = (A µb)xj + (µ λ)bxj. BX(I + (λ µ)j) = (A µb)xj. For µ sufficiently close to λ, I + (λ µ)j is invertible, so we can write (A µb) BX = XJ(I + (λ µ)j). (.) It is important to note that J(I + (λ µ)j) will retain the block upper triangular form of J. This is shown by partitioning J by its Jordan blocks as J = diag{j,..., J p } so that J (I + (λ µ)j ) J(I + (λ µ)j) =..., J p (I p + (λ µ)j p ) where I,..., I p are the appropriate diagonal partitions of the identity matrix. Therefore, the diagonal of J(I + (λ µ)j) will retain the structure of the diagonal of J. That is, the nonzero eigenvalues (of B µ this time) will be listed as the first elements along the diagonal. Hence by (.) X is the invariant subspace associated with the nonzero eigenvalues of B µ. Again, since such invariant subspaces are unique, we have that Ran(U ) = Ran(X ). When we recall that Ran(X ) = Ran(Q ), we have the desired result. 
Thus far, we have only proved the theorem for µ sufficiently close to λ, since that condition guaranteed that I + (λ µ)j would be invertible. However, we claim that this matrix is invertible for all valid choices of λ and µ. To see this, note that I+(λ µ)j will be invertible provided + (λ µ)η is nonzero for all eigenvalues η of B λ. If η σ(b λ ), Lemma. states that η/( + (λ µ)η) is an eigenvalue of B µ. Since the eigenvalues of B µ must be finite, + (λ µ)η must be nonzero. The result therefore generalizes to all λ and µ for which A λb and A µb are invertible. 6

Lemma.4. If all matrices are as in Lemma.3, [ Q Ĝµ Ŝ B µ Q = µ Nµ, where Ĝµ = Q U G µ U Q. Proof. Direct computation shows that [ Q Q B µ Q = Q B µ [ Q Q = Q B µ Q Q B µ Q Q B µ Q Q B µ Q [ Ĝ =: µ Ŝ µ. Q B µ Q Nµ By Lemma.3 we know that Ran(Q ) = Ran(U ) is an invariant subspace of B µ, so we have Q B µ Q = Q Q Z =, for some matrix Z R (n m) (n m), since the columns of Q are orthogonal to the columns of Q. Therefore, [ Q Ĝµ Ŝ B µ Q = µ. Nµ Also, if we expand the Schur form of B µ, we obtain and hence, B µ = U G µ U + (U S µ + U N µ )U, Ĝ µ = Q B µ Q = Q U G µ U Q, since the columns of Q are orthogonal to the columns of U. We have therefore proven that Q B µ Q remains upper triangular with an explicit formula for the upper left block. Lemma.5. B λ B µ = (λ µ)b λ B µ. Proof. Notice that (A µb) (A λb) = (λ µ)b. If we premultiply by (A λb) and postmultiply by (A µb), we obtain (A λb) (A µb) = (λ µ)(a λb) B(A µb), which is a generalized form of the First Resolvent Identity. If we then postmultiply this equation by B, we have the desired result. 7

This is the final lemma that we need in order to prove the following theorem.

Theorem 2.6. The solution formulated in section 2.1 is independent of the choice of the parameter λ, provided A − λB is nonsingular. That is, in the context of the preceding lemmas,

U [ G_µ^{-1} + µI  0 ] U* = Q [ G_λ^{-1} + λI  0 ] Q*.
  [ 0              0 ]        [ 0              0 ]

Proof. By Lemma 2.5, we have B_λ − B_µ = (λ − µ) B_λ B_µ. Multiplying by Q* on the left and Q on the right, we obtain

Q* B_λ Q − Q* B_µ Q = (λ − µ) Q* B_λ Q Q* B_µ Q.

If we replace B_λ and B_µ by their Schur decompositions and invoke Lemma 2.4, we can write

[ G_λ  S_λ ] − [ Ĝ_µ  Ŝ_µ ] = (λ − µ) [ G_λ  S_λ ] [ Ĝ_µ  Ŝ_µ ].
[ 0    N_λ ]   [ 0    N̂_µ ]           [ 0    N_λ ] [ 0    N̂_µ ]

Restricting our attention to the (1,1) block of this matrix equation, we see that

G_λ − Ĝ_µ = (λ − µ) G_λ Ĝ_µ.

Since Ĝ_µ = Q_1* U_1 G_µ U_1* Q_1, we can multiply on the left by Q_1 and on the right by Q_1* to obtain

Q_1 G_λ Q_1* − (Q_1 Q_1* U_1) G_µ (U_1* Q_1 Q_1*) = (λ − µ)(Q_1 G_λ Q_1*) U_1 G_µ (U_1* Q_1 Q_1*). (2.12)

Since Q_1 Q_1* and U_1 U_1* are both orthogonal projectors onto Ran(Q_1) = Ran(U_1), we have Q_1 Q_1* = U_1 U_1*. So we have

Q_1 Q_1* U_1 = U_1 U_1* U_1 = U_1,

and similarly

U_1* Q_1 Q_1* = U_1* U_1 U_1* = U_1*.

We will use these identities freely in the following calculations. Continuing from (2.12), we have

Q_1 G_λ Q_1* − U_1 G_µ U_1* = (λ − µ)(Q_1 G_λ Q_1*)(U_1 G_µ U_1*).

Then if we multiply on the left by Q_1 G_λ^{-1} Q_1*, we obtain

Q_1 Q_1* − (Q_1 G_λ^{-1} Q_1*)(U_1 G_µ U_1*) = (λ − µ) U_1 G_µ U_1*.

Next we can multiply on the right by U_1 G_µ^{-1} U_1*, which gives

U_1 G_µ^{-1} U_1* − Q_1 G_λ^{-1} Q_1* = (λ − µ) U_1 U_1*.

Since U_1 U_1* = Q_1 Q_1*, we can rearrange the above to see that

U_1 (G_µ^{-1} + µI) U_1* = Q_1 (G_λ^{-1} + λI) Q_1*.

It follows that

U [ G_µ^{-1} + µI  0 ] U* = Q [ G_λ^{-1} + λI  0 ] Q*.
  [ 0              0 ]        [ 0              0 ]

Now that we know the solution will not be affected by our choice of the parameter λ, we can be confident in using this approach to solve DAEs.

2.4 Linearized Navier-Stokes

As mentioned in the introduction, the primary application that we wish to study is the numerical solution of the incompressible Navier-Stokes equations,

−ν∆u + (u · ∇)u + ∇p = −∂u/∂t (2.13)
∇ · u = 0, (2.14)

where u is the velocity field of the fluid and p is the pressure field. Equations (2.13) and (2.14) form a system of nonlinear time-dependent partial differential equations, for which finding a solution is a difficult task. One common approach is to find a steady state solution and linearize the system around that steady state using a finite element formulation, the details of which we will not discuss here. For a full description of this process, see [3]. The key idea is to approximate u ≈ u_h and p ≈ p_h, where

u_h = Σ_{i=1}^{n} a_i(t) φ_i(x)  and  p_h = Σ_{j=1}^{m} b_j(t) ψ_j(x)

for some finite collections of basis functions {φ_1, ..., φ_n} and {ψ_1, ..., ψ_m}. With this discretization, the linearized problem is reduced to a first order system of linear differential equations in time, given by

[ A  0 ] x′(t) = [ K   C ] x(t), (2.15)
[ 0  0 ]         [ C*  0 ]

where x(t) = [a(t), b(t)]^T with a(t) = [a_1(t), ..., a_n(t)] and b(t) = [b_1(t), ..., b_m(t)], the vectors of time-dependent coefficients in u_h and p_h. Also, A ∈ C^{n×n} is Hermitian positive definite, C ∈ C^{n×m} has full column rank (m < n), and K ∈ C^{n×n}. From this we can see the connection to DAEs, since the matrix on the left side of (2.15) is singular: it has m zero columns. It should be noted that (2.15) describes how a fluid flow system will behave in time, but only under the assumption that it stays sufficiently close to the steady state. Therefore, the transient properties of the system are very important. If we can conclude that the solution will likely grow substantially from its initial state, then the necessary assumptions for the linearized analysis will not be satisfied. For more details about linear stability analysis, see [ ] and [8]. The next sections describe the necessary tools to analyze how the pseudospectral properties of the matrix M in the solution to (2.15) influence transient behavior, allowing us to predict whether or not the linearization will remain descriptive.
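Before moving on, the shift-independence result of section 2.3 can also be verified numerically: building the matrix Q_1 (G^{-1} + λI) Q_1* from two different shifts gives the same result up to rounding. A sketch on the hypothetical pencil used earlier:

```python
import numpy as np
from scipy.linalg import schur

# Hypothetical 3x3 pencil (illustrative data, not from the paper).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 4.0]])
B = np.diag([1.0, 1.0, 0.0])

def solution_generator(shift):
    # Returns Q1 (G^{-1} + shift*I) Q1^* for the given shift.
    B_s = np.linalg.solve(A - shift * B, B)
    T, Z, sdim = schur(B_s, output='complex', sort=lambda z: abs(z) > 1e-10)
    G, Q1 = T[:sdim, :sdim], Z[:, :sdim]
    return Q1 @ (np.linalg.inv(G) + shift * np.eye(sdim)) @ Q1.conj().T

diff = np.linalg.norm(solution_generator(1.0) - solution_generator(-0.5))
print(diff)   # ~ 0, as Theorem 2.6 predicts
```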

3 Pseudospectra

As we turn our attention to transient growth in the solutions to DAEs, we will find the concept of pseudospectra to be quite useful. Conceptually, the pseudospectra of a matrix A are sets of numbers in the complex plane that are almost eigenvalues. Before we introduce the more mathematically precise definition, notice that if z is not an eigenvalue of A (z ∉ σ(A)), the resolvent norm ‖(A − zI)^{-1}‖ is defined and grows in magnitude as A − zI gets closer to being singular, which is generally when z approaches an eigenvalue of A. Therefore, it is reasonable to take the convention ‖(A − zI)^{-1}‖ = ∞ if z ∈ σ(A). This motivates the definition of pseudospectra that we will use here, taken from [6]: if ε > 0 is arbitrary and A ∈ C^{n×n}, then the ε-pseudospectrum σ_ε(A) of A is the set of z ∈ C such that ‖(A − zI)^{-1}‖ > ε^{-1}. We will not look too deeply into the theory of pseudospectra in this paper; we will simply use a few results that prove useful in studying the transient behavior of our DAE solutions, as discussed in the next section.

3.1 Pseudospectra and Transient Growth

Recall that our solution to a general DAE in section 2 had the form

x(t) = exp( t Q [ M  0 ] Q* ) x(0).
                [ 0  0 ]

Therefore, since x(0) is constant, the transient behavior of the solution is determined primarily by the behavior of the matrix exponential as time increases. Specifically, we only need to know about e^{tM}, since Q and Q* simply effect a unitary similarity transformation, under which the 2-norm is invariant. This is where some results from pseudospectral theory prove quite useful. In particular, we can use pseudospectra to put a lower bound on the maximum value of the matrix exponential norm. From Theorem 15.4 in [6], if the pseudospectral abscissa α_ε(A) is defined by

α_ε(A) = sup_{z ∈ σ_ε(A)} Re(z), (3.1)

which in simple terms is the real part of the rightmost point in the ε-pseudospectrum, then

sup_{t ≥ 0} ‖e^{tA}‖ ≥ α_ε(A)/ε  for all ε > 0. (3.2)

Taking the supremum over ε gives

sup_{t ≥ 0} ‖e^{tA}‖ ≥ K(A) = sup_{ε > 0} α_ε(A)/ε, (3.3)

where K(A) is known as the Kreiss constant. Therefore, if the ε-pseudospectrum of A extends farther than ε into the right half-plane, we can guarantee that ‖e^{tA}‖ must grow by at least a factor of α_ε(A)/ε from the initial state. It is important to note that this only guarantees growth in the same norm that defines the pseudospectra. Determining an effective method for computing pseudospectra in the appropriate norm will be discussed in section 3.3. Also, notice that we can only guarantee such growth in the matrix exponential e^{tA}. If the solution to a DAE has the form x(t) = e^{tM} x(0), then we have only that

sup_{t ≥ 0} ‖x(t)‖ ≤ sup_{t ≥ 0} ‖e^{tM}‖ ‖x(0)‖. (3.4)

Hence, we need not see transient growth for all initial conditions x(0). However, by definition of an operator norm,

‖e^{tM}‖ = max_{‖x‖ = 1} ‖e^{tM} x‖.

Therefore, there does exist an initial state x(0) such that maximal transient growth at time t is realized. The next section details a simple three-dimensional example where we see a clear connection between the Kreiss constant and the growth of the solution.

3.2 A Three-Dimensional Example

Consider the dynamical system

B x′(t) = A x(t), (3.5)

where

B = [ .5658   8.54    4.746 ]
    [ 33.598  7.4939  .875  ]
    [ 8.596   5.53    .557  ]

and A = 3 4 5 6. The matrix A was chosen arbitrarily, simply to be nonsingular. B, on the other hand, was specifically constructed so that it has a zero eigenvalue of algebraic multiplicity one. When we follow the procedure outlined in section 2.1 with any appropriate λ, we obtain

x(t) = Q exp( t [ M  0 ] ) Q* x(0), (3.6)
                [ 0  0 ]

where

Q = [ .653  .7567  .43   ]
    [ .683  .58    .5893 ]
    [ .44   .44    .868  ]

and x(0) must be in the range of the first two columns of Q, by (2.8). A suitable choice for the initial condition was x(0) = [ ]. Notice that in the context of (2.10), M is the 2 × 2 matrix [ .764 .499 .99 ].

The eigenvectors of this matrix are nearly parallel, so from the standard qualitative perspective we should expect transient growth. The pseudospectra will reflect this and give us a more quantitative idea of exactly how much growth to expect. Before we look at the pseudospectra, however, recall that since B (and therefore B_λ) had a zero eigenvalue of multiplicity one, we should expect the solution to evolve on a two-dimensional subspace of R^3. And indeed, as shown in figure 3.1, we see that the solution is constrained to evolve on a plane.

Figure 3.1: The path x(t) and the 2D subspace on which it lives.

Figure 3.2: Pseudospectra of M.

We see also that there is some transient growth in x(t), so this should be reflected in the pseudospectral properties of M. Based on our discussion in section 3.1, we should expect

the ε-pseudospectra to extend farther than ε into the right half-plane for some ε > 0. As shown in figure 3.2, this is exactly what we see. The black dots show the eigenvalues of M, both of which lie in the left half-plane, which is sufficient to guarantee that the solution is asymptotically stable. However, the fact that the pseudospectra, denoted by the colored contours, extend so far to the right of the imaginary axis guarantees intermediate growth before the eventual decay of the solution. To understand just how much growth we should expect in x(t), we compute the ratios α_ε(M)/ε for a range of ε values to find a close approximation to the Kreiss constant. Figure 3.3 demonstrates the relationship between ε and the ratios α_ε(M)/ε. The peak highlighted in the figure occurs at ε = .398 and takes on a value of α_ε(M)/ε = 3.488, which gives us that K(M) ≈ 3.488. Hence, we should expect the norm of x(t) to grow by at least a factor of 3.488 from its initial value before its final decay. This behavior is demonstrated in figure 3.4, where the red line indicates our predicted level of growth. We see that the norm grows to a point just past our prediction before quickly shrinking to its steady state at zero.

Figure 3.3: α_ε(M)/ε as a function of ε.

Figure 3.4: Transient behavior of ‖x(t)‖.
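The quantities used in this example can be computed directly. The following sketch uses a hypothetical nonnormal 2 × 2 matrix (not the M above) and a crude grid evaluation of the resolvent norm, rather than a production pseudospectra code, to estimate the pseudospectral abscissa for one ε and confirm that the observed transient growth of ‖e^{tA}‖ exceeds the lower bound α_ε(A)/ε:

```python
import numpy as np
from scipy.linalg import expm, svdvals

# Hypothetical stable but strongly nonnormal matrix (illustrative data).
A = np.array([[-1.0, 50.0],
              [0.0, -2.0]])

# z lies in the eps-pseudospectrum iff the smallest singular value of
# zI - A is below eps (equivalently, the resolvent norm exceeds 1/eps).
eps = 0.1
alpha = -np.inf        # grid estimate of the pseudospectral abscissa
for x in np.linspace(-3.0, 2.0, 101):
    for y in np.linspace(-2.0, 2.0, 81):
        z = x + 1j * y
        if svdvals(z * np.eye(2) - A)[-1] < eps:
            alpha = max(alpha, x)

# Observed transient growth of ||e^{tA}|| over a time window.
growth = max(np.linalg.norm(expm(t * A), 2)
             for t in np.linspace(0.0, 8.0, 200))
print(alpha / eps, growth)   # lower bound vs. observed peak
```

The grid only underestimates α_ε(A), so the printed lower bound is conservative; here the pseudospectrum reaches well into the right half-plane even though both eigenvalues are negative.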

This example provides a clear demonstration of the relationship between the pseudospectra of M and the transient behavior of the solution norm. However, it is worth noting that we have simply defaulted to using the 2-norm to compute both the pseudospectra and the growth of the solution. In this case that made sense, because we were only interested in the path of the particle, but in the case of Navier-Stokes it is not so obvious how growth should be described. We discuss this issue in the following section.

3.3 Finding the Appropriate Norm

We are interested in transient behavior in the solution to the Navier-Stokes problem, which takes the form [u, p]^T, where u and p are the velocity and pressure fields, respectively. The task at hand is to determine a meaningful way to describe what we mean by growth. For example, if the 2-norm of the velocity-pressure vectors exhibits transient growth, does that imply that there will be any noticeable behavior in the physical system? In general, it is difficult to assign any physical meaning to such a norm. However, one norm that proves quite useful in this case is the energy norm, denoted ‖·‖_E. If we let Ω denote the physical domain of the problem, then the energy norm for Navier-Stokes is given by

‖ [u, p] ‖_E = ( |u|_{H^1}^2 + ‖p‖_{L^2}^2 )^{1/2} = ( ∫_Ω ∇u · ∇u + p^2 dΩ )^{1/2},

where |·|_{H^1} is the H^1 semi-norm and ‖·‖_{L^2} is the L^2 norm. Suppose now that [u_h, p_h]^T is a finite element approximation to the solution, with u_h = Σ_{i=1}^n a_i φ_i(x) and p_h = Σ_{i=1}^m b_i ψ_i(x), where {φ_1, ..., φ_n} and {ψ_1, ..., ψ_m} are the finite element basis functions. Note that any boundary conditions enforced will vary with the particular flow being studied, but this does not affect our calculations. Computing the energy norm of our finite element approximation gives

‖ [u, p] ‖_E^2 ≈ ‖ [u_h, p_h] ‖_E^2 = ∫_Ω ∇u_h · ∇u_h + p_h^2 dΩ
  = Σ_{i=1}^n Σ_{j=1}^n a_i a_j ∫_Ω ∇φ_i · ∇φ_j dΩ + Σ_{i=1}^m Σ_{j=1}^m b_i b_j ∫_Ω ψ_i(x) ψ_j(x) dΩ.

If we let a = [a_1, ..., a_n]^T, b = [b_1, ..., b_m]^T, (K)_ij = ∫_Ω ∇φ_i · ∇φ_j dΩ, and (A)_ij = ∫_Ω ψ_i(x) ψ_j(x) dΩ, we can write

‖ [u, p] ‖_E^2 ≈ ‖ [u_h, p_h] ‖_E^2 = a* K a + b* A b = [a, b]* [ K  0 ] [a, b] = x* H x,
                                                                [ 0  A ]

where H = [ K 0 ; 0 A ] and x = [a, b]^T. Notice that under the typical restriction that the φ_i and ψ_i obey Dirichlet boundary conditions, both K and A are Gram matrices and therefore are symmetric positive definite. Hence H is symmetric positive definite, and so the function ⟨u, v⟩_H = v* H u defines an inner product that induces the corresponding norm ‖u‖_H = (u* H u)^{1/2}. Therefore, we have that

‖ [u_h, p_h] ‖_E = ‖x‖_H.

Coincidentally, K and A are the same matrices that arise in the finite element discretization, so it requires no additional effort to construct the appropriate H. Additionally, if R R* = H is a Cholesky factorization of H (which must exist since H is symmetric positive definite), then

‖x‖_H^2 = x* H x = x* R R* x = (R* x)* (R* x) = ‖R* x‖_2^2.

Therefore, we can compute the energy norm of our finite element solution by simply computing the 2-norm of the vector of finite element coefficients x transformed by R*. Similarly, if we wish to compute the energy norm of a matrix B, we can first note that

‖B y‖_H / ‖y‖_H = ( y* B* H B y / y* H y )^{1/2} = ‖R* B y‖_2 / ‖R* y‖_2 = ‖R* B R^{-*} z‖_2 / ‖z‖_2,

where z = R* y. Since R is invertible, maximizing over y is equivalent to maximizing over z, and therefore

‖B‖_E = ‖R* B R^{-*}‖_2. (3.7)

Thus, we can compute the energy norm of a linear operator by performing the similarity transformation with respect to R* and then computing the 2-norm.
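The Cholesky-based evaluation of the energy norm can be sketched on a small hypothetical SPD matrix H (real arithmetic, so R* = R^T):

```python
import numpy as np

# Hypothetical 2x2 SPD stand-in for the Gram matrix H.
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
R = np.linalg.cholesky(H)            # lower triangular, H = R R^T

# Vector norm: ||x||_H = ||R^T x||_2.
x = np.array([1.0, -2.0])
norm_direct = np.sqrt(x @ H @ x)
norm_chol = np.linalg.norm(R.T @ x)
print(norm_direct, norm_chol)        # equal

# Matrix norm: ||B||_H = ||R^T B R^{-T}||_2, as in (3.7).
B_mat = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
B_energy = np.linalg.norm(R.T @ B_mat @ np.linalg.inv(R.T), 2)
print(B_energy)
```

For sparse finite element matrices one would use a sparse Cholesky factorization and triangular solves instead of forming an explicit inverse.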

3.4 Results for Navier-Stokes

We have finally developed all of the necessary tools to analyze the transient behavior of the linearized Navier-Stokes problem, which takes the DAE form

$$\begin{bmatrix} A & 0 \\ 0 & 0 \end{bmatrix} x'(t) = \begin{bmatrix} K & C^T \\ C & 0 \end{bmatrix} x(t). \qquad (3.8)$$

Before we compute any solutions to this system, we perform the change of coordinates described in section 3.3, resulting in the modified system

$$R^{-1} \begin{bmatrix} A & 0 \\ 0 & 0 \end{bmatrix} R^{-T} y'(t) = R^{-1} \begin{bmatrix} K & C^T \\ C & 0 \end{bmatrix} R^{-T} y(t), \qquad (3.9)$$

where $y(t) = R^T x(t)$. Now when we compute the 2-norm of the solution to (3.9), we obtain the energy norm of the solution to (3.8). We will compute solutions to this system for two classical fluid flow problems in two physical dimensions, namely flow over a step and flow around an obstacle.

All of the linearized systems that we analyze here were generated using the IFISS software package from the University of Manchester with grid size 4 [4], [5]. This is the smallest grid size on which we can obtain reasonable resolution of the eddies. With this grid, the coefficient matrices in the step flow problem are of size 797 × 797. The A and K blocks have dimension 6978 × 6978, while the bottom right zero blocks have size 98 × 99. Therefore, when we follow the procedure outlined in section 2, we obtain a solution of the form

$$x(t) = Q \begin{bmatrix} e^{tM} & 0 \\ 0 & 0 \end{bmatrix} Q^T x(0), \qquad (3.10)$$

where M has dimension 649 × 649. For the obstacle flow problem, the coefficient matrices are smaller, having dimension 488 × 488; A and K have dimension 9 × 9, and the M in the solution has size 896. In the following sections we will show how the pseudospectral properties of M change with the viscosity ν by computing approximations to the Kreiss constant, which will in turn tell us how the transient behavior of the system changes with ν.

3.4.1 Flow Over a Step

The step flow system involves a viscous fluid flowing off the edge of a small step, creating an eddy at the base of the step as well as a noticeable dip at the surface of the fluid. A typical steady state solution to this problem is shown in figure 3.5.
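The effect of the change of coordinates in (3.9) can be demonstrated on a small stand-in problem. In the sketch below, M is an arbitrary stable matrix playing the role of the reduced operator (the DAE is assumed already reduced to an ODE x′ = Mx), and H = RRᵀ is an arbitrary SPD matrix playing the role of the energy-norm matrix; neither is a Navier-Stokes matrix. The transformed variable y = Rᵀx evolves under the similarity-transformed operator RᵀMR⁻ᵀ, and its 2-norm reproduces the H-norm of x.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 6

# Stand-in reduced operator (shifted to be stable) and SPD matrix H = R R^T.
M = rng.standard_normal((n, n)) - 3 * np.eye(n)
G = rng.standard_normal((n, n))
H = G @ G.T + n * np.eye(n)
R = np.linalg.cholesky(H)

x0 = rng.standard_normal(n)
t = 0.7

# Original system: x' = M x, so x(t) = expm(t M) x0.
x_t = expm(t * M) @ x0

# Transformed system: y' = (R^T M R^{-T}) y with y(0) = R^T x0.
M_tilde = R.T @ M @ np.linalg.inv(R.T)
y_t = expm(t * M_tilde) @ (R.T @ x0)

# y(t) = R^T x(t), and the 2-norm of y(t) equals the H-norm of x(t).
assert np.allclose(y_t, R.T @ x_t)
assert np.isclose(np.linalg.norm(y_t), np.sqrt(x_t @ H @ x_t))
```

The key fact being exercised is that exp(t RᵀMR⁻ᵀ) = Rᵀ exp(tM) R⁻ᵀ, so tracking y in the 2-norm is exactly tracking x in the energy norm.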
Figures 3.6-3.9 show the pseudospectra plots of M for viscosity values of ν = 1/2, 1/20, 1/40, and 1/50.

Figure 3.5: Flow over a step (streamlines and pressure field).

Figure 3.6: Pseudospectra for step flow, ν = 1/2.

Figure 3.7: Pseudospectra for step flow, ν = 1/20.

Figure 3.8: Pseudospectra for step flow, ν = 1/40.

Figure 3.9: Pseudospectra for step flow, ν = 1/50.

It is easy to see that as the viscosity decreases the spectrum of M drifts closer to the imaginary axis, but all of the eigenvalues remain in the left half-plane for these viscosities, keeping the system asymptotically stable. Perhaps less obvious is the fact that some of the pseudospectral contours also begin to shift farther into the right half-plane, indicating that the Kreiss constant increases as the viscosity decreases. As aesthetically pleasing as these plots are, it is difficult to obtain any quantitative information from them, so table 3.1 lists approximations to the Kreiss constant,

$$K(M) = \sup_{\varepsilon > 0} \frac{\alpha_\varepsilon(M)}{\varepsilon},$$

for each ν, where $\alpha_\varepsilon(M)$ denotes the ε-pseudospectral abscissa. From this we can clearly see how decreasing the viscosity raises the level of transient growth in the solution. For a benign value of the viscosity like 1/2, maximum energy growth is only guaranteed to reach about 1.66 times the energy of the initial state, but for ν = 1/50 the solution's maximal energy growth is guaranteed to reach a level of about 7.8 times the initial value. Therefore, we can feel fairly comfortable using the linearized system (3.8) to model the step flow for viscosities of 1/20 or larger, since the guaranteed growth remains within a factor of 2 of the initial energy, but for smaller viscosities the energy can grow substantially from the initial value, making linear analysis an unwise choice.

Table 3.1: Approximate Kreiss constants for various step flow viscosities.

ν                 1/2    1/20   1/40   1/50
Approximate K(M)  1.66   1.757  5.47   7.836

Next we will look at similar results for flow around an obstacle.
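The Kreiss-constant approximation used above can be sketched on a small example. The code below is illustrative only: the 2×2 matrix M is a toy nonnormal stable matrix, not a Navier-Stokes operator, and the pseudospectral abscissa is estimated by a crude scan of the real axis rather than by a proper criss-cross algorithm. It produces a lower bound for K(M) and checks it against the sampled transient growth of ‖e^{tM}‖₂, which the Kreiss matrix theorem guarantees is at least K(M).

```python
import numpy as np
from scipy.linalg import expm, svdvals

# Toy nonnormal stable matrix with strong transient growth
# (eigenvalues -0.1 and -0.2, large off-diagonal coupling).
M = np.array([[-0.1, 5.0],
              [ 0.0, -0.2]])
I = np.eye(2)

def alpha_eps(eps, xs=np.linspace(0.0, 1.0, 2001)):
    # Pseudospectral abscissa estimate: rightmost real z with
    # sigma_min(zI - M) <= eps.  Scanning only the real axis gives a
    # lower bound, which suffices for this upper-triangular example.
    inside = [x for x in xs if svdvals(x * I - M)[-1] <= eps]
    return max(inside) if inside else -np.inf

# Lower bound on K(M) = sup_{eps > 0} alpha_eps(M) / eps.
eps_list = [0.005, 0.01, 0.02, 0.03, 0.05, 0.1]
K_est = max(alpha_eps(e) / e for e in eps_list)

# Maximum transient growth sup_t ||exp(t M)||_2, sampled in t.
growth = max(np.linalg.norm(expm(t * M), 2)
             for t in np.linspace(0.0, 60.0, 600))

# The Kreiss matrix theorem guarantees K(M) <= sup_t ||exp(t M)||.
assert 1.0 <= K_est <= growth
print(f"K(M) >= {K_est:.2f}, max sampled growth {growth:.2f}")
```

For matrices of the sizes encountered here, one would instead use sparse eigensolvers and software such as EigTool [9] rather than a dense grid scan.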

3.4.2 Flow Around an Obstacle

The obstacle flow problem involves a viscous fluid flowing around a rectangular object in the center of the flow path. A typical steady state solution is shown in figure 3.10. In a similar fashion to the previous section, figures 3.11-3.14 show the pseudospectra plots of the M matrix for viscosities ν = 1/2, 1/20, 1/40, and 1/45. We did not use ν = 1/50 as in the step flow problem because IFISS did not converge to a steady state solution for that value.

Figure 3.10: Flow around an obstacle (streamlines and pressure field).

Figure 3.11: Pseudospectra for obstacle flow, ν = 1/2.

Figure 3.12: Pseudospectra for obstacle flow, ν = 1/20.

Figure 3.13: Pseudospectra for obstacle flow, ν = 1/40.

Figure 3.14: Pseudospectra for obstacle flow, ν = 1/45.

Despite the fact that the pseudospectral contours corresponding to larger ε values extend farther into the right half-plane than for the step, table 3.2 shows that the approximate Kreiss constants follow patterns similar to those of the step flow. We see once again that for larger viscosities the linearized system will serve us just fine, but for the smaller values we see enough potential growth that using a linearized system is not a viable choice.

Table 3.2: Approximate Kreiss constants for various obstacle flow viscosities.

ν                 1/2    1/20   1/40   1/45
Approximate K(M)  1.357  1.8    4.     5.88

3.5 Future Work

Our study of these two Navier-Stokes problems demonstrates how we can combine our method for solving DAEs from section 2 with results from pseudospectral theory to get a better understanding of the transient properties of fluid systems that are disturbed from their steady state. From this we can predict how valid it is to use the linearized system to model the fluid system. However, one key point that we have not considered in this paper is the convergence of the Kreiss constants for the Navier-Stokes matrices as we refine the grid. That is, are the values computed on grid 4 a reasonable approximation to the true values, or would they change substantially if we went to grid 5 or 6? These numerical experiments would require a great deal more time and computational power. Thus, a point of further research would be implementing these finer-grid experiments on more powerful machines, using techniques from sparse linear algebra.

It would also be interesting to investigate what conditions x(0) must satisfy in order to guarantee transient growth. We know that such initial conditions must exist, so having a way to determine whether a given initial state will or will not lead to transient growth would be an incredibly useful tool.

4 Conclusions

In this paper we have covered a great deal of ground. We developed a method for solving an arbitrary DAE via the Schur decomposition and proved that the solution we obtain is not affected by our parameter λ. This proof required us to dig deep into the structure of the orthogonal transformations of the Schur form and how they behave when we change λ. We then discussed how the pseudospectra of M affect the potential for transient growth in the solution, giving a three-dimensional example to make the connection clear. Equipped with these tools, we turned our attention to the linearized Navier-Stokes equations, deriving the appropriate change of coordinates so that the 2-norm of the solution we compute is equivalent to the energy norm of the solution to the original system. We then investigated the effect of the viscosity on the pseudospectra of M and, in turn, on the possible energy growth of the solution. This information allows us to be better informed about how valid it is to use the linearized system to analyze fluid stability.

References

[1] J. S. Baggett, T. A. Driscoll, L. N. Trefethen, A Mostly Linear Model of Transition to Turbulence, Physics of Fluids, 7, 833-838 (1995).

[2] S. L. Campbell, C. D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, 1979.

[3] H. C. Elman, D. Silvester, A. Wathen, Finite Elements and Fast Iterative Solvers, Oxford University Press, 2005.

[4] H. C. Elman, A. Ramage, D. J. Silvester, IFISS: A Computational Laboratory for Investigating Incompressible Flow Problems, SIAM Review, 56, 261-273 (2014).

[5] H. C. Elman, A. Ramage, D. J. Silvester, Algorithm 866: IFISS, A MATLAB Toolbox for Modelling Incompressible Flow, ACM Transactions on Mathematical Software, 33 (2007).

[6] L. N. Trefethen, M. Embree, Spectra and Pseudospectra, Princeton University Press, 2005.

[7] M. Gunzburger, Finite Element Methods for Viscous Incompressible Fluids, Academic Press, 1989.

[8] L. N. Trefethen, A. E. Trefethen, S. C. Reddy, T. A. Driscoll, Hydrodynamic Stability without Eigenvalues, Science, 261, 578-584 (1993).

[9] T. G. Wright, EigTool (2002), software available at https://github.com/eigtool.