Proper Orthogonal Decomposition: Theory and Reduced-Order Modelling


Proper Orthogonal Decomposition: Theory and Reduced-Order Modelling

S. Volkwein

Lecture Notes (August 27, 2013)

University of Konstanz, Department of Mathematics and Statistics


Contents

Chapter 1. The POD Method in $\mathbb R^m$
  1. POD and Singular Value Decomposition (SVD)
  2. Properties of the POD Basis
  3. The POD Method with a Weighted Inner Product
  4. POD for Time-Dependent Systems
    4.1. Application of POD for Time-Dependent Systems
    4.2. The Continuous Version of the POD Method
  5. Exercises

Chapter 2. The POD Method for Partial Differential Equations
  1. POD for Parabolic Partial Differential Equations
    1.1. Linear Evolution Equations
    1.2. The Continuous POD Method for Linear Evolution Equations
    1.3. The Truth Approximation for Linear Evolution Problems
    1.4. POD for Nonlinear Evolution Equations
  2. POD for Parametrized Elliptic Partial Differential Equations
    2.1. Linear Elliptic Equations
    2.2. Extension to Nonlinear Elliptic Problems
  3. Exercises

Chapter 3. Reduced-Order Models for Finite-Dimensional Dynamical Systems
  1. Reduced-Order Modelling
  2. Error Analysis for the Reduced-Order Model
  3. Empirical Interpolation Method for Nonlinear Problems
  4. Exercises

Chapter 4. Balanced Truncation Method
  1. The linear-quadratic control problem
    1.1. The linear-quadratic regulator (LQR) problem
    1.2. The Hamilton-Jacobi-Bellman equation
    1.3. The state-feedback law for the LQR problem
  2. Balanced truncation
  3. Exercises

Chapter 5. The Appendix
  A. Linear and Compact Operators
  B. Function Spaces
  C. Evolution Problems
  D. Nonlinear Optimization

Bibliography

CHAPTER 1

The POD Method in $\mathbb R^m$

In this chapter we introduce the POD method in the Euclidean space $\mathbb R^m$. For an extension to the complex space $\mathbb C^m$ we refer the reader to [2], for instance. The goal is to find a proper orthonormal basis, the POD basis $\{\psi_i\}_{i=1}^\ell$ of rank $\ell$, for the snapshot set spanned by $n$ given vectors (the so-called snapshots) $y_1,\ldots,y_n\in\mathbb R^m$. We assume that $\ell\le\min\{m,n\}$ holds true. The POD method is formulated as a constrained optimization problem that is solved by a Lagrangian framework in Section 1. It turns out that the associated first-order necessary optimality conditions are strongly related to the singular value decomposition (SVD) of the rectangular matrix $Y\in\mathbb R^{m\times n}$ whose columns are given by the snapshots $y_j$, $1\le j\le n$. In Section 2 we present properties of the POD basis. Section 3 is devoted to the extension of the POD method to the Euclidean space $\mathbb R^m$ supplied with a weighted inner product. This is used later in the formulation of the POD method for discretized partial differential equations; see Section 1.3 of Chapter 2. In Section 4 we focus on $m$-dimensional systems of ordinary differential equations. We consider two different variants of the POD method: one variant utilizes the whole solution trajectory $y(t)$, $t\in[0,T]$, the other one makes use of the solution $y$ at certain time instances $0\le t_1<\ldots<t_n\le T$. The relationship between both variants is investigated.

1. POD and Singular Value Decomposition (SVD)

Let $Y=[y_1,\ldots,y_n]$ be a real-valued $m\times n$ matrix of rank $d\le\min\{m,n\}$ with columns $y_j\in\mathbb R^m$, $1\le j\le n$. Consequently,
\[ (1.1.1)\qquad \bar y=\frac1n\sum_{j=1}^n y_j \]
can be viewed as the column-averaged mean of the matrix $Y$. Singular value decomposition (SVD) [17] guarantees the existence of real numbers $\sigma_1\ge\sigma_2\ge\ldots\ge\sigma_d>0$ and orthogonal matrices $\Psi\in\mathbb R^{m\times m}$ with columns $\{\psi_i\}_{i=1}^m$ and $\Phi\in\mathbb R^{n\times n}$ with columns $\{\phi_i\}_{i=1}^n$ such that
\[ (1.1.2)\qquad \Psi^\top Y\Phi=\begin{pmatrix}D&0\\0&0\end{pmatrix}=:\Sigma\in\mathbb R^{m\times n}, \]
where $D=\operatorname{diag}(\sigma_1,\ldots,\sigma_d)\in\mathbb R^{d\times d}$, the zeros in (1.1.2) denote matrices of appropriate dimensions and $^\top$ stands for the transpose of a matrix (or vector). Moreover, the vectors $\{\psi_i\}_{i=1}^d$ and $\{\phi_i\}_{i=1}^d$ satisfy
\[ (1.1.3)\qquad Y\phi_i=\sigma_i\psi_i\quad\text{and}\quad Y^\top\psi_i=\sigma_i\phi_i\quad\text{for }i=1,\ldots,d. \]
They are eigenvectors of $YY^\top$ and $Y^\top Y$, respectively, with eigenvalues $\lambda_i=\sigma_i^2>0$, $i=1,\ldots,d$. The vectors $\{\psi_i\}_{i=d+1}^m$ and $\{\phi_i\}_{i=d+1}^n$ (if $d<m$ respectively $d<n$) are eigenvectors of $YY^\top$ and $Y^\top Y$ with eigenvalue $0$.
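The relations (1.1.2) and (1.1.3) are easy to verify numerically. The following NumPy sketch (not part of the original notes; the random matrix and tolerances are illustrative) checks (1.1.3) and the eigenvalue relation $\lambda_i=\sigma_i^2$:

    import numpy as np

    m, n = 6, 4
    rng = np.random.default_rng(1)
    Y = rng.standard_normal((m, n))          # snapshot matrix; rank d = min(m, n) a.s.

    Psi, sigma, PhiT = np.linalg.svd(Y)      # Y = Psi @ Sigma @ Phi^T
    Phi = PhiT.T
    d = int(np.sum(sigma > 1e-12))

    for i in range(d):
        # (1.1.3): Y phi_i = sigma_i psi_i and Y^T psi_i = sigma_i phi_i
        assert np.allclose(Y @ Phi[:, i], sigma[i] * Psi[:, i])
        assert np.allclose(Y.T @ Psi[:, i], sigma[i] * Phi[:, i])

    # the eigenvalues of Y Y^T are lambda_i = sigma_i^2
    lam = np.linalg.eigvalsh(Y @ Y.T)[::-1]  # eigvalsh returns ascending order
    assert np.allclose(lam[:d], sigma[:d] ** 2)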

From (1.1.2) we deduce that $Y=\Psi\Sigma\Phi^\top$. It follows that $Y$ can also be expressed as
\[ (1.1.4)\qquad Y=\Psi^d D(\Phi^d)^\top, \]
where the matrices $\Psi^d\in\mathbb R^{m\times d}$ and $\Phi^d\in\mathbb R^{n\times d}$ are given by
\[ \Psi^d_{ij}=\Psi_{ij}\ \text{for }1\le i\le m,\ 1\le j\le d,\qquad \Phi^d_{ij}=\Phi_{ij}\ \text{for }1\le i\le n,\ 1\le j\le d. \]
Setting $B^d=D(\Phi^d)^\top\in\mathbb R^{d\times n}$ we can write (1.1.4) in the form $Y=\Psi^dB^d$. Thus, the column space of $Y$ can be represented in terms of the $d$ linearly independent columns of $\Psi^d$. The coefficients in the expansion of the columns $y_j$, $j=1,\ldots,n$, in the basis $\{\psi_i\}_{i=1}^d$ are given by the $j$-th column of $B^d$. Since $\Psi$ is orthogonal, we find that
\[ y_j=\sum_{i=1}^d B^d_{ij}\psi_i=\sum_{i=1}^d\big(D(\Phi^d)^\top\big)_{ij}\psi_i=\sum_{i=1}^d\big(\underbrace{(\Psi^d)^\top\Psi^d}_{=I_d}D(\Phi^d)^\top\big)_{ij}\psi_i=\sum_{i=1}^d\big((\Psi^d)^\top Y\big)_{ij}\psi_i=\sum_{i=1}^d\Big(\underbrace{\sum_{k=1}^m\Psi^d_{ki}Y_{kj}}_{=\psi_i^\top y_j}\Big)\psi_i, \]
where $I_d\in\mathbb R^{d\times d}$ stands for the identity matrix and $\langle\cdot,\cdot\rangle_{\mathbb R^m}$ denotes the canonical inner product in $\mathbb R^m$. Thus,
\[ (1.1.5)\qquad y_j=\sum_{i=1}^d\langle y_j,\psi_i\rangle_{\mathbb R^m}\,\psi_i\quad\text{for }j=1,\ldots,n. \]
Let us now interpret SVD in terms of POD. One of the central issues of POD is the reduction of data by expressing their essential information with a few basis vectors. The problem of approximating all spatial coordinate vectors $y_j$ of $Y$ simultaneously by a single, normalized vector as well as possible can be expressed as
\[ (\mathrm P^1)\qquad \max_{\psi\in\mathbb R^m}\sum_{j=1}^n\big|\langle y_j,\psi\rangle_{\mathbb R^m}\big|^2\quad\text{subject to (s.t.)}\quad\|\psi\|_{\mathbb R^m}=1, \]
where $\|\psi\|_{\mathbb R^m}=\langle\psi,\psi\rangle_{\mathbb R^m}^{1/2}$ for $\psi\in\mathbb R^m$. Note that $(\mathrm P^1)$ is a constrained optimization problem that can be solved by considering first-order necessary optimality conditions; see Appendix D. For that purpose we want to write $(\mathrm P^1)$ in the standard form (P) introduced in Appendix D. We introduce the function $e:\mathbb R^m\to\mathbb R$ by $e(\psi)=1-\|\psi\|_{\mathbb R^m}^2$ for $\psi\in\mathbb R^m$. Then, the equality constraint in $(\mathrm P^1)$ can be expressed as $e(\psi)=0$. To ensure the existence of Lagrange multipliers a constraint qualification is needed. Notice that $\nabla e(\psi)=-2\psi$ is linearly independent if $\psi\ne0$ holds. In particular, a solution to $(\mathrm P^1)$ satisfies $\psi\ne0$. Thus,

any solution to $(\mathrm P^1)$ is a regular point; see Definition D.2. Let $L:\mathbb R^m\times\mathbb R\to\mathbb R$ be the Lagrange functional associated with $(\mathrm P^1)$, i.e.,
\[ L(\psi,\lambda)=\sum_{j=1}^n\big|\langle y_j,\psi\rangle_{\mathbb R^m}\big|^2+\lambda\big(1-\|\psi\|_{\mathbb R^m}^2\big)\quad\text{for }(\psi,\lambda)\in\mathbb R^m\times\mathbb R. \]
Suppose that $\psi\in\mathbb R^m$ is a solution to $(\mathrm P^1)$. Since $\psi$ is a regular point, we infer from Theorem D.4 that there exists a unique Lagrange multiplier $\lambda\in\mathbb R$ satisfying the first-order necessary optimality condition
\[ \nabla L(\psi,\lambda)\overset{!}{=}0\quad\text{in }\mathbb R^m\times\mathbb R. \]
We compute the gradient of $L$ with respect to $\psi$:
\[ \frac{\partial L}{\partial\psi_i}(\psi,\lambda)=\frac{\partial}{\partial\psi_i}\Big(\sum_{j=1}^n\Big(\sum_{k=1}^m Y_{kj}\psi_k\Big)^2+\lambda\Big(1-\sum_{k=1}^m\psi_k^2\Big)\Big)=2\sum_{j=1}^n\Big(\sum_{k=1}^m Y_{kj}\psi_k\Big)Y_{ij}-2\lambda\psi_i=2\sum_{k=1}^m\Big(\underbrace{\sum_{j=1}^n Y_{ij}Y^\top_{jk}}_{=(YY^\top)_{ik}}\Big)\psi_k-2\lambda\psi_i. \]
Thus,
\[ (1.1.6)\qquad \nabla_\psi L(\psi,\lambda)=2\,\big(YY^\top\psi-\lambda\psi\big)\overset{!}{=}0\quad\text{in }\mathbb R^m. \]
Equation (1.1.6) yields the eigenvalue problem
\[ (1.1.7a)\qquad YY^\top\psi=\lambda\psi\quad\text{in }\mathbb R^m. \]
Notice that $YY^\top\in\mathbb R^{m\times m}$ is a symmetric matrix satisfying
\[ \psi^\top(YY^\top)\psi=(Y^\top\psi)^\top Y^\top\psi=\|Y^\top\psi\|_{\mathbb R^n}^2\ge0\quad\text{for all }\psi\in\mathbb R^m. \]
Thus, $YY^\top$ is positive semi-definite. It follows that $YY^\top$ possesses $m$ nonnegative eigenvalues $\lambda_1\ge\lambda_2\ge\ldots\ge\lambda_m\ge0$ and the corresponding eigenvectors can be chosen such that they are pairwise orthonormal. From $\frac{\partial L}{\partial\lambda}(\psi,\lambda)\overset{!}{=}0$ in $\mathbb R$ we infer the constraint
\[ (1.1.7b)\qquad \|\psi\|_{\mathbb R^m}=1. \]
Due to SVD the vector $\psi_1$ solves (1.1.7) and
\[ \sum_{j=1}^n\big|\langle y_j,\psi_1\rangle_{\mathbb R^m}\big|^2=\sum_{j=1}^n\langle y_j,\psi_1\rangle_{\mathbb R^m}\langle y_j,\psi_1\rangle_{\mathbb R^m}=\Big\langle\sum_{j=1}^n\Big(\sum_{k=1}^m Y_{kj}(\psi_1)_k\Big)y_j,\psi_1\Big\rangle_{\mathbb R^m}=\langle YY^\top\psi_1,\psi_1\rangle_{\mathbb R^m}=\lambda_1\,\|\psi_1\|_{\mathbb R^m}^2=\lambda_1, \]
where $(\psi_1)_k$ denotes the $k$-th component of the vector $\psi_1$.
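The variational characterization of $\psi_1$ can also be observed numerically. In the following NumPy sketch (illustrative, with random data) no random unit vector achieves a larger value of $\sum_j|\langle y_j,\psi\rangle_{\mathbb R^m}|^2$ than the first left singular vector, whose value is $\sigma_1^2=\lambda_1$:

    import numpy as np

    rng = np.random.default_rng(2)
    Y = rng.standard_normal((5, 7))
    Psi, sigma, _ = np.linalg.svd(Y)
    psi1 = Psi[:, 0]                             # first left singular vector

    def objective(psi):
        return np.sum((Y.T @ psi) ** 2)          # sum_j <y_j, psi>^2

    trials = rng.standard_normal((100, 5))
    trials /= np.linalg.norm(trials, axis=1, keepdims=True)
    assert all(objective(v) <= objective(psi1) + 1e-12 for v in trials)
    assert np.isclose(objective(psi1), sigma[0] ** 2)   # argmax value is lambda_1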

Note that $\nabla_{\psi\psi}L(\psi,\lambda)=2\,(YY^\top-\lambda I_m)\in\mathbb R^{m\times m}$ holds. Let $\psi\in\mathbb R^m$ be chosen arbitrarily. Since $YY^\top$ is symmetric, there exist $m$ orthonormal eigenvectors $\psi_1,\ldots,\psi_m\in\mathbb R^m$ of $YY^\top$ satisfying $YY^\top\psi_i=\lambda_i\psi_i$ for $1\le i\le m$. Then, we can write $\psi$ in the form
\[ \psi=\sum_{i=1}^m\langle\psi,\psi_i\rangle_{\mathbb R^m}\psi_i. \]
At $(\psi_1,\lambda_1)$ we conclude from $\lambda_1\ge\lambda_2\ge\ldots\ge\lambda_m\ge0$ that
\[ \big\langle\psi,\nabla_{\psi\psi}L(\psi_1,\lambda_1)\psi\big\rangle_{\mathbb R^m}=2\sum_{i=1}^m\sum_{j=1}^m\langle\psi,\psi_i\rangle_{\mathbb R^m}\langle\psi,\psi_j\rangle_{\mathbb R^m}\big\langle\psi_i,\big(YY^\top-\lambda_1I_m\big)\psi_j\big\rangle_{\mathbb R^m} \]
\[ =2\sum_{i=1}^m\sum_{j=1}^m\big(\lambda_j-\lambda_1\big)\langle\psi,\psi_i\rangle_{\mathbb R^m}\langle\psi,\psi_j\rangle_{\mathbb R^m}\langle\psi_i,\psi_j\rangle_{\mathbb R^m}=2\sum_{i=1}^m\big(\lambda_i-\lambda_1\big)\big|\langle\psi,\psi_i\rangle_{\mathbb R^m}\big|^2\le0. \]
Thus, $(\psi_1,\lambda_1)$ satisfies the second-order necessary optimality conditions for a maximum, but not the sufficient ones; compare Theorems D.5 and D.6. We next prove that $\psi_1$ actually solves $(\mathrm P^1)$. Suppose that $\psi\in\mathbb R^m$ is an arbitrary vector with $\|\psi\|_{\mathbb R^m}=1$. Since $\{\psi_i\}_{i=1}^m$ is an orthonormal basis in $\mathbb R^m$, we have
\[ \psi=\sum_{i=1}^m\langle\psi,\psi_i\rangle_{\mathbb R^m}\psi_i. \]
Thus,
\[ \sum_{j=1}^n\big|\langle y_j,\psi\rangle_{\mathbb R^m}\big|^2=\sum_{j=1}^n\Big\langle y_j,\sum_{i=1}^m\langle\psi,\psi_i\rangle_{\mathbb R^m}\psi_i\Big\rangle_{\mathbb R^m}\Big\langle y_j,\sum_{k=1}^m\langle\psi,\psi_k\rangle_{\mathbb R^m}\psi_k\Big\rangle_{\mathbb R^m}=\sum_{i=1}^m\sum_{k=1}^m\Big(\sum_{j=1}^n\langle y_j,\psi_i\rangle_{\mathbb R^m}\langle y_j,\psi_k\rangle_{\mathbb R^m}\Big)\langle\psi,\psi_i\rangle_{\mathbb R^m}\langle\psi,\psi_k\rangle_{\mathbb R^m} \]
\[ =\sum_{i=1}^m\sum_{k=1}^m\Big\langle\underbrace{\sum_{j=1}^n\langle y_j,\psi_i\rangle_{\mathbb R^m}y_j}_{=YY^\top\psi_i=\lambda_i\psi_i},\psi_k\Big\rangle_{\mathbb R^m}\langle\psi,\psi_i\rangle_{\mathbb R^m}\langle\psi,\psi_k\rangle_{\mathbb R^m}=\sum_{i=1}^m\sum_{k=1}^m\lambda_i\underbrace{\langle\psi_i,\psi_k\rangle_{\mathbb R^m}}_{=\lambda_i\delta_{ik}/\lambda_i}\langle\psi,\psi_i\rangle_{\mathbb R^m}\langle\psi,\psi_k\rangle_{\mathbb R^m} \]
\[ =\sum_{i=1}^m\lambda_i\big|\langle\psi,\psi_i\rangle_{\mathbb R^m}\big|^2\le\lambda_1\sum_{i=1}^m\big|\langle\psi,\psi_i\rangle_{\mathbb R^m}\big|^2=\lambda_1\,\|\psi\|_{\mathbb R^m}^2=\lambda_1=\sum_{j=1}^n\big|\langle y_j,\psi_1\rangle_{\mathbb R^m}\big|^2. \]

Consequently, $\psi_1$ solves $(\mathrm P^1)$ and $\operatorname{argmax}(\mathrm P^1)=\sigma_1^2=\lambda_1$. If we look for a second vector, orthogonal to $\psi_1$, that again describes the data set $\{y_j\}_{j=1}^n$ as well as possible, then we need to solve
\[ (\mathrm P^2)\qquad \max_{\psi\in\mathbb R^m}\sum_{j=1}^n\big|\langle y_j,\psi\rangle_{\mathbb R^m}\big|^2\quad\text{s.t.}\quad\|\psi\|_{\mathbb R^m}=1\ \text{ and }\ \langle\psi,\psi_1\rangle_{\mathbb R^m}=0. \]
SVD implies that $\psi_2$ is a solution to $(\mathrm P^2)$ and $\operatorname{argmax}(\mathrm P^2)=\sigma_2^2=\lambda_2$. In fact, $\psi_2$ solves the first-order necessary optimality conditions (1.1.7), and for
\[ \psi=\sum_{i=2}^m\langle\psi,\psi_i\rangle_{\mathbb R^m}\psi_i\in\operatorname{span}\{\psi_1\}^\perp \]
we have
\[ \sum_{j=1}^n\big|\langle y_j,\psi\rangle_{\mathbb R^m}\big|^2\le\lambda_2=\sum_{j=1}^n\big|\langle y_j,\psi_2\rangle_{\mathbb R^m}\big|^2. \]
Clearly, this procedure can be continued by finite induction. We summarize our results in the following theorem.

Theorem 1.1.1. Let $Y=[y_1,\ldots,y_n]\in\mathbb R^{m\times n}$ be a given matrix with rank $d\le\min\{m,n\}$. Further, let $Y=\Psi\Sigma\Phi^\top$ be the singular value decomposition of $Y$, where $\Psi=[\psi_1,\ldots,\psi_m]\in\mathbb R^{m\times m}$, $\Phi=[\phi_1,\ldots,\phi_n]\in\mathbb R^{n\times n}$ are orthogonal matrices and the matrix $\Sigma\in\mathbb R^{m\times n}$ has the form (1.1.2). Then, for any $\ell\in\{1,\ldots,d\}$ the solution to
\[ (\mathrm P^\ell)\qquad \max_{\tilde\psi_1,\ldots,\tilde\psi_\ell\in\mathbb R^m}\sum_{i=1}^\ell\sum_{j=1}^n\big|\langle y_j,\tilde\psi_i\rangle_{\mathbb R^m}\big|^2\quad\text{s.t.}\quad\langle\tilde\psi_i,\tilde\psi_j\rangle_{\mathbb R^m}=\delta_{ij}\ \text{for }1\le i,j\le\ell \]
is given by the singular vectors $\{\psi_i\}_{i=1}^\ell$, i.e., by the first $\ell$ columns of $\Psi$. In $(\mathrm P^\ell)$ we denote by $\delta_{ij}$ the Kronecker symbol satisfying $\delta_{ij}=1$ for $i=j$ and $\delta_{ij}=0$ otherwise. Moreover,
\[ (1.1.8)\qquad \operatorname{argmax}(\mathrm P^\ell)=\sum_{i=1}^\ell\sigma_i^2=\sum_{i=1}^\ell\lambda_i. \]

Proof. Since $(\mathrm P^\ell)$ is an equality constrained optimization problem, we introduce the Lagrangian (see Appendix D)
\[ L:\underbrace{\mathbb R^m\times\ldots\times\mathbb R^m}_{\ell\text{-times}}\times\,\mathbb R^{\ell\times\ell}\to\mathbb R \]
by
\[ L(\psi_1,\ldots,\psi_\ell,\Lambda)=\sum_{i=1}^\ell\sum_{j=1}^n\big|\langle y_j,\psi_i\rangle_{\mathbb R^m}\big|^2+\sum_{i,j=1}^\ell\lambda_{ij}\big(\delta_{ij}-\langle\psi_i,\psi_j\rangle_{\mathbb R^m}\big) \]
for $\psi_1,\ldots,\psi_\ell\in\mathbb R^m$ and $\Lambda=((\lambda_{ij}))\in\mathbb R^{\ell\times\ell}$. First-order necessary optimality conditions for $(\mathrm P^\ell)$ are given by
\[ (1.1.9)\qquad \frac{\partial L}{\partial\psi_k}(\psi_1,\ldots,\psi_\ell,\Lambda)\,\delta\psi_k=0\quad\text{for all }\delta\psi_k\in\mathbb R^m\text{ and }k\in\{1,\ldots,\ell\}. \]

From (1.1.9) we infer that
\[ 0=2\sum_{j=1}^n\langle y_j,\psi_k\rangle_{\mathbb R^m}\langle y_j,\delta\psi_k\rangle_{\mathbb R^m}-\sum_{i=1}^\ell\big(\lambda_{ik}+\lambda_{ki}\big)\langle\psi_i,\delta\psi_k\rangle_{\mathbb R^m}=\Big\langle 2\sum_{j=1}^n\langle y_j,\psi_k\rangle_{\mathbb R^m}y_j-\sum_{i=1}^\ell\big(\lambda_{ik}+\lambda_{ki}\big)\psi_i,\,\delta\psi_k\Big\rangle_{\mathbb R^m} \]
for all $\delta\psi_k\in\mathbb R^m$. Hence,
\[ (1.1.10)\qquad \sum_{j=1}^n\langle y_j,\psi_k\rangle_{\mathbb R^m}\,y_j=\frac12\sum_{i=1}^\ell\big(\lambda_{ik}+\lambda_{ki}\big)\psi_i\quad\text{in }\mathbb R^m\text{ for all }k\in\{1,\ldots,\ell\}. \]
Note that $YY^\top\psi=\sum_{j=1}^n\langle y_j,\psi\rangle_{\mathbb R^m}\,y_j$ for $\psi\in\mathbb R^m$. Thus, condition (1.1.10) can be expressed as
\[ (1.1.11)\qquad YY^\top\psi_k=\frac12\sum_{i=1}^\ell\big(\lambda_{ik}+\lambda_{ki}\big)\psi_i\quad\text{in }\mathbb R^m\text{ for all }k\in\{1,\ldots,\ell\}. \]
Now we proceed by induction. For $\ell=1$ we have $k=1$. It follows from (1.1.11) that
\[ (1.1.12)\qquad YY^\top\psi_1=\lambda_1\psi_1\quad\text{in }\mathbb R^m\text{ with }\lambda_1=\lambda_{11}. \]
Next we suppose that for $\ell\ge1$ the first-order optimality conditions are given by
\[ (1.1.13)\qquad YY^\top\psi_k=\lambda_k\psi_k\quad\text{in }\mathbb R^m\text{ for all }k\in\{1,\ldots,\ell\}. \]
We want to show that the first-order necessary optimality conditions for a POD basis $\{\psi_i\}_{i=1}^{\ell+1}$ of rank $\ell+1$ are given by
\[ (1.1.14)\qquad YY^\top\psi_k=\lambda_k\psi_k\quad\text{in }\mathbb R^m\text{ for all }k\in\{1,\ldots,\ell+1\}. \]
By assumption we have (1.1.13). Thus, we only have to prove that
\[ (1.1.15)\qquad YY^\top\psi_{\ell+1}=\lambda_{\ell+1}\psi_{\ell+1}\quad\text{in }\mathbb R^m. \]
Due to (1.1.11) we have
\[ (1.1.16)\qquad YY^\top\psi_{\ell+1}=\frac12\sum_{i=1}^{\ell+1}\big(\lambda_{i,\ell+1}+\lambda_{\ell+1,i}\big)\psi_i\quad\text{in }\mathbb R^m. \]

Since $\{\psi_i\}_{i=1}^{\ell+1}$ is a POD basis, we have $\langle\psi_{\ell+1},\psi_j\rangle_{\mathbb R^m}=0$ for $1\le j\le\ell$. Using (1.1.13) and the symmetry of $YY^\top$ we have for any $j\in\{1,\ldots,\ell\}$
\[ 0=\lambda_j\,\langle\psi_{\ell+1},\psi_j\rangle_{\mathbb R^m}=\langle\psi_{\ell+1},YY^\top\psi_j\rangle_{\mathbb R^m}=\langle YY^\top\psi_{\ell+1},\psi_j\rangle_{\mathbb R^m}=\frac12\sum_{i=1}^{\ell+1}\big(\lambda_{i,\ell+1}+\lambda_{\ell+1,i}\big)\langle\psi_i,\psi_j\rangle_{\mathbb R^m}=\frac12\,\big(\lambda_{j,\ell+1}+\lambda_{\ell+1,j}\big). \]
This gives
\[ (1.1.17)\qquad \lambda_{\ell+1,i}=-\lambda_{i,\ell+1}\quad\text{for any }i\in\{1,\ldots,\ell\}. \]
Inserting (1.1.17) into (1.1.16) we obtain
\[ YY^\top\psi_{\ell+1}=\frac12\sum_{i=1}^{\ell}\big(\lambda_{i,\ell+1}+\lambda_{\ell+1,i}\big)\psi_i+\lambda_{\ell+1,\ell+1}\psi_{\ell+1}=\frac12\sum_{i=1}^{\ell}\big(\lambda_{i,\ell+1}-\lambda_{i,\ell+1}\big)\psi_i+\lambda_{\ell+1,\ell+1}\psi_{\ell+1}=\lambda_{\ell+1,\ell+1}\psi_{\ell+1}. \]
Setting $\lambda_{\ell+1}=\lambda_{\ell+1,\ell+1}$ we obtain (1.1.15). Summarizing, the necessary optimality conditions for $(\mathrm P^\ell)$ are given by the symmetric $m\times m$ eigenvalue problem
\[ (1.1.18)\qquad YY^\top\psi_i=\lambda_i\psi_i\quad\text{for }i=1,\ldots,\ell. \]
It follows from SVD that $\{\psi_i\}_{i=1}^\ell$ solves (1.1.18). The proof that $\{\psi_i\}_{i=1}^\ell$ is a solution to $(\mathrm P^\ell)$ and that $\operatorname{argmax}(\mathrm P^\ell)=\sum_{i=1}^\ell\sigma_i^2$ holds is analogous to the proof for $(\mathrm P^1)$; see Exercise 1.5.5.

Motivated by the previous theorem we give the next definition. Moreover, in Algorithm 1 the computation of a POD basis of rank $\ell$ is summarized.

Definition 1.1.2. For $\ell\in\{1,\ldots,d\}$ the vectors $\{\psi_i\}_{i=1}^\ell$ are called POD basis of rank $\ell$.

Algorithm 1 (POD basis of rank $\ell$)
Require: Snapshots $\{y_j\}_{j=1}^n\subset\mathbb R^m$, POD rank $\ell\le d$ and flag for the solver;
1: Set $Y=[y_1,\ldots,y_n]\in\mathbb R^{m\times n}$;
2: if flag $=0$ then
3: Compute singular value decomposition $[\Psi,\Sigma,\Phi]=\operatorname{svd}(Y)$;
4: Set $\psi_i=\Psi_{\cdot,i}\in\mathbb R^m$ and $\lambda_i=\Sigma_{ii}^2$ for $i=1,\ldots,\ell$;
5: else if flag $=1$ then
6: Determine $R=YY^\top\in\mathbb R^{m\times m}$;
7: Compute eigenvalue decomposition $[\Psi,\Lambda]=\operatorname{eig}(R)$;
8: Set $\psi_i=\Psi_{\cdot,i}\in\mathbb R^m$ and $\lambda_i=\Lambda_{ii}$ for $i=1,\ldots,\ell$;
9: end if
10: return POD basis $\{\psi_i\}_{i=1}^\ell$ and eigenvalues $\{\lambda_i\}_{i=1}^\ell$;
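A compact NumPy realization of Algorithm 1 might look as follows (a sketch, not part of the original notes; it assumes the snapshots are already collected in the columns of Y, and `flag` selects the solver as in the algorithm):

    import numpy as np

    def pod_basis(Y, ell, flag=0):
        """Compute a POD basis of rank ell for the snapshot matrix Y (m x n)."""
        if flag == 0:
            # SVD variant: Y = Psi @ diag(sigma) @ Phi^T, lambda_i = sigma_i^2
            Psi, sigma, _ = np.linalg.svd(Y, full_matrices=False)
            return Psi[:, :ell], sigma[:ell] ** 2
        else:
            # eigenvalue variant: symmetric m x m problem Y Y^T psi = lambda psi
            lam, Psi = np.linalg.eigh(Y @ Y.T)
            lam, Psi = lam[::-1], Psi[:, ::-1]   # eigh returns ascending order
            return Psi[:, :ell], lam[:ell]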

2. Properties of the POD Basis

The following result states that for every $\ell\le d$ the approximation of the columns of $Y$ by the first $\ell$ singular vectors $\{\psi_i\}_{i=1}^\ell$ is optimal in the mean among all rank-$\ell$ approximations to the columns of $Y$.

Corollary 1.2.1 (Optimality of the POD basis). Let all hypotheses of Theorem 1.1.1 be satisfied. Suppose that $\hat\Psi^d\in\mathbb R^{m\times d}$ denotes a matrix with pairwise orthonormal vectors $\hat\psi_i$ and that the expansion of the columns of $Y$ in the basis $\{\hat\psi_i\}_{i=1}^d$ is given by
\[ Y=\hat\Psi^dC^d,\quad\text{where }C^d_{ij}=\langle\hat\psi_i,y_j\rangle_{\mathbb R^m}\text{ for }1\le i\le d,\ 1\le j\le n. \]
Then for every $\ell\in\{1,\ldots,d\}$ we have
\[ (1.2.1)\qquad \|Y-\Psi^\ell B^\ell\|_F\le\|Y-\hat\Psi^\ell C^\ell\|_F. \]
In (1.2.1), $\|\cdot\|_F$ denotes the Frobenius norm given by
\[ \|A\|_F=\Big(\sum_{i=1}^m\sum_{j=1}^n|A_{ij}|^2\Big)^{1/2}=\sqrt{\operatorname{trace}(A^\top A)}\quad\text{for }A\in\mathbb R^{m\times n}, \]
the matrix $\Psi^\ell$ denotes the first $\ell\le d$ columns of $\Psi$, $B^\ell$ the first $\ell$ rows of $B^d$, and similarly for $\hat\Psi^\ell$ and $C^\ell$. Moreover, $\operatorname{trace}(A)$ denotes the sum over the diagonal elements of a given matrix $A$.

Proof of Corollary 1.2.1. From Exercise 1.5.6 it follows that
\[ \|Y-\hat\Psi^\ell C^\ell\|_F^2=\|\hat\Psi^d(C^d-C^\ell)\|_F^2=\|C^d-C^\ell\|_F^2=\sum_{i=\ell+1}^d\sum_{j=1}^n|C^d_{ij}|^2, \]
where $C^\ell\in\mathbb R^{d\times n}$ results from $C^d\in\mathbb R^{d\times n}$ by replacing the last $d-\ell$ rows by $0$. Similarly,
\[ (1.2.2)\qquad \|Y-\Psi^\ell B^\ell\|_F^2=\|B^d-B^\ell\|_F^2=\sum_{i=\ell+1}^d\sum_{j=1}^n|B^d_{ij}|^2=\sum_{i=\ell+1}^d\sum_{j=1}^n\big|\langle y_j,\psi_i\rangle_{\mathbb R^m}\big|^2=\sum_{i=\ell+1}^d\langle YY^\top\psi_i,\psi_i\rangle_{\mathbb R^m}=\sum_{i=\ell+1}^d\sigma_i^2. \]
By Theorem 1.1.1 the vectors $\psi_1,\ldots,\psi_\ell$ solve $(\mathrm P^\ell)$. From (1.2.2),
\[ \|Y\|_F^2=\|\hat\Psi^dC^d\|_F^2=\|C^d\|_F^2=\sum_{i=1}^d\sum_{j=1}^n|C^d_{ij}|^2\quad\text{and}\quad\|Y\|_F^2=\|\Psi^dB^d\|_F^2=\|B^d\|_F^2=\sum_{i=1}^d\sum_{j=1}^n|B^d_{ij}|^2=\sum_{i=1}^d\langle YY^\top\psi_i,\psi_i\rangle_{\mathbb R^m}=\sum_{i=1}^d\sigma_i^2 \]

we infer that
\[ \|Y-\Psi^\ell B^\ell\|_F^2=\sum_{i=\ell+1}^d\sigma_i^2=\|Y\|_F^2-\sum_{i=1}^\ell\sigma_i^2\le\|Y\|_F^2-\sum_{i=1}^\ell\sum_{j=1}^n\big|\langle y_j,\hat\psi_i\rangle_{\mathbb R^m}\big|^2=\sum_{i=\ell+1}^d\sum_{j=1}^n|C^d_{ij}|^2=\|Y-\hat\Psi^\ell C^\ell\|_F^2, \]
which gives (1.2.1). Notice that
\[ \|Y-\hat\Psi^\ell C^\ell\|_F^2=\sum_{i=1}^m\sum_{j=1}^n\Big|Y_{ij}-\sum_{k=1}^\ell\hat\Psi^\ell_{ik}C^\ell_{kj}\Big|^2=\sum_{j=1}^n\Big\|y_j-\sum_{k=1}^\ell\langle y_j,\hat\psi_k\rangle_{\mathbb R^m}\hat\psi_k\Big\|_{\mathbb R^m}^2. \]
Analogously,
\[ \|Y-\Psi^\ell B^\ell\|_F^2=\sum_{j=1}^n\Big\|y_j-\sum_{k=1}^\ell\langle y_j,\psi_k\rangle_{\mathbb R^m}\psi_k\Big\|_{\mathbb R^m}^2. \]
Thus, (1.2.1) implies that
\[ \sum_{j=1}^n\Big\|y_j-\sum_{k=1}^\ell\langle y_j,\psi_k\rangle_{\mathbb R^m}\psi_k\Big\|_{\mathbb R^m}^2\le\sum_{j=1}^n\Big\|y_j-\sum_{k=1}^\ell\langle y_j,\hat\psi_k\rangle_{\mathbb R^m}\hat\psi_k\Big\|_{\mathbb R^m}^2 \]
for any other set $\{\hat\psi_i\}_{i=1}^\ell$ of $\ell$ pairwise orthonormal vectors. Hence, it follows from Corollary 1.2.1 that the POD basis of rank $\ell$ can also be determined by solving
\[ (1.2.3)\qquad \min_{\tilde\psi_1,\ldots,\tilde\psi_\ell\in\mathbb R^m}\sum_{j=1}^n\Big\|y_j-\sum_{i=1}^\ell\langle y_j,\tilde\psi_i\rangle_{\mathbb R^m}\tilde\psi_i\Big\|_{\mathbb R^m}^2\quad\text{s.t.}\quad\langle\tilde\psi_i,\tilde\psi_j\rangle_{\mathbb R^m}=\delta_{ij}\ \text{for }1\le i,j\le\ell. \]

Remark 1.2.2. We compare the first-order optimality conditions for $(\mathrm P^\ell)$ and (1.2.3). Let $\{\psi_i\}_{i=1}^\ell$ be a given set of orthonormal vectors in $\mathbb R^m$, i.e.,
\[ (1.2.4)\qquad \langle\psi_i,\psi_k\rangle_{\mathbb R^m}=\delta_{ik}\quad\text{for }i,k\in\{1,\ldots,\ell\}. \]
For any index $k\in\{1,\ldots,\ell\}$ and any direction $\psi_\delta\in\mathbb R^m$ we have
\[ 0=\Big(\frac{\partial\,\delta_{ik}}{\partial\psi_k}\Big)\psi_\delta=\Big(\frac{\partial}{\partial\psi_k}\langle\psi_i,\psi_k\rangle_{\mathbb R^m}\Big)\psi_\delta=\begin{cases}\langle\psi_i,\psi_\delta\rangle_{\mathbb R^m}&\text{for }i\in\{1,\ldots,\ell\}\setminus\{k\},\\2\,\langle\psi_i,\psi_\delta\rangle_{\mathbb R^m}&\text{for }i=k.\end{cases} \]
Hence,
\[ (1.2.5)\qquad \langle\psi_i,\psi_\delta\rangle_{\mathbb R^m}=0\quad\text{for }i\in\{1,\ldots,\ell\}\text{ and all admissible directions }\psi_\delta\in\mathbb R^m. \]

Suppose that $y_1,\ldots,y_n\in\mathbb R^m$ are the given snapshots. For $\ell\in\{1,\ldots,m\}$ we set
\[ z_j=z_j(\psi_1,\ldots,\psi_\ell)=y_j-\sum_{i=1}^\ell\langle y_j,\psi_i\rangle_{\mathbb R^m}\psi_i\in\mathbb R^m\quad\text{for }j=1,\ldots,n. \]
Let
\[ (1.2.6)\qquad J(\psi_1,\ldots,\psi_\ell)=\sum_{j=1}^n\|z_j\|_{\mathbb R^m}^2. \]
Using (1.2.4) we have
\[ (1.2.7)\qquad \|z_j\|_{\mathbb R^m}^2=\langle z_j,z_j\rangle_{\mathbb R^m}=\langle y_j,y_j\rangle_{\mathbb R^m}-2\sum_{i=1}^\ell\big|\langle y_j,\psi_i\rangle_{\mathbb R^m}\big|^2+\sum_{i=1}^\ell\sum_{k=1}^\ell\langle y_j,\psi_i\rangle_{\mathbb R^m}\langle y_j,\psi_k\rangle_{\mathbb R^m}\langle\psi_i,\psi_k\rangle_{\mathbb R^m}=\|y_j\|_{\mathbb R^m}^2-\sum_{i=1}^\ell\big|\langle y_j,\psi_i\rangle_{\mathbb R^m}\big|^2. \]
Combining (1.2.6) and (1.2.7) we derive
\[ (1.2.8)\qquad J(\psi_1,\ldots,\psi_\ell)=\sum_{j=1}^n\|z_j\|_{\mathbb R^m}^2=\sum_{j=1}^n\Big(\|y_j\|_{\mathbb R^m}^2-\sum_{i=1}^\ell\big|\langle y_j,\psi_i\rangle_{\mathbb R^m}\big|^2\Big). \]
For any $k\in\{1,\ldots,\ell\}$ we will consider the derivatives
\[ \frac{\partial}{\partial\psi_k}\Big(\sum_{j=1}^n\Big(\|y_j\|_{\mathbb R^m}^2-\sum_{i=1}^\ell\big|\langle y_j,\psi_i\rangle_{\mathbb R^m}\big|^2\Big)\Big)\quad\text{and}\quad\frac{\partial}{\partial\psi_k}\Big(\sum_{j=1}^n\big\|z_j(\psi_1,\ldots,\psi_\ell)\big\|_{\mathbb R^m}^2\Big). \]
Due to (1.2.8) both derivatives must be the same. Notice that
\[ \frac{\partial J}{\partial\psi_k}(\psi_1,\ldots,\psi_\ell)\,\psi_\delta=\frac{\partial}{\partial\psi_k}\Big(\sum_{j=1}^n\Big(\|y_j\|_{\mathbb R^m}^2-\sum_{i=1}^\ell\big|\langle y_j,\psi_i\rangle_{\mathbb R^m}\big|^2\Big)\Big)\psi_\delta=-2\sum_{j=1}^n\langle y_j,\psi_k\rangle_{\mathbb R^m}\langle y_j,\psi_\delta\rangle_{\mathbb R^m} \]

for any direction $\psi_\delta\in\mathbb R^m$ and for $1\le k\le\ell$. Note that $\sum_{j=1}^n\langle y_j,\psi\rangle_{\mathbb R^m}\,y_j=YY^\top\psi$ for $\psi\in\mathbb R^m$. Then, we find that
\[ (1.2.9)\qquad \frac{\partial J}{\partial\psi_k}(\psi_1,\ldots,\psi_\ell)=-2\,YY^\top\psi_k\quad\text{for }1\le k\le\ell. \]
On the other hand we have
\[ \frac{\partial z_j}{\partial\psi_k}\,\psi_\delta=-\langle y_j,\psi_k\rangle_{\mathbb R^m}\psi_\delta-\langle y_j,\psi_\delta\rangle_{\mathbb R^m}\psi_k \]
for $1\le k\le\ell$ and $\psi_\delta\in\mathbb R^m$. Using (1.2.4) and (1.2.5) we find that
\[ \frac{\partial}{\partial\psi_k}\big(\|z_j\|_{\mathbb R^m}^2\big)\psi_\delta=\frac{\partial}{\partial\psi_k}\big(\langle z_j,z_j\rangle_{\mathbb R^m}\big)\psi_\delta=2\,\Big\langle z_j,\frac{\partial z_j}{\partial\psi_k}\,\psi_\delta\Big\rangle_{\mathbb R^m}=-2\,\Big\langle y_j-\sum_{i=1}^\ell\langle y_j,\psi_i\rangle_{\mathbb R^m}\psi_i,\ \langle y_j,\psi_\delta\rangle_{\mathbb R^m}\psi_k+\langle y_j,\psi_k\rangle_{\mathbb R^m}\psi_\delta\Big\rangle_{\mathbb R^m} \]
\[ =-2\Big(\langle y_j,\psi_\delta\rangle_{\mathbb R^m}\langle y_j,\psi_k\rangle_{\mathbb R^m}+\langle y_j,\psi_k\rangle_{\mathbb R^m}\langle y_j,\psi_\delta\rangle_{\mathbb R^m}-\sum_{i=1}^\ell\langle y_j,\psi_i\rangle_{\mathbb R^m}\langle y_j,\psi_\delta\rangle_{\mathbb R^m}\langle\psi_i,\psi_k\rangle_{\mathbb R^m}-\sum_{i=1}^\ell\langle y_j,\psi_i\rangle_{\mathbb R^m}\langle y_j,\psi_k\rangle_{\mathbb R^m}\langle\psi_i,\psi_\delta\rangle_{\mathbb R^m}\Big) \]
\[ =-2\,\langle y_j,\psi_k\rangle_{\mathbb R^m}\langle y_j,\psi_\delta\rangle_{\mathbb R^m} \]
for any direction $\psi_\delta\in\mathbb R^m$, for $j=1,\ldots,n$ and for $1\le k\le\ell$. Summarizing, we have
\[ \frac{\partial J}{\partial\psi_k}(\psi_1,\ldots,\psi_\ell)=-2\sum_{j=1}^n\langle y_j,\psi_k\rangle_{\mathbb R^m}\,y_j=-2\,YY^\top\psi_k, \]
which coincides with (1.2.9).

Remark 1.2.3. It follows from Corollary 1.2.1 that the POD basis of rank $\ell$ is optimal in the sense of representing in the mean the columns $\{y_j\}_{j=1}^n$ of $Y$ as a linear combination of an orthonormal basis of rank $\ell$:
\[ \sum_{i=1}^\ell\sum_{j=1}^n\big|\langle y_j,\psi_i\rangle_{\mathbb R^m}\big|^2=\sum_{i=1}^\ell\sigma_i^2=\sum_{i=1}^\ell\lambda_i\ \ge\ \sum_{i=1}^\ell\sum_{j=1}^n\big|\langle y_j,\hat\psi_i\rangle_{\mathbb R^m}\big|^2 \]
for any other set of orthonormal vectors $\{\hat\psi_i\}_{i=1}^\ell$.

The next corollary states that the POD coefficients are uncorrelated.

Corollary 1.2.4 (Uncorrelated POD coefficients). Let all hypotheses of Theorem 1.1.1 hold. Then
\[ \sum_{j=1}^n\langle y_j,\psi_i\rangle_{\mathbb R^m}\langle y_j,\psi_k\rangle_{\mathbb R^m}=\sum_{j=1}^nB^\ell_{ij}B^\ell_{kj}=\sigma_i^2\,\delta_{ik}\quad\text{for }1\le i,k\le\ell. \]

Proof. The claim follows from (1.1.18) and $\langle\psi_i,\psi_k\rangle_{\mathbb R^m}=\delta_{ik}$ for $1\le i,k\le\ell$:
\[ \sum_{j=1}^n\langle y_j,\psi_i\rangle_{\mathbb R^m}\langle y_j,\psi_k\rangle_{\mathbb R^m}=\Big\langle\underbrace{\sum_{j=1}^n\langle y_j,\psi_i\rangle_{\mathbb R^m}y_j}_{=YY^\top\psi_i},\psi_k\Big\rangle_{\mathbb R^m}=\sigma_i^2\,\langle\psi_i,\psi_k\rangle_{\mathbb R^m}=\sigma_i^2\,\delta_{ik}. \]

Next we turn to the practical computation of a POD basis of rank $\ell$. If $n<m$, then one can determine the POD basis of rank $\ell$ as follows: Compute the eigenvectors $\phi_1,\ldots,\phi_\ell\in\mathbb R^n$ by solving the symmetric $n\times n$ eigenvalue problem
\[ (1.2.10)\qquad Y^\top Y\phi_i=\lambda_i\phi_i\quad\text{for }i=1,\ldots,\ell \]
and set, by (1.1.3),
\[ \psi_i=\frac{1}{\sqrt{\lambda_i}}\,Y\phi_i\quad\text{for }i=1,\ldots,\ell. \]
For historical reasons [] this method of determining the POD basis is sometimes called the method of snapshots. On the other hand, if $m<n$ holds, we can obtain the POD basis by solving the $m\times m$ eigenvalue problem (1.1.18). For the application of POD to concrete problems the choice of $\ell$ is certainly of central importance. It appears that no general a-priori rules are available. Rather, the choice of $\ell$ is based on heuristic considerations combined with observing the ratio of the modelled to the total energy contained in the system $Y$, which is expressed by
\[ \mathcal E(\ell)=\frac{\sum_{i=1}^\ell\lambda_i}{\sum_{i=1}^d\lambda_i}. \]
Notice that we have $\sum_{i=1}^d\lambda_i=\operatorname{trace}(YY^\top)=\operatorname{trace}(Y^\top Y)$. Let us mention that POD is also called Principal Component Analysis (PCA) and Karhunen-Loève Decomposition. In Algorithm 2 we extend Algorithm 1.

Algorithm 2 (POD basis of rank $\ell$)
Require: Snapshots $\{y_j\}_{j=1}^n\subset\mathbb R^m$, POD rank $\ell\le d$ and flag for the solver;
1: Set $Y=[y_1,\ldots,y_n]\in\mathbb R^{m\times n}$;
2: if flag $=0$ then
3: Compute singular value decomposition $[\Psi,\Sigma,\Phi]=\operatorname{svd}(Y)$;
4: Set $\psi_i=\Psi_{\cdot,i}\in\mathbb R^m$ and $\lambda_i=\Sigma_{ii}^2$ for $i=1,\ldots,\ell$;
5: else if flag $=1$ then
6: Determine $R=YY^\top\in\mathbb R^{m\times m}$;
7: Compute eigenvalue decomposition $[\Psi,\Lambda]=\operatorname{eig}(R)$;
8: Set $\psi_i=\Psi_{\cdot,i}\in\mathbb R^m$ and $\lambda_i=\Lambda_{ii}$ for $i=1,\ldots,\ell$;
9: else if flag $=2$ then
10: Determine $K=Y^\top Y\in\mathbb R^{n\times n}$;
11: Compute eigenvalue decomposition $[\Phi,\Lambda]=\operatorname{eig}(K)$;
12: Set $\psi_i=Y\Phi_{\cdot,i}/\sqrt{\lambda_i}\in\mathbb R^m$ and $\lambda_i=\Lambda_{ii}$ for $i=1,\ldots,\ell$;
13: end if
14: Compute $\mathcal E(\ell)=\sum_{i=1}^\ell\lambda_i/\sum_{i=1}^d\lambda_i$;
15: return POD basis $\{\psi_i\}_{i=1}^\ell$, eigenvalues $\{\lambda_i\}_{i=1}^\ell$ and ratio $\mathcal E(\ell)$;
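For $n\ll m$ the $n\times n$ problem (1.2.10) is much cheaper than the $m\times m$ problem (1.1.18). A NumPy sketch of the method-of-snapshots branch of Algorithm 2, including the energy ratio $\mathcal E(\ell)$ (illustrative, not from the notes):

    import numpy as np

    def pod_snapshots(Y, ell):
        """Method of snapshots: solve the n x n problem Y^T Y phi = lambda phi."""
        lam, Phi = np.linalg.eigh(Y.T @ Y)
        lam, Phi = lam[::-1], Phi[:, ::-1]                  # descending eigenvalues
        Psi = Y @ Phi[:, :ell] / np.sqrt(lam[:ell])         # psi_i = Y phi_i / sqrt(lambda_i)
        energy = lam[:ell].sum() / lam[lam > 1e-12].sum()   # E(ell), sum over lambda_i > 0
        return Psi, lam[:ell], energy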

3. The POD Method with a Weighted Inner Product

Let us endow the Euclidean space $\mathbb R^m$ with the weighted inner product
\[ (1.3.1)\qquad \langle\psi,\tilde\psi\rangle_W=\psi^\top W\tilde\psi=\langle\psi,W\tilde\psi\rangle_{\mathbb R^m}=\langle W\psi,\tilde\psi\rangle_{\mathbb R^m}\quad\text{for }\psi,\tilde\psi\in\mathbb R^m, \]
where $W\in\mathbb R^{m\times m}$ is a symmetric, positive definite matrix. Furthermore, let $\|\psi\|_W=\langle\psi,\psi\rangle_W^{1/2}$ for $\psi\in\mathbb R^m$ be the associated induced norm. For the choice $W=I_m$, the inner product (1.3.1) coincides with the Euclidean inner product.

Example 1.3.1. Let us motivate the weighted inner product by an example. Suppose that $\Omega=(0,1)\subset\mathbb R$ holds. We consider the space $L^2(\Omega)$ of square integrable functions on $\Omega$:
\[ L^2(\Omega)=\Big\{\varphi:\Omega\to\mathbb R\ \Big|\ \int_\Omega|\varphi|^2\,dx<\infty\Big\}. \]
Recall that $L^2(\Omega)$ is a Hilbert space endowed with the inner product
\[ \langle\varphi,\tilde\varphi\rangle_{L^2(\Omega)}=\int_\Omega\varphi\tilde\varphi\,dx\quad\text{for }\varphi,\tilde\varphi\in L^2(\Omega) \]
and the induced norm $\|\varphi\|_{L^2(\Omega)}=\langle\varphi,\varphi\rangle_{L^2(\Omega)}^{1/2}$ for $\varphi\in L^2(\Omega)$. For the step size $h=1/(m-1)$ let us introduce a spatial grid in $\Omega$ by $x_i=(i-1)h$ for $i=1,\ldots,m$. For any $\varphi,\tilde\varphi\in L^2(\Omega)$ we introduce a discrete inner product by trapezoidal approximation:
\[ (1.3.2)\qquad \langle\varphi,\tilde\varphi\rangle_{L_h^2(\Omega)}=h\,\Big(\frac{\varphi_1^h\tilde\varphi_1^h}{2}+\sum_{i=2}^{m-1}\varphi_i^h\tilde\varphi_i^h+\frac{\varphi_m^h\tilde\varphi_m^h}{2}\Big), \]
where
\[ \varphi_i^h=\begin{cases}\dfrac{2}{h}\displaystyle\int_{x_i}^{x_i+h/2}\varphi(x)\,dx&\text{for }i=1,\\[2mm]\dfrac{1}{h}\displaystyle\int_{x_i-h/2}^{x_i+h/2}\varphi(x)\,dx&\text{for }i=2,\ldots,m-1,\\[2mm]\dfrac{2}{h}\displaystyle\int_{x_i-h/2}^{x_i}\varphi(x)\,dx&\text{for }i=m\end{cases} \]
and the $\tilde\varphi_i^h$'s are defined analogously. Setting $W=\operatorname{diag}(h/2,h,\ldots,h,h/2)\in\mathbb R^{m\times m}$, $\varphi^h=(\varphi_1^h,\ldots,\varphi_m^h)^\top\in\mathbb R^m$ and $\tilde\varphi^h=(\tilde\varphi_1^h,\ldots,\tilde\varphi_m^h)^\top\in\mathbb R^m$ we find
\[ \langle\varphi,\tilde\varphi\rangle_{L_h^2(\Omega)}=\langle\varphi^h,\tilde\varphi^h\rangle_W=(\varphi^h)^\top W\tilde\varphi^h. \]
Thus, the discrete $L^2$-inner product can be written as a weighted inner product of the form (1.3.1). Let us also refer to Exercise 1.5.7, where an extension to a two-dimensional domain $\Omega$ is investigated.

Now we replace $(\mathrm P^1)$ by
\[ (\mathrm P_W^1)\qquad \max_{\psi\in\mathbb R^m}\sum_{j=1}^n\big|\langle y_j,\psi\rangle_W\big|^2\quad\text{s.t.}\quad\|\psi\|_W=1. \]
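A short numerical sketch of this construction (illustrative; for simplicity it uses grid values instead of the cell averages $\varphi_i^h$ from (1.3.2), a common simplification for continuous functions): build $W$ for the trapezoidal rule and compare the weighted inner product with the exact $L^2$ inner product of two smooth functions.

    import numpy as np

    m = 101
    h = 1.0 / (m - 1)
    x = np.linspace(0.0, 1.0, m)

    # trapezoidal weight matrix W = diag(h/2, h, ..., h, h/2)
    w = np.full(m, h); w[0] = w[-1] = h / 2
    W = np.diag(w)

    phi, phit = np.sin(np.pi * x), x ** 2        # grid values of two test functions
    ip_h = phi @ W @ phit                        # <phi, phit>_W

    exact = (np.pi ** 2 - 4) / np.pi ** 3        # integral of x^2 sin(pi x) over (0,1)
    assert abs(ip_h - exact) < 1e-3              # trapezoidal rule is O(h^2) accurate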

Analogously to Section 1 we treat $(\mathrm P_W^1)$ as an equality constrained optimization problem. The Lagrangian $L:\mathbb R^m\times\mathbb R\to\mathbb R$ for $(\mathrm P_W^1)$ is given by
\[ L(\psi,\lambda)=\sum_{j=1}^n\big|\langle y_j,\psi\rangle_W\big|^2+\lambda\big(1-\|\psi\|_W^2\big)\quad\text{for }(\psi,\lambda)\in\mathbb R^m\times\mathbb R. \]
We introduce the function $e:\mathbb R^m\to\mathbb R$ by $e(\psi)=1-\|\psi\|_W^2=1-\psi^\top W\psi$ for $\psi\in\mathbb R^m$. Then, the equality constraint in $(\mathrm P_W^1)$ can be expressed as $e(\psi)=0$. Notice that $\nabla e(\psi)=-2W\psi$ is linearly independent if $\psi\ne0$ holds. Suppose that $\psi\in\mathbb R^m$ is a solution to $(\mathrm P_W^1)$. Then, $\psi\ne0$ is true, so that any solution $\psi$ is a regular point for $(\mathrm P_W^1)$; compare Definition D.2. Consequently, there exists a Lagrange multiplier associated with the optimal solution $\psi$, so that the first-order necessary optimality condition
\[ \nabla L(\psi,\lambda)\overset{!}{=}0\quad\text{in }\mathbb R^m\times\mathbb R \]
is satisfied; see Theorem D.4. We compute the gradient of $L$ with respect to $\psi$. Since $W$ is symmetric, we derive
\[ \frac{\partial L}{\partial\psi_i}(\psi,\lambda)=\frac{\partial}{\partial\psi_i}\Big(\sum_{j=1}^n\Big(\sum_{k,\nu=1}^m Y_{\nu j}W_{\nu k}\psi_k\Big)^2+\lambda\Big(1-\sum_{k,\nu=1}^m\psi_\nu W_{\nu k}\psi_k\Big)\Big) \]
\[ =2\sum_{j=1}^n\Big(\sum_{k,\nu=1}^m Y_{\nu j}W_{\nu k}\psi_k\Big)\Big(\sum_{\mu=1}^m Y_{\mu j}W_{\mu i}\Big)-2\lambda\sum_{k=1}^m W_{ik}\psi_k=2\,\big(WYY^\top W\psi-\lambda W\psi\big)_i. \]
Thus,
\[ (1.3.3)\qquad \nabla_\psi L(\psi,\lambda)=2\,\big(WYY^\top W\psi-\lambda W\psi\big)\overset{!}{=}0\quad\text{in }\mathbb R^m. \]
Equation (1.3.3) yields the generalized eigenvalue problem
\[ (1.3.4)\qquad (WY)(WY)^\top\psi=\lambda W\psi. \]
Since $W$ is symmetric and positive definite, $W$ possesses an eigenvalue decomposition of the form $W=QDQ^\top$, where $D=\operatorname{diag}(\eta_1,\ldots,\eta_m)$ contains the eigenvalues $\eta_1\ge\ldots\ge\eta_m>0$ of $W$ and $Q\in\mathbb R^{m\times m}$ is an orthogonal matrix. We define
\[ W^\alpha=Q\operatorname{diag}(\eta_1^\alpha,\ldots,\eta_m^\alpha)\,Q^\top\quad\text{for }\alpha\in\mathbb R. \]
Note that $(W^\alpha)^{-1}=W^{-\alpha}$ and $W^{\alpha+\beta}=W^\alpha W^\beta$ for $\alpha,\beta\in\mathbb R$; see Exercise 1.5.8. Moreover, we have
\[ \langle\psi,\tilde\psi\rangle_W=\langle W^{1/2}\psi,W^{1/2}\tilde\psi\rangle_{\mathbb R^m}\ \text{ for }\psi,\tilde\psi\in\mathbb R^m\quad\text{and}\quad\|\psi\|_W=\|W^{1/2}\psi\|_{\mathbb R^m}\ \text{ for }\psi\in\mathbb R^m. \]
Setting $\bar\psi=W^{1/2}\psi\in\mathbb R^m$ and $\bar Y=W^{1/2}Y\in\mathbb R^{m\times n}$ and multiplying (1.3.4) by $W^{-1/2}$ from the left, we deduce the

symmetric $m\times m$ eigenvalue problem
\[ (1.3.5a)\qquad \bar Y\bar Y^\top\bar\psi=\lambda\bar\psi\quad\text{in }\mathbb R^m. \]
From $\frac{\partial L}{\partial\lambda}(\psi,\lambda)\overset{!}{=}0$ in $\mathbb R$ we infer the constraint $\|\psi\|_W=1$, which can be expressed as
\[ (1.3.5b)\qquad \|\bar\psi\|_{\mathbb R^m}=1. \]
Thus, the first-order optimality conditions (1.3.5) for $(\mathrm P_W^1)$ are, as for $(\mathrm P^1)$ (compare (1.1.7)), an $m\times m$ eigenvalue problem, but the matrix $Y$ as well as the vector $\psi$ have to be weighted by the matrix $W^{1/2}$. Notice that
\[ \nabla_{\psi\psi}L(\psi,\lambda)=2\,\big(WYY^\top W-\lambda W\big)\in\mathbb R^{m\times m}. \]
Let $\psi\in\mathbb R^m$ be chosen arbitrarily. Since $\bar Y\bar Y^\top$ is symmetric, there exist $m$ orthonormal (with respect to the Euclidean inner product) eigenvectors $\bar\psi_1,\ldots,\bar\psi_m\in\mathbb R^m$ of $\bar Y\bar Y^\top$ satisfying $\bar Y\bar Y^\top\bar\psi_i=\lambda_i\bar\psi_i$ for $1\le i\le m$. We set $\psi_i=W^{-1/2}\bar\psi_i$, $1\le i\le m$. Then, $\{\psi_i\}_{i=1}^m$ forms an orthonormal (with respect to the weighted inner product) basis in $\mathbb R^m$ and $YY^\top W\psi_i=\lambda_i\psi_i$ holds true. We write $\psi$ in the form $\psi=\sum_{i=1}^m\langle\psi,\psi_i\rangle_W\psi_i$. At $(\psi_1,\lambda_1)$ we conclude from $\lambda_1\ge\lambda_2\ge\ldots\ge\lambda_m\ge0$ that
\[ \psi^\top\nabla_{\psi\psi}L(\psi_1,\lambda_1)\psi=2\,\psi^\top W\big(YY^\top W-\lambda_1I_m\big)\psi=2\sum_{i=1}^m\sum_{j=1}^m\langle\psi,\psi_i\rangle_W\langle\psi,\psi_j\rangle_W\,\psi_i^\top W\big(YY^\top W-\lambda_1I_m\big)\psi_j \]
\[ =2\sum_{i=1}^m\sum_{j=1}^m\big(\lambda_j-\lambda_1\big)\langle\psi,\psi_i\rangle_W\langle\psi,\psi_j\rangle_W\,\psi_i^\top W\psi_j=2\sum_{i=1}^m\big(\lambda_i-\lambda_1\big)\big|\langle\psi,\psi_i\rangle_W\big|^2\le0. \]
Thus, the matrix $\nabla_{\psi\psi}L(\psi_1,\lambda_1)$ is negative semi-definite, which is the second-order necessary optimality condition; compare Theorem D.5. However, analogously to Section 1 it can be shown (see Exercise 1.5.9) that $\psi_1=W^{-1/2}\bar\psi_1$ solves $(\mathrm P_W^1)$, where $\bar\psi_1$ is an eigenvector of $\bar Y\bar Y^\top$ corresponding to the largest eigenvalue $\lambda_1$ with $\|\bar\psi_1\|_{\mathbb R^m}=1$. Due to SVD the vector $\bar\psi_1$ can also be determined by solving the symmetric $n\times n$ eigenvalue problem
\[ \bar Y^\top\bar Y\bar\phi_1=\lambda_1\bar\phi_1, \]
where $\bar Y^\top\bar Y=Y^\top WY$, and setting
\[ (1.3.6)\qquad \psi_1=W^{-1/2}\bar\psi_1=\frac{1}{\sqrt{\lambda_1}}\,W^{-1/2}\bar Y\bar\phi_1=\frac{1}{\sqrt{\lambda_1}}\,Y\bar\phi_1. \]
As in Section 1 we can continue by looking for a second vector $\psi\in\mathbb R^m$ with $\langle\psi,\psi_1\rangle_W=0$ that maximizes $\sum_{j=1}^n|\langle y_j,\psi\rangle_W|^2$. Let us generalize Theorem 1.1.1 as follows; see Exercise 1.5.9.
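Before stating the result, note that the fractional powers $W^{\pm1/2}$ used above are cheap to compute for moderate $m$ via the eigenvalue decomposition of $W$. A minimal NumPy sketch (illustrative), which also checks the rules of Exercise 1.5.8:

    import numpy as np

    def W_power(W, alpha):
        """W^alpha = Q diag(eta_i^alpha) Q^T for a symmetric positive definite W."""
        eta, Q = np.linalg.eigh(W)
        return (Q * eta ** alpha) @ Q.T

    # check (W^alpha)^{-1} = W^{-alpha} and W^{a+b} = W^a W^b on a random SPD matrix
    B = np.random.default_rng(3).standard_normal((4, 4))
    W = B @ B.T + 4 * np.eye(4)
    assert np.allclose(np.linalg.inv(W_power(W, 0.5)), W_power(W, -0.5))
    assert np.allclose(W_power(W, 0.3) @ W_power(W, 0.7), W)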

Theorem 1.3.2. Let $Y\in\mathbb R^{m\times n}$ be a given matrix with rank $d\le\min\{m,n\}$, $W\in\mathbb R^{m\times m}$ a symmetric, positive definite matrix, $\bar Y=W^{1/2}Y$ and $\ell\in\{1,\ldots,d\}$. Further, let $\bar Y=\bar\Psi\bar\Sigma\bar\Phi^\top$ be the singular value decomposition of $\bar Y$, where $\bar\Psi=[\bar\psi_1,\ldots,\bar\psi_m]\in\mathbb R^{m\times m}$, $\bar\Phi=[\bar\phi_1,\ldots,\bar\phi_n]\in\mathbb R^{n\times n}$ are orthogonal matrices and the matrix $\bar\Sigma$ has the form
\[ \bar\Psi^\top\bar Y\bar\Phi=\begin{pmatrix}\bar D&0\\0&0\end{pmatrix}=\bar\Sigma\in\mathbb R^{m\times n}. \]
Then the solution to
\[ (\mathrm P_W^\ell)\qquad \max_{\tilde\psi_1,\ldots,\tilde\psi_\ell\in\mathbb R^m}\sum_{i=1}^\ell\sum_{j=1}^n\big|\langle y_j,\tilde\psi_i\rangle_W\big|^2\quad\text{s.t.}\quad\langle\tilde\psi_i,\tilde\psi_j\rangle_W=\delta_{ij}\ \text{for }1\le i,j\le\ell \]
is given by the vectors $\psi_i=W^{-1/2}\bar\psi_i$, $i=1,\ldots,\ell$. Moreover,
\[ (1.3.7)\qquad \operatorname{argmax}(\mathrm P_W^\ell)=\sum_{i=1}^\ell\bar\sigma_i^2=\sum_{i=1}^\ell\lambda_i. \]

Proof. Using similar arguments as in the proof of Theorem 1.1.1 one can prove that $\{\psi_i\}_{i=1}^\ell$ solves $(\mathrm P_W^\ell)$; see Exercise 1.5.9.

Remark 1.3.3. Due to SVD and $\bar Y^\top\bar Y=Y^\top WY$ the POD basis $\{\psi_i\}_{i=1}^\ell$ of rank $\ell$ can be determined by the method of snapshots as follows: Solve the symmetric $n\times n$ eigenvalue problem
\[ Y^\top WY\bar\phi_i=\lambda_i\bar\phi_i\quad\text{for }i=1,\ldots,\ell, \]
and set
\[ \psi_i=W^{-1/2}\bar\psi_i=\frac{1}{\sqrt{\lambda_i}}\,W^{-1/2}\big(\bar Y\bar\phi_i\big)=\frac{1}{\sqrt{\lambda_i}}\,W^{-1/2}W^{1/2}Y\bar\phi_i=\frac{1}{\sqrt{\lambda_i}}\,Y\bar\phi_i\quad\text{for }i=1,\ldots,\ell. \]
Notice that
\[ \langle\psi_i,\psi_j\rangle_W=\psi_i^\top W\psi_j=\frac{\bar\phi_i^\top Y^\top WY\bar\phi_j}{\sqrt{\lambda_i}\sqrt{\lambda_j}}=\frac{\lambda_j\,\delta_{ij}}{\sqrt{\lambda_i}\sqrt{\lambda_j}}=\delta_{ij}\quad\text{for }1\le i,j\le\ell. \]
For $m\gg n$ the method of snapshots turns out to be faster than computing the POD basis via (1.3.5). Notice that the matrix $W^{1/2}$ is also not required for the method of snapshots. In Algorithm 3 we extend Algorithm 2.

4. POD for Time-Dependent Systems

For $T>0$ we consider the semi-linear initial value problem
\[ (1.4.1a)\qquad \dot y(t)=Ay(t)+f(t,y(t))\quad\text{for }t\in(0,T], \]
\[ (1.4.1b)\qquad y(0)=y_0, \]
where $y_0\in\mathbb R^m$ is a chosen initial condition, $A\in\mathbb R^{m\times m}$ is a given matrix, and $f:[0,T]\times\mathbb R^m\to\mathbb R^m$ is continuous in both arguments and locally Lipschitz-continuous with respect to the second argument. It is well known that (1.4.1) has a unique (classical) solution $y\in C^1((0,T_e);\mathbb R^m)\cap C([0,T_e];\mathbb R^m)$ for some maximal time $T_e\in(0,T]$.

Algorithm 3 (POD basis of rank $\ell$ with a weighted inner product)
Require: Snapshots $\{y_j\}_{j=1}^n\subset\mathbb R^m$, POD rank $\ell\le d$, symmetric, positive-definite matrix $W\in\mathbb R^{m\times m}$ and flag for the solver;
1: Set $Y=[y_1,\ldots,y_n]\in\mathbb R^{m\times n}$;
2: if flag $=0$ then
3: Determine $\bar Y=W^{1/2}Y\in\mathbb R^{m\times n}$;
4: Compute singular value decomposition $[\bar\Psi,\bar\Sigma,\bar\Phi]=\operatorname{svd}(\bar Y)$;
5: Set $\psi_i=W^{-1/2}\bar\Psi_{\cdot,i}\in\mathbb R^m$ and $\lambda_i=\bar\Sigma_{ii}^2$ for $i=1,\ldots,\ell$;
6: else if flag $=1$ then
7: Determine $\bar Y=W^{1/2}Y\in\mathbb R^{m\times n}$;
8: Set $\bar R=\bar Y\bar Y^\top\in\mathbb R^{m\times m}$;
9: Compute eigenvalue decomposition $[\bar\Psi,\Lambda]=\operatorname{eig}(\bar R)$;
10: Set $\psi_i=W^{-1/2}\bar\Psi_{\cdot,i}\in\mathbb R^m$ and $\lambda_i=\Lambda_{ii}$ for $i=1,\ldots,\ell$;
11: else if flag $=2$ then
12: Determine $K=Y^\top WY\in\mathbb R^{n\times n}$;
13: Compute eigenvalue decomposition $[\bar\Phi,\Lambda]=\operatorname{eig}(K)$;
14: Set $\psi_i=Y\bar\Phi_{\cdot,i}/\sqrt{\lambda_i}\in\mathbb R^m$ and $\lambda_i=\Lambda_{ii}$ for $i=1,\ldots,\ell$;
15: end if
16: Compute $\mathcal E(\ell)=\sum_{i=1}^\ell\lambda_i/\sum_{i=1}^d\lambda_i$;
17: return POD basis $\{\psi_i\}_{i=1}^\ell$, eigenvalues $\{\lambda_i\}_{i=1}^\ell$ and ratio $\mathcal E(\ell)$;

Throughout we suppose that we can choose $T_e=T$. Then, the solution $y$ to (1.4.1) is given by the implicit integral representation
\[ y(t)=e^{tA}y_0+\int_0^te^{(t-s)A}f(s,y(s))\,ds \]
with $e^{tA}=\sum_{n=0}^\infty t^nA^n/n!$.

4.1. Application of POD for Time-Dependent Systems. Let $0\le t_1<t_2<\ldots<t_n\le T$ be a given time grid in the interval $[0,T]$. For simplicity of the presentation, the time grid is assumed to be equidistant with step-size $\Delta t=T/(n-1)$, i.e., $t_j=(j-1)\Delta t$. We suppose that we know the solution to (1.4.1) at the given time instances $t_j$, $j\in\{1,\ldots,n\}$. Our goal is to determine a POD basis of rank $\ell\le\min\{m,n\}$ that describes the ensemble
\[ y_j=y(t_j)=e^{t_jA}y_0+\int_0^{t_j}e^{(t_j-s)A}f(s,y(s))\,ds,\quad j=1,\ldots,n, \]
as well as possible with respect to the weighted inner product:
\[ (\hat{\mathrm P}^{n,\ell})\qquad \min_{\tilde\psi_1,\ldots,\tilde\psi_\ell\in\mathbb R^m}\sum_{j=1}^n\alpha_j\Big\|y_j-\sum_{i=1}^\ell\langle y_j,\tilde\psi_i\rangle_W\tilde\psi_i\Big\|_W^2\quad\text{s.t.}\quad\langle\tilde\psi_i,\tilde\psi_j\rangle_W=\delta_{ij},\ 1\le i,j\le\ell, \]
where the $\alpha_j$'s denote nonnegative weights which will be specified later on. Note that for $\alpha_j=1$, $j=1,\ldots,n$, and $W=I_m$ problem $(\hat{\mathrm P}^{n,\ell})$ coincides with (1.2.3).
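The SVD branch (flag $=0$) of Algorithm 3 admits a short NumPy/SciPy sketch (illustrative; the function name is an assumption, and the matrix square root is taken with scipy.linalg.sqrtm):

    import numpy as np
    from scipy.linalg import sqrtm

    def pod_weighted(Y, W, ell):
        """Algorithm 3, flag = 0: POD basis w.r.t. <.,.>_W via SVD of W^{1/2} Y."""
        Wh = np.real(sqrtm(W))                       # W^{1/2}; real since W is SPD
        Psib, sigma, _ = np.linalg.svd(Wh @ Y, full_matrices=False)
        Psi = np.linalg.solve(Wh, Psib[:, :ell])     # psi_i = W^{-1/2} psibar_i
        return Psi, sigma[:ell] ** 2

The resulting basis is $W$-orthonormal, i.e. Psi.T @ W @ Psi is the identity, which is a convenient sanity check after a call.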

Example 1.4.1. Let us consider the following one-dimensional heat equation:
\[ (1.4.2a)\qquad \theta_t(t,x)=\theta_{xx}(t,x)\quad\text{for all }(t,x)\in Q=(0,T)\times\Omega, \]
\[ (1.4.2b)\qquad \theta_x(t,0)=\theta_x(t,1)=0\quad\text{for all }t\in(0,T), \]
\[ (1.4.2c)\qquad \theta(0,x)=\theta_0(x)\quad\text{for all }x\in\Omega=(0,1)\subset\mathbb R, \]
where $\theta_0\in C(\bar\Omega)$ is a given initial condition. To solve (1.4.2) numerically we apply a classical finite difference approximation for the spatial variable $x$. In Example 1.3.1 we have introduced the spatial grid $\{x_i\}_{i=1}^m$ in the interval $[0,1]$. Let us denote by $y_i:[0,T]\to\mathbb R$ the numerical approximation of $\theta(\cdot,x_i)$ for $i=1,\ldots,m$. The second partial derivative $\theta_{xx}$ in (1.4.2a) and the boundary conditions (1.4.2b) are discretized by centered difference quotients of second order, so that we obtain the following ordinary differential equations for the time-dependent functions $y_i$:
\[ (1.4.3a)\qquad \dot y_1(t)=\frac{-2y_1(t)+2y_2(t)}{h^2},\qquad \dot y_i(t)=\frac{y_{i-1}(t)-2y_i(t)+y_{i+1}(t)}{h^2},\ \ i=2,\ldots,m-1,\qquad \dot y_m(t)=\frac{-2y_m(t)+2y_{m-1}(t)}{h^2} \]
for $t\in(0,T]$. From (1.4.2c) we infer the initial condition
\[ (1.4.3b)\qquad y_i(0)=\theta_0(x_i),\quad i=1,\ldots,m. \]
Introducing the matrix
\[ A=\frac{1}{h^2}\begin{pmatrix}-2&2&&&\\1&-2&1&&\\&\ddots&\ddots&\ddots&\\&&1&-2&1\\&&&2&-2\end{pmatrix}\in\mathbb R^{m\times m} \]
and the vectors
\[ y(t)=\begin{pmatrix}y_1(t)\\\vdots\\y_m(t)\end{pmatrix}\ \text{for }t\in[0,T],\qquad y_0=\begin{pmatrix}\theta_0(x_1)\\\vdots\\\theta_0(x_m)\end{pmatrix}\in\mathbb R^m, \]
we can express (1.4.3) in the form
\[ (1.4.4)\qquad \dot y(t)=Ay(t)\ \text{ for }t\in(0,T],\qquad y(0)=y_0. \]
Setting $f\equiv0$, the linear initial-value problem (1.4.4) coincides with (1.4.1). Note that now the vector $y(t)$, $t\in[0,T]$, represents a function in $\Omega$ evaluated at $m$ grid points. Therefore, we should supply $\mathbb R^m$ with a weighted inner product representing a discretized inner product in an appropriate function space. Here we choose the inner product introduced in (1.3.2); see Example 1.3.1. Next we choose a time grid $\{t_j\}_{j=1}^n$ in the interval $[0,T]$ and define $y_j=y(t_j)$ for $j=1,\ldots,n$. If we are interested in finding a POD basis of rank $\ell\le\min\{m,n\}$ that describes the ensemble $\{y_j\}_{j=1}^n$ as well as possible, we end up with $(\hat{\mathrm P}^{n,\ell})$.
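The snapshot generation in this example is straightforward to reproduce. In the following Python sketch the concrete choices $m=101$, $n=51$, $T=1$ and $\theta_0(x)=\cos(\pi x)$ are illustrative assumptions; since (1.4.4) is linear, one step of the matrix exponential propagates the solution exactly on the equidistant time grid:

    import numpy as np
    from scipy.linalg import expm

    m, n, T = 101, 51, 1.0
    h = 1.0 / (m - 1)
    x = np.linspace(0.0, 1.0, m)

    # finite-difference matrix A of (1.4.4) with Neumann boundary conditions
    A = (np.diag(np.ones(m - 1), -1) - 2.0 * np.eye(m)
         + np.diag(np.ones(m - 1), 1)) / h ** 2
    A[0, 1] = 2.0 / h ** 2
    A[-1, -2] = 2.0 / h ** 2

    y0 = np.cos(np.pi * x)                # assumed initial condition theta_0
    E = expm(T / (n - 1) * A)             # exact propagator over one time step
    Y = np.empty((m, n)); Y[:, 0] = y0
    for j in range(1, n):
        Y[:, j] = E @ Y[:, j - 1]         # snapshots y_j = y(t_j)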

To solve $(\hat{\mathrm P}^{n,\ell})$ we apply the techniques used in Sections 1 and 3, i.e., we use the Lagrangian framework; see Appendix D. Thus, we introduce the Lagrange functional
\[ L:\underbrace{\mathbb R^m\times\ldots\times\mathbb R^m}_{\ell\text{-times}}\times\,\mathbb R^{\ell\times\ell}\to\mathbb R \]
by
\[ L(\psi_1,\ldots,\psi_\ell,\Lambda)=\sum_{j=1}^n\alpha_j\Big\|y_j-\sum_{i=1}^\ell\langle y_j,\psi_i\rangle_W\psi_i\Big\|_W^2+\sum_{i,j=1}^\ell\Lambda_{ij}\big(\delta_{ij}-\langle\psi_i,\psi_j\rangle_W\big) \]
for $\psi_1,\ldots,\psi_\ell\in\mathbb R^m$ and $\Lambda\in\mathbb R^{\ell\times\ell}$ with elements $\Lambda_{ij}$, $1\le i,j\le\ell$. It turns out that the solution to $(\hat{\mathrm P}^{n,\ell})$ is given by the first-order necessary optimality conditions
\[ (1.4.5a)\qquad \nabla_{\psi_i}L(\psi_1,\ldots,\psi_\ell,\Lambda)\overset{!}{=}0\quad\text{in }\mathbb R^m,\ 1\le i\le\ell, \]
and
\[ (1.4.5b)\qquad \langle\psi_i,\psi_j\rangle_W\overset{!}{=}\delta_{ij},\quad 1\le i,j\le\ell; \]
compare Theorem D.4. From (1.4.5a) we derive
\[ (1.4.6)\qquad YDY^\top W\psi_i=\lambda_i\psi_i\quad\text{for }i=1,\ldots,\ell, \]
where $D=\operatorname{diag}(\alpha_1,\ldots,\alpha_n)\in\mathbb R^{n\times n}$. Inserting $\psi_i=W^{-1/2}\bar\psi_i$ in (1.4.6) and multiplying (1.4.6) by $W^{1/2}$ from the left yields
\[ (1.4.7a)\qquad W^{1/2}YDY^\top W^{1/2}\bar\psi_i=\lambda_i\bar\psi_i. \]
From (1.4.5b) we find
\[ (1.4.7b)\qquad \langle\bar\psi_i,\bar\psi_j\rangle_{\mathbb R^m}=\bar\psi_i^\top\bar\psi_j=\psi_i^\top W\psi_j=\langle\psi_i,\psi_j\rangle_W=\delta_{ij},\quad 1\le i,j\le\ell. \]
Setting $\bar Y=W^{1/2}YD^{1/2}\in\mathbb R^{m\times n}$ and using $W^\top=W$ as well as $D^\top=D$, we infer from (1.4.7) that the solution $\{\psi_i\}_{i=1}^\ell$ to $(\hat{\mathrm P}^{n,\ell})$ is given by the symmetric $m\times m$ eigenvalue problem
\[ \bar Y\bar Y^\top\bar\psi_i=\lambda_i\bar\psi_i,\ 1\le i\le\ell\quad\text{and}\quad\langle\bar\psi_i,\bar\psi_j\rangle_{\mathbb R^m}=\delta_{ij},\ 1\le i,j\le\ell. \]
Note that $\bar Y^\top\bar Y=D^{1/2}Y^\top WYD^{1/2}\in\mathbb R^{n\times n}$. Thus, the POD basis of rank $\ell$ can also be computed by the method of snapshots as follows: First solve the symmetric $n\times n$ eigenvalue problem
\[ \bar Y^\top\bar Y\bar\phi_i=\lambda_i\bar\phi_i,\ 1\le i\le\ell\quad\text{and}\quad\langle\bar\phi_i,\bar\phi_j\rangle_{\mathbb R^n}=\delta_{ij},\ 1\le i,j\le\ell. \]
Then we set (by SVD)
\[ \psi_i=W^{-1/2}\bar\psi_i=\frac{1}{\sqrt{\lambda_i}}\,W^{-1/2}\bar Y\bar\phi_i=\frac{1}{\sqrt{\lambda_i}}\,YD^{1/2}\bar\phi_i,\quad 1\le i\le\ell; \]
compare (1.3.6). Note that
\[ \langle\psi_i,\psi_j\rangle_W=\psi_i^\top W\psi_j=\frac{\bar\phi_i^\top\overbrace{D^{1/2}Y^\top WYD^{1/2}}^{=\bar Y^\top\bar Y}\bar\phi_j}{\sqrt{\lambda_i}\sqrt{\lambda_j}}=\frac{\lambda_j\,\bar\phi_i^\top\bar\phi_j}{\sqrt{\lambda_i}\sqrt{\lambda_j}}=\frac{\lambda_j\,\delta_{ij}}{\sqrt{\lambda_i}\sqrt{\lambda_j}}=\delta_{ij} \]
for $1\le i,j\le\ell$, i.e., the POD basis vectors $\psi_1,\ldots,\psi_\ell$ are orthonormal in $\mathbb R^m$ with respect to the inner product $\langle\cdot,\cdot\rangle_W$. In Algorithm 4 the computation of a POD basis of rank $\ell$ is summarized for finite-dimensional dynamical systems.
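The method-of-snapshots branch (flag $=2$) of Algorithm 4 below admits a compact NumPy sketch (illustrative names; `alpha` holds the diagonal of $D$):

    import numpy as np

    def pod_dyn(Y, W, alpha, ell):
        """POD basis for snapshots Y w.r.t. <.,.>_W and temporal weights alpha."""
        Dh = np.sqrt(alpha)                                  # D^{1/2} as a vector
        K = Dh[:, None] * (Y.T @ (W @ Y)) * Dh[None, :]      # D^{1/2} Y^T W Y D^{1/2}
        lam, Phi = np.linalg.eigh(K)
        lam, Phi = lam[::-1], Phi[:, ::-1]                   # descending eigenvalues
        Psi = (Y * Dh[None, :]) @ Phi[:, :ell] / np.sqrt(lam[:ell])
        return Psi, lam[:ell]                                # psi_i = Y D^{1/2} phibar_i / sqrt(lam_i)

For the snapshots of Example 1.4.1 one would pass $W=\operatorname{diag}(h/2,h,\ldots,h,h/2)$ from Example 1.3.1 and, as temporal weights, the trapezoidal weights $\alpha_j$ introduced in (1.4.18) below.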

Algorithm 4 (POD basis of rank $\ell$ for finite-dimensional dynamical systems)
Require: Snapshots $\{y_j\}_{j=1}^n\subset\mathbb R^m$, POD rank $\ell\le d$, symmetric, positive-definite matrix $W\in\mathbb R^{m\times m}$, diagonal matrix $D\in\mathbb R^{n\times n}$ containing the temporal quadrature weights and flag for the solver;
1: Set $Y=[y_1,\ldots,y_n]\in\mathbb R^{m\times n}$;
2: if flag $=0$ then
3: Determine $\bar Y=W^{1/2}YD^{1/2}\in\mathbb R^{m\times n}$;
4: Compute singular value decomposition $[\bar\Psi,\bar\Sigma,\bar\Phi]=\operatorname{svd}(\bar Y)$;
5: Set $\psi_i=W^{-1/2}\bar\Psi_{\cdot,i}\in\mathbb R^m$ and $\lambda_i=\bar\Sigma_{ii}^2$ for $i=1,\ldots,\ell$;
6: else if flag $=1$ then
7: Determine $\bar Y=W^{1/2}YD^{1/2}\in\mathbb R^{m\times n}$;
8: Set $\bar R=\bar Y\bar Y^\top\in\mathbb R^{m\times m}$;
9: Compute eigenvalue decomposition $[\bar\Psi,\Lambda]=\operatorname{eig}(\bar R)$;
10: Set $\psi_i=W^{-1/2}\bar\Psi_{\cdot,i}\in\mathbb R^m$ and $\lambda_i=\Lambda_{ii}$ for $i=1,\ldots,\ell$;
11: else if flag $=2$ then
12: Determine $K=D^{1/2}Y^\top WYD^{1/2}\in\mathbb R^{n\times n}$;
13: Compute eigenvalue decomposition $[\bar\Phi,\Lambda]=\operatorname{eig}(K)$;
14: Set $\psi_i=YD^{1/2}\bar\Phi_{\cdot,i}/\sqrt{\lambda_i}\in\mathbb R^m$ and $\lambda_i=\Lambda_{ii}$ for $i=1,\ldots,\ell$;
15: end if
16: Compute $\mathcal E(\ell)=\sum_{i=1}^\ell\lambda_i/\sum_{i=1}^d\lambda_i$;
17: return POD basis $\{\psi_i\}_{i=1}^\ell$, eigenvalues $\{\lambda_i\}_{i=1}^\ell$ and ratio $\mathcal E(\ell)$;

4.2. The Continuous Version of the POD Method. Of course, the snapshot ensemble $\{y_j\}_{j=1}^n$ for $(\hat{\mathrm P}^{n,\ell})$, and therefore the snapshot set $\operatorname{span}\{y_1,\ldots,y_n\}$, depend on the chosen time instances $\{t_j\}_{j=1}^n$. Consequently, the POD basis vectors $\{\psi_i\}_{i=1}^\ell$ and the corresponding eigenvalues $\{\lambda_i\}_{i=1}^\ell$ also depend on the time instances, i.e., $\psi_i=\psi_i^n$ and $\lambda_i=\lambda_i^n$, $1\le i\le\ell$. Moreover, we have not discussed so far what motivates the introduction of the nonnegative weights $\{\alpha_j\}_{j=1}^n$ in $(\hat{\mathrm P}^{n,\ell})$. For this reason we proceed by investigating the following two questions:
• How to choose good time instances for the snapshots?
• What are appropriate nonnegative weights $\{\alpha_j\}_{j=1}^n$?
To address these two questions we will introduce a continuous version of POD. Suppose that (1.4.1) has a unique solution $y:[0,T]\to\mathbb R^m$. If we are interested in finding a POD basis of rank $\ell$ that describes the whole trajectory $\{y(t)\,|\,t\in[0,T]\}\subset\mathbb R^m$ as well as possible, we have to consider the following minimization problem:
\[ (\hat{\mathrm P}^\ell)\qquad \min_{\tilde\psi_1,\ldots,\tilde\psi_\ell\in\mathbb R^m}\int_0^T\Big\|y(t)-\sum_{i=1}^\ell\langle y(t),\tilde\psi_i\rangle_W\tilde\psi_i\Big\|_W^2\,dt\quad\text{s.t.}\quad\langle\tilde\psi_i,\tilde\psi_j\rangle_W=\delta_{ij},\ 1\le i,j\le\ell. \]
To solve $(\hat{\mathrm P}^\ell)$ we use similar arguments as in Sections 1 and 3. For $\ell=1$ we obtain instead of $(\hat{\mathrm P}^\ell)$ the minimization problem
\[ (1.4.8)\qquad \min_{\tilde\psi\in\mathbb R^m}\int_0^T\big\|y(t)-\langle y(t),\tilde\psi\rangle_W\tilde\psi\big\|_W^2\,dt\quad\text{s.t.}\quad\|\tilde\psi\|_W=1. \]

Suppose that $\{\tilde\psi_i\}_{i=2}^m$ are chosen in such a way that $\{\tilde\psi,\tilde\psi_2,\ldots,\tilde\psi_m\}$ is an orthonormal basis in $\mathbb R^m$ with respect to the inner product $\langle\cdot,\cdot\rangle_W$. Then we have
\[ y(t)=\langle y(t),\tilde\psi\rangle_W\tilde\psi+\sum_{i=2}^m\langle y(t),\tilde\psi_i\rangle_W\tilde\psi_i\quad\text{for all }t\in[0,T]. \]
Thus,
\[ \int_0^T\big\|y(t)-\langle y(t),\tilde\psi\rangle_W\tilde\psi\big\|_W^2\,dt=\int_0^T\Big\|\sum_{i=2}^m\langle y(t),\tilde\psi_i\rangle_W\tilde\psi_i\Big\|_W^2\,dt=\sum_{i=2}^m\int_0^T\big|\langle y(t),\tilde\psi_i\rangle_W\big|^2\,dt. \]
We conclude that (1.4.8) is equivalent to the following maximization problem:
\[ (1.4.9)\qquad \max_{\tilde\psi\in\mathbb R^m}\int_0^T\big|\langle y(t),\tilde\psi\rangle_W\big|^2\,dt\quad\text{s.t.}\quad\|\tilde\psi\|_W=1. \]
The Lagrange functional $L:\mathbb R^m\times\mathbb R\to\mathbb R$ associated with (1.4.9) is given by
\[ L(\psi,\lambda)=\int_0^T\big|\langle y(t),\psi\rangle_W\big|^2\,dt+\lambda\big(1-\|\psi\|_W^2\big)\quad\text{for }(\psi,\lambda)\in\mathbb R^m\times\mathbb R. \]
Arguing as in Sections 1 and 3, any optimal solution to (1.4.9) is a regular point; see Exercise 1.5.10. Consequently, first-order necessary optimality conditions are given by $\nabla L(\psi,\lambda)\overset{!}{=}0$ in $\mathbb R^m\times\mathbb R$. Therefore, we compute the partial derivative of $L$ with respect to the $i$-th component $\psi_i$ of the vector $\psi$:
\[ \frac{\partial L}{\partial\psi_i}(\psi,\lambda)=\frac{\partial}{\partial\psi_i}\Big(\int_0^T\Big(\sum_{k,\nu=1}^m y_k(t)W_{k\nu}\psi_\nu\Big)^2\,dt+\lambda\Big(1-\sum_{k,\nu=1}^m\psi_kW_{k\nu}\psi_\nu\Big)\Big) \]
\[ =2\int_0^T\Big(\sum_{k,\nu=1}^m y_k(t)W_{k\nu}\psi_\nu\Big)\Big(\sum_{\mu=1}^m y_\mu(t)W_{\mu i}\Big)\,dt-2\lambda\sum_{k=1}^m W_{ik}\psi_k=2\,\Big(W\int_0^T\langle y(t),\psi\rangle_W\,y(t)\,dt-\lambda W\psi\Big)_i \]
for $i\in\{1,\ldots,m\}$. Thus,
\[ \nabla_\psi L(\psi,\lambda)=2\,\Big(W\int_0^T\langle y(t),\psi\rangle_W\,y(t)\,dt-\lambda W\psi\Big)\overset{!}{=}0\quad\text{in }\mathbb R^m, \]
which gives
\[ (1.4.10)\qquad W\int_0^T\langle y(t),\psi\rangle_W\,y(t)\,dt=\lambda W\psi\quad\text{in }\mathbb R^m. \]
Multiplying (1.4.10) by $W^{-1}$ from the left yields
\[ (1.4.11)\qquad \int_0^T\langle y(t),\psi\rangle_W\,y(t)\,dt=\lambda\psi\quad\text{in }\mathbb R^m. \]

We define the operator $\mathcal R:\mathbb R^m\to\mathbb R^m$ as
\[ \mathcal R\psi=\int_0^T\langle y(t),\psi\rangle_W\,y(t)\,dt\quad\text{for }\psi\in\mathbb R^m. \]

Lemma 1.4.2. The operator $\mathcal R$ is linear and bounded (i.e., continuous). Moreover,
1) $\mathcal R$ is nonnegative: $\langle\mathcal R\psi,\psi\rangle_W\ge0$ for all $\psi\in\mathbb R^m$.
2) $\mathcal R$ is self-adjoint (or symmetric): $\langle\mathcal R\psi,\tilde\psi\rangle_W=\langle\psi,\mathcal R\tilde\psi\rangle_W$ for all $\psi,\tilde\psi\in\mathbb R^m$.

Proof. For arbitrary $\psi,\tilde\psi\in\mathbb R^m$ and $\alpha,\tilde\alpha\in\mathbb R$ we have
\[ \mathcal R\big(\alpha\psi+\tilde\alpha\tilde\psi\big)=\int_0^T\langle y(t),\alpha\psi+\tilde\alpha\tilde\psi\rangle_W\,y(t)\,dt=\int_0^T\big(\alpha\,\langle y(t),\psi\rangle_W+\tilde\alpha\,\langle y(t),\tilde\psi\rangle_W\big)\,y(t)\,dt=\alpha\,\mathcal R\psi+\tilde\alpha\,\mathcal R\tilde\psi, \]
so that $\mathcal R$ is linear. From the Cauchy-Schwarz inequality we derive
\[ \|\mathcal R\psi\|_W\le\int_0^T\big\|\langle y(t),\psi\rangle_W\,y(t)\big\|_W\,dt=\int_0^T\big|\langle y(t),\psi\rangle_W\big|\,\|y(t)\|_W\,dt\le\Big(\int_0^T\|y(t)\|_W^2\,dt\Big)\|\psi\|_W=\|y\|_{L^2(0,T;\mathbb R^m)}^2\,\|\psi\|_W \]
for an arbitrary $\psi\in\mathbb R^m$. Since $y\in C([0,T];\mathbb R^m)\hookrightarrow L^2(0,T;\mathbb R^m)$ holds, the norm $\|y\|_{L^2(0,T;\mathbb R^m)}$ is bounded. Therefore, $\mathcal R$ is bounded. Since
\[ \langle\mathcal R\psi,\psi\rangle_W=\Big\langle\int_0^T\langle y(t),\psi\rangle_W\,y(t)\,dt,\psi\Big\rangle_W=\int_0^T\langle y(t),\psi\rangle_W\,\langle y(t),\psi\rangle_W\,dt=\int_0^T\big|\langle y(t),\psi\rangle_W\big|^2\,dt\ge0\quad\text{for all }\psi\in\mathbb R^m \]
holds, $\mathcal R$ is nonnegative. Finally, we infer from
\[ \langle\mathcal R\psi,\tilde\psi\rangle_W=\int_0^T\langle y(t),\psi\rangle_W\,\langle y(t),\tilde\psi\rangle_W\,dt=\Big\langle\int_0^T\langle y(t),\tilde\psi\rangle_W\,y(t)\,dt,\psi\Big\rangle_W=\langle\mathcal R\tilde\psi,\psi\rangle_W=\langle\psi,\mathcal R\tilde\psi\rangle_W \]
for all $\psi,\tilde\psi\in\mathbb R^m$ that $\mathcal R$ is self-adjoint.

Utilizing the operator $\mathcal R$ we can write (1.4.11) as the eigenvalue problem $\mathcal R\psi=\lambda\psi$ in $\mathbb R^m$. It follows from Lemma 1.4.2 that $\mathcal R$ possesses eigenvectors $\{\psi_i\}_{i=1}^m$ and associated real eigenvalues $\{\lambda_i\}_{i=1}^m$ such that
\[ (1.4.12)\qquad \mathcal R\psi_i=\lambda_i\psi_i\ \text{ for }1\le i\le m\quad\text{and}\quad\lambda_1\ge\lambda_2\ge\ldots\ge\lambda_m\ge0. \]

Note that
\[ \int_0^T\big|\langle y(t),\psi_i\rangle_W\big|^2\,dt=\Big\langle\int_0^T\langle y(t),\psi_i\rangle_W\,y(t)\,dt,\psi_i\Big\rangle_W=\langle\mathcal R\psi_i,\psi_i\rangle_W=\lambda_i\,\|\psi_i\|_W^2=\lambda_i\quad\text{for }i\in\{1,\ldots,m\}, \]
so that $\psi_1$ solves (1.4.8). Proceeding as in Sections 1 and 3 we obtain the following result; see Exercise 1.5.11.

Theorem 1.4.3. Suppose that (1.4.1) has a unique solution $y:[0,T]\to\mathbb R^m$. Then the POD basis of rank $\ell$ solving the minimization problem $(\hat{\mathrm P}^\ell)$ is given by the eigenvectors $\{\psi_i\}_{i=1}^\ell$ of $\mathcal R$ corresponding to the $\ell$ largest eigenvalues $\lambda_1\ge\ldots\ge\lambda_\ell$.

Remark 1.4.4 (Method of snapshots). Let us define the linear and bounded operator $\mathcal Y:L^2(0,T)\to\mathbb R^m$ by
\[ \mathcal Y\phi=\int_0^T\phi(t)\,y(t)\,dt\quad\text{for }\phi\in L^2(0,T). \]
The (Hilbert space) adjoint $\mathcal Y^\star:\mathbb R^m\to L^2(0,T)$ satisfying (see Definition A.5)
\[ \langle\mathcal Y^\star\psi,\phi\rangle_{L^2(0,T)}=\langle\psi,\mathcal Y\phi\rangle_W\quad\text{for all }(\psi,\phi)\in\mathbb R^m\times L^2(0,T) \]
is given as
\[ (\mathcal Y^\star\psi)(t)=\langle\psi,y(t)\rangle_W\quad\text{for }\psi\in\mathbb R^m\text{ and almost all }t\in[0,T]. \]
Then we have
\[ \mathcal Y\mathcal Y^\star\psi=\int_0^T\langle\psi,y(t)\rangle_W\,y(t)\,dt=\int_0^T\langle y(t),\psi\rangle_W\,y(t)\,dt=\mathcal R\psi\quad\text{for all }\psi\in\mathbb R^m, \]
i.e., $\mathcal R=\mathcal Y\mathcal Y^\star$ holds. Furthermore,
\[ (\mathcal Y^\star\mathcal Y\phi)(t)=\Big\langle\int_0^T\phi(s)\,y(s)\,ds,\ y(t)\Big\rangle_W=\int_0^T\langle y(s),y(t)\rangle_W\,\phi(s)\,ds=:(\mathcal K\phi)(t) \]
for all $\phi\in L^2(0,T)$ and almost all $t\in[0,T]$. Thus, $\mathcal K=\mathcal Y^\star\mathcal Y$. It can be shown that the operator $\mathcal K$ is linear, bounded, nonnegative and self-adjoint. Moreover, $\mathcal K$ is compact. Therefore, the POD basis can also be computed as follows: Solve
\[ (1.4.13)\qquad \mathcal K\phi_i=\lambda_i\phi_i\ \text{ for }1\le i\le\ell,\quad\lambda_1\ge\ldots\ge\lambda_\ell>0,\quad\int_0^T\phi_i(t)\phi_j(t)\,dt=\delta_{ij}, \]
and set
\[ \psi_i=\frac{1}{\sqrt{\lambda_i}}\,\mathcal Y\phi_i=\frac{1}{\sqrt{\lambda_i}}\int_0^T\phi_i(t)\,y(t)\,dt\quad\text{for }i=1,\ldots,\ell. \]
Note that (1.4.13) is a symmetric eigenvalue problem in the infinite-dimensional function space $L^2(0,T)$.

In Algorithm 5 the computation of a POD basis of rank $\ell$ is summarized in the context of the continuous version of the POD method.

Algorithm 5 (POD basis of rank $\ell$ for dynamical systems [continuous version])
Require: Snapshots $\{y(t)\,|\,t\in[0,T]\}\subset\mathbb R^m$, POD rank $\ell\le d$, symmetric, positive-definite matrix $W\in\mathbb R^{m\times m}$ and flag for the solver;
1: if flag $=1$ then
2: Set $\mathcal R=\int_0^T\langle y(t),\cdot\,\rangle_W\,y(t)\,dt\in L(\mathbb R^m)$;
3: Solve the eigenvalue problem $\mathcal R\psi_i=\lambda_i\psi_i$, $1\le i\le\ell$, with $\langle\psi_i,\psi_j\rangle_W=\delta_{ij}$;
4: else if flag $=2$ then
5: Set $(\mathcal K\phi)(t)=\int_0^T\langle y(s),y(t)\rangle_W\,\phi(s)\,ds$, $\mathcal K\in L(L^2(0,T))$;
6: Solve the eigenvalue problem $\mathcal K\phi_i=\lambda_i\phi_i$, $1\le i\le\ell$, with $\langle\phi_i,\phi_j\rangle_{L^2(0,T)}=\delta_{ij}$;
7: Set $\psi_i=\int_0^Ty(t)\phi_i(t)\,dt/\sqrt{\lambda_i}\in\mathbb R^m$;
8: end if
9: Compute $\mathcal E(\ell)=\sum_{i=1}^\ell\lambda_i/\sum_{i=1}^d\lambda_i$;
10: return POD basis $\{\psi_i\}_{i=1}^\ell$, eigenvalues $\{\lambda_i\}_{i=1}^\ell$ and ratio $\mathcal E(\ell)$;

Let us turn back to the optimality conditions (1.4.6). For any $\psi\in\mathbb R^m$ and $i\in\{1,\ldots,m\}$ we derive
\[ \big(YDY^\top W\psi\big)_i=\sum_{j=1}^n\sum_{k,\nu=1}^m\alpha_j\,Y_{ij}Y_{kj}W_{k\nu}\psi_\nu=\sum_{j=1}^n\alpha_j\,Y_{ij}\,\langle y_j,\psi\rangle_W=\sum_{j=1}^n\alpha_j\,\langle y_j,\psi\rangle_W\,(y_j)_i, \]
where $(y_j)_i$ stands for the $i$-th component of the vector $y_j\in\mathbb R^m$. Thus,
\[ YDY^\top W\psi=\sum_{j=1}^n\alpha_j\,\langle y_j,\psi\rangle_W\,y_j=:\mathcal R^n\psi. \]
Note that the operator $\mathcal R^n:\mathbb R^m\to\mathbb R^m$ is linear and bounded. Moreover,
\[ \langle\mathcal R^n\psi,\psi\rangle_W=\sum_{j=1}^n\alpha_j\,\langle y_j,\psi\rangle_W\,\langle y_j,\psi\rangle_W=\sum_{j=1}^n\alpha_j\,\big|\langle y_j,\psi\rangle_W\big|^2\ge0 \]
holds for all $\psi\in\mathbb R^m$, so that $\mathcal R^n$ is nonnegative. Further,
\[ \langle\mathcal R^n\psi,\tilde\psi\rangle_W=\sum_{j=1}^n\alpha_j\,\langle y_j,\psi\rangle_W\,\langle y_j,\tilde\psi\rangle_W=\sum_{j=1}^n\alpha_j\,\langle y_j,\tilde\psi\rangle_W\,\langle y_j,\psi\rangle_W=\langle\mathcal R^n\tilde\psi,\psi\rangle_W=\langle\psi,\mathcal R^n\tilde\psi\rangle_W \]
for all $\psi,\tilde\psi\in\mathbb R^m$, i.e., $\mathcal R^n$ is self-adjoint. Therefore, $\mathcal R^n$ has the same properties as the operator $\mathcal R$. Summarizing, we have
\[ (1.4.14a)\qquad \mathcal R^n\psi_i^n=\lambda_i^n\psi_i^n,\qquad \lambda_1^n\ge\ldots\ge\lambda_\ell^n\ge\ldots\ge\lambda_{d(n)}^n>0=\lambda_{d(n)+1}^n=\ldots=\lambda_m^n, \]
\[ (1.4.14b)\qquad \mathcal R\psi_i=\lambda_i\psi_i,\qquad \lambda_1\ge\ldots\ge\lambda_\ell\ge\ldots\ge\lambda_d>0=\lambda_{d+1}=\ldots=\lambda_m. \]
Let us note that
\[ (1.4.15)\qquad \int_0^T\|y(t)\|_W^2\,dt=\sum_{i=1}^m\lambda_i=\sum_{i=1}^d\lambda_i. \]

In fact, $\mathcal R\psi_i=\int_0^T\langle y(t),\psi_i\rangle_W\,y(t)\,dt$ for every $i\in\{1,\ldots,m\}$. Taking the inner product with $\psi_i$, using (1.4.14b) and summing over $i$ we arrive at
\[ \sum_{i=1}^m\int_0^T\big|\langle y(t),\psi_i\rangle_W\big|^2\,dt=\sum_{i=1}^m\langle\mathcal R\psi_i,\psi_i\rangle_W=\sum_{i=1}^m\lambda_i=\sum_{i=1}^d\lambda_i. \]
Expanding $y(t)\in\mathbb R^m$ in terms of $\{\psi_i\}_{i=1}^m$ we have $y(t)=\sum_{i=1}^m\langle y(t),\psi_i\rangle_W\psi_i$ and hence
\[ \int_0^T\|y(t)\|_W^2\,dt=\sum_{i=1}^m\int_0^T\big|\langle y(t),\psi_i\rangle_W\big|^2\,dt=\sum_{i=1}^d\lambda_i, \]
which is (1.4.15). Analogously, we obtain
\[ (1.4.16)\qquad \sum_{j=1}^n\alpha_j\,\|y(t_j)\|_W^2=\sum_{i=1}^{d(n)}\lambda_i^n=\sum_{i=1}^m\lambda_i^n \]
for every $n\in\mathbb N$; see Exercise 1.5.12. For convenience we do not indicate the dependence of $\alpha_j$ on $n$. Let $y\in C([0,T];\mathbb R^m)$ hold. To ensure
\[ (1.4.17)\qquad \sum_{j=1}^n\alpha_j\,\|y(t_j)\|_W^2\longrightarrow\int_0^T\|y(t)\|_W^2\,dt\quad\text{as }n\to\infty \]
we have to choose the $\alpha_j$'s appropriately. Here we take the trapezoidal weights
\[ (1.4.18)\qquad \alpha_1=\frac{\Delta t}{2},\qquad \alpha_j=\Delta t\ \text{ for }2\le j\le n-1,\qquad \alpha_n=\frac{\Delta t}{2}. \]
Suppose that we have
\[ (1.4.19)\qquad \lim_{n\to\infty}\|\mathcal R^n-\mathcal R\|_{L(\mathbb R^m)}=\lim_{n\to\infty}\,\sup_{\|\psi\|_W=1}\|\mathcal R^n\psi-\mathcal R\psi\|_W=0, \]
provided $y\in C^1([0,T];\mathbb R^m)$ is satisfied. In (1.4.19) we denote by $L(\mathbb R^m)$ the Banach space of all linear and bounded operators mapping from $\mathbb R^m$ into itself; see Appendix A. Combining (1.4.17) with (1.4.15) and (1.4.16) we find
\[ (1.4.20)\qquad \sum_{i=1}^{d(n)}\lambda_i^n\longrightarrow\sum_{i=1}^d\lambda_i\quad\text{as }n\to\infty. \]
Now choose and fix
\[ (1.4.21)\qquad \ell\in\{1,\ldots,m\}\text{ such that }\lambda_\ell\ne\lambda_{\ell+1}. \]
Then by spectral analysis of compact operators [13, pp. 1-14] and (1.4.19) it follows that
\[ (1.4.22)\qquad \lambda_i^n\to\lambda_i\quad\text{for }1\le i\le\ell\text{ as }n\to\infty. \]
Combining (1.4.20) and (1.4.22), there exists $n_0\in\mathbb N$ such that
\[ (1.4.23)\qquad \sum_{i=\ell+1}^{d(n)}\lambda_i^n\le2\sum_{i=\ell+1}^d\lambda_i\quad\text{for all }n\ge n_0, \]

provided $\sum_{i=\ell+1}^m\lambda_i\ne0$. Moreover, for $\ell$ as above, $n_0$ can also be chosen such that
\[ (1.4.24)\qquad \sum_{i=\ell+1}^{d(n)}\big|\langle y_0,\psi_i^n\rangle_W\big|^2\le2\sum_{i=\ell+1}^m\big|\langle y_0,\psi_i\rangle_W\big|^2\quad\text{for all }n\ge n_0, \]
provided that $\sum_{i=\ell+1}^m|\langle y_0,\psi_i\rangle_W|^2\ne0$ and (1.4.19) hold. Recall that the vector $y_0\in\mathbb R^m$ stands for the initial condition in (1.4.1b). Then we have
\[ (1.4.25)\qquad \|y_0\|_W^2=\sum_{i=1}^m\big|\langle y_0,\psi_i\rangle_W\big|^2. \]
If $t_1=0$ holds, we have $y_0\in\operatorname{span}\{y_j\}_{j=1}^n$ for every $n$ and
\[ (1.4.26)\qquad \|y_0\|_W^2=\sum_{i=1}^{d(n)}\big|\langle y_0,\psi_i^n\rangle_W\big|^2. \]
Therefore, for $\ell<d(n)$ by (1.4.25) and (1.4.26)
\[ \sum_{i=\ell+1}^{d(n)}\big|\langle y_0,\psi_i^n\rangle_W\big|^2=\sum_{i=1}^{d(n)}\big|\langle y_0,\psi_i^n\rangle_W\big|^2-\sum_{i=1}^\ell\big|\langle y_0,\psi_i^n\rangle_W\big|^2=\sum_{i=1}^m\big|\langle y_0,\psi_i\rangle_W\big|^2-\sum_{i=1}^\ell\big|\langle y_0,\psi_i^n\rangle_W\big|^2 \]
\[ =\sum_{i=\ell+1}^m\big|\langle y_0,\psi_i\rangle_W\big|^2+\sum_{i=1}^\ell\Big(\big|\langle y_0,\psi_i\rangle_W\big|^2-\big|\langle y_0,\psi_i^n\rangle_W\big|^2\Big). \]
As a consequence of (1.4.19) and (1.4.21) we have $\lim_{n\to\infty}\|\psi_i^n-\psi_i\|_W=0$ for $i=1,\ldots,\ell$, and hence (1.4.24) follows. Summarizing, we have the following theorem.

Theorem 1.4.5. Suppose that (1.4.1) has a unique solution $y:[0,T]\to\mathbb R^m$. Let $\{(\psi_i^n,\lambda_i^n)\}_{i=1}^m$ and $\{(\psi_i,\lambda_i)\}_{i=1}^m$ be the eigenvector-eigenvalue pairs given by (1.4.14). Suppose that $\ell\in\{1,\ldots,m\}$ is fixed such that (1.4.21) and
\[ \sum_{i=\ell+1}^m\lambda_i\ne0,\qquad \sum_{i=\ell+1}^m\big|\langle y_0,\psi_i\rangle_W\big|^2\ne0 \]
hold. Then we have
\[ (1.4.27)\qquad \lim_{n\to\infty}\|\mathcal R^n-\mathcal R\|_{L(\mathbb R^m)}=0. \]
This implies
\[ \lim_{n\to\infty}\big|\lambda_i^n-\lambda_i\big|=\lim_{n\to\infty}\|\psi_i^n-\psi_i\|_W=0\quad\text{for }1\le i\le\ell, \]
\[ \lim_{n\to\infty}\Big|\sum_{i=\ell+1}^{d(n)}\lambda_i^n-\sum_{i=\ell+1}^d\lambda_i\Big|=0\quad\text{and}\quad\lim_{n\to\infty}\sum_{i=\ell+1}^{d(n)}\big|\langle y_0,\psi_i^n\rangle_W\big|^2=\sum_{i=\ell+1}^m\big|\langle y_0,\psi_i\rangle_W\big|^2. \]

Proof. We only have to verify (1.4.27). For that purpose we choose an arbitrary $\psi\in\mathbb R^m$ with $\|\psi\|_W=1$ and introduce $f_\psi:[0,T]\to\mathbb R^m$ by
\[ f_\psi(t)=\langle y(t),\psi\rangle_W\,y(t)\quad\text{for }t\in[0,T]. \]

Then, we have $f_\psi\in C^1([0,T];\mathbb R^m)$ with
\[ \dot f_\psi(t)=\langle\dot y(t),\psi\rangle_W\,y(t)+\langle y(t),\psi\rangle_W\,\dot y(t)\quad\text{for }t\in[0,T]. \]
By Taylor expansion there exist $\tau_{j1}(t),\tau_{j2}(t)\in[t_j,t_{j+1}]$, depending on $t$, such that
\[ \int_{t_j}^{t_{j+1}}f_\psi(t)\,dt=\frac12\int_{t_j}^{t_{j+1}}f_\psi(t_j)+\dot f_\psi(\tau_{j1}(t))\,(t-t_j)\,dt+\frac12\int_{t_j}^{t_{j+1}}f_\psi(t_{j+1})+\dot f_\psi(\tau_{j2}(t))\,(t-t_{j+1})\,dt \]
\[ =\frac{\Delta t}{2}\,\big(f_\psi(t_j)+f_\psi(t_{j+1})\big)+\frac12\int_{t_j}^{t_{j+1}}\dot f_\psi(\tau_{j1}(t))\,(t-t_j)+\dot f_\psi(\tau_{j2}(t))\,(t-t_{j+1})\,dt. \]
Hence,
\[ \|\mathcal R^n\psi-\mathcal R\psi\|_W=\Big\|\sum_{j=1}^n\alpha_jf_\psi(t_j)-\int_0^Tf_\psi(t)\,dt\Big\|_W=\Big\|\sum_{j=1}^{n-1}\Big(\frac{\Delta t}{2}\,\big(f_\psi(t_j)+f_\psi(t_{j+1})\big)-\int_{t_j}^{t_{j+1}}f_\psi(t)\,dt\Big)\Big\|_W \]
\[ \le\frac12\sum_{j=1}^{n-1}\int_{t_j}^{t_{j+1}}\big\|\dot f_\psi(\tau_{j1}(t))\big\|_W|t-t_j|+\big\|\dot f_\psi(\tau_{j2}(t))\big\|_W|t-t_{j+1}|\,dt\le\frac12\,\max_{t\in[0,T]}\big\|\dot f_\psi(t)\big\|_W\sum_{j=1}^{n-1}\Big[\frac{(t-t_j)^2}{2}-\frac{(t_{j+1}-t)^2}{2}\Big]_{t=t_j}^{t=t_{j+1}} \]
\[ =\frac{\Delta t\,T}{2}\,\max_{t\in[0,T]}\big\|\dot f_\psi(t)\big\|_W\le\frac{\Delta t\,T}{2}\,\max_{t\in[0,T]}\big\|\langle\dot y(t),\psi\rangle_W\,y(t)+\langle y(t),\psi\rangle_W\,\dot y(t)\big\|_W\le\Delta t\,T\,\max_{t\in[0,T]}\|\dot y(t)\|_W\|y(t)\|_W\le\Delta t\,T\,\|y\|_{C^1([0,T];\mathbb R^m)}^2. \]
Consequently,
\[ \|\mathcal R^n-\mathcal R\|_{L(\mathbb R^m)}=\sup_{\|\psi\|_W=1}\|\mathcal R^n\psi-\mathcal R\psi\|_W\le\Delta t\,T\,\|y\|_{C^1([0,T];\mathbb R^m)}^2\longrightarrow0\quad\text{as }n\to\infty, \]
which is (1.4.27).

5. Exercises

Exercise 1.5.1. Let $A\in\mathbb R^{m\times n}$, $m>n$, be a matrix with rank $n$. Suppose that $\Psi^\top A\Phi=\Sigma$ is the singular value decomposition of $A$ with the singular values $\sigma_1\ge\sigma_2\ge\ldots\ge\sigma_n>0$. Prove the following claims:

a) $A\phi_i=\sigma_i\psi_i$ and $A^\top\psi_i=\sigma_i\phi_i$ for $i=1,\ldots,n$, where $\{\psi_i\}_{i=1}^m\subset\mathbb R^m$ and $\{\phi_i\}_{i=1}^n\subset\mathbb R^n$ denote the columns of $\Psi\in\mathbb R^{m\times m}$ and $\Phi\in\mathbb R^{n\times n}$, respectively.
b) $\|A\|_2=\sigma_1$.
c) The matrix $A^\top A$ is symmetric and positive definite.
d) The set of all positive singular values of $A$ coincides with the set of square roots of all positive eigenvalues of $A^\top A$.

Exercise 1.5.2. Assume that $A\in\mathbb R^{n\times n}$ is an invertible matrix and that $A=\Psi\Sigma\Phi^\top$ is a singular value decomposition of $A$. What is the singular value decomposition of $A^{-1}$?

Exercise 1.5.3. Compute the singular value decomposition of the matrix $A=\begin{pmatrix}1&1\end{pmatrix}$.

Exercise 1.5.4. Show that any optimal solution to $(\mathrm P^\ell)$ is a regular point.

Exercise 1.5.5. Verify the claim in Theorem 1.1.1 that $\operatorname{argmax}(\mathrm P^\ell)=\sum_{i=1}^\ell\sigma_i^2$ holds true.

Exercise 1.5.6. Show that the Frobenius norm is a matrix norm and that $\|AB\|_F\le\|A\|_F\|B\|_F$ for any $A,B\in\mathbb R^{n\times n}$ is valid. Suppose that $\Psi^d\in\mathbb R^{m\times d}$ is a matrix with pairwise orthonormal vectors $\psi_i\in\mathbb R^m$, $1\le i\le d$. Prove that $\|\Psi^dA\|_F=\|A\|_F$ for any matrix $A\in\mathbb R^{d\times n}$.

Exercise 1.5.7. We extend Example 1.3.1 to the two-dimensional domain $\Omega=(0,1)\times(0,1)\subset\mathbb R^2$. We choose the trapezoidal quadrature rule with an equidistant grid of size $h=1/(n-1)$ in both dimensions. Determine the weighting matrix $W\in\mathbb R^{m\times m}$, where $m=n^2$ holds, so that the trapezoidal approximation can be written as the weighted inner product $\langle\cdot,\cdot\rangle_W$.

Exercise 1.5.8. Suppose that $W\in\mathbb R^{m\times m}$ is symmetric and positive definite. Let $\eta_1\ge\ldots\ge\eta_m>0$ denote the eigenvalues of $W$ and $W=Q\operatorname{diag}(\eta_1,\ldots,\eta_m)Q^\top$ be the eigenvalue decomposition of $W$. We define $W^\alpha=Q\operatorname{diag}(\eta_1^\alpha,\ldots,\eta_m^\alpha)Q^\top$ for $\alpha\in\mathbb R$. Show that $(W^\alpha)^{-1}$ exists and $(W^\alpha)^{-1}=W^{-\alpha}$. Prove that $W^{\alpha+\beta}=W^\alpha W^\beta$ holds for $\alpha,\beta\in\mathbb R$.

Exercise 1.5.9. Verify the claims of Theorem 1.3.2:
a) Ensure a regular point condition, which guarantees the existence of Lagrange multipliers.
b) Prove that $\psi_i=W^{-1/2}\bar\psi_i$, $1\le i\le\ell$, solves $(\mathrm P_W^\ell)$, where the matrix $W$ and the vectors $\bar\psi_1,\ldots,\bar\psi_m$ are introduced in Theorem 1.3.2.
c) Show that (1.3.7) holds.

Exercise 1.5.10. Argue that any optimal solution to (1.4.9) is a regular point.

Exercise 1.5.11. Prove that $\psi_1$ given by (1.4.12) is a global solution to (1.4.8). How can this result be extended to $(\hat{\mathrm P}^\ell)$?

Exercise 1.5.12. Verify (1.4.16).

CHAPTER 2

The POD Method for Partial Differential Equations

In this chapter we formulate the POD method for partial differential equations (PDEs). For that purpose an extension of the approach presented in Chapter 1 to separable Hilbert spaces is needed. Our approach is motivated by the goal to derive reduced-order models for parabolic and elliptic partial differential equations. In Section 1 we focus on parabolic PDEs. The presented approach generalizes the theory presented in Section 4 of Chapter 1. We also discuss the numerical realization as well as the treatment of nonlinearities. Parametrized elliptic problems are analyzed in Section 2. Whereas for parabolic problems the time $t$ serves as the sampling parameter, a variation of the parameter values is used for elliptic problems to build a POD basis.

Throughout this chapter we make use of the following notations and assumptions: Let $V$ and $H$ be real, separable Hilbert spaces and suppose that $V$ is dense in $H$ with compact embedding. By $\langle\cdot,\cdot\rangle_V$ and $\langle\cdot,\cdot\rangle_H$ we denote the inner products in $V$ and $H$ with associated norms $\|\cdot\|_V=\langle\cdot,\cdot\rangle_V^{1/2}$ and $\|\cdot\|_H=\langle\cdot,\cdot\rangle_H^{1/2}$, respectively.

1. POD for Parabolic Partial Differential Equations

Now we consider the POD method for linear evolution problems. Then, its numerical approximation is discussed. Moreover, we explain the extension to nonlinear evolution problems by using empirical interpolation.

1.1. Linear Evolution Equations. Let $T>0$ be the final time. For $t\in[0,T]$ we define a time-dependent symmetric bilinear form $a(t;\cdot,\cdot):V\times V\to\mathbb R$ satisfying
\[ (2.1.1a)\qquad |a(t;\varphi,\psi)|\le\beta\,\|\varphi\|_V\|\psi\|_V, \]
\[ (2.1.1b)\qquad a(t;\varphi,\varphi)\ge\kappa\,\|\varphi\|_V^2-\eta\,\|\varphi\|_H^2 \]
for all $\varphi,\psi\in V$ and $t\in[0,T]$, where $\beta,\kappa>0$ and $\eta\ge0$ are constants which do not depend on $t$. By identifying $H$ with its dual $H'$ it follows that $V\hookrightarrow H=H'\hookrightarrow V'$, each embedding being continuous and dense. In Appendix B we introduce the function space $W(0,T)$, which is a Hilbert space endowed with the common inner product. When the time $t$ is fixed, the expression $\varphi(t)$ stands for the function $\varphi(t,\cdot)$ considered as a function in $\Omega$ only.

For $y_0\in H$ and $f\in L^2(0,T;V')$ we consider the linear evolution problem
\[ (2.1.2)\qquad \frac{d}{dt}\,\langle y(t),\varphi\rangle_H+a(t;y(t),\varphi)=\langle f(t),\varphi\rangle_{V',V}\quad\text{f.a.a. }t\in[0,T],\ \forall\varphi\in V,\qquad \langle y(0),\varphi\rangle_H=\langle y_0,\varphi\rangle_H\quad\forall\varphi\in V. \]
Throughout we write f.a.a. for "for almost all".

Example 2.1.1. Suppose that $\Omega\subset\mathbb R^d$, $d\in\{1,2,3\}$, is an open and bounded domain with Lipschitz-continuous boundary $\Gamma=\partial\Omega$. For $T>0$ we set $Q=(0,T)\times\Omega$ and $\Sigma=(0,T)\times\Gamma$. Let $H=L^2(\Omega)$ and $V=H^1(\Omega)$. Then, for given $y_0\in H$, $f\in L^2(0,T;H)$ and $g\in L^2(0,T;L^2(\Gamma_N))$, we consider the linear heat equation
\[ (2.1.3)\qquad y_t(t,x)-\nabla\cdot\big(c(t,x)\nabla y(t,x)\big)+a(t,x)\,y(t,x)=f(t,x),\quad(t,x)\in Q, \]
\[ \phantom{(2.1.3)}\qquad c(t,s)\,\frac{\partial y}{\partial n}(t,s)=g(t,s),\quad(t,s)\in\Sigma, \]
\[ \phantom{(2.1.3)}\qquad y(0,x)=y_0(x),\quad x\in\Omega, \]
where $c\in C([0,T];L^\infty(\Omega))$ satisfies $c(t,x)\ge c_a>0$ f.a.a. $(t,x)\in Q$ and $a\in C([0,T];L^\infty(\Omega))$. For $t\in[0,T]$ a.e. we introduce the bilinear form $a(t;\cdot,\cdot):V\times V\to\mathbb R$ by
\[ a(t;\varphi,\psi)=\int_\Omega c(t)\,\nabla\varphi\cdot\nabla\psi+a(t)\,\varphi\psi\,dx\quad\text{for }\varphi,\psi\in V \]
and the linear, bounded functional $f\in L^2(0,T;V')$ by
\[ \langle f(t),\varphi\rangle_{V',V}=\langle f(t),\varphi\rangle_H+\int_{\Gamma_N}g(t)\,\varphi\,ds\quad\text{for }t\in[0,T]\text{ a.e. and }\varphi\in V, \]
where a.e. stands for almost everywhere. Then, it follows that the weak formulation of (2.1.3) can be expressed in the form (2.1.2). From $c,a\in C([0,T];L^\infty(\Omega))$ we infer that the time-dependent bilinear form $a(t;\cdot,\cdot)$ satisfies (2.1.1).

Example 2.1.2. Let us present a further example for (2.1.2). Suppose that, as in Example 2.1.1, the set $\Omega\subset\mathbb R^d$, $d\in\{1,2,3\}$, is an open and bounded domain with Lipschitz-continuous boundary $\Gamma=\partial\Omega$. For $T>0$ we set $Q=(0,T)\times\Omega$ and $\Sigma=(0,T)\times\Gamma$. Let $H=L^2(\Omega)$ and $V=H_0^1(\Omega)$. Then, for a given initial condition $y_0\in H$ we consider the linear heat equation
\[ (2.1.4a)\qquad y_t(t,x)-\nabla\cdot\big(c(t,x)\nabla y(t,x)\big)+a(t,x)\,y(t,x)=f(t,x),\quad(t,x)\in Q, \]
\[ (2.1.4b)\qquad y(t,s)=0,\quad(t,s)\in\Sigma, \]
\[ (2.1.4c)\qquad y(0,x)=y_0(x),\quad x\in\Omega. \]
In (2.1.4a) we suppose that $c$, $a$ and $f$ satisfy the same assumptions as in Example 2.1.1. Introducing the bilinear form $a(t;\cdot,\cdot):V\times V\to\mathbb R$ for every $t\in[0,T]$ by
\[ a(t;\varphi,\psi)=\int_\Omega c(t,x)\,\nabla\varphi(x)\cdot\nabla\psi(x)+a(t,x)\,\varphi(x)\psi(x)\,dx\quad\text{for }\varphi,\psi\in V, \]
it follows that the weak formulation of (2.1.4) can be written in the form (2.1.2).

It follows from Theorem C.1 that for every $f\in L^2(0,T;V')$ and $y_0\in H$ there exists a unique weak solution $y\in W(0,T)$ satisfying (2.1.2). Moreover, if $f\in L^2(0,T;H)$, $a(t;\cdot,\cdot)=a(\cdot,\cdot)$ (independent of $t$) and $y_0\in V$ hold, we even have $y\in C([0,T];V)$; see Corollary C.3.
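To connect the abstract setting with computations, the following sketch discretizes (2.1.4) in one space dimension with piecewise linear finite elements and implicit Euler in time. All concrete choices (Ω = (0,1), c ≡ 1, a ≡ 0, f ≡ 0, the initial condition and the grid sizes) are illustrative assumptions, not taken from the notes. The mass matrix M is the Gram matrix of the $H=L^2(\Omega)$ inner product and plays the role of the weighting matrix $W$ from Chapter 1:

    import numpy as np

    m, T, n = 100, 0.1, 51                     # interior P1 nodes, final time, time grid
    h = 1.0 / (m + 1)
    x = np.linspace(h, 1.0 - h, m)             # interior grid points (Dirichlet BC)

    # P1 mass and stiffness matrices on a uniform grid
    M = h / 6.0 * (np.diag(np.ones(m - 1), -1) + 4.0 * np.eye(m)
                   + np.diag(np.ones(m - 1), 1))
    S = 1.0 / h * (-np.diag(np.ones(m - 1), -1) + 2.0 * np.eye(m)
                   - np.diag(np.ones(m - 1), 1))

    dt = T / (n - 1)
    y = np.sin(np.pi * x)                      # assumed initial condition y_0
    Y = np.empty((m, n)); Y[:, 0] = y
    for j in range(1, n):                      # implicit Euler: (M + dt S) y^j = M y^{j-1}
        y = np.linalg.solve(M + dt * S, M @ y)
        Y[:, j] = y
    # Y holds the truth snapshots; a POD basis w.r.t. X = H is obtained with W = M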

1.2. The Continuous POD Method for Linear Evolution Equations. Let $f\in L^2(0,T;V')$ and $y_0\in V$ be given arbitrarily so that the solution $y\in W(0,T)$ to (2.1.2) belongs to $C([0,T];V)\hookrightarrow C([0,T];X)$, where $X$ denotes either the space $V$ or the space $H$. Then,
\[ (2.1.5)\qquad \mathcal V=\operatorname{span}\big\{y(t)\,\big|\,t\in[0,T]\big\}\subseteq V\subseteq X. \]
If $y_0\ne0$ holds, then $\mathcal V\ne\{0\}$ and $d=\dim\mathcal V\in[1,\infty]$, i.e., $\mathcal V$ may have infinite dimension. Now we proceed similarly as in Remark 1.4.4. We define a bounded linear operator $\mathcal Y:L^2(0,T)\to X$ by
\[ \mathcal Y\varphi=\int_0^T\varphi(t)\,y(t)\,dt\quad\text{for }\varphi\in L^2(0,T). \]
Its Hilbert space adjoint $\mathcal Y^\star:X\to L^2(0,T)$ satisfying
\[ \langle\mathcal Y\varphi,\psi\rangle_X=\langle\varphi,\mathcal Y^\star\psi\rangle_{L^2(0,T)}\quad\text{for }(\varphi,\psi)\in L^2(0,T)\times X \]
is given by
\[ (\mathcal Y^\star\psi)(t)=\langle\psi,y(t)\rangle_X\quad\text{for }\psi\in X\text{ and f.a.a. }t\in[0,T]. \]
The linear operator $\mathcal R=\mathcal Y\mathcal Y^\star:X\to\mathcal V\subseteq X$ has the form
\[ (2.1.6)\qquad \mathcal R\psi=\int_0^T\langle\psi,y(t)\rangle_X\,y(t)\,dt\quad\text{for }\psi\in X. \]
Moreover, let $\mathcal K=\mathcal Y^\star\mathcal Y:L^2(0,T)\to L^2(0,T)$ be defined by
\[ (2.1.7)\qquad \big(\mathcal K\phi\big)(t)=\int_0^T\langle y(s),y(t)\rangle_X\,\phi(s)\,ds\quad\text{for }\phi\in L^2(0,T). \]

Lemma 2.1.3. Let $X$ denote either the space $V$ or the space $H$ and $y\in W(0,T)$ hold. Then, the linear operator $\mathcal R$ is bounded, compact, nonnegative and symmetric.

Proof. Applying the Cauchy-Schwarz inequality we infer that
\[ (2.1.8)\qquad \|\mathcal R\psi\|_X\le\int_0^T\big|\langle\psi,y(t)\rangle_X\big|\,\|y(t)\|_X\,dt\le\|\psi\|_X\int_0^T\|y(t)\|_X^2\,dt=\|y\|_{L^2(0,T;X)}^2\,\|\psi\|_X\quad\text{for }\psi\in X \]
holds. By assumption, $y\in W(0,T)\hookrightarrow L^2(0,T;X)$. Thus, from (2.1.8) we infer that $\mathcal R$ is bounded. Again using $y\in W(0,T)\hookrightarrow L^2(0,T;X)$, the kernel $k(s,t)=\langle y(t),y(s)\rangle_X$ of $\mathcal K$ is square integrable over $(0,T)\times(0,T)$; see Exercise 2.3.1. By Remark A.14 we conclude that the integral operator $\mathcal K$ is compact. Remark A.16 implies that $\mathcal R$ is compact as well. From
\[ \langle\mathcal R\psi,\psi\rangle_X=\Big\langle\int_0^T\langle\psi,y(t)\rangle_X\,y(t)\,dt,\psi\Big\rangle_X=\int_0^T\big|\langle\psi,y(t)\rangle_X\big|^2\,dt\ge0\quad\text{for all }\psi\in X \]
we infer that $\mathcal R$ is nonnegative. Finally, we have
\[ \langle\mathcal R\psi,\tilde\psi\rangle_X=\Big\langle\int_0^T\langle\psi,y(t)\rangle_X\,y(t)\,dt,\tilde\psi\Big\rangle_X=\int_0^T\langle\psi,y(t)\rangle_X\,\langle y(t),\tilde\psi\rangle_X\,dt=\Big\langle\psi,\int_0^T\langle y(t),\tilde\psi\rangle_X\,y(t)\,dt\Big\rangle_X=\langle\psi,\mathcal R\tilde\psi\rangle_X \]
for all $\psi,\tilde\psi\in X$. Hence, the operator $\mathcal R$ is selfadjoint.


More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

COMPACT OPERATORS. 1. Definitions

COMPACT OPERATORS. 1. Definitions COMPACT OPERATORS. Definitions S:defi An operator M : X Y, X, Y Banach, is compact if M(B X (0, )) is relatively compact, i.e. it has compact closure. We denote { E:kk (.) K(X, Y ) = M L(X, Y ), M compact

More information

Supplementary Notes on Linear Algebra

Supplementary Notes on Linear Algebra Supplementary Notes on Linear Algebra Mariusz Wodzicki May 3, 2015 1 Vector spaces 1.1 Coordinatization of a vector space 1.1.1 Given a basis B = {b 1,..., b n } in a vector space V, any vector v V can

More information

INTERPRETATION OF PROPER ORTHOGONAL DECOMPOSITION AS SINGULAR VALUE DECOMPOSITION AND HJB-BASED FEEDBACK DESIGN PAPER MSC-506

INTERPRETATION OF PROPER ORTHOGONAL DECOMPOSITION AS SINGULAR VALUE DECOMPOSITION AND HJB-BASED FEEDBACK DESIGN PAPER MSC-506 INTERPRETATION OF PROPER ORTHOGONAL DECOMPOSITION AS SINGULAR VALUE DECOMPOSITION AND HJB-BASED FEEDBACK DESIGN PAPER MSC-506 SIXTEENTH INTERNATIONAL SYMPOSIUM ON MATHEMATICAL THEORY OF NETWORKS AND SYSTEMS

More information

Fall TMA4145 Linear Methods. Exercise set Given the matrix 1 2

Fall TMA4145 Linear Methods. Exercise set Given the matrix 1 2 Norwegian University of Science and Technology Department of Mathematical Sciences TMA445 Linear Methods Fall 07 Exercise set Please justify your answers! The most important part is how you arrive at an

More information

Basic Concepts in Matrix Algebra

Basic Concepts in Matrix Algebra Basic Concepts in Matrix Algebra An column array of p elements is called a vector of dimension p and is written as x p 1 = x 1 x 2. x p. The transpose of the column vector x p 1 is row vector x = [x 1

More information

CME 345: MODEL REDUCTION

CME 345: MODEL REDUCTION CME 345: MODEL REDUCTION Proper Orthogonal Decomposition (POD) Charbel Farhat & David Amsallem Stanford University cfarhat@stanford.edu 1 / 43 Outline 1 Time-continuous Formulation 2 Method of Snapshots

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

Lecture 7 Spectral methods

Lecture 7 Spectral methods CSE 291: Unsupervised learning Spring 2008 Lecture 7 Spectral methods 7.1 Linear algebra review 7.1.1 Eigenvalues and eigenvectors Definition 1. A d d matrix M has eigenvalue λ if there is a d-dimensional

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

Appendix A Functional Analysis

Appendix A Functional Analysis Appendix A Functional Analysis A.1 Metric Spaces, Banach Spaces, and Hilbert Spaces Definition A.1. Metric space. Let X be a set. A map d : X X R is called metric on X if for all x,y,z X it is i) d(x,y)

More information

Proper Orthogonal Decomposition in PDE-Constrained Optimization

Proper Orthogonal Decomposition in PDE-Constrained Optimization Proper Orthogonal Decomposition in PDE-Constrained Optimization K. Kunisch Department of Mathematics and Computational Science University of Graz, Austria jointly with S. Volkwein Dynamic Programming Principle

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University The Residual and Error of Finite Element Solutions Mixed BVP of Poisson Equation

More information

Real Variables # 10 : Hilbert Spaces II

Real Variables # 10 : Hilbert Spaces II randon ehring Real Variables # 0 : Hilbert Spaces II Exercise 20 For any sequence {f n } in H with f n = for all n, there exists f H and a subsequence {f nk } such that for all g H, one has lim (f n k,

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

Functional Analysis Review

Functional Analysis Review Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Stability of an abstract wave equation with delay and a Kelvin Voigt damping

Stability of an abstract wave equation with delay and a Kelvin Voigt damping Stability of an abstract wave equation with delay and a Kelvin Voigt damping University of Monastir/UPSAY/LMV-UVSQ Joint work with Serge Nicaise and Cristina Pignotti Outline 1 Problem The idea Stability

More information

2. Matrix Algebra and Random Vectors

2. Matrix Algebra and Random Vectors 2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns

More information

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49 REAL ANALYSIS II HOMEWORK 3 CİHAN BAHRAN Conway, Page 49 3. Let K and k be as in Proposition 4.7 and suppose that k(x, y) k(y, x). Show that K is self-adjoint and if {µ n } are the eigenvalues of K, each

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

arxiv: v5 [math.na] 16 Nov 2017

arxiv: v5 [math.na] 16 Nov 2017 RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem

More information

Suboptimal Open-loop Control Using POD. Stefan Volkwein

Suboptimal Open-loop Control Using POD. Stefan Volkwein Institute for Mathematics and Scientific Computing University of Graz, Austria PhD program in Mathematics for Technology Catania, May 22, 2007 Motivation Optimal control of evolution problems: min J(y,

More information

BIHARMONIC WAVE MAPS INTO SPHERES

BIHARMONIC WAVE MAPS INTO SPHERES BIHARMONIC WAVE MAPS INTO SPHERES SEBASTIAN HERR, TOBIAS LAMM, AND ROLAND SCHNAUBELT Abstract. A global weak solution of the biharmonic wave map equation in the energy space for spherical targets is constructed.

More information

Simple Examples on Rectangular Domains

Simple Examples on Rectangular Domains 84 Chapter 5 Simple Examples on Rectangular Domains In this chapter we consider simple elliptic boundary value problems in rectangular domains in R 2 or R 3 ; our prototype example is the Poisson equation

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

Inner product spaces. Layers of structure:

Inner product spaces. Layers of structure: Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty

More information

Linear Ordinary Differential Equations

Linear Ordinary Differential Equations MTH.B402; Sect. 1 20180703) 2 Linear Ordinary Differential Equations Preliminaries: Matrix Norms. Denote by M n R) the set of n n matrix with real components, which can be identified the vector space R

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Chapter 8 Integral Operators

Chapter 8 Integral Operators Chapter 8 Integral Operators In our development of metrics, norms, inner products, and operator theory in Chapters 1 7 we only tangentially considered topics that involved the use of Lebesgue measure,

More information

OPTIMALITY CONDITIONS AND POD A-POSTERIORI ERROR ESTIMATES FOR A SEMILINEAR PARABOLIC OPTIMAL CONTROL

OPTIMALITY CONDITIONS AND POD A-POSTERIORI ERROR ESTIMATES FOR A SEMILINEAR PARABOLIC OPTIMAL CONTROL OPTIMALITY CONDITIONS AND POD A-POSTERIORI ERROR ESTIMATES FOR A SEMILINEAR PARABOLIC OPTIMAL CONTROL O. LASS, S. TRENZ, AND S. VOLKWEIN Abstract. In the present paper the authors consider an optimal control

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal . Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal 3 9 matrix D such that A = P DP, for A =. 3 4 3 (a) P = 4, D =. 3 (b) P = 4, D =. (c) P = 4 8 4, D =. 3 (d) P

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true 3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO

More information

CHAPTER 3 Further properties of splines and B-splines

CHAPTER 3 Further properties of splines and B-splines CHAPTER 3 Further properties of splines and B-splines In Chapter 2 we established some of the most elementary properties of B-splines. In this chapter our focus is on the question What kind of functions

More information

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1 Scientific Computing WS 2018/2019 Lecture 15 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 15 Slide 1 Lecture 15 Slide 2 Problems with strong formulation Writing the PDE with divergence and gradient

More information

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Antoni Ras Departament de Matemàtica Aplicada 4 Universitat Politècnica de Catalunya Lecture goals To review the basic

More information

DRIFT OF SPECTRALLY STABLE SHIFTED STATES ON STAR GRAPHS

DRIFT OF SPECTRALLY STABLE SHIFTED STATES ON STAR GRAPHS DRIFT OF SPECTRALLY STABLE SHIFTED STATES ON STAR GRAPHS ADILBEK KAIRZHAN, DMITRY E. PELINOVSKY, AND ROY H. GOODMAN Abstract. When the coefficients of the cubic terms match the coefficients in the boundary

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

arxiv: v1 [math.na] 13 Sep 2014

arxiv: v1 [math.na] 13 Sep 2014 Model Order Reduction for Nonlinear Schrödinger Equation B. Karasözen, a,, C. Akkoyunlu b, M. Uzunca c arxiv:49.3995v [math.na] 3 Sep 4 a Department of Mathematics and Institute of Applied Mathematics,

More information

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1 Lectures - Week 11 General First Order ODEs & Numerical Methods for IVPs In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear

More information

Scientific Computing WS 2017/2018. Lecture 18. Jürgen Fuhrmann Lecture 18 Slide 1

Scientific Computing WS 2017/2018. Lecture 18. Jürgen Fuhrmann Lecture 18 Slide 1 Scientific Computing WS 2017/2018 Lecture 18 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 18 Slide 1 Lecture 18 Slide 2 Weak formulation of homogeneous Dirichlet problem Search u H0 1 (Ω) (here,

More information

Lecture Notes on PDEs

Lecture Notes on PDEs Lecture Notes on PDEs Alberto Bressan February 26, 2012 1 Elliptic equations Let IR n be a bounded open set Given measurable functions a ij, b i, c : IR, consider the linear, second order differential

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Analysis Preliminary Exam Workshop: Hilbert Spaces

Analysis Preliminary Exam Workshop: Hilbert Spaces Analysis Preliminary Exam Workshop: Hilbert Spaces 1. Hilbert spaces A Hilbert space H is a complete real or complex inner product space. Consider complex Hilbert spaces for definiteness. If (, ) : H H

More information

Basic Elements of Linear Algebra

Basic Elements of Linear Algebra A Basic Review of Linear Algebra Nick West nickwest@stanfordedu September 16, 2010 Part I Basic Elements of Linear Algebra Although the subject of linear algebra is much broader than just vectors and matrices,

More information

PART IV Spectral Methods

PART IV Spectral Methods PART IV Spectral Methods Additional References: R. Peyret, Spectral methods for incompressible viscous flow, Springer (2002), B. Mercier, An introduction to the numerical analysis of spectral methods,

More information

Notes on basis changes and matrix diagonalization

Notes on basis changes and matrix diagonalization Notes on basis changes and matrix diagonalization Howard E Haber Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064 April 17, 2017 1 Coordinates of vectors and matrix

More information

Chapter 6 Inner product spaces

Chapter 6 Inner product spaces Chapter 6 Inner product spaces 6.1 Inner products and norms Definition 1 Let V be a vector space over F. An inner product on V is a function, : V V F such that the following conditions hold. x+z,y = x,y

More information

Basic Calculus Review

Basic Calculus Review Basic Calculus Review Lorenzo Rosasco ISML Mod. 2 - Machine Learning Vector Spaces Functionals and Operators (Matrices) Vector Space A vector space is a set V with binary operations +: V V V and : R V

More information

Lecture 8: Linear Algebra Background

Lecture 8: Linear Algebra Background CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 8: Linear Algebra Background Lecturer: Shayan Oveis Gharan 2/1/2017 Scribe: Swati Padmanabhan Disclaimer: These notes have not been subjected

More information

Université de Metz. Master 2 Recherche de Mathématiques 2ème semestre. par Ralph Chill Laboratoire de Mathématiques et Applications de Metz

Université de Metz. Master 2 Recherche de Mathématiques 2ème semestre. par Ralph Chill Laboratoire de Mathématiques et Applications de Metz Université de Metz Master 2 Recherche de Mathématiques 2ème semestre Systèmes gradients par Ralph Chill Laboratoire de Mathématiques et Applications de Metz Année 26/7 1 Contents Chapter 1. Introduction

More information

REPRESENTATION THEORY WEEK 7

REPRESENTATION THEORY WEEK 7 REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable

More information

Nonlinear Programming Algorithms Handout

Nonlinear Programming Algorithms Handout Nonlinear Programming Algorithms Handout Michael C. Ferris Computer Sciences Department University of Wisconsin Madison, Wisconsin 5376 September 9 1 Eigenvalues The eigenvalues of a matrix A C n n are

More information

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT Abstract. These are the letcure notes prepared for the workshop on Functional Analysis and Operator Algebras to be held at NIT-Karnataka,

More information

Linear Algebra: Characteristic Value Problem

Linear Algebra: Characteristic Value Problem Linear Algebra: Characteristic Value Problem . The Characteristic Value Problem Let < be the set of real numbers and { be the set of complex numbers. Given an n n real matrix A; does there exist a number

More information

Switching, sparse and averaged control

Switching, sparse and averaged control Switching, sparse and averaged control Enrique Zuazua Ikerbasque & BCAM Basque Center for Applied Mathematics Bilbao - Basque Country- Spain zuazua@bcamath.org http://www.bcamath.org/zuazua/ WG-BCAM, February

More information

1 Continuity Classes C m (Ω)

1 Continuity Classes C m (Ω) 0.1 Norms 0.1 Norms A norm on a linear space X is a function : X R with the properties: Positive Definite: x 0 x X (nonnegative) x = 0 x = 0 (strictly positive) λx = λ x x X, λ C(homogeneous) x + y x +

More information

FOURIER METHODS AND DISTRIBUTIONS: SOLUTIONS

FOURIER METHODS AND DISTRIBUTIONS: SOLUTIONS Centre for Mathematical Sciences Mathematics, Faculty of Science FOURIER METHODS AND DISTRIBUTIONS: SOLUTIONS. We make the Ansatz u(x, y) = ϕ(x)ψ(y) and look for a solution which satisfies the boundary

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information