Proper Orthogonal Decomposition Surrogate Models for Nonlinear Dynamical Systems: Error Estimates and Suboptimal Control


Michael Hinze¹ and Stefan Volkwein²

¹ Institut für Numerische Mathematik, TU Dresden, Dresden, Germany, hinze@math.tu-dresden.de
² Institut für Mathematik und Wissenschaftliches Rechnen, Karl-Franzens-Universität Graz, Heinrichstrasse 36, Graz, Austria, stefan.volkwein@uni-graz.at

1 Motivation

Optimal control problems for nonlinear partial differential equations are often hard to tackle numerically, so the need to develop novel techniques emerges. One such technique is given by reduced-order methods. Recently the application of reduced-order models to optimal control problems for partial differential equations has received an increasing amount of attention. The reduced-order approach is based on projecting the dynamical system onto subspaces consisting of basis elements that contain characteristics of the expected solution. This is in contrast to, e.g., finite element techniques, where the elements of the subspaces are uncorrelated with the physical properties of the system they approximate. The reduced basis method as developed, e.g., in [IR98] is one such reduced-order method, with the basis elements corresponding to the dynamics of expected control regimes. Proper orthogonal decomposition (POD) provides a method for deriving low-order models of dynamical systems. It has been used successfully in a variety of fields including signal analysis and pattern recognition (see [Fuk9]), fluid dynamics and coherent structures (see [AHLS88, HLB96, NAMTT3, RF94, Sir87]) and more recently in control theory (see [AH1, AFS, LT1, SK98, TGP99]) and inverse problems (see [BJWW]). Moreover, in [ABK1] POD was successfully utilized to compute reduced-order controllers. The relationship between POD and balancing was considered in [LMG, Row4, WP1]. Error analysis for nonlinear dynamical systems in finite dimensions was carried out in [RP].
In our application we apply POD to derive a Galerkin approximation in the spatial variable, with basis functions corresponding to the solution of the physical system at pre-specified time instances. These are called the snapshots. Due to possible linear dependence or almost linear dependence, the snapshots themselves are not appropriate as a basis. Rather, a singular value decomposition (SVD) is carried out and the leading generalized eigenfunctions are chosen as a basis, referred to as the POD basis.

The paper is organized as follows. In Section 2 the POD method and its relation to the SVD is described. Furthermore, the snapshot form of POD for abstract parabolic equations is illustrated. Section 3 deals with reduced-order modeling of nonlinear dynamical systems. Among other things, error estimates for reduced-order models of a general equation in fluid mechanics obtained by the snapshot POD method are presented. Section 4 deals with suboptimal control strategies based on POD. For optimal open-loop control problems an adaptive optimization algorithm is presented which in every iteration uses a surrogate model obtained by the POD method instead of the full dynamics. In particular, in Section 4 first steps towards error estimation for optimal control problems whose discretization is based on POD are presented. The practical behavior of the proposed adaptive optimization algorithm is illustrated for two applications involving the time-dependent Navier-Stokes system in Section 5. For closed-loop control we refer the reader to [Gom, KV99, KVX4, LV3], for instance. Finally, we draw some conclusions and discuss future research perspectives in Section 6.

2 The POD method

In this section we present the POD method and its numerical realization. In particular, we consider both POD in C^n (the finite-dimensional case) and POD in Hilbert spaces; see Sections 2.1 and 2.2, respectively. For more details we refer to, e.g., [HLB96, KV99, Vol1a].

2.1 Finite-dimensional POD

In this subsection we concentrate on POD in the finite-dimensional setting and emphasize the close connection between POD and the singular value decomposition (SVD) of rectangular matrices; see [KV99].
Furthermore, the numerical realization of POD is explained.

POD and SVD

Let Y be a possibly complex-valued n × m matrix of rank d. In the context of POD it will be useful to think of the columns {Y_{·,j}}_{j=1}^m of Y as the spatial coordinate vectors of a dynamical system at the times t_j. Similarly, we consider the rows {Y_{i,·}}_{i=1}^n of Y as the time trajectories of the dynamical system evaluated at the locations x_i. From the SVD (see, e.g., [Nob69]) we obtain the existence of real numbers σ_1 ≥ σ_2 ≥ … ≥ σ_d > 0 and unitary matrices U ∈ C^{n×n} with columns {u_i}_{i=1}^n and V ∈ C^{m×m} with columns {v_i}_{i=1}^m such that

U^H Y V = ( D 0 ; 0 0 ) =: Σ ∈ C^{n×m},  (1)

where D = diag(σ_1, …, σ_d) ∈ R^{d×d}, the zeros in (1) denote zero matrices of appropriate dimensions, and the superscript H stands for the conjugate transpose. Moreover, the vectors {u_i}_{i=1}^d and {v_i}_{i=1}^d satisfy

Y v_i = σ_i u_i and Y^H u_i = σ_i v_i for i = 1, …, d.  (2)

They are eigenvectors of Y Y^H and Y^H Y with eigenvalues σ_i² > 0, i = 1, …, d. The vectors {u_i}_{i=d+1}^n and {v_i}_{i=d+1}^m (if d < n respectively d < m) are eigenvectors of Y Y^H and Y^H Y, respectively, with eigenvalue 0. If Y ∈ R^{n×m} then U and V can be chosen to be real-valued. From (2) we deduce that Y = U Σ V^H. It follows that Y can also be expressed as

Y = U^d D (V^d)^H,  (3)

where U^d ∈ C^{n×d} and V^d ∈ C^{m×d} are given by

U^d_{i,j} = U_{i,j} for 1 ≤ i ≤ n, 1 ≤ j ≤ d,  V^d_{i,j} = V_{i,j} for 1 ≤ i ≤ m, 1 ≤ j ≤ d.

It will be convenient to express (3) as

Y = U^d B with B = D (V^d)^H ∈ C^{d×m}.

Thus the column space of Y can be represented in terms of the d linearly independent columns of U^d. The coefficients in the expansion of the columns Y_{·,j}, j = 1, …, m, in the basis {U^d_{·,i}}_{i=1}^d are given by B_{·,j}. Since U is unitary we easily find that

Y_{·,j} = Σ_{i=1}^d B_{i,j} U^d_{·,i} = Σ_{i=1}^d ⟨U_{·,i}, Y_{·,j}⟩_{C^n} U^d_{·,i},

where ⟨·,·⟩_{C^n} denotes the canonical inner product in C^n. In terms of the columns y_j of Y we express the last equality as

y_j = Σ_{i=1}^d B_{i,j} u_i = Σ_{i=1}^d ⟨u_i, y_j⟩_{C^n} u_i, j = 1, …, m.

Let us now interpret the singular value decomposition in terms of POD. One of the central issues of POD is the reduction of data by expressing its essential information by means of a few basis vectors. The problem of approximating all spatial coordinate vectors y_j of Y simultaneously by a single, normalized vector as well as possible can be expressed as

max Σ_{j=1}^m |⟨y_j, u⟩_{C^n}|²  subject to (s.t.)  ‖u‖_{C^n} = 1.  (P)

Here ‖·‖_{C^n} denotes the Euclidean norm in C^n. Utilizing a Lagrangian framework, a necessary optimality condition for (P) is given by the eigenvalue problem

Y Y^H u = σ² u.  (4)

By singular value analysis, u_1 solves (P) and argmax (P) = σ_1². If we were to determine a second vector, orthogonal to u_1, that again describes the data set {y_j}_{j=1}^m as well as possible, then we need to solve

max Σ_{j=1}^m |⟨y_j, u⟩_{C^n}|²  s.t.  ‖u‖_{C^n} = 1 and ⟨u, u_1⟩_{C^n} = 0.  (P2)

Rayleigh's principle and the singular value decomposition imply that u_2 is a solution to (P2) and argmax (P2) = σ_2². Clearly this procedure can be continued by finite induction, so that u_k, 1 ≤ k ≤ d, solves

max Σ_{j=1}^m |⟨y_j, u⟩_{C^n}|²  s.t.  ‖u‖_{C^n} = 1 and ⟨u, u_i⟩_{C^n} = 0, 1 ≤ i ≤ k−1.  (P_k)

The following result, which states that for every 1 ≤ l ≤ d the approximation of the columns of Y by the first l singular vectors {u_i}_{i=1}^l is optimal in the mean among all rank-l approximations to the columns of Y, is now quite natural. More precisely, let Û ∈ C^{n×d} denote a matrix with pairwise orthonormal columns û_i and let the expansion of the columns of Y in the basis {û_i}_{i=1}^d be given by

Y = Û B̂, where B̂_{i,j} = ⟨û_i, y_j⟩_{C^n} for 1 ≤ i ≤ d, 1 ≤ j ≤ m.

Then for every l ≤ d we have

‖Y − Û^l B̂^l‖_F ≥ ‖Y − U^l B^l‖_F.  (5)

Here ‖·‖_F denotes the Frobenius norm, U^l denotes the first l columns of U, B^l the first l rows of B, and similarly for Û^l and B̂^l. Note that the j-th column of U^l B^l represents the Fourier expansion of order l of the j-th column y_j of Y in the orthonormal basis {u_i}_{i=1}^l. Utilizing the fact that Û^l B̂^l has rank at most l and recalling that B^l = (D(V^d)^H)^l, estimate (5) follows directly from singular value analysis [Nob69]. We refer to U^l as the POD basis of rank l. Then we have

Σ_{i=l+1}^d σ_i² = Σ_{i=l+1}^d Σ_{j=1}^m |B_{i,j}|² ≤ Σ_{i=l+1}^d Σ_{j=1}^m |B̂_{i,j}|²  (6)

and

Σ_{i=1}^l σ_i² = Σ_{i=1}^l Σ_{j=1}^m |B_{i,j}|² ≥ Σ_{i=1}^l Σ_{j=1}^m |B̂_{i,j}|².  (7)

Inequalities (6) and (7) establish that for every 1 ≤ l ≤ d the POD basis of rank l is optimal in the sense of representing in the mean the columns of Y as a linear combination of a basis of rank l. Adopting the interpretation of Y_{i,j} as the velocity of a fluid at location x_i and at time t_j, inequality (7) expresses the fact that the first l POD basis functions capture more energy on average than the first l functions of any other basis. The POD expansion Y^l of rank l is given by

Y^l = U^l B^l = U^l (D(V^d)^H)^l,

and hence the t-averages of the coefficients satisfy

⟨B^l_{i,·}, B^l_{j,·}⟩_{C^m} = σ_i² δ_{ij} for 1 ≤ i, j ≤ l.

This property is referred to by saying that the POD coefficients are uncorrelated.

Computational issues

Concerning the practical computation of a POD basis of rank l, let us note that if m < n then one can choose to determine m eigenvectors v_i corresponding to the largest eigenvalues of Y^H Y ∈ C^{m×m} and by (2) determine the POD basis from

u_i = (1/σ_i) Y v_i, i = 1, …, l.  (8)

Note that the square matrix Y^H Y has the dimension of the number of time instances t_j. For historical reasons [Sir87] this method of determining the POD basis is sometimes called the method of snapshots. For the application of POD to concrete problems the choice of l is certainly of central importance, as are the number and location of the snapshots. It appears that no general a-priori rules are available. Rather, the choice of l is based on heuristic considerations combined with observing the ratio of the modeled to the total information content contained in the system Y, which is expressed by

E(l) = (Σ_{i=1}^l σ_i²) / (Σ_{i=1}^d σ_i²) for l ∈ {1, …, d}.  (9)

For a further discussion, also of adaptive strategies based, e.g., on this quantity, we refer to [MM3] and the literature cited there.
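The method of snapshots (8) and the energy ratio (9) can be sketched numerically. The following is a minimal Python/NumPy illustration (the function names are ours, not from the paper); it assumes the snapshot matrix Y has full column rank so that all computed singular values are positive:

```python
import numpy as np

def pod_basis_snapshots(Y, l):
    """Rank-l POD basis of the columns of Y via the method of snapshots:
    eigenvectors of the small m x m matrix Y^H Y, then u_i = (1/sigma_i) Y v_i,
    cf. (8)."""
    K = Y.conj().T @ Y                 # m x m correlation matrix Y^H Y
    lam, V = np.linalg.eigh(K)         # eigenvalues in ascending order
    lam, V = lam[::-1], V[:, ::-1]     # reorder descending
    lam = np.clip(lam, 0.0, None)      # guard against tiny negative round-off
    sigma = np.sqrt(lam)
    U = Y @ V[:, :l] / sigma[:l]       # POD basis of rank l, cf. (8)
    return U, sigma

def energy_ratio(sigma, l):
    """Ratio E(l) of modeled to total information content, cf. (9)."""
    return np.sum(sigma[:l] ** 2) / np.sum(sigma ** 2)
```

In practice one would pick the smallest l for which E(l) exceeds a heuristic threshold, e.g. 0.99.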

Application to discrete solutions of dynamical systems

Let us now assume that Y ∈ R^{n×m}, n ≥ m, arises from the discretization of a dynamical system, where a finite element approach has been utilized to discretize the state variable y = y(x, t), i.e.,

y^h(x, t_j) = Σ_{i=1}^n Y_{i,j} ϕ_i(x) for x ∈ Ω,

with ϕ_i, 1 ≤ i ≤ n, denoting the finite element functions and Ω being a bounded domain in R² or R³. The goal is to describe the ensemble {y^h(·, t_j)}_{j=1}^m of L²-functions simultaneously by a single normalized L²-function ψ as well as possible:

max Σ_{j=1}^m |⟨y^h(·, t_j), ψ⟩_{L²(Ω)}|²  s.t.  ‖ψ‖_{L²(Ω)} = 1,  (P̃)

where ⟨·,·⟩_{L²(Ω)} is the canonical inner product in L²(Ω). Since y^h(·, t_j) ∈ span{ϕ_1, …, ϕ_n} holds for 1 ≤ j ≤ m, we have ψ ∈ span{ϕ_1, …, ϕ_n}. Let v be the vector containing the components v_i such that ψ(x) = Σ_{i=1}^n v_i ϕ_i(x), and let S ∈ R^{n×n} denote the positive definite mass matrix with elements ⟨ϕ_i, ϕ_j⟩_{L²(Ω)}. Instead of (4) we obtain

Y Y^T S v = σ² v.  (10)

The eigenvalue problem (10) can be solved by utilizing singular value analysis. Multiplying (10) by the positive square root S^{1/2} of S from the left and setting u = S^{1/2} v, we obtain the n × n eigenvalue problem

Ỹ Ỹ^T u = σ² u,  (11)

where Ỹ = S^{1/2} Y ∈ R^{n×m}. We mention that (11) coincides with (4) when {ϕ_i}_{i=1}^n is an orthonormal set in L²(Ω). Note that if Y has rank d, the matrix Ỹ also has rank d. Applying the singular value decomposition to the rectangular matrix Ỹ we have Ỹ = U Σ V^T (see (1)). Analogously to (3) it follows that

Ỹ = U^d D (V^d)^T,  (12)

where again U^d and V^d contain the first d columns of the matrices U and V, respectively. Using (12) we determine the coefficient matrix Ψ = S^{−1/2} U^d ∈ R^{n×d}, so that the first d POD basis functions are given by

ψ_j(x) = Σ_{i=1}^n Ψ_{i,j} ϕ_i(x), j = 1, …, d.

Due to (11) and Ψ_{·,j} = S^{−1/2} U^d_{·,j}, 1 ≤ j ≤ d, the vectors Ψ_{·,j} are eigenvectors of problem (10) with corresponding eigenvalues σ_j²:

Y Y^T S Ψ_{·,j} = Y Y^T S S^{−1/2} U^d_{·,j} = S^{−1/2} Ỹ Ỹ^T U^d_{·,j} = σ_j² S^{−1/2} U^d_{·,j} = σ_j² Ψ_{·,j}.

Therefore, the function ψ_1 solves (P̃) with argmax (P̃) = σ_1² and, by finite induction, the function ψ_k, k ∈ {2, …, d}, solves

max Σ_{j=1}^m |⟨y^h(·, t_j), ψ⟩_{L²(Ω)}|²  s.t.  ‖ψ‖_{L²(Ω)} = 1, ⟨ψ, ψ_i⟩_{L²(Ω)} = 0 for i < k,  (P̃_k)

with argmax (P̃_k) = σ_k². Since Ψ_{·,j} = S^{−1/2} U^d_{·,j}, the functions ψ_1, …, ψ_d are orthonormal with respect to the L²-inner product:

⟨ψ_i, ψ_j⟩_{L²(Ω)} = ⟨Ψ_{·,i}, S Ψ_{·,j}⟩_{C^n} = ⟨u_i, u_j⟩_{C^n} = δ_{ij}, 1 ≤ i, j ≤ d.

Note that the coefficient matrix Ψ can also be computed by using generalized singular value analysis. If we multiply (10) by S from the left, we obtain the generalized eigenvalue problem

S Y Y^T S u = σ² S u.

By the generalized SVD [GL89] there exist orthogonal matrices V ∈ R^{m×m} and U ∈ R^{n×n} and an invertible R ∈ R^{n×n} such that

V (Y^T S) R = ( E 0 ) =: Σ_1 ∈ R^{m×n},  (13a)
U S^{1/2} R = Σ_2 ∈ R^{n×n},  (13b)

where E = diag(e_1, …, e_d) with e_i > 0 and Σ_2 = diag(s_1, …, s_n) with s_i > 0. From (13b) we infer that

R = S^{−1/2} U^T Σ_2.  (14)

Inserting (14) into (13a) we obtain

Σ_1 Σ_2^{−1} = V (Y^T S) S^{−1/2} U^T = V (S^{1/2} Y)^T U^T,

which yields a singular value decomposition of the matrix S^{1/2} Y with singular values σ_i = e_i/s_i > 0 for i = 1, …, d. Hence, Ψ is again equal to the first d columns of S^{−1/2} U. If m < n we proceed to determine the matrix Ψ as follows. From u_j = (1/σ_j) S^{1/2} Y v_j for 1 ≤ j ≤ d we infer that

Ψ_{·,j} = (1/σ_j) Y v_j,

where v_j solves the m × m eigenvalue problem

Y^T S Y v_j = σ_j² v_j, 1 ≤ j ≤ d.

Note that the elements of the matrix Y^T S Y are given by the integrals

⟨y(·, t_i), y(·, t_j)⟩_{L²(Ω)}, 1 ≤ i, j ≤ m,  (15)

so that the matrix Y^T S Y is often called a correlation matrix.

2.2 POD for parabolic systems

Whereas in the last subsection POD was motivated by rectangular matrices and the SVD, in this subsection we concentrate on POD for (nonlinear) dynamical systems.

Abstract nonlinear dynamical system

Let V and H be real separable Hilbert spaces and suppose that V is dense in H with compact embedding. By ⟨·,·⟩_H we denote the inner product in H. The inner product in V is given by a symmetric, bounded, coercive bilinear form a : V × V → R:

⟨ϕ, ψ⟩_V = a(ϕ, ψ) for all ϕ, ψ ∈ V  (16)

with associated norm ‖·‖_V = a(·,·)^{1/2}. Since V is continuously embedded into H, there exists a constant c_V > 0 such that

‖ϕ‖_H ≤ c_V ‖ϕ‖_V for all ϕ ∈ V.  (17)

We associate with a the linear operator A:

⟨Aϕ, ψ⟩_{V′,V} = a(ϕ, ψ) for all ϕ, ψ ∈ V,

where ⟨·,·⟩_{V′,V} denotes the duality pairing between V and its dual. Then, by the Lax-Milgram lemma, A is an isomorphism from V onto V′. Alternatively, A can be considered as a linear unbounded self-adjoint operator in H with domain D(A) = {ϕ ∈ V : Aϕ ∈ H}. Identifying H with its dual H′, it follows that

D(A) ⊂ V ⊂ H = H′ ⊂ V′,

each embedding being continuous and dense, when D(A) is endowed with the graph norm of A.

Moreover, let F : V × V → V′ be a bilinear continuous operator mapping D(A) × D(A) into H. To simplify the notation we set F(ϕ) = F(ϕ, ϕ) for ϕ ∈ V. For given f ∈ C([0,T]; H) and y_0 ∈ V we consider the nonlinear evolution problem

d/dt ⟨y(t), ϕ⟩_H + a(y(t), ϕ) + ⟨F(y(t)), ϕ⟩_{V′,V} = ⟨f(t), ϕ⟩_H  (18a)

for all ϕ ∈ V and t ∈ (0,T] a.e. and

y(0) = y_0 in H.  (18b)

Assumption (A1). For every f ∈ C([0,T]; H) and y_0 ∈ V there exists a unique solution of (18) satisfying

y ∈ C([0,T]; V) ∩ L²(0,T; D(A)) ∩ H¹(0,T; H).  (19)

Computation of the POD basis

Throughout we assume that Assumption (A1) holds and we denote by y the unique solution to (18) satisfying (19). For given n ∈ N let

0 = t_0 < t_1 < … < t_n ≤ T  (20)

denote a grid in the interval [0,T] and set δt_j = t_j − t_{j−1}, j = 1, …, n. Define

Δt = max(δt_1, …, δt_n) and δt = min(δt_1, …, δt_n).  (21)

Suppose that the snapshots y(t_j) of (18) at the given time instances t_j, j = 0, …, n, are known. We set

𝒱 = span{y_0, …, y_{2n}},

where y_j = y(t_j) for j = 0, …, n and y_j = ∂̄_t y(t_{j−n}) for j = n+1, …, 2n, with ∂̄_t y(t_j) = (y(t_j) − y(t_{j−1}))/δt_j, and refer to 𝒱 as the ensemble consisting of the snapshots {y_j}_{j=0}^{2n}, at least one of which is assumed to be nonzero. Furthermore, we call {t_j}_{j=0}^n the snapshot grid. Notice that 𝒱 ⊂ V by construction. Throughout the remainder of this section we let X denote either the space V or H.

Remark 1 (compare [KV1, Remark 1]). It may come as a surprise at first that the finite difference quotients ∂̄_t y(t_j) are included in the set 𝒱 of snapshots. To motivate this choice let us point out that while the finite difference quotients are contained in the span of {y_j}_{j=0}^n, the POD bases differ depending on whether {∂̄_t y(t_j)}_{j=1}^n are included or not. The linear dependence does

not constitute a difficulty for the singular value decomposition that is required to compute the POD basis. In fact, the snapshots themselves can be linearly dependent. The resulting POD basis is, in any case, maximally linearly independent in the sense expressed in (P^l) and Proposition 1. Secondly, in anticipation of the rate-of-convergence results that will be presented in Section 3.3, we note that the time derivative of y in (18) must be approximated by the Galerkin POD based scheme. In case the terms {∂̄_t y(t_j)}_{j=1}^n are included in the snapshot ensemble, we are able to utilize the estimate

Σ_{j=1}^n α_j ‖∂̄_t y(t_j) − Σ_{i=1}^l ⟨∂̄_t y(t_j), ψ_i⟩_X ψ_i‖_X² ≤ Σ_{i=l+1}^d λ_i.  (22)

Otherwise, if only the snapshots y_j = y(t_j) for j = 0, …, n, are used, we obtain instead of (37) the error formula

Σ_{j=0}^n α_j ‖y(t_j) − Σ_{i=1}^l ⟨y(t_j), ψ_i⟩_X ψ_i‖_X² = Σ_{i=l+1}^d λ_i,

and (22) must be replaced by

Σ_{j=1}^n α_j ‖∂̄_t y(t_j) − Σ_{i=1}^l ⟨∂̄_t y(t_j), ψ_i⟩_X ψ_i‖_X² ≤ (δt)^{−2} Σ_{i=l+1}^d λ_i,  (23)

which in contrast to (22) contains the factor (δt)^{−2} on the right-hand side. In [HV3] this fact was observed numerically. Moreover, in [LV3] it turned out that the inclusion of the difference quotients improves the stability properties of the computed feedback control laws. Let us also mention the article [AG3], where the time derivatives were included in the snapshot ensemble to obtain a better approximation of the dynamical system.

Let {ψ_i}_{i=1}^d denote an orthonormal basis for 𝒱 with d = dim 𝒱. Then each member of the ensemble can be expressed as

y_j = Σ_{i=1}^d ⟨y_j, ψ_i⟩_X ψ_i for j = 0, …, 2n.  (24)

The method of POD consists in choosing an orthonormal basis such that for every l ∈ {1, …, d} the mean square error between the elements y_j, 0 ≤ j ≤ 2n, and the corresponding l-th partial sum of (24) is minimized on average:

min J(ψ_1, …, ψ_l) = Σ_{j=0}^{2n} α_j ‖y_j − Σ_{i=1}^l ⟨y_j, ψ_i⟩_X ψ_i‖_X²
s.t. ⟨ψ_i, ψ_j⟩_X = δ_{ij} for 1 ≤ i ≤ l, 1 ≤ j ≤ i.  (P^l)

Here {α_j}_{j=0}^{2n} are positive weights, which for our purposes are chosen to be

α_0 = δt_1/2, α_j = (δt_j + δt_{j+1})/2 for j = 1, …, n−1, α_n = δt_n/2,

and α_j = α_{j−n} for j = n+1, …, 2n.

Remark 2. 1) Note that I_n(y) = J(ψ_1, …, ψ_l) can be interpreted as a trapezoidal approximation of the integral

I(y) = ∫_0^T ‖y(t) − Σ_{i=1}^l ⟨y(t), ψ_i⟩_X ψ_i‖_X² + ‖y_t(t) − Σ_{i=1}^l ⟨y_t(t), ψ_i⟩_X ψ_i‖_X² dt.

For all y ∈ C¹([0,T]; X) it follows that lim_{n→∞} I_n(y) = I(y). In Section 4 we will address the continuous version of POD (see, in particular, Theorem 4).

2) Notice that (P^l) is equivalent to

max Σ_{i=1}^l Σ_{j=0}^{2n} α_j |⟨y_j, ψ_i⟩_X|²  s.t.  ⟨ψ_i, ψ_j⟩_X = δ_{ij}, 1 ≤ j ≤ i ≤ l.  (25)

For X = C^n, l = 1, α_j = 1 for 1 ≤ j ≤ n and α_j = 0 otherwise, (25) is equivalent to (P). A solution {ψ_i}_{i=1}^l of (P^l) is called a POD basis of rank l. The subspace spanned by the first l POD basis functions is denoted by V^l, i.e.,

V^l = span{ψ_1, …, ψ_l}.  (26)

The solution of (P^l) is characterized by necessary optimality conditions, which can be written as an eigenvalue problem; compare Section 2.1. For that purpose we endow R^{2n+1} with the weighted inner product

⟨v, w⟩_α = Σ_{j=0}^{2n} α_j v_j w_j  (27)

for v = (v_0, …, v_{2n})^T, w = (w_0, …, w_{2n})^T ∈ R^{2n+1} and the induced norm.

Remark 3. Due to the choice of the weights α_j, the weighted inner product ⟨·,·⟩_α can be interpreted as a trapezoidal approximation of the H¹-inner product

⟨v, w⟩_{H¹(0,T)} = ∫_0^T vw + v_t w_t dt for v, w ∈ H¹(0,T),

so that (27) is a discrete H¹-inner product (compare Section 4).
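The trapezoidal weights α_j and the weighted inner product (27) are straightforward to assemble. A small Python/NumPy sketch follows; the halved trapezoid form of the weights is our reading of the (partly garbled) source, and the function names are ours:

```python
import numpy as np

def trapezoidal_weights(t):
    """Trapezoidal weights alpha_0..alpha_n for a (possibly nonuniform) grid
    t_0 < ... < t_n, duplicated for the difference-quotient snapshots
    y_{n+1}..y_{2n} via alpha_j = alpha_{j-n}."""
    dt = np.diff(t)                            # delta t_j = t_j - t_{j-1}
    alpha = np.empty(len(t))
    alpha[0] = dt[0] / 2
    alpha[1:-1] = (dt[:-1] + dt[1:]) / 2
    alpha[-1] = dt[-1] / 2
    return np.concatenate([alpha, alpha[1:]])  # weights for y_0 .. y_{2n}

def weighted_inner(v, w, alpha):
    """Discrete weighted inner product <v, w>_alpha from (27)."""
    return np.sum(alpha * v * w)
```

On a uniform grid the first n+1 weights sum to T, consistent with the trapezoidal rule.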

Let us introduce the bounded linear operator Y_n : R^{2n+1} → X by

Y_n v = Σ_{j=0}^{2n} α_j v_j y_j for v ∈ R^{2n+1}.  (28)

Then the adjoint Y_n* : X → R^{2n+1} is given by

Y_n* z = (⟨z, y_0⟩_X, …, ⟨z, y_{2n}⟩_X)^T for z ∈ X.  (29)

It follows that R_n = Y_n Y_n* ∈ L(X) and K_n = Y_n* Y_n ∈ R^{(2n+1)×(2n+1)} are given by

R_n z = Σ_{j=0}^{2n} α_j ⟨z, y_j⟩_X y_j for z ∈ X and (K_n)_{ij} = α_j ⟨y_j, y_i⟩_X,  (30)

respectively. By L(X) we denote the Banach space of all linear bounded operators from X into itself, and the matrix K_n is again called a correlation matrix; compare (15). Using a Lagrangian framework we derive the following optimality conditions for the optimization problem (P^l):

R_n ψ = λ ψ,  (31)

compare, e.g., [HLB96] and [Vol1a]. Thus, analogously to finite-dimensional POD, we obtain an eigenvalue problem; see (4). Note that R_n is a bounded, self-adjoint and nonnegative operator. Moreover, since the image of R_n has finite dimension, R_n is also compact. By Hilbert-Schmidt theory (see, e.g., [RS8, p. 3]) there exist an orthonormal basis {ψ_i}_{i∈N} for X and a sequence {λ_i}_{i∈N} of nonnegative real numbers such that

R_n ψ_i = λ_i ψ_i, λ_1 ≥ … ≥ λ_d > 0 and λ_i = 0 for i > d.  (32)

Moreover, 𝒱 = span{ψ_i}_{i=1}^d. Note that {λ_i}_{i∈N} as well as {ψ_i}_{i∈N} depend on n.

Remark 4. a) Setting σ_i = √λ_i, i = 1, …, d, and

v_i = (1/σ_i) Y_n* ψ_i for i = 1, …, d,  (33)

we find

K_n v_i = λ_i v_i and ⟨v_i, v_j⟩_α = δ_{ij}, 1 ≤ i, j ≤ d.  (34)

Thus, {v_i}_{i=1}^d is an orthonormal basis of eigenvectors of K_n for the image of K_n. Conversely, if {v_i}_{i=1}^d is a given orthonormal basis for the image of

K_n, then it follows that the first d eigenfunctions of R_n can be determined by

ψ_i = (1/σ_i) Y_n v_i for i = 1, …, d;  (35)

see (28). Hence, we can determine the POD basis by solving either the eigenvalue problem for R_n or the one for K_n. The relationship between the eigenfunctions of R_n and the eigenvectors of K_n is given by (33) and (35), which corresponds to the SVD in the finite-dimensional POD.

b) Let us introduce the matrices

D = diag(α_0, …, α_{2n}) ∈ R^{(2n+1)×(2n+1)}, K̄_n = ((⟨y_j, y_i⟩_X))_{0≤i,j≤2n} ∈ R^{(2n+1)×(2n+1)}.

Note that the matrix K̄_n is symmetric and positive semi-definite with rank K̄_n = d. Then the eigenvalue problem (34) can be written in matrix-vector notation as follows:

K̄_n D v_i = λ_i v_i and v_i^T D v_j = δ_{ij}, 1 ≤ i, j ≤ d.  (36)

Multiplying the first equation in (36) by D^{1/2} from the left and setting w_i = D^{1/2} v_i, 1 ≤ i ≤ d, we derive

D^{1/2} K̄_n D^{1/2} w_i = λ_i w_i and w_i^T w_j = δ_{ij}, 1 ≤ i, j ≤ d,

where the matrix K̃_n = D^{1/2} K̄_n D^{1/2} is symmetric and positive semi-definite with rank K̃_n = d. Therefore, it turns out that (34) can be expressed as a symmetric eigenvalue problem.

The sequence {ψ_i}_{i=1}^l solves the optimization problem (P^l). This fact as well as the error formula below were proved in [HLB96, Section 3], for example.

Proposition 1. Let λ_1 ≥ … ≥ λ_d > 0 denote the positive eigenvalues of R_n with associated eigenvectors ψ_1, …, ψ_d ∈ X. Then {ψ_i}_{i=1}^l is a POD basis of rank l ≤ d, and we have the error formula

J(ψ_1, …, ψ_l) = Σ_{j=0}^{2n} α_j ‖y_j − Σ_{i=1}^l ⟨y_j, ψ_i⟩_X ψ_i‖_X² = Σ_{i=l+1}^d λ_i.  (37)

3 Reduced-order modeling for dynamical systems

In the previous section we described how to compute a POD basis. In this section we focus on the Galerkin projection of dynamical systems onto the span of the POD basis functions. We obtain reduced-order models and present error estimates for the POD solution compared to the solution of the dynamical system.
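Before turning to the reduced-order models, the symmetric eigenvalue formulation of Remark 4 b) and the error formula (37) can be illustrated numerically. The following Python/NumPy sketch (function name ours) takes X = R^k with the Euclidean inner product, so that K̄_n can be assembled as a plain Gram matrix:

```python
import numpy as np

def pod_from_correlation(Kbar, alpha, l):
    """POD eigenpairs from the correlation matrix Kbar_ij = <y_j, y_i>_X and
    weights alpha, via the symmetric problem of Remark 4 b):
    D^{1/2} Kbar D^{1/2} w = lambda w, with v = D^{-1/2} w."""
    Dhalf = np.sqrt(alpha)
    Ksym = Dhalf[:, None] * Kbar * Dhalf[None, :]   # D^{1/2} Kbar D^{1/2}
    lam, W = np.linalg.eigh(Ksym)                   # ascending order
    lam, W = lam[::-1][:l], W[:, ::-1][:, :l]       # leading l eigenpairs
    V = W / Dhalf[:, None]                          # back-transform v = D^{-1/2} w
    return lam, V
```

Given the eigenvectors v_i, the POD modes follow from (35) as ψ_i = (1/σ_i) Σ_j α_j (v_i)_j y_j, and the residual of the rank-l projection reproduces Σ_{i>l} λ_i as in (37).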

3.1 A general equation in fluid dynamics

In this subsection we specify the abstract nonlinear evolution problem that will be considered in this section and present an existence and uniqueness result, which ensures Assumption (A1) introduced in Section 2.2. We introduce the continuous operator R : V → V′, which maps D(A) into H and satisfies

‖Rϕ‖_H ≤ c_R ‖ϕ‖_V^{1−δ_1} ‖Aϕ‖_H^{δ_1} for all ϕ ∈ D(A),
⟨Rϕ, ϕ⟩_{V′,V} ≥ −c_R ‖ϕ‖_V^{1+δ_2} ‖ϕ‖_H^{1−δ_2} for all ϕ ∈ V

for a constant c_R > 0 and for δ_1, δ_2 ∈ [0,1). We also assume that A + R is coercive on V, i.e., there exists a constant η > 0 such that

a(ϕ, ϕ) + ⟨Rϕ, ϕ⟩_{V′,V} ≥ η ‖ϕ‖_V² for all ϕ ∈ V.  (38)

Moreover, let B : V × V → V′ be a bilinear continuous operator mapping D(A) × D(A) into H such that there exist constants c_B > 0 and δ_3, δ_4, δ_5 ∈ [0,1) satisfying

⟨B(ϕ, ψ), ψ⟩_{V′,V} = 0 for all ϕ, ψ ∈ V,
|⟨B(ϕ, ψ), φ⟩_{V′,V}| ≤ c_B ‖ϕ‖_H^{δ_3} ‖ϕ‖_V^{1−δ_3} ‖ψ‖_V ‖φ‖_V^{δ_3} ‖φ‖_H^{1−δ_3} for all ϕ, ψ, φ ∈ V,
‖B(ϕ, χ)‖_H + ‖B(χ, ϕ)‖_H ≤ c_B ‖ϕ‖_V ‖χ‖_V^{1−δ_4} ‖Aχ‖_H^{δ_4} for all ϕ ∈ V, χ ∈ D(A),
‖B(ϕ, χ)‖_H ≤ c_B ‖ϕ‖_H^{δ_5} ‖ϕ‖_V^{1−δ_5} ‖χ‖_V^{1−δ_5} ‖Aχ‖_H^{δ_5} for all ϕ ∈ V, χ ∈ D(A).

In the context of Section 2.2 we set F = B + R. Thus, for given f ∈ C(0,T; H) and y_0 ∈ V we consider the nonlinear evolution problem

d/dt ⟨y(t), ϕ⟩_H + a(y(t), ϕ) + ⟨F(y(t)), ϕ⟩_{V′,V} = ⟨f(t), ϕ⟩_H  (39a)

for all ϕ ∈ V and almost all t ∈ (0,T], and

y(0) = y_0 in H.  (39b)

The following theorem guarantees (A1).

Theorem 1. Suppose that the operators R and B satisfy the assumptions stated above. Then, for every f ∈ C(0,T; H) and y_0 ∈ V there exists a unique solution of (39) satisfying

y ∈ C([0,T]; V) ∩ L²(0,T; D(A)) ∩ H¹(0,T; H).  (40)

Proof. The proof is analogous to that of the corresponding theorem in [Tem88, p. 111], where the case of time-independent f was treated.

Example 1. Let Ω denote a bounded domain in R² with boundary Γ and let T > 0. The two-dimensional Navier-Stokes equations are given by

ϱ (u_t + (u·∇)u) − ν Δu + ∇p = f in Q = (0,T) × Ω,  (41a)
div u = 0 in Q,  (41b)

where ϱ > 0 is the density of the fluid, ν > 0 is the kinematic viscosity, f represents volume forces and

(u·∇)u = (u_1 ∂u_1/∂x_1 + u_2 ∂u_1/∂x_2, u_1 ∂u_2/∂x_1 + u_2 ∂u_2/∂x_2)^T.

The unknowns are the velocity field u = (u_1, u_2) and the pressure p. Together with (41) we consider the nonslip boundary condition

u = u_d on Σ = (0,T) × Γ  (41c)

and the initial condition

u(0, ·) = u_0 in Ω.  (41d)

In [Tem88] it was proved that (41) can be written in the form (18) and that (A1) holds provided the boundary Γ is sufficiently smooth.

3.2 POD Galerkin projection of dynamical systems

Given a snapshot grid {t_j}_{j=0}^n and associated snapshots y_0, …, y_{2n}, the space V^l is constructed as described in Section 2.2. We obtain the POD-Galerkin surrogate of (39) by replacing the space of test functions V by V^l = span{ψ_1, …, ψ_l} and by using the ansatz

Y(t) = Σ_{i=1}^l α_i(t) ψ_i  (42)

for its solution. The result is an l-dimensional nonlinear dynamical system of ordinary differential equations for the functions α_i (i = 1, …, l) of the form

M α̇ + A α + n(α) = F, M α(0) = (⟨y_0, ψ_j⟩_H)_{j=1}^l,  (43)

where M = (⟨ψ_i, ψ_j⟩_H)_{i,j=1}^l denotes the POD mass matrix, A = (a(ψ_i, ψ_j))_{i,j=1}^l the POD stiffness matrix, n(α) = (⟨F(Y), ψ_j⟩_{V′,V})_{j=1}^l the nonlinearity, and F = (⟨f, ψ_j⟩_H)_{j=1}^l. We note that M is the identity matrix if X = H is chosen in (P^l). For the time discretization we choose m ∈ N and introduce the time grid

0 = τ_0 < τ_1 < … < τ_m = T, δτ_j = τ_j − τ_{j−1} for j = 1, …, m,

and set δτ = min{δτ_j : 1 ≤ j ≤ m} and Δτ = max{δτ_j : 1 ≤ j ≤ m}. Notice that the snapshot grid and the time grid usually do not coincide. Throughout we assume that Δτ/δτ is bounded uniformly with respect to m. To relate the snapshot grid {t_j}_{j=0}^n and the time grid {τ_j}_{j=0}^m we associate with every τ_k, 0 ≤ k ≤ m, an index

k̄ = argmin{|τ_k − t_j| : 0 ≤ j ≤ n}

and define σ_n ∈ {1, …, n} as the maximal number of occurrences of the same value t_k̄ as k ranges over 0 ≤ k ≤ m. The problem consists in finding a sequence {Y^k}_{k=0}^m in V^l satisfying

⟨Y^0, ψ⟩_H = ⟨y_0, ψ⟩_H for all ψ ∈ V^l  (44a)

and

⟨∂̄_τ Y^k, ψ⟩_H + a(Y^k, ψ) + ⟨F(Y^k), ψ⟩_{V′,V} = ⟨f(τ_k), ψ⟩_H  (44b)

for all ψ ∈ V^l and k = 1, …, m, where we have set ∂̄_τ Y^k = (Y^k − Y^{k−1})/δτ_k. Note that (44) is a backward Euler scheme for (39). For every k = 1, …, m there exists at least one solution Y^k of (44). If Δτ is sufficiently small, the sequence {Y^k}_{k=1}^m is uniquely determined. A proof was given in [KV].

3.3 Error estimates

Our next goal is to present an error estimate for the expression

Σ_{k=0}^m β_k ‖Y^k − y(τ_k)‖_H²,

where y(τ_k) is the solution of (39) at the time instances t = τ_k, k = 0, …, m, and the positive weights β_j are given by β_0 = δτ_1/2, β_j = (δτ_j + δτ_{j+1})/2 for j = 1, …, m−1, and β_m = δτ_m/2. Let us introduce the orthogonal projection P_n^l of X onto V^l by

P_n^l ϕ = Σ_{i=1}^l ⟨ϕ, ψ_i⟩_X ψ_i for ϕ ∈ X.  (45)

In the context of finite element discretizations, P_n^l is called the Ritz projection.
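The time-marching scheme (44) amounts to a backward Euler step for the small dense system (43), solved at each step by a Newton iteration. A minimal Python/NumPy sketch follows (function name and the choice of Newton's method are ours; the paper only specifies the implicit Euler discretization):

```python
import numpy as np

def pod_backward_euler(M, A, n, Jn, F, alpha0, tau):
    """Backward Euler for the reduced system (43)/(44):
    M (a^k - a^{k-1})/dtau_k + A a^k + n(a^k) = F(tau_k),
    with n the nonlinearity, Jn its Jacobian, F the right-hand side."""
    a = np.array(alpha0, dtype=float)
    out = [a.copy()]
    for k in range(1, len(tau)):
        dt = tau[k] - tau[k - 1]
        a_new = a.copy()
        for _ in range(50):                     # Newton iteration per time step
            G = M @ (a_new - a) / dt + A @ a_new + n(a_new) - F(tau[k])
            J = M / dt + A + Jn(a_new)          # Jacobian of the residual G
            step = np.linalg.solve(J, G)
            a_new -= step
            if np.linalg.norm(step) < 1e-12:
                break
        a = a_new
        out.append(a.copy())
    return np.array(out)
```

For a linear test problem (n ≡ 0) the scheme reduces to the classical implicit Euler recursion a^k = (M/δτ + A)^{-1}(M a^{k-1}/δτ + F).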

Estimate for the choice X = V

Let us choose X = V in the context of Section 2.2. Since the Hilbert space V is endowed with the inner product (16), the Ritz projection P_n^l is the orthogonal projection of V onto V^l. We make use of the following assumptions:

(H1) y ∈ W^{2,2}(0,T; V), where W^{2,2}(0,T; V) = {ϕ ∈ L²(0,T; V) : ϕ_t, ϕ_tt ∈ L²(0,T; V)} is a Hilbert space endowed with its canonical inner product.

(H2) There exist a normed linear space W continuously embedded in V and a constant c_a > 0 such that y ∈ C([0,T]; W) and

|a(ϕ, ψ)| ≤ c_a ‖ϕ‖_H ‖ψ‖_W for all ϕ ∈ V and ψ ∈ W.  (46)

Example 2. For V = H_0^1(Ω), H = L²(Ω), with Ω a bounded domain, and

a(ϕ, ψ) = ∫_Ω ∇ϕ·∇ψ dx for all ϕ, ψ ∈ H_0^1(Ω),

choosing W = H²(Ω) ∩ H_0^1(Ω) implies

|a(ϕ, ψ)| ≤ ‖ϕ‖_W ‖ψ‖_H for all ϕ ∈ W, ψ ∈ V,

and (46) holds with c_a = 1.

Remark 5. In the case X = V we infer from (16) that

a(P_n^l ϕ, ψ) = a(ϕ, ψ) for all ψ ∈ V^l,

where ϕ ∈ V. In particular, we have ‖P_n^l‖_{L(V)} = 1. Moreover, (H2) yields ‖P_n^l‖_{L(H)} ≤ c_P for all 1 ≤ l ≤ d, where c_P = c l/λ_l (see [KV, Remark 4.4]) and c > 0 depends on y, c_a, and T, but is independent of l and of the eigenvalues λ_i. The next theorem was proved in [KV, Theorem 4.7 and Corollary 4.11].

Theorem 2. Assume that (H1) and (H2) hold and that Δτ is sufficiently small. Then there exists a constant C depending on T, but independent of the grids {t_j}_{j=0}^n and {τ_j}_{j=0}^m, such that

Σ_{k=0}^m β_k ‖Y^k − y(τ_k)‖_H² ≤ C σ_n Δτ (Δτ + Δt)² ‖y_tt‖²_{L²(0,T;V)}
+ C Σ_{i=l+1}^d ( |⟨ψ_i, y_0⟩_V|² + (1 + σ_n Δτ/δt) λ_i ) + C σ_n Δτ Δt² ‖y_t‖²_{L²(0,T;V)}.  (47)

Remark 6. a) If we take the snapshot set

𝒱̃ = span{y(t_0), …, y(t_n)}

instead of 𝒱, we obtain instead of (47) the following estimate:

Σ_{k=0}^m β_k ‖Y^k − y(τ_k)‖_H² ≤ C Σ_{i=l+1}^d ( |⟨ψ_i, y_0⟩_V|² + σ_n (1/(δt δτ) + Δτ) λ_i )
+ C σ_n Δτ Δt² ‖y_t‖²_{L²(0,T;V)} + C σ_n Δτ (Δτ + Δt)² ‖y_tt‖²_{L²(0,T;H)}

(compare [KV, Theorem 4.7]). As mentioned in Remark 1, the factor (δt δτ)^{−1} arises on the right-hand side of the estimate. While computations for many concrete situations show that Σ_{i=l+1}^d λ_i is small compared to Δτ, the question nevertheless arises whether the term 1/(δτ δt) can be avoided in the estimates. However, we refer the reader to [HV3, Section 4], where significantly better numerical results were obtained using the snapshot set 𝒱 instead of 𝒱̃. We refer also to [LV4], where the computed feedback gain was more stabilizing provided information about the time derivatives was included.

b) If the number of POD elements for the Galerkin scheme coincides with the dimension of 𝒱, then the first additive term on the right-hand side disappears.

Asymptotic estimate

Note that the terms {λ_i}_{i=1}^d, {ψ_i}_{i=1}^d and σ_n depend on the time discretization of [0,T] for the snapshots as well as on the numerical integration. We address this dependence next. To obtain an estimate that is independent of the spectral values of a specific snapshot set {y(t_j)}_{j=0}^n, we assume that y ∈ W^{2,2}(0,T; V), so that in particular (H1) holds, and introduce the operator R ∈ L(V) by

R z = ∫_0^T ⟨z, y(t)⟩_V y(t) + ⟨z, y_t(t)⟩_V y_t(t) dt for z ∈ V.  (48)

Since y ∈ W^{2,2}(0,T; V) holds, it follows that R is compact; see, e.g., [KV, Section 4]. From the Hilbert-Schmidt theorem it follows that there exist a complete orthonormal basis {ψ_i}_{i∈N} for X and a sequence {λ_i}_{i∈N} of nonnegative real numbers so that

R ψ_i = λ_i ψ_i, λ_1 ≥ λ_2 ≥ …, and λ_i → 0 as i → ∞.

The spectrum of R is a pure point spectrum except possibly for 0. Each non-zero eigenvalue of R has finite multiplicity, and 0 is the only possible accumulation point of the spectrum of R; see [Kat8, p. 185].
Let us note that

  ∫_0^T ( ‖y(t)‖_X^2 + ‖y_t(t)‖_X^2 ) dt = Σ_{i∈ℕ} λ_i  and  ‖y‖_X^2 = Σ_{i∈ℕ} |⟨y, ψ_i⟩_X|^2.
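A discrete analogue of the operator R in (48) can be assembled from snapshots and their time derivatives via the method of snapshots. The following NumPy sketch is a minimal illustration under stated assumptions: the toy trajectory, the quadrature weights, and all function names are hypothetical stand-ins, not taken from the text; the Euclidean inner product plays the role of the X-inner product. The sketch also checks the trace identity stated above (the sum of the eigenvalues equals the integrated squared norms of y and y_t).

```python
import numpy as np

def pod_eigen_with_derivatives(Y, Yt, w):
    """Method-of-snapshots approximation of the operator R in (48):
    snapshots Y[:, j] = y(t_j) and derivatives Yt[:, j] = y_t(t_j) are
    stacked with quadrature weights w_j so that the correlation matrix
    is a discrete analogue of K = Y*Y.  Returns the eigenvalues in
    descending order (round-off negatives clipped to zero)."""
    Z = np.hstack([Y * np.sqrt(w), Yt * np.sqrt(w)])   # weighted columns
    K = Z.T @ Z                                        # correlation matrix
    lam = np.linalg.eigvalsh(K)[::-1]                  # descending order
    return np.clip(lam, 0.0, None)

# toy trajectory y(t) = e^{-t} y_a + sin(t) y_b on a uniform grid
t = np.linspace(0.0, 1.0, 101)
w = np.full(t.size, t[1] - t[0])
w[[0, -1]] *= 0.5                                      # trapezoidal weights
ya, yb = np.eye(30)[:, 0], np.eye(30)[:, 1]
Y  = np.outer(ya, np.exp(-t)) + np.outer(yb, np.sin(t))
Yt = np.outer(ya, -np.exp(-t)) + np.outer(yb, np.cos(t))
lam = pod_eigen_with_derivatives(Y, Yt, w)
trace = np.sum(w * (np.sum(Y**2, axis=0) + np.sum(Yt**2, axis=0)))
```

Since the toy trajectory evolves in a two-dimensional subspace, only two eigenvalues are (numerically) non-zero, mirroring the finite-rank structure of R for such data.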

Due to the assumption y ∈ W^{2,2}(0,T;V) we have

  lim_{Δt→0} ‖R^n − R‖_{L(V)} = 0,

where the operator R^n was introduced in (3). The following theorem was proved in [KV, Corollary 4.1].

Theorem 3. Let all hypotheses of Theorem 2 be satisfied. Let us choose and fix ℓ such that λ_ℓ ≠ λ_{ℓ+1}. If Δt = O(δτ) and Δτ = O(δt) hold, then there exist a constant C > 0, independent of ℓ and of the grids {t_j}_{j=0}^n and {τ_j}_{j=0}^m, and a t̄ > 0, depending on ℓ, such that

  Σ_{k=0}^m β_k ‖Y^k − y(τ_k)‖_H^2
    ≤ C Σ_{i=ℓ+1}^∞ ( |⟨y_0, ψ_i⟩_V|^2 + λ_i )
      + C ( Δτ Δt^2 ‖y_t‖_{L^2(0,T;V)}^2 + Δτ (Δτ^2 + Δt^2) ‖y_tt‖_{L^2(0,T;V)}^2 )

for all Δt ≤ t̄.   (49)

Remark 7. In the case X = H the spectral norm of the POD stiffness matrix with the elements ⟨ψ_j, ψ_i⟩_V, 1 ≤ i, j ≤ d, arises on the right-hand side of the estimate (47); see [KV, Theorem 4.16]. For this reason, no asymptotic analysis can be done for X = H.

4 Suboptimal control of evolution problems

In this section we propose a reduced-order approach based on POD for optimal control problems governed by evolution problems. For linear-quadratic optimal control problems we present, among other things, error estimates for the suboptimal POD solutions.

4.1 The abstract optimal control problem

For T > 0 the space W(0,T) is defined as

  W(0,T) = { φ ∈ L^2(0,T;V) : φ_t ∈ L^2(0,T;V′) },

which is a Hilbert space endowed with the common inner product (see, for example, [DL9, p. 473]). It is well known that W(0,T) is continuously embedded into C([0,T];H), the space of continuous functions from [0,T] to H, i.e., there exists an embedding constant c_e > 0 such that

  ‖φ‖_{C([0,T];H)} ≤ c_e ‖φ‖_{W(0,T)}  for all φ ∈ W(0,T).   (50)

We consider the abstract problem introduced in Section 2.2. Let U be a Hilbert space which we identify with its dual U′, and let U_ad ⊆ U be a closed and convex

subset. For y_0 ∈ H and u ∈ U_ad we consider the nonlinear evolution problem on [0,T]

  (d/dt) ⟨y(t), φ⟩_H + a(y(t), φ) + ⟨F(y(t)), φ⟩_{V′,V} = ⟨(Bu)(t), φ⟩_{V′,V}  for all φ ∈ V,   (51a)
  y(0) = y_0  in H,   (51b)

where B : U → L^2(0,T;V′) is a continuous linear operator. We suppose that for every u ∈ U_ad and y_0 ∈ H there exists a unique solution y of (51) in W(0,T). This is satisfied in many practical situations, including, e.g., the controlled viscous Burgers and the two-dimensional incompressible Navier-Stokes equations; see, e.g., [Tem88, Vol1b]. Next we introduce the cost functional J : W(0,T) × U → ℝ by

  J(y,u) = (α_1/2) ‖Cy − z_1‖_{W_1}^2 + (α_2/2) ‖Dy(T) − z_2‖_{W_2}^2 + (σ/2) ‖u‖_U^2,   (52)

where W_1, W_2 are Hilbert spaces, C : L^2(0,T;H) → W_1 and D : H → W_2 are bounded linear operators, z_1 ∈ W_1 and z_2 ∈ W_2 are given desired states, and α_1, α_2, σ > 0. The optimal control problem is given by

  min J(y,u)  s.t.  (y,u) ∈ W(0,T) × U_ad solves (51).   (CP)

In view of Example 1, a standard discretization (based on, e.g., finite elements) of (CP) may lead to a large-scale optimization problem which cannot be solved with the currently available computer power. Here we propose a suboptimal solution approach that utilizes POD. The associated suboptimal control problem is obtained by replacing the dynamical system (51) in (CP) by the POD surrogate model (43), using the Ansatz (42) for the state. With F replaced by (⟨(Bu)(t), ψ_j⟩_H)_{j=1}^ℓ it reads

  min J(α,u)  s.t.  (α,u) ∈ H^1(0,T)^ℓ × U_ad solves (43).   (SCP)

At this stage the question arises which snapshots to use for the POD surrogate model, since it is by no means clear that the POD model computed with snapshots related to a control u_1 is also able to resolve the presumably completely different dynamics related to a control u_2 ≠ u_1. To cope with this difficulty we present the following adaptive pseudo-optimization algorithm, which is proposed in [AH, AH1]. It successively updates the snapshot samples upon which the POD surrogate model is based.
Related ideas are presented in [AFS, Rav].

Algorithm 4.1 (POD-based adaptive control)
Choose an increasing sequence of numbers N_j.
1. Let a set of snapshots y_i^0, i = 1, ..., N_0, be given and set j = 0.

2. Set (or determine) ℓ, and compute the POD modes and the space V^ℓ.
3. Solve the reduced optimization problem (SCP) for u^j.
4. Compute the state y^j corresponding to the current control u^j and add the snapshots y_i^{j+1}, i = N_j + 1, ..., N_{j+1}, to the snapshot set y_i^j, i = 1, ..., N_j.
5. If ‖u^{j+1} − u^j‖ is not sufficiently small, set j = j + 1 and go to 2.

We note that the term snapshot here may also refer to difference quotients of snapshots; compare Remark 1. We note further that it is also possible to replace step 4 by

4′. Compute the state y^j corresponding to the current control u^j and store the snapshots y_i^{j+1}, i = N_j + 1, ..., N_{j+1}, while the snapshot set y_i^j, i = 1, ..., N_j, is discarded.

Many numerical investigations on the basis of Algorithm 4.1 with step 4′ can be found in [Afa]. This reference also contains a numerical comparison of POD to other model reduction techniques, including their applications to optimal open-loop control. To anticipate the discussion, we note that the number N_j of snapshots to be taken in the j-th iteration ideally should be determined during the adaptive optimization process. We further note that the choice of ℓ in step 2 might be based on the information content E defined in (9); compare Section 5. We will pick up these items again in Section 6.

Remark 8. It is numerically infeasible to compute an optimal closed-loop feedback control strategy based on a finite element discretization of (51), since the resulting nonlinear dynamical system in general has large dimension, so that the numerical solution of the related Hamilton-Jacobi-Bellman (HJB) equation is out of reach. In [KVX4] model reduction techniques involving POD are used to numerically construct suboptimal closed-loop controllers using the HJB equations of the reduced-order model, which in this case is low dimensional.
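The loop of Algorithm 4.1 can be sketched in a few lines. In the following NumPy sketch everything beyond the loop structure is a hypothetical stand-in: `solve_state` and `solve_scp` are user-supplied callbacks replacing the full-order solver and the reduced problem (SCP), and the toy contraction used to exercise the loop is invented purely for illustration.

```python
import numpy as np

def adaptive_pod_control(solve_state, solve_scp, init_snapshots, ell,
                         tol=1e-6, max_iter=20):
    """Sketch of Algorithm 4.1 (POD-based adaptive control).
    solve_state(u) returns new snapshots for the control u;
    solve_scp(modes, u) solves the reduced problem (SCP)."""
    snapshots = list(init_snapshots)
    u = None
    for j in range(max_iter):
        # steps 2-3: POD modes from current snapshot set, solve (SCP)
        Y = np.column_stack(snapshots)
        U, s, _ = np.linalg.svd(Y, full_matrices=False)
        modes = U[:, :ell]
        u_new = solve_scp(modes, u)
        # step 4: enlarge the snapshot set with the new trajectory
        snapshots.extend(solve_state(u_new))
        # step 5: stop once the control update is sufficiently small
        if u is not None and np.linalg.norm(u_new - u) <= tol:
            return u_new, j
        u = u_new
    return u, max_iter

# toy exercise of the loop (the "solvers" are hypothetical stand-ins):
rng = np.random.default_rng(0)
u_star = np.ones(3)
def solve_state(u):
    return [rng.standard_normal(10)]          # fake trajectory snapshot
def solve_scp(modes, u):
    u_prev = np.zeros(3) if u is None else u
    return u_star + 0.3 * (u_prev - u_star)   # contraction toward u_star
u_opt, iters = adaptive_pod_control(solve_state, solve_scp,
                                    [rng.standard_normal(10)], ell=2)
```

Because the toy reduced solve is a contraction, the control iterates converge geometrically and step 5 terminates the loop well before `max_iter`.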
4.2 Error estimates for linear-quadratic optimal control problems

It is still an open problem to estimate the error between solutions of (CP) and the related suboptimal control problem (SCP), and also to prove convergence of Algorithm 4.1. As a first step in this direction we now present error estimates for discrete solutions of linear-quadratic optimal control problems with a POD model as surrogate. For this purpose we combine techniques of [KV1, KV] and [DH, DH4, Hin4]. We consider the abstract control problem (CP) with F ≡ 0 and U_ad = U. We note that J from (52) is twice continuously Fréchet-differentiable. In particular, the second Fréchet derivative of J at a given point x = (y,u) ∈ W(0,T) × U in a direction δx = (δy, δu) ∈ W(0,T) × U is given by

  J″(x)(δx, δx) = α_1 ‖Cδy‖_{W_1}^2 + α_2 ‖Dδy(T)‖_{W_2}^2 + σ ‖δu‖_U^2.

Thus, J″(x) is a non-negative operator. The goal is to minimize the cost J subject to the constraint that (y,u) solves the linear evolution problem

  ⟨y_t(t), φ⟩_H + a(y(t), φ) = ⟨(Bu)(t), φ⟩_H   (53a)

for all φ ∈ V and almost all t ∈ (0,T), and

  y(0) = y_0  in H.   (53b)

Here, y_0 ∈ H is a given initial condition. It is well known that for every u ∈ U problem (53) admits a unique solution y ∈ W(0,T) satisfying

  ‖y‖_{W(0,T)} ≤ C ( ‖y_0‖_H + ‖u‖_U )

for a constant C > 0; see, e.g., [DL9, pp. 51-5]. If, in addition, y_0 ∈ V and if there exist two constants c_1 > 0 and c_2 ≥ 0 with

  ⟨Aφ, φ⟩_H ≥ c_1 ‖φ‖_{D(A)}^2 − c_2 ‖φ‖_H^2  for all φ ∈ D(A) ⊂ V,

then we have

  y ∈ L^2(0,T;D(A) ∩ V) ∩ H^1(0,T;H);   (54)

compare [DL9, p. 53]. From (54) we infer that y is almost everywhere equal to an element of C([0,T];V). The minimization problem under consideration can be written as the linear-quadratic optimal control problem

  min J(y,u)  s.t.  (y,u) ∈ W(0,T) × U solves (53).   (LQ)

Applying standard arguments one can prove that there exists a unique optimal solution x̄ = (ȳ, ū) to (LQ). Moreover, there exists a unique Lagrange multiplier p̄ ∈ W(0,T) satisfying, together with x̄ = (ȳ, ū), the first-order necessary optimality conditions, which consist of the state equations (53), the adjoint equations

  −⟨p̄_t(t), φ⟩_H + a(p̄(t), φ) = −α_1 ⟨C*(Cȳ(t) − z_1(t)), φ⟩_H   (55a)

for all φ ∈ V and almost all t ∈ (0,T), and

  p̄(T) = −α_2 D*(Dȳ(T) − z_2)  in H,   (55b)

and the optimality condition

  σū − B*p̄ = 0  in U.   (56)

Here, the linear and bounded operators C* : W_1 → L^2(0,T;H), D* : W_2 → H, and B* : L^2(0,T;H) → U stand for the Hilbert space adjoints of C, D, and B, respectively.

Introducing the reduced cost functional

  Ĵ(u) = J(y(u), u),

where y(u) solves (53) for the control u ∈ U, we can express (LQ) as the reduced problem

  min Ĵ(u)  s.t.  u ∈ U.   (P̂)

From (56) it follows that the gradient of Ĵ at ū is given by

  Ĵ′(ū) = σū − B*p̄.   (57)

Let us define the operator G : U → U by

  G(u) = σu − B*p,   (58)

where y = y(u) solves the state equations with the control u ∈ U and p = p(y(u)) satisfies the adjoint equations for the state y. As a consequence of (56) it follows that the first-order necessary optimality condition for (P̂) is

  G(u) = 0  in U.   (59)

In the POD context the operator G will be replaced by an operator G^ℓ : U → U which then represents the optimality condition of the optimal control problem (SCP). The construction of G^ℓ is described in the following.

Computation of the POD basis

Let u ∈ U be a given control for (LQ) and y = y(u) the associated state satisfying y ∈ C^1([0,T];V). To keep the notation simple we apply only a spatial discretization with POD basis functions, but no time integration by, e.g., an implicit Euler method. Therefore, we apply a continuous POD, where we choose X = V in the context of Section 2.2. Let us mention the work [HY], where estimates for POD Galerkin approximations were derived utilizing also a continuous version of POD. We define the bounded linear operator Y : H^1(0,T;ℝ) → V by

  Yφ = ∫_0^T ( φ(t) y(t) + φ_t(t) y_t(t) ) dt  for φ ∈ H^1(0,T;ℝ).

Notice that the operator Y is the continuous variant of the discrete operator Y^n introduced in (8). The adjoint Y* : V → H^1(0,T;ℝ) is given by

  (Y*z)(t) = ⟨z, y(t)⟩_V  for z ∈ V

(compare (9)). The operator R = YY* ∈ L(V) was already introduced in (48).

Remark 9. Analogous to the theory of the singular value decomposition for matrices, we find that the operator K = Y*Y ∈ L(H^1(0,T;ℝ)) given by

  (Kφ)(t) = ∫_0^T ( ⟨y(s), y(t)⟩_V φ(s) + ⟨y_t(s), y_t(t)⟩_V φ_t(s) ) ds  for φ ∈ H^1(0,T;ℝ)

has the eigenvalues {λ_i} and the eigenfunctions

  v_i(t) = (1/√λ_i) (Y*ψ_i)(t) = (1/√λ_i) ⟨ψ_i, y(t)⟩_V

for i ∈ {j ∈ ℕ : λ_j > 0} and almost all t ∈ [0,T].

In the following theorem we formulate properties of the eigenvalues and eigenfunctions of R. For a proof we refer to [HLB96], for instance.

Theorem 4. For every ℓ ∈ ℕ the eigenfunctions ψ_1, ..., ψ_ℓ ∈ V solve the minimization problem

  min J(ψ_1, ..., ψ_ℓ)  s.t.  ⟨ψ_j, ψ_i⟩_X = δ_ij  for 1 ≤ i, j ≤ ℓ,   (60)

where the cost functional J is given by

  J(ψ_1, ..., ψ_ℓ) = ∫_0^T ‖y(t) − Σ_{i=1}^ℓ ⟨y(t), ψ_i⟩_V ψ_i‖_V^2 + ‖y_t(t) − Σ_{i=1}^ℓ ⟨y_t(t), ψ_i⟩_V ψ_i‖_V^2 dt.

Moreover, the eigenvalues {λ_i}_{i∈ℕ} and eigenfunctions {ψ_i}_{i∈ℕ} of R satisfy the formula

  J(ψ_1, ..., ψ_ℓ) = Σ_{i=ℓ+1}^∞ λ_i.   (61)

Proof. The proof relies on the fact that the eigenvalue problem Rψ_i = λ_i ψ_i, i = 1, ..., ℓ, is the first-order necessary optimality condition for (60). For more details we refer the reader to [HLB96].

Galerkin POD approximation

Let us introduce the set V^ℓ = span {ψ_1, ..., ψ_ℓ} ⊂ V. To study the POD approximation of the operator G we introduce the orthogonal projection P^ℓ of V onto V^ℓ by

  P^ℓφ = Σ_{i=1}^ℓ ⟨φ, ψ_i⟩_V ψ_i  for φ ∈ V   (62)

(compare (45)). Note that

  J(ψ_1, ..., ψ_ℓ) = ∫_0^T ‖y(t) − P^ℓ y(t)‖_V^2 + ‖y_t(t) − P^ℓ y_t(t)‖_V^2 dt = Σ_{i=ℓ+1}^∞ λ_i.   (63)

From (16) it follows directly that a(P^ℓφ, ψ) = a(φ, ψ) for all ψ ∈ V^ℓ, where φ ∈ V. Clearly, we have ‖P^ℓ‖_{L(V)} = 1. Next we define the approximation G^ℓ : U → U of the operator G by

  G^ℓ(u) = σu − B*p^ℓ,   (64)

where p^ℓ ∈ W(0,T) is the solution to

  −⟨p^ℓ_t(t), ψ⟩_H + a(p^ℓ(t), ψ) = −α_1 ⟨C*(Cy^ℓ − z_1)(t), ψ⟩_H   (65a)

for all ψ ∈ V^ℓ and almost all t ∈ (0,T), and

  p^ℓ(T) = −α_2 P^ℓ( D*(Dy^ℓ(T) − z_2) ),   (65b)

and y^ℓ ∈ W(0,T) solves

  ⟨y^ℓ_t(t), ψ⟩_H + a(y^ℓ(t), ψ) = ⟨(Bu)(t), ψ⟩_H   (66a)

for all ψ ∈ V^ℓ and almost all t ∈ (0,T), and

  y^ℓ(0) = P^ℓ y_0.   (66b)

Notice that G^ℓ(u) = 0 are the first-order optimality conditions for the optimal control problem

  min Ĵ^ℓ(u)  s.t.  u ∈ U,

where Ĵ^ℓ(u) = J(y^ℓ(u), u) and y^ℓ(u) denotes the solution to (66). It follows from standard arguments (Lax-Milgram lemma) that the operator G^ℓ is well defined. Furthermore we have

Theorem 5. The equation

  G^ℓ(u) = 0  in U   (67)

admits a unique solution u^ℓ ∈ U which, together with the unique solution u of (59), satisfies the estimate

  ‖u − u^ℓ‖_U ≤ (1/σ) ( ‖B*(𝒫 − 𝒫^ℓ)Bu‖_U + ‖B*(S − S^ℓ)*C*z_1‖_U ).   (68)

Here, 𝒫 := S*C*CS and 𝒫^ℓ := (S^ℓ)*C*C S^ℓ, with S and S^ℓ denoting the solution operators of (53) and (66), respectively.
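In a finite-dimensional (matrix) setting, the state solve behind G^ℓ amounts to projecting the system operators onto the POD modes. The sketch below is a hypothetical illustration, not the text's construction: all matrices are toy stand-ins, the Euclidean inner product replaces the V-inner product, and the orthonormal columns of `Psi` stand in for ψ_1, ..., ψ_ℓ. It assembles the Galerkin-reduced operators corresponding to the reduced state equation (66).

```python
import numpy as np

def galerkin_pod_system(A, B, y0, Psi):
    """Project the LTI system y' = A y + B u, y(0) = y0 onto the space
    spanned by the orthonormal columns of Psi, mirroring (66): the POD
    coefficients alpha satisfy alpha' = Ar alpha + Br u with
    alpha(0) = Psi^T y0, i.e. the coefficients of the projected y0."""
    Ar = Psi.T @ A @ Psi          # reduced system matrix
    Br = Psi.T @ B                # reduced control operator
    a0 = Psi.T @ y0               # coefficients of P^l y0
    return Ar, Br, a0

rng = np.random.default_rng(3)
A = -np.eye(20) + 0.1 * rng.standard_normal((20, 20))
B = rng.standard_normal((20, 2))
y0 = rng.standard_normal(20)
Q, _ = np.linalg.qr(rng.standard_normal((20, 4)))
Psi = Q[:, :4]                    # stand-in for POD modes psi_1..psi_4
Ar, Br, a0 = galerkin_pod_system(A, B, y0, Psi)
```

The defining property of the orthogonal projection carries over verbatim: the residual y_0 − P^ℓy_0 is orthogonal to every mode, which is what (66b) exploits.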

A proof of this theorem follows immediately from the fact that u^ℓ is an admissible test function in (59), and u in (67). Details will be given in [HV5].

Remark 10. We note that Theorem 5 remains valid in the situation where the admissible controls are taken from a closed convex subset U_ad ⊆ U. The solutions u and u^ℓ in this case satisfy the variational inequalities

  ⟨G(u), v − u⟩_U ≥ 0  for all v ∈ U_ad

and

  ⟨G^ℓ(u^ℓ), v − u^ℓ⟩_U ≥ 0  for all v ∈ U_ad,

so that adding the first inequality with v = u^ℓ and the second with v = u, together with straightforward estimation, finally gives (68) also in the present case. The crucial point here is that the set of admissible controls is not discretized a priori. The discretization of the optimal control u^ℓ is determined by that of the corresponding Lagrange multiplier p^ℓ. For details of this discretization concept we refer to [Hin4].

It follows from the structure of estimate (68) that error estimates for y − y^ℓ and p − p^ℓ directly lead to an error estimate for u − u^ℓ.

Proposition. Let ℓ ∈ ℕ with λ_ℓ > 0 be fixed, u ∈ U, and let y = y(u) and p = p(y(u)) be the corresponding solutions of the state equations (53) and the adjoint equations (55), respectively. Suppose that the POD basis of rank ℓ is computed by using the snapshots {y(t_j)}_{j=0}^n and their difference quotients. Then there exist constants c_y, c_p > 0 such that

  ‖y^ℓ − y‖_{L^∞(0,T;H)}^2 + ‖y^ℓ − y‖_{L^2(0,T;V)}^2 ≤ c_y Σ_{i=ℓ+1}^∞ λ_i   (69)

and

  ‖p^ℓ − p‖_{L^∞(0,T;H)}^2 + ‖p^ℓ − p‖_{L^2(0,T;V)}^2
    ≤ c_p ( Σ_{i=ℓ+1}^∞ λ_i + ‖P^ℓp − p‖_{L^2(0,T;V)}^2 + ‖P^ℓp_t − p_t‖_{L^2(0,T;V)}^2 ),   (70)

where y^ℓ and p^ℓ solve (66) and (65), respectively, for the chosen u inserted in (66a).

Proof. Let

  y^ℓ(t) − y(t) = y^ℓ(t) − P^ℓy(t) + P^ℓy(t) − y(t) = ϑ(t) + ϱ(t),

where ϑ = y^ℓ − P^ℓy and ϱ = P^ℓy − y. From (16), (62), (63) and the continuous embedding H^1(0,T;V) ↪ L^∞(0,T;H) we find

  ‖ϱ‖_{L^∞(0,T;H)}^2 + ‖ϱ‖_{L^2(0,T;V)}^2 ≤ c_E Σ_{i=ℓ+1}^∞ λ_i   (71)

with an embedding constant c_E > 0. Utilizing (53) and (66) we obtain

  ⟨ϑ_t(t), ψ⟩_H + a(ϑ(t), ψ) = ⟨y_t(t) − P^ℓy_t(t), ψ⟩_H

for all ψ ∈ V^ℓ and almost all t ∈ (0,T). From (16), (17) and Young's inequality it follows that

  (d/dt) ‖ϑ(t)‖_H^2 + ‖ϑ(t)‖_V^2 ≤ c_V ‖y_t(t) − P^ℓy_t(t)‖_V^2.   (72)

Due to (66b) we have ϑ(0) = 0. Integrating (72) over the interval (0,t), t ∈ (0,T], and utilizing (37), (45) and (63), we arrive at

  ‖ϑ(t)‖_H^2 + ∫_0^t ‖ϑ(s)‖_V^2 ds ≤ c_V Σ_{i=ℓ+1}^∞ λ_i

for almost all t ∈ (0,T). Thus,

  esssup_{t∈[0,T]} ‖ϑ(t)‖_H^2 + ∫_0^T ‖ϑ(s)‖_V^2 ds ≤ c_V Σ_{i=ℓ+1}^∞ λ_i.   (73)

Estimates (71) and (73) imply the existence of a constant c_y > 0 such that (69) holds. We proceed by estimating the error arising from the discretization of the adjoint equations and write

  p^ℓ(t) − p(t) = p^ℓ(t) − P^ℓp(t) + P^ℓp(t) − p(t) = θ(t) + ρ(t),

where θ = p^ℓ − P^ℓp and ρ = P^ℓp − p. From (16), (52) and (65b) we get

  ‖θ(T)‖_H ≤ α_2 ‖D‖_{L(H,W_2)}^2 ‖y^ℓ(T) − y(T)‖_H ≤ α_2 ‖D‖_{L(H,W_2)}^2 ‖y^ℓ − y‖_{C([0,T];H)}.

Thus, applying (50), (69) and the techniques used above for the state equations, we obtain

  esssup_{t∈[0,T]} ‖θ(t)‖_H^2 + ∫_0^T ‖θ(s)‖_V^2 ds
    ≤ c_V ( c_V c_e^2 c_y ‖D‖_{L(H,W_2)}^4 Σ_{i=ℓ+1}^∞ λ_i + ‖p_t − P^ℓp_t‖_{L^2(0,T;V)}^2 ).

Hence, there exists a constant c_p > 0 satisfying (70).
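The mechanism behind (69) and (71) is the exact discrete identity that the mean squared POD projection error equals the tail sum of the eigenvalues, the discrete counterpart of (63). The following NumPy sketch (random toy snapshots, Euclidean inner product, all names hypothetical) verifies this identity numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((40, 15))       # snapshot matrix, columns y(t_j)
n = Y.shape[1]
U, s, _ = np.linalg.svd(Y, full_matrices=False)
lam = s**2 / n                          # POD eigenvalues, descending
ell = 5
P = U[:, :ell] @ U[:, :ell].T           # orthogonal projection onto V^ell
err = np.sum((Y - P @ Y)**2) / n        # mean squared projection error
tail = lam[ell:].sum()                  # sum over unmodelled eigenvalues
```

In exact arithmetic `err` and `tail` coincide, which is why the right-hand sides of (69)-(71) are computable once the POD eigenvalues are known.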

Remark 11. a) The error in the discretization of the state variable is bounded by the sum over the unmodelled eigenvalues λ_i for i > ℓ alone. Since the POD basis is not computed utilizing adjoint information, the term P^ℓp − p in the H^1(0,T;V)-norm arises in the error estimate for the adjoint variable. For POD-based approximations of partial differential equations one cannot rely on results clarifying the approximation properties of the POD subspaces with respect to elements of function spaces such as L^p or C. Such results are an essential building block for, e.g., finite element approximations of partial differential equations.

b) If we have already computed a second POD basis of rank ℓ̃ ∈ ℕ for the adjoint variable, then we can express the term involving the difference P^ℓ̃p − p by the sum over the eigenvalues corresponding to those eigenfunctions which are not used as POD basis functions in the discretization.

c) Recall that {ψ_i}_{i∈ℕ} is a basis of V. Thus we have

  ∫_0^T ‖p(t) − P^ℓp(t)‖_V^2 dt ≤ ∫_0^T Σ_{i=ℓ+1}^∞ |a(p(t), ψ_i)|^2 dt.

The sum on the right-hand side converges to zero as ℓ tends to ∞. However, usually no convergence rate is available. In numerical applications we can evaluate ‖p − P^ℓp‖_{L^2(0,T;V)}. If this term is large, then we should increase ℓ and include more eigenfunctions in our POD basis.

d) For the choice X = H we have, instead of (71), the estimate

  ‖ϱ‖_{L^∞(0,T;H)}^2 + ‖ϱ‖_{L^2(0,T;V)}^2 ≤ C ‖S‖ Σ_{i=ℓ+1}^∞ λ_i,

where C is a positive constant, S denotes the stiffness matrix with the elements S_ij = ⟨ψ_j, ψ_i⟩_V, 1 ≤ i, j ≤ ℓ, and ‖·‖ stands for the spectral norm for symmetric matrices; see [KV, Lemma 4.15].

Applying (58), (64) and the proposition above, we obtain for every u ∈ U

  ‖G^ℓ(u) − G(u)‖_U ≤ c_G ( (Σ_{i=ℓ+1}^∞ λ_i)^{1/2} + ‖P^ℓp − p‖_{L^2(0,T;H)} + ‖P^ℓp_t − p_t‖_{L^2(0,T;H)} )   (74)

for a constant c_G > 0 depending on c_p and B. Suppose that u_1, u_2 ∈ U are given and that y_1^ℓ = y^ℓ(u_1) and y_2^ℓ = y^ℓ(u_2) are the corresponding solutions of (66).
Utilizing Young's inequality it follows that there exists a constant c_V > 0 such that

  ‖y_1^ℓ − y_2^ℓ‖_{L^∞(0,T;H)}^2 + ‖y_1^ℓ − y_2^ℓ‖_{L^2(0,T;V)}^2 ≤ c_V ‖B‖_{L(U,L^2(0,T;H))}^2 ‖u_1 − u_2‖_U^2.   (75)

Hence, we conclude from (65) and (75) that

  ‖p_1^ℓ − p_2^ℓ‖_{L^∞(0,T;H)}^2 + ‖p_1^ℓ − p_2^ℓ‖_{L^2(0,T;V)}^2
    ≤ max( α_1^2 c_V ‖C‖_{L(L^2(0,T;H),W_1)}^2, α_2^2 ‖D‖_{L(H,W_2)}^2 ) ( ‖y_1^ℓ − y_2^ℓ‖_{L^∞(0,T;H)}^2 + ‖y_1^ℓ − y_2^ℓ‖_{L^2(0,T;V)}^2 )
    ≤ C ‖u_1 − u_2‖_U^2,   (76)

where

  C = c_V^4 ‖B‖_{L(U,L^2(0,T;H))}^2 max( α_1^2 c_V ‖C‖_{L(L^2(0,T;H),W_1)}^2, α_2^2 ‖D‖_{L(H,W_2)}^2 ).

If the POD basis of rank ℓ is computed for the control u_1, then (64), (74) and (76) lead to the existence of a constant Ĉ > 0 satisfying

  ‖G^ℓ(u_2) − G(u_1)‖_U ≤ ‖G^ℓ(u_2) − G^ℓ(u_1)‖_U + ‖G^ℓ(u_1) − G(u_1)‖_U
    ≤ Ĉ ‖u_2 − u_1‖_U + Ĉ ( (Σ_{i=ℓ+1}^∞ λ_i)^{1/2} + ‖P^ℓp_1 − p_1‖_{L^2(0,T;V)} + ‖P^ℓ(p_1)_t − (p_1)_t‖_{L^2(0,T;V)} ).

Hence, G^ℓ(u_2) is close to G(u_1) in the U-norm provided the terms ‖u_1 − u_2‖_U and Σ_{i=ℓ+1}^∞ λ_i are small and provided the ℓ POD basis functions ψ_1, ..., ψ_ℓ lead to a good approximation of the adjoint variable p_1 in the H^1(0,T;V)-norm. In particular, G^ℓ(u) in this case is small if u denotes the unique optimal control of the continuous control problem, i.e., the solution of G(u) = 0.

We further have that both G and G^ℓ are Fréchet differentiable with constant derivatives

  G′ ≡ σ Id − B*p′  and  (G^ℓ)′ ≡ σ Id − B*(p^ℓ)′.

Moreover, since −B*p′ and −B*(p^ℓ)′ are selfadjoint nonnegative operators, G′ and (G^ℓ)′ are invertible, satisfying

  ‖(G′)^{-1}‖_{L(U)}, ‖((G^ℓ)′)^{-1}‖_{L(U)} ≤ 1/σ.

Since G^ℓ also is Lipschitz continuous with some positive constant K, we now may argue with a Newton-Kantorovich argument [D85, Theorem 15.6] that the equation

  G^ℓ(v) = 0  in U

admits a unique solution u^ℓ ∈ B_ε(u), provided

  ‖((G^ℓ)′)^{-1} G^ℓ(u)‖_U ≤ ε  and  Kε/σ < 1.
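For the linear-quadratic problem the operator G^ℓ is affine with a constant, coercive derivative, so the reduced optimality condition is a single linear solve. The toy NumPy sketch below is a hypothetical finite-dimensional analogue (the matrices are invented stand-ins): G^ℓ(u) = (σI + H)u − b with H selfadjoint nonnegative, so that the solution is unique and the inverse-derivative bound ‖((G^ℓ)′)^{-1}‖ ≤ 1/σ holds.

```python
import numpy as np

# Toy finite-dimensional analogue of the reduced optimality condition
# G^l(u) = sigma*u - B*p^l(u) = 0: for a linear-quadratic problem G^l
# is affine, G^l(u) = (sigma*I + H)u - b, with H selfadjoint and
# nonnegative, so u^l = (sigma*I + H)^{-1} b is the unique solution.
rng = np.random.default_rng(2)
sigma = 0.1
A = rng.standard_normal((6, 6))
H = A.T @ A                                  # selfadjoint, nonnegative
b = rng.standard_normal(6)
u_l = np.linalg.solve(sigma * np.eye(6) + H, b)
residual = (sigma * np.eye(6) + H) @ u_l - b
```

Because the smallest eigenvalue of σI + H is at least σ, the derivative's inverse is bounded by 1/σ, exactly the bound used in the Newton-Kantorovich argument above.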


More information

CHAPTER VIII HILBERT SPACES

CHAPTER VIII HILBERT SPACES CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugate-linear transformation if it is a reallinear transformation from X into Y, and if T (λx)

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Asymptotic Stability of POD based Model Predictive Control for a semilinear parabolic PDE

Asymptotic Stability of POD based Model Predictive Control for a semilinear parabolic PDE Asymptotic Stability of POD based Model Predictive Control for a semilinear parabolic PDE Alessandro Alla Dipartimento di Matematica, Sapienza Università di Roma, P.le Aldo Moro, 85 Rome, Italy Stefan

More information

Deterministic Dynamic Programming

Deterministic Dynamic Programming Deterministic Dynamic Programming 1 Value Function Consider the following optimal control problem in Mayer s form: V (t 0, x 0 ) = inf u U J(t 1, x(t 1 )) (1) subject to ẋ(t) = f(t, x(t), u(t)), x(t 0

More information

Some notes on Coxeter groups

Some notes on Coxeter groups Some notes on Coxeter groups Brooks Roberts November 28, 2017 CONTENTS 1 Contents 1 Sources 2 2 Reflections 3 3 The orthogonal group 7 4 Finite subgroups in two dimensions 9 5 Finite subgroups in three

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

arxiv: v1 [math.na] 13 Sep 2014

arxiv: v1 [math.na] 13 Sep 2014 Model Order Reduction for Nonlinear Schrödinger Equation B. Karasözen, a,, C. Akkoyunlu b, M. Uzunca c arxiv:49.3995v [math.na] 3 Sep 4 a Department of Mathematics and Institute of Applied Mathematics,

More information

Summary of Week 9 B = then A A =

Summary of Week 9 B = then A A = Summary of Week 9 Finding the square root of a positive operator Last time we saw that positive operators have a unique positive square root We now briefly look at how one would go about calculating the

More information

REPRESENTATION THEORY WEEK 7

REPRESENTATION THEORY WEEK 7 REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable

More information

Review and problem list for Applied Math I

Review and problem list for Applied Math I Review and problem list for Applied Math I (This is a first version of a serious review sheet; it may contain errors and it certainly omits a number of topic which were covered in the course. Let me know

More information

Chapter 1 Foundations of Elliptic Boundary Value Problems 1.1 Euler equations of variational problems

Chapter 1 Foundations of Elliptic Boundary Value Problems 1.1 Euler equations of variational problems Chapter 1 Foundations of Elliptic Boundary Value Problems 1.1 Euler equations of variational problems Elliptic boundary value problems often occur as the Euler equations of variational problems the latter

More information

OPTIMALITY CONDITIONS AND POD A-POSTERIORI ERROR ESTIMATES FOR A SEMILINEAR PARABOLIC OPTIMAL CONTROL

OPTIMALITY CONDITIONS AND POD A-POSTERIORI ERROR ESTIMATES FOR A SEMILINEAR PARABOLIC OPTIMAL CONTROL OPTIMALITY CONDITIONS AND POD A-POSTERIORI ERROR ESTIMATES FOR A SEMILINEAR PARABOLIC OPTIMAL CONTROL O. LASS, S. TRENZ, AND S. VOLKWEIN Abstract. In the present paper the authors consider an optimal control

More information

BASIC FUNCTIONAL ANALYSIS FOR THE OPTIMIZATION OF PARTIAL DIFFERENTIAL EQUATIONS

BASIC FUNCTIONAL ANALYSIS FOR THE OPTIMIZATION OF PARTIAL DIFFERENTIAL EQUATIONS BASIC FUNCTIONAL ANALYSIS FOR THE OPTIMIZATION OF PARTIAL DIFFERENTIAL EQUATIONS S. VOLKWEIN Abstract. Infinite-dimensional optimization requires among other things many results from functional analysis.

More information

Wave function methods for the electronic Schrödinger equation

Wave function methods for the electronic Schrödinger equation Wave function methods for the electronic Schrödinger equation Zürich 2008 DFG Reseach Center Matheon: Mathematics in Key Technologies A7: Numerical Discretization Methods in Quantum Chemistry DFG Priority

More information

Where is matrix multiplication locally open?

Where is matrix multiplication locally open? Linear Algebra and its Applications 517 (2017) 167 176 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Where is matrix multiplication locally open?

More information

REGULAR LAGRANGE MULTIPLIERS FOR CONTROL PROBLEMS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS

REGULAR LAGRANGE MULTIPLIERS FOR CONTROL PROBLEMS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS REGULAR LAGRANGE MULTIPLIERS FOR CONTROL PROBLEMS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS fredi tröltzsch 1 Abstract. A class of quadratic optimization problems in Hilbert spaces is considered,

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

Analysis Preliminary Exam Workshop: Hilbert Spaces

Analysis Preliminary Exam Workshop: Hilbert Spaces Analysis Preliminary Exam Workshop: Hilbert Spaces 1. Hilbert spaces A Hilbert space H is a complete real or complex inner product space. Consider complex Hilbert spaces for definiteness. If (, ) : H H

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT Abstract. These are the letcure notes prepared for the workshop on Functional Analysis and Operator Algebras to be held at NIT-Karnataka,

More information

CONVERGENCE THEORY. G. ALLAIRE CMAP, Ecole Polytechnique. 1. Maximum principle. 2. Oscillating test function. 3. Two-scale convergence

CONVERGENCE THEORY. G. ALLAIRE CMAP, Ecole Polytechnique. 1. Maximum principle. 2. Oscillating test function. 3. Two-scale convergence 1 CONVERGENCE THEOR G. ALLAIRE CMAP, Ecole Polytechnique 1. Maximum principle 2. Oscillating test function 3. Two-scale convergence 4. Application to homogenization 5. General theory H-convergence) 6.

More information

LECTURE 2: SYMPLECTIC VECTOR BUNDLES

LECTURE 2: SYMPLECTIC VECTOR BUNDLES LECTURE 2: SYMPLECTIC VECTOR BUNDLES WEIMIN CHEN, UMASS, SPRING 07 1. Symplectic Vector Spaces Definition 1.1. A symplectic vector space is a pair (V, ω) where V is a finite dimensional vector space (over

More information

The Navier-Stokes problem in velocity-pressure formulation :convergence and Optimal Control

The Navier-Stokes problem in velocity-pressure formulation :convergence and Optimal Control The Navier-Stokes problem in velocity-pressure formulation :convergence and Optimal Control A.Younes 1 A. Jarray 2 1 Faculté des Sciences de Tunis, Tunisie. e-mail :younesanis@yahoo.fr 2 Faculté des Sciences

More information

Mathematical Introduction

Mathematical Introduction Chapter 1 Mathematical Introduction HW #1: 164, 165, 166, 181, 182, 183, 1811, 1812, 114 11 Linear Vector Spaces: Basics 111 Field A collection F of elements a,b etc (also called numbers or scalars) with

More information

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220)

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220) Notes for 2017-01-30 Most of mathematics is best learned by doing. Linear algebra is no exception. You have had a previous class in which you learned the basics of linear algebra, and you will have plenty

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Trace Class Operators and Lidskii s Theorem

Trace Class Operators and Lidskii s Theorem Trace Class Operators and Lidskii s Theorem Tom Phelan Semester 2 2009 1 Introduction The purpose of this paper is to provide the reader with a self-contained derivation of the celebrated Lidskii Trace

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

SYLLABUS. 1 Linear maps and matrices

SYLLABUS. 1 Linear maps and matrices Dr. K. Bellová Mathematics 2 (10-PHY-BIPMA2) SYLLABUS 1 Linear maps and matrices Operations with linear maps. Prop 1.1.1: 1) sum, scalar multiple, composition of linear maps are linear maps; 2) L(U, V

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

FINITE-DIMENSIONAL LINEAR ALGEBRA

FINITE-DIMENSIONAL LINEAR ALGEBRA DISCRETE MATHEMATICS AND ITS APPLICATIONS Series Editor KENNETH H ROSEN FINITE-DIMENSIONAL LINEAR ALGEBRA Mark S Gockenbach Michigan Technological University Houghton, USA CRC Press Taylor & Francis Croup

More information

MP463 QUANTUM MECHANICS

MP463 QUANTUM MECHANICS MP463 QUANTUM MECHANICS Introduction Quantum theory of angular momentum Quantum theory of a particle in a central potential - Hydrogen atom - Three-dimensional isotropic harmonic oscillator (a model of

More information

1 Continuity Classes C m (Ω)

1 Continuity Classes C m (Ω) 0.1 Norms 0.1 Norms A norm on a linear space X is a function : X R with the properties: Positive Definite: x 0 x X (nonnegative) x = 0 x = 0 (strictly positive) λx = λ x x X, λ C(homogeneous) x + y x +

More information

Suboptimal feedback control of PDEs by solving Hamilton-Jacobi Bellman equations on sparse grids

Suboptimal feedback control of PDEs by solving Hamilton-Jacobi Bellman equations on sparse grids Suboptimal feedback control of PDEs by solving Hamilton-Jacobi Bellman equations on sparse grids Jochen Garcke joint work with Axel Kröner, INRIA Saclay and CMAP, Ecole Polytechnique Ilja Kalmykov, Universität

More information

BIHARMONIC WAVE MAPS INTO SPHERES

BIHARMONIC WAVE MAPS INTO SPHERES BIHARMONIC WAVE MAPS INTO SPHERES SEBASTIAN HERR, TOBIAS LAMM, AND ROLAND SCHNAUBELT Abstract. A global weak solution of the biharmonic wave map equation in the energy space for spherical targets is constructed.

More information

Convergence of a finite element approximation to a state constrained elliptic control problem

Convergence of a finite element approximation to a state constrained elliptic control problem Als Manuskript gedruckt Technische Universität Dresden Herausgeber: Der Rektor Convergence of a finite element approximation to a state constrained elliptic control problem Klaus Deckelnick & Michael Hinze

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49 REAL ANALYSIS II HOMEWORK 3 CİHAN BAHRAN Conway, Page 49 3. Let K and k be as in Proposition 4.7 and suppose that k(x, y) k(y, x). Show that K is self-adjoint and if {µ n } are the eigenvalues of K, each

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Your first day at work MATH 806 (Fall 2015)

Your first day at work MATH 806 (Fall 2015) Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies

More information

SECOND-ORDER SUFFICIENT OPTIMALITY CONDITIONS FOR THE OPTIMAL CONTROL OF NAVIER-STOKES EQUATIONS

SECOND-ORDER SUFFICIENT OPTIMALITY CONDITIONS FOR THE OPTIMAL CONTROL OF NAVIER-STOKES EQUATIONS 1 SECOND-ORDER SUFFICIENT OPTIMALITY CONDITIONS FOR THE OPTIMAL CONTROL OF NAVIER-STOKES EQUATIONS fredi tröltzsch and daniel wachsmuth 1 Abstract. In this paper sufficient optimality conditions are established

More information

Singular Value Decomposition (SVD) and Polar Form

Singular Value Decomposition (SVD) and Polar Form Chapter 2 Singular Value Decomposition (SVD) and Polar Form 2.1 Polar Form In this chapter, we assume that we are dealing with a real Euclidean space E. Let f: E E be any linear map. In general, it may

More information

Analysis in weighted spaces : preliminary version

Analysis in weighted spaces : preliminary version Analysis in weighted spaces : preliminary version Frank Pacard To cite this version: Frank Pacard. Analysis in weighted spaces : preliminary version. 3rd cycle. Téhéran (Iran, 2006, pp.75.

More information

Spectral Properties of Elliptic Operators In previous work we have replaced the strong version of an elliptic boundary value problem

Spectral Properties of Elliptic Operators In previous work we have replaced the strong version of an elliptic boundary value problem Spectral Properties of Elliptic Operators In previous work we have replaced the strong version of an elliptic boundary value problem L u x f x BC u x g x with the weak problem find u V such that B u,v

More information

Comparison of the Reduced-Basis and POD a-posteriori Error Estimators for an Elliptic Linear-Quadratic Optimal Control

Comparison of the Reduced-Basis and POD a-posteriori Error Estimators for an Elliptic Linear-Quadratic Optimal Control SpezialForschungsBereich F 3 Karl Franzens Universität Graz Technische Universität Graz Medizinische Universität Graz Comparison of the Reduced-Basis and POD a-posteriori Error Estimators for an Elliptic

More information

Proper Orthogonal Decomposition (POD)

Proper Orthogonal Decomposition (POD) Intro Results Problem etras Proper Orthogonal Decomposition () Advisor: Dr. Sorensen CAAM699 Department of Computational and Applied Mathematics Rice University September 5, 28 Outline Intro Results Problem

More information

ROBUST NULL CONTROLLABILITY FOR HEAT EQUATIONS WITH UNKNOWN SWITCHING CONTROL MODE

ROBUST NULL CONTROLLABILITY FOR HEAT EQUATIONS WITH UNKNOWN SWITCHING CONTROL MODE DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS Volume 34, Number 10, October 014 doi:10.3934/dcds.014.34.xx pp. X XX ROBUST NULL CONTROLLABILITY FOR HEAT EQUATIONS WITH UNKNOWN SWITCHING CONTROL MODE Qi Lü

More information

Chapter 7: Symmetric Matrices and Quadratic Forms

Chapter 7: Symmetric Matrices and Quadratic Forms Chapter 7: Symmetric Matrices and Quadratic Forms (Last Updated: December, 06) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved

More information

Reduced-Order Model Development and Control Design

Reduced-Order Model Development and Control Design Reduced-Order Model Development and Control Design K. Kunisch Department of Mathematics University of Graz, Austria Rome, January 6 Laser-Oberflächenhärtung eines Stahl-Werkstückes [WIAS-Berlin]: workpiece

More information

Partitioned Methods for Multifield Problems

Partitioned Methods for Multifield Problems C Partitioned Methods for Multifield Problems Joachim Rang, 6.7.2016 6.7.2016 Joachim Rang Partitioned Methods for Multifield Problems Seite 1 C One-dimensional piston problem fixed wall Fluid flexible

More information

Krein-Rutman Theorem and the Principal Eigenvalue

Krein-Rutman Theorem and the Principal Eigenvalue Chapter 1 Krein-Rutman Theorem and the Principal Eigenvalue The Krein-Rutman theorem plays a very important role in nonlinear partial differential equations, as it provides the abstract basis for the proof

More information

Linear Algebra and Dirac Notation, Pt. 3

Linear Algebra and Dirac Notation, Pt. 3 Linear Algebra and Dirac Notation, Pt. 3 PHYS 500 - Southern Illinois University February 1, 2017 PHYS 500 - Southern Illinois University Linear Algebra and Dirac Notation, Pt. 3 February 1, 2017 1 / 16

More information

Sectorial Forms and m-sectorial Operators

Sectorial Forms and m-sectorial Operators Technische Universität Berlin SEMINARARBEIT ZUM FACH FUNKTIONALANALYSIS Sectorial Forms and m-sectorial Operators Misagheh Khanalizadeh, Berlin, den 21.10.2013 Contents 1 Bounded Coercive Sesquilinear

More information