INTERPRETATION OF PROPER ORTHOGONAL DECOMPOSITION AS SINGULAR VALUE DECOMPOSITION AND HJB-BASED FEEDBACK DESIGN PAPER MSC-506
SIXTEENTH INTERNATIONAL SYMPOSIUM ON MATHEMATICAL THEORY OF NETWORKS AND SYSTEMS (MTNS2004), KATHOLIEKE UNIVERSITEIT LEUVEN, BELGIUM, JULY 5-9, 2004

S. VOLKWEIN

Abstract. The proper orthogonal decomposition (POD) is a method to compute a reduced basis for a dynamical system. In this work the close connection between the singular value decomposition (SVD) and POD is studied in complex Hilbert spaces. For the practical computation of the POD basis functions, perturbation error bounds are presented. Moreover, the combination of POD-based model reduction with the numerical treatment of the Hamilton-Jacobi-Bellman (HJB) equation for infinite horizon optimal control problems is addressed, using a modification of an algorithm originated by Gonzalez-Rofman and further developed by Falcone-Ferretti.

1. Introduction. Recently the application of reduced-order models to optimal control problems for partial differential equations has received an increasing amount of attention. The reduced-order approach is based on projecting the dynamical system onto subspaces consisting of basis elements that contain characteristics of the expected solution. This is in contrast to, e.g., finite element techniques, where the elements of the subspaces are uncorrelated to the physical properties of the system they approximate. We are concerned with the proper orthogonal decomposition, which has been used as an efficient procedure to compute low-order models; see for instance [1, 16] and the references given therein. The paper is organized in the following manner: In Section 2 the close connection between POD and SVD is investigated in complex finite-dimensional Hilbert spaces.
For practical purposes we consider the computation of the POD basis and present error bounds in Section 3, which are based on perturbation theory for the eigenvalues and eigenvectors of Hermitian matrices. The feedback design based on the Hamilton-Jacobi-Bellman (HJB) equation is briefly discussed in Section 4.

Date: June 8. Mathematics Subject Classification: 35Kxx, 49Lxx, 65Kxx. Key words and phrases: proper orthogonal decomposition, singular value decomposition, matrix perturbation theory, Hamilton-Jacobi-Bellman equation, closed loop control.

2. POD and SVD. First we formulate the singular value decomposition for linear mappings from a finite dimensional complex Hilbert space V into another finite dimensional complex Hilbert space W.

Proposition 2.1. Let F : V → W be a linear operator, where V and W denote two finite dimensional complex Hilbert spaces with inner products (·,·)_V and (·,·)_W,
respectively, and with dim V = m and dim W = n, where n ≥ m. Then there exist real numbers σ_1 ≥ σ_2 ≥ … ≥ σ_m ≥ 0 and orthonormal bases {v_k}_{k=1}^m of V and {w_k}_{k=1}^n of W such that

(2.1) F(v_k) = σ_k w_k and F*(w_k) = σ_k v_k for k = 1, …, m.

Proof. Consider the linear mapping F*F : V → V, where F* : W → V denotes the Hilbert space adjoint of F defined by (F(v), w)_W = (v, F*(w))_V for all v ∈ V and w ∈ W. Then it is obvious that F*F is self-adjoint. Hence, there exist an orthonormal basis {v_k}_{k=1}^m of V and real eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_m such that

(2.2) F*F(v_k) = λ_k v_k for k = 1, …, m.

Since F*F is non-negative, the eigenvalues λ_k of F*F are non-negative. Let l ∈ {1, …, m} be such that λ_l > 0, but λ_{l+1} = 0. Then we set σ_k = √λ_k for k = 1, …, m and w_k = F(v_k)/σ_k for k = 1, …, l. It follows that F(v_k) = σ_k w_k for k = 1, …, l and

(w_j, w_k)_W = (1/(σ_j σ_k)) (F(v_j), F(v_k))_W = (1/(σ_j σ_k)) (F*F(v_j), v_k)_V = (σ_j/σ_k) δ_{jk}

for j, k ∈ {1, …, l}. Thus, w_1, …, w_l are orthonormal in W. We choose orthonormal w_{l+1}, …, w_n in such a way that {w_k}_{k=1}^n is an orthonormal basis of W. Since σ_k = 0 for k > l, we obtain that the first identity in (2.1) holds for k = 1, …, m, whereas the second one follows from (2.2).

Example 2.2. For V = C^m and W = C^n, Proposition 2.1 is well known as the singular value decomposition of matrices; see, e.g., [13].

Proposition 2.1 and its proof directly imply the next corollary.

Corollary 2.3. Under the hypotheses of Proposition 2.1 we have

FF*(w_k) = σ_k^2 w_k and F*F(v_k) = σ_k^2 v_k for k = 1, …, m.

By L(V, W) we denote the normed linear space of all bounded linear operators from V into W. Let V = {v_1, …, v_m} and W = {w_1, …, w_n} be bases of V and W, respectively. The uniquely determined representation matrix M_W^V(F) = ((a_ij)) ∈ C^{n×m} of a given F ∈ L(V, W) with respect to V and W is defined by

F(v_j) = Σ_{i=1}^n a_ij w_i for j = 1, …, m.
It is well known that the mapping M_W^V : L(V, W) → C^{n×m} is linear and bijective.

Remark 2.4. Let F ∈ L(V, W). Due to Proposition 2.1 there exist a diagonal matrix Σ = diag(σ_1, …, σ_m) ∈ C^{m×m} and orthonormal bases V and W of V and W, respectively, so that M_W^V(F) = (Σ 0)^T, where 0 is the m × (n−m) matrix of zeros.

The previous remark motivates the next definition.

Definition 2.5. Let F and Σ be given as in Proposition 2.1 and Remark 2.4, respectively. We define a norm on L(V, W) by

‖F‖ = ‖Σ‖_F = (Σ_{k=1}^m σ_k^2)^{1/2} for F ∈ L(V, W),

where ‖·‖_F denotes the Frobenius norm. Furthermore, the rank of F is given by rank Σ.
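In the matrix setting of Example 2.2 the statements of Proposition 2.1, Corollary 2.3 and Definition 2.5 can be checked numerically. The following sketch (the dimensions, the random seed, and the random operator are our own illustrative choices; NumPy is assumed) uses the built-in SVD:

```python
import numpy as np

# Hypothetical illustration with V = C^m, W = C^n and Euclidean inner products.
rng = np.random.default_rng(0)
m, n = 4, 6                        # dim V = m <= n = dim W
F = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

# numpy factors F = W_ @ diag(sigma) @ V_^H with orthonormal columns
# and sigma_1 >= ... >= sigma_m >= 0, as in Proposition 2.1.
W_, sigma, Vh = np.linalg.svd(F, full_matrices=False)
V_ = Vh.conj().T                   # columns v_1, ..., v_m

for k in range(m):
    # (2.1): F v_k = sigma_k w_k  and  F* w_k = sigma_k v_k
    assert np.allclose(F @ V_[:, k], sigma[k] * W_[:, k])
    assert np.allclose(F.conj().T @ W_[:, k], sigma[k] * V_[:, k])
    # Corollary 2.3: F*F v_k = sigma_k^2 v_k
    assert np.allclose(F.conj().T @ F @ V_[:, k], sigma[k] ** 2 * V_[:, k])

# Definition 2.5: ||F|| = ||Sigma||_F = (sum_k sigma_k^2)^(1/2), which for
# matrices coincides with the Frobenius norm of F itself.
assert np.isclose(np.linalg.norm(F, 'fro'), np.sqrt(np.sum(sigma ** 2)))
```

The last assertion is the matrix version of Remark 2.6: the Frobenius norm is invariant under the orthonormal changes of basis that diagonalize F.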
Remark 2.6. Since ‖·‖_F is a norm, the mapping ‖·‖ is a norm on L(V, W). Furthermore, the properties of the Frobenius norm imply that ‖Σ‖_F = ‖M_W^V(F)‖_F for any orthonormal bases V and W of V and W, respectively.

Now we formulate POD as a constrained minimization problem. For that purpose let X be a complex Hilbert space endowed with the inner product (·,·)_X and norm ‖·‖_X. Let y_1, …, y_m ∈ X be given elements. We assume that dim X ≥ m and that y_1, …, y_m are linearly independent in X; otherwise we decrease m. We set

W = span {y_1, …, y_m} ⊂ X,

endowed with the inner product (·,·)_X.

Example 2.7. The ensemble can be determined from a dynamical system. For this purpose let us consider the following semi-linear initial value problem:

(2.3) dy(t)/dt + Ay(t) = f(t, y(t)) for t ∈ (0, T), y(0) = φ,

where A is the infinitesimal generator of a C_0-semigroup S(t), t > 0, on X, φ ∈ X, and f : [0, T] × X → X is continuous in t and uniformly Lipschitz-continuous on X for every t. Problem (2.3) has a unique mild solution y ∈ C([0, T]; X) given by the integral representation

y(t) = S(t)φ + ∫_0^t S(t−s) f(s, y(s)) ds for t ∈ [0, T];

see, for instance, [12]. Then for given time instances 0 ≤ t_1 < … < t_m ≤ T the members of the ensemble can be given by the mild solution to (2.3): y_j = y(t_j) for j = 1, …, m.

Let {w_k}_{k=1}^m denote any orthonormal basis for W. Then we have

(2.4) y_j = Σ_{k=1}^m (y_j, w_k)_X w_k for j = 1, …, m.

For any l ∈ N, l < m, we want to find the orthonormal basis of W which minimizes the mean square error between the members of the ensemble and their corresponding l-th partial sums in (2.4):

(P^l) min J(w_1, …, w_l) = Σ_{j=1}^m ‖y_j − Σ_{k=1}^l (y_j, w_k)_X w_k‖_X^2 subject to (w_i, w_j)_X = δ_ij for 1 ≤ i, j ≤ l.

We call the solution {w_k}_{k=1}^l to (P^l) the POD basis of rank l. To solve (P^l) we shall apply Proposition 2.1 with V = C^m. Let us introduce the linear mapping

(2.5) Y ∈ L(C^m, W) with Y(e_k) = y_k for k = 1, …, m,

where {e_k}_{k=1}^m denotes the canonical basis in C^m.
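For X = C^n the operator Y is simply the matrix whose columns are the snapshots, and the objective of (P^l) can be evaluated directly from its definition. The following sketch (all sizes, the seed, and the comparison basis are our own illustrative choices; the optimality of the leading singular vectors is the content of Theorem 2.9 below) compares the POD basis with an arbitrary orthonormal basis:

```python
import numpy as np

# Snapshots y_1, ..., y_m as the columns of Y, with X = C^n Euclidean.
rng = np.random.default_rng(1)
n, m, l = 8, 5, 2
Y = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

def J(W):
    """Mean square error sum_j ||y_j - sum_k (y_j, w_k) w_k||^2 for
    orthonormal columns w_1, ..., w_l of W, as in (P^l)."""
    P = W @ W.conj().T             # orthogonal projection onto span{w_k}
    R = Y - P @ Y                  # residuals y_j minus their projections
    return np.sum(np.abs(R) ** 2)

# Candidate POD basis: the l leading left singular vectors of Y.
U, sigma, _ = np.linalg.svd(Y, full_matrices=False)
J_pod = J(U[:, :l])

# Any other orthonormal set of l vectors gives a larger (or equal) error.
Q, _ = np.linalg.qr(rng.standard_normal((n, l)))
assert J_pod <= J(Q) + 1e-12
# The optimal value equals the sum of the trailing squared singular values.
assert np.isclose(J_pod, np.sum(sigma[l:] ** 2))
```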
Obviously, we obtain that Y(v) = Σ_{k=1}^m (v, e_k)_{C^m} y_k holds for v ∈ C^m. From the definition of the adjoint Y* ∈ L(W, C^m) of Y we derive that Y*(w) = Σ_{k=1}^m (w, y_k)_X e_k for w ∈ W. This implies the identities

(2.6) YY* = Σ_{k=1}^m (·, y_k)_X y_k and Y*Y = K,
where

(2.7) K ∈ C^{m×m} with K_ij = (y_j, y_i)_X

denotes the correlation matrix.

Example 2.8. Let X = C^n be endowed with the Euclidean norm and let the matrix Y ∈ C^{n×m} consist of the members y_j ∈ C^n of the ensemble as its columns. Then Y satisfies (2.5) and ‖Y‖ = ‖Y‖_F.

Theorem 2.9 (POD and SVD). Let σ_1 ≥ σ_2 ≥ … ≥ σ_m denote the singular values of Y and let V = {v_1, …, v_m} and W = {w_1, …, w_m} be the corresponding orthonormal bases of C^m and W, respectively, determined by Proposition 2.1 such that Y(v_k) = σ_k w_k for k = 1, …, m. If we define Y^l ∈ L(C^m, W) by

(2.8) Y^l = Y on span {v_1, …, v_l} and Y^l = 0 otherwise,

then problem (P^l) is solved by w_k for k = 1, …, l. Moreover, for all F ∈ L(C^m, W) with rank F = l we have

(2.9) ‖Y − F‖^2 ≥ ‖Y − Y^l‖^2 = Σ_{k=l+1}^m σ_k^2.

Proof. Definition 2.5 yields

‖Y‖ = (Σ_{k=1}^m σ_k^2)^{1/2} and ‖Y^l‖ = (Σ_{k=1}^l σ_k^2)^{1/2}.

From y_j = Σ_{k=1}^m (y_j, w_k)_X w_k for j = 1, …, m and from (2.6) we infer that

‖Y − Y^l‖^2 = Σ_{k=l+1}^m σ_k^2 = Σ_{k=l+1}^m (YY* w_k, w_k)_X = Σ_{j=1}^m ‖y_j − Σ_{k=1}^l (y_j, w_k)_X w_k‖_X^2 = J(w_1, …, w_l).

Let F^l be an arbitrary linear mapping with rank F^l = l. Then the matrix M_W^V(F^l) has rank l. Due to Remark 2.6 and Theorem 6.7 in [13] we obtain that

‖Y − F^l‖ = ‖Σ̃‖_F = ‖M_W^V(Y − F^l)‖_F = ‖M_W^V(Y) − M_W^V(F^l)‖_F ≥ ‖M_W^V(Y) − M_W^V(Y^l)‖_F = ‖M_W^V(Y − Y^l)‖_F = (Σ_{k=l+1}^m σ_k^2)^{1/2},

where Σ̃ is a diagonal matrix containing the singular values of Y − F^l. Hence, the solution of (P^l) is given by {w_k}_{k=1}^l and argmin (P^l) = Σ_{k=l+1}^m σ_k^2.

Remark 2.10. Theorem 2.9 extends the results in [8], where Y and F were complex matrices.

Example 2.11. Let us consider the Hilbert space X = L^2(Ω; C), where Ω denotes a bounded domain in R^d, d ∈ N. We define the kernel

(2.10) r(x, x̃) = Σ_{j=1}^m y_j(x) \overline{y_j(x̃)} for x, x̃ ∈ Ω,
which is often called the averaged autocorrelation function. Then YY* is the integral operator

YY*(w) = ∫_Ω w(x̃) r(·, x̃) dx̃ for w ∈ W,

whereas Y*Y is the correlation matrix K defined in (2.7). If we extend YY* to a mapping from L^2(Ω; C) into itself by

(2.11) R(v)(x) = ∫_Ω v(x̃) r(x, x̃) dx̃ for v ∈ L^2(Ω; C),

then the existence of the non-negative eigenvalues σ_1^2 ≥ … ≥ σ_m^2 ≥ 0 follows also from the Hilbert-Schmidt theorem if X is separable. Obviously, \overline{r(x, ·)} ∈ W for every x ∈ Ω. Hence,

r(x, x̃) = Σ_{k=1}^m YY*(w_k)(x) \overline{w_k(x̃)} = Σ_{k=1}^m σ_k^2 w_k(x) \overline{w_k(x̃)}

for every x, x̃ ∈ Ω.

To determine the POD basis of rank l we have to compute the singular value decomposition of the mapping Y. If m ≪ dim X holds, one may compute the POD basis as stated in Corollary 2.12 below, which follows directly from Corollary 2.3, (2.7) and Theorem 2.9.

Corollary 2.12. The POD basis {w_k}_{k=1}^l of rank l can be determined as follows:
1) Solve the eigenvalue problem
(EVP) K v_k = λ_k v_k for k = 1, …, l
to determine the positive eigenvalues {λ_k}_{k=1}^l and eigenvectors {v_k}_{k=1}^l.
2) Set w_k = Y(v_k)/√λ_k for k = 1, …, l.

Remark 2.13. The eigenvalue problem (EVP) has dimension m × m and can be solved by a numerical algorithm. If X is an infinite dimensional Hilbert space, the elements (y_j, y_i)_X, 1 ≤ i, j ≤ m, of the correlation matrix K have to be computed by a numerical quadrature formula.

3. Approximation of (EVP). In many applications X is an infinite dimensional Hilbert space, which has to be discretized for numerical realization. For that purpose we suppose that we are given a family {h}, h > 0, with accumulation point zero and a family {X^h}_h of Hilbert spaces of finite dimension. In practice, the parameter h is called the mesh size of the finite dimensional space and varies over a sequence. Furthermore, we introduce a family {Π^h}_{h>0} of bounded linear and surjective restrictions from X onto X^h.

Example 3.1.
An important class of discretization methods is provided by a Galerkin approach, where for n ∈ N the finite dimensional space is given by X^h = span {ϕ_1, …, ϕ_n}, and the set {ϕ_i}_{i=1}^n is linearly independent in X. Then we define the family {Π^h}_h as follows:

(3.1) Π^h v = Σ_{i=1}^n ( Σ_{j=1}^n (M^h)^{-1}_{ij} (v, ϕ_j)_X ) ϕ_i for any v ∈ X,

where M^h denotes the mass matrix

(3.2) M^h = ((M^h_ij)) ∈ C^{n×n} with M^h_ij = (ϕ_j, ϕ_i)_X.

If {ϕ_i}_{i=1}^n is an orthonormal set in X, the matrix M^h is the identity in C^{n×n} and (3.1) reduces to the n-th partial sum of the Fourier expansion of the element v.

Now we prove that Π^h is the orthogonal bounded projection from X onto X^h.

For every h the operator Π^h belongs to L(X, X^h), the normed linear space of all bounded linear operators mapping from X into X^h: From (3.1) it follows directly that Π^h is linear for each h. The identity

(3.3) (Π^h v, v^h)_X = (v, v^h)_X for any v ∈ X and v^h ∈ X^h

implies that Π^h v − v ∈ (X^h)^⊥ for all v ∈ X and for each h. Thus,

‖Π^h v‖_X^2 ≤ ‖Π^h v‖_X^2 + ‖v − Π^h v‖_X^2 = ‖v‖_X^2 for all v ∈ X,

so that Π^h is bounded for each h.

Π^h is an orthogonal projection: Since (Π^h)^2 = Π^h holds for each h, the operators Π^h are projections. From (3.3) it follows that these projections are orthogonal.

Each member of the ensemble is approximated by the projection of itself onto the space X^h:

y_j^h = Π^h y_j ∈ X^h for j = 1, …, m,

and together with (P^l) we investigate the finite dimensional discrete problems

(P^l_h) min J^h(w_1^h, …, w_l^h) = Σ_{j=1}^m ‖y_j^h − Σ_{k=1}^l (y_j^h, w_k^h)_X w_k^h‖_X^2 subject to (w_i^h, w_j^h)_X = δ_ij for 1 ≤ i, j ≤ l.

From Corollary 2.12 we directly infer the next result.

Proposition 3.2. Assume that for each h the set span {y_1^h, …, y_m^h} has dimension m_h with l ≤ m_h ≤ m. Analogously to (2.5) we introduce the family of mappings

Y^h ∈ L(C^m, X^h) with Y^h(e_k) = y_k^h for k = 1, …, m.

Let {w_k^h}_{k=1}^l be determined as follows:
1) Solve the eigenvalue problem
(EVP^h) K^h v_k^h = λ_k^h v_k^h for k = 1, …, l,
where K^h ∈ C^{m×m} with K^h_ij = (y_j^h, y_i^h)_X, to determine the positive eigenvalues {λ_k^h}_{k=1}^l and eigenvectors {v_k^h}_{k=1}^l.
2) Set w_k^h = Y^h(v_k^h)/√(λ_k^h) for k = 1, …, l.

Then (w_1^h, …, w_l^h) solves (P^l_h) with argmin (P^l_h) = Σ_{k=l+1}^m λ_k^h.
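Steps 1)-2) of Proposition 3.2 can be sketched numerically when the discrete inner product on X^h is represented by a symmetric positive definite matrix M (the real case is used for simplicity; every concrete size and matrix below is our own illustrative choice, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, l = 10, 6, 3
B = rng.standard_normal((n, n))
M = B.T @ B + n * np.eye(n)            # SPD matrix defining (v, w) = w^T M v
Yh = rng.standard_normal((n, m))       # columns: discretized snapshots y_j^h

# 1) (K^h)_ij = (y_j^h, y_i^h)_X, i.e. K^h = Yh^T M Yh; solve (EVP^h).
Kh = Yh.T @ M @ Yh
lam, V = np.linalg.eigh(Kh)            # eigh returns ascending eigenvalues
lam, V = lam[::-1], V[:, ::-1]         # reorder: lam_1 >= lam_2 >= ...

# 2) w_k^h = Y^h(v_k^h) / sqrt(lam_k^h) for the l leading eigenpairs.
Wh = Yh @ V[:, :l] / np.sqrt(lam[:l])

# The POD modes are orthonormal with respect to the M-inner product.
assert np.allclose(Wh.T @ M @ Wh, np.eye(l))
# The optimal value of (P^l_h) equals the sum of the truncated eigenvalues.
resid = Yh - Wh @ (Wh.T @ M @ Yh)      # y_j^h minus its M-orthogonal projection
assert np.isclose(np.sum(resid * (M @ resid)), np.sum(lam[l:]))
```

This is exactly the situation of Example 3.4 below when M is assembled as the mass matrix of a Galerkin basis.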
Remark 3.3. Clearly, K^h is positive semi-definite. Since the set {Π^h y_j}_{j=1}^m is not necessarily linearly independent, K^h need not be positive definite.

Example 3.4. As in Example 3.1 we are given n linearly independent functions {ϕ_i}_{i=1}^n in X and set X^h = span {ϕ_1, …, ϕ_n}. Assume that the coefficient matrix C^h ∈ C^{n×m} arises from the discretization of the ensemble {y_j}_{j=1}^m such that

y_j^h = Σ_{i=1}^n C^h_ij ϕ_i.

Then the eigenvalue problem (EVP^h) leads to

(3.4) (C^h)^H M^h C^h v_k^h = λ_k^h v_k^h for k = 1, …, l,

where the mass matrix M^h was already introduced in (3.2).

Let us introduce the family {K(h)}_h of matrices by

(3.5) K(h) = K^h for h > 0 and K(h) = K for h = 0.

Since K(h) is positive semi-definite, there exist a number m_h ≤ m depending on h and m real eigenvalues such that

λ_1(h) ≥ λ_2(h) ≥ … ≥ λ_{m_h}(h) > λ_{m_h+1}(h) = … = λ_m(h) = 0 for h ≥ 0.

In the next proposition we give a sufficient condition for the right-continuity of K(h) at h = 0.

Proposition 3.5. Let K(h) be given by (3.5). If the family of restrictions is pointwise convergent in X, i.e.,

(3.6) lim_{h→0} Π^h u = u for any u ∈ X,

then K(h) is right-continuous at h = 0. If in addition there exists ε > 0 such that

(3.7) max_{1≤j≤m} ‖Π^h y_j − y_j‖_X = O(h^ε) for h → 0,

then ‖K − K(h)‖_2 = O(h^ε) for h → 0, where ‖·‖_2 denotes the spectral norm for Hermitian matrices.

Proof. Let k ∈ {1, …, m} be such that ‖y_k‖_X = max_{1≤i≤m} ‖y_i‖_X. Then we estimate for h > 0

‖K − K(h)‖_2 ≤ m max_{1≤i,j≤m} |K_ij − K^h_ij| ≤ m max_{1≤i,j≤m} ( ‖y_i‖_X ‖y_j − Π^h y_j‖_X + ‖Π^h y_j‖_X ‖y_i − Π^h y_i‖_X ) ≤ m ‖y_k‖_X max_{1≤j≤m} ‖y_j − Π^h y_j‖_X + m ‖y_k‖_X ‖Π^h‖_{L(X,X^h)} max_{1≤i≤m} ‖y_i − Π^h y_i‖_X.
With (3.6) holding, ‖Π^h u‖_X is bounded for every u ∈ X. According to the principle of uniform boundedness (see, e.g., [18]) there exists a constant C > 0 independent of h such that ‖Π^h‖_{L(X,X^h)} ≤ C for all h. Hence, we have

lim_{h→0} ‖K − K(h)‖_2 = 0.

If in addition (3.7) is valid, the claim follows directly.

Remark 3.6. Notice that K is positive definite, but that K(h) is only positive semi-definite for h > 0. If (3.6) holds, K(h) converges to K as h tends to zero. Thus, K(h) is positive definite for sufficiently small h, i.e., the family {Π^h y_j}_{j=1}^m is linearly independent for sufficiently small h. In particular, the hypothesis l ≤ dim(span {y_1^h, …, y_m^h}) ≤ m in Proposition 3.2 holds for sufficiently small h.

Example 3.7. Let us turn back to Example 2.7. If in addition φ belongs to the domain D(A) of A and f is also Lipschitz-continuous in the first variable, then the mild solution of (2.3) is a classical one, i.e., y(t) ∈ D(A) for 0 < t < T, y(t) is differentiable on (0, T) and (2.3) holds on [0, T); see, e.g., [12]. Hence the members of the ensemble y_j = y(t_j), j = 1, …, m, are more regular than generic elements of X and there exists a constant C > 0 such that

max_{1≤j≤m} ‖y_j‖_{D(A)} ≤ C

is satisfied. For appropriate approximation schemes we have estimates of the form

max_{1≤j≤m} ‖Π^h y_j − y_j‖_X ≤ c max_{1≤j≤m} ‖y_j‖_{D(A)} h^ε

for some ε > 0, so that assumption (3.7) can be fulfilled.

In the next theorem a perturbation bound for the eigenvalues is presented, which was proved, e.g., in [5].

Theorem 3.8. For any i ∈ {1, …, m} we have

|λ_i − λ_i(h)| ≤ ‖K − K(h)‖_2,

i.e., the condition number of an eigenvalue of K is one.

Remark 3.9. An analogous perturbation bound also holds for the singular values of the mapping Y; see [5]. With (3.6) and (3.7) holding we conclude that

|λ_i − λ_i(h)| = O(h^ε) for h → 0, i = 1, …, m.

The sensitivity or condition of an eigenvector depends on the gap between its eigenvalue and the remaining spectrum: a small gap implies a sensitive eigenvector. This fact is formulated in the next theorem, proved in [5].

Theorem 3.10. For i ∈ {1, …, m} let v_i and v_i(h) denote the i-th eigenvectors of K and K(h), respectively, and let θ_i denote the acute angle between v_i and v_i(h). Then

(1/2) sin(2θ_i) ≤ ‖K − K(h)‖_2 / min_{j≠i} |λ_j − λ_i| if min_{j≠i} |λ_j − λ_i| > 0,

and

(1/2) sin(2θ_i) ≤ ‖K − K(h)‖_2 / min_{j≠i} |λ_j(h) − λ_i(h)| if min_{j≠i} |λ_j(h) − λ_i(h)| > 0.
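Both bounds can be checked numerically. The following sketch (matrix sizes, the seed, and the perturbation magnitude are our own illustrative choices) tests Theorem 3.8 for all eigenvalues and Theorem 3.10 for the leading eigenvector:

```python
import numpy as np

# A Hermitian matrix K and a small Hermitian perturbation E, K(h) = K + E.
rng = np.random.default_rng(3)
m = 6
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
K = A.conj().T @ A                       # Hermitian positive semi-definite
E = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
E = 1e-3 * (E + E.conj().T) / 2          # small Hermitian perturbation
Kh = K + E
normE = np.linalg.norm(E, 2)             # spectral norm ||K - K(h)||_2

lam, V = np.linalg.eigh(K)               # eigh returns ascending eigenvalues
lam, V = lam[::-1], V[:, ::-1]           # reorder descending, as in the text
lam_h, Vh = np.linalg.eigh(Kh)
lam_h, Vh = lam_h[::-1], Vh[:, ::-1]

# Theorem 3.8: |lambda_i - lambda_i(h)| <= ||K - K(h)||_2 for every i.
assert np.all(np.abs(lam - lam_h) <= normE + 1e-12)

# Theorem 3.10 for i = 0: (1/2) sin(2 theta_i) <= ||K - K(h)||_2 / gap.
i = 0
gap = np.min(np.abs(np.delete(lam, i) - lam[i]))
cos_t = min(np.abs(np.vdot(V[:, i], Vh[:, i])), 1.0)   # |cos theta_i|
theta = np.arccos(cos_t)                                # acute angle
assert 0.5 * np.sin(2 * theta) <= normE / gap + 1e-12
```

Taking the absolute value of the inner product removes the arbitrary phase that a numerical eigensolver attaches to each eigenvector.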
Remark 3.11. Note that sin(2θ)/2 ≈ sin θ ≈ θ if θ ≪ 1. Let us mention that frequently we know only the eigenvalues of K(h), since they are typically the output of the eigenvalue algorithm that we have used. In this case we can only estimate min_{j≠i} |λ_j − λ_i|.

Finally, we establish a result for sums of eigenvalues and then specialize it to a single eigenvalue. For a proof of the next proposition we refer the reader to [14].

Proposition 3.12. Set E(h) = K(h) − K and let λ_1(E(h)) and λ_m(E(h)) denote the largest and smallest eigenvalues of E(h), respectively. Then for any h > 0 and i = 1, …, m

λ_i(h) ∈ [λ_i + λ_m(E(h)), λ_i + λ_1(E(h))].

Remark 3.13. a) Note that

∪_{i=1}^m [λ_i + λ_m(E(h)), λ_i + λ_1(E(h))]

contains all eigenvalues of K(h). However, here we know just which eigenvalue to look for in each interval. Moreover, it is impossible for an eigenvalue corresponding to one of a cluster of overlapping intervals to migrate outside its own interval.
b) The intervals are not symmetric about the eigenvalues λ_i. In fact, if λ_m(E(h)) is positive, the i-th interval will not contain the eigenvalue λ_i. This occurs if K(h) − K is positive definite. Hence: if K(h) − K is positive definite, the eigenvalues must increase.

4. HJB-POD based feedback design. In many applications the discretization of optimal control problems for time dependent partial differential equations, e.g., for the unsteady Navier-Stokes equations, requires the solution of nonlinear systems with a large number of degrees of freedom. In particular, to compute closed loop controls in state feedback form we have to solve the Hamilton-Jacobi-Bellman (HJB) equation, which to date is numerically infeasible for parabolic differential equations on standard workstation equipment if classical approximations like finite elements or finite differences are used. In [9] model reduction is applied to reduce the number of unknowns significantly. The obtained low-dimensional models should guarantee a reasonable performance of the controlled plant while being computationally tractable.
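To fix ideas, the kind of dynamic programming computation that becomes tractable once the state is low-dimensional can be sketched by a value iteration in the spirit of the Gonzalez-Rofman/Falcone-Ferretti schemes mentioned in the abstract. Every concrete choice below (the one-dimensional toy dynamics w' = −w + u, the running cost, grids, and parameters) is our own illustration, not the scheme of [9]:

```python
import numpy as np

# Toy discounted infinite-horizon problem on a 1-D "reduced" state.
lam, beta, z, dt = 1.0, 0.1, 0.5, 0.05
grid = np.linspace(-2.0, 2.0, 201)        # grid on the reduced state space
controls = np.linspace(-1.0, 1.0, 21)     # discretized admissible controls
gamma = np.exp(-lam * dt)                 # discount factor per time step
V = np.zeros_like(grid)

for _ in range(2000):                     # fixed-point (value) iteration
    V_new = np.full_like(grid, np.inf)
    for u in controls:
        w_next = np.clip(grid + dt * (-grid + u), grid[0], grid[-1])
        stage = dt * ((grid - z) ** 2 + beta * u ** 2)
        # semi-Lagrangian update: interpolate V at the foot of the characteristic
        V_new = np.minimum(V_new, stage + gamma * np.interp(w_next, grid, V))
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

def feedback(w):
    """Feedback law: the control minimizing the discrete HJB right-hand side
    at the state w, i.e. u = F(w) in closed loop form."""
    def q(u):
        w_next = np.clip(w + dt * (-w + u), grid[0], grid[-1])
        return dt * ((w - z) ** 2 + beta * u ** 2) + gamma * np.interp(w_next, grid, V)
    return min(controls, key=q)
```

Since the iteration map is a contraction with factor e^{−λ Δt} < 1, the loop converges; the cost of each sweep grows with the grid size, which is exactly why the reduced state dimension l must be small.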
Proper orthogonal decomposition (POD) provides a method for deriving such low order models. It can be thought of as a Galerkin approximation in the spatial variable, built from functions corresponding to the solution of the physical system at pre-specified time instances, the so-called snapshots. Due to possible linear dependence or almost linear dependence of the snapshots, a singular value decomposition is carried out (see Corollary 2.12) and the leading generalized eigenfunctions are chosen as a basis, referred to as the POD basis. Once a low order model of the dynamical system is available, feedback synthesis based on approximate solutions to the stationary HJB equation becomes feasible. The feasibility of the proposed approach is demonstrated by means of an optimal boundary control problem for the Burgers equation.

Compared to open loop control, much less attention has been paid to the important problem of closed loop control. We mention the work by Byrnes-Gilliam-Shubov [3], where a fixed feedback operator is used and analyzed, and Burns-Kang [2], where the feedback synthesis is based on Riccati operators for the linearized equations. In [6] instantaneous control was applied to construct a feedback law which matches a desired state, but at considerable
control costs. In [8] the authors utilized model reduction with POD to construct a suboptimal feedback synthesis, and an optimal output feedback reduced-order control law was designed by POD discretization in [10]. In [9] the HJB-based approach is applied numerically to an optimal boundary control problem for the viscous Burgers equation.

Let u_a ≤ u_b. We set U_ad = {u ∈ R : u_a ≤ u ≤ u_b} and define the set of admissible controls

(4.1) 𝒰_ad = {u ∈ L^2_loc(0, ∞) : u(t) ∈ U_ad for almost all t ∈ (0, ∞)}.

For a control u ∈ 𝒰_ad we consider the viscous Burgers equation

(4.2a) y_t − ν y_xx + y y_x = 0 in Q = (0, ∞) × Ω,
(4.2b) ν y_x(·, 0) + σ_0 y(·, 0) = u in (0, ∞),
(4.2c) ν y_x(·, 1) + σ_1 y(·, 1) = g in (0, ∞),
(4.2d) y(0, ·) = y_0 in Ω = (0, 1),

where y_0 ∈ L^2(Ω) is a given initial condition and σ_0, σ_1, g are real numbers. Henceforth we consider weak solutions y ∈ W_loc(0, ∞; V) of (4.2) satisfying (4.2d) and

(4.3) ⟨y_t(t), ϕ⟩_{V',V} + ν (y_x(t), ϕ_x)_{L^2} + σ_1 y(t, 1)ϕ(1) − σ_0 y(t, 0)ϕ(0) + ∫_Ω y(t) y_x(t) ϕ dx = g ϕ(1) − u ϕ(0)

for all ϕ ∈ H^1(Ω) and t ∈ (0, ∞) a.e. For the functional analytic treatment of (4.2) we refer to [15, 17], for example. We shall consider the cost functional

J(y, u) = ∫_0^∞ ( (1/2) ∫_Ω |y(t, x) − z(x)|^2 dx + (β/2) |u(t)|^2 ) e^{−λt} dt,

where z ∈ L^2(Ω) is a given desired state and λ, β > 0 are positive constants. The optimal control problem is given by

(OC) min J(y, u) such that (y, u) ∈ W_loc(0, ∞) × 𝒰_ad satisfies (4.3).

It is straightforward to argue the existence of an optimal control for (OC). Using the l POD basis functions for the Galerkin approximation of the optimal control problem (OC) we obtain

(OC^l) min J^l(w^l, u) s.t. u ∈ 𝒰_ad and ẇ^l(t) = F(w^l(t), u(t)) for t > 0, w^l(0) = w_0,

with the state variable w^l : [0, ∞) → R^l, initial value w_0 ∈ R^l and an appropriate nonlinear F : R^l × R → R^l. For (OC^l) an HJB-based feedback law 𝓕 satisfying u^l(t) = 𝓕(w^l(t)) is computed numerically in [9].

References
[1] G. Berkooz, P. Holmes, and J. L. Lumley.
Turbulence, Coherent Structures, Dynamical Systems and Symmetry. Cambridge Monographs on Mechanics. Cambridge University Press, Cambridge.
[2] J. A. Burns and S. Kang. A control problem for Burgers' equation with bounded input/output. ICASE Report 90-45.
[3] C. I. Byrnes, D. S. Gilliam, and V. I. Shubov. On the global dynamics of a controlled viscous Burgers' equation. J. Dyn. Control Syst., 4, 1995.
[4] H. Choi, R. Temam, P. Moin, and J. Kim. Feedback control for unsteady flow and its application to the stochastic Burgers equation. J. Fluid Mech., 253.
[5] J. W. Demmel. Applied Numerical Linear Algebra. SIAM, Philadelphia.
[6] M. Hinze and S. Volkwein. Analysis of instantaneous control for the Burgers equation. Nonlinear Analysis: Theory, Methods and Applications, 50:1-26.
[7] S. Kang, K. Ito, and J. A. Burns. Unbounded observation and boundary control problems for Burgers equation. In 30th IEEE Conf. on Decision and Control.
[8] K. Kunisch and S. Volkwein. Control of Burgers' equation by a reduced-order approach using proper orthogonal decomposition. J. Optim. Theory Appl., 102.
[9] K. Kunisch, S. Volkwein, and L. Xie. HJB-POD based feedback design for the optimal control of evolution problems. Submitted.
[10] F. Leibfritz and S. Volkwein. Reduced order output feedback control design for PDE systems using proper orthogonal decomposition and nonlinear semidefinite programming. Technical Report No. 233, Special Research Center F 003 Optimization and Control, University of Graz & Technical University of Graz, February 2002, submitted.
[11] H. V. Ly, K. D. Mease, and E. S. Titi. Distributed and boundary control of the viscous Burgers equation. Numerical Functional Analysis and Optimization, 18.
[12] A. Pazy. Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer-Verlag, New York.
[13] G. W. Stewart. Introduction to Matrix Computations. Academic Press, New York.
[14] G. W. Stewart and J.-G. Sun. Matrix Perturbation Theory. Academic Press, New York.
[15] R. Temam. Infinite-Dimensional Dynamical Systems in Mechanics and Physics, volume 68 of Applied Mathematical Sciences. Springer-Verlag, New York.
[16] S. Volkwein. Optimal and suboptimal control of partial differential equations: augmented Lagrange-SQP methods and reduced order modelling with proper orthogonal decomposition. Grazer Math. Berichte, Bericht Nr. 343.
[17] S. Volkwein. Lagrange-SQP techniques for the control constrained optimal control problems for the Burgers equation. Computational Optimization and Applications, 26.
[18] A. Wouk. A Course of Applied Functional Analysis. Wiley-Interscience, New York.

S. Volkwein, Karl-Franzens-Universität Graz, Institut für Mathematik, Heinrichstrasse 36, A-8010 Graz, Austria
E-mail address: stefan.volkwein@uni-graz.at
More informationLecture Notes on PDEs
Lecture Notes on PDEs Alberto Bressan February 26, 2012 1 Elliptic equations Let IR n be a bounded open set Given measurable functions a ij, b i, c : IR, consider the linear, second order differential
More informationModel Reduction, Centering, and the Karhunen-Loeve Expansion
Model Reduction, Centering, and the Karhunen-Loeve Expansion Sonja Glavaški, Jerrold E. Marsden, and Richard M. Murray 1 Control and Dynamical Systems, 17-81 California Institute of Technology Pasadena,
More informationThe Dirichlet-to-Neumann operator
Lecture 8 The Dirichlet-to-Neumann operator The Dirichlet-to-Neumann operator plays an important role in the theory of inverse problems. In fact, from measurements of electrical currents at the surface
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More informationLinear Algebra 2 Spectral Notes
Linear Algebra 2 Spectral Notes In what follows, V is an inner product vector space over F, where F = R or C. We will use results seen so far; in particular that every linear operator T L(V ) has a complex
More informationx 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7
Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)
More informationON CONVERGENCE OF A RECEDING HORIZON METHOD FOR PARABOLIC BOUNDARY CONTROL. fredi tröltzsch and daniel wachsmuth 1
ON CONVERGENCE OF A RECEDING HORIZON METHOD FOR PARABOLIC BOUNDARY CONTROL fredi tröltzsch and daniel wachsmuth Abstract. A method of receding horizon type is considered for a simplified linear-quadratic
More informationOn Riesz-Fischer sequences and lower frame bounds
On Riesz-Fischer sequences and lower frame bounds P. Casazza, O. Christensen, S. Li, A. Lindner Abstract We investigate the consequences of the lower frame condition and the lower Riesz basis condition
More informationTakens embedding theorem for infinite-dimensional dynamical systems
Takens embedding theorem for infinite-dimensional dynamical systems James C. Robinson Mathematics Institute, University of Warwick, Coventry, CV4 7AL, U.K. E-mail: jcr@maths.warwick.ac.uk Abstract. Takens
More informationContents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2
Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition
More informationSummary of Week 9 B = then A A =
Summary of Week 9 Finding the square root of a positive operator Last time we saw that positive operators have a unique positive square root We now briefly look at how one would go about calculating the
More informationMATH 304 Linear Algebra Lecture 34: Review for Test 2.
MATH 304 Linear Algebra Lecture 34: Review for Test 2. Topics for Test 2 Linear transformations (Leon 4.1 4.3) Matrix transformations Matrix of a linear mapping Similar matrices Orthogonality (Leon 5.1
More informationIterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION
ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION Harald K. Wimmer 1 The set of all negative-semidefinite solutions of the CARE A X + XA + XBB X C C = 0 is homeomorphic
More informationECE 275A Homework #3 Solutions
ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =
More informationRational Chebyshev pseudospectral method for long-short wave equations
Journal of Physics: Conference Series PAPER OPE ACCESS Rational Chebyshev pseudospectral method for long-short wave equations To cite this article: Zeting Liu and Shujuan Lv 07 J. Phys.: Conf. Ser. 84
More informationThe Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment
he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State
More informationMATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.
MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation
More informationFINITE ELEMENT APPROXIMATION OF ELLIPTIC DIRICHLET OPTIMAL CONTROL PROBLEMS
Numerical Functional Analysis and Optimization, 28(7 8):957 973, 2007 Copyright Taylor & Francis Group, LLC ISSN: 0163-0563 print/1532-2467 online DOI: 10.1080/01630560701493305 FINITE ELEMENT APPROXIMATION
More informationComparison of the Reduced-Basis and POD a-posteriori Error Estimators for an Elliptic Linear-Quadratic Optimal Control
SpezialForschungsBereich F 3 Karl Franzens Universität Graz Technische Universität Graz Medizinische Universität Graz Comparison of the Reduced-Basis and POD a-posteriori Error Estimators for an Elliptic
More informationON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM
ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM OLEG ZUBELEVICH DEPARTMENT OF MATHEMATICS THE BUDGET AND TREASURY ACADEMY OF THE MINISTRY OF FINANCE OF THE RUSSIAN FEDERATION 7, ZLATOUSTINSKY MALIY PER.,
More informationLecture notes: Applied linear algebra Part 1. Version 2
Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and
More informationModel Reduction via Proper Orthogonal Decomposition
Model Reduction via Proper Orthogonal Decomposition René Pinnau Fachbereich Mathematik, Technische Universität Kaiserslautern, D-67663 Kaiserslautern, Germany pinnau@mathematik.uni-kl.de 1 Introduction
More informationK. Krumbiegel I. Neitzel A. Rösch
SUFFICIENT OPTIMALITY CONDITIONS FOR THE MOREAU-YOSIDA-TYPE REGULARIZATION CONCEPT APPLIED TO SEMILINEAR ELLIPTIC OPTIMAL CONTROL PROBLEMS WITH POINTWISE STATE CONSTRAINTS K. Krumbiegel I. Neitzel A. Rösch
More informationDiagonalizing Matrices
Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B
More informationReduced-Order Modeling of Channel Flow Using Traveling POD and Balanced POD
3rd AIAA Flow Control Conference, 5 8 June 26, San Francisco Reduced-Order Modeling of Channel Flow Using Traveling POD and Balanced POD M. Ilak and C. W. Rowley Dept. of Mechanical and Aerospace Engineering,
More information5 Compact linear operators
5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.
More informationAbsolute value equations
Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West
More informationASYMPTOTIC BEHAVIOR OF SOLUTIONS OF TIME-DELAYED BURGERS EQUATION. Weijiu Liu. (Communicated by Enrique Zuazua)
DISCRETE AND CONTINUOUS Website: http://aimsciences.org DYNAMICAL SYSTEMS SERIES B Volume 2, Number1, February22 pp. 47 56 ASYMPTOTIC BEHAVIOR OF SOLUTIONS OF TIME-DELAYED BURGERS EQUATION Weijiu Liu Department
More informationChapter 4 Euclid Space
Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,
More informationHomework 11 Solutions. Math 110, Fall 2013.
Homework 11 Solutions Math 110, Fall 2013 1 a) Suppose that T were self-adjoint Then, the Spectral Theorem tells us that there would exist an orthonormal basis of P 2 (R), (p 1, p 2, p 3 ), consisting
More informationCores for generators of some Markov semigroups
Cores for generators of some Markov semigroups Giuseppe Da Prato, Scuola Normale Superiore di Pisa, Italy and Michael Röckner Faculty of Mathematics, University of Bielefeld, Germany and Department of
More informationMath Linear Algebra II. 1. Inner Products and Norms
Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,
More informationProper Orthogonal Decomposition
Proper Orthogonal Decomposition Kameswararao Anupindi School of Mechanical Engineering Purdue University October 15, 2010 Kameswararao Anupindi (Purdue University) ME611, Principles of Turbulence October
More informationPriority Programme The Combination of POD Model Reduction with Adaptive Finite Element Methods in the Context of Phase Field Models
Priority Programme 1962 The Combination of POD Model Reduction with Adaptive Finite Element Methods in the Context of Phase Field Models Carmen Gräßle, Michael Hinze Non-smooth and Complementarity-based
More informationAPPROXIMATION OF MOORE-PENROSE INVERSE OF A CLOSED OPERATOR BY A SEQUENCE OF FINITE RANK OUTER INVERSES
APPROXIMATION OF MOORE-PENROSE INVERSE OF A CLOSED OPERATOR BY A SEQUENCE OF FINITE RANK OUTER INVERSES S. H. KULKARNI AND G. RAMESH Abstract. Let T be a densely defined closed linear operator between
More informationEileen Kammann Fredi Tröltzsch Stefan Volkwein
Universität Konstanz A method of a-posteriori error estimation with application to proper orthogonal decomposition Eileen Kammann Fredi Tröltzsch Stefan Volkwein Konstanzer Schriften in Mathematik Nr.
More informationC.I.BYRNES,D.S.GILLIAM.I.G.LAUK O, V.I. SHUBOV We assume that the input u is given, in feedback form, as the output of a harmonic oscillator with freq
Journal of Mathematical Systems, Estimation, and Control Vol. 8, No. 2, 1998, pp. 1{12 c 1998 Birkhauser-Boston Harmonic Forcing for Linear Distributed Parameter Systems C.I. Byrnes y D.S. Gilliam y I.G.
More informationThe Role of Exosystems in Output Regulation
1 The Role of Exosystems in Output Regulation Lassi Paunonen In this paper we study the role of the exosystem in the theory of output regulation for linear infinite-dimensional systems. The main result
More informationReduced-order models for flow control: balanced models and Koopman modes
Reduced-order models for flow control: balanced models and Koopman modes Clarence W. Rowley, Igor Mezić, Shervin Bagheri, Philipp Schlatter, and Dan S. Henningson Abstract This paper addresses recent developments
More informationAdaptive methods for control problems with finite-dimensional control space
Adaptive methods for control problems with finite-dimensional control space Saheed Akindeinde and Daniel Wachsmuth Johann Radon Institute for Computational and Applied Mathematics (RICAM) Austrian Academy
More informationLinear Algebra using Dirac Notation: Pt. 2
Linear Algebra using Dirac Notation: Pt. 2 PHYS 476Q - Southern Illinois University February 6, 2018 PHYS 476Q - Southern Illinois University Linear Algebra using Dirac Notation: Pt. 2 February 6, 2018
More informationStabilization of Heat Equation
Stabilization of Heat Equation Mythily Ramaswamy TIFR Centre for Applicable Mathematics, Bangalore, India CIMPA Pre-School, I.I.T Bombay 22 June - July 4, 215 Mythily Ramaswamy Stabilization of Heat Equation
More informationPositive Stabilization of Infinite-Dimensional Linear Systems
Positive Stabilization of Infinite-Dimensional Linear Systems Joseph Winkin Namur Center of Complex Systems (NaXys) and Department of Mathematics, University of Namur, Belgium Joint work with Bouchra Abouzaid
More informationSolutions for Math 225 Assignment #5 1
Solutions for Math 225 Assignment #5 1 (1) Find a polynomial f(x) of degree at most 3 satisfying that f(0) = 2, f( 1) = 1, f(1) = 3 and f(3) = 1. Solution. By Lagrange Interpolation, ( ) (x + 1)(x 1)(x
More informationCOMP 558 lecture 18 Nov. 15, 2010
Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to
More informationFinding eigenvalues for matrices acting on subspaces
Finding eigenvalues for matrices acting on subspaces Jakeniah Christiansen Department of Mathematics and Statistics Calvin College Grand Rapids, MI 49546 Faculty advisor: Prof Todd Kapitula Department
More informationVector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis.
Vector spaces DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Vector space Consists of: A set V A scalar
More informationTechnische Universität Berlin
Technische Universität Berlin Institut für Mathematik Regularity of the adjoint state for the instationary Navier-Stokes equations Arnd Rösch, Daniel Wachsmuth Preprint 1-4 Preprint-Reihe des Instituts
More informationSemismooth Newton Methods for an Optimal Boundary Control Problem of Wave Equations
Semismooth Newton Methods for an Optimal Boundary Control Problem of Wave Equations Axel Kröner 1 Karl Kunisch 2 and Boris Vexler 3 1 Lehrstuhl für Mathematische Optimierung Technische Universität München
More informationINVARIANT SUBSPACES FOR CERTAIN FINITE-RANK PERTURBATIONS OF DIAGONAL OPERATORS. Quanlei Fang and Jingbo Xia
INVARIANT SUBSPACES FOR CERTAIN FINITE-RANK PERTURBATIONS OF DIAGONAL OPERATORS Quanlei Fang and Jingbo Xia Abstract. Suppose that {e k } is an orthonormal basis for a separable, infinite-dimensional Hilbert
More informationOn Controllability of Linear Systems 1
On Controllability of Linear Systems 1 M.T.Nair Department of Mathematics, IIT Madras Abstract In this article we discuss some issues related to the observability and controllability of linear systems.
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationBASIC FUNCTIONAL ANALYSIS FOR THE OPTIMIZATION OF PARTIAL DIFFERENTIAL EQUATIONS
BASIC FUNCTIONAL ANALYSIS FOR THE OPTIMIZATION OF PARTIAL DIFFERENTIAL EQUATIONS S. VOLKWEIN Abstract. Infinite-dimensional optimization requires among other things many results from functional analysis.
More informationChapter 8 Integral Operators
Chapter 8 Integral Operators In our development of metrics, norms, inner products, and operator theory in Chapters 1 7 we only tangentially considered topics that involved the use of Lebesgue measure,
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationarxiv: v1 [math.na] 13 Sep 2014
Model Order Reduction for Nonlinear Schrödinger Equation B. Karasözen, a,, C. Akkoyunlu b, M. Uzunca c arxiv:49.3995v [math.na] 3 Sep 4 a Department of Mathematics and Institute of Applied Mathematics,
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More informationTaylor expansions for the HJB equation associated with a bilinear control problem
Taylor expansions for the HJB equation associated with a bilinear control problem Tobias Breiten, Karl Kunisch and Laurent Pfeiffer University of Graz, Austria Rome, June 217 Motivation dragged Brownian
More informationDYNAMIC BIFURCATION THEORY OF RAYLEIGH-BÉNARD CONVECTION WITH INFINITE PRANDTL NUMBER
DYNAMIC BIFURCATION THEORY OF RAYLEIGH-BÉNARD CONVECTION WITH INFINITE PRANDTL NUMBER JUNGHO PARK Abstract. We study in this paper the bifurcation and stability of the solutions of the Rayleigh-Bénard
More information1 Math 241A-B Homework Problem List for F2015 and W2016
1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let
More informationMath 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.
Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,
More informationwhich arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i
MODULE 6 Topics: Gram-Schmidt orthogonalization process We begin by observing that if the vectors {x j } N are mutually orthogonal in an inner product space V then they are necessarily linearly independent.
More informationHighly-efficient Reduced Order Modelling Techniques for Shallow Water Problems
Highly-efficient Reduced Order Modelling Techniques for Shallow Water Problems D.A. Bistrian and I.M. Navon Department of Electrical Engineering and Industrial Informatics, University Politehnica of Timisoara,
More informationCHAPTER VIII HILBERT SPACES
CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugate-linear transformation if it is a reallinear transformation from X into Y, and if T (λx)
More informationCharacterization of half-radial matrices
Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the
More informationMATHEMATICS. Course Syllabus. Section A: Linear Algebra. Subject Code: MA. Course Structure. Ordinary Differential Equations
MATHEMATICS Subject Code: MA Course Structure Sections/Units Section A Section B Section C Linear Algebra Complex Analysis Real Analysis Topics Section D Section E Section F Section G Section H Section
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More information