Citation for published version (APA): Minh, H. B. (2009). Model Reduction in a Behavioral Framework s.n.


University of Groningen. Model Reduction in a Behavioral Framework. Minh, Ha Binh. Document version: publisher's PDF (version of record). Publication date: 2009.

1 A behavioral approach to model reduction

1.1 Introduction and outline of the thesis

The aim of this thesis is to study a number of problems in the theory of model reduction and approximation for dynamical systems within the behavioral approach. The problem of model reduction and approximation plays an important role in systems and control theory: a given complex model is to be replaced by a model of lower complexity that is nevertheless a reasonable approximation of the original one. This means that the error between the original model and its approximation needs to be small. For some model reduction and approximation methods error bounds are known, for example for balanced truncation [42] and optimal Hankel norm approximation [18]. Often, in addition to reducing the complexity of the original model, preservation of certain properties of the model is required; examples are preservation of stability and passivity. Moreover, a very important requirement is that the reduction procedure be computationally efficient. Taken together, these requirements make model reduction one of the most difficult problems in systems and control theory. In [1] it is argued that a best method, i.e. a reduction method that meets all requirements, does not yet exist. Some methods are theoretically elegant, provide global error bounds, and preserve stability and passivity, but are not computationally efficient, for example the SVD-based methods (see [42, 18, 6, 48]). Other methods are efficient from a computational point of view but lack error bounds and do not preserve stability, for example Padé approximation (see [4]) or moment-matching methods (see [84, 36]). In this thesis we study a class of theoretical methods, focusing especially on the preservation of passivity (or, more generally, dissipativity) of the systems. Passivity is an important characteristic of linear systems.
It appears in many engineering applications, such as electrical systems, mechanical systems, and thermodynamics. Several methods for model reduction with stability and passivity preservation have been introduced in the past (see for example [34, 35, 70, 6, 45, 48]), as well as the new methods presented recently by Antoulas (see [2]) and Sorensen (see [60]).

The approach of this thesis is inspired by the axiomatic foundation for the general theory of dynamical systems developed by J.C. Willems in [76, 61], called the behavioral approach. In this theory a system is not considered as a mapping that transforms inputs into outputs. Instead, the system is identified with the set of all time functions compatible with its laws. This set, called the behavior of the system, plays a central role in the theory. One of the main advantages of the behavioral approach is that, since it distinguishes between the behavior of a system and its representations, it allows one to treat a wide variety of model classes emanating directly from modeling. In particular, the behavioral approach deals with state space models and transfer functions as special cases of more general model specifications. The behavioral approach to systems and control is a natural and powerful setting for the problem of model reduction and approximation: because it deals with models on a more abstract and general level, it captures much better the essential concepts and structural intricacies of model reduction and approximation. An example of this is the theorem in Chapter 4 in which an intrinsic, behavioral description is given of the system invariants appearing in passivity and bounded realness preserving model reduction by balanced truncation. We now give an outline of this thesis.

Chapter 1: A behavioral approach to model reduction. This chapter contains the mathematical background used in later chapters of this thesis. The chapter focuses on definitions and basic results from behavioral theory: the notions of linear differential systems, dissipative systems, quadratic differential forms, and input-state-output, driving-variable and output-nulling representations are explained here.
References for the contents of the chapter are [61, 80, 41, 71].

Chapter 2: Review of balancing methods for linear systems. In this chapter we review the existing balanced truncation methods. We start with the classical method of Lyapunov balancing, introduced in [42], which is applicable only to stable linear systems. Several extensions of this method exist that are applicable to unstable linear systems, namely the LQG balancing method and methods based on balancing of the normalized coprime factors; these are also discussed in this chapter. We furthermore summarize the work of Weiland [71] in order to understand the LQG balancing method in a behavioral framework. Finally, we review the positive real and bounded real balancing methods, applicable to positive real and bounded real linear systems.

Chapter 3: Dissipativity preserving model reduction by retention of trajectories of minimal dissipation. The aim of this chapter is to extend the technique of Antoulas (see [2]) and Sorensen (see [60]) from the classical state space framework to the general framework of the behavioral theory of dissipative systems. The main results of the chapter are algorithmic procedures to compute, for a given behavior represented in driving-variable or output-nulling form, a reduced-order behavior that preserves dissipativity while retaining a given subbehavior of the antistable part of the stationary trajectories of the original behavior. It turns out that the transfer matrix of the reduced-order behavior interpolates the transfer matrix of the original behavior in certain directions, with interpolation points at some of the antistable spectral zeroes of the original behavior and their mirror images; see also [11].

Chapter 4: Dissipativity preserving model reduction by balanced truncation with error bounds. The aim of this chapter is to study the method of balanced truncation for strictly half-line dissipative systems whose input cardinality equals the positive signature of the supply rate. The main results of this chapter are a theorem and the balanced truncation algorithm in Section 4.6. The theorem can be seen as an extension of the work of Weiland [71] to the context of strictly half-line dissipative systems, in which only the past behavior is an inner product space, while on the future behavior we only have an indefinite inner product. The reduction algorithm in Section 4.6 can be seen as a generalization of the positive real and bounded real balancing methods introduced in Chapter 2.

Chapter 5: Balanced state-space representations: a polynomial algebraic approach. In this chapter we present several results of independent interest.
We show how to iteratively compute, from the supply rate and an image representation of the system, the two-variable polynomial matrix representing any storage function. Moreover, we provide an algorithm to compute the factorization of any storage function as a quadratic function of a minimal state map (see [61]). This factorization can be used to ascertain whether a storage function is positive along a behavior, a property which has important consequences in the behavioral theory of dissipative systems (see [80]). Based on this factorization, the problem of obtaining a balanced state map from an image representation of the system is solved.
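The Lyapunov balancing method reviewed in Chapter 2 can be sketched numerically. The square-root implementation below is a standard textbook variant, not the thesis's own algorithm; the function name and the example system are ours.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable system (A, B, C) to order r."""
    # Controllability Gramian P solves A P + P A^T + B B^T = 0.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    # Observability Gramian Q solves A^T Q + Q A + C^T C = 0.
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Cholesky factors P = Lc Lc^T, Q = Lo Lo^T; the SVD of Lo^T Lc
    # yields the Hankel singular values and the balancing transformation.
    Lc = cholesky(P, lower=True)
    Lo = cholesky(Q, lower=True)
    U, hsv, Vt = svd(Lo.T @ Lc)
    S = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S        # maps reduced state to full state
    Ti = S @ U[:, :r].T @ Lo.T   # left inverse of T
    return Ti @ A @ T, Ti @ B, C @ T, hsv

# Stable two-state example reduced to one state.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 1)
```

The classical error bound of Lyapunov balancing guarantees that the H-infinity error is at most twice the sum of the truncated Hankel singular values, so here the frequency-response error (in particular at DC) is bounded by 2*hsv[1].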

1.2 Linear differential systems

Let $w$ denote a vector-valued variable whose components consist of the system variables. We define the signal space $\mathbb{W}$ as the space in which the variable $w$ takes its values. Usually $w$ itself is a function of an independent variable called time, which takes its values in a set called the time axis, denoted by $\mathbb{T}$. Let $\mathbb{W}^{\mathbb{T}}$ denote the set of functions from $\mathbb{T}$ to $\mathbb{W}$; thus $w$ is an element of $\mathbb{W}^{\mathbb{T}}$. Not every element of $\mathbb{W}^{\mathbb{T}}$ is allowed by the laws governing the dynamics of the system. The set of functions that are allowed by the system is precisely the object of our study, and this set is called the behavior. The laws that govern a system bring about this restriction of $\mathbb{W}^{\mathbb{T}}$ to its behavior; thus a system is viewed as an exclusion law indicating which trajectories are admissible for the system. This leads to the following definition of a dynamical system (Willems [76]).

Definition. A dynamical system $\Sigma$ is a triple $\Sigma = (\mathbb{T}, \mathbb{W}, \mathfrak{B})$ with $\mathbb{T}$ a set, called the time axis; $\mathbb{W}$ a set, called the signal space; and $\mathfrak{B} \subseteq \mathbb{W}^{\mathbb{T}}$ the behavior of the system.

We now come to properties of dynamical systems. We call the set of functions $\mathbb{W}^{\mathbb{T}}$ the universum $\mathfrak{U}$. The behavior of a system is a subset of the universum $\mathfrak{U}$; an element of the behavior is a function with domain $\mathbb{T}$ and co-domain $\mathbb{W}$. In this thesis we study dynamical systems with three properties: they are linear, time-invariant, and described by ordinary differential equations.

Definition. A dynamical system $\Sigma = (\mathbb{T}, \mathbb{W}, \mathfrak{B})$ is called linear if (1) $\mathbb{W}$ is a vector space over $\mathbb{R}$, and (2) the behavior $\mathfrak{B}$ is a subspace of $\mathbb{W}^{\mathbb{T}}$, i.e. $w_1, w_2 \in \mathfrak{B}$ and $\alpha_1, \alpha_2 \in \mathbb{R}$ imply $\alpha_1 w_1 + \alpha_2 w_2 \in \mathfrak{B}$. The latter is called the superposition principle.

If the time axis $\mathbb{T}$ is a semi-group, and $\sigma^t w$ is defined by $(\sigma^t w)(\tau) = w(t + \tau)$ for all $t, \tau \in \mathbb{T}$, then we can also define time-invariance of a behavior.
Definition. A dynamical system $\Sigma = (\mathbb{T}, \mathbb{W}, \mathfrak{B})$ is called time-invariant if for each trajectory $w \in \mathfrak{B}$ the shifted trajectory $\sigma^t w$ is again an element of $\mathfrak{B}$, for all $t \in \mathbb{T}$.

In this thesis we restrict ourselves to a special class of linear time-invariant dynamical systems, called linear differential systems. Suppose we have $g$ scalar linear constant-coefficient differential equations, written in the form
$$R_0 w + R_1 \frac{d}{dt} w + \cdots + R_L \frac{d^L}{dt^L} w = 0 \qquad (1.1)$$
where $R_i \in \mathbb{R}^{g \times w}$ for each $i = 0, \ldots, L$. By introducing the polynomial matrix $R(\xi) = R_0 + R_1 \xi + \cdots + R_L \xi^L$, the $g$ equations in (1.1) can be written in the form $R(\frac{d}{dt}) w = 0$. Consider the set of $C^\infty(\mathbb{R}, \mathbb{R}^w)$ solutions of the equations (1.1), where $C^\infty(\mathbb{R}, \mathbb{R}^w)$ denotes the space of all infinitely often differentiable functions from $\mathbb{R}$ to $\mathbb{R}^w$. This set of solutions defines the system $\Sigma = (\mathbb{R}, \mathbb{R}^w, \mathfrak{B})$ with
$$\mathfrak{B} = \{ w \in C^\infty(\mathbb{R}, \mathbb{R}^w) \mid R(\tfrac{d}{dt}) w = 0 \}. \qquad (1.2)$$
Obviously, this system is linear and time-invariant. Such a system $\Sigma$ is called a linear differential system. The set of all linear differential systems with $w$ variables is denoted by $\mathcal{L}^w$; we simply write $\mathfrak{B} \in \mathcal{L}^w$ instead of $\Sigma \in \mathcal{L}^w$. The representation (1.2) of $\mathfrak{B}$ is called a kernel representation of $\mathfrak{B}$, and we often write $\mathfrak{B} = \ker R(\frac{d}{dt})$. Another type of representation of linear differential systems is the latent variable representation, defined through polynomial matrices $R$ and $M$ by $R(\frac{d}{dt}) w = M(\frac{d}{dt}) \ell$, with
$$\mathfrak{B} = \{ w \in C^\infty(\mathbb{R}, \mathbb{R}^w) \mid \exists\, \ell \in C^\infty(\mathbb{R}, \mathbb{R}^l) \text{ s.t. } R(\tfrac{d}{dt}) w = M(\tfrac{d}{dt}) \ell \}. \qquad (1.3)$$
The behavior $\mathfrak{B}$ is then called the external or manifest behavior, and
$$\mathfrak{B}_{\mathrm{full}} := \{ (w, \ell) \in C^\infty(\mathbb{R}, \mathbb{R}^{w+l}) \mid R(\tfrac{d}{dt}) w = M(\tfrac{d}{dt}) \ell \}$$
the full behavior. Here $w$ is called the manifest variable and $\ell$ the latent variable. If $\mathfrak{B}$ is the external behavior of $\mathfrak{B}_{\mathrm{full}}$, we often write $\mathfrak{B} = (\mathfrak{B}_{\mathrm{full}})_{\mathrm{ext}}$. We also need the notion of state for a behavior.
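Before turning to state representations, here is a minimal numerical illustration of kernel representations (the helper and example system are ours, not from the thesis): a pure exponential trajectory $w(t) = e^{\lambda t} v$ satisfies $R(\frac{d}{dt}) w = 0$ exactly when $R(\lambda) v = 0$, since differentiation acts on it as multiplication by $\lambda$.

```python
import numpy as np

def eval_poly_matrix(coeffs, lam):
    """Evaluate R(lam) = R_0 + R_1 lam + ... + R_L lam**L,
    where coeffs = [R_0, R_1, ..., R_L] are constant matrices."""
    return sum(Rk * lam**k for k, Rk in enumerate(coeffs))

# Scalar example: R(xi) = xi^2 + 3 xi + 2, i.e. the behavior of
# w'' + 3 w' + 2 w = 0.  Since R(-1) = R(-2) = 0, the exponentials
# e^{-t} and e^{-2t} lie in ker R(d/dt), while e^{t} does not (R(1) = 6).
R = [np.array([[2.0]]), np.array([[3.0]]), np.array([[1.0]])]
```

For a vector-valued $w$, the same test applies with $v$ a nonzero vector in the kernel of the matrix $R(\lambda)$.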
A latent variable representation of $\mathfrak{B} \in \mathcal{L}^w$ is called a state representation if the latent variable (denoted here by $x$) has the property of state, i.e.: if $(w_1, x_1), (w_2, x_2) \in \mathfrak{B}_{\mathrm{full}}$ are such that $x_1(0) = x_2(0)$, then the concatenation $(w_1, x_1) \wedge (w_2, x_2)$ (at $t = 0$, here), defined as
$$((w_1, x_1) \wedge (w_2, x_2))(t) := \begin{cases} (w_1(t), x_1(t)), & t < 0 \\ (w_2(t), x_2(t)), & t \geq 0, \end{cases}$$
belongs to (the $C^\infty$-closure of) $\mathfrak{B}_{\mathrm{full}}$. We call such an $x$ a state for $\mathfrak{B}$.

A latent variable representation is a state representation of its manifest behavior if and only if its full behavior $\mathfrak{B}_{\mathrm{full}}$ can be represented by a differential equation that is zeroth order in $w$ and first order in $x$, i.e. by $R_0 w = M_0 x + M_1 \frac{d}{dt} x$, with $R_0$, $M_0$, $M_1$ constant matrices (see the corresponding theorem and the remark following it in [53]). A state representation is said to be minimal if the state has minimal dimension among all state representations with the same manifest behavior. This dimension is the same for all minimal state representations of a given behavior; it is denoted by $n(\mathfrak{B})$ and called the McMillan degree of $\mathfrak{B}$. There are many more structured types of state representations, for instance the driving-variable representation $\frac{d}{dt} x = Ax + Bv$, $w = Cx + Dv$, with $v$ an (obviously free) additional latent variable; the output-nulling representation $\frac{d}{dt} x = Ax + Bw$, $0 = Cx + Dw$; and the input-state-output representation $\frac{d}{dt} x = Ax + Bu$, $y = Cx + Du$, $w = (u, y)$, the most popular of them all. We collect the basic material on these representations in Section 1.6. In [55] it was shown that a state variable (and in particular a minimal one) for $\mathfrak{B}$ can be obtained from the external or full trajectories by applying to them a state map, defined as follows. Let $X \in \mathbb{R}^{n \times w}[\xi]$ be such that the subspace of $C^\infty(\mathbb{R}, \mathbb{R}^{w+n})$ defined by $\{ (w, X(\frac{d}{dt}) w) \mid w \in \mathfrak{B} \}$ is a state system; then $X(\frac{d}{dt})$ is called a state map for $\mathfrak{B}$, and $X(\frac{d}{dt}) w$ is a state variable for $\mathfrak{B}$. In the following we consider state maps for systems in image form; in this case it can be shown (see [55]) that a state map can be chosen acting on the latent variable $\ell$ alone, and we consider state systems $w = M(\frac{d}{dt}) \ell$, $x = X(\frac{d}{dt}) \ell$, with $x$ a state variable. The definition of a minimal state map follows in a straightforward manner. In [55] algorithms are stated to construct a state map from the equations describing the system.
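For a scalar kernel representation $r(\frac{d}{dt}) w = 0$ with $r$ monic of degree $L$, a standard minimal state map is $X(\xi) = \operatorname{col}(1, \xi, \ldots, \xi^{L-1})$, i.e. $x = (w, w', \ldots, w^{(L-1)})$; the state then obeys $\frac{d}{dt} x = A x$ with $A$ in companion form. A small sketch (the example coefficients are ours):

```python
import numpy as np

def companion_matrix(low_coeffs):
    """State matrix for x = (w, w', ..., w^{(L-1)}) when
    r(xi) = low_coeffs[0] + ... + low_coeffs[L-1] xi^{L-1} + xi^L."""
    L = len(low_coeffs)
    A = np.zeros((L, L))
    A[:-1, 1:] = np.eye(L - 1)          # d/dt of w^{(k)} is w^{(k+1)}
    A[-1, :] = -np.asarray(low_coeffs)  # last row encodes r(d/dt) w = 0
    return A

# w'' + 3 w' + 2 w = 0: the state x = (w, w') satisfies dx/dt = A x.
A = companion_matrix([2.0, 3.0])
```

The eigenvalues of the companion matrix are the roots of $r$, here $-1$ and $-2$.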
We call $\mathfrak{B} \in \mathcal{L}^w$ controllable if for all $w_1, w_2 \in \mathfrak{B}$ there exist a $T \geq 0$ and a $w \in \mathfrak{B}$ such that $w(t) = w_1(t)$ for $t < 0$ and $w(t + T) = w_2(t)$ for $t \geq 0$. Denote the subset of all controllable elements of $\mathcal{L}^w$ by $\mathcal{L}^w_{\mathrm{cont}}$. It is shown in [53] that $\mathfrak{B} \in \mathcal{L}^w_{\mathrm{cont}}$ if and only if it admits an image representation, i.e. there exists an $M \in \mathbb{R}^{w \times l}[\xi]$ such that

$$\mathfrak{B} = \{ w \in C^\infty(\mathbb{R}, \mathbb{R}^w) \mid \exists\, \ell \in C^\infty(\mathbb{R}, \mathbb{R}^l) \text{ s.t. } w = M(\tfrac{d}{dt}) \ell \}.$$
If $\mathfrak{B}$ is represented in this way, we also write $\mathfrak{B} = \operatorname{im} M(\frac{d}{dt})$. The controllable part of a behavior is defined as follows. Let $\mathfrak{B} \in \mathcal{L}^w$. There exists $\mathfrak{B}' \in \mathcal{L}^w_{\mathrm{cont}}$, $\mathfrak{B}' \subseteq \mathfrak{B}$, such that $\mathfrak{B}'' \in \mathcal{L}^w_{\mathrm{cont}}$, $\mathfrak{B}'' \subseteq \mathfrak{B}$ implies $\mathfrak{B}'' \subseteq \mathfrak{B}'$; i.e., $\mathfrak{B}'$ is the largest controllable sub-behavior contained in $\mathfrak{B}$. We denote this system by $\mathfrak{B}_{\mathrm{cont}}$. It can be shown that $\mathfrak{B}_{\mathrm{cont}}$ is the closure in the $C^\infty$-topology of $\mathfrak{B} \cap \mathcal{D}$. A behavior $\mathfrak{B} \in \mathcal{L}^w$ is called autonomous if it is a finite-dimensional subspace of $C^\infty(\mathbb{R}, \mathbb{R}^w)$. It can be shown that an autonomous behavior admits a kernel representation $\mathfrak{B} = \ker R(\frac{d}{dt})$ in which the matrix $R$ is $w \times w$ and nonsingular. The polynomial $\det R(\xi)$ is denoted by $\chi_{\mathfrak{B}}$ and is called the characteristic polynomial of $\mathfrak{B}$; its roots are called the characteristic frequencies of $\mathfrak{B}$. It can be proved (see [53]) that if $\mathfrak{B}$ is autonomous and $\lambda_i \in \mathbb{C}$, $i = 1, \ldots, r$, are the distinct roots of the characteristic polynomial $\chi_{\mathfrak{B}}$, each with multiplicity $n_i$, then $w \in \mathfrak{B}$ if and only if
$$w(t) = \sum_{i=1}^{r} \sum_{j=0}^{n_i - 1} v_{ij} \, t^j e^{\lambda_i t}, \qquad (1.4)$$
where the vectors $v_{ij} \in \mathbb{C}^w$ satisfy
$$\sum_{j=k}^{n_i - 1} \binom{j}{k} R^{(j-k)}(\lambda_i) \, v_{ij} = 0,$$
with $R^{(j-k)}$ denoting the $(j-k)$-th derivative of the polynomial matrix $R$. Consequently, in the autonomous case every trajectory $w \in \mathfrak{B}$ is a linear combination of polynomial-exponential trajectories associated with the characteristic frequencies $\lambda_i$. Given a behavior $\mathfrak{B} \in \mathcal{L}^w$, it is in general possible to choose some components of $w$ arbitrarily as functions in $C^\infty(\mathbb{R}, \mathbb{R})$. The maximal number of such components that can be chosen arbitrarily is called the input cardinality of $\mathfrak{B}$ and is denoted by $m(\mathfrak{B})$. The number $m(\mathfrak{B})$ is exactly equal to the dimension of the input $u$ in any input/state/output representation of $\mathfrak{B}$. The complementary number $w - m(\mathfrak{B})$ is called the output cardinality of $\mathfrak{B}$ and denoted by $p(\mathfrak{B})$. If none of the components of $w$ can be chosen arbitrarily, then $\mathfrak{B}$ is autonomous, and we write $m(\mathfrak{B}) = 0$.
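The characteristic frequencies discussed above can be computed directly from a kernel representation as the roots of $\det R(\xi)$. A small numerical sketch, with an example $R$ of our own choosing:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

# Autonomous behavior B = ker R(d/dt) with
# R(xi) = [[xi + 1, 1], [0, xi + 2]]  (square and nonsingular).
R = [[P([1.0, 1.0]), P([1.0])],
     [P([0.0]),      P([2.0, 1.0])]]

# chi_B(xi) = det R(xi), expanded by hand for the 2x2 case.
chi = R[0][0] * R[1][1] - R[0][1] * R[1][0]
freqs = np.sort(chi.roots().real)
# Every w in B is a combination of e^{-t} and e^{-2t}.
```

Here $\chi_{\mathfrak{B}}(\xi) = (\xi + 1)(\xi + 2)$, so the characteristic frequencies are $-1$ and $-2$, both simple, and (1.4) involves no polynomial factors.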
Let $\mathfrak{B} \in \mathcal{L}^w_{\mathrm{cont}}$ and let $\Sigma \in \mathbb{R}^{w \times w}$ be symmetric. We define the $\Sigma$-orthogonal complement $\mathfrak{B}^{\perp_\Sigma}$ of $\mathfrak{B}$ as
$$\mathfrak{B}^{\perp_\Sigma} := \{ w \in C^\infty(\mathbb{R}, \mathbb{R}^w) \mid \int_{-\infty}^{+\infty} w^\top \Sigma \delta \, dt = 0 \text{ for all } \delta \in \mathfrak{B} \cap \mathcal{D} \}.$$
It can be proven that $\mathfrak{B}^{\perp_\Sigma}$ is also a controllable behavior; see Section 10 of [80]. If $\Sigma = I$, we simply write $\mathfrak{B}^{\perp}$, called the orthogonal complement of $\mathfrak{B}$.

1.3 Some notation

We denote by $\mathbb{R}_-$ the set of negative real numbers, and by $\mathbb{R}_+$ the complementary set of nonnegative real numbers. $\mathbb{C}_-$ ($\mathbb{C}_+$) is the subset of $\mathbb{C}$ of all $\lambda$ such that $\mathrm{Re}(\lambda) < 0$ ($\mathrm{Re}(\lambda) > 0$). The Euclidean space of $n$-dimensional real (respectively complex) vectors is denoted by $\mathbb{R}^n$ (respectively $\mathbb{C}^n$), and the space of $m \times n$ real (respectively complex) matrices by $\mathbb{R}^{m \times n}$ (respectively $\mathbb{C}^{m \times n}$). Given two column vectors $x$ and $y$, we denote by $\mathrm{col}(x, y)$ the vector obtained by stacking $x$ over $y$; a similar convention holds for stacking matrices with the same number of columns. If $A \in \mathbb{R}^{p \times m}$, then $A^\top \in \mathbb{R}^{m \times p}$ denotes its transpose; if $A \in \mathbb{C}^{p \times m}$, then $A^* \in \mathbb{C}^{m \times p}$ denotes its complex conjugate transpose. In this thesis we use the following function spaces:
$C^\infty(\mathbb{R}, \mathbb{R}^w)$: the space of all infinitely differentiable functions from $\mathbb{R}$ to $\mathbb{R}^w$.
$\mathcal{D}(\mathbb{R}, \mathbb{R}^w)$: the subspace of $C^\infty(\mathbb{R}, \mathbb{R}^w)$ consisting of all compactly supported functions.
$L_1^{\mathrm{loc}}(\mathbb{R}, \mathbb{R}^w)$: the space of all locally integrable functions from $\mathbb{R}$ to $\mathbb{R}^w$, i.e. all $w : \mathbb{R} \to \mathbb{R}^w$ such that $\int_a^b \|w(t)\| \, dt < \infty$ for all $a, b \in \mathbb{R}$.
$L_2^{\mathrm{loc}}(\mathbb{R}, \mathbb{R}^w)$: the space of all locally square integrable functions from $\mathbb{R}$ to $\mathbb{R}^w$, i.e. all $w : \mathbb{R} \to \mathbb{R}^w$ such that $\int_a^b \|w(t)\|^2 \, dt < \infty$ for all $a, b \in \mathbb{R}$.
$L_2(\mathbb{R}, \mathbb{R}^w)$: the space of all square integrable functions on $\mathbb{R}$, i.e. all $w : \mathbb{R} \to \mathbb{R}^w$ such that $\int_{\mathbb{R}} \|w(t)\|^2 \, dt < \infty$. The $L_2$-norm of $w$ is $\|w\|_2 := (\int \|w\|^2 \, dt)^{1/2}$.
$L_2(\mathbb{R}_-, \mathbb{R}^w)$: the space of all square integrable functions on $\mathbb{R}_-$, i.e. all $w : \mathbb{R}_- \to \mathbb{R}^w$ such that $\int_{\mathbb{R}_-} \|w(t)\|^2 \, dt < \infty$.
$L_2(\mathbb{R}_+, \mathbb{R}^w)$: the space of all square integrable functions on $\mathbb{R}_+$, i.e. all $w : \mathbb{R}_+ \to \mathbb{R}^w$ such that $\int_{\mathbb{R}_+} \|w(t)\|^2 \, dt < \infty$.
Sometimes, when the domain and co-domain are obvious from the context, we simply write $C^\infty$, $\mathcal{D}$, $L_1^{\mathrm{loc}}$, $L_2^{\mathrm{loc}}$, $L_2$, $L_2(\mathbb{R}_-)$ and $L_2(\mathbb{R}_+)$. If $F(t)$ is a real $p \times m$ matrix-valued function, then the space of all functions formed as real linear combinations of the columns of $F(t)$ is denoted by $\mathrm{span}\{F(t)\} := \{ F(t) x_0 \mid x_0 \in \mathbb{R}^m \}$.
For a given function $w$ on $\mathbb{R}$, we denote by $w|_{\mathbb{R}_-}$ and $w|_{\mathbb{R}_+}$ the restrictions of $w$ to $\mathbb{R}_-$ and $\mathbb{R}_+$, respectively. The exponential function whose value at $t$ is $e^{\lambda t}$ is denoted by $\exp_\lambda$.

The ring of polynomials with real coefficients in the indeterminate $\xi$ is denoted by $\mathbb{R}[\xi]$; the ring of two-variable polynomials with real coefficients in the indeterminates $\zeta$ and $\eta$ is denoted by $\mathbb{R}[\zeta, \eta]$. The space of all $n \times m$ polynomial matrices in the indeterminate $\xi$ is denoted by $\mathbb{R}^{n \times m}[\xi]$, and that consisting of all $n \times m$ polynomial matrices in the indeterminates $\zeta$ and $\eta$ by $\mathbb{R}^{n \times m}[\zeta, \eta]$. Given a matrix $R \in \mathbb{R}^{n \times m}[\xi]$, we define $R^\sim(\xi) := R(-\xi)^\top \in \mathbb{R}^{m \times n}[\xi]$. With a polynomial matrix $P(\xi) = \sum_{k \in \mathbb{Z}_+} P_k \xi^k$ we associate its coefficient matrix, defined as the block matrix $\mathrm{mat}(P) := [\, P_0 \;\; P_1 \;\; \cdots \;\; P_N \;\; \cdots \,]$. Observe that $\mathrm{mat}(P)$ has only a finite number of nonzero entries; moreover,
$$P(\xi) = \mathrm{mat}(P) \, \mathrm{col}(I_w, \, I_w \xi, \, \ldots, \, I_w \xi^k, \, \ldots).$$
Finally, if $\Sigma = \Sigma^\top$, then we denote by $\sigma_+(\Sigma)$ the number of positive eigenvalues, by $\sigma_-(\Sigma)$ the number of negative eigenvalues, and by $\sigma_0(\Sigma)$ the multiplicity of zero as an eigenvalue of $\Sigma$.

1.4 Quadratic differential forms

Let $\Phi \in \mathbb{R}^{w_1 \times w_2}[\zeta, \eta]$, written out in terms of its coefficient matrices $\Phi_{k,l}$ as the (finite) sum $\Phi(\zeta, \eta) = \sum_{k,l \in \mathbb{Z}_+} \Phi_{k,l} \zeta^k \eta^l$. It induces the map $L_\Phi : C^\infty(\mathbb{R}, \mathbb{R}^{w_1}) \times C^\infty(\mathbb{R}, \mathbb{R}^{w_2}) \to C^\infty(\mathbb{R}, \mathbb{R})$, defined by
$$L_\Phi(w_1, w_2) = \sum_{k,l \in \mathbb{Z}_+} \left( \frac{d^k w_1}{dt^k} \right)^{\!\top} \Phi_{k,l} \left( \frac{d^l w_2}{dt^l} \right).$$
This map is called the bilinear differential form (BDF) induced by $\Phi$. When $w_1 = w_2 = w$, $L_\Phi$ induces the map $Q_\Phi : C^\infty(\mathbb{R}, \mathbb{R}^w) \to C^\infty(\mathbb{R}, \mathbb{R})$, defined by $Q_\Phi(w) = L_\Phi(w, w)$, i.e.
$$Q_\Phi(w) = \sum_{k,l \in \mathbb{Z}_+} \left( \frac{d^k w}{dt^k} \right)^{\!\top} \Phi_{k,l} \left( \frac{d^l w}{dt^l} \right).$$

This map is called the quadratic differential form (QDF) induced by $\Phi$. When considering QDFs we can assume without loss of generality that $\Phi$ is symmetric, i.e. $\Phi(\zeta, \eta) = \Phi(\eta, \zeta)^\top$. We denote the set of real symmetric $w$-dimensional two-variable polynomial matrices by $\mathbb{R}^{w \times w}_s[\zeta, \eta]$. With $\Phi(\zeta, \eta) = \sum_{k,l \in \mathbb{Z}_+} \Phi_{k,l} \zeta^k \eta^l \in \mathbb{R}^{w_1 \times w_2}[\zeta, \eta]$ we associate its coefficient matrix, defined as the infinite block matrix
$$\mathrm{mat}(\Phi) := \begin{bmatrix} \Phi_{0,0} & \Phi_{0,1} & \cdots & \Phi_{0,N} & \cdots \\ \Phi_{1,0} & \Phi_{1,1} & \cdots & \Phi_{1,N} & \cdots \\ \vdots & \vdots & & \vdots & \\ \Phi_{N,0} & \Phi_{N,1} & \cdots & \Phi_{N,N} & \cdots \\ \vdots & \vdots & & \vdots & \end{bmatrix}.$$
Observe that $\mathrm{mat}(\Phi)$ has only a finite number of nonzero entries, and that
$$\Phi(\zeta, \eta) = \begin{bmatrix} I_{w_1} & I_{w_1} \zeta & \cdots & I_{w_1} \zeta^k & \cdots \end{bmatrix} \mathrm{mat}(\Phi) \, \mathrm{col}(I_{w_2}, \, I_{w_2} \eta, \, \ldots, \, I_{w_2} \eta^k, \, \ldots).$$
When $\Phi$ is symmetric, it holds that $\mathrm{mat}(\Phi) = \mathrm{mat}(\Phi)^\top$; in this case we can factor $\mathrm{mat}(\Phi) = \widetilde{M}^\top \Sigma_\Phi \widetilde{M}$, with $\widetilde{M}$ a matrix having a finite number of rows, full row rank, and an infinite number of columns, and with $\Sigma_\Phi$ a signature matrix. This decomposition leads to the factorization
$$\Phi(\zeta, \eta) = M(\zeta)^\top \Sigma_\Phi M(\eta),$$
called a canonical symmetric factorization of $\Phi$. A canonical symmetric factorization is not unique; all of them can be obtained from one by replacing $M(\xi)$ with $U M(\xi)$, with $U$ a square matrix such that $U^\top \Sigma_\Phi U = \Sigma_\Phi$. The features of the calculus of QDFs used in this thesis are the following. The first is the delta operator, defined as $\partial : \mathbb{R}^{w \times w}_s[\zeta, \eta] \to \mathbb{R}^{w \times w}[\xi]$, $\partial\Phi(\xi) := \Phi(-\xi, \xi)$. Observe that $\partial\Phi$ is a para-Hermitian matrix, i.e. $\partial\Phi(\xi)^\top = \partial\Phi(-\xi)$.
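The action of a QDF is easy to evaluate in closed form on exponential trajectories: for scalar $w(t) = e^{at}$ we have $\frac{d^k w}{dt^k} = a^k w$, hence $Q_\Phi(w)(t) = \Phi(a, a)\, e^{2at}$. A small sketch (the coefficient-dictionary representation of $\Phi$ is our own convention):

```python
import numpy as np

def qdf_on_exponential(phi, a, t):
    """Q_Phi(w)(t) for scalar w(t) = exp(a t), where phi is a dict
    {(k, l): Phi_kl} of coefficients of Phi(zeta, eta)."""
    phi_aa = sum(c * a**k * a**l for (k, l), c in phi.items())
    return phi_aa * np.exp(2.0 * a * t)

# Phi(zeta, eta) = zeta * eta induces Q_Phi(w) = (dw/dt)^2.
phi = {(1, 1): 1.0}
```

For $w(t) = e^{-t}$ this gives $Q_\Phi(w)(t) = e^{-2t}$, consistent with $(w')^2 = e^{-2t}$.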

Another concept we use is the derivative of a BDF. The functional $\frac{d}{dt} L_\Phi$, defined by $(\frac{d}{dt} L_\Phi)(w_1, w_2) := \frac{d}{dt} (L_\Phi(w_1, w_2))$, is again a BDF; it is easy to see that the two-variable polynomial matrix inducing it is $(\zeta + \eta) \Phi(\zeta, \eta)$. Next, we introduce the notion of integral of a QDF. In order to make sure that the integral exists, we assume that the QDF acts on $\mathcal{D}(\mathbb{R}, \mathbb{R}^w)$, the space of infinitely differentiable compactly supported functions. The integral of $Q_\Phi$ is defined as $\int Q_\Phi : \mathcal{D}(\mathbb{R}, \mathbb{R}^w) \to \mathbb{R}$, $\int Q_\Phi(w) := \int_{-\infty}^{+\infty} Q_\Phi(w) \, dt$. We now introduce the notion of observability of a QDF. Fix $w_1 \in C^\infty(\mathbb{R}, \mathbb{R}^{w_1})$, and consider the map $L_\Phi(w_1, \cdot)$ acting on infinitely differentiable trajectories, defined by $L_\Phi(w_1, \cdot)(w_2) := L_\Phi(w_1, w_2)$, and the map $L_\Phi(\cdot, w_2)$ defined in an analogous way. Then $\Phi$ is observable if $[L_\Phi(w_1, \cdot) = 0] \Rightarrow [w_1 = 0]$ and $[L_\Phi(\cdot, w_2) = 0] \Rightarrow [w_2 = 0]$. It can be shown that if $\Phi$ is symmetric and $M(\zeta)^\top \Sigma_\Phi M(\eta)$ is a canonical symmetric factorization of $\Phi$, then $\Phi$ is observable if and only if $M(\lambda)$ has full column rank for all $\lambda \in \mathbb{C}$. Finally, we show how to associate with a behavior $\mathfrak{B} \in \mathcal{L}^w_{\mathrm{cont}}$ a QDF acting on $C^\infty(\mathbb{R}, \mathbb{R}^l)$. Let $\mathfrak{B} = \operatorname{im} M(\frac{d}{dt})$, and let $\Sigma$ be a nonsingular matrix. Define $\Phi \in \mathbb{R}^{l \times l}_s[\zeta, \eta]$ as $\Phi(\zeta, \eta) := M(\zeta)^\top \Sigma M(\eta)$. It is easily verified that if $w$ and $\ell$ satisfy $w = M(\frac{d}{dt}) \ell$, then $Q_\Sigma(w) = Q_\Phi(\ell)$, where $Q_\Sigma(w) := w^\top \Sigma w$. The introduction of the two-variable matrix $\Phi$ allows us to study $Q_\Sigma$ along $\mathfrak{B}$ in terms of properties of the QDF $Q_\Phi$ acting on free trajectories in $C^\infty(\mathbb{R}, \mathbb{R}^l)$.

1.5 Dissipative systems

For an extensive treatment of dissipative systems in a behavioral context we refer to [79, 80, 81, 61]. Here we review the basic material.

Definition. Let $\Sigma \in \mathbb{R}^{w \times w}$ be symmetric, $\mathfrak{B} \in \mathcal{L}^w_{\mathrm{cont}}$, and $Q_\Sigma(w) := w^\top \Sigma w$.

1. $\mathfrak{B}$ is said to be dissipative with respect to $Q_\Sigma$ (or briefly, $\Sigma$-dissipative) if $\int_{-\infty}^{+\infty} Q_\Sigma(w) \, dt \geq 0$ for all $w \in \mathfrak{B} \cap \mathcal{D}$. Further, it is said to be dissipative on $\mathbb{R}_-$ with respect to $Q_\Sigma$ (or briefly, $\Sigma$-dissipative on $\mathbb{R}_-$) if $\int_{-\infty}^{0} Q_\Sigma(w) \, dt \geq 0$ for all $w \in \mathfrak{B} \cap \mathcal{D}$. We use the analogous definition for dissipativity on $\mathbb{R}_+$.
2. $\mathfrak{B}$ is said to be strictly dissipative with respect to $Q_\Sigma$ (or briefly, strictly $\Sigma$-dissipative) if there exists an $\epsilon > 0$ such that $\mathfrak{B}$ is dissipative with respect to $Q_{\Sigma - \epsilon I}$. Further, it is said to be strictly dissipative on $\mathbb{R}_-$ with respect to $Q_\Sigma$ (or briefly, strictly $\Sigma$-dissipative on $\mathbb{R}_-$) if there exists an $\epsilon > 0$ such that $\mathfrak{B}$ is dissipative on $\mathbb{R}_-$ with respect to $Q_{\Sigma - \epsilon I}$. The definitions of strict dissipativity on $\mathbb{R}_+$ are analogous.

Remark. It is easily seen that if $\mathfrak{B}$ is $\Sigma$-dissipative on $\mathbb{R}_-$ or $\mathbb{R}_+$, then it is $\Sigma$-dissipative. Similarly, if $\mathfrak{B}$ is strictly $\Sigma$-dissipative on $\mathbb{R}_-$ or $\mathbb{R}_+$, then it is strictly $\Sigma$-dissipative.

Definition. The QDF $Q_\Psi$ induced by $\Psi \in \mathbb{R}^{w \times w}_s[\zeta, \eta]$ is called a storage function for $(\mathfrak{B}, Q_\Sigma)$ if
$$\frac{d}{dt} Q_\Psi(w) \leq Q_\Sigma(w) \quad \text{for all } w \in \mathfrak{B}. \qquad (1.5)$$
The QDF $Q_\Delta$ induced by $\Delta \in \mathbb{R}^{w \times w}_s[\zeta, \eta]$ is called a dissipation function for $(\mathfrak{B}, Q_\Sigma)$ if $Q_\Delta(w) \geq 0$ for all $w \in \mathfrak{B}$ and $\int Q_\Sigma(w) \, dt = \int Q_\Delta(w) \, dt$ for all $w \in \mathfrak{B} \cap \mathcal{D}$. If the supply rate $Q_\Sigma$, the storage function $Q_\Psi$ and the dissipation function $Q_\Delta$ satisfy
$$\frac{d}{dt} Q_\Psi(w) = Q_\Sigma(w) - Q_\Delta(w) \quad \text{for all } w \in \mathfrak{B}, \qquad (1.6)$$
then we call the triple $(Q_\Sigma, Q_\Psi, Q_\Delta)$ matched on $\mathfrak{B}$. Equation (1.6) expresses that, along $w \in \mathfrak{B}$, the increase in internal storage equals the rate at which supply is delivered minus the rate at which supply is dissipated. The following is well known; see e.g. [61], Theorem 5.4.

Proposition. The following conditions are equivalent:
1. $(\mathfrak{B}, Q_\Sigma)$ is dissipative,

2. $(\mathfrak{B}, Q_\Sigma)$ admits a storage function,
3. $(\mathfrak{B}, Q_\Sigma)$ admits a dissipation function.
Furthermore, for any dissipation function $Q_\Delta$ there exists a unique storage function $Q_\Psi$, and for any storage function $Q_\Psi$ there exists a unique dissipation function $Q_\Delta$, such that $(Q_\Sigma, Q_\Psi, Q_\Delta)$ is matched on $\mathfrak{B}$.

In general there exist infinitely many storage functions for $\mathfrak{B}$, but it turns out that all of them lie between two extremal storage functions.

Proposition. Let $\mathfrak{B}$ be $\Sigma$-dissipative; then there exist storage functions $\Psi_-$ and $\Psi_+$ such that any other storage function $\Psi$ for $(\mathfrak{B}, \Sigma)$ satisfies $Q_{\Psi_-} \leq Q_\Psi \leq Q_{\Psi_+}$.
Proof. See [80], Theorem 5.7.

Every storage function is a quadratic function of the state, in the following sense.

Proposition. Let $\Sigma = \Sigma^\top \in \mathbb{R}^{w \times w}$ and let $\mathfrak{B} \in \mathcal{L}^w_{\mathrm{cont}}$ be $\Sigma$-dissipative. Let $Q_\Psi$ be a storage function. Then $Q_\Psi$ is a state function, i.e. for every $X$ inducing a minimal state map for $\mathfrak{B}$ there exists a real symmetric matrix $K$ such that
$$Q_\Psi(w) = \left( X(\tfrac{d}{dt}) w \right)^{\!\top} K \left( X(\tfrac{d}{dt}) w \right).$$
Proof. See Theorem 5.5 of [80].

The storage function $Q_\Psi$ is called a nonnegative storage function if the matrix $K$ in the above proposition is positive semidefinite, and a positive storage function if $K$ is positive definite. If $m(\mathfrak{B}) = \sigma_+(\Sigma)$, then positivity of all storage functions is equivalent to half-line $\Sigma$-dissipativity of $\mathfrak{B}$, as the following result shows.

Proposition. Let $\mathfrak{B} \in \mathcal{L}^w_{\mathrm{cont}}$ and let $\Sigma = \Sigma^\top \in \mathbb{R}^{w \times w}$ be nonsingular. Assume that $m(\mathfrak{B}) = \sigma_+(\Sigma)$. Then the following statements are equivalent:
1. $\mathfrak{B}$ is $\Sigma$-dissipative on $\mathbb{R}_-$;
2. there exists a positive storage function for $\mathfrak{B}$;
3. all storage functions of $\mathfrak{B}$ are positive;

4. there exists a real symmetric matrix $K > 0$ such that $Q_K(w) := \left( X(\tfrac{d}{dt}) w \right)^{\!\top} K \left( X(\tfrac{d}{dt}) w \right)$ is a storage function for $\mathfrak{B}$;
5. there exists a storage function for $\mathfrak{B}$, and every real symmetric matrix $K$ such that $Q_K(w) := \left( X(\tfrac{d}{dt}) w \right)^{\!\top} K \left( X(\tfrac{d}{dt}) w \right)$ is a storage function for $\mathfrak{B}$ satisfies $K > 0$.
Proof. See [80].

1.6 Basics of input-state-output, driving-variable and output-nulling representations

As already noted in Section 1.2, linear differential behaviors often arise as the external behavior of systems with latent variables. Three particular instances of such latent variable representations are the input-state-output, driving-variable and output-nulling representations. In these latent variable systems the latent variable in fact satisfies the axiom of state. In this section we collect the basic material on input-state-output, driving-variable and output-nulling representations.

1.6.1 Input-state-output representations

Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, $D \in \mathbb{R}^{p \times m}$, and consider the equations
$$\frac{d}{dt} x = Ax + Bu, \quad y = Cx + Du. \qquad (1.7)$$
These equations represent the full behavior
$$\mathfrak{B}_{\mathrm{iso}}(A, B, C, D) := \{ (u, x, y) \in C^\infty(\mathbb{R}, \mathbb{R}^m) \times C^\infty(\mathbb{R}, \mathbb{R}^n) \times C^\infty(\mathbb{R}, \mathbb{R}^p) \mid (1.7) \text{ holds} \}.$$
If we interpret $w = (u, y)$ as the manifest variable and $x$ as the latent variable, then $\mathfrak{B}_{\mathrm{iso}}(A, B, C, D)$ is a latent variable representation of its external behavior
$$\mathfrak{B}_{\mathrm{iso}}(A, B, C, D)_{\mathrm{ext}} = \{ (u, y) \in C^\infty(\mathbb{R}, \mathbb{R}^{m+p}) \mid \exists\, x \in C^\infty(\mathbb{R}, \mathbb{R}^n) \text{ such that } (u, x, y) \in \mathfrak{B}_{\mathrm{iso}}(A, B, C, D) \}.$$
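For the supply rate $Q_\Sigma(w) = 2u^\top y$ and an input-state-output representation (1.7), the storage-function inequality $\frac{d}{dt}(x^\top K x) \le 2 u^\top y$ along all trajectories is equivalent to negative semidefiniteness of the classical KYP matrix. This equivalence is standard; the helper function and the example system below are our own sketch.

```python
import numpy as np

def is_storage_matrix(K, A, B, C, D, tol=1e-9):
    """Check d/dt (x' K x) <= 2 u' y along dx/dt = Ax + Bu, y = Cx + Du,
    via the Kalman-Yakubovich-Popov linear matrix inequality."""
    kyp = np.block([[A.T @ K + K @ A, K @ B - C.T],
                    [B.T @ K - C,     -(D + D.T)]])
    return float(np.max(np.linalg.eigvalsh((kyp + kyp.T) / 2))) <= tol

# A passive first-order example: dx/dt = -x + u, y = x + u.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[1.0]])
```

Here $K = 1$ gives the KYP matrix $\mathrm{diag}(-2, -2)$, so $x^2$ is a storage function, while $K = 10$ fails the test, illustrating that the storage matrices lie between extremal values.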

The variable $x$ is in fact a state variable, by the corresponding theorem in [53]. If $\mathfrak{B} = \mathfrak{B}_{\mathrm{iso}}(A, B, C, D)_{\mathrm{ext}}$, then we call $\mathfrak{B}_{\mathrm{iso}}(A, B, C, D)$ an input-state-output representation of $\mathfrak{B}$. An input-state-output representation $\mathfrak{B}_{\mathrm{iso}}(A, B, C, D)$ of $\mathfrak{B}$ is called minimal if the state dimension $n$ is minimal over all state representations of $\mathfrak{B}$, i.e. if $n = n(\mathfrak{B})$, the McMillan degree of $\mathfrak{B}$. In the following, let $m(\mathfrak{B})$ and $p(\mathfrak{B})$ denote the input cardinality and the output cardinality of $\mathfrak{B}$, respectively. $(A, B)$ is called a controllable pair if the rank of the matrix $[\, B \;\; AB \;\; \cdots \;\; A^{n-1} B \,]$ is equal to $n$, and $(C, A)$ is called an observable pair if the rank of the matrix $\mathrm{col}(C, CA, \ldots, CA^{n-1})$ is equal to $n$. The following result is well known.

Proposition. Let $\mathfrak{B} \in \mathcal{L}^w_{\mathrm{cont}}$ be given. Denote $n = n(\mathfrak{B})$, $m = m(\mathfrak{B})$ and $p = p(\mathfrak{B})$. Then:
1. there exist matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, $D \in \mathbb{R}^{p \times m}$ such that $\mathfrak{B}_{\mathrm{iso}}(A, B, C, D)$ is a minimal input-state-output representation of $\mathfrak{B}$;
2. if $\mathfrak{B}_{\mathrm{iso}}(A, B, C, D)$ represents $\mathfrak{B}$, then it is a minimal representation if and only if $(C, A)$ is an observable pair;
3. if $\mathfrak{B}_{\mathrm{iso}}(A, B, C, D)$ is a minimal representation of $\mathfrak{B}$, then $\mathfrak{B}_{\mathrm{iso}}(A', B', C', D')$ is a minimal representation of $\mathfrak{B}$ if and only if there exists an invertible matrix $S$ such that $(A', B', C', D') = (S^{-1} A S, S^{-1} B, C S, D)$.
Proof. See [59].

The next proposition explains how to compute a minimal input-state-output representation from a given one.

Proposition (Kalman decomposition). Let the input-output behavior $\mathfrak{B} \in \mathcal{L}^{m+p}$ be represented in input-state-output form as $\mathfrak{B} = \mathfrak{B}_{\mathrm{iso}}(A, B, C, D)_{\mathrm{ext}}$. Then there is a nonsingular matrix $S$ such that
$$S^{-1} A S = \begin{bmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{bmatrix}, \quad S^{-1} B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}, \quad C S = \begin{bmatrix} C_1 & 0 \end{bmatrix},$$
and such that
1. the pair $(C_1, A_{11})$ is observable, and
2. $\mathfrak{B}_{\mathrm{iso}}(A_{11}, B_1, C_1, D)_{\mathrm{ext}} = \mathfrak{B}_{\mathrm{iso}}(A, B, C, D)_{\mathrm{ext}}$.
Consequently, $\mathfrak{B}_{\mathrm{iso}}(A_{11}, B_1, C_1, D)$ is a minimal input-state-output representation of $\mathfrak{B}$.
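The observability test in the propositions above is easy to carry out numerically (the example matrices are ours):

```python
import numpy as np

def obs_matrix(C, A):
    """Stack C, CA, ..., CA^{n-1}; the pair (C, A) is observable
    iff this matrix has rank n."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# An unobservable pair: the mode at -2 never reaches the output,
# so a Kalman decomposition would discard it.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
C = np.array([[1.0, 0.0]])
rank = np.linalg.matrix_rank(obs_matrix(C, A))   # 1 < n = 2: not minimal
```

Replacing $C$ by $[1 \;\; 1]$ makes the pair observable, so the corresponding input-state-output representation would be minimal.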

1.6.2 Driving-variable representations

Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times v}$, $C \in \mathbb{R}^{w \times n}$, $D \in \mathbb{R}^{w \times v}$, and consider the equations
$$\frac{d}{dt} x = Ax + Bv, \quad w = Cx + Dv. \qquad (1.8)$$
These equations represent the full behavior
$$\mathfrak{B}_{\mathrm{DV}}(A, B, C, D) := \{ (w, x, v) \in C^\infty(\mathbb{R}, \mathbb{R}^w) \times C^\infty(\mathbb{R}, \mathbb{R}^n) \times C^\infty(\mathbb{R}, \mathbb{R}^v) \mid (1.8) \text{ holds} \}.$$
If we interpret $w$ as the manifest variable and $(x, v)$ as the latent variable, then $\mathfrak{B}_{\mathrm{DV}}(A, B, C, D)$ is a latent variable representation of its external behavior
$$\mathfrak{B}_{\mathrm{DV}}(A, B, C, D)_{\mathrm{ext}} = \{ w \in C^\infty(\mathbb{R}, \mathbb{R}^w) \mid \exists\, x \in C^\infty(\mathbb{R}, \mathbb{R}^n),\, v \in C^\infty(\mathbb{R}, \mathbb{R}^v) \text{ such that } (w, x, v) \in \mathfrak{B}_{\mathrm{DV}}(A, B, C, D) \}.$$
The variable $x$ is in fact a state variable; the free variable $v$ is called the driving variable. If $\mathfrak{B} = \mathfrak{B}_{\mathrm{DV}}(A, B, C, D)_{\mathrm{ext}}$, then we call $\mathfrak{B}_{\mathrm{DV}}(A, B, C, D)$ a driving-variable representation of $\mathfrak{B}$. A driving-variable representation $\mathfrak{B}_{\mathrm{DV}}(A, B, C, D)$ of $\mathfrak{B}$ is called minimal if the state dimension $n$ and the driving-variable dimension $v$ are minimal over all such driving-variable representations. The following result holds.

Proposition. Let $\mathfrak{B} \in \mathcal{L}^w$ be given. Denote $n = n(\mathfrak{B})$ and $m = m(\mathfrak{B})$. Then:
1. there exist matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{w \times n}$, $D \in \mathbb{R}^{w \times m}$ such that $\mathfrak{B}_{\mathrm{DV}}(A, B, C, D)$ is a minimal driving-variable representation of $\mathfrak{B}$;
2. if $\mathfrak{B}_{\mathrm{DV}}(A, B, C, D)$ represents $\mathfrak{B}$, then it is a minimal representation if and only if $D$ is injective and the pair $(C + DF, A + BF)$ is observable for all $F$;
3. if $\mathfrak{B}_{\mathrm{DV}}(A, B, C, D)$ is a minimal representation of $\mathfrak{B}$, then $\mathfrak{B}_{\mathrm{DV}}(A', B', C', D')$ is a minimal representation of $\mathfrak{B}$ if and only if there exist invertible matrices $S$ and $R$ and a matrix $F$ such that $(A', B', C', D') = (S^{-1}(A + BF)S, \, S^{-1} B R, \, (C + DF)S, \, DR)$.
Proof. A proof of 1 is given in [59], Section 3. For a proof of 2, see [59], Corollary 4.2. For 3 we refer to [77], Theorem 7.2 (see also Remark 8.3 in this thesis).

The next proposition states that in order to compute a minimal driving variable representation from a given one, we can use state feedback.

Proposition. Let B ∈ L^w and let B_DV(A,B,C,D) be a driving variable representation of B, with D injective. Define F := -(D^T D)^{-1} D^T C. Then there is a nonsingular matrix S such that

  S^{-1}(A+BF)S = [A11 0; A21 A22],  S^{-1}B = [B1; B2],  (C+DF)S = [C1 0],

such that
1. the pair (C1 + D F~, A11 + B1 F~) is observable for all F~,
2. B_DV(A11,B1,C1,D)_ext = B_DV(A,B,C,D)_ext.

Consequently, B_DV(A11,B1,C1,D) is a minimal driving variable representation of B.

Proof. Let V* be the weakly unobservable subspace of (A,B,C,D) (see [64], Section 7.3). By [64], Exercise 7.5, V* is equal to the unobservable subspace of the pair (C+DF, A+BF), with F = -(D^T D)^{-1} D^T C. With respect to a basis adapted to V*, the matrices A+BF, C+DF and B are partitioned as claimed above. By construction, the weakly unobservable subspace of (A11,B1,C1,D) is zero and therefore, by [64], Theorem 7.16, statement 1. of the proposition holds. In order to prove that B_DV(A11,B1,C1,D)_ext = B_DV(A,B,C,D)_ext, observe that since coordinate transformations and state feedback do not change the external behavior, we have

  B_DV(S^{-1}(A+BF)S, S^{-1}B, (C+DF)S, D)_ext = B_DV(A,B,C,D)_ext.

We now prove that B_DV(S^{-1}(A+BF)S, S^{-1}B, (C+DF)S, D)_ext = B_DV(A11,B1,C1,D)_ext. The inclusion ⊇ follows immediately. In order to prove the converse inclusion, let w ∈ B_DV(A11,B1,C1,D)_ext. Then there exist x1, v such that

  d/dt x1 = A11 x1 + B1 v,   w = C1 x1 + D v.

Now let x2 be any solution of d/dt x2 = A21 x1 + A22 x2 + B2 v. Then (w,(x1,x2),v) proves that w ∈ B_DV(S^{-1}(A+BF)S, S^{-1}B, (C+DF)S, D)_ext, so statement 2. of the proposition holds. Finally, the minimality of B_DV(A11,B1,C1,D) as a representation of B follows from the fact that D is injective and from statement 1.
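Numerically, the construction in the proposition amounts to: form F = -(D^T D)^{-1} D^T C, compute the unobservable subspace of (C+DF, A+BF) (which, by the proposition, equals the weakly unobservable subspace V*), and restrict the data to a complement of V*. A Python/NumPy sketch under these assumptions (the helper names are ours, not the thesis's):

```python
import numpy as np
from scipy.linalg import null_space

def minimal_dv(A, B, C, D):
    """Reduce a DV representation with injective D towards a minimal one.

    With F = -(D'D)^{-1}D'C, the weakly unobservable subspace V* equals the
    unobservable subspace of (C+DF, A+BF); a basis adapted to V* puts the data
    in the block-triangular form of the proposition, and the observable block
    (A11, B1, C1, D) is kept."""
    n = A.shape[0]
    F = -np.linalg.solve(D.T @ D, D.T @ C)
    Af, Cf = A + B @ F, C + D @ F
    # observability matrix of (Cf, Af); its kernel is V*
    O = np.vstack([Cf @ np.linalg.matrix_power(Af, k) for k in range(n)])
    V = null_space(O)                      # orthonormal basis of V*
    if V.shape[1] == 0:
        return A, B, C, D                  # representation is already minimal
    W = null_space(V.T)                    # orthonormal complement of V*
    S = np.hstack([W, V])                  # basis adapted to V* (V* last)
    Si = np.linalg.inv(S)
    r = W.shape[1]
    A11 = (Si @ Af @ S)[:r, :r]
    B1 = (Si @ B)[:r, :]
    C1 = (Cf @ S)[:, :r]
    return A11, B1, C1, D

# Example with one weakly unobservable state direction (the span of e2)
A = np.array([[-1.0, 0.0], [1.0, -3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0], [0.0, 0.0]])
D = np.array([[0.0], [1.0]])
A11, B1, C1, Dm = minimal_dv(A, B, C, D)
print(A11.shape)   # (1, 1): one state variable suffices
```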
In this thesis, in the context of dissipative systems, we mostly work with controllable behaviors, and with the controllable part of a behavior. We now examine under what conditions a behavior represented in driving variable form is controllable.
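Numerically, both the controllability test and the extraction of the controllable part, which the propositions below formalize for driving variable representations, reduce to a basis of the reachable subspace im[B AB ... A^{n-1}B]. A NumPy sketch (illustrative only; the helper names are ours):

```python
import numpy as np
from scipy.linalg import orth, null_space

def controllable_part_dv(A, B, C, D):
    """Restrict a DV representation to the reachable subspace.

    With S = [W V], W a basis of R = im[B AB ... A^(n-1)B] and V a complement,
    one gets S^{-1}AS = [A11 A12; 0 A22] and S^{-1}B = [B1; 0]; the block
    (A11, B1, C1, D) then represents the controllable part of the behavior."""
    n = A.shape[0]
    R = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    W = orth(R)                            # basis of the reachable subspace
    r = W.shape[1]
    if r == n:
        return A, B, C, D                  # (A, B) is already controllable
    V = null_space(W.T)                    # complement of the reachable subspace
    S = np.hstack([W, V])
    Si = np.linalg.inv(S)
    return (Si @ A @ S)[:r, :r], (Si @ B)[:r, :], (C @ S)[:, :r], D

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])               # only the first state is reachable
C = np.array([[1.0, 1.0]])
D = np.array([[1.0]])
A11, B1, C1, _ = controllable_part_dv(A, B, C, D)
```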

Proposition. Let B ∈ L^w be given. Then the following statements are equivalent:
1. B is controllable,
2. there exist matrices A, B, C and D such that B = B_DV(A,B,C,D)_ext with (A,B) controllable,
3. for every minimal representation B = B_DV(A,B,C,D)_ext, the pair (A,B) is controllable.

Proof. See [71].

Now let B ∈ L^w, possibly non-controllable, and let B_DV(A,B,C,D) be a driving variable representation. The following result shows how to compute a driving variable representation of the controllable part of B.

Proposition. Let B ∈ L^w and let B_DV(A,B,C,D) be a driving variable representation of B. Then there exists a nonsingular matrix S such that
1. S^{-1}AS = [Ā11 Ā12; 0 Ā22],  S^{-1}B = [B̄1; 0],  CS = [C̄1 C̄2],
2. (Ā11, B̄1) is controllable.
Then B_DV(Ā11, B̄1, C̄1, D) is a driving variable representation of the controllable part B_cont of B.

Proof. First, because of the preceding proposition the full behavior B_DV(Ā11, B̄1, C̄1, D) is controllable. Define

  B_0 := {(w,(x1,0),v) | (w,x1,v) ∈ B_DV(Ā11, B̄1, C̄1, D)}.

Then B_0 is also controllable. Also we have B_0 ⊆ B_DV(S^{-1}AS, S^{-1}B, CS, D), and the input cardinalities of these two behaviors coincide. By [3], their controllable parts then coincide, so we have B_0 = B_DV(S^{-1}AS, S^{-1}B, CS, D)_cont. Finally, the two operations of taking the controllable part and taking the external behavior commute (see [3]). Thus we obtain

  B_DV(Ā11, B̄1, C̄1, D)_ext = (B_0)_ext = (B_DV(S^{-1}AS, S^{-1}B, CS, D)_cont)_ext
    = (B_DV(S^{-1}AS, S^{-1}B, CS, D)_ext)_cont = B_cont.

Output-nulling representations

Output-nulling representations are defined as follows. Let A ∈ R^{n×n}, B ∈ R^{n×w}, C ∈ R^{p×n}, D ∈ R^{p×w}, and consider the equations

  d/dt x = Ax + Bw,   0 = Cx + Dw.   (1.9)

These equations represent the full behavior

  B_ON(A,B,C,D) := {(w,x) ∈ C^∞(R,R^w) × C^∞(R,R^n) | (1.9) holds}.

Again, if we interpret w as manifest variable and x as latent variable, then B_ON(A,B,C,D) is a latent variable representation of its external behavior

  B_ON(A,B,C,D)_ext = {w ∈ C^∞(R,R^w) | there exists x ∈ C^∞(R,R^n) such that (w,x) ∈ B_ON(A,B,C,D)}.

Also here, the variable x is a state variable. If B = B_ON(A,B,C,D)_ext then we call B_ON(A,B,C,D) an output nulling representation of B. B_ON(A,B,C,D) is called a minimal output nulling representation if n and p are minimal over all output nulling representations of B. Again, the following is well known:

Proposition. Let B ∈ L^w be given. Denote n = n(B) and p = p(B). Then
1. there exist matrices A ∈ R^{n×n}, B ∈ R^{n×w}, C ∈ R^{p×n}, D ∈ R^{p×w} such that B_ON(A,B,C,D) is a minimal output nulling representation of B,
2. if B_ON(A,B,C,D) represents B, then it is a minimal representation if and only if D is surjective and (C,A) is observable,
3. if B_ON(A,B,C,D) is a minimal representation of B, then B_ON(A',B',C',D') is a minimal representation of B if and only if there exist invertible matrices S and R and a matrix J such that (A',B',C',D') = (S^{-1}(A+JC)S, S^{-1}(B+JD), RCS, RD).

Proof. See [71].

The next proposition shows how to compute a minimal output nulling representation of B from a given one.

Proposition. Let B ∈ L^w and let B_ON(A,B,C,D) be an output nulling representation of B with D surjective. Then there exists a nonsingular matrix S such that

1. S^{-1}AS = [A11 0; A21 A22],  S^{-1}B = [B1; B2],  CS = [C1 0],
2. the pair (C1,A11) is observable,
3. B_ON(A11,B1,C1,D)_ext = B_ON(A,B,C,D)_ext.

Consequently, B_ON(A11,B1,C1,D) is a minimal output nulling representation of B.

Proof. The existence of a nonsingular transformation matrix S such that conditions 1. and 2. hold follows from a standard argument. In order to prove that B_ON(A11,B1,C1,D)_ext = B_ON(A,B,C,D)_ext, observe first that the transformation S does not change the external behavior. We now prove that B_ON(S^{-1}AS, S^{-1}B, CS, D)_ext = B_ON(A11,B1,C1,D)_ext. The inclusion ⊇ follows immediately from the equations. In order to prove the converse inclusion, let w ∈ B_ON(A11,B1,C1,D)_ext. Then there exists x1 such that

  d/dt x1 = A11 x1 + B1 w,   0 = C1 x1 + D w.

With this x1 and w, let x2 be any solution of d/dt x2 = A21 x1 + A22 x2 + B2 w. The claim follows.

As noted before, in the context of dissipative systems we work with controllable behaviors, and with the controllable part of a behavior. We now examine under what conditions a behavior represented in output-nulling form is controllable.

Proposition. Let B ∈ L^w be given. Then the following statements are equivalent:
1. B is controllable,
2. there exist matrices A, B, C and D such that B = B_ON(A,B,C,D)_ext with (A+JC, B+JD) controllable for all J,
3. for every minimal representation B = B_ON(A,B,C,D)_ext, the pair (A+JC, B+JD) is controllable for all J.

Proof. See [71].

Next, we show how to compute an output nulling representation of the controllable part of B from a given output nulling representation.

Proposition. Let B = B_ON(A,B,C,D)_ext, with D surjective. Define the output injection G := -BD^T(DD^T)^{-1}. Then there exists a nonsingular matrix S such that
1. S^{-1}(A+GC)S = [Ā11 Ā12; 0 Ā22],  S^{-1}(B+GD) = [B̄1; 0],  CS = [C̄1 C̄2],
2. (Ā11 + G~C̄1, B̄1 + G~D) is controllable for all real matrices G~.

Furthermore, B_ON(Ā11, B̄1, C̄1, D) is an output nulling representation of the controllable part B_cont of B.

Proof. A proof of this can be given using the notion of strongly reachable subspace (see [64], Section 8.3), combined with ideas similar to those in the proof of the corresponding proposition for driving variable representations.

Relations between DV and ON representations

Driving variable and output nulling representations of the same behavior enjoy certain duality properties. These will be examined in this section. The first result we prove explains how an output nulling representation can be obtained from a driving-variable representation of the same behavior, and the other way around. In order to state it, we need to introduce the notion of annihilator of a matrix, defined as follows. Let D be a p × m matrix of full column rank; then D^⊥ denotes any full row rank (p-m) × p matrix such that D^⊥ D = 0. If D is a p × m matrix of full row rank, then D^⊥ denotes any m × (m-p) full column rank matrix such that D D^⊥ = 0.

Proposition. Let B ∈ L^w_cont and let Σ = Σ^T ∈ R^{w×w} be nonsingular.
1. Let B_DV(A,B,C,D) be a minimal driving variable representation of B such that
   a. D^T Σ D = I,
   b. D^T Σ C = 0.
   Define Â := A, B̂ := BD^T Σ, Ĉ := -D^⊥ C, D̂ := D^⊥. Then B_ON(Â,B̂,Ĉ,D̂) is a minimal output nulling representation of B, and
   c. D̂ Σ^{-1} D̂^T = J, where J := block diag(I_{row(D̂)-q}, -I_q), q := σ_-(Σ), and row(D̂) is the number of rows of D̂,
   d. B̂ Σ^{-1} D̂^T = 0.

2. Assume that B_ON(Â,B̂,Ĉ,D̂) is a minimal output nulling representation of B such that conditions c and d of statement 1 hold. Define A := Â, B := B̂ D̂^⊥, C := -Σ^{-1} D̂^T J Ĉ, D := D̂^⊥, where the annihilator D̂^⊥ is chosen, as in the proof below, such that (D̂^⊥)^T Σ D̂^⊥ = I. Then B_DV(A,B,C,D) is a minimal driving variable representation of B satisfying conditions a and b of statement 1.

Before giving a proof of this proposition we need two lemmas. In the following, D^R denotes a right inverse of a full row rank matrix D, i.e. D D^R = I, and D^L denotes a left inverse of a full column rank matrix D, i.e. D^L D = I. The following result appears as a lemma in [41].

Lemma. Let B_DV(A,B,C,D) define a minimal driving variable representation of B. Then B_ON(Â,B̂,Ĉ,D̂) given by

  Â := A - B D^L C,  B̂ := B D^L,  Ĉ := -D^⊥ C,  D̂ := D^⊥

defines a minimal output nulling representation of B. Conversely, let B_ON(Â,B̂,Ĉ,D̂) define a minimal output nulling representation of B. Then B_DV(A,B,C,D) given by

  A := Â - B̂ D̂^R Ĉ,  B := B̂ D̂^⊥,  C := -D̂^R Ĉ,  D := D̂^⊥

defines a minimal driving variable representation of B.

We now prove the following inertia lemma.

Lemma. Let Σ = Σ^T ∈ R^{w×w} be nonsingular.
1. Let D be such that D^T Σ D = I. Then there exists a full row rank matrix D̂ such that D̂ = D^⊥ and D̂ Σ^{-1} D̂^T = block diag(I_{row(D̂)-q}, -I_q) =: J, where q is the number of negative eigenvalues of Σ.
2. Let D̂ be such that D̂ Σ^{-1} D̂^T = block diag(I_{row(D̂)-q}, -I_q) =: J, where q is the number of negative eigenvalues of Σ. Then there exists a full column rank matrix D such that D = D̂^⊥ and D^T Σ D = I.

Proof. 1. Let D̄ be a full row rank matrix such that D̄ = D^⊥. It follows from

  [D̄; D^T Σ] [D̄^T  D] = [D̄ D̄^T  D̄ D; D^T Σ D̄^T  D^T Σ D] = [D̄ D̄^T  0; D^T Σ D̄^T  I]

that [D̄; D^T Σ] is nonsingular. Moreover,

  [D̄; D^T Σ] Σ^{-1} [D̄^T  Σ D] = [D̄ Σ^{-1} D̄^T  0; 0  D^T Σ D] = [D̄ Σ^{-1} D̄^T  0; 0  I]   (1.10)

is then also nonsingular, and this implies that D̄ Σ^{-1} D̄^T is nonsingular as well. By Sylvester's law of inertia, the identity (1.10) implies that σ_-(D̄ Σ^{-1} D̄^T) = σ_-(Σ^{-1}) = q. Consequently, there exists a nonsingular matrix W such that D̄ Σ^{-1} D̄^T = W J W^T. Set D̂ := W^{-1} D̄. It follows that D̂ = D^⊥ and D̂ Σ^{-1} D̂^T = J.

2. Let D̄ be a full column rank matrix such that D̄ = D̂^⊥. It follows from

  [D̂; D̄^T] [Σ^{-1} D̂^T  D̄] = [D̂ Σ^{-1} D̂^T  D̂ D̄; D̄^T Σ^{-1} D̂^T  D̄^T D̄] = [J  0; D̄^T Σ^{-1} D̂^T  D̄^T D̄]

that [Σ^{-1} D̂^T  D̄] is nonsingular. This implies that

  [D̂ Σ^{-1}; D̄^T] Σ [Σ^{-1} D̂^T  D̄] = [D̂ Σ^{-1} D̂^T  0; 0  D̄^T Σ D̄] = [J  0; 0  D̄^T Σ D̄]   (1.11)

is nonsingular, and consequently also D̄^T Σ D̄. Observe that q = σ_-(Σ) equals the number of negative eigenvalues of the right hand side of (1.11). It follows that D̄^T Σ D̄ > 0, and that there exists a nonsingular matrix W such that D̄^T Σ D̄ = W^T W. Set D := D̄ W^{-1}. Then D = D̂^⊥ and D^T Σ D = I.

We now give a proof of the proposition.

Proof. 1. Use the inertia lemma to conclude that there exists D̂ such that D̂ = D^⊥ and D̂ Σ^{-1} D̂^T = J. Since D^T Σ D = I, we can choose D^L := D^T Σ. It follows from the first lemma that B_ON(Â,B̂,Ĉ,D̂) given by

  Â := A - B D^L C = A - B D^T Σ C = A,  B̂ := B D^L = B D^T Σ,  Ĉ := -D^⊥ C,  D̂ := D^⊥

is a minimal output nulling representation of B. It is straightforward to see that B_ON(Â,B̂,Ĉ,D̂) satisfies conditions c and d.

2. The inertia lemma implies that there exists D such that D = D̂^⊥ and D^T Σ D = I. The fact that D̂ Σ^{-1} D̂^T = J implies that we can choose D̂^R := Σ^{-1} D̂^T J. It follows from the first lemma that B_DV(A,B,C,D) given by

  A := Â - B̂ D̂^R Ĉ = Â - B̂ Σ^{-1} D̂^T J Ĉ = Â,  B := B̂ D̂^⊥,  C := -D̂^R Ĉ = -Σ^{-1} D̂^T J Ĉ,  D := D̂^⊥

is a minimal driving variable representation of B. It is straightforward to check that B_DV(A,B,C,D) satisfies conditions a and b.

To conclude this section, we recall how driving variable and output nulling representations of a behavior can be used in order to obtain representations for the orthogonal behavior.
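The conversion in statement 1 of the proposition is mechanical once an annihilator D^⊥ is available; numerically, a left annihilator can be taken from the null space of D^T. A NumPy sketch of the normalized case D^T Σ D = I, D^T Σ C = 0 (the helper names are ours, an illustration rather than the thesis's construction):

```python
import numpy as np
from scipy.linalg import null_space

def left_annihilator(D):
    """Full-row-rank D_perp with D_perp @ D = 0, for D of full column rank."""
    return null_space(D.T).T

def dv_to_on(A, B, C, D, Sigma):
    """DV -> ON conversion of the proposition, assuming D'ΣD = I and D'ΣC = 0:
    (Ahat, Bhat, Chat, Dhat) = (A - B D'Σ C, B D'Σ, -D_perp C, D_perp),
    where A - B D'Σ C = A under the normalization b."""
    Dperp = left_annihilator(D)
    DL = D.T @ Sigma                       # a left inverse of D, since D'ΣD = I
    return A - B @ DL @ C, B @ DL, -Dperp @ C, Dperp

# Normalized example: Σ = I, D = e1, first row of C zero (so D'ΣC = 0)
Sigma = np.eye(2)
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[0.0, 0.0], [1.0, 1.0]])
D = np.array([[1.0], [0.0]])
Ahat, Bhat, Chat, Dhat = dv_to_on(A, B, C, D, Sigma)
```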

Proposition. Let B ∈ L^w_cont and let Σ = Σ^T ∈ R^{w×w} be nonsingular. Then
1. if B_DV(A,B,C,D) is a minimal driving variable representation of B, then B_ON(-A^T, C^T Σ, B^T, -D^T Σ) is a minimal output nulling representation of the orthogonal behavior B^⊥Σ,
2. if B_ON(A,B,C,D) is a minimal output nulling representation of B, then B_DV(-A^T, C^T, Σ^{-1} B^T, -Σ^{-1} D^T) is a minimal driving variable representation of B^⊥Σ.

Proof. See Section VI.A of [81].

Dissipativity in terms of DV representations

We first examine dissipativity for the case that the system is represented by a DV representation.

Proposition. Let B ∈ L^w_cont with minimal DV representation B_DV(A,B,C,D), and let Σ = Σ^T ∈ R^{w×w} be nonsingular. Assume D^T Σ D > 0. Then
1. B is Σ-dissipative if and only if there exists a real symmetric solution K = K^T ∈ R^{n×n} of the algebraic Riccati equation (ARE)

  A^T K + K A - C^T Σ C + (K B - C^T Σ D)(D^T Σ D)^{-1}(B^T K - D^T Σ C) = 0.   (1.12)

If this is the case, then there exist real symmetric solutions K^- and K^+ of the ARE (1.12) such that every real symmetric solution K satisfies K^- ≤ K ≤ K^+.
2. B is Σ-dissipative on R_- if and only if there exists a positive semidefinite solution K = K^T ∈ R^{n×n} of the ARE (1.12).
3. If m(B) = σ_+(Σ), then B is Σ-dissipative on R_- if and only if all solutions of the ARE (1.12) are positive definite, equivalently K^- > 0.

Proof. Statement 1. is proved in [79], and statements 2. and 3. are proved in [80], Theorem 6.4.
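The proposition only assumes D^T Σ D > 0. Such a representation can always be brought to the normalized form D^T Σ D = I, D^T Σ C = 0 used in the next subsection, by state feedback and a rescaling of the driving variable; this is the content of the lemma below. A NumPy sketch of that normalization (helper name ours):

```python
import numpy as np

def normalize_dv(A, B, C, D, Sigma):
    """Feedback + driving-variable rescaling bringing a DV representation with
    D'ΣD > 0 to the normalized form D'ΣD = I, D'ΣC = 0:
    v_old = F x + W^{-1} v_new, with F = -(D'ΣD)^{-1} D'ΣC and D'ΣD = W'W."""
    R = D.T @ Sigma @ D                    # positive definite by assumption
    W = np.linalg.cholesky(R).T            # R = W' W  (W upper triangular)
    Wi = np.linalg.inv(W)
    F = -np.linalg.solve(R, D.T @ Sigma @ C)
    return A + B @ F, B @ Wi, C + D @ F, D @ Wi

Sigma = np.diag([2.0, 1.0])
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.eye(2)
D = np.array([[1.0], [1.0]])               # here D'ΣD = 3 > 0
An, Bn, Cn, Dn = normalize_dv(A, B, C, D, Sigma)
```

The external behavior is unchanged, since state feedback and an invertible rescaling of the driving variable preserve it.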

Strict dissipativity in terms of DV representations

In this subsection we examine strict dissipativity for the case that the system is represented by a DV representation. The connection between dissipativity, the algebraic Riccati equation (ARE), and the Hamiltonian matrix of the system is well known, see [79, 74, 75, 85]. In the following, we review this connection for the case of half-line dissipativity. First note the following:

Lemma. Let Σ = Σ^T ∈ R^{w×w}. Let B ∈ L^w_cont be strictly Σ-dissipative. Then there exists a minimal driving variable representation B_DV(A,B,C,D) of B with D^T Σ D = I and D^T Σ C = 0.

Proof. Let Â, B̂, Ĉ, D̂ be such that B_DV(Â,B̂,Ĉ,D̂) is a minimal DV representation of B. Then D̂ has full column rank (see Proposition 1.6.3). We now prove that D̂^T Σ D̂ > 0. Take the driving variable v(t) = δ(t)v_0, with δ the Dirac pulse. Then, with state trajectory x(t) = 0 for all t ∈ R, d/dt x = Âx + B̂v holds, so w(t) = D̂v(t). By strict dissipativity there exists ε > 0 such that, formally,

  v_0^T D̂^T Σ D̂ v_0 = ∫ v(t)^T D̂^T Σ D̂ v(t) dt = ∫ w(t)^T Σ w(t) dt ≥ ε ∫ w(t)^T w(t) dt = ε v_0^T D̂^T D̂ v_0.

Of course, a rigorous proof can be given using smooth approximations of δ. Since D̂ has full column rank, D̂^T D̂ > 0, and this proves the claim. Let W be a nonsingular matrix such that D̂^T Σ D̂ = W^T W. By applying the state feedback and rescaling transformation

  v̂ = -(D̂^T Σ D̂)^{-1} D̂^T Σ Ĉ x + W^{-1} v

to B_DV(Â,B̂,Ĉ,D̂) we obtain a new driving variable representation B_DV(A,B,C,D) of B, with

  A = Â - B̂ (D̂^T Σ D̂)^{-1} D̂^T Σ Ĉ,
  B = B̂ W^{-1},
  C = Ĉ - D̂ (D̂^T Σ D̂)^{-1} D̂^T Σ Ĉ,
  D = D̂ W^{-1}.

Observe that D is injective, and that from the minimality of B_DV(Â,B̂,Ĉ,D̂) and statement 2. of the minimality proposition for DV representations it follows that B_DV(A,B,C,D) is also a minimal representation of B. It is easy to see that D^T Σ D = I, and moreover

  D^T Σ C = W^{-T} D̂^T Σ (Ĉ - D̂ (D̂^T Σ D̂)^{-1} D̂^T Σ Ĉ) = 0.

This concludes the proof.

We then have the following:

Proposition. Let B ∈ L^w_cont, and let Σ = Σ^T ∈ R^{w×w}. Let B_DV(A,B,C,D) be a minimal driving variable representation of B such that D^T Σ D = I and D^T Σ C = 0. If B is strictly Σ-dissipative, then the Hamiltonian matrix

  H = [A  BB^T; C^T Σ C  -A^T]   (1.13)

has no eigenvalues on the imaginary axis. Furthermore, the following statements are equivalent:
1. B is strictly Σ-dissipative on R_- (R_+),
2. the ARE

  A^T K + K A - C^T Σ C + K B B^T K = 0   (1.14)

has a real symmetric solution K with K > 0 (K < 0) such that A + BB^T K is antistable (stable),
3. the Hamiltonian matrix (1.13) has no eigenvalues on the imaginary axis, and there exist X1, Y1 ∈ R^{n×n}, with X1 nonsingular, and M ∈ R^{n×n} antistable (stable) such that

  H [X1; Y1] = [X1; Y1] M,

with X1^T Y1 > 0 (X1^T Y1 < 0).

If K satisfies the conditions in 2. above then it is unique, and it is the largest (smallest) real symmetric solution of (1.14). We denote it by K^+ (K^-). If X1, Y1 satisfy the conditions in 3. above, then Y1 X1^{-1} is equal to this largest (smallest) real symmetric solution K^+ (K^-) of the ARE (1.14).

Proof. Assume that H has an eigenvalue iω, with eigenvector (x1, x2). Then

  A x1 + BB^T x2 = iω x1   and   C^T Σ C x1 - A^T x2 = iω x2.

We will first prove that the vector

  w_0 := C x1 + D B^T x2   (1.15)
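The invariant-subspace characterization in statement 3 also suggests a computation: order a real Schur form of H so that the antistable eigenvalues come first; then K^+ = Y1 X1^{-1}. A NumPy/SciPy sketch on a small normalized example (an illustration of the criterion, not an algorithm from the thesis; the helper name is ours):

```python
import numpy as np
from scipy.linalg import schur

def are_solution_from_hamiltonian(A, B, C, Sigma):
    """Largest solution K+ of A'K + KA - C'ΣC + K BB' K = 0 (eq. (1.14)),
    computed from the antistable invariant subspace of
    H = [A BB'; C'ΣC -A'] as K+ = Y1 X1^{-1}."""
    n = A.shape[0]
    H = np.block([[A, B @ B.T], [C.T @ Sigma @ C, -A.T]])
    eigs = np.linalg.eigvals(H)
    assert np.min(np.abs(eigs.real)) > 1e-9, "eigenvalues on the imaginary axis"
    # real Schur form with the right-half-plane (antistable) block leading
    T, Z, sdim = schur(H, output='real', sort='rhp')
    assert sdim == n                       # exactly n antistable eigenvalues
    X1, Y1 = Z[:n, :n], Z[n:, :n]
    return Y1 @ np.linalg.inv(X1)

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.5]])
C = np.eye(2)
Sigma = np.eye(2)
K = are_solution_from_hamiltonian(A, B, C, Sigma)
residual = A.T @ K + K @ A - C.T @ Sigma @ C + K @ B @ B.T @ K
```

Since H is a Hamiltonian matrix, the n-dimensional antistable invariant subspace (when it exists) yields a symmetric K, and the residual of (1.14) should vanish up to rounding.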


More information

Stabilization and Passivity-Based Control

Stabilization and Passivity-Based Control DISC Systems and Control Theory of Nonlinear Systems, 2010 1 Stabilization and Passivity-Based Control Lecture 8 Nonlinear Dynamical Control Systems, Chapter 10, plus handout from R. Sepulchre, Constructive

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

(1)(a) V = 2n-dimensional vector space over a field F, (1)(b) B = non-degenerate symplectic form on V.

(1)(a) V = 2n-dimensional vector space over a field F, (1)(b) B = non-degenerate symplectic form on V. 18.704 Supplementary Notes March 21, 2005 Maximal parabolic subgroups of symplectic groups These notes are intended as an outline for a long presentation to be made early in April. They describe certain

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

Robust Control 2 Controllability, Observability & Transfer Functions

Robust Control 2 Controllability, Observability & Transfer Functions Robust Control 2 Controllability, Observability & Transfer Functions Harry G. Kwatny Department of Mechanical Engineering & Mechanics Drexel University /26/24 Outline Reachable Controllability Distinguishable

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Port-Hamiltonian Realizations of Linear Time Invariant Systems

Port-Hamiltonian Realizations of Linear Time Invariant Systems Port-Hamiltonian Realizations of Linear Time Invariant Systems Christopher Beattie and Volker Mehrmann and Hongguo Xu December 16 215 Abstract The question when a general linear time invariant control

More information

On some interpolation problems

On some interpolation problems On some interpolation problems A. Gombani Gy. Michaletzky LADSEB-CNR Eötvös Loránd University Corso Stati Uniti 4 H-1111 Pázmány Péter sétány 1/C, 35127 Padova, Italy Computer and Automation Institute

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists 3,500 08,000.7 M Open access books available International authors and editors Downloads Our authors

More information

NOTES ON LINEAR ODES

NOTES ON LINEAR ODES NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary

More information

Lecture 5. Ch. 5, Norms for vectors and matrices. Norms for vectors and matrices Why?

Lecture 5. Ch. 5, Norms for vectors and matrices. Norms for vectors and matrices Why? KTH ROYAL INSTITUTE OF TECHNOLOGY Norms for vectors and matrices Why? Lecture 5 Ch. 5, Norms for vectors and matrices Emil Björnson/Magnus Jansson/Mats Bengtsson April 27, 2016 Problem: Measure size of

More information

University of Groningen. Statistical inference via fiducial methods Salomé, Diemer

University of Groningen. Statistical inference via fiducial methods Salomé, Diemer University of Groningen Statistical inference via fiducial methods Salomé, Diemer IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please

More information

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications MAX PLANCK INSTITUTE Elgersburg Workshop Elgersburg February 11-14, 2013 The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications Timo Reis 1 Matthias Voigt 2 1 Department

More information

Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013

Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013 Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013 Abstract As in optimal control theory, linear quadratic (LQ) differential games (DG) can be solved, even in high dimension,

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

Citation for published version (APA): Sok, R. M. (1994). Permeation of small molecules across a polymer membrane: a computer simulation study s.n.

Citation for published version (APA): Sok, R. M. (1994). Permeation of small molecules across a polymer membrane: a computer simulation study s.n. University of Groningen Permeation of small molecules across a polymer membrane Sok, Robert Martin IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Spectral Theory, with an Introduction to Operator Means. William L. Green

Spectral Theory, with an Introduction to Operator Means. William L. Green Spectral Theory, with an Introduction to Operator Means William L. Green January 30, 2008 Contents Introduction............................... 1 Hilbert Space.............................. 4 Linear Maps

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

Principal Component Analysis

Principal Component Analysis Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

3118 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 57, NO. 12, DECEMBER 2012

3118 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 57, NO. 12, DECEMBER 2012 3118 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 57, NO 12, DECEMBER 2012 ASimultaneousBalancedTruncationApproachto Model Reduction of Switched Linear Systems Nima Monshizadeh, Harry L Trentelman, SeniorMember,IEEE,

More information

REPRESENTATION THEORY WEEK 7

REPRESENTATION THEORY WEEK 7 REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable

More information

Chapter 2: Linear Independence and Bases

Chapter 2: Linear Independence and Bases MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space

More information

Algebra I Fall 2007

Algebra I Fall 2007 MIT OpenCourseWare http://ocw.mit.edu 18.701 Algebra I Fall 007 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 18.701 007 Geometry of the Special Unitary

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

LINEAR-QUADRATIC OPTIMAL CONTROL OF DIFFERENTIAL-ALGEBRAIC SYSTEMS: THE INFINITE TIME HORIZON PROBLEM WITH ZERO TERMINAL STATE

LINEAR-QUADRATIC OPTIMAL CONTROL OF DIFFERENTIAL-ALGEBRAIC SYSTEMS: THE INFINITE TIME HORIZON PROBLEM WITH ZERO TERMINAL STATE LINEAR-QUADRATIC OPTIMAL CONTROL OF DIFFERENTIAL-ALGEBRAIC SYSTEMS: THE INFINITE TIME HORIZON PROBLEM WITH ZERO TERMINAL STATE TIMO REIS AND MATTHIAS VOIGT, Abstract. In this work we revisit the linear-quadratic

More information

Chapter III. Stability of Linear Systems

Chapter III. Stability of Linear Systems 1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,

More information

LINEAR TIME-INVARIANT DIFFERENTIAL SYSTEMS

LINEAR TIME-INVARIANT DIFFERENTIAL SYSTEMS p. 1/83 Elgersburg Lectures March 2010 Lecture II LINEAR TIME-INVARIANT DIFFERENTIAL SYSTEMS p. 2/83 Theme Linear time-invariant differential systems are important for several reasons. They occur often

More information

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as Reading [SB], Ch. 16.1-16.3, p. 375-393 1 Quadratic Forms A quadratic function f : R R has the form f(x) = a x. Generalization of this notion to two variables is the quadratic form Q(x 1, x ) = a 11 x

More information

LQR and H 2 control problems

LQR and H 2 control problems LQR and H 2 control problems Domenico Prattichizzo DII, University of Siena, Italy MTNS - July 5-9, 2010 D. Prattichizzo (Siena, Italy) MTNS - July 5-9, 2010 1 / 23 Geometric Control Theory for Linear

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Linear Quadratic Zero-Sum Two-Person Differential Games

Linear Quadratic Zero-Sum Two-Person Differential Games Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard To cite this version: Pierre Bernhard. Linear Quadratic Zero-Sum Two-Person Differential Games. Encyclopaedia of Systems and Control,

More information

From integration by parts to state and boundary variables of linear differential and partial differential systems

From integration by parts to state and boundary variables of linear differential and partial differential systems A van der Schaft et al Festschrift in Honor of Uwe Helmke From integration by parts to state and boundary variables of linear differential and partial differential systems Arjan J van der Schaft Johann

More information

Classification of root systems

Classification of root systems Classification of root systems September 8, 2017 1 Introduction These notes are an approximate outline of some of the material to be covered on Thursday, April 9; Tuesday, April 14; and Thursday, April

More information

Linear System Theory

Linear System Theory Linear System Theory Wonhee Kim Chapter 6: Controllability & Observability Chapter 7: Minimal Realizations May 2, 217 1 / 31 Recap State space equation Linear Algebra Solutions of LTI and LTV system Stability

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Exercise Sheet 1.

Exercise Sheet 1. Exercise Sheet 1 You can download my lecture and exercise sheets at the address http://sami.hust.edu.vn/giang-vien/?name=huynt 1) Let A, B be sets. What does the statement "A is not a subset of B " mean?

More information

Tutorials in Optimization. Richard Socher

Tutorials in Optimization. Richard Socher Tutorials in Optimization Richard Socher July 20, 2008 CONTENTS 1 Contents 1 Linear Algebra: Bilinear Form - A Simple Optimization Problem 2 1.1 Definitions........................................ 2 1.2

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

The norms can also be characterized in terms of Riccati inequalities.

The norms can also be characterized in terms of Riccati inequalities. 9 Analysis of stability and H norms Consider the causal, linear, time-invariant system ẋ(t = Ax(t + Bu(t y(t = Cx(t Denote the transfer function G(s := C (si A 1 B. Theorem 85 The following statements

More information

Citation for published version (APA): Kootstra, F. (2001). Time-dependent density functional theory for periodic systems s.n.

Citation for published version (APA): Kootstra, F. (2001). Time-dependent density functional theory for periodic systems s.n. University of Groningen Time-dependent density functional theory for periodic systems Kootstra, Freddie IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

(VII.B) Bilinear Forms

(VII.B) Bilinear Forms (VII.B) Bilinear Forms There are two standard generalizations of the dot product on R n to arbitrary vector spaces. The idea is to have a product that takes two vectors to a scalar. Bilinear forms are

More information

Linear Passive Stationary Scattering Systems with Pontryagin State Spaces

Linear Passive Stationary Scattering Systems with Pontryagin State Spaces mn header will be provided by the publisher Linear Passive Stationary Scattering Systems with Pontryagin State Spaces D. Z. Arov 1, J. Rovnyak 2, and S. M. Saprikin 1 1 Department of Physics and Mathematics,

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information