Frequency domain representation and singular value decomposition


A.C. Antoulas
Department of Electrical and Computer Engineering, Rice University, Houston, Texas, USA

June 12, 2001

Abstract

This contribution reviews the external and the internal representations of linear time-invariant systems. This is done both in the time and in the frequency domain. The realization problem is then discussed. Given the importance of norms in robust control and model reduction, the final part of this contribution is dedicated to the definition and computation of various norms; again, the interplay between time and frequency domain norms is emphasized.

Key words: linear systems, internal representation, external representation, Laplace transform, z-transform, vector norms, matrix norms, Singular Value Decomposition, convolution operator, Hankel operator, reachability and observability gramians.

This work was supported in part by the NSF through Grants DMS and CCR.

Contents

1 Introduction
2 Preliminaries
  2.1 Norms of vectors, matrices and the SVD
    2.1.1 Norms of finite-dimensional vectors and matrices
    2.1.2 The singular value decomposition
    2.1.3 The Lebesgue spaces $\ell_p$ and $L_p$
    2.1.4 The Hardy spaces $h_p$ and $H_p$
    2.1.5 The Hilbert spaces $\ell_2$ and $L_2$
  2.2 The Laplace transform and the z-transform
    2.2.1 Some properties of the Laplace transform
    2.2.2 Some properties of the z-transform
3 The external and the internal representation of linear systems
  3.1 External representation
  3.2 Internal representation
    3.2.1 Solution in the time domain
    3.2.2 Solution in the frequency domain
    3.2.3 The concepts of reachability and observability
    3.2.4 The infinite gramians
  3.3 The realization problem
    3.3.1 The solution of the realization problem
    3.3.2 Realization of proper rational matrix functions
    3.3.3 The partial realization problem
4 Time and frequency domain interpretation of various norms
  4.1 The convolution operator and the Hankel operator
    4.1.1 Computation of the singular values of $\mathcal{S}$
    4.1.2 Computation of the singular values of $\mathcal{H}$
  4.2 Computation of various norms
    4.2.1 The $H_2$ norm
    4.2.2 The $H_\infty$ norm
    4.2.3 The Hilbert-Schmidt norm
    4.2.4 Summary of norms
5 Appendix: Glossary

List of Tables

1 Basic Laplace transform properties
2 Basic z-transform properties
3 I/O and I/S/O representation of continuous-time linear systems
4 I/O and I/S/O representation of discrete-time linear systems
5 Norms of linear systems and their relationships

1 Introduction

One of the most powerful tools in the analysis and synthesis of linear time-invariant systems is the equivalence between the time domain and the frequency domain. Additional insight into problems in this area is thus obtained by viewing them both in time and in frequency. This dual nature accounts for the presence and great success of linear systems in both engineering theory and applications.

In this contribution we provide an overview of certain results concerning the analysis of linear dynamical systems. Time and frequency domain frameworks are inextricably connected; therefore, together with the frequency domain considerations in the sequel, a good deal of time domain considerations are unavoidably included as well.

Our goals are as follows. First, basic system representations are introduced, both in time and in frequency. Then the ensuing realization problem is formulated and solved; roughly speaking, the realization problem entails the construction of a state space model from frequency response data. The second goal is to introduce various norms for linear systems. This is of great importance both in robust control and in system approximation/model reduction; for details see e.g. [14, 31, 7, 24, 6, 4]. First it is shown that besides the convolution operator we need to attach a second operator to every linear system, namely the Hankel operator. The main attribute of this operator is that it has a discrete set of singular values, known as the Hankel singular values. These singular values are main ingredients of numerous computations involving robust control and model reduction of linear systems. Besides the Hankel norm, we discuss various $p$-norms, $1 \le p \le \infty$. It turns out that the norms obtained for $p = 2$ and $p = \infty$ have both a time domain and a frequency domain interpretation; the rest have an interpretation in the time domain only.

The contribution is organized as follows. The next section is dedicated to a collection of useful results on two topics: norms and the SVD on the one hand, and the Laplace and discrete-Laplace transforms on the other. Two tables, 1 and 2, summarize the salient properties of these two transforms. Section 3 develops the external and internal representations of linear systems. This is done both in the time and frequency domains, with the results summarized in two further tables, 3 and 4. This discussion is followed by the formulation and solution of the realization problem. The final section 4 is dedicated to the introduction of various norms for linear systems; the basic features of these norms are summarized in the fifth and last table, 5.

2 Preliminaries

2.1 Norms of vectors, matrices and the SVD

In this section we first review some material from linear algebra pertaining to norms of vectors and norms of operators (matrices), both in finite and infinite dimensions. The latter are of importance because a linear system can be viewed as a map between infinite-dimensional spaces. The Singular Value Decomposition (SVD) is also introduced and its properties briefly discussed. Textbooks pertaining to the material discussed in this section are [16, 18, 19, 21, 27].

2.1.1 Norms of finite-dimensional vectors and matrices

Let $X$ be a linear space over the field $K$, which is either the field of reals $\mathbb{R}$ or that of complex numbers $\mathbb{C}$. A norm on $X$ is a function $\nu : X \rightarrow \mathbb{R}$ such that the following three properties are satisfied. Strict positiveness: $\nu(x) \ge 0$ for all $x \in X$, with equality iff $x = 0$; triangle inequality: $\nu(x + y) \le \nu(x) + \nu(y)$ for all $x, y \in X$; positive homogeneity: $\nu(\alpha x) = |\alpha|\, \nu(x)$ for all $\alpha \in K$, $x \in X$.

For vectors $x \in \mathbb{R}^n$ or $x \in \mathbb{C}^n$ the Hölder or $p$-norms are defined as follows:

  $\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$, $1 \le p < \infty$, and $\|x\|_\infty = \max_{i = 1, \dots, n} |x_i|$,   (2.1)

where $x = (x_1, \dots, x_n)^\top$, $n \in \mathbb{N}$. The 2-norm satisfies the Cauchy-Schwarz inequality

  $|x^* y| \le \|x\|_2 \, \|y\|_2$,

with equality holding iff $y = \alpha x$ for some $\alpha \in K$.
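A minimal numerical sketch (using numpy; the random test vectors are arbitrary, not from the text) verifying the Cauchy-Schwarz inequality and, anticipating the comparison below, the ordering of the Hölder norms:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

    norm1 = np.sum(np.abs(x))                # ||x||_1
    norm2 = np.sqrt(np.sum(np.abs(x) ** 2))  # ||x||_2
    norminf = np.max(np.abs(x))              # ||x||_inf

    # Cauchy-Schwarz: |x* y| <= ||x||_2 ||y||_2 (np.vdot conjugates its first argument).
    assert abs(np.vdot(x, y)) <= norm2 * np.sqrt(np.sum(np.abs(y) ** 2))
    # Ordering of the Hölder norms: ||x||_inf <= ||x||_2 <= ||x||_1.
    assert norminf <= norm2 <= norm1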

An important property of the 2-norm is that it is invariant under unitary (orthogonal) transformations. Let $U$ be $n \times n$ with $U^* U = I_n$; it follows that $\|Ux\|_2^2 = x^* U^* U x = x^* x = \|x\|_2^2$. The following relationships between the Hölder norms for $p = 1, 2, \infty$ hold:

  $\|x\|_\infty \le \|x\|_2 \le \|x\|_1$.

[Figure 1: $A$ maps the unit sphere into an ellipsoid; the singular values $\sigma_i$ are the lengths of the semi-axes of the ellipsoid.]

One type of matrix norm consists of those which are induced by the vector $p$-norms defined above. More precisely, for $A \in K^{n \times m}$,

  $\|A\|_{p,q} = \sup_{x \ne 0} \frac{\|Ax\|_q}{\|x\|_p}$   (2.2)

is the induced $p,q$-norm of $A$. In particular, for $p = q = 1, 2, \infty$ the following expressions hold:

  $\|A\|_1 = \max_j \sum_i |a_{ij}|$ (largest absolute column sum),
  $\|A\|_\infty = \max_i \sum_j |a_{ij}|$ (largest absolute row sum),
  $\|A\|_2 = \left( \lambda_{\max}(A^* A) \right)^{1/2}$.

Besides the induced matrix norms there exist other norms. One such class is the Schatten $p$-norms of matrices; these non-induced norms are unitarily invariant. Let $\sigma_i(A)$, $i = 1, \dots, \min(m, n)$, be the singular values of $A$, i.e. the square roots of the eigenvalues of $A^* A$. Then

  $\|A\|_p = \left( \sum_i \sigma_i(A)^p \right)^{1/p}$.   (2.3)

It follows that the Schatten norm for $p = \infty$ is $\|A\|_\infty = \sigma_{\max}(A)$, which is the same as the 2-induced norm of $A$. For $p = 1$ we obtain the trace norm $\|A\|_1 = \sum_i \sigma_i(A)$.
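The induced and Schatten norms above can be cross-checked numerically; a hedged sketch (the matrix is an arbitrary example, not one from the text):

    import numpy as np

    A = np.array([[1.0, 2.0], [0.0, 3.0]])

    col_sum = np.max(np.abs(A).sum(axis=0))   # induced 1-norm: largest column sum
    row_sum = np.max(np.abs(A).sum(axis=1))   # induced inf-norm: largest row sum
    sigma = np.linalg.svd(A, compute_uv=False)  # singular values, decreasing

    two_norm = sigma[0]                       # induced 2-norm = sigma_max (Lemma 2.1 below)
    trace_norm = sigma.sum()                  # Schatten 1-norm
    frobenius = np.sqrt((sigma ** 2).sum())   # Schatten 2-norm, i.e. (2.4) below

    # Cross-check against numpy's built-in induced/Frobenius norms.
    assert np.isclose(col_sum, np.linalg.norm(A, 1))
    assert np.isclose(row_sum, np.linalg.norm(A, np.inf))
    assert np.isclose(two_norm, np.linalg.norm(A, 2))
    assert np.isclose(frobenius, np.linalg.norm(A, 'fro'))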

For $p = 2$ the resulting norm is also known as the Frobenius norm, the Schatten 2-norm, or the Hilbert-Schmidt norm of $A$:

  $\|A\|_F = \left( \sum_i \sigma_i^2(A) \right)^{1/2} = \left( \mathrm{tr}(A^* A) \right)^{1/2} = \left( \mathrm{tr}(A A^*) \right)^{1/2}$,   (2.4)

where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix.

2.1.2 The singular value decomposition

Given a matrix $A \in K^{n \times m}$, $n \le m$, let the nonnegative numbers $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$ be the positive square roots of the eigenvalues of $A A^*$. There exist unitary matrices $U \in K^{n \times n}$, $U U^* = I_n$, and $V \in K^{m \times m}$, $V V^* = I_m$, such that

  $A = U \Sigma V^*$, where $\Sigma = (\Sigma_1 \;\; 0) \in \mathbb{R}^{n \times m}$, $\Sigma_1 = \mathrm{diag}(\sigma_1, \dots, \sigma_n) \in \mathbb{R}^{n \times n}$.   (2.5)

The decomposition (2.5) is called the singular value decomposition (SVD) of the matrix $A$; the $\sigma_i$ are called the singular values of $A$, while the columns of $U = (u_1 \; u_2 \; \cdots \; u_n)$ and $V = (v_1 \; v_2 \; \cdots \; v_m)$ are called the left and right singular vectors of $A$, respectively. These singular vectors are the eigenvectors of $A A^*$ and $A^* A$, respectively. Thus

  $A v_i = \sigma_i u_i$, $i = 1, \dots, n$.

Example 2.1 Consider a $2 \times 2$ matrix $A$ with singular value decomposition $A = U \Sigma V^*$, $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2)$, $\sigma_1 > \sigma_2 > 0$. The eigenvalue decompositions of $A A^*$ and $A^* A$ are $A A^* = U \Sigma^2 U^*$ and $A^* A = V \Sigma^2 V^*$. Notice that $A$ maps the unit disc in the plane to the ellipse with semi-axes $\sigma_1$ and $\sigma_2$; more precisely, $A v_1 = \sigma_1 u_1$ and $A v_2 = \sigma_2 u_2$ (see Figure 1). It follows that $E = \sigma_2 u_2 v_2^*$ is a perturbation of smallest 2-norm (equal to $\sigma_2$) such that $A - E$ is singular.
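A hedged numerical sketch of the mechanics of Example 2.1 on an arbitrary stand-in matrix (the example's original entries are not reproduced here):

    import numpy as np

    A = np.array([[np.sqrt(2), 1.0], [0.0, np.sqrt(2)]])

    U, s, Vh = np.linalg.svd(A)       # A = U @ diag(s) @ Vh
    v1, v2 = Vh[0], Vh[1]             # right singular vectors (rows of Vh)
    u1, u2 = U[:, 0], U[:, 1]         # left singular vectors

    # A maps v_i to sigma_i * u_i: the semi-axes of the image ellipse.
    assert np.allclose(A @ v1, s[0] * u1)
    assert np.allclose(A @ v2, s[1] * u2)

    # Smallest 2-norm perturbation making A singular: E = sigma_2 u_2 v_2^T.
    E = s[1] * np.outer(u2, v2)
    assert np.isclose(np.linalg.norm(E, 2), s[1])
    assert np.isclose(np.linalg.det(A - E), 0.0)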

The singular values of $A$ are unique. The left and right singular vectors corresponding to singular values of multiplicity one are also uniquely determined (up to a sign). Thus the SVD is unique in case the matrix $A$ is square and the singular values have multiplicity one.

Lemma 2.1 The 2-induced norm of $A$ is equal to its largest singular value: $\|A\|_2 = \sigma_1$.

Proof. By definition, $\|A\|_2^2 = \sup_{x \ne 0} \frac{\|Ax\|_2^2}{\|x\|_2^2} = \sup_{x \ne 0} \frac{x^* A^* A x}{x^* x}$. Let $y = V^* x$, where $V$ is the matrix containing the eigenvectors of $A^* A$, i.e. $A^* A = V \Sigma^2 V^*$. Substituting in the above expression we obtain

  $\sup_{y \ne 0} \frac{\sigma_1^2 |y_1|^2 + \cdots + \sigma_n^2 |y_n|^2}{|y_1|^2 + \cdots + |y_n|^2} \le \sigma_1^2$.

This expression is maximized, and equals $\sigma_1^2$, for $y = e_1$, i.e. $x = v_1$, where $v_1$ is the first column of $V$. □

Theorem 2.1 Every matrix $A$ with entries in $K$ has a singular value decomposition.

Proof. The proof is based on the lemma above. Let $\sigma$ be the 2-norm of $A$; there exist unit-length vectors $x \in K^m$, $x^* x = 1$, and $y \in K^n$, $y^* y = 1$, such that $Ax = \sigma y$. Define unitary matrices $V = (x \; V_1)$ and $U = (y \; U_1)$ so that their first columns are $x$, $y$ respectively. It follows that

  $U^* A V = \begin{pmatrix} \sigma & w^* \\ 0 & B \end{pmatrix} =: A_1$, where $w \in K^{m-1}$,

and consequently

  $\left\| A_1 \begin{pmatrix} \sigma \\ w \end{pmatrix} \right\|_2^2 = (\sigma^2 + w^* w)^2 + \|Bw\|_2^2 \ge (\sigma^2 + w^* w)^2$.

Since the 2-norm is invariant under unitary transformations, $\|A_1\|_2 = \|A\|_2 = \sigma$, and therefore $\sigma^2 \ge \frac{(\sigma^2 + w^* w)^2}{\sigma^2 + w^* w} = \sigma^2 + w^* w$. The implication is that $w$ must be the zero vector, $w = 0$. Thus $U^* A V = \mathrm{diag}(\sigma, B)$, and the procedure is now repeated for $B$, which has size $(n-1) \times (m-1)$. □

Assume that in (2.5) $\sigma_r > 0$ while $\sigma_{r+1} = 0$; the matrices $U$, $\Sigma$, $V$ are partitioned in two blocks, the first having $r$ columns:

  $U = (U_1 \; U_2)$, $V = (V_1 \; V_2)$, $\Sigma = \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} \in \mathbb{R}^{n \times m}$, $\Sigma_r = \mathrm{diag}(\sigma_1, \dots, \sigma_r)$.   (2.6)

Corollary 2.1 Given (2.5) and (2.6), the following statements hold:

1. $\mathrm{rank}\, A = r$; $\mathrm{span\,col}\, A = \mathrm{span\,col}\, U_1$; $\ker A = \mathrm{span\,col}\, V_2$.
2. Dyadic decomposition: $A$ has a decomposition as a sum of $r$ matrices of rank one:

  $A = \sigma_1 u_1 v_1^* + \sigma_2 u_2 v_2^* + \cdots + \sigma_r u_r v_r^*$.   (2.7)

3. The orthogonal projection onto the span of the columns of $A$ is $U_1 U_1^*$.
4. The orthogonal projection onto the kernel of $A$ is $V_2 V_2^*$.
5. The orthogonal projection onto the orthogonal complement of the span of the columns of $A$ is $U_2 U_2^*$.
6. The orthogonal projection onto the orthogonal complement of the kernel of $A$ is $V_1 V_1^*$.
7. The Frobenius norm of $A$ is $\|A\|_F = \sqrt{\sigma_1^2 + \cdots + \sigma_r^2}$.

For symmetric matrices the SVD can be readily obtained from the EVD (Eigenvalue Decomposition). Let the latter be $A = V \Lambda V^*$. Define $S = \mathrm{diag}(\mathrm{sign}\,\lambda_1, \dots, \mathrm{sign}\,\lambda_n)$, where $\mathrm{sign}$ is the signum function; it equals $1$ if $\lambda > 0$, $-1$ if $\lambda < 0$, and $0$ if $\lambda = 0$. Then $A = U \Sigma V^*$, where $U = V S$ and $\Sigma = \mathrm{diag}(|\lambda_1|, \dots, |\lambda_n|)$.

2.1.3 The Lebesgue spaces $\ell_p$ and $L_p$

In this section we define the $p$-norms of infinite sequences and of functions. These are functions of one real variable, which in the context of system theory is taken to be time; consequently, these are time-domain spaces and norms.

Let $\ell^n(I) = \{ f : I \rightarrow K^n \}$ denote the set of sequences of vectors in $K^n$, where $K$ is either $\mathbb{R}$ or $\mathbb{C}$. Frequent choices of $I$ are $I = \mathbb{Z}$, $I = \mathbb{Z}_+$ or $I = \mathbb{Z}_-$. The $p$-norms of the elements of this space are defined as

  $\|f\|_p = \left( \sum_{t \in I} \|f(t)\|^p \right)^{1/p}$, $1 \le p < \infty$, and $\|f\|_\infty = \sup_{t \in I} \|f(t)\|$.

The corresponding $\ell_p$ spaces are

  $\ell_p^n(I) = \{ f \in \ell^n(I) : \|f\|_p < \infty \}$, $1 \le p \le \infty$.   (2.8)

For functions of a continuous variable, let $L^n(I) = \{ f : I \rightarrow K^n \}$, $I \subseteq \mathbb{R}$; frequent choices of $I$ are $I = \mathbb{R}$, $I = \mathbb{R}_+$ or $I = \mathbb{R}_-$. The $p$-norms are

  $\|f\|_p = \left( \int_{t \in I} \|f(t)\|^p \, dt \right)^{1/p}$, $1 \le p < \infty$, and $\|f\|_\infty = \sup_{t \in I} \|f(t)\|$,

and the corresponding $L_p$ spaces are

  $L_p^n(I) = \{ f \in L^n(I) : \|f\|_p < \infty \}$, $1 \le p \le \infty$.   (2.9)
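Before moving on to infinite-dimensional spaces, here is a hedged numerical illustration of Corollary 2.1 on an arbitrary rank-deficient matrix:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],   # multiple of row 1, so rank A = 2
                  [0.0, 1.0, 1.0]])

    U, s, Vh = np.linalg.svd(A)
    r = np.sum(s > 1e-10)             # numerical rank

    # Dyadic decomposition (2.7): sum of r rank-one terms.
    A_rebuilt = sum(s[i] * np.outer(U[:, i], Vh[i]) for i in range(r))
    assert np.allclose(A, A_rebuilt)

    U1, U2 = U[:, :r], U[:, r:]
    V1, V2 = Vh[:r].conj().T, Vh[r:].conj().T

    P_col = U1 @ U1.conj().T          # projection onto span col A
    P_ker = V2 @ V2.conj().T          # projection onto ker A
    assert np.allclose(P_col @ A, A)  # columns of A are fixed by P_col
    assert np.allclose(A @ P_ker, 0)  # ker A is annihilated by A
    assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s[:r] ** 2)))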

2.1.4 The Hardy spaces $h_p$ and $H_p$

In this section we consider norms of functions of one complex variable. In the system theoretic context this variable is taken to be the complex frequency, and the resulting spaces and norms are frequency-domain ones.

Let $\mathbb{D}$ denote the (open) unit disc, and let $F : \mathbb{D} \rightarrow \mathbb{C}^{q \times r}$ be a matrix-valued function, analytic in $\mathbb{D}$. Its $p$-norm is defined as follows:

  $\|F\|_{h_p} = \sup_{\rho < 1} \left( \frac{1}{2\pi} \int_0^{2\pi} \| F(\rho e^{i\theta}) \|_p^p \, d\theta \right)^{1/p}$, $1 \le p < \infty$, and $\|F\|_{h_\infty} = \sup_{z \in \mathbb{D}} \| F(z) \|$.

We choose $\|F(z)\|_p$ to be the Schatten $p$-norm of $F$ evaluated at $z$; however, there are other possible choices. The resulting $h_p$ spaces are defined as follows:

  $h_p^{q \times r} = \{ F : \mathbb{D} \rightarrow \mathbb{C}^{q \times r} \text{ analytic, with } \|F\|_{h_p} < \infty \}$.

The following special cases are worth noting:

  $\|F\|_{h_2}^2 = \sup_{\rho < 1} \frac{1}{2\pi} \int_0^{2\pi} \mathrm{tr} \left[ F(\rho e^{i\theta}) F^*(\rho e^{i\theta}) \right] d\theta$,   (2.10)

where $\mathrm{tr}(\cdot)$ denotes the trace and $(\cdot)^*$ denotes complex conjugation and transposition; furthermore

  $\|F\|_{h_\infty} = \sup_{z \in \mathbb{D}} \sigma_{\max}(F(z))$.   (2.11)

Let $\mathbb{C}_-$ denote the (open) left half of the complex plane, $\{ x + iy \in \mathbb{C} : x < 0 \}$. Consider the $q \times r$ matrix-valued functions $F$ as defined above, which are analytic in $\mathbb{C}_-$. Then

  $\|F\|_{H_p} = \sup_{x < 0} \left( \int_{-\infty}^{\infty} \| F(x + iy) \|_p^p \, dy \right)^{1/p}$, $1 \le p < \infty$, and $\|F\|_{H_\infty} = \sup_{z \in \mathbb{C}_-} \|F(z)\|$.

Again, $\|F(\cdot)\|_p$ is chosen to be the Schatten $p$-norm of $F$ evaluated at the point in question. The resulting $H_p$ spaces are defined analogously to the $h_p$ spaces:

  $H_p^{q \times r} = \{ F : \mathbb{C}_- \rightarrow \mathbb{C}^{q \times r} \text{ analytic, with } \|F\|_{H_p} < \infty \}$;

the spaces over the right half plane $\mathbb{C}_+$ are defined analogously. As before, the following special cases are worth noting:

  $\|F\|_{H_2}^2 = \sup_{x < 0} \int_{-\infty}^{\infty} \mathrm{tr} \left[ F(x + iy) F^*(x + iy) \right] dy$,   (2.12)

and

  $\|F\|_{H_\infty} = \sup_{z \in \mathbb{C}_-} \sigma_{\max}(F(z))$.   (2.13)

The suprema in the formulae above can be computed by means of the maximum modulus theorem, which states that a function continuous on a domain and its boundary, and analytic inside the domain, attains its maximum on the boundary. Thus (2.10), (2.11), (2.12), (2.13) become:

  $\|F\|_{h_2} = \left( \frac{1}{2\pi} \int_0^{2\pi} \mathrm{tr} \left[ F(e^{i\theta}) F^*(e^{i\theta}) \right] d\theta \right)^{1/2}$,   (2.14)

  $\|F\|_{h_\infty} = \sup_{\theta \in [0, 2\pi)} \sigma_{\max}(F(e^{i\theta}))$,   (2.15)

  $\|F\|_{H_2} = \left( \int_{-\infty}^{\infty} \mathrm{tr} \left[ F(iy) F^*(iy) \right] dy \right)^{1/2}$,   (2.16)

  $\|F\|_{H_\infty} = \sup_{y \in \mathbb{R}} \sigma_{\max}(F(iy))$.   (2.17)

If $F$ has no poles on the unit circle or on the imaginary axis, but is not necessarily analytic in the corresponding domains, the $h_\infty$, $H_\infty$ norms are not defined. Instead, the $\ell_\infty$, $L_\infty$ norms of $F$ are defined, respectively, as

  $\|F\|_{\ell_\infty} = \sup_\theta \sigma_{\max}(F(e^{i\theta}))$, $\|F\|_{L_\infty} = \sup_y \sigma_{\max}(F(iy))$,

where in the first expression the supremum is taken over $\theta \in [0, 2\pi)$, while in the second it is taken over $y \in \mathbb{R}$.

2.1.5 The Hilbert spaces $\ell_2$ and $L_2$

The spaces $\ell_2(I)$ and $L_2(I)$ are Hilbert spaces, that is, linear spaces where not only a norm but an inner product is defined as well. (The spaces $\ell_p(I)$ and $L_p(I)$, $p \ne 2$, do not share this property; they are Banach spaces. For details see [12, 18].) For $I = \mathbb{Z}$ and $I = \mathbb{R}$ respectively, the inner product is defined as follows:

  $\langle x, y \rangle_{\ell_2} = \sum_{t \in I} x^*(t) \, y(t)$,   (2.18)

  $\langle x, y \rangle_{L_2} = \int_I x^*(t) \, y(t) \, dt$,   (2.19)

where as before $(\cdot)^*$ denotes complex conjugation and transposition. For $I = \mathbb{Z}$ and $I = \mathbb{R}$ respectively, elements (vectors or matrices) with entries in $\ell_2(\mathbb{Z})$ and $L_2(\mathbb{R})$ have a transform defined as follows:

  $\hat{f}(e^{i\theta}) = \sum_{t \in \mathbb{Z}} f(t) \, e^{-i\theta t}$, and $\hat{f}(i\omega) = \int_{\mathbb{R}} f(t) \, e^{-i\omega t} \, dt$.

It follows that if the domain of $f$ is discrete, $\hat{f}$ is the Fourier transform of $f$ and belongs to the $L_2$ space of the unit circle, $\ell_2(\partial\mathbb{D})$; analogously, if the domain of $f$ is continuous, $\hat{f}$ is the Fourier transform of $f$ and belongs to the space denoted by $L_2(i\mathbb{R})$ and defined as follows:

  $L_2(i\mathbb{R}) = \{ \hat{f} : \|\hat{f}\|_{L_2} < \infty \}$.   (2.20)

Furthermore, the following bijective correspondences hold under the Fourier transform:

  $\ell_2(\mathbb{Z}) = \ell_2(\mathbb{Z}_-) \oplus \ell_2(\mathbb{Z}_+)$ corresponds to $\ell_2(\partial\mathbb{D}) = h_2^- \oplus h_2^+$,

where $h_2^+$ ($h_2^-$) denotes the subspace of transforms of causal (anticausal) sequences, and

  $L_2(\mathbb{R}) = L_2(\mathbb{R}_-) \oplus L_2(\mathbb{R}_+)$ corresponds to $L_2(i\mathbb{R}) = H_2(\mathbb{C}_-) \oplus H_2(\mathbb{C}_+)$.

For simplicity the above diagram is shown for spaces containing scalars; it is, however, equally valid for the corresponding spaces containing matrices of arbitrary dimension.
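Returning to (2.17): a hedged sketch approximating the $H_\infty$ norm of a stable rational transfer function by gridding the imaginary axis (a grid approximation only; dedicated algorithms compute the exact value, and the matrices below are an arbitrary example):

    import numpy as np

    A = np.array([[-1.0, 0.0], [0.0, -5.0]])
    B = np.array([[1.0], [1.0]])
    C = np.array([[1.0, 1.0]])

    def sigma_max(w):
        # Largest singular value of H(iw) = C (iwI - A)^{-1} B.
        H = C @ np.linalg.solve(1j * w * np.eye(2) - A, B)
        return np.linalg.svd(H, compute_uv=False)[0]

    freqs = np.logspace(-3, 3, 2000)
    hinf = max(sigma_max(w) for w in freqs)
    print(f"H-infinity norm (grid estimate): {hinf:.4f}")  # ~1.2, attained near w = 0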

There are two results connecting the spaces introduced above; we will only state the continuous-time versions. The first has the names of Parseval, Plancherel and Paley-Wiener attached to it.

Proposition 2.1 The Fourier transform is a Hilbert space isometric isomorphism between $L_2(\mathbb{R})$ and $L_2(i\mathbb{R})$. It maps $L_2(\mathbb{R}_-)$, $L_2(\mathbb{R}_+)$ onto $H_2(\mathbb{C}_-)$, $H_2(\mathbb{C}_+)$, respectively.

The second one shows that the $L_\infty$ and $H_\infty$ norms can be viewed as induced norms. Recall that if $(X, \|\cdot\|_\alpha)$ and $(Y, \|\cdot\|_\beta)$ are two normed spaces, then, just as in the finite-dimensional case, the $\alpha,\beta$-induced norm of an operator $T$ with domain $X$ and range $Y$ is

  $\|T\|_{\alpha,\beta} = \sup_{x \ne 0} \frac{\|Tx\|_\beta}{\|x\|_\alpha}$.   (2.21)

Proposition 2.2 Let $F \in L_\infty$; then $F \cdot L_2(i\mathbb{R}) \subseteq L_2(i\mathbb{R})$, and the $L_\infty$ norm can be viewed as an induced norm in the frequency domain space $L_2(i\mathbb{R})$:

  $\|F\|_{L_\infty} = \sup_{G \ne 0} \frac{\|F G\|_{L_2}}{\|G\|_{L_2}}$.

In this last expression, $G$ can be restricted to lie in $H_2$. Let $F \in H_\infty$; then $F \cdot H_2(\mathbb{C}_+) \subseteq H_2(\mathbb{C}_+)$, and the $H_\infty$ norm can be viewed as an induced norm both in the frequency domain space $H_2$ and in the time domain space $L_2$:

  $\|F\|_{H_\infty} = \sup_{G \ne 0} \frac{\|F G\|_{H_2}}{\|G\|_{H_2}} = \sup_{x \ne 0} \frac{\|f * x\|_{L_2}}{\|x\|_{L_2}}$,

where $f$ denotes the inverse transform of $F$.

2.2 The Laplace transform and the z-transform

The logarithm can be considered as an elementary transform: it assigns a real number to any positive real number. It was invented in the Middle Ages and its purpose was to convert the multiplication of multi-digit numbers into addition. In the case of linear, time-invariant systems the operation which one wishes to simplify is the derivative with respect to time in the continuous-time case, or the shift in the discrete-time case. As a consequence, one also wishes to simplify the operation of convolution, both in discrete and continuous time. Thus an operation is sought which transforms differentiation into simple multiplication in the transform domain. In order to achieve this, the transform needs to operate on functions of time; the resulting function is one of complex frequency. This establishes two equivalent ways of dealing with linear, time-invariant systems, namely in the time domain and in the frequency domain. In the next two sections we briefly review some basic properties of this transform, which is called the Laplace transform in continuous time and the discrete-Laplace or z-transform in discrete time. For further details we refer to any introductory book on signals and systems, e.g. [9].

2.2.1 Some properties of the Laplace transform

Consider a function of time $f(t)$. The unilateral Laplace transform of $f$ is a function, denoted by $F(s)$, of the complex variable $s$. The definition of $F$ is as follows:

  $f(t) \;\mapsto\; F(s) = \mathcal{L}(f)(s) = \int_{0^-}^{\infty} f(t) \, e^{-st} \, dt$.   (2.22)

Therefore the values of $f$ for negative time are ignored by this transform. Instead, in order to capture the influence of the past, initial conditions at time zero are required (see "Differentiation in time" in Table 1 below).
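A hedged numerical check of the differentiation rule $\mathcal{L}\{f'\}(s) = sF(s) - f(0)$ (listed in Table 1 below) for $f(t) = e^{-2t}$ at the arbitrary test point $s = 1$, with the transforms evaluated by direct quadrature of the defining integral:

    import numpy as np
    from scipy.integrate import quad

    f = lambda t: np.exp(-2.0 * t)
    fprime = lambda t: -2.0 * np.exp(-2.0 * t)
    s = 1.0

    F, _ = quad(lambda t: f(t) * np.exp(-s * t), 0.0, np.inf)        # F(s) = 1/(s + 2)
    Fp, _ = quad(lambda t: fprime(t) * np.exp(-s * t), 0.0, np.inf)  # transform of f'

    assert np.isclose(F, 1.0 / (s + 2.0))
    assert np.isclose(Fp, s * F - f(0.0))   # equals -2/3 at s = 1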

Basic Laplace transform properties

  Property                  | Time signal                        | L-transform
  --------------------------|------------------------------------|----------------------------------
  Linearity                 | $a f_1(t) + b f_2(t)$              | $a F_1(s) + b F_2(s)$
  Shifting in the s-domain  | $e^{s_0 t} f(t)$                   | $F(s - s_0)$
  Time scaling              | $f(at)$, $a > 0$                   | $\frac{1}{a} F\!\left(\frac{s}{a}\right)$
  Convolution               | $(f_1 * f_2)(t) = \int_0^t f_1(\tau) f_2(t-\tau)\, d\tau$ | $F_1(s)\, F_2(s)$
  Differentiation in time   | $\frac{d}{dt} f(t)$                | $s F(s) - f(0^-)$
  Differentiation in freq.  | $-t f(t)$                          | $\frac{d}{ds} F(s)$
  Integration in time       | $\int_{0^-}^{t} f(\tau)\, d\tau$   | $\frac{1}{s} F(s)$
  Impulse                   | $\delta(t)$                        | $1$
  Exponential               | $e^{at}$, $t \ge 0$                | $\frac{1}{s - a}$

  Initial value theorem: $f(0^+) = \lim_{s \to \infty} s F(s)$.
  Final value theorem: $\lim_{t \to \infty} f(t) = \lim_{s \to 0} s F(s)$.

Table 1: Basic Laplace transform properties. The last two properties hold provided that $f(t)$ contains no impulses or higher-order singularities at $t = 0$.

2.2.2 Some properties of the z-transform

Consider a function of time $f(t)$, where time is discrete: $t \in \mathbb{Z}$. The unilateral z-transform of $f$ is a function, denoted by $F(z)$, of the complex variable $z = \rho e^{i\omega}$. The definition of $F$ is as follows:

  $f(t) \;\mapsto\; F(z) = \sum_{t \ge 0} z^{-t} f(t)$.   (2.23)

Basic z-transform properties

  Property                  | Time signal              | z-transform
  --------------------------|--------------------------|----------------------------------
  Linearity                 | $a f_1(t) + b f_2(t)$    | $a F_1(z) + b F_2(z)$
  Forward shift             | $f(t + 1)$               | $z F(z) - z f(0)$
  Backward shift            | $f(t - 1)$               | $z^{-1} F(z) + f(-1)$
  Scaling in freq.          | $a^t f(t)$               | $F(z/a)$
  Conjugation               | $\bar{f}(t)$             | $\bar{F}(\bar{z})$
  Convolution               | $(f_1 * f_2)(t)$         | $F_1(z)\, F_2(z)$
  Differentiation in freq.  | $t f(t)$                 | $-z \frac{d}{dz} F(z)$
  Impulse                   | $\delta(t)$              | $1$
  Exponential               | $a^t$, $t \ge 0$         | $\frac{z}{z - a}$
  First difference          | $f(t) - f(t - 1)$        | $(1 - z^{-1}) F(z) - f(-1)$
  Accumulation              | $\sum_{k=0}^{t} f(k)$    | $\frac{z}{z - 1} F(z)$

  Initial value theorem: $f(0) = \lim_{z \to \infty} F(z)$.

Table 2: Basic z-transform properties

3 The external and the internal representation of linear systems

In this section we review some basic results concerning linear dynamical systems. General references for the material in this chapter are [31], [28], [29], [9], [7], [15]; for an introduction to linear systems from basic principles the reader may consult the book by Willems and Polderman [26]. Here we assume that the external variables have been partitioned into input variables $u$ and output variables $y$, and we will be concerned with convolution systems, i.e. systems where the relation between $u$ and $y$ is given by a convolution

sum or integral

  $y = h * u$,   (3.1)

where $h$ is an appropriate weighting pattern. This will be called the external representation. We will also be concerned with systems where, besides the input and output variables, the state $x$ has been declared as well. The relationship between $x$ and $u$ is given by means of a set of first order difference or differential equations with constant coefficients, while that of $y$ with $x$ and $u$ is given by a set of linear algebraic equations. It is also assumed that $x$ lives in a finite-dimensional space:

  $\sigma x = A x + B u$, $\quad y = C x + D u$,   (3.2)

where $\sigma$ is the derivative or shift operator and $A$, $B$, $C$, $D$ are linear constant maps. This will be called the internal representation. We will also consider an alternative external representation, in terms of two polynomial matrices $Q \in \mathbb{R}[\sigma]^{p \times p}$, $P \in \mathbb{R}[\sigma]^{p \times m}$:

  $Q(\sigma)\, y = P(\sigma)\, u$,   (3.3)

where, as above, $\sigma$ is the derivative or the (backward) shift operator. It is usually assumed that $\det Q \ne 0$. This representation is given in terms of differential or difference equations linking the input and the output.

The first subsection is devoted to the discussion of systems governed by (3.1), (3.3), while the following subsection investigates some structural properties of systems represented by (3.2). These equations are solved both in the time and the frequency domains. The third subsection discusses the equivalence of the external and the internal representations. As it turns out, going from the latter to the former involves the elimination of $x$ and is thus straightforward. The converse, however, is far from trivial, as it involves the construction of state; it is called the realization problem. This problem can be interpreted as deriving a time domain representation from frequency domain data.

3.1 External representation

A discrete-time linear system $\Sigma$ with $m$ input and $p$ output channels can be viewed as a linear operator $S : \ell^m(\mathbb{Z}) \rightarrow \ell^p(\mathbb{Z})$. There exists a sequence of matrices $S(i, j) \in K^{p \times m}$ (recall that $K$ is either $\mathbb{R}$ or $\mathbb{C}$) such that

  $u \mapsto y = S(u)$, $\quad y(i) = \sum_{j \in \mathbb{Z}} S(i, j)\, u(j)$, $i \in \mathbb{Z}$.   (3.4)

This relationship can be written in matrix form as follows:

  $\begin{pmatrix} \vdots \\ y(-2) \\ y(-1) \\ y(0) \\ y(1) \\ \vdots \end{pmatrix} = \begin{pmatrix} \ddots & & & & \\ \cdots & S(-2,-2) & S(-2,-1) & S(-2,0) & S(-2,1) & \cdots \\ \cdots & S(-1,-2) & S(-1,-1) & S(-1,0) & S(-1,1) & \cdots \\ \cdots & S(0,-2) & S(0,-1) & S(0,0) & S(0,1) & \cdots \\ \cdots & S(1,-2) & S(1,-1) & S(1,0) & S(1,1) & \cdots \\ & & & & \ddots \end{pmatrix} \begin{pmatrix} \vdots \\ u(-2) \\ u(-1) \\ u(0) \\ u(1) \\ \vdots \end{pmatrix}$.   (3.5)

The system described by $S$ is called causal iff $S(i, j) = 0$ for $i < j$, and time-invariant iff $S(i, j) = S_{i-j} \in K^{p \times m}$.

For a time-invariant system we can define the sequence of $p \times m$ constant matrices

  $S = (\dots, S_{-2}, S_{-1}, S_0, S_1, S_2, \dots)$.   (3.6)

It will be called the impulse response of $\Sigma$, because it is the output obtained in response to a unit pulse $u(t) = \delta(t)$. Operation (3.4) can now be represented as a convolution sum:

  $y = S(u) = S * u$, where $(S * u)(t) = \sum_{k \in \mathbb{Z}} S_{t-k}\, u(k)$, $t \in \mathbb{Z}$.   (3.7)

Moreover, the matrix representation of $S$ in this case is a (block) Toeplitz matrix:

  $\begin{pmatrix} \vdots \\ y(-1) \\ y(0) \\ y(1) \\ \vdots \end{pmatrix} = \begin{pmatrix} \ddots & & & \\ \cdots & S_0 & S_{-1} & S_{-2} & \cdots \\ \cdots & S_1 & S_0 & S_{-1} & \cdots \\ \cdots & S_2 & S_1 & S_0 & \cdots \\ & & & \ddots \end{pmatrix} \begin{pmatrix} \vdots \\ u(-1) \\ u(0) \\ u(1) \\ \vdots \end{pmatrix}$.

In the sequel we restrict our attention to causal and time-invariant linear systems; the matrix representation of $S$ in this case is lower triangular and (block) Toeplitz ($S_k = 0$, $k < 0$).

In analogy to the discrete-time case, a continuous-time linear system $\Sigma$ with $m$ input and $p$ output channels can be viewed as a linear operator $S$ mapping $L^m(\mathbb{R})$ into $L^p(\mathbb{R})$. In particular we will be concerned with systems which can be expressed by means of an integral, $S : L^m(\mathbb{R}) \rightarrow L^p(\mathbb{R})$:

  $u \mapsto y = S(u)$,   (3.8)

  $y(t) = \int_{-\infty}^{\infty} h(t, \tau)\, u(\tau)\, d\tau$, $t \in \mathbb{R}$,   (3.9)

where $h(t, \tau)$ is a matrix-valued function called the kernel or weighting pattern of $S$. The system just defined is causal iff $h(t, \tau) = 0$ for $t < \tau$, and time-invariant iff $h$ depends on the difference of the two arguments, $h(t, \tau) = h(t - \tau)$. In this case $S$ is a convolution operator:

  $y = S(u) = h * u$, where $(h * u)(t) = \int_{-\infty}^{\infty} h(t - \tau)\, u(\tau)\, d\tau$, $t \in \mathbb{R}$.   (3.10)

In the sequel we assume that $S$ is both causal and time-invariant, which means that the upper limit of integration can be replaced by $t$. In addition, we assume that $h$ can be expressed as

  $h(t) = S_0\, \delta(t) + h_a(t)$, $S_0 \in K^{p \times m}$, $t \ge 0$,   (3.11)

where $\delta$ denotes the $\delta$-distribution and $h_a$ is analytic. Hence $h_a$ is uniquely determined by means of the coefficients of its Taylor series expansion at $t = 0$:

  $h_a(t) = S_1 + S_2\, t + S_3\, \frac{t^2}{2!} + \cdots + S_{k+1}\, \frac{t^k}{k!} + \cdots$, $S_k \in K^{p \times m}$.
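Returning to the discrete-time convolution sum (3.7): a hedged sketch for a causal, time-invariant, single-input single-output system, showing that the finite section of the convolution operator is lower triangular and Toeplitz (the Markov parameters are arbitrary):

    import numpy as np
    from scipy.linalg import toeplitz

    S = np.array([0.0, 1.0, 0.5, 0.25, 0.125])   # S_0, S_1, ..., S_4
    u = np.array([1.0, -1.0, 2.0, 0.0, 1.0])     # input samples u(0..4)

    # Lower triangular Toeplitz matrix: first column S, first row (S_0, 0, ...).
    T = toeplitz(S, np.r_[S[0], np.zeros(len(S) - 1)])
    y_matrix = T @ u

    # Same output through the convolution sum, truncated to the first 5 samples.
    y_conv = np.convolve(S, u)[:len(u)]
    assert np.allclose(y_matrix, y_conv)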

It follows that if (3.11) is satisfied, the output $y$ is at least as smooth as the input $u$; $\Sigma$ is consequently called a smooth system. Hence, just like in the case of discrete-time systems, a smooth continuous-time linear system can be described by means of an infinite sequence of $p \times m$ matrices $S_k$. We formalize this conclusion next.

Definition 3.1 The external representation of a time-invariant, causal and smooth continuous-time system, and that of a time-invariant, causal discrete-time linear system, with $m$ inputs and $p$ outputs, is given by an infinite sequence of $p \times m$ matrices

  $S = (S_0, S_1, S_2, \dots)$, $S_k \in \mathbb{R}^{p \times m}$.   (3.12)

The matrices $S_k$ are often referred to as the Markov parameters of the system $\Sigma$.

The (continuous- or discrete-time) Laplace transform of the impulse response yields the transfer function of the system:

  $H(s) = \mathcal{L}(h)(s)$.   (3.13)

The Laplace transform is denoted for simplicity by $\mathcal{L}$ for both discrete and continuous time, and the Laplace variable is denoted by $s$ in both cases. It readily follows that $H$ can be expanded in a formal power series in $s^{-1}$:

  $H(s) = S_0 + S_1\, s^{-1} + S_2\, s^{-2} + \cdots + S_k\, s^{-k} + \cdots$.   (3.14)

This can also be regarded as a Laurent expansion of $H$ around infinity. Consequently (3.7) and (3.10) can be written as $Y(s) = H(s)\, U(s)$.

An alternative way of describing linear systems externally is by specifying a differential or difference equation which relates the input and the output channels. Given that the input has $m$ and the output $p$ channels, this representation assumes the existence of polynomials $q_{ij}(\sigma)$, $i, j = 1, \dots, p$, and $p_{ik}(\sigma)$, $i = 1, \dots, p$, $k = 1, \dots, m$, such that

  $\sum_j q_{ij}(\sigma)\, y_j(t) = \sum_k p_{ik}(\sigma)\, u_k(t) \;\Longleftrightarrow\; Q(\sigma)\, y(t) = P(\sigma)\, u(t)$,   (3.15)

where $P$, $Q$ are polynomial matrices, $Q \in \mathbb{R}[\sigma]^{p \times p}$, $P \in \mathbb{R}[\sigma]^{p \times m}$. If we make the assumption that $Q$ is nonsingular, that is, its determinant is not identically zero, $\det Q \ne 0$, the transfer function of this system is the rational matrix $H = Q^{-1} P$. If in addition $H$ is proper rational, that is, the degree of the numerator of each entry is at most the degree of the corresponding denominator, we can expand it as follows:

  $H(s) = Q^{-1}(s)\, P(s) = S_0 + S_1\, s^{-1} + S_2\, s^{-2} + \cdots$.   (3.16)

Recall that the variable $s$ is used to denote the transform variable, $s$ or $z$, depending on whether we are dealing with continuous- or discrete-time systems. We will not dwell further on this polynomial representation of linear systems, since it is the subject of the following contribution in this volume.

3.2 Internal representation

An alternative description of linear systems is the internal representation, which uses, in addition to the input $u$ and the output $y$, the state $x$. For a first-principles treatment of the concept of state we refer to the book by Willems and Polderman [26]. For our purposes, given are three linear finite-dimensional spaces: the state

space $X \cong K^n$, the input space $U \cong K^m$, and the output space $Y \cong K^p$ (recall that $K$ denotes the field of real numbers $\mathbb{R}$ or that of complex numbers $\mathbb{C}$; the notation $X \cong K^n$ means that $X$ is a linear space isomorphic to the $n$-dimensional space $K^n$ — as an example, the space of all polynomials of degree less than $n$ is isomorphic to $\mathbb{R}^n$, since there is a one-to-one correspondence between each polynomial and the $n$-vector consisting of its coefficients). The state equations describing a linear system are a set of first order linear differential or difference equations, according to whether we are dealing with a continuous- or a discrete-time system:

  $\frac{d}{dt} x(t) = A x(t) + B u(t)$, $t \in \mathbb{R}$, or   (3.17)

  $x(t + 1) = A x(t) + B u(t)$, $t \in \mathbb{Z}$.   (3.18)

In both cases $x(t) \in X$ is the state of the system at time $t$, while $u(t) \in U$ is the value of the input function at time $t$. Moreover, $B : U \rightarrow X$ and $A : X \rightarrow X$ are linear maps; the first one is called the input map, while the second one describes the dynamics or internal evolution of the system. Equations (3.17) and (3.18) can be written in a unified way as follows:

  $\sigma x = A x + B u$,   (3.19)

where $\sigma$ denotes the derivative operator for continuous-time systems and the (backward) shift operator for discrete-time systems. The output equations, for both discrete- and continuous-time linear systems, are composed of a set of linear algebraic equations:

  $y = C x + D u$,   (3.20)

where $y$ is the output function (response), and $C : X \rightarrow Y$, $D : U \rightarrow Y$ are linear maps; $C$ is called the output map. It describes how the system interacts with the outside world.

In the sequel, the term linear system in internal representation will be used to denote a linear, time-invariant, continuous- or discrete-time system which is finite-dimensional. Linear means: $U$, $X$, $Y$ are linear spaces and $A$, $B$, $C$, $D$ are linear maps; finite-dimensional means: $U$, $X$, $Y$ are all finite-dimensional; time-invariant means: $A$, $B$, $C$, $D$ do not depend on time, i.e. their matrix representations are constant $n \times n$, $n \times m$, $p \times n$, $p \times m$ matrices. In the sequel (by slight abuse of notation) we denote the linear maps $A$, $B$, $C$, $D$, as well as their matrix representations (in some appropriate basis), by the same symbols. We are now ready to give the following definition.

Definition 3.2 (a) A linear system in internal or state space representation is a quadruple of linear maps (matrices)

  $\Sigma = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$, $A \in K^{n \times n}$, $B \in K^{n \times m}$, $C \in K^{p \times n}$, $D \in K^{p \times m}$.   (3.21)

The dimension of the system is defined as the dimension of the associated state space:

  $\dim \Sigma = n$.   (3.22)

(b) $\Sigma$ is called stable if the eigenvalues of $A$ have negative real parts or lie inside the unit disc, depending on whether $\Sigma$ is a continuous-time or a discrete-time system.

3.2.1 Solution in the time domain

Let $\phi(u; x_0; t)$ denote the solution of the state equations (3.19), i.e. the state of the system at time $t$ attained from the initial state $x_0$ at time $t_0$ under the influence of the input $u$. In particular, for the continuous-time state equations (3.17),

  $\phi(u; x_0; t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau)\, d\tau$, $t \ge t_0$,   (3.23)

while for the discrete-time state equations (3.18),

  $\phi(u; x_0; t) = A^{t - t_0} x_0 + \sum_{\tau = t_0}^{t - 1} A^{t - 1 - \tau} B u(\tau)$, $t \ge t_0$.   (3.24)

In the above formulae we may assume, without loss of generality, that $t_0 = 0$, since the systems we are dealing with are time-invariant. The first summand in the above expressions is called the zero-input part of the solution and the second the zero-state part. The nomenclature comes from the fact that the zero-input part is obtained when the system is excited exclusively by means of initial conditions, while the zero-state part is the result of excitation by some input $u$ with zero initial conditions. In the tables that follow these parts are denoted with the subscripts zi and zs.

For both discrete- and continuous-time systems it follows that the output is given by

  $y(t) = C\, \phi(u; x_0; t) + D\, u(t)$.   (3.25)

Again, the same remark concerning the zero-input and zero-state parts of the output holds. If we compare the above expressions, for $t_0 = 0$ and $x_0 = 0$, with (3.7) and (3.10), it follows that the impulse response has the form below. For continuous-time systems:

  $h(t) = \begin{cases} C e^{At} B + D\, \delta(t), & t \ge 0, \\ 0, & t < 0, \end{cases}$   (3.26)

where $\delta$ denotes the $\delta$-distribution. For discrete-time systems:

  $h(t) = \begin{cases} C A^{t-1} B, & t > 0, \\ D, & t = 0, \\ 0, & t < 0. \end{cases}$   (3.27)

The corresponding external representation, given by means of the Markov parameters (3.12), is

  $S_0 = D$, $\quad S_k = C A^{k-1} B$, $k > 0$.   (3.28)
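A hedged numerical sketch of the variation-of-constants formula (3.23), splitting the state into its zero-input and zero-state parts (the system matrices and the step input are arbitrary; the integral is approximated by a left-endpoint sum):

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    x0 = np.array([1.0, 0.0])
    u = lambda t: np.array([1.0])        # unit step input

    t_final, dt = 2.0, 1e-3
    grid = np.arange(0.0, t_final, dt)

    x_zi = expm(A * t_final) @ x0        # zero-input part: e^{At} x0
    x_zs = sum(expm(A * (t_final - tau)) @ B @ u(tau) for tau in grid) * dt
    x_total = x_zi + x_zs

    # Reference: integrate dx/dt = Ax + Bu directly.
    sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (0.0, t_final), x0,
                    rtol=1e-10, atol=1e-12)
    assert np.allclose(x_total, sol.y[:, -1], atol=1e-2)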

By transforming the state, the matrices which describe the system change. Thus, if the new state is $\bar{x} = T x$, $\det T \ne 0$, then (3.19) and (3.20), written in the new state $\bar{x}$, become

  $\sigma \bar{x} = \underbrace{T A T^{-1}}_{\bar{A}} \bar{x} + \underbrace{T B}_{\bar{B}} u$, $\quad y = \underbrace{C T^{-1}}_{\bar{C}} \bar{x} + D u$,

where $D$ remains unchanged. The corresponding triples are called equivalent. Put differently, $(\bar{C}, \bar{A}, \bar{B})$ and $(C, A, B)$ are equivalent if there exists a nonsingular $T$ such that

  $\bar{A} = T A T^{-1}$, $\bar{B} = T B$, $\bar{C} = C T^{-1}$.   (3.29)

Let $(\bar{C}, \bar{A}, \bar{B})$ and $(C, A, B)$ be equivalent with equivalence transformation $T$. It readily follows that

  $\bar{H}(s) = \bar{C}(sI - \bar{A})^{-1}\bar{B} + D = C T^{-1}(sI - T A T^{-1})^{-1} T B + D = C (sI - A)^{-1} B + D = H(s)$.

This immediately implies $\bar{S}_k = S_k$, $k \in \mathbb{N}$. We have thus proved:

Proposition 3.1 Equivalent triples have the same transfer function and therefore the same Markov parameters.

3.2.2 Solution in the frequency domain

In this section we assume that the initial time is $t_0 = 0$. Let $\Phi(s) = \mathcal{L}(\phi)(s)$, where $\phi$ is defined by (3.23), (3.24); there holds

  $\Phi(s) = (sI - A)^{-1} x_0 + (sI - A)^{-1} B\, U(s)$, $\quad Y(s) = C\, \Phi(s) + D\, U(s)$.   (3.30)

Thus, by (3.13), (3.26), (3.27), the transfer function of $\Sigma$ is

  $H(s) = C (sI - A)^{-1} B + D$.   (3.31)

A summary of these relationships is provided in Tables 3 and 4 below.

3.2.3 The concepts of reachability and observability

The concept of reachability provides the tool for answering questions related to the extent to which the state of the system $x$ can be manipulated through the input $u$. The related concept of controllability will be discussed subsequently. Both concepts involve only the state equations. For additional information on these issues we refer to [5].

Definition 3.3 Given $A \in K^{n \times n}$ and $B \in K^{n \times m}$, a state $\bar{x} \in X$ is reachable from the zero state iff there exist an input function $\bar{u}(t)$ and a time $\bar{T} < \infty$ such that $\bar{x} = \phi(\bar{u}; 0; \bar{T})$. The reachable subspace $X_{reach} \subseteq X$ of $\Sigma$ is the set which contains all reachable states of $\Sigma$. We call the system $\Sigma$ (completely) reachable iff $X_{reach} = X$. Furthermore,

  $\mathcal{R}(A, B) = (B \;\; AB \;\; A^2 B \;\; \cdots \;\; A^{n-1} B \;\; \cdots)$   (3.32)

will be called the reachability matrix of $\Sigma$.

The finite reachability gramians at time $t < \infty$ are defined as follows. For continuous-time systems:

  $\mathcal{P}(t) = \int_0^t e^{A\tau} B B^* e^{A^* \tau}\, d\tau$, $t \in \mathbb{R}_+$,   (3.33)

while for discrete-time systems:

  $\mathcal{P}(t) = \mathcal{R}_t(A, B)\, \mathcal{R}_t^*(A, B) = \sum_{k=0}^{t-1} A^k B B^* (A^*)^k$, $t \in \mathbb{Z}_+$,   (3.34)

where $\mathcal{R}_t(A, B) = (B \;\; AB \;\; \cdots \;\; A^{t-1} B)$.

Theorem 3.1 Consider the pair $(A, B)$ as defined above. (a) $X_{reach} = \mathrm{span\,col}\, \mathcal{R}(A, B) = \mathrm{span\,col}\, \mathcal{P}(t)$, where $t > 0$, $t \ge n$, for continuous-, discrete-time systems, respectively. (b) Reachability conditions. The following are equivalent:

1. The pair $(A, B)$, $A \in K^{n \times n}$, $B \in K^{n \times m}$, is completely reachable.
2. The rank of the reachability matrix is full: $\mathrm{rank}\, \mathcal{R}(A, B) = n$.
3. The reachability gramian is positive definite: $\mathcal{P}(t) > 0$ for some $t > 0$.
4. No left eigenvector $v$ of $A$ is in the left kernel of $B$: $v^* A = \lambda v^* \;\Rightarrow\; v^* B \ne 0$.
5. $\mathrm{rank}\, (\lambda I_n - A \;\; B) = n$ for all $\lambda \in \mathbb{C}$.
6. The polynomial matrices $sI - A$ and $B$ are left coprime.

The fourth and fifth conditions in the theorem above are known as the PBH or Popov-Belevitch-Hautus tests for reachability.
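A hedged sketch of Theorem 3.1's rank and gramian tests on an arbitrary stable pair $(A, B)$:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[-1.0, 1.0], [0.0, -2.0]])
    B = np.array([[0.0], [1.0]])
    n = A.shape[0]

    # Condition 2: rank of R_n = [B, AB, ..., A^{n-1}B].
    R = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    print("rank R =", np.linalg.matrix_rank(R))        # n = 2 -> reachable

    # Condition 3 (infinite-horizon variant): P solves A P + P A* = -B B*
    # (the Lyapunov equation of section 3.2.4 below).
    P = solve_continuous_lyapunov(A, -B @ B.T)
    print("eigenvalues of P:", np.linalg.eigvalsh(P))  # all positive -> reachable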

Table 3: I/O and I/S/O representation of continuous-time linear systems

I/O representation (variables $(u, y)$, $u(t) \in \mathbb{R}^m$, $y(t) \in \mathbb{R}^p$):
  Equations: $Q(\frac{d}{dt})\, y(t) = P(\frac{d}{dt})\, u(t)$, $Q \in \mathbb{R}[s]^{p \times p}$, $P \in \mathbb{R}[s]^{p \times m}$.
  Impulse response: $h$ satisfying $Q(\frac{d}{dt})\, h(t) = P(\frac{d}{dt})\, \delta(t)$.
  Transfer function: $H(s) = \mathcal{L}(h)(s) = Q^{-1}(s)\, P(s)$.
  Poles (characteristic roots): the roots of $\det Q(s)$.
  Zeros: $z \in \mathbb{C}$ such that $H(z)\, v = 0$ for some $v \in \mathbb{C}^m$, $v \ne 0$.
  Solution in the time domain: $y(t) = (h * u)(t)$.
  Solution in the frequency domain: $Y(s) = Q^{-1}(s)\, R(s) + H(s)\, U(s)$, where the polynomial vector $R$ encodes the initial conditions.

I/S/O representation (variables $(u, x, y)$, $x(t) \in \mathbb{R}^n$):
  Equations: $\frac{d}{dt} x(t) = A x(t) + B u(t)$, $y(t) = C x(t) + D u(t)$.
  Impulse response: $h(t) = C e^{At} B + D\, \delta(t)$, $t \ge 0$.
  Transfer function: $H(s) = C (sI - A)^{-1} B + D$.
  Poles: the roots of $\det(sI - A)$, i.e. the eigenvalues of $A$.
  Zeros: $z \in \mathbb{C}$ such that there exist $w \in \mathbb{C}^n$, $v \in \mathbb{C}^m$, not both zero, with $(zI - A)\, w = B v$ and $C w + D v = 0$.
  Matrix exponential: $e^{At} = I + At + \frac{(At)^2}{2!} + \cdots$, with $\mathcal{L}(e^{At}) = (sI - A)^{-1}$.
  Solution in the time domain:
    $x(t) = \underbrace{e^{At} x(0)}_{x_{zi}(t)} + \underbrace{\int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau}_{x_{zs}(t)}$,
    $y(t) = \underbrace{C e^{At} x(0)}_{y_{zi}(t)} + \underbrace{\int_0^t h(t - \tau)\, u(\tau)\, d\tau}_{y_{zs}(t)}$.
  Solution in the frequency domain:
    $X(s) = (sI - A)^{-1} x(0) + (sI - A)^{-1} B\, U(s)$,
    $Y(s) = C (sI - A)^{-1} x(0) + \underbrace{[C (sI - A)^{-1} B + D]}_{H(s)}\, U(s)$.

Table 4: I/O and I/S/O representation of discrete-time linear systems

I/O representation (variables $(u, y)$, $u(t) \in \mathbb{R}^m$, $y(t) \in \mathbb{R}^p$):
  Equations: $Q(\sigma)\, y(t) = P(\sigma)\, u(t)$.
  Impulse response: $h$ satisfying $Q(\sigma)\, h(t) = P(\sigma)\, \delta(t)$.
  Transfer function: $H(z) = Q^{-1}(z)\, P(z)$.
  Poles and zeros: defined as in the continuous-time case (Table 3).
  Solution in the time domain: $y(t) = (h * u)(t)$.
  Solution in the frequency domain: $Y(z) = Q^{-1}(z)\, R(z) + H(z)\, U(z)$.

I/S/O representation (variables $(u, x, y)$, $x(t) \in \mathbb{R}^n$):
  Equations: $x(t + 1) = A x(t) + B u(t)$, $y(t) = C x(t) + D u(t)$.
  Impulse response: $h(0) = D$, $h(t) = C A^{t-1} B$, $t > 0$.
  Transfer function: $H(z) = C (zI - A)^{-1} B + D$.
  Powers of a matrix: $\mathcal{Z}(A^t) = z (zI - A)^{-1}$.
  Solution in the time domain:
    $x(t) = \underbrace{A^t x(0)}_{x_{zi}(t)} + \underbrace{\sum_{\tau=0}^{t-1} A^{t-1-\tau} B u(\tau)}_{x_{zs}(t)}$,
    $y(t) = \underbrace{C A^t x(0)}_{y_{zi}(t)} + \underbrace{\sum_{\tau=0}^{t} h(t - \tau)\, u(\tau)}_{y_{zs}(t)}$.
  Solution in the frequency domain:
    $X(z) = z (zI - A)^{-1} x(0) + (zI - A)^{-1} B\, U(z)$,
    $Y(z) = z\, C (zI - A)^{-1} x(0) + \underbrace{[C (zI - A)^{-1} B + D]}_{H(z)}\, U(z)$.

We now turn our attention to the concept of observability. In order to be able to modify the dynamical behavior of a system, very often the state $x$ needs to be available. Typically, however, the state variables are inaccessible and only certain linear combinations $y$ thereof, given by the output equations (3.20), are known. Thus we need to discuss the problem of reconstructing the state $x(T)$ from observations $y(\tau)$, where $\tau$ ranges over some appropriate interval. If $\tau \in [T, T + t]$ we have the state observation problem, while if $\tau \in [T - t, T]$ we have the state reconstruction problem. We will first discuss the observation problem; without loss of generality we assume $T = 0$.

Recall (3.23), (3.24) and (3.25). Since the input $u$ is known, the latter two terms in (3.25) are also known for $t \ge 0$. Therefore, in determining $x(0)$ we may assume without loss of generality that $u(\cdot) = 0$. Thus the observation problem reduces to the following: given $y(t) = C\, \phi(0; x_0; t)$ for $t \ge 0$, find $x_0$. Since $B$ and $D$ are irrelevant, for this subsection $\Sigma = (A, C)$, $A \in K^{n \times n}$, $C \in K^{p \times n}$.

Definition 3.4 A state $\bar{x} \in X$ is unobservable iff $y(t) = C\, \phi(0; \bar{x}; t) = 0$ for all $t \ge 0$, i.e. iff $\bar{x}$ is indistinguishable from the zero state for all $t \ge 0$. The unobservable subspace $X_{unobs}$ of $X$ is the set of all unobservable states of $\Sigma$. $\Sigma$ is (completely) observable iff $X_{unobs} = \{0\}$. The observability matrix of $\Sigma$ is

  $\mathcal{O}(C, A) = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \end{pmatrix}$.   (3.35)

The finite observability gramians at time $t < \infty$ are

  $\mathcal{Q}(t) = \int_0^t e^{A^* \tau} C^* C e^{A \tau}\, d\tau$, $t \in \mathbb{R}_+$,   (3.36)

  $\mathcal{Q}(t) = \mathcal{O}_t^*(C, A)\, \mathcal{O}_t(C, A) = \sum_{k=0}^{t-1} (A^*)^k C^* C A^k$, $t \in \mathbb{Z}_+$.   (3.37)

Theorem 3.2 Given $\Sigma = (A, C)$, for both $t \in \mathbb{Z}_+$ and $t \in \mathbb{R}_+$, $X_{unobs}$ is a linear subspace of $X$ given by

  $X_{unobs} = \ker \mathcal{O}(C, A) = \ker \mathcal{Q}(t) = \{ x \in X : \mathcal{Q}(t)\, x = 0 \}$,   (3.38)

where $t > 0$, $t \ge n$, depending on whether the system is continuous- or discrete-time. Thus, $\Sigma$ is completely observable if, and only if, $\mathrm{rank}\, \mathcal{O}(C, A) = n$.

Remark 3.1 (a) Given $y(t)$, $t \ge 0$, let $Y$ denote the following $np$-vector:

  $Y = \big( y(0)^* \; y(1)^* \; \cdots \; y(n-1)^* \big)^*$ for $t \in \mathbb{Z}$, and $Y = \big( y(0)^* \; \dot{y}(0)^* \; \cdots \; y^{(n-1)}(0)^* \big)^*$ for $t \in \mathbb{R}$.

The observation problem reduces to the solution of the linear set of equations

  $Y = \mathcal{O}_n(C, A)\, x(0)$.

This set of equations is solvable for all initial conditions $x(0)$, i.e. it has a unique solution, if and only if $\Sigma$ is observable. Otherwise $x(0)$ can only be determined modulo $X_{unobs}$, i.e. up to an arbitrary linear combination of unobservable states.

(b) If $x_1$, $x_2$ are not reachable, there is a trajectory passing through the two points if, and only if, $x_2 - \Phi(T)\, x_1 \in X_{reach}$ for some $T$, where $\Phi(T) = e^{AT}$ for continuous-time systems and $\Phi(T) = A^T$ for discrete-time systems. This shows that if we start from a reachable state $x_1$, the states that can be attained are also within the reachable subspace.

A concept which is closely related to reachability is that of controllability. Here, instead of driving the zero state to a desired state, a given non-zero state is steered to the zero state. Furthermore, a state $\bar{x} \in X$ is unreconstructible iff $y(t) = C\, \phi(0; \bar{x}; t) = 0$ for all $t \le 0$, i.e. iff $\bar{x}$ is indistinguishable from the zero state for all $t \le 0$.

The next result shows that for continuous-time systems the concepts of reachability and controllability are equivalent, while for discrete-time systems the latter is weaker. Similarly, while for continuous-time systems the concepts of observability and reconstructibility are equivalent, for discrete-time systems the latter is weaker. For this reason, only the concepts of reachability and observability are used in the sequel.

Proposition 3.2 Given the triple $(C, A, B)$: (a) For continuous-time systems $X_{contr} = X_{reach}$ and $X_{unrec} = X_{unobs}$. (b) For discrete-time systems $X_{reach} \subseteq X_{contr}$ and $X_{unrec} \subseteq X_{unobs}$; in particular, $X_{contr} = X_{reach} + \ker A^n$ and $X_{unrec} = X_{unobs} \cap \mathrm{im}\, A^n$.
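A hedged sketch of (3.37) for an arbitrary discrete-time pair $(A, C)$, chosen deliberately unobservable (the output sees only the first state, which evolves autonomously):

    import numpy as np

    A = np.array([[0.5, 0.0], [1.0, 0.3]])
    C = np.array([[1.0, 0.0]])
    t = 4

    O_t = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(t)])
    Q_t = sum(np.linalg.matrix_power(A, k).T @ C.T @ C
              @ np.linalg.matrix_power(A, k) for k in range(t))
    assert np.allclose(O_t.T @ O_t, Q_t)     # gramian = O_t* O_t

    # rank O_t < n, so the pair is not observable; e2 lies in ker Q_t.
    print("rank O_t =", np.linalg.matrix_rank(O_t))
    print("Q_t @ e2 =", Q_t @ np.array([0.0, 1.0]))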

3.2.4 The infinite gramians

Consider a continuous-time linear system $\Sigma$ which is stable, i.e. all eigenvalues of $A$ have negative real parts. In this case both (3.33) and (3.36) are defined for $t = \infty$. In addition, because of Plancherel's formula, the gramians can also be expressed in the frequency domain (the expressions on the right-hand side):

  $\mathcal{P} = \int_0^\infty e^{A\tau} B B^* e^{A^* \tau}\, d\tau = \frac{1}{2\pi} \int_{-\infty}^{\infty} (i\omega I - A)^{-1} B B^* (-i\omega I - A^*)^{-1}\, d\omega$,   (3.39)

  $\mathcal{Q} = \int_0^\infty e^{A^* \tau} C^* C e^{A\tau}\, d\tau = \frac{1}{2\pi} \int_{-\infty}^{\infty} (-i\omega I - A^*)^{-1} C^* C (i\omega I - A)^{-1}\, d\omega$.   (3.40)

$\mathcal{P}$, $\mathcal{Q}$ are the infinite reachability and infinite observability gramians associated with $\Sigma$. These gramians satisfy the following linear matrix equations, called Lyapunov equations; see also [21, 8].

Proposition 3.3 Given the stable, continuous-time system $\Sigma$ as above, the associated infinite reachability gramian $\mathcal{P}$ satisfies the continuous-time Lyapunov equation

  $A \mathcal{P} + \mathcal{P} A^* + B B^* = 0$,   (3.41)

while the associated infinite observability gramian satisfies

  $A^* \mathcal{Q} + \mathcal{Q} A + C^* C = 0$.   (3.42)

Proof. Due to stability,

  $A \mathcal{P} + \mathcal{P} A^* = \int_0^\infty \frac{d}{d\tau} \left[ e^{A\tau} B B^* e^{A^* \tau} \right] d\tau = \left[ e^{A\tau} B B^* e^{A^* \tau} \right]_0^\infty = -B B^*$.

This proves (3.41); (3.42) is proved similarly. □

If the discrete-time system $\Sigma$ is stable, i.e. all eigenvalues of $A$ are inside the unit disc, the gramians (3.34) as well as (3.37) are defined for $t = \infty$:

  $\mathcal{P} = \mathcal{R}(A, B)\, \mathcal{R}^*(A, B) = \sum_{k \ge 0} A^k B B^* (A^*)^k = \frac{1}{2\pi} \int_0^{2\pi} (e^{i\theta} I - A)^{-1} B B^* (e^{-i\theta} I - A^*)^{-1}\, d\theta$,   (3.43)

  $\mathcal{Q} = \mathcal{O}^*(C, A)\, \mathcal{O}(C, A) = \sum_{k \ge 0} (A^*)^k C^* C A^k = \frac{1}{2\pi} \int_0^{2\pi} (e^{-i\theta} I - A^*)^{-1} C^* C (e^{i\theta} I - A)^{-1}\, d\theta$.   (3.44)

Notice that $\mathcal{P}$ can be written as $\mathcal{P} = B B^* + A \mathcal{P} A^*$; moreover, $\mathcal{Q} = C^* C + A^* \mathcal{Q} A$. These are the so-called discrete Lyapunov or Stein equations:

Proposition 3.4 Given the stable, discrete-time system $\Sigma$ as above, the associated infinite reachability gramian $\mathcal{P}$ satisfies the discrete-time Lyapunov equation

  $A \mathcal{P} A^* + B B^* = \mathcal{P}$,

while the associated infinite observability gramian $\mathcal{Q}$ satisfies

  $A^* \mathcal{Q} A + C^* C = \mathcal{Q}$.   (3.45)

We conclude this section by summarizing some properties of the system gramians. For details see e.g. [23, 14, 7].

Lemma 3.1 Let $\mathcal{P}$ and $\mathcal{Q}$ denote the infinite gramians of a linear stable system $\Sigma$. (a) The minimal energy required to steer the state of the system from $0$ to $x_r$ is $x_r^* \mathcal{P}^{-1} x_r$. (b) The maximal energy produced by observing the output of the system whose initial state is $x_o$ is $x_o^* \mathcal{Q}\, x_o$. (c) The states which are difficult to reach, i.e. require large amounts of energy, are in the span of those eigenvectors of $\mathcal{P}$ which correspond to small eigenvalues. Furthermore, the states which are difficult to observe, i.e. produce small observation energy, are in the span of those eigenvectors of $\mathcal{Q}$ which correspond to small eigenvalues.
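A hedged check of Proposition 3.3 on an arbitrary stable system: the gramian from the Lyapunov equation agrees with the defining integral (3.39), here truncated at a large horizon:

    import numpy as np
    from scipy.linalg import expm, solve_continuous_lyapunov
    from scipy.integrate import quad_vec

    A = np.array([[-1.0, 0.0], [1.0, -3.0]])
    B = np.array([[1.0], [1.0]])

    P_lyap = solve_continuous_lyapunov(A, -B @ B.T)   # A P + P A* = -B B*

    integrand = lambda tau: expm(A * tau) @ B @ B.T @ expm(A.T * tau)
    P_int, _ = quad_vec(integrand, 0.0, 50.0)         # e^{At} has decayed by t = 50
    assert np.allclose(P_lyap, P_int, atol=1e-8)
    print(P_lyap)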

Remark 3.2 Computation of the reachability gramian. Given the pair $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, the reachability gramian is defined by (3.33). We assume that the eigenvalues of $A$ are distinct; then $A$ is diagonalizable. Let the EVD (Eigenvalue Decomposition) be

  $A = V \Lambda V^{-1}$, where $V = (v_1 \; v_2 \; \cdots \; v_n)$, $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$;

$v_i$ denotes the eigenvector corresponding to the eigenvalue $\lambda_i$. Notice that if the $i$-th eigenvalue is complex, the corresponding eigenvector will also be complex. Let $W = V^{-1} B \in \mathbb{C}^{n \times m}$, and denote by $W_i \in \mathbb{C}^{1 \times m}$ the $i$-th row of $W$. With the notation introduced above the following formula holds:

  $\mathcal{P}(T) = V R(T) V^*$, where $R_{ij}(T) = W_i W_j^*\, \frac{\exp[(\lambda_i + \bar{\lambda}_j) T] - 1}{\lambda_i + \bar{\lambda}_j}$.   (3.46)

Furthermore, if $\lambda_i + \bar{\lambda}_j = 0$, then $R_{ij}(T) = W_i W_j^*\, T$. If in addition $A$ is stable, the infinite gramian, which satisfies (3.41), is given by $\mathcal{P} = V R V^*$, where $R_{ij} = -\frac{W_i W_j^*}{\lambda_i + \bar{\lambda}_j}$. This formula accomplishes both the computation of the exponential and the integration explicitly, in terms of the EVD of $A$.

Example 3.1 Consider the parallel connection of two branches, the first consisting of the series connection of an inductor $L$ with a resistor $R_L$, and the other consisting of the series connection of a capacitor $C$ with a resistor $R_C$. For suitable element values the state matrix $A$ is diagonal with eigenvalues $-1$ and $-2$, so that $e^{At} = \mathrm{diag}(e^{-t}, e^{-2t})$; the finite gramian $\mathcal{P}(T)$ then follows entry-wise from (3.46), and the infinite gramian is $\mathcal{P} = \lim_{T \to \infty} \mathcal{P}(T)$.

If the system is asymptotically stable, i.e. $\mathrm{Re}(\lambda_i(A)) < 0$, the reachability gramian is defined for $T = \infty$, and it satisfies (3.41). Hence the infinite gramian can be computed as the solution of this linear matrix equation; no explicit calculation of matrix exponentials, multiplication and subsequent integration is required. In Matlab, if in addition the pair $(A, B)$ is controllable, we have P = lyap(A, B*B'). For the matrices defined earlier, the lyap command (in format long e) returns the corresponding numerical value of $\mathcal{P}$.

Example 3.2 A second simple example involves a non-diagonal $A$ whose matrix exponential contains the modes $e^{-t}$ and $e^{-2t}$. Again $\mathcal{P}(T)$ can be computed explicitly from (3.46), and P = lyap(A, B*B') recovers the limit $\mathcal{P} = \lim_{T \to \infty} \mathcal{P}(T)$.

A transformation between continuous- and discrete-time systems. One transformation between continuous- and discrete-time systems is given by the bilinear transformation of the complex plane onto itself:

  $s = \frac{z - 1}{z + 1}$.

In particular, the transfer function $H_c(s)$ of a continuous-time system is obtained from that of a discrete-time one, $H_d(z)$, as follows:

  $H_c(s) = H_d\!\left( \frac{1 + s}{1 - s} \right)$.

This transformation maps the left half of the complex plane onto the unit disc and vice versa. The matrices $(A, B, C, D)$ of the two systems are related as given in the following table:

  Continuous-time: $A_c = (A_d + I)^{-1}(A_d - I)$, $B_c = \sqrt{2}\, (A_d + I)^{-1} B_d$,
                   $C_c = \sqrt{2}\, C_d (A_d + I)^{-1}$, $D_c = D_d - C_d (A_d + I)^{-1} B_d$.

  Discrete-time:   $A_d = (I + A_c)(I - A_c)^{-1}$, $B_d = \sqrt{2}\, (I - A_c)^{-1} B_c$,
                   $C_d = \sqrt{2}\, C_c (I - A_c)^{-1}$, $D_d = D_c + C_c (I - A_c)^{-1} B_c$.

Proposition 3.5 Given the stable continuous-time system $\Sigma_c$ with infinite gramians $\mathcal{P}$, $\mathcal{Q}$, let $\Sigma_d$, with infinite gramians $\mathcal{P}_d$, $\mathcal{Q}_d$, be the discrete-time system obtained by means of the transformation given above. Then the bilinear transformation preserves the gramians,

  $\mathcal{P}_d = \mathcal{P}$ and $\mathcal{Q}_d = \mathcal{Q}$,

and it also preserves the infinity norms (see Section 4).
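A hedged numerical check of Proposition 3.5: apply the bilinear transformation to an arbitrary stable continuous-time pair and compare the two reachability gramians from their respective Lyapunov and Stein equations:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

    Ac = np.array([[-1.0, 0.5], [0.0, -2.0]])
    Bc = np.array([[1.0], [1.0]])

    I = np.eye(2)
    M = np.linalg.inv(I - Ac)
    Ad = (I + Ac) @ M                 # (I + Ac) and (I - Ac)^{-1} commute
    Bd = np.sqrt(2.0) * M @ Bc

    P_cont = solve_continuous_lyapunov(Ac, -Bc @ Bc.T)  # Ac P + P Ac* = -Bc Bc*
    P_disc = solve_discrete_lyapunov(Ad, Bd @ Bd.T)     # Ad P Ad* - P = -Bd Bd*
    assert np.allclose(P_cont, P_disc)
    print(P_cont)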

3.3 The realization problem

In the preceding sections we have presented two ways of representing linear systems: the internal and the external. The former makes use of the inputs $u$, states $x$, and outputs $y$; the latter makes use only of the inputs $u$ and the outputs $y$. The question thus arises as to the relationship between these two representations.

In one direction this problem is trivial: given the internal representation of a system, the external one is readily derived. As shown earlier, the transfer function of the system is given by (3.31), $H(s) = C(sI - A)^{-1}B + D$, while from (3.28) the Markov parameters are given by

  $S_0 = D$, $\quad S_k = C A^{k-1} B \in \mathbb{R}^{p \times m}$, $k \in \mathbb{N}$.   (3.47)

The converse problem, i.e. given the external representation, derive the internal one, is far from trivial. This is the realization problem: given the external representation of a linear system, construct an internal or state variable representation. In other words, given the impulse response $h$, or equivalently the transfer function $H$, or the Markov parameters $S_k$ of a system, construct $(C, A, B, D)$ such that (3.47) holds. It readily follows, without computation, that $D = S_0$. Hence the following problem results:

Definition 3.5 Given the sequence of $p \times m$ matrices $S_k$, $k \in \mathbb{N}$, the realization problem consists in finding a positive integer $n$ and constant matrices $(C, A, B)$ such that

  $S_k = C A^{k-1} B$, $\quad C \in \mathbb{R}^{p \times n}$, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $k \in \mathbb{N}$.   (3.48)

The triple $(C, A, B)$ is then called a realization of the sequence $S_k$, and the latter is called realizable. The realization problem is sometimes referred to as the problem of construction of state for linear systems described by convolution relationships.

Remark 3.3 Realization can also be considered as the problem of converting frequency domain data into time domain data. The reason is that measurement of the Markov parameters is closely related to measurement of the frequency response.

Example 3.3 Consider the following (scalar) sequences:

  $S^{(1)} = (1, 2, 3, 4, 5, \dots)$, the sequence of natural numbers;
  $S^{(2)} = (1, 1, 2, 3, 5, 8, \dots)$, the sequence of Fibonacci numbers;
  $S^{(3)} = (2, 3, 5, 7, 11, \dots)$, the sequence of prime numbers;
  $S^{(4)} = (1, 1, \frac{1}{2!}, \frac{1}{3!}, \dots)$, the sequence of inverse factorials.

Which sequences are realizable? (As will follow from the results below, the first two satisfy a linear recursion with constant coefficients and are therefore realizable, while the primes and the inverse factorials are not.)

Problem 3.1 The following problems arise: (a) Existence: given a sequence $S_k$, $k \ge 1$, determine whether there exist a positive integer $n$ and a triple of matrices $(C, A, B)$ such that (3.48) holds. (b) Uniqueness: in case such an integer and triple exist, are they unique in some sense? (c) Construction: in case of existence, find $n$ and give an algorithm to construct such a triple.

The main tool for answering the above questions is the matrix $\mathcal{H}$ of Markov parameters:

  $\mathcal{H} = \begin{pmatrix} S_1 & S_2 & S_3 & \cdots \\ S_2 & S_3 & S_4 & \cdots \\ S_3 & S_4 & S_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$.   (3.49)

This is the Hankel matrix; it has infinitely many rows, infinitely many columns, and block Hankel structure, i.e. $(\mathcal{H})_{ij} = S_{i+j-1}$ for $i, j \ge 1$.

We start by listing conditions related to the realization problem.

Lemma 3.2 Each statement below implies the one which follows: (a) The sequence $S_k$, $k \in \mathbb{N}$, is realizable. (b) The formal power series $\sum_{k \ge 1} S_k s^{-k}$ is rational. (c) The sequence $S_k$, $k \in \mathbb{N}$, satisfies a recursion with constant coefficients, i.e. there exist a positive integer $r$ and constants $\alpha_i$, $i = 0, \dots, r-1$, such that

  $\alpha_0 S_k + \alpha_1 S_{k+1} + \alpha_2 S_{k+2} + \cdots + \alpha_{r-1} S_{k+r-1} + S_{k+r} = 0$, $k \ge 1$.   (3.50)

(d) The rank of $\mathcal{H}$ is finite.

Proof. (a) $\Rightarrow$ (b): Realizability implies (3.48). Hence $\sum_{k \ge 1} S_k s^{-k} = C(sI - A)^{-1}B$, which is rational. This proves (b).

(b) $\Rightarrow$ (c): Let $\det(sI - A) = \alpha_0 + \alpha_1 s + \cdots + \alpha_{r-1} s^{r-1} + s^r$. The previous relationship implies

  $\det(sI - A) \sum_{k \ge 1} S_k s^{-k} = C\, \mathrm{adj}(sI - A)\, B$,

where $\mathrm{adj}(M)$ denotes the adjugate of the matrix $M$. On the left-hand side there are terms having both positive and negative powers of $s$, while on the right-hand side there are only terms having positive powers of $s$. Hence the coefficients of the negative powers of $s$ on the left-hand side must be identically zero; this implies precisely (3.50).

(c) $\Rightarrow$ (d): Relationships (3.50) imply that the $(r+1)$-st block column of $\mathcal{H}$ is a linear combination of the previous $r$ block columns. Furthermore, because of the block Hankel structure, every block column of $\mathcal{H}$ is a sub-column of the previous one; this implies that all block columns after the $r$-th are linearly dependent on the first $r$, which in turn implies the finiteness of the rank of $\mathcal{H}$. □

The following lemma describes a fundamental property of $\mathcal{H}$; it also provides a direct proof of the implication (a) $\Rightarrow$ (d).

Lemma 3.3 Factorization of $\mathcal{H}$. If the sequence of Markov parameters is realizable by means of the triple $(C, A, B)$, then $\mathcal{H}$ can be factored as follows:

  $\mathcal{H} = \mathcal{O}(C, A)\, \mathcal{R}(A, B)$.   (3.51)

Consequently, if the sequence of Markov parameters is realizable, the rank of $\mathcal{H}$ is finite.

Proof. If $S_k$, $k \in \mathbb{N}$, is realizable, the relationships $S_k = C A^{k-1} B$, $k \in \mathbb{N}$, hold true. Hence

  $\mathcal{H} = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \end{pmatrix} \big( B \;\; AB \;\; A^2 B \;\; \cdots \big) = \mathcal{O}(C, A)\, \mathcal{R}(A, B)$.

It follows that $\mathrm{rank}\, \mathcal{H} \le \min\{ \mathrm{rank}\, \mathcal{O},\, \mathrm{rank}\, \mathcal{R} \} \le n < \infty$. □

In order to discuss the uniqueness issue of realizations, we need to recall the concept of equivalent systems, defined by (3.29). In particular, Proposition 3.1 asserts that equivalent triples have the same Markov parameters. Hence the best one can hope for in connection with the uniqueness question is that realizations be equivalent. Indeed, as shown in the next section, this holds for realizations with the smallest possible dimension.

3.3.1 The solution of the realization problem

We are now ready to answer the three questions posed at the beginning of the previous sub-subsection. This also proves the implication (d) $\Rightarrow$ (a), and hence the equivalence of the statements of Lemma 3.2.

Theorem 3.3 Main Result. (1) The sequence $S_k$, $k \in \mathbb{N}$, is realizable if, and only if, $\mathrm{rank}\, \mathcal{H} = n < \infty$. (2) The state space dimension of any solution is at least $n$. All realizations which are minimal are both reachable and observable; conversely, every realization which is reachable and observable is minimal. (3) All minimal realizations are equivalent.

Lemma 3.3 proves part (1) of the main theorem in one direction. To prove (1) in the other direction, we will actually construct a realization, assuming that the rank of $\mathcal{H}$ is finite.

Lemma 3.4 Silverman Realization Algorithm. Let $\mathrm{rank}\, \mathcal{H} = n < \infty$. Find an $n \times n$ submatrix $\Gamma$ of $\mathcal{H}$ which has full rank. Construct the following matrices: (i) $\bar{\Gamma} \in \mathbb{R}^{n \times n}$: it is composed of the same rows as $\Gamma$; its columns are obtained by shifting those of $\Gamma$ by one block column (i.e. by $m$ columns). (ii) $\Delta \in \mathbb{R}^{n \times m}$: it is composed of the same rows as $\Gamma$; its columns are the first $m$ columns of $\mathcal{H}$. (iii) $\Lambda \in \mathbb{R}^{p \times n}$: it is composed of the same columns as $\Gamma$; its rows are the first $p$ rows of $\mathcal{H}$.

The triple $(C, A, B)$, where $A = \Gamma^{-1} \bar{\Gamma}$, $B = \Gamma^{-1} \Delta$, and $C = \Lambda$, is a realization of dimension $n$ of the given sequence of Markov parameters.

Proof. By assumption there exist $n = \mathrm{rank}\, \mathcal{H}$ columns of $\mathcal{H}$ which span its column space. Denote these columns by $\Gamma$; note that the columns making up $\Gamma$ need not be consecutive columns of $\mathcal{H}$. Let $\bar{\Gamma}$ denote the $n$ columns of $\mathcal{H}$ obtained by shifting those of $\Gamma$ by one block column, i.e. by $m$ individual columns; let $\Delta$ denote the first $m$ columns of $\mathcal{H}$. There exist unique matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, such that

  $\bar{\Gamma} = \Gamma A$,   (3.52)

  $\Delta = \Gamma B$.   (3.53)
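A hedged sketch of the Silverman algorithm (Lemma 3.4) for the scalar Fibonacci sequence of Example 3.3 ($p = m = 1$, so block columns are single columns; $\Gamma$ is taken as the leading full-rank $2 \times 2$ submatrix of the Hankel matrix):

    import numpy as np

    S = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]       # S_1, S_2, ... (Fibonacci)
    H = np.array([[S[i + j] for j in range(4)] for i in range(4)], dtype=float)
    n = np.linalg.matrix_rank(H)                 # n = 2

    Gamma = H[:n, :n]                            # full-rank n x n submatrix
    Gamma_bar = H[:n, 1:n + 1]                   # same rows, columns shifted by one
    Delta = H[:n, :1]                            # first m = 1 column(s)
    Lam = H[:1, :n]                              # first p = 1 row(s)

    A = np.linalg.solve(Gamma, Gamma_bar)        # A = Gamma^{-1} Gamma_bar
    B = np.linalg.solve(Gamma, Delta)            # B = Gamma^{-1} Delta
    C = Lam

    # Check S_k = C A^{k-1} B for all available Markov parameters.
    for k, Sk in enumerate(S, start=1):
        val = (C @ np.linalg.matrix_power(A, k - 1) @ B)[0, 0]
        assert np.isclose(val, Sk)
    print("A =\n", A)    # the companion matrix [[0, 1], [1, 1]] of the Fibonacci recursion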


More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

Chapter 8 Integral Operators

Chapter 8 Integral Operators Chapter 8 Integral Operators In our development of metrics, norms, inner products, and operator theory in Chapters 1 7 we only tangentially considered topics that involved the use of Lebesgue measure,

More information

A BRIEF INTRODUCTION TO HILBERT SPACE FRAME THEORY AND ITS APPLICATIONS AMS SHORT COURSE: JOINT MATHEMATICS MEETINGS SAN ANTONIO, 2015 PETER G. CASAZZA Abstract. This is a short introduction to Hilbert

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Contents. 1 State-Space Linear Systems 5. 2 Linearization Causality, Time Invariance, and Linearity 31

Contents. 1 State-Space Linear Systems 5. 2 Linearization Causality, Time Invariance, and Linearity 31 Contents Preamble xiii Linear Systems I Basic Concepts 1 I System Representation 3 1 State-Space Linear Systems 5 1.1 State-Space Linear Systems 5 1.2 Block Diagrams 7 1.3 Exercises 11 2 Linearization

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or

More information

Correlation at Low Temperature: I. Exponential Decay

Correlation at Low Temperature: I. Exponential Decay Correlation at Low Temperature: I. Exponential Decay Volker Bach FB Mathematik; Johannes Gutenberg-Universität; D-55099 Mainz; Germany; email: vbach@mathematik.uni-mainz.de Jacob Schach Møller Ý Département

More information

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation.

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation. 1 2 Linear Systems In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation 21 Matrix ODEs Let and is a scalar A linear function satisfies Linear superposition ) Linear

More information

Balanced Truncation 1

Balanced Truncation 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information Introduction Consider a linear system y = Φx where Φ can be taken as an m n matrix acting on Euclidean space or more generally, a linear operator on a Hilbert space. We call the vector x a signal or input,

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

João P. Hespanha. January 16, 2009

João P. Hespanha. January 16, 2009 LINEAR SYSTEMS THEORY João P. Hespanha January 16, 2009 Disclaimer: This is a draft and probably contains a few typos. Comments and information about typos are welcome. Please contact the author at hespanha@ece.ucsb.edu.

More information

Math 396. Quotient spaces

Math 396. Quotient spaces Math 396. Quotient spaces. Definition Let F be a field, V a vector space over F and W V a subspace of V. For v, v V, we say that v v mod W if and only if v v W. One can readily verify that with this definition

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

a s 1.3 Matrix Multiplication. Know how to multiply two matrices and be able to write down the formula

a s 1.3 Matrix Multiplication. Know how to multiply two matrices and be able to write down the formula Syllabus for Math 308, Paul Smith Book: Kolman-Hill Chapter 1. Linear Equations and Matrices 1.1 Systems of Linear Equations Definition of a linear equation and a solution to a linear equations. Meaning

More information

Finding small factors of integers. Speed of the number-field sieve. D. J. Bernstein University of Illinois at Chicago

Finding small factors of integers. Speed of the number-field sieve. D. J. Bernstein University of Illinois at Chicago The number-field sieve Finding small factors of integers Speed of the number-field sieve D. J. Bernstein University of Illinois at Chicago Prelude: finding denominators 87366 22322444 in R. Easily compute

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Math 1553, Introduction to Linear Algebra

Math 1553, Introduction to Linear Algebra Learning goals articulate what students are expected to be able to do in a course that can be measured. This course has course-level learning goals that pertain to the entire course, and section-level

More information

Block-tridiagonal matrices

Block-tridiagonal matrices Block-tridiagonal matrices. p.1/31 Block-tridiagonal matrices - where do these arise? - as a result of a particular mesh-point ordering - as a part of a factorization procedure, for example when we compute

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Fall 2016 MATH*1160 Final Exam

Fall 2016 MATH*1160 Final Exam Fall 2016 MATH*1160 Final Exam Last name: (PRINT) First name: Student #: Instructor: M. R. Garvie Dec 16, 2016 INSTRUCTIONS: 1. The exam is 2 hours long. Do NOT start until instructed. You may use blank

More information

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

An Improved Quantum Fourier Transform Algorithm and Applications

An Improved Quantum Fourier Transform Algorithm and Applications An Improved Quantum Fourier Transform Algorithm and Applications Lisa Hales Group in Logic and the Methodology of Science University of California at Berkeley hales@cs.berkeley.edu Sean Hallgren Ý Computer

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

I teach myself... Hilbert spaces

I teach myself... Hilbert spaces I teach myself... Hilbert spaces by F.J.Sayas, for MATH 806 November 4, 2015 This document will be growing with the semester. Every in red is for you to justify. Even if we start with the basic definition

More information

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION CONTENTS VOLUME VII

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION CONTENTS VOLUME VII CONTENTS VOLUME VII Control of Linear Multivariable Systems 1 Katsuhisa Furuta,Tokyo Denki University, School of Science and Engineering, Ishizaka, Hatoyama, Saitama, Japan 1. Linear Multivariable Systems

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

SYLLABUS. 1 Linear maps and matrices

SYLLABUS. 1 Linear maps and matrices Dr. K. Bellová Mathematics 2 (10-PHY-BIPMA2) SYLLABUS 1 Linear maps and matrices Operations with linear maps. Prop 1.1.1: 1) sum, scalar multiple, composition of linear maps are linear maps; 2) L(U, V

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2 MATH 7- Final Exam Sample Problems Spring 7 ANSWERS ) ) ). 5 points) Let A be a matrix such that A =. Compute A. ) A = A ) = ) = ). 5 points) State ) the definition of norm, ) the Cauchy-Schwartz inequality

More information

Hands-on Matrix Algebra Using R

Hands-on Matrix Algebra Using R Preface vii 1. R Preliminaries 1 1.1 Matrix Defined, Deeper Understanding Using Software.. 1 1.2 Introduction, Why R?.................... 2 1.3 Obtaining R.......................... 4 1.4 Reference Manuals

More information

Solution for Homework 5

Solution for Homework 5 Solution for Homework 5 ME243A/ECE23A Fall 27 Exercise 1 The computation of the reachable subspace in continuous time can be handled easily introducing the concepts of inner product, orthogonal complement

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

TOPICS IN HARMONIC ANALYSIS WITH APPLICATIONS TO RADAR AND SONAR. Willard Miller

TOPICS IN HARMONIC ANALYSIS WITH APPLICATIONS TO RADAR AND SONAR. Willard Miller TOPICS IN HARMONIC ANALYSIS WITH APPLICATIONS TO RADAR AND SONAR Willard Miller October 23 2002 These notes are an introduction to basic concepts and tools in group representation theory both commutative

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

Recall that any inner product space V has an associated norm defined by

Recall that any inner product space V has an associated norm defined by Hilbert Spaces Recall that any inner product space V has an associated norm defined by v = v v. Thus an inner product space can be viewed as a special kind of normed vector space. In particular every inner

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products Linear Algebra Paul Yiu Department of Mathematics Florida Atlantic University Fall 2011 6A: Inner products In this chapter, the field F = R or C. We regard F equipped with a conjugation χ : F F. If F =

More information

Preface to Second Edition... vii. Preface to First Edition...

Preface to Second Edition... vii. Preface to First Edition... Contents Preface to Second Edition..................................... vii Preface to First Edition....................................... ix Part I Linear Algebra 1 Basic Vector/Matrix Structure and

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

LINEAR ALGEBRA SUMMARY SHEET.

LINEAR ALGEBRA SUMMARY SHEET. LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

HITCHIN KOBAYASHI CORRESPONDENCE, QUIVERS, AND VORTICES INTRODUCTION

HITCHIN KOBAYASHI CORRESPONDENCE, QUIVERS, AND VORTICES INTRODUCTION ËÁ Ì ÖÛÒ ËÖĐÓÒÖ ÁÒØÖÒØÓÒÐ ÁÒ ØØÙØ ÓÖ ÅØÑØÐ ÈÝ ÓÐØÞÑÒÒ ¹½¼¼ ÏÒ Ù ØÖ ÀØÒßÃÓÝ ÓÖÖ ÓÒÒ ÉÙÚÖ Ò ÎÓÖØ ÄÙ ÐÚÖÞßÓÒ ÙÐ Ç Ö ÖßÈÖ ÎÒÒ ÈÖÖÒØ ËÁ ½¾ ¾¼¼ µ ÂÒÙÖÝ ½ ¾¼¼ ËÙÓÖØ Ý Ø Ù ØÖÒ ÖÐ ÅÒ ØÖÝ Ó ÙØÓÒ ËÒ Ò ÙÐØÙÖ ÚÐÐ Ú

More information

Classes of Linear Operators Vol. I

Classes of Linear Operators Vol. I Classes of Linear Operators Vol. I Israel Gohberg Seymour Goldberg Marinus A. Kaashoek Birkhäuser Verlag Basel Boston Berlin TABLE OF CONTENTS VOLUME I Preface Table of Contents of Volume I Table of Contents

More information

The Important State Coordinates of a Nonlinear System

The Important State Coordinates of a Nonlinear System The Important State Coordinates of a Nonlinear System Arthur J. Krener 1 University of California, Davis, CA and Naval Postgraduate School, Monterey, CA ajkrener@ucdavis.edu Summary. We offer an alternative

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true?

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true? . Let m and n be two natural numbers such that m > n. Which of the following is/are true? (i) A linear system of m equations in n variables is always consistent. (ii) A linear system of n equations in

More information

LinGloss. A glossary of linear algebra

LinGloss. A glossary of linear algebra LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal

More information

DISTANCE BETWEEN BEHAVIORS AND RATIONAL REPRESENTATIONS

DISTANCE BETWEEN BEHAVIORS AND RATIONAL REPRESENTATIONS DISTANCE BETWEEN BEHAVIORS AND RATIONAL REPRESENTATIONS H.L. TRENTELMAN AND S.V. GOTTIMUKKALA Abstract. In this paper we study notions of distance between behaviors of linear differential systems. We introduce

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality (October 29, 2016) Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 2016-17/03 hsp.pdf] Hilbert spaces are

More information

On Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time

On Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time On Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time Alle-Jan van der Veen and Michel Verhaegen Delft University of Technology Department of Electrical Engineering

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information