Flat output characterization for linear systems using polynomial matrices

Available online at www.sciencedirect.com

Systems & Control Letters 48 (2003)

Flat output characterization for linear systems using polynomial matrices

J. Levine a,*, D.V. Nguyen b

a Centre Automatique et Systèmes, École Nationale Supérieure des Mines de Paris, 35 rue Saint-Honoré, 77305 Fontainebleau Cedex, France
b Micro-Contrôle, Z.I. de Beaune-la-Rolande, BP 29, 45340 Beaune-la-Rolande, France

Received 2 October 2001; received in revised form 2 August 2002

This work was partly supported by Micro-Contrôle, a Newport Corporation company.
* Corresponding author. E-mail addresses: levine@cas.ensmp.fr (J. Levine), vnguyen@newport-fr.com (D.V. Nguyen).

0167-6911/03/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved. PII: S0167-6911(02)00257-8

Abstract

This paper is devoted to the study of linear flat outputs for linear controllable time-invariant systems in polynomial matrix form. We characterize the transformations expressing the system variables in terms of a linear flat output and its derivatives, called defining matrices, as the kernel of a polynomial matrix. An application to trajectory planning is presented, showing the usefulness of the present characterization. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Linear time-invariant system; Polynomial matrices; Flatness; Flat output; Trajectory planning

0. Introduction

Recently, connections between the trajectory planning problem, flatness and the Brunovsky controllability canonical form have been outlined [7,8], showing in particular that, even for linear time-invariant systems, flatness may be useful for reference trajectory (or feedforward) design [2,6] and predictive control [9]. If we restrict attention to linear time-invariant systems, it turns out that such systems are flat if and only if they are controllable, and that a particular flat output may be obtained as a by-product of the Brunovsky controllability canonical form [7,8] (concerning the Brunovsky canonical form and related topics, one may refer to [13,4,17,12,16,1,15]). Nevertheless, a general method to find all possible flat outputs and parametrize them is not available at present. In this paper, we propose a direct characterization of the matrices giving the system variables as functions of a linear flat output y and its time derivatives. It should be noted that proceeding the other way around, namely finding the expression of a flat output as a function of the system variables and their derivatives, appears to be much more difficult. However, our result is shown to be well suited to trajectory planning since, in this case, the inverse relations expressing the coordinates of the flat output y in terms of the system variables are not needed. An example of a high-precision positioning system is presented to illustrate this remark. Another point of interest concerns the kind of representation of linear systems chosen in this paper: the polynomial matrix representation has been

preferred here since, on the one hand, it does not require explicitly specifying the generally unknown number of time derivatives (corresponding here to the polynomial degree) involved in each expression and, on the other hand, structural system properties such as flatness are nicely described in the framework of the algebra of polynomials.

The paper is organized as follows. In Section 1, basic facts on flatness are recalled. Our main result, namely the characterization of the so-called defining matrices of linear flat outputs, is stated in Section 2. In Section 3, we present the application to trajectory planning. Finally, some recalls on polynomial matrices are proposed in Appendix A.

1. Recalls on flatness

Throughout this paper, we use the notation u^(k), x^(k), y^(k), ... for the kth-order derivative of u, x, y, ... with respect to time. A system is said to be differentially flat [7,8] if there exists a set of independent variables, referred to as a flat output, such that every other system variable (including the input variables) is a function of the flat output and of a finite number of its successive time derivatives. More precisely, the system

ẋ = f(x, u),    (1)

with x ∈ R^N and u ∈ R^m, is differentially flat if one can find a set of variables (the flat output)

y = h(x, u, u̇, ü, ..., u^(p)),  y ∈ R^m,    (2)

with p a finite m-tuple of integers,¹ such that

x = φ(y, ẏ, ÿ, ..., y^(q)),  u = ψ(y, ẏ, ÿ, ..., y^(q+1)),    (3)

with q a finite m-tuple of integers, and such that the system equations

(d/dt) φ(y, ẏ, ..., y^(q)) = f(φ(y, ẏ, ..., y^(q)), ψ(y, ẏ, ..., y^(q+1)))

are identically satisfied. The reader is invited to refer to [7,8] for further reading concerning the various consequences of this property for motion planning and feedback design. Since we focus attention on linear systems, we recall from [7,8] the following result:

Proposition 1. A linear system is flat if and only if it is controllable.

In the remaining part of this paper, we make extensive use of some classical concepts and tools of polynomial matrix theory. The reader may refer to Appendix A for a quick introduction and a survey of the main results. More advanced aspects can be found in [11,12,14,17].

2. Flat output of a linear system in polynomial matrix form

We consider linear systems in the following polynomial matrix representation:

A(s)x = Bu,    (4)

with s = d/dt, A(s) an n × n matrix whose entries are polynomials in the formal variable s, and B an n × m constant matrix of rank m, with 1 ≤ m < n.² System (4) is assumed controllable, i.e. A(s) and B are left coprime (see Appendix A.3). Note that x, in (4), is not a system state, but a partial state of dimension n. In general n is smaller than the dimension N of a reachable state-space realization. We call linear flat output an output y defined by (2) and (3) with h, φ and ψ linear, namely

x_i = Σ_{j=1}^{m} Σ_{k=0}^{q_j} α_{i,j,k} y_j^(k),  i = 1, ..., n,
u_i = Σ_{j=1}^{m} Σ_{k=0}^{q_j+1} β_{i,j,k} y_j^(k),  i = 1, ..., m,    (5)

and y a linear combination of x, u and a finite number of derivatives of u. In the polynomial matrix language, (5) reads

x = P(s)y,  u = Q(s)y,    (6)

where P (resp. Q) is an n × m (resp. m × m) polynomial matrix in s, with entries P_{i,j}(s) = Σ_{k=0}^{q_j} α_{i,j,k} s^k (resp. Q_{i,j}(s) = Σ_{k=0}^{q_j+1} β_{i,j,k} s^k). Matrices P and Q satisfying (6) are called defining matrices of the flat output y.

¹ For an m-tuple of integers p = (p_1, ..., p_m) and an m-dimensional vector u, the notation u^(p) stands for (u_1^(p_1), ..., u_m^(p_m)).
² If n ≤ m, the method described below is useless: in this case, the problem of finding a flat output is trivial since x, completed by m − n components of u, can be directly chosen as such.
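As a reading aid, here is a minimal sketch, assuming sympy, of how a polynomial matrix in s = d/dt acts on a vector of time functions, i.e. of how x = P(s)y in (6) encodes the linear combinations of y and its derivatives appearing in (5). The helper name apply_poly_matrix and the sample matrix are illustrative choices, not taken from the paper.

import sympy as sp

t, s = sp.symbols('t s')

def apply_poly_matrix(P, y):
    """Apply an r x m polynomial matrix P(s) to an m-vector y(t), reading s as d/dt."""
    r, m = P.shape
    out = []
    for i in range(r):
        expr = 0
        for j in range(m):
            poly = sp.Poly(P[i, j], s)
            # the coefficient of s^k multiplies the kth time derivative of y_j
            for k, c in enumerate(reversed(poly.all_coeffs())):
                expr += c * sp.diff(y[j], t, k)
        out.append(sp.simplify(expr))
    return sp.Matrix(out)

# toy flat output y(t) and a sample defining matrix P(s)
y = [sp.Function('y1')(t)]
P = sp.Matrix([[s**2 + 1], [2*s]])     # x1 = y1'' + y1, x2 = 2 y1'
print(apply_poly_matrix(P, y))

Applied to the sample P, this returns (ÿ_1 + y_1, 2ẏ_1)^T, which is exactly how the coefficients α_{i,j,k} of (5) appear as polynomial coefficients of the entries P_{i,j}(s).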

Our main result is the following:

Theorem 1. The variable y = (y_1, ..., y_m) is a linear flat output of system (4) if and only if its defining matrices P and Q are given by

C^T A(s) P(s) = 0,    (7)
A(s) P(s) = B Q(s),    (8)

with C an arbitrary matrix of rank n − m orthogonal to B (i.e. such that C^T B = 0), and with P(s) and Q(s) of rank m for every s and right coprime. In addition, a linear flat output y of the controllable system (4) always exists (and therefore its defining matrices P and Q always exist too), P(s) solution of (7) is defined up to right multiplication by a unimodular m × m matrix, and a possible choice of Q(s) is

Q(s) = (B^T B)^{-1} B^T A(s) P(s).    (9)

Proof. Assume that y is a linear flat output. Then x and u can be expressed as in (6). Since the mapping y ↦ P(s)y = x is onto by the flatness definition, the rank of P(s) must be equal to min(n, m) = m for every s. Combining (4) and (6), we get (8). Since B has rank m, there exists an n × (n − m) constant matrix C of rank n − m such that C^T B = 0_{n−m,m}, where 0_{n−m,m} denotes the (n − m) × m matrix whose entries are all 0. Thus, left multiplying (4) by C^T, P(s) satisfies C^T A(s) P(s) y = C^T B Q(s) y = 0_{n−m,1} for every smooth m-dimensional time function y, which implies that C^T A(s) P(s) = 0_{n−m,m}, and (7)–(8) are proved. Clearly, since A(s) and B are left coprime and since the rank of A(s)P(s) and B is m for every s, the same holds true for Q(s) (by contradiction). Using again the fact that y is a linear flat output, it must satisfy y = X(s)x + Y(s)u for some appropriate polynomial matrices X(s) of size m × n and Y(s) of size m × m. Thus, substituting the expressions of x and u, we get y = X(s)P(s)y + Y(s)Q(s)y, or X(s)P(s) + Y(s)Q(s) = I, which, according to the Bezout identity (see Theorem A.2), means that P(s) and Q(s) are right coprime.

Conversely, let P(s) and Q(s) be given by (7), (8) with P(s) and Q(s) right coprime. By the Bezout identity, there exist two polynomial matrices X(s) and Y(s) satisfying

X(s)P(s) + Y(s)Q(s) = I.    (10)

Right multiplying by y, we get X(s)P(s)y + Y(s)Q(s)y = y and, setting x = P(s)y and u = Q(s)y, we obtain that y = X(s)x + Y(s)u and

A(s)x = A(s)P(s)y = BQ(s)y = Bu,

which proves that y is a linear flat output, and the proof of the first part of the theorem is complete.

Let us now prove the existence of a solution to (7). By Theorem A.1, C^T A(s) can be decomposed into its Smith form, namely there exist two unimodular matrices V(s) ∈ GL_{n−m}(k[s]) (see Appendix A) and U(s) ∈ GL_n(k[s]) and an (n − m) × (n − m) diagonal polynomial matrix Δ(s) such that

V(s) C^T A(s) U(s) = (Δ(s)  0_{n−m,m})    (11)

or, with the decomposition U(s) = (Û(s)  Ũ(s)), Û(s) being of size n × (n − m) and Ũ(s) of size n × m, (11) reads

V(s) C^T A(s) Û(s) = Δ(s),  V(s) C^T A(s) Ũ(s) = 0_{n−m,m}.    (12)

Let P_1(s) be an arbitrary m × m unimodular matrix. We set

P(s) = U(s) (0_{n−m,m}^T  P_1(s)^T)^T = (Û(s)  Ũ(s)) (0_{n−m,m}^T  P_1(s)^T)^T = Ũ(s) P_1(s).    (13)

Then, by (12), V(s) C^T A(s) P(s) = V(s) C^T A(s) Ũ(s) P_1(s) = 0 and, since V(s) is unimodular, we have proved that C^T A(s) P(s) = 0, which means that P(s) defined by (13) is a solution of (7) for every m × m unimodular matrix P_1(s). Moreover, since Ũ(s) is a submatrix of the unimodular matrix U(s), its columns are independent for every s and it thus has rank m for every s. Clearly, the same holds true after right multiplication by the unimodular matrix P_1(s), which proves that rank(P(s)) = m for every s. Finally, the existence of Q(s) results from the fact that C^T A(s)P(s) = 0, or equivalently,

A(s)P(s) belongs to the range of B: denoting by ‖·‖ the L²([0, ∞); R^n) norm, we have min_{u ∈ R^m} ½‖A(s)P(s)y − Bu‖² = 0 and, at this minimum, the derivative of the squared norm with respect to u must vanish, B^T (A(s)P(s)y − Bu) = 0, which immediately gives (9) and completes the proof of the theorem.

Remark 1. The above theorem gives an effective way of computing P and Q (see Example 1 below): P can be any solution of (7) of rank m for every s (which can be directly obtained by elementary linear algebraic methods), and Q is then deduced by (9). Note that the Smith decomposition used in the proof is not needed in practice and is only given here to prove the existence of P.

Remark 2. A minimal flat output can be defined, in the spirit of Forney's minimal bases [10], in the sense that x may be described by a minimal number of derivatives of the components of a suitable output y, chosen among all the flat outputs. Let us sketch its construction. We denote by r_j the maximum polynomial degree of the jth column of A(s): r_j = max{d°(A_{i,j}(s)) : 1 ≤ i ≤ n}, j = 1, ..., n, and q_{i,j} = d°(P_{i,j}(s)), i = 1, ..., n, j = 1, ..., m. Next, we define n_j = max{r_i + q_{i,j} − 1 : 1 ≤ i ≤ n}. We claim that the dimension of a minimal realization of (4) is equal to the minimal value of the sum n_1 + ··· + n_m + m over all possible choices of P(s) satisfying (7). In other words, the minimum sum n_1 + ··· + n_m + m is such that the integers n_i, i = 1, ..., m, are the Kronecker indices of system (4), with Brunovsky canonical form y_i^(n_i+1) = v_i, i = 1, ..., m, and the flat output y = (y_1, ..., y_m) is such that the number of derivatives required in the expression of x is minimal. Let us sketch the proof. Clearly, the state X = (x_1, ..., x_1^(r_1−1), ..., x_n, ..., x_n^(r_n−1)) corresponds to a minimal realization of (4). Its dimension is N = Σ_{i=1}^{n} r_i. For an arbitrary P(s) satisfying (7), we have x_i = Σ_{j=1}^{m} P_{i,j}(s) y_j, i = 1, ..., n, and the number of derivatives of y_j in the expression of x_i^(r_i−1) = Σ_{j=1}^{m} s^{r_i−1} P_{i,j}(s) y_j is equal to r_i + q_{i,j} − 1. Thus, the number of derivatives of y_j in the expression of X is equal to n_j. Since the state X is given by an onto function of (y_1, ..., y_1^(n_1), ..., y_m, ..., y_m^(n_m)), we have n_1 + ··· + n_m + m ≤ N. The claim readily follows by remarking that if y = (y_1, ..., y_m) is chosen such that (y_1, ..., y_1^(n_1), ..., y_m, ..., y_m^(n_m)) is the state of the Brunovsky canonical form of (4), equality holds.

Example 1. We consider a motorized stage³ (see Fig. 1) of mass M moving without friction along a rail fixed to the base, whose mass is M_B, itself related to the ground in an elastic way, with stiffness coefficient k and damping r. We set M̄_B = M + M_B. The base is supposed to move along an axis parallel to the rails. Let us denote by x_B the position of the center of mass of the base in a fixed coordinate frame related to the ground, and by x the relative position of the center of mass of the stage with respect to a coordinate frame attached to the base whose origin is x_B. We denote by F the force applied to the stage, delivered by the motor. F is the control input.

Fig. 1. Base-stage high-precision positioning system.

According to Newton's second law:

M ẍ = F,  M̄_B ẍ_B = −F − k x_B − r ẋ_B.    (14)

Thus, (14) immediately reads

diag(M s², M̄_B s² + r s + k) (x, x_B)^T = (1, −1)^T F

and has the form (4) with

A(s) = diag(M s², M̄_B s² + r s + k),  B = (1, −1)^T.

³ This application was the subject of European Patent No. … and US Patent No. 09/362,643, registered by Newport Corporation.
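The construction of Remark 1 can be checked symbolically on this model. The following is a minimal sketch assuming sympy; the helper names and the normalization of P are my own choices (the paper normalizes by 1/k in (17) below), so this is only a computational illustration of (7)–(9), not the authors' code.

import sympy as sp

s, M, Mbar, r, k = sp.symbols('s M Mbar_B r k', positive=True)   # Mbar_B stands for M + M_B

A = sp.diag(M*s**2, Mbar*s**2 + r*s + k)   # A(s) of (4) for the model (14)
B = sp.Matrix([[1], [-1]])                 # constant input matrix, rank m = 1

# C of rank n - m = 1 with C^T B = 0
C = sp.Matrix([[1], [1]])
assert C.T * B == sp.zeros(1, 1)

# C^T A(s) is a 1 x 2 polynomial row (f  g); its kernel is spanned by (g, -f)^T
row = C.T * A
P = sp.Matrix([[row[0, 1]], [-row[0, 0]]])            # a polynomial solution of (7)
assert sp.simplify((C.T * A * P)[0, 0]) == 0          # (7) holds

# Q(s) from (9): Q = (B^T B)^(-1) B^T A(s) P(s)
Q = (B.T * B).inv() * B.T * A * P
assert sp.simplify(A * P - B * Q) == sp.zeros(2, 1)   # (8) holds

print(sp.simplify(P / k))   # (1/k)(Mbar_B s^2 + r s + k) and -(1/k) M s^2
print(sp.simplify(Q / k))   # the polynomial operator giving F from y

The printed P/k coincides, up to the free unimodular (here scalar) factor, with the defining matrix obtained in (16)–(17) below.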

Note that this description involves smaller matrices (n = 2) than an equivalent state-space representation (N = 4). Let us compute a matrix C orthogonal to B:

C^T = (1  1),  C^T A(s) = (M s²  M̄_B s² + r s + k).    (15)

Thus, according to Theorem 1, a flat output is obtained by computing P(s) = (P_1  P_2)^T solution of

C^T A(s) P(s) = (M s²  M̄_B s² + r s + k) (P_1  P_2)^T = 0,    (16)

which immediately yields

P_1(s) = (1/k)(M̄_B s² + r s + k),  P_2(s) = −(1/k) M s²,    (17)

and P(s) is clearly of rank 1 for all s. Relations (6) read

x = (1/k)(M̄_B s² + r s + k) y = (M̄_B/k) ÿ + (r/k) ẏ + y,
x_B = −(M/k) s² y = −(M/k) ÿ    (18)

and F = M((M̄_B/k) y^(4) + (r/k) y^(3) + ÿ). Finally, inverting (18), we get

y = x − (r/k) ẋ + (M̄_B/M − r²/(M k)) x_B − (M̄_B r/(M k)) ẋ_B.

3. Application to trajectory planning

We now show that formula (6) is well adapted to motion planning. This is best illustrated by the next example.

Example 2 (Example 1 continued). We want to generate displacements of the stage from one steady state to another, with the base also in steady state at the stage's final position. It suffices to generate a polynomial trajectory for y with respect to time, interpolating the initial and final conditions

x(t_0) = x_0,  ẋ(t_0) = 0,  x_B(t_0) = 0,  ẋ_B(t_0) = 0,  F(t_0) = 0,
x(t_1) = x_1,  ẋ(t_1) = 0,  x_B(t_1) = 0,  ẋ_B(t_1) = 0,  thus F(t_1) = 0,

or, in terms of y,

y(t_0) = x_0,  ẏ(t_0) = 0,  ÿ(t_0) = 0,  y^(3)(t_0) = 0,  y^(4)(t_0) = 0,
y(t_1) = x_1,  ẏ(t_1) = 0,  ÿ(t_1) = 0,  y^(3)(t_1) = 0,  y^(4)(t_1) = 0.
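Before giving the closed-form interpolant, here is a minimal computational sketch of this planning step, assuming sympy; the names t0, t1, x0, x1 and the coefficient vector are mine, not the paper's notation. A degree-9 ansatz for y is fitted to the ten conditions above, and the trajectories of x, x_B and F are then read off through (18).

import sympy as sp

t = sp.symbols('t')
t0, t1, x0, x1, M, Mbar, r, k = sp.symbols('t0 t1 x0 x1 M Mbar_B r k', positive=True)

a = sp.symbols('a0:10')                        # 10 unknown coefficients
sigma = (t - t0) / (t1 - t0)
y = sum(a[i] * sigma**i for i in range(10))    # degree-9 ansatz

conds = [sp.Eq(y.subs(t, t0), x0), sp.Eq(y.subs(t, t1), x1)]
for order in range(1, 5):                      # y', ..., y^(4) vanish at both ends
    dy = sp.diff(y, t, order)
    conds += [sp.Eq(dy.subs(t, t0), 0), sp.Eq(dy.subs(t, t1), 0)]

sol = sp.solve(conds, a, dict=True)[0]
y = sp.expand(y.subs(sol))                     # reproduces the interpolant (19) below

# trajectories through the defining relations (18)
x  = Mbar/k * sp.diff(y, t, 2) + r/k * sp.diff(y, t) + y
xB = -M/k * sp.diff(y, t, 2)
F  = M * (Mbar/k * sp.diff(y, t, 4) + r/k * sp.diff(y, t, 3) + sp.diff(y, t, 2))

Evaluating these expressions over [t0, t1] (e.g. with sp.lambdify and numpy) gives curves of the kind shown in Fig. 2.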

Since there are 10 initial and final conditions, the minimal degree of an interpolating polynomial is equal to 9:

y(t) = y_0 + (y_1 − y_0) σ⁵ (126 − 420 σ + 540 σ² − 315 σ³ + 70 σ⁴)    (19)

with σ = (t − t_0)/(t_1 − t_0), y_0 = x_0 and y_1 = x_1. We then deduce the corresponding trajectories for x, ẋ, x_B, ẋ_B and F by (18), as depicted in Fig. 2. Let us stress that the inverse expression of y as a function of x, ẋ, x_B, ẋ_B is not necessary here.

Fig. 2. Motorized stage displacement (left) of 2 mm in 0.2 s, and base displacement (right), for M = 25 kg, M_B = 45 kg, k = M̄_B (6 · 2π)² and r = 2 · 0.35 · √(k M̄_B).

4. Conclusions

A characterization of linear flat outputs for linear time-invariant systems in polynomial matrix form has been presented. This result is shown to be well suited to reference trajectory design. It is interesting to note that the existence of flat outputs is partially based on the Smith decomposition of a matrix, which is valid in the more general context of matrices over an arbitrary ring [3,5], such as the ring of polynomials in the variable s with time-varying coefficients, which naturally appears in the representation of linear time-varying systems. The extension of our approach to the time-varying case might thus be a topic for future research.

Acknowledgements

The authors wish to express their warm thanks to Alain Danielo and Roger Desailly of Micro-Contrôle, and to Laurent Praly, Michel Fliess, Philippe Martin, Pierre Rouchon, Henri Bourlès and Thanos Antoulas for their encouragement and highly stimulating discussions.

Appendix A. On polynomial matrices

Matrices whose entries belong to the ring k[s] of polynomials in s are called polynomial matrices. The inverse of an invertible polynomial matrix is not in general a polynomial matrix, since the inverse of a non-constant polynomial is not a polynomial. Therefore, the subclass GL_n(k[s]) of unimodular matrices, defined as the set of invertible square n × n polynomial matrices whose inverse is polynomial, or equivalently whose determinant is a non-zero constant, plays an important role.

A.1. Smith form

The main results on polynomial and unimodular matrices may be found in [11,17,12]. We will need the following fundamental result on the transformation of a polynomial matrix into its Smith form (see [11] or, in the more general context of matrices over an arbitrary ring, [3, A.II.9 and A.II, Example 4]):

Theorem A.1. Given a μ × ν polynomial matrix A, with μ ≤ ν (resp. ν ≤ μ), there exist matrices V ∈ GL_μ(k[s]) and U ∈ GL_ν(k[s]) such that V A U = (Δ  0) (resp. V A U = (Δ^T  0)^T), where Δ is a μ × μ (resp. ν × ν) diagonal matrix whose diagonal elements (δ_1, ..., δ_ρ, 0, ..., 0) are such that δ_i is a non-zero s-polynomial for i = 1, ..., ρ, and δ_i is a divisor of δ_j for all j ≥ i. The integer ρ, which is less than or equal to min(μ, ν), is the rank of A.

The unimodular matrices V (left) and U (right) are obtained in practice as products of elementary unimodular matrices corresponding to the following elementary right and left actions: right actions consist of permuting two columns, multiplying a column by a non-zero real number, or adding the jth column multiplied by an arbitrary polynomial to the ith column, for arbitrary i and j; left actions consist, analogously, of permuting two rows, multiplying a row by a non-zero real number, or adding the jth row multiplied by an arbitrary polynomial to the ith row, for arbitrary i and j.

The algorithm to transform the matrix A(s) of the theorem⁴ consists first in permuting rows and columns to put the element of lowest degree in the upper left position, denoted by a_{1,1}(s). Then divide all the other elements a_{1,k}(s) (resp. a_{k,1}(s)) of the new first row (resp. first column) by a_{1,1}(s). If one of the remainders is non-zero, say r_{1,k}(s) (resp. r_{k,1}(s)), subtract from the corresponding column (resp. row) the first column (resp. row) multiplied by the corresponding quotient q_{1,k}(s) defined by the Euclidean division a_{1,k}(s) = a_{1,1}(s) q_{1,k}(s) + r_{1,k}(s) (resp. q_{k,1}(s) defined by a_{k,1}(s) = a_{1,1}(s) q_{k,1}(s) + r_{k,1}(s)). We go on reducing the degree of the upper left element by the same process until all the remainders are zero. Then, subtracting from each column the first column multiplied by the corresponding quotient q_{1,k}(s), k = 2, ..., ν (resp. from each row the first row multiplied by q_{k,1}(s), k = 2, ..., μ), the first row becomes (a_{1,1}(s), 0, ..., 0) and the first column (a_{1,1}(s), 0, ..., 0)^T, where ^T is the transposition operator. We then apply the same algorithm to the second row and column of the remaining submatrix, and so on. To each transformation of rows and columns corresponds a left or right elementary unimodular matrix, and the unimodular matrix V (resp. U) is finally obtained as the product of all left (resp. right) elementary unimodular matrices so constructed.

⁴ For a precise construction and proof the reader may refer to [11, Chapter 6, Section 2].
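As a cross-check of the divisibility structure in Theorem A.1, the diagonal elements δ_i can also be recovered as ratios of determinantal divisors (gcds of i × i minors), a classical fact [11]. The sketch below, assuming sympy, illustrates this on a small example; it is not the elimination algorithm just described, and the helper name is mine.

from itertools import combinations
import sympy as sp

s = sp.symbols('s')

def determinantal_divisors(A):
    """d_i = monic gcd (in s) of all i x i minors of A, i = 1, ..., min(rows, cols)."""
    rows, cols = A.shape
    divisors = []
    for i in range(1, min(rows, cols) + 1):
        minors = [A[list(ri), list(ci)].det()
                  for ri in combinations(range(rows), i)
                  for ci in combinations(range(cols), i)]
        g = sp.Integer(0)
        for mnr in minors:
            g = sp.gcd(g, mnr)
        g_poly = sp.Poly(g, s)
        divisors.append(sp.Integer(0) if g_poly.is_zero else g_poly.monic().as_expr())
    return divisors

A = sp.Matrix([[s, s**2], [0, s**3]])
d = [sp.Integer(1)] + determinantal_divisors(A)
delta = [sp.cancel(d[i] / d[i - 1]) for i in range(1, len(d))]
print(delta)   # [s, s**3]: each diagonal element divides the next, as in Theorem A.1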

A.2. Divisors, Bezout identity

Let A and B be two polynomial matrices with the same number of rows (resp. columns). We say that B is a left (resp. right) divisor of A if there exists a polynomial matrix Q such that A = BQ (resp. A = QB). Accordingly, we say that A is a left (resp. right) multiple of B. We say that the polynomial matrix R is a left (resp. right) common divisor of the polynomial matrices A and B if and only if R is a left (resp. right) divisor of A and B. In addition, R is a left (resp. right) greatest common divisor (GCD) of A and B if it is a right (resp. left) multiple of any other left (resp. right) common divisor of A and B. If the left (resp. right) GCD R is the identity matrix, we say that A and B are left (resp. right) coprime.

Theorem A.2 (Bezout identity). The polynomial matrix R is a left (resp. right) GCD of A and B if and only if there exist two polynomial matrices X and Y such that AX + BY = R (resp. XA + YB = R).

The reader may refer to [12,14] for further reading on this topic.

A.3. Controllability

Consider the system A(s)x = B(s)u, where A(s) (resp. B(s)) is a polynomial matrix of size n × n (resp. n × m), x is a partial state of dimension n and u is the m-dimensional control vector. For an adaptation of the controllability definition to this setting and a proof of the next theorem, the reader is referred to [12,14,17].

Theorem A.3. The pair (A(s), B(s)) is controllable if and only if one of the following equivalent statements holds true: (i) A(s) and B(s) are left coprime; (ii) the n × (n + m) matrix (A(s)  B(s)) has rank n for every s.
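Criterion (ii) can be tested symbolically: (A(s) B(s)) has rank n for every s exactly when its n × n minors have no common zero in s. The sketch below, assuming sympy, applies this to the base-stage matrices of Example 1; it is only an illustration of the criterion, not a general-purpose routine.

from functools import reduce
from itertools import combinations
import sympy as sp

s, M, Mbar, r, k = sp.symbols('s M Mbar_B r k', positive=True)   # Mbar_B stands for M + M_B

A = sp.diag(M*s**2, Mbar*s**2 + r*s + k)
B = sp.Matrix([[1], [-1]])
AB = A.row_join(B)                 # the n x (n + m) matrix (A(s)  B)

n = A.rows
minors = [AB[:, list(c)].det() for c in combinations(range(AB.cols), n)]

g = reduce(sp.gcd, minors)         # common factor of all n x n minors
print(sp.Poly(g, s).degree())      # 0: the minors have no common zero in s,
                                   # hence (A(s), B) is left coprime, i.e. (4) is controllable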
References

[1] A.C. Antoulas, On canonical forms for linear constant systems, Internat. J. Control 33 (1) (1981).
[2] L. Bitauld, M. Fliess, J. Levine, A flatness based control synthesis of linear systems and application to windshield wipers, Proceedings of the ECC'97, Paper No. 628, Brussels, 1997.
[3] N. Bourbaki, Algèbre, Éléments de Mathématiques, Hermann, Paris, 1970, Chapitres 1–3.
[4] P. Brunovsky, A classification of linear controllable systems, Kybernetika 6 (1970).
[5] P.M. Cohn, Free Rings and their Relations, 2nd Edition, Academic Press, London, 1985.
[6] R. Desailly, J. Levine, S. Maneuf, D.V. Nguyen, On an anti-vibration control algorithm for non-rigid positioning systems, Proceedings of the RESCCE Conference, Ho-Chi-Minh-City.
[7] M. Fliess, J. Levine, Ph. Martin, P. Rouchon, Flatness and defect of nonlinear systems: introductory theory and applications, Internat. J. Control 61 (1995).
[8] M. Fliess, J. Levine, Ph. Martin, P. Rouchon, A Lie–Bäcklund approach to equivalence and flatness of nonlinear systems, IEEE Trans. Automat. Control 44 (5) (1999).
[9] M. Fliess, R. Marquez, Continuous-time linear predictive control and flatness: a module-theoretic setting with examples, Internat. J. Control 73 (7) (2000).
[10] G.D. Forney Jr., Minimal bases of rational vector spaces, with applications to multivariable linear systems, SIAM J. Control 13 (3) (1975).
[11] F.R. Gantmacher, Théorie des Matrices, t. 1, Dunod, Paris, 1966.
[12] T. Kailath, Linear Systems, Prentice-Hall, Englewood Cliffs, NJ, 1980.
[13] D.G. Luenberger, Canonical forms for linear multivariable systems, IEEE Trans. Automat. Control 12 (1967).
[14] H.H. Rosenbrock, Multivariable and State-Space Theory, Wiley, New York, 1970.
[15] E.D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems, 2nd Edition, Springer, New York, 1998.
[16] A. Tannenbaum, Invariance and System Theory: Algebraic and Geometric Aspects, Springer, New York, 1981.
[17] W.A. Wolovich, Linear Multivariable Systems, Applied Mathematical Sciences, Vol. 11, Springer, New York, 1974.
