Flat output characterization for linear systems using polynomial matrices

Available online at www.sciencedirect.com

Systems & Control Letters 48 (2003) 69–75
www.elsevier.com/locate/sysconle

Flat output characterization for linear systems using polynomial matrices

J. Lévine (a,*), D.V. Nguyen (b)

(a) Centre Automatique et Systèmes, École Nationale Supérieure des Mines de Paris, 35 rue Saint-Honoré, 77305 Fontainebleau Cedex, France
(b) Micro-Contrôle, Z.I. de Beaune-la-Rolande, BP 29, 45340 Beaune-la-Rolande, France

Received 2 October 2001; received in revised form 2 August 2002

Abstract

This paper is devoted to the study of linear flat outputs for linear controllable time-invariant systems in polynomial matrix form. We characterize the transformations expressing the system variables in terms of a linear flat output and its derivatives, called defining matrices, as the kernel of a polynomial matrix. An application to trajectory planning is presented, showing the usefulness of the present characterization. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Linear time-invariant system; Polynomial matrices; Flatness; Flat output; Trajectory planning

This work was partly supported by Micro-Contrôle, a Newport Corporation company.
* Corresponding author. E-mail addresses: levine@cas.ensmp.fr (J. Lévine), vnguyen@newport-fr.com (D.V. Nguyen).

Introduction

Recently, connections between the trajectory planning problem, flatness and the Brunovsky controllability canonical form have been outlined [7,8], showing in particular that, even for linear time-invariant systems, flatness may be useful for reference trajectory (or feedforward) design [2,6] and predictive control [9]. If we restrict attention to linear time-invariant systems, it turns out that such systems are flat if and only if they are controllable, and that a particular flat output may be obtained as a by-product of the Brunovsky controllability canonical form [7,8] (concerning the Brunovsky canonical form and related topics, one may refer to [13,4,17,12,16,1,15]).

Nevertheless, a general method to find all possible flat outputs and to parametrize them is not available at present. In this paper, we propose a direct characterization of the matrices giving the system variables as functions of a linear flat output y and its time derivatives. It should be noted that proceeding the other way around, namely finding the expression of a flat output as a function of the system variables and their derivatives, appears to be much more difficult. However, our result is shown to be well suited to trajectory planning since, in this case, the inverse relations expressing the coordinates of the flat output y in terms of the system variables are not needed. An example of a high-precision positioning system is presented to illustrate this remark.

Another point of interest concerns the kind of representation of linear systems chosen in this paper: the polynomial matrix representation has been preferred here since, on the one hand, it does not require explicitly specifying the generally unknown number of time derivatives involved in each expression (this number corresponds here to a polynomial degree) and, on the other hand, structural system properties such as flatness are nicely described in the framework of the algebra of polynomials.

0167-6911/03/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved. PII: S0167-6911(02)00257-8

The paper is organized as follows. In Section 1, basic facts about flatness are recalled. Our main result, namely the characterization of the so-called defining matrices of linear flat outputs, is stated in Section 2. In Section 3, we present the application to trajectory planning. Finally, some recalls on polynomial matrices are given in Appendix A.

1. Recalls on flatness

Throughout this paper, we use the notation u^(k), x^(k), y^(k), ... for the kth-order derivative of u, x, y, ... with respect to time. A system is said to be differentially flat [7,8] if there exists a set of independent variables, referred to as a flat output, such that every other system variable (including the input variables) is a function of the flat output and of a finite number of its successive time derivatives. More precisely, the system

ẋ = f(x, u), (1)

with x ∈ R^N and u ∈ R^m, is differentially flat if one can find a set of variables (the flat output)

y = h(x, u, u̇, ü, …, u^(p)), y ∈ R^m, (2)

with p a finite m-tuple of integers,¹ such that

x = φ(y, ẏ, ÿ, …, y^(q)), u = ψ(y, ẏ, ÿ, …, y^(q+1)), (3)

with q a finite m-tuple of integers, and such that the system equations

(d/dt) φ(y, ẏ, …, y^(q)) = f(φ(y, ẏ, …, y^(q)), ψ(y, ẏ, …, y^(q+1)))

are identically satisfied. The reader is invited to refer to [7,8] for further reading concerning various consequences of this property for motion planning and feedback design. Since we focus attention on linear systems, we recall from [7,8] the following result:

Proposition 1. A linear system is flat if and only if it is controllable.
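This definition can be made concrete on the double integrator, arguably the simplest flat system; the following minimal sympy sketch (the system and the output choice are illustrative, not the paper's example) checks that every system variable is a function of the candidate flat output and its derivatives, and that the system equations are identically satisfied by this parametrization:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')(t)   # candidate flat output

# Double integrator: x1' = x2, x2' = u, with x = (x1, x2) and single input u.
# Taking y = x1, every system variable is expressed from y and its derivatives:
x1 = y
x2 = sp.diff(y, t)        # x2 = y'
u = sp.diff(y, t, 2)      # u  = y''

# The system equations hold identically for every smooth y:
assert sp.simplify(sp.diff(x1, t) - x2) == 0
assert sp.simplify(sp.diff(x2, t) - u) == 0
```

Here q = 1 and the parametrization is onto: any smooth trajectory of y yields an admissible system trajectory, which is exactly the property exploited for motion planning below.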
In the remaining part of this paper, we make extensive use of some classical concepts and tools of polynomial matrix theory. The reader may refer to Appendix A for a quick introduction and a survey of the main results. More advanced aspects can be found in [11,12,14,17].

2. Flat output of a linear system in polynomial matrix form

We consider linear systems in the following polynomial matrix representation:

A(s)x = Bu, (4)

with s = d/dt, A(s) an n × n matrix whose entries are polynomials in the formal variable s, and B an n × m constant matrix of rank m, with 1 ≤ m < n.² System (4) is assumed controllable, i.e. A(s) and B are left coprime (see Appendix A.3). Note that x in (4) is not a system state, but a partial state of dimension n. In general, n is smaller than the dimension N of a reachable state-space realization.

We call linear flat output an output y defined by (2) and (3) with h, φ and ψ linear, namely

x_i = Σ_{j=1}^{m} Σ_{k=0}^{q_j} α_{i,j,k} y_j^(k),  i = 1, …, n,
u_i = Σ_{j=1}^{m} Σ_{k=0}^{q_j+1} β_{i,j,k} y_j^(k),  i = 1, …, m, (5)

and y a linear combination of x, u and a finite number of derivatives of u. In the polynomial matrix language, (5) reads

x = P(s)y,  u = Q(s)y, (6)

¹ For an m-tuple of integers p = (p_1, …, p_m) and an m-dimensional vector u, the notation u^(p) stands for (u_1^(p_1), …, u_m^(p_m)).
² If n ≤ m, the method described below is useless: in this case, the problem of finding a flat output is trivial since x, completed by m − n components of u, can be directly chosen as such.

where P (resp. Q) is an n × m (resp. m × m) polynomial matrix in s, with entries P_{i,j}(s) = Σ_{k=0}^{q_j} α_{i,j,k} s^k (resp. Q_{i,j}(s) = Σ_{k=0}^{q_j+1} β_{i,j,k} s^k). Matrices P and Q satisfying (6) are called defining matrices of the flat output y. Our main result is the following:

Theorem 1. The variable y = (y_1, …, y_m) is a linear flat output of system (4) if and only if its defining matrices P and Q are given by

C^T A(s)P(s) = 0, (7)
A(s)P(s) = BQ(s), (8)

with C an arbitrary matrix of rank n − m orthogonal to B (i.e. such that C^T B = 0), and with P(s) and Q(s) of rank m for every s and right coprime. In addition, a linear flat output y of the controllable system (4) always exists (and therefore its defining matrices P and Q always exist too), the solution P(s) of (7) is defined up to right multiplication by a unimodular m × m matrix, and a possible choice of Q(s) is

Q(s) = (B^T B)^{-1} B^T A(s)P(s). (9)

Proof. Assume that y is a linear flat output. Then x and u can be expressed as in (6). Since the mapping y ↦ P(s)y = x is onto by the flatness definition, the rank of P(s) must be equal to min(n, m) = m for every s. Combining (4) and (6), we get (8). Since B has rank m, there exists an n × (n − m) constant matrix C of rank n − m such that C^T B = 0_{n−m,m}, where 0_{n−m,m} denotes the (n − m) × m matrix whose entries are all 0. Thus, left multiplying (4) by C^T, P(s) satisfies C^T A(s)P(s)y = C^T BQ(s)y = 0_{n−m,1} for every smooth m-dimensional time function y, which implies that C^T A(s)P(s) = 0_{n−m,m}, and (7), (8) are proved. Clearly, since A(s) and B are left coprime and since the rank of A(s)P(s) and of B is m for every s, the same holds true for Q(s) (by contradiction). Using again the fact that y is a linear flat output, it must satisfy y = X(s)x + Y(s)u for some appropriate polynomial matrices X(s) of size m × n and Y(s) of size m × m.
Thus, substituting the expressions of x and u, we get y = X(s)P(s)y + Y(s)Q(s)y, i.e. X(s)P(s) + Y(s)Q(s) = I, which, according to the Bezout identity (see Theorem A.2), means that P(s) and Q(s) are right coprime.

Conversely, let P(s) and Q(s) be given by (7), (8) with P(s) and Q(s) right coprime. By the Bezout identity, there exist two polynomial matrices X(s) and Y(s) satisfying

X(s)P(s) + Y(s)Q(s) = I. (10)

Right multiplying by y, we get X(s)P(s)y + Y(s)Q(s)y = y and, setting x = P(s)y and u = Q(s)y, we obtain that y = X(s)x + Y(s)u and

A(s)x = A(s)P(s)y = BQ(s)y = Bu,

which proves that y is a linear flat output, and the proof of the first part of the theorem is complete.

Let us now prove the existence of a solution to (7). By Theorem A.1, C^T A(s) can be decomposed into its Smith form: there exist two unimodular matrices V(s) ∈ GL_{n−m}(k[s]) (see Appendix A) and U(s) ∈ GL_n(k[s]) and an (n − m) × (n − m) diagonal polynomial matrix Δ(s) such that

V(s) C^T A(s) U(s) = (Δ(s)  0_{n−m,m}) (11)

or, with the decomposition U(s) = (Û(s)  Ũ(s)), Û(s) being of size n × (n − m) and Ũ(s) of size n × m, (11) reads

V(s) C^T A(s) Û(s) = Δ(s),  V(s) C^T A(s) Ũ(s) = 0_{n−m,m}. (12)

Let P_1(s) be an arbitrary m × m unimodular matrix. We set

P(s) = U(s) (0_{n−m,m} ; P_1(s)) = (Û(s)  Ũ(s)) (0_{n−m,m} ; P_1(s)) = Ũ(s)P_1(s). (13)

Then, by (12), V(s) C^T A(s) P(s) = V(s) C^T A(s) Ũ(s) P_1(s) = 0 and, since V(s) is unimodular, we have proved that C^T A(s)P(s) = 0, which means that P(s) defined by (13) is a solution to (7) for every m × m unimodular matrix P_1(s). Moreover, since Ũ(s) is a submatrix of the unimodular matrix U(s), its columns are independent for every s, and thus Ũ(s) has rank m for every s. Clearly, the same holds true after right multiplication by the unimodular matrix P_1(s), which proves that rank(P(s)) = m for every s.

Finally, the existence of Q(s) results from the fact that C^T A(s)P(s) = 0, or equivalently, that A(s)P(s) belongs to the range of B: denoting by ‖·‖ the L²([0,∞); R^n) norm, we have

min_{u ∈ R^m} (1/2) ‖A(s)P(s)y − Bu‖² = 0

and, at this minimum, the derivative of the squared norm with respect to u must vanish: B^T (A(s)P(s)y − Bu) = 0, which immediately gives (9) and achieves the proof of the theorem.

Remark 1. The above theorem gives an effective way of computing P and Q (see Example 1 below): P can be any solution of (7) of rank m for every s (which can be obtained directly by elementary linear-algebraic methods), and Q is then deduced by (9). Note that the Smith decomposition used in the proof is not needed in practice and is only given here to prove the existence of P.

Remark 2. A minimal flat output can be defined, in the spirit of Forney's minimal bases [10], in the sense that x may be described by a minimal number of derivatives of the components of a suitable output y, chosen among all the flat outputs. Let us sketch its construction. We denote by r_j the maximum polynomial degree of the jth column of A(s): r_j = max{deg(A_{i,j}(s)), 1 ≤ i ≤ n}, j = 1, …, n, and by q_{i,j} = deg(P_{i,j}(s)), i = 1, …, n, j = 1, …, m. Next, we define n_j = max{r_i + q_{i,j} − 1, 1 ≤ i ≤ n}. We claim that the dimension of a minimal realization of (4) is equal to the minimal value of the sum n_1 + ⋯ + n_m + m over all possible choices of P(s) satisfying (7). In other words, the minimum sum n_1 + ⋯ + n_m + m is such that the integers n_i, i = 1, …, m, are the indices of the Brunovsky canonical form of system (4), y_i^(n_i+1) = v_i, i = 1, …, m, and the flat output y = (y_1, …, y_m) is such that the number of derivatives required in the expression of x is minimal.

Let us sketch the proof. Clearly, the state X = (x_1, …, x_1^(r_1−1), …, x_n, …, x_n^(r_n−1)) corresponds to a minimal realization of (4). Its dimension is N = Σ_{i=1}^{n} r_i.
For an arbitrary P(s) satisfying (7), we have x_i = Σ_{j=1}^{m} P_{i,j}(s) y_j, i = 1, …, n, and the number of derivatives of y_j in the expression of x_i^(r_i−1) = Σ_{j=1}^{m} s^{r_i−1} P_{i,j}(s) y_j is equal to r_i + q_{i,j} − 1. Thus, the number of derivatives of y_j in the expression of the state X is equal to n_j. Since the state X is given by an onto function of (y_1, …, y_1^(n_1), …, y_m, …, y_m^(n_m)), we have n_1 + ⋯ + n_m + m ≤ N. The claim readily follows by remarking that if y = (y_1, …, y_m) is chosen such that (y_1, …, y_1^(n_1), …, y_m, …, y_m^(n_m)) is the state of the Brunovsky canonical form of (4), equality holds.

Fig. 1. Base–stage high-precision positioning system.

Example 1. We consider a motorized stage³ (see Fig. 1) of mass M moving without friction along a rail fixed to the base, of mass M_B, itself attached to the ground in an elastic way, with stiffness coefficient k and damping coefficient r. We set M_B1 = M + M_B. The base is supposed to move along an axis parallel to the rails. Let us denote by x_B the position of the center of mass of the base in a fixed coordinate frame attached to the ground, and by x the relative position of the center of mass of the stage with respect to a coordinate frame attached to the base with origin x_B. We denote by F the force applied to the stage, delivered by the motor; F is the control input. According to Newton's second law:

M ẍ = F,  M_B1 ẍ_B = −F − k x_B − r ẋ_B. (14)

Thus, (14) immediately reads

( Ms²  0 ; 0  M_B1 s² + rs + k ) (x ; x_B) = (1 ; −1) F

and has the form (4) with

A(s) = diag(Ms², M_B1 s² + rs + k),  B = (1, −1)^T.

³ This application was the subject of the European Patent No. 4435.4-228 and US Patent No. 9/362,643, registered by Newport Corporation.
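Left coprimeness of A(s) and B, hence flatness by the flatness–controllability equivalence recalled in Section 1, can be checked mechanically via criterion (ii) of Theorem A.3: the gcd of the 2 × 2 minors of (A(s) B) must be a nonzero constant. A sympy sketch, with illustrative numerical values for M, M_B1, r, k (assumptions for the example, not the experimental values of Fig. 2):

```python
from functools import reduce
import sympy as sp

s = sp.symbols('s')
M, MB1, r, k = 25, 45, 10, 100   # illustrative values, not the paper's experiment

# Stage example: A(s) = diag(M s^2, MB1 s^2 + r s + k), B = (1, -1)^T.
A = sp.Matrix([[M*s**2, 0], [0, MB1*s**2 + r*s + k]])
B = sp.Matrix([[1], [-1]])

AB = A.row_join(B)               # the n x (n+m) matrix (A(s) B)
# (A(s) B) has rank n = 2 for every s iff the gcd of its 2x2 minors has no
# root in s, i.e. is a nonzero constant (Theorem A.3(ii)): left coprimeness.
minors = [AB[:, cols].det() for cols in ([0, 1], [0, 2], [1, 2])]
g = reduce(sp.gcd, minors)
assert g != 0 and sp.degree(g, s) == 0   # controllable, hence flat
```

The same test fails, as expected, if B is replaced by a vector making the s = 0 mode unreachable, e.g. B = (1, 0)^T here, for which every minor vanishes at s = 0.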

Note that this description involves smaller matrices (n = 2) than an equivalent state-space representation (N = 4). Let us compute a matrix C orthogonal to B:

C^T = (1  1),  C^T A(s) = (Ms²  M_B1 s² + rs + k). (15)

Thus, according to Theorem 1, a flat output is obtained by computing a solution P(s) to

C^T A(s)P(s) = (Ms²  M_B1 s² + rs + k) (P_1 ; P_2) = 0, (16)

which immediately yields

P_1(s) = (1/k)(M_B1 s² + rs + k),  P_2(s) = −(1/k) Ms², (17)

and P(s) is clearly of rank 1 for all s. Relations (6) read

x = (1/k)(M_B1 s² + rs + k) y = (M_B1/k) ÿ + (r/k) ẏ + y,
x_B = −(M/k) s² y = −(M/k) ÿ (18)

and F = M((M_B1/k) y^(4) + (r/k) y^(3) + ÿ). Finally, inverting (18), we get

y = x − (r/k) ẋ + (M_B1/M − r²/(Mk)) x_B − (M_B1 r/(Mk)) ẋ_B.

3. Application to trajectory planning

We now show that formula (6) is well adapted to motion planning. This is best illustrated by the following example.

Example 2 (Example 1 continued). We want to generate displacements of the stage from one steady state to another, with the base also in steady state at the stage's final position. It suffices to generate a polynomial trajectory for y with respect to time, interpolating the initial and final conditions

x(t_0) = x_0, ẋ(t_0) = 0, x_B(t_0) = 0, ẋ_B(t_0) = 0, F(t_0) = 0,
x(t_1) = x_1, ẋ(t_1) = 0, x_B(t_1) = 0, ẋ_B(t_1) = 0, F(t_1) = 0,

thus

y(t_0) = x_0, ẏ(t_0) = 0, ÿ(t_0) = 0, y^(3)(t_0) = 0, y^(4)(t_0) = 0,
y(t_1) = x_1, ẏ(t_1) = 0, ÿ(t_1) = 0, y^(3)(t_1) = 0, y^(4)(t_1) = 0.

Since there are 10 initial and final conditions, the minimal degree of an interpolating polynomial is equal to 9:

y(t) = x_0 + (x_1 − x_0) σ⁵ (126 − 420σ + 540σ² − 315σ³ + 70σ⁴) (19)

with σ = (t − t_0)/(t_1 − t_0). We then deduce the corresponding trajectories for x, ẋ, x_B, ẋ_B and F by (18), as depicted in Fig. 2. Let us stress that the inverse expression of y as a function of x, ẋ, x_B, ẋ_B is not necessary here.

Fig. 2. Motorized stage displacement (left) of 2 mm in 0.2 s, and base displacement (right), for M = 25 kg, M_B1 = 45 kg, k = M_B1(6·2π)² and r = 2·0.35·√(k M_B1).

4. Conclusions

A characterization of linear flat outputs for linear time-invariant systems in polynomial matrix form has been presented. This result is shown to be well suited to reference trajectory design. It is interesting to note that the existence of flat outputs is partially based on the Smith decomposition of a matrix, which is valid in the more general context of matrices over an arbitrary ring [3,5], such as the ring of polynomials in the variable s with time-varying coefficients, which naturally appears in the representation of linear time-varying systems. The extension of our approach to the time-varying case might thus be a topic for future research.

Acknowledgements

The authors wish to express their warmest thanks to Alain Danielo and Roger Desailly of Micro-Contrôle, and to Laurent Praly, Michel Fliess, Philippe Martin, Pierre Rouchon, Henri Bourlès and Thanos Antoulas for their encouragement and highly stimulating discussions.

Appendix A. On polynomial matrices

Matrices whose entries belong to the ring k[s] of polynomials in s are called polynomial matrices. The inverse of an invertible polynomial matrix is not in general a polynomial matrix, since the inverse of a non-constant polynomial is not a polynomial. Therefore, the subclass GL_n(k[s]) of unimodular matrices, defined as the set of invertible square n × n polynomial matrices whose inverse is polynomial, or equivalently whose determinant is a non-zero constant, plays an important role.

A.1. Smith form

The main results on polynomial and unimodular matrices may be found in [11,17,12].
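The degree-9 rest-to-rest polynomial used in Example 2 can be verified symbolically. A sympy sketch (the transfer time t_1 = 0.2 s and the 2 mm displacement are illustrative values) checking that it interpolates all ten boundary conditions, so that the trajectories of x, x_B and F deduced through the defining relations start and end at rest:

```python
import sympy as sp

t = sp.symbols('t')
t0, t1 = 0, sp.Rational(1, 5)      # illustrative: 0.2 s transfer time
x0, x1 = 0, sp.Rational(1, 500)    # illustrative: 2 mm displacement

sigma = (t - t0) / (t1 - t0)
# Degree-9 rest-to-rest interpolation polynomial for the flat output y:
y = x0 + (x1 - x0) * sigma**5 * (126 - 420*sigma + 540*sigma**2
                                 - 315*sigma**3 + 70*sigma**4)

# Endpoint values, and vanishing of the first four derivatives at both ends:
assert sp.simplify(y.subs(t, t0) - x0) == 0
assert sp.simplify(y.subs(t, t1) - x1) == 0
for n in range(1, 5):
    d = sp.diff(y, t, n)
    assert sp.simplify(d.subs(t, t0)) == 0
    assert sp.simplify(d.subs(t, t1)) == 0
```

Since x, x_B and F are obtained from y by applying polynomials in s = d/dt of degree at most 4, these ten conditions are exactly what is needed for steady state at both endpoints.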
We will need the following fundamental result on the transformation of a polynomial matrix to its Smith form (see [11] or, in the more general context of matrices over an arbitrary ring, [3, A.II..9 and A.II., Example 4]):

Theorem A.1. Given an α × β polynomial matrix A, with α ≤ β (resp. α ≥ β), there exist matrices V ∈ GL_α(k[s]) and U ∈ GL_β(k[s]) such that VAU = (Δ  0) (resp. (Δ ; 0)), where Δ is an α × α (resp. β × β) diagonal matrix whose diagonal elements (δ_1, …, δ_ρ, 0, …, 0) are such that δ_i is a non-zero s-polynomial for i = 1, …, ρ, and δ_i is a divisor of δ_j for all j ≥ i. The integer ρ, which is less than or equal to min(α, β), is the rank of A.

The unimodular matrices V (left) and U (right) are obtained in practice as products of unimodular matrices corresponding to the following elementary right and left actions: right actions consist of permuting two columns, multiplying a column by a non-zero real number, or adding to the ith column the jth column multiplied by an arbitrary polynomial, for arbitrary i and j; left actions consist, analogously, of permuting two rows, multiplying a row by a non-zero real number, or adding to the ith row the jth row multiplied by an arbitrary polynomial, for arbitrary i and j.

The algorithm to transform the matrix A(s) of the theorem⁴ consists first in permuting rows and columns to put the element of lowest degree in the upper left position, denoted by a_{1,1}(s). Then divide all the other elements a_{1,k}(s) (resp. a_{k,1}(s)) of the new first row (resp. first column) by a_{1,1}(s). If one of the remainders is non-zero, say r_{1,k}(s) (resp. r_{k,1}(s)), subtract from the corresponding column (resp. row) the first column (resp. row) multiplied by the corresponding quotient q_{1,k}(s) defined by the Euclidean division a_{1,k}(s) = a_{1,1}(s)q_{1,k}(s) + r_{1,k}(s) (resp. q_{k,1}(s) defined by a_{k,1}(s) = a_{1,1}(s)q_{k,1}(s) + r_{k,1}(s)). We go on reducing the degree of the upper left element by the same process until all the remainders are zero. Then, subtracting from each column (resp. row) the first column (resp. row) multiplied by the corresponding quotient q_{1,k}(s), k = 2, …, β (resp. q_{k,1}(s), k = 2, …, α), the first row becomes (a_{1,1}(s), 0, …, 0) and the first column (a_{1,1}(s), 0, …, 0)^T, where T denotes transposition. We then apply the same algorithm to the second row, and so on. To each transformation of rows and columns corresponds a left or right elementary unimodular matrix, and the unimodular matrix V (resp. U) is finally obtained as the product of all left (resp. right) elementary unimodular matrices so constructed.

⁴ For a precise construction and proof, the reader may refer to [11, Chapter 6, Section 2].

A.2. Divisors, Bezout identity

Let A and B be two polynomial matrices with the same number of rows (resp. columns). We say that B is a left (resp. right) divisor of A if there exists a polynomial matrix Q such that A = BQ (resp. A = QB). Accordingly, we say that A is a left (resp. right) multiple of B. We say that the polynomial matrix R is a left (resp. right) common divisor of the polynomial matrices A and B if and only if R is a left (resp. right) divisor of both A and B. In addition, R is a left (resp. right) greatest common divisor (GCD) of A and B if it is a right (resp. left) multiple of any other left (resp. right) common divisor of A and B. If the left (resp. right) GCD R is the identity matrix, we say that A and B are left (resp. right) coprime.

Theorem A.2 (Bezout identity). The polynomial matrix R is a left (resp. right) GCD of A and B if and only if there exist two polynomial matrices X and Y such that AX + BY = R (resp. XA + YB = R).

The reader may refer to [12,14] for further reading on this topic.

A.3. Controllability

Consider the system A(s)x = B(s)u, where A(s) (resp. B(s)) is a polynomial matrix of size n × n (resp. n × m), x is a partial state of dimension n and u is the m-dimensional control vector.
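In the scalar case, the Bezout identity of Theorem A.2 reduces to the classical polynomial Bézout relation, which sympy computes directly via the extended Euclidean algorithm; a small sketch (the polynomials are an illustrative choice):

```python
import sympy as sp

s = sp.symbols('s')

# Two coprime polynomials (illustrative): their GCD is a constant, normalized
# to 1 over the rationals, so X*f + Y*g = 1 has polynomial solutions X, Y.
f = s**2
g = s**2 + 3*s + 2

X, Y, h = sp.gcdex(f, g, s)    # extended Euclid: X*f + Y*g = h = gcd(f, g)
assert sp.expand(X*f + Y*g - h) == 0
assert h == 1                  # constant GCD: f and g are coprime
```

This is exactly the role played by the right coprimeness of the defining matrices P(s) and Q(s) in Section 2, with matrix-valued X(s) and Y(s) recovering y from x and u.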
For an adaptation of the controllability definition to this setting and a proof of the next theorem, the reader is referred to [12,14,17].

Theorem A.3. The pair (A(s), B(s)) is controllable if and only if one of the following equivalent statements holds true: (i) A(s) and B(s) are left coprime; (ii) the n × (n + m) matrix (A(s) B(s)) has rank n for every s.

References

[1] A.C. Antoulas, On canonical forms for linear constant systems, Internat. J. Control 33 (1) (1981) 95–122.
[2] L. Bitauld, M. Fliess, J. Lévine, A flatness based control synthesis of linear systems and application to windshield wipers, Proceedings of the ECC'97, Paper No. 628, Brussels, 1997.
[3] N. Bourbaki, Algèbre, Éléments de Mathématiques, Hermann, Paris, 1970, Chapitres 1–3.
[4] P. Brunovský, A classification of linear controllable systems, Kybernetika 6 (1970) 176–178.
[5] P.M. Cohn, Free Rings and their Relations, 2nd Edition, Academic Press, London, 1985.
[6] R. Desailly, J. Lévine, S. Maneuf, D.V. Nguyen, On an anti-vibration control algorithm for non-rigid positioning systems, Proceedings of the RESCCE 2000 Conference, Ho-Chi-Minh-City, 2000.
[7] M. Fliess, J. Lévine, Ph. Martin, P. Rouchon, Flatness and defect of nonlinear systems: introductory theory and applications, Internat. J. Control 61 (1995) 1327–1361.
[8] M. Fliess, J. Lévine, Ph. Martin, P. Rouchon, A Lie–Bäcklund approach to equivalence and flatness of nonlinear systems, IEEE Trans. Automat. Control 44 (5) (1999) 922–937.
[9] M. Fliess, R. Marquez, Continuous-time linear predictive control and flatness: a module-theoretic setting with examples, Internat. J. Control 73 (7) (2000) 606–623.
[10] G.D. Forney Jr., Minimal bases of rational vector spaces, with applications to multivariable linear systems, SIAM J. Control 13 (3) (1975) 493–520.
[11] F.R. Gantmacher, Théorie des Matrices, t. 1, Dunod, Paris, 1966.
[12] T. Kailath, Linear Systems, Prentice-Hall, Englewood Cliffs, NJ, 1980.
[13] D.G. Luenberger, Canonical forms for linear multivariable systems, IEEE Trans. Automat. Control 12 (1967) 290–293.
[14] H.H. Rosenbrock, State-Space and Multivariable Theory, Wiley, New York, 1970.
[15] E.D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems, 2nd Edition, Springer, New York, 1998.
[16] A. Tannenbaum, Invariance and System Theory: Algebraic and Geometric Aspects, Springer, New York, 1981.
[17] W.A. Wolovich, Linear Multivariable Systems, Series in Applied Mathematical Sciences, Vol. 11, Springer, New York, 1974.