Infinite elementary divisor structure-preserving transformations for polynomial matrices


N. P. Karampetakis and S. Vologiannidis
Aristotle University of Thessaloniki, Department of Mathematics, Thessaloniki, Greece
karampet@auth.gr

Abstract. The main purpose of this work is to propose new notions of equivalence between polynomial matrices that preserve both the finite and the infinite elementary divisor structure. The approach we use is twofold: a) the "homogeneous polynomial matrix approach", where in place of the polynomial matrices we study their homogeneous polynomial matrix forms and use 2-D equivalence transformations in order to preserve their elementary divisor structure, and b) the "polynomial matrix approach", where certain conditions between the 1-D polynomial matrices and their transforming matrices are proposed.

1 Introduction

Consider a linear homogeneous matrix difference equation of the form

    A(σ)β_k = 0,  k = 0, 1, ..., N,                                          (1)

    A(σ) = A_q σ^q + A_{q-1} σ^{q-1} + ... + A_0 ∈ R[σ]^{r×r},               (2)

where σ denotes the forward shift operator. It is known that (1) exhibits forward behavior due to the finite elementary divisors of A(σ), and backward behavior due to the infinite elementary divisors of A(σ) (and not due to its infinite zeros, as in the continuous-time case). Actually, if A(σ) is nonsquare, or square with zero determinant, then additionally the right minimal indices play a crucial role in both the forward and the backward behavior of the AR-representation. Therefore, it seems quite natural to search for transformations that preserve both the finite and the infinite elementary divisor structure of polynomial matrices. [3] proposed the extended unimodular equivalence transformation (eue), which has the nice property of preserving only the finite elementary divisors. However, eue preserves the ied only if additional conditions are imposed.

In the present work we use two different ways to approach and solve this problem of polynomial matrix equivalence. Specifically, in the third section we notice that the finite elementary divisor structure of the homogeneous polynomial matrix form A_q s^q + A_{q-1} s^{q-1} r + ... + A_0 r^q corresponding to (2) gives us complete information about both the finite and the infinite elementary divisor structure of (2). Based on this line of thought, we reduce the problem of equivalence between 1-D polynomial matrices to a transformation between 2-D polynomial matrices. A more direct and transparent approach is given in the fourth section, where we propose additional conditions on eue. It is shown that both transformations provide necessary conditions for two polynomial matrices to possess the same elementary divisor structure. However, on the special set of square and nonsingular polynomial matrices, a) the conditions provided are necessary and sufficient, and b) the proposed transformations are equivalence transformations and define the same equivalence classes.

2 Discrete-time autoregressive representations and elementary divisor structure

In what follows, R and C denote respectively the fields of real and complex numbers, and Z, Z+ denote respectively the integers and the non-negative integers. By R[s] and R[s]^{p×m} we denote the sets of polynomials and of p×m polynomial matrices, respectively, with real coefficients and indeterminate s ∈ C. Consider a polynomial matrix

    A(s) = A_q s^q + A_{q-1} s^{q-1} + ... + A_0 ∈ R[s]^{p×m},               (3)

with A_j ∈ R^{p×m}, j = 0, 1, ..., q, and A_q ≠ 0.

Definition 1. Let A(s) ∈ R[s]^{p×m} with rank_{R(s)} A(s) = r ≤ min(p, m). The values λ_i ∈ C that satisfy the condition rank_C A(λ_i) < r are called finite zeros of A(s). Assume that A(s) has l distinct zeros λ_1, λ_2, ..., λ_l ∈ C, and let

    S_{A(s)}^{λ_i}(s) = [ Q  0_{r,m-r} ; 0_{p-r,r}  0_{p-r,m-r} ],  Q = diag{(s-λ_i)^{m_{i1}}, ..., (s-λ_i)^{m_{ir}}}

be the local Smith form of A(s) at s = λ_i, i = 1, 2, ..., l, where m_{ij} ∈ Z+ and 0 ≤ m_{i1} ≤ m_{i2} ≤ ... ≤ m_{ir}. The terms (s-λ_i)^{m_{ij}} are called the finite elementary divisors (fed) of A(s) at s = λ_i. Define also n := Σ_{i=1}^{l} Σ_{j=1}^{r} m_{ij}.

Definition 2. The dual matrix Ã(s) of A(s) is defined as Ã(s) := A_0 s^q + A_1 s^{q-1} + ... + A_q. Since Ã(0) = A_q, the dual matrix Ã(s) of A(s) has zeros at s = 0 iff rank A_q < r. Let rank A_q < r and let

    S_{Ã(s)}^{0}(s) = [ diag{s^{μ_1}, ..., s^{μ_r}}  0_{r,m-r} ; 0_{p-r,r}  0_{p-r,m-r} ]         (4)

be the local Smith form of Ã(s) at s = 0, where μ_j ∈ Z+ and 0 ≤ μ_1 ≤ μ_2 ≤ ... ≤ μ_r. The infinite elementary divisors (ied) of A(s) are defined as the finite elementary divisors s^{μ_j} of its dual Ã(s) at s = 0. Define also μ := Σ_{i=1}^{r} μ_i.

An interesting consequence of the above definition is that, in order to prove that the polynomial matrix (3) has no infinite elementary divisors, it is enough to prove that rank A_q = r. It is easily seen also that the finite elementary divisors of A(s) describe the finite zero structure of the matrix polynomial. In contrast, the infinite elementary divisors give a complete description of the total structure at infinity (pole and zero structure), and not simply that associated with the zeros [4,5]. The structural indices of a polynomial matrix (finite-infinite elementary divisors and right-left minimal indices) are connected with the rank and the degree of the matrix as follows.

Proposition 1 [6] (a) If A(s) = A_0 + A_1 s + ... + A_q s^q ∈ R[s]^{r×r} and det A(s) ≢ 0, then the total number of elementary divisors (finite and infinite ones, multiplicities accounted for) is equal to the product rq, i.e. n + μ = rq. (b) If A(s) = A_0 + A_1 s + ... + A_q s^q is nonsquare, or square with determinant equal to zero, then the total number of elementary divisors plus the left and right minimal indices of A(s) (order accounted for) is equal to rq, where now r denotes the rank of the polynomial matrix A(s).

Example 1. Consider the polynomial matrix

    A(s) = [ 1  s ; s  s^2+s+1 ] = A_2 s^2 + A_1 s + A_0,  A_2 = [ 0 0 ; 0 1 ],  A_1 = [ 0 1 ; 1 1 ],  A_0 = [ 1 0 ; 0 1 ],

and its dual Ã(s) = A_0 s^2 + A_1 s + A_2 = [ s^2  s ; s  s^2+s+1 ]. Then

    S_{A(s)}^{C}(s) = diag{1, s+1};   S_{Ã(s)}^{0}(s) = diag{1, s^3}.

Therefore A(s) has one finite elementary divisor (s+1) and one infinite elementary divisor s^3, i.e. n + μ = 1 + 3 = 4 = rq.

The elementary divisor structure of a polynomial matrix plays a crucial role in the study of the behavior of discrete-time AR-representations over a closed time interval. Consider for example the q-th order discrete-time AutoRegressive representation

    A_q ξ_{k+q} + A_{q-1} ξ_{k+q-1} + ... + A_0 ξ_k = 0.                     (5)

If σ denotes the forward shift operator, σ^i ξ_k = ξ_{k+i}, then (5) can be written as

    A(σ) ξ_k = 0,  k = 0, 1, ..., N-q,                                       (6)

where A(σ) is as in (3) and ξ_k ∈ R^r, k = 0, 1, ..., N, is a vector sequence. The solution space, or behavior, B_{A(σ)}^N of the AR-representation (6) over the finite time interval [0, N] is defined as

    B_{A(σ)}^N := { (ξ_k)_{k=0,1,...,N} ∈ (R^r)^{N+1} : ξ_k satisfies (6) for k = 0, 1, ..., N-q }    (7)

and we have

Theorem 2 [7,8] (a) If A(s) ∈ R[s]^{r×r} with det A(s) ≢ 0, then the dimension of the behavior of (6) over the finite time interval is given by dim B_{A(σ)}^N = rq = n + μ. (b) If A(s) is nonsquare, or square with zero determinant, then B_{A(σ)}^N is separated into an equivalence class space B̂_{A(σ)}^N, where each equivalence class corresponds to the family of solutions associated with certain boundary conditions, and the dimension of B̂_{A(σ)}^N is n + μ + ε, where ε denotes the total number of right minimal indices (the left minimal indices play no role in the construction of the right solution space).
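The count n + μ = rq and the dimension formula dim B^N = rq stated above lend themselves to a quick machine check. The sketch below, assuming SymPy and NumPy are available, uses an illustrative regular 2×2 matrix of degree two (an assumed example, not necessarily the one printed above): it computes n as deg det A(s), computes μ as the multiplicity of s = 0 as a zero of det Ã(s), and then compares both with the dimension of the stacked solution space of the difference equations.

```python
import numpy as np
import sympy as sp

s = sp.symbols('s')
q, r, N = 2, 2, 10
# assumed regular example A(s) = C2 s^2 + C1 s + C0
C0 = sp.eye(2)
C1 = sp.Matrix([[0, 1], [1, 1]])
C2 = sp.Matrix([[0, 0], [0, 1]])
A = sp.expand(C2*s**2 + C1*s + C0)

# n = total order of the fed = deg det A(s) (A regular)
n = sp.degree(sp.expand(A.det()), s)

# dual matrix: Atilde(s) = s^q A(1/s); mu = total order of the ied
# = multiplicity of s = 0 as a zero of det Atilde(s)
Ad = sp.expand(s**q * A.subs(s, 1/s))
mu = min(m[0] for m in sp.Poly(sp.expand(Ad.det()), s).monoms())
assert n + mu == r*q   # the count n + mu = rq for regular A(s)

# dimension formula: stack A(sigma) xi_k = 0, k = 0..N-q, as one linear system
blocks = [np.array(C.tolist(), dtype=float) for C in (C0, C1, C2)]
T = np.zeros((r*(N - q + 1), r*(N + 1)))
for k in range(N - q + 1):
    for j, Cj in enumerate(blocks):
        T[r*k:r*(k+1), r*(k+j):r*(k+j+1)] = Cj
dim_B = r*(N + 1) - np.linalg.matrix_rank(T)
print(n, mu, dim_B)   # 1 3 4
```

For this assumed example one fed and a total ied order of three are found, and the nullspace of the stacked system indeed has dimension rq = 4, independent of the interval length N.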

It is easily seen from [7], [8] and [9] that the solution space B_{A(σ)}^N of (5) admits a maximal forward-backward decomposition: it consists of two subspaces, the one corresponding to the finite elementary divisors of A(σ), which gives rise to solutions moving in the forward direction of time, and the other corresponding to the infinite elementary divisors of A(σ), which gives rise to solutions moving in the backward direction of time.

Example 2. Consider the AR-representation A(σ)ξ_k = 0, k = 0, 1, ..., N-2, where A(σ) = [ 1  σ ; σ  σ^2+σ+1 ] is the matrix of Example 1, with S_{A(s)}^{C}(s) = diag{1, s+1} and S_{Ã(s)}^{0}(s) = diag{1, s^3}. Then

    B_{A(σ)}^N = span{ (-1)^k [1 ; 1],  [δ_{N-k} ; 0],  [-δ_{N-1-k} ; δ_{N-k}],  [-δ_{N-2-k} - δ_{N-1-k} ; δ_{N-1-k}] },

where δ_j = 1 for j = 0 and δ_j = 0 otherwise. The first basis solution is due to S_{A(s)}^{C}(s) and moves in the forward direction of time, while the three last ones are due to S_{Ã(s)}^{0}(s) and are supported at the final time instants, and dim B_{A(σ)}^N = rq = 4 = n + μ.

Example 3. Consider a polynomial matrix description A(σ)ξ_k = B(σ)u_k, y_k = C(σ)ξ_k + D(σ)u_k. In order to find the state-input pairs which give rise to zero output (the output zeroing problem), we have to solve the system of difference equations

    P(σ) [ ξ_k ; u_k ] = 0,  P(σ) = [ A(σ)  -B(σ) ; C(σ)  D(σ) ].

If the system matrix P(σ) coincides with the matrix studied in the previous example, then the above discrete-time AR-representation is the one we have already studied, and therefore the state-input pairs which give rise to zero output are given by a simple transformation of the space B_{A(σ)}^N defined in the previous example, i.e.

    [ ξ_k ; u_k ] = l_1 (-1)^k [1 ; 1] + l_2 [δ_{N-k} ; 0] + l_3 [-δ_{N-1-k} ; δ_{N-k}] + l_4 [-δ_{N-2-k} - δ_{N-1-k} ; δ_{N-1-k}],  l_i ∈ R.

Since the elementary divisor structure of a polynomial matrix plays a crucial role in the study of discrete-time AR-representations and/or polynomial matrix descriptions over a closed time interval, we are interested in finding transformations that leave invariant the elementary divisor structure of polynomial matrices.

3 The homogeneous polynomial matrix approach

We present in this section two different approaches to the study of the infinite elementary divisor structure of a polynomial matrix. The first approach is to apply a suitably chosen conformal mapping in order to bring the point at infinity to some finite point, while the second approach is to use homogeneous polynomials to study the point at infinity. Let now P(m, l) be the class of (r+m)×(r+l) polynomial matrices, where l and m are fixed integers and r ranges over all integers greater than max(-m, -l).

Definition 3 [3] A_1(s), A_2(s) ∈ P(m, l) are said to be extended unimodular equivalent (eue) if there exist polynomial matrices M(s), N(s) such that

    M(s) A_1(s) = A_2(s) N(s),                                               (8)

where the compound matrices

    [ M(s)  A_2(s) ];   [ A_1(s) ; N(s) ]                                    (9)

have full rank for every s ∈ C.

Eue relates matrices of different dimensions and preserves the fed of the polynomial matrices involved. In the case where we are interested in preserving only the elementary divisors at a specific point s_0, we introduce the {s_0}-equivalence transformation.

Definition 4. A_1(s), A_2(s) ∈ P(m, l) are said to be {s_0}-equivalent if there exist rational matrices M(s), N(s), having no poles at s = s_0, such that (8) is satisfied and the compound matrices in (9) have full rank at s = s_0. {s_0}-equivalence preserves only the fed of A_1(s), A_2(s) ∈ P(m, l) of the form (s - s_0)^i, i > 0.

Based on the above two polynomial matrix transformations we can easily define the following polynomial matrix transformation.

Definition 5. A_1(s), A_2(s) ∈ P(m, l) are said to be strongly equivalent (se) if there exist: (i) polynomial matrices M_1(s), N_1(s) such that (8) is satisfied and the compound matrices in (9) have full rank for every s ∈ C, and (ii) rational matrices M_2(s), N_2(s), having no poles at s = 0, such that (8) between the dual polynomial matrices Ã_1(s), Ã_2(s) is satisfied and the respective compound matrices in (9) have full rank at s = 0.
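The two requirements of strong equivalence can be verified mechanically on a concrete pair. The sketch below, assuming SymPy and using an assumed pair (a 2×2 degree-2 matrix together with a 4×4 companion-type pencil built from its coefficients, and candidate transforming matrices M_1, N_1, M_2, N_2), checks condition (i) through the gcd of the maximal minors of a compound matrix (a nonzero constant gcd means full rank for every s ∈ C) and condition (ii) through the rank of the dual compounds at s = 0.

```python
import sympy as sp
from itertools import combinations

s = sp.symbols('s')
# assumed pair: a 2x2 degree-2 matrix and a 4x4 companion-type pencil for it
A1 = sp.Matrix([[1, s], [s, s**2 + s + 1]])
A2 = sp.Matrix([[s, 0, -1, 0], [0, s, 0, -1], [1, 0, 0, 1], [0, 1, 1, s + 1]])

# condition (i): polynomial M1, N1 with M1 A1 = A2 N1, compounds of full rank for all s
M1 = sp.Matrix([[0, 0], [0, 0], [1, 0], [0, 1]])
N1 = sp.Matrix([[1, 0], [0, 1], [s, 0], [0, s]])
assert sp.simplify(M1*A1 - A2*N1) == sp.zeros(4, 2)
Cw = M1.row_join(A2)                 # compound [M1  A2], 4 x 6
g = sp.gcd_list([sp.expand(Cw[:, list(c)].det()) for c in combinations(range(6), 4)])
assert g == 1                        # no value of s drops the rank

# condition (ii): rational M2, N2 (no poles at 0) relating the duals, full rank at s = 0
A1d = sp.expand(s**2 * A1.subs(s, 1/s))    # dual of A1 (degree 2)
A2d = sp.expand(s * A2.subs(s, 1/s))       # dual of the pencil (degree 1)
M2 = M1
N2 = sp.Matrix([[s, 0], [0, s], [1, 0], [0, 1]])
assert sp.simplify(M2*A1d - A2d*N2) == sp.zeros(4, 2)
assert M2.row_join(A2d).subs(s, 0).rank() == 4
assert A1d.col_join(N2).subs(s, 0).rank() == 2
print("strong equivalence conditions verified")
```

Note that the transforming matrices are candidates assumed for this sketch; condition (i) alone is an eue, and the second pair of relations supplies the {0}-equivalence between the duals.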

Some nice properties of the above transformation are given in the following theorem.

Theorem 3. (a) Strong equivalence is an equivalence relation on P(m, l). (b) A_1(s), A_2(s) ∈ P(m, l) are strongly equivalent iff S_{A_1(s)}^{C}(s) is a trivial expansion of S_{A_2(s)}^{C}(s) and S_{Ã_1(s)}^{0}(s) is a trivial expansion of S_{Ã_2(s)}^{0}(s), i.e. se leaves invariant the finite and the infinite elementary divisors.

Proof. (a) Eue and {s_0}-equivalence are equivalence relations on P(m, l), and thus strong equivalence is an equivalence relation, since it consists of the above two equivalence relations. (b) Strong equivalence consists of two transformations, the eue and the {0}-equivalence. However, A_1(s), A_2(s) ∈ P(m, l) are a) eue iff S_{A_1(s)}^{C}(s) is a trivial expansion of S_{A_2(s)}^{C}(s), and b) {0}-equivalent iff S_{Ã_1(s)}^{0}(s) is a trivial expansion of S_{Ã_2(s)}^{0}(s). ∎

Based on the properties of eue and {s_0}-equivalence of preserving, respectively, the fed and the elementary divisors at s = s_0, we can easily observe that the above transformation has the nice property of preserving both the finite and the infinite elementary divisors of A_i(s).

Example 4. Consider the polynomial matrices

    A_1(s) = [ 1  s ; s  s^2+s+1 ];   A_2(s) = [ s  0  -1  0 ; 0  s  0  -1 ; 1  0  0  1 ; 0  1  1  s+1 ]

and their dual polynomial matrices

    Ã_1(s) = [ s^2  s ; s  s^2+s+1 ];   Ã_2(s) = [ 1  0  -s  0 ; 0  1  0  -s ; s  0  0  s ; 0  s  s  1+s ].

Then we can find polynomial matrices M_1(s) = [ 0_{2×2} ; I_2 ], N_1(s) = [ I_2 ; sI_2 ] such that M_1(s) A_1(s) = A_2(s) N_1(s) is an eue, and rational matrices M_2(s) = [ 0_{2×2} ; I_2 ], N_2(s) = [ sI_2 ; I_2 ], having no poles at s = 0, such that M_2(s) Ã_1(s) = Ã_2(s) N_2(s) is a {0}-equivalence. Therefore A_1(s), A_2(s) are se and thus, according to Theorem 3, possess the same finite and infinite elementary divisors, i.e.

    S_{A_1(s)}^{C}(s) = diag{1, s+1};   S_{Ã_1(s)}^{0}(s) = diag{1, s^3};
    S_{A_2(s)}^{C}(s) = diag{I_3, s+1};   S_{Ã_2(s)}^{0}(s) = diag{I_3, s^3}.

The strong equivalence transformation has the disadvantage that it consists of two separate transformations. In order to overcome this difficulty, we use a homogeneous variable to represent the point at infinity. Define as the homogeneous form of A(s) in (3) the matrix

    A_H(s, r) = A_q s^q + A_{q-1} s^{q-1} r + ... + A_0 r^q.                 (10)

It is easily seen that the finite elementary divisors of A(s) are actually the finite elementary divisors of A_H(s, 1). Similarly, the infinite elementary divisors of A(s) are actually the finite elementary divisors of A_H(1, r) at r = 0. An alternative definition of the finite and infinite elementary divisors in terms of the homogeneous polynomial matrix (10) is given below.

Definition 6 [6] Let D_i be the greatest common divisor of the i×i minors of A_H(s, r), and define D_0 := 1. Then D_{i-1} divides D_i, and we can write D_i/D_{i-1} = c_i ∏ (as - br)^{l_i(b/a)}, where the product is taken over all pairs (a, b) ≠ (0, 0), proportional pairs being identified, and b/a is denoted by ∞ when a = 0. The factors (as - br)^{l_i(b/a)} with l_i(b/a) > 0 are called the elementary divisors of A(s), and the integers l_i(b/a) the elementary exponents of A(s). It is easily seen that the pairs (a, b) = (0, 1) correspond to the ied, while the remaining pairs correspond to the fed.

Example 5. Consider the polynomial matrix A(s) = A_1(s) defined in Example 4, i.e. A(s) = [ 1  s ; s  s^2+s+1 ]. Define also the homogeneous polynomial matrix

    A_H(s, r) = [ r^2  sr ; sr  s^2+sr+r^2 ].

Then D_0 = 1, D_1 = 1, D_2 = r^3 (s + r), and therefore the Smith form of A_H(s, r) over R[s, r] is given by

    S_{A_H(s,r)}^{C}(s, r) = diag{1, r^3 (s + r)},

where we have the factor (s + r) with exponent 1 and the factor r with exponent 3. The first corresponds to the fed (s + 1), while the second corresponds to the ied r^3.

An extension of the eue to the 2-D setting is given by two transformations, factor coprime and zero coprime equivalence. While both of them preserve the invariant polynomials of the equivalent matrices, the second one has the additional property of preserving certain ideals of the polynomial matrix, and is therefore more restrictive. Since we are interested only in the invariant polynomials of the homogeneous polynomial matrices, and not in their corresponding ideals, we present and use only the first of the above two transformations.

Definition 7. A_1(s, r), A_2(s, r) ∈ P(m, l) are said to be factor coprime equivalent (fce) if there exist polynomial matrices M(s, r), N(s, r) such that

    M(s, r) A_1(s, r) = A_2(s, r) N(s, r),                                   (11)

where the compound matrices

    [ M(s, r)  A_2(s, r) ];   [ A_1(s, r) ; N(s, r) ]                        (12)

are factor coprime, i.e. the (r+m)×(r+m) (resp. (r+l)×(r+l)) minors of [ M(s, r)  A_2(s, r) ] (resp. [ A_1(s, r) ; N(s, r) ]) have no common polynomial factor.

Some nice properties of the above transformation are given in the following theorem.

Theorem 4. (a) Fce is only reflexive and transitive, and therefore is not an equivalence relation on P(m, l); it is, however, an equivalence relation on the set of square and nonsingular polynomial matrices. (b) Fce leaves invariant the invariant polynomials of the fce polynomial matrices.

Since a) the above transformation leaves invariant the invariant polynomials of the equivalent polynomial matrices, and b) the elementary divisor structure of a polynomial matrix is completely characterized by the invariant polynomials of its homogeneous polynomial matrix form, it seems quite natural to reduce the problem of equivalence between two 1-D polynomial matrices to the problem of equivalence between their respective homogeneous polynomial matrices.

Definition 8. A_1(s), A_2(s) ∈ P(m, l) are defined to be factor equivalent (fe) if their respective homogeneous polynomial matrices A_{1H}(s, r), A_{2H}(s, r) are factor coprime equivalent.

Due to the properties of factor coprime equivalence, it is easy to prove the following.

Theorem 5. (i) Fe is only reflexive and transitive, and therefore is not an equivalence relation; it is, however, an equivalence relation on the set of square and nonsingular polynomial matrices. (ii) Fe leaves invariant the finite and the infinite elementary divisors of the equivalent polynomial matrices.

Example 6. Consider the polynomial matrices A_1(s), A_2(s) defined in Example 4, and their respective homogeneous polynomial matrices

    A_{1H}(s, r) = [ r^2  sr ; sr  s^2+sr+r^2 ];   A_{2H}(s, r) = [ s  0  -r  0 ; 0  s  0  -r ; r  0  0  r ; 0  r  r  r+s ].

Then we can find polynomial matrices M(s, r) = [ 0_{2×2} ; I_2 ], N(s, r) = [ rI_2 ; sI_2 ] such that M(s, r) A_{1H}(s, r) = A_{2H}(s, r) N(s, r), where the compound matrix [ M(s, r)  A_{2H}(s, r) ] has the coprime 4×4 minors s^2 and r^2, and the compound matrix [ A_{1H}(s, r) ; N(s, r) ] has the coprime 2×2 minors r^2 and s^2, so that both compound matrices are factor coprime. Therefore A_1(s), A_2(s) are fe and thus, according to Theorem 5, they possess the same fed and ied. However, every 4×4 minor of the compound matrix [ M(s, r)  A_{2H}(s, r) ] is a homogeneous polynomial of positive degree and therefore vanishes at s = r = 0, for any choice of M(s, r); consequently A_{1H}(s, r), A_{2H}(s, r) are not zero coprime equivalent, although they possess the same invariant polynomials. Therefore, zero coprime equivalence would be quite restrictive for our purpose. This is easily checked also in the case where A_1(s), A_2(s) are of different dimensions: there is then no zero coprime equivalence transformation between A_{1H}(s, r), A_{2H}(s, r).
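Computations of this kind are easy to mechanize. The sketch below, assuming SymPy and using an assumed degree-2 example together with its homogenized companion-type pencil and candidate transforming matrices, checks the fce relation and the factor coprimeness of one compound matrix through the gcd of its maximal minors (a nonzero constant gcd means no common polynomial factor).

```python
import sympy as sp
from itertools import combinations

s, r = sp.symbols('s r')
# assumed degree-2 example A(s) = C2 s^2 + C1 s + C0 and its homogenized companion pencil
C0 = sp.eye(2)
C1 = sp.Matrix([[0, 1], [1, 1]])
C2 = sp.Matrix([[0, 0], [0, 1]])
AH = sp.expand(C2*s**2 + C1*s*r + C0*r**2)            # homogeneous form A_H(s, r)
P  = sp.Matrix([[s, 0, -r, 0],
                [0, s, 0, -r],
                [r, 0, 0, r],
                [0, r, r, r + s]])                    # homogenized pencil sE - rA
M  = sp.Matrix([[0, 0], [0, 0], [1, 0], [0, 1]])      # M(s, r) = [0; I2]
N  = sp.Matrix([[r, 0], [0, r], [s, 0], [0, s]])      # N(s, r) = [rI2; sI2]

assert sp.simplify(M*AH - P*N) == sp.zeros(4, 2)      # the fce relation M A_1H = A_2H N

# factor coprimeness of [M  A_2H]: gcd of its 4x4 minors is a nonzero constant
C = M.row_join(P)
minors = [sp.expand(C[:, list(c)].det()) for c in combinations(range(6), 4)]
g = sp.gcd_list(minors)
print(g)   # 1
```

Among the fifteen maximal minors one finds s^2 and r^2, which are already coprime, so the gcd is 1; note, however, that every maximal minor vanishes at s = r = 0, which is exactly why zero coprimeness fails for such a pair.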

For 1-D systems, [3] has presented an algorithm that reduces a general polynomial matrix A(s) to an equivalent matrix pencil. More specifically, given the polynomial matrix A(s) in (3) and the matrix pencil

    sE - A = [ sI_m   -I_m    0      ...   0
               0      sI_m    -I_m   ...   0
               ...
               A_0    A_1     ...    A_{q-2}   A_{q-1} + sA_q ]              (13)

the following holds.

Theorem 6. The polynomial matrix A(s) defined in (3) and the matrix pencil sE - A defined in (13) are factor equivalent.

Proof. Consider the transformation

    M(s, r) A_H(s, r) = (sE - rA) N(s, r),

with M(s, r) = [ 0_{(q-1)m,p} ; I_p ] and N(s, r) = [ r^{q-1} I_m ; r^{q-2} s I_m ; ... ; r s^{q-2} I_m ; s^{q-1} I_m ]. The compound matrix [ M(s, r)  sE - rA ] has two qm×qm minors, obtained by selecting the columns of M(s, r) together with the first, respectively the last, (q-1)m columns of sE - rA, which are equal (up to sign) to s^{(q-1)m} and r^{(q-1)m}; thus this compound matrix is factor coprime. Similarly, the compound matrix [ A_H(s, r) ; N(s, r) ] has the two coprime m×m minors det(r^{q-1} I_m) = r^{(q-1)m} and det(s^{q-1} I_m) = s^{(q-1)m}, and thus is factor coprime. Therefore A_H(s, r) and sE - rA are factor coprime equivalent, and A(s), sE - A are factor equivalent. ∎

An illustrative example of the above theorem has already been given in Example 6. A direct consequence of the above theorem is the following lemma.

Lemma 7. A(s) and sE - A possess the same finite and infinite elementary divisor structure.

Proof. A(s) and sE - A are fe by Theorem 6, and thus, according to Theorem 5, they possess the same finite and infinite elementary divisor structure. ∎

A completely different and more transparent approach to the problem of equivalence between 1-D polynomial matrices, without using the theory of 2-D polynomial matrices, is given in the next section.

4 The polynomial matrix approach

Although eue preserves the finite elementary divisors, it does not preserve the infinite elementary divisors, as we can see in the following example.

Example 7. Consider the eue transformation M(s) A_1(s) = A_2(s) N(s) given by

    [ 1  0 ; -s  1 ] [ 1  s ; s  s^2+s+1 ] = [ 1  0 ; 0  s+1 ] [ 1  s ; 0  1 ].

Although A_1(s), A_2(s) have the same finite elementary divisors, i.e.

    S_{A_1(s)}^{C}(s) = S_{A_2(s)}^{C}(s) = diag{1, s+1},

they have different infinite elementary divisors, i.e.

    S_{Ã_1(s)}^{0}(s) = diag{1, s^3};   S_{Ã_2(s)}^{0}(s) = diag{1, s}.

The above example indicates that further restrictions must be placed on the compound matrices (9) in order to ensure that the associated transformation leaves invariant both the finite and the infinite elementary divisors. A new transformation between polynomial matrices of the same set P(m, l) is given in the following definition.

Definition 9. Two matrices A_1(s), A_2(s) ∈ P(m, l) are said to be divisor equivalent (de) if there exist polynomial matrices M(s), N(s) of appropriate dimensions such that (8) is satisfied, where (i) the compound matrices in (9) are left prime and right prime matrices, respectively, (ii) the compound matrices in (9) have no infinite elementary divisors, and (iii) the following degree conditions are satisfied:

    d[ M(s)  A_2(s) ] = d[ A_2(s) ],   d[ A_1(s) ; N(s) ] = d[ A_1(s) ],     (14)

where d[P(s)] denotes the degree of P(s), seen as a polynomial with nonzero matrix coefficients.
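The loss of the ied under eue, and the way condition (ii) of divisor equivalence detects it, can be checked mechanically. The sketch below, assuming SymPy and using an assumed eue pair (a 2×2 matrix and its Smith form, related by a unimodular transformation, which is a special case of eue), compares the total ied orders and then shows that the dual of the compound matrix [M(s) A_2(s)] loses rank at 0, i.e. the compound has infinite elementary divisors.

```python
import sympy as sp

s, w = sp.symbols('s w')
# assumed eue pair: A1 and its Smith form A2, with unimodular M, N
A1 = sp.Matrix([[1, s], [s, s**2 + s + 1]])
A2 = sp.Matrix([[1, 0], [0, s + 1]])
M  = sp.Matrix([[1, 0], [-s, 1]])
N  = sp.Matrix([[1, s], [0, 1]])
assert sp.simplify(M*A1 - A2*N) == sp.zeros(2, 2)    # the eue relation holds

def total_ied_order(A, q):
    # mu = multiplicity of s = 0 as a zero of det of the dual s^q A(1/s)
    d = sp.Poly(sp.expand((s**q * A.subs(s, 1/s)).det()), s)
    return min(m[0] for m in d.monoms())

print(total_ied_order(A1, 2), total_ied_order(A2, 1))   # 3 versus 1: ied not preserved

# condition (ii) of divisor equivalence detects this: the dual of the compound
# [M(s)  A2(s)] (degree 1) loses rank at w = 0, i.e. the compound has ied
C = M.row_join(A2)
Cdual = sp.expand(w * C.subs(s, 1/w))
print(Cdual.subs(w, 0).rank())   # 1 < 2
```

Both matrices satisfy the count n + mu = rq individually (1 + 3 = 4 and 1 + 1 = 2), so an eue between matrices of different degrees necessarily redistributes the ied; the extra conditions of de rule out exactly such transformations.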

An important property of the above transformation is given in the following theorem.

Theorem 8. If A_1(s), A_2(s) ∈ P(m, l) are divisor equivalent, then they have the same finite and infinite elementary divisors.

Proof. According to condition (i) of divisor equivalence, A_1(s) and A_2(s) are also eue, and thus have the same finite elementary divisors. Setting s = 1/w, (8) may be rewritten as

    M(1/w) A_1(1/w) = A_2(1/w) N(1/w),

and then, multiplying both sides by w^{d[M A_2]} w^{d[A_1;N]} and grouping the factors, as

    ( w^{d[M A_2]} M(1/w) ) ( w^{d[A_1;N]} A_1(1/w) ) = ( w^{d[M A_2]} A_2(1/w) ) ( w^{d[A_1;N]} N(1/w) ).      (15)

Now, since d[ M(s)  A_2(s) ] = d[A_2(s)] and d[ A_1(s) ; N(s) ] = d[A_1(s)], equation (15) may be rewritten as

    M̄(w) Ã_1(w) = Ã_2(w) N̄(w),                                              (16)

where ~ denotes the dual matrix and M̄(w) := w^{d[A_2]} M(1/w), N̄(w) := w^{d[A_1]} N(1/w) are rational matrices having no poles at w = 0. The compound matrix [ M(s)  A_2(s) ] (respectively [ A_1(s) ; N(s) ]) has no infinite elementary divisors, and therefore its dual [ M̄(w)  Ã_2(w) ] (respectively [ Ã_1(w) ; N̄(w) ]) has no finite zeros at w = 0. Therefore relation (16) is a {0}-equivalence relation, which preserves the finite elementary divisors of Ã_1(w), Ã_2(w) at w = 0, or otherwise the infinite elementary divisors of A_1(s), A_2(s). ∎

Example 8. Consider the polynomial matrices A_1(s), A_2(s) defined in Example 4. Then we can find polynomial matrices M(s), N(s) such that (8) is satisfied and

    S_{[M(s)  A_2(s)]}^{C}(s) = [ I_4  0_{4×2} ];   S_{[A_1(s) ; N(s)]}^{C}(s) = [ I_2 ; 0_{4×2} ],

i.e. the compound matrices are left prime and right prime respectively; moreover, the compound matrices have no infinite elementary divisors, and the degree conditions d[M A_2] = d[A_2] = 1 and d[A_1 ; N] = d[A_1] = 2 are satisfied. Therefore A_1(s), A_2(s) are divisor equivalent and thus, according to Theorem 8, possess the same finite and infinite elementary divisors.

Although de preserves both the fed and the ied, it is not known whether de a) is an equivalence relation on P(m, l), and b) provides necessary and sufficient conditions for two polynomial matrices to possess the same fed and ied. Also, the exact geometrical meaning of the degree conditions appearing in the definition of de is under research. Now consider the following set of polynomial matrices:

    R_c[s] := { A(s) = A_0 + A_1 s + ... + A_q s^q ∈ R[s]^{r×r} : det A(s) ≢ 0 and c = rq }.      (17)

Example 9. The polynomial matrices A_1(s), A_2(s) defined in Example 4 belong to R_4[s], since r_1 q_1 = 2·2 = 4 = r_2 q_2 = 4·1.

The degree conditions of de are redundant in R_c[s], as we can see in the following lemma.

Lemma 9 [4] (a) Let A_1(s), A_2(s) ∈ R_c[s] with dimensions m×m and (m+r)×(m+r) respectively, where r > 0. Then the first two conditions of de imply the degree conditions of de, i.e. deg M(s) ≤ deg A_2(s) and deg N(s) ≤ deg A_1(s). (b) Let A_1(s), A_2(s) ∈ R_c[s] have the same dimensions m×m, and therefore the same degree d. If A_1(s), A_2(s) satisfy (8) and the first two conditions of de, then deg M(s) = deg N(s).

Therefore, in this special case, we are able to restate the definition of de on R_c[s] with only two conditions.

Definition 10. Two matrices A_1(s), A_2(s) ∈ R_c[s] are called divisor equivalent (de) if there exist polynomial matrices M(s), N(s) of appropriate dimensions such that (8) is satisfied, where the compound matrices in (9) have full rank and no fed nor ied.
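The membership count c = rq in R_c[s] has a concrete behavioral counterpart: two matrices in the same R_c[s] generate AR-representations whose solution spaces have the same dimension c. The sketch below, assuming NumPy and using an assumed pair in R_4[s] (a 2×2 degree-2 matrix and a 4×4 companion-type pencil), checks this by stacking the difference equations of each representation into a linear system and computing the nullspace dimension.

```python
import numpy as np

# assumed pair in R_4[s]: A(s) with r = 2, q = 2 and a companion-type pencil with
# r = 4, q = 1, so that c = rq = 4 for both
coeffs_A = [np.eye(2),
            np.array([[0., 1.], [1., 1.]]),
            np.array([[0., 0.], [0., 1.]])]
E  = np.diag([1., 1., 0., 1.])
A0 = np.array([[0., 0., -1., 0.],
               [0., 0., 0., -1.],
               [1., 0., 0., 1.],
               [0., 1., 1., 1.]])
coeffs_P = [A0, E]        # pencil: constant and first-order coefficient matrices

def behavior_dim(coeffs, N):
    """dimension of {(xi_0..xi_N) : sum_j coeffs[j] xi_{k+j} = 0, k = 0..N-q}"""
    r, q = coeffs[0].shape[0], len(coeffs) - 1
    T = np.zeros((r*(N - q + 1), r*(N + 1)))
    for k in range(N - q + 1):
        for j, Cj in enumerate(coeffs):
            T[r*k:r*(k+1), r*(k+j):r*(k+j+1)] = Cj
    return r*(N + 1) - np.linalg.matrix_rank(T)

print(behavior_dim(coeffs_A, 12), behavior_dim(coeffs_P, 12))   # 4 4
```

Both solution spaces have dimension 4, independently of the interval length; this equality of dimensions is what makes a bijection between the two behaviors possible at all.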

Some properties of de are given in the following theorem.

Theorem 4. (a) A_1(s), A_2(s) ∈ R[s]^(c×c) are de iff they have the same fed and ied. (b) De is an equivalence relation on R[s]^(c×c).

A different approach concerning the equivalence between two polynomial matrices in R[s]^(c×c) is presented in [15].

Definition 5. A_1(s) and A_2(s) ∈ R[s]^(c×c) are called strictly equivalent iff their equivalent matrix pencils sE_1 - A_1 and sE_2 - A_2, with E_i, A_i ∈ R^(c×c) as proposed in (3), are strictly equivalent in the sense of [16].

De and strict equivalence define the same equivalence class on R[s]^(c×c):

Theorem. Strict equivalence (Definition 5) belongs to the same equivalence class as de.

A geometrical meaning of de is given in the sequel.

Definition. Two AR-representations A_i(σ)ξ^i(k) = 0, k = 0, 1, ..., N, where σ is the shift operator and A_i(σ) ∈ R[σ]^(r_i×r_i), i = 1, 2, will be called fundamentally equivalent (fe) over the finite time interval k = 0, 1, ..., N iff there exists a bijective polynomial map between their respective behaviors B^N_{A_1(σ)} and B^N_{A_2(σ)}.

Theorem. De implies fe.

Proof. From (8) we have

    M(σ)A_1(σ) = A_2(σ)N(σ).    (8)

By multiplying (8) on the right by ξ^1(k) ∈ B^N_{A_1(σ)} we get M(σ)A_1(σ)ξ^1(k) = A_2(σ)N(σ)ξ^1(k) = 0, and therefore

    ξ^2(k) ∈ B^N_{A_2(σ)} s.t. ξ^2(k) = N(σ)ξ^1(k).    (9)

According to the conditions of de, the compound matrix [A_1(σ)^T N(σ)^T]^T has full rank and no fed or ied, which implies [8] that A_1(σ)ξ^1(k) = 0 and N(σ)ξ^1(k) = 0 hold simultaneously only for ξ^1(k) = 0. Therefore the map defined by the polynomial matrix N(σ) : B^N_{A_1(σ)} → B^N_{A_2(σ)}, ξ^1(k) → ξ^2(k), is injective. Furthermore, dim B^N_{A_1(σ)} = c = dim B^N_{A_2(σ)}, since A_i(σ) ∈ R[σ]^(r_i×r_i), and thus N(σ) is a bijection between B^N_{A_1(σ)} and B^N_{A_2(σ)}.

However, certain questions remain open concerning the converse of the above theorem.

5. Conclusions

The forward and backward behaviour of a discrete time AR-representation over a closed time interval is connected with the finite and infinite elementary divisor structure of the polynomial matrix involved in the AR-representation. Furthermore, it is known that a polynomial matrix description can always be written as an AR-representation, and many problems arising from Rosenbrock system theory can be reduced to problems based on AR-representation theory. This was the motivation of this work, which presents three new polynomial matrix transformations (strong equivalence, factor equivalence and divisor equivalence) that preserve both the finite and the infinite elementary divisor structure of polynomial matrices. More specifically, it is shown that strong equivalence is an equivalence relation and provides necessary and sufficient conditions for two polynomial matrices to possess the same elementary divisor structure. However, its main disadvantage is that it consists of two separate transformations. We have shown that this problem can be overcome by using the homogeneous form of the one-variable polynomial matrices and then applying the known transformations from 2-D systems theory. Following this reasoning, we have introduced the factor equivalence transformation. Although factor equivalence is simpler in the sense that it uses only one pair of transformation matrices instead of two (as strong equivalence does), it suffers from the fact that an extra step (homogenization) is needed. A solution to this problem is given by adding extra conditions to the extended unimodular equivalence transformation, giving birth to divisor equivalence. We have shown that both factor equivalence and divisor equivalence provide necessary conditions for two polynomial matrices to possess the same elementary divisor structure. The conditions become necessary and sufficient in the case of regular (square and nonsingular) matrices. On this special set of matrices, both transformations are equivalence relations sharing the same equivalence class. A geometrical interpretation of de, in terms of maps between the solution spaces of AR-representations, is given in the special case of regular polynomial matrices.

Finally, certain questions remain open concerning the sufficiency of divisor equivalence for nonsquare polynomial matrices, or square polynomial matrices with zero determinant. [15] has proposed a new notion of equivalence, named fundamental equivalence, in terms of mappings between discrete time AR-representations described by square and nonsingular polynomial matrices. Further research is now focused on: a) how fundamental equivalence can be extended to nonsquare polynomial matrices, b) what its invariants are, and c) what the connection is between the transformations presented in this work and the fe transformation. An extension of these results to Rosenbrock system matrix theory is also under research.
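Theorem 4 reduces checking de for regular matrices to comparing fed and ied. As an illustration only (this sketch is not part of the paper, and the function names and example matrices are our own), the following Python/SymPy code computes the invariant polynomials of A(s) from determinantal divisors (their irreducible factors give the fed) and obtains the ied orders as the orders of s = 0 as a zero of the invariant polynomials of the dual matrix s^q A(1/s):

```python
# Illustrative sketch: compare the finite and infinite elementary divisor
# structure of two polynomial matrices (criterion of Theorem 4).
from functools import reduce
from itertools import combinations

import sympy as sp

s = sp.symbols('s')

def invariant_polynomials(A):
    """Diagonal of the Smith form of A(s) over R[s], via the determinantal
    divisors d_k = monic gcd of all k x k minors; i_k = d_k / d_{k-1}."""
    n = min(A.rows, A.cols)
    polys, d_prev = [], sp.Integer(1)
    for k in range(1, n + 1):
        minors = [sp.expand(A[list(r), list(c)].det())
                  for r in combinations(range(A.rows), k)
                  for c in combinations(range(A.cols), k)]
        d_k = reduce(sp.gcd, minors, sp.Integer(0))
        if d_k == 0:                      # rank of A(s) is k - 1; stop here
            break
        d_k = sp.Integer(1) if d_k.is_number else sp.monic(d_k, s)
        polys.append(sp.cancel(d_k / d_prev))
        d_prev = d_k
    return polys

def ied_orders(A):
    """Orders of the infinite elementary divisors of A(s): the orders of
    s = 0 as a zero of the invariant polynomials of the dual s^q A(1/s)."""
    q = max(sp.degree(e, s) for e in A if not e.is_zero)
    dual = A.applyfunc(lambda e: sp.expand(sp.cancel(s**q * e.subs(s, 1/s))))
    orders = []
    for p in invariant_polynomials(dual):
        m = min(mon[0] for mon in sp.Poly(p, s).monoms())  # multiplicity of 0
        if m > 0:
            orders.append(m)
    return orders

# Two regular matrices chosen (for this sketch) to have the same structure:
A1 = sp.Matrix([[1, 0], [0, s**2 - s]])
A2 = sp.Matrix([[0, s**2 - s], [1, 0]])

same_fed = invariant_polynomials(A1) == invariant_polynomials(A2)
same_ied = ied_orders(A1) == ied_orders(A2)
```

Here both matrices have invariant polynomials {1, s^2 - s} and a single ied of order 2, so `same_fed` and `same_ied` are both True and, by the criterion of Theorem 4, the two matrices are de.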

REFERENCES

[1] E. N. Antoniou, A. I. G. Vardulakis, and N. P. Karampetakis, "A spectral characterization of the behavior of discrete time AR-representations over a finite time interval," Kybernetika (Prague), vol. 34, no. 5, pp. 555-564, 1998.
[2] N. Karampetakis, "On the construction of the forward and backward solution space of a discrete time AR-representation," 15th IFAC World Congress, 2002.
[3] A. C. Pugh and A. K. Shelton, "On a new definition of strict system equivalence," Internat. J. Control, vol. 27, no. 5, pp. 657-672, 1978.
[4] G. E. Hayton, A. C. Pugh, and P. Fretwell, "Infinite elementary divisors of a matrix polynomial and implications," Internat. J. Control, vol. 47, no. 1, pp. 53-64, 1988.
[5] A. Vardulakis, Linear Multivariable Control: Algebraic Analysis and Synthesis Methods. John Wiley and Sons, 1991.
[6] C. Praagman, "Invariants of polynomial matrices," 1st European Control Conference, pp. 1274-1277, 1991.
[7] E. Antoniou, A. Vardulakis, and N. Karampetakis, "A spectral characterization of the behavior of discrete time AR-representations over a finite time interval," Kybernetika, vol. 34, no. 5, pp. 555-564, 1998.
[8] N. Karampetakis, "On the determination of the dimension of the solution space of discrete time AR-representations," 15th IFAC World Congress on Automatic Control, 2002.
[9] F. Lewis, "Descriptor systems: Decomposition into forward and backward subsystems," IEEE Transactions on Automatic Control, vol. 29, pp. 167-170, February 1984.
[10] N. P. Karampetakis, A. C. Pugh, and A. I. Vardulakis, "Equivalence transformations of rational matrices and applications," Internat. J. Control, vol. 59, no. 4, 1994.
[11] D. Johnson, Coprimeness in Multidimensional System Theory and Symbolic Computation. PhD thesis, Loughborough University of Technology, UK, 1993.
[12] B. Levy, 2-D Polynomial and Rational Matrices and their Applications for the Modelling of 2-D Dynamical Systems. PhD thesis, Stanford University, USA, 1981.
[13] I. Gohberg, P. Lancaster, and L. Rodman, Matrix Polynomials (Computer Science and Applied Mathematics). New York: Academic Press, 1982.
[14] N. P. Karampetakis, S. Vologiannidis, and A. Vardulakis, "Notions of equivalence for discrete time AR-representations," 15th IFAC World Congress on Automatic Control, July 2002.
[15] A. Vardulakis and E. Antoniou, "Fundamental equivalence of discrete time AR representations," in Proceedings of the 1st IFAC Symposium on System Structure and Control.
[16] F. Gantmacher, The Theory of Matrices. Chelsea Pub. Co., 1959.