A SPECTRAL CHARACTERIZATION OF THE BEHAVIOR OF DISCRETE TIME AR-REPRESENTATIONS OVER A FINITE TIME INTERVAL

E. N. Antoniou, A. I. G. Vardulakis, N. P. Karampetakis
Aristotle University of Thessaloniki, Faculty of Sciences, Department of Mathematics, Thessaloniki 54006, Greece
Tel. +30 31 997 951, Fax +30 31 997 951
E-mail: antoniou@ccf.auth.gr, avardoula@ccf.auth.gr, karampetakis@ccf.auth.gr

Abstract. In this paper we investigate the behavior of discrete time AR (autoregressive) representations over a finite time interval, in terms of the finite and infinite spectral structure of the polynomial matrix involved in the AR equation. A boundary mapping equation and a closed formula for the determination of the solution in terms of the boundary conditions are also given.

1. Introduction

The class of discrete time descriptor systems has been the subject of several studies in recent years (see for example [1]-[6]). The main reason for this is that descriptor equations naturally represent a very wide class of physical, economical or social systems. One of the most interesting features of discrete time singular systems is without doubt their non-causal behavior, whereas their continuous time counterparts exhibit impulsive behavior. Descriptor systems can, however, be considered as a special first order case of the more general Auto-Regressive Moving Average (ARMA) multivariable model, and the study of this more general case can therefore prove very important. Such models (also known as polynomial matrix descriptions, or PMDs) have been extensively studied in the continuous time case by several authors. In this note we investigate some structural properties of the discrete time autoregressive (AR) representation, as a first step towards the generalization of descriptor systems theory to the higher order case.

Consider the discrete time AR equation

$$A_q x_{k+q} + A_{q-1} x_{k+q-1} + \cdots + A_0 x_k = 0 \qquad (1.1)$$

where $k = 0, 1, 2, \ldots, N-q$, or equivalently

$$A(\sigma) x_k = 0, \qquad k = 0, 1, 2, \ldots, N-q \qquad (1.2)$$

where $A(\sigma) = A_q \sigma^q + A_{q-1} \sigma^{q-1} + \cdots + A_0 \in \mathbb{R}^{r \times r}[\sigma]$ is a regular polynomial matrix, i.e. $\det A(\sigma) \neq 0$ for almost every $\sigma$, $x_k \in \mathbb{R}^r$, $k = 0, 1, 2, \ldots, N$ is a vector sequence, and $\sigma$ denotes the forward shift operator $\sigma x_k = x_{k+1}$.
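As a purely illustrative aside (not part of the original development), the following minimal Python/numpy sketch reads (1.1) as a residual test on a finite sequence $x_0, \ldots, x_N$; the helper name `ar_residuals` and the coefficient-list convention are assumptions of this sketch, and the matrix used is the one from the example of Section 4 below.

```python
import numpy as np

def ar_residuals(A, x):
    """Residuals A_q x_{k+q} + ... + A_0 x_k of (1.1) for k = 0, ..., N-q.

    A is the coefficient list [A_0, ..., A_q] of r x r arrays and
    x is an (N+1) x r array holding the sequence x_0, ..., x_N."""
    q, N = len(A) - 1, x.shape[0] - 1
    return [sum(A[i] @ x[k + i] for i in range(q + 1)) for k in range(N - q + 1)]

# For A(sigma) = [[1, -sigma^2], [0, 1]] (the matrix of the example in
# Section 4), the sequence x_k = [delta_{N-k}, 0]^T solves (1.1):
N = 6
A = [np.eye(2), np.zeros((2, 2)), np.array([[0., -1.], [0., 0.]])]
x = np.zeros((N + 1, 2)); x[N, 0] = 1.0
assert all(np.allclose(res, 0) for res in ar_residuals(A, x))
```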

Notice that we are interested in the behavior of (1.2) over a specified finite time interval $k = 0, 1, 2, \ldots, N$ and not over $\mathbb{Z}^+$. The matrix $A_q$ is not in general invertible, which means that (1.1) cannot be solved by iterating forward, i.e. given $x_0, x_1, \ldots, x_{q-1}$ we cannot successively determine $x_q, x_{q+1}, \ldots$. This is the main reason why we treat the equation as a boundary condition problem, where both the initial and the final conditions must be given; it also naturally leads to the restriction of the time domain to a finite interval instead of $\mathbb{Z}^+$. The results of the present paper should be compared to [1]-[6], where similar problems for systems in descriptor form are treated in a similar manner. Finally, following the notation of [12], we define the behavior of (1.2) as

$$\mathcal{B} = \{ x_k \mid x_k \in \mathbb{R}^r,\ x_k \text{ satisfies } (1.2) \} \qquad (1.3)$$

where $k = 0, 1, 2, \ldots, N$.

2. Preliminaries - Notation

The mathematical background required for this note comes mainly from [7]-[11]. By $\mathbb{R}^{m \times n}[\sigma]$ we denote the set of $m \times n$ polynomial matrices with real coefficients and indeterminate $\sigma$. A square polynomial matrix $A(\sigma) = A_q \sigma^q + A_{q-1} \sigma^{q-1} + \cdots + A_0 \in \mathbb{R}^{r \times r}[\sigma]$ is called regular iff $\det A(\sigma) \neq 0$ for almost every $\sigma$. The (finite) eigenvalues of $A(\sigma)$ are defined as the roots of the equation $\det A(\sigma) = 0$. Let $S_{A(\sigma)}^{\lambda_i} = \operatorname{diag}\{(\sigma - \lambda_i)^{m_{i1}}, \ldots, (\sigma - \lambda_i)^{m_{ir}}\}$ be the local Smith form of $A(\sigma)$ at $\sigma = \lambda_i$, where $\lambda_i$ is an eigenvalue of $A(\sigma)$ and $0 \leq m_{i1} \leq m_{i2} \leq \cdots \leq m_{ir}$. The terms $(\sigma - \lambda_i)^{m_{ij}}$ are called the (finite) elementary divisors of $A(\sigma)$ at $\sigma = \lambda_i$, the exponents $m_{ij}$, $j = 1, 2, \ldots, r$ are the partial multiplicities of $\lambda_i$, and $m_i = \sum_{j=1}^{r} m_{ij}$ is the multiplicity of $\lambda_i$. The dual matrix of $A(\sigma)$ is defined as $\tilde{A}(\sigma) = \sigma^q A(\sigma^{-1}) = A_0 \sigma^q + A_1 \sigma^{q-1} + \cdots + A_q$. The infinite elementary divisors of $A(\sigma)$ are the finite elementary divisors of the dual matrix $\tilde{A}(\sigma)$ at $\sigma = 0$. The total number of elementary divisors (finite and infinite) of $A(\sigma)$ is equal to the product $rq$, where $r$ is the dimension and $q$ the degree of $A(\sigma)$. A pair of matrices $X_i \in \mathbb{R}^{r \times m_i}$, $J_i \in \mathbb{R}^{m_i \times m_i}$, where $J_i$ is in Jordan form and $\lambda_i$ is an eigenvalue of $A(\sigma)$ of multiplicity $m_i$, is called an eigenpair of $A(\sigma)$ corresponding to $\lambda_i$ iff

$$\sum_{k=0}^{q} A_k X_i J_i^k = 0, \qquad \operatorname{rank} \operatorname{col}(X_i J_i^k)_{k=0}^{m_i - 1} = m_i \qquad (2.1)$$

where

$$\operatorname{col}(X_i J_i^k)_{k=0}^{m_i - 1} = \begin{bmatrix} X_i \\ X_i J_i \\ \vdots \\ X_i J_i^{m_i - 1} \end{bmatrix}$$

The matrix $J_i$ consists of Jordan blocks with sizes equal to the partial multiplicities of $\lambda_i$.
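Both the dual matrix and the regularity test admit a direct numerical sketch. The following Python fragment is illustrative (the helper names `dual` and `is_regular` are assumptions): it reverses the coefficient list, and samples $\det A(\sigma)$ at random points, since the determinant of a regular $A(\sigma)$ is nonzero at a randomly chosen point with probability one.

```python
import numpy as np

def dual(A):
    """Coefficient list of the dual matrix A~(sigma) = sigma^q A(1/sigma):
    simply the reversed list [A_q, ..., A_0]."""
    return A[::-1]

def is_regular(A, trials=8, tol=1e-8, seed=0):
    """Probabilistic regularity test: det A(sigma) is either identically
    zero or nonzero at a random sample point with probability one."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        s = rng.standard_normal()
        if abs(np.linalg.det(sum(Ai * s**i for i, Ai in enumerate(A)))) > tol:
            return True
    return False
```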

where col(x i J k i ) m i 1 k=0 = X i X i J i. X i J m i 1 i. The matrix J i consists of Jordan blocks with sizes equal to the partial multiplicities of λ i. Let λ 1, λ 2,..., λ p be the distinct finite eigenvalues of A(σ), and (X i, J i ) their corresponding eigenpairs. The total number of finite elementary divisors is equal to the determinantal degree of A(σ), i.e. n = deg(det A(σ)) = p i=1 m i. The pair of matrices X F = X 1, X 2,..., X p R r n (2.2) J F = diag{j 1, J 2,..., J p } R n n (2.3) is defined as a finite spectral pair of A(σ) and satisfies the following q A k X F JF k = 0, k=0 rank col(x F J k F ) n 1 k=0 = n (2.4) An eigenpair of the dual matrix Ã(σ) corresponding to the eigenvalue λ = 0 is defined as an infinite spectral pair of A(σ), and satisfies the following q k=0 where X R r µ, J R µ µ. A k X J q k = 0, rank col(x J k ) µ 1 k=0 = µ (2.5) 3. Main Results Consider the AR-representation (1.2) and a finite spectral pair (X F, J F ) of A(σ). In the continuous time case, i.e. where σ = d is the differential operator dt instead of the forward shift operator, finite spectral pairs give rise to linearly independent solutions. A similar situation occurs in our case. We state the following theorem Theorem 1. If (X F, J F ) is a finite (If (X, J ) is an infinite) spectral pair of A(σ) of dimensions r n, n n (r µ, µ µ) respectively where n = deg A(σ) (µ is the multiplicity of the eigenvalue at σ = of A(σ)) then the columns of the matrix Ψ F (k) = X F J k F, k = 0, 1, 2,..., N (3.1) (Ψ (k) = X J N k, k = 0, 1, 2,..., N) (3.2) are linearly independent solutions of (1.2) for N n (N µ). Proof. Let (X F, J F ) be a finite spectral pair of A(σ). We have A(σ)X F J k F = = ( q i=0 q i=0 3 A i X F J k+i F = A i X F J i F )J k F (2.4) = 0

The above theorem shows that we can form solutions of (1.2) as linear combinations of the columns of the matrices $\Psi_F(k)$ and $\Psi_\infty(k)$. It remains to show that the columns of these two matrices are enough to span the entire solution space of the equation over a finite interval $k = 0, 1, 2, \ldots, N$. Consider equation (1.2), or equivalently its more detailed form (1.1). This equation can be written as

$$R_{N+1}(A)\, \hat{x}_{N+1} = 0 \qquad (3.3)$$

where $R_{N+1}(A)$ is the resultant matrix of $A(\sigma)$ having $N+1$ block columns,

$$R_{N+1}(A) = \begin{bmatrix} A_0 & A_1 & \cdots & A_q & 0 & \cdots & 0 \\ 0 & A_0 & A_1 & \cdots & A_q & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0 \\ 0 & \cdots & 0 & A_0 & A_1 & \cdots & A_q \end{bmatrix} \qquad (3.4)$$

with $R_{N+1}(A) \in \mathbb{R}^{r(N-q+1) \times r(N+1)}$ and $\hat{x}_{N+1} = [x_0^T\ x_1^T\ \cdots\ x_N^T]^T \in \mathbb{R}^{r(N+1)}$. Obviously equations (1.1) and (1.2) are equivalent to (3.3) on the specified time interval $k = 0, 1, 2, \ldots, N$. With this simple remark, and using the theory of kernels of resultant matrices of polynomial matrices, we can state the following very important theorem.

Theorem 2. The behavior of the AR-representation (1.2) over the finite time interval $k = 0, 1, 2, \ldots, N$ is

$$\mathcal{B} = \operatorname{span}\, [X_F\ X_\infty] \begin{bmatrix} J_F^k & 0 \\ 0 & J_\infty^{N-k} \end{bmatrix} \qquad (3.5)$$

and $\dim \mathcal{B} = rq$, where $r$ and $q$ are respectively the dimension of $A(\sigma)$ and the maximum degree of $\sigma$ in $A(\sigma)$, $(X_F, J_F)$ is a finite spectral pair of $A(\sigma)$ and $(X_\infty, J_\infty)$ is an infinite spectral pair of $A(\sigma)$.

Proof. Consider equation (3.3). The solution space of this equation is the kernel of $R_{N+1}(A)$, so the behavior of (1.2) is clearly isomorphic to the solution space of (3.3), i.e.

$$\mathcal{B} \cong \operatorname{Ker} R_{N+1}(A) \qquad (3.6)$$

But from Theorem 1.1 in [10] we have, as a special case, that

$$\operatorname{Ker} R_{N+1}(A) = \operatorname{Im} \operatorname{col}(X_F J_F^i)_{i=0}^{N} \oplus \operatorname{Im} \operatorname{col}(X_\infty J_\infty^{N-i})_{i=0}^{N} \qquad (3.7)$$

The dimensions of $X_F$, $J_F$, $X_\infty$, $J_\infty$ are $r \times n$, $n \times n$, $r \times \mu$ and $\mu \times \mu$ respectively, where $n = \deg \det A(\sigma)$ is the total number of finite elementary divisors, $\mu$ is the total number of infinite elementary divisors (multiplicities accounted for in both cases) and $n + \mu = rq$ (see [8]). Furthermore, it is known [8] that the columns of $\operatorname{col}(X_F J_F^i,\ X_\infty J_\infty^{N-i})_{i=0}^{N}$ are linearly independent. On the other hand, from the regularity assumption on $A(\sigma)$ we have $\operatorname{rank} R_{N+1}(A) = (N-q+1)r$ (a well known result; see for example [11], exercise 4.10) and thus

$$\dim \operatorname{Ker} R_{N+1}(A) = rq \qquad (3.8)$$

Now (3.6), (3.7) and (3.8) prove that the columns of the matrix $[X_F\ X_\infty]\{J_F^k \oplus J_\infty^{N-k}\}$ indeed form a basis of $\mathcal{B}$, and consequently $\dim \mathcal{B} = rq$. $\square$
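The construction (3.4) translates directly into code, and Theorem 2 can then be checked numerically. A sketch, assuming numpy and using the unimodular matrix of the example in Section 4 ($r = q = 2$, so $\dim \mathcal{B} = 4$); the function name `resultant` is an assumption of this sketch:

```python
import numpy as np

def resultant(A, N):
    """Block resultant R_{N+1}(A) of (3.4): N-q+1 block rows, each the row
    [A_0 A_1 ... A_q] shifted one block column right of the previous one."""
    q, r = len(A) - 1, A[0].shape[0]
    R = np.zeros(((N - q + 1) * r, (N + 1) * r))
    for i in range(N - q + 1):
        for j, Aj in enumerate(A):
            R[i * r:(i + 1) * r, (i + j) * r:(i + j + 1) * r] = Aj
    return R

# Check of Theorem 2 for A(sigma) = [[1, -sigma^2], [0, 1]] (Section 4):
A = [np.eye(2), np.zeros((2, 2)), np.array([[0., -1.], [0., 0.]])]
for N in (4, 7, 12):
    R = resultant(A, N)
    assert R.shape[1] - np.linalg.matrix_rank(R) == 2 * 2   # dim B = rq = 4
```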

It is clear that the solution space $\mathcal{B}$ can be decomposed into two subspaces, one corresponding to the finite eigenstructure of the polynomial matrix and the other to the infinite one, i.e.

$$\mathcal{B} = \mathcal{B}_F \oplus \mathcal{B}_B$$

where $\mathcal{B}_F = \operatorname{Im} \operatorname{col}(X_F J_F^i)_{i=0}^{N}$ and $\mathcal{B}_B = \operatorname{Im} \operatorname{col}(X_\infty J_\infty^{N-i})_{i=0}^{N}$ (the subscripts F and B are the initials of the words Forward and Backward). The first part $\mathcal{B}_F$ gives rise to solutions moving in the forward direction of time and reflects the forward propagation of the initial conditions $x_0, x_1, \ldots, x_{q-1}$, while the second part $\mathcal{B}_B$ gives solutions moving backwards in time, i.e. from $N$ to $0$. This discussion should be compared to that in [1], [2], [3]. Notice that the above decomposition of the solution space into forward and backward subspaces corresponds to a maximal forward decomposition of the descriptor space in [3].

A very interesting problem is to determine a closed formula for the solution of (1.2) when boundary conditions are given. After the above discussion of the behavior of (1.2), the reason why we have to prescribe both initial and final conditions is obvious.

Theorem 3. Given the initial conditions vector $\hat{x}_I = [x_0^T\ x_1^T\ \cdots\ x_{q-1}^T]^T \in \mathbb{R}^{rq}$ and the final conditions vector $\hat{x}_F = [x_{N-q+1}^T\ x_{N-q+2}^T\ \cdots\ x_N^T]^T \in \mathbb{R}^{rq}$, (1.2) has the unique solution

$$x_k = \begin{bmatrix} X_F J_F^k M_F & X_\infty J_\infty^{N-k} M_\infty \end{bmatrix} \begin{bmatrix} \hat{x}_I \\ \hat{x}_F \end{bmatrix} \qquad (3.9)$$

for $k = 0, 1, 2, \ldots, N$, iff the vectors $\hat{x}_I$, $\hat{x}_F$ satisfy the compatibility boundary condition

$$\begin{bmatrix} \hat{x}_I \\ \hat{x}_F \end{bmatrix} \in \ker \begin{bmatrix} J_F^{N-q+1} M_F & -M_F \\ M_\infty & -J_\infty^{N-q+1} M_\infty \end{bmatrix} \qquad (3.10)$$

where $M_F \in \mathbb{R}^{n \times rq}$, $M_\infty \in \mathbb{R}^{\mu \times rq}$ are defined by

$$\begin{bmatrix} M_F \\ M_\infty \end{bmatrix} = \big( \operatorname{col}(X_F J_F^{i-1},\ X_\infty J_\infty^{q-i})_{i=1}^{q} \big)^{-1} \in \mathbb{R}^{rq \times rq} \qquad (3.11)$$
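Equation (3.11) also translates directly into code. The following sketch (illustrative name `boundary_maps`; numpy assumed) inverts the stacked matrix and splits the inverse into its top $n$ and bottom $\mu$ rows:

```python
import numpy as np

def boundary_maps(XF, JF, Xoo, Joo, q):
    """M_F and M_oo from (3.11): the top n and bottom mu rows of the
    inverse of col(X_F J_F^{i-1}, X_oo J_oo^{q-i})_{i=1}^q, which is
    invertible for decomposable pairs (see [8])."""
    n = JF.shape[0]
    rows = [np.hstack([XF @ np.linalg.matrix_power(JF, i - 1),
                       Xoo @ np.linalg.matrix_power(Joo, q - i)])
            for i in range(1, q + 1)]
    M = np.linalg.inv(np.vstack(rows))
    return M[:n, :], M[n:, :]
```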

Proof. Every solution $x_k$, $k = 0, 1, 2, \ldots, N$ of (1.2) is a linear combination of the basis of $\mathcal{B}$, i.e. there exists a vector $\zeta \in \mathbb{R}^{rq}$ such that

$$x_k = [X_F\ X_\infty] \begin{bmatrix} J_F^k & 0 \\ 0 & J_\infty^{N-k} \end{bmatrix} \zeta \qquad (3.12)$$

for $k = 0, 1, 2, \ldots, N$. Our aim is to determine $\zeta$ in terms of the given initial and final conditions. The initial conditions vector is given by (3.12) for $k = 0, 1, 2, \ldots, q-1$. Thus

$$\hat{x}_I = \operatorname{col}(X_F J_F^i,\ X_\infty J_\infty^{N-i})_{i=0}^{q-1}\, \zeta$$

or equivalently

$$\hat{x}_I = Q \begin{bmatrix} I_n & 0 \\ 0 & J_\infty^{N-q+1} \end{bmatrix} \zeta \qquad (3.13)$$

where the matrix $Q = \operatorname{col}(X_F J_F^{i-1},\ X_\infty J_\infty^{q-i})_{i=1}^{q}$ in the above equation is invertible (see decomposable pairs in [8]). At this point it is useful to partition $\zeta = [\zeta_F^T\ \zeta_\infty^T]^T$, where $\zeta_F$ and $\zeta_\infty$ have appropriate dimensions. From (3.13), using the definition of $M_F$, $M_\infty$ in (3.11), we obtain

$$\zeta_F = M_F \hat{x}_I, \qquad J_\infty^{N-q+1} \zeta_\infty = M_\infty \hat{x}_I \qquad (3.14)$$

Notice that (3.14) determines $\zeta_F$ but not $\zeta_\infty$. Following similar lines for the final conditions vector we obtain

$$\zeta_\infty = M_\infty \hat{x}_F, \qquad J_F^{N-q+1} \zeta_F = M_F \hat{x}_F \qquad (3.15)$$

Similarly, (3.15) determines $\zeta_\infty$ but not $\zeta_F$. Now, using the first equations in (3.14) and (3.15), we obtain

$$\zeta = \begin{bmatrix} \zeta_F \\ \zeta_\infty \end{bmatrix} = \begin{bmatrix} M_F & 0 \\ 0 & M_\infty \end{bmatrix} \begin{bmatrix} \hat{x}_I \\ \hat{x}_F \end{bmatrix}$$

which, in view of (3.12), gives the solution formula (3.9). Combining the second equations in (3.14) and (3.15), we obtain the boundary compatibility condition

$$\begin{bmatrix} J_F^{N-q+1} M_F & -M_F \\ M_\infty & -J_\infty^{N-q+1} M_\infty \end{bmatrix} \begin{bmatrix} \hat{x}_I \\ \hat{x}_F \end{bmatrix} = 0$$

which is obviously identical to (3.10). $\square$

Notice that equation (3.10) plays the role of the boundary mapping equation in [2] and can be considered as a direct generalization of it.
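Once $M_F$ and $M_\infty$ are available, (3.9) is a direct evaluation. A minimal sketch, assuming the boundary vectors already satisfy the compatibility condition (3.10); the name `solve_boundary` is illustrative:

```python
import numpy as np

def solve_boundary(XF, JF, MF, Xoo, Joo, Moo, xI, xF, N):
    """Evaluate the closed formula (3.9),
    x_k = X_F J_F^k M_F xI + X_oo J_oo^{N-k} M_oo xF, for k = 0, ..., N,
    assuming (xI, xF) satisfies the compatibility condition (3.10)."""
    P = np.linalg.matrix_power
    return [XF @ P(JF, k) @ (MF @ xI) + Xoo @ P(Joo, N - k) @ (Moo @ xF)
            for k in range(N + 1)]
```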

Equation (3.10) summarizes the restrictions posed by the system at both end points of the time interval; with an appropriate choice of boundary conditions, (3.9) then uniquely determines all the intermediate values of $x_k$. For simplicity of notation, and following similar lines to [2], we set

$$Z(0, N) = \begin{bmatrix} J_F^{N-q+1} M_F & -M_F \\ M_\infty & -J_\infty^{N-q+1} M_\infty \end{bmatrix} \in \mathbb{R}^{rq \times 2rq}$$

and we prove the following.

Theorem 4. $\operatorname{rank} Z(0, N) = rq$.

Proof. We set

$$\mathcal{N} = \operatorname{col}(X_F J_F^{i-1},\ X_\infty J_\infty^{q-i})_{i=1}^{q}$$

Then from (3.11) we have

$$\begin{bmatrix} M_F \\ M_\infty \end{bmatrix} \mathcal{N} = \begin{bmatrix} I_n & 0 \\ 0 & I_\mu \end{bmatrix} \qquad (3.16)$$

Now post-multiply $Z(0, N)$ by $\operatorname{diag}\{\mathcal{N}, \mathcal{N}\}$, which obviously has full rank, and use (3.16):

$$Z(0, N) \begin{bmatrix} \mathcal{N} & 0 \\ 0 & \mathcal{N} \end{bmatrix} = \begin{bmatrix} J_F^{N-q+1} & 0 & -I_n & 0 \\ 0 & I_\mu & 0 & -J_\infty^{N-q+1} \end{bmatrix}$$

The matrix on the right hand side obviously has full row rank, and hence

$$\operatorname{rank} Z(0, N) = rq \qquad (3.17)$$

$\square$

This result should be compared to Theorem 1 in [2], where it is proved that a boundary mapping matrix of full rank exists if and only if the corresponding descriptor system is solvable and conditionable. In our case the system is obviously solvable, since we have already determined a solution, and conditionable, since we have proved that the boundary conditions satisfying (3.10) characterize the solution uniquely. Solvability and conditionability of (1.2) could also be checked using rank tests as in [1], but in our case this would be trivial due to the regularity of $A(\sigma)$. It is important to notice here that (3.17) implies

$$\dim \ker Z(0, N) = rq = \dim \mathcal{B}$$

which means that the initial and final conditions vectors are chosen from an $rq$-dimensional vector space. Thus $rq$ is the total number of arbitrarily assignable values, distributed between the two end points of the time interval. The connection between $\dim \ker Z(0, N)$ and $\dim \mathcal{B}$ is obvious.
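A sketch assembling $Z(0, N)$ and stating the rank check of Theorem 4; the name `boundary_matrix` is illustrative and numpy is assumed:

```python
import numpy as np

def boundary_matrix(MF, JF, Moo, Joo, N, q):
    """Z(0,N) = [[J_F^{N-q+1} M_F, -M_F], [M_oo, -J_oo^{N-q+1} M_oo]]."""
    P = np.linalg.matrix_power
    top = np.hstack([P(JF, N - q + 1) @ MF, -MF])
    bot = np.hstack([Moo, -P(Joo, N - q + 1) @ Moo])
    return np.vstack([top, bot])

# Theorem 4 predicts full row rank, so dim ker Z(0,N) = rq = dim B:
#   assert np.linalg.matrix_rank(boundary_matrix(MF, JF, Moo, Joo, N, q)) == r * q
```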

4. Example

In order to illustrate the above results we give an example which exhibits only backward behavior. This is done for brevity; it is well known that the finite eigenstructure of $A(\sigma)$ gives rise to forward linearly independent solutions (see for example [8]). Consider the unimodular polynomial matrix

$$A(\sigma) = \begin{bmatrix} 1 & -\sigma^2 \\ 0 & 1 \end{bmatrix}$$

Since $\det A(\sigma) = 1$, there are obviously no finite elementary divisors and thus no finite spectral pairs. Consider also the AR equation $A(\sigma) x_k = 0$ for $k = 0, 1, 2, \ldots, N-q$. According to the notation used earlier we have $q = 2$ and $r = 2$, and thus we expect $\dim \mathcal{B} = rq = 4$. Indeed, consider an infinite spectral pair of $A(\sigma)$:

$$X_\infty = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad J_\infty = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

Then, according to Theorem 2, a basis of $\mathcal{B} = \mathcal{B}_B$ is formed by the columns of the matrix

$$\Psi_\infty(k) = X_\infty J_\infty^{N-k} = \begin{bmatrix} \delta_{N-k} & \delta_{N-k-1} & \delta_{N-k-2} & \delta_{N-k-3} \\ 0 & 0 & \delta_{N-k} & \delta_{N-k-1} \end{bmatrix}$$

where $\delta_0 = 1$ and $\delta_i = 0$ for $i \neq 0$. Since there is no finite spectral pair, the boundary mapping matrix is

$$Z(0, N) = \begin{bmatrix} M_\infty & -J_\infty^{N-q+1} M_\infty \end{bmatrix}$$

Now we can see that

$$M_\infty = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$

and thus for $N > 4$ we have $J_\infty^{N-q+1} = 0$ and

$$Z(0, N) = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

which has full row rank. By (3.10), the final conditions can then be freely assigned, while for $N > 4$ the initial conditions must be zero. This is natural: the infinite spectral pair of $A(\sigma)$ gives rise to time-reversed deadbeat modes, which become zero after four steps of backward propagation of the final conditions. The intermediate solution formula is obtained from (3.9):

$$x_k = X_\infty J_\infty^{N-k} M_\infty \hat{x}_F = \begin{bmatrix} \delta_{N-k-1} & \delta_{N-k-3} & \delta_{N-k} & \delta_{N-k-2} \\ 0 & \delta_{N-k-1} & 0 & \delta_{N-k} \end{bmatrix} \hat{x}_F$$

where no initial conditions are involved, because there is no finite spectral pair of $A(\sigma)$.
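The whole example can be reproduced numerically in a few lines; a sketch assuming numpy (variable names are illustrative):

```python
import numpy as np

# Infinite spectral pair of A(sigma) = [[1, -sigma^2], [0, 1]]  (r = q = 2):
Xoo = np.array([[1., 0., 0., 0.],
                [0., 0., 1., 0.]])
Joo = np.diag(np.ones(3), k=1)         # one nilpotent 4 x 4 Jordan block
q, N = 2, 6                            # any N > 4 gives Joo^(N-q+1) = 0

# (3.11) with no finite part reduces to M_oo = (col(Xoo Joo^(q-i))_{i=1}^q)^(-1):
Moo = np.linalg.inv(np.vstack([Xoo @ Joo, Xoo]))   # the permutation matrix above

# Boundary mapping matrix Z(0,N) = [M_oo, -Joo^(N-q+1) M_oo] = [M_oo, 0]:
Z = np.hstack([Moo, -np.linalg.matrix_power(Joo, N - q + 1) @ Moo])
assert np.linalg.matrix_rank(Z) == 4   # full row rank rq = 4

# Since M_oo is invertible, Z [xI; xF] = 0 forces xI = 0 while xF is free,
# in agreement with the discussion above.
```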

5. Conclusions

In this note we have determined the solution space, or behavior, $\mathcal{B}$ of discrete time autoregressive representations of the form $A(\sigma) x_k = 0$, where $A(\sigma)$ is a square regular polynomial matrix and $x_k$ is a vector sequence over a finite time interval $k = 0, 1, 2, \ldots, N$. The solution space $\mathcal{B}$ is proved to be a linear vector space of dimension equal to the product of the dimension $r$ of the matrix $A(\sigma)$ and the highest degree $q$ of $\sigma$ occurring in the polynomial matrix. It is also shown that the behavior can be decomposed into a direct sum of forward and backward subspaces, which corresponds to a maximal F/B decomposition of the descriptor space in [3]. We have also determined a basis for the solution space, using a construction based on both the finite and infinite spectral structure of $A(\sigma)$, and introduced the notion of the dual AR-representation, which is simply the same system with the time direction reversed. Finally, a generalization of the boundary mapping defined for first order systems in [2] to the higher order case is given, and it is shown that such a boundary mapping can be obtained in terms of the spectral pairs of $A(\sigma)$.

References

[1] Luenberger D.G., "Dynamic Equations in Descriptor Form", IEEE Trans. Autom. Contr., Vol. 22, No. 3, (1977), pp. 312-321.

[2] Luenberger D.G., "Boundary Recursion for Descriptor Variable Systems", IEEE Trans. Autom. Contr., Vol. 34, No. 3, (1989), pp. 287-292.

[3] Lewis F.L., "Descriptor Systems: Decomposition into Forward and Backward Subsystems", IEEE Trans. Autom. Contr., Vol. 29, No. 2, (1984), pp. 167-170.

[4] Lewis F.L., Mertzios B.G., "On the Analysis of Discrete Linear Time-Invariant Singular Systems", IEEE Trans. Autom. Contr., Vol. 35, No. 4, (1990), pp. 506-511.

[5] Lewis F.L., "A Survey of Linear Singular Systems", Circuits, Systems and Signal Processing, Vol. 5, No. 1, (1986), pp. 3-36.

[6] Nikoukhah R., Willsky A.S., Levy B.C., "Boundary-value descriptor systems: well-posedness, reachability and observability", Int. J. Control, Vol. 46, (1987), pp. 1715-1737.

[7] Gantmacher F.R., Matrix Theory, Chelsea Publishing Company, New York, (1971).

[8] Gohberg I., Lancaster P., Rodman L., Matrix Polynomials, Academic Press, New York, (1982).

[9] Gohberg I., Kaashoek M.A., Lerer L., Rodman L., "Common Multiples and Common Divisors of Matrix Polynomials, I. Spectral Method", Indiana Univ. Math. J., Vol. 30, (1981), pp. 321-356.

[10] Gohberg I., Kaashoek M.A., Lerer L., Rodman L., "Common Multiples and Common Divisors of Matrix Polynomials, II. Vandermonde and Resultant Matrices", Linear and Multilinear Algebra, Vol. 12, (1982), pp. 159-203.

[11] Vardulakis A.I.G., Linear Multivariable Control - Algebraic Analysis and Synthesis Methods, Wiley, New York, (1991).

[12] Willems J.C., "Paradigms and Puzzles in the Theory of Dynamical Systems", IEEE Trans. Autom. Contr., Vol. 36, No. 3, (1991), pp. 259-294.