Numerical Computation of Minimal Polynomial Bases: A Generalized Resultant Approach


E.N. Antoniou*, A.I.G. Vardulakis and S. Vologiannidis
Aristotle University of Thessaloniki, Department of Mathematics, Faculty of Sciences, 54006 Thessaloniki, Greece
E-mail: (antoniou, avardula, svol)@math.auth.gr

March, 2005

* Corresponding author. Supported by the Greek State Scholarships Foundation (IKY), postdoctoral research grant, Contract Number 411/00-04.

Abstract. We propose a new algorithm for the computation of a minimal polynomial basis of the left kernel of a given polynomial matrix $F(s)$. The proposed method exploits the structure of the left null space of generalized Wolovich or Sylvester resultants to compute row polynomial vectors that form a minimal polynomial basis of the left kernel of the given polynomial matrix. The entire procedure can be implemented using only orthogonal transformations of constant matrices and results in a minimal basis with orthonormal coefficients.

Keywords: polynomial matrices; minimal polynomial basis; matrix fraction description.

AMS classification: 15A22, 15A23, 65F30.

1. Introduction

The problem of the determination of a minimal polynomial basis of a rational vector space (see [8]) is the starting point of many control analysis, synthesis and design techniques based on the polynomial matrix approach [22], [6], [15], [21]. Given a rational transfer function matrix $P(s)$ it is usually required to determine left or right coprime polynomial matrix fractional representations (factorizations) of $P(s)$ of the form $P(s) = D_L^{-1}(s)N_L(s) = N_R(s)D_R^{-1}(s)$. Moreover, in many applications, apart from the coprimeness requirement of the above factorizations, it is often desirable to have factorizations where either the denominator matrices $D_L(s)$, $D_R(s)$ or the compound matrices $E(s) := [D_L(s), N_L(s)]$, $F(s) := [N_R^\top(s), D_R^\top(s)]^\top$ have respectively

minimal row or column degrees, i.e. they are row or column proper (reduced). Classical examples of such applications are the denominator assignment problem (see [23], [6], [7], [15], [16], [21]) and the determination of a minimal realization (see [22], [21], [20]) of a MIMO rational transfer function, where a minimal in the above sense and coprime factorization of the plant is required. Furthermore, even the problem of row or column reduction of a polynomial matrix itself can be solved using minimal polynomial basis computation techniques, as described in [4] and [18].

The classical approach (see [22], [11]) to the problem of finding a minimal polynomial basis of a rational vector space starts from a given, possibly non-minimal, polynomial basis; applying polynomial matrix techniques (extraction of greatest common divisors and unimodular transformations for row/column reduction), one can then obtain the desired minimal basis. However, such implementations are known to suffer from serious numerical problems and thus are not recommended for real life applications. A numerically reliable alternative to the classical approach has been presented in [3]. The method of [3] follows the pencil approach, applying the generalized Schur decomposition to the block companion form of the polynomial matrix, which in turn allows the computation of a minimal polynomial basis of the original matrix. A second alternative appears in [19], where the computation of the minimal basis is accomplished via Padé approximants of the polynomial matrices involved. Our approach to the problem is comparable to the techniques presented in [12], [13], [14] and [17], where the computation of minimal polynomial bases of matrix pencils is considered, and to the one in [2], where the structure of Sylvester resultant matrices is utilized.

The problem of computation of a minimal basis can be stated as follows. Given a full column rank (over $\mathbb{R}(s)$) polynomial matrix $F(s) \in \mathbb{R}^{(p+m)\times m}[s]$, determine a left unimodular, row proper matrix $E(s) \in \mathbb{R}^{p\times(p+m)}[s]$ such that $E(s)F(s) = 0$. Then $E(s)$ is a minimal polynomial basis of the left kernel of $F(s)$. Our approach exploits the structure of the generalized Sylvester or Wolovich resultant (see [5], [1]) of the polynomial matrix $F(s)$. Notice that the methods presented in [3], [19], [12], [13] and [2] deal with the dual problem, i.e. the determination of a minimal basis of the right kernel of $E(s)$.

The outline of the paper is as follows. In section 2 we present the necessary mathematical background and notation, as well as some known results regarding the structure of generalized resultants. Section 3 presents the main results of the paper along with the proposed algorithm for the computation of minimal polynomial bases. In section 4 we discuss the numerical properties of the proposed algorithm, while in section 5 we provide illustrative examples for the method. Finally, in section 6 we summarize and draw our conclusions.

2. Mathematical background

In the following, $\mathbb{R}$, $\mathbb{C}$, $\mathbb{R}(s)$, $\mathbb{R}[s]$, $\mathbb{R}_{pr}(s)$, $\mathbb{R}_{po}(s)$ are respectively the fields of real numbers, complex numbers, real rational functions, the ring of polynomials, and the rings of proper rational and strictly proper rational functions, all with coefficients in $\mathbb{R}$ and indeterminate $s$. For a set $\mathbb{F}$, $\mathbb{F}^{p\times m}$ denotes the set of $p\times m$ matrices with entries in $\mathbb{F}$. $\mathbb{N}^+$ is the set of positive integers. The symbols $\mathrm{rank}_{\mathbb{F}}(\cdot)$, $\ker_{\mathbb{F}}(\cdot)$ and $\mathrm{Im}_{\mathbb{F}}(\cdot)$ denote respectively the rank, right kernel (null space) and image (column span) of the matrix in brackets over the field $\mathbb{F}$. Furthermore, in certain cases we use the symbols $\ker_{L,\mathbb{F}}(\cdot)$ and $\mathrm{Im}_{L,\mathbb{F}}(\cdot)$ to denote the left kernel and row span of the corresponding matrix over $\mathbb{F}$. When $\mathbb{F}$ is omitted in one of these symbols, $\mathbb{R}$ is implied. If $m \in \mathbb{N}^+$ then $\bar m$ denotes the set $\{1, 2, \ldots, m\}$.

A polynomial matrix $T(s) \in \mathbb{R}^{p\times m}[s]$ is called left (resp. right) unimodular iff $\mathrm{rank}\,T(s_0) = p$ (resp. $\mathrm{rank}\,T(s_0) = m$) for every $s_0 \in \mathbb{C}$, or equivalently iff $T(s)$ has no zeros in $\mathbb{C}$. When $T(s)$ is square it is called unimodular iff $\mathrm{rank}\,T(s_0) = p = m$ for every $s_0 \in \mathbb{C}$. A polynomial matrix $X(s) \in \mathbb{R}^{p\times m}[s]$ ($p \ge m$) is called column proper or column reduced iff its highest column degree coefficient matrix, denoted by $X_{hc}$ and formed by the coefficients of the highest powers of $s$ in each column of $X(s)$, has full column rank. The column degrees of $X(s)$ are denoted by $\deg_{ci}X(s)$, $i \in \bar m$. Respectively, $Y(s) \in \mathbb{R}^{p\times m}[s]$ ($p \le m$) is called row proper or row reduced iff $Y^\top(s)$ is column proper, and the row degrees of $Y(s)$ are denoted by $\deg_{ri}Y(s)$, $i \in \bar p$.

Let $F(s) \in \mathbb{R}^{(p+m)\times m}[s]$ with $\mathrm{rank}_{\mathbb{R}(s)}F(s) = m$ and $E(s) \in \mathbb{R}^{p\times(p+m)}[s]$ with $\mathrm{rank}_{\mathbb{R}(s)}E(s) = p$ be polynomial matrices such that

$$E(s)F(s) = 0. \qquad (2.1)$$

When (2.1) is satisfied and $E(s)$ is row proper and left unimodular, $E(s)$ is a minimal polynomial basis [8] of (the rational vector space spanning) the left kernel of $F(s)$, and the row degrees $\deg_{ri}E(s) =: \mu_i$, $i \in \bar p$, of $E(s)$ are the invariant minimal row indices of the left kernel of $F(s)$, or simply the left minimal indices of $F(s)$. Similarly, when (2.1) is satisfied with $F(s)$ column proper and right unimodular, $F(s)$ is a minimal polynomial basis of (the rational vector space spanning) the right kernel of $E(s)$, and the column degrees $\deg_{ci}F(s) =: \kappa_i$, $i \in \bar m$, of $F(s)$ are the invariant minimal column indices of the right kernel of $E(s)$, or simply the right minimal indices of $E(s)$.

Given a polynomial matrix $F(s) \in \mathbb{R}^{(p+m)\times m}[s]$ and $k, a, b \in \mathbb{N}^+$, we introduce the matrices

$$S_{a,b}(s) := [I_a, sI_a, \ldots, s^{b-1}I_a]^\top \in \mathbb{R}^{ba\times a}[s], \qquad (2.2)$$

$$X_k(s) := S_{p+m,k}(s)F(s) \in \mathbb{R}^{(p+m)k\times m}[s]. \qquad (2.3)$$

Formula (2.3) is essentially the basis for the construction of generalized resultants. Let $F(s) = F_0 + sF_1 + \cdots + s^qF_q$, $F_i \in \mathbb{R}^{(p+m)\times m}$, and write

$$X_k(s) = R_k S_{m,q+k}(s), \qquad (2.4)$$

where

$$R_k := \begin{bmatrix} F_0 & F_1 & \cdots & F_q & 0 & \cdots & 0 \\ 0 & F_0 & F_1 & \cdots & F_q & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0 \\ 0 & \cdots & 0 & F_0 & F_1 & \cdots & F_q \end{bmatrix} \in \mathbb{R}^{(p+m)k\times m(q+k)}. \qquad (2.5)$$

The matrix $R_k$ is known [5] as the generalized Sylvester resultant of $F(s)$.
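The block Toeplitz structure (2.5) is straightforward to assemble numerically. The sketch below (Python with numpy; the function name is ours, not from the paper) builds $R_k$ from the coefficient matrices $F_0, \ldots, F_q$:

```python
import numpy as np

def sylvester_resultant(F_coeffs, k):
    """Generalized Sylvester resultant R_k of F(s) = F_0 + s F_1 + ... + s^q F_q.

    F_coeffs : list of q+1 arrays, each of shape (p+m, m).
    Returns R_k of shape ((p+m)*k, m*(q+k)), as in eq. (2.5).
    """
    q = len(F_coeffs) - 1
    n, m = F_coeffs[0].shape                     # n = p + m
    R = np.zeros((n * k, m * (q + k)))
    for i in range(k):                           # block row i is shifted by i block columns
        for j, Fj in enumerate(F_coeffs):
            R[i*n:(i+1)*n, (i+j)*m:(i+j+1)*m] = Fj
    return R
```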

Let $\kappa_i = \deg_{ci}F(s)$, $i \in \bar m$, be the column degrees of $F(s)$. Similarly to [2], $X_k(s)$ can also be written as

$$X_k(s) = M_{ek}\,\mathrm{blockdiag}\{S_{1,\kappa_i+k}(s)\}_{i\in\bar m}, \qquad (2.6)$$

where $M_{ek} \in \mathbb{R}^{(m+p)k\times(mk+\sum_{i=1}^{m}\kappa_i)}$. The matrix $M_{ek}$ is defined in [1] as the generalized Wolovich resultant of $F(s)$. Write $F(s) = [f_1(s), f_2(s), \ldots, f_m(s)]$, where $f_i(s) = f_{i0} + sf_{i1} + \cdots + s^{\kappa_i}f_{i\kappa_i} \in \mathbb{R}^{(m+p)\times 1}[s]$, $i \in \bar m$, are the columns of $F(s)$. Then it is easy to see that

$$M_{ek} = [R_k^1, R_k^2, \ldots, R_k^m], \qquad (2.7)$$

where

$$R_k^i := \begin{bmatrix} f_{i0} & f_{i1} & \cdots & f_{i\kappa_i} & 0 & \cdots & 0 \\ 0 & f_{i0} & f_{i1} & \cdots & f_{i\kappa_i} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0 \\ 0 & \cdots & 0 & f_{i0} & f_{i1} & \cdots & f_{i\kappa_i} \end{bmatrix} \in \mathbb{R}^{(m+p)k\times(\kappa_i+k)}, \quad i \in \bar m,$$

is the generalized Sylvester resultant of the column $f_i(s)$ of $F(s)$. It is easy to see that the two types of generalized resultants are related through

$$R_k = [M_{ek}, 0_{(m+p)k,\,b}]\,P_k, \qquad (2.8)$$

where $P_k \in \mathbb{R}^{m(q+k)\times m(q+k)}$ is a column permutation matrix. The fact that $R_k$ contains at least $b = mq - \sum_{i=1}^{m}\kappa_i$ zero columns, where $q = \max_{i\in\bar m}\{\kappa_i\}$, has been observed in [2]. The following result will be very useful in the sequel.

Theorem 2.1. Let $E(s) \in \mathbb{R}^{p\times(p+m)}[s]$ be a minimal polynomial basis for the left kernel of $F(s)$ as in (2.1), and let $\mu_i = \deg_{ri}E(s)$, $i \in \bar p$, be the invariant minimal row indices of the left kernel of $F(s)$. Then

$$\mathrm{rank}\,R_k = \mathrm{rank}\,M_{ek} = (p+m)k - v_k, \qquad (2.9)$$

where $v_k = \sum_{i:\,\mu_i<k}(k-\mu_i) = \dim\ker_L M_{ek} = \dim\ker_L R_k$.

Proof. The rank formula (2.9) for the generalized Sylvester resultant first appeared in [5], while the corresponding result for the generalized Wolovich resultant has been established in [1]. Furthermore, the fact that $\mathrm{rank}\,R_k = \mathrm{rank}\,M_{ek}$ becomes obvious in view of equation (2.8).
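Theorem 2.1 gives a cheap consistency check for any implementation: the numerical rank of $R_k$ must match $(p+m)k - v_k$. A minimal sketch, assuming the left minimal indices $\mu_i$ of a test matrix are known and reusing `sylvester_resultant` from above:

```python
import numpy as np

def predicted_rank(k, p, m, mu):
    """(p+m)k - v_k from eq. (2.9), with v_k = sum over mu_i < k of (k - mu_i)."""
    v_k = sum(k - mu_i for mu_i in mu if mu_i < k)
    return (p + m) * k - v_k

def numerical_rank(A, rtol=1e-10):
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > rtol * s[0])) if s.size else 0

# usage: numerical_rank(sylvester_resultant(F_coeffs, k)) should equal
# predicted_rank(k, p, m, mu) for every k >= 1
```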

3. Computation of Minimal Polynomial Bases

It is evident from the last result of the above section that the orders of the left minimal indices of a polynomial matrix $F(s)$ are closely related to the structure of the generalized Sylvester or Wolovich resultants. Furthermore, the following result shows the connection between the coefficients of a minimal polynomial basis of the left kernel of $F(s)$ and a basis of the left kernel of either $R_k$ or $M_{ek}$.

Theorem 3.1. Let $E(s)$ be a minimal polynomial basis for the left kernel of $F(s)$ as in (2.1). Let $\mu_i = \deg_{ri}E(s)$, $i \in \bar p$, be the invariant minimal row indices of the left kernel of $F(s)$, and denote by $a_k$ the number of rows of $E(s)$ with $\mu_i = k$. Then

$$\ker_L R_k = \ker_L M_{ek} = \mathrm{Im}_L L_k, \qquad (3.1)$$

where $L_k \in \mathbb{R}^{v_k\times k(p+m)}$ is defined by

$$\mathrm{blockdiag}\{S_{1,k-\mu_i}(s)\}_{i:\,\mu_i<k}\;E_k(s) = L_k S_{p+m,k}(s), \qquad (3.2)$$

and $E_k(s) \in \mathbb{R}^{\rho_k\times(p+m)}[s]$ is the polynomial matrix that consists of all $\rho_k = \sum_{i=0}^{k-1}a_i$ rows of $E(s)$ with row degrees satisfying $\mu_i < k$.

Proof. Since $E_k(s)$ consists of rows of $E(s)$ satisfying $\mu_i = \deg_{ri}E(s) < k$, in view of (2.1) it is easy to see that

$$E_k(s)F(s) = 0 \qquad (3.3)$$

for every $s \in \mathbb{C}$. Postmultiplying (3.2) by $F(s)$ and using (3.3) gives $L_kX_k(s) = 0$, with $X_k(s)$ defined in (2.3). Now, using respectively (2.4) and (2.6), we get $L_kR_kS_{m,q+k}(s) = 0$ and $L_kM_{ek}\,\mathrm{blockdiag}\{S_{1,\kappa_i+k}(s)\}_{i\in\bar m} = 0$ for every $s \in \mathbb{C}$. Thus $L_kR_k = 0$ and $L_kM_{ek} = 0$, which proves that $\mathrm{Im}_LL_k \subseteq \ker_LR_k$ and $\mathrm{Im}_LL_k \subseteq \ker_LM_{ek}$. Furthermore, $L_k$ has full row rank, since the existence of a (constant) row vector $w^\top \in \mathbb{R}^{1\times v_k}$ such that $w^\top L_k = 0$ would imply (via (3.2)) the existence of a polynomial row vector $w^\top(s) \in \mathbb{R}^{1\times\rho_k}[s]$ satisfying $w^\top(s)E_k(s) = 0$, which contradicts the fact that $E(s)$ consists of linearly independent polynomial row vectors. Thus

$$\dim\mathrm{Im}_LL_k = \sum_{i:\,\mu_i<k}(k-\mu_i) = \dim\ker_LM_{ek} = \dim\ker_LR_k,$$

which completes the proof.

Our aim is to propose a method for the determination of a minimal polynomial basis for the left kernel of $F(s)$. As will be shown in the sequel, this can be done via numerical computations on successive generalized Sylvester or Wolovich resultants of the polynomial matrix $F(s)$. The key idea is that if we already know the part of the minimal polynomial basis $E(s)$ of the left kernel of $F(s)$ corresponding to rows with row degrees less than $k$, then we can easily determine linearly independent polynomial row vectors of degree exactly $k$ that belong to the left kernel of $F(s)$. Recall that $E_k(s) \in \mathbb{R}^{\rho_k\times(p+m)}[s]$ is the matrix defined in Theorem 3.1, i.e. the part of the minimal polynomial basis $E(s)$ of the left kernel of $F(s)$ that contains only those rows of $E(s)$ with $\mu_i = \deg_{ri}E(s) < k$. For $k = 1, 2, 3, \ldots$ we define the sequence of rational vector spaces

$$\mathcal{F}_k = \mathrm{Im}_{L,\mathbb{R}(s)}E_k(s). \qquad (3.4)$$

It is easy to see that

$$\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots \subseteq \mathcal{F}_{\bar\mu+1} = \ker_{L,\mathbb{R}(s)}F(s), \qquad (3.5)$$

where $\bar\mu = \max_{i\in\bar p}\{\mu_i\}$, while obviously

$$\dim_{\mathbb{R}(s)}\mathcal{F}_k = \rho_k. \qquad (3.6)$$

Theorem.. Let E k (s) be a minimal polynomial basis of F k. De ne the (v k + k ) (p + m)(k + 1) matrix L k+1 from the relation block diagfs 1;k i +1(s)gE k (s) = L k+1 S p+m;k+1 (s); (.) i k and let N k+1 R a k(p+m)(k+1) be such that the (a k + v k + k ) (p + m)(k + 1) compound matrix ~ L k+1 := [ L > k+1 ; N > k+1 ]> satis es rank ~ L k+1 = v k+1 and ~ L k+1 M e(k+1) = 0; (.8) i.e. such that ~ L k+1 is a basis of ker L M e(k+1) : Then the rows of the polynomial matrix 1 ~E k+1 (s) := Ek (s) N k+1 (s) form a minimal polynomial basis of F k+1 where N k+1 (s) := N k+1 S p+m;k+1 (s) R a k(p+m) [s] : Proof. Postmultiplying (.) by F (s) and taking into account (.6) and the fact that E k (s)f (s) = 0 for every s C, it is easily seen that ; L k+1 M e(k+1) = 0; (.9) while the rows of L k+1 are linearly independent. We seek to nd linearly independent row vectors that together with the rows of L k+1 form a basis of ker L M e(k+1) : According to theorem.1 dim ker L M e(k+1) = v k+1 which compared to the number of rows of L k+1 shows that we need another a k linearly independent vectors to form a complete basis of ker L M e(k+1) : Assume we determine a a k (p + m)(k + 1) full row rank matrix N k+1 such that with rows linearly independent to those of L k+1 ; i.e. such that N k+1 M e(k+1) = 0; (.10) rank ~ L k+1 = v k+1 : (.11) Obviously the rows of the compound matrix in the above equation form a basis for the left kernel of M e(k+1) : It is easy to verify that the rows of the polynomial matrix N k+1 (s) will satisfy N k+1 (s)f (s) = 0: Furthermore the polynomial rows of N k+1 (s) will have degrees exactly k; since if there exists a row of N k+1 (s) with deg ri N k+1 (s) < k, the corresponding row of Nk+1 would be a linear combination of the rows of L k+1 ; which contradicts (.11). It is easy to see that there exists a row permutation matrix P such that 1 If a k = 0 then obviously ~ E k+1 (s) = E k (s): P L Lk 0 k+1 = X k E hr k ; (.1) 6

where $L_k \in \mathbb{R}^{v_k\times k(p+m)}$ is a basis of $\ker_LM_{ek}$ as defined in (3.2), $X_k$ is a constant matrix, and $E_k^{hr}$ is the highest row degree coefficient matrix of $E_k(s)$. Accordingly, partition $N_{k+1}$ as

$$N_{k+1} = [Y_k, N_{k+1}^{hr}],$$

where $Y_k \in \mathbb{R}^{a_k\times(p+m)k}$ and $N_{k+1}^{hr} \in \mathbb{R}^{a_k\times(p+m)}$ is the highest row degree coefficient matrix of $N_{k+1}(s)$. We shall prove that $\tilde E_{k+1}(s)$ is row proper, or equivalently that its highest row degree coefficient matrix

$$\tilde E_{k+1}^{hr} = \begin{bmatrix} E_k^{hr} \\ N_{k+1}^{hr} \end{bmatrix}$$

has full row rank. Assume that $\tilde E_{k+1}(s)$ is not row proper. Then there exists a row vector $[a^\top, b^\top]$ such that

$$[a^\top, b^\top]\begin{bmatrix} E_k^{hr} \\ N_{k+1}^{hr} \end{bmatrix} = 0. \qquad (3.13)$$

Combining equations (3.9), (3.10) and (3.12) we obtain

$$\begin{bmatrix} L_k & 0 \\ X_k & E_k^{hr} \\ Y_k & N_{k+1}^{hr} \end{bmatrix} M_{e(k+1)} = 0,$$

while premultiplying the above equation by $[0, a^\top, b^\top]$, with $a^\top, b^\top$ chosen as in (3.13), we get (using the fact that the leading $(p+m)k$ rows of $M_{e(k+1)}$ contain $M_{ek}$ followed by zero columns)

$$[a^\top, b^\top]\begin{bmatrix} X_k \\ Y_k \end{bmatrix} M_{ek} = 0.$$

Now, since $L_k$ is a basis of the left kernel of $M_{ek}$, there exists a row vector $c^\top$ such that $[a^\top, b^\top][X_k^\top, Y_k^\top]^\top = c^\top L_k$. It is easy to verify that

$$[-c^\top, a^\top, b^\top]\begin{bmatrix} P & 0 \\ 0 & I_{a_k} \end{bmatrix}\begin{bmatrix} \bar L_{k+1} \\ N_{k+1} \end{bmatrix} = 0,$$

which contradicts (3.11). Thus $\tilde E_{k+1}(s)$ is row proper and hence has full row rank over $\mathbb{R}(s)$, so the rows of $\tilde E_{k+1}(s)$ form a basis of the rational vector space $\mathcal{F}_{k+1}$. Furthermore $\tilde E_{k+1}(s)$, being row proper, has no zeros at $s = \infty$ ([21], Corollary 3.100, page 144). It remains to show that $\tilde E_{k+1}(s)$ has no finite zeros. Consider the rational vector space $\mathcal{F}_{k+1}$ and its minimal polynomial basis formed by the rows of $E_{k+1}(s)$. The row degrees $\mu_i = \deg_{ri}E_{k+1}(s)$ are the invariant minimal indices of $\mathcal{F}_{k+1}$; denote by $\mathrm{ord}\,\mathcal{F}_{k+1}$ the (Forney invariant) minimal order of $\mathcal{F}_{k+1}$, which in our case is

$$\mathrm{ord}\,\mathcal{F}_{k+1} = \sum_{i:\,\mu_i<k+1}\mu_i.$$

The rows of $\tilde E_{k+1}(s)$ also span the rational vector space $\mathcal{F}_{k+1}$. It is known ([21]) that if $Z\{\tilde E_{k+1}(s)\}$ is the total number of (finite and infinite) zeros of $\tilde E_{k+1}(s)$, $\delta_M\{\tilde E_{k+1}(s)\}$ is its McMillan degree, and $\mathrm{ord}\,\mathcal{F}_{k+1}$ is the (Forney invariant [8]) minimal order of the rational vector space spanned by the rows of $\tilde E_{k+1}(s)$, then

$$\delta_M\{\tilde E_{k+1}(s)\} = Z\{\tilde E_{k+1}(s)\} + \mathrm{ord}\,\mathcal{F}_{k+1},$$

but $\tilde E_{k+1}(s)$ is row proper and thus its McMillan degree equals the sum of its row degrees, which by construction coincides with $\sum_{i:\,\mu_i<k+1}\mu_i = \mathrm{ord}\,\mathcal{F}_{k+1}$. Thus $Z\{\tilde E_{k+1}(s)\} = 0$, which establishes the fact that $\tilde E_{k+1}(s)$ has no finite zeros. Hence the polynomial matrix $\tilde E_{k+1}(s)$ is row proper and left unimodular, i.e. a minimal polynomial basis of $\mathcal{F}_{k+1}$.

The above theorem essentially allows us to determine successively a minimal polynomial basis of $\ker_{L,\mathbb{R}(s)}F(s)$. Starting with $k = 0$ one can determine a minimal polynomial basis of $\mathcal{F}_1$, i.e. the part of the minimal polynomial basis of $\ker_{L,\mathbb{R}(s)}F(s)$ with row indices $\mu_i = 0$. Using this part of the polynomial basis and applying the procedure of Theorem 3.2 again for $k = 1$, we determine a minimal polynomial basis of $\mathcal{F}_2$. The entire procedure can be repeated until we have a minimal polynomial basis consisting of $p$ row vectors. In order to obtain numerically stable results one can use the singular value decomposition to obtain orthonormal bases of the kernels of the constant matrices involved. Furthermore, the rows of $N_{k+1}$ can be chosen not only to be linearly independent of those of $\bar L_{k+1}$, but orthogonal to each one of them. This can be done by computing an orthonormal basis of the left kernel of $[M_{e(k+1)}, \bar L_{k+1}^\top]$. The coefficients of a minimal polynomial basis computed this way form a set of orthonormal vectors, i.e. $E_{k-1}E_{k-1}^\top = I_p$. The entire procedure can be summarized in the following algorithm:

Step 1. Compute an orthonormal basis $N_1$ of $\ker_LM_{e1}$ and set $E_1 = N_1$.
Step 2. Set $k = 2$.
Step 3. Using (3.7), compute $\bar L_k$ for $E_{k-1}(s) = E_{k-1}S_{p+m,k-1}(s)$.
Step 4. Determine an orthonormal basis $N_k$ of $\ker_L[M_{ek}, \bar L_k^\top]$ and set $E_k$ by stacking $[E_{k-1}, 0]$ on top of $N_k$.
Step 5. Set $k = k+1$.
Step 6. If {# of rows of $E_{k-1}$} $< p$, go to Step 3.
Step 7. The minimal polynomial basis is given by $E_{k-1}S_{p+m,k-1}(s)$.

Notice that the above procedure can be applied even if the matrix $F(s)$ does not have full column rank over $\mathbb{R}(s)$. Assuming that $\mathrm{rank}_{\mathbb{R}(s)}F(s) = r < m$, we can modify Step 6 so that the loop stops when {# of rows of $E_{k-1}$} $= p+m-r$, since obviously $p+m-r$ is the dimension of the left kernel of $F(s)$. In case $r$ is unknown, we can still use the proposed algorithm by leaving the loop running until $k$ reaches $mq$, since $mq$ is known to be an upper bound for the maximal left minimal index $\bar\mu$, but with a significant overhead in computational cost (see the next section for more details).
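Steps 1-7 translate almost line by line into code. The following is a minimal sketch (our names, not the paper's implementation), using the generalized Sylvester resultant of the earlier sketch in place of $M_{ek}$, which is legitimate since both resultants have the same left null space:

```python
import numpy as np

def left_null_basis(A, rtol=1e-12):
    """Orthonormal rows N with N @ A ~ 0, computed via the SVD of A."""
    U, s, _ = np.linalg.svd(A)
    tol = rtol * s[0] if s.size else 0.0
    r = int(np.sum(s > tol))
    return U[:, r:].T

def minimal_left_basis(F_coeffs, p, max_k=50):
    """Coefficient matrix E (and final k) with E(s) = E @ S_{p+m,k}(s)."""
    n = F_coeffs[0].shape[0]            # n = p + m
    E = np.zeros((0, n))                # coefficient rows of the basis found so far
    L = np.zeros((0, n))                # current basis of ker_L R_k
    for k in range(1, max_k + 1):
        Rk = sylvester_resultant(F_coeffs, k)
        # Step 4: rows annihilating R_k and orthogonal to the rows of Lbar_k
        N = left_null_basis(np.hstack([Rk, L.T]))
        E, L = np.vstack([E, N]), np.vstack([L, N])
        if E.shape[0] == p:             # Step 6: basis is complete
            return E, k
        # Step 3 for k+1: pad E, then Lbar = (rows shifted by s) + (rows unshifted)
        E = np.hstack([E, np.zeros((E.shape[0], n))])
        L = np.vstack([np.hstack([np.zeros((L.shape[0], n)), L]), E])
    raise RuntimeError("no minimal basis with p rows found up to max_k")
```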

Obviously, the proposed algorithm can easily be modified to compute right minimal polynomial bases, by simply transposing the polynomial matrix whose right null space is to be determined. Finally, notice that throughout the above analysis we have used the generalized Wolovich resultant because in general it has fewer columns than the corresponding generalized Sylvester resultant (see (2.8)). However, the left null space structure of both resultants is identical, and the proposed algorithm can be implemented using either.

4. Numerical Considerations

The proposed algorithm requires the successive determination of orthonormal bases of the left kernels of the matrices $[M_{ek}, \bar L_k^\top]$ for each $k = 1, 2, 3, \ldots$. The most reliable method to obtain orthonormal bases of null spaces is undoubtedly the singular value decomposition (SVD) (see for instance [9]). Thus the computational complexity at each step of the algorithm is about $O(n^3k^3)$ (where for ease of notation we use $n := p+m$ for $F(s) \in \mathbb{R}^{(p+m)\times m}[s]$). Applying standard SVD implementations (such as Golub-Reinsch SVD or R-SVD) at each step would result in a relatively high computational cost, since the SVD computed in step $k$ cannot be reused for the next iteration. However, a fast and backward stable algorithm for updating the SVD when rows or columns are appended to a matrix has recently appeared in [10]; the cost of each update is quadratic in the matrix dimensions. Applying this technique at each step effectively reduces the total cost of our algorithm up to the $k$th step to $O(n^3k^3)$. Taking into account that the iterations continue until $k = \bar\mu+1$, where $\bar\mu$ is the maximal left minimal index of $F(s)$, we conclude that the cost of the proposed algorithm is about $O(\bar\mu^3n^3)$. Comparing this to the complexity of the algorithm in [3], which is about $O(q^3n^3)$, where $q$ is the maximum degree of $s$ in $F(s)$, we see that our implementation can be more efficient if $\bar\mu < q$. On the other hand, the upper bound for $\bar\mu$ is $mq$, so the complexity of our algorithm can get as high as $O(m^3q^3n^3)$. However, this upper bound is reached only in extreme cases where $F(s)$ has a single left minimal index of order greater than zero and no finite or infinite elementary divisors (see Example 5.3). In general, when $F(s)$ has finite zeros (elementary divisors) and/or is not column reduced, or better still if $p \gg m$, it is expected that $\bar\mu \le q$. Still, in bad cases where the generalized resultants tend to become very large, one may employ sparse or structured matrix techniques to improve the efficiency of the algorithm.

From a numerical stability point of view, each step of the algorithm is stable since it depends on SVD computations. A complete stability analysis of the entire algorithm is hard to accomplish, since there is no standard way to define small perturbations of polynomial matrices. However, some points in the procedure are worth mentioning. The actual procedure of computing an orthonormal basis of the left kernel of $Q_k := [M_{ek}, \bar L_k^\top]$ involves the computation of the singular value decomposition $Q_k = U_k^\top\Sigma_kV_k$, where $U_k, V_k$ are orthogonal and $\Sigma_k = \mathrm{diag}\{\sigma_i(Q_k), 0\}$, with $\sigma_i(Q_k)$ the singular values of $Q_k$. In the sequel, the rank of $Q_k$ is determined by choosing $r_k$ such that $\sigma_{r_k}(Q_k) > \epsilon_k \ge \sigma_{r_k+1}(Q_k)$, where $\epsilon_k = \|Q_k\|_\infty u$ and $u > 0$ is a (small) number such that $\epsilon_k$ is consistent with the machine precision ([9]).
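In code, the rank decision just described amounts to a few lines; the value of `u` below is an arbitrary illustration, not a recommendation from the paper:

```python
import numpy as np

u = 100 * np.finfo(float).eps        # assumed small multiple of the machine precision

def rank_decision(Q):
    """r_k such that sigma_{r_k} > eps_k >= sigma_{r_k+1}, eps_k = ||Q_k||_inf * u."""
    sigma = np.linalg.svd(Q, compute_uv=False)
    eps_k = np.max(np.sum(np.abs(Q), axis=1)) * u   # ||Q||_inf = max absolute row sum
    return int(np.sum(sigma > eps_k)), eps_k
```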
The basis of the left kernel of $Q_k$ is then given by the rows of $U_k$ corresponding to singular values smaller than $\epsilon_k$. It is easy to see that if $N_k$ (in the notation of Step 4 of the algorithm) is such a basis, then $\|N_kM_{ek}\|_\infty < \epsilon_k$. The product $N_kM_{ek}$ gives the coefficients of the product of the newly computed rows of the minimal polynomial basis with $F(s)$, so it is important to keep $\|N_kM_{ek}\|_\infty$ small relative to the magnitude of $M_{ek}$.

On the other hand, it is easy to see that $\|M_{ek}\|_\infty = \|M_{e1}\|_\infty$, while due to the special structure of $\bar L_k$ it can be seen that $\|\bar L_k^\top\|_\infty \le p$. To avoid problematic situations, where for example $\|M_{ek}\|_\infty \ll \|Q_k\|_\infty$, which may lead to an erroneous computation of $r_k$, it is necessary to scale $M_{ek}$ in order to balance the components of $Q_k$. Experimental results show that a good practice is to normalize $F(s)$ using $\|M_{e1}\|_\infty$, i.e. to set $F(s) = F(s)/\|M_{e1}\|_\infty$. In such a case a quick calculation yields $\|N_kM_{ek}\|_\infty < u\,\|M_{e1}\|_\infty(p+1)$, i.e. the coefficients of the product $E(s)F(s)$ are of magnitude about $u$ times the magnitude of the coefficients of $F(s)$, which is close to zero compared to the magnitude of $F(s)$.

5. Examples

The examples below have been computed on a PC with relative machine precision $EPS = 2^{-52} \simeq 2.22 \times 10^{-16}$.

Example 5.1. Consider Example 5.2 in [2]: a transfer function $P(s) = D_L^{-1}(s)N_L(s)$ with $D_L(s), N_L(s) \in \mathbb{R}^{2\times 2}[s]$ of degree 3. We construct the compound matrix $F(s) = [D_L(s), N_L(s)]^\top$ and compute a minimal basis of the left kernel of $F(s)$. We first notice that $\|M_{e1}\|_\infty = 36$ and normalize $F(s)$ by setting $F(s) = F(s)/\|M_{e1}\|_\infty$. Due to lack of space we quote fewer decimal digits in intermediate results, but the actual computations were carried out at the machine precision mentioned above. For $k = 1$ we form $M_{e1} \in \mathbb{R}^{4\times 8}$ and calculate its left kernel, which in this case is empty; thus $E_1 = \emptyset$.

Since $E_1 = \emptyset$ we also have $\bar L_2 = \emptyset$, so for $k = 2$ we compute an orthonormal basis of the left kernel of $M_{e2} \in \mathbb{R}^{8\times 10}$. This yields a single row $N_2 \in \mathbb{R}^{1\times 8}$, i.e. one polynomial row vector of degree 1, and we set $E_2 = N_2$. We proceed to $k = 3$ and compute $M_{e3} \in \mathbb{R}^{12\times 12}$ and, using (3.7), the matrix $\bar L_3 \in \mathbb{R}^{2\times 12}$ (the row of $E_2$ together with its shift by $s$). The left null space of $[M_{e3}, \bar L_3^\top]$ yields one more row $N_3 \in \mathbb{R}^{1\times 12}$, and we set $E_3$ to be the $2\times 12$ matrix with rows $[E_2, 0_{1\times 4}]$ and $N_3$. We now have $p = 2$ rows in $E_3$, so the loop stops. The computed minimal basis of the left kernel of $F(s)$ is obtained by setting $E(s) = E_3S_{4,3}(s)$; transposed, and with coefficients of magnitude about $10^{-16}$ or smaller (numerically zero) denoted by $\varepsilon$, it reads

$$E^\top(s) = \begin{bmatrix} \varepsilon s - 0.499 & \varepsilon s^2 - 0.061616s + 0.111 \\ \varepsilon s - 0.514496 & \varepsilon s^2 + 0.10951s - 0.091865 \\ \varepsilon s - 0.499 & 0.4s^2 + 0.59516s + 0.5548 \\ 0.11499s - 0.685994 & 0.080808s^2 - 0.1s - 0.1669 \end{bmatrix},$$

whose (row) partitioning gives the coprime factorization $P(s) = N_R(s)D_R^{-1}(s)$. Notice that $\bar\mu = 2$, which is less than the degree of $F(s)$, $q = 3$.
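A result like the above is easy to validate independently of the example data. The sketch below assumes `E` was returned by the hypothetical `minimal_left_basis` of the previous section for a coefficient list `F_coeffs`; it checks both defining properties, $E(s)F(s) = 0$ and orthonormality of the coefficients:

```python
import numpy as np

def validate_basis(E, F_coeffs):
    """Check E(s0) F(s0) ~ 0 at random points and orthonormality of E's rows."""
    n = F_coeffs[0].shape[0]
    k = E.shape[1] // n
    eval_rows = lambda s: sum(E[:, j*n:(j+1)*n] * (s ** j) for j in range(k))
    eval_F = lambda s: sum(Fj * (s ** j) for j, Fj in enumerate(F_coeffs))
    for s0 in np.random.randn(5) + 1j * np.random.randn(5):   # random test points
        assert np.abs(eval_rows(s0) @ eval_F(s0)).max() < 1e-8
    assert np.allclose(E @ E.T, np.eye(E.shape[0]))           # E E^T = I_p
```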

Example 5.2. Consider Example 5.1 in [3]: a transfer function $P(s) = N_R(s)D_R^{-1}(s)$, where $N_R(s) \in \mathbb{R}^{5\times 4}[s]$ and $D_R(s) \in \mathbb{R}^{4\times 4}[s]$ are the sparse polynomial matrices given there. We construct the compound matrix $F(s) = [N_R^\top(s), D_R^\top(s)]^\top$. The minimal basis of the left kernel of $F(s)$ computed by our algorithm is

$$E(s) = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & a(s-1) & 0 & 0 & 0 & as \\ a(s-1) & 0 & 0 & 0 & 0 & as & 0 & 0 & 0 \\ 0 & 0 & 0 & bs^2+cs-b & 0 & 0 & bs & b(s^2-s) & 0 \end{bmatrix},$$

where $a \simeq 0.57735$ ($= 1/\sqrt{3}$), $b \simeq 0.33333$ and $c \simeq 0.66666$. The left coprime fractional representation $P(s) = D_L^{-1}(s)N_L(s)$ can be obtained by appropriately partitioning $E(s)$. Notice that here $\bar\mu = 2$, which is equal to $q = 2$.

Example 5.3. Consider the matrix

$$F(s) = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ -s^q & 1 & \ddots & \vdots \\ 0 & -s^q & \ddots & 0 \\ \vdots & \ddots & \ddots & 1 \\ 0 & \cdots & 0 & -s^q \end{bmatrix} \in \mathbb{R}^{(m+1)\times m}[s].$$

The minimal polynomial basis for the left kernel of $F(s)$, as computed by the algorithm, is

$$E(s) = a\,[s^{qm}, s^{q(m-1)}, \ldots, s^q, 1] \in \mathbb{R}^{1\times(m+1)}[s],$$

where $a = 1/\sqrt{m+1}$. Obviously $\bar\mu = qm$, which is the worst case from a performance point of view. Notice the absence of finite and infinite elementary divisors in $F(s)$ and the fact that $\dim\ker_{L,\mathbb{R}(s)}F(s) = 1$: the total degree $qm$ is consumed in a single left minimal index.
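Example 5.3 is also a convenient stress test for the sketches above, since the answer is known in closed form. Under the same assumptions (hypothetical helpers `sylvester_resultant` and `minimal_left_basis`), it can be reproduced as follows:

```python
import numpy as np

def example_5_3(m, q):
    """Coefficients of F(s): 1 on the diagonal, -s^q on the subdiagonal."""
    F_coeffs = [np.zeros((m + 1, m)) for _ in range(q + 1)]
    for i in range(m):
        F_coeffs[0][i, i] = 1.0          # constant term
        F_coeffs[q][i + 1, i] = -1.0     # coefficient of s^q
    return F_coeffs

E, k = minimal_left_basis(example_5_3(3, 2), p=1)
# expected: k = q*m + 1 = 7, and E encodes a*[s^6, s^4, s^2, 1] with a = 1/sqrt(4) = 1/2
```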

6. Conclusions

In this note we have proposed a resultant-based method for the computation of minimal polynomial bases of a polynomial matrix. The algorithm utilizes the left null space structure of successive generalized Wolovich or Sylvester resultants of a polynomial matrix to obtain the coefficients of a minimal polynomial basis of the left kernel of the given polynomial matrix. The entire computation can be accomplished using only orthogonal decompositions, and the coefficients of the resulting minimal polynomial basis have the appealing property of being orthonormal. From a performance point of view, our procedure requires about $O(\bar\mu^3n^3)$ floating point operations, which is comparable to other approaches ([3], [19]), since in most cases the order of the maximal left minimal index is expected to be close to the degree $q$ of the polynomial matrix itself. Further research on the subject could address more specific problems, like the computation of row or column reduced polynomial matrices using an approach similar to [4] (and the improved version of [18]), or the determination of the rank, left minimal indices and greatest common divisors of polynomial matrices. A test version of the algorithm has been implemented in Mathematica 4.2 and is available upon request to anyone interested.

References

[1] Antoniou E.N., Vardulakis A.I.G. (2003), A numerical method for the computation of proper denominator assigning compensators for strictly proper plants, Proc. of the 11th IEEE Mediterranean Conference on Control and Automation, June 2003, Rhodes, Greece.
[2] Basilio J.C., Kouvaritakis B. (1997), An algorithm for coprime matrix fraction description using Sylvester matrices, Linear Algebra and its Applications, Vol. 266, pp. 107-125.
[3] Beelen T.G., Veltkamp G.W. (1987), Numerical computation of a coprime factorization of a transfer function matrix, Systems & Control Letters, Vol. 9, pp. 281-288.
[4] Beelen T.G., Van Den Hurk G.J., Praagman C. (1988), A new method for computing a column reduced polynomial matrix, Systems & Control Letters, Vol. 10, pp. 217-224.
[5] Bitmead R., Kung S., Anderson B., Kailath T. (1978), Greatest common divisors via generalized Sylvester and Bezout matrices, IEEE Trans. Autom. Control, Vol. AC-23, No. 6, pp. 1043-1047.
[6] Callier F.M., Desoer C.A. (1982), Multivariable Feedback Systems, Springer Verlag, New York.
[7] Callier F.M. (2001), Polynomial equations giving a proper feedback compensator for a strictly proper plant, Proceedings of the IFAC/IEEE Symposium on System Structure and Control, Prague, Czech Republic.
[8] Forney G.D. (1975), Minimal bases of rational vector spaces with application to multivariable linear systems, SIAM J. Control, Vol. 13, pp. 493-520.
[9] Golub G., Van Loan C. (1996), Matrix Computations, Third Edition, The Johns Hopkins University Press.
[10] Gu M., Eisenstat S.C. (1994), A Stable and Fast Algorithm for Updating the Singular Value Decomposition, Research Report YALEU/DCS/RR-966, Yale University, New Haven, CT.
[11] Kailath T. (1980), Linear Systems, Prentice Hall, Englewood Cliffs, NJ.

[12] Karcanias N. (1994), Minimal bases of matrix pencils: algebraic Toeplitz structure and geometric properties, Linear Algebra and its Applications, Vol. 205-206, pp. 831-868.
[13] Karcanias N., Mitrouli M. (2002), Minimal bases of matrix pencils and coprime fraction descriptions, IMA Journal of Control and Information, Vol. 19, pp. 245-278.
[14] Khazanov V.B. (2003), On some properties of the minimal-in-degree irreducible factorizations of a rational matrix, Journal of Mathematical Sciences, Vol. 114, No. 6, pp. 1860-1862.
[15] Kucera V. (1979), Discrete Linear Control: The Polynomial Equation Approach, Wiley, Chichester, UK.
[16] Kucera V., Zagalak P. (1999), Proper solutions of polynomial equations, Proc. of the 14th World Congress of IFAC, London, Elsevier Science, pp. 5-6.
[17] Kublanovskaya V.N. (1992), Rank division algorithms and their applications, J. Numer. Linear Algebra Appl., Vol. 1, No. 2, pp. 199-213.
[18] Neven W., Praagman C. (1993), Column reduction of polynomial matrices, Linear Algebra and its Applications, Vol. 188-189, pp. 569-589.
[19] Stuchlik-Quere M.P. (1997), How to compute minimal bases using Padé approximants, Laboratoire d'Informatique de Paris 6, Research Report LIP6 1997/05.
[20] Vafiadis D., Karcanias N. (1995), Generalized state-space realizations from matrix fraction descriptions, IEEE Trans. Automat. Control, Vol. 40, No. 6, pp. 1134-1137.
[21] Vardulakis A.I.G. (1991), Linear Multivariable Control - Algebraic Analysis and Synthesis Methods, John Wiley & Sons, New York.
[22] Wolovich W.A. (1974), Linear Multivariable Systems, Springer Verlag, New York.
[23] Zhang S.-Y., Chen C.-T. (1983), Design of unity feedback systems to achieve arbitrary denominator matrix, IEEE Trans. Autom. Control, Vol. AC-28, No. 4, pp. 518-521.