Algorithm to Compute Minimal Nullspace Basis of a Polynomial Matrix


Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2010), 5–9 July 2010, Budapest, Hungary

Algorithm to Compute Minimal Nullspace Basis of a Polynomial Matrix

S. R. Khare, H. K. Pillai*, M. N. Belur

Abstract — In this paper we propose a numerical algorithm to compute a minimal nullspace basis of a univariate polynomial matrix of arbitrary size. To this end, a sequence of structured matrices is obtained from the given polynomial matrix. The nullspace of the polynomial matrix can be computed from the nullspaces of these structured matrices.

Index Terms — Singular Value Decomposition (SVD), polynomial matrix, nullspace of a polynomial matrix, minimal polynomial basis

*Corresponding author. The authors are with the Department of Electrical Engineering, Indian Institute of Technology Bombay, Powai, Mumbai, India. E-mail: swanand@ee.iitb.ac.in (S. R. Khare), hp@ee.iitb.ac.in (H. K. Pillai), belur@ee.iitb.ac.in (M. N. Belur).

I. INTRODUCTION

Computation of the nullspace of a polynomial matrix is a very important problem. We give an algorithm to compute a minimal polynomial basis for the nullspace of a given polynomial matrix. Before stating the problem, we introduce some notation and preliminaries required for this paper.

Let R^{p×q} denote the set of all matrices of size p × q with entries from the field of real numbers. For A ∈ R^{p×q}, let A = UΣV^T be the Singular Value Decomposition (SVD) of A. The rank of A is then equal to the number of nonzero singular values of A. Let r be the rank of A. Then A = U_r Σ_r V_r^T is called the condensed SVD (see [1]) of A, where Σ_r is a square diagonal matrix of size r × r with the nonzero singular values on its diagonal, U_r is an isometry obtained by taking the first r columns of U (those corresponding to the nonzero singular values), and V_r is obtained from V in a similar way. In standard MATLAB notation, U_r = U(:, 1:r).

Let R[s] denote the ring of polynomials in a single variable s with coefficients from the real field. A polynomial vector v ∈ R^w[s] is a vector of size w with each entry a polynomial. The degree of a vector v ∈ R^w[s] is the maximum among the degrees of its polynomial components. Alternatively, we can write v as v = v_0 + v_1 s + v_2 s^2 + ··· + v_n s^n, where n is the degree of v, v_i ∈ R^w for i = 0, 1, ..., n, and v_n ≠ 0. Thus v is a polynomial of degree n whose coefficients are vectors from R^w. Similarly, a polynomial matrix R ∈ R^{g×w}[s] is a matrix of size g × w with entries from R[s]. The degree of a polynomial matrix is the maximum among the degrees of its polynomial entries. As in the case of vectors, a polynomial matrix can be written in matrix polynomial form as

$$R = R_0 + R_1 s + R_2 s^2 + \cdots + R_n s^n, \qquad (1)$$

where n is the degree of R, the R_i ∈ R^{g×w} for i = 0, 1, ..., n are the coefficient matrices, and R_n ≠ 0. The matrix polynomial representation of polynomial matrices plays a key role in the discussion that follows.

A set of vectors {v_1, v_2, ..., v_n} ⊂ R^w[s] is called a polynomially independent set if the polynomial combination a_1(s)v_1 + a_2(s)v_2 + ··· + a_n(s)v_n = 0 implies that a_i(s) = 0 identically for all i = 1, 2, ..., n. If a set of polynomial vectors is not polynomially independent, it is called polynomially dependent. The rank (or normal rank) of a polynomial matrix R ∈ R^{g×w}[s] is defined as max_{λ∈R} rank R(λ). It can be shown that this number is the same as the number of polynomially independent rows of R.
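Since the condensed SVD is the computational workhorse in everything that follows, a minimal NumPy sketch of it may be helpful; the function name and the rank tolerance below are our own choices, not part of the paper.

    import numpy as np

    def condensed_svd(A, tol=1e-10):
        """Condensed SVD: A ~= U_r @ np.diag(S_r) @ V_r.T.

        Keeps only the r columns of U and V belonging to singular values
        above tol, so S_r holds the r nonzero singular values and
        U_r, V_r are isometries (orthonormal columns)."""
        U, S, Vt = np.linalg.svd(A)
        r = int(np.sum(S > tol))              # numerical rank
        return U[:, :r], S[:r], Vt[:r, :].T

    # Rank and nullity of a rank-deficient matrix:
    A = np.random.randn(6, 4) @ np.random.randn(4, 5)   # rank <= 4
    U_r, S_r, V_r = condensed_svd(A)
    print(len(S_r), A.shape[1] - len(S_r))              # rank 4, nullity 1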
Further, if the matrix R is considered as an element of R^{g×w}(s) (the set of matrices with entries from R(s), the quotient field of the polynomial ring R[s]), then the above notion of rank matches the notion of rank over this quotient field. The nullspace of R is defined as N = {v ∈ R^w[s] | Rv = 0}. We now define a minimal polynomial basis for the nullspace N.

Definition 1.1: Let B = {m_1(s), m_2(s), ..., m_k(s)} ⊂ R^w[s] be a generating set for N with degrees δ_1 ≤ δ_2 ≤ ··· ≤ δ_k. This generating set is called a minimal basis (see [2]) if for any other basis of N with degrees γ_1 ≤ γ_2 ≤ ··· ≤ γ_k it turns out that δ_i ≤ γ_i for i = 1, 2, ..., k. The numbers δ_1, δ_2, ..., δ_k are called the right minimal indices of N (see [3]). Given a minimal polynomial basis for N, the degree of N is defined as the maximum among the degrees of the minimal polynomial basis vectors, that is, δ_k.

We now state the problem.

Problem Statement 1.2: Given a polynomial matrix R ∈ R^{g×w}[s] with rank r, compute a minimal polynomial basis for the nullspace N of R.

The paper is organized as follows: the rest of this section discusses related work in the literature. In Section II we present the main results of the paper. In Section III we propose an algorithm to compute a minimal nullspace basis of the given polynomial matrix. We illustrate the algorithm with numerical examples in Section IV and conclude in Section V.

Computation of a minimal nullspace basis has many applications in systems theory, where systems are described using polynomial matrices; see for instance [4], [5]. Recently an application of nullspace bases in fault detection methods was reported in [6]. In the literature there are several ways to compute a minimal polynomial nullspace basis of a given polynomial matrix. One approach to dealing with polynomial matrices is to convert the given polynomial matrix into a matrix pencil (this is also known in the literature as linearization).

In [7] and [8] the given polynomial matrix is converted to a matrix pencil and the problem of computing a nullspace basis is solved for this pencil; the computed nullspace basis of the pencil is then converted back into the polynomial domain. In [9] two vector spaces of matrix pencils that generalize the first and second companion forms were introduced, and in [3] it was shown that these linearizations can be used to obtain the left and right minimal indices and minimal bases. As opposed to pencil algorithms, in non-pencil algorithms (see for instance [10], [11]) the problem is converted to one about structured matrices obtained from the polynomial matrix, and the manipulations are done on these structured matrices in order to compute a minimal nullspace basis. Another approach uses randomized algorithms, as proposed in [12], [13]: the problem of computing the nullspace of a polynomial matrix is reduced to polynomial matrix multiplication, and a randomized algorithm is used to compute the minimal or small-degree basis elements of the nullspace of the given polynomial matrix.

II. MAIN RESULTS

The algorithm we develop to compute a minimal nullspace basis is a non-pencil algorithm. The main idea is to construct a sequence of structured real matrices from the given polynomial matrix; the nullspace of the polynomial matrix is then related to the nullspaces of these real structured matrices.

Let R ∈ R^{g×w}[s] of degree n be represented in matrix polynomial form as in equation (1). We construct the matrix A_0 ∈ R^{(n+1)g×w} from R by stacking the coefficient matrices, and from it a sequence of structured matrices:

$$A_0 = \begin{bmatrix} R_0 \\ R_1 \\ \vdots \\ R_n \end{bmatrix}, \qquad A_1 = \begin{bmatrix} A_0 & 0 \\ 0 & A_0 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} A_1 & 0 \\ 0 & A_0 \end{bmatrix}, \ \ldots \qquad (2)$$

where the zero blocks in the above equation are sized so that A_i ∈ R^{(n+i+1)g×(i+1)w}; equivalently, A_i is the block Toeplitz matrix whose i+1 block columns are copies of A_0, each shifted down by g rows relative to the previous one. Let K_i = ker(A_i) and d_i = dim(K_i). A result about the sequence {d_i}_{i=0,1,...}, essential for the computation of a basis of the nullspace of R, is now proved.

Lemma 2.1: Let K_i = ker(A_i) and d_i = dim(K_i). Let A_j be as in equation (2), where j is the smallest index for which d_j > 0. Then d_{j+k} ≥ (k+1)d_j for k ≥ 1.

Proof: Let V = {v_1, v_2, ..., v_{d_j}} be a basis for K_j, where v_i ∈ R^{(j+1)w} for i = 1, ..., d_j. Then the k+1 shifted vectors

$$p^1_0 = \begin{bmatrix} v_1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad p^1_1 = \begin{bmatrix} 0 \\ v_1 \\ \vdots \\ 0 \end{bmatrix}, \quad \ldots, \quad p^1_k = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ v_1 \end{bmatrix}$$

are all in the kernel of A_{j+k}, where each 0 is a zero vector of size w and p^1_l ∈ R^{(j+k+1)w} for l = 0, 1, ..., k. Note that {p^1_0, p^1_1, ..., p^1_k} is a linearly independent set. Repeating the construction for every v_m in the basis of K_j gives vectors p^m_l, and since V is a linearly independent set, the whole collection {p^m_l}_{l=0,...,k, m=1,...,d_j} is linearly independent. Hence the kernel of A_{j+k} contains at least (k+1)d_j linearly independent vectors. ∎

We now illustrate the relation between vectors in the nullspace N and vectors in the kernels K_i. For some i, let v ∈ R^{(i+1)w} be such that v ∈ K_i. Partition v as v = [v_0^T v_1^T ··· v_i^T]^T, where v_j = v(jw+1 : (j+1)w) for j = 0, 1, ..., i. Then it is easy to see that the polynomial vector x(s) ∈ R^w[s] defined as x(s) = Σ_{j=0}^{i} v_j s^j satisfies Rx(s) = 0, that is, x(s) ∈ N. Thus a vector v ∈ K_i is identified with a polynomial vector x(s) ∈ N. From Lemma 2.1 it is clear that, for v ∈ K_j,

$$\begin{bmatrix} v \\ 0 \end{bmatrix}, \ \begin{bmatrix} 0 \\ v \end{bmatrix} \in K_{j+1},$$

and these vectors in K_{j+1} are identified with the polynomial vectors x(s) and s·x(s) in N, respectively.
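The construction in equation (2) and the identification of ker A_i with degree-≤ i nullspace vectors translate directly into code. The following NumPy sketch (names and the SVD-based kernel extraction are our choices) builds A_i and reads the coefficient vectors of x(s) off a kernel vector:

    import numpy as np

    def block_toeplitz(Rcoeffs, i):
        """A_i from equation (2); Rcoeffs = [R_0, ..., R_n], each g x w.
        Block column j is the stack [R_0; ...; R_n] shifted down j*g rows."""
        n = len(Rcoeffs) - 1
        g, w = Rcoeffs[0].shape
        A = np.zeros(((n + i + 1) * g, (i + 1) * w))
        for j in range(i + 1):                   # block column index
            for k, Rk in enumerate(Rcoeffs):     # coefficient index
                A[(j + k) * g:(j + k + 1) * g, j * w:(j + 1) * w] = Rk
        return A

    def kernel_to_poly(v, w):
        """Partition v in K_i into the coefficient vectors v_0, ..., v_i."""
        return v.reshape(-1, w)                  # row j holds v_j^T

    # Check with R(s) = [1, -s]: the nullspace is spanned by [s, 1]^T.
    R0, R1 = np.array([[1.0, 0.0]]), np.array([[0.0, -1.0]])
    A1 = block_toeplitz([R0, R1], 1)
    _, S, Vt = np.linalg.svd(A1)
    v = Vt[-1]                                   # kernel vector of A_1
    print(kernel_to_poly(v, 2))                  # rows v_0^T, v_1^T, up to sign/scale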
hus Lemma.1 gives a limit based on all the polynomially dependent vectors present in K j+1 that arise from vectors in K j. In the following theorem we give the minimum stage i for which all the polynomially independent vectors in the nullspace N can be computed from the kernel K i. Before proceeding to the next result we define what we mean by a steadily increasing sequence of real numbers. Definition.: Let {a n } n=,1,... be a non-decreasing sequence of real numbers. hen we say that the sequence {a n } n=,1,... a steadily-increasing sequence if there exists a non-negative integer n N and c >, a positive constant, such that a m+1 a m = c, for all m n. heorem.3: Let R R g w [s] and construct A i as in the equation () where A i R (n+i+1)g (i+1)w. Let K i = ker A i and d i = dim(k i ). hen sequence {d i } i=,1,... a steadilyincreasing sequence and there exists n N such that d i+1 d i = k for i n, where k is the number of polynomially independent generators for the nullspace of the polynomial matrix R. Proof: We first prove that the sequence {d i } i=,1,... is a nondecreasing sequence. Let j be the smallest index such that d j >. Let d j+1 = d j + γ j+1. his implies d j+1 d j. If γ j+1 >, then new polynomially independent vector is added in the basis of N. If γ j+1 = then no new polynomially independent vectors are added in the basis of N. Further d j+ = 3d j + γ j+1 + γ j+. his implies d j+ d j+1. Depending on whether γ j+ is zero or not, we can conclude if polynomially independent vectors are added in the basis of N. Continuing this way it can be shown that the sequence {d i } i=,1,... is a nondecreasing sequence. In general for l 1 d j+l = (l + 1)d j + l (l i + 1)γ j+i. (3) i=1

Note that at stage j+l the number of polynomially independent vectors added to the basis of N is d_j + Σ_{i=1}^{l} γ_{j+i} for l ≥ 1. Let n_0 ∈ N be such that k = d_j + Σ_{i=1}^{n_0−j} γ_{j+i}. Then at stage n_0 all the polynomially independent vectors have been added to the basis of N. From equation (3) we then get d_{n_0+1} − d_{n_0} = d_j + Σ_{i=1}^{n_0−j} γ_{j+i} = k, since γ_{n_0+1} = 0. In fact, d_{n_0+l} − d_{n_0+l−1} = k, as γ_{n_0+l} = 0 for l = 1, 2, .... This proves the theorem. ∎

Remark 2.4: Let j be the smallest non-negative integer such that d_j > 0. Then d_0 = d_1 = ··· = d_{j−1} = 0. In fact, the sequence {d_i}_{i=0,1,...} is strictly increasing after the j-th term.

Remark 2.5: From Theorem 2.3 it is clear that we pick the polynomially independent vectors in the nullspace of R in increasing order of degree, without missing any polynomial vector of smaller degree. Thus the basis obtained using the theorem is a minimal basis for the nullspace of the polynomial matrix R. Further, the degree of the minimal polynomial basis is also obtained from the theorem.

III. AN ALGORITHM TO COMPUTE A MINIMAL NULLSPACE BASIS

In this section we discuss an algorithm to compute a minimal nullspace basis of a given polynomial matrix R ∈ R^{g×w}[s] with rank r. We extract a minimal polynomial basis of N from the kernels K_i. To achieve greater numerical efficiency, we use matrices obtained from the SVD of A_0 for the computations instead of the matrices A_i themselves. We now describe the construction of a sequence of matrices J_i that are useful in the discussion that follows.

Construction 3.1: For a given polynomial matrix R ∈ R^{g×w}[s] with degree n, form A_0 ∈ R^{(n+1)g×w} as in equation (2). Let A_0 = U_0 Σ_0 V_0^T be the condensed SVD of A_0, and let J_0 = U_0 with rank(U_0) = r_0. Further construct

$$J_1 = \begin{bmatrix} U_0 & 0 \\ 0 & U_0 \end{bmatrix} \in \mathbb{R}^{(n+2)g \times 2r_0},$$

where the 0s are zero matrices of size g × r_0. Let J_1 = U_1 Σ_1 V_1^T be the condensed SVD of J_1, and let r_1 = rank(J_1). Now J_2 is constructed as

$$J_2 = \begin{bmatrix} U_0 & 0 \\ 0 & U_1 \end{bmatrix} \in \mathbb{R}^{(n+3)g \times (r_0+r_1)},$$

where the 0s below U_0 are zero matrices of size g × r_0 and the 0 above U_1 is the zero matrix of size g × r_1. Let J_2 = U_2 Σ_2 V_2^T be the condensed SVD and r_2 = rank(J_2). In general, let

$$J_i = \begin{bmatrix} U_0 & 0 \\ 0 & U_{i-1} \end{bmatrix} \in \mathbb{R}^{(n+i+1)g \times (r_0+r_{i-1})}, \qquad (4)$$

where the 0s below U_0 are zero matrices of size g × r_0 and the 0 above U_{i−1} is the zero matrix of size g × r_{i−1}. Let r_i = rank(J_i).

We now describe the relation between the matrices J_i and the matrices A_i. As in Construction 3.1, let A_0 = U_0 Σ_0 V_0^T be the condensed SVD of A_0. Then we can write A_1 as

$$A_1 = J_1 \begin{bmatrix} \Sigma_0 & 0 \\ 0 & \Sigma_0 \end{bmatrix} \begin{bmatrix} V_0^T & 0 \\ 0 & V_0^T \end{bmatrix}. \qquad (5)$$

Clearly the above relation is not the SVD of A_1, since J_1 is not an isometry. However, because the condensed SVD of A_0 is used, the second and third matrices in the product on the RHS of equation (5) always have full rank. Thus a rank drop in A_1 is reflected in a rank drop in J_1; stated otherwise, A_1 loses rank if and only if J_1 loses rank, so the information about the rank drop in A_1 can be obtained just by studying the rank of J_1. We do a similar analysis for A_2:

$$A_2 = \begin{bmatrix} U_0 & 0 \\ 0 & J_1 \end{bmatrix} \begin{bmatrix} \Sigma_0 & 0 & 0 \\ 0 & \Sigma_0 & 0 \\ 0 & 0 & \Sigma_0 \end{bmatrix} \begin{bmatrix} V_0^T & 0 & 0 \\ 0 & V_0^T & 0 \\ 0 & 0 & V_0^T \end{bmatrix}. \qquad (6)$$

The above equation is clearly not an SVD of A_2. The first matrix in the product on the RHS of equation (6) can be written as

$$\begin{bmatrix} U_0 & 0 \\ 0 & J_1 \end{bmatrix} = J_2 \begin{bmatrix} I & 0 \\ 0 & \Sigma_1 \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & V_1^T \end{bmatrix}. \qquad (7)$$

From equations (6) and (7) it is clear that a rank drop in A_2 is reflected in a rank drop in J_2, since all the other matrices in the product always have full rank.
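Construction 3.1 is straightforward to realize with the condensed SVD sketched in Section I. The fragment below (our naming; it reuses condensed_svd from that sketch) builds J_1, J_2, ... and records the rank r_i and nullity δ_i of each J_i:

    import numpy as np

    def build_J_sequence(Rcoeffs, num_steps, tol=1e-10):
        """Condensed-SVD factors (U, S, V) of A_0, J_1, ..., J_num_steps,
        following Construction 3.1, plus the nullities delta_i."""
        g, w = Rcoeffs[0].shape
        A0 = np.vstack(Rcoeffs)                      # (n+1)g x w
        U0, S0, V0 = condensed_svd(A0, tol)
        r0 = U0.shape[1]
        U, S, V = [U0], [S0], [V0]
        delta = [w - r0]                             # nullity of A_0
        for i in range(1, num_steps + 1):
            Ji = np.zeros((A0.shape[0] + i * g, r0 + U[-1].shape[1]))
            Ji[:U0.shape[0], :r0] = U0               # [U_0; 0], equation (4)
            Ji[g:, r0:] = U[-1]                      # [0; U_{i-1}]
            Ui, Si, Vi = condensed_svd(Ji, tol)
            delta.append(Ji.shape[1] - Ui.shape[1])  # nullity of J_i
            U.append(Ui); S.append(Si); V.append(Vi)
        return U, S, V, delta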
We now illustrate the relationship between ker J_2 and K_2. Let v ∈ R^{r_0+r_1} be such that v ∈ ker J_2. We first compute the corresponding nullspace element, say y ∈ R^{3r_0}, of the matrix on the LHS of equation (7). For y to be in the nullspace of that matrix, y has to satisfy

$$\begin{bmatrix} I & 0 \\ 0 & \Sigma_1 \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & V_1^T \end{bmatrix} y = v. \qquad (8)$$

Since Σ_1 is a diagonal matrix and V_1 is an isometry, we can compute the vector y very easily from the above equation as

$$y = \begin{bmatrix} I & 0 \\ 0 & V_1 \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & \Sigma_1^{-1} \end{bmatrix} v. \qquad (9)$$

Since Σ_1 is diagonal, the number of flops required in the above equation is just that of multiplying a matrix by a vector. Further, both identity matrices in equation (8) are of size r_0 × r_0; hence the first r_0 entries of v are simply copied into y and do not enter the multiplication. Thus the matrix–vector multiplication in equation (9) involves Σ_1^{-1} of size r_1 × r_1 and a vector of size r_1 (the truncated part of v needed in the computation of y), so the system of equations in (8) is solved in a numerically efficient way. From y we now compute x ∈ R^{3w}, the corresponding element in K_2, in a similar way.

The nullspace vector x ∈ K_2 is obtained as

$$x = \begin{bmatrix} V_0 & & \\ & V_0 & \\ & & V_0 \end{bmatrix} \begin{bmatrix} \Sigma_0^{-1} & & \\ & \Sigma_0^{-1} & \\ & & \Sigma_0^{-1} \end{bmatrix} y. \qquad (11)$$

Note that Σ_0 is a diagonal matrix, so we obtain x from y in a numerically efficient way. Thus, in the case i = 2, we obtain the nullspace vector of A_2 from that of J_2 in two steps.

We now generalize the procedure to obtain a nullspace element of A_i from one of J_i. For any i ∈ N, generalizing equation (6), we can write A_i as

$$A_i = \begin{bmatrix} U_0 & & & \\ & U_0 & & \\ & & \ddots & \\ & & & U_0 \end{bmatrix} \begin{bmatrix} \Sigma_0 & & & \\ & \Sigma_0 & & \\ & & \ddots & \\ & & & \Sigma_0 \end{bmatrix} \begin{bmatrix} V_0^T & & & \\ & V_0^T & & \\ & & \ddots & \\ & & & V_0^T \end{bmatrix}, \qquad (10)$$

where the first factor contains i+1 copies of U_0, each shifted down by g rows relative to the previous one. To factor this first matrix through J_i we start with the recurrence relation

$$J_{i+1} \begin{bmatrix} I & 0 \\ 0 & \Sigma_i \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & V_i^T \end{bmatrix} = \begin{bmatrix} U_0 & 0 \\ 0 & J_i \end{bmatrix}. \qquad (12)$$

Using the recurrence relation (12) repeatedly, we have

$$\begin{bmatrix} U_0 & & & \\ & U_0 & & \\ & & \ddots & \\ & & & U_0 \end{bmatrix} = J_i\, P_{i-1} P_{i-2} \cdots P_1, \qquad (13)$$

where the matrix P_j, for j = 1, ..., i−1, is defined as

$$P_j = \begin{bmatrix} I & & & \\ & \ddots & & \\ & & I & \\ & & & \Sigma_j \end{bmatrix} \begin{bmatrix} I & & & \\ & \ddots & & \\ & & I & \\ & & & V_j^T \end{bmatrix}, \qquad (14)$$

with i−j identity blocks of size r_0 × r_0 in each factor. From equations (10) and (13) it is clear that rank loss in A_i is reflected in rank loss of J_i, since the Σ_j are diagonal matrices with nonzero diagonal entries and the V_j are isometries.

We now describe how to compute the vector in K_i corresponding to a vector in ker J_i. Let v_i ∈ R^{r_0+r_{i−1}} be such that J_i v_i = 0. We first compute y ∈ R^{(i+1)r_0} in the nullspace of the matrix on the LHS of equation (13), proceeding through a series of steps as follows. Given v_i, we compute v_{i−1} in the nullspace of J_i P_{i−1}: since Σ_{i−1} is a diagonal matrix and V_{i−1} is an isometry, v_{i−1} is obtained from v_i by a matrix–vector multiplication, and since the identity blocks in P_{i−1} are of size r_0 × r_0, the first r_0 entries of v_i pass through unchanged. We then compute v_{i−2}, in the nullspace of J_i P_{i−1} P_{i−2}, from v_{i−1} in the same way; here the first 2r_0 entries of v_{i−1} pass through unchanged. Continuing in this way we compute y = v_1, which is in the nullspace of J_i P_{i−1} P_{i−2} ··· P_1. Finally, we compute x ∈ R^{(i+1)w} such that A_i x = 0 using the relation (10):

$$x = \begin{bmatrix} V_0 & & \\ & \ddots & \\ & & V_0 \end{bmatrix} \begin{bmatrix} \Sigma_0^{-1} & & \\ & \ddots & \\ & & \Sigma_0^{-1} \end{bmatrix} y. \qquad (15)$$

Since Σ_0 is a diagonal matrix, we obtain x from y in a numerically efficient way. We obtain y from v_i in i−1 steps, and in the final step we obtain x from y; thus we require i steps to compute a nullspace element of K_i from a nullspace element of J_i.
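Equations (12)–(15) amount to a cheap back-substitution through the factors P_{i−1}, ..., P_1. A NumPy sketch of this step follows (our naming; S and V are the lists of condensed-SVD factors returned by build_J_sequence above, with index 0 referring to A_0):

    import numpy as np

    def lift_kernel_vector(v, i, S, V, r0):
        """Map v with J_i v = 0 to x with A_i x = 0 (equations (12)-(15))."""
        y, tail = list(v[:r0]), v[r0:]
        for j in range(i - 1, 0, -1):         # back-substitute through P_j
            tail = V[j] @ (tail / S[j])       # tail <- V_j Sigma_j^{-1} tail
            y.extend(tail[:r0])               # identity blocks fix r0 entries
            tail = tail[r0:]
        y = np.array(y + list(tail))          # y = v_1 in R^{(i+1) r0}
        Y = y.reshape(i + 1, r0)              # the r0-blocks of y, as rows
        X = (Y / S[0]) @ V[0].T               # equation (15), blockwise
        return X.reshape(-1)                  # x in R^{(i+1) w}

As in the text, each step costs only a diagonal solve and one multiplication by an isometry, and the leading identity blocks of each P_j simply pass entries through.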

We now propose an algorithm to compute a minimal polynomial basis for the nullspace N of a given polynomial matrix R ∈ R^{g×w}[s]. To compute this basis we use the sequence of matrices J_i obtained from R. Let δ_i = dim(ker J_i). In Theorem 3.2 we show that δ_i is the number of polynomially independent vectors of degree up to i in N. In Theorem 2.3 we proved that the sequence {d_i}_{i=0,1,...} saturates at some stage n_0 ∈ N and that the degree of the minimal polynomial basis is n_0. Thus at stage n_0, δ_{n_0} is the number of all polynomially independent vectors in N of degree up to n_0.

Theorem 3.2: Let R ∈ R^{g×w}[s] be a polynomial matrix with degree n, and let N denote the nullspace of the polynomial matrix R. Construct the sequence of matrices {J_i}_{i=0,1,...} as in Construction 3.1 and let δ_i = nullity(J_i). Then the following statements hold:
(a) δ_{i+1} ≥ δ_i for all i;
(b) δ_i is the number of polynomially independent vectors of degree up to i in the minimal polynomial basis of N.

Remark 3.3: Statement (b) of the above theorem yields the terminating criterion for the algorithm to compute a minimal nullspace basis of N: let r be the normal rank of R ∈ R^{g×w}[s], with g ≤ w without loss of generality. Then the number of polynomially independent vectors in N is w − r. Thus for the smallest i such that δ_i = w − r we have all the polynomially independent vectors in N, and we know that the highest degree of the minimal nullspace basis vectors is i.

Algorithm 3.4: Computation of a minimal nullspace basis
Input: polynomial matrix R ∈ R^{g×w}[s] with rank r
Output: polynomial matrix M ∈ R^{w×(w−r)}[s] whose columns form a minimal polynomial basis of N

    Construct A_0 as in equation (2); compute the condensed SVD A_0 = U_0 Σ_0 V_0^T
    r_0 ← rank(A_0), δ_0 ← nullity(A_0), J_0 ← U_0, i ← 0
    while δ_i < dim(N) = w − r, repeat
        i ← i + 1
        J_i ← [U_0 0; 0 U_{i−1}] as in equation (4)
        compute the condensed SVD J_i = U_i Σ_i V_i^T
        r_i ← rank(J_i), δ_i ← nullity(J_i)
    end while
    m ← i
    N ← [n_1 n_2 ··· n_{δ_m}] ∈ R^{(r_0+r_{m−1})×δ_m} such that J_m N = 0
    M_f ← N(1:r_0, :)
    for j = m−1 : −1 : 1, repeat
        N ← V_j Σ_j^{−1} N(r_0+1 : r_0+r_j, :)
        M_f ← [M_f ; N(1:r_0, :)]
    end for
    M_f ← [M_f ; N(r_0+1 : 2r_0, :)]
    M̂ ← [ ]
    for j = 0 : m, repeat
        M̂ ← [M̂ ; V_0 Σ_0^{−1} M_f(jr_0+1 : (j+1)r_0, :)]
    end for
    M ← Σ_{j=0}^{m} M̂(jw+1 : (j+1)w, :) s^j

Remark 3.5: The main advantage of the proposed algorithm is that all the polynomially independent vectors in N are computed at one stage, as proved in Theorem 3.2: they are all obtained at the last iteration of the while loop of Algorithm 3.4. (A NumPy sketch of the complete procedure is given after the numerical examples below.)

IV. NUMERICAL EXAMPLES

In this section we consider numerical examples to illustrate the proposed algorithm for computing a minimal nullspace basis.

Example 4.1: [11, Example 3]. Let

$$R = \begin{bmatrix} 1 & -s^q & & \\ & \ddots & \ddots & \\ & & 1 & -s^q \end{bmatrix} \in \mathbb{R}^{g \times (g+1)}[s]$$

for some g, q ∈ N. The nullspace is spanned by [s^{qg}  s^{q(g−1)}  ···  s^q  1]^T. Here m = qg, and for q = 2 and g = 3 the algorithm returns M = [.771s^6  .771s^4  .771s^2  .771]^T.

Example 4.2: In this example we illustrate the different steps of the algorithm on a randomly generated polynomial matrix of size 2 × 4 with degree 2. Consider the polynomial matrix R with rank 2 given by

$$R = \begin{bmatrix} 1 & s & s^2+1 & s^2+s+1 \\ s & 1 & s^2+1 & s^2-s \end{bmatrix}.$$

Here the degree m of the minimal nullspace basis is 2. In the notation of the algorithm, N at i = 2 is given by

$$N^T = \begin{bmatrix} .153 & .37 & .548 & .6 & .86 & .34 & .378 & .343 & .9 & .6 & .9 & .59 \\ .58 & .438 & .35 & .341 & .189 & .13 & .7 & .6 & .16 & .93 & .389 & .5 \end{bmatrix}. \qquad (16)$$

Further, the matrix M_f is formed by appending the following matrices, M_f = [M_1; M_2; M_3], where

$$M_1^T = \begin{bmatrix} .153 & .37 & .548 & .6 \\ .58 & .438 & .35 & .341 \end{bmatrix}, \quad M_2^T = \begin{bmatrix} .81 & 1.351 & 1.167 & .37 \\ 3.641 & 9.413 & 11.38 & 6.9 \end{bmatrix}, \quad M_3^T = \begin{bmatrix} 1.358 & .57 & .71 & 1.149 \\ 1.99 & 4.45 & 5.714 & 9.611 \end{bmatrix}.$$

The matrix M, whose columns form the minimal nullspace basis, is then obtained as

$$M^T = \begin{bmatrix} .8 & .8 & .45 & 1.869 \\ .319 & .5 & .13 & .36 \end{bmatrix} + \begin{bmatrix} .6 & 6.69 & .476 & 5.347 \\ .133 & .7 & .36 & .815 \end{bmatrix} s + \begin{bmatrix} .795 & 6.67 & \cdot & \cdot \\ .67 & 5.4 & .53 & 4.448 \end{bmatrix} s^2. \qquad (17)$$
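Pulling the pieces together, the following is a compact NumPy sketch of the complete Algorithm 3.4 as we read it, reusing condensed_svd from Section I. Variable names, the tolerance, and the 0-based slicing are our own choices, and the sketch assumes the basis degree m is at least 1.

    import numpy as np

    def minimal_nullspace_basis(Rcoeffs, rank_R, tol=1e-10):
        """Coefficients [M_0, ..., M_m] of a minimal nullspace basis M(s)
        of R(s) = sum_j Rcoeffs[j] s^j; each M_j is w x (w - rank_R)."""
        g, w = Rcoeffs[0].shape
        A0 = np.vstack(Rcoeffs)
        U0, S0, V0 = condensed_svd(A0, tol)
        r0 = U0.shape[1]
        S, V = [S0], [V0]
        Uprev, delta, i = U0, w - r0, 0
        while delta < w - rank_R:                  # termination: Remark 3.3
            i += 1
            Ji = np.zeros((A0.shape[0] + i * g, r0 + Uprev.shape[1]))
            Ji[:U0.shape[0], :r0] = U0             # J_i as in equation (4)
            Ji[g:, r0:] = Uprev
            Uprev, Si, Vi = condensed_svd(Ji, tol)
            S.append(Si); V.append(Vi)
            delta = Ji.shape[1] - Uprev.shape[1]   # nullity of J_i
        m = i
        _, Sf, Vt = np.linalg.svd(Ji)              # kernel basis N of J_m
        N = Vt[int(np.sum(Sf > tol)):, :].T        # J_m @ N = 0
        Mf = [N[:r0, :]]                           # first r0-block of y
        for j in range(m - 1, 0, -1):              # back-substitution loop
            N = V[j] @ (N[r0:, :] / S[j][:, None])
            Mf.append(N[:r0, :])
        Mf.append(N[r0:2 * r0, :])                 # last r0-block of y
        return [V0 @ (B / S0[:, None]) for B in Mf]   # equation (15)

As a usage check, the matrix of Example 4.1 with q = 2 and g = 3 (a 3 × 4 matrix of degree 2 and rank 3, with nullspace spanned by [s^6, s^4, s^2, 1]^T) gives a basis of degree m = qg = 6:

    q, g = 2, 3
    Rc = [np.zeros((g, g + 1)) for _ in range(q + 1)]
    Rc[0][:, :g] = np.eye(g)                       # the 1's on the diagonal
    for k in range(g):
        Rc[q][k, k + 1] = -1.0                     # the -s^q superdiagonal
    M = minimal_nullspace_basis(Rc, rank_R=g)
    print(len(M) - 1)                              # basis degree m = 6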

V. CONCLUDING REMARKS

In this paper we discussed a numerical algorithm to compute a minimal nullspace basis of a given polynomial matrix. The algorithm involves constructing structured real matrices from the polynomial matrix and working with these matrices to compute the minimal nullspace basis. The advantage of our algorithm is that all the vectors in the nullspace basis are computed at one time, so the effort of tracking the new polynomially independent vectors added to the nullspace at each stage is avoided.

REFERENCES

[1] D. S. Watkins, Fundamentals of Matrix Computations. John Wiley and Sons, 2002.
[2] G. D. Forney, Minimal bases of rational vector spaces, with applications to multivariable linear systems, SIAM Journal on Control, vol. 13, pp. 493–520, 1975.
[3] F. De Terán, F. M. Dopico, and D. S. Mackey, Linearizations of singular matrix polynomials and the recovery of minimal indices, Electronic Journal of Linear Algebra, vol. 18, pp. 371–402, 2009.
[4] J. W. Polderman and J. C. Willems, Introduction to Mathematical Systems Theory: A Behavioral Approach. New York: Springer, 2003.
[5] V. Kučera, Discrete Linear Control: The Polynomial Equation Approach. Wiley, Chichester, UK, 1979.
[6] A. Varga, On computing nullspace bases — a fault detection perspective, in Proceedings of the IFAC World Congress, Seoul, 2008.
[7] T. Beelen and P. Van Dooren, A pencil approach for embedding a polynomial matrix into a unimodular matrix, SIAM Journal on Matrix Analysis and Applications, vol. 9, no. 1, pp. 77–89, 1988.
[8] T. Beelen and G. W. Veltkamp, Numerical computation of a coprime factorization of a transfer function matrix, Systems & Control Letters, vol. 9, no. 4, pp. 281–288, 1987.
[9] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Vector spaces of linearizations for matrix polynomials, SIAM Journal on Matrix Analysis and Applications, vol. 28, pp. 971–1004, 2006.
[10] J. C. Zúñiga Anaya and D. Henrion, An improved Toeplitz algorithm for polynomial matrix null-space computation, Applied Mathematics and Computation, vol. 207, no. 1, pp. 256–272, 2009.
[11] E. N. Antoniou, A. I. G. Vardulakis, and S. Vologiannidis, Numerical computation of minimal polynomial bases: A generalized resultant approach, Linear Algebra and its Applications, vol. 405, pp. 264–278, 2005.
[12] A. Storjohann and G. Villard, Computing the rank and a small nullspace basis of a polynomial matrix, in Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC), 2005.
[13] C.-P. Jeannerod and G. Villard, Asymptotically fast polynomial matrix algorithms for multivariable systems, International Journal of Control, vol. 79, no. 11, pp. 1359–1367, 2006.