Algorithm to Compute Minimal Nullspace Basis of a Polynomial Matrix
Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2010), July 2010, Budapest, Hungary

Algorithm to Compute Minimal Nullspace Basis of a Polynomial Matrix

S. R. Khare, H. K. Pillai*, M. N. Belur

Abstract: In this paper we propose a numerical algorithm to compute the minimal nullspace basis of a univariate polynomial matrix of arbitrary size. In order to do so, a sequence of structured matrices is obtained from the given polynomial matrix. The nullspace of the polynomial matrix can be computed from the nullspaces of these structured matrices.

Index Terms: Singular Value Decomposition (SVD), polynomial matrix, nullspace of a polynomial matrix, minimal polynomial basis

I. INTRODUCTION

Computation of the nullspace of a polynomial matrix is a very important problem. We give an algorithm to compute the minimal polynomial basis for a given polynomial matrix. Before stating the problem, we introduce some notation and preliminaries required for this paper. Let R^(p×q) denote the set of all matrices of size p × q with entries from the field of real numbers. For A ∈ R^(p×q), let A = UΣV^T be the Singular Value Decomposition (SVD) of A. The rank of A is then equal to the number of nonzero singular values of A. Let r be the rank of A. Then A = U_r Σ_r V_r^T is called the condensed SVD (see [1]) of A, where Σ_r is a square diagonal matrix of size r × r with the nonzero singular values on its diagonal, U_r is an isometry obtained by taking the first r columns of U, which correspond to the nonzero singular values, and V_r is obtained from V in a similar way. In standard MATLAB notation, U_r = U(:, 1 : r). Let R[s] denote the ring of polynomials in a single variable s with coefficients from the real field. A polynomial vector v ∈ R^w[s] is a vector of size w with each entry being a polynomial. The degree of a vector v ∈ R^w[s] is the maximum amongst the degrees of its polynomial components.
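The condensed SVD described above is easy to realize numerically. The following NumPy sketch is illustrative only; the helper name condensed_svd and the rank tolerance are our choices, not notation from this paper:

```python
import numpy as np

def condensed_svd(A, tol=1e-10):
    # Thin SVD, then discard singular values at numerical zero.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int((s > tol).sum())            # numerical rank of A
    # U[:, :r] is an isometry: its columns are orthonormal.
    return U[:, :r], s[:r], Vt[:r, :]

# Rank-deficient example: the second row is twice the first.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 1., 1.]])
Ur, sr, Vtr = condensed_svd(A)          # sr has exactly 2 entries
```

The product Ur @ np.diag(sr) @ Vtr reproduces A, while only r = 2 columns of U and rows of V^T are stored.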
Alternatively, we can write v as v = v_0 + v_1 s + v_2 s^2 + ⋯ + v_n s^n, where n is the degree of v, v_i ∈ R^w for i = 0, 1, ..., n, and v_n ≠ 0. Thus v is a polynomial of degree n with the coefficients being vectors from R^w. Similarly, a polynomial matrix R ∈ R^(g×w)[s] is a matrix of size g × w with entries from R[s]. The degree of a polynomial matrix is the maximum amongst the degrees of its polynomial entries. As in the case of vectors, a polynomial matrix can be written in matrix polynomial form as

R = R_0 + R_1 s + R_2 s^2 + ⋯ + R_n s^n    (1)

where n is the degree of R, R_i ∈ R^(g×w) for i = 0, 1, ..., n are the coefficient matrices, and R_n ≠ 0.

*Corresponding Author. The authors are with the Department of Electrical Engineering, Indian Institute of Technology Bombay, Powai, Mumbai, India.

The matrix polynomial representation of polynomial matrices plays a key role in the discussion that follows. A set of vectors {v_1, v_2, ..., v_n} ⊂ R^w[s] is called a polynomially independent set if the polynomial combination a_1(s)v_1 + a_2(s)v_2 + ⋯ + a_n(s)v_n = 0 implies that a_i(s) = 0 identically for all i = 1, 2, ..., n. If a set of polynomial vectors is not polynomially independent, then it is called polynomially dependent. The rank (or normal rank) of a polynomial matrix R ∈ R^(g×w)[s] is defined as the number max_{λ∈R} rank R(λ). It can be shown that this number is the same as the number of polynomially independent rows in R. Further, if the matrix R is considered to be an element of R^(g×w)(s) (the set of matrices with entries from R(s), the quotient field of the ring of polynomials R[s]), then the above notion of rank matches the notion of rank over this quotient field. The nullspace of R is defined as N = {v ∈ R^w[s] | Rv = 0}. We now define a minimal polynomial basis for the nullspace N.

Definition 1.1: Let B = {m_1(s), m_2(s), ..., m_k(s)} ⊂ R^w[s] be a generating set for N with degrees δ_1 ≤ δ_2 ≤ ⋯ ≤ δ_k.
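The matrix polynomial form (1) suggests a natural data structure: a polynomial matrix is stored as its list of coefficient matrices R_0, ..., R_n. The sketch below (all names are our illustrative choices) evaluates R(λ) and estimates the normal rank by evaluating at a random point, which attains the maximum rank with probability one:

```python
import numpy as np

# R(s) = [[1, -s], [0, 1 - s]] stored by its coefficient matrices.
coeffs = [np.array([[1., 0.],
                    [0., 1.]]),     # R_0
          np.array([[0., -1.],
                    [0., -1.]])]    # R_1

def eval_poly_matrix(coeffs, lam):
    # R(lam) = sum_i R_i * lam**i
    return sum(C * lam ** i for i, C in enumerate(coeffs))

# A random evaluation point avoids the finitely many values of
# lambda at which R(lambda) drops rank (here only lambda = 1).
rng = np.random.default_rng(0)
normal_rank = np.linalg.matrix_rank(eval_poly_matrix(coeffs, rng.standard_normal()))
```

Here det R(s) = 1 − s, so the normal rank is 2 even though R(1) has rank 1.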
This generating set is called a minimal basis (see [2]) if for any other basis of N with degrees γ_1 ≤ γ_2 ≤ ⋯ ≤ γ_k, it turns out that δ_i ≤ γ_i for i = 1, 2, ..., k. The numbers δ_1, δ_2, ..., δ_k are called the right minimal indices of N (see [3]). Given a minimal polynomial basis for N, the degree of N is defined as the maximum amongst the degrees of the minimal polynomial basis vectors, that is, δ_k. We now state the problem.

Problem Statement 1.2: Given a polynomial matrix R ∈ R^(g×w)[s] with rank r, compute a minimal polynomial basis for the nullspace N of R.

The paper is organized as follows: the rest of this section is devoted to discussing the related work in the literature. In Section II we discuss the main results of the paper. In Section III we propose an algorithm to compute the minimal nullspace basis of the given polynomial matrix. Further, we illustrate the algorithm with some numerical examples in Section IV. Finally, we conclude in Section V.

Computation of the minimal nullspace basis has many applications in systems theory, where systems are described using polynomial matrices; see for instance [4], [5]. Recently an application of nullspace bases in fault detection methods was reported in [6]. In the literature there are several ways to compute the minimal polynomial nullspace basis of a given polynomial matrix. One of the approaches to deal with polynomial matrices is to convert the given polynomial matrix into a matrix pencil (this is also known in the literature as linearization). In [7] and [8] the given polynomial matrix is converted to a matrix pencil, and the problem of computing a nullspace basis is solved using this matrix pencil. After computing a nullspace basis for the matrix pencil, the result is converted back into the polynomial domain. In [9] two vector spaces of matrix pencils that generalize the first and the second companion form were introduced. In [3] it was shown that these linearizations can be used to obtain the left and right minimal indices and minimal bases. As opposed to pencil algorithms, nonpencil algorithms (see for instance [10], [11]) convert the problem to one about structured matrices obtained from the polynomial matrix. The manipulations are done on these structured matrices in order to compute a minimal nullspace basis. Another approach to compute the nullspace basis is the randomized algorithm proposed in [12], [13]. There, the problem of computing the nullspace of a polynomial matrix is reduced to polynomial matrix multiplication, and the randomized algorithm is used to compute the minimal or small degree basis elements in the nullspace of the given polynomial matrix.

II. MAIN RESULTS

The algorithm we develop to compute the minimal nullspace basis is a nonpencil algorithm. The main idea involves constructing a sequence of structured real matrices from the given polynomial matrix; the nullspace of the given polynomial matrix is related to the nullspaces of these real structured matrices. Let R ∈ R^(g×w)[s] of degree n be represented in matrix polynomial form as in equation (1). We construct the matrix A_0 ∈ R^((n+1)g×w) from R as

A_0 = [R_0; R_1; ⋯; R_n].

Here and below, block matrices are written in MATLAB-style notation: [X; Y] stacks X above Y, and [X Y] places the blocks side by side. A sequence of structured matrices is obtained as follows:

A_1 = [A_0 0; 0 A_0],   A_2 = [A_0 0; 0 A_1],   ...   (2)

where in A_1 the 0 blocks are zero matrices of size g × w (in general the 0 blocks are of the appropriate sizes). In general, for some i, A_i ∈ R^((n+i+1)g×(i+1)w). Let K_i = ker(A_i) and d_i = dim(K_i).
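The structured matrices A_i of equation (2) and the kernel dimensions d_i can be formed directly from the coefficient matrices. A small illustrative sketch (function names are ours), using R(s) = [1, −s], whose nullspace is spanned by [s, 1]^T:

```python
import numpy as np

def build_A(coeffs, i):
    """A_i in R^{(n+i+1)g x (i+1)w}: column block j carries the stack
    of R_0..R_n shifted down by j*g rows, as in equation (2)."""
    g, w = coeffs[0].shape
    n = len(coeffs) - 1
    A = np.zeros(((n + i + 1) * g, (i + 1) * w))
    for j in range(i + 1):
        for k, Rk in enumerate(coeffs):
            A[(j + k) * g:(j + k + 1) * g, j * w:(j + 1) * w] = Rk
    return A

def nullity(A, tol=1e-10):
    s = np.linalg.svd(A, compute_uv=False)
    return A.shape[1] - int((s > tol).sum())

# R(s) = [1, -s]: one generator [s, 1]^T of degree 1, so we expect
# d_0 = 0 and d_i = i for i >= 1 (the generator and its shifts).
coeffs = [np.array([[1., 0.]]), np.array([[0., -1.]])]
d = [nullity(build_A(coeffs, i)) for i in range(4)]
```

The first differences of d_i settle at 1, the number of generators, which is the behaviour formalized in Lemma 2.1 and Theorem 2.3 below.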
A result about the sequence {d_i}_{i=0,1,...} is now proved which is essential in the computation of the basis of the nullspace of R.

Lemma 2.1: Let K_i = ker(A_i) and d_i = dim(K_i). Let A_j be as in equation (2), where j is the smallest index for which d_j > 0. Then d_{j+k} ≥ (k+1)d_j for k ≥ 1.

Proof: Let V = {v_1, v_2, ..., v_{d_j}} be a basis for K_j, where v_i ∈ R^((j+1)w) for i = 1, ..., d_j. Then the shifted vectors

p^1_0 = [v_1; 0; ⋯; 0],   p^1_1 = [0; v_1; 0; ⋯; 0],   ...,   p^1_k = [0; ⋯; 0; v_1]

are all in the kernel of A_{j+k}, where each 0 is a zero vector of size w and p^1_i ∈ R^((j+k+1)w) for i = 0, 1, ..., k. Also note that {p^1_0, p^1_1, ..., p^1_k} forms a linearly independent set. Since the same construction applies to every v_i in the kernel of A_j, and V is a linearly independent set, the set {p^n_m}, m = 0, ..., k, n = 1, ..., d_j, is a linearly independent set. Hence the kernel of A_{j+k} contains at least (k+1)d_j linearly independent vectors.

We now illustrate the relation of the vectors in the nullspace N with the vectors in the kernels K_i. For some i, let v ∈ R^((i+1)w) be such that v ∈ K_i. Partition this vector as v = [v_0; v_1; ⋯; v_i], where v_j = v(jw+1 : (j+1)w) for j = 0, 1, ..., i. Then it is easy to see that the polynomial vector x(s) ∈ R^w[s] defined as x(s) = Σ_{j=0}^{i} v_j s^j satisfies Rx(s) = 0, that is, x(s) ∈ N. Thus a vector v ∈ K_j is identified with a polynomial vector x(s) ∈ N. From Lemma 2.1 it is clear that [v; 0] and [0; v] belong to K_{j+1}; these vectors in K_{j+1} are identified with the polynomial vectors x(s) and sx(s) in N respectively. Thus Lemma 2.1 accounts for all the polynomially dependent vectors present in K_{j+1} that arise from vectors in K_j. In the following theorem we give the minimum stage i for which all the polynomially independent vectors in the nullspace N can be computed from the kernel K_i. Before proceeding to the next result, we define what we mean by a steadily increasing sequence of real numbers.

Definition 2.2: Let {a_n}_{n=0,1,...} be a nondecreasing sequence of real numbers. Then we say that the sequence {a_n}_{n=0,1,...} is a steadily increasing sequence if there exist a nonnegative integer n_0 ∈ N and a positive constant c > 0 such that a_{m+1} − a_m = c for all m ≥ n_0.

Theorem 2.3: Let R ∈ R^(g×w)[s] and construct A_i as in equation (2), where A_i ∈ R^((n+i+1)g×(i+1)w). Let K_i = ker A_i and d_i = dim(K_i). Then the sequence {d_i}_{i=0,1,...} is a steadily increasing sequence, and there exists n_0 ∈ N such that d_{i+1} − d_i = k for i ≥ n_0, where k is the number of polynomially independent generators of the nullspace of the polynomial matrix R.

Proof: We first prove that the sequence {d_i}_{i=0,1,...} is a nondecreasing sequence. Let j be the smallest index such that d_j > 0. Let d_{j+1} = 2d_j + γ_{j+1}. This implies d_{j+1} ≥ d_j. If γ_{j+1} > 0, then new polynomially independent vectors are added to the basis of N; if γ_{j+1} = 0, then no new polynomially independent vectors are added. Further, d_{j+2} = 3d_j + 2γ_{j+1} + γ_{j+2}, which implies d_{j+2} ≥ d_{j+1}. Depending on whether γ_{j+2} is zero or not, we can conclude whether new polynomially independent vectors are added to the basis of N. Continuing this way, it can be shown that the sequence {d_i}_{i=0,1,...} is a nondecreasing sequence. In general, for l ≥ 1,

d_{j+l} = (l+1)d_j + Σ_{i=1}^{l} (l−i+1)γ_{j+i}.   (3)
Note that at stage j+l, the number of polynomially independent vectors added to the basis of N is given by d_j + Σ_{i=1}^{l} γ_{j+i} for l ≥ 1. Let n_0 ∈ N be such that k = d_j + Σ_{i=1}^{n_0−j} γ_{j+i}. Then at this stage n_0, all the polynomially independent vectors have been added to the basis of N. From equation (3) we then get d_{n_0+1} − d_{n_0} = d_j + Σ_{i=1}^{n_0−j} γ_{j+i} = k, since γ_{n_0+1} = 0. In fact, d_{n_0+l} − d_{n_0+l−1} = k, as γ_{n_0+l} = 0 for l = 1, 2, .... This proves the theorem.

Remark 2.4: Let j be the smallest nonnegative integer such that d_j > 0. Then d_0 = d_1 = ⋯ = d_{j−1} = 0. In fact, the sequence {d_i}_{i=0,1,...} is a strictly increasing sequence after the j-th term.

Remark 2.5: From Theorem 2.3 it is clear that we pick the polynomially independent vectors in the nullspace of R in increasing order of degrees, without missing any polynomial vector of smaller degree. Thus the basis obtained using the theorem is a minimal basis for the nullspace of the polynomial matrix R. Further, the degree of the minimal polynomial basis is also obtained from the above theorem.

III. AN ALGORITHM TO COMPUTE MINIMAL NULLSPACE BASIS

In this section we discuss an algorithm to compute the minimal nullspace basis of a given polynomial matrix R ∈ R^(g×w)[s] with rank r. We extract a minimal polynomial basis of N from the kernels K_i. In order to achieve greater numerical efficiency, we use the matrices obtained from the SVD of A_0 for the computations instead of using the matrices A_i themselves. We now describe the construction of a sequence of matrices J_i which are useful in the discussion that follows.

Construction 3.1: For a given polynomial matrix R ∈ R^(g×w)[s] with degree n, form A_0 ∈ R^((n+1)g×w) as in equation (2). Let A_0 = U_0 Σ_0 V_0^T be the condensed SVD of A_0. Let J_0 = U_0 and r_0 = rank(U_0). Further, construct

J_1 = [U_0 0; 0 U_0] ∈ R^((n+2)g×2r_0)

where the 0 blocks are zero matrices of size g × r_0. Let J_1 = U_1 Σ_1 V_1^T be the condensed SVD of J_1, and let r_1 = rank(J_1).
Now J_2 is constructed as

J_2 = [U_0 0; 0 U_1] ∈ R^((n+3)g×(r_0+r_1))

where the 0 blocks below U_0 are zero matrices of size g × r_0 and the 0 block above U_1 is the zero matrix of size g × r_1. Let J_2 = U_2 Σ_2 V_2^T be the condensed SVD and r_2 = rank(J_2). In general, let

J_i = [U_0 0; 0 U_{i−1}] ∈ R^((n+i+1)g×(r_0+r_{i−1}))   (4)

where the 0 blocks below U_0 are zero matrices of size g × r_0 and the 0 block above U_{i−1} is the zero matrix of size g × r_{i−1}. Let r_i = rank(J_i).

We now describe the relation between the matrices J_i and the matrices A_i. As in Construction 3.1, let A_0 = U_0 Σ_0 V_0^T be the condensed SVD of A_0. Then we can write A_1 as

A_1 = J_1 diag(Σ_0, Σ_0) diag(V_0^T, V_0^T).   (5)

Clearly the above relation is not the SVD of the matrix A_1, as J_1 is not an isometry. Further, since the condensed SVD of A_0 is considered, the second and third matrices in the product on the RHS of equation (5) are always of full rank. Thus any rank drop in A_1 is reflected in a rank drop in J_1; stated otherwise, A_1 loses rank if and only if J_1 loses rank. Thus the information about the rank drop in A_1 can be obtained just by studying the rank of J_1. Further, we do a similar analysis for A_2:

A_2 = [U_0 0; 0 J_1] diag(Σ_0, Σ_0, Σ_0) diag(V_0^T, V_0^T, V_0^T).   (6)

The above equation clearly is not an SVD of A_2. The first matrix in the product on the RHS of equation (6) can be written as

[U_0 0; 0 J_1] = J_2 diag(I, Σ_1) diag(I, V_1^T).   (7)

From equations (6) and (7) it is clear that the rank drop in A_2 is reflected in the rank drop in J_2, as all the other matrices in the products are always of full rank. We now illustrate the relationship between ker J_2 and K_2. Let v ∈ R^(r_0+r_1) be such that v ∈ ker J_2. Then we first compute the corresponding nullspace element, say y ∈ R^(3r_0), of the matrix on the LHS of equation (7). For y to be in the nullspace of that matrix, y has to satisfy the following equation:

diag(I, Σ_1) diag(I, V_1^T) y = v.   (8)

Since Σ_1 is a diagonal matrix and V_1 is an isometry, we can compute the vector y very easily from the above equation as

y = diag(I, V_1) diag(I, Σ_1^{−1}) v.   (9)

Since Σ_1 is a diagonal matrix, the number of flops required in the above equation is just equal to the flops required to multiply a matrix with a vector. Further, both the identity matrices in equation (8) are of size r_0 × r_0; hence the first r_0 entries of v pass through unchanged and are not required in the computation. Thus the matrix-vector multiplication in equation (8) is that of a matrix of size r_1 × r_1 (the size of Σ_1) and a vector of size r_1 (the size of the truncated v required in the computation of y). Thus the system of equations in (8) can be solved in a numerically efficient way. From y we now compute x ∈ R^(3w), the corresponding element in K_2, in a similar way. For general i, the factorization in equation (6) takes the form

A_i = [U_0 0 ⋯ 0; 0 U_0 ⋯ 0; ⋯; 0 0 ⋯ U_0] diag(Σ_0, ..., Σ_0) diag(V_0^T, ..., V_0^T)   (10)

with i+1 copies in each factor, the copies of U_0 being arranged in the same shifted block pattern as in equation (2). The nullspace vector x ∈ K_2 is obtained as

x = diag(V_0, V_0, V_0) diag(Σ_0^{−1}, Σ_0^{−1}, Σ_0^{−1}) y.   (11)

Note that Σ_0 is a diagonal matrix, so we obtain x from y in a numerically efficient way. Thus, in the case i = 2, we obtain the nullspace vector of A_2 from that of J_2 in two steps. We now generalize the procedure to obtain a nullspace element of A_i from that of J_i. In order to do so, we start with the following recurrence relation:

J_{i+1} diag(I, Σ_i) diag(I, V_i^T) = [U_0 0; 0 J_i].   (12)

Now for some i ∈ N, we write A_i as in equation (10). Further, using the recurrence relation (12), we have

[U_0 0 ⋯ 0; 0 U_0 ⋯ 0; ⋯; 0 0 ⋯ U_0] = J_i P_{i−1} P_{i−2} ⋯ P_1,   (13)

where the matrix P_j, for j = 1, ..., i−1, is defined as

P_j = diag(I, ..., I, Σ_j) diag(I, ..., I, V_j^T),   (14)

with i−j identity blocks I of size r_0 × r_0 in each factor. From equations (10) and (13) it is clear that the rank loss in A_i is reflected in the rank loss of J_i, since the Σ_j are diagonal matrices with nonzero entries on the diagonal and the V_j are isometries. We now illustrate the method to compute, for a vector in ker J_i, the corresponding vector in K_i. Let v_i ∈ R^(r_0+r_{i−1}) be such that J_i v_i = 0. Then we first compute y ∈ R^((i+1)r_0) such that y is in the nullspace of the matrix on the LHS of equation (13). In order to do so, we go through a series of steps as follows. Given v_i ∈ R^(r_0+r_{i−1}), we first compute v_{i−1} ∈ R^(2r_0+r_{i−2}) such that v_{i−1} is in the nullspace of the matrix J_i P_{i−1}. Since Σ_{i−1} is a diagonal matrix and V_{i−1} is an isometry, v_{i−1} can be obtained from v_i as a matrix-vector multiplication. Note that since the identity matrices in P_{i−1} are of size r_0 × r_0, the first r_0 entries of v_i are not required in the computation of v_{i−1}. Further, we compute v_{i−2} such that it is in the nullspace of the matrix J_i P_{i−1} P_{i−2}; we compute v_{i−2} from v_{i−1} in a similar way. Note here that the identity matrices in P_{i−2} are of size r_0 × r_0; thus the first 2r_0 entries of v_{i−1} are not required in the computation of v_{i−2}.
Continuing in this way, we compute y = v_1, which is in the nullspace of J_i P_{i−1} P_{i−2} ⋯ P_1. Now we compute x ∈ R^((i+1)w) from y such that A_i x = 0. In order to do so, we use the relation in equation (10). Thus we have

x = diag(V_0, ..., V_0) diag(Σ_0^{−1}, ..., Σ_0^{−1}) y.   (15)

Since Σ_0 is a diagonal matrix, we obtain x from y in a numerically efficient way. We obtain y from v_i in i−1 steps, and in the final step we obtain x from y. Thus we require i steps in order to compute the nullspace element in K_i given a nullspace element of J_i.

We now propose an algorithm to compute a minimal polynomial basis for the nullspace N of a given polynomial matrix R ∈ R^(g×w)[s]. In order to compute this basis, we use the sequence of matrices J_i obtained from R. Let δ_i = dim(ker J_i). In Theorem 3.2 we show that δ_i is the number of polynomially independent vectors up to degree i in N. In Theorem 2.3 we proved that the sequence {d_i}_{i=0,1,...} saturates at some stage n_0 ∈ N and that the degree of the minimal polynomial basis is n_0. Thus at stage n_0, δ_{n_0} is the number of all the polynomially independent vectors in N up to degree n_0.

Theorem 3.2: Let R ∈ R^(g×w)[s] be a polynomial matrix with degree n. Let N denote the nullspace of the polynomial matrix R. Further, construct the sequence of matrices {J_i}_{i=0,1,...} as in Construction 3.1. Let δ_i = nullity(J_i). Then the following statements hold:
(a) δ_{i+1} ≥ δ_i for all i.
(b) δ_i is the number of polynomially independent vectors of degree up to i in the minimal polynomial basis of N.

Remark 3.3: Statement (b) of the above theorem gives the terminating criterion for the algorithm to compute the minimal nullspace basis of N as follows. Let r be the normal rank of R ∈ R^(g×w)[s], with g ≤ w without loss of generality. Then the number of polynomially independent vectors in N is given by w − r. Thus for the smallest i such that δ_i = w − r, we get all the polynomially independent vectors in N, and we know that the highest degree of the minimal nullspace basis vectors is i.

Algorithm 3.4: Computation of a minimal nullspace basis
Input: polynomial matrix R ∈ R^(g×w)[s] with rank r
Output: polynomial matrix M ∈ R^(w×(w−r))[s] whose columns form a minimal polynomial basis of N

Construct A_0 as in equation (2).
Compute the condensed SVD of A_0 as A_0 = U_0 Σ_0 V_0^T.
r_0 ← rank(A_0), δ_0 ← nullity(A_0), J_0 ← U_0, i ← 0
while δ_i < dim(N) = w − r, repeat
    i ← i + 1
    J_i ← [U_0 0; 0 U_{i−1}]
    Compute the condensed SVD of J_i as J_i = U_i Σ_i V_i^T.
    r_i ← rank(J_i), δ_i ← nullity(J_i)
end while
m ← i
N ← [n_1 n_2 ⋯ n_{δ_m}] ∈ R^((r_0+r_{m−1})×δ_m) such that J_m N = 0
M_f ← N(1 : r_0, :)
for j = m−1 down to 1, repeat
    N ← V_j Σ_j^{−1} N(r_0+1 : r_0+r_j, :)
    M_f ← [M_f; N(1 : r_0, :)]
end for
M_f ← [M_f; N(r_0+1 : 2r_0, :)]
M̂ ← [ ]
for j = 0 to m, repeat
    M̂ ← [M̂; V_0 Σ_0^{−1} M_f(jr_0+1 : (j+1)r_0, :)]
end for
M ← Σ_{j=0}^{m} M̂(jw+1 : (j+1)w, :) s^j

Remark 3.5: The main advantage of the proposed algorithm is that all the polynomially independent vectors in N are computed at one stage, as proved in Theorem 3.2. Here all the polynomially independent vectors are obtained at the last iteration of the while loop of Algorithm 3.4.

N = [ ]   (16)
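To make the overall flow concrete, the following NumPy sketch implements the idea of the paper in simplified form: it extracts kernels of the matrices A_i directly and deflates shifted copies of already-found generators, instead of working with the smaller matrices J_i and the back-substitution of Algorithm 3.4. All function names are ours, and this is an illustration of the approach, not the paper's exact algorithm:

```python
import numpy as np

def build_A(coeffs, i):
    # block-shifted matrix A_i of equation (2)
    g, w = coeffs[0].shape
    n = len(coeffs) - 1
    A = np.zeros(((n + i + 1) * g, (i + 1) * w))
    for j in range(i + 1):
        for k, Rk in enumerate(coeffs):
            A[(j + k) * g:(j + k + 1) * g, j * w:(j + 1) * w] = Rk
    return A

def kernel(A, tol=1e-10):
    # orthonormal columns spanning ker(A), via the SVD
    U, s, Vt = np.linalg.svd(A)
    return Vt[int((s > tol).sum()):, :].T

def minimal_basis(coeffs, num_gen, tol=1e-8, max_deg=30):
    """Return num_gen = w - r coefficient arrays (w x (deg+1) each),
    picked in increasing order of degree as in Theorem 2.3."""
    g, w = coeffs[0].shape
    basis, i = [], 0
    while len(basis) < num_gen and i <= max_deg:
        K = kernel(build_A(coeffs, i), tol)
        # Shifts s^t * b of known generators also lie in ker A_i
        # (Lemma 2.1); deflate them to expose only new generators.
        S = []
        for B in basis:
            d = B.shape[1] - 1
            for t in range(i - d + 1):
                v = np.zeros((i + 1) * w)
                for k in range(d + 1):
                    v[(t + k) * w:(t + k + 1) * w] = B[:, k]
                S.append(v)
        if S:
            S = np.array(S).T
            K = K - S @ np.linalg.lstsq(S, K, rcond=None)[0]
        if K.size:
            U2, s2, _ = np.linalg.svd(K, full_matrices=False)
            for c in range(int((s2 > tol).sum())):
                # the w-blocks of a kernel vector are the coefficients
                basis.append(U2[:, c].reshape(i + 1, w).T)
        i += 1
    return basis

# Example 4.1 with g = 2, q = 1: R = [[1, -s, 0], [0, 1, -s]],
# whose nullspace is spanned by the single generator [s^2, s, 1]^T.
coeffs = [np.array([[1., 0., 0.], [0., 1., 0.]]),
          np.array([[0., -1., 0.], [0., 0., -1.]])]
basis = minimal_basis(coeffs, 1)
```

On this example the sketch returns one degree-2 generator proportional to [s^2, s, 1]^T, found at stage i = 2, the first stage where ker A_i is nontrivial.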
IV. NUMERICAL EXAMPLES

In this section we consider numerical examples to illustrate the proposed algorithm for computing the minimal nullspace basis.

Example 4.1: [11, Example 3]. Let R ∈ R^(g×(g+1))[s], for some g, q ∈ N, be the bidiagonal polynomial matrix with 1 on the diagonal and −s^q on the superdiagonal, that is, with rows of the form [0 ⋯ 0 1 −s^q 0 ⋯ 0]. The nullspace is spanned by [s^{qg} s^{q(g−1)} ⋯ s^q 1]^T. Here m = qg, and for q = 2 and g = 3,

M = [.771s^6  .771s^4  .771s^2  .771]^T.

Example 4.2: In this example we illustrate the different steps of the algorithm with a randomly generated polynomial matrix R of size 2 × 4 with degree 2 and rank 2. Here the degree m of the minimal nullspace basis is 2. In the notation of the algorithm, N at i = 2 is given as in equation (16). Further, the matrix M_f is formed by appending the matrices M_1, M_2 and M_3 as M_f = [M_1; M_2; M_3]. Further, the matrix M, whose columns form the minimal nullspace basis, is obtained as

M = [ ]   (17)

V. CONCLUDING REMARKS

In this paper we discussed a numerical algorithm to compute the minimal nullspace basis of a given polynomial matrix. The algorithm involved constructing structured real matrices and working with these matrices to compute the
minimal nullspace basis. An advantage of our algorithm is that all the vectors in the nullspace basis are computed at one time, and the effort of tracking the new polynomially independent vectors added to the nullspace at each stage is avoided.

REFERENCES
[1] D. S. Watkins, Fundamentals of Matrix Computations. John Wiley and Sons Publications.
[2] G. D. Forney, "Minimal bases of rational vector spaces, with applications to multivariable linear systems," SIAM Journal on Control, vol. 13, 1975.
[3] F. De Terán, F. M. Dopico, and D. S. Mackey, "Linearizations of singular matrix polynomials and the recovery of minimal indices," Electronic Journal of Linear Algebra, vol. 18, 2009.
[4] J. Polderman and J. Willems, Introduction to Mathematical Systems Theory: A Behavioral Approach. New York: Springer, 1998.
[5] V. Kucera, Discrete Linear Control: The Polynomial Equation Approach. Wiley, Chichester, UK, 1979.
[6] A. Varga, "On computing nullspace bases - a fault detection perspective," in Proceedings of the IFAC World Congress, Seoul, 2008.
[7] T. Beelen and P. Van Dooren, "A pencil approach for embedding a polynomial matrix into a unimodular matrix," SIAM Journal on Matrix Analysis and Applications, vol. 9, no. 1, 1988.
[8] T. Beelen and G. W. Veltkamp, "Numerical computation of a coprime factorization of a transfer function matrix," Systems & Control Letters, vol. 9, no. 4.
[9] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, "Vector spaces of linearizations for matrix polynomials," SIAM Journal on Matrix Analysis and Applications, vol. 28, 2006.
[10] J. Z. Anaya and D. Henrion, "An improved Toeplitz algorithm for polynomial matrix nullspace computation," Applied Mathematics and Computation, no. 1, 2009.
[11] E. N. Antoniou, A. I. G. Vardulakis, and S. Vologiannidis, "Numerical computation of minimal polynomial bases: A generalized resultant approach," Linear Algebra and its Applications, vol. 405, 2005.
[12] A. Storjohann and G. Villard, "Computing the rank and a small nullspace basis of a polynomial matrix," in Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC), 2005.
[13] C. P. Jeannerod and G. Villard, "Asymptotically fast polynomial matrix algorithms for multivariable systems," International Journal of Control, vol. 79, no. 11, 2006.