# ALGORITHMS FOR FINDING THE MINIMAL POLYNOMIALS AND INVERSES OF RESULTANT MATRICES


J. Appl. Math. & Computing Vol. 16 (2004), No. 1-2, pp. 251-263

SHU-PING GAO AND SAN-YANG LIU

Abstract. In this paper, algorithms for computing the minimal polynomial and the common minimal polynomial of resultant matrices over any field are presented by means of Gröbner bases of ideals in polynomial rings, and two algorithms for finding the inverses of such matrices are also given. Finally, an algorithm for the inverse of a partitioned matrix with resultant blocks over any field is presented. All of them can be realized in CoCoA 4.0, an algebraic system over the field of rational numbers or a field of residue classes modulo a prime number. Examples are given to show the effectiveness of the algorithms.

AMS Mathematics Subject Classification: 15A21, 65F15.

Key words and phrases: Resultant matrix, minimal polynomial, inverse, partitioned matrix, Gröbner basis.

## 0. Introduction

In recent years, companion matrices, a fruitful subject of research for a long time [1, 4, 9-11, 13-14, 19, 21], have been extended in many directions [5-8, 12, 15-18, 20]. In particular, it will be seen that the resultant matrices are precisely those matrices that commute with a companion matrix C_f. Advances in stability theory depend on the study of inertia theory and of the properties of resultant matrices, so it is important to discuss the applications and properties of resultant matrices. In particular, solving a resultant linear system requires finding the inverses of resultant matrices.

Received June 13; revised October 29. Corresponding author. Foundation item: Shaanxi Natural Science Foundation of China (2002A12). (c) 2004 Korean Society for Computational & Applied Mathematics and Korean SIGCAM.

The minimal polynomial of a matrix has a wide range of applications in the decomposition of a vector space and the diagonalization of a matrix, but it is not easy to find the minimal polynomial of a given matrix. In this paper, algorithms for computing the minimal polynomial, the common minimal polynomial and the inverse of resultant matrices over any field are presented.

We now fix some terminology and notation. Let F be a field and F[x_1, ..., x_k] the polynomial ring in k variables over F. By the Hilbert Basis Theorem, every ideal I in F[x_1, ..., x_k] is finitely generated. Fixing a term order on F[x_1, ..., x_k], a set of non-zero polynomials G = {g_1, ..., g_t} in an ideal I is called a Gröbner basis for I if and only if for every non-zero f ∈ I there exists i ∈ {1, ..., t} such that lp(g_i) divides lp(f), where lp(g_i) and lp(f) are the leading power products of g_i and f, respectively. A Gröbner basis G = {g_1, ..., g_t} is called a reduced Gröbner basis if and only if, for all i, lc(g_i) = 1 and g_i is reduced with respect to G \ {g_i}, that is, no non-zero term of g_i is divisible by lp(g_j) for any j ≠ i; here lc(g_i) denotes the leading coefficient of g_i. Throughout this paper, we set A^0 = I for any square matrix A, and ⟨f_1, ..., f_m⟩ denotes the ideal of F[x_1, ..., x_k] generated by the polynomials f_1, ..., f_m.

## 1. Definition and lemma

Definition 1. Let F be a field and let f(x) ∈ F[x] be a monic polynomial:

f(x) = x^n + a_{n-1}x^{n-1} + ... + a_1 x + a_0.    (1)

The following matrix C_f ∈ F^{n×n} is called the companion matrix of the monic polynomial f(x):

    C_f = [  0      1      0     ...   0        0        ]
          [  0      0      1     ...   0        0        ]
          [  ...                                         ]
          [  0      0      0     ...   0        1        ]
          [ -a_0   -a_1   -a_2   ...  -a_{n-2} -a_{n-1}  ].    (2)

It is well known that the polynomial f(x) is both the minimal polynomial and the characteristic polynomial of the matrix C_f.

Definition 2. An n×n matrix A over F is called a resultant matrix if there exists a polynomial g(x) ∈ F[x] such that

A = g(C_f).    (3)
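The paper carries out its computations in CoCoA 4.0. As a rough cross-check (our choice of tool, not the authors'), the companion-matrix construction of Definition 1 and the fact that f(x) is its characteristic and minimal polynomial can be sketched in Python with SymPy:

```python
import sympy as sp

x = sp.symbols('x')

def companion(coeffs):
    """Companion matrix C_f of the monic polynomial
    f(x) = x^n + a_{n-1} x^{n-1} + ... + a_0,
    with coeffs = [a_0, a_1, ..., a_{n-1}]."""
    n = len(coeffs)
    C = sp.zeros(n, n)
    for i in range(n - 1):
        C[i, i + 1] = 1           # superdiagonal of ones
    for j in range(n):
        C[n - 1, j] = -coeffs[j]  # last row: -a_0, -a_1, ..., -a_{n-1}
    return C

# f(x) = x^3 - 6x^2 + 11x - 6, the cubic appearing in Examples 2 and 3 below
f = x**3 - 6*x**2 + 11*x - 6
C = companion([-6, 11, -6])
# the characteristic polynomial of C_f is f(x) itself ...
assert sp.expand(C.charpoly(x).as_expr() - f) == 0
# ... and f(C_f) = 0, so f(x) is also the minimal polynomial of C_f
assert C**3 - 6*C**2 + 11*C - 6*sp.eye(3) == sp.zeros(3, 3)
```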

The polynomial g(x) is called the representer of the resultant matrix A over F.

In view of the structure of the powers of the companion matrix C_f over F and Definition 1, it is clear that A is a resultant matrix over F if and only if A commutes with C_f, that is,

AC_f = C_f A.    (4)

The algebraic properties of resultant matrices can be derived easily from the representations (3) and (4). The product of two resultant matrices is again a resultant matrix. Furthermore, resultant matrices commute under multiplication, and the inverse of a resultant matrix is a resultant matrix, too.

Definition 3. Let I be a non-zero ideal of the polynomial ring F[y_1, ..., y_t]. Then I is called an annihilation ideal of the square matrices A_1, ..., A_t, denoted by I(A_1, ..., A_t), if h(A_1, ..., A_t) = 0 for all h(y_1, ..., y_t) ∈ I.

Definition 4. Suppose that A_1, ..., A_t are not all zero matrices. The unique monic polynomial p(x) of minimum degree that simultaneously annihilates A_1, ..., A_t is called the common minimal polynomial of A_1, ..., A_t.

The following lemmas are well known.

Lemma 1. Let I be an ideal of F[x_1, ..., x_k]. Given f_1, ..., f_m ∈ F[x_1, ..., x_k], consider the F-algebra homomorphism

ϕ : F[y_1, ..., y_m] → F[x_1, ..., x_k]/I,   y_i ↦ f_i + I   (i = 1, ..., m).

Let K = ⟨I, y_1 - f_1, ..., y_m - f_m⟩ be the ideal of F[x_1, ..., x_k, y_1, ..., y_m] generated by I and y_1 - f_1, ..., y_m - f_m. Then ker ϕ = K ∩ F[y_1, ..., y_m].

Lemma 2. Let L_1, L_2, ..., L_m be ideals of F[x_1, x_2, ..., x_k] and let

J = ⟨1 - Σ_{i=1}^m w_i, w_1 L_1, w_2 L_2, ..., w_m L_m⟩

be the ideal of F[x_1, x_2, ..., x_k, w_1, ..., w_m] generated by 1 - Σ_{i=1}^m w_i and w_1 L_1, w_2 L_2, ..., w_m L_m.

Then

∩_{i=1}^m L_i = J ∩ F[x_1, x_2, ..., x_k].

The following lemma is also well known.

Lemma 3. Let A be a non-zero matrix over F. If the minimal polynomial of A is

p(x) = a_0 x^n + a_1 x^{n-1} + a_2 x^{n-2} + ... + a_n

with a_n ≠ 0, then

A^{-1} = -(1/a_n)(a_0 A^{n-1} + a_1 A^{n-2} + ... + a_{n-1} I).

## 2. Main results and proof

Let F[C_f] = {A : A = g(C_f), g(x) ∈ F[x]}, where C_f is given by (2). It is routine to prove that F[C_f] is a commutative ring under matrix addition and multiplication.

Theorem 1. F[x]/⟨f(x)⟩ ≅ F[C_f].

Proof. Consider the F-algebra homomorphism

ϕ : F[x] → F[C_f],   g(x) ↦ A = g(C_f)

for g(x) ∈ F[x]. It is clear that ϕ is an F-algebra epimorphism, so F[x]/ker ϕ ≅ F[C_f]. Since F[x] is a principal ideal domain, there is a monic polynomial p(x) ∈ F[x] such that ker ϕ = ⟨p(x)⟩. Since f(x) is the minimal polynomial of C_f, we have p(x) = f(x).

By Theorem 1 and Lemma 1, we can prove the following theorem.

Theorem 2. The minimal polynomial of a resultant matrix A ∈ F[C_f] is the monic polynomial that generates the ideal ⟨f(x), y - g(x)⟩ ∩ F[y], where the polynomial g(x) is the representer of A.

Proof. Consider the F-algebra homomorphism

φ : F[y] → F[x]/⟨f(x)⟩ ≅ F[C_f],   y ↦ g(x) + ⟨f(x)⟩ ↦ A = g(C_f).

It is clear that q(y) ∈ ker φ if and only if q(A) = 0. By Lemma 1, we have ker φ = ⟨f(x), y - g(x)⟩ ∩ F[y].

By Theorem 2 and Lemma 3, the minimal polynomial and the inverse of a resultant matrix A ∈ F[C_f] can be calculated from a Gröbner basis of the kernel of an F-algebra homomorphism. Therefore, we have the following algorithm to calculate the minimal polynomial and the inverse of a resultant matrix A = g(C_f):

Step 1. Calculate the reduced Gröbner basis G of the ideal ⟨f(x), y - g(x)⟩ by CoCoA 4.0, using an elimination order with x > y.

Step 2. Find the polynomial p(y) in G in which the variable x does not appear. This polynomial p(y) is the minimal polynomial of A.

Step 3. Write the minimal polynomial as p(y) = a_0 y^n + a_1 y^{n-1} + a_2 y^{n-2} + ... + a_n. If a_n = 0, stop. Otherwise, calculate A^{-1} = -(1/a_n)(a_0 A^{n-1} + a_1 A^{n-2} + ... + a_{n-1} I).

Example 1. Let A = g(C_f) be a resultant matrix, where g(x) = x^3 + 3x^2 + 4x + 2 and C_f is the companion matrix of f(x) = x^4 - 10x^3 + 35x^2 - 50x + 24. We now calculate the minimal polynomial and the inverse of A with coefficients in the rational field Q. By CoCoA 4.0, we obtain the following reduced Gröbner basis of the ideal ⟨x^4 - 10x^3 + 35x^2 - 50x + 24, y - g(x)⟩:

    G = {x - (349/136648000)y^3 + (23373/34162000)y^2 - (505919/6832400)y - 111161/341620,
         y^4 - 238y^3 + 17060y^2 - 413000y + 2652000}.

So the minimal polynomial of A over the rational field Q is

y^4 - 238y^3 + 17060y^2 - 413000y + 2652000
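Example 1 can be reproduced outside CoCoA. The following sketch (our assumption: SymPy's `groebner` as a stand-in for the CoCoA computation) carries out Steps 1-3 and checks the resulting inverse against the companion matrix:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**4 - 10*x**3 + 35*x**2 - 50*x + 24
g = x**3 + 3*x**2 + 4*x + 2

# Steps 1-2: reduced Groebner basis of <f, y - g> in lex order with x > y;
# the element free of x generates <f, y - g> ∩ Q[y], i.e. it is the
# minimal polynomial of A = g(C_f) by Theorem 2
G = sp.groebner([f, y - g], x, y, order='lex')
p = [q for q in G.exprs if x not in q.free_symbols][0]
assert sp.expand(p - (y**4 - 238*y**3 + 17060*y**2 - 413000*y + 2652000)) == 0

# Step 3 (Lemma 3): p(y) = y^4 + a_1 y^3 + a_2 y^2 + a_3 y + a_4, a_4 != 0
Cf = sp.Matrix([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [-24, 50, -35, 10]])        # companion matrix of f(x)
A = Cf**3 + 3*Cf**2 + 4*Cf + 2*sp.eye(4)    # A = g(C_f)
a = sp.Poly(p, y).all_coeffs()              # [1, -238, 17060, -413000, 2652000]
Ainv = -(A**3 + a[1]*A**2 + a[2]*A + a[3]*sp.eye(4)) / a[4]
assert A * Ainv == sp.eye(4)
```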

and

A^{-1} = (-1/2652000)A^3 + (238/2652000)A^2 - (1706/265200)A + (413/2652)I.

Theorem 3. The annihilation ideal of resultant matrices A_1, ..., A_t ∈ F[C_f] is ⟨f(x), y_1 - g_1(x), ..., y_t - g_t(x)⟩ ∩ F[y_1, ..., y_t], where the polynomial g_i(x) is the representer of A_i, i = 1, 2, ..., t.

Proof. Consider the F-algebra homomorphism

ϕ : F[y_1, ..., y_t] → F[x]/⟨f(x)⟩ ≅ F[C_f],
y_1 ↦ g_1(x) + ⟨f(x)⟩ ↦ A_1 = g_1(C_f),
...
y_t ↦ g_t(x) + ⟨f(x)⟩ ↦ A_t = g_t(C_f).

It is clear that ϕ(h(y_1, ..., y_t)) = 0 if and only if h(A_1, ..., A_t) = 0. Hence, by Lemma 1,

I(A_1, ..., A_t) = ker ϕ = ⟨f(x), y_1 - g_1(x), ..., y_t - g_t(x)⟩ ∩ F[y_1, ..., y_t].

According to Theorem 3, we have the following algorithm for the annihilation ideal of resultant matrices A_1, ..., A_t ∈ F[C_f]:

Step 1. Calculate the reduced Gröbner basis G of the ideal ⟨f(x), y_1 - g_1(x), ..., y_t - g_t(x)⟩ by CoCoA 4.0, using an elimination order with x > y_1 > ... > y_t.

Step 2. Find the polynomials in G in which the variable x does not appear. The ideal generated by these polynomials is the annihilation ideal of A_1, ..., A_t.

Example 2. Let A_1 = g_1(C_f), A_2 = g_2(C_f) and A_3 = g_3(C_f) be resultant matrices, where

g_1(x) = 8x^3 + 3x^2 + 5x + 2,
g_2(x) = 2x^3 + 3x^2 + x + 3,
g_3(x) = 5x^3 + 2x^2 + x + 4

and C_f is the companion matrix of f(x) = x^3 - 6x^2 + 11x - 6. We now calculate the annihilation ideal of A_1, A_2 and A_3 with coefficients in the field Z_11. By CoCoA 4.0, we obtain the following reduced Gröbner basis of the ideal ⟨x^3 - 6x^2 + 11x - 6, y - g_1(x), z - g_2(x), u - g_3(x)⟩:

G = {z - 3u^2 + u + 4, u^3 + 5u^2 - u - 5, x - 5u^2 - 5u - 2, y - 5u^2 + 2u - 4}.

So the annihilation ideal of A_1, A_2 and A_3 over the field Z_11 is

⟨z - 3u^2 + u + 4, u^3 + 5u^2 - u - 5, y - 5u^2 + 2u - 4⟩.

Lemma 4. Let h(x) be the least common multiple of p_1(x), p_2(x), ..., p_k(x). Then

∩_{i=1}^k ⟨p_i(x)⟩ = ⟨h(x)⟩.

Proof. For any p(x) ∈ ∩_{i=1}^k ⟨p_i(x)⟩, we have p_i(x) | p(x) for i = 1, 2, ..., k. Since h(x) is the least common multiple of p_1(x), p_2(x), ..., p_k(x), it follows that h(x) | p(x), so p(x) ∈ ⟨h(x)⟩. Hence ∩_{i=1}^k ⟨p_i(x)⟩ ⊆ ⟨h(x)⟩. Conversely, since p_i(x) | h(x) for i = 1, 2, ..., k, we have h(x) ∈ ⟨p_i(x)⟩ for each i, and therefore ⟨h(x)⟩ ⊆ ∩_{i=1}^k ⟨p_i(x)⟩.

By Theorem 2, Lemma 2 and Lemma 4, if the minimal polynomial of A_i is p_i(x) for i = 1, 2, ..., t, then the common minimal polynomial of A_1, ..., A_t is the least common multiple of p_1(x), p_2(x), ..., p_t(x). So we have the following algorithm for the common minimal polynomial of resultant matrices A_i = g_i(C_f), i = 1, 2, ..., t:

Step 1. Calculate the Gröbner basis G_i of the ideal ⟨f(x), y - g_i(x)⟩ by CoCoA 4.0 for each i = 1, 2, ..., t, using an elimination order with x > y.

Step 2. Find the polynomial p_i(y) in G_i in which the variable x does not appear, for each i = 1, 2, ..., t.

Step 3. Calculate the Gröbner basis G of the ideal

⟨1 - Σ_{i=1}^t w_i, w_1 p_1(y), ..., w_t p_t(y)⟩

by CoCoA 4.0, using an elimination order with w_1 > ... > w_t > y.

Step 4. Find the polynomial p(y) in G in which the variables w_1, ..., w_t do not appear. This polynomial p(y) is the common minimal polynomial of the A_i = g_i(C_f), i = 1, 2, ..., t.

Example 3. Let A_1 = g_1(C_{f_1}) and A_2 = g_2(C_{f_2}) be resultant matrices, where

g_1(x) = 5x^3 + 9x^2 + x + 1,
g_2(x) = x^4 + 2x^3 + 3x^2 + 6x + 7,

C_{f_1} is the companion matrix of f_1(x) = x^3 - 6x^2 + 11x - 6 and C_{f_2} is the companion matrix of f_2(x) = x^4 - 10x^3 + 35x^2 - 50x + 24. We calculate the common minimal polynomial of A_1 and A_2 over the field Z_11 as follows.

By CoCoA 4.0, the reduced Gröbner basis of the ideal ⟨x^3 - 6x^2 + 11x - 6, y - g_1(x)⟩ is

G_1 = {x + 4y^2 - 2y - 3, y^3 + 4y^2 - y},

so the minimal polynomial p_1(y) of A_1 is y^3 + 4y^2 - y. By CoCoA 4.0, the reduced Gröbner basis of the ideal ⟨x^4 - 10x^3 + 35x^2 - 50x + 24, y - g_2(x)⟩ is

G_2 = {xy + 3x - y^2 + 5y + 2, y^3 + 2y^2 - 3y, x^2 - 3x + y^2 - 5y},

so the minimal polynomial p_2(y) of A_2 is y^3 + 2y^2 - 3y. By CoCoA 4.0, the reduced Gröbner basis of the ideal ⟨1 - u - v, u p_1(y), v p_2(y)⟩ is

G = {u + v - 1, vy + 4y^4 - 2y^3 + y^2 - 4y, y^5 - 5y^4 + 4y^3 - 3y^2 + 3y}.

So the common minimal polynomial p(y) of A_1 and A_2 is

y^5 - 5y^4 + 4y^3 - 3y^2 + 3y.
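Steps 3-4 of the algorithm, i.e. the Lemma 2 elimination trick for the intersection ⟨p_1⟩ ∩ ⟨p_2⟩, can be checked on Example 3. The sketch below (our assumption: SymPy's `groebner` with `modulus=11` in place of CoCoA over Z_11) eliminates u and v and recovers the common minimal polynomial:

```python
import sympy as sp

u, v, y = sp.symbols('u v y')
p1 = y**3 + 4*y**2 - y      # minimal polynomial of A_1 in Example 3
p2 = y**3 + 2*y**2 - 3*y    # minimal polynomial of A_2 in Example 3

# Lemma 2 with Lemma 4: eliminating u and v from <1 - u - v, u*p1, v*p2>
# over Z_11 leaves <p1> ∩ <p2> = <lcm(p1, p2)>, the common minimal polynomial
G = sp.groebner([1 - u - v, u*p1, v*p2], u, v, y, order='lex', modulus=11)
p = [q for q in G.exprs if not ({u, v} & q.free_symbols)][0]

expected = y**5 - 5*y**4 + 4*y**3 - 3*y**2 + 3*y
# equality as polynomials over Z_11
assert sp.Poly(p - expected, y, modulus=11).is_zero
```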

In the following, we discuss the singularity and the inverse of a resultant matrix.

Theorem 4. Let A ∈ F[C_f] be an n×n resultant matrix. Then A is nonsingular if and only if gcd(g(x), f(x)) = 1, where the polynomial g(x) is the representer of A.

Proof. By Theorem 1, A is nonsingular if and only if g(x) + ⟨f(x)⟩ is an invertible element of F[x]/⟨f(x)⟩, if and only if there exists u(x) + ⟨f(x)⟩ ∈ F[x]/⟨f(x)⟩ such that g(x)u(x) + ⟨f(x)⟩ = 1 + ⟨f(x)⟩, if and only if there exist u(x), v(x) ∈ F[x] such that g(x)u(x) + f(x)v(x) = 1, if and only if gcd(g(x), f(x)) = 1.

Let A ∈ F[C_f] be an n×n resultant matrix. By Theorem 4, we have the following algorithm for finding the inverse of the matrix A:

Step 1. Calculate the reduced Gröbner basis G of the ideal ⟨g(x), f(x)⟩ by CoCoA 4.0; the single element of G is the monic greatest common divisor of g(x) and f(x). If G ≠ {1}, then A is singular; stop. Otherwise, go to Step 2.

Step 2. Calculate u(x), v(x) ∈ F[x] by the polynomial division algorithm such that g(x)u(x) + f(x)v(x) = 1. Then u(x) is the representer of A^{-1}, and we obtain A^{-1} = u(C_f).

## 3. Inverse of partitioned matrix with resultant matrix blocks

Let A, B, C and D be resultant matrices with representers g_1(x), g_2(x), g_3(x) and g_4(x), respectively, and let

    Σ = ( A  B )
        ( C  D ).

If A is nonsingular, let

    Γ_1 = ( I         0 )        Γ_2 = ( I  -A^{-1}B )
          ( -CA^{-1}  I ),              ( 0   I       ).

Then

    Γ_1 Σ Γ_2 = ( A  0             )
                ( 0  D - CA^{-1}B  ).    (5)
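Step 2 of the inversion algorithm is just the extended Euclidean algorithm for polynomials. A minimal sketch on the data of Example 1 (using SymPy's `gcdex` as a stand-in for the CoCoA/division-algorithm computation, which is our assumption):

```python
import sympy as sp

x = sp.symbols('x')
f = x**4 - 10*x**3 + 35*x**2 - 50*x + 24
g = x**3 + 3*x**2 + 4*x + 2   # representer of the matrix A from Example 1

# extended Euclidean algorithm: u*g + v*f = gcd(g, f)
u, v, d = sp.gcdex(g, f, x)
assert d == 1                      # gcd(g, f) = 1, so A = g(C_f) is nonsingular
assert sp.expand(u*g + v*f) == 1   # Bezout identity: u(x) is the representer of A^{-1}
```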

So Σ is nonsingular if and only if D - CA^{-1}B is nonsingular. Since A, B, C and D are all resultant matrices, any two of them commute. Thus

A(D - CA^{-1}B) = AD - BC.    (6)

By equation (6), Σ is nonsingular if and only if AD - BC is nonsingular. Since [g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x) is the representer of AD - BC, by Theorem 4, Σ is nonsingular if and only if gcd([g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)) = 1. In addition, if Σ is nonsingular, by equation (5) we have

    Σ^{-1} = ( I  -A^{-1}B ) ( A^{-1}  0                   ) ( I         0 )
             ( 0   I       ) ( 0       (D - CA^{-1}B)^{-1} ) ( -CA^{-1}  I )

           = ( A^{-1} + (AD - BC)^{-1}BCA^{-1}   -(AD - BC)^{-1}B )
             ( -(AD - BC)^{-1}C                   (AD - BC)^{-1}A ).

Therefore, we have

Theorem 5. Let Σ = ( A  B ; C  D ), where A, B, C and D are all resultant matrices with representers g_1(x), g_2(x), g_3(x) and g_4(x), respectively. If A is nonsingular, then Σ is nonsingular if and only if gcd([g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)) = 1. Moreover, if Σ is nonsingular, then

    Σ^{-1} = ( A^{-1} + (AD - BC)^{-1}BCA^{-1}   -(AD - BC)^{-1}B )
             ( -(AD - BC)^{-1}C                   (AD - BC)^{-1}A ).    (7)

Theorem 6. Let Σ = ( A  B ; C  D ), where A, B, C and D are all resultant matrices with representers g_1(x), g_2(x), g_3(x) and g_4(x), respectively. If D is nonsingular, then Σ is nonsingular if and only if gcd([g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)) = 1. Moreover, if Σ is nonsingular, then

    Σ^{-1} = ( (AD - BC)^{-1}D    -(AD - BC)^{-1}B                )
             ( -(AD - BC)^{-1}C    D^{-1} + (AD - BC)^{-1}BCD^{-1} ).    (8)
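Since the blocks commute, both (7) and (8) reduce to the single form Σ^{-1} = ( (AD-BC)^{-1}D  -(AD-BC)^{-1}B ; -(AD-BC)^{-1}C  (AD-BC)^{-1}A ). Before turning to the proof, this can be sanity-checked on a small example (our own illustrative representers, not from the paper; SymPy as the tool):

```python
import sympy as sp

# companion matrix of f(x) = x^3 - 6x^2 + 11x - 6, as in Example 2
Cf = sp.Matrix([[0, 1, 0],
                [0, 0, 1],
                [6, -11, 6]])

def res(coeffs):
    """Resultant matrix g(C_f) for g with coefficient list
    [c_0, c_1, ...] (lowest degree first)."""
    return sum((c * Cf**i for i, c in enumerate(coeffs)), sp.zeros(3, 3))

# four commuting resultant blocks (hypothetical representers chosen for the test)
A, B, C, D = res([1, 2, 1]), res([0, 1, 0]), res([2, 0, 1]), res([3, 1, 1])
assert A*Cf == Cf*A              # A commutes with C_f, as in (4)

S = sp.Matrix(sp.BlockMatrix([[A, B], [C, D]]))
M = (A*D - B*C).inv()
# common simplification of formulas (7) and (8) under commutativity
Sinv = sp.Matrix(sp.BlockMatrix([[M*D, -M*B], [-M*C, M*A]]))
assert S * Sinv == sp.eye(6)
```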

Proof. Since D is nonsingular,

    Σ = ( I  BD^{-1} ) ( A - BD^{-1}C  0 ) ( I         0 )
        ( 0  I       ) ( 0             D ) ( D^{-1}C   I ).    (9)

So Σ is nonsingular if and only if A - BD^{-1}C is nonsingular. Since A, B, C and D are all resultant matrices, any two of them commute. Thus

D(A - BD^{-1}C) = AD - BC.    (10)

By equation (10), Σ is nonsingular if and only if AD - BC is nonsingular. Since [g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x) is the representer of AD - BC, by Theorem 4, Σ is nonsingular if and only if gcd([g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)) = 1. In addition, if Σ is nonsingular, by equation (9) we have

    Σ^{-1} = ( I         0 ) ( (A - BD^{-1}C)^{-1}  0      ) ( I  -BD^{-1} )
             ( -D^{-1}C  I ) ( 0                    D^{-1} ) ( 0   I       )

           = ( (AD - BC)^{-1}D    -(AD - BC)^{-1}B                 )
             ( -(AD - BC)^{-1}C    D^{-1} + (AD - BC)^{-1}BCD^{-1} ).

We now have the following algorithm for determining the nonsingularity of Σ and computing its inverse.

Step 1. Calculate the Gröbner bases G_1 and G_4 of the ideals ⟨g_1(x), f(x)⟩ and ⟨g_4(x), f(x)⟩, respectively. If G_1 ≠ {1} and G_4 ≠ {1}, stop. Otherwise, go to Step 2.

Step 2. If G_1 = {1}, find u_1(x), v_1(x) ∈ F[x] by the polynomial division algorithm such that u_1(x)g_1(x) + v_1(x)f(x) = 1. Then u_1(x) is the representer of A^{-1}; go to Step 4. Otherwise, go to Step 3.

Step 3. If G_4 = {1}, find u_4(x), v_4(x) ∈ F[x] by the polynomial division algorithm such that u_4(x)g_4(x) + v_4(x)f(x) = 1. Then u_4(x) is the representer of D^{-1}; go to Step 4.

Step 4. Calculate the Gröbner basis G of the ideal ⟨[g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)⟩. If G ≠ {1}, then AD - BC is singular; stop. Otherwise, go to Step 5.

Step 5. Find u(x), v(x) ∈ F[x] by the polynomial division algorithm such that u(x)[g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x) + v(x)f(x) = 1. Then u(x) is the representer of (AD - BC)^{-1}.

Therefore, Σ^{-1} can be obtained as follows. If A is nonsingular, then

    Σ^{-1} = ( u_1(C_f)[I + u(C_f)BC]   -u(C_f)B )
             ( -u(C_f)C                  u(C_f)A ).

If D is nonsingular, then

    Σ^{-1} = ( u(C_f)D     -u(C_f)B                )
             ( -u(C_f)C     u_4(C_f)[I + u(C_f)BC] ).

## References

1. Bhubaneswar Mishra, Algorithmic Algebra, Springer-Verlag.
2. William W. Adams and Philippe Loustaunau, An Introduction to Gröbner Bases, American Mathematical Society.
3. Donald Greenspan, Methods of matrix inversion, Amer. Math. Monthly 62 (1955).
4. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, New York.
5. K. Wang, Resultants and group matrices, Linear Algebra and Its Appl. 33 (1980).
6. Bruce H. Edwards, Rotations and discriminants of quadratic spaces, Linear Algebra and Its Appl. 8 (1980).
7. R. Bhatia, L. Elsner and G. Krause, Bounds for the variation of the roots of a polynomial and the eigenvalues of a matrix, Linear Algebra and Its Appl. 142 (1990).
8. Harald K. Wimmer, On the history of the Bezoutian and the resultant matrix, Linear Algebra and Its Appl. 128 (1990).
9. L. N. Vaserstein and E. Wheland, Commutators and companion matrices over rings of stable rank 1, Linear Algebra and Its Appl. 142 (1990).
10. C. Carstensen, Linear construction of companion matrices, Linear Algebra and Its Appl. 149 (1991).
11. Vlastimil Ptak, The infinite companion matrix, Linear Algebra and Its Appl. 166 (1992).
12. Karla Rost, Generalized Lyapunov equations, matrices with displacement structure and generalized Bezoutians, Linear Algebra and Its Appl. 193 (1993).
13. Karla Rost, Generalized companion matrices and matrix representations for generalized Bezoutians, Linear Algebra and Its Appl. 193 (1993).
14. Harald K. Wimmer, Pairs of companion matrices and their simultaneous reduction to complementary triangular forms, Linear Algebra and Its Appl. 182 (1993).
15. Andre Klein, On Fisher's information matrix of an ARMAX process and Sylvester's resultant matrices, Linear Algebra and Its Appl. 237/238 (1996).

16. Bernard Hanzon, A Faddeev sequence method for solving Lyapunov and Sylvester equations, Linear Algebra and Its Appl. (1996).
17. Daniel Augot and Paul Camion, On the computation of minimal polynomials, cyclic vectors, and Frobenius forms, Linear Algebra and Its Appl. 260 (1997).
18. Dario Andrea Bini and Luca Gemignani, Fast fraction-free triangularization of Bezoutians with applications to sub-resultant chain computation, Linear Algebra and Its Appl. 284 (1998).
19. Marc Van Barel, Vlastimil Ptak and Zdenek Vavrin, Extending the notions of companion and infinite companion to matrix polynomials, Linear Algebra and Its Appl. 290 (1999).
20. Guangcai Zhou and Xiang-Gen Xia, Ambiguity resistant polynomial matrices, Linear Algebra and Its Appl. 286 (1999).
21. Louis Solomon, Similarity of the companion matrix and its transpose, Linear Algebra and Its Appl. (1999).
22. David Chillag, Regular representations of semisimple algebras, separable field extensions, group characters, generalized circulants, and generalized cyclic codes, Linear Algebra and Its Appl. 218 (1995).
23. Predrag S. Stanimirović and Milan B. Tasić, Computing determinantal representation of generalized inverses, J. Appl. Math. & Computing (old: KJCAM) 9 (2002).
24. Jae Heon Yun and Sang Wook Kim, A variant of block incomplete factorization preconditioners for a symmetric H-matrix, J. Appl. Math. & Computing (old: KJCAM) 8 (2001).

Gao Shuping received her BS and MS from Shaanxi Normal University and Xidian University in 1986 and 1994, respectively. She has been at Xidian University since 1986 and has been pursuing her Ph.D. there since 2000. Her research interests focus on multi-objective programming, transportation networks and matrix theory.

Liu Sanyang received his BS, MS and Ph.D. from Shaanxi Normal University, Xidian University and Xi'an Jiaotong University in 1982, 1984 and 1989, respectively. He has been at Xidian University since 1987. His research interests focus on multi-objective programming, combinatorial optimization, convex analysis and matrix theory.

Department of Applied Mathematics, Xidian University, Xi'an, P. R. China