ALGORITHMS FOR FINDING THE MINIMAL POLYNOMIALS AND INVERSES OF RESULTANT MATRICES


J. Appl. Math. & Computing Vol. 16 (2004), No. 1-2, pp. 251-263

ALGORITHMS FOR FINDING THE MINIMAL POLYNOMIALS AND INVERSES OF RESULTANT MATRICES

SHU-PING GAO AND SAN-YANG LIU

Abstract. In this paper, algorithms for computing the minimal polynomial and the common minimal polynomial of resultant matrices over any field are presented by means of Gröbner bases of ideals in polynomial rings, and two algorithms for finding the inverses of such matrices are also given. Finally, an algorithm for the inverse of a partitioned matrix with resultant blocks over any field is presented; it can be carried out in CoCoA 4.0, a computer algebra system working over the field of rational numbers or over a field of residue classes modulo a prime. Examples illustrate the effectiveness of the algorithms.

AMS Mathematics Subject Classification: 15A21, 65F15.
Key words and phrases: Resultant matrix, minimal polynomial, inverse, partitioned matrix, Gröbner basis.

0. Introduction

In recent years, companion matrices, a fruitful subject of research for a long time [1, 4, 9-11, 13-14, 19, 21], have been extended in many directions [5-8, 12, 15-18, 20]. In particular, the resultant matrices are precisely the matrices that commute with a companion matrix C_f [22]. Advances in stability theory depend on inertia theory and on the properties of resultant matrices, so it is important to discuss the applications and properties of resultant matrices. In particular, solving a resultant linear system requires the inverses of resultant matrices.

Received June 13, 2003. Revised October 29, 2003. Corresponding author. Foundation item: Shaanxi Natural Science Foundation of China (2002A12). © 2004 Korean Society for Computational & Applied Mathematics and Korean SIGCAM.

The minimal polynomial of a matrix has a wide range of applications, for instance in the decomposition of a vector space and the diagonalization of a matrix, but it is not easy to find the minimal polynomial of a given matrix. In this paper, algorithms for computing the minimal polynomial, the common minimal polynomial and the inverse of resultant matrices over any field are presented.

We first fix some terminology and notation. Let F be a field and F[x_1,...,x_k] the polynomial ring in k variables over F. By the Hilbert Basis Theorem, every ideal I in F[x_1,...,x_k] is finitely generated. Fixing a term order on F[x_1,...,x_k], a set of non-zero polynomials G = {g_1,...,g_t} in an ideal I is called a Gröbner basis for I if and only if for every non-zero f in I there exists i in {1,...,t} such that lp(g_i) divides lp(f), where lp(g_i) and lp(f) are the leading power products of g_i and f, respectively. A Gröbner basis G = {g_1,...,g_t} is called a reduced Gröbner basis if and only if, for all i, lc(g_i) = 1 and g_i is reduced with respect to G \ {g_i}, that is, no non-zero term of g_i is divisible by lp(g_j) for any j != i, where lc(g_i) is the leading coefficient of g_i. Throughout, we set A^0 = I for any square matrix A, and <f_1,...,f_m> denotes the ideal of F[x_1,...,x_k] generated by the polynomials f_1,...,f_m.

1. Definitions and lemmas

Definition 1. Let F be a field and f(x) in F[x] a monic polynomial:

    f(x) = x^n + a_{n-1}x^{n-1} + ... + a_1x + a_0.    (1)

The following matrix C_f in F^{n x n} is called the companion matrix of the monic polynomial f(x):

    C_f = |  0     1     0    ...   0        0       |
          |  0     0     1    ...   0        0       |
          |  ...   ...   ...  ...   ...      ...     |
          |  0     0     0    ...   0        1       |
          | -a_0  -a_1  -a_2  ...  -a_{n-2} -a_{n-1} |.    (2)

By [4], the polynomial f(x) is both the minimal polynomial and the characteristic polynomial of the matrix C_f.

Definition 2. An n x n matrix A over F is called a resultant matrix if there exists a polynomial g(x) in F[x] such that

    A = g(C_f).    (3)

The polynomial g(x) is called the representer of the resultant matrix A over F. In view of the structure of the powers of the companion matrix C_f over F and Definition 1, it is clear that A is a resultant matrix over F if and only if A commutes with C_f, that is,

    A C_f = C_f A.    (4)

The algebraic properties of resultant matrices can be derived easily from the representations (3) and (4): the product of two resultant matrices is again a resultant matrix, resultant matrices commute under multiplication, and the inverse of a nonsingular resultant matrix is a resultant matrix, too.

Definition 3. Let I be a non-zero ideal of the polynomial ring F[y_1,...,y_t]. Then I is called an annihilation ideal of the square matrices A_1,...,A_t, denoted by I(A_1,...,A_t), if h(A_1,...,A_t) = 0 for all h(y_1,...,y_t) in I.

Definition 4. Suppose that A_1,...,A_t are not all zero matrices. The unique monic polynomial p(x) of minimum degree that simultaneously annihilates A_1,...,A_t is called the common minimal polynomial of A_1,...,A_t.

The following lemmas are well known (see [2]).

Lemma 1. Let I be an ideal of F[x_1,...,x_k]. Given f_1,...,f_m in F[x_1,...,x_k], consider the F-algebra homomorphism

    phi : F[y_1,...,y_m] -> F[x_1,...,x_k]/I,   y_i -> f_i + I   (i = 1,...,m).

Let K = <I, y_1 - f_1,..., y_m - f_m> be the ideal of F[x_1,...,x_k, y_1,...,y_m] generated by I and y_1 - f_1,..., y_m - f_m. Then ker phi = K intersect F[y_1,...,y_m].

Lemma 2. Let L_1, L_2,..., L_m be ideals of F[x_1,x_2,...,x_k] and let

    J = <1 - sum_{i=1}^m w_i, w_1 L_1, w_2 L_2,..., w_m L_m>

be the ideal of F[x_1,x_2,...,x_k, w_1,...,w_m] generated by 1 - sum_{i=1}^m w_i and w_1 L_1, w_2 L_2,..., w_m L_m.

Then

    intersect_{i=1}^m L_i = J intersect F[x_1,x_2,...,x_k].

The following lemma is well known (see [3]).

Lemma 3. Let A be a non-zero matrix over F. If the minimal polynomial of A is

    p(x) = a_0x^n + a_1x^{n-1} + a_2x^{n-2} + ... + a_n

with a_n != 0, then

    A^{-1} = -(1/a_n)(a_0A^{n-1} + a_1A^{n-2} + ... + a_{n-1}I).

2. Main results and proofs

Let F[C_f] = {A : A = g(C_f), g(x) in F[x]}, where C_f is given by (2). It is routine to prove that F[C_f] is a commutative ring under matrix addition and multiplication.

Theorem 1. F[x]/<f(x)> is isomorphic to F[C_f].

Proof. Consider the F-algebra homomorphism

    phi : F[x] -> F[C_f],   g(x) -> A = g(C_f)

for g(x) in F[x]. It is clear that phi is an F-algebra epimorphism, so F[x]/ker phi is isomorphic to F[C_f]. Since F[x] is a principal ideal domain, there is a monic polynomial p(x) in F[x] such that ker phi = <p(x)>. Since f(x) is the minimal polynomial of C_f, we get p(x) = f(x).

By Theorem 1 and Lemma 1, we can prove the following theorem.

Theorem 2. The minimal polynomial of a resultant matrix A in F[C_f] is the monic polynomial that generates the ideal <f(x), y - g(x)> intersect F[y], where the polynomial g(x) is the representer of A.

Proof. Consider the F-algebra homomorphism

    phi : F[y] -> F[x]/<f(x)> = F[C_f],   y -> g(x) + <f(x)> -> A = g(C_f).

It is clear that q(y) in ker phi if and only if q(A) = 0. By Lemma 1, we have ker phi = <f(x), y - g(x)> intersect F[y].

By Theorem 2 and Lemma 3, the minimal polynomial and the inverse of a resultant matrix A in F[C_f] can be calculated from a Gröbner basis for the kernel of an F-algebra homomorphism. Therefore, we have the following algorithm to calculate the minimal polynomial and the inverse of a resultant matrix A = g(C_f):

Step 1. Calculate the reduced Gröbner basis G for the ideal <f(x), y - g(x)> by CoCoA 4.0, using an elimination order with x > y.
Step 2. Find the polynomial p(y) in G in which the variable x does not appear. This polynomial p(y) is the minimal polynomial of A.
Step 3. Write the minimal polynomial as p(y) = a_0y^n + a_1y^{n-1} + a_2y^{n-2} + ... + a_n. If a_n is zero, stop. Otherwise, calculate

    A^{-1} = -(1/a_n)(a_0A^{n-1} + a_1A^{n-2} + ... + a_{n-1}I).

Example 1. Let A = g(C_f) be a resultant matrix, where g(x) = x^3 + 3x^2 + 4x + 2 and

    C_f = |   0    1    0   0 |
          |   0    0    1   0 |
          |   0    0    0   1 |
          | -24   50  -35  10 |.

We can now calculate the minimal polynomial and the inverse of A with coefficients in the rational field Q as follows. By CoCoA 4.0, we obtain the following reduced Gröbner basis for the ideal <x^4 - 10x^3 + 35x^2 - 50x + 24, y - g(x)>:

    G = {x - (349/136648000)y^3 + (23373/34162000)y^2 - (505919/6832400)y - 111161/341620,
         y^4 - 238y^3 + 17060y^2 - 413000y + 2652000}.

So the minimal polynomial of A over the rational field Q is

    p(y) = y^4 - 238y^3 + 17060y^2 - 413000y + 2652000.
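The Gröbner basis itself requires CoCoA, but the resulting minimal polynomial is easy to sanity-check numerically. The following pure-Python sketch (ours, not part of the paper's CoCoA workflow) builds A = g(C_f) with exact rational arithmetic and verifies that p(A) = 0:

```python
from fractions import Fraction as Fr

def mmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(X, Y):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]

def smul(c, X):
    return [[Fr(c) * x for x in row] for row in X]

def eye(n):
    return [[Fr(int(i == j)) for j in range(n)] for i in range(n)]

def poly_at(coeffs, M):
    """Evaluate a polynomial (coefficients listed from the highest degree
    down to the constant term) at the square matrix M, by Horner's rule."""
    n = len(M)
    R = [[Fr(0)] * n for _ in range(n)]
    for c in coeffs:
        R = madd(mmul(R, M), smul(c, eye(n)))
    return R

# Companion matrix of f(x) = x^4 - 10x^3 + 35x^2 - 50x + 24 (Example 1).
Cf = [[Fr(v) for v in row] for row in
      [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [-24, 50, -35, 10]]]
A = poly_at([1, 3, 4, 2], Cf)          # A = g(Cf), g(x) = x^3 + 3x^2 + 4x + 2

# p(y) obtained from the Groebner basis computation above:
p = [1, -238, 17060, -413000, 2652000]
assert poly_at(p, A) == [[0] * 4 for _ in range(4)]   # p(A) = 0
```

The same helper also confirms the inverse produced by Step 3, since Lemma 3 expresses A^{-1} as a polynomial in A.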

By Lemma 3, the inverse is

    A^{-1} = -(1/2652000)A^3 + (238/2652000)A^2 - (1706/265200)A + (413/2652)I.

Theorem 3. The annihilation ideal of resultant matrices A_1,...,A_t in F[C_f] is <f(x), y_1 - g_1(x),..., y_t - g_t(x)> intersect F[y_1,...,y_t], where the polynomial g_i(x) is the representer of A_i, i = 1, 2,..., t.

Proof. Consider the F-algebra homomorphism

    phi : F[y_1,...,y_t] -> F[x]/<f(x)> = F[C_f],   y_i -> g_i(x) + <f(x)> -> A_i = g_i(C_f)   (i = 1,...,t).

It is clear that phi(h(y_1,...,y_t)) = 0 if and only if h(A_1,...,A_t) = 0. Hence, by Lemma 1,

    I(A_1,...,A_t) = ker phi = <f(x), y_1 - g_1(x),..., y_t - g_t(x)> intersect F[y_1,...,y_t].

According to Theorem 3, we give the following algorithm for the annihilation ideal of resultant matrices A_1,...,A_t in F[C_f]:

Step 1. Calculate the reduced Gröbner basis G for the ideal <f(x), y_1 - g_1(x),..., y_t - g_t(x)> by CoCoA 4.0, using an elimination order with x > y_1 > ... > y_t.
Step 2. Find the polynomials in G in which the variable x does not appear. The ideal generated by these polynomials is the annihilation ideal of A_1,...,A_t.

Example 2. Let A_1 = g_1(C_f), A_2 = g_2(C_f) and A_3 = g_3(C_f) be resultant matrices, where

    g_1(x) = 8x^3 + 3x^2 + 5x + 2,
    g_2(x) = 2x^3 + 3x^2 + x + 3,
    g_3(x) = 5x^3 + 2x^2 + x + 4

and

    C_f = | 0    1   0 |
          | 0    0   1 |
          | 6  -11   6 |.

We can now calculate the annihilation ideal of A_1, A_2 and A_3 with coefficients in the field Z_11 as follows. By CoCoA 4.0, we obtain the following reduced Gröbner basis for the ideal <x^3 - 6x^2 + 11x - 6, y - g_1(x), z - g_2(x), u - g_3(x)>:

    G = {z - 3u^2 + u + 4, u^3 + 5u^2 - u - 5, x - 5u^2 - 5u - 2, y - 5u^2 + 2u - 4}.

So the annihilation ideal of A_1, A_2 and A_3 in the field Z_11 is

    <z - 3u^2 + u + 4, u^3 + 5u^2 - u - 5, y - 5u^2 + 2u - 4>.

Lemma 4. Let h(x) be the least common multiple of p_1(x), p_2(x),..., p_k(x). Then

    intersect_{i=1}^k <p_i(x)> = <h(x)>.

Proof. For any p(x) in intersect_{i=1}^k <p_i(x)>, we have p_i(x) | p(x) for i = 1, 2,..., k. Since h(x) is the least common multiple of p_1(x), p_2(x),..., p_k(x), it follows that h(x) | p(x), so p(x) in <h(x)>. Hence intersect_{i=1}^k <p_i(x)> is contained in <h(x)>. Conversely, p_i(x) | h(x) for i = 1, 2,..., k, because h(x) is a common multiple of the p_i(x); therefore <h(x)> is contained in intersect_{i=1}^k <p_i(x)>.

By Theorem 2, Lemma 2 and Lemma 4, if the minimal polynomial of A_i is p_i(x) for i = 1, 2,..., t, then the common minimal polynomial of A_1,...,A_t is the least common multiple of p_1(x), p_2(x),..., p_t(x). So we have the following algorithm for the common minimal polynomial of the resultant matrices A_i = g_i(C_f), i = 1, 2,..., t:

Step 1. Calculate the Gröbner basis G_i for the ideal <f(x), y - g_i(x)> by CoCoA 4.0 for each i = 1, 2,..., t, using an elimination order with x > y.
Step 2. Find the polynomial p_i(y) in G_i in which the variable x does not appear, for each i = 1, 2,..., t.
Step 3. Calculate the Gröbner basis G for the ideal

    <1 - sum_{i=1}^t w_i, w_1 p_1(y),..., w_t p_t(y)>

by CoCoA 4.0, using an elimination order with w_1 > ... > w_t > y.
Step 4. Find the polynomial p(y) in G in which the variables w_1,...,w_t do not appear. This polynomial p(y) is the common minimal polynomial of the A_i = g_i(C_f), i = 1, 2,..., t.

Example 3. Let A_1 = g_1(C_{f_1}) and A_2 = g_2(C_{f_2}) be resultant matrices, where

    g_1(x) = 5x^3 + 9x^2 + x + 1,   g_2(x) = x^4 + 2x^3 + 3x^2 + 6x + 7

and

    C_{f_1} = | 0    1   0 |        C_{f_2} = |   0    1    0   0 |
              | 0    0   1 |,                 |   0    0    1   0 |
              | 6  -11   6 |                  |   0    0    0   1 |
                                              | -24   50  -35  10 |.

We calculate the common minimal polynomial of A_1 and A_2 in the field Z_11 as follows. By CoCoA 4.0, the reduced Gröbner basis for the ideal <x^3 - 6x^2 + 11x - 6, y - g_1(x)> is

    G_1 = {x + 4y^2 - 2y - 3, y^3 + 4y^2 - y},

so the minimal polynomial p_1(y) of A_1 is y^3 + 4y^2 - y. By CoCoA 4.0, the reduced Gröbner basis for the ideal <x^4 - 10x^3 + 35x^2 - 50x + 24, y - g_2(x)> is

    G_2 = {xy + 3x - y^2 + 5y + 2, y^3 + 2y^2 - 3y, x^2 - 3x + y^2 - 5y},

so the minimal polynomial p_2(y) of A_2 is y^3 + 2y^2 - 3y. By CoCoA 4.0, the reduced Gröbner basis for the ideal <1 - u - v, u p_1(y), v p_2(y)> is

    G = {u + v - 1, vy + 4y^4 - 2y^3 + y^2 - 4y, y^5 - 5y^4 + 4y^3 - 3y^2 + 3y}.

So the common minimal polynomial p(y) of A_1 and A_2 is

    y^5 - 5y^4 + 4y^3 - 3y^2 + 3y.
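Steps 3-4 compute lcm(p_1,...,p_t) through the intersection ideal of Lemma 2. Over a small prime field, the same lcm can be cross-checked with elementary polynomial arithmetic, since lcm(p_1, p_2) = p_1 p_2 / gcd(p_1, p_2). A pure-Python sketch (ours, not the paper's CoCoA computation; coefficient lists run from the constant term upward), reproducing the result of Example 3 over Z_11:

```python
P = 11  # the examples above work over Z_11

def trim(a):                      # drop leading zero coefficients
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b):                   # product in Z_P[y]
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, z in enumerate(b):
            r[i + j] = (r[i + j] + x * z) % P
    return trim(r)

def pdivmod(a, b):                # long division in Z_P[y]
    a, b = trim(a[:]), trim(b[:])
    inv = pow(b[-1], P - 2, P)    # inverse of the leading coefficient (Fermat)
    q = [0] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and a != [0]:
        d = len(a) - len(b)
        c = (a[-1] * inv) % P
        q[d] = c
        for i, bc in enumerate(b):
            a[i + d] = (a[i + d] - c * bc) % P
        a = trim(a)
    return trim(q), a

def pgcd(a, b):                   # monic gcd by the Euclidean algorithm
    while trim(b[:]) != [0]:
        a, b = b, pdivmod(a, b)[1]
    a = trim(a[:])
    inv = pow(a[-1], P - 2, P)
    return [(x * inv) % P for x in a]

def plcm(a, b):                   # lcm = a*b / gcd(a, b)
    return pdivmod(pmul(a, b), pgcd(a, b))[0]

p1 = [0, 10, 4, 1]   # p_1(y) = y^3 + 4y^2 - y   (-1 = 10 mod 11)
p2 = [0, 8, 2, 1]    # p_2(y) = y^3 + 2y^2 - 3y  (-3 =  8 mod 11)
assert plcm(p1, p2) == [0, 3, 8, 4, 6, 1]
# i.e. y^5 - 5y^4 + 4y^3 - 3y^2 + 3y mod 11, as in Example 3
```

Here gcd(p_1, p_2) = y, matching the fact that 0 is the only common eigenvalue of A_1 and A_2.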

In the following, we discuss the singularity and the inverse of a resultant matrix.

Theorem 4. Let A in F[C_f] be an n x n resultant matrix. Then A is nonsingular if and only if (g(x), f(x)) = 1, where the polynomial g(x) is the representer of A.

Proof. By Theorem 1, A is nonsingular if and only if g(x) + <f(x)> is an invertible element of F[x]/<f(x)>, if and only if there exists u(x) + <f(x)> in F[x]/<f(x)> such that g(x)u(x) + <f(x)> = 1 + <f(x)>, if and only if there exist u(x), v(x) in F[x] such that g(x)u(x) + f(x)v(x) = 1, if and only if (g(x), f(x)) = 1.

Let A in F[C_f] be an n x n resultant matrix. By Theorem 4, we have the following algorithm for finding the inverse of the matrix A:

Step 1. Calculate the reduced Gröbner basis G for the ideal <g(x), f(x)> by CoCoA 4.0; G consists of the monic greatest common divisor of g(x) and f(x). If G != {1}, then A is singular; stop. Otherwise, go to Step 2.
Step 2. Calculate u(x), v(x) in F[x] by the polynomial division algorithm such that g(x)u(x) + f(x)v(x) = 1. Then u(x) is the representer of A^{-1}, and we obtain A^{-1} = u(C_f).

3. Inverse of a partitioned matrix with resultant blocks

Let A, B, C and D be resultant matrices with representers g_1(x), g_2(x), g_3(x) and g_4(x), respectively. If A is nonsingular, let

    Sigma = | A  B |,   Gamma_1 = |  I        0 |,   Gamma_2 = | I  -A^{-1}B |.
            | C  D |              | -CA^{-1}  I |              | 0    I      |

Then

    Gamma_1 Sigma Gamma_2 = | A  0            |.    (5)
                            | 0  D - CA^{-1}B |

So Sigma is nonsingular if and only if D - CA^{-1}B is nonsingular. Since A, B, C and D are all resultant matrices, any two of them commute. Thus

    A(D - CA^{-1}B) = AD - BC.    (6)

By equation (6), Sigma is nonsingular if and only if AD - BC is nonsingular. Since [g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x) is the representer of AD - BC, by Theorem 4, Sigma is nonsingular if and only if ([g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)) = 1. In addition, if Sigma is nonsingular, by equation (5) we have

    Sigma^{-1} = | I  -A^{-1}B | | A^{-1}  0                   | |  I        0 |
                 | 0    I      | | 0      (D - CA^{-1}B)^{-1}  | | -CA^{-1}  I |

               = | A^{-1} + (AD - BC)^{-1}BCA^{-1}   -(AD - BC)^{-1}B |.
                 | -(AD - BC)^{-1}C                   (AD - BC)^{-1}A |

Therefore, we have

Theorem 5. Let Sigma = | A  B; C  D |, where A, B, C and D are all resultant matrices with representers g_1(x), g_2(x), g_3(x) and g_4(x), respectively. If A is nonsingular, then Sigma is nonsingular if and only if ([g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)) = 1. Moreover, if Sigma is nonsingular, then

    Sigma^{-1} = | A^{-1} + (AD - BC)^{-1}BCA^{-1}   -(AD - BC)^{-1}B |.    (7)
                 | -(AD - BC)^{-1}C                   (AD - BC)^{-1}A |

Theorem 6. Let Sigma = | A  B; C  D |, where A, B, C and D are all resultant matrices with representers g_1(x), g_2(x), g_3(x) and g_4(x), respectively. If D is nonsingular, then Sigma is nonsingular if and only if ([g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)) = 1. Moreover, if Sigma is nonsingular, then

    Sigma^{-1} = | (AD - BC)^{-1}D    -(AD - BC)^{-1}B                 |.    (8)
                 | -(AD - BC)^{-1}C    D^{-1} + (AD - BC)^{-1}BCD^{-1} |
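Step 2 of the inverse algorithm in Section 2 (and the Bezout computations in the algorithm that follows) ask for u(x), v(x) with g(x)u(x) + f(x)v(x) = 1; this is exactly the extended Euclidean algorithm for polynomials. A pure-Python sketch over Q (ours, not the paper's CoCoA code; coefficient lists run from the constant term upward), run on Example 1's f and g:

```python
from fractions import Fraction as Fr

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b):
    r = [Fr(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, z in enumerate(b):
            r[i + j] += x * z
    return trim(r)

def padd(a, b):
    n = max(len(a), len(b))
    a = a + [Fr(0)] * (n - len(a))
    b = b + [Fr(0)] * (n - len(b))
    return trim([x + y for x, y in zip(a, b)])

def psub(a, b):
    return padd(a, [-x for x in b])

def pdivmod(a, b):
    a = trim(a[:])
    q = [Fr(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and a != [0]:
        d = len(a) - len(b)
        c = a[-1] / b[-1]
        q[d] = c
        for i, bc in enumerate(b):
            a[i + d] -= c * bc
        a = trim(a)
    return trim(q), a

def ext_euclid(a, b):
    """Return (d, u, v) with a*u + b*v = d, where d is the monic gcd(a, b)."""
    r0, r1 = a, b
    u0, u1 = [Fr(1)], [Fr(0)]
    v0, v1 = [Fr(0)], [Fr(1)]
    while r1 != [Fr(0)]:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, psub(u0, pmul(q, u1))
        v0, v1 = v1, psub(v0, pmul(q, v1))
    lc = r0[-1]                   # normalize so the gcd is monic
    return ([c / lc for c in r0], [c / lc for c in u0], [c / lc for c in v0])

g = [Fr(c) for c in [2, 4, 3, 1]]           # g(x) = x^3 + 3x^2 + 4x + 2
f = [Fr(c) for c in [24, -50, 35, -10, 1]]  # f(x) = x^4 - 10x^3 + 35x^2 - 50x + 24
d, u, v = ext_euclid(g, f)
assert d == [1]                             # (g, f) = 1, so A = g(C_f) is nonsingular
assert padd(pmul(g, u), pmul(f, v)) == [1]  # Bezout identity g*u + f*v = 1
```

The polynomial u(x) so obtained is the representer of A^{-1}, i.e. A^{-1} = u(C_f).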

Proof. Since D is nonsingular,

    | I  -BD^{-1} | Sigma |  I        0 | = | A - BD^{-1}C  0 |.    (9)
    | 0    I      |       | -D^{-1}C  I |   | 0             D |

So Sigma is nonsingular if and only if A - BD^{-1}C is nonsingular. Since A, B, C and D are all resultant matrices, any two of them commute. Thus

    D(A - BD^{-1}C) = AD - BC.    (10)

By equation (10), Sigma is nonsingular if and only if AD - BC is nonsingular. Since [g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x) is the representer of AD - BC, by Theorem 4, Sigma is nonsingular if and only if ([g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)) = 1. In addition, if Sigma is nonsingular, by equation (9) we have

    Sigma^{-1} = |  I        0 | | (A - BD^{-1}C)^{-1}  0      | | I  -BD^{-1} |
                 | -D^{-1}C  I | | 0                    D^{-1} | | 0    I      |

               = | (AD - BC)^{-1}D    -(AD - BC)^{-1}B                 |.
                 | -(AD - BC)^{-1}C    D^{-1} + (AD - BC)^{-1}BCD^{-1} |

We now have the following algorithm for determining the nonsingularity of Sigma and computing Sigma^{-1}:

Step 1. Calculate the Gröbner bases G_1 and G_4 for the ideals <g_1(x), f(x)> and <g_4(x), f(x)>, respectively. If G_1 != {1} and G_4 != {1}, stop. Otherwise, go to Step 2.
Step 2. If G_1 = {1}, find u_1(x), v_1(x) in F[x] by the polynomial division algorithm such that u_1(x)g_1(x) + v_1(x)f(x) = 1. Then u_1(x) is the representer of A^{-1}; go to Step 4. Otherwise, go to Step 3.
Step 3. If G_4 = {1}, find u_4(x), v_4(x) in F[x] by the polynomial division algorithm such that u_4(x)g_4(x) + v_4(x)f(x) = 1. Then u_4(x) is the representer of D^{-1}; go to Step 4.
Step 4. Calculate the Gröbner basis G for the ideal <[g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x), f(x)>. If G != {1}, then AD - BC is singular; stop. Otherwise, go to Step 5.

Step 5. Find u(x), v(x) in F[x] by the polynomial division algorithm such that u(x)([g_1(x)g_4(x) - g_2(x)g_3(x)] mod f(x)) + v(x)f(x) = 1. Then u(x) is the representer of (AD - BC)^{-1}.

Therefore, Sigma^{-1} can be obtained as follows. If A is nonsingular, then

    Sigma^{-1} = | u_1(C_f)[I + u(C_f)BC]   -u(C_f)B |.
                 | -u(C_f)C                  u(C_f)A |

If D is nonsingular, then

    Sigma^{-1} = | u(C_f)D    -u(C_f)B                |.
                 | -u(C_f)C    u_4(C_f)[I + u(C_f)BC] |

References

1. Bhubaneswar Mishra, Algorithmic Algebra, Springer-Verlag, 2001.
2. William W. Adams and Philippe Loustaunau, An Introduction to Gröbner Bases, American Mathematical Society, 1994.
3. Donald Greenspan, Methods of matrix inversion, Amer. Math. Monthly 62 (1955), 303-308.
4. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, New York, 1985.
5. K. Wang, Resultants and group matrices, Linear Algebra and Its Appl. 33 (1980), 111-122.
6. Bruce H. Edwards, Rotations and discriminants of quadratic spaces, Linear Algebra and Its Appl. 8 (1980), 241-246.
7. R. Bhatia, L. Elsner and G. Krause, Bounds for the variation of the roots of a polynomial and the eigenvalues of a matrix, Linear Algebra and Its Appl. 142 (1990), 195-209.
8. Harald K. Wimmer, On the history of the Bezoutian and the resultant matrix, Linear Algebra and Its Appl. 128 (1990), 27-34.
9. L. N. Vaserstein and E. Wheland, Commutators and companion matrices over rings of stable rank 1, Linear Algebra and Its Appl. 142 (1990), 263-277.
10. C. Carstensen, Linear construction of companion matrices, Linear Algebra and Its Appl. 149 (1991), 191-214.
11. Vlastimil Ptak, The infinite companion matrix, Linear Algebra and Its Appl. 166 (1992), 65-95.
12. Karla Rost, Generalized Lyapunov equations, matrices with displacement structure and generalized Bezoutians, Linear Algebra and Its Appl. 193 (1993), 75-93.
13. Karla Rost, Generalized companion matrices and matrix representations for generalized Bezoutians, Linear Algebra and Its Appl. 193 (1993), 151-172.
14. Harald K. Wimmer, Pairs of companion matrices and their simultaneous reduction to complementary triangular forms, Linear Algebra and Its Appl. 182 (1993), 179-197.
15. Andre Klein, On Fisher's information matrix of an ARMAX process and Sylvester's resultant matrices, Linear Algebra and Its Appl. 237/238 (1996), 579-590.

16. Bernard Hanzon, A Faddeev sequence method for solving Lyapunov and Sylvester equations, Linear Algebra and Its Appl. 241-243 (1996), 401-430.
17. Daniel Augot and Paul Camion, On the computation of minimal polynomials, cyclic vectors, and Frobenius forms, Linear Algebra and Its Appl. 260 (1997), 61-94.
18. Dario Andrea Bini and Luca Gemignani, Fast fraction-free triangularization of Bezoutians with applications to sub-resultant chain computation, Linear Algebra and Its Appl. 284 (1998), 19-39.
19. Marc Van Barel, Vlastimil Ptak and Zdenek Vavrin, Extending the notions of companion and infinite companion to matrix polynomials, Linear Algebra and Its Appl. 290 (1999), 61-94.
20. Guangcai Zhou and Xiang-Gen Xia, Ambiguity resistant polynomial matrices, Linear Algebra and Its Appl. 286 (1999), 19-35.
21. Louis Solomon, Similarity of the companion matrix and its transpose, Linear Algebra and Its Appl. 302-303 (1999), 555-561.
22. David Chillag, Regular representations of semisimple algebras, separable field extensions, group characters, generalized circulants, and generalized cyclic codes, Linear Algebra and Its Appl. 218 (1995), 147-183.
23. Predrag S. Stanimirović and Milan B. Tasić, Computing determinantal representation of generalized inverses, J. Appl. Math. & Computing (old: KJCAM) 9 (2002), 349-360.
24. Jae Heon Yun and Sang Wook Kim, A variant of block incomplete factorization preconditioners for a symmetric H-matrix, J. Appl. Math. & Computing (old: KJCAM) 8 (2001), 481-496.

Gao Shuping received her BS from Shaanxi Normal University in 1986 and her MS from Xidian University in 1994. Since 1986 she has been at Xidian University, where she has been pursuing her Ph.D. since 2000. Her research interests focus on multi-objective programming, transportation networks and matrix theory.

Liu Sanyang received his BS, MS and Ph.D. from Shaanxi Normal University, Xidian University and Xi'an Jiaotong University in 1982, 1984 and 1989, respectively. Since 1987 he has been at Xidian University. His research interests focus on multi-objective programming, combinatorial optimization, convex analysis and matrix theory.

Department of Applied Mathematics, Xidian University, Xi'an, 710071, P. R. China
e-mail: xdgaosp@263.net