1 Polynomial Matrices

1.1 Polynomials

Let $\mathbb{F}$ be a field, e.g., the field $\mathbb{R}$ of the real numbers, the field $\mathbb{C}$ of the complex numbers, the field of the rational numbers, the field of the rational functions $W(s)$ of a complex variable $s$, etc. An expression of the form

    w(s) = \sum_{i=0}^{n} a_i s^i = a_n s^n + a_{n-1} s^{n-1} + \dots + a_1 s + a_0    (1.1.1)

is called a polynomial $w(s)$ in the variable $s$ over the field $\mathbb{F}$, where $a_i \in \mathbb{F}$ for $i = 0, 1, \dots, n$ are called the coefficients of this polynomial. The set of polynomials (1.1.1) over the field $\mathbb{F}$ will be denoted by $\mathbb{F}[s]$. If $a_n \neq 0$, then the nonnegative integer $n$ is called the degree of the polynomial and is denoted by $\deg w(s)$, i.e., $n = \deg w(s)$. The polynomial (1.1.1) is called monic if $a_n = 1$, and the zero polynomial if $a_i = 0$ for $i = 0, 1, \dots, n$.

The sum of two polynomials

    w_1(s) = a_0 + a_1 s + \dots + a_n s^n,    (1.1.2a)
    w_2(s) = b_0 + b_1 s + \dots + b_m s^m    (1.1.2b)

is defined in the following way:

    w_1(s) + w_2(s) =
      \sum_{i=0}^{m} (a_i + b_i) s^i + \sum_{i=m+1}^{n} a_i s^i   for n > m,
      \sum_{i=0}^{n} (a_i + b_i) s^i                              for n = m,    (1.1.3)
      \sum_{i=0}^{n} (a_i + b_i) s^i + \sum_{i=n+1}^{m} b_i s^i   for n < m.

If $n > m$, then the sum is a polynomial of degree $n$; if $m > n$, then the sum is a polynomial of degree $m$. If $n = m$ and $a_n + b_n \neq 0$, then this sum is a polynomial of degree $n$, and a polynomial of degree less than $n$ if $a_n + b_n = 0$. Thus we have

    \deg[w_1(s) + w_2(s)] \le \max\{\deg w_1(s), \deg w_2(s)\}.    (1.1.4)

In the same vein we define the difference of two polynomials.

A polynomial whose coefficients are the products of the coefficients $a_i$ and a scalar $\lambda \in \mathbb{F}$, i.e.,

    \lambda w(s) = \sum_{i=0}^{n} \lambda a_i s^i,    (1.1.5)

is called the product of the polynomial (1.1.1) and the scalar $\lambda$ (a scalar can be regarded as a polynomial of zero degree).

A polynomial of the form

    w_1(s) w_2(s) = \sum_{i=0}^{n+m} c_i s^i    (1.1.6a)

is called the product of the polynomials (1.1.2), where

    c_i = \sum_{k=0}^{i} a_k b_{i-k},  i = 0, 1, \dots, n+m  (a_k = 0 for k > n, b_k = 0 for k > m).    (1.1.6b)

From (1.1.6a) it follows that

    \deg[w_1(s) w_2(s)] = n + m,    (1.1.7)

since $a_n b_m \neq 0$ for $a_n \neq 0$, $b_m \neq 0$.

Let $w_2(s)$ in (1.1.2) be a nonzero polynomial and $n > m$. Then there exist exactly two polynomials $q(s)$ and $r(s)$ such that

    w_1(s) = w_2(s) q(s) + r(s),    (1.1.8)

where

    \deg r(s) < \deg w_2(s) = m.    (1.1.9)

The polynomial $q(s)$ is called the integer part when $r(s) \neq 0$ and the quotient when $r(s) = 0$, and $r(s)$ is called the remainder.
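The division (1.1.8) is easy to experiment with numerically. Below is a minimal sketch, assuming SymPy is available; the two sample polynomials are arbitrary choices:

```python
# Division with remainder (1.1.8)-(1.1.9), assuming SymPy.
import sympy as sp

s = sp.symbols('s')
w1 = s**3 + 3*s**2 + 3*s + 1
w2 = s**2 - s + 1

q, r = sp.div(w1, w2, s)          # w1 = w2*q + r with deg r < deg w2
print(q, r)                        # q = s + 4, r = 6*s - 3
assert sp.expand(w2*q + r - w1) == 0
```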

If $r(s) = 0$, then $w_1(s) = w_2(s) q(s)$; we say then that the polynomial $w_1(s)$ is divisible without remainder by the polynomial $w_2(s)$, or equivalently, that the polynomial $w_2(s)$ divides without remainder the polynomial $w_1(s)$, which is denoted by $w_2(s) \mid w_1(s)$. We also say that the polynomial $w_2(s)$ is a divisor of the polynomial $w_1(s)$.

Let us consider the polynomials (1.1.2). We say that a polynomial $d(s)$ is a common divisor of the polynomials $w_1(s)$ and $w_2(s)$ if there exist polynomials $\bar w_1(s)$ and $\bar w_2(s)$ such that

    w_1(s) = d(s) \bar w_1(s),  w_2(s) = d(s) \bar w_2(s).    (1.1.10)

A polynomial $d_m(s)$ is called a greatest common divisor (GCD) of the polynomials $w_1(s)$ and $w_2(s)$ if every common divisor of these polynomials is a divisor of the polynomial $d_m(s)$. A GCD $d_m(s)$ of the polynomials $w_1(s)$ and $w_2(s)$ is determined uniquely up to multiplication by a constant factor and satisfies the equality

    d_m(s) = w_1(s) m_1(s) + w_2(s) m_2(s),    (1.1.11)

where $m_1(s)$ and $m_2(s)$ are polynomials, which we can determine using Euclid's algorithm or the elementary operations method.

The essence of Euclid's algorithm is as follows. Using division of polynomials, we determine the sequences of polynomials $q_1, q_2, \dots, q_{k+1}$ and $r_1, r_2, \dots, r_k$ satisfying

    w_1 = w_2 q_1 + r_1,
    w_2 = r_1 q_2 + r_2,
    r_1 = r_2 q_3 + r_3,
    \dots    (1.1.12)
    r_{k-2} = r_{k-1} q_k + r_k,
    r_{k-1} = r_k q_{k+1}.

We stop the computations when the last nonzero remainder $r_k$ has been computed and $r_{k-1}$ is found to be divisible without remainder by $r_k$. With $r_1, r_2, \dots, r_{k-1}$ eliminated from (1.1.12), we obtain (1.1.11) for $d_m(s) = r_k$. Thus the last nonzero remainder $r_k$ is a GCD of the polynomials $w_1(s)$ and $w_2(s)$.

Example 1.1.1. Let

    w_1 = w_1(s) = s^3 + 3s^2 + 3s + 1,  w_2 = w_2(s) = s^2 - s + 1.    (1.1.13)

Using Euclid's algorithm, we compute

    w_1 = w_2 q_1 + r_1,  q_1 = s + 4,  r_1 = 6s - 3,
    w_2 = r_1 q_2 + r_2,  q_2 = \frac{1}{6}s - \frac{1}{12},  r_2 = \frac{3}{4}.    (1.1.14)

Here we stop, because $r_1$ is divisible without remainder by $r_2$. Thus $r_2$ is a GCD of the polynomials (1.1.13). Elimination of $r_1$ from (1.1.14) yields

    r_2 = w_1 (-q_2) + w_2 (1 + q_1 q_2),

that is,

    \frac{3}{4} = (s^3 + 3s^2 + 3s + 1)\left(-\frac{s}{6} + \frac{1}{12}\right) + (s^2 - s + 1)\left(\frac{s^2}{6} + \frac{7s}{12} + \frac{2}{3}\right).

The polynomials (1.1.2) are called relatively prime (or coprime) if and only if their monic GCD is equal to 1. From (1.1.11) for $d_m(s) = 1$ it follows that the polynomials $w_1(s)$ and $w_2(s)$ are coprime if and only if there exist polynomials $m_1(s)$ and $m_2(s)$ such that

    w_1(s) m_1(s) + w_2(s) m_2(s) = 1.    (1.1.15)

Dividing both sides of (1.1.11) by $d_m(s)$, we obtain

    1 = \bar w_1(s) m_1(s) + \bar w_2(s) m_2(s),    (1.1.16)

where

    \bar w_k(s) = \frac{w_k(s)}{d_m(s)}  for k = 1, 2.

Thus, if $d_m(s)$ is a GCD of the polynomials $w_1(s)$ and $w_2(s)$, then the polynomials $\bar w_1(s)$ and $\bar w_2(s)$ are coprime.

Let $s_1, s_2, \dots, s_p$ be the distinct roots, of multiplicities $m_1, m_2, \dots, m_p$ ($m_1 + m_2 + \dots + m_p = n$), respectively, of the equation $w(s) = 0$. The numbers $s_1, s_2, \dots, s_p$ are called the zeros of the polynomial (1.1.1). This polynomial can be uniquely written in the form

    w(s) = a_n (s - s_1)^{m_1} (s - s_2)^{m_2} \cdots (s - s_p)^{m_p}.    (1.1.17)
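A minimal sketch of Euclid's algorithm (1.1.12), assuming SymPy, applied to the polynomials of Example 1.1.1; the helper function name is ours:

```python
# Euclid's algorithm for a polynomial GCD, assuming SymPy.
import sympy as sp

s = sp.symbols('s')

def euclid_gcd(w1, w2):
    """Iterate w_{k-1} = w_k*q_k + r_k; the last nonzero remainder is a GCD."""
    while w2 != 0:
        _, r = sp.div(w1, w2, s)
        w1, w2 = w2, r
    return w1

g = euclid_gcd(s**3 + 3*s**2 + 3*s + 1, s**2 - s + 1)
print(g)                                              # 3/4, a nonzero constant
print(sp.gcd(s**3 + 3*s**2 + 3*s + 1, s**2 - s + 1))  # built-in, normalized: 1
```

Since the GCD is a nonzero constant, the two polynomials of (1.1.13) are coprime.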

1.2 Basic Notions and Basic Operations on Polynomial Matrices

A matrix whose elements are polynomials over a field $\mathbb{F}$ is called a polynomial matrix over the field $\mathbb{F}$ (briefly, a polynomial matrix):

    A(s) = [a_{ij}(s)] = \begin{bmatrix} a_{11}(s) & \cdots & a_{1n}(s) \\ \vdots & & \vdots \\ a_{m1}(s) & \cdots & a_{mn}(s) \end{bmatrix},  a_{ij}(s) \in \mathbb{F}[s],  i = 1, \dots, m,  j = 1, \dots, n.    (1.2.1)

The ordered pair of the numbers of rows $m$ and of columns $n$ is called the dimension of the matrix (1.2.1) and is denoted by $m \times n$. The set of polynomial matrices of dimension $m \times n$ over the field $\mathbb{F}$ will be denoted by $\mathbb{F}^{m \times n}[s]$.

The following matrix is an example of a polynomial matrix over the field of real numbers:

    A(s) = \begin{bmatrix} s^2 & s+1 & 2s \\ s+2 & s+3 & 3s^2+2s+3 \end{bmatrix} \in \mathbb{R}^{2 \times 3}[s].    (1.2.2)

Every polynomial matrix can be written in the form of a matrix polynomial. For example, the matrix (1.2.2) can be written in the form of the matrix polynomial

    A(s) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 3 \end{bmatrix} s^2 + \begin{bmatrix} 0 & 1 & 2 \\ 1 & 1 & 2 \end{bmatrix} s + \begin{bmatrix} 0 & 1 & 0 \\ 2 & 3 & 3 \end{bmatrix} = A_2 s^2 + A_1 s + A_0.    (1.2.3)

Let a matrix of the form (1.2.1) be expressed as the matrix polynomial

    A(s) = A_q s^q + \dots + A_1 s + A_0,  A_k \in \mathbb{F}^{m \times n},  k = 0, 1, \dots, q.    (1.2.4)

If $A_q$ is not a zero matrix, then the number $q$ is called its degree and is denoted by $q = \deg A(s)$. For example, the matrix (1.2.2) (and also (1.2.3)) has degree two, $q = 2$. If $n = m$ and $\det A_q \neq 0$, then the matrix (1.2.4) is called regular.

The sum of two polynomial matrices

    A(s) = [a_{ij}(s)] = \sum_{k=0}^{q} A_k s^k  and  B(s) = [b_{ij}(s)] = \sum_{k=0}^{t} B_k s^k    (1.2.5)

of the same dimension $m \times n$ is defined in the following way:

    A(s) + B(s) = [a_{ij}(s) + b_{ij}(s)] =
      \sum_{k=0}^{t} (A_k + B_k) s^k + \sum_{k=t+1}^{q} A_k s^k   for q > t,
      \sum_{k=0}^{q} (A_k + B_k) s^k                              for q = t,    (1.2.6)
      \sum_{k=0}^{q} (A_k + B_k) s^k + \sum_{k=q+1}^{t} B_k s^k   for q < t.

If $q = t$ and $A_q + B_q \neq 0$, then the sum (1.2.6) is a polynomial matrix of degree $q$, and if $A_q + B_q = 0$, then this sum is a polynomial matrix of degree not greater than $q$. Thus we have

    \deg[A(s) + B(s)] \le \max\{\deg A(s), \deg B(s)\}.    (1.2.7)

In the same vein, we define the difference of two polynomial matrices.

A polynomial matrix whose every entry is the product of an entry of the matrix (1.2.1) and a scalar $\lambda \in \mathbb{F}$ is called the product of the polynomial matrix (1.2.1) and the scalar $\lambda$:

    \lambda A(s) = [\lambda a_{ij}(s)],  i = 1, \dots, m,  j = 1, \dots, n.

From this definition, for $\lambda \neq 0$ we have $\deg[\lambda A(s)] = \deg[A(s)]$.

Multiplication of two polynomial matrices can be carried out if and only if the number of columns of the first matrix (1.2.1) is equal to the number of rows of the second matrix

    B(s) = [b_{ij}(s)] = \sum_{k=0}^{t} B_k s^k,  i = 1, \dots, n,  j = 1, \dots, p.    (1.2.8)

A polynomial matrix of the form

    C(s) = [c_{ij}(s)] = A(s) B(s) = \sum_{k=0}^{q+t} C_k s^k,  i = 1, \dots, m,  j = 1, \dots, p,    (1.2.9)

is called the product of these polynomial matrices, where

    C_k = \sum_{l=0}^{k} A_l B_{k-l},  k = 0, 1, \dots, q+t  (A_l = 0 for l > q, B_l = 0 for l > t).    (1.2.10)
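The coefficient formula (1.2.10) is a convolution of the coefficient matrices. A minimal sketch, assuming SymPy; the function name and the sample data are ours:

```python
# Product of matrix polynomials via (1.2.10), assuming SymPy.
import sympy as sp

def matpoly_mul(A, B):
    """A, B are lists of coefficient matrices [A_0, A_1, ..., A_q]."""
    q, t = len(A) - 1, len(B) - 1
    C = [sp.zeros(A[0].rows, B[0].cols) for _ in range(q + t + 1)]
    for l, Al in enumerate(A):
        for k, Bk in enumerate(B):
            C[l + k] += Al * Bk            # contributes to C_{l+k}
    return C

# A(s) = A_0 + A_1 s and B(s) = B_0 + B_1 s with A_1 B_1 = 0,
# so the product degree drops below deg A + deg B.
A = [sp.eye(2), sp.Matrix([[1, 2], [2, 4]])]
B = [sp.eye(2), sp.Matrix([[2, -2], [-1, 1]])]
print(matpoly_mul(A, B)[2])                # zero matrix
```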

From (1.2.10) it follows that $C_{q+t} = A_q B_t$, and this matrix is nonzero if at least one of the matrices $A_q$ and $B_t$ is nonsingular, in other words, if one of the matrices $A(s)$ and $B(s)$ is regular. Thus we have the relationship

    \deg[A(s) B(s)] = \deg A(s) + \deg B(s)   if at least one of these matrices is regular,
    \deg[A(s) B(s)] \le \deg A(s) + \deg B(s)  otherwise.    (1.2.11)

For example, the product of the polynomial matrices

    A(s) = \begin{bmatrix} s+1 & s \\ s & s+1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} s + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
    B(s) = \begin{bmatrix} s+1 & 1-s \\ 1-s & s+1 \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} s + \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix},

is the polynomial matrix

    A(s) B(s) = \begin{bmatrix} 3s+1 & s+1 \\ s+1 & 3s+1 \end{bmatrix},

whose degree is smaller than the sum $\deg[A(s)] + \deg[B(s)]$, since

    A_1 B_1 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.

The matrix (1.2.4) can also be written in the form

    A(s) = s^q A_q + \dots + s A_1 + A_0,    (1.2.12)

since multiplication of the matrix $A_i$ ($i = 1, 2, \dots, q$) by the scalar $s$ is commutative. Substituting a matrix $S$ in place of the scalar $s$ into (1.2.4) and (1.2.12), we obtain the following, usually different, matrices:

    A_p(S) = A_q S^q + \dots + A_1 S + A_0,
    A_l(S) = S^q A_q + \dots + S A_1 + A_0.

The matrix $A_p(S)$ ($A_l(S)$) is called the right-sided (left-sided) value of the matrix $A(s)$ for $s = S$.

Let

    C(s) = A(s) B(s).

It is easy to verify that, in general,

    C_p(S) \neq A_p(S) B_p(S)  and  C_l(S) \neq A_l(S) B_l(S).

Consider the polynomial matrices (1.2.5).

Theorem 1.2.1. If the matrix $S$ commutes with the matrices $A_i$ for $i = 1, 2, \dots, q$ and $B_j$ for $j = 1, 2, \dots, t$, then the right-sided and the left-sided value of the product of the matrices (1.2.5) for $s = S$ is equal to the product of the right-sided and left-sided values, respectively, of these matrices for $s = S$.

Proof. Taking into account the polynomial matrices (1.2.5), we can write

    D(s) = A(s) B(s) = \sum_{i=0}^{q} A_i s^i \sum_{j=0}^{t} B_j s^j = \sum_{i=0}^{q} \sum_{j=0}^{t} A_i B_j s^{i+j}

and

    D(s) = A(s) B(s) = \sum_{i=0}^{q} s^i A_i \sum_{j=0}^{t} s^j B_j = \sum_{i=0}^{q} \sum_{j=0}^{t} s^{i+j} A_i B_j.

Substituting the matrix $S$ in place of the scalar $s$, we obtain

    D_p(S) = \sum_{i=0}^{q} \sum_{j=0}^{t} A_i B_j S^{i+j} = \sum_{i=0}^{q} A_i S^i \sum_{j=0}^{t} B_j S^j = A_p(S) B_p(S),

since $B_j S = S B_j$ for $j = 1, 2, \dots, t$, and

    D_l(S) = \sum_{i=0}^{q} \sum_{j=0}^{t} S^{i+j} A_i B_j = \sum_{i=0}^{q} S^i A_i \sum_{j=0}^{t} S^j B_j = A_l(S) B_l(S),

since $S A_i = A_i S$ for $i = 1, 2, \dots, q$.
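A small sketch of the two sided values, assuming SymPy; the matrices below are hypothetical data chosen so that $S$ does not commute with $A_1$:

```python
# Right- and left-sided values of A(s) = A_1 s + A_0, assuming SymPy.
import sympy as sp

A1 = sp.Matrix([[0, 1], [0, 0]])
A0 = sp.eye(2)
S  = sp.Matrix([[1, 0], [1, 1]])

Ap = A1 * S + A0      # right-sided value A_p(S)
Al = S * A1 + A0      # left-sided value A_l(S)
print(Ap == Al)       # False, since S*A1 != A1*S
```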

1.3 Division of Polynomial Matrices

Consider polynomial matrices $A(s)$ and $B(s)$, where $\det A(s) \not\equiv 0$ and $\deg A(s) < \deg B(s)$. The matrix $A(s)$ may be not regular, i.e., the matrix of the coefficients of the highest power of the variable $s$ may be singular.

Theorem 1.3.1. If $\det A(s) \not\equiv 0$, then for the pair of polynomial matrices $A(s)$ and $B(s)$, $\deg B(s) > \deg A(s)$, there exists a pair of matrices $Q_p(s)$, $R_p(s)$ such that the following equality is satisfied:

    B(s) = Q_p(s) A(s) + R_p(s),  \deg A(s) > \deg R_p(s),    (1.3.1a)

and there exists a pair of matrices $Q_l(s)$, $R_l(s)$ such that the following equality holds:

    B(s) = A(s) Q_l(s) + R_l(s),  \deg A(s) > \deg R_l(s).    (1.3.1b)

Proof. Dividing the entries of the matrix $B(s)\,\mathrm{Adj}\,A(s)$ by the polynomial $\det A(s)$, we obtain a pair of matrices $Q_p(s)$, $R_1(s)$ such that

    B(s)\,\mathrm{Adj}\,A(s) = Q_p(s) \det A(s) + R_1(s),  \deg[\det A(s)] > \deg R_1(s).    (1.3.2)

Post-multiplication of (1.3.2) by $A(s)/\det A(s)$ yields

    B(s) = Q_p(s) A(s) + R_p(s),    (1.3.3)

since $\mathrm{Adj}\,A(s)\, A(s) = I_n \det A(s)$, where

    R_p(s) = \frac{R_1(s) A(s)}{\det A(s)}.    (1.3.4)

From (1.3.4) we have

    \deg R_p(s) = \deg R_1(s) + \deg A(s) - \deg[\det A(s)] < \deg A(s),

since $\deg[\det A(s)] > \deg R_1(s)$. The proof of the equality (1.3.1b) is similar.

Remark 1.3.1. The pairs of matrices $Q_p(s)$, $R_p(s)$ and $Q_l(s)$, $R_l(s)$ satisfying the equality (1.3.1) are not uniquely determined (are not unique), since

    B(s) = [Q_p(s) + C(s)] A(s) + R_p(s) - C(s) A(s)    (1.3.5a)

and

    B(s) = A(s)[Q_l(s) + C(s)] + R_l(s) - A(s) C(s)    (1.3.5b)

are satisfied for an arbitrary matrix $C(s)$ satisfying

    \deg[C(s) A(s)] < \deg A(s),  \deg[A(s) C(s)] < \deg A(s).

Example 1.3.1. For the matrices

    A(s) = \begin{bmatrix} s+1 & 1 \\ s^2+s & s+1 \end{bmatrix},  B(s) = \begin{bmatrix} s^3 & 0 \\ 0 & s^3 \end{bmatrix},

determine the matrices $Q_p(s)$, $R_p(s)$ satisfying the equality (1.3.1a).

In this case, $\det A_2 = 0$ and $\det A(s) = s+1$. We compute

    \mathrm{Adj}\,A(s) = \begin{bmatrix} s+1 & -1 \\ -s^2-s & s+1 \end{bmatrix},  B(s)\,\mathrm{Adj}\,A(s) = \begin{bmatrix} s^4+s^3 & -s^3 \\ -s^5-s^4 & s^4+s^3 \end{bmatrix},

and with (1.3.2) taken into account, we have

    \begin{bmatrix} s^4+s^3 & -s^3 \\ -s^5-s^4 & s^4+s^3 \end{bmatrix} = \begin{bmatrix} s^3 & -s^2+s-1 \\ -s^4 & s^3 \end{bmatrix}(s+1) + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},

i.e.,

    Q_p(s) = \begin{bmatrix} s^3 & -s^2+s-1 \\ -s^4 & s^3 \end{bmatrix},  R_1(s) = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.

According to (1.3.4), we obtain

    R_p(s) = \frac{R_1(s) A(s)}{\det A(s)} = \begin{bmatrix} s & 1 \\ 0 & 0 \end{bmatrix}.
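The adjugate construction from the proof of Theorem 1.3.1 mechanizes directly. A sketch, assuming SymPy, run on the matrices of Example 1.3.1:

```python
# Right division via the adjugate, (1.3.2)-(1.3.4), assuming SymPy.
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[s + 1, 1], [s**2 + s, s + 1]])
B = sp.Matrix([[s**3, 0], [0, s**3]])

d    = sp.cancel(A.det())                       # det A(s) = s + 1
BAdj = (B * A.adjugate()).expand()

Qp = BAdj.applyfunc(lambda e: sp.div(e, d, s)[0])
R1 = BAdj.applyfunc(lambda e: sp.div(e, d, s)[1])
Rp = ((R1 * A) / d).applyfunc(sp.cancel)        # (1.3.4)
assert (Qp * A + Rp - B).expand() == sp.zeros(2, 2)
print(Qp, Rp)
```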

Consider two square polynomial matrices

    A(s) = A_n s^n + A_{n-1} s^{n-1} + \dots + A_1 s + A_0,    (1.3.6a)
    B(s) = B_m s^m + B_{m-1} s^{m-1} + \dots + B_1 s + B_0.    (1.3.6b)

Theorem 1.3.2. If $A(s)$ and $B(s)$ are square polynomial matrices of the same dimension and $A(s)$ is regular ($\det A_n \neq 0$), then there exists exactly one pair of polynomial matrices $Q_p(s)$, $R_p(s)$ satisfying the equality

    B(s) = Q_p(s) A(s) + R_p(s),    (1.3.7a)

and exactly one pair of polynomial matrices $Q_l(s)$, $R_l(s)$ satisfying the equality

    B(s) = A(s) Q_l(s) + R_l(s),    (1.3.7b)

where

    \deg A(s) > \deg R_p(s),  \deg A(s) > \deg R_l(s).

Proof. If $n > m$, then $Q_p(s) = 0$ and $R_p(s) = B(s)$. Assume that $m \ge n$. By the assumption $\det A_n \neq 0$, there exists the inverse matrix $A_n^{-1}$. Note that the matrix $B_m A_n^{-1} s^{m-n} A(s)$ has the term with the highest power of $s$ equal to $B_m s^m$. Hence

    B(s) = B_m A_n^{-1} s^{m-n} A(s) + B^{(1)}(s),

where $B^{(1)}(s)$ is a polynomial matrix of degree $m_1 \le m-1$ of the form

    B^{(1)}(s) = B^{(1)}_{m_1} s^{m_1} + B^{(1)}_{m_1-1} s^{m_1-1} + \dots + B^{(1)}_1 s + B^{(1)}_0.

If $m_1 \ge n$, then we repeat this procedure, taking the matrix $B^{(1)}_{m_1}$ instead of the matrix $B_m$, and obtain

    B^{(1)}(s) = B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} A(s) + B^{(2)}(s),

where

    B^{(2)}(s) = B^{(2)}_{m_2} s^{m_2} + B^{(2)}_{m_2-1} s^{m_2-1} + \dots + B^{(2)}_1 s + B^{(2)}_0  (m_2 < m_1).

Continuing this procedure, we obtain the sequence of polynomial matrices $B(s), B^{(1)}(s), B^{(2)}(s), \dots$ of decreasing degrees $m, m_1, m_2, \dots$, respectively. In step $r$, we obtain the matrix $B^{(r)}(s)$ of degree $m_r < n$ and

    B(s) = \left[B_m A_n^{-1} s^{m-n} + B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} + \dots + B^{(r-1)}_{m_{r-1}} A_n^{-1} s^{m_{r-1}-n}\right] A(s) + B^{(r)}(s),

that is, the equality (1.3.7a) for

    Q_p(s) = B_m A_n^{-1} s^{m-n} + B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} + \dots + B^{(r-1)}_{m_{r-1}} A_n^{-1} s^{m_{r-1}-n},
    R_p(s) = B^{(r)}(s).    (1.3.8)

Now we will show that there exists only one pair $Q_p(s)$, $R_p(s)$ satisfying (1.3.7a). Assume that there exist two different pairs $Q_p^{(1)}(s)$, $R_p^{(1)}(s)$ and $Q_p^{(2)}(s)$, $R_p^{(2)}(s)$ such that

    B(s) = Q_p^{(1)}(s) A(s) + R_p^{(1)}(s)    (1.3.9a)

and

    B(s) = Q_p^{(2)}(s) A(s) + R_p^{(2)}(s),    (1.3.9b)

where $\deg A(s) > \deg R_p^{(1)}(s)$ and $\deg A(s) > \deg R_p^{(2)}(s)$. From (1.3.9) we have

    \left[Q_p^{(1)}(s) - Q_p^{(2)}(s)\right] A(s) = R_p^{(2)}(s) - R_p^{(1)}(s).    (1.3.10)

For $Q_p^{(1)}(s) \neq Q_p^{(2)}(s)$, the matrix $[Q_p^{(1)}(s) - Q_p^{(2)}(s)] A(s)$ is a polynomial matrix of degree not less than $n$ (since $A(s)$ is regular), while $[R_p^{(2)}(s) - R_p^{(1)}(s)]$ is a polynomial matrix of degree less than $n$. Hence from (1.3.10) it follows that $Q_p^{(1)}(s) = Q_p^{(2)}(s)$ and $R_p^{(1)}(s) = R_p^{(2)}(s)$.

Similarly, one can prove that

    Q_l(s) = A_n^{-1} B_m s^{m-n} + A_n^{-1} \bar B^{(1)}_{m_1} s^{m_1-n} + \dots + A_n^{-1} \bar B^{(r-1)}_{m_{r-1}} s^{m_{r-1}-n},
    R_l(s) = \bar B^{(r)}(s).    (1.3.11)

The matrices $Q_p(s)$, $R_p(s)$ ($Q_l(s)$, $R_l(s)$) are called, respectively, the right (left) quotient and remainder from the division of the matrix $B(s)$ by the matrix $A(s)$.

From the proof of Theorem 1.3.2, the following algorithm for determining the matrices $Q_p(s)$ and $R_p(s)$ ($Q_l(s)$ and $R_l(s)$) ensues.

Procedure 1.3.1.
Step 1: Given the matrix $A_n$, compute $A_n^{-1}$.
Step 2: Compute

    B_m A_n^{-1} s^{m-n} A(s)  (A(s) A_n^{-1} B_m s^{m-n})

and

    B^{(1)}(s) = B(s) - B_m A_n^{-1} s^{m-n} A(s) = B^{(1)}_{m_1} s^{m_1} + \dots + B^{(1)}_1 s + B^{(1)}_0,
    \bar B^{(1)}(s) = B(s) - A(s) A_n^{-1} B_m s^{m-n} = \bar B^{(1)}_{m_1} s^{m_1} + \dots + \bar B^{(1)}_1 s + \bar B^{(1)}_0.

Step 3: If $m_1 \ge n$, then compute

    B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} A(s)  (A(s) A_n^{-1} \bar B^{(1)}_{m_1} s^{m_1-n})

and

    B^{(2)}(s) = B^{(1)}(s) - B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} A(s),
    \bar B^{(2)}(s) = \bar B^{(1)}(s) - A(s) A_n^{-1} \bar B^{(1)}_{m_1} s^{m_1-n}.

Step 4: If $m_2 \ge n$, then, substituting in the above equalities $m_1$ and $B^{(1)}(s)$ by $m_2$ and $B^{(2)}(s)$, respectively, compute $B^{(3)}(s)$. Repeat this procedure $r$ times, until $m_r < n$.
Step 5: Compute the matrices $Q_p(s)$, $R_p(s)$ ($Q_l(s)$, $R_l(s)$) from (1.3.8) ((1.3.11)).

Example 1.3.2. Given the matrices

    A(s) = \begin{bmatrix} s^2+1 & s \\ s & s^2 \end{bmatrix}  and  B(s) = \begin{bmatrix} s^4 & s^3 \\ s^2 & 1 \end{bmatrix},

determine the matrices $Q_p(s)$, $R_p(s)$ and $Q_l(s)$, $R_l(s)$ satisfying (1.3.7).

The matrix $A(s)$ is regular, since $A_2 = I_2$, and

    B_4 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}.

Using Procedure 1.3.1, we compute the following.

Steps 1-3: In this case, $A_2^{-1} = I_2$,

    B_4 A_2^{-1} s^2 A(s) = \begin{bmatrix} s^2 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} s^2+1 & s \\ s & s^2 \end{bmatrix} = \begin{bmatrix} s^4+s^2 & s^3 \\ 0 & 0 \end{bmatrix}

and

    B^{(1)}(s) = B(s) - B_4 A_2^{-1} s^2 A(s) = \begin{bmatrix} -s^2 & 0 \\ s^2 & 1 \end{bmatrix}.

Since $m_1 = 2 = n$ and

    B^{(1)}_2 = \begin{bmatrix} -1 & 0 \\ 1 & 0 \end{bmatrix},

we repeat the procedure and compute

    B^{(1)}_2 A_2^{-1} A(s) = \begin{bmatrix} -1 & 0 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} s^2+1 & s \\ s & s^2 \end{bmatrix} = \begin{bmatrix} -s^2-1 & -s \\ s^2+1 & s \end{bmatrix}

and

    B^{(2)}(s) = B^{(1)}(s) - B^{(1)}_2 A_2^{-1} A(s) = \begin{bmatrix} 1 & s \\ -1 & 1-s \end{bmatrix}.

The degree $m_2 = 1$ of this matrix is less than the degree of the matrix $A(s)$. Hence, according to (1.3.8), we obtain

    Q_p(s) = B_4 A_2^{-1} s^2 + B^{(1)}_2 A_2^{-1} = \begin{bmatrix} s^2-1 & 0 \\ 1 & 0 \end{bmatrix},  R_p(s) = B^{(2)}(s) = \begin{bmatrix} 1 & s \\ -1 & 1-s \end{bmatrix}.

We compute $Q_l(s)$ and $R_l(s)$ in the same way. First,

    A(s) A_2^{-1} B_4 s^2 = \begin{bmatrix} s^2+1 & s \\ s & s^2 \end{bmatrix} \begin{bmatrix} s^2 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} s^4+s^2 & 0 \\ s^3 & 0 \end{bmatrix}

and

    \bar B^{(1)}(s) = B(s) - A(s) A_2^{-1} B_4 s^2 = \begin{bmatrix} -s^2 & s^3 \\ s^2-s^3 & 1 \end{bmatrix}.

Since $m_1 = 3 > n$ and

    \bar B^{(1)}_3 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix},

we compute

    A(s) A_2^{-1} \bar B^{(1)}_3 s = \begin{bmatrix} s^2+1 & s \\ s & s^2 \end{bmatrix} \begin{bmatrix} 0 & s \\ -s & 0 \end{bmatrix} = \begin{bmatrix} -s^2 & s^3+s \\ -s^3 & s^2 \end{bmatrix}

and

    \bar B^{(2)}(s) = \bar B^{(1)}(s) - A(s) A_2^{-1} \bar B^{(1)}_3 s = \begin{bmatrix} 0 & -s \\ s^2 & 1-s^2 \end{bmatrix}.

We repeat the procedure, since $m_2 = 2 = n$. Taking into account that

    \bar B^{(2)}_2 = \begin{bmatrix} 0 & 0 \\ 1 & -1 \end{bmatrix},

we compute

    A(s) A_2^{-1} \bar B^{(2)}_2 = \begin{bmatrix} s^2+1 & s \\ s & s^2 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} s & -s \\ s^2 & -s^2 \end{bmatrix}

and

    \bar B^{(3)}(s) = \bar B^{(2)}(s) - A(s) A_2^{-1} \bar B^{(2)}_2 = \begin{bmatrix} -s & 0 \\ 0 & 1 \end{bmatrix}.

The degree of this matrix is less than the degree of the matrix $A(s)$. Hence, according to (1.3.11), we have

    Q_l(s) = A_2^{-1} B_4 s^2 + A_2^{-1} \bar B^{(1)}_3 s + A_2^{-1} \bar B^{(2)}_2 = \begin{bmatrix} s^2 & s \\ 1-s & -1 \end{bmatrix},  R_l(s) = \bar B^{(3)}(s) = \begin{bmatrix} -s & 0 \\ 0 & 1 \end{bmatrix}.
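Procedure 1.3.1 is short to implement. A minimal sketch of the right division, assuming SymPy; the helper names are ours, and the data is that of Example 1.3.2:

```python
# Right division B(s) = Q_p(s)A(s) + R_p(s) for regular A(s), assuming SymPy.
import sympy as sp

s = sp.symbols('s')

def mat_deg(M):
    return max(sp.degree(e, s) for e in M)        # -oo for the zero matrix

def lead(M, d):
    return M.applyfunc(lambda e: sp.expand(e).coeff(s, d))

def right_divide(B, A):
    n = mat_deg(A)
    Ainv = lead(A, n).inv()                       # A_n^{-1}, A(s) regular
    Q, R = sp.zeros(*B.shape), B
    while mat_deg(R) >= n:
        m = mat_deg(R)
        T = lead(R, m) * Ainv * s**(m - n)        # peel off the top term
        Q, R = Q + T, (R - T * A).expand()
    return Q.expand(), R

A = sp.Matrix([[s**2 + 1, s], [s, s**2]])
B = sp.Matrix([[s**4, s**3], [s**2, 1]])
Qp, Rp = right_divide(B, A)
print(Qp)   # [[s**2 - 1, 0], [1, 0]]
print(Rp)   # [[1, s], [-1, 1 - s]]
```

The left division is obtained analogously by multiplying with $A(s)$ from the left, as in (1.3.11).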

1.4 Generalized Bezout Theorem and the Cayley-Hamilton Theorem

Let us consider the division of a square polynomial matrix

    F(s) = F_n s^n + F_{n-1} s^{n-1} + \dots + F_1 s + F_0 \in \mathbb{F}^{m \times m}[s]    (1.4.1)

by a polynomial matrix of the first degree $[I_m s - A]$, where $F_k \in \mathbb{F}^{m \times m}$, $k = 0, 1, \dots, n$, and $A \in \mathbb{F}^{m \times m}$. The right (left) remainder $R_p$ ($R_l$) from the division of $F(s)$ by $[I_m s - A]$ is a polynomial matrix of zero degree, i.e., it does not depend on $s$.

Theorem 1.4.1 (Generalized Bezout theorem). The right (left) remainder $R_p$ ($R_l$) from the division of the matrix $F(s)$ by $[I_m s - A]$ is equal to $F_p(A)$ ($F_l(A)$), i.e.,

    R_p = F_p(A) = F_n A^n + F_{n-1} A^{n-1} + \dots + F_1 A + F_0,    (1.4.2a)
    R_l = F_l(A) = A^n F_n + A^{n-1} F_{n-1} + \dots + A F_1 + F_0.    (1.4.2b)

Proof. Post-dividing the matrix $F(s)$ by $[I_m s - A]$, we obtain

    F(s) = Q_p(s)[I_m s - A] + R_p,

and pre-dividing by the same matrix, we obtain

    F(s) = [I_m s - A] Q_l(s) + R_l.

Substituting the matrix $A$ in place of the scalar $s$ in the above relationships, we obtain

    F_p(A) = Q_p(A)(A - A) + R_p = R_p

and

    F_l(A) = (A - A) Q_l(A) + R_l = R_l.

The following important corollary ensues from Theorem 1.4.1.

Corollary 1.4.1. A polynomial matrix $F(s)$ is post-divisible (pre-divisible) without remainder by $[I_m s - A]$ if and only if $F_p(A) = 0$ ($F_l(A) = 0$).

Let $\varphi(s)$ be the characteristic polynomial of a square matrix $A$ of degree $n$, i.e.,

    \varphi(s) = \det[I_n s - A] = s^n + a_{n-1} s^{n-1} + \dots + a_1 s + a_0.

From the definition of the inverse matrix we have

    [I_n s - A]\,\mathrm{Adj}[I_n s - A] = I_n \varphi(s)    (1.4.3a)

and

    \mathrm{Adj}[I_n s - A]\,[I_n s - A] = I_n \varphi(s).    (1.4.3b)

It follows from (1.4.3) that the polynomial matrix $I_n \varphi(s)$ is post-divisible and pre-divisible by $[I_n s - A]$. According to Corollary 1.4.1, this is possible if and only if $I_n \varphi(A) = \varphi(A) = 0$. Thus the following theorem has been proved.

Theorem 1.4.2 (Cayley-Hamilton). Every square matrix $A$ satisfies its own characteristic equation:

    \varphi(A) = A^n + a_{n-1} A^{n-1} + \dots + a_1 A + a_0 I_n = 0.    (1.4.4)

Example 1.4.1. The characteristic polynomial of the matrix

    A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}    (1.4.5)

is

    \varphi(s) = \det[I_2 s - A] = \begin{vmatrix} s-1 & -2 \\ -3 & s-4 \end{vmatrix} = s^2 - 5s - 2.

It is easy to verify that

    \varphi(A) = A^2 - 5A - 2I_2 = \begin{bmatrix} 7 & 10 \\ 15 & 22 \end{bmatrix} - 5\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} - 2\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = 0.

Theorem 1.4.3. Let a polynomial $w(s) \in \mathbb{F}[s]$ be of degree $N$ and $A \in \mathbb{F}^{n \times n}$, where $N \ge n$. Then there exists a polynomial $r(s)$ of degree less than $n$ such that

    w(A) = r(A).    (1.4.6)

Proof. Dividing the polynomial $w(s)$ by the characteristic polynomial $\varphi(s)$ of the matrix $A$, we obtain

    w(s) = q(s) \varphi(s) + r(s),

where $q(s)$ and $r(s)$ are the quotient and remainder on division of the polynomial $w(s)$ by $\varphi(s)$, respectively, and $\deg \varphi(s) = n > \deg r(s)$. With the matrix $A$ substituted in place of the scalar $s$ and with (1.4.4) taken into account, we obtain

    w(A) = q(A) \varphi(A) + r(A) = r(A).

Example 1.4.2. The following polynomial is given:

    w(s) = s^6 - 5s^5 - 3s^4 + 5s^3 + 2s^2 + 3s + 2.

Using (1.4.6), one has to compute $w(A)$ for the matrix (1.4.5). The characteristic polynomial of this matrix is $\varphi(s) = s^2 - 5s - 2$. Dividing the polynomial $w(s)$ by $\varphi(s)$, we obtain

    w(s) = (s^4 - s^2)(s^2 - 5s - 2) + 3s + 2,

that is,

    r(s) = 3s + 2.

Hence

    w(A) = r(A) = 3A + 2I_2 = 3\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + 2\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 5 & 6 \\ 9 & 14 \end{bmatrix}.

The above considerations can be generalized to the case of square polynomial matrices.

Theorem 1.4.4. Let $W(s) \in \mathbb{F}^{n \times n}[s]$ be a square polynomial matrix of degree $N$ and $A \in \mathbb{F}^{n \times n}$, where $N \ge n$. Then there exists a polynomial matrix $R(s)$ of degree less than $n$ such that

    W_p(A) = R_p(A)  and  W_l(A) = R_l(A),    (1.4.7)

where $W_p(A)$ and $W_l(A)$ are the right-sided and left-sided values, respectively, of the matrix $W(s)$ with $A$ substituted in place of $s$.

Proof. Dividing the entries of the matrix $W(s)$ by the characteristic polynomial $\varphi(s)$ of $A$, we obtain

    W(s) = Q(s) \varphi(s) + R(s),

where $Q(s)$ and $R(s)$ are the quotient and remainder, respectively, of the division of $W(s)$ by $\varphi(s)$, and $\deg \varphi(s) = n > \deg R(s)$. With $A$ substituted in place of the scalar $s$ and with (1.4.4) taken into account, we obtain

    W_p(A) = Q_p(A) \varphi(A) + R_p(A) = R_p(A)

and

    W_l(A) = Q_l(A) \varphi(A) + R_l(A) = R_l(A).
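A short verification of Examples 1.4.1 and 1.4.2, assuming SymPy:

```python
# Cayley-Hamilton and w(A) = r(A), assuming SymPy.
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[1, 2], [3, 4]])

phi = A.charpoly(s).as_expr()                          # s**2 - 5*s - 2
assert A**2 - 5*A - 2*sp.eye(2) == sp.zeros(2, 2)      # phi(A) = 0

w = s**6 - 5*s**5 - 3*s**4 + 5*s**3 + 2*s**2 + 3*s + 2
q, r = sp.div(w, phi, s)
print(r)                                               # 3*s + 2
print(3*A + 2*sp.eye(2))                               # [[5, 6], [9, 14]]
```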

Example 1.4.3. Given the polynomial matrix

    W(s) = \begin{bmatrix} s^6-5s^5-3s^4+5s^3+2s^2+3s+2 & s^3 \\ s^2 & s+1 \end{bmatrix},

one has to compute $W_p(A)$ and $W_l(A)$ for the matrix (1.4.5) using (1.4.7). Dividing every entry of $W(s)$ by the characteristic polynomial $\varphi(s) = s^2 - 5s - 2$ of the matrix $A$, we obtain

    W(s) = Q(s) \varphi(s) + R(s),  Q(s) = \begin{bmatrix} s^4-s^2 & s+5 \\ 1 & 0 \end{bmatrix},  R(s) = \begin{bmatrix} 3s+2 & 27s+10 \\ 5s+2 & s+1 \end{bmatrix}.

Hence, writing $R(s) = R_1 s + R_0$, we have

    W_p(A) = R_p(A) = R_1 A + R_0 = \begin{bmatrix} 3 & 27 \\ 5 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 2 & 10 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} 86 & 124 \\ 10 & 15 \end{bmatrix}

and

    W_l(A) = R_l(A) = A R_1 + R_0 = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} 3 & 27 \\ 5 & 1 \end{bmatrix} + \begin{bmatrix} 2 & 10 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} 15 & 39 \\ 31 & 86 \end{bmatrix}.

1.5 Elementary Operations on Polynomial Matrices

Definition 1.5.1. The following operations are called elementary operations on a polynomial matrix $A(s) \in \mathbb{F}^{m \times n}[s]$:
1. Multiplication of any $i$-th row (column) by a number $c \neq 0$.
2. Addition to any $i$-th row (column) of the $j$-th row (column) multiplied by any polynomial $w(s)$.
3. The interchange of any two rows (columns), e.g., of the $i$-th and the $j$-th rows (columns).

From now on we will use the following notation:
L[i x c] - multiplication of the $i$-th row by the number $c$,
P[i x c] - multiplication of the $i$-th column by the number $c$,
L[i + j x w(s)] - addition to the $i$-th row of the $j$-th row multiplied by the polynomial $w(s)$,
P[i + j x w(s)] - addition to the $i$-th column of the $j$-th column multiplied by the polynomial $w(s)$,
L[i, j] - the interchange of the $i$-th and the $j$-th rows,
P[i, j] - the interchange of the $i$-th and the $j$-th columns.

It is easy to verify that the above elementary operations, when carried out on rows, are equivalent to pre-multiplication of the matrix $A(s)$ by the following matrices:

    L_m(i, c) = \mathrm{diag}[1, \dots, 1, c, 1, \dots, 1] \in \mathbb{F}^{m \times m}

(the number $c$ standing in the $i$-th position of the diagonal),

    L_d(i, j, w(s)) \in \mathbb{F}^{m \times m}[s],

obtained from the identity matrix $I_m$ by inserting the polynomial $w(s)$ in the position $(i, j)$, and

    L_z(i, j) \in \mathbb{F}^{m \times m},    (1.5.1)

obtained from the identity matrix $I_m$ by interchanging its $i$-th and $j$-th rows.

The same operations carried out on columns are equivalent to post-multiplication of the matrix $A(s)$ by the following matrices:

    P_m(i, c) = \mathrm{diag}[1, \dots, 1, c, 1, \dots, 1] \in \mathbb{F}^{n \times n}

(the number $c$ standing in the $i$-th position of the diagonal),

    P_d(i, j, w(s)) \in \mathbb{F}^{n \times n}[s],

obtained from the identity matrix $I_n$ by inserting the polynomial $w(s)$ in the position $(j, i)$, and

    P_z(i, j) \in \mathbb{F}^{n \times n},    (1.5.2)

obtained from the identity matrix $I_n$ by interchanging its $i$-th and $j$-th columns.

It is easy to verify that the determinants of the polynomial matrices (1.5.1) and (1.5.2) are nonzero and do not depend on the variable $s$. Such matrices are called unimodular matrices.
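A small sketch, assuming SymPy, showing one such operation as pre-multiplication by a unimodular matrix; the sample matrix is hypothetical:

```python
# L[2 + 1 x w(s)] as pre-multiplication by a unimodular matrix, assuming SymPy.
import sympy as sp

s = sp.symbols('s')
L = sp.eye(3)
L[1, 0] = s**2 + 1            # L_d(2, 1, w(s)) with w(s) = s**2 + 1
print(L.det())                # 1: independent of s, so L is unimodular

A = sp.Matrix([[1, s], [s, s**2], [0, 1]])
print((L * A).expand())       # row 2 replaced by row2 + (s**2 + 1)*row1
```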

1.6 Linear Independence, Space Basis and Rank of Polynomial Matrices

Let $a_i = a_i(s)$, $i = 1, 2, \dots, n$, be the $i$-th column of a polynomial matrix $A(s) \in \mathbb{F}^{m \times n}[s]$. We will consider these columns as $m$-dimensional polynomial vectors, $a_i \in \mathbb{F}^m[s]$, $i = 1, 2, \dots, n$.

Definition 1.6.1. Vectors $a_i \in \mathbb{F}^m[s]$ are called linearly dependent over the field of rational functions $\mathbb{F}(s)$ if and only if there exist rational functions $w_i = w_i(s) \in \mathbb{F}(s)$, not all equal to zero, such that

    w_1 a_1 + w_2 a_2 + \dots + w_n a_n = 0.    (1.6.1)

These vectors are called linearly independent over the field of rational functions if the equality (1.6.1) implies $w_i = 0$ for $i = 1, 2, \dots, n$.

For example, the polynomial vectors

    a_1 = \begin{bmatrix} 1 \\ s \end{bmatrix},  a_2 = \begin{bmatrix} s \\ 1+s \end{bmatrix}    (1.6.2)

are linearly independent over the field of rational functions, since the equation

    w_1 a_1 + w_2 a_2 = \begin{bmatrix} 1 & s \\ s & 1+s \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = 0

has only the zero solution

    \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} 1 & s \\ s & 1+s \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

We will show that the rational functions $w_i$, $i = 1, 2, \dots, n$, in (1.6.1) can be replaced by polynomials $p_i = p_i(s)$, $i = 1, 2, \dots, n$. To accomplish this, we multiply both sides of (1.6.1) by the smallest common denominator of the rational functions $w_i$, $i = 1, 2, \dots, n$. We then obtain

    p_1 a_1 + p_2 a_2 + \dots + p_n a_n = 0,    (1.6.3)

where $p_i = p_i(s)$ are polynomials. For example, the polynomial vectors

    a_1 = \begin{bmatrix} 1 \\ s \end{bmatrix},  a_2 = \begin{bmatrix} s+1 \\ s^2+s \end{bmatrix}    (1.6.4)

are linearly dependent over the field of rational functions, since for

    w_1 = 1  and  w_2 = -\frac{1}{s+1}

we obtain

    w_1 a_1 + w_2 a_2 = \begin{bmatrix} 1 \\ s \end{bmatrix} - \frac{1}{s+1}\begin{bmatrix} s+1 \\ s^2+s \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.    (1.6.5)

Multiplying both sides of (1.6.5) by the smallest common denominator of the rational functions $w_1$ and $w_2$, which is equal to $s+1$, we obtain

    (s+1)\begin{bmatrix} 1 \\ s \end{bmatrix} - \begin{bmatrix} s+1 \\ s^2+s \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

If the number of polynomial vectors of the space $\mathbb{F}^n[s]$ is larger than $n$, then these vectors are linearly dependent. For example, adding to the two linearly independent vectors (1.6.2) an arbitrary vector

    a = \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} \in \mathbb{F}^2[s],

we obtain linearly dependent vectors, i.e.,

    p_1 a_1 + p_2 a_2 + p_3 a = 0    (1.6.6)

for some $p_1, p_2, p_3 \in \mathbb{F}[s]$ not simultaneously equal to zero. Assuming, for example, $p_3 = -1$, from (1.6.6) and (1.6.2) we obtain

    \begin{bmatrix} 1 & s \\ s & 1+s \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix}

and

    \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} 1 & s \\ s & 1+s \end{bmatrix}^{-1} \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} = \frac{1}{1+s-s^2}\begin{bmatrix} (1+s)a_{11} - s a_{21} \\ -s a_{11} + a_{21} \end{bmatrix}

(multiplying the combination through by $1+s-s^2$ yields polynomial coefficients). Thus the vectors $a_1$, $a_2$, $a$ are linearly dependent for any vector $a$.

Definition 1.6.2. Polynomial vectors $b_i = b_i(s) \in \mathbb{F}^n[s]$, $i = 1, 2, \dots, n$, are called a basis of the space $\mathbb{F}^n[s]$ if they are linearly independent over the field of rational functions

and an arbitrary vector $a \in \mathbb{F}^n[s]$ from this space can be represented as a linear combination of these vectors, i.e.,

    a = p_1 b_1 + p_2 b_2 + \dots + p_n b_n,    (1.6.7)

where $p_i \in \mathbb{F}[s]$, $i = 1, 2, \dots, n$.

There exist many different bases of the same space. For example, for the space $\mathbb{F}^2[s]$ we can adopt the vectors (1.6.2) as a basis. Solving the system of equations

    \begin{bmatrix} 1 & s \\ s & 1+s \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix}

for an arbitrary vector

    a = \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} \in \mathbb{F}^2[s],

we obtain

    \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} 1 & s \\ s & 1+s \end{bmatrix}^{-1} \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} = \frac{1}{1+s-s^2}\begin{bmatrix} (1+s)a_{11} - s a_{21} \\ -s a_{11} + a_{21} \end{bmatrix}.

As a basis of this space we can also adopt

    e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix},  e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.

In this case, $p_1 = a_{11}$ and $p_2 = a_{21}$.

Definition 1.6.3. The number of linearly independent rows (columns) of a polynomial matrix $A(s) \in \mathbb{F}^{n \times m}[s]$ is called its normal rank (briefly, rank).

The rank of a polynomial matrix $A(s)$ can also be equivalently defined as the highest order of a minor of this matrix that is a nonzero polynomial. The rank of a matrix $A(s) \in \mathbb{F}^{n \times m}[s]$ is not greater than the number of its rows $n$ or columns $m$, i.e.,

    \mathrm{rank}\,A(s) \le \min(n, m).    (1.6.8)

If a square matrix $A(s) \in \mathbb{F}^{n \times n}[s]$ is of full rank, i.e., $\mathrm{rank}\,A(s) = n$, then its determinant is a nonzero polynomial $w(s)$, i.e.,

    \det A(s) = w(s) \not\equiv 0.    (1.6.9)

Such a matrix is called nonsingular or invertible. It is called singular when $\det A(s) = 0$ (the zero polynomial). For example, the square matrix built from the linearly independent vectors (1.6.2) is nonsingular, since

    \det \begin{bmatrix} 1 & s \\ s & 1+s \end{bmatrix} = 1 + s - s^2 \neq 0,

and the matrix built from the linearly dependent vectors (1.6.4) is singular, since

    \det \begin{bmatrix} 1 & s+1 \\ s & s^2+s \end{bmatrix} = s^2 + s - s(s+1) = 0.

Theorem 1.6.1. Elementary operations carried out on a polynomial matrix do not change its rank.

Proof. Let

    \bar A(s) = L(s) A(s) P(s) \in \mathbb{F}^{n \times m}[s],    (1.6.10)

where $L(s) \in \mathbb{F}^{n \times n}[s]$ and $P(s) \in \mathbb{F}^{m \times m}[s]$ are unimodular matrices of elementary operations on rows and columns, respectively. From (1.6.10) we immediately have

    \mathrm{rank}\,\bar A(s) = \mathrm{rank}\,[L(s) A(s) P(s)] = \mathrm{rank}\,A(s),

since $L(s)$ and $P(s)$ are unimodular matrices.

For example, carrying out the operation $L[2 + 1 \times (-s)]$ on the rows of the matrix built from the columns (1.6.2), we obtain

    \begin{bmatrix} 1 & 0 \\ -s & 1 \end{bmatrix} \begin{bmatrix} 1 & s \\ s & 1+s \end{bmatrix} = \begin{bmatrix} 1 & s \\ 0 & 1+s-s^2 \end{bmatrix}.

Both polynomial matrices

    \begin{bmatrix} 1 & s \\ s & 1+s \end{bmatrix}  and  \begin{bmatrix} 1 & s \\ 0 & 1+s-s^2 \end{bmatrix}

are full rank matrices.
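The normal rank can be checked numerically. A sketch, assuming SymPy, whose `Matrix.rank` works with symbolic entries and therefore computes the rank over the field of rational functions:

```python
# Normal rank of the matrices built from (1.6.2) and (1.6.4), assuming SymPy.
import sympy as sp

s = sp.symbols('s')
M1 = sp.Matrix([[1, s], [s, 1 + s]])           # columns a1, a2 of (1.6.2)
M2 = sp.Matrix([[1, s + 1], [s, s**2 + s]])    # columns a1, a2 of (1.6.4)
print(M1.rank(), M2.rank())                    # 2 and 1
print(M1.det(), sp.expand(M2.det()))           # 1 + s - s**2 and 0
```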

1.7 Equivalence of Polynomial Matrices

1.7.1 Left and Right Equivalent Matrices

Definition 1.7.1. Two polynomial matrices $A(s), B(s) \in \mathbb{F}^{m \times n}[s]$ are called left (right), or row (column), equivalent if and only if one of them can be obtained from the other as a result of a finite number of elementary operations carried out on its rows (columns):

    B(s) = L(s) A(s)  or  B(s) = A(s) P(s),    (1.7.1)

where $L(s)$ ($P(s)$) is the product of unimodular matrices of elementary operations on rows (columns).

Definition 1.7.2. Two polynomial matrices $A(s), B(s) \in \mathbb{F}^{m \times n}[s]$ are called equivalent if and only if one of them can be obtained from the other as a result of a finite number of elementary operations carried out on its rows and columns, i.e.,

    B(s) = L(s) A(s) P(s),    (1.7.2)

where $L(s)$ and $P(s)$ are the products of unimodular matrices of elementary operations on rows and columns, respectively.

Theorem 1.7.1. A full rank polynomial matrix $A(s) \in \mathbb{F}^{m \times l}[s]$ is left equivalent to an upper triangular matrix $\bar A(s) = L(s) A(s)$ of the form

    \bar A(s) = \begin{bmatrix} \bar a_{11}(s) & \bar a_{12}(s) & \cdots & \bar a_{1m}(s) & \cdots & \bar a_{1l}(s) \\ 0 & \bar a_{22}(s) & \cdots & \bar a_{2m}(s) & \cdots & \bar a_{2l}(s) \\ \vdots & & \ddots & \vdots & & \vdots \\ 0 & 0 & \cdots & \bar a_{mm}(s) & \cdots & \bar a_{ml}(s) \end{bmatrix}  for m < l,

    \bar A(s) = \begin{bmatrix} \bar a_{11}(s) & \bar a_{12}(s) & \cdots & \bar a_{1m}(s) \\ 0 & \bar a_{22}(s) & \cdots & \bar a_{2m}(s) \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \bar a_{mm}(s) \end{bmatrix}  for m = l,    (1.7.3)

    \bar A(s) = \begin{bmatrix} \bar a_{11}(s) & \cdots & \bar a_{1l}(s) \\ & \ddots & \vdots \\ 0 & & \bar a_{ll}(s) \\ 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{bmatrix}  for m > l,

where the entries $\bar a_{1i}(s), \bar a_{2i}(s), \dots, \bar a_{i-1,i}(s)$ are polynomials of degree less than that of $\bar a_{ii}(s)$ for $i = 1, 2, \dots, m$, and $L(s)$ is the product of the matrices of elementary operations carried out on rows.

Proof. Among the nonzero entries of the first column of the matrix $A(s)$, we choose an entry that is a polynomial of the lowest degree and, carrying out an operation $L[i, j]$, we move it to the position (1,1). Denote this entry by $a_{11}(s)$. Then we divide all the remaining entries of the first column by $a_{11}(s)$. We obtain

    a_{i1}(s) = a_{11}(s) q_{i1}(s) + r_{i1}(s)  for i = 2, 3, \dots, m,

where $q_{i1}(s)$ is the quotient and $r_{i1}(s)$ the remainder of the division of the polynomial $a_{i1}(s)$ by $a_{11}(s)$. Carrying out the operations $L[i + 1 \times (-q_{i1}(s))]$, we replace the entries $a_{i1}(s)$ with the remainders $r_{i1}(s)$. If not all the remainders are equal to zero, then we choose the one that is a polynomial of the lowest degree and, carrying out operations $L[i, j]$, we move it to the position (1,1). Denoting this remainder by $r_{11}(s)$, we repeat the above procedure, taking the remainder $r_{11}(s)$ instead of $a_{11}(s)$. The degree of $r_{11}(s)$ is lower than the degree of $a_{11}(s)$. After a finite number of steps, we obtain a matrix $\bar A(s)$ of the form

    \bar A(s) = \begin{bmatrix} \bar a_{11}(s) & \bar a_{12}(s) & \cdots & \bar a_{1l}(s) \\ 0 & \bar a_{22}(s) & \cdots & \bar a_{2l}(s) \\ \vdots & \vdots & & \vdots \\ 0 & \bar a_{m2}(s) & \cdots & \bar a_{ml}(s) \end{bmatrix}.

We repeat the above procedure for the first column of the submatrix obtained from the matrix $\bar A(s)$ by deleting the first row and the first column. We then obtain a matrix of the form

    \hat A(s) = \begin{bmatrix} \bar a_{11}(s) & \bar a_{12}(s) & \bar a_{13}(s) & \cdots & \bar a_{1l}(s) \\ 0 & \hat a_{22}(s) & \hat a_{23}(s) & \cdots & \hat a_{2l}(s) \\ 0 & 0 & \hat a_{33}(s) & \cdots & \hat a_{3l}(s) \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & \hat a_{m3}(s) & \cdots & \hat a_{ml}(s) \end{bmatrix}.

If $\bar a_{12}(s)$ is not a polynomial of lower degree than that of $\hat a_{22}(s)$, then we divide $\bar a_{12}(s)$ by $\hat a_{22}(s)$ and, carrying out $L[1 + 2 \times (-q_{12}(s))]$, we replace the entry

$\bar a_{12}(s)$ with the entry $r_{12}(s)$, where $q_{12}(s)$ and $r_{12}(s)$ are the quotient and the remainder of the division of $\bar a_{12}(s)$ by $\hat a_{22}(s)$, respectively. Next, we consider the submatrix obtained from the matrix $\hat A(s)$ by removing the first two rows and the first two columns. Continuing this procedure, we obtain the matrix (1.7.3).

An algorithm for determining the left equivalent form (1.7.3) follows immediately from the above proof.

Example 1.7.1. The matrix

    A(s) = \begin{bmatrix} 1 & s \\ s+1 & s^2+s+1 \\ s & s^2+1 \end{bmatrix}

is to be transformed to the left equivalent form (1.7.3). To accomplish this, we carry out the following elementary operations:

    A(s) \xrightarrow{L[2+1\times(-(s+1))]} \begin{bmatrix} 1 & s \\ 0 & 1 \\ s & s^2+1 \end{bmatrix} \xrightarrow{L[3+1\times(-s)]} \begin{bmatrix} 1 & s \\ 0 & 1 \\ 0 & 1 \end{bmatrix} \xrightarrow{L[3+2\times(-1)]} \begin{bmatrix} 1 & s \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \xrightarrow{L[1+2\times(-s)]} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}.
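The same reduction, carried out by explicit row updates on a mutable matrix, assuming SymPy:

```python
# The row operations of Example 1.7.1, assuming SymPy.
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[1, s], [s + 1, s**2 + s + 1], [s, s**2 + 1]])

A[1, :] = (A[1, :] - (s + 1) * A[0, :]).expand()   # L[2 + 1 x (-(s+1))]
A[2, :] = (A[2, :] - s * A[0, :]).expand()         # L[3 + 1 x (-s)]
A[2, :] = A[2, :] - A[1, :]                        # L[3 + 2 x (-1)]
A[0, :] = (A[0, :] - s * A[1, :]).expand()         # L[1 + 2 x (-s)]
print(A)                                           # [[1, 0], [0, 1], [0, 0]]
```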

Theorem 1.7.2. A full rank polynomial matrix $A(s) \in \mathbb{F}^{m \times l}[s]$ is right equivalent to a lower triangular matrix $\bar A(s) = A(s) P(s)$ of the form

    \bar A(s) = \begin{bmatrix} \bar a_{11}(s) & 0 & \cdots & 0 \\ \bar a_{21}(s) & \bar a_{22}(s) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \bar a_{m1}(s) & \bar a_{m2}(s) & \cdots & \bar a_{mm}(s) \end{bmatrix}  for m = l,

    \bar A(s) = \begin{bmatrix} \bar a_{11}(s) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ \bar a_{l1}(s) & \cdots & \bar a_{ll}(s) \\ \vdots & & \vdots \\ \bar a_{m1}(s) & \cdots & \bar a_{ml}(s) \end{bmatrix}  for m > l,
    \bar A(s) = \begin{bmatrix} \bar a_{11}(s) & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \ddots & & \vdots & & \vdots \\ \bar a_{m1}(s) & \cdots & \bar a_{mm}(s) & 0 & \cdots & 0 \end{bmatrix}  for m < l,    (1.7.4)

where the entries $\bar a_{i1}(s), \bar a_{i2}(s), \dots, \bar a_{i,i-1}(s)$ are polynomials of lower degree than that of $\bar a_{ii}(s)$ for $i = 1, 2, \dots, m$, and $P(s)$ is the product of unimodular matrices of elementary operations carried out on columns.

1.7.2 Row and Column Reduced Matrices

The degree of the $i$-th column (row) of a polynomial matrix is the highest of the degrees of the polynomial entries of this column (row). The degree of the $i$-th column (row) of the matrix $A(s)$ will be denoted by $\deg c_i[A(s)]$ ($\deg r_i[A(s)]$), or shortly $\deg c_i$ ($\deg r_i$).

Let $L_c$ ($L_r$) be the matrix built from the coefficients of the highest powers of the variable $s$ in the columns (rows) of the matrix $A(s)$. For example, for the polynomial matrix

    A(s) = \begin{bmatrix} 2s^2+1 & s & 3s \\ s^2 & 2s+1 & s \\ s & s+1 & 1 \end{bmatrix},    (1.7.5)

we have

    \deg A(s) = \deg c_1 = 2,  \deg c_2 = \deg c_3 = 1,  \deg r_1 = \deg r_2 = 2,  \deg r_3 = 1

and

    L_c = \begin{bmatrix} 2 & 1 & 3 \\ 1 & 2 & 1 \\ 0 & 1 & 0 \end{bmatrix},  L_r = \begin{bmatrix} 2 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix}.

The matrix (1.7.5) can be written, using the above matrices, as

    A(s) = L_c\,\mathrm{diag}\left[s^2, s, s\right] + \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ s & 1 & 1 \end{bmatrix}

or

    A(s) = \mathrm{diag}\left[s^2, s^2, s\right] L_r + \begin{bmatrix} 1 & s & 3s \\ 0 & 2s+1 & s \\ 0 & 1 & 1 \end{bmatrix}.

In the general case, for a matrix $A(s) \in \mathbb{F}^{m \times n}[s]$ we have

    A(s) = L_c\,\mathrm{diag}\left[s^{\deg c_1}, s^{\deg c_2}, \dots, s^{\deg c_n}\right] + \bar A(s)    (1.7.6)

and

    A(s) = \mathrm{diag}\left[s^{\deg r_1}, s^{\deg r_2}, \dots, s^{\deg r_m}\right] L_r + \tilde A(s),    (1.7.7)

where $\bar A(s)$, $\tilde A(s)$ are polynomial matrices satisfying the conditions

    \deg c_i[\bar A(s)] < \deg c_i[A(s)],  \deg r_i[\tilde A(s)] < \deg r_i[A(s)].

If $m = n$ and $\det L_c \neq 0$, then the determinant of the matrix (1.7.6) is a polynomial of degree

    k = \sum_{i=1}^{n} \deg c_i,

since

    \det A(s) = \det L_c\,\det \mathrm{diag}\left[s^{\deg c_1}, s^{\deg c_2}, \dots, s^{\deg c_n}\right] + \dots = (\det L_c)\, s^k + \dots,

where the dots denote terms of degree lower than $k$. Similarly, if $\det L_r \neq 0$, then the determinant of the matrix (1.7.7) is a polynomial of degree

    k = \sum_{j=1}^{m} \deg r_j.

Definition 1.7.3. A polynomial matrix $A(s)$ is said to be column (row) reduced if and only if its matrix $L_c$ ($L_r$) is a full rank matrix. Thus, a square matrix $A(s)$ is column (row) reduced if and only if $\det L_c \neq 0$ ($\det L_r \neq 0$).

For example, the matrix (1.7.5) is column reduced but not row reduced, since

    \det L_c = \det \begin{bmatrix} 2 & 1 & 3 \\ 1 & 2 & 1 \\ 0 & 1 & 0 \end{bmatrix} = 1 \neq 0,  \det L_r = \det \begin{bmatrix} 2 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix} = 0.
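A sketch of the test, assuming SymPy; the extraction of $L_c$ and $L_r$ follows the definitions above:

```python
# Column/row reducedness of the matrix (1.7.5), assuming SymPy.
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[2*s**2 + 1, s, 3*s], [s**2, 2*s + 1, s], [s, s + 1, 1]])

col_deg = [max(sp.degree(e, s) for e in A[:, j]) for j in range(3)]
row_deg = [max(sp.degree(e, s) for e in A[i, :]) for i in range(3)]
Lc = sp.Matrix(3, 3, lambda i, j: A[i, j].coeff(s, col_deg[j]))
Lr = sp.Matrix(3, 3, lambda i, j: A[i, j].coeff(s, row_deg[i]))
print(Lc.det(), Lr.det())   # 1 and 0: column reduced, not row reduced
```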

From the above considerations and Theorems 1.7.1 and 1.7.2, the following important corollary immediately follows.

Corollary 1.7.1. Carrying out only elementary operations on rows or on columns, it is possible to transform a nonsingular polynomial matrix to a column reduced form or a row reduced form, respectively.

1.8 Reduction of Polynomial Matrices to the Smith Canonical Form

Consider a polynomial matrix $A(s) \in \mathbb{F}^{m \times n}[s]$ of rank $r$.

Definition 1.8.1. A polynomial matrix of the form

    A_S(s) = \begin{bmatrix} i_1(s) & & & & \\ & i_2(s) & & & \\ & & \ddots & & \\ & & & i_r(s) & \\ & & & & 0 \end{bmatrix} \in \mathbb{F}^{m \times n}[s],  r \le \min(n, m),    (1.8.1)

is called the Smith canonical form of the matrix $A(s) \in \mathbb{F}^{m \times n}[s]$, where $i_1(s), i_2(s), \dots, i_r(s)$ are nonzero polynomials, called invariant polynomials, with the coefficients of the highest powers of the variable $s$ equal to one, such that the polynomial $i_{k+1}(s)$ is divisible without remainder by the polynomial $i_k(s)$, i.e., $i_k \mid i_{k+1}$ for $k = 1, 2, \dots, r-1$.

Theorem 1.8.1. For an arbitrary polynomial matrix $A(s) \in \mathbb{F}^{m \times n}[s]$ of rank $r$ ($r \le \min(n, m)$), there exists its equivalent Smith canonical form (1.8.1).

Proof. Among the entries of the matrix $A(s)$, we find a nonzero one that is a polynomial of the lowest degree with respect to $s$ and, interchanging rows and columns, we move it to the position (1,1). Denote this entry by $a_{11}(s)$. Assume at the beginning that all the entries of the matrix $A(s)$ are divisible without remainder by the entry $a_{11}(s)$. Dividing the entries $a_{i1}(s)$ of the first column and the entries $a_{1j}(s)$ of the first row by $a_{11}(s)$, we obtain

    a_{i1}(s) = a_{11}(s) q_{i1}(s)  (i = 2, 3, \dots, m),
    a_{1j}(s) = a_{11}(s) q_{1j}(s)  (j = 2, 3, \dots, n),

where $q_{i1}(s)$ and $q_{1j}(s)$ are the quotients from the division of $a_{i1}(s)$ and $a_{1j}(s)$ by $a_{11}(s)$, respectively. Subtracting from the $i$-th row ($i = 2, 3, \dots, m$) the first row multiplied by $q_{i1}(s)$ and, respectively, from the $j$-th column ($j = 2, 3, \dots, n$) the first column multiplied by $q_{1j}(s)$, we obtain a matrix of the form

    \begin{bmatrix} a_{11}(s) & 0 & \cdots & 0 \\ 0 & \bar a_{22}(s) & \cdots & \bar a_{2n}(s) \\ \vdots & \vdots & & \vdots \\ 0 & \bar a_{m2}(s) & \cdots & \bar a_{mn}(s) \end{bmatrix}.    (1.8.2)

If the coefficient of the highest power of $s$ of the polynomial $a_{11}(s)$ is not equal to 1, then we multiply the first row (or column) by the reciprocal of this coefficient.

Assume next that not all the entries of the matrix $A(s)$ are divisible without remainder by $a_{11}(s)$, and that such entries are placed in the first row and the first column. Dividing the entries of the first row and the first column by $a_{11}(s)$, we obtain

    a_{1i}(s) = a_{11}(s) q_{1i}(s) + r_{1i}(s)  (i = 2, 3, \dots, n),
    a_{j1}(s) = a_{11}(s) q_{j1}(s) + r_{j1}(s)  (j = 2, 3, \dots, m),

where $q_{1i}(s)$, $q_{j1}(s)$ are the quotients and $r_{1i}(s)$, $r_{j1}(s)$ are the remainders of the division of $a_{1i}(s)$ and $a_{j1}(s)$ by $a_{11}(s)$, respectively. Subtracting from the $j$-th row ($i$-th column) the first row (column) multiplied by $q_{j1}(s)$ (by $q_{1i}(s)$), we replace the entry $a_{j1}(s)$ ($a_{1i}(s)$) by the remainder $r_{j1}(s)$ ($r_{1i}(s)$). Next, among these remainders we find a polynomial of the lowest degree with respect to $s$ and, interchanging rows and columns, we move it to the position (1,1). We denote this polynomial by $r_{11}(s)$. If not all the entries of the first row and the first column are divisible without remainder by $r_{11}(s)$, then we repeat this procedure, taking the polynomial $r_{11}(s)$ instead of the polynomial $a_{11}(s)$. The degree of the polynomial $r_{11}(s)$ is lower than the degree of $a_{11}(s)$. After a finite number of steps, we obtain in the position (1,1) a polynomial that divides without remainder all the entries of the first row and the first column. If an entry $a_{ik}(s)$ is not divisible by $a_{11}(s)$, then by adding the $i$-th row (or the $k$-th column) to the first row (the first column), we reduce this case to the previous one. Repeating this procedure, we finally obtain in the position (1,1) a polynomial that divides without remainder all the entries of the matrix. Further, we proceed in the same way as in the first case, when all the entries of the matrix are divisible without remainder by $a_{11}(s)$.

If not all the entries $\bar a_{ij}(s)$ ($i = 2, 3, \dots, m$; $j = 2, 3, \dots, n$) of the matrix (1.8.2) are equal to zero, then we find a nonzero entry among them that is a polynomial of the lowest degree with respect to $s$ and, interchanging rows and columns, we move it to the position (2,2). Proceeding further as above, we obtain a matrix of the form

    \begin{bmatrix} a_{11}(s) & 0 & 0 & \cdots & 0 \\ 0 & \bar a_{22}(s) & 0 & \cdots & 0 \\ 0 & 0 & \hat a_{33}(s) & \cdots & \hat a_{3n}(s) \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & \hat a_{m3}(s) & \cdots & \hat a_{mn}(s) \end{bmatrix},

where $\bar a_{22}(s)$ is divisible without remainder by $a_{11}(s)$, and all the entries $\hat a_{ij}(s)$ ($i = 3, 4, \dots, m$; $j = 3, 4, \dots, n$) are divisible without remainder by $\bar a_{22}(s)$. Continuing this procedure, we obtain a matrix of the Smith canonical form (1.8.1).

From this proof, the following algorithm for determining the Smith canonical form follows immediately, as illustrated by the following example.

Example 1.8.1. To transform the polynomial matrix

    A(s) = \begin{bmatrix} (s+2)^2 & (s+2)(s+3) & s+2 \\ (s+2)(s+3) & (s+2)^2 & s+3 \end{bmatrix}    (1.8.3)

to the Smith canonical form, we carry out the following elementary operations.

Step 1: We carry out the operation $P[1, 3]$:

    A_1(s) = \begin{bmatrix} s+2 & (s+2)(s+3) & (s+2)^2 \\ s+3 & (s+2)^2 & (s+2)(s+3) \end{bmatrix}.

All the entries of this matrix are divisible without remainder by $s+2$, with the exception of the entry $s+3$.

Step 2: Taking into account the equality $s+3 = 1 \cdot (s+2) + 1$, we carry out the operation $L[2 + 1 \times (-1)]$:

    A_2(s) = \begin{bmatrix} s+2 & (s+2)(s+3) & (s+2)^2 \\ 1 & -(s+2) & s+2 \end{bmatrix}.

Step 3: We carry out the operation $L[1, 2]$:

    A_3(s) = \begin{bmatrix} 1 & -(s+2) & s+2 \\ s+2 & (s+2)(s+3) & (s+2)^2 \end{bmatrix}.

Step 4: We carry out the operations $P[2 + 1 \times (s+2)]$ and $P[3 + 1 \times (-(s+2))]$:

    A_4(s) = \begin{bmatrix} 1 & 0 & 0 \\ s+2 & (s+2)(2s+5) & 0 \end{bmatrix}.

Step 5: We carry out the operations $L[2 + 1 \times (-(s+2))]$ and $P[2 \times (1/2)]$:

    A_S(s) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & (s+2)\left(s+\frac{5}{2}\right) & 0 \end{bmatrix}.

This matrix is the desired Smith canonical form of (1.8.3).

From the divisibility of the invariant polynomials, $i_k \mid i_{k+1}$, $k = 1, \dots, r-1$, it follows that there exist polynomials $d_1, d_2, \dots, d_r$ such that

    i_1 = d_1,  i_2 = d_1 d_2,  \dots,  i_r = d_1 d_2 \cdots d_r.

Hence the matrix (1.8.1) can be written in the form

    A_S(s) = \begin{bmatrix} d_1 & & & & \\ & d_1 d_2 & & & \\ & & \ddots & & \\ & & & d_1 d_2 \cdots d_r & \\ & & & & 0 \end{bmatrix}.    (1.8.1a)

Theorem 1.8.2. The invariant polynomials $i_1(s), i_2(s), \dots, i_r(s)$ of the matrix (1.8.1) are uniquely determined by the relationship

    i_k(s) = \frac{D_k(s)}{D_{k-1}(s)}  for k = 1, 2, \dots, r,    (1.8.4)

where $D_k(s)$ is the greatest common divisor of all the minors of order $k$ of the matrix $A(s)$ ($D_0(s) = 1$).

Proof. We will show that elementary operations do not change $D_k(s)$. An elementary operation 1), consisting in multiplying the $i$-th row (column) by a number $c \neq 0$, causes multiplication by the number $c$ of the minors containing this row (column). Thus this operation does not change $D_k(s)$. An elementary operation 2), consisting in adding to the $i$-th row (column) the $j$-th row (column) multiplied by the polynomial $w(s)$, does not change $D_k(s)$ if a minor of order $k$ contains both the $i$-th row and the $j$-th row, or contains neither of them. If a minor of order $k$ contains the $i$-th row and does not contain the $j$-th row, then we can represent it as a linear combination of two minors of order $k$ of the matrix $A(s)$; hence the greatest common divisor of the minors of order $k$ does not change. Finally, an operation 3), consisting in the interchange of the $i$-th and $j$-th rows (columns), does not change $D_k(s)$ either, since as a result of this operation a minor of order $k$ either does not change (both rows (columns) belong to this minor, or neither does), or changes only its sign (both rows belong to the same minor), or is replaced by another minor of order $k$ of the matrix $A(s)$ (only one of these rows belongs to this minor).

Thus equivalent matrices $A(s)$ and $A_S(s)$ have the same divisors $D_1(s), D_2(s), \dots, D_r(s)$. From the Smith canonical form (1.8.1) it follows that

    D_1(s) = i_1(s),
    D_2(s) = i_1(s) i_2(s),    (1.8.5)
    \dots
    D_r(s) = i_1(s) i_2(s) \cdots i_r(s).

From (1.8.5) we immediately obtain the formula (1.8.4).

Using the polynomials $d_1, d_2, \dots, d_r$, we can write the relationships (1.8.5) in the form

    D_1(s) = d_1,  D_2(s) = d_1^2 d_2,  \dots,  D_r(s) = d_1^r d_2^{r-1} \cdots d_r.    (1.8.6)

From Definition 1.8.1 and Theorems 1.8.1 and 1.8.2, the following important corollary can be derived.

Corollary 1.8.1. Two matrices $A(s), B(s) \in \mathbb{F}^{m \times n}[s]$ are equivalent if and only if they have the same invariant polynomials.
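Theorem 1.8.2 gives a direct way to compute the invariant polynomials without carrying out the reduction. A sketch, assuming SymPy, run on the matrix (1.8.3); the helper is ours:

```python
# Invariant polynomials from the GCDs of minors, (1.8.4), assuming SymPy.
import sympy as sp
from functools import reduce
from itertools import combinations

s = sp.symbols('s')
A = sp.Matrix([[(s+2)**2, (s+2)*(s+3), s+2],
               [(s+2)*(s+3), (s+2)**2, s+3]])

def D(k):
    """GCD of all k x k minors of A."""
    minors = [A.extract(list(r), list(c)).det()
              for r in combinations(range(A.rows), k)
              for c in combinations(range(A.cols), k)]
    return reduce(sp.gcd, minors)

i1 = sp.monic(D(1), s)
i2 = sp.monic(sp.cancel(D(2) / D(1)), s)
print(i1, sp.factor(i2))    # 1 and (s + 2)*(s + 5/2), as in Example 1.8.1
```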

1.9 Elementary Divisors and Zeros of Polynomial Matrices

1.9.1 Elementary Divisors

Consider a polynomial matrix $A(s) \in \mathbb{F}^{m \times n}[s]$ of rank $r$, whose Smith canonical form $A_S(s)$ is given by the formula (1.8.1). Let the $k$-th invariant polynomial of this matrix be of the form

    i_k(s) = (s - s_1)^{m_{k1}} (s - s_2)^{m_{k2}} \cdots (s - s_q)^{m_{kq}}.    (1.9.1)

From the divisibility of the polynomial $i_{k+1}(s)$ by the polynomial $i_k(s)$ it follows that

    m_{r1} \ge m_{r-1,1} \ge \dots \ge m_{11} \ge 0,  \dots,  m_{rq} \ge m_{r-1,q} \ge \dots \ge m_{1q} \ge 0.    (1.9.2)

If, for example, $i_1(s) = 1$, then $m_{11} = m_{12} = \dots = m_{1q} = 0$.

Definition 1.9.1. Each of the expressions (different from 1)

    (s - s_1)^{m_{11}},  (s - s_2)^{m_{12}},  \dots,  (s - s_q)^{m_{rq}}

appearing in the invariant polynomials (1.9.1) is called an elementary divisor of the matrix $A(s)$.

For example, the elementary divisors of the polynomial matrix (1.8.3) are $(s+2)$ and $(s+2.5)$.

The elementary divisors of a polynomial matrix are uniquely determined. This follows immediately from the uniqueness of the invariant polynomials of polynomial matrices. Equivalent polynomial matrices possess the same elementary divisors. For a polynomial matrix of known dimension, its rank together with its elementary divisors uniquely determines its Smith canonical form. For example, knowing the elementary divisors

    s-1,  s-1,  s-1,  s-2,  s-2,  s-3

of a polynomial matrix, its rank $r = 4$ and its dimension $4 \times 4$, we can write the Smith canonical form of this polynomial matrix:

    A_S(s) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & s-1 & 0 & 0 \\ 0 & 0 & (s-1)(s-2) & 0 \\ 0 & 0 & 0 & (s-1)(s-2)(s-3) \end{bmatrix}.    (1.9.3)

Consider a polynomial block-diagonal matrix of the form

    A(s) = \mathrm{diag}[A_1(s), A_2(s)] = \begin{bmatrix} A_1(s) & 0 \\ 0 & A_2(s) \end{bmatrix}.    (1.9.4)

Let $A_{kS}(s)$ be the Smith canonical form of the matrix $A_k(s)$, $k = 1, 2$, and let

    (s - s_{k1})^{m_{11,k}}, \dots, (s - s_{kq_k})^{m_{rq,k}}

be its elementary divisors. Taking into account that equivalent polynomial matrices have the same elementary divisors, we establish that the set of elementary divisors of the matrix (1.9.4) is the union of the sets of elementary divisors of $A_k(s)$, $k = 1, 2$.

Example 1.9.1. Determine the elementary divisors of the block-diagonal matrix (1.9.4) for

    A_1(s) = \begin{bmatrix} s-1 & 1 & 0 \\ 0 & s-1 & 1 \\ 0 & 0 & s-1 \end{bmatrix},  A_2(s) = \begin{bmatrix} s-1 & 1 \\ 0 & (s-1)(s-2) \end{bmatrix}.    (1.9.5)

It is easy to check that the Smith canonical forms of the matrices (1.9.5) are

    A_{1S}(s) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & (s-1)^3 \end{bmatrix},  A_{2S}(s) = \begin{bmatrix} 1 & 0 \\ 0 & (s-1)^2(s-2) \end{bmatrix}.    (1.9.6)

The elementary divisors of the matrices (1.9.5) are thus equal to $(s-1)^3$, and $(s-1)^2$, $(s-2)$, respectively. It is easy to show that the Smith canonical form of the matrix (1.9.4) with the blocks (1.9.5) is equal to

    A_S(s) = \mathrm{diag}\left[1, 1, 1, (s-1)^2, (s-1)^3(s-2)\right]    (1.9.7)

and its elementary divisors are $(s-1)^2$, $(s-1)^3$, $(s-2)$.
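Collecting the elementary divisors amounts to factoring the invariant polynomials. A sketch, assuming SymPy, with the invariant polynomials of Example 1.9.1 entered by hand:

```python
# Elementary divisors as the union over the blocks' invariant
# polynomials, assuming SymPy.
import sympy as sp

s = sp.symbols('s')
inv_polys = {'A1': [(s - 1)**3], 'A2': [(s - 1)**2 * (s - 2)]}

divisors = []
for polys in inv_polys.values():
    for p in polys:
        for base, mult in sp.factor_list(sp.expand(p))[1]:
            divisors.append(base**mult)
print(divisors)    # [(s - 1)**3, (s - 1)**2, s - 2]
```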

Consider a matrix $A \in \mathbb{F}^{n \times n}$ and its corresponding polynomial matrix $[I_n s - A]$. Let

    [I_n s - A]_S = \mathrm{diag}\left[i_1(s), i_2(s), \dots, i_n(s)\right],    (1.9.8)

where

    i_k(s) = (s - s_1)^{m_{k1}} (s - s_2)^{m_{k2}} \cdots (s - s_q)^{m_{kq}},  k = 1, \dots, n,    (1.9.9)

and $s_1, s_2, \dots, s_q$, $q \le n$, are the eigenvalues of the matrix $A$.

Definition 1.9.2. Each of the expressions (different from 1)

    (s - s_1)^{m_{11}},  (s - s_2)^{m_{12}},  \dots,  (s - s_q)^{m_{nq}}

appearing in the invariant polynomials (1.9.9) is called an elementary divisor of the matrix $A$.

The elementary divisors of the matrix $A$ are uniquely determined, and they determine its essential structural properties.

1.9.2 Zeros of Polynomial Matrices

Consider a polynomial matrix $A(s) \in \mathbb{F}^{m \times n}[s]$ of rank $r$, whose Smith canonical form is given by (1.8.1). From (1.8.5) it follows that

    D_r(s) = i_1(s) i_2(s) \cdots i_r(s).    (1.9.10)

Definition 1.9.3. The zeros of the polynomial (1.9.10) are called the zeros of the polynomial matrix $A(s)$.

The zeros of the polynomial matrix $A(s)$ can be equivalently defined as those values of the variable $s$ for which this matrix loses its full (normal) rank. For example, for the polynomial matrix (1.8.3) we have

    D_r(s) = (s+2)(s+2.5).

Thus the zeros of this matrix are $s_1 = -2$, $s_2 = -2.5$. It is easy to verify that for these values of the variable $s$, the matrix (1.8.3) (whose normal rank is equal to 2) has rank equal to 1.

If the polynomial matrix $A(s)$ is square and of full rank $r = n$, then

    \det A(s) = c\, D_r(s),  c being a constant coefficient independent of s,    (1.9.11)

and the zeros of this matrix coincide with the roots of its characteristic equation $\det A(s) = 0$. For example, for the first of the matrices (1.9.5) we have

    \det A_1(s) = \begin{vmatrix} s-1 & 1 & 0 \\ 0 & s-1 & 1 \\ 0 & 0 & s-1 \end{vmatrix} = (s-1)^3.

Thus this matrix has the zero $s = 1$ of multiplicity 3. The same result is obtained from (1.9.10), since $D_r(s) = (s-1)^3$ for $A_{1S}(s)$.

Theorem 1.9.1. Let a polynomial matrix $A(s) \in \mathbb{F}^{m \times n}[s]$ have (normal) rank $r \le \min(m, n)$. Then

    \mathrm{rank}\,A(s) = r  for s \notin Z_A,
    \mathrm{rank}\,A(s_i) = r - d_i  for s_i \in Z_A,    (1.9.12)

where $Z_A$ is the set of the zeros of the matrix $A(s)$ and $d_i$ is the number of distinct elementary divisors containing $s_i$.

Proof. By the definition of a zero, the matrix $A(s)$ does not lose its full rank if we substitute in place of the variable $s$ a number that does not belong to the set $Z_A$, i.e., $\mathrm{rank}\,A(s) = r$ for $s \notin Z_A$. Elementary operations do not change the rank of a polynomial matrix. In view of this, $\mathrm{rank}\,A(s) = \mathrm{rank}\,A_S(s) = r$, where $r$ is the number of the nonzero invariant polynomials (including those equal to 1). If an invariant polynomial contains $s_i$, then this polynomial is equal to zero for $s = s_i$. Thus we have $\mathrm{rank}\,A(s_i) = r - d_i$, $s_i \in Z_A$, since the number of invariant polynomials containing $s_i$ is equal to the number of distinct elementary divisors containing $s_i$.

For instance, the polynomial matrix (1.9.3) of full column rank has one elementary divisor containing $s_1 = 3$, two elementary divisors containing $s_2 = 2$, and three elementary divisors containing $s_3 = 1$. In view of this, according to (1.9.12), we have

    \mathrm{rank}\,A_S(3) = 3,  \mathrm{rank}\,A_S(2) = 2,  \mathrm{rank}\,A_S(1) = 1.

Remark 1.9.1. A unimodular matrix $U(s) \in \mathbb{F}^{n \times n}[s]$ does not have any zeros, since $\det U(s) = c$, where $c$ is a certain constant independent of the variable $s$.

Theorem 1.9.2. An arbitrary rectangular polynomial matrix $A(s) \in \mathbb{F}^{m \times n}[s]$ of full rank that does not have any zeros can be written in the form

    A(s) = [I_m \ 0]\, P(s)  for m < n,
    A(s) = L(s) \begin{bmatrix} I_n \\ 0 \end{bmatrix}  for m > n,    (1.9.13)

and $A(s)$ is unimodular for $m = n$, where $P(s) \in \mathbb{F}^{n \times n}[s]$ and $L(s) \in \mathbb{F}^{m \times m}[s]$ are unimodular matrices.

Proof. If $m < n$ and the matrix does not have any zeros, then, applying elementary operations on columns, we can bring this matrix to the form $[I_m \ 0]$. Similarly, if

$m > n$ and the matrix does not have any zeros, then, applying elementary operations on rows, we can bring this matrix to the form $\begin{bmatrix} I_n \\ 0 \end{bmatrix}$.

Remark 1.9.2. From the relationship (1.9.13) it follows that a polynomial matrix built from an arbitrary number of rows or columns of a matrix that does not have any zeros never has any zeros.

Theorem 1.9.3. An arbitrary polynomial matrix $A(s) \in \mathbb{F}^{m \times n}[s]$ of rank $r \le \min(m, n)$ having zeros can be presented in the form of the product of matrices

    A(s) = B(s) C(s),    (1.9.14)

where $B(s) = L^{-1}(s)\,\mathrm{diag}[i_1(s), \dots, i_r(s), 0, \dots, 0] \in \mathbb{F}^{m \times m}[s]$ is a matrix containing all the zeros of the matrix $A(s)$, and

    C(s) = [I_m \ 0]\, P^{-1}(s)  for n > m,
    C(s) = P^{-1}(s)  for n = m,    (1.9.15)
    C(s) = \begin{bmatrix} I_n \\ 0 \end{bmatrix} P^{-1}(s)  for n < m.

Proof. Let $L(s) \in \mathbb{F}^{m \times m}[s]$ and $P(s) \in \mathbb{F}^{n \times n}[s]$ be unimodular matrices of elementary operations on rows and on columns, respectively, reducing the matrix $A(s)$ to the Smith canonical form $A_S(s)$, i.e.,

    A_S(s) = L(s) A(s) P(s).    (1.9.16)

Pre-multiplying (1.9.16) by $L^{-1}(s)$ and post-multiplying it by $P^{-1}(s)$, we obtain

    A(s) = L^{-1}(s) A_S(s) P^{-1}(s) = B(s) C(s),

since

    A_S(s) = \mathrm{diag}[i_1(s), \dots, i_r(s), 0, \dots, 0] \times \begin{cases} [I_m \ 0], & n > m, \\ I_n, & n = m, \\ \begin{bmatrix} I_n \\ 0 \end{bmatrix}, & n < m. \end{cases}

From (1.9.15) it follows that the matrix $C(s)$ does not have any zeros, since the matrix $P^{-1}(s)$ is a unimodular matrix.

1.10 Similarity and Equivalence of First Degree Polynomial Matrices

Definition 1.10.1. Two square matrices $A$ and $B$ of the same dimension are said to be similar if and only if there exists a nonsingular matrix $P$ such that

    B = P^{-1} A P,    (1.10.1)

and the matrix $P$ is called a similarity transformation matrix.

Theorem 1.10.1. Similar matrices have the same characteristic polynomials, i.e.,

    \det[sI - B] = \det[sI - A].    (1.10.2)

Proof. Taking into account (1.10.1), we can write

    \det[sI - B] = \det[sP^{-1}P - P^{-1}AP] = \det[P^{-1}(sI - A)P] = \det P^{-1}\, \det[sI - A]\, \det P = \det[sI - A],

since $\det P^{-1} = (\det P)^{-1}$.

Theorem 1.10.2. The polynomial matrices $[sI - A]$ and $[sI - B]$ are equivalent if and only if the matrices $A$ and $B$ are similar.

Proof. Firstly, we show that if the matrices $A$ and $B$ are similar, then the polynomial matrices $[sI - A]$ and $[sI - B]$ are equivalent. If the matrices $A$ and $B$ are similar, i.e., they satisfy the relationship (1.10.1), then

    [sI - B] = sI - P^{-1}AP = P^{-1}[sI - A]P.

This relationship is a special case (for $L(s) = P^{-1}$ and $P(s) = P$) of the relationship (1.7.2). Thus the polynomial matrices $[sI - A]$ and $[sI - B]$ are equivalent. We will now show that if the matrices $[sI - A]$ and $[sI - B]$ are equivalent, then the matrices $A$ and $B$ are similar. Assuming that the matrices $[sI - A]$ and $[sI - B]$ are equivalent, we have

    [sI - B] = L(s)[sI - A] P(s),    (1.10.3)