
1 Polynomial Matrices

1.1 Polynomials

Let F be a field, e.g., the field of real numbers, the field of complex numbers, the field of rational numbers, the field of rational functions W(s) of a complex variable s, etc. An expression of the form

    w(s) = a_n s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0,  a_i ∈ F,  (1.1.1)

is called a polynomial w(s) in the variable s over the field F, where a_i for i = 0,1,...,n are called the coefficients of this polynomial. The set of polynomials (1.1.1) over the field F will be denoted by F[s]. If a_n ≠ 0, then the nonnegative integer n is called the degree of the polynomial and is denoted by deg w(s), i.e., n = deg w(s). The polynomial (1.1.1) is called monic if a_n = 1, and the zero polynomial if a_i = 0 for i = 0,1,...,n.

The sum of two polynomials

    w_1(s) = a_0 + a_1 s + ... + a_n s^n,  (1.1.2a)
    w_2(s) = b_0 + b_1 s + ... + b_m s^m,  (1.1.2b)

is defined in the following way

    w_1(s) + w_2(s) = sum_{i=0}^{m} (a_i + b_i) s^i + sum_{i=m+1}^{n} a_i s^i  for n ≥ m,
    w_1(s) + w_2(s) = sum_{i=0}^{n} (a_i + b_i) s^i + sum_{i=n+1}^{m} b_i s^i  for m ≥ n.  (1.1.3)

If n > m, then the sum is a polynomial of degree n; if m > n, then the sum is a polynomial of degree m. If n = m and a_n + b_n ≠ 0, then this sum is a polynomial of degree n, and a polynomial of degree less than n if a_n + b_n = 0. Thus we have

    deg [w_1(s) + w_2(s)] ≤ max {deg w_1(s), deg w_2(s)}.  (1.1.4)

In the same vein we define the difference of two polynomials.

A polynomial whose coefficients are the products of the coefficients a_i and a scalar α ∈ F, i.e.,

    α w(s) = sum_{i=0}^{n} α a_i s^i,  (1.1.5)

is called the product of the polynomial (1.1.1) and the scalar α (a scalar can be regarded as a polynomial of zero degree).

A polynomial of the form

    w_1(s) w_2(s) = sum_{i=0}^{n+m} c_i s^i  (1.1.6a)

is called the product of the polynomials (1.1.2), where

    c_i = sum_{k=0}^{i} a_k b_{i-k},  i = 0,1,...,n+m  (a_k = 0 for k > n, b_k = 0 for k > m).  (1.1.6b)

From (1.1.6a) it follows that

    deg [w_1(s) w_2(s)] = n + m,  (1.1.7)

since a_n b_m ≠ 0 for a_n ≠ 0, b_m ≠ 0.

Let w_2(s) in (1.1.2) be a nonzero polynomial and n ≥ m. Then there exist exactly two polynomials q(s) and r(s) such that

    w_1(s) = w_2(s) q(s) + r(s),  (1.1.8)

where

    deg r(s) < deg w_2(s) = m.  (1.1.9)

The polynomial q(s) is called the integer part when r(s) ≠ 0 and the quotient when r(s) = 0, and r(s) is called the remainder.
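The coefficient formulas above translate directly into code. The following sketch (plain Python with exact rational arithmetic; the helper names `poly_mul` and `poly_divmod` are our own, not notation from the text) implements the product (1.1.6) and the division with remainder (1.1.8)-(1.1.9), representing a polynomial by its coefficient list.

```python
from fractions import Fraction

# A polynomial a_0 + a_1 s + ... + a_n s^n is stored as the coefficient
# list [a_0, a_1, ..., a_n] with a nonzero leading entry.

def poly_mul(a, b):
    """Product (1.1.6): c_i = sum_k a_k b_{i-k}."""
    c = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += Fraction(ai) * Fraction(bj)
    return c

def poly_divmod(w1, w2):
    """Division with remainder (1.1.8)-(1.1.9): w1 = w2 q + r, deg r < deg w2."""
    r = [Fraction(x) for x in w1]
    q = [Fraction(0)] * max(len(w1) - len(w2) + 1, 1)
    while len(r) >= len(w2) and any(r):
        shift = len(r) - len(w2)
        coef = r[-1] / Fraction(w2[-1])     # cancel the leading term
        q[shift] = coef
        for k, c in enumerate(w2):
            r[shift + k] -= coef * Fraction(c)
        while r and r[-1] == 0:             # the leading term is now zero
            r.pop()
    return q, r

# Example: (s^3 + 3s^2 + 3s + 1) = (s^2 - s + 1)(s + 4) + (6s - 3)
q, r = poly_divmod([1, 3, 3, 1], [1, -1, 1])
# q == [4, 1] and r == [-3, 6]
```

Exact `Fraction` arithmetic is used because polynomial division over a field generally produces non-integer coefficients even for integer inputs.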

If r(s) = 0, then w_1(s) = w_2(s) q(s); we say then that the polynomial w_1(s) is divisible without remainder by the polynomial w_2(s), or equivalently, that the polynomial w_2(s) divides the polynomial w_1(s) without remainder, which is denoted by w_2(s) | w_1(s). We also say that the polynomial w_2(s) is a divisor of the polynomial w_1(s).

Let us consider the polynomials in (1.1.2). We say that a polynomial d(s) is a common divisor of the polynomials w_1(s) and w_2(s) if there exist polynomials w̄_1(s) and w̄_2(s) such that

    w_1(s) = d(s) w̄_1(s),  w_2(s) = d(s) w̄_2(s).  (1.1.10)

A polynomial d_m(s) is called a greatest common divisor (GCD) of the polynomials w_1(s) and w_2(s) if every common divisor of these polynomials is a divisor of the polynomial d_m(s). A GCD d_m(s) of the polynomials w_1(s) and w_2(s) is determined uniquely up to multiplication by a constant factor and satisfies the equality

    d_m(s) = w_1(s) m_1(s) + w_2(s) m_2(s),  (1.1.11)

where m_1(s) and m_2(s) are polynomials, which we can determine using Euclid's algorithm or the elementary operations method.

The essence of Euclid's algorithm is as follows. Using division of polynomials, we determine the sequences of polynomials q_1, q_2, ..., q_k and r_1, r_2, ..., r_k satisfying

    w_1 = w_2 q_1 + r_1,
    w_2 = r_1 q_2 + r_2,
    r_1 = r_2 q_3 + r_3,
    ...
    r_{k-2} = r_{k-1} q_k + r_k,
    r_{k-1} = r_k q_{k+1}.  (1.1.12)

We stop the computations when the last nonzero remainder r_k has been computed and r_{k-1} is found to be divisible without remainder by r_k. With r_1, r_2, ..., r_{k-1} eliminated from (1.1.12), we obtain (1.1.11) for d_m(s) = r_k. Thus the last nonzero remainder r_k is a GCD of the polynomials w_1(s) and w_2(s).

Example Let

    w_1 = w_1(s) = s^3 + 3s^2 + 3s + 1,  w_2 = w_2(s) = s^2 - s + 1.  (1.1.13)

Using Euclid's algorithm we compute

    w_1 = w_2 q_1 + r_1,  q_1 = s + 4,  r_1 = 6s - 3,
    w_2 = r_1 q_2 + r_2,  q_2 = s/6 - 1/12,  r_2 = 3/4.  (1.1.14)

Here we stop, because r_1 is divisible without remainder by r_2. Thus r_2 is a GCD of the polynomials in (1.1.13). Elimination of r_1 from (1.1.14) yields

    r_2 = w_1 (-q_2) + w_2 (1 + q_1 q_2),

that is,

    3/4 = (s^3 + 3s^2 + 3s + 1) (1 - 2s)/12 + (s^2 - s + 1) (2s^2 + 7s + 8)/12.

The polynomials in (1.1.2) are called relatively prime (or coprime) if and only if their monic GCD is equal to 1. From (1.1.11) for d_m(s) = 1 it follows that the polynomials w_1(s) and w_2(s) are coprime if and only if there exist polynomials m_1(s) and m_2(s) such that

    w_1(s) m_1(s) + w_2(s) m_2(s) = 1.  (1.1.15)

Dividing both sides of (1.1.11) by d_m(s), we obtain

    1 = w̄_1(s) m_1(s) + w̄_2(s) m_2(s),  (1.1.16)

where

    w̄_k(s) = w_k(s) / d_m(s)  for k = 1, 2.

Thus if d_m(s) is a GCD of the polynomials w_1(s) and w_2(s), then the polynomials w̄_1(s) and w̄_2(s) are coprime.

Let s_1, s_2, ..., s_p be the distinct roots, of multiplicities m_1, m_2, ..., m_p (m_1 + m_2 + ... + m_p = n), respectively, of the equation w(s) = 0. The numbers s_1, s_2, ..., s_p are called the zeros of the polynomial (1.1.1). This polynomial can be uniquely written in the form

    w(s) = a_n (s - s_1)^{m_1} (s - s_2)^{m_2} ... (s - s_p)^{m_p}.  (1.1.17)
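Euclid's algorithm (1.1.12) can be sketched in a few lines of plain Python over the rationals (the helper names are our own, not code from the text); `poly_divmod` is polynomial division with remainder, and the last nonzero remainder is returned in monic form.

```python
from fractions import Fraction

# Polynomials are coefficient lists [a_0, ..., a_n], nonzero leading entry.

def poly_divmod(w1, w2):
    r = [Fraction(x) for x in w1]
    q = [Fraction(0)] * max(len(w1) - len(w2) + 1, 1)
    while len(r) >= len(w2) and any(r):
        shift = len(r) - len(w2)
        coef = r[-1] / Fraction(w2[-1])
        q[shift] = coef
        for k, c in enumerate(w2):
            r[shift + k] -= coef * Fraction(c)
        while r and r[-1] == 0:
            r.pop()
    return q, r

def poly_gcd(w1, w2):
    """Last nonzero remainder of the scheme (1.1.12), normalized to be monic."""
    a = [Fraction(x) for x in w1]
    b = [Fraction(x) for x in w2]
    while b and any(b):
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / a[-1] for c in a]

# w1 = (s + 1)^3 and w2 = s^2 - s + 1 are coprime: the monic GCD is 1.
print(poly_gcd([1, 3, 3, 1], [1, -1, 1]))    # [Fraction(1, 1)]
# A common factor is recovered: gcd((s + 1)^2, s^2 - 1) = s + 1.
print(poly_gcd([1, 2, 1], [-1, 0, 1]))       # [Fraction(1, 1), Fraction(1, 1)]
```

A GCD is only determined up to a constant factor, which is why the result is normalized to be monic before it is returned.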

1.2 Basic Notions and Basic Operations on Polynomial Matrices

A matrix whose elements are polynomials over a field F is called a polynomial matrix over the field F (briefly, a polynomial matrix)

    A(s) = [a_ij(s)],  i = 1,...,m; j = 1,...,n,  a_ij(s) ∈ F[s].  (1.2.1)

An ordered pair of the number of rows m and the number of columns n is called the dimension of the matrix (1.2.1) and is denoted by m×n. The set of polynomial matrices of dimension m×n over a field F will be denoted by F^{m×n}[s].

The following matrix is an example of a polynomial matrix over the field of real numbers

    A(s) = [ s^2        s+1      ]
           [ s^2+s+3    3s^2+s+3 ]  ∈ R^{2×2}[s].  (1.2.2)

Every polynomial matrix can be written in the form of a matrix polynomial. For example, the matrix (1.2.2) can be written in the form of the matrix polynomial

    A(s) = [1 0; 1 3] s^2 + [0 1; 1 1] s + [0 1; 3 3] = A_2 s^2 + A_1 s + A_0.  (1.2.3)

Let a matrix of the form (1.2.1) be expressed as the matrix polynomial

    A(s) = A_q s^q + ... + A_1 s + A_0,  A_k ∈ F^{m×n},  k = 0,1,...,q.  (1.2.4)

If A_q is not a zero matrix, then the number q is called its degree and is denoted by q = deg A(s). For example, the matrix (1.2.2) (and also (1.2.3)) has degree two, q = 2. If n = m and det A_q ≠ 0, then the matrix (1.2.4) is called regular.

The sum of two polynomial matrices

    A(s) = [a_ij(s)], i=1,...,m; j=1,...,n, = sum_{k=0}^{q} A_k s^k  and  B(s) = [b_ij(s)], i=1,...,m; j=1,...,n, = sum_{k=0}^{t} B_k s^k  (1.2.5)

of the same dimension m×n is defined in the following way

    A(s) + B(s) = [a_ij(s) + b_ij(s)], i=1,...,m; j=1,...,n
                = sum_{k=0}^{t} (A_k + B_k) s^k + sum_{k=t+1}^{q} A_k s^k  for q ≥ t,
                = sum_{k=0}^{q} (A_k + B_k) s^k + sum_{k=q+1}^{t} B_k s^k  for t ≥ q.  (1.2.6)

If q = t and A_q + B_q ≠ 0, then the sum in (1.2.6) is a polynomial matrix of degree q, and if A_q + B_q = 0, then this sum is a polynomial matrix of degree not greater than q. Thus we have

    deg [A(s) + B(s)] ≤ max {deg A(s), deg B(s)}.  (1.2.7)

In the same vein, we define the difference of two polynomial matrices.

A polynomial matrix in which every entry is the product of an entry of the matrix (1.2.1) and a scalar α ∈ F is called the product of the polynomial matrix (1.2.1) and the scalar α

    α A(s) = [α a_ij(s)],  i = 1,...,m; j = 1,...,n.

From this definition, for α ≠ 0, we have deg [α A(s)] = deg A(s).

Multiplication of two polynomial matrices can be carried out if and only if the number of columns of the first matrix (1.2.1) is equal to the number of rows of the second matrix

    B(s) = [b_ij(s)], i = 1,...,n; j = 1,...,p, = sum_{k=0}^{t} B_k s^k.  (1.2.8)

A polynomial matrix of the form

    C(s) = [c_ij(s)], i = 1,...,m; j = 1,...,p, = A(s) B(s) = sum_{k=0}^{q+t} C_k s^k  (1.2.9)

is called the product of these polynomial matrices, where

    C_k = sum_{l=0}^{k} A_l B_{k-l},  k = 0,1,...,q+t  (A_l = 0 for l > q, B_l = 0 for l > t).  (1.2.10)
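The convolution formula (1.2.10) for the coefficient matrices can be checked with a short script (plain Python; all names are our own). A polynomial matrix is stored as the list of its coefficient matrices [A_0, A_1, ..., A_q].

```python
def mat_mul(A, B):
    """Ordinary matrix product over the entries."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def polymat_mul(A, B):
    """Product (1.2.9)-(1.2.10): C_k = sum_l A_l B_{k-l}."""
    rows, cols = len(A[0]), len(B[0][0])
    C = [[[0] * cols for _ in range(rows)] for _ in range(len(A) + len(B) - 1)]
    for l, Al in enumerate(A):
        for j, Bj in enumerate(B):
            C[l + j] = mat_add(C[l + j], mat_mul(Al, Bj))
    return C

# A(s) = A_0 + I s and B(s) = B_0 + I s are both regular, so by (1.2.11)
# the product must have degree deg A(s) + deg B(s) = 2.
I = [[1, 0], [0, 1]]
A = [[[0, 1], [0, 0]], I]
B = [[[1, 0], [1, 1]], I]
C = polymat_mul(A, B)
# C[2] == I (the leading coefficient A_1 B_1), and len(C) - 1 == 2
```

The demo uses regular matrices on purpose: with a singular leading coefficient on both sides, the top coefficient A_q B_t can vanish and the degree of the product drops.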

From (1.2.10) it follows that C_{q+t} = A_q B_t, and this matrix is a nonzero one if at least one of the matrices A_q and B_t is nonsingular, in other words, if one of the matrices A(s) and B(s) is regular. Thus we have the relationship

    deg [A(s)B(s)] = deg A(s) + deg B(s)  if at least one of these matrices is regular,
    deg [A(s)B(s)] ≤ deg A(s) + deg B(s)  otherwise.  (1.2.11)

For example, the product of the polynomial matrices

    A(s) = [s^2  s; s  1],  B(s) = [1  s; -s  1-s^2]

is the polynomial matrix

    A(s) B(s) = [0  s; 0  1],

whose degree is smaller than the sum deg A(s) + deg B(s), since

    A_2 B_2 = [1 0; 0 0] [0 0; 0 -1] = 0.

The matrix (1.2.4) can be written in the form

    A(s) = s^q A_q + ... + s A_1 + A_0,  (1.2.12)

since multiplication of the matrix A_i (i = 1,2,...,q) by the scalar s is commutative. Substituting a matrix S in place of the scalar s into (1.2.4) and (1.2.12), we obtain the following, usually different, matrices

    A_p(S) = A_q S^q + ... + A_1 S + A_0,
    A_l(S) = S^q A_q + ... + S A_1 + A_0.

The matrix A_p(S) (A_l(S)) is called the right-sided (left-sided) value of the matrix A(s) for s = S. Let

    C(s) = A(s) B(s).

It is easy to verify that, in general,

    C_p(S) ≠ A_p(S) B_p(S)

and

    C_l(S) ≠ A_l(S) B_l(S).

Consider the polynomial matrices in (1.2.5).

Theorem If the matrix S commutes with the matrices A_i for i = 1,2,...,q and B_j for j = 1,2,...,t, then the right-sided and the left-sided values of the product of the matrices in (1.2.5) for s = S are equal to the products of the right-sided and left-sided values, respectively, of these matrices for s = S.

Proof. Taking into account the polynomial matrices in (1.2.5), we can write

    D(s) = A(s) B(s) = sum_{i=0}^{q} sum_{j=0}^{t} A_i B_j s^{i+j}

and

    D(s) = A(s) B(s) = sum_{i=0}^{q} sum_{j=0}^{t} s^{i+j} A_i B_j.

Substituting the matrix S in place of the scalar s, we obtain

    D_p(S) = sum_{i=0}^{q} sum_{j=0}^{t} A_i B_j S^{i+j} = ( sum_{i=0}^{q} A_i S^i ) ( sum_{j=0}^{t} B_j S^j ) = A_p(S) B_p(S),

since B_j S = S B_j for j = 1,2,...,t, and

    D_l(S) = sum_{i=0}^{q} sum_{j=0}^{t} S^{i+j} A_i B_j = ( sum_{i=0}^{q} S^i A_i ) ( sum_{j=0}^{t} S^j B_j ) = A_l(S) B_l(S),

since S A_i = A_i S for i = 1,2,...,q.

1.3 Division of Polynomial Matrices

Consider polynomial matrices A(s) and B(s) such that det A(s) ≠ 0 and deg A(s) < deg B(s). The matrix A(s) may be not regular, i.e., the matrix of coefficients of the highest power of the variable s may be singular.

Theorem If det A(s) ≠ 0, then for the pair of polynomial matrices A(s) and B(s), deg B(s) > deg A(s), there exists a pair of matrices Q_p(s), R_p(s) such that the following equality is satisfied

    B(s) = Q_p(s) A(s) + R_p(s),  deg A(s) > deg R_p(s),  (1.3.1a)

and there exists a pair of matrices Q_l(s), R_l(s) such that the following equality holds

    B(s) = A(s) Q_l(s) + R_l(s),  deg A(s) > deg R_l(s).  (1.3.1b)

Proof. Dividing the elements of the matrix B(s) Adj A(s) by the polynomial det A(s), we obtain a pair of matrices Q_p(s), R_1(s) such that

    B(s) Adj A(s) = Q_p(s) det A(s) + R_1(s),  deg [det A(s)] > deg R_1(s).  (1.3.2)

Post-multiplication of (1.3.2) by A(s)/det A(s) yields

    B(s) = Q_p(s) A(s) + R_p(s),  (1.3.3)

since Adj A(s) A(s) = I_n det A(s), where

    R_p(s) = R_1(s) A(s) / det A(s).  (1.3.4)

From (1.3.4) we have

    deg R_p(s) = deg R_1(s) + deg A(s) - deg [det A(s)] < deg A(s),

since deg [det A(s)] > deg R_1(s). The proof of the equality (1.3.1b) is similar.

Remark The pairs of matrices Q_p(s), R_p(s) and Q_l(s), R_l(s) satisfying the equalities (1.3.1) are not uniquely determined (are not unique), since

    B(s) = [Q_p(s) + C(s)] A(s) + R_p(s) - C(s) A(s)  (1.3.5a)

and

    B(s) = A(s) [Q_l(s) + C(s)] + R_l(s) - A(s) C(s)  (1.3.5b)

are satisfied for an arbitrary matrix C(s) satisfying

    deg [C(s) A(s)] < deg A(s),  deg [A(s) C(s)] < deg A(s).

Example For the matrices

    A(s) = [s+1  0; s  1],  B(s) = [s^2  1; 1  s],

determine the matrices Q_p(s), R_p(s) satisfying the equality (1.3.1a).

In this case, det A_1 = 0 and det A(s) = s + 1. We compute

    Adj A(s) = [1  0; -s  s+1],  B(s) Adj A(s) = [s^2-s  s+1; 1-s^2  s^2+s],

and with (1.3.2) taken into account we have

    [s^2-s  s+1; 1-s^2  s^2+s] = [s-2  1; 1-s  s] (s+1) + [2  0; 0  0],

i.e.,

    Q_p(s) = [s-2  1; 1-s  s],  R_1(s) = [2  0; 0  0].

According to (1.3.4) we obtain

    R_p(s) = R_1(s) A(s) / det A(s) = [2  0; 0  0].

Consider two polynomial matrices

    A(s) = A_n s^n + A_{n-1} s^{n-1} + ... + A_1 s + A_0,  (1.3.6a)
    B(s) = B_m s^m + B_{m-1} s^{m-1} + ... + B_1 s + B_0.  (1.3.6b)

Theorem If A(s) and B(s) are square polynomial matrices of the same dimension and A(s) is regular (det A_n ≠ 0), then there exists exactly one pair of polynomial matrices Q_p(s), R_p(s) satisfying the equality

    B(s) = Q_p(s) A(s) + R_p(s),  (1.3.7a)

and exactly one pair of polynomial matrices Q_l(s), R_l(s) satisfying the equality

    B(s) = A(s) Q_l(s) + R_l(s),  (1.3.7b)

where

    deg A(s) > deg R_p(s),  deg A(s) > deg R_l(s).

Proof. If n > m, then Q_p(s) = 0 and R_p(s) = B(s). Assume that m ≥ n. By the assumption det A_n ≠ 0, there exists the inverse matrix A_n^{-1}. Note that the matrix B_m A_n^{-1} s^{m-n} A(s) has a term in the highest power of s equal to B_m s^m. Hence

    B(s) = B_m A_n^{-1} s^{m-n} A(s) + B^{(1)}(s),

where B^{(1)}(s) is a polynomial matrix of degree m_1 ≤ m - 1 of the form

    B^{(1)}(s) = B^{(1)}_{m_1} s^{m_1} + B^{(1)}_{m_1-1} s^{m_1-1} + ... + B^{(1)}_1 s + B^{(1)}_0.

If m_1 ≥ n, then we repeat this procedure, taking the matrix B^{(1)}_{m_1} instead of the matrix B_m, and obtain

    B^{(1)}(s) = B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} A(s) + B^{(2)}(s),

where

    B^{(2)}(s) = B^{(2)}_{m_2} s^{m_2} + B^{(2)}_{m_2-1} s^{m_2-1} + ... + B^{(2)}_1 s + B^{(2)}_0  (m_2 < m_1).

Continuing this procedure, we obtain the sequence of polynomial matrices B(s), B^{(1)}(s), B^{(2)}(s), ..., of decreasing degrees m, m_1, m_2, ..., respectively. In step r, we obtain the matrix B^{(r)}(s) of degree m_r < n and

    B(s) = [ B_m A_n^{-1} s^{m-n} + B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} + ... + B^{(r-1)}_{m_{r-1}} A_n^{-1} s^{m_{r-1}-n} ] A(s) + B^{(r)}(s),

that is, the equality (1.3.7a) for

    Q_p(s) = B_m A_n^{-1} s^{m-n} + B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} + ... + B^{(r-1)}_{m_{r-1}} A_n^{-1} s^{m_{r-1}-n},
    R_p(s) = B^{(r)}(s).  (1.3.8)

Now we will show that there exists only one pair Q_p(s), R_p(s) satisfying (1.3.7a). Assume that there exist two different pairs Q_p^{(1)}(s), R_p^{(1)}(s) and Q_p^{(2)}(s), R_p^{(2)}(s) such that

    B(s) = Q_p^{(1)}(s) A(s) + R_p^{(1)}(s)  (1.3.9a)

and

    B(s) = Q_p^{(2)}(s) A(s) + R_p^{(2)}(s),  (1.3.9b)

where deg A(s) > deg R_p^{(1)}(s) and deg A(s) > deg R_p^{(2)}(s). From (1.3.9) we have

    [ Q_p^{(1)}(s) - Q_p^{(2)}(s) ] A(s) = R_p^{(2)}(s) - R_p^{(1)}(s).  (1.3.10)

For Q_p^{(1)}(s) ≠ Q_p^{(2)}(s), the matrix [Q_p^{(1)}(s) - Q_p^{(2)}(s)] A(s) is a polynomial matrix of degree not less than n, while [R_p^{(2)}(s) - R_p^{(1)}(s)] is a polynomial matrix of degree less than n. Hence from (1.3.10) it follows that Q_p^{(1)}(s) = Q_p^{(2)}(s) and R_p^{(1)}(s) = R_p^{(2)}(s).

Similarly, one can prove that

    Q_l(s) = A_n^{-1} B_m s^{m-n} + A_n^{-1} B̂^{(1)}_{m_1} s^{m_1-n} + ... + A_n^{-1} B̂^{(r-1)}_{m_{r-1}} s^{m_{r-1}-n},
    R_l(s) = B̂^{(r)}(s).  (1.3.11)

The matrices Q_p(s), R_p(s) (Q_l(s), R_l(s)) are called, respectively, the right (left) quotient and remainder from division of the matrix B(s) by the matrix A(s).

From the proof of the above theorem, the following algorithm for determining the matrices Q_p(s) and R_p(s) (Q_l(s) and R_l(s)) ensues.

Procedure
Step 1: Given the matrix A_n, compute A_n^{-1}.
Step 2: Compute

    B_m A_n^{-1} s^{m-n} A(s)   ( A(s) A_n^{-1} B_m s^{m-n} )

and

    B^{(1)}(s) = B(s) - B_m A_n^{-1} s^{m-n} A(s) = B^{(1)}_{m_1} s^{m_1} + ... + B^{(1)}_1 s + B^{(1)}_0

and

    B̂^{(1)}(s) = B(s) - A(s) A_n^{-1} B_m s^{m-n} = B̂^{(1)}_{m_1} s^{m_1} + ... + B̂^{(1)}_1 s + B̂^{(1)}_0.

Step 3: If m_1 ≥ n, then compute

    B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} A(s)   ( A(s) A_n^{-1} B̂^{(1)}_{m_1} s^{m_1-n} )

and

    B^{(2)}(s) = B^{(1)}(s) - B^{(1)}_{m_1} A_n^{-1} s^{m_1-n} A(s),
    B̂^{(2)}(s) = B̂^{(1)}(s) - A(s) A_n^{-1} B̂^{(1)}_{m_1} s^{m_1-n}.

Step 4: If m_2 ≥ n, then substituting in the above equalities m_1 and B^{(1)}(s) by m_2 and B^{(2)}(s), respectively, compute B^{(3)}(s). Repeat this procedure r times, until m_r < n.
Step 5: Compute the matrices Q_p(s), R_p(s) (Q_l(s), R_l(s)).

Example Given the matrices

    A(s) = [s^2+1  0; s  s^2]  and  B(s) = [s^3  s; s^2  1],

determine the matrices Q_p(s), R_p(s) and Q_l(s), R_l(s) satisfying (1.3.7). The matrix A(s) is regular, since A_2 = I_2, and B_3 = [1 0; 0 0].

Using the above procedure, we compute the following.

Steps 1-3: In this case A_2^{-1} = I_2,

    B_3 A_2^{-1} s^{3-2} A(s) = [1 0; 0 0] s A(s) = [s^3+s  0; 0  0]

and

    B^{(1)}(s) = B(s) - B_3 A_2^{-1} s A(s) = [-s  s; s^2  1].
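The right-division procedure above can be prototyped as follows (a sketch in plain Python with exact rationals; all names are our own). B(s) and A(s) are given as lists of coefficient matrices, A(s) is assumed regular, and the loop repeatedly subtracts B_m A_n^{-1} s^{m-n} A(s) until the degree of the remainder drops below deg A(s).

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_inv(M):
    """Inverse over the rationals by Gauss-Jordan elimination."""
    n = len(M)
    aug = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        d = aug[col][col]
        aug[col] = [x / d for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def polymat_divmod_right(B, A):
    """B(s) = Q_p(s) A(s) + R_p(s) with deg R_p < deg A; A(s) regular."""
    n = len(A) - 1
    An_inv = mat_inv(A[-1])
    size = len(A[-1])
    Q = [[[Fraction(0)] * size for _ in range(size)]
         for _ in range(max(len(B) - len(A) + 1, 1))]
    R = [[[Fraction(x) for x in row] for row in Bk] for Bk in B]
    while True:
        while len(R) > 1 and all(x == 0 for row in R[-1] for x in row):
            R.pop()                          # drop zero leading coefficients
        m = len(R) - 1
        if m < n:
            return Q, R
        T = mat_mul(R[-1], An_inv)           # B_m A_n^{-1}
        Q[m - n] = T
        for k, Ak in enumerate(A):           # subtract T s^{m-n} A(s)
            R[m - n + k] = mat_sub(R[m - n + k], mat_mul(T, Ak))

# A(s) = [[s^2+1, 0], [s, s^2]],  B(s) = [[s^3, s], [s^2, 1]]
A = [[[1, 0], [0, 0]], [[0, 0], [1, 0]], [[1, 0], [0, 1]]]
B = [[[0, 0], [0, 1]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[1, 0], [0, 0]]]
Q, R = polymat_divmod_right(B, A)
# Q(s) = [[s, 0], [1, 0]],  R(s) = [[-s, s], [-1, 1]]
```

The left quotient and remainder are obtained analogously by multiplying with A_n^{-1} B_m on the other side.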

Since m_1 = 2 = n and B^{(1)}_2 = [0 0; 1 0], we have

    B^{(1)}_2 A_2^{-1} s^0 A(s) = [0 0; 1 0] A(s) = [0  0; s^2+1  0]

and

    B^{(2)}(s) = B^{(1)}(s) - B^{(1)}_2 A_2^{-1} A(s) = [-s  s; -1  1].

The degree of this matrix is less than the degree of the matrix A(s), so the procedure stops here (r = 2). Hence, according to (1.3.8), we obtain

    Q_p(s) = B_3 A_2^{-1} s + B^{(1)}_2 A_2^{-1} = [s  0; 1  0]

and

    R_p(s) = B^{(2)}(s) = [-s  s; -1  1].

We compute Q_l(s) and R_l(s) in the same way.

Steps 1-3: We compute

    A(s) A_2^{-1} B_3 s^{3-2} = A(s) [1 0; 0 0] s = [s^3+s  0; s^2  0]

and

    B̂^{(1)}(s) = B(s) - A(s) A_2^{-1} B_3 s = [-s  s; 0  1].

Step 5: The degree of this matrix is already less than the degree of the matrix A(s). Hence, according to (1.3.11), we have

    Q_l(s) = A_2^{-1} B_3 s = [s  0; 0  0],  R_l(s) = B̂^{(1)}(s) = [-s  s; 0  1].

1.4 Generalized Bézout Theorem and the Cayley-Hamilton Theorem

Let us consider the division of a square polynomial matrix

    F(s) = F_n s^n + F_{n-1} s^{n-1} + ... + F_1 s + F_0 ∈ F^{m×m}[s]  (1.4.1)

by a polynomial matrix of the first degree [I_m s - A], where F_k ∈ F^{m×m}, k = 0,1,...,n, and A ∈ F^{m×m}. The right (left) remainder R_p (R_l) from division of F(s) by [I_m s - A] is a polynomial matrix of zero degree, i.e., it does not depend on s.

Theorem (Generalized Bézout theorem). The right (left) remainder R_p (R_l) from division of the matrix F(s) by [I_m s - A] is equal to F_p(A) (F_l(A)), i.e.,

    R_p = F_p(A) = F_n A^n + F_{n-1} A^{n-1} + ... + F_1 A + F_0 ∈ F^{m×m},  (1.4.2a)
    R_l = F_l(A) = A^n F_n + A^{n-1} F_{n-1} + ... + A F_1 + F_0 ∈ F^{m×m}.  (1.4.2b)

Proof. Post-dividing the matrix F(s) by [I_m s - A], we obtain

    F(s) = Q_p(s) [I_m s - A] + R_p,

and pre-dividing by the same matrix, we obtain

    F(s) = [I_m s - A] Q_l(s) + R_l.

Substituting the matrix A in place of the scalar s in the above relationships, we obtain

    F_p(A) = Q_p(A) (A - A) + R_p = R_p

and

    F_l(A) = (A - A) Q_l(A) + R_l = R_l.

The following important corollary ensues from the above theorem.

Corollary A polynomial matrix F(s) is post-divisible (pre-divisible) without remainder by [I_m s - A] if and only if F_p(A) = 0 (F_l(A) = 0).

Let φ(s) be the characteristic polynomial of a square matrix A ∈ F^{n×n}, i.e.,

    φ(s) = det [I_n s - A] = s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0.

From the definition of the inverse matrix we have

    [I_n s - A] Adj [I_n s - A] = I_n φ(s)  (1.4.3a)

and

    Adj [I_n s - A] [I_n s - A] = I_n φ(s).  (1.4.3b)

It follows from (1.4.3) that the polynomial matrix I_n φ(s) is post-divisible and pre-divisible by [I_n s - A]. According to the above corollary, this is possible if and only if I_n φ(A) = φ(A) = 0. Thus the following theorem has been proved.

Theorem (Cayley-Hamilton). Every square matrix A satisfies its own characteristic equation

    φ(A) = A^n + a_{n-1} A^{n-1} + ... + a_1 A + a_0 I_n = 0.  (1.4.4)

Example The characteristic polynomial of the matrix

    A = [1  2; 3  4]  (1.4.5)

is

    φ(s) = det [I_2 s - A] = det [s-1  -2; -3  s-4] = s^2 - 5s - 2.

It is easy to verify that

    φ(A) = A^2 - 5A - 2 I_2 = [7  10; 15  22] - [5  10; 15  20] - [2  0; 0  2] = 0.

Theorem Let a polynomial w(s) ∈ F[s] be of degree N and A ∈ F^{n×n}, where N ≥ n. Then there exists a polynomial r(s) of degree less than n such that

    w(A) = r(A).  (1.4.6)

Proof. Dividing the polynomial w(s) by the characteristic polynomial φ(s) of the matrix A, we obtain

    w(s) = q(s) φ(s) + r(s),

where q(s) and r(s) are the quotient and remainder on division of the polynomial w(s) by φ(s), respectively, and deg φ(s) = n > deg r(s). With the matrix A substituted in place of the scalar s and with (1.4.4) taken into account, we obtain

    w(A) = q(A) φ(A) + r(A) = r(A).

Example The following polynomial is given

    w(s) = s^4 - 5s^3 - s^2 - 2s.
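The Cayley-Hamilton identity (1.4.4) is easy to confirm numerically; the sketch below (plain Python, our own helper names) checks it for the 2×2 matrix A = [1 2; 3 4] of (1.4.5).

```python
# Numerical check of the Cayley-Hamilton theorem (1.4.4) for the matrix
# A of (1.4.5): phi(s) = s^2 - 5s - 2, so A^2 - 5A - 2I must vanish.

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4]]
trace = A[0][0] + A[1][1]                       # 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # -2
# For a 2x2 matrix, phi(s) = s^2 - (trace) s + (det) = s^2 - 5s - 2.
A2 = mat_mul(A, A)
phiA = [[A2[i][j] - trace * A[i][j] + (det if i == j else 0)
         for j in range(2)] for i in range(2)]
print(phiA)    # [[0, 0], [0, 0]]
```

For 2×2 matrices the coefficients of φ(s) are just the trace and the determinant, which keeps the check to a few lines; for larger matrices one would compute φ(s) from det[I s - A].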

Using (1.4.6), one has to compute w(A) for the matrix (1.4.5). The characteristic polynomial of this matrix is φ(s) = s^2 - 5s - 2. Dividing the polynomial w(s) by φ(s), we obtain

    w(s) = (s^2 + 1)(s^2 - 5s - 2) + 3s + 2,

that is,

    r(s) = 3s + 2.

Hence

    w(A) = r(A) = 3A + 2 I_2 = [5  6; 9  14].

The above considerations can be generalized to the case of square polynomial matrices.

Theorem Let W(s) ∈ F^{n×n}[s] be a square polynomial matrix of degree N and A ∈ F^{n×n}, where N ≥ n. Then there exists a polynomial matrix R(s) of degree less than n such that

    W_p(A) = R_p(A)  and  W_l(A) = R_l(A),  (1.4.7)

where W_p(A) and W_l(A) are the right-sided and left-sided values, respectively, of the matrix W(s) with A substituted in place of s.

Proof. Dividing the entries of the matrix W(s) by the characteristic polynomial φ(s) of A, we obtain

    W(s) = Q(s) φ(s) + R(s),

where Q(s) and R(s) are the quotient and remainder, respectively, of the division of W(s) by φ(s), and deg φ(s) = n > deg R(s). With A substituted in place of the scalar s and with (1.4.4) taken into account, we obtain

    W_p(A) = Q_p(A) φ(A) + R_p(A) = R_p(A)

and

    W_l(A) = Q_l(A) φ(A) + R_l(A) = R_l(A).

Example Given the polynomial matrix

    W(s) = [s^3  s^2; s^2+1  s+2],

one has to compute W_p(A) and W_l(A) for the matrix (1.4.5) using (1.4.7). Dividing every entry of W(s) by the characteristic polynomial φ(s) = s^2 - 5s - 2 of the matrix A, we obtain

    W(s) = [s+5  1; 1  0] φ(s) + [27s+10  5s+2; 5s+3  s+2],

i.e.,

    R(s) = [27s+10  5s+2; 5s+3  s+2] = [27  5; 5  1] s + [10  2; 3  2].

Hence

    W_p(A) = R_p(A) = [27  5; 5  1] A + [10  2; 3  2] = [52  76; 11  16]

and

    W_l(A) = R_l(A) = A [27  5; 5  1] + [10  2; 3  2] = [47  9; 104  21].

1.5 Elementary Operations on Polynomial Matrices

Definition The following operations are called elementary operations on a polynomial matrix A(s) ∈ F^{m×n}[s]:

1. Multiplication of any i-th row (column) by a nonzero number c.
2. Addition to any i-th row (column) of the j-th row (column) multiplied by any polynomial w(s).
3. The interchange of any two rows (columns), e.g., of the i-th and the j-th rows (columns).

From now on, we will use the following notation:
L[i×c] - multiplication of the i-th row by the number c,
P[i×c] - multiplication of the i-th column by the number c,
L[i+j×w(s)] - addition to the i-th row of the j-th row multiplied by the polynomial w(s),
P[i+j×w(s)] - addition to the i-th column of the j-th column multiplied by the polynomial w(s),
L[i, j] - the interchange of the i-th and the j-th rows,
P[i, j] - the interchange of the i-th and the j-th columns.

It is easy to verify that the above elementary operations, when carried out on rows, are equivalent to pre-multiplication of the matrix A(s) by the following matrices:

    L_m(i, c) ∈ F^{m×m} - the identity matrix I_m with the i-th diagonal entry replaced by the number c,
    L_d(i, j, w(s)) - the identity matrix I_m with the polynomial w(s) inserted at the entry (i, j),
    L_z(i, j) ∈ F^{m×m} - the identity matrix I_m with the i-th and the j-th rows interchanged.  (1.5.1)

The same operations carried out on columns are equivalent to post-multiplication of the matrix A(s) by the following matrices:

    P_m(i, c) ∈ F^{n×n} - the identity matrix I_n with the i-th diagonal entry replaced by the number c,
    P_d(i, j, w(s)) ∈ F^{n×n} - the identity matrix I_n with the polynomial w(s) inserted at the entry (j, i),
    P_z(i, j) ∈ F^{n×n} - the identity matrix I_n with the i-th and the j-th columns interchanged.  (1.5.2)

It is easy to verify that the determinants of the polynomial matrices (1.5.1) and (1.5.2) are nonzero and do not depend on the variable s. Such matrices are called unimodular matrices.

1.6 Linear Independence, Space Basis and Rank of Polynomial Matrices

Let a_i = a_i(s), i = 1,2,...,n, be the i-th column of a polynomial matrix A(s) ∈ F^{m×n}[s]. We will consider these columns as m-dimensional polynomial vectors, a_i ∈ F^m[s], i = 1,2,...,n.

Definition Vectors a_i ∈ F^m[s] are called linearly dependent over the field of rational functions F(s) if and only if there exist rational functions w_i = w_i(s) ∈ F(s), not all equal to zero, such that

    w_1 a_1 + w_2 a_2 + ... + w_n a_n = 0  (0 denotes the zero vector).  (1.6.1)

In other words, these vectors are called linearly independent over the field of rational functions if the equality (1.6.1) implies w_i = 0 for i = 1,2,...,n.

For example, the polynomial vectors

    a_1 = [1; s],  a_2 = [s; s+1]  (1.6.2)

are linearly independent over the field of rational functions, since the equation

    w_1 a_1 + w_2 a_2 = [1  s; s  s+1] [w_1; w_2] = [0; 0]

has only the zero solution

    [w_1; w_2] = [1  s; s  s+1]^{-1} [0; 0] = [0; 0].

We will show that the rational functions w_i, i = 1,2,...,n, in (1.6.1) can be replaced by polynomials p_i = p_i(s), i = 1,2,...,n. To accomplish this, we multiply both sides of (1.6.1) by the smallest common denominator of the rational functions w_i, i = 1,2,...,n. We then obtain

    p_1 a_1 + p_2 a_2 + ... + p_n a_n = 0,  (1.6.3)

where p_i = p_i(s) are polynomials. For example, the polynomial vectors

    a_1 = [1; s],  a_2 = [s+1; s^2+s]  (1.6.4)

are linearly dependent over the field of rational functions, since for

    w_1 = 1  and  w_2 = -1/(s+1),

we obtain

    w_1 a_1 + w_2 a_2 = [1; s] - (1/(s+1)) [s+1; s^2+s] = [0; 0].  (1.6.5)

Multiplying both sides of (1.6.5) by the smallest common denominator of the rational functions w_1 and w_2, which is equal to s + 1, we obtain

    (s+1) [1; s] - [s+1; s^2+s] = [0; 0].

If the number of polynomial vectors of the space F^n[s] is larger than n, then these vectors are linearly dependent. For example, adding to the two linearly independent vectors (1.6.2) an arbitrary vector

    a = [a_11; a_21] ∈ F^2[s],

we obtain linearly dependent vectors, i.e.,

    p_1 a_1 + p_2 a_2 + p_3 a = 0  (1.6.6)

for p_1, p_2, p_3 ∈ F(s) not simultaneously equal to zero. Assuming, for example, p_3 = -1, from (1.6.6) and (1.6.2) we obtain

    [1  s; s  s+1] [p_1; p_2] = [a_11; a_21]

and

    [p_1; p_2] = [1  s; s  s+1]^{-1} [a_11; a_21] = 1/(-s^2+s+1) [ (s+1) a_11 - s a_21 ; -s a_11 + a_21 ].

Thus the vectors a_1, a_2, a are linearly dependent for any vector a.

Definition Polynomial vectors b_i = b_i(s) ∈ F^n[s], i = 1,2,...,n, are called a basis of the space F^n[s] if they are linearly independent over the field of rational functions

and an arbitrary vector a ∈ F^n[s] from this space can be represented as a linear combination of these vectors, i.e.,

    a = p_1 b_1 + p_2 b_2 + ... + p_n b_n,  (1.6.7)

where p_i ∈ F[s], i = 1,2,...,n.

There exist many different bases of the same space. For example, for the space F^2[s] we can adopt the vectors (1.6.2) as a basis. Solving the system of equations

    [1  s; s  s+1] [p_1; p_2] = [a_11; a_21]

for an arbitrary vector

    a = [a_11; a_21] ∈ F^2[s],

we obtain

    [p_1; p_2] = [1  s; s  s+1]^{-1} [a_11; a_21] = 1/(-s^2+s+1) [ (s+1) a_11 - s a_21 ; -s a_11 + a_21 ].

As a basis of this space we can also adopt

    e_1 = [1; 0],  e_2 = [0; 1].

In this case, p_1 = a_11 and p_2 = a_21.

Definition The number of linearly independent rows (columns) of a polynomial matrix A(s) ∈ F^{n×m}[s] is called its normal rank (briefly, rank).

The rank of a polynomial matrix A(s) can also be equivalently defined as the highest order of a minor of this matrix that is a nonzero polynomial. The rank of a matrix A(s) ∈ F^{n×m}[s] is not greater than the number of its rows n or columns m, i.e.,

    rank A(s) ≤ min(n, m).  (1.6.8)

If a square matrix A(s) ∈ F^{n×n}[s] is of full rank, i.e., rank A(s) = n, then its determinant is a nonzero polynomial w(s), i.e.,

    det A(s) = w(s) ≠ 0.  (1.6.9)

Such a matrix is called nonsingular or invertible. It is called singular when det A(s) = 0 (the zero polynomial).

For example, the square matrix built from the linearly independent vectors (1.6.2) is nonsingular, since

    det [1  s; s  s+1] = -s^2 + s + 1 ≠ 0,

and the matrix built from the linearly dependent vectors (1.6.4) is singular, since

    det [1  s+1; s  s^2+s] = 0.

Theorem Elementary operations carried out on a polynomial matrix do not change its rank.

Proof. Let

    Ā(s) = L(s) A(s) P(s) ∈ F^{n×m}[s],  (1.6.10)

where L(s) ∈ F^{n×n}[s] and P(s) ∈ F^{m×m}[s] are unimodular matrices of elementary operations on rows and columns, respectively. From (1.6.10) we immediately have

    rank Ā(s) = rank [L(s) A(s) P(s)] = rank A(s),

since L(s) and P(s) are unimodular matrices.

For example, carrying out the operation L_d(2, 1, -s) on the rows of the matrix built from the columns (1.6.2), we obtain

    [1  0; -s  1] [1  s; s  s+1] = [1  s; 0  -s^2+s+1].

Both polynomial matrices

    [1  s; s  s+1]  and  [1  s; 0  -s^2+s+1]

are full rank matrices.
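The cleared-denominator dependence relation of (1.6.5) can be verified mechanically. The sketch below (plain Python; the vectors a_1 = [1, s]^T and a_2 = [s+1, s^2+s]^T are those read from (1.6.4), and the helper names are our own) checks that (s+1) a_1 - a_2 = 0.

```python
# Polynomials are coefficient lists [c_0, c_1, ...].

def poly_mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

a1 = [[1], [0, 1]]               # a_1 = [1, s]^T
a2 = [[1, 1], [0, 1, 1]]         # a_2 = [s+1, s^2+s]^T
p1, p2 = [1, 1], [-1]            # p_1 = s+1, p_2 = -1
combo = [poly_add(poly_mul(p1, a1[k]), poly_mul(p2, a2[k])) for k in range(2)]
print(combo)    # [[0, 0], [0, 0, 0]] -- the zero vector
```

Every entry of the combination vanishes identically, confirming that a_1 and a_2 are linearly dependent with polynomial (not merely rational) weights.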

1.7 Equivalence of Polynomial Matrices

1.7.1 Left and Right Equivalent Matrices

Definition Two polynomial matrices A(s), B(s) ∈ F^{m×n}[s] are called left (right) or row (column) equivalent if and only if one of them can be obtained from the other as a result of a finite number of elementary operations carried out on its rows (columns)

    B(s) = L(s) A(s)  or  B(s) = A(s) P(s),  (1.7.1)

where L(s) (P(s)) is the product of unimodular matrices of elementary operations on rows (columns).

Definition Two polynomial matrices A(s), B(s) ∈ F^{m×n}[s] are called equivalent if and only if one of them can be obtained from the other as a result of a finite number of elementary operations carried out on its rows and columns, i.e.,

    B(s) = L(s) A(s) P(s),  (1.7.2)

where L(s) and P(s) are the products of unimodular matrices of elementary operations on rows and columns, respectively.

Theorem A full rank polynomial matrix A(s) ∈ F^{m×l}[s] is left equivalent to an upper triangular matrix of the form

    Ā(s) = L(s) A(s) =
        [ ā_11(s)  ā_12(s)  ...  ā_1l(s) ]
        [    0     ā_22(s)  ...  ā_2l(s) ]
        [   ...      ...    ...    ...   ]
        [    0        0     ...  ā_ll(s) ]
        [    0        0     ...     0    ]
        [   ...      ...    ...    ...   ]
        [    0        0     ...     0    ]    for m > l,

    Ā(s) = L(s) A(s) =
        [ ā_11(s)  ā_12(s)  ...  ā_1m(s) ]
        [    0     ā_22(s)  ...  ā_2m(s) ]
        [   ...      ...    ...    ...   ]
        [    0        0     ...  ā_mm(s) ]    for m = l,  (1.7.3)

    Ā(s) = L(s) A(s) =
        [ ā_11(s)  ā_12(s)  ...  ā_1m(s)  ...  ā_1l(s) ]
        [    0     ā_22(s)  ...  ā_2m(s)  ...  ā_2l(s) ]
        [   ...      ...    ...    ...    ...    ...   ]
        [    0        0     ...  ā_mm(s)  ...  ā_ml(s) ]    for m < l,

where the elements ā_1i(s), ā_2i(s), ..., ā_{i-1,i}(s) are polynomials of degree less than that of ā_ii(s) for i = 1,2,...,m, and L(s) is the product of the matrices of elementary operations carried out on rows.

Proof. Among the nonzero entries of the first column of the matrix A(s), we choose the entry that is a polynomial of the lowest degree and, carrying out L[i, j], we move this entry to the position (1,1). Denote this entry by a_11(s). Then we divide all the remaining entries of the first column by a_11(s). We obtain

    a_i1(s) = a_11(s) q_i1(s) + r_i1(s)  for i = 2,3,...,m,

where q_i1(s) is the quotient and r_i1(s) the remainder of the division of the polynomial a_i1(s) by a_11(s). Carrying out L[i+1×(-q_i1(s))], we replace the entry a_i1(s) with the remainder r_i1(s). If not all the remainders are equal to zero, then we choose the one that is a polynomial of the lowest degree and, carrying out operations L[i, j], we move it to the position (1,1). Denoting this remainder by r_11(s), we repeat the above procedure, taking the remainder r_11(s) instead of a_11(s). The degree of r_11(s) is lower than the degree of a_11(s). After a finite number of steps, we obtain a matrix A'(s) of the form

    A'(s) =
        [ a'_11(s)  a'_12(s)  ...  a'_1l(s) ]
        [    0      a'_22(s)  ...  a'_2l(s) ]
        [   ...       ...     ...    ...    ]
        [    0      a'_m2(s)  ...  a'_ml(s) ].

We repeat the above procedure for the first column of the submatrix obtained from the matrix A'(s) by deleting the first row and the first column. We then obtain a matrix of the form

    Â(s) =
        [ a'_11(s)  a'_12(s)  â_13(s)  ...  â_1l(s) ]
        [    0      â_22(s)   â_23(s)  ...  â_2l(s) ]
        [    0         0      â_33(s)  ...  â_3l(s) ]
        [   ...       ...       ...    ...    ...   ]
        [    0         0      â_m3(s)  ...  â_ml(s) ].

If a'_12(s) is not a polynomial of lower degree than that of â_22(s), then we divide a'_12(s) by â_22(s) and, carrying out L[1+2×(-q_12(s))], we replace the entry

a'_12(s) with the entry ā_12(s) = r_12(s), where q_12(s) and r_12(s) are the quotient and the remainder of the division of a'_12(s) by â_22(s), respectively. Next, we consider the submatrix obtained from the matrix Â(s) by removing the first two rows and the first two columns. Continuing this procedure, we obtain the matrix (1.7.3).

An algorithm for determining the left equivalent matrix of the form (1.7.3) follows immediately from the above proof.

Example The matrix

    A(s) = [ 1      s     ]
           [ s+1    s^2   ]
           [ s^2    s^3+1 ]

is to be transformed to the left equivalent form (1.7.3). To accomplish this, we carry out the following elementary operations:

    A(s)  --L[2+1×(-(s+1))]-->  [ 1  s; 0  -s; s^2  s^3+1 ]
          --L[3+1×(-s^2)]-->    [ 1  s; 0  -s; 0  1 ]
          --L[2,3]-->           [ 1  s; 0  1; 0  -s ]
          --L[3+2×s]-->         [ 1  s; 0  1; 0  0 ]
          --L[1+2×(-s)]-->      [ 1  0; 0  1; 0  0 ].

Theorem A full rank polynomial matrix A(s) ∈ F^{m×l}[s] is right equivalent to a lower triangular matrix of the form

    Ā(s) = A(s) P(s) =
        [ ā_11(s)     0       ...      0     ]
        [ ā_21(s)  ā_22(s)    ...      0     ]
        [   ...      ...      ...     ...    ]
        [ ā_m1(s)  ā_m2(s)    ...   ā_mm(s)  ]    for m = l,

    Ā(s) = A(s) P(s) =
        [ ā_11(s)     0       ...      0      0 ... 0 ]
        [ ā_21(s)  ā_22(s)    ...      0      0 ... 0 ]
        [   ...      ...      ...     ...    ... ... ]
        [ ā_m1(s)  ā_m2(s)    ...   ā_mm(s)   0 ... 0 ]    for m < l,  (1.7.4)

    Ā(s) = A(s) P(s) =
        [ ā_11(s)     0       ...      0     ]
        [ ā_21(s)  ā_22(s)    ...      0     ]
        [   ...      ...      ...     ...    ]
        [ ā_l1(s)  ā_l2(s)    ...   ā_ll(s)  ]
        [   ...      ...      ...     ...    ]
        [ ā_m1(s)  ā_m2(s)    ...   ā_ml(s)  ]    for m > l,  (1.7.4)

where the elements ā_i1(s), ā_i2(s), ..., ā_{i,i-1}(s) are polynomials of lower degree than that of ā_ii(s) for i = 1,2,...,l, and P(s) is the product of unimodular matrices of elementary operations carried out on columns.

1.7.2 Row and Column Reduced Matrices

The degree of the i-th column (row) of a polynomial matrix is the highest degree of a polynomial that is an entry of this column (row). The degree of the i-th column (row) of the matrix A(s) will be denoted by deg c_i[A(s)] (deg r_i[A(s)]), or shortly deg c_i (deg r_i). Let L_c (L_r) be the matrix built from the coefficients of the highest powers of the variable s in the columns (rows) of the matrix A(s). For example, for the polynomial matrix

    A(s) = [ s^2+1   s     3s  ]
           [ s^2     1     s   ]
           [ s       s+1   s+1 ],  (1.7.5)

we have

    deg c_1 = 2,  deg c_2 = deg c_3 = 1,  deg r_1 = deg r_2 = 2,  deg r_3 = 1

and

    L_c = [1  1  3; 1  0  1; 0  1  1],  L_r = [1  0  0; 1  0  0; 1  1  1].

The matrix (1.7.5) can be written, using the above matrices, as follows

    A(s) = [1  1  3; 1  0  1; 0  1  1] diag(s^2, s, s) + [1  0  0; 0  1  0; s  1  1]

or

A(s) = diag[s^2, s^2, s^2] L_r + A_r(s),  A_r(s) = [1, s, 3s; 0, s, s; s, 0, s+1].

In the general case, for a matrix A(s) of F^{m x n}[s] we have

A(s) = L_c diag[s^{deg c_1}, s^{deg c_2}, ..., s^{deg c_n}] + A_l(s)   (1.7.6)

and

A(s) = diag[s^{deg r_1}, s^{deg r_2}, ..., s^{deg r_m}] L_r + A_r(s),   (1.7.7)

where A_l(s), A_r(s) are polynomial matrices satisfying the conditions

deg c_i[A_l(s)] < deg c_i[A(s)],  deg r_i[A_r(s)] < deg r_i[A(s)].

If m = n and det L_c is nonzero, then the determinant of the matrix (1.7.6) is a polynomial of the degree

k = deg c_1 + deg c_2 + ... + deg c_n,

since

det A(s) = det L_c * s^k + (terms of degree lower than k).

Similarly, if det L_r is nonzero, then the determinant of the matrix (1.7.7) is a polynomial of the degree

k = deg r_1 + deg r_2 + ... + deg r_m.

Definition. A polynomial matrix A(s) is said to be column (row) reduced if and only if its matrix L_c (L_r) is a full rank matrix. Thus, a square matrix A(s) is column (row) reduced if and only if det L_c (det L_r) is nonzero. For example, the matrix (1.7.5) is column reduced but not row reduced.
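The quantities deg c_i and L_c above are easy to compute mechanically. The sketch below (plain Python; polynomials as coefficient lists, lowest power first) checks column reducedness for a 3x3 matrix with the same entries as the example (1.7.5) given here. The representation and helper names are ad hoc choices, not from the book.

```python
from fractions import Fraction

# A polynomial is a coefficient list, lowest power first: s^2 + 1 -> [1, 0, 1].
def deg(p):
    """Degree of a polynomial; -1 for the zero polynomial."""
    d = -1
    for k, c in enumerate(p):
        if c != 0:
            d = k
    return d

def coeff(p, k):
    return p[k] if 0 <= k < len(p) else 0

def column_degrees(A):
    """deg c_j[A(s)]: the highest entry degree in each column."""
    return [max(deg(row[j]) for row in A) for j in range(len(A[0]))]

def leading_column_matrix(A):
    """L_c: the coefficients at s^(deg c_j), taken column-wise."""
    dc = column_degrees(A)
    return [[coeff(row[j], dc[j]) for j in range(len(A[0]))] for row in A]

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            d = -d
        d *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return d

A = [[[1, 0, 1], [0, 1],    [0, 3]],     # [s^2+1,  s,      3s ]
     [[0, 0, 1], [0, 1, 1], [0, 1]],     # [s^2,    s^2+s,  s  ]
     [[0, 1],    [0, 0, 1], [1, 1]]]     # [s,      s^2,    s+1]

print(column_degrees(A))                 # [2, 2, 1]
print(det(leading_column_matrix(A)))     # 3, nonzero: column reduced
```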

Indeed,

det L_c = det [1, 0, 3; 1, 1, 1; 0, 1, 1] = 3 ≠ 0,  det L_r = det [1, 0, 0; 1, 1, 0; 0, 1, 0] = 0.

From the above considerations and the two preceding theorems, the following important corollary immediately follows.

Corollary. Carrying out elementary operations only on columns or only on rows, it is possible to transform a nonsingular polynomial matrix to a column reduced form or to a row reduced form, respectively.

1.8 Reduction of Polynomial Matrices to the Smith Canonical Form

Consider a polynomial matrix A(s) of F^{m x n}[s] of rank r.

Definition. A polynomial matrix of the form

A_S(s) = [ diag[i_1(s), i_2(s), ..., i_r(s)]  0
           0                                  0 ] of F^{m x n}[s],  r <= min(m, n),   (1.8.1)

is called the Smith canonical form of the matrix A(s), where i_1(s), i_2(s), ..., i_r(s) are nonzero monic polynomials (their coefficients at the highest powers of the variable s are equal to one), called the invariant polynomials, such that the polynomial i_{k+1}(s) is divisible without remainder by the polynomial i_k(s), i.e., i_k(s) | i_{k+1}(s) for k = 1, 2, ..., r-1.

Theorem. For an arbitrary polynomial matrix A(s) of F^{m x n}[s] of rank r (r <= min(m, n)) there exists an equivalent Smith canonical form (1.8.1).

Proof. Among the entries of the matrix A(s) we find a nonzero one that is a polynomial of the lowest degree with respect to s and, by interchanging rows and columns, we move it to the position (1,1). Denote this entry by a_11(s). Assume first that all entries of the matrix A(s) are divisible without remainder by a_11(s). Dividing the entries a_i1(s) of the first column and a_1j(s) of the first row by a_11(s), we obtain

a_i1(s) = a_11(s) q_i1(s)  (i = 2, 3, ..., m),
a_1j(s) = a_11(s) q_1j(s)  (j = 2, 3, ..., n),

where q_i1(s) and q_1j(s) are the quotients from the division of a_i1(s) and a_1j(s) by a_11(s), respectively. Subtracting from the i-th row (i = 2, 3, ..., m) the first row multiplied by q_i1(s) and, respectively, from the j-th column (j = 2, 3, ..., n) the first column multiplied by q_1j(s), we obtain a matrix of the form

[ a_11(s)  0        ...  0
  0        a_22(s)  ...  a_2n(s)
  ...
  0        a_m2(s)  ...  a_mn(s) ].   (1.8.2)

If the coefficient at the highest power of s of the polynomial a_11(s) is not equal to 1, then we multiply the first row (or column) by the reciprocal of this coefficient.

Assume next that not all entries of the matrix A(s) are divisible without remainder by a_11(s) and that such entries are placed in the first row and the first column. Dividing the entries of the first row and the first column by a_11(s), we obtain

a_1i(s) = a_11(s) q_1i(s) + r_1i(s)  (i = 2, 3, ..., n),
a_j1(s) = a_11(s) q_j1(s) + r_j1(s)  (j = 2, 3, ..., m),

where q_1i(s), q_j1(s) are the quotients and r_1i(s), r_j1(s) are the remainders of the division of a_1i(s) and a_j1(s) by a_11(s), respectively. Subtracting from the j-th row (i-th column) the first row (column) multiplied by q_j1(s) (by q_1i(s)), we replace the entry a_j1(s) (a_1i(s)) by the remainder r_j1(s) (r_1i(s)). Next, among these remainders we find a polynomial of the lowest degree with respect to s and, interchanging rows and columns, we move it to the position (1,1). We denote this polynomial by r_11(s). If not all entries of the first row and the first column are divisible without remainder by r_11(s), then we repeat this procedure, taking the polynomial r_11(s) instead of the polynomial a_11(s). The degree of the polynomial r_11(s) is lower than the degree of a_11(s). After a finite number of steps, we obtain in the position (1,1) a polynomial that divides without remainder all the entries of the first row and the first column.
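Each elimination step above relies on the division with remainder (1.1.8): a(s) = b(s)q(s) + r(s) with deg r(s) < deg b(s). A minimal sketch of that division over the rationals (coefficient lists, lowest power first; an illustrative helper, not the book's code):

```python
from fractions import Fraction

def deg(p):
    """Degree of p; -1 for the zero polynomial. Coefficients lowest power first."""
    d = -1
    for k, c in enumerate(p):
        if c != 0:
            d = k
    return d

def poly_divmod(a, b):
    """Return (q, r) with a(s) = b(s) q(s) + r(s) and deg r < deg b."""
    a = [Fraction(c) for c in a]
    db = deg(b)
    if db < 0:
        raise ZeroDivisionError("division by the zero polynomial")
    q = [Fraction(0)] * max(deg(a) - db + 1, 1)
    while deg(a) >= db:
        k = deg(a) - db
        f = a[deg(a)] / b[db]      # cancel the current leading term
        q[k] = f
        for i in range(db + 1):
            a[i + k] -= f * b[i]
    r = a
    while len(r) > 1 and r[-1] == 0:
        r.pop()
    return q, r

q, r = poly_divmod([1, 0, 1], [1, 1])   # divide s^2 + 1 by s + 1
print(q, r)                             # q = s - 1, r = 2
```

so that (s + 1)(s - 1) + 2 = s^2 + 1, with deg r = 0 < deg b = 1.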
If an entry a_ik(s) is not divisible by a_11(s), then by adding the i-th row to the first row (or the k-th column to the first column), we reduce this case to the previous one. Repeating this procedure, we finally obtain in the position (1,1) a polynomial that divides without remainder all the entries of the matrix. Further, we proceed in the same way as in the first case, when all the entries of the matrix are divisible without remainder by a_11(s).

If not all entries a_ij(s) (i = 2, 3, ..., m; j = 2, 3, ..., n) of the matrix (1.8.2) are equal to zero, then we find a nonzero entry among them that is a polynomial of the lowest degree with respect to s and, interchanging rows and columns, we move it to the position (2,2). Proceeding further as above, we obtain a matrix of the form

[ a_11(s)  0        0        ...  0
  0        a_22(s)  0        ...  0
  0        0        a_33(s)  ...  a_3n(s)
  ...
  0        0        a_m3(s)  ...  a_mn(s) ],

where a_22(s) is divisible without remainder by a_11(s), and all entries a_ij(s) (i = 3, 4, ..., m; j = 3, 4, ..., n) are divisible without remainder by a_22(s). Continuing this procedure, we obtain a matrix of the Smith canonical form (1.8.1).

From this proof, the following algorithm for determining the Smith canonical form follows immediately, as illustrated by the following example.

Example. To transform the polynomial matrix

A(s) = [ (s+2)^2      (s+2)(s+3)   s+2
         (s+2)(s+3)   (s+2)^2      s+3 ]   (1.8.3)

to the Smith canonical form, we carry out the following elementary operations.

Step 1: We carry out the operation P[1, 3] (interchange of the first and third columns)

A_1(s) = [ s+2   (s+2)(s+3)   (s+2)^2
           s+3   (s+2)^2      (s+2)(s+3) ].

All entries of this matrix are divisible without remainder by s+2, with the exception of the entry s+3.

Step 2: Taking into account the equality s+3 = (s+2) + 1, we carry out the operation L[2+1*(-1)]

A_2(s) = [ s+2   (s+2)(s+3)   (s+2)^2
           1     -(s+2)       s+2     ].

Step 3: We carry out the operation L[1, 2] (interchange of rows)

A_3(s) = [ 1     -(s+2)       s+2
           s+2   (s+2)(s+3)   (s+2)^2 ].

Step 4: We carry out the operations P[2+1*(s+2)] and P[3+1*(-s-2)]

A_4(s) = [ 1     0             0
           s+2   (s+2)(2s+5)   0 ].

Step 5: We carry out the operations L[2+1*(-s-2)] and P[2*(1/2)]

A_5(s) = [ 1   0              0
           0   (s+2)(s+2.5)   0 ].

This matrix is the desired Smith canonical form of (1.8.3).

From the divisibility of the invariant polynomials, i_k(s) | i_{k+1}(s), k = 1, ..., r-1, it follows that there exist polynomials d_1, d_2, ..., d_r such that

i_1 = d_1,  i_2 = d_1 d_2,  ...,  i_r = d_1 d_2 ... d_r.

Hence the matrix (1.8.1) can be written in the form

A_S(s) = diag[ d_1, d_1 d_2, ..., d_1 d_2 ... d_r, 0, ..., 0 ].   (1.8.1a)

Theorem. The invariant polynomials i_1(s), i_2(s), ..., i_r(s) of the matrix (1.8.1) are uniquely determined by the relationship

i_k(s) = D_k(s) / D_{k-1}(s)  for k = 1, 2, ..., r,   (1.8.4)

where D_k(s) is the greatest common divisor of all minors of degree k of the matrix A(s) (D_0(s) = 1).
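The reduction just carried out uses only the Euclidean division property of the polynomial ring. The integers form another Euclidean domain, so the very same procedure (move a smallest nonzero entry to the pivot, reduce by remainders, enforce divisibility) can be sketched there with plain machine arithmetic. This is an illustrative implementation, not an algorithm taken from the book.

```python
def smith_normal_form(M):
    """Bring an integer matrix to Smith form by elementary row/column
    operations; Z, like F[s], is a Euclidean domain, so the same
    pivot-and-reduce procedure applies (illustrative sketch)."""
    A = [list(row) for row in M]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # move a nonzero entry of minimal absolute value to the pivot (t, t)
        entries = [(abs(A[i][j]), i, j)
                   for i in range(t, m) for j in range(t, n) if A[i][j] != 0]
        if not entries:
            break                        # the remaining submatrix is zero
        _, pi, pj = min(entries)
        A[t], A[pi] = A[pi], A[t]
        for row in A:
            row[t], row[pj] = row[pj], row[t]
        restart = False
        # reduce column t by remainders; a nonzero remainder forces a re-pivot
        for i in range(t + 1, m):
            if A[i][t] % A[t][t] != 0:
                q = A[i][t] // A[t][t]
                for j in range(t, n):
                    A[i][j] -= q * A[t][j]
                restart = True
        if restart:
            continue
        for i in range(t + 1, m):        # every A[i][t] is now divisible: clear it
            q = A[i][t] // A[t][t]
            for j in range(t, n):
                A[i][j] -= q * A[t][j]
        # same for row t, using column operations
        for j in range(t + 1, n):
            if A[t][j] % A[t][t] != 0:
                q = A[t][j] // A[t][t]
                for i in range(t, m):
                    A[i][j] -= q * A[i][t]
                restart = True
        if restart:
            continue
        for j in range(t + 1, n):
            q = A[t][j] // A[t][t]
            for i in range(t, m):
                A[i][j] -= q * A[i][t]
        # the pivot must divide every remaining entry (cf. i_k | i_{k+1})
        fixed = True
        for i in range(t + 1, m):
            for j in range(t + 1, n):
                if A[i][j] % A[t][t] != 0:
                    for k in range(t, n):
                        A[t][k] += A[i][k]
                    fixed = False
                    break
            if not fixed:
                break
        if not fixed:
            continue
        if A[t][t] < 0:
            A[t][t] = -A[t][t]
        t += 1
    return A

print(smith_normal_form([[2, 4, 4], [-6, 6, 12], [10, -4, -16]]))
# -> [[2, 0, 0], [0, 6, 0], [0, 0, 12]], with 2 | 6 | 12
```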

Proof. We will show that elementary operations do not change D_k(s). An elementary operation 1), consisting of the multiplication of an i-th row (column) by a number c ≠ 0, causes the multiplication by c of those minors that contain this row (column). Thus this operation does not change D_k(s). An elementary operation 2), consisting of the addition to an i-th row (column) of the j-th row (column) multiplied by a polynomial w(s), does not change D_k(s) if a minor of degree k contains both the i-th and the j-th row, or contains neither of them. If a minor of degree k contains the i-th row but not the j-th row, then we can represent it as a linear combination of two minors of degree k of the matrix A(s). Hence the greatest common divisor of the minors of degree k does not change. Finally, an operation 3), consisting of the interchange of the i-th and j-th rows (columns), does not change D_k(s) either, since as a result of this operation a minor of degree k either does not change (neither of the two rows (columns) belongs to this minor), or changes only its sign (both rows belong to this minor), or is replaced by another minor of degree k of the matrix A(s) (only one of these rows belongs to this minor). Thus the equivalent matrices A(s) and A_S(s) have the same greatest common divisors D_1(s), D_2(s), ..., D_r(s). From the Smith canonical form (1.8.1) it follows that

D_1(s) = i_1(s),
D_2(s) = i_1(s) i_2(s),
...
D_r(s) = i_1(s) i_2(s) ... i_r(s).   (1.8.5)

From (1.8.5) we immediately obtain the formula (1.8.4).

Using the polynomials d_1, d_2, ..., d_r, we can write the relationships (1.8.5) in the form

D_1(s) = d_1,  D_2(s) = d_1^2 d_2,  ...,  D_r(s) = d_1^r d_2^{r-1} ... d_r.   (1.8.6)

From the definition of the Smith canonical form and the above theorems, the following important corollary can be derived.

Corollary. Two matrices A(s), B(s) of F^{m x n}[s] are equivalent if and only if they have the same invariant polynomials.
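The relationship (1.8.4) can be checked numerically in the integer-matrix analogue, where D_k is the greatest common divisor of all k-by-k minors. The matrix below is a standard small test case, not one from the book.

```python
from fractions import Fraction
from itertools import combinations
from math import gcd

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            d = -d
        d *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return d

def D(M, k):
    """D_k: greatest common divisor of all k-by-k minors of an integer matrix."""
    g = 0
    for rows in combinations(range(len(M)), k):
        for cols in combinations(range(len(M[0])), k):
            minor = det([[M[i][j] for j in cols] for i in rows])
            g = gcd(g, abs(int(minor)))
    return g

M = [[2, 4, 4], [-6, 6, 12], [10, -4, -16]]
Ds = [1] + [D(M, k) for k in (1, 2, 3)]
invariants = [Ds[k] // Ds[k - 1] for k in (1, 2, 3)]
print(Ds[1:], invariants)   # [2, 12, 144] [2, 6, 12]
```

The quotients D_k / D_{k-1} reproduce the invariant factors 2, 6, 12 of this matrix, each dividing the next.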

1.9 Elementary Divisors and Zeros of Polynomial Matrices

1.9.1 Elementary Divisors

Consider a polynomial matrix A(s) of F^{m x n}[s] of rank r, whose Smith canonical form A_S(s) is given by the formula (1.8.1). Let the k-th invariant polynomial of this matrix be of the form

i_k(s) = (s - s_1)^{m_k1} (s - s_2)^{m_k2} ... (s - s_q)^{m_kq}.   (1.9.1)

From the divisibility of the polynomial i_{k+1}(s) by the polynomial i_k(s) it follows that

m_{r,1} >= m_{r-1,1} >= ... >= m_{1,1} >= 0,  ...,  m_{r,q} >= m_{r-1,q} >= ... >= m_{1,q} >= 0.   (1.9.2)

If, for example, i_1(s) = 1, then m_11 = m_12 = ... = m_1q = 0.

Definition. Each of the expressions (different from 1)

(s - s_1)^{m_11}, (s - s_2)^{m_12}, ..., (s - s_q)^{m_rq}

appearing in the invariant polynomials (1.9.1) is called an elementary divisor of the matrix A(s). For example, the elementary divisors of the polynomial matrix (1.8.3) are (s+2) and (s+2.5).

The elementary divisors of a polynomial matrix are uniquely determined. This follows immediately from the uniqueness of the invariant polynomials of polynomial matrices. Equivalent polynomial matrices possess the same elementary divisors. For a polynomial matrix of known dimensions, its rank together with its elementary divisors uniquely determines its Smith canonical form. For example, knowing the elementary divisors

s - 1, s - 1, s - 1, (s - 2), (s - 2), (s - 3)

of a polynomial matrix, its rank r = 4 and its dimension 4x4, we can write the Smith canonical form of this polynomial matrix

A_S(s) = diag[ 1, s - 1, (s - 1)(s - 2), (s - 1)(s - 2)(s - 3) ].   (1.9.3)
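Reassembling invariant polynomials from a list of elementary divisors, as done for (1.9.3), is a bookkeeping exercise: for each root, the highest multiplicity goes into i_r, the next into i_{r-1}, and so on, which automatically satisfies (1.9.2). A sketch, with divisors encoded as (root, multiplicity) pairs; the encoding is an ad hoc choice for illustration.

```python
from collections import defaultdict

def invariant_polynomials(elementary_divisors, r):
    """Group elementary divisors, given as (root, multiplicity) pairs, into
    r invariant polynomials: for each root the highest multiplicity is
    assigned to i_r, the next to i_{r-1}, and so on (cf. (1.9.2))."""
    by_root = defaultdict(list)
    for root, mult in elementary_divisors:
        by_root[root].append(mult)
    invariants = [dict() for _ in range(r)]   # i_1 ... i_r as {root: power}
    for root, mults in by_root.items():
        for k, mult in enumerate(sorted(mults, reverse=True)):
            invariants[r - 1 - k][root] = mult
    return invariants

# the divisors s-1, s-1, s-1, s-2, s-2, s-3 of the 4x4 example
divs = [(1, 1), (1, 1), (1, 1), (2, 1), (2, 1), (3, 1)]
print(invariant_polynomials(divs, 4))
# i_1 = 1, i_2 = (s-1), i_3 = (s-1)(s-2), i_4 = (s-1)(s-2)(s-3)
```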

Consider a polynomial block-diagonal matrix of the form

A(s) = diag[A_1(s), A_2(s)] = [ A_1(s)  0
                                0       A_2(s) ].   (1.9.4)

Let A_kS(s) be the Smith canonical form of the matrix A_k(s), k = 1, 2. Taking into account that equivalent polynomial matrices have the same elementary divisors, we establish that the set of elementary divisors of the matrix (1.9.4) is the union of the sets of elementary divisors of the matrices A_k(s), k = 1, 2.

Example. Determine the elementary divisors of the block-diagonal matrix (1.9.4) for

A_1(s) = [ s-1  1    0
           0    s-1  1
           0    0    s-1 ],   A_2(s) = [ s-1  1
                                         0    s-2 ].   (1.9.5)

It is easy to check that the Smith canonical forms of the matrices (1.9.5) are

A_1S(s) = diag[ 1, 1, (s-1)^3 ],   A_2S(s) = diag[ 1, (s-1)(s-2) ].   (1.9.6)

The elementary divisors of the matrices (1.9.5) are thus (s-1)^3 for A_1(s), and (s-1) and (s-2) for A_2(s). It is easy to show that the Smith canonical form of the matrix (1.9.4) with the blocks (1.9.5) is equal to

A_S(s) = diag[ 1, 1, 1, s-1, (s-1)^3 (s-2) ]   (1.9.7)

and its elementary divisors are (s-1), (s-1)^3, (s-2).

Consider a matrix A of F^{n x n} and its corresponding polynomial matrix [I_n s - A]. Let

[I_n s - A]_S = diag[ i_1(s), i_2(s), ..., i_n(s) ],   (1.9.8)

where

i_k(s) = (s - s_1)^{m_k1} (s - s_2)^{m_k2} ... (s - s_q)^{m_kq},  k = 1, ..., n,   (1.9.9)

and s_1, s_2, ..., s_q, q <= n, are the eigenvalues of the matrix A.

Definition. Each of the expressions (different from 1)

(s - s_1)^{m_11}, (s - s_2)^{m_12}, ..., (s - s_q)^{m_nq}

appearing in the invariant polynomials (1.9.9) is called an elementary divisor of the matrix A. The elementary divisors of the matrix A are uniquely determined and they determine its essential structural properties.

1.9.2 Zeros of Polynomial Matrices

Consider a polynomial matrix A(s) of F^{m x n}[s] of rank r, whose Smith canonical form is given by (1.8.1). From (1.8.5) it follows that

D_r(s) = i_1(s) i_2(s) ... i_r(s).   (1.9.10)

Definition. The zeros of the polynomial (1.9.10) are called the zeros of the polynomial matrix A(s).

The zeros of the polynomial matrix A(s) can be equivalently defined as those values of the variable s for which this matrix loses its full (normal) rank. For example, for the polynomial matrix (1.8.3) we have

D_r(s) = (s + 2)(s + 2.5).

Thus the zeros of this matrix are s_1 = -2, s_2 = -2.5. It is easy to verify that for these values of the variable s, the matrix (1.8.3) (whose normal rank is equal to 2) has a rank equal to 1.

If the polynomial matrix A(s) is square and of full rank r = n, then

det A(s) = c D_r(s),  where c ≠ 0 is a constant coefficient independent of s,   (1.9.11)

and the zeros of this matrix coincide with the roots of its characteristic equation det A(s) = 0. For example, for the first of the matrices (1.9.5) we have

det A_1(s) = det [ s-1  1    0
                   0    s-1  1
                   0    0    s-1 ] = (s - 1)^3.

Thus this matrix has the zero s = 1 of multiplicity 3.
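The characterization of zeros as the points where the normal rank drops can be verified numerically. The sketch below evaluates a matrix with the same entries as (1.8.3) given here at sample points and computes the exact rank over the rationals; the helper names are ad hoc.

```python
from fractions import Fraction

def rank(M):
    """Exact rank by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    m, n, r = len(M), len(M[0]), 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][c] / M[r][c]
            for j in range(c, n):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def A(s):
    """The matrix (1.8.3) evaluated at the point s."""
    s = Fraction(s)
    return [[(s + 2)**2, (s + 2)*(s + 3), s + 2],
            [(s + 2)*(s + 3), (s + 2)**2, s + 3]]

print(rank(A(0)))                              # 2: the normal rank
print(rank(A(-2)), rank(A(Fraction(-5, 2))))   # 1 1: the rank drops at the zeros
```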

The same result is obtained from (1.9.10), since D_r(s) = (s - 1)^3 for A_1S(s).

Theorem. Let a polynomial matrix A(s) of F^{m x n}[s] have (normal) rank r <= min(m, n). Then

rank A(s) = r for s not in Z_A,   rank A(s_i) = r - d_i for s_i in Z_A,   (1.9.12)

where Z_A is the set of the zeros of the matrix A(s) and d_i is the number of distinct elementary divisors containing s_i.

Proof. By the definition of a zero, the matrix A(s) does not lose its full rank if we substitute in place of the variable s a number that does not belong to the set Z_A, i.e., rank A(s) = r for s not in Z_A. Elementary operations do not change the rank of a polynomial matrix. In view of this, rank A(s) = rank A_S(s) = r, where r is the number of the nonzero invariant polynomials (including those equal to 1). If an invariant polynomial contains the factor (s - s_i), then this polynomial is equal to zero for s = s_i. Thus we have rank A(s_i) = r - d_i for s_i in Z_A, since the number of invariant polynomials containing s_i is equal to the number of distinct elementary divisors containing s_i.

For instance, the polynomial matrix (1.9.3) of full column rank has one elementary divisor containing s_1 = 3, two elementary divisors containing s_2 = 2 and three elementary divisors containing s_3 = 1. In view of this, according to (1.9.12), we have

rank A_S(3) = 3,  rank A_S(2) = 2,  rank A_S(1) = 1.

Remark. A unimodular matrix U(s) of F^{n x n}[s] does not have any zeros, since det U(s) = c, where c is a certain nonzero constant independent of the variable s.

Theorem. An arbitrary rectangular polynomial matrix A(s) of F^{m x n}[s] of full rank that does not have any zeros can be written in the form

A(s) = [I_m  0] P(s)  for m < n,   A(s) = L(s) [ I_n
                                                 0   ]  for m > n,   (1.9.13)

where P(s) of F^{n x n}[s] and L(s) of F^{m x m}[s] are unimodular matrices.

Proof. If m < n and the matrix does not have any zeros, then applying elementary operations on columns we can bring this matrix to the form [I_m  0].

Similarly, if m > n and the matrix does not have any zeros, then applying elementary operations on rows we can bring it to the form [ I_n; 0 ].

Remark. From the relationship (1.9.13) it follows that a polynomial matrix built from an arbitrary number of rows or columns of a matrix that does not have any zeros never has any zeros.

Theorem. An arbitrary polynomial matrix A(s) of F^{m x n}[s] of rank r <= min(m, n) having zeros can be presented in the form of the product of matrices

A(s) = B(s) C(s),   (1.9.14)

where B(s) = L^{-1}(s) diag[ i_1(s), ..., i_r(s), 0, ..., 0 ] of F^{m x m}[s] is a matrix containing all the zeros of the matrix A(s), and

C(s) = [I_m  0] P^{-1}(s)  for n > m,
C(s) = P^{-1}(s)           for n = m,
C(s) = [ I_n
         0   ] P^{-1}(s)   for n < m.   (1.9.15)

Proof. Let L(s) of F^{m x m}[s] and P(s) of F^{n x n}[s] be the unimodular matrices of the elementary operations on rows and on columns, respectively, reducing the matrix A(s) to the Smith canonical form A_S(s), i.e.,

A_S(s) = L(s) A(s) P(s).   (1.9.16)

Pre-multiplying (1.9.16) by L^{-1}(s) and post-multiplying it by P^{-1}(s), we obtain

A(s) = L^{-1}(s) A_S(s) P^{-1}(s) = B(s) C(s),

since

A_S(s) = diag[ i_1(s), ..., i_r(s), 0, ..., 0 ] [I_m  0]  for n > m,
A_S(s) = diag[ i_1(s), ..., i_r(s), 0, ..., 0 ]           for n = m,
A_S(s) = diag[ i_1(s), ..., i_r(s), 0, ..., 0 ] [ I_n
                                                  0   ]   for n < m.

From (1.9.15) it follows that the matrix C(s) of F^{m x n}[s] does not have any zeros, since the matrix P^{-1}(s) is a unimodular matrix.

1.10 Similarity and Equivalence of First Degree Polynomial Matrices

Definition. Two square matrices A and B of the same dimension are said to be similar if and only if there exists a nonsingular matrix P such that

B = P^{-1} A P,   (1.10.1)

and the matrix P is called a similarity transformation matrix.

Theorem. Similar matrices have the same characteristic polynomials, i.e.,

det [sI - B] = det [sI - A].   (1.10.2)

Proof. Taking into account (1.10.1), we can write

det [sI - B] = det [s P^{-1} P - P^{-1} A P] = det ( P^{-1} [sI - A] P )
             = det P^{-1} det [sI - A] det P = det [sI - A],

since det P^{-1} = (det P)^{-1}.

Theorem. The polynomial matrices [sI - A] and [sI - B] are equivalent if and only if the matrices A and B are similar.

Proof. First we show that if the matrices A and B are similar, then the polynomial matrices [sI - A] and [sI - B] are equivalent. If the matrices A and B are similar, i.e., they satisfy the relationship (1.10.1), then

[sI - B] = s P^{-1} P - P^{-1} A P = P^{-1} [sI - A] P.

This relationship is a special case (for L(s) = P^{-1} and P(s) = P) of the relationship (1.7.2). Thus the polynomial matrices [sI - A] and [sI - B] are equivalent. We now show that if the matrices [sI - A] and [sI - B] are equivalent, then the matrices A and B are similar. Assuming that the matrices [sI - A] and [sI - B] are equivalent, we have

[sI - B] = L(s) [sI - A] P(s),   (1.10.3)
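The invariance (1.10.2) can be checked numerically: two characteristic polynomials of degree n that agree at n+1 points are identical. A small sketch for a hand-picked 2x2 pair; the matrices A and P below are arbitrary illustrative choices.

```python
from fractions import Fraction

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            d = -d
        d *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return d

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def charpoly_at(M, s):
    """det(sI - M) evaluated at the point s."""
    n = len(M)
    return det([[(Fraction(s) if i == j else Fraction(0)) - M[i][j]
                 for j in range(n)] for i in range(n)])

A = [[1, 2], [3, 4]]
P = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]             # inverse of P, chosen by hand
B = matmul(matmul(Pinv, A), P)       # B = P^{-1} A P, cf. (1.10.1)

# two degree-2 polynomials that agree at 3 points are identical
print(all(charpoly_at(A, s) == charpoly_at(B, s) for s in (0, 1, 2)))   # True
```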


More information

THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION. Sunday, March 14, 2004 Time: hours No aids or calculators permitted.

THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION. Sunday, March 14, 2004 Time: hours No aids or calculators permitted. THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION Sunday, March 1, Time: 3 1 hours No aids or calculators permitted. 1. Prove that, for any complex numbers z and w, ( z + w z z + w w z +

More information

Prepared by: M. S. KumarSwamy, TGT(Maths) Page

Prepared by: M. S. KumarSwamy, TGT(Maths) Page Prepared by: M. S. KumarSwamy, TGT(Maths) Page - 50 - CHAPTER 3: MATRICES QUICK REVISION (Important Concepts & Formulae) MARKS WEIGHTAGE 03 marks Matrix A matrix is an ordered rectangular array of numbers

More information

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II)

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II) Math 59 Lecture 2 (Tue Mar 5) Gaussian elimination and LU factorization (II) 2 Gaussian elimination - LU factorization For a general n n matrix A the Gaussian elimination produces an LU factorization if

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

Matrices. Chapter Definitions and Notations

Matrices. Chapter Definitions and Notations Chapter 3 Matrices 3. Definitions and Notations Matrices are yet another mathematical object. Learning about matrices means learning what they are, how they are represented, the types of operations which

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

Section 9.2: Matrices.. a m1 a m2 a mn

Section 9.2: Matrices.. a m1 a m2 a mn Section 9.2: Matrices Definition: A matrix is a rectangular array of numbers: a 11 a 12 a 1n a 21 a 22 a 2n A =...... a m1 a m2 a mn In general, a ij denotes the (i, j) entry of A. That is, the entry in

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear

More information

Hermite normal form: Computation and applications

Hermite normal form: Computation and applications Integer Points in Polyhedra Gennady Shmonin Hermite normal form: Computation and applications February 24, 2009 1 Uniqueness of Hermite normal form In the last lecture, we showed that if B is a rational

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form:

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form: 17 4 Determinants and the Inverse of a Square Matrix In this section, we are going to use our knowledge of determinants and their properties to derive an explicit formula for the inverse of a square matrix

More information

A Review of Matrix Analysis

A Review of Matrix Analysis Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Therefore, A and B have the same characteristic polynomial and hence, the same eigenvalues.

Therefore, A and B have the same characteristic polynomial and hence, the same eigenvalues. Similar Matrices and Diagonalization Page 1 Theorem If A and B are n n matrices, which are similar, then they have the same characteristic equation and hence the same eigenvalues. Proof Let A and B be

More information

Infinite elementary divisor structure-preserving transformations for polynomial matrices

Infinite elementary divisor structure-preserving transformations for polynomial matrices Infinite elementary divisor structure-preserving transformations for polynomial matrices N P Karampetakis and S Vologiannidis Aristotle University of Thessaloniki, Department of Mathematics, Thessaloniki

More information

A connection between number theory and linear algebra

A connection between number theory and linear algebra A connection between number theory and linear algebra Mark Steinberger Contents 1. Some basics 1 2. Rational canonical form 2 3. Prime factorization in F[x] 4 4. Units and order 5 5. Finite fields 7 6.

More information

ENGI 9420 Lecture Notes 2 - Matrix Algebra Page Matrix operations can render the solution of a linear system much more efficient.

ENGI 9420 Lecture Notes 2 - Matrix Algebra Page Matrix operations can render the solution of a linear system much more efficient. ENGI 940 Lecture Notes - Matrix Algebra Page.0. Matrix Algebra A linear system of m equations in n unknowns, a x + a x + + a x b (where the a ij and i n n a x + a x + + a x b n n a x + a x + + a x b m

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Introduction to Matrices

Introduction to Matrices 214 Analysis and Design of Feedback Control Systems Introduction to Matrices Derek Rowell October 2002 Modern system dynamics is based upon a matrix representation of the dynamic equations governing the

More information

Undergraduate Mathematical Economics Lecture 1

Undergraduate Mathematical Economics Lecture 1 Undergraduate Mathematical Economics Lecture 1 Yu Ren WISE, Xiamen University September 15, 2014 Outline 1 Courses Description and Requirement 2 Course Outline ematical techniques used in economics courses

More information

Chapter 2. Matrix Arithmetic. Chapter 2

Chapter 2. Matrix Arithmetic. Chapter 2 Matrix Arithmetic Matrix Addition and Subtraction Addition and subtraction act element-wise on matrices. In order for the addition/subtraction (A B) to be possible, the two matrices A and B must have the

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

Characteristic Polynomial

Characteristic Polynomial Linear Algebra Massoud Malek Characteristic Polynomial Preleminary Results Let A = (a ij ) be an n n matrix If Au = λu, then λ and u are called the eigenvalue and eigenvector of A, respectively The eigenvalues

More information

Review Let A, B, and C be matrices of the same size, and let r and s be scalars. Then

Review Let A, B, and C be matrices of the same size, and let r and s be scalars. Then 1 Sec 21 Matrix Operations Review Let A, B, and C be matrices of the same size, and let r and s be scalars Then (i) A + B = B + A (iv) r(a + B) = ra + rb (ii) (A + B) + C = A + (B + C) (v) (r + s)a = ra

More information

(Inv) Computing Invariant Factors Math 683L (Summer 2003)

(Inv) Computing Invariant Factors Math 683L (Summer 2003) (Inv) Computing Invariant Factors Math 683L (Summer 23) We have two big results (stated in (Can2) and (Can3)) concerning the behaviour of a single linear transformation T of a vector space V In particular,

More information

Topic 1: Matrix diagonalization

Topic 1: Matrix diagonalization Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it

More information

Robust Control 2 Controllability, Observability & Transfer Functions

Robust Control 2 Controllability, Observability & Transfer Functions Robust Control 2 Controllability, Observability & Transfer Functions Harry G. Kwatny Department of Mechanical Engineering & Mechanics Drexel University /26/24 Outline Reachable Controllability Distinguishable

More information

Phys 201. Matrices and Determinants

Phys 201. Matrices and Determinants Phys 201 Matrices and Determinants 1 1.1 Matrices 1.2 Operations of matrices 1.3 Types of matrices 1.4 Properties of matrices 1.5 Determinants 1.6 Inverse of a 3 3 matrix 2 1.1 Matrices A 2 3 7 =! " 1

More information

THE CAYLEY HAMILTON AND FROBENIUS THEOREMS VIA THE LAPLACE TRANSFORM

THE CAYLEY HAMILTON AND FROBENIUS THEOREMS VIA THE LAPLACE TRANSFORM THE CAYLEY HAMILTON AND FROBENIUS THEOREMS VIA THE LAPLACE TRANSFORM WILLIAM A. ADKINS AND MARK G. DAVIDSON Abstract. The Cayley Hamilton theorem on the characteristic polynomial of a matrix A and Frobenius

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra 1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Lecture 15 Review of Matrix Theory III. Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore

Lecture 15 Review of Matrix Theory III. Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore Lecture 15 Review of Matrix Theory III Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore Matrix An m n matrix is a rectangular or square array of

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Math 240 Calculus III

Math 240 Calculus III The Calculus III Summer 2015, Session II Wednesday, July 8, 2015 Agenda 1. of the determinant 2. determinants 3. of determinants What is the determinant? Yesterday: Ax = b has a unique solution when A

More information

Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition

Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition Kyle Curlett Maribel Bueno Cachadina, Advisor March, 2012 Department of Mathematics Abstract Strong linearizations of a matrix

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors 5 Eigenvalues and Eigenvectors 5.2 THE CHARACTERISTIC EQUATION DETERMINANATS n n Let A be an matrix, let U be any echelon form obtained from A by row replacements and row interchanges (without scaling),

More information

A FIRST COURSE IN LINEAR ALGEBRA. An Open Text by Ken Kuttler. Matrix Arithmetic

A FIRST COURSE IN LINEAR ALGEBRA. An Open Text by Ken Kuttler. Matrix Arithmetic A FIRST COURSE IN LINEAR ALGEBRA An Open Text by Ken Kuttler Matrix Arithmetic Lecture Notes by Karen Seyffarth Adapted by LYRYX SERVICE COURSE SOLUTION Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)

More information

LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS

LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts

More information

MATHEMATICS. IMPORTANT FORMULAE AND CONCEPTS for. Final Revision CLASS XII CHAPTER WISE CONCEPTS, FORMULAS FOR QUICK REVISION.

MATHEMATICS. IMPORTANT FORMULAE AND CONCEPTS for. Final Revision CLASS XII CHAPTER WISE CONCEPTS, FORMULAS FOR QUICK REVISION. MATHEMATICS IMPORTANT FORMULAE AND CONCEPTS for Final Revision CLASS XII 2016 17 CHAPTER WISE CONCEPTS, FORMULAS FOR QUICK REVISION Prepared by M. S. KUMARSWAMY, TGT(MATHS) M. Sc. Gold Medallist (Elect.),

More information

LINEAR SYSTEMS AND MATRICES

LINEAR SYSTEMS AND MATRICES CHAPTER 3 LINEAR SYSTEMS AND MATRICES SECTION 3. INTRODUCTION TO LINEAR SYSTEMS This initial section takes account of the fact that some students remember only hazily the method of elimination for and

More information

3 Matrix Algebra. 3.1 Operations on matrices

3 Matrix Algebra. 3.1 Operations on matrices 3 Matrix Algebra A matrix is a rectangular array of numbers; it is of size m n if it has m rows and n columns. A 1 n matrix is a row vector; an m 1 matrix is a column vector. For example: 1 5 3 5 3 5 8

More information

LINEAR SYSTEMS, MATRICES, AND VECTORS

LINEAR SYSTEMS, MATRICES, AND VECTORS ELEMENTARY LINEAR ALGEBRA WORKBOOK CREATED BY SHANNON MARTIN MYERS LINEAR SYSTEMS, MATRICES, AND VECTORS Now that I ve been teaching Linear Algebra for a few years, I thought it would be great to integrate

More information

Math 489AB Exercises for Chapter 2 Fall Section 2.3

Math 489AB Exercises for Chapter 2 Fall Section 2.3 Math 489AB Exercises for Chapter 2 Fall 2008 Section 2.3 2.3.3. Let A M n (R). Then the eigenvalues of A are the roots of the characteristic polynomial p A (t). Since A is real, p A (t) is a polynomial

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006

MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006 MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006 Sherod Eubanks HOMEWORK 2 2.1 : 2, 5, 9, 12 2.3 : 3, 6 2.4 : 2, 4, 5, 9, 11 Section 2.1: Unitary Matrices Problem 2 If λ σ(u) and U M n is unitary, show that

More information

Section 9.2: Matrices. Definition: A matrix A consists of a rectangular array of numbers, or elements, arranged in m rows and n columns.

Section 9.2: Matrices. Definition: A matrix A consists of a rectangular array of numbers, or elements, arranged in m rows and n columns. Section 9.2: Matrices Definition: A matrix A consists of a rectangular array of numbers, or elements, arranged in m rows and n columns. That is, a 11 a 12 a 1n a 21 a 22 a 2n A =...... a m1 a m2 a mn A

More information

Graduate Mathematical Economics Lecture 1

Graduate Mathematical Economics Lecture 1 Graduate Mathematical Economics Lecture 1 Yu Ren WISE, Xiamen University September 23, 2012 Outline 1 2 Course Outline ematical techniques used in graduate level economics courses Mathematics for Economists

More information

An integer arithmetic method to compute generalized matrix inverse and solve linear equations exactly

An integer arithmetic method to compute generalized matrix inverse and solve linear equations exactly Proc. Indian Aead. Sei., Vol. 87 A (Mathematical Sciences-3), No. 9, September 1978, pp. 161-168, @ printed in India An integer arithmetic method to compute generalized matrix inverse and solve linear

More information

and let s calculate the image of some vectors under the transformation T.

and let s calculate the image of some vectors under the transformation T. Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =

More information

Determinants Chapter 3 of Lay

Determinants Chapter 3 of Lay Determinants Chapter of Lay Dr. Doreen De Leon Math 152, Fall 201 1 Introduction to Determinants Section.1 of Lay Given a square matrix A = [a ij, the determinant of A is denoted by det A or a 11 a 1j

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer D.S. Stutts November 8, 995 Introduction This primer was written to provide a brief overview of the main concepts and methods in elementary linear algebra. It was not intended to

More information

MATH 213 Linear Algebra and ODEs Spring 2015 Study Sheet for Midterm Exam. Topics

MATH 213 Linear Algebra and ODEs Spring 2015 Study Sheet for Midterm Exam. Topics MATH 213 Linear Algebra and ODEs Spring 2015 Study Sheet for Midterm Exam This study sheet will not be allowed during the test Books and notes will not be allowed during the test Calculators and cell phones

More information

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.

More information

Rings. EE 387, Notes 7, Handout #10

Rings. EE 387, Notes 7, Handout #10 Rings EE 387, Notes 7, Handout #10 Definition: A ring is a set R with binary operations, + and, that satisfy the following axioms: 1. (R, +) is a commutative group (five axioms) 2. Associative law for

More information