SMITH MCMILLAN FORMS


Appendix B

Smith-McMillan Forms

B.1 Introduction

Smith-McMillan forms describe the underlying structure of MIMO transfer-function matrices. The key ideas are summarized below.

B.2 Polynomial Matrices

Multivariable transfer functions depend on polynomial matrices. There are a number of related terms that are used; some of these are introduced here.

Definition B.1. A matrix Π(s) = [p_ik(s)] ∈ R^{n1×n2} is a polynomial matrix if every entry p_ik(s) is a polynomial in s, for i = 1, 2, ..., n1 and k = 1, 2, ..., n2.

Definition B.2. A polynomial matrix Π(s) is said to be a unimodular matrix if its determinant is a nonzero constant (independent of s). Clearly, the inverse of a unimodular matrix is also a unimodular matrix.

Definition B.3. An elementary operation on a polynomial matrix is one of the following three operations:
(eo1) interchange of two rows or two columns;
(eo2) multiplication of one row or one column by a nonzero constant;
(eo3) addition to one row (column) of another row (column) multiplied by a polynomial.

Definition B.4. A left (right) elementary matrix is a matrix such that, when it multiplies a polynomial matrix from the left (right), it performs a row (column) elementary operation on that matrix. All elementary matrices are unimodular.

Definition B.5. Two polynomial matrices Π_1(s) and Π_2(s) are equivalent matrices if there exist sets of left and right elementary matrices, {L_1(s), L_2(s), ..., L_{k1}(s)} and {R_1(s), R_2(s), ..., R_{k2}(s)}, respectively, such that

    Π_1(s) = L_{k1}(s) ··· L_2(s) L_1(s) Π_2(s) R_1(s) R_2(s) ··· R_{k2}(s)        (B.2.1)
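To make Definitions B.2-B.4 concrete, the following minimal SymPy sketch builds a right elementary matrix of type (eo3), confirms that it is unimodular, and applies the corresponding column operation by right multiplication. The matrices U(s) and Π(s) below are arbitrary illustrations, not taken from the text.

```python
import sympy as sp

s = sp.symbols('s')

# Hypothetical right elementary matrix of type (eo3):
# it adds (s + 3)*column1 to column2 when multiplying from the right.
U = sp.Matrix([[1, s + 3],
               [0, 1]])

print(sp.simplify(U.det()))   # 1 -> a nonzero constant, so U(s) is unimodular
print(U.inv())                # the inverse is again a polynomial (unimodular) matrix

# Applying U from the right performs the column operation on any polynomial matrix:
Pi = sp.Matrix([[s,    1],
                [s**2, s + 1]])
print((Pi * U).applyfunc(sp.expand))   # column 2 becomes (s + 3)*column1 + column2
```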

Definition B.6. The rank of a polynomial matrix is its rank almost everywhere in s. This means that the rank does not depend on the particular value of the argument s, except at a finite set of values where the matrix may lose rank.

Definition B.7. Two polynomial matrices V(s) and W(s) having the same number of columns (rows) are right (left) coprime if all common right (left) factors are unimodular matrices.

Definition B.8. The degree c_k (r_k) of the k-th column [V(s)]_{•k} (row [V(s)]_{k•}) of a polynomial matrix V(s) is the degree of the highest power of s in that column (row).

Definition B.9. A polynomial matrix V(s) ∈ C^{m×m} is column proper if

    lim_{s→∞} det( V(s) diag(s^{-c_1}, s^{-c_2}, ..., s^{-c_m}) )                  (B.2.2)

has a finite, nonzero value.

Definition B.10. A polynomial matrix V(s) ∈ C^{m×m} is row proper if

    lim_{s→∞} det( diag(s^{-r_1}, s^{-r_2}, ..., s^{-r_m}) V(s) )                  (B.2.3)

has a finite, nonzero value.

B.3 Smith Form for Polynomial Matrices

Using the above notation, we can manipulate polynomial matrices in ways that mirror the manipulation of matrices of real numbers. For example, the following result describes a diagonal form for polynomial matrices.

Theorem B.1 (Smith form). Let Π(s) be an m1 × m2 polynomial matrix of rank r. Then Π(s) is equivalent either to a matrix Π_f(s) (for m1 < m2) or to a matrix Π_c(s) (for m2 < m1), with

    Π_f(s) = [ E(s)   Θ_f ];        Π_c(s) = [ E(s) ]
                                              [ Θ_c ]                              (B.3.1)

    E(s) = diag( ε_1(s), ..., ε_r(s), 0, ..., 0 )                                  (B.3.2)

where Θ_f and Θ_c are matrices with all their elements equal to zero. Furthermore, the ε_i(s) are monic polynomials for i = 1, 2, ..., r, such that ε_i(s) is a factor of ε_{i+1}(s), i.e. ε_i(s) divides ε_{i+1}(s). If m1 = m2, then Π(s) is equivalent to the square matrix E(s).

Proof (by construction).

(i) By performing row and column interchanges on Π(s), bring to position (1,1) the least-degree polynomial entry of Π(s). Say this minimum degree is ν_1.

(ii) Using elementary operation (eo3) (see Definition B.3), reduce the entry in position (2,1) to degree ν_2 < ν_1. If the entry in position (2,1) becomes zero, go to the next step; otherwise, interchange rows 1 and 2 and repeat the procedure until the entry in position (2,1) becomes zero.

(iii) Repeat step (ii) with the other entries in the first column.

(iv) Apply the same procedure to all the entries but the first one in the first row.

(v) Go back to step (ii) if the operations in step (iv) have produced nonzero entries in the first column. Notice that the degree of the entry in position (1,1) falls in each cycle, until we finally end up with a matrix that can be partitioned as

    Π(s) = [ π^{(j)}(s)   0   ···   0 ]
           [ 0                        ]
           [ :             Π_j(s)     ]
           [ 0                        ]                                            (B.3.3)

where π^{(j)}(s) is a monic polynomial.

(vi) If there is an entry of Π_j(s) of lower degree than π^{(j)}(s), add the column containing that entry to the first column and repeat steps (ii) to (v); this yields a further reduction of the degree of the entry in position (1,1). Do this until the form (B.3.3) is achieved with π^{(j)}(s) of degree less than or, at most, equal to that of every entry of Π_j(s).

(vii) Set ε_1(s) = π^{(j)}(s).

(viii) Repeat the procedure from steps (i) through (vii) on the matrix Π_j(s).

Actually, the polynomials ε_i(s) in the above result can be obtained in a direct fashion, as follows:

(i) Compute all minor determinants of Π(s).

(ii) Define χ_i(s) as the (monic) greatest common divisor (g.c.d.) of all i × i minor determinants of Π(s). Set χ_0(s) = 1.

(iii) Compute the polynomials ε_i(s) as

    ε_i(s) = χ_i(s) / χ_{i-1}(s)                                                   (B.3.4)
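The direct procedure above is easy to prototype: take monic greatest common divisors of the i × i minors and form the quotients (B.3.4). A minimal SymPy sketch follows; the helper name smith_invariants and the test matrix are illustrative choices, not taken from the text.

```python
import itertools
from functools import reduce
import sympy as sp

s = sp.symbols('s')

def smith_invariants(Pi):
    """eps_1(s), ..., eps_r(s) via (B.3.4): quotients of monic gcds of i-by-i minors."""
    r = Pi.rank()
    chi = [sp.Integer(1)]                            # chi_0(s) = 1
    for i in range(1, r + 1):
        minors = [Pi.extract(rows, cols).det()
                  for rows in itertools.combinations(range(Pi.rows), i)
                  for cols in itertools.combinations(range(Pi.cols), i)]
        g = reduce(sp.gcd, minors)
        chi.append(sp.Poly(g, s).monic().as_expr())  # take the monic g.c.d.
    return [sp.cancel(chi[i] / chi[i - 1]) for i in range(1, r + 1)]

# Illustrative polynomial matrix (hypothetical, chosen only to exercise the routine).
Pi = sp.Matrix([[s, s*(s + 1)],
                [0, s**2]])
print(smith_invariants(Pi))   # [s, s**2] -- note s divides s**2, as Theorem B.1 requires
```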

B.4 Smith-McMillan Form for Rational Matrices

A straightforward application of Theorem B.1 leads to the following result, which gives a diagonal form for a rational transfer-function matrix.

Theorem B.2 (Smith-McMillan form). Let G(s) = [G_ik(s)] be an m × m matrix transfer function, where the G_ik(s) are rational scalar transfer functions. Write

    G(s) = Π(s) / D_G(s)                                                           (B.4.1)

where Π(s) is an m × m polynomial matrix of rank r and D_G(s) is the least common multiple of the denominators of all elements G_ik(s). Then G(s) is equivalent to a matrix M(s), with

    M(s) = diag( ε_1(s)/δ_1(s), ..., ε_r(s)/δ_r(s), 0, ..., 0 )                    (B.4.2)

where, for i = 1, 2, ..., r, {ε_i(s), δ_i(s)} is a pair of monic and coprime polynomials. Furthermore, ε_i(s) is a factor of ε_{i+1}(s), and δ_{i+1}(s) is a factor of δ_i(s).

Proof. We write the transfer-function matrix as in (B.4.1). We then perform the algorithm outlined in Theorem B.1 to convert Π(s) to Smith form. Finally, canceling common factors with the denominator D_G(s) leads to the form given in (B.4.2).

We use the symbol G_SM(s) to denote M(s), the Smith-McMillan form of the transfer-function matrix G(s). We illustrate the construction of the Smith-McMillan form with a simple example.

Example B.1. Consider the following transfer-function matrix:

    G(s) = [ 4/((s+1)(s+2))      -1/(s+1)          ]
           [ 2/(s+1)             -1/(2(s+1)(s+2))  ]                               (B.4.3)

We can then express G(s) in the form (B.4.1):

    G(s) = Π(s)/D_G(s),  where

    Π(s) = [ 4        -(s+2) ]     and     D_G(s) = (s+1)(s+2)                     (B.4.4)
           [ 2(s+2)   -1/2   ]
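The factorization (B.4.1) is mechanical: D_G(s) is the least common multiple of the (monic) entry denominators, and Π(s) = D_G(s) G(s). A minimal SymPy sketch applied to the matrix of Example B.1 in (B.4.3); the determinant of Π(s) already hints at the finite zero structure worked out next.

```python
from functools import reduce
import sympy as sp

s = sp.symbols('s')

# Transfer matrix of Example B.1, equation (B.4.3).
G = sp.Matrix([[4/((s + 1)*(s + 2)),  -1/(s + 1)],
               [2/(s + 1),            -1/(2*(s + 1)*(s + 2))]])

# D_G(s): least common multiple of the (monic) entry denominators.
dens = [sp.Poly(sp.fraction(sp.cancel(g))[1], s).monic().as_expr() for g in G]
D_G = sp.factor(reduce(sp.lcm, dens))
print(D_G)                           # (s + 1)*(s + 2)

# Pi(s) = D_G(s)*G(s) is then a polynomial matrix (constant entries allowed).
Pi = (D_G * G).applyfunc(sp.expand)
print(Pi)                            # Matrix([[4, -s - 2], [2*s + 4, -1/2]])

# Its determinant already reveals the finite zeros of G(s):
print(sp.factor(Pi.det()))           # (s + 1)*(s + 3), times a constant
```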

The polynomial matrix Π(s) can be reduced to the Smith form defined in Theorem B.1. To do that, we first compute its greatest common divisors:

    χ_0 = 1                                                                        (B.4.5)

    χ_1 = gcd{ 4; -(s+2); 2(s+2); -1/2 } = 1                                       (B.4.6)

    χ_2 = gcd{ 2s^2 + 8s + 6 } = s^2 + 4s + 3 = (s+1)(s+3)                         (B.4.7)

This leads to

    ε_1 = χ_1/χ_0 = 1;        ε_2 = χ_2/χ_1 = (s+1)(s+3)                           (B.4.8)

From here, the Smith-McMillan form can be computed to yield

    G_SM(s) = [ 1/((s+1)(s+2))     0           ]
              [ 0                  (s+3)/(s+2) ]                                   (B.4.9)

B.5 Poles and Zeros

The Smith-McMillan form can be utilized to give an unequivocal definition of poles and zeros in the multivariable case. In particular, we have:

Definition B.11. Consider a transfer-function matrix G(s).

(i) p_z(s) and p_p(s) are said to be the zero polynomial and the pole polynomial of G(s), respectively, where

    p_z(s) = ε_1(s) ε_2(s) ··· ε_r(s);        p_p(s) = δ_1(s) δ_2(s) ··· δ_r(s)    (B.5.1)

and where ε_1(s), ε_2(s), ..., ε_r(s) and δ_1(s), δ_2(s), ..., δ_r(s) are the polynomials in the Smith-McMillan form G_SM(s) of G(s). Note that p_z(s) and p_p(s) are monic polynomials.

(ii) The zeros of the matrix G(s) are defined to be the roots of p_z(s), and the poles of G(s) are defined to be the roots of p_p(s).

(iii) The McMillan degree of G(s) is defined as the degree of p_p(s).

In the case of square plants (the same number of inputs as outputs), it follows that det[G(s)] is a simple function of p_z(s) and p_p(s). Specifically, we have

96 Smith McMillan Forms Appendix B det[g(s)] = K p z (s) p p (s) (B.5.2) Note, however, that p z (s) and p p (s) are not necessarily coprime. Hence, the scalar rational function det[g(s)] is not sufficient to determine all zeros and poles of G(s). However, the relative degree of det[g(s)] is equal to the difference between the number of poles and the number of zeros of the MIMO transfer-function matrix. B.6 Matrix Fraction Descriptions (MFD) A model structure that is related to the Smith McMillan form is that of a matrix fraction description (MFD). There are two types, namely a right matrix fraction description (RMFD) and a left matrix fraction description (LMFD). We recall that a matrix G(s) and its Smith-McMillan form G SM (s) are equivalent matrices. Thus, there exist two unimodular matrices, L(s) and R(s), such that G SM (s) = L(s)G(s)R(s) (B.6.) This implies that if G(s) is an m m proper transfer-function matrix, then there exist a m m matrix L(s) and an m m matrix R(s), such as G(s) = L(s)G SM (s) R(s) (B.6.2) where L(s) and R(s) are, for example, given by L(s) = [L(s)] ; R(s) = [R(s)] (B.6.3) We next define the following two matrices: N(s) = diag(ɛ (s),..., ɛ r (s),,..., ) D(s) = diag(δ (s),..., δ r (s),,..., ) (B.6.4) (B.6.5) where N(s) and D(s) are m m matrices. Hence, G SM (s) can be written as G SM (s) = N(s)[D(s)] (B.6.6) Combining (B.6.2) and (B.6.6), we can write

    G(s) = L̄(s) N(s) [D(s)]^{-1} R̄(s)
         = [ L̄(s) N(s) ] [ [R̄(s)]^{-1} D(s) ]^{-1}
         = G_N(s) [G_D(s)]^{-1}                                                    (B.6.7)

where

    G_N(s) = L̄(s) N(s);        G_D(s) = [R̄(s)]^{-1} D(s)                           (B.6.8)

Equations (B.6.7) and (B.6.8) define what is known as a right matrix fraction description (RMFD). It can be shown that G_D(s) is always column-equivalent to a column-proper matrix P(s) (see Definition B.9). This implies that the degree of the pole polynomial p_p(s) is equal to the sum of the degrees of the columns of P(s).

We also observe that the RMFD is not unique, because, for any nonsingular m × m matrix Ω(s), we can write G(s) as

    G(s) = [ G_N(s) Ω(s) ] [ G_D(s) Ω(s) ]^{-1}                                    (B.6.9)

where Ω(s) is said to be a right common factor. When the only right common factors of G_N(s) and G_D(s) are unimodular matrices, then, from Definition B.7, we have that G_N(s) and G_D(s) are right coprime. In this case, we say that the RMFD (G_N(s), G_D(s)) is irreducible.

It is easy to see that, when an RMFD is irreducible, then s = z is a zero of G(s) if and only if G_N(s) loses rank at s = z, and s = p is a pole of G(s) if and only if G_D(s) is singular at s = p. This means that the pole polynomial of G(s) is p_p(s) = det(G_D(s)).

Remark B.1. A left matrix fraction description (LMFD) can be built similarly, with a different grouping of the matrices in (B.6.7). Since the diagonal matrices N(s) and D(s) commute, we have

    G(s) = L̄(s) [D(s)]^{-1} N(s) R̄(s)
         = [ D(s) [L̄(s)]^{-1} ]^{-1} [ N(s) R̄(s) ]
         = [Ḡ_D(s)]^{-1} Ḡ_N(s)                                                    (B.6.10)

where

    Ḡ_N(s) = N(s) R̄(s);        Ḡ_D(s) = D(s) [L̄(s)]^{-1}                           (B.6.11)
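For Example B.1, the matrices in (B.6.4)-(B.6.5) can be read off from the Smith-McMillan form (B.4.9): N(s) = diag(1, s+3) and D(s) = diag((s+1)(s+2), s+2). The pair (N(s), D(s)) is itself an RMFD of G_SM(s) and, since L(s) and R(s) are unimodular, it exhibits the same zeros and poles as G(s); by Definition B.11, p_z(s) = s + 3 and p_p(s) = (s+1)(s+2)^2, so the McMillan degree of this example is 3. A minimal SymPy sketch checking (B.6.6) and the zero/pole tests above:

```python
import sympy as sp

s = sp.symbols('s')

# Read off from the Smith-McMillan form (B.4.9) of Example B.1.
eps   = [sp.Integer(1),    s + 3]
delta = [(s + 1)*(s + 2),  s + 2]

N = sp.diag(*eps)                     # N(s) as in (B.6.4)
D = sp.diag(*delta)                   # D(s) as in (B.6.5)
G_SM = sp.diag(*[sp.cancel(e/d) for e, d in zip(eps, delta)])

# (B.6.6): G_SM(s) = N(s) [D(s)]^{-1}
print((N*D.inv() - G_SM).applyfunc(sp.simplify))   # zero matrix

# Pole polynomial from the denominator matrix: p_p(s) = det D(s).
print(sp.factor(D.det()))             # (s + 1)*(s + 2)**2

# Zero test: the numerator matrix loses rank exactly at the zeros (here s = -3).
for z in (-3, -1):
    print(z, N.subs(s, z).rank())     # rank 1 at s = -3, full rank 2 at s = -1

# Pole test: the denominator matrix is singular exactly at the poles.
for p in (-1, -2, 0):
    print(p, D.subs(s, p).det())      # 0 at s = -1 and s = -2, nonzero at s = 0
```

The same tests apply verbatim to any irreducible RMFD (G_N(s), G_D(s)) of G(s), since such a pair differs from (N(s), D(s)) only by unimodular factors.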

The left and right matrix fraction descriptions above were derived starting from the Smith-McMillan form; hence, the factors are polynomial matrices. However, it is immediate to see that they provide a more general description. In particular, G_N(s), G_D(s), Ḡ_N(s) and Ḡ_D(s) are, in general, matrices with rational entries. One possible way to obtain this type of representation is to divide the two polynomial matrices forming the original MFD by the same (stable) polynomial.

An example summarizing the above concepts is considered next.

Example B.2. Consider a 2 × 2 MIMO system having a given rational transfer-function matrix G(s).

B.2.1 Find the Smith-McMillan form by performing elementary row and column operations.

B.2.2 Find the poles and zeros.

B.2.3 Build an RMFD for the model.

Solution

B.2.1 We first compute the Smith-McMillan form of G(s) by performing elementary row and column operations. Referring to equation (B.6.1), this yields G_SM(s) = L(s) G(s) R(s), together with the corresponding transformation matrices L(s) and R(s).

B.2.2 We see that the observable and controllable part of the system has zero and pole polynomials given by

    p_z(s) = s^2 + 3s + 18;        p_p(s) = (s+1)^2 (s+2)^2                        (B.6.15)

which, in turn, implies that there are two transmission zeros, located at -1.5 ± j3.97, and four poles, located at -1, -1, -2 and -2.

B.2.3 We can now build an RMFD by using (B.6.2). We first form

    L̄(s) = [L(s)]^{-1};        R̄(s) = [R(s)]^{-1}                                  (B.6.16)

Then, following (B.6.4) and (B.6.5), we take

    N(s) = diag( 1, s^2 + 3s + 18 );        D(s) = diag( δ_1(s), δ_2(s) )          (B.6.17)

where δ_1(s) δ_2(s) = p_p(s). The RMFD is then obtained from (B.6.8), using (B.6.16) and (B.6.17), leading to

    G_N(s) = L̄(s) N(s);        G_D(s) = [R̄(s)]^{-1} D(s) = R(s) D(s)               (B.6.18)

These can then be turned into proper transfer-function matrices by introducing common stable denominators.
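As a numerical check of the pole and zero locations stated in part B.2.2, the roots of the polynomials in (B.6.15) can be computed directly; a minimal SymPy sketch:

```python
import sympy as sp

s = sp.symbols('s')

p_z = s**2 + 3*s + 18             # zero polynomial from (B.6.15)
p_p = (s + 1)**2 * (s + 2)**2     # pole polynomial from (B.6.15)

print(sp.Poly(p_z, s).nroots())   # roots approximately -1.5 +/- 3.97j (the transmission zeros)
print(sp.roots(p_p, s))           # poles at -1 (double) and -2 (double)
```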