ON THE STABILIZATION OF LINEAR SYSTEMS


C. E. LANGENHOP

1. In a recent paper V. N. Romanenko [3] has given a necessary and sufficient condition that the system

(1.1)    dx/dt = Ax + bu,    du/dt = px + qu

be "stabilizable." Here A is an n by n matrix, x and b are n by 1 column matrices (or vectors), p is a 1 by n row matrix, and q and u are scalars. We shall assume that the elements of all of these may be complex numbers. The vector x can be interpreted physically as the output of a linear system characterized by the matrix A. The vector b corresponds to some feedback or control mechanism, with u the controlling signal and p and q adjustable parameters in the controlling circuit. Romanenko calls the system (A, b) stabilizable if for any nonempty set S of n + 1 or fewer complex numbers there exist p and q such that

(1.2)    G = ( A  b )
             ( p  q )

has S as its set of characteristic values (spectrum). In particular, then, if (A, b) is stabilizable there exist p and q such that all characteristic values of G have negative real parts, and every solution of (1.1) is such that x(t) → 0 and u(t) → 0 as t → +∞.

In his paper Romanenko claims to be generalizing a known condition, which he attributes to Yu. M. Berezanskii, namely, that if the characteristic values of A are all distinct, then (A, b) is stabilizable if and only if

(1.3)    b, Ab, ..., A^{n-1}b    are linearly independent.

Romanenko's condition appears to be considerably more complicated than (1.3), while it is our intention here to show that in fact (1.3) is necessary and sufficient for (A, b) to be stabilizable irrespective of any condition on the characteristic values of A. Actually this is a corollary to our more general Theorem 1 given below.

As pointed out to the author by Dr. J. P. LaSalle, the condition (1.3) and related ones have considerable significance in a seemingly

Presented to the Society, April 20, 1963 under the title Note on the stabilization of linear systems; received by the editors May 6, 1963.
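Condition (1.3) is precisely the statement that the n by n matrix H = (b, Ab, ..., A^{n-1}b) has rank n. A small numerical sketch of this rank test (not part of the original paper; the function name and the example pair are our own):

```python
import numpy as np

# Sketch: check condition (1.3) by forming H = (b, Ab, ..., A^{n-1} b)
# and computing its rank. The pair (A, b) below is a made-up example.
def controllability_matrix(A, b):
    """Return H = (b, Ab, ..., A^{n-1} b) for n-by-n A and n-by-1 b."""
    n = A.shape[0]
    cols = [b]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

# A companion-form A is steered from its last coordinate, so (1.3) holds
# regardless of whether the characteristic values of A are distinct.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
b = np.array([[0.0], [0.0], [1.0]])

H = controllability_matrix(A, b)
print(np.linalg.matrix_rank(H))  # prints 3
```

Since the rank equals n = 3, the columns b, Ab, A²b are linearly independent, which by Corollary 2 below is equivalent to stabilizability of (A, b) with no assumption on the eigenvalues of A.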

different aspect of control theory, namely, the controllability of linear dynamical systems [2]. Undoubtedly there is a deeper connection between stabilizability and controllability. At any rate, a certain canonical form developed in the study of the latter concept provides a simplification of our original proof and a generalization of the Berezanskii-Romanenko result.

2. Rather than (1.2) we consider matrices of the form

(2.1)    G = ( A  B )
             ( P  Q )

where A is n by n, Q is m by m, and B and P are correspondingly sized submatrices. We assume that the elements of all of these may be complex. By S_r we denote a nonempty set of r or fewer complex numbers μ_k; that is, S_r = {μ_k | k = 1, 2, ..., r}, where the μ_k need not all be distinct. The rank h of the matrix H = (B, AB, ..., A^{n-1}B) is the significant feature of our results, the first of which is

Theorem 1. The condition h ≥ r − m is necessary and sufficient that for each S_r there exist P and Q such that the spectrum of G contains S_r.

Proof of necessity. Take any set S_r such that the μ_k in S_r are distinct and different from any characteristic value of A, and let P and Q be such that the spectrum of G contains this S_r. Then there exist vectors

    ( ξ_k )
    ( η_k )    k = 1, 2, ..., r,

where ξ_k and η_k are n by 1 and m by 1 column matrices, respectively, such that

    G ( ξ_k ) = μ_k ( ξ_k )
      ( η_k )       ( η_k )

or

(2.2)    A ξ_k + B η_k = μ_k ξ_k,
         P ξ_k + Q η_k = μ_k η_k,    k = 1, 2, ..., r.

From the first of (2.2) we may write

(2.3)    ξ_k = (μ_k I − A)^{-1} B η_k

since μ_k I − A is nonsingular. Using the characteristic equation of A, one may write

(2.4)    (μ_k I − A)^{-1} = Σ_{j=1}^{n} c_j(μ_k) A^{j-1}

for suitable scalar functions c_j(·). Substituting (2.4) into (2.3), we have

(2.5)    ξ_k = Σ_{j=1}^{n} A^{j-1} B c_j(μ_k) η_k = H ζ_k,    k = 1, 2, ..., r,

where the transpose of the nm by 1 column matrix ζ_k is defined by

    ζ_k' = (c_1(μ_k) η_k', c_2(μ_k) η_k', ..., c_n(μ_k) η_k').

Now by choice of S_r the vectors

    ( ξ_k )
    ( η_k )    k = 1, 2, ..., r,

are linearly independent, and it is readily seen that the rank of the matrix (ζ_1, ζ_2, ..., ζ_r) must therefore be at least r − m. From (2.5) it is then clear that the rank of H must likewise be at least r − m.

Proof of sufficiency. The proof is accomplished by means of a similarity transformation on G to convert A and B to convenient canonical forms. We first dispose of the case B = 0, however. In this event h = 0 and the condition h ≥ r − m becomes m ≥ r. It is clear then that we may choose P arbitrarily and Q such that the spectrum of Q, and hence also that of G, contains S_r. Henceforth, then, we assume B ≠ 0.

Consider now a matrix J of the form

(2.6)    J = ( I  0 )
             ( 0  R )

where I is an n by n identity and R is a nonsingular m by m matrix. Then

(2.7)    J G J^{-1} = ( A    B R^{-1}   )
                      ( R P  R Q R^{-1} )

and it is evident that we may achieve any reordering of the columns of B with no essential change in the statement of the proposition to be proved. Thus if the columns of B are denoted by b_i, i = 1, 2, ..., m, we may assume without loss of generality that the set of h columns of H

(2.8)    b_1, Ab_1, ..., A^{h_1 - 1} b_1, ..., b_s, Ab_s, ..., A^{h_s - 1} b_s

is linearly independent. Here 1 ≤ s ≤ m, h_i ≥ 1, i = 1, 2, ..., s, and Σ_{i=1}^{s} h_i = h. We may further assume (see Chapter VII of [1]) that the sequence (2.8) is such that for i ≥ 2 the linear subspace V_i spanned by A^{j-1} b_i, j = 1, 2, ..., h_i, is invariant modulo V_1 + V_2 + ... + V_{i-1} under premultiplication by A. That is,

(2.9)    A^{h_i} b_i = Σ_{j=1}^{h_i} α_{ij} A^{j-1} b_i + Σ_{k=1}^{i-1} Σ_{j=1}^{h_k} β_{ijk} A^{j-1} b_k

for some set of scalars α_{ij}, β_{ijk}, and where the double summation in (2.9) does not appear if i = 1.

We now introduce a similarity transformation on G by means of a matrix K of the form

(2.10)    K = ( T  0 )
              ( 0  I )

where I is an m by m identity and T is a nonsingular n by n matrix. Then

(2.11)    K G K^{-1} = ( T A T^{-1}  T B )
                       ( P T^{-1}    Q   ).

The matrix T is defined by choosing certain combinations of the vectors in (2.8) as a new basis system for the space of n by 1 column matrices. Thus we introduce

(2.12)    e_{ik} = A^{h_i - k} b_i − Σ_{j=k+1}^{h_i} α_{ij} A^{j-k-1} b_i,    i = 1, 2, ..., s,  k = 1, 2, ..., h_i,

where the summation does not appear in case k = h_i. That is,

(2.13)    e_{i h_i} = b_i,    i = 1, 2, ..., s.

Note from (2.12) and (2.13) we have for k ≥ 2

(2.14)    A e_{ik} = e_{i,k-1} + α_{ik} e_{i h_i},    i = 1, 2, ..., s,

and, using (2.9) in addition, we have, since the A^{j-1} b_i's may be expressed as linear combinations of the e_{ij}'s,

(2.15)    A e_{i1} = α_{i1} e_{i h_i} + Σ_{k=1}^{i-1} Σ_{j=1}^{h_k} γ_{ijk} e_{kj},    i = 1, 2, ..., s,

for some set of scalars γ_{ijk}, and where, again, the double summation is absent in the case i = 1. From the linear independence of the set (2.8) and the remark just preceding (2.15) it is evident that the set

(2.16)    e_{11}, ..., e_{1 h_1}, e_{21}, ..., e_{2 h_2}, ..., e_{s1}, ..., e_{s h_s}

is likewise linearly independent. If h = Σ_{i=1}^{s} h_i < n we may adjoin to the set (2.16) n − h additional vectors to get a set which spans the space of n by 1 matrices. We now view the matrix A as defining the linear transformation x → Ax, and we let T be defined so that for any n by 1 matrix x the matrix Tx is the column of components of x relative to the set (2.16) (augmented if required) as basis. With respect to this basis the matrix of the linear transformation x → Ax is the matrix T A T^{-1} and, moreover, T b_i = u_{σ_i}, where σ_i = h_1 + ... + h_i and u_σ is an n by 1 column all of whose components are zero except the σth, which is 1. From (2.14) and (2.15) we may thus infer that T A T^{-1} has the following block form:

(2.17)    T A T^{-1} = ( C_1  E_{12}  ...  E_{1s}  F_1 )
                       ( 0    C_2    ...  E_{2s}  F_2 )
                       ( .    .           .       .   )
                       ( 0    0      ...  C_s     F_s )
                       ( 0    0      ...  0       L   )

where C_i has the canonical form

(2.18)    C_i = ( 0       1       0    ...  0          )
                ( 0       0       1    ...  0          )
                ( .       .                 .          )
                ( 0       0       0    ...  1          )
                ( α_{i1}  α_{i2}  ...       α_{i h_i}  ).

Moreover,

(2.19)    T B = ( v_1  0    ...  0    w_1 )
                ( 0    v_2  ...  0    w_2 )
                ( .    .         .    .   )
                ( 0    0    ...  v_s  w_s )
                ( 0    0    ...  0    0   )

where v_i is an h_i by 1 column all of whose components are zero except the last, which is 1. The submatrices w_i have m − s columns.

We now choose P and Q in convenient form. Let

    Q = diag(q_1, q_2, ..., q_m)

and let P be such that

(2.20)    P T^{-1} = ( p_1  0    ...  0    0 )
                     ( 0    p_2  ...  0    0 )
                     ( .    .         .    . )
                     ( 0    0    ...  p_s  0 )
                     ( 0    0    ...  0    M )

where p_i is a 1 by h_i row and M is m − s by n − h. The determinant of K G K^{-1} − λI is now readily evaluated by means of Laplace expansions using minors from appropriate columns. Thus, if we expand by minors from the first h_1 columns along with the (n+1)st column, we find that only one of these is nonzero, namely, that using the first h_1 rows and the (n+1)st row. The complementary minor will be a determinant array with exactly similar block structure to that of K G K^{-1} − λI. It may thus be expanded similarly by using the corresponding columns, namely, those which appear in K G K^{-1} − λI as the second h_2 columns along with the (n+2)nd. Continuing in this way we may write

(2.21)    det(K G K^{-1} − λI) = | L − λI   0       |  ∏_{i=1}^{s}  | C_i − λI   v_i     |
                                 | M        Q* − λI |               | p_i        q_i − λ |

where Q* = diag(q_{s+1}, ..., q_m) and, of course, each appearance of I denotes an identity of appropriate size. First we observe that Q* may be chosen so that the spectrum of G contains any prescribed set S_{m−s}. We next examine the determinants in the product factor in (2.21). Each of these has the same structure and may readily be evaluated by a Laplace expansion using minors from the last two rows. The result is

    | C_i − λI   v_i     |  =  (−1)^{h_i+1} [ λ^{h_i+1} − (q_i + α_{i h_i}) λ^{h_i}
    | p_i        q_i − λ |         + Σ_{j=2}^{h_i} (q_i α_{ij} − p_{ij} − α_{i,j−1}) λ^{j−1} + (q_i α_{i1} − p_{i1}) ],

where the p_{ij}, j = 1, 2, ..., h_i, are the components of the row p_i. In this form it is evident that we may determine q_i and the p_{ij}, j = 1, 2, ..., h_i, so that the coefficients of this polynomial are any we desire. Thus q_i and p_i may be determined so that this factor of det(K G K^{-1} − λI) has any given collection of h_i + 1 roots. This is true for each i, so using this and the fact mentioned earlier regarding the choice of Q*, it is clear that we may specify P and Q so that G contains in its spectrum any given nonempty set of m − s + Σ_{i=1}^{s} (h_i + 1) = h + m or fewer complex numbers. Thus if h ≥ r − m we see that for any S_r there exist P and Q such that G contains S_r in its spectrum.

Corollary 1. The condition h = n is necessary and sufficient that for each S_{n+m} there exist P and Q such that G has S_{n+m} as its spectrum.

Proof. In any case h ≤ n, so if r = n + m, then the condition h ≥ r − m is equivalent to h = n.

Corollary 2. Condition (1.3) is necessary and sufficient that (A, b) be stabilizable.

Proof. This is Corollary 1 in the case m = 1.

Remark. Even in case h < n there may exist P and Q such that all characteristic roots of G have negative real parts. It is clear from (2.21) that this is the case provided the characteristic roots of L have negative real parts, which, in turn, is related to how the linear subspace complementary to that spanned by the columns of H is associated with those characteristic roots of A which have negative real parts. In any case this question does not appear to be directly answerable merely in terms of the rank of H.

3. In this section, as an application of the results in §2, we point out the relevancy of condition (1.3) to the behavior of a more complicated system of differential equations than (1.1). Thus we consider the system

(3.1)    dx/dt = Ax + bu,    d^{r+1}u/dt^{r+1} = px + Σ_{k=0}^{r} q_k d^k u/dt^k,

where A, b, x, u, p are as before and the q_k, k = 0, 1, 2, ..., r, are scalars.

Theorem 2.
For any integer r ≥ 0, if condition (1.3) holds, then there exist p, q_k, k = 0, 1, ..., r, such that, for every solution of (3.1), x(t) → 0 and u(t) → 0 as t → +∞.
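The proof reduces (3.1) to the form (1.1) by stacking u and its first r − 1 derivatives onto x, giving the augmented matrices A* and b* of (3.4). A numerical sketch of that reduction (not part of the paper; the helper names and the example pair are our own) checks that the augmented pair inherits the independence condition:

```python
import numpy as np

# Sketch of the reduction behind Theorem 2: stack u and its first r-1
# derivatives onto x, forming augmented matrices in the roles of A and b,
# then check that b*, A*b*, ..., A*^{n+r-1} b* are linearly independent.
def augmented_pair(A, b, r):
    """Return (A*, b*) of size n + r for the system (3.1), with r >= 1."""
    n = A.shape[0]
    N = n + r
    A_star = np.zeros((N, N))
    A_star[:n, :n] = A
    A_star[:n, n:n + 1] = b            # dx/dt = A x + b u_0
    for k in range(r - 1):             # d u_k/dt = u_{k+1}
        A_star[n + k, n + k + 1] = 1.0
    b_star = np.zeros((N, 1))
    b_star[-1, 0] = 1.0                # d u_{r-1}/dt = u_r, the new control
    return A_star, b_star

def ctrb(A, b):
    """Matrix (b, Ab, ..., A^{n-1} b)."""
    cols = [b]
    for _ in range(A.shape[0] - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # example pair satisfying (1.3)
b = np.array([[0.0], [1.0]])
r = 3
A_star, b_star = augmented_pair(A, b, r)
print(np.linalg.matrix_rank(ctrb(A_star, b_star)))  # prints 5 = n + r
```

Since the rank equals n + r, Corollary 2 applied to the augmented pair yields p, q_0, ..., q_r driving x(t) and u(t) to zero, which is the content of Theorem 2.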

Proof. Introduce variables u_k by the relations u_0 = u,

(3.2)    u_k = d u_{k-1}/dt,    k = 1, 2, ..., r.

Then (3.1) may be written in the equivalent matrix form

(3.3)    d/dt (x', u_0, ..., u_{r-1})' = A* (x', u_0, ..., u_{r-1})' + b* u_r,
         d u_r/dt = (p, q_0, ..., q_{r-1}) (x', u_0, ..., u_{r-1})' + q_r u_r.

This is the form of (1.1), with (x', u_0, ..., u_{r-1})' playing the role of x there, u_r the role of u, (p, q_0, ..., q_{r-1}) the role of p, q_r the role of q, and

(3.4)    A* = ( A  b  0  ...  0 )        b* = ( 0 )
              ( 0  0  1  ...  0 )             ( . )
              ( .        .      )             ( 0 )
              ( 0  0  0  ...  1 )             ( 1 )
              ( 0  0  0  ...  0 )

playing the roles of A and b, respectively. From Corollary 2, p, q_0, ..., q_r exist such that x(t) → 0 and u_k(t) → 0, k = 0, 1, ..., r, as t → +∞ if

    b*, A* b*, A*^2 b*, ..., A*^{n+r-1} b*

are linearly independent. From the form of A* and b* as given in (3.4) it is easy to verify that this is true if condition (1.3) holds. Analogous applications of Corollary 1 may be made.

References

1. F. R. Gantmacher, The theory of matrices, Vol. 1, Chelsea, New York, 1960.
2. R. E. Kalman, Y. C. Ho and K. S. Narendra, Controllability of linear dynamical systems, Contributions to Differential Equations 1 (1963), 189-213.
3. V. N. Romanenko, On a stabilization theorem, Dopovidi Akad. Nauk Ukrain. RSR 1962, 863-867.

Southern Illinois University