
PLACING ZEROES and the KRONECKER CANONICAL FORM

D. L. Boley, Computer Science Dept., University of Minnesota, Minneapolis, MN, USA (boley@mail.cs.umn.edu)
P. Van Dooren, Electrical Eng. Dept., University of Illinois, Urbana, IL, USA (vdooren@uicsl.csl.uiuc.edu)

Keywords: transmission zeroes, zero assignment, control, Kronecker form, matrix pencils

Abstract

Given a linear time-invariant control system, it is well known that the transmission zeroes are the generalized eigenvalues of a matrix pencil. Adding outputs to place additional zeroes is equivalent to appending rows to this pencil to place new generalized eigenvalues; adding inputs is likewise equivalent to appending columns. Since both problems are dual to each other, we only show in this paper how to choose the new rows to place the new zeroes in any desired locations. The process involves the extraction of the individual right Kronecker blocks of the pencil, accomplished entirely with unitary transformations. In particular, when adding one new output, i.e. appending a single row, the maximum number of new zeroes that can be placed is exactly the largest right Kronecker index.

Introduction

The placement of transmission zeroes via the synthesis of new outputs and/or inputs has been studied from the point of view of system theory, and certain algorithms have been developed [1, 8, 9, 11]. As for the assignment of zeroes via feedback design, the assignment of zeroes via output synthesis can be analyzed in terms of the theory of matrix pencils, so that a complete characterization of the number of zeroes that can be placed in any given case can be obtained. In [8, 9] an algorithm to synthesize outputs to assign the zeroes was proposed based on the desired form of the transfer function. In [1], a method was proposed to assign zeroes for a SISO system as well as to assign zeroes to the input/output maps from individual inputs to individual outputs. In this paper, we study this problem using the Kronecker theory of pencils [5]. Specifically, we study the problem of zero placement for an arbitrary matrix pencil by the addition of new rows or columns in terms of the structure of the Kronecker Canonical Form (KCF). We show how additional rows or columns can be appended to a pencil to place as many zeroes as possible, and discuss the limits on this placement. Like [1, 11], our approach ends up reducing the problem to a problem of pole placement, for which several algorithms are available (see e.g. [7, 10]).

However, we are able to synthesize outputs to place the zeroes for an entire MIMO system, as well as determining the limits on the number of new zeroes that can be placed. We treat this problem by studying the problem of assigning the generalized eigenvalues of a general matrix pencil; the zero placement problem will then be a special case. The methods in this paper are all based on the transformation of the original pencil to one which can be partitioned into the various components of the Kronecker canonical form. The transformations are carried out entirely using unitary transformations, and hence enjoy some good numerical stability properties. The computations are based on the so-called staircase algorithm of [12], which separates the left and right Kronecker parts and computes the values of the individual Kronecker indices. We propose a new extension to this algorithm, still based on unitary transformations, which can actually extract the individual Kronecker blocks. Once the individual Kronecker blocks have been extracted, the zeroes may be placed within each Kronecker block in a manner very similar in spirit to that of [7]. This paper is organized as follows. First we describe the basic theory that relates the Kronecker theory of matrix pencils to the problem of placing zeroes, or more generally placing the generalized eigenvalues of a pencil. Next we describe our computational procedure for extracting the Kronecker blocks and placing the zeroes. We leave to an Appendix a step-by-step description of the new process used to extract the individual Kronecker blocks.

Basic Theory

Consider a linear time-invariant generalized state-space system of dimension n:

$$E\dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t), \qquad (1)$$

which is irreducible, i.e. where the pencils (A − λE, B) and (A − λE; C) (the latter stacked) both have full rank n for all finite λ (this also means reachable and observable at finite points), and where (E, B) and (E; C) both have full rank n (this also means reachable and observable at infinity). It is well known that its transmission zeroes are also the zeroes of the matrix pencil [4, 6, 13]:

$$G - \lambda F = \begin{pmatrix} A - \lambda E & B \\ C & D \end{pmatrix}. \qquad (2)$$

We would like to add outputs or inputs to (1) to place new zeroes in desired locations in the complex plane. This corresponds to appending rows or columns, respectively, to (2) to place the zeroes of the embedded system. We discuss the general question of how many zeroes can be placed by appending rows and outline a procedure to compute the rows to append to place the zeroes at given locations.
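When the pencil (2) is square, the transmission zeroes are simply the finite generalized eigenvalues of the pair (G, F). A minimal numerical sketch of this (the state-space data below is an illustrative example, not taken from the paper):

```python
import numpy as np
from scipy.linalg import eig, block_diag

# Illustrative SISO system with E = I:  x' = Ax + Bu,  y = Cx + Du.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
E = np.eye(2)

# System pencil G - lambda*F of equation (2).
G = np.block([[A, B], [C, D]])
F = block_diag(E, np.zeros((1, 1)))

# Transmission zeroes = finite generalized eigenvalues of (G, F).
lam = eig(G, F, right=False)
print(lam[np.isfinite(lam)])    # expect a single zero at -1 for this example
```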

To fix ideas, we analyze how many zeroes we can place by a judicious choice of additional rows appended to (2). We write the generalized Schur form for (2) [12]:

$$P(G - \lambda F)Q = \begin{pmatrix} G_r - \lambda F_r & * & * \\ 0 & G_{reg} - \lambda F_{reg} & * \\ 0 & 0 & G_l - \lambda F_l \end{pmatrix}, \qquad (3)$$

where P and Q are unitary (orthogonal in the real case), G_r − λF_r contains the right (short fat) Kronecker blocks, G_l − λF_l contains the left (tall thin) Kronecker blocks, and G_reg − λF_reg is the regular part. The blocks are characterized by the properties that G_r − λF_r has full row rank and G_l − λF_l has full column rank for all values of λ in the complex plane (including infinity), and G_reg − λF_reg is square and nonsingular except at a finite number of isolated values of λ, which are called the eigenvalues of the pencil. The finite eigenvalues are the finite zeroes of the pencil. The infinite eigenvalues correspond to the infinite zeroes of the pencil, except that each k × k Jordan block I_k − λJ_k at infinity has only k − 1 zeroes at infinity (but k infinite eigenvalues) [13]. Notice that this definition also implies that the total number of zeroes equals rank(F_reg) [13]. In the Kronecker Canonical Form, P, Q are nonsingular matrices, the * entries are zero, and G_r − λF_r has the block "diagonal" form

$$G_r - \lambda F_r = \begin{pmatrix} R_1(\lambda) & & \\ & \ddots & \\ & & R_k(\lambda) \end{pmatrix}, \qquad (4)$$

where each R_i(λ) is s_i × (s_i + 1), has full row rank for all λ, and represents a single Kronecker block. The {s_i} are the right Kronecker indices of the pencil, and we assume without loss of generality that these indices are in nondecreasing order. In the sequel, we will show how to obtain an upper triangular version of the overall form (4), but with nonzero entries above the diagonal blocks, using only unitary transformations. In any case, G_l − λF_l will have a similar upper triangular form, but with rectangular diagonal blocks with one more row than column and full column rank. We append some number p of new rows Z to (2) to obtain

$$\begin{pmatrix} P & 0 \\ 0 & I \end{pmatrix}\begin{pmatrix} G - \lambda F \\ Z \end{pmatrix} Q = \begin{pmatrix} G_r - \lambda F_r & * & * \\ 0 & G_{reg} - \lambda F_{reg} & * \\ 0 & 0 & G_l - \lambda F_l \\ Z_r & Z_{reg} & Z_l \end{pmatrix}. \qquad (5)$$

The object is to choose these p new rows so as to place as many zeroes as possible. The rightmost block column of (5) has full column rank regardless of the choice of Z_l. The middle block column can lose rank only at values of λ where G_reg − λF_reg already loses rank (i.e. only at existing generalized eigenvalues), and only for certain choices of Z_reg. The entry Z_reg can sometimes be chosen so that the middle block column does not lose rank (or loses less rank than does G_reg − λF_reg alone) at any particular existing eigenvalue. In the presence of a G_l − λF_l block, the result may be that the existing eigenvalue disappears from the augmented pencil (5) (or the eigenvalue remains with a smaller multiplicity). But in any case, neither Z_reg nor Z_l can be used to place any new zeroes or to increase the multiplicity of any existing zeroes. Some of these effects are explained in more detail in the next subsections. Hence, only Z_r can be used to place new zeroes. The choice of Z_r is independent of Z_reg, Z_l, so we may set the latter to zero. Actually, if we don't set those to zero, there will be coupling between the parts. The effect of this coupling is discussed in the next subsections, but in general it will not affect newly placed zeroes, unless they happen to coincide with zeroes already present in G_reg − λF_reg.
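The right singular blocks appearing in (4) (and again in (6) below) are easy to write down explicitly. The sketch below uses one common convention for an index-s block, G = (0 | I_s), F = (I_s | 0); this layout is an assumption for illustration, not the paper's notation:

```python
import numpy as np
from scipy.linalg import block_diag

def right_kronecker_block(s):
    """Return (G, F) with G - lambda*F an s x (s+1) right singular block
    of index s: full row rank for every lambda (including infinity)."""
    G = np.hstack([np.zeros((s, 1)), np.eye(s)])   # G = [0 | I_s]
    F = np.hstack([np.eye(s), np.zeros((s, 1))])   # F = [I_s | 0]
    return G, F

# Assemble the block-diagonal right part of (4) for indices s = (1, 2):
blocks = [right_kronecker_block(s) for s in (1, 2)]
Gr = block_diag(*(G for G, _ in blocks))
Fr = block_diag(*(F for _, F in blocks))
print(Gr.shape)   # (3, 5): o_r x (o_r + n_r) with o_r = 3, n_r = 2
```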

We need only consider the sub-pencil of (5)

$$\begin{pmatrix} G_r - \lambda F_r \\ Z_r \end{pmatrix} = \begin{pmatrix} R_1(\lambda) & & \\ & \ddots & \\ & & R_k(\lambda) \\ Z_1 & \cdots & Z_k \end{pmatrix}, \qquad (6)$$

where each R_i is s_i × (s_i + 1) and represents a single Kronecker block. It will be seen that the zeroes can be placed by choosing the p rows to be appended, Z_r = (Z_1, ..., Z_k), to have the form

$$Z_r = \begin{pmatrix} 0 & \cdots & 0 & z^T_{k-p+1} & & \\ \vdots & & \vdots & & \ddots & \\ 0 & \cdots & 0 & & & z^T_k \end{pmatrix}, \qquad (7)$$

where each row vector z_i^T is computed so that the individual (s_i + 1) × (s_i + 1) pencil obtained by appending z_i^T to R_i(λ) has a subset of the desired zeroes, for i = k − p + 1, ..., k. The rest of this section and the next section are devoted to filling in many of the theoretical and computational details behind the zero-placement algorithm.

A. How Many Zeroes Can Be Placed?

We discuss the specific question: how many zeroes can be placed by appending one row or multiple rows. For this we first need to recall some basic results on pencils and their polynomial null spaces. Let r be the normal rank of the m × n pencil G − λF; then it has n_r := n − r right null vectors v_i(λ) and m_r := m − r left null vectors u_j(λ), which can be chosen to be polynomial. Collecting these vectors in an n × n_r polynomial matrix V(λ) and an m × m_r polynomial matrix U(λ), we thus have:

$$(G - \lambda F)V(\lambda) = 0, \qquad U^T(\lambda)(G - \lambda F) = 0. \qquad (8)$$

Now the columns of V(λ) and U(λ) are said to be a minimal basis for the respective null spaces if their column degrees are minimal. This is the case if and only if [6]:

- V(λ_0), respectively U(λ_0), has full column rank for all finite λ_0;
- the highest column degree coefficient matrix of V(λ), respectively U(λ), has full column rank.

One proves [6] that if V(λ), respectively U(λ), is minimal, then its column degrees are (up to a permutation) equal to the right Kronecker indices {s_i}, respectively left Kronecker indices {t_i}, of G − λF. Moreover, the minimality of the bases refers to the fact that any other polynomial basis for these null spaces must have higher column degrees. A consequence of all this is also that the number k of right Kronecker indices is equal to n_r, and the number of left Kronecker indices is equal to m_r. One then defines the orders o_r and o_l of the right and left null spaces to be the sums of the column degrees of their minimal bases, i.e. o_r = Σ_{i=1}^{n_r} s_i and o_l = Σ_{j=1}^{m_r} t_j. A simple consequence of this is (see [12]):

- G_r − λF_r has dimension o_r × (o_r + n_r);
- G_l − λF_l has dimension (o_l + m_r) × o_l.

In order to use this for bounding the number of assignable zeros when appending rows or columns, we need the following result, proved in [13]:

Lemma 1. Let G − λF be a pencil with null space orders o_l and o_r and number of finite and infinite zeros o_f and o_∞ (multiplicities counted). Then rank(F) = o_l + o_r + o_f + o_∞.

Notice that in this result we count zeroes at infinity, not eigenvalues, to be compatible with their system theoretic interpretation (see the text preceding (4), or [13]).
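A quick numerical check of Lemma 1 on a hand-built pencil (the blocks below are illustrative choices, not from the paper): one right block, one left block, one finite zero and one 2 × 2 Jordan block at infinity, so that o_r = o_l = o_f = o_∞ = 1 and rank(F) should be 4:

```python
import numpy as np
from scipy.linalg import block_diag

# Direct sum of: a right block L_1, a left block L_1^T, a 1x1 regular block
# with a finite eigenvalue, and a 2x2 Jordan block at infinity (I - lambda*N).
G = block_diag(np.array([[0.0, 1.0]]),        # right block:   o_r  = 1
               np.array([[0.0], [1.0]]),      # left block:    o_l  = 1
               np.array([[2.0]]),             # finite zero:   o_f  = 1
               np.eye(2))                     # I - lambda*N:  o_inf = 1
F = block_diag(np.array([[1.0, 0.0]]),
               np.array([[1.0], [0.0]]),
               np.array([[1.0]]),
               np.array([[0.0, 1.0], [0.0, 0.0]]))

# Lemma 1: rank(F) = o_l + o_r + o_f + o_inf = 4.
print(np.linalg.matrix_rank(F))   # 4
```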

Since appending constant rows or columns does not change rank(F), we can only increase the number of zeroes by minimizing the null space orders. We now give certain inequalities which will lead to the main result.

Theorem 1. Let G − λF be an m × n pencil with normal rank r and with Kronecker indices and null space orders o_r = Σ_{i=1}^{n_r} s_i and o_l = Σ_{j=1}^{m_r} t_j. Then appending p constant rows and denoting the resulting pencil by G̃ − λF̃ yields a new normal rank and null space orders r̃, õ_r and õ_l satisfying:

$$r \le \tilde r \le r + p; \qquad \tilde o_l \ge o_l, \ \text{with equality only if } \tilde r = r + p; \qquad \sum_{j=1}^{i} \tilde s_j \ge \sum_{j=1}^{i} s_j \ \text{for } 1 \le i \le n - \tilde r.$$

Proof: The first result is trivial, since the normal rank of a pencil is its rank for almost any value of λ, and appending p rows in a constant matrix then immediately gives the bounds r ≤ r̃ ≤ r + p. For the second bound we start from (5) and perform a generalized Schur decomposition on the subpencil consisting of its upper left part, to get:

$$\begin{pmatrix} \hat P & 0 \\ 0 & I \end{pmatrix}\begin{pmatrix} G_r - \lambda F_r & * & * \\ 0 & G_{reg} - \lambda F_{reg} & * \\ Z_r & Z_{reg} & Z_l \\ 0 & 0 & G_l - \lambda F_l \end{pmatrix}\begin{pmatrix} \hat Q & 0 \\ 0 & I \end{pmatrix} = \begin{pmatrix} \hat G_r - \lambda \hat F_r & * & * & * \\ 0 & \hat G_{reg} - \lambda \hat F_{reg} & * & * \\ 0 & 0 & \hat G_l - \lambda \hat F_l & * \\ 0 & 0 & 0 & G_l - \lambda F_l \end{pmatrix}.$$

Since the subpencil

$$\begin{pmatrix} \hat G_l - \lambda \hat F_l & * \\ 0 & G_l - \lambda F_l \end{pmatrix}$$

has full column rank for all values of λ (including infinity), its number of columns equals the new left null space order õ_l, and hence õ_l ≥ o_l. Moreover, equality is only met when Ĝ_l − λF̂_l is void; but then we also have that the new dimension of the left null space equals the old one, i.e. m + p − r̃ = m − r, which yields the required result. For the last inequality, let Ṽ(λ) be an n × (n − r̃) minimal basis for the right null space of G̃ − λF̃. Then obviously we also have (G − λF)Ṽ(λ) = 0, which implies that the right null space of G̃ − λF̃ is a subspace of the right null space of G − λF. As a subspace, it then follows from the theory of minimal bases [6] that there exists a polynomial matrix M(λ) such that

$$\tilde V(\lambda) = V(\lambda)\, M(\lambda).$$

Let Ṽ_h and V_h be the coefficient matrices of the highest column degrees in Ṽ(λ) and V(λ) (these are the column coefficients of λ^{s̃_i} and λ^{s_j}, respectively). Since Ṽ(λ) and V(λ) are minimal bases, we know that Ṽ_h and V_h both have full column rank. From this it follows that element m_{j,i}(λ) of the matrix M(λ) cannot have degree larger than d_{j,i} := s̃_i − s_j. Moreover, the coefficient matrix M_h, with the coefficient of λ^{s̃_i − s_j} as (j,i)-th entry, also has full column rank, since [6]:

$$\tilde V_h = V_h\, M_h.$$

For every nonzero element m^h_{j,i} in M_h we know that s̃_i = s_j + d_{j,i} ≥ s_j, since d_{j,i} must be non-negative and no cancellation can occur between columns in this matrix product, due to the linear independence of the columns in Ṽ_h, V_h and M_h. Since M_h has full column rank, there must exist distinct indices j_1, ..., j_{n−r̃} such that m^h_{j_i,i} ≠ 0, and hence

$$\sum_{i=1}^{n-\tilde r} \tilde s_i \ge \sum_{i=1}^{n-\tilde r} s_{j_i}.$$

Since the sequence {s_j} is increasing, this implies that

$$\sum_{i=1}^{n-\tilde r} \tilde s_i \ge \sum_{i=1}^{n-\tilde r} s_i.$$

Finally, the same reasoning can be applied to the first i columns of the matrices Ṽ_h and M_h, yielding the desired third bound.

This then automatically leads to the following main result.

Theorem 2. Let G − λF be a pencil with right Kronecker indices s_1 ≤ ... ≤ s_{n_r}. Suppose we append p rows Z to obtain the pencil (G; Z) − λ(F; 0). The maximum number of new zeroes that can be placed is

$$s_{n_r-p+1} + \cdots + s_{n_r},$$

and a matrix Z can be found to place that many zeroes at any previously chosen locations in the complex plane. This can be achieved by embedding the p largest Kronecker blocks only. The other right Kronecker indices s_1, ..., s_{n_r−p} of the augmented pencil will then be unchanged.

Proof: From the inequalities in the previous theorem it is clear that

$$\tilde o_l + \tilde o_r \ \ge\ o_l + \sum_{j=1}^{n-\tilde r} s_j \ =\ o_l + o_r - \sum_{j=n-\tilde r+1}^{n-r} s_j.$$

Since rank(F) is not affected by the embedding, it follows from Lemma 1 that the maximum increase in the number of zeroes satisfies

$$(\tilde o_f + \tilde o_\infty) - (o_f + o_\infty) \ \le\ \sum_{j=n-\tilde r+1}^{n-r} s_j.$$

The right hand side of this inequality is maximized by taking as many terms as possible, i.e. by taking r̃ = r + p, and hence:

$$(\tilde o_f + \tilde o_\infty) - (o_f + o_\infty) \ \le\ \sum_{j=n-r-p+1}^{n-r} s_j.$$

Moreover, by using the embedding suggested in (5)-(7) with Z_l = 0 = Z_reg, this upper bound is actually met. Indeed, each block P_i(λ), obtained by appending the row z_i^T to R_i(λ), with z_i^T ≠ 0 is regular and has s_i zeroes. After a permutation of rows in (5) we have that each P_i(λ) appears on the diagonal and becomes part of the new regular part G̃_reg − λF̃_reg of G̃ − λF̃. The blocks for which z_i^T = 0 decouple, and the corresponding right Kronecker blocks remain intact in the augmented pencil.

This can be seen by noting that the right annihilating vectors corresponding to these blocks in the original pencil (6) (or its upper triangular equivalent) remain so for the augmented pencil (5), with the same degrees in λ. Similarly, the left Kronecker blocks remain unaffected, for the same reason.

To place the zeroes for an individual Kronecker block, suppose that R(λ) = (b, A) − λ(0, U) (with A, U square) is a single right Kronecker block and hence has full row rank for all λ. Suppose we append to R(λ) the single row z^T = (β, y^T), obtaining the square pencil P(λ). Then observe that the finite zeroes of P(λ) are exactly the eigenvalues of the pencil A + bβ^{-1}y^T − λU. We can choose β = 1 and choose y^T by standard pole placement techniques [7, 10]. The vector y^T always exists and is generally unique. When we choose β = 0, P(λ) has at least one infinite zero. In fact, as long as z^T ≠ 0, the number of trailing zeros in that row indicates the number of infinite zeroes in P(λ).
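The per-block placement step just described reduces to an ordinary state-feedback pole placement. A minimal sketch, assuming a block already available in the (b, A) − λ(0, U) form above (the data is illustrative, and the sign convention for y^T is fixed here by direct verification on the augmented pencil rather than taken from the text):

```python
import numpy as np
from scipy.linalg import eig, solve
from scipy.signal import place_poles

# A single right Kronecker block R(lambda) = (b, A) - lambda*(0, U), here with s = 2.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
U = np.array([[1.0, 0.5], [0.0, 2.0]])       # square, upper triangular, nonsingular

# Desired zeroes for the augmented (s+1) x (s+1) pencil [R(lambda); z^T].
target = np.array([-1.0, -2.0])

# With z^T = (1, y^T), placing the finite zeroes is pole placement on (U^{-1}A, U^{-1}b).
Ahat, bhat = solve(U, A), solve(U, b)
y = place_poles(Ahat, bhat, target).gain_matrix       # 1 x s row vector

# Assemble the augmented square pencil and check its generalized eigenvalues.
Gaug = np.block([[b, A], [np.ones((1, 1)), y]])
Faug = np.block([[np.zeros((2, 1)), U], [np.zeros((1, 3))]])
lam = eig(Gaug, Faug, right=False)
print(np.sort_complex(lam[np.isfinite(lam)]))         # approx [-2., -1.]
```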

In order to compute the proper rows Z_r, it is necessary to extract the individual right Kronecker blocks. The procedure to do this is described in detail in the next section, but a brief outline is as follows. We first apply the staircase algorithm [2, 12] to extract the right Kronecker part and compute the corresponding indices. We then permute the rows and columns to extract the smallest right Kronecker block into the upper left corner and decouple this block from the rest of the pencil. On the remaining collection of right Kronecker blocks we repeat this step to extract the next smallest right Kronecker block, until all the right Kronecker blocks have been extracted. At each step, to decouple the upper left from the lower right, we annihilate the entries in the lower left block, in a very particular order which ends up completely filling in the upper right block. All the transformations applied are unitary transformations, and the result will be an upper triangular version of the pencil (6), where s_1 ≤ s_2 ≤ ... ≤ s_k.

B. The Effect of Coupling

In this section, we illustrate some of the variations in the Kronecker structure that can occur when a row is appended. The scheme suggested above implies that the matrix Z_r has a decoupled form as in (7), which is not strictly required. Since the method proposed below adds one block P_i(λ) at a time to the regular part, let us analyze what happens when we append a single row z^T = (z_1^T, z_2^T) to a right Kronecker block R(λ) and a regular block G_reg − λF_reg, as in:

$$P(\lambda) = \begin{pmatrix} R(\lambda) & 0 \\ 0 & G_{reg} - \lambda F_{reg} \\ z_1^T & z_2^T \end{pmatrix}. \qquad (9)$$

If one of z_1^T or z_2^T is zero, then the two parts decouple, so let us assume that z_1^T ≠ 0 and z_2^T ≠ 0. Let λ_1 be a zero of P(λ) with corresponding left eigenvector u^T = (u_1^T, u_2^T, μ). Then (u_1^T, μ) must be a left eigenvector of the pencil formed by R(λ) with z_1^T appended, corresponding to the eigenvalue λ_1. Once this condition is satisfied, one can always find a u_2^T satisfying u_2^T(G_reg − λ_1 F_reg) + μ z_2^T = 0, making u^T the left eigenvector of P(λ_1), assuming λ_1 is not an eigenvalue of (G_reg − λF_reg); for then (G_reg − λ_1 F_reg) is nonsingular, and a solution always exists. If λ_1, u_2^T are an eigenvalue and left eigenvector of (G_reg − λF_reg), then a left eigenvector of P(λ) corresponding to λ_1 is (0, u_2^T, 0), regardless of whether or not the pencil formed by R(λ) with z_1^T appended also has an eigenvalue λ_1. We illustrate this case with the following example:

$$P(\lambda) = \begin{pmatrix} R(\lambda) & 0 \\ 0 & G_{reg} - \lambda F_{reg} \\ z_1^T & z_2 \end{pmatrix} = \begin{pmatrix} -\lambda & 1 & 0 \\ 0 & 0 & -\lambda \\ z_{11} & z_{12} & z_2 \end{pmatrix}.$$

Regardless of the choice of z_1^T, z_2, the entire pencil has an eigenvalue at 0 with left eigenvector (0, 1, 0). If we set z_1^T = (0, 1), so that the pencil formed by R(λ) with z_1^T appended also has an eigenvalue at 0, and we set z_2 = 1 to couple the two parts together, the resulting pencil is

$$P(\lambda) = \begin{pmatrix} -\lambda & 1 & 0 \\ 0 & 0 & -\lambda \\ 0 & 1 & 1 \end{pmatrix}.$$

This is a regular pencil with characteristic polynomial det P(λ) = −λ². It has a double, defective, eigenvalue at zero. If the two parts are decoupled by setting z_2 = 0, the characteristic polynomial remains unchanged, but the double eigenvalue at zero becomes nondefective. If instead we set z_1^T = (1, −1), the characteristic polynomial becomes λ² − λ, yielding simple eigenvalues at 0 and 1, independent of the choice of z_2. So if both individual pencils (the augmented R(λ) and G_reg − λF_reg) have a common eigenvalue, that eigenvalue may remain in the full pencil P(λ) with a Jordan chain combined from the Jordan chains of the individual pencils, or else the common eigenvalue may have a new independent Jordan chain. This implies that when placing new zeroes, it is best to avoid any existing zeroes if one wants to keep Jordan chains as short as possible. We can summarize some of this discussion with the following theorem.

Theorem 3. Consider the augmented pencil (5). As long as the newly placed zeroes do not coincide with any zeroes already existing in G_reg − λF_reg, the block Z_r may be computed to place those zeroes independently of Z_reg, Z_l. If new zeroes are placed over existing ones, their Jordan chains might or might not coalesce.

The same comment applies whenever common zeros are chosen between added blocks P_i(λ), since coupling is likely to occur via the nonzero elements above the diagonal in (5) as well. Since this paper focuses only on the placement of zeroes and not their Jordan structure, we do not pursue this discussion here.
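A quick numerical check of the Jordan structure in the example above: the geometric multiplicity of the eigenvalue 0 is 3 minus the rank of P(0), which separates the defective and nondefective cases.

```python
import numpy as np

def example_pencil(z1, z2):
    """P(lam) = G - lam*F for the 3x3 coupling example above."""
    G = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [z1[0], z1[1], z2]])
    F = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0]])
    return G, F

for z1, z2, label in [((0.0, 1.0), 1.0, "coupled"),
                      ((0.0, 1.0), 0.0, "decoupled"),
                      ((1.0, -1.0), 1.0, "no common eigenvalue")]:
    G, F = example_pencil(z1, z2)
    geo = 3 - np.linalg.matrix_rank(G)     # geometric multiplicity at lambda = 0
    print(f"{label}: geometric multiplicity of eigenvalue 0 = {geo}")
# coupled -> 1 (defective double eigenvalue), decoupled -> 2 (nondefective),
# third case -> 1 (the eigenvalue 0 is now simple)
```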

Computational Procedure

We sketch an algorithm to place the zeroes. The algorithm consists of three stages. In the first stage, we essentially compute the generalized Schur form (3), separating the left Kronecker part from the regular part and the right Kronecker part. This can be carried out using the staircase algorithm [2, 12] and we will not discuss this process in detail. In the second stage, we extract the individual Kronecker blocks, via a new elimination procedure proposed in this paper. Finally, in the third stage we compute the rows to be appended in order to place the zeroes. The entire extraction process (the first two stages) is carried out using only unitary transformations, hence it enjoys a backward stability property. The zero-placing part also enjoys a backward stability property, hence the numerical stability of the whole process is favorable. The broad steps of the process are as follows.

Algorithm 1 (Zero Placement).

1. Use the staircase algorithm [2, 12] to extract the left and right Kronecker part from the pencil. Assume the right Kronecker part of the pencil is the m × n pencil G_r − λF_r, with n = m + k. Choose p ≤ k as the number of rows to append.
2. Extract the individual Kronecker blocks from the staircase form.
3. For the largest Kronecker blocks, compute the row(s) that must be appended in order to place the desired zeroes.
4. Append the computed rows and back-transform through all the accumulated basis transformations back to the original basis of the given pencil G − λF.

We fill in some of the details for each step in turn.

Staircase Form

The staircase form is a form that can be obtained from the original pencil and that exposes the various Kronecker indices and orders of a pencil, as well as exposing any regular part. In fact, the presence of a regular part can be determined by the staircase algorithm, except that it may not be as numerically reliable as the approach in [4] (this is still an open area of research). The algorithm used to extract the left and right Kronecker part is the variant described in [2], to which we refer the reader for all the details, particularly on how to handle the presence of a regular part or left Kronecker blocks. The result of this algorithm is of the following typical form:

$$G_r - \lambda F_r = \begin{pmatrix} G_{1,1} & G_{1,2} & \cdots & G_{1,k} & G_{1,k+1} \\ & G_{2,2} & \cdots & G_{2,k} & G_{2,k+1} \\ & & \ddots & \vdots & \vdots \\ & & & G_{k,k} & G_{k,k+1} \end{pmatrix} - \lambda \begin{pmatrix} 0 & F_{1,2} & \cdots & F_{1,k} & F_{1,k+1} \\ & 0 & \cdots & F_{2,k} & F_{2,k+1} \\ & & \ddots & \vdots & \vdots \\ & & & 0 & F_{k,k+1} \end{pmatrix}, \qquad (10)$$

Figure 1 - Typical Staircase Form

where the matrices F_{i,i+1} are n_i × n_i and nonsingular, and the matrices G_{i,i} are n_i × n_{i-1} and of full row rank n_i. Therefore, the sequence {n_i, i = 1, ..., k} is nonincreasing, and its dual sequence gives the right Kronecker indices {s_i, i = 1, ..., n_r}, i.e. there are n_i − n_{i+1} indices equal to i for i = 1, ..., k, where we have assumed n_{k+1} = 0. Notice that this implies that if the smallest Kronecker index is s_1 = q, then the first q matrices G_{i,i}, i = 1, ..., q, are square and invertible as well. Finally, we need to use a variant of the above form where the square matrices F_{i,i+1} are upper triangular and the rectangular matrices G_{i,i} have leading zero columns and a trailing upper triangular matrix. This form can always be obtained as explained in [2] and will be exploited in the subsequent steps.
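The building block inside such a staircase reduction is a rank-revealing compression by a unitary transformation. A minimal sketch of one SVD-based column compression step (a generic primitive, not the full algorithm of [2, 12]):

```python
import numpy as np

def column_compress(M, tol=1e-12):
    """Return a unitary Q and the column-compressed M @ Q = [0 | M2],
    where M2 has full column rank; Q is built from the SVD of M."""
    U, s, Vh = np.linalg.svd(M)
    r = int(np.sum(s > tol * max(M.shape) * (s[0] if s.size else 1.0)))
    # Order the right singular vectors so the null-space directions come first.
    Q = np.hstack([Vh[r:].conj().T, Vh[:r].conj().T])
    return Q, M @ Q

# Example: compress the columns of a rank-1 matrix.
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
Q, MQ = column_compress(M)
print(np.round(MQ, 10))   # the first (n - r) columns are numerically zero
```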

Extract Individual Kronecker Blocks

The next step is to extract the individual Kronecker blocks. We do this by permuting toward the upper left the entries of the matrix corresponding to the Kronecker block of interest, and then annihilating the coupling elements in the appropriate off-diagonal block. Due to the nature of the Kronecker structure, it is necessary to extract first the smallest Kronecker block, then extract the next smallest from what is left, and so on.

It is easier to illustrate the process first and then describe it formally. Let q be the number of leading diagonal G blocks that are square; that is, let q be the number such that G_{1,1}, ..., G_{q,q} in (10) are square, but G_{q+1,q+1} is not. Then the smallest Kronecker index is s_1 = q. In the example worked out below we assumed there are 3 such blocks, so the smallest Kronecker index is s_1 = 3. In this particular example, we must thus form a 3 × 4 block. We form this block from the (1,1) entries of those first 3 square G blocks together with the (1,1) entries of the corresponding F blocks. That is, we permute the rows and columns of G, F to collect together the upper left entries of all the leading square G blocks. This will form a leading 3 × 4 submatrix. Then we must decouple this leading submatrix by eliminating the coupling entries. To show how this works in more detail, we partition all the blocks of Figure 1 to expose the (1,1) scalar entries, showing the result as Figure 2. We denote scalar entries by g, f, column vectors by g, f, row vectors by g^T, f^T (completely unrelated to the transpose of any corresponding column vector), and submatrices by G, F. Note that potentially the row, column and matrix blocks could be empty.

Figure 2 - Partitioned Staircase Form. [Each block of Figure 1 is partitioned to expose its (1,1) scalar entry.]

We permute the leading (1,1) entries into the upper left position to obtain Figure 3, in which the 3 × 4 block is exposed:

Figure 3 - Permuted Form. [The (1,1) entries of the leading square G blocks and of the corresponding F blocks are collected in the upper left corner.]

We then decouple the Kronecker block from the rest by annihilating, one by one, the entries in the lower left part. This is done with (almost) alternating left and right unitary transformations. The entries in the lower left are annihilated in a very particular order, starting with the "outer" diagonal strip. In this example, the first items eliminated are the five entries of the outer diagonal strip (marked in Figure 3), eliminated in a g, f, g, f, f order; then the next diagonal strip of three entries is eliminated in a g, f, g order; then finally the last remaining g entry is eliminated. The entries in G are eliminated using unitary transformations from the right, and the entries in F using transformations from the left, as illustrated in the Appendix. Each elimination results in a fill in the corresponding position in the upper right block. This example is sufficiently general to show the pattern of fills in the g, f part for the general case.

The result is shown in Figure 4. In Figure 4, a hat denotes entries that were modified from Figure 3, hatted zero entries were purposely eliminated, and hatted fill marks denote entries that were filled in during this process.

Figure 4 - Result from Extraction of Smallest Kronecker Block. [The extracted 3 × 4 block sits decoupled in the upper left; the (2,1) block is zero, the (1,2) block is full, and the (2,2) block is again in staircase form.]

We summarize the process as follows.

Algorithm 2 (Kronecker Block Extraction).

1. Start with an m × n pencil G − λF in staircase form, with k = n − m. Let q = s_1 (1 ≤ q ≤ k) be the number of leading diagonal G blocks that are square in the staircase form (Figure 1).
2. Permute rows and columns of the pencil so that the (1,1) entries of the blocks G_{ij}, F_{ij}, i = 1, ..., q, j = 1, ..., q+1, are in the upper left, as in Figure 3. Denote the partitioned m × n pencil by

$$\begin{pmatrix} G^{(1,1)} & G^{(1,2)} \\ G^{(2,1)} & G^{(2,2)} \end{pmatrix} - \lambda \begin{pmatrix} F^{(1,1)} & F^{(1,2)} \\ F^{(2,1)} & F^{(2,2)} \end{pmatrix},$$

   where each block is partitioned as in Figure 3. The leading block G^{(1,1)} − λF^{(1,1)} is q × (q+1). In the partitioning of Figure 3, for i ≤ q the blocks G_{ii} are k_i × k_i, so that G^{(1,1)}_{ii}, G^{(1,2)}_{ii}, G^{(2,1)}_{ii}, G^{(2,2)}_{ii} are, respectively, 1 × 1, 1 × (k_i − 1), (k_i − 1) × 1, and (k_i − 1) × (k_i − 1).
3. Eliminate the entries G^{(2,1)}_{ij}, F^{(2,1)}_{ij} in the permuted matrices. This modifies parts of all four blocks (1,1), (1,2), (2,1), (2,2).
   For i = q, q−1, ..., 1:
     For j = i, i−1, ..., 1:
       Push G^{(2,1)}_{j, j+1+k−i} right into G^{(2,2)}_{j,j}.
       If j > 1, push F^{(2,1)}_{j−1, j+1+k−i} up into F^{(2,2)}_{j+k−i, j+1+k−i}.
4. The resulting modified (1,1) block is not further modified by this algorithm, so we denote it by G^{[2]}_{11} − λF^{[2]}_{11}. The modified (2,2) block is still in staircase form. The modified (1,2) block is full, and the modified (2,1) block is all zero.
5. If k > 1, apply this algorithm recursively to the smaller pencil G^{(2,2)} − λF^{(2,2)}. Apply all the resulting unitary transformations from the right also to the block G^{(1,2)} − λF^{(1,2)}.

In the above algorithm description, we use the shorthand "push right" and "push up" to mean the following.

Push G^{(2,1)}_{ab} right into G^{(2,2)}_{cd} means: find a unitary transformation Q such that (G^{(2,1)}_{ab}, G^{(2,2)}_{cd}) Q = (0, G̃^{(2,2)}_{cd}), where G̃^{(2,2)}_{cd} is upper triangular. Then apply the transformation to the entire pencil: i.e., for all rows compute (G̃^{(2,1)}, G̃^{(2,2)}) = (G^{(2,1)}, G^{(2,2)}) Q, and replace G^{(2,1)}, G^{(2,2)} with G̃^{(2,1)}, G̃^{(2,2)}; do likewise with F^{(2,1)}, F^{(2,2)}.

Push F^{(2,1)}_{ab} up into F_{cd} means: find a unitary transformation Q' acting on the two rows involved such that

$$Q' \begin{pmatrix} F_{cd} \\ F^{(2,1)}_{ab} \end{pmatrix} = \begin{pmatrix} \tilde F_{cd} \\ 0 \end{pmatrix},$$

with F̃_{cd} upper triangular. Then apply the transformation to the entire pencil: i.e., for all columns compute the corresponding two rows of F multiplied by Q', and replace them with the results; do likewise on the corresponding rows of G.

The final form of this process is

$$Q_L^{[2]}\,(G^{[1]} - \lambda F^{[1]})\,Q_R^{[2]} = G^{[2]} - \lambda F^{[2]} = \begin{pmatrix} G^{[2]}_{11} & \cdots & G^{[2]}_{1k} \\ & \ddots & \vdots \\ & & G^{[2]}_{kk} \end{pmatrix} - \lambda \begin{pmatrix} F^{[2]}_{11} & \cdots & F^{[2]}_{1k} \\ & \ddots & \vdots \\ & & F^{[2]}_{kk} \end{pmatrix}, \qquad (11)$$

where for each i = 1, ..., k, G^{[2]}_{ii} is s_i × (s_i + 1) and upper trapezoidal, and F^{[2]}_{ii} = (0, U_i) with U_i an s_i × s_i upper triangular matrix, where s_1 ≤ ... ≤ s_k are the right Kronecker indices defined as in Theorem 2, in nondecreasing order. The s_i × (s_i + 1) pencil G^{[2]}_{ii} − λF^{[2]}_{ii} has full row rank s_i for all values of λ. Of course, this is not the Kronecker Canonical Form, but it is an analogous form achievable via unitary transformations. Each diagonal block is equivalent to a single Kronecker block. For almost any purpose for which the Kronecker form would be required, one can make use of this form just as well.

We remark that in this extraction process, all the entries that are annihilated are never filled in during subsequent steps. Assume we use Givens rotations to annihilate the entries. Applying each rotation costs O(n) operations, including the cost of accumulating the Givens rotations. At most O(n²) such rotations are generated, since there are no more than O(n²) entries to annihilate (we can't annihilate more entries than there are in the whole matrix!). Hence the entire extraction process takes O(n³) operations. Of course, a more precise analysis is possible, but it is difficult, because the exact cost will range from free to O(n³) depending on the exact distribution of the Kronecker indices.
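The "push right" primitive is essentially a single Givens rotation applied to two columns of the whole pencil, chosen to annihilate one unwanted entry. A minimal sketch with generic column indices (not tied to the particular block indices above):

```python
import numpy as np

def push_right(G, F, row, col_out, col_in):
    """Zero out G[row, col_out] by a Givens rotation of columns (col_out, col_in),
    applied to the whole pencil (G, F) from the right."""
    a, b = G[row, col_out], G[row, col_in]
    r = np.hypot(a, b)
    if r == 0.0:
        return G, F
    c, s = b / r, -a / r
    Q = np.array([[c, -s], [s, c]])          # 2x2 rotation acting on the two columns
    for M in (G, F):
        M[:, [col_out, col_in]] = M[:, [col_out, col_in]] @ Q
    return G, F

# Example: annihilate an unwanted coupling entry in G.
G = np.array([[2.0, 1.0], [0.5, 3.0]])
F = np.eye(2)
push_right(G, F, row=1, col_out=0, col_in=1)
print(np.round(G, 10))    # G[1, 0] is now (numerically) zero
```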

Place Zeroes

To the form (11) we can compute the p rows needed to place s̄ := s_{n_r} + ... + s_{n_r−p+1} zeroes. Let λ_1, ..., λ_{s̄} be the given set of new zeroes to be placed. Then the rows to append have the following form:

$$Z^{[2]} = \begin{pmatrix} 0 & \cdots & 0 & z^T_{n_r-p+1} & & \\ \vdots & & \vdots & & \ddots & \\ 0 & \cdots & 0 & & & z^T_{n_r} \end{pmatrix}, \qquad (12)$$

where the leading columns of zeroes correspond to the n_r − p smallest Kronecker blocks, and the nonzero entries are computed as described below. For each i = n_r, n_r−1, ..., n_r−p+1, the row z_i^T is chosen so that the square (s_i + 1) × (s_i + 1) pencil obtained by appending z_i^T to G^{[2]}_{ii} − λF^{[2]}_{ii} has s_i of the desired zeroes. Recall from (11) that this pencil has the general form (b_i, A_i) − λ(0, U_i), where U_i is square, upper triangular, and nonsingular. Let z_i^T = (β_i, y_i^T). Then observe that the zeroes of

$$\begin{pmatrix} b_i & A_i \\ \beta_i & y_i^T \end{pmatrix} - \lambda \begin{pmatrix} 0 & U_i \\ 0 & 0 \end{pmatrix}$$

are exactly the eigenvalues of the pencil A_i + b_iβ_i^{-1}y_i^T − λU_i. We can choose β_i = 1 and choose y_i^T by standard pole placement techniques [7, 10]. In this case, the new row is generally unique. To see that the {z_i^T} chosen in this way place the zeroes for the entire pencil, we can permute the rows to put the new regular part of the pencil in the lower right and the remaining right Kronecker structure in the upper left, e.g.

$$\begin{pmatrix} \tilde G - \lambda \tilde F & * \\ 0 & \hat G - \lambda \hat F \end{pmatrix}, \qquad (13)$$

where Ĝ − λF̂ is built from the diagonal blocks G^{[2]}_{ii} − λF^{[2]}_{ii}, i = n_r − p + 1, ..., n_r, each with its row z_i^T appended; it is regular with the desired zeroes. The block G̃ − λF̃ consists of the remaining diagonal blocks G^{[2]}_{ii} − λF^{[2]}_{ii}, i = 1, ..., n_r − p, and has the right Kronecker structure left over.

Returning to the original pencil G − λF, our original problem was to compute a p × n matrix Z to place s̄ zeroes. We collect together all the transformations to obtain the formula for Z:

$$\begin{pmatrix} Q_L^{[2]} Q_L^{[1]} & 0 \\ 0 & I \end{pmatrix}\begin{pmatrix} G - \lambda F \\ Z \end{pmatrix} Q_R^{[1]} Q_R^{[2]} = \begin{pmatrix} G^{[2]} - \lambda F^{[2]} \\ Z^{[2]} \end{pmatrix},$$

so that

$$Z := Z^{[2]}\,(Q_R^{[2]})^H\,(Q_R^{[1]})^H,$$

where the superscript H denotes the conjugate transpose.
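Schematically, the final assembly of Z is then just a matter of stacking the per-block rows (padded with the leading zero columns) and undoing the accumulated right transformations. In the sketch below, QR1 and QR2 are placeholders standing in for the accumulated Q_R^{[1]} and Q_R^{[2]} produced by the staircase and extraction stages:

```python
import numpy as np

def assemble_Z(z_rows, QR1, QR2):
    """Stack the per-block rows (already padded to full length, with leading
    zero columns for the untouched blocks) and map them back to the original
    basis: Z = Z2 @ QR2^H @ QR1^H, as in the text."""
    Z2 = np.vstack(z_rows)                        # p x n, the Z^[2] of (12)
    return Z2 @ QR2.conj().T @ QR1.conj().T       # p x n, rows to append to G - lambda*F

# Toy usage with identity transformations (stand-ins for the real accumulated ones):
n = 5
z_rows = [np.r_[np.zeros(n - 3), 1.0, 0.4, -2.0][None, :]]   # one appended row
Z = assemble_Z(z_rows, np.eye(n), np.eye(n))
print(Z.shape)    # (1, 5)
```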

Conclusion

We have examined the general problem of placing the generalized eigenvalues of an arbitrary matrix pencil by the addition of new rows of constant coefficients. We found that the number of zeroes that can be placed is limited to the order of the right Kronecker part (the sum of all the right Kronecker indices). In addition, when p rows are added and there are multiple right Kronecker indices, the number of zeroes that can be placed is limited to the sum of the p largest right Kronecker indices. When the number of rows added is just right to make the system square, then the number of zeroes that can be placed is equal to the sum of all the right Kronecker indices. We have outlined a new method based entirely on unitary transformations to compute the right Kronecker indices and to extract the individual Kronecker blocks. By combining this procedure with pole placement algorithms from the literature, we arrive at a complete method for assigning the generalized eigenvalues of a pencil, which enjoys good numerical stability properties due to the use of unitary transformations. From a control point of view, this method places the transmission zeroes by the synthesis of new outputs. It could just as easily be used to synthesize inputs instead. The new decomposition in which the individual Kronecker blocks are extracted represents a unitary analog to the Kronecker canonical form, in much the same spirit as the Schur decomposition is a unitary analog to the Jordan canonical form.

Acknowledgement

Part of this research was performed while the authors were visiting the Institute for Mathematics and its Applications of the University of Minnesota, Minneapolis, during the summer quarter of the Applied Linear Algebra Year organized there. We greatly appreciated the hospitality and the productive atmosphere of that institute. P. Van Dooren was partially supported by the National Science Foundation.

References

[1] W. A. Berger, R. J. Perry, and H. H. Sun. An algorithm for the assignment of system zeroes. Automatica, 1991.

[2] Th. Beelen and P. Van Dooren. An improved algorithm for the computation of Kronecker's canonical form of a singular pencil. Linear Algebra Appl., 1988.

[3] D. L. Boley. Estimating the sensitivity of the algebraic structure of matrix pencils with simple eigenvalue estimates. SIAM J. Matrix Anal., 1990.

[4] A. Emami-Naeini and P. Van Dooren. Computation of zeros of linear multivariable systems. Automatica, 1982.

[5] F. R. Gantmacher. Theory of Matrices. Chelsea, New York, 1959.

[6] T. Kailath. Linear Systems. Prentice Hall, 1980.

[7] J. Kautsky, N. K. Nichols, and P. Van Dooren. Robust pole assignment in linear state feedback. Int. J. Control, 1985.

[8] B. Kouvaritakis and A. G. J. MacFarlane. Geometric approach to the analysis and synthesis of system zeroes, part I. Int. J. Control, 1976.

[9] B. Kouvaritakis and A. G. J. MacFarlane. Geometric approach to the analysis and synthesis of system zeroes, part II: Non-square systems. Int. J. Control, 1976.

[10] G. Miminis and C. C. Paige. A direct method for pole assignment of time-invariant multi-input linear systems. Automatica, 1988.

[11] P. Misra. A computational algorithm for squaring up, part I: Zero input-output matrix. In Proc. Conf. on Decision and Control, December 1992.

[12] P. Van Dooren. The computation of Kronecker's canonical form of a singular pencil. Linear Algebra Appl., 1979.

[13] G. Verghese, P. Van Dooren, and T. Kailath. Properties of the system matrix of a generalized state-space system. Int. J. Control, 1979.

Appendix

We show the process to decouple the Kronecker block from the rest by annihilating, one by one, the entries in the lower left, starting with the state of Figure 3. This is done with almost alternating left and right unitary transformations. The entries in the lower left are annihilated in a very particular order, starting with the "outer" diagonal strip (marked with an overbar in Figure 3). We show the step-by-step annihilations for this particular example, where the rows and columns affected by the individual Givens rotation are marked with arrows, the entries used to construct the rotation are enclosed in boxes, and all the other entries modified in that step are marked with a wide tilde. Fills (zeroes made nonzero) are marked when they occur and carry a hat in subsequent steps; likewise, entries set to zero are marked the first time and carry a hat in subsequent steps. The hat denotes entries changed from Figure 3. In this particular example, steps 1-5 zero out the outer diagonal strip of G, F entries (marked in Figure 3), steps 6-8 zero out the next strip of each, and step 9 zeroes out the last, resulting in the situation of Figure 4. Note also that the trailing G, F blocks and all the entries below and to their right remain unchanged through this whole process.

Starting from the situation of Figure 3, the individual annihilation steps are:

Step 1: rotate from the right.
Step 2: rotate from the left.
Step 3: rotate from the right.
Step 4: rotate from the left.
Step 5: rotate from the right.
Step 6: rotate from the right.
Step 7: rotate from the left.
Step 8: rotate from the right.
Step 9: rotate from the right, yielding the situation of Figure 4.

Figure A - Annihilating Steps


More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

arxiv: v1 [cs.sy] 29 Dec 2018

arxiv: v1 [cs.sy] 29 Dec 2018 ON CHECKING NULL RANK CONDITIONS OF RATIONAL MATRICES ANDREAS VARGA Abstract. In this paper we discuss possible numerical approaches to reliably check the rank condition rankg(λ) = 0 for a given rational

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and

More information

A TOUR OF LINEAR ALGEBRA FOR JDEP 384H

A TOUR OF LINEAR ALGEBRA FOR JDEP 384H A TOUR OF LINEAR ALGEBRA FOR JDEP 384H Contents Solving Systems 1 Matrix Arithmetic 3 The Basic Rules of Matrix Arithmetic 4 Norms and Dot Products 5 Norms 5 Dot Products 6 Linear Programming 7 Eigenvectors

More information

Rank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about

Rank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about Rank-one LMIs and Lyapunov's Inequality Didier Henrion 1;; Gjerrit Meinsma Abstract We describe a new proof of the well-known Lyapunov's matrix inequality about the location of the eigenvalues of a matrix

More information

New concepts: Span of a vector set, matrix column space (range) Linearly dependent set of vectors Matrix null space

New concepts: Span of a vector set, matrix column space (range) Linearly dependent set of vectors Matrix null space Lesson 6: Linear independence, matrix column space and null space New concepts: Span of a vector set, matrix column space (range) Linearly dependent set of vectors Matrix null space Two linear systems:

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Contents. 6 Systems of First-Order Linear Dierential Equations. 6.1 General Theory of (First-Order) Linear Systems

Contents. 6 Systems of First-Order Linear Dierential Equations. 6.1 General Theory of (First-Order) Linear Systems Dierential Equations (part 3): Systems of First-Order Dierential Equations (by Evan Dummit, 26, v 2) Contents 6 Systems of First-Order Linear Dierential Equations 6 General Theory of (First-Order) Linear

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Some Classes of Invertible Matrices in GF(2)

Some Classes of Invertible Matrices in GF(2) Some Classes of Invertible Matrices in GF() James S. Plank Adam L. Buchsbaum Technical Report UT-CS-07-599 Department of Electrical Engineering and Computer Science University of Tennessee August 16, 007

More information

Linear Algebra. James Je Heon Kim

Linear Algebra. James Je Heon Kim Linear lgebra James Je Heon Kim (jjk9columbia.edu) If you are unfamiliar with linear or matrix algebra, you will nd that it is very di erent from basic algebra or calculus. For the duration of this session,

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

The Cayley-Hamilton Theorem and the Jordan Decomposition

The Cayley-Hamilton Theorem and the Jordan Decomposition LECTURE 19 The Cayley-Hamilton Theorem and the Jordan Decomposition Let me begin by summarizing the main results of the last lecture Suppose T is a endomorphism of a vector space V Then T has a minimal

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis (Numerical Linear Algebra for Computational and Data Sciences) Lecture 14: Eigenvalue Problems; Eigenvalue Revealing Factorizations Xiangmin Jiao Stony Brook University Xiangmin

More information

Matrices and Matrix Algebra.

Matrices and Matrix Algebra. Matrices and Matrix Algebra 3.1. Operations on Matrices Matrix Notation and Terminology Matrix: a rectangular array of numbers, called entries. A matrix with m rows and n columns m n A n n matrix : a square

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13 STAT 309: MATHEMATICAL COMPUTATIONS I FALL 208 LECTURE 3 need for pivoting we saw that under proper circumstances, we can write A LU where 0 0 0 u u 2 u n l 2 0 0 0 u 22 u 2n L l 3 l 32, U 0 0 0 l n l

More information

The Probability of Pathogenicity in Clinical Genetic Testing: A Solution for the Variant of Uncertain Significance

The Probability of Pathogenicity in Clinical Genetic Testing: A Solution for the Variant of Uncertain Significance International Journal of Statistics and Probability; Vol. 5, No. 4; July 2016 ISSN 1927-7032 E-ISSN 1927-7040 Published by Canadian Center of Science and Education The Probability of Pathogenicity in Clinical

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Control Systems. Laplace domain analysis

Control Systems. Laplace domain analysis Control Systems Laplace domain analysis L. Lanari outline introduce the Laplace unilateral transform define its properties show its advantages in turning ODEs to algebraic equations define an Input/Output

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

LECTURE VII: THE JORDAN CANONICAL FORM MAT FALL 2006 PRINCETON UNIVERSITY. [See also Appendix B in the book]

LECTURE VII: THE JORDAN CANONICAL FORM MAT FALL 2006 PRINCETON UNIVERSITY. [See also Appendix B in the book] LECTURE VII: THE JORDAN CANONICAL FORM MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO [See also Appendix B in the book] 1 Introduction In Lecture IV we have introduced the concept of eigenvalue

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

Early & Quick COSMIC-FFP Analysis using Analytic Hierarchy Process

Early & Quick COSMIC-FFP Analysis using Analytic Hierarchy Process Early & Quick COSMIC-FFP Analysis using Analytic Hierarchy Process uca Santillo (luca.santillo@gmail.com) Abstract COSMIC-FFP is a rigorous measurement method that makes possible to measure the functional

More information

I = i 0,

I = i 0, Special Types of Matrices Certain matrices, such as the identity matrix 0 0 0 0 0 0 I = 0 0 0, 0 0 0 have a special shape, which endows the matrix with helpful properties The identity matrix is an example

More information

Detailed Proof of The PerronFrobenius Theorem

Detailed Proof of The PerronFrobenius Theorem Detailed Proof of The PerronFrobenius Theorem Arseny M Shur Ural Federal University October 30, 2016 1 Introduction This famous theorem has numerous applications, but to apply it you should understand

More information

Linear Algebra. Solving Linear Systems. Copyright 2005, W.R. Winfrey

Linear Algebra. Solving Linear Systems. Copyright 2005, W.R. Winfrey Copyright 2005, W.R. Winfrey Topics Preliminaries Echelon Form of a Matrix Elementary Matrices; Finding A -1 Equivalent Matrices LU-Factorization Topics Preliminaries Echelon Form of a Matrix Elementary

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Chapter 6 Balanced Realization 6. Introduction One popular approach for obtaining a minimal realization is known as Balanced Realization. In this appr

Chapter 6 Balanced Realization 6. Introduction One popular approach for obtaining a minimal realization is known as Balanced Realization. In this appr Lectures on ynamic Systems and Control Mohammed ahleh Munther A. ahleh George Verghese epartment of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter 6 Balanced

More information