A method for computing quadratic Brunovsky forms


Electronic Journal of Linear Algebra, Volume 13 (2005), Article 3.

A method for computing quadratic Brunovsky forms

Wen-Long Jin (wjin@uci.edu)

Recommended Citation: Jin, Wen-Long (2005), "A method for computing quadratic Brunovsky forms", Electronic Journal of Linear Algebra, Volume 13.

This article is brought to you for free and open access by Wyoming Scholars Repository. It has been accepted for inclusion in the Electronic Journal of Linear Algebra by an authorized editor of Wyoming Scholars Repository. For more information, please contact scholcom@uwyo.edu.

A METHOD FOR COMPUTING QUADRATIC BRUNOVSKY FORMS

WEN-LONG JIN

Abstract. In this paper, for continuous, linearly-controllable quadratic control systems with a single input, an explicit, constructive method is proposed for studying their Brunovsky forms, initially studied in [W. Kang and A. J. Krener, Extended quadratic controller normal form and dynamic state feedback linearization of nonlinear systems, SIAM Journal on Control and Optimization, 30, 1992]. In this approach, the computation of Brunovsky forms and transformation matrices and the proof of their existence and uniqueness are carried out simultaneously. In addition, it is shown that the quadratic transformations in the aforementioned paper can be simplified to prevent multiplicity in Brunovsky forms. The method is then extended to discrete quadratic systems. Finally, computation algorithms for both continuous and discrete systems are summarized, and examples are demonstrated.

Key words. Continuous quadratic systems, Discrete quadratic systems, Linearly-controllable control systems, Quadratic transformations, Quadratic state feedback equivalence, Quadratic Brunovsky forms.

AMS subject classifications. 93B10, 15A04, 93B40.

1. Introduction. Linear control systems can be continuous in time t,

(1.1)  \dot{\xi} = A\xi + b\mu,

or discrete,

(1.2)  \xi(t+1) = A\xi(t) + b\mu(t),

where the coefficients A \in R^{n \times n} and b \in R^{n \times 1} are generally constant, and the state variable \xi (\in R^n) and the control variable \mu (\in R) are continuous or discrete, respectively. When controllable, both (1.1) and (1.2) admit the Brunovsky form [1], which is derived from the controller form under the following linear change of coordinates and state feedback,

(1.3)  \xi = Tx,  \mu = u + x^T v,

where x and u are the new state and control variables, respectively, and T \in R^{n \times n} and v \in R^{n \times 1} are the transformation matrix and vector (refer to Chapter 3 of [2]). In the linear Brunovsky form of (1.1) and (1.2), A and b have the following forms:

(1.4)  A = \begin{bmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix},  b = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.

Received by the editors 4 April 2003. Accepted for publication 26 January 2005. Handling Editor: Michael Neumann.
Institute of Transportation Studies, University of California, 522 Social Science Tower, Irvine, CA, USA (wjin@uci.edu).

In an attempt to extend the Brunovsky form to non-linear systems, which generally do not even have controller forms, Kang and Krener [3] studied continuous, linearly-controllable quadratic control systems with a single input, which can be written as

(1.5)  \dot{\xi} = A\xi + b\mu + F^{[2]}(\xi) + G\xi\mu + O(\xi,\mu)^3,

where F^{[2]}(\xi) = (\xi^T F_1 \xi, \dots, \xi^T F_n \xi)^T is a vector of n quadratic terms with symmetric n \times n matrices F_i (i = 1, \dots, n), G\xi\mu = (G_1\xi\mu, \dots, G_n\xi\mu)^T is a vector of n bilinear terms with G \in R^{n \times n}, and O(\xi,\mu)^3 includes all terms \xi^a \mu^b with a + b \ge 3. (We use different notations from [3] for the purpose of clearly presenting our method of computation.) Moreover, the following additional assumptions are made for this system. First, the coefficients in (1.5) and (1.9), A, b, F_i (i = 1, \dots, n), G, and h, are assumed to be time-invariant. Second, the two systems are assumed to be linearly controllable; by linearly controllable we mean that the linear parts of the two quadratic systems are controllable and, as a result, the linear parts can be transformed into the Brunovsky form with (1.3).

Kang and Krener [3] first defined quadratic state feedback equivalence up to second order, or quadratic equivalence for short, under the following quadratic change of coordinates and state feedback,

(1.6)  \xi = x + P^{[2]}(x) + O(x)^3,  \nu = \mu + x^T Q x + r x \mu + O(x,\mu)^3,

in which P^{[2]}(x) = (x^T P_1 x, \dots, x^T P_n x)^T is a vector of n quadratic terms, and the transformation matrices include symmetric P_i \in R^{n \times n} (i = 1, \dots, n), symmetric Q \in R^{n \times n}, and r \in R^{1 \times n}. We can see that (1.6) is equivalent to

(1.7)  \xi = x + P^{[2]}(x) + O(x)^3,  \mu = \nu - x^T Q x - r x \nu + O(x,\nu)^3,

and we hereafter refer to the transformations of [3] as (1.7). Then, from all quadratically equivalent systems of a general system (1.5), two types of Brunovsky forms were defined in [3]. In type I forms, the nonlinear terms are reduced to a number of quadratic terms x_i^2; i.e., there are no cross terms in x_i and x_j (i \ne j) and no bilinear terms x\nu. In type II forms, only bilinear terms are kept. In both types of Brunovsky forms, the number of non-zero nonlinear terms is n(n-1)/2, compared to n^2(n+3)/2 for a general quadratic system.

In this paper, we propose a new method for studying the Brunovsky forms, first for (1.5) under the transformations (1.7). This method can be carried out in three steps: first, we find the relationships between the coefficients of (1.5), the coefficients of its quadratically equivalent systems, and the corresponding transformation matrices; second, from these relationships, we derive a mapping from the coefficients of (1.5) and its equivalent systems to the transformation matrix P_1, which can be considered a necessary condition that all equivalent systems of (1.5) should satisfy; third, we show how to compute, from the necessary condition, the Brunovsky forms as well as the corresponding transformations. With this method, we find that (1.5) admits the same two types of Brunovsky forms defined in [3]. In contrast, our approach, constructive in

nature, is capable of computing the Brunovsky forms and the transformation matrices simultaneously. We further show that the quadratic transformations (1.7) can be simplified, by setting r = 0, to

(1.8)  \xi = x + P^{[2]}(x) + O(x)^3,  \mu = \nu - x^T Q x + O(x,\nu)^3.

Still defining quadratic equivalence in the sense of [3], the new transformations prevent multiple solutions of type I or type II Brunovsky forms; i.e., the Brunovsky form of each type and the corresponding transformation matrices P_i (i = 1, \dots, n) and Q are uniquely determined by the original system. Moreover, we apply the same method, but with the simplified transformations defined in (1.8), to study the following discrete system,

(1.9)  \xi(t+1) = A\xi(t) + b\mu(t) + F^{[2]}(\xi(t)) + G\xi(t)\mu(t) + h\mu^2(t) + O(\xi,\mu)^3,

where, similarly, F^{[2]}(\xi(t)) and G\xi(t)\mu(t) are a vector of quadratic terms and a vector of bilinear terms, respectively.^1 We find that for (1.9) there exists only one type of Brunovsky form, consisting of n(n+1)/2 bilinear terms, which corresponds to the type II Brunovsky forms of continuous systems.

The rest of this paper is organized as follows. After reviewing the Brunovsky form of linear systems (Section 2), we study the Brunovsky forms of continuous quadratic systems in Section 3 and of discrete quadratic systems in Section 4. In Section 5, we summarize our method into two computation algorithms: one for continuous systems, and the other for discrete systems. We conclude the paper in Section 6.

2. Review: computation of the Brunovsky form of a controllable linear control system. We first review the computation of the Brunovsky form of a continuous controllable linear control system (1.1).^2 The procedure and the results also apply to the discrete system (1.2). Computation of the Brunovsky form for (1.1) comprises two steps: first, the linear system is transformed into the controller form with a linear change of coordinates given by the first equation in (1.3); second, the controller form is further reduced to the Brunovsky form with a state feedback given by the second equation in (1.3).

The controllability matrix for (1.1) is defined as

(2.1)  C = [A^{n-1}b \; \cdots \; Ab \; b].

Since (1.1) is controllable, rank(C) is n and, therefore, C is invertible. Let d denote the first row of C^{-1}; then the transformation matrix T in (1.3) is determined by

(2.2)  T^{-1} = \begin{bmatrix} d \\ dA \\ \vdots \\ dA^{n-2} \\ dA^{n-1} \end{bmatrix},

^1 Note that the discrete system (1.9) contains a term quadratic in the control variable, h\mu^2(t), where h \in R^{n \times 1}, and (1.9) has n^2(n+3)/2 + n non-zero nonlinear terms.
^2 For further explanation, refer to [1] and Chapter 3 of [2].
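The construction above can be checked numerically. The sketch below uses an arbitrarily chosen controllable pair (a hypothetical example, not from the paper): it builds the controllability matrix, extracts the row d, stacks d, dA, dA^2 to form T^{-1}, and verifies that \bar{A} = T^{-1}AT and \bar{b} = T^{-1}b have the controller-form structure. Note that with the column order [b, Ab, ..., A^{n-1}b] used here, d is the last row of C^{-1}, which equals the first row of the inverse for the reversed column order written in (2.1).

```python
import numpy as np

# Hypothetical controllable pair (A, b) with n = 3; any controllable pair works.
A = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 2.]])
b = np.array([[0.], [1.], [1.]])
n = 3

# Controllability matrix, here ordered as [b, Ab, A^2 b].
C = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
assert np.linalg.matrix_rank(C) == n  # controllable

# d satisfies d b = 0, d A b = 0, d A^2 b = 1; stack d, dA, dA^2 as in (2.2).
d = np.linalg.inv(C)[-1]
Tinv = np.vstack([d @ np.linalg.matrix_power(A, k) for k in range(n)])
T = np.linalg.inv(Tinv)

Abar = Tinv @ A @ T   # controller form: ones on the superdiagonal
bbar = Tinv @ b       # e_n

assert np.allclose(bbar.ravel(), [0, 0, 1])
assert np.allclose(Abar[:2, :], [[0, 1, 0], [0, 0, 1]], atol=1e-8)
```

The last row of \bar{A} then carries the characteristic-polynomial coefficients, which the state feedback of (2.5) cancels to produce the Brunovsky form.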

and with the linear change of coordinates \xi = Tx, (1.1) can be transformed into the following controller form,

(2.3)  \dot{x} = \bar{A}x + \bar{b}\mu,

in which

(2.4)  \bar{A} = T^{-1}AT = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ -v_1 & -v_2 & \cdots & -v_n \end{bmatrix},  \bar{b} = T^{-1}b = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.

Then, using the linear state feedback \mu = u + x^T v, in which

(2.5)  v = (v_1, v_2, \dots, v_n)^T,

we can transform the controller form (2.3) into the following Brunovsky form,

(2.6)  \dot{x} = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u.

From the derivation above, we can see that the Brunovsky form exists and is unique, and that the transformations can be computed explicitly. In addition, this procedure can easily be integrated into applications related to the Brunovsky form. In the same spirit, we propose a method for directly computing quadratic Brunovsky forms and the corresponding transformations in the following sections.

3. Computation of continuous quadratic Brunovsky forms. Note that, under the linear transformations in (1.3), the quadratic systems (1.5) and (1.9) do not change their forms, only their coefficients. Without loss of generality, therefore, we hereafter assume that A and b in (1.5) (and also in (1.9)) have already been transformed into the forms defined by (1.4). In this section, we study the Brunovsky forms of the continuous system (1.5) under the transformations (1.7), in a constructive manner. First, we find relationships between the coefficients of (1.5), the coefficients of its quadratically equivalent systems, and the corresponding transformation matrices. Second, from these relationships, we derive a mapping from the coefficients of (1.5) and those of the equivalent systems to the transformation matrix P_1; this mapping is a necessary condition that all the equivalent systems should satisfy. Third, we show how to obtain the two types of Brunovsky forms as well as the corresponding transformations.

3.1. Quadratically equivalent systems of (1.5).

Definition 3.1. Assume A \in R^{n \times n} is in the linear Brunovsky form given by (1.4). We define a linear operator L : R^{n \times n} \to R^{n \times n} by

(3.1)  L^0 P = P,  LP = A^T P + PA,  L^{i+1} P = L(L^i P),  i = 0, 1, \dots.

Properties of L. The linear operator L has the following properties:
1. L^i P = 0 when i \ge 2n - 1.
2. The nullity of L is n; if P \in ker(L), i.e., LP = 0, then P can be written as
   P_{ij} = 0 for i + j \le n,  P_{ij} = (-1)^i p_{i+j-n} otherwise,
   in which p_1, \dots, p_n are independent.
3. L is not invertible.
4. If P is symmetric, LP is symmetric.

Theorem 3.2. The continuous quadratic system (1.5) is equivalent, in the sense of [3], under the quadratic transformations given by (1.7), to a quadratic system whose ith (i = 1, \dots, n) equation is

(3.2)  \dot{x}_i = x_{i+1} + b_i \nu + x^T \bar{F}_i x + \bar{G}_i x \nu + O(x,\nu)^3,

where x_{n+1}(t) is a dummy state variable, \bar{F}_1, \dots, \bar{F}_n are symmetric, \bar{G}_i is the ith row of the matrix \bar{G}, and the coefficients \bar{F}_i (i = 1, \dots, n) and \bar{G} are defined by

(3.3)  \bar{F}_i = F_i + P_{i+1} - L P_i - b_i Q,
(3.4)  \bar{G}_i = G_i - 2 b^T P_i - b_i r,

where P_{n+1} \in R^{n \times n} is a zero dummy transformation matrix, and b_i is the ith element of b defined in (1.4).

Proof. Plugging the transformations defined in (1.7) into (1.5), we obtain

\dot{x} + \frac{d}{dt} P^{[2]}(x) = Ax + b\nu + F^{[2]}(x) + Gx\nu + A P^{[2]}(x) - b x^T Q x - b r x \nu + O(x,\nu)^3,

of which the ith (i = 1, \dots, n) differential equation can be written as

\dot{x}_i + \frac{d}{dt}(x^T P_i x) = x_{i+1} + b_i \nu + x^T F_i x + G_i x \nu + x^T P_{i+1} x - b_i x^T Q x - b_i r x \nu + O(x,\nu)^3.

After plugging \dot{x} = Ax + b\nu + O(x,\nu)^2 into the term \frac{d}{dt}(x^T P_i x) and collecting all terms whose orders are higher than two, we obtain the equivalent system (3.2) with the coefficients given in (3.3) and (3.4).
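The stated properties of L lend themselves to a quick numerical check. The sketch below (the size n = 4 and the random seed are arbitrary choices, not from the paper) verifies the nilpotency index and the nullity by representing L as an n^2 x n^2 matrix acting on vec(P):

```python
import numpy as np

n = 4
A = np.diag(np.ones(n - 1), k=1)     # linear Brunovsky form (1.4)
Lop = lambda P: A.T @ P + P @ A      # L P = A^T P + P A  (3.1)

# Property 1: L^i P = 0 once i >= 2n - 1, for any P.
rng = np.random.default_rng(0)
P = rng.standard_normal((n, n))
for _ in range(2 * n - 1):
    P = Lop(P)
assert np.allclose(P, 0)

# Property 2: the nullity of L is n.  Build the matrix of L acting on
# vec(P) column by column and check its rank is n^2 - n.
M = np.zeros((n * n, n * n))
for idx in range(n * n):
    E = np.zeros(n * n)
    E[idx] = 1.0
    M[:, idx] = Lop(E.reshape(n, n)).ravel()
assert np.linalg.matrix_rank(M) == n * n - n
```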

Remarks. Equations (3.3) and (3.4) present the relationships between the coefficients of (1.5), those of its quadratically equivalent system (3.2), and the corresponding transformation matrices. We further simplify these relationships in the following subsections.

3.2. A necessary condition for quadratically equivalent systems.

Lemma 3.3. The mapping from the coefficients G, \bar{G}, and r to the transformation matrices P_i (i = 1, \dots, n) is given by

(3.5)  \begin{bmatrix} [P_1]_{(n)} \\ [P_2]_{(n)} \\ \vdots \\ [P_n]_{(n)} \end{bmatrix} = \frac{1}{2} G - \frac{1}{2}\bar{G} - \frac{1}{2} b r,

where the operator [\cdot]_{(n)} takes the nth row of its object.

Proof. Since b_i = 0 (i = 1, \dots, n-1) and b_n = 1, we obtain from (3.4)

[P_i]_{(n)} = b^T P_i = \frac{1}{2} G_i - \frac{1}{2}\bar{G}_i - \frac{1}{2} b_i r.

Thus we have (3.5).

Lemma 3.4. The mapping from the coefficients F_i and \bar{F}_i (i = 1, \dots, n) to the transformation matrices P_i (i = 1, \dots, n) and Q is given by

(3.6)  P_{i+1} = L^i P_1 - \sum_{j=0}^{i-1} L^j F_{i-j} + \sum_{j=0}^{i-1} L^j \bar{F}_{i-j},  i = 1, \dots, n-1,

(3.7)  Q = \sum_{j=0}^{n-1} L^j (F_{n-j} - \bar{F}_{n-j}) - L^n P_1.

Proof. When i = 1, \dots, n-1, we have from (3.3) that P_{i+1} = L P_i - F_i + \bar{F}_i. Iterating this equation with respect to i, we obtain (3.6). When i = n, (3.3) can be written as Q = F_n - \bar{F}_n - L P_n. Since P_n is given by (3.6), we then have (3.7).

Definition 3.5. With A \in R^{n \times n} and L defined in (1.4) and Definition 3.1, respectively, we define a series of linear operators X_i : R^{n \times n} \to R^{n \times n} (i = 0, 1, \dots) by

(3.8)  X_0 P = \begin{bmatrix} [L^0 P]_{(n)} \\ [L^1 P]_{(n)} \\ \vdots \\ [L^{n-1} P]_{(n)} \end{bmatrix},  X_i P = (A^T)^i X_0 P.

Properties of X_i. The linear operators X_i (i = 0, 1, \dots) have the following properties:
1. X_0 transforms a diagonal matrix into a skew-triangular matrix of the following structure. For a kth (k = -n+1, \dots, n-1) diagonal matrix P, denote the kth diagonal elements by p_l = P_{(|k|-k)/2+l, (|k|+k)/2+l} (l = 1, \dots, n-|k|). Then X_0 P has (X_0 P)_{n-(|k|-k)/2-l+1, (|k|+k)/2+l} = p_l; all entries (X_0 P)_{i,j} with i + j = n + k + 1 + 2m, where m \ge 1 and n + k + 1 + 2m \le 2n, are determined by the p_l; and the other entries are zero.^3
2. From the preceding property, X_0 transforms a strictly lower-triangular matrix into a full matrix and a strictly upper-triangular matrix into a lower skew-triangular matrix \Delta, defined by

(3.9)  \Delta_{ij} = 0 when i + j \le n + 1.

3. From Properties 1 and 2, the nullity of X_0 is 0. Therefore, X_0 is invertible.
4. From the definition of L, X_i = 0 when i \ge n.
5. From the definition of X_i (i = 1, \dots, n-1), X_i P can be obtained by shifting X_0 P down by i rows and replacing the first i rows by zeros. From Property 1, therefore, X_i transforms a diagonal matrix P, whose main diagonal elements are p_1, \dots, p_n, into a lower skew-triangular matrix of the following structure: (X_i P)_{n-l+i+1, l} = p_l for l = i+1, \dots, n, and all other elements, except those (X_i P)_{i,j} with i + j = n + i + 2m for integer m \ge 1 and n + i + 2m \le 2n, are zero.

Theorem 3.6. The mapping from F_i, G, \bar{F}_i, \bar{G} (i = 1, \dots, n-1), and r to P_1 is given by

(3.10)  P_1 = X_0^{-1} \Big( \sum_{i=1}^{n-1} X_i F_i + \frac{1}{2} G - \sum_{i=1}^{n-1} X_i \bar{F}_i - \frac{1}{2}\bar{G} - \frac{1}{2} b r \Big).

Proof. From (3.6) we can find the nth row of P_{i+1} (i = 1, \dots, n-1) as

[P_{i+1}]_{(n)} = [L^i P_1]_{(n)} - \sum_{j=0}^{i-1} [L^j F_{i-j}]_{(n)} + \sum_{j=0}^{i-1} [L^j \bar{F}_{i-j}]_{(n)}.

Hence, we have

(3.11)  \begin{bmatrix} [P_1]_{(n)} \\ [P_2]_{(n)} \\ \vdots \\ [P_n]_{(n)} \end{bmatrix} = X_0 P_1 - \sum_{i=1}^{n-1} X_i F_i + \sum_{i=1}^{n-1} X_i \bar{F}_i.

^3 Note that, for n = 2, there is no solution for m. In this case, all entries are zero except (X_0 P)_{ij} for i + j = n + k + 1.

From (3.5) and (3.11), we obtain

X_0 P_1 = \sum_{i=1}^{n-1} X_i F_i + \frac{1}{2} G - \sum_{i=1}^{n-1} X_i \bar{F}_i - \frac{1}{2}\bar{G} - \frac{1}{2} b r.

Multiplying both sides by the inverse of X_0, we then have (3.10).

Remarks. Note that P_1 is assumed to be symmetric. Therefore, the quadratically equivalent coefficients \bar{F}_i (i = 1, \dots, n-1) and \bar{G} have to ensure the symmetry of the right hand side of (3.10). Thus (3.10) constitutes a necessary condition for all equivalent systems of (1.5), including the Brunovsky forms.

3.3. Computation of the Brunovsky forms and the transformation matrices. In this subsection, given r as well as the coefficients F_i (i = 1, \dots, n) and G of (1.5), we show how to choose \bar{F}_i (i = 1, \dots, n) and \bar{G} in the Brunovsky forms, which, first, satisfy the necessary condition (3.10) and, second, have the smallest number of non-zero terms.

Since \bar{F}_n does not appear in the necessary condition (3.10), we simply let \bar{F}_n = 0 in the Brunovsky forms. To determine the other coefficients, we decompose the terms on the right hand side of (3.10) related to the original system (1.5) and r as

(3.12)  X_0^{-1} \Big( \sum_{i=1}^{n-1} X_i F_i + \frac{1}{2} G - \frac{1}{2} b r \Big) = L + D + U,

where L, D, U are strictly lower triangular, diagonal, and strictly upper triangular matrices, respectively. In order for the right hand side of (3.10) to be symmetric after subtracting X_0^{-1}(\sum_{i=1}^{n-1} X_i \bar{F}_i + \frac{1}{2}\bar{G}), we have, according to the properties of the decomposition, the following cases.

1. If L - U^T = 0, then L + D + U is already symmetric, and we simply set \bar{F}_i (i = 1, \dots, n-1) and \bar{G} to be 0. In this case, the Brunovsky form of (1.5) is a linear system; i.e., (1.5) can be linearized.
2. When L - U^T \ne 0, we can set X_0^{-1}(\sum_{i=1}^{n-1} X_i \bar{F}_i + \frac{1}{2}\bar{G}) to the lower-triangular matrix L - U^T. In this case, P_1 = U^T + D + U. According to the properties of X_0, \sum_{i=1}^{n-1} X_i \bar{F}_i + \frac{1}{2}\bar{G} is then a full matrix.
3. When L - U^T \ne 0, we can also set X_0^{-1}(\sum_{i=1}^{n-1} X_i \bar{F}_i + \frac{1}{2}\bar{G}) to the upper-triangular matrix U - L^T. Then \sum_{i=1}^{n-1} X_i \bar{F}_i + \frac{1}{2}\bar{G} is a lower skew-triangular matrix defined by (3.9), which has n(n-1)/2 non-zero terms, as many as U - L^T. In this case, we have

(3.13)  \sum_{i=1}^{n-1} X_i \bar{F}_i + \frac{1}{2}\bar{G} = X_0(U - L^T) \equiv \Delta_1,

and the first transformation matrix is given by

(3.14)  P_1 = L + D + L^T.

4. In addition to the aforementioned cases, X_0^{-1}(\sum_{i=1}^{n-1} X_i \bar{F}_i + \frac{1}{2}\bar{G}) can also be either L - U^T or U - L^T plus an arbitrary symmetric matrix.
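Properties 2 and 3 of X_0, on which the case analysis above relies, can likewise be checked numerically. The sketch below (with an arbitrary n and random data, not from the paper) represents X_0 as a matrix on vec(P), checks that it is invertible, and checks that a strictly upper-triangular matrix is mapped to a lower skew-triangular matrix in the sense of (3.9):

```python
import numpy as np

n = 4
A = np.diag(np.ones(n - 1), k=1)     # linear Brunovsky form (1.4)
Lop = lambda P: A.T @ P + P @ A      # continuous operator L  (3.1)

def X0(P):
    # Stack the n-th rows of L^0 P, L^1 P, ..., L^{n-1} P  (3.8)
    rows, Q = [], P.copy()
    for _ in range(n):
        rows.append(Q[-1].copy())
        Q = Lop(Q)
    return np.vstack(rows)

# Represent X0 as an n^2 x n^2 matrix acting on vec(P).
M = np.zeros((n * n, n * n))
for idx in range(n * n):
    E = np.zeros(n * n)
    E[idx] = 1.0
    M[:, idx] = X0(E.reshape(n, n)).ravel()

# Property 3: the nullity of X0 is 0, i.e. X0 is invertible.
assert np.linalg.matrix_rank(M) == n * n

# Property 2: a strictly upper-triangular matrix maps to a lower
# skew-triangular matrix, zero whenever i + j <= n + 1 (1-indexed), cf. (3.9).
rng = np.random.default_rng(0)
XU = X0(np.triu(rng.standard_normal((n, n)), k=1))
for i in range(n):
    for j in range(n):
        if (i + 1) + (j + 1) <= n + 1:
            assert abs(XU[i, j]) < 1e-12
```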

Comparing Cases 2, 3, and 4, we can see that the solutions in Cases 2 and 4 have more non-linear terms than in Case 3. Therefore, in the Brunovsky forms, we set X_0^{-1}(\sum_{i=1}^{n-1} X_i \bar{F}_i + \frac{1}{2}\bar{G}) to be U - L^T, and from (3.13) we can obtain the two types of Brunovsky forms as follows.

First, by setting \bar{G} = 0, we have from (3.13) that \sum_{i=1}^{n-1} X_i \bar{F}_i = \Delta_1, which is a lower skew-triangular matrix. To obtain the smallest number, i.e., n(n-1)/2, of non-zero terms in the Brunovsky forms, we can select each \bar{F}_i to be a diagonal matrix with main diagonal elements (0, \dots, 0, \bar{f}_{i,i+1}, \dots, \bar{f}_{i,n}), which can be uniquely computed as follows. From the properties of X_i (i = 1, \dots, n-1), we first have

(3.15)  \bar{f}_{1,j} = (X_1 \bar{F}_1)_{n-j+2, j} = (\Delta_1)_{n-j+2, j},  j = 2, \dots, n,

and

(3.16)  \Delta_2 = \Delta_1 - X_1 \bar{F}_1.

Then we can compute \bar{f}_{2,j} for j = 3, \dots, n from \Delta_2, and \Delta_3 = \Delta_2 - X_2 \bar{F}_2, in the same fashion. By repeating this process, we can obtain all non-zero elements of \bar{F}_i for i = 1, \dots, n-1. Since all these matrices are uniquely determined by \Delta_1, the original system is uniquely equivalent to the following system,

(3.17)  \dot{x}_i = x_{i+1} + b_i \nu + \sum_{j=i+1}^{n} \bar{f}_{ij} x_j^2 + O(x,\nu)^3,  i = 1, \dots, n,

which is the type I, complete-quadratic Brunovsky form in [3].

Second, by setting \bar{F}_i = 0 (i = 1, \dots, n-1), we have \bar{G} = 2\Delta_1 and the corresponding equivalent system,

(3.18)  \dot{x}_i = x_{i+1} + b_i \nu + \sum_{j=n-i+2}^{n} \bar{G}_{ij} x_j \nu + O(x,\nu)^3,  i = 1, \dots, n,

which is the type II, bilinear Brunovsky form in [3]. We can see that \bar{G} is also uniquely determined by the original system.

Once we have the Brunovsky forms of a continuous quadratic system, we can solve for the corresponding transformation matrices P_i (i = 1, \dots, n) and Q from (3.14), (3.6), and (3.7). These transformation matrices are also unique for each Brunovsky form.

3.4. Discussions. In the preceding subsection, we finished computing the two types of Brunovsky forms and the corresponding transformation matrices of a linearly controllable quadratic system (1.5). Besides, we showed that the Brunovsky form of each type and the corresponding transformation matrices are unique. However, the uniqueness depends on r. That is, for different values of r, a system can have multiple solutions of type I or type II Brunovsky forms. A straightforward strategy to prevent this multiplicity is to set r = 0, which yields the simplified version of the transformations defined by (1.8). That is, with (1.8), a quadratic system has unique

type I and type II Brunovsky forms. Further, one can follow the arguments of Kang and Krener [3], but with r = 0, and prove that (1.8) still defines quadratic equivalence in the same sense.^4

4. Computation of discrete quadratic Brunovsky forms. In this section, we apply the method proposed in the preceding section to solve for the Brunovsky forms of a discrete quadratic control system (1.9), with the simplified transformations defined in (1.8). We carry out our study in the same three-step, constructive procedure as for the continuous system. Compared to a continuous system, the discrete system has one more term, and the relationships between coefficients and transformations, as well as the resulting Brunovsky forms, are fundamentally different from those of continuous systems.

4.1. Quadratically equivalent systems of (1.9).

Definition 4.1. Assume A \in R^{n \times n} is in the linear Brunovsky form defined in (1.4). We define a linear operator L : R^{n \times n} \to R^{n \times n} by

L^0 P = P,  LP = A^T P A,  L^{i+1} P = L(L^i P),  i = 0, 1, \dots.

Properties of L. The linear operator L has the following properties:
1. L^i P = 0 when i \ge n.
2. The nullity of L is 2n - 1; if P \in ker(L), then P_{ij} = 0 when (n - i)(n - j) > 0, and the elements in the nth row and nth column can be arbitrarily selected.
3. L is not invertible.
4. If P is symmetric, LP is symmetric; but the converse may not be true.

Remarks. Note that the linear operator L for the discrete system is different from that for continuous systems. This fundamental difference leads to the differences in the relationships between coefficients and transformations and in the Brunovsky forms.

Theorem 4.2. The discrete system (1.9) is quadratically equivalent, in the sense of [3], under the quadratic transformations in (1.8), to a system whose ith (i = 1, \dots, n) equation is

(4.1)  x_i(t+1) = x_{i+1}(t) + b_i \nu(t) + x^T(t) \bar{F}_i x(t) + \bar{G}_i x(t) \nu(t) + O(x,\nu)^3,

where x_{n+1}(t) is a dummy state variable, \bar{F}_1, \dots, \bar{F}_n are symmetric, \bar{G}_i is the ith row of the matrix \bar{G}, and the coefficients \bar{F}_i and h_i (i = 1, \dots, n) and \bar{G} are determined by

(4.2)  \bar{F}_i = F_i + P_{i+1} - L P_i - b_i Q,
(4.3)  \bar{G}_i = G_i - 2 b^T P_i A,
(4.4)  h_i = (P_i)_{nn},

where P_{n+1} is a zero dummy matrix, b_i is the ith element of b, and (P_i)_{nn} is the (n, n) entry of the matrix P_i.

^4 Due to the difference in notations, r = 0 is equivalent to saying \beta^{[1]} = 0 in [3].
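Before continuing, the properties of the discrete operator L of Definition 4.1 can be checked numerically in the same way as in the continuous case (a sketch; n and the random seed are arbitrary choices, not from the paper):

```python
import numpy as np

n = 4
A = np.diag(np.ones(n - 1), k=1)   # linear Brunovsky form (1.4)
Lop = lambda P: A.T @ P @ A        # discrete operator L (Definition 4.1)

# Property 1: L^i P = 0 once i >= n (the continuous case needed i >= 2n - 1).
rng = np.random.default_rng(1)
P = rng.standard_normal((n, n))
for _ in range(n):
    P = Lop(P)
assert np.allclose(P, 0)

# Property 2: the nullity is 2n - 1; the kernel consists of matrices
# supported on the n-th row and n-th column.
M = np.zeros((n * n, n * n))
for idx in range(n * n):
    E = np.zeros(n * n)
    E[idx] = 1.0
    M[:, idx] = Lop(E.reshape(n, n)).ravel()
assert np.linalg.matrix_rank(M) == n * n - (2 * n - 1)
```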

The proof is similar to that of Theorem 3.2 and is, therefore, omitted.

Remarks. Note that (4.3) is different from (3.4), and we have the additional equation (4.4).

4.2. A necessary condition for the equivalent systems.

Lemma 4.3. The mapping from the coefficients G and \bar{G} to the transformation matrices P_i (i = 1, \dots, n) is given by

(4.5)  \begin{bmatrix} [P_1]_{(n)} \\ [P_2]_{(n)} \\ \vdots \\ [P_n]_{(n)} \end{bmatrix} A = \frac{1}{2} G - \frac{1}{2}\bar{G}.

The proof is similar to that of Lemma 3.3 and is, therefore, omitted.

Lemma 4.4. The mapping from the coefficients F_i, \bar{F}_i (i = 1, \dots, n), and h to the transformation matrices P_i (i = 1, \dots, n) and Q is given by

(4.6)  P_{i+1} = L^i P_1 - \sum_{j=0}^{i-1} L^j F_{i-j} + \sum_{j=0}^{i-1} L^j \bar{F}_{i-j},  i = 1, \dots, n-1,

(4.7)  Q = \sum_{j=0}^{n-1} L^j (F_{n-j} - \bar{F}_{n-j}),

and

(4.8)  (P_1)_{n-i, n-i} = \sum_{j=0}^{i-1} (L^j F_{i-j})_{nn} + h_{i+1},  i = 1, \dots, n-1;  (P_1)_{nn} = h_1.

Proof. The derivation of (4.6) and (4.7) from (4.2) is similar to that in the proof of Lemma 3.4 and is, therefore, omitted. Here we only show how (4.8) can be derived. From (4.6), we have (i = 1, \dots, n-1)

(P_{i+1})_{nn} = (L^i P_1)_{nn} - \sum_{j=0}^{i-1} (L^j F_{i-j})_{nn} = (P_1)_{n-i, n-i} - \sum_{j=0}^{i-1} (L^j F_{i-j})_{nn}.

Comparing this with (4.4), we then obtain (4.8), from which we can see that the diagonal entries of P_1 are uniquely determined by the coefficients F_i (i = 1, \dots, n) and h.

Definition 4.5. With A \in R^{n \times n} and L given by (1.4) and Definition 4.1, respectively, we define a series of linear operators X_i : R^{n \times n} \to R^{n \times n} (i = 0, 1, \dots) by

(4.9)  X_0 P = \begin{bmatrix} [L^0 P]_{(n)} \\ [L^1 P]_{(n)} \\ \vdots \\ [L^{n-1} P]_{(n)} \end{bmatrix},  X_i P = (A^T)^i X_0 P.

Properties of X_i. The linear operators X_i (i = 0, 1, \dots) have the following properties:
1. X_i = 0 when i \ge n.
2. The rank of X_0 is n(n+1)/2; denoting the (i,j)th entry of P by P_{ij} (i, j = 1, \dots, n), the (i,j)th entry of X_0 P can be written as

(4.10)  (X_0 P)_{ij} = P_{n-i+1, j-i+1} for j \ge i;  0 for j < i.

3. From Property 2, X_0 is not invertible. Note that its continuous counterpart in Definition 3.5 is invertible.

Theorem 4.6. The mapping from the coefficients F_i, G, \bar{F}_i, and \bar{G} (i = 1, \dots, n) to P_1 is given by

(4.11)  X_0 P_1 A = \sum_{i=1}^{n-1} X_i F_i A + \frac{1}{2} G - \sum_{i=1}^{n-1} X_i \bar{F}_i A - \frac{1}{2}\bar{G}.

The proof is similar to that of Theorem 3.6 and is omitted.

Remarks. According to (4.10), the (i,j)th entry of X_0 P_1 A is

(4.12)  (X_0 P_1 A)_{ij} = (P_1)_{n-i+1, j-i} for j > i;  0 for j \le i.

That is, X_0 P_1 A, the left hand side of (4.11), is an upper-triangular matrix which contains all the non-diagonal entries of P_1. Therefore, \bar{F}_i (i = 1, \dots, n) and \bar{G} of a quadratically equivalent system of (1.9) must be such that the right hand side of (4.11) is also upper triangular. Thus, (4.11) is a necessary condition for all equivalent systems of (1.9), including the Brunovsky form.

4.3. Computation of the Brunovsky form and the transformation matrices.

Theorem 4.7. In the Brunovsky form of (1.9), the coefficients are

(4.13)  \bar{F}_i = 0, i = 1, \dots, n,  \bar{G} = 2(L + D),

where L and D are the strictly lower triangular part and the diagonal part of \sum_{i=1}^{n-1} X_i F_i A + \frac{1}{2} G.

Proof. The first two terms on the right hand side of (4.11) can be written as

(4.14)  \sum_{i=1}^{n-1} X_i F_i A + \frac{1}{2} G = L + D + U,

where L, D, U are strictly lower triangular, diagonal, and strictly upper triangular matrices, respectively. To ensure that the right hand side of (4.11) is an upper triangular matrix, we set

(4.15)  \sum_{i=1}^{n-1} X_i \bar{F}_i A + \frac{1}{2}\bar{G} = L + D,

in which there are at most n(n+1)/2 non-zero elements, and we hence have

(4.16)  X_0 P_1 A = U.

Due to the properties of X_i, no simple \bar{F}_i with n(n+1)/2 non-zero elements can satisfy (4.15). We therefore simply set the coefficients as in (4.13) and obtain the only type of Brunovsky form of (1.9):

(4.17)  x_i(t+1) = x_{i+1}(t) + b_i \nu(t) + \sum_{j=1}^{i} \bar{G}_{ij} x_j(t) \nu(t) + O(x,\nu)^3,  i = 1, \dots, n.

Remarks. The Brunovsky form of discrete systems corresponds to the type II form of continuous systems and has n(n+1)/2 bilinear terms.

After finding the Brunovsky form of a discrete quadratic system, we can solve for the corresponding transformation matrices P_i (i = 1, \dots, n) and Q as follows. From (4.12) and (4.16), we can find the non-diagonal elements of P_1, and the diagonal elements from (4.8). Then P_i (i = 2, \dots, n) and Q can be calculated from (4.6) and (4.7).

5. Algorithms and examples. In this section, we summarize the algorithms for computing the Brunovsky forms and the corresponding transformations of both continuous and discrete linearly controllable quadratic systems with a single input. To prevent multiplicity in the Brunovsky forms, we use the simplified version of the transformations, (1.8), for both systems; thus r = 0 in all formulas of Section 3. Following each algorithm, an example is given.

Algorithm 5.1. Computation of continuous quadratic Brunovsky forms.
Step 1. Compute X_0^{-1}(\sum_{i=1}^{n-1} X_i F_i + \frac{1}{2} G) and decompose it into L + D + U.
Step 2. Compute P_1 = L + D + L^T and \Delta_1 = X_0(U - L^T).
Step 3. For the type I Brunovsky form, set \bar{G} = 0 and solve the equation \sum_{i=1}^{n-1} X_i \bar{F}_i = \Delta_1 to compute \bar{F}_i (i = 1, \dots, n-1) as in (3.15) and (3.16). Go to Step 5.

Step 4. For the type II Brunovsky form, set \bar{F}_i = 0 (i = 1, \dots, n) and \bar{G} = 2\Delta_1. Go to Step 5.
Step 5. Compute P_i (i = 2, \dots, n) and Q from (3.6) and (3.7).

Example 5.2. Find the type I Brunovsky form of a continuous quadratic control system which is already in type II Brunovsky form,

(5.1)  \dot{\xi}_1 = \xi_2 + O(\xi,\mu)^3,  \dot{\xi}_2 = \mu + \xi_2 \mu + O(\xi,\mu)^3.

Solution. In this system, n = 2, and the coefficients are F_1 = F_2 = 0 and

G = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}.

Following Algorithm 5.1, we find the type I Brunovsky form,

(5.2)  \dot{x}_1 = x_2 + \frac{1}{2} x_2^2 + O(x,\nu)^3,  \dot{x}_2 = \nu + O(x,\nu)^3,

and the corresponding transformations,

(5.3)  \xi_1 = x_1,  \xi_2 = x_2 + \frac{1}{2} x_2^2,  \mu = \nu.

Conversely, if given (5.2), we find the transformations for obtaining (5.1),

x_1 = \xi_1,  x_2 = \xi_2 - \frac{1}{2} \xi_2^2,  \nu = \mu,

which are the inverses of (5.3) up to second order.

Algorithm 5.3. Computation of the discrete quadratic Brunovsky form.
Step 1. Compute \sum_{i=1}^{n-1} X_i F_i A + \frac{1}{2} G and decompose it into L + D + U.
Step 2. Compute \bar{G} = 2(L + D).
Step 3. Compute the non-diagonal elements of P_1 from (4.12) and (4.16).
Step 4. Compute the diagonal elements of P_1 from (4.8).
Step 5. Compute P_i (i = 2, \dots, n) and Q from (4.6) and (4.7).

Example 5.4. Find the Brunovsky form of the following discrete system:

\xi_1(t+1) = \xi_2(t) + \xi_1^2(t) + \xi_2^2(t) + \mu^2(t) + O(\xi,\mu)^3,
\xi_2(t+1) = \mu(t) + \mu^2(t) + O(\xi,\mu)^3.

Solution. In this system, n = 2 and the coefficients are

F_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},

F_2 = 0,  G = 0,  h = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.

Following Algorithm 5.3, we find the Brunovsky form,

x_1(t+1) = x_2(t) + O(x,\nu)^3,  x_2(t+1) = \nu(t) + O(x,\nu)^3,

and the corresponding transformations,

\xi_1(t) = x_1(t) + 2x_1^2(t) + x_2^2(t),  \xi_2(t) = x_2(t) - x_1^2(t) + x_2^2(t),  \mu(t) = \nu(t) - x_2^2(t).

Thus, this system is linearized.

6. Conclusions. In this paper, we proposed a method for computing the Brunovsky forms of both continuous and discrete quadratic control systems that are linearly controllable with a single input. Our approach is constructive in nature and computes the Brunovsky forms explicitly in a three-step manner, with the introduction of the linear operators L and X_i (i = 0, \dots, n-1): (i) we first derived relationships between the coefficients of the original systems, those of the quadratically equivalent systems, and the corresponding transformation matrices; (ii) after simplifying these relationships, we obtained a necessary condition for all equivalent systems; (iii) from the necessary condition, we finally computed the Brunovsky forms and the corresponding transformations. However, the linear operators L and X_i and, therefore, the Brunovsky forms are fundamentally different for continuous and discrete systems. For a continuous quadratic system, which has at most n^2(n+3)/2 nonlinear terms, there are two types of Brunovsky forms with n(n-1)/2 nonlinear terms each; for a discrete quadratic system, there is only one type, and the numbers of nonlinear terms of a general system and of its Brunovsky form are n^2(n+3)/2 + n and n(n+1)/2, respectively. For continuous systems, we used the same transformations as in [3] and found that the inclusion of the transformation vector r yields multiple solutions of each type of Brunovsky form. Therefore, we suggested setting r = 0 in order to maintain uniqueness. That is, under the transformations defined by (1.8), for both continuous and discrete systems, the Brunovsky forms and the corresponding quadratic transformations always exist and are uniquely determined by the
original systems. In this sense, our study can be viewed as a constructive proof of the existence and uniqueness of Brunovsky forms. Finally, the method proposed in this paper can be extended to quadratic control systems that are not linearly controllable or that have multiple inputs. In addition, the method could be useful in applications involving the analysis and control of quadratic systems.
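As a closing check, the two worked examples of Section 5 can be verified symbolically, by substituting the computed transformations into the original dynamics and comparing terms up to second order (a sketch using sympy; the truncation helper is ours, not part of the paper's algorithms):

```python
import sympy as sp

x1, x2, nu = sp.symbols('x1 x2 nu')

def truncate(expr, order=3):
    """Drop monomials of total degree >= order in (x1, x2, nu)."""
    p = sp.Poly(sp.expand(expr), x1, x2, nu)
    return sum(c * x1**i * x2**j * nu**k
               for (i, j, k), c in zip(p.monoms(), p.coeffs())
               if i + j + k < order)

# Example 5.2: xi2 = x2 + x2^2/2, mu = nu maps (5.1) to (5.2).
# From (5.1), xi2' = mu + xi2*mu; by the chain rule, xi2' = (1 + x2) * x2'.
x2dot = sp.cancel((nu + (x2 + x2**2 / 2) * nu) / (1 + x2))
assert truncate(sp.series(x2dot, x2, 0, 3).removeO() - nu) == 0

# Example 5.4: the quadratic change of variables linearizes the system.
xi1 = x1 + 2*x1**2 + x2**2
xi2 = x2 - x1**2 + x2**2
mu = nu - x2**2
# Linearized (Brunovsky) dynamics: x1 -> x2, x2 -> nu.
sub = {x1: x2, x2: nu}
xi1_next = xi1.subs(sub, simultaneous=True)
xi2_next = xi2.subs(sub, simultaneous=True)
assert truncate(xi1_next - (xi2 + xi1**2 + xi2**2 + mu**2)) == 0
assert truncate(xi2_next - (mu + mu**2)) == 0
```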

REFERENCES

[1] P. Brunovsky. A classification of linear controllable systems. Kybernetika, 3, 1970.
[2] T. Kailath. Linear Systems. Prentice-Hall, Englewood Cliffs, New Jersey, 1980.
[3] W. Kang and A. J. Krener. Extended quadratic controller normal form and dynamic state feedback linearization of nonlinear systems. SIAM Journal on Control and Optimization, 30, 1992.


Products of commuting nilpotent operators Electronic Journal of Linear Algebra Volume 16 Article 22 2007 Products of commuting nilpotent operators Damjana Kokol Bukovsek Tomaz Kosir tomazkosir@fmfuni-ljsi Nika Novak Polona Oblak Follow this and

More information

II. Determinant Functions

II. Determinant Functions Supplemental Materials for EE203001 Students II Determinant Functions Chung-Chin Lu Department of Electrical Engineering National Tsing Hua University May 22, 2003 1 Three Axioms for a Determinant Function

More information

CSL361 Problem set 4: Basic linear algebra

CSL361 Problem set 4: Basic linear algebra CSL361 Problem set 4: Basic linear algebra February 21, 2017 [Note:] If the numerical matrix computations turn out to be tedious, you may use the function rref in Matlab. 1 Row-reduced echelon matrices

More information

RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA

RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Discussiones Mathematicae General Algebra and Applications 23 (2003 ) 125 137 RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Seok-Zun Song and Kyung-Tae Kang Department of Mathematics,

More information

A new parallel polynomial division by a separable polynomial via hermite interpolation with applications, pp

A new parallel polynomial division by a separable polynomial via hermite interpolation with applications, pp Electronic Journal of Linear Algebra Volume 23 Volume 23 (2012 Article 54 2012 A new parallel polynomial division by a separable polynomial via hermite interpolation with applications, pp 770-781 Aristides

More information

Interior points of the completely positive cone

Interior points of the completely positive cone Electronic Journal of Linear Algebra Volume 17 Volume 17 (2008) Article 5 2008 Interior points of the completely positive cone Mirjam Duer duer@mathematik.tu-darmstadt.de Georg Still Follow this and additional

More information

Inertially arbitrary nonzero patterns of order 4

Inertially arbitrary nonzero patterns of order 4 Electronic Journal of Linear Algebra Volume 1 Article 00 Inertially arbitrary nonzero patterns of order Michael S. Cavers Kevin N. Vander Meulen kvanderm@cs.redeemer.ca Follow this and additional works

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

THE JORDAN-FORM PROOF MADE EASY

THE JORDAN-FORM PROOF MADE EASY THE JORDAN-FORM PROOF MADE EASY LEO LIVSHITS, GORDON MACDONALD, BEN MATHES, AND HEYDAR RADJAVI Abstract A derivation of the Jordan Canonical Form for linear transformations acting on finite dimensional

More information

A note on estimates for the spectral radius of a nonnegative matrix

A note on estimates for the spectral radius of a nonnegative matrix Electronic Journal of Linear Algebra Volume 13 Volume 13 (2005) Article 22 2005 A note on estimates for the spectral radius of a nonnegative matrix Shi-Ming Yang Ting-Zhu Huang tingzhuhuang@126com Follow

More information

10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections )

10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections ) c Dr. Igor Zelenko, Fall 2017 1 10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections 7.2-7.4) 1. When each of the functions F 1, F 2,..., F n in right-hand side

More information

On the M-matrix inverse problem for singular and symmetric Jacobi matrices

On the M-matrix inverse problem for singular and symmetric Jacobi matrices Electronic Journal of Linear Algebra Volume 4 Volume 4 (0/0) Article 7 0 On the M-matrix inverse problem for singular and symmetric Jacobi matrices Angeles Carmona Andres Encinas andres.marcos.encinas@upc.edu

More information

Review of matrices. Let m, n IN. A rectangle of numbers written like A =

Review of matrices. Let m, n IN. A rectangle of numbers written like A = Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an

More information

The Jordan forms of AB and BA

The Jordan forms of AB and BA Electronic Journal of Linear Algebra Volume 18 Volume 18 (29) Article 25 29 The Jordan forms of AB and BA Ross A. Lippert ross.lippert@gmail.com Gilbert Strang Follow this and additional works at: http://repository.uwyo.edu/ela

More information

On cardinality of Pareto spectra

On cardinality of Pareto spectra Electronic Journal of Linear Algebra Volume 22 Volume 22 (2011) Article 50 2011 On cardinality of Pareto spectra Alberto Seeger Jose Vicente-Perez Follow this and additional works at: http://repository.uwyo.edu/ela

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Chap. 3. Controlled Systems, Controllability

Chap. 3. Controlled Systems, Controllability Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :

More information

Group inverse for the block matrix with two identical subblocks over skew fields

Group inverse for the block matrix with two identical subblocks over skew fields Electronic Journal of Linear Algebra Volume 21 Volume 21 2010 Article 7 2010 Group inverse for the block matrix with two identical subblocks over skew fields Jiemei Zhao Changjiang Bu Follow this and additional

More information

More calculations on determinant evaluations

More calculations on determinant evaluations Electronic Journal of Linear Algebra Volume 16 Article 007 More calculations on determinant evaluations A. R. Moghaddamfar moghadam@kntu.ac.ir S. M. H. Pooya S. Navid Salehy S. Nima Salehy Follow this

More information

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column

More information

Minimum rank of a graph over an arbitrary field

Minimum rank of a graph over an arbitrary field Electronic Journal of Linear Algebra Volume 16 Article 16 2007 Minimum rank of a graph over an arbitrary field Nathan L. Chenette Sean V. Droms Leslie Hogben hogben@aimath.org Rana Mikkelson Olga Pryporova

More information

Rank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about

Rank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about Rank-one LMIs and Lyapunov's Inequality Didier Henrion 1;; Gjerrit Meinsma Abstract We describe a new proof of the well-known Lyapunov's matrix inequality about the location of the eigenvalues of a matrix

More information

1 Introduction. 2 Determining what the J i blocks look like. December 6, 2006

1 Introduction. 2 Determining what the J i blocks look like. December 6, 2006 Jordan Canonical Forms December 6, 2006 1 Introduction We know that not every n n matrix A can be diagonalized However, it turns out that we can always put matrices A into something called Jordan Canonical

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Eigenvalues and eigenvectors of tridiagonal matrices

Eigenvalues and eigenvectors of tridiagonal matrices Electronic Journal of Linear Algebra Volume 15 Volume 15 006) Article 8 006 Eigenvalues and eigenvectors of tridiagonal matrices Said Kouachi kouachisaid@caramailcom Follow this and additional works at:

More information

Refined Inertia of Matrix Patterns

Refined Inertia of Matrix Patterns Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017) Article 24 2017 Refined Inertia of Matrix Patterns Kevin N. Vander Meulen Redeemer University College, kvanderm@redeemer.ca Jonathan Earl

More information

Partial isometries and EP elements in rings with involution

Partial isometries and EP elements in rings with involution Electronic Journal of Linear Algebra Volume 18 Volume 18 (2009) Article 55 2009 Partial isometries and EP elements in rings with involution Dijana Mosic dragan@pmf.ni.ac.yu Dragan S. Djordjevic Follow

More information

The spectrum of the edge corona of two graphs

The spectrum of the edge corona of two graphs Electronic Journal of Linear Algebra Volume Volume (1) Article 4 1 The spectrum of the edge corona of two graphs Yaoping Hou yphou@hunnu.edu.cn Wai-Chee Shiu Follow this and additional works at: http://repository.uwyo.edu/ela

More information

Review of Matrices and Block Structures

Review of Matrices and Block Structures CHAPTER 2 Review of Matrices and Block Structures Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations

More information

On nonnegative realization of partitioned spectra

On nonnegative realization of partitioned spectra Electronic Journal of Linear Algebra Volume Volume (0) Article 5 0 On nonnegative realization of partitioned spectra Ricardo L. Soto Oscar Rojo Cristina B. Manzaneda Follow this and additional works at:

More information

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form:

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form: 17 4 Determinants and the Inverse of a Square Matrix In this section, we are going to use our knowledge of determinants and their properties to derive an explicit formula for the inverse of a square matrix

More information

Next topics: Solving systems of linear equations

Next topics: Solving systems of linear equations Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:

More information

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A = STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in

More information

SPRING OF 2008 D. DETERMINANTS

SPRING OF 2008 D. DETERMINANTS 18024 SPRING OF 2008 D DETERMINANTS In many applications of linear algebra to calculus and geometry, the concept of a determinant plays an important role This chapter studies the basic properties of determinants

More information

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4 Linear Algebra Section. : LU Decomposition Section. : Permutations and transposes Wednesday, February 1th Math 01 Week # 1 The LU Decomposition We learned last time that we can factor a invertible matrix

More information

On the maximum positive semi-definite nullity and the cycle matroid of graphs

On the maximum positive semi-definite nullity and the cycle matroid of graphs Electronic Journal of Linear Algebra Volume 18 Volume 18 (2009) Article 16 2009 On the maximum positive semi-definite nullity and the cycle matroid of graphs Hein van der Holst h.v.d.holst@tue.nl Follow

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

MAT 1332: CALCULUS FOR LIFE SCIENCES. Contents. 1. Review: Linear Algebra II Vectors and matrices Definition. 1.2.

MAT 1332: CALCULUS FOR LIFE SCIENCES. Contents. 1. Review: Linear Algebra II Vectors and matrices Definition. 1.2. MAT 1332: CALCULUS FOR LIFE SCIENCES JING LI Contents 1 Review: Linear Algebra II Vectors and matrices 1 11 Definition 1 12 Operations 1 2 Linear Algebra III Inverses and Determinants 1 21 Inverse Matrices

More information

RETRACTED On construction of a complex finite Jacobi matrix from two spectra

RETRACTED On construction of a complex finite Jacobi matrix from two spectra Electronic Journal of Linear Algebra Volume 26 Volume 26 (203) Article 8 203 On construction of a complex finite Jacobi matrix from two spectra Gusein Sh. Guseinov guseinov@ati.edu.tr Follow this and additional

More information

Math 344 Lecture # Linear Systems

Math 344 Lecture # Linear Systems Math 344 Lecture #12 2.7 Linear Systems Through a choice of bases S and T for finite dimensional vector spaces V (with dimension n) and W (with dimension m), a linear equation L(v) = w becomes the linear

More information

Review of Vectors and Matrices

Review of Vectors and Matrices A P P E N D I X D Review of Vectors and Matrices D. VECTORS D.. Definition of a Vector Let p, p, Á, p n be any n real numbers and P an ordered set of these real numbers that is, P = p, p, Á, p n Then P

More information

Generalized eigenspaces

Generalized eigenspaces Generalized eigenspaces November 30, 2012 Contents 1 Introduction 1 2 Polynomials 2 3 Calculating the characteristic polynomial 5 4 Projections 7 5 Generalized eigenvalues 10 6 Eigenpolynomials 15 1 Introduction

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

The symmetric linear matrix equation

The symmetric linear matrix equation Electronic Journal of Linear Algebra Volume 9 Volume 9 (00) Article 8 00 The symmetric linear matrix equation Andre CM Ran Martine CB Reurings mcreurin@csvunl Follow this and additional works at: http://repositoryuwyoedu/ela

More information

= main diagonal, in the order in which their corresponding eigenvectors appear as columns of E.

= main diagonal, in the order in which their corresponding eigenvectors appear as columns of E. 3.3 Diagonalization Let A = 4. Then and are eigenvectors of A, with corresponding eigenvalues 2 and 6 respectively (check). This means 4 = 2, 4 = 6. 2 2 2 2 Thus 4 = 2 2 6 2 = 2 6 4 2 We have 4 = 2 0 0

More information

Pairs of matrices, one of which commutes with their commutator

Pairs of matrices, one of which commutes with their commutator Electronic Journal of Linear Algebra Volume 22 Volume 22 (2011) Article 38 2011 Pairs of matrices, one of which commutes with their commutator Gerald Bourgeois Follow this and additional works at: http://repository.uwyo.edu/ela

More information

Interlacing Inequalities for Totally Nonnegative Matrices

Interlacing Inequalities for Totally Nonnegative Matrices Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are

More information

On the reduction of matrix polynomials to Hessenberg form

On the reduction of matrix polynomials to Hessenberg form Electronic Journal of Linear Algebra Volume 3 Volume 3: (26) Article 24 26 On the reduction of matrix polynomials to Hessenberg form Thomas R. Cameron Washington State University, tcameron@math.wsu.edu

More information

Short proofs of theorems of Mirsky and Horn on diagonals and eigenvalues of matrices

Short proofs of theorems of Mirsky and Horn on diagonals and eigenvalues of matrices Electronic Journal of Linear Algebra Volume 18 Volume 18 (2009) Article 35 2009 Short proofs of theorems of Mirsky and Horn on diagonals and eigenvalues of matrices Eric A. Carlen carlen@math.rutgers.edu

More information

Math 407: Linear Optimization

Math 407: Linear Optimization Math 407: Linear Optimization Lecture 16: The Linear Least Squares Problem II Math Dept, University of Washington February 28, 2018 Lecture 16: The Linear Least Squares Problem II (Math Dept, University

More information

Potentially nilpotent tridiagonal sign patterns of order 4

Potentially nilpotent tridiagonal sign patterns of order 4 Electronic Journal of Linear Algebra Volume 31 Volume 31: (2016) Article 50 2016 Potentially nilpotent tridiagonal sign patterns of order 4 Yubin Gao North University of China, ybgao@nuc.edu.cn Yanling

More information

Positive definiteness of tridiagonal matrices via the numerical range

Positive definiteness of tridiagonal matrices via the numerical range Electronic Journal of Linear Algebra Volume 3 ELA Volume 3 (998) Article 9 998 Positive definiteness of tridiagonal matrices via the numerical range Mao-Ting Chien mtchien@math.math.scu.edu.tw Michael

More information

The symmetric minimal rank solution of the matrix equation AX=B and the optimal approximation

The symmetric minimal rank solution of the matrix equation AX=B and the optimal approximation Electronic Journal of Linear Algebra Volume 18 Volume 18 (2009 Article 23 2009 The symmetric minimal rank solution of the matrix equation AX=B and the optimal approximation Qing-feng Xiao qfxiao@hnu.cn

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number

More information

Calculation in the special cases n = 2 and n = 3:

Calculation in the special cases n = 2 and n = 3: 9. The determinant The determinant is a function (with real numbers as values) which is defined for quadratic matrices. It allows to make conclusions about the rank and appears in diverse theorems and

More information

Recognition of hidden positive row diagonally dominant matrices

Recognition of hidden positive row diagonally dominant matrices Electronic Journal of Linear Algebra Volume 10 Article 9 2003 Recognition of hidden positive row diagonally dominant matrices Walter D. Morris wmorris@gmu.edu Follow this and additional works at: http://repository.uwyo.edu/ela

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

Solving a system by back-substitution, checking consistency of a system (no rows of the form

Solving a system by back-substitution, checking consistency of a system (no rows of the form MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary

More information

MTH50 Spring 07 HW Assignment 7 {From [FIS0]}: Sec 44 #4a h 6; Sec 5 #ad ac 4ae 4 7 The due date for this assignment is 04/05/7 Sec 44 #4a h Evaluate the erminant of the following matrices by any legitimate

More information

On EP elements, normal elements and partial isometries in rings with involution

On EP elements, normal elements and partial isometries in rings with involution Electronic Journal of Linear Algebra Volume 23 Volume 23 (2012 Article 39 2012 On EP elements, normal elements and partial isometries in rings with involution Weixing Chen wxchen5888@163.com Follow this

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

CANONICAL FORMS FOR LINEAR TRANSFORMATIONS AND MATRICES. D. Katz

CANONICAL FORMS FOR LINEAR TRANSFORMATIONS AND MATRICES. D. Katz CANONICAL FORMS FOR LINEAR TRANSFORMATIONS AND MATRICES D. Katz The purpose of this note is to present the rational canonical form and Jordan canonical form theorems for my M790 class. Throughout, we fix

More information

Sums of diagonalizable matrices

Sums of diagonalizable matrices Linear Algebra and its Applications 315 (2000) 1 23 www.elsevier.com/locate/laa Sums of diagonalizable matrices J.D. Botha Department of Mathematics University of South Africa P.O. Box 392 Pretoria 0003

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

Formula for the inverse matrix. Cramer s rule. Review: 3 3 determinants can be computed expanding by any row or column

Formula for the inverse matrix. Cramer s rule. Review: 3 3 determinants can be computed expanding by any row or column Math 20F Linear Algebra Lecture 18 1 Determinants, n n Review: The 3 3 case Slide 1 Determinants n n (Expansions by rows and columns Relation with Gauss elimination matrices: Properties) Formula for the

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Matrix functions that preserve the strong Perron- Frobenius property

Matrix functions that preserve the strong Perron- Frobenius property Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 18 2015 Matrix functions that preserve the strong Perron- Frobenius property Pietro Paparella University of Washington, pietrop@uw.edu

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013 Multivariate Gaussian Distribution Auxiliary notes for Time Series Analysis SF2943 Spring 203 Timo Koski Department of Mathematics KTH Royal Institute of Technology, Stockholm 2 Chapter Gaussian Vectors.

More information

A factorization of the inverse of the shifted companion matrix

A factorization of the inverse of the shifted companion matrix Electronic Journal of Linear Algebra Volume 26 Volume 26 (203) Article 8 203 A factorization of the inverse of the shifted companion matrix Jared L Aurentz jaurentz@mathwsuedu Follow this and additional

More information

ICS 6N Computational Linear Algebra Matrix Algebra

ICS 6N Computational Linear Algebra Matrix Algebra ICS 6N Computational Linear Algebra Matrix Algebra Xiaohui Xie University of California, Irvine xhx@uci.edu February 2, 2017 Xiaohui Xie (UCI) ICS 6N February 2, 2017 1 / 24 Matrix Consider an m n matrix

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

Unbounded Regions of Infinitely Logconcave Sequences

Unbounded Regions of Infinitely Logconcave Sequences The University of San Francisco USF Scholarship: a digital repository @ Gleeson Library Geschke Center Mathematics College of Arts and Sciences 007 Unbounded Regions of Infinitely Logconcave Sequences

More information

An improved characterisation of the interior of the completely positive cone

An improved characterisation of the interior of the completely positive cone Electronic Journal of Linear Algebra Volume 2 Volume 2 (2) Article 5 2 An improved characterisation of the interior of the completely positive cone Peter J.C. Dickinson p.j.c.dickinson@rug.nl Follow this

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

Determinants of Partition Matrices

Determinants of Partition Matrices journal of number theory 56, 283297 (1996) article no. 0018 Determinants of Partition Matrices Georg Martin Reinhart Wellesley College Communicated by A. Hildebrand Received February 14, 1994; revised

More information

Appendix A: Matrices

Appendix A: Matrices Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows

More information

12x + 18y = 30? ax + by = m

12x + 18y = 30? ax + by = m Math 2201, Further Linear Algebra: a practical summary. February, 2009 There are just a few themes that were covered in the course. I. Algebra of integers and polynomials. II. Structure theory of one endomorphism.

More information

1 Positive definiteness and semidefiniteness

1 Positive definiteness and semidefiniteness Positive definiteness and semidefiniteness Zdeněk Dvořák May 9, 205 For integers a, b, and c, let D(a, b, c) be the diagonal matrix with + for i =,..., a, D i,i = for i = a +,..., a + b,. 0 for i = a +

More information

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam January 23, 2015

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam January 23, 2015 University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra PhD Preliminary Exam January 23, 2015 Name: Exam Rules: This exam lasts 4 hours and consists of

More information