Column reduction of polynomial matrices

W.H.L. Neven
Afd. Informatica, NLR, P.O. Box, AD Emmeloord, The Netherlands

C. Praagman*
Department of Econometrics, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands
cpraagman@eco.rug.nl

July

Abstract

A few years ago Beelen developed an algorithm to determine a minimal basis for the kernel of a polynomial matrix (see [1, 3]). In this paper we use a modified version of this algorithm to find a column reduced polynomial matrix unimodularly equivalent to a given polynomial matrix.

1 Introduction

For us the problem considered in this paper, finding a column reduced polynomial matrix unimodularly equivalent to a given one, has its roots in linear systems theory. For instance, in the book of Kailath [7] one can find several examples of the importance of column or row reduced polynomial matrices. Our direct interest stems from the behavioral approach to systems theory, see Willems [13, 14, 15]. Assume that we are interested in the behavior of a set of variables w : T → R^q, and that from physical or economical considerations we can derive a number of linear differential or difference equations with constant coefficients that these variables have to satisfy:

    P(τ)w(t) = 0,

where P ∈ R^{g×q}[s] and τ = d/dt, or the shift: τw(t) = w(t+1). For obvious reasons we would like to minimize the number and the order of the equations. Clearly the set of solutions does not change if we premultiply the equation by an invertible operator. It turns out (see [13]) that the invertible operators are the unimodular polynomial matrices, and that minimality is reached if we find a unimodular U such that UP is row reduced. Transposing leads to the problem we consider here.

Some preliminary results preceding this paper have been reported on several occasions; see Beelen, van den Hurk, Praagman [2] for the original idea, and Neven [8] and Praagman [9, 10] for successive improvements.

* All correspondence should be sent to the second author.

2 Preliminaries

Let us start with defining the notions mentioned in the introduction.

Definition 1 Let P ∈ R^{m×n}[s]. Then d(P), the degree of P, is defined as the maximum of the degrees of its entries, and d_j(P), the j-th column degree of P, as the maximum of the degrees in the j-th column. δ(P) is the array of integers obtained by arranging the column degrees of P in non-decreasing order.

Definition 2 Let U ∈ R^{m×m}[s]. Then U is unimodular if det(U) ∈ R\{0}.

Let D_P(s) = diag(s^{d_1(P)}, ..., s^{d_n(P)}); then P D_P^{-1} is a proper rational matrix.

Definition 3 Let P ∈ R^{m×n}[s]. Then the leading column coefficient matrix of P, Γ(P), is defined as

    Γ(P) := (P D_P^{-1})(∞).

If P = (0  P')T, with T a permutation matrix, and Γ(P') has full column rank, then P is called column reduced.

With a little abuse of terminology we will call a matrix Q a basis for the module M if the columns of Q form a basis of M:

Definition 4 Let M be a submodule of R^n[s]. Then Q ∈ R^{n×r}[s] is called a basis of M if rank Q = r and M = Im Q. If moreover Q is column reduced, then Q is called a minimal basis of M.

Note that if Q(s) has full column rank for all s ∈ C, then M is a direct summand of R^n[s], so in that case Q is a minimal polynomial basis in the sense of Forney [4] or Beelen [1].

The theorem (Wolovich [16], Kailath [7]) stating that every polynomial matrix is unimodularly equivalent to a column reduced one was the starting point of the investigations leading to this paper. Since we need a slightly stronger formulation of the theorem than is usually proven, we give a proof here:

Theorem 1 Let P ∈ R^{m×n}[s]. Then there exists a unimodular U ∈ R^{n×n}[s] such that R := PU is column reduced. Furthermore δ(R) ≤ δ(P) totally.

Proof By induction on δ(P), ordered lexicographically. If P is column reduced there is nothing to prove. So suppose that in Γ(P) there is a linear dependence between its nonzero columns: Σ_k a_k Γ_k(P) = 0, with Γ_k(P) denoting the k-th column of Γ(P). Let d_j be the largest column degree involved, i.e. such that a_j ≠ 0. Then replacing the j-th column of P by Σ_k a_k s^{d_j(P)-d_k(P)} P_k, with P_k the k-th column of P, yields a P' unimodularly equivalent to P for which δ(P') < δ(P), both totally and lexicographically, which proves the theorem.

The above proof is constructive, but unfortunately it has awkward numerical properties, as was pointed out in Van Dooren [11]. The idea on which this paper is based is the following: calculate a minimal basis for the module

    ker(P  -I_m) := {v ∈ R^{n+m}[s] | (P  -I)v = 0},

see also [2]. The first observation is that if (U^t  R^t)^t is such a basis, then U is unimodular.

Lemma 1 Let (U^t  R^t)^t be a basis for ker(P  -I_m). Then U is unimodular.

Proof If U is not unimodular, then there exist a λ ∈ C and a v ∈ C^n such that U(λ)v = 0, and hence such that R(λ)v = P(λ)U(λ)v = 0. Define w ∈ C^{n+m}[s] by w = (U^t  R^t)^t v; then w(λ) = 0, hence w(s) = (s-λ)x(s). But then x ∈ ker(P  -I_m), so x = (U^t  R^t)^t y for some y ∈ C^n[s], implying that (s-λ)y(s) = v, a contradiction.

Of course R = PU, but although (U^t  R^t)^t is minimal, and hence column reduced, this does not necessarily hold for R. But if R is not column reduced, then for b > d(U) the matrix (U^t  s^b R^t)^t cannot be a minimal basis for ker(s^b P  -I_m). Calculating a minimal basis for ker(s^b P  -I_m) yields a pair (U_b, R_b) such that s^b P U_b = s^b R_b, where again we may hope that R_b is column reduced.

The first part of this paper is devoted to the proof that indeed, for b large enough, R_b is column reduced, and to an investigation into the nature of this b. In the second part we describe an algorithm to calculate ker(s^b P  -I_m) in a numerically reliable way, and we estimate the computational effort involved. Since the effort increases quickly with the growth of b, we develop in the third part an iterative algorithm that consists of calculating ker(s^b P  -I_m) for b = 1, 2, ... until R_b is column reduced, as was suggested already in [2].

3 Column Reduction

3.1 Invariants of polynomial matrices

In this section we introduce the concepts of left and right minimal indices and of elementary exponents, which will play a role in the next section:

Definition 5 Let P ∈ R^{m×n}[s]. Its right minimal indices κ := (κ_1, ..., κ_q) are defined by κ = δ(Q), where Q is a minimal basis for ker(P). Its left minimal indices are the right minimal indices of P^t.

Clearly q equals n - r(P), with r(P) := rank(P). Next we define the notion of elementary exponent, closely related to that of elementary divisor. To that end we introduce the homogeneous polynomial matrix associated to P. Let P ∈ R^{m×n}[s]; then P^h ∈ R^{m×n}[s, t] is defined by

    P(s) = P_d s^d + ... + P_0,    P^h(s, t) = P_d s^d + P_{d-1} s^{d-1} t + ... + P_0 t^d.

Let ∆_i be the greatest common divisor of the i×i minors of P^h, and define ∆_0 = 1. Then ∆_i divides ∆_{i+1}, and we may write

    ∆_i / ∆_{i-1} =: c_i Π (as - bt)^{l_i(b/a)},

where the product is taken over all pairs (1, b) and (0, 1), and 1/0 is denoted by ∞.

Definition 6 The factors (as - bt)^{l_i(b/a)} with l_i(b/a) ≠ 0 are called the elementary divisors of P, and the integers l_i(b/a) the elementary exponents of P.

Remark It is well known (see Gantmacher [5]) that there exist unimodular matrices U and V such that U(s)P(s)V(s) = diag(∆_i(s,1)/∆_{i-1}(s,1)); in particular this implies that P and CP have the same finite elementary divisors (i.e. those for which a ≠ 0) for any unimodular C. Of course the same holds if one reverses the roles of s and t: there exist unimodular S, T such that S(t)P^h(1,t)T(t) = diag(∆_i(1,t)/∆_{i-1}(1,t)).

Definition 7 Let P ∈ R^{m×n}[s]. Then the structural indices of P are its left and right minimal indices and its elementary exponents.

Following Beelen [1] we define for each matrix polynomial P ∈ R^{m×n}[s] of degree d > 0 its linearization L_P ∈ R^{md×(n+m(d-1))}[s] by

              [ sP_d        -I    0   ...   0  ]
              [ sP_{d-1}    sI   -I   ...   0  ]
    L_P(s) := [     :             .    .       ]
              [ sP_2         0   ...  sI   -I  ]
              [ sP_1 + P_0   0   ...   0   sI  ]

Remark The concept of linearization of a polynomial matrix is widely used. Not always the same definition is used; see the book of Kailath [7] for another implementation, for instance. But basically all linearizations amount to the same kind of construction. There is a close relationship between the structural indices of a polynomial matrix and those of its linearization:

Theorem 2 Let P ∈ R^{m×n}[s] and let L_P be its linearization. Then

a. the right minimal indices of P and L_P are equal;
b. the elementary divisors of P and L_P are equal;
c. the left minimal indices of L_P exceed those of P by d(P) - 1.

Proof Premultiplying L_P by the unimodular matrix C(s) defined by

            [ I          0   ...  0 ]
            [ sI         I   ...  0 ]
    C(s) := [   :             .     ]
            [ s^{d-1}I  ...       I ]

yields

                   [ sP_d                 -I    0   ...  0 ]
                   [ s^2 P_d + sP_{d-1}    0   -I   ...  0 ]
    C(s)L_P(s) =   [     :                          .      ]
                   [ P(s)                  0    0   ...  0 ]

Note that

    v := (v_1, ..., v_d)^t ∈ ker(L_P)  iff  v_d ∈ ker(P) and v_i = (s^i P_d + ... + s P_{d-i+1}) v_d for i < d.

Since Pv_d = 0, it follows that v_i = -(P_{d-i} + ... + s^{i-d} P_0)v_d, so d(v) = d(v_d), proving that κ(L_P) = κ(P).

Note that CL_P and L_P have the same finite elementary divisors, and that there exists a unimodular D such that CL_P D = diag{I, ..., I, P}. This implies immediately that the finite elementary divisors of P and L_P are the same; by the symmetry of s and t the same holds for the infinite elementary divisors.

Let V^t be a minimal polynomial basis of ker P^t; then clearly (0, ..., 0, V^t) is a minimal polynomial basis for ker(CL_P)^t, and hence C^t(0, ..., 0, V^t)^t = (s^{d-1}V^t, ..., sV^t, V^t)^t is a minimal polynomial basis for ker(L_P)^t, which yields the third statement.

As an immediate consequence of theorem 2 we find:

Theorem 3 Let P ∈ R^{m×n}[s] be a polynomial matrix. Then the sum of its structural indices equals r(P)d(P).

Proof It can be deduced immediately from the well known Kronecker normal form for matrix pencils ([5]) that the theorem holds for polynomial matrices of degree 1. Furthermore r(L_P) = r(CL_P) = m(d(P)-1) + r(P), hence the number of left minimal indices of L_P (and that of P) is m - r(P). From theorem 2 we conclude that the sum of the structural indices of P equals the sum of the structural indices of L_P minus (m - r(P))(d(P)-1), hence equals

    md - m + r - (m - r)(d - 1) = md - m + r - md + m + rd - r = rd.

Corollary 1 Σ_i κ_i ≤ r·d and Σ_j l_j ≤ r·d.
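The kernel correspondence in the proof of theorem 2 is easy to exercise numerically: for each fixed s_0, a vector x with P(s_0)x = 0 extends uniquely to a nullvector of L_P(s_0), so the rank deficiencies of P(s_0) and L_P(s_0) agree. The sketch below (Python with NumPy; the coefficient-list encoding [P_0, ..., P_d] is our own convention) assembles L_P(s) = sA + B from the coefficients of P:

```python
import numpy as np

def linearization(coeffs):
    """Assemble A, B with L_P(s) = s*A + B for P(s) = sum_k coeffs[k] s^k.

    coeffs = [P_0, ..., P_d], d >= 1, each an (m, n) array.  Block row i
    (0-based) carries s*P_{d-i} in its first n columns, -I in identity
    block i (absent in the last row) and s*I in identity block i-1.
    """
    d = len(coeffs) - 1
    m, n = coeffs[0].shape
    A = np.zeros((m * d, n + m * (d - 1)))
    B = np.zeros_like(A)
    for i in range(d):
        A[i*m:(i+1)*m, :n] = coeffs[d - i]                 # s P_{d-i}
        if i < d - 1:
            B[i*m:(i+1)*m, n+i*m:n+(i+1)*m] = -np.eye(m)   # -I block
        if i > 0:
            A[i*m:(i+1)*m, n+(i-1)*m:n+i*m] = np.eye(m)    # s I block
    B[(d-1)*m:, :n] += coeffs[0]                           # + P_0 in last row
    return A, B
```

Eliminating the identity blocks row by row reproduces the last block row P(s) of the proof, so for every s_0 the nullity of s_0*A + B equals that of P(s_0).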

3.2 The associated polynomial matrices

For each b ≥ 1 we associate to P ∈ R^{m×n}[s] the matrix polynomial defined by

    P_b(s) := (s^b P(s)   -I).

Note that P_b has no left minimal indices, and that all its elementary divisors have the form t^ω. Denote its right minimal indices by ε(b) = (ε_1(b), ..., ε_n(b)) and its elementary exponents by ω(b) = (ω_1(b), ..., ω_m(b)).

Lemma 2 Let P ∈ R^{m×n}[s] have rank r, let κ_1 ≤ κ_2 ≤ ... ≤ κ_{n-r} be its right minimal indices, and let l_i = l_i(∞). Then the structural indices of the associated matrices P_b satisfy, for all b and i:

a. if κ_i ≤ b then ε_i(b) = κ_i;
b. if i > n - r then b ≤ ε_i(b);
c. for all i, ε_i(b) ≤ b + d;
d. for all i, ε_i(b) ≤ ε_i(b+1) ≤ ε_i(b) + 1;
e. if i ≤ r then ω_i(b) = min(l_i, b + d), and if i > r then ω_i(b) = b + d.

Proof a. Note first that any nullvector of P extends to a nullvector of P_b by adding a number of zeros, so ε_i(b) ≤ κ_i. Let V_b = (U_b^t  R_b^t)^t be a minimal basis for ker(P_b). Clearly s^b divides R_b, so if any column of V_b has degree less than b, it has zeros in R_b and is therefore made up of a nullvector of P. Hence ε_i(b) ≥ κ_i if ε_i(b) < b, and if κ_i = b then b ≤ ε_i(b) ≤ κ_i = b.

b. This inequality follows from the reasoning above.

c. This follows since (I  (s^b P)^t)^t is a basis for ker(P_b), with column degrees at most b + d.

d. The fact that (U_{b+1}^t  (s^{-1}R_{b+1})^t)^t is a basis for ker(P_b) implies the first inequality; the second is an immediate consequence of the fact that (U_b^t  (sR_b)^t)^t is a basis for ker(P_{b+1}).

e. Note that P_b^h(s, t) = (s^b P^h(s, t)   -t^{b+d} I). All nonzero order i minors of P_b^h are of the form t^{k(b+d)} s^{(i-k)b} times an order i-k minor of P^h, k = 0, ..., i. So

    ∆_i(P_b) = gcd( t^{k(b+d)} s^{(i-k)b} ∆_{i-k}(P^h), k = 0, ..., i )
             = gcd( t^{i(b+d)}, t^{(i-1)(b+d)+l_1} q_1(s), t^{(i-2)(b+d)+l_1+l_2} q_2(s), ..., t^{(i-r)(b+d)+l_1+...+l_r} q_r(s) )
             = t^{(i-g)(b+d)+l_1+...+l_g} gcd(q_1(s), ..., q_r(s)),

where g ≤ r is such that l_g ≤ b + d, and if g < r then l_{g+1} > b + d. From this it is clear that ω_i(b) = l_i for i ≤ g and ω_i(b) = b + d for i > g, proving e.

Let us analyze the situation: the sum of the structural indices of P_b equals (b+d)m by theorem 3, and those of P_{b+1} sum up to (b+1+d)m. Each index is a nondecreasing function of b, and increases at most by 1, in view of lemma 2. These observations give a unique description of the behavior of the structural indices for large b, which enables us to prove the following generalization of theorem 1.

Theorem 4 Let P ∈ R^{m×n}[s] have rank r and degree d, and let V_b = (U_b^t  R_b^t)^t be a minimal polynomial basis for ker P_b. Then U_b is unimodular. Moreover, if b ≥ max(κ_{n-r} + 1, l_r - d) then R_b is column reduced, and if R_b is column reduced then b ≥ l_r - d.

Proof The unimodularity of U_b follows as in [2]. Since Σ_i ε_i(b) + Σ_j ω_j(b) = (b+d)m, it is clear that for b ≥ max(κ_{n-r} + 1, l_r - d) the structural indices ε_{n-r+1}, ..., ε_n, ω_{r+1}, ..., ω_m have to increase with increasing b. Since s^b divides R_b, we have in the first place that R_b = (0  R'_b), and further, by the observation made in the preceding sentence, that diag(I, s^h I)V_b is a minimal basis for ker P_{b+h}. If h is large enough this implies that s^h R'_b, and hence R_b, is column reduced. If on the other hand R_b is column reduced, then R_b contains n - r zero columns, implying that all the κ_i occur in ε(b), and hence ω_r(b) = l_r ≤ b + d, implying the final statement.

Remark The bound given here depends on κ_{n-r} and l_r, which are in general unknown. As a direct consequence of this theorem and the corollary of theorem 3, we find that if b exceeds r·d(P) then R_b is column reduced. If P has full column rank this yields b > n·d(P), a worse bound than was found in [2]. But it is not hard to see that max(κ_j, l_i - d(P)) never exceeds the bound given there, since the κ_j only occur if P does not have full column rank. If r < n our bound will in general be much better.

4 Calculation of a minimal basis

Let Q ∈ R^{m×n}[s] and assume that we want to calculate a minimal basis for ker Q. The procedure described in [1] reads as follows:

i. Linearize Q to L_Q and find orthogonal matrices U and V such that UL_QV is in a generalized Schur form: an upper triangular staircase form with constant right invertible matrices along the block diagonal.

ii. Find a minimal basis for the kernel of this matrix.

iii. Calculate a minimal basis for ker Q starting from the minimal basis found in the preceding step.

Since in our case the polynomial matrix P_b has some special features, this procedure works extremely well if we make some minor modifications to the algorithm kerpol described in [1].

4.1 Linearization

In the first place we introduce a slightly different linearization of P_b. Let P be given by P(s) = P_d s^d + P_{d-1}s^{d-1} + ... + P_0. Define

                          [ sP_d       I              ]
                          [ sP_{d-1}  sI   I          ]
    H_b(s) = sA_b - E_b = [    :           .   .      ]  ∈ L((b+d)m, n),
                          [ sP_0          ... sI   I  ]
                          [ 0                 ... sI I ]

with b - 1 trailing block rows of the form (0 ... 0  sI  I),

where L(m, n) := {H ∈ R^{m×(m+n)}[s] | deg H = 1, H(0) = (0  I)}. As in the proof of theorem 2 we see that

                     [ sP_d                I           ]
                     [ s^2 P_d + sP_{d-1}     I        ]
    C_b(s)H_b(s) =   [    :                     .      ]
                     [ s^b P(s)                      I ]

and if V is a basis of ker(H_b), then

              [ I  0 ... 0 ]
    V_{tb} := [ 0 ... 0  I ] V

is a basis for ker(s^b P  -I). And as in theorem 2, if V is minimal then V_{tb} is also minimal. So the problem reduces to finding a minimal basis for ker H_b.

4.2 Minimal basis of the associated pencil

A minimal basis of ker H_b can be found by constructing an orthogonal matrix U such that UH_bŨ, with Ũ = diag(I, U^t), is in an upper staircase form in which the constant part equals E := (0  I), as in H_b. Crucial in that respect is the following theorem on the reduction to a staircase form, similar to theorems in [1, 11]. Since our pencil has a special form, the theorem gives a slightly stronger statement, and therefore we give a complete proof.

Theorem 5 Let H = sA - E ∈ L(m, n). Then there exists an orthogonal matrix U such that

           [ sA_11  sA_12 - I  sA_13      ...             sA_{1,l+2}      ]
           [ 0      sA_22      sA_23 - I  ...             sA_{2,l+2}      ]
    UHŨ =  [   :                  .           .                           ]
           [ 0       ...          sA_ll      sA_{l,l+1} - I  sA_{l,l+2}   ]
           [ 0       ...          0          sN - I                       ]

with A_jj ∈ R^{m_j×m_{j-1}} right invertible and N ∈ R^{m_{l+1}×m_{l+1}}. Moreover

    m_{j-1} - m_j = #{i | ε_i = j - 1},   j = 1, ..., l,

where m_0 := n.

Before we prove this theorem we state a lemma which will be needed in the proof of the theorem, but which on the other hand needs the theorem to be proven; the theorem and the lemma will be proven by a simultaneous induction. Note that H has full row rank.

Lemma 3 Let H ∈ L(m, n) and let V ∈ R^{(m+n)×n}[s] be a minimal polynomial basis for ker H. Then V(0) = (V_0^t  0)^t with V_0 ∈ R^{n×n}.

Since V is a minimal polynomial basis, this implies immediately:

Corollary 2 V_0 is invertible.

Proof (of the theorem and the lemma) By induction on m + n. If m + n = m, i.e. n = 0, then we can take l = 0 and N = A; then ker H = 0, so there is nothing left to prove. Now let n > 0. Partition A in the following way: A = (A_1  A_2), with A_2 square and A_1 ∈ R^{m×n}. Then there exists an orthogonal U_1 ∈ R^{m×m} such that

    U_1 A_1 = [ A_11 ]
              [  0   ]

with A_11 ∈ R^{m_1×n} right invertible. It is easy to see that

    U_1 H Ũ_1 = [ sA_11  sA_12 - E_12 ]
                [ 0      sA_22 - E_22 ]

with sA_22 - E_22 ∈ L(m - m_1, m_1). Since (m - m_1) + m_1 = m < n + m, the induction hypothesis on sA_22 - E_22 yields the first statement of the theorem. Let K_11 be a left invertible real matrix such that Im K_11 = ker A_11, let A_11^- be a right inverse of A_11, and let V_2 be a minimal basis for ker(sA_22 - E_22), where we have used the analogous partitioning of E. Then

    V := [ K_11   A_11^-(E_12 - sA_12)V_2 ]
         [ 0      sV_2                    ]

is a basis for ker U_1HŨ_1. Let us show that it is minimal. Clearly it is column reduced, since its leading column coefficient matrix equals

    Γ(V) = [ K_11   *      ]
           [ 0      Γ(V_2) ]

For all nonzero λ, V(λ) has full column rank, since both K_11 and V_2(λ) have. To see that V(0) has full column rank, suppose that

    (K_11   A_11^- E_12 V_2(0)) (w_1^t  w_2^t)^t = 0.

Premultiplying this equation by A_11 yields that w_2 = 0, since E_12V_2(0) is invertible by the induction hypothesis applied to the lemma and its corollary. But then K_11w_1 = 0, and therefore w_1 = 0, for K_11 is left invertible.

The second statement of the theorem is now obvious: the dimension of the space of nullvectors of H equals the number of columns of K_11, and that is exactly m_0 - m_1; the induction hypothesis yields the rest.

The statement of the lemma is true for V, and for any other basis of ker U_1HŨ_1, since the property is invariant under column manipulations. But because of the structure of Ũ_1 the statement also holds for bases of ker H, for these have the form Ũ_1V(s)Q, and Ũ_1V(0) = V(0)!

Clearly the numbers m_i do not depend on the particular choice of U. Denote these invariants by µ_0(H), ..., µ_l(H).

In the proof of this theorem we already showed how a basis for ker H is constructed: let K_ii be a basis for ker(A_ii) and A_ii^- a right inverse of A_ii. Define V_ii(s) := s^{i-1} K_ii and, for j < i,

    V_ji(s) := A_jj^- ( Σ_{k=j+1}^{i} A_jk V_ki(s) + s^{-1} V_{j+1,i}(s) );

then

         [ V_11  V_12  ...  V_1l ]
         [ 0     V_22  ...  V_2l ]
    V :=  [   :             .    ]
         [ 0     ...        V_ll ]
         [ 0     ...        0    ]

is a minimal basis for ker UHŨ.

4.3 The kernel of the original matrix polynomial

In the special case that H = H_b,

    [ V_1 ]    [ I  0 ... 0 ]
    [ V_2 ] := [ 0 ... 0  I ] Ũ V

is a minimal basis for ker(s^b P  -I) (see [2]). Note that V_1 = (V_11  V_12  ...  V_1l) and that V_2 = s^b P V_1.
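The basic computational kernel of the staircase reduction above is the column compression U_1A_1 = (A_11^t  0)^t with A_11 right invertible. A numerically reliable way to obtain it is through an SVD (a rank-revealing QR would also do); the sketch below (Python with NumPy; the tolerance convention is our own, ad hoc choice) returns the orthogonal factor and the numerical rank:

```python
import numpy as np

def compress(A1, tol=1e-12):
    """Orthogonal U with U @ A1 = [[A11], [0]], A11 of full row rank m1.

    Uses the SVD A1 = W S V^t: with U = W^t the product U @ A1 = S V^t
    has its nonzero rows on top, so A11 consists of the first m1 rows.
    """
    W, s, Vt = np.linalg.svd(A1)
    m1 = int(np.sum(s > tol * max(A1.shape)))   # numerical rank, ad-hoc threshold
    return W.T, m1
```

From the same SVD one can also read off a right inverse A_11^- (via the nonzero singular values) and an orthonormal basis K_11 of ker A_11 (the trailing columns of V), which is exactly what the construction of the minimal basis needs.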

4.4 Numerical properties

Since our method to calculate a generalized Schur form is essentially the same as the method proposed in [1], we can conclude that this step of the algorithm is numerically stable. But if we consider the complete algorithm, then numerical stability is already hard to define, for how do we define a small disturbance of a polynomial matrix? If we define two polynomial matrices to be close if the degrees of all entries are the same and the coefficients are close, then there is no chance that we can prove the algorithm to be stable: if P is not column reduced and P = RU, then a lot of the higher order terms in the product RU have to vanish exactly, a property that will not be satisfied for arbitrarily small perturbations of U and R. Probably a better idea is to return to the original problem we posed in the introduction: we describe a phenomenon by a set of equations, expressed in P. What we request of R is that it describes almost the same phenomenon. For this we would need to set up a topology on solution sets of difference or differential operators, which would go beyond the scope of this paper. An intermediate approach is the definition in Van Dooren and Dewilde [12], where the distance between polynomial matrices is defined in terms of their linearizations. But in this set-up, too, our problem is an ill-posed one.

In Geurts and Praagman [6] a Fortran implementation of the algorithm is described. In section 6 we describe some examples that have been calculated using this implementation. A fair number of examples suggested that the algorithm behaves very well, but we also encountered examples in which it did not. A full description of the implementation, and of both favorable and unfavorable examples, is included in [6].

In the algorithm described in Beelen, van den Hurk and Praagman [2] a different linearization is used. This also leads to different answers, but to our present knowledge there is no difference in the quality of the answers. The research that we report in this paper is by no means finished. We still have to find out whether a different linearization can lead to better answers. More basically, we have to find a good definition of the condition of this problem in terms of the coefficients of P. This will give us the right tools to investigate the nature of the problems that arise in the above mentioned examples. Finally, we would like to implement a number of basically different algorithms to make a comparison. For instance, our problem is closely related to coprime factorization of rational matrices (see [3]) and to the problem of finding minimal solutions to rational matrix equations (see [7]); algorithms for these problems could possibly be adapted to our problem.

5 An iterative algorithm

In principle the problem we posed is solved: taking b large enough and calculating the kernel of P_b yields a column reduced R unimodularly equivalent to P. Unfortunately this procedure has a severe drawback: the effort needed to calculate ker P_b is proportional to (m(b+d))^3, so if we take the lowest upper bound for b that we can derive from P directly, we get an effort of order (mnd)^3. Therefore an alternative idea was suggested in [2].

5.1 The idea

The idea suggested in [2] was to start with b = 0 and to increase b by one as long as the calculation of a minimal basis for ker H_b does not lead to a column reduced R. Practical evidence confirms the idea that in most cases a small b already produces a column reduced R, but in [2] an example was also given of a matrix polynomial for which b had to be at least (n-2)d. Comparing the CPU times in the examples did not lead to a clear decision (based on a limited number of examples!) about the best way to proceed in general. In this paper we improve on this idea by using the information already obtained in step b for the computation of ker H_{b+1}.
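The outer loop of the resulting procedure is simple; all the real work is in the kernel computation, which section 5.2 shows how to warm-start from the previous step. A schematic version (Python; `minimal_basis_kernel` and `is_column_reduced` are placeholders standing in for the staircase algorithm of section 4 and the rank test of definition 3):

```python
def column_reduce(P, minimal_basis_kernel, is_column_reduced, b_max):
    """Iterative scheme of section 5: try b = 1, 2, ... until R_b is
    column reduced.  minimal_basis_kernel(P, b) is assumed to return
    (U_b, R_b) taken from a minimal basis (U_b^t, R_b^t)^t of
    ker(s^b P  -I); both arguments are hypothetical callables."""
    for b in range(1, b_max + 1):
        U_b, R_b = minimal_basis_kernel(P, b)
        if is_column_reduced(R_b):
            return U_b, R_b, b       # R_b = P U_b, U_b unimodular
    raise RuntimeError("b_max exceeded; theorem 4 bounds the required b")
```

By theorem 4 the loop terminates at the latest for b = max(κ_{n-r} + 1, l_r - d), and in any case for b > r·d(P).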

5.2 The iterative step

Assume that we have terminated the algorithm at step b, and that we proceed with a = b + h. In step b we have determined a U_b and an Ĥ_b such that Ĥ_b = U_bH_bŨ_b, where Ĥ_b has in its leading columns a generalized Schur form structure. Clearly

    H_a = [ H_b  0 ]
          [ sN   J ]

with

        [ 0 ... 0  I ]
    N = [ 0 ... 0  0 ]  ∈ R^{hm×((b+d)m+n)}
        [     :      ]
        [ 0 ... 0  0 ]

and

        [ I            ]
    J = [ sI  I        ]  ∈ R^{hm×hm}[s],
        [     .   .    ]
        [     ... sI I ]

so if we define U_a = diag(U_b, I), then

    Ĥ_a := U_aH_aŨ_a = [ Ĥ_b    0 ]
                       [ sNŨ_b  J ]

If U_b = (U_ij), U_ij ∈ R^{m_i×m}, then

    NŨ_b = ( 0   U_{b+d,1}^t  ...  U_{b+d,b+d}^t ),

so at first sight it seems that only the first block column of Ĥ_a preserves the desired structure. But fortunately it turns out that we can select U_b in such a way that U_{b+d,1} = U_{b+d,2} = ... = U_{b+d,b} = 0.

Theorem 6 Let H ∈ L(k + hm, n), m ≤ k, have the following structure:

    H = [ H'  0 ]
        [ sN  J ]

with H' ∈ L(k, n), N ∈ R^{hm×(n+k)} and J ∈ R^{hm×hm}[s] with the block structure as above. Then there exists an orthogonal matrix U such that UHŨ has a generalized Schur form, and U = (U_ij), U_ij ∈ R^{m_i×n_j}, displays the following structure:

    m_i = µ_i(H), i = 1, ..., l;   m_{l+1} = hm + k - Σ µ_i(H);
    n_1 = k;   n_2 = ... = n_{h+1} = m;
    U_ij = 0 if j > i + 1.

Proof By induction on hm + k. For h = k = 0 there is nothing to prove, so assume that hm + k > 0. Let sH_1 be the matrix containing the first n columns of H, and let H_1 = QR be a QR-decomposition of H_1: Q orthogonal and R upper staircase, R = (R'^t  0 ... 0)^t with R' ∈ R^{m_1×n} of full rank. Define U' = diag(Q^t, I), and partition N = (N_1^t  N_2^t)^t and J conformably,

    J = [ J_1    0   ]
        [ sN_2'  J_2 ],

with N_1 and J_1 containing the first m rows of N and J. Splitting off the first staircase block of U'HŨ' leaves a pencil with the same structure as H, with

    H'' := [ H_2    0   ]
           [ sN_1'  J_1 ]  ∈ L(k + m - m_1, m_1)

in the role of H' and N'' := (0  N_2) ∈ R^{(h-1)m×(k+m-m_1)} in the role of N. Since (h-1)m + (k + m - m_1) < hm + k, the induction hypothesis yields an orthogonal U'' = (U''_ij), U''_ij ∈ R^{m'_i×n'_j}, with m'_i = µ_i(H'') for i = 1, ..., l-1, m'_l = hm + k - m_1 - Σ µ_i(H''), n'_1 = k + m - m_1, n'_2 = ... = n'_h = m, and U''_ij = 0 if j > i. Let V be the orthogonal matrix diag(I, U'') and U := VU'. Divide V into blocks as follows:

    [ V_11      ...  V_{1,h+1}  ]   [ I  0       ...  0      ]
    [   :                       ] = [ 0  U''_11  ...  U''_1h ]  ∈ R^{(k+hm)×(k+hm)},
    [ V_{l+1,1} ...  V_{l+1,h+1}]   [ 0  U''_l1  ...  U''_lh ]

that is, V_ij = U''_{i-1,j-1} for i, j ≥ 2. Then V_ij ∈ R^{m_i×n_j}, with m_i = µ_{i-1}(H'') for i ≥ 2, n_1 = k, n_j = m for j ≥ 2, and V_ij = 0 if j > i. From U = VU' it follows immediately that U_ij = V_ij if j > 1 and U_i1 = V_i1Q^t,

so we see that U has the requested structure if we can prove that m_i = µ_i(H). Since UHŨ has a generalized Schur form, this follows immediately from theorem 5.

Remark Note that the proof gives a constructive way to find U. In the sequel we will use this several times.

As a consequence we find that if we have a generalized Schur form for H_b and we search for one for H_a, then we can divide the orthogonal U that we constructed into blocks as above, and

    [ U  0 ] H_a diag(I, U^t, I)
    [ 0  I ]

has the staircase blocks sA_11, ..., sA_bb already in place; hence we only have to work on the matrix in the lower right corner, with sA_{b+1,b+1} as its first diagonal block, of size

    ((a+d)m - m_1 - ... - m_b) × ((a+d)m - m_1 - ... - m_{b-1}),

instead of on the whole of H_a ∈ L((a+d)m, n).

Remark Note that in the algorithm from [2] this information cannot be carried over to the next step, since there another linearization is used, which destroys the structure of U. Due to this feature the b-th step there is of order (b+d)^3m^3, while in our algorithm each step is of order d^3m^3. Since b can increase up to rd, this means that in the worst case the algorithm in [2] is of order r^4d^4m^3, while the algorithm presented here is of order rd^4m^3. Note that the one step procedure, i.e. starting immediately with b = rd, given in [2] is of order r^3d^3m^3. Of course the one step procedure has a much lower order in our set-up, since we thoroughly exploit the zero structure of the initial matrix: a quick calculation yields that the one step procedure is then of order rd^4m^3 too. This means that the iterative algorithm is, even in the worst case, of the same order as the one step procedure, unlike the situation in [2]. As said before, at each step we gain at least a factor 4, since in the first place we can use Householder transformations instead of Givens transformations, and further we do not have to compute the postmultiplication separately, but can use Ũ.

6 Examples

The algorithm as described in section 5 has been implemented in Fortran; for a description of the subroutine see Geurts and Praagman [6]. The following example has been calculated in double precision on a VAX, with machine precision in the order of 2.9D-39.

Example 1 Let the polynomial matrix P be given by:

    P(s) = [ s^4 + 6s^3 + 13s^2 + 12s + 4    -s^3 - 4s^2 - 5s - 2 ]
           [ 0                                s + 2               ]

In Kailath [7], page 386, we can find (if we correct a small typo) that PU_0 = R_0, with

    U_0(s) = [ 1      0 ]      R_0(s) = [ 0              -(s^3 + 4s^2 + 5s + 2) ]
             [ s+2    1 ]               [ s^2 + 4s + 4    s + 2                 ]
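The identity PU_0 = R_0 can be checked mechanically by convolving coefficient sequences, and the same routine is useful for verifying any computed pair (U, R) up to a tolerance. A sketch (Python with NumPy; the coefficient-list encoding [A_0, ..., A_p] is our own convention):

```python
import numpy as np

def polymat_mul(A, B):
    """Product of polynomial matrices given as coefficient lists
    [A_0, ..., A_p], i.e. A(s) = sum_k A_k s^k: a convolution of
    coefficient matrices, C_k = sum over i+j=k of A_i @ B_j."""
    m = A[0].shape[0]
    n = B[0].shape[1]
    C = [np.zeros((m, n)) for _ in range(len(A) + len(B) - 1)]
    for i, Ai in enumerate(A):
        for j, Bj in enumerate(B):
            C[i + j] += Ai @ Bj
    return C
```

Running this on the coefficient lists of P and U_0 from example 1 reproduces the coefficients of R_0 exactly; the higher order coefficients of the product vanish, which is precisely the cancellation that makes the problem delicate under perturbations (section 4.4).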

Clearly R_0 is column reduced and U_0 is unimodular. This example was also treated in [2]. The program, with a prescribed tolerance of 1.0D-12 below which matrix entries are considered to be zero, yields a solution R, U in which the factors (s^3 + 4s^2 + 5s + 2) and (s^2 + 4s + 4) reappear with floating point coefficients; PU - R = 0 up to the tolerance, and U is unimodular. This solution was found without iterations, i.e. for b = 1, and it equals the solution found in [2].

Example 2 In this example we took P(s) as in example 6.3 from [2]. After several iterations the program found, for b = 5, a unimodular U(s), with entries involving powers of s up to s^6, and a diagonal R(s). Our algorithm leads to success for the same b as in [2], and yields a solution in which the special structure of P is reflected.

References

[1] Th.G.J. Beelen. New algorithms for computing the Kronecker structure of a pencil with applications to systems and control theory. PhD thesis, Eindhoven University of Technology, 1987.

[2] Th.G.J. Beelen, G.J.H.H. van den Hurk, and C. Praagman. A new method for computing a column reduced polynomial matrix. Systems and Control Letters, 10.

[3] Th.G.J. Beelen and G.W. Veltkamp. Numerical computation of a coprime factorization of a transfer function matrix. Systems and Control Letters, 9.

[4] G.D. Forney. Minimal bases of rational vector spaces, with applications to multivariable linear systems. SIAM Journal of Control and Optimization, 13.

[5] F.R. Gantmacher. Theory of matrices I, II. Chelsea, New York, 1959.

[6] A.J. Geurts and C. Praagman. A Fortran 77 package for column reduction of polynomial matrices. Transactions on Mathematical Software, 1997. To appear.

[7] T. Kailath. Linear systems. Prentice-Hall, 1980.

[8] W.H.L. Neven. Polynomial methods in systems theory. Master's thesis, Eindhoven University of Technology, 1988.

[9] C. Praagman. Inputs, outputs and states in the representation of time series. In A. Bensoussan and J.L. Lions, editors, Analysis and Optimization of Systems, Berlin, 1988. INRIA, Springer Lecture Notes in Control and Information Sciences 111.

[10] C. Praagman. Invariants of polynomial matrices. In I. Landau, editor, Proceedings of the First ECC, Grenoble, 1991. INRIA.

[11] P. Van Dooren. The generalized eigenstructure problem in linear system theory. IEEE Transactions on Automatic Control, 26.

[12] P. Van Dooren and P. Dewilde. The eigenstructure of an arbitrary polynomial matrix: computational aspects. Linear Algebra and its Applications, 50.

[13] J.C. Willems. From time series to linear systems, Parts 1, 2, 3. Automatica, 22/23, 1986/87.

[14] J.C. Willems. Models for dynamics. Dynamics Reported, 2.

[15] J.C. Willems. Paradigms and puzzles in the theory of dynamical systems. IEEE Transactions on Automatic Control, 36.

[16] W. Wolovich. Linear multivariable systems. Springer Verlag, Berlin, New York.


More information

POLYNOMIAL EMBEDDING ALGORITHMS FOR CONTROLLERS IN A BEHAVIORAL FRAMEWORK

POLYNOMIAL EMBEDDING ALGORITHMS FOR CONTROLLERS IN A BEHAVIORAL FRAMEWORK 1 POLYNOMIAL EMBEDDING ALGORITHMS FOR CONTROLLERS IN A BEHAVIORAL FRAMEWORK H.L. Trentelman, Member, IEEE, R. Zavala Yoe, and C. Praagman Abstract In this paper we will establish polynomial algorithms

More information

Sums of diagonalizable matrices

Sums of diagonalizable matrices Linear Algebra and its Applications 315 (2000) 1 23 www.elsevier.com/locate/laa Sums of diagonalizable matrices J.D. Botha Department of Mathematics University of South Africa P.O. Box 392 Pretoria 0003

More information

CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM

CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics, Universiteit Maastricht, P.O. Box 616, 6200 MD Maastricht,

More information

Computing Minimal Nullspace Bases

Computing Minimal Nullspace Bases Computing Minimal Nullspace ases Wei Zhou, George Labahn, and Arne Storjohann Cheriton School of Computer Science University of Waterloo, Waterloo, Ontario, Canada {w2zhou,glabahn,astorjoh}@uwaterloo.ca

More information

Notes on n-d Polynomial Matrix Factorizations

Notes on n-d Polynomial Matrix Factorizations Multidimensional Systems and Signal Processing, 10, 379 393 (1999) c 1999 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. Notes on n-d Polynomial Matrix Factorizations ZHIPING LIN

More information

* 8 Groups, with Appendix containing Rings and Fields.

* 8 Groups, with Appendix containing Rings and Fields. * 8 Groups, with Appendix containing Rings and Fields Binary Operations Definition We say that is a binary operation on a set S if, and only if, a, b, a b S Implicit in this definition is the idea that

More information

Paul Van Dooren s Index Sum Theorem: To Infinity and Beyond

Paul Van Dooren s Index Sum Theorem: To Infinity and Beyond Paul Van Dooren s Index Sum Theorem: To Infinity and Beyond Froilán M. Dopico Departamento de Matemáticas Universidad Carlos III de Madrid, Spain Colloquium in honor of Paul Van Dooren on becoming Emeritus

More information

A q x k+q + A q 1 x k+q A 0 x k = 0 (1.1) where k = 0, 1, 2,..., N q, or equivalently. A(σ)x k = 0, k = 0, 1, 2,..., N q (1.

A q x k+q + A q 1 x k+q A 0 x k = 0 (1.1) where k = 0, 1, 2,..., N q, or equivalently. A(σ)x k = 0, k = 0, 1, 2,..., N q (1. A SPECTRAL CHARACTERIZATION OF THE BEHAVIOR OF DISCRETE TIME AR-REPRESENTATIONS OVER A FINITE TIME INTERVAL E.N.Antoniou, A.I.G.Vardulakis, N.P.Karampetakis Aristotle University of Thessaloniki Faculty

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 430 (2009) 579 586 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Low rank perturbation

More information

On the closures of orbits of fourth order matrix pencils

On the closures of orbits of fourth order matrix pencils On the closures of orbits of fourth order matrix pencils Dmitri D. Pervouchine Abstract In this work we state a simple criterion for nilpotentness of a square n n matrix pencil with respect to the action

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Alternative Characterization of Ergodicity for Doubly Stochastic Chains

Alternative Characterization of Ergodicity for Doubly Stochastic Chains Alternative Characterization of Ergodicity for Doubly Stochastic Chains Behrouz Touri and Angelia Nedić Abstract In this paper we discuss the ergodicity of stochastic and doubly stochastic chains. We define

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Key words. Polynomial matrices, Toeplitz matrices, numerical linear algebra, computer-aided control system design.

Key words. Polynomial matrices, Toeplitz matrices, numerical linear algebra, computer-aided control system design. BLOCK TOEPLITZ ALGORITHMS FOR POLYNOMIAL MATRIX NULL-SPACE COMPUTATION JUAN CARLOS ZÚÑIGA AND DIDIER HENRION Abstract In this paper we present new algorithms to compute the minimal basis of the nullspace

More information

M. VAN BAREL Department of Computing Science, K.U.Leuven, Celestijnenlaan 200A, B-3001 Heverlee, Belgium

M. VAN BAREL Department of Computing Science, K.U.Leuven, Celestijnenlaan 200A, B-3001 Heverlee, Belgium MATRIX RATIONAL INTERPOLATION WITH POLES AS INTERPOLATION POINTS M. VAN BAREL Department of Computing Science, K.U.Leuven, Celestijnenlaan 200A, B-3001 Heverlee, Belgium B. BECKERMANN Institut für Angewandte

More information

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX September 2007 MSc Sep Intro QT 1 Who are these course for? The September

More information

A proof of the Jordan normal form theorem

A proof of the Jordan normal form theorem A proof of the Jordan normal form theorem Jordan normal form theorem states that any matrix is similar to a blockdiagonal matrix with Jordan blocks on the diagonal. To prove it, we first reformulate it

More information

ACI-matrices all of whose completions have the same rank

ACI-matrices all of whose completions have the same rank ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

Numerical Computation of Minimal Polynomial Bases: A Generalized Resultant Approach

Numerical Computation of Minimal Polynomial Bases: A Generalized Resultant Approach Numerical Computation of Minimal Polynomial Bases: A Generalized Resultant Approach E.N. Antoniou, A.I.G. Vardulakis and S. Vologiannidis Aristotle University of Thessaloniki Department of Mathematics

More information

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number

More information

arxiv: v1 [cs.sy] 29 Dec 2018

arxiv: v1 [cs.sy] 29 Dec 2018 ON CHECKING NULL RANK CONDITIONS OF RATIONAL MATRICES ANDREAS VARGA Abstract. In this paper we discuss possible numerical approaches to reliably check the rank condition rankg(λ) = 0 for a given rational

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

A Necessary and Sufficient Condition for High-Frequency Robustness of Non-Strictly-Proper Feedback Systems

A Necessary and Sufficient Condition for High-Frequency Robustness of Non-Strictly-Proper Feedback Systems A Necessary and Sufficient Condition for High-Frequency Robustness of Non-Strictly-Proper Feedback Systems Daniel Cobb Department of Electrical Engineering University of Wisconsin Madison WI 53706-1691

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Canonical lossless state-space systems: staircase forms and the Schur algorithm

Canonical lossless state-space systems: staircase forms and the Schur algorithm Canonical lossless state-space systems: staircase forms and the Schur algorithm Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics School of Mathematical Sciences Projet APICS Universiteit

More information

Math 121 Homework 5: Notes on Selected Problems

Math 121 Homework 5: Notes on Selected Problems Math 121 Homework 5: Notes on Selected Problems 12.1.2. Let M be a module over the integral domain R. (a) Assume that M has rank n and that x 1,..., x n is any maximal set of linearly independent elements

More information

Linear Systems and Matrices

Linear Systems and Matrices Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13 STAT 309: MATHEMATICAL COMPUTATIONS I FALL 208 LECTURE 3 need for pivoting we saw that under proper circumstances, we can write A LU where 0 0 0 u u 2 u n l 2 0 0 0 u 22 u 2n L l 3 l 32, U 0 0 0 l n l

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

Factorization of singular integer matrices

Factorization of singular integer matrices Factorization of singular integer matrices Patrick Lenders School of Mathematics, Statistics and Computer Science, University of New England, Armidale, NSW 2351, Australia Jingling Xue School of Computer

More information

THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION. Sunday, March 14, 2004 Time: hours No aids or calculators permitted.

THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION. Sunday, March 14, 2004 Time: hours No aids or calculators permitted. THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION Sunday, March 1, Time: 3 1 hours No aids or calculators permitted. 1. Prove that, for any complex numbers z and w, ( z + w z z + w w z +

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

x y B =. v u Note that the determinant of B is xu + yv = 1. Thus B is invertible, with inverse u y v x On the other hand, d BA = va + ub 2

x y B =. v u Note that the determinant of B is xu + yv = 1. Thus B is invertible, with inverse u y v x On the other hand, d BA = va + ub 2 5. Finitely Generated Modules over a PID We want to give a complete classification of finitely generated modules over a PID. ecall that a finitely generated module is a quotient of n, a free module. Let

More information

Notes on Linear Algebra

Notes on Linear Algebra 1 Notes on Linear Algebra Jean Walrand August 2005 I INTRODUCTION Linear Algebra is the theory of linear transformations Applications abound in estimation control and Markov chains You should be familiar

More information

Reduction of Smith Normal Form Transformation Matrices

Reduction of Smith Normal Form Transformation Matrices Reduction of Smith Normal Form Transformation Matrices G. Jäger, Kiel Abstract Smith normal form computations are important in group theory, module theory and number theory. We consider the transformation

More information

Finitely Generated Modules over a PID, I

Finitely Generated Modules over a PID, I Finitely Generated Modules over a PID, I A will throughout be a fixed PID. We will develop the structure theory for finitely generated A-modules. Lemma 1 Any submodule M F of a free A-module is itself

More information

Efficient Algorithms for Order Bases Computation

Efficient Algorithms for Order Bases Computation Efficient Algorithms for Order Bases Computation Wei Zhou and George Labahn Cheriton School of Computer Science University of Waterloo, Waterloo, Ontario, Canada Abstract In this paper we present two algorithms

More information

1 Determinants. 1.1 Determinant

1 Determinants. 1.1 Determinant 1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to

More information

CODE DECOMPOSITION IN THE ANALYSIS OF A CONVOLUTIONAL CODE

CODE DECOMPOSITION IN THE ANALYSIS OF A CONVOLUTIONAL CODE Bol. Soc. Esp. Mat. Apl. n o 42(2008), 183 193 CODE DECOMPOSITION IN THE ANALYSIS OF A CONVOLUTIONAL CODE E. FORNASINI, R. PINTO Department of Information Engineering, University of Padua, 35131 Padova,

More information

COMMUTATIVE/NONCOMMUTATIVE RANK OF LINEAR MATRICES AND SUBSPACES OF MATRICES OF LOW RANK

COMMUTATIVE/NONCOMMUTATIVE RANK OF LINEAR MATRICES AND SUBSPACES OF MATRICES OF LOW RANK Séminaire Lotharingien de Combinatoire 52 (2004), Article B52f COMMUTATIVE/NONCOMMUTATIVE RANK OF LINEAR MATRICES AND SUBSPACES OF MATRICES OF LOW RANK MARC FORTIN AND CHRISTOPHE REUTENAUER Dédié à notre

More information

Journal of Algebra 226, (2000) doi: /jabr , available online at on. Artin Level Modules.

Journal of Algebra 226, (2000) doi: /jabr , available online at   on. Artin Level Modules. Journal of Algebra 226, 361 374 (2000) doi:10.1006/jabr.1999.8185, available online at http://www.idealibrary.com on Artin Level Modules Mats Boij Department of Mathematics, KTH, S 100 44 Stockholm, Sweden

More information

(Inv) Computing Invariant Factors Math 683L (Summer 2003)

(Inv) Computing Invariant Factors Math 683L (Summer 2003) (Inv) Computing Invariant Factors Math 683L (Summer 23) We have two big results (stated in (Can2) and (Can3)) concerning the behaviour of a single linear transformation T of a vector space V In particular,

More information

SYMMETRIC POLYNOMIALS

SYMMETRIC POLYNOMIALS SYMMETRIC POLYNOMIALS KEITH CONRAD Let F be a field. A polynomial f(x 1,..., X n ) F [X 1,..., X n ] is called symmetric if it is unchanged by any permutation of its variables: for every permutation σ

More information

PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION.

PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION. PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION. M.I. BUENO AND S. FURTADO Abstract. Many applications give rise to structured, in particular

More information

Dot Products, Transposes, and Orthogonal Projections

Dot Products, Transposes, and Orthogonal Projections Dot Products, Transposes, and Orthogonal Projections David Jekel November 13, 2015 Properties of Dot Products Recall that the dot product or standard inner product on R n is given by x y = x 1 y 1 + +

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

Topics in linear algebra

Topics in linear algebra Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over

More information

Modules Over Principal Ideal Domains

Modules Over Principal Ideal Domains Modules Over Principal Ideal Domains Brian Whetter April 24, 2014 This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this

More information

Linear Algebra: Lecture notes from Kolman and Hill 9th edition.

Linear Algebra: Lecture notes from Kolman and Hill 9th edition. Linear Algebra: Lecture notes from Kolman and Hill 9th edition Taylan Şengül March 20, 2019 Please let me know of any mistakes in these notes Contents Week 1 1 11 Systems of Linear Equations 1 12 Matrices

More information

Chapter 4. Matrices and Matrix Rings

Chapter 4. Matrices and Matrix Rings Chapter 4 Matrices and Matrix Rings We first consider matrices in full generality, i.e., over an arbitrary ring R. However, after the first few pages, it will be assumed that R is commutative. The topics,

More information

Subdiagonal pivot structures and associated canonical forms under state isometries

Subdiagonal pivot structures and associated canonical forms under state isometries Preprints of the 15th IFAC Symposium on System Identification Saint-Malo, France, July 6-8, 29 Subdiagonal pivot structures and associated canonical forms under state isometries Bernard Hanzon Martine

More information

Solution. That ϕ W is a linear map W W follows from the definition of subspace. The map ϕ is ϕ(v + W ) = ϕ(v) + W, which is well-defined since

Solution. That ϕ W is a linear map W W follows from the definition of subspace. The map ϕ is ϕ(v + W ) = ϕ(v) + W, which is well-defined since MAS 5312 Section 2779 Introduction to Algebra 2 Solutions to Selected Problems, Chapters 11 13 11.2.9 Given a linear ϕ : V V such that ϕ(w ) W, show ϕ induces linear ϕ W : W W and ϕ : V/W V/W : Solution.

More information

Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix.

Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix. arxiv:math/0506382v1 [math.na] 19 Jun 2005 Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix. Adviser: Charles R. Johnson Department of Mathematics College

More information

Numerical computation of minimal polynomial bases: A generalized resultant approach

Numerical computation of minimal polynomial bases: A generalized resultant approach Linear Algebra and its Applications 405 (2005) 264 278 wwwelseviercom/locate/laa Numerical computation of minimal polynomial bases: A generalized resultant approach EN Antoniou,1, AIG Vardulakis, S Vologiannidis

More information

Stabilization, Pole Placement, and Regular Implementability

Stabilization, Pole Placement, and Regular Implementability IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 47, NO. 5, MAY 2002 735 Stabilization, Pole Placement, and Regular Implementability Madhu N. Belur and H. L. Trentelman, Senior Member, IEEE Abstract In this

More information

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms.

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms. On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms. J.C. Zúñiga and D. Henrion Abstract Four different algorithms are designed

More information

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra Massachusetts Institute of Technology Department of Economics 14.381 Statistics Guido Kuersteiner Lecture Notes on Matrix Algebra These lecture notes summarize some basic results on matrix algebra used

More information

ELA

ELA SHARP LOWER BOUNDS FOR THE DIMENSION OF LINEARIZATIONS OF MATRIX POLYNOMIALS FERNANDO DE TERÁN AND FROILÁN M. DOPICO Abstract. A standard way of dealing with matrixpolynomial eigenvalue problems is to

More information

2: LINEAR TRANSFORMATIONS AND MATRICES

2: LINEAR TRANSFORMATIONS AND MATRICES 2: LINEAR TRANSFORMATIONS AND MATRICES STEVEN HEILMAN Contents 1. Review 1 2. Linear Transformations 1 3. Null spaces, range, coordinate bases 2 4. Linear Transformations and Bases 4 5. Matrix Representation,

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Eighth Homework Solutions

Eighth Homework Solutions Math 4124 Wednesday, April 20 Eighth Homework Solutions 1. Exercise 5.2.1(e). Determine the number of nonisomorphic abelian groups of order 2704. First we write 2704 as a product of prime powers, namely

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Matrices and systems of linear equations

Matrices and systems of linear equations Matrices and systems of linear equations Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra by Goode and Annin Samy T.

More information

STABLY FREE MODULES KEITH CONRAD

STABLY FREE MODULES KEITH CONRAD STABLY FREE MODULES KEITH CONRAD 1. Introduction Let R be a commutative ring. When an R-module has a particular module-theoretic property after direct summing it with a finite free module, it is said to

More information

Constructing c-ary Perfect Factors

Constructing c-ary Perfect Factors Constructing c-ary Perfect Factors Chris J. Mitchell Computer Science Department Royal Holloway University of London Egham Hill Egham Surrey TW20 0EX England. Tel.: +44 784 443423 Fax: +44 784 443420 Email:

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

arxiv: v1 [math.rt] 7 Oct 2014

arxiv: v1 [math.rt] 7 Oct 2014 A direct approach to the rational normal form arxiv:1410.1683v1 [math.rt] 7 Oct 2014 Klaus Bongartz 8. Oktober 2014 In courses on linear algebra the rational normal form of a matrix is usually derived

More information

Minimal indices and minimal bases via filtrations. Mackey, D. Steven. MIMS EPrint:

Minimal indices and minimal bases via filtrations. Mackey, D. Steven. MIMS EPrint: Minimal indices and minimal bases via filtrations Mackey, D. Steven 2012 MIMS EPrint: 2012.82 Manchester Institute for Mathematical Sciences School of Mathematics The University of Manchester Reports available

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA

RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Discussiones Mathematicae General Algebra and Applications 23 (2003 ) 125 137 RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Seok-Zun Song and Kyung-Tae Kang Department of Mathematics,

More information

MATH 326: RINGS AND MODULES STEFAN GILLE

MATH 326: RINGS AND MODULES STEFAN GILLE MATH 326: RINGS AND MODULES STEFAN GILLE 1 2 STEFAN GILLE 1. Rings We recall first the definition of a group. 1.1. Definition. Let G be a non empty set. The set G is called a group if there is a map called

More information

On some properties of elementary derivations in dimension six

On some properties of elementary derivations in dimension six Journal of Pure and Applied Algebra 56 (200) 69 79 www.elsevier.com/locate/jpaa On some properties of elementary derivations in dimension six Joseph Khoury Department of Mathematics, University of Ottawa,

More information

Projection of state space realizations

Projection of state space realizations Chapter 1 Projection of state space realizations Antoine Vandendorpe and Paul Van Dooren Department of Mathematical Engineering Université catholique de Louvain B-1348 Louvain-la-Neuve Belgium 1.0.1 Description

More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

A LINEAR PROGRAMMING BASED ANALYSIS OF THE CP-RANK OF COMPLETELY POSITIVE MATRICES

A LINEAR PROGRAMMING BASED ANALYSIS OF THE CP-RANK OF COMPLETELY POSITIVE MATRICES Int J Appl Math Comput Sci, 00, Vol 1, No 1, 5 1 A LINEAR PROGRAMMING BASED ANALYSIS OF HE CP-RANK OF COMPLEELY POSIIVE MARICES YINGBO LI, ANON KUMMER ANDREAS FROMMER Department of Electrical and Information

More information