SIAM J. MATRIX ANAL. APPL., Vol. 33, No. 2, © 2012 Society for Industrial and Applied Mathematics

THE QUASI-KRONECKER FORM FOR MATRIX PENCILS

THOMAS BERGER AND STEPHAN TRENN

Abstract. We study singular matrix pencils and show that the so-called Wong sequences yield a quasi-Kronecker form. This form decouples the matrix pencil into an underdetermined part, a regular part, and an overdetermined part. This decoupling is sufficient to fully characterize the solution behavior of the differential-algebraic equations associated with the matrix pencil. Furthermore, we show that the minimal indices of the pencil can be determined with only the Wong sequences and that the Kronecker canonical form is a simple corollary of our result; hence, in passing, we also provide a new proof for the Kronecker canonical form. The results are illustrated with an example given by a simple electrical circuit.

Key words. singular matrix pencil, Kronecker canonical form, differential-algebraic equations

AMS subject classifications. 15A22, 15A21, 34A09, 34A30

DOI. 10.1137/

1. Introduction. We study (singular) linear matrix pencils $sE - A \in K^{m\times n}[s]$, where $K$ is $\mathbb{Q}$, $\mathbb{R}$, or $\mathbb{C}$, and the associated differential-algebraic equation (DAE)

(1.1)  $E\dot{x} = Ax + f$,

where $f$ is some inhomogeneity. In the context of DAEs it is natural to call matrix pencils $sE_1 - A_1$ and $sE_2 - A_2$ equivalent, and write $sE_1 - A_1 \cong sE_2 - A_2$ or $(E_1, A_1) \cong (E_2, A_2)$, if there exist invertible matrices $S$ and $T$ such that $S(sE_1 - A_1)T = sE_2 - A_2$. In the literature this is also sometimes called strict or strong equivalence; see, e.g., [16, Chap. XII, section 1] and [21, Def. 2.1]. Based on this notion of equivalence it is of interest to find the simplest matrix pencil within an equivalence class. This problem was solved by Kronecker [19] (see also [16, 21]). Nevertheless, the analysis of matrix pencils is still an active research area (see, e.g., the recent paper [18]), mainly because of numerical issues, or to find ways to obtain the Kronecker canonical form efficiently
(see, e.g., [36, 37, 8, 12, 13, 38]). Our main goal in this paper is to highlight the importance of the Wong sequences [40] for the analysis of matrix pencils. The Wong sequences for the matrix pencil $sE - A$ are given by the following sequences of subspaces:

$\mathcal{V}^0 := K^n,\quad \mathcal{V}^{i+1} := A^{-1}(E\mathcal{V}^i) \subseteq K^n,\qquad \mathcal{W}^0 := \{0\},\quad \mathcal{W}^{i+1} := E^{-1}(A\mathcal{W}^i) \subseteq K^n.$

Received by the editors March 1, 2011; accepted for publication (in revised form) by C.-H. Guo February 7, 2012; published electronically May 3, 2012.
Institut für Mathematik, Technische Universität Ilmenau, Weimarer Straße 25, Ilmenau, Germany (thomas.berger@tu-ilmenau.de). This author's work was supported by DFG grant IL25/9.
AG Technomathematik, Technische Universität Kaiserslautern, Erwin-Schrödinger-Str. Geb. 48, Kaiserslautern, Germany (trenn@mathematik.uni-kl.de). This author's work was supported by DFG grant Wi1458/10 while at the University of Würzburg.

We will show (see Theorem 3.2 and Remark 3.3) that the Wong sequences are sufficient to completely characterize the solution behavior of the DAE (1.1), including the characterization of consistent initial values as well as constraints on the inhomogeneity $f$.

The Wong sequences can be traced back to Dieudonné [14]; however, his focus is only on the first of the two Wong sequences. Bernhard [10] and Armentano [3] used the Wong sequences to carry out a geometric analysis of matrix pencils. In [27] the first Wong sequence is introduced as a fundamental geometric tool in the characterization of the subspace of consistent initial conditions of a regular DAE. Both Wong sequences are introduced in [26], where the authors obtain a quasi-Kronecker staircase form; however, they did not consider both Wong sequences in combination, so that the important role of the spaces $\mathcal{V}^*\cap\mathcal{W}^*$, $\mathcal{V}^*+\mathcal{W}^*$, $E\mathcal{V}^*\cap A\mathcal{W}^*$, $E\mathcal{V}^*+A\mathcal{W}^*$ (see Definition 2.1, Figure 2.1, and Theorem 2.3) is not highlighted. They also appear in [1, 2, 20, 35]. In control theory, modified versions of the Wong sequences (where $\operatorname{im} B$ is added to $E\mathcal{V}^i$ and $A\mathcal{W}^i$, resp.) have been studied extensively for not necessarily regular DAEs (see, e.g., [1, 4, 5, 6, 15, 22, 24, 28, 29]), and they have been found to be the appropriate tool to construct invariant subspaces and the reachability space, and to provide a Kalman decomposition, just to name a few features. However, it seems that their relevance for a complete solution theory of DAEs (1.1) associated to a singular matrix pencil has been overlooked. We therefore believe that our solvability characterizations solely in terms of the Wong sequences are new.

The Wong sequences directly lead to a quasi-Kronecker triangular form (QKTF), i.e.,

$sE - A \cong \begin{bmatrix} sE_P - A_P & * & * \\ 0 & sE_R - A_R & * \\ 0 & 0 & sE_Q - A_Q \end{bmatrix},$

where $sE_R - A_R$ is a regular matrix pencil, $sE_P - A_P$ is the underdetermined pencil, and $sE_Q - A_Q$ is the overdetermined pencil (Theorem 2.3). With only a little more effort (invoking solvability of generalized
Sylvester equations) we can get rid of the off-diagonal blocks and obtain a quasi-Kronecker form (QKF) (Theorem 2.6). From the latter it is easy to obtain the Kronecker canonical form (KCF) (Corollary 2.8), and hence another contribution of our work is a new proof for the KCF. We have to admit that our proof does not reach the elegance of the proof of Gantmacher [16]; however, Gantmacher does not provide any geometrical insight. At the other end of the spectrum, Armentano [3] uses the Wong sequences to obtain a result similar to ours (the QKTF but with more structured diagonal block entries); however, his approach is purely geometrical, so that it is not directly possible to deduce the transformation matrices which are necessary to obtain the QKTF or QKF. Our result overcomes this disadvantage because it presents geometrical insights and, at the same time, is constructive. Furthermore, our work is self-contained in the sense that only results on regular matrix pencils are assumed to be well known. Different from other authors, we do not primarily aim to decouple the regular part of the matrix pencil, because (1) the decoupling into three parts which have the solution properties existence, but nonuniqueness (underdetermined part), existence and uniqueness (regular part), and uniqueness, but possible nonexistence (overdetermined part) seems very natural, and (2) the regular part can be further decoupled if necessary, again with the help of the Wong sequences, as we showed in [9]. Furthermore, we are also not aiming for a staircase form as is common in the

numerical literature; here the matrix pencils $sE_P - A_P$, $sE_R - A_R$, and $sE_Q - A_Q$ have no further structure, which has the advantage that the transformation matrices can be chosen to be simple (in any specified way). Another advantage of our approach is that we respect the domain of the entries in the matrix pencil, e.g., if our matrices are real-valued, then all transformations remain real-valued. This is not true for results about the KCF, because, due to possible complex eigenvalues and eigenvectors, even in the case of real-valued matrices it is necessary to allow for complex transformations and complex canonical forms. This is often undesirable, because if one starts with a real-valued matrix pencil, one would like to get real-valued results. Therefore, we formulated our results in such a way that they are valid for $K = \mathbb{Q}$, $K = \mathbb{R}$, and $K = \mathbb{C}$. Especially for $K = \mathbb{Q}$ it was also necessary to recheck known results, whether or not their proofs are also valid in $\mathbb{Q}$. We believe that the case $K = \mathbb{Q}$ is of special importance because this allows the implementation of our approach in exact arithmetic, which might be feasible if the matrices are sparse and not too big. In fact, we believe that the construction of the QK(T)F is also possible if the matrix pencil $sE - A$ contains symbolic entries, as is common for the analysis of electrical circuits, where one might just add the symbol $R$ into the matrix instead of a specific value of the corresponding resistor. However, we have not formalized this, but our running example will show that it is no problem to keep symbolic entries in the matrix. This is a major difference of our approach compared to those available in the literature, which often aim for unitary transformation matrices (due to numerical stability) and are therefore not suitable for symbolic calculations.

Fig. 1.1. An electrical circuit with sources and an open terminal used as the origin of the DAE (1.2) (used as a running example).

As a running example we use a DAE arising from an electrical circuit as shown in Figure 1.1. The electrical circuit has no practical purpose and is for academic analysis only. We assume that all the quantities $L$, $C$, $R$, $R_G$, $R_F$ are positive. To obtain the DAE description, let the state variable be given by $x = (p_+, p_-, p_o, p_T, i_L, i_p, i_m, i_G, i_F, i_R, i_o, i_V, i_C, i_T)$, consisting of the node potentials and the currents through the branches. The inhomogeneity is $f = Bu$ with $u = (I, V)$ given by the sources and the matrix $B$ as below. The defining property of an ideal operational amplifier in feedback configuration is given by $p_+ = p_-$ and $i_+ = 0 = i_-$. Collecting all defining equations of the circuit we obtain 13 equations for 14 state

variables, which can be written as a DAE as follows:

(1.2)  $E\dot{x} = Ax + B\begin{pmatrix} I \\ V \end{pmatrix}$,

with coefficient matrices $E, A \in \mathbb{R}^{13\times 14}$ whose nonzero entries are built from $L$, $C$, $R$, $R_G$, $R_F$, and $\pm 1$. The coefficient matrices are not square; hence the corresponding matrix pencil $sE - A$ cannot be regular, and standard tools cannot be used to analyze this description of the circuit.

The paper is organized as follows. In section 2 we present our main results, in particular how the Wong sequences directly yield the QKTF (Theorem 2.3) and the minimal indices associated with the pencil (Theorem 2.9). Afterward, we show how the QKF can be used to fully characterize the solution behavior of the corresponding DAE in section 3. The proofs of the main results are carried out in section 5. Preliminary results are presented and proved in section 4.

We close the introduction with the nomenclature used in this paper:

$\mathbb{N}$: set of natural numbers with zero, $\mathbb{N} = \{0, 1, 2, \ldots\}$
$\mathbb{Q}, \mathbb{R}, \mathbb{C}$: fields of rational, real, and complex numbers, resp.
$K$: either $\mathbb{Q}$, $\mathbb{R}$, or $\mathbb{C}$
$Gl_n(K)$: the set of invertible $n\times n$ matrices over $K$
$K[s]$: the ring of polynomials with coefficients in $K$
$K^{m\times n}$: the set of $m\times n$ matrices with entries in $K$
$I$ or $I_n$: the identity matrix of size $n\times n$ for $n \in \mathbb{N}$
$A^*$: the (conjugate) transpose of the matrix $A \in K^{m\times n}$
$AS := \{\, Ax \in K^m \mid x \in S \,\}$: the image of $S \subseteq K^n$ under $A \in K^{m\times n}$
$A^{-1}S := \{\, x \in K^n \mid Ax \in S \,\}$: the preimage of $S \subseteq K^m$ under $A \in K^{m\times n}$
$A^{-*}S := (A^*)^{-1}S$
$S^{\perp} := \{\, x \in K^n \mid \forall s \in S:\ x^*s = 0 \,\}$: the orthogonal complement of $S \subseteq K^n$
$\operatorname{rank}_{\mathbb{C}}(\lambda E - A)$: the rank of $(\lambda E - A) \in \mathbb{C}^{m\times n}$, $E, A \in K^{m\times n}$, for $\lambda \in \mathbb{C}$; $\operatorname{rank}_{\mathbb{C}}(\infty E - A) := \operatorname{rank}_{\mathbb{C}} E$
$\mathcal{C}^\infty$: the space of smooth (i.e., arbitrarily often differentiable) functions
$\mathcal{D}_{\mathrm{pw}\mathcal{C}^\infty}$: the space of piecewise-smooth distributions as introduced in [33, 34]

2. Main results. As mentioned in the introduction, our approach is based on the Wong sequences, which have been introduced in [40] for the analysis of matrix pencils. They can be calculated via a recursive subspace iteration. In a precursor [9] of this paper we used them to determine the quasi-Weierstraß form, and it will turn out that they are the
appropriate tool to determine a QKF as well.

Definition 2.1 (Wong sequences [40]). Consider a matrix pencil $sE - A \in K^{m\times n}[s]$. The Wong sequences corresponding to $sE - A$ are given by

$\mathcal{V}^0 := K^n,\quad \mathcal{V}^{i+1} := A^{-1}(E\mathcal{V}^i) \subseteq K^n,\qquad \mathcal{W}^0 := \{0\},\quad \mathcal{W}^{i+1} := E^{-1}(A\mathcal{W}^i) \subseteq K^n.$
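Since the Wong sequences are plain subspace iterations, their limits can be computed in exact arithmetic, in the spirit of the MATLAB preimage function of Listing 1 below. The following is a sketch in Python with SymPy; the function names `preimage` and `wong_limits` are ours, and the regular $2\times 2$ pencil in the usage example is an arbitrary illustration, not taken from the paper.

```python
import sympy as sp

def preimage(A, S):
    """Basis (as columns) of A^{-1}(im S) = { x : A x in im S }."""
    n = A.shape[1]
    # x lies in the preimage iff [A, S] [x; y] = 0 for some y (then A x = S(-y)).
    null = sp.Matrix.hstack(A, S).nullspace()
    if not null:
        return sp.zeros(n, 0)
    proj = sp.Matrix.hstack(*[v[:n, 0] for v in null])  # drop the y-part
    cs = proj.columnspace()
    return sp.Matrix.hstack(*cs) if cs else sp.zeros(n, 0)

def wong_limits(E, A):
    """Limits V* and W* of V^{i+1} = A^{-1}(E V^i), W^{i+1} = E^{-1}(A W^i).
    Both sequences are nested, so a stagnating dimension means termination."""
    n = E.shape[1]
    V = sp.eye(n)                      # V^0 = K^n
    while True:
        Vn = preimage(A, E * V)
        if Vn.shape[1] == V.shape[1]:
            break
        V = Vn
    W = sp.zeros(n, 0)                 # W^0 = {0}
    while True:
        Wn = preimage(E, A * W)
        if Wn.shape[1] == W.shape[1]:
            break
        W = Wn
    return V, W

# Usage: a regular pencil (an index-1 DAE), so V* and W* are complementary.
E = sp.Matrix([[1, 0], [0, 0]])
A = sp.Matrix([[1, 0], [0, 1]])
V, W = wong_limits(E, A)
print(V.shape[1], W.shape[1])           # -> 1 1 (dimensions of V*, W*)
print(sp.Matrix.hstack(V, W).rank())    # -> 2 (V* + W* = K^2)
```

Working over SymPy's exact rationals (or symbols) matches the paper's emphasis on $K = \mathbb{Q}$ and symbolic circuit entries.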

Let $\mathcal{V}^* := \bigcap_{i\in\mathbb{N}} \mathcal{V}^i$ and $\mathcal{W}^* := \bigcup_{i\in\mathbb{N}} \mathcal{W}^i$ be the limits of the Wong sequences. It is easy to see that the Wong sequences are nested, terminate, and satisfy

(2.1)
$\exists k^* \in \mathbb{N}\ \forall j \in \mathbb{N}:\quad \mathcal{V}^0 \supsetneq \mathcal{V}^1 \supsetneq \cdots \supsetneq \mathcal{V}^{k^*} = \mathcal{V}^{k^*+j} = \mathcal{V}^* = A^{-1}(E\mathcal{V}^*) \supseteq \ker A,$
$\exists l^* \in \mathbb{N}\ \forall j \in \mathbb{N}:\quad \mathcal{W}^0 \subseteq \ker E = \mathcal{W}^1 \subsetneq \cdots \subsetneq \mathcal{W}^{l^*} = \mathcal{W}^{l^*+j} = \mathcal{W}^* = E^{-1}(A\mathcal{W}^*),$

as well as

(2.2)  $A\mathcal{V}^* \subseteq E\mathcal{V}^*$ and $E\mathcal{W}^* \subseteq A\mathcal{W}^*$.

For our example DAE (1.2) we obtain $\mathcal{V}^1 = \mathcal{V}^*$ and $\mathcal{W}^1 \subsetneq \mathcal{W}^2 = \mathcal{W}^*$. We carried out the calculation with MATLAB and its Symbolic Toolbox and the following short function for calculating the preimage.

Listing 1. MATLAB function for calculating a basis of the preimage $A^{-1}(\operatorname{im} S)$ for some matrices $A$ and $S$.

    function V = getPreImage(A, S)
        [m1, n1] = size(A);
        [m2, n2] = size(S);
        if m1 == m2
            H = null([A, S]);         % nullspace of [A, S]
            V = colspace(H(1:n1, :)); % project onto the first n1 coordinates
        else
            error('Both matrices must have same number of rows');
        end;

Before stating our main result we repeat the result concerning the Wong sequences and regular matrix pencils.

Theorem 2.2 (the regular case, see [9]). Consider a regular matrix pencil $sE - A \in K^{m\times n}[s]$, i.e., $m = n$ and $\det(sE - A) \in K[s] \setminus \{0\}$. Let $\mathcal{V}^*$ and $\mathcal{W}^*$ be the limits of the corresponding Wong sequences. Choose any full rank matrices $V$ and $W$ such that $\operatorname{im} V = \mathcal{V}^*$ and $\operatorname{im} W = \mathcal{W}^*$. Then $T = [V, W]$ and $S = [EV, AW]^{-1}$ are invertible and put the matrix pencil $sE - A$ into quasi-Weierstraß form,

$(SET, SAT) = \left( \begin{bmatrix} I & 0 \\ 0 & N \end{bmatrix}, \begin{bmatrix} J & 0 \\ 0 & I \end{bmatrix} \right),$

where $J \in K^{n_J\times n_J}$, $n_J \in \mathbb{N}$, and $N \in K^{n_N\times n_N}$, $n_N = n - n_J$, is a nilpotent matrix. In particular, when choosing $T_J$ and $T_N$ such that $T_J^{-1} J T_J$ and $T_N^{-1} N T_N$ are in Jordan canonical form, then $S = [EVT_J, AWT_N]^{-1}$ and $T = [VT_J, WT_N]$ put the regular matrix pencil $sE - A$ into Weierstraß canonical form.

Important consequences of Theorem 2.2 for the Wong sequences in the regular case are

$\mathcal{V}^* \cap \mathcal{W}^* = \{0\}, \quad E\mathcal{V}^* \cap A\mathcal{W}^* = \{0\}, \quad \mathcal{V}^* + \mathcal{W}^* = K^n, \quad E\mathcal{V}^* + A\mathcal{W}^* = K^n.$

These properties do not hold anymore for a general matrix pencil $sE - A$; see Figure 2.1 for an illustration of the situation.

Fig. 2.1. The relationship of the limits $\mathcal{V}^*$ and $\mathcal{W}^*$ of the Wong sequences of the matrix pencil $sE - A \in K^{m\times n}[s]$ in the general case; the numbers $n_P, n_R, n_Q, m_P, m_R, m_Q \in \mathbb{N}$ denote the (difference of the) dimensions of the corresponding spaces.

We are now ready to present our first main result, which states that the knowledge of the spaces $\mathcal{V}^*$ and $\mathcal{W}^*$ is sufficient to obtain the QKTF, which already captures most structural properties of the matrix pencil $sE - A$. With the help of the Wong sequences Armentano [3] already obtained a similar result; however, his aim was to obtain a triangular form where the diagonal blocks are in canonical form. Therefore, his result is more general than ours; however, the price is a more complicated proof and it is also not clear how to obtain the transformation matrices explicitly.

Theorem 2.3 (quasi-Kronecker triangular form (QKTF)). Let $sE - A \in K^{m\times n}[s]$ and consider the corresponding limits $\mathcal{V}^*$ and $\mathcal{W}^*$ of the Wong sequences as in Definition 2.1. Choose any full rank matrices $P_1 \in K^{n\times n_P}$, $P_2 \in K^{m\times m_P}$, $R_1 \in K^{n\times n_R}$, $R_2 \in K^{m\times m_R}$, $Q_1 \in K^{n\times n_Q}$, $Q_2 \in K^{m\times m_Q}$ such that

$\operatorname{im} P_1 = \mathcal{V}^* \cap \mathcal{W}^*, \qquad \operatorname{im} P_2 = E\mathcal{V}^* \cap A\mathcal{W}^*,$
$(\mathcal{V}^* \cap \mathcal{W}^*) \oplus \operatorname{im} R_1 = \mathcal{V}^* + \mathcal{W}^*, \qquad (E\mathcal{V}^* \cap A\mathcal{W}^*) \oplus \operatorname{im} R_2 = E\mathcal{V}^* + A\mathcal{W}^*,$
$(\mathcal{V}^* + \mathcal{W}^*) \oplus \operatorname{im} Q_1 = K^n, \qquad (E\mathcal{V}^* + A\mathcal{W}^*) \oplus \operatorname{im} Q_2 = K^m.$

Then $T_{\mathrm{trian}} = [P_1, R_1, Q_1] \in Gl_n(K)$ and $S_{\mathrm{trian}} = [P_2, R_2, Q_2]^{-1} \in Gl_m(K)$ transform $sE - A$ into QKTF:

(2.3)  $(S_{\mathrm{trian}} E T_{\mathrm{trian}},\ S_{\mathrm{trian}} A T_{\mathrm{trian}}) = \left( \begin{bmatrix} E_P & E_{PR} & E_{PQ} \\ 0 & E_R & E_{RQ} \\ 0 & 0 & E_Q \end{bmatrix}, \begin{bmatrix} A_P & A_{PR} & A_{PQ} \\ 0 & A_R & A_{RQ} \\ 0 & 0 & A_Q \end{bmatrix} \right),$

where
(i) $E_P, A_P \in K^{m_P\times n_P}$, $m_P < n_P$, are such that $\operatorname{rank}_{\mathbb{C}}(\lambda E_P - A_P) = m_P$ for all $\lambda \in \mathbb{C} \cup \{\infty\}$,
(ii) $E_R, A_R \in K^{m_R\times n_R}$, $m_R = n_R$, with $sE_R - A_R$ regular, i.e., $\det(sE_R - A_R) \not\equiv 0$,
(iii) $E_Q, A_Q \in K^{m_Q\times n_Q}$, $m_Q > n_Q$, are such that $\operatorname{rank}_{\mathbb{C}}(\lambda E_Q - A_Q) = n_Q$ for all $\lambda \in \mathbb{C} \cup \{\infty\}$.

The proof is carried out in section 5.

Remark 2.4. The sizes of the blocks in (2.3) are uniquely given by the matrix pencil $sE - A$, because they depend only on the subspaces constructed by the Wong sequences and not on the choice of bases thereof. It is also possible that $m_P = 0$ (or $n_Q = 0$), which means that there are matrices with no rows (or no columns). On the other hand, if $n_P = 0$, $n_R = 0$, or $m_Q = 0$, then the $P$-blocks, $R$-blocks, or $Q$-blocks are not present at all. Furthermore, it is easily seen that if $sE - A$ fulfills (i), (ii), or (iii) itself, then $sE - A$ is already in QKTF with $T_{\mathrm{trian}} = P_1 = I$, $T_{\mathrm{trian}} = R_1 = I$, or $T_{\mathrm{trian}} = Q_1 = I$, and $S_{\mathrm{trian}} = P_2^{-1} = I$, $S_{\mathrm{trian}} = R_2^{-1} = I$, or $S_{\mathrm{trian}} = Q_2^{-1} = I$.

Remark 2.5. From Lemma 4.4 we know that $E(\mathcal{V}^*\cap\mathcal{W}^*) = E\mathcal{V}^*\cap A\mathcal{W}^* = A(\mathcal{V}^*\cap\mathcal{W}^*)$; hence

$E\mathcal{V}^*\cap A\mathcal{W}^* = E(\mathcal{V}^*\cap\mathcal{W}^*) + A(\mathcal{V}^*\cap\mathcal{W}^*).$

Furthermore, due to (2.2),

$E\mathcal{V}^* + A\mathcal{W}^* = E(\mathcal{V}^*+\mathcal{W}^*) + A(\mathcal{V}^*+\mathcal{W}^*).$

Hence the subspace pairs $(\mathcal{V}^*\cap\mathcal{W}^*,\ E\mathcal{V}^*\cap A\mathcal{W}^*)$ and $(\mathcal{V}^*+\mathcal{W}^*,\ E\mathcal{V}^*+A\mathcal{W}^*)$ are reducing subspaces of the matrix pencil $sE - A$ in the sense of [37] and are in fact the minimal and maximal reducing subspaces.¹

In our example (1.2) we have, with $K := \frac{R_G + R_F}{R_G}$,

$\mathcal{V}^*\cap\mathcal{W}^* = \mathcal{V}^*, \quad \mathcal{V}^*+\mathcal{W}^* = \mathcal{W}^*, \quad E\mathcal{V}^*\cap A\mathcal{W}^* = E\mathcal{V}^*, \quad E\mathcal{V}^*+A\mathcal{W}^* = A\mathcal{W}^*.$

Therefore, we can choose full rank matrices $[P_1, R_1, Q_1]$ and $[P_2, R_2, Q_2]$ as in Theorem 2.3, with entries built from $R$, $R_F$, $K$, and $\pm 1$.

¹We thank the anonymous reviewer for making us aware of this relationship.
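Choosing $R_1$ and $Q_1$ (and likewise $R_2$, $Q_2$) amounts to extending a basis of one subspace to a basis of a larger one. A minimal sketch of such a complement computation with SymPy (the function name `complement_in` and the usage data are ours; any rank-revealing column selection would do):

```python
import sympy as sp

def complement_in(S, T):
    """Given full column rank S, T with im S contained in im T, return R such
    that im S (+) im R = im T, by greedily extending a basis of im S with
    those columns of T that increase the rank."""
    kept = [S[:, j] for j in range(S.shape[1])]
    R_cols = []
    for j in range(T.shape[1]):
        cand = sp.Matrix.hstack(*(kept + [T[:, j]]))
        if cand.rank() == len(kept) + 1:   # T's j-th column is independent
            kept.append(T[:, j])
            R_cols.append(T[:, j])
    return sp.Matrix.hstack(*R_cols) if R_cols else sp.zeros(S.shape[0], 0)

# Usage: extend span{(1,1,0)} to all of K^3.
S = sp.Matrix([[1], [1], [0]])
R = complement_in(S, sp.eye(3))
print(R.shape[1])                        # -> 2 complement directions
print(sp.Matrix.hstack(S, R).rank())     # -> 3, i.e., the direct sum is K^3
```

Because the complement columns are taken from the given basis of the larger space, the resulting transformation matrices stay as simple (and as symbolic) as that basis.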

With this choice we obtain the QKTF for our example; its blocks have entries built from $C$, $R$, $L$, $K$, $R_G$, and $R_F$.

The QKTF is already useful for the analysis of the matrix pencil $sE - A$ and the associated DAE $E\dot{x} = Ax + f$. However, a complete decoupling of the different parts, i.e., a block diagonal form, is more satisfying from a theoretical viewpoint and is also a necessary step toward obtaining the KCF as a corollary. In the next result we show that we can transform any matrix pencil $sE - A$ into such a block diagonal form, which we call quasi-Kronecker form (QKF) because all the important features of the KCF are captured. In fact, it turns out that the diagonal blocks of the QKTF (2.3) already are the diagonal blocks of the QKF.

Theorem 2.6 (QKF). Using the notation from Theorem 2.3, the following equations are solvable for matrices $F_1, F_2, G_1, G_2, H_1, H_2$ of appropriate size:

(2.4a)  $0 = E_{RQ} + E_R F_1 + F_2 E_Q, \qquad 0 = A_{RQ} + A_R F_1 + F_2 A_Q,$
(2.4b)  $0 = E_{PR} + E_P G_1 + G_2 E_R, \qquad 0 = A_{PR} + A_P G_1 + G_2 A_R,$
(2.4c)  $0 = (E_{PQ} + E_{PR} F_1) + E_P H_1 + H_2 E_Q, \qquad 0 = (A_{PQ} + A_{PR} F_1) + A_P H_1 + H_2 A_Q,$

and for any such matrices let

$S := \begin{bmatrix} I & G_2 & H_2 + G_2 F_2 \\ 0 & I & F_2 \\ 0 & 0 & I \end{bmatrix} S_{\mathrm{trian}} = [P_2,\ R_2 - P_2 G_2,\ Q_2 - P_2 H_2 - R_2 F_2]^{-1}$

and

$T := T_{\mathrm{trian}} \begin{bmatrix} I & G_1 & H_1 \\ 0 & I & F_1 \\ 0 & 0 & I \end{bmatrix} = [P_1,\ R_1 + P_1 G_1,\ Q_1 + P_1 H_1 + R_1 F_1].$

Then $S \in Gl_m(K)$ and $T \in Gl_n(K)$ put $sE - A$ in QKF

(2.5)  $(SET, SAT) = \left( \begin{bmatrix} E_P & 0 & 0 \\ 0 & E_R & 0 \\ 0 & 0 & E_Q \end{bmatrix}, \begin{bmatrix} A_P & 0 & 0 \\ 0 & A_R & 0 \\ 0 & 0 & A_Q \end{bmatrix} \right),$

where the block diagonal entries are the same as for the QKTF (2.3). In particular, the QKF (without the transformation matrices $S$ and $T$) can be obtained with only the Wong sequences (i.e., without solving (2.4)). Furthermore, the QKF (2.5) is unique in the following sense:

(2.6)  $(E, A) \cong (E', A') \iff (E_P, A_P) \cong (E_P', A_P'),\ (E_R, A_R) \cong (E_R', A_R'),\ (E_Q, A_Q) \cong (E_Q', A_Q'),$

where $E_P', A_P', E_R', A_R', E_Q', A_Q'$ are the corresponding blocks of the QKF of the matrix pencil $sE' - A'$.

The proof is carried out in section 5. In order to actually find solutions of (2.4), the following remark might be helpful.

Remark 2.7. Matrix equations of the form

$0 = M + PX + YQ, \qquad 0 = R + SX + YT$

for given matrices $M, P, Q, R, S, T$ of appropriate size can be written equivalently as a standard linear system

$\begin{bmatrix} I \otimes P & Q^\top \otimes I \\ I \otimes S & T^\top \otimes I \end{bmatrix} \begin{pmatrix} \operatorname{vec}(X) \\ \operatorname{vec}(Y) \end{pmatrix} = -\begin{pmatrix} \operatorname{vec}(M) \\ \operatorname{vec}(R) \end{pmatrix},$

where $\otimes$ denotes the Kronecker product of matrices and $\operatorname{vec}(H)$ denotes the vectorization of the matrix $H$ obtained by stacking all columns of $H$ over each other.

For our example (1.2) we already know the QKF because, as mentioned in Theorem 2.6, the diagonal blocks are the same as for the QKTF. However, we do not yet know the final transformation matrices which yield the QKF; therefore, we have to find solutions of (2.4). For our example these solutions $F_1, F_2, G_1, G_2, H_1, H_2$, as well as the resulting transformation matrices $S$ and $T$, have entries built from $K$, $R$, $R_G$, and $R_F$.

Finally, an analysis of the matrix pencils $sE_P - A_P$ and $sE_Q - A_Q$ in (2.5), invoking Lemma 4.12 and Corollary 4.13, together with Theorem 2.2, now allows us to obtain the KCF as a corollary.

Corollary 2.8 (KCF). For every matrix pencil $sE - A \in K^{m\times n}[s]$ there exist transformation matrices $S \in Gl_m(\mathbb{C})$ and $T \in Gl_n(\mathbb{C})$ such that, for $a, b, c, d \in \mathbb{N}$ and $\varepsilon_1, \ldots, \varepsilon_a, \rho_1, \ldots, \rho_b, \sigma_1, \ldots, \sigma_c, \eta_1, \ldots, \eta_d \in \mathbb{N}$,

$S(sE - A)T = \operatorname{diag}\big(\mathcal{P}_{\varepsilon_1}(s), \ldots, \mathcal{P}_{\varepsilon_a}(s),\ \mathcal{J}_{\rho_1}(s), \ldots, \mathcal{J}_{\rho_b}(s),\ \mathcal{N}_{\sigma_1}(s), \ldots, \mathcal{N}_{\sigma_c}(s),\ \mathcal{Q}_{\eta_1}(s), \ldots, \mathcal{Q}_{\eta_d}(s)\big),$

where

$\mathcal{P}_\varepsilon(s) = s\,[\,I_\varepsilon\ \ 0\,] - [\,0\ \ I_\varepsilon\,] \in K^{\varepsilon\times(\varepsilon+1)}[s], \quad \varepsilon \in \mathbb{N},$

$\mathcal{J}_\rho(s) = sI - \begin{bmatrix} \lambda & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{bmatrix} \in \mathbb{C}^{\rho\times\rho}[s], \quad \rho \in \mathbb{N},\ \lambda \in \mathbb{C},$

$\mathcal{N}_\sigma(s) = s\begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix} - I \in K^{\sigma\times\sigma}[s], \quad \sigma \in \mathbb{N},$

$\mathcal{Q}_\eta(s) = s\begin{bmatrix} I_\eta \\ 0 \end{bmatrix} - \begin{bmatrix} 0 \\ I_\eta \end{bmatrix} \in K^{(\eta+1)\times\eta}[s], \quad \eta \in \mathbb{N}.$

The numbers $\rho_i$, $\sigma_i$, $\varepsilon_i$, and $\eta_i$ in Corollary 2.8 are usually called (degrees of) elementary divisors and minimal indices and play an important role in the analysis of matrix pencils; see, e.g., [16, 23, 24, 25]. More precisely, $\rho_1, \ldots, \rho_b$ are the degrees of the finite elementary divisors, $\sigma_1, \ldots, \sigma_c$ are the degrees of the infinite elementary divisors, $\varepsilon_1, \ldots, \varepsilon_a$ are the column minimal indices, and $\eta_1, \ldots, \eta_d$ are the row minimal indices. The minimal indices completely determine (under the standing assumption that $\varepsilon_1 \le \cdots \le \varepsilon_a$ and $\eta_1 \le \cdots \le \eta_d$) the singular part of the KCF. The following result shows that the minimal indices can actually be determined via the Wong sequences. Another method for determining the minimal indices via the control theoretic version of the Wong sequences can be found in [24, Prop. 2.2]; however, the minimal indices there correspond to a feedback canonical form.

Theorem 2.9 (minimal indices). Consider the Wong sequences (2.1) and the notation of Corollary 2.8. Let $\mathcal{K} = \mathcal{V}^* \cap \mathcal{W}^*$, $\widehat{\mathcal{W}}^i := (E\mathcal{V}^{i-1})^\perp$, $i = 1, 2, \ldots$, and $\widehat{\mathcal{K}} = (A\mathcal{W}^*)^\perp \cap (E\mathcal{V}^*)^\perp$. Then

$a = \dim(\mathcal{W}^1 \cap \mathcal{K}), \qquad d = \dim(\widehat{\mathcal{W}}^1 \cap \widehat{\mathcal{K}}),$

and with, for $i = 0, 1, 2, \ldots$,

$\Delta_i := \dim(\mathcal{W}^{i+2} \cap \mathcal{K}) - \dim(\mathcal{W}^{i+1} \cap \mathcal{K}), \qquad \widehat{\Delta}_i := \dim(\widehat{\mathcal{W}}^{i+2} \cap \widehat{\mathcal{K}}) - \dim(\widehat{\mathcal{W}}^{i+1} \cap \widehat{\mathcal{K}}),$

it holds that

$\varepsilon_1 = \cdots = \varepsilon_{a - \Delta_0} = 0, \qquad \varepsilon_{a - \Delta_{i-1} + 1} = \cdots = \varepsilon_{a - \Delta_i} = i, \quad i = 1, 2, \ldots,$

and

$\eta_1 = \cdots = \eta_{d - \widehat{\Delta}_0} = 0, \qquad \eta_{d - \widehat{\Delta}_{i-1} + 1} = \cdots = \eta_{d - \widehat{\Delta}_i} = i, \quad i = 1, 2, \ldots.$

Furthermore, the minimal indices are invariant under matrix pencil equivalence. In particular, the KCF is unique.

The proof is carried out in section 5. This proof uses the KCF and, therefore, in particular it uses Lemma 4.12 and Corollary 4.13, which provide an explicit method to obtain the KCF for the full rank matrix pencils $sE_P - A_P$ and $sE_Q - A_Q$ in the QKF (2.5). However, if one is only interested in the singular part of the KCF (without the necessary transformation), the above result shows that knowledge of the Wong sequences is already sufficient; there is no need to actually carry out the tedious calculations of Lemma 4.12 and Corollary 4.13.

Note that $\Delta_{i-1} = \Delta_i$ or $\widehat{\Delta}_{i-1} = \widehat{\Delta}_i$ is possible for some $i$. In that case there are no minimal indices of value $i$, because the corresponding index range is empty. Furthermore, once $\Delta_i = 0$ or $\widehat{\Delta}_{\hat{\imath}} = 0$ for some index $i$ or $\hat{\imath}$, then $\Delta_{i+j} = 0$ and $\widehat{\Delta}_{\hat{\imath}+\hat{\jmath}} = 0$ for all $j$ and $\hat{\jmath}$. In particular, there are no minimal indices with values larger than $i$ and $\hat{\imath}$, respectively.

3. Application of the QK(T)F to DAE solution theory. In this section we study the DAE (1.1), $E\dot{x} = Ax + f$, corresponding to the matrix pencil $sE - A \in \mathbb{R}^{m\times n}[s]$. Note that we restrict ourselves here to the field $K = \mathbb{R}$, because (1) the vast majority of DAEs arising from modeling physical phenomena are not complex-valued, (2) all the results for $K = \mathbb{R}$ carry over to $K = \mathbb{C}$ without modification (the converse is not true in general), and (3) the case $K = \mathbb{Q}$ is rather artificial when considering solutions of the DAE (1.1), because then we would have to consider functions $f: \mathbb{R} \to \mathbb{Q}$ or even $f: \mathbb{Q} \to \mathbb{Q}$.

We first have to decide in which (function) space we actually consider the DAE (1.1). To avoid problems with differentiability, one suitable choice is the space of smooth functions $\mathcal{C}^\infty$, i.e., we consider smooth inhomogeneities $f \in (\mathcal{C}^\infty)^m$ and smooth solutions $x \in (\mathcal{C}^\infty)^n$. Unfortunately, this excludes the possibility of considering step functions as inhomogeneities, which occur rather frequently. It is well known that the
solutions of DAEs might involve derivatives of the inhomogeneities; hence jumps in the inhomogeneity might lead to nonexistence of solutions due to a lack of differentiability. However, this is not a structural nonexistence, since every smooth approximation of the jump could lead to well-defined solutions. Therefore, one might extend the solution space by considering distributions (or generalized functions) as formally introduced by Schwartz [31]. The advantage of this larger solution space is that each distribution is smooth; in particular, the unit step function (Heaviside function) has a derivative: the Dirac impulse. Unfortunately, the whole space of distributions is too large; for example, it is, in general, not possible to speak of an initial value, because evaluation of a distribution at a specific time is not defined. To overcome this obstacle we consider the smaller space of piecewise-smooth distributions $\mathcal{D}_{\mathrm{pw}\mathcal{C}^\infty}$ as introduced in [33, 34]. For piecewise-smooth distributions a left- and right-sided evaluation is possible, i.e., for $D \in \mathcal{D}_{\mathrm{pw}\mathcal{C}^\infty}$ the values $D(t-) \in \mathbb{R}$ and $D(t+) \in \mathbb{R}$ are well defined for all $t \in \mathbb{R}$. Altogether, we will formulate all results for both solution spaces $\mathcal{S} = \mathcal{C}^\infty$ and $\mathcal{S} = \mathcal{D}_{\mathrm{pw}\mathcal{C}^\infty}$, so that readers who feel uneasy about the distributional solution framework can ignore it.

Before stating our main results concerning the solution theory of the DAE (1.1), we need the following result about polynomial matrices. A (square) polynomial matrix

$U(s) \in K^{n\times n}[s]$ is called unimodular if and only if it is invertible within the ring $K^{n\times n}[s]$, i.e., there exists $V(s) \in K^{n\times n}[s]$ such that $U(s)V(s) = I$.

Lemma 3.1 (existence of unimodular inverse). Consider a matrix pencil $sE - A \in K^{m\times n}[s]$, $m \ne n$, such that $\operatorname{rank}(\lambda E - A) = \min\{m, n\}$ for all $\lambda \in \mathbb{C}$. Then there exist polynomial matrices $M(s)$ and $K(s)$ of appropriate sizes such that, if $m < n$, $[M(s), K(s)]$ is unimodular and

$(sE - A)[M(s), K(s)] = [\,I_m\ \ 0\,],$

or, if $m > n$, $\left[\begin{smallmatrix} M(s) \\ K(s) \end{smallmatrix}\right]$ is unimodular and

$\begin{bmatrix} M(s) \\ K(s) \end{bmatrix}(sE - A) = \begin{bmatrix} I_n \\ 0 \end{bmatrix}.$

The proof is carried out in section 4.4. The following theorem gives a complete characterization of the solution behavior of the DAE (1.1). Note that a discussion of the solution behavior of a DAE which is already in KCF is rather straightforward (see, e.g., [39]); however, a characterization based just on the QKF (2.5), without knowledge of a more detailed structure (e.g., some staircase form or even the KCF of $sE_P - A_P$ and $sE_Q - A_Q$), seems new.

Theorem 3.2 (complete characterization of solutions of the DAE). Let $sE - A \in \mathbb{R}^{m\times n}[s]$ and use the notation from Theorem 2.6. Consider the solution space $\mathcal{S} = \mathcal{C}^\infty$ or $\mathcal{S} = \mathcal{D}_{\mathrm{pw}\mathcal{C}^\infty}$ and let $f \in \mathcal{S}^m$. According to Theorem 2.2, let $S_R, T_R \in Gl_{n_R}(\mathbb{R})$ be the matrices which transform $sE_R - A_R$ into quasi-Weierstraß form, i.e.,

(3.1)  $\begin{bmatrix} I & & \\ & S_R & \\ & & I \end{bmatrix} S(sE - A)T \begin{bmatrix} I & & \\ & T_R & \\ & & I \end{bmatrix} = \begin{bmatrix} sE_P - A_P & & & \\ & sI - J & & \\ & & sN - I & \\ & & & sE_Q - A_Q \end{bmatrix},$

and let $(f_P, f_J, f_N, f_Q) := \begin{bmatrix} I & & \\ & S_R & \\ & & I \end{bmatrix} Sf$, where the splitting corresponds to the block sizes in (3.1). According to Lemma 3.1, choose unimodular matrices $[M_P(s), K_P(s)]$ and $\left[\begin{smallmatrix} M_Q(s) \\ K_Q(s) \end{smallmatrix}\right]$ such that

$(sE_P - A_P)[M_P(s), K_P(s)] = [\,I\ \ 0\,] \quad\text{and}\quad \begin{bmatrix} M_Q(s) \\ K_Q(s) \end{bmatrix}(sE_Q - A_Q) = \begin{bmatrix} I \\ 0 \end{bmatrix}.$

Then there exist solutions of the DAE $E\dot{x} = Ax + f$ if and only if

$K_Q\big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)(f_Q) = 0.$

If this is the case, then an initial value $x^0 = T \begin{bmatrix} I & & \\ & T_R & \\ & & I \end{bmatrix} (x_P^0, x_J^0, x_N^0, x_Q^0)$ is consistent at $t_0 \in \mathbb{R}$, i.e., there exists a solution of the initial value problem

(3.2)  $E\dot{x} = Ax + f, \quad x(t_0) = x^0,$

if and only if

(3.3)  $x_Q^0 = \Big( M_Q\big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)(f_Q) \Big)(t_0) \quad\text{and}\quad x_N^0 = -\Big( \sum_{k=0}^{n_N-1} N^k \big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)^k (f_N) \Big)(t_0).$
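The condition on $x_N^0$ reflects that the nilpotent block forces $x_N = -\sum_{k=0}^{n_N-1} N^k f_N^{(k)}$: applying $N\tfrac{\mathrm{d}}{\mathrm{d}t}$ shifts the sum and the term $N^{n_N}$ vanishes. This can be checked symbolically; a minimal sketch (block size and inhomogeneity are arbitrary choices of ours, not from the paper's example):

```python
import sympy as sp

t = sp.symbols('t')
n = 3
# Nilpotent upper shift block N of the quasi-Weierstrass form (sN - I part)
N = sp.zeros(n, n)
for k in range(n - 1):
    N[k, k + 1] = 1

# An arbitrary smooth inhomogeneity f_N (chosen for illustration only)
f = sp.Matrix([sp.sin(t), t**2, sp.exp(2 * t)])

# Candidate solution x_N = -sum_{k=0}^{n-1} N^k f^{(k)}
x = -sum((N**k * f.diff(t, k) for k in range(n)), sp.zeros(n, 1))

# Residual of the DAE N x' = x + f_N must vanish identically
residual = sp.expand(N * x.diff(t) - x - f)
print(residual.T)  # -> Matrix([[0, 0, 0]])
```

Evaluating this unique $x_N$ at $t_0$ is exactly the second consistency condition in (3.3): the $N$-part leaves no freedom in the initial value.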

If (3.3) holds, then any solution $x = T \begin{bmatrix} I & & \\ & T_R & \\ & & I \end{bmatrix} (x_P, x_J, x_N, x_Q)$ of the initial value problem (3.2) has the form

$x_P = M_P\big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)(f_P) + K_P\big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)(u_{x_P^0}),$
$x_J = \mathrm{e}^{J(\cdot - t_0)} x_J^0 + \mathrm{e}^{J\cdot} \int_{t_0} \mathrm{e}^{-J\cdot} f_J,$
$x_N = -\sum_{k=0}^{n_N-1} N^k \big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)^k (f_N),$
$x_Q = M_Q\big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)(f_Q),$

where $u_{x_P^0} \in \mathcal{S}^{n_P - m_P}$ is such that the initial condition at $t_0$ for $x_P$ is satisfied (which is always possible due to Lemma 4.17) but is, apart from that, arbitrary.

The proof is carried out in section 5. Note that the antiderivative operator $\int_{t_0}: \mathcal{S} \to \mathcal{S}$, $f \mapsto F$, as used in Theorem 3.2, is uniquely defined by the two properties $\tfrac{\mathrm{d}}{\mathrm{d}t}F = f$ and $F(t_0) = 0$ (for $\mathcal{S} = \mathcal{C}^\infty$ this is well known, and for $\mathcal{S} = \mathcal{D}_{\mathrm{pw}\mathcal{C}^\infty}$ this is shown in [34, Prop. 3]; see also [33, Prop. 2.3.6]).

We want to use Theorem 3.2 to characterize the solutions of our example DAE (1.2). We first observe that the regular part can be brought into quasi-Weierstraß form by premultiplying with $S_R = A_R^{-1}$; in particular, the $J$-part is nonexistent, which means that the circuit contains no classical dynamics. We choose unimodular matrices $[M_P(s), K_P(s)]$ and $\left[\begin{smallmatrix} M_Q(s) \\ K_Q(s) \end{smallmatrix}\right]$ as in Lemma 3.1 (with entries built from $C$, $R$, $L$, $K$, and $s$), and obtain $f_J = 0$, $T_R = I$, and $f_P$, $f_N$, $f_Q$ in terms of $I$, $V$, $K$, $R_G$; hence the DAE (1.2) is solvable if and only if

$0 = K_Q\big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)(f_Q) = -LK \tfrac{\mathrm{d}}{\mathrm{d}t} I + V$

or, equivalently, $V = LK \tfrac{\mathrm{d}}{\mathrm{d}t} I$, i.e., the voltage source must be proportional to the change of the current provided by the current source. In that case, the initial value must fulfill

$x(0) = T\big[0, 0, 0, f_N(0), M_Q\big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)(f_Q)(0)\big],$

i.e., recalling that $x = (p_+, p_-, p_o, p_T, i_L, i_p, i_m, i_G, i_F, i_R, i_o, i_V, i_C, i_T)$: the values $i_R(0)$, $i_o(0)$, $i_V(0)$ are arbitrary, $i_p(0) = 0 = i_m(0)$, and the remaining components of $x(0)$ are fixed in terms of $V(0)$, $I(0)$, $i_R(0)$, $i_o(0)$, $i_V(0)$; e.g., $p_o(0) = V(0)$, $p_T(0) = R\, i_R(0)$, and $i_L(0) = I(0)$. If these conditions are satisfied, then all solutions of the initial value problem corresponding to
our example DAE (1.2) are given by

$x = T[u_1, u_2, \ldots\,] = \big[\tfrac{V}{K}, \tfrac{V}{K}, V, p_T(u_1), I, 0, 0, \tfrac{V}{R_F+R_G}, \tfrac{V}{R_F+R_G}, i_R(u_1), i_o(u_2), i_V(u_1,u_2), i_C(u_1), i_T(u_1)\big],$

where $u_1, u_2 \in \mathcal{S}$ are arbitrary, apart from the initial conditions

$u_1(0) = i_R(0) - \tfrac{V(0)}{R}, \qquad \dot{u}_1(0) = \tfrac{1}{CR}\Big(i_V(0) + \tfrac{V(0)}{R_F+R_G} - i_o(0)\Big), \qquad u_2(0) = i_o(0),$

and

$p_T(u_1) = V + R u_1, \quad i_R(u_1) = u_1 + \tfrac{V}{R}, \quad i_o(u_2) = u_2, \quad i_V(u_1, u_2) = u_2 - \tfrac{V}{R_F+R_G} + CR\,\dot{u}_1, \quad i_C(u_1) = CR\,\dot{u}_1, \quad i_T(u_1) = -u_1 - \tfrac{V}{R} - CR\,\dot{u}_1.$

Remark 3.3. A similar statement as in Theorem 3.2 is also possible if we consider only the QKTF (2.3), i.e., instead of (3.1) we consider

$\begin{bmatrix} I & & \\ & S_R & \\ & & I \end{bmatrix} S_{\mathrm{trian}}(sE - A)T_{\mathrm{trian}} \begin{bmatrix} I & & \\ & T_R & \\ & & I \end{bmatrix} = \begin{bmatrix} sE_P - A_P & sE_{PJ} - A_{PJ} & sE_{PN} - A_{PN} & sE_{PQ} - A_{PQ} \\ & sI - J & & sE_{JQ} - A_{JQ} \\ & & sN - I & sE_{NQ} - A_{NQ} \\ & & & sE_Q - A_Q \end{bmatrix}.$

The corresponding conditions for the $Q$-part remain the same; in the condition for the $N$-part the inhomogeneity $f_N$ is replaced by $f_N - (E_{NQ}\tfrac{\mathrm{d}}{\mathrm{d}t} - A_{NQ})(x_Q)$, in the $J$-part the inhomogeneity $f_J$ is replaced by $f_J - (E_{JQ}\tfrac{\mathrm{d}}{\mathrm{d}t} - A_{JQ})(x_Q)$, and in the $P$-part the inhomogeneity $f_P$ is replaced by $f_P - (E_{PJ}\tfrac{\mathrm{d}}{\mathrm{d}t} - A_{PJ})(x_J) - (E_{PN}\tfrac{\mathrm{d}}{\mathrm{d}t} - A_{PN})(x_N) - (E_{PQ}\tfrac{\mathrm{d}}{\mathrm{d}t} - A_{PQ})(x_Q)$.

4. Useful lemmas. In this section we collect several lemmas which are needed to prove the main results. Since we use results from different areas, we group the lemmas accordingly into subsections.

4.1. Standard results from linear algebra.

Lemma 4.1 (orthogonal complements and (pre-)images). For any matrix $M \in K^{p\times q}$ we have
(i) for all subspaces $\mathcal{S} \subseteq K^p$ it holds that $(M^{-1}\mathcal{S})^\perp = M^*(\mathcal{S}^\perp)$;
(ii) for all subspaces $\mathcal{S} \subseteq K^q$ it holds that $(M\mathcal{S})^\perp = M^{-*}(\mathcal{S}^\perp)$.

Proof. Property (i) is shown, e.g., in [7, Property 3.1.3]. Property (ii) follows from considering (i) for $M^*$, $\mathcal{S}^\perp$ instead of $M$, $\mathcal{S}$ and then taking orthogonal complements.

Lemma 4.2 (rank of matrices). Let $A, B \in K^{m\times n}$ with $\operatorname{im} B \subseteq \operatorname{im} A$. Then, for almost all $c \in K$,

$\operatorname{rank} A = \operatorname{rank}(A + cB)$ or, equivalently, $\operatorname{im} A = \operatorname{im}(A + cB)$.

In fact, $\operatorname{rank}(A + cB) < \operatorname{rank} A$ can hold for at most $r = \operatorname{rank} A$ many values of $c$.

Proof. Consider the Smith form [32] of $A$,

$UAV = \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix},$

with invertible matrices $U \in K^{m\times m}$ and $V \in K^{n\times n}$ and $\Sigma_r = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$, $\sigma_i \in K \setminus \{0\}$, $r = \operatorname{rank} A$. Write

$UBV = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix},$

where $B_{11} \in K^{r\times r}$. Since $\operatorname{im} B \subseteq \operatorname{im} A$, it follows that $B_{21} = 0$ and $B_{22} = 0$. Hence we obtain the following implications:

$\operatorname{rank}(A + cB) < \operatorname{rank} A \;\Rightarrow\; \operatorname{rank}[\Sigma_r + cB_{11},\ cB_{12}] < \operatorname{rank}[\Sigma_r,\ 0] = r \;\Rightarrow\; \operatorname{rank}(\Sigma_r + cB_{11}) < r \;\Rightarrow\; \det(\Sigma_r + cB_{11}) = 0.$

Since $\det(\Sigma_r + cB_{11})$ is a polynomial in $c$ of degree at most $r$ but not the zero polynomial (since $\det(\Sigma_r) \ne 0$), it can have at most $r$ zeros. This proves the claim.

Lemma 4.3 (dimension formulae). Let $\mathcal{S} \subseteq K^n$ be any linear subspace of $K^n$ and $M \in K^{m\times n}$. Then

$\dim M\mathcal{S} = \dim \mathcal{S} - \dim(\ker M \cap \mathcal{S}).$

Furthermore, for any two linear subspaces $\mathcal{S}, \mathcal{T}$ of $K^n$ we have

$\dim(\mathcal{S} + \mathcal{T}) = \dim \mathcal{S} + \dim \mathcal{T} - \dim(\mathcal{S} \cap \mathcal{T}).$

Proof. See any textbook on linear algebra for the proof.

4.2. The Wong sequences. The next lemma highlights an important property of the intersection of the limits of the Wong sequences.

Lemma 4.4 (property of $\mathcal{V}^* \cap \mathcal{W}^*$). Let $sE - A \in K^{m\times n}[s]$ and let $\mathcal{V}^*$, $\mathcal{W}^*$ be the limits of the corresponding Wong sequences. Then

$E(\mathcal{V}^* \cap \mathcal{W}^*) = E\mathcal{V}^* \cap A\mathcal{W}^* = A(\mathcal{V}^* \cap \mathcal{W}^*).$

Proof. Clearly, invoking (2.2),

$E(\mathcal{V}^* \cap \mathcal{W}^*) \subseteq E\mathcal{V}^* \cap E\mathcal{W}^* \subseteq E\mathcal{V}^* \cap A\mathcal{W}^*$ and $A(\mathcal{V}^* \cap \mathcal{W}^*) \subseteq A\mathcal{V}^* \cap A\mathcal{W}^* \subseteq E\mathcal{V}^* \cap A\mathcal{W}^*$;

hence it remains to show the converse subspace relationship. To this end we choose $x \in E\mathcal{V}^* \cap A\mathcal{W}^*$, which implies the existence of $v \in \mathcal{V}^*$ and $w \in \mathcal{W}^*$ such that

$Ev = x = Aw;$

hence

$v \in E^{-1}\{Aw\} \subseteq E^{-1}(A\mathcal{W}^*) = \mathcal{W}^*, \qquad w \in A^{-1}\{Ev\} \subseteq A^{-1}(E\mathcal{V}^*) = \mathcal{V}^*.$

Therefore $v, w \in \mathcal{V}^* \cap \mathcal{W}^*$ and $x = Ev \in E(\mathcal{V}^* \cap \mathcal{W}^*)$ as well as $x = Aw \in A(\mathcal{V}^* \cap \mathcal{W}^*)$, which concludes the proof.

For the proof of the main result we briefly consider the Wong sequences of the (conjugate) transposed matrix pencil $sE^* - A^*$; these are connected to the original Wong sequences as follows.

Lemma 4.5 (Wong sequences of the transposed matrix pencil). Consider a matrix pencil $sE - A \in K^{m\times n}[s]$ with corresponding limits of the Wong sequences $\mathcal{V}^*$ and $\mathcal{W}^*$. Denote by $\widehat{\mathcal{V}}^*$ and $\widehat{\mathcal{W}}^*$ the limits of the Wong sequences of the (conjugate) transposed matrix pencil $sE^* - A^*$. Then the following holds:

$\widehat{\mathcal{W}}^* = (E\mathcal{V}^*)^\perp \quad\text{and}\quad \widehat{\mathcal{V}}^* = (A\mathcal{W}^*)^\perp.$

Proof. We show that, for all $i \in \mathbb{N}$,

from which the claim follows. For i = 0 this follows from

(EV_0)^⊥ = (im E)^⊥ = ker E^* = E^{*−1}(A^*{0}) = Ŵ_1

and

(AW_0)^⊥ = {0}^⊥ = K^m = V̂_0.

Now suppose that (4.1) holds for some i ∈ ℕ. Then

(EV_{i+1})^⊥ = (EA^{−1}(EV_i))^⊥ = E^{*−1}((A^{−1}(EV_i))^⊥) = E^{*−1}(A^*((EV_i)^⊥)) = E^{*−1}(A^*Ŵ_{i+1}) = Ŵ_{i+2},

where the second equality is Lemma 4.1(ii) and the third is Lemma 4.1(i); analogously it follows that (AW_{i+1})^⊥ = V̂_{i+1}. Hence we have inductively shown (4.1).

4.3. Singular chains. In this subsection we introduce the notion of singular chains for matrix pencils. This notion is inspired by the theory of linear relations (see [3]), where such chains are a vital tool for analyzing the structure of linear relations. They also play an important role in former works on the KCF; see, e.g., [16, Chap. XII] and [3]. However, in these works only singular chains of minimal length are considered. We use them here to determine the structure of the intersection V* ∩ W* of the limits of the Wong sequences.

Definition 4.6 (singular chain). Let sE − A ∈ K^{m×n}[s]. For k ∈ ℕ the tuple (x_0, …, x_k) ∈ (K^n)^{k+1} is called a singular chain of the matrix pencil sE − A if and only if

0 = Ax_0,  Ex_0 = Ax_1,  …,  Ex_{k−1} = Ax_k,  Ex_k = 0

or, equivalently, the polynomial vector x(s) = x_0 + x_1 s + ⋯ + x_k s^k ∈ K^n[s] satisfies (sE − A)x(s) = 0.

Note that with every singular chain (x_0, x_1, …, x_k) also the tuple (0, …, 0, x_0, …, x_k, 0, …, 0) is a singular chain of sE − A. Furthermore, every scalar multiple of a singular chain is a singular chain, and the sum of two singular chains of the same length is a singular chain. A singular chain (x_0, …, x_k) is called linearly independent if the vectors x_0, …, x_k are linearly independent.

Lemma 4.7 (linear independence of singular chains). Let sE − A ∈ K^{m×n}[s]. For every nontrivial singular chain (x_0, x_1, …, x_k), k ∈ ℕ, of sE − A there exist l ∈ ℕ, l ≤ k, and a linearly independent singular chain (y_0, y_1, …, y_l) with

span{x_0, x_1, …, x_k} = span{y_0, y_1, …, y_l}.

Proof. This result is an extension of [3, Lem. 3.1]; hence our proof resembles some ideas of the latter. If (x_0, x_1, …, x_k) is already a linearly independent singular chain, then nothing is to show; therefore, assume the existence of a minimal l ∈ {0, 1, …, k − 1} such that

x_{l+1} = Σ_{i=0}^{l} α_i x_i  for some α_0, …, α_l ∈ K.

Consider the chains

α_0 · (0, …, 0, 0, …, 0, x_0, x_1, …, x_l, x_{l+1}, …, x_{k−1}, x_k),
α_1 · (0, …, 0, 0, …, x_0, x_1, …, x_l, x_{l+1}, …, x_{k−1}, x_k, 0),
α_2 · (0, …, 0, x_0, x_1, …, x_l, x_{l+1}, …, x_{k−1}, x_k, 0, 0),
    ⋮
α_{l−1} · (0, x_0, x_1, …, x_{l−2}, x_{l−1}, x_l, x_{l+1}, …, x_k, 0, …, 0),
α_l · (x_0, x_1, …, x_{l−2}, x_{l−1}, x_l, x_{l+1}, …, x_k, 0, …, 0, 0),

and denote their sum by (z_0, z_1, …, z_{k+l}). Note that by construction z_l = Σ_{i=0}^{l} α_i x_i = x_{l+1}. Now consider the singular chain

(v_0, v_1, …, v_{k+l+1}) := (x_0, x_1, …, x_k, 0, …, 0) − (0, z_0, z_1, …, z_{l+k}),

which has the property that v_{l+1} = x_{l+1} − z_l = 0. In particular, (v_0, v_1, …, v_l) and (v_{l+2}, v_{l+3}, …, v_{k+l+1}) are both singular chains. Furthermore (abbreviating α_i I by α_i),

[ v_0       ]   [  I                                   ]
[ v_1       ]   [ −α_l       I                         ]  [ x_0 ]
[ v_2       ]   [ −α_{l−1}  −α_l   I                   ]  [ x_1 ]
[  ⋮        ] = [   ⋮         ⋱     ⋱     ⋱            ]  [  ⋮  ]
[ v_k       ]   [ −α_0      −α_1   ⋯    −α_l    I      ]  [ x_k ]
[ v_{k+1}   ]   [            −α_0  ⋯    −α_{l−1} −α_l  ]
[  ⋮        ]   [                   ⋱          ⋮       ]
[ v_{k+l+1} ]   [                              −α_0    ]

hence span{v_0, v_1, …, v_{k+l+1}} = span{x_0, x_1, …, x_k} = span{v_0, v_1, …, v_k}. In particular,

span{v_{k+1}, v_{k+2}, …, v_{k+l+1}} ⊆ span{v_0, v_1, …, v_k};

hence, by applying Lemma 4.2, there exists c ≠ 0 such that (note that l < k)

(4.2)  im[v_0, v_1, …, v_k] = im([v_0, v_1, …, v_k] + c[v_{k+1}, v_{k+2}, …, v_{k+l+1}, 0, …, 0]).

Therefore, the singular chain

(w_0, w_1, …, w_{k−1}) := c(v_{l+2}, …, v_k, v_{k+1}, v_{k+2}, …, v_{k+l+1}) + (0, …, 0, v_0, v_1, …, v_l)

has the property (using v_{l+1} = 0)

span{w_0, w_1, …, w_{k−1}} = span{v_{l+2}, v_{l+3}, …, v_k} + span{cv_{k+1} + v_0, cv_{k+2} + v_1, …, cv_{k+l+1} + v_l}
 = im([v_0, v_1, …, v_k] + c[v_{k+1}, v_{k+2}, …, v_{k+l+1}, 0, …, 0])
 = im[v_0, v_1, …, v_k] = span{v_0, v_1, …, v_k} = span{x_0, x_1, …, x_k},

where the third equality is (4.2). Altogether, we have obtained a shorter singular chain which spans the same subspace as the original singular chain. Repeating this procedure until one obtains a linearly independent singular chain proves the claim.
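The defining conditions of a singular chain (Definition 4.6) are mechanical to check for a concrete pencil. A minimal Python sketch for the hand-picked 1×2 pencil sE − A = [s, −1], i.e., E = [1, 0], A = [0, 1] (a P_1(s) block); the chain below is chosen purely for illustration:

```python
# Check Definition 4.6 for the 1x2 pencil sE - A = [s, -1],
# i.e. E = [[1, 0]], A = [[0, 1]] (illustrative example).

def matvec(M, v):
    # matrix (list of rows) times vector
    return [sum(m * x for m, x in zip(row, v)) for row in M]

E = [[1, 0]]
A = [[0, 1]]

x0, x1 = [1, 0], [0, 1]  # candidate singular chain (x0, x1)

# Chain conditions: 0 = A x0,  E x0 = A x1,  E x1 = 0.
assert matvec(A, x0) == [0]
assert matvec(E, x0) == matvec(A, x1)
assert matvec(E, x1) == [0]

# Equivalently, x(s) = x0 + s*x1 satisfies (sE - A) x(s) = 0 for every s:
for s in range(5):
    pencil = [[s * e - a for e, a in zip(rE, rA)] for rE, rA in zip(E, A)]
    xs = [u + s * v for u, v in zip(x0, x1)]
    assert matvec(pencil, xs) == [0]
print("singular chain verified")
```

For this particular pencil the span of the chain is all of K^2, which matches V* ∩ W* = K^2 as predicted by Lemma 4.9.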

Corollary 4.8 (basis of the singular chain manifold). Consider a matrix pencil sE − A ∈ K^{m×n}[s] and let the singular chain manifold be given by

𝒦 := { x ∈ K^n | ∃ k, i ∈ ℕ ∃ singular chain (x_0, …, x_{i−1}, x = x_i, x_{i+1}, …, x_k) ∈ (K^n)^{k+1} },

i.e., 𝒦 is the set of all vectors x appearing somewhere in some singular chain of sE − A. Then there exists a linearly independent singular chain (x_0, x_1, …, x_k) of sE − A such that 𝒦 = span{x_0, …, x_k}.

Proof. First note that 𝒦 is indeed a linear subspace of K^n, since the scalar multiple of every chain is also a chain and the sum of two chains (extending the chains appropriately with zero vectors) is again a chain. Let y^0, y^1, …, y^l be any basis of 𝒦. By the definition of 𝒦, for each i = 0, 1, …, l there exists a chain (y^i_0, y^i_1, …, y^i_{k_i}) which contains y^i. Let (v_0, v_1, …, v_k̂), with k̂ = k_0 + k_1 + ⋯ + k_l, be the chain which results from concatenating the chains (y^i_0, y^i_1, …, y^i_{k_i}). Clearly, span{v_0, …, v_k̂} = 𝒦; hence Lemma 4.7 yields the claim.

The following result can, in substance, be found in [3]. However, the proof therein is difficult to follow, involving quotient spaces and additional sequences of subspaces. Our presentation is much more straightforward and simpler.

Lemma 4.9 (singular chain manifold and the Wong sequences). Consider a matrix pencil sE − A ∈ K^{m×n}[s] with the limits V* and W* of the Wong sequences. Let the singular chain manifold 𝒦 be given as in Corollary 4.8; then

V* ∩ W* = 𝒦.

Proof. Step 1. We show 𝒦 ⊆ V* ∩ W*. Let (x_0, …, x_k) be a singular chain. Clearly we have x_0 ∈ A^{−1}(E{0}) = ker A ⊆ V* and x_k ∈ E^{−1}(A{0}) = ker E ⊆ W*; hence, inductively, for i = 0, 1, …, k − 1 and j = k, k − 1, …, 1,

x_{i+1} ∈ A^{−1}(E{x_i}) ⊆ A^{−1}(EV*) = V*  and  x_{j−1} ∈ E^{−1}(A{x_j}) ⊆ E^{−1}(AW*) = W*.

Therefore, x_0, …, x_k ∈ V* ∩ W*.

Step 2. We show V* ∩ W* ⊆ 𝒦. Let x ∈ V* ∩ W*; in particular, x ∈ W* = W_{l*} for some l* ∈ ℕ. Hence there exist x_1 ∈ W_{l*−1}, x_2 ∈ W_{l*−2}, …, x_{l*} ∈ W_0 = {0} such that, for x_0 := x,

Ex_0 = Ax_1,  Ex_1 = Ax_2,  …,  Ex_{l*−1} = Ax_{l*},  Ex_{l*} = 0.

Furthermore, since, by Lemma 4.4, E(V* ∩ W*) = A(V* ∩ W*), there exist x_{−1}, x_{−2}, …, x_{−(l*+1)} ∈ V* ∩ W* such that

Ax_0 = Ex_{−1},  Ax_{−1} = Ex_{−2},  …,  Ax_{−(l*−1)} = Ex_{−l*},  Ax_{−l*} = Ex_{−(l*+1)}.

Let x̄_{−(l*+1)} := −x_{−(l*+1)} ∈ V* ∩ W* ⊆ W*; then (with the same argument as above) there exist x̄_{−l*}, x̄_{−(l*−1)}, …, x̄_{−1} ∈ W* such that

E x̄_{−(l*+1)} = A x̄_{−l*},  E x̄_{−l*} = A x̄_{−(l*−1)},  …,  E x̄_{−2} = A x̄_{−1},  E x̄_{−1} = 0,

and thus, defining x̂_{−i} := x_{−i} + x̄_{−i} for i = 1, …, l* + 1, we have x̂_{−(l*+1)} = 0, and we get

0 = E x̂_{−(l*+1)} = A x̂_{−l*},  E x̂_{−l*} = A x̂_{−(l*−1)},  …,  E x̂_{−2} = A x̂_{−1},  E x̂_{−1} = E x_{−1} = A x_0.

This shows that (x̂_{−l*}, x̂_{−(l*−1)}, …, x̂_{−1}, x_0, x_1, …, x_{l*}) is a singular chain and x = x_0 ∈ 𝒦.

The last result in this section relates singular chains to the column rank of the matrix pencil sE − A.

Lemma 4.10 (column rank deficit implies singular chains). Let sE − A ∈ K^{m×n}[s]. If rank_ℂ(λE − A) < n for all λ ∈ ℂ ∪ {∞}, then there exists a nontrivial singular chain of sE − A.

Proof. It suffices to observe that Definition 4.6 coincides (modulo a reversed indexing) with the notion of singular chains in [3] applied to the linear relation E^{−1}A := { (x, y) ∈ K^n × K^n | Ax = Ey }. Then the claim follows for K = ℂ from [3, Thm. 4.4]. The main idea of the proof there is to choose any m + 1 different eigenvalues and corresponding eigenvectors. This is also possible for K = ℝ and K = ℚ; hence the proof in [3] is also valid for K = ℝ and K = ℚ.

4.4. Polynomial matrices. In the following we will say that P(s) ∈ K^{m×n}[s] can be extended to a unimodular matrix if and only if, in the case m < n, there exists Q(s) ∈ K^{(n−m)×n}[s] such that [P(s)^⊤, Q(s)^⊤]^⊤ is unimodular; in the case m > n, there exists Q(s) ∈ K^{m×(m−n)}[s] such that [P(s), Q(s)] is unimodular; and, in the case m = n, P(s) itself is unimodular.

Lemma 4.11 (unimodular extension). A matrix P(s) ∈ K^{m×n}[s] can be extended to a unimodular matrix if and only if rank_ℂ P(λ) = min{m, n} for all λ ∈ ℂ.

Proof. Necessity is clear; hence it remains to show that under the full rank assumption a unimodular extension is possible. Note that K[s] is a principal ideal domain; hence we can consider the Smith normal form [32] of P(s) given by

P(s) = U(s) [ Σ_r(s)  0 ] V(s),
            [   0     0 ]

where U(s), V(s) are unimodular matrices and Σ_r(s) = diag(σ_1(s), …, σ_r(s)), r ∈ ℕ, with nonzero diagonal entries. Note that rank_ℂ P(λ) = rank_ℂ Σ_r(λ) for all λ ∈ ℂ; hence the full rank condition implies r = min{m, n} and that σ_1(s), …, σ_r(s) are constant (nonzero) polynomials. For m = n this already shows the claim. For m > n, i.e.,

P(s) = U(s) [ Σ_n(s) ] V(s),
            [   0    ]

the sought unimodular extension is given by

[P(s), Q(s)] = U(s) [ Σ_n(s)  0 ] [ V(s)  0 ]
                    [   0     I ] [  0    I ],

and, for m < n,

[ P(s) ]   [ U(s)  0 ] [ Σ_m(s)  0 ]
[ Q(s) ] = [  0    I ] [   0     I ] V(s).

Proof of Lemma 3.1. Let Q(s) be any unimodular extension of sE − A according to Lemma 4.11. If m < n, choose

[M(s), K(s)] = [ sE − A ]^{−1}
               [  Q(s)  ],

and if m > n, let

[ M(s) ]
[ K(s) ] := [sE − A, Q(s)]^{−1}.
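The construction in the proof of Lemma 3.1 can be carried out by hand for small pencils. A Python sanity check with the hand-picked full-row-rank pencil sE − A = [s, −1], M(s) = (0, −1)^⊤, and K(s) = (1, s)^⊤ (all choices illustrative); since every identity involved is polynomial of degree at most two, checking it at more than two sample points verifies it identically:

```python
# Check of Lemma 3.1 / Lemma 4.11 for the full-row-rank pencil
# sE - A = [s, -1] with M(s) = (0, -1)^T and K(s) = (1, s)^T
# (hand-picked illustrative choices).  Each identity below is a
# polynomial identity of degree <= 2, so verifying it at more than
# two sample points verifies it identically.
for s in [0, 1, 2, 3]:
    row = [s, -1]      # the pencil sE - A evaluated at s
    Ms = [0, -1]       # column M(s), here constant in s
    Ks = [1, s]        # column K(s)
    assert row[0] * Ms[0] + row[1] * Ms[1] == 1   # (sE - A) M(s) = I
    assert row[0] * Ks[0] + row[1] * Ks[1] == 0   # (sE - A) K(s) = 0
    assert Ms[0] * Ks[1] - Ks[0] * Ms[1] == 1     # det [M(s), K(s)] = 1
print("unimodular extension verified")
```

Hence [M(s), K(s)] is unimodular and (sE − A)[M(s), K(s)] = [I, 0], which is exactly the factorization required in Lemma 3.1.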

4.5. KCF for full rank pencils. In order to derive the KCF as a corollary of the QKF, and also for the proof of the solvability of (2.4), we need the following lemma, which shows how to obtain the KCF in the special case of full rank pencils.

Lemma 4.12 (KCF of full rank rectangular pencil, m < n case). Let sE − A ∈ K^{m×n}[s] be such that m < n, and let l := n − m. Then rank_ℂ(λE − A) = m for all λ ∈ ℂ ∪ {∞} if and only if there exist numbers ε_1, …, ε_l ∈ ℕ and matrices S ∈ Gl_m(K), T ∈ Gl_n(K) such that

S(sE − A)T = diag(P_{ε_1}(s), …, P_{ε_l}(s)),

where P_ε(s), ε ∈ ℕ, is as in Corollary 2.8.

Proof. Sufficiency is clear; hence it remains to show necessity. If m = 0 and n > 0, then nothing is to show since sE − A is already in the desired diagonal form with ε_1 = ε_2 = ⋯ = ε_l = 0. Hence assume m > 0 in the following. The main idea is to reduce the problem to a smaller pencil sE′ − A′ ∈ K^{m′×n′}[s] with rank_ℂ(λE′ − A′) = m′ < n′ < n for all λ ∈ ℂ ∪ {∞}. Then we can inductively use the transformation of the smaller pencil into the desired block diagonal structure to obtain the block diagonal structure for the original pencil. By assumption E does not have full column rank; hence there exists a column operation T_1 ∈ Gl_n(K) such that the first column of ET_1 is zero. There are two cases now: either the first column of AT_1 is zero or it is not. We consider the two cases separately.

Case 1. The first column of AT_1 is zero. Let ET_1 =: [0, E′] and AT_1 =: [0, A′]. Then, clearly, rank_ℂ(λE − A) = rank_ℂ(λE′ − A′) = m′ := m for all λ ∈ ℂ ∪ {∞}. Furthermore, with n′ := n − 1, it follows that n′ ≥ m′. Seeking a contradiction, assume n′ = m′. Then the full rank matrix E′ is square and hence invertible. Let λ ∈ ℂ be any eigenvalue of the matrix E′^{−1}A′. Thus

0 = det(λI − E′^{−1}A′) = det(E′)^{−1} det(λE′ − A′),

and hence rank_ℂ(λE′ − A′) < m′, a contradiction. Altogether, this shows that sE′ − A′ ∈ K^{m′×n′}[s] is a smaller pencil which satisfies the assumptions of the lemma; hence we can inductively use the result of the lemma for sE′ − A′ with transformation matrices S′ and T′. Let

S := S′  and  T := T_1 [ 1  0  ]
                       [ 0  T′ ];

then S(sE − A)T has the desired block diagonal structure, which coincides with the block structure of sE′ − A′ apart from one additional P_0 block.

Case 2. The first column of AT_1 is not zero. Then there exists a row operation S_1 ∈ Gl_m(K) such that

S_1(AT_1) = [ 1  * ]
            [ 0  * ].

Since E has full row rank, the first row of S_1ET_1 cannot be the zero row; hence there exists a second column operation T_2 ∈ Gl_n(K) which does not change the first

column and is such that

(S_1ET_1)T_2 = [ 0  1  0 ⋯ 0 ]
               [ 0      *     ].

Now let T_3 ∈ Gl_n(K) be a column operation which adds suitable multiples of the first column to the remaining columns such that

(S_1AT_1T_2)T_3 = [ 1  0 ⋯ 0 ]
                  [ 0     *   ].

Since the first column of S_1ET_1T_2 is zero, the column operation T_3 has no effect on the matrix S_1ET_1T_2. Let

S_1ET_1T_2T_3 =: [ 0  1  0 ⋯ 0 ]        S_1AT_1T_2T_3 =: [ 1  0 ⋯ 0 ]
                 [ 0      E′    ]  and                   [ 0     A′   ],

with sE′ − A′ ∈ K^{m′×n′}[s] and m′ := m − 1, n′ := n − 1; in particular, m′ < n′. Seeking a contradiction, assume rank_ℂ(λE′ − A′) < m′ for some λ ∈ ℂ ∪ {∞}. If λ = ∞, then this implies that E′ does not have full row rank, which would also imply that E does not have full row rank, which is not the case. Hence we may choose a vector v′ ∈ ℂ^{m′} such that v′^*(λE′ − A′) = 0. Let v^* := [0, v′^*]S_1. Then a simple calculation reveals

v^*(λE − A) = [0, v′^*(λE′ − A′)](T_1T_2T_3)^{−1} = 0,

which contradicts the full complex rank of λE − A. As in the first case we can now inductively use the result of the lemma for the smaller matrix pencil sE′ − A′ to obtain transformations S′ and T′ which put sE′ − A′ into the desired block diagonal form. With

S := [ 1  0  ] S_1  and  T := T_1T_2T_3 [ 1  0  ]
     [ 0  S′ ]                          [ 0  T′ ]

we obtain the same block diagonal structure for sE − A as for sE′ − A′, apart from the first block, which is P_{ε_1+1}(s) instead of P_{ε_1}(s).

The following corollary follows directly from Lemma 4.12 by transposing the respective matrices.

Corollary 4.13 (KCF of full rank rectangular pencils, m > n case). Let sE − A ∈ K^{m×n}[s] be such that m > n, and let l := m − n. Then rank_ℂ(λE − A) = n for all λ ∈ ℂ ∪ {∞} if and only if there exist numbers η_1, …, η_l ∈ ℕ and matrices S ∈ Gl_m(K), T ∈ Gl_n(K) such that

S(sE − A)T = diag(Q_{η_1}(s), …, Q_{η_l}(s)),

where Q_η(s), η ∈ ℕ, is as in Corollary 2.8.

4.6. Solvability of linear matrix equations. In generalization of the method presented in [11, sect. 6] we reduce the problem of solvability of (2.4) to the problem of solving a generalized Sylvester equation

(4.3)  AXB − CXD = E.
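For concreteness, the reduction to (4.3) can be sanity-checked numerically via the construction of Lemma 4.14 below; all matrices here are hand-picked illustrative values (with λ = 0, and with the sign conventions written out explicitly):

```python
# Sanity check of the reduction via Lemma 4.14 (lambda = 0), with
# hand-picked data: A, C are 1x1, B, D are 2x1 columns, and M0 is
# 1x2 with M0 (B - 0*D) = I_1.  All values are illustrative.
A, C = 2.0, 3.0
B, D = [1.0, 0.0], [0.0, 1.0]
M0 = [1.0, 0.0]
E, F = 1.0, 5.0

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
assert dot(M0, B) == 1.0          # M0 (B - lambda*D) = I for lambda = 0

# X solves  A X B - C X D = -E - (0*E - F) M0 D  (= -1, since M0 D = 0):
X = [1.0, 1.0]                    # 2*1 - 3*1 = -1
assert A * dot(X, B) - C * dot(X, D) == -E + F * dot(M0, D)

# Construction of Lemma 4.14:  Y = X(B - 0*D),  Z = -(C - 0*A)X - (F - 0*E)M0
Y = dot(X, B)                                 # 1x1
Z = [-C * x - F * m for x, m in zip(X, M0)]   # 1x2, here (-8, -3)

# (Y, Z) solves the pair (4.4):  0 = E + A Y + Z D,  0 = F + C Y + Z B.
assert E + A * Y + dot(Z, D) == 0.0
assert F + C * Y + dot(Z, B) == 0.0
print("Lemma 4.14 construction verified")
```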

To this end the following lemma is crucial.

Lemma 4.14. Let A, C ∈ K^{m×n}, B, D ∈ K^{p×q}, E, F ∈ K^{m×q}, and consider the system of matrix equations with unknowns Y ∈ K^{n×q} and Z ∈ K^{m×p},

(4.4)  0 = E + AY + ZD,
       0 = F + CY + ZB.

Suppose there exist λ ∈ K and M_λ ∈ K^{q×p} such that M_λ(B − λD) = I; in particular, p ≥ q. Then, for any solution X ∈ K^{n×p} of the matrix equation

AXB − CXD = −E − (λE − F)M_λD,

the matrices

Y = X(B − λD),   Z = −(C − λA)X − (F − λE)M_λ

solve (4.4).

Proof. We calculate

E + AY + ZD = E + AX(B − λD) − (C − λA)XD − (F − λE)M_λD
            = E + AXB − λAXD + λAXD − CXD − (F − λE)M_λD
            = E + (AXB − CXD) − (F − λE)M_λD
            = E − E − (λE − F)M_λD − (F − λE)M_λD = 0,

F + CY + ZB = F + CX(B − λD) − (C − λA)XB − (F − λE)M_λB
            = F + CXB − λCXD − CXB + λAXB − (F − λE)M_λB
            = F + λ(AXB − CXD) − (F − λE)M_λB
            = F + λ(−E − (λE − F)M_λD) − (F − λE)M_λB
            = (F − λE) − (F − λE)M_λ(B − λD)
            = (F − λE)(I_q − M_λ(B − λD)) = 0.

The following result is well known, and since it considers only a regular matrix pencil we do not repeat its proof here. However, we need to introduce the notion of the spectrum of a regular pencil sE − A ∈ K^{n×n}[s]: this is the set of all λ ∈ ℂ ∪ {∞} such that rank_ℂ(λE − A) < n.

Lemma 4.15 (solvability of the generalized Sylvester equation: regular case [11, 17]). Let A, C ∈ K^{n×n}, B, D ∈ K^{p×p}, E ∈ K^{n×p}, and consider the generalized Sylvester equation (4.3). Assume that sB − D and sC − A are both regular with distinct spectra. Then (4.3) is solvable.

Finally, we can state and prove the result about the solvability of the generalized Sylvester equation which is needed to prove Theorem 2.6.

Lemma 4.16 (solvability of the generalized Sylvester equation with special properties). Let A, C ∈ K^{m×n}, m ≤ n, B, D ∈ K^{p×q}, p > q, E ∈ K^{m×q}, and consider the generalized Sylvester equation (4.3). Assume that λB − D has full rank for all λ ∈ ℂ ∪ {∞}, and that either λC − A has full rank for all λ ∈ ℂ ∪ {∞} or sC − A is regular. Then (4.3) is solvable.

Proof. The proof follows that of [17, Thm. 2]. By Theorem 2.2, Lemma 4.12, and Corollary 4.13 we already know that we can put the pencils sC − A and sB − D into KCF. Therefore, choose invertible S_1, T_1, S_2, T_2 such that

sC′ − A′ = S_1(sC − A)T_1  and  sB′ − D′ = S_2(sB − D)T_2

are in KCF. Hence, with X′ = T_1^{−1}XS_2^{−1} and E′ = S_1ET_2, equation (4.3) is equivalent to

A′X′B′ − C′X′D′ = E′.

Let sC′ − A′ = diag(sC′_1 − A′_1, …, sC′_{n_1} − A′_{n_1}), n_1 ∈ ℕ, and sB′ − D′ = diag(sB′_1 − D′_1, …, sB′_{n_2} − D′_{n_2}), n_2 ∈ ℕ, corresponding to the block structure of the KCF as in Corollary 2.8; then (4.3) is furthermore equivalent to the set of equations

A′_i X′_ij B′_j − C′_i X′_ij D′_j = E′_ij,   i = 1, …, n_1,  j = 1, …, n_2,

where X′_ij and E′_ij are the corresponding subblocks of X′ and E′, respectively. Note that (using the notation of Corollary 2.8) by assumption each pencil sC′_i − A′_i is a P_ε(s), J_ρ(s), or N_σ(s) block, and all pencils sB′_j − D′_j are Q_η(s) blocks. If sC′_i − A′_i is a P_ε(s) block, we consider a reduced equation by deleting the first column of sC′_i − A′_i, which results in a regular J_ε(s) block with spectrum {0}, and by deleting the last row of sB′_j − D′_j, which results in a regular N_η(s) block with spectrum {∞}. Hence we can use Lemma 4.15 to obtain solvability of the reduced problem. Filling the reduced solution X′_ij up with a zero row and a zero column results in a solution for the original problem. If sC′_i − A′_i is a J_ρ(s) block, we can apply the same trick (this time we delete only the last row of sB′_j − D′_j) to arrive at the same conclusion, as sC′_i − A′_i has only finite spectrum. Finally, if sC′_i − A′_i is an N_σ(s) block with spectrum {∞}, we have to reduce sB′_j − D′_j by deleting the first row, which results in a J_η(s) block with spectrum {0}. This concludes the proof.

4.7. Solutions of DAEs. In order to prove Theorem 3.2 we need the following lemmas, which characterize the solutions of DAEs in the case of full rank pencils. As in section 3 we restrict ourselves to the case K = ℝ.

Lemma 4.17 (full row rank pencils). Let sE − A ∈ ℝ^{m×n}[s] be such that m < n and rank_ℂ(λE − A) = m for all λ ∈ ℂ ∪ {∞}. According to Lemma 3.1 choose M(s) ∈ ℝ^{n×m}[s] and K(s) ∈ ℝ^{n×(n−m)}[s] such that (sE − A)[M(s), K(s)] = [I, 0] and [M(s), K(s)] is unimodular. Consider the DAE Eẋ = Ax + f and the associated solution space S = C^∞ or S = D_pwC^∞ (the space of piecewise-smooth distributions). Then, for all inhomogeneities f ∈ S^m, x ∈ S^n is a solution if and only if there exists u ∈ S^{n−m} such that

x = M(d/dt)(f) + K(d/dt)(u).

Furthermore, all initial value problems have a solution, i.e., for all x_0 ∈ ℝ^n, t_0 ∈ ℝ, and all f ∈ S^m there exists a solution x ∈ S^n such that x(t_0) = x_0.

Proof. Step 1. We show that x = M(d/dt)(f) + K(d/dt)(u) solves Eẋ = Ax + f for any u ∈ S^{n−m}. This is clear since

(E d/dt − A)(M(d/dt)(f) + K(d/dt)(u)) = f + 0 = f.

Step 2. We show that any solution x of the DAE can be represented as above. To this end let

u := [0, I][M(d/dt), K(d/dt)]^{−1}x ∈ S^{n−m},

which is well defined due to the unimodularity of [M(s), K(s)]. Then

f = (E d/dt − A)x = (E d/dt − A)[M(d/dt), K(d/dt)] [ [I, 0][M(d/dt), K(d/dt)]^{−1}x ]
                                                   [ [0, I][M(d/dt), K(d/dt)]^{−1}x ]
  = [I, 0][M(d/dt), K(d/dt)]^{−1}x,


More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

The Jordan Canonical Form

The Jordan Canonical Form The Jordan Canonical Form The Jordan canonical form describes the structure of an arbitrary linear transformation on a finite-dimensional vector space over an algebraically closed field. Here we develop

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

SYMMETRIC SUBGROUP ACTIONS ON ISOTROPIC GRASSMANNIANS

SYMMETRIC SUBGROUP ACTIONS ON ISOTROPIC GRASSMANNIANS 1 SYMMETRIC SUBGROUP ACTIONS ON ISOTROPIC GRASSMANNIANS HUAJUN HUANG AND HONGYU HE Abstract. Let G be the group preserving a nondegenerate sesquilinear form B on a vector space V, and H a symmetric subgroup

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications MAX PLANCK INSTITUTE Elgersburg Workshop Elgersburg February 11-14, 2013 The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications Timo Reis 1 Matthias Voigt 2 1 Department

More information

Math 554 Qualifying Exam. You may use any theorems from the textbook. Any other claims must be proved in details.

Math 554 Qualifying Exam. You may use any theorems from the textbook. Any other claims must be proved in details. Math 554 Qualifying Exam January, 2019 You may use any theorems from the textbook. Any other claims must be proved in details. 1. Let F be a field and m and n be positive integers. Prove the following.

More information

LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS

LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts

More information

MATH JORDAN FORM

MATH JORDAN FORM MATH 53 JORDAN FORM Let A,, A k be square matrices of size n,, n k, respectively with entries in a field F We define the matrix A A k of size n = n + + n k as the block matrix A 0 0 0 0 A 0 0 0 0 A k It

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

3. Associativity: (a + b)+c = a +(b + c) and(ab)c = a(bc) for all a, b, c F.

3. Associativity: (a + b)+c = a +(b + c) and(ab)c = a(bc) for all a, b, c F. Appendix A Linear Algebra A1 Fields A field F is a set of elements, together with two operations written as addition (+) and multiplication ( ), 1 satisfying the following properties [22]: 1 Closure: a

More information

On the eigenvalues of specially low-rank perturbed matrices

On the eigenvalues of specially low-rank perturbed matrices On the eigenvalues of specially low-rank perturbed matrices Yunkai Zhou April 12, 2011 Abstract We study the eigenvalues of a matrix A perturbed by a few special low-rank matrices. The perturbation is

More information

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Maurcio C. de Oliveira J. William Helton Abstract We study the solvability of generalized linear matrix equations of the Lyapunov type

More information

1. Introduction. Throughout this work we consider n n matrix polynomials with degree k of the form

1. Introduction. Throughout this work we consider n n matrix polynomials with degree k of the form LINEARIZATIONS OF SINGULAR MATRIX POLYNOMIALS AND THE RECOVERY OF MINIMAL INDICES FERNANDO DE TERÁN, FROILÁN M. DOPICO, AND D. STEVEN MACKEY Abstract. A standard way of dealing with a regular matrix polynomial

More information

Linear equations in linear algebra

Linear equations in linear algebra Linear equations in linear algebra Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra Pearson Collections Samy T. Linear

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Algebra Exam Syllabus

Algebra Exam Syllabus Algebra Exam Syllabus The Algebra comprehensive exam covers four broad areas of algebra: (1) Groups; (2) Rings; (3) Modules; and (4) Linear Algebra. These topics are all covered in the first semester graduate

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

Reductions of Operator Pencils

Reductions of Operator Pencils Reductions of Operator Pencils Olivier Verdier Department of Mathematical Sciences, NTNU, 7491 Trondheim, Norway arxiv:1208.3154v2 [math.na] 23 Feb 2014 2018-10-30 We study problems associated with an

More information

Product distance matrix of a tree with matrix weights

Product distance matrix of a tree with matrix weights Product distance matrix of a tree with matrix weights R B Bapat Stat-Math Unit Indian Statistical Institute, Delhi 7-SJSS Marg, New Delhi 110 016, India email: rbb@isidacin Sivaramakrishnan Sivasubramanian

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

Mathematics Department Stanford University Math 61CM/DM Inner products

Mathematics Department Stanford University Math 61CM/DM Inner products Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

3.2 Real and complex symmetric bilinear forms

3.2 Real and complex symmetric bilinear forms Then, the adjoint of f is the morphism 3 f + : Q 3 Q 3 ; x 5 0 x. 0 2 3 As a verification, you can check that 3 0 B 5 0 0 = B 3 0 0. 0 0 2 3 3 2 5 0 3.2 Real and complex symmetric bilinear forms Throughout

More information

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and

More information

ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS

ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS J. WARNER SUMMARY OF A PAPER BY J. CARLSON, E. FRIEDLANDER, AND J. PEVTSOVA, AND FURTHER OBSERVATIONS 1. The Nullcone and Restricted Nullcone We will need

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Corrected Version, 7th April 013 Comments to the author at keithmatt@gmail.com Chapter 1 LINEAR EQUATIONS 1.1

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

(VI.D) Generalized Eigenspaces

(VI.D) Generalized Eigenspaces (VI.D) Generalized Eigenspaces Let T : C n C n be a f ixed linear transformation. For this section and the next, all vector spaces are assumed to be over C ; in particular, we will often write V for C

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Math 25a Practice Final #1 Solutions

Math 25a Practice Final #1 Solutions Math 25a Practice Final #1 Solutions Problem 1. Suppose U and W are subspaces of V such that V = U W. Suppose also that u 1,..., u m is a basis of U and w 1,..., w n is a basis of W. Prove that is a basis

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

5 Quiver Representations

5 Quiver Representations 5 Quiver Representations 5. Problems Problem 5.. Field embeddings. Recall that k(y,..., y m ) denotes the field of rational functions of y,..., y m over a field k. Let f : k[x,..., x n ] k(y,..., y m )

More information