Miniversal deformations of pairs of skew-symmetric matrices under congruence. Andrii Dmytryshyn


Miniversal deformations of pairs of skew-symmetric matrices under congruence
by Andrii Dmytryshyn
UMINF-16/16
Umeå University, Department of Computing Science, SE Umeå, Sweden

Miniversal deformations of pairs of skew-symmetric matrices under congruence

Andrii Dmytryshyn
Department of Computing Science, Umeå University, SE Umeå, Sweden

Abstract

Miniversal deformations for pairs of skew-symmetric matrices under congruence are constructed. To be precise, for each such pair (A, B) we provide a normal form with a minimal number of independent parameters to which all pairs of skew-symmetric matrices (Ã, B̃) close to (A, B) can be reduced by a congruence transformation that depends smoothly on the entries of the matrices in the pair (Ã, B̃). An upper bound on the distance from such a miniversal deformation to (A, B) is derived too. We also present an example of using miniversal deformations for analyzing changes in the canonical structure information (i.e., eigenvalues and minimal indices) of skew-symmetric matrix pairs under perturbations.

Keywords: Skew-symmetric matrix pair, Skew-symmetric matrix pencil, Congruence canonical form, Congruence, Perturbation, Versal deformation
2000 MSC: 15A21, 15A63

1. Introduction

Canonical forms of matrices and matrix pencils, e.g., the Jordan and Kronecker canonical forms, are well known and studied for various purposes, but the reductions to these forms are unstable operations: both the corresponding canonical forms and the reduction transformations depend discontinuously on the entries of the original matrix or matrix pencil. Therefore, V.I. Arnold introduced a normal form, with the minimal number of independent parameters, to which an arbitrary family of matrices Ã close to a given matrix A can be reduced by similarity transformations smoothly depending on the entries of Ã. He called such a normal form a miniversal deformation of A. By now the notion of miniversal deformations has been extended to matrices with respect to congruence [14] and *congruence [15], matrix pencils with respect to strict equivalence [19, 23] and congruence [11], etc. (a more detailed list is given in the introduction of [15]).
Preprint submitted to Elsevier June 10, 2016

Miniversal deformations can help us to construct stratifications, i.e., closure hierarchies [13, 20, 21], of orbits and bundles. These stratifications are the graphs that show which canonical forms the matrices (or matrix pencils) may have in an arbitrarily small neighbourhood of a given matrix (or matrix pencil). In particular, the stratifications show how the eigenvalues may coalesce or split apart, appear or disappear. Both the stratifications and miniversal deformations may be useful when the matrices arise as a result of measurements and their entries are known with errors; see [27, 30] for some applications in control and stability theory. The questions related to eigenvalues and other canonical information for the pencils A − sB, where A = ±A^T and B = ±B^T, or A = ±A* and B = ±B*, have attracted attention over time and, especially, recently; e.g., see the following papers on canonical forms [35, 37], codimension computations [8, 9, 17, 18], low-rank perturbations [4], miniversal deformations [11, 14, 23], partial [13, 22] and general [16] stratification results, and staircase forms [5, 7]. Such pencils also appear as the structure-preserving linearizations of the corresponding matrix polynomials [32, 33]. In particular, the papers [4, 16, 17, 35, 37] deal with skew-symmetric matrix pencils, i.e., A − sB, where A = −A^T and B = −B^T, and [33] deals with skew-symmetric matrix polynomials. Skew-symmetric matrix pencils appear in multisymplectic partial differential equations [6], in systems with bi-Hamiltonian structure [34], as well as in the design of a passive velocity field controller [28]. Recall that an n × n skew-symmetric matrix pencil A − sB is called congruent to C − sD if and only if there is a nonsingular matrix S such that S^T A S = C and S^T B S = D. The set of matrix pencils congruent to a skew-symmetric matrix pencil A − sB is called the congruence orbit of A − sB.
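The congruence relation just recalled can be illustrated numerically. The sketch below (Python/NumPy; the random pencil and the transformation S are made-up data, not taken from the paper) checks that (S^T A S, S^T B S) stays skew-symmetric and that congruence preserves the generalized eigenvalues of the pencil:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random skew-symmetric pencil A - sB: A = -A^T, B = -B^T.
M = rng.standard_normal((n, n)); A = M - M.T
M = rng.standard_normal((n, n)); B = M - M.T

# A random nonsingular congruence transformation S.
S = rng.standard_normal((n, n)) + n * np.eye(n)

C, D = S.T @ A @ S, S.T @ B @ S      # the congruent pencil C - sD

# Congruence preserves skew-symmetry ...
assert np.allclose(C, -C.T) and np.allclose(D, -D.T)

# ... and the eigenvalues of the pencil, since det(C - sD) = det(S)^2 det(A - sB).
ev1 = np.sort_complex(np.linalg.eigvals(np.linalg.solve(B, A)))
ev2 = np.sort_complex(np.linalg.eigvals(np.linalg.solve(D, C)))
assert np.allclose(ev1, ev2, atol=1e-6)
print("congruence preserves skew-symmetry and the spectrum of the pencil")
```
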
In this paper, we derive the miniversal deformations of skew-symmetric matrix pencils under congruence and bound the distance from these deformations to the unperturbed matrix pencils in terms of the norm of the perturbations. The number of independent parameters in the miniversal deformations is equal to the codimension of the congruence orbits of skew-symmetric matrix pencils (obtained independently in [17]). The Matlab functions for computing these codimensions were developed in [12] and added to the Matrix Canonical Structure (MCS) Toolbox [25]. Example 2.1 shows how the miniversal deformations from Theorem 2.1 can be used for the investigation of the possible changes of the canonical structure information. The rest of the paper is organized as follows. In Section 2, we present the main theorems, i.e., we construct miniversal deformations of skew-symmetric matrix pencils and prove an upper bound on the distance between a skew-symmetric matrix pencil and its miniversal deformation. In Section 3, we describe the method of constructing deformations (Section 3.1) and derive

the miniversal deformations step by step: for the diagonal blocks (Section 3.2), for the off-diagonal blocks that correspond to the canonical summands of the same type (Section 3.3), and for the off-diagonal blocks that correspond to the canonical summands of different types (Section 3.4). In this paper all matrices are considered over the field of complex numbers. Except in Example 2.1, we use the matrix pair notation (A, B) instead of the pencil notation A − sB. We also use one calligraphic letter, e.g., A or D, to refer to a matrix pair.

2. The main results

In this section, we present the miniversal deformations of pairs of skew-symmetric matrices under congruence and obtain an upper bound on the distance between a skew-symmetric matrix pair and its miniversal deformations. In Section 3, we will derive these miniversal deformations. First we recall the canonical form of pairs of skew-symmetric matrices under congruence given in [37]. For each k = 1, 2, ..., define the k × k matrices J_k(λ), λ ∈ C, the upper-triangular Jordan block with λ on the main diagonal and 1 on the superdiagonal, and I_k, the identity, and for each k = 0, 1, ..., the k × (k + 1) matrices F_k and G_k with ones on the main diagonal and on the superdiagonal, respectively. All non-specified entries of the matrices J_k(λ), I_k, F_k, and G_k are zero.

Lemma 2.1 ([36, 37]). Every pair of skew-symmetric complex matrices is congruent to a direct sum, determined uniquely up to permutation of summands, of pairs of the form

    H_n(λ) = ([0, I_n; −I_n, 0], [0, J_n(λ); −J_n(λ)^T, 0]),  λ ∈ C,   (1)

    K_n = ([0, J_n(0); −J_n(0)^T, 0], [0, I_n; −I_n, 0]),   (2)

    L_n = ([0, F_n; −F_n^T, 0], [0, G_n; −G_n^T, 0]).   (3)
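As a concrete companion to Lemma 2.1, the canonical blocks can be assembled in a few lines (a sketch; the conventions for F_k and G_k, ones on the main diagonal and on the superdiagonal, respectively, are as stated above):

```python
import numpy as np

def J(k, lam):
    """k x k upper Jordan block: lam on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

def F(k):
    """k x (k+1) matrix with ones on the main diagonal."""
    return np.eye(k, k + 1)

def G(k):
    """k x (k+1) matrix with ones on the superdiagonal."""
    return np.eye(k, k + 1, 1)

def pair(X, Y):
    """Assemble the skew-symmetric pair ([0 X; -X^T 0], [0 Y; -Y^T 0])."""
    def skew(M):
        m, n = M.shape
        return np.block([[np.zeros((m, m)), M], [-M.T, np.zeros((n, n))]])
    return skew(X), skew(Y)

def H(n, lam):   # pair (1)
    return pair(np.eye(n), J(n, lam))

def K(n):        # pair (2)
    return pair(J(n, 0), np.eye(n))

def L(n):        # pair (3), size (2n+1) x (2n+1)
    return pair(F(n), G(n))

# Every block is a pair of skew-symmetric matrices of the expected size.
for (A, B), size in [(H(3, 2.0), 6), (K(2), 4), (L(1), 3)]:
    assert A.shape == (size, size)
    assert np.allclose(A, -A.T) and np.allclose(B, -B.T)
print("H_n, K_n, L_n are skew-symmetric pairs")
```
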

Thus, each pair of skew-symmetric matrices is congruent to a direct sum

    (A, B)_can = ⊕_{i=1}^{a} H_{h_i}(λ_i) ⊕ ⊕_{j=1}^{b} K_{k_j} ⊕ ⊕_{r=1}^{c} L_{l_r},   (4)

consisting of direct summands of three types of pairs.

2.1. Miniversal deformations

The concept of a miniversal deformation of a matrix with respect to similarity was given by V. I. Arnold [1] (see also [3, 30B]). This concept can straightforwardly be extended to pairs of skew-symmetric matrices with respect to congruence. A deformation of a pair of skew-symmetric n̂ × n̂ matrices (A, B) is a holomorphic mapping A(δ), where δ = (δ_1, ..., δ_k), from a neighborhood Ω ⊂ C^k of 0 = (0, ..., 0) to the space of pairs of skew-symmetric n̂ × n̂ matrices such that A(0) = (A, B). Note that in this paper we consider only skew-symmetric deformations, i.e., the skew-symmetric structure of the matrix pairs is preserved. Therefore we write only "deformation" and not "skew-symmetric deformation", without risk of confusion.

Definition 2.1. A deformation A(δ_1, ..., δ_k) of a pair of skew-symmetric matrices (A, B) is called versal if for every deformation B(σ_1, ..., σ_l) of (A, B) we have

    B(σ_1, ..., σ_l) = I(σ_1, ..., σ_l)^T A(φ_1(σ), ..., φ_k(σ)) I(σ_1, ..., σ_l),

where I(σ_1, ..., σ_l) is a deformation of the identity matrix, and all φ_i(σ) are power series convergent in a neighborhood of 0 such that φ_i(0) = 0. A versal deformation A(δ_1, ..., δ_k) of (A, B) is called miniversal if there is no versal deformation having fewer than k parameters.

By a (0, ∗) matrix we mean a matrix whose entries are 0 and ∗, and we consider pairs D of (0, ∗) matrices. We say that a pair of skew-symmetric matrices is of the form D if it can be obtained from D by replacing the stars with complex numbers, respecting the skew-symmetry.
Denote by D(C) the space of all pairs of skew-symmetric matrices of the form D, and by D(ε) the pair of parametric skew-symmetric matrices obtained from D by replacing the (i, j)-th and (j, i)-th stars with the parameters ε_ij and ε_ji, respectively, in the first matrix and the (i′, j′)-th and (j′, i′)-th stars with the parameters ε′_i′j′ and ε′_j′i′, respectively, in the second matrix. In other words,

    D(ε) = ( Σ_{(i,j) ∈ Ind_1(D)} ε_ij E_ij,  Σ_{(i′,j′) ∈ Ind_2(D)} ε′_i′j′ E_i′j′ ),   (5)
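Equation (5) can be mirrored in code. In the sketch below (the star positions are hypothetical, chosen only for illustration), E_ij denotes the skew-symmetric matrix with 1 in position (i, j) and −1 in position (j, i), as used throughout the paper:

```python
import numpy as np

def E(i, j, n):
    """Skew-symmetric basis matrix: (i, j) entry 1, (j, i) entry -1 (0-based)."""
    M = np.zeros((n, n))
    M[i, j], M[j, i] = 1.0, -1.0
    return M

def D_of_eps(ind1, ind2, eps1, eps2, n):
    """The pair D(eps) of (5): sums of eps * E_ij over the star positions
    Ind_1(D) and Ind_2(D) of the two matrices of the pattern D."""
    D1 = sum((e * E(i, j, n) for (i, j), e in zip(ind1, eps1)), np.zeros((n, n)))
    D2 = sum((e * E(i, j, n) for (i, j), e in zip(ind2, eps2)), np.zeros((n, n)))
    return D1, D2

# Example: a 3 x 3 pattern with one star at (0, 1) in the first matrix
# and one star at (1, 2) in the second (hypothetical positions).
D1, D2 = D_of_eps([(0, 1)], [(1, 2)], [5.0], [7.0], 3)
assert np.allclose(D1, -D1.T) and np.allclose(D2, -D2.T)
assert D1[0, 1] == 5.0 and D1[1, 0] == -5.0 and D2[1, 2] == 7.0
print("parametric pair D(eps) is skew-symmetric by construction")
```
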

    D(C) = {D(ε) : ε ∈ C^k} = ( Σ_{(i,j) ∈ Ind_1(D)} C E_ij,  Σ_{(i′,j′) ∈ Ind_2(D)} C E_i′j′ ),   (6)

where

    Ind_1(D), Ind_2(D) ⊂ {1, ..., n̂} × {1, ..., n̂}

are the sets of indices of the stars in the upper-triangular parts of the first and the second matrices, respectively, of the pair D, and E_ij is the matrix whose (i, j)-th entry is 1, whose (j, i)-th entry is −1, and whose other entries are zero. Note that the large Σ in (6) denotes the entrywise sum of matrices. Following [23], we say that a miniversal deformation of (A, B) is simplest if it has the form (A, B) + D(ε), where D is a pair of (0, ∗) matrices. If the matrix pair D in (A, B) + D(ε) has no zero entries (except on the main diagonals), then D defines the deformation

    U(ε) = ( A + Σ_{i=1}^{n̂} Σ_{j=i+1}^{n̂} ε_ij E_ij,  B + Σ_{i=1}^{n̂} Σ_{j=i+1}^{n̂} ε′_ij E_ij ).   (7)

In other words, for all pairs of n̂ × n̂ skew-symmetric matrices (A + E, B + E′) that are close to a given pair of skew-symmetric matrices (A, B), we derive the normal form A(E, E′) with respect to the congruence transformation

    (A + E, B + E′) ↦ S(E, E′)^T (A + E, B + E′) S(E, E′) = A(E, E′),   (8)

in which S(E, E′) is holomorphic at 0 (i.e., its entries are power series in the entries of E and E′ that are convergent in a neighborhood of 0) and S(0, 0) is a nonsingular n̂ × n̂ matrix. Since A(0, 0) = S(0, 0)^T (A, B) S(0, 0), we can take A(0, 0) equal to the congruence canonical form (A, B)_can of (A, B), see (4). Then

    A(E, E′) = (A, B)_can + D(E, E′),   (9)

where D(E, E′) (= D(ε) for some ε ∈ C^k) is a pair of skew-symmetric matrices that is holomorphic at 0 and D(0, 0) = (0, 0). In Theorem 2.1 we derive D(E, E′) with the minimal number of nonzero entries that can be attained using the congruence transformation defined in (8). We use the following notation, in which each star denotes a function of the entries of E and E′ that is holomorphic at zero:

- 0_mn is the m × n zero matrix;
- ∗_mn is the m × n matrix all of whose entries are stars;

- 0∗_mn is the m × n matrix [0_{m,n−1} | ∗] (a last column of stars) if m ≤ n, and the m × n matrix with 0_{m−1,n} on top of a last row of stars if m ≥ n   (10)
  (if m = n, then we can take any of the matrices defined in (10));
- three further patterns are obtained from the one in (10) by clockwise rotation by 90°, 180°, and 270°, respectively;
- another pattern is the m × n matrix [∗ | 0_{m,n−1}]; in contrast to the preceding patterns, it has stars in the first column even if m > n;
- another pattern is the m × n matrix [0_{m,n−1} | ∗]; in contrast to the preceding patterns, it has stars in the last column even if m > n;
- another pattern is the m × n matrix built by bordering 0_{m−1,n−1} with stars;
- the last pattern, defined for m < n, is the m × n matrix [0 | ∗_{m,n−m}] (n − m columns of stars); if m ≥ n, then this pattern is 0.

Further, we will usually omit the indices m and n. Let

    (A, B)_can = X_1 ⊕ ... ⊕ X_t   (11)

be a canonical pair of skew-symmetric complex matrices for congruence, in which X_1, ..., X_t are pairs of the form (1)-(3), and let D(E, E′) be the pair of skew-symmetric matrices defined in (9), whose matrices are partitioned into blocks conformally to the decomposition (11):

    D(E, E′) = D = ([D_11 ... D_1t; ...; D_t1 ... D_tt], [D′_11 ... D′_1t; ...; D′_t1 ... D′_tt]).   (12)

Note that (D_ji, D′_ji) = (−D_ij^T, −D′_ij^T) and define

    D(X_i) = (D_ii, D′_ii)  and  D(X_i, X_j) = (D_ij, D′_ij),  i < j.   (13)

Since each pair of skew-symmetric matrices is congruent to its canonical pair of matrices, it suffices to construct the miniversal deformations for the pairs of canonical matrices (i.e., direct sums of the pairs (1)-(3)).

Theorem 2.1. Let (A, B)_can be a pair of skew-symmetric complex matrices (4). A simplest miniversal deformation of (A, B)_can can be taken in the form (A, B)_can + D, in which D is a pair of (0, ∗) matrices (the stars denote independent parameters, up to skew-symmetry; see also Remark 2.1) whose matrices are partitioned into blocks conformally to the decomposition of (A, B)_can, see (12), and the blocks of D are defined, in the notation (13), as follows:

(i) The diagonal blocks of D are defined by

    D(H_n(λ)) = (0, [ 0∗ ]),   (14)
    D(K_n) = ([ 0∗ ], 0),   (15)
    D(L_n) = (0, 0).   (16)

(ii) The off-diagonal blocks of D whose horizontal and vertical strips contain pairs of (A, B)_can of the same type are defined by

    D(H_n(λ), H_m(µ)) = (0, 0) if λ ≠ µ, and (0, [ 0∗ ]) if λ = µ,   (17)
    D(K_n, K_m) = ([ 0∗ ], 0),   (18)
    D(L_n, L_m) = ([ ∗^T_{m+1,n} ], [ ∗_{n+1,m}  0 ]).   (19)

(iii) The off-diagonal blocks of D whose horizontal and vertical strips contain pairs of (A, B)_can of different types are defined by

    D(H_n(λ), K_m) = (0, 0),   (20)
    D(H_n(λ), L_m) = (0, [ 0  0∗ ]),   (21)
    D(K_n, L_m) = (0, 0).   (22)

Remark 2.1 (About the independence of parameters). All parameters placed instead of the stars in the upper-triangular parts of the matrices of D are independent, and the lower-triangular parts are defined by the skew-symmetry. In particular, this means that the parametric matrix pairs obtained from (D_ij, D′_ij) and (D_i′j′, D′_i′j′) have dependent (in fact, equal up to sign) parametric entries if and only if i = j′ and j = i′.

Let us give an example of how the miniversal deformations from Theorem 2.1 can be used for the investigation of changes of the canonical structure information under small perturbations.

Example 2.1. We show that in an arbitrarily small neighbourhood of a matrix pair with the canonical form L_1 ⊕ L_0 there is always a matrix pair with the canonical form H_2(λ), λ ≠ 0 (in fact, also with H_2(0) and K_2). It is enough to consider perturbations of L_1 ⊕ L_0 in the form of the miniversal deformations given in Theorem 2.1 (with only three independent nonzero parameters). Since we will use the theory developed for matrix pencils, we switch to the pencil notation X − sY instead of (X, Y). Thus a miniversal deformation of L_1 ⊕ L_0 is the pencil

    [  0      1        −s             0           ]
    [ −1      0         0             sε_3        ]
    [  s      0         0             ε_1 − sε_2  ],   (23)
    [  0     −sε_3     −ε_1 + sε_2    0           ]

which has the Smith form (see [31] for the definition)

    diag(1, 1, ε_1 − sε_2 + s²ε_3, ε_1 − sε_2 + s²ε_3).   (24)

In turn, the pencil H_2(λ) is

    [  0         0         1 − sλ    −s     ]
    [  0         0         0         1 − sλ ]
    [ −1 + sλ    0         0         0      ],   (25)
    [  s        −1 + sλ    0         0      ]

and has the Smith form

    diag(1, 1, (1 − sλ)², (1 − sλ)²).   (26)

Now (24) with ε_2 = 2ε_1λ and ε_3 = ε_1λ² is strictly equivalent to (26), which implies that the pencils (23) and (25) are strictly equivalent by [29, Proposition A.5.1, p. 663] (note that λ ≠ 0, and we must choose ε_1 ≠ 0), and due

to [31, Theorem 3, p. 275] the pencils (23) and (25) are congruent. Since ε_1 (and thus ε_2 and ε_3) can be chosen arbitrarily small, we can find a pair with the canonical form H_2(λ), λ ≠ 0, in any neighbourhood of L_1 ⊕ L_0. Note that for (23) and (25) we could have also computed the skew-symmetric Smith form derived in [33]. The result of this example also follows from the more general result in [16], but the proof given here is constructive, i.e., the perturbation is derived explicitly.

The pair of matrices D (12) in Theorem 2.1 will be constructed in Section 3 as follows. The vector space

    T_(A,B)can = {C^T (A, B)_can + (A, B)_can C : C ∈ C^{n̂×n̂}}   (27)

is the tangent space to the congruence class of (A, B)_can at the point (A, B)_can, since

    (I + εC)^T (A, B)_can (I + εC) = (A, B)_can + ε(C^T (A, B)_can + (A, B)_can C) + ε² C^T (A, B)_can C

for all n̂-by-n̂ matrices C and each ε ∈ C. Then D is constructed such that

    (C_c^{n̂×n̂}, C_c^{n̂×n̂}) = T_(A,B)can + D(C),   (28)

in which C_c^{n̂×n̂} is the space of all skew-symmetric n̂ × n̂ matrices and D(C) is the vector space of all pairs of skew-symmetric matrices obtained from D by replacing its stars by complex numbers, see (6). Thus, one half of the number of stars in D is equal to the codimension of the congruence orbit of (A, B)_can (note that the total number of the stars is always even). Lemma 3.2 in Section 3.1 ensures that any pair of (0, ∗) matrices that satisfies (28) can be taken as D in Theorem 2.1.

2.2. Upper bound for the norm of miniversal deformations

In this section, we bound the distance from the miniversal deformations to a matrix pair that was originally perturbed, using the norm of the perturbations. In particular, we see that this distance can be made arbitrarily small by decreasing the size of the allowed perturbations. Similar techniques are used in [14, 15] to prove the versality of the deformations. We use the Frobenius norm of a complex n × n matrix Y = [y_ij]:

    ‖Y‖ = ( Σ_{i,j} |y_ij|² )^{1/2}.
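The tangent-space description (27)-(28) gives a direct numerical route to the codimension of a congruence orbit, i.e., to the number of independent parameters of a miniversal deformation. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def orbit_codimension(A, B):
    """Codimension of the congruence orbit of the skew-symmetric pair (A, B):
    dimension of the space of skew-symmetric pairs minus the rank of the
    linear map C |-> (C^T A + A C, C^T B + B C), whose image is the tangent
    space (27)."""
    n = A.shape[0]
    iu = np.triu_indices(n, 1)       # strictly upper parts determine a skew pair
    cols = []
    for p in range(n):
        for q in range(n):
            C = np.zeros((n, n)); C[p, q] = 1.0
            TA, TB = C.T @ A + A @ C, C.T @ B + B @ C
            cols.append(np.concatenate([TA[iu], TB[iu]]))
    rank = np.linalg.matrix_rank(np.array(cols).T)
    return n * (n - 1) - rank        # ambient dim = 2 * n(n-1)/2

# H_1(lambda) with lambda = 2: A = [[0, 1], [-1, 0]], B = 2 A.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(orbit_codimension(A, 2 * A))   # -> 1
```

For H_1(λ) the orbit has codimension 1, so its simplest miniversal deformation carries exactly one independent parameter.
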
Recall that for matrices Y and Z and ν, ω ∈ C the following inequalities hold (e.g., see [24, Section 5.6]):

    ‖νY + ωZ‖ ≤ |ν| ‖Y‖ + |ω| ‖Z‖   and   ‖YZ‖ ≤ ‖Y‖ ‖Z‖.   (29)
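A quick numerical sanity check of (29), on made-up matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
Y, Z = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
nu, om = 2.0 - 1.0j, 0.5

fro = np.linalg.norm   # Frobenius norm by default for matrices

# Triangle inequality for the Frobenius norm (29, left).
assert fro(nu * Y + om * Z) <= abs(nu) * fro(Y) + abs(om) * fro(Z) + 1e-12
# Submultiplicativity (29, right).
assert fro(Y @ Z) <= fro(Y) * fro(Z) + 1e-12
print("inequalities (29) hold for this sample")
```
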

Let (A, B) ∈ (C_c^{n̂×n̂}, C_c^{n̂×n̂}) and α = ‖A‖, β = ‖B‖. By (28), for each pair of skew-symmetric n̂-by-n̂ matrices (E_ij, 0) and (0, E_i′j′), 1 ≤ i, j, i′, j′ ≤ n̂, there exist X_ij, X′_i′j′ ∈ C^{n̂×n̂} such that

    (E_ij, 0) + X_ij^T (A + M, B + N) + (A + M, B + N) X_ij ∈ D(C),
    (0, E_i′j′) + X′^T_i′j′ (A + M, B + N) + (A + M, B + N) X′_i′j′ ∈ D(C),   (30)

where D(C) is defined in (6). If (i, j) ∈ Ind_1(D), then (E_ij, 0) ∈ D(C), and so we can put X_ij = 0. Analogously, if (i′, j′) ∈ Ind_2(D), then (0, E_i′j′) ∈ D(C), and so we can put X′_i′j′ = 0. Denote

    γ = Σ_{(i,j) ∉ Ind_1(D)} ‖X_ij‖ + Σ_{(i′,j′) ∉ Ind_2(D)} ‖X′_i′j′‖.   (31)

Theorem 2.2. Let (A, B) ∈ (C_c^{n̂×n̂}, C_c^{n̂×n̂}) and let ε ∈ R be such that

    0 < ε < 1 / max{1 + γ(α + 1)(2 + γ), 1 + γ(β + 1)(2 + γ)},

where α = ‖A‖, β = ‖B‖, and γ is defined in (31). For each pair of skew-symmetric n̂-by-n̂ matrices (M, N) satisfying

    ‖M‖ < ε²,  ‖N‖ < ε²,   (32)

there exists a matrix S = I_n̂ + X depending holomorphically on the entries of (M, N) in a neighborhood of zero such that

    S^T (A + M, B + N) S = (A + P, B + Q),  (P, Q) ∈ D(C),  ‖P‖ < ε, and ‖Q‖ < ε,

where (C_c^{n̂×n̂}, C_c^{n̂×n̂}) = T_(A,B) + D(C).

Proof. First, note that if M = 0 and N = 0, then S = I_n̂. We construct S = I_n̂ + X. If M = Σ_{i,j} m_ij E_ij and N = Σ_{i,j} n_ij E_ij (i.e., M = [m_ij] and N = [n_ij]), then we can choose X_ij and X′_ij in (30) such that

    Σ_{i,j} (m_ij E_ij, n_ij E_ij) + Σ_{i,j} (m_ij X_ij^T + n_ij X′^T_ij)(A + M, B + N)
        + (A + M, B + N) Σ_{i,j} (m_ij X_ij + n_ij X′_ij) ∈ D(C),

and for

    X = Σ_{i,j} (m_ij X_ij + n_ij X′_ij)

we have

    (M, N) + X^T (A + M, B + N) + (A + M, B + N) X ∈ D(C).

For every (i, j) we have |m_ij| ≤ ‖M‖ < ε² and |n_ij| ≤ ‖N‖ < ε² by (32). We obtain

    ‖X‖ ≤ Σ_{(i,j) ∉ Ind_1(D)} |m_ij| ‖X_ij‖ + Σ_{(i,j) ∉ Ind_2(D)} |n_ij| ‖X′_ij‖
        < ε² Σ_{(i,j) ∉ Ind_1(D)} ‖X_ij‖ + ε² Σ_{(i,j) ∉ Ind_2(D)} ‖X′_ij‖ = ε²γ.

Put S = I_n̂ + X; then S^T (A + M, B + N) S = (A + P, B + Q), where

    (P, Q) = (M, N) + X^T (A + M, B + N) + (A + M, B + N) X + X^T (A + M, B + N) X.

Summing up, we obtain

    ‖P‖ ≤ ‖M‖ + 2‖X‖(‖A‖ + ‖M‖) + ‖X‖²(‖A‖ + ‖M‖)
        < ε² + 2ε²γ(α + ε²) + ε⁴γ²(α + ε²)
        = ε² + ε²γ(α + ε²)(2 + ε²γ)
        < ε²(1 + γ(α + 1)(2 + γ)) < ε,

    ‖Q‖ ≤ ‖N‖ + 2‖X‖(‖B‖ + ‖N‖) + ‖X‖²(‖B‖ + ‖N‖) < ε²(1 + γ(β + 1)(2 + γ)) < ε.

3. Proof of the main theorem

3.1. A method of construction of miniversal deformations

We give a method of construction of simplest miniversal deformations, which will be used in the proof of Theorem 2.1. The deformation (7) is universal in the sense that every deformation B(σ_1, ..., σ_l) of (A, B) has the form U(φ(σ_1, ..., σ_l)), where the φ_ij(σ_1, ..., σ_l) are power series convergent in a neighborhood of 0 such that φ_ij(0) = 0. Hence every deformation B(σ_1, ..., σ_l) in Definition 2.1 can be replaced by U(ε), which proves the following lemma.

Lemma 3.1. The following two conditions are equivalent for any deformation A(δ_1, ..., δ_k) of a pair of matrices (A, B):

(i) The deformation A(δ_1, ..., δ_k) is versal.

(ii) The deformation (7) is equivalent to A(φ_1(ε), ..., φ_k(ε)), in which all φ_i(ε) are power series convergent in a neighborhood of 0 such that φ_i(0) = 0.

If U is a subspace of a vector space V, then each set v + U with v ∈ V is called a coset of U in V.

Lemma 3.2. Let (A, B) ∈ (C_c^{n̂×n̂}, C_c^{n̂×n̂}) and let D be a pair of (0, ∗) matrices of size n̂ × n̂. The following are equivalent:

(i) The deformation (A, B) + D(ε) defined in (5) is miniversal.

(ii) The vector space (C_c^{n̂×n̂}, C_c^{n̂×n̂}) decomposes into the sum

    (C_c^{n̂×n̂}, C_c^{n̂×n̂}) = T_(A,B) + D(C),  T_(A,B) ∩ D(C) = {(A, B)}.   (33)

(iii) Each coset of T_(A,B) in (C_c^{n̂×n̂}, C_c^{n̂×n̂}) contains exactly one matrix pair of the form D.

Proof. Define the action of the group GL_n̂(C) of nonsingular n̂-by-n̂ matrices on the space (C_c^{n̂×n̂}, C_c^{n̂×n̂}) by

    (A, B)^S = S^T (A, B) S,  (A, B) ∈ (C_c^{n̂×n̂}, C_c^{n̂×n̂}),  S ∈ GL_n̂(C).

The orbit (A, B)^{GL_n̂} of (A, B) under this action consists of all pairs of skew-symmetric matrices that are congruent to the pair (A, B). The space T_(A,B) is the tangent space to the orbit (A, B)^{GL_n̂} at the point (A, B) (see (27)). Hence D(ε) is transversal to the orbit (A, B)^{GL_n̂} at the point (A, B) if

    (C_c^{n̂×n̂}, C_c^{n̂×n̂}) = T_(A,B) + D(C)

(see the definitions in [3, 29]; two subspaces of a vector space are called transversal if their sum is equal to the whole space). This proves the equivalence of (i) and (ii), since a transversal (of the minimal dimension) to the orbit is a (mini)versal deformation [2, Section 1.6]. The equivalence of (ii) and (iii) is obvious.

Due to the versality of each deformation (A, B) + D(ε) in which D satisfies (33), there is a deformation I(ε) of the identity matrix such that (A, B) + D(ε) = I(ε)^T U(ε) I(ε), where U(ε) is defined in (7). Thus, a simplest miniversal deformation of (A, B) ∈ (C_c^{n̂×n̂}, C_c^{n̂×n̂}) can be constructed as follows. Let (T_1, ..., T_r) be a basis of the space T_(A,B),

and let (E_1, ..., E_{n̂(n̂−1)}) be the basis of (C_c^{n̂×n̂}, C_c^{n̂×n̂}) in which every E_k is either of the form (E_ij, 0) or (0, E_i′j′). Removing from the sequence (T_1, ..., T_r, E_1, ..., E_{n̂(n̂−1)}) every pair of matrices that is a linear combination of the preceding matrices, we obtain a new basis (T_1, ..., T_r, E_{i_1}, ..., E_{i_k}) of the space (C_c^{n̂×n̂}, C_c^{n̂×n̂}). By Lemma 3.2, the deformation

    A(ε_1, ..., ε_{k_1}, ε′_1, ..., ε′_{k_2}) = (A, B) + ε_1 E_{i_1} + ... + ε_{k_1} E_{i_{k_1}} + ε′_1 E_{i_{k_1+1}} + ... + ε′_{k_2} E_{i_k}
        = (A, B) + ε_1 (E_{i_1 j_1}, 0) + ... + ε_{k_1} (E_{i_{k_1} j_{k_1}}, 0) + ε′_1 (0, E_{i_{k_1+1}, j_{k_1+1}}) + ... + ε′_{k_2} (0, E_{i_k j_k}),

where k_1 + k_2 = k, is miniversal.

For each pair of skew-symmetric m̂ × m̂ matrices (A_1, B_1) and each pair of skew-symmetric n̂ × n̂ matrices (A_2, B_2), define the vector spaces

    V(A_1, B_1) = {S^T (A_1, B_1) + (A_1, B_1) S : S ∈ C^{m̂×m̂}},   (34)

    V((A_1, B_1), (A_2, B_2)) = {(R^T (A_2, B_2) + (A_1, B_1) S, S^T (A_1, B_1) + (A_2, B_2) R) : S ∈ C^{m̂×n̂}, R ∈ C^{n̂×m̂}}.   (35)

Lemma 3.3. Let (A, B) = (A_1, B_1) ⊕ ... ⊕ (A_t, B_t) be a block-diagonal pair of matrices in which every (A_i, B_i) is n_i × n_i. Let D be a pair of (0, ∗) matrices of the size of (A, B), and partition D into blocks (D_ij, D′_ij) conformably to the partitioning of (A, B) (see (12)). Then (A, B) + D(E, E′) is a simplest miniversal (skew-symmetric) deformation of (A, B) under congruence if and only if

(i) every coset of V(A_i, B_i) in (C_c^{n_i×n_i}, C_c^{n_i×n_i}) contains exactly one matrix pair of the form (D_ii, D′_ii), and

(ii) every coset of V((A_i, B_i), (A_j, B_j)) in (C^{n_i×n_j}, C^{n_i×n_j}) × (C^{n_j×n_i}, C^{n_j×n_i}) contains exactly two pairs of matrices, (W_1, W_2) and (−W_1^T, −W_2^T), in which (W_1, W_2) is of the form (D_ij, D′_ij) and, correspondingly, (−W_1^T, −W_2^T) is of the form (D_ji, D′_ji) = (−D_ij^T, −D′_ij^T).

Proof.
By Lemma 3.2(iii), (A, B) + D(ε) is a simplest miniversal deformation of (A, B) if and only if for each (C, C′) ∈ (C_c^{n̂×n̂}, C_c^{n̂×n̂}) the coset (C, C′) + T_(A,B) contains exactly one (D, D′) of the form D, that is,

    (D, D′) = (C, C′) + S^T (A, B) + (A, B) S ∈ D(C)  with  S ∈ C^{n̂×n̂}.   (36)

Partition (D, D′), (C, C′), and S into blocks conformably to the partitioning of (A, B). By (36), for each i we have (D_ii, D′_ii) = (C_ii, C′_ii) + S_ii^T (A_i, B_i) +

(A_i, B_i) S_ii, and for all i and j such that i < j we have

    ([D_ii D_ij; D_ji D_jj], [D′_ii D′_ij; D′_ji D′_jj]) = ([C_ii C_ij; C_ji C_jj], [C′_ii C′_ij; C′_ji C′_jj])
        + [S_ii^T S_ji^T; S_ij^T S_jj^T] ([A_i 0; 0 A_j], [B_i 0; 0 B_j]) + ([A_i 0; 0 A_j], [B_i 0; 0 B_j]) [S_ii S_ij; S_ji S_jj].   (37)

Thus, (36) is equivalent to the conditions

    (D_ii, D′_ii) = (C_ii, C′_ii) + S_ii^T (A_i, B_i) + (A_i, B_i) S_ii ∈ D_ii(C),  1 ≤ i ≤ t,   (38)

    ((D_ij, D′_ij), (D_ji, D′_ji)) = ((C_ij, C′_ij), (C_ji, C′_ji))
        + ((S_ji^T A_j + A_i S_ij, S_ji^T B_j + B_i S_ij), (S_ij^T A_i + A_j S_ji, S_ij^T B_i + B_j S_ji)) ∈ D_ij(C) × D_ji(C),  1 ≤ i < j ≤ t.   (39)

Hence for each (C, C′) ∈ (C_c^{n̂×n̂}, C_c^{n̂×n̂}) there exists exactly one (D, D′) ∈ D(C) of the form (36) if and only if

(i′) for each (C_ii, C′_ii) ∈ (C_c^{n_i×n_i}, C_c^{n_i×n_i}) there exists exactly one (D_ii, D′_ii) ∈ D_ii(C) of the form (38), and

(ii′) for each ((C_ij, C′_ij), (C_ji, C′_ji)) ∈ (C^{n_i×n_j}, C^{n_i×n_j}) × (C^{n_j×n_i}, C^{n_j×n_i}) there exists exactly one ((D_ij, D′_ij), (D_ji, D′_ji)) ∈ D_ij(C) × D_ji(C) of the form (39).

This proves the lemma.

Corollary 3.1. In the notation of Lemma 3.3, (A, B) + D(ε) is a miniversal deformation of (A, B) if and only if each pair of submatrices of the form

    ([A_i + D_ii(ε), D_ij(ε); D_ji(ε), A_j + D_jj(ε)], [B_i + D′_ii(ε), D′_ij(ε); D′_ji(ε), B_j + D′_jj(ε)])  with  i < j

is a miniversal deformation of the pair (A_i ⊕ A_j, B_i ⊕ B_j).

We are ready to prove Theorem 2.1 now. Each X_i in (11) is of the form H_n(λ), K_n, or L_n, and so there are 9 types of pairs D(X_i) and D(X_i, X_j) with i < j; they are given in (14)-(22). It suffices to prove that the pairs (14)-(22) satisfy the conditions (i) and (ii) of Lemma 3.3.

3.2. Diagonal blocks of D

First we verify that the diagonal blocks of D defined in part (i) of Theorem 2.1 satisfy the condition (i) of Lemma 3.3.
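Before the case-by-case verification below, the construction of Section 3.1 (append the pairs (E_ij, 0) and (0, E_ij) to a spanning set of T_(A,B) and keep those that are linearly independent of everything preceding) can be sketched numerically. The kept positions form a valid star pattern; it may differ from the particular choice in Theorem 2.1, since any minimal transversal works:

```python
import numpy as np

def star_positions(A, B):
    """Greedy construction of Section 3.1: starting from a spanning set of the
    tangent space T_(A,B), scan the basis pairs (E_ij, 0) and then (0, E_ij),
    keeping each one that is linearly independent of everything kept so far.
    The kept positions are star positions of a simplest miniversal deformation."""
    n = A.shape[0]
    iu = np.triu_indices(n, 1)
    vec = lambda X, Y: np.concatenate([X[iu], Y[iu]])
    span = []
    for p in range(n):                   # images of C = e_p e_q^T span T_(A,B)
        for q in range(n):
            C = np.zeros((n, n)); C[p, q] = 1.0
            span.append(vec(C.T @ A + A @ C, C.T @ B + B @ C))
    rank = np.linalg.matrix_rank(np.array(span))
    kept, Z = [], np.zeros((n, n))
    for which in (0, 1):                 # first (E_ij, 0), then (0, E_ij)
        for i in range(n):
            for j in range(i + 1, n):
                E = np.zeros((n, n)); E[i, j], E[j, i] = 1.0, -1.0
                cand = vec(E, Z) if which == 0 else vec(Z, E)
                new_rank = np.linalg.matrix_rank(np.array(span + [cand]))
                if new_rank > rank:      # independent of the preceding: keep
                    span.append(cand); rank = new_rank
                    kept.append((which, i, j))
    return kept

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
stars = star_positions(A, 2 * A)   # H_1(2): exactly one star position is kept
print(stars)
```
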

3.2.1. Diagonal blocks D(H_n(λ)) and D(K_n)

We consider the pairs of blocks H_n(λ) and K_n. Due to Lemma 3.3(i), it suffices to prove that each pair of skew-symmetric 2n-by-2n matrices (A, B) = ([A_ij], [B_ij]), i, j = 1, 2, can be reduced to exactly one pair of matrices of the form (14) by adding

    (ΔA, ΔB) = [S_11^T S_21^T; S_12^T S_22^T] ([0 I_n; −I_n 0], [0 J_n(λ); −J_n(λ)^T 0])
              + ([0 I_n; −I_n 0], [0 J_n(λ); −J_n(λ)^T 0]) [S_11 S_12; S_21 S_22]
             = ([S_21 − S_21^T,  S_11^T + S_22;  −S_11 − S_22^T,  S_12^T − S_12],
                [−S_21^T J_n(λ)^T + J_n(λ) S_21,  S_11^T J_n(λ) + J_n(λ) S_22;
                 −S_22^T J_n(λ)^T − J_n(λ)^T S_11,  S_12^T J_n(λ) − J_n(λ)^T S_12]),   (40)

in which S = [S_ij], i, j = 1, 2, is an arbitrary 2n-by-2n matrix. Due to the skew-symmetry there are three pairs of n-by-n blocks in (40) that can be treated independently. For any X we have −X J_n(λ)^T + J_n(λ) X = −X(λI + J_n(0))^T + (λI + J_n(0)) X = −X J_n(0)^T + J_n(0) X. Thus, without loss of generality, we can assume that λ = 0. Therefore the deformation of K_n is equal to the deformation of H_n(λ) up to the permutation of matrices.

First we consider the pair of blocks (ΔA_11, ΔB_11) = (S_21 − S_21^T, −S_21^T J_n(0)^T + J_n(0) S_21), in which S_21 is an arbitrary n-by-n matrix. Obviously, by adding ΔA_11 = S_21 − S_21^T we reduce A_11 to zero. To preserve A_11, we must hereafter take S_21 such that S_21 − S_21^T = 0, i.e., S_21 is symmetric. We reduce B_11 by adding

    ΔB_11 = −S_21^T J_n(0)^T + J_n(0) S_21 = J_n(0) S_21 − S_21 J_n(0)^T,   (41)

whose (i, j) entry is s_{i+1,j} − s_{i,j+1}, where S_21 = [s_ij] with s_ij = s_ji and entries with an index out of range are zero; e.g., the first row of ΔB_11 is (0, s_22 − s_13, s_23 − s_14, ..., s_2n).

We reduce B_11 anti-diagonal-wise, and since B_11 is skew-symmetric, we just need to reduce the upper-triangular part of B_11; the lower-triangular part is then reduced automatically. Let b = (b_1, ..., b_{t−1}) denote the elements of the upper half of the k-th anti-diagonal (counting from the top left corner) of B_11. Each of the first (n − 1) upper halves of the anti-diagonals of ΔB_11 is of the form

    s = (s_2k − s_{1,k+1}, s_{3,k−1} − s_2k, ..., s_tt − s_{t−1,t+1}),       if k is even, t = (k + 2)/2;
    s = (s_2k − s_{1,k+1}, s_{3,k−1} − s_2k, ..., s_{t,t+1} − s_{t−1,t+2}),  if k is odd, t = (k + 1)/2,

where k = 2, 3, ..., n − 1 (the first anti-diagonal is zero). Choosing the parameters s_ij, we want to make s equal to b, i.e., we want to solve the system of linear equations in the unknowns s_{1,k+1}, s_2k, ..., s_tt obtained by setting the consecutive differences above equal to b_1, ..., b_{t−1}   (42)

for k even (and the analogous system for k odd). The system (42) has a solution, because its matrix is bidiagonal with t − 1 pivots. Therefore, we can reduce each of the first (n − 1) anti-diagonals of B_11 to zero by adding the corresponding anti-diagonals of ΔB_11. For the k-th upper parts of the last n anti-diagonals we have the analogous systems of equations in the unknowns s_{2−n+k,n}, s_{3−n+k,n−1}, ..., s_{t′−1,t′+1}, s_{t′t′} with right-hand sides b_1, ..., b_{t′−1},   (43)

where k = n, n + 1, ..., 2n − 2 (the last anti-diagonal is zero) and t′ = t − k + n, with t defined as above. The system (43) has a solution. Therefore we can reduce the last n anti-diagonals of B_11 to zero. Altogether, we reduce B_11 to the zero matrix by adding ΔB_11.

The possibility of reducing (A_22, B_22) to zero by adding (ΔA_22, ΔB_22) = (S_12^T − S_12, S_12^T J_n(0) − J_n(0)^T S_12) follows directly from the reduction of the blocks (A_11, B_11). We have 0 = B_11 − S_21^T J_n(0)^T + J_n(0) S_21, where B_11 is a skew-symmetric matrix. Multiplying this equality by the n-by-n flip matrix

    Z = [0 ... 0 1; 0 ... 1 0; ...; 1 0 ... 0]   (44)

from both sides, and using that Z² = I and Z J_n(0)^T Z = J_n(0), we get 0 = Z B_11 Z − Z S_21^T Z J_n(0) + J_n(0)^T Z S_21 Z. This ensures that the pair of blocks (A_22, B_22) can be set to zero, since Z B_11 Z and Z S_21 Z are arbitrary skew-symmetric and symmetric matrices, respectively.

To the pair of blocks (A_21, B_21) we can add (ΔA_21, ΔB_21) = (S_11^T + S_22, S_11^T J_n(0) + J_n(0) S_22). Adding S_11^T + S_22, we reduce A_21 to zero. To preserve A_21, we must hereafter take S_11 and S_22 such that S_11^T = −S_22. Thus we add

    ΔB_21 = −S_22 J_n(0) + J_n(0) S_22,   (45)

with an arbitrary matrix S_22 = [s_ij]; the (i, j) entry of ΔB_21 is s_{i+1,j} − s_{i,j−1} (entries with an index out of range being zero), e.g., its first row is (s_21, s_22 − s_11, s_23 − s_12, ..., s_2n − s_{1,n−1}).

We examine each diagonal of ΔB_21 independently, since each diagonal has its own variables. For each of the first n diagonals (starting from the bottom left corner) we have a system of equations in the unknowns s_{n+2−k,1}, ..., s_{n,k−1}.   (46)

The matrix of this system has k − 1 columns and k rows (since the first diagonal is zero, k = 2, ..., n), and its rank is equal to k − 1, but the rank of the augmented matrix of the system is k; by the Kronecker-Capelli theorem [26] the system (46) does not have a solution. Nevertheless, if we drop the first or the last equation of the system (i.e., we do not set the first or the last element of the corresponding diagonal of B_21 to zero), then (46) has a solution. For the last (n − 1) diagonals we have a system of equations like (42), which has a solution. Therefore we can set each element of the matrix B_21 to zero except the elements in either the first column or the last row. The blocks (ΔA_12, ΔB_12) = (−S_11 − S_22^T, −S_22^T J_n(0)^T − J_n(0)^T S_11) are equal to (ΔA_21, ΔB_21) up to transposition and sign.
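The solvability of the systems (42)-(43) amounts to the statement that, for symmetric S, the map S ↦ J_n(0) S − S J_n(0)^T is onto the skew-symmetric matrices, which is why the (1,1) block of D(H_n(λ)) carries no parameters. A rank check of this surjectivity (a sketch under the sign conventions used in (41)):

```python
import numpy as np

def reduction_map_rank(n):
    """Rank of S |-> J S - S J^T restricted to symmetric S, with J = J_n(0)
    the nilpotent Jordan block.  Full rank n(n-1)/2 means every skew-symmetric
    B_11 can be cancelled, so the (1,1) block of D(H_n(lambda)) is zero."""
    J = np.diag(np.ones(n - 1), 1)
    iu = np.triu_indices(n, 1)
    cols = []
    for p in range(n):                       # basis of symmetric matrices
        for q in range(p, n):
            S = np.zeros((n, n)); S[p, q] = S[q, p] = 1.0
            cols.append((J @ S - S @ J.T)[iu])   # the image is skew-symmetric
    return np.linalg.matrix_rank(np.array(cols).T)

for n in range(2, 6):
    assert reduction_map_rank(n) == n * (n - 1) // 2
print("S -> J S - S J^T maps symmetric matrices onto skew-symmetric ones")
```
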

Altogether, we obtain D(H_n(λ)) = (0, [ 0∗ ]) and D(K_n) = ([ 0∗ ], 0), as in (14) and (15).

3.2.2. Diagonal blocks D(L_n)

Using Lemma 3.3(i), as in Section 3.2.1, we prove that each pair (A, B) = ([A_ij], [B_ij]), i, j = 1, 2, of skew-symmetric (2n + 1)-by-(2n + 1) matrices can be set to zero by adding

    (ΔA, ΔB) = [S_11^T S_21^T; S_12^T S_22^T] ([0 F_n; −F_n^T 0], [0 G_n; −G_n^T 0])
              + ([0 F_n; −F_n^T 0], [0 G_n; −G_n^T 0]) [S_11 S_12; S_21 S_22]
             = ([−S_21^T F_n^T + F_n S_21,  S_11^T F_n + F_n S_22;  −S_22^T F_n^T − F_n^T S_11,  S_12^T F_n − F_n^T S_12],
                [−S_21^T G_n^T + G_n S_21,  S_11^T G_n + G_n S_22;  −S_22^T G_n^T − G_n^T S_11,  S_12^T G_n − G_n^T S_12]),   (47)

where S = [S_ij], i, j = 1, 2, is an arbitrary matrix. Each pair of blocks (A_ij, B_ij), i, j = 1, 2, of (A, B) is changed independently. We add (ΔA_11, ΔB_11) = (−S_21^T F_n^T + F_n S_21, −S_21^T G_n^T + G_n S_21), in which S_21 is an arbitrary (n + 1)-by-n matrix, to the pair of blocks (A_11, B_11). Obviously, by adding −S_21^T F_n^T + F_n S_21 we reduce A_11 to zero. To preserve A_11, we must hereafter take S_21 such that F_n S_21 = S_21^T F_n^T. Thus S_21 without its last row is n × n and symmetric:

    S_21 = [s_11 s_12 ... s_1n; s_12 s_22 ... s_2n; ...; s_1n s_2n ... s_nn; s_{1,n+1} s_{2,n+1} ... s_{n,n+1}].

Now we reduce $B_{11}$ by adding $\Delta B_{11} = -S_{21}^TG_n^T + G_nS_{21}$, whose $(i, j)$ entry is

$$(\Delta B_{11})_{ij} = \begin{cases} s_{i+1,j} - s_{i,j+1} & \text{if } i < j, \\ -(s_{j+1,i} - s_{j,i+1}) & \text{if } i > j, \\ 0 & \text{if } i = j, \end{cases} \quad (48)$$

where $i, j = 1, \dots, n$ and the $s_{ab}$ are the entries of $S_{21}$ labelled as above. The upper part of each anti-diagonal of $\Delta B_{11}$ has its own variables. Thus we reduce each anti-diagonal of $B_{11}$ independently. For the upper part of each anti-diagonal we have a system of equations of the form (42), which has a solution. It follows that we can reduce every anti-diagonal of $B_{11}$ to zero. Hence we can reduce $(A_{11}, B_{11})$ to zero by adding $\Delta(A_{11}, B_{11})$.

To the pair of blocks $(A_{12}, B_{12})$ we can add $\Delta(A_{12}, B_{12}) = (S_{11}^TF_n + F_nS_{22},\ S_{11}^TG_n + G_nS_{22})$, in which $S_{11}$ and $S_{22}$ are arbitrary matrices of the corresponding sizes. Adding $S_{11}^TF_n + F_nS_{22}$, we reduce $A_{12}$ to zero. To preserve $A_{12}$, we must hereafter take $S_{11}$ and $S_{22}$ such that $F_nS_{22} = -S_{11}^TF_n$. This means that the first $n$ rows of $S_{22}$ form the matrix $[-S_{11}^T \ \ 0]$ and the last row is arbitrary:

$$S_{22} = \begin{bmatrix} -S_{11}^T & 0 \\ y_1 \ \cdots \ y_n & y_{n+1} \end{bmatrix}.$$
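The structure of $\Delta B_{11}$ in (48) can be checked numerically. In the sketch below the conventions $F_n = [I\ 0]$ and $G_n = [0\ I]$, both of size $n \times (n+1)$, are assumptions chosen to match the computations above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
G = np.hstack([np.zeros((n, 1)), np.eye(n)])  # assumed G_n = [0  I_n]

top = rng.standard_normal((n, n))
top = (top + top.T) / 2                        # symmetric n x n part of S_21
last = rng.standard_normal((1, n))             # free last row of S_21
S21 = np.vstack([top, last])

dB11 = G @ S21 - S21.T @ G.T

# G @ S21 deletes the first row of S21, so the (i,j) entry of dB11 is
# S21[i+1, j] - S21[j+1, i] (0-based); in particular dB11 is skew-symmetric.
expected = S21[1:, :] - S21[1:, :].T
print(np.allclose(dB11, expected), np.allclose(dB11, -dB11.T))
```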

Therefore we reduce $B_{12}$ by adding

$$\Delta B_{12} = S_{11}^TG_n + G_nS_{22} = \begin{bmatrix} -s_{21} & s_{11}-s_{22} & s_{12}-s_{23} & \dots & s_{1,n-1}-s_{2n} & s_{1n} \\ -s_{31} & s_{21}-s_{32} & s_{22}-s_{33} & \dots & s_{2,n-1}-s_{3n} & s_{2n} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ -s_{n1} & s_{n-1,1}-s_{n2} & s_{n-1,2}-s_{n3} & \dots & s_{n-1,n-1}-s_{nn} & s_{n-1,n} \\ y_1 & s_{n1}+y_2 & s_{n2}+y_3 & \dots & s_{n,n-1}+y_n & s_{nn}+y_{n+1} \end{bmatrix}, \quad (49)$$

where $[s_{ij}] := S_{11}^T$. It is easily seen that we can set $B_{12}$ to zero by adding $\Delta B_{12}$ (diagonal-wise). The pair of blocks $\Delta(A_{21}, B_{21}) = (-S_{22}^TF_n^T - F_n^TS_{11},\ -S_{22}^TG_n^T - G_n^TS_{11})$ is analogous to $\Delta(A_{12}, B_{12})$ up to transposition and sign.

To the pair of blocks $(A_{22}, B_{22})$ we add $\Delta(A_{22}, B_{22}) = (S_{12}^TF_n - F_n^TS_{12},\ S_{12}^TG_n - G_n^TS_{12})$, in which $S_{12}$ is an arbitrary $n$-by-$(n+1)$ matrix. Obviously, by adding $S_{12}^TF_n - F_n^TS_{12}$, we reduce $A_{22}$ to zero. To preserve $A_{22}$, we must hereafter take $S_{12}$ such that $S_{12}^TF_n = F_n^TS_{12}$. Thus

$$S_{12} = \begin{bmatrix} s_{11} & s_{12} & s_{13} & \dots & s_{1n} & 0 \\ s_{12} & s_{22} & s_{23} & \dots & s_{2n} & 0 \\ s_{13} & s_{23} & s_{33} & \dots & s_{3n} & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ s_{1n} & s_{2n} & s_{3n} & \dots & s_{nn} & 0 \end{bmatrix};$$

the matrix $S_{12}$ without its last column is $n \times n$ and symmetric. Now we

reduce $B_{22}$ by adding

$$\Delta B_{22} = S_{12}^TG_n - G_n^TS_{12} = \begin{bmatrix} 0 & s_{11} & s_{12} & \dots & s_{1,n-1} & s_{1n} \\ -s_{11} & 0 & s_{22}-s_{13} & \dots & s_{2,n-1}-s_{1n} & s_{2n} \\ -s_{12} & s_{13}-s_{22} & 0 & \dots & s_{3,n-1}-s_{2n} & s_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ -s_{1,n-1} & s_{1n}-s_{2,n-1} & s_{2n}-s_{3,n-1} & \dots & 0 & s_{nn} \\ -s_{1n} & -s_{2n} & -s_{3n} & \dots & -s_{nn} & 0 \end{bmatrix}.$$

We have a system of equations of type (43), which has a solution, for the upper part of each anti-diagonal. It follows that we can reduce every anti-diagonal of $B_{22}$ to zero. Hence we can reduce $(A_{22}, B_{22})$ to zero by adding $\Delta(A_{22}, B_{22})$.

Summing up the analysis for all pairs of blocks, we get $D(L_n) = 0$.

3.3. Off-diagonal blocks of $D$ that correspond to summands of $(A, B)_{\mathrm{can}}$ of the same type

Now we verify condition (ii) of Lemma 3.3 for the off-diagonal blocks of $D$ defined in Theorem 2.1(ii); the diagonal blocks of their horizontal and vertical strips contain summands of $(A, B)_{\mathrm{can}}$ of the same type.

3.3.1. Pairs of blocks $D(H_n(\lambda), H_m(\mu))$ and $D(K_n, K_m)$

Due to Lemma 3.3(ii), it suffices to prove that each group of four matrices $((A, B), (-A^T, -B^T))$ can be reduced to exactly one group of the form (17) by adding $(R^TH_m(\mu) + H_n(\lambda)S,\ S^TH_n(\lambda) + H_m(\mu)R)$, $S \in \mathbb{C}^{2n \times 2m}$, $R \in \mathbb{C}^{2m \times 2n}$. Obviously, if we reduce the first pair of matrices, the second pair will be reduced automatically. So we reduce a pair $(A, B)$ of $2n$-by-$2m$ matrices by adding

$$\Delta(A, B) = R^TH_m(\mu) + H_n(\lambda)S = \left(R^T\begin{bmatrix} 0 & I_m \\ -I_m & 0 \end{bmatrix} + \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}S,\ R^T\begin{bmatrix} 0 & J_m(\mu) \\ -J_m(\mu)^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & J_n(\lambda) \\ -J_n(\lambda)^T & 0 \end{bmatrix}S\right).$$

It is clear that we can reduce $A$ to zero. To preserve $A$, we must hereafter choose $R = [R_{ij}]_{i,j=1}^2$ and $S = [S_{ij}]_{i,j=1}^2$ such that

$$R^T\begin{bmatrix} 0 & I_m \\ -I_m & 0 \end{bmatrix} + \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}S = 0, \quad \text{or equivalently} \quad \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} = \begin{bmatrix} -R_{22}^T & R_{12}^T \\ R_{21}^T & -R_{11}^T \end{bmatrix}.$$

Now $B = [B_{ij}]_{i,j=1}^2$ is reduced by adding

$$\Delta B = \begin{bmatrix} R_{11}^T & R_{21}^T \\ R_{12}^T & R_{22}^T \end{bmatrix}\begin{bmatrix} 0 & J_m(\mu) \\ -J_m(\mu)^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & J_n(\lambda) \\ -J_n(\lambda)^T & 0 \end{bmatrix}\begin{bmatrix} -R_{22}^T & R_{12}^T \\ R_{21}^T & -R_{11}^T \end{bmatrix} = \begin{bmatrix} -R_{21}^TJ_m(\mu)^T + J_n(\lambda)R_{21}^T & R_{11}^TJ_m(\mu) - J_n(\lambda)R_{11}^T \\ -R_{22}^TJ_m(\mu)^T + J_n(\lambda)^TR_{22}^T & R_{12}^TJ_m(\mu) - J_n(\lambda)^TR_{12}^T \end{bmatrix}.$$

Therefore $B_{11}$ is reduced by adding $\Delta B_{11} = -R_{21}^TJ_m(\mu)^T + J_n(\lambda)R_{21}^T$, with entries

$$(\Delta B_{11})_{ij} = \begin{cases} (\lambda-\mu)r_{ij} + r_{i+1,j} - r_{i,j+1} & \text{if } 1 \le i \le n-1,\ 1 \le j \le m-1, \\ (\lambda-\mu)r_{ij} + r_{i+1,j} & \text{if } 1 \le i \le n-1,\ j = m, \\ (\lambda-\mu)r_{ij} - r_{i,j+1} & \text{if } i = n,\ 1 \le j \le m-1, \\ (\lambda-\mu)r_{ij} & \text{if } i = n,\ j = m. \end{cases}$$

We have a system of $nm$ equations which has a solution if $\lambda \ne \mu$. Thus for $\lambda \ne \mu$ we can set $B_{11}$ to zero by adding $\Delta B_{11}$. Now we consider $\lambda = \mu$, i.e.

$$\Delta B_{11} = -R_{21}^TJ_m(\lambda)^T + J_n(\lambda)R_{21}^T = \begin{bmatrix} r_{21}-r_{12} & r_{22}-r_{13} & \dots & r_{2,m-1}-r_{1m} & r_{2m} \\ r_{31}-r_{22} & r_{32}-r_{23} & \dots & r_{3,m-1}-r_{2m} & r_{3m} \\ \vdots & \vdots & & \vdots & \vdots \\ r_{n1}-r_{n-1,2} & r_{n2}-r_{n-1,3} & \dots & r_{n,m-1}-r_{n-1,m} & r_{nm} \\ -r_{n2} & -r_{n3} & \dots & -r_{nm} & 0 \end{bmatrix}.$$

Like for the system (45), $B_{11}$ is reduced by adding $\Delta B_{11}$. To find the solutions for the other cases we need to multiply the answer
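The solvability claim for $\lambda \ne \mu$ amounts to the invertibility of a Sylvester-type operator: $\mathrm{vec}(J_n(\lambda)X - XJ_m(\mu)^T) = (I_m \otimes J_n(\lambda) - J_m(\mu) \otimes I_n)\,\mathrm{vec}(X)$. A numerical sketch (sizes and values chosen here only for illustration):

```python
import numpy as np

def jordan(k, lam):
    """Single Jordan block J_k(lam): lam on the diagonal, 1 above it."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

n, m, lam, mu = 3, 4, 2.0, -1.0            # any lam != mu works
Jn, Jm = jordan(n, lam), jordan(m, mu)

# Column-major vectorization of X -> Jn X - X Jm^T.
M = np.kron(np.eye(m), Jn) - np.kron(Jm, np.eye(n))
print(np.linalg.matrix_rank(M) == n * m)   # invertible exactly when lam != mu

C = np.random.default_rng(3).standard_normal((n, m))
X = np.linalg.solve(M, C.flatten(order="F")).reshape((n, m), order="F")
print(np.allclose(Jn @ X - X @ Jm.T, C))   # so B_11 can be set to zero

# For lam == mu the operator is singular, so B_11 cannot be fully removed.
Msing = np.kron(np.eye(m), jordan(n, lam)) - np.kron(jordan(m, lam), np.eye(n))
print(np.linalg.matrix_rank(Msing) < n * m)
```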

for the block $B_{11}$ by $\pm Z$. Writing $X := R_{21}^T$ for the matrix found for the block $B_{11}$, and using $Z^2 = I$, $ZJ_m(\mu)Z = J_m(\mu)^T$ and $ZJ_n(\lambda)Z = J_n(\lambda)^T$, we get

$$\Delta B_{12} = R_{11}^TJ_m(\mu) - J_n(\lambda)R_{11}^T = -\Delta B_{11}Z \quad \text{for } R_{11}^T = XZ,$$
$$\Delta B_{21} = -R_{22}^TJ_m(\mu)^T + J_n(\lambda)^TR_{22}^T = Z\Delta B_{11} \quad \text{for } R_{22}^T = ZX,$$
$$\Delta B_{22} = R_{12}^TJ_m(\mu) - J_n(\lambda)^TR_{12}^T = -Z\Delta B_{11}Z \quad \text{for } R_{12}^T = ZXZ.$$

Hence each of $B_{12}$, $B_{21}$, $B_{22}$ is reduced to zero if $\lambda \ne \mu$, and is reduced like $B_{11}$ if $\lambda = \mu$.

Summing up the derivations for all blocks, we get that $D(H_n(\lambda), H_m(\mu))$ is equal to (17) and, respectively, $D(K_n, K_m)$ is equal to (18).

3.3.2. Pairs of blocks $D(L_n, L_m)$

Due to Lemma 3.3(ii), it suffices to prove that each group of four matrices $((A, B), (-A^T, -B^T))$ can be reduced to exactly one group of the form (19) by adding $(R^TL_m + L_nS,\ S^TL_n + L_mR)$, $S \in \mathbb{C}^{(2n+1) \times (2m+1)}$, $R \in \mathbb{C}^{(2m+1) \times (2n+1)}$. It is enough to reduce only the first pair of matrices, i.e. $(A, B)$. We reduce it by adding

$$\Delta(A, B) = R^TL_m + L_nS = \left(R^T\begin{bmatrix} 0 & F_m \\ -F_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & F_n \\ -F_n^T & 0 \end{bmatrix}S,\ R^T\begin{bmatrix} 0 & G_m \\ -G_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & G_n \\ -G_n^T & 0 \end{bmatrix}S\right).$$

It is easily seen that we can set $A$ to zero. To preserve $A$, we must hereafter take $R = [R_{ij}]_{i,j=1}^2$ and $S = [S_{ij}]_{i,j=1}^2$ such that

$$\begin{bmatrix} R_{11}^T & R_{21}^T \\ R_{12}^T & R_{22}^T \end{bmatrix}\begin{bmatrix} 0 & F_m \\ -F_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & F_n \\ -F_n^T & 0 \end{bmatrix}\begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} = 0,$$

or equivalently

$$\begin{bmatrix} -R_{21}^TF_m^T & R_{11}^TF_m \\ -R_{22}^TF_m^T & R_{12}^TF_m \end{bmatrix} = \begin{bmatrix} -F_nS_{21} & -F_nS_{22} \\ F_n^TS_{11} & F_n^TS_{12} \end{bmatrix}. \quad (50)$$

$B = [B_{ij}]_{i,j=1}^2$ is reduced by adding

$$\Delta B = \begin{bmatrix} \Delta B_{11} & \Delta B_{12} \\ \Delta B_{21} & \Delta B_{22} \end{bmatrix} = \begin{bmatrix} R_{11}^T & R_{21}^T \\ R_{12}^T & R_{22}^T \end{bmatrix}\begin{bmatrix} 0 & G_m \\ -G_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & G_n \\ -G_n^T & 0 \end{bmatrix}\begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} = \begin{bmatrix} -R_{21}^TG_m^T + G_nS_{21} & R_{11}^TG_m + G_nS_{22} \\ -R_{22}^TG_m^T - G_n^TS_{11} & R_{12}^TG_m - G_n^TS_{12} \end{bmatrix},$$

where $S_{ij}$ and $R_{ij}$, $i, j = 1, 2$, satisfy (50). We reduce each pair of blocks independently.

First we reduce $B_{11}$. Using the equality $R_{21}^TF_m^T = F_nS_{21}$ we obtain that

$$S_{21} = \begin{bmatrix} Q \\ a_1 \ \dots \ a_m \end{bmatrix}, \qquad R_{21}^T = \begin{bmatrix} Q & \begin{matrix} b_1 \\ \vdots \\ b_n \end{matrix} \end{bmatrix}, \qquad \text{where } Q = [q_{ij}] \text{ is any } n\text{-by-}m \text{ matrix}.$$

Therefore

$$\Delta B_{11} = -R_{21}^TG_m^T + G_nS_{21} = \begin{bmatrix} q_{21}-q_{12} & q_{22}-q_{13} & \dots & q_{2,m-1}-q_{1m} & q_{2m}-b_1 \\ q_{31}-q_{22} & q_{32}-q_{23} & \dots & q_{3,m-1}-q_{2m} & q_{3m}-b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ q_{n1}-q_{n-1,2} & q_{n2}-q_{n-1,3} & \dots & q_{n,m-1}-q_{n-1,m} & q_{nm}-b_{n-1} \\ a_1-q_{n2} & a_2-q_{n3} & \dots & a_{m-1}-q_{nm} & a_m-b_n \end{bmatrix}. \quad (51)$$

We can set each anti-diagonal of $B_{11}$ to zero independently by adding the corresponding anti-diagonal of $\Delta B_{11}$. Thus we can reduce $B_{11}$ to zero by adding $\Delta B_{11}$.

Now to the pair $(A_{12}, B_{12})$: to preserve $A_{12}$, we take $R_{11}$ and $S_{22}$ such that $R_{11}^TF_m = -F_nS_{22}$, thus

$$S_{22} = \begin{bmatrix} -R_{11}^T & 0 \\ b_1 \ \cdots \ b_m & b_{m+1} \end{bmatrix},$$

where $R_{11}^T$ is any $n$-by-$m$ matrix. Thus

$$\Delta B_{12} = R_{11}^TG_m + G_nS_{22} = \begin{bmatrix} -r_{21} & r_{11}-r_{22} & r_{12}-r_{23} & \dots & r_{1,m-1}-r_{2m} & r_{1m} \\ -r_{31} & r_{21}-r_{32} & r_{22}-r_{33} & \dots & r_{2,m-1}-r_{3m} & r_{2m} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ -r_{n1} & r_{n-1,1}-r_{n2} & r_{n-1,2}-r_{n3} & \dots & r_{n-1,m-1}-r_{nm} & r_{n-1,m} \\ b_1 & r_{n1}+b_2 & r_{n2}+b_3 & \dots & r_{n,m-1}+b_m & r_{nm}+b_{m+1} \end{bmatrix},$$

where $[r_{ij}] := R_{11}^T$. If $m + 1 \ge n$ then we can set $B_{12}$ to zero by adding $\Delta B_{12}$. If $n > m + 1$ then we cannot set $B_{12}$ to zero; we reduce it diagonal-wise, starting from the top-right corner. By adding the first $m$ and the last $m + 1$ diagonals of $\Delta B_{12}$ we set the corresponding diagonals of $B_{12}$ to zero. We can set the remaining $n - m - 1$ diagonals of $B_{12}$ to zero, except the last element of each of them. Hence $(A_{12}, B_{12})$ is reduced to $(0, 0_{m+1,n}^T)$ by adding $\Delta(A_{12}, B_{12})$.

$(A_{21}, B_{21})$ is reduced in the same way (up to transposition) as $(A_{12}, B_{12})$. Hence it can be reduced to the form $(0, 0_{n+1,m})$.

Consider $(A_{22}, B_{22})$. We reduce $A_{22}$ to the form $0$ by adding $\Delta A_{22} = R_{12}^TF_m - F_n^TS_{12}$. To preserve $A_{22}$, we must hereafter take $R_{12}$ and $S_{12}$ such that $R_{12}^TF_m = F_n^TS_{12}$, thus

$$R_{12}^T = \begin{bmatrix} Q \\ 0 \end{bmatrix}, \qquad S_{12} = \begin{bmatrix} Q & 0 \end{bmatrix}, \qquad \text{where } Q = [q_{ij}] \text{ is any } n\text{-by-}m \text{ matrix}.$$

Therefore

$$\Delta B_{22} = R_{12}^TG_m - G_n^TS_{12} = \begin{bmatrix} Q \\ 0 \end{bmatrix}G_m - G_n^T\begin{bmatrix} Q & 0 \end{bmatrix} = \begin{bmatrix} 0 & q_{11} & q_{12} & \dots & q_{1,m-1} & q_{1m} \\ -q_{11} & q_{21}-q_{12} & q_{22}-q_{13} & \dots & q_{2,m-1}-q_{1m} & q_{2m} \\ -q_{21} & q_{31}-q_{22} & q_{32}-q_{23} & \dots & q_{3,m-1}-q_{2m} & q_{3m} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ -q_{n-1,1} & q_{n1}-q_{n-1,2} & q_{n2}-q_{n-1,3} & \dots & q_{n,m-1}-q_{n-1,m} & q_{nm} \\ -q_{n1} & -q_{n2} & -q_{n3} & \dots & -q_{nm} & 0 \end{bmatrix}.$$

By adding $\Delta B_{22}$, we can set each element of $B_{22}$ to zero except the elements in the first column and the last row (or, alternatively, the elements in the first row and the last column). Summing up the results, we have that $D(L_n, L_m)$ is of the form (19).
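The last claim, that every entry of $B_{22}$ outside the first column and the last row can be removed, can be tested numerically. In the sketch below the conventions $G_k = [0\ I_k]$ of size $k \times (k+1)$ are assumptions matching the computations above.

```python
import numpy as np

# Check: the map Q |-> [Q; 0] G_m - G_n^T [Q 0] can prescribe arbitrarily
# (in particular, zero out) every entry of an (n+1) x (m+1) block except
# those in the first column and the last row.
n, m = 3, 5
Gm = np.hstack([np.zeros((m, 1)), np.eye(m)])
Gn = np.hstack([np.zeros((n, 1)), np.eye(n)])

def T(Q):
    R12T = np.vstack([Q, np.zeros((1, m))])   # R_12^T = [Q; 0]
    S12 = np.hstack([Q, np.zeros((n, 1))])    # S_12  = [Q  0]
    return R12T @ Gm - Gn.T @ S12

# Stack the images of the basis matrices E_ij, keeping only the entries
# outside the first column and the last row; full rank means those entries
# can be prescribed arbitrarily.
cols = []
for i in range(n):
    for j in range(m):
        E = np.zeros((n, m)); E[i, j] = 1.0
        cols.append(T(E)[:n, 1:].flatten())
M = np.column_stack(cols)
print(np.linalg.matrix_rank(M) == n * m)
```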

3.4. Off-diagonal blocks of $D$ that correspond to summands of $(A, B)_{\mathrm{can}}$ of different types

Finally, we verify condition (ii) of Lemma 3.3 for the off-diagonal blocks of $D$ defined in Theorem 2.1(iii); the diagonal blocks of their horizontal and vertical strips contain summands of $(A, B)_{\mathrm{can}}$ of different types.

3.4.1. Pairs of blocks $D(H_n(\lambda), K_m)$

Due to Lemma 3.3(ii), it suffices to prove that each group of four matrices $((A, B), (-A^T, -B^T))$ can be reduced to exactly one group of the form (20) by adding $(R^TK_m + H_n(\lambda)S,\ S^TH_n(\lambda) + K_mR)$, $R \in \mathbb{C}^{2m \times 2n}$, $S \in \mathbb{C}^{2n \times 2m}$. Obviously, if we reduce $(A, B)$, then the second pair will be reduced automatically. We have

$$\Delta(A, B) = R^TK_m + H_n(\lambda)S = \left(R^T\begin{bmatrix} 0 & J_m(0) \\ -J_m(0)^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}S,\ R^T\begin{bmatrix} 0 & I_m \\ -I_m & 0 \end{bmatrix} + \begin{bmatrix} 0 & J_n(\lambda) \\ -J_n(\lambda)^T & 0 \end{bmatrix}S\right).$$

It is clear that we can set $A$ to zero. To preserve $A$, we must hereafter take $R = [R_{ij}]_{i,j=1}^2$ and $S = [S_{ij}]_{i,j=1}^2$ such that

$$R^T\begin{bmatrix} 0 & J_m(0) \\ -J_m(0)^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}S = 0, \quad \text{or equivalently} \quad S = \begin{bmatrix} -R_{22}^TJ_m(0)^T & R_{12}^TJ_m(0) \\ R_{21}^TJ_m(0)^T & -R_{11}^TJ_m(0) \end{bmatrix}.$$

Therefore $B = [B_{ij}]_{i,j=1}^2$ is reduced by adding

$$\Delta B = \begin{bmatrix} R_{11}^T & R_{21}^T \\ R_{12}^T & R_{22}^T \end{bmatrix}\begin{bmatrix} 0 & I_m \\ -I_m & 0 \end{bmatrix} + \begin{bmatrix} 0 & J_n(\lambda) \\ -J_n(\lambda)^T & 0 \end{bmatrix}\begin{bmatrix} -R_{22}^TJ_m(0)^T & R_{12}^TJ_m(0) \\ R_{21}^TJ_m(0)^T & -R_{11}^TJ_m(0) \end{bmatrix} = \begin{bmatrix} -R_{21}^T + J_n(\lambda)R_{21}^TJ_m(0)^T & R_{11}^T - J_n(\lambda)R_{11}^TJ_m(0) \\ -R_{22}^T + J_n(\lambda)^TR_{22}^TJ_m(0)^T & R_{12}^T - J_n(\lambda)^TR_{12}^TJ_m(0) \end{bmatrix}.$$

The block $B_{11}$ is reduced to zero by adding $\Delta B_{11} = -R_{21}^T + J_n(\lambda)R_{21}^TJ_m(0)^T$, with entries

$$(\Delta B_{11})_{ij} = \begin{cases} -r_{ij} + \lambda r_{i,j+1} + r_{i+1,j+1} & \text{if } 1 \le i \le n-1,\ 1 \le j \le m-1, \\ -r_{ij} + \lambda r_{i,j+1} & \text{if } i = n,\ 1 \le j \le m-1, \\ -r_{ij} & \text{if } 1 \le i \le n,\ j = m, \end{cases}$$
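That this system is solvable for every $\lambda$ follows from the nilpotency of $J_m(0)$: vectorizing gives the operator $-I + J_m(0) \otimes J_n(\lambda)$, which is invertible because its second summand is nilpotent. A numerical sketch (sizes chosen for illustration):

```python
import numpy as np

def jordan(k, lam):
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

n, m = 4, 3
for lam in (0.0, 1.0, -2.5):
    # vec(J_n(lam) X J_m(0)^T) = (J_m(0) (x) J_n(lam)) vec(X), column-major.
    N = np.kron(jordan(m, 0.0), jordan(n, lam))
    M = -np.eye(n * m) + N
    # The Kronecker factor is nilpotent ...
    print(np.allclose(np.linalg.matrix_power(N, m), 0))
    # ... hence -I + N is invertible: B_11 can be zeroed for any right-hand side.
    print(np.linalg.matrix_rank(M) == n * m)
```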

because it results in a square system of $nm$ equations that has a solution. The reduction of the other blocks follows from the above since, writing $X := R_{21}^T$,

$$R_{11}^T - J_n(\lambda)R_{11}^TJ_m(0) = \left(-X + J_n(\lambda)XJ_m(0)^T\right)Z \quad \text{for } R_{11}^T = -XZ,$$
$$-R_{22}^T + J_n(\lambda)^TR_{22}^TJ_m(0)^T = Z\left(-X + J_n(\lambda)XJ_m(0)^T\right) \quad \text{for } R_{22}^T = ZX,$$
$$R_{12}^T - J_n(\lambda)^TR_{12}^TJ_m(0) = Z\left(-X + J_n(\lambda)XJ_m(0)^T\right)Z \quad \text{for } R_{12}^T = -ZXZ,$$

where the matrices $Z$ (see (44)) are of the corresponding sizes. Altogether, we have that $D(H_n(\lambda), K_m)$ is zero.

3.4.2. Pairs of blocks $D(H_n(\lambda), L_m)$

Due to Lemma 3.3(ii), it suffices to prove that each group of four matrices $((A, B), (-A^T, -B^T))$ can be reduced to the group of the form (21) by adding $(R^TL_m + H_n(\lambda)S,\ S^TH_n(\lambda) + L_mR)$, $S \in \mathbb{C}^{2n \times (2m+1)}$, $R \in \mathbb{C}^{(2m+1) \times 2n}$. Obviously, if we reduce only $(A, B)$, then $(-A^T, -B^T)$ will be reduced automatically. We have

$$\Delta(A, B) = R^TL_m + H_n(\lambda)S = \left(R^T\begin{bmatrix} 0 & F_m \\ -F_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}S,\ R^T\begin{bmatrix} 0 & G_m \\ -G_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & J_n(\lambda) \\ -J_n(\lambda)^T & 0 \end{bmatrix}S\right).$$

It is easy to check that we can set $A$ to zero. To preserve $A$, we must hereafter take $R = [R_{ij}]_{i,j=1}^2$ and $S = [S_{ij}]_{i,j=1}^2$ such that

$$R^T\begin{bmatrix} 0 & F_m \\ -F_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}S = 0, \quad \text{or equivalently} \quad S = \begin{bmatrix} -R_{22}^TF_m^T & R_{12}^TF_m \\ R_{21}^TF_m^T & -R_{11}^TF_m \end{bmatrix}.$$

Thus $B = [B_{ij}]_{i,j=1}^2$ is reduced by adding

$$\Delta B = \begin{bmatrix} R_{11}^T & R_{21}^T \\ R_{12}^T & R_{22}^T \end{bmatrix}\begin{bmatrix} 0 & G_m \\ -G_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & J_n(\lambda) \\ -J_n(\lambda)^T & 0 \end{bmatrix}\begin{bmatrix} -R_{22}^TF_m^T & R_{12}^TF_m \\ R_{21}^TF_m^T & -R_{11}^TF_m \end{bmatrix} = \begin{bmatrix} -R_{21}^TG_m^T + J_n(\lambda)R_{21}^TF_m^T & R_{11}^TG_m - J_n(\lambda)R_{11}^TF_m \\ -R_{22}^TG_m^T + J_n(\lambda)^TR_{22}^TF_m^T & R_{12}^TG_m - J_n(\lambda)^TR_{12}^TF_m \end{bmatrix}.$$

First, adding

$$\Delta B_{11} = -R_{21}^TG_m^T + J_n(\lambda)R_{21}^TF_m^T = \begin{bmatrix} \lambda r_{11}-r_{12}+r_{21} & \lambda r_{12}-r_{13}+r_{22} & \dots & \lambda r_{1m}-r_{1,m+1}+r_{2m} \\ \lambda r_{21}-r_{22}+r_{31} & \lambda r_{22}-r_{23}+r_{32} & \dots & \lambda r_{2m}-r_{2,m+1}+r_{3m} \\ \vdots & \vdots & & \vdots \\ \lambda r_{n-1,1}-r_{n-1,2}+r_{n1} & \lambda r_{n-1,2}-r_{n-1,3}+r_{n2} & \dots & \lambda r_{n-1,m}-r_{n-1,m+1}+r_{nm} \\ \lambda r_{n1}-r_{n2} & \lambda r_{n2}-r_{n3} & \dots & \lambda r_{nm}-r_{n,m+1} \end{bmatrix},$$

we can set $B_{11}$ to zero as follows. For the last ($n$-th) row of $\Delta B_{11}$ we have the following system of equations:

$$\begin{bmatrix} \lambda & -1 & & & \\ & \lambda & -1 & & \\ & & \ddots & \ddots & \\ & & & \lambda & -1 \end{bmatrix} \begin{bmatrix} r_{n1} \\ r_{n2} \\ \vdots \\ r_{nm} \\ r_{n,m+1} \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}, \quad (52)$$

which has a solution. For the $(n-1)$-th row we have

$$\begin{bmatrix} \lambda & -1 & & \\ & \ddots & \ddots & \\ & & \lambda & -1 \end{bmatrix} \begin{bmatrix} r_{n-1,1} \\ r_{n-1,2} \\ \vdots \\ r_{n-1,m+1} \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix} - \begin{bmatrix} r_{n1} \\ \vdots \\ r_{nm} \end{bmatrix}. \quad (53)$$

The variables $r_{n1}, r_{n2}, \dots, r_{nm}$ are known from (52); thus (53) becomes a system of the type (52), and so (53) has a solution. Repeating this reduction for every row from the bottom to the top, we set $B_{11}$ to zero. The block $B_{21}$ is reduced like the block $B_{11}$, and thus we omit this verification.

Now we turn to the reduction of $B_{12}$ and $B_{22}$. It suffices to consider only $B_{12}$. We have

$$\Delta B_{12} = R_{11}^TG_m - J_n(\lambda)R_{11}^TF_m = \begin{bmatrix} -\lambda r_{11}-r_{21} & r_{11}-\lambda r_{12}-r_{22} & \dots & r_{1,m-1}-\lambda r_{1m}-r_{2m} & r_{1m} \\ -\lambda r_{21}-r_{31} & r_{21}-\lambda r_{22}-r_{32} & \dots & r_{2,m-1}-\lambda r_{2m}-r_{3m} & r_{2m} \\ \vdots & \vdots & & \vdots & \vdots \\ -\lambda r_{n-1,1}-r_{n1} & r_{n-1,1}-\lambda r_{n-1,2}-r_{n2} & \dots & r_{n-1,m-1}-\lambda r_{n-1,m}-r_{nm} & r_{n-1,m} \\ -\lambda r_{n1} & r_{n1}-\lambda r_{n2} & \dots & r_{n,m-1}-\lambda r_{nm} & r_{nm} \end{bmatrix}.$$

Adding $\Delta B_{12}$, we reduce $B_{12}$ to the form $0$. Summing up the results for all the blocks, we have that $D(H_n(\lambda), L_m)$ is equal to (21).

3.4.3. Pairs of blocks $D(K_n, L_m)$

Due to Lemma 3.3(ii), it suffices to prove that each group of four matrices $((A, B), (-A^T, -B^T))$ can be reduced to the group of the form (22) by adding $(R^TL_m + K_nS,\ S^TK_n + L_mR)$, $S \in \mathbb{C}^{2n \times (2m+1)}$, $R \in \mathbb{C}^{(2m+1) \times 2n}$.
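The back-substitution used for (52) and (53) is easy to check numerically (a sketch; the bidiagonal shape of the matrix is our reconstruction): with $m+1$ unknowns and $m$ equations, fixing the first unknown and substituting forward always gives a solution, for every $\lambda$, including $\lambda = 0$.

```python
import numpy as np

def solve_52(lam, b):
    """Solve lam*r[j] - r[j+1] = b[j], j = 0..m-1, by back-substitution."""
    m = len(b)
    r = np.zeros(m + 1)
    r[0] = 0.0                       # free parameter of the underdetermined system
    for j in range(m):
        r[j + 1] = lam * r[j] - b[j]
    return r

rng = np.random.default_rng(4)
for lam in (0.0, 1.0, 3.5):
    b = rng.standard_normal(6)
    r = solve_52(lam, b)
    print(np.allclose(lam * r[:-1] - r[1:], b))
```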

As in the previous sections, we reduce only $(A, B)$; then $(-A^T, -B^T)$ is reduced automatically. We have

$$\Delta(A, B) = R^TL_m + K_nS = \left(R^T\begin{bmatrix} 0 & F_m \\ -F_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & J_n(0) \\ -J_n(0)^T & 0 \end{bmatrix}S,\ R^T\begin{bmatrix} 0 & G_m \\ -G_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}S\right).$$

It is clear that we can set $B$ to zero. To preserve $B$, we must hereafter take $R = [R_{ij}]_{i,j=1}^2$ and $S = [S_{ij}]_{i,j=1}^2$ such that

$$R^T\begin{bmatrix} 0 & G_m \\ -G_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}S = 0, \quad \text{or equivalently} \quad S = \begin{bmatrix} -R_{22}^TG_m^T & R_{12}^TG_m \\ R_{21}^TG_m^T & -R_{11}^TG_m \end{bmatrix}.$$

Hence $A = [A_{ij}]_{i,j=1}^2$ is reduced by adding

$$\Delta A = \begin{bmatrix} R_{11}^T & R_{21}^T \\ R_{12}^T & R_{22}^T \end{bmatrix}\begin{bmatrix} 0 & F_m \\ -F_m^T & 0 \end{bmatrix} + \begin{bmatrix} 0 & J_n(0) \\ -J_n(0)^T & 0 \end{bmatrix}\begin{bmatrix} -R_{22}^TG_m^T & R_{12}^TG_m \\ R_{21}^TG_m^T & -R_{11}^TG_m \end{bmatrix} = \begin{bmatrix} -R_{21}^TF_m^T + J_n(0)R_{21}^TG_m^T & R_{11}^TF_m - J_n(0)R_{11}^TG_m \\ -R_{22}^TF_m^T + J_n(0)^TR_{22}^TG_m^T & R_{12}^TF_m - J_n(0)^TR_{12}^TG_m \end{bmatrix}.$$

First we reduce the block $A_{11}$ ($A_{21}$ is reduced in the same way). We have

$$\Delta A_{11} = -R_{21}^TF_m^T + J_n(0)R_{21}^TG_m^T = \begin{bmatrix} -r_{11}+r_{22} & -r_{12}+r_{23} & -r_{13}+r_{24} & \dots & -r_{1m}+r_{2,m+1} \\ -r_{21}+r_{32} & -r_{22}+r_{33} & -r_{23}+r_{34} & \dots & -r_{2m}+r_{3,m+1} \\ \vdots & \vdots & \vdots & & \vdots \\ -r_{n-1,1}+r_{n2} & -r_{n-1,2}+r_{n3} & -r_{n-1,3}+r_{n4} & \dots & -r_{n-1,m}+r_{n,m+1} \\ -r_{n1} & -r_{n2} & -r_{n3} & \dots & -r_{nm} \end{bmatrix},$$

and thus we reduce each diagonal of $A_{11}$ independently. For each of the first $m$ diagonals, starting from the bottom-left corner, we have a system of type (43), which has a solution, and for the remaining diagonals we have a system of type (42), which also has a solution. Thus, adding $\Delta A_{11}$, we set $A_{11}$ to zero.

Last, we reduce the blocks $A_{12}$ and $A_{22}$; it is enough to consider only $A_{12}$. We have

$$\Delta A_{12} = R_{11}^TF_m - J_n(0)R_{11}^TG_m = \begin{bmatrix} r_{11} & r_{12}-r_{21} & r_{13}-r_{22} & \dots & r_{1m}-r_{2,m-1} & -r_{2m} \\ r_{21} & r_{22}-r_{31} & r_{23}-r_{32} & \dots & r_{2m}-r_{3,m-1} & -r_{3m} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ r_{n-1,1} & r_{n-1,2}-r_{n1} & r_{n-1,3}-r_{n2} & \dots & r_{n-1,m}-r_{n,m-1} & -r_{nm} \\ r_{n1} & r_{n2} & r_{n3} & \dots & r_{nm} & 0 \end{bmatrix}.$$
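The claim that adding $\Delta A_{11}$ can zero the whole block $A_{11}$ can be verified numerically. In the sketch below the conventions $F_m = [I\ 0]$ and $G_m = [0\ I]$, both $m \times (m+1)$, are assumptions matching the computations above.

```python
import numpy as np

# Check: the map R |-> -R F_m^T + J_n(0) R G_m^T, with R of size n x (m+1),
# is onto the n x m matrices, so A_11 can be zeroed completely.
n, m = 4, 3
Fm = np.hstack([np.eye(m), np.zeros((m, 1))])
Gm = np.hstack([np.zeros((m, 1)), np.eye(m)])
J = np.diag(np.ones(n - 1), 1)               # nilpotent Jordan block J_n(0)

def T(R):
    return -R @ Fm.T + J @ R @ Gm.T          # entries: -r_ij + r_{i+1,j+1}

# Stack the images of the basis matrices: full row rank n*m means surjectivity.
cols = [T(np.eye(n)[:, [i]] @ np.eye(m + 1)[[j], :]).flatten()
        for i in range(n) for j in range(m + 1)]
M = np.column_stack(cols)
print(np.linalg.matrix_rank(M) == n * m)
```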


More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

Determinants of Partition Matrices

Determinants of Partition Matrices journal of number theory 56, 283297 (1996) article no. 0018 Determinants of Partition Matrices Georg Martin Reinhart Wellesley College Communicated by A. Hildebrand Received February 14, 1994; revised

More information

Multiple eigenvalues

Multiple eigenvalues Multiple eigenvalues arxiv:0711.3948v1 [math.na] 6 Nov 007 Joseph B. Keller Departments of Mathematics and Mechanical Engineering Stanford University Stanford, CA 94305-15 June 4, 007 Abstract The dimensions

More information

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Frank Curtis, John Drew, Chi-Kwong Li, and Daniel Pragel September 25, 2003 Abstract We study central groupoids, central

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

Chapter 4. Matrices and Matrix Rings

Chapter 4. Matrices and Matrix Rings Chapter 4 Matrices and Matrix Rings We first consider matrices in full generality, i.e., over an arbitrary ring R. However, after the first few pages, it will be assumed that R is commutative. The topics,

More information

Gaussian Elimination without/with Pivoting and Cholesky Decomposition

Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination WITHOUT pivoting Notation: For a matrix A R n n we define for k {,,n} the leading principal submatrix a a k A

More information

Symmetric matrices and dot products

Symmetric matrices and dot products Symmetric matrices and dot products Proposition An n n matrix A is symmetric iff, for all x, y in R n, (Ax) y = x (Ay). Proof. If A is symmetric, then (Ax) y = x T A T y = x T Ay = x (Ay). If equality

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Jordan Structures of Alternating Matrix Polynomials

Jordan Structures of Alternating Matrix Polynomials Jordan Structures of Alternating Matrix Polynomials D. Steven Mackey Niloufer Mackey Christian Mehl Volker Mehrmann August 17, 2009 Abstract Alternating matrix polynomials, that is, polynomials whose coefficients

More information

Diagonalisierung. Eigenwerte, Eigenvektoren, Mathematische Methoden der Physik I. Vorlesungsnotizen zu

Diagonalisierung. Eigenwerte, Eigenvektoren, Mathematische Methoden der Physik I. Vorlesungsnotizen zu Eigenwerte, Eigenvektoren, Diagonalisierung Vorlesungsnotizen zu Mathematische Methoden der Physik I J. Mark Heinzle Gravitational Physics, Faculty of Physics University of Vienna Version 5/5/2 2 version

More information

Adjoint Representations of the Symmetric Group

Adjoint Representations of the Symmetric Group Adjoint Representations of the Symmetric Group Mahir Bilen Can 1 and Miles Jones 2 1 mahirbilencan@gmail.com 2 mej016@ucsd.edu Abstract We study the restriction to the symmetric group, S n of the adjoint

More information

Math 240 Calculus III

Math 240 Calculus III The Calculus III Summer 2015, Session II Wednesday, July 8, 2015 Agenda 1. of the determinant 2. determinants 3. of determinants What is the determinant? Yesterday: Ax = b has a unique solution when A

More information

Spectral radius, symmetric and positive matrices

Spectral radius, symmetric and positive matrices Spectral radius, symmetric and positive matrices Zdeněk Dvořák April 28, 2016 1 Spectral radius Definition 1. The spectral radius of a square matrix A is ρ(a) = max{ λ : λ is an eigenvalue of A}. For an

More information

Exercises Chapter II.

Exercises Chapter II. Page 64 Exercises Chapter II. 5. Let A = (1, 2) and B = ( 2, 6). Sketch vectors of the form X = c 1 A + c 2 B for various values of c 1 and c 2. Which vectors in R 2 can be written in this manner? B y

More information

Chapter 3. Matrices. 3.1 Matrices

Chapter 3. Matrices. 3.1 Matrices 40 Chapter 3 Matrices 3.1 Matrices Definition 3.1 Matrix) A matrix A is a rectangular array of m n real numbers {a ij } written as a 11 a 12 a 1n a 21 a 22 a 2n A =.... a m1 a m2 a mn The array has m rows

More information

Z-Pencils. November 20, Abstract

Z-Pencils. November 20, Abstract Z-Pencils J. J. McDonald D. D. Olesky H. Schneider M. J. Tsatsomeros P. van den Driessche November 20, 2006 Abstract The matrix pencil (A, B) = {tb A t C} is considered under the assumptions that A is

More information

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,

More information

Problem 1. CS205 Homework #2 Solutions. Solution

Problem 1. CS205 Homework #2 Solutions. Solution CS205 Homework #2 s Problem 1 [Heath 3.29, page 152] Let v be a nonzero n-vector. The hyperplane normal to v is the (n-1)-dimensional subspace of all vectors z such that v T z = 0. A reflector is a linear

More information

Chap. 3. Controlled Systems, Controllability

Chap. 3. Controlled Systems, Controllability Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :

More information

LECTURE 25-26: CARTAN S THEOREM OF MAXIMAL TORI. 1. Maximal Tori

LECTURE 25-26: CARTAN S THEOREM OF MAXIMAL TORI. 1. Maximal Tori LECTURE 25-26: CARTAN S THEOREM OF MAXIMAL TORI 1. Maximal Tori By a torus we mean a compact connected abelian Lie group, so a torus is a Lie group that is isomorphic to T n = R n /Z n. Definition 1.1.

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

The Eigenvalue Problem: Perturbation Theory

The Eigenvalue Problem: Perturbation Theory Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just

More information

A proof of the Jordan normal form theorem

A proof of the Jordan normal form theorem A proof of the Jordan normal form theorem Jordan normal form theorem states that any matrix is similar to a blockdiagonal matrix with Jordan blocks on the diagonal. To prove it, we first reformulate it

More information

ACM106a - Homework 2 Solutions

ACM106a - Homework 2 Solutions ACM06a - Homework 2 Solutions prepared by Svitlana Vyetrenko October 7, 2006. Chapter 2, problem 2.2 (solution adapted from Golub, Van Loan, pp.52-54): For the proof we will use the fact that if A C m

More information

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.

More information

Some notes on Coxeter groups

Some notes on Coxeter groups Some notes on Coxeter groups Brooks Roberts November 28, 2017 CONTENTS 1 Contents 1 Sources 2 2 Reflections 3 3 The orthogonal group 7 4 Finite subgroups in two dimensions 9 5 Finite subgroups in three

More information

Discrete Applied Mathematics

Discrete Applied Mathematics Discrete Applied Mathematics 194 (015) 37 59 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: wwwelseviercom/locate/dam Loopy, Hankel, and combinatorially skew-hankel

More information

DOMINO TILING. Contents 1. Introduction 1 2. Rectangular Grids 2 Acknowledgments 10 References 10

DOMINO TILING. Contents 1. Introduction 1 2. Rectangular Grids 2 Acknowledgments 10 References 10 DOMINO TILING KASPER BORYS Abstract In this paper we explore the problem of domino tiling: tessellating a region with x2 rectangular dominoes First we address the question of existence for domino tilings

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

Theorems of Erdős-Ko-Rado type in polar spaces

Theorems of Erdős-Ko-Rado type in polar spaces Theorems of Erdős-Ko-Rado type in polar spaces Valentina Pepe, Leo Storme, Frédéric Vanhove Department of Mathematics, Ghent University, Krijgslaan 28-S22, 9000 Ghent, Belgium Abstract We consider Erdős-Ko-Rado

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

The solution of the equation AX + X B = 0

The solution of the equation AX + X B = 0 The solution of the equation AX + X B = 0 Fernando De Terán, Froilán M Dopico, Nathan Guillery, Daniel Montealegre, and Nicolás Reyes August 11, 011 Abstract We describe how to find the general solution

More information

Math 577 Assignment 7

Math 577 Assignment 7 Math 577 Assignment 7 Thanks for Yu Cao 1. Solution. The linear system being solved is Ax = 0, where A is a (n 1 (n 1 matrix such that 2 1 1 2 1 A =......... 1 2 1 1 2 and x = (U 1, U 2,, U n 1. By the

More information

Lecture 1 INF-MAT : Chapter 2. Examples of Linear Systems

Lecture 1 INF-MAT : Chapter 2. Examples of Linear Systems Lecture 1 INF-MAT 4350 2010: Chapter 2. Examples of Linear Systems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo August 26, 2010 Notation The set of natural

More information

An explicit description of the irreducible components of the set of matrix pencils with bounded normal rank

An explicit description of the irreducible components of the set of matrix pencils with bounded normal rank An explicit description of the irreducible components of the set of matrix pencils with bounded normal rank Fernando De Terán a,, Froilán M. Dopico b, J. M. Landsberg c a Departamento de Matemáticas, Universidad

More information

Notes on Mathematics

Notes on Mathematics Notes on Mathematics - 12 1 Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam 1 Supported by a grant from MHRD 2 Contents I Linear Algebra 7 1 Matrices 9 1.1 Definition of a Matrix......................................

More information

MATH 581D FINAL EXAM Autumn December 12, 2016

MATH 581D FINAL EXAM Autumn December 12, 2016 MATH 58D FINAL EXAM Autumn 206 December 2, 206 NAME: SIGNATURE: Instructions: there are 6 problems on the final. Aim for solving 4 problems, but do as much as you can. Partial credit will be given on all

More information

RIEMANN SURFACES. max(0, deg x f)x.

RIEMANN SURFACES. max(0, deg x f)x. RIEMANN SURFACES 10. Weeks 11 12: Riemann-Roch theorem and applications 10.1. Divisors. The notion of a divisor looks very simple. Let X be a compact Riemann surface. A divisor is an expression a x x x

More information

Commuting nilpotent matrices and pairs of partitions

Commuting nilpotent matrices and pairs of partitions Commuting nilpotent matrices and pairs of partitions Roberta Basili Algebraic Combinatorics Meets Inverse Systems Montréal, January 19-21, 2007 We will explain some results on commuting n n matrices and

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

27. Topological classification of complex linear foliations

27. Topological classification of complex linear foliations 27. Topological classification of complex linear foliations 545 H. Find the expression of the corresponding element [Γ ε ] H 1 (L ε, Z) through [Γ 1 ε], [Γ 2 ε], [δ ε ]. Problem 26.24. Prove that for any

More information

Linear Algebra Review

Linear Algebra Review January 29, 2013 Table of contents Metrics Metric Given a space X, then d : X X R + 0 and z in X if: d(x, y) = 0 is equivalent to x = y d(x, y) = d(y, x) d(x, y) d(x, z) + d(z, y) is a metric is for all

More information