The gap distance to the set of singular matrix pencils


Thomas Berger · Hannes Gernandt · Carsten Trunk · Henrik Winkler · Michał Wojtylak

March 6, 2018

Abstract

We study matrix pencils sE − A using the associated linear subspace ker[A, −E]. The distance between subspaces is measured in terms of the gap metric. In particular, we investigate the gap distance of a regular matrix pencil to the set of singular pencils and provide upper and lower bounds for it. A relation to the distance to singularity in the Frobenius norm is provided.

Keywords: distance to singularity, matrix pencils, gap distance

MSC: 15A22, 47A06, 47A55

1 Introduction

We consider square matrix pencils A(s) = sE − A with E, A ∈ C^{n×n} which are regular, i.e., det(sE − A) is not the zero polynomial. If det(sE − A) = 0 for all s ∈ C, then the matrix pencil A(s) is called singular. In the numerical treatment of matrix pencils it turns out that regular pencils which are close to a singular one are difficult to handle [6]. It is a hard task to compute canonical forms, because rank decisions seem to be impossible in general. One way to characterize the distance to singularity δ(E, A) for a regular matrix pencil sE − A is the Frobenius norm of the smallest perturbation which leads to a singular pencil,

$$\delta(E,A) := \inf_{\Delta_E,\,\Delta_A \in \mathbb{C}^{n\times n}} \big\{ \|[\Delta_E, \Delta_A]\|_F \;:\; s(E+\Delta_E) - (A+\Delta_A) \text{ is singular} \big\}, \qquad (1)$$

see [9]. Here ‖M‖_F := √(tr(M*M)) is the Frobenius norm of a matrix M ∈ C^{m×n}, and M* is the adjoint of M. Although in [9] several upper and lower bounds for δ(E, A) were obtained, they were all claimed to be insufficient. For current purposes we mention only (cf. [3, 9])

$$\frac{\sigma_{\min}(W_n(E,A))}{1+\cos\frac{\pi}{n+1}} \;\le\; \delta(E,A) \;\le\; \min\left\{ \sigma_{\min}\!\left(\begin{bmatrix} E \\ A \end{bmatrix}\right),\; \sigma_{\min}([E,A]) \right\}, \qquad (2)$$

Affiliations: Fachbereich Mathematik, Universität Hamburg, Bundesstraße 55, 20146 Hamburg, Germany, thomas.berger@uni-hamburg.de · Institut für Mathematik, Technische Universität Ilmenau, Weimarer Straße 25, Ilmenau, Germany · Instituto Argentino de Matemática Alberto P. Calderón (CONICET), Saavedra 15, (1083) Buenos Aires, Argentina · Jagiellonian University, Faculty of Mathematics and Computer Science, ul. Łojasiewicza 6, Kraków, Poland. The author gratefully acknowledges the support of the Alexander von Humboldt Foundation with a Return Home Scholarship.

where W_n(E, A) is the block matrix

$$W_n(E,A) := \begin{bmatrix} A & & \\ E & \ddots & \\ & \ddots & A \\ & & E \end{bmatrix} \in \mathbb{C}^{(n+1)n \times n^2},$$

see also [8], and σ_min(M) is the smallest positive singular value of the matrix M ∈ C^{m×k}. Recently, in [2], new estimates were obtained for the case that the perturbation sΔ_E − Δ_A in (1) has rank one and the pencil is Hermitian. In [6], the authors proposed a successful gradient based algorithm for finding a nearby singular pencil; however, the cost function there is not the distance δ(E, A) itself.

Following [7], we associate with A(s) = sE − A the subspace L_A = ker[A, −E] of C^{2n}, see also [2, 26]. Note that if E equals the identity, then L_A coincides with the graph of A. For two pencils A(s) = sE − A and Ã(s) = sẼ − Ã we define their gap distance as

θ(A, Ã) = ‖P_{L_A} − P_{L_Ã}‖,

where P_{L_A} and P_{L_Ã} are the orthogonal projections onto L_A = ker[A, −E] and L_Ã = ker[Ã, −Ẽ], respectively, and ‖M‖ := max_{‖x‖=1} ‖Mx‖ denotes the spectral norm. The central notion of the present paper is the gap distance to singularity θ_sing(E, A) of a pencil A(s), which is defined as the infimum of all θ(A, Ã) where Ã(s) is a singular matrix pencil.

Let us mention here the basic property of θ_sing(E, A) that distinguishes it from δ(E, A). Namely, if the subspaces L_A and L_Ã coincide (i.e., the pencils A(s) and Ã(s) generate the same linear relation, see [7]), then θ_sing(E, A) = θ_sing(Ẽ, Ã). In other words, the distance θ_sing(E, A) depends on (the linear relation generated by) the subspace L_A only. In particular, θ_sing(SE, SA) = θ_sing(E, A) for any invertible S, while in contrast, δ(τE, τA) → 0 for τ → 0. Observe also that if θ(A, Ã) < θ_sing(E, A), then regularity of any matrix pencil generating the same linear relation as Ã(s) is guaranteed. This fact allows one to study large norm deviations of the matrices E and A, see Section 7.2. Another important issue is the asymmetry of θ_sing(E, A) with respect to the Kronecker canonical form, see Section 3. This fact is particularly interesting when applied to classes with already restricted Kronecker canonical form.
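The gap distance between two pencils can be evaluated numerically. The following sketch (our own illustration, not part of the paper; the function name `gap` is ours) computes θ(A, Ã) from orthonormal bases of ker[A, −E] and ker[Ã, −Ẽ], and checks the invariance L_{SA} = L_A under left multiplication by an invertible S:

```python
import numpy as np
from scipy.linalg import null_space

def gap(A, E, At, Et):
    """theta(A, A~) = || P_{L_A} - P_{L_A~} || with L_A = ker[A, -E] (spectral norm)."""
    B = null_space(np.hstack([A, -E]))      # orthonormal basis of L_A
    Bt = null_space(np.hstack([At, -Et]))   # orthonormal basis of L_A~
    P = B @ B.conj().T                      # orthogonal projection onto L_A
    Pt = Bt @ Bt.conj().T
    return np.linalg.norm(P - Pt, 2)        # spectral norm of the difference

# scaling invariance: ker[SA, -SE] = ker[A, -E] for invertible S, so the gap is 0
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[0.0, 1.0], [1.0, 0.0]])
S = np.array([[2.0, 1.0], [0.0, 3.0]])
print(gap(A, E, S @ A, S @ E))  # ~0
```

The design choice here mirrors the definition: the gap is the spectral norm of the difference of the two orthogonal projections (Proposition 2.1 (a)).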
Applications to a recently studied class of pencils connected with port-Hamiltonian systems can be found in Section 7.3.

In the present paper we give several bounds on θ_sing(E, A). For instance, in Theorem 4.3 we prove that

θ_sing(E, A) ≥ σ_min(W_n(E, A)) / (√2 ‖[E, A]‖).

In Theorem 5.4 we obtain upper bounds in terms of the geometry of the underlying subspaces. A simplified version says that if x, y ∈ C^n \ {0} are such that Ax = λEx and Ay = µEy for λ, µ ∈ C with λ ≠ µ, then with

$$z := \begin{pmatrix} x - y \\ \mu x - \lambda y \end{pmatrix}, \qquad J := \operatorname{span}\left\{ \begin{pmatrix} x \\ \lambda x \end{pmatrix}, \begin{pmatrix} y \\ \mu y \end{pmatrix} \right\}$$

and P_{J⊥} denoting the orthogonal projection onto the orthogonal complement J⊥ of J, we have

θ_sing(E, A) ≤ ‖P_{L_A⊥} z‖ / ‖P_{J⊥} z‖.  (3)

Furthermore, in Theorem 6.2 we prove that θ_sing(E, A) and δ(E, A) are related by

$$\frac{\delta(E,A)}{\|[E,A]\|_F} \;\le\; \theta_{\mathrm{sing}}(E,A) \;\le\; \frac{\delta(E,A)}{\sigma_{\min}([E,A]) - \delta(E,A)}. \qquad (4)$$

Note that combining e.g. (3) and (4) yields a new upper bound for δ(E, A). Another bound, the proof of which is based on comparing the distances, is

θ_sing(E, A) ≥ 1/√(1 + ‖A^{-1}E‖²),

where A is assumed to be invertible, see Theorem 6.5.

The paper is organized as follows: We recall the gap distance between subspaces in Section 2 together with some basic properties that are needed in due course. In order to define the gap distance between matrix pencils we associate with a pencil A(s) = sE − A the linear subspace L_A = ker[A, −E], which is discussed in Section 3 together with some of its properties. Then we introduce the gap distance between matrix pencils and the gap distance to singularity θ_sing(E, A). We derive lower and upper bounds for this number in Sections 4 and 5. A comparison of the gap distance to singularity with the distance to singularity δ(E, A) is given in Section 6. In Section 7 we discuss some examples and, in particular, we show that there are classes of matrix pencils for which regularity can be concluded using θ_sing(E, A), but not using δ(E, A).

Throughout this article we use the following notation: For a subspace L ⊆ C^n we denote by S_L := {x ∈ L : ‖x‖ = 1} the unit sphere in L and by d(x, L) := inf_{y∈L} ‖x − y‖ the distance of a vector x ∈ C^n to L. Furthermore, P_L denotes the orthogonal projection onto the subspace L. By L_1 ∔ L_2 we denote the direct sum of two subspaces L_1, L_2 ⊆ C^n with L_1 ∩ L_2 = {0}, and by L_1 ⊕ L_2 their orthogonal sum, provided that L_1 ⊥ L_2. The singular value decomposition of a matrix A ∈ C^{n×m} reads

$$A = U\Sigma V^*, \qquad \Sigma = \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix}, \qquad \Sigma_r = \operatorname{diag}(\sigma_1, \ldots, \sigma_r), \qquad (5)$$

with unitary matrices U ∈ C^{n×n}, V ∈ C^{m×m} and singular values σ_1 ≥ … ≥ σ_r > 0, r = rk A. Note that ‖A‖ = σ_1 = σ_max(A), and for the reduced minimum modulus of A we have

min_{‖x‖=1} { ‖Ax‖ : x ⊥ ker A } = σ_r = σ_min(A).

We denote by σ̂_min(A) the minimum modulus of A, that is, σ̂_min(A) := min_{‖x‖=1} ‖Ax‖.
In other words, σ̂_min(A) = σ_min(A) if ker A = {0}, and σ̂_min(A) = 0 otherwise.

2 The gap between subspaces

Recall from [3, 4, 9] that the gap distance between subspaces L, M ⊆ C^n with L ≠ {0} or M ≠ {0} is given by

$$\theta(L,M) := \max\left\{ \sup_{x \in S_M} d(x, L),\; \sup_{x \in S_L} d(x, M) \right\}. \qquad (6)$$

The next proposition collects some well-known properties of the gap distance, see [9, Corollary IV.2.6, Theorem IV.2.9], [4, Section S4.3].

Proposition 2.1. For any two subspaces L, M ⊆ C^n the gap θ(L, M) has the properties:

(a) θ(L, M) = ‖P_M − P_L‖;

(b) θ(L, M) ≤ 1, and if θ(L, M) < 1 then dim L = dim M, P_M L = M and P_L M = L;

4 (c) θ(l,m) = θ(l,m ); (d) θ(l,m) = max{ P M L, P L M }. Every matrix C C n d induces a subspace L C n via L = ranc and vice versa. For matrices C C n d of full rank with d n, the following formula for the orthogonal projection onto the range of C holds. P ranc = C(C C) C. (7) If C has orthonormal columns then C C = I d and the equation (7) simplifies to P ranc = CC. Moreover, with Proposition 2. (d) we obtain the following corollary. Corollary 2.2. Let L, L C n be d-dimensional subspaces, d n, with L = ranc and L = ran C for some matrices C, C C n d. Then θ(l, L) = max{ (I n C(C C) C ) ran C, (I n C( C C) C ) ranc }. For later use we record a formula for the gap between two d-dimensional subspaces of C n. In the case of R n, a proof using the CS-decomposition can be found in [5]; here we present a direct method. Proposition 2.3. Let L, L C n be d-dimensional subspaces, d n, such that L = ranc and L = ran C for some matrices C, C C n d of rank d with orthonormal columns. Then θ(l, L) = σ min (C C) 2. (8) Proof. Choose C and C such that U = [C,C ] and Ũ = [ C, C ] are unitary matrices. Then also Q = U Ũ is unitary. Note that Q is of block form [ ] [ ] Q Q Q = 2 C = C C C Q 2 Q 22 C C C C. With Q Q = QQ = I we find that The above relations imply that I = Q Q +Q 2 Q 2, I = Q Q +Q 2 Q 2. σ min (C C) 2 = min x = Q x 2 = min x = x (I Q 2Q 2 )x = max x = x Q 2Q 2 x = Q 2 2 (9) Using that σ min (C C) = σmin ( C C) and Q 2 = Q 2, similarly we obtain that σ min (C C) 2 = Q 2 2. () With P L = CC and P L = C C we find that θ(l, L) = PL CC P L = C C U = (CC C C )Ũ, () and a straightforward calculation using U U = Ũ Ũ = I gives [ ] [ ] [ ] C (CC C C )[ C, C C ] = C Q2 C C =. Q 2 C 4

Since

$$\left\| \begin{bmatrix} 0 & Q_{12} \\ -Q_{21} & 0 \end{bmatrix} \right\| = \max\{ \|Q_{12}\|, \|Q_{21}\| \},$$

the last relation together with (9), (10) and (11) implies formula (8). □

Remark 2.4. Note that also the following relations hold, cf. (9) and (10):

θ(L, L̃) = ‖C*C̃_⊥‖ = ‖C̃*C_⊥‖.

Furthermore, if x_1, x_2 ∈ C^n are non-zero vectors and L_i = span{x_i}, i = 1, 2, then Proposition 2.3 gives

$$\theta(L_1, L_2) = \sqrt{1 - \frac{|x_1^* x_2|^2}{\|x_1\|^2 \|x_2\|^2}}. \qquad (12)$$

We stress that in complex vector spaces the formula for the gap as in (12) is not the same as the sine of the angle between the two spanning vectors, which is defined by

sin∠(x_1, x_2) := √(1 − cos²∠(x_1, x_2)),  cos∠(x_1, x_2) := Re(x_1*x_2) / (‖x_1‖ ‖x_2‖).

For the convenience of the reader, some properties of the Frobenius norm ‖·‖_F of a matrix A = (a_ij) ∈ C^{n×m} are mentioned:

‖A‖_F = √(tr(A*A)) = √(tr(AA*)) = ( Σ_{i=1}^{n} Σ_{j=1}^{m} |a_ij|² )^{1/2},

and for unitary matrices U ∈ C^{n×n} and V ∈ C^{m×m} we have ‖M‖_F = ‖UMV‖_F. A direct consequence of the above invariance is the following elementary lemma.

Lemma 2.5. For A ∈ C^{n×m} and B ∈ C^{m×k} we have ‖AB‖_F ≤ ‖A‖_F ‖B‖ and ‖AB‖_F ≤ ‖A‖ ‖B‖_F.

Proof. Consider the singular value decomposition B = UΣV* with unitary U ∈ C^{m×m}, V ∈ C^{k×k}. Using the fact that ‖B‖ = σ_max(B), we find that

‖AB‖_F = ‖AUΣV*‖_F = ‖AUΣ‖_F ≤ σ_max(B) ‖AU‖_F = ‖A‖_F ‖B‖.

The second inequality can be inferred in a similar way. □

In the following we show that if the gap distance between the subspaces L, L̃ ⊆ C^n is small, then the representing matrices can be chosen in such a way that the norm of their difference is small.

Proposition 2.6. Let L, L̃ ⊆ C^n be subspaces with θ(L, L̃) < 1 and L = ran C for some matrix C ∈ C^{n×d} with linearly independent columns c_1, …, c_d ∈ C^n. Then L̃ = ran(P_L̃ C) and

‖P_L̃ C − C‖ ≤ θ(L, L̃) ‖C‖,  ‖P_L̃ C − C‖_F ≤ θ(L, L̃) ‖C‖_F.  (13)

6 Proof. Since θ(l, L) < we have from Proposition 2. (b) that L = ranp LC. The first inequality in (3) follows from P LC C = P LC P L C Prop. 2.(a) PL P L C = θ(l, L) C. The second inequality in (3) can be inferred from P LC C F = PL C P LC Lem. 2.5 F PL P L C F = θ(l, L) C F. Next, we prove a converse to Proposition 2.6: a small distance of the representing matrices C and C implies a small gap distance. To this end, we need another elementary lemma. Lemma 2.7. Let C n m and x ran such that x =. Then there exists z C m such that x = z and z (σ min ()). Proof. Consider a singular value decomposition [ = UΣV ] of as in (5) with unitary matrices U C n n, V C m m. Denote Σ + = Σ r and put z = VΣ + U x. Since ran = ranuσ, let x = UΣy for some y C m. It follows that z = UΣV VΣ + U UΣy = UΣy = x. Furthermore, z Σ + U x σ r = (σ min ()). Proposition 2.8. Let L, L C n be subspaces with L = ranc and L = ran C for C, C C n d \{}, d n. Then we have θ(l, L) C C min{σ min (C),σ min ( C)}. (4) Proof. Let x ranc with x = and choose, according to Lemma 2.7, z C d such that x = Cz and z (σ min (C)). Then we have d(x,ran C) = inf z C d Cz C z inf z C d ( Cz C z + ( C C) z ) min{ Cz, ( C C)z }, where the last inequality follows from setting z = z or z =. Since C σ min (C) one finds that d(x,ran C) C C σ min (C), (5) and by symmetry it follows for all x ran C with x = that The inequalities (5) and (6) together with (6) imply (4). d(x,ranc) C C σ min ( C). (6) Formula (4) is a slight improvement of a formula from [, Proposition. (i)] for operators in a Hilbert space. 6

Remark 2.9. The upper bound (14) is sharp, but on the other hand it can be arbitrarily large. To see that (14) is sharp, choose subspaces L, L̃ ⊆ C^n and matrices C, C̃ ∈ C^{n×d}, d ≤ n, with orthonormal columns such that P_L = CC* and P_L̃ = C̃C̃*. We apply Proposition 2.8 to P_L and P_L̃. From σ_min(P_L) = σ_min(P_L̃) = 1 and Proposition 2.1 (a) we see that (14) holds with equality. On the other hand, for n ∈ N \ {1} let C = [1], C̃ = [n]. Then L = ran C = ran C̃ = L̃ and therefore θ(L, L̃) = 0, but σ_min(C) = 1, σ_min(C̃) = n and ‖C − C̃‖ = n − 1, so the right-hand side of (14) equals n − 1.

In the next lemma we compare the gap between two subspaces of a certain structure.

Lemma 2.10. For subspaces L_1, L_2, L ⊆ C^n the following holds.

(a) If L_i ⊥ L, i = 1, 2, then θ(L_1 ⊕ L, L_2 ⊕ L) = θ(L_1, L_2).

(b) If L_i ∩ L = {0}, i = 1, 2, then θ(L_1 ∔ L, L_2 ∔ L) = θ(P_{L⊥} L_1, P_{L⊥} L_2).

(c) If L_i ∩ L = {0} as well as L_i ≠ {0}, L ≠ {0}, i = 1, 2, then

θ(L_1 ∔ L, L_2 ∔ L) ≤ θ(L_1, L_2) / min{ σ_min(P_{L⊥} P_{L_1}), σ_min(P_{L⊥} P_{L_2}) }.

Proof. Since P_{L_i ⊕ L} = P_{L_i} + P_L, i = 1, 2, we have P_{L_1 ⊕ L} − P_{L_2 ⊕ L} = P_{L_1} − P_{L_2}, and then (a) follows from Proposition 2.1 (a).

To show (b), note that for l_i ∈ L_i and l ∈ L we have l_i + l = P_{L⊥} l_i + (P_L l_i + l) and P_{L⊥} l_i = l_i − P_L l_i, whence L_i ∔ L = P_{L⊥} L_i ⊕ L. Hence (b) follows from (a) as

θ(L_1 ∔ L, L_2 ∔ L) = θ(P_{L⊥} L_1 ⊕ L, P_{L⊥} L_2 ⊕ L) = θ(P_{L⊥} L_1, P_{L⊥} L_2).

Statement (c) follows from (b) and (14) by choosing C := P_{L⊥} P_{L_1} and C̃ := P_{L⊥} P_{L_2}, since

θ(P_{L⊥} L_1, P_{L⊥} L_2) ≤ ‖P_{L⊥} P_{L_1} − P_{L⊥} P_{L_2}‖ / min{ σ_min(P_{L⊥} P_{L_1}), σ_min(P_{L⊥} P_{L_2}) } ≤ ‖P_{L_1} − P_{L_2}‖ / min{ σ_min(P_{L⊥} P_{L_1}), σ_min(P_{L⊥} P_{L_2}) }. □

3 Gap distance to singularity

In the following, linear subspaces L of C^{2n} are considered, which are known under the name linear relations, see e.g. [1, 7, 24, 25]. To a matrix A ∈ C^{n×n} the subspace

graph A := { (x, Ax) ∈ C^{2n} : x ∈ C^n } ⊆ C^{2n}

is associated. Note that graph A = ker[A, −I]. A similar correspondence can be obtained for matrix pencils. By [7, Theorem 3.3], for any subspace L of C^{2n} with dim L = d there exist matrices E, A ∈ C^{r×n} with r = 2n − d such that

L = ker[A, −E],

which is called the kernel representation of L. In what follows, to a matrix pencil A(s) = sE − A with E, A ∈ C^{n×n} the subspace L_A := ker[A, −E] is associated. These spaces are used to investigate the maximal gap distance between pencils A(s) and Ã(s) that guarantees regularity of Ã(s).

Definition 3.1. For pencils A(s) = sE − A and Ã(s) = sẼ − Ã with E, A, Ẽ, Ã ∈ C^{n×n} the gap distance between two matrix pencils is defined as

θ(A, Ã) := θ(L_A, L_Ã) = ‖P_{ker[A, −E]} − P_{ker[Ã, −Ẽ]}‖.

The gap distance to singularity of a regular matrix pencil A(s) = sE − A is defined as

θ_sing(E, A) := inf_{Ẽ, Ã ∈ C^{n×n}} { θ(A, Ã) : Ã(s) = sẼ − Ã is singular }.

Remark 3.2. Clearly θ_sing(E, A) ≤ 1 for any regular matrix pencil sE − A. It is also obvious that θ_sing(E, A) = 1 for E = A = [1]. We leave it to the reader to show that θ_sing(0, A) = 1 for any invertible matrix A.

Recall that every pencil A(s) = sE − A can be transformed into Kronecker canonical form, see e.g. [5, 6, 2]. To introduce this form the following notation is used: For a multi-index α = (α_1, …, α_l) ∈ N^l, l ≥ 1, with absolute value |α| = Σ_{i=1}^{l} α_i, and k ∈ N we define the matrices

$$N_k := \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix} \in \mathbb{C}^{k\times k}, \qquad N_\alpha := \operatorname{diag}(N_{\alpha_1}, \ldots, N_{\alpha_l}) \in \mathbb{C}^{|\alpha| \times |\alpha|}.$$

If k ≥ 2, rectangular matrices are defined as

K_k := [I_{k−1}, 0] ∈ C^{(k−1)×k},  L_k := [0, I_{k−1}] ∈ C^{(k−1)×k},

and if k = 1,

K_1 = L_1 := 0_{0×1} ∈ C^{0×1}.

The expression 0_{n×1} means that there is a zero column (0, …, 0)^T ∈ C^n in the matrix (17) below, and 0_{1×n} means that there is a zero row (0, …, 0) ∈ C^n in (17) at the corresponding block. The notation C^{0×k} indicates that there is no contribution to the number of rows in (17), whereas C^{k×0} gives no contribution to the number of columns. For a multi-index α = (α_1, …, α_l) ∈ N^l we define

K_α := diag(K_{α_1}, …, K_{α_l}),  L_α := diag(L_{α_1}, …, L_{α_l}) ∈ C^{(|α|−l)×|α|}.

According to a result of Kronecker [2], there exist invertible matrices S, T ∈ C^{n×n} such that S(sE − A)T has the block diagonal form

$$S(sE - A)T = \operatorname{diag}\big( sI_{n_0} - J,\; sN_\alpha - I_{|\alpha|},\; sK_\beta - L_\beta,\; (sK_\gamma - L_\gamma)^\top \big) \qquad (17)$$

for some J ∈ C^{n_0 × n_0} in Jordan canonical form, which is unique up to a permutation of its Jordan blocks, and multi-indices α ∈ N^{n_α}, β ∈ N^{n_β}, γ ∈ N^{n_γ}, which are unique up to a permutation of their entries.
If k ∈ N is one of the entries of the multi-indices β or γ in the Kronecker canonical form (17), we say that A(s) has a singular block of size k. Recall that A(s) is regular if and only if there are no singular blocks in the Kronecker canonical form. In the literature the numbers β_1 − 1, …, β_{n_β} − 1 (respectively γ_1 − 1, …, γ_{n_γ} − 1) are often called right (left) minimal indices of sE − A, see e.g. [1, 2].

Lemma 3.3. For a matrix pencil A(s) = sE − A with E, A ∈ C^{n×n} the following holds:

(a) L_A⊥ = ran([A, −E]*);

(b) dim L_A = 2n − rk[A, −E] ≥ n;

(c) if A(s) is regular, then dim L_A = n.

Proof. Property (a) follows from ker[A, −E] = (ran([A, −E]*))⊥. Since L_A = ker[A, −E], we see that (b) is a consequence of the dimension formula; obviously [A, −E] ∈ C^{n×2n}, hence its kernel has dimension at least n. To show (c) we use the Kronecker canonical form (17), which has no singular blocks since A(s) is regular, i.e., there exist invertible matrices S, T ∈ C^{n×n} such that

rk[A, −E] = rk( S [A, −E] diag(T, T) ) = rk[ diag(J, I_{|α|}), −diag(I_{n_0}, N_α) ] = n. □

The following example shows that the converse of statement (c) in Lemma 3.3 is not true.

Example 3.4. Consider the matrix pencil given by

$$sE - A = \begin{bmatrix} s & -1 & 0 \\ 0 & 0 & s \\ 0 & 0 & -1 \end{bmatrix}.$$

Then A(s) = sE − A is singular, but rk[A, −E] = 3, which implies by Lemma 3.3 (b) that dim L_A = 3.

If the gap distance between a regular pencil and a singular pencil is smaller than one, then the singular pencil has singular blocks of size at least two, as shown in the next proposition. Note that below we use the notation of (17) for the Kronecker canonical form of Ã(s), not of A(s).

Proposition 3.5. Let E, A, Ẽ, Ã ∈ C^{n×n} be such that A(s) = sE − A is regular and Ã(s) = sẼ − Ã is singular. If θ(A, Ã) < 1, then in the Kronecker canonical form (17) of Ã(s) we have n_γ > 0 and γ_i ≥ 2 for all i = 1, …, n_γ, i.e., all left minimal indices of Ã(s) are at least one.

Proof. Since A(s) = sE − A is regular, dim ker[A, −E] = n by Lemma 3.3, and the condition θ(A, Ã) < 1 implies by Proposition 2.1 (b) that

dim ker[Ã, −Ẽ] = dim ker[A, −E] = n.  (18)

Let S, T ∈ C^{n×n} be invertible matrices such that sSẼT − SÃT is in Kronecker canonical form (17). As Ã(s) is a square singular pencil, the number n_γ in (17) has to be nonzero, see e.g. [7]. Furthermore, if γ_i = 1 for some i ∈ {1, …, n_γ}, then sSẼT − SÃT contains a zero row. Hence rk[SÃT, −SẼT] < n, and consequently

rk[Ã, −Ẽ] = rk( S [Ã, −Ẽ] diag(T, T) ) = rk[SÃT, −SẼT] < n,

which contradicts (18). □

We show now that the assumptions of Proposition 3.5 do not restrict the right minimal indices of Ã(s), i.e., we may have β_i = 1 for some i ∈ {1, …, n_β}.

Example 3.6. Let

$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad E = \begin{bmatrix} 0 & 0 \\ -\varepsilon & 1 \end{bmatrix}, \quad \tilde A = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}, \quad \tilde E = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$

Then clearly A(s) = sE − A is regular for ε > 0, and Ã(s) = sẼ − Ã is in Kronecker canonical form (17) with one right minimal index zero and one left minimal index one, i.e., β = (1) and γ = (2). Furthermore,

ker[A, −E] = span{ (1,0,0,0)^T, (0,0,1,ε)^T },  ker[Ã, −Ẽ] = span{ (1,0,0,0)^T, (0,0,1,0)^T }.

By Lemma 2.10 (a) and equation (12) we see that θ(A, Ã) = ε/√(1+ε²), which converges to zero for ε → 0.

The above asymmetry of θ_sing(E, A) with respect to the Kronecker canonical form will be further discussed in Section 7.

4 Lower bounds for θ_sing(E, A)

In this section we present lower bounds for θ_sing(E, A) of a regular matrix pencil A(s) = sE − A with E, A ∈ C^{n×n}. The main tool is the matrix

$$W_k(E,A) := \begin{bmatrix} A & & \\ E & \ddots & \\ & \ddots & A \\ & & E \end{bmatrix} \in \mathbb{C}^{(k+1)n \times kn}, \qquad k \ge 1,$$

which has been studied e.g. in [2, 8]. The following characterization of regularity of A(s) is a consequence of [8, Theorem 3.1].

Theorem 4.1. Let E, A ∈ C^{n×n} and let A(s) = sE − A be a matrix pencil with Kronecker canonical form (17). Then ker W_k(E, A) ≠ {0}, k ≥ 1, if and only if there exists some entry β_i of the multi-index β in (17) with β_i ≤ k. In particular, A(s) is regular if and only if ker W_n(E, A) = {0}.

The next lemma contains some properties of the matrices W_k(E, A).

Lemma 4.2. Let E, A ∈ C^{n×n} and k ∈ N. Then the following holds:

(a) ker W_k(SE, SA) = ker W_k(E, A) for all invertible S ∈ C^{n×n};

(b) ‖W_k(E, A)‖ ≤ √2 ‖[E, A]‖ for all k ≥ 1.

Proof. The assertion (a) is an immediate consequence of

W_k(SE, SA) = diag(S, …, S) W_k(E, A).

To show (b) in the case k = 1, note that

$$\left\| \begin{bmatrix} A \\ E \end{bmatrix} \right\|^2 = \max_{\|x\|=1} \big( \|Ax\|^2 + \|Ex\|^2 \big) \le \max_{\|x_1\|=1} \|Ex_1\|^2 + \max_{\|x_2\|=1} \|Ax_2\|^2 \le 2\|[E,A]\|^2.$$

If k ≥ 2, take x = (x_1, …, x_k) ∈ C^{kn} with ‖x‖ = 1; then

$$\|W_k(E,A)x\|^2 \le \|Ax_1\|^2 + \sum_{i=1}^{k-1} \|[E,A]\|^2 \big( \|x_i\|^2 + \|x_{i+1}\|^2 \big) + \|Ex_k\|^2 \le 2\|[E,A]\|^2. \;\square$$

The following lower bounds for θ_sing(E, A) are one of our main results.

Theorem 4.3. Let A(s) = sE − A and Ã(s) = sẼ − Ã be two matrix pencils with E, A, Ẽ, Ã ∈ C^{n×n} such that A(s) is regular. If for some k ≥ 1 we have

θ(A, Ã) < σ_min(W_k(E, A)) / (√2 ‖[E, A]‖),  (19)

then β_i ≥ k + 1 holds for all the entries β_i of the multi-index β in the Kronecker canonical form (17) of Ã(s). Furthermore, we have

θ_sing(E, A) ≥ sup{ σ_min(W_n(SE, SA)) / (√2 ‖[SE, SA]‖) : S ∈ C^{n×n}, S invertible }  (20)

and, in particular,

θ_sing(E, A) ≥ σ_min(W_n(E, A)) / (√2 ‖[E, A]‖).  (21)

If E (or A) is invertible, then

θ_sing(E, A) ≥ σ_min(W_n(I_n, E^{-1}A)) / (√2 √(1 + ‖E^{-1}A‖²))  ( resp. θ_sing(E, A) ≥ σ_min(W_n(A^{-1}E, I_n)) / (√2 √(1 + ‖A^{-1}E‖²)) ).  (22)

Proof. Note that ker W_k(E, A) = {0} for all k ≥ 1 by Theorem 4.1, since A(s) is regular. Furthermore, by regularity, Lemma 3.3 yields that [A, −E]* has full column rank. Now assume that (19) holds; then Lemma 4.2 (b) implies θ(A, Ã) < 1. Define matrices Ê, Â ∈ C^{n×n} by

[Â, −Ê]* := P_{L_Ã⊥} [A, −E]*.

Since L_Ã⊥ = ran([Ã, −Ẽ]*) and L_A⊥ = ran([A, −E]*) according to Lemma 3.3 (a), and θ(L_A⊥, L_Ã⊥) = θ(A, Ã) < 1 by Proposition 2.1 (c), it follows with Proposition 2.6 that

ran([Â, −Ê]*) = ran([Ã, −Ẽ]*)  (23)

and

‖[Â, −Ê]* − [A, −E]*‖ ≤ θ(L_A⊥, L_Ã⊥) ‖[A, −E]*‖ = θ(A, Ã) ‖[E, A]‖ < σ_min(W_k(E, A)) / √2.
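The lower bound (21) is directly computable. The following sketch (our own illustration; the helper `W` is our name) assembles W_k(E, A) and evaluates σ_min(W_n)/(√2‖[E,A]‖) for a small regular pencil:

```python
import numpy as np

def W(E, A, k):
    """Block matrix W_k(E, A) of size (k+1)n x kn: A on the block diagonal, E below it."""
    n = A.shape[0]
    M = np.zeros(((k + 1) * n, k * n))
    for i in range(k):
        M[i * n:(i + 1) * n, i * n:(i + 1) * n] = A
        M[(i + 1) * n:(i + 2) * n, i * n:(i + 1) * n] = E
    return M

# lower bound (21): theta_sing >= sigma_min(W_n) / (sqrt(2) ||[E, A]||)
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[0.0, 0.0], [0.0, 1.0]])    # sE - A = diag(s, -1), regular
Wn = W(E, A, 2)                           # n = 2
smin = np.linalg.svd(Wn, compute_uv=False).min()
norm_EA = np.linalg.norm(np.hstack([E, A]), 2)
print(smin / (np.sqrt(2) * norm_EA))      # ~0.7071, so theta_sing >= 1/sqrt(2) here
```

By Theorem 4.1, σ_min(W_n(E, A)) > 0 certifies regularity of the pencil in the first place.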

Lemma 4.2 (b) yields

‖W_k(E, A) − W_k(Ê, Â)‖ = ‖W_k(E − Ê, A − Â)‖ ≤ √2 ‖[E − Ê, A − Â]‖ = √2 ‖[Â, −Ê]* − [A, −E]*‖,

and a combination of the last two inequalities gives

‖W_k(E, A) − W_k(Ê, Â)‖ < σ_min(W_k(E, A)).  (24)

Note that by [5, Theorem 2.5.3]

σ_min(W_k(E, A)) = min_{rk B < r} ‖W_k(E, A) − B‖, where r = rk W_k(E, A) = kn.

Therefore, it follows from inequality (24) that rk W_k(Ê, Â) = rk W_k(E, A), thus

ker W_k(Ê, Â) = ker W_k(E, A) = {0}.  (25)

In particular, the matrix [Â, −Ê]* has full column rank n. Moreover, relation (23) implies that there is some invertible matrix S ∈ C^{n×n} such that SÂ = Ã and SÊ = Ẽ. Now it follows from Lemma 4.2 (a) with relation (25) that ker W_k(Ẽ, Ã) = {0}. By Theorem 4.1 this implies β_i ≥ k + 1 for all entries of β. For k = n, Theorem 4.1 gives that Ã(s) is regular, and therefore (21) holds. In the following we use the fact that for A(s) = sE − A the pencil sSE − SA for any invertible S ∈ C^{n×n} generates the same subspace, i.e., L_A = L_{SA}; hence θ_sing(SE, SA) = θ_sing(E, A), which shows (20). With S = E^{-1} (or S = A^{-1}) in (20), and noting that ‖[I_n, E^{-1}A]‖ = √(1 + ‖E^{-1}A‖²), we immediately get (22). □

5 Upper bounds for θ_sing(E, A)

In this section we show that Lemma 2.10 leads to an upper bound for θ_sing(E, A). If a singular matrix pencil is known a priori, we obtain the following bounds: the first with Lemma 2.10 (c), the second with Lemma 2.10 (b) and (12), and the third with Lemma 2.10 (a) and (12).

Proposition 5.1. Let A(s) = sE − A and Ã(s) = sẼ − Ã be two matrix pencils with E, A, Ẽ, Ã ∈ C^{n×n} such that A(s) is regular and Ã(s) is singular. Let L_1, L_2, L ⊆ C^{2n} be nonzero subspaces such that

L_A = ker[A, −E] = L_1 ∔ L and L_Ã = ker[Ã, −Ẽ] = L_2 ∔ L.

Then

θ_sing(E, A) ≤ θ(L_A, L_Ã) ≤ θ(L_1, L_2) / min{ σ_min(P_{L⊥} P_{L_1}), σ_min(P_{L⊥} P_{L_2}) }.

If L_1 = span{x_1} and L_2 = span{x_2} for some non-zero vectors x_1 and x_2, then

$$\theta_{\mathrm{sing}}(E,A) \le \theta(L_A, L_{\tilde A}) = \theta(P_{L^\perp}L_1, P_{L^\perp}L_2) = \sqrt{1 - \frac{|\langle P_{L^\perp}x_1, P_{L^\perp}x_2\rangle|^2}{\|P_{L^\perp}x_1\|^2 \|P_{L^\perp}x_2\|^2}}.$$

If, in addition, x_1, x_2 ⊥ L, then

$$\theta_{\mathrm{sing}}(E,A) \le \theta(L_A, L_{\tilde A}) = \sqrt{1 - \frac{|x_1^* x_2|^2}{\|x_1\|^2 \|x_2\|^2}}.$$

Proposition 5.1 is only applicable if a singular pencil Ã(s) is known. In the next theorem we construct a singular pencil in terms of the eigenvectors and eigenvalues of a given regular pencil and derive an upper bound for θ_sing(E, A). For this, let us introduce the following notions. Let A(s) = sE − A with E, A ∈ C^{n×n} be a matrix pencil. We say that x ∈ C^n \ {0} is an eigenvector corresponding to the eigenvalue λ ∈ C if A(λ)x = 0 or, equivalently, if (x; λx) ∈ L_A = ker[A, −E]. Moreover, x ∈ C^n \ {0} is an eigenvector corresponding to the eigenvalue λ = ∞ if Ex = 0 or, equivalently, if (0; x) ∈ L_A. The set of all eigenvalues of A(s) is denoted by σ(A). Observe that for a singular pencil we have σ(A) = C ∪ {∞}.

Recall from [4, 14] that a Jordan chain of A(s) corresponding to the eigenvalue λ is a sequence (x_1, …, x_k) ∈ (C^n \ {0})^k satisfying

(A − λE)x_1 = 0, (A − λE)x_2 = Ex_1, …, (A − λE)x_k = Ex_{k−1}, if λ ∈ C,

and

Ex_1 = 0, Ex_2 = Ax_1, …, Ex_k = Ax_{k−1}, if λ = ∞.

Note that these conditions can be rewritten as

(x_1; λx_1), (x_2; λx_2 + x_1), …, (x_k; λx_k + x_{k−1}) ∈ L_A, if λ ∈ C,  (26)

and

(0; x_1), (x_1; x_2), …, (x_{k−1}; x_k) ∈ L_A, if λ = ∞,  (27)

respectively. An entry in a Jordan chain corresponding to λ ∈ σ(A) is called a root vector of λ. The linear span of these vectors is called the root subspace R_λ(A) of the matrix pencil A(s):

R_λ(A) := span{ x ∈ C^n : x is a root vector of λ }.

The next result is a direct consequence of Theorem 3.7 in [8] for matrix pencils; it also follows from Corollary 3.3 in [24].

Proposition 5.2. Let A(s) = sE − A with E, A ∈ C^{n×n} be a matrix pencil. If there exist λ, µ ∈ C ∪ {∞} with λ ≠ µ such that

R_λ(A) ∩ R_µ(A) ≠ {0},  (28)

then A(s) is singular.

Remark 5.3. Note that the above definitions of eigenvalues, eigenvectors and Jordan chains essentially differ from those used e.g. in [1, 2, 22, 23], where the eigenvalues are defined via the Kronecker form and the spectrum of any matrix pencil is a finite set. Another recent approach to matrix pencils are the Wong sequences, with which the root subspaces can be defined via sequences of certain subspaces, see [4, 5, 6].
After these preparations we present our second main result: an upper bound for the gap distance to singularity θ_sing(E, A). Below, if L_2 is a subspace of a linear space L_1 ⊆ C^{2n}, we use the symbol L_1 ⊖ L_2 for L_1 ∩ L_2⊥, the orthogonal complement of L_2 in L_1 with respect to the standard inner product on C^{2n}.

Theorem 5.4. Let E, A ∈ C^{n×n} and let A(s) = sE − A be a regular matrix pencil with a Jordan chain (x_1, …, x_k) corresponding to an eigenvalue λ ∈ C ∪ {∞} and a Jordan chain (y_1, …, y_l) corresponding to an eigenvalue µ ∈ C with λ ≠ µ. Define

J := span{ (x_1; λx_1), (x_2; λx_2 + x_1), …, (x_k; λx_k + x_{k−1}), (y_1; µy_1), …, (y_l; µy_l + y_{l−1}) }, if λ ∈ C,

J := span{ (0; x_1), (x_1; x_2), …, (x_{k−1}; x_k), (y_1; µy_1), …, (y_l; µy_l + y_{l−1}) }, if λ = ∞,

and let

z := (x_k − y_l; µx_k − λy_l), if λ ∈ C,  z := (x_k; µx_k + y_l), if λ = ∞.

Then we have

θ_sing(E, A) ≤ ‖P_{L_A⊥} z‖ / ‖P_{J⊥} z‖.  (29)

Furthermore, if θ_sing(E, A) = 1, then P_{L_A ⊖ J} z = 0.

Proof. First note that J ⊆ L_A because of (26) and (27). Now we prove the following fact: If span{z} + J ⊆ L_Ã for some matrix pencil Ã(s) = sẼ − Ã with Ẽ, Ã ∈ C^{n×n}, then the pencil Ã(s) is singular. Indeed, for λ ∈ C we have

(x_k − y_l; µx_k − λy_l) = (µ − λ)(η; λη + x_k) = (µ − λ)(η; µη + y_l),  η := (y_l − x_k)/(λ − µ).  (30)

Note that η ≠ 0, since otherwise y_l = x_k ∈ R_λ(A) ∩ R_µ(A), which contradicts regularity of A(s) by Proposition 5.2. Hence it follows from (30) that

(η; λη + x_k) = (η; µη + y_l) ∈ L_Ã,

which implies that (x_1, …, x_k, η) is a Jordan chain of Ã(s) corresponding to λ and (y_1, …, y_l, η) is a Jordan chain of Ã(s) corresponding to µ. We thus obtain η ∈ R_λ(Ã) ∩ R_µ(Ã), and Ã(s) is singular by Proposition 5.2. For λ = ∞ we obtain similarly that (y_1, …, y_l, x_k) is a Jordan chain of Ã(s) corresponding to µ, while (x_1, …, x_k) is a Jordan chain of Ã(s) corresponding to ∞, and singularity of Ã(s) follows, again by Proposition 5.2.

Now let z̄ := P_{L_A ⊖ J} z and L' := L_A ⊖ span{z̄}. Note that J ⊆ L'. Then, by what has been proved above, z ∉ L', since otherwise the pencil A(s) would be singular. Consequently, dim(span{z} + L') = n, and by [7, Theorem 3.3] there exist Ẽ, Ã ∈ C^{n×n} such that Ã(s) = sẼ − Ã satisfies

L_Ã = span{z} + L'.

By the first part of the proof, the pencil Ã(s) is singular. Assume that z̄ ≠ 0. Then Proposition 5.1, applied to L_1 = span{z̄}, L_2 = span{z} and L = L', yields

$$\theta_{\mathrm{sing}}(E,A) \le \sqrt{1 - \frac{|\langle P_{L'^\perp}\bar z, P_{L'^\perp}z\rangle|^2}{\|P_{L'^\perp}\bar z\|^2 \|P_{L'^\perp}z\|^2}}. \qquad (31)$$

The definition of L' implies that L_A = span{z̄} ⊕ L' and P_{L'} z = P_J z, so that

P_{L'⊥} z̄ = z̄,  P_{L'⊥} z = z̄ + P_{L_A⊥} z,  ⟨P_{L'⊥} z̄, P_{L'⊥} z⟩ = ‖z̄‖².  (32)

Thus a combination of (32) with (31) gives

$$\theta_{\mathrm{sing}}(E,A) \le \sqrt{1 - \frac{\|\bar z\|^2}{\|\bar z\|^2 + \|P_{L_A^\perp}z\|^2}} = \frac{\|P_{L_A^\perp}z\|}{\sqrt{\|\bar z\|^2 + \|P_{L_A^\perp}z\|^2}}. \qquad (33)$$

Since J⊥ = (L_A ⊖ J) ⊕ L_A⊥ holds, we have

‖P_{J⊥} z‖² = ‖P_{L_A⊖J} z‖² + ‖P_{L_A⊥} z‖²,

and with this, (33) can be written as

θ_sing(E, A) ≤ ‖P_{L_A⊥} z‖ / ‖P_{J⊥} z‖.

If z̄ = P_{L_A⊖J} z = 0, then the upper bound in (33), and hence (29), is trivially satisfied, which finishes the proof of (29). Assume now that θ_sing(E, A) = 1. Then (33) immediately gives that P_{L_A⊖J} z = 0, as claimed. □

Remark 5.5. We stress that the Jordan chains (x_1, …, x_k) and (y_1, …, y_l) in Theorem 5.4 are not required to be maximal (cf. also Example 5.6). Manipulating these chains may lead to different bounds on θ_sing(E, A). Further, observe that the proof is based on the construction of an (n−1)-dimensional subspace L' with J ⊆ L' ⊆ L_A and an element z̄ ∈ L_A \ L' in order to get

L_A = L' ⊕ span{z̄},  L_Ã = L' ∔ span{z}  (34)

for some singular matrix pencil Ã(s) = sẼ − Ã. Note that for every such L' and z̄ the inequality (31) holds. However, one may also easily see that the specific choice of L' and z̄ constructed in the proof provides an optimal bound in (31) (for fixed z and J). Finally, we note that (34) essentially says that the singular pencil Ã(s) is a rank one perturbation of the original regular pencil A(s). We refer the reader to [1, 2, 22] for other studies on low rank perturbations of singular pencils.

We illustrate Theorem 5.4 by the following example, where the right-hand side of (29) can be made arbitrarily small. (The matrices below are a reconstruction consistent with all the data of the original example.)

Example 5.6. Consider the regular matrix pencil

$$A(s) = sE - A = \begin{bmatrix} -1 & 0 & s \\ -\varepsilon & -1 & 0 \\ 0 & s & 0 \end{bmatrix}, \qquad \varepsilon \in (0,1).$$

Then

L_A = ker[A, −E] = span{ (1, −ε, 0, 0, 0, 1)^T, e_3, e_4 },

and σ(A) = {0, ∞} with eigenvector (1,0,0)^T at ∞ and eigenvector (0,0,1)^T at 0; the Kronecker canonical form (17) of A(s) is diag(sI_2 − N_2, sN_1 − I_1). Choosing the (non-maximal) chains (x_1) = ((1,0,0)^T) at λ = ∞ and (y_1) = ((0,0,1)^T) at µ = 0, we obtain

J = span{ e_3, e_4 },  z = (x_1; y_1) = e_1 + e_6,  ‖z‖ = ‖P_{J⊥} z‖ = √2,

and the bound (29) from Theorem 5.4 gives

‖P_{L_A⊥} z‖ = √2 ε / √(2 + ε²),  θ_sing(E, A) ≤ ‖P_{L_A⊥} z‖ / ‖P_{J⊥} z‖ = ε / √(2 + ε²).
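The quantities in Example 5.6 can be recomputed numerically. The pencil below is our reconstruction of the example (the extraction garbled the original matrices); it is regular with eigenvector e_1 at ∞ and e_3 at 0, and the right-hand side of (29) evaluates to ε/√(2+ε²):

```python
import numpy as np
from scipy.linalg import null_space

# reconstructed pencil: sE - A = [[-1, 0, s], [-eps, -1, 0], [0, s, 0]], det = -eps*s^2
eps = 0.1
E = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
A = np.array([[1.0, 0.0, 0.0], [eps, 1.0, 0.0], [0.0, 0.0, 0.0]])

B = null_space(np.hstack([A, -E]))         # orthonormal basis of L_A in R^6
P = B @ B.T                                # orthogonal projection onto L_A
z = np.zeros(6); z[0] = 1.0; z[5] = 1.0    # z = (x1; y1) with x1 = e1, y1 = e3
# z is orthogonal to J = span{e3, e4}, so ||P_{J-perp} z|| = ||z|| = sqrt(2)
bound = np.linalg.norm(z - P @ z) / np.linalg.norm(z)
print(bound, eps / np.sqrt(2.0 + eps**2))  # both ~0.07053
```

As ε → 0 the bound tends to zero, matching the point of the example.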

6 Distance and gap distance to singularity

Here we derive some relations between the distance to singularity δ(E, A) from (1) and the gap distance to singularity θ_sing(E, A), which lead to new lower bounds for θ_sing(E, A). First note the following scaling property of δ(E, A).

Proposition 6.1. Let A(s) = sE − A be a regular matrix pencil with E, A ∈ C^{n×n}; then for any invertible S, T ∈ C^{n×n} we have

δ(E, A) / (‖S^{-1}‖ ‖T^{-1}‖) ≤ δ(SET, SAT) ≤ ‖S‖ ‖T‖ δ(E, A).  (35)

In particular, if τ ∈ C \ {0}, then δ(τE, τA) = |τ| δ(E, A).

Proof. As A(s) is regular, det(sSET − SAT) = det S · det(sE − A) · det T is not the zero polynomial, and therefore the pencil sSET − SAT is regular. Let ΔE_k, ΔA_k ∈ C^{n×n} be such that s(E + ΔE_k) − (A + ΔA_k) is a sequence of singular matrix pencils with

δ(E, A) = lim_{k→∞} ‖[ΔE_k, ΔA_k]‖_F.

Then

δ(SET, SAT) ≤ lim_{k→∞} ‖[S ΔE_k T, S ΔA_k T]‖_F ≤ ‖S‖ ‖T‖ δ(E, A),

where the last inequality follows from Lemma 2.5. This proves the upper bound in (35). The lower bound follows from the same inequality:

δ(E, A) = δ( S^{-1}(SET)T^{-1}, S^{-1}(SAT)T^{-1} ) ≤ ‖S^{-1}‖ ‖T^{-1}‖ δ(SET, SAT). □

Next we show that the distance to singularity can be estimated by the gap distance to singularity.

Theorem 6.2. Let A(s) = sE − A be a regular matrix pencil with E, A ∈ C^{n×n}; then

δ(E, A) / ‖[E, A]‖_F ≤ θ_sing(E, A)  (36)

and

(σ_min([E, A]) − δ(E, A)) θ_sing(E, A) ≤ δ(E, A).  (37)

Proof. Let Ã(s) = sẼ − Ã be a matrix pencil with Ẽ, Ã ∈ C^{n×n} such that

θ(A, Ã) < δ(E, A) / ‖[E, A]‖_F.  (38)

By regularity of A(s) and Lemma 3.3, the matrix [A, −E]* has full column rank. Also note that (38) together with (2) implies that θ(A, Ã) < 1. Hence, by Proposition 2.6, there exist Ê, Â ∈ C^{n×n} with

ran([Â, −Ê]*) = ran([Ã, −Ẽ]*) and [Â, −Ê]* = P_{L_Ã⊥} [A, −E]*,  (39)

where L_Ã = ker[Ã, −Ẽ], and we have, invoking Lemma 3.3 (a),

‖[Â, −Ê]* − [A, −E]*‖_F ≤ θ(L_A⊥, L_Ã⊥) ‖[A, −E]*‖_F = θ(A, Ã) ‖[E, A]‖_F.

Then, from (38) we obtain

‖[E − Ê, A − Â]‖_F = ‖[Â, −Ê]* − [A, −E]*‖_F ≤ θ(A, Ã) ‖[E, A]‖_F < δ(E, A).

Hence, by the definition of δ(E, A), the pencil sÊ − Â is regular, and by Lemma 3.3 and (39) the pencil Ã(s) is regular as well; this shows (36).

We show (37). Note that by (2) we have δ(E, A) ≤ σ_min([E, A]). If δ(E, A) = σ_min([E, A]), then (37) is true, so assume that δ(E, A) < σ_min([E, A]). Let 0 < ε < σ_min([E, A]) − δ(E, A) and let Ã(s) = sẼ − Ã with Ẽ, Ã ∈ C^{n×n} be a singular pencil such that [Ẽ, Ã] ≠ 0 and

‖[Ẽ − E, Ã − A]‖_F ≤ δ(E, A) + ε.  (40)

Hence, by Lemma 3.3 (a), Proposition 2.1 (c) and Proposition 2.8,

θ_sing(E, A) ≤ θ(A, Ã) = θ(L_A, L_Ã) = θ( ran([A, −E]*), ran([Ã, −Ẽ]*) )
≤ ‖[Ã − A, Ẽ − E]‖ / min{ σ̂_min([A, −E]*), σ̂_min([Ã, −Ẽ]*) }
≤ (δ(E, A) + ε) / min{ σ_min([E, A]), σ_min([Ẽ, Ã]) },

where we used that σ̂_min([A, −E]*) = σ_min([E, A]) for the regular pencil A(s), and analogously for Ã(s) once full row rank of [Ã, −Ẽ] is established. According to Mirsky's theorem [27, Theorem IV.4.11] the inequality

|σ_min([E, A]) − σ_min([Ẽ, Ã])| ≤ ‖[Ẽ − E, Ã − A]‖

holds, and by use of the matrix norm inequality ‖·‖ ≤ ‖·‖_F one finds that

0 < σ_min([E, A]) − δ(E, A) − ε ≤ σ_min([Ẽ, Ã]).

As a consequence,

θ_sing(E, A) ≤ (δ(E, A) + ε) / (σ_min([E, A]) − δ(E, A) − ε),

and for ε → 0 we obtain (37). □

Using lower bounds for δ(E, A) from [9, Section 5.2] and from [3],

δ(E, A) ≥ max_{(s,c)∈S} σ_min(sE − cA) and δ(E, A) ≥ σ_min(W_n(E, A)) / (1 + cos(π/(n+1))),

where S is the unit sphere in C², we obtain the following corollary.

Corollary 6.3. Let A(s) = sE − A be a regular matrix pencil with E, A ∈ C^{n×n}; then

θ_sing(E, A) ≥ max_{(s,c)∈S} σ_min(sE − cA) / ‖[E, A]‖_F

and

θ_sing(E, A) ≥ σ_min(W_n(E, A)) / ( (1 + cos(π/(n+1))) ‖[E, A]‖_F ).
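The first bound in Corollary 6.3 can be estimated numerically by sampling the sphere; any sampled value of σ_min(sE − cA) still yields a valid (possibly weaker) lower bound since the maximum over a subset is at most the maximum over the full sphere. A sketch (our own, with our function name; only real parameter pairs are sampled):

```python
import numpy as np

def theta_sing_lower_bound(E, A, num=400):
    """Lower bound in the spirit of Corollary 6.3:
    max over sampled (s, c) of sigma_min(s*E - c*A), divided by ||[E, A]||_F.
    Sampling only real (s, c) = (cos t, sin t) gives a further lower estimate."""
    best = 0.0
    for t in np.linspace(0.0, np.pi, num):
        M = np.cos(t) * E - np.sin(t) * A
        best = max(best, np.linalg.svd(M, compute_uv=False).min())
    return best / np.linalg.norm(np.hstack([E, A]), 'fro')

E = np.eye(2)
A = np.array([[0.0, 1.0], [0.0, 0.0]])    # pencil sI - N, regular
print(theta_sing_lower_bound(E, A))        # 1/sqrt(3) ~ 0.577
```

For this example the maximum is attained at (s, c) = (1, 0), where σ_min(E) = 1.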

The inequalities (36) and (37) also yield the following.

Corollary 6.4. For all E, A ∈ C^{n×n} we have

    (θ_sing(E,A) / (1 + θ_sing(E,A))) σ_min([E,A])  ≤  δ(E,A)  ≤  θ_sing(E,A) ‖[E,A]‖_F.

To conclude this section we improve the lower bound for θ_sing(E,A) for the case that E or A is invertible.

Theorem 6.5. Let A(s) = sE − A be a regular pencil with E, A ∈ C^{n×n}. If E is invertible, then

    θ_sing(E,A) ≥ max{1, σ_min(E⁻¹A)} / √(1 + ‖E⁻¹A‖²).    (40)

If A is invertible, then

    θ_sing(E,A) ≥ max{1, σ_min(A⁻¹E)} / √(1 + ‖A⁻¹E‖²).    (41)

Proof. We consider the case that E is invertible; the case of invertible A is analogous and omitted. Consider the distance to singularity in the spectral norm,

    δ₂(E,A) = inf_{ΔE, ΔA ∈ C^{n×n}} { ‖[ΔE, ΔA]‖ : s(E + ΔE) − (A + ΔA) is singular }.

Note that δ₂(E,A) ≤ δ(E,A) for all E, A ∈ C^{n×n} as a consequence of the matrix norm inequality ‖·‖ ≤ ‖·‖_F. Adapting the proof of Theorem 6.2, it is straightforward to see that

    δ₂(E,A) / ‖[E,A]‖ ≤ θ_sing(E,A)    (42)

and

    (σ_min([E,A]) − δ₂(E,A)) θ_sing(E,A) ≤ δ₂(E,A).

We prove that for all M ∈ C^{n×n}

    δ₂(I_n, M) ≥ max{1, σ_min(M)}.    (43)

Let ΔE, ΔA ∈ C^{n×n} be such that ‖[ΔE, ΔA]‖ < max{1, σ_min(M)}. We consider two cases.

Case 1: max{1, σ_min(M)} = 1. Then ‖ΔE‖ ≤ ‖[ΔE, ΔA]‖ < 1 and hence I_n + ΔE is invertible. Therefore, the pencil s(I_n + ΔE) − (M + ΔA) is regular.

Case 2: max{1, σ_min(M)} = σ_min(M). Then ‖ΔA‖ ≤ ‖[ΔE, ΔA]‖ < σ_min(M) and hence ‖M⁻¹ΔA‖ ≤ ‖M⁻¹‖ ‖ΔA‖ = ‖ΔA‖ / σ_min(M) < 1. Therefore, I_n + M⁻¹ΔA is invertible, by which M + ΔA is invertible and the pencil s(I_n + ΔE) − (M + ΔA) is regular.

This shows (43). With S = E⁻¹ in Theorem 4.3 we obtain

    θ_sing(E,A) = θ_sing(I_n, E⁻¹A) ≥ δ₂(I_n, E⁻¹A) / ‖[I_n, E⁻¹A]‖ ≥ max{1, σ_min(E⁻¹A)} / √(1 + ‖E⁻¹A‖²),

where the first estimate is (42), the second estimate is (43), and for the last equality in the denominator we also used that ‖[I_n, E⁻¹A]‖ = √(1 + ‖E⁻¹A‖²).
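The two cases in the proof of (43) rest on standard perturbation facts: ‖ΔE‖ < 1 forces I_n + ΔE to be invertible, and ‖ΔA‖ < σ_min(M) forces M + ΔA to be invertible. Both follow from the Weyl-type bound σ_min(X + Δ) ≥ σ_min(X) − ‖Δ‖, which the following sketch (our own illustration) demonstrates:

```python
import numpy as np

smin = lambda M: np.linalg.svd(M, compute_uv=False)[-1]

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))

# Case 1: a perturbation of the identity with spectral norm < 1
dE = rng.standard_normal((n, n))
dE *= 0.9 / np.linalg.norm(dE, 2)
assert smin(np.eye(n) + dE) >= 1.0 - np.linalg.norm(dE, 2) - 1e-10   # hence > 0

# Case 2: a perturbation of M with spectral norm < sigma_min(M)
dA = rng.standard_normal((n, n))
dA *= 0.9 * smin(M) / np.linalg.norm(dA, 2)
assert smin(M + dA) >= smin(M) - np.linalg.norm(dA, 2) - 1e-10       # hence > 0

print("both perturbed matrices are invertible")
```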

7 Applications

7.1 Example 4 from [9]

We consider the regular matrix pencil

    sE − A = s [1 0 0; 0 1 0; 0 0 0] − [0 1 0; 0 0 ε; 1 0 0],    0 < ε < 1,

where matrices are written row by row. The singular values of [E,A] are given by {1, √(1+ε²), √2}, hence σ_min([E,A]) = 1 and ‖[E,A]‖ = √2. From [9] we have δ(E,A) = ε with

    ΔA := [0 0 0; 0 0 −ε; 0 0 0],    ΔE := 0,

and a singular pencil Ã(s) = sẼ − Ã is given by

    sẼ − Ã = s [1 0 0; 0 1 0; 0 0 0] − [0 1 0; 0 0 0; 1 0 0].

We have ‖[E − Ẽ, A − Ã]‖ = ε and σ_min([Ẽ, Ã]) = 1. Then (4) together with Theorem 6.2 (in its spectral-norm variant (42)) implies

    ε/√2 = δ(E,A)/‖[E,A]‖ ≤ θ_sing(E,A) ≤ ‖[E − Ẽ, A − Ã]‖ / min{σ_min([E,A]), σ_min([Ẽ, Ã])} = ε.

However, in this case, Proposition 5.1 and Theorem 6.5 yield better bounds. Since A(s) and Ã(s) differ only by one row, we consider

    L = ran [0 0; 1 0; 0 0; 1 0; 0 0; 0 1],    x₁ = (0, 0, 1, 0, ε, 0)ᵀ,    x₂ = (0, 0, 1, 0, 0, 0)ᵀ.

Then L_A = L ⊕ span{x₁}, L_Ã = L ⊕ span{x₂}, and hence Proposition 5.1 gives

    θ_sing(E,A) ≤ √(1 − |⟨x₁, x₂⟩|² / (‖x₁‖² ‖x₂‖²)) = ε/√(1+ε²).

Applying (41) from Theorem 6.5 gives the improved lower bound

    θ_sing(E,A) ≥ max{1, σ_min(A⁻¹E)} / √(1 + ‖A⁻¹E‖²) = 1/√(1 + (1/ε)²) = ε/√(1+ε²),

thus

    θ_sing(E,A) = ε/√(1+ε²).    (44)
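Assuming the matrices of this example as given above, the value (44) and the fact that the lower bound (41) is attained can be reproduced numerically; the sketch (helper names are ours) recomputes θ(L_A, L_Ã) directly from the kernels:

```python
import numpy as np

def null_basis(M, tol=1e-12):
    """Orthonormal basis of ker(M) via the SVD."""
    _, s, Vh = np.linalg.svd(M)
    return Vh[int(np.sum(s > tol * s[0])):].conj().T

def gap(U, V):
    """Gap of ran(U), ran(V) as the spectral norm of the projector difference."""
    return np.linalg.norm(U @ U.conj().T - V @ V.conj().T, 2)

eps = 0.25
E = np.diag([1.0, 1.0, 0.0])
A = np.array([[0.0, 1, 0], [0, 0, eps], [1, 0, 0]])
At = np.array([[0.0, 1, 0], [0, 0, 0], [1, 0, 0]])   # the singular pencil s E - A~

theta = gap(null_basis(np.hstack([A, -E])), null_basis(np.hstack([At, -E])))
print(theta, eps / np.sqrt(1 + eps**2))              # the two values agree

M = np.linalg.inv(A) @ E                             # A is invertible here
lb = max(1.0, np.linalg.svd(M, compute_uv=False)[-1]) / np.sqrt(1 + np.linalg.norm(M, 2) ** 2)
print(lb)                                            # equals theta: the bound (41) is attained
```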

7.2 Example for regularity ensured by θ_sing(E,A) but not by δ(E,A)

We show that there are classes of matrix pencils where, for the investigation of regularity, θ_sing(E,A) is more suitable than δ(E,A). Here we consider a family of matrix pencils whose gap distance to A(s) is less than θ_sing(E,A), while the Frobenius norm of the coefficient matrices of the pencils gets arbitrarily large. Therefore, θ_sing(E,A) can be used to guarantee regularity of this family of matrix pencils, while δ(E,A) is not suitable for this.

Consider the regular matrix pencil A(s) = sE − A from Section 7.1 and pencils Ã(s) = sẼ − Ã whose entries depend on parameters τ₁, τ₂, τ₃ ∈ R∖{0} and a₁, a₂, a₃, a₄ ∈ R, where the parameters τ₁, τ₂, τ₃ rescale the rows of the pencil. We seek to investigate regularity of the pencils Ã(s). To this end we represent the subspaces L_A = ker[A, −E] and L_Ã = ker[Ã, −Ẽ] as ranges of explicit 6×3 basis matrices and compute the gap distance between these ranges. Subtracting in the representing matrix of L_Ã the first column times τ₂a₂/τ₁ from the second column, we can rewrite the second subspace as

    L_Ã = L ⊕ span{x₂},    x₂ = τ₂ (a₁, 0, a₃, 0, a₄, 0)ᵀ,

where L coincides with the subspace L from Section 7.1. With x₁ := (0, 0, 1, 0, ε, 0)ᵀ we may observe that L_A = L ⊕ span{x₁}, hence an application of Proposition 5.1 yields

    θ(A, Ã) = θ(L_A, L_Ã) = √(1 − |⟨x₁, x₂⟩|² / (‖x₁‖² ‖x₂‖²)) = √(1 − (a₃ + εa₄)² / ((1 + ε²)(a₁² + a₃² + a₄²))).    (45)

Regularity of Ã(s) is guaranteed if we choose a₁, a₂, a₃, a₄ such that (45) is less than θ_sing(E,A) = ε/√(1+ε²) (cf. (44)), which is equivalent to

    (a₃ + εa₄)² / (a₁² + a₃² + a₄²) > 1.

This condition is independent of the parameters τ₁, τ₂, τ₃. On the other hand, choosing these parameters large enough we see that the Frobenius norm ‖[Ẽ − E, Ã − A]‖_F becomes arbitrarily large, eventually exceeding δ(E,A) = ε; in other words, for these parameters regularity of Ã(s) cannot be concluded by investigating δ(E,A) only.
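The mechanism behind this example can be illustrated numerically even without the explicit parameter matrices: rescaling the rows of a pencil inflates the Frobenius distance of the coefficients arbitrarily, while ker[Ã, −Ẽ], and hence the gap, is unchanged. A sketch (our own illustration), using the pencil from Section 7.1:

```python
import numpy as np

def null_basis(M, tol=1e-12):
    """Orthonormal basis of ker(M) via the SVD."""
    _, s, Vh = np.linalg.svd(M)
    return Vh[int(np.sum(s > tol * s[0])):].conj().T

def gap(U, V):
    """Gap of ran(U), ran(V) as the spectral norm of the projector difference."""
    return np.linalg.norm(U @ U.conj().T - V @ V.conj().T, 2)

eps = 0.25
E = np.diag([1.0, 1.0, 0.0])
A = np.array([[0.0, 1, 0], [0, 0, eps], [1, 0, 0]])
L_A = null_basis(np.hstack([A, -E]))

for tau in (1.0, 10.0, 1000.0):
    S = np.diag([tau, 1.0, 1.0])        # rescales the first row of the pencil
    Et, At = S @ E, S @ A
    theta = gap(L_A, null_basis(np.hstack([At, -Et])))
    fro = np.linalg.norm(np.hstack([Et - E, At - A]), 'fro')
    print(tau, theta, fro)              # theta stays ~0 while fro grows without bound
```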

7.3 Pencils connected with linear systems

In this subsection we show how the properties of θ_sing(E,A) can be combined with structural assumptions on the matrix pencil. We investigate a class of pencils recently studied in [23], associated with linear time-invariant dissipative Hamiltonian descriptor systems. Let A(s) = sE − A with E, A ∈ C^{n×n} be such that there exist Q, L ∈ C^{n×n} with

    A = LQ,    E*Q = Q*E ≥ 0,    L + L* ≤ 0,    sE − Q is regular.    (46)

It was proved in [23] that if A(s) is singular, then all left minimal indices of A(s) are zero, i.e., γᵢ = 0 for all i = 1, ..., n_γ in the Kronecker canonical form (7). Additionally, all right minimal indices of A(s) are at most one and there are several other constraints on the Kronecker canonical form, see [23]. Moreover, it was also shown in [23] that a singular pencil A(s) = sE − A satisfying

    E*A = A*E    (47)

has only zero left minimal indices. Combining this with Proposition 3.5 we get the following result.

Corollary 7.1. Let E, A, Ẽ, Ã ∈ C^{n×n} and let A(s) = sE − A be a regular matrix pencil. Then for the pencil Ã(s) = sẼ − Ã the following holds true:

(a) If Ã(s) is singular and satisfies (46) or (47), then θ(A, Ã) = 1.

(b) If θ_sing(E,A) < 1, then there exists a singular pencil Ã(s) with θ(A, Ã) < 1 which satisfies neither (46) nor (47).

References

[1] C. Apostol, The reduced minimum modulus, Michigan Math. J. 32 (1985).

[2] P. Benner and R. Byers, An arithmetic for matrix pencils: Theory and new algorithms, Numer. Math. 103 (2006).

[3] T. Berger, H. Gernandt, C. Trunk, H. Winkler, and M. Wojtylak, A new bound for the distance to singularity of a regular matrix pencil, Proc. Appl. Math. Mech. 17 (2017), to appear.

[4] T. Berger, A. Ilchmann, and S. Trenn, The quasi-Weierstraß form for regular matrix pencils, Linear Algebra Appl. 436 (2012).

[5] T. Berger and S. Trenn, The quasi-Kronecker form for matrix pencils, SIAM J. Matrix Anal. Appl. 33 (2012).

[6] T. Berger and S. Trenn, Addition to "The quasi-Kronecker form for matrix pencils", SIAM J. Matrix Anal. Appl. 34 (2013), 94–101.

[7] T. Berger, C. Trunk, and H. Winkler, Linear relations and the Kronecker canonical form, Linear Algebra Appl. 488 (2016).

[8] T. Berger, H. de Snoo, C. Trunk, and H. Winkler, Decompositions of linear relations in finite-dimensional spaces, submitted for publication (2018).

[9] R. Byers, C. He, and V. Mehrmann, Where is the nearest non-regular pencil?, Linear Algebra Appl. 285 (1998).

[10] R. Cross, Multivalued Linear Operators, Marcel Dekker, New York, 1998.

[11] F. De Terán and F. M. Dopico, Low rank perturbation of Kronecker structures without full rank, SIAM J. Matrix Anal. Appl. 29(2) (2007).

[12] F. Gantmacher, Theory of Matrices, Chelsea, New York, 1959.

[13] I. Gohberg and M. Krein, The basic propositions on defect numbers, root numbers and indices of linear operators, Amer. Math. Soc. Transl. 13 (1960).

[14] I. Gohberg, P. Lancaster, and L. Rodman, Matrix Polynomials, SIAM, Philadelphia, 2009.

[15] G. Golub and C. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore, Maryland, 1996.

[16] N. Guglielmi, C. Lubich, and V. Mehrmann, On the nearest singular matrix pencil, SIAM J. Matrix Anal. Appl. 38 (2017).

[17] M. Haase, The Functional Calculus for Sectorial Operators, Operator Theory: Advances and Applications 169, Birkhäuser, Basel, 2006.

[18] N. Karcanias and G. Kalogeropoulos, Right, left characteristic sequences and column, row minimal indices of a singular pencil, Int. J. Control 47 (1988).

[19] T. Kato, Perturbation Theory for Linear Operators, 2nd ed., Springer, New York, 1980.

[20] L. Kronecker, Algebraische Reduction der Schaaren bilinearer Formen, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (1890).

[21] C. Mehl, V. Mehrmann, and M. Wojtylak, On the distance to singularity via low rank perturbations, Oper. Matrices 9 (2015).

[22] C. Mehl, V. Mehrmann, and M. Wojtylak, Parameter-dependent rank-one perturbations of singular Hermitian or symmetric pencils, SIAM J. Matrix Anal. Appl. 38(1) (2017).

[23] C. Mehl, V. Mehrmann, and M. Wojtylak, Linear algebra properties of dissipative Hamiltonian descriptor systems, arXiv preprint.

[24] A. Sandovici, H.S.V. de Snoo, and H. Winkler, The structure of linear relations in Euclidean spaces, Linear Algebra Appl. 397 (2005).

[25] A. Sandovici, H.S.V. de Snoo, and H. Winkler, Ascent, descent, nullity, defect, and related notions for linear relations in linear spaces, Linear Algebra Appl. 423 (2007).

[26] J. Sun, Perturbation analysis for the generalized eigenvalue problem and the generalized singular value problem, in: B. Kågström and A. Ruhe (Eds.), Matrix Pencils, Springer-Verlag, Berlin, 1983.

[27] G. Stewart and J. Sun, Matrix Perturbation Theory, Academic Press Inc., San Diego, 1990.


Eigenvalue perturbation theory of classes of structured matrices under generic structured rank one perturbations Eigenvalue perturbation theory of classes of structured matrices under generic structured rank one perturbations, VU Amsterdam and North West University, South Africa joint work with: Leonard Batzke, Christian

More information

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint

More information

LINEAR-QUADRATIC OPTIMAL CONTROL OF DIFFERENTIAL-ALGEBRAIC SYSTEMS: THE INFINITE TIME HORIZON PROBLEM WITH ZERO TERMINAL STATE

LINEAR-QUADRATIC OPTIMAL CONTROL OF DIFFERENTIAL-ALGEBRAIC SYSTEMS: THE INFINITE TIME HORIZON PROBLEM WITH ZERO TERMINAL STATE LINEAR-QUADRATIC OPTIMAL CONTROL OF DIFFERENTIAL-ALGEBRAIC SYSTEMS: THE INFINITE TIME HORIZON PROBLEM WITH ZERO TERMINAL STATE TIMO REIS AND MATTHIAS VOIGT, Abstract. In this work we revisit the linear-quadratic

More information

On invariant graph subspaces

On invariant graph subspaces On invariant graph subspaces Konstantin A. Makarov, Stephan Schmitz and Albrecht Seelmann Abstract. In this paper we discuss the problem of decomposition for unbounded 2 2 operator matrices by a pair of

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information

A BRIEF INTRODUCTION TO HILBERT SPACE FRAME THEORY AND ITS APPLICATIONS AMS SHORT COURSE: JOINT MATHEMATICS MEETINGS SAN ANTONIO, 2015 PETER G. CASAZZA Abstract. This is a short introduction to Hilbert

More information

2. LINEAR ALGEBRA. 1. Definitions. 2. Linear least squares problem. 3. QR factorization. 4. Singular value decomposition (SVD) 5.

2. LINEAR ALGEBRA. 1. Definitions. 2. Linear least squares problem. 3. QR factorization. 4. Singular value decomposition (SVD) 5. 2. LINEAR ALGEBRA Outline 1. Definitions 2. Linear least squares problem 3. QR factorization 4. Singular value decomposition (SVD) 5. Pseudo-inverse 6. Eigenvalue decomposition (EVD) 1 Definitions Vector

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

Products of commuting nilpotent operators

Products of commuting nilpotent operators Electronic Journal of Linear Algebra Volume 16 Article 22 2007 Products of commuting nilpotent operators Damjana Kokol Bukovsek Tomaz Kosir tomazkosir@fmfuni-ljsi Nika Novak Polona Oblak Follow this and

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Australian Journal of Basic and Applied Sciences, 5(9): , 2011 ISSN Fuzzy M -Matrix. S.S. Hashemi

Australian Journal of Basic and Applied Sciences, 5(9): , 2011 ISSN Fuzzy M -Matrix. S.S. Hashemi ustralian Journal of Basic and pplied Sciences, 5(9): 2096-204, 20 ISSN 99-878 Fuzzy M -Matrix S.S. Hashemi Young researchers Club, Bonab Branch, Islamic zad University, Bonab, Iran. bstract: The theory

More information

Analysis Preliminary Exam Workshop: Hilbert Spaces

Analysis Preliminary Exam Workshop: Hilbert Spaces Analysis Preliminary Exam Workshop: Hilbert Spaces 1. Hilbert spaces A Hilbert space H is a complete real or complex inner product space. Consider complex Hilbert spaces for definiteness. If (, ) : H H

More information

On families of anticommuting matrices

On families of anticommuting matrices On families of anticommuting matrices Pavel Hrubeš December 18, 214 Abstract Let e 1,..., e k be complex n n matrices such that e ie j = e je i whenever i j. We conjecture that rk(e 2 1) + rk(e 2 2) +

More information

Generalized eigenvector - Wikipedia, the free encyclopedia

Generalized eigenvector - Wikipedia, the free encyclopedia 1 of 30 18/03/2013 20:00 Generalized eigenvector From Wikipedia, the free encyclopedia In linear algebra, for a matrix A, there may not always exist a full set of linearly independent eigenvectors that

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

MATH 532: Linear Algebra

MATH 532: Linear Algebra MATH 532: Linear Algebra Chapter 5: Norms, Inner Products and Orthogonality Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2015 fasshauer@iit.edu MATH 532 1 Outline

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information