
NONLINEAR EIGENVALUE PROBLEMS WITH SPECIFIED EIGENVALUES

MICHAEL KAROW, DANIEL KRESSNER, AND EMRE MENGI

Department of Mathematics, TU Berlin, Sekretariat MA 4-5, Strasse des 17. Juni 136, D-10623 Berlin, Germany (karow@math.tu-berlin.de). SB MATHICSE ANCHP, EPF Lausanne, Station 8, CH-1015 Lausanne, Switzerland (daniel.kressner@epfl.ch). Department of Mathematics, Koç University, Rumelifeneri Yolu, Sarıyer-İstanbul, Turkey (emengi@ku.edu.tr). The work of the third author was supported in part by the European Commission grant PIRG-GA and the TÜBİTAK (the Scientific and Technological Research Council of Turkey) career grant 109T660.

Abstract. This work considers eigenvalue problems that are nonlinear in the eigenvalue parameter. Given such a nonlinear eigenvalue problem $T$, we are concerned with finding the minimal backward error such that $T$ has a set of prescribed eigenvalues with prescribed algebraic multiplicities. We consider backward errors that only allow constant perturbations, which do not depend on the eigenvalue parameter. While the usual resolvent norm addresses this question for a single eigenvalue of multiplicity one, the general setting involving several eigenvalues is significantly more difficult. Under mild assumptions, we derive a singular value optimization characterization for the minimal perturbation that addresses the general case.

Key words. Nonlinear eigenvalue problem, analytic matrix-valued function, Sylvester-like operator, backward error.

AMS subject classifications. 65F15, 65F18, 47A56.

1. Introduction. Let $T : \Omega \to \mathbb{C}^{n\times n}$ be an analytic matrix-valued function on some domain $\Omega \subseteq \mathbb{C}$. Then $\lambda \in \mathbb{C}$ is called an eigenvalue of $T$ if there is a nonzero vector $v \in \mathbb{C}^n$ such that

$$T(\lambda)v = 0. \qquad (1.1)$$

A number of numerical algorithms have been proposed for solving such nonlinear eigenvalue problems; see [26, 32] for overviews. In these algorithms, it is important to be able to quantify whether the accuracy of the current eigenvalue approximation $\mu \in \Omega$ is satisfactory. Let us first suppose that also an eigenvector approximation $\tilde v \in \mathbb{C}^n$, normalized such that $\|\tilde v\|_2 = 1$, is at hand. Then the norm of the residual,

$$\|T(\mu)\tilde v\|_2, \qquad (1.2)$$

is the most common way of quantifying the accuracy of $\mu$ and $\tilde v$ together. In the absence of eigenvector information, or for testing the backward error of the eigenvalue approximation $\mu$ alone, one is free to choose a normalized vector $\tilde v$ such that (1.2) is minimized. This leads to

$$\sigma_{\min}\big(T(\mu)\big) = 1/\|T(\mu)^{-1}\|_2, \qquad (1.3)$$

where $\sigma_{\min}$ denotes the smallest singular value of a matrix. For a linear eigenvalue problem $T(\lambda) = \lambda I - A$, the quantity (1.3) corresponds to the well-established notion of pseudospectra [31]. Different extensions of pseudospectra to the nonlinear case have been proposed in [8, 28, 30, 33].

In applications, one is often interested in approximating more than one eigenvalue.
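To make (1.3) concrete, the following minimal sketch (ours, not part of the paper; it assumes NumPy and an illustrative 2x2 example) evaluates the backward error of a single eigenvalue approximation:

```python
import numpy as np

# Minimal sketch of the backward error (1.3): the smallest singular value of
# T(mu) equals the minimal 2-norm of a constant perturbation Delta such that
# mu becomes an eigenvalue of T + Delta.
def backward_error(T, mu):
    return np.linalg.svd(T(mu), compute_uv=False)[-1]

# Linear special case T(lambda) = lambda*I - A: (1.3) reduces to the quantity
# underlying matrix pseudospectra.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
T = lambda lam: lam * np.eye(2) - A
print(backward_error(T, 2.001))  # small, since 2 is an eigenvalue of A
```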

This gives rise to the task of quantifying how well a list of $s$ (not necessarily distinct) scalars $\lambda_1, \dots, \lambda_s \in \Omega$ approximates eigenvalues of $T$. Of course, one could apply (1.3) to each eigenvalue approximation individually, but this bears the danger of failing to detect spurious eigenvalue approximations. More specifically, consider the case of two eigenvalue approximations $\lambda_1, \lambda_2$, both with small backward errors based on the quantity (1.3), but where $\lambda_1$ and $\lambda_2$ actually approximate the same (simple) eigenvalue. Then $\lambda_1, \lambda_2$ taken together may be a poor approximation to two eigenvalues of $T$. In this paper, we propose and analyze an approach that is robust to this phenomenon. As explained in Section 6, the use of this approach goes beyond verifying the backward errors of eigenvalue approximations. For example, it can be used to determine the distance to a nearest nonlinear eigenvalue problem with a multiple eigenvalue.

Based on the well-known characterization of $\sigma_{\min}(T(\mu))$ as the minimal $\|\Delta\|_2$ among all $\Delta \in \mathbb{C}^{n\times n}$ such that $T(\mu) + \Delta$ is singular, we propose the distance

$$\tau_r(S) := \min\Big\{ \|\Delta\|_2 : \sum_{j=1}^{s} m_j(T + \Delta) \ge r \Big\}, \qquad (1.4)$$

where $S := \{\lambda_1, \dots, \lambda_s\} \subseteq \Omega$, $r$ is a specified positive integer, and $m_j(T + \Delta)$ denotes the algebraic multiplicity of $\lambda_j$ as an eigenvalue of $T_\Delta(\lambda) = T(\lambda) + \Delta$. With the definition above, we require that $r$ eigenvalues of $T_\Delta(\lambda)$, counting their algebraic multiplicities, belong to $S$, but it is possible that $m_j(T + \Delta) = 0$ for some $j$; thus $\lambda_j \in S$ may possibly not be an eigenvalue of $T_\Delta(\lambda)$. Note that the algebraic multiplicity of $\lambda_j$ is defined as the multiplicity of $\lambda_j$ as a root of the scalar function $\det(T_\Delta(\lambda))$. Specifically, when $S = \{\mu\}$ and $r = 1$, this distance has the singular value characterization (1.3).

The distance (1.4) extends our previous definitions for (generalized) linear and polynomial eigenvalue problems [23, 19] to the nonlinear case. In these earlier works, we derived a singular value optimization characterization for $\tau_r(S)$, which can be solved numerically using global optimization techniques, at least when $r$ is small [21]. The main contribution of this work is to show that such a characterization also holds in the nonlinear case. A notable feature of our derivation is that it does not make use of linearization techniques, but works directly with the original nonlinear eigenvalue problem, based on the technique of so-called invariant pairs [22].

The rest of this paper is organized as follows. In Section 2, we first analyze the nullspace of a linear, Sylvester-like operator associated with an analytic matrix-valued function $T$. This result then yields a rank characterization for $T$ to have sufficiently many eigenvalues belonging to $S$. Subsequently, in Section 3, we turn this rank characterization into a singular value characterization, which leads to our main result, summarized in Section 4. As this result involves the matricization of the Sylvester-like operator, it may be hard to interpret at first. In Section 5, we therefore show how our result can be brought into a more intuitive form, in terms of the divided differences of $T$. Finally, in Section 6, we illustrate the findings of this paper on two numerical examples. The first example is a small-scale nonlinear eigenvalue problem for which we prescribe a multiple eigenvalue, while the second example is a large-scale problem for which we prescribe two distinct eigenvalues.

2. A rank characterization for eigenvalues of $T$. For the rest of this paper, we assume that $T : \Omega \to \mathbb{C}^{n\times n}$ is regular, that is, its determinant $z \mapsto \det(T(z))$ does not vanish identically on $\Omega$. Following the developments in [23, 19], we will approach

the distance (1.4) by first characterizing the inequality

$$\sum_{j=1}^{s} m_j(T) \ge r \qquad (2.1)$$

in terms of the rank of a certain linear operator.

2.1. A linear operator for nonlinear eigenvalue problems. The natural extension of the usual Sylvester operator to the nonlinear setting is most easily defined if we suppose that $T$ takes the form

$$T(\lambda) = f_1(\lambda)A_1 + f_2(\lambda)A_2 + \cdots + f_K(\lambda)A_K, \qquad (2.2)$$

for analytic functions $f_k : \Omega \to \mathbb{C}$ and matrices $A_k \in \mathbb{C}^{n\times n}$, $k = 1, \dots, K$. Of course, this is always possible by decomposing $T(\lambda)$ into $n^2$ terms $[T(\lambda)]_{ij}\, e_i e_j^T$, but in many applications $T$ is already given in the form (2.2). For a fixed matrix $C \in \mathbb{C}^{r\times r}$ with all eigenvalues contained in $\Omega$, we define

$$T_C : \mathbb{C}^{n\times r} \to \mathbb{C}^{n\times r}, \qquad X \mapsto T_C(X) := \sum_{k=1}^{K} A_k X f_k(C), \qquad (2.3)$$

where $f_k(C)$ is a matrix function in the sense of [18]. Equivalently [6, 7], $T_C$ can be defined by the relation

$$T_C(X) = \frac{1}{2\pi i} \oint_{\mathcal C} T(z)\, X\, (zI - C)^{-1}\, dz, \qquad (2.4)$$

where $\mathcal C$ is a contour (i.e., a simple closed curve) in $\Omega$ enclosing the eigenvalues of $C$ in its interior.

The linear operator $T_C$ is closely associated with the notion of invariant pairs for nonlinear eigenvalue problems [22]. In particular, the nullspace of $T_C$ is trivial if and only if $C$ and $T$ have no eigenvalue in common. To arrive at a rank characterization for (2.1), we need to characterize the nullspace of $T_C$ when $C$ and $T$ do have eigenvalues in common. First of all, because of $T_C(X) = T_{S^{-1}CS}(XS)\,S^{-1}$, we may assume without loss of generality that $C$ is in Jordan canonical form:

$$C = \mathrm{diag}(J_{\mu_1,r_1}, \dots, J_{\mu_q,r_q}),$$

where $J_{\mu,r}$ denotes the Jordan block of size $r$ with eigenvalue $\mu$, i.e., $J_{\mu,r} = \mu I + N_r$, with $N_r \in \mathbb{C}^{r\times r}$ the nilpotent matrix having ones on the superdiagonal and zeros elsewhere. For a single Jordan block $J_{\mu,r}$ we will abbreviate $T_{J_{\mu,r}}$ by $T_{\mu,r}$. Partitioning $X = [X_1, \dots, X_q]$ with $X_i \in \mathbb{C}^{n\times r_i}$, it follows from

$$T_C(X) = \big[\, T_{\mu_1,r_1}(X_1), \dots, T_{\mu_q,r_q}(X_q)\,\big]$$

that

$$\dim \ker T_C = \sum_{i=1}^{q} \dim \ker T_{\mu_i,r_i}. \qquad (2.5)$$

Hence, we can restrict our discussion to linear operators of the type $T_{\mu,r}$.
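As a concrete illustration of (2.3), here is a small sketch (ours; it assumes $C$ is diagonalizable so that $f_k(C)$ can be formed by diagonalization, and all names are illustrative):

```python
import numpy as np

# Sketch of the Sylvester-like operator (2.3): T_C(X) = sum_k A_k X f_k(C).
# Assumes C diagonalizable, so f_k(C) can be formed via an eigendecomposition.
def matrix_function(f, C):
    w, S = np.linalg.eig(C)
    return S @ np.diag(f(w)) @ np.linalg.inv(S)

def apply_T_C(A_list, f_list, C, X):
    return sum(A @ X @ matrix_function(f, C) for A, f in zip(A_list, f_list))

# Linear special case T(lambda) = lambda*I - A, i.e. f_1(z) = z, A_1 = I and
# f_2(z) = 1, A_2 = -A, giving the classical Sylvester operator X -> X C - A X.
A = np.diag([1.0, 2.0, 3.0])
C = np.array([[2.0, 1.0],
              [0.0, 3.0]])              # eigenvalues 2 and 3, shared with A
A_list = [np.eye(3), -A]
f_list = [lambda z: z, lambda z: np.ones_like(z)]
X = np.array([[0.0, 0.0], [1.0, -1.0], [0.0, 0.0]])   # lies in ker T_C
print(np.round(apply_T_C(A_list, f_list, C, X), 12))  # ~ zero matrix
```

Since $A$ and $C$ share the eigenvalues 2 and 3, the nullspace of $T_C$ is nontrivial, in line with the statement above.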

Root polynomials. It is well known [17, 27, 15] that the equation $T_{\mu,r}(X) = 0$ is closely connected to the concept of root polynomials. A vector polynomial

$$\phi(z) = \sum_{k=0}^{r-1} x_k (z - \mu)^k, \qquad x_k \in \mathbb{C}^n,$$

is called a root polynomial belonging to the eigenvalue $\mu$ of $T$ if $\phi(\mu) \ne 0$ and $T(\mu)\phi(\mu) = 0$. Since $z \mapsto \det(T(z))$ does not vanish identically, the same holds for the function $z \mapsto T(z)\phi(z)$. Thus, there exists a maximal integer $\kappa$, called the order of $\phi$, such that $T(z)\phi(z) = (z-\mu)^{\kappa}\psi(z)$ for some analytic vector function $\psi$ defined in a neighborhood of $\mu$ and satisfying $\psi(\mu) \ne 0$. In terms of the derivatives of $T(z)\phi(z)$, we have

$$\kappa = \max\big\{ k : (T\phi)^{(j)}(\mu) = 0 \text{ for } j = 0, \dots, k-1 \big\}.$$

Recall that the multiplicity $m(\mu)$ of the eigenvalue $\mu$ is defined as the multiplicity of $\mu$ as a root of $\det(T(z))$. In contrast, the multiplicity of an eigenvector $x_0 \ne 0$ is the maximal possible order of any root polynomial $\phi(z)$ with $\phi(\mu) = x_0$. The following theorem characterizes a particular set of root polynomials.

Theorem 2.1 ([16, 27]). Let $\mu$ be an eigenvalue of the regular analytic matrix-valued function $T$. Then there exist root polynomials

$$\phi_i(z) = \sum_{k=0}^{\kappa_i - 1} x_{ik}(z-\mu)^k, \qquad i = 1, \dots, s, \qquad (2.6)$$

to the eigenvalue $\mu$ with orders $\kappa_1 \ge \cdots \ge \kappa_s$ such that the vectors $x_{i0} = \phi_i(\mu)$, $i = 1, \dots, s$, form a basis of $\ker T(\mu)$ and $\sum_{i=1}^{s} \kappa_i = m(\mu)$.

A set of root polynomials $\phi_i$ having the properties described in Theorem 2.1 is called a canonical system of root polynomials. The orders $\kappa_i$ in such a canonical system are unique [27] and are called the partial multiplicities of the eigenvalue $\mu$. Note that the degree of $\phi_i$ cannot be larger than $\kappa_i - 1$, but it can be smaller. Moreover, for each $i = 1, \dots, s$ it holds [20, p. 20] that $x_{i0}$ attains the maximal possible multiplicity among all eigenvectors not expressible in terms of linear combinations of $x_{10}, \dots, x_{i-1,0}$. This immediately yields the following result.

Lemma 2.2. Let $\mu$ be an eigenvalue of $T$ and let $\phi_1, \dots, \phi_s$ be a canonical system of root polynomials having orders $\kappa_1 \ge \cdots \ge \kappa_s$. Given any other root polynomial $\phi$ having order $\kappa$, it follows that $\phi(\mu)$ is a linear combination of all $\phi_i(\mu)$ satisfying $\kappa_i \ge \kappa$.

Proof. Let $i$ be the smallest integer such that $\kappa_i < \kappa$. Let us suppose that the statement of the lemma does not hold. Then there is a root polynomial $\phi$ of order $\kappa$ such that $\phi(\mu)$ cannot be expressed in terms of a linear combination of $\phi_1(\mu), \dots, \phi_{i-1}(\mu)$. This, however, violates the maximality of $\phi_i(\mu)$ mentioned above.
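As a toy illustration of these notions (our example, not taken from the paper): for

$$T(z) = \begin{pmatrix} z & 0 \\ 0 & z^2 \end{pmatrix}, \qquad \mu = 0,$$

the vector polynomials $\phi_1(z) = e_2$ and $\phi_2(z) = e_1$ form a canonical system of root polynomials. Indeed, $T(z)\phi_1(z) = z^2 e_2$ and $T(z)\phi_2(z) = z\, e_1$, so the orders are $\kappa_1 = 2 \ge \kappa_2 = 1$; the values $\phi_1(0) = e_2$ and $\phi_2(0) = e_1$ span $\ker T(0)$, and $\kappa_1 + \kappa_2 = 3$ matches the multiplicity of $0$ as a root of $\det T(z) = z^3$.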

The following lemma provides the connection between root polynomials and the nullspace of $T_{\mu,r}$.

Lemma 2.3. Let $\mu \in \Omega$, $q \in \{0, \dots, r-1\}$ and

$$X = [\underbrace{0, \dots, 0}_{q \text{ times}}, x_0, \dots, x_{r-q-1}] \in \mathbb{C}^{n\times r} \quad \text{with } x_0 \ne 0.$$

Furthermore, let

$$\phi_X(z) := \sum_{k=0}^{r-q-1} x_k (z-\mu)^k. \qquad (2.7)$$

Then

$$T_{\mu,r}(X) = [\underbrace{0, \dots, 0}_{q \text{ times}}, y_0, \dots, y_{r-q-1}], \quad \text{where } y_k = \frac{1}{k!}(T\phi_X)^{(k)}(\mu). \qquad (2.8)$$

Thus, $X \in \ker T_{\mu,r}$ if and only if $\phi_X$ is a root polynomial of order at least $r - q$ for the eigenvalue $\mu$ of $T$.

Proof. Let $e_j$ denote the $j$th unit vector of length $r$. Then for $0 \le k \le r-q-1$ it holds that

$$T_{\mu,r}(X)e_{q+1+k} = \frac{1}{2\pi i}\oint_{\mathcal C} T(z)\,X\,\big((z-\mu)I_r - N_r\big)^{-1} e_{q+1+k}\, dz$$
$$= \frac{1}{2\pi i}\oint_{\mathcal C} T(z)\,X \sum_{i=0}^{q+k} \frac{1}{(z-\mu)^{i+1}}\, e_{q+1+k-i}\, dz$$
$$= \sum_{i=0}^{k} \frac{1}{2\pi i}\oint_{\mathcal C} \frac{1}{(z-\mu)^{i+1}}\, T(z)\, x_{k-i}\, dz$$
$$= \sum_{i=0}^{k} \frac{1}{i!}\, T^{(i)}(\mu)\, x_{k-i} = \sum_{i=0}^{k} \frac{1}{i!\,(k-i)!}\, T^{(i)}(\mu)\, \phi_X^{(k-i)}(\mu) = \frac{1}{k!}\,(T\phi_X)^{(k)}(\mu).$$

Analogously one shows $T_{\mu,r}(X)e_j = 0$ for $j \le q$. This completes the proof of (2.8).

As an immediate consequence of Lemma 2.3, we recover the fact that $\ker T_{\mu,r} = \{0\}$ if and only if $\mu \in \Omega$ is not an eigenvalue of $T$.

For our further developments, observe that the multiplication of $N_r$ with $X \in \mathbb{C}^{n\times r}$ effects a right shift of the columns of $X$, i.e., $XN_r = S(X)$ with $S([x_0, x_1, \dots, x_{r-1}]) := [0, x_0, \dots, x_{r-2}]$. From the relation $N_r J_{\mu,r} = J_{\mu,r} N_r$, it follows that $T_{\mu,r}(S(X)) = S(T_{\mu,r}(X))$. Hence, the operator $S$ commutes with $T_{\mu,r}$. This yields that $\ker T_{\mu,r}$ is $S$-invariant:

$$X \in \ker T_{\mu,r} \implies S(X) \in \ker T_{\mu,r}. \qquad (2.9)$$

The following theorem states that a basis of the nullspace can be constructed from a canonical system of root polynomials. An alternative derivation of this result can be found in [14, Sec. 3.4].

Theorem 2.4. Suppose that $\mu$ is an eigenvalue of $T$ with partial multiplicities $\kappa_1 \ge \cdots \ge \kappa_s$. Let $\phi_1, \dots, \phi_s$ be a canonical system of root polynomials, as in (2.6), with coefficients $x_{ik} \in \mathbb{C}^n$. For $i = 1, \dots, s$ let

$$X_i = \begin{cases} [x_{i0}, \dots, x_{i,r-1}] & \text{if } r \le \kappa_i, \\ [\underbrace{0, \dots, 0}_{r - \kappa_i \text{ zeros}}, x_{i0}, \dots, x_{i,\kappa_i - 1}] & \text{if } r > \kappa_i. \end{cases}$$

Then the matrices

$$S^j(X_i), \qquad i = 1, \dots, s, \quad j = 0, \dots, \min\{\kappa_i, r\} - 1, \qquad (2.10)$$

form a basis of $\ker T_{\mu,r}$. In particular, $\dim\ker T_{\mu,r} = \sum_{i=1}^{s} \min\{\kappa_i, r\}$.

Proof. By Lemma 2.3 and (2.9), all matrices defined in (2.10) are contained in $\ker T_{\mu,r}$. Since the polynomials $\phi_i$ form a canonical system, the vectors $x_{i0}$, $i = 1, \dots, s$, are linearly independent. This immediately implies that the matrices $S^j(X_i)$ defined in (2.10) form a linearly independent set.

It remains to show that any $X \in \ker T_{\mu,r}$ can be expressed as a linear combination of the matrices $S^j(X_i)$. Assuming that $X \ne 0$, let us partition

$$X = [\underbrace{0, \dots, 0}_{q \text{ times}}, x_0, \dots, x_{r-q-1}] \in \mathbb{C}^{n\times r}$$

with $x_0 \ne 0$. Then, according to Lemma 2.3, the vector polynomial $\phi_X$ defined in (2.7) is a root polynomial of order at least $r - q$. By Lemma 2.2, this implies that $x_0$ can be expressed as a linear combination

$$x_0 = \sum_{i=1}^{t} \alpha_i\, x_{i0}, \qquad (2.11)$$

where $t$ is the largest integer such that $\kappa_t \ge r - q$. Based on these scalars $\alpha_i$, let us now define

$$\widetilde X := X - \sum_{i=1}^{t} \alpha_i\, S^{q_i}(X_i), \qquad \text{with } q_i = \begin{cases} q & \text{if } r \le \kappa_i, \\ q - (r - \kappa_i) & \text{otherwise.} \end{cases}$$

Then (2.11) implies that the $(q+1)$th column of $\widetilde X$ becomes zero:

$$\widetilde X = [\underbrace{0, \dots, 0}_{q+1 \text{ times}}, \tilde x_0, \dots, \tilde x_{r-q-2}].$$

We can repeat this process to successively annihilate columns $q+2$, $q+3, \dots, r$. Eventually, this shows that $X$ is a linear combination of the matrices $S^j(X_i)$.

2.2. A rank characterization for $\sum_{j=1}^{s} m_j(T) \ge r$. According to the discussion in Section 2.1, Theorem 2.4 implies the following characterization of the nullspace dimension of $T_C$.

Corollary 2.5. Let $T : \Omega \to \mathbb{C}^{n\times n}$ be a regular analytic matrix-valued function and let $C \in \mathbb{C}^{r\times r}$ have the Jordan decomposition $C = S\, \mathrm{diag}(J_{\mu_1,r_1}, \dots, J_{\mu_q,r_q})\, S^{-1}$. Suppose that all eigenvalues $\mu_j$ of $C$ are contained in $\Omega$, and hence the linear operator $T_C$ in (2.4) is well defined. Let $J$ be the set of $j$ such that $\mu_j$ is an eigenvalue of $T$. For $j \in J$ let $\kappa_{1j} \ge \cdots \ge \kappa_{s_j,j}$ be the partial multiplicities of $\mu_j$ as an eigenvalue of $T$. Then

$$\dim\ker T_C = \sum_{j\in J} \sum_{i=1}^{s_j} \min\{\kappa_{ij}, r_j\}. \qquad (2.12)$$
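Theorem 2.4 is easy to confirm numerically in the linear special case (a sketch, ours; the Kronecker matricization used here anticipates the representation of Section 2.2):

```python
import numpy as np

# Check of Theorem 2.4 for T(lambda) = lambda*I - A, where
# T_{mu,r}(X) = X J_{mu,r} - A X, matricized as J^T kron I - I kron A.
def jordan(mu, r):
    return mu * np.eye(r) + np.diag(np.ones(r - 1), 1)

A = np.zeros((3, 3)); A[0, 1] = 1.0   # eigenvalue 0, partial multiplicities (2, 1)
for r in (1, 2, 3):
    J = jordan(0.0, r)
    M = np.kron(J.T, np.eye(3)) - np.kron(np.eye(r), A)
    dim_ker = 3 * r - np.linalg.matrix_rank(M)
    print(r, dim_ker, min(2, r) + min(1, r))  # dim ker = sum_i min(kappa_i, r)
```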

In what follows, we will choose $C$ to be an upper triangular matrix of the form

$$C(\mu, \Gamma) := \begin{bmatrix} \mu_1 & \gamma_{12} & \cdots & \gamma_{1r} \\ 0 & \mu_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \gamma_{r-1,r} \\ 0 & \cdots & 0 & \mu_r \end{bmatrix},$$

where

$$\mu := [\mu_1\ \cdots\ \mu_r] \in \mathbb{C}^r, \qquad \Gamma := [\gamma_{12}\ \cdots\ \gamma_{r-1,r}]^T \in \mathbb{C}^{r(r-1)/2}.$$

We let the set $G(\mu)$ contain all values of $\Gamma$ for which each eigenvalue of $C(\mu,\Gamma)$ has geometric multiplicity one. The set $G(\mu)$ is dense in $\mathbb{C}^{r(r-1)/2}$; see [12].

After these preparations, we are ready to present the main result of this section, a dimensionality characterization for the eigenvalue multiplicities of $T$. Below, $S^r$ denotes the set of $r$-tuples with elements in $S$.

Theorem 2.6. Let $T : \Omega \to \mathbb{C}^{n\times n}$ be an analytic matrix-valued function. Then the following two conditions are equivalent:
(i) $\sum_{j=1}^{s} m_j(T) \ge r$.
(ii) There exists $\mu \in S^r$ such that

$$\dim\big\{ X : T_{C(\mu,\Gamma)}(X) = 0 \big\} \ge r \qquad (2.13)$$

for all $\Gamma \in G(\mu)$.

Proof. The result follows directly from Corollary 2.5, by letting $\mu$ contain at most $m_j(T)$ copies of each eigenvalue $\lambda_j$.

Using basic properties of the Kronecker product, the $nr \times nr$ matrix representation $K(\mu, \Gamma, T)$ of the linear operator $T_{C(\mu,\Gamma)}$ can be easily derived from (2.4):

$$K(\mu, \Gamma, T) = \frac{1}{2\pi i}\oint_{\mathcal C} \big((zI - C(\mu,\Gamma))^{-1}\big)^T \otimes T(z)\, dz. \qquad (2.14)$$

Equivalently, the decomposition (2.2) of $T$ and the definition (2.3) of $T_C$ give

$$K(\mu, \Gamma, T) = \sum_{k=1}^{K} f_k(C(\mu,\Gamma))^T \otimes A_k.$$

Using this matrix representation, (2.13) becomes equivalent to the rank condition

$$\mathrm{rank}\, K(\mu, \Gamma, T) \le nr - r. \qquad (2.15)$$

3. Derivation of the Singular Value Characterization. Coming back to our proposed distance $\tau_r(S)$ defined in (1.4), the rank characterization (2.15) derived in the previous section yields

$$\tau_r(S) = \inf\big\{ \|\Delta\|_2 : \exists\, \mu \in S^r \text{ such that } \mathrm{rank}\big(K(\mu,\Gamma,T+\Delta)\big) \le nr - r \text{ for } \Gamma \in G(\mu) \big\} = \inf_{\mu\in S^r} P_r(\mu),$$

where $T_\Delta := T + \Delta$ denotes the analytic matrix-valued function $T_\Delta(\lambda) = T(\lambda) + \Delta$, and

$$P_r(\mu) := \inf\big\{ \|\Delta\|_2 : \mathrm{rank}\big(K(\mu,\Gamma,T+\Delta)\big) \le nr - r \big\} \qquad (3.1)$$

for any $\Gamma \in G(\mu)$.
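For the linear special case $T(\lambda) = \lambda I - A$, the matricization and the rank condition (2.15) can be checked directly (a sketch, ours; the numerical values are illustrative):

```python
import numpy as np

# Sketch: for T(lambda) = lambda*I - A, (2.14) reduces to
# K = C(mu, Gamma)^T kron I - I kron A.  With mu = (1, 2) consisting of two
# eigenvalues of A, the rank condition (2.15) holds: rank K <= n*r - r.
A = np.diag([1.0, 2.0, 3.0])
n, r, gamma = 3, 2, 0.7
C = np.array([[1.0, gamma],
              [0.0, 2.0]])                     # C(mu, Gamma)
K = np.kron(C.T, np.eye(n)) - np.kron(np.eye(r), A)
print(np.linalg.matrix_rank(K), n * r - r)     # both equal 4 here
```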

This section is concerned with the derivation of a singular value optimization characterization for $P_r(\mu)$. We immediately obtain the lower bound $P_r(\mu) \ge \sigma_r(K(\mu,\Gamma,T))$ from the Eckart-Young theorem, where $\sigma_r(\cdot)$ denotes the $r$th smallest singular value of a matrix. Indeed, since the above inequality holds for any $\Gamma \in G(\mu)$, and due to the continuity of the singular value $\sigma_r(\cdot)$ with respect to $\Gamma$, we deduce

$$P_r(\mu) \ge \sup_{\Gamma}\, \sigma_r\big(K(\mu,\Gamma,T)\big) =: \kappa_r(\mu). \qquad (3.2)$$

As in the derivations for the specialized cases [23, 19], which led us to consider this more general setting, we prove $P_r(\mu) = \kappa_r(\mu)$, under mild multiplicity and linear independence assumptions, by constructing a perturbation $\Delta_*$ such that (i) $\|\Delta_*\|_2 = \kappa_r(\mu)$, and (ii) $\mathrm{rank}\big(K(\mu,\Gamma,T+\Delta_*)\big) \le nr - r$ for some $\Gamma \in G(\mu)$.

As discussed in [19], the supremum in (3.2) is always attained at some $\Gamma_*$, as long as $r \le n$ and for generic values of $\mu$. More precisely, we require the vector $\mu \in \mathbb{C}^r$ to satisfy the condition that the matrix

$$T[\mu_k, \mu_l] := \begin{cases} \big(T(\mu_l) - T(\mu_k)\big)/(\mu_l - \mu_k) & \mu_l \ne \mu_k, \\ T'(\mu_l) & \mu_l = \mu_k, \end{cases}$$

is invertible for all $k \ne l$. The proof of the existence of $\Gamma_*$ can be found in [19, Appendix A]. Although this proof targets matrix polynomials $P(\lambda)$, it makes no use of particular properties of polynomials and directly carries over to an analytic matrix-valued function $T(\lambda)$ represented in the form (2.2).

Let $\Gamma_*$ be such that $\kappa_r(\mu) = \sigma_r(K(\mu,\Gamma_*,T))$, and let $u, v \in \mathbb{C}^{nr}$ be the associated left and right singular vectors satisfying

$$K(\mu,\Gamma_*,T)\, v = \kappa_r(\mu)\, u \qquad \text{and} \qquad u^* K(\mu,\Gamma_*,T) = \kappa_r(\mu)\, v^*. \qquad (3.3)$$

Let us consider the perturbation defined by

$$\Delta_* := -\kappa_r(\mu)\, U V^+, \qquad (3.4)$$

where $U, V \in \mathbb{C}^{n\times r}$ are such that $\mathrm{vec}(U) = u$, $\mathrm{vec}(V) = v$, and $V^+$ denotes the Moore-Penrose pseudoinverse of $V$. We claim that $\Delta_*$ satisfies both of the properties (i) and (ii) under the following mild assumptions.

Definition 3.1 (multiplicity assumption). The multiplicity of the singular value $\sigma_r(K(\mu,\Gamma_*,T))$ is one.

Definition 3.2 (linear independence assumption). There exists a right singular vector $v$ associated with the singular value $\sigma_r(K(\mu,\Gamma_*,T))$ such that the $n\times r$ matrix $V$ with $\mathrm{vec}(V) = v$ has full column rank.

The subsequent two subsections are devoted to the proofs of the properties (i) and (ii) for the perturbation $\Delta_*$ defined by (3.4). To simplify the derivation, we initially make two additional assumptions. First, $\mu$ is supposed to be comprised of distinct elements. In this case all eigenvalues of $C(\mu,\Gamma)$ are simple, and hence $G(\mu) = \mathbb{C}^{r(r-1)/2}$. Second, we assume that $T[\mu_k, \mu_l]$ has full rank for all $k \ne l$. Eventually, these two additional assumptions will be dropped.
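In code, the construction (3.4) reads as follows (a sketch, ours; it assumes NumPy's descending ordering of singular values and column-stacking vec, and presupposes that K is already evaluated at a maximizing $\Gamma_*$):

```python
import numpy as np

# Sketch of the extremal perturbation (3.4).  K is the nr x nr matricization
# K(mu, Gamma_*, T); numpy returns singular values in decreasing order, so
# s[-r] is the r-th smallest singular value.
def extremal_perturbation(K, n, r):
    Uall, s, Vh = np.linalg.svd(K)
    u, v = Uall[:, -r], Vh[-r, :].conj()
    U = u.reshape((n, r), order='F')           # unvec: vec(U) = u
    V = v.reshape((n, r), order='F')           # unvec: vec(V) = v
    return -s[-r] * U @ np.linalg.pinv(V)      # Delta_* = -kappa_r(mu) U V^+

# Note: ||Delta_*||_2 = kappa_r(mu) hinges on U*U = V*V, which Section 3.1
# establishes only at a maximizing Gamma_* under the multiplicity assumption.
```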

3.1. Property (i): Norm of $\Delta_*$. In this section, we show $\|\Delta_*\|_2 = \kappa_r(\mu) = \sigma_r(K(\mu,\Gamma_*,T))$, which amounts to verifying $\|UV^+\|_2 = 1$. As noted in [25, 23, 19], the latter property follows from the relation $U^*U = V^*V$, which we will establish under the multiplicity assumption.

Let us consider the function $\sigma(\Gamma) := \sigma_r(K(\mu,\Gamma,T))$ where

$$K(\mu,\Gamma,T) := \frac{1}{2\pi i}\oint_{\mathcal C} \big((zI - C(\mu,\Gamma))^{-1}\big)^T \otimes T(z)\, dz.$$

By differentiating the relation $I = (zI - C(\mu,\Gamma))\,R(z,\Gamma)$ one obtains the following identities for the partial derivatives of the resolvent $R(z,\Gamma) = (zI - C(\mu,\Gamma))^{-1}$ with respect to the real and imaginary parts of the components $\gamma_{ik}$ of $\Gamma$:

$$\frac{\partial R}{\partial\,\Re\gamma_{ik}}(z,\Gamma) = R(z,\Gamma)\,\frac{\partial C}{\partial\,\Re\gamma_{ik}}(\mu,\Gamma)\,R(z,\Gamma) = R(z,\Gamma)\, e_i e_k^T\, R(z,\Gamma),$$
$$\frac{\partial R}{\partial\,\Im\gamma_{ik}}(z,\Gamma) = R(z,\Gamma)\,\frac{\partial C}{\partial\,\Im\gamma_{ik}}(\mu,\Gamma)\,R(z,\Gamma) = \mathrm{i}\, R(z,\Gamma)\, e_i e_k^T\, R(z,\Gamma),$$

where $1 \le i < k \le r$. It follows that

$$\frac{\partial K}{\partial\,\Re\gamma_{ik}}(\mu,\Gamma,T) = \frac{1}{2\pi i}\oint_{\mathcal C} \big[ R(z,\Gamma)\, e_i e_k^T\, R(z,\Gamma) \big]^T \otimes T(z)\, dz,$$
$$\frac{\partial K}{\partial\,\Im\gamma_{ik}}(\mu,\Gamma,T) = \frac{1}{2\pi}\oint_{\mathcal C} \big[ R(z,\Gamma)\, e_i e_k^T\, R(z,\Gamma) \big]^T \otimes T(z)\, dz,$$

for $1 \le i < k \le r$. Let

$$G := \frac{1}{2\pi i}\oint_{\mathcal C} R(z,\Gamma_*)\, U^* T(z)\, V\, R(z,\Gamma_*)\, dz. \qquad (3.5)$$

From the assumption that the singular value $\sigma(\Gamma_*)$ is simple, it follows that the function $\Gamma \mapsto \sigma(\Gamma)$ is analytic [10] at $\Gamma_*$, and

$$0 = \frac{\partial\sigma}{\partial\,\Re\gamma_{ik}}(\Gamma_*) = \Re\Big( u^*\, \frac{\partial K}{\partial\,\Re\gamma_{ik}}(\mu,\Gamma_*,T)\, v \Big) = \Re\Big( \mathrm{vec}(U)^*\, \frac{\partial K}{\partial\,\Re\gamma_{ik}}(\mu,\Gamma_*,T)\, \mathrm{vec}(V) \Big)$$
$$= \Re\Big( \mathrm{vec}(U)^*\, \mathrm{vec}\Big( \frac{1}{2\pi i}\oint_{\mathcal C} T(z)\, V R(z,\Gamma_*)\, e_i e_k^T\, R(z,\Gamma_*)\, dz \Big) \Big)$$
$$= \Re\Big( \mathrm{tr}\Big( U^*\, \frac{1}{2\pi i}\oint_{\mathcal C} T(z)\, V R(z,\Gamma_*)\, e_i e_k^T\, R(z,\Gamma_*)\, dz \Big) \Big) = \Re\big( e_k^T G\, e_i \big)$$

for $1 \le i < k \le r$, where the last equality follows from the trace identity $\mathrm{tr}(XY) = \mathrm{tr}(YX)$. Analogously, we have

$$0 = \frac{\partial\sigma}{\partial\,\Im\gamma_{ik}}(\Gamma_*) = \Re\big( -\mathrm{i}\, e_k^T G\, e_i \big) = \Im\big( e_k^T G\, e_i \big)$$

for $1 \le i < k \le r$. Thus, $G$ is upper triangular. The identities

$$\frac{1}{2\pi i}\oint_{\mathcal C} T(z)\, V\, R(z,\Gamma_*)\, dz = \sigma(\Gamma_*)\, U, \qquad \frac{1}{2\pi i}\oint_{\mathcal C} R(z,\Gamma_*)\, U^* T(z)\, dz = \sigma(\Gamma_*)\, V^*$$

imply

$$\frac{1}{2\pi i}\oint_{\mathcal C} U^* T(z)\, V\, R(z,\Gamma_*)\, dz = \sigma(\Gamma_*)\, U^*U, \qquad \frac{1}{2\pi i}\oint_{\mathcal C} R(z,\Gamma_*)\, U^* T(z)\, V\, dz = \sigma(\Gamma_*)\, V^*V.$$

Thus,

$$\sigma(\Gamma_*)\,\big( U^*U - V^*V \big) = G\, C(\mu,\Gamma_*) - C(\mu,\Gamma_*)\, G.$$

Consequently, since both $G$ and $C(\mu,\Gamma_*)$ are upper triangular, the right-hand side of the equation above is strictly upper triangular, whereas the left-hand side is Hermitian. Therefore both sides must vanish, and we have $U^*U = V^*V$ as desired.

3.2. Property (ii): Rank condition. It remains to show that the constructed perturbation $\Delta_* = -\kappa_r(\mu)\, UV^+$ satisfies the rank property (ii) for $K$. Using both the multiplicity and linear independence assumptions, we will establish this property by showing that the nullspace of the perturbed linear operator has dimension at least $r$.

We start by writing the left-hand equation in (3.3) in terms of the operator $T_{C(\mu,\Gamma_*)}$:

$$T_{C(\mu,\Gamma_*)}(V) = \sigma(\Gamma_*)\, U.$$

By exploiting the property $U^*U = V^*V$ from the previous section, we have $U = UV^+V$ and hence

$$0 = T_{C(\mu,\Gamma_*)}(V) - \sigma(\Gamma_*)\, UV^+V = T_{C(\mu,\Gamma_*)}(V) + \Delta_* V. \qquad (3.6)$$

Furthermore, consider the space of matrices $\{ D : C(\mu,\Gamma_*)D - DC(\mu,\Gamma_*) = 0 \}$ commuting with $C(\mu,\Gamma_*)$, which has dimension at least $r$ [15]. It is easy to see that any such $D$ also commutes with $(zI - C(\mu,\Gamma_*))^{-1}$, and hence

$$0 = \big( T_{C(\mu,\Gamma_*)}(V) + \Delta_* V \big) D = \frac{1}{2\pi i}\oint_{\mathcal C} T(z)\, V D\, (zI - C(\mu,\Gamma_*))^{-1}\, dz + \Delta_*\, VD = T_{C(\mu,\Gamma_*)}(VD) + \Delta_*\, VD.$$

Since $V$ has full column rank by the linear independence assumption, the map $D \mapsto VD$ is injective. This shows the desired result, that is, the nullspace of $X \mapsto T_{C(\mu,\Gamma_*)}(X) + \Delta_* X$ has dimension at least $r$.

4. Main Result. Before stating the main result, we discuss why the two additional assumptions mentioned above can be removed. Suppose that $\mu \in \mathbb{C}^r$ has repeating elements and/or that $T[\mu_k, \mu_l]$ is singular for some $k, l$, but let us still assume that the linear independence and multiplicity assumptions hold at $\mu$. By the regularity of $T$, it follows that for essentially all $\tilde\mu \in \mathbb{C}^r$ with mutually distinct elements and sufficiently close to $\mu$, the matrix $T[\tilde\mu_l, \tilde\mu_k]$ is nonsingular for all $k, l$. By continuity of the singular values and vectors, the multiplicity and linear independence assumptions remain valid at $\tilde\mu$, provided that $\tilde\mu$ is chosen sufficiently close to $\mu$. Consequently, it follows from the previous section that $\kappa_r(\tilde\mu) = P_r(\tilde\mu)$ for all such $\tilde\mu$. By continuity, $\kappa_r(\mu) = P_r(\mu)$.

The following theorem summarizes our findings.

Theorem 4.1. Let $T : \Omega \to \mathbb{C}^{n\times n}$ be analytic and regular. Suppose that $S = \{\lambda_1, \dots, \lambda_s\} \subseteq \Omega$ and $r \le n$ are given. Then the distance $\tau_r(S)$ defined in (1.4) has the singular value optimization characterization

$$\tau_r(S) = \inf_{\mu\in S^r}\ \sup_{\Gamma\in\mathbb{C}^{r(r-1)/2}} \sigma_r\big( K(\mu,\Gamma,T) \big), \qquad (4.1)$$

provided that the multiplicity and linear independence assumptions hold at $\mu_*$, the optimal value of $\mu$. Moreover, a minimal perturbation with 2-norm equal to $\tau_r(S)$ is given by (3.4).

Let us remark that the statement of Theorem 4.1 remains true for $r > n$, under the additional assumption that the inner supremum in (4.1) is attained.

5. Divided-Difference Formulas. The singular value characterization (4.1) can be conveniently expressed in terms of divided differences; see, e.g., [9]. The divided difference of a function $f : \mathbb{R} \to \mathbb{R}$ at nodes $x_0, \dots, x_k$ is given by

$$f[x_0, x_1, \dots, x_k] = \begin{cases} \dfrac{f[x_1, \dots, x_k] - f[x_0, \dots, x_{k-1}]}{x_k - x_0} & x_0 \ne x_k, \\[1ex] \dfrac{f^{(k)}(x_0)}{k!} & x_0 = x_k, \end{cases} \qquad (5.1)$$

where the nodes are assumed to be contiguous, that is, $x_j = x_l$ for some $l > j$ implies $x_i = x_j$ for $i = j, \dots, l$. Given a lower triangular matrix $L$, the entries of the matrix function $F = f(L)$, which is also lower triangular, are given by

$$f_{il} = \sum_{(s_0, s_1, \dots, s_k)} l_{s_1 s_0}\, l_{s_2 s_1} \cdots l_{s_k s_{k-1}}\, f[\mu_{s_0}, \dots, \mu_{s_k}]$$

for $i > l$ and $f_{ii} = f(\mu_i)$, where $\mu_i = l_{ii}$ and the summation is over all increasing sequences of integers starting with $l$ and ending with $i$; see [18, Theorem 4.11].

Applying the formula above for functions of lower triangular matrices to

$$K(\mu,\Gamma,T) = \sum_{k=1}^{K} f_k(C(\mu,\Gamma))^T \otimes A_k$$

yields the following result: the matrix $K(\mu,\Gamma,T) \in \mathbb{C}^{nr\times nr}$ is block lower triangular, and its $n\times n$ submatrix at rows $1 + (i-1)n : in$ and columns $1 + (l-1)n : ln$ is given by

$$\begin{cases} \displaystyle\sum_{(s_0, s_1, \dots, s_k)} \gamma_{s_0 s_1}\, \gamma_{s_1 s_2} \cdots \gamma_{s_{k-1} s_k}\, T[\mu_{s_0}, \dots, \mu_{s_k}] & i > l, \\ T(\mu_i) & i = l, \\ 0 & i < l. \end{cases}$$

The proof of these relations for matrix polynomials can be found in [19, Section 3.4]; it directly extends to analytic matrix-valued functions. When $S = \{\lambda\}$ and $r = 2$, Theorem 4.1 combined with the representation of $K(\mu,\Gamma,T)$ above results in the formula

$$\sup_{\gamma\in\mathbb{C}}\ \sigma_2\!\left( \begin{bmatrix} T(\lambda) & 0 \\ \gamma\, T'(\lambda) & T(\lambda) \end{bmatrix} \right) \qquad (5.2)$$

for the distance to a nearest nonlinear eigenvalue problem with $\lambda$ as a multiple eigenvalue.
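For $r = 2$ the divided-difference form is straightforward to code (a sketch, ours; `dT` stands for the derivative $T'$):

```python
import numpy as np

# Sketch of K(mu, gamma, T) for r = 2 in divided-difference form, cf. (5.2):
# the (2,1) block is gamma * T[mu_1, mu_2], with T[mu, mu] = T'(mu).
def div_diff(T, dT, mu1, mu2):
    if mu1 == mu2:
        return dT(mu1)
    return (T(mu2) - T(mu1)) / (mu2 - mu1)

def K2(T, dT, mu1, mu2, gamma):
    n = T(mu1).shape[0]
    Z = np.zeros((n, n), dtype=complex)
    return np.block([[T(mu1),                            Z],
                     [gamma * div_diff(T, dT, mu1, mu2), T(mu2)]])

def sigma2(T, dT, mu1, mu2, gamma):
    # second smallest singular value of the 2n x 2n block matrix
    return np.linalg.svd(K2(T, dT, mu1, mu2, gamma), compute_uv=False)[-2]
```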

When two eigenvalues are prescribed, that is, $S = \{\lambda_1, \lambda_2\}$ and $r = 2$, we obtain

$$\inf_{(\mu_1,\mu_2)\in S^2}\ \sup_{\gamma\in\mathbb{C}}\ \sigma_2\!\left( \begin{bmatrix} T(\mu_1) & 0 \\ \gamma\, T[\mu_1,\mu_2] & T(\mu_2) \end{bmatrix} \right).$$

6. Numerical Examples. We illustrate the main result on two examples. The first is a small-scale problem, concerning the distance to a nearest nonlinear eigenvalue problem with a multiple eigenvalue. The second one is a large-scale problem, and is aimed at illustrating the point that even if the individual backward errors are small for two approximate eigenvalues, there may not be any nearby problem with both of these approximations as eigenvalues; moreover, it is not possible to detect this by looking at the resolvent norms alone.

6.1. Prescribing a multiple eigenvalue. Let us consider the distance $\tau_r(S)$ with $S = \{\lambda\}$ and $r = 2$ for a prescribed eigenvalue $\lambda \in \mathbb{C}$. In this case, the formula (5.2) applies. In fact, the supremum can be taken over $\mathbb{R}$ instead of $\mathbb{C}$, since the singular values of

$$\begin{bmatrix} T(\lambda) & 0 \\ \gamma\, T'(\lambda) & T(\lambda) \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} T(\lambda) & 0 \\ |\gamma|\, T'(\lambda) & T(\lambda) \end{bmatrix}$$

are identical. We deduce that the distance to a nearest nonlinear eigenvalue problem with a multiple eigenvalue can be expressed as

$$\inf_{\lambda\in\Omega}\ \sup_{\gamma\in\mathbb{R}}\ \sigma_2\!\left( \begin{bmatrix} T(\lambda) & 0 \\ \gamma\, T'(\lambda) & T(\lambda) \end{bmatrix} \right). \qquad (6.1)$$

In view of the result by Malyshev [25] for the linear case, we conjecture that (6.1) holds without requiring the multiplicity and linear independence assumptions.

It is well known for the standard eigenvalue problem $T(\lambda) = A - \lambda I$ that the distance to a nearest matrix with a multiple eigenvalue corresponds to the smallest $\epsilon$ such that two components of the $\epsilon$-pseudospectrum coalesce [3]. Generalizations of this result to matrix pencils and matrix polynomials, allowing for perturbations to all coefficients, have been shown in [2, Theorem 5.1] and [1, Theorem 7.1], respectively. The generalization to analytic matrix-valued functions in our setting, when only constant perturbations are permitted, appears to be straightforward. In particular, denoting the set of eigenvalues by $\Lambda(\cdot)$, we consider the following $\epsilon$-pseudospectrum for the analytic matrix-valued function $T$:

$$\Lambda_\epsilon(T) := \bigcup_{\|\Delta\|_2 \le \epsilon} \Lambda\big(T(\lambda) + \Delta\big) = \big\{ \lambda \in \mathbb{C} : \sigma_1(T(\lambda)) \le \epsilon \big\}.$$

The eigenvalues of $T$ are the only local minimizers of $\sigma_1(T(\cdot))$ [1, Theorem 4.2], so there is a connected component of $\Lambda_\epsilon(T)$ around each eigenvalue of $T$. When $T$ has distinct eigenvalues, the smallest $\epsilon$ such that two components of $\Lambda_\epsilon(T)$ coalesce is the distance to a nearest nonlinear eigenvalue problem with a multiple eigenvalue, which admits the singular value characterization (6.1).

To illustrate (6.1), let us compute the distance to a multiple eigenvalue for a nonlinear eigenvalue problem from a delay differential equation [29]:

$$T(\lambda) = (e^{\lambda} - 1)\, B_1 + \lambda^2 B_2 - B_0. \qquad (6.2)$$
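Reusing the helpers `K2` and `sigma2` from the sketch in Section 5, the inner maximization in (6.1) over real $\gamma$ can be sketched as follows (ours; the search bracket is an ad hoc assumption, and Figure 6.3 below suggests the function is unimodal in $\gamma$):

```python
from scipy.optimize import minimize_scalar

# Sketch of the inner supremum in (6.1) for a fixed lambda: maximize sigma_2
# of the 2x2 block matrix over real gamma >= 0 (sigma2 as defined above;
# the search interval [0, gamma_max] is an ad hoc choice).
def kappa2(T, dT, lam, gamma_max=10.0):
    res = minimize_scalar(lambda g: -sigma2(T, dT, lam, lam, g),
                          bounds=(0.0, gamma_max), method='bounded')
    return -res.fun
```

The outer infimum over $\lambda \in \Omega$ is the genuinely hard, global part of the optimization, which is what the algorithm of [21] addresses.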

[Fig. 6.1. Contours of the pseudospectra for the nonlinear eigenvalue problem (6.2), with the eigenvalues marked by crosses. The inner contour corresponds to the $\epsilon$-pseudospectrum for $\epsilon$ equal to the distance to a nearest nonlinear eigenvalue problem with a multiple eigenvalue. Two components coalesce as expected, and the point of coalescence, marked with an asterisk, is the multiple eigenvalue of a nearest nonlinear eigenvalue problem.]

[Fig. 6.2. Eigenvalues of the nonlinear eigenvalue problem (6.3), plotted in the complex plane (real part versus imaginary part).]

For this purpose, we apply the algorithm in [21] to the characterization (6.1). The matrices $B_0, B_1, B_2 \in \mathbb{R}^{8\times 8}$ are given by

$$B_0 = 100\, I_8, \qquad B_1 = \big[ b^{(1)}_{jk} \big], \quad b^{(1)}_{jk} = \frac{1}{9 - \max(j,k)}, \qquad B_2 = \big[ b^{(2)}_{jk} \big], \quad b^{(2)}_{jk} = 9\,\delta_{jk} + \frac{1}{j+k},$$

with $\delta_{jk} = 1$ if $j = k$, and zero otherwise. This is an example taken from [29]. A plot of the pseudospectra of $T$ is provided in Figure 6.1. The inner curves correspond to the boundary of $\Lambda_\epsilon(T)$ for $\epsilon = 1.606$, which is the computed distance.

6.2. Checking that a pair of eigenvalue approximations corresponds to a nearby pair of eigenvalues. The second example is taken from [4], and concerns the eigenvalues of

$$T(\lambda) = K - \lambda^2 M + \mathrm{i}\,\lambda\, W_1 + \mathrm{i}\,\sqrt{\lambda^2 - \sigma^2}\, W_2, \qquad (6.3)$$

where $K, M, W_1, W_2 \in \mathbb{R}^{9956\times 9956}$ are symmetric, $K$ is positive semi-definite, $M$ is positive definite, and $\sigma = 108.8774$. This nonlinear eigenvalue problem arises from a model of a radio-frequency gun cavity [24]. Figure 6.2 displays some of the eigenvalues of $T$, computed with the algorithm described in [13].

Suppose now that a (different) algorithm returns two pairs of eigenvalue approximations, $\mu_1^{(1)}, \mu_2^{(1)}$ and $\mu_1^{(2)}, \mu_2^{(2)}$, listed in the first two columns of Table 6.1.
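Plots like Figure 6.1 can be produced by evaluating $\sigma_{\min}(T(\lambda))$ on a grid and drawing level curves (a generic sketch, ours; `T` is any callable returning an $n\times n$ array):

```python
import numpy as np

# Sketch: sample sigma_min(T(lambda)) on a rectangular grid; the level set
# {sigma_min <= eps} is the eps-pseudospectrum Lambda_eps(T), and the contour
# at the computed distance exhibits the coalescence seen in Figure 6.1.
def sigma_min_grid(T, re_pts, im_pts):
    G = np.empty((len(im_pts), len(re_pts)))
    for i, y in enumerate(im_pts):
        for j, x in enumerate(re_pts):
            G[i, j] = np.linalg.svd(T(x + 1j * y), compute_uv=False)[-1]
    return G   # e.g. matplotlib.pyplot.contour(re_pts, im_pts, G, levels=[eps])
```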

[Table 6.1. Two pairs of eigenvalue approximations for the nonlinear eigenvalue problem (6.3): for $j = 1, 2$, the columns list $\mu_1^{(j)}$, $\mu_2^{(j)}$, the individual backward errors $\sigma_1(T(\mu_1^{(j)}))$ and $\sigma_1(T(\mu_2^{(j)}))$ (columns 3 and 4), and the distance $\tau_2(\{\mu_1^{(j)}, \mu_2^{(j)}\})$ (column 5) to a nearest problem having $\mu_1^{(j)}, \mu_2^{(j)}$ as exact eigenvalues.]

[Fig. 6.3. The plot of $\sigma_2(K(\mu,\gamma,T))$ with respect to $\gamma \in [0,3]$ for $\mu = [\mu_1^{(2)}, \mu_2^{(2)}]$, where $\mu_1^{(2)}, \mu_2^{(2)}$ are as in Table 6.1.]

Considered individually, each of the approximations gives a reasonably small backward error $\sigma_1(T(\mu))$; see Table 6.1. However, this does not imply that each pair is a good approximation to an eigenvalue pair of $T$. In fact, it can already be seen from Figure 6.2 that there is only one eigenvalue (with multiplicity 1) close to the first pair $\mu_1^{(1)}, \mu_2^{(1)}$. Hence, one of the two approximations is spurious. This can be verified by computing $\tau_2(\{\mu_1^{(1)}, \mu_2^{(1)}\})$, which gives a much larger value than the individual backward errors; see column 5 of Table 6.1. In contrast, the second pair $\mu_1^{(2)}, \mu_2^{(2)}$ is visually close to two eigenvalues. This is also confirmed by the fact that the distance $\tau_2(\{\mu_1^{(2)}, \mu_2^{(2)}\})$ is of the same order as the individual backward errors. Finally, Figure 6.3 illustrates

$$\sigma_2\!\left( \begin{bmatrix} T(\mu_1^{(2)}) & 0 \\ \gamma\, T[\mu_1^{(2)}, \mu_2^{(2)}] & T(\mu_2^{(2)}) \end{bmatrix} \right)$$

with respect to $\gamma \in [0, 3]$. The figure suggests that the singular value function is unimodal in the single parameter $\gamma$, a property that can be exploited by the optimization algorithm.
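This check can be sketched in a few lines by taking the infimum in (4.1) over all tuples in $S^2$ and the supremum over a grid of $\gamma$ values (ours, reusing `sigma2` from the Section 5 sketch; the grid $[0, 3]$ mirrors Figure 6.3 and is an ad hoc choice):

```python
import numpy as np
from itertools import product

# Sketch: tau_2(S) for S = {mu1, mu2} via Theorem 4.1, reusing sigma2 from
# Section 5: minimize over tuples mu in S^2, maximize over a grid of gamma.
def tau2(T, dT, mu1, mu2, gammas=np.linspace(0.0, 3.0, 301)):
    best = np.inf
    for a, b in product((mu1, mu2), repeat=2):
        best = min(best, max(sigma2(T, dT, a, b, g) for g in gammas))
    return best
```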

7. Concluding Remarks. We have derived a further generalization of earlier works [23, 19] for locating nearest analytic matrix-valued functions with specified eigenvalues. For this purpose, we have studied the structure of the nullspace of Sylvester-like operators associated with nonlinear eigenvalue problems. We believe that these auxiliary results could be of independent interest. Our main result is the singular value characterization in Theorem 4.1. This characterization can be turned into a numerical method by exploiting the smoothness properties of singular values, using, for instance, the algorithm described in [21].

8. Acknowledgments. We thank Cedric Effenberger for inspiring discussions on the nullspace of $T_C$ and for providing Figure 6.2. We are also grateful to two anonymous referees and Françoise Tisseur for their invaluable comments.

REFERENCES

[1] S. S. Ahmad and R. Alam. Pseudospectra, critical points and multiple eigenvalues of matrix polynomials. Linear Algebra Appl., 430(4):1171-1195, 2009.
[2] S. S. Ahmad, R. Alam, and R. Byers. On pseudospectra, critical points, and multiple eigenvalues of matrix pencils. SIAM J. Matrix Anal. Appl., 31(4):1915-1933, 2010.
[3] R. Alam and S. Bora. On sensitivity of eigenvalues and eigendecompositions of matrices. Linear Algebra Appl., 396:273-301, 2005.
[4] T. Betcke, N. J. Higham, V. Mehrmann, C. Schröder, and F. Tisseur. NLEVP: A collection of nonlinear eigenvalue problems. ACM Trans. Math. Software, 39(2):7:1-7:28, 2013.
[5] T. Betcke and D. Kressner. Perturbation, extraction and refinement of invariant pairs for matrix polynomials. Linear Algebra Appl., 435(3):514-536, 2011.
[6] W.-J. Beyn. An integral method for solving nonlinear eigenvalue problems. Linear Algebra Appl., 436(10):3839-3863, 2012.
[7] W.-J. Beyn, C. Effenberger, and D. Kressner. Continuation of eigenvalues and invariant pairs for parameterized nonlinear eigenvalue problems. Numer. Math., 119(3):489-516, 2011.
[8] D. Bindel and A. Hood. Localization theorems for nonlinear eigenvalue problems. SIAM J. Matrix Anal. Appl., 2013. To appear.
[9] C. de Boor. Divided differences. Surveys in Approximation Theory, 1:46-69, 2005.
[10] A. Bunse-Gerstner, R. Byers, V. Mehrmann, and N. K. Nichols. Numerical computation of an analytic singular value decomposition of a matrix valued function. Numer. Math., 60(1):1-39, 1991.
[11] J. Burke, A. S. Lewis, and M. L. Overton. Optimization and pseudospectra, with applications to robust stability. SIAM J. Matrix Anal. Appl., 25(1):80-104, 2003.
[12] J. W. Demmel and A. Edelman. The dimension of matrices (matrix pencils) with given Jordan (Kronecker) canonical forms. Linear Algebra Appl., 230:61-87, 1995.
[13] C. Effenberger. Robust successive computation of eigenpairs for nonlinear eigenvalue problems. Technical report, MATHICSE, EPFL, July 2012.
[14] C. Effenberger. Robust solution methods for nonlinear eigenvalue problems. PhD thesis, EPF Lausanne, Switzerland, 2013.
[15] F. R. Gantmacher. The Theory of Matrices, volume 1. Chelsea, 1959.
[16] I. Gohberg, M. A. Kaashoek, and F. van Schagen. On the local theory of regular analytic matrix functions. Linear Algebra Appl., 182:9-25, 1993.
[17] I. Gohberg, P. Lancaster, and L. Rodman. Matrix Polynomials. Academic Press, New York, 1982. Computer Science and Applied Mathematics.
[18] N. J. Higham. Functions of Matrices: Theory and Computation. SIAM, 2008.
[19] M. Karow and E. Mengi. Matrix polynomials with specified eigenvalues. arXiv preprint, 2013.
[20] M. V. Keldysh. On the completeness of the eigenfunctions of some classes of non-selfadjoint linear operators. Russian Mathematical Surveys, 26(4):15-44, 1971.
[21] M. Kilic, E. Mengi, and E. A. Yildirim. Numerical optimization of eigenvalues of Hermitian matrix functions. arXiv preprint, 2011.
[22] D. Kressner. A block Newton method for nonlinear eigenvalue problems. Numer. Math., 114(2):355-372, 2009.
[23] D. Kressner, E. Mengi, I. Nakić, and N. Truhar. Generalized eigenvalue problems with specified eigenvalues. IMA J. Numer. Anal., 2013. To appear.
[24] B.-S. Liao, Z. Bai, L.-Q. Lee, and K. Ko. Nonlinear Rayleigh-Ritz iterative method for solving large scale nonlinear eigenvalue problems. Taiwanese J. Math., 14(3A):869-883, 2010.
[25] A. N. Malyshev. A formula for the 2-norm distance from a matrix to the set of matrices with multiple eigenvalues. Numer. Math., 83(3):443-454, 1999.
[26] V. Mehrmann and H. Voss. Nonlinear eigenvalue problems: A challenge for modern eigenvalue methods. GAMM Mitteilungen, 27(2):121-152, 2004.
[27] R. Mennicken and M. Möller. Non-Self-Adjoint Boundary Eigenvalue Problems, volume 192 of North-Holland Mathematics Studies. North-Holland Publishing Co., Amsterdam, 2003.

[28] W. Michiels, K. Green, T. Wagenknecht, and S.-I. Niculescu. Pseudospectra and stability radii for analytic matrix functions with application to time-delay systems. Linear Algebra Appl., 418(1):315-335, 2006.
[29] A. Ruhe. Algorithms for the nonlinear eigenvalue problem. SIAM J. Numer. Anal., 10(4):674-689, 1973.
[30] F. Tisseur and N. J. Higham. Structured pseudospectra for polynomial eigenvalue problems, with applications. SIAM J. Matrix Anal. Appl., 23(1):187-208, 2001.
[31] L. N. Trefethen and M. Embree. Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators. Princeton University Press, Princeton, NJ, 2005.
[32] H. Voss. Nonlinear eigenvalue problems. In L. Hogben, editor, Handbook of Linear Algebra. Chapman & Hall/CRC, FL, 2013. To appear.
[33] T. Wagenknecht, W. Michiels, and K. Green. Structured pseudospectra for nonlinear eigenvalue problems. J. Comput. Appl. Math., 212(2):245-259, 2008.


Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Lecture 2: Computing functions of dense matrices

Lecture 2: Computing functions of dense matrices Lecture 2: Computing functions of dense matrices Paola Boito and Federico Poloni Università di Pisa Pisa - Hokkaido - Roma2 Summer School Pisa, August 27 - September 8, 2018 Introduction In this lecture

More information

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the

More information

On the computation of the Jordan canonical form of regular matrix polynomials

On the computation of the Jordan canonical form of regular matrix polynomials On the computation of the Jordan canonical form of regular matrix polynomials G Kalogeropoulos, P Psarrakos 2 and N Karcanias 3 Dedicated to Professor Peter Lancaster on the occasion of his 75th birthday

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

Inverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems

Inverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems Journal of Informatics Mathematical Sciences Volume 1 (2009), Numbers 2 & 3, pp. 91 97 RGN Publications (Invited paper) Inverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

On Pseudospectra of Matrix Polynomials and their Boundaries

On Pseudospectra of Matrix Polynomials and their Boundaries On Pseudospectra of Matrix Polynomials and their Boundaries Lyonell Boulton, Department of Mathematics, Heriot-Watt University, Edinburgh EH14 2AS, United Kingdom. Peter Lancaster, Department of Mathematics

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Generic rank-k perturbations of structured matrices

Generic rank-k perturbations of structured matrices Generic rank-k perturbations of structured matrices Leonhard Batzke, Christian Mehl, André C. M. Ran and Leiba Rodman Abstract. This paper deals with the effect of generic but structured low rank perturbations

More information

Research Matters. February 25, The Nonlinear Eigenvalue. Director of Research School of Mathematics

Research Matters. February 25, The Nonlinear Eigenvalue. Director of Research School of Mathematics Research Matters February 25, 2009 The Nonlinear Eigenvalue Nick Problem: HighamPart II Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester Woudschoten

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Control Systems. Linear Algebra topics. L. Lanari

Control Systems. Linear Algebra topics. L. Lanari Control Systems Linear Algebra topics L Lanari outline basic facts about matrices eigenvalues - eigenvectors - characteristic polynomial - algebraic multiplicity eigenvalues invariance under similarity

More information

Componentwise perturbation analysis for matrix inversion or the solution of linear systems leads to the Bauer-Skeel condition number ([2], [13])

Componentwise perturbation analysis for matrix inversion or the solution of linear systems leads to the Bauer-Skeel condition number ([2], [13]) SIAM Review 4():02 2, 999 ILL-CONDITIONED MATRICES ARE COMPONENTWISE NEAR TO SINGULARITY SIEGFRIED M. RUMP Abstract. For a square matrix normed to, the normwise distance to singularity is well known to

More information

Eigopt : Software for Eigenvalue Optimization

Eigopt : Software for Eigenvalue Optimization Eigopt : Software for Eigenvalue Optimization Emre Mengi E. Alper Yıldırım Mustafa Kılıç August 8, 2014 1 Problem Definition These Matlab routines are implemented for the global optimization of a particular

More information

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics Research Matters February 25, 2009 The Nonlinear Eigenvalue Problem Nick Higham Part III Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester

More information

An Algorithm for. Nick Higham. Françoise Tisseur. Director of Research School of Mathematics.

An Algorithm for. Nick Higham. Françoise Tisseur. Director of Research School of Mathematics. An Algorithm for the Research Complete Matters Solution of Quadratic February Eigenvalue 25, 2009 Problems Nick Higham Françoise Tisseur Director of Research School of Mathematics The School University

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

LOCALIZATION THEOREMS FOR NONLINEAR EIGENVALUE PROBLEMS

LOCALIZATION THEOREMS FOR NONLINEAR EIGENVALUE PROBLEMS LOCALIZATION THEOREMS FOR NONLINEAR EIGENVALUE PROBLEMS DAVID BINDEL AND AMANDA HOOD Abstract. Let T : Ω C n n be a matrix-valued function that is analytic on some simplyconnected domain Ω C. A point λ

More information

The Cayley-Hamilton Theorem and the Jordan Decomposition

The Cayley-Hamilton Theorem and the Jordan Decomposition LECTURE 19 The Cayley-Hamilton Theorem and the Jordan Decomposition Let me begin by summarizing the main results of the last lecture Suppose T is a endomorphism of a vector space V Then T has a minimal

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

A proof of the Jordan normal form theorem

A proof of the Jordan normal form theorem A proof of the Jordan normal form theorem Jordan normal form theorem states that any matrix is similar to a blockdiagonal matrix with Jordan blocks on the diagonal. To prove it, we first reformulate it

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY

QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY D.D. Olesky 1 Department of Computer Science University of Victoria Victoria, B.C. V8W 3P6 Michael Tsatsomeros Department of Mathematics

More information

Solving the Polynomial Eigenvalue Problem by Linearization

Solving the Polynomial Eigenvalue Problem by Linearization Solving the Polynomial Eigenvalue Problem by Linearization Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/ Joint work with Ren-Cang Li,

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

On families of anticommuting matrices

On families of anticommuting matrices On families of anticommuting matrices Pavel Hrubeš December 18, 214 Abstract Let e 1,..., e k be complex n n matrices such that e ie j = e je i whenever i j. We conjecture that rk(e 2 1) + rk(e 2 2) +

More information

Computing Unstructured and Structured Polynomial Pseudospectrum Approximations

Computing Unstructured and Structured Polynomial Pseudospectrum Approximations Computing Unstructured and Structured Polynomial Pseudospectrum Approximations Silvia Noschese 1 and Lothar Reichel 2 1 Dipartimento di Matematica, SAPIENZA Università di Roma, P.le Aldo Moro 5, 185 Roma,

More information

Lecture 8 : Eigenvalues and Eigenvectors

Lecture 8 : Eigenvalues and Eigenvectors CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION Harald K. Wimmer 1 The set of all negative-semidefinite solutions of the CARE A X + XA + XBB X C C = 0 is homeomorphic

More information

Eigenvalue and Eigenvector Problems

Eigenvalue and Eigenvector Problems Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems

More information

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices

More information

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic December 15, 2012 EVENUAL PROPERIES OF MARICES LESLIE HOGBEN AND ULRICA WILSON Abstract. An eventual property of a matrix M C n n is a property that holds for all powers M k, k k 0, for some positive integer

More information

Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition

Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition Kyle Curlett Maribel Bueno Cachadina, Advisor March, 2012 Department of Mathematics Abstract Strong linearizations of a matrix

More information