Distance bounds for prescribed multiple eigenvalues of matrix polynomials


Panayiotis J. Psarrakos

February 5

Abstract

In this paper, motivated by a problem posed by Wilkinson, we study the perturbations of the coefficients of a (square) matrix polynomial that produce a matrix polynomial with a prescribed eigenvalue of specified algebraic multiplicity and index of annihilation. For an n×n matrix polynomial P(λ) and a given scalar µ ∈ C, we introduce two weighted spectral norm distances, E_r(µ) and E_{r,k}(µ), from P(λ) to the n×n matrix polynomials that have µ as an eigenvalue of algebraic multiplicity at least r, and to those that have µ as an eigenvalue of algebraic multiplicity at least r and maximum Jordan chain length (exactly) k, respectively. Then we obtain a lower bound for E_{r,k}(µ), and derive an upper bound for E_r(µ) by constructing an associated perturbation of P(λ).

Keywords: matrix polynomial, eigenvalue, algebraic multiplicity, index of annihilation, perturbation, singular value, singular vector.

AMS Subject Classifications: 15A18, 15A22, 65F15, 65F35.

1 Introduction and preliminaries

Wilkinson's problem [14, 15] concerns computing the spectral norm distance (known as Wilkinson's distance) from an n×n matrix with n distinct eigenvalues to the set of n×n matrices having multiple eigenvalues, and has a strong connection to the ill-conditioning of eigenvalue problems. Malyshev [9] provided a solution to Wilkinson's problem by obtaining a singular value characterization of Wilkinson's distance. Recently, Mengi [11] (extending Malyshev's methodology) derived a singular value optimization characterization for the smallest perturbation to a matrix that has an eigenvalue of specified algebraic multiplicity. Papathanasiou and Psarrakos [13] studied the case of polynomial eigenvalue problems, and applied Malyshev's technique to derive lower and upper bounds for a weighted distance from a given n×n matrix polynomial to the n×n matrix polynomials that have a prescribed multiple eigenvalue.

Motivated by the above, we consider an n×n matrix polynomial

$$P(\lambda) = A_m \lambda^m + A_{m-1}\lambda^{m-1} + \cdots + A_1 \lambda + A_0, \tag{1}$$

where λ is a complex variable and A_j ∈ C^{n×n} (j = 0, 1, ..., m) with A_m ≠ 0. We also assume that P(λ) is regular, that is, the determinant det P(λ) is not identically zero.

Department of Mathematics, National Technical University of Athens, Zografou Campus, 15780 Athens, Greece (ppsarr@math.ntua.gr).
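In the numerical experiments of Section 4, P(λ) is manipulated through its coefficients. As a minimal illustration (our own sketch, not code from the paper), P(λ) can be stored in MATLAB as a cell array of coefficients, from which the derivatives P^{(j)}(µ) used throughout the paper are evaluated directly:

    % Sketch: represent P(lambda) by the cell array A = {A0, A1, ..., Am} and
    % evaluate P^(j)(mu) = sum_{l=j}^{m} l(l-1)...(l-j+1) * A_l * mu^(l-j).
    function Pj = polyder_eval(A, mu, j)
        n  = size(A{1}, 1);
        m  = numel(A) - 1;           % degree of P(lambda)
        Pj = zeros(n);
        for l = j:m
            c  = prod(l-j+1:l);      % falling factorial; equals 1 when j = 0
            Pj = Pj + c * A{l+1} * mu^(l-j);
        end
    end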

The study of matrix polynomials, especially with regard to their spectral analysis, has a long history and important applications; see [4, 8, 10] and the references therein. A scalar λ_0 ∈ C is called an eigenvalue of P(λ) if the system P(λ_0)x_0 = 0 has a nonzero solution x_0 ∈ C^n. This solution x_0 is known as a (right) eigenvector of P(λ) corresponding to λ_0. The set of all eigenvalues of P(λ),

σ(P) = {λ ∈ C : det P(λ) = 0},

is the spectrum of P(λ), and since P(λ) is regular, it contains no more than nm finite elements. The algebraic multiplicity of a λ_0 ∈ σ(P) is the multiplicity of λ_0 as a zero of the (scalar) polynomial det P(λ), and it is always greater than or equal to the geometric multiplicity of λ_0, that is, the dimension of the null space of the matrix P(λ_0). An eigenvalue of P(λ) is called semisimple if its algebraic and geometric multiplicities are equal, and it is called defective otherwise.

Suppose that for an eigenvalue λ_0 ∈ σ(P), there exist x_0, x_1, ..., x_k ∈ C^n, with x_0 ≠ 0, such that

$$\sum_{j=0}^{\xi} \frac{1}{j!}\, P^{(j)}(\lambda_0)\, x_{\xi-j} = 0; \qquad \xi = 0, 1, \ldots, k, \tag{2}$$

where P^{(j)}(λ) denotes the jth derivative of P(λ) and k+1 cannot exceed the algebraic multiplicity of λ_0. Then the vector x_0 is clearly an eigenvector of λ_0, and the vectors x_1, x_2, ..., x_k are known as generalized eigenvectors. The set {x_0, x_1, ..., x_k} is called a Jordan chain of length k+1 of P(λ) corresponding to the eigenvalue λ_0. Moreover, it is apparent that any set {x_0, x_1, ..., x_ξ}, ξ = 0, 1, ..., k, is also a Jordan chain of P(λ) corresponding to λ_0. Any eigenvalue of P(λ) of geometric multiplicity p has p maximal Jordan chains associated with p linearly independent eigenvectors, with the total number of eigenvectors and generalized eigenvectors equal to the algebraic multiplicity of this eigenvalue. The largest length of Jordan chains of P(λ) corresponding to λ_0 ∈ σ(P) is known as the index of annihilation of λ_0 [7]. This index coincides with the size of the largest Jordan blocks of the Jordan canonical form of P(λ) corresponding to λ_0, and it is equal to 1 if and only if the eigenvalue λ_0 is semisimple; for details on the Jordan structure of matrix polynomials, see [4, 8].

We are interested in (additive) perturbations of the matrix polynomial P(λ) of the form

$$Q(\lambda) = P(\lambda) + \Delta(\lambda) = \sum_{j=0}^{m} (A_j + \Delta_j)\lambda^j, \tag{3}$$

where the matrices Δ_j ∈ C^{n×n} (j = 0, 1, ..., m) are arbitrary. For a given parameter ε > 0 and a given set of nonnegative weights w = {w_0, w_1, ..., w_m}, with w_0 > 0, we define the class of admissible perturbed matrix polynomials

$$B(P, \varepsilon, w) = \{\, Q(\lambda) \ \text{as in (3)} : \|\Delta_j\| \le \varepsilon\, w_j, \; j = 0, 1, \ldots, m \,\},$$

where ‖·‖ denotes the spectral matrix norm (i.e., the norm subordinate to the Euclidean vector norm). We also consider the associated scalar polynomial w(λ) = w_m λ^m + ⋯ + w_1 λ + w_0. The weights w_0, w_1, ..., w_m allow freedom in how perturbations are measured. Moreover, B(P, ε, w) is convex and compact with respect to the max norm ‖P(λ)‖ = max_{0≤j≤m} ‖A_j‖ [1].
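As a small illustration of (2) (again a sketch of ours, with hypothetical names), a candidate Jordan chain can be tested numerically with the helper above:

    % Sketch: test whether the columns x_0, ..., x_k of X form a Jordan chain of
    % P(lambda) at lambda0, i.e., whether the relations (2) hold up to a tolerance.
    function ok = is_jordan_chain(A, lambda0, X, tol)
        k  = size(X, 2) - 1;
        ok = norm(X(:, 1)) > 0;             % the eigenvector x_0 must be nonzero
        for xi = 0:k
            s = zeros(size(X, 1), 1);
            for j = 0:xi                    % the sum in (2)
                s = s + polyder_eval(A, lambda0, j) * X(:, xi - j + 1) / factorial(j);
            end
            ok = ok && (norm(s) <= tol);
        end
    end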

Now we can introduce weighted distances from P(λ) to the matrix polynomials that have a prescribed eigenvalue of algebraic multiplicity at least r, and to those that have a prescribed eigenvalue of algebraic multiplicity at least r and index of annihilation (exactly) k.

Definition 1.1. For the matrix polynomial P(λ) in (1) and a given µ ∈ C, we define the distance from P(λ) to µ as an eigenvalue of algebraic multiplicity at least r by

E_r(µ) = inf{ε ≥ 0 : there exists Q(λ) ∈ B(P, ε, w) with µ as an eigenvalue of algebraic multiplicity at least r},

and the distance from P(λ) to µ as an eigenvalue of algebraic multiplicity at least r and index of annihilation k by

E_{r,k}(µ) = inf{ε ≥ 0 : there exists Q(λ) ∈ B(P, ε, w) with µ as an eigenvalue of algebraic multiplicity at least r and index of annihilation k}.

Notice that we have the identity E_r(µ) = min_{k=1,2,...,mn} E_{r,k}(µ). Here, we allow values of k greater than r because the optimal perturbed matrix polynomial Q(λ) may have µ as an eigenvalue of (both) algebraic multiplicity and index of annihilation greater than r.

The singular values of an n×n complex matrix A, i.e., the nonnegative square roots of the eigenvalues of A*A, are denoted by s_1(A) ≥ s_2(A) ≥ ⋯ ≥ s_n(A). It is apparent that for the matrix polynomial P(λ), σ(P) = {λ ∈ C : s_n(P(λ)) = 0}, and a scalar λ_0 ∈ C is an eigenvalue of P(λ) of geometric multiplicity exactly p if and only if

s_1(P(λ_0)) ≥ ⋯ ≥ s_{n−p}(P(λ_0)) > s_{n−p+1}(P(λ_0)) = ⋯ = s_n(P(λ_0)) = 0.

If P(λ) = Iλ − A for some A ∈ C^{n×n}, then σ(P) coincides with the standard spectrum of A, σ(A), and if in addition w = {w_0, w_1} = {1, 0}, then B(P, ε, w) = {Iλ − (A + E) : ‖E‖ ≤ ε}. In this case, Malyshev [9] (inspired by [12]) has proved that

$$E_2(\mu) = \sup_{\gamma > 0}\; s_{2n-1}\!\left( \begin{bmatrix} I\mu - A & \gamma I \\ 0 & I\mu - A \end{bmatrix} \right).$$

Extending Malyshev's methodology, Ikramov and Nazari [6] (for r = 3), and Mengi [11] have obtained (always for P(λ) = Iλ − A and w = {w_0, w_1} = {1, 0}, i.e., for constant matrices)

$$E_r(\mu) = \sup_{\gamma_{i,j} \in \mathbb{C}\setminus\{0\}}\; s_{rn-r+1}\!\left( \begin{bmatrix} I\mu - A & & & & \\ \gamma_{2,1} I & I\mu - A & & & \\ \gamma_{3,1} I & \gamma_{3,2} I & I\mu - A & & \\ \vdots & \vdots & \ddots & \ddots & \\ \gamma_{r,1} I & \gamma_{r,2} I & \gamma_{r,3} I & \cdots & I\mu - A \end{bmatrix} \right). \tag{4}$$

In this work, our goal is to derive a first estimate of the distances E_{r,k}(µ) and E_r(µ) for matrix polynomials. Unfortunately, in the case of matrix polynomials, Malyshev's technique leads to lower and upper bounds for the distance E_2(µ) and not to its exact value [13]. Hence, to be able to exploit the definition of Jordan chains of matrix polynomials in (2) and to avoid multivariable optimization problems, we consider block Toeplitz matrices of higher order with only one real parameter γ > 0 and its powers, instead of the r(r−1)/2 complex parameters γ_{2,1}, γ_{3,1}, γ_{3,2}, ..., γ_{r,r−1} in (4) (see Definition 2.1 below and the discussion before Theorem 2.4). Generalizing results of [13], in Section 2, we obtain a lower bound for E_{r,k}(µ), and in Section 3, we construct an upper bound for E_r(µ) and an associated perturbation of P(λ). Simple numerical examples are given in Section 4 to illustrate our results.

2 A lower bound for the distance E_{r,k}(µ)

By Theorem 4 of [13], we know that if both the required algebraic and geometric multiplicities of µ are equal to r (i.e., the index of annihilation is equal to 1), then E_{r,1}(µ) ≥ s_{n−r+1}(P(µ))/w(|µ|). Hence, it follows that

$$E_1(\mu) = \frac{s_n(P(\mu))}{w(|\mu|)} \;\le\; E_r(\mu) \;\le\; \frac{s_{n-r+1}(P(\mu))}{w(|\mu|)} \;\le\; E_{r,1}(\mu).$$

So, in the special case s_n(P(µ)) = s_{n−1}(P(µ)) = ⋯ = s_{n−r+1}(P(µ)), it is clear that E_r(µ) = s_{n−r+1}(P(µ))/w(|µ|), and an optimal perturbation of P(λ) is given by [13, formula (3)]. Thus, in what follows, we assume that s_n(P(µ)) < s_{n−r+1}(P(µ)) and consider perturbations of P(λ) such that the perturbed matrix polynomial has µ as a defective eigenvalue (i.e., with k ≥ 2) of algebraic multiplicity at least r. The next definition is necessary for the remainder.

Definition 2.1. For the matrix polynomial P(λ) in (1), a positive integer k and a scalar γ ∈ C, we define the kn×kn matrix polynomial

$$F_k[P(\lambda); \gamma] \;=\; \begin{bmatrix} P(\lambda) & & & & \\ \gamma P^{(1)}(\lambda) & P(\lambda) & & & \\ \frac{\gamma^2}{2!} P^{(2)}(\lambda) & \gamma P^{(1)}(\lambda) & P(\lambda) & & \\ \vdots & \ddots & \ddots & \ddots & \\ \frac{\gamma^{k-1}}{(k-1)!} P^{(k-1)}(\lambda) & \cdots & \frac{\gamma^2}{2!} P^{(2)}(\lambda) & \gamma P^{(1)}(\lambda) & P(\lambda) \end{bmatrix}.$$

We observe that a scalar λ_0 ∈ C is an eigenvalue of P(λ) if and only if it is an eigenvalue of F_k[P(λ); γ]. Furthermore, if λ_0 is an eigenvalue of P(λ) of algebraic multiplicity r and index of annihilation k, then for every γ ≠ 0, the null space of the matrix F_k[P(λ_0); γ] has dimension at least r (for γ = 1, see [5, Lemma 5] and [7]).

Lemma 2.2. Suppose λ_0 ∈ C is an eigenvalue of P(λ) of algebraic multiplicity at least r and index of annihilation k. Then for any nonzero γ ∈ C, we have s_{kn−r+1}(F_k[P(λ_0); γ]) = 0.

Proof. By hypothesis, we may assume that P(λ) has p Jordan chains corresponding to λ_0, namely,

{x_{1,0}, x_{1,1}, ..., x_{1,k_1}}, {x_{2,0}, x_{2,1}, ..., x_{2,k_2}}, ..., {x_{p,0}, x_{p,1}, ..., x_{p,k_p}},

with x_{1,0}, x_{2,0}, ..., x_{p,0} linearly independent eigenvectors, k − 1 = k_1 ≥ k_2 ≥ ⋯ ≥ k_p, and (k_1 + 1) + (k_2 + 1) + ⋯ + (k_p + 1) = r. Notice that the first Jordan chain, {x_{1,0}, x_{1,1}, ..., x_{1,k_1}}, is necessarily maximal. Then, recalling (2), we see that the r vectors

$$\begin{bmatrix} 0 \\ \vdots \\ 0 \\ x_{i,0} \\ \gamma x_{i,1} \\ \vdots \\ \gamma^{\xi} x_{i,\xi} \end{bmatrix} \in \mathbb{C}^{kn}; \qquad \xi = 0, 1, \ldots, k_i, \quad i = 1, 2, \ldots, p \tag{5}$$

(where the first k − 1 − ξ blocks are zero) satisfy

$$F_k[P(\lambda_0); \gamma] \begin{bmatrix} 0 \\ \vdots \\ 0 \\ x_{i,0} \\ \gamma x_{i,1} \\ \vdots \\ \gamma^{\xi} x_{i,\xi} \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ P(\lambda_0)\, x_{i,0} \\ \gamma \sum_{j=0}^{1} \frac{1}{j!} P^{(j)}(\lambda_0)\, x_{i,1-j} \\ \vdots \\ \gamma^{\xi} \sum_{j=0}^{\xi} \frac{1}{j!} P^{(j)}(\lambda_0)\, x_{i,\xi-j} \end{bmatrix} = 0$$

for all i = 1, 2, ..., p and ξ = 0, 1, ..., k_i. Hence, they lie in the null space of the matrix F_k[P(λ_0); γ]. Moreover, one can verify that the vectors in (5) are linearly independent, keeping in mind their block form and the linear independence of the eigenvectors x_{1,0}, x_{2,0}, ..., x_{p,0} ∈ C^n. As a consequence, the dimension of the null space of F_k[P(λ_0); γ] is greater than or equal to r (i.e., 0 is an eigenvalue of the matrix F_k[P(λ_0); γ] of geometric multiplicity at least r), and thus, s_{kn−r+1}(F_k[P(λ_0); γ]) = 0. □

By this lemma, if a scalar µ ∈ C is a multiple eigenvalue of a perturbed matrix polynomial Q(λ) = P(λ) + Δ(λ) (as in (3)) of algebraic multiplicity at least r and index of annihilation k, then for any nonzero γ, µ is an eigenvalue of the kn×kn matrix polynomial F_k[Q(λ); γ] of geometric multiplicity at least r. This observation and the discussion in Section 3 of [13] readily yield the following lemma.

Lemma 2.3. If µ ∈ C is an eigenvalue of a matrix polynomial Q(λ) = P(λ) + Δ(λ) of algebraic multiplicity at least r and index of annihilation k, then for every γ ≠ 0,

s_{kn−r+1}(F_k[P(µ); γ]) ≤ ‖F_k[Δ(µ); γ]‖.

It is straightforward to verify that u = [u_1; u_2; ⋯; u_k], v = [v_1; v_2; ⋯; v_k] ∈ C^{kn} (u_j, v_j ∈ C^n, j = 1, 2, ..., k) is a pair of left and right singular vectors corresponding to a singular value of F_k[P(µ); γ] (γ ≠ 0) if and only if

$$\begin{bmatrix} u_1 \\ (\bar\gamma/|\gamma|)\, u_2 \\ \vdots \\ (\bar\gamma/|\gamma|)^{k-1} u_k \end{bmatrix}, \qquad \begin{bmatrix} v_1 \\ (\bar\gamma/|\gamma|)\, v_2 \\ \vdots \\ (\bar\gamma/|\gamma|)^{k-1} v_k \end{bmatrix}$$

is a pair of left and right singular vectors of F_k[P(µ); |γ|] corresponding to the same singular value. Hence, for convenience, and without loss of generality, from this point and in the remainder of the paper, we assume that the parameter γ is real positive.

Theorem 2.4. Suppose µ ∈ C is an eigenvalue of a perturbed matrix polynomial Q(λ) = P(λ) + Δ(λ) ∈ B(P, ε, w) of algebraic multiplicity at least r and index of annihilation k. Then for every γ > 0,

$$\varepsilon \;\ge\; \frac{\|F_k[\Delta(\mu); \gamma]\|}{\|F_k[w(|\mu|); \gamma]\|} \;\ge\; \frac{s_{kn-r+1}(F_k[P(\mu); \gamma])}{\|F_k[w(|\mu|); \gamma]\|},$$

where F_k[w(|µ|); γ] denotes the k×k matrix obtained by applying Definition 2.1 to the scalar polynomial w(λ), evaluated at |µ|.

Proof. For the matrix polynomial Δ(λ) and its derivatives, we have

$$\|\Delta(\mu)\| \le \sum_{j=0}^{m} \|\Delta_j\|\, |\mu|^j \le \varepsilon\, w(|\mu|)$$

and

$$\|\Delta^{(i)}(\mu)\| \le \sum_{j=i}^{m} j(j-1)\cdots(j-i+1)\, \|\Delta_j\|\, |\mu|^{j-i} \le \varepsilon\, w^{(i)}(|\mu|); \qquad i = 1, 2, \ldots, r.$$

Thus, for any γ > 0, there is a unit vector ŷ = [y_1; y_2; ⋯; y_k] ∈ C^{kn}, with y_1, y_2, ..., y_k ∈ C^n, such that

$$\|F_k[\Delta(\mu); \gamma]\| = \|F_k[\Delta(\mu); \gamma]\, \hat y\| = \left\| \begin{bmatrix} \Delta(\mu)\, y_1 \\ \gamma \Delta^{(1)}(\mu)\, y_1 + \Delta(\mu)\, y_2 \\ \vdots \\ \sum_{i=0}^{k-1} \frac{\gamma^i}{i!} \Delta^{(i)}(\mu)\, y_{k-i} \end{bmatrix} \right\| \;\le\; \left\| \begin{bmatrix} \varepsilon\, w(|\mu|)\, \|y_1\| \\ \gamma \varepsilon\, w^{(1)}(|\mu|)\, \|y_1\| + \varepsilon\, w(|\mu|)\, \|y_2\| \\ \vdots \\ \sum_{i=0}^{k-1} \frac{\gamma^i}{i!} \varepsilon\, w^{(i)}(|\mu|)\, \|y_{k-i}\| \end{bmatrix} \right\| \;\le\; \varepsilon\, \|F_k[w(|\mu|); \gamma]\|.$$

The proof is completed by Lemma 2.3. □

As mentioned before, for k = 1, E_{r,1}(µ) ≥ s_{n−r+1}(P(µ))/w(|µ|) [13, Theorem 4].

Corollary 2.5. For any γ > 0, the inequalities

$$E_{r,k}(\mu) \;\ge\; \frac{s_{kn-r+1}(F_k[P(\mu); \gamma])}{\|F_k[w(|\mu|); \gamma]\|}; \qquad k = 1, 2, \ldots, r,$$

and

$$E_r(\mu) \;\ge\; \min_{k=1,2,\ldots,mn}\; \frac{s_{kn-r+1}(F_k[P(\mu); \gamma])}{\|F_k[w(|\mu|); \gamma]\|}$$

hold.

Let us denote by η(r, k) the smallest integer that is greater than or equal to n − (r − 1)/k. One can see that, as γ → 0⁺,

$$\frac{s_{kn-r+1}(F_k[P(\mu); \gamma])}{\|F_k[w(|\mu|); \gamma]\|} \;\longrightarrow\; \frac{s_{\eta(r,k)}(P(\mu))}{w(|\mu|)} \;\le\; E_{n-\eta(r,k)+1,\,1}(\mu).$$

3 An upper bound for the distance E_r(µ)

The technique applied in the proof of the corresponding theorem of [13] can be extended to derive an upper bound for the distance E_r(µ) from P(λ) in (1) to the n×n matrix polynomials that have µ ∈ C as an eigenvalue of algebraic multiplicity at least r. The following definition is necessary.

Definition 3.1. For any γ > 0 and r ∈ {2, 3, ..., n}, let

u(γ) = [u_1(γ); u_2(γ); ⋯; u_r(γ)], v(γ) = [v_1(γ); v_2(γ); ⋯; v_r(γ)] ∈ C^{rn}

(u_j(γ), v_j(γ) ∈ C^n, j = 1, 2, ..., r) be a pair of left and right singular vectors of s_{rn−r+1}(F_r[P(µ); γ]), respectively. Then we define the n×r matrices

U(γ) = [u_1(γ) u_2(γ) ⋯ u_r(γ)] and V(γ) = [v_1(γ) v_2(γ) ⋯ v_r(γ)].
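For computations, both F_k[P(µ); γ] and the k×k matrix F_k[w(|µ|); γ] are easy to assemble from the coefficients of P(λ) and w(λ). The following sketch (function names ours; polyder_eval as above; wts holds w_m, ..., w_1, w_0 in MATLAB's descending-coefficient order) evaluates the lower bound of Corollary 2.5 for one value of γ:

    % Sketch: the block lower triangular Toeplitz matrix F_k[P(mu); gamma] of
    % Definition 2.1, and the lower bound of Corollary 2.5 for given r, k, gamma.
    function F = build_Fk(A, mu, k, gamma)
        n = size(A{1}, 1);
        F = zeros(k*n);
        for d = 0:k-1                          % d-th block subdiagonal of F_k
            blk = gamma^d / factorial(d) * polyder_eval(A, mu, d);
            for i = d+1:k
                F((i-1)*n+1:i*n, (i-d-1)*n+1:(i-d)*n) = blk;
            end
        end
    end

    function c = polyder_n(c, d)               % coefficients of the d-th derivative
        for t = 1:d, c = polyder(c); end
    end

    function lb = lower_bound(A, wts, mu, r, k, gamma)
        n  = size(A{1}, 1);
        s  = svd(build_Fk(A, mu, k, gamma));   % singular values, descending
        Fw = zeros(k);                         % F_k[w(|mu|); gamma], scalar case
        for d = 0:k-1
            wd = polyval(polyder_n(wts, d), abs(mu));
            Fw = Fw + diag(gamma^d * wd / factorial(d) * ones(k-d, 1), -d);
        end
        lb = s(k*n - r + 1) / norm(Fw);
    end

The best lower bound is then the maximum over γ > 0, for instance on a grid: max(arrayfun(@(g) lower_bound(A, wts, mu, r, k, g), linspace(1e-2, 5, 500))).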

8 For any γ > with rank(v (γ)) = r {, 3,n}, we will construct a perturbation γ (λ) such that the perturbed matrix polynomial Q γ (λ) = P(λ) + γ (λ) has µ as a defective eigenvalue with an associated (not necessarily maximal) Jordan chain of length r First we define the quantities φ i = w(i) ( µ ) (i!) ( ) µ i ; i =,,,r, µ setting µ/ µ = whenever µ =, and recalling that w > We also consider the r r upper triangular Toeplitz matrix γφ γ (φ φ ) γ 3 (φ φ φ 3 φ 3 ) γφ γ (φ φ ) Θ γ = [θ i,j ] = γφ, whose entries above the main diagonal are given by the recursive formulae θ i,j = θ i,i γ j i φ j i θ i,i+ γ j (i+) φ j (i+) θ i,j γφ ; i < j r (6) Denoting by V (γ) the Moore-Penrose pseudoinverse of V (γ), we consider the n n matrix γ = s rn r+ (F r [P(µ); γ])u(γ)θ γ V (γ), and define the matrices γ,j = and the matrix polynomial w ( ) j µ j γ ; j =,,,m µ γ (λ) = m γ,j λ j j= We observe that γ (µ) = γ, and for i =,,,r, (i) γ (µ) = m ( ) w j µ j j (j ) (j i + ) γ µ j i µ j=i = γ = γ w (i) ( µ ) m ( µ j (j ) (j i + )w j µ j=i ( ) µ i = (i!)φ i γ µ ) i ( ) µ j i µ j i µ Since u(γ), v(γ) C rn are a left and a right singular vector of s rn r+ (F r [P(µ); γ]), respectively, it holds that F r [P(µ); γ] v(γ) = s rn r+ (F r [P(µ); γ])u(γ) (7) 8
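In this reading, Θ_γ is the inverse of the upper triangular Toeplitz matrix with first row (φ_0, γφ_1, ..., γ^{r−1}φ_{r−1}), and (6) computes that inverse one entry at a time. A sketch of φ_i and Θ_γ (names ours; polyder_n as in the previous sketch):

    % Sketch: the quantities phi_i of the text, with the convention mu/|mu| = 1
    % at mu = 0, and Theta_gamma built row-wise from the recursion (6).
    function phi = phi_of(wts, mu, r)
        u = 1;
        if mu ~= 0, u = conj(mu) / abs(mu); end
        phi = zeros(1, r);
        for i = 0:r-1
            phi(i+1) = polyval(polyder_n(wts, i), abs(mu)) * u^i / factorial(i);
        end
    end

    function Theta = theta_gamma(phi, gamma, r)
        theta = zeros(1, r);                   % first row of Theta_gamma
        theta(1) = 1 / phi(1);                 % diagonal entry 1/phi_0
        for j = 2:r
            acc = 0;
            for l = 1:j-1                      % theta_{1,l} gamma^{j-l} phi_{j-l}
                acc = acc + theta(l) * gamma^(j-l) * phi(j-l+1);
            end
            theta(j) = -acc / phi(1);          % as in (6)
        end
        Theta = toeplitz([theta(1); zeros(r-1, 1)], theta);
    end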

Denote also by e_1, e_2, ... the vectors of the standard basis, i.e., the columns of the identity matrix. If rank(V(γ)) = r (∈ {2, 3, ..., n}), or equivalently, if V(γ)†V(γ) = I_r (the r×r identity matrix), then the perturbed matrix polynomial

$$Q_\gamma(\lambda) = P(\lambda) + \Delta_\gamma(\lambda) = \sum_{j=0}^{m} (A_j + \Delta_{\gamma,j})\, \lambda^j \tag{8}$$

satisfies

Q_γ(µ) v_1(γ) = P(µ) v_1(γ) + Δ_γ(µ) v_1(γ)
 = s_{rn−r+1}(F_r[P(µ); γ]) u_1(γ) − s_{rn−r+1}(F_r[P(µ); γ]) φ_0 U(γ) Θ_γ V(γ)† v_1(γ)
 = s_{rn−r+1}(F_r[P(µ); γ]) u_1(γ) − s_{rn−r+1}(F_r[P(µ); γ]) u_1(γ) = 0,

since φ_0 Θ_γ e_1 = e_1. By straightforward computations, recalling (6) and (7), we verify that for every i = 1, 2, ..., r−1,

$$\sum_{j=0}^{i} \frac{\gamma^j}{j!}\, Q_\gamma^{(j)}(\mu)\, v_{i-j+1}(\gamma) = \sum_{j=0}^{i} \frac{\gamma^j}{j!}\, P^{(j)}(\mu)\, v_{i-j+1}(\gamma) + \sum_{j=0}^{i} \gamma^j \varphi_j\, \Delta_\gamma\, v_{i-j+1}(\gamma)$$
$$= s_{rn-r+1}(F_r[P(\mu); \gamma]) \left[ u_{i+1}(\gamma) - U(\gamma)\, \Theta_\gamma \sum_{j=0}^{i} \gamma^j \varphi_j\, e_{i-j+1} \right] = s_{rn-r+1}(F_r[P(\mu); \gamma]) \left[ u_{i+1}(\gamma) - U(\gamma)\, e_{i+1} \right] = 0,$$

where the first sum equals s_{rn−r+1}(F_r[P(µ); γ]) u_{i+1}(γ) by the (i+1)th block row of (7), V(γ)† v_{i−j+1}(γ) = e_{i−j+1}, and, by the recursive formulae (6), Θ_γ (Σ_{j=0}^{i} γ^j φ_j e_{i−j+1}) = e_{i+1}. Dividing by γ^i yields

$$\sum_{j=0}^{i} \frac{1}{j!}\, Q_\gamma^{(j)}(\mu) \left( \gamma^{-(i-j)}\, v_{i-j+1}(\gamma) \right) = 0.$$
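Under the reconstruction above, this construction and the identity Q_γ(µ) v_1(γ) = 0 can be checked numerically. A script-style sketch, assuming A, wts, mu, r and gamma are set and the helpers build_Fk, phi_of, theta_gamma and polyder_eval from the previous sketches are available:

    % Sketch: form U(gamma), V(gamma) and Delta_gamma, and check numerically
    % that v_1(gamma) is an eigenvector of Q_gamma at mu.
    n   = size(A{1}, 1);
    Fr  = build_Fk(A, mu, r, gamma);           % F_r[P(mu); gamma]
    [Uf, Sf, Vf] = svd(Fr);                    % singular values in descending order
    idx = r*n - r + 1;
    s   = Sf(idx, idx);                        % s_{rn-r+1}(F_r[P(mu); gamma])
    U   = reshape(Uf(:, idx), n, r);           % U(gamma) = [u_1(gamma) ... u_r(gamma)]
    V   = reshape(Vf(:, idx), n, r);           % V(gamma) = [v_1(gamma) ... v_r(gamma)]
    assert(rank(V) == r);                      % the rank condition of the text
    Theta = theta_gamma(phi_of(wts, mu, r), gamma, r);
    Delta = -s * U * Theta * pinv(V);          % Delta_gamma
    Qmu   = polyder_eval(A, mu, 0) + polyval(wts, abs(mu)) * Delta;  % Q_gamma(mu)
    norm(Qmu * V(:, 1))                        % ~ 0, so mu is an eigenvalue of Q_gamma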

10 As a consequence, if rank(v (γ)) = r ( {, 3,,n}), then µ is a defective eigenvalue of Q γ (λ) with { v (γ), γ v (γ), γ v 3 (γ),, γ (r ) v r (γ) } as an associated Jordan chain of length r (recall the definition of Jordan chains in ()) Furthermore, we see that γ (µ) = γ = s rn r+ (F r [P(µ); γ]) U(γ)Θ γ V (γ) and γ,j = w j s rn r+ (F r [P(µ); γ]) U(γ)Θ γ V (γ) ; j =,,,r Hence, we have the next result, which is a direct generalization of the second part of Theorem in [3] Theorem 3 Let P(λ) be a matrix polynomial as in (), µ C, and r {, 3,,n} Then for every γ > such that rank(v (γ)) = r, it holds that E r (µ) s rn r+(f r [P(µ); γ]) U(γ)Θ γ V (γ), ( and Q γ (λ) in (8) lies on the boundary of B P, s rn r+(f r[p(µ);γ]) U(γ)Θ γ V (γ) ), w and has µ as a defective eigenvalue with a (not necessarily maximal) Jordan chain of length r We remark that for r 3, it is not easy to find values of γ for which the condition rank(v (γ)) = r is ensured, as it was done in [3] for r = On the other hand, in all our experiments, this rank condition appears to hold generically 4 Numerical examples To illustrate the proposed (lower and upper) bounds and their tightness, we begin with the special case of constant matrices Example 4 Consider the 6 6 smoke matrix that can be generated by the Matlab command gallery( smoke,6), the corresponding linear pencil L(λ) = I 6 λ S, the weights w = {w, w } = {, }, and the scalar µ = 384+i 6767 By [], we know that E 3 (384 + i6767) = E 3,3 (384 + i 6767) = 37 The graphs of our lower bound for E 3,3 (384 + i 6767) (see Corollary 5) and our upper bound for E 3 (384 + i 6767) (see Theorem 3) are depicted in Figure for γ (, 5] For γ = 6748, Corollary 5 implies the lower bound 345 Moreover, for γ = 54, Theorem 3 yields the upper bound 4694 and an associated perturbed matrix S + with = 4694, which has µ = i 6767 as a defective eigenvalue of algebraic multiplicity 3 and geometric multiplicity
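The upper bound of Theorem 3.2 is again a function of a single real parameter γ, so curves like those reported in the figures below can be produced by sweeping γ, and the bound can be minimized with fminbnd as described at the end of Section 4. A sketch under the same assumptions as the previous snippets:

    % Sketch: the upper bound of Theorem 3.2 as a one-variable function of
    % gamma, reusing build_Fk, phi_of and theta_gamma from the earlier sketches.
    function ub = upper_bound(A, wts, mu, r, gamma)
        n   = size(A{1}, 1);
        [Uf, Sf, Vf] = svd(build_Fk(A, mu, r, gamma));
        idx = r*n - r + 1;
        U   = reshape(Uf(:, idx), n, r);       % U(gamma)
        V   = reshape(Vf(:, idx), n, r);       % V(gamma)
        Theta = theta_gamma(phi_of(wts, mu, r), gamma, r);
        ub  = Sf(idx, idx) * norm(U * Theta * pinv(V));   % Theorem 3.2
    end

    % Minimization over gamma with Brent's method, as discussed in Section 4:
    % [gopt, ubopt] = fminbnd(@(g) upper_bound(A, wts, mu, r, g), 1e-3, 5);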

Figure 1: Lower and upper bounds for E_3(384 + i 6767), 0 < γ ≤ 5.

Figure 2: Lower bound for E_{3,3}(5) and upper bound for E_3(5), 0 < γ ≤ 5.

For our second example, we consider a real quadratic matrix polynomial.

Example 4.2. Let P(λ) = A_2 λ² + A_1 λ + A_0 be a real quadratic matrix polynomial, and let w = {w_0, w_1, w_2} = {‖A_0‖, ‖A_1‖, ‖A_2‖} (the norms of the coefficient matrices). For the scalar µ = 5, by an example of [13] (see also Proposition 7 and Theorem 8 in [11]), we know that

E_2(5) = E_{2,1}(5) = E_{2,2}(5). (9)

The graphs of our lower bound for E_{3,3}(5) and our upper bound for E_3(5) are illustrated in Figure 2 for γ ∈ (0, 5]. Setting γ = 553 and γ = 658, we get the lower bound 48 and the upper bound 377, respectively. It is worth noting that these bounds are clearly compatible with (9). Furthermore, Theorem 3.2 yields a perturbed matrix polynomial Q(λ) that lies on the boundary of B(P, 377, w) and has µ = 5 as a defective eigenvalue of algebraic multiplicity 3 and geometric multiplicity 1, with an associated eigenvector x.

Figure 3: Lower bound for E_{3,2}(0) and upper bound for E_3(0), 0 < γ ≤ 5.

The matrix polynomial in our last example is triangular, and hence, we can directly compute a (non-optimal) perturbation with the desired properties.

Example 4.3. Consider a 4×4 upper triangular quadratic matrix polynomial P(λ), with entries such as λ² − 5λ + 4, λ² + λ + 6, λ² + λ − 5, λ² − λ − i8, iλ and λ² − λ + 5, and the weights w = {w_0, w_1, w_2} = {1, 1, 1}. The graphs of the proposed lower bound for E_{3,2}(0) and upper bound for E_3(0) are plotted in Figure 3 for γ ∈ (0, 5]. As γ → 0⁺ and for γ = 87, we get the lower bound 493 for E_{3,2}(0) and the upper bound 6357 for E_3(0), respectively. The exact values of the distances E_{3,2}(0) and E_3(0) are not known, but these bounds are consistent with the fact that a perturbed matrix polynomial R(λ) of the same upper triangular form (with corresponding entries such as λ² − 4λ + 4, λ² + λ + 6, λ² + λ − 6, λ² − λ − i9, iλ and λ² − λ + 6) lies on the boundary of B(P, 1, w) and has µ = 0 as a defective eigenvalue of algebraic multiplicity 3 and index of annihilation 2. Recalling the comment in the last paragraph of Section 2 (and applying the methodology in [13, Section 3]), we also observe that the lower bound 493 coincides with the distance E_{2,1}(0).

Corollary 2.5 and Theorem 3.2 provide an infinite number of lower bounds for E_{r,k}(µ) and upper bounds for E_r(µ), respectively, in such a way that the best lower/upper bound is given by the maximum/minimum of these bounds over all γ > 0. Since these optimization problems are over only one real variable, in the examples above we apply a standard grid search with respect to γ ∈ (0, 5]. The difficulty with this brute-force approach is that it usually requires too many bound evaluations. Alternatively, we can use golden section search [3], or Brent's method [2, Chapter 5]. The latter algorithm is based on the combination of inverse quadratic interpolation and golden section search, and one of its implementations is the Matlab function fminbnd. As is expected, these two derivative-free optimization methods compute at best a local extremum, and there is no guarantee that the global extremum will be found (so, a grid search with a few grid points can be used to initiate an interval containing the global optimum). For example, consider the upper bound of the distance E_3(5) in Example 4.2 (see Figure 2). Applying fminbnd on the interval (0, 1], we get the global minimum 377 at γ = 658 after 8 iterations. Over the interval (0, 5], fminbnd finds the local minimum 4449 at γ = 995 after several iterations.

Acknowledgment. The author is grateful to the anonymous referees for their valuable comments and suggestions.

References

[1] L. Boulton, P. Lancaster and P. Psarrakos, On pseudospectra of matrix polynomials and their boundaries, Math. Comp., 77 (2008), 313-334.

[2] R.P. Brent, Algorithms for Minimization without Derivatives, Prentice-Hall, Englewood Cliffs, New Jersey, 1973.

[3] G. Dahlquist and Å. Björck, Numerical Methods in Scientific Computing, Society for Industrial and Applied Mathematics, Philadelphia, 2008.

[4] I. Gohberg, P. Lancaster and L. Rodman, Matrix Polynomials, Academic Press, New York, 1982.

[5] I. Gohberg and L. Rodman, Analytic matrix functions with prescribed local data, J. Anal. Math., 40 (1981), 90-128.

[6] Kh.D. Ikramov and A.M. Nazari, On the distance to the closest matrix with triple zero eigenvalue, Math. Notes, 73 (2003), 511-520.

[7] G. Kalogeropoulos, P. Psarrakos and N. Karcanias, On the computation of the Jordan canonical form of regular matrix polynomials, Linear Algebra Appl., 385 (2004), 117-130.

[8] P. Lancaster and M. Tismenetsky, The Theory of Matrices, 2nd edition, Academic Press, Orlando, 1985.

[9] A.N. Malyshev, A formula for the 2-norm distance from a matrix to the set of matrices with a multiple eigenvalue, Numer. Math., 83 (1999), 443-454.

[10] A.S. Markus, Introduction to the Spectral Theory of Polynomial Operator Pencils, Amer. Math. Society, Providence, Translations of Math. Monographs, Vol. 71, 1988.

[11] E. Mengi, Locating a nearest matrix with an eigenvalue of prescribed algebraic multiplicity, Numer. Math., 118 (2011), 109-135.

[12] L. Qiu, B. Bernhardsson, A. Rantzer, E.J. Davison, P.M. Young and J.C. Doyle, A formula for computation of the real stability radius, Automatica, 31 (1995), 879-890.

[13] N. Papathanasiou and P. Psarrakos, The distance from a matrix polynomial to matrix polynomials with a prescribed multiple eigenvalue, Linear Algebra Appl., 429 (2008), 1453-1477.

[14] J.H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.

[15] J.H. Wilkinson, Note on matrices with a very ill-conditioned eigenproblem, Numer. Math., 19 (1972), 176-178.
