ETNA Kent State University


Electronic Transactions on Numerical Analysis. Volume 40, pp. 82-93, 2013. Copyright 2013. ETNA.

ON SYLVESTER'S LAW OF INERTIA FOR NONLINEAR EIGENVALUE PROBLEMS

ALEKSANDRA KOSTIĆ AND HEINRICH VOSS

Dedicated to Lothar Reichel on the occasion of his 60th birthday

Abstract. For Hermitian matrices and generalized definite eigenproblems, the LDL^H factorization provides an easy tool to slice the spectrum into two disjoint intervals. In this note we generalize this method to nonlinear eigenvalue problems allowing for a minmax characterization of (some of) their real eigenvalues. In particular, we apply this approach to several classes of quadratic pencils.

Key words. eigenvalue, variational characterization, minmax principle, Sylvester's law of inertia

AMS subject classifications. 15A18, 65F15

1. Introduction. The inertia of a Hermitian matrix A is the triplet of nonnegative integers

    In(A) := (n_p, n_n, n_z),

where n_p, n_n, and n_z are the numbers of positive, negative, and zero eigenvalues of A (counting multiplicities). Sylvester's classical law of inertia states that two Hermitian matrices A, B ∈ C^{n×n} are congruent (i.e., A = S^H B S for some nonsingular matrix S) if and only if they have the same inertia In(A) = In(B).

An obvious consequence of the law of inertia is the following corollary: if A has a factorization A = LDL^H with diagonal D, then n_p and n_n equal the numbers of positive and negative entries of D, and if a block LDL^H factorization exists, where D is a block diagonal matrix with 1×1 and indefinite 2×2 blocks on its diagonal, then one has to increase the numbers of positive and negative 1×1 blocks of D by the number of 2×2 blocks to obtain n_p and n_n, respectively. Hence, the inertia of A can be computed easily. This is particularly advantageous if the matrix is banded.

If B ∈ C^{n×n} is positive definite and A − σB = LDL^H is the block diagonal LDL^H factorization of A − σB for some σ ∈ R, we get the inertia In(A − σB) = (n_p, n_n, n_z) as described in the previous paragraph.
Then the generalized eigenvalue problem Ax = λBx has n_n eigenvalues smaller than σ. Hence, the law of inertia yields a tool to locate eigenvalues of Hermitian matrices or definite matrix pencils. Combining it with bisection or the secant method, one can determine all eigenvalues in a given interval or determine initial approximations for fast eigensolvers, and it can be used to check whether a method has found all eigenvalues in an interval of interest or not.

The law of inertia was first proved in 1852 by J. J. Sylvester [19], and several different proofs can be found in textbooks [3, 6, 11, 13, 15], one of which is based on the minmax characterization of eigenvalues of Hermitian matrices. In this note we discuss generalizations of the law of inertia to nonlinear eigenvalue problems allowing for a minmax characterization of their eigenvalues.

2. Minmax characterization. Our main tools in this paper are variational characterizations of eigenvalues of nonlinear eigenvalue problems generalizing the well known minmax characterization of Poincaré [16] or Courant [2] and Fischer [5] for linear eigenvalue problems.

---
Received October 17; accepted January 29; published online on April 22. Recommended by D. Kressner. Mašinski fakultet Sarajevo, Univerzitet Sarajevo, Bosnia and Herzegovina (kostic@mef.unsa.ba). Institute of Mathematics, Hamburg University of Technology, D-21073 Hamburg, Germany (voss@tu-harburg.de).
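The slicing procedure just described can be sketched in a few lines. The following is a minimal NumPy/SciPy illustration (the matrices and the helper names `inertia` and `count_below` are an illustrative choice, not taken from the paper), in which the inertia is read off the block diagonal factor of an LDL^H factorization and eigenvalue counts over an interval are differences of two such counts:

```python
# Sketch: Sylvester slicing for a definite pencil Ax = λBx (B > 0).
# In(A - σB) is read off a block LDL^H factorization; its n_n entry is the
# number of eigenvalues below σ, so counts over (α, β) are differences.
import numpy as np
from scipy.linalg import ldl, eigh

def inertia(M, tol=1e-12):
    """(n_p, n_n, n_z) of a Hermitian matrix M via block LDL^H."""
    _, D, _ = ldl(M)                 # D is block diagonal (1x1 / 2x2 blocks)
    d = np.linalg.eigvalsh(D)        # D is congruent to M: same sign counts
    return (int((d > tol).sum()), int((d < -tol).sum()),
            int((np.abs(d) <= tol).sum()))

def count_below(A, B, sigma):
    """Number of eigenvalues of Ax = λBx below σ."""
    return inertia(A - sigma * B)[1]

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)); A = (A + A.T) / 2             # Hermitian
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)   # B > 0

alpha, beta = -1.0, 1.0
print(count_below(A, B, beta) - count_below(A, B, alpha))      # count in (α, β)
```

Combined with bisection on σ, `count_below` brackets individual eigenvalues without ever computing them; the nonlinear slicing results discussed in this note apply the same inertia computation to T(σ).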

We consider the nonlinear eigenvalue problem

(2.1)    T(λ)x = 0,

where T(λ) ∈ C^{n×n}, λ ∈ J, is a family of Hermitian matrices depending continuously on the parameter λ ∈ J, and J is a real open interval which may be unbounded. To generalize the variational characterization of eigenvalues, we need a generalization of the Rayleigh quotient. To this end we assume that

(A_1) for every fixed x ∈ C^n, x ≠ 0, the scalar real equation

(2.2)    f(λ; x) := x^H T(λ)x = 0

has at most one solution λ =: p(x) ∈ J.

Then the equation f(λ; x) = 0 implicitly defines a functional p on some subset D ⊂ C^n, which is called the Rayleigh functional of (2.1), and which is exactly the Rayleigh quotient in case of a monic linear matrix function T(λ) = λI − A. Generalizing the definiteness requirement for linear pencils T(λ) = λB − A, we further assume that

(A_2) for every x ∈ D and every λ ∈ J with λ ≠ p(x) it holds that (λ − p(x)) f(λ; x) > 0.

If p is defined on D = C^n \ {0}, then the problem T(λ)x = 0 is called overdamped. This notion is motivated by the finite dimensional quadratic eigenvalue problem

    T(λ)x = λ²Mx + λCx + Kx = 0,

where M, C, and K are Hermitian and positive definite matrices. If C is large enough such that d(x) := (x^H Cx)² − 4(x^H Kx)(x^H Mx) > 0 for every x ≠ 0, then T(·) is overdamped. Generalizations of the minmax and maxmin characterizations of eigenvalues were proved by Duffin [4] for the quadratic case and by Rogers [17] for general overdamped problems.

For nonoverdamped eigenproblems, the natural ordering calling the smallest eigenvalue the first one, the second smallest the second one, etc., is not appropriate. This is obvious if we make a linear eigenvalue problem T(λ)x := (λI − A)x = 0 nonlinear by restricting it to an interval J which does not contain the smallest eigenvalue of A.
Then the conditions (A_1) and (A_2) are satisfied, p is the restriction of the Rayleigh quotient R_A to D := {x ≠ 0 : R_A(x) ∈ J}, and inf_{x∈D} p(x) will in general not be an eigenvalue.

If λ ∈ J is an eigenvalue of T(·), then μ = 0 is an eigenvalue of the linear problem T(λ)y = μy, and therefore there exists an l ∈ N such that

    0 = max_{V ∈ H_l} min_{v ∈ V \ {0}} (v^H T(λ)v) / ‖v‖²,

where H_l denotes the set of all l-dimensional subspaces of C^n. In this case, λ is called an lth eigenvalue of T(·). With this enumeration the following minmax characterization for eigenvalues was proved in [20, 21].

THEOREM 2.1. Let J be an open interval in R, and let T(λ) ∈ C^{n×n}, λ ∈ J, be a family of Hermitian matrices depending continuously on the parameter λ ∈ J such that the conditions (A_1) and (A_2) are satisfied. Then the following statements hold.

(i) For every l ∈ N there is at most one lth eigenvalue of T(·), which can be characterized by

(2.3)    λ_l = min_{V ∈ H_l, V∩D ≠ ∅} sup_{v ∈ V∩D} p(v).

(ii) If

    λ_l := inf_{V ∈ H_l, V∩D ≠ ∅} sup_{v ∈ V∩D} p(v) ∈ J

for some l ∈ N, then λ_l is the lth eigenvalue of T(·) in J, and (2.3) holds.

(iii) If there exist the kth and the lth eigenvalues λ_k and λ_l in J (k < l), then J contains the jth eigenvalue λ_j (k ≤ j ≤ l) as well, with λ_k ≤ λ_j ≤ λ_l.

(iv) Let λ_1 = inf_{x∈D} p(x) ∈ J and λ_l ∈ J. If the minimum in (2.3) is attained for an l-dimensional subspace V, then V ⊂ D ∪ {0}, and (2.3) can be replaced by

    λ_l = min_{V ∈ H_l, V ⊂ D∪{0}} max_{v ∈ V, v ≠ 0} p(v).

(v) λ̂ ∈ J is an lth eigenvalue if and only if μ = 0 is the lth largest eigenvalue of the linear eigenproblem T(λ̂)x = μx.

(vi) The minimum in (2.3) is attained for the invariant subspace of T(λ_l) corresponding to its l largest eigenvalues.

3. Sylvester's law for nonlinear eigenvalue problems. We first consider the overdamped case. Then T(·) has exactly n eigenvalues λ_1 ≤ ... ≤ λ_n in J [17].

THEOREM 3.1. Assume that T : J → C^{n×n} satisfies the conditions of the minmax characterization in Theorem 2.1, and assume that the nonlinear eigenvalue problem (2.1) is overdamped, i.e., for every x ≠ 0 Equation (2.2) has a unique solution p(x) ∈ J. For σ ∈ J, let (n_p, n_n, n_z) be the inertia of T(σ). Then the nonlinear eigenproblem T(λ)x = 0 has n eigenvalues in J, n_p of which are less than σ, n_n exceed σ, and for n_z > 0, σ is an eigenvalue of geometric multiplicity n_z.

Proof. The invariant subspace W of T(σ) corresponding to its positive eigenvalues has dimension n_p, and it holds that f(σ; x) = x^H T(σ)x > 0 for every x ∈ W, x ≠ 0. Hence p(x) < σ by (A_2), and therefore the n_p th smallest eigenvalue of T(·) satisfies

    λ_{n_p} = min_{dim V = n_p} max_{x ∈ V, x ≠ 0} p(x) ≤ max_{x ∈ W, x ≠ 0} p(x) < σ.

On the other hand, for every subspace V of C^n of dimension n_p + n_z + 1, there exists a vector x ∈ V such that f(σ; x) < 0.
Thus p(x) > σ, and it holds that

    λ_{n_p + n_z + 1} = min_{dim V = n_p + n_z + 1} max_{x ∈ V, x ≠ 0} p(x) > σ,

which completes the proof.

Next we consider the case that an extreme eigenvalue, i.e., either λ_1 := inf_{x∈D} p(x) or λ_n := sup_{x∈D} p(x), is contained in J.

THEOREM 3.2. Assume that T : J → C^{n×n} satisfies the conditions of the minmax characterization, and let (n_p, n_n, n_z) be the inertia of T(σ) for some σ ∈ J.

(i) If λ_1 := inf_{x∈D} p(x) ∈ J, then the nonlinear eigenproblem T(λ)x = 0 has exactly n_p eigenvalues λ_1 ≤ ... ≤ λ_{n_p} in J which are smaller than σ.

(ii) If sup_{x∈D} p(x) ∈ J, then the nonlinear eigenproblem T(λ)x = 0 has exactly n_n eigenvalues λ_{n−n_n+1} ≤ ... ≤ λ_n in J exceeding σ.
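Theorem 3.1 can be checked numerically on a small overdamped quadratic pencil T(λ) = λ²M + λC + K. The construction below (M = I, C = cI, and K scaled so that λ_max(K) < c²/4) is a hypothetical illustrative choice, not taken from the paper; it makes T(·) overdamped on J = (−c/2, 0), and the n eigenvalues in J are then available in closed form for the cross-check:

```python
# Sketch of Theorem 3.1: for σ in J, the inertia (n_p, n_n, n_z) of T(σ)
# splits the n eigenvalues of an overdamped pencil T(·) in J at σ.
import numpy as np

rng = np.random.default_rng(1)
n, c = 6, 6.0
X = rng.standard_normal((n, n))
K = X @ X.T
K *= 8.0 / np.linalg.eigvalsh(K).max()   # λ_max(K) = 8 < c²/4 = 9: overdamped

def T(s):                                # T(s) = s²·I + c·s·I + K
    return (s**2 + c * s) * np.eye(n) + K

sigma = -1.0                             # a shift inside J = (-3, 0)
d = np.linalg.eigvalsh(T(sigma))
n_p, n_n = int((d > 0).sum()), int((d < 0).sum())

# the n eigenvalues of T(·) in J are the larger roots of λ² + cλ + μ_k = 0,
# where μ_k runs over the eigenvalues of K
mu = np.linalg.eigvalsh(K)
lam = (-c + np.sqrt(c**2 - 4 * mu)) / 2
print(n_p, int((lam < sigma).sum()))     # n_p equals the count below σ
```

Here the inertia is taken from a dense eigenvalue computation for brevity; in practice it would come from an LDL^H factorization of T(σ) as described in the introduction.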

Proof. (i): We first show that f(λ; x) < 0 for every λ ∈ J with λ < λ_1 and for every vector x ≠ 0. Assume that f(λ; x) ≥ 0 for some λ < λ_1 and x ≠ 0, let x̂ be an eigenvector of (2.1) corresponding to λ_1, and let w(t) := tx̂ + (1−t)x, 0 ≤ t ≤ 1. Then φ(t) := f(λ; w(t)) is continuous in [0, 1], φ(0) = f(λ; x) ≥ 0, and φ(1) = f(λ; x̂) < 0 by (A_2). Hence, there exists a value t̂ ∈ [0, 1) such that f(λ; w(t̂)) = 0, i.e., w(t̂) ∈ D and p(w(t̂)) = λ < λ_1, contradicting λ_1 := inf_{x∈D} p(x).

For n_p = 0, the matrix T(σ) is negative semidefinite, i.e., x^H T(σ)x ≤ 0 for x ≠ 0, and it follows from (A_2) that p(x) ≥ σ holds true for every x ∈ D. Hence, there is no eigenvalue less than σ.

For n_p > 0, let W denote the invariant subspace of T(σ) corresponding to its positive eigenvalues. Then f(σ; x) = x^H T(σ)x > 0 for x ∈ W, x ≠ 0, and from f(λ; x) < 0 for λ < λ_1, it follows that x ∈ D and p(x) < σ. Hence, W ⊂ D ∪ {0}, and as in the proof of Theorem 3.1 we obtain

    λ_{n_p} = min_{dim V = n_p, V∩D ≠ ∅} max_{x ∈ V∩D} p(x) ≤ max_{x ∈ W, x ≠ 0} p(x) < σ,

i.e., T(·) has at least n_p eigenvalues less than σ. Assume that there exists an (n_p + 1)th eigenvalue λ_{n_p+1} < σ of T(·), and let W̃ be the invariant subspace of T(λ_{n_p+1}) corresponding to its nonnegative eigenvalues. Then this implies that dim W̃ ≥ n_p + 1, W̃ \ {0} ⊂ D, and p(x) ≤ λ_{n_p+1} < σ for every x ∈ W̃ with x ≠ 0, contradicting the fact that for every subspace V with dim V = n_p + 1 there exists a vector x ∈ V with x^H T(σ)x ≤ 0, i.e., either x ∉ D or p(x) ≥ σ.

(ii): S(λ) := T(−λ) satisfies the conditions of Theorem 2.1 in the interval −J, and −J contains the smallest eigenvalue −λ_n of S.

For the general case the law of inertia has the following form:

THEOREM 3.3. Let T : J → C^{n×n} satisfy the conditions of the minmax characterization, and let σ, τ ∈ J, σ < τ. Let (n_pσ, n_nσ, n_zσ) and (n_pτ, n_nτ, n_zτ) be the inertias of T(σ) and T(τ), respectively. Then the inequality n_pσ ≤ n_pτ holds, and the eigenvalue problem (2.1) has exactly n_pτ − n_pσ eigenvalues λ_{n_pσ+1} ≤ ... ≤ λ_{n_pτ} in (σ, τ).

Proof.
Let W be the invariant subspace of T(σ) corresponding to its positive eigenvalues. Then by the positive definiteness, x^H T(σ)x > 0 for every x ∈ W, x ≠ 0, it follows from (A_2) that x^H T(τ)x > 0 holds as well; hence, n_pσ ≤ n_pτ.

Let V be a subspace of C^n with V ∩ D ≠ ∅ and dim V = n_pσ + 1. We first show that there exists an x ∈ V ∩ D with p(x) > σ, from which we then obtain

    λ_{n_pσ+1} := inf_{dim V = n_pσ+1, V∩D ≠ ∅} sup_{x ∈ V∩D} p(x) > σ.

From the hypothesis dim V > n_pσ, it follows that there exists a vector x ∈ V, x ≠ 0, such that x^H T(σ)x < 0. If x ∈ D, then it follows from (A_2) that we are done. Otherwise, we choose y ∈ V ∩ D and ω > max(p(y), σ). Then x^H T(ω)x < 0 < y^H T(ω)y, and with w(t) := tx + (1−t)y it follows in the same way as in the proof of Theorem 3.2 that there exists a value t̂ ∈ [0, 1] such that w(t̂) ∈ V ∩ D and p(w(t̂)) = ω > σ.

If U denotes the invariant subspace of T(τ) corresponding to its positive eigenvalues, then x^H T(τ)x > 0 holds for every x ∈ U, x ≠ 0. If U ∩ D = ∅, then x^H T(λ)x > 0 holds for every λ ∈ J and x ∈ U, x ≠ 0, and in particular for λ = σ. Hence, U ⊂ W, and from σ < τ the equality n_pσ = n_pτ follows. In this case, T(·) has no eigenvalue in (σ, τ), because otherwise a corresponding eigenvector x would satisfy x^H T(σ)x < 0 < x^H T(τ)x, i.e., x ∉ W and x ∈ U, contradicting U ⊂ W.

If U ∩ D ≠ ∅, then p(x) < τ holds for every x ∈ U ∩ D, and therefore

    λ_{n_pτ} = inf_{dim V = n_pτ, V∩D ≠ ∅} sup_{x ∈ V∩D} p(x) ≤ sup_{x ∈ U∩D} p(x) < τ.

Hence, λ_{n_pσ+1} and λ_{n_pτ} are both contained in (σ, τ), and so are the eigenvalues λ_j for j = n_pσ + 1, ..., n_pτ.

REMARK 3.4. Without using the minmax characterization of eigenvalues, Neumaier [13] proved Theorem 3.3 for matrices T : J → C^{n×n} which are Hermitian and (elementwise) differentiable in J with positive definite derivative T'(λ), λ ∈ J. Obviously, such T(·) satisfy the conditions of the minmax characterization.

EXAMPLE 3.5. Consider the rational eigenvalue problem

    T(λ) := −K + λM + Σ_{j=1}^p [λ / (σ_j − λ)] C_j C_j^T,

where K, M ∈ R^{n×n} are symmetric and positive definite, the matrix C_j ∈ R^{n×k_j} has rank k_j, and 0 < σ_1 < ... < σ_p. This problem models the free vibrations of certain fluid solid structures; cf. [1]. In each interval J_l := (σ_l, σ_{l+1}), l = 0, ..., p, σ_0 = 0, σ_{p+1} = ∞, the function f_l(λ, x) := x^H T(λ)x is strictly monotonically increasing, and therefore all eigenvalues in J_l are minmax values of the Rayleigh functional p_l. For the first interval J_0, Theorem 3.2 applies. Hence, if τ ∈ J_0 and (n_p, n_n, n_z) is the inertia of T(τ), then there are exactly n_p eigenvalues in J_0 which are less than τ. Moreover, if τ_1 < τ_2 are contained in one interval J_j, then the number of eigenvalues in the interval (τ_1, τ_2) can be obtained from the inertias of T(τ_1) and T(τ_2) according to Theorem 3.3.

4. Quadratic eigenvalue problems. We consider quadratic matrix pencils

(4.1)    Q(λ) := λ²A + λB + C,

with Hermitian matrices A, B, C ∈ C^{n×n} under several conditions that guarantee that (some of) the real eigenvalues allow for a variational characterization and hence for slicing of the spectrum using the inertia.

4.1. C < 0 and A ≥ 0. Let C be negative definite and A positive semidefinite. Multiplying Q(λ)x = 0 by λ^{-1}, one gets the equivalent nonlinear eigenvalue problem

    Q̃(λ)x := λAx + Bx + λ^{-1}Cx = 0.
Differentiating f(λ; x) := x^H Q̃(λ)x with respect to λ yields

    (∂f/∂λ)(λ; x) = x^H Ax − (x^H Cx)/λ² > 0

for every x ≠ 0 and every λ ≠ 0. Hence, Q̃ satisfies the conditions of the minmax characterization for both intervals J_− := (−∞, 0) and J_+ := (0, ∞). For the corresponding Rayleigh functionals p_± with domains D_±, it holds that λ_1^+ = inf_{x∈D_+} p_+(x) ∈ J_+ and λ_n^- = sup_{x∈D_−} p_−(x) ∈ J_−, and therefore the following statement follows from Theorem 3.2.

THEOREM 4.1. Let C be negative definite and A positive semidefinite.

(i) For σ > 0 let In(Q̃(σ)) = (n_p, n_n, n_z) be the inertia of Q̃(σ). Then the quadratic pencil (4.1) has n_p positive eigenvalues less than σ.

(ii) For σ < 0 let In(Q̃(σ)) = (n_p, n_n, n_z) be the inertia of Q̃(σ). Then (4.1) has n_n negative eigenvalues exceeding σ.

If A is positive definite, then Q̃ is overdamped with respect to J_+ and J_−, and therefore there exist exactly n positive and n negative eigenvalues. If A ≥ 0 is positive semidefinite and r = rank(A), then ∞ is an infinite eigenvalue of multiplicity n − r, and there are only n + r finite eigenvalues. If B is positive definite, then the Rayleigh functional

    p_+(x) = −2 x^H Cx / ( x^H Bx + √((x^H Bx)² − 4 (x^H Ax)(x^H Cx)) )

is defined on C^n \ {0}. Hence, (Q̃, J_+) is overdamped, and there exist n positive and r negative eigenvalues. Theorem 4.1 can be strengthened according to the following result.

THEOREM 4.2. Assume that A is positive semidefinite, B is positive definite, and C is negative definite.

(i) For σ > 0 let In(Q̃(σ)) = (n_p, n_n, n_z) be the inertia of Q̃(σ). Then the quadratic pencil (4.1) has n_p positive eigenvalues less than σ, n_n eigenvalues exceeding σ, and if n_z ≠ 0, then σ is an eigenvalue of Q(·) with multiplicity n_z.

(ii) For σ < 0 let In(Q̃(σ)) = (n_p, n_n, n_z) be the inertia of Q̃(σ). Then (4.1) has n_n negative eigenvalues exceeding σ, n_p − (n − r) finite eigenvalues less than σ, and if n_z ≠ 0, then σ is an eigenvalue of Q(·) with multiplicity n_z.

4.2. Hyperbolic problems. The quadratic pencil Q(·) defined by the Hermitian matrices A, B, C ∈ C^{n×n} is called hyperbolic if A is positive definite and for every x ∈ C^n, x ≠ 0, the quadratic polynomial

    f(λ; x) := λ² x^H Ax + λ x^H Bx + x^H Cx = 0

has two distinct real roots

(4.2)    p_±(x) = −(x^H Bx)/(2 x^H Ax) ± √( ((x^H Bx)/(2 x^H Ax))² − (x^H Cx)/(x^H Ax) ).

A hyperbolic quadratic matrix polynomial Q(·) has the following properties (cf. [12]): the ranges J_± := p_±(C^n \ {0}) are disjoint real closed intervals with max J_− < min J_+; moreover, Q(λ) is positive definite for λ < min J_− and λ > max J_+, and it is negative definite for λ ∈ (max J_−, min J_+).
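Returning to Theorem 4.1, the slicing can be checked numerically. The following sketch uses illustrative random matrices (A positive definite, C negative definite; these choices and the companion linearization used for the cross-check are not from the paper):

```python
# Sketch of Theorem 4.1: for Q(λ) = λ²A + λB + C with A > 0 and C < 0,
# the n_p entry of In(Q~(σ)), where Q~(σ) = σA + B + C/σ, counts the
# positive eigenvalues of the pencil that lie below σ > 0.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(2)
n = 5
def spd(m):
    X = rng.standard_normal((m, m))
    return X @ X.T + m * np.eye(m)

A, C = spd(n), -spd(n)
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # arbitrary Hermitian

sigma = 2.0
d = np.linalg.eigvalsh(sigma * A + B + C / sigma)    # inertia of Q~(σ)
n_p = int((d > 0).sum())

# all 2n eigenvalues (real here, since A > 0 and C < 0) via a companion
# linearization λ·diag(A, I) z = [[-B, -C], [I, 0]] z with z = [λx; x]
L1 = np.block([[-B, -C], [np.eye(n), np.zeros((n, n))]])
L2 = np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
lam = eig(L1, L2, right=False).real
print(n_p, int(((lam > 0) & (lam < sigma)).sum()))
```

All eigenvalues are real here because for any eigenpair the scalar quadratic λ²x^H Ax + λx^H Bx + x^H Cx has positive leading and negative constant coefficient, hence a positive discriminant.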
Let J̃_+ be an open interval with J_+ ⊂ J̃_+ and J̃_+ ∩ J_− = ∅, and let J̃_− be an open interval with J_− ⊂ J̃_− and J̃_− ∩ J_+ = ∅. Then (Q, J̃_+) and (−Q, J̃_−) satisfy the conditions of the variational characterization of eigenvalues, and they are both overdamped. Hence, there exist 2n eigenvalues

    λ_1 ≤ ... ≤ λ_n < λ_{n+1} ≤ ... ≤ λ_{2n},

and

    λ_j = min_{dim V = j} max_{x ∈ V, x ≠ 0} p_−(x)   and   λ_{n+j} = min_{dim V = j} max_{x ∈ V, x ≠ 0} p_+(x),   j = 1, ..., n.

If In(Q(σ)) = (n_p, n_n, n_z) is the inertia of Q(σ) and n_n = n, then Q(σ) is negative definite, and there are n eigenvalues smaller than σ and n eigenvalues exceeding σ. If n_p = n holds, then Q(σ) is positive definite, f(σ; x) > 0 is valid for every x ≠ 0, and if (∂f/∂λ)(σ; x) < 0 holds, then it follows that σ < λ_1, and σ > λ_{2n} otherwise.
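This derivative test (evaluating the sign of ∂f/∂λ(σ; x) at some x with f(σ; x) > 0, which also settles the remaining case n_p ≠ n, n_n ≠ n treated next) can be sketched as follows. The matrices are a hypothetical hyperbolic choice (A = I, B = βI, λ_max(C) < β²/4), for which the two eigenvalue groups are known in closed form:

```python
# Sketch: locating σ relative to the two eigenvalue groups of a hyperbolic
# pencil Q(λ) = λ²A + λB + C. Take x with x^H Q(σ)x > 0 (here an eigenvector
# of the largest eigenvalue of Q(σ)); the sign of 2σ x^H A x + x^H B x tells
# whether σ < λ_n (below the top of the p_- group) or σ > λ_{n+1} (above the
# bottom of the p_+ group).
import numpy as np

rng = np.random.default_rng(3)
n, beta = 5, 8.0
A = np.eye(n)
B = beta * np.eye(n)
X = rng.standard_normal((n, n)); C = X @ X.T
C *= 10.0 / np.linalg.eigvalsh(C).max()   # λ_max(C) = 10 < β²/4 = 16: hyperbolic

def Q(s):
    return s**2 * A + s * B + C

def side(sigma):
    d, V = np.linalg.eigh(Q(sigma))
    assert d[-1] > 0, "need some x with x^H Q(σ)x > 0"
    x = V[:, -1]                          # f(σ; x) > 0 for this direction
    slope = 2 * sigma * (x @ A @ x) + x @ B @ x
    return "p- side" if slope < 0 else "p+ side"

# roots of λ² + βλ + μ, μ ∈ λ(C): p_- group ⊂ (-8, -6.4], p_+ group ⊂ [-1.6, 0)
print(side(-7.0), side(-0.1))
```

As noted above, in practice the test vector x would come from a few Lanczos steps on Q(σ) rather than a full eigendecomposition.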

If n_p ≠ n and n_n ≠ n, then σ ∈ J̃_− ∪ J̃_+, and Theorem 3.1 applies. We only have to find out which of these intervals σ is located in. To this end we determine x ≠ 0 such that f(σ; x) := x^H Q(σ)x > 0 (this can be done by a few steps of the Lanczos method, which is known to converge first to extreme eigenvalues). If

    (∂f/∂λ)(σ; x) = 2σ x^H Ax + x^H Bx < 0,

then it follows that p_−(x) > σ, and therefore σ < λ_n = max_{x≠0} p_−(x). Similarly, the inequalities f(σ; x) > 0 and 2σ x^H Ax + x^H Bx > 0 imply σ > λ_{n+1} = min_{x≠0} p_+(x). Hence we obtain the following slicing of the spectrum of Q(·).

THEOREM 4.3. Let Q(λ) := λ²A + λB + C be hyperbolic, and let (n_p, n_n, n_z) be the inertia of Q(σ) for σ ∈ R.

(i) If n_n = n, then there are n eigenvalues smaller than σ and n eigenvalues greater than σ.

(ii) Let n_p = n. If 2σ x^H Ax + x^H Bx < 0 for an arbitrary x ≠ 0, then all 2n eigenvalues exceed σ. If 2σ x^H Ax + x^H Bx > 0 for an arbitrary x ≠ 0, then all 2n eigenvalues are less than σ.

(iii) For n_p = 0 and n_z > 0 let x ≠ 0 be an element of the null space of Q(σ). If 2σ x^H Ax + x^H Bx < 0, then Q(λ)x = 0 has n − n_z eigenvalues in (−∞, σ), n eigenvalues in (σ, ∞), and σ = λ_n with multiplicity n_z. If 2σ x^H Ax + x^H Bx > 0, then Q(·) has n eigenvalues in (−∞, σ), n − n_z eigenvalues in (σ, ∞), and σ = λ_{n+1} with multiplicity n_z.

(iv) For n_p > 0 and n_z = 0 let x ≠ 0 be such that f(σ; x) > 0. If 2σ x^H Ax + x^H Bx < 0, then Q(·) has n − n_p eigenvalues in (−∞, σ) and n + n_p eigenvalues in (σ, ∞). If 2σ x^H Ax + x^H Bx > 0, then Q(·) has n + n_p eigenvalues in (−∞, σ) and n − n_p eigenvalues in (σ, ∞).

(v) For n_p > 0 and n_z > 0 let x ≠ 0 be such that f(σ; x) > 0. If 2σ x^H Ax + x^H Bx < 0, then Q(·) has n − n_p − n_z eigenvalues in (−∞, σ) and n + n_p eigenvalues in (σ, ∞). If 2σ x^H Ax + x^H Bx > 0, then Q(·) has n + n_p eigenvalues in (−∞, σ) and n − n_p − n_z eigenvalues in (σ, ∞). In either case σ is an eigenvalue with multiplicity n_z.

REMARK 4.4.
These results on quadratic hyperbolic pencils can be generalized to a matrix polynomial of higher degree

    P(λ) = Σ_{j=0}^k λ^j A_j,   A_j = A_j^H, j = 0, ..., k,   A_k > 0,

which is hyperbolic if A_k is positive definite and for every x ≠ 0 the corresponding polynomial f(λ; x) := x^H P(λ)x has k real and distinct roots. In this case there exist k disjoint open intervals J_j ⊂ R, j = 1, ..., k, such that P(·) has exactly n eigenvalues in each J_j, and these eigenvalues allow for a minmax characterization; cf. [12, 14]. To fix the enumeration, let sup J_{j+1} < inf J_j for j = 1, ..., k−1.

For σ ∈ R, let (n_p, n_n, n_z) be the inertia of P(σ), and let x ∈ C^n be a vector such that x^H P(σ)x > 0. If f(·; x) has exactly j roots which exceed σ, then it holds that σ ∈ J_{j+1} or σ ∈ [sup J_{j+1}, inf J_j] or σ ∈ J_j.

Which one of these situations occurs can be deduced from the inertia (n_n = n or n_p = n) and the derivative (∂f/∂λ)(σ; x) as for the quadratic case.

4.3. Definite quadratic pencils. In a recent paper, Higham, Mackey, and Tisseur [10] generalized the concept of hyperbolic quadratic polynomials, waiving the positive definiteness of the leading matrix A. A quadratic pencil (4.1) is definite if A, B, and C are Hermitian, there exists a real value μ ∈ R ∪ {∞} such that Q(μ) is positive definite, and for every x ∈ C^n, x ≠ 0, the scalar quadratic polynomial

    f(λ; x) := λ² x^H Ax + λ x^H Bx + x^H Cx = 0

has two distinct roots in R ∪ {∞}. The following theorem was proved in [10].

THEOREM 4.5. The Hermitian matrix polynomial Q(λ) is definite if and only if any two (and hence all) of the following properties hold:
- d(x) := (x^H Bx)² − 4 (x^H Ax)(x^H Cx) > 0 for every x ∈ C^n \ {0},
- Q(η) > 0 for some η ∈ R ∪ {∞},
- Q(ξ) < 0 for some ξ ∈ R ∪ {∞}.

For ξ < η (otherwise consider Q(−λ)), it was shown in [14] that there are n eigenvalues in (ξ, η) which are minmax values of a Rayleigh functional, and the remaining n eigenvalues in [−∞, ξ) and (η, ∞] are minmax and maxmin values of a second Rayleigh functional. Hence, if ξ and η are known, then the slicing of the spectrum using the LDL^H factorization follows similarly to the hyperbolic case.

However, given σ and the LDL^H factorization of Q(σ), we are not aware of an easy way to decide in which of the intervals [−∞, ξ), (ξ, η), or (η, ∞] the parameter σ is located. The articles [7, 8, 14] contain methods to detect whether a quadratic pencil is definite and to compute the parameters ξ and η; however, they are much more costly than computing an LDL^H factorization of a matrix. For the Examples 4.6 and 4.8, at least one of these parameters is known, and the slicing can be given explicitly.

EXAMPLE 4.6.
Duffin [4] called a quadratic eigenproblem (4.1) an overdamped network if A, B, and C are positive semidefinite and the so-called overdamping condition

    d(x) = (x^H Bx)² − 4 (x^H Ax)(x^H Cx) > 0 for every x ≠ 0

is satisfied. So, actually B has to be positive definite, and therefore Q(μ) is positive definite for every μ > 0, and Q(·) is definite. If r denotes the rank of A, then (4.1) has n + r finite real eigenvalues, the largest n ones of which (called primary eigenvalues by Duffin) are minmax values of

    p_+(x) := −2 x^H Cx / ( x^H Bx + √(d(x)) ),

and the smallest r ones (called secondary eigenvalues) are minmax values

    λ_{n+1−j} = min_{dim V = j, V∩D ≠ ∅} max_{x ∈ V∩D} p_−(x),

where D := {x ∈ C^n : x^H Ax ≠ 0} and p_−(x) = (−x^H Bx − √(d(x))) / (2 x^H Ax) for x ∈ D. Hence, the following slicing of the spectrum can be derived.

THEOREM 4.7. Let A, B, C ∈ C^{n×n} be positive semidefinite, and assume that d(x) > 0 for x ≠ 0. Let r be the rank of A and In(Q(σ)) = (n_p, n_n, n_z) be the inertia of Q(σ) for some σ ∈ R. Then the following holds.

(i) If n_n = n, then there are r eigenvalues smaller than σ and n eigenvalues greater than σ.

(ii) For n_p = 0 and n_z > 0 let x ≠ 0 be an element of the null space of Q(σ). If 2σ x^H Ax + x^H Bx < 0, then Q(·) has r − n_z eigenvalues in (−∞, σ) and n eigenvalues in (σ, 0]. If 2σ x^H Ax + x^H Bx > 0, then Q(·) has r eigenvalues in (−∞, σ) and n − n_z eigenvalues in (σ, 0]. In either case, σ is an eigenvalue of Q(·) with multiplicity n_z.

(iii) For n_p > 0 let x ≠ 0 be such that f(σ; x) > 0. If 2σ x^H Ax + x^H Bx < 0, then Q(λ)x = 0 has r − n_p eigenvalues in (−∞, σ) and n + n_p − n_z eigenvalues in (σ, 0]. If 2σ x^H Ax + x^H Bx > 0, then Q(λ)x = 0 has r + n_p − n_z eigenvalues in (−∞, σ) and n − n_p eigenvalues in (σ, 0].

EXAMPLE 4.8. Free vibrations of fluid solid structures are governed by the nonsymmetric eigenvalue problem [9, 18]

(4.3)    [ K_s  C  ] [x_s]       [  M_s   0  ] [x_s]
         [  0  K_f ] [x_f]  = λ  [ −C^T  M_f ] [x_f],

where K_s ∈ R^{s×s}, K_f ∈ R^{f×f} are the stiffness matrices and M_s ∈ R^{s×s}, M_f ∈ R^{f×f} are the mass matrices of the structure and the fluid, respectively, and C ∈ R^{s×f} describes the coupling of structure and fluid. The vector x_s is the structure displacement vector, and x_f is the fluid pressure vector. K_s, M_s, K_f, and M_f are symmetric and positive definite.

Multiplying the first line of (4.3) by λ, one obtains the quadratic pencil

    Q(λ) := λ² [ M_s  0 ]  + λ [ −K_s  −C  ]  + [ 0    0  ]
               [  0   0 ]      [ −C^T  M_f ]    [ 0  −K_f ].

It can easily be seen that for x_s ≠ 0 the quadratic equation [x_s^T, x_f^T] Q(λ) [x_s; x_f] = 0 has one positive solution p_+(x_s, x_f) and one negative solution p_−(x_s, x_f), and for x_s = 0 it has the one positive solution p_+(x_s, x_f) := x_f^T K_f x_f / (x_f^T M_f x_f) and the solution p_−(x_s, x_f) := −∞. Moreover, the positive eigenvalues of (4.3) are minmax values of the Rayleigh functional p_+.
Hence, one gets the following result for the (physically meaningful) positive eigenvalues: if In(Q(σ)) = (n_p, n_n, n_z) for σ > 0, then there are exactly n_p eigenvalues in (0, σ), n_n eigenvalues in (σ, ∞), and if n_z ≠ 0, then σ is an eigenvalue of multiplicity n_z.

4.4. Nonoverdamped quadratic pencils. We consider the quadratic pencil (4.1), where the matrices A, B, and C are positive definite. Then for x ≠ 0, the two complex roots of f(λ; x) := x^H Q(λ)x are given in (4.2). Let us define

    δ_− := sup{p_−(x) : p_−(x) ∈ R},    δ_+ := inf{p_+(x) : p_+(x) ∈ R},

    J_− := (−∞, δ_+),    J_+ := (δ_−, 0),

and D_± := {x ∈ C^n : p_±(x) ∈ J_±}. If f(λ; x) > 0 for every x ≠ 0 and every λ ∈ R, then it follows that δ_− = −∞ and δ_+ = ∞. The eigenvalue problem Q(λ)x = 0 then has no real eigenvalues, but this does not have to be known in advance; Theorem 4.9 applies to this case as well.

It is obvious that −Q and Q satisfy the conditions of the minmax characterization of their eigenvalues in J_− and J_+, respectively. Hence, all eigenvalues in J_− are minmax values of p_−:

    λ_j^- = min_{dim V = j, V∩D_− ≠ ∅} max_{x ∈ V∩D_−} p_−(x),   j = 1, 2, ...

Taking advantage of the minmax characterization of the eigenvalues of Q̂(λ) := Q(−λ) in Ĵ_+ := −J_+ with the Rayleigh functional p̂(x) := −p_+(x), we obtain the following maxmin characterization

    λ_{2n+1−j}^+ = max_{dim V = j, V∩D_+ ≠ ∅} min_{x ∈ V∩D_+} p_+(x),   j = 1, 2, ...

of all eigenvalues of Q in J_+. Hence, for σ < δ_+ and for σ > δ_−, we obtain slicing results for the spectrum of Q(·) from Theorem 3.2. If In(Q(σ)) = (n_p, n_n, n_z) and σ < δ_+, then there exist n_n eigenvalues of Q(·) in (−∞, σ), and if σ ∈ (δ_−, 0), then there are n_n eigenvalues in (σ, 0). However, δ_+ and δ_− are usually not known. The following theorem contains upper bounds of δ_− and lower bounds of δ_+, thus yielding subintervals of (−∞, δ_+) and (δ_−, 0) where the above slicing applies.

THEOREM 4.9. Let A, B, C ∈ C^{n×n} be positive definite, and let p_+ and p_− be defined in (4.2). Then it holds that

(i) δ̃_+ := −max_{x≠0} √(x^H Cx / x^H Ax) ≤ δ_+ = inf{p_+(x) : p_+(x) ∈ R} and δ_− = sup{p_−(x) : p_−(x) ∈ R} ≤ −min_{x≠0} √(x^H Cx / x^H Ax) =: δ̃_−,

(ii) δ̂_+ := −2 max_{x≠0} (x^H Cx / x^H Bx) ≤ δ_+ and δ_− ≤ −2 min_{x≠0} (x^H Cx / x^H Bx) =: δ̂_−.

Proof. (i): The value δ̃_+ is a lower bound of δ_+ if for every x ≠ 0 such that p_+(x) ∈ R it holds that p_+(x) ≥ δ̃_+. The following proof takes advantage of the facts that p_+(x) < 0 and (∂f/∂λ)(p_+(x); x) ≥ 0. The equation f(p_+(x); x) = x^H Q(p_+(x))x = 0 is satisfied if and only if

    x^H Bx = −p_+(x) x^H Ax − x^H Cx / p_+(x).

Hence,

    (∂f/∂λ)(p_+(x); x) = 2 p_+(x) x^H Ax + x^H Bx = p_+(x) x^H Ax − x^H Cx / p_+(x) ≥ 0

if and only if

    p_+(x)² ≤ x^H Cx / x^H Ax,

i.e.,

    δ_+ ≥ −max_{x≠0} √(x^H Cx / x^H Ax) = δ̃_+,

and analogously we obtain δ_− ≤ −min_{x≠0} √(x^H Cx / x^H Ax) = δ̃_−.

(ii): Solving f(p_+(x); x) = 0 for x^H Ax, one gets from (∂f/∂λ)(p_+(x); x) ≥ 0 that

    p_+(x) ≥ −2 x^H Cx / x^H Bx,   i.e.,   δ_+ ≥ −2 max_{x≠0} (x^H Cx / x^H Bx) = δ̂_+,

and analogously δ_− ≤ −2 min_{x≠0} (x^H Cx / x^H Bx) = δ̂_−.

From Theorem 3.2 we obtain the following slicing of the spectrum of Q(·).

THEOREM 4.10. Let A, B, and C be positive definite matrices, and for some σ ∈ R let In(Q(σ)) = (n_p, n_n, n_z).

(i) If

    σ ≤ max{ −max_{x≠0} √(x^H Cx / x^H Ax), −2 max_{x≠0} (x^H Cx / x^H Bx) },

then there exist n_n eigenvalues of Q(λ)x = 0 in (−∞, σ).

(ii) If

    σ ≥ min{ −min_{x≠0} √(x^H Cx / x^H Ax), −2 min_{x≠0} (x^H Cx / x^H Bx) },

then there exist n_n eigenvalues of Q(λ)x = 0 in (σ, 0).

EXAMPLE 4.11. In a numerical experiment, the matrices A, B, and C were generated by the following MATLAB statements:

    randn('state',0);
    A = eye(20);
    B = randn(20); B = B'*B;
    C = randn(20); C = C'*C;

It was found that Q(λ)x = 0 has 26 real eigenvalues, 13 in the domain of p_− and 13 in the domain of p_+. So, Sylvester's theorem can be applied to all of them. 12 eigenvalues are less than −max √λ(C, A), and 6 eigenvalues exceed −min √λ(C, A).

5. Conclusions. We have considered a given family of Hermitian matrices T(λ) depending continuously on a parameter λ in an open real interval J which allows for a variational characterization of its eigenvalues. We proved slicing results for the spectrum of T(·), where at first general nonlinear eigenvalue problems are considered, which are then specialized to various types of quadratic eigenproblems.

REFERENCES

[1] C. CONCA, J. PLANCHARD, AND M. VANNINATHAN, Fluid and Periodic Structures, vol. 38 of Research in Applied Mathematics, Masson, Paris.
[2] R. COURANT, Über die Eigenwerte bei den Differentialgleichungen der mathematischen Physik, Math. Z., 7 (1920).
[3] J. DEMMEL, Applied Numerical Linear Algebra, SIAM, Philadelphia.
[4] R. DUFFIN, A minimax theory for overdamped networks, J. Rational Mech. Anal., 4 (1955).
[5] E. FISCHER, Über quadratische Formen mit reellen Koeffizienten, Monatsh. Math. Phys., 16 (1905).
[6] G. GOLUB AND C. VAN LOAN, Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore.
[7] C.-H. GUO, N. HIGHAM, AND F. TISSEUR, Detecting and solving hyperbolic quadratic eigenvalue problems, SIAM J. Matrix Anal. Appl., 30 (2009).

[8] C.-H. GUO, N. HIGHAM, AND F. TISSEUR, An improved arc algorithm for detecting definite Hermitian pairs, SIAM J. Matrix Anal. Appl., 31 (2009).
[9] C. HARRIS AND A. PIERSOL, Harris' Shock and Vibration Handbook, 5th ed., McGraw-Hill, New York.
[10] N. HIGHAM, D. MACKEY, AND F. TISSEUR, Definite matrix polynomials and their linearization by definite pencils, SIAM J. Matrix Anal. Appl., 31 (2009).
[11] R. HORN AND C. JOHNSON, Matrix Analysis, Cambridge University Press, Cambridge.
[12] A. MARKUS, Introduction to the Spectral Theory of Polynomial Operator Pencils, vol. 71 of Translations of Mathematical Monographs, AMS, Providence.
[13] A. NEUMAIER, Introduction to Numerical Analysis, Cambridge University Press, Cambridge.
[14] V. NIENDORF AND H. VOSS, Detecting hyperbolic and definite matrix polynomials, Linear Algebra Appl., 432 (2010).
[15] B. PARLETT, The Symmetric Eigenvalue Problem, SIAM, Philadelphia.
[16] H. POINCARÉ, Sur les équations aux dérivées partielles de la physique mathématique, Amer. J. Math., 12 (1890).
[17] E. ROGERS, A minimax theory for overdamped systems, Arch. Rational Mech. Anal., 16 (1964).
[18] M. STAMMBERGER AND H. VOSS, On an unsymmetric eigenvalue problem governing free vibrations of fluid-solid structures, Electron. Trans. Numer. Anal., 36 (2010).
[19] J. SYLVESTER, A demonstration of the theorem that every homogeneous quadratic polynomial is reducible by orthogonal substitutions to the form of a sum of positive and negative squares, Philosophical Magazine, 4 (1852).
[20] H. VOSS, A minmax principle for nonlinear eigenproblems depending continuously on the eigenparameter, Numer. Linear Algebra Appl., 16 (2009).
[21] H. VOSS AND B. WERNER, A minimax principle for nonlinear eigenvalue problems with applications to nonoverdamped systems, Math. Methods Appl. Sci., 4 (1982).


More information

CHAPTER 5 : NONLINEAR EIGENVALUE PROBLEMS

CHAPTER 5 : NONLINEAR EIGENVALUE PROBLEMS CHAPTER 5 : NONLINEAR EIGENVALUE PROBLEMS Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Mathematics TUHH Heinrich Voss Nonlinear eigenvalue problems Eigenvalue problems

More information

Solving Regularized Total Least Squares Problems

Solving Regularized Total Least Squares Problems Solving Regularized Total Least Squares Problems Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation Joint work with Jörg Lampe TUHH Heinrich Voss Total

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 436 (2012) 3954 3973 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Hermitian matrix polynomials

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

Hermitian Matrix Polynomials with Real Eigenvalues of Definite Type. Part I: Classification. Al-Ammari, Maha and Tisseur, Francoise

Hermitian Matrix Polynomials with Real Eigenvalues of Definite Type. Part I: Classification. Al-Ammari, Maha and Tisseur, Francoise Hermitian Matrix Polynomials with Real Eigenvalues of Definite Type. Part I: Classification Al-Ammari, Maha and Tisseur, Francoise 2010 MIMS EPrint: 2010.9 Manchester Institute for Mathematical Sciences

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

Scaling, Sensitivity and Stability in Numerical Solution of the Quadratic Eigenvalue Problem

Scaling, Sensitivity and Stability in Numerical Solution of the Quadratic Eigenvalue Problem Scaling, Sensitivity and Stability in Numerical Solution of the Quadratic Eigenvalue Problem Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/

More information

Keeping σ fixed for several steps, iterating on µ and neglecting the remainder in the Lagrange interpolation one obtains. θ = λ j λ j 1 λ j σ, (2.

Keeping σ fixed for several steps, iterating on µ and neglecting the remainder in the Lagrange interpolation one obtains. θ = λ j λ j 1 λ j σ, (2. RATIONAL KRYLOV FOR NONLINEAR EIGENPROBLEMS, AN ITERATIVE PROJECTION METHOD ELIAS JARLEBRING AND HEINRICH VOSS Key words. nonlinear eigenvalue problem, rational Krylov, Arnoldi, projection method AMS subject

More information

An Algorithm for. Nick Higham. Françoise Tisseur. Director of Research School of Mathematics.

An Algorithm for. Nick Higham. Françoise Tisseur. Director of Research School of Mathematics. An Algorithm for the Research Complete Matters Solution of Quadratic February Eigenvalue 25, 2009 Problems Nick Higham Françoise Tisseur Director of Research School of Mathematics The School University

More information

1. Introduction. In this paper we consider the large and sparse eigenvalue problem. Ax = λx (1.1) T (λ)x = 0 (1.2)

1. Introduction. In this paper we consider the large and sparse eigenvalue problem. Ax = λx (1.1) T (λ)x = 0 (1.2) A NEW JUSTIFICATION OF THE JACOBI DAVIDSON METHOD FOR LARGE EIGENPROBLEMS HEINRICH VOSS Abstract. The Jacobi Davidson method is known to converge at least quadratically if the correction equation is solved

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

that determines x up to a complex scalar of modulus 1, in the real case ±1. Another condition to normalize x is by requesting that

that determines x up to a complex scalar of modulus 1, in the real case ±1. Another condition to normalize x is by requesting that Chapter 3 Newton methods 3. Linear and nonlinear eigenvalue problems When solving linear eigenvalue problems we want to find values λ C such that λi A is singular. Here A F n n is a given real or complex

More information

Solving a rational eigenvalue problem in fluidstructure

Solving a rational eigenvalue problem in fluidstructure Solving a rational eigenvalue problem in fluidstructure interaction H. VOSS Section of Mathematics, TU Germany Hambu~g-Harburg, D-21071 Hamburg, Abstract In this paper we consider a rational eigenvalue

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 1, pp. 1-11, 8. Copyright 8,. ISSN 168-961. MAJORIZATION BOUNDS FOR RITZ VALUES OF HERMITIAN MATRICES CHRISTOPHER C. PAIGE AND IVO PANAYOTOV Abstract.

More information

Higher rank numerical ranges of rectangular matrix polynomials

Higher rank numerical ranges of rectangular matrix polynomials Journal of Linear and Topological Algebra Vol. 03, No. 03, 2014, 173-184 Higher rank numerical ranges of rectangular matrix polynomials Gh. Aghamollaei a, M. Zahraei b a Department of Mathematics, Shahid

More information

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

More information

Efficient Methods For Nonlinear Eigenvalue Problems. Diploma Thesis

Efficient Methods For Nonlinear Eigenvalue Problems. Diploma Thesis Efficient Methods For Nonlinear Eigenvalue Problems Diploma Thesis Timo Betcke Technical University of Hamburg-Harburg Department of Mathematics (Prof. Dr. H. Voß) August 2002 Abstract During the last

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method Journal of Mathematics Research; Vol 6, No ; 014 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian

More information

Two Results About The Matrix Exponential

Two Results About The Matrix Exponential Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about

More information

Recent Advances in the Numerical Solution of Quadratic Eigenvalue Problems

Recent Advances in the Numerical Solution of Quadratic Eigenvalue Problems Recent Advances in the Numerical Solution of Quadratic Eigenvalue Problems Françoise Tisseur School of Mathematics The University of Manchester ftisseur@ma.man.ac.uk http://www.ma.man.ac.uk/~ftisseur/

More information

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics Research Matters February 25, 2009 The Nonlinear Eigenvalue Problem Nick Higham Part III Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

SOLVING RATIONAL EIGENVALUE PROBLEMS VIA LINEARIZATION

SOLVING RATIONAL EIGENVALUE PROBLEMS VIA LINEARIZATION SOLVNG RATONAL EGENVALUE PROBLEMS VA LNEARZATON YANGFENG SU AND ZHAOJUN BA Abstract Rational eigenvalue problem is an emerging class of nonlinear eigenvalue problems arising from a variety of physical

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Numerical Methods for Solving Large Scale Eigenvalue Problems

Numerical Methods for Solving Large Scale Eigenvalue Problems Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems

More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Notes on Linear Algebra and Matrix Analysis

Notes on Linear Algebra and Matrix Analysis Notes on Linear Algebra and Matrix Analysis Maxim Neumann May 2006, Version 0.1.1 1 Matrix Basics Literature to this topic: [1 4]. x y < y,x >: standard inner product. x x = 1 : x is normalized x y = 0

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

2 Computing complex square roots of a real matrix

2 Computing complex square roots of a real matrix On computing complex square roots of real matrices Zhongyun Liu a,, Yulin Zhang b, Jorge Santos c and Rui Ralha b a School of Math., Changsha University of Science & Technology, Hunan, 410076, China b

More information

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis Volume 8, pp 127-137, 1999 Copyright 1999, ISSN 1068-9613 ETNA etna@mcskentedu BOUNDS FOR THE MINIMUM EIGENVALUE OF A SYMMETRIC TOEPLITZ MATRIX HEINRICH VOSS

More information

How to Detect Definite Hermitian Pairs

How to Detect Definite Hermitian Pairs How to Detect Definite Hermitian Pairs Françoise Tisseur School of Mathematics The University of Manchester ftisseur@ma.man.ac.uk http://www.ma.man.ac.uk/~ftisseur/ Joint work with Chun-Hua Guo and Nick

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Research Matters. February 25, The Nonlinear Eigenvalue. Director of Research School of Mathematics

Research Matters. February 25, The Nonlinear Eigenvalue. Director of Research School of Mathematics Research Matters February 25, 2009 The Nonlinear Eigenvalue Nick Problem: HighamPart I Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester Woudschoten

More information

A Note on Inverse Iteration

A Note on Inverse Iteration A Note on Inverse Iteration Klaus Neymeyr Universität Rostock, Fachbereich Mathematik, Universitätsplatz 1, 18051 Rostock, Germany; SUMMARY Inverse iteration, if applied to a symmetric positive definite

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

From Lay, 5.4. If we always treat a matrix as defining a linear transformation, what role does diagonalisation play?

From Lay, 5.4. If we always treat a matrix as defining a linear transformation, what role does diagonalisation play? Overview Last week introduced the important Diagonalisation Theorem: An n n matrix A is diagonalisable if and only if there is a basis for R n consisting of eigenvectors of A. This week we ll continue

More information

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra Definite versus Indefinite Linear Algebra Christian Mehl Institut für Mathematik TU Berlin Germany 10th SIAM Conference on Applied Linear Algebra Monterey Bay Seaside, October 26-29, 2009 Indefinite Linear

More information

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

Algebra Workshops 10 and 11

Algebra Workshops 10 and 11 Algebra Workshops 1 and 11 Suggestion: For Workshop 1 please do questions 2,3 and 14. For the other questions, it s best to wait till the material is covered in lectures. Bilinear and Quadratic Forms on

More information

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems JAMES H. MONEY and QIANG YE UNIVERSITY OF KENTUCKY eigifp is a MATLAB program for computing a few extreme eigenvalues

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For

More information

Positive definite preserving linear transformations on symmetric matrix spaces

Positive definite preserving linear transformations on symmetric matrix spaces Positive definite preserving linear transformations on symmetric matrix spaces arxiv:1008.1347v1 [math.ra] 7 Aug 2010 Huynh Dinh Tuan-Tran Thi Nha Trang-Doan The Hieu Hue Geometry Group College of Education,

More information

The eigenvalue map and spectral sets

The eigenvalue map and spectral sets The eigenvalue map and spectral sets p. 1/30 The eigenvalue map and spectral sets M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland

More information

Solving Polynomial Eigenproblems by Linearization

Solving Polynomial Eigenproblems by Linearization Solving Polynomial Eigenproblems by Linearization Nick Higham School of Mathematics University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/ Joint work with D. Steven Mackey and Françoise

More information

Linear algebra and applications to graphs Part 1

Linear algebra and applications to graphs Part 1 Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Eigenvalue problems and optimization

Eigenvalue problems and optimization Notes for 2016-04-27 Seeking structure For the past three weeks, we have discussed rather general-purpose optimization methods for nonlinear equation solving and optimization. In practice, of course, we

More information

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems

Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan April 28, 2011 T.M. Huang (Taiwan Normal Univ.)

More information

MA 265 FINAL EXAM Fall 2012

MA 265 FINAL EXAM Fall 2012 MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators

More information

T(λ)x = 0 (1.1) k λ j A j x = 0 (1.3)

T(λ)x = 0 (1.1) k λ j A j x = 0 (1.3) PROJECTION METHODS FOR NONLINEAR SPARSE EIGENVALUE PROBLEMS HEINRICH VOSS Key words. nonlinear eigenvalue problem, iterative projection method, Jacobi Davidson method, Arnoldi method, rational Krylov method,

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems

Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems Yuji Nakatsukasa Abstract When a symmetric block diagonal matrix [ A 1 A2 ] undergoes an

More information

Dimension. Eigenvalue and eigenvector

Dimension. Eigenvalue and eigenvector Dimension. Eigenvalue and eigenvector Math 112, week 9 Goals: Bases, dimension, rank-nullity theorem. Eigenvalue and eigenvector. Suggested Textbook Readings: Sections 4.5, 4.6, 5.1, 5.2 Week 9: Dimension,

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

More information

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as Reading [SB], Ch. 16.1-16.3, p. 375-393 1 Quadratic Forms A quadratic function f : R R has the form f(x) = a x. Generalization of this notion to two variables is the quadratic form Q(x 1, x ) = a 11 x

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Nonlinear Eigenvalue Problems

Nonlinear Eigenvalue Problems 115 Nonlinear Eigenvalue Problems Heinrich Voss Hamburg University of Technology 115.1 Basic Properties........................................ 115-2 115.2 Analytic matrix functions.............................

More information

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity

More information

A Note on Simple Nonzero Finite Generalized Singular Values

A Note on Simple Nonzero Finite Generalized Singular Values A Note on Simple Nonzero Finite Generalized Singular Values Wei Ma Zheng-Jian Bai December 21 212 Abstract In this paper we study the sensitivity and second order perturbation expansions of simple nonzero

More information

Symmetric matrices and dot products

Symmetric matrices and dot products Symmetric matrices and dot products Proposition An n n matrix A is symmetric iff, for all x, y in R n, (Ax) y = x (Ay). Proof. If A is symmetric, then (Ax) y = x T A T y = x T Ay = x (Ay). If equality

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 41, pp. 159-166, 2014. Copyright 2014,. ISSN 1068-9613. ETNA MAX-MIN AND MIN-MAX APPROXIMATION PROBLEMS FOR NORMAL MATRICES REVISITED JÖRG LIESEN AND

More information

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true?

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true? . Let m and n be two natural numbers such that m > n. Which of the following is/are true? (i) A linear system of m equations in n variables is always consistent. (ii) A linear system of n equations in

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

CHAPTER 5 REVIEW. c 1. c 2 can be considered as the coordinates of v
