SOLVING RATIONAL EIGENVALUE PROBLEMS VIA LINEARIZATION

YANGFENG SU† AND ZHAOJUN BAI‡

Abstract. The rational eigenvalue problem is an emerging class of nonlinear eigenvalue problems arising from a variety of physical applications. In this paper we propose a linearization-based method to solve the rational eigenvalue problem. The proposed method converts the rational eigenvalue problem into a well-studied linear eigenvalue problem and, at the same time, exploits and preserves the structure and properties of the original rational eigenvalue problem. For example, the low-rank property leads to a trimmed linearization. We show that solving a class of rational eigenvalue problems is just as convenient and efficient as solving linear eigenvalue problems of slightly larger sizes.

Key words. Rational eigenvalue problem, linearization

AMS subject classifications. 65F15, 65F50, 15A18

1. Introduction. In recent years there has been a great deal of interest in the rational eigenvalue problem (REP)

(1.1)    R(λ)x = 0,

where R(λ) is an n-by-n matrix rational function of the form

(1.2)    R(λ) = P(λ) − \sum_{i=1}^{k} \frac{s_i(λ)}{q_i(λ)} E_i.

Here P(λ) is an n-by-n matrix polynomial in λ of degree d, s_i(λ) and q_i(λ) are scalar polynomials of degrees n_i and d_i, respectively, and the E_i are n-by-n constant matrices. The REP (1.1)-(1.2) arises from the optimization of acoustic emissions of high speed trains [13], free vibration of plates with elastically attached masses [20], vibration of fluid-solid structures [21], free vibrations of a structure with a viscoelastic constitutive relation describing the behavior of the material [16], and electronic structure calculations of quantum dots [23, 10].

A brute-force approach to solve the REP (1.1) is to multiply equation (1.1) by the scalar polynomial \prod_{i=1}^{k} q_i(λ), turning it into a polynomial eigenvalue problem (PEP) of degree d + d_1 + ⋯ + d_k. Subsequently, the PEP is converted into a linear eigenvalue problem (LEP) by the process known as linearization. This approach is noted in [16] and employed in [10, 9] for electronic structure calculations of quantum dots. Recent studies of linearization techniques for the PEP can be found in [13, 14, 7, 8] and the references therein. This is a practical approach only if the number k of rational terms and the degrees d_i of the polynomials q_i(λ) are small, say k = d_1 = 1. However, when k and/or the d_i are large, it leads to a PEP of much higher degree and becomes impractical for large-scale problems. Furthermore, the possible low-rank property of the matrices E_i is lost in the linearized eigenvalue problem.

An alternative approach is to treat the REP (1.1) as a general nonlinear eigenvalue problem (NEP) and solve it by a nonlinear eigensolver, such as Picard iteration (self-consistent iteration), Newton's method, the nonlinear Rayleigh quotient method, the nonlinear Jacobi-Davidson method, or the nonlinear Arnoldi method [16, 18, 20, 22]. This approach limits the exploitation of the underlying rich structure and properties of the REP (1.1), and is challenging in convergence analysis and in validation of the computed eigenvalues.

† School of Mathematical Sciences, Fudan University, Shanghai 200433, China. Affiliated with the Scientific Computing Key Laboratory of Shanghai Normal University, Division of Computational Science, E-Institute of Shanghai Universities, People's Republic of China (yfsu@fudan.edu.cn).
‡ Department of Computer Science and Mathematics, University of California, Davis, CA 95616, USA (bai@cs.ucdavis.edu).

In this paper we propose a linearization-based approach to solve the REP (1.1). Similarly to the linearization of the PEP, the new approach converts the REP (1.1) into a linear eigenvalue problem (LEP), and it exploits and preserves the structure and properties of the REP (1.1) as much as possible. It has a number of advantages. For example, the low-rank property of the matrices E_i, frequently encountered in applications, leads to a trimmed linearization, namely an LEP whose size is only slightly larger than that of the original REP (1.1). The symmetry of the REP can also be preserved in the LEP. We show that, under mild assumptions, solving a class of REPs is just as convenient and efficient as solving an LEP of slightly larger size.

The rest of this paper is organized as follows. In section 2 we formalize the definition of the REP (1.1) and the assumptions used throughout. In section 3 we present a linearization scheme. In section 4 we apply the proposed linearization scheme to a number of REPs from different applications. Numerical examples are given in section 5.

2. Settings. We assume throughout that the matrix rational function R(λ) is regular, that is, det(R(λ)) ≢ 0. The roots of the q_i(λ) are the poles of R(λ); R(λ) is not defined at these poles. A scalar λ such that det(R(λ)) = 0 is referred to as an eigenvalue, and a corresponding nonzero vector x satisfying equation (1.1) is called an eigenvector. The pair (λ, x) is referred to as an eigenpair.

Let us denote the matrix polynomial P(λ) of degree d in λ as

(2.1)    P(λ) = λ^d A_d + λ^{d−1} A_{d−1} + ⋯ + λ A_1 + A_0,

where the A_i are n-by-n constant matrices. We assume throughout that the leading coefficient matrix A_d is nonsingular, which is equivalent to the assumption of a monic matrix polynomial (A_d = I) in the study of matrix polynomials [5]. The treatment of a singular A_d is beyond the scope of this paper.

We assume that s_i(λ) and q_i(λ) are coprime, that is, they have no common factors. Furthermore, the rational functions s_i(λ)/q_i(λ) are proper, that is, s_i(λ) has smaller degree than q_i(λ). Otherwise, by polynomial long division, an improper rational function can be written as the sum of a polynomial and a proper rational function,

    s_i(λ)/q_i(λ) = p_i(λ) + \hat{s}_i(λ)/q_i(λ),

with \hat{s}_i(λ) of smaller degree than q_i(λ). Subsequently, the term p_i(λ)E_i can be absorbed into the matrix polynomial term P(λ): P(λ) := P(λ) + p_i(λ)E_i. If necessary, we assume that the leading coefficient matrix of the updated matrix polynomial is still nonsingular.

Since s_i(λ) and q_i(λ) are coprime, the proper rational function s_i(λ)/q_i(λ) can be represented as

(2.2)    s_i(λ)/q_i(λ) = a_i^T (C_i − λ D_i)^{−1} b_i

for some matrices C_i, D_i ∈ R^{d_i×d_i} and vectors a_i, b_i ∈ R^{d_i}. Moreover, D_i is nonsingular. The process of constructing a quadruple (C_i, D_i, a_i, b_i) satisfying (2.2) is called a minimal realization in the theory of control systems; see, for example, [1] and [19].

Finally, we assume that the coefficient matrices E_i admit the rank-revealing decompositions

(2.3)    E_i = L_i U_i^T,

where L_i, U_i ∈ R^{n×r_i} have full column rank r_i. In section 4 we will see that the decompositions (2.3) are often immediately available in practical REPs. The rank of E_i is typically much smaller than the size n, that is, r_i ≪ n. Algorithms for computing such sparse rank-revealing decompositions can be found in [17].
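To make the realization (2.2) concrete, the following small MATLAB sketch (not part of the paper; the data and variable names are ours) builds a companion-type realization of a proper rational function and checks it against direct evaluation at a test point.

    % illustrative sketch: realize s(lambda)/q(lambda) = a'*(C - lambda*D)^{-1}*b
    % for a proper rational function, using the companion matrix of q
    s = [3 2];            % s(lambda) = 3 + 2*lambda        (ascending coefficients)
    q = [-4 0 1];         % q(lambda) = lambda^2 - 4, monic
    d = numel(q) - 1;
    Cq = [zeros(d-1,1) eye(d-1); -q(1:d)];   % companion matrix of q
    D  = eye(d);
    a  = [s(:); zeros(d-numel(s),1)];        % coefficients of s, padded with zeros
    b  = -[zeros(d-1,1); 1];
    lam = 0.7;                               % test point (not a root of q)
    r1  = a' * ((Cq - lam*D) \ b);
    r2  = polyval(fliplr(s), lam) / polyval(fliplr(q), lam);
    fprintf('realization: %g   direct: %g\n', r1, r2)

For the simple rational terms appearing in section 4, the realization is one-dimensional and can be written down by inspection.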

3. Linearization. Under the assumptions of the previous section, we now derive a linearization method for solving the REP (1.1). By the realizations (2.2) of the rational functions s_i(λ)/q_i(λ) and the factorizations (2.3) of the coefficient matrices E_i, each rational term of R(λ) can be rewritten as

    \frac{s_i(λ)}{q_i(λ)} E_i = \bigl(a_i^T (C_i − λD_i)^{−1} b_i\bigr) L_i U_i^T
                              = L_i \bigl(a_i^T (C_i − λD_i)^{−1} b_i \otimes I_{r_i}\bigr) U_i^T
                              = L_i (I_{r_i} \otimes a_i)^T \bigl(I_{r_i} \otimes C_i − λ (I_{r_i} \otimes D_i)\bigr)^{−1} (I_{r_i} \otimes b_i) U_i^T,

where \otimes is the Kronecker product. Define

    C = \mathrm{diag}(I_{r_1} \otimes C_1, I_{r_2} \otimes C_2, …, I_{r_k} \otimes C_k),
    D = \mathrm{diag}(I_{r_1} \otimes D_1, I_{r_2} \otimes D_2, …, I_{r_k} \otimes D_k),
    L = [\, L_1 (I_{r_1} \otimes a_1)^T, \; L_2 (I_{r_2} \otimes a_2)^T, \; …, \; L_k (I_{r_k} \otimes a_k)^T \,],
    U = [\, U_1 (I_{r_1} \otimes b_1)^T, \; U_2 (I_{r_2} \otimes b_2)^T, \; …, \; U_k (I_{r_k} \otimes b_k)^T \,],

where C and D are of size m-by-m, L and U are of size n-by-m, and m = r_1 d_1 + r_2 d_2 + ⋯ + r_k d_k. Then the rational terms of R(λ) can be compactly represented in the realization form

(3.1)    \sum_{i=1}^{k} \frac{s_i(λ)}{q_i(λ)} E_i = L (C − λD)^{−1} U^T.

We note that the matrix D is nonsingular since the matrices D_i in (2.2) are nonsingular. The eigenvalues of the matrix pencil C − λD are the poles of R(λ).

Using the representation (3.1), the REP (1.1) can be written equivalently in the compact form

(3.2)    \bigl[ P(λ) − L (C − λD)^{−1} U^T \bigr] x = 0,

and the matrix rational function R(λ) is written as

(3.3)    R(λ) = P(λ) − L (C − λD)^{−1} U^T.

If P(λ) is linear and is denoted as P(λ) = A − λB, then the REP (3.2) is of the form

(3.4)    \bigl[ A − λB − L (C − λD)^{−1} U^T \bigr] x = 0.

By introducing the auxiliary vector y = −(C − λD)^{−1} U^T x, equation (3.4) can be written as the following LEP:

(3.5)    (\mathcal{A} − λ\mathcal{B}) z = 0,

where

    \mathcal{A} = \begin{bmatrix} A & L \\ U^T & C \end{bmatrix}, \quad
    \mathcal{B} = \begin{bmatrix} B & 0 \\ 0 & D \end{bmatrix}, \quad
    z = \begin{bmatrix} x \\ y \end{bmatrix}.
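A small numerical illustration (random data, not from the paper): the sketch below assembles the pencil (3.5) for a linear polynomial part plus one rank-one rational term and checks that an eigenvalue of the LEP away from the pole is indeed an eigenvalue of R(λ).

    % illustrative sketch of (3.5): P(lambda) = A - lambda*B plus one rank-one term
    n = 5; rng(1);
    A  = randn(n);  B = eye(n);
    Lf = randn(n,1); Uf = randn(n,1);      % E = Lf*Uf', rank one
    C0 = 2; D0 = 1;                        % pole of R at lambda = C0/D0 = 2
    calA = [A, Lf; Uf', C0];
    calB = blkdiag(B, D0);
    ev = eig(calA, calB);                  % eigenvalues of the trimmed LEP
    [~, j] = max(abs(ev - C0/D0));         % pick an eigenvalue away from the pole
    lam = ev(j);
    Rlam = A - lam*B - Lf*((C0 - lam*D0)\Uf');
    min(svd(Rlam))                         % tiny: lam is an eigenvalue of R(lambda)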

In general, if the matrix polynomial P(λ) is of the form (2.1), we can first write the REP (3.2) as a semi-PEP of the form

(3.6)    \bigl( λ^d A_d + λ^{d−1} A_{d−1} + ⋯ + λ A_1 + \tilde{A}(λ) \bigr) x = 0,

where \tilde{A}(λ) ≡ A_0 − L(C − λD)^{−1}U^T. Then, by symbolically applying the well-known (first) companion form linearization to the semi-PEP (3.6), we have

(3.7)    \left( \begin{bmatrix} A_{d−1} & A_{d−2} & \cdots & A_1 & \tilde{A}(λ) \\ −I_n & & & & \\ & −I_n & & & \\ & & \ddots & & \\ & & & −I_n & \end{bmatrix} − λ \begin{bmatrix} −A_d & & & & \\ & −I_n & & & \\ & & \ddots & & \\ & & & & −I_n \end{bmatrix} \right) \begin{bmatrix} λ^{d−1}x \\ λ^{d−2}x \\ \vdots \\ x \end{bmatrix} = 0.

Expanding \tilde{A}(λ) = A_0 − L(C − λD)^{−1}U^T, the above equation acts on the last block component x exactly as in (3.4). Therefore, by introducing the auxiliary variable

    y = −(C − λD)^{−1} [\,0 \;\cdots\; 0 \;\; U^T\,] \begin{bmatrix} λ^{d−1}x \\ \vdots \\ x \end{bmatrix} = −(C − λD)^{−1} U^T x,

we derive the following linearization of the REP (1.1):

(3.8)    (\mathcal{A} − λ\mathcal{B}) z = 0,

where

    \mathcal{A} = \begin{bmatrix} A_{d−1} & A_{d−2} & \cdots & A_1 & A_0 & L \\ −I_n & & & & & \\ & −I_n & & & & \\ & & \ddots & & & \\ & & & −I_n & & \\ & & & & U^T & C \end{bmatrix}, \quad
    \mathcal{B} = \begin{bmatrix} −A_d & & & & \\ & −I_n & & & \\ & & \ddots & & \\ & & & −I_n & \\ & & & & D \end{bmatrix}, \quad
    z = \begin{bmatrix} λ^{d−1}x \\ λ^{d−2}x \\ \vdots \\ x \\ y \end{bmatrix}.

The size of the matrices \mathcal{A} and \mathcal{B} is nd + m, where m = r_1 d_1 + r_2 d_2 + ⋯ + r_k d_k. In the case where all the coefficient matrices E_i have full rank, i.e., r_i = n, the LEP (3.8) has size n\hat{d}, where \hat{d} = d + d_1 + ⋯ + d_k. This is the same size as the one obtained by the brute-force approach. However, typically r_i ≪ n in practice, and then nd + m ≪ n\hat{d}. The LEP (3.8) is a trimmed linearization of the REP (1.1). This will be illustrated by four REPs from applications in section 4. Note that under the assumption of nonsingularity of the matrix A_d, the matrix \mathcal{B} is nonsingular. Therefore all eigenvalues of the LEP (3.8) are finite; there is no infinite eigenvalue.
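As an illustration of the block structure of (3.8), the following sketch (random data, degree d = 3, one rank-one rational term; not from the paper) assembles \mathcal{A} and \mathcal{B} and verifies that an eigenvalue of the pencil away from the pole is an eigenvalue of R(λ), consistent with Theorem 3.1 below.

    % illustrative sketch: trimmed companion-form linearization (3.8), d = 3
    n = 4; d = 3; rng(2);
    Ac = arrayfun(@(j) randn(n), 0:d, 'UniformOutput', false);  % Ac{j+1} = A_j
    Lf = randn(n,1); Uf = randn(n,1); C0 = 3; D0 = 1;           % pole at lambda = 3
    m  = 1;  N = n*d + m;
    calA = zeros(N);  calB = zeros(N);
    calA(1:n, :) = [cell2mat(Ac(d:-1:1)), Lf];                  % [A_{d-1} ... A_1 A_0  L]
    for j = 1:d-1
        calA(j*n+(1:n), (j-1)*n+(1:n)) = -eye(n);               % -I_n blocks in calA
        calB(j*n+(1:n),  j*n+(1:n))    = -eye(n);               % -I_n blocks in calB
    end
    calA(end, (d-1)*n+(1:n)) = Uf';  calA(end, end) = C0;       % last block row [0 ... U' C]
    calB(1:n, 1:n) = -Ac{d+1};       calB(end, end) = D0;       % diag(-A_d, -I, ..., -I, D)
    ev = eig(calA, calB);
    [~, j] = max(abs(ev - C0/D0));  lam = ev(j);                % eigenvalue away from the pole
    Rlam = -Lf*((C0 - lam*D0)\Uf');
    for p = 0:d, Rlam = Rlam + lam^p*Ac{p+1}; end
    min(svd(Rlam))                                              % small: lam is an eigenvalue of R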

The following theorem shows the connection between the eigenvalues of the REP (1.1) and those of the LEP (3.8).

THEOREM 3.1.
(a) If λ is an eigenvalue of the REP (1.1), then it is an eigenvalue of the LEP (3.8).
(b) If λ is an eigenvalue of the LEP (3.8) and is not a pole of R(λ), then it is an eigenvalue of the REP (1.1).

Proof. Define the dn-by-dn matrices

    V_L(λ) = \begin{bmatrix} λA_d + A_{d−1} & A_{d−2} & \cdots & A_1 & I_n \\ −I_n & λI_n & & & \\ & −I_n & \ddots & & \\ & & \ddots & λI_n & \\ & & & −I_n & 0 \end{bmatrix}, \qquad
    V_R(λ) = \begin{bmatrix} I_n & & & λ^{d−1}I_n \\ & I_n & & λ^{d−2}I_n \\ & & \ddots & \vdots \\ & & & I_n \end{bmatrix}.

We have det(V_L(λ)) = (−1)^{(d−1)n} and det(V_R(λ)) = 1, and a direct computation shows that

(3.9)    (\mathcal{A} − λ\mathcal{B}) \begin{bmatrix} V_R(λ) & \\ & I_m \end{bmatrix}
        = \begin{bmatrix} V_L(λ) & \\ & I_m \end{bmatrix}
          \begin{bmatrix} I_{(d−1)n} & & \\ & P(λ) & L \\ & U^T & C − λD \end{bmatrix}.

Taking determinants on both sides of (3.9) and using det(V_R(λ)) = 1, we obtain

(3.10)    det(\mathcal{A} − λ\mathcal{B}) = (−1)^{(d−1)n} \det \begin{bmatrix} P(λ) & L \\ U^T & C − λD \end{bmatrix}.

If λ is not a pole of R(λ), then λ is not an eigenvalue of the pencil C − λD, so C − λD is nonsingular and we have the block factorization

    \begin{bmatrix} P(λ) & L \\ U^T & C − λD \end{bmatrix}
    = \begin{bmatrix} I & L(C − λD)^{−1} \\ 0 & I \end{bmatrix}
      \begin{bmatrix} R(λ) & 0 \\ U^T & C − λD \end{bmatrix},

where R(λ) is defined by (3.3). Hence

(3.11)    det(\mathcal{A} − λ\mathcal{B}) = \pm \det(R(λ)) \, \det(C − λD).

The proof of part (a) follows immediately from the identity (3.11): an eigenvalue λ of the REP (1.1) is not a pole, so det R(λ) = 0 and therefore det(\mathcal{A} − λ\mathcal{B}) = 0. For part (b), if λ is an eigenvalue of the pencil \mathcal{A} − λ\mathcal{B} and is not a pole of R(λ), then C − λD is nonsingular, and by (3.11), det R(λ) = 0; that is, λ is an eigenvalue of the REP (1.1). ∎

We note that the condition in Theorem 3.1(b) that λ is not a pole of R(λ) is necessary. Consider the following example:

(3.12)    R(λ)x = \left( \begin{bmatrix} λ − 1 & 0 \\ 0 & λ \end{bmatrix} − \frac{1}{λ − 1}\, e_2 e_2^T \right) x = 0,

where e_2 is the second column of the 2-by-2 identity matrix I_2. Since det(R(λ)) = (λ − 1)\bigl(λ − 1/(λ − 1)\bigr) = λ^2 − λ − 1, the REP (3.12) has two eigenvalues, (1 ± \sqrt{5})/2, and λ = 1 is a pole. Let y = −(λ − 1)^{−1} e_2^T x; then the corresponding LEP is

    (\mathcal{A} − λ\mathcal{B}) z
    = \left( \begin{bmatrix} −1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & −1 \end{bmatrix}
      − λ \begin{bmatrix} −1 & & \\ & −1 & \\ & & −1 \end{bmatrix} \right)
      \begin{bmatrix} x \\ y \end{bmatrix} = 0.

It has three eigenvalues, 1 and (1 ± \sqrt{5})/2. But λ = 1 is not an eigenvalue of the REP (3.12); it is a pole of R(λ).
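A quick numerical check of the example (3.12), with the matrices exactly as written above:

    % the 3-by-3 LEP of (3.12): lambda = 1 appears as an eigenvalue of the pencil
    % even though R(1) is undefined (lambda = 1 is a pole of R)
    calA = [-1 0 0; 0 0 1; 0 1 -1];
    calB = -eye(3);
    sort(eig(calA, calB))              % (1-sqrt(5))/2, 1, (1+sqrt(5))/2
    R = @(lam) [lam-1, 0; 0, lam - 1/(lam-1)];
    lam = (1 + sqrt(5))/2;
    min(abs(eig(R(lam))))              % essentially zero: lam is an REP eigenvalue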

Two additional remarks are in order. First, the realization of a rational function is not unique. For example, the function σ/(λ − σ)^2 can be realized as

    \frac{σ}{(λ − σ)^2} = \begin{bmatrix} 1 & 0 \end{bmatrix} \left( \begin{bmatrix} σ & 1 \\ 0 & σ \end{bmatrix} − λ I_2 \right)^{−1} \begin{bmatrix} 0 \\ −σ \end{bmatrix},

or in a symmetric form,

    \frac{σ}{(λ − σ)^2} = \begin{bmatrix} 0 & 1 \end{bmatrix} \left( \begin{bmatrix} −σ & −σ \\ −σ & 0 \end{bmatrix} − λ \begin{bmatrix} 0 & −1 \\ −1 & 0 \end{bmatrix} \right)^{−1} \begin{bmatrix} 0 \\ 1 \end{bmatrix}.

The study of realizations can be found in [1] and the references therein. Second, there are many different ways to linearize a matrix polynomial, including the recent work [7, 8, 13, 14]. Many of these linearizations can easily be integrated into the proposed linearization of the REP. For example, if the original REP (1.1) is symmetric, namely the A_i are symmetric, the matrices C and D are symmetric in the realization, and L = U, then we can use a symmetric linearization proposed in [7]. Specifically, for a symmetric matrix polynomial of degree d = 3, a symmetric linearization is given by

(3.13)    \left( \begin{bmatrix} 0 & −A_3 & 0 & 0 \\ −A_3 & −A_2 & 0 & 0 \\ 0 & 0 & A_0 & U \\ 0 & 0 & U^T & C \end{bmatrix}
          + λ \begin{bmatrix} 0 & 0 & A_3 & 0 \\ 0 & A_3 & A_2 & 0 \\ A_3 & A_2 & A_1 & 0 \\ 0 & 0 & 0 & −D \end{bmatrix} \right)
          \begin{bmatrix} λ^2 x \\ λ x \\ x \\ y \end{bmatrix} = 0,

where y = −(C − λD)^{−1} U^T x.

4. Applications. In this section we apply the linearization proposed in section 3 to four REPs from applications.

4.1. Loaded elastic string. We consider the following rational eigenvalue problem, arising from the finite element discretization of a boundary value problem describing the eigenvibration of a string with a load of mass attached by an elastic spring:

(4.1)    \left( A − λB + \frac{λ}{λ − σ}\, E \right) x = 0,

where A and B are tridiagonal and symmetric positive definite, E = e_n e_n^T, e_n is the last column of the identity matrix, and σ is a parameter [2, 20].

Following the linearization proposed in section 3, the first step is to write the rational function λ/(λ − σ) in proper form. The REP (4.1) becomes

(4.2)    \left( A + e_n e_n^T − λB + \frac{σ}{λ − σ}\, e_n e_n^T \right) x = 0.

It can then be written in the realization form (3.4):

(4.3)    \left[ A + e_n e_n^T − λB − e_n \left( 1 − \frac{λ}{σ} \right)^{−1} e_n^T \right] x = 0.

By defining the auxiliary vector

    y = −\left( 1 − \frac{λ}{σ} \right)^{−1} e_n^T x,

we have the linear eigenvalue problem

(4.4)    (\mathcal{A} − λ\mathcal{B}) z = 0,

where

    \mathcal{A} = \begin{bmatrix} A + e_n e_n^T & e_n \\ e_n^T & 1 \end{bmatrix}, \quad
    \mathcal{B} = \begin{bmatrix} B & 0 \\ 0 & 1/σ \end{bmatrix}, \quad
    z = \begin{bmatrix} x \\ y \end{bmatrix}.

The (n+1)-by-(n+1) matrices \mathcal{A} and \mathcal{B} have the same structure and properties as the coefficient matrices A and B of the original REP (4.1): they are tridiagonal and symmetric positive definite.

An alternative way to solve the REP (4.1) is to first transform it into a quadratic eigenvalue problem (QEP) by multiplying equation (4.1) by the linear factor λ − σ:

(4.5)    Q(λ) x = \bigl[ λ^2 B − λ (A + σB + e_n e_n^T) + σA \bigr] x = 0,

and then to convert the QEP (4.5) into an equivalent linear eigenvalue problem by the linearization process. A symmetric linearization is

(4.6)    \left( \begin{bmatrix} −(A + σB + e_n e_n^T) & σA \\ σA & 0 \end{bmatrix}
         + λ \begin{bmatrix} B & 0 \\ 0 & −σA \end{bmatrix} \right)
         \begin{bmatrix} λ x \\ x \end{bmatrix} = 0.

This is a generalized symmetric indefinite eigenvalue problem. If Q(a) < 0 (negative definite) for some scalar a, then by letting λ = µ + a, the QEP (4.5) becomes

    \bigl[ µ^2 B − µ (A + (σ − 2a)B + e_n e_n^T) + Q(a) \bigr] x = 0.

Consequently, it can be linearized into a generalized symmetric definite eigenvalue problem:

(4.7)    \left( \begin{bmatrix} −(A + (σ − 2a)B + e_n e_n^T) & Q(a) \\ Q(a) & 0 \end{bmatrix}
         + µ \begin{bmatrix} B & 0 \\ 0 & −Q(a) \end{bmatrix} \right)
         \begin{bmatrix} µ x \\ x \end{bmatrix} = 0.

A practical issue is the existence of the shift a and how to find it numerically; see the recent work [6]. We note that the size of the LEPs (4.6) and (4.7) is 2n, whereas the size of the LEP (4.4) obtained by the new linearization process is only n + 1.

4.2. Quadratic eigenvalue problem with low-rank stiffness matrix. Consider the QEP

(4.8)    (λ^2 M + λD + K) x = 0.

If K is singular, then zero is an eigenvalue, since Kx = 0 for some nonzero vector x. Let us consider how to compute the nonzero eigenvalues of the QEP (4.8) by exploiting the fact that K is singular and of low rank. Let K = LU^T be a full-rank decomposition, where L, U ∈ R^{n×r} and r is the rank of K. Following the linearization discussed in section 3, for λ ≠ 0 the QEP (4.8) can be written in the REP form

    \left( λM + D + \frac{1}{λ}\, L U^T \right) x = 0.

In the compact form (3.2), it becomes

    \bigl[ λM + D − L (−λ)^{−1} U^T \bigr] x = 0.

Note that the polynomial part P(λ) = λM + D is linear. Hence, by introducing the auxiliary vector

    y = −(−λ)^{−1} U^T x = \frac{1}{λ}\, U^T x,

we have the LEP

(4.9)    \left( \begin{bmatrix} D & L \\ U^T & 0 \end{bmatrix} − λ \begin{bmatrix} −M & 0 \\ 0 & I_r \end{bmatrix} \right) \begin{bmatrix} x \\ y \end{bmatrix} = 0.

If L = K and U = I_n, then the LEP (4.9) is a linearization of the QEP in the first companion form; if L = I_n and U = K^T, then the LEP (4.9) is a linearization in the second companion form. If K has rank r < n, then (4.9) is not a linearization of the QEP (4.8) under the standard definition of linearization of PEPs [5]. However, by Theorem 3.1, we can conclude that the nonzero eigenvalues of the QEP (4.8) and of the LEP (4.9) are the same. The LEP (4.9) is a trimmed linearization of the QEP: its order n + r can be significantly smaller than the size 2n of the linearizations in companion form.

If M is singular, or more generally if the leading coefficient matrix A_d of the matrix polynomial is singular, then the PEP P(λ)x = 0 has infinite eigenvalues and/or a corresponding singular structure. It is discussed in [3] how to exploit the structure of the zero blocks in the coefficient matrices to obtain a trimmed linearization by (partially) deflating those infinite eigenvalues and/or singular structures. In the linearization (4.9), by using a full-rank factorization of the stiffness matrix K, we derived a trimmed linearization in which the zero eigenvalues are explicitly deflated.
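For instance, the following sketch (random data, not from the paper) builds the pencil (4.9) for a small QEP whose stiffness matrix has rank r, and compares the number of its eigenvalues with the count of nonzero eigenvalues returned by polyeig; generically the QEP has exactly n − r zero eigenvalues, which the trimmed pencil deflates.

    % illustrative sketch: nonzero QEP eigenvalues via the trimmed LEP (4.9)
    n = 6; r = 2; rng(3);
    M  = randn(n); M = M*M' + n*eye(n);            % nonsingular mass matrix
    Dd = randn(n);                                 % damping matrix
    Lf = randn(n,r); Uf = randn(n,r); K = Lf*Uf';  % stiffness of rank r
    calA = [Dd, Lf; Uf', zeros(r)];
    calB = blkdiag(-M, eye(r));
    mu  = eig(calA, calB);                         % n + r values: the nonzero QEP eigenvalues
    lam = polyeig(K, Dd, M);                       % all 2n QEP eigenvalues
    fprintf('nonzero QEP eigenvalues: %d,  size of trimmed LEP: %d\n', ...
            sum(abs(lam) > 1e-8), n + r)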

4.3. Vibration of a fluid-solid structure. Let us consider an REP arising from the simulation of mechanical vibrations of fluid-solid structures [15, 16, 21]. It is of the form

(4.10)    \left( A − λB + \sum_{i=1}^{k} \frac{λ}{λ − σ_i}\, E_i \right) x = 0,

where the poles σ_i, i = 1, 2, …, k, are positive, the matrices A and B are symmetric positive definite, and E_i = C_i C_i^T, where C_i ∈ R^{n×r_i} has rank r_i for i = 1, 2, …, k.

Following the linearization proposed in section 3, we first write the rational terms of (4.10) in proper form:

(4.11)    \left( A + \sum_{i=1}^{k} C_i C_i^T − λB − \sum_{i=1}^{k} \frac{σ_i}{σ_i − λ}\, C_i C_i^T \right) x = 0.

Let

    C = [\, C_1, C_2, …, C_k \,], \qquad Σ = \mathrm{diag}(σ_1 I_{r_1}, …, σ_k I_{r_k}),

where I_{r_i} is the r_i-by-r_i identity matrix. Then equation (4.11) can be written as

    \bigl[ A + CC^T − λB − C (I − λΣ^{−1})^{−1} C^T \bigr] x = 0.

By introducing the variable y = −(I − λΣ^{−1})^{−1} C^T x, we have the LEP

(4.12)    (\mathcal{A} − λ\mathcal{B}) \begin{bmatrix} x \\ y \end{bmatrix} = 0,

where

    \mathcal{A} = \begin{bmatrix} A + CC^T & C \\ C^T & I \end{bmatrix}, \qquad
    \mathcal{B} = \begin{bmatrix} B & 0 \\ 0 & Σ^{−1} \end{bmatrix}

are of size n + \sum_{i=1}^{k} r_i. Note that \mathcal{A} is symmetric and \mathcal{B} is symmetric positive definite. The LEP (4.12) is a generalized symmetric definite eigenvalue problem, which can be solved by a symmetric eigensolver such as the implicitly restarted Lanczos algorithm [12] or the thick-restart Lanczos method [24].

Mazurenko and Voss [15] addressed the question of determining the number of eigenvalues of the REP (4.10) in a given interval (α, β), using the fact that the eigenvalues of (4.10) can be characterized as minmax values of a Rayleigh functional [15]. Using the linearization (4.12), the question can be answered by computing the inertias of the symmetric matrix pencil \mathcal{A} − λ\mathcal{B}. First, we have the following proposition.

PROPOSITION 4.1. If A and B are positive definite and the poles σ_i > 0, then all eigenvalues of the REP (4.10) are real and positive.

Proof. Since the matrix pencil \mathcal{A} − λ\mathcal{B} in (4.12) is a symmetric definite pencil, all of its eigenvalues are real. Since \mathcal{A} is positive semidefinite, all eigenvalues are nonnegative. Therefore we only need to show that zero is not an eigenvalue of the pencil \mathcal{A} − λ\mathcal{B}. By contradiction, let z = [x^T, y^T]^T be an eigenvector corresponding to the zero eigenvalue. Then, from (\mathcal{A} − 0 \cdot \mathcal{B})z = \mathcal{A}z = 0, we have

    (A + CC^T)x + Cy = 0, \qquad C^T x + y = 0.

Eliminating y = −C^T x from the first equation gives Ax = 0. Since A is positive definite, x = 0 and hence y = 0, a contradiction. Therefore all eigenvalues of the LEP (4.12) are positive, and by Theorem 3.1 all eigenvalues of the REP (4.10) are positive. ∎

Based on Theorem 3.1 and Proposition 4.1, the number of eigenvalues of the REP (4.10) in the interval (α, β) is given by

(4.13)    κ = ℓ − \hat{ℓ},

where ℓ is the number of eigenvalues of \mathcal{A} − λ\mathcal{B} in the interval (α, β), \hat{ℓ} = \sum_{σ_i ∈ (α,β)} ℓ_i, and ℓ_i is the number of zero eigenvalues of \mathcal{A} − σ_i \mathcal{B}. The quantities ℓ and \hat{ℓ} can be computed using Sylvester's law of inertia for the real symmetric matrices \mathcal{A} − τ\mathcal{B}, with τ = α, β and with τ equal to the poles σ_i ∈ (α, β).
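A small illustration of (4.13) (random data, not from the paper): on a dense toy problem the inertia counts can be taken directly from eig; for the large sparse problems of section 5, an LDL^T factorization would be used instead, as described there.

    % illustrative sketch: counting eigenvalues of (4.10) in (alpha,beta) via inertias
    n = 8; k = 2; r = 1; rng(4);
    A = full(gallery('tridiag', n, -1, 2, -1)) + eye(n);   % symmetric positive definite
    B = eye(n);
    C = randn(n, k*r);  sig = [1; 2];                      % two rank-one terms, poles 1 and 2
    Sg = diag(kron(sig, ones(r,1)));
    calA = [A + C*C', C; C', eye(k*r)];
    calB = blkdiag(B, inv(Sg));
    alpha = 1.2; beta = 1.8;                               % no poles inside (alpha, beta) here
    neg = @(S) sum(eig((S+S')/2) < 0);                     % inertia count (use ldl for large sparse S)
    ell = neg(calA - beta*calB) - neg(calA - alpha*calB);  % LEP eigenvalues in (alpha, beta)
    kappa = ell - 0                                        % formula (4.13), with ell_hat = 0
    ev = eig(calA, calB);
    sum(ev > alpha & ev < beta)                            % direct count, equals kappa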

4.4. Damped vibration of a structure. This is an REP arising from the free vibrations of a structure when a viscoelastic constitutive relation is used to describe the behavior of the material [16]. The REP is of the form

(4.14)    \left( λ^2 M + K − \sum_{i=1}^{k} \frac{1}{1 + b_i λ}\, G_i \right) x = 0,

where the mass and stiffness matrices M and K are symmetric positive definite, the b_i are relaxation parameters over the k regions, and G_i is an assemblage of element stiffness matrices over the region with the distinct relaxation parameter b_i. We consider the case where G_i = L_i L_i^T with L_i ∈ R^{n×r_i}. By defining

    L = [\, L_1, L_2, …, L_k \,], \qquad D = \mathrm{diag}(b_1 I_{r_1}, b_2 I_{r_2}, …, b_k I_{r_k}),

the REP (4.14) can be written in the form (3.2):

    \bigl( λ^2 M + K − L (I + λD)^{−1} L^T \bigr) x = 0.

By linearizing the second-order matrix polynomial term λ^2 M + K in a symmetric form, we derive the following symmetric LEP:

    \left( \begin{bmatrix} −M & 0 & 0 \\ 0 & K & L \\ 0 & L^T & I \end{bmatrix}
    + λ \begin{bmatrix} 0 & M & 0 \\ M & 0 & 0 \\ 0 & 0 & D \end{bmatrix} \right)
    \begin{bmatrix} λx \\ x \\ y \end{bmatrix} = 0,

where the auxiliary vector y = −(I + λD)^{−1} L^T x. The size of the LEP is 2n + r_1 + r_2 + ⋯ + r_k.

5. Numerical examples. In this section we present two numerical examples to show the computational efficiency of the linearization process of the REP (1.1) proposed in sections 3 and 4. We do not compare the proposed approach with a general-purpose nonlinear eigensolver, such as Newton's method, the nonlinear Arnoldi method [22], or preconditioned iterative methods [20]. Instead, we compare the extra cost of solving the REP (1.1) with the cost of solving the PEP P(λ)x = 0 obtained by dropping the rational terms in (1.1). All numerical experiments were run in MATLAB 7 on a Pentium IV PC with a 2.6GHz CPU and 1GB of core memory.

Example 1. We present numerical results for the REP (4.1) arising from the vibration analysis of a loaded elastic string, discussed in section 4.1. This REP is included in the collection of nonlinear eigenvalue problems (NLEVP) [2]. If the NLEVP collection is installed in MATLAB, then the matrices A, B and E can be generated by calling

    coeffs = nlevp('loaded_string', n);
    A = coeffs{1}; B = coeffs{2}; E = coeffs{3};

where n is the size of the REP. As in [2], the pole σ is set to 1. The eigenvalues of interest are the few smallest ones in the interval (σ, +∞).
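The following sketch follows the snippet above and assembles the trimmed (n+1)-by-(n+1) pencil (4.4) for this problem; it requires the NLEVP collection on the MATLAB path, and the problem size and the choice of which eigenpair to check are ours.

    % sketch of Example 1: trimmed pencil (4.4) for the loaded elastic string
    n = 100; sigma = 1;
    coeffs = nlevp('loaded_string', n);
    A = coeffs{1}; B = coeffs{2}; E = coeffs{3};          % E = e_n*e_n'
    en = zeros(n,1); en(n) = 1;
    calA = [A + en*en', en; en', 1];                      % tridiagonal, sym. pos. def.
    calB = blkdiag(B, 1/sigma);
    [V, Dg] = eig(full(calA), full(calB));
    [lam, p] = sort(diag(Dg));  V = V(:, p);
    mu = lam(2);  x = V(1:n, 2);                          % smallest eigenvalue above the pole
    res = norm((A - mu*B + mu/(mu - sigma)*E)*x) / norm(x)   % residual norm for the REP (4.1)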

The following table records the smallest computed eigenvalues and the corresponding residual norms obtained from the trimmed LEP (4.4) by the MATLAB function eig:

    [Table: computed eigenvalues λ_i, i = 1, …, 6, and the corresponding residual norms; the numerical values are not recoverable from the source.]

The residual norm ‖R(λ)x‖_2 / ‖x‖_2 is used to measure the precision of a computed eigenpair (λ, x) of the REP (1.1), the same as in [6, Algorithm 5]. We note that the first eigenvalue λ_1 < 1, which is not of practical interest according to [2]. Eigenvalues λ_2 to λ_6 match, to all significant digits, the eigenvalues computed by a preconditioned iterative method reported in [20].

The following table reports the CPU elapsed time for solving the trimmed LEP (4.4) for different sizes n using the MATLAB dense eigensolver eig. For comparison, we also report the CPU elapsed time for solving the QEP (4.5) using the MATLAB function polyeig, and for solving the symmetric LEP (4.7) of size 2n using eig. In this particular case it is known that Q(a) < 0 for a = 2, so the symmetric LEP (4.7) is a generalized symmetric definite eigenproblem.

    [Table: CPU elapsed times for four problem sizes n: the trimmed LEP (4.4) solved by eig, the QEP (4.5) solved by polyeig, the full symmetric LEP (4.7) solved by eig, and the pencil A − λB solved by eig; the timing values are not recoverable from the source.]

It is clear that the trimmed LEP (4.4) is the most efficient linearization scheme for solving the REP (4.1). Although one can efficiently exploit the positive definiteness in the generalized symmetric definite eigenproblem (4.7), it is still slower than the trimmed LEP (4.4) because the size of (4.7) is 2n. The table also shows that the brute-force approach of converting the REP (4.1) into the QEP (4.5) and then solving it via a companion form linearization is the most expensive. Note that the matrix pair (\mathcal{A}, \mathcal{B}) in the trimmed linearization (4.4) is symmetric tridiagonal and positive definite, the same properties as the matrices A and B. Therefore, as the last row of the table shows, the CPU elapsed time for solving the REP via the LEP (4.4) is essentially the same as that for solving the eigenvalue problem of the pencil A − λB.

Example 2. This is a numerical example for the REP (4.10) discussed in section 4.3. The size of the matrices A and B is n = 36,46, the number of nonzeros of A is nnz = 255,88, and B has the same sparsity as A. There are nine rational terms, k = 9. The matrices C_i in the rational terms have two dense columns and are of rank r_i = 2. The poles are σ_i = i for i = 1, 2, …, k. Our aim is to compute all eigenvalues in the interval (α, β) = (1, 2), i.e., between the first and second poles.

As discussed in section 4.3, the linearization of the REP (4.10) leads to the LEP (\mathcal{A} − λ\mathcal{B})z = 0, where \mathcal{A} and \mathcal{B} are defined as in (4.12) and have size n + r_1 d_1 + ⋯ + r_k d_k = n + 18. By the expression (4.13), we can conclude that there are 8 eigenvalues in the interval. To apply the expression (4.13), we need to know the inertias of the matrices \mathcal{A} − τ\mathcal{B} for τ = α, β. These can be computed from an LDL^T decomposition of \mathcal{A} − τ\mathcal{B}. Since the C_i are dense, the explicit computation of the LDL^T decomposition of \mathcal{A} − τ\mathcal{B} is too expensive. Instead, we can first perform the following congruence transformation:

(5.1)    \mathcal{A} − τ\mathcal{B}
        = L_1 \begin{bmatrix} A − τB & C \\ C^T & (Σ − τI)/τ \end{bmatrix} L_1^T
        = L_1 L_2 \begin{bmatrix} A − τB & 0 \\ 0 & F \end{bmatrix} L_2^T L_1^T,

where

    F = (Σ − τI)/τ − C^T (A − τB)^{−1} C, \qquad
    L_2 = \begin{bmatrix} I & 0 \\ C^T (A − τB)^{−1} & I \end{bmatrix},

and L_1 is a nonsingular block upper triangular matrix that involves only C, τ and Σ. Under these congruence transformations, the inertia of \mathcal{A} − τ\mathcal{B} is obtained from the inertias of the sparse matrix A − τB and of the small matrix F, which involves the diagonal matrix (Σ − τI)/τ. Therefore we only need to compute the LDL^T decomposition of the sparse symmetric matrix A − τB. In MATLAB, the LDL^T decomposition of a sparse symmetric matrix can be computed by the function ldlsparse, which is based on [4].

Let us now turn to computing the 8 eigenvalues in the interval (α, β) by applying MATLAB's sparse eigensolver eigs to the LEP (4.12). The function eigs is based on the implicitly restarted Arnoldi (IRA) algorithm [12]. The following parameters are used when calling eigs with the shift-and-invert spectral transformation:

    tau = 1.5;           % the shift
    num = 8;             % number of wanted eigenvalues
    opts.issym  = true;
    opts.isreal = true;
    opts.disp   = 0;
    opts.tol    = 1e-13; % residual bound
    opts.p      = 4*num; % number of Lanczos basis vectors

Furthermore, we need to provide an external linear solver for the linear system

(5.2)    \begin{bmatrix} A + CC^T − τB & C \\ C^T & I − τΣ^{−1} \end{bmatrix}
         \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
         = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.

The following procedure computes the solution vectors x_1 and x_2 based on the LDL^T decomposition A − τB = LDL^T and the Sherman-Morrison-Woodbury formula (A + BB^T)^{−1} = A^{−1} − A^{−1}B(I + B^T A^{−1} B)^{−1} B^T A^{−1}:

    1. \tilde{C} := L^{−1} C
    2. \tilde{b} := L^{−1} b_1
    3. e := \tilde{C}^T D^{−1} \tilde{b} − Σ b_2 / τ
    4. e := (I + \tilde{C}^T D^{−1} \tilde{C} − Σ/τ)^{−1} e
    5. x_1 = L^{−T} D^{−1} (\tilde{b} − \tilde{C} e),  \quad  x_2 = Σ (e − b_2)/τ

The linear system (5.2) must be solved repeatedly for different right-hand sides. For computational efficiency, the matrix \tilde{C} := L^{−1}C in step 1 and the matrix I + \tilde{C}^T D^{−1} \tilde{C} − Σ/τ in step 4 are computed only once and stored before calling eigs.
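The procedure above can be written, for instance, as the following MATLAB function (our sketch; MATLAB's built-in ldl with a permutation vector plays the role of ldlsparse here, and all names are illustrative). The returned handle applies (\mathcal{A} − τ\mathcal{B})^{−1} and can be passed to eigs as the user-supplied linear solver for the shift-and-invert transformation.

    function solve = make_si_solver(A, B, C, Sg, tau)
    % Returns a handle that applies (calA - tau*calB)^{-1} following steps 1-5 above.
    % A, B: sparse n-by-n REP matrices; C = [C_1 ... C_k]; Sg = Sigma; tau: the shift.
    [Lf, Df, pv] = ldl(A - tau*B, 'vector');       % sparse LDL' factorization, computed once
    Ct = Lf \ C(pv, :);                            % step 1, precomputed and stored
    Fs = eye(size(C,2)) + Ct'*(Df\Ct) - Sg/tau;    % matrix of step 4, precomputed
    n  = size(A, 1);
    solve = @apply;
        function z = apply(b)
            b1 = b(1:n);  b2 = b(n+1:end);
            bt = Lf \ b1(pv);                      % step 2
            e  = Ct'*(Df\bt) - Sg*b2/tau;          % step 3
            e  = Fs \ e;                           % step 4
            x1 = zeros(n, 1);
            x1(pv) = Lf' \ (Df \ (bt - Ct*e));     % step 5 (undo the fill-reducing permutation)
            z  = [x1; Sg*(e - b2)/tau];
        end
    end

With solve = make_si_solver(A, B, C, Sg, 1.5), each application of solve inside eigs costs two sparse triangular solves with the factor L plus work on small matrices of order 18.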

The CPU elapsed time is shown in the following table, where "ldlsparse" is the time for computing the decomposition A − τB = LDL^T, and "preprocessing" is the time for computing the matrices \tilde{C} = L^{−1}C and I + \tilde{C}^T D^{−1} \tilde{C} − Σ/τ and for assembling the matrix \mathcal{B} = diag(B, Σ^{−1}).

    [Table: CPU elapsed times for ldlsparse, preprocessing, eigs, and the total, for the LEP \mathcal{A} − λ\mathcal{B} and, in the third column, for the pencil A − λB; the timing values are not recoverable from the source.]

The residuals of all 8 computed eigenpairs are less than 5.5 × 10^{−13}. For comparison, the third column of the table records the CPU time for solving only the eigenvalue problem of the linear pencil A − λB of the REP (4.10), using the same parameters for calling eigs. As we can see, it takes only about 33% extra time to solve the full REP compared with the simple eigenvalue problem of the pencil A − λB.

The computed 8 eigenvalues λ_1, λ_2, …, λ_8 of the REP (4.10) in the interval (1, 2) are plotted below, along with the computed 8 eigenvalues µ_1, µ_2, …, µ_8 of the linear pencil A − µB closest to the shift τ = 1.5.

    [Figure: the eigenvalues λ_1, …, λ_8 of the REP (4.10) and the eigenvalues µ_1, …, µ_8 of the pencil A − µB, plotted on the interval (1, 2); the numerical values are not recoverable from the source.]

We observe that there are only 6 eigenvalues of the pencil A − λB in the interval (1, 2). Among them, µ_6, µ_7, µ_8 are good approximations of λ_6, λ_7, λ_8, respectively. If we treat the REP (4.10) as a general NEP and use an iterative method (such as the nonlinear Arnoldi method [22]) with the initial approximations µ_6, µ_7, µ_8, we would expect good convergence to the desired eigenvalues λ_6, λ_7, λ_8. It would be difficult to predict the convergence behavior of the iterative method with the initial approximations µ_1, µ_2, …, µ_5.

Acknowledgement. We are grateful to Heinrich Voss for providing the data for Example 2 in section 5. We thank Françoise Tisseur for her numerous comments. Support for this work has been provided in part by the National Science Foundation under grant DMS-6548 and by the DOE under grant DE-FC02-06ER25794. The work of Y. Su was supported in part by China NSF Project 8749 and the E-Institutes of Shanghai Municipal Education Commission, N E34.

REFERENCES

[1] A. C. Antoulas. Approximation of Large-Scale Dynamical Systems. SIAM, Philadelphia, 2005.
[2] T. Betcke, N. J. Higham, V. Mehrmann, C. Schröder, and F. Tisseur. NLEVP: A collection of nonlinear eigenvalue problems. MIMS EPrint, School of Mathematics, The University of Manchester, 2008.
[3] R. Byers, V. Mehrmann, and H. Xu. Trimmed linearizations for structured matrix polynomials. Linear Algebra Appl., 429, 2008.
[4] T. A. Davis. Algorithm 849: A concise sparse Cholesky factorization package. ACM Trans. Math. Software, 31(4):587-591, 2005.
[5] I. Gohberg, P. Lancaster, and L. Rodman. Matrix Polynomials. Academic Press, New York, 1982.
[6] C.-H. Guo, N. J. Higham, and F. Tisseur. Detecting and solving hyperbolic quadratic eigenvalue problems. SIAM J. Matrix Anal. Appl., 30(4):1593-1613, 2009.
[7] N. J. Higham, D. S. Mackey, N. Mackey, and F. Tisseur. Symmetric linearizations for matrix polynomials. SIAM J. Matrix Anal. Appl., 29(1):143-159, 2006.
[8] N. J. Higham, D. S. Mackey, and F. Tisseur. The conditioning of linearizations of matrix polynomials. SIAM J. Matrix Anal. Appl., 28(4):1005-1028, 2006.
[9] T.-M. Hwang, W.-W. Lin, J.-L. Liu, and W. Wang. Jacobi-Davidson methods for cubic eigenvalue problems. Numer. Linear Algebra Appl., 12:605-624, 2005.
[10] T.-M. Hwang, W.-W. Lin, W.-C. Wang, and W. Wang. Numerical simulation of three dimensional quantum dot. J. Comput. Phys., 196:208-232, 2004.
[11] P. Lancaster and M. Tismenetsky. The Theory of Matrices. Academic Press, London, 2nd edition, 1985.
[12] R. B. Lehoucq, D. C. Sorensen, and C. Yang. ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods. SIAM, Philadelphia, 1998.
[13] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann. Structured polynomial eigenvalue problems: Good vibrations from good linearizations. SIAM J. Matrix Anal. Appl., 28(4):1029-1051, 2006.
[14] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann. Vector spaces of linearizations for matrix polynomials. SIAM J. Matrix Anal. Appl., 28(4):971-1004, 2006.
[15] L. Mazurenko and H. Voss. Low rank rational perturbations of linear symmetric eigenproblems. Z. Angew. Math. Mech., 86(8):606-616, 2006.
[16] V. Mehrmann and H. Voss. Nonlinear eigenvalue problems: A challenge for modern eigenvalue methods. GAMM-Mitteilungen, 27:121-152, 2004.
[17] M. J. O'Sullivan and M. A. Saunders. Sparse rank-revealing LU factorization via threshold complete pivoting and threshold rook pivoting. Presented at the Householder Symposium XV on Numerical Linear Algebra, Peebles, Scotland, June 17-21, 2002.
[18] A. Ruhe. Algorithms for the nonlinear eigenvalue problem. SIAM J. Numer. Anal., 10:674-689, 1973.
[19] B. De Schutter. Minimal state-space realization in linear system theory: An overview. J. Comput. Appl. Math., 121(1-2):331-354, 2000.
[20] S. I. Solov'ëv. Preconditioned iterative methods for a class of nonlinear eigenvalue problems. Linear Algebra Appl., 415:210-229, 2006.
[21] H. Voss. A rational spectral problem in fluid-solid vibration. Electron. Trans. Numer. Anal., 16:94-106, 2003.
[22] H. Voss. An Arnoldi method for nonlinear eigenvalue problems. BIT Numer. Math., 44:387-401, 2004.
[23] H. Voss. Iterative projection methods for computing relevant energy states of a quantum dot. J. Comput. Phys., 217, 2006.
[24] K. Wu and H. Simon. Thick-restart Lanczos method for large symmetric eigenvalue problems. SIAM J. Matrix Anal. Appl., 22(2):602-616, 2000.


A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present

More information

Nick Higham. Director of Research School of Mathematics

Nick Higham. Director of Research School of Mathematics Exploiting Research Tropical Matters Algebra in Numerical February 25, Linear 2009 Algebra Nick Higham Françoise Tisseur Director of Research School of Mathematics The School University of Mathematics

More information

ACCURATE SOLUTIONS OF POLYNOMIAL EIGENVALUE PROBLEMS

ACCURATE SOLUTIONS OF POLYNOMIAL EIGENVALUE PROBLEMS ACCURATE SOLUTIONS OF POLYNOMIAL EIGENVALUE PROBLEMS YILING YOU, JOSE ISRAEL RODRIGUEZ, AND LEK-HENG LIM Abstract. Quadratic eigenvalue problems (QEP) and more generally polynomial eigenvalue problems

More information

An Algorithm for the Complete Solution of Quadratic Eigenvalue Problems

An Algorithm for the Complete Solution of Quadratic Eigenvalue Problems An Algorithm for the Complete Solution of Quadratic Eigenvalue Problems SVEN HAMMARLING, Numerical Algorithms Group Ltd. and The University of Manchester CHRISTOPHER J. MUNRO, Rutherford Appleton Laboratory

More information

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008

More information

Solving Regularized Total Least Squares Problems

Solving Regularized Total Least Squares Problems Solving Regularized Total Least Squares Problems Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation Joint work with Jörg Lampe TUHH Heinrich Voss Total

More information

A Nonlinear QR Algorithm for Banded Nonlinear Eigenvalue Problems

A Nonlinear QR Algorithm for Banded Nonlinear Eigenvalue Problems A Nonlinear QR Algorithm for Banded Nonlinear Eigenvalue Problems C. KRISTOPHER GARRETT, Computational Physics and Methods, Los Alamos National Laboratory ZHAOJUN BAI, Department of Computer Science and

More information

Reduction of nonlinear eigenproblems with JD

Reduction of nonlinear eigenproblems with JD Reduction of nonlinear eigenproblems with JD Henk van der Vorst H.A.vanderVorst@math.uu.nl Mathematical Institute Utrecht University July 13, 2005, SIAM Annual New Orleans p.1/16 Outline Polynomial Eigenproblems

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Solving Symmetric Semi-definite (ill-conditioned) Generalized Eigenvalue Problems

Solving Symmetric Semi-definite (ill-conditioned) Generalized Eigenvalue Problems Solving Symmetric Semi-definite (ill-conditioned) Generalized Eigenvalue Problems Zhaojun Bai University of California, Davis Berkeley, August 19, 2016 Symmetric definite generalized eigenvalue problem

More information

HOMOGENEOUS JACOBI DAVIDSON. 1. Introduction. We study a homogeneous Jacobi Davidson variant for the polynomial eigenproblem

HOMOGENEOUS JACOBI DAVIDSON. 1. Introduction. We study a homogeneous Jacobi Davidson variant for the polynomial eigenproblem HOMOGENEOUS JACOBI DAVIDSON MICHIEL E. HOCHSTENBACH AND YVAN NOTAY Abstract. We study a homogeneous variant of the Jacobi Davidson method for the generalized and polynomial eigenvalue problem. Special

More information

Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition

Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition Kyle Curlett Maribel Bueno Cachadina, Advisor March, 2012 Department of Mathematics Abstract Strong linearizations of a matrix

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information

Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems

Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems Yuji Nakatsukasa Abstract When a symmetric block diagonal matrix [ A 1 A2 ] undergoes an

More information

Computing Unstructured and Structured Polynomial Pseudospectrum Approximations

Computing Unstructured and Structured Polynomial Pseudospectrum Approximations Computing Unstructured and Structured Polynomial Pseudospectrum Approximations Silvia Noschese 1 and Lothar Reichel 2 1 Dipartimento di Matematica, SAPIENZA Università di Roma, P.le Aldo Moro 5, 185 Roma,

More information

Multiparameter eigenvalue problem as a structured eigenproblem

Multiparameter eigenvalue problem as a structured eigenproblem Multiparameter eigenvalue problem as a structured eigenproblem Bor Plestenjak Department of Mathematics University of Ljubljana This is joint work with M Hochstenbach Będlewo, 2932007 1/28 Overview Introduction

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

Introduction. Chapter One

Introduction. Chapter One Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and

More information

Exponentials of Symmetric Matrices through Tridiagonal Reductions

Exponentials of Symmetric Matrices through Tridiagonal Reductions Exponentials of Symmetric Matrices through Tridiagonal Reductions Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A simple and efficient numerical algorithm

More information

A Continuation Approach to a Quadratic Matrix Equation

A Continuation Approach to a Quadratic Matrix Equation A Continuation Approach to a Quadratic Matrix Equation Nils Wagner nwagner@mecha.uni-stuttgart.de Institut A für Mechanik, Universität Stuttgart GAMM Workshop Applied and Numerical Linear Algebra September

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Trimmed linearizations for structured matrix polynomials

Trimmed linearizations for structured matrix polynomials Trimmed linearizations for structured matrix polynomials Ralph Byers Volker Mehrmann Hongguo Xu January 5 28 Dedicated to Richard S Varga on the occasion of his 8th birthday Abstract We discuss the eigenvalue

More information