SIAM J. MATRIX ANAL. APPL. Vol. 34, No. 1 (c) 2013 Society for Industrial and Applied Mathematics

THE ANTITRIANGULAR FACTORIZATION OF SYMMETRIC MATRICES

NICOLA MASTRONARDI AND PAUL VAN DOOREN

Abstract. Indefinite symmetric matrices occur in many applications, such as optimization, least squares problems, partial differential equations, and variational problems. In these applications one is often interested in computing a factorization of the indefinite matrix that puts into evidence the inertia of the matrix or possibly provides an estimate of its eigenvalues. In this paper we propose an algorithm that provides this information for any symmetric indefinite matrix by transforming it to a block antitriangular form using orthogonal similarity transformations. We also show that the algorithm is backward stable and has a complexity that is comparable to existing matrix decompositions for dense indefinite matrices.

Key words. indefinite matrix, saddle point problem, inertia, eigenvalue estimate

AMS subject classifications. 65F05, 15A23, 65F30

DOI. 10.1137/

Received by the editors December ; accepted for publication (in revised form) by S. C. Eisenstat, December 1, 2012; published electronically February 2013. Istituto per le Applicazioni del Calcolo "M. Picone", Consiglio Nazionale delle Ricerche, sede di Bari, I-70126 Bari, Italy (nmastronardi@ba.iac.cnr.it). The work of this author is partly supported by the GNCS-INdAM project "Analisi di strutture nella ricostruzione di immagini e monumenti". Department of Mathematical Engineering, Catholic University of Louvain, B-1348 Louvain-la-Neuve, Belgium (paul.vandooren@uclouvain.be). The work of this author is partly supported by the Belgian Network DYSCO (Dynamical Systems, Control, and Optimization), funded by the Interuniversity Attraction Poles Programme initiated by the Belgian State Science Policy Office. The scientific responsibility rests with its authors.

1. Introduction. Indefinite symmetric matrices occur in many applications, such as optimization, partial differential equations, and variational problems, where they are for instance linked to a so-called saddle point problem. In these applications one is typically interested in computing a factorization of the indefinite matrix that puts into evidence the inertia of the matrix or possibly provides an estimate of its eigenvalues. We develop in this paper a new type of matrix factorization that provides exactly that type of information, and we present a backward stable algorithm, relying only on Givens and Householder transformations, for computing it.

In the literature there are already several types of matrix factorizations of a symmetric matrix A that provide very similar information. The LDL^T factorization, where L is unit lower triangular and D is a block diagonal matrix with 1 x 1 and 2 x 2 diagonal blocks, determines immediately the inertia of A via its diagonal. The UTU^T decomposition, where U is orthogonal and T is tridiagonal, can be combined with Sturm sequences to give exactly the number of positive, negative, and zero eigenvalues of A. A third factorization is the LTL^T decomposition, where L is unit lower triangular and T is tridiagonal, which can also be combined with Sturm sequences to determine the inertia of A. The LDL^T factorization of an n x n symmetric matrix A is typically combined with certain pivoting strategies and requires O(n^3) floating point operations. Algorithms for constructing such factorizations (Bunch and Parlett [5] and Bunch and Kaufman [6]) have been proved to be backward stable (see, e.g., [14]), provided they incorporate a good pivoting strategy. The pivoting strategy ensuring backward stability in the LTL^T decomposition is much simpler [1]. But in both cases the transformations are nonorthogonal, and therefore the central matrices D and T do not provide any useful information about the eigenvalues of A besides their sign. The UTU^T decomposition, on the other hand, uses orthogonal transformations, and hence the eigenvalues of A are also those of T. This makes it possible to obtain additional estimates on the eigenvalues of A at low additional cost.

In this paper we present a new symmetric factorization A = QMQ^T, where Q is orthogonal and M is symmetric block lower antitriangular (which means that the blocks above the main antidiagonal are zero), requiring O(n^3) floating point operations. Since Q is orthogonal, the eigenvalues of A and M are the same. The sizes of the blocks of M are shown to yield the inertia of A, and when all the eigenvalues of A are located in two clusters, a good estimate of them can be obtained by computing the eigenvalues of a matrix formed by the main diagonal and the main antidiagonal of some blocks of M. Similar antisymmetric decompositions have appeared in the literature in the context of matrix pencils with particular types of symmetries. The problem of tracking the dominant eigenspace of a symmetric indefinite matrix making use of the block antitriangular decomposition has been considered in [2].

The paper is organized as follows. In section 2 it is shown that any symmetric matrix can be transformed into a block lower antitriangular matrix. Moreover, an algorithm for computing the factorization A = QMQ^T is proposed. Some bounds and inequalities on the values on the main diagonal and antidiagonal of the computed block antitriangular matrix are also given. An algorithm for the rank-one updating/downdating of block antitriangular matrices is proposed in section 3. Numerical experiments showing the properties of the proposed algorithm are reported in section 4, followed by the conclusions.

2. The antitriangular decomposition.
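Before entering the reduction itself, the central invariant the factorization rests on can be illustrated numerically. The sketch below (a hedged illustration, not the paper's algorithm; the `inertia` helper and the random test matrix are assumptions for the example) checks that an orthogonal similarity M = Q^T A Q preserves both the spectrum and the inertia of a symmetric matrix:

```python
import numpy as np

def inertia(A, tol=1e-10):
    """Return (n_minus, n_zero, n_plus): the numbers of negative, zero,
    and positive eigenvalues of the symmetric matrix A."""
    w = np.linalg.eigvalsh(A)
    return (int((w < -tol).sum()), int((np.abs(w) <= tol).sum()),
            int((w > tol).sum()))

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2.0                     # symmetric (generically indefinite)

# Orthogonal similarity preserves eigenvalues, hence inertia.
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
M = Q.T @ A @ Q
assert inertia(M) == inertia(A)
assert np.allclose(np.sort(np.linalg.eigvalsh(M)),
                   np.sort(np.linalg.eigvalsh(A)))
```

The reduction described in this section produces a particular such Q, for which M additionally has the block antitriangular shape that exposes the inertia directly.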
In this section we describe the reduction of an arbitrary symmetric matrix to a block lower antitriangular form via orthogonal similarity transformations. We will say that a matrix A in R^{n x n} is lower antitriangular if A(i,j) = 0 for i + j <= n. The inertia of a symmetric matrix A in R^{n x n} is the triple Inertia(A) = (n-, n0, n+), where n-, n0, and n+ are the numbers of negative, zero, and positive eigenvalues of A, respectively, and n- + n0 + n+ = n. A symmetric indefinite matrix A in R^{n x n} is in block antitriangular form if

    A = [ 0  0  0  0   ]
        [ 0  0  0  Y^T ]
        [ 0  0  X  Z^T ]
        [ 0  Y  Z  W   ]

with Y, X, and W square submatrices of (possibly) different sizes, X and W symmetric, and Y antitriangular. Moreover, a symmetric indefinite matrix A in R^{n x n} with Inertia(A) = (n-, n0, n+), n1 = min(n-, n+), and n2 = max(n-, n+) - n1 is in proper block antitriangular form if

(2.1)   A = [ 0  0  0  0   ] }n0
            [ 0  0  0  Y^T ] }n1
            [ 0  0  X  Z^T ] }n2
            [ 0  Y  Z  W   ] }n1

with Z in R^{n1 x n2}, W in R^{n1 x n1} symmetric, Y in R^{n1 x n1} nonsingular lower antitriangular, and X in R^{n2 x n2} symmetric definite if n2 > 0, i.e., X = eps L L^T with

    eps = +1 if n+ > n-,    eps = -1 if n+ < n-,

and L lower triangular. Otherwise eps is not defined. Hence X is symmetric positive definite if eps = +1 and is symmetric negative definite if eps = -1.

Remark 1. Notice that some of the blocks of A may have zero dimension.

The identity matrix of order n is denoted by I_n; its columns, the unit vectors, are denoted by e_i, i = 1, ..., n, and their size can vary depending on the context. If there is no ambiguity, different sequences of Givens rotations will be denoted by the same symbols, for instance G_i, i = 1, ..., k. Submatrices are denoted by the colon notation of MATLAB: A(i:j, k:l) denotes the submatrix of A formed by the intersection of rows i to j and columns k to l, and A(:, k:l) denotes the columns of A from k to l. A null submatrix is denoted by 0, and its size can vary depending on the context. Moreover, the symbol = is also used to introduce new variables.

Here we show that any symmetric matrix can be transformed into a proper block antitriangular form by orthogonal similarity transformations.

Theorem 2.1. Let A in R^{n x n} be a symmetric indefinite matrix with Inertia(A) = (n-, n0, n+). Let n1 = min(n-, n+) and n2 = max(n-, n+) - n1. Then there exists an orthogonal matrix Q in R^{n x n} such that M = Q^T A Q is in proper block antitriangular form

    M = [ 0  0  0  0   ] }n0
        [ 0  0  0  Y^T ] }n1
        [ 0  0  X  Z^T ] }n2
        [ 0  Y  Z  W   ] }n1

Proof. The proof is based on the existence of a set of spaces associated with a symmetric matrix A with inertia (n-, n0, n+). Let us consider orthogonal bases U_0, U_+, U_-, and U_n; then the associated spaces spanned by U_0, U_+, U_-, and U_n are called

- a nullspace of A if A U_0 = 0; the nullspace of maximal dimension is unique and of dimension n0;
- a nonnegative subspace of A if U_+^T A U_+ is positive semidefinite; its maximal dimension is n0 + n+;
- a nonpositive subspace of A if U_-^T A U_- is negative semidefinite; its maximal dimension is n0 + n-;
- a neutral subspace of A if U_n^T A U_n = 0; its maximal dimension is n0 + min(n+, n-).

The properties of neutral, positive, and negative spaces are proved in, e.g., [9], but can also easily be recovered from the Cauchy interlacing

inequalities. Without loss of generality we now assume that n+ >= n-. (If not, one can just consider -A instead of A.) Suppose we are given the nullspace U_0 and two subspaces U_n and U_+ of maximal dimension; then U_0 is contained in U_n, which is contained in U_+, since otherwise U_n and U_+ would not be of maximal dimension. We then construct an orthogonal matrix Q = [Q_1, Q_2, Q_3, Q_4], where Q_1 spans U_0, Q_2 spans the orthogonal complement of U_0 in U_n, Q_3 spans the orthogonal complement of U_n in U_+, and Q_4 spans the orthogonal complement of U_+. Let us denote by K in R^{n2 x n1} the block (3,2) of M. We observe that K = 0, because otherwise

    [Q_1 Q_2 Q_3]^T A [Q_1 Q_2 Q_3] = [ 0  0    0   ]
                                      [ 0  0    K^T ]
                                      [ 0  K    X   ]

is an indefinite matrix, contradicting the fact that [Q_1 Q_2 Q_3] spans the subspace U_+. It then follows that the numbers of columns of the Q_i matrices are, respectively, n0, n-, (n+ - n-), and n-, and that M must be of the form given above with Y and X square and nonsingular. In fact, Y is of full column rank; otherwise the dimension of U_0 would be greater than n0. Moreover, the matrix Y is also of full row rank; otherwise the dimension of U_+ would be greater than n0 + n1 + n2. Observe that X is nonsingular; otherwise the dimension of U_n would be greater than n0 + n1. The matrix Y can be reduced to antitriangular form by a QR-like factorization without changing the above subspaces.

The fact that the block antitriangular structure of M determines its inertia has been proved in [1].

2.1. Reduction to antitriangular form. In this section it is shown how a symmetric matrix can be transformed into a block antitriangular one. The algorithm for constructing the form is recursive and is also a proof of the existence of the form. We suppose that we start with A^(k) = A(1:k, 1:k), 1 <= k < n, which is in this form, i.e., A^(k) = Q^(k) M^(k) Q^(k)T, with Q^(k) in R^{k x k} orthogonal and M^(k) in R^{k x k} proper block antitriangular. Notice that such a form is trivial to obtain for k = 1. We then show how to transform A^(k+1) = A(1:k+1, 1:k+1) into a proper block antitriangular form by updating the orthogonal transformations. Let Inertia(A^(k)) = (k-, k0, k+), k- + k0 + k+ = k, and let k1 = min(k-, k+), k2 = max(k-, k+) - k1. Let M^(k) be the proper block antitriangular form of A^(k), i.e.,

    M^(k) = [ 0  0  0  0   ] }k0
            [ 0  0  0  Y^T ] }k1
            [ 0  0  X  Z^T ] }k2
            [ 0  Y  Z  W   ] }k1

with Y in R^{k1 x k1} nonsingular lower antitriangular, X in R^{k2 x k2} symmetric definite, Z in R^{k1 x k2}, and W in R^{k1 x k1} symmetric. Notice that some of the blocks may have zero dimension. Without loss of generality, let us assume that k2 > 0 and X is positive definite, with Cholesky factorization X = LL^T. Let a = A(1:k, k+1) and gamma = A(k+1, k+1). Partition

(2.2)   a~ = Q^(k)T a = [ a~_1 ] }k0
                        [ a~_2 ] }k1
                        [ a~_3 ] }k2
                        [ a~_4 ] }k1

Let Q^(k+1) = diag(Q^(k), 1). Since

    A^(k+1) = [ A^(k)  a     ]
              [ a^T    gamma ],

we have

    M^(k+1) = Q^(k+1)T A^(k+1) Q^(k+1) = [ M^(k)  a~    ]
                                         [ a~^T   gamma ].

We now show how to update the orthogonal similarity transformations in order to reduce M^(k+1) to proper block antitriangular form, distinguishing the following three cases.

Case (a): ||[a~^T, gamma]||_2 = 0.(1) Let

    P = [ 0    1 ]
        [ I_k  0 ]  in R^{(k+1) x (k+1)}

be a circulant (cyclic permutation) matrix. Then M_a^(k+1) = P M^(k+1) P^T is in the required block antitriangular form. So for this case let Q_a^(k+1) = Q^(k+1) P^T. Then M_a^(k+1) is in proper block antitriangular form, M_a^(k+1) = Q_a^(k+1)T A^(k+1) Q_a^(k+1), with Inertia(A^(k+1)) = (k-, k0 + 1, k+).

Case (b): ||a~_1||_2 > 0. Let us consider the Householder matrix H in R^{k0 x k0} such that H a~_1 = theta e_{k0}. Let

(2.3)   Q_b^(k+1) = Q^(k+1) [ H^T  0            ]
                            [ 0    I_{k+1-k0}   ].

Then M_b^(k+1) = Q_b^(k+1)T A^(k+1) Q_b^(k+1) is in proper block antitriangular form,

    M_b^(k+1) = [ 0  0    0  0     ] }k0 - 1
                [ 0  0    0  Y_b^T ] }k1 + 1
                [ 0  0    X  Z_b^T ] }k2
                [ 0  Y_b  Z_b  W_b ] }k1 + 1

where

    Y_b = [ 0  theta ],   Z_b = [ Z      ],   W_b = [ W      a~_4  ]
          [ Y  a~_2  ]          [ a~_3^T ]          [ a~_4^T gamma ].

Moreover, Inertia(A^(k+1)) = (k- + 1, k0 - 1, k+ + 1).

Case (c): ||a~_1||_2 = 0. This subcase happens, for instance, when M^(k) is nonsingular, i.e., k0 = 0 and a~_1 is empty. The first step is to annihilate the entries of the main antidiagonal of the submatrix [Y; a~_2^T] by means of Gamma_1 in R^{(k1+1) x (k1+1)}, the product of k1 Givens matrices, such that

(2.4)   Gamma_1 [ Y      ] = [ 0   ]
                [ a~_2^T ]   [ Y_c ]

with Y_c nonsingular lower antitriangular of order k1. Let

(2.5)   Gamma_1 [ Z      ] = [ v^T ]
                [ a~_3^T ]   [ Z_c ]

(1) From a computational point of view, all the binary comparisons with 0 must be replaced by comparisons with a user-defined tolerance tau.
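Case (b) above hinges on a Householder reflection mapping a~_1 onto a multiple of the last unit vector. A minimal sketch of such a reflector (the function name and the convention of zeroing all entries but the last are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def householder_to_last(a):
    """Build a symmetric orthogonal H with H @ a = theta * e_k, where e_k
    is the LAST unit vector, as needed for H a~_1 = theta e_{k0}."""
    a = np.asarray(a, dtype=float)
    k = a.size
    theta = np.linalg.norm(a)
    e = np.zeros(k)
    e[-1] = 1.0
    v = a - theta * e                  # reflection direction onto theta*e_k
    nv = np.linalg.norm(v)
    if nv == 0.0:                      # a already a nonnegative multiple of e_k
        return np.eye(k), theta
    v /= nv
    H = np.eye(k) - 2.0 * np.outer(v, v)
    return H, theta

H, theta = householder_to_last(np.array([3.0, 4.0, 0.0]))
```

Embedding H^T in the leading diagonal block, as in (2.3), then moves the single surviving entry theta into the antidiagonal corner of Y_b.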

and

(2.6)   Gamma_1 [ W      a~_4  ] Gamma_1^T = [ gamma^  w^T ]
                [ a~_4^T gamma ]             [ w       W_c ]

with v in R^{k2}, w in R^{k1}. Moreover, let

(2.7)   Q_c^(k+1) = Q^(k+1) [ I_{k-k1}  0          ]
                            [ 0         Gamma_1^T  ].

Then

(2.8)   M_c^(k+1) = Q_c^(k+1)T A^(k+1) Q_c^(k+1) =
        [ 0  0    0    0      0     ] }k0
        [ 0  0    0    0      Y_c^T ] }k1
        [ 0  0    X    v      Z_c^T ] }k2
        [ 0  0    v^T  gamma^ w^T   ] }1
        [ 0  Y_c  Z_c  w      W_c   ] }k1

Let us define the submatrix

(2.9)   X_1 = [ X    v      ] }k2
              [ v^T  gamma^ ] }1

Then the next step of this subcase is to check the definiteness of X_1. Let Gamma_2 in R^{k2 x k2} be the product of k2 - 1 Givens rotations such that Gamma_2 v = alpha e_{k2}, with alpha in R. Then

(2.10)  [ Gamma_2  0 ] X_1 [ Gamma_2^T  0 ] = [ Gamma_2 L Gamma_3 Gamma_3^T L^T Gamma_2^T  alpha e_{k2} ] = [ L_1 L_1^T       alpha e_{k2} ]
        [ 0        1 ]     [ 0          1 ]   [ alpha e_{k2}^T                             gamma^       ]   [ alpha e_{k2}^T  gamma^       ]

where Gamma_3 in R^{k2 x k2} is a sequence of k2 - 1 inner Givens rotations chosen such that L_1 = Gamma_2 L Gamma_3 is nonsingular lower triangular. Let us decompose L_1 as

    L_1 = [ L_2    0    ]
          [ l_2^T  beta ]

with L_2 in R^{(k2-1) x (k2-1)} lower triangular, l_2 in R^{k2-1}, and beta in R. It turns out that

    X_2 = [ L_2 L_2^T    L_2 l_2            0      ]
          [ l_2^T L_2^T  l_2^T l_2 + beta^2 alpha  ]
          [ 0            alpha              gamma^ ].

Notice that beta != 0. Since the Schur complement of L_1 L_1^T in X_2 is gamma^ - alpha^2 / beta^2, X_2 is positive definite, singular, or indefinite if the 2 x 2 matrix

    T = [ beta^2  alpha  ]
        [ alpha   gamma^ ]

is positive definite, singular, or indefinite, respectively. If we define

(2.11)  Q_c1^(k+1) = Q_c^(k+1) diag(I_{k0+k1}, Gamma_2^T, I_{k1+1})

and

(2.12)  Z_c1 = [ Z_c  w ] diag(Gamma_2^T, 1),

then we have to distinguish the following cases.

(c1) The matrix T is symmetric positive definite, with

    [ l_11  0    ]
    [ l_21  l_22 ]

being its Cholesky factor. Therefore X_2 is symmetric positive definite (SPD), and so is X_1, and its Cholesky factor is given by

    L_c1 = [ L_2    0     0    ]
           [ l_2^T  l_11  0    ]   i.e., X_c1 = X_2 = L_c1 L_c1^T.
           [ 0      l_21  l_22 ],

Then

    M_c1^(k+1) = Q_c1^(k+1)T A^(k+1) Q_c1^(k+1) = [ 0  0    0     0      ] }k0
                                                  [ 0  0    0     Y_c^T  ] }k1
                                                  [ 0  0    X_c1  Z_c1^T ] }k2 + 1
                                                  [ 0  Y_c  Z_c1  W_c    ] }k1

(c2) The matrix T is singular, with eigenvalues 0 = lambda_1 < lambda_2. Let Q_1 be the Givens rotation such that

(2.13)  Q_1 T Q_1^T = [ 0  0        ]
                      [ 0  lambda_2 ].

Then

    X_3 = [ I_{k2-1}  0   ] X_2 [ I_{k2-1}  0     ] = [ L_2 L_2^T    L_2 l_3      L_2 l_4              ]
          [ 0         Q_1 ]     [ 0         Q_1^T ]   [ l_3^T L_2^T  l_3^T l_3    l_3^T l_4            ]
                                                      [ l_4^T L_2^T  l_4^T l_3    l_4^T l_4 + lambda_2 ]

with [ l_3  l_4 ] = [ l_2  0 ] Q_1^T. Let Gamma_4 in R^{k2 x k2} be the product of k2 - 1 Givens rotations such that

(2.14)  Gamma_4 [ L_2   ] = [ 0   ]
                [ l_3^T ]   [ L_3 ]

with L_3 in R^{(k2-1) x (k2-1)} lower triangular. Then

(2.15)  [ Gamma_4  0 ] X_3 [ Gamma_4^T  0 ] = [ 0  0    ],   X_c2 = [ L_3 L_3^T    L_3 l_4              ]
        [ 0        1 ]     [ 0          1 ]   [ 0  X_c2 ]           [ l_4^T L_3^T  lambda_2 + l_4^T l_4 ].

Then X_c2 = L_c2 L_c2^T with

    L_c2 = [ L_3    0             ]
           [ l_4^T  sqrt(lambda_2) ].

Let

(2.16)  Q_c2^(k+1) = Q_c1^(k+1) diag(I_{k0+k1+k2-1}, Q_1^T, I_{k1}) diag(I_{k0+k1}, Gamma_4^T, I_{k1+1}).

Then

(2.17)  M_c2^(k+1) = Q_c2^(k+1)T A^(k+1) Q_c2^(k+1) = [ 0  0    0  0     0      ] }k0
                                                      [ 0  0    0  0     Y_c^T  ] }k1
                                                      [ 0  0    0  0     y^T    ] }1
                                                      [ 0  0    0  X_c2  Z_c2^T ] }k2
                                                      [ 0  Y_c  y  Z_c2  W_c    ] }k1

with

    [ y  Z_c2 ] = Z_c1 diag(I_{k2-1}, Q_1^T) diag(Gamma_4^T, 1).

Let Gamma_5 in R^{(k1+1) x (k1+1)} be the product of k1 Givens rotations such that

(2.18)  [ Y_c  y ] Gamma_5^T = [ 0  Y_c2 ]

with Y_c2 in R^{k1 x k1} nonsingular lower antitriangular. Let

(2.19)  Q_c2^(k+1) = Q_c2^(k+1) diag(I_{k0}, Gamma_5^T, I_{k1+k2}).

Then

    M_c2^(k+1) = Q_c2^(k+1)T A^(k+1) Q_c2^(k+1) = [ 0  0     0     0      ] }k0 + 1
                                                  [ 0  0     0     Y_c2^T ] }k1
                                                  [ 0  0     X_c2  Z_c2^T ] }k2
                                                  [ 0  Y_c2  Z_c2  W_c    ] }k1

(c3) The matrix T is indefinite. Let Q_2 be the Givens rotation such that

(2.20)  Q_2 T Q_2^T = [ 0     xi_1 ]
                      [ xi_1  xi_2 ].

Let [ l_5  l_6 ] = [ l_2  0 ] Q_2^T. Then

    X_3 = [ I_{k2-1}  0   ] X_2 [ I_{k2-1}  0     ] = [ L_2 L_2^T    L_2 l_5           L_2 l_6           ]
          [ 0         Q_2 ]     [ 0         Q_2^T ]   [ l_5^T L_2^T  l_5^T l_5         l_5^T l_6 + xi_1  ]
                                                      [ l_6^T L_2^T  l_6^T l_5 + xi_1  l_6^T l_6 + xi_2  ]

Similar to the previous subcase, let Gamma_6 in R^{k2 x k2} be the product of k2 - 1 Givens rotations such that

(2.21)  Gamma_6 [ L_2   ] = [ 0    ]
                [ l_5^T ]   [ L_c3 ]

with L_c3 in R^{(k2-1) x (k2-1)} lower triangular. Then

(2.22)  [ Gamma_6  0 ] X_3 [ Gamma_6^T  0 ] = [ 0     0                       z(1)               ]
        [ 0        1 ]     [ 0          1 ]   [ 0     L_c3 L_c3^T             L_c3 l_6 + z(2:k2) ]
                                              [ z(1)  (L_c3 l_6 + z(2:k2))^T  gamma_c3           ]

where z = xi_1 Gamma_6 e_{k2} and gamma_c3 = l_6^T l_6 + xi_2. Let X_c3 = L_c3 L_c3^T and

(2.23)  Z^_c3 = Z_c1 diag(I_{k2-1}, Q_2^T) diag(Gamma_6^T, 1),

    Y_c3 = [ 0    z(1)        ],   Z_c3 = [ z(2:k2)^T       ],   W_c3 = [ gamma_c3          Z^_c3(:, k2+1)^T ]
           [ Y_c  Z^_c3(:, 1) ]           [ Z^_c3(:, 2:k2)  ]           [ Z^_c3(:, k2+1)    W_c              ].

Moreover, let

(2.24)  Q_c3^(k+1) = Q_c1^(k+1) diag(I_{k0+k1+k2-1}, Q_2^T, I_{k1}) diag(I_{k0+k1}, Gamma_6^T, I_{k1+1}).

Then

    M^(k+1) = Q_c3^(k+1)T A^(k+1) Q_c3^(k+1) = [ 0  0     0     0      ] }k0
                                               [ 0  0     0     Y_c3^T ] }k1 + 1
                                               [ 0  0     X_c3  Z_c3^T ] }k2 - 1
                                               [ 0  Y_c3  Z_c3  W_c3   ] }k1 + 1

Remark 2. The proposed algorithm is backward stable because it relies only on Householder and Givens transformations [8].

Remark 3. We observe that the signs of the elements of the antidiagonal of Y and in the diagonal of L can be arbitrarily chosen.

Remark 4. The coefficients c and s of the Givens rotations involved in (2.13) and (2.20), transforming T into an antitriangular matrix, can be computed by solving the following equation:

    c^2 beta^2 - 2 s c alpha + s^2 gamma^ = 0.

The parameters c and s can be computed in a stable way from the largest root in absolute value of the quadratic equation [14]

    gamma^ t^2 - 2 alpha t + beta^2 = 0,   with t = s/c.

2.2. Computational cost. Given A^(k) = Q^(k) M^(k) Q^(k)T, we have just described an algorithm to reduce A^(k+1) to proper block antitriangular form by orthogonal transformations. The cost depends on which case occurs. Here we examine the cost for each separate case. The only operation common to every case is the product (2.2), requiring 2k^2 floating point operations.

(a) Checking whether ||[a~^T, gamma]||_2 = 0 requires O(k) floating point operations.
(b) The multiplication (2.3) requires 4 k0 k floating point operations.

(c) The products (2.4), (2.5), (2.6), (2.7), (2.10), (2.11), and (2.12) are common to this case, and their costs are 3 k1^2, 6 k1 k2, 6 k1^2 (due to the symmetry), 6 k1 k, 6 k2^2, 6 k2 k, and 6 k1 k2 floating point operations, respectively.

(c1) No additional floating point operations are required here.

(c2) The products (2.14), (2.16), (2.17), (2.18), and (2.19) require 3 k2^2, 6 k2 k, 6 k1 k2, 3 k1^2, and 6 k1 k floating point operations, respectively.

(c3) The products (2.21), L_c3 l_6 in (2.22), (2.23), and (2.24) require 3 k2^2, 2 k2^2, 6 k1 k2, and 6 k2 k floating point operations, respectively.

Hence the complexity of the algorithm strongly depends on the inertia of the principal submatrices of the symmetric matrix to be factorized, and it is of order n^3. The number of operations of the multiplications by the Givens rotations in the proposed algorithm can be halved if the latter rotations are replaced by fast Givens transformations [2].

2.3. Bounds and inequalities. Let A be an arbitrary symmetric matrix; then we have an antitriangular decomposition of the type

    Q^T A Q = M = [ 0  0  0  0   ] }n - (2 n1 + n2)
                  [ 0  0  0  Y^T ] }n1
                  [ 0  0  X  Z^T ] }n2
                  [ 0  Y  Z  W   ] }n1

where Q in R^{n x n} is orthogonal, W in R^{n1 x n1}, X = eps L L^T in R^{n2 x n2} with eps = +-1, and where the inertia is given by (n1, n - (2 n1 + n2), n1 + n2) if eps = 1 and by (n1 + n2, n - (2 n1 + n2), n1) if eps = -1. We want to relate the singular values of Y and L to the eigenvalues of A. We first note that the zero eigenvalues (and corresponding eigenvectors) are correctly identified by this decomposition. For the sake of simplicity we will therefore exclude them from our discussion, and we will thus assume that we have a nonsingular matrix A with the following decomposition:

    Q^T A Q = M = [ 0  0  Y^T ] }n1
                  [ 0  X  Z^T ] }n2
                  [ Y  Z  W   ] }n1

with n1 + n2 eigenvalues of sign eps and n1 eigenvalues of sign -eps, and with n = 2 n1 + n2. We will order the eigenvalues of A (or, equivalently, of M) and the singular values of L and Y in decreasing order and denote them as follows (here we assume eps = 1):

(2.25)  Lambda(A): lambda_1 >= lambda_2 >= ... >= lambda_{n1+n2} > 0 > lambda_{n1+n2+1} >= ... >= lambda_n,
(2.26)  Sigma(L): sigma_1 >= sigma_2 >= ... >= sigma_{n2} > 0,
(2.27)  Sigma(Y): gamma_1 >= gamma_2 >= ... >= gamma_{n1} > 0.

Notice that for the case eps = -1 it suffices to consider -A instead of A, and essentially the same discussion will apply. We will therefore restrict ourselves here to the case eps = 1. The first set of inequalities is easy to obtain.

Theorem 2.2. The eigenvalues of X are given by sigma_1^2 >= ... >= sigma_{n2}^2 > 0 and satisfy the inequalities

    0 < lambda_{i+n1} <= sigma_i^2 <= lambda_i,   i = 1, ..., n2.

Proof. To prove it, just apply the Cauchy interlacing theorem to the (n1 + n2) x (n1 + n2) principal submatrix

    [ 0  0 ]
    [ 0  X ]

of M, whose largest n2 eigenvalues are the sigma_i^2, i = 1, ..., n2.

We now look at another submatrix of M.

Theorem 2.3. The ordered eigenvalues lambda^_1 >= ... >= lambda^_{n1} > 0 > lambda^_{n1+1} >= ... >= lambda^_{2n1} of the 2 n1 x 2 n1 submatrix

    [ 0  Y^T ]
    [ Y  W   ]

of M satisfy the following inequalities:

(2.28)  0 < lambda_{i+n2} <= lambda^_i <= lambda_i,        i = 1, ..., n1,
(2.29)  lambda_{i+n1+n2} <= lambda^_{n1+i} < 0,            i = 1, ..., n1.

Proof. All the inequalities follow from applying the Cauchy interlacing inequalities to the given submatrix of M. But the upper bounds of the inequalities (2.29) are replaced by zero, because this is also guaranteed by our decomposition.

We now relate the 2 n1 eigenvalues lambda^_i to the n1 singular values gamma_i of the matrix Y.

Theorem 2.4. The ordered singular values gamma_1 >= ... >= gamma_{n1} > 0 of the n1 x n1 submatrix Y of M satisfy the following inequalities and equality:

(2.30)  prod_{i=1..j} gamma_i^2 <= prod_{i=1..j} lambda^_i * prod_{i=1..j} |lambda^_{2n1-i+1}|,   j = 1, ..., n1 - 1,

(2.31)  prod_{i=1..n1} gamma_i^2 = prod_{i=1..n1} lambda^_i * prod_{i=1..n1} |lambda^_{2n1-i+1}|.

Proof. Without loss of generality we can assume that Y is diagonal and ordered, Y = diag{gamma_1, ..., gamma_{n1}}. Consider the 2j x 2j submatrix (made of the leading j x j subblocks of Y and W)

    [ 0    Y_j^T ]
    [ Y_j  W_j   ]

of

    [ 0  Y^T ]
    [ Y  W   ].

Its eigenvalues lambda~_1 >= ... >= lambda~_j > 0 > lambda~_{j+1} >= ... >= lambda~_{2j} satisfy the bounds

(2.32)  0 < lambda~_i <= lambda^_i   and   lambda^_{2n1+1-i} <= lambda~_{2j+1-i} < 0,   i = 1, ..., j.

But the product prod_{i=1..j} lambda~_i * prod_{i=1..j} |lambda~_{2j-i+1}| of the absolute values of these 2j eigenvalues is also equal to prod_{i=1..j} gamma_i^2, since this is the absolute value of the determinant of the corresponding matrix. Combining this with the Cauchy interlacing inequalities (2.32) yields the desired inequalities (2.30). The product equality (2.31) follows from the expression of the determinant of the whole matrix.
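The interlacing bounds above are straightforward to sanity-check numerically. The sketch below (a hedged illustration: the block sizes, the random sampling, and the assumptions eps = 1 and n0 = 0 are choices made for the example) assembles a matrix M directly in the proper block antitriangular form and verifies Theorem 2.2 together with the inertia forced by the structure:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 3, 4
n = 2 * n1 + n2

# Assemble M = [[0, 0, Y^T], [0, X, Z^T], [Y, Z, W]] with X = L L^T
# positive definite and Y nonsingular lower antitriangular.
L = np.tril(rng.standard_normal((n2, n2))) + 3.0 * np.eye(n2)
X = L @ L.T
Y = np.fliplr(np.tril(rng.standard_normal((n1, n1)) + 2.0 * np.eye(n1)))
Z = rng.standard_normal((n1, n2))
W = rng.standard_normal((n1, n1))
W = (W + W.T) / 2.0

M = np.zeros((n, n))
M[:n1, n1 + n2:] = Y.T
M[n1 + n2:, :n1] = Y
M[n1:n1 + n2, n1:n1 + n2] = X
M[n1:n1 + n2, n1 + n2:] = Z.T
M[n1 + n2:, n1:n1 + n2] = Z
M[n1 + n2:, n1 + n2:] = W

lam = np.sort(np.linalg.eigvalsh(M))[::-1]               # decreasing order
sig2 = np.sort(np.linalg.svd(L, compute_uv=False) ** 2)[::-1]

# The structure forces inertia (n1, 0, n1 + n2) ...
assert (lam > 0).sum() == n1 + n2 and (lam < 0).sum() == n1
# ... and Theorem 2.2: lambda_{i+n1} <= sigma_i^2 <= lambda_i.
tol = 1e-10
for i in range(n2):
    assert lam[i + n1] <= sig2[i] + tol and sig2[i] <= lam[i] + tol
```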

We now use this to obtain bounds involving the eigenvalues of A.

Theorem 2.5. The ordered singular values gamma_1 >= ... >= gamma_{n1} > 0 of the n1 x n1 submatrix Y of M satisfy the following inequalities:

(2.33)  prod_{i=1..j} gamma_i^2 <= prod_{i=1..j} lambda_i * prod_{i=1..j} |lambda_{2n1+n2-i+1}|,   j = 1, ..., n1.

Proof. This follows trivially from combining Theorems 2.3 and 2.4.

Let us suppose that the matrix M~ is sparse, with the matrices

    L = diag(sigma_1, ..., sigma_{n2}),   W = diag(omega_1, ..., omega_{n1}),

and Y the antidiagonal matrix with antidiagonal entries gamma_1, ..., gamma_{n1}, as the only nonzero blocks. The eigenvalues of such a matrix M~ are easily seen to be sigma_i^2, i = 1, ..., n2, and the positive and negative eigenvalues of the 2 x 2 matrices

    [ 0        gamma_i ],   i = 1, ..., n1,
    [ gamma_i  omega_i ]

i.e., lambda^_i, i = 1, ..., 2 n1. If we choose the ordering

    lambda_i = lambda^_i,          i = 1, ..., n1,
    lambda_i = sigma_{i-n1}^2,     i = n1 + 1, ..., n1 + n2,
    lambda_i = lambda^_{i-n2},     i = n1 + n2 + 1, ..., 2 n1 + n2,

then the upper bounds of Theorem 2.5 become equalities. Notice that the lambda_i may not be ordered in a decreasing way. It also turns out that the matrices M obtained by our reduction algorithm are in fact reasonably near to such a sparse matrix M~. This means that the above diagonal approximation in general gives good estimates of the true eigenvalues of A. This will be illustrated in section 4 with numerical experiments.

We now prove that if A has only one (repeated) negative eigenvalue lambda- and one (repeated) positive eigenvalue lambda+, then the antitriangular form must have the above sparse structure, where moreover the sigma_i, gamma_i, and omega_i are constant, apart from the signs of the gamma_i, which can change.

Lemma 2.6. If A has only one (repeated) negative eigenvalue lambda- and one (repeated) positive eigenvalue lambda+, then the proper block antitriangular form is unique up to a sign matrix

    S = diag(+-1, ..., +-1) in R^{n1 x n1}

and is given by

    Q^T A Q = [ 0               0               gamma S J_{n1} ]
              [ 0               lambda+ I_{n2}  0              ]
              [ gamma J_{n1} S  0               omega I_{n1}   ],

where gamma = sqrt(-lambda- lambda+), omega = lambda+ + lambda-, and J_{n1} is the antiidentity of order n1.

Proof. Without loss of generality we suppose that the algebraic multiplicities of lambda- and lambda+ are n1 and n1 + n2, respectively, with 2 n1 + n2 = n. Let

    A = V [ lambda+ I_{n1+n2}  0              ] V^T
          [ 0                  lambda- I_{n1} ]

be the eigenvalue decomposition of A, with V in R^{n x n} orthogonal. For simplicity we consider the coordinate system where V = I_n, i.e.,

    A = [ lambda+ I_{n1}  0               0              ]
        [ 0               lambda+ I_{n2}  0              ]
        [ 0               0               lambda- I_{n1} ].

Any neutral subspace U_n of maximal dimension then has dimension n1 and is formed by the vectors x such that x^T A x = 0. If we normalize these vectors such that ||x||_2 = 1, then

    x = [ x_1 ] }n1
        [ x_2 ] }n2      with  ||[x_1; x_2]||_2 = c,  ||x_3||_2 = s,
        [ x_3 ] }n1

where c = sqrt(-lambda- / (lambda+ - lambda-)) and s = sqrt(lambda+ / (lambda+ - lambda-)). A basis for this space must therefore be given by

    Q_n = [ Q_1  0   ] [ c I_{n1}    ]
          [ 0    Q_2 ] [ 0_{n2 x n1} ]
                       [ s I_{n1}    ]

with Q_1 in R^{(n1+n2) x (n1+n2)} and Q_2 in R^{n1 x n1} orthogonal matrices. Any nonnegative subspace U_+ of maximal dimension containing U_n then has dimension n1 + n2 and must have an orthogonal basis

    Q_+ = [ Q_1  0   ] [ c I_{n1}  s X^  ]
          [ 0    Q_2 ] [ 0         Y^    ]
                       [ s I_{n1}  -c X^ ]

with X^T X^ + Y^T Y^ = I_{n2}. But

    Q_+^T A Q_+ = [ 0                            c s (lambda+ - lambda-) X^                                   ]
                  [ c s (lambda+ - lambda-) X^T  lambda+ Y^T Y^ + (lambda+ s^2 + lambda- c^2) X^T X^          ]

                = [ 0                            sqrt(-lambda+ lambda-) X^          ]
                  [ sqrt(-lambda+ lambda-) X^T   lambda+ I + lambda- X^T X^         ].

It is easily seen that this is positive semidefinite only if X^ = 0, and hence Y^ is an n2 x n2 orthogonal matrix Q_3. Therefore we have

    Q_+ = [ Q_1  0   ] [ I_{n1}  0    ] [ c I_{n1}  0      ]
          [ 0    Q_2 ] [ 0       Q_3  ] [ 0         I_{n2} ]
                       (0 padding)      [ s I_{n1}  0      ].

Finally, any orthogonal completion must then be of the form

    Q = [ Q^_1  0   ] [ c I_{n1}   0       s I_{n1} ] [ I_{n1+n2}  0        ]
        [ 0     Q_2 ] [ 0          I_{n2}  0        ] [ 0          S J_{n1} ],
                      [ -s I_{n1}  0       c I_{n1} ]

with Q^_1 = Q_1 diag(I_{n1}, Q_3), so that after applying this general transformation we find indeed the required antitriangular shape

    Q^T A Q = [ 0               0               gamma S J_{n1} ]
              [ 0               lambda+ I_{n2}  0              ]
              [ gamma J_{n1} S  0               omega I_{n1}   ].

Remark 5. Notice that Q is not unique, but the resulting proper block antitriangular matrix is, up to the sign matrix S.

3. Rank-one modification. In many applications it is important to update (downdate) in a fast way a symmetric indefinite matrix A in R^{n x n}, whose dominant/minor subspace is easily detectable, with a symmetric rank-one modification. Algorithms are available to reduce rank-one modifications of symmetric tridiagonal matrices, of diagonal ones, and of symmetric diagonal-plus-semiseparable ones to tridiagonal form. Let the symmetric matrix A of order n be factorized as A = Q T Q^T, where T is one of the latter matrices and Q is an orthogonal one. Although the rank-one modification of the matrix T can be computed in O(n^2) flops, the updating of the Q factor requires O(n^3) flops. The updating/downdating of the symmetric rank-revealing decomposition A = Q R^T Omega R Q^T, with Q orthogonal, R upper triangular, and Omega = diag(+-1), modified by a symmetric rank-one matrix, described in [13], requires O(n^2) flops. However, it involves hyperbolic transformations. These transformations can introduce severe cancellation in the updated matrix, even when choosing careful implementations [3].

In this section we show that a symmetric proper block antitriangular matrix can be updated/downdated with a symmetric rank-one modification with O(n^2) flops in a stable way.

Let y in R^n, and let A = QMQ^T with Inertia(A) = (n-, n0, n+), n- + n0 + n+ = n, and Q orthogonal. Let n1 = min(n-, n+) and n2 = max(n-, n+) - n1,

    M = [ 0  0  0  0   ] }n0
        [ 0  0  0  Y^T ] }n1
        [ 0  0  X  Z^T ] }n2
        [ 0  Y  Z  W   ] }n1

with Y in R^{n1 x n1} nonsingular lower antitriangular and X in R^{n2 x n2}, X = eps L L^T, with eps = 1 if n+ > n-, eps = -1 if n+ < n-, and L lower triangular. The aim is to

Fig. 3.1. First step of the algorithm. Application of the Householder similarity transformation annihilating the first n0 - 1 entries of the vector x. For the sake of simplicity, only the block antitriangular matrix M and the vector x^ are displayed. The entries to be annihilated are marked, and the entries modified by the multiplication are in red. The entries of the central definite submatrix X are denoted by a separate symbol.

transform QMQ^T +- yy^T = Q(M +- Q^T y y^T Q)Q^T, with M symmetric proper block antitriangular, into Q^ M^ Q^T, with M^ a matrix with the same structure as M and Q^ orthogonal, annihilating the entries of the vector x = Q^T y. Let N_1 = n0 + n1 and N_2 = n0 + n1 + n2. We first consider the case n0, n1, n2 > 0. The case n0 = 0 is shortly described afterwards. We divide the algorithm into four different steps.

3.1. First step. If n0 > 1, the first step is to determine a Householder matrix H^(1) such that the product H^(1) x(1:n0) has all its entries annihilated but the last one. Let H_1 = diag(H^(1), I_{n-n0}). Then

    Q(M +- xx^T)Q^T = QH_1^T (H_1 M H_1^T +- H_1 x x^T H_1^T) H_1 Q^T = Q_1 (M +- x^ x^T) Q_1^T,

with Q_1 = QH_1^T and x^ = H_1 x. We observe that M remains unchanged, since its first n0 rows/columns are zero. This reduction is depicted in Figure 3.1 for a proper block antitriangular matrix with inertia (3, 3, 8). We remark that the algorithm reverts to the n0 = 0 case if ||x(1:n0)||_2 = 0.

Computational complexity. Due to the special structure of the Householder matrix H_1, the multiplication Q_1 = QH_1^T requires 4 n0 n flops, and x^ = H_1 x requires 4 n0 flops. The number of operations could be reduced to 3 n0 n and 3 n0 by using a sequence of fast Givens transformations [2] rather than a single Householder transformation.

3.2. Second step. If n1 = min(n-, n+) > 0, i.e., if M is indefinite, in this step the entries n0, n0 + 1, ..., N_1 - 1 of x^ are annihilated by means of multiplication by a sequence of n1 Givens rotations G_i, i = 1, 2, ..., n1, such that G_i acts on the (n0 + i - 1)th and (n0 + i)th rows of G_{i-1} ... G_1 x^, annihilating the entry in position (n0 + i - 1). Let M^(0) = M and

    M^(i) = G_i M^(i-1) G_i^T,   x^ = G_i x^,   i = 1, ..., n1.

The product G_i M^(i-1) G_i^T modifies the last i entries of the rows/columns n0 + i - 1 and n0 + i of M^(i-1), introducing a nonzero entry in position (n0 + i - 1, n - i + 1) and, symmetrically, in position (n - i + 1, n0 + i - 1). The second step is depicted in Figure 3.2. For the sake of simplicity, the first n0 - 1 entries of x and the first n0 - 1 rows and columns of M are not depicted, since they are zero.
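The mechanics of this step can be sketched as follows (a hedged illustration: the routine zeroes a contiguous run of entries of x^ with rotations on consecutive rows, applies each rotation as an orthogonal similarity to M, and accumulates it into Q; the index conventions and names are illustrative, and the flop-saving exploitation of the antitriangular structure is omitted):

```python
import numpy as np

def annihilate_run(x, M, Q, lo, hi):
    """Zero x[lo:hi] with Givens rotations acting on rows (i, i+1),
    pushing the weight into x[hi]; the products Q @ M @ Q.T and Q @ x
    are invariants of the sweep."""
    x, M, Q = x.copy(), M.copy(), Q.copy()
    for i in range(lo, hi):
        r = np.hypot(x[i], x[i + 1])
        if r == 0.0:
            continue
        c, s = x[i + 1] / r, x[i] / r
        G = np.array([[c, -s], [s, c]])      # G @ [x_i, x_{i+1}] = [0, r]
        rows = [i, i + 1]
        x[rows] = G @ x[rows]
        M[rows, :] = G @ M[rows, :]          # similarity on M ...
        M[:, rows] = M[:, rows] @ G.T
        Q[:, rows] = Q[:, rows] @ G.T        # ... absorbed into Q
    return x, M, Q

rng = np.random.default_rng(2)
n = 6
B = rng.standard_normal((n, n))
M0 = (B + B.T) / 2.0
x0 = rng.standard_normal(n)
x1, M1, Q1 = annihilate_run(x0, M0, np.eye(n), 1, 4)
```

After the sweep, M1 +- x1 x1^T represents the same rank-one update as M0 +- x0 x0^T, expressed in the rotated coordinate system.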

Fig. 3.2. Second step of the algorithm. Application of a sequence of Givens rotations annihilating the entries n0, n0 + 1, ..., N_1 - 1 of the vector x^.

Fig. 3.3. Third step of the algorithm. Application of the sequence of Givens rotations annihilating all the entries but the last one of x^ corresponding to the central definite submatrix X.

Computational complexity. Due to the antitriangular structure of M^(0), the computation of M^(n1) = G_{n1} G_{n1-1} ... G_1 M^(0) G_1^T ... G_{n1-1}^T G_{n1}^T requires 3 n1^2 flops. Moreover, the computation of Q_2 = Q_1 G_1^T ... G_{n1-1}^T G_{n1}^T requires 6 n1 n flops.

3.3. Third step. The aim of this step is to annihilate all the entries but the last one of x^ corresponding to the central definite submatrix X by a sequence of Givens rotations. This step is depicted in Figure 3.3 ((a), (b)). Since X is in factored form, i.e., X = eps L L^T with eps = +-1, another sequence of inner Givens rotations is considered to preserve the factored structure. In particular, the first Givens rotation G'_1 is applied to x^, modifying the rows N_1 + 1 and N_1 + 2 and annihilating the entry N_1 + 1. Then the similarity transformation M^(n1+1) = G'_1 M^(n1) G'_1^T is computed. We observe that this transformation modifies the first two rows/columns of the submatrix X, creating a nonzero entry in the position (1,2) of L. To remove this bulge, a new Givens rotation G''_1 is applied to the right of L. We observe that the latter rotation acts only on the matrix L. The described procedure is depicted in Figure 3.4. Without loss of generality, only the central submatrix X and the corresponding entries of the vector x^ are depicted. Let

    M^(n1+n2-1) = G'_{n2-1} ... G'_1 M^(n1) G'_1^T ... G'_{n2-1}^T.

Finally, the row (column) N_1 of M^(n1+n2-1) is moved below (to the right of) the definite submatrix M^(n1+n2-1)(N_1+1 : N_2, N_1+1 : N_2) by the multiplication of a permutation matrix P. Let M^(n1+n2) = P M^(n1+n2-1) P^T and x^ = P x^. This transformation is depicted in Figure 3.3 ((b), (c)).

Computational complexity. Due to the antitriangular structure, the computation of M^(n1+n2-1) requires 6 n1 n2 + 3 n2^2 flops. Moreover, to remove the bulges of L in

17 SYMMERIC ANIRIANGULAR MARICES 189 Fig 34 hird step of the algorithm o preserve the Cholesky structure each multiplication by an outer Givens rotation Ǧi introducing a bulge in the lower triangular structure of L is followed by a multiplication by an inner Givens rotation Ği removing the bulge (a) (b) (c) Fig 35 Fourth step Reduction of the matrix ˆM (a) to the matrix either (b) (c) or(d) if the submatrix ˆM(N 1 : N 2 1N 1 : N 2 1) is definite singular or indefinite order to restore the lower triangular structure requires 3n 2 2 flops Furthermore the computation of Q 4 = Q 3 Ǧ 1 Ǧ n 2 1 requires 6n 2 n flops 34 Fourth step he rank-one matrix ˆxˆx is now added or subtracted to M (n1+n2) ˆM = M (n1+n2) ± ˆxˆx Since ˆx(1 : N 2 2) = ˆM differs from M (n1+n2) for the submatrix ˆM(N 2 1: N 2 + n 1 N 2 1:N 2 + n 1 ) Although ˆM(N 1 : N 2 2N 1 : N 2 2) is still definite nothing can be said about the definiteness of the submatrix ˆM(N 1 : N 2 N 1 : N 2 ) o complete the reduction it is sufficient to reduce to proper block antitriangular form the submatrix ˆM(N 1 : N 2 N 1 : N 2 ) by applying either one step of the algorithm described in section 2 starting from the symmetric definite submatrix ˆM(N 1 : N 2 2N 1 : N 2 2) if the submatrix ˆM(N 1 : N 2 1N 1 : N 2 1) is singular or two steps otherwise Of course the Givens rotations must be applied to the whole matrix ˆM Let ˆM (1) be the matrix obtained after one step of the reduction For the sake of clarity we describe the first of the two steps distinguishing the three different cases and the second step in only one subcase Case a ˆM(N1 : N 2 1N 1 : N 2 1) definite Applying the procedure c1 of section 21 the latter matrix is transformed to proper block antitriangular form his is graphically depicted in Figure 35 ((a) (b)) Case b ˆM(N1 : N 2 1N 1 : N 2 1) singular Applying the procedure c2 of section 21 the latter matrix is transformed to proper block antitriangular form his is graphically depicted in Figure 35 ((a) (c)) Case 
c. M̂(N1:N2−1, N1:N2−1) indefinite. Applying procedure c3 of section 2.1, the latter matrix is transformed to proper block antitriangular form. This is graphically depicted in Figure 3.5 ((a) → (d)). To complete this case and the whole reduction, a similarity transformation based on a Givens rotation acting on the rows

NICOLA MASTRONARDI AND PAUL VAN DOOREN

Fig. 3.6. Fourth step. Reduction of the matrix M̂^(1) (a) to the matrix (b), (c), or (d) if the submatrix M̂^(1)(N1:N2−1, N1:N2−1) is definite, singular, or indefinite, respectively.

(columns) N2−1 and N2, in order to annihilate the entry (N2−1, N1) ((N1, N2−1)), is considered. In what follows we describe the second step of the reduction of M̂^(1) to proper block antitriangular form for Case a (M̂^(1)(N1:N2−1, N1:N2−1) definite). Case c (M̂^(1)(N1:N2−1, N1:N2−1) indefinite) can be handled in a similar way. We distinguish the following three subcases:

1. Subcase a1. M̂^(1)(N1:N2, N1:N2) definite. Applying procedure c1 of section 2.1, the latter matrix is transformed to proper block antitriangular form. This is graphically depicted in Figure 3.6 ((a) → (b)).

2. Subcase a2. M̂^(1)(N1:N2, N1:N2) singular. Then the matrix M̂^(1) is singular. Applying procedure c3 of section 2.1, the latter matrix is transformed to proper block antitriangular form. This is graphically depicted in Figure 3.6 ((a) → (c)). Then, as described in c3 of section 2.1, a sequence of n1 Givens rotations Ḡ_i, i = 1, …, n1, each of them acting on the rows (columns) N1−i and N1−i+1, annihilating the entry (N1−i, n1−i+1), must be applied in order to reduce the whole matrix to proper block antitriangular form.

3. Subcase a3. M̂(N1:N2, N1:N2) indefinite. Applying procedure c3 of section 2.1, the latter matrix is transformed to proper block antitriangular form. This is graphically depicted in Figure 3.6 ((a) → (d)).

Computational complexity. The computational complexity of the initial substep is mainly due to the reduction to proper block antitriangular form of the central part of M̂, requiring, according to section 2.1, case c, O(n2(n1 + n2)) flops. Moreover, to update the matrix Q, O(n2 n) flops are required. Another O(n1 n) flops must be added in subcase a2, due to the multiplication of the sequence Ḡ_i, i = 1, …, n1, by the orthogonal matrix of the factorization.
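The branch taken in Cases a–c and in the subcases above is decided by the definiteness of a small symmetric block, tested numerically with a tolerance for the binary comparisons (the paper uses a tolerance τ in its experiments). A minimal sketch of such a test, assuming an eigenvalue-based check in Python/NumPy — the function name and scaling choice are our own illustration, not the paper's code:

```python
import numpy as np

def classify_block(S, tau=1e-12):
    """Classify a symmetric block as 'definite', 'singular', or 'indefinite'
    from the signs of its eigenvalues, treating eigenvalues within a
    relative tolerance tau of zero as zero."""
    lam = np.linalg.eigvalsh(S)
    scale = max(np.abs(lam).max(), 1.0)
    neg = int(np.sum(lam < -tau * scale))
    pos = int(np.sum(lam > tau * scale))
    zero = lam.size - neg - pos
    if zero > 0:
        return "singular"
    if neg > 0 and pos > 0:
        return "indefinite"
    return "definite"

# The three outcomes correspond to Cases a, b, and c:
print(classify_block(np.diag([1.0, 2.0])))    # definite
print(classify_block(np.diag([1.0, 0.0])))    # singular
print(classify_block(np.diag([1.0, -2.0])))   # indefinite
```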
3.5. Case n0 = 0. If n0 = 0, step 1 of the algorithm described above is skipped. Furthermore, step 2 is modified as follows. Let M^(0) = M. A sequence of Givens rotations G_i, i = 1, …, n1−1, is considered in order to annihilate the first n1−1 entries of x. Let M^(i) = G_i M^(i−1) G_iᵀ. The matrix M^(n1−1) differs from a proper block antitriangular matrix in the entries (i, 2n1+n2−i) and (2n1+n2−i, i), which are different from zero for i = 1, …, n1−1. In order to annihilate the latter entries, another sequence G_i, i = 1, …, n1−1, of Givens rotations, each of them acting on the rows and columns 2n1+n2−i and 2n1+n2−i+1, is considered, such that M^(n1+i−1) = G_i M^(n1+i−2) G_iᵀ, i = 1, …, n1−1. This step is graphically depicted in Figure 3.7 ((a) → (e)). Then the reduction proceeds similarly as for the case n0 ≠ 0 (see Figure 3.7, (e) → (f) → (g)).
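The first sweep of this case — a sequence of Givens rotations annihilating the leading entries of a vector, accumulated so that the same rotations can be applied to M as an orthogonal similarity — can be sketched as follows. This is a standalone Python/NumPy illustration of the rotation sequence, not the paper's implementation:

```python
import numpy as np

def zero_leading(x, k):
    """Annihilate x[0], ..., x[k-1] with k Givens rotations acting on
    adjacent pairs (i, i+1), accumulating them in an orthogonal G so that
    the same transformation can be applied to a symmetric M as G @ M @ G.T."""
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    G = np.eye(n)
    for i in range(k):
        a, b = x[i], x[i + 1]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = b / r, a / r
        R = np.eye(n)
        R[i, i], R[i, i + 1] = c, -s
        R[i + 1, i], R[i + 1, i + 1] = s, c
        x = R @ x            # now x[i] == 0 and x[i+1] carries the norm
        G = R @ G
    return x, G

# A symmetric similarity built from these rotations preserves eigenvalues.
rng = np.random.default_rng(0)
v = rng.standard_normal(5)
y, G = zero_leading(v, 3)
B = rng.standard_normal((5, 5))
M = B + B.T
M1 = G @ M @ G.T
```

The sorted eigenvalues of M and M1 coincide to machine precision, which is the invariance the whole reduction relies on.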

Fig. 3.7. Case n0 = 0.

4. Numerical experiments. Some numerical experiments showing the properties of the proposed algorithm are reported in this section. In particular, it is shown that the numerical results agree with the bounds described in section 2.3. The experiments are carried out in MATLAB. In all the examples we denote by A the initial symmetric indefinite matrix, by M the similar block antitriangular matrix computed by the proposed algorithm, by Q the computed orthogonal matrix such that A = QMQᵀ, and by M_d the symmetric indefinite matrix obtained by extracting the main diagonal from M and the antidiagonal from Y. Moreover, we denote by λ1(A), λ2(A), …, λ_{n−1}(A), λn(A) and by λ1(M_d), λ2(M_d), …, λ_{n−1}(M_d), λn(M_d) the eigenvalues of A and M_d, respectively. When M_d is close to M, the σ_i², i = 1, …, n2, are obviously good approximations of n2 eigenvalues of M, and λ_i(A)λ_{n−i+1}(A), i = 1, …, n1, are good approximations of the γ_i², with σ_i and γ_i defined in subsection 2.3, for an appropriate ordering of the eigenvalues of A. A fixed tolerance τ was chosen for the binary comparisons in the examples.

The considered examples illustrate well some of the properties we want to highlight. The first example is characterized by having two clusters of eigenvalues; as a consequence (see Lemma 2.6), the eigenvalues of M_d approximate those of A well. The second example shows that, if the entries of the matrix are chosen randomly, we obtain almost the same number of positive and negative eigenvalues; moreover, the computed proper block antitriangular matrix is almost antidiagonally dominant. The third and fourth examples are taken from the Matrix Market [15]. The eigenvalue estimation property for these more practical examples is less pronounced, except for the last example, which is diagonally dominant.

Example 1. In this example n = 100, d = [1.5*ones(40,1); -2.5*ones(60,1)] + 0.05*randn(n,1), and A = Q diag(d) Qᵀ, with Q a random orthogonal matrix. Therefore 40 eigenvalues of A are
clustered around 1.5 and 60 around −2.5. The computed matrix M is depicted in Figure 4.1. The eigenvalues of A and M_d are depicted in Figure 4.2. The backward error ‖A − QMQᵀ‖₂ is of the order of the machine precision.
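Example 1 can be reproduced, up to the randomness, with the following NumPy sketch. The paper's experiments are in MATLAB, and the noise level 0.05 is an assumption where the transcription is unclear; the clustered inertia (40 positive, 60 negative eigenvalues) is what the construction guarantees:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Two clusters of eigenvalues, near 1.5 and -2.5, as in Example 1
# (the noise level 0.05 is an assumption).
d = np.concatenate([1.5 * np.ones(40), -2.5 * np.ones(60)])
d += 0.05 * rng.standard_normal(n)

# Random orthogonal Q via the QR factorization of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(d) @ Q.T

lam = np.linalg.eigvalsh(A)
print(np.sum(lam > 0))   # 40  (cluster near 1.5)
print(np.sum(lam < 0))   # 60  (cluster near -2.5)
```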

Fig. 4.1. Plot of the entries of the block antitriangular matrix M of Example 1 computed by the proposed algorithm.

Fig. 4.2. Eigenvalues of A and M_d of Example 1.

Example 2. Let n = 100, B = randn(n), and A = B + Bᵀ. The matrix M computed by the proposed algorithm is depicted in Figure 4.3. The eigenvalues of A and M_d are depicted in Figure 4.4. The backward error ‖A − QMQᵀ‖₂ is of the order of the machine precision.

Example 3. The matrix A considered in this example is the FIDAPM05 matrix, generated by FIDAP and available at the Matrix Market [15], depicted in Figure 4.5. Its order is n = 42, and it has 14 negative eigenvalues, 1 zero, and 27 positive ones. The computed matrix M is displayed in Figure 4.6. The computed inertia is (14, 1, 27). The eigenvalues of A and M_d are depicted in Figure 4.7. The backward error ‖A − QMQᵀ‖₂ is of the order of the machine precision.

Example 4. The matrix A considered in this example is the real part of the complex symmetric matrix called QC324, available at the Matrix Market [15], modeling H2+ in an electromagnetic field and depicted in Figure 4.8. Its order is n = 324,

Fig. 4.3. Plot of the entries of matrix M of Example 2.

Fig. 4.4. Eigenvalues of A and M_d of Example 2.

Fig. 4.5. Plot of the entries of matrix A of Example 3.

Fig. 4.6. Plot of the entries of matrix M of Example 3.

Fig. 4.7. Eigenvalues of A and M_d of Example 3.

Fig. 4.8. Plot of the entries of matrix A of Example 4.

and it has 211 negative and 113 positive eigenvalues, respectively. The matrix M computed by the proposed algorithm is depicted in Figure 4.9. The eigenvalues of A and M_d are depicted in Figure 4.10. The eigenvalues of M_d approximate the eigenvalues of A well because A is strongly diagonally dominant. The backward error ‖A − QMQᵀ‖₂ is of the order of the machine precision.

Fig. 4.9. Plot of the entries of matrix M of Example 4.

Fig. 4.10. Eigenvalues of A and M_d of Example 4.

5. Conclusions. A new symmetric factorization A = QMQᵀ, where Q is orthogonal and M is block lower antitriangular, has been proposed. Since Q is orthogonal, the eigenvalues of A and M are the same. The sizes of the blocks of M are shown to yield the inertia of A, and the diagonal elements of some of the blocks are shown to yield, for the considered examples, good estimates of the eigenvalues of A.

Acknowledgments. The authors thank the anonymous associate editor and the anonymous referees for their constructive comments and suggestions. The first author would like to acknowledge the hospitality of the Katholieke Universiteit Leuven and the Catholic University of Louvain-la-Neuve, Belgium, where part of this work took place.
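The key invariance behind the conclusions — an orthogonal similarity preserves eigenvalues, and hence the inertia, and the factorization can be checked through the backward error ‖A − QMQᵀ‖₂ — can be illustrated with a random orthogonal Q standing in for the algorithm's output (the antitriangular reduction itself is not implemented here; this is only a sketch of the invariance):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B + B.T                       # symmetric indefinite, as in Example 2

# Any orthogonal similarity M = Q^T A Q preserves the spectrum
# (here Q is random, not the Q produced by the algorithm).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = Q.T @ A @ Q

lam_A = np.linalg.eigvalsh(A)
lam_M = np.linalg.eigvalsh(M)
backward_error = np.linalg.norm(A - Q @ M @ Q.T, 2) / np.linalg.norm(A, 2)

# Inertia (n_neg, n_zero, n_pos) read off from the eigenvalue signs.
inertia = (int(np.sum(lam_A < 0)), int(np.sum(lam_A == 0)),
           int(np.sum(lam_A > 0)))

print(np.allclose(lam_A, lam_M))   # True: same eigenvalues
print(backward_error < 1e-12)      # True: relative backward error is tiny
```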

REFERENCES

[1] J. O. Aasen, On the reduction of a symmetric matrix to tridiagonal form, BIT, 11 (1971).
[2] A. A. Anda and H. Park, Fast plane rotations with dynamic scaling, SIAM J. Matrix Anal. Appl., 15 (1994).
[3] A. W. Bojanczyk, R. P. Brent, P. Van Dooren, and F. de Hoog, A note on downdating the Cholesky factorization, SIAM J. Sci. Statist. Comput., 8 (1987).
[4] J. R. Bunch, C. P. Nielsen, and D. C. Sorensen, Rank-one modification of the symmetric eigenproblem, Numer. Math., 31 (1978).
[5] J. R. Bunch and B. N. Parlett, Direct methods for solving symmetric indefinite systems of linear equations, SIAM J. Numer. Anal., 8 (1971).
[6] J. R. Bunch and L. Kaufman, Some stable methods for calculating inertia and solving symmetric linear systems, Math. Comp., 31 (1977).
[7] G. H. Golub, Some modified matrix eigenvalue problems, SIAM Rev., 15 (1973).
[8] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore, MD.
[9] I. Gohberg, P. Lancaster, and L. Rodman, Indefinite Linear Algebra and Applications, Birkhäuser Verlag, Basel, Switzerland, 2005.
[10] N. I. M. Gould, On practical conditions for the existence and uniqueness of solutions to the general equality quadratic programming problem, Math. Program., 32 (1985).
[11] W. B. Gragg and W. J. Harrod, The numerically stable reconstruction of Jacobi matrices from spectral data, Numer. Math., 44 (1984).
[12] S. Hammarling and M. A. Singer, A canonical form for the algebraic Riccati equation, in Mathematical Theory of Networks and Systems, P. A. Fuhrmann, ed., Springer-Verlag, Berlin, Germany.
[13] P. C. Hansen and P. Yalamov, Computing symmetric rank-revealing decompositions via triangular factorization, SIAM J. Matrix Anal. Appl., 23 (2001).
[14] N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed., SIAM, Philadelphia.
[15] N. J. Higham, The Matrix Computation Toolbox, http://www.ma.man.ac.uk/~higham/mctoolbox.
[16] D. Kressner, C. Schröder, and D. S. Watkins, Implicit QR algorithms for palindromic and even eigenvalue problems, Numer. Algorithms, 51 (2009).
[17] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Numerical methods for palindromic eigenvalue problems: Computing the anti-triangular Schur form, Numer. Linear Algebra Appl., 16 (2009).
[18] N. Mastronardi, S. Chandrasekaran, and S. Van Huffel, Fast and stable algorithms for reducing diagonal plus semiseparable matrices to tridiagonal and bidiagonal form, BIT, 41 (2001).
[19] N. Mastronardi, E. Van Camp, and M. Van Barel, Divide and conquer algorithms for computing the eigendecomposition of symmetric diagonal plus semiseparable matrices, Numer. Algorithms, 39 (2005).
[20] N. Mastronardi and P. Van Dooren, Recursive approximation of the dominant eigenspace of an indefinite matrix, J. Comput. Appl. Math., 236 (2012).
[21] C. Mehl, On asymptotic convergence of nonsymmetric Jacobi algorithms, SIAM J. Matrix Anal. Appl., 30 (2008).
[22] H. Park and S. Van Huffel, Two-way bidiagonalization scheme for downdating the singular-value decomposition, Linear Algebra Appl., 222 (1995).
[23] S. Van Huffel and H. Park, Parallel tri- and bi-diagonalization of bordered bidiagonal matrices, Parallel Comput., 20 (1994).


More information

p q m p q p q m p q p Σ q 0 I p Σ 0 0, n 2p q

p q m p q p q m p q p Σ q 0 I p Σ 0 0, n 2p q A NUMERICAL METHOD FOR COMPUTING AN SVD-LIKE DECOMPOSITION HONGGUO XU Abstract We present a numerical method to compute the SVD-like decomposition B = QDS 1 where Q is orthogonal S is symplectic and D

More information

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given

More information

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name:

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name: Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition Due date: Friday, May 4, 2018 (1:35pm) Name: Section Number Assignment #10: Diagonalization

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

13-2 Text: 28-30; AB: 1.3.3, 3.2.3, 3.4.2, 3.5, 3.6.2; GvL Eigen2

13-2 Text: 28-30; AB: 1.3.3, 3.2.3, 3.4.2, 3.5, 3.6.2; GvL Eigen2 The QR algorithm The most common method for solving small (dense) eigenvalue problems. The basic algorithm: QR without shifts 1. Until Convergence Do: 2. Compute the QR factorization A = QR 3. Set A :=

More information

Nonlinear palindromic eigenvalue problems and their numerical solution

Nonlinear palindromic eigenvalue problems and their numerical solution Nonlinear palindromic eigenvalue problems and their numerical solution TU Berlin DFG Research Center Institut für Mathematik MATHEON IN MEMORIAM RALPH BYERS Polynomial eigenvalue problems k P(λ) x = (

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Algorithms Notes for 2016-10-31 There are several flavors of symmetric eigenvalue solvers for which there is no equivalent (stable) nonsymmetric solver. We discuss four algorithmic ideas: the workhorse

More information

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem

More information

Matrix Inequalities by Means of Block Matrices 1

Matrix Inequalities by Means of Block Matrices 1 Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Numerical Linear Algebra And Its Applications

Numerical Linear Algebra And Its Applications Numerical Linear Algebra And Its Applications Xiao-Qing JIN 1 Yi-Min WEI 2 August 29, 2008 1 Department of Mathematics, University of Macau, Macau, P. R. China. 2 Department of Mathematics, Fudan University,

More information

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d

More information

AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION

AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION Unspecified Journal Volume 00, Number 0, Pages 000 000 S????-????(XX)0000-0 AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION HUAJUN HUANG AND TIN-YAU TAM Abstract. The m-th root of the diagonal of the upper

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

Parallel Singular Value Decomposition. Jiaxing Tan

Parallel Singular Value Decomposition. Jiaxing Tan Parallel Singular Value Decomposition Jiaxing Tan Outline What is SVD? How to calculate SVD? How to parallelize SVD? Future Work What is SVD? Matrix Decomposition Eigen Decomposition A (non-zero) vector

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

Implementation of the regularized structured total least squares algorithms for blind image deblurring

Implementation of the regularized structured total least squares algorithms for blind image deblurring Linear Algebra and its Applications 391 (2004) 203 221 wwwelseviercom/locate/laa Implementation of the regularized structured total least squares algorithms for blind image deblurring N Mastronardi a,,1,

More information

Vector and Matrix Norms. Vector and Matrix Norms

Vector and Matrix Norms. Vector and Matrix Norms Vector and Matrix Norms Vector Space Algebra Matrix Algebra: We let x x and A A, where, if x is an element of an abstract vector space n, and A = A: n m, then x is a complex column vector of length n whose

More information

Eigen-solving via reduction to DPR1 matrices

Eigen-solving via reduction to DPR1 matrices Computers and Mathematics with Applications 56 (2008) 166 171 www.elsevier.com/locate/camwa Eigen-solving via reduction to DPR1 matrices V.Y. Pan a,, B. Murphy a, R.E. Rosholt a, Y. Tang b, X. Wang c,

More information

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 2: block displacement structure algorithms.

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 2: block displacement structure algorithms. On the application of different numerical methods to obtain null-spaces of polynomial matrices Part 2: block displacement structure algorithms JC Zúñiga and D Henrion Abstract Motivated by some control

More information

Interlacing Inequalities for Totally Nonnegative Matrices

Interlacing Inequalities for Totally Nonnegative Matrices Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

POSITIVE SEMIDEFINITE INTERVALS FOR MATRIX PENCILS

POSITIVE SEMIDEFINITE INTERVALS FOR MATRIX PENCILS POSITIVE SEMIDEFINITE INTERVALS FOR MATRIX PENCILS RICHARD J. CARON, HUIMING SONG, AND TIM TRAYNOR Abstract. Let A and E be real symmetric matrices. In this paper we are concerned with the determination

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Katholieke Universiteit Leuven Department of Computer Science

Katholieke Universiteit Leuven Department of Computer Science Structures preserved by matrix inversion Steven Delvaux Marc Van Barel Report TW 414, December 24 Katholieke Universiteit Leuven Department of Computer Science Celestijnenlaan 2A B-31 Heverlee (Belgium)

More information

18.06SC Final Exam Solutions

18.06SC Final Exam Solutions 18.06SC Final Exam Solutions 1 (4+7=11 pts.) Suppose A is 3 by 4, and Ax = 0 has exactly 2 special solutions: 1 2 x 1 = 1 and x 2 = 1 1 0 0 1 (a) Remembering that A is 3 by 4, find its row reduced echelon

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

Finite dimensional indefinite inner product spaces and applications in Numerical Analysis

Finite dimensional indefinite inner product spaces and applications in Numerical Analysis Finite dimensional indefinite inner product spaces and applications in Numerical Analysis Christian Mehl Technische Universität Berlin, Institut für Mathematik, MA 4-5, 10623 Berlin, Germany, Email: mehl@math.tu-berlin.de

More information

Matrix Multiplication Chapter IV Special Linear Systems

Matrix Multiplication Chapter IV Special Linear Systems Matrix Multiplication Chapter IV Special Linear Systems By Gokturk Poyrazoglu The State University of New York at Buffalo BEST Group Winter Lecture Series Outline 1. Diagonal Dominance and Symmetry a.

More information