Solving Symmetric Semi-definite (ill-conditioned) Generalized Eigenvalue Problems


1 Solving Symmetric Semi-definite (ill-conditioned) Generalized Eigenvalue Problems
Zhaojun Bai, University of California, Davis
Berkeley, August 19, 2016

2 Symmetric definite generalized eigenvalue problem
Ax = λBx, where A^T = A and B^T = B > 0.
Eigen-decomposition: AX = BXΛ, where
Λ = diag(λ_1, λ_2, ..., λ_n), X = (x_1, x_2, ..., x_n), X^T B X = I.
Assume λ_1 ≤ λ_2 ≤ ... ≤ λ_n.
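As a concrete illustration (not from the slides), the definite case can be checked with SciPy, whose scipy.linalg.eigh solves Ax = λBx for symmetric A and symmetric positive definite B; a minimal sketch:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = A + A.T                                  # A^T = A
G = rng.standard_normal((n, n))
B = G @ G.T + n * np.eye(n)                  # B^T = B > 0

lam, X = eigh(A, B)                          # A x = lambda B x, lam ascending
assert np.allclose(A @ X, B @ X @ np.diag(lam))   # A X = B X Lambda
assert np.allclose(X.T @ B @ X, np.eye(n))        # X^T B X = I
```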

3 LAPACK solvers
LAPACK routines xsygv, xsygvd, xsygvx are based on the following algorithm (Wilkinson 65):
1. compute the Cholesky factorization B = GG^T
2. compute C = G^{-1} A G^{-T}
3. compute the symmetric eigen-decomposition Q^T C Q = Λ
4. set X = G^{-T} Q
xsygv[d,x] could be numerically unstable if B is ill-conditioned:
|λ̂_i − λ_i| ≲ p(n) ( ‖B^{-1}‖_2 ‖A‖_2 + cond(B) |λ_i| ) ɛ
and
θ(x̂_i, x_i) ≲ p(n) ( ‖B^{-1}‖_2 ‖A‖_2 (cond(B))^{1/2} + cond(B) |λ_i| ) / specgap_i · ɛ
User's choice: between the inversion of an ill-conditioned Cholesky factor and the QZ algorithm, which destroys symmetry.
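A dense sketch of this four-step reduction, with NumPy/SciPy standing in for the LAPACK calls (illustrative only; the instability enters in step 2, where applying G^{-1} can amplify errors by cond(B)^{1/2}):

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

def sygv_sketch(A, B):
    """Sketch of the Wilkinson '65 reduction behind xsygv."""
    G = cholesky(B, lower=True)                        # 1. B = G G^T
    # 2. C = G^{-1} A G^{-T}, formed by two triangular solves
    C = solve_triangular(G, A, lower=True)
    C = solve_triangular(G, C.T, lower=True).T
    lam, Q = eigh(C)                                   # 3. Q^T C Q = Lambda
    X = solve_triangular(G, Q, lower=True, trans='T')  # 4. X = G^{-T} Q
    return lam, X
```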

4 Algorithms to address the ill-conditioning
1. Fix-Heiberger 72 (Parlett 80): explicit reduction
2. Chang-Chung Chang 74: SQZ method (QZ by Moler and Stewart 73)
3. Bunse-Gerstner 84: MDR method
4. Chandrasekaran 00: proper pivoting scheme
5. Davies-Higham-Tisseur 01: Cholesky+Jacobi
6. Working notes by Kahan 11 and Moler 14

5 This talk
Three approaches:
1. A LAPACK-style implementation of the Fix-Heiberger algorithm
2. An algebraic reformulation
3. Locally accelerated block preconditioned steepest descent (LABPSD)

6 This talk
Three approaches:
1. A LAPACK-style implementation of the Fix-Heiberger algorithm
   Status: beta release
2. An algebraic reformulation
   Status: completed basic theory and proof of concept
3. Locally accelerated block preconditioned steepest descent (LABPSD)
   Status: published two manuscripts, software to be released

7 A LAPACK-style implementation of the Fix-Heiberger algorithm (with C. Jiang)

8 A LAPACK-style solver
xsygvic: computes ε-stable eigenpairs when B^T = B ⪰ 0, with respect to a prescribed threshold ε.
The implementation is based on Fix-Heiberger's algorithm and is organized in three phases. Given the threshold ε, xsygvic determines that either:
1. A − λB is regular and has k (0 ≤ k ≤ n) ε-stable eigenvalues, or
2. A − λB is singular.
The new routine xsygvic has the following calling sequence:
xsygvic( ITYPE, JOBZ, UPLO, N, A, LDA, B, LDB, ETOL,
       & K, W, WORK, LDWORK, WORK2, LWORK2, IWORK, INFO )

9 xsygvic Phase I
1. Compute the eigenvalue decomposition of B (xsyev):
B^(0) = Q_1^T B Q_1 = D = diag(D^(0), E^(0)),
with block sizes n_1 and n_2, where the diagonal entries of D satisfy d_11 ≥ d_22 ≥ ... ≥ d_nn, and the elements of E^(0) are smaller than ε·d_11^(0).
2. Set E^(0) = 0, and update A and B^(0):
A^(1) = R_1^T Q_1^T A Q_1 R_1 = [ A_11^(1)  A_12^(1) ; A_12^(1)T  A_22^(1) ]
with block sizes n_1 and n_2, and
B^(1) = R_1^T B^(0) R_1 = diag(I_{n_1}, 0),
where R_1 = diag((D^(0))^{-1/2}, I).
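A simplified dense sketch of Phase I (illustrative only; the ε·d_11 threshold rule is as on the slide, and the LAPACK blocking and workspace details are omitted):

```python
import numpy as np

def phase1(A, B, eps):
    """Phase I of the Fix-Heiberger reduction (a simplified sketch).

    Diagonalize B, regard eigenvalues below eps*d_11 as zero, and scale
    the retained part to the identity.  Returns (A1, n1, Q1, R1), with
    B1 = diag(I_{n1}, 0) implicit.
    """
    d, Q1 = np.linalg.eigh(B)
    d, Q1 = d[::-1], Q1[:, ::-1]            # descending: d11 >= ... >= dnn
    n1 = int(np.sum(d > eps * d[0]))        # eigenvalues kept as nonzero
    r1 = np.concatenate([d[:n1] ** -0.5, np.ones(len(d) - n1)])
    R1 = np.diag(r1)                        # R1 = diag((D0)^{-1/2}, I)
    A1 = R1 @ (Q1.T @ A @ Q1) @ R1
    return A1, n1, Q1, R1
```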

10 xsygvic Phase I
3. Early exit: B is ε-well-conditioned. A − λB is regular and has n ε-stable eigenpairs (Λ, X):
A^(1) U = UΛ (xsyev), X = Q_1 R_1 U.

11 xsygvic Phase I: performance profile
Test matrices: A = Q_A D_A Q_A^T and B = Q_B D_B Q_B^T, where
Q_A, Q_B are random orthogonal matrices;
D_A is diagonal with −1 < D_A(i,i) < 1, i = 1, ..., n;
D_B is diagonal with 0 < ε < D_B(i,i) < 1, i = 1, ..., n;
12 cores on an Intel Ivy Bridge processor (Edison@NERSC).
[Figures: runtime of DSYGV vs. DSYGVIC with B > 0, and percentage of xsygvic runtime spent in its sections (xsyev of B, xsyev of A, others), as functions of the matrix order n.]
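The test matrices can be reproduced along these lines (a sketch; the value of eps below is an arbitrary stand-in for the prescribed threshold):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 500, 1e-12                                  # eps: illustrative
QA, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random orthogonal Q_A
QB, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random orthogonal Q_B
DA = np.diag(rng.uniform(-1.0, 1.0, n))              # -1 < D_A(i,i) < 1
DB = np.diag(rng.uniform(eps, 1.0, n))               # 0 < eps < D_B(i,i) < 1
A = QA @ DA @ QA.T
B = QB @ DB @ QB.T
```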

12 xsygvic Phase II
1. Compute the eigendecomposition of the (2,2)-block A_22^(1) of A^(1) (xsyev):
A_22^(2) = Q_22^(2)T A_22^(1) Q_22^(2) = diag(D^(2), E^(2)),
with block sizes n_3 and n_4, where the eigenvalues are ordered such that |d_11^(2)| ≥ |d_22^(2)| ≥ ... ≥ |d_{n_2 n_2}^(2)|, and the elements of E^(2) are smaller than ε·|d_11^(2)|.
2. Set E^(2) = 0, and update A^(1) and B^(1):
A^(2) = Q_2^T A^(1) Q_2, B^(2) = Q_2^T B^(1) Q_2,
where Q_2 = diag(I, Q_22^(2)).
3. Early exit: when A_22^(1) is an ε-well-conditioned matrix, A − λB is regular and has n_1 ε-stable eigenpairs (Λ, X):
A^(2) U = B^(2) UΛ (Schur complement and xsyev), X = Q_1 R_1 Q_2 U.

13 xsygvic Phase II (backup)
A^(2) U = B^(2) UΛ,   (1)
where
A^(2) = [ A_11^(2)  A_12^(2) ; A_12^(2)T  D^(2) ]  and  B^(2) = diag(I_{n_1}, 0),
with block sizes n_1 and n_2. Let
U = [ U_1 ; U_2 ],
with U_1 of size n_1 × n_1 and U_2 of size n_2 × n_1. The eigenvalue problem (1) becomes
F^(2) U_1 = ( A_11^(2) − A_12^(2) (D^(2))^{-1} A_12^(2)T ) U_1 = U_1 Λ   (xsyev)
U_2 = −(D^(2))^{-1} A_12^(2)T U_1
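In dense NumPy, the early-exit reduction of this backup slide reads as follows (a sketch, assuming the (2,2) block D^(2) is ε-well-conditioned):

```python
import numpy as np

def phase2_exit(A2, n1):
    """Early exit of Phase II (a sketch): with B2 = diag(I_{n1}, 0),
    eliminate U2 via the Schur complement and solve a standard n1 x n1
    symmetric eigenvalue problem."""
    A11 = A2[:n1, :n1]
    A12 = A2[:n1, n1:]
    D2  = A2[n1:, n1:]                            # eps-well-conditioned block
    F2 = A11 - A12 @ np.linalg.solve(D2, A12.T)   # F2 = A11 - A12 D2^{-1} A12^T
    lam, U1 = np.linalg.eigh(F2)                  # F2 U1 = U1 Lambda
    U2 = -np.linalg.solve(D2, A12.T @ U1)         # U2 = -D2^{-1} A12^T U1
    return lam, np.vstack([U1, U2])
```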

14 xsygvic Phase II: performance profile
Accuracy:
1. If B ⪰ 0 has n_2 zero eigenvalues: xsygv stops, since the Cholesky factorization of B could not be completed. xsygvic successfully computes n − n_2 ε-stable eigenpairs.
2. If B has n_2 small eigenvalues of about δ, both xsygv and xsygvic run, but produce results of different numerical quality.¹
n = 1000, n_2 = 100, δ = … and ε = …:
            Res1      Res2
  DSYGV     3.5e-8    1.7e-11
  DSYGVIC   9.5e-…    …e-12
n = 1000, n_2 = 100, δ = … and ε = …:
            Res1      Res2
  DSYGV     3.6e-6    1.8e-10
  DSYGVIC   1.3e-…    …e-14
¹ Res1 = ‖AX̂ − BX̂Λ̂‖_F / (n‖A‖_F‖X̂‖_F) and Res2 = ‖X̂^T B X̂ − I‖_F / (‖B‖_F‖X̂‖_F²).
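The two residual measures of the footnote can be computed directly (a sketch):

```python
import numpy as np

def res1(A, B, lam, X):
    """Res1 = ||A X - B X diag(lam)||_F / (n ||A||_F ||X||_F)."""
    n = A.shape[0]
    R = A @ X - B @ X @ np.diag(lam)
    return np.linalg.norm(R) / (n * np.linalg.norm(A) * np.linalg.norm(X))

def res2(B, X):
    """Res2 = ||X^T B X - I||_F / (||B||_F ||X||_F^2)."""
    k = X.shape[1]
    return (np.linalg.norm(X.T @ B @ X - np.eye(k))
            / (np.linalg.norm(B) * np.linalg.norm(X) ** 2))
```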

15 xsygvic Phase II: performance profile
Timing: test matrices A = Q_A D_A Q_A^T and B = Q_B D_B Q_B^T, where
Q_A, Q_B are random orthogonal matrices;
D_A is diagonal with −1 < D_A(i,i) < 1, i = 1, ..., n;
D_B is diagonal with 0 < D_B(i,i) < 1, i = 1, ..., n, and a fraction n_2/n of the D_B(i,i) are below ε.
12 cores on an Intel Ivy Bridge processor (Edison@NERSC).
[Figures: runtime of xsygv vs. xsygvic, and percentage of xsygvic runtime spent in its sections (xsyev of B, xsyev of F^(2), others), as functions of the matrix order n.]

16 xsygvic Phase II: performance profile
Why is the overhead ratio of xsygvic lower? The performance of xsygv varies depending on the percentage of zero eigenvalues of B. For example, for n = 4000 on a 12-core processor execution:
[Figure: runtime of DSYGV for n = 4000, split into DSYEV and others, as the percentage of "zero" eigenvalues of B ranges over 20%, 33%, 50%.]

17 xsygvic Phase III
1. A^(2) and B^(2) can be written as 3-by-3 block matrices, with block sizes n_1, n_3, n_4:
A^(2) = [ A_11^(2)  A_12^(2)  A_13^(2) ; A_12^(2)T  D^(2)  0 ; A_13^(2)T  0  0 ]
and
B^(2) = diag(I_{n_1}, 0, 0),
where n_3 + n_4 = n_2.
2. Reveal the rank of A_13^(2) by QR decomposition with column pivoting:
A_13^(2) P_13^(3) = Q_13^(3) R_13^(3), where R_13^(3) = [ A_14^(3) ; 0 ]
with row blocks of sizes n_4 and n_5.
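Step 2 maps onto a column-pivoted QR, e.g. via scipy.linalg.qr (a sketch; the rank tolerance below mirrors the ε-threshold used in Phases I and II):

```python
import numpy as np
from scipy.linalg import qr

def reveal_rank(A13, eps):
    """Rank of the n1 x n4 block A13 via column-pivoted QR (a sketch):
    A13 P = Q R with |R[0,0]| >= |R[1,1]| >= ...; diagonal entries of R
    below eps * |R[0,0]| are treated as zero."""
    Q13, R13, piv = qr(A13, pivoting=True)
    d = np.abs(np.diag(R13))
    rank = int(np.sum(d > eps * d[0])) if d.size and d[0] > 0 else 0
    return rank, Q13, R13, piv
```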

18 xsygvic Phase III
3. Final exit: when n_1 > n_4 and A_13^(2) has full rank,² A − λB is regular and has n_1 − n_4 ε-stable eigenpairs (Λ, X):
A^(3) U = B^(3) UΛ, X = Q_1 R_1 Q_2 Q_3 U.
² All other cases lead A − λB either to be singular, or to be regular but with no finite eigenvalues.

19 xsygvic Phase III (backup)
Update
A^(3) U = B^(3) UΛ,   (2)
where A^(3) = Q_3^T A^(2) Q_3 and B^(3) = Q_3^T B^(2) Q_3, with
Q_3 = diag(Q_13^(3), I, P_13^(3))
acting on the blocks of sizes n_1, n_3, n_4. Write A^(3) and B^(3) as 4-by-4 block matrices, with block sizes n_4, n_5, n_3, n_4:
A^(3) = [ A_11^(3)  A_12^(3)  A_13^(3)  A_14^(3) ; A_12^(3)T  A_22^(3)  A_23^(3)  0 ; A_13^(3)T  A_23^(3)T  D^(2)  0 ; A_14^(3)T  0  0  0 ]
and
B^(3) = diag(I_{n_4}, I_{n_5}, 0, 0),
where n_1 = n_4 + n_5 and n_2 = n_3 + n_4.

20 xsygvic Phase III (backup)
Let
U = [ U_1 ; U_2 ; U_3 ; U_4 ],
with row blocks of sizes n_4, n_5, n_3, n_4 and n_5 columns. Then the eigenvalue problem (2) becomes:
U_1 = 0
( A_22^(3) − A_23^(3) (D^(2))^{-1} A_23^(3)T ) U_2 = U_2 Λ   (xsyev)
U_3 = −(D^(2))^{-1} A_23^(3)T U_2
U_4 = −(A_14^(3))^{-1} ( A_12^(3) U_2 + A_13^(3) U_3 )

21 xsygvic Phase III: performance profile
Test case (Fix-Heiberger 72):
1. Consider the 8 × 8 matrices
A = Q^T H Q and B = Q^T S Q,
where Q is orthogonal, H is the 8 × 8 matrix of Fix-Heiberger's example, and
S = diag(1, 1, 1, 1, δ, δ, δ, δ).
As δ → 0, λ = 3, 4 are the only stable eigenvalues of A − λB.

22 xsygvic Phase III: performance profile
2. The computed eigenvalues for small δ:
[Table: λ_i as computed by eig(A,B,'chol'), DSYGV, and DSYGVIC(ε); eig(A,B,'chol') and DSYGV return spurious values reaching magnitudes of order 10^16, while DSYGVIC(ε) reports the ε-stable eigenvalues λ = 3, 4.]

23 An algebraic reformulation (with H. Xie)

24 Symmetric semi-definite pencil
Symmetric semi-definite pencil: A − λB, with A^T = A and B^T = B ⪰ 0.

25 Symmetric semi-definite pencil
Canonical form. There exists a nonsingular matrix W ∈ R^{n×n} such that
W^T A W = diag(Λ_1, Λ_2, Λ_3, 0) and W^T B W = diag(Ω_1, I, 0, 0),
with block sizes 2n_1, r, n_2, s, where
Λ_1 = I_{n_1} ⊗ K, Λ_2 = diag(λ_i), Λ_3 = diag(±1), Ω_1 = I_{n_1} ⊗ T,
and
K = [ 0 1 ; 1 0 ], T = [ 1 0 ; 0 0 ].

26 Algebraic reformulation
Theorem [Xie and B. 16]. Suppose that the symmetric semi-definite pencil A − λB is regular and simultaneously diagonalizable by a congruence transformation. Given a symmetric positive definite matrix H ∈ R^{k×k} and µ ∈ R, define
Ã = A + µ(AZ)H(AZ)^T, B̃ = B + (AZ)H(AZ)^T,
where Z ∈ R^{n×k} spans the nullspace of B. Then³
(1) the pencil Ã − λB̃ is symmetric definite,
(2) λ(Ã, B̃) = λ_f(A, B) ∪ λ(µH + (Z^T A Z)^{-1}, H).
With appropriately chosen H and µ, one can compute the k smallest (finite) eigenvalues of A − λB directly, say by LOBPCG.
³ Notation: λ(A, B) denotes the set of eigenvalues of a pencil A − λB; λ_f(A, B) denotes the set of all finite eigenvalues of A − λB.
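A proof-of-concept of the reformulation in NumPy/SciPy (a sketch: H = I and µ = 1 are one admissible choice, the nullspace basis Z is taken from the small eigenvalues of B, and the k auxiliary eigenvalues λ(µH + (Z^T A Z)^{-1}, H) appear in the returned spectrum alongside λ_f(A, B)):

```python
import numpy as np
from scipy.linalg import eigh

def reformulate(A, B, eps, mu=1.0):
    """Sketch of the Xie-Bai reformulation with H = I_k:
        At = A + mu*(A Z)(A Z)^T,   Bt = B + (A Z)(A Z)^T,
    so that At - lambda*Bt is symmetric definite and carries all finite
    eigenvalues of A - lambda*B."""
    d, Q = eigh(B)
    Z = Q[:, d < eps * d.max()]        # (approximate) nullspace of B
    AZ = A @ Z
    At = A + mu * (AZ @ AZ.T)
    Bt = B + AZ @ AZ.T
    return eigh(At, Bt)                # spectrum of the definite pencil
```

At scale one would keep At and Bt as low-rank updates and hand them to an iterative solver such as scipy.sparse.linalg.lobpcg rather than forming them densely.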

27 Algebraic reformulation
A test case from structural dynamics.
[Figure: LOBPCG residual norms of the 1st to 5th eigenvalues vs. iterations.]

28 Algebraic reformulation
A test case from structural dynamics: algebraic reformulation + LOBPCG without preconditioning.
[Figure: residual norms of the 1st to 5th eigenvalues vs. iterations.]

29 Algebraic reformulation
A test case from structural dynamics: algebraic reformulation + LOBPCG with preconditioning.
[Figure: residual norms of the 1st to 5th eigenvalues vs. iterations.]

30 A locally accelerated BPSD (LABPSD)

31 Ill-conditioned GSEP
GSEP: Hu_i = λ_i S u_i
1. The matrices H and S are ill-conditioned, e.g., cond(H), cond(S) = O(10^10).
2. H and S share a near-nullspace span{V}, e.g., ‖HV‖ = ‖SV‖ = O(10^{−4}).
3. There is no obvious spectral gap between the eigenvalues of interest and the rest, e.g., the lowest 8.

32 PSDid
PSDid: Preconditioned Steepest Descent with implicit deflation.
Assume the first i − 1 eigenpairs (λ_1, u_1), ..., (λ_{i−1}, u_{i−1}) have been computed, and denote U_{i−1} = [u_1, u_2, ..., u_{i−1}]. PSDid computes the i-th eigenpair (λ_i, u_i):
0 initialize (λ_{i;0}, u_{i;0})
1 for j = 0, 1, ... until convergence
2   compute r_{i;j} = Hu_{i;j} − λ_{i;j} S u_{i;j}
3   precondition p_{i;j} = K_{i;j} r_{i;j}
4   (γ_i, w_i) = RR(H, S, Z), where Z = [U_{i−1}, u_{i;j}, p_{i;j}]
5   λ_{i;j+1} = γ_i, u_{i;j+1} = Z w_i
..., [Faddeev/Faddeeva 63], ..., [Longsine/McCormick 80] for K_{i;j} = I, ...
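A dense sketch of the PSDid loop (illustrative: RR(H, S, Z) is realized as a Rayleigh-Ritz projection, and K_apply is any user-supplied preconditioner application):

```python
import numpy as np
from scipy.linalg import eigh

def rayleigh_ritz(H, S, Z):
    """(gamma, W) = RR(H, S, Z): Ritz pairs of (H, S) from span(Z)."""
    return eigh(Z.T @ H @ Z, Z.T @ S @ Z)

def psdid(H, S, U_prev, u, K_apply, maxit=100, tol=1e-10):
    """Compute the i-th eigenpair given U_prev = [u_1, ..., u_{i-1}]
    (a sketch); K_apply(r) applies the preconditioner K_{i;j}."""
    k = U_prev.shape[1]                     # k = i-1 pairs already locked
    lam = float(u @ H @ u) / float(u @ S @ u)
    for _ in range(maxit):
        r = H @ u - lam * (S @ u)           # step 2: residual
        if np.linalg.norm(r) <= tol:
            break
        p = K_apply(r)                      # step 3: precondition
        Z = np.column_stack([U_prev, u, p]) # step 4: RR on [U_{i-1}, u, p]
        gamma, W = rayleigh_ritz(H, S, Z)
        lam, u = gamma[k], Z @ W[:, k]      # step 5: i-th Ritz pair
    return lam, u
```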

33 PSDid assumptions
1. initialize u_{i;0} such that U_{i−1}^H S u_{i;0} = 0 and ‖u_{i;0}‖_S = 1
2. λ_{i;0} = ρ(u_{i;0}), the Rayleigh quotient
3. the preconditioners K_{i;j} are effective positive definite, namely,
K_{i;j}^d ≡ (U_{i−1}^c)^T S K_{i;j} S U_{i−1}^c > 0, where U_{i−1}^c = [u_i, u_{i+1}, ..., u_n].

34 PSDid properties
1. Z is of full column rank
2. U_{i−1}^H S u_{i;j+1} = 0 and ‖u_{i;j+1}‖_S = 1
3. λ_i ≤ λ_{i;j+1} < λ_{i;j}
4. λ_{i;j} − λ_{i;j+1} ≥ g²/(g² + φ²), with step size g > 0
[Diagram: the step size g locates λ_{i;j+1} relative to λ_{i;j} on the axis λ_{i−1} ... λ_i ... λ_{i+1}.]
5. p_{i;j} = K_{i;j} r_{i;j} is an ideal search direction if p_{i;j} satisfies
U^T S (u_{i;j} + p_{i;j}) = (∗, ..., ∗, ξ_i, 0, ..., 0)^T and ξ_i ≠ 0.   (3)
It implies that λ_{i;j+1} = λ_i.

35 PSDid convergence
If λ_i < λ_{i;0} < λ_{i+1} and sup_j cond(K_{i;j}^d) = q < ∞, then the sequence {λ_{i;j}}_j is strictly decreasing and bounded from below by λ_i, i.e.,
λ_{i;0} > λ_{i;1} > ... > λ_{i;j} > λ_{i;j+1} > ... ≥ λ_i,
and as j → ∞,
1. λ_{i;j} → λ_i
2. u_{i;j} converges to u_i directionally:
‖r_{i;j}‖_{S^{-1}} = ‖Hu_{i;j} − λ_{i;j} S u_{i;j}‖_{S^{-1}} → 0

36 PSDid convergence rate
Let ɛ_{i;j} = λ_{i;j} − λ_i. Then
ɛ_{i;j+1} ≤ [ (Δ + τ θ_{i;j} √ɛ_{i;j}) / (1 − τ(θ_{i;j} √ɛ_{i;j} + δ_{i;j} ɛ_{i;j})) ]² ɛ_{i;j},
provided that the i-th approximate eigenvalue λ_{i;j} is localized, i.e.,
τ(θ_{i;j} √ɛ_{i;j} + δ_{i;j} ɛ_{i;j}) < 1,
where Δ = (Γ − γ)/(Γ + γ) and τ = 2/(Γ + γ),
δ_{i;j} = ‖S^{1/2} K_{i;j} S^{1/2}‖ and θ_{i;j} = ‖S^{1/2} K_{i;j} M K_{i;j} S^{1/2}‖,
Γ and γ are the largest and smallest positive eigenvalues of K_{i;j} M, and
M = P_{i−1}^H (H − λ_i S) P_{i−1} with P_{i−1} = I − U_{i−1} U_{i−1}^H S.

37 PSDid convergence rate, cont'd
Remarks:
1. If K_{i;j} = I, this is the convergence of SD proven in [Faddeev/Faddeeva 63, ..., Longsine/McCormick 80, ...]
2. If i = 1 and K_{1;j} = K > 0, it is Samokish's theorem (1958), the first and still sharpest quantitative analysis [Ovtchinnikov 06].
3. Asymptotically,
ɛ_{i;j+1} ≤ [ Δ + O(ɛ_{i;j}^{1/2}) ]² ɛ_{i;j}
4. Optimal K_{i;j}: Δ = 0 ⇒ quadratic convergence
5. Semi-optimal K_{i;j}: Δ + τ θ_{i;j} √ɛ_{i;j} → 0 ⇒ superlinear convergence
6. (Semi-)optimality depends on the eigenvalue distribution of K_{i;j} M

38 Locally accelerated preconditioner
Consider the preconditioner
K_{i;j} = (H − β_{i;j} S)^{-1} with β_{i;j} = λ_{i;j} − c‖r_{i;j}‖_{S^{-1}}.
If 0 < Δ_{i;j} < min{Δ_i, 0.1} and c > 3Δ_{i;j}, then
1. K_{i;j} is effective positive definite
2. λ_{i;j} is localized
3. Δ + τ θ_{i;j} √ɛ_{i;j} → 0
Therefore, K_{i;j} is asymptotically optimal.
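Applying this preconditioner amounts to one shifted indefinite solve per step (a sketch: dense LU stands in for the sparse factorization one would use at scale, and c = 1 is an arbitrary illustrative constant):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def local_preconditioner(H, S, lam, r, c=1.0):
    """Apply K = (H - beta*S)^{-1} with beta = lam - c*||r||_{S^{-1}}
    (a sketch; c is illustrative, cf. the condition c > 3*Delta_{i;j})."""
    rs = float(r @ np.linalg.solve(S, r)) ** 0.5   # ||r||_{S^{-1}}
    beta = lam - c * rs
    lu, piv = lu_factor(H - beta * S)              # shifted indefinite solve
    return lu_solve((lu, piv), r)
```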

39 PSDid ⇒ LABPSD = Locally Accelerated BPSD
0 Initialize U_{m+l;0} = [u_{1;0}, u_{2;0}, ..., u_{m+l;0}]
1 (Γ, W) = RR(H, S, U_{m+l;0})
2 update Λ_{m+l;0} = Γ and U_{m+l;0} = U_{m+l;0} W
3 for j = 0, 1, ..., do
4   compute R = HU_{m;j} − SU_{m;j} Λ_{m;j} = [r_{1;j}, r_{2;j}, ..., r_{m;j}]
5   if Res[Λ_{m;j}, U_{m;j}] = max_{1≤i≤m} Res[λ_{i;j}, u_{i;j}] ≤ τ_eig, break
6   for i = 1, 2, ..., m: if λ_{i;j} is localized, then solve (H − λ_{i;j} S) p_{i;j} = r_{i;j} for p_{i;j}
7   (Γ_{m+l}, W_{m+l}) = RR(H, S, Z), where Z = [U_{m+l;j}, P_j]
8   update Λ_{m+l;j+1} = Γ_{m+l} and U_{m+l;j+1} = Z W_{m+l}
9 end
10 return {(λ_{i;j}, u_{i;j})}_{i=1}^m
Note: a global preconditioner (H − σS)^{-1} can be used to accelerate the localization and convergence of step 6.

40 Numerical example 1: Harmonic1D
PUFE discretization of a harmonic oscillator in 1D.
n = 112 for 6-digit accuracy in the 4 smallest eigenvalues λ_1, λ_2, λ_3, λ_4.
H and S are ill-conditioned.
H and S share a near-nullspace span{V}: ‖HV‖ = ‖SV‖ = O(10^{−5}) and dim(V) = 17.
All computed λ_1, λ_2, λ_3, λ_4 have 6-digit accuracy.
[Figures: relative residuals Res_1, ..., Res_4 vs. eigen-iteration for DPSD and LABPSD.]

41 Numerical example 2: CeAl-PUFE
Metallic, triclinic CeAl, particularly challenging [Cai, B., Pask, Sukumar 13].
n = 5336 from a PUFE discretization of the Kohn-Sham equation.
H and S are ill-conditioned.
H and S share a near-nullspace span{V}: ‖HV‖ = ‖SV‖ = O(10^{−4}) and dim(V) = …
[Figures: relative residuals Res_1, ..., Res_4 vs. eigen-iteration for DPSD and LABPSD.]
