Polynomial filtering for interior eigenvalue problems
Yousef Saad
Department of Computer Science and Engineering, University of Minnesota
1 Polynomial filtering for interior eigenvalue problems. Yousef Saad, Department of Computer Science and Engineering, University of Minnesota. SIAM CSE, Boston - March 1, 2013
2 First: joint work with Haw-ren Fang, and with Grady Schofield and Jim Chelikowsky [UT Austin] [windowing into PARSEC]. Work supported in part by NSF (to 2012) and now by DOE. SIAM-CSE February 27, 2013
3 Introduction. Q: How do you compute eigenvalues in the middle of the spectrum of a large Hermitian matrix? A: Method of choice: shift-and-invert + some projection process (Lanczos, subspace iteration, ...). Main steps: 1) select a shift (or sequence of shifts) σ; 2) factor A - σI: A - σI = LDL^T; 3) apply the Lanczos algorithm to (A - σI)^{-1}. Solves with A - σI are carried out using the factorization. Limitation: the factorization.
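The three steps above can be sketched in a few lines. This is a NumPy-only illustration on a small 1-D Laplacian (the matrix, shift, and Lanczos parameters are assumptions for the demo; a dense `np.linalg.solve` stands in for the LDL^T factorization):

```python
import numpy as np

n, sigma, m = 199, 1.0, 60
off = -np.ones(n - 1)
A = np.diag(2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)  # 1-D Laplacian

B = A - sigma * np.eye(n)
solve = lambda v: np.linalg.solve(B, v)   # apply (A - sigma*I)^{-1}

# plain Lanczos (with full reorthogonalization, for simplicity) on (A - sigma*I)^{-1}
V = np.zeros((n, m + 1)); alpha = np.zeros(m); beta = np.zeros(m)
v = np.random.default_rng(0).standard_normal(n)
V[:, 0] = v / np.linalg.norm(v)
for j in range(m):
    wv = solve(V[:, j])
    alpha[j] = V[:, j] @ wv
    wv -= V[:, :j + 1] @ (V[:, :j + 1].T @ wv)   # reorthogonalize
    beta[j] = np.linalg.norm(wv)
    V[:, j + 1] = wv / beta[j]
T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
theta = np.linalg.eigvalsh(T)

# Ritz values theta of (A - sigma*I)^{-1} map back to eigenvalues sigma + 1/theta
approx = np.sort(sigma + 1.0 / theta[np.argsort(-np.abs(theta))[:5]])
exact = 2 - 2 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1))
nearest = np.sort(exact[np.argsort(np.abs(exact - sigma))[:5]])
print(np.max(np.abs(approx - nearest)))   # small: interior eigenvalues found
```

The dominant Ritz values of (A - σI)^{-1} correspond to the eigenvalues of A closest to σ, which is exactly why shift-and-invert converges so fast when the factorization is affordable.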
4 Q: What if factoring A is too expensive (e.g., a large 3-D simulation)? A: Obvious answer: use iterative solvers... However, the systems are highly indefinite, so this won't work too well. Digression: still lots of work to do in iterative solution methods for highly indefinite linear systems.
5 Other common characteristic: need a very large number of eigenvalues and eigenvectors. Applications: excited states in quantum physics (TDDFT, GW, ...), or just plain Density Functional Theory (DFT). The number of wanted eigenvectors equals the number of occupied states [== the number of valence electrons in DFT]. An example: in a real-space code (PARSEC), you can have a Hamiltonian of size a few million, and a number of eigenvalues in the tens of thousands.
6 Polynomial filtered Lanczos. Possible solution: use Lanczos with polynomial filtering. The idea is not new (and was not too popular in the past). What is new? 1. Very large problems; 2. (tens of) thousands of eigenvalues; 3. parallelism. The most important factor is 2. Main rationale of polynomial filtering: reduce the cost of orthogonalization. Important application: compute the spectrum by pieces [spectrum slicing, a term coined by B. Parlett].
7 Introduction: What is filtered Lanczos? In short: we just replace (A - σI)^{-1} in shift-and-invert Lanczos by p_k(A), where p_k(t) is a polynomial of degree k. Say we want to compute eigenvalues near σ = 1 of a matrix A with Λ(A) ⊂ [0, 4]. Use the simple transform p_2(t) = 1 - α(t - σ)^2. For α = 0.2, σ = 1 you get [plot]. Use Lanczos with B = p_2(A). Eigenvalues near σ become the dominant ones, so Lanczos will work - but they are now poorly separated, hence slow convergence.
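A tiny sketch of this transform, with the slide's values α = 0.2, σ = 1 (the evaluation grid is an illustrative assumption):

```python
import numpy as np

def p2(t, sigma=1.0, alpha=0.2):
    # the simple quadratic filter from the slide: p2(t) = 1 - alpha*(t - sigma)^2
    return 1.0 - alpha * (t - sigma) ** 2

lam = np.linspace(0.0, 4.0, 401)   # spectrum of A assumed in [0, 4]
vals = p2(lam)

# eigenvalues near sigma map to the top of the filtered spectrum ...
print(lam[np.argmax(vals)])        # maximum attained at t = sigma = 1
# ... but the filter is flat near its maximum, so they are poorly separated
print(p2(1.0), p2(1.1), p2(0.0), p2(4.0))
```

p2(1) = 1 while p2(1.1) ≈ 0.998: the wanted eigenvalues are dominant but clustered, which is what the better filters on the next slides fix.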
8 A low-pass filter. [Figure: base filter ψ(λ) and polynomial filter ρ(λ) vs. λ, with threshold γ; [a,b] = [0,3], [ξ,η] = [0,1].]
9 A mid-pass filter. [Figure: base filter ψ(λ) and polynomial filter ρ(λ) vs. λ, with threshold γ; [a,b] = [0,3], [ξ,η] = [0.5,1].]
10 Misconception: high degree polynomials are bad. [Figure: base filter ψ(λ) and polynomial filter ρ(λ), with zooms, for degrees d = 1000 and d = 1600.]
11 Hypothetical scenario: large A, zillions of wanted eigenvalues. Assume A has size 10M (not huge by today's standards)... and you want to compute 50,000 eigenvalues/vectors (huge for numerical analysts, not for physicists) in the lower part of the spectrum - or the middle. By (any) standard method you will need to orthogonalize at least 50K vectors of size 10M. Space is an issue: 5·10^4 × 10^7 × 8 bytes = 4 TB of memory *just for the basis*. Orthogonalization is also an issue: ≈ 50 PetaOPS in all. Toward the end, at step k, each orthogonalization step costs about 4kn ≈ 200,000 n for k close to 50,000.
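A back-of-the-envelope check of the numbers on this slide (50,000 double-precision vectors of size 10M, full reorthogonalization at roughly 4kn flops per step):

```python
n = 10_000_000        # matrix size
m = 50_000            # wanted eigenvectors

# storage for the basis alone
bytes_basis = m * n * 8
print(bytes_basis / 1e12, "TB")            # 4.0 TB

# step k costs about 4*k*n flops; summing k = 1..m gives roughly 2*m^2*n
total_flops = 4 * n * m * (m + 1) // 2
print(total_flops / 1e15, "PetaOPS")       # ~50 PetaOPS

# cost of a single late step, k close to m
print(4 * m, "* n flops per step")         # 200,000 * n
```

This is the arithmetic behind the slide's 4 TB and 50 PetaOPS figures, and the motivation for slicing the spectrum.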
12 The alternative: spectrum slicing, or windowing. Rationale: eigenvectors on both ends of the wanted spectrum need not be orthogonalized against each other. Idea: get the spectrum by slices, or windows. Can get a few hundred or a few thousand vectors at a time.
13 Compute eigenpairs one slice at a time. A deceptively simple-looking idea. Issues: dealing with interfaces (duplicate/missing eigenvalues); window size [need estimates of eigenvalues]; polynomial degree.
14 Spectrum slicing in PARSEC. Being implemented in our code: Pseudopotential Algorithm for Real-Space Electronic Calculations (PARSEC). See: A Spectrum Slicing Method for the Kohn-Sham Problem, G. Schofield, J. R. Chelikowsky and YS, Computer Physics Comm., vol. 183. Refer to this paper for details on windowing and an initial proof of concept.
15 Computing the polynomials: Jackson-Chebyshev. Chebyshev-Jackson approximation of a function f:

$$ f(x) \approx \sum_{i=0}^{k} g_i^k \,\gamma_i\, T_i(x), \qquad \gamma_i = \frac{2-\delta_{i0}}{\pi}\int_{-1}^{1}\frac{T_i(x)\,f(x)}{\sqrt{1-x^2}}\,dx, $$

where δ_{i0} is the Kronecker symbol. The g_i^k's attenuate the higher-order terms in the sum. [Figure: attenuation coefficients g_i^k vs. i for k = 50, 100, 150.]
16 Let α_k = π/(k + 2). Then:

$$ g_i^k = \frac{\left(1-\frac{i}{k+2}\right)\sin(\alpha_k)\cos(i\alpha_k) + \frac{1}{k+2}\cos(\alpha_k)\sin(i\alpha_k)}{\sin(\alpha_k)} $$

See: Electronic structure calculations in plane-wave codes without diagonalization. Laurent O. Jay, Hanchul Kim, YS, and James R. Chelikowsky. Computer Physics Communications, 118:21-30.
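The damping factors are cheap to evaluate directly from this formula; a minimal NumPy sketch (k = 50 is an illustrative choice):

```python
import numpy as np

def jackson_coeffs(k):
    # Jackson damping factors g_i^k, i = 0..k, from the formula above
    a = np.pi / (k + 2)
    i = np.arange(k + 1)
    return ((1 - i / (k + 2)) * np.sin(a) * np.cos(i * a)
            + np.cos(a) * np.sin(i * a) / (k + 2)) / np.sin(a)

g = jackson_coeffs(50)
print(g[0], g[25], g[-1])   # g_0 = 1, then the factors decay toward 0
```

g_0 = 1 (the constant term is untouched) and the factors shrink as i grows, which is what suppresses the Gibbs oscillations of the raw Chebyshev expansion.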
17 The expansion coefficients γ_i. When f(x) is a step function on [a, b] ⊂ [-1, 1]:

$$ \gamma_i = \begin{cases} \dfrac{1}{\pi}\left(\arccos(a) - \arccos(b)\right), & i = 0,\\[2mm] \dfrac{2}{\pi}\,\dfrac{\sin(i\arccos(a)) - \sin(i\arccos(b))}{i}, & i > 0. \end{cases} $$

A few examples follow.
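Putting the two formulas together gives a complete Jackson-Chebyshev filter for a step on [a, b]. A self-contained sketch (the interval [0.3, 0.6], degree 200, and test points are illustrative assumptions):

```python
import numpy as np

def step_cheb_coeffs(a, b, k):
    # Chebyshev expansion coefficients gamma_i of the indicator of [a,b] in [-1,1]
    i = np.arange(1, k + 1)
    g0 = (np.arccos(a) - np.arccos(b)) / np.pi
    gi = (2 / np.pi) * (np.sin(i * np.arccos(a)) - np.sin(i * np.arccos(b))) / i
    return np.concatenate([[g0], gi])

def jackson_coeffs(k):
    al = np.pi / (k + 2)
    i = np.arange(k + 1)
    return ((1 - i / (k + 2)) * np.sin(al) * np.cos(i * al)
            + np.cos(al) * np.sin(i * al) / (k + 2)) / np.sin(al)

def eval_cheb(x, coeffs):
    # evaluate sum_i coeffs[i] * T_i(x) via the three-term recurrence
    t_prev, t_cur = np.ones_like(np.asarray(x, dtype=float)), np.asarray(x, dtype=float)
    s = coeffs[0] * t_prev + coeffs[1] * t_cur
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2 * np.asarray(x) * t_cur - t_prev
        s = s + c * t_cur
    return s

k, a, b = 200, 0.3, 0.6
gam = step_cheb_coeffs(a, b, k) * jackson_coeffs(k)   # damped expansion
print(eval_cheb(0.45, gam))    # close to 1 inside [a, b]
print(eval_cheb(-0.5, gam))    # close to 0 outside
```

The Jackson damping trades a slightly wider transition region for oscillation-free behavior away from [a, b], matching the plots on the next slides.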
18 Computing the polynomials: Jackson-Chebyshev. Polynomials of degree 30 for [a, b] = [0.3, 0.6]. [Figure: mid-pass polynomial filter, standard Chebyshev vs. Jackson-Chebyshev, degree 30.]
19 [Figure: same filters, degree 80.]
20 [Figure: same filters, degree 200.]
21 How to get the polynomial filter? Second approach. Idea: first select an ideal filter φ, e.g., a piecewise polynomial function. For example, φ = Hermite interpolating polynomial in [0, a], and φ = 1 in [a, b]. Referred to as the base filter.
22 Then approximate the base filter by a degree-k polynomial in a least-squares sense. This can be done without numerical integration. Main advantage: extremely flexible. Method: build a sequence of polynomials φ_k which approximate the ideal piecewise polynomial filter φ in the L2 sense. Again, 2 implementations.
23 Define φ_k, the least-squares polynomial approximation to φ:

$$ \phi_k(t) = \sum_{j=1}^{k} \langle \phi, P_j \rangle\, P_j(t), $$

where {P_j} is a basis of polynomials that is orthonormal for some L2 inner product. Method 1: use the Stieltjes procedure to compute the orthogonal polynomials.
24 ALGORITHM 1: Stieltjes
1. P_0 ≡ 0
2. β_1 = ‖S_0‖
3. P_1(t) = S_0(t)/β_1
4. For j = 1, 2, ..., m Do:
5.    α_j = ⟨t P_j, P_j⟩
6.    S_j(t) = t P_j(t) - α_j P_j(t) - β_j P_{j-1}(t)
7.    β_{j+1} = ‖S_j‖
8.    P_{j+1}(t) = S_j(t)/β_{j+1}
9. EndDo
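A runnable sketch of the Stieltjes procedure, representing each polynomial by its values on a grid. Note the talk's point is to avoid numerical integration; the discretized inner product here (grid, weights, and interval are all illustrative assumptions) is used only to show the three-term recurrence at work:

```python
import numpy as np

t = np.linspace(0.0, 3.0, 2000)
w = np.full_like(t, t[1] - t[0])       # crude quadrature weights (illustrative)

def ip(p, q):
    # discretized inner product <p, q> = sum_i w_i p(t_i) q(t_i)
    return np.sum(w * p * q)

m = 8
P = [np.zeros_like(t)]                 # P_0 = 0
S = np.ones_like(t)                    # S_0 = 1 (starting polynomial)
beta = [np.sqrt(ip(S, S))]             # beta_1
P.append(S / beta[0])                  # P_1
for j in range(1, m):
    alpha = ip(t * P[j], P[j])
    S = t * P[j] - alpha * P[j] - beta[j - 1] * P[j - 1]
    beta.append(np.sqrt(ip(S, S)))     # beta_{j+1}
    P.append(S / beta[j])              # P_{j+1}

# the P_j should come out orthonormal for this inner product
G = np.array([[ip(P[i], P[j]) for j in range(1, m + 1)] for i in range(1, m + 1)])
print(np.max(np.abs(G - np.eye(m))))   # small
```

The recurrence is exactly Lanczos applied to multiplication by t, which is why only three terms are needed at each step.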
25 Computation of the Stieltjes coefficients. Problem: compute the scalars α_j and β_{j+1} of the 3-term recurrence (Stieltjes) + the expansion coefficients γ_j, while avoiding numerical integration. Solution: define the orthogonal polynomials over two (or more) disjoint intervals - see similar work in [YS '83]: YS, Iterative solution of indefinite symmetric systems by methods using orthogonal polynomials over two disjoint intervals, SIAM Journal on Numerical Analysis, 20 (1983). E. Kokiopoulou and YS, Polynomial Filtering in Latent Semantic Indexing for Information Retrieval, in Proc. ACM-SIGIR Conference on Research and Development in Information Retrieval, Sheffield, UK (2004).
26 Let the large interval be [0, b] - it should contain Λ(B). Assume 2 subintervals. On subinterval [a_{l-1}, a_l], l = 1, 2, define the inner product

$$ \langle \psi_1, \psi_2 \rangle_{a_{l-1},a_l} = \int_{a_{l-1}}^{a_l} \frac{\psi_1(t)\,\psi_2(t)}{\sqrt{(t-a_{l-1})(a_l-t)}}\,dt. $$

Then define the inner product on [0, b] by

$$ \langle \psi_1, \psi_2 \rangle = \int_{0}^{a} \frac{\psi_1(t)\,\psi_2(t)}{\sqrt{t(a-t)}}\,dt + \rho \int_{a}^{b} \frac{\psi_1(t)\,\psi_2(t)}{\sqrt{(t-a)(b-t)}}\,dt. $$

To avoid numerical integration, use a basis of Chebyshev polynomials on each interval [YS '83].
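Each piece carries a Chebyshev weight, so Gauss-Chebyshev quadrature evaluates these inner products exactly for polynomial integrands, which is one way to see why no generic numerical integration is needed. A sketch (the values of a, b, ρ and the node count are illustrative assumptions):

```python
import numpy as np

def cheb_ip(p, q, c, d, nodes=200):
    # <p,q> on [c,d] with weight 1/sqrt((t-c)(d-t)); after mapping [c,d]
    # to [-1,1] this is plain Gauss-Chebyshev quadrature with equal weights pi/n
    k = np.arange(1, nodes + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * nodes))   # Chebyshev nodes in [-1,1]
    t = 0.5 * (c + d) + 0.5 * (d - c) * x           # map to [c,d]
    return (np.pi / nodes) * np.sum(p(t) * q(t))

one = lambda t: np.ones_like(t)
# sanity check: the integral of the bare weight over any [c,d] equals pi
print(cheb_ip(one, one, 0.0, 1.0))

# composite two-interval inner product on [0,b], as on the slide
rho, a, b = 0.5, 1.0, 3.0
total = cheb_ip(one, one, 0.0, a) + rho * cheb_ip(one, one, a, b)
print(total)   # pi * (1 + rho)
```

The change of variables absorbs the interval length entirely, so the quadrature weights are the same π/n on every subinterval.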
27 Method 2: Filtered CG/CR-like polynomial iterations. Want: a CG-like (or CR-like) algorithm for which the underlying residual polynomial or solution polynomial is a least-squares filter polynomial. Seek s to minimize ‖φ(λ) - λ s(λ)‖_w with respect to a certain norm ‖·‖_w. Equivalently, minimize ‖(1 - φ) - (1 - λ s(λ))‖_w over all polynomials s of degree k. Focus on the second viewpoint (residual polynomial): the goal is to make r(λ) ≡ 1 - λ s(λ) close to 1 - φ.
28 Recall: the Conjugate Residual algorithm.
ALGORITHM 2: Conjugate Residual
1. Compute r_0 := b - Ax_0, p_0 := r_0
2. For j = 0, 1, ..., until convergence Do:
3.    α_j := (r_j, Ar_j)/(Ap_j, Ap_j)
4.    x_{j+1} := x_j + α_j p_j
5.    r_{j+1} := r_j - α_j Ap_j
6.    β_j := (r_{j+1}, Ar_{j+1})/(r_j, Ar_j)
7.    p_{j+1} := r_{j+1} + β_j p_j
8.    Compute Ap_{j+1} = Ar_{j+1} + β_j Ap_j
9. EndDo
Think in terms of a polynomial iteration.
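A direct NumPy transcription of this algorithm, for a symmetric A (the diagonal test matrix is an illustrative assumption):

```python
import numpy as np

def conjugate_residual(A, b, tol=1e-10, maxit=1000):
    # CR as listed above; note step 8 updates A*p without an extra matvec
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rho = r @ Ar                     # (r_j, A r_j)
    for _ in range(maxit):
        alpha = rho / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        rho_new = r @ Ar
        beta = rho_new / rho
        rho = rho_new
        p = r + beta * p
        Ap = Ar + beta * Ap          # A p_{j+1} = A r_{j+1} + beta_j A p_j
    return x

A = np.diag(np.linspace(1.0, 10.0, 100))   # SPD test matrix (assumed)
b = np.ones(100)
x = conjugate_residual(A, b)
print(np.linalg.norm(A @ x - b))           # tiny residual
```

Only one matrix-vector product per step is needed, which is the property the filtered polynomial version below inherits.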
29 ALGORITHM 3: Filtered CR polynomial iteration
1.    Compute r_0 := b - Ax_0, p_0 := r_0;  π_0 = ρ_0 = 1
1.a.  Compute λπ_0
2.    For j = 0, 1, ..., until convergence Do:
3.       α_j := ⟨ρ_j, λρ_j⟩_w / ⟨λπ_j, λπ_j⟩_w
3.a.     α_j := α_j - ⟨1 - φ, λπ_j⟩_w / ⟨λπ_j, λπ_j⟩_w
4.       x_{j+1} := x_j + α_j p_j
5.       r_{j+1} := r_j - α_j Ap_j;  ρ_{j+1} = ρ_j - α_j λπ_j
6.       β_j := ⟨ρ_{j+1}, λρ_{j+1}⟩_w / ⟨ρ_j, λρ_j⟩_w
7.       p_{j+1} := r_{j+1} + β_j p_j;  π_{j+1} := ρ_{j+1} + β_j π_j
8.       Compute λπ_{j+1}
9.    EndDo
All polynomials are expressed in the Chebyshev basis - the cost of the algorithm is negligible [O(k^2) for degree k].
30 A few mid-pass filters of various degrees. Four examples of mid-pass filters ψ(λ) and their polynomial approximations ρ(λ). [Figure: base filter ψ(λ) and polynomial filter ρ(λ) vs. λ, degrees d = 20 and d = 30.]
31 [Figure: same, degrees d = 50 and d = 100.]
32 Base filter used to build a mid-pass filter polynomial. We partition [0, b] into five sub-intervals: [0, b] = [0, τ_1] ∪ [τ_1, τ_2] ∪ [τ_2, τ_3] ∪ [τ_3, τ_4] ∪ [τ_4, b]. Set ψ(t) = 0 in [0, τ_1] ∪ [τ_4, b] and ψ(t) = 1 in [τ_2, τ_3]. Use standard Hermite interpolation to get bridge functions in [τ_1, τ_2] and [τ_3, τ_4].
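A minimal sketch of such a base filter, using a cubic Hermite "smoothstep" as one simple choice of bridge (the breakpoints τ_1..τ_4 and the cubic bridge are illustrative assumptions; the talk's construction uses Hermite interpolation generally):

```python
import numpy as np

def smoothstep(x):
    # cubic Hermite bridge: value 0 and slope 0 at x=0, value 1 and slope 0 at x=1
    x = np.clip(x, 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)

def base_filter(t, t1, t2, t3, t4):
    # psi = 0 on [0,t1] and [t4,b], psi = 1 on [t2,t3], bridges in between
    t = np.asarray(t, dtype=float)
    up = smoothstep((t - t1) / (t2 - t1))
    down = 1.0 - smoothstep((t - t3) / (t4 - t3))
    return up * down

t1, t2, t3, t4 = 0.5, 0.8, 1.2, 1.5
print(base_filter([0.2, 1.0, 2.0], t1, t2, t3, t4))   # 0 outside, 1 in the middle
```

Because the bridges match values and first derivatives at the breakpoints, ψ is C^1, which keeps its least-squares polynomial approximations well behaved.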
33 References.
- A Filtered Lanczos Procedure for Extreme and Interior Eigenvalue Problems, H. R. Fang and YS, SISC 34(4) (2012). Refer to it for details on the window-less implementation (one slice) + code.
- Computation of Large Invariant Subspaces Using Polynomial Filtered Lanczos Iterations with Applications in Density Functional Theory, C. Bekas, E. Kokiopoulou and YS, SIMAX 30(1) (2008).
- Filtered Conjugate Residual-type Algorithms with Applications, YS, SIMAX 28 (2006).
34 Tests. Experiments performed in sequential mode on two dual-core AMD Opteron processors (2.2 GHz) with 16 GB of memory. Test matrices: * five Hamiltonians from electronic structure calculations, * an integer matrix named Andrews, and * a discretized Laplacian (FD).
35 [Figure: sparsity pattern of a test matrix.]
36 Matrix characteristics

matrix        n          nnz        nnz/n   full eigen-range   Fermi [a,b]   n_0
Ge87H76       112,985    7,892,…    …       [ …, … ]           …             212
Ge99H100      112,985    8,451,…    …       [ …, … ]           …             248
Si41Ge41H72   185,639    15,011,…   …       [ …, … ]           …             200
Si87H76       240,369    10,661,…   …       [ …, … ]           …             212
Ga41As41H72   268,096    18,488,…   …       [ …, … ]           …             200
Andrews       60,…       …          …       [0, … ]            …             N/A
Laplacian     1,000,000  6,940,…    …       [ …, … ]           …             N/A
37 Experimental set-up

matrix        eigen-interval [ξ, η]   # eig in [ξ, η]   # eig in [a, η]   (η-ξ)/(b-a)   (η-a)/(b-a)
Ge87H76       [-0.645, … ]            …                 …                 …             …
Ge99H100      [-0.65, … ]             …                 …                 …             …
Si41Ge41H72   [-0.64, … ]             …                 …                 …             …
Si87H76       [-0.66, -0.33]          …                 …                 …             …
Ga41As41H72   [-0.64, 0.0]            …                 …                 …             …
Andrews       [4, 5]                  1,844             3,…               …             …
Laplacian     [1, 1.01]               276               >17,…             …             …
38 Results for Ge99H100 - set 1 of stats

method                   degree    # iter   # matvecs   memory
filt. Lan. (high-pass)   d = 20    1,020    20,400      1,117
                         d = …     …        …           …
                         d = …     …        …           …
                         d = …     …        …           …
filt. Lan. (low-pass)    d = …     …        …           …
                         d = …     …        …           …
                         d = …     …        …           …
Part. Lanczos                      5,140    5,140       4,883
ARPACK                             6,233    6,233       1,…
39 Results for Ge99H100 - CPU times (sec.)

method                   degree    ρ(A)v   reorth   eigvec   total
filt. Lan. (high-pass)   d = 20    1,…     …        …        …,417
                         d = 30    1,…     …        …        …,440
                         d = 50    1,…     …        …        …,479
                         d = 100   1,…     …        …        …,930
filt. Lan. (low-pass)    d = …     …       …        …        …
                         d = …     …       …        …        …
                         d = 30    1,…     …        …        …,123
                         d = 50    1,…     …        …        …,342
Part. Lanczos                      234     1,…      …        …,962
ARPACK                             …       …        …        …
40 Results for Andrews - set 1 of stats

method                   degree    # iter   # matvecs   memory
filt. Lan. (mid-pass)    d = 20    9,…      …,800       4,829
                         d = 30    6,…      …,120       2,799
                         d = 50    3,…      …,000       1,947
                         d = 100   2,…      …,000       1,131
filt. Lan. (high-pass)   d = 10    5,990    59,900      2,799
                         d = 20    4,780    95,600      2,334
                         d = 30    4,…      …,800       2,334
                         d = 50    4,…      …,500       2,334
Part. Lanczos                      22,345   22,345      10,312
ARPACK                             30,716   30,716      6,…
41 Results for Andrews - CPU times (sec.)

method                   degree    ρ(A)v   reorth   eigvec   total
filt. Lan. (mid-pass)    d = 20    2,…     …        …,834    9,840
                         d = 30    2,…     …        …,151    5,279
                         d = 50    3,…     …        …        …,810
                         d = 100   3,…     …        …        …,147
filt. Lan. (high-pass)   d = 10    1,152   2,911    2,391    7,050
                         d = 20    1,335   1,718    1,472    4,874
                         d = 30    1,806   1,218    1,274    4,576
                         d = 50    3,187   1,032    1,383    5,918
Part. Lanczos                      …,455   64,…     …        …,664
ARPACK                             …,492   18,…     …        …
42 Results for the Laplacian - set 1 of stats

method            degree   # iter   # matvecs    memory
mid-pass filter   600      1,…      …,000        10,913
                  1,…      …        …,000        7,640
                  1,…      …        …,136,000    6,…
43 Results for the Laplacian - CPU times (sec.)

method            degree   ρ(A)v   reorth   eigvec   total
mid-pass filter   …        …       …        …        …,279
                  1,…      …       …        …        …,384
                  1,…      …       …        …        …
44 Conclusion. A quite appealing general approach when the number of eigenvectors to be computed is large and when the matvec is not too expensive. Will not work too well for the generalized eigenvalue problem. Code available here.
More informationConjugate-Gradient Eigenvalue Solvers in Computing Electronic Properties of Nanostructure Architectures
Conjugate-Gradient Eigenvalue Solvers in Computing Electronic Properties of Nanostructure Architectures Stanimire Tomov 1, Julien Langou 1, Andrew Canning 2, Lin-Wang Wang 2, and Jack Dongarra 1 1 Innovative
More informationComputation of Smallest Eigenvalues using Spectral Schur Complements
Computation of Smallest Eigenvalues using Spectral Schur Complements Constantine Bekas Yousef Saad January 23, 2004 Abstract The Automated Multilevel Substructing method (AMLS ) was recently presented
More informationIs there life after the Lanczos method? What is LOBPCG?
1 Is there life after the Lanczos method? What is LOBPCG? Andrew V. Knyazev Department of Mathematics and Center for Computational Mathematics University of Colorado at Denver SIAM ALA Meeting, July 17,
More informationConjugate Gradient Method
Conjugate Gradient Method direct and indirect methods positive definite linear systems Krylov sequence spectral analysis of Krylov sequence preconditioning Prof. S. Boyd, EE364b, Stanford University Three
More informationMultigrid absolute value preconditioning
Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical
More informationLarge-scale eigenvalue problems
ELE 538B: Mathematics of High-Dimensional Data Large-scale eigenvalue problems Yuxin Chen Princeton University, Fall 208 Outline Power method Lanczos algorithm Eigenvalue problems 4-2 Eigendecomposition
More informationA parameter tuning technique of a weighted Jacobi-type preconditioner and its application to supernova simulations
A parameter tuning technique of a weighted Jacobi-type preconditioner and its application to supernova simulations Akira IMAKURA Center for Computational Sciences, University of Tsukuba Joint work with
More informationA CHEBYSHEV-DAVIDSON ALGORITHM FOR LARGE SYMMETRIC EIGENPROBLEMS
A CHEBYSHEV-DAVIDSON ALGORITHM FOR LARGE SYMMETRIC EIGENPROBLEMS YUNKAI ZHOU AND YOUSEF SAAD Abstract. A polynomial filtered Davidson-type algorithm is proposed for solving symmetric eigenproblems. The
More information(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =
. (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)
More informationKey words. linear equations, polynomial preconditioning, nonsymmetric Lanczos, BiCGStab, IDR
POLYNOMIAL PRECONDITIONED BICGSTAB AND IDR JENNIFER A. LOE AND RONALD B. MORGAN Abstract. Polynomial preconditioning is applied to the nonsymmetric Lanczos methods BiCGStab and IDR for solving large nonsymmetric
More informationELSI: A Unified Software Interface for Kohn-Sham Electronic Structure Solvers
ELSI: A Unified Software Interface for Kohn-Sham Electronic Structure Solvers Victor Yu and the ELSI team Department of Mechanical Engineering & Materials Science Duke University Kohn-Sham Density-Functional
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More informationEfficient Deflation for Communication-Avoiding Krylov Subspace Methods
Efficient Deflation for Communication-Avoiding Krylov Subspace Methods Erin Carson Nicholas Knight, James Demmel Univ. of California, Berkeley Monday, June 24, NASCA 2013, Calais, France Overview We derive
More information4.8 Arnoldi Iteration, Krylov Subspaces and GMRES
48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate
More informationState-of-the-art numerical solution of large Hermitian eigenvalue problems. Andreas Stathopoulos
State-of-the-art numerical solution of large Hermitian eigenvalue problems Andreas Stathopoulos Computer Science Department and Computational Sciences Cluster College of William and Mary Acknowledgment:
More informationPresentation of XLIFE++
Presentation of XLIFE++ Eigenvalues Solver & OpenMP Manh-Ha NGUYEN Unité de Mathématiques Appliquées, ENSTA - Paristech 25 Juin 2014 Ha. NGUYEN Presentation of XLIFE++ 25 Juin 2014 1/19 EigenSolver 1 EigenSolver
More informationPolynomial Filtering in Latent Semantic Indexing for Information Retrieval
Polynomial Filtering in Latent Semantic Indexing for Information Retrieval E. Koiopoulou Yousef Saad May 12, 24 Abstract Latent Semantic Indexing (LSI) is a well established and effective framewor for
More informationBlock Krylov Space Solvers: a Survey
Seminar for Applied Mathematics ETH Zurich Nagoya University 8 Dec. 2005 Partly joint work with Thomas Schmelzer, Oxford University Systems with multiple RHSs Given is a nonsingular linear system with
More informationRemark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial
More informationNumerical Simulation of Spin Dynamics
Numerical Simulation of Spin Dynamics Marie Kubinova MATH 789R: Advanced Numerical Linear Algebra Methods with Applications November 18, 2014 Introduction Discretization in time Computing the subpropagators
More informationGPU ACCELERATED POLYNOMIAL SPECTRAL TRANSFORMATION METHODS
GPU ACCELERATED POLYNOMIAL SPECTRAL TRANSFORMATION METHODS By: JARED L. AURENTZ A dissertation submitted in fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY WASHINGTON STATE UNIVERSITY
More informationAlgorithms that use the Arnoldi Basis
AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to
More informationA communication-avoiding thick-restart Lanczos method on a distributed-memory system
A communication-avoiding thick-restart Lanczos method on a distributed-memory system Ichitaro Yamazaki and Kesheng Wu Lawrence Berkeley National Laboratory, Berkeley, CA, USA Abstract. The Thick-Restart
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 435 (2011) 480 493 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Bounding the spectrum
More informationKey words. conjugate gradients, normwise backward error, incremental norm estimation.
Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322
More information13-2 Text: 28-30; AB: 1.3.3, 3.2.3, 3.4.2, 3.5, 3.6.2; GvL Eigen2
The QR algorithm The most common method for solving small (dense) eigenvalue problems. The basic algorithm: QR without shifts 1. Until Convergence Do: 2. Compute the QR factorization A = QR 3. Set A :=
More informationAn Estimator for the Diagonal of a Matrix
An Estimator for the Diagonal of a Matrix C. Bekas, E. Kokiopoulou, Y. Saad Computer Science & Engineering Dept. University of Minnesota, Twin Cities {beaks,kokobeh,saad}@cs.umn.edu June 1, 2005 Abstract
More informationA Framework for Structured Linearizations of Matrix Polynomials in Various Bases
A Framework for Structured Linearizations of Matrix Polynomials in Various Bases Leonardo Robol Joint work with Raf Vandebril and Paul Van Dooren, KU Leuven and Université
More informationLinear Solvers. Andrew Hazel
Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction
More informationSummary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method
Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Leslie Foster 11-5-2012 We will discuss the FOM (full orthogonalization method), CG,
More informationApplied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in
More informationFEM and sparse linear system solving
FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich
More information