Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method

Andrew Knyazev
Department of Mathematics, University of Colorado at Denver
P.O. Box 173364, Campus Box 170, Denver, CO 80217-3364
Andrew.Knyazev@cudenver.edu, http://www-math.cudenver.edu/~aknyazev/
tel: 303 556 8442, fax: 303 556 8550
Time requested: 45. 17 years since degree.

Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method

Andrew Knyazev

Numerical solution of extremely large and ill-conditioned eigenvalue problems has recently been attracting growing attention, as such problems are of major importance in applications. They typically arise as discretizations of continuous models described by systems of partial differential equations (PDEs). For such problems, preconditioned matrix-free eigensolvers are especially effective, since the stiffness and mass matrices need not be assembled but can instead be accessed only through functions computing the corresponding matrix-vector products.

It is well recognized that traditional approaches are inefficient for very large eigenproblems. Preconditioning is the key to a significant improvement in performance, as it allows one to steer a path between the Scylla of expensive factorizations and the Charybdis of slow convergence. The study of preconditioned linear solvers has become a major focus of numerical analysts and engineers. For eigenvalue computations, preconditioning is much more difficult, and at present there are more questions than answers, even in the symmetric case. While the mainstream research in the area introduces preconditioning into eigenvalue solvers through preconditioned inner iterations for solving linear systems with shift-and-invert matrices, our approach is to incorporate the preconditioning directly into Krylov-based iterations. This results in simple, robust, and efficient algorithms, superior in many preliminary numerical comparisons to the inner-outer schemes commonly used at present, e.g., to the celebrated inexact Jacobi-Davidson methods. For symmetric eigenproblems, the suggested Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method not only outperforms the inexact Jacobi-Davidson methods in many cases, but even exhibits properties of the optimal algorithm over the whole class of preconditioned eigensolvers, which includes most presently known methods, e.g., the generalized Davidson, trace minimization, and inexact continuation methods.

To be more specific, let us consider a generalized eigenvalue problem (A − λB)x = 0 with real symmetric positive definite matrices A and B, where we are interested in computing the p smallest eigenvalues and the corresponding eigenvectors. An important class of eigenproblems is that of mesh eigenproblems arising from discretizations of PDEs, e.g., in structural mechanics, where it is customary to call A the stiffness matrix and B the mass matrix. To accelerate the convergence, we introduce a preconditioner T. In many engineering applications, preconditioned iterative solvers for linear systems Ax = b are already available, and efficient preconditioners T ≈ A^{-1}, e.g., multilevel or incomplete-factorization based, have been constructed. The peculiarity of the preconditioning we recommend is that no eigenproblem-specific preconditioners are used.
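[Editor's sketch, not part of the original abstract.] The matrix-free setting described above can be illustrated in Python; the callback names are hypothetical and the operators are simple stand-ins (the original work refers to MATLAB, PETSc, and Hypre implementations). The point is that the eigensolver touches A, B, and T only through their actions on vectors.

    # Illustrative matrix-free setup: A, B, and T are available only as
    # vector-to-vector actions, never as assembled matrices.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator

    n = 10_000

    def stiffness_matvec(v):
        # action of the stiffness matrix A; a 1-D Laplacian stencil as a stand-in
        w = 2.0 * v
        w[1:] -= v[:-1]
        w[:-1] -= v[1:]
        return w

    def mass_matvec(v):
        # action of the mass matrix B; the identity as a stand-in
        return v

    A = LinearOperator((n, n), matvec=stiffness_matvec, dtype=float)
    B = LinearOperator((n, n), matvec=mass_matvec, dtype=float)
    T = LinearOperator((n, n), matvec=lambda v: 0.5 * v, dtype=float)  # crude Jacobi-type T ~= A^{-1}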

Instead of constructing an eigenproblem-specific preconditioner, we propose that the same T be used to solve the eigenvalue problem. We assume that the preconditioner T is symmetric positive definite.

We define [2-4] a preconditioned single-vector (p = 1) eigensolver for the pencil A − λB as a generalized polynomial method: x^{(k)} = P_k(TA, TB) x^{(0)}, where P_k is a polynomial of degree k in two independent variables, x^{(0)} is an initial guess, and T is a fixed preconditioner. Thus, the approximation x^{(k)} belongs to the generalized Krylov subspace K_k(TA, TB, x^{(0)}). It is important to realize that this definition is very broad: it is general enough to embrace most known preconditioned iterative methods for computing the extreme eigenpair with a fixed preconditioner, whatever the origin of a particular solver. One can now immediately appreciate the difficulties that stem from the fact that the Krylov subspace is constructed using polynomials of two noncommuting matrix variables. The majority of the tools developed for the Lanczos and PCG methods, most importantly the theory of orthogonal polynomials, fail us in this case. A novel, ground-breaking theory is apparently needed here.

Having defined the class of preconditioned eigensolvers, we can introduce the global optimization method for computing the first eigenpair by minimizing the Rayleigh quotient λ(x) over the generalized Krylov subspace. While this method provides optimal accuracy on the generalized Krylov subspace, it is also exceedingly expensive, since the dimension of the subspace grows exponentially and no short-term recurrence for finding the optimum is known (and perhaps none is possible). For block methods, when p > 1, we introduce the generalized block Krylov subspace. The block global optimization method GLOBAL computes approximate eigenvectors as the corresponding Ritz vectors on this subspace and is used for accuracy benchmarks.

To introduce another benchmark, suppose that the minimal eigenvalue λ_1 is already known and we only need to compute the corresponding eigenvector x_1, an element of the null space of the homogeneous system of linear equations (A − λ_1 B) x_1 = 0. What would be an ideal preconditioned method for computing x_1 under the assumption that λ_1 is known? As such, we choose the standard PCG method. It is well known that the PCG method can be used to compute a nonzero element of the null space of a homogeneous system of linear equations with a symmetric nonnegative definite matrix, provided a nonzero initial guess is used and the preconditioner is symmetric positive definite. This Ideal method is suggested in [4] for benchmarking the accuracy and costs of practical eigenvalue solvers when p = 1.

We now introduce [1-4] the single-vector (p = 1) LOPCG method for the pencil A − λB:

    x^{(i+1)} = w^{(i)} + τ^{(i)} x^{(i)} + γ^{(i)} x^{(i-1)},
    w^{(i)} = T (A x^{(i)} − λ^{(i)} B x^{(i)}),
    λ^{(i)} = λ(x^{(i)}),   γ^{(0)} = 0,                                     (1)

with scalar iteration parameters τ^{(i)} and γ^{(i)} chosen using the idea of local optimality, namely, τ^{(i)} and γ^{(i)} are selected to minimize the Rayleigh quotient λ(x^{(i+1)}) by means of the Rayleigh-Ritz (RR) method. Dropping the vector x^{(i-1)} from (1) turns it into steepest descent and dramatically slows it down, according to our numerical tests [2,3,6]. However, adding more vectors x^{(i-2)}, etc., to the scheme (1) does not increase the speed, as shown in numerical simulations [6] for a FEM approximation of the Laplacian preconditioned with a V(2,2) multigrid cycle.
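[Editor's sketch, not the reference MATLAB code.] To make the recurrence (1) and the local Rayleigh-Ritz step concrete, here is a minimal single-vector sketch in Python, assuming A and B behave like symmetric matrices under the @ operator and T is a callable applying the preconditioner; all names are illustrative.

    import numpy as np
    from scipy.linalg import eigh

    def lopcg(A, B, T, x0, maxit=200, tol=1e-8):
        """Minimal sketch of recurrence (1): Rayleigh-Ritz on
        span{x^(i), w^(i), x^(i-1)} at every step."""
        x = x0 / np.sqrt(x0 @ (B @ x0))              # B-normalize the initial guess
        x_prev = None
        lam = (x @ (A @ x)) / (x @ (B @ x))          # Rayleigh quotient lambda(x)
        for _ in range(maxit):
            r = A @ x - lam * (B @ x)                # eigen-residual
            if np.linalg.norm(r) < tol:
                break
            w = T(r)                                 # preconditioned residual w^(i)
            S = np.column_stack([x, w] if x_prev is None else [x, w, x_prev])
            # Rayleigh-Ritz on the trial subspace: keep the lowest Ritz pair
            gA, gB = S.T @ (A @ S), S.T @ (B @ S)
            vals, vecs = eigh(gA, gB)                # small dense generalized eigenproblem
            y = vecs[:, 0]
            x_prev, x = x, S @ y
            x /= np.sqrt(x @ (B @ x))                # keep iterates B-normalized
            lam = vals[0]
            # NOTE: a robust implementation replaces x_prev by an implicitly
            # computed conjugate direction to avoid ill-conditioning of gB.
        return lam, x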
Moreover, in a different set of numerical tests [4], the LOPCG method converges at the same speed as, and is practically as efficient as, the Ideal method. There is no explanation for these observations yet.
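[Editor's sketch, not the benchmarking code of [4].] For reference, the Ideal benchmark just mentioned is plain PCG applied to the singular consistent system (A − λ_1 B)x = 0 with a nonzero initial guess and an SPD preconditioner T; the iterates then approach a null-space element, i.e., the eigenvector x_1. A hedged Python sketch, assuming λ_1 is known and T is a callable:

    import numpy as np

    def ideal_pcg(A, B, T, lam1, x0, maxit=200, tol=1e-8):
        M = lambda v: A @ v - lam1 * (B @ v)      # singular, nonnegative definite operator
        x = x0.copy()
        r = -M(x)                                  # residual of M x = 0
        z = T(r)
        p = z.copy()
        for _ in range(maxit):
            if np.linalg.norm(r) < tol:
                break
            Mp = M(p)
            alpha = (r @ z) / (p @ Mp)
            x += alpha * p
            r_new = r - alpha * Mp
            z_new = T(r_new)
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x                                   # approximate eigenvector x_1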

The LOBPCG method is simply a block (p > 1) version of (1), where all 3p vectors span the RR trial subspace. The LOBPCG, numerically compared with the GLOBAL method for a model problem with p = 3, is mysteriously able to reproduce essentially the same optimal approximation quality as GLOBAL, even though the dimensions of the block generalized Krylov subspace in GLOBAL are 9, 21, 45, 93, 189, 381, 765, while the LOBPCG method uses local optimization only on a 9-dimensional subspace at every step. A rigorous theoretical explanation of the excellent convergence of the LOBPCG remains challenging and calls for innovative mathematical ideas. The best presently known theoretical convergence rate estimate was proved in 2001 in an extensive four-part work, see [5] and the references there, but it still does not capture some important convergence properties of the LOBPCG.

We also provide results of a numerical comparison of the LOBPCG with the inexact Jacobi-Davidson, Generalized Davidson, Preconditioned Lanczos, and inexact Rayleigh Quotient Iteration methods, suggesting that the LOBPCG is practically one of the top preconditioned eigensolvers. A MATLAB code of the LOBPCG method and the Preconditioned Eigensolvers Benchmarking are available at http://www-math.cudenver.edu/~aknyazev/software/cg/. Parallel versions using PETSc and Hypre are in progress; preliminary numerical results are provided. (A brief usage sketch with a later open-source implementation is given after the reference list below.)

References:

1. A. V. Knyazev. A preconditioned conjugate gradient method for eigenvalue problems and its implementation in a subspace. In Eigenwertaufgaben in Natur- und Ingenieurwissenschaften und ihre numerische Behandlung (Oberwolfach, 1990), International Series of Numerical Mathematics, v. 96, pages 143-154. Birkhäuser, Basel, 1991.

2. A. V. Knyazev. Preconditioned eigensolvers - an oxymoron? Electron. Trans. Numer. Anal., 7:104-123 (electronic), 1998. Large scale eigenvalue problems (Argonne, IL, 1997): http://etna.mcs.kent.edu/vol.7.1998/pp104-123.dir/pp104-123.pdf

3. A. V. Knyazev. Preconditioned eigensolvers: practical algorithms. In Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors, Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, pages 352-368. SIAM, Philadelphia, 2000. Section 11.3: http://www.cs.utk.edu/~dongarra/etemplates/node410.html

4. A. V. Knyazev. Toward the optimal preconditioned eigensolver: locally optimal block preconditioned conjugate gradient method. SIAM J. Sci. Comput., 23(2):517-541, 2001: http://epubs.siam.org/sam-bin/getfile/sisc/articles/36612.pdf

5. A. V. Knyazev and K. Neymeyr. A geometric theory for preconditioned inverse iteration, III: A short and sharp convergence estimate for generalized eigenvalue problems. Linear Algebra Appl., 2001. Accepted. Preliminary version published as Technical Report UCD-CCM 173, CU-Denver: http://www-math.cudenver.edu/ccmreports/rep173.pdf

6. A. V. Knyazev and K. Neymeyr. Efficient solution of symmetric eigenvalue problems using multigrid preconditioners in the locally optimal block conjugate gradient method. Electron. Trans. Numer. Anal., 2001. Accepted. Preliminary version published as Technical Report UCD-CCM 174, CU-Denver: http://www-math.cudenver.edu/ccmreports/rep174.pdf
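[Editor's note, a later addition.] An implementation of LOBPCG is now available in SciPy as scipy.sparse.linalg.lobpcg. The sketch below calls it on a model pencil (A, B) with an incomplete-LU stand-in for the preconditioner T ≈ A^{-1}; the model matrices are illustrative only.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lobpcg, LinearOperator, spilu

    n, p = 1000, 3
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')  # 1-D Laplacian (stiffness)
    B = sp.identity(n, format='csc')                                          # mass matrix
    ilu = spilu(A)                                                            # stand-in preconditioner T ~= A^{-1}
    M = LinearOperator((n, n), matvec=ilu.solve)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((n, p))               # block of p random initial vectors
    vals, vecs = lobpcg(A, X, B=B, M=M, tol=1e-8, maxiter=200, largest=False)
    print(vals)                                   # approximations to the p smallest eigenvalues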

A brief biography, a list of recent publications, etc.

Education:
Ph.D. in Numerical Mathematics, Institute of Numerical Mathematics, Russian Academy of Sciences, 1985. Ph.D. advisor: V. I. Lebedev.
B.A. and M.S. in Computer Science and Cybernetics, Moscow State University, Dept. of Cybernetics and Computer Science, 1981. M.S. advisor: E. G. D'yakonov.

Employment:
Center for Computational Mathematics, University of Colorado at Denver: Director, 1999-2001
Department of Mathematics, University of Colorado at Denver: Associate Professor, 1994-present
Courant Institute of Mathematical Sciences, New York University: Visitor, 1992-1994
Institute of Numerical Mathematics, Russian Academy of Sciences: Senior Scientist, 1983-1992
Moscow Physico-Technical Institute (Moscow Institute of Physics and Technology), FPFE: Assistant Professor, 1985-1991
Moscow State University, Dept. of Mathematics and Mechanics: Instructor, 1986-1988
Moscow Institute of Engineering and Physics: Instructor, 1982-1985
Kurchatov Institute of Atomic Energy, Nuclear Reactors: Software Engineer, 1981-1983

Over 30 papers and reports have been published.

Selected papers:
A. V. Knyazev and Merico E. Argentati. Principal angles between subspaces in an A-based scalar product: algorithms and perturbation estimates. Accepted to SIAM J. Sci. Comput., 2001.
Andrew Knyazev and Klaus Neymeyr. Efficient solution of symmetric eigenvalue problems using multigrid preconditioners in the locally optimal block conjugate gradient method. Accepted to the Copper Mountain issue of Electron. Trans. Numer. Anal., 2001.
Andrew Knyazev and Klaus Neymeyr. A geometric theory for preconditioned inverse iteration, III: A short and sharp convergence estimate for generalized eigenvalue problems. To appear in Linear Algebra Appl., 2001.
N. S. Bakhvalov, A. V. Knyazev, and R. R. Parashkevov. Extension theorems for Stokes and Lamé equations for nearly incompressible media and their applications to numerical solution of problems with highly discontinuous coefficients. To appear in Numer. Linear Algebra Appl., 2001.

A. V. Knyazev and Olof Widlund. Lavrentiev regularization + Ritz approximation = uniform finite element error estimates for differential equations with rough coefficients. Mathematics of Computation, posted on July 13, 2001, S0025-5718-01-01378-3 (to appear in print).
A. V. Knyazev. Toward the optimal preconditioned eigensolver: locally optimal block preconditioned conjugate gradient method. SIAM Journal on Scientific Computing, 23 (2001), no. 2, pp. 517-541.
A. V. Knyazev. Preconditioned eigensolvers: practical algorithms. In Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, editors: Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst. SIAM, 2000.
A. V. Knyazev. Preconditioned eigensolvers - an oxymoron? Electronic Transactions on Numerical Analysis, 7 (1998), pp. 104-123.
A. V. Knyazev. New estimates for Ritz vectors. Mathematics of Computation, 66 (1997), no. 219, pp. 985-995.
J. H. Bramble, J. E. Pasciak, and A. V. Knyazev. A subspace preconditioning algorithm for eigenvector/eigenvalue computation. Advances in Computational Mathematics, 6 (1996), no. 2, pp. 159-189.
A. V. Knyazev and A. L. Skorokhodov. The preconditioned gradient-type iterative methods in a subspace for the partial generalized symmetric eigenvalue problem. SIAM J. Numerical Analysis, v. 31, p. 1226, 1994.
N. S. Bakhvalov and A. V. Knyazev. Fictitious domain methods and computation of homogenized properties of composites with a periodic structure of essentially different components. In Numerical Methods and Applications, ed. Gury I. Marchuk, CRC Press, pp. 221-276, 1994.
A. V. Knyazev and A. L. Skorokhodov. On exact estimates of the convergence rate of the steepest ascent method in the symmetric eigenvalue problem. Linear Algebra Appl., 154/156 (1991), pp. 245-257.
A. V. Knyazev and I. A. Sharapov. Variational Rayleigh quotient iteration methods for a symmetric eigenvalue problem. East-West J. Numer. Math., 1 (1993), no. 2, pp. 121-128.
A. V. Knyazev. Iterative solution of PDE with strongly varying coefficients: algebraic version. In Iterative Methods in Linear Algebra (Brussels, 1991), pp. 85-89, North-Holland, Amsterdam, 1992.
A. V. Knyazev. A parallel algorithm of subspace iterations and its implementation on a multiprocessor with ring architecture. Russian J. Numer. Anal. Math. Modelling, 7 (1992), no. 1, pp. 55-61.

A. V. Knyazev. Convergence rate estimates for iterative methods for a mesh symmetric eigenvalue problem. Translated from the Russian. Soviet J. Numer. Anal. Math. Modelling, 2 (1987), no. 5, pp. 371-396.
A. V. Knyazev. Sharp a priori error estimates for the Rayleigh-Ritz method with no assumptions on fixed sign or compactness. Math. Notes, 38 (1985), no. 5-6, pp. 998-1002.
E. G. D'yakonov and A. V. Knyazev. Group iterative method for finding lower-order eigenvalues. Moscow Univ., Ser. 15, Math. Cyber. (1982), no. 2, pp. 32-40.

Selected conferences:
Miniworkshop "Preconditioning in Eigenvalue Computations" (organizer), March 3-9, 2002, Oberwolfach.
PRISM 2001, May 21-23, 2001, University of Nijmegen, The Netherlands.
III International Workshop on Accurate Solution of Eigenvalue Problems, July 3-6, 2000, Hagen, Germany.
Fifth US National Congress on Computational Mechanics, August 4-6, 1999, University of Colorado at Boulder: minisymposium "Very Large Eigenvalue Problems" (organizer).
SIAM 45th Anniversary Meeting, July 14-18, 1997, Stanford University: minisymposium "Preconditioned Methods for Large Eigenproblems" (organizer).
XII Householder Symposium, Lake Arrowhead, USA, 1993.
Eigenwertaufgaben in Natur- und Ingenieurwissenschaften und ihre numerische Behandlung, Oberwolfach, 1990.
XI Householder Symposium, Tylösand, Sweden, 1990.

Awards:
Teaching Excellence Award, College of Liberal Arts and Sciences, University of Colorado at Denver, 2000.
Faculty Research Fellowship, University of Colorado at Denver, 2000.
Researcher/Creative Artist Award, College of Liberal Arts and Sciences, University of Colorado at Denver, 1999.
CU-Denver nominee for the University of Colorado President's Faculty Excellence Award for Advancing Teaching and Learning through Technology, 1999.