SLEPc: overview, applications and some implementation details

SLEPc: overview, applications and some implementation details
Jose E. Roman
Universidad Politécnica de Valencia, Spain
ETH Zürich, May 2010

Arnoldi Method
Computes a basis V_m of K_m(A, v_1) and H_m = V_m* A V_m:

    for j = 1, 2, ..., m
        w = A v_j
        for i = 1, 2, ..., j
            h_i,j = v_i* w
            w = w - h_i,j v_i
        end
        h_j+1,j = ||w||_2
        v_j+1 = w / h_j+1,j
    end

A very elegant algorithm, BUT what else is required for addressing the needs of real applications?

Outline
1. ...
2. Eigenvalue Solvers. Spectral Transformation
3. Some implementation details of Krylov solvers: orthogonalization, restart; symmetry, deflation, parallelization

Eigenproblems
Large-scale eigenvalue problems are among the most demanding calculations in scientific computing.
Example application areas:
- Dynamic structural analysis (e.g. civil engineering)
- Stability analysis (e.g. control engineering)
- Eigenfunction determination (e.g. electromagnetics)
- Bifurcation analysis (e.g. fluid dynamics)
- Information retrieval (e.g. latent semantic indexing)

Application 1: Molecular Clusters
Goal: analysis of high-nuclearity spin clusters
- Bulk magnetic properties (magnetic susceptibility and magnetization)
- Spectroscopic properties (inelastic neutron scattering spectra)
Example: chain of Ni atoms with antiferromagnetic interaction (use a closed ring to simulate the infinite chain; for 10 ions, n = 59,049)
[Ramos, R., Cardona-Serra, Clemente-Juan, 2010] (submitted)

Application 2: FE in Schrödinger Equation
Goal: facilitate the finite-element analysis of the Schrödinger equation HΨ = εΨ
SLEPc together with the deal.II finite-element library. Features:
- Automatic mesh creation and adaptive refinement
- Computation of local matrices from a library of element types
- Automatic assembly of the system matrices A, B
- Robust computation of energies/eigenstates with SLEPc: (A - ε_n B) ψ_n = 0
Initial work: simple harmonic oscillator [Young, Romero, R., 2009]
Long term: Hartree-Fock self-consistent field formalism

Application 2: FE in Schrödinger Equation (cont'd)
[eigenfunctions for λ = 1, 2, 3]
Square domain, discretized with quadrangular Lagrange elements

Application 3: SIESTA
SIESTA: a parallel code for self-consistent DFT calculations
Generalized symmetric-definite problem: (A - ε_n B) ψ_n = 0
Computing 500 eigenpairs of benzene-2592 with tol = 1e-6
[plot: eigenvalue vs. residual norm for the computed eigenpairs]
Challenges:
- Large number of eigenpairs
- High multiplicity

Application 4: Structural Dynamics
Smallest eigenfrequencies of Kx = λMx (real symmetric-definite)

Application 5: Plasma Physics
Plasma turbulence in a tokamak determines its energy confinement
Simulation based on the nonlinear gyrokinetic equations
GENE parallel code, scalable to 1000s of processors
Numerical solution with an initial-value solver

Application 5: Plasma Physics (cont'd)
Knowledge of the operator's spectrum can be useful: Ax = λx
- Complex, non-Hermitian eigenproblem
- The matrix is not built explicitly
- Sizes ranging from a few million to a billion
Uses:
1. Largest-magnitude eigenvalue to estimate the optimal timestep
2. Tracking sub-dominant instabilities (rightmost eigenvalues)
[R., Kammerer, Merz, Jenko, 2010]
[spectrum plots]

Application 6: Nuclear Engineering
Modal analysis of nuclear reactor cores
Objectives: improve safety, reduce operation costs
Lambda Modes Equation: Lφ = (1/λ) Mφ
We want the modes associated with the largest λ:
- Criticality (eigenvalues)
- Prediction of instabilities and transient analysis (eigenvectors)
[plots of lambda modes]

Application 6: Nuclear Engineering (cont'd)
Discretized eigenproblem (two energy groups):

    [ L_11   0   ] [ψ_1]   1 [ M_11  M_12 ] [ψ_1]
    [ -L_21  L_22] [ψ_2] = λ [  0     0   ] [ψ_2]

Can be formulated as G ψ_1 = λ ψ_1, with G = L_11^{-1} (M_11 + M_12 L_22^{-1} L_21)
- The matrix G should not be computed explicitly
- In some applications, many successive problems are solved
- Other modes may be of interest
[Verdú, Ginestar, R., Vidal, 2010]

Application 7: Computational Electromagnetics
Analysis of resonant cavities. Source-free wave equations:

    ∇ x (μ̂_r^{-1} ∇ x E) - κ_0² ε̂_r E = 0
    ∇ x (ε̂_r^{-1} ∇ x H) - κ_0² μ̂_r H = 0

FEM discretization leads to Ax = κ_0² Bx
Target: smallest nonzero eigenfrequencies (large nullspace):
λ_1 = λ_2 = ... = λ_k = 0, then the targets λ_k+1, λ_k+2, ..., λ_n
Eigenfunctions associated with 0 are irrotational electric fields, E = ∇Φ. This allows the computation of a basis of N(A).
Constrained eigenvalue problem:

    Ax = κ_0² Bx
    Cᵀ B x = 0

Facts Observed from the Examples
Various problem characteristics:
- Real/complex, Hermitian/non-Hermitian
- Need to support complex problems in real arithmetic
Many formulations:
- Standard, generalized, quadratic, non-linear, SVD
- Special cases: implicit matrix, block-structured problems, constrained problems, structured spectrum
Wanted solutions:
- Usually only a few eigenpairs, but may be many
- Any part of the spectrum (exterior, interior), intervals
Robustness and usability:
- Singular B and other special cases, high multiplicity
- Interoperability (linear solvers, FE), flexibility

Eigenvalue Problems
Consider the following eigenvalue problems:
- Standard eigenproblem: Ax = λx
- Generalized eigenproblem: Ax = λBx
where λ is a (complex) scalar, the eigenvalue, and x is a (complex) vector, the eigenvector.
- Matrices A and B can be real or complex
- Matrices A and B can be symmetric (Hermitian) or not
- Typically, B is symmetric positive (semi-)definite

Solution of the Eigenvalue Problem
There are n eigenvalues (counted with their multiplicities).
Partial eigensolution: nev solutions

    λ_0, λ_1, ..., λ_{nev-1} ∈ C
    x_0, x_1, ..., x_{nev-1} ∈ Cⁿ

nev = number of eigenvalues / eigenvectors (eigenpairs)
Different requirements:
- A few of the dominant eigenvalues (largest magnitude)
- A few λ_i's with smallest or largest real parts
- A few λ_i's closest to a target value in the complex plane

Spectral Transformation
A general technique that can be used in many methods:

    Ax = λx  =>  Tx = θx

In the transformed problem:
- The eigenvectors are not altered
- The eigenvalues are modified by a simple relation
- Convergence is usually improved (better separation)
Variants:
- Shift of origin: T_S = A + σI
- Shift-and-invert: T_SI = (A - σI)^{-1}
- Cayley: T_C = (A - σI)^{-1} (A + τI)
Drawback: T is not computed explicitly; linear solves are used instead

What Users Need
Provided by PETSc:
- Abstraction of mathematical objects: vectors and matrices
- Efficient linear solvers (direct or iterative)
- Easy programming interface
- Run-time flexibility, full control over the solution process
- Parallel computing, mostly transparent to the user
Provided by SLEPc:
- State-of-the-art eigensolvers (or SVD solvers)
- Spectral transformations

Summary
PETSc: Portable, Extensible Toolkit for Scientific Computation
Software for the scalable (parallel) solution of algebraic systems arising from partial differential equation (PDE) simulations
- Developed at Argonne National Lab since 1991
- Usable from C, C++, Fortran 77/90
- Focus on abstraction, portability, interoperability
- Extensive documentation and examples
- Freely available and supported through email
http://www.mcs.anl.gov/petsc
Current version: 3.1 (released March 2010)

Summary
SLEPc: Scalable Library for Eigenvalue Problem Computations
A general library for solving large-scale sparse eigenproblems on parallel computers:
- For standard and generalized eigenproblems
- For real and complex arithmetic
- For Hermitian or non-Hermitian problems
- Also support for the partial SVD decomposition
http://www.grycap.upv.es/slepc
Current version: 3.0.0 (released Feb 2009)

PETSc/SLEPc Numerical Components
PETSc:
- Nonlinear Systems: Line Search, Trust Region, Other
- Time Steppers: Euler, Backward Euler, Pseudo Time Step, Other
- Krylov Subspace Methods: GMRES, CG, CGS, Bi-CGStab, TFQMR, Richardson, Chebychev, Other
- Preconditioners: Additive Schwarz, Block Jacobi, Jacobi, ILU, ICC, LU, Other
- Matrices: Compressed Sparse Row, Block Compressed Sparse Row, Block Diagonal, Dense, Other
- Vectors
- Index Sets: Indices, Block Indices, Stride, Other
SLEPc:
- SVD Solvers: Cross Product, Cyclic Matrix, Thick Res. Lanczos, Lanczos
- Eigensolvers: Krylov-Schur, Arnoldi, Lanczos, Other
- Spectral Transform: Shift, Shift-and-invert, Cayley, Fold

EPS: Basic Usage
Usual steps for solving an eigenvalue problem with SLEPc:
1. Create an EPS object
2. Define the eigenvalue problem
3. (Optionally) Specify options for the solution
4. Run the eigensolver
5. Retrieve the computed solution
6. Destroy the EPS object
All these operations are done via a generic interface, common to all the eigensolvers.

EPS: Simple Example

    EPS         eps;      /* eigensolver context */
    Mat         A, B;     /* matrices of Ax=kBx */
    Vec         xr, xi;   /* eigenvector, x */
    PetscScalar kr, ki;   /* eigenvalue, k */
    PetscInt    i, nconv;

    EPSCreate(PETSC_COMM_WORLD, &eps);
    EPSSetOperators(eps, A, B);
    EPSSetProblemType(eps, EPS_GNHEP);
    EPSSetFromOptions(eps);
    EPSSolve(eps);
    EPSGetConverged(eps, &nconv);
    for (i=0; i<nconv; i++) {
      EPSGetEigenpair(eps, i, &kr, &ki, xr, xi);
    }
    EPSDestroy(eps);

EPS: Run-Time Examples

    % program -eps_view -eps_monitor
    % program -eps_type krylovschur -eps_nev 6 -eps_ncv 24
    % program -eps_type arnoldi -eps_tol 1e-8 -eps_max_it 2000
    % program -eps_type subspace -eps_hermitian -log_summary
    % program -eps_type lapack
    % program -eps_type arpack -eps_plot_eigs -draw_pause -1
    % program -eps_type primme -eps_smallest_real

EPS: Viewing Current Options
Sample output of -eps_view:

    EPS Object:
      problem type: symmetric eigenvalue problem
      method: krylovschur
      selected portion of spectrum: largest eigenvalues in magnitude
      number of eigenvalues (nev): 1
      number of column vectors (ncv): 16
      maximum dimension of projected problem (mpd): 16
      maximum number of iterations: 100
      tolerance: 1e-07
      dimension of user-provided deflation space: 0
    IP Object:
      orthogonalization method: classical Gram-Schmidt
      orthogonalization refinement: if needed (eta: 0.707100)
    ST Object:
      type: shift
      shift: 0

Built-in Support Tools
Plotting computed eigenvalues:
    % program -eps_plot_eigs
Printing profiling information:
    % program -log_summary
Debugging:
    % program -start_in_debugger
    % program -malloc_dump
Monitoring convergence (textually):
    % program -eps_monitor
Monitoring convergence (graphically):
    % program -draw_pause 1 -eps_monitor_draw

Spectral Transformation in SLEPc
An ST object is always associated with any EPS object:

    Ax = λx  =>  Tx = θx

- The user need not manage the ST object directly
- Internally, the eigensolver works with the operator T
- At the end, eigenvalues are transformed back automatically

    ST       Standard problem         Generalized problem
    shift    A + σI                   B⁻¹A + σI
    fold     (A - σI)²                (B⁻¹A - σI)²
    sinvert  (A - σI)⁻¹               (A - σB)⁻¹B
    cayley   (A - σI)⁻¹(A + τI)       (A - σB)⁻¹(A + τB)

Illustration of Spectral Transformation
- Spectrum folding: θ = (λ - σ)²
- Shift-and-invert: θ = 1/(λ - σ)
[plots of the two eigenvalue mappings]

ST: Run-Time Examples

    % program -eps_type power -st_type shift -st_shift 1.5
    % program -eps_type power -st_type sinvert -st_shift 1.5
    % program -eps_type power -st_type sinvert -eps_power_shift_type rayleigh
    % program -eps_type arpack -eps_tol 1e-6 -st_type sinvert -st_shift 1
              -st_ksp_type cgs -st_ksp_rtol 1e-8 -st_pc_type sor -st_pc_sor_omega 1.3

SLEPc Technical Reports
Contain technical descriptions of the actual implementation:
- STR-1: Orthogonalization Routines in SLEPc
- STR-2: Single Vector Iteration Methods in SLEPc
- STR-3: Subspace Iteration in SLEPc
- STR-4: Arnoldi Methods in SLEPc
- STR-5: Lanczos Methods in SLEPc
- STR-6: A Survey of Software for Sparse Eigenvalue Problems
- STR-7: Krylov-Schur Methods in SLEPc
- STR-8: Restarted Lanczos Bidiagonalization for the SVD in SLEPc
- STR-9: Practical Implementation of Harmonic Krylov-Schur
Available at http://www.grycap.upv.es/slepc

Orthogonalization
In Krylov methods, we need good quality of orthogonalization:
- Classical GS is not numerically robust
- Modified GS is bad for parallel computing
- Modified GS can also be unstable in some cases
Solution: iterative Gram-Schmidt
Default in SLEPc: classical GS with selective reorthogonalization [Hernandez, Tomas, R., 2007]

    h_{1:j,j} = 0
    repeat
        ρ = ||w||_2
        c_{1:j,j} = V_j* w
        w = w - V_j c_{1:j,j}
        h_{1:j,j} = h_{1:j,j} + c_{1:j,j}
        h_{j+1,j} = sqrt(ρ² - Σ_{i=1..j} c_{i,j}²)
    until h_{j+1,j} > η ρ

Restart
In most applications, restart is essential:
- Explicit restart is not powerful enough
- Implicit/thick restart keeps a lot of useful information... and purges unwanted eigenpairs
- Krylov-Schur is much easier to implement
- Restart with locking is necessary in Lanczos for problems with multiple eigenvalues
- Semi-orthogonalization for Lanczos may not be worthwhile with restart

Computation of Many Eigenpairs
By default, a subspace of dimension 2*nev is used... For large nev, this is not appropriate: excessive storage and inefficient computation.

    A V_m = V_m S_m + v_{m+1} b_{m+1}*

Strategy: restrict the dimension of the projected problem:

    % program -eps_nev 2000 -eps_mpd 300

Stopping Criterion
Krylov methods provide an estimate of the residual norm:

    ||r_i|| = β_{m+1} |s_{m,i}|

The stopping criterion is based on the normwise backward error:

    ||r_i|| / ((a + |θ_i| b) ||x_i||) < tol

where a = ||A||, b = ||B||, or a = 1, b = 1.
Warning: in shift-and-invert the above residual estimate is for ||(A - σB)⁻¹B x_i - θ_i x_i||.
In SLEPc we allow for explicit computation of the residual.

Preserving the Symmetry
In symmetric-definite generalized eigenproblems, symmetry is lost because, e.g., (A - σB)⁻¹B is not symmetric.
Choice of inner product:
- Standard Hermitian inner product: <x, y> = y* x
- B-inner product: <x, y>_B = y* B x
Observations:
- <.,.>_B is a genuine inner product only if B is symmetric positive definite
- Rⁿ with <.,.>_B is isomorphic to the Euclidean n-space Rⁿ with the standard Hermitian inner product
- (A - σB)⁻¹B is self-adjoint with respect to <.,.>_B

SLEPc Abstraction
- The user can specify the problem type: General, Hermitian, Positive Definite, Complex Symmetric
- The user selects the solver: Power/RQI, Subspace Iteration, Arnoldi, Lanczos, Krylov-Schur, External Solvers
- The user can specify the spectral transform: Shift, Shift-and-invert, Cayley, Fold
The appropriate inner product and the appropriate matrix-vector product are performed; these operations are virtual functions: STInnerProduct and STApply.

Purification of Eigenvectors
When B is singular, some additional precautions are required. If T = (A - σB)⁻¹B, then all finite eigenvectors belong to R(T):
- <x, y>_B is a semi-inner product and ||x||_B is a semi-norm
- <x, y>_B is a true inner product on R(T)
Strategy:
- Force the initial vector to lie in R(T)
- Use a Krylov method with <.,.>_B
In finite-precision arithmetic we need purification: remove components in the nullspace of B (e.g., with x_i <- T x_i)

Problems with Implicit Matrix
Example: cross-product matrix A*A (for the SVD)
Example: linearized QEP

    [ 0   I ]       [ I  0 ]
    [-K  -C ] x = λ [ 0  M ] x

In SLEPc it is very easy to work with implicit matrices:
1. Create an empty matrix (shell matrix)
2. Register functions to be called for certain operations (matrix-vector product, but maybe others)
3. Use the matrix as a regular matrix

Options for Subspace Expansion
Initial subspace:
- Provide an initial trial subspace, e.g., from a previous computation (EPSSetInitialSpace)
- Krylov methods can only use one initial vector
Deflation subspace:
- Provide a deflation space with EPSSetDeflationSpace
- The eigensolver operates on the restriction to the orthogonal complement
- Useful for constrained eigenproblems or problems with a known nullspace
- Currently implemented as an orthogonalization

Parallel Layout - Vectors
Each process locally owns a subvector of contiguously numbered global indices:
- Simple block-row division of vectors
- Other orderings through permutation
Vector dot products are inefficient (global reduction): avoid them if possible, or merge several together.

Parallel Layout - Matrices
Each process locally owns a contiguous chunk of rows, and stores the diagonal and off-diagonal parts of its chunk separately.

Parallel Performance on MareNostrum
Speed-up: S_p = T_s / T_p
Test case: matrix PRE2 (University of Florida collection), dimension 659,033 with 5,834,044 nonzero elements; 10 eigenvalues with 30 basis vectors; 1 processor per node.
[speed-up plot: SLEPc vs. ARPACK vs. ideal, up to 320 nodes]

Wrap Up
SLEPc highlights:
- Free software
- Growing list of solvers
- Seamlessly integrated spectral transformation
- Easy programming with PETSc's object-oriented style
- Data-structure-neutral implementation
- Run-time flexibility
- Portability to a wide range of parallel platforms
- Usable from code written in C, C++, Fortran, Python
- Extensive documentation
Next release: Jacobi-Davidson and QEP solvers

More Information
Homepage: http://www.grycap.upv.es/slepc
Hands-on exercises: http://www.grycap.upv.es/slepc/handson
Contact email: slepc-maint@grycap.upv.es