Eigenvalue Problems: Computation and Applications


Eigenvalue Problems: Computation and Applications. Che-Rung Lee, cherung@gmail.com, National Tsing Hua University.

Outline: Definitions. Applications: Google's PageRank, latent semantic indexing, point set segmentation. Computation: power method, shift-invert enhancement, subspace methods, residual Arnoldi method. Conclusion.

Eigenvalue and Eigenvector. For a given n × n matrix A, if a scalar λ and a nonzero vector x satisfy Ax = λx, we say (λ, x) is an eigenpair of A; λ is called an eigenvalue and x is called an eigenvector.
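As a quick numerical illustration of the definition (not from the slides; a minimal sketch assuming NumPy), one can compute the eigenpairs of a small matrix and check Ax = λx directly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # an arbitrary small example matrix
lams, X = np.linalg.eig(A)            # eigenvalues and eigenvectors (columns of X)

for lam, x in zip(lams, X.T):
    print(lam, np.allclose(A @ x, lam * x))   # Ax = lambda*x holds for each eigenpair
```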

Applications

Google's PageRank. Rank web pages by their visiting probability; users' browsing behavior is modeled by a random walk. [Lawrence Page, Sergey Brin, Rajeev Motwani, Terry Winograd. The PageRank Citation Ranking: Bringing Order to the Web.]

How to Compute? The row-stochastic matrix P:

P = [  0    1/2   0    1/2   0
       0     0    0     1    0
       0     1    0     0    0
       0    1/3  1/3    0   1/3
      1/2   1/2   0     0    0  ]

The largest eigenvalue of P^T is λ1 = 1; the corresponding eigenvector x1 gives the ranks. Markov chain: x1 is the stationary distribution. Many, many, many applications.
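A minimal power-iteration sketch of this computation (assuming NumPy and using the example matrix above; no damping factor, exactly as on the slide): repeatedly multiplying by P^T and renormalizing converges to the stationary distribution x1.

```python
import numpy as np

# Row-stochastic matrix P from the slide (5 web pages).
P = np.array([
    [0,   1/2, 0,   1/2, 0  ],
    [0,   0,   0,   1,   0  ],
    [0,   1,   0,   0,   0  ],
    [0,   1/3, 1/3, 0,   1/3],
    [1/2, 1/2, 0,   0,   0  ],
])

x = np.ones(5) / 5                 # start from the uniform distribution
for _ in range(1000):
    x_new = P.T @ x                # power iteration on P^T (dominant eigenvalue is 1)
    x_new /= x_new.sum()           # keep it a probability vector (guards against round-off)
    if np.linalg.norm(x_new - x, 1) < 1e-12:
        break
    x = x_new

print(x)                           # PageRank of the 5 pages
```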

Semantic Query. Term-document matrix:

Term       D1  D2  D3  D4  D5
apple      53  65   0  30   1
computer   10  20  40  43   0
imac       30  10  25  52  70

A query on "computer" returns D1, D2, D3, D4. A query on "apple" returns D1, D2, and D4. [S. Deerwester, S. Dumais, G. W. Furnas, T. K. Landauer, R. Harshman (1990). Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science 41(6): 391-407. Example modified from Dianne O'Leary's talk at MMDS 2006.]

Latent Semantic Indexing. Term-document matrix A and its low-rank approximation Â:

A = [ 53  65   0  30   1
      10  20  40  43   0
      30  10  25  52  70 ]

Â = [ 49  65   7  34   5
      23  22  14  30  21
      25   9  34  57  63 ]

Using Â instead of A, the query on "computer" now returns all documents, and the query on "apple" returns D1, D2, D4.

Low Rank Approximation. Singular value decomposition (SVD): A = UΣV^T = U diag(σ1, ..., σn) V^T. Low-rank approximation: Â = U diag(σ1, ..., σp, 0, ..., 0) V^T. Â is the best rank-p approximation, and σ1^2, ..., σp^2 are the p largest eigenvalues of A^T A.
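A short sketch of this truncated SVD applied to the term-document matrix from the previous slides (assuming NumPy; the choice p = 2 is an assumption, since the slide does not state which rank was used):

```python
import numpy as np

# Term-document matrix (terms: apple, computer, imac).
A = np.array([
    [53, 65,  0, 30,  1],
    [10, 20, 40, 43,  0],
    [30, 10, 25, 52, 70],
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
p = 2                                             # assumed rank of the approximation
A_hat = U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]     # best rank-p approximation of A

print(np.round(A_hat, 1))
```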

Applications of SVD: information retrieval, text/speech summarization, document organization, image processing and compression, pattern recognition, eigenfaces, watermarking, model reduction, dimension reduction, digital signal processing, independent component analysis, total least squares problems, bioinformatics, junk e-mail filtering, ...

Point Set Segmentation. Partitioning a given point-sampled surface into distinct parts without explicit construction of a triangle mesh. [Ichitaro Yamazaki et al., Segmenting Point Sets, IEEE ICSMA 2006.]

Graph Partition. Given a graph G = (V, E), the normalized cut of a vertex partition V = V1 ∪ V2 is

Ncut(V1, V2) = cut(V1, V2)/assoc(V1, V) + cut(V1, V2)/assoc(V2, V),

where cut(V1, V2) = Σ_{u ∈ V1, v ∈ V2} W(u, v) and assoc(V1, V) = Σ_{u ∈ V1, v ∈ V} W(u, v). Discrete minimization of Ncut is NP-complete; spectral graph partitioning gives an approximate solution.
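The definitions translate directly into code. Below is a small sketch (assuming NumPy) where W is a symmetric weight matrix and the two parts are given as index lists; the names ncut, V1 and V2 are illustrative only.

```python
import numpy as np

def ncut(W, V1, V2):
    """Normalized cut of the partition (V1, V2) for the weight matrix W."""
    cut = W[np.ix_(V1, V2)].sum()      # total weight crossing the partition
    assoc1 = W[V1, :].sum()            # assoc(V1, V)
    assoc2 = W[V2, :].sum()            # assoc(V2, V)
    return cut / assoc1 + cut / assoc2

# Example: unweighted path graph 0-1-2-3, cut between vertices 1 and 2.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(ncut(W, [0, 1], [2, 3]))         # 1/3 + 1/3 = 0.666...
```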

How to Compute? The Laplacian matrix is L = D − A, where D is the diagonal matrix of vertex degrees and A is the adjacency matrix. The eigenvector x2 of the second smallest eigenvalue of L partitions the graph: v_i ∈ V1 if x2(i) > 0, and v_i ∈ V2 if x2(i) < 0. The second smallest eigenvalue of L is called the algebraic connectivity.
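A minimal sketch of this spectral bisection (assuming NumPy and an unweighted, connected graph; numpy.linalg.eigh returns the eigenvalues of the symmetric Laplacian in ascending order, so column 1 holds the Fiedler vector):

```python
import numpy as np

def spectral_bisection(A):
    """Split a graph with adjacency matrix A into two parts using the Fiedler vector."""
    L = np.diag(A.sum(axis=1)) - A        # Laplacian L = D - A
    w, V = np.linalg.eigh(L)              # eigenvalues in ascending order
    x2 = V[:, 1]                          # eigenvector of the second smallest eigenvalue
    return np.where(x2 > 0)[0], np.where(x2 <= 0)[0]

# Example: two triangles {0,1,2} and {3,4,5} joined by the edge (2,3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(spectral_bisection(A))              # expect {0,1,2} vs. {3,4,5} (up to a sign flip)
```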

Applications of Spectral Graph Partitioning: telephone network design, load balancing while minimizing communication, sparse matrix-vector multiplication, VLSI layout, sparse Gaussian elimination, data mining and clustering, physical mapping of DNA, graph embedding, image segmentation, ...

Spectral Graph Embedding. Compute the eigenvectors x2, x3 of the second and third smallest nonzero eigenvalues and use (x2, x3) as the xy-coordinates of the vertices. [Figure: a graph and its spectral embedding. Dan Spielman, Spectral Graph Theory and its Applications.]
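The embedding itself is only a few lines; a sketch assuming NumPy and a connected graph (so that, after sorting, columns 1 and 2 of the eigenvector matrix belong to the two smallest nonzero eigenvalues):

```python
import numpy as np

def spectral_layout(A):
    """xy-coordinates for the vertices of a connected graph with adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A     # graph Laplacian
    w, V = np.linalg.eigh(L)           # eigenvalues in ascending order (first one is 0)
    return V[:, 1], V[:, 2]            # x2 and x3, used as x- and y-coordinates
```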

Computation

The Power Method. Algorithm: 1. Given an initial vector p0 with ||p0|| = 1. 2. For i = 1, 2, ... until converged: (a) p_i = A p_{i-1}; (b) p_i = p_i / ||p_i|| (normalization). Properties: 1. Only matrix-vector multiplication is needed. 2. The vector p_k converges to the eigenvector of λ1, assuming |λ1| > |λ2| ≥ ... ≥ |λn|. 3. The convergence rate is the ratio |λ2| / |λ1|.
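A direct transcription of the algorithm (a sketch assuming NumPy; a Rayleigh-quotient eigenvalue estimate and a residual check are added to make the convergence test concrete, since the slide leaves it unspecified):

```python
import numpy as np

def power_method(A, tol=1e-14, max_iter=1000, seed=0):
    """Approximate the dominant eigenpair of A with the power method."""
    p = np.random.default_rng(seed).standard_normal(A.shape[0])
    p /= np.linalg.norm(p)                  # initial vector with ||p0|| = 1
    mu = p @ A @ p
    for _ in range(max_iter):
        q = A @ p                           # the only operation that touches A
        p = q / np.linalg.norm(q)           # normalization
        mu = p @ A @ p                      # Rayleigh-quotient eigenvalue estimate
        if np.linalg.norm(A @ p - mu * p) < tol:
            break
    return mu, p
```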

An Example. A 100 × 100 matrix with eigenvalues 1, 0.95, ..., 0.95^99. The power method converges to a residual norm of 10^-14 in 550+ iterations. [Figure: residual norm vs. iteration count.]

Residual. Let (µ, z) be an approximation to an eigenpair of A. Its residual is r = Az − µz. A small residual implies that (µ, z) is a good approximation (in the sense of backward error). [Figure: the vectors z, Az, µz, and r.]

Speedup Methods. 1. Shift-invert enhancement: enlarges the ratio of λ1 and λ2; can be used to find other eigenpairs. 2. Subspace methods: use linear combinations of vectors to approximate the desired eigenvector; can compute more than one eigenpair. 3. Subspace methods + shift-invert enhancement.

Shift-invert Enhancement. Use C = (A − σI)^{-1} in the power method (assume σ ≠ λi for all i). The eigenvectors of C are the same as those of A; the eigenvalues of C are 1/(λi − σ). If σ is close enough to λ1, the convergence rate for x1 becomes |λ1 − σ| / |λ2 − σ|.
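A sketch of the shift-invert power iteration (assuming SciPy). A dense LU factorization of A − σI is computed once and reused in every iteration; for large sparse A one would instead use a sparse or iterative solver, which is exactly the issue raised on the later slides.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shift_invert_power(A, sigma, tol=1e-14, max_iter=500, seed=0):
    """Power method applied to C = (A - sigma*I)^{-1} without forming C explicitly."""
    n = A.shape[0]
    lu = lu_factor(A - sigma * np.eye(n))   # factor A - sigma*I once
    p = np.random.default_rng(seed).standard_normal(n)
    p /= np.linalg.norm(p)
    for _ in range(max_iter):
        q = lu_solve(lu, p)                 # q = C p, i.e. solve (A - sigma*I) q = p
        p = q / np.linalg.norm(q)
        mu = p @ A @ p                      # Rayleigh quotient with respect to A itself
        if np.linalg.norm(A @ p - mu * p) < tol:
            break
    return mu, p
```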

Example. Use the previous example and set σ = 1.2. The convergence rate improves from 0.95 to |1 − 1.2| / |0.95 − 1.2| = 0.8. [Figure: residual norm vs. iterations for the power method with and without shift-invert.]

Subspace Methods. Linear combinations of eigenvector approximations can give better approximations. Algorithm: 1. Construct a subspace span{u1, u2, ..., uk}. 2. Extract eigenvector approximations from the subspace. 3. Test convergence. Subspace methods can compute more than one eigenvector.
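Step 2 ("extract") is typically done by Rayleigh-Ritz projection; here is a minimal sketch (assuming NumPy), where the subspace is given by the columns of U and the k Ritz values largest in magnitude are returned:

```python
import numpy as np

def rayleigh_ritz(A, U, k=1):
    """Ritz pairs of A extracted from the subspace spanned by the columns of U."""
    Q, _ = np.linalg.qr(U)                  # orthonormal basis of the subspace
    theta, S = np.linalg.eig(Q.T @ A @ Q)   # small projected eigenproblem
    order = np.argsort(-np.abs(theta))[:k]  # k Ritz values largest in magnitude
    return theta[order], Q @ S[:, order]    # Ritz values and Ritz vectors
```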

Krylov Subspace. For a given unit vector u1, the kth-order Krylov subspace of the matrix A and the vector u1 is K_k(A, u1) = span{u1, Au1, ..., A^{k-1} u1}. Properties: 1. Only matrix-vector multiplication is required. 2. It usually contains good approximations to the eigenvectors whose eigenvalues lie on the periphery of the spectrum. 3. Superlinear convergence.
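A sketch of building an orthonormal basis of K_k(A, u1) by Gram-Schmidt (the Arnoldi process, assuming NumPy and no breakdown, i.e. the new direction never vanishes); combining it with the rayleigh_ritz sketch above gives a basic Krylov eigensolver.

```python
import numpy as np

def krylov_basis(A, u1, k):
    """Orthonormal basis Q of the Krylov subspace K_k(A, u1)."""
    Q = np.zeros((len(u1), k))
    Q[:, 0] = u1 / np.linalg.norm(u1)
    for j in range(1, k):
        w = A @ Q[:, j - 1]                  # next Krylov direction
        w -= Q[:, :j] @ (Q[:, :j].T @ w)     # orthogonalize against the earlier vectors
        Q[:, j] = w / np.linalg.norm(w)      # assumes no breakdown (w != 0)
    return Q
```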

Krylov Subspace. The two eigenpairs with the largest eigenvalues converge to a residual norm of 10^-14 within 40 iterations. [Figure: residual norm vs. iterations for the dominant and subdominant eigenpairs.]

Krylov Subspace + Shift-invert. Use the subspace span{u1, Cu1, ..., C^{k-1} u1}, where C = (A − σI)^{-1} and σ = 1.2. [Figure: error of the best approximation vs. subspace dimension for the dominant and subdominant eigenpairs.]

Problems. For the Krylov subspace method with shift-invert, (A − σI) u_{k+1} = u_k must be solved "exactly" in every iteration. When A is large, iterative methods are often used to solve these linear systems, and the computation time grows with the desired precision.

With Inexact Shift-invert. The linear systems are solved only to an accuracy of 10^-3 in every iteration. [Figure: error of the best approximation vs. subspace dimension for the dominant and subdominant eigenpairs.]

Residual Arnoldi Method. Construct the subspace from the residuals r0, r1, ..., r_{k-1}, where r_i is the residual of an approximation at the ith iteration. Algorithm: 1. Choose an approximation (µ_k, p_k) from the current subspace. 2. Compute the residual r_k = A p_k − µ_k p_k. 3. Add r_k to the current subspace.
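A minimal sketch of the idea (assuming NumPy and, for simplicity, a real symmetric A so that all quantities stay real; the actual residual Arnoldi method handles general matrices and practical details such as restarting and deflation that are omitted here). The parameter target selects which Ritz pair, ordered by magnitude, is tracked.

```python
import numpy as np

def residual_arnoldi(A, u0, k, target=0):
    """Grow a subspace by adding the residual of the tracked (target) Ritz pair."""
    Q = (u0 / np.linalg.norm(u0))[:, None]
    for _ in range(k - 1):
        theta, S = np.linalg.eigh(Q.T @ A @ Q)   # Rayleigh-Ritz on the current subspace
        order = np.argsort(-np.abs(theta))       # Ritz values sorted by magnitude
        mu = theta[order[target]]
        p = Q @ S[:, order[target]]              # current approximation (mu, p)
        r = A @ p - mu * p                       # its residual
        r -= Q @ (Q.T @ r)                       # re-orthogonalize for numerical stability
        Q = np.hstack([Q, (r / np.linalg.norm(r))[:, None]])
    return Q
```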

Without Shift-invert. Use the residuals of the approximations to (λ1, x1); (λ1, x1) is called the target. [Figure: residual norm vs. iterations for the dominant and subdominant eigenpairs.]

Residual Arnoldi + Shift-invert. 1. Choose an approximation (µ_k, p_k) from the current subspace. 2. Compute the residual r_k = A p_k − µ_k p_k. 3. Add (A − σI)^{-1} r_k to the subspace. Properties with inexact shift-invert: 1. The approximations to the target can still converge. 2. Other approximations fail to converge.

With Inexact Shift-invert. The linear systems are solved only to an accuracy of 10^-3 in every iteration; the target is (λ1, x1). [Figure: error of the best approximation vs. subspace dimension for the dominant and subdominant eigenpairs.]

Change Target. Change the target to (λ2, x2) at iteration 20. [Figure: error of the best approximation vs. subspace dimension for the dominant and subdominant eigenpairs.]

Performance Comparison. For a 10000 × 10000 matrix, we want to find the 6 smallest eigenvalues and their eigenvectors. The comparison is against the eigensolver ARPACK. Shift-invert enhancement is applied with shift 0 and the linear solver GMRES. Performance is measured by the total number of matrix-vector multiplications, i.e. the number of inner iterations times the number of outer iterations. Inner iteration: uses matrix-vector multiplications to solve the linear systems. Outer iteration: uses the results of the inner iterations to solve the eigenproblem.

Convergence Results. Red lines: ARPACK. Blue dots: residual Arnoldi method. The residual Arnoldi method is 2.25 times faster than ARPACK and uses fewer than half as many matrix-vector multiplications. [Figure: convergence histories and matrix-vector multiplication counts for both methods.]

Conclusion. Eigenvalue problems are everywhere. There is no single best algorithm for solving large eigenproblems. The residual Arnoldi method is well suited to computing interior or clustered eigenvalues, to applying shift-invert enhancement, and to parallelization.