Eigenvalue Problems: Computation and Applications
Che-Rung Lee, National Tsing Hua University

Outline
- Definitions
- Applications: Google's PageRank, latent semantic indexing, point set segmentation
- Computation: the power method, shift-invert enhancement, subspace methods, the residual Arnoldi method
- Conclusion

Eigenvalue and Eigenvector
For a given n × n matrix A, if a scalar λ and a nonzero vector x satisfy Ax = λx, we say (λ, x) is an eigenpair of A; λ is called an eigenvalue and x is called an eigenvector.
[Figure: Ax is parallel to x, scaled by λ.]

Applications

Google's PageRank
Rank web pages by visiting probability. Users' browsing behavior is modeled by a random walk. [Lawrence Page, Sergey Brin, Rajeev Motwani, Terry Winograd. The PageRank Citation Ranking: Bringing Order to the Web. Manuscript in progress.]

How to Compute?
The row-stochastic matrix P holds the transition probabilities of the random walk: entry P(i, j) is the probability of moving from page i to page j, so each row sums to 1.
[Example: a 4 × 4 row-stochastic matrix built from a small web graph.]
The largest eigenvalue λ_1 of P^T is 1; the corresponding eigenvector x_1 gives the ranks. Markov chain view: x_1 is the stationary distribution. Many, many, many applications.

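As a minimal sketch in Python with NumPy (the matrix below is illustrative, not the slide's example), the rank vector can be computed by power iteration on P^T:

```python
import numpy as np

# An illustrative 4-page row-stochastic transition matrix.
P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [1/3, 1/3, 0.0, 1/3],
              [0.5, 0.5, 0.0, 0.0]])

x = np.full(4, 0.25)              # start from the uniform distribution
for _ in range(1000):
    x_new = P.T @ x               # one step of the random walk
    x_new /= x_new.sum()          # keep x a probability vector
    if np.abs(x_new - x).sum() < 1e-12:
        break
    x = x_new

print(x)  # approximate stationary distribution = PageRank vector
```
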
Semantic Query
[Table: term-document incidence matrix for the terms apple, computer, imac over documents D1-D5.]
A query on "computer" returns D1, D2, D3, D4. A query on "apple" returns D1, D2, and D4. [S. Deerwester, S. Dumais, G. W. Furnas, T. K. Landauer, R. Harshman (1990). Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science 41(6): 391-407. Example modified from Dianne O'Leary's talk at MMDS 2006.]

Latent Semantic Indexing
[Figure: the term-document matrix A and its low-rank approximation Â.]
Using Â instead of A, the query on "computer" now returns all documents, and the query on "apple" returns D1, D2, D4.

Low Rank Approximation
Singular value decomposition (SVD): A = UΣV^T = U diag(σ_1, ..., σ_n) V^T.
Low-rank approximation: Â = U diag(σ_1, ..., σ_p, 0, ..., 0) V^T.
Â is the best rank-p approximation, and σ_1^2, ..., σ_p^2 are the p largest eigenvalues of A^T A.

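A hedged sketch of the truncation in NumPy (the function name is ours, not from the slides):

```python
import numpy as np

def best_rank_p(A, p):
    """Best rank-p approximation of A (Eckart-Young), via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]

# The squared singular values match the eigenvalues of A^T A:
A = np.random.default_rng(0).standard_normal((5, 3))
s = np.linalg.svd(A, compute_uv=False)
print(np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(A.T @ A))))
```
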
Applications of SVD
- Information retrieval, text/speech summarization, document organization
- Image processing and compression
- Pattern recognition, eigenfaces, watermarking
- Model reduction, dimension reduction
- Digital signal processing
- Independent component analysis
- Total least squares problems
- Bioinformatics
- Junk filtering
- ...

Point Set Segmentation
Partition a given point-sampled surface into distinct parts without explicit construction of a triangle mesh. [Ichitaro Yamazaki et al., Segmenting Point Sets, IEEE ICSMA 2006.]

Graph Partition
Given a graph G = (V, E), the normalized cut of a vertex partition V = V_1 ∪ V_2 is
Ncut(V_1, V_2) = cut(V_1, V_2)/assoc(V_1, V) + cut(V_1, V_2)/assoc(V_2, V),
where
cut(V_1, V_2) = Σ_{v_1 ∈ V_1, v_2 ∈ V_2} W(v_1, v_2) and assoc(V_1, V) = Σ_{v_1 ∈ V_1, v ∈ V} W(v_1, v).
Discrete minimization of Ncut is NP-complete; spectral graph partitioning gives an approximate solution.

How to Compute?
The Laplacian matrix is L = D − A, where D is the diagonal matrix of vertex degrees and A is the adjacency matrix. The eigenvector x_2 of the second smallest eigenvalue of L partitions the graph:
v_i ∈ V_1 if x_2(i) > 0; v_i ∈ V_2 if x_2(i) < 0.
The second smallest eigenvalue of L is called the algebraic connectivity.

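A minimal sketch of this spectral bisection (assuming a dense symmetric weight matrix W; the helper name is ours):

```python
import numpy as np

def spectral_bisection(W):
    """Split vertices by the sign of the Fiedler vector, i.e. the
    eigenvector of the second smallest eigenvalue of L = D - A."""
    D = np.diag(W.sum(axis=1))       # degree matrix
    L = D - W                        # graph Laplacian
    _, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    x2 = vecs[:, 1]                  # Fiedler vector
    return x2 > 0                    # True -> V1, False -> V2
```
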
Applications of Spectral Graph Theory
- Telephone network design
- Load balancing while minimizing communication
- Sparse matrix-vector multiplication
- VLSI layout
- Sparse Gaussian elimination
- Data mining and clustering
- Physical mapping of DNA
- Graph embedding
- Image segmentation
- ...

Spectral Graph Embedding
Compute the eigenvectors x_2, x_3 of the second and third smallest nonzero eigenvalues, and use x_2, x_3 as the xy-coordinates of the vertices. [Dan Spielman, Spectral Graph Theory and its Applications.]

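Continuing the sketch above under the same assumptions (symmetric weight matrix W of a connected graph, our own helper name):

```python
import numpy as np

def spectral_embedding(W):
    """Use the eigenvectors of the second and third smallest Laplacian
    eigenvalues as xy-coordinates for the vertices (a sketch)."""
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    return vecs[:, 1:3]              # columns are x_2 (x-axis), x_3 (y-axis)
```
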
Computation

The Power Method
Algorithm:
1. Given an initial vector p_0 with ||p_0|| = 1.
2. For i = 1, 2, ... until converged:
   (a) p_i = A p_{i-1}
   (b) p_i = p_i / ||p_i||   (normalization)
Properties:
1. Only matrix-vector multiplication is needed.
2. The vector p_k converges to the eigenvector of λ_1, assuming |λ_1| > |λ_2| ≥ ... ≥ |λ_n|.
3. The convergence rate is the ratio |λ_2| / |λ_1|.

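A hedged Python sketch of the algorithm above (the residual-based convergence test is our addition; the slides do not specify one):

```python
import numpy as np

def power_method(A, tol=1e-10, maxit=10000):
    rng = np.random.default_rng(0)
    p = rng.standard_normal(A.shape[0])
    p /= np.linalg.norm(p)           # step 1: unit initial vector
    for _ in range(maxit):
        p = A @ p                    # step 2(a): multiply
        p /= np.linalg.norm(p)       # step 2(b): normalize
        mu = p @ A @ p               # Rayleigh quotient estimate of lambda_1
        if np.linalg.norm(A @ p - mu * p) < tol:
            break
    return mu, p
```
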
An Example
A matrix with eigenvalues 1, 0.95, ...: the power method takes 550+ iterations to converge.
[Plot: residual norm versus iteration count.]

Residual
Residual of an approximation: let (µ, z) be an approximation to an eigenpair of A. Its residual is r = Az − µz.
[Figure: the vectors z, µz, and Az; r is the gap between Az and µz.]
A small residual implies that (µ, z) is a good approximation (in the sense of backward error).

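As a hedged one-line justification of the backward-error remark (a standard fact, not spelled out on the slide): (µ, z) is an exact eigenpair of a matrix at distance ||r||/||z|| from A.

```latex
% With r = Az - \mu z, the rank-one perturbation E = r z^* / (z^* z) gives
% (A - E)z = Az - r = \mu z, so (\mu, z) is exact for A - E, and
\[
  (A - E)\,z = \mu z, \qquad E = \frac{r\,z^{*}}{z^{*}z}, \qquad
  \|E\|_2 = \frac{\|r\|_2}{\|z\|_2}.
\]
```
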
Speedup Methods
1. Shift-invert enhancement: enlarges the ratio of λ_1 to λ_2; can be used to find other eigenpairs.
2. Subspace methods: use linear combinations of vectors to approximate the desired eigenvector; can compute more than one eigenpair.
3. Subspace methods + shift-invert enhancement.

Shift-invert Enhancement
Use C = (A − σI)^{-1} in the power method (assume σ ≠ λ_i for all i).
- The eigenvectors of C are the same as A's.
- The eigenvalues of C are 1/(λ_i − σ).
If σ is close enough to λ_1, the convergence rate for x_1 becomes |λ_1 − σ| / |λ_2 − σ|.

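A sketch of the enhanced iteration, applying C through an LU factorization rather than forming the inverse (a standard implementation choice, not specified on the slides):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shift_invert_power(A, sigma, tol=1e-10, maxit=500):
    n = A.shape[0]
    lu = lu_factor(A - sigma * np.eye(n))   # factor once, reuse every step
    p = np.ones(n) / np.sqrt(n)
    for _ in range(maxit):
        p = lu_solve(lu, p)          # p <- C p = (A - sigma I)^{-1} p
        p /= np.linalg.norm(p)
        mu = p @ A @ p               # eigenvalue estimate for A itself
        if np.linalg.norm(A @ p - mu * p) < tol:
            break
    return mu, p
```
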
Example
Use the previous example and set σ = 1.2. The convergence rate improves from 0.95 to |1 − 1.2| / |0.95 − 1.2| = 0.8.
[Plot: residual norm versus iteration count for the power method with and without shift-invert.]

Subspace Methods
Linear combinations of eigenvector approximations can yield better approximations.
Algorithm:
1. Construct a subspace span{u_1, u_2, ..., u_k}.
2. Extract eigenvector approximations from the subspace.
3. Test convergence.
Can compute more than one eigenvector.

Krylov Subspace
For a given unit vector u_1, the kth-order Krylov subspace of a matrix A and the vector u_1 is
K_k(A, u_1) = span{u_1, Au_1, ..., A^{k-1} u_1}.
Properties:
1. Only matrix-vector multiplication is required.
2. It usually contains good approximations to the eigenvectors whose eigenvalues lie on the periphery of the spectrum.
3. Superlinear convergence.

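A sketch of the Arnoldi process, which builds an orthonormal basis of K_k(A, u_1); the eigenvalues of the small projected matrix H (the Ritz values) then approximate peripheral eigenvalues of A:

```python
import numpy as np

def arnoldi(A, u1, k):
    n = len(u1)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = u1 / np.linalg.norm(u1)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):               # orthogonalize against basis
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # exact invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :k], H[:k, :k]

# Usage sketch: Q, H = arnoldi(A, u1, 30); ritz = np.linalg.eigvals(H)
```
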
Krylov Subspace
The two eigenpairs with the largest eigenvalues converge within 40 iterations.
[Plot: residual norms of the dominant and subdominant eigenpairs versus iteration count.]

Krylov Subspace + Shift-invert
Use the subspace span{u_1, Cu_1, ..., C^{k-1} u_1}, where C = (A − σI)^{-1}.
[Plot: error of the best approximation versus subspace dimension for the dominant and subdominant eigenpairs.]

Problems
For the Krylov subspace method, (A − σI) u_{k+1} = u_k must be solved "exactly" in every iteration. When A is large, iterative methods are often used to solve these linear systems, and the computation time grows with the desired precision.

With Inexact Shift-invert
Linear systems are solved to an accuracy of 10^{-3} in every iteration.
[Plot: error of the best approximation versus subspace dimension for the dominant and subdominant eigenpairs.]

Residual Arnoldi Method
Construct the subspace from r_0, r_1, ..., r_{k-1}, where r_i is the residual of an approximation in the ith iteration.
Algorithm:
1. Choose an approximation (µ_k, p_k) from the current subspace.
2. Compute the residual r_k = A p_k − µ_k p_k.
3. Add r_k to the current subspace.

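A hedged sketch of these three steps for a real matrix whose target eigenpair is real, here targeting the largest eigenvalue (the Ritz extraction is standard Rayleigh-Ritz, not spelled out on the slide):

```python
import numpy as np

def residual_arnoldi(A, u0, k):
    Q = (u0 / np.linalg.norm(u0))[:, None]   # current subspace basis
    mu, p = 0.0, Q[:, 0]
    for _ in range(k):
        B = Q.T @ (A @ Q)                    # Rayleigh-Ritz projection
        vals, vecs = np.linalg.eig(B)
        j = np.argmax(vals.real)             # step 1: pick target Ritz pair
        mu, y = vals[j].real, vecs[:, j].real
        p = Q @ y                            # approximate eigenvector
        r = A @ p - mu * p                   # step 2: its residual
        r -= Q @ (Q.T @ r)                   # step 3: orthogonalize, then add
        nr = np.linalg.norm(r)
        if nr < 1e-12:
            break
        Q = np.hstack([Q, (r / nr)[:, None]])
    return mu, p
```
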
Without Shift-invert
Use the residuals of the approximations to (λ_1, x_1); (λ_1, x_1) is called the target.
[Plot: residual norms of the dominant and subdominant eigenpairs versus iteration count.]

Residual Arnoldi + Shift-invert
Residual Arnoldi method + shift-invert:
1. Choose an approximation (µ_k, p_k) from the current subspace.
2. Compute the residual r_k = A p_k − µ_k p_k.
3. Add (A − σI)^{-1} r_k to the subspace.
Properties under inexact shift-invert:
1. The approximations to the target can converge.
2. Other approximations fail to converge.

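Relative to the sketch above, only the expansion vector changes. A hedged single-step sketch (the helper name is ours; lu comes from SciPy's lu_factor of A − σI, solved exactly here even though the method tolerates inexact solves):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def ra_si_expand(A, lu, Q):
    """One expansion step of residual Arnoldi + shift-invert:
    extract the target Ritz pair, then add (A - sigma I)^{-1} r."""
    B = Q.T @ (A @ Q)
    vals, vecs = np.linalg.eig(B)
    j = np.argmax(vals.real)                 # target Ritz pair (here: largest)
    mu, y = vals[j].real, vecs[:, j].real
    p = Q @ y
    r = A @ p - mu * p                       # residual of the approximation
    s = lu_solve(lu, r)                      # the shift-inverted residual
    s -= Q @ (Q.T @ s)                       # orthogonalize against the basis
    return mu, p, np.hstack([Q, (s / np.linalg.norm(s))[:, None]])

# Usage sketch: lu = lu_factor(A - sigma * np.eye(A.shape[0]))
# then repeat:  mu, p, Q = ra_si_expand(A, lu, Q)
```
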
With Inexact Shift-invert
Linear systems are solved to an accuracy of 10^{-3} in every iteration; the target is (λ_1, x_1).
[Plot: error of the best approximation versus subspace dimension for the dominant and subdominant eigenpairs.]

Change Target
The target is changed to (λ_2, x_2) partway through the iteration.
[Plot: error of the best approximation versus subspace dimension for the dominant and subdominant eigenpairs.]

Performance Comparison
For a test matrix, we want to find the 6 smallest eigenvalues and their eigenvectors.
- Compared against the eigensolver ARPACK.
- Shift-invert enhancement is applied with shift 0, using GMRES as the linear solver.
- Performance is measured by the total number of matrix-vector multiplications: (no. of inner iterations) × (no. of outer iterations). Inner iterations use matrix-vector multiplications to solve the linear systems; outer iterations use the inner results to solve the eigenproblem.

Convergence Results
Red lines: ARPACK. Blue dots: residual Arnoldi method. The residual Arnoldi method is 2.25 times faster than ARPACK and uses fewer than half as many matrix-vector multiplications.

Conclusion
- Eigenvalue problems are everywhere.
- There is no single best algorithm for solving large eigenproblems.
- The residual Arnoldi method is good for computing interior or clustered eigenvalues, for applying shift-invert enhancement, and for parallelization.