Latent Semantic Models Reference: Introduction to Information Retrieval by C. Manning, P. Raghavan, H. Schutze 1

Vector Space Model: Pros. Automatic selection of index terms. Partial matching of queries and documents (dealing with the case where no document contains all search terms). Ranking according to similarity score (dealing with large result sets). Term weighting schemes (improve retrieval performance). Various extensions: document clustering, relevance feedback (modifying the query vector). Geometric foundation. 2

Problems with Lexical Semantics. Ambiguity and association in natural language. Polysemy: words often have a multitude of meanings and different types of usage (more severe in very heterogeneous collections). The vector space model is unable to discriminate between different meanings of the same word. 3

Problems with Lexical Semantics. Synonymy: different terms may have an identical or a similar meaning (weaker: words indicating the same topic). No associations between words are made in the vector space representation. 4

Polysemy and Context. Document similarity on the single-word level: polysemy and context. Example: a document mentioning "... planet ... saturn ..." contributes to similarity if the word is used in meaning 1 (ring, jupiter, space, voyager), but not if it is used in meaning 2 (car, company, dodge, ford). 5

Latent Semantic Indexing (LSI). Perform a low-rank approximation of the document-term matrix (typical rank 100-300). General idea: map documents (and terms) to a low-dimensional representation; design the mapping such that the low-dimensional space reflects semantic associations (latent semantic space); compute document similarity based on the inner product in this latent semantic space. 6

Goals of LSI. Similar terms map to similar locations in the low-dimensional space. Noise reduction by dimension reduction. 7

Latent Semantic Analysis. Latent semantic space: illustrative example (courtesy of Susan Dumais). 8

Linear Algebra Background 9

Eigenvalues & Eigenvectors. Eigenvectors (for a square m × m matrix S): Sv = λv, where v is a (right) eigenvector and λ the corresponding eigenvalue. How many eigenvalues are there at most? The equation Sv = λv, i.e. (S − λI)v = 0, only has a non-zero solution if |S − λI| = 0. This is an m-th order equation in λ which can have at most m distinct solutions (roots of the characteristic polynomial); they can be complex even though S is real. 10
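
A tiny numpy illustration of the last point (the matrix here is our own example, not from the slides): a real, non-symmetric matrix can have complex eigenvalues.

import numpy as np

# 90-degree rotation matrix: real entries, but eigenvalues are +i and -i
S = np.array([[0.0, -1.0],
              [1.0,  0.0]])
vals, vecs = np.linalg.eig(S)
print(vals)   # [0.+1.j  0.-1.j]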

Matrix-vector multiplication. S = diag(30, 20, 1) has eigenvalues 30, 20, 1 with corresponding eigenvectors v1 = (1, 0, 0)^T, v2 = (0, 1, 0)^T, v3 = (0, 0, 1)^T. On each eigenvector, S acts as a multiple of the identity matrix, but as a (usually) different multiple on each. Any vector (say x = (2, 4, 6)^T) can be viewed as a combination of the eigenvectors: x = 2v1 + 4v2 + 6v3. 11

Matrix-vector multiplication. Thus a matrix-vector multiplication such as Sx (S, x as on the previous slide) can be rewritten in terms of the eigenvalues/vectors: Sx = S(2v1 + 4v2 + 6v3) = 2Sv1 + 4Sv2 + 6Sv3 = 2λ1v1 + 4λ2v2 + 6λ3v3 = 60v1 + 80v2 + 6v3. Even though x is an arbitrary vector, the action of S on x is determined by the eigenvalues/eigenvectors. 12

Matrix-vector multiplication. Suggestion: the effect of small eigenvalues is small. If we ignored the smallest eigenvalue (1), then instead of (60, 80, 6)^T we would get (60, 80, 0)^T. These vectors are similar (in cosine similarity, etc.). 13
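
A minimal numpy sketch of the example above (variable names such as S_truncated are ours): the action of S on x is a weighted sum of the eigenvectors, and dropping the smallest eigenvalue barely changes the direction of Sx.

import numpy as np

S = np.diag([30.0, 20.0, 1.0])
x = np.array([2.0, 4.0, 6.0])             # x = 2*v1 + 4*v2 + 6*v3 in the eigenbasis

Sx = S @ x                                # exact result: [60, 80, 6]
S_truncated = np.diag([30.0, 20.0, 0.0])  # smallest eigenvalue ignored
Sx_approx = S_truncated @ x               # approximate result: [60, 80, 0]

cosine = Sx @ Sx_approx / (np.linalg.norm(Sx) * np.linalg.norm(Sx_approx))
print(Sx, Sx_approx, cosine)              # cosine similarity is very close to 1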

Left Eigenvectors. In a similar fashion, the left eigenvectors of a square matrix C are the vectors y such that y^T C = λ y^T, where λ is the corresponding eigenvalue. Consider a square matrix S with eigenvector v: Sv = λv. Recall that (AB)^T = B^T A^T, so v^T S^T = λ v^T. Therefore, a right eigenvector of S is a left eigenvector of the transposed matrix S^T, with the same eigenvalue. 14
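
A quick numpy check of this statement (the 2 × 2 matrix is an arbitrary example of ours): a right eigenvector of S^T is a left eigenvector of S with the same eigenvalue.

import numpy as np

S = np.array([[2.0, 1.0],
              [0.0, 3.0]])
vals, vecs = np.linalg.eig(S.T)           # right eigenvectors of S^T
y = vecs[:, 0]
print(np.allclose(y @ S, vals[0] * y))    # True: y^T S = lambda y^T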

Eigenvalues & Eigenvectors. S v_{1,2} = λ_{1,2} v_{1,2}. For a symmetric matrix S, eigenvectors for distinct eigenvalues are orthogonal: for λ1 ≠ λ2, v1^T v2 = 0, i.e. v1 ⊥ v2. 15

Eigenvalues & Eigenvectors. All eigenvalues of a real symmetric matrix are real: for complex λ, if |S − λI| = 0 and S = S^T, then λ is real. All eigenvalues of a positive semidefinite matrix are non-negative: if w^T S w ≥ 0 for all w in R^n, then Sv = λv implies λ ≥ 0. For any matrix A, A^T A is positive semidefinite. 16
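
A one-line numpy sanity check of the last claim, on a random matrix of our choosing: the eigenvalues of A^T A are non-negative, so A^T A is positive semidefinite.

import numpy as np

A = np.random.default_rng(0).standard_normal((4, 3))
print(np.linalg.eigvalsh(A.T @ A))   # all values >= 0 (up to roundoff)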

Example. Let S = [[2, 1], [1, 2]] (real, symmetric). Then |S − λI| = (2 − λ)^2 − 1 = 0, so the eigenvalues are 1 and 3 (non-negative, real). Plug in these values and solve for the eigenvectors: the eigenvectors (1, −1)^T and (1, 1)^T are orthogonal (and real). 17
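
The same example checked with numpy (eigh is the routine for symmetric matrices and returns real eigenvalues with orthonormal eigenvectors):

import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eigh(S)
print(vals)              # [1. 3.]
print(vecs.T @ vecs)     # identity matrix: the eigenvectors are orthonormal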

Eigen/diagonal Decomposition. Let S be a square matrix with m linearly independent eigenvectors (a non-defective matrix). Theorem: there exists an eigendecomposition S = UΛU^{-1} (cf. the matrix diagonalization theorem), where the columns of U are eigenvectors of S and Λ is diagonal with the eigenvalues of S as its diagonal elements. The decomposition is unique for distinct eigenvalues. 18

Diagonal decomposition: why/how. Let U have the eigenvectors as columns: U = [v1 ... vn]. Then SU = [Sv1 ... Svn] = [λ1v1 ... λnvn] = UΛ. Thus SU = UΛ, or U^{-1}SU = Λ, and S = UΛU^{-1}. 19

Diagonal decomposition example. Recall S = [[2, 1], [1, 2]]; λ1 = 1, λ2 = 3. The eigenvectors (1, −1)^T and (1, 1)^T form U = [[1, 1], [−1, 1]]. Inverting, we have U^{-1} = [[1/2, −1/2], [1/2, 1/2]] (recall UU^{-1} = I). Then S = UΛU^{-1} = [[1, 1], [−1, 1]] [[1, 0], [0, 3]] [[1/2, −1/2], [1/2, 1/2]]. 20

Example continued. Let's divide U (and multiply U^{-1}) by √2. Then S = QΛQ^T with Q = [[1/√2, 1/√2], [−1/√2, 1/√2]] and Λ = [[1, 0], [0, 3]], where Q^{-1} = Q^T. Why? Stay tuned. 21
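
A short numpy sketch of slides 20-21: rebuilding S from Q and Λ and confirming that Q is orthogonal (Q^{-1} = Q^T).

import numpy as np

Q = np.array([[1.0, 1.0],
              [-1.0, 1.0]]) / np.sqrt(2.0)   # normalized eigenvectors as columns
Lam = np.diag([1.0, 3.0])

print(np.allclose(Q.T @ Q, np.eye(2)))       # True: Q^{-1} = Q^T
print(Q @ Lam @ Q.T)                         # recovers [[2, 1], [1, 2]]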

Symmetric Eigen Decomposition. If S is a square symmetric matrix with m linearly independent eigenvectors, Theorem: there exists a (unique) eigendecomposition S = QΛQ^T, where Q is orthogonal: Q^{-1} = Q^T. Each column v_i of Q is a normalized eigenvector, and the columns are mutually orthogonal (an orthonormal basis): v_i^T v_j = 1 if i = j and 0 if i ≠ j. 22

Time out! Strang's Applied Mathematics is a good reference. What do these matrices have to do with text? Recall M × N term-document matrices. But everything so far needs square matrices, so ... 23

Singular Value Decomposition. For an M × N matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD) as follows: A = UΣV^T, where U is M × M, Σ is M × N, and V is N × N. The columns of U are normalized orthogonal eigenvectors of AA^T. The columns of V are normalized orthogonal eigenvectors of A^T A. The eigenvalues λ1 ... λr of AA^T are also the eigenvalues of A^T A, and σ_i = √λ_i; Σ = diag(σ1 ... σr) holds the singular values. Recall that the rank of a matrix is the maximum number of linearly independent rows or columns. 24

Singular Value Decomposition. Multiplying A by its transpose: AA^T = (UΣV^T)(UΣV^T)^T = UΣV^T VΣ^T U^T = UΣΣ^T U^T = UΣ^2 U^T (with Σ^2 the diagonal matrix of squared singular values). Note that the left-hand side is a square symmetric matrix, and the right-hand side is its symmetric diagonal decomposition. 25
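
A numerical illustration of this relationship on an arbitrary random matrix (not from the slides): the squared singular values of A are the non-zero eigenvalues of AA^T.

import numpy as np

A = np.random.default_rng(0).standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A)
eigvals = np.linalg.eigvalsh(A @ A.T)     # eigenvalues of the symmetric matrix A A^T

print(np.sort(s ** 2))                    # squared singular values...
print(np.sort(eigvals)[-3:])              # ...match the largest eigenvalues of A A^T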

Singular Value Decomposition Illustration of SVD dimensions and sparseness 26

SVD example. Let A = [[1, −1], [0, 1], [1, 0]]; thus M = 3, N = 2. Its SVD is A = UΣV^T with U = [[2/√6, 0, 1/√3], [−1/√6, 1/√2, 1/√3], [1/√6, 1/√2, −1/√3]], Σ = [[√3, 0], [0, 1], [0, 0]], and V^T = [[1/√2, −1/√2], [1/√2, 1/√2]]. Typically, the singular values are arranged in decreasing order. 27
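
The same example can be verified with numpy (the signs of individual singular vectors may differ from the slide, since they are only determined up to sign):

import numpy as np

A = np.array([[1.0, -1.0],
              [0.0,  1.0],
              [1.0,  0.0]])
U, s, Vt = np.linalg.svd(A)
print(s)                                        # [1.732..., 1.0], i.e. sqrt(3) and 1
print(np.allclose((U[:, :2] * s) @ Vt, A))      # True: U Sigma V^T reproduces A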

Low-rank Approximation. SVD can be used to compute optimal low-rank approximations. Approximation problem: find A_k of rank k to achieve min_{X: rank(X)=k} ||A − X||_F (Frobenius norm). A_k is the best rank-k approximation of A; A_k and X are both M × N matrices. Typically, we want k << r. 28

Low-rank Approximation. Solution via SVD: A_k = U diag(σ1, ..., σk, 0, ..., 0) V^T, i.e. set the smallest r − k singular values to zero. In column notation, A_k = Σ_{i=1..k} σ_i u_i v_i^T, a sum of k rank-1 matrices. 29
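
A minimal numpy helper (our own sketch, not from the slides) that implements this truncation:

import numpy as np

def low_rank_approx(A, k):
    # Best rank-k approximation of A in the Frobenius norm (Eckart-Young).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]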

Reduced SVD. Keep only the k largest singular values, the corresponding k columns of U, and the corresponding k rows of V^T; this is referred to as the reduced SVD. 30

Reduced SVD. It is the convenient (space-saving) and usual form for computational applications. It's what Matlab gives you. 31
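
In numpy, the reduced form is what full_matrices=False returns; a small shape check (the random matrix is just a placeholder):

import numpy as np

A = np.random.default_rng(0).standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A)                            # full SVD: U is 5x5
Ur, sr, Vtr = np.linalg.svd(A, full_matrices=False)    # reduced SVD: Ur is 5x3
print(U.shape, Ur.shape)                               # (5, 5) (5, 3)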

Approximation error. How good (bad) is this approximation? It's the best possible, measured by the Frobenius norm of the error: min_{X: rank(X)=k} ||A − X||_F = ||A − A_k||_F = sqrt(σ_{k+1}^2 + ... + σ_r^2), where the σ_i are ordered such that σ_i ≥ σ_{i+1}. This suggests why the Frobenius error drops as k is increased. 32
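
A numerical spot check of this error formula on a random matrix of our choosing:

import numpy as np

A = np.random.default_rng(1).standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.linalg.norm(A - A_k, "fro"))     # equals...
print(np.sqrt(np.sum(s[k:] ** 2)))        # ...the norm of the discarded singular values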

SVD Low-rank approximation. Whereas the term-doc matrix A may have M = 50000, N = 10 million (and rank close to 50000), we can construct an approximation A_100 with rank 100. Of all rank-100 matrices, it would have the lowest Frobenius error. Great, but why would we?? Answer: Latent Semantic Indexing. C. Eckart, G. Young, The approximation of a matrix by another of lower rank. Psychometrika, 1, 211-218, 1936. 33

Latent Semantic Indexing via the SVD 34

What it is. From the term-doc matrix A, we compute the approximation A_k. There is a row for each term and a column for each doc in A_k. Thus docs live in a space of k << r dimensions. These dimensions are not the original axes. 35

Performing the maps. Each row and column of A gets mapped into the k-dimensional LSI space by the SVD. A query q is also mapped into this space, by q_k = Σ_k^{-1} U_k^T q. As a result, the query is NOT a sparse vector. 36

Performing the maps. Conduct the similarity calculation in the low-dimensional space (k). Claim: this is not only the mapping with the best (Frobenius-error) approximation to A, but it in fact improves retrieval. 37
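
An end-to-end sketch in numpy: the toy term-document matrix, the query, and the choice k = 2 below are made up for illustration; the query mapping q_k = Σ_k^{-1} U_k^T q follows slide 36, and documents are compared to the mapped query by cosine similarity in the k-dimensional space.

import numpy as np

# Toy term-document matrix: rows = terms, columns = documents (hypothetical counts)
A = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

docs_k = Vt_k.T                                 # each row: a document in the LSI space
q = np.array([1, 0, 1, 0, 0], dtype=float)      # query containing terms 0 and 2
q_k = np.linalg.inv(S_k) @ U_k.T @ q            # map the query into the same space

sims = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q_k))
print(np.argsort(-sims))                        # documents ranked by similarity to q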

Empirical evidence. Experiments on TREC 1/2/3 (Dumais). Lanczos SVD code (available on netlib), due to Berry, was used in these experiments; running times quite long. Dimensions: various values in the range 250-350 reported. Reducing k improves recall (under 200 reported unsatisfactory). Generally expect recall to improve. 38

Empirical evidence. Precision at or above the median TREC precision; top scorer on almost 20% of TREC topics. Slightly better on average than straight vector spaces. Effect of dimensionality: 250 dimensions gives precision 0.367, 300 gives 0.371, 346 gives 0.374. 39

Failure modes. Negated phrases: TREC topics sometimes negate certain query terms/phrases; this precludes automatic conversion of topics to the latent semantic space. Boolean queries: as usual, the free-text/vector-space syntax of LSI queries precludes, say, "Find any doc having to do with the following 5 companies". 40

LSI has many other applications. In many settings in pattern recognition and retrieval, we have a feature-object matrix. For text, the terms are features and the docs are objects; it could equally be opinions and users. This matrix may be redundant in dimensionality, and we can work with a low-rank approximation. If entries are missing (e.g., users' opinions), we can recover them if the dimensionality is low. It is a powerful general analytical technique and a close, principled analog to clustering methods. 41