Latent Semantic Models
Reference: Introduction to Information Retrieval by C. Manning, P. Raghavan, H. Schütze
Vector Space Model: Pros
- Automatic selection of index terms
- Partial matching of queries and documents (dealing with the case where no document contains all search terms)
- Ranking according to similarity score (dealing with large result sets)
- Term weighting schemes (improve retrieval performance)
- Various extensions: document clustering, relevance feedback (modifying the query vector)
- Geometric foundation
Problems with Lexical Semantics
- Ambiguity and association in natural language.
- Polysemy: words often have a multitude of meanings and different types of usage (more severe in very heterogeneous collections).
- The vector space model is unable to discriminate between different meanings of the same word.
Problems with Lexical Semantics
- Synonymy: different terms may have an identical or a similar meaning (weaker: words indicating the same topic).
- No associations between words are made in the vector space representation.
Polysemy and Context
- Document similarity on the single-word level: polysemy and context.
[Figure: the word "saturn" contributes to similarity if both documents use it in its first meaning (planet: ring, jupiter, space, voyager), but not if one uses its second meaning (car: company, dodge, ford).]
Latent Semantic Indexing (LSI)
- Perform a low-rank approximation of the document-term matrix (typical rank 100-300).
- General idea:
  - Map documents (and terms) to a low-dimensional representation.
  - Design the mapping such that the low-dimensional space reflects semantic associations (latent semantic space).
  - Compute document similarity based on the inner product in this latent semantic space. (A minimal end-to-end sketch follows below.)
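As a rough illustration of this pipeline, here is a minimal NumPy sketch; the toy 4x4 term-document counts, the term labels, and the choice k = 2 are illustrative assumptions, not from the slides.

```python
import numpy as np

# Toy term-document count matrix (terms x docs); values are illustrative only.
A = np.array([
    [1, 1, 0, 0],   # "ship"
    [0, 1, 1, 0],   # "boat"
    [0, 0, 0, 1],   # "car"
    [0, 0, 1, 1],   # "truck"
], dtype=float)

k = 2                                    # target dimensionality of the latent space
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vecs = np.diag(s[:k]) @ Vt[:k, :]    # each column: a document in the k-dim latent space

# Document similarity = inner product in the latent semantic space.
sim = doc_vecs.T @ doc_vecs
print(np.round(sim, 2))
```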
Goals of LSI
- Similar terms map to similar locations in the low-dimensional space.
- Noise reduction by dimension reduction.
Latent Semantic Analysis
[Figure: latent semantic space, illustrating example courtesy of Susan Dumais.]
Linear Algebra Background
Eigenvalues & Eigenvectors
- Eigenvectors (for a square m x m matrix S): a (right) eigenvector v with eigenvalue λ satisfies Sv = λv, with v ≠ 0.
- How many eigenvalues are there at most? (S − λI)v = 0 only has a non-zero solution v if det(S − λI) = 0.
- This is an m-th order equation in λ, which can have at most m distinct solutions (the roots of the characteristic polynomial); they can be complex even though S is real.
Matrix-vector multiplication
- S = diag(30, 20, 1) has eigenvalues 30, 20, 1 with corresponding eigenvectors v_1 = (1, 0, 0)ᵀ, v_2 = (0, 1, 0)ᵀ, v_3 = (0, 0, 1)ᵀ.
- On each eigenvector, S acts as a multiple of the identity matrix, but as a (usually) different multiple on each.
- Any vector (say x = (2, 4, 6)ᵀ) can be viewed as a combination of the eigenvectors: x = 2v_1 + 4v_2 + 6v_3.
Matrix-vector multiplication
- Thus a matrix-vector multiplication such as Sx (S, x as on the previous slide) can be rewritten in terms of the eigenvalues/eigenvectors:
  Sx = S(2v_1 + 4v_2 + 6v_3)
     = 2Sv_1 + 4Sv_2 + 6Sv_3
     = 2λ_1 v_1 + 4λ_2 v_2 + 6λ_3 v_3
     = 60v_1 + 80v_2 + 6v_3
- Even though x is an arbitrary vector, the action of S on x is determined by the eigenvalues/eigenvectors.
Matrix-vector multiplication
- Suggestion: the effect of small eigenvalues is small.
- If we ignored the smallest eigenvalue (1), then instead of (60, 80, 6)ᵀ we would get (60, 80, 0)ᵀ.
- These vectors are similar (in cosine similarity, etc.); a numerical check follows below.
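A quick numerical check of this suggestion with NumPy (a minimal sketch using the S and x from the previous slides):

```python
import numpy as np

S = np.diag([30.0, 20.0, 1.0])
x = np.array([2.0, 4.0, 6.0])

exact = S @ x                                 # (60, 80, 6)
truncated = np.diag([30.0, 20.0, 0.0]) @ x    # smallest eigenvalue zeroed: (60, 80, 0)

cos = exact @ truncated / (np.linalg.norm(exact) * np.linalg.norm(truncated))
print(exact, truncated, round(cos, 4))        # cosine similarity ~ 0.998
```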
Left Eigenvectors
- In a similar fashion, the left eigenvectors of a square matrix C are the vectors y such that yᵀC = λyᵀ, where λ is the corresponding eigenvalue.
- Consider a square matrix S with (right) eigenvector v: Sv = λv.
- Recall that (AB)ᵀ = BᵀAᵀ; transposing gives vᵀSᵀ = λvᵀ.
- Therefore, a right eigenvector of S is a left eigenvector of Sᵀ, with the same eigenvalue.
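A small NumPy check of this fact (a minimal sketch; the 2x2 non-symmetric matrix is an illustrative assumption):

```python
import numpy as np

C = np.array([[2.0, 1.0],
              [0.0, 3.0]])             # illustrative non-symmetric matrix

# Right eigenvectors of C^T are left eigenvectors of C, with the same eigenvalues.
lams, W = np.linalg.eig(C.T)
for lam, y in zip(lams, W.T):
    assert np.allclose(y @ C, lam * y)  # y^T C = lambda y^T
    print(f"lambda = {lam:.1f}, left eigenvector = {y}")
```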
Eigenvalues & Eigenvectors
- For a symmetric matrix S, eigenvectors for distinct eigenvalues are orthogonal:
  if Sv_1 = λ_1 v_1 and Sv_2 = λ_2 v_2 with λ_1 ≠ λ_2, then v_1 · v_2 = v_1ᵀ v_2 = 0.
Eigenvalues & Eigenvectors
- All eigenvalues of a real symmetric matrix are real:
  if S = Sᵀ and det(S − λI) = 0, then λ ∈ ℝ.
- All eigenvalues of a positive semidefinite matrix are non-negative:
  if wᵀSw ≥ 0 for all w ∈ ℝⁿ, then Sv = λv implies λ ≥ 0.
- For any matrix A, AᵀA is positive semidefinite.
Example
- Let S = [[2, 1], [1, 2]] (real, symmetric).
- Then det(S − λI) = (2 − λ)² − 1 = 0.
- The eigenvalues are 1 and 3 (non-negative, real); plug these values in and solve for the eigenvectors.
- The eigenvectors (1, −1)ᵀ and (1, 1)ᵀ are orthogonal (and real).
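The same example can be checked numerically; a minimal NumPy sketch:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lams, V = np.linalg.eig(S)
print(np.round(np.sort(lams), 3))   # eigenvalues: [1. 3.]

v1, v2 = V[:, 0], V[:, 1]           # eigenvectors are the columns of V
assert np.isrealobj(lams)           # real, since S is real symmetric
assert np.isclose(v1 @ v2, 0.0)     # orthogonal
```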
Eigen/diagonal Decomposition
- Let S be a square matrix with m linearly independent eigenvectors (a "non-defective" matrix).
- Theorem: there exists an eigen decomposition S = UΛU⁻¹ (cf. the matrix diagonalization theorem).
- The columns of U are eigenvectors of S.
- Λ is diagonal, and its diagonal elements are the eigenvalues of S.
- Unique for distinct eigenvalues.
Diagonal decomposition: why/how
- Let U have the eigenvectors as columns: U = [v_1 ... v_m].
- Then SU = S[v_1 ... v_m] = [Sv_1 ... Sv_m] = [λ_1 v_1 ... λ_m v_m] = [v_1 ... v_m] Λ = UΛ.
- Thus SU = UΛ, or U⁻¹SU = Λ, and S = UΛU⁻¹.
Diagonal decomposition example
- Recall S = [[2, 1], [1, 2]]; λ_1 = 1, λ_2 = 3.
- The eigenvectors (1, −1)ᵀ and (1, 1)ᵀ form U = [[1, 1], [−1, 1]].
- Inverting, we have U⁻¹ = [[1/2, −1/2], [1/2, 1/2]]. (Recall UU⁻¹ = I.)
- Then S = UΛU⁻¹ = [[1, 1], [−1, 1]] [[1, 0], [0, 3]] [[1/2, −1/2], [1/2, 1/2]].
Example continued
- Let's divide U (and multiply U⁻¹) by √2.
- Then S = QΛQᵀ with Q = [[1/√2, 1/√2], [−1/√2, 1/√2]] and Λ = [[1, 0], [0, 3]], where Q⁻¹ = Qᵀ.
- Why? Stay tuned...
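A quick NumPy verification of this orthogonal diagonalization (a minimal sketch):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

Q = np.array([[1.0,  1.0],
              [-1.0, 1.0]]) / np.sqrt(2.0)   # normalized eigenvector columns
Lam = np.diag([1.0, 3.0])

assert np.allclose(Q.T, np.linalg.inv(Q))    # Q is orthogonal: Q^-1 = Q^T
assert np.allclose(Q @ Lam @ Q.T, S)         # S = Q Lambda Q^T
print("orthogonal diagonalization verified")
```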
Symmetric Eigen Decomposition
- If S is a square symmetric matrix with m linearly independent eigenvectors:
- Theorem: there exists a (unique) eigen decomposition S = QΛQᵀ, where Q is orthogonal: Q⁻¹ = Qᵀ.
- The columns v_i of Q are normalized eigenvectors, and they are mutually orthogonal (an orthonormal basis):
  v_iᵀ v_j = 0 if i ≠ j, and v_iᵀ v_i = 1.
Time out!
- Strang's Applied Mathematics is a good reference.
- What do these matrices have to do with text?
- Recall the M × N term-document matrices...
- But everything so far needs square matrices, so...
Singular Value Decomposition
- For an M × N matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD) as follows:
  A = UΣVᵀ, where U is M × M, Σ is M × N, and V is N × N.
- The columns of U are orthonormal eigenvectors of AAᵀ.
- The columns of V are orthonormal eigenvectors of AᵀA.
- The eigenvalues λ_1, ..., λ_r of AAᵀ are also the eigenvalues of AᵀA.
- σ_i = √λ_i and Σ = diag(σ_1, ..., σ_r): the singular values.
- Recall that the rank of a matrix is the maximum number of linearly independent rows or columns.
Singular Value Decomposition
- Multiply A by its transpose: AAᵀ = (UΣVᵀ)(UΣVᵀ)ᵀ = UΣVᵀVΣᵀUᵀ = UΣΣᵀUᵀ = UΣ²Uᵀ (here Σ² is shorthand for the M × M diagonal matrix ΣΣᵀ).
- The left-hand side is a square symmetric matrix, and the right-hand side is exactly its symmetric diagonal decomposition.
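A small numerical sanity check of this relationship (a sketch; the random 4x3 matrix is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))        # arbitrary M x N matrix

U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Eigenvalues of A A^T are the squared singular values (padded with zeros up to M).
eig_AAt = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]
sigma_sq = np.concatenate([s**2, np.zeros(A.shape[0] - len(s))])
assert np.allclose(eig_AAt, sigma_sq)

# A A^T = U (Sigma Sigma^T) U^T, its symmetric diagonal decomposition.
assert np.allclose(A @ A.T, U @ np.diag(sigma_sq) @ U.T)
print("A A^T = U Sigma^2 U^T verified")
```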
Singular Value Decomposition
[Figure: illustration of SVD dimensions and sparseness.]
SVD example
- Let A = [[1, 1], [0, 1], [1, 0]]; thus M = 3, N = 2.
- Its SVD is
  U = [[2/√6, 0, 1/√3], [1/√6, −1/√2, −1/√3], [1/√6, 1/√2, −1/√3]],
  Σ = [[√3, 0], [0, 1], [0, 0]],
  Vᵀ = [[1/√2, 1/√2], [1/√2, −1/√2]].
- Typically, the singular values are arranged in decreasing order.
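The same decomposition can be obtained with NumPy (a minimal sketch; an SVD is only unique up to sign flips of matching singular-vector pairs, so the computed U and Vᵀ may differ in sign from the slide):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=True)
print("singular values:", s)            # approximately [sqrt(3), 1]
print("U =\n", np.round(U, 3))
print("V^T =\n", np.round(Vt, 3))

# Reconstruct A from the factors (Sigma must be padded to 3 x 2).
Sigma = np.zeros_like(A)
Sigma[:2, :2] = np.diag(s)
assert np.allclose(U @ Sigma @ Vt, A)
```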
Low-rank Approximation
- SVD can be used to compute optimal low-rank approximations.
- Approximation problem: find A_k of rank k to achieve
  A_k = argmin over X with rank(X) = k of ‖A − X‖_F   (Frobenius norm).
- A_k and X are both M × N matrices.
- A_k is the best rank-k approximation of A. Typically, we want k << r.
Low-rank Approximation
- Solution via SVD: A_k = U diag(σ_1, ..., σ_k, 0, ..., 0) Vᵀ, i.e., set the smallest r − k singular values to zero.
- In column notation: A_k = Σ_{i=1..k} σ_i u_i v_iᵀ, a sum of k rank-1 matrices.
- A minimal code sketch follows below.
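A minimal sketch of this truncation in NumPy, reusing the 3x2 example matrix (the helper name low_rank_approx is ours, not from the slides):

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A (in Frobenius norm) via the truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep only the k largest singular values/vectors: sum of k rank-1 matrices.
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])
A1 = low_rank_approx(A, 1)
print(np.round(A1, 3), "rank =", np.linalg.matrix_rank(A1))
```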
Reduced SVD
- If we retain only the k largest singular values (and the corresponding columns of U and rows of Vᵀ), Σ becomes k × k, U becomes M × k, and Vᵀ becomes k × N.
- This is referred to as the reduced SVD.
Reduced SVD
- It is the convenient (space-saving) and usual form for computational applications.
- It's what MATLAB gives you.
Approximation error
- How good (bad) is this approximation?
- It's the best possible, measured by the Frobenius norm of the error:
  min over X with rank(X) = k of ‖A − X‖_F = ‖A − A_k‖_F = √(σ_{k+1}² + ... + σ_r²),
  where the σ_i are ordered such that σ_i ≥ σ_{i+1}. (In the spectral 2-norm, the error is σ_{k+1}.)
- This suggests why the Frobenius error drops as k is increased.
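A quick numerical check of the error formula (a sketch; the random 6x4 matrix and k = 2 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))           # arbitrary matrix of rank 4

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

err = np.linalg.norm(A - Ak, "fro")
predicted = np.sqrt(np.sum(s[k:] ** 2))   # sqrt(sigma_{k+1}^2 + ... + sigma_r^2)
assert np.allclose(err, predicted)
print(f"Frobenius error = {err:.4f} = sqrt of sum of discarded sigma^2")
```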
SVD Low-rank approximation
- Whereas the term-doc matrix A may have M = 50,000 and N = 10 million (and rank close to 50,000),
- we can construct an approximation A_100 with rank 100.
- Of all rank-100 matrices, it would have the lowest Frobenius error.
- Great... but why would we?? Answer: Latent Semantic Indexing.
- C. Eckart, G. Young, The approximation of a matrix by another of lower rank. Psychometrika, 1, 211-218, 1936.
Latent Semantic Indexing via the SVD
What it is
- From the term-doc matrix A, we compute the approximation A_k.
- There is a row for each term and a column for each doc in A_k.
- Thus docs live in a space of k << r dimensions.
- These dimensions are not the original axes.
Performing the maps
- Each row and column of A gets mapped into the k-dimensional LSI space by the SVD.
- A query q is also mapped into this space, by q_k = Σ_k⁻¹ U_kᵀ q.
- Note: the mapped query is NOT a sparse vector.
Performing the maps
- Conduct the similarity calculation in the low-dimensional (k) space.
- Claim: this is not only the mapping with the best (Frobenius error) approximation to A, but it in fact improves retrieval.
- A minimal end-to-end sketch of the query mapping follows below.
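Here is a minimal end-to-end sketch of the query mapping and similarity calculation in NumPy; the toy term-document counts, term labels, query, and k = 2 are illustrative assumptions, not from the slides.

```python
import numpy as np

# Toy term-document count matrix (terms x docs); values are illustrative only.
A = np.array([
    [1.0, 0.0, 1.0, 0.0],   # "ship"
    [1.0, 1.0, 0.0, 0.0],   # "boat"
    [0.0, 0.0, 0.0, 1.0],   # "car"
    [0.0, 1.0, 0.0, 1.0],   # "truck"
])

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, Sk = U[:, :k], np.diag(s[:k])

def to_lsi(x):
    """Map a term-space vector (or matrix of column vectors) into the k-dim LSI space."""
    return np.linalg.inv(Sk) @ Uk.T @ x     # Sigma_k^{-1} U_k^T x, as on the slide

docs_k = to_lsi(A)                          # document columns land (approx.) on columns of V_k^T
q = np.array([1.0, 1.0, 0.0, 0.0])          # query containing "ship" and "boat"
qk = to_lsi(q)                              # folded-in query: a dense k-dim vector

# Cosine similarity between the query and every document in the latent space
# (the small constant guards against division by zero).
sims = (qk @ docs_k) / (np.linalg.norm(qk) * np.linalg.norm(docs_k, axis=0) + 1e-12)
print(np.round(sims, 3))
```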
Empirical evidence
- Experiments on TREC 1/2/3 (Dumais).
- Lanczos SVD code (available on netlib), due to Berry, was used in these experiments. Running times were quite long.
- Dimensions: various values in the 250-350 range reported. Reducing k improves recall (under 200 reported as unsatisfactory).
- Generally, expect recall to improve.
Empirical evidence
- Precision at or above median TREC precision.
- Top scorer on almost 20% of TREC topics.
- Slightly better on average than straight vector spaces.
- Effect of dimensionality:

  Dimensions | Precision
  250        | 0.367
  300        | 0.371
  346        | 0.374
Failure modes
- Negated phrases: TREC topics sometimes negate certain query terms/phrases; this precludes automatic conversion of topics to the latent semantic space.
- Boolean queries: as usual, the free-text/vector-space syntax of LSI queries precludes (say) "Find any doc having to do with the following 5 companies".
LSI has many other applications
- In many settings in pattern recognition and retrieval, we have a feature-object matrix.
  - For text, the terms are features and the docs are objects.
  - It could instead be opinions and users.
- This matrix may be redundant in dimensionality; we can work with a low-rank approximation.
- If entries are missing (e.g., users' opinions), they can be recovered if the dimensionality is low.
- Powerful general analytical technique: a close, principled analog to clustering methods.