Assignment 3. Latent Semantic Indexing

Gagan Bansal (2003CS10162, Group 2)
Pawan Jain (2003CS10177, Group 1)

OVERVIEW

Latent Semantic Indexing (LSI) treats documents that have many words in common as semantically close, and documents with few words in common as semantically distant. When we search an LSI-indexed database, it looks at the similarity values it has computed for every content word and returns the documents it considers the best fit for the query. Because two documents may be semantically very close even if they do not share a particular keyword, LSI does not require an exact match to return useful results. Where a plain keyword search fails when there is no exact match, LSI will often return relevant documents that do not contain the keyword at all.

METHOD

Terms and documents are represented as a matrix A with one row per term and one column per document. Each cell A(i,j) holds the frequency of term i in document j.

Term-Document Matrix:
Number of rows = number of terms
Number of columns = number of documents

Frequency matrix: each row gives the frequency of the corresponding term in each document (columns are the documents D1-D17 in the order listed below):

algorithms      0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0
application     0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1
delay           0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0
differential    0 0 0 1 0 0 0 1 0 1 1 1 1 1 1 0 0
equations       1 1 0 1 0 0 0 1 0 1 1 1 1 1 1 0 0
implementation  0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0
integral        1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
introduction    0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0
methods         0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0
nonlinear       0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0
ordinary        0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0
oscillation     0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0
partial         0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
problem         0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0
systems         0 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0
theory          0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 0 1

Documents:
D1: A Course on Integral Equations
D2: Attractors for Semigroups and Evolution Equations
D3: Automatic Differentiation of Algorithms: Theory, Implementation and Application
D4: Geometrical Aspects of Partial Differential Equations
D5: Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra
D6: Introduction to Hamiltonian Dynamical Systems and the N-Body Problem
D7: Knapsack Problems: Algorithms and Computer Implementations
D8: Methods of Solving Singular Systems of Ordinary Differential Equations
D9: Nonlinear Systems
D10: Ordinary Differential Equations
D11: Oscillation Theory for Neutral Differential Equations with Delay
D12: Oscillation Theory of Delay Differential Equations
D13: Pseudodifferential Operators and Nonlinear Partial Differential Equations
D14: Sinc Methods for Quadrature and Differential Equations
D15: Stability of Stochastic Differential Equations with Respect to Semi-Martingales
D16: The Boundary Integral Approach to Static and Dynamic Contact Problems
D17: The Double Mellin-Barnes Type Integrals and Their Applications to Convolution Theory

Terms: algorithms, application, delay, differential, equations, implementation, integral, introduction, methods, nonlinear, ordinary, oscillation, partial, problem, systems, theory
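The frequency matrix above can be reproduced directly from the titles and the term list. The following sketch (Python with NumPy, not part of the original assignment code) builds the term-document matrix by checking which terms occur in each title. Since each term occurs at most once per title, the entries come out as 0/1, matching the matrix shown; the simple substring/plural-stripping match is my own assumption, not something specified in the report.

import numpy as np

terms = ["algorithms", "application", "delay", "differential", "equations",
         "implementation", "integral", "introduction", "methods", "nonlinear",
         "ordinary", "oscillation", "partial", "problem", "systems", "theory"]

documents = [
    "A Course on Integral Equations",
    "Attractors for Semigroups and Evolution Equations",
    "Automatic Differentiation of Algorithms: Theory, Implementation and Application",
    "Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra",
    "Geometrical Aspects of Partial Differential Equations",
    "Introduction to Hamiltonian Dynamical Systems and the N-Body Problem",
    "Knapsack Problems: Algorithms and Computer Implementations",
    "Methods of Solving Singular Systems of Ordinary Differential Equations",
    "Nonlinear Systems",
    "Ordinary Differential Equations",
    "Oscillation Theory for Neutral Differential Equations with Delay",
    "Oscillation Theory of Delay Differential Equations",
    "Pseudodifferential Operators and Nonlinear Partial Differential Equations",
    "Sinc Methods for Quadrature and Differential Equations",
    "Stability of Stochastic Differential Equations with Respect to Semi-Martingales",
    "The Boundary Integral Approach to Static and Dynamic Contact Problems",
    "The Double Mellin-Barnes Type Integrals and Their Applications to Convolution Theory",
]
# Note: D4 and D5 above are "Geometrical Aspects..." and "Ideals, Varieties...";
# the list order must follow the D1-D17 numbering given in the report.
documents[3], documents[4] = documents[4], documents[3]

def build_term_document_matrix(terms, documents):
    """A(i, j) = 1 if term i occurs (ignoring case, allowing a plural 's') in document j."""
    A = np.zeros((len(terms), len(documents)))
    for j, doc in enumerate(documents):
        text = doc.lower()
        for i, term in enumerate(terms):
            if term in text or term.rstrip("s") in text:
                A[i, j] = 1
    return A

A = build_term_document_matrix(terms, documents)
print(A.shape)   # (16, 17)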

Now, instead of working with the frequency matrix, we compute the WEIGHT matrix.

Weighting: we define three kinds of weights.

1. Local weight: L(i,j) = log(1 + A(i,j)). This reduces the importance of a term that appears a large number of times in a single document.

2. Global weight: G(i) = log10(n / DF(i)), where n is the total number of documents and DF(i), the document frequency of the i-th term, is the number of documents that contain term i. This indicates the overall importance of each term in the complete document set.

3. Normalization factor: N(j) = 1 / sqrt( sum over all terms i of (G(i) * L(i,j))^2 ). This is a scaling step designed to keep large documents with many keywords from overwhelming smaller documents in the result set: smaller documents are given more importance and larger documents are penalized, so that every document carries comparable significance.

We then replace the entries of the matrix A by

A2(i,j) = L(i,j) * G(i) * N(j)

(A is the original frequency matrix, while A2 is the new weight matrix.) For example, if we searched for a query containing the term "Computer" on our departmental website, "Computer" would be of low importance: it occurs in almost all documents, so its global weight is low. This is why working with the weight matrix is better than working with the frequency matrix.
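As a sketch of the weighting step (Python with NumPy; not the original assignment code), the three weights can be combined as follows. The 1/sqrt exponent in the normalization factor is the reconstruction described above, and the helper name weight_matrix is illustrative.

import numpy as np

def weight_matrix(A):
    """Turn a term-document frequency matrix A into the weight matrix A2
    using the local/global/normalization scheme described above."""
    n_terms, n_docs = A.shape
    # Local weight: L(i,j) = log(1 + A(i,j))
    L = np.log(1.0 + A)
    # Global weight: G(i) = log10(n / DF(i)), DF(i) = number of documents containing term i
    DF = np.count_nonzero(A, axis=1)
    G = np.log10(n_docs / DF)
    # Unnormalized weights G(i) * L(i,j)
    W = L * G[:, np.newaxis]
    # Per-document normalization (assumes every document has at least one weighted term)
    N = 1.0 / np.sqrt(np.sum(W ** 2, axis=0))
    return W * N[np.newaxis, :]

# Example, continuing from the previous sketch:
# A2 = weight_matrix(A)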

After obtaining the weight matrix A2, we compute its singular value decomposition (SVD), because we want the best lower-rank approximation of A2. We also checked the rank approximation obtained from a QR factorization and saw that it gives a result close to A, but we continue with the SVD. We obtain

A2 = U Σ V^T

where U and V are orthogonal matrices and Σ is a diagonal matrix of singular values. Here U provides an orthogonal basis for the document space (the column space of A2, in which the document vectors live) and V provides an orthogonal basis for the term space (the row space).

Now we take a rank-k approximation of A2 by keeping the first k columns of U, the first k columns of V, and the leading k x k block of Σ. The rank has to be chosen appropriately, so that we neither lose important information nor keep more than necessary. So let U1 = the first k columns of U, V1 = the first k columns of V, and Σ1 = the leading k x k block of Σ. Then

X = U1 Σ1 V1^T

is the best possible rank-k approximation of A2 in the reduced space.

The dot product between two row vectors of X reflects the extent to which the corresponding terms have a similar pattern of occurrence across the set of documents. Hence, to compare all terms at once we use

X X^T = U1 Σ1^2 U1^T = (U1 Σ1)(U1 Σ1)^T.

Similarly, the dot product between two columns of X reflects the extent to which the corresponding documents are similar, so

X^T X = V1 Σ1^2 V1^T = (V1 Σ1)(V1 Σ1)^T.

Hence, when we take the rank approximation k = 2 and plot the terms, term i is given the coordinates (U1(i,1) * Σ1(1,1), U1(i,2) * Σ1(2,2)), i.e. the i-th row of U1 Σ1. Likewise, document j is plotted at (V1(j,1) * Σ1(1,1), V1(j,2) * Σ1(2,2)), the j-th row of V1 Σ1.

When we get a query Q, the query is represented as a document and projected into the reduced document space whose basis is U1; call the projected query Q1. The cosine of the angle between the query and each document i is then found from the dot product of the i-th row of V1 with the projected query Q1 (after normalizing both vectors). The documents whose cosine values lie above a threshold are the closest matches to the query.

(Note: if we build the reduced rank-k approximation using QR factorization instead, we take the first k columns of Q and the top k rows of R. We can do this because the diagonal entries of R come out sorted, just like the singular values in the SVD, although here the entries may be negative.)
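A minimal sketch of the SVD and rank-k truncation in Python/NumPy (again not the original code; numpy.linalg.svd already returns the singular values in decreasing order, and the function and variable names are mine):

import numpy as np

def rank_k_coordinates(A2, k=2):
    """Truncate the SVD of the weight matrix A2 to rank k and return the
    coordinates of terms (rows of U1*S1) and documents (rows of V1*S1)."""
    U, s, Vt = np.linalg.svd(A2, full_matrices=False)
    U1 = U[:, :k]            # first k columns of U
    S1 = np.diag(s[:k])      # leading k x k block of singular values
    V1 = Vt[:k, :].T         # first k columns of V (one row per document)
    term_coords = U1 @ S1    # one row per term
    doc_coords = V1 @ S1     # one row per document
    return U1, S1, V1, term_coords, doc_coords

# Example, continuing from the previous sketches:
# U1, S1, V1, term_xy, doc_xy = rank_k_coordinates(weight_matrix(A), k=2)
# term_xy[i] is the 2-D point plotted for term i; doc_xy[j] is the point for document j.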

RESULTS

We work with the term-document matrix above, take its rank-2 approximation, and plot the terms and documents in two dimensions. [Figure: 2-D plot of the terms and documents in the rank-2 space.] From this plot we can clearly observe that the terms and documents are clustered. For example, the terms Differential and Equations co-occur in many documents, so when we compute X X^T the rows (and columns) corresponding to these two terms are approximately the same, which means the terms have moved closer to each other. This is visible in the plot, where the two terms appear close together.

When we search for "Differential" we get the following results:

'Attractors for Semigroups and Evolution Equations'
'Methods of Solving Singular Systems of Ordinary Differential Equations'
'Geometrical Aspects of Partial Differential Equations'
'Sinc Methods for Quadrature and Differential Equations'
'Ordinary Differential Equations'
'Pseudodifferential Operators and Nonlinear Partial Differential Equations'
'Nonlinear Systems'
'Oscillation Theory of Delay Differential Equations'
'Oscillation Theory for Neutral Differential Equations with Delay'
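The query step used to produce rankings like the one above can be sketched as follows (Python/NumPy, illustrative only). The folding-in projection Q1 = Σ1^{-1} U1^T Q is the usual LSI formula and is assumed here, since the report does not spell it out.

import numpy as np

def rank_documents(query_terms, terms, U1, S1, V1):
    """Rank documents by cosine similarity to a query in the reduced LSI space.
    Assumes the usual folding-in projection q1 = S1^{-1} U1^T q and that the
    query contains at least one vocabulary term."""
    # Build the query as a pseudo-document over the term vocabulary
    q = np.array([1.0 if t in query_terms else 0.0 for t in terms])
    # Project the query into the k-dimensional space spanned by U1
    q1 = np.linalg.inv(S1) @ U1.T @ q
    # Documents in the reduced space are the rows of V1 (as described in the method above)
    sims = []
    for j in range(V1.shape[0]):
        d = V1[j]
        sims.append((d @ q1) / (np.linalg.norm(d) * np.linalg.norm(q1)))
    sims = np.array(sims)
    # Highest cosine = closest match
    return np.argsort(sims)[::-1], sims

# Example, continuing from the previous sketches:
# order, cosines = rank_documents({"differential"}, terms, U1, S1, V1)
# print([f"D{j + 1}" for j in order[:5]])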

The first document returned does not contain the word Differential, but since the distance between the terms Differential and Equations has been reduced, we still get this document as a match.

Another example: we now search for "Algorithms" and observe the effect of different rank approximations on the results. The table below gives the cosine of the angle between the query and each document; the greater the value, the smaller the angle between the query and the document and the closer the match.

Doc    Rank 2              Rank 3              Rank 5              Rank 10
D1      0.70371322871710    0.70045336681312    0.16560226808666    0.47175686880194
D2     -0.34921857027344   -0.22883398997894   -0.26171292460233    0.31018143847271
D3      0.97397289290423    0.95925343922202    0.76681717720283    0.09486504283145
D4     -0.40361917610330   -0.21106331350824   -0.06135866583859   -0.40637277172979
D5      0.99975544898044    0.91440902090970    0.72976847867085    0.80039222873343
D6      0.99752003772297    0.39000024594427    0.44677502126222   -0.38737356545878
D7      0.99995643911293    0.99398115433171    0.97222186202790    0.29865268124493
D8     -0.36353348645931   -0.00702208549189   -0.12983996313130   -0.01420839873451
D9     -0.17408618301693    0.05346439024878    0.09256169752598    0.11700773933883
D10    -0.41074368945045   -0.07413460857505   -0.23130827088124    0.15803746419291
D11    -0.17254144046489   -0.18341091587962   -0.11653490527492   -0.00187547204494
D12    -0.17254144046489   -0.18341091587962   -0.11653490527492   -0.00187547204417
D13    -0.41924551970499   -0.14656770463176   -0.02178504680712    0.08423102911257
D14    -0.41074368945045   -0.07413460857505   -0.23130827088124    0.15803746419350
D15    -0.35271594040385   -0.23168101789704   -0.26236296684839    0.09301579211539
D16     0.99956435862881    0.88357756060881    0.72417266454682    0.06306494402202
D17     0.87043790655294    0.82113133070215    0.36016683963841   -0.14458881236583

We see that when we approximate to rank 10 there is the least clustering of terms and documents, and only D5 stands out as the document with the maximum match.

When we approximate to rank 5, D7 comes out as the closest match. As we reduce the rank further, D3 also matches the query, and finally at rank 2 we get D3, D5, D6, D7 and D16 as the matches.

WHY DOES LSI WORK

Each document in our collection is a vector with as many components as there are terms. In such a space, documents that have many words in common have vectors that are near to each other, while documents with few shared words have vectors that are far apart. Latent semantic indexing works by projecting this large, multidimensional space down into a smaller number of dimensions. In doing so, keywords that are semantically similar get squeezed together and are no longer completely distinct. What we lose is noise from the original term-document matrix, and what is revealed are similarities that were latent in the document collection. LSI really exploits a property of natural language, namely that words with similar meaning tend to occur together.

The vectors representing the documents are projected into a new low-dimensional space obtained by singular value decomposition of the term-document matrix A. LSI preserves, to the extent possible, the relative distances in the term-document matrix (and hence, presumably, its retrieval capabilities) while projecting it into a lower-dimensional space.

Now consider two terms that co-occur in many documents. What we want LSI to do is to bring these terms close together, so that when we search for one of them we also get the documents containing just the other term (and not the searched term). In the term-term autocorrelation matrix A A^T, the two rows (and columns) corresponding to these terms are nearly identical, as was observed above for the terms Differential and Equations. Therefore the direction that distinguishes this pair of terms corresponds to a very small singular value, which is dropped in the rank reduction. So if one document contained only the term Differential and another contained only the term Equations, their dot product would originally have been 0 (the documents being 90 degrees apart), but after the SVD truncation their dot product is no longer zero, meaning the angle between the documents has decreased from 90 degrees.
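This last effect can be checked on a tiny made-up example (Python/NumPy, purely illustrative and not part of the assignment): two documents that share no terms acquire a positive dot product after rank reduction, because the terms they contain co-occur in a third document.

import numpy as np

# Toy term-document matrix: rows = (differential, equations), columns = (D1, D2, D3).
# D1 contains both terms, D2 only "differential", D3 only "equations".
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

# Dot product of D2 and D3 in the original space: they share no terms.
print(A[:, 1] @ A[:, 2])          # 0.0

# Rank-1 truncated SVD of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])

# After the projection the two documents are no longer orthogonal.
print(A1[:, 1] @ A1[:, 2])        # approximately 0.5 > 0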