Assignment 3
Gagan Bansal 2003CS10162 Group 2
Pawan Jain 2003CS10177 Group 1

Latent Semantic Indexing

OVERVIEW

LATENT SEMANTIC INDEXING (LSI) considers documents that have many words in common to be semantically close, and documents with few words in common to be semantically distant. When we search an LSI-indexed database, the system looks at the similarity values it has computed for every content word and returns the documents that it considers the best fit for the query. Because two documents may be semantically very close even if they do not share a particular keyword, LSI does not require an exact match to return useful results. Where a plain keyword search fails when there is no exact match, LSI will often return relevant documents that do not contain the keyword at all.

METHOD

Terms and documents are represented as a matrix A whose number of rows equals the number of terms and whose number of columns equals the number of documents. Each cell A(i,j) holds the frequency of term i in document j.

Term-Document Matrix
No. of rows = number of terms; no. of columns = number of documents.
Frequency matrix: each row gives the frequency of the corresponding term in each document (columns are D1 through D17 in the order listed below):

algorithms      0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0
application     0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1
delay           0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0
differential    0 0 0 1 0 0 0 1 0 1 1 1 1 1 1 0 0
equations       1 1 0 1 0 0 0 1 0 1 1 1 1 1 1 0 0
implementation  0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0
integral        1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
introduction    0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0
methods         0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0
nonlinear       0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0
ordinary        0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0
oscillation     0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0
partial         0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
problem         0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0
systems         0 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0
theory          0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1

Documents:
D1  A Course on Integral Equations
D2  Attractors for Semigroups and Evolution Equations
D3  Automatic Differentiation of Algorithms, Theory, Implementation and Application
D4  Geometrical Aspects of Partial Differential Equations
D5  Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra
D6  Introduction to Hamiltonian Dynamical Systems and the N-Body Problem
D7  Knapsack Problems: Algorithms and Computer Implementations
D8  Methods of Solving Singular Systems of Ordinary Differential Equations
D9  Nonlinear Systems
D10 Ordinary Differential Equations
D11 Oscillation Theory for Neutral Differential Equations with Delay
D12 Oscillation Theory of Delay Differential Equations
D13 Pseudo differential Operators and Nonlinear Partial Differential Equations
D14 Sinc Methods for Quadrature and Differential Equations
D15 Stability of Stochastic Differential Equations with Respect to Semi-Martingales
D16 The Boundary Integral Approach to Static and Dynamic Contact Problems
D17 The Double Mellin-Barnes Type Integrals and Their Applications to Convolution Theory

Terms:
algorithms
application
delay
differential
equations
implementation
integral
introduction
methods
nonlinear
ordinary
oscillation
partial
problem
systems
theory

Now, instead of working with the frequency matrix, we compute a WEIGHT matrix.

Weighting: we define three kinds of weights:

1. Local weight: L(i,j) = log(1 + A(i,j)).
This damps the importance of a term that appears many times in a single document.

2. Global weight: G(i) = log10( n / DF(i) ),
where n is the total number of documents and DF(i), the document frequency of the i-th term, is the number of documents that contain the i-th term. This indicates the overall importance of each term in the complete document set.

3. Normalization factor: N(j) = 1 / sqrt( Σ over all terms i of (G(i) * L(i,j))² ).
This scaling step keeps large documents with many keywords from overwhelming smaller documents in the result set. Smaller documents are given more importance and larger documents are penalized, so that every document has equal significance.

So we replace the entries of the matrix A by

A2(i,j) = L(i,j) * G(i) * N(j)

(Note that A is the original frequency matrix, while A2 is the new weight matrix.)

For example, if we search for a query containing the term "Computer" on our departmental website, "Computer" would be of low importance: it occurs in almost all documents, so it has a low global weight. Working with the weight matrix is therefore better than working with the frequency matrix.

After we obtain this weight matrix A2, we perform the SVD of A2, because we want the best lower-rank approximation of A2. We also check rank approximation using QR factorization, and we observe that QR factorization gives a result closer to A. Continuing with the SVD, we obtain

A2 = U Σ Vᵀ

where U and V are orthogonal matrices and Σ is a diagonal matrix of singular values. Here U represents the orthogonal basis for the document space (column space) and V represents the basis for the term space (row space). Now we take a rank-k approximation of the matrix A2 by
taking the first k columns of U and of V (equivalently, the first k rows of Vᵀ). We must choose an appropriate rank k so that we neither miss data nor keep more information than necessary. So let U1 = the first k columns of U, V1 = the first k columns of V, and Σ1 = the leading k×k block of Σ. Then X = U1 Σ1 V1ᵀ is the best possible rank-k approximation of A2 in the reduced space.

Now the dot product between two row vectors of X reflects the extent to which two terms have a similar pattern of occurrence across the set of documents. Hence, for comparing all the terms we take

X Xᵀ = U1 Σ1² U1ᵀ = (U1 Σ1)(U1 Σ1)ᵀ

Similarly, the dot product between two columns of X reflects the extent to which two documents are similar, so

Xᵀ X = (V1 Σ1)(V1 Σ1)ᵀ

Hence, when we take the rank approximation as 2 and plot the terms, the first coordinate of a term is its entry in the first column of U1 Σ1 and the second coordinate is its entry in the second column of U1 Σ1. Likewise, when we plot the documents, the first and second coordinates of a document are its entries in the first and second columns of V1 Σ1.

Now, when we get a query Q, the query is represented as a document vector and projected into the reduced document space whose basis is U1; call the projected query Q1. The cosine of the angle between the query and each document i is then found by taking the dot product of the i-th row of V1 with Q1 (after normalizing). The documents whose cosine exceeds a threshold are the closest matches to the query.

(Note: if we do the reduced-rank approximation, say of rank k, using QR factorization, we take the first k columns of Q and the top k rows of R. We can do this because the diagonal entries of R come out sorted in decreasing order of magnitude, just as in the SVD, though here the entries may be negative.)
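The weighting, SVD, and query steps above can be sketched end to end in numpy. This is a minimal illustration, not the report's implementation: the tiny 4-term × 3-document frequency matrix, the rank k = 2, and the query vector are all made-up assumptions.

```python
import numpy as np

# Toy frequency matrix A: rows = terms, columns = documents.
# (Made-up 4-term x 3-document example, not the matrix from this report.)
A = np.array([
    [1, 0, 1],
    [0, 2, 1],
    [1, 1, 0],
    [0, 0, 3],
], dtype=float)
n_terms, n_docs = A.shape

# 1. Local weight: L(i,j) = log(1 + A(i,j))
L = np.log1p(A)

# 2. Global weight: G(i) = log10(n / DF(i)), DF(i) = docs containing term i
DF = np.count_nonzero(A, axis=1)
G = np.log10(n_docs / DF)

# 3. Normalization: N(j) = 1 / sqrt(sum_i (G(i) * L(i,j))^2)
GL = G[:, None] * L
N = 1.0 / np.sqrt((GL ** 2).sum(axis=0))

# Weight matrix A2(i,j) = L(i,j) * G(i) * N(j)
A2 = GL * N[None, :]

# SVD and rank-k truncation
U, s, Vt = np.linalg.svd(A2, full_matrices=False)
k = 2
U1, s1, V1t = U[:, :k], s[:k], Vt[:k, :]

# Query handling: treat the query like a document, project it into the
# k-dimensional space, then score each document by cosine similarity.
q = np.array([1, 0, 0, 0], dtype=float)     # query containing term 0 only
q1 = np.diag(1.0 / s1) @ U1.T @ q           # projected query Q1
doc_coords = V1t.T                          # one row per document (rows of V1)
cos = doc_coords @ q1 / (
    np.linalg.norm(doc_coords, axis=1) * np.linalg.norm(q1))
print(cos)  # documents with cosine above a threshold are returned
```

Documents with a cosine score above a chosen threshold would then be returned as matches, mirroring the retrieval step described above.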
RESULTS

We work with the above term-document matrix, take its rank-2 approximation, and plot the terms and documents in two dimensions.

[Plot omitted: rank-2 coordinates of the terms and documents.]

From this plot we can clearly observe that the terms and documents are clustered. For example, the terms Differential and Equations co-occur in many documents. So when we form X Xᵀ we see that the rows (and columns) of this matrix corresponding to these two terms are approximately the same, which means the terms have moved closer to each other. This is clear from the plot, where the two terms appear close to each other.

So when we search for Differential we get the following results:

'Attractors for Semigroups and Evolution Equations'
'Methods of Solving Singular Systems of Ordinary Differential Equations'
'Geometrical Aspects of Partial Differential Equations'
'Sinc Methods for Quadrature and Differential Equations'
'Ordinary Differential Equations'
'Pseudodifferential Operators and Nonlinear Partial Differential Equations'
'Nonlinear Systems'
'Oscillation Theory of Delay Differential Equations'
'Oscillation Theory for Neutral Differential Equations with Delay'
The first document does not contain the word Differential, but since the distance between the terms Differential and Equations has been reduced, we get this document as a match.

Another example: we search for Algorithms and observe the effect of different rank approximations on the results. The table below gives the cosine of the angle between the query and each document; the greater the value, the smaller the angle between the query and the document, and the closer the match.

            Rank 2              Rank 3              Rank 5              Rank 10
D1    0.70371322871710    0.70045336681312    0.16560226808666    0.47175686880194
D2   -0.34921857027344   -0.22883398997894   -0.26171292460233    0.31018143847271
D3    0.97397289290423    0.95925343922202    0.76681717720283    0.09486504283145
D4   -0.40361917610330   -0.21106331350824   -0.06135866583859   -0.40637277172979
D5    0.99975544898044    0.91440902090970    0.72976847867085    0.80039222873343
D6    0.99752003772297    0.39000024594427    0.44677502126222   -0.38737356545878
D7    0.99995643911293    0.99398115433171    0.97222186202790    0.29865268124493
D8   -0.36353348645931   -0.00702208549189   -0.12983996313130   -0.01420839873451
D9   -0.17408618301693    0.05346439024878    0.09256169752598    0.11700773933883
D10  -0.41074368945045   -0.07413460857505   -0.23130827088124    0.15803746419291
D11  -0.17254144046489   -0.18341091587962   -0.11653490527492   -0.00187547204494
D12  -0.17254144046489   -0.18341091587962   -0.11653490527492   -0.00187547204417
D13  -0.41924551970499   -0.14656770463176   -0.02178504680712    0.08423102911257
D14  -0.41074368945045   -0.07413460857505   -0.23130827088124    0.15803746419350
D15  -0.35271594040385   -0.23168101789704   -0.26236296684839    0.09301579211539
D16   0.99956435862881    0.88357756060881    0.72417266454682    0.06306494402202
D17   0.87043790655294    0.82113133070215    0.36016683963841   -0.14458881236583

We see that when we approximate the rank to 10 there is the least clustering of terms and documents, and we get only D5 as the document with the maximum match. When we approximate the rank to 5, D7 comes out as the closest match. As we reduce the rank further, D3 also matches the query, and finally, when we reduce the rank to 2, we get D3, D5, D6, D7, and D16 as the matches.

WHY DOES LSI WORK

Each document in our collection is a vector with as many components as there are terms. In such a space, documents that have many words in common have vectors that are near each other, while documents with few shared words have vectors that are far apart. Latent semantic indexing works by projecting this large, multidimensional space down into a smaller number of dimensions. In doing so, keywords that are semantically similar get squeezed together and are no longer completely distinct. What we lose is noise from our original term-document matrix, revealing similarities that were latent in the document collection. In LSI we are really exploiting a property of natural language, namely that words with similar meaning tend to occur together.

The vectors representing the documents are projected into a new low-dimensional space obtained by singular value decomposition of the term-document matrix A. LSI preserves, to the extent possible, the relative distances in the term-document matrix (and hence, presumably, its retrieval capabilities) while projecting it to a lower-dimensional space.

Now consider two terms which co-occur in many documents. What we want LSI to do is bring these terms close, so that when we search for one of them we also get the documents containing only the other term (not containing the searched term). In the term-term autocorrelation matrix A Aᵀ, the two rows (and columns) corresponding to these two terms will be nearly identical, as was observed above for the terms Differential and Equations. Therefore there is a very small singular value corresponding to this pair of terms, and it is discarded in the rank reduction.
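This effect can be checked numerically. The sketch below uses an illustrative incidence pattern (not the report's matrix) in which two terms co-occur in almost every document:

```python
import numpy as np

# Illustrative incidence matrix (NOT the report's data): terms 0 and 1
# co-occur in almost every document; term 2 is unrelated.
A = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1, 0],   # e.g. "differential"
    [1, 1, 1, 1, 1, 1, 1, 1, 1],   # e.g. "equations"
    [0, 0, 1, 0, 0, 0, 0, 0, 0],   # an unrelated term
], dtype=float)

# Term-term autocorrelation matrix: rows 0 and 1 come out nearly identical.
TT = A @ A.T
print(TT[0])   # [8. 8. 1.]
print(TT[1])   # [8. 9. 1.]

# Nearly identical rows make A close to rank-deficient, so the smallest
# singular value is much smaller than the largest; the rank reduction
# discards exactly this direction.
s = np.linalg.svd(A, compute_uv=False)
print(s[-1] / s[0])   # a small ratio
```

Dropping the small singular value merges the two nearly identical directions, which is what pulls co-occurring terms together in the reduced space.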
So, earlier, if one document contained just the term Differential and another document contained just the term Equations, their dot product would have been 0 (implying they are 90 degrees apart); but after doing the SVD their dot product no longer remains zero, implying that the angle between the documents has decreased from 90 degrees.
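This final claim can be demonstrated directly. The sketch below uses a made-up two-term matrix (not the report's data): one document contains only the first term, another only the second, and the terms co-occur in the remaining documents.

```python
import numpy as np

# Rows = terms, columns = documents (illustrative, not the report's matrix).
# Document 0 contains only term 0; document 1 contains only term 1;
# the two terms co-occur in documents 2 and 3.
A = np.array([
    [1, 0, 1, 1],   # e.g. "differential"
    [0, 1, 1, 1],   # e.g. "equations"
], dtype=float)

d1, d2 = A[:, 0], A[:, 1]
print(d1 @ d2)      # 0.0: the documents are orthogonal before the SVD

U, s, Vt = np.linalg.svd(A, full_matrices=False)
X = s[0] * np.outer(U[:, 0], Vt[0])    # best rank-1 approximation of A
x1, x2 = X[:, 0], X[:, 1]
cos = x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2))
print(cos)          # now positive and close to 1: the angle has shrunk
```

After the rank reduction, both documents collapse onto the single dominant direction, so documents that originally shared no term end up with a large cosine similarity.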