Information Retrieval and Web Search
IR models: Vector Space Model
[Figure: taxonomy of IR models]
User task:
  Retrieval (ad hoc, filtering): Classic Models (Boolean, Vector, Probabilistic); Set Theoretic (Fuzzy, Extended Boolean); Algebraic (Generalized Vector, Latent Semantic Indexing, Neural Networks); Probabilistic (Inference Network, Belief Network); Structured Models (Non-Overlapping Lists, Proximal Nodes)
  Browsing: Flat, Structure Guided, Hypertext
Slide 1
Vector-Space Model
t distinct terms remain after preprocessing; these unique terms form the VOCABULARY.
These orthogonal terms form a vector space. Dimension = t = |vocabulary|
2 terms: bi-dimensional; n terms: n-dimensional
Each term i in a document or query j is given a real-valued weight, w_ij.
Both documents and queries are expressed as t-dimensional vectors:
d_j = (w_1j, w_2j, ..., w_tj)
Slide 2
Vector-Space Model
Query as vector: we regard the query as a short document.
We return the documents ranked by the closeness of their vectors to the query, also represented as a vector.
The vector model was developed in the SMART system (Salton, c. 1970) and is standardly used by TREC participants and web IR systems.
Slide 3
Graphic Representation
Example:
D1 = 2T1 + 3T2 + 5T3
D2 = 3T1 + 7T2 + T3
Q  = 0T1 + 0T2 + 2T3
[Figure: D1, D2, and Q plotted in the three-dimensional term space T1, T2, T3]
Is D1 or D2 more similar to Q?
How to measure the degree of similarity? Distance? Angle? Projection?
Slide 4
Document Collection Representation
A collection of n documents can be represented in the vector space model by a term-document matrix.
An entry in the matrix corresponds to the weight of a term in the document; zero means the term has no significance in the document or simply doesn't exist in it.

        T1    T2   ...  Tt
D1      w11   w21  ...  wt1
D2      w12   w22  ...  wt2
:        :     :         :
Dn      w1n   w2n  ...  wtn

Slide 5
Term Weights: Term Frequency
More frequent terms in a document are more important, i.e. more indicative of the topic.
f_ij = frequency of term i in document j
May want to normalize term frequency (tf) by the frequency of the most common term in the document:
tf_ij = f_ij / max{f_ij}
Slide 6
Term Weights: Inverse Document Frequency
Terms that appear in many different documents are less indicative of overall topic.
df_i = document frequency of term i = number of documents containing term i
idf_i = inverse document frequency of term i = log2(N / df_i)   (N: total number of documents)
An indication of a term's discrimination power.
The log is used to dampen the effect relative to tf.
Note the difference: document frequency vs. corpus frequency.
Slide 7
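A minimal sketch of these two statistics in Python (the function names and the token-list document representation are illustrative, not from the slides):

    import math
    from collections import Counter

    def term_frequency(doc_tokens):
        """tf_ij = f_ij / max{f_ij}: raw counts normalized by the
        frequency of the most common term in the document."""
        counts = Counter(doc_tokens)
        max_count = max(counts.values())
        return {term: count / max_count for term, count in counts.items()}

    def idf(term, docs):
        """idf_i = log2(N / df_i); docs is a list of token lists."""
        df = sum(1 for doc in docs if term in doc)
        return math.log2(len(docs) / df) if df else 0.0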
TF-IDF Weighting
A typical weighting is tf-idf weighting:
w_ij = tf_ij * idf_i = tf_ij * log2(N / df_i)
A term occurring frequently in the document but rarely in the rest of the collection is given high weight.
Experimentally, tf-idf has been found to work well.
It was also theoretically proved to work well (Papineni, NAACL 2001).
Slide 8
Computing TF-IDF: An Example
Given a document containing terms with the given frequencies: A(3), B(2), C(1)
Assume the collection contains 10,000 documents, and the document frequencies of these terms are: A(50), B(1300), C(250)
Then:
A: tf = 3/3; idf = log(10000/50) = 5.3; tf-idf = 5.3
B: tf = 2/3; idf = log(10000/1300) = 2.0; tf-idf = 1.3
C: tf = 1/3; idf = log(10000/250) = 3.7; tf-idf = 1.2
Slide 9
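A quick check of the arithmetic above; note that the numbers on this slide come out with the natural log rather than the log2 of the definition (the log base only rescales all weights uniformly, so the ranking is unaffected):

    import math

    N = 10_000                                  # documents in the collection
    tf = {"A": 3 / 3, "B": 2 / 3, "C": 1 / 3}   # normalized by max frequency (3)
    df = {"A": 50, "B": 1300, "C": 250}         # document frequencies

    for term in "ABC":
        idf = round(math.log(N / df[term]), 1)  # natural log, rounded as on the slide
        print(term, idf, round(tf[term] * idf, 1))
    # A 5.3 5.3
    # B 2.0 1.3
    # C 3.7 1.2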
Query Vector
The query vector is typically treated as a document and also tf-idf weighted.
An alternative is for the user to supply weights for the given query terms.
Slide 10
Similarity Measure
We now have vectors for all documents in the collection and a vector for the query; how do we compute similarity?
A similarity measure is a function that computes the degree of similarity between two vectors.
Using a similarity measure between the query and each document:
It is possible to rank the retrieved documents in the order of presumed relevance.
It is possible to enforce a certain threshold so that the size of the retrieved set can be controlled.
Slide 11
Desiderata for proximity
If d1 is near d2, then d2 is near d1.
If d1 is near d2, and d2 is near d3, then d1 is not far from d3.
No document is closer to d than d itself.
Sometimes it is a good idea to determine the maximum possible similarity as the "distance" between a document d and itself.
Slide 12
First cut: Euclidean distance
The distance between vectors d1 and d2 is the length of the vector d1 - d2:
EuclDist(X, Y) = sqrt( Σ_{i=1..n} (x_i - y_i)² )
Exercise: determine the Euclidean distance between the vectors (0, 3, 2, 1, 10) and (2, 7, 1, 0, 0).
Why is this not a great idea?
We still haven't dealt with the issue of length normalization: long documents would be more similar to each other by virtue of length, not topic.
However, we can implicitly normalize by looking at angles instead.
Slide 13
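A one-function sketch for the exercise above:

    import math

    def euclidean(x, y):
        """Length of the difference vector x - y."""
        return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

    print(euclidean((0, 3, 2, 1, 10), (2, 7, 1, 0, 0)))  # sqrt(122), about 11.05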
Second cut: Manhattan Distance
Or "city block" measure, based on the idea that generally in American cities you cannot follow a direct line between two points.
ManhDist(X, Y) = Σ_{i=1..n} |x_i - y_i|
Exercise: determine the Manhattan distance between the vectors (0, 3, 2, 1, 10) and (2, 7, 1, 0, 0).
Slide 14
Third cut: Inner Product
Similarity between the vectors for document d_j and query q can be computed as the vector inner product:
sim(d_j, q) = d_j · q = Σ_{i=1..t} w_ij · w_iq
where w_ij is the weight of term i in document j and w_iq is the weight of term i in the query.
For binary vectors, the inner product is the number of matched query terms in the document (size of intersection).
For weighted term vectors, it is the sum of the products of the weights of the matched terms.
Slide 15
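Companion sketches for the Manhattan distance and the inner product (the test vectors reuse the exercise above and the D1/Q example from Slide 4):

    def manhattan(x, y):
        """City-block distance: sum of absolute coordinate differences."""
        return sum(abs(xi - yi) for xi, yi in zip(x, y))

    def inner_product(d, q):
        """sim(d, q) = sum of w_id * w_iq over aligned term weights."""
        return sum(wd * wq for wd, wq in zip(d, q))

    print(manhattan((0, 3, 2, 1, 10), (2, 7, 1, 0, 0)))  # 2+4+1+1+10 = 18
    print(inner_product((2, 3, 5), (0, 0, 2)))           # D1 . Q = 10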
Properties of Inner Product
Favors long documents with a large number of unique terms (again, the issue of normalization).
Measures how many terms are matched, but not how many terms are not matched.
Slide 16
Inner Product: Example 1
[Figure: Venn diagram of terms k1, k2, k3 over documents d1..d7]
Binary term weights, query q = (1, 1, 1):

        k1  k2  k3   q · dj
d1       1   0   1     2
d2       1   0   0     1
d3       0   1   1     2
d4       1   0   0     1
d5       1   1   1     3
d6       1   1   0     2
d7       0   1   0     1

Slide 17
Inner Product: Exercise
[Figure: Venn diagram of terms k1, k2, k3 over documents d1..d7]
Binary document weights, query q = (1, 2, 3):

        k1  k2  k3   q · dj
d1       1   0   1     ?
d2       1   0   0     ?
d3       0   1   1     ?
d4       1   0   0     ?
d5       1   1   1     ?
d6       1   1   0     ?
d7       0   1   0     ?

Slide 18
Cosine similarity
The distance between vectors d1 and d2 is captured by the cosine of the angle θ between them.
Note: this is a similarity measure, not a distance.
[Figure: vectors d1 and d2 in term space t1, t2, t3, separated by angle θ]
Slide 19
Cosine similarity
Cosine of the angle between two vectors:
sim(d_j, d_k) = (d_j · d_k) / (|d_j| · |d_k|) = [ Σ_{i=1..n} w_ij · w_ik ] / [ sqrt(Σ_{i=1..n} w_ij²) · sqrt(Σ_{i=1..n} w_ik²) ]
The denominator involves the lengths of the vectors:
Length: |d_j| = sqrt( Σ_{i=1..n} w_ij² )
So the cosine measure is also known as the normalized inner product.
Slide 20
Cosine similarity exercise
Exercise: rank the following by decreasing cosine similarity:
Two documents that have only frequent words (the, a, an, of) in common.
Two documents that have no words in common.
Two documents that have many rare words in common (wingspan, tailfin).
Slide 21
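The formula transcribed directly into a small function (a sketch; a production system would precompute the vector lengths, as the implementation slides below discuss):

    import math

    def cosine_similarity(d, q):
        """Normalized inner product of two term-weight vectors."""
        dot = sum(wd * wq for wd, wq in zip(d, q))
        len_d = math.sqrt(sum(w * w for w in d))
        len_q = math.sqrt(sum(w * w for w in q))
        if len_d == 0 or len_q == 0:
            return 0.0                    # by convention, for an empty vector
        return dot / (len_d * len_q)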
Example
Documents: Austen's Sense and Sensibility (SaS) and Pride and Prejudice (PaP); Brontë's Wuthering Heights (WH)

Term frequencies:
            SaS   PaP   WH
affection   115    58   20
jealous      10     7   11
gossip        2     0    6

Length-normalized weights:
            SaS    PaP    WH
affection   0.996  0.993  0.847
jealous     0.087  0.120  0.466
gossip      0.017  0.000  0.254

cos(SaS, PaP) = 0.996 × 0.993 + 0.087 × 0.120 + 0.017 × 0.0 ≈ 0.999
cos(SaS, WH)  = 0.996 × 0.847 + 0.087 × 0.466 + 0.017 × 0.254 ≈ 0.888
Slide 22
Cosine Similarity vs. Inner Product
Cosine similarity measures the cosine of the angle between two vectors: the inner product normalized by the vector lengths.
CosSim(d_j, q) = (d_j · q) / (|d_j| · |q|) = [ Σ_{i=1..t} w_ij · w_iq ] / sqrt( Σ_{i=1..t} w_ij² · Σ_{i=1..t} w_iq² )
InnerProduct(d_j, q) = d_j · q
D1 = 2T1 + 3T2 + 5T3;  CosSim(D1, Q) = 10 / sqrt((4+9+25)(0+0+4)) = 0.81
D2 = 3T1 + 7T2 + 1T3;  CosSim(D2, Q) = 2 / sqrt((9+49+1)(0+0+4)) = 0.13
Q  = 0T1 + 0T2 + 2T3
D1 is 6 times better than D2 using cosine similarity, but only 5 times better using inner product.
[Figure: D1, D2, and Q in term space t1, t2, t3]
Slide 23
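Checking the D1/D2 numbers with the cosine_similarity and inner_product sketches defined earlier:

    D1, D2, Q = (2, 3, 5), (3, 7, 1), (0, 0, 2)

    print(round(cosine_similarity(D1, Q), 2))            # 0.81
    print(round(cosine_similarity(D2, Q), 2))            # 0.13
    print(inner_product(D1, Q), inner_product(D2, Q))    # 10 2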
Comments on Vector Space Models
Simple, mathematically based approach.
Considers both local (tf) and global (idf) word occurrence frequencies.
Provides partial matching and ranked results.
Tends to work quite well in practice despite obvious weaknesses.
Allows efficient implementation for large document collections.
Slide 24
Problems with Vector Space Model
Missing semantic information (e.g. word sense).
Missing syntactic information (e.g. phrase structure, word order, proximity information).
Assumption of term independence.
Lacks the control of a Boolean model (e.g., requiring a term to appear in a document).
Given a two-term query "A B", it may prefer a document containing A frequently but not B over a document that contains both A and B, but both less frequently.
Slide 25
Naïve Implementation
Convert all documents in collection D to tf-idf weighted vectors, d_j, over the keyword vocabulary V.
Convert the query to a tf-idf-weighted vector q.
For each d_j in D: compute score s_j = CosSim(d_j, q).
Sort documents by decreasing score.
Present top-ranked documents to the user.
Time complexity: O(|V| · |D|). Bad for large V and D!
|V| = 10,000; |D| = 100,000; |V| · |D| = 1,000,000,000
Slide 26
Practical Implementation
Based on the observation that documents containing none of the query keywords do not affect the final ranking.
Try to identify only those documents that contain at least one query keyword.
Actual implementation: an inverted index.
Slide 27
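The naïve loop spelled out as a sketch (it scores every document in the collection, which is exactly the O(|V| · |D|) cost the slide warns about; cosine_similarity is the function sketched earlier):

    def naive_retrieval(doc_vectors, query_vector, k=10):
        """doc_vectors: {doc_id: t-dimensional weight vector}."""
        scores = [(doc_id, cosine_similarity(vec, query_vector))
                  for doc_id, vec in doc_vectors.items()]
        scores.sort(key=lambda pair: pair[1], reverse=True)
        return scores[:k]                 # top-ranked documents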
Step 1: Preprocessing
Implement the preprocessing functions:
for tokenization
for stop word removal
for stemming
Input: documents that are read one by one from the collection
Output: tokens to be added to the index (no punctuation, no stop words, stemmed); see the sketch below
Slide 28
Step 2: Indexing
Build an inverted index, with an entry for each word in the vocabulary.
Input: tokens obtained from the preprocessing module
Output: an inverted index for fast access
Slide 29
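One possible shape for the preprocessing step; the stop list and the suffix-stripping "stemmer" here are toy stand-ins (a real system would use a full stop list and, e.g., the Porter stemmer):

    import re

    STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in"}  # tiny stand-in list

    def preprocess(text):
        """Tokenize, remove stop words, stem; punctuation is dropped."""
        tokens = re.findall(r"[a-z0-9]+", text.lower())       # tokenization
        tokens = [t for t in tokens if t not in STOP_WORDS]   # stop word removal
        return [t[:-1] if t.endswith("s") else t for t in tokens]  # toy stemming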
Step 2 (cont'd)
Many data structures are appropriate for fast access: B-trees, hashtables.
We need one entry for each word in the vocabulary. For each such entry:
Keep a list of all the documents where the word appears, together with the corresponding frequency (TF).
Keep the number of documents in which the word appears (df), needed to compute IDF.
Slide 30
Step 2 (cont'd)

Index terms   df   (Dj, tf_j)
computer       3   (D7, 4)
database       2   (D1, 3)
science        4   (D2, 4)
system         1   (D5, 2)
Index file → postings lists

Slide 31
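A minimal sketch of building such an index in one pass, with a dictionary of dictionaries standing in for the B-tree/hashtable:

    from collections import defaultdict

    def build_index(docs):
        """docs: {doc_id: list of preprocessed tokens}.
        Returns {term: {doc_id: tf}}; df for a term is len(index[term])."""
        index = defaultdict(lambda: defaultdict(int))
        for doc_id, tokens in docs.items():
            for token in tokens:
                index[token][doc_id] += 1
        return index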
Step 2 (cont'd)
TF and IDF for each token can be computed in one pass.
Cosine similarity also requires document lengths.
A second pass is needed to compute document vector lengths:
Remember that the length of a document vector is the square root of the sum of the squares of the weights of its tokens.
Remember that the weight of a token is TF * IDF.
Therefore, we must wait until the IDFs are known (and therefore until all documents are indexed) before document lengths can be determined.
Do a second pass over all documents: keep a list or hashtable with all document ids, and for each document determine its length.
Slide 32
Time Complexity of Indexing
Complexity of creating a vector and indexing a document of n tokens is O(n).
So indexing m such documents is O(m n).
Computing token IDFs can be done during the same first pass.
Computing vector lengths is also O(m n).
The complete process is O(m n), which is also the complexity of just reading in the corpus.
Slide 33
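A sketch of the second pass, assuming the hypothetical build_index output above; instead of re-reading the documents it walks the finished index, which accumulates the same per-document sums:

    import math
    from collections import defaultdict

    def document_lengths(index, n_docs):
        """|d_j| = sqrt(sum over terms of (tf * idf)^2); needs all dfs,
        so it can only run after the whole collection is indexed."""
        sums = defaultdict(float)
        for term, postings in index.items():
            idf = math.log2(n_docs / len(postings))   # len(postings) = df
            for doc_id, tf in postings.items():
                sums[doc_id] += (tf * idf) ** 2
        return {doc_id: math.sqrt(s) for doc_id, s in sums.items()}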
Step 3: Retrieval
Use the inverted index (from Step 2) to find the limited set of documents that contain at least one of the query words.
Incrementally compute the cosine similarity of each indexed document as the query words are processed one by one.
To accumulate a total score for each retrieved document, store the retrieved documents in a hashtable, where the document id is the key and the partial accumulated score is the value.
Input: query and inverted index (from Step 2)
Output: similarity values between the query and documents
Slide 34
Step 4: Ranking
Sort the hashtable of retrieved documents by the value of the cosine similarity.
Return the documents in descending order of their relevance.
Input: similarity values between the query and documents
Output: ranked list of documents in descending order of relevance
Slide 35
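Steps 3 and 4 as one sketch, reusing the hypothetical build_index / document_lengths helpers; only documents sharing at least one term with the query are ever touched:

    import math
    from collections import defaultdict

    def retrieve(query_tokens, index, lengths, n_docs, k=10):
        """Accumulate partial inner products in a hashtable keyed by doc id,
        then length-normalize and rank."""
        scores = defaultdict(float)
        for term in set(query_tokens):
            postings = index.get(term)
            if not postings:
                continue                              # term absent from collection
            idf = math.log2(n_docs / len(postings))
            for doc_id, tf in postings.items():
                scores[doc_id] += (tf * idf) * idf    # query tf taken as 1
        # dividing by |q| would scale every score equally, so it is omitted
        ranked = sorted(((s / lengths[d], d) for d, s in scores.items()),
                        reverse=True)
        return ranked[:k]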
Standard Evaluation Measures
Start with a CONTINGENCY table:

                 retrieved    not retrieved
relevant             w              x          n1 = w + x
not relevant         y              z
                 n2 = w + y                    N

Slide 36
Precision and Recall
Recall: of all the documents that are relevant out there, how many did the IR system retrieve?
Recall = w / (w + x)
Precision: of all the documents that are retrieved by the IR system, how many are relevant?
Precision = w / (w + y)
Slide 37
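The two measures in code, with hypothetical counts; the variable names follow the contingency table above:

    def recall(w, x):
        """Relevant retrieved / all relevant = w / n1."""
        return w / (w + x)

    def precision(w, y):
        """Relevant retrieved / all retrieved = w / n2."""
        return w / (w + y)

    print(recall(30, 20), precision(30, 10))   # 0.6 0.75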
Slide 38
What weighting methods?
Weights are applied to both document terms and query terms.
Direct impact on the final ranking.
Direct impact on the results.
Direct impact on the quality of the IR system.
Slide 39