CS347: Information Retrieval
Lecture 1, April 4, 2001
Prabhakar Raghavan

Query
Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?

Term-document incidence
Entry is 1 if the play contains the word, 0 otherwise:

Term        Antony &   Julius   The       Hamlet   Othello   Macbeth
            Cleopatra  Caesar   Tempest
Antony          1         1        0         0        0         1
Brutus          1         1        0         1        0         0
Caesar          1         1        0         1        1         1
Calpurnia       0         1        0         0        0         0
Cleopatra       1         0        0         0        0         0
mercy           1         0        1         1        1         1
worser          1         0        1         1        1         0

Incidence vectors
So we have a 0/1 vector for each term.
To answer the query: take the vectors for Brutus, Caesar, and Calpurnia (complemented), and bitwise AND them:
110100 AND 110111 AND 101111 = 100100.
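To make the bitwise AND concrete, here is a minimal Python sketch (an illustration of mine, not from the original slides) that answers Brutus AND Caesar AND NOT Calpurnia using integers as incidence vectors; the vectors mirror the table above.

# Boolean retrieval over term-document incidence vectors.
# Bit i of each vector corresponds to plays[i] (leftmost bit = first play).

plays = ["Antony and Cleopatra", "Julius Caesar", "The Tempest",
         "Hamlet", "Othello", "Macbeth"]

# Incidence vectors from the table above, written as binary literals.
incidence = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,
}

ALL = (1 << len(plays)) - 1  # mask of all six plays, for complementing

# Brutus AND Caesar AND NOT Calpurnia
answer = incidence["Brutus"] & incidence["Caesar"] & (ALL & ~incidence["Calpurnia"])

# Decode the result bits back into play names (leftmost bit = plays[0]).
hits = [plays[i] for i in range(len(plays))
        if answer & (1 << (len(plays) - 1 - i))]
print(f"{answer:06b}")  # 100100
print(hits)             # ['Antony and Cleopatra', 'Hamlet']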

Answers to query

Antony and Cleopatra, Act III, Scene ii:
Agrippa [Aside to Domitius Enobarbus]: Why, Enobarbus,
When Antony found Julius Caesar dead,
He cried almost to roaring; and he wept
When at Philippi he found Brutus slain.

Hamlet, Act III, Scene ii:
Lord Polonius: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.

Bigger corpora
Consider n = 1M documents, each with about 1K terms.
At an average of 6 bytes per term (including spaces and punctuation), that is 6 GB of data.
Say there are m = 500K distinct terms among these.
We can't build the matrix: a 500K x 1M matrix has half a trillion 0's and 1's.
But it has no more than one billion 1's, so the matrix is extremely sparse.
What's a better representation?

Inverted index
Documents are parsed to extract words, and these are saved with the document ID:

Doc 1: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.

Term, Doc #: i 1, did 1, enact 1, julius 1, caesar 1, i 1, was 1, killed 1, i' 1, the 1, capitol 1, brutus 1, killed 1, me 1, so 2, let 2, it 2, be 2, with 2, caesar 2, the 2, noble 2, brutus 2, hath 2, told 2, you 2, caesar 2, was 2, ambitious 2
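As a hedged illustration of this parsing step (my sketch; the slides give no code, and the lowercase word regex is my choice of tokenizer):

import re

docs = {
    1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.",
}

# Parse each document into lowercase word tokens, emitting (term, docID) pairs.
pairs = [(tok, doc_id)
         for doc_id, text in docs.items()
         for tok in re.findall(r"[a-z']+", text.lower())]

print(pairs[:4])  # [('i', 1), ('did', 1), ('enact', 1), ('julius', 1)]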

Sorting
After all documents have been parsed, the inverted file is sorted by term (ties broken by doc #):

Term, Doc #: ambitious 2, be 2, brutus 1, brutus 2, caesar 1, caesar 2, caesar 2, capitol 1, did 1, enact 1, hath 2, i 1, i 1, i' 1, it 2, julius 1, killed 1, killed 1, let 2, me 1, noble 2, so 2, the 1, the 2, told 2, was 1, was 2, with 2, you 2

Merging
Multiple entries for a term in a single document are merged, and frequency information is added:

Term       Doc #  Freq
ambitious    2     1
be           2     1
brutus       1     1
brutus       2     1
caesar       1     1
caesar       2     2
capitol      1     1
did          1     1
enact        1     1
hath         2     1
i            1     2
i'           1     1
it           2     1
julius       1     1
killed       1     2
let          2     1
me           1     1
noble        2     1
so           2     1
the          1     1
the          2     1
told         2     1
was          1     1
was          2     1
with         2     1
you          2     1

Dictionary and postings
The file is commonly split into a dictionary and a postings file. The dictionary holds each term with the number of docs it appears in and its total frequency, plus a pointer into the postings file; the postings file holds the (doc #, freq) entries themselves. For example, the dictionary entry for caesar is (caesar, 2 docs, total freq 3), pointing at postings (1, 1) and (2, 2).

Where do we pay in storage? In the dictionary (the terms) and in the postings (the pointers).
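Continuing the sketch (again mine, with the same two documents and the same simple tokenizer), sorting, merging, and splitting into dictionary and postings might look like this:

import re
from collections import Counter, defaultdict

docs = {
    1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.",
}

# (term, docID) pairs as in the previous sketch, sorted by term then doc ID.
pairs = sorted((tok, d) for d, text in docs.items()
               for tok in re.findall(r"[a-z']+", text.lower()))

# Merge duplicate (term, docID) entries, recording within-document frequency.
freq = Counter(pairs)

# Split into a dictionary and a postings file.
postings = defaultdict(list)  # term -> [(docID, freq), ...]
for (term, doc_id), f in sorted(freq.items()):
    postings[term].append((doc_id, f))
dictionary = {t: (len(pl), sum(f for _, f in pl)) for t, pl in postings.items()}

print(dictionary["caesar"], postings["caesar"])  # (2, 3) [(1, 1), (2, 2)]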

Two conflicting forces
A term like Calpurnia occurs in maybe one doc out of a million; we would like to store this pointer using log2 M ≈ 20 bits.
A term like the occurs in virtually every doc, so 20 bits per pointer is too expensive; a 0/1 vector is preferable in this case.

Postings file entry
Store the list of docs containing a term in increasing order of doc ID:
Brutus: 33, 47, 154, 159, 202
Consequence: it suffices to store the gaps: 33, 14, 107, 5, 43.
Hope: most gaps can be encoded with far fewer than 20 bits.

Variable encoding
For Calpurnia, use ~20 bits per gap entry.
For the, use ~1 bit per gap entry.
If the average gap for a term is G, we want to use ~log2 G bits per gap entry.

γ codes for gap encoding
Represent a gap G as the pair <length, offset>:
offset is the binary encoding of G with the leading 1 removed, i.e., offset = G − 2^⌊log2 G⌋;
length, in unary, uses ⌊log2 G⌋ + 1 bits to specify the length of the binary encoding of the offset.
e.g., 9 is represented as 1110001.
Encoding G takes 2⌊log2 G⌋ + 1 bits.
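A short sketch of the γ code just described (my code, not the course's); gamma_encode(9) reproduces the slide's 1110001:

def gamma_encode(gap: int) -> str:
    """Elias gamma code for a gap G >= 1, returned as a bit string."""
    assert gap >= 1
    binary = bin(gap)[2:]             # binary representation of G, e.g. 9 -> '1001'
    offset = binary[1:]               # drop the leading 1: '001'
    length = "1" * len(offset) + "0"  # unary length: '1110'
    return length + offset

def gamma_decode(bits: str) -> int:
    """Inverse: read the unary length, then that many offset bits."""
    n = bits.index("0")               # number of offset bits
    return (1 << n) | int(bits[n + 1:n + 1 + n] or "0", 2)

print(gamma_encode(9))                                   # 1110001 (7 bits)
print([gamma_encode(g) for g in (33, 14, 107, 5, 43)])   # the Brutus gaps
print(gamma_decode("1110001"))                           # 9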

What we've just done
Encoded each gap as tightly as possible, to within a factor of 2.
For better tuning (and a simpler analysis), we need some handle on the distribution of gap values.

Zipf's law
The kth most frequent term has frequency proportional to 1/k.
Use this for a crude analysis of the space used by our postings file pointers.

Rough analysis based on Zipf
The most frequent term occurs in n docs: n gaps of 1 each.
The second most frequent term occurs in n/2 docs: n/2 gaps of 2 each.
The kth most frequent term occurs in n/k docs: n/k gaps of k each; using 2⌊log2 k⌋ + 1 bits for each gap gives a net of ~(2n/k)·log2 k bits for the kth most frequent term.
Sum over k from 1 to m = 500K. Do this by breaking the values of k into groups: group i consists of 2^(i−1) ≤ k < 2^i. Group i has 2^(i−1) terms in the sum, each contributing at most (2ni)/2^(i−1).
Summing over i from 1 to 19, we get a net estimate of ~340 Mbits, i.e., ~45 MB for our index. Work out the calculation.
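The slide leaves the calculation as an exercise; one way to work it out in Python, under the slide's assumptions (n = 1M docs, m = 500K terms, γ-coded gaps), is the sketch below. Summing the exact per-term cost, rather than the grouped upper bound, lands somewhat below the slide's ~340 Mbits:

n = 1_000_000   # documents: the kth term has ~n/k gaps of size ~k
m = 500_000     # distinct terms

# floor(log2 k) via bit_length; a gap of size k gamma-encodes in
# 2*floor(log2 k) + 1 bits.
total_bits = sum((n // k) * (2 * (k.bit_length() - 1) + 1)
                 for k in range(1, m + 1))

# Prints roughly 250 Mbits (~31 MB); the slide's grouping argument gives
# the looser upper bound of ~340 Mbits (~45 MB).
print(f"{total_bits / 1e6:.0f} Mbits ~= {total_bits / 8 / 1e6:.0f} MB")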

Caveats
This is not the entire space for our index: it does not account for dictionary storage, and as we get further, we'll store even more stuff in the index.
It assumes Zipf's law applies to the occurrence of terms in docs.
All gaps for a term are taken to be the same.
It does not talk about query processing.

Issues with the index we just built
How do we process a query?
What terms in a doc do we index? All words or only important ones?
Stopword list: terms that are so common they are ignored for indexing, e.g., the, a, an, of, to; language-specific.
Exercise: repeat the postings size calculation if the 100 most frequent terms are not indexed (see the sketch after this page).

Issues in what to index
"Cooper's concordance of Wordsworth was published in 1911. The applications of full-text retrieval are legion: they include résumé scanning, litigation support and searching published journals on-line."
Cooper's vs. Cooper vs. Coopers.
Full-text vs. full text vs. {full, text} vs. fulltext.
Accents: résumé vs. resume.

Punctuation
Ne'er: use a language-specific, hand-crafted locale to normalize.
State-of-the-art: break up the hyphenated sequence.
U.S.A. vs. USA: use locale.
a.out
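A possible take on the exercise above (mine; the same Zipf/γ model as the earlier sketch), treating the 100 most frequent terms as unindexed stopwords:

n, m = 1_000_000, 500_000

def postings_bits(skip: int = 0) -> int:
    # Same Zipf/gamma model as the previous sketch, ignoring the `skip`
    # most frequent terms (i.e., treating them as stopwords).
    return sum((n // k) * (2 * (k.bit_length() - 1) + 1)
               for k in range(skip + 1, m + 1))

saved = postings_bits() - postings_bits(skip=100)
print(f"dropping the top 100 terms saves ~{saved / 1e6:.0f} Mbits")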

Numbers
3/12/91; Mar. 12, 1991; 55 B.C.; B-52; 100.2.86.144

Case folding
Reduce all letters to lower case.
Exception: proper nouns, from a language module, e.g., General Motors; Fed vs. fed; SAIL vs. sail.

Thesauri and soundex
Handle synonyms and homonyms.
Hand-constructed equivalence classes, e.g., car = automobile, your = you're.
Index such equivalences, or expand the query? More later...

Spell correction
Look for all words within (say) edit distance 3 (insert/delete/replace) at query time, e.g., Alanis Morisette.
Spell correction is expensive and slows the query (up to a factor of 100), so invoke it only when the index returns zero matches.
What if docs contain misspellings?
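A minimal sketch (my own, not the course's code) of the insert/delete/replace edit distance the slide refers to; a spell corrector could accept dictionary terms within distance 3 of the query term:

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum number of insertions, deletions,
    and replacements needed to turn string a into string b."""
    prev = list(range(len(b) + 1))      # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                      # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # delete ca
                            curr[j - 1] + 1,              # insert cb
                            prev[j - 1] + (ca != cb)))    # replace (or match)
        prev = curr
    return prev[-1]

print(edit_distance("morisette", "morissette"))       # 1 (one insertion)
print(edit_distance("morisette", "morissette") <= 3)  # within the cutoff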

Stemming
Reduce terms to their roots before indexing; language-dependent.
e.g., automate(s), automatic, automation are all reduced to automat.
"for example compressed and compression are both accepted as equivalent to compress" stems to "for exampl compres and compres are both accept as equival to compres".

Porter's algorithm
The commonest algorithm for stemming English: conventions plus 5 phases of reductions.
Phases are applied sequentially; each phase consists of a set of commands.
Sample convention: of the rules in a compound command, select the one that applies to the longest suffix.

Typical rules in Porter:
sses → ss
ies → i
ational → ate
tional → tion

So far: terms are the units of search. What about phrases?
Proximity: find Gates NEAR Microsoft; the index needs to capture position information in docs.
Zones in documents: find documents with (author = Ullman) AND (text contains automata).
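A toy illustration (mine; the real Porter stemmer has five phases and many more rules and conditions) applying just the four rules listed above with the longest-suffix convention:

# Toy illustration of the four Porter rules above; not the full algorithm.
RULES = [("ational", "ate"), ("tional", "tion"), ("sses", "ss"), ("ies", "i")]

def apply_rules(word: str) -> str:
    # Convention: of the rules that apply, pick the one matching the longest suffix.
    matches = [(suf, rep) for suf, rep in RULES if word.endswith(suf)]
    if not matches:
        return word
    suf, rep = max(matches, key=lambda m: len(m[0]))
    return word[:-len(suf)] + rep

for w in ("caresses", "ponies", "relational", "conditional"):
    print(w, "->", apply_rules(w))
# caresses -> caress, ponies -> poni, relational -> relate, conditional -> condition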

Evidence accumulation
1 vs. 0 occurrences of a search term, 2 vs. 1, 3 vs. 2, etc.
We need term frequency information in docs.

Ranking search results
Boolean queries give only inclusion or exclusion of docs; we need to measure how close each doc is to the query.
Also: whether docs presented to the user are singletons, or a group of docs covering various aspects of the query.

Clustering and classification
Clustering: given a set of docs, group them into clusters based on their contents.
Classification: given a set of topics, plus a new doc D, decide which topic(s) D belongs to.

The web and its challenges
Unusual and diverse documents.
Unusual and diverse users, queries, and information needs.
Beyond terms: exploit ideas from social networks, link analysis, clickstreams...

Course administrivia
Course URL: http://www.stanford.edu/class/cs347
TAs: Taher Haveliwala, Brent Miller, Sriram Raghavan
Grading: 30% midterm, 40% final, 30% group project (groups of 4-5).

Group project
Strongly encouraged to build apps, not search engines.
Strongly encouraged to use one of the local corpora.
Short and not onerous on programming.
Details to be updated on the course page. Lead: Brent Miller; discussion 4/11, TBA.

Class schedule
Lectures MW 12:50-2:05pm, Thornton 102.
April 11: guest lecture by Dr. Andrei Broder, Chief Scientist at AltaVista.
April 30: midterm, in class.
May 23 onwards: Distributed Databases, Sriram Raghavan lecturing.

Resources for today's lecture
Managing Gigabytes, Chapter 3.
Modern Information Retrieval, Chapter 7.2.
Porter's stemmer: http://www.sims.berkeley.edu/~hearst/irbook/porter.html
Shakespeare: http://www.theplays.org