
The Boolean Model
January 15, 2017

Outline
1. Boolean queries
2. Inverted index
3. Query processing
4. Query optimization

Taxonomy of IR models
Classic IR models: Boolean, vector, probabilistic
Set-theoretic: fuzzy, extended Boolean, set-based
Algebraic: generalized vector, LSI, NN
Probabilistic: BM25, language models, Bayesian networks
Document properties: text, links, multimedia
Semistructured text: proximal nodes, XML-based
Multimedia: image retrieval, audio, video
Web: PageRank, hubs and authorities (HITS)

Outline
1. Boolean queries
2. Inverted index
3. Query processing
4. Query optimization

Boolean retrieval
The Boolean model is arguably the simplest model on which to base an information retrieval system. Queries are Boolean expressions, e.g., Caesar AND Brutus. The search engine returns all documents that satisfy the Boolean expression.
Does Google use the Boolean model?

Does Google use the Boolean model?
On Google, the default interpretation of a query w1 w2 ... wn is w1 AND w2 AND ... AND wn.
Cases where you get hits that do not contain one of the wi:
- anchor text
- the page contains a variant of wi (morphology, spelling correction, synonym)
- long queries (n large)
- the Boolean expression generates very few hits
Simple Boolean vs. ranking of the result set: simple Boolean retrieval returns matching documents in no particular order. Google (and most well-designed Boolean engines) rank the result set: they rank good hits (according to some estimator of relevance) higher than bad hits.

Outline
1. Boolean queries
2. Inverted index
3. Query processing
4. Query optimization


Unstructured data in 1650
Which plays of Shakespeare contain the words Brutus and Caesar, but not Calpurnia?
One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia.
Why is grep not the solution?
- Slow (for large collections)
- grep is line-oriented, IR is document-oriented
- NOT Calpurnia is non-trivial
- Other operations (e.g., find the word Romans near countryman) are not feasible

Term-document incidence matrix

           Anthony &  Julius  The      Hamlet  Othello  Macbeth  ...
           Cleopatra  Caesar  Tempest
Anthony    1          1       0        0       0        1
Brutus     1          1       0        1       0        0
Caesar     1          1       0        1       1        1
Calpurnia  0          1       0        0       0        0
Cleopatra  1          0       0        0       0        0
mercy      1          0       1        1       1        1
worser     1          0       1        1       1        0
...

An entry is 1 if the term occurs: e.g., Calpurnia occurs in Julius Caesar. An entry is 0 if the term doesn't occur: e.g., Calpurnia doesn't occur in The Tempest.


Incidence vectors
So we have a 0/1 vector for each term. To answer the query Brutus AND Caesar AND NOT Calpurnia:
- Take the vectors for Brutus, Caesar, and Calpurnia
- Complement the vector of Calpurnia
- Do a (bitwise) AND on the three vectors

Brutus         1 1 0 1 0 0
Caesar         1 1 0 1 1 1
NOT Calpurnia  1 0 1 1 1 1
AND            1 0 0 1 0 0
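A minimal sketch of this bitwise evaluation in Python, encoding each term's incidence row as an integer bitmask (leftmost bit = Anthony and Cleopatra, as in the matrix above):

```python
# Incidence vectors from the matrix above, one bit per play, left to right:
# Anthony & Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, Macbeth
brutus    = 0b110100
caesar    = 0b110111
calpurnia = 0b010000
all_plays = 0b111111  # mask covering the six plays

# Brutus AND Caesar AND NOT Calpurnia
result = brutus & caesar & (~calpurnia & all_plays)
print(format(result, "06b"))  # 100100 -> Anthony and Cleopatra, Hamlet
```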

0/1 vectors and result of bitwise operations
Applying these bitwise operations to the rows of the incidence matrix gives the result vector 1 0 0 1 0 0: the query matches Anthony and Cleopatra and Hamlet.

Answers to query

Anthony and Cleopatra, Act III, Scene ii:
Agrippa [Aside to Domitius Enobarbus]: Why, Enobarbus,
When Antony found Julius Caesar dead,
He cried almost to roaring; and he wept
When at Philippi he found Brutus slain.

Hamlet, Act III, Scene ii:
Lord Polonius: I did enact Julius Caesar: I was killed i' the
Capitol; Brutus killed me.

Bigger collections
Consider N = 10^6 documents, each with about 1000 tokens: a total of 10^9 tokens. At an average of 6 bytes per token (including spaces and punctuation), the size of the document collection is about 6 × 10^9 bytes = 6 GB. Assume there are M = 500,000 distinct terms in the collection. (Notice that we are making a term/token distinction.)

Can't build the incidence matrix
The matrix would have M × N = 500,000 × 10^6 = half a trillion 0s and 1s. But it has no more than one billion 1s, so the matrix is extremely sparse. What is a better representation? We only record the 1s.

Inverted index
For each term t, we store a list of all documents that contain t.

Brutus    → 1, 2, 4, 11, 31, 45, 173, 174
Caesar    → 1, 2, 4, 5, 6, 16, 57, 132, ...
Calpurnia → 2, 31, 54, 101

The terms on the left form the dictionary; the document lists on the right are the postings.


Index construction (a code sketch follows the walkthrough below)
1. Collect the documents to be indexed: Friends, Romans, countrymen. So let it be with Caesar ...
2. Tokenize the text, turning each document into a list of tokens: Friends Romans countrymen So ...
3. Do linguistic preprocessing, producing a list of normalized tokens, which are the indexing terms: friend roman countryman so ...
4. Index the documents that each term occurs in by creating an inverted index, consisting of a dictionary and postings.

Tokenization and preprocessing
Doc 1. I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Doc 2. So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious:
⇒
Doc 1. i did enact julius caesar i was killed i the capitol brutus killed me
Doc 2. so let it be with caesar the noble brutus hath told you caesar was ambitious

Generate postings
Doc 1. i did enact julius caesar i was killed i the capitol brutus killed me
Doc 2. so let it be with caesar the noble brutus hath told you caesar was ambitious
⇒ a (term, docID) pair for each token:
i 1, did 1, enact 1, julius 1, caesar 1, i 1, was 1, killed 1, i 1, the 1, capitol 1, brutus 1, killed 1, me 1, so 2, let 2, it 2, be 2, with 2, caesar 2, the 2, noble 2, brutus 2, hath 2, told 2, you 2, caesar 2, was 2, ambitious 2

Sort postings
Sorting the (term, docID) pairs alphabetically by term, then by docID, gives:
ambitious 2, be 2, brutus 1, brutus 2, caesar 1, caesar 2, caesar 2, capitol 1, did 1, enact 1, hath 2, i 1, i 1, i 1, it 2, julius 1, killed 1, killed 1, let 2, me 1, noble 2, so 2, the 1, the 2, told 2, was 1, was 2, with 2, you 2

Create postings lists, determine document frequency
Grouping the sorted pairs by term yields, for each term, its document frequency and postings list:

term       doc. freq.  postings list
ambitious  1           2
be         1           2
brutus     2           1, 2
caesar     2           1, 2
capitol    1           1
did        1           1
enact      1           1
hath       1           2
i          1           1
it         1           2
julius     1           1
killed     1           1
let        1           2
me         1           1
noble      1           2
so         1           2
the        2           1, 2
told       1           2
was        2           1, 2
with       1           2
you        1           2

Split the result into dictionary and postings file

Brutus    → 1, 2, 4, 11, 31, 45, 173, 174
Caesar    → 1, 2, 4, 5, 6, 16, 57, 132, ...
Calpurnia → 2, 31, 54, 101

The left-hand side is the dictionary; the right-hand side is the postings file.
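A toy end-to-end sketch of steps 1-4 above in Python, assuming whitespace tokenization plus lowercasing and punctuation stripping as the only normalization (a real indexer would do proper tokenization, stemming, etc.):

```python
from collections import defaultdict

docs = {
    1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious:",
}

def normalize(token):
    # Toy normalization: lowercase, strip surrounding punctuation.
    return token.lower().strip(".,:;'")

index = defaultdict(list)    # dictionary: term -> postings list
for doc_id in sorted(docs):  # visiting docs in docID order keeps postings sorted
    for term in sorted({normalize(t) for t in docs[doc_id].split()}):
        index[term].append(doc_id)

print(index["brutus"])       # [1, 2]
print(len(index["caesar"]))  # document frequency of "caesar": 2
```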

Outline
1. Boolean queries
2. Inverted index
3. Query processing
4. Query optimization

Simple conjunctive query (two terms)
Consider the query: Brutus AND Calpurnia
To find all matching documents using the inverted index:
1. Locate Brutus in the dictionary
2. Retrieve its postings list from the postings file
3. Locate Calpurnia in the dictionary
4. Retrieve its postings list from the postings file
5. Intersect the two postings lists
6. Return the intersection to the user


Intersecting two postings lists

Brutus    → 1, 2, 4, 11, 31, 45, 173, 174
Calpurnia → 2, 31, 54, 101
Intersection = 2, 31

This is linear in the length of the postings lists.
Note: this only works if the postings lists are sorted.

Intersecting two postings lists

Intersect(p1, p2)
  answer ← ⟨⟩
  while p1 ≠ nil and p2 ≠ nil
    do if docID(p1) = docID(p2)
         then Add(answer, docID(p1))
              p1 ← next(p1)
              p2 ← next(p2)
         else if docID(p1) < docID(p2)
                then p1 ← next(p1)
                else p2 ← next(p2)
  return answer
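A direct Python transcription of this merge, assuming postings are sorted lists of docIDs:

```python
def intersect(p1, p2):
    """Merge-intersect two sorted postings lists in O(len(p1) + len(p2))."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

print(intersect([1, 2, 4, 11, 31, 45, 173, 174], [2, 31, 54, 101]))  # [2, 31]
```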

Query processing: Exercise

france → 1, 2, 3, 4, 5, 7, 8, 9, 11, 12, 13, 14, 15
paris  → 2, 6, 10, 12, 14
lear   → 12, 15

Compute the hit list for ((paris AND NOT france) OR lear).
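One way to check your answer, sketched with Python sets: NOT is a complement against the set of all docIDs, assumed here to be 1..15:

```python
france = {1, 2, 3, 4, 5, 7, 8, 9, 11, 12, 13, 14, 15}
paris  = {2, 6, 10, 12, 14}
lear   = {12, 15}
all_docs = set(range(1, 16))  # assumed docID universe

# ((paris AND NOT france) OR lear)
hits = (paris & (all_docs - france)) | lear
print(sorted(hits))
```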

Boolean retrieval model: Assessment
The Boolean retrieval model can answer any query that is a Boolean expression.
- Boolean queries use AND, OR, and NOT to join query terms.
- It views each document as a set of terms.
- It is precise: a document either matches the condition or it does not.
Boolean retrieval was the primary commercial retrieval tool for three decades. Many professional searchers (e.g., lawyers) still like Boolean queries: you know exactly what you are getting. Many search systems you use are also Boolean: Spotlight, email, intranet search, etc.

Outline
1. Boolean queries
2. Inverted index
3. Query processing
4. Query optimization

Query optimization
Consider a query that is an AND of n terms, n > 2. For each of the terms, get its postings list, then AND them together.
Example query: Brutus AND Calpurnia AND Caesar
What is the best order for processing this query?


Example query: Brutus AND Calpurnia AND Caesar
Simple and effective optimization: process terms in order of increasing document frequency. Start with the shortest postings list, then keep cutting further. In this example: first Caesar, then Calpurnia, then Brutus.

Brutus    → 1, 2, 4, 11, 31, 45, 173, 174
Calpurnia → 2, 31, 54, 101
Caesar    → 5, 31

Optimized intersection algorithm for conjunctive queries

Intersect(⟨t1, ..., tn⟩)
  terms ← SortByIncreasingFrequency(⟨t1, ..., tn⟩)
  result ← postings(first(terms))
  terms ← rest(terms)
  while terms ≠ nil and result ≠ nil
    do result ← Intersect(result, postings(first(terms)))
       terms ← rest(terms)
  return result
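The same algorithm in Python, reusing the two-list intersect above; document frequency is just the length of a term's postings list:

```python
def intersect_terms(index, terms):
    """AND together all terms' postings, processing rarest terms first."""
    postings = sorted((index[t] for t in terms), key=len)
    result = postings[0]
    for p in postings[1:]:
        if not result:  # intersection already empty: stop early
            break
        result = intersect(result, p)
    return result

# e.g. intersect_terms(index, ["brutus", "calpurnia", "caesar"])
```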

More general optimization
Example query: (madding OR crowd) AND (ignoble OR strife)
- Get frequencies for all terms.
- Estimate the size of each OR by the sum of its terms' frequencies (a conservative upper bound, since postings may overlap).
- Process in increasing order of OR sizes.
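A sketch of that estimate in Python, assuming the query is a conjunction of OR-groups:

```python
def or_size_estimate(index, group):
    # Upper bound on |t1 OR t2 OR ...|: postings may overlap,
    # so the true result can only be smaller.
    return sum(len(index[t]) for t in group)

def conjunct_order(index, groups):
    # groups like [["madding", "crowd"], ["ignoble", "strife"]];
    # process the cheapest (smallest estimated) OR-group first.
    return sorted(groups, key=lambda g: or_size_estimate(index, g))
```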

Advantages and disadvantages of the Boolean model
Advantages:
- Easy for the system to implement.
- Users get transparency: it is easy to understand why a document was or was not retrieved.
- Users get control: it is easy to determine whether the query is too specific (few results) or too broad (many results).
Disadvantages:
- The burden is on the user to formulate a good Boolean query.

A search engine envisioned in 1945
The memex (memory extender) is the hypothetical proto-hypertext system that Vannevar Bush described in his 1945 Atlantic Monthly article "As We May Think". Bush envisioned the memex as a device in which individuals would compress and store all of their books, records, and communications, "mechanized so that it may be consulted with exceeding speed and flexibility". The memex would provide "an enlarged intimate supplement to one's memory". The concept of the memex influenced the development of early hypertext systems, eventually leading to the creation of the World Wide Web. However, the memex was essentially a bookmark list over static microfilm pages, rather than a true hypertext system in which parts of pages would have internal structure beyond the common textual format.
