CS 572: Information Retrieval


CS 572: Information Retrieval Lecture 5: Term Weighting and Ranking Acknowledgment: Some slides in this lecture are adapted from Chris Manning (Stanford) and Doug Oard (Maryland)

Lecture Plan: Skip for now index optimization (distributed, tiered, caching); we will return to it later. Term weighting. Vector space model of IR.

Relevance: Relevance relates a topic and a document. Duplicates are equally relevant, by definition; relevance is constant over time and across users. Pertinence relates a task and a document; it accounts for quality, complexity, language, etc. Utility relates a user and a document; it accounts for prior knowledge (e.g., the search session). We want utility, but (for now) we get relevance.

Advantages of Ranked Retrieval Closer to the way people think Some documents are better than others Enriches browsing behavior Decide how far down the list to go as you read it Allows more flexible queries Long and short queries can produce useful results

Ch. 6 Scoring as the basis of ranked retrieval We wish to return in order the documents most likely to be useful to the searcher How can we rank-order the documents in the collection with respect to a query? Assign a score say in [0, 1] to each document This score measures how well document and query match.

Attempt 1: Linear zone combinations. First generation of scoring methods: use a linear combination of Booleans: Score = 0.6*<sorting in Title> + 0.3*<sorting in Abstract> + 0.05*<sorting in Body> + 0.05*<sorting in Boldface>. Each expression such as <sorting in Title> takes on a value in {0,1}, so the overall score is in [0,1]. For this example the scores can only take on a finite set of values: what are they?

Linear zone combinations. The expressions between <> on the last slide could be any Boolean query. Who generates the Score expression (with weights such as 0.6 etc.)? In uncommon cases the user, through the UI. Most commonly, a query parser that takes the user's Boolean query and runs it on the indexes for each zone. Weights are determined from user studies and hardcoded into the query parser.

Exercise, semantics of OR: is having both terms appear better than having only one term? For the query bill OR rights, with zone weights Author: 0.6, Title: 0.3, Body: 0.1, suppose that we retrieve the following docs from the various zone indexes:

Author: bill -> 1, 2;        rights -> (none)
Title:  bill -> 3, 5, 8;     rights -> 3, 5, 9
Body:   bill -> 1, 2, 5, 9;  rights -> 3, 5, 8, 9

Compute the score for each doc based on the zone weightings 0.6, 0.3, 0.1.
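A minimal sketch of how the zone-weighted score for this exercise could be computed. The postings lists and weights follow the reconstructed table above; the function and variable names are illustrative, not from the original slides.

```python
# Hedged sketch: zone-weighted Boolean scoring for the query "bill OR rights".
ZONE_WEIGHTS = {"author": 0.6, "title": 0.3, "body": 0.1}

postings = {  # (term, zone) -> set of matching doc ids, from the exercise above
    ("bill", "author"): {1, 2},
    ("bill", "title"): {3, 5, 8},
    ("bill", "body"): {1, 2, 5, 9},
    ("rights", "title"): {3, 5, 9},
    ("rights", "body"): {3, 5, 8, 9},
}

def zone_score(doc_id, query_terms):
    """Add a zone's weight once if ANY query term matches in that zone (Boolean OR)."""
    score = 0.0
    for zone, weight in ZONE_WEIGHTS.items():
        if any(doc_id in postings.get((t, zone), set()) for t in query_terms):
            score += weight
    return score

docs = {d for s in postings.values() for d in s}
for d in sorted(docs):
    print(d, round(zone_score(d, ["bill", "rights"]), 2))
# e.g. doc 3 matches in title and body -> 0.3 + 0.1 = 0.4; docs 1 and 2 score 0.7
```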

General idea We are given a weight vector whose components sum up to 1. There is a weight for each zone/field. Given a Boolean query, we assign a score to each doc by adding up the weighted contributions of the zones/fields.

Index support for zone combinations. In the simplest version we have a separate inverted index for each zone. Variant: have a single index with a separate dictionary entry for each term and zone, e.g.:

bill.author -> 1, 2
bill.title  -> 3, 5, 8
bill.body   -> 1, 2, 5, 9

Zone combinations index. The above scheme is still wasteful: each term is potentially replicated for each zone. In a slightly better scheme, we encode the zone in the postings:

bill -> 1.author, 1.body | 2.author, 2.body | 3.title

Zone names get compressed. At query time, accumulate contributions to the total score of a document from the various postings.

Score accumulation example:

bill   -> 1.author, 1.body | 2.author, 2.body | 3.title
rights -> 3.title, 3.body  | 5.title, 5.body

Accumulated scores: doc 1: 0.7, doc 2: 0.7, doc 3: 0.4, doc 5: 0.4.

As we walk the postings for the query bill OR rights, we accumulate scores for each doc in a linear merge as before. Note: we get both bill and rights in the Title field of doc 3, but score it no higher.

The Perfect Query Paradox. Every information need has a perfect doc set (if not, there would be no sense doing retrieval). Almost every document set has a perfect query: AND every word to get a query for document 1; repeat for each document in the set; OR every document query to get the set query. But users find Boolean query formulation hard: they get too much, too little, useless stuff, ...

Why Boolean Retrieval Fails. Natural language is way more complex: She saw the man on the hill with a telescope. AND discovers nonexistent relationships (terms in different paragraphs, chapters, ...). Guessing terminology for OR is hard: good, nice, excellent, outstanding, awesome, ... Guessing terms to exclude is even harder! (Democratic party, party to a lawsuit, ...)

Ch. 6 Problem with Boolean search: feast or famine. Boolean queries often result in either too few (=0) or too many (1000s) results. Query 1: "standard user dlink 650": 200,000 hits. Query 2: "standard user dlink 650 no card found": 0 hits. It takes a lot of skill to come up with a query that produces a manageable number of hits: AND gives too few; OR gives too many.

Boolean IR: Strengths and Weaknesses. Strong points: accurate, if you know the right strategies; efficient for the computer. Weaknesses: often results in too many documents, or none; users must learn Boolean logic; sometimes finds relationships that don't exist; words can have many meanings; choosing the right words is sometimes hard.

Ranked Retrieval Paradigm Exact match retrieval often gives useless sets No documents at all, or way too many documents Query reformulation is one solution Manually add or delete query terms Best-first ranking can be superior Select every document within reason Put them in order, with the best ones first Display them one screen at a time

Ch. 6 Feast or famine: not a problem in ranked retrieval. When a system produces a ranked result set, the size of the result set is not an issue. We just show the top k (≈10) results, so we don't overwhelm the user. Premise: the ranking algorithm works.

Free Text Queries. We just scored the Boolean query bill OR rights. Most users are more likely to type bill rights or bill of rights. How do we interpret these free-text queries? No Boolean connectives. Of several query terms, some may be missing in a doc. Only some query terms may occur in the title, etc.

Incidence matrices. Recall: a document (or a zone in it) is a binary vector X in {0,1}^|V|; the query is also such a vector Y. Score: overlap measure |X ∩ Y|.

            Antony&Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony            1                 1             0          0       0        1
Brutus            1                 1             0          1       0        0
Caesar            1                 1             0          1       1        1
Calpurnia         0                 1             0          0       0        0
Cleopatra         1                 0             0          0       0        0
mercy             1                 0             1          1       1        1
worser            1                 0             1          1       1        0

Example: On the query ides of march, Shakespeare's Julius Caesar has a score of 3. All other Shakespeare plays have a score of 2 (because they contain march) or 1. Thus in a rank order, Julius Caesar would be 1st.

Overlap matching: What's wrong with the overlap measure? It doesn't consider: term frequency in the document; term rarity in the collection (document mention frequency): of is more common than ides or march; length of documents (and of queries: the score is not normalized).

Overlap matching: One can normalize in various ways. Jaccard coefficient: |X ∩ Y| / |X ∪ Y|. Cosine measure: |X ∩ Y| / sqrt(|X| * |Y|). What documents would score highest using Jaccard against a typical query? Does the cosine measure fix this problem?
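To make the two normalizations concrete, here is a minimal sketch (not from the slides) that scores a query against a document, both treated as sets of terms; the example strings are illustrative.

```python
# Hedged sketch: Jaccard vs. cosine normalization over term sets.
import math

def jaccard(x, y):
    """|X ∩ Y| / |X ∪ Y|"""
    x, y = set(x), set(y)
    return len(x & y) / len(x | y) if x | y else 0.0

def binary_cosine(x, y):
    """|X ∩ Y| / sqrt(|X| * |Y|)"""
    x, y = set(x), set(y)
    if not x or not y:
        return 0.0
    return len(x & y) / math.sqrt(len(x) * len(y))

doc = "the ides of march are come".split()
query = "ides of march".split()
print(jaccard(query, doc), binary_cosine(query, doc))  # 0.5 and ~0.71
```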

Term Weighting: Empirical Motivation. During retrieval: find the relevant postings based on query terms; manipulate the postings based on the query; return appropriate documents. Example with Boolean queries. What about postings for unimportant terms ("a", "the", "not", ...)?

Reuters RCV1 Collection Sec. 4.2

Sec. 4.2 Reuters RCV1 statistics

symbol  statistic                                     value
N       documents                                     800,000
L       avg. # tokens per doc                         200
M       terms (= word types)                          400,000
        avg. # bytes per token (incl. spaces/punct.)  6
        avg. # bytes per token (w/out spaces/punct.)  4.5
        avg. # bytes per term                         7.5
        non-positional postings                       100,000,000

Note: 4.5 bytes per word token vs. 7.5 bytes per word type.

Term Frequency: Zipf's Law. George Kingsley Zipf (1902-1950) observed that for many frequency distributions, the nth most frequent event is related to its frequency as: f * r ≈ c, where f = frequency, r = rank, and c is a constant; equivalently, f ≈ c / r.

[Figure: Zipfian distribution, plotted as frequency vs. rank and as log(frequency) vs. log(rank)]

Zipf's law for Reuters RCV1 (Sec. 5.1)

Zipfian Distribution. Key points: a few elements occur very frequently; a medium number of elements have medium frequency; many elements occur very infrequently. Why do we care? English word frequencies follow Zipf's Law.

Word Frequency in English. Frequency of the 50 most common words in English (sample of 19 million words):

the   1130021    from     96900    or      54958
of     547311    he       94585    about   53713
to     516635    million  93515    market  52110
a      464736    year     90104    they    51359
in     390819    its      86774    this    50933
and    387703    be       85588    would   50828
that   204351    was      83398    you     49281
for    199340    company  83070    which   48273
is     152483    an       76974    bank    47940
said   148302    has      74405    stock   47401
it     134323    are      74097    trade   47310
on     121173    have     73132    his     47116
by     118863    but      71887    more    46244
as     109135    will     71494    who     42142
at     101779    say      66807    one     41635
mr     101679    new      64456    their   40910
with   101210    share    63925

Does it fit Zipf's Law? The following shows r*f*1000/n, where r is the rank of word w in the sample, f is the frequency of word w in the sample, and n is the total number of word occurrences in the sample:

the   59    from     92    or      101
of    58    he       95    about   102
to    82    million  98    market  101
a     98    year    100    they    103
in   103    its     100    this    105
and  122    be      104    would   107
that  75    was     105    you     106
for   84    company 109    which   107
is    72    an      105    bank    109
said  78    has     106    stock   110
it    78    are     109    trade   112
on    77    have    112    his     114
by    81    but     114    more    114
as    80    will    117    who     106
at    80    say     113    one     107
mr    86    new     112    their   108
with  91    share   114
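A short sketch of the check described above: multiply each word's rank by its frequency, scale by 1000/n, and see whether the product is roughly constant. The counts here are just the first few rows of the table above; a real check would use the full 19-million-word sample.

```python
# Hedged sketch: checking Zipf's law by computing r * f * 1000 / n per word.
counts = {"the": 1130021, "of": 547311, "to": 516635, "a": 464736, "in": 390819}
n = 19_000_000  # total word occurrences in the sample (from the slide)

ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
for r, (word, f) in enumerate(ranked, start=1):
    print(word, round(r * f * 1000 / n))  # 59, 58, 82, 98, 103 as in the table
# Under Zipf's law, f ≈ c / r, so r * f (and hence this product) stays roughly constant.
```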

Explanation for Zipfian distributions. Zipf's own explanation ("least effort" principle): the speaker's goal is to minimise effort by using a few distinct words as frequently as possible; the hearer's goal is to maximise clarity by having as large a vocabulary as possible. Update: the Zipfian distribution describes phrases better than words (worth a Nature paper!?!): http://www.nature.com/articles/srep12209

Ch. 6 Issues with Jaccard for scoring. It doesn't consider term frequency (how many times a term occurs in a document). Rare terms in a collection are more informative than frequent terms; Jaccard doesn't consider this information. We need a more sophisticated way of normalizing for length. Later in this lecture, we'll use |A ∩ B| / sqrt(|A| * |B|) instead of |A ∩ B| / |A ∪ B| (Jaccard) for length normalization.

Sec. 6.2 Term-document count matrices. Consider the number of occurrences of a term in a document: each document is a count vector in N^|V| (a column below).

            Antony&Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony           157               73             0          0       0        0
Brutus             4              157             0          1       0        0
Caesar           232              227             0          2       1        1
Calpurnia          0               10             0          0       0        0
Cleopatra         57                0             0          0       0        0
mercy              2                0             3          5       5        1
worser             2                0             1          1       1        0

Bag of Words model. The vector representation doesn't consider the ordering of words in a document: John is quicker than Mary and Mary is quicker than John have the same vectors. This is called the bag of words model. In a sense, this is a step back: the positional index was able to distinguish these two documents. We will look at recovering positional information later in this course. For now: the bag of words model.

Term frequency tf. The term frequency tf_{t,d} of term t in document d is defined as the number of times that t occurs in d. We want to use tf when computing query-document match scores. But how? Raw term frequency is not what we want: a document with 10 occurrences of the term is more relevant than a document with 1 occurrence of the term, but not 10 times more relevant. Relevance does not increase proportionally with term frequency. NB: frequency = count in IR.

Sec. 6.2 Log-frequency tf weighting. The log-frequency weight of term t in d is: w_{t,d} = 1 + log10(tf_{t,d}) if tf_{t,d} > 0, and 0 otherwise. Examples: tf 0 -> 0, 1 -> 1, 2 -> 1.3, 10 -> 2, 1000 -> 4, etc. Score for a document-query pair: sum over terms t in both q and d: Score(q,d) = sum over t in q ∩ d of (1 + log10 tf_{t,d}). The score is 0 if none of the query terms is present in the document.
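A minimal sketch of the log-frequency weight and the resulting query-document score defined above; the tf counts in the usage line are illustrative (taken from the count-matrix slide), and the function names are my own.

```python
# Hedged sketch: sublinear (log-frequency) tf weighting and the matching score.
import math

def log_tf_weight(tf):
    """w_{t,d} = 1 + log10(tf) if tf > 0, else 0."""
    return 1 + math.log10(tf) if tf > 0 else 0.0

def score(query_terms, doc_tf):
    """Sum of log-tf weights over terms that occur in both query and document."""
    return sum(log_tf_weight(doc_tf.get(t, 0)) for t in query_terms)

print(log_tf_weight(1), log_tf_weight(2), log_tf_weight(10), log_tf_weight(1000))
# 1.0, ~1.30, 2.0, 4.0 -- matching the examples above
print(score(["antony", "brutus"], {"antony": 157, "brutus": 4}))
```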

Weighting should depend on the term overall Which of these tells you more about a doc? 10 occurrences of hernia? 10 occurrences of the? Would like to attenuate weights of common terms But what is common? Can use collection frequency (cf ) The total number of occurrences of the term in the entire collection of documents

Document frequency df. Document frequency (df) may be better: df = number of docs in the corpus containing the term.

Word       cf     df
try        10422  8760
insurance  10440  3997

So how do we make use of df?

Sec. 6.2.1 Document frequency Rare terms are more informative than frequent terms the, a, of, Consider a term in the query that is rare in the collection (e.g., arachnocentric) A document containing this term is very likely to be relevant to the query arachnocentric We want a high weight for rare terms like arachnocentric.

Sec. 6.2.1 idf weight. df_t is the document frequency of t: the number of documents that contain t. df_t is an inverse measure of the informativeness of t, and df_t <= N. We define the idf (inverse document frequency) of t by idf_t = log10(N/df_t). The base of the log is not important. We use log(N/df_t) instead of N/df_t to dampen the effect of idf.

Sec. 6.2.1 idf example, suppose N = 1 million. idf_t = log10(N/df_t).

term       df_t       idf_t
calpurnia  1          6
animal     100        4
sunday     1,000      3
fly        10,000     2
under      100,000    1
the        1,000,000  0

There is one idf value for each term t in a collection.
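A short sketch of idf_t = log10(N/df_t); it reproduces the table above for N = 1,000,000. The function name is illustrative.

```python
# Hedged sketch: inverse document frequency, idf_t = log10(N / df_t).
import math

def idf(df, N):
    return math.log10(N / df)

N = 1_000_000
for term, df in [("calpurnia", 1), ("animal", 100), ("sunday", 1_000),
                 ("fly", 10_000), ("under", 100_000), ("the", 1_000_000)]:
    print(term, idf(df, N))  # 6.0, 4.0, 3.0, 2.0, 1.0, 0.0 -- one idf value per term
```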

Effect of idf on ranking: Does idf have an effect on ranking for one-term queries, like iphone?

Effect of idf on ranking: Does idf have an effect on ranking for one-term queries, like iphone? idf has no effect on ranking for one-term queries. idf affects the ranking of documents for queries with at least two terms. For the query capricious person, idf weighting makes occurrences of capricious count for much more in the final document ranking than occurrences of person.

Sec. 6.2.1 Collection vs. Document frequency. The collection frequency of t is the number of occurrences of t in the collection, counting multiple occurrences. Example:

Word       Collection frequency  Document frequency
insurance  10440                 3997
try        10422                 8760

Which word is a better search term (and should get a higher weight)?

Sec. 6.2.2 tf-idf weighting. The tf-idf weight of a term is the product of its tf weight and its idf weight: w_{t,d} = (1 + log10 tf_{t,d}) * log10(N/df_t). Best known weighting scheme in information retrieval. Alternative names: tf.idf, tf x idf. Increases with the number of occurrences within a document. Increases with the rarity of the term in the collection.

Sec. 6.2.2 Final ranking of documents for a query: Score(q,d) = sum over t in q ∩ d of tf-idf_{t,d}.
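Combining the two pieces above (log tf and idf) into the summed score; a compact sketch with illustrative names.

```python
# Hedged sketch: Score(q, d) = sum over t in (q ∩ d) of tf-idf_{t,d}.
import math

def tf_idf(tf, df_t, N):
    """(1 + log10 tf) * log10(N / df_t), or 0 if the term is absent from the doc."""
    return (1 + math.log10(tf)) * math.log10(N / df_t) if tf > 0 else 0.0

def score(query_terms, doc_tf, df, N):
    """doc_tf: term -> count in the document; df: term -> document frequency."""
    return sum(tf_idf(doc_tf.get(t, 0), df.get(t, N), N) for t in query_terms)
```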

Sec. 6.3 Binary -> count -> weight matrix. Each document is now represented by a real-valued vector of tf-idf weights in R^|V|.

            Antony&Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony           5.25              3.18           0          0       0       0.35
Brutus           1.21              6.1            0          1       0       0
Caesar           8.59              2.54           0          1.51    0.25    0
Calpurnia        0                 1.54           0          0       0       0
Cleopatra        2.85              0              0          0       0       0
mercy            1.51              0              1.9        0.12    5.25    0.88
worser           1.37              0              0.11       4.15    0.25    1.95

Sec. 6.3 Documents as vectors. We have a |V|-dimensional vector space. Terms are axes of the space. Documents are points or vectors in this space. Very high-dimensional: tens of millions of dimensions when you apply this to a web search engine. Very sparse vectors: most entries are zero.

Why turn docs into vectors? First application: Query-by-example Given a doc D, find others like it. What are some applications? Now that D is a vector, find vectors (docs) near it.

Intuition: [Figure: documents d1, ..., d5 plotted as vectors over term axes t1, t2, t3, with angles θ and φ between them.] Postulate: Documents that are close together in the vector space are about the same things.

Sec. 6.3 Queries as vectors Key idea 1: Do the same for queries: represent them as vectors in the space Key idea 2: Rank documents according to their proximity to the query in this space proximity = similarity of vectors proximity inverse of distance Recall: We do this because we want to get away from the all-or-nothing Boolean model. Instead: rank more relevant documents higher than less relevant documents

Sec. 6.3 Formalizing vector space proximity First cut: distance between two points ( = distance between the end points of the two vectors) Euclidean distance? Euclidean distance is a bad idea...... because Euclidean distance is large for vectors of different lengths.

Desiderata for proximity: If d1 is near d2, then d2 is near d1 (symmetry). If d1 is near d2, and d2 is near d3, then d1 is not far from d3 (a weak form of transitivity). No doc is closer to d than d itself (identity).

Sec. 6.3 Why distance is a bad idea: The Euclidean distance between q and d2 is large even though the distribution of terms in the query q and the distribution of terms in the document d2 are very similar.

Sec. 6.3 Use angle instead of distance. Thought experiment: take a document d and append it to itself; call this document d'. Semantically d and d' have the same content. The Euclidean distance between the two documents can be quite large, but the angle between the two documents is 0, corresponding to maximal similarity. Key idea: rank documents according to their angle with the query.

Sec. 6.3 From angles to cosines. The following two notions are equivalent: rank documents in decreasing order of the angle between query and document; rank documents in increasing order of cosine(query, document). Cosine is a monotonically decreasing function on the interval [0°, 180°].

Sec. 6.3 From angles to cosines But how and why should we be computing cosines?

Sec. 6.3 Length normalization. A vector can be (length-)normalized by dividing each of its components by its length; for this we use the L2 norm: ||x||_2 = sqrt(sum_i x_i^2). Dividing a vector by its L2 norm makes it a unit (length) vector (on the surface of the unit hypersphere). Effect on the two documents d and d' (d appended to itself) from the earlier slide: they have identical vectors after length-normalization. Long and short documents now have comparable weights.

Sec. 6.3 cosine(query, document): cos(q,d) = (q · d) / (|q| |d|) = (q/|q|) · (d/|d|) = sum_{i=1..|V|} q_i d_i / ( sqrt(sum_{i=1..|V|} q_i^2) * sqrt(sum_{i=1..|V|} d_i^2) ), i.e., the dot product of the two unit vectors. Here q_i is the tf-idf weight of term i in the query and d_i is the tf-idf weight of term i in the document. cos(q,d) is the cosine similarity of q and d or, equivalently, the cosine of the angle between q and d.

Cosine for length-normalized vectors: For length-normalized vectors, cosine similarity is simply the dot product (or scalar product): cos(q,d) = q · d = sum_{i=1..|V|} q_i d_i, for q, d length-normalized.
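A minimal sketch of this reduction: normalize both vectors once, then the cosine is just a dot product. The vectors and helper names are illustrative.

```python
# Hedged sketch: cosine similarity; for unit vectors it is just the dot product.
import math

def normalize(v):
    """Divide each component by the vector's L2 norm."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm else v

def dot(q, d):
    return sum(qi * di for qi, di in zip(q, d))

q = normalize([0.0, 1.0, 1.0])
d = normalize([1.0, 2.0, 0.0])
print(dot(q, d))  # cos(q, d) for the two length-normalized vectors
```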

Cosine Similarity Example

Sec. 6.3 Cosine similarity amongst 3 documents. How similar are the novels SaS: Sense and Sensibility, PaP: Pride and Prejudice, and WH: Wuthering Heights? Term frequencies (counts):

term       SaS  PaP  WH
affection  115  58   20
jealous    10   7    11
gossip     2    0    6
wuthering  0    0    38

Note: To simplify this example, we don't do idf weighting.

Sec. 6.3 3 documents example (cont'd). Log frequency weighting:

term       SaS   PaP   WH
affection  3.06  2.76  2.30
jealous    2.00  1.85  2.04
gossip     1.30  0     1.78
wuthering  0     0     2.58

After length normalization:

term       SaS    PaP    WH
affection  0.789  0.832  0.524
jealous    0.515  0.555  0.465
gossip     0.335  0      0.405
wuthering  0      0      0.588

cos(SaS,PaP) ≈ 0.789*0.832 + 0.515*0.555 + 0.335*0.0 + 0.0*0.0 ≈ 0.94
cos(SaS,WH) ≈ 0.79
cos(PaP,WH) ≈ 0.69

Why do we have cos(SaS,PaP) > cos(SaS,WH)?
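A sketch that reproduces the SaS/PaP/WH numbers above from the raw term counts: log-frequency weighting, length normalization, then dot products. Helper names are my own.

```python
# Hedged sketch: reproducing the three-novels cosine example from raw counts.
import math

counts = {  # term frequencies from the table above
    "SaS": {"affection": 115, "jealous": 10, "gossip": 2, "wuthering": 0},
    "PaP": {"affection": 58, "jealous": 7, "gossip": 0, "wuthering": 0},
    "WH":  {"affection": 20, "jealous": 11, "gossip": 6, "wuthering": 38},
}

def log_tf_vector(tfs):
    return {t: 1 + math.log10(tf) if tf > 0 else 0.0 for t, tf in tfs.items()}

def normalize(vec):
    norm = math.sqrt(sum(w * w for w in vec.values()))
    return {t: w / norm for t, w in vec.items()}

def cosine(a, b):
    return sum(a[t] * b[t] for t in a)

vecs = {name: normalize(log_tf_vector(tfs)) for name, tfs in counts.items()}
print(round(cosine(vecs["SaS"], vecs["PaP"]), 2))  # ~0.94
print(round(cosine(vecs["SaS"], vecs["WH"]), 2))   # ~0.79
print(round(cosine(vecs["PaP"], vecs["WH"]), 2))   # ~0.69
```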

Sec. 6.3 Computing cosine scores
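The original slide shows the term-at-a-time cosine scoring pseudocode as an image (along the lines of the CosineScore algorithm in MRS). A hedged Python sketch of that idea using score accumulators over a postings index; the index layout, parameter names, and return convention here are my assumptions, not the slide's.

```python
# Hedged sketch of term-at-a-time cosine scoring with score accumulators.
import heapq

def cosine_scores(query_weights, index, doc_lengths, k=10):
    """
    query_weights: {term: w_{t,q}}
    index: {term: [(doc_id, w_{t,d}), ...]}  -- postings with precomputed weights
    doc_lengths: {doc_id: L2 norm of the document vector}
    """
    scores = {}                        # one accumulator per candidate document
    for term, w_tq in query_weights.items():
        for doc_id, w_td in index.get(term, []):
            scores[doc_id] = scores.get(doc_id, 0.0) + w_tq * w_td
    for doc_id in scores:              # length-normalize the accumulated dot products
        scores[doc_id] /= doc_lengths[doc_id]
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])
```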

Sec. 6.4 tf-idf weighting has many variants Columns headed n are acronyms for weight schemes. Why is the base of the log in idf immaterial?

Sec. 6.4 Weighting may differ in queries vs. documents. Many search engines allow for different weightings for queries vs. documents. SMART notation: denotes the combination in use in an engine with the notation ddd.qqq, using the acronyms from the previous table. A very standard weighting scheme is lnc.ltc. Document: logarithmic tf (l as first character), no idf (a bad idea?), and cosine normalization. Query: logarithmic tf (l in leftmost column), idf (t in second column), cosine normalization.

Sec. 6.4 tf-idf example: lnc.ltc. Document: car insurance auto insurance. Query: best car insurance.

           Query                                        Document                     Prod
Term       tf-raw  tf-wt  df     idf  wt   n'lized      tf-raw  tf-wt  wt   n'lized
auto       0       0      5000   2.3  0    0            1       1      1    0.52     0
best       1       1      50000  1.3  1.3  0.34         0       0      0    0        0
car        1       1      10000  2.0  2.0  0.52         1       1      1    0.52     0.27
insurance  1       1      1000   3.0  3.0  0.78         2       1.3    1.3  0.68     0.53

Exercise: what is N, the number of docs? Doc length = sqrt(1^2 + 0^2 + 1^2 + 1.3^2) ≈ 1.92. Score = 0 + 0 + 0.27 + 0.53 = 0.8.
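A sketch reproducing the lnc.ltc arithmetic above: the document side uses log tf with cosine normalization and no idf; the query side uses log tf, idf, and cosine normalization. N is an assumption on my part (the slide leaves it as an exercise; the idf values in the table imply N = 1,000,000).

```python
# Hedged sketch: lnc.ltc scoring for query "best car insurance"
# against document "car insurance auto insurance".
import math

N = 1_000_000  # assumed: implied by the idf values in the table above
df = {"auto": 5000, "best": 50_000, "car": 10_000, "insurance": 1000}
query_tf = {"best": 1, "car": 1, "insurance": 1}
doc_tf = {"car": 1, "insurance": 2, "auto": 1}

def log_tf(tf):
    return 1 + math.log10(tf) if tf > 0 else 0.0

def normalize(vec):
    norm = math.sqrt(sum(w * w for w in vec.values()))
    return {t: w / norm for t, w in vec.items()}

# ltc query: log tf * idf, then cosine-normalize
q = normalize({t: log_tf(tf) * math.log10(N / df[t]) for t, tf in query_tf.items()})
# lnc document: log tf only (no idf), then cosine-normalize
d = normalize({t: log_tf(tf) for t, tf in doc_tf.items()})

score = sum(q[t] * d.get(t, 0.0) for t in q)
print(round(score, 2))  # ~0.8, matching the worked example
```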

Summary vector space ranking Represent the query as a weighted tf-idf vector Represent each document as a weighted tf-idf vector Compute the cosine similarity score for the query vector and each document vector Rank documents with respect to the query by score Return the top K (e.g., K = 10) to the user

Ch. 6 Resources for today's lecture: MRS Ch. 4, Ch. 6.1-6.4.