Information Retrieval


Introduction to Information Retrieval. CS276: Information Retrieval and Web Search. Christopher Manning and Pandu Nayak. Spelling Correction

The course thus far: index construction, index compression, efficient Boolean querying (Chapters 1, 2, 4, 5; Coursera lectures 1, 2, 3, 4). Spelling correction: Chapter 3; Coursera lecture 5 (mainly some parts); this lecture (PA #2!)

Applications for spelling correction: word processing, phones, web search

Rates of spelling errors. Depending on the application, roughly 1-20% error rates:
26%: Web queries (Wang et al. 2003)
13%: Retyping, no backspace (Whitelaw et al., English & German)
7%: Words corrected while retyping on a phone-sized organizer
2%: Words uncorrected on organizer (Soukoreff & MacKenzie 2003)
1-2%: Retyping (Kane and Wobbrock 2007, Gruden et al. 1983)

Spelling tasks. Spelling error detection. Spelling error correction: autocorrect (hte → the), suggest a correction, suggestion lists

Types of spelling errors:
Non-word errors: graffe → giraffe
Real-word errors: typographical errors (three → there); cognitive errors (homophones): piece → peace, too → two, your → you're
Non-word correction was historically mainly context insensitive; real-word correction almost always needs to be context sensitive

Non-word spelling errors. Non-word spelling error detection: any word not in a dictionary is an error. The larger the dictionary the better, up to a point (the Web is full of misspellings, so the Web isn't necessarily a great dictionary). Non-word spelling error correction: generate candidates (real words that are similar to the error), then choose the one which is best: shortest weighted edit distance, or highest noisy channel probability.

Real-word & non-word spelling errors. For each word w, generate a candidate set: find candidate words with similar pronunciations, find candidate words with similar spellings, and include w itself in the candidate set. Then choose the best candidate, taking the noisy channel view of spelling errors. This is context-sensitive, so we have to consider whether the surrounding words make sense: Flying form Heathrow to LAX → Flying from Heathrow to LAX

Terminology. These are character bigrams: st, pr, an. These are word bigrams: palo alto, flying from, road repairs. Similarly trigrams, k-grams, etc. In today's class, we will generally deal with word bigrams; in the accompanying Coursera lecture, we mostly deal with character bigrams (because we cover material complementary to what we're discussing here).

The Noisy Channel Model of Spelling INDEPENDENT WORD SPELLING CORRECTION

Noisy Channel Intuition

Noisy Channel = Bayes' Rule. We see an observation x of a misspelled word and want to find the correct word ŵ:

ŵ = argmax_{w ∈ V} P(w|x)
  = argmax_{w ∈ V} P(x|w) P(w) / P(x)    (Bayes' rule)
  = argmax_{w ∈ V} P(x|w) P(w)           (P(x) is constant over candidates)
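
To make the argmax concrete, here is a minimal Python sketch (not from the slides; the candidates, channel_prob and prior functions are hypothetical placeholders sketched later):

```python
def correct(x, candidates, channel_prob, prior):
    """Noisy channel decoder: pick w maximizing P(x|w) * P(w).

    x            -- the observed (possibly misspelled) word
    candidates   -- function returning a set of candidate words for x
    channel_prob -- function giving P(x|w), the error/channel model
    prior        -- function giving P(w), the language model
    """
    # P(x) is the same for every candidate, so it drops out of the argmax.
    return max(candidates(x), key=lambda w: channel_prob(x, w) * prior(w))
```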

History: the noisy channel for spelling was proposed around 1990.
IBM: Mays, Eric, Fred J. Damerau and Robert L. Mercer. 1991. Context based spelling correction. Information Processing and Management, 23(5), 517-522.
AT&T Bell Labs: Kernighan, Mark D., Kenneth W. Church, and William A. Gale. 1990. A spelling correction program based on a noisy channel model. Proceedings of COLING 1990, 205-210.

Non-word spelling error example: acress

Candidate generation. Words with similar spelling: small edit distance to the error. Words with similar pronunciation: small pronunciation distance to the error. In this class lecture we mostly won't dwell on efficient candidate generation; there is a lot more about candidate generation in the accompanying Coursera material.

Candidate testing: Damerau-Levenshtein edit distance. Minimal edit distance between two strings, where edits are: insertion, deletion, substitution, and transposition of two adjacent letters. See IIR sec. 3.3.3 for edit distance.
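
As an illustration (not code from the slides), one way to compute the restricted Damerau-Levenshtein distance in Python, counting insertions, deletions, substitutions, and adjacent transpositions each at cost 1:

```python
def damerau_levenshtein(s, t):
    """Restricted Damerau-Levenshtein distance between strings s and t."""
    m, n = len(s), len(t)
    # d[i][j] = edit distance between s[:i] and t[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
            # transposition of two adjacent letters
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]

# e.g., damerau_levenshtein("acress", "caress") == 1  (one transposition)
```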

Words within edit distance 1 of acress:
Error  | Candidate Correction | Correct Letter | Error Letter | Type
acress | actress              | t              | -            | deletion
acress | cress                | -              | a            | insertion
acress | caress               | ca             | ac           | transposition
acress | access               | c              | r            | substitution
acress | across               | o              | e            | substitution
acress | acres                | -              | s            | insertion

Candidate generation. 80% of errors are within edit distance 1; almost all errors are within edit distance 2. Also allow insertion of space or hyphen: thisidea → this idea, inlaw → in-law. Can also allow merging words: data base → database. For short texts like a query, we can just regard the whole string as one item from which to produce edits.
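
A minimal sketch (mine, not the slides') of the split and merge candidates just described, assuming a dictionary given as a set of known words:

```python
def split_candidates(token, dictionary):
    """Candidates formed by inserting a space or hyphen, e.g. inlaw -> in-law."""
    out = set()
    for i in range(1, len(token)):
        left, right = token[:i], token[i:]
        if left in dictionary and right in dictionary:
            out.add(left + " " + right)   # thisidea -> this idea
            out.add(left + "-" + right)   # inlaw    -> in-law
    return out

def merge_candidate(tok1, tok2, dictionary):
    """Candidate formed by merging adjacent words, e.g. data base -> database."""
    merged = tok1 + tok2
    return {merged} if merged in dictionary else set()
```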

How do you generate the candidates?
1. Run through the dictionary, checking the edit distance to each word
2. Generate all words within edit distance k (e.g., k = 1 or 2) and then intersect them with the dictionary
3. Use a character k-gram index and find dictionary words that share the most k-grams with the word (e.g., by Jaccard coefficient); see IIR sec. 3.3.4
4. Compute them fast with a Levenshtein finite state transducer
5. Have a precomputed map of words to possible corrections
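
As an illustration of option 2 above, a sketch in the style of Peter Norvig's well-known spelling corrector (not code from the slides): generate every string within edit distance 1 and keep only the dictionary words.

```python
import string

def edits1(word, alphabet=string.ascii_lowercase):
    """All strings one insertion, deletion, substitution, or transposition away from word."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces   = [L + c + R[1:] for L, R in splits if R for c in alphabet]
    inserts    = [L + c + R for L, R in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def candidates(word, dictionary):
    """Dictionary words within edit distance 1 of word, plus word itself if it is known."""
    known = {word} & dictionary
    near = edits1(word) & dictionary
    # for edit distance 2: {e2 for e1 in edits1(word) for e2 in edits1(e1)} & dictionary
    return known | near or {word}   # fall back to the word itself if nothing matched
```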

A paradigm. We want the best spell corrections. Instead of finding the very best, we find a subset of pretty good corrections (say, edit distance at most 2) and then find the best amongst them. These may not be the actual best. This is a recurring paradigm in IR, including finding the best docs for a query, best answers, best ads: find a good candidate set, then find the top K amongst them and return them as the best.

Let's say we've generated candidates: now back to Bayes' Rule. We see an observation x of a misspelled word and want to find the correct word ŵ:

ŵ = argmax_{w ∈ V} P(w|x) = argmax_{w ∈ V} P(x|w) P(w) / P(x) = argmax_{w ∈ V} P(x|w) P(w)

What's P(w)?

Language Model. Take a big supply of words (your document collection with T tokens); let C(w) = # occurrences of w. Then

P(w) = C(w) / T

In other applications you can take the supply to be typed queries (suitably filtered) when a static dictionary is inadequate.
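
A minimal sketch (mine, not the lecture's code) of this unigram estimate using Python's Counter:

```python
from collections import Counter

def unigram_lm(tokens):
    """Build P(w) = C(w) / T from a list of tokens (the document collection)."""
    counts = Counter(tokens)
    total = sum(counts.values())          # T, the total number of tokens
    return lambda w: counts[w] / total

# Usage: P = unigram_lm(corpus_tokens); trained on COCA, P("across") would
# approximate the .0002989314 figure in the table on the next slide.
```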

Unigram prior probability. Counts from 404,253,213 words in the Corpus of Contemporary American English (COCA):
word    | Frequency of word | P(w)
actress | 9,321             | .0000230573
cress   | 220               | .0000005442
caress  | 686               | .0000016969
access  | 37,038            | .0000916207
across  | 120,844           | .0002989314
acres   | 12,874            | .0000318463

Channel model probability, also called the error model probability or edit probability (Kernighan, Church, Gale 1990). Misspelled word x = x_1, x_2, x_3, ..., x_m; correct word w = w_1, w_2, w_3, ..., w_n. P(x|w) = probability of the edit (deletion/insertion/substitution/transposition).

Computing error probability: confusion matrices.
del[x,y]: count(xy typed as x)
ins[x,y]: count(x typed as xy)
sub[x,y]: count(y typed as x)
trans[x,y]: count(xy typed as yx)
Insertion and deletion are conditioned on the previous character.

Confusion matrix for substitution

Nearby keys

Generating the confusion matrix. Peter Norvig's list of errors; Peter Norvig's list of counts of single-edit errors. All of Peter Norvig's ngrams data links: http://norvig.com/ngrams/

Channel model (Kernighan, Church, Gale 1990):

P(x|w) = del[w_{i-1}, w_i] / count[w_{i-1} w_i]       if deletion
       = ins[w_{i-1}, x_i] / count[w_{i-1}]           if insertion
       = sub[x_i, w_i] / count[w_i]                   if substitution
       = trans[w_i, w_{i+1}] / count[w_i w_{i+1}]     if transposition
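
To make the case analysis concrete, a hedged Python sketch (not from the lecture), where del_, ins, sub, trans are the confusion-matrix dictionaries above and count / count2 give single-character and character-bigram counts in the training corpus:

```python
def channel_prob(edit_type, x, w, i, del_, ins, sub, trans, count, count2):
    """P(x|w) for a single edit at position i, following Kernighan, Church & Gale (1990).

    del_[a, b]  : count of 'ab' typed as 'a'      ins[a, b]   : count of 'a' typed as 'ab'
    sub[a, b]   : count of 'b' typed as 'a'       trans[a, b] : count of 'ab' typed as 'ba'
    count[a]    : corpus count of character a     count2[ab]  : corpus count of bigram ab
    """
    if edit_type == "deletion":        # w[i] was dropped after w[i-1]
        return del_[w[i - 1], w[i]] / count2[w[i - 1] + w[i]]
    if edit_type == "insertion":       # x[i] was inserted after w[i-1]
        return ins[w[i - 1], x[i]] / count[w[i - 1]]
    if edit_type == "substitution":    # x[i] typed instead of w[i]
        return sub[x[i], w[i]] / count[w[i]]
    if edit_type == "transposition":   # w[i] and w[i+1] swapped
        return trans[w[i], w[i + 1]] / count2[w[i] + w[i + 1]]
    raise ValueError(edit_type)
```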

Smoothing probabilities: Add-1 smoothing. But if we use the confusion matrix example, unseen errors are impossible! They'll make the overall probability 0. That seems too harsh; e.g., in Kernighan's chart q→a and a→q are both 0, even though they're adjacent on the keyboard! A simple solution is to add 1 to all counts and then, if there is an |A|-character alphabet, to normalize appropriately:

If substitution: P(x|w) = (sub[x, w] + 1) / (count[w] + |A|)

Channel model for acress:
Candidate Correction | Correct Letter | Error Letter | x|w   | P(x|w)
actress              | t              | -            | c|ct  | .000117
cress                | -              | a            | a|#   | .00000144
caress               | ca             | ac           | ac|ca | .00000164
access               | c              | r            | r|c   | .000000209
across               | o              | e            | e|o   | .0000093
acres                | -              | s            | es|e  | .0000321
acres                | -              | s            | ss|s  | .0000342

Noisy channel probability for acress:
Candidate Correction | Correct Letter | Error Letter | x|w   | P(x|w)     | P(w)       | 10^9 * P(x|w)P(w)
actress              | t              | -            | c|ct  | .000117    | .0000231   | 2.7
cress                | -              | a            | a|#   | .00000144  | .000000544 | .00078
caress               | ca             | ac           | ac|ca | .00000164  | .00000170  | .0028
access               | c              | r            | r|c   | .000000209 | .0000916   | .019
across               | o              | e            | e|o   | .0000093   | .000299    | 2.8
acres                | -              | s            | es|e  | .0000321   | .0000318   | 1.0
acres                | -              | s            | ss|s  | .0000342   | .0000318   | 1.0

Evaluation. Some spelling error test sets: Wikipedia's list of common English misspellings; the Aspell filtered version of that list; the Birkbeck spelling error corpus; Peter Norvig's list of errors (includes Wikipedia and Birkbeck, for training or testing).

Context-Sensitive Spelling Correction: SPELLING CORRECTION WITH THE NOISY CHANNEL

Real-word spelling errors:
leaving in about fifteen minuets to go to her house.
The design an construction of the system
Can they lave him my messages?
The study was conducted mainly be John Black.
25-40% of spelling errors are real words (Kukich 1992)

Context-sensitive spelling error fixing. For each word in the sentence (phrase, query, ...): generate a candidate set containing the word itself, all single-letter edits that are English words, and words that are homophones (all of this can be pre-computed!). Then choose the best candidates using the noisy channel model.

Noisy channel for real-word spell correction. Given a sentence w_1, w_2, w_3, ..., w_n, generate a set of candidates for each word w_i:
Candidate(w_1) = {w_1, w'_1, w''_1, w'''_1, ...}
Candidate(w_2) = {w_2, w'_2, w''_2, w'''_2, ...}
Candidate(w_n) = {w_n, w'_n, w''_n, w'''_n, ...}
Choose the sequence W that maximizes P(W).

Incorporating context words: context-sensitive spelling correction. Determining whether actress or across is appropriate will require looking at the context of use. We can do this with a better language model. You learned/can learn a lot about language models in CS124 or CS224N; here we present just enough to be dangerous/do the assignment. A bigram language model conditions the probability of a word on (just) the previous word:

P(w_1 ... w_n) = P(w_1) P(w_2|w_1) ... P(w_n|w_{n-1})

Incorporating context words. For unigram counts, P(w) is always non-zero if our dictionary is derived from the document collection. This won't be true of P(w_k|w_{k-1}): we need to smooth. We could use add-1 smoothing on this conditional distribution, but here's a better way: interpolate a unigram and a bigram:

P_li(w_k|w_{k-1}) = λ P_uni(w_k) + (1 − λ) P_bi(w_k|w_{k-1})
P_bi(w_k|w_{k-1}) = C(w_{k-1}, w_k) / C(w_{k-1})
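
A minimal sketch of this interpolated estimate (mine, using plain maximum-likelihood counts; the value of lam is an arbitrary placeholder, not a recommendation from the slides):

```python
from collections import Counter

def interpolated_bigram_lm(tokens, lam=0.1):
    """P_li(w_k | w_{k-1}) = lam * P_uni(w_k) + (1 - lam) * P_bi(w_k | w_{k-1})."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = sum(unigrams.values())

    def p_uni(w):
        return unigrams[w] / total

    def p_bi(w, prev):
        return bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0

    def p_li(w, prev):
        return lam * p_uni(w) + (1 - lam) * p_bi(w, prev)

    return p_uni, p_li
```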

All the important fine points. Note that we have several probability distributions for words: keep them straight! You might want/need to work with log probabilities:

log P(w_1 ... w_n) = log P(w_1) + log P(w_2|w_1) + ... + log P(w_n|w_{n-1})

Otherwise, be very careful about floating point underflow. Our query may be words anywhere in a document, so we'll start the bigram estimate of a sequence with a unigram estimate. Often, people instead condition on a start-of-sequence symbol, but that's not good here. Because of this, the unigram and bigram counts have different totals; that's not a problem.
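
A short sketch (mine, not the slides') of scoring a word sequence in log space, starting with a unigram estimate as described above and using the p_uni / p_li functions from the previous sketch:

```python
import math

def log_prob_sequence(words, p_uni, p_li):
    """log P(w_1..w_n) = log P(w_1) + sum_k log P_li(w_k | w_{k-1}); sums avoid underflow."""
    # p_uni and p_li are assumed non-zero for in-vocabulary words (interpolation ensures this).
    logp = math.log(p_uni(words[0]))
    for prev, w in zip(words, words[1:]):
        logp += math.log(p_li(w, prev))
    return logp
```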

Using a bigram language model: "a stellar and versatile acress whose combination of sass and glamour". Counts from the Corpus of Contemporary American English with add-1 smoothing:

P(actress|versatile) = .000021    P(whose|actress) = .0010
P(across|versatile)  = .000021    P(whose|across)  = .000006

P("versatile actress whose") = .000021 * .0010 = 210 x 10^-10
P("versatile across whose")  = .000021 * .000006 = 1 x 10^-10

Noisy channel for real-word spell correction. Example: "two of thew", with a lattice of candidates for each word (e.g., to/two/too/tao, of/off/on, the/thew/thaw/threw) and paths such as "two of the", "to threw", "tao off thaw", "too on the", "two of thaw".

Simplification: one error per sentence. Out of all possible sentences with one word replaced:
w_1, w'_2, w_3   two off thew
w_1, w_2, w'_3   two of the
w'_1, w_2, w_3   too of thew
Choose the sequence W that maximizes P(W).

Where to get the probabilities. Language model: unigram, bigram, etc. Channel model: same as for non-word spelling correction, plus we need a probability for no error, P(w|w).

Probability of no error. What is the channel probability for a correctly typed word, e.g. P("the"|"the")? If you have a big corpus, you can estimate this percent correct, but the value depends strongly on the application:
.90 (1 error in 10 words)
.95 (1 error in 20 words)
.99 (1 error in 100 words)
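
Putting the pieces together, a hedged sketch (mine, under the one-error-per-sentence simplification from an earlier slide) that scores each candidate sentence with the language model and the channel model, including P(w|w) for words left unchanged:

```python
import math

def correct_sentence(words, candidates, channel_prob, log_lm, p_no_error=0.95):
    """Return the best sentence with at most one word replaced.

    candidates(w)      -- real-word candidate replacements for w
    channel_prob(x, w) -- P(x|w), assumed smoothed so it is never zero
    log_lm(words)      -- log P(w_1..w_n) under the language model
    p_no_error         -- P(w|w), tuned to the application (e.g. .90 / .95 / .99)
    """
    def score(sentence, changed_at=None):
        logp = log_lm(sentence)
        for i, w in enumerate(sentence):
            if i == changed_at:
                logp += math.log(channel_prob(words[i], w))  # typed words[i], intended w
            else:
                logp += math.log(p_no_error)                 # word left as typed
        return logp

    best, best_score = list(words), score(words)
    for i, x in enumerate(words):
        for w in candidates(x):
            if w == x:
                continue
            cand = list(words)
            cand[i] = w
            s = score(cand, changed_at=i)
            if s > best_score:
                best, best_score = cand, s
    return best
```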

Peter Norvig's thew example:
x    | w     | x|w   | P(x|w)   | P(w)       | 10^9 P(x|w)P(w)
thew | the   | ew|e  | 0.000007 | 0.02       | 144
thew | thew  |       | 0.95     | 0.00000009 | 90
thew | thaw  | e|a   | 0.001    | 0.0000007  | 0.7
thew | threw | h|hr  | 0.000008 | 0.000004   | 0.03
thew | thwe  | ew|we | 0.000003 | 0.00000004 | 0.0001

State of the art noisy channel. We never just multiply the prior and the error model: independence assumptions → probabilities not commensurate. Instead, weight them:

ŵ = argmax_{w ∈ V} P(x|w) P(w)^λ

Learn λ from a development test set.
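
One way to implement the weighting and pick λ by accuracy on a development set of (misspelling, truth) pairs; an illustration under those assumptions, not the lecture's own procedure:

```python
def weighted_correct(x, candidates, channel_prob, prior, lam):
    """argmax_w P(x|w) * P(w)**lam -- weight the language model against the channel model."""
    return max(candidates(x), key=lambda w: channel_prob(x, w) * prior(w) ** lam)

def tune_lambda(dev_pairs, candidates, channel_prob, prior,
                grid=(0.25, 0.5, 1.0, 1.5, 2.0, 3.0)):
    """Grid-search lambda to maximize correction accuracy on (misspelling, truth) pairs."""
    def accuracy(lam):
        hits = sum(weighted_correct(x, candidates, channel_prob, prior, lam) == truth
                   for x, truth in dev_pairs)
        return hits / len(dev_pairs)
    return max(grid, key=accuracy)
```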

Improvements to channel model. Allow richer edits (Brill and Moore 2000): ent→ant, ph→f, le→al. Incorporate pronunciation into the channel (Toutanova and Moore 2002). Incorporate the device into the channel: not all Android phones need have the same error model, but spell correction may be done at the system level.