Probabilistic Language Modeling

Regina Barzilay, EECS Department, MIT
November 15, 2004

Predicting String Probabilities
- Which string is more likely? (Which string is more grammatical?)
  Grill doctoral candidates.
  Grill doctoral updates.
  (example from Lee 1997)
- Methods for assigning probabilities to strings are called Language Models.

- Last time: corpora processing, Zipf's law, data sparseness.
- Today: probabilistic language modeling.

Motivation
- Speech recognition, spelling correction, optical character recognition, and other applications.
- Let E be the physical evidence, and suppose we need to determine whether the string W is the message encoded by E. Use Bayes' rule:

      P(W | E) = P_LM(W) P(E | W) / P(E)

  where P_LM(W) is the language model probability.
- P_LM(W) provides the information necessary for disambiguation (especially when the physical evidence alone is not sufficient for disambiguation).
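
To make the noisy-channel use of Bayes' rule concrete, here is a minimal Python sketch: given a handful of candidate strings W, it picks the one maximizing P_LM(W) P(E | W). All probabilities below are invented toy numbers, not the output of any real model.

```python
# Toy sketch of noisy-channel decoding: pick the candidate string W that
# maximizes P_LM(W) * P(E | W). P(E) is constant across candidates, so it
# can be ignored. The probabilities are made up for illustration only.

def decode(candidates, p_lm, p_channel):
    """Return the candidate W maximizing P_LM(W) * P(E | W)."""
    return max(candidates, key=lambda w: p_lm[w] * p_channel[w])

p_lm = {"grill doctoral candidates": 1e-8, "grill doctoral updates": 1e-11}
p_channel = {"grill doctoral candidates": 0.3, "grill doctoral updates": 0.4}

print(decode(p_lm.keys(), p_lm, p_channel))
# -> "grill doctoral candidates": the language model overrides the slightly
#    better channel score of the implausible string.
```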

How to Compute It?
- Naive approach: use the maximum likelihood estimate (MLE), the number of times the string occurs in the corpus S, normalized by the corpus size:

      P_MLE(Grill doctorate candidates) = count(Grill doctorate candidates) / |S|

- For unseen events, P_MLE = 0: dreadful behavior in the presence of data sparseness.

Two Famous Sentences
"It is fair to assume that neither sentence (1) Colorless green ideas sleep furiously nor (2) Furiously sleep ideas green colorless ... has ever occurred ... Hence, in any statistical model ... these sentences will be ruled out on identical grounds as equally remote from English. Yet (1), though nonsensical, is grammatical, while (2) is not." [Chomsky 1957]

The Language Modeling Problem
- Start with some vocabulary: ν = {the, a, doctorate, candidate, Professors, grill, cook, ask, ...}
- Get a training sample of ν* (strings over the vocabulary):
  Grill doctorate candidate.
  Cook Professors.
  Ask Professors.
  ...
- Assumption: the training sample is drawn from some underlying distribution P.

The Language Modeling Problem (continued)
- Goal: learn a probability distribution P' as close to P as possible, with

      Σ_{x ∈ ν*} P'(x) = 1,   P'(x) ≥ 0 for all x ∈ ν*

- For example: P'(candidates) = 10^-5, P'(ask candidates) = 10^-8.
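
A minimal sketch of the naive whole-string MLE on an invented four-sentence corpus; it shows the dreadful behavior directly: any sentence not in the training sample gets probability zero.

```python
# Naive whole-string MLE: count how often an exact sentence occurs in the
# training corpus and divide by the corpus size. The tiny corpus is invented.
from collections import Counter

corpus = [
    "grill doctorate candidate",
    "cook professors",
    "ask professors",
    "ask professors",
]
counts = Counter(corpus)

def p_mle(sentence):
    return counts[sentence] / len(corpus)

print(p_mle("ask professors"))              # 0.5
print(p_mle("grill doctorate candidates"))  # 0.0 -- unseen, so MLE gives zero
```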

Deriving a Language Model
- Assign a probability to a sequence w_1 w_2 ... w_n.
- Apply the chain rule:

      P(w_1 w_2 ... w_n) = P(w_1 | S) P(w_2 | S, w_1) P(w_3 | S, w_1, w_2) ... P(E | S, w_1, w_2, ..., w_n)

  where S and E mark the start and end of the sentence.
- History-based model: we predict following things from past things.
- How much context do we need to take into account?

Markov Assumption
- For arbitrarily long contexts, P(w_i | w_{i-n} ... w_{i-1}) is difficult to estimate.
- Markov assumption: w_i depends only on the n preceding words.
- Trigrams (second order):

      P(w_i | START, w_1, w_2, ..., w_{i-1}) = P(w_i | w_{i-2}, w_{i-1})
      P(w_1 w_2 ... w_n) = P(w_1 | S) P(w_2 | S, w_1) P(w_3 | w_1, w_2) ... P(E | w_{n-1}, w_n)

A Computational Model of Language
- A useful conceptual and practical device: coin-flipping models.
- A sentence is generated by a randomized algorithm:
  - The generator can be in one of several states.
  - Flip coins to choose the next state.
  - Flip other coins to decide which letter or word to output.
- Shannon: "The states will correspond to the residue of influence from preceding letters."

Word-Based Approximations
- First-order approximation:
  To him swallowed confess hear both. which. OF save on trail for are ay device and rote life have Every enter now severally so, let Hill he late speaks; or! a more to leg less first you enter
- Third-order approximation:
  King Henry. What! I will go seek the traitor Gloucester. Exeunt some of the watch. A great banquet serv'd in; Will you tell me how I am? It cannot be but so.
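
The trigram factorization translates directly into a short routine. The sketch below assumes some trigram estimator `trigram_prob` is available (here a placeholder uniform model); the start and end symbols S and E are written as `<s>` and `</s>`.

```python
# Sketch of the second-order (trigram) Markov factorization of a sentence
# probability. `trigram_prob(w, u, v)` stands in for whatever estimator the
# rest of the lecture develops (MLE, smoothed, interpolated, ...).
import math

def sentence_logprob(words, trigram_prob, start="<s>", stop="</s>"):
    """log P(w_1 ... w_n) = sum_i log P(w_i | w_{i-2}, w_{i-1}), with two
    start symbols padding the left context and a stop symbol at the end."""
    padded = [start, start] + list(words) + [stop]
    logp = 0.0
    for i in range(2, len(padded)):
        logp += math.log(trigram_prob(padded[i], padded[i - 2], padded[i - 1]))
    return logp

# A deliberately silly uniform "model" over a 1,000-word vocabulary:
uniform = lambda w, u, v: 1.0 / 1000
print(sentence_logprob("the cat sat".split(), uniform))
```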

Evaluating a Language Model
- We have n test strings: S_1, S_2, ..., S_n.
- Consider the probability under our model, Π_{i=1}^n P(S_i), or the log probability:

      log Π_{i=1}^n P(S_i) = Σ_{i=1}^n log P(S_i)

- Perplexity:

      Perplexity = 2^{-x},  where x = (1/W) Σ_{i=1}^n log_2 P(S_i)

  and W is the total number of words in the test data.

Perplexity
- Perplexity is a measure of the effective branching factor.
- Suppose we have a vocabulary ν of size N, and the model predicts P(w) = 1/N for all words in ν. What is the perplexity?

      Perplexity = 2^{-x}, where x = log_2 (1/N), so Perplexity = N

Perplexity
- Estimate of human performance (Shannon, 1951), from the Shannon game, where humans guess the next letter in a text: PP = 142 (1.3 bits/letter), uncased, open vocabulary.
- Estimate for a trigram language model (Brown et al., 1992): PP = 790 (1.75 bits/letter), cased, open vocabulary.

Maximum Likelihood Estimate
- MLE makes the training data as probable as possible:

      P_ML(w_i | w_{i-2}, w_{i-1}) = Count(w_{i-2}, w_{i-1}, w_i) / Count(w_{i-2}, w_{i-1})

- For a vocabulary of size N, we will have N^3 parameters in the model.
- For N = 1,000, we have to estimate 1,000^3 = 10^9 parameters.
- Problem: how do we deal with unseen words?
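
The perplexity definition above is easy to compute. The sketch below assumes sentence probabilities come from some model `sentence_prob`; the sanity check reproduces the branching-factor result: a uniform model over N words has perplexity N.

```python
# Perplexity on a test set: 2**(-x) with x = (1/W) * sum_i log2 P(S_i),
# where W is the total number of words in the test data.
import math

def perplexity(test_sentences, sentence_prob):
    """test_sentences: lists of words; sentence_prob: callable giving P(S)."""
    total_log2 = 0.0
    total_words = 0
    for s in test_sentences:
        total_log2 += math.log2(sentence_prob(s))
        total_words += len(s)
    x = total_log2 / total_words
    return 2 ** (-x)

# A model that spreads probability uniformly over a vocabulary of size N
# gives perplexity N, as on the slide.
N = 1000
uniform_prob = lambda s: (1.0 / N) ** len(s)
print(perplexity([["a"] * 7, ["b"] * 3], uniform_prob))  # -> 1000.0
```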

Sparsity
- The aggregate probability of unseen events constitutes a large fraction of the test data.
- Brown et al. (1992): in a 350 million word corpus of English, 14% of trigrams are unseen.

Today
- How do we estimate the probability of unseen elements?
- Discounting: Laplace, Good-Turing
- Linear interpolation
- Katz back-off

Add-One (Laplace) Smoothing
- The simplest discounting technique:

      P(w_i | w_{i-1}) = (C(w_{i-1}, w_i) + 1) / (C(w_{i-1}) + |ν|)

  where |ν| is the vocabulary size.
- It is a Bayesian estimator assuming a uniform prior on events.
- Problem: it gives too much probability mass to unseen events.

Example
- Assume |ν| = 10,000 and |S| = 1,000,000.

      P_MLE(ball | kick a) = Count(kick a ball) / Count(kick a) = 9/10 = 0.9
      P_+1(ball | kick a) = (Count(kick a ball) + 1) / (Count(kick a) + |ν|) = (9 + 1) / (10 + 10,000) ≈ 10^-3
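
A minimal sketch of add-one smoothing on the invented "kick a ball" counts from the example above; the counts and vocabulary size are the toy numbers from the slide, not real corpus statistics.

```python
# Add-one (Laplace) smoothing:
# P(w_i | w_{i-1}) = (C(w_{i-1}, w_i) + 1) / (C(w_{i-1}) + |V|).
from collections import Counter

bigram_counts = Counter({("kick a", "ball"): 9})   # treating "kick a" as the history
history_counts = Counter({"kick a": 10})
V = 10_000                                         # vocabulary size

def p_add_one(word, history):
    return (bigram_counts[(history, word)] + 1) / (history_counts[history] + V)

print(p_add_one("ball", "kick a"))   # (9 + 1) / (10 + 10000), roughly 1e-3
print(p_add_one("goal", "kick a"))   # unseen, but no longer zero: 1 / 10010
```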

Weaknesses of Laplace
- For sparse distributions, Laplace's law gives too much of the probability space to unseen events.
- It is worse than other methods at predicting the actual probabilities of bigrams.
- It is more reasonable to use add-ε smoothing (Lidstone's law).

Good-Turing Discounting
- How likely are you to see a new word type in the future? Use things you have seen once to estimate the probability of unseen things.
- n_r: the number of elements with frequency r, for r > 0.
- n_0: the size of the total lexicon minus the size of the observed lexicon.
- Modified count for elements with frequency r:

      r* = (r + 1) n_{r+1} / n_r

Good-Turing Discounting: Intuition
- Goal: estimate how often a word with r counts in the training data occurs in a test set of equal size.
- We use deleted estimation: delete one word at a time. If a test word occurs r + 1 times in the complete data set, it occurs r times in the training set, so add one count to words with r counts.
- The total count placed in the bucket for r-count words is n_{r+1} (r + 1), so the average count of an r-count word is n_{r+1} (r + 1) / n_r.

Good-Turing Discounting (continued)
- In Good-Turing, the total probability assigned to all unobserved events equals n_1 / N, where N is the size of the training set.
- This is the same mass that a relative-frequency formula would assign to singleton events.
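
The adjusted count r* = (r + 1) n_{r+1} / n_r and the unseen mass n_1 / N are easy to compute from a frequency-of-frequencies table. The sketch below uses a toy eight-word corpus; a practical implementation would also smooth the n_r values, which this sketch does not.

```python
# Good-Turing adjusted counts r* = (r + 1) * n_{r+1} / n_r, plus the total
# probability mass n_1 / N reserved for unseen events. Counts are invented.
from collections import Counter

word_counts = Counter("the cat sat on the mat the cat".split())
N = sum(word_counts.values())            # training-set size
n = Counter(word_counts.values())        # n[r] = number of types seen r times

def good_turing_count(r):
    """Adjusted count r*; undefined when n_r or n_{r+1} is zero, which real
    implementations handle by smoothing the n_r values first."""
    return (r + 1) * n[r + 1] / n[r]

print(good_turing_count(1))   # discounted count for singletons
print(n[1] / N)               # probability mass reserved for unseen events
```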

Example: Good-Turing
Training sample of 22,000,000 words (Church & Gale, 1991):

    r    N_r               heldout    r*
    0    74,671,100,000    0.00027    0.00027
    1         2,018,046    0.448      0.446
    2           449,721    1.25       1.26
    3           188,933    2.24       2.24
    4           105,668    3.23       3.24
    5            68,379    4.21       4.22
    6            48,190    5.23       5.19

The Bias-Variance Trade-Off
- (Unsmoothed) trigram estimate:

      P_ML(w_i | w_{i-2}, w_{i-1}) = Count(w_{i-2}, w_{i-1}, w_i) / Count(w_{i-2}, w_{i-1})

- (Unsmoothed) bigram estimate:

      P_ML(w_i | w_{i-1}) = Count(w_{i-1}, w_i) / Count(w_{i-1})

- (Unsmoothed) unigram estimate:

      P_ML(w_i) = Count(w_i) / Σ_j Count(w_j)

- How close are these different estimates to the true probability P(w_i | w_{i-2}, w_{i-1})?

Interpolation
- One way of addressing sparseness in a trigram model is to mix that model with bigram and unigram models that suffer less from data sparseness.
- The weights can be set using the Expectation-Maximization algorithm or another numerical optimization technique.

Linear Interpolation

      P(w_i | w_{i-2}, w_{i-1}) = λ_1 P_ML(w_i | w_{i-2}, w_{i-1}) + λ_2 P_ML(w_i | w_{i-1}) + λ_3 P_ML(w_i)

  where λ_1 + λ_2 + λ_3 = 1 and λ_i ≥ 0 for all i.
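
A minimal sketch of the interpolated estimate. The three component estimators and the λ weights are stand-ins chosen for illustration; in practice the weights are tuned on held-out data, as the next slides describe.

```python
# Linear interpolation of trigram, bigram, and unigram MLE estimates.
# The component models and the lambda weights below are toy assumptions.

def interpolated_prob(w, u, v, p_tri, p_bi, p_uni, lambdas=(0.6, 0.3, 0.1)):
    """P(w | u, v) = l1*P_ML(w | u, v) + l2*P_ML(w | v) + l3*P_ML(w),
    with l1 + l2 + l3 = 1 and all weights non-negative."""
    l1, l2, l3 = lambdas
    assert abs(l1 + l2 + l3 - 1.0) < 1e-9 and min(lambdas) >= 0
    return l1 * p_tri(w, u, v) + l2 * p_bi(w, v) + l3 * p_uni(w)

# A trigram model that has never seen the context (so it returns 0) is
# rescued by the bigram and unigram estimates.
p = interpolated_prob("ball", "kick", "a",
                      p_tri=lambda w, u, v: 0.0,
                      p_bi=lambda w, v: 0.2,
                      p_uni=lambda w: 0.001)
print(p)   # 0.3*0.2 + 0.1*0.001 = 0.0601
```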

Linear Interpolation (continued)
This estimate defines a distribution:

    Σ_{w_i ∈ ν} P(w_i | w_{i-2}, w_{i-1})
      = Σ_{w_i ∈ ν} [λ_1 P_ML(w_i | w_{i-2}, w_{i-1}) + λ_2 P_ML(w_i | w_{i-1}) + λ_3 P_ML(w_i)]
      = λ_1 Σ_{w_i ∈ ν} P_ML(w_i | w_{i-2}, w_{i-1}) + λ_2 Σ_{w_i ∈ ν} P_ML(w_i | w_{i-1}) + λ_3 Σ_{w_i ∈ ν} P_ML(w_i)
      = λ_1 + λ_2 + λ_3 = 1

Parameter Estimation
- Hold out part of the training set as validation data.
- Define Count_2(w_1, w_2, w_3) to be the number of times the trigram (w_1, w_2, w_3) is seen in the validation set.
- Choose the λ_i to maximize

      L(λ_1, λ_2, λ_3) = Σ_{w_1, w_2, w_3 ∈ ν} Count_2(w_1, w_2, w_3) log P(w_3 | w_1, w_2)

  such that λ_1 + λ_2 + λ_3 = 1 and λ_i ≥ 0 for all i.

An Iterative Method
- Initialization: pick arbitrary/random values for λ_1, λ_2, λ_3.
- Step 1: Calculate the following quantities:

      c_1 = Σ_{w_1, w_2, w_3 ∈ ν} Count_2(w_1, w_2, w_3) λ_1 P_ML(w_3 | w_1, w_2) / [λ_1 P_ML(w_3 | w_1, w_2) + λ_2 P_ML(w_3 | w_2) + λ_3 P_ML(w_3)]
      c_2 = Σ_{w_1, w_2, w_3 ∈ ν} Count_2(w_1, w_2, w_3) λ_2 P_ML(w_3 | w_2) / [λ_1 P_ML(w_3 | w_1, w_2) + λ_2 P_ML(w_3 | w_2) + λ_3 P_ML(w_3)]
      c_3 = Σ_{w_1, w_2, w_3 ∈ ν} Count_2(w_1, w_2, w_3) λ_3 P_ML(w_3) / [λ_1 P_ML(w_3 | w_1, w_2) + λ_2 P_ML(w_3 | w_2) + λ_3 P_ML(w_3)]

- Step 2: Re-estimate λ_i = c_i / (c_1 + c_2 + c_3).
- Step 3: If the λ_i have not converged, go to Step 1.

Allowing the λ's to Vary
- Partition histories, for instance based on frequencies:

      Φ(w_{i-2}, w_{i-1}) = 1 if Count(w_{i-2}, w_{i-1}) = 0
                            2 if 1 ≤ Count(w_{i-2}, w_{i-1}) ≤ 3
                            3 otherwise

- Condition the λ's on the partition: with Φ = Φ(w_{i-2}, w_{i-1}),

      P(w_i | w_{i-2}, w_{i-1}) = λ_1^Φ P_ML(w_i | w_{i-2}, w_{i-1}) + λ_2^Φ P_ML(w_i | w_{i-1}) + λ_3^Φ P_ML(w_i)

  where λ_1^Φ + λ_2^Φ + λ_3^Φ = 1 and λ_i^Φ ≥ 0 for all i.
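
The iterative method can be implemented in a few lines. The sketch below is an EM-style re-estimation over validation counts Count_2; the component models and counts are toy stand-ins, and a fixed iteration budget replaces the convergence test.

```python
# Iterative (EM-style) estimation of the interpolation weights from
# validation counts Count_2. p_tri, p_bi, p_uni and the counts are assumed
# to be given; the toy values below are invented for illustration.

def estimate_lambdas(count2, p_tri, p_bi, p_uni, iters=50):
    """count2: dict mapping (w1, w2, w3) -> validation count."""
    lambdas = [1 / 3, 1 / 3, 1 / 3]                  # arbitrary initialization
    for _ in range(iters):
        c = [0.0, 0.0, 0.0]
        for (w1, w2, w3), cnt in count2.items():
            parts = [lambdas[0] * p_tri(w3, w1, w2),
                     lambdas[1] * p_bi(w3, w2),
                     lambdas[2] * p_uni(w3)]
            total = sum(parts)
            if total == 0.0:
                continue
            for i in range(3):
                c[i] += cnt * parts[i] / total       # expected count for component i
        lambdas = [ci / sum(c) for ci in c]          # re-estimate
    return lambdas

# Toy validation counts with made-up component models:
count2 = {("kick", "a", "ball"): 3, ("a", "ball", "</s>"): 1}
print(estimate_lambdas(count2,
                       p_tri=lambda w, u, v: 0.5,
                       p_bi=lambda w, v: 0.1,
                       p_uni=lambda w: 0.01))
```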

Katz Back-Off Models (Bigrams)
- Define two sets:

      A(w_{i-1}) = {w : Count(w_{i-1}, w) > 0}
      B(w_{i-1}) = {w : Count(w_{i-1}, w) = 0}

- A bigram back-off model:

      P_K(w_i | w_{i-1}) = Count*(w_{i-1}, w_i) / Count(w_{i-1})                    if w_i ∈ A(w_{i-1})
      P_K(w_i | w_{i-1}) = α(w_{i-1}) P_ML(w_i) / Σ_{w ∈ B(w_{i-1})} P_ML(w)        if w_i ∈ B(w_{i-1})

  where

      α(w_{i-1}) = 1 - Σ_{w ∈ A(w_{i-1})} Count*(w_{i-1}, w) / Count(w_{i-1})

Count* Definitions
- Katz uses the Good-Turing method for Count(x) < 5, and Count*(x) = Count(x) for Count(x) ≥ 5.
- Kneser-Ney method: Count*(x) = Count(x) - D, where D = n_1 / (n_1 + 2 n_2), n_1 is the number of elements with frequency 1, and n_2 is the number of elements with frequency 2.

Weaknesses of n-gram Models
- Any ideas?
- Short-range, mid-range, and long-range dependencies.

More Refined Models
- Class-based models
- Structural models
- Topical and long-range models
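
A minimal sketch of a Katz back-off bigram model. It uses a single flat discount D in place of the Count* recipes above (Katz's own definition uses Good-Turing for counts below 5), and the counts are invented; the final print checks that the back-off probabilities still sum to one over the toy vocabulary.

```python
# Katz back-off bigram model with a simple absolute discount standing in for
# Count*. All counts and the discount value are toy assumptions.
from collections import Counter, defaultdict

bigrams = Counter({("kick", "a"): 8, ("kick", "the"): 2})
unigrams = Counter({"kick": 10, "a": 30, "the": 50, "ball": 5, "goal": 5})
N = sum(unigrams.values())
D = 0.5                                            # discount applied to seen bigrams

followers = defaultdict(set)                       # followers[u] = A(u)
for (u, w) in bigrams:
    followers[u].add(w)

def p_ml_unigram(w):
    return unigrams[w] / N

def p_katz(w, u):
    if bigrams[(u, w)] > 0:                        # w in A(u): discounted estimate
        return (bigrams[(u, w)] - D) / unigrams[u]
    # w in B(u): left-over mass alpha(u), spread according to the unigram MLE
    alpha = 1.0 - sum(bigrams[(u, x)] - D for x in followers[u]) / unigrams[u]
    unseen = [x for x in unigrams if x not in followers[u]]
    return alpha * p_ml_unigram(w) / sum(p_ml_unigram(x) for x in unseen)

print(p_katz("a", "kick"))      # seen bigram, discounted
print(p_katz("ball", "kick"))   # unseen bigram, gets a share of alpha("kick")
print(sum(p_katz(w, "kick") for w in unigrams))   # ~1.0: still a distribution
```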

Summary
- Start with a vocabulary.
- Select the type of model.
- Estimate the parameters.
- Consider using the CMU-Cambridge language modeling toolkit: http://mi.eng.cam.ac.uk/~prc14/toolkit.html