language modeling: n-gram models

language modeling: n-gram models CS 585, Fall 2018 Introduction to Natural Language Processing http://people.cs.umass.edu/~miyyer/cs585/ Mohit Iyyer College of Information and Computer Sciences University of Massachusetts Amherst some slides from Dan Jurafsky and Richard Socher

questions from last time: why do we have to update word embeddings during backpropagation? you don't have to, but it is common, esp. if the domain you trained your embeddings on differs from the domain of your downstream task. lots of us are missing class next week, can you record the lectures? 2

goal: assign probability to a piece of text why would we ever want to do this? translation: P(i flew to the movies) <<<<< P(i went to the movies) speech recognition: P(i saw a van) >>>>> P(eyes awe of an)

You use Language Models every day! 4

You use Language Models every day! 5

philosophical question! should building the perfect language model be the ultimate goal of NLP? 6

back to reality: Probabilistic Language Modeling. Goal: compute the probability of a sentence or sequence of words: P(W) = P(w_1, w_2, w_3, w_4, w_5, ..., w_n). Related task: probability of an upcoming word: P(w_5 | w_1, w_2, w_3, w_4). A model that computes either of these, P(W) or P(w_n | w_1, w_2, ..., w_{n-1}), is called a language model or LM. we have already seen one way to do this where? 12

How to compute P(W): How to compute this joint probability: P(its, water, is, so, transparent, that). Intuition: let's rely on the Chain Rule of Probability 13

Reminder: The Chain Rule. Recall the definition of conditional probabilities: P(B | A) = P(A, B) / P(A). Rewriting: P(A, B) = P(A) P(B | A). More variables: P(A, B, C, D) = P(A) P(B | A) P(C | A, B) P(D | A, B, C). The Chain Rule in general: P(x_1, x_2, x_3, ..., x_n) = P(x_1) P(x_2 | x_1) P(x_3 | x_1, x_2) ... P(x_n | x_1, ..., x_{n-1}) 14

The Chain Rule applied to compute the joint probability of words in a sentence: P("its water is so transparent") = P(its) × P(water | its) × P(is | its water) × P(so | its water is) × P(transparent | its water is so) 15

How to estimate these probabilities: Could we just count and divide? P(the | its water is so transparent that) = Count(its water is so transparent that the) / Count(its water is so transparent that) 16

How to estimate these probabilities: Could we just count and divide? P(the | its water is so transparent that) = Count(its water is so transparent that the) / Count(its water is so transparent that). No! Too many possible sentences! We'll never see enough data for estimating these 17

Markov Assumption. Simplifying assumption: P(the | its water is so transparent that) ≈ P(the | that). Or maybe: P(the | its water is so transparent that) ≈ P(the | transparent that). Andrei Markov (1856-1922) 18

Markov Assumption. In other words, we approximate each component in the product: P(w_i | w_1 w_2 ... w_{i-1}) ≈ P(w_i | w_{i-k} ... w_{i-1}) 19

Simplest case: Unigram model Some automatically generated sentences from a unigram model: fifth, an, of, futures, the, an, incorporated, a, a, the, inflation, most, dollars, quarter, in, is, mass thrift, did, eighty, said, hard, 'm, july, bullish that, or, limited, the 20
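
As a rough illustration (not from the slides), sampling each word independently from an empirical unigram distribution can be sketched in a few lines of Python; the token list passed in is a placeholder for any tokenized corpus:

```python
import random
from collections import Counter

def sample_unigram_sentence(tokens, length=15):
    """Draw `length` words i.i.d. from the empirical unigram distribution of `tokens`."""
    counts = Counter(tokens)
    words = list(counts.keys())
    weights = [counts[w] for w in words]
    return " ".join(random.choices(words, weights=weights, k=length))

# e.g. sample_unigram_sentence("the inflation most dollars quarter in is mass".split())
```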

less dumb demo: https://mesotron.shinyapps.io/languagemodel/ 21

Approximating Shakespeare (Figure 4.3: eight sentences randomly generated from four n-gram models computed from Shakespeare's works):
1-gram: To him swallowed confess hear both. Which. Of save on trail for are ay device and rote life have / Hill he late speaks; or! a more to leg less first you enter
2-gram: Why dost stand forth thy canopy, forsooth; he is this palpable hit the King Henry. Live king. Follow. / What means, sir. I confess she? then all sorts, he is trim, captain.
3-gram: Fly, and will rid me these news of price. Therefore the sadness of parting, as they say, 'tis done. / This shall forbid it should be branded, if renown made it empty.
4-gram: King Henry. What! I will go seek the traitor Gloucester. Exeunt some of the watch. A great banquet serv'd in; / It cannot be but so. 22

N-gram models: We can extend to trigrams, 4-grams, 5-grams. In general this is an insufficient model of language, because language has long-distance dependencies: "The computer which I had just put into the machine room on the fifth floor crashed." But we can often get away with N-gram models. next lecture, we will look at some models that can theoretically handle some of these longer-term dependencies 23

Estimating bigram probabilities: the Maximum Likelihood Estimate (MLE), i.e. relative frequency based on the empirical counts on a training set: P(w_i | w_{i-1}) = count(w_{i-1}, w_i) / count(w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1}), where c stands for count 24
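
A minimal sketch of this relative-frequency estimate in Python (assuming sentences are already tokenized and padded with <s> and </s>; the function name is illustrative):

```python
from collections import Counter

def train_bigram_mle(sentences):
    """Return P(w_i | w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1}) as a dict keyed by (prev, curr)."""
    bigram_counts = Counter()
    context_counts = Counter()
    for sent in sentences:                       # each sent: list of tokens incl. <s> and </s>
        for prev, curr in zip(sent, sent[1:]):
            bigram_counts[(prev, curr)] += 1
            context_counts[prev] += 1
    return {(p, c): n / context_counts[p] for (p, c), n in bigram_counts.items()}
```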

An example. MLE: P(w_i | w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1}). Training corpus: <s> I am Sam </s> / <s> Sam I am </s> / <s> I do not like green eggs and ham </s> ?????? 25

An example. MLE: P(w_i | w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1}). Training corpus: <s> I am Sam </s> / <s> Sam I am </s> / <s> I do not like green eggs and ham </s>. Estimates: P(I | <s>) = 2/3, P(Sam | <s>) = 1/3, P(am | I) = 2/3, P(</s> | Sam) = 1/2, P(Sam | am) = 1/2, P(do | I) = 1/3 26
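
These estimates can be checked mechanically; a self-contained sketch using the three training sentences above:

```python
from collections import Counter

corpus = [
    "<s> I am Sam </s>".split(),
    "<s> Sam I am </s>".split(),
    "<s> I do not like green eggs and ham </s>".split(),
]
bigrams = Counter((p, c) for s in corpus for p, c in zip(s, s[1:]))
contexts = Counter(p for s in corpus for p in s[:-1])

print(bigrams[("<s>", "I")] / contexts["<s>"])     # P(I | <s>)    = 2/3
print(bigrams[("I", "am")] / contexts["I"])        # P(am | I)     = 2/3
print(bigrams[("am", "Sam")] / contexts["am"])     # P(Sam | am)   = 1/2
print(bigrams[("Sam", "</s>")] / contexts["Sam"])  # P(</s> | Sam) = 1/2
```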

A bigger example: Berkeley Restaurant Project sentences: can you tell me about any good cantonese restaurants close by / mid priced thai food is what i'm looking for / tell me about chez panisse / can you give me a listing of the kinds of food that are available / i'm looking for a good place to eat breakfast / when is caffe venezia open during the day 27

Raw bigram counts Out of 9222 sentences 28

Raw bigram probabilities. Normalize by unigram counts: MLE P(w_i | w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1}). Result: 29

Bigram estimates of sentence probabilities: P(<s> I want english food </s>) = P(I | <s>) × P(want | I) × P(english | want) × P(food | english) × P(</s> | food) = .000031. these probabilities get super tiny when we have longer inputs w/ more infrequent words. how can we get around this? 30
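
One common answer (not spelled out on the slide, but standard practice): work in log space, summing log probabilities instead of multiplying tiny ones. A sketch, assuming a bigram_prob dict with no zero entries:

```python
import math

def sentence_logprob(tokens, bigram_prob):
    """Score a padded token list (<s> ... </s>) as a sum of log bigram probabilities."""
    return sum(math.log(bigram_prob[(prev, curr)])
               for prev, curr in zip(tokens, tokens[1:]))
```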

What kinds of knowledge? P(english | want) = .0011 (about the world), P(chinese | want) = .0065 (about the world), P(to | want) = .66 (grammar: infinitive verb), P(eat | to) = .28 (grammar), P(food | to) = 0, P(want | spend) = 0, P(i | <s>) = .25 31

Language Modeling Toolkits: SRILM http://www.speech.sri.com/projects/srilm/ KenLM https://kheafield.com/code/kenlm/ 32

Evaluation: How good is our model? Does our language model prefer good sentences to bad ones? Does it assign higher probability to real or frequently observed sentences than to ungrammatical or rarely observed ones? We train the parameters of our model on a training set. We test the model's performance on data we haven't seen. A test set is an unseen dataset that is different from our training set, totally unused. An evaluation metric tells us how well our model does on the test set. 33

Evaluation: How good is our model? The goal isn't to pound out fake sentences! Obviously, generated sentences get better as we increase the model order. More precisely: with maximum likelihood estimators, higher order always gives higher likelihood on the training set, but not on the test set. 34

Training on the test set: We can't allow test sentences into the training set. We would assign them an artificially high probability when we see them in the test set. Training on the test set is bad science! And violates the honor code. 35

Intuition of Perplexity. The Shannon Game: How well can we predict the next word? "I always order pizza with cheese and ___", "The 33rd President of the US was ___", "I saw a ___". Unigrams are terrible at this game. (Why?) Example continuations for the pizza sentence: mushrooms 0.1, pepperoni 0.1, anchovies 0.01, ..., fried rice 0.0001, ..., and 1e-100. A better model of a text is one which assigns a higher probability to the word that actually occurs. Compute the per-word log likelihood (M words, m test sentences s_i): l = (1/M) Σ_i log P(s_i). Claude Shannon (1916-2001) 36

Perplexity: The best language model is one that best predicts an unseen test set, i.e. gives the highest P(sentence). Perplexity is the inverse probability of the test set, normalized by the number of words: PP(W) = P(w_1 w_2 ... w_N)^(-1/N), i.e. the Nth root of 1 / P(w_1 w_2 ... w_N). By the chain rule: PP(W) = (∏_{i=1}^{N} 1 / P(w_i | w_1 ... w_{i-1}))^(1/N). For bigrams: PP(W) = (∏_{i=1}^{N} 1 / P(w_i | w_{i-1}))^(1/N). Minimizing perplexity is the same as maximizing probability. 37
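
A sketch of computing bigram perplexity over a test set in log space (assumes every needed bigram has non-zero probability; names are illustrative):

```python
import math

def bigram_perplexity(test_sentences, bigram_prob):
    """PP(W) = exp(-(1/N) * sum_i log P(w_i | w_{i-1})), where N counts all predicted tokens."""
    total_logprob, n_predictions = 0.0, 0
    for sent in test_sentences:                  # each sent: padded token list <s> ... </s>
        for prev, curr in zip(sent, sent[1:]):
            total_logprob += math.log(bigram_prob[(prev, curr)])
            n_predictions += 1
    return math.exp(-total_logprob / n_predictions)
```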

Perplexity as branching factor: Let's suppose a sentence consisting of random digits. What is the perplexity of this sentence according to a model that assigns P = 1/10 to each digit? 38
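
Working the arithmetic out: with N digits each assigned probability 1/10, PP(W) = P(w_1 ... w_N)^(-1/N) = ((1/10)^N)^(-1/N) = 10, so the perplexity equals the branching factor of 10.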

Lower perplexity = better model. Training: 38 million words; test: 1.5 million words (Wall Street Journal). Perplexity by n-gram order: Unigram 962, Bigram 170, Trigram 109. 39

Zero probability bigrams: Bigrams with zero probability mean that we will assign 0 probability to the test set! And hence we cannot compute perplexity (can't divide by 0)! Q: How do we deal with n-grams that have zero probability? 40

Shakespeare as corpus: N = 884,647 tokens, V = 29,066. Shakespeare produced 300,000 bigram types out of V^2 = 844 million possible bigrams. So 99.96% of the possible bigrams were never seen (have zero entries in the table). Quadrigrams are worse: what's coming out looks like Shakespeare because it is Shakespeare. 41

Zeros. Training set: ... denied the allegations, ... denied the reports, ... denied the claims, ... denied the request. Test set: ... denied the offer, ... denied the loan. P(offer | denied the) = 0 42

The intuition of smoothing (from Dan Klein). When we have sparse statistics: P(w | denied the): 3 allegations, 2 reports, 1 claims, 1 request (7 total). Steal probability mass to generalize better: P(w | denied the): 2.5 allegations, 1.5 reports, 0.5 claims, 0.5 request, 2 other (7 total). 43

Add-one estimation (again!). Also called Laplace smoothing: pretend we saw each word one more time than we did, i.e. just add one to all the counts! MLE estimate: P_MLE(w_i | w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1}). Add-1 estimate: P_Add-1(w_i | w_{i-1}) = (c(w_{i-1}, w_i) + 1) / (c(w_{i-1}) + V) 44
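
A sketch of the add-1 estimate, reusing bigram/context count dictionaries from an MLE-trained model (V is the vocabulary size; names are illustrative):

```python
def laplace_bigram_prob(prev, curr, bigram_counts, context_counts, V):
    """P_Add-1(curr | prev) = (c(prev, curr) + 1) / (c(prev) + V)."""
    return (bigram_counts.get((prev, curr), 0) + 1) / (context_counts.get(prev, 0) + V)
```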

Berkeley Restaurant Corpus: Laplace-smoothed bigram counts 45

Laplace-smoothed bigrams 46

Reconstituted counts 47

Compare with raw bigram counts 48

Add-1 estimation is a blunt instrument. So add-1 isn't used for n-grams: we'll see better methods. But add-1 is used to smooth other NLP models, e.g. for text classification and in domains where the number of zeros isn't so huge. 49

Backoff and interpolation: Sometimes it helps to use less context. Condition on less context for contexts you haven't learned much about. Backoff: use trigram if you have good evidence, otherwise bigram, otherwise unigram. Interpolation: mix unigram, bigram, trigram. Interpolation works better. 50

Linear Interpolation. Simple interpolation: P̂(w_n | w_{n-2} w_{n-1}) = λ_1 P(w_n | w_{n-2} w_{n-1}) + λ_2 P(w_n | w_{n-1}) + λ_3 P(w_n), with Σ_i λ_i = 1. The lambdas can also be made conditional on the context. 51
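
A sketch of simple linear interpolation with fixed weights (the lambda values below are placeholders; in practice they are tuned on held-out data and must sum to 1):

```python
def interpolated_prob(w, prev2, prev1, p_uni, p_bi, p_tri, lambdas=(0.5, 0.3, 0.2)):
    """P_hat(w | prev2 prev1) = l1*P(w | prev2, prev1) + l2*P(w | prev1) + l3*P(w)."""
    l1, l2, l3 = lambdas
    return (l1 * p_tri.get((prev2, prev1, w), 0.0)
            + l2 * p_bi.get((prev1, w), 0.0)
            + l3 * p_uni.get(w, 0.0))
```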

Absolute discounting: just subtract a little from each count. Suppose we wanted to subtract a little from a count of 4 to save probability mass for the zeros. How much to subtract? Church and Gale (1991)'s clever idea: divide up 22 million words of AP Newswire into a training and a held-out set, and for each bigram in the training set see the actual count in the held-out set.
Bigram count in training → bigram count in held-out set: 0 → 0.0000270, 1 → 0.448, 2 → 1.25, 3 → 2.24, 4 → 3.23, 5 → 4.21, 6 → 5.23, 7 → 6.21, 8 → 7.21, 9 → 8.26 52

Absolute Discounting Interpolation. Save ourselves some time and just subtract 0.75 (or some d)! P_AbsoluteDiscounting(w_i | w_{i-1}) = (c(w_{i-1}, w_i) - d) / c(w_{i-1}) + λ(w_{i-1}) P(w), i.e. a discounted bigram term plus an interpolation weight times the unigram. (Maybe keep a couple of extra values of d for counts 1 and 2.) But should we really just use the regular unigram P(w)? 53
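
A sketch of absolute discounting interpolation with d = 0.75, where lambda(prev) is set so the conditional distribution still sums to 1 (all names are illustrative):

```python
def absolute_discount_prob(prev, curr, bigram_counts, context_counts, unigram_prob, d=0.75):
    """max(c(prev, curr) - d, 0) / c(prev) + lambda(prev) * P(curr)."""
    c_prev = context_counts[prev]
    n_continuations = sum(1 for (p, _) in bigram_counts if p == prev)  # distinct words after prev
    lam = d * n_continuations / c_prev              # probability mass freed up by discounting
    p_discounted = max(bigram_counts.get((prev, curr), 0) - d, 0) / c_prev
    return p_discounted + lam * unigram_prob.get(curr, 0.0)
```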

Kneser-Ney Smoothing Intuition: a better estimate for the probabilities of lower-order unigrams! Shannon game: "I can't see without my reading ___?" "Francisco" is more common than "glasses", but "Francisco" always follows "San". The unigram is useful exactly when we haven't seen this bigram! Instead of P(w) ("how likely is w?"), use P_continuation(w): how likely is w to appear as a novel continuation? For each word, count the number of bigram types it completes; every bigram type was a novel continuation the first time it was seen: P_CONTINUATION(w) ∝ |{w_{i-1} : c(w_{i-1}, w) > 0}| 54
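
A sketch of the continuation probability: for each word, count the distinct bigram types it completes, normalized by the total number of distinct bigram types (assumes a bigram_counts dict keyed by (prev, curr)):

```python
from collections import Counter

def continuation_probs(bigram_counts):
    """P_continuation(w) = |{w' : c(w', w) > 0}| / (number of distinct bigram types)."""
    contexts_per_word = Counter(curr for (_prev, curr) in bigram_counts)
    total_types = len(bigram_counts)
    return {w: n / total_types for w, n in contexts_per_word.items()}
```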

exercise (from last time)! 55