Natural Language Processing (CSE 490U): Language Models


Natural Language Processing (CSE 490U): Language Models
Noah Smith, © 2017 University of Washington
nasmith@cs.washington.edu
January 6-9, 2017

Very Quick Review of Probability
- Event space (e.g., 𝒳, 𝒴); in this class, usually discrete
- Random variables (e.g., X, Y)
- Typical statement: "random variable X takes value x ∈ 𝒳 with probability p(X = x)," or, in shorthand, "p(x)"
- Joint probability: p(X = x, Y = y)
- Conditional probability: p(X = x | Y = y) = p(X = x, Y = y) / p(Y = y)
- Always true: p(X = x, Y = y) = p(X = x | Y = y) · p(Y = y) = p(Y = y | X = x) · p(X = x)
- Sometimes true (when X and Y are independent): p(X = x, Y = y) = p(X = x) · p(Y = y)
- The difference between true and estimated probability distributions
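A small worked example of these identities, with a made-up joint distribution over two discrete variables (everything here is illustrative, not from the slides):

```python
# A toy joint distribution p(X = x, Y = y); the numbers are made up and sum to 1.
joint = {
    ("rain", "umbrella"): 0.3,
    ("rain", "no_umbrella"): 0.1,
    ("sun", "umbrella"): 0.1,
    ("sun", "no_umbrella"): 0.5,
}

def marginal_x(x):
    return sum(p for (xx, _), p in joint.items() if xx == x)

def marginal_y(y):
    return sum(p for (_, yy), p in joint.items() if yy == y)

def conditional_x_given_y(x, y):
    # p(X = x | Y = y) = p(X = x, Y = y) / p(Y = y)
    return joint[(x, y)] / marginal_y(y)

# "Always true": the chain rule holds.
x, y = "rain", "umbrella"
assert abs(joint[(x, y)] - conditional_x_given_y(x, y) * marginal_y(y)) < 1e-12

# "Sometimes true": independence would require p(x, y) = p(x) p(y); here it fails.
print(joint[(x, y)], marginal_x(x) * marginal_y(y))  # 0.3 vs. 0.4 * 0.4 = 0.16
```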

Language Models: Definitions
- V is a finite set of (discrete) symbols ("words" or possibly characters); V = |V| denotes its size
- V† is the (infinite) set of sequences of symbols from V whose final symbol is the stop symbol
- p : V† → ℝ, such that:
  - For any x ∈ V†, p(x) ≥ 0
  - ∑_{x ∈ V†} p(X = x) = 1
  (I.e., p is a proper probability distribution.)
- Language modeling: estimate p from examples, x_{1:n} = ⟨x_1, x_2, ..., x_n⟩.

Immediate Objections
1. Why would we want to do this?
2. Are the nonnegativity and sum-to-one constraints really necessary?
3. Is finite V realistic?

Motivation: Noisy Channel Models
A pattern for modeling a pair of random variables, X and Y:
    source → Y → channel → X
- Y is the plaintext, the true message, the missing information, the output
- X is the ciphertext, the garbled message, the observable evidence, the input
- Decoding: select y given X = x.
    y* = argmax_y p(y | x)
       = argmax_y p(x | y) p(y) / p(x)
       = argmax_y p(x | y) p(y),
  where p(x | y) is the channel model and p(y) is the source model.
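A minimal sketch of noisy-channel decoding over a tiny candidate set; the probability tables are made up for a toy spelling-correction scenario, and real systems search a much larger space of y:

```python
# Hypothetical log-probability tables for a tiny spelling-correction example.
# In practice these would come from trained channel and source (language) models.
CHANNEL = {("teh", "the"): -0.5, ("teh", "tea"): -3.0}   # log p(x | y)
SOURCE = {"the": -2.0, "tea": -6.0}                      # log p(y)

def decode(x, candidates):
    # y* = argmax_y p(x | y) p(y); the denominator p(x) is constant in y, so it is ignored.
    return max(candidates,
               key=lambda y: CHANNEL.get((x, y), -10.0) + SOURCE.get(y, -15.0))

print(decode("teh", ["the", "tea"]))  # "the"
```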

Noisy Channel Example: Speech Recognition
    source → sequence in V† → channel → acoustics
- Acoustic model defines p(sounds | x) (channel)
- Language model defines p(x) (source)

Noisy Channel Example: Speech Recognition
Credit: Luke Zettlemoyer. Candidate word sequences, each scored by log p(acoustics | word sequence) (scores omitted here):
- the station signs are in deep in english
- the stations signs are in deep in english
- the station signs are in deep into english
- the station's signs are in deep in english
- the station signs are in deep in the english
- the station signs are indeed in english
- the station's signs are indeed in english
- the station signs are indians in english
- the station signs are indian in english
- the stations signs are indians in english
- the stations signs are indians and english

Noisy Channel Example: Machine Translation
"Also knowing nothing official about, but having guessed and inferred considerable about, the powerful new mechanized methods in cryptography, methods which I believe succeed even when one does not know what language has been coded, one naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography. When I look at an article in Russian, I say: 'This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.'"
(Warren Weaver)

Noisy Channel Examples
- Speech recognition
- Machine translation
- Optical character recognition
- Spelling and grammar correction

Immediate Objections
1. Why would we want to do this?
2. Are the nonnegativity and sum-to-one constraints really necessary?
3. Is finite V realistic?

Evaluation: Perplexity
Intuitively, language models should assign high probability to real language they have not seen before.
For out-of-sample ("held-out" or "test") data x̄_{1:m}:
- Probability of x̄_{1:m} is ∏_{i=1}^{m} p(x̄_i)
- Log-probability of x̄_{1:m} is ∑_{i=1}^{m} log₂ p(x̄_i)
- Average log-probability per word of x̄_{1:m} is
      l = (1/M) ∑_{i=1}^{m} log₂ p(x̄_i),
  where M = ∑_{i=1}^{m} |x̄_i| (the total number of words in the corpus)
- Perplexity (relative to x̄_{1:m}) is 2^{-l}
- Lower is better.
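A short sketch of this computation, assuming a `logprob2` function that returns log₂ p(x̄_i) for a whole held-out sequence and that each sequence's length is its token count:

```python
def perplexity(sequences, logprob2):
    """Perplexity of a model on held-out sequences.

    logprob2(seq) must return log base 2 of the model's probability of seq
    (including its stop symbol); M counts every token in every sequence.
    """
    total = sum(logprob2(seq) for seq in sequences)
    M = sum(len(seq) for seq in sequences)
    l = total / M        # average log2-probability per word
    return 2 ** (-l)     # lower is better
```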

Understanding Perplexity
It's a branching factor!
    2^{-(1/M) ∑_{i=1}^{m} log₂ p(x̄_i)}
- Assign probability of 1 to the test data → perplexity = 1
- Assign probability of 1/V to every word → perplexity = V
- Assign probability of 0 to anything → perplexity = ∞
This motivates a stricter constraint than we had before:
- For any x ∈ V†, p(x) > 0
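A quick numeric check of the branching-factor claim, assuming a toy vocabulary of size 7 and a model that assigns every word probability 1/V:

```python
import math

V = 7            # toy vocabulary size
M = 10           # number of held-out tokens
# Uniform model: every word gets probability 1/V.
l = (M * math.log2(1.0 / V)) / M    # average log2-probability per word
print(2 ** (-l))                    # 7.0, i.e., exactly V
```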

Perplexity
Perplexity on conventionally accepted test sets is often reported in papers. Generally, I won't discuss perplexity numbers much, because:
- Perplexity is only an intermediate measure of performance.
- Understanding the models is more important than remembering how well they perform on particular train/test sets.
- If you're curious, look up numbers in the literature; always take them with a grain of salt!

Immediate Objections
1. Why would we want to do this?
2. Are the nonnegativity and sum-to-one constraints really necessary?
3. Is finite V realistic?

Is finite V realistic?
No. Even a single word shows up in endless surface variants: no, n0, -no, notta, N o, /no, //no, (no, ...

The Language Modeling Problem
- Input: x_{1:n} ("training data")
- Output: p : V† → ℝ₊
- p should be a useful measure of plausibility (not grammaticality).

A Trivial Language Model
    p(x) = |{i : x_i = x}| / n = c_{x_{1:n}}(x) / n
What if x is not in the training data?
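A sketch of this whole-sentence estimator on a made-up toy corpus, illustrating the problem raised above: any sentence not seen verbatim in training gets probability zero:

```python
from collections import Counter

# Toy training corpus: each element is one complete sentence (as a tuple of words).
corpus = [
    ("the", "cat", "sat"),
    ("the", "dog", "sat"),
    ("the", "cat", "sat"),
]

counts = Counter(corpus)
n = len(corpus)

def p(sentence):
    # p(x) = c_{x_{1:n}}(x) / n
    return counts[tuple(sentence)] / n

print(p(["the", "cat", "sat"]))    # 2/3
print(p(["the", "cat", "slept"]))  # 0.0: unseen, even though plausible
```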

Using the Chain Rule
    p(X = x) = p(X_1 = x_1)
               · p(X_2 = x_2 | X_1 = x_1)
               · p(X_3 = x_3 | X_{1:2} = x_{1:2})
               · ...
               · p(X_l = x_l | X_{1:l-1} = x_{1:l-1})
             = ∏_{j=1}^{l} p(X_j = x_j | X_{1:j-1} = x_{1:j-1})

Unigram Model
    p(X = x) = ∏_{j=1}^{l} p(X_j = x_j | X_{1:j-1} = x_{1:j-1})
             = ∏_{j=1}^{l} p_θ(X_j = x_j)        (assumption)
             = ∏_{j=1}^{l} θ_{x_j}
Maximum likelihood estimate: for every v ∈ V,
    θ̂_v = |{(i, j) : [x_i]_j = v}| / N = c_{x_{1:n}}(v) / N,
where N = ∑_{i=1}^{n} |x_i|. Also known as "relative frequency estimation."
The estimated model assigns p_θ̂(X = x) = ∏_{j=1}^{l} θ̂_{x_j}.
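A minimal sketch of relative frequency estimation for the unigram model on a toy corpus (for brevity, the stop symbol is omitted from V here):

```python
from collections import Counter

corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat", "down"],
]

# N = total number of tokens; theta_v = c(v) / N
counts = Counter(w for sent in corpus for w in sent)
N = sum(counts.values())
theta = {v: c / N for v, c in counts.items()}

def p_unigram(sentence):
    prob = 1.0
    for w in sentence:
        prob *= theta.get(w, 0.0)  # unseen words get probability 0 under the MLE
    return prob

print(theta["the"])                      # 2/7
print(p_unigram(["the", "cat", "sat"]))  # (2/7) * (1/7) * (2/7)
```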

Responses to Some of Your Questions
- I speak roughly 1.3 languages.
- Homeworks are mostly programming assignments. They are public, but other than maybe some commentary, solutions won't be public.
- Interested in research? Faculty doing NLP at UW:
- Summer internship application form:


Unigram Models: Assessment
Pros:
- Easy to understand
- Cheap
- Good enough for information retrieval (maybe)
Cons:
- "Bag of words" assumption is linguistically inaccurate: p(the the the the) > p(i want ice cream)
- Data sparseness; high variance in the estimator
- Out-of-vocabulary problem

Markov Models / n-gram Models
    p(X = x) = ∏_{j=1}^{l} p(X_j = x_j | X_{1:j-1} = x_{1:j-1})
             = ∏_{j=1}^{l} p_θ(X_j = x_j | X_{j-n+1:j-1} = x_{j-n+1:j-1})        (assumption)
- (n-1)th-order Markov assumption ⇔ n-gram model
- Unigram model is the n = 1 case
- For a long time, trigram models (n = 3) were widely used
- 5-gram models (n = 5) are not uncommon now in MT

Estimating n-gram Models
unigram:  p_θ(x) = ∏_{j=1}^{l} θ_{x_j}
          parameters: θ_v, for v ∈ V
          MLE: c(v) / N
bigram:   p_θ(x) = ∏_{j=1}^{l} θ_{x_j | x_{j-1}}
          parameters: θ_{v | v′}, for v ∈ V, v′ ∈ V ∪ {stop}
          MLE: c(v′ v) / c(v′)
trigram:  p_θ(x) = ∏_{j=1}^{l} θ_{x_j | x_{j-2} x_{j-1}}
          parameters: θ_{v | v″ v′}, for v ∈ V, v′, v″ ∈ V ∪ {stop}
          MLE: c(v″ v′ v) / c(v″ v′)
General case: p_θ(x) = ∏_{j=1}^{l} θ_{x_j | x_{j-n+1:j-1}}, with parameters θ_{v | h} for v ∈ V and h ∈ (V ∪ {stop})^{n-1}, and MLE c(h v) / c(h).
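A sketch of the bigram (n = 2) case of this estimator; the `<stop>` string is a stand-in for the boundary/stop symbol in the slides:

```python
from collections import Counter

corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat"],
]

STOP = "<stop>"  # placeholder for the boundary/stop symbol

bigram_counts = Counter()
history_counts = Counter()
for sent in corpus:
    padded = [STOP] + sent + [STOP]
    for h, v in zip(padded, padded[1:]):
        bigram_counts[(h, v)] += 1
        history_counts[h] += 1

def theta(v, h):
    # MLE: c(h v) / c(h)
    return bigram_counts[(h, v)] / history_counts[h]

print(theta("cat", "the"))  # 1/2
print(theta("sat", "cat"))  # 1.0
```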

The Problem with MLE
- The curse of dimensionality: the number of parameters grows exponentially in n
- Data sparseness: most n-grams will never be observed, even if they are linguistically plausible
- No one actually uses the MLE!

Smoothing
- A few years ago, I'd have spent a whole lecture on this!
- Simple method: add λ > 0 to every count (including zero-counts) before normalizing
- What makes it hard: ensuring that each θ remains a proper probability distribution over V; otherwise, perplexity calculations break
- Longstanding champion: modified Kneser-Ney smoothing (Chen and Goodman, 1998)
- "Stupid backoff": a reasonable, easy solution when you don't care about perplexity (Brants et al., 2007)
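A sketch of the simple add-λ method for a bigram model, with tiny made-up counts; adding λ once per possible next word in the denominator keeps each conditional distribution summing to one:

```python
from collections import Counter

# Tiny example counts (as if gathered from a corpus); "<stop>" is a placeholder boundary symbol.
vocab = ["the", "cat", "dog", "sat", "<stop>"]
bigram_counts = Counter({("the", "cat"): 1, ("the", "dog"): 1, ("cat", "sat"): 1})
history_counts = Counter({"the": 2, "cat": 1})

def theta_add_lambda(v, h, lam=0.1):
    # Add lambda to every count, including zero counts, then renormalize:
    # (c(h v) + lambda) / (c(h) + lambda * |V|)
    return (bigram_counts[(h, v)] + lam) / (history_counts[h] + lam * len(vocab))

# Every v now has nonzero probability given h, and the distribution sums to 1.
print(sum(theta_add_lambda(v, "the") for v in vocab))  # 1.0 (up to floating point)
```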

Interpolation
If p and q are both language models, then so is
    αp + (1 - α)q
for any α ∈ [0, 1].
- This idea underlies many smoothing methods.
- Often a new model q only beats a reigning champion p when interpolated with it.
- How to pick the hyperparameter α?
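One standard answer to the last question is a grid search on held-out data, picking the α that maximizes held-out log-likelihood. A sketch, where `p` and `q` stand in for any two word-level models:

```python
import math

def interpolate(p, q, alpha):
    # If p and q assign probabilities to the same events, so does alpha*p + (1 - alpha)*q.
    return lambda w: alpha * p(w) + (1 - alpha) * q(w)

def pick_alpha(p, q, heldout_words, grid=(0.1, 0.3, 0.5, 0.7, 0.9)):
    def heldout_ll(alpha):
        mix = interpolate(p, q, alpha)
        return sum(math.log(mix(w)) for w in heldout_words)
    return max(grid, key=heldout_ll)

# Toy example: q is uniform over a 4-word vocabulary, p is peaked on "the".
p = lambda w: {"the": 0.7}.get(w, 0.1)
q = lambda w: 0.25
print(pick_alpha(p, q, ["the", "the", "cat", "the"]))  # 0.9 on this toy data
```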

Algorithms To Know
- Score a sentence x
- Train from a corpus x_{1:n}
- Sample a sentence given θ
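A sketch of the third algorithm for a bigram model: repeatedly draw the next word from θ(· | previous word) until the stop symbol is drawn. The transition table is made up for illustration:

```python
import random

STOP = "<stop>"  # placeholder boundary/stop symbol

# theta[h][v] = p(next word = v | previous word = h); each row sums to 1.
theta = {
    STOP:  {"the": 0.8, "a": 0.2},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.6, STOP: 0.4},
    "dog": {"sat": 0.6, STOP: 0.4},
    "sat": {STOP: 1.0},
}

def sample_sentence(theta, max_len=20):
    history, words = STOP, []
    for _ in range(max_len):
        nxt = random.choices(list(theta[history]), weights=theta[history].values())[0]
        if nxt == STOP:
            break
        words.append(nxt)
        history = nxt
    return words

print(sample_sentence(theta))
```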

n-gram Models: Assessment
Pros:
- Easy to understand
- Cheap (with modern hardware; Lin and Dyer, 2010)
- Good enough for machine translation, speech recognition, ...
Cons:
- Markov assumption is linguistically inaccurate (but not as bad as unigram models!)
- Data sparseness; high variance in the estimator
- Out-of-vocabulary problem

Dealing with Out-of-Vocabulary Terms
- Define a special OOV or "unknown" symbol, unk.
  - Transform some (or all) rare words in the training data to unk.
  - You cannot fairly compare two language models that apply different unk treatments!
- Build a language model at the character level.
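A sketch of the first strategy: replace training words whose count falls below a threshold with a special unk token (the threshold and the token's spelling are arbitrary choices here); at test time, unseen words are mapped to the same token:

```python
from collections import Counter

UNK = "<unk>"  # arbitrary spelling for the unknown-word symbol

def unkify(corpus, min_count=2):
    """Replace words seen fewer than min_count times with the UNK token."""
    counts = Counter(w for sent in corpus for w in sent)
    return [[w if counts[w] >= min_count else UNK for w in sent] for sent in corpus]

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
print(unkify(corpus))  # [['the', '<unk>', 'sat'], ['the', '<unk>', 'sat']]
```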

To-Do List
- Collins (2011); Jurafsky and Martin (2016)

References
- Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. Large language models in machine translation. In Proc. of EMNLP-CoNLL, 2007.
- Peter F. Brown, Peter V. Desouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. Class-based n-gram models of natural language. Computational Linguistics, 18(4), 1992.
- Stanley F. Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Center for Research in Computing Technology, Harvard University, 1998.
- Michael Collins. Course notes for COMS W4705: Language modeling, 2011.
- Daniel Jurafsky and James H. Martin. N-grams (draft chapter), 2016.
- Dan Klein. Lagrange multipliers without permanent scarring. Undated.
- Jimmy Lin and Chris Dyer. Data-Intensive Text Processing with MapReduce. Morgan and Claypool, 2010.

Extras

Relative Frequency Estimation is the MLE (Unigram Model)
The maximum likelihood estimation problem:
    max_{θ ∈ Δ_V} p_θ(x_{1:n}),
where Δ_V is the probability simplex over V (θ_v ≥ 0 for all v, ∑_{v ∈ V} θ_v = 1).
Logarithm is a monotonic function:
    max_{θ ∈ Δ_V} p_θ(x_{1:n}) = exp max_{θ ∈ Δ_V} log p_θ(x_{1:n})
Each sequence is an independent sample from the model:
    max_{θ ∈ Δ_V} log p_θ(x_{1:n}) = max_{θ ∈ Δ_V} log ∏_{i=1}^{n} p_θ(x_i)
Plug in the form of the unigram model:
    = max_{θ ∈ Δ_V} log ∏_{i=1}^{n} ∏_{j=1}^{l_i} θ_{[x_i]_j}
Log of a product equals the sum of logs:
    = max_{θ ∈ Δ_V} ∑_{i=1}^{n} ∑_{j=1}^{l_i} log θ_{[x_i]_j}
Convert from tokens to types:
    = max_{θ ∈ Δ_V} ∑_{v ∈ V} c_{x_{1:n}}(v) log θ_v
Convert to a minimization problem (for consistency with textbooks):
    = min_{θ ∈ Δ_V} -∑_{v ∈ V} c_{x_{1:n}}(v) log θ_v
Use a Lagrange multiplier to convert to a less constrained problem:
    min_{θ ∈ Δ_V} -∑_{v ∈ V} c_{x_{1:n}}(v) log θ_v
    = max_{μ ≥ 0} min_{θ ∈ ℝ^V_{≥0}} -∑_{v ∈ V} c_{x_{1:n}}(v) log θ_v - μ(1 - ∑_{v ∈ V} θ_v)
    = min_{θ ∈ ℝ^V_{≥0}} max_{μ ≥ 0} -∑_{v ∈ V} c_{x_{1:n}}(v) log θ_v - μ(1 - ∑_{v ∈ V} θ_v)
Intuitively, if ∑_{v ∈ V} θ_v gets too big, μ will push toward +∞. (For more about Lagrange multipliers, see Dan Klein's tutorial, referenced at the end of these slides.)
Use first-order conditions to solve for θ in terms of μ: fixing μ, for all v, set
    0 = ∂/∂θ_v = -c_{x_{1:n}}(v)/θ_v + μ   ⟹   θ_v = c_{x_{1:n}}(v)/μ
Plug in for each θ_v:
    = max_{μ ≥ 0} -∑_{v ∈ V} c_{x_{1:n}}(v) log (c_{x_{1:n}}(v)/μ) - μ(1 - ∑_{v ∈ V} c_{x_{1:n}}(v)/μ)
Rearrange terms (using a log(a/b) = a log a - a log b and N = ∑_{v ∈ V} c_{x_{1:n}}(v)):
    = max_{μ ≥ 0} -∑_{v ∈ V} c_{x_{1:n}}(v) log c_{x_{1:n}}(v) + N log μ - μ + N
Use first-order conditions to solve for μ: set
    0 = ∂/∂μ = N/μ - 1   ⟹   μ = N
Plug in for μ:
    = -∑_{v ∈ V} c_{x_{1:n}}(v) log c_{x_{1:n}}(v) + N log N,
and, for all v ∈ V,
    θ_v = c_{x_{1:n}}(v)/μ = c_{x_{1:n}}(v)/N
... and that's the relative frequency estimate!
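A small numeric sanity check of this conclusion (not from the slides): for a toy count vector, the relative-frequency θ yields a higher value of ∑_v c(v) log θ_v than another point on the simplex:

```python
import math

counts = {"a": 3, "b": 1, "c": 1}          # toy unigram counts c(v)
N = sum(counts.values())

def loglik(theta):
    return sum(c * math.log(theta[v]) for v, c in counts.items())

mle = {v: c / N for v, c in counts.items()}    # relative frequencies
other = {"a": 0.5, "b": 0.3, "c": 0.2}         # some other point on the simplex

print(loglik(mle), loglik(other))              # the MLE value is larger
```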

Language Models as (Weighted) Finite-State Automata
(Deterministic) finite-state automaton:
- Set of k states S
- Initial state s_0 ∈ S
- Final states F ⊆ S
- Alphabet Σ
- Transitions δ : S × Σ → S
A length-l string x is in the language of the automaton iff there is a path s_0, ..., s_l such that s_l ∈ F and ∏_{i=1}^{l} [[s_i = δ(s_{i-1}, x_i)]] = 1.

Language Models as (Weighted) Finite-State Automata
(Deterministic) finite-state automaton, interpreted as a language model:
- Set of k states S: histories
- Initial state s_0 ∈ S
- Final states F ⊆ S: histories ending in the stop symbol
- Alphabet Σ: V
- Transitions δ : S × Σ → S, with weights in ℝ_{>0}
A weighted FSA defines a weight for every transition; e.g., w(h, v, δ(h, v)) = θ_{v|h}.
A length-l string x is in the language of the automaton iff there is a path s_0, ..., s_l such that s_l ∈ F and ∏_{i=1}^{l} [[s_i = δ(s_{i-1}, x_i)]] = 1.
The score of the string is the product of transition weights:
    score(x) = ∏_{i=1}^{l} w(h_i, x_i, δ(h_i, x_i))
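A sketch of a bigram model viewed as a weighted FSA: states are one-word histories, δ maps (history, word) to the next history, and a string's score is the product of transition weights. The weights are made up for illustration:

```python
STOP = "<stop>"  # placeholder stop symbol; the accepting state is the history ending in it

# w[(h, v)] = theta_{v | h}: weight of reading v in state h (bigram model as a weighted FSA).
w = {
    (STOP, "the"): 0.8, ("the", "cat"): 0.5, ("cat", "sat"): 0.6, ("sat", STOP): 1.0,
}

def delta(h, v):
    # Deterministic transition: the new state is just the most recent word.
    return v

def score(x):
    state, total = STOP, 1.0
    for v in x + [STOP]:                 # the string must end in the stop symbol
        total *= w.get((state, v), 0.0)  # missing transitions get weight 0
        state = delta(state, v)
    return total

print(score(["the", "cat", "sat"]))  # 0.8 * 0.5 * 0.6 * 1.0 = 0.24
```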

Class-Based Language Models (Brown et al., 1992)
Suppose we have a hard clustering of V, cl : V → {1, ..., k}, where k ≪ |V|.
n-gram:      p_θ(x) = ∏_{j=1}^{l} θ_{x_j | x_{j-n+1:j-1}}
             parameters: θ_{v | h}, for v ∈ V, h ∈ (V ∪ {stop})^{n-1}
             MLE: c(h v) / c(h)
class-based: p_θ(x) = ∏_{j=1}^{l} θ_{x_j | cl(x_j)} · γ_{cl(x_j) | cl(x_{j-1})}
             parameters: θ_{v | cl(v)}, for v ∈ V; γ_{i | j}, for i, j ∈ {1, ..., k}
             MLE: c(v) / c(cl(v)); c(j i) / c(j)
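A sketch of the class-based bigram probability under a hard clustering cl, ignoring boundary handling for brevity; the clusters and parameter values are made up:

```python
# Hard clustering cl : V -> {1, ..., k}
cl = {"the": 1, "a": 1, "cat": 2, "dog": 2, "sat": 3}

# theta[v] = p(v | cl(v)); gamma[(j, i)] = p(class i | previous class j)
theta = {"the": 0.7, "a": 0.3, "cat": 0.5, "dog": 0.5, "sat": 1.0}
gamma = {(1, 2): 0.9, (2, 3): 0.8}

def p_class_bigram(x):
    prob = 1.0
    for prev, cur in zip(x, x[1:]):
        prob *= theta[cur] * gamma.get((cl[prev], cl[cur]), 0.0)
    return prob

print(p_class_bigram(["the", "cat", "sat"]))  # (0.5 * 0.9) * (1.0 * 0.8) = 0.36
```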
