Speech Recognition Lecture 5: N-gram Language Models. Eugene Weinstein Google, NYU Courant Institute Slide Credit: Mehryar Mohri
2 Components

Acoustic and pronunciation model:
Pr(o | w) = Σ_{d,c,p} Pr(o | d) Pr(d | c) Pr(c | p) Pr(p | w).
Pr(o | d): observation seq. given distribution seq. (acoustic model)
Pr(d | c): distribution seq. given CD phone seq.
Pr(c | p): CD phone seq. given phoneme seq.
Pr(p | w): phoneme seq. given word seq.
Language model: Pr(w), distribution over word sequences.
3 Language Models

Definition: a probability distribution Pr[w] over sequences of words w = w_1 ... w_k.
Critical component of a speech recognition system.
Problems:
Learning: use a large text corpus (e.g., several million words) to estimate Pr[w]. Models in this course: n-gram models.
Efficiency: computational representation and use.
4 This Lecture

n-gram models: definition and problems
Good-Turing estimate
Smoothing techniques
Evaluation
Representation of n-gram models
Shrinking LMs based on probabilistic automata
5 N-Gram Models

Definition: an n-gram model is a probability distribution based on the nth-order Markov assumption:
∀i, Pr[w_i | w_1 ... w_{i−1}] = Pr[w_i | h_i], with |h_i| ≤ n − 1.
Most widely used language models.
Consequence: by the chain rule,
Pr[w] = ∏_{i=1}^k Pr[w_i | w_1 ... w_{i−1}] = ∏_{i=1}^k Pr[w_i | h_i].
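To make the Markov factorization concrete, here is a minimal Python sketch (not from the slides) that scores a sentence with a trigram model via the chain rule; the probability table `probs` is a hypothetical stand-in for a trained model.

```python
import math

# Hypothetical trained trigram probabilities: (w1, w2) history -> {w3: Pr[w3 | w1 w2]}
probs = {
    ("<s>", "<s>"): {"the": 0.5, "a": 0.5},
    ("<s>", "the"): {"cat": 1.0},
    ("the", "cat"): {"</s>": 1.0},
}

def sentence_log_prob(words, probs, n=3):
    """Chain rule with an (n-1)-word history: Pr[w] = prod_i Pr[w_i | h_i]."""
    padded = ["<s>"] * (n - 1) + words + ["</s>"]
    logp = 0.0
    for i in range(n - 1, len(padded)):
        history = tuple(padded[i - n + 1 : i])
        logp += math.log(probs[history][padded[i]])
    return logp

print(math.exp(sentence_log_prob(["the", "cat"], probs)))  # 0.5
```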
6 Maximum Likelihood

Likelihood: probability of observing sample x_1, ..., x_m under distribution p ∈ P, which, given the independence assumption, is
Pr[x_1, ..., x_m] = ∏_{i=1}^m p(x_i).
Principle: select the distribution maximizing the sample probability,
p* = argmax_{p ∈ P} ∏_{i=1}^m p(x_i), or equivalently p* = argmax_{p ∈ P} Σ_{i=1}^m log p(x_i).
7 Example: Bernoulli Trials

Problem: find the most likely Bernoulli distribution, given a sequence of coin flips H, T, T, H, T, H, T, H, H, H, T, T, ..., H.
Bernoulli distribution: p(H) = θ, p(T) = 1 − θ.
Likelihood: l(p) = log θ^{N(H)} (1 − θ)^{N(T)} = N(H) log θ + N(T) log(1 − θ).
Solution: l is differentiable and concave;
dl(p)/dθ = N(H)/θ − N(T)/(1 − θ) = 0 ⟹ θ = N(H) / (N(H) + N(T)).
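A quick numeric sanity check of this closed-form solution (a sketch; the slide's flip sequence is truncated, so it is completed arbitrarily here with a final H):

```python
import math

flips = "HTTHTHTHHHTTH"  # 7 heads, 6 tails
nH, nT = flips.count("H"), flips.count("T")

def log_lik(theta):
    return nH * math.log(theta) + nT * math.log(1 - theta)

# A grid search over theta agrees with the closed form N(H) / (N(H) + N(T)).
grid_best = max((i / 1000 for i in range(1, 1000)), key=log_lik)
print(grid_best, nH / (nH + nT))  # both approximately 0.538
```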
8 Maximum Likelihood Estimation

Definitions:
n-gram: sequence of n consecutive words.
S: sample or corpus of size m.
c(w_1 ... w_k): count of the sequence w_1 ... w_k.
ML estimates: for c(w_1 ... w_{n−1}) ≠ 0,
Pr[w_n | w_1 ... w_{n−1}] = c(w_1 ... w_n) / c(w_1 ... w_{n−1}).
But c(w_1 ... w_n) = 0 ⟹ Pr[w_n | w_1 ... w_{n−1}] = 0!
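A minimal sketch of these estimates from raw counts (assuming whitespace-tokenized sentences; all names are illustrative):

```python
from collections import Counter

def ml_estimate(corpus, n=2):
    """Pr[w_n | w_1...w_{n-1}] = c(w_1...w_n) / c(w_1...w_{n-1})."""
    ngrams, histories = Counter(), Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - n + 1):
            ngrams[tuple(words[i:i + n])] += 1
            histories[tuple(words[i:i + n - 1])] += 1
    return {g: c / histories[g[:-1]] for g, c in ngrams.items()}

p = ml_estimate(["a b a b", "a b b"])
print(p[("a", "b")])            # 1.0: in this corpus, "a" is always followed by "b"
print(p.get(("a", "a"), 0.0))   # 0.0: an unseen bigram gets zero probability -- the sparsity problem
```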
9 N-Gram Model Problems

Sparsity: assigning probability zero to sequences not found in the sample ⟹ speech recognition errors.
Smoothing: adjusting ML estimates to reserve probability mass for unseen events. Central techniques in language modeling.
Class-based models: create models based on classes (e.g., DAY) or phrases.
Representation: for |Σ| = 100,000, the number of bigrams is 10^10, the number of trigrams 10^15!
Weighted automata: exploiting sparsity.
10 Smoothing Techniques

Typical form: interpolation of n-gram models, e.g., trigram, bigram, and unigram frequencies:
Pr[w_3 | w_1 w_2] = α_1 c(w_3 | w_1 w_2) + α_2 c(w_3 | w_2) + α_3 c(w_3).
Some widely used techniques:
Katz back-off models (Katz, 1987).
Interpolated models (Jelinek and Mercer, 1980).
Kneser-Ney models (Kneser and Ney, 1995).
11 Good-Turing Estimate (Good, 1953)

Definitions:
Sample S of m words drawn from vocabulary Σ.
c(x): count of word x in S.
S_k: set of words appearing k times.
M_k: probability of drawing a word in S_k.
Good-Turing estimate of M_k:
G_k = (k + 1) |S_{k+1}| / m.
12 Good-Turing Count Estimate

Definition: let r = c(w_1 ... w_k) and n_r = |S_r|; then
c*(w_1 ... w_k) = G_r · m / |S_r| = (r + 1) n_{r+1} / n_r.
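A direct sketch of this count adjustment from a frequency-of-frequencies table (illustrative; real implementations smooth the n_r before applying the formula):

```python
from collections import Counter

def good_turing_counts(counts):
    """c*(x) = (r + 1) * n_{r+1} / n_r with r = c(x), n_r = |S_r| (raw, unsmoothed n_r)."""
    n = Counter(counts.values())  # n_r: number of types observed exactly r times
    return {x: (r + 1) * n[r + 1] / n[r] for x, r in counts.items()}

counts = {"a": 3, "b": 1, "c": 1, "d": 2, "e": 1}
print(good_turing_counts(counts))
# Words seen once get c* = 2 * n_2 / n_1 = 2/3; the largest count maps to 0
# because n_4 = 0 here -- in practice the n_r are smoothed first.
```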
13 Simple Method

Additive smoothing (Jeffreys, 1948): add δ ∈ [0, 1] to the count of each n-gram:
Pr[w_n | w_1 ... w_{n−1}] = (c(w_1 ... w_n) + δ) / (c(w_1 ... w_{n−1}) + δ |Σ|).
Poor performance (Gale and Church, 1994).
Not a principled attempt to estimate or make use of an estimate of the missing mass.
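For completeness, a one-function sketch of add-δ smoothing; `V` stands for the vocabulary size |Σ|:

```python
def additive_prob(ngram_count, history_count, V, delta=0.5):
    """Pr[w_n | w_1..w_{n-1}] = (c(w_1..w_n) + delta) / (c(w_1..w_{n-1}) + delta * V)."""
    return (ngram_count + delta) / (history_count + delta * V)

# An unseen trigram in a 10,000-word vocabulary whose history occurred 5 times:
print(additive_prob(0, 5, 10_000))  # small but non-zero
```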
14 Katz Back-off Model (Katz, 1987)

Idea: back off to a lower-order model for zero counts.
If c(w_1^{n−1}) = 0, then Pr[w_n | w_1^{n−1}] = Pr[w_n | w_2^{n−1}].
Otherwise,
Pr[w_n | w_1^{n−1}] = d_{c(w_1^n)} · c(w_1^n) / c(w_1^{n−1}) if c(w_1^n) > 0;
Pr[w_n | w_1^{n−1}] = β · Pr[w_n | w_2^{n−1}] otherwise,
where d_k is a discount coefficient such that
d_k = 1 if k > K; d_k ≈ (k + 1) n_{k+1} / (k n_k) otherwise.
Katz suggests K = 5.
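A sketch of the back-off control flow for a bigram model (the discount coefficients `d` and back-off weight `beta` are assumed precomputed as on the slide; this illustrates the recursion, not a full Katz implementation):

```python
def katz_prob(w, history, counts, d, beta, lower_prob):
    """Discounted ML estimate for seen n-grams; back off to the lower-order model otherwise."""
    h_count = counts.get(history, 0)
    if h_count == 0:                        # unseen history: use the lower-order model directly
        return lower_prob[w]
    ng_count = counts.get(history + (w,), 0)
    if ng_count > 0:
        return d[ng_count] * ng_count / h_count
    return beta[history] * lower_prob[w]    # reserved mass spread over unseen continuations

counts = {("the",): 2, ("the", "cat"): 1}
d = {1: 0.5}                  # discount coefficient for count-1 n-grams (illustrative value)
beta = {("the",): 0.75}       # back-off weight ensuring normalization (assumed precomputed)
lower = {"cat": 0.2, "dog": 0.1}
print(katz_prob("cat", ("the",), counts, d, beta, lower))  # 0.5 * 1/2 = 0.25
print(katz_prob("dog", ("the",), counts, d, beta, lower))  # 0.75 * 0.1 = 0.075
```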
15 Interpolated Models (Jelinek and Mercer, 1980)

Idea: interpolation of different-order models:
Pr[w_3 | w_1 w_2] = α c(w_3 | w_1 w_2) + β c(w_3 | w_2) + (1 − α − β) c(w_3), with 0 ≤ α, β ≤ 1.
α and β are estimated by using held-out samples:
sample split into multiple buckets, set of interpolation parameters learned per bucket;
optimization using the expectation-maximization (EM) algorithm;
deleted interpolation: k-fold cross-validation.
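One EM-style update of the interpolation weights on held-out data, simplified to a single bucket (a sketch; the held-out frequency triples are made up):

```python
def interpolate(lams, comps):
    """Mix trigram/bigram/unigram relative frequencies with weights summing to 1."""
    return sum(l * c for l, c in zip(lams, comps))

def em_step(lams, heldout):
    """One EM update; heldout is a list of (trigram, bigram, unigram) frequency triples."""
    post_sums = [0.0] * len(lams)
    for comps in heldout:
        total = interpolate(lams, comps)
        for j, (l, c) in enumerate(zip(lams, comps)):
            post_sums[j] += l * c / total   # posterior responsibility of component j
    z = sum(post_sums)
    return [s / z for s in post_sums]

lams = [1 / 3] * 3
heldout = [(0.5, 0.3, 0.1), (0.0, 0.2, 0.1), (0.7, 0.4, 0.2)]
for _ in range(20):
    lams = em_step(lams, heldout)
print(lams)  # converges toward weights favoring the trigram component on this data
```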
16 Kneser-Ney Model (Kneser and Ney, 1995)

Idea: combination of back-off and interpolation, but backing off to a lower-order model based on counts of contexts. Extension of absolute discounting.
Pr[w_3 | w_1 w_2] = max{0, c(w_1 w_2 w_3) − D} / c(w_1 w_2) + α · c(· w_3) / Σ_w c(· w),
where D is a constant and c(· w) counts the distinct contexts in which w appears.
Modified version (Chen and Goodman, 1998): D a function of c(w_1 w_2 w_3).
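A minimal sketch of the bigram case, highlighting the continuation counts c(·w) that drive the lower-order model (D = 0.75 is a typical illustrative value; names are made up):

```python
from collections import Counter, defaultdict

def kneser_ney_bigram(bigrams, D=0.75):
    """Absolute discounting with a continuation-count lower-order model."""
    big = Counter(bigrams)
    uni = Counter(w1 for w1, _ in bigrams)
    contexts = defaultdict(set)
    for w1, w2 in bigrams:
        contexts[w2].add(w1)                 # distinct left contexts of w2
    total_contexts = sum(len(s) for s in contexts.values())
    p_cont = {w: len(s) / total_contexts for w, s in contexts.items()}

    def prob(w2, w1):
        # back-off weight: reserved mass proportional to distinct continuations of w1
        distinct = len({b for a, b in big if a == w1})
        alpha = D * distinct / uni[w1]
        return max(big[(w1, w2)] - D, 0) / uni[w1] + alpha * p_cont.get(w2, 0.0)

    return prob

p = kneser_ney_bigram([("a", "b"), ("a", "b"), ("b", "a"), ("a", "c")])
print(p("b", "a"))  # discounted ML estimate plus continuation mass
```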
17 Evaluation

Average log-likelihood of a test sample of size N:
L(q) = (1/N) Σ_{k=1}^N log_2 q[w_k | h_k], with |h_k| ≤ n − 1.
Perplexity: the perplexity PP(q) of the model ("average branching factor") is defined as
PP(q) = 2^{−L(q)}.
For English texts, typically PP(q) ∈ [50, 1000], i.e., 6 ≤ −L(q) ≤ 10 bits.
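Perplexity from the average log-likelihood, as a sketch:

```python
import math

def perplexity(probs):
    """PP = 2^{-L}, with L = (1/N) * sum_k log2 q[w_k | h_k]."""
    L = sum(math.log2(p) for p in probs) / len(probs)
    return 2 ** (-L)

# A model assigning 1/8 to every word has an 8-way average branching factor:
print(perplexity([0.125] * 100))  # 8.0
```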
18 In Practice

Evaluation: an empirical evaluation based on the word error rate of a speech recognizer is often better than perplexity.
n-gram order: typically n = 3, 4, or 5; higher-order n-grams typically do not yield any benefit.
Smoothing: small differences between back-off, interpolated, and other models (Chen and Goodman, 1998).
Special symbols: beginning and end of sentences, start and stop symbols.
19 Example: Bigram Model

[Figure: weighted automaton for a bigram model with failure (φ) transitions, estimated from the corpus "<s> b a a a a </s>", "<s> b a a a a </s>", "<s> a </s>"; arcs are labeled word/weight with negative-log probabilities, e.g., a/0.405, φ/4.856.]
20 Failure Transitions (Aho and Corasick, 1975; Mohri, 1997)

Definition: a failure transition φ at state q of an automaton is a transition taken at that state, without consuming any symbol, when no regular transition at q has the desired input label. Thus, a failure transition corresponds to the semantics of "otherwise".
Advantages:
Originally used in string matching.
More compact representation.
Dealing with unknown alphabet symbols.
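The "otherwise" semantics can be sketched with dictionaries standing in for transitions (toy states and topology, not any library's actual API):

```python
def next_state(state, symbol, trans, fail):
    """Follow a regular transition if one matches; otherwise take failure transitions."""
    while symbol not in trans.get(state, {}):
        if state not in fail:          # no failure transition left: symbol is rejected
            return None
        state = fail[state]            # consume no input, just drop to the fallback state
    return trans[state][symbol]

# Toy trigram-style topology: bigram history "ab" falls back to unigram history "b".
trans = {"ab": {"a": "ba"}, "b": {"c": "bc"}, "": {"d": "d"}}
fail = {"ab": "b", "b": ""}
print(next_state("ab", "c", trans, fail))  # "bc" -- found after one failure transition
```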
21 Weighted Automata Representation

[Figure: representation of a trigram model using failure transitions; states encode histories such as w_{i−2} w_{i−1} and w_{i−1}, with φ-transitions backing off to lower-order histories and ultimately to the unigram state ε.]
22 Approximate Representation

The cost of a representation without failure transitions and ε-transitions is prohibitive: for an n-gram model, |Σ|^{n−1} states and |Σ|^n transitions are needed.
An exact on-the-fly representation is possible, but with a drawback: no offline optimization.
Approximation (empirically, limited accuracy loss):
φ-transitions replaced by ε-transitions;
log semiring replaced by tropical semiring.
Alternative: exact representation with ε-transitions (Allauzen, Roark, and Mohri, 2003).
23 Shrinking

Idea: remove some n-grams from the model while minimally affecting its quality.
Main motivation: real-time speech recognition (speed and memory).
Method of (Seymore and Rosenfeld, 1996): rank n-grams (w, h) according to the difference of log probabilities before and after shrinking:
c(wh) [log p[w | h] − log p'[w | h]].
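A sketch of this ranking with hypothetical probability tables (the shrunk probabilities p' would come from re-estimating the model after removing each n-gram):

```python
import math

def shrink_rank(ngrams, c, p, p_shrunk):
    """Rank n-grams by c(wh) * (log p[w|h] - log p'[w|h]); the lowest-ranked are dropped first."""
    score = lambda hw: c[hw] * (math.log(p[hw]) - math.log(p_shrunk[hw]))
    return sorted(ngrams, key=score)

# Hypothetical values: dropping ("of", "the") barely changes its probability, so it ranks low.
c = {("of", "the"): 50, ("york", "city"): 30}
p = {("of", "the"): 0.4, ("york", "city"): 0.6}
p_shrunk = {("of", "the"): 0.38, ("york", "city"): 0.1}
print(shrink_rank(list(c), c, p, p_shrunk))  # ("of", "the") first: best removal candidate
```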
24 Shrinking

Method of (Stolcke, 1998): greedy removal of n-grams based on the relative entropy D(p ‖ p') of the models before and after removal, computed independently for each n-gram.
Slightly lower perplexity;
but ranking close to that of (Seymore and Rosenfeld, 1996), both in the definition and in empirical results.
25 LMs Based on Probabilistic Automata (Allauzen, Mohri, and Roark, 2003)

Definition: expected count of sequence x in probabilistic automaton A:
c(x) = Σ_{u ∈ Σ*} |u|_x A(u),
where |u|_x is the number of occurrences of x in u.
Computation: use counting weighted transducers; can be generalized to other moments of the counts.
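The expected count can be checked by brute force on a toy finite distribution over strings (a sketch; the actual computation composes the automaton with a counting transducer, as on the next slides):

```python
def expected_count(x, dist):
    """c(x) = sum_u |u|_x * A(u): occurrences of x weighted by string probability."""
    def occurrences(x, u):
        return sum(u[i:i + len(x)] == x for i in range(len(u) - len(x) + 1))
    return sum(occurrences(x, u) * p for u, p in dist.items())

dist = {"abab": 0.5, "aab": 0.3, "b": 0.2}   # a toy probabilistic automaton's language
print(expected_count("ab", dist))             # 2*0.5 + 1*0.3 + 0 = 1.3
```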
26 Example: Bigram Transducer

[Figure: weighted transducer T over Σ = {a, b}, with input-erasing self-loops a:ε/1, b:ε/1 at the initial and final states around a two-arc core a:a/1, b:b/1.]
X ∘ T computes the (expected) count of each bigram in {aa, ab, ba, bb} occurring in X.
27 Counting Transducers

[Figure: counting transducer over Σ = {a, b}: input-erasing self-loops b:ε/1, a:ε/1 around an X:X/1 arc. For x = ab and input bbabaabba, the two accepting paths output εεabεεεεε and εεεεεabεε.]
X is an automaton representing a string or any other regular expression.
28 References

Cyril Allauzen, Mehryar Mohri, and Brian Roark. Generalized algorithms for constructing statistical language models. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), Sapporo, Japan, July 2003.
Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mercer. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479, 1992.
Stanley Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University, 1998.
William Gale and Kenneth W. Church. What's wrong with adding one? In N. Oostdijk and P. de Haan, editors, Corpus-Based Research into Language. Rodopi, Amsterdam, 1994.
Irving J. Good. The population frequencies of species and the estimation of population parameters. Biometrika, 40:237-264, 1953.
Frederick Jelinek and Robert L. Mercer. Interpolated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice, 1980.

29 References

Slava Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, 35:400-401, 1987.
Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), volume 1, pages 181-184, 1995.
David A. McAllester and Robert E. Schapire. On the convergence rate of Good-Turing estimators. In Proceedings of the Conference on Learning Theory (COLT), pages 1-6, 2000.
Hermann Ney, Ute Essen, and Reinhard Kneser. On structuring probabilistic dependences in stochastic language modeling. Computer Speech and Language, 8:1-38, 1994.
Kristie Seymore and Ronald Rosenfeld. Scalable backoff language models. In Proceedings of the International Conference on Spoken Language Processing (ICSLP), 1996.
Andreas Stolcke. Entropy-based pruning of back-off language models. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, 1998.

30 References

Ian H. Witten and Timothy C. Bell. The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory, 37(4):1085-1094, 1991.