Advanced topics in language modeling: MaxEnt, features and marginal distribution constraints

Advanced topics in language modeling: MaxEnt, features and marginal distribution constraints. Brian Roark, Google. NYU CSCI-GA.2585 Speech Recognition, Wednesday, April 15, 2015. (Don't forget to pay your taxes!)

Key Points. Maximum Entropy (MaxEnt) models provide a flexible alternative to standard n-gram multinomial language models. They are feature-based models with brute-force normalization over Σ, and feature definitions are critical to the approach. We want expected feature frequency to equal actual feature frequency. Marginal distributions are also matched in Kneser-Ney smoothing, a widely used method for n-gram language modeling. It is instructive to compare the features and optimization methods of MaxEnt and n-gram models.

Agenda: N-gram (Markov) language models and automata encoding; Maximum Entropy (MaxEnt) language modeling (features versus standard n-gram models; parameter optimization); backoff features in MaxEnt language models (Biadsy et al., Interspeech 2014); Kneser-Ney language models and marginalization constraints; imposing marginalization constraints on smoothed models (Roark et al., ACL 2013).

Language models for speech recognition. Assuming some familiarity with the role of language models in automatic speech recognition (ASR): the language model score combines with the acoustic score as a prior. Standard language modeling approach: closed vocabulary Σ, multinomial model over w ∈ Σ; the multinomial is based on the history of preceding words; equivalence classes of histories are used for parameter estimation; equivalence is defined by history suffix (Markov assumption); regularization relies on the link from a history to its longest proper suffix.

N-gram modeling. Today, talking about backoff models, typically recursively defined:

$$P(w \mid h) = \begin{cases} \tilde{P}(w \mid h) & \text{if } c(hw) > 0 \\ \alpha(h)\, P(w \mid h') & \text{otherwise} \end{cases}$$

where $w = w_i$, $h = w_{i-k} \dots w_{i-1}$, $h' = w_{i-k+1} \dots w_{i-1}$, such that $0 \le \sum_{w : c(hw) > 0} \tilde{P}(w \mid h) \le 1$ and $\alpha(h)$ ensures normalization. There are many ways to estimate $\tilde{P}$: Katz, Absolute Discounting (AD), etc. Kneser-Ney is a variant of AD that modifies lower order parameters to preserve observed marginal distributions.
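To make the recursion concrete, here is a minimal sketch (hypothetical dict-based storage, not any particular toolkit's API) of the backoff probability lookup:

```python
# Minimal sketch: recursive backoff lookup. `prob[(h, w)]` holds the
# discounted estimates for observed n-grams; `alpha[h]` holds backoff weights.
def backoff_prob(w, h, prob, alpha, floor=1e-12):
    """Return P(w | h), with the history h given as a tuple of words."""
    if (h, w) in prob:                 # c(hw) > 0: use the stored estimate
        return prob[(h, w)]
    if not h:                          # empty history and w still unseen:
        return floor                   # out-of-vocabulary floor (assumption)
    return alpha.get(h, 1.0) * backoff_prob(w, h[1:], prob, alpha, floor)
```

For example, `backoff_prob("the", ("going", "to"), prob, alpha)` falls back to the bigram ("to", "the") and then to the unigram if needed.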

Fragment of n-gram automata representation. [Figure: fragment of an n-gram automaton (real semiring), with states including h, h', and hw; word arcs labeled w/β(hw) and w/β(h'w), and φ (failure) backoff arcs weighted α(h,h').]

Log linear modeling. A general stochastic modeling approach that does not assume a particular WFSA structure, though a WFSA structure can be imposed for a particular feature set, e.g., n-grams. The flexibility of the modeling allows for many overlapping features, e.g., using unigram, bigram and trigram features simultaneously, or using POS-tags or morphologically-derived features. Generative or discriminative training can be used. These are commonly used models for other sequence processing problems.

MaxEnt language modeling. Define a d-dimensional vector of features Φ, e.g., $\Phi_{1000}(h_i w_i) = 1$ if $w_{i-1}$ is "to" and $w_i$ is "the", 0 otherwise. Estimate a d-dimensional parameter vector Θ. Then

$$P_\Theta(w_i \mid h_i) = \frac{\exp(\Theta \cdot \Phi(h_i w_i))}{\sum_{w \in \Sigma} \exp(\Theta \cdot \Phi(h_i w))} = \frac{\exp\!\left(\sum_{s=1}^{d} \Theta_s \Phi_s(h_i w_i)\right)}{\sum_{w \in \Sigma} \exp\!\left(\sum_{s=1}^{d} \Theta_s \Phi_s(h_i w)\right)}$$

$$\log P_\Theta(w_i \mid h_i) = \Theta \cdot \Phi(h_i w_i) - \log \sum_{w \in \Sigma} \exp(\Theta \cdot \Phi(h_i w))$$
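To make the brute-force normalization concrete, here is a minimal sketch (toy setup; `vocab` and `features` are assumed inputs, not from the lecture) of computing $P_\Theta(w \mid h)$ by summing over the whole vocabulary:

```python
import numpy as np

# Minimal sketch: P_Theta(w | h) with brute-force normalization over Sigma.
# `features(h, w)` returns the indices of the active binary features Phi_s(hw).
def maxent_prob(w, h, theta, vocab, features):
    scores = np.array([sum(theta[s] for s in features(h, v)) for v in vocab])
    scores -= scores.max()                         # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax over Sigma
    return probs[vocab.index(w)]
```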

Finding optimal model parameters Θ. Train the model by maximizing the likelihood of the training corpus. Given a corpus C of N observations, find Θ that maximizes:

$$LL(\Theta; C) = \sum_{w_i \in C} \log P_\Theta(w_i \mid h_i) = \sum_{w_i \in C} \left[\Theta \cdot \Phi(h_i w_i) - \log \sum_{w \in \Sigma} \exp(\Theta \cdot \Phi(h_i w))\right] = \sum_{w_i \in C} \Theta \cdot \Phi(h_i w_i) - \sum_{w_i \in C} \log \sum_{w \in \Sigma} \exp(\Theta \cdot \Phi(h_i w))$$

The maximum likelihood Θ is where the derivative of LL(Θ; C) is zero.

Partial derivative wrt. a particular dimension.

$$LL(\Theta; C) = \sum_{w_i \in C} \Theta \cdot \Phi(h_i w_i) - \sum_{w_i \in C} \log \sum_{w \in \Sigma} \exp(\Theta \cdot \Phi(h_i w))$$

Take the partial derivative wrt. dimension j, holding all other dimensions constant. Useful facts about derivatives: $\frac{d}{dx} \log f(x) = \frac{1}{f(x)} \frac{d}{dx} f(x)$ and $\frac{d}{dx} \exp f(x) = \exp f(x)\, \frac{d}{dx} f(x)$. Then:

$$\frac{\partial LL(\Theta; C)}{\partial \Theta_j} = \sum_{w_i \in C} \Phi_j(h_i w_i) - \sum_{w_i \in C} \sum_{w \in \Sigma} P_\Theta(w \mid h_i)\, \Phi_j(h_i w)$$

Expected frequency vs. actual frequency. The maximum likelihood Θ has derivative zero, i.e., for all j:

$$\sum_{w_i \in C} \Phi_j(h_i w_i) = \sum_{w_i \in C} \sum_{w \in \Sigma} P_\Theta(w \mid h_i)\, \Phi_j(h_i w)$$

Observed frequency of Φ_j = expected frequency of Φ_j. Training algorithm: calculate gradients; update parameters; iterate. The objective function is convex, hence a global maximum exists. It is typical to regularize by penalizing parameter magnitude. Summing over the whole vocabulary at each point is expensive, e.g., many current ASR systems have 1M+ word vocabularies.
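A minimal sketch of one step of that training loop (toy batch gradient ascent with an L2 penalty; real systems use sparse, batched, often distributed optimizers), reusing the hypothetical `vocab` and `features` from the previous sketch:

```python
import numpy as np

# Minimal sketch: one gradient-ascent step on the regularized log-likelihood.
# `corpus` is a list of (h, w) pairs; `theta` is a NumPy parameter vector.
def gradient_step(theta, corpus, vocab, features, lr=0.1, l2=1e-3):
    grad = -l2 * theta                          # penalty on parameter magnitude
    for h, w in corpus:
        for s in features(h, w):
            grad[s] += 1.0                      # observed feature frequency
        scores = np.array([sum(theta[s] for s in features(h, v)) for v in vocab])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        for v, p in zip(vocab, probs):          # expected feature frequency
            for s in features(h, v):
                grad[s] -= p
    return theta + lr * grad
```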

Why use MaxEnt models? The approach has extreme flexibility for defining features: multiple, overlapping, non-independent features; features dealing with words, word classes, morphemes, etc. For languages with highly productive morphology (Turkish, Russian, Korean, etc.), it is hard to capture constraints in an n-gram model, so feature engineering becomes a key enterprise. There is no need for explicit smoothing other than the objective regularization mentioned above.

Biadsy et al. (2014): "Backoff Inspired Features for Maximum Entropy Language Models." Fadi Biadsy, Keith Hall, Pedro Moreno and Brian Roark. Proc. of Interspeech, 2014. Suppose you want to simply replace an n-gram model with MaxEnt: same feature space, different parameterizations of the features. What is the feature space? N-grams... what else? MaxEnt is not required to explicitly smooth to a backoff history; it has its own regularization, hence backoff is typically omitted. However, backoff-style features can improve models. The paper also explores issues in scaling to large models.

Backoff inspired features (Biadsy et al., 2014). Introduce some notation for n-gram features: let NGram(w_1...w_n, i, k) = <w_{i-k}, ..., w_{i-1}, w_i> be a binary indicator feature for the k-order n-gram ending at position i. A backoff feature fires when the full n-gram is not observed (in an n-gram model, each history has a backoff cost). Let SuffixBackoff(w_1...w_n, i, k) = <w_{i-k}, ..., w_{i-1}, BO> be a backoff feature that fires iff w_{i-k}...w_i is unobserved. What does "unobserved" mean in this case? Construct a feature dictionary of n-grams from the corpus, and only include an n-gram in the dictionary if its frequency is higher than a threshold.
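A minimal sketch (hypothetical corpus format: one sentence per string) of building such a thresholded n-gram feature dictionary:

```python
from collections import Counter

# Minimal sketch: an n-gram is "observed" only if its corpus frequency
# exceeds the threshold; backoff features fire for n-grams outside this set.
def build_ngram_dictionary(sentences, max_order=3, threshold=1):
    counts = Counter()
    for sent in sentences:
        words = sent.split()
        for k in range(1, max_order + 1):
            for i in range(len(words) - k + 1):
                counts[tuple(words[i:i + k])] += 1
    return {ng for ng, c in counts.items() if c > threshold}
```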

Generalizing backoff features. We can also define a prefix backoff feature analogously: let PrefixBackoff(w_1...w_n, i, k) = <BO, w_{i-k+1}, ..., w_i> be a backoff feature that fires iff w_{i-k}...w_i is unobserved. We can also forget more than just one word on either end: let PrefixBackoff_j(w_1...w_n, i, k) = <BO_k, w_{i-j}, ..., w_i> indicate an unobserved k-order n-gram with suffix w_{i-j}...w_i. Similarly, let SuffixBackoff_j(w_1...w_n, i, k) = <w_{i-k}, ..., w_{i-k+j}, BO_k> indicate an unobserved k-order n-gram with prefix w_{i-k}...w_{i-k+j}.

Example features. Let the sentence S = "we will save the quail eggs". Then: SuffixBackoff(S, 5, 3) = <will, save, the, BO>; PrefixBackoff(S, 5, 3) = <BO, save, the, quail>; SuffixBackoff_0(S, 5, 3) = <will, BO_3>; SuffixBackoff_1(S, 5, 3) = <will, save, BO_3>; PrefixBackoff_0(S, 5, 3) = <BO_3, quail>; PrefixBackoff_1(S, 5, 3) = <BO_3, the, quail>.
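A minimal sketch (assuming 1-indexed positions, as in the slide's examples) of generating these generalized backoff feature names; a real system would emit them only when the corresponding n-gram is missing from the feature dictionary:

```python
# Minimal sketch of the generalized backoff feature names (Biadsy et al., 2014).
def suffix_backoff_j(words, i, k, j):
    return tuple(words[i - k - 1 : i - k + j]) + (f"BO_{k}",)

def prefix_backoff_j(words, i, k, j):
    return (f"BO_{k}",) + tuple(words[i - j - 1 : i])

S = "we will save the quail eggs".split()
assert suffix_backoff_j(S, 5, 3, 1) == ("will", "save", "BO_3")
assert prefix_backoff_j(S, 5, 3, 1) == ("BO_3", "the", "quail")
```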

Experimental setup. Evaluation on American English speech recognition (see the paper for full details on the system and setup). MaxEnt n-grams up to 5-grams, and BO features up to 4 words. The baseline LM is also a 5-gram model (Katz backoff). N-best reranking experiments on a large-scale ASR system at Google, mixing the MaxEnt model with the baseline LM at a fixed mixture weight. Perplexity results (see paper) in a variety of conditions. Word error rate on multiple anonymized voice-search data sets, including spoken search queries, questions and YouTube queries.

Table 2 in Biadsy et al. (2014):

Test   Utts / Wds       Reranking feature set (WER)
Set    (x1000)          None    NG      NG+Pk   NG+Sk   NG+Pk+Sk
1       22.5 / 98.0     12.7    12.6    12.4    12.4    12.4
2       17.8 / 74.0     12.7    12.5    12.4    12.4    12.3
3       16.2 / 61.1     17.3    17.1    16.7    16.8    16.7
4       18.0 / 64.0     12.8    12.7    12.6    12.6    12.5
5        7.4 / 50.7     16.8    16.6    16.2    16.2    16.2
6        7.3 / 31.9     15.1    15.0    14.8    14.8    14.9
7       19.6 / 69.1     16.5    16.2    15.9    15.9    15.9
all    108.9 / 448.8    14.6    14.4    14.2    14.2    14.1
Active features (M):            197.8   170.1   160.2   162.4

Table 2: WER results on 7 subcorpora and overall, for the baseline recognizer (no reranking) versus reranking models trained with different feature sets.

Transition back to n-gram models. Recall backoff models, typically recursively defined:

$$P(w \mid h) = \begin{cases} \tilde{P}(w \mid h) & \text{if } c(hw) > 0 \\ \alpha(h)\, P(w \mid h') & \text{otherwise} \end{cases}$$

where $w = w_i$, $h = w_{i-k} \dots w_{i-1}$, $h' = w_{i-k+1} \dots w_{i-1}$, such that $0 \le \sum_{w : c(hw) > 0} \tilde{P}(w \mid h) \le 1$ and $\alpha(h)$ ensures normalization. There are many ways to estimate $\tilde{P}$: Katz, Absolute Discounting (AD), etc. Kneser-Ney is a variant of AD that modifies lower order parameters to preserve observed marginal distributions. Goodman (2001) proved this improves language models in general.

Kneser-Ney estimation. This technique modifies only the lower order distributions, to improve their utility within smoothing. Intuition: "Royce" may occur frequently in a corpus, but it is unlikely to follow anything but "Rolls"; hence its unigram probability should be small within a bigram model. Kneser-Ney is a variant of absolute discounting; the highest order n-gram probabilities are left the same:

$$P_{abs}(w_i \mid h) = \frac{C(hw_i) - \delta}{C(h)} + (1 - \lambda_h)\, P_{kn}(w_i \mid h')$$

(with absolute discount δ).

Intuition of the Kneser-Ney benefit. Build a language model automaton and randomly generate text; suppose we have a bigram model. We want randomly generated unigrams to occur about as frequently as they were observed in the corpus we trained on. If we use relative frequency for the unigram distribution, "Royce" will be generated too frequently in a randomly generated car corpus: it is generated with high frequency following "Rolls", and its unigram probability is high, hence it is also generated in other contexts. Kneser-Ney constrains the lower order models so that lower-order n-grams are not overgenerated in the random corpus.

Marginal distribution constraints. Such constraints are widely used to improve models: to train MaxEnt models based on expected feature frequency, and in Kneser-Ney models to match lower order n-gram expected frequencies. Standard n-gram models will over-generate certain n-grams, because the same observations train each order of the n-gram model; hence probability mass is duplicated at each level of the model. Marginal distribution constraints remove n-gram probability mass already accounted for in higher-order distributions. Kneser-Ney has a convenient closed-form solution.

Kneser-Ney approach. An estimation method for the lower orders of the n-gram model. Thus, if h' is the backoff history for h:

$$P(w \mid h') = \sum_{g : g' = h'} P(g, w \mid h')$$

Splitting the sum by whether the higher-order n-gram gw was observed:

$$P(w \mid h') = \sum_{g : g' = h'} P(g, w \mid h') = \sum_{\substack{g : g' = h' \\ c(gw) > 0}} P(w \mid g)\, P(g \mid h') \;+ \sum_{\substack{g : g' = h' \\ c(gw) = 0}} \alpha(g)\, P(w \mid h')\, P(g \mid h')$$

Work it out, and for AD under certain assumptions, we get:

$$P(w \mid h') = \frac{\big|\{g : g' = h',\ c(gw) > 0\}\big|}{\sum_{v \in V} \big|\{g : g' = h',\ c(gv) > 0\}\big|}$$
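The closed form above counts distinct left contexts (continuation counts). A minimal sketch (toy bigram case, not the lecture's code) of this estimate:

```python
from collections import Counter

# Minimal sketch: Kneser-Ney style lower-order estimate from continuation
# counts, i.e. how many distinct histories each word has been seen to follow.
def kn_unigram(bigram_counts):
    """bigram_counts: Counter over (history_word, word) pairs."""
    continuation = Counter(w for (h, w), c in bigram_counts.items() if c > 0)
    total = sum(continuation.values())      # number of distinct bigram types
    return {w: n / total for w, n in continuation.items()}

bigrams = Counter({("rolls", "royce"): 50, ("the", "car"): 30, ("a", "car"): 20})
print(kn_unigram(bigrams))  # "royce" gets 1/3 despite its high raw frequency
```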

Pruning Kneser-Ney. Kneser-Ney generally achieves strong model improvements, but, as noted in Siivola et al. (2007) and Chelba et al. (2010), KN doesn't play well with pruning. There are a couple of reasons put forward. First, the calculation of f(h) (state frequency) needed by pruning algorithms is not correct when using KN parameters. Second, the marginal distribution constraints no longer hold (higher order arcs have been pruned away; the model is not the same). The first problem can be fixed by carrying extra information around. Or: try imposing marginal distribution constraints on the pruned model.

Table 3 from Chelba et al. (2010). Broadcast News (128M words training; 692K test); 143k vocabulary. 4-gram models estimated and aggressively pruned using SRILM (why prune? what is perplexity?); 31M n-grams in the full model.

                            --------- Backoff ----------   ------- Interpolated -------
Smoothing method            PPL full  PPL pruned  Params    PPL full  PPL pruned  Params
Absolute Discounting        120.5     197.3       383,387   119.8     198.1       386,214
Witten-Bell                 118.8     196.3       380,372   121.6     202.3       396,424
Ristad                      126.4     203.6       395,639   -         N/A         -
Katz                        119.8     198.1       386,214   -         N/A         -
Kneser-Ney                  114.5     285.1       388,184   115.8     274.3       398,750
Kneser-Ney (Chen-Goodman)   116.3     280.6       396,217   112.8     270.7       399,129

Roark et al. (2013): "Smoothed marginal distribution constraints for language modeling." Brian Roark, Cyril Allauzen and Michael Riley. Proceedings of ACL, 2013. Beyond the problems described, some reasons to pursue our approach: we base the marginal distribution constraints on smoothed estimates, whereas Kneser-Ney is based on raw relative frequency estimates; we base the marginal distribution constraints on all orders of the model, whereas Kneser-Ney estimates order k based on just order k+1; an additional assumption in KN is that the denominator term is taken to be constant. The KN approach allows for easy count-based estimation; our approach is more compute intensive. We will now introduce some notation to make the description easier.

String notation. Alphabet of symbols V (using V instead of Σ because there will be a lot of summing). V*, V+ and V^k are as standardly defined: strings of zero or more, one or more, or exactly k symbols from V. A string h ∈ V* is an ordered list of zero or more symbols from the alphabet V. Let h[i, j] be the substring beginning at the i-th and ending at the j-th symbol; |h| denotes the length of h, i.e., h ∈ V^{|h|}. h[1, k] is a prefix of h (proper if k < |h|); h[j, |h|] is a suffix (proper if j > 1). For ease of notation, let h' = h[2, |h|], i.e., the longest suffix of h that's not h; and let ĥ = h[1, |h|-1], i.e., the longest prefix of h that's not h. Thus, the standardly defined backoff state of h is denoted h'.

N-gram model notation. For this talk, we assume a backoff n-gram model G that: provides a multinomial probability P(w | h) for all w ∈ V, h ∈ V* (we will call h the "history" and w the "word"); explicitly represents only a subset of substrings hw in the model, say hw ∈ G; if hw ∈ G, then all prefixes and suffixes of hw are assumed also to be in G; if hw ∉ G, then the probability is computed via the longest suffix h[j, |h|]w ∈ G (let $\overline{hw}$ be the longest suffix of hw s.t. $\overline{hw} \in G$). Standard backoff model formulation, with a slight notational modification:

$$P(w \mid h) = \begin{cases} \beta(hw) & \text{if } hw \in G \quad \Big(\text{s.t. } 0 \le \sum_{v : hv \in G} \beta(hv) < 1\Big) \\ \alpha(h, h')\, P(w \mid h') & \text{otherwise} \end{cases}$$

Let $\alpha(h, h) = 1$, and $\alpha(h, h[k, |h|]) = \prod_{i=2}^{k} \alpha\big(h[i-1, |h|],\, h[i, |h|]\big)$.
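A minimal sketch (hypothetical dict-based storage of single-step backoff weights, keyed by the source state) of the cumulative backoff weight α(h, h[k, |h|]) defined above:

```python
# Minimal sketch: cumulative backoff weight alpha(h, h[k, |h|]) as the product
# of single-step weights alpha(g, g') along the suffix chain from h down to
# the suffix starting at (1-indexed) position k.
def cumulative_alpha(h, k, step_alpha):
    weight = 1.0                       # alpha(h, h) = 1 by definition
    g = h                              # h as a tuple of symbols
    while len(g) > len(h) - (k - 1):   # stop on reaching the target suffix
        weight *= step_alpha[g]        # back off one symbol: g -> g[2, |g|]
        g = g[1:]
    return weight
```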

Fragment of n-gram automata representation. [Figure: fragment of an n-gram automaton (real semiring), with states including h, h', and hw; word arcs labeled w/β(hw) and w/β(h'w), and φ (failure) backoff arcs weighted α(h,h').]

Marginal distribution constraint. For notational simplicity, suppose we have an (n+1)-gram model, so the largest history in the model is of length n. For the state of interest, let |h'| = k. Let $V^{n-k}h' = \{h \in V^n : h' = h[n-k+1, n]\}$, i.e., the strings of length n with suffix h' (not just those explicitly in the model). Recall the constraint in the former notation:

$$P(w \mid h') = \sum_{g : g' = h'} P(g, w \mid h')$$

In our current notation:

$$P(w \mid h') = \sum_{h \in V^{n-k}h'} P(h, w \mid h')$$

Marginal distribution constraint. We can state our marginal distribution constraint as follows:

$$P(w \mid h') = \sum_{h \in V^{n-k}h'} P(h, w \mid h') = \sum_{h \in V^{n-k}h'} P(w \mid h)\, P(h \mid h') = \sum_{h \in V^{n-k}h'} P(w \mid h)\, \frac{P(h)}{\sum_{g \in V^{n-k}h'} P(g)} = \frac{1}{\sum_{g \in V^{n-k}h'} P(g)} \sum_{h \in V^{n-k}h'} P(w \mid h)\, P(h)$$

Only interested in h'w ∈ G:

$$P(w \mid h') = \frac{1}{\sum_{g \in V^{n-k}h'} P(g)}\sum_{h \in V^{n-k}h'} P(w \mid h)\, P(h) = \frac{1}{\sum_{g \in V^{n-k}h'} P(g)} \Bigg( \sum_{\substack{h \in V^{n-k}h': \\ |\overline{hw}| > |h'w|}} \alpha(h, \widehat{\overline{hw}})\, \beta(\overline{hw})\, P(h) \;+ \sum_{\substack{h \in V^{n-k}h': \\ \overline{hw} = h'w}} \alpha(h, h')\, \beta(h'w)\, P(h) \Bigg)$$

Multiplying through by the normalizer:

$$P(w \mid h') \sum_{g \in V^{n-k}h'} P(g) = \sum_{\substack{h \in V^{n-k}h': \\ |\overline{hw}| > |h'w|}} \alpha(h, \widehat{\overline{hw}})\, \beta(\overline{hw})\, P(h) \;+\; \beta(h'w) \sum_{\substack{h \in V^{n-k}h': \\ \overline{hw} = h'w}} \alpha(h, h')\, P(h)$$

Solve for β(h'w):

$$\beta(h'w) = \frac{\displaystyle P(w \mid h') \sum_{g \in V^{n-k}h'} P(g) \;-\; \sum_{\substack{h \in V^{n-k}h': \\ |\overline{hw}| > |h'w|}} \alpha(h, \widehat{\overline{hw}})\, \beta(\overline{hw})\, P(h)}{\displaystyle \sum_{\substack{h \in V^{n-k}h': \\ \overline{hw} = h'w}} \alpha(h, h')\, P(h)}$$

Numerator term #1 (in the equation above): the product of (1) the smoothed probability of w given h' and (2) the summed probability of all h ∈ V^n that back off to h' (not just those in the model). Note that if h ∉ G, it still has probability: we end up in a lower-order state. Key difference with KN: smoothed, not unsmoothed, models.

Numerator term #2: subtract out the probability mass of hw n-grams associated with histories h for which the h'w parameter is not accessed in the model. This implies an algorithm that estimates β(hw) in descending n-gram order. Key difference with KN: not just the next-highest order is considered.

Denominator term: divide by the weighted sum of backoff cost over the states that will access the h'w n-gram to assign probability (recall α(h, h) = 1). As with the other values, this can be calculated by a recursive algorithm. Key difference with KN: the denominator is treated as a constant in KN estimation.

Reduced equation complexity, to give intuition: schematically,

$$\beta(h'w) = \frac{A - B}{C}$$

Term A: total probability of emitting the lower-order n-gram. Term B: probability already accounted for by higher-order n-grams. Term C: backed-off probability mass from states using the lower-order parameter.

Stationary history probabilities. All three terms require P(h) for all histories h. For histories h ∉ G, their probability mass accumulates in their longest suffix that is in G. The model is a deterministic automaton, and each h ∈ G is an equivalence class. We are interested in the steady-state probabilities of h as t → ∞, currently computed using the power method. Using backoff, every state has |V| transitions to new states; for efficiency, convert to a real-semiring representation with epsilons. It converges relatively quickly in practice.
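A minimal sketch (toy dense matrix; the actual computation runs on the sparse backoff automaton) of the power method for steady-state probabilities:

```python
import numpy as np

# Minimal sketch: power iteration for the stationary distribution of a
# row-stochastic transition matrix over history states.
def stationary(P, tol=1e-10, max_iter=1000):
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P                  # one step of the Markov chain
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi

P = np.array([[0.9, 0.1], [0.5, 0.5]])   # hypothetical 2-state chain
print(stationary(P))                      # approx. [0.833, 0.167]
```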

Key differences with Kneser-Ney. Uses smoothed state and n-gram frequencies rather than observed ones (the regularized model is presumably an improvement over the unregularized one). Can be applied to any given backoff model, with no extra information required (though the numerator is no longer guaranteed to be > 0?). Considers the full order of the model, not just the next-highest order (though information can be memoized at the next-highest order). Does not take the denominator as a constant: the backoff weights α change with the n-gram β values, which requires iterating until the α values converge.

Implementation of the algorithm. Implemented within the OpenGrm NGram library. Worst-case complexity is O(|V|^n) for model order n, though typically far less: for every lower-order state h', worst case O(|V|^{n-2}); for every state h that backs off to h', worst case O(|V|); for every w such that h'w ∈ G, worst case O(|V|). The unigram state does typically require O(|V|^2) work (naively), which can be dramatically reduced with optimization: process higher order states first and memoize data for the next lower order; for each n-gram, accumulate probability mass for the higher order states requiring and not requiring backoff for that n-gram.

Experimental results. All results are for the English Broadcast News task. LM training/testing from LDC98T31 (1996 CSR Hub4 LM corpus); AM training from the 1996/1997 Hub4 AM data set. Automatic speech recognition experiments on the 1997 Hub4 eval set. Triphone GMM acoustic model trained on approximately 150 hours; AM training methods include semi-tied covariance, CMLLR and MMI. Multi-pass decoding strategy for speaker adaptation. The eval set consists of 32,689 words.

Experimental results. Received scripts for replicating the 2010 results from Ciprian Chelba. Use SRILM for model shrinking and perplexity calculation, converting to/from OpenFst to apply this algorithm. Nearly identical results to Chelba et al. (2010): slightly fewer n-grams in the full model, i.e., a minor data difference, and perplexities all within a couple of tenths of a point. The OpenFst/ARPA conversion adds a few extra arcs to the model: prefix/suffix n-grams that are pruned by SRILM ("holes"). Sanity check: roundtrip conversion doesn't change perplexity.

Perplexity results:

                        ------------- Perplexity --------------    n-grams (x1000)
Smoothing method        Full model   Pruned model   Pruned +MDC    model    WFST
Abs. Disc.              120.4        197.1          187.4 (-9.7)   382.3    389.2
Witten-Bell             118.7        196.1          185.7 (-10.4)  379.3    385.0
Ristad                  126.2        203.4          190.3 (-13.1)  394.6    395.9
Katz                    119.7        197.9          187.5 (-10.4)  385.1    390.8
Kneser-Ney              114.4        234.1          -              375.4    -
AD,WB,Katz mixture      118.5        196.6          186.3 (-10.3)  388.7    -

The Kneser-Ney model was pruned using the -prune-history-lm switch in SRILM.

Iterating estimation of steady state probabilities. [Plot: perplexity (y-axis, roughly 180 to 205) versus iterations of estimation, i.e., recalculating the steady state probabilities (x-axis, 0 to 6), for Witten-Bell, Ristad, Katz, Absolute Discounting, and the WB,AD,Katz mixture.]

WER results (word error rate):

                     ----- First pass ------    ------ Rescoring ------
Smoothing method     Pruned model  Pruned+MDC   Pruned model  Pruned+MDC
Abs. Disc.           20.5          19.7         20.2          19.6
Witten-Bell          20.5          19.9         20.1          19.6
Katz                 20.5          19.7         20.2          19.7
Mixture              20.5          19.6         20.2          19.6
Kneser-Ney (a)       22.1          -            22.2          -
Kneser-Ney (b)       20.5          -            20.6          -

KN model (a): original pruning; (b): pruned using the -prune-history-lm switch in SRILM.

Discussion of results. A general method for re-estimating parameters in a backoff model; perplexity reductions achieved on all methods tried so far. The perplexity of 185.7 is the best for models of this size (by 10 points). Some initial, somewhat clear best practices at this point: it is important to iterate the estimation of steady state probabilities; it is not so important to iterate the denominator calculation for new backoff α values, but the denominator shouldn't be treated as a constant. Remaining question: how does it perform for larger models?

Larger models. See the paper for explicit numbers on less dramatically pruned models. There are smaller gains over the baselines with less pruning (less smoothing). Model degradation is observed for lightly pruned Witten-Bell models, likely due to under-regularization. Performance is similar to Kneser-Ney on the unpruned model, when starting from Katz or Absolute Discounting.

Summary of the main points in the lecture. MaxEnt models rely on good feature engineering: use not only the n-grams found in n-gram models! Backoff-inspired features are just the tip of the iceberg in terms of new features. Marginal distribution constraints are important and widely used: for training MaxEnt models, and also as the basis of Kneser-Ney language modeling. They can be tricky to impose under some conditions.