CS 136a Lecture 7 Speech Recognition Architecture: Training models with the Forward-Backward algorithm


+ September 13, 2016 Professor Meteer CS 136a Lecture 7 Speech Recognition Architecture: Training models with the Forward-Backward algorithm. Thanks to Dan Jurafsky for these slides.

+ ASR components
- Feature Extraction, MFCCs, start of AM
- HMMs, Forward, Viterbi
- N-grams and Language Modeling
- Acoustic Modeling and GMMs
- Baum-Welch (Forward-Backward)
- Search and Advanced Decoding
- Dealing with Variation

+ Speech Recognition Architecture Thanks to Dan Jurafsky for these slides

+ The Three Basic Problems for HMMs (Jack Ferguson at IDA in the 1960s)
- Problem 1 (Evaluation): Given the observation sequence O = (o_1 o_2 ... o_T) and an HMM model Φ = (A, B), how do we efficiently compute P(O | Φ), the probability of the observation sequence given the model?
- Problem 2 (Decoding): Given the observation sequence O = (o_1 o_2 ... o_T) and an HMM model Φ = (A, B), how do we choose a corresponding state sequence Q = (q_1 q_2 ... q_T) that is optimal in some sense (i.e., best explains the observations)?
- Problem 3 (Learning): How do we adjust the model parameters Φ = (A, B) to maximize P(O | Φ)?

+ The Learning Problem
- Learning: Given an observation sequence O and the set of possible states in the HMM, learn the HMM parameters A and B.
- Baum-Welch = Forward-Backward algorithm (Baum 1972)
- A special case of the EM (Expectation-Maximization) algorithm (Dempster, Laird, Rubin)
- The algorithm lets us train the transition probabilities A = {a_ij} and the emission probabilities B = {b_i(o_t)} of the HMM.

+ GMMs in Acoustic Modeling
- Multivariate Gaussian Mixture Models
- Gaussian: determines the likelihood of a real value being generated by a particular state
- Multivariate: we have a vector of values, not just one
- Mixture Models: the values may not be well modeled by a single Gaussian
- The parameters (means and variances) are learned using the forward-backward algorithm

+ Embedded Training

+ Starting out with Observable Markov Models
- How to train?
- Run the model on the observation sequence O.
- Since it's not hidden, we know which states we went through, hence which transitions and observations were used.
- Given that information, training:
  - B = {b_k(o_t)}: since every state can only generate one observation symbol, the observation likelihoods B are all 1.0
  - A = {a_ij}: a_ij = C(i → j) / Σ_{q ∈ Q} C(i → q)
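
A minimal sketch of this counting estimate for a fully observable Markov chain, assuming the state sequences are given as Python lists (the states and sequences below are invented for illustration):

```python
from collections import defaultdict

def estimate_transitions(state_sequences):
    """MLE for a fully observable chain: a_ij = C(i -> j) / sum_q C(i -> q)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in state_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    A = {}
    for i, row in counts.items():
        total = sum(row.values())            # all transitions out of state i
        A[i] = {j: c / total for j, c in row.items()}
    return A

# Toy usage with invented hot/cold state sequences
print(estimate_transitions([["H", "H", "C", "C", "H"], ["C", "H", "H"]]))
```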

+ Extending Intuition to HMMs
- For an HMM, we cannot compute these counts directly from the observed sequences
- Baum-Welch intuitions:
  - Iteratively estimate the counts.
  - Start with an estimate for a_ij and b_k, and iteratively improve the estimates
  - Get estimated probabilities by:
    - computing the forward probability for an observation
    - dividing that probability mass among all the different paths that contributed to this forward probability

+ Embedded Training
- Given: phone set, pronunciation lexicon, transcribed wavefiles
- Build a whole-sentence HMM for each sentence
- Initialize A (transition) probabilities to 0.5, or to zero
- Initialize B (observation) probabilities to the global mean and variance
- Run multiple iterations of Baum-Welch:
  - During each iteration, compute the forward and backward probabilities
  - Use them to re-estimate A and B
- Run Baum-Welch until convergence

+ The forward algorithm
- Each cell of the forward algorithm trellis, α_t(j):
  - represents the probability of being in state j
  - after seeing the first t observations
  - given the automaton (model) λ
- Each cell thus expresses the probability α_t(j) = P(o_1 o_2 ... o_t, q_t = j | λ). Thanks to Dan Jurafsky for these slides

+ We update each cell: α_t(j) = Σ_i α_{t-1}(i) a_ij b_j(o_t). Thanks to Dan Jurafsky for these slides
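
A minimal NumPy sketch of this recursion, assuming the model is given as an initial distribution pi, a transition matrix A, and an emission matrix B (the spreadsheet's explicit START/STOP states are folded away); the function name and arguments are illustrative:

```python
import numpy as np

def forward(A, B, pi, obs):
    """alpha[t, j] = P(o_1 .. o_t, q_t = j | model)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                     # initialization
    for t in range(1, T):
        # alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(o_t)
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha                                     # P(O | model) = alpha[-1].sum()
```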

+ The Backward algorithm
- We define the backward probability as follows: β_t(i) = P(o_{t+1}, o_{t+2}, ..., o_T | q_t = i, Φ)
- This is the probability of generating the partial observation sequence from time t+1 to the end, given that the HMM is in state i at time t and (of course) given Φ.

+ Inductive step of the backward algorithm (figure inspired by Rabiner and Juang): computation of β_t(i) as a weighted sum over all successive values β_{t+1}(j), i.e., β_t(i) = Σ_j a_ij b_j(o_{t+1}) β_{t+1}(j)
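
A matching NumPy sketch of the backward recursion, under the same assumptions as the forward sketch above (no explicit STOP state, so β at the last frame is set to 1 rather than to p(STOP|state) as in the Eisner spreadsheet):

```python
import numpy as np

def backward(A, B, obs):
    """beta[t, i] = P(o_{t+1} .. o_T | q_t = i, model)."""
    T, N = len(obs), A.shape[0]
    beta = np.zeros((T, N))
    beta[T - 1] = 1.0                                # no explicit STOP state here
    for t in range(T - 2, -1, -1):
        # beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta
```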

+ Intuition for re-estimation of a_ij
- We will estimate â_ij via this intuition:
  â_ij = (expected number of transitions from state i to state j) / (expected number of transitions from state i)
- Numerator intuition:
  - Assume we had some estimate of the probability that a given transition i → j was taken at time t in the observation sequence.
  - If we knew this probability for each time t, we could sum over all t to get the expected value (count) for i → j.
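
A sketch of that re-estimation, assuming alpha and beta come from the forward and backward sketches above; ξ_t(i, j), the probability of taking the i → j transition at time t given the whole observation sequence, supplies the expected counts (names are illustrative):

```python
import numpy as np

def reestimate_A(A, B, obs, alpha, beta):
    """a_hat[i, j] = expected # of i->j transitions / expected # of transitions from i."""
    T = len(obs)
    total = alpha[-1].sum()                               # P(O | model)
    xi_sum = np.zeros_like(A)
    for t in range(T - 1):
        # xi_t(i, j) = alpha_t(i) * a_ij * b_j(o_{t+1}) * beta_{t+1}(j) / P(O | model)
        xi_sum += np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / total
    return xi_sum / xi_sum.sum(axis=1, keepdims=True)     # normalize each row
```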

+ Viterbi training
- Baum-Welch training says:
  - We need to know what state we were in, to accumulate counts of a given output symbol o_t
  - We'll compute ξ_i(t), the probability of being in state i at time t, by using forward-backward to sum over all possible paths that might have been in state i and output o_t.
- Viterbi training says:
  - Instead of summing over all possible paths, just take the single most likely path
  - Use the Viterbi algorithm to compute this Viterbi path
  - Via forced alignment
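
A minimal sketch of Viterbi decoding in the log domain, under the same model conventions as the forward sketch above; in Viterbi training the counts would then be accumulated along this single best path instead of being weighted by forward-backward:

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Return the single most likely state sequence for obs."""
    T, N = len(obs), len(pi)
    logd = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    logd[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = logd[t - 1][:, None] + np.log(A)    # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        logd[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd[-1].argmax())]                  # best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))          # follow backpointers
    return path[::-1]
```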

+ Forced Alignment
- Forced alignment: computing the Viterbi path over the training data
- Because we know which word string to assign to each observation sequence.
- We just don't know the state sequence.
- So we use a_ij to constrain the path to go through the correct words
- And otherwise do normal Viterbi
- Result: a state sequence!

+ Summary
- Transition probabilities: the ratio between the expected number of transitions from state i to j and the expected number of all transitions from state i
- Observation probabilities: the ratio between the expected number of times the observation emitted from state j is v_k, and the expected number of times any observation is emitted from state j
Thanks to Dan Jurafsky for these slides
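
A sketch of the observation-probability update in the second bullet, assuming alpha and beta from the sketches above; γ_t(j), the probability of being in state j at time t given all the data, supplies the expected counts:

```python
import numpy as np

def reestimate_B(obs, alpha, beta, n_symbols):
    """b_hat[j, k] = expected # of times state j emits symbol k
                   / expected # of times the model is in state j."""
    total = alpha[-1].sum()                          # P(O | model)
    gamma = alpha * beta / total                     # gamma[t, j] = P(q_t = j | O)
    new_B = np.zeros((alpha.shape[1], n_symbols))
    for t, o in enumerate(obs):
        new_B[:, o] += gamma[t]                      # expected count of (state, symbol)
    return new_B / gamma.sum(axis=0)[:, None]        # normalize per state
```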

+ Viterbi Training
- Much faster than Baum-Welch
- But doesn't work quite as well
- The tradeoff is often worth it.

+ Input to Baum-Welch
- O: an unlabeled sequence of observations
- Q: the vocabulary of hidden states
- For the ice-cream task:
  - O = {1, 3, 2, ...}
  - Q = {H, C}
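
For concreteness, a sketch of the ice-cream model from the Eisner spreadsheet below, with START folded into an initial distribution and the explicit STOP state omitted (variable names are illustrative):

```python
import numpy as np

states = ["C", "H"]                       # cold day, hot day
pi = np.array([0.5, 0.5])                 # p(C|START), p(H|START)
A = np.array([[0.8, 0.1],                 # p(C|C), p(H|C)
              [0.1, 0.8]])                # p(C|H), p(H|H)
B = np.array([[0.7, 0.2, 0.1],            # p(1|C), p(2|C), p(3|C)
              [0.1, 0.2, 0.7]])           # p(1|H), p(2|H), p(3|H)
obs = [1, 2, 2]                           # first days (2, 3, 3 ice creams) as 0-based symbol indices
```

Running the forward sketch above on these values reproduces the spreadsheet's α columns for the first days (0.1/0.1, then 0.009/0.063).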

+ Forward Algorithm (α)
- The dynamic programming computation of α
- Backward (β) is similar but works back from Stop. Thanks to Dan Jurafsky for these slides

+ Eisner Spreadsheet

|           | p(·|C) | p(·|H) | p(·|START) |
| p(1|·)    | 0.7    | 0.1    |            |
| p(2|·)    | 0.2    | 0.2    |            |
| p(3|·)    | 0.1    | 0.7    |            |
| p(C|·)    | 0.8    | 0.1    | 0.5        |
| p(H|·)    | 0.1    | 0.8    | 0.5        |
| p(STOP|·) | 0.1    | 0.1    | 0          |

| Day | Ice Creams | α(C)        | α(H)        | β(C)        | β(H)        |
| 1   | 2          | 0.1         | 0.1         | 1.17798E-18 | 7.94958E-18 |
| 2   | 3          | 0.009       | 0.063       | 2.34015E-18 | 1.41539E-17 |
| 3   | 3          | 0.00135     | 0.03591     | 7.24963E-18 | 2.51453E-17 |
| ... | ...        | ...         | ...         | ...         | ...         |
| 32  | 2          | 7.39756E-18 | 4.33111E-17 | 0.018       | 0.018       |
| 33  | 2          | 2.04983E-18 | 7.07773E-18 | 0.1         | 0.1         |

Thanks to Dan Jurafsky for these slides

+ Eisner Spreadsheet: the cell formulas (same parameter and α/β tables as above, with individual cells highlighted on successive slides)
- Day 1, α(C) = p(C|START) * p(2|C) (spreadsheet formula E$14*INDEX(C$11:C$13,$B27,1))
- Day 1, α(H) = p(H|START) * p(2|H)
- Day 2, α(C) = [α_1(C)*p(C|C) + α_1(H)*p(C|H)] * p(3|C)
- Day 32, β(C) = p(C|C)*β_33(C)*p(2|C) + p(H|C)*β_33(H)*p(2|H) (spreadsheet formula =C$14*E59*INDEX(C$11:C$13,$B59,1)+C$15*F59*INDEX(D$11:D$13,$B59,1))
- Day 33, β(C) = p(STOP|C) and β(H) = p(STOP|H)
Thanks to Dan Jurafsky for these slides

+ Working through Eisner
- α(C)*β(C): all the paths through C up to time t
- α(H)*β(H): all the paths through H up to time t
- α(C)*β(C) + α(H)*β(H): total probability of the data
- p(→C), p(→H): probability of getting to C/H at time t, given all the data

| Day | α(C)*β(C)   | α(H)*β(H)   | Total       | p(→C)  | p(→H)  |
| 1   | 1.17798E-19 | 7.94958E-19 | 9.12756E-19 | 0.129  | 0.871  |
| 2   | 2.10613E-20 | 8.91695E-19 | 9.12756E-19 | 0.023  | 0.977  |
| 3   | 9.78701E-21 | 9.02969E-19 | 9.12756E-19 | 0.011  | 0.989  |
| ... | ...         | ...         | ...         | ...    | ...    |
| SUM |             |             |             | 14.679 | 18.321 |

| ICs | p(→C,1) | p(→C,2) | p(→C,3) | p(→H,1) | p(→H,2) | p(→H,3) |
| 2   | 0       | 0.129   | 0       | 0       | 0.871   | 0       |
| 3   | 0       | 0       | 0.023   | 0       | 0       | 0.977   |
| 3   | 0       | 0       | 0.011   | 0       | 0       | 0.989   |
| ... | ...     | ...     | ...     | ...     | ...     | ...     |
| SUM | 9.931   | 3.212   | 1.537   | 1.069   | 7.788   | 9.463   |

Thanks to Dan Jurafsky for these slides

+ Working through Eisner
- The new emission probability is a ratio of expected counts, e.g. the new p(1|C) = (SUM of the probability of going through C when IC = 1) / (SUM of the probability of going through C given all the data)

| ICs | p(→C,1) | p(→C,2) | p(→C,3) | p(→H,1) | p(→H,2) | p(→H,3) |
| 2   | 0       | 0.129   | 0       | 0       | 0.871   | 0       |
| 3   | 0       | 0       | 0.023   | 0       | 0       | 0.977   |
| 3   | 0       | 0       | 0.011   | 0       | 0       | 0.989   |
| ... | ...     | ...     | ...     | ...     | ...     | ...     |
| SUM | 9.931   | 3.212   | 1.537   | 1.069   | 7.788   | 9.463   |

Re-estimated emission table shown on the slide:

|        | p(·|C)      | p(·|H)      | p(·|START) |
| p(1|·) | 0.454817043 | 0.21213604  | 0          |
| p(2|·) | 0.324427849 | 0.342217822 | 0          |
| p(3|·) | 0.220755108 | 0.445646139 | 0          |

Thanks to Dan Jurafsky for these slides

+ Eisner results

Start:
|        | p(·|C) | p(·|H) |
| p(1|·) | 0.7    | 0.1    |
| p(2|·) | 0.2    | 0.2    |
| p(3|·) | 0.1    | 0.7    |

After 1 iteration:
|        | p(·|C) | p(·|H) |
| p(1|·) | 0.6643 | 0.0592 |
| p(2|·) | 0.2169 | 0.4297 |
| p(3|·) | 0.1187 | 0.5110 |

After 10 iterations:
|        | p(·|C) | p(·|H) |
| p(1|·) | 0.6407 | 0.0001 |
| p(2|·) | 0.1481 | 0.5342 |
| p(3|·) | 0.2112 | 0.4657 |

Thanks to Dan Jurafsky for these slides

+ Applying FB to speech: Caveats
- The network structure of the HMM is always created by hand
  - No algorithm for double-induction of optimal structure and probabilities has been able to beat simple hand-built structures.
- Always a Bakis network: links go forward in time
  - Subcase of a Bakis net: the beads-on-a-string net
- Baum-Welch is only guaranteed to return a local maximum, rather than the global optimum

+ Multivariate Gaussians
- Instead of a single mean µ and variance σ:
  f(x | µ, σ) = (1 / (σ √(2π))) exp( −(x − µ)² / (2σ²) )
- A vector of observations x is modeled by a vector of means µ and a covariance matrix Σ:
  f(x | µ, Σ) = (1 / ((2π)^(D/2) |Σ|^(1/2))) exp( −½ (x − µ)ᵀ Σ⁻¹ (x − µ) )
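
A small NumPy sketch of the multivariate density above, evaluated in the log domain for numerical stability (the function name is illustrative):

```python
import numpy as np

def mvn_logpdf(x, mu, Sigma):
    """Log of the multivariate Gaussian density given above."""
    D = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)             # log |Sigma|
    maha = diff @ np.linalg.solve(Sigma, diff)       # (x-mu)^T Sigma^{-1} (x-mu)
    return -0.5 * (D * np.log(2 * np.pi) + logdet + maha)

# Example: a 2-D Gaussian with unit covariance, evaluated at its mean
print(np.exp(mvn_logpdf(np.array([0.0, 0.0]),
                        np.array([0.0, 0.0]),
                        np.eye(2))))                 # ~0.159 = 1/(2*pi)
```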

+ Training a GMM
- Problem: how do we train a GMM if we don't know which component accounts for any particular observation?
- Intuition: we use Baum-Welch to find it for us, just as we did for finding the hidden states that accounted for the observations
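
A sketch of that intuition for a diagonal-covariance GMM: the E-step computes, for every frame, the posterior probability (responsibility) of each mixture component, which then plays the role of the hidden assignment (shapes and names are illustrative):

```python
import numpy as np

def gmm_responsibilities(X, weights, means, variances):
    """Posterior probability that each mixture component generated each frame.
    X: (T, D) frames; weights: (M,); means, variances: (M, D)."""
    # log N(x | mu_m, diag(var_m)) for every frame/component pair
    log_norm = -0.5 * np.log(2 * np.pi * variances).sum(axis=1)          # (M,)
    diffs = X[:, None, :] - means[None, :, :]                            # (T, M, D)
    log_px = log_norm - 0.5 * (diffs ** 2 / variances).sum(axis=2)       # (T, M)
    log_joint = np.log(weights) + log_px
    log_joint -= log_joint.max(axis=1, keepdims=True)                    # stabilize
    resp = np.exp(log_joint)
    return resp / resp.sum(axis=1, keepdims=True)
```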

+ How to train mixtures?
- Choose M (often 16; or tune M depending on the amount of training data)
- Then apply one of various splitting or clustering algorithms
- One simple method for splitting:
  1. Compute the global mean µ and global variance
  2. Split into two Gaussians, with means µ ± ε (sometimes ε is 0.2σ)
  3. Run Forward-Backward to retrain
  4. Go to 2 until we have 16 mixtures
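
A rough sketch of the splitting step in that recipe, assuming diagonal covariances; the retraining step (3) is only indicated by a comment, since it would reuse the forward-backward machinery above:

```python
import numpy as np

def split_gaussians(means, variances, eps_scale=0.2):
    """Split every Gaussian into two, perturbing the means by +/- eps
    (eps = eps_scale * standard deviation), as in the recipe above."""
    eps = eps_scale * np.sqrt(variances)
    new_means = np.concatenate([means + eps, means - eps], axis=0)
    new_vars = np.concatenate([variances, variances], axis=0)
    return new_means, new_vars

def grow_mixtures(X, target=16):
    """Start from the global mean/variance, then split until `target` mixtures."""
    means = X.mean(axis=0, keepdims=True)
    variances = X.var(axis=0, keepdims=True)
    while means.shape[0] < target:
        means, variances = split_gaussians(means, variances)
        # ... run Forward-Backward (Baum-Welch) here to retrain the split Gaussians
    return means, variances
```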