Statistical NLP, Spring 2008, Lecture 10: Acoustic Models. Digitizing Speech. Frame Extraction. Gaussian Emissions. Vector Quantization. HMMs for Continuous Observations.

Transcription:

Statistical NLP, Spring 2008
Lecture 10: Acoustic Models
Dan Klein, UC Berkeley

Digitizing Speech

Frame Extraction
A frame (25 ms wide) is extracted every 10 ms, giving a sequence of overlapping frames a_1, a_2, a_3, ...
[Figure from Simon Arnfield]

HMMs for Continuous Observations?
Before: a discrete, finite set of observations. Now: spectral feature vectors are real-valued!
- Solution 1: discretization
- Solution 2: continuous emission models (Gaussians, multivariate Gaussians, mixtures of multivariate Gaussians)
A state is, progressively: a context-independent subphone (~3 per phone), a context-dependent phone (= triphone), a tied state of CD phones.

Idea: Discretization
- Map MFCC vectors onto discrete symbols
- Compute probabilities just by counting
- This is called Vector Quantization, or VQ
- Not used for ASR any more; too simple
- Useful to consider as a starting point (a toy frame-extraction and VQ sketch appears below)

Vector Quantization

Gaussian Emissions
- VQ is insufficient for real ASR
- Instead: assume the possible values of the observation vectors are normally distributed
- Represent the observation likelihood function as a Gaussian with mean \mu_j and variance \sigma_j^2:

    f(x \mid \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
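To make frame extraction and VQ concrete, here is a minimal Python sketch, not from the lecture: it slices a waveform into 25 ms frames every 10 ms, then quantizes feature vectors against a k-means codebook. The sample rate, codebook size, and the stand-in MFCC matrix are all illustrative assumptions.

    import numpy as np
    from scipy.cluster.vq import kmeans2, vq

    def extract_frames(signal, sample_rate=16000, width_ms=25, step_ms=10):
        """Slice a 1-D waveform into overlapping frames: 25 ms wide, every 10 ms."""
        width = int(sample_rate * width_ms / 1000)   # samples per frame (400)
        step = int(sample_rate * step_ms / 1000)     # hop size (160)
        n_frames = 1 + (len(signal) - width) // step
        return np.stack([signal[i * step:i * step + width]
                         for i in range(n_frames)])

    rng = np.random.default_rng(0)
    signal = rng.normal(size=16000)        # stand-in for 1 s of 16 kHz audio
    frames = extract_frames(signal)        # shape (98, 400)

    # Pretend each frame has already been converted to a 13-dim MFCC vector.
    mfcc = rng.normal(size=(frames.shape[0], 13))

    # VQ: learn a 16-symbol codebook, then map each vector to its nearest code.
    codebook, _ = kmeans2(mfcc, k=16, minit="++", seed=0)
    symbols, _ = vq(mfcc, codebook)        # discrete observation sequence

    # Emission probabilities can now be estimated by counting symbol
    # frequencies per HMM state, exactly as in the discrete-observation case.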

Gaussians for Acoustic Modeling
A Gaussian is parameterized by a mean and a variance. Different means shift where the emission probability P(o|q) peaks: P(o|q) is highest at the mean and low far from the mean.

    f(x \mid \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)

Multivariate Gaussians
Instead of a single mean \mu and variance \sigma, use a vector of means \mu and a covariance matrix \Sigma:

    f(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^\top \Sigma^{-1} (x - \mu) \right)

We usually assume diagonal covariance. This isn't very true for FFT features, but is fine for MFCC features. (A small numeric check of the diagonal case appears below.)

Gaussian Intuitions: Size of \Sigma
With \mu = [0 0] and \Sigma = I, \Sigma = 0.6I, or \Sigma = 2I: as \Sigma becomes larger, the Gaussian becomes more spread out; as \Sigma becomes smaller, more compressed.
[Text and figures from Andrew Ng's lecture notes for CS229]

Gaussians: Off-Diagonal
As we increase the off-diagonal entries, there is more correlation between the value of x and the value of y.
[Figures shown in two dimensions, from Chen, Picheny et al. lecture slides]
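A minimal sketch of the diagonal-covariance case, assuming NumPy/SciPy; the function name and test values are mine. With a diagonal \Sigma the density factors into D independent one-dimensional Gaussians, which is what the code exploits:

    import numpy as np
    from scipy.stats import multivariate_normal

    def diag_gaussian_logpdf(x, mu, var):
        """log N(x; mu, diag(var)): a sum of D independent 1-D log-densities."""
        D = len(mu)
        return -0.5 * (D * np.log(2 * np.pi)
                       + np.sum(np.log(var))
                       + np.sum((x - mu) ** 2 / var))

    x = np.array([0.3, -1.2, 0.8])
    mu = np.zeros(3)
    var = np.array([1.0, 0.5, 2.0])

    # Agrees with the general formula evaluated at a diagonal Sigma.
    assert np.isclose(diag_gaussian_logpdf(x, mu, var),
                      multivariate_normal.logpdf(x, mean=mu, cov=np.diag(var)))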

But We're Not There Yet
A single Gaussian may do a bad job of modeling the distribution in any one dimension. Solution: mixtures of Gaussians.

Mixtures of Gaussians
With M mixture components per state:

    b_j(o_t) = \sum_{k=1}^{M} c_{jk} \, N(o_t; \mu_{jk}, \Sigma_{jk})

For diagonal covariance:

    b_j(o_t) = \sum_{k=1}^{M} \frac{c_{jk}}{(2\pi)^{D/2} \prod_{d=1}^{D} \sigma_{jkd}} \exp\left( -\frac{1}{2} \sum_{d=1}^{D} \frac{(o_{td} - \mu_{jkd})^2}{\sigma_{jkd}^2} \right)

[Figure from Chen, Picheny et al. slides]

GMMs
Summary: each state has a likelihood function parameterized by:
- M mixture weights
- M mean vectors of dimensionality D
- Either M covariance matrices of D x D, or more often M diagonal covariance matrices of D x D, which is equivalent to M variance vectors of dimensionality D
(A code sketch of this emission likelihood follows after these slides.)

HMMs for Speech

Phones Aren't Homogeneous / Need to Use Subphones
[Spectrogram, 0-5000 Hz over roughly 0.48-0.94 s, with phone labels "ay" and "k"]
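The diagonal-covariance emission formula translates directly into code. A minimal sketch, assuming NumPy/SciPy; shapes and names are illustrative. The mixture sum is done in log space with logsumexp, since raw densities of high-dimensional frames underflow easily:

    import numpy as np
    from scipy.special import logsumexp

    def log_emission(o_t, log_c, mu, var):
        """log b_j(o_t) = log sum_k c_jk N(o_t; mu_jk, diag(var_jk)).

        log_c: (M,) log mixture weights for state j
        mu, var: (M, D) per-component means and variances
        """
        D = o_t.shape[0]
        # Per-component diagonal-Gaussian log-densities, vectorized over k.
        log_comp = -0.5 * (D * np.log(2 * np.pi)
                           + np.sum(np.log(var), axis=1)
                           + np.sum((o_t - mu) ** 2 / var, axis=1))
        return logsumexp(log_c + log_comp)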

A Word with Subphones

Viterbi Decoding

ASR Lexicon: Markov Models

Markov Process with Bigrams
[Figure from Huang et al., page 618]

Training Mixture Models

Forced Alignment
- Computing the Viterbi path over the training data is called forced alignment
- We know which word string to assign to each observation sequence; we just don't know the state sequence
- So we constrain the path to go through the correct words, and otherwise do normal Viterbi (a sketch follows below)
- Result: a state sequence!

Modeling Phonetic Context
The same vowel surfaces differently in different contexts: w iy, r iy, m iy, n iy
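Since forced alignment is ordinary Viterbi over a constrained state graph, a generic log-space Viterbi sketch shows the mechanics. All names and shapes below are illustrative; for forced alignment, log_trans would admit only transitions through the states of the known word string.

    import numpy as np

    def viterbi(log_init, log_trans, log_emit):
        """log_init: (S,) log P(s_0); log_trans: (S, S) log P(s'|s);
        log_emit: (T, S) log b_s(o_t). Returns the best state sequence."""
        T, S = log_emit.shape
        delta = log_init + log_emit[0]            # best log-score per state
        back = np.zeros((T, S), dtype=int)        # backpointers
        for t in range(1, T):
            scores = delta[:, None] + log_trans   # scores[i, j]: prev i -> next j
            back[t] = np.argmax(scores, axis=0)
            delta = scores[back[t], np.arange(S)] + log_emit[t]
        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):             # follow backpointers
            path.append(back[t, path[-1]])
        return path[::-1]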

Need with Triphone Models

Implications of Cross-Word Triphones
- Possible triphones: 50 x 50 x 50 = 125,000
- How many triphone types actually occur? On a 20K-word WSJ task (from Bryan Pellom):
  - Word-internal models: need 14,300 triphones
  - Cross-word models: need 54,400 triphones
  - But only 22,800 triphones occur in the training data!
- So we need to generalize models.

State Tying / Clustering
[Young, Odell, Woodland 1994]
- How do we decide which triphones to cluster together?
- Use phonetic features, or broad phonetic classes: stop, nasal, fricative, sibilant, vowel, lateral (a toy sketch of this pooling idea appears after these slides)

State Tying
Creating CD phones:
- Start with monophones, do EM training
- Clone Gaussians into triphones
- Build a decision tree and cluster Gaussians
- Clone and train mixtures (GMMs)

Standard Subphone/Mixture HMM vs. Our Model
Temporal structure, Gaussian mixtures, fully connected.
Standard model: HMM baseline (single Gaussians), error rate 25.1%.
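Here is the toy sketch of the pooling idea behind state tying: triphones whose left and right contexts fall in the same broad phonetic classes share one model, so unseen triphones back off to a tied state. The class table is a made-up fragment, and real systems grow a decision tree of phonetic questions [Young, Odell, Woodland 1994] rather than using a fixed key.

    from collections import defaultdict

    # Illustrative fragment of a broad-class table; a real one covers every phone.
    BROAD_CLASS = {"p": "stop", "t": "stop", "k": "stop",
                   "m": "nasal", "n": "nasal",
                   "s": "sibilant", "sh": "sibilant",
                   "iy": "vowel", "ae": "vowel", "aa": "vowel",
                   "l": "lateral"}

    def tie_key(left, phone, right):
        """Tie triphones whose contexts share broad phonetic classes."""
        return (BROAD_CLASS.get(left, "other"), phone,
                BROAD_CLASS.get(right, "other"))

    tied = defaultdict(list)
    for tri in [("t", "iy", "m"), ("k", "iy", "n"), ("s", "ae", "t")]:
        tied[tie_key(*tri)].append(tri)

    # t-iy+m and k-iy+n share the key (stop, iy, nasal), so they would share
    # one tied state; s-ae+t gets its own (sibilant, ae, stop) model.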

Hierarchical Baum-Welch Training
Error rates by split round: 32.1%, 28.7%, 25.6%, 23.9%, reaching 21.4% after 5 split rounds, versus the 25.1% HMM baseline.

Refinement of the /ih/-Phone
[Figures: the /ih/ phone model over successive refinement rounds]

HMM States per Phone
[Bar chart, roughly 0-35 states per phone, over: ae ao ay eh er ey ih s sil aa ah ix iy cl k sh n vcl ow l m v uw aw ax ch w th el dh uh p en oy hh jh ng y b d dx g epi zh]

Inference
- State sequence: d_1 - d_6 - d_6 - d_4 - ae_5 - ae_2 - ae_3 - ae_0 - d_2 - d_2 - d_3 - d_7 - d_5
- Phone sequence: d - d - d - d - ae - ae - ae - ae - d - d - d - d - d
- Transcription: d - ae - d
- Viterbi? Variational?
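The collapse on the Inference slide (subphone states to phones to transcription) is mechanical; a minimal sketch, with the state names taken from the slide:

    from itertools import groupby

    states = ["d1", "d6", "d6", "d4", "ae5", "ae2", "ae3", "ae0",
              "d2", "d2", "d3", "d7", "d5"]

    # Strip the subphone index to get the phone sequence ...
    phones = [s.rstrip("0123456789") for s in states]
    # ... then merge consecutive repeats to get the transcription.
    transcription = [p for p, _ in groupby(phones)]
    print(transcription)   # ['d', 'ae', 'd']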