Statistical NLP, Spring 2009. Lecture 10: Acoustic Models (The Noisy Channel Model, Frame Extraction, Mel Frequency Cepstral Coefficients)

Statistical NLP, Spring 2009
Lecture 10: Acoustic Models
Dan Klein, UC Berkeley

The Noisy Channel Model
Search through the space of all possible sentences. Pick the one that is most probable given the waveform.

Speech Recognition Architecture

Digitizing Speech

Frame Extraction
A frame (25 ms wide) is extracted every 10 ms, so successive frames overlap. (Figure from Simon Arnfield: overlapping frames a_1, a_2, a_3.)

Mel Freq. Cepstral Coefficients
Do an FFT to get spectral information, like the spectrogram/spectrum we saw earlier.
Apply Mel scaling: linear below 1 kHz, logarithmic above, with equal numbers of samples above and below 1 kHz. This models the human ear, which is more sensitive at lower frequencies.
Plus a Discrete Cosine Transformation.
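The framing / FFT / Mel filterbank / log / DCT pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the lecture's exact recipe: the 25 ms window, 10 ms hop, 512-point FFT, 26 triangular filters, and the 2595*log10(1 + f/700) mel formula are common defaults.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=16000, frame_ms=25, hop_ms=10, n_filters=26, n_ceps=12):
    """Rough MFCC sketch: frame -> FFT power spectrum -> mel filterbank -> log -> DCT."""
    frame_len = int(sr * frame_ms / 1000)          # 25 ms -> 400 samples at 16 kHz
    hop = int(sr * hop_ms / 1000)                  # 10 ms -> 160 samples
    n_fft = 512

    # Slice the waveform into overlapping, Hamming-windowed frames.
    starts = range(0, len(signal) - frame_len + 1, hop)
    frames = np.stack([signal[s:s + frame_len] * np.hamming(frame_len) for s in starts])

    # Power spectrum of each frame (the "spectral information" from the FFT).
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # Triangular mel filterbank: filters equally spaced on the mel scale.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # Log filterbank energies, then DCT to decorrelate; keep 12 cepstral coefficients.
    log_energies = np.log(power @ fbank.T + 1e-10)
    return dct(log_energies, type=2, axis=1, norm='ortho')[:, 1:n_ceps + 1]
```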

Final Feature Vector
39 (real) features per 10 ms frame:
12 MFCC features
12 Delta MFCC features
12 Delta-Delta MFCC features
1 (log) frame energy
1 Delta (log) frame energy
1 Delta-Delta (log) frame energy
So each frame is represented by a 39-dimensional vector.

HMMs for Continuous Observations?
Before: a discrete, finite set of observations. Now: spectral feature vectors are real-valued!
Solution 1: discretization.
Solution 2: continuous emission models: Gaussians, multivariate Gaussians, mixtures of multivariate Gaussians.
A state is progressively: a context-independent subphone (~3 per phone), a context-dependent phone (= triphone), a tied state of CD phones.

Vector Quantization
Idea: discretization. Map MFCC vectors onto discrete symbols and compute probabilities just by counting. This is called Vector Quantization, or VQ. It is not used for ASR any more; it is too simple. But it is useful to consider as a starting point.

Gaussian Emissions
VQ is insufficient for real ASR. Instead, assume the possible values of the observation vectors are normally distributed: represent the observation likelihood function as a Gaussian with mean \mu_j and variance \sigma_j^2:

f(x \mid \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)

Gaussians for Acoustic Modeling
A Gaussian is parameterized by a mean and a variance. P(o|q) is highest at the mean and low far from the mean. (Figure: Gaussians with different means.)

Multivariate Gaussians
Instead of a single mean \mu and variance \sigma, use a vector of means \mu and a covariance matrix \Sigma:

f(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right)

We usually assume a diagonal covariance. This isn't very true for FFT features, but is fine for MFCC features.
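A hedged sketch of how the 39-dimensional vector could be assembled from per-frame MFCCs and log-energy. The simple first-difference delta used here is an assumption; real systems typically use a regression over several neighboring frames.

```python
import numpy as np

def deltas(feats):
    """First-order differences over time, padded so the frame count is preserved (a simplification)."""
    padded = np.vstack([feats[:1], feats, feats[-1:]])
    return (padded[2:] - padded[:-2]) / 2.0

def stack_39(mfcc_12, log_energy):
    """mfcc_12: (T, 12) MFCCs, log_energy: (T,) -> (T, 39) feature vectors."""
    base = np.hstack([mfcc_12, log_energy[:, None]])      # 13 static features
    d = deltas(base)                                       # 13 delta features
    dd = deltas(d)                                         # 13 delta-delta features
    return np.hstack([base, d, dd])                        # 39 features per frame
```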

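Vector quantization (Solution 1 above) can likewise be sketched with a plain k-means codebook plus counting. The 256-symbol codebook size and the number of k-means iterations are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def train_codebook(vectors, k=256, iters=20, seed=0):
    """Plain k-means: the codebook is the set of k centroid vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)                 # nearest centroid per vector
        for j in range(k):
            if (labels == j).any():
                codebook[j] = vectors[labels == j].mean(0)
    return codebook

def quantize(vectors, codebook):
    """Map each real-valued frame to the index of its nearest codeword."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(1)

def emission_counts(symbols, states):
    """P(symbol | state) by counting, as on the slide, from already-aligned frames."""
    probs = {}
    for s in set(states):
        counts = Counter(sym for sym, st in zip(symbols, states) if st == s)
        total = sum(counts.values())
        probs[s] = {sym: c / total for sym, c in counts.items()}
    return probs
```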
Gaussian Intuitions: Size of \Sigma
\mu = [0 0] with \Sigma = I, \Sigma = 0.6I, \Sigma = 2I. As \Sigma becomes larger, the Gaussian becomes more spread out; as \Sigma becomes smaller, the Gaussian becomes more compressed. (Text and figures from Andrew Ng's lecture notes for CS229.)

Gaussians: Off-Diagonal
As we increase the off-diagonal entries, there is more correlation between the value of x and the value of y. (Text and figures from Andrew Ng's lecture notes for CS229.)

But we're not there yet
A single Gaussian may do a bad job of modeling the distribution in any dimension (see the scatter plots). Solution: mixtures of Gaussians. (Figure from Chen, Picheny et al. slides.)

Mixtures of Gaussians
M mixtures of Gaussians:

b_j(o_t) = \sum_{k=1}^{M} c_{jk} \, N(o_t, \mu_{jk}, \Sigma_{jk})

For diagonal covariance:

b_j(o_t) = \sum_{k=1}^{M} \frac{c_{jk}}{(2\pi)^{D/2} \prod_{d=1}^{D} \sigma_{jkd}} \exp\left(-\frac{1}{2} \sum_{d=1}^{D} \frac{(o_{td} - \mu_{jkd})^2}{\sigma_{jkd}^2}\right)

GMMs
Summary: each state has a likelihood function parameterized by:
M mixture weights
M mean vectors of dimensionality D
Either M covariance matrices of D x D, or often M diagonal covariance matrices of D x D, which is equivalent to M variance vectors of dimensionality D
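A small sketch of the diagonal-covariance GMM emission likelihood above, computed in the log domain with log-sum-exp for numerical stability. The parameter shapes (M mixtures, D dimensions per state) are assumptions for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def gmm_log_likelihood(o_t, weights, means, variances):
    """log b_j(o_t) for one state j with a diagonal-covariance GMM.

    o_t:       (D,)   observation vector (e.g. a 39-dim MFCC frame)
    weights:   (M,)   mixture weights c_jk, summing to 1
    means:     (M, D) mean vectors mu_jk
    variances: (M, D) per-dimension variances sigma_jk^2
    """
    D = o_t.shape[0]
    # Per-mixture log N(o_t; mu_jk, diag(sigma_jk^2)).
    log_norm = -0.5 * (D * np.log(2 * np.pi) + np.log(variances).sum(axis=1))
    log_exp = -0.5 * (((o_t - means) ** 2) / variances).sum(axis=1)
    # log sum_k c_jk * N(...), done stably in the log domain.
    return logsumexp(np.log(weights) + log_norm + log_exp)
```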

HMMs for Speech

Phones Aren't Homogeneous
(Spectrogram figure, 0 to 5000 Hz: the phones "ay" and "k" between roughly 0.48 s and 0.94 s; the spectral shape changes within a single phone.)

Need to Use Subphones

A Word with Subphones

ASR Lexicon: Markov Models

Markov Process with Bigrams
(Figure from Huang et al., page 618.)
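The lexicon-as-Markov-model idea can be illustrated by expanding each phone of a dictionary entry into three left-to-right subphone states. The state naming (phone_0, phone_1, phone_2) and the self-loop/forward transition structure here are illustrative assumptions, not the lecture's exact topology.

```python
def word_hmm(word, lexicon, n_subphones=3, self_loop=0.6):
    """Expand a word into a left-to-right chain of subphone states with self-loops."""
    states = [f"{phone}_{i}" for phone in lexicon[word] for i in range(n_subphones)]
    transitions = {}
    for i, s in enumerate(states):
        transitions[(s, s)] = self_loop                       # stay in the same subphone
        if i + 1 < len(states):
            transitions[(s, states[i + 1])] = 1 - self_loop   # move to the next subphone
    return states, transitions

# Example: a toy lexicon entry for "need" -> n iy d, giving 9 subphone states.
lexicon = {"need": ["n", "iy", "d"]}
states, transitions = word_hmm("need", lexicon)
print(states)   # ['n_0', 'n_1', 'n_2', 'iy_0', ..., 'd_2']
```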

Training Mixture Models

Forced Alignment
Computing the Viterbi path over the training data (where the transcription is known) is called forced alignment. We know which word string to assign to each observation sequence; we just don't know the state sequence. So we constrain the path to go through the correct words (by using a special example-specific language model) and otherwise do normal Viterbi. Result: a state sequence!

Modeling Phonetic Context
The same vowel [iy] behaves differently in different contexts: w iy, r iy, m iy, n iy.

"Need" with Triphone Models

Implications of Cross-Word Triphones
Possible triphones: 50 x 50 x 50 = 125,000.
How many triphone types actually occur? On a 20K-word WSJ task (from Bryan Pellom):
Word-internal models: need 14,300 triphones.
Cross-word models: need 54,400 triphones.
But only 22,800 triphones occur in the training data! We need to generalize models.

State Tying / Clustering [Young, Odell, Woodland 1994]
How do we decide which triphones to cluster together? Use phonetic features (or broad phonetic classes): stop, nasal, fricative, sibilant, vowel, lateral.

State Tying
Creating CD phones:
Start with monophones and do EM training.
Clone Gaussians into triphones.
Build a decision tree and cluster Gaussians.
Clone and train mixtures (GMMs).
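A minimal sketch of forced alignment as constrained Viterbi: the decoding graph is restricted to the known transcription's left-to-right state chain (built, say, with the word_hmm helper above), and only the best state sequence is recovered. The log-probability convention and the emission callback are assumptions for illustration.

```python
import numpy as np

def forced_align(obs, states, log_trans, log_emit):
    """Viterbi over a fixed left-to-right state chain for one known transcription.

    obs:       sequence of T observation vectors
    states:    ordered list of N subphone states for the known word string
    log_trans: dict {(s_from, s_to): log prob} containing only the allowed arcs
    log_emit:  function (state, observation) -> log emission probability
    Returns the best state sequence of length T.
    """
    T, N = len(obs), len(states)
    score = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    score[0, 0] = log_emit(states[0], obs[0])         # must start in the first state
    for t in range(1, T):
        for j in range(N):
            for i in (j - 1, j):                      # left-to-right: stay or advance
                if i < 0:
                    continue
                arc = log_trans.get((states[i], states[j]), -np.inf)
                cand = score[t - 1, i] + arc
                if cand > score[t, j]:
                    score[t, j], back[t, j] = cand, i
            score[t, j] += log_emit(states[j], obs[t])
    # Must end in the last state; trace back the alignment.
    path = [N - 1]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[j] for j in reversed(path)]
```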

Standard Subphone/Mixture HMM
Temporal structure plus Gaussian mixtures.

Our Model
Fully connected subphone structure, compared against the standard model.

Hierarchical Baum-Welch Training
Refinement of the /ih/-phone with single Gaussians: successive refinements give error rates of 32.1%, 28.7%, 25.6%, 23.9%.

Model            Error rate
HMM Baseline     25.1%
5 split rounds   21.4%

Refinement of the /ih/-phone
(Figures showing the /ih/-phone substates being progressively split and refined.)
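Hierarchical splitting of the kind sketched above is often implemented by cloning a Gaussian into two children with slightly perturbed means and then re-running EM so the children specialize. The perturbation size (a fraction of the standard deviation) is an assumption here, not a detail given in the lecture.

```python
import numpy as np

def split_gaussian(mean, var, eps=0.2):
    """Clone one diagonal Gaussian into two children with means nudged apart.

    mean, var: (D,) mean and per-dimension variance of the parent.
    Returns two (mean, var) pairs; subsequent EM training lets them specialize.
    """
    offset = eps * np.sqrt(var)
    return (mean - offset, var.copy()), (mean + offset, var.copy())

def split_round(substates):
    """One split round: every (mean, var) substate of a phone becomes two."""
    out = []
    for mean, var in substates:
        out.extend(split_gaussian(mean, var))
    return out
```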

HMM States per Phone
(Bar chart: number of learned substates per phone, ranging from a handful up to about 35, for the phones ae, ao, ay, eh, er, ey, ih, sil, aa, ah, ix, iy, z, cl, k, sh, n, vcl, ow, l, m, v, uw, aw, ax, ch, w, th, el, dh, uh, p, en, oy, hh, jh, ng, y, b, d, dx, g, epi, zh.)

Inference
State sequence: d1-d6-d6-d4-ae5-ae2-ae3-ae0-d2-d2-d3-d7-d5
Phone sequence: d-d-d-d-ae-ae-ae-ae-d-d-d-d-d
Transcription: d-ae-d
Viterbi? Variational?
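As a final illustration, a small sketch of the collapse from decoded states to a transcription shown above: drop the substate indices, then merge consecutive repeats. The hyphenated string format simply mirrors the slide's notation.

```python
from itertools import groupby

def states_to_transcription(state_seq):
    """Decoded substates -> phone sequence -> transcription, as on the Inference slide."""
    # 'ae5' -> 'ae': strip the trailing substate index.
    phones = [s.rstrip("0123456789") for s in state_seq.split("-")]
    # Collapse runs of the same phone into a single symbol.
    transcription = [phone for phone, _ in groupby(phones)]
    return "-".join(transcription)

print(states_to_transcription("d1-d6-d6-d4-ae5-ae2-ae3-ae0-d2-d2-d3-d7-d5"))  # d-ae-d
```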