Hidden Markov Models and Gaussian Mixture Models


Hiroshi Shimodaira and Steve Renals
Automatic Speech Recognition (ASR), Lectures 4 & 5, 25 & 29 January 2018

Overview
HMMs and GMMs: key models and algorithms for HMM acoustic models
- Gaussians
- GMMs: Gaussian mixture models
- HMMs: hidden Markov models
- HMM algorithms
  - Likelihood computation (forward algorithm)
  - Most probable state sequence (Viterbi algorithm)
  - Estimating the parameters (EM algorithm)

Fundamental Equation of Statistical Speech Recognition
If X is the sequence of acoustic feature vectors (observations) and W denotes a word sequence, the most likely word sequence W* is given by
    W^* = \arg\max_W P(W \mid X)
Applying Bayes' theorem:
    P(W \mid X) = \frac{p(X \mid W) P(W)}{p(X)} \propto p(X \mid W) P(W)
    W^* = \arg\max_W \underbrace{p(X \mid W)}_{\text{acoustic model}} \; \underbrace{P(W)}_{\text{language model}}
NB: X is used hereafter to denote the output feature vectors from the signal analysis module rather than the DFT spectrum.

Acoustic Modelling
(Block diagram: recorded speech passes through signal analysis; the search space is built from the acoustic model (hidden Markov model, trained on training data), the lexicon and the language model; the output is the decoded text (transcription).)

Hierarchical modelling of speech
(Generative model for the utterance "No right": utterance → words (NO RIGHT) → subword units (n oh r ai t) → HMM states → acoustics.)

Calculation of p(X | W)
(Figure: the speech signal undergoes spectral analysis to give the feature vector sequence X = x_1 x_2 ... x_n. For the word /sayonara/, p(X | /sayonara/) is built from per-phone terms p(x_1 | /s/), p(x_2 | /a/), p(x_3 | /y/), p(x_4 | /o/), p(x_5 | /n/), p(x_6 | /a/), p(x_7 | /r/), p(x_8 | /a/), using acoustic (phone) models [HMMs] for /a/, /u/, /s/, /p/, /i/, /r/, /t/, /w/, ...)
NB: some conditional independence is assumed here.

How to calculate p(x_1 | /s/)?
Assume x_1, x_2, ..., x_{T_1} correspond to phoneme /s/; the conditional probability that we observe this sequence is
    p(x_1 \mid /s/) = p(x_1, \ldots, x_{T_1} \mid /s/), \qquad x_i = (x_{1i}, \ldots, x_{Di})^T \in \mathbb{R}^D
We know that an HMM can be employed to calculate this (Viterbi algorithm, forward/backward algorithm).
To grasp the idea of the probability calculation, consider an extremely simple case where the length of the input sequence is just one (T_1 = 1) and the dimensionality of x is one (D = 1), so that we don't need an HMM.

How to calculate p(x_1 | /s/)? (cont.)
p(x | /s/): the conditional probability density function (pdf) of x.
A Gaussian / normal distribution could be employed for this:
    p(x \mid /s/) = \frac{1}{\sqrt{2\pi\sigma_s^2}} \exp\!\left( -\frac{(x - \mu_s)^2}{2\sigma_s^2} \right)
The function has only two parameters, µ_s and σ_s².
Given a set of training samples {x_1, ..., x_N}, we can estimate µ_s and σ_s²:
    \hat{\mu}_s = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \hat{\sigma}_s^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \hat{\mu}_s)^2
For the general case where a phone lasts more than one frame, we need to employ an HMM.

Acoustic Model: Continuous Density HMM
(Probabilistic finite state automaton: entry state s_I, emitting states s_1, s_2, s_3 with self-loops P(s_1|s_1), P(s_2|s_2), P(s_3|s_3), forward transitions P(s_1|s_I), P(s_2|s_1), P(s_3|s_2), P(s_E|s_3), exit state s_E, and output pdfs p(x|s_1), p(x|s_2), p(x|s_3).)
Parameters λ:
- Transition probabilities: a_kj = P(S = j | S = k)
- Output probability density function: b_j(x) = p(x | S = j)
NB: Some textbooks use Q or q to denote the state variable S. x corresponds to o_t in lecture slides 02.

Acoustic Model: Continuous Density HMM (cont.)
(Figure: the same probabilistic finite state automaton, shown generating an observation sequence x_1, ..., x_6; each observation is emitted by the state occupied at that time. Parameters λ as on the previous slide.)

HMM Assumptions
(Figure: the model unfolded over time; states s(t-1), s(t), s(t+1), each emitting x(t-1), x(t), x(t+1).)
1. Markov process: the probability of a state depends only on the previous state:
    P(S(t) \mid S(t-1), S(t-2), \ldots, S(1)) = P(S(t) \mid S(t-1))
   A state is conditionally independent of all other states given the previous state.
2. Observation independence: the output observation x(t) depends only on the state that produced it:
    p(x(t) \mid S(t), S(t-1), \ldots, S(1), x(t-1), \ldots, x(1)) = p(x(t) \mid S(t))
   An acoustic observation x is conditionally independent of all other observations given the state that generated it.

Output distribution
(Same HMM topology as before, with output pdfs p(x|s_1), p(x|s_2), p(x|s_3).)
- Single multivariate Gaussian with mean µ_j and covariance matrix Σ_j:
    b_j(x) = p(x \mid S = j) = \mathcal{N}(x; \mu_j, \Sigma_j)
- M-component Gaussian mixture model:
    b_j(x) = p(x \mid S = j) = \sum_{m=1}^{M} c_{jm} \mathcal{N}(x; \mu_{jm}, \Sigma_{jm})
- Neural network:
    b_j(x) \propto P(S = j \mid x) / P(S = j)
  NB: an NN outputs posterior probabilities.

Background: cdf
Consider a real-valued random variable X.
The cumulative distribution function (cdf) F(x) for X is
    F(x) = P(X \le x)
To obtain the probability of falling in an interval we can do the following:
    P(a < X \le b) = P(X \le b) - P(X \le a) = F(b) - F(a)

Background: pdf
The rate of change of the cdf gives us the probability density function (pdf), p(x):
    p(x) = \frac{d}{dx} F(x) = F'(x), \qquad F(x) = \int_{-\infty}^{x} p(x')\, dx'
p(x) is not the probability that X has value x, but the pdf is proportional to the probability that X lies in a small interval centred on x.
Notation: p for a pdf, P for a probability.

The Gaussian distribution (univariate)
The Gaussian (or normal) distribution is the most common (and most easily analysed) continuous distribution.
It is also a reasonable model in many situations (the famous "bell curve").
If a (scalar) variable has a Gaussian distribution, then it has a probability density function of this form:
    p(x \mid \mu, \sigma^2) = \mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
The Gaussian is described by two parameters:
- the mean µ (location)
- the variance σ² (dispersion)

Plot of Gaussian distribution
Gaussians have the same shape, with the location controlled by the mean and the spread controlled by the variance.
(Figure: pdf of a one-dimensional Gaussian with zero mean and unit variance, µ = 0, σ² = 1.)

Properties of the Gaussian distribution
    \mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
(Figure: pdfs of Gaussian distributions with mean 0 and variances 1, 2 and 4.)

Parameter estimation
Estimate the mean and variance parameters of a Gaussian from data x_1, x_2, ..., x_T.
Use the following as the estimates:
    \hat{\mu} = \frac{1}{T} \sum_{t=1}^{T} x_t \quad \text{(mean)}, \qquad \hat{\sigma}^2 = \frac{1}{T} \sum_{t=1}^{T} (x_t - \hat{\mu})^2 \quad \text{(variance)}
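
As a concrete illustration, the following is a minimal numpy sketch of these two estimators together with evaluation of the resulting Gaussian pdf. The function names and the synthetic data are illustrative only and are not part of the lecture.

```python
import numpy as np

def fit_gaussian_1d(x):
    """ML estimates of the mean and variance of scalar data x (1-D array)."""
    mu = x.mean()
    var = ((x - mu) ** 2).mean()          # 1/T normalisation, as on the slide
    return mu, var

def gaussian_pdf_1d(x, mu, var):
    """Evaluate N(x; mu, var) for a scalar or an array x."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Illustrative synthetic data (not from the lecture)
x = np.random.default_rng(0).normal(loc=1.5, scale=0.8, size=1000)
mu_hat, var_hat = fit_gaussian_1d(x)
print(mu_hat, var_hat, gaussian_pdf_1d(0.0, mu_hat, var_hat))
```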

Exercise: maximum likelihood estimation (MLE)
Consider the log likelihood of a set of T training data points {x_1, ..., x_T} being generated by a Gaussian with mean µ and variance σ²:
    L = \ln p(\{x_1, \ldots, x_T\} \mid \mu, \sigma^2) = -\frac{1}{2} \sum_{t=1}^{T} \left( \frac{(x_t - \mu)^2}{\sigma^2} + \ln \sigma^2 + \ln(2\pi) \right)
      = -\frac{1}{2\sigma^2} \sum_{t=1}^{T} (x_t - \mu)^2 - \frac{T}{2} \ln \sigma^2 - \frac{T}{2} \ln(2\pi)
By maximising the log likelihood function with respect to µ, show that the maximum likelihood estimate for the mean is indeed the sample mean:
    \mu_{ML} = \frac{1}{T} \sum_{t=1}^{T} x_t

The multivariate Gaussian distribution
The D-dimensional vector x = (x_1, ..., x_D)^T follows a multivariate Gaussian (or normal) distribution if it has a probability density function of the following form:
    p(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right)
The pdf is parameterised by the mean vector µ = (µ_1, ..., µ_D)^T and the covariance matrix
    \Sigma = \begin{pmatrix} \sigma_{11} & \cdots & \sigma_{1D} \\ \vdots & \ddots & \vdots \\ \sigma_{D1} & \cdots & \sigma_{DD} \end{pmatrix}
The 1-dimensional Gaussian is a special case of this pdf.
The argument to the exponential, -\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu), is referred to as a quadratic form.

Covariance matrix
The mean vector µ is the expectation of x:
    \mu = E[x]
The covariance matrix Σ is the expectation of the deviation of x from the mean:
    \Sigma = E[(x - \mu)(x - \mu)^T]
Σ is a D × D symmetric matrix:
    \sigma_{ij} = E[(x_i - \mu_i)(x_j - \mu_j)] = E[(x_j - \mu_j)(x_i - \mu_i)] = \sigma_{ji}
The sign of the covariance helps to determine the relationship between two components:
- If x_j is large when x_i is large, then (x_i - µ_i)(x_j - µ_j) will tend to be positive.
- If x_j is small when x_i is large, then (x_i - µ_i)(x_j - µ_j) will tend to be negative.

Spherical Gaussian
(Figure: contour and surface plots of p(x_1, x_2).)
    \mu = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \rho_{12} = 0
NB: correlation coefficient ρ_ij = σ_ij / √(σ_ii σ_jj), with -1 ≤ ρ_ij ≤ 1.

Diagonal Covariance Gaussian
(Figure: contour and surface plots of p(x_1, x_2).)
    \mu = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}, \quad \rho_{12} = 0
NB: correlation coefficient ρ_ij = σ_ij / √(σ_ii σ_jj), with -1 ≤ ρ_ij ≤ 1.

Full covariance Gaussian
(Figure: contour and surface plots of p(x_1, x_2).)
    \mu = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} 1 & 1 \\ 1 & 4 \end{pmatrix}, \quad \rho_{12} = 0.5
NB: correlation coefficient ρ_ij = σ_ij / √(σ_ii σ_jj), with -1 ≤ ρ_ij ≤ 1.

Parameter estimation of a multivariate Gaussian distribution
It is possible to show that the mean vector and covariance matrix that maximise the likelihood of the training data are given by:
    \hat{\mu} = \frac{1}{T} \sum_{t=1}^{T} x_t, \qquad \hat{\Sigma} = \frac{1}{T} \sum_{t=1}^{T} (x_t - \hat{\mu})(x_t - \hat{\mu})^T
where x_t = (x_{t1}, ..., x_{tD})^T.
NB: T denotes either the number of samples or vector transpose depending on context.
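
A corresponding sketch for the multivariate case, again as an illustration rather than part of the slides, estimates the mean vector and covariance matrix and evaluates the log of the Gaussian density (working in the log avoids the determinant and exponential underflowing):

```python
import numpy as np

def fit_gaussian(X):
    """ML estimates for a multivariate Gaussian; X has shape (T, D)."""
    mu = X.mean(axis=0)
    diff = X - mu
    Sigma = diff.T @ diff / X.shape[0]    # (1/T) sum_t (x_t - mu)(x_t - mu)^T
    return mu, Sigma

def log_gaussian(x, mu, Sigma):
    """log N(x; mu, Sigma) for a single D-dimensional vector x."""
    D = mu.shape[0]
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    quad = diff @ np.linalg.solve(Sigma, diff)   # (x - mu)^T Sigma^{-1} (x - mu)
    return -0.5 * (D * np.log(2.0 * np.pi) + logdet + quad)
```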

Example data
(Figure: scatter plot of two-dimensional data, X1 versus X2.)

Maximum likelihood fit to a Gaussian
(Figure: the same data with the contours of the maximum likelihood Gaussian fit overlaid.)

Data in clusters (example 1)
(Figure: scatter plot of data drawn from two clusters.)
    \mu_1 = (0, 0)^T, \quad \mu_2 = (1, 1)^T, \quad \Sigma_1 = \Sigma_2 = 0.2\, I

Example 1 fit by a Gaussian
(Figure: the two-cluster data of example 1 fitted with a single Gaussian.)
    \mu_1 = (0, 0)^T, \quad \mu_2 = (1, 1)^T, \quad \Sigma_1 = \Sigma_2 = 0.2\, I

k-means clustering
- k-means is an automatic procedure for clustering unlabelled data
- Requires a prespecified number of clusters
- The clustering algorithm chooses a set of clusters with the minimum within-cluster variance
- Guaranteed to converge (eventually)
- The clustering solution is dependent on the initialisation
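
A minimal sketch of the k-means procedure described above (random initialisation from the data points, Euclidean distance; the stopping test and the iteration cap are choices of this sketch, not specified on the slide):

```python
import numpy as np

def kmeans(X, K, n_iter=100, seed=0):
    """Basic k-means for X of shape (N, D); returns centres and assignments."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=K, replace=False)]   # initialise from data points
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest centre
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Update step: each centre becomes the mean of its assigned points
        new_centres = np.array([X[assign == k].mean(axis=0) if np.any(assign == k)
                                else centres[k] for k in range(K)])
        if np.allclose(new_centres, centres):    # no centre moved: converged
            break
        centres = new_centres
    return centres, assign
```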

k-means example: data set
(Figure: 14 two-dimensional points: (4,13), (2,9), (7,8), (6,6), (7,6), (4,5), (5,4), (8,4), (10,5), (1,2), (5,2), (1,1), (3,1), (10,0).)

k-means example: initialisation
(Figure: the same data set with the initial cluster centres chosen.)

k-means example: iteration 1 (assign points to clusters)
(Figure: each point is assigned to its nearest centre.)

k-means example: iteration 1 (recompute centres)
(Figure: the centres are recomputed as the means of the assigned points, giving (4.33, 10), (3.57, 3) and (8.75, 3.75).)

k-means example: iteration 2 (assign points to clusters)
(Figure: points are reassigned to the updated centres.)

k-means example: iteration 2 (recompute centres)
(Figure: the centres are recomputed again, giving (4.33, 10), (3.17, 2.5) and (8.2, 4.2).)

k-means example: iteration 3 (assign points to clusters)
(Figure: assigning points to the centres produces no changes, so the algorithm has converged.)

Mixture model
A more flexible form of density estimation is made up of a linear combination of component densities:
    p(x) = \sum_{m=1}^{M} P(m)\, p(x \mid m)
This is called a mixture model or a mixture density.
- p(x | m): component densities
- P(m): mixing parameters
Generative model:
1. Choose a mixture component based on P(m)
2. Generate a data point x from the chosen component using p(x | m)
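
The two-step generative recipe can be written directly as code; this is a small illustrative sketch for a one-dimensional Gaussian mixture (the parameter values in the example call are made up):

```python
import numpy as np

def sample_mixture(priors, means, variances, n, seed=0):
    """Draw n samples: first choose a component m ~ P(m), then x ~ p(x | m)."""
    rng = np.random.default_rng(seed)
    comps = rng.choice(len(priors), size=n, p=priors)            # step 1: pick components
    return rng.normal(means[comps], np.sqrt(variances[comps]))   # step 2: sample from them

samples = sample_mixture(np.array([0.3, 0.7]), np.array([0.0, 3.0]),
                         np.array([1.0, 0.25]), n=5)
```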

Gaussian mixture model
The most important mixture model is the Gaussian mixture model (GMM), where the component densities are Gaussians.
Consider a GMM where each component Gaussian N(x; µ_m, Σ_m) has mean µ_m and a spherical covariance Σ_m = σ_m² I:
    p(x) = \sum_{m=1}^{M} P(m)\, p(x \mid m) = \sum_{m=1}^{M} P(m)\, \mathcal{N}(x; \mu_m, \sigma_m^2 I)
(Figure: network view of the GMM; mixing weights P(1), P(2), ..., P(M) combine component densities p(x|1), p(x|2), ..., p(x|M) over inputs x_1, x_2, ..., x_d.)

GMM parameter estimation when we know which component generated the data
Define the indicator variable z_mt = 1 if component m generated data point x_t (and 0 otherwise).
If z_mt wasn't hidden then we could count the number of observed data points generated by m:
    N_m = \sum_{t=1}^{T} z_{mt}
and estimate the mean, variance and mixing parameters as:
    \hat{\mu}_m = \frac{\sum_t z_{mt}\, x_t}{N_m}, \qquad
    \hat{\sigma}_m^2 = \frac{\sum_t z_{mt}\, \|x_t - \hat{\mu}_m\|^2}{N_m}, \qquad
    \hat{P}(m) = \frac{1}{T} \sum_t z_{mt} = \frac{N_m}{T}

GMM parameter estimation when we don't know which component generated the data
Problem: we don't know z_mt, i.e. which mixture component a data point comes from...
Idea: use the posterior probability P(m | x), which gives the probability that component m was responsible for generating data point x:
    P(m \mid x) = \frac{p(x \mid m)\, P(m)}{p(x)} = \frac{p(x \mid m)\, P(m)}{\sum_{m'=1}^{M} p(x \mid m')\, P(m')}
The P(m | x) are called the component occupation probabilities (or sometimes the responsibilities).
Since they are posterior probabilities:
    \sum_{m=1}^{M} P(m \mid x) = 1

Soft assignment
Estimate soft counts based on the component occupation probabilities P(m | x_t):
    N_m = \sum_{t=1}^{T} P(m \mid x_t)
We can imagine assigning data points to component m weighted by the component occupation probability P(m | x_t), so we could imagine estimating the mean, variance and prior probabilities as:
    \hat{\mu}_m = \frac{\sum_t P(m \mid x_t)\, x_t}{\sum_t P(m \mid x_t)} = \frac{\sum_t P(m \mid x_t)\, x_t}{N_m}
    \hat{\sigma}_m^2 = \frac{\sum_t P(m \mid x_t)\, \|x_t - \hat{\mu}_m\|^2}{\sum_t P(m \mid x_t)} = \frac{\sum_t P(m \mid x_t)\, \|x_t - \hat{\mu}_m\|^2}{N_m}
    \hat{P}(m) = \frac{1}{T} \sum_t P(m \mid x_t) = \frac{N_m}{T}

EM algorithm
Problem! Recall that:
    P(m \mid x) = \frac{p(x \mid m)\, P(m)}{p(x)} = \frac{p(x \mid m)\, P(m)}{\sum_{m'=1}^{M} p(x \mid m')\, P(m')}
We need to know p(x | m) and P(m) to estimate the parameters of P(m | x), and to estimate P(m)...
Solution: an iterative algorithm where each iteration has two parts:
- E-step: compute the component occupation probabilities P(m | x) using the current estimates of the GMM parameters (means, variances, mixing parameters).
- M-step: compute the GMM parameters using the current estimates of the component occupation probabilities.
Starting from some initialisation (e.g. using k-means for the means), these steps are alternated until convergence.
This is called the EM algorithm and can be shown to maximise the likelihood (NB: it finds a local maximum rather than the global one).
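
Putting the E-step and M-step together, here is a compact sketch of EM for a one-dimensional GMM, using the soft-count updates from the previous slides; the crude initialisation (random data points for the means) stands in for the k-means initialisation mentioned above:

```python
import numpy as np

def em_gmm_1d(x, K, n_iter=50, seed=0):
    """EM for a 1-D GMM on data x of shape (T,). Returns weights, means, variances."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=K, replace=False)   # crude initialisation of the means
    var = np.full(K, x.var())
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities gamma[t, m] = P(m | x_t)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        gamma = w * dens
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: soft counts N_m and the parameter updates from the previous slides
        N = gamma.sum(axis=0)
        mu = (gamma * x[:, None]).sum(axis=0) / N
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / N
        w = N / len(x)
    return w, mu, var
```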

Maximum likelihood parameter estimation
The likelihood of a data set X = {x_1, x_2, ..., x_T} is given by:
    L = \prod_{t=1}^{T} p(x_t) = \prod_{t=1}^{T} \sum_{m=1}^{M} p(x_t \mid m)\, P(m)
We can regard the negative log likelihood as an error function: considering the derivatives of E with respect to the parameters gives expressions like those on the previous slides.

Example 1 fit using a GMM
(Figure: the two-cluster data of example 1 fitted with a two-component GMM using EM.)

Peakily distributed data (example 2)
(Figure: scatter plot of data drawn from two zero-mean Gaussians with very different spreads.)
    \mu_1 = \mu_2 = (0, 0)^T, \quad \Sigma_1 = 0.1\, I, \quad \Sigma_2 = 2\, I

Example 2 fit by a Gaussian
(Figure: the peakily distributed data fitted with a single Gaussian.)
    \mu_1 = \mu_2 = (0, 0)^T, \quad \Sigma_1 = 0.1\, I, \quad \Sigma_2 = 2\, I

Example 2 fit by a GMM
(Figure: the peakily distributed data fitted with a two-component GMM using EM.)

Example 2: component Gaussians
(Figure: the two component Gaussians of the fitted GMM shown separately, p(x | m=1) and p(x | m=2).)

Comments on GMMs
- GMMs trained using the EM algorithm are able to self-organise to fit a data set
- Individual components take responsibility for parts of the data set (probabilistically)
- Soft assignment to components, not hard assignment: soft clustering
- GMMs scale very well; for example, large speech recognition systems can have 30,000 GMMs, each with 32 components: sometimes 1 million Gaussian components!
- And the parameters are all estimated from (a lot of) data by EM

Back to HMMs...
(Same HMM topology as before, with transition probabilities and output pdfs p(x|s_1), p(x|s_2), p(x|s_3).)
Output distribution:
- Single multivariate Gaussian with mean µ_j and covariance matrix Σ_j:
    b_j(x) = p(x \mid S = j) = \mathcal{N}(x; \mu_j, \Sigma_j)
- M-component Gaussian mixture model:
    b_j(x) = p(x \mid S = j) = \sum_{m=1}^{M} c_{jm} \mathcal{N}(x; \mu_{jm}, \Sigma_{jm})

The three problems of HMMs
Working with HMMs requires the solution of three problems:
1. Likelihood: determine the overall likelihood of an observation sequence X = (x_1, ..., x_t, ..., x_T) being generated by an HMM.
2. Decoding: given an observation sequence and an HMM, determine the most probable hidden state sequence.
3. Training: given an observation sequence and an HMM, learn the best HMM parameters λ = {{a_jk}, {b_j()}}.

1. Likelihood: how to calculate?
(Figure: a trellis of states S_0, ..., S_4 against time 1, ..., 7, with transition probabilities a_01, a_11, a_12, a_22, a_23, a_33, a_34 and observations x_1, ..., x_7.)
    p(X, path_l \mid \lambda) = p(X \mid path_l, \lambda)\, p(path_l \mid \lambda)
      = p(X \mid s_0 s_1 s_1 s_1 s_2 s_2 s_3 s_3 s_4, \lambda)\, p(s_0 s_1 s_1 s_1 s_2 s_2 s_3 s_3 s_4 \mid \lambda)
      = b_1(x_1) b_1(x_2) b_1(x_3) b_2(x_4) b_2(x_5) b_3(x_6) b_3(x_7)\; a_{01} a_{11} a_{11} a_{12} a_{22} a_{23} a_{33} a_{34}
    p(X \mid \lambda) = \sum_{\{path_l\}} p(X, path_l \mid \lambda) \quad \text{(forward/backward algorithm)}
    p(X \mid \lambda) \approx \max_{path_l} p(X, path_l \mid \lambda) \quad \text{(Viterbi algorithm)}

Trellis for /k ae t/
(Figure: a trellis built from three concatenated three-state HMMs for the phones /k/, /ae/ and /t/ (states S_1, S_2, S_3 in each), plotted against time 1, ..., 19 and observations x_1, ..., x_19.)

1. Likelihood: the Forward algorithm
Goal: determine p(X | λ).
Sum over all possible state sequences s_1 s_2 ... s_T that could result in the observation sequence X.
Rather than enumerating each sequence, compute the probabilities recursively (exploiting the Markov assumption).
How many path calculations are there in p(X | λ)?
    N \times N \times \cdots \times N \ (T \text{ times}) = N^T
where N is the number of HMM states and T is the length of the observation sequence; e.g. N^T \approx 10^{10} for N = 3, T = 20.
Computational complexity of the multiplications: O(2 T N^T).
The Forward algorithm reduces this to O(T N^2).

Recursive algorithms on HMMs
Visualise the problem as a state-time trellis.
(Figure: states i, j, k at times t-1, t, t+1, with transitions between consecutive time slices.)

1. Likelihood: the Forward algorithm (cont.)
Define the forward probability α_t(j): the probability of observing the observation sequence x_1 ... x_t and being in state j at time t:
    \alpha_t(j) = p(x_1, \ldots, x_t, S(t) = j \mid \lambda)

1. Likelihood: the Forward recursion
Initialisation:
    \alpha_0(s_I) = 1, \qquad \alpha_0(j) = 0 \ \text{if}\ j \ne s_I
Recursion:
    \alpha_t(j) = \sum_{i=1}^{N} \alpha_{t-1}(i)\, a_{ij}\, b_j(x_t), \qquad 1 \le j \le N,\ 1 \le t \le T
Termination:
    p(X \mid \lambda) = \alpha_T(s_E) = \sum_{i=1}^{N} \alpha_T(i)\, a_{iE}
where s_I is the initial (entry) state and s_E is the final (exit) state.
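
A small sketch of this recursion, assuming the emission densities have already been evaluated into a matrix B with B[t, j] = b_j(x_{t+1}) (0-based time index); pi holds the entry transitions a_Ij and a_exit the exit transitions a_jE. Probabilities are kept in the linear domain here; the log-domain version is discussed later.

```python
import numpy as np

def forward(pi, A, a_exit, B):
    """Forward algorithm. Returns alpha of shape (T, N) and p(X | lambda)."""
    T, N = B.shape
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[0]                       # alpha_1(j) = a_Ij b_j(x_1)
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]   # sum_i alpha_{t-1}(i) a_ij, times b_j(x_t)
    return alpha, alpha[-1] @ a_exit           # p(X | lambda) = sum_i alpha_T(i) a_iE
```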

1. Likelihood: Forward recursion (diagram)
    \alpha_t(j) = p(x_1, \ldots, x_t, S(t) = j \mid \lambda) = \sum_{i=1}^{N} \alpha_{t-1}(i)\, a_{ij}\, b_j(x_t)
(Figure: trellis showing α_{t-1}(i), α_{t-1}(j), α_{t-1}(k) combined through transitions a_ij, a_jj, a_kj and summed to give α_t(j).)

Viterbi approximation
Instead of summing over all possible state sequences, just consider the most likely one.
Achieve this by changing the summation to a maximisation in the recursion:
    V_t(j) = \max_i V_{t-1}(i)\, a_{ij}\, b_j(x_t)
Changing the recursion in this way gives the likelihood of the most probable path.
We need to keep track of the states that make up this path by keeping a sequence of backpointers to enable a Viterbi backtrace: the backpointer for each state at each time indicates the previous state on the most probable path.

Viterbi recursion
    V_t(j) = \max_i V_{t-1}(i)\, a_{ij}\, b_j(x_t)
the likelihood of the most probable path.
(Figure: trellis showing V_{t-1}(i), V_{t-1}(j), V_{t-1}(k) combined through transitions a_ij, a_jj, a_kj and maximised to give V_t(j).)

Viterbi recursion: backpointers
Backpointers record the previous state on the most probable path.
(Figure: the same trellis with the backpointer bt_t(j) = i indicating that state i at time t-1 precedes state j at time t on the best path.)

2. Decoding: the Viterbi algorithm
Initialisation:
    V_0(s_I) = 1, \qquad V_0(j) = 0 \ \text{and}\ bt_0(j) = 0 \ \text{if}\ j \ne s_I
Recursion:
    V_t(j) = \max_{i=1}^{N} V_{t-1}(i)\, a_{ij}\, b_j(x_t)
    bt_t(j) = \arg\max_{i=1}^{N} V_{t-1}(i)\, a_{ij}\, b_j(x_t)
Termination:
    P^* = V_T(s_E) = \max_{i=1}^{N} V_T(i)\, a_{iE}
    s_T^* = bt_T(s_E) = \arg\max_{i=1}^{N} V_T(i)\, a_{iE}
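
The Viterbi recursion has the same structure as the forward recursion with the sum replaced by a max, plus backpointers and a backtrace. A sketch using the same conventions as the forward sketch above (linear-domain probabilities, pre-computed emission matrix B):

```python
import numpy as np

def viterbi(pi, A, a_exit, B):
    """Viterbi decoding. Returns the most probable state sequence and its likelihood."""
    T, N = B.shape
    V = np.zeros((T, N))
    back = np.zeros((T, N), dtype=int)         # backpointers bt_t(j)
    V[0] = pi * B[0]
    for t in range(1, T):
        scores = V[t - 1][:, None] * A         # scores[i, j] = V_{t-1}(i) a_ij
        back[t] = scores.argmax(axis=0)
        V[t] = scores.max(axis=0) * B[t]
    last = int(np.argmax(V[-1] * a_exit))
    p_star = V[-1, last] * a_exit[last]
    path = [last]                              # backtrace from the best final state
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1], p_star
```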

Viterbi backtrace
Backtrace to find the state sequence of the most probable path.
(Figure: the trellis traversed backwards using the backpointers, e.g. bt_t(i) = j and bt_{t+1}(k) = i.)

3. Training: Forward-Backward algorithm
Goal: efficiently estimate the parameters of an HMM λ from an observation sequence.
Assume a single Gaussian output probability distribution:
    b_j(x) = p(x \mid j) = \mathcal{N}(x; \mu_j, \Sigma_j)
Parameters λ:
- Transition probabilities a_ij, with \sum_j a_{ij} = 1
- Gaussian parameters for state j: mean vector µ_j, covariance matrix Σ_j

Viterbi training
If we knew the state-time alignment, then each observation feature vector could be assigned to a specific state.
A state-time alignment can be obtained using the most probable path found by Viterbi decoding.
Maximum likelihood estimate of a_ij, if C(i → j) is the count of transitions from i to j:
    \hat{a}_{ij} = \frac{C(i \to j)}{\sum_k C(i \to k)}
Likewise, if Z_j is the set of observed acoustic feature vectors assigned to state j, we can use the standard maximum likelihood estimates for the mean and the covariance:
    \hat{\mu}_j = \frac{\sum_{x \in Z_j} x}{|Z_j|}, \qquad \hat{\Sigma}_j = \frac{\sum_{x \in Z_j} (x - \hat{\mu}_j)(x - \hat{\mu}_j)^T}{|Z_j|}

EM algorithm
Viterbi training is an approximation: we would like to consider all possible paths.
In this case, rather than having a hard state-time alignment, we estimate a probability.
State occupation probability: the probability γ_t(j) of occupying state j at time t given the sequence of observations. Compare with the component occupation probability in a GMM.
We can use this for an iterative algorithm for HMM training: the EM algorithm (whose adaptation to HMMs is called the Baum-Welch algorithm).
Each iteration has two steps:
- E-step: estimate the state occupation probabilities (Expectation)
- M-step: re-estimate the HMM parameters based on the estimated state occupation probabilities (Maximisation)

Backward probabilities
To estimate the state occupation probabilities it is useful to define (recursively) another set of probabilities, the backward probabilities:
    \beta_t(j) = p(x_{t+1}, \ldots, x_T \mid S(t) = j, \lambda)
the probability of the future observations given that the HMM is in state j at time t.
These can be computed recursively (going backwards in time).
Initialisation:
    \beta_T(i) = a_{iE}
Recursion:
    \beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(x_{t+1})\, \beta_{t+1}(j) \qquad \text{for } t = T-1, \ldots, 1
Termination:
    p(X \mid \lambda) = \beta_0(s_I) = \sum_{j=1}^{N} a_{Ij}\, b_j(x_1)\, \beta_1(j) = \alpha_T(s_E)

Backward recursion (diagram)
    \beta_t(j) = p(x_{t+1}, \ldots, x_T \mid S(t) = j, \lambda) = \sum_{k=1}^{N} a_{jk}\, b_k(x_{t+1})\, \beta_{t+1}(k)
(Figure: trellis showing β_t(j) computed from β_{t+1}(i), β_{t+1}(j), β_{t+1}(k) through transitions a_ji, a_jj, a_jk.)

State occupation probability
The state occupation probability γ_t(j) is the probability of occupying state j at time t given the sequence of observations.
Express it in terms of the forward and backward probabilities:
    \gamma_t(j) = P(S(t) = j \mid X, \lambda) = \frac{1}{\alpha_T(s_E)}\, \alpha_t(j)\, \beta_t(j)
recalling that p(X | λ) = α_T(s_E).
Since
    \alpha_t(j)\, \beta_t(j) = p(x_1, \ldots, x_t, S(t) = j \mid \lambda)\; p(x_{t+1}, \ldots, x_T \mid S(t) = j, \lambda)
      = p(x_1, \ldots, x_t, x_{t+1}, \ldots, x_T, S(t) = j \mid \lambda) = p(X, S(t) = j \mid \lambda)
we have
    P(S(t) = j \mid X, \lambda) = \frac{p(X, S(t) = j \mid \lambda)}{p(X \mid \lambda)}
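
A sketch of the backward recursion and of the state occupation probabilities, using the same conventions as the forward sketch (B[t, j] = b_j(x_{t+1}) with a 0-based time index, a_exit[i] = a_iE):

```python
import numpy as np

def backward(A, a_exit, B):
    """Backward probabilities beta of shape (T, N), computed backwards in time."""
    T, N = B.shape
    beta = np.zeros((T, N))
    beta[-1] = a_exit                            # beta_T(i) = a_iE
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * beta[t + 1])   # sum_j a_ij b_j(x_{t+1}) beta_{t+1}(j)
    return beta

def state_occupation(alpha, beta, p_X):
    """gamma[t, j] = P(S(t)=j | X, lambda) = alpha_t(j) beta_t(j) / p(X | lambda)."""
    return alpha * beta / p_X
```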

Re-estimation of Gaussian parameters
The sum of state occupation probabilities through time for a state may be regarded as a soft count.
We can use this soft alignment to re-estimate the HMM parameters:
    \hat{\mu}_j = \frac{\sum_{t=1}^{T} \gamma_t(j)\, x_t}{\sum_{t=1}^{T} \gamma_t(j)}, \qquad
    \hat{\Sigma}_j = \frac{\sum_{t=1}^{T} \gamma_t(j)\, (x_t - \hat{\mu}_j)(x_t - \hat{\mu}_j)^T}{\sum_{t=1}^{T} \gamma_t(j)}

Re-estimation of transition probabilities
Similarly to the state occupation probability, we can estimate ξ_t(i, j), the probability of being in state i at time t and state j at time t+1, given the observations:
    \xi_t(i, j) = P(S(t) = i, S(t+1) = j \mid X, \lambda) = \frac{p(S(t) = i, S(t+1) = j, X \mid \lambda)}{p(X \mid \lambda)}
      = \frac{\alpha_t(i)\, a_{ij}\, b_j(x_{t+1})\, \beta_{t+1}(j)}{\alpha_T(s_E)}
We can use this to re-estimate the transition probabilities:
    \hat{a}_{ij} = \frac{\sum_{t=1}^{T} \xi_t(i, j)}{\sum_{k=1}^{N} \sum_{t=1}^{T} \xi_t(i, k)}
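
A sketch of one M-step over a single utterance, computing ξ_t(i, j) from α, β and the model, then re-estimating the transition probabilities and the Gaussian means. For simplicity this sketch ignores the exit transition in the normalisation of the transition estimates and omits the covariance update; shapes follow the earlier sketches.

```python
import numpy as np

def reestimate(alpha, beta, A, B, X, p_X):
    """Re-estimate transitions and means. alpha, beta, B: (T, N); A: (N, N); X: (T, D)."""
    # xi[t, i, j] = alpha_t(i) a_ij b_j(x_{t+1}) beta_{t+1}(j) / p(X | lambda)
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[1:] * beta[1:])[:, None, :]) / p_X
    gamma = alpha * beta / p_X
    A_new = xi.sum(axis=0) / xi.sum(axis=(0, 2))[:, None]   # hat{a}_ij
    mu_new = (gamma.T @ X) / gamma.sum(axis=0)[:, None]     # hat{mu}_j (soft-weighted mean)
    return A_new, mu_new
```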

Pulling it all together
Iterative estimation of HMM parameters using the EM algorithm. At each iteration:
E-step: for all time-state pairs,
1. Recursively compute the forward probabilities α_t(j) and backward probabilities β_t(j)
2. Compute the state occupation probabilities γ_t(j) and ξ_t(i, j)
M-step: based on the estimated state occupation probabilities, re-estimate the HMM parameters: mean vectors µ_j, covariance matrices Σ_j and transition probabilities a_ij.
The application of the EM algorithm to HMM training is sometimes called the Forward-Backward algorithm or Baum-Welch algorithm.

Extension to a corpus of utterances
We usually train from a large corpus of R utterances.
If x_t^r is the t-th frame of the r-th utterance X^r, then we can compute the probabilities α_t^r(j), β_t^r(j), γ_t^r(j) and ξ_t^r(i, j) as before.
The re-estimates are as before, except we must sum over the R utterances, e.g.:
    \hat{\mu}_j = \frac{\sum_{r=1}^{R} \sum_{t=1}^{T} \gamma_t^r(j)\, x_t^r}{\sum_{r=1}^{R} \sum_{t=1}^{T} \gamma_t^r(j)}
In addition, we usually employ embedded training, in which fine-tuning of the phone labelling with forced Viterbi alignment (forced alignment) is involved. (For details see Section 9.7 in Jurafsky and Martin's SLP.)

Extension to Gaussian mixture model (GMM)
The assumption of a Gaussian distribution at each state is very strong; in practice the acoustic feature vectors associated with a state may be strongly non-Gaussian.
In this case an M-component Gaussian mixture model is an appropriate density function:
    b_j(x) = p(x \mid S = j) = \sum_{m=1}^{M} c_{jm}\, \mathcal{N}(x; \mu_{jm}, \Sigma_{jm})
Given enough components, this family of functions can model any distribution.
Train using the EM algorithm, in which the component occupation probabilities are estimated in the E-step.

EM training of HMM/GMM
Rather than estimating the state-time alignment, we estimate the component/state-time alignment and the component-state occupation probabilities γ_t(j, m): the probability of occupying mixture component m of state j at time t (written ξ_tm(j) in Jurafsky and Martin's SLP).
We can thus re-estimate the mean of mixture component m of state j as follows:
    \hat{\mu}_{jm} = \frac{\sum_{t=1}^{T} \gamma_t(j, m)\, x_t}{\sum_{t=1}^{T} \gamma_t(j, m)}
and likewise for the covariance matrices (mixture models often use diagonal covariance matrices).
The mixture coefficients are re-estimated in a similar way to the transition probabilities:
    \hat{c}_{jm} = \frac{\sum_{t=1}^{T} \gamma_t(j, m)}{\sum_{m'=1}^{M} \sum_{t=1}^{T} \gamma_t(j, m')}

Doing the computation
The forward, backward and Viterbi recursions result in a long sequence of probabilities being multiplied.
This can cause floating point underflow problems.
In practice, computations are performed in the log domain (in which multiplications become additions).
Working in the log domain also avoids needing to perform the exponentiation when computing Gaussians.
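
In the log domain, products of probabilities become sums of log probabilities, and the sums inside the forward and backward recursions are carried out with a "log-sum-exp". A minimal sketch of that primitive (an equivalent is available as scipy.special.logsumexp):

```python
import numpy as np

def logsumexp(log_values):
    """Stable log(sum(exp(v))): adds probabilities that are stored as logs."""
    m = np.max(log_values)
    if np.isneginf(m):              # all terms are log(0): the sum is zero
        return -np.inf
    return m + np.log(np.sum(np.exp(log_values - m)))

# e.g. the log-domain forward update for state j becomes
# log_alpha[t, j] = logsumexp(log_alpha[t - 1] + log_A[:, j]) + log_B[t, j]
```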

A note on HMM topology
- Left-to-right model: transition matrix of the form
    \begin{pmatrix} a_{11} & a_{12} & 0 \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{pmatrix}
- Parallel-path left-to-right model: transition matrix of the form
    \begin{pmatrix} a_{11} & a_{12} & a_{13} & 0 & 0 \\ 0 & a_{22} & a_{23} & a_{24} & 0 \\ 0 & 0 & a_{33} & a_{34} & a_{35} \\ 0 & 0 & 0 & a_{44} & a_{45} \\ 0 & 0 & 0 & 0 & a_{55} \end{pmatrix}
- Ergodic model: fully populated transition matrix (a_{ij} for all i, j)
Speech recognition: left-to-right HMMs with 3-5 states. Speaker recognition: ergodic HMM.

A note on HMM emission probabilities
(Figure: a left-to-right HMM with transition probabilities a_01, a_11, a_12, a_22, a_23, a_33, a_34 and emission pdfs b_1(x), b_2(x), b_3(x).)
Types of emission probability:
- Continuous (density) HMM: continuous density, e.g. GMM or NN/DNN
- Discrete (probability) HMM: discrete probabilities, via VQ (vector quantisation)
- Semi-continuous HMM (tied-mixture HMM): continuous density with tied mixtures

Summary: HMMs
HMMs provide a generative model for statistical speech recognition.
Three key problems:
1. Computing the overall likelihood: the Forward algorithm
2. Decoding the most likely state sequence: the Viterbi algorithm
3. Estimating the most likely parameters: the EM (Forward-Backward) algorithm
Solutions to these problems are tractable due to the two key HMM assumptions:
1. Conditional independence of observations given the current state
2. Markov assumption on the states

References: HMMs
Gales and Young (2007). "The Application of Hidden Markov Models in Speech Recognition", Foundations and Trends in Signal Processing, 1(3), 195-304: section 2.2.
Jurafsky and Martin (2008). Speech and Language Processing (2nd ed.): sections 6.1-6.5; 9.2; 9.4. (Errata at http://www.cs.colorado.edu/~martin/slp/errata/SLP2-PIEV-Errata.html)
Rabiner and Juang (1986). "An Introduction to Hidden Markov Models", IEEE ASSP Magazine, 3(1), 4-16.
Renals and Hain (2010). "Speech Recognition", in Computational Linguistics and Natural Language Processing Handbook, Clark, Fox and Lappin (eds.), Blackwells.