Design and Implementation of Speech Recognition Systems


Design and Implementation of Speech Recognition Systems Spring 2013 Class 7: Templates to HMMs 13 Feb 2013 1

Recap: Thus far, we have looked at dynamic programming for string matching, and derived DTW from DP for isolated word recognition. We identified the search trellis and time-synchronous search as efficient mechanisms for decoding, and looked at ways to improve search efficiency using pruning; in particular, we identified beam pruning as a nearly universal pruning mechanism in speech recognition. We also looked at the limitations of DTW and template matching: it is OK for limited, small-vocabulary applications, but brittle; it breaks down if speakers change. 2

Recap: Isolated word speech recognition. [Figure: the spoken input word is compared against recordings (templates) of Word1 through Word-N; the word with the lowest distance is selected.] The compare operation finds the distance between example (training) recordings, i.e. templates of the words, and the new input recording. 3

Recap: DTW-based comparison of vector sequences. The alignment trellis has the MFC vector sequence for the template on one axis and the MFC vector sequence for the input recording on the other (t = 0, 1, ..., 11); we find the lowest-cost path from the start node (red) to the sink node (blue). The cost of the lowest-cost path is the distance between template and input, and the path itself gives us the alignment between the two sequences. In the standard setup, the node cost is the distance between the aligned vectors and all edge costs are zero. 6

DTW: Dynamic Programming Algorithm. P_{i,j} = best path cost from the origin to node [i,j], where the i-th template frame aligns with the j-th input frame; C_{i,j} = local node cost of aligning template frame i to input frame j. The recursion is P_{i,j} = min(P_{i,j-1} + C_{i,j}, P_{i-1,j-1} + C_{i,j}, P_{i-2,j-1} + C_{i,j}) = min(P_{i,j-1}, P_{i-1,j-1}, P_{i-2,j-1}) + C_{i,j}. Edge costs are 0 in this formulation; the quantity being minimized is the total cost of alignment. 7
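To make the recurrence concrete, here is a minimal Python/NumPy sketch of the DP loop under the slide's assumptions (node costs only, zero edge costs); the Euclidean node cost and the function name dtw_cost are illustrative choices, not from the slides.

```python
import numpy as np

def dtw_cost(template, data):
    """Minimal DTW as on the slide: node costs only, all edge costs zero.

    template: (T, D) array of template frames (e.g., MFC vectors)
    data:     (N, D) array of input frames
    Allowed predecessors of node (i, j): (i, j-1), (i-1, j-1), (i-2, j-1).
    """
    T, N = len(template), len(data)
    P = np.full((T, N), np.inf)          # P[i, j] = best path cost to node (i, j)

    def C(i, j):                         # local node cost: Euclidean distance
        return np.linalg.norm(template[i] - data[j])

    P[0, 0] = C(0, 0)                    # the path starts at the origin node
    for j in range(1, N):                # time-synchronous: one input column at a time
        for i in range(T):
            best_prev = P[i, j - 1]                       # horizontal predecessor
            if i >= 1:
                best_prev = min(best_prev, P[i - 1, j - 1])   # diagonal predecessor
            if i >= 2:
                best_prev = min(best_prev, P[i - 2, j - 1])   # skip one template frame
            P[i, j] = best_prev + C(i, j)
    return P[T - 1, N - 1]               # cost of the lowest-cost path to the sink node
```

In the isolated-word setting described above, such a function would be called once per word template, and the word whose template returns the lowest cost is recognized.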

Today's Topics: Generalize DTW-based recognition; extend to multiple templates; move on to Hidden Markov Models. Looking ahead, we introduce the three fundamental problems of HMMs: two of the problems deal with decoding using HMMs, solved using the forward and Viterbi algorithms, and the third deals with estimating HMM parameters (seen later). We also discuss incorporating prior knowledge into the HMM framework, and different types of probabilistic models for HMMs: discrete probability distributions and continuous, mixture Gaussian distributions. 8

DTW Using A Single Template. [Figure: trellis aligning the template (vertical axis) against the data (horizontal axis).] We've seen the DTW alignment of data to model.

Limitations of A Single Template: a single template cannot capture all the possible variations in how a word can be spoken. Alternative: use multiple templates for each word and match the input against each one. 10

DTW with multiple templates. [Figure: several templates, each aligned against the same data.] Each template warps differently to best match the input; the best-matching template is selected.

Problem With Multiple Templates: finding the best match requires the evaluation of many more templates (depending on their number), which can be computationally expensive. This matters for handheld devices, even for small-vocabulary applications (think battery life!). We need a method for reducing multiple templates into a single one. Moreover, even multiple templates do not cover the space of possible variations, so we need a mechanism for generalizing from the templates to include data not seen before. We can achieve both objectives by averaging all the templates for a given word. 13

Generalizing from Templates: generalization implies going from the given templates to one that also represents others that we have not seen. Taking the average of all available templates may represent the recorded templates less accurately, but will represent other unseen templates more robustly. A general template (for a word) should capture all salient characteristics of the word, and no more. Goal: improving accuracy. We will consider several steps to accomplish this. 14

Improving the Templates Generalization by averaging the templates Generalization by reducing template length Accounting for variation within templates represented by the reduced model Accounting for varying segment lengths 15

Template Averaging: How can we average the templates when they're of different lengths? We somehow need to normalize them to each other. Solution: apply DTW (of course!). Pick one template as a master, align all other templates to it, and use the alignments generated to compute their average. Note: choosing a different master template will lead to a different average template. Which template to choose as the master? Trial and error. 16
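The averaging step could be sketched as follows; the dtw_align helper (returning, for each master frame, the frames of another template aligned to it, i.e. the DTW backtrace) is an assumed interface, not something defined in the slides.

```python
import numpy as np

def average_templates(master, others, dtw_align):
    """Average several templates of one word into a single template.

    master:    (T, D) array chosen as the master template
    others:    list of (T_k, D) arrays, the remaining templates
    dtw_align: assumed function (master, other) -> list of length T, where entry i
               is the list of frames of `other` aligned to master frame i
               (obtained from the DTW backtrace).
    """
    T = len(master)
    sums = master.astype(float).copy()     # each master frame counts once
    counts = np.ones(T)
    for other in others:
        alignment = dtw_align(master, other)
        for i, frames in enumerate(alignment):
            for f in frames:
                sums[i] += f
                counts[i] += 1
    return sums / counts[:, None]          # average template, same length as the master
```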

DTW with multiple templates. [Figure: T4 is chosen as the master template; T3 is aligned against T4.]

DTW with multiple templates. [Figure: T1, T2 and T3 are each aligned against the master template T4.] Align T4/T3, T4/T2 and T4/T1 similarly; then average all feature vectors that are aligned against each other to obtain the average template.

Benefits of Template Averaging: we have eliminated the computational cost of having multiple templates for each word, and using the averages of the aligned feature vectors generalizes from the samples. The average is representative of the templates and, more generally, is assumed to be representative of future utterances of the word. The more templates are averaged, the better the generalization. 19

Improving the Templates Generalization by averaging the templates Generalization by reducing template length Accounting for variation within templates represented by the reduced model Accounting for varying segment lengths 20

Template Size Reduction: Can we do better? Consider the template for "something", segmented as s-o-me-th-i-ng. Here, the template has been manually segmented into 6 segments, where each segment is a single phoneme. Hence, the frames of speech that make up any single segment ought to be fairly alike. If so, why not replace each segment by a single representative feature vector? How? Again by averaging the frames within the segment. This gives a reduction in the template size (memory size). 21

Example: a single template with three segments. [Figure: the template, divided into three segments, aligned against the data.] The feature vectors within each segment are assumed to be similar to each other.

Averaging Each Template Segment: each segment is replaced by a single model vector, m_j = (1/N_j) * Σ_{i ∈ segment(j)} x(i), where m_j is the model vector for the j-th segment, N_j is the number of vectors in the j-th segment, and x(i) is the i-th feature vector.

Template With One Model Vector Per Segment: just one template vector per segment is retained. [Figure: the reduced template aligned against the data.]

DTW with one model: the averaged template is matched against the data string to be recognized. Select the word whose averaged template has the lowest cost of match.

DTW with multiple models: segment all templates and average each region into a single point. [Figure: each template reduced to one model vector per segment.]

DTW with multiple models: the model vector for each segment is computed over all training sequences, m_j = (1 / Σ_k N_{k,j}) * Σ_k Σ_{i ∈ segment_k(j)} x_k(i), where segment_k(j) is the j-th segment of the k-th training sequence, N_{k,j} is the number of training vectors in the j-th segment of the k-th training sequence, x_k(i) is the i-th vector of the k-th training sequence, and m_j is the model vector for the j-th segment.
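As an illustration of this formula, a sketch that computes the segment means from several segmented training sequences; the data layout (one integer segment label per frame) is an assumption for illustration.

```python
import numpy as np

def segment_means(sequences, segmentations, num_segments):
    """m_j = mean of all training vectors assigned to segment j, over all sequences.

    sequences:     list of (T_k, D) arrays (the training templates)
    segmentations: list of length-T_k integer arrays; segmentations[k][i] = j
                   means frame i of sequence k lies in segment j
    Assumes every segment receives at least one frame.
    """
    D = sequences[0].shape[1]
    sums = np.zeros((num_segments, D))
    counts = np.zeros(num_segments)
    for x, seg in zip(sequences, segmentations):
        for frame, j in zip(x, seg):
            sums[j] += frame
            counts[j] += 1
    return sums / counts[:, None]          # row j is the model vector m_j
```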

DTW with multiple models: segment all templates and average each region into a single point, to get a simple average model which is used for recognition.

Improving the Templates Generalization by averaging the templates Generalization by reducing template length Accounting for variation within templates represented by the reduced model Accounting for varying segment lengths 30

DTW with multiple models: the inherent variation between vectors is different for the different segments. E.g., the variation in the colors of the beads in the top segment is greater than that in the bottom segment. Ideally we should account for the differences in variation in the segments. E.g., a vector in a test sequence may actually be better matched to the central segment, which permits greater variation, even though it is closer, in a Euclidean sense, to the mean of the lower segment, which permits less variation. 31

DTW with multiple models: we can define the covariance for each segment using the standard formula, C_j = (1 / Σ_k N_{k,j}) * Σ_k Σ_{i ∈ segment_k(j)} (x_k(i) - m_j)(x_k(i) - m_j)^T, where m_j is the model vector for the j-th segment and C_j is the covariance of the vectors in the j-th segment.

DTW with multiple models: the distance function must be modified to account for the covariance. Mahalanobis distance (normalizes the contribution of all dimensions of the data): d(x, m_j) = (x - m_j)^T C_j^{-1} (x - m_j), where x is a data vector, m_j is the mean of a segment, and C_j is the covariance matrix for the segment. Negative Gaussian log likelihood (assumes a Gaussian distribution for the segment and computes the probability of the vector under this distribution): Gaussian(x; m_j, C_j) = (1 / sqrt((2π)^D |C_j|)) * exp(-0.5 (x - m_j)^T C_j^{-1} (x - m_j)), so d(x, m_j) = -log Gaussian(x; m_j, C_j) = 0.5 D log(2π) + 0.5 log|C_j| + 0.5 (x - m_j)^T C_j^{-1} (x - m_j). 33
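As a sketch, the two distances above in NumPy; the function names are illustrative.

```python
import numpy as np

def mahalanobis(x, m, C):
    """d(x, m) = (x - m)^T C^{-1} (x - m)."""
    diff = x - m
    return float(diff @ np.linalg.solve(C, diff))

def neg_log_gaussian(x, m, C):
    """-log Gaussian(x; m, C) = 0.5*D*log(2*pi) + 0.5*log|C| + 0.5*Mahalanobis."""
    D = len(x)
    _, logdet = np.linalg.slogdet(C)       # log determinant of the covariance
    return 0.5 * (D * np.log(2 * np.pi) + logdet + mahalanobis(x, m, C))
```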

The Covariance: what we have computed is a full covariance matrix, C_j = (1 / Σ_k N_{k,j}) * Σ_k Σ_{i ∈ segment_k(j)} (x_k(i) - m_j)(x_k(i) - m_j)^T, and the distance measure d(x, m_j) = (x - m_j)^T C_j^{-1} (x - m_j) requires a matrix inversion. In practice we assume that all off-diagonal terms in the matrix are 0, i.e. C_j = diag(s_{j,1}^2, s_{j,2}^2, ..., s_{j,D}^2). This reduces our distance metric to d(x, m_j) = Σ_l (x_l - m_{j,l})^2 / s_{j,l}^2, where the individual variance terms are s_{j,l}^2 = (1 / Σ_k N_{k,j}) * Σ_k Σ_{i ∈ segment_k(j)} (x_{k,l}(i) - m_{j,l})^2. If we use a negative log Gaussian instead, the modified score (with the diagonal covariance) is d(x, m_j) = 0.5 Σ_l log(2π s_{j,l}^2) + 0.5 Σ_l (x_l - m_{j,l})^2 / s_{j,l}^2. 34
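And the diagonal-covariance simplification, again as a sketch (per-dimension variance estimation plus the modified score); names and data layout follow the earlier sketches and are assumptions.

```python
import numpy as np

def diag_variances(sequences, segmentations, means, num_segments):
    """s2[j, l] = average of (x_l - m_{j,l})^2 over all frames assigned to segment j."""
    D = means.shape[1]
    sums = np.zeros((num_segments, D))
    counts = np.zeros(num_segments)
    for x, seg in zip(sequences, segmentations):
        for frame, j in zip(x, seg):
            sums[j] += (frame - means[j]) ** 2
            counts[j] += 1
    return sums / counts[:, None]          # assumes every segment has at least one frame

def diag_neg_log_gaussian(x, m, s2):
    """0.5 * sum_l log(2*pi*s2_l) + 0.5 * sum_l (x_l - m_l)^2 / s2_l."""
    return float(0.5 * np.sum(np.log(2 * np.pi * s2) + (x - m) ** 2 / s2))
```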

Segmental K-means: simple uniform segmentation of training instances is not the most effective method of grouping vectors in the training sequences. A better segmentation strategy is to segment the training sequences such that the vectors within any segment are most alike, i.e. the total distance of vectors within each segment from the model vector for that segment is minimum. For a global optimum, the total distance of all vectors from the models for their respective segments must be minimum. This segmentation must be estimated; the segmental K-means procedure is an iterative procedure for estimating the optimal segmentation.

Alignment for training a model from multiple vector sequences: initialize by uniform segmentation. [Figure: templates T1-T4, each uniformly segmented, averaged into an initial model.]

Then align each template to the averaged model to get new segmentations.

[Figure: T4, then T3, then T2, then T1 are re-segmented by aligning each against the current averaged model, replacing their old segmentations with new ones.]

To summarize the iteration: initialize by uniform segmentation; align each template to the averaged model to get new segmentations; recompute the average model from the new segmentations.

The procedure can be continued until convergence. Convergence is achieved when the total best-alignment error for all training sequences does not change significantly with further refinement of the model.

Shifted terminology: the templates become the TRAINING DATA, and each template vector a TRAINING DATA VECTOR; the averaged template becomes the MODEL; each SEGMENT becomes a STATE, with segment boundaries marking state boundaries; and the m_j and s²_{j,l} are the MODEL PARAMETERS, or parameter vectors.

Improving the Templates Generalization by averaging the templates Generalization by reducing template length Accounting for variation within templates represented by the reduced model Accounting for varying segment lengths 47

Transition structures in models: the converged models can be used to score / align data sequences, but the model structure is still incomplete.

DTW with multiple models: some segments are naturally longer than others. E.g., in the example the initial (yellow) segments are usually longer than the second (pink) segments. This difference in segment lengths is distinct from the variation within a segment: segments with small variance could still persist very long for a particular sound or word. The DTW algorithm must account for these natural differences in typical segment length. This can be done by having a state-specific insertion penalty: states that have lower insertion penalties persist longer and result in longer segments. 49

Transition structures in models: state-specific insertion penalties are represented as self-transition arcs on model vectors (T_11, T_22, T_33, ...). Horizontal edges within the trellis incur the penalty associated with the corresponding arc. Every transition within the model (T_01, T_12, T_23, T_34, ...) can have its own penalty or score.

This structure also allows the inclusion of arcs that permit a state to be skipped (deleted), e.g. a T_13 arc that bypasses the central state. Other transitions, such as returning to the first state from the last state, can be permitted by including the appropriate arcs.

What should the transition scores be? Transition behavior can be expressed with probabilities. For segments that are typically long, if a data vector is within that segment, the probability that the next vector will also be within it is high. If a vector in the i-th segment is typically followed by a vector in the j-th segment, but only rarely by vectors from the k-th segment, then, given that a data vector is within the i-th segment, the probability that the next data vector lies in the j-th segment is greater than the probability that it lies in the k-th segment.

What should the transition scores be? A good choice is the negative logarithm of the transition probabilities: T_ii is the negative log of the probability that, if the current data vector belongs to the i-th state, the next data vector will also belong to the i-th state; T_ij is the negative log of the probability that, if the current data vector belongs to the i-th state, the next data vector belongs to the j-th state. More probable transitions are penalized less; impossible transitions are infinitely penalized.

Modified segmental K-means, AKA Viterbi training: transition scores can be easily computed by a simple extension of the segmental K-means algorithm. Probabilities can be obtained by simple counting: P_ij = Σ_k N_{k,i,j} / Σ_k N_{k,i}, and T_ij = -log(P_ij), where N_{k,i} is the number of vectors in the i-th segment (state) of the k-th training sequence, and N_{k,i,j} is the number of vectors in the i-th segment (state) of the k-th training sequence that were followed by vectors from the j-th segment (state). E.g., if the number of vectors in the 1st (yellow) state is 20, and 16 of them were followed by vectors from the 1st state, then P_11 = 16/20 = 0.8 and T_11 = -log(0.8). 55
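A sketch of this counting step, assuming each training sequence is represented by its per-frame state labels; the function name and layout are illustrative.

```python
import numpy as np

def transition_scores(segmentations, num_states):
    """P_ij = (# vectors in state i followed by a vector in state j) / (# vectors in state i);
    T_ij = -log(P_ij).  segmentations: list of integer state sequences, one per training sequence."""
    pair_counts = np.zeros((num_states, num_states))   # counts of (state i -> state j) pairs
    state_counts = np.zeros(num_states)                # counts of vectors in each state
    for seg in segmentations:
        for s in seg:
            state_counts[s] += 1
        for i, j in zip(seg[:-1], seg[1:]):
            pair_counts[i, j] += 1
    P = pair_counts / state_counts[:, None]            # assumes every state is visited
    with np.errstate(divide="ignore"):
        T = np.where(P > 0, -np.log(P), np.inf)        # impossible transitions: infinite penalty
    return P, T
```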

Modified segmental K-means, AKA Viterbi training: a special score is the penalty associated with starting at a particular state. In our examples we always begin at the first state; enforcing this is equivalent to setting T_01 = 0 and T_0j = infinity for j != 1. It is sometimes useful to permit entry directly into later states, i.e. to permit deletion of initial states. The score for direct entry into any state can be computed as P_j = N_0j / N and T_0j = -log(P_j), where N is the total number of training sequences and N_0j is the number of training sequences for which the first data vector was in the j-th state. In the example, N = 4, N_01 = 4, N_02 = 0, N_03 = 0. 56

What is the Initial State Probability? Entry into the model is represented as a transition from an initial dummy state, with scores T_01, T_02, T_03 for entering states 1, 2 and 3.

Modified segmental K-means, AKA Viterbi training - initialization. Initializing state parameters: segment all training instances uniformly, and learn means and variances from that segmentation. Initializing T_0j scores: count the number of permitted initial states; let this number be M_0; set all permitted initial states to be equiprobable, P_j = 1/M_0, so T_0j = -log(P_j) = log(M_0). Initializing T_ij scores: for every state i, count the number of states that are permitted to follow it, i.e. the number of arcs out of the state in the specification; let this number be M_i; set all permitted transitions to be equiprobable, P_ij = 1/M_i, and initialize T_ij = -log(P_ij) = log(M_i). This is only one technique for initialization; you may choose to initialize parameters differently, e.g. with random values.

Modified segmental K-means, AKA Viterbi training - the entire segmental K-means algorithm: 1. Initialize all parameters (state means and covariances, transition scores, entry transition scores). 2. Segment all training sequences. 3. Re-estimate parameters from the segmented training sequences. 4. If not converged, return to step 2.
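Putting the pieces together, a sketch of this four-step loop; the aligner align_to_model (the DTW/Viterbi segmentation of a sequence against the current model) and the helpers segment_means, diag_variances and transition_scores from the earlier sketches are assumed to be available rather than defined in the slides.

```python
import numpy as np

def uniform_segmentation(length, num_states):
    """Assign frame i to state floor(i * num_states / length)."""
    return (np.arange(length) * num_states) // length

def segmental_kmeans(sequences, num_states, align_to_model, num_iters=20, tol=1e-3):
    """Segmental K-means / Viterbi training, following the four steps on the slide.

    align_to_model(x, means, variances, T) is assumed to return
    (state_sequence, alignment_cost): the best alignment of sequence x
    against the current model.
    """
    # 1. Initialize all parameters from a uniform segmentation
    segs = [uniform_segmentation(len(x), num_states) for x in sequences]
    prev_total = np.inf
    for _ in range(num_iters):
        # 3. (Re-)estimate parameters from the current segmentations
        means = segment_means(sequences, segs, num_states)
        variances = diag_variances(sequences, segs, means, num_states)
        P, T = transition_scores(segs, num_states)
        # 2. Re-segment all training sequences with the current model
        results = [align_to_model(x, means, variances, T) for x in sequences]
        segs = [seg for seg, _ in results]
        total = float(sum(cost for _, cost in results))
        # 4. Stop when the total best-alignment cost no longer changes significantly
        if abs(prev_total - total) < tol * max(abs(total), 1.0):
            break
        prev_total = total
    return means, variances, P, T
```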

Alignment for training a model from multiple vector sequences: initialize, then iterate. The procedure is continued until convergence, i.e. until the total best-alignment error for all training sequences does not change significantly with further refinement of the model.

The resulting model structure is also known as an HMM! 61

DTW and Hidden Markov Models (HMMs): this structure (with self-transitions T_11, T_22, T_33, forward transitions T_12, T_23, and a skip transition T_13) is a generic representation of a statistical model for processes that generate time series. The segments in the time series are referred to as states; the process passes through these states to generate the time series. The entire structure may be viewed as a generalization of the DTW models we have discussed thus far. In this example it has a strict left-to-right topology, commonly used for speech recognition. 62

DTW -- Reversing the Sense of Cost: use a score instead of a cost, i.e. the same function but with the sign changed. E.g., at a node the score could be the negative squared Euclidean distance, -Σ_i (x_i - y_i)^2. Other terms are possible: remember the Gaussian. 63

DTW with costs: node and edge costs are defined over the trellis; each node has a cost and each edge has a cost. Find the minimum-cost path through the trellis. 64

DTW: finding the minimum-cost path. P_{i,j} = best path cost from the origin to node [i,j]; C_{i,j} = local node cost of aligning template frame i to input frame j; T_{i,j,k,l} = edge cost from node (i,j) to node (k,l). Then P_{i,j} = min(P_{i,j-1} + T_{i,j-1,i,j}, P_{i-1,j-1} + T_{i-1,j-1,i,j}, P_{i-2,j-1} + T_{i-2,j-1,i,j}) + C_{i,j}, minimizing the total path cost. 65

DTW with scores: node and edge scores are defined over the trellis; each node has a score and each edge has a score. Find the maximum-score path through the trellis. 66

DTW: finding the maximum-score path. P_{i,j} = best path score from the origin to node [i,j]; C_{i,j} = local node score of aligning template frame i to input frame j; T_{i,j,k,l} = edge score from node (i,j) to node (k,l). Then P_{i,j} = max(P_{i,j-1} + T_{i,j-1,i,j}, P_{i-1,j-1} + T_{i-1,j-1,i,j}, P_{i-2,j-1} + T_{i-2,j-1,i,j}) + C_{i,j}, maximizing the total path score. 67

Probabilities for Scores: HMM inference is equivalent to DTW modified to use a probabilistic function for the local node or edge scores in the trellis. Edges have transition probabilities; nodes have output or observation probabilities, which provide the probability of the observed input (the output probability may be a Gaussian). The goal is to find the template with the highest probability of matching the input. Probability values used as scores are also called likelihoods. 68

Log Likelihoods: the problem with probabilities is that the probabilities of unrelated variables multiply, P(X,Y) = P(X)*P(Y), so probabilities multiply along a path: score of a path = Product_over_nodes(score of node) * Product_over_edges(score of edge). Instead, use log probabilities as scores: scores then add as in DTW, with max instead of min. One may also use negative log probabilities, in which case costs add as in DTW. 69

HMM = DTW with scores: node and edge scores are log probabilities, and we find the maximum-score path through the trellis. The edge score for the transition (i, t) -> (j, t+1) is log T_ij (the log transition probability), and the node score at state j and time t is log P(x_t | j). 70
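A sketch of this trellis search in the log domain; log_output (e.g. a Gaussian log density per state) and the array layout are assumptions for illustration.

```python
import numpy as np

def viterbi_score(data, log_trans, log_init, log_output):
    """Maximum-score path through the HMM trellis (log-probability DTW).

    data:       (N, D) array of input frames
    log_trans:  (S, S) array, log_trans[i, j] = log P(next state j | current state i)
    log_init:   (S,)   array, log probability of starting in each state
    log_output: function (frame, state) -> log P(frame | state)
    """
    N, S = len(data), len(log_init)
    score = log_init + np.array([log_output(data[0], j) for j in range(S)])
    for t in range(1, N):
        # best predecessor for each state j, plus the node score at time t
        score = np.array([
            np.max(score + log_trans[:, j]) + log_output(data[t], j)
            for j in range(S)
        ])
    return float(np.max(score))    # score of the best path; compare across word models
```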

Hidden Markov Models: a Hidden Markov Model consists of two components: a state/transition backbone that specifies how many states there are and how they can follow one another, and a set of probability distributions, one for each state. This can be factored into two separate probabilistic entities: a probabilistic Markov chain with states and transitions, and a set of data probability distributions associated with the states. 71

Basic Structural Questions for an HMM: What is the structure like - how many states, and what transitions are allowed (the Markov chain)? What are the probability distributions associated with the states (the data distributions)? 72

Determining the Number of States: what is the correct number of states for any word? We do not know, really. Ideally there should be at least one state for each basic sound within the word; otherwise widely differing sounds may be collapsed into one state. For efficiency, the number of states should be the minimum needed to achieve the desired level of recognition accuracy. These two are conflicting requirements, usually resolved by making some educated guesses. 73

Determining the Number of States: for small vocabularies, it is possible to examine each word in detail and arrive at reasonable numbers (e.g. S-O-ME-TH-I-NG). For larger vocabularies, we may be forced to rely on some ad hoc principles, e.g. a number of states proportional to the number of letters in the word. This works better for some languages than others: Spanish, Japanese (Katakana/Hiragana), Indian languages, etc. 74

What about the transition structure? Speech is a left-to-right process, with a deterministic structure to any word: a prespecified set of sounds in a prespecified order, although some sounds may occasionally be skipped. This suggests a left-to-right Markov chain with skips, known as the Bakis topology. 75
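For illustration only, a Bakis-topology transition matrix could be built as below; the particular self-loop, next-state and skip probabilities are placeholders, not values from the slides.

```python
import numpy as np

def bakis_transitions(num_states, p_stay=0.6, p_next=0.3, p_skip=0.1):
    """Left-to-right (Bakis) transition matrix: self-loop, next state, skip one state."""
    A = np.zeros((num_states, num_states))
    for i in range(num_states):
        A[i, i] = p_stay
        if i + 1 < num_states:
            A[i, i + 1] = p_next
        if i + 2 < num_states:
            A[i, i + 2] = p_skip
        A[i] /= A[i].sum()          # renormalize rows near the final state
    return A
```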

The Transition Probabilities? Transition probabilities are just the probabilities of making transitions. As before, they can be obtained by simple counting: P_ij = Σ_k N_{k,i,j} / Σ_k N_{k,i}, where N_{k,i} is the number of vectors in the i-th segment (state) of the k-th training sequence, and N_{k,i,j} is the number of vectors in the i-th segment (state) of the k-th training sequence that were followed by vectors from the j-th segment (state). 76

Basic Structural Questions for an HMM: What is the structure like - how many states, and what transitions are allowed (the Markov chain)? What are the probability distributions associated with the states (the data distributions)? 77

State Output Distributions: the state output distribution for any state is the distribution of all vectors associated with that state - all vectors validly associated with the state, from all potential instances of the word, not just the training (template) recordings. We have implicitly assumed this to be Gaussian so far.

The State Output Distribution: the state output distribution is a probability distribution associated with each HMM state. The negative log of the probability of any vector under this distribution is the node cost in DTW. The state output probability distribution could be any distribution at all; so far we have assumed it to be Gaussian with diagonal covariance: P(x | j) = Π_l (1 / sqrt(2π s_{j,l}^2)) exp(-(x_l - m_{j,l})^2 / (2 s_{j,l}^2)), so that d_j(x) = -log P(x | j) = 0.5 Σ_l log(2π s_{j,l}^2) + 0.5 Σ_l (x_l - m_{j,l})^2 / s_{j,l}^2 is the node cost for DTW (note the change in notation). More generically, we can assume it to be a mixture of Gaussians (more on this later): P(x | j) = Σ_k w_{j,k} Π_l (1 / sqrt(2π s_{j,k,l}^2)) exp(-(x_l - m_{j,k,l})^2 / (2 s_{j,k,l}^2)), with d_j(x) = -log P(x | j). 79
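A sketch of the mixture-of-Gaussians state output score with diagonal covariances, using the log-sum-exp trick for numerical stability; the per-state weights, means and variances are illustrative inputs.

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """log P(x | j) for a diagonal-covariance Gaussian mixture state.

    weights:   (K,)   mixture weights w_{j,k}, summing to 1
    means:     (K, D) component means m_{j,k}
    variances: (K, D) per-dimension variances s^2_{j,k,l}
    """
    log_comp = (np.log(weights)
                - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
                - 0.5 * np.sum((x - means) ** 2 / variances, axis=1))
    m = np.max(log_comp)
    return float(m + np.log(np.sum(np.exp(log_comp - m))))   # log-sum-exp over components
```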

The complete package: we have seen how to compute all probability terms for this structure - P_ij (or log P_ij) and P(x | j) (or log P(x | j)) - which is sufficient information to compute all terms in a trellis. 80

We continue in the next class.. 81