Statistical Machine Learning from Data
1 Statistical Machine Learning from Data. Samy Bengio, IDIAP Research Institute, Martigny, Switzerland, and Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland. December 2, 2005.
3 Outline: Markov Models (Graphical View, Training).
4 Markov Models. Stochastic process over a temporal sequence: the probability distribution of the variable q at time t depends on the variables q at times 1 to t-1:
P(q_1, q_2, \dots, q_T) = P(q_1) \prod_{t=2}^{T} P(q_t \mid q_1, \dots, q_{t-1})
First-order Markov process: P(q_t \mid q_1, \dots, q_{t-1}) = P(q_t \mid q_{t-1}).
A Markov model is a model of a Markovian process with discrete states.
5 Markov Models (Graphical View). [Figure: a Markov model with states 1, 2, 3, and the same model unfolded in time: q(t-2), q(t-1), q(t), q(t+1).]
6 Training Markov Models. A Markov model is fully specified by its transition probabilities P(q_t = i \mid q_{t-1} = j) for all i, j. Given a training set of sequences X, training means re-estimating these probabilities. Simply counting yields the maximum likelihood solution:
P(q_t = i \mid q_{t-1} = j) = #(q_t = i and q_{t-1} = j \in X) / #(q_{t-1} = j \in X)
Example: model the weather observed today assuming it depends only on the weather of the previous day.
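As a concrete illustration, here is a minimal sketch of this counting estimator in Python; the weather example, the state encoding, and the helper name estimate_transitions are illustrative choices, not from the slides.

```python
# A minimal sketch of maximum-likelihood training of a Markov model by
# counting transitions; state names and sequences are illustrative.
import numpy as np

def estimate_transitions(sequences, n_states):
    """Estimate P(q_t = i | q_{t-1} = j) from observed state sequences."""
    counts = np.zeros((n_states, n_states))   # counts[j, i] = #(j -> i)
    for seq in sequences:
        for prev, curr in zip(seq[:-1], seq[1:]):
            counts[prev, curr] += 1
    # Normalize each row so that sum_i P(i | j) = 1.
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(row_sums, 1)   # avoid division by zero

# Weather example: 0 = sunny, 1 = rainy, observed over several days.
weather = [[0, 0, 1, 1, 0], [1, 1, 0, 0, 0]]
print(estimate_transitions(weather, n_states=2))
```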
8 Hidden Markov Models (Graphical View). [Figure: a hidden Markov model, and the same model unfolded in time: observations y(t-2), y(t-1), y(t), y(t+1), each depending on the corresponding hidden state q(t-2), q(t-1), q(t), q(t+1).]
9 Elements of an HMM. A Hidden Markov Model is a Markov model whose state is not observed, but of which one can observe a manifestation (a variable x_t that depends only on q_t). It consists of: a finite number of states N; transition probabilities between states, which depend only on the previous state: P(q_t = i \mid q_{t-1} = j, θ); emission probabilities, which depend only on the current state: p(x_t \mid q_t = i, θ) (where x_t is observed); and initial state probabilities: P(q_0 = i \mid θ). Each of these three sets of probabilities has parameters θ to estimate.
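As a reference point for the algorithms that follow, here is a minimal sketch of how these three parameter sets can be held as arrays for a discrete-emission HMM; the names pi, A, B and the uniform values are illustrative assumptions.

```python
# A minimal sketch of the three parameter sets of a discrete-emission HMM.
import numpy as np

N, K = 3, 4                    # N states, K discrete emission symbols
pi = np.full(N, 1.0 / N)       # initial state probabilities P(q_0 = i)
A = np.full((N, N), 1.0 / N)   # transitions A[j, i] = P(q_t = i | q_{t-1} = j)
B = np.full((N, K), 1.0 / K)   # emissions B[i, k] = p(x_t = k | q_t = i)

# pi and each row of A and B must sum to one.
assert np.allclose(pi.sum(), 1)
assert np.allclose(A.sum(axis=1), 1) and np.allclose(B.sum(axis=1), 1)
```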
10 The 3 Problems of HMMs. The HMM model gives rise to three different problems:
Likelihood: given an HMM parameterized by θ, compute the likelihood of a sequence X = x_1^T = {x_1, x_2, \dots, x_T}: p(x_1^T \mid θ).
Training: given an HMM parameterized by θ and a set of sequences D_n, select the parameters θ* such that θ* = \arg\max_θ \prod_{p=1}^{n} p(X^{(p)} \mid θ).
Decoding: given an HMM parameterized by θ, compute the optimal path Q* through the state space given a sequence X: Q* = \arg\max_Q p(X, Q \mid θ).
11 HMMs as Generative Processes. HMMs can be used to generate sequences. Let us define a set of starting states with initial probabilities P(q_0 = i), and a set of final states. Then, for each sequence to generate (see the sketch below):
1. Select an initial state j according to P(q_0).
2. Select the next state i according to P(q_t = i \mid q_{t-1} = j).
3. Emit an output according to the emission distribution p(x_t \mid q_t = i).
4. If i is a final state, stop; otherwise loop to step 2.
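A minimal sketch of this generative procedure for a discrete-emission HMM; the parameter values are made up, and final states are replaced by a fixed length T for simplicity.

```python
# A minimal sketch of sampling from an HMM; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.6, 0.4])                  # P(q_0 = i)
A = np.array([[0.7, 0.3], [0.2, 0.8]])     # A[j, i] = P(q_t = i | q_{t-1} = j)
B = np.array([[0.9, 0.1], [0.3, 0.7]])     # B[i, k] = p(x_t = k | q_t = i)

def sample_sequence(T):
    q = rng.choice(len(pi), p=pi)          # 1. select an initial state
    states, outputs = [], []
    for _ in range(T):
        q = rng.choice(len(pi), p=A[q])    # 2. select the next state
        x = rng.choice(B.shape[1], p=B[q]) # 3. emit from the emission distribution
        states.append(int(q))
        outputs.append(int(x))
    return states, outputs                 # 4. stop after T steps (no final states here)

print(sample_sequence(5))
```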
12 Markovian Assumptions. Emissions: the probability of emitting x_t at time t in state q_t = i does not depend on anything else:
p(x_t \mid q_t = i, q_1^{t-1}, x_1^{t-1}) = p(x_t \mid q_t = i)
Transitions: the probability of going from state j to state i at time t does not depend on anything else:
P(q_t = i \mid q_{t-1} = j, q_1^{t-2}, x_1^{t-1}) = P(q_t = i \mid q_{t-1} = j)
Moreover, this probability does not depend on the time t: P(q_t = i \mid q_{t-1} = j) is the same for all t. We say that such Markov models are homogeneous.
13 Derivation of the Forward Variable α. α(i, t) is the probability of having generated the sequence x_1^t and being in state i at time t:
α(i, t) := p(x_1^t, q_t = i)
= p(x_t \mid x_1^{t-1}, q_t = i) p(x_1^{t-1}, q_t = i)
= p(x_t \mid q_t = i) \sum_j p(x_1^{t-1}, q_t = i, q_{t-1} = j)
= p(x_t \mid q_t = i) \sum_j P(q_t = i \mid x_1^{t-1}, q_{t-1} = j) p(x_1^{t-1}, q_{t-1} = j)
= p(x_t \mid q_t = i) \sum_j P(q_t = i \mid q_{t-1} = j) p(x_1^{t-1}, q_{t-1} = j)
= p(x_t \mid q_t = i) \sum_j P(q_t = i \mid q_{t-1} = j) α(j, t-1)
14 From α to the Likelihood. Reminder: α(i, t) := p(x_1^t, q_t = i). Initial condition: α(i, 0) = P(q_0 = i), the prior probability of each state i. We then compute α(i, t) for each state i and each time step t of a given sequence x_1^T. Afterward, the likelihood follows as
p(x_1^T) = \sum_i p(x_1^T, q_T = i) = \sum_i α(i, T)
Hence, computing the likelihood p(x_1^T) requires O(N^2 T) operations, where N is the number of states.
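A minimal sketch of this forward recursion for a discrete-emission HMM, with illustrative parameters; each step is one matrix-vector product, which gives the O(N^2 T) cost noted above.

```python
# A minimal sketch of the forward recursion; parameters are illustrative.
import numpy as np

pi = np.array([0.6, 0.4])                  # P(q_0 = i)
A = np.array([[0.7, 0.3], [0.2, 0.8]])     # A[j, i] = P(q_t = i | q_{t-1} = j)
B = np.array([[0.9, 0.1], [0.3, 0.7]])     # B[i, k] = p(x_t = k | q_t = i)
x = [0, 0, 1, 0]                           # observed sequence x_1..x_T

def forward(x):
    alpha = pi.copy()                      # alpha(i, 0) = P(q_0 = i)
    for x_t in x:
        # alpha(i, t) = p(x_t | q_t=i) * sum_j P(q_t=i | q_{t-1}=j) alpha(j, t-1)
        alpha = B[:, x_t] * (alpha @ A)
    return alpha                           # alpha(i, T) for every state i

print("likelihood p(x_1^T) =", forward(x).sum())
```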
15 EM Training for HMMs. For HMMs, the hidden variable Q describes in which state the HMM was for each observation x_t of a sequence X. The joint likelihood of all sequences X^{(l)} and the hidden variable Q is then
p(X, Q \mid θ) = \prod_{l=1}^{n} p(X^{(l)}, Q \mid θ)
Let us introduce the following indicator variable: q_{i,t} = 1 if q_t = i, and 0 otherwise.
16 Joint Likelihood. Using the indicator variables q to instantiate Q:
p(X, Q \mid θ) = \prod_{l=1}^{n} \left[ \prod_{i=1}^{N} P(q_0 = i)^{q_{i,0}} \prod_{t=1}^{T_l} \prod_{i=1}^{N} p(x_t^{(l)} \mid q_t = i)^{q_{i,t}} \prod_{j=1}^{N} P(q_t = i \mid q_{t-1} = j)^{q_{i,t} q_{j,t-1}} \right]
17 Joint Log Likelihood.
\log p(X, Q \mid θ) = \sum_{l=1}^{n} \sum_{i=1}^{N} q_{i,0} \log P(q_0 = i)
+ \sum_{l=1}^{n} \sum_{t=1}^{T_l} \sum_{i=1}^{N} q_{i,t} \log p(x_t^{(l)} \mid q_t = i)
+ \sum_{l=1}^{n} \sum_{t=1}^{T_l} \sum_{i=1}^{N} \sum_{j=1}^{N} q_{i,t} q_{j,t-1} \log P(q_t = i \mid q_{t-1} = j)
18 Auxiliary Function. The corresponding auxiliary function is
A(θ, θ^s) = E_Q[\log p(X, Q \mid θ) \mid X, θ^s]
= \sum_{l=1}^{n} \sum_{i=1}^{N} E_Q[q_{i,0} \mid X, θ^s] \log P(q_0 = i)
+ \sum_{l=1}^{n} \sum_{t=1}^{T_l} \sum_{i=1}^{N} E_Q[q_{i,t} \mid X, θ^s] \log p(x_t^{(l)} \mid q_t = i)
+ \sum_{l=1}^{n} \sum_{t=1}^{T_l} \sum_{i=1}^{N} \sum_{j=1}^{N} E_Q[q_{i,t} q_{j,t-1} \mid X, θ^s] \log P(q_t = i \mid q_{t-1} = j)
From now on, we drop the index l for simplicity.
19 Derivation of the Backward Variable β. β(i, t) is the probability of generating the rest of the sequence x_{t+1}^T given that we are in state i at time t:
β(i, t) := p(x_{t+1}^T \mid q_t = i)
= \sum_j p(x_{t+1}^T, q_{t+1} = j \mid q_t = i)
= \sum_j p(x_{t+1} \mid x_{t+2}^T, q_{t+1} = j, q_t = i) p(x_{t+2}^T, q_{t+1} = j \mid q_t = i)
= \sum_j p(x_{t+1} \mid q_{t+1} = j) p(x_{t+2}^T \mid q_{t+1} = j, q_t = i) P(q_{t+1} = j \mid q_t = i)
= \sum_j p(x_{t+1} \mid q_{t+1} = j) p(x_{t+2}^T \mid q_{t+1} = j) P(q_{t+1} = j \mid q_t = i)
= \sum_j p(x_{t+1} \mid q_{t+1} = j) β(j, t+1) P(q_{t+1} = j \mid q_t = i)
20 Final Details About β. Reminder: β(i, t) = p(x_{t+1}^T \mid q_t = i). Final condition:
β(i, T) = 1 if i is a final state, and 0 otherwise.
Hence, computing all the β variables requires O(N^2 T) operations, where N is the number of states.
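A minimal sketch of the matching backward recursion; the parameters are illustrative and, for simplicity, every state is treated as final, so that β(i, T) = 1 for all i.

```python
# A minimal sketch of the backward recursion, matching the forward example.
import numpy as np

A = np.array([[0.7, 0.3], [0.2, 0.8]])     # A[j, i] = P(q_t = i | q_{t-1} = j)
B = np.array([[0.9, 0.1], [0.3, 0.7]])     # B[i, k] = p(x_t = k | q_t = i)
x = [0, 0, 1, 0]

def backward(x):
    T, N = len(x), A.shape[0]
    beta = np.zeros((T + 1, N))
    beta[T] = 1.0                          # beta(i, T) = 1 (all states final here)
    for t in range(T - 1, -1, -1):
        # beta(i, t) = sum_j p(x_{t+1} | q_{t+1}=j) beta(j, t+1) P(q_{t+1}=j | q_t=i)
        beta[t] = A @ (B[:, x[t]] * beta[t + 1])
    return beta

print(backward(x)[0])                      # beta(i, 0) for every state i
```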
21 E-Step for HMMs. Posterior on emission distributions:
E_Q[q_{i,t} \mid X, θ^s] = P(q_t = i \mid x_1^T, θ^s)
= p(x_1^T, q_t = i) / p(x_1^T)
= p(x_{t+1}^T \mid q_t = i, x_1^t) p(x_1^t, q_t = i) / p(x_1^T)
= p(x_{t+1}^T \mid q_t = i) p(x_1^t, q_t = i) / p(x_1^T)
= α(i, t) β(i, t) / \sum_j α(j, T)
22 E-Step for HMMs. Posterior on transition distributions:
E_Q[q_{i,t} q_{j,t-1} \mid X, θ^s] = P(q_t = i, q_{t-1} = j \mid x_1^T, θ^s)
= p(x_1^T, q_t = i, q_{t-1} = j) / p(x_1^T)
= β(i, t) P(q_t = i \mid q_{t-1} = j) p(x_t \mid q_t = i) α(j, t-1) / \sum_j α(j, T)
23 E-Step for HMMs. Posterior on the initial state distribution:
E_Q[q_{i,0} \mid X, θ^s] = P(q_0 = i \mid x_1^T, θ^s)
= p(x_1^T, q_0 = i) / p(x_1^T)
= p(x_1^T \mid q_0 = i) P(q_0 = i) / p(x_1^T)
= β(i, 0) P(q_0 = i) / \sum_j α(j, T)
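A minimal sketch of these E-step posteriors (commonly written γ and ξ), assuming the full α and β tables of shape (T+1, N) have been stored from the recursions above; all names are illustrative.

```python
# A minimal sketch of the E-step posteriors computed from alpha and beta.
# Assumes alpha[t] = alpha(., t) and beta[t] = beta(., t) for t = 0..T.
import numpy as np

def e_step(alpha, beta, A, B, x):
    T = len(x)
    lik = alpha[T].sum()                   # p(x_1^T) = sum_j alpha(j, T)
    gamma = alpha * beta / lik             # gamma[t, i] = P(q_t = i | x_1^T)
    xi = np.zeros((T, A.shape[0], A.shape[0]))
    for t in range(1, T + 1):
        # xi[t-1, j, i] = P(q_t = i, q_{t-1} = j | x_1^T)
        #              = alpha(j, t-1) A[j, i] p(x_t | q_t=i) beta(i, t) / p(x_1^T)
        xi[t - 1] = (alpha[t - 1][:, None] * A * B[:, x[t - 1]] * beta[t]) / lik
    return gamma, xi
```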
24 M-Step for HMMs. Find the parameters θ that maximize A, hence solve ∂A/∂θ = 0. When transition distributions are represented as tables, using a Lagrange multiplier we obtain:
P(q_t = i \mid q_{t-1} = j) = \sum_{t=1}^{T} P(q_t = i, q_{t-1} = j \mid x_1^T, θ^s) / \sum_{t=1}^{T} P(q_{t-1} = j \mid x_1^T, θ^s)
When emission distributions are implemented as GMMs, use the re-estimation equations already given, weighted by the posterior on emissions P(q_t = i \mid x_1^T, θ^s).
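A minimal sketch of this transition re-estimation from the γ and ξ posteriors of the previous sketch; the sum over training sequences is omitted for brevity, and the names are illustrative.

```python
# A minimal sketch of the M-step for table-based transition probabilities.
import numpy as np

def m_step_transitions(gamma, xi):
    # Numerator: sum_t P(q_t = i, q_{t-1} = j | x_1^T, theta^s)
    num = xi.sum(axis=0)                   # shape (N, N), indexed [j, i]
    # Denominator: sum_t P(q_{t-1} = j | x_1^T, theta^s)
    den = gamma[:-1].sum(axis=0)           # gamma(., t) for t = 0..T-1
    return num / den[:, None]              # new A[j, i] = P(q_t = i | q_{t-1} = j)
```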
25 The Most Likely Path (Graphical View). The Viterbi algorithm finds the best state sequence: compute the partial paths, then backtrack in time. [Figure: two trellis plots over states q1, q2, q3 versus time, showing partial-path computation and backtracking.]
26 The Viterbi Algorithm for HMMs. The Viterbi algorithm finds the best state sequence:
V(i, t) := \max_{q_1^{t-1}} p(x_1^t, q_1^{t-1}, q_t = i)
= \max_{q_1^{t-1}} p(x_t \mid x_1^{t-1}, q_1^{t-1}, q_t = i) p(x_1^{t-1}, q_1^{t-1}, q_t = i)
= p(x_t \mid q_t = i) \max_j \max_{q_1^{t-2}} p(x_1^{t-1}, q_1^{t-2}, q_{t-1} = j, q_t = i)
= p(x_t \mid q_t = i) \max_j \max_{q_1^{t-2}} P(q_t = i \mid q_{t-1} = j) p(x_1^{t-1}, q_1^{t-2}, q_{t-1} = j)
= p(x_t \mid q_t = i) \max_j P(q_t = i \mid q_{t-1} = j) \max_{q_1^{t-2}} p(x_1^{t-1}, q_1^{t-2}, q_{t-1} = j)
= p(x_t \mid q_t = i) \max_j P(q_t = i \mid q_{t-1} = j) V(j, t-1)
27 From Viterbi to the State Sequence. Reminder: V(i, t) = \max_{q_1^{t-1}} p(x_1^t, q_1^{t-1}, q_t = i). We compute V(i, t) for each state i and each time step t of a given sequence x_1^T, and for each V(i, t) we also keep the associated argmax previous state j. Then, starting from the state i* = \arg\max_j V(j, T), we backtrack to decode the most probable state sequence. Computing all the V(i, t) variables requires O(N^2 T) operations, where N is the number of states.
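A minimal sketch of Viterbi decoding with backtracking, done in the log domain as recommended later in these slides; the parameters and the observation sequence are illustrative.

```python
# A minimal sketch of log-domain Viterbi decoding with backtracking.
import numpy as np

pi = np.array([0.6, 0.4])                  # P(q_0 = i)
A = np.array([[0.7, 0.3], [0.2, 0.8]])     # A[j, i] = P(q_t = i | q_{t-1} = j)
B = np.array([[0.9, 0.1], [0.3, 0.7]])     # B[i, k] = p(x_t = k | q_t = i)
x = [0, 0, 1, 0]

def viterbi(x):
    T, N = len(x), len(pi)
    V = np.log(pi)                         # log V(i, 0)
    back = np.zeros((T, N), dtype=int)     # argmax previous state for each V(i, t)
    for t, x_t in enumerate(x):
        scores = V[:, None] + np.log(A)    # scores[j, i] = log V(j, t-1) + log A[j, i]
        back[t] = scores.argmax(axis=0)
        V = np.log(B[:, x_t]) + scores.max(axis=0)
    path = [int(V.argmax())]               # best final state i* = argmax_j V(j, T)
    for t in range(T - 1, 0, -1):          # backtrack through the stored argmaxes
        path.append(int(back[t, path[-1]]))
    return path[::-1]                      # most probable states q_1..q_T

print(viterbi(x))
```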
28 Applications of HMMs. Classifying sequences such as: DNA sequences (which family), gesture sequences, video sequences, phoneme sequences, etc. Decoding sequences such as: continuous speech recognition, handwriting recognition, sequences of events (meetings, surveillance, games, etc.).
29 Outline: Speech Recognition (Embedded Training, Word Error Rates, Discriminant Approach).
30 Application: continuous speech recognition. Find a sequence of phonemes (or words) given an acoustic sequence. Idea: use a phoneme model.
31 Embedded Training of HMMs. For each acoustic sequence in the training set, create a new HMM as the concatenation of the HMMs representing the underlying sequence of phonemes, then maximize the likelihood of the training sentences. [Figure: concatenated phoneme HMMs for the word CAT: C, A, T.]
32 HMMs: Decoding a Sentence. Decide what the accepted vocabulary is. Optionally add a language model: P(word sequence). An efficient algorithm finds the optimal path in the decoding HMM. [Figure: decoding HMM built from word models such as DOG (D, O, G) and CAT (C, A, T).]
33 Measuring Error. How do we measure the quality of a speech recognizer? Problem: the target solution is a sentence and the obtained solution is also a sentence, but they might have different lengths! Proposed solution: the edit distance: assuming we have access to the operators insert, delete, and substitute, what is the smallest number of such operations needed to go from the obtained sentence to the desired one? An efficient algorithm exists to compute it (see the sketch below). At the end, we measure the error as
WER = (#ins + #del + #subst) / #words
Note that the word error rate (WER) can be greater than 1.
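A minimal sketch of the edit-distance dynamic program and the resulting WER; the example sentences are made up.

```python
# A minimal sketch of edit distance between reference and decoded sentences.
def word_error_rate(reference, hypothesis):
    R, H = len(reference), len(hypothesis)
    # d[i][j] = edit distance between reference[:i] and hypothesis[:j]
    d = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        d[i][0] = i                            # i deletions
    for j in range(H + 1):
        d[0][j] = j                            # j insertions
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            sub = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,     # deletion
                          d[i][j - 1] + 1,     # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[R][H] / R                         # (#ins + #del + #subst) / #words

print(word_error_rate("the cat sat".split(), "the a cat sit".split()))
```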
34 Maximum Mutual Information. Using the maximum likelihood criterion for a classification task can sometimes be worse than using a discriminative approach. What about changing the criterion to be more discriminative? Maximum Mutual Information (MMI) between the word sequence W and the acoustic sequence A:
I(A, W) = \log \frac{P(A, W)}{P(A) P(W)}
= \log P(A \mid W) P(W) - \log P(A) - \log P(W)
= \log P(A \mid W) - \log P(A)
= \log P(A \mid W) - \log \sum_w P(A \mid w) P(w)
Train by gradient ascent on ∂I(A, W)/∂θ.
35 Outline: Practical Issues (Various Tricks, Imbalance between Transitions and Emissions).
36 Various Tricks. Capacity is tuned by the following hyper-parameters: the number of states (or values the hidden variable can take); the non-zero transitions (fully connected, left-to-right, etc.); the capacity of the underlying emission models; and the number of training iterations. Initialization: if the training set is aligned, use this information; otherwise, use uniform probabilities for transitions and K-Means for GMM-based emissions. Computational constraint: work in the logarithmic domain (see the sketch below)!
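A minimal sketch of why the logarithmic domain matters: products of many small probabilities underflow, while log-probabilities add safely, and the sums of the forward recursion can be handled with the log-sum-exp trick. The values below are illustrative.

```python
# A minimal sketch of log-domain computation for HMM recursions.
import numpy as np

p = np.full(1000, 1e-3)               # 1000 emission probabilities of 1e-3
print(np.prod(p))                     # 0.0: underflows in the linear domain
print(np.log(p).sum())                # fine in the log domain

def log_sum_exp(v):
    """log(sum_i exp(v_i)), computed stably for the forward recursion."""
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

print(log_sum_exp(np.array([-1000.0, -1001.0])))  # ~ -999.69, no underflow
```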
37 Imbalance between Transitions and Emissions. A problem often seen in speech recognition. Decoding with Viterbi:
V(i, t) = p(x_t \mid q_t = i) \max_j P(q_t = i \mid q_{t-1} = j) V(j, t-1)
Emissions are represented by GMMs, whose densities depend on the number of dimensions of x_t. Comparing the variances of the log distributions during decoding, with practical estimates on the Numbers 95 database (39 dimensions): the variance of \log P(q_t \mid q_{t-1}) is 9.8, while the variance of \log p(x_t \mid q_t) is much larger, so the emission scores dominate.