CSCI-567: Machine Learning (Spring 2019). Markov Chains and Hidden Markov Models.
We Live in Exciting Times

ACM (an international computing research society) has named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. The ACM Turing Award, often referred to as the Nobel Prize of Computing, carries a $1 million prize. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing.

CSCI-567: Machine Learning (Spring 2019), Prof. Victor Adamchik, University of Southern California, Apr. 2, 2019

Outline
1 Markov chain
2 Hidden Markov Model
Markov Models

Markov models are powerful probabilistic models for analyzing sequential data. A. A. Markov introduced Markov chains in 1906, when he produced the first theoretical results for stochastic processes. They are now commonly used in text and speech recognition, stock market prediction, and bioinformatics.

Markov chain

A Markov chain can be pictured as a directed, strongly connected graph with self-loops. Each edge is labeled by a positive probability, and at each state the probabilities on the outgoing edges sum up to 1. Transition (or stochastic) matrix: $A = [a_{ij}]$, where $a_{ij} = P(i \to j \text{ in one step})$.

Definition

Given sequentially ordered random variables $X_1, X_2, \dots, X_t, \dots, X_T$, called states, a Markov chain is specified by
- the transition probability, describing how the state at time $t-1$ changes to the state at time $t$: $P(X_t = \text{value} \mid X_{t-1} = \text{value})$;
- the initial probability, describing the initial state at time $t = 1$: $P(X_1 = \text{value})$.

All $X_t$'s take values from the same discrete set $\{1, \dots, N\}$. We will assume that the transition probability does not change with time $t$, i.e., a stationary Markov chain.

The transition probabilities form a table/matrix $A$ whose elements are $a_{ij} = P(X_t = j \mid X_{t-1} = i)$, and the initial probabilities form a vector $\pi$ whose elements are $\pi_i = P(X_1 = i)$, where $i$ and $j$ index over $1$ to $N$. We have the constraints
$$\sum_j a_{ij} = 1, \qquad \sum_i \pi_i = 1,$$
and additionally all of these numbers must be non-negative.
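To make the definition concrete, here is a minimal sketch that builds a stationary Markov chain over $N = 3$ states, checks the constraints on $\pi$ and $A$, and samples a short sequence. The numeric values are illustrative assumptions, not values from the slides.

```python
import numpy as np

# Hypothetical parameters for a 3-state chain (illustrative values only).
pi = np.array([0.5, 0.3, 0.2])            # initial distribution, sums to 1
A = np.array([[0.7, 0.2, 0.1],            # a_ij = P(X_t = j | X_{t-1} = i)
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])           # each row sums to 1

assert np.isclose(pi.sum(), 1.0)
assert np.allclose(A.sum(axis=1), 1.0)

def sample_chain(pi, A, T, rng=np.random.default_rng(0)):
    """Sample X_1, ..., X_T from a stationary Markov chain."""
    x = [rng.choice(len(pi), p=pi)]                   # X_1 ~ pi
    for _ in range(T - 1):
        x.append(rng.choice(A.shape[1], p=A[x[-1]]))  # X_t ~ A[X_{t-1}, :]
    return x

print(sample_chain(pi, A, T=10))
```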
Examples

Example 1 (Language model): The states $[N]$ represent a dictionary of words; $a_{\text{ice},\text{cream}} = P(X_{t+1} = \text{cream} \mid X_t = \text{ice})$ is an example of a transition probability.

Example 2 (Weather): The states $[N]$ represent the weather on each day; $a_{\text{sunny},\text{rainy}} = P(X_{t+1} = \text{rainy} \mid X_t = \text{sunny})$.

Definition

A Markov chain is a stochastic process with the Markov property: a sequence of random variables $X_1, X_2, \dots$ such that
$$P(X_{t+1} \mid X_1, X_2, \dots, X_t) = P(X_{t+1} \mid X_t),$$
i.e., the current state depends only on the most recent state.

Is the Markov assumption reasonable? Not completely, for the language model for example. Higher-order Markov chains make it more reasonable, e.g.
$$P(X_{t+1} \mid X_1, X_2, \dots, X_t) = P(X_{t+1} \mid X_t, X_{t-1}),$$
i.e., the current word depends only on the last two words.

Exercise 1

Consider the following Markov model over the states Coffee (C), Facebook (F), and Work (W) (the transition diagram is omitted from this transcription). Given that now I am having Coffee, what is the probability that the next step is Facebook and the one after that is Work?
$$P(X_3 = W, X_2 = F \mid X_1 = C) = \frac{P(X_3 = W, X_2 = F, X_1 = C)}{P(X_1 = C)} = \frac{P(X_3 = W \mid X_2 = F, X_1 = C)\, P(X_2 = F \mid X_1 = C)\, P(X_1 = C)}{P(X_1 = C)} \quad \text{(chain rule)}$$
$$= P(X_3 = W \mid X_2 = F)\, P(X_2 = F \mid X_1 = C) \quad \text{(Markov property)} = 0.3$$

Exercise 2

Given that now I am having Coffee, what is the probability that in two steps I am at Work?
$$P(X_3 = W \mid X_1 = C) = \sum_s P(X_3 = W, X_2 = s \mid X_1 = C) \quad \text{(marginalization)}$$
$$= P(X_3 = W \mid X_2 = W) P(X_2 = W \mid X_1 = C) + P(X_3 = W \mid X_2 = C) P(X_2 = C \mid X_1 = C) + P(X_3 = W \mid X_2 = F) P(X_2 = F \mid X_1 = C) = 0.45$$

Using the transition matrix:
$$P(X_3 = j \mid X_1 = i) = \sum_{k=1}^{N} a_{ik} a_{kj} = (A^2)_{ij}.$$
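The last identity is easy to verify numerically. A minimal sketch comparing explicit marginalization against the matrix square; the transition values below are illustrative assumptions, since the slide's Coffee/Facebook/Work diagram is not reproduced here:

```python
import numpy as np

states = ["C", "F", "W"]          # Coffee, Facebook, Work
# Hypothetical transition matrix (rows sum to 1); not the values from the slide.
A = np.array([[0.4, 0.3, 0.3],    # from C
              [0.2, 0.5, 0.3],    # from F
              [0.1, 0.2, 0.7]])   # from W

i, j = states.index("C"), states.index("W")

# Two-step probability by explicit marginalization over the intermediate state.
two_step_sum = sum(A[i, k] * A[k, j] for k in range(len(states)))

# Same quantity from the matrix square.
two_step_mat = np.linalg.matrix_power(A, 2)[i, j]

assert np.isclose(two_step_sum, two_step_mat)
print(two_step_sum)
```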
Parameter estimation for Markov models

Now suppose we have observed $M$ sequences of examples:
$$x_{1,1}, \dots, x_{1,T} \qquad \dots \qquad x_{M,1}, \dots, x_{M,T}$$
where for simplicity we assume each sequence has the same length $T$, and the lower-case $x_{m,t}$ represents the observed value of the random variable $X_{m,t}$. From these observations, how do we learn the model parameters $(\pi, A)$?

Finding the MLE

Same story, maximum likelihood estimation:
$$\operatorname*{argmax}_{\pi, A} \; \ln P(X_1 = x_1, X_2 = x_2, \dots, X_T = x_T)$$
First, we need to compute this joint probability. Applying the chain rule for random variables, we get
$$P(X_1, X_2, \dots, X_T) = P(X_2, X_3, \dots, X_T \mid X_1) P(X_1) = P(X_3, \dots, X_T \mid X_1, X_2) P(X_2 \mid X_1) P(X_1) = \cdots$$
$$= P(X_1) \prod_{t=2}^{T} P(X_t \mid X_1, \dots, X_{t-1}) = P(X_1) \prod_{t=2}^{T} P(X_t \mid X_{t-1}) \quad \text{(Markov property)}$$

The log-likelihood of a sequence $x_1, \dots, x_T$ is therefore
$$\ln P(X_1 = x_1, \dots, X_T = x_T) = \ln P(X_1 = x_1) + \sum_{t=2}^{T} \ln P(X_t = x_t \mid X_{t-1} = x_{t-1}) = \ln \pi_{x_1} + \sum_{t=2}^{T} \ln a_{x_{t-1}, x_t}$$
$$= \sum_n \mathbb{I}[x_1 = n] \ln \pi_n + \sum_{n, n'} \left( \sum_{t=2}^{T} \mathbb{I}[x_{t-1} = n, x_t = n'] \right) \ln a_{n, n'}$$

Example

Suppose we observed the following 2 sequences of length 5:
sunny, sunny, rainy, rainy, rainy
rainy, sunny, sunny, sunny, rainy
The MLE is the model obtained by the counting formulas on the next slide (the resulting diagram is omitted from this transcription).
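As a sanity check on that example, here is a small sketch that estimates $\pi$ and $A$ from the two observed weather sequences, using the counting-based solution stated on the next slide:

```python
from collections import Counter

sequences = [
    ["sunny", "sunny", "rainy", "rainy", "rainy"],
    ["rainy", "sunny", "sunny", "sunny", "rainy"],
]
states = ["sunny", "rainy"]

# Count initial states and transitions across all sequences.
init_counts = Counter(seq[0] for seq in sequences)
trans_counts = Counter((a, b) for seq in sequences for a, b in zip(seq, seq[1:]))

# MLE: normalize the counts (counting formulas from the next slide).
pi = {s: init_counts[s] / len(sequences) for s in states}
A = {s: {s2: trans_counts[(s, s2)] / sum(trans_counts[(s, x)] for x in states)
         for s2 in states}
     for s in states}

print(pi)   # {'sunny': 0.5, 'rainy': 0.5}
print(A)    # e.g. A['sunny']['sunny'] = 3/5, A['sunny']['rainy'] = 2/5, ...
```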
Finding the MLE

So the MLE is
$$\operatorname*{argmax}_{\pi, A} \; \sum_n (\#\text{initial states with value } n) \ln \pi_n + \sum_{n, n'} (\#\text{transitions from } n \text{ to } n') \ln a_{n, n'}$$
We have seen this many times. The solution is (derivation left as an exercise):
$$\pi_n = \frac{\#\text{ of sequences starting with } n}{\#\text{ of sequences}}, \qquad a_{n, n'} = \frac{\#\text{ of transitions from } n \text{ to } n'}{\#\text{ of transitions starting with } n}$$

Outline

1 Markov chain
2 Hidden Markov Model
  Inferring HMMs
  Viterbi Algorithm
  Viterbi Algorithm: Example
  Learning HMMs

Markov Model with outcomes

(Another example; picture from Wikipedia.) Now suppose each state $X_t$ also emits some outcome $Z_t \in [O]$ according to the following model: on each day, we also observe Bob's activity (walk, shop, or clean), which depends only on the weather of that day.
$$P(Z_t = o \mid X_t = s) = b_{s,o} \quad \text{(emission probability)}$$
independent of anything else. For example, in the language model, $Z_t$ is the speech signal for the underlying word $X_t$ (very useful for speech recognition). Now the model parameters are $(\{\pi_s\}, \{a_{s,s'}\}, \{b_{s,o}\}) = (\pi, A, B)$.
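To illustrate the generative process just described, here is a sketch that samples a weather sequence and Bob's activities from such a model. The emission and transition values are illustrative assumptions; the exact numbers from the Wikipedia example are not reproduced in this transcription.

```python
import numpy as np

states = ["rainy", "sunny"]
outcomes = ["walk", "shop", "clean"]

# Hypothetical parameters (pi, A, B); illustrative values only.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],       # a_{s,s'} = P(X_{t+1} = s' | X_t = s)
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],  # b_{s,o} = P(Z_t = o | X_t = s)
              [0.6, 0.3, 0.1]])

def sample_hmm(pi, A, B, T, rng=np.random.default_rng(0)):
    """Sample hidden states X_1..X_T and outcomes Z_1..Z_T."""
    xs, zs = [], []
    s = rng.choice(len(pi), p=pi)
    for _ in range(T):
        xs.append(s)
        zs.append(rng.choice(B.shape[1], p=B[s]))   # emit Z_t given X_t
        s = rng.choice(A.shape[1], p=A[s])          # move to X_{t+1}
    return xs, zs

xs, zs = sample_hmm(pi, A, B, T=7)
print([states[x] for x in xs])
print([outcomes[z] for z in zs])
```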
HMM defines a joint probability

Joint likelihood:
$$P(X_1, \dots, X_T, Z_1, \dots, Z_T) = P(X_1, \dots, X_T)\, P(Z_1, \dots, Z_T \mid X_1, \dots, X_T)$$
The Markov assumption simplifies the first term:
$$P(X_1, \dots, X_T) = P(X_1) \prod_{t=2}^{T} P(X_t \mid X_{t-1})$$
The independence assumption simplifies the second term:
$$P(Z_1, \dots, Z_T \mid X_1, \dots, X_T) = \prod_{t=1}^{T} P(Z_t \mid X_t)$$
Namely, each $Z_t$ is conditionally independent of everything else, given $X_t$.

The joint log-likelihood is
$$\ln P(X_1 = x_1, \dots, X_T = x_T, Z_1 = z_1, \dots, Z_T = z_T) = \ln \left[ P(X_1 = x_1) \prod_{t=2}^{T} P(X_t = x_t \mid X_{t-1} = x_{t-1}) \prod_{t=1}^{T} P(Z_t = z_t \mid X_t = x_t) \right]$$
$$= \ln P(X_1 = x_1) + \sum_{t=2}^{T} \ln P(X_t = x_t \mid X_{t-1} = x_{t-1}) + \sum_{t=1}^{T} \ln P(Z_t = z_t \mid X_t = x_t) = \ln \pi_{x_1} + \sum_{t=2}^{T} \ln a_{x_{t-1}, x_t} + \sum_{t=1}^{T} \ln b_{x_t, z_t}$$

Learning the model

If we observe $M$ state-outcome sequences $x_{m,1}, z_{m,1}, \dots, x_{m,T}, z_{m,T}$ for $m = 1, \dots, M$, the MLE is again very simple (verify yourself):
$$\pi_s \propto \#\text{initial states with value } s, \qquad a_{s,s'} \propto \#\text{transitions from } s \text{ to } s', \qquad b_{s,o} \propto \#\text{state-outcome pairs } (s, o)$$
However, most often we do not observe the states! Think about the speech recognition example. This is called a Hidden Markov Model (HMM). It is important to notice that the word "hidden" in the name Hidden Markov Model refers to the states of the Markov chain, not to the parameters of the model.

How to learn HMMs? Roadmap:
- first discuss how to infer when the model is known (key: dynamic programming)
- then discuss how to learn the model (key: EM)
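The fully observed joint log-likelihood above can be evaluated directly by summing the three kinds of log terms. A minimal sketch with made-up parameters and a short observed state-outcome sequence (none of these numbers come from the slides):

```python
import numpy as np

def joint_log_likelihood(pi, A, B, xs, zs):
    """ln P(X_{1:T}=x, Z_{1:T}=z) = ln pi_{x1} + sum_t ln a_{x_{t-1},x_t} + sum_t ln b_{x_t,z_t}."""
    ll = np.log(pi[xs[0]]) + np.log(B[xs[0], zs[0]])
    for t in range(1, len(xs)):
        ll += np.log(A[xs[t - 1], xs[t]]) + np.log(B[xs[t], zs[t]])
    return ll

# Tiny 2-state, 3-outcome example with made-up parameters.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
xs, zs = [0, 0, 1, 1], [2, 1, 0, 0]   # observed states and outcomes
print(joint_log_likelihood(pi, A, B, xs, zs))
```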
What can we infer about a known HMM?

In HMMs, we are often interested in the following problems:
- the probability of observing a whole sequence,
  $$P(Z_{1:T} = z_{1:T}) = P(Z_1 = z_1, \dots, Z_T = z_T),$$
  e.g., the probability of observing Bob's activities walk, walk, shop, clean, walk, shop, shop for one week;
- the state at some point in time, given an observation sequence,
  $$\gamma_s(t) = P(X_t = s \mid Z_{1:T} = z_{1:T}),$$
  e.g., given Bob's activities for one week, what was the weather like on Wednesday?
- the likelihood of two consecutive states at a given time,
  $$\xi_{s,s'}(t) = P(X_t = s, X_{t+1} = s' \mid Z_{1:T} = z_{1:T}),$$
  e.g., given Bob's activities for one week, what was the weather like on Wednesday and Thursday?
- the most likely hidden state path, given an observation sequence,
  $$\operatorname*{argmax}_{x_{1:T}} P(X_{1:T} = x_{1:T} \mid Z_{1:T} = z_{1:T}),$$
  e.g., given Bob's activities for one week, what is the most likely weather for this week?

Forward and backward messages

The key is to compute two things:
- forward messages: for each $s$ and $t$,
  $$\alpha_s(t) = P(X_t = s, Z_{1:t} = z_{1:t}).$$
  The intuition: given the observations up to time $t$, what is the likelihood of the Markov chain being in state $s$ at time $t$?
- backward messages: for each $s$ and $t$,
  $$\beta_s(t) = P(Z_{t+1:T} = z_{t+1:T} \mid X_t = s).$$
  The interpretation: if we are told that the Markov chain at time $t$ is in state $s$, what is the likelihood of observing the future observations from $t+1$ to $T$?

Computing forward messages

Key: establish a recursive formula.
$$\alpha_s(t) = P(X_t = s, Z_{1:t}) = P(X_t = s, Z_{1:t-1}, Z_t) = P(Z_t \mid X_t = s, Z_{1:t-1})\, P(X_t = s, Z_{1:t-1})$$
$$= P(Z_t \mid X_t = s)\, P(X_t = s, Z_{1:t-1}) \quad \text{(independence)}$$
$$= b_{s,z_t} \sum_{s'} P(X_t = s, X_{t-1} = s', Z_{1:t-1}) \quad \text{(marginalizing)}$$
$$= b_{s,z_t} \sum_{s'} P(X_t = s \mid X_{t-1} = s', Z_{1:t-1})\, P(X_{t-1} = s', Z_{1:t-1}) = b_{s,z_t} \sum_{s'} P(X_t = s \mid X_{t-1} = s')\, P(X_{t-1} = s', Z_{1:t-1})$$
$$= b_{s,z_t} \sum_{s' \in [N]} a_{s',s}\, \alpha_{s'}(t-1)$$
Base case: $\alpha_s(1) = P(X_1 = s, Z_1 = z_1) = P(Z_1 = z_1 \mid X_1 = s)\, P(X_1 = s) = \pi_s b_{s,z_1}$.
Forward procedure

Forward algorithm:
- For all $s \in [N]$, compute $\alpha_s(1) = \pi_s b_{s,z_1}$.
- For $t = 2, \dots, T$: for each $s \in [N]$, compute
  $$\alpha_s(t) = b_{s,z_t} \sum_{s' \in [N]} a_{s',s}\, \alpha_{s'}(t-1).$$
It takes $O(N^2 T)$ time and $O(NT)$ space using dynamic programming. Oh no, CSCI-570 again...

Computing backward messages

Again, establish a recursive formula:
$$\beta_s(t) = P(Z_{t+1:T} \mid X_t = s) = \sum_{s'} P(Z_{t+1:T}, X_{t+1} = s' \mid X_t = s) \quad \text{(marginalizing)}$$
$$= \sum_{s'} P(Z_{t+1:T} \mid X_{t+1} = s', X_t = s)\, P(X_{t+1} = s' \mid X_t = s) \quad \text{(conditional independence)}$$
$$= \sum_{s'} a_{s,s'}\, P(Z_{t+1:T} \mid X_{t+1} = s') = \sum_{s'} a_{s,s'}\, P(Z_{t+1}, Z_{t+2:T} \mid X_{t+1} = s')$$
$$= \sum_{s'} a_{s,s'}\, P(Z_{t+1} \mid Z_{t+2:T}, X_{t+1} = s')\, P(Z_{t+2:T} \mid X_{t+1} = s') = \sum_{s'} a_{s,s'}\, b_{s',z_{t+1}}\, \beta_{s'}(t+1)$$
Base case: $\beta_s(T) = 1$ (verify yourself).

Backward procedure

Backward algorithm:
- For all $s \in [N]$, set $\beta_s(T) = 1$.
- For $t = T-1, \dots, 1$: for each $s \in [N]$, compute
  $$\beta_s(t) = \sum_{s' \in [N]} a_{s,s'}\, b_{s',z_{t+1}}\, \beta_{s'}(t+1).$$
Again, it takes $O(N^2 T)$ time and $O(NT)$ space.
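A compact sketch of both procedures in NumPy; indices are 0-based, so alpha[t, s] stores $\alpha_s(t+1)$, and the model arrays are the same kind of hypothetical values used in the earlier snippets. Summing the last row of alpha gives the sequence likelihood $P(Z_{1:T})$, since $\alpha_s(T) = P(X_T = s, Z_{1:T})$.

```python
import numpy as np

def forward(pi, A, B, z):
    """alpha[t, s] = P(X_{t+1} = s, Z_{1:t+1} = z_{1:t+1}) (0-based t)."""
    T, N = len(z), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, z[0]]                        # base case: pi_s * b_{s,z_1}
    for t in range(1, T):
        alpha[t] = B[:, z[t]] * (alpha[t - 1] @ A)    # b_{s,z_t} * sum_{s'} a_{s',s} alpha_{s'}(t-1)
    return alpha

def backward(pi, A, B, z):
    """beta[t, s] = P(Z_{t+2:T} = z_{t+2:T} | X_{t+1} = s) (0-based t)."""
    T, N = len(z), len(pi)
    beta = np.ones((T, N))                            # base case: beta_s(T) = 1
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, z[t + 1]] * beta[t + 1])  # sum_{s'} a_{s,s'} b_{s',z_{t+1}} beta_{s'}(t+1)
    return beta

# Hypothetical model and observation sequence (illustrative values only).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
z = [0, 2, 1, 0]

alpha, beta = forward(pi, A, B, z), backward(pi, A, B, z)
print(alpha[-1].sum())                    # P(Z_{1:T})
print((alpha * beta).sum(axis=1))         # same value at every t (correctness check)
```

For long sequences, practical implementations rescale $\alpha$ and $\beta$ at each step (or work in log space) to avoid numerical underflow; the sketch above keeps raw probabilities for readability.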
Using forward and backward messages

With forward and backward messages, we can solve the inference problems stated earlier. For the state posterior, for $s \in [N]$,
$$\gamma_s(t) = P(X_t = s \mid Z_{1:T}) \propto P(X_t = s, Z_{1:T}) = P(X_t = s, Z_{1:t}, Z_{t+1:T}) = P(X_t = s, Z_{1:t})\, P(Z_{t+1:T} \mid X_t = s, Z_{1:t})$$
$$= P(X_t = s, Z_{1:t})\, P(Z_{t+1:T} \mid X_t = s) = \alpha_s(t)\, \beta_s(t)$$
What normalization constant are we omitting in $\propto$? It is exactly
$$P(Z_{1:T}) = \sum_{s \in [N]} P(Z_{1:T}, X_t = s) = \sum_{s \in [N]} \alpha_s(T) = \sum_{s \in [N]} \alpha_s(t)\, \beta_s(t),$$
the probability of observing the sequence $z_{1:T}$. This holds for any $t$; a good way to check the correctness of your code.

Similarly, the conditional probability of a transition from $s$ to $s'$ at time $t$:
$$\xi_{s,s'}(t) = P(X_t = s, X_{t+1} = s' \mid Z_{1:T}) \propto P(X_t = s, X_{t+1} = s', Z_{1:T}) = P(X_t = s, Z_{1:t}, X_{t+1} = s', Z_{t+1:T})$$
$$= P(X_t = s, Z_{1:t})\, P(X_{t+1} = s', Z_{t+1:T} \mid X_t = s, Z_{1:t}) = \alpha_s(t)\, P(X_{t+1} = s', Z_{t+1:T} \mid X_t = s)$$
$$= \alpha_s(t)\, P(Z_{t+1:T} \mid X_{t+1} = s', X_t = s)\, P(X_{t+1} = s' \mid X_t = s) = \alpha_s(t)\, a_{s,s'}\, P(Z_{t+1:T} \mid X_{t+1} = s')$$
$$= \alpha_s(t)\, a_{s,s'}\, P(Z_{t+1}, Z_{t+2:T} \mid X_{t+1} = s') = \alpha_s(t)\, a_{s,s'}\, P(Z_{t+2:T} \mid X_{t+1} = s', Z_{t+1})\, P(Z_{t+1} \mid X_{t+1} = s')$$
$$= \alpha_s(t)\, a_{s,s'}\, P(Z_{t+2:T} \mid X_{t+1} = s')\, P(Z_{t+1} \mid X_{t+1} = s') = \alpha_s(t)\, a_{s,s'}\, b_{s',z_{t+1}}\, \beta_{s'}(t+1)$$
The normalization constant is in fact again $P(Z_{1:T})$.

Finding the most likely path

Find the most likely hidden state path, given an observation sequence:
$$\operatorname*{argmax}_{x_{1:T}} P(X_{1:T} = x_{1:T} \mid Z_{1:T} = z_{1:T})$$
This is called Viterbi decoding. We cannot use the forward and backward messages directly, though the procedure is very similar to the forward procedure: just replace the sum with a max. We will compute (think of Dijkstra's algorithm)
$$\delta_s(t) = \max_{x_{1:t-1}} P(X_t = s, X_{1:t-1} = x_{1:t-1}, Z_{1:t}), \qquad \Delta_s(t) = \operatorname*{argmax}_{x_{t-1}} \max_{x_{1:t-2}} P(X_t = s, X_{1:t-1} = x_{1:t-1}, Z_{1:t})$$

Computing $\delta_s(t)$

Follow along (the goal is to get a recurrence):
$$\delta_s(t) = \max_{x_{1:t-1}} P(X_t = s, X_{1:t-1}, Z_{1:t}) = \max_{x_{1:t-1}} P(X_t = s, Z_t, X_{1:t-1}, Z_{1:t-1})$$
$$= \max_{s'} \max_{x_{1:t-2}} P(X_t = s, Z_t \mid X_{t-1} = s', X_{1:t-2}, Z_{1:t-1})\, P(X_{t-1} = s', X_{1:t-2}, Z_{1:t-1})$$
$$= \max_{s'} \delta_{s'}(t-1)\, P(X_t = s, Z_t \mid X_{t-1} = s') = \max_{s'} \delta_{s'}(t-1)\, P(Z_t \mid X_t = s, X_{t-1} = s')\, P(X_t = s \mid X_{t-1} = s')$$
$$= \max_{s'} \delta_{s'}(t-1)\, P(Z_t \mid X_t = s)\, P(X_t = s \mid X_{t-1} = s') = b_{s,z_t} \max_{s'} a_{s',s}\, \delta_{s'}(t-1)$$
Base case: $\delta_s(1) = P(X_1 = s, Z_1 = z_1) = \pi_s b_{s,z_1}$.
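A sketch of this recursion, together with the backtracking step described on the next slide (same NumPy conventions and the same kind of hypothetical parameters as the earlier snippets):

```python
import numpy as np

def viterbi(pi, A, B, z):
    """Return the most likely state path argmax_x P(X_{1:T}=x | Z_{1:T}=z)."""
    T, N = len(z), len(pi)
    delta = np.zeros((T, N))            # delta[t, s]: best joint prob over prefixes
    back = np.zeros((T, N), dtype=int)  # back[t, s]: argmax predecessor state
    delta[0] = pi * B[:, z[0]]          # base case: pi_s * b_{s,z_1}
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A          # scores[s', s] = delta_{s'}(t-1) * a_{s',s}
        back[t] = scores.argmax(axis=0)
        delta[t] = B[:, z[t]] * scores.max(axis=0)  # b_{s,z_t} * max_{s'} ...
    # Backtracking: start from the best final state, follow the stored argmaxes.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical model and observations (illustrative values only).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
print(viterbi(pi, A, B, z=[0, 2, 1, 0]))
```

In practice the products of probabilities underflow for long sequences, so implementations usually carry $\log \delta$ and add log-probabilities instead; the sketch keeps raw probabilities for readability.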
Viterbi Algorithm

- For each $s \in [N]$, compute $\delta_s(1) = \pi_s b_{s,z_1}$.
- For each $t = 2, \dots, T$, for each $s \in [N]$, compute
  $$\delta_s(t) = b_{s,z_t} \max_{s'} a_{s',s}\, \delta_{s'}(t-1), \qquad \Delta_s(t) = \operatorname*{argmax}_{s'} a_{s',s}\, \delta_{s'}(t-1).$$
- Backtracking: let $\hat{x}_T = \operatorname*{argmax}_s \delta_s(T)$. For each $t = T, \dots, 2$, set $\hat{x}_{t-1} = \Delta_{\hat{x}_t}(t)$.
- Output the most likely path $\hat{x}_1, \dots, \hat{x}_T$.

Example

Arrows in the slide's trellis diagram represent the argmax, i.e., $\Delta_s(t)$. The most likely path is rainy, rainy, sunny, sunny.

Example

Consider the HMM below (diagram omitted from this transcription). In this world, at every time step (say every few minutes), you can either be Studying or playing Video games. You are also either Grinning or Frowning while doing the activity. Suppose that we believe the initial state distribution is 50/50. We observe: Grin, Grin, Frown, Grin. What is the most likely path for this sequence of observations?

t = 1, the initial time

$\delta_s(1) = \pi_s b_{s,z_1}$. Compute $\delta_S(1)$ and $\delta_V(1)$:
$$\delta_S(1) = p(z_1 = \text{Grin} \mid x_1 = S)\, \pi(x_1 = S) = 0.5 \times 0.5 = 0.25$$
$$\delta_V(1) = p(z_1 = \text{Grin} \mid x_1 = V)\, \pi(x_1 = V) = 0.8 \times 0.5 = 0.4$$
t = 2

$\delta_s(t) = b_{s,z_t} \max_{s'} a_{s',s}\, \delta_{s'}(t-1)$. Compute $\delta_S(2)$ and $\delta_V(2)$:
$$\delta_S(2) = p(z_2 = \text{Grin} \mid x_2 = S) \max\{p(x_2 = S \mid x_1 = S)\, \delta_S(1),\; p(x_2 = S \mid x_1 = V)\, \delta_V(1)\} = 0.5 \times \max\{\cdot, \cdot\} = 0.01$$
$$\delta_V(2) = p(z_2 = \text{Grin} \mid x_2 = V) \max\{p(x_2 = V \mid x_1 = S)\, \delta_S(1),\; p(x_2 = V \mid x_1 = V)\, \delta_V(1)\} = 0.8 \times \max\{\cdot, \cdot\}$$
(The transition probabilities come from the omitted diagram, so the elided numeric factors are not reproduced here.)

t = 3

$\delta_s(t) = b_{s,z_t} \max_{s'} a_{s',s}\, \delta_{s'}(t-1)$. Compute $\delta_S(3)$ and $\delta_V(3)$:
$$\delta_S(3) = p(z_3 = \text{Frown} \mid x_3 = S) \max\{p(x_3 = S \mid x_2 = S)\, \delta_S(2),\; p(x_3 = S \mid x_2 = V)\, \delta_V(2)\} = 0.5 \times \max\{\cdot, \cdot\} = 0.04$$
$$\delta_V(3) = p(z_3 = \text{Frown} \mid x_3 = V) \max\{p(x_3 = V \mid x_2 = S)\, \delta_S(2),\; p(x_3 = V \mid x_2 = V)\, \delta_V(2)\} = 0.2 \times \max\{\cdot, \cdot\}$$

t = 4

The observation at $t = 4$ is Grin, giving $\delta_S(4)$ and $\delta_V(4) = 0.01$. In the last step, since $\delta_S(4) > \delta_V(4)$, we choose $\hat{x}_4 = S$. Then $\hat{x}_3 = S$, $\hat{x}_2 = S$, and $\hat{x}_1 = S$, using the trace-back table.

Learning the parameters of an HMM

All the previous inferences depend on knowing the parameters $(\pi, A, B)$. How do we learn the parameters based on $M$ observation sequences $z_{m,1}, \dots, z_{m,T}$ for $m = 1, \dots, M$? The MLE is intractable due to the hidden states (similar to GMMs). We need to apply EM again! This is known as the Baum-Welch algorithm.
Applying EM: E-Step

Recall that in the E-Step we fix the parameters and find the posterior distribution $q$ of the hidden states (for each sample), which leads to the expected complete log-likelihood:
$$\mathbb{E}_{x_{1:T} \sim q} \left[ \ln P(X_{1:T} = x_{1:T}, Z_{1:T} = z_{1:T}) \right] = \mathbb{E}_{x_{1:T} \sim q} \left[ \ln \pi_{x_1} + \sum_{t=2}^{T} \ln a_{x_{t-1}, x_t} + \sum_{t=1}^{T} \ln b_{x_t, z_t} \right]$$
$$= \sum_s \gamma_s(1) \ln \pi_s + \sum_{s,s'} \sum_{t=1}^{T-1} \xi_{s,s'}(t) \ln a_{s,s'} + \sum_s \sum_{t=1}^{T} \gamma_s(t) \ln b_{s,z_t}$$
This is quite similar to the fully observed computation earlier. Here $\gamma_s(t)$ and $\xi_{s,s'}(t)$ are the posteriors defined before,
$$\gamma_s(t) = P(X_t = s \mid Z_{1:T}), \qquad \xi_{s,s'}(t) = P(X_t = s, X_{t+1} = s' \mid Z_{1:T}),$$
both computable from the forward and backward messages.

Applying EM: M-Step

Maximizing the expected complete log-likelihood amounts to weighted counting:
$$\pi_s \propto \sum_m \gamma_s^{(m)}(1) = \mathbb{E}_q[\#\text{initial states with value } s]$$
$$a_{s,s'} \propto \sum_m \sum_{t=1}^{T-1} \xi_{s,s'}^{(m)}(t) = \mathbb{E}_q[\#\text{transitions from } s \text{ to } s']$$
$$b_{s,o} \propto \sum_m \sum_{t:\, z_{m,t} = o} \gamma_s^{(m)}(t) = \mathbb{E}_q[\#\text{state-outcome pairs } (s, o)]$$
where
$$\gamma_s^{(m)}(t) = P(X_{m,t} = s \mid Z_{m,1:T} = z_{m,1:T}), \qquad \xi_{s,s'}^{(m)}(t) = P(X_{m,t} = s, X_{m,t+1} = s' \mid Z_{m,1:T} = z_{m,1:T}).$$

Baum-Welch algorithm

Step 0: Initialize the parameters $(\pi, A, B)$.
Step 1 (E-Step): Fixing the parameters, compute the forward and backward messages for all sample sequences, then use them to compute $\gamma_s^{(m)}(t)$ and $\xi_{s,s'}^{(m)}(t)$ for each $m, t, s, s'$.
Step 2 (M-Step): Update the parameters:
$$\pi_s \propto \sum_m \gamma_s^{(m)}(1), \qquad a_{s,s'} \propto \sum_m \sum_t \xi_{s,s'}^{(m)}(t), \qquad b_{s,o} \propto \sum_m \sum_{t:\, z_{m,t} = o} \gamma_s^{(m)}(t)$$
Step 3: Return to Step 1 if not converged. (A code sketch of one such iteration appears after the summary below.)

Summary

Very important models: Markov chains, hidden Markov models. Several algorithms:
- forward and backward procedures
- inferring HMMs based on forward and backward messages
- Viterbi algorithm
- Baum-Welch (EM) algorithm
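To tie the pieces together, here is a sketch of one Baum-Welch iteration on a single observation sequence, reusing forward/backward routines like those sketched earlier. It is an illustrative implementation under the same hypothetical-parameter assumptions, not the course's reference code.

```python
import numpy as np

def forward(pi, A, B, z):
    alpha = np.zeros((len(z), len(pi)))
    alpha[0] = pi * B[:, z[0]]
    for t in range(1, len(z)):
        alpha[t] = B[:, z[t]] * (alpha[t - 1] @ A)
    return alpha

def backward(pi, A, B, z):
    beta = np.ones((len(z), len(pi)))
    for t in range(len(z) - 2, -1, -1):
        beta[t] = A @ (B[:, z[t + 1]] * beta[t + 1])
    return beta

def baum_welch_step(pi, A, B, z):
    """One EM iteration on a single sequence z (0-based state/outcome indices)."""
    alpha, beta = forward(pi, A, B, z), backward(pi, A, B, z)
    evidence = alpha[-1].sum()                             # P(Z_{1:T})
    # E-step: posteriors gamma[t, s] and xi[t, s, s'] from the messages.
    gamma = alpha * beta / evidence
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[:, z[1:]].T * beta[1:])[:, None, :]) / evidence
    # M-step: weighted counting, then normalize.
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for o in range(B.shape[1]):
        new_B[:, o] = gamma[np.array(z) == o].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B

# Hypothetical initialization and observations (illustrative values only).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
print(baum_welch_step(pi, A, B, z=[0, 2, 1, 0, 2]))
```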