Statistical NLP: Hidden Markov Models (updated 12/15)

Markov Models

Markov models are statistical tools that are useful for NLP because they can be applied to tasks such as part-of-speech tagging. Their first use was in modeling the letter sequences in works of Russian literature; they were later developed as a general statistical tool. More specifically, they model a sequence (perhaps through time) of random variables that are not necessarily independent. They rely on two assumptions: Limited Horizon and Time Invariance.

Markov Assumptions

Let X = (X_1, ..., X_T) be a sequence of random variables taking values in some finite set S = {s_1, ..., s_n}, the state space. The Markov properties are:

Limited Horizon: P(X_t = s_k | X_1, ..., X_{t-1}) = P(X_t = s_k | X_{t-1}), i.e., a state at position t depends only on the previous state.

Time Invariant: P(X_t = s_k | X_{t-1}) = P(X_2 = s_k | X_1), i.e., the dependency does not change over time.

If X possesses these properties, then X is said to be a Markov chain.

Markov Model Parameters

A Markov chain is described by:
- a stochastic transition matrix A, with a_ij = P(X_{t+1} = s_j | X_t = s_i);
- the probabilities of the initial state of the chain, π_i = P(X_1 = s_i).

There are obvious normalization constraints on A and π: each row of A sums to 1, and the π_i sum to 1. Finally, a Markov model is formally specified by a three-tuple (S, π, A), where S is the set of states, and π and A are the probabilities for the initial state and the state transitions.

Probability of a Sequence of States

By the Markov assumption,

P(X_1, X_2, ..., X_T) = P(X_1) P(X_2 | X_1) ··· P(X_T | X_{T-1}) = π_{X_1} ∏_{t=1}^{T-1} a_{X_t, X_{t+1}}

Example of a Markov Chain

[Figure: a Markov chain over the states h, a, p, e, t, i with a designated start state; the transitions used below are Start → t with probability 1.0, t → i with probability 0.3, and i → p with probability 0.6.]

Probability of a sequence of states:

P(t, i, p) = P(X_1 = t) P(X_2 = i | X_1 = t) P(X_3 = p | X_2 = i) = 1.0 × 0.3 × 0.6 = 0.18
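
A minimal sketch of this computation in Python, encoding only the transition probabilities the example actually uses (the names start and trans are illustrative):

```python
# Minimal sketch of the worked example above; only the transition
# probabilities actually used in the example are encoded.
start = {"t": 1.0}                               # P(X1 = t) = 1.0
trans = {("t", "i"): 0.3, ("i", "p"): 0.6}       # a_{t,i}, a_{i,p}

def sequence_probability(states):
    """P(X1 .. XT) = pi_{X1} * prod_t a_{Xt, Xt+1}."""
    p = start.get(states[0], 0.0)
    for prev, nxt in zip(states, states[1:]):
        p *= trans.get((prev, nxt), 0.0)
    return p

print(sequence_probability(["t", "i", "p"]))     # 0.18
```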

Are n-gram models Markov models? Bigram models obviously are. What about trigram, four-gram models, etc.? They are too: an n-gram model is an (n−1)-th order Markov model, and it can be recast as a first-order Markov chain by taking the states to be (n−1)-tuples of words.

Stationary Distribution of Markov Models

What is the distribution p(X_n) of a Markov state for very large n? Obviously, the initial state is forgotten. The stationary distribution p satisfies pA = p; equivalently, A^T p^T = 1 · p^T, an eigenvalue problem with eigenvalue 1.

Example: for the two-state chain (states C and I) with

A = | 0.7  0.3 |
    | 0.5  0.5 |

the solution of the eigenvalue problem is p = (5/8, 3/8).
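
A sketch of the eigenvalue computation in NumPy, assuming the row-stochastic convention a_ij = P(X_{t+1} = s_j | X_t = s_i) used earlier, so that p is the eigenvector of A^T with eigenvalue 1:

```python
import numpy as np

# Row-stochastic transition matrix from the example:
# row = current state, column = next state.
A = np.array([[0.7, 0.3],
              [0.5, 0.5]])

# p A = p  <=>  A^T p^T = p^T, so p is the eigenvector of A^T
# belonging to eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A.T)
p = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
p = p / p.sum()

print(p)    # [0.625 0.375], i.e. (5/8, 3/8)
```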

Hidden Markov Models (HMMs)

Here the states are hidden. We have only clues about the states from the symbols each state outputs:

P(O_t = k | X_t = s_i, X_{t+1} = s_j) = b_{ijk}

The Crazy Soft Drink Machine

[Figure: a two-state machine with states Cola Preferred (CP) and Ice Tea Preferred (IP); the machine starts in CP, with transitions P(CP → CP) = 0.7, P(CP → IP) = 0.3, P(IP → CP) = 0.5, P(IP → IP) = 0.5.]

Comment: for this machine the output really depends only on s_i, namely b_ijk = b_ik.

Output probabilities:

        cola   ice_t   lem
  CP    0.6    0.1     0.3
  IP    0.1    0.7     0.2

The Crazy Soft Drink Machine, cont.

What is the probability of observing {lem, ice_t}? We need to sum over all 4 possible state paths that might be taken through the HMM, starting in CP:

CP → CP → CP: 0.3 × 0.7 × 0.1 × 0.7 = 0.0147
CP → CP → IP: 0.3 × 0.7 × 0.1 × 0.3 = 0.0063
CP → IP → CP: 0.3 × 0.3 × 0.7 × 0.5 = 0.0315
CP → IP → IP: 0.3 × 0.3 × 0.7 × 0.5 = 0.0315

Total: 0.084
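
The same sum can be checked by brute-force enumeration of state paths. A sketch in Python, assuming the machine parameters tabulated above and a start in CP:

```python
from itertools import product

# Crazy soft drink machine (state-emission case, b_ijk = b_ik).
states = ["CP", "IP"]
a = {("CP", "CP"): 0.7, ("CP", "IP"): 0.3,
     ("IP", "CP"): 0.5, ("IP", "IP"): 0.5}
b = {"CP": {"cola": 0.6, "ice_t": 0.1, "lem": 0.3},
     "IP": {"cola": 0.1, "ice_t": 0.7, "lem": 0.2}}

def observation_probability(obs, start="CP"):
    """Sum over all state paths X_2 .. X_{T+1}; obs[t] is emitted from X_{t+1}'s predecessor."""
    total = 0.0
    for path in product(states, repeat=len(obs)):
        p, prev = 1.0, start
        for o, nxt in zip(obs, path):
            p *= b[prev][o] * a[(prev, nxt)]   # emit, then transition
            prev = nxt
        total += p
    return total

print(observation_probability(["lem", "ice_t"]))   # 0.084
```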

Why Use Hidden Markov Models?

HMMs are useful when one can think of underlying events as probabilistically generating surface events; part-of-speech tagging and speech recognition are examples. HMMs can be trained efficiently using the EM algorithm. Another example where HMMs are useful is in generating the parameters for linear interpolation of n-gram models.

Interpolating Parameters for n-gram Models

[Figure: an HMM fragment for the word-pair state (w_a, w_b), with three ε-transitions weighted λ_1, λ_2, λ_3 leading to states that emit each word w_1, ..., w_M with probabilities P_1(w), P_2(w | w_b), and P_3(w | w_a, w_b), respectively.]

Interpolating Parameters for n-gram Models: Comments

The HMM probability of observing the sequence (w_{n-2}, w_{n-1}, w_n) is equivalent to

P_lin(w_n | w_{n-2}, w_{n-1}) = λ_1 P_1(w_n) + λ_2 P_2(w_n | w_{n-1}) + λ_3 P_3(w_n | w_{n-2}, w_{n-1})

The λ-transitions are special (ε-) transitions that produce no output symbol. In the above model, each word pair (a, b) has a different HMM. This is relaxed by using tied states.
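
As a small illustration of the interpolation the HMM encodes (not of the HMM machinery itself), a hypothetical helper in which the component models P1, P2, P3 and the λ weights are assumed given:

```python
# Hypothetical helper illustrating the interpolation formula; the
# component models P1, P2, P3 and the lambda weights are assumptions
# of this sketch (e.g., estimated from counts, lambdas trained as above).
def p_lin(w, w_prev2, w_prev1, lambdas, P1, P2, P3):
    """P_lin(w | w_prev2, w_prev1) with lambdas = (l1, l2, l3) summing to 1."""
    l1, l2, l3 = lambdas
    return l1 * P1(w) + l2 * P2(w, w_prev1) + l3 * P3(w, w_prev2, w_prev1)
```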

General Form of an HMM

An HMM is specified by a five-tuple (S, K, π, A, B), where S and K are the set of states and the output alphabet, and π, A, B are the probabilities for the initial state, state transitions, and symbol emissions, respectively. When the sets S and K are obvious, an HMM is represented by a three-tuple (π, A, B). Given a specification of an HMM, we can simulate the running of a Markov process and produce an output sequence using the algorithm shown on the next slide. More interesting than a simulation, however, is assuming that some set of data was generated by an HMM, and then being able to calculate probabilities and probable underlying state sequences.

A Program for a Markov Process Modeled by an HMM

t := 1
Start in state s_i with probability π_i (i.e., X_1 = i)
forever do:
    Move from state s_i to state s_j with probability a_ij (i.e., X_{t+1} = j)
    Emit observation symbol o_t = k with probability b_ijk
    t := t + 1
end
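
A runnable version of this simulation, sketched in Python for an arc-emission HMM; the dictionary-based representation of (π, a, b) is an assumption of this sketch, not notation from the text:

```python
import random

def simulate_hmm(pi, a, b, T):
    """Sketch of the program above for an arc-emission HMM.
    pi: dict state -> initial probability; a: dict state -> dict of
    successor probabilities; b: nested dict b[i][j] -> dict of symbol
    probabilities (the b_ijk of the text). Returns T observed symbols."""
    def draw(dist):
        r, acc = random.random(), 0.0
        for outcome, prob in dist.items():
            acc += prob
            if r <= acc:
                return outcome
        return outcome                      # guard against rounding error
    state = draw(pi)                        # X_1 = i with probability pi_i
    observations = []
    for _ in range(T):
        nxt = draw(a[state])                # X_{t+1} = j with probability a_ij
        observations.append(draw(b[state][nxt]))   # o_t = k with prob b_ijk
        state = nxt
    return observations
```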

The Three Fundamental Questions for HMMs

1. Given a model μ = (A, B, π), how do we efficiently compute how likely a certain observation sequence is, i.e., P(O | μ)?
2. Given the observation sequence O and a model μ, how do we choose a state sequence (X_1, ..., X_{T+1}) that best explains the observations?
3. Given an observation sequence O, and a space of possible models found by varying the model parameters μ = (A, B, π), how do we find the model that best explains the observed data?

Finding the Probability of an Observation I

Given the observation sequence O = (o_1, ..., o_T) and a model μ = (A, B, π), we wish to know how to compute P(O | μ) efficiently. Summing over every state sequence X = (X_1, ..., X_{T+1}), we find:

P(O | μ) = Σ_X P(O | X, μ) P(X | μ) = Σ_{X_1 ... X_{T+1}} π_{X_1} ∏_{t=1}^{T} a_{X_t, X_{t+1}} b_{X_t, X_{t+1}, o_t}

This is simply the sum, over each possible state sequence, of the probability of the observation occurring according to that sequence. Direct evaluation of this expression, however, is extremely inefficient.

Finding the Probability of an Observation II

To avoid this complexity, we can use dynamic programming (memoization) techniques; in particular, we use trellis algorithms. We make an array of states versus time and compute the probability of being in each state at each time in terms of the probabilities of being in each state at the preceding time. A trellis can record the probability of all initial subpaths of the HMM that end in a certain state at a certain time; the probability of longer subpaths can then be worked out in terms of the shorter ones.

Finding the Probability of an Observation III: The Forward Procedure

A forward variable α_i(t) = P(o_1 o_2 ··· o_{t-1}, X_t = i | μ) is stored at (s_i, t) in the trellis and expresses the total probability of ending up in state s_i at time t. Forward variables are calculated as follows:

Initialization: α_i(1) = π_i, 1 ≤ i ≤ N
Induction: α_j(t+1) = Σ_{i=1}^{N} α_i(t) a_ij b_{ij o_t}, 1 ≤ t ≤ T, 1 ≤ j ≤ N
Total: P(O | μ) = Σ_{i=1}^{N} α_i(T+1)

This algorithm requires 2N²T multiplications, far fewer than the (2T+1)·N^{T+1} required by direct evaluation.
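
A sketch of the forward procedure in NumPy, using 0-based array indices for the 1-based math above and a B array with B[i, j, k] = b_ijk for arc emissions; the usage example rebuilds the crazy soft drink machine (where b_ijk = b_ik) and recovers the 0.084 computed earlier:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Arc-emission forward procedure.
    pi: (N,) initial probabilities; A: (N, N) transition matrix;
    B: (N, N, K) arc-emission probabilities B[i, j, k] = b_ijk;
    obs: observation sequence as a list of symbol indices.
    Returns alpha of shape (T+1, N) and the total P(O | mu)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T + 1, N))
    alpha[0] = pi                                  # alpha_i(1) = pi_i
    for t, o in enumerate(obs):
        # alpha_j(t+1) = sum_i alpha_i(t) * a_ij * b_ij(o_t)
        alpha[t + 1] = alpha[t] @ (A * B[:, :, o])
    return alpha, alpha[-1].sum()                  # sum_i alpha_i(T+1)

# Usage: the crazy soft drink machine, obs = (lem, ice_t).
pi = np.array([1.0, 0.0])                          # machine starts in CP
A = np.array([[0.7, 0.3],
              [0.5, 0.5]])
b_state = np.array([[0.6, 0.1, 0.3],               # CP: cola, ice_t, lem
                    [0.1, 0.7, 0.2]])              # IP: cola, ice_t, lem
B = np.repeat(b_state[:, None, :], 2, axis=1)      # B[i, j, k] = b_state[i, k]
alpha, prob = forward(pi, A, B, [2, 1])            # symbols: 0=cola, 1=ice_t, 2=lem
print(prob)                                        # 0.084
```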

Finding the Probability of an Observation IV: The Backward Procedure

The backward procedure computes backward variables, which give the total probability of seeing the rest of the observation sequence given that we were in state s_i at time t. Backward variables are useful for the problem of parameter estimation.

Finding the Probability of an Observation V: The Backward Procedure

Let β_i(t) = P(o_t ··· o_T | X_t = i, μ) be the backward variables. They can be calculated working backward through the trellis as follows:

Initialization: β_i(T+1) = 1, 1 ≤ i ≤ N
Induction: β_i(t) = Σ_{j=1}^{N} a_ij b_{ij o_t} β_j(t+1), 1 ≤ t ≤ T, 1 ≤ i ≤ N
Total: P(O | μ) = Σ_{i=1}^{N} π_i β_i(1)
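
A matching sketch of the backward procedure, with the same conventions as forward() above; computing the total both ways is a useful consistency check, since both must equal P(O | μ):

```python
import numpy as np

def backward(pi, A, B, obs):
    """Arc-emission backward procedure (same conventions as forward()).
    Returns beta of shape (T+1, N) and the total P(O | mu)."""
    T, N = len(obs), len(pi)
    beta = np.zeros((T + 1, N))
    beta[T] = 1.0                                  # beta_i(T+1) = 1
    for t in range(T - 1, -1, -1):
        # beta_i(t) = sum_j a_ij * b_ij(o_t) * beta_j(t+1)
        beta[t] = (A * B[:, :, obs[t]]) @ beta[t + 1]
    return beta, float(pi @ beta[0])               # sum_i pi_i * beta_i(1)
```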

Finding the Probability of an Observation VI: Combining Forward and Backward

P(O, X_t = i | μ) = P(o_1 ··· o_T, X_t = i | μ)
  = P(o_1 ··· o_{t-1}, X_t = i | μ) P(o_t ··· o_T | o_1 ··· o_{t-1}, X_t = i, μ)
  = P(o_1 ··· o_{t-1}, X_t = i | μ) P(o_t ··· o_T | X_t = i, μ)
  = α_i(t) β_i(t)

Total: P(O | μ) = Σ_{i=1}^{N} α_i(t) β_i(t), for any t with 1 ≤ t ≤ T+1

Finding the Best State Sequence I

One method consists of finding the states individually: for each t, 1 ≤ t ≤ T+1, we would like to find the X_t that maximizes P(X_t | O, μ). Let

γ_i(t) = P(X_t = i | O, μ) = P(X_t = i, O | μ) / P(O | μ) = α_i(t) β_i(t) / Σ_{j=1}^{N} α_j(t) β_j(t)

The individually most likely state is

X̂_t = argmax_{1 ≤ i ≤ N} γ_i(t), 1 ≤ t ≤ T+1

This quantity maximizes the expected number of states that will be guessed correctly; however, it may yield a quite unlikely state sequence.
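
A short sketch of this posterior decoding, reusing the forward() and backward() sketches defined above:

```python
import numpy as np

def posterior_states(pi, A, B, obs):
    """Individually most likely states via gamma_i(t), reusing the
    forward() and backward() sketches defined above."""
    alpha, prob = forward(pi, A, B, obs)
    beta, _ = backward(pi, A, B, obs)
    gamma = alpha * beta / prob       # gamma_i(t) = alpha_i(t) beta_i(t) / P(O|mu)
    return gamma.argmax(axis=1)       # X_hat_t for t = 1, ..., T+1
```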

Finding the Best State Sequence II: The Viterbi Algorithm

The Viterbi algorithm efficiently computes the most likely state sequence. Commonly, we want to find the most likely complete path, that is, argmax_X P(X | O, μ). To do this, it is sufficient to maximize, for a fixed O, argmax_X P(X, O | μ). We define

δ_j(t) = max_{X_1 ... X_{t-1}} P(X_1 ··· X_{t-1}, o_1 ··· o_{t-1}, X_t = j | μ)

δ_j(t) is the probability of the most probable path ending in state j at time t; a companion variable ψ_j(t) records the node of the incoming arc that led to this most probable path.

Finding the Best State Sequence III: The Viterbi Algorithm

The Viterbi algorithm works as follows:

Initialization: δ_j(1) = π_j, 1 ≤ j ≤ N
Induction: δ_j(t+1) = max_{1 ≤ i ≤ N} δ_i(t) a_ij b_{ij o_t}, 1 ≤ j ≤ N
Store backtrace: ψ_j(t+1) = argmax_{1 ≤ i ≤ N} δ_i(t) a_ij b_{ij o_t}, 1 ≤ j ≤ N
Termination and path readout (by backtracking):
  X̂_{T+1} = argmax_{1 ≤ j ≤ N} δ_j(T+1)
  X̂_t = ψ_{X̂_{t+1}}(t+1)
  P(X̂) = max_{1 ≤ j ≤ N} δ_j(T+1)
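
A sketch of the Viterbi recursion with backtrace, using the same conventions as the forward() sketch above:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Arc-emission Viterbi (same conventions as forward()).
    Returns the most probable path X_1 .. X_{T+1} as state indices,
    and its probability."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T + 1, N))
    psi = np.zeros((T + 1, N), dtype=int)
    delta[0] = pi                                   # delta_j(1) = pi_j
    for t, o in enumerate(obs):
        scores = delta[t][:, None] * A * B[:, :, o] # delta_i(t) a_ij b_ij(o_t)
        delta[t + 1] = scores.max(axis=0)           # best incoming score per j
        psi[t + 1] = scores.argmax(axis=0)          # backpointer per j
    # Termination and path readout by backtracking.
    path = [int(delta[T].argmax())]                 # X_hat_{T+1}
    for t in range(T, 0, -1):
        path.append(int(psi[t][path[-1]]))          # X_hat_t = psi_{X_hat_{t+1}}(t+1)
    return path[::-1], float(delta[T].max())
```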

Finding the Best State Sequence IV: Visualization

[Figure: a trellis of states s_1, ..., s_4 over times t = 1, ..., 7, with the most probable path marked.]

δ_2(t=6) is the probability of reaching state 2 at t = 6 by the most probable path (marked) through state 2 at t = 6. ψ_2(t=6) = 3 means that state 3 precedes state 2 at t = 6 on that most probable path.

Parameter Estimation I

Given a certain observation sequence, we want to find the values of the model parameters μ = (A, B, π) which best explain what we observed. Using maximum likelihood estimation, we seek the values that maximize P(O | μ), i.e., argmax_μ P(O_training | μ). There is no known analytic method for choosing μ to maximize P(O | μ); however, we can locally maximize it with an iterative hill-climbing algorithm known as Baum-Welch or the forward-backward algorithm (a special case of the EM algorithm).

Parameter Estimation II

We don't know what the model is, but we can work out the probability of the observation sequence using some (perhaps randomly chosen) model. Looking at that calculation, we can see which state transitions and symbol emissions were probably used the most; by increasing their probabilities, we can choose a revised model which gives a higher probability to the observation sequence.

Parameter Estimation III

Define p_t(i, j) as the probability of traversing the arc from state i to state j at time t, given the observation sequence:

p_t(i, j) = P(X_t = i, X_{t+1} = j | O, μ) = α_i(t) a_ij b_{ij o_t} β_j(t+1) / Σ_{m=1}^{N} α_m(t) β_m(t)

Parameter Estimation IV

Summing p_t(i, j) over all target states gives the probability of leaving state i at time t:

γ_i(t) = Σ_{j=1}^{N} p_t(i, j)

Parameter Estimation V

The re-estimation formulas for the initial-state and transition probabilities are:

π̂_i = γ_i(1)

â_ij = Σ_{t=1}^{T} p_t(i, j) / Σ_{t=1}^{T} γ_i(t)
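
A sketch of one re-estimation step for π and A, reusing the forward() and backward() sketches defined above; re-estimating B follows the same pattern and is omitted for brevity:

```python
import numpy as np

def baum_welch_step(pi, A, B, obs):
    """One Baum-Welch re-estimation step for pi and A (arc-emission),
    reusing the forward() and backward() sketches defined above.
    Returns (pi_hat, A_hat)."""
    T, N = len(obs), len(pi)
    alpha, prob = forward(pi, A, B, obs)
    beta, _ = backward(pi, A, B, obs)
    # p_t(i, j) = alpha_i(t) a_ij b_ij(o_t) beta_j(t+1) / P(O | mu)
    p = np.zeros((T, N, N))
    for t, o in enumerate(obs):
        p[t] = alpha[t][:, None] * A * B[:, :, o] * beta[t + 1] / prob
    gamma = p.sum(axis=2)                  # gamma_i(t) = sum_j p_t(i, j)
    pi_hat = gamma[0]                      # pi_hat_i = gamma_i(1)
    A_hat = p.sum(axis=0) / gamma.sum(axis=0)[:, None]
    return pi_hat, A_hat
```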

Parameter Estimation VI

Namely, given an HMM model μ = (A, B, π), the Baum-Welch algorithm gives new values for the model parameters, μ̂ = (Â, B̂, π̂). One may prove that P(O | μ̂) ≥ P(O | μ), which is a general property of EM. The process is repeated until convergence. It is prone to local maxima, another property of EM.