Hidden Markov Models

CS769 Spring 2010 Advanced Natural Language Processing

Hidden Markov Models

Lecturer: Xiaojin Zhu jerryzhu@cs.wisc.edu

1 Part-of-Speech Tagging

The goal of Part-of-Speech (POS) tagging is to label each word in a sentence with its part of speech, e.g., The/AT representative/NN put/VBD chairs/NNS on/IN the/AT table/NN. It is useful in information extraction, question answering, shallow parsing, and so on. There are many tag sets; for example, the Penn Treebank tagset has 45 POS tags.

The major difficulty in POS tagging is that a word might have multiple possible POS tags, for example I/PN can/MD can/VB a/AT can/NN. Two kinds of information can help us overcome this difficulty:

1. Some tag sequences are more likely than others. For instance, AT JJ NN is quite common ("a new book"), while AT JJ VBP is unlikely.

2. A word may have multiple possible POS tags, but some are more likely than others; e.g., "flour" is more often a noun than a verb.

The question is: given a word sequence x_{1:N}, how do we compute the most likely POS sequence? One method is to use a Hidden Markov Model.

2 Hidden Markov Models

An HMM has the following components:

- K states (e.g., tag types).

- An initial distribution π = (π_1, ..., π_K), a vector of size K.

- Emission probabilities p(x | z), where x is an output symbol (e.g., a word) and z is a state (e.g., a tag). In this example, p(x | z) can be a multinomial for each z. Note it is possible for different states to output the same symbol; this is the source of difficulty. Let φ = {p(x | z)}.

- State transition probabilities p(z_n = j | z_{n-1} = i) = A_{ij}, where A is a K × K transition matrix. This is a first-order Markov assumption on the states.

The parameters of an HMM are θ = {π, φ, A}. An HMM can be plotted as a transition diagram (note it is not a graphical model! The nodes are not random variables). However, its graphical model is a linear chain on the hidden nodes z_{1:N}, with observed nodes x_{1:N}. There is a nice urn-and-ball model that explains the HMM as a generative model. We can run an HMM for N steps and produce x_{1:N}, z_{1:N}. The joint probability is

p(x_{1:N}, z_{1:N} | θ) = p(z_1 | π) p(x_1 | z_1, φ) \prod_{n=2}^{N} p(z_n | z_{n-1}, A) p(x_n | z_n, φ).   (1)
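To make (1) concrete, here is a minimal sketch (not part of the original notes) that scores a given state sequence under a toy two-state HMM; the numbers in pi, A, and emit are made-up illustration values.

```python
import numpy as np

# Toy HMM with K = 2 states, in the notation of equation (1).
# The values of pi, A, and emit are made up for illustration only.
pi = np.array([0.6, 0.4])              # initial distribution pi_k
A = np.array([[0.7, 0.3],              # A[i, j] = p(z_n = j | z_{n-1} = i)
              [0.4, 0.6]])
emit = [{"book": 0.9, "can": 0.1},     # p(x | z = 0), a multinomial
        {"book": 0.2, "can": 0.8}]     # p(x | z = 1)

def log_joint(x, z):
    """log p(x_{1:N}, z_{1:N} | theta), i.e. the log of equation (1)."""
    lp = np.log(pi[z[0]]) + np.log(emit[z[0]][x[0]])
    for n in range(1, len(x)):
        lp += np.log(A[z[n - 1], z[n]]) + np.log(emit[z[n]][x[n]])
    return lp

print(log_joint(["book", "can"], [0, 1]))   # one particular state sequence for "book can"
```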

Since the same output can be generated by multiple states, there is usually more than one state sequence that can generate x_{1:N}. The problem of tagging is arg max_{z_{1:N}} p(z_{1:N} | x_{1:N}, θ), i.e., finding the most likely state sequence to explain the observation. As we see later, it is solved with the Viterbi algorithm, or more generally the max-sum algorithm. But we need good parameters θ first, which can be estimated by maximum likelihood on some (labeled and unlabeled) training data using EM (known as the Baum-Welch algorithm for HMMs). EM training involves the so-called forward-backward (or, in general, sum-product) algorithm. Besides tagging, HMMs have been a huge success for acoustic modeling in speech recognition, where an observation is the acoustic feature vector in a short time window and a state is a phoneme (this is oversimplified).

3 Use HMM for Tagging

Given an input (word) sequence x_{1:N} and an HMM θ, tagging can be formulated as finding the most likely state sequence, i.e., the z_{1:N} that maximizes p(z_{1:N} | x_{1:N}, θ). This is precisely the problem the max-sum algorithm solves. In the context of HMMs, the max-sum algorithm is known as the Viterbi algorithm.

4 HMM Training: The Baum-Welch Algorithm

Let x_{1:N} be a single, very long training sequence of observed output (e.g., a long document), and z_{1:N} its hidden labels (e.g., the corresponding tag sequence). It is easy to extend to multiple training documents.

4.1 The trivial case: z_{1:N} observed

We find the maximum likelihood estimate of θ by maximizing the likelihood of the observed data. Since both x_{1:N} and z_{1:N} are observed, MLE boils down to frequency estimates. In particular:

- A_{ij} is the fraction of times state i is followed by state j in z_{1:N}.

- φ is the MLE of output x under z. For multinomials, p(x | z) is the fraction of times x is produced under state z.

- π is the fraction of times each state is the first state of a sequence (assuming we have multiple training sequences).
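As a contrast with the unobserved case treated next, the fully observed MLE above really is just counting. A minimal sketch, assuming multinomial emissions and integer state labels 0, ..., K-1 (the function and variable names are this sketch's own, not from the notes):

```python
from collections import Counter, defaultdict

def supervised_mle(sequences, K):
    """Relative-frequency MLE of (pi, A, phi) from observed (x_{1:N}, z_{1:N}) pairs.

    sequences: list of (words, states) pairs, with states in {0, ..., K-1}.
    No smoothing is applied, exactly as in the frequency estimates above.
    """
    init = Counter()                                # counts for pi
    trans = [[0.0] * K for _ in range(K)]           # counts for A[i][j]
    emit = [defaultdict(float) for _ in range(K)]   # counts for p(x | z = k)

    for words, states in sequences:
        init[states[0]] += 1
        for n, (x, z) in enumerate(zip(words, states)):
            emit[z][x] += 1
            if n > 0:
                trans[states[n - 1]][z] += 1

    pi = [init[k] / len(sequences) for k in range(K)]
    A = [[c / max(sum(row), 1) for c in row] for row in trans]
    phi = [{x: c / sum(d.values()) for x, c in d.items()} for d in emit]
    return pi, A, phi

# One toy "document" tagged with states 0..4 standing in for PN MD VB AT NN.
pi, A, phi = supervised_mle([(["I", "can", "can", "a", "can"], [0, 1, 2, 3, 4])], K=5)
```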

4.2 The interesting case: z_{1:N} unobserved

The MLE will maximize (up to a local optimum, see below) the likelihood of the observed data

p(x_{1:N} | θ) = \sum_{z_{1:N}} p(x_{1:N}, z_{1:N} | θ),   (2)

where the summation is over all possible label sequences z_{1:N} of length N. Already we see this is an exponential sum over K^N label sequences, and brute force will fail. HMM training uses a combination of dynamic programming and EM to handle this issue. Note the log likelihood involves summing over hidden variables, which suggests we can apply Jensen's inequality to lower bound (2):

log p(x_{1:N} | θ) = log \sum_{z_{1:N}} p(x_{1:N}, z_{1:N} | θ)   (3)

= log \sum_{z_{1:N}} p(z_{1:N} | x_{1:N}, θ^{old}) \frac{p(x_{1:N}, z_{1:N} | θ)}{p(z_{1:N} | x_{1:N}, θ^{old})}   (4)

≥ \sum_{z_{1:N}} p(z_{1:N} | x_{1:N}, θ^{old}) log \frac{p(x_{1:N}, z_{1:N} | θ)}{p(z_{1:N} | x_{1:N}, θ^{old})}.   (5)

We want to maximize this lower bound instead. Keeping only the parts that depend on θ, we define an auxiliary function

Q(θ, θ^{old}) = \sum_{z_{1:N}} p(z_{1:N} | x_{1:N}, θ^{old}) log p(x_{1:N}, z_{1:N} | θ).   (6)

The p(x_{1:N}, z_{1:N} | θ) term is defined as the product along the chain in (1). Taking the log turns it into a sum of terms, and Q is the expectation of the individual terms under the distribution p(z_{1:N} | x_{1:N}, θ^{old}). In fact most variables are marginalized out, leading to expectations under very simple distributions. For example, the first term in log p(x_{1:N}, z_{1:N} | θ) is log p(z_1 | π), and its expectation is

\sum_{z_{1:N}} p(z_{1:N} | x_{1:N}, θ^{old}) log p(z_1 | π) = \sum_{z_1} p(z_1 | x_{1:N}, θ^{old}) log p(z_1 | π).   (7)

Note we go from an exponential sum over possible sequences to a sum over a single variable z_1, which takes only K values. Introducing the shorthand γ_1(k) ≡ p(z_1 = k | x_{1:N}, θ^{old}), the marginal of z_1 given the input x_{1:N} and the old parameters, the above expectation is \sum_{k=1}^{K} γ_1(k) log π_k. In general, we use the shorthand

γ_n(k) ≡ p(z_n = k | x_{1:N}, θ^{old})   (8)

ξ_n(jk) ≡ p(z_{n-1} = j, z_n = k | x_{1:N}, θ^{old})   (9)

to denote the node marginals and edge marginals (conditioned on the input x_{1:N}, under the old parameters). It will become clear that these marginal distributions play an important role. The Q function can be written as

Q(θ, θ^{old}) = \sum_{k=1}^{K} γ_1(k) log π_k + \sum_{n=1}^{N} \sum_{k=1}^{K} γ_n(k) log p(x_n | z_n = k, φ) + \sum_{n=2}^{N} \sum_{j=1}^{K} \sum_{k=1}^{K} ξ_n(jk) log A_{jk}.   (10)

We are now ready to state the EM algorithm for HMMs, known as the Baum-Welch algorithm. Our goal is to find θ that maximizes p(x_{1:N} | θ), and we do so via the lower bound, or Q. In particular:

1. Initialize θ randomly or smartly (e.g., using limited labeled data with smoothing).

2. E-step: Compute the node and edge marginals γ_n(k), ξ_n(jk) for n = 1 ... N and j, k = 1 ... K. This is done using the forward-backward algorithm (or more generally the sum-product algorithm), which is discussed in a separate note.

3. M-step: Find θ to maximize Q(θ, θ^{old}) as in (10).

4. Iterate the E-step and M-step until convergence.

As usual, the Baum-Welch algorithm finds a local optimum of θ for HMMs.
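As a high-level sketch of steps 1-4 (not from the notes), the loop can be written as a generic EM driver; e_step and m_step are callables standing in for the computations detailed in Sections 5 and 6 below, and the names are illustrative only.

```python
import numpy as np

def baum_welch(x, theta0, e_step, m_step, n_iters=100, tol=1e-6):
    """Sketch of the Baum-Welch loop (steps 1-4 above).

    theta0 : initial parameters (step 1: random, or from limited labeled data)
    e_step : callable (x, theta) -> (gamma, xi, log_lik), the node and edge
             marginals from forward-backward plus log p(x_{1:N} | theta)
    m_step : callable (x, gamma, xi) -> theta maximizing Q(theta, theta_old) in (10)
    """
    theta, prev_ll = theta0, -np.inf
    for _ in range(n_iters):
        gamma, xi, log_lik = e_step(x, theta)   # step 2: E-step
        theta = m_step(x, gamma, xi)            # step 3: M-step
        if log_lik - prev_ll < tol:             # step 4: iterate to convergence;
            break                               # only a local optimum in general
        prev_ll = log_lik
    return theta
```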

5 The M-step

The M-step is a constrained optimization problem, since the parameters need to be normalized. As before, one can introduce Lagrange multipliers and set the gradient of the Lagrangian to zero to arrive at

π_k = γ_1(k)   (11)

A_{jk} ∝ \sum_{n=2}^{N} ξ_n(jk).   (12)

Note A_{jk} is normalized over k. φ is maximized depending on the particular form of the distribution p(x_n | z_n, φ). If it is multinomial, p(x | z = k, φ) is the frequency of x in all of the output, but with each output at step n weighted by γ_n(k).

6 The E-step

We need to compute the marginals on each node z_n and on each edge (z_{n-1}, z_n), conditioned on the observation x_{1:N}. Recall this is precisely what the sum-product algorithm can do. We can convert the HMM into a linear-chain factor graph with variable nodes z_{1:N} and factor nodes f_{1:N}. The graph is a chain with the order, from left to right, f_1, z_1, f_2, z_2, ..., f_N, z_N. The factors are

f_1(z_1) = p(z_1 | π) p(x_1 | z_1)   (13)

f_n(z_{n-1}, z_n) = p(z_n | z_{n-1}) p(x_n | z_n).   (14)

We note that x_{1:N} do not appear in the factor graph; instead they are absorbed into the factors. We pass messages from left to right along the chain, and from right to left. This corresponds to the forward-backward algorithm. It is worth noting that since every variable node z_n has only two factor-node neighbors, it simply repeats the message it receives from one neighbor to the other. The left-to-right messages are

µ_{f_n → z_n}(z_n) = \sum_{k=1}^{K} f_n(z_{n-1} = k, z_n) µ_{z_{n-1} → f_n}(z_{n-1} = k)   (15)

= \sum_{k=1}^{K} f_n(z_{n-1} = k, z_n) µ_{f_{n-1} → z_{n-1}}(z_{n-1} = k)   (16)

= \sum_{k=1}^{K} p(z_n | z_{n-1} = k) p(x_n | z_n) µ_{f_{n-1} → z_{n-1}}(z_{n-1} = k).   (17)

It is not hard to show that

µ_{f_n → z_n}(z_n) = p(x_1, ..., x_n, z_n),   (18)

and this message is called α in the standard forward-backward literature, corresponding to the forward pass. Similarly, one can show that the right-to-left message is

µ_{f_{n+1} → z_n}(z_n) = p(x_{n+1}, ..., x_N | z_n).   (19)

This message is called β and corresponds to the backward pass.
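A minimal, unscaled sketch of the two passes, matching the recursions (15)-(19); the names are illustrative, emissions are assumed to be multinomials stored per state, and a practical implementation would rescale the messages or work in log space to avoid underflow on long sequences.

```python
import numpy as np

def forward_backward_messages(x, pi, A, emit):
    """Unscaled alpha/beta messages for one observation sequence x.

    alpha[n, k] = p(x_1, ..., x_n, z_n = k)        -- equation (18)
    beta[n, k]  = p(x_{n+1}, ..., x_N | z_n = k)   -- equation (19)
    emit[k][x] is p(x | z = k); pi and A are as in Section 2.
    """
    N, K = len(x), len(pi)
    alpha = np.zeros((N, K))
    beta = np.ones((N, K))      # the backward pass starts from beta_N(k) = 1

    # Forward (left-to-right) pass, the recursion in (15)-(17).
    alpha[0] = [pi[k] * emit[k][x[0]] for k in range(K)]
    for n in range(1, N):
        for k in range(K):
            alpha[n, k] = emit[k][x[n]] * sum(alpha[n - 1, j] * A[j][k] for j in range(K))

    # Backward (right-to-left) pass.
    for n in range(N - 2, -1, -1):
        for j in range(K):
            beta[n, j] = sum(A[j][k] * emit[k][x[n + 1]] * beta[n + 1, k] for k in range(K))

    return alpha, beta
```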

By multiplying the two incoming messages at any node z_n, we obtain the joint

µ_{f_n → z_n}(z_n) µ_{f_{n+1} → z_n}(z_n) = p(x_{1:N}, z_n)   (20)

since x_{1:N} are observed (see the sum-product lecture notes; we have equality because the HMM is a directed graph). If we sum over z_n, we obtain the marginal likelihood of the observed data

\sum_{z_n=1}^{K} p(x_{1:N}, z_n) = p(x_{1:N}),   (21)

which is useful for monitoring the convergence of Baum-Welch (note that summing at any position n gives the same quantity). Finally, the desired node marginal is obtained by

γ_n(k) = p(z_n = k | x_{1:N}) = \frac{p(x_{1:N}, z_n = k)}{p(x_{1:N})}.   (22)

A similar argument gives ξ_n(jk) using the sum-product algorithm too.
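Continuing the previous sketch, the node and edge marginals and the resulting M-step updates might be computed as follows; the expression used for ξ_n(jk) is the standard sum-product result, which the note leaves to the reader rather than writing out.

```python
import numpy as np

def e_and_m_step(x, alpha, beta, A, emit):
    """Marginals (20)-(22) and the updates (11)-(12) from unscaled alpha/beta."""
    N, K = alpha.shape
    Z = alpha[-1].sum()                    # p(x_{1:N}), as in equation (21)

    gamma = alpha * beta / Z               # gamma[n, k], equations (20) and (22)

    xi = np.zeros((N, K, K))               # xi[n, j, k], defined for n >= 1 (0-based)
    for n in range(1, N):
        for j in range(K):
            for k in range(K):
                # Standard edge marginal implied by the sum-product argument above.
                xi[n, j, k] = alpha[n - 1, j] * A[j][k] * emit[k][x[n]] * beta[n, k] / Z

    # M-step: equations (11)-(12), plus gamma-weighted frequencies for the emissions.
    pi_new = gamma[0]
    A_new = xi[1:].sum(axis=0)
    A_new = A_new / A_new.sum(axis=1, keepdims=True)   # normalize over k
    emit_new = []
    for k in range(K):
        counts = {}
        for n, word in enumerate(x):
            counts[word] = counts.get(word, 0.0) + gamma[n, k]
        total = sum(counts.values())
        emit_new.append({w: c / total for w, c in counts.items()})

    return gamma, xi, np.log(Z), pi_new, A_new, emit_new
```

With thin wrappers to match the calling convention, this routine and forward_backward_messages can serve as the e_step and m_step of the earlier baum_welch sketch, with np.log(Z) acting as the convergence monitor suggested by (21).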