
Sequence modelling
Marco Saerens (UCL)

Slides references: many slides and figures have been adapted from the slides associated with the following books:
Alpaydin (2004), Introduction to Machine Learning. The MIT Press.
Duda, Hart & Stork (2000), Pattern Classification. John Wiley & Sons.
Han & Kamber (2006), Data Mining: Concepts and Techniques, 2nd ed. Morgan Kaufmann.
Tan, Steinbach & Kumar (2006), Introduction to Data Mining. Addison Wesley.
Grinstead and Snell (2006), Introduction to Probability. GNU Free Documentation License.
As well as from Wikipedia, the free online encyclopedia. I really thank these authors for their contribution to machine learning and data mining teaching.

Dynamic programming and edit-distance
Marco Saerens (UCL)

Dynamic programming: suppose we have a lattice with N levels. [Figure: a lattice whose nodes are states and whose edges carry local transition costs.]

Dynamic programming: the problem is to reach level N from level 1 with minimum cost; this is a shortest-path problem.

Dynamic programming, some definitions: $c_k(i, j)$ denotes the local cost associated with the decision to jump to state $s_k = j$ at level $k$, given that we were in state $s_{k-1} = i$ at level $k-1$.

Dynamic programming: the total cost of a path $(s_0, s_1, \ldots, s_N)$ is the sum of the local costs along it, $C(s_0, \ldots, s_N) = \sum_{k=1}^{N} c_k(s_{k-1}, s_k)$. The optimal cost when starting from state $s_0$ is $V(s_0) = \min_{s_1, \ldots, s_N} C(s_0, s_1, \ldots, s_N)$.

Dynamic programming: the optimal cost, whatever the initial state, is $\min_{s_0} V(s_0)$. In the same way, one defines the optimal cost $V_k(i)$ when starting from some intermediate state $s_k = i$ at level $k$.

Dynamic programming: here are the recurrence relations allowing us to obtain the optimal cost, with $V_k(i)$ the optimal cost-to-go from state $s_k = i$:
$V_N(i) = 0$, and $V_k(i) = \min_j \{ c_{k+1}(i, j) + V_{k+1}(j) \}$ for $k < N$

Dynamic programming, graphically: the optimal cost at state $s_k = i$ is obtained by comparing the transitions towards the candidate successor states $s_{k+1} = i-1,\, i,\, i+1$ at level $k+1$.

Dynamic programming, example: [Figure: a small lattice with numeric local costs, on which the optimal cost is computed level by level.]

Dynamic programming, proof: the recurrence follows by splitting the minimum over complete paths into a minimum over the first transition and a minimum over the remaining sub-path, since the tail of an optimal path is itself optimal.
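As a minimal illustration of these recurrence relations, here is a short Python sketch (the encoding of the lattice as one cost matrix per level, and all names, are mine, not from the slides). It runs the recurrence forwards (the symmetric variant mentioned below), accumulating the optimal cost of reaching each state level by level, and keeps back-pointers so that the optimal path can be retrieved by backtracking, as discussed further on:

```python
import numpy as np

def lattice_shortest_path(cost):
    """cost[k] is an (n, n) array: cost[k][i, j] is the local cost of
    jumping from state i at level k to state j at level k+1."""
    n_states = cost[0].shape[0]
    V = np.zeros(n_states)              # optimal cost of reaching each state
    back = []                           # back-pointers for backtracking
    for C in cost:
        # V_new[j] = min_i (V[i] + C[i, j])  -- the DP recurrence
        total = V[:, None] + C
        back.append(total.argmin(axis=0))
        V = total.min(axis=0)
    # backtrack from the best final state to retrieve the optimal path
    path = [int(V.argmin())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return V.min(), path[::-1]
```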

Dynamic programming: in a symmetric way, the recurrences can also be run forwards, computing the optimal cost of reaching each state from level 1.

Dynamic programming: now, if there are jumps bypassing some levels, the minimum is taken over the set of states to which there is a direct jump from $s_k = i$.

Dynamic programming: to find the optimal path, we need to keep track of the previous state in each node (a pointer from the previous state to the current state), and use backtracking from the last node in order to retrieve the optimal path.

Application to edit-distance: computation of a distance between two strings. It computes the minimal number of insertions, deletions and substitutions, that is, the minimum number of edit operations, for transforming one character string x into another character string y.

Application to edit-distance: also called the Levenshtein distance. We thus have two character strings, $x_i$ being the character $i$ of string x.

Application to edit-distance: the length of the string x is denoted by $|x|$; in general, $|x| \neq |y|$. The substring of x, beginning at character $i$ and ending at character $j$, is defined by $x_i^j = x_i x_{i+1} \ldots x_j$.

Application to edit-distance: we will now read the characters of x one by one in order to construct the string y. To this aim, we define the three editing operations: insertion of a character in y (without taking any character from x); deletion of a character from x (without concatenating it to y); substitution of a character of y for one of x.

Application to edit-distance, first convention: being at position $i$ in x means that we have read (extracted) the $i - 1$ first characters of x. Thus, x has been stripped of its $i - 1$ first characters; they have been consumed in order to construct y.

Application to edit-distance, second convention: being at position $j$ in y means that the $j$ first characters of y have been transcribed. We progressively read the first characters of x in order to build y.

Application to edit-distance: this corresponds to a process with levels (steps) and states, so we can apply dynamic programming. One state is characterized by a pair $(i, j)$.

Application to edit-distance: here is the formal definition of the three operations. For the first two operations, we jump from level $k$ to level $k+1$; for the last one (substitution), we jump directly to level $k+2$ (one level is skipped).

Application to edit-distance: this situation can be represented in a two-dimensional table. One level is represented by $(i + j) = \text{constant}$; one state is represented by $(i, j)$; one operation corresponds to a valid transition in this table.

Application to edit-distance: example of a table computing the distance between livre and lire. [Figure: the corresponding dynamic programming table.]

Application to edit-distance: each level corresponds to a diagonal of the table: $(i + j) = \text{constant}$.

Application to edit-distance: a cost (or penalty) is associated with each operation (insertion, deletion, substitution); for instance, unit costs, with cost 0 for a substitution between identical characters (a match).

Application to edit-distance: the dynamic programming formula can be applied to this problem. Initialization: $D(0, 0) = 0$, $D(i, 0) = D(i-1, 0) + c_{\text{del}}$, $D(0, j) = D(0, j-1) + c_{\text{ins}}$. Then:
$D(i, j) = \min\{ D(i-1, j) + c_{\text{del}},\; D(i, j-1) + c_{\text{ins}},\; D(i-1, j-1) + c_{\text{sub}}(x_i, y_j) \}$
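Here is a minimal Python sketch of this recurrence (function and parameter names are mine; the slides give no code). With unit costs it computes the Levenshtein distance:

```python
def edit_distance(x, y, c_ins=1, c_del=1, c_sub=1):
    """Levenshtein distance between strings x and y by dynamic programming.
    D[i][j] = minimum cost of transforming x[:i] into y[:j]."""
    n, m = len(x), len(y)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * c_del                 # delete the i first characters of x
    for j in range(1, m + 1):
        D[0][j] = j * c_ins                 # insert the j first characters of y
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if x[i - 1] == y[j - 1] else c_sub
            D[i][j] = min(D[i - 1][j] + c_del,      # deletion
                          D[i][j - 1] + c_ins,      # insertion
                          D[i - 1][j - 1] + sub)    # substitution / match
    return D[n][m]

print(edit_distance("lire", "livre"))  # -> 1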

Application to edit-distance: and finally, $\text{dist}(x, y) = D(|x|, |y|)$. This value is the edit-distance or Levenshtein distance.

Application to edit-distance, one example: dist(lire, livre) = 1.

Application to edit-distance: in order to find the optimal path (optimal sequence of operations), a pointer to the previous state has to be maintained for each cell of the table; the full path can then be found by backtracking from the final state. Numerous extensions and generalizations of this basic algorithm have been developed.

Application to edit-distance: notice that the related quantity, the longest common subsequence, lcs(x, y), can be obtained by (no proof provided)
$\text{dist}(x, y) = \text{lcs}(x, x) + \text{lcs}(y, y) - 2\,\text{lcs}(x, y) = |x| + |y| - 2\,\text{lcs}(x, y)$
and thus
$\text{lcs}(x, y) = \tfrac{1}{2}\big( |x| + |y| - \text{dist}(x, y) \big)$

Application to edit-distance: thus, since dist(lire, livre) = 1, we have lcs(lire, livre) = (4 + 5 - 1)/2 = 4.

A brief introduction to Markov chains

Introduction to Markov chains: we have a set of states, $S = \{1, 2, \ldots, n\}$; $s_t = k$ means that the process, or system, is in state $k$ at time $t$. Example: we take as states the kind of weather, R (rain), N (nice), and S (snow).

Introduction to Markov chains: Markov chains are models of sequential, discrete-time and discrete-state stochastic processes.

Introduction to Markov chains: the entries of the transition matrix P are $p_{ij} = P(s_{t+1} = j \mid s_t = i)$. We assume that the probability of jumping to a state only depends on the current state, and not on the past before this state: this is the Markov property.

Introduction to Markov chains: the matrix P is called the one-step transition probabilities matrix. It is a stochastic matrix: the row sums are equal to 1. We also assume that these transition probabilities are stationary, that is, independent of time. Let us now compute $P(s_{t+2} = j \mid s_t = i)$:

Introduction to Markov chains:
$P(s_{t+2} = j \mid s_t = i) = \sum_{k=1}^{n} P(s_{t+2} = j \mid s_{t+1} = k)\, P(s_{t+1} = k \mid s_t = i) = \sum_{k=1}^{n} p_{ik}\, p_{kj} = [P^2]_{ij}$

Introduction to Markov chains: the matrix $P^2$ is the two-steps transition probabilities matrix. By induction, $P^{\tau}$ is the $\tau$-steps transition probabilities matrix, containing elements $p_{ij}^{(\tau)} = P(s_{t+\tau} = j \mid s_t = i)$.

Introduction to Markov chains: if $x(t)$ is the column vector containing the probability distribution of finding the process in each state of the Markov chain at time step $t$, we have $x_j(t) = \sum_{i=1}^{n} p_{ij}\, x_i(t-1)$.

Introduction to Markov chains: in matrix form, $x(t) = P^T x(t-1)$, or, as a function of the initial distribution, $x(t) = (P^T)^t x(0)$. Now, $x_j(t) = x(t)^T e_j = x(0)^T P^t e_j$.

Introduction to Markov chains: thus, when starting from state $i$, $x(0) = e_i$, and $x_j(t \mid s_0 = i) = x_{j|i}(t) = e_i^T P^t e_j$. It is the probability of observing the process in state $j$ at time $t$ when starting from state $i$ at time $t = 0$.
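A minimal Python sketch of this evolution equation, using the weather states R, N, S. The transition probabilities below are those of Grinstead and Snell's Land of Oz example and are an assumption on my part, since the original figure did not survive:

```python
import numpy as np

# Illustrative one-step transition matrix over states (R, N, S);
# numbers assumed from Grinstead and Snell's "Land of Oz" example.
P = np.array([[0.50, 0.25, 0.25],   # from R
              [0.50, 0.00, 0.50],   # from N
              [0.25, 0.25, 0.50]])  # from S

x = np.array([1.0, 0.0, 0.0])       # x(0) = e_R: start in state R
for t in range(1, 6):
    x = P.T @ x                     # x(t) = P^T x(t-1)
    print(t, x)
```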

Introduction to Markov chains: let us now introduce absorbing Markov chains. A state $i$ of a Markov chain is called absorbing if it is impossible to leave it: $p_{ii} = 1$. An absorbing Markov chain is a Markov chain containing absorbing states, the other states being called transient (TR).

Introduction to Markov chains: let us take an example, the drunkard's walk (from Grinstead and Snell). [Figure: a random walk on five states; the two end states are absorbing.]

Introduction to Markov chains: the transition matrix can be put in canonical form,
$P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}$
where Q is the transition matrix between transient states.

Introduction to Markov chains: R is the transition matrix between transient and absorbing states. Both Q and R are sub-stochastic: their row sums are $\leq 1$ and at least one row sum is $< 1$.

Introduction to Markov chains: now, $P^t$ can be computed as
$P^t = \begin{pmatrix} Q^t & \big( \sum_{\tau=0}^{t-1} Q^{\tau} \big) R \\ 0 & I \end{pmatrix}$
Since Q is sub-stochastic, it can be shown that $Q^t \to 0$ when $t \to \infty$.

Introduction to Markov chains: the matrix $N = (I - Q)^{-1} = \sum_{t=0}^{\infty} Q^t$ is called the fundamental matrix of the absorbing Markov chain. Let us interpret the elements $n_{ij} = [N]_{ij}$ of the fundamental matrix, where $i, j$ are transient states.

Introduction to Markov chains: recall that, since $i, j$ are transient states, we have $x_j(t \mid s_0 = i) = x_{j|i}(t) = e_i^T P^t e_j = e_i^T Q^t e_j$. Thus, entry $i, j$ of the matrix N for transient states, $n_{ij}$, is
$n_{ij} = e_i^T N e_j = e_i^T \Big( \sum_{t=0}^{\infty} Q^t \Big) e_j = \sum_{t=0}^{\infty} e_i^T Q^t e_j = \sum_{t=0}^{\infty} x_{j|i}(t)$

Introduction to Markov chains: thus, element $n_{ij}$ contains the expected number of passages through transient state $j$ when starting from transient state $i$. The expected number of visits (and therefore steps) before being absorbed, when starting from each state, is given by the column vector $N e$, where $e$ is a vector of ones.

Introduction to Markov chains: for the drunkard's walk, [Figure: the matrices Q and N, and the vector of expected steps before absorption.]


Introduction to Markov chains: we can also compute absorption probabilities from each starting state. The probability of being absorbed by absorbing state $j$, given that we started in transient state $i$, is
$b_{ij} = \sum_{t=0}^{\infty} \sum_{\substack{k=1 \\ k \in TR}}^{n_{tr}} x_{k|i}(t)\, r_{kj}$
where $n_{tr}$ is the number of transient states and the sum over $k$ is taken on the set of transient states (TR) only.

Introduction to Markov chains: the formula states that the probability of reaching absorbing node $j$ at time $(t+1)$ is given by the probability of passing through any transient state $k$ at time $t$ and then jumping from $k$ to state $j$ at time $(t+1)$. The absorption probability is then given by taking the sum over all possible time steps.

Introduction to Markov chains: let us compute this quantity,
$b_{ij} = \sum_{t=0}^{\infty} \sum_{\substack{k=1 \\ k \in TR}}^{n_{tr}} x_{k|i}(t)\, r_{kj} = \sum_{\substack{k=1 \\ k \in TR}}^{n_{tr}} \Big( \sum_{t=0}^{\infty} e_i^T Q^t e_k \Big) r_{kj} = \sum_{\substack{k=1 \\ k \in TR}}^{n_{tr}} n_{ik}\, r_{kj} = [NR]_{ij}$

Introduction to Markov chains: the absorption probabilities are put in the matrix $B = NR$. Let us reconsider the drunkard's example.
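A short Python sketch for the drunkard's walk (I assume the classical five-state version from Grinstead and Snell, with absorbing states 0 and 4 and a fair coin toss at each transient state):

```python
import numpy as np

# Drunkard's walk on states 0..4: states 0 and 4 are absorbing,
# states 1, 2, 3 are transient; from each transient state the walker
# moves left or right with probability 1/2.
Q = np.array([[0.0, 0.5, 0.0],    # transient -> transient
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],         # transient -> absorbing (states 0 and 4)
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix N = (I - Q)^{-1}
print(N)                          # expected number of passages n_ij
print(N @ np.ones(3))             # expected steps before absorption
print(N @ R)                      # absorption probabilities B = N R
```

Running this prints the fundamental matrix, the expected times to absorption (3, 4 and 3 steps from states 1, 2 and 3 respectively), and the probability of ending at either end of the walk from each starting state.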

Introduction to Markov chains, some additional definitions: if, in a Markov chain, it is possible to go to every state from each state, the Markov chain is called irreducible. Moreover, the Markov chain is called regular if some power of the transition matrix has only positive (nonzero) elements.

Introduction to Markov chains: it can further be shown that the powers $P^t$ of a regular transition matrix tend to a matrix with all rows the same, $\lim_{t \to \infty} P^t = e\, \pi^T$.

Introduction to Markov chains: moreover, the limiting probability distribution of states is independent of the initial state: $\lim_{t \to \infty} x(t) = \lim_{t \to \infty} (P^T)^t x(0) = \pi$, whatever $x(0)$.

Introduction to Markov chains: the stationary vector $\pi$ is the left eigenvector of P corresponding to eigenvalue 1, normalized to a probability vector: $P^T \pi = \pi$, with $\pi^T e = 1$.

Introduction to Markov chains: it provides the probability of finding the process in each state in the long run. One can prove that this vector is unique for a regular Markov chain.
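A small Python sketch extracting this stationary vector (reusing the assumed Land of Oz weather matrix from the earlier snippet):

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)        # solves P^T pi = pi
    k = np.argmin(np.abs(vals - 1.0))      # eigenvalue closest to 1
    pi = np.real(vecs[:, k])
    return pi / pi.sum()

# Assumed Land of Oz weather example (states R, N, S)
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
print(stationary_distribution(P))          # -> [0.4, 0.2, 0.4]
```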

Introduction to Markov chains: notice that the fundamental matrix for absorbing chains can be generalized to regular chains. It allows one, for instance, to compute the average first-passage times in matrix form; see, for instance, Grinstead and Snell.

Application to marketing

Application to marketing: suppose we have the following model. We have a number $n$ of customer clusters or segments, based, for instance, on RFM (Recency, Frequency, Monetary value). Each cluster is a state of a Markov chain. The last ($n$th) cluster corresponds to lost customers; it is absorbing and generates no benefit.

Application to marketing: each month, we observe the movements from cluster to cluster. Transition probabilities are estimated by counting the observed frequencies of jumping from one state to another in the past. This provides the entries of the transition probabilities matrix.

Application to marketing: suppose also there is an average profit $m_i$ per month associated with each customer in state $i$, which could be negative. There is also a discounting factor $0 < \gamma < 1$. The expected profit on an infinite time horizon can then be computed.

Application to marketing: it is given by
$m = \sum_{t=0}^{\infty} \gamma^t \sum_{i=1}^{n} x_i(t)\, m_i$
It provides the expected profit on an infinite horizon.

Application to marketing: which finally provides
$m = \sum_{t=0}^{\infty} \gamma^t \sum_{i=1}^{n} x_i(t)\, m_i = \sum_{t=0}^{\infty} \gamma^t\, \mathbf{m}^T (P^T)^t x(0) = \sum_{t=0}^{\infty} \gamma^t\, x^T(0)\, P^t\, \mathbf{m} = x^T(0) \Big( \sum_{t=0}^{\infty} \gamma^t P^t \Big) \mathbf{m} = x^T(0)\, (I - \gamma P)^{-1}\, \mathbf{m}$
where $\mathbf{m}$ is the vector of profits $m_i$.

Application to marketing: this is an example of the computation of the lifetime value of a customer, which is the expected profit provided by the customer until he or she leaves the company.
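A minimal Python sketch of this closed-form computation (the three-segment matrix, profits and discount factor below are invented for illustration, not taken from the slides):

```python
import numpy as np

# Hypothetical 3-segment example: segments 1-2 are active customers,
# segment 3 is the absorbing "lost" segment with zero profit.
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.00, 0.00, 1.00]])
m = np.array([50.0, 10.0, 0.0])     # average monthly profit per segment
gamma = 0.95                        # monthly discount factor
x0 = np.array([1.0, 0.0, 0.0])      # a customer starting in segment 1

# Expected discounted profit on an infinite horizon:
# m_total = x(0)^T (I - gamma * P)^{-1} m
ltv = x0 @ np.linalg.inv(np.eye(3) - gamma * P) @ m
print(ltv)
```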

A brief introduction to dynamic time warping
Marco Saerens, UCL; Alain Soquet, ULB

Dynamic time warping, context: word recognition. Suppose we have a database of word templates or references. Someone pronounces a word, which is recorded and analysed; we want to recover the nearest template word in the database (nearest neighbor), which will be the recognized word.

Dynamic time warping: however, the timing with which the word has been pronounced can differ greatly. Hence, we have to account for distortions or warpings of the signal; this leads to dynamic time warping.

Dynamic time warping: here is an example. [Figure: a spectrogram computed over successive time windows.]

Dynamic time warping: we have to compare word1 and word2, and thus align the two signals. [Figure: word2 plotted against the reference word1 in an alignment grid.]

Dynamic time warping: the two signals are aligned by defining a distance $d(i, j)$ between two frames, for instance the Euclidean distance, and by defining a time alignment that allows for warpings.

Dynamic time warping: we have to add constraints in order to obtain meaningful alignments: monotonicity, continuity, and boundary conditions.

Dynamic time warping: the problem can be solved by dynamic programming, by considering only the valid transitions; see the sketch below.
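A minimal Python sketch (my own, assuming the common step pattern in which cell $(i, j)$ can be reached from $(i-1, j)$, $(i-1, j-1)$ or $(i, j-1)$, with Euclidean frame distances):

```python
import numpy as np

def dtw_distance(X, Y):
    """Dynamic time warping between two sequences of feature frames.
    X: (n, d) array, Y: (m, d) array; returns the accumulated cost."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(X[i - 1] - Y[j - 1])   # local frame distance
            # monotonicity + continuity: only three predecessors allowed
            D[i, j] = d + min(D[i - 1, j], D[i - 1, j - 1], D[i, j - 1])
    return D[n, m]
```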

Dynamic time warping: here are the dynamic programming recurrence relations for this choice of valid transitions, with $D(i, j)$ the accumulated cost of the best alignment ending at frame pair $(i, j)$:
$D(i, j) = d(i, j) + \min\{ D(i-1, j),\; D(i-1, j-1),\; D(i, j-1) \}$

A brief introduction to hidden Markov models
Marco Saerens, UCL

Hidden Markov models: most slides are adapted from David Meir Blei's slides, SRI International. Thanks to David Meir Blei.

What is an HMM? A graphical model: green circles indicate states, purple circles indicate observations, and arrows indicate probabilistic dependencies between states. [Figure: a chain of hidden-state nodes, each emitting one observation node.]

What is an HMM? Green circles are hidden, unobserved states ($n$ in total), each dependent only on the previous state (arrow): "the past is independent of the future given the present", the Markov property.

What is an HMM? Purple nodes are discrete observations ($p$ in total). They depend only on the corresponding hidden state in which they were generated.

What is an HMM? Example of a word model. [Figure: a left-to-right model spelling the word PARIS, one state per letter: P → A → R → I → S.]

HMM formalism: an HMM is specified by $\{s, x, \Pi, P, B\}$, where $s \in \{1, \ldots, n\}$ is the random variable for the hidden states, taking its values from 1 to $n$, and $x \in \{o_1, \ldots, o_p\}$ is the random variable for the observations, whose values are denoted $o_k$.

HMM formalism: $\Pi = \{\pi_i\}$ are the initial state probabilities, $\pi_i = P(s_1 = i)$; $P = \{p_{ij}\}$ are the state transition probabilities, $p_{ij} = P(s_{t+1} = j \mid s_t = i)$; $B = \{b_i(o_k)\}$ are the observation or emission probabilities, $b_i(o_k) = P(x_t = o_k \mid s_t = i)$.

An HMM example: taken from Wikipedia. [Figure: the Wikipedia HMM example.]

Inference in an HMM, three basic problems:
1. Compute the probability/likelihood of a given observation sequence (classification).
2. Given an observation sequence, compute the most likely hidden state sequence (decoding).
3. Given an observation sequence, find the model parameters that most closely fit the data (parameter estimation).

Likelihood computation

Likelihood computation: given an observation sequence $x = [x_1, \ldots, x_T]^T$ and a model $\theta = \{\Pi, P, B\}$, compute the likelihood $P(x \mid \theta)$ of the observation sequence.

Likelihood computation: for a given state sequence $s = (s_1, \ldots, s_T)$,
$P(x \mid s, \theta) = b_{s_1}(x_1)\, b_{s_2}(x_2) \cdots b_{s_T}(x_T)$
$P(s \mid \theta) = \pi_{s_1}\, p_{s_1 s_2}\, p_{s_2 s_3} \cdots p_{s_{T-1} s_T}$
$P(x, s \mid \theta) = P(x \mid s, \theta)\, P(s \mid \theta)$
$P(x \mid \theta) = \sum_{s} P(x \mid s, \theta)\, P(s \mid \theta)$

Likelihood computation:
$P(x \mid \theta) = \sum_{(s_1 \ldots s_T)} \pi_{s_1}\, b_{s_1}(x_1) \prod_{t=1}^{T-1} p_{s_t s_{t+1}}\, b_{s_{t+1}}(x_{t+1})$
It involves a sum over all possible sequences of states!

Forward procedure: the special structure gives us an efficient solution using some recurrence formulas. Define $\alpha_i(t) = P(x_1 \ldots x_t, s_t = i \mid \theta)$; we will omit the dependency on $\theta$ in what follows.

Forward procedure, initialization:
$\alpha_j(1) = P(x_1, s_1 = j) = P(x_1 \mid s_1 = j)\, P(s_1 = j) = b_j(x_1)\, \pi_j$

Forward procedure, recurrence:
$\alpha_j(t+1) = P(x_1 \ldots x_{t+1}, s_{t+1} = j)$
$= \sum_{i=1}^{n} P(x_1 \ldots x_t, s_t = i, x_{t+1}, s_{t+1} = j)$
$= \sum_{i=1}^{n} P(x_{t+1}, s_{t+1} = j \mid x_1 \ldots x_t, s_t = i)\, P(x_1 \ldots x_t, s_t = i)$
$= \sum_{i=1}^{n} P(x_{t+1}, s_{t+1} = j \mid s_t = i)\, \alpha_i(t)$
$= \sum_{i=1}^{n} P(x_{t+1} \mid s_{t+1} = j)\, P(s_{t+1} = j \mid s_t = i)\, \alpha_i(t)$
$= \Big( \sum_{i=1}^{n} p_{ij}\, \alpha_i(t) \Big) b_j(x_{t+1})$

Forward procedure, summary:
$\alpha_j(1) = b_j(x_1)\, \pi_j$
$\alpha_j(t+1) = \Big( \sum_{i=1}^{n} p_{ij}\, \alpha_i(t) \Big) b_j(x_{t+1})$

Forward procedure, from Rabiner et al.: [Figure: the trellis implementation of the forward procedure.]

Likelihood computation, solutions: the likelihood is
$P(x \mid \theta) = \sum_{j=1}^{n} P(x_1 \ldots x_T, s_T = j \mid \theta) = \sum_{j=1}^{n} \alpha_j(T)$

Backward procedure: we now introduce the backward variable. Define $\beta_i(t) = P(x_{t+1} \ldots x_T \mid s_t = i, \theta)$; we will again omit the dependency on $\theta$.

ackward procedure! i (t) = P(x t+1...x T s t = i) n! = P(x t+1 x t+2...x T, s t+1 = j s t = i) j=1 n =! P(x t+1 x t+2...x T s t = i, s t+1 = j) P(s t+1 = j s t = i) j=1 n! = p ij P(x t+1 x t+2...x T s t+1 = j) j=1 n! = p ij P(x t+1 x t+2...x T, s t+1 = j)p(x t+2...x T s t+1 = j) j=1 n! = p ij b j (x t+1 )! j (t +1) j=1 105 ackward procedure s 1 s t 1 s t s t+1 s T x 1 x t 1 x t x t+1 x T When t = T 1, we have! i (T 1) = P(x T s T 1 = i) n! = p ij b j (x T ) j=1 n!! i (T 1) = p ij b j (x T )! j (T ) j=1 => β j (T) =1 106 53

Backward procedure, summary:
$\beta_i(T) = 1$
$\beta_i(t) = \sum_{j=1}^{n} p_{ij}\, b_j(x_{t+1})\, \beta_j(t+1)$

Backward procedure, from Rabiner et al.: [Figure: the trellis implementation of the backward procedure.]

Likelihood computation, solutions:
$P(x, s_t = i \mid \theta) = P(x_1 \ldots x_t, s_t = i, x_{t+1} \ldots x_T \mid \theta)$
$= P(x_{t+1} \ldots x_T \mid x_1 \ldots x_t, s_t = i, \theta)\, P(x_1 \ldots x_t, s_t = i \mid \theta)$
$= P(x_{t+1} \ldots x_T \mid s_t = i, \theta)\, P(x_1 \ldots x_t, s_t = i \mid \theta)$
$= \alpha_i(t)\, \beta_i(t)$

Likelihood computation, solutions:
$P(x \mid \theta) = \sum_{i=1}^{n} \alpha_i(T)$ (forward procedure)
$P(x \mid \theta) = \sum_{i=1}^{n} \pi_i\, b_i(x_1)\, \beta_i(1)$ (backward procedure)
$P(x \mid \theta) = \sum_{i=1}^{n} \alpha_i(t)\, \beta_i(t)$, for any $t$ (combination)
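These three equivalent expressions are easy to check numerically. Below is a minimal Python sketch of the forward and backward procedures (the small two-state model at the bottom is invented for illustration; all names are mine):

```python
import numpy as np

def forward(pi, P, B, obs):
    """alpha[t, j] = P(x_1..x_{t+1}, s_{t+1} = j); obs holds symbol indices."""
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]                 # alpha_j(1) = b_j(x_1) pi_j
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ P) * B[:, obs[t]]
    return alpha

def backward(P, B, obs):
    """beta[t, i] = P(x_{t+2}..x_T | s_{t+1} = i)."""
    T, n = len(obs), P.shape[0]
    beta = np.zeros((T, n))
    beta[-1] = 1.0                               # beta_i(T) = 1
    for t in range(T - 2, -1, -1):
        beta[t] = P @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

# Small illustrative model (numbers are mine, not from the slides)
pi = np.array([0.6, 0.4])
P = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
obs = [0, 1, 2]

alpha, beta = forward(pi, P, B, obs), backward(P, B, obs)
print(alpha[-1].sum())                        # sum_j alpha_j(T)
print((pi * B[:, obs[0]] * beta[0]).sum())    # sum_i pi_i b_i(x_1) beta_i(1)
print((alpha[0] * beta[0]).sum())             # sum_i alpha_i(t) beta_i(t)
```

The three printed values coincide, as the combination formula holds for any $t$.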

Optimal state sequence (decoding)

Best state: find the most probable state at time $t$ given the observations,
$\gamma_i(t) = P(s_t = i \mid x, \theta) = \frac{P(x, s_t = i \mid \theta)}{P(x \mid \theta)} = \frac{\alpha_i(t)\, \beta_i(t)}{\sum_{j=1}^{n} \alpha_j(t)\, \beta_j(t)}$

Best state sequence: find the state sequence that best explains the observations. The Viterbi algorithm is a dynamic programming algorithm solving
$\arg\max_{s} P(s \mid x, \theta) = \arg\max_{s} P(s, x \mid \theta)$

Viterbi algorithm: define
$\delta_j(t) = \max_{s_1 \ldots s_{t-1}} P(s_1 \ldots s_{t-1}, x_1 \ldots x_{t-1}, s_t = j, x_t \mid \theta)$
the probability of the state sequence which maximizes the probability of generating the observations up to time $t-1$, landing in state $j$, and emitting the observation at time $t$.

Viterbi algorithm, recursive computation:
$\delta_j(1) = \pi_j\, b_j(x_1)$, $\quad \delta_j(t+1) = \max_i \{ \delta_i(t)\, p_{ij}\, b_j(x_{t+1}) \}$
$\psi_j(1) = 0$, $\quad \psi_j(t+1) = \arg\max_i \{ \delta_i(t)\, p_{ij}\, b_j(x_{t+1}) \}$

Viterbi algorithm: indeed, $\delta_j(t+1)$ is equal to
$\max_{s_1 \ldots s_t} P(s_1 \ldots s_t, s_{t+1} = j, x_1 \ldots x_{t+1} \mid \theta)$
$= \max_{s_1 \ldots s_t} P(s_{t+1} = j, x_{t+1} \mid s_1 \ldots s_t, x_1 \ldots x_t, \theta)\, P(s_1 \ldots s_t, x_1 \ldots x_t \mid \theta)$
$= \max_{s_1 \ldots s_t} \{ P(x_{t+1} \mid s_{t+1} = j, \theta)\, P(s_{t+1} = j \mid s_t, \theta)\, P(s_1 \ldots s_t, x_1 \ldots x_t \mid \theta) \}$
$= \max_{s_t} \Big\{ b_j(x_{t+1})\, p_{s_t j} \max_{s_1 \ldots s_{t-1}} P(s_1 \ldots s_t, x_1 \ldots x_t \mid \theta) \Big\}$
$= \max_{s_t} \{ b_j(x_{t+1})\, p_{s_t j}\, \delta_{s_t}(t) \}$

Viterbi algorithm: and we can apply dynamic programming to the log-likelihood; the cost is then additive.

Viterbi algorithm: compute the most likely state sequence by working backwards,
$s^*(T) = \arg\max_i \delta_i(T)$
$s^*(t-1) = \psi_{s^*(t)}(t)$
$P^* = \max_i \delta_i(T)$
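A minimal Python sketch of this decoding step, working with log-probabilities so that the cost is additive (the model at the bottom reuses the invented numbers from the forward-backward snippet):

```python
import numpy as np

def viterbi(pi, P, B, obs):
    """Most likely hidden state sequence, in log space (additive costs)."""
    T, n = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])       # log delta_j(1)
    psi = np.zeros((T, n), dtype=int)              # back-pointers psi_j(t)
    for t in range(1, T):
        # scores[i, j] = log delta_i(t) + log p_ij + log b_j(x_{t+1})
        scores = logd[:, None] + np.log(P) + np.log(B[:, obs[t]])[None, :]
        psi[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0)
    # backtrack from the best final state
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Reusing the small illustrative model defined earlier (assumed numbers)
pi = np.array([0.6, 0.4])
P = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(pi, P, B, [0, 1, 2]))
```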

Parameter estimation

Parameter estimation: given an observation sequence, find the model parameters $\Pi, P, B$ that most likely produce that sequence (maximum likelihood). There is no closed-form solution; the procedure is an instance of the iterative EM algorithm.

Parameter estimation: define
$\xi_{ij}(t) = P(s_t = i, s_{t+1} = j \mid x, \theta)$, the probability of traversing the arc from $i$ to $j$ at time $t$;
$\gamma_i(t) = P(s_t = i \mid x, \theta)$, the probability of being in state $i$ at time $t$.

Parameter estimation: recall that we already computed
$\gamma_i(t) = P(s_t = i \mid x, \theta) = \frac{P(x, s_t = i \mid \theta)}{P(x \mid \theta)} = \frac{\alpha_i(t)\, \beta_i(t)}{\sum_{j=1}^{n} \alpha_j(t)\, \beta_j(t)}$

Parameter estimation! ij (t) = P(s t = i, s t+1 = j x,") = P(x x, s = i, s = j, x x ") 1 t t t+1 t+1 T P(x ") = P(s = j, x x x x, s = i,")p(x x, s = i ") t+1 t+1 T 1 t t 1 t t P(x ") = P(s = j, x x s = i,")# (t) t+1 t+1 T t i P(x ") = P(x x s = i, s = j,")p(s = j s = i,")# (t) t+1 T t t+1 t+1 t i P(x ") = P(x x s = j,")p t+1 T t+1 ij# i (t) P(x ") = P(x x x, s = j,")p(x x s = j,")p t+1 t+2 T t+1 t+2 T t+1 ij# i (t) P(x ") = # (t)b (x )p i j t+1 ij$ j (t +1) N! k=1 # k (t)$ k (t) 123 Parameter estimation Π P P P P x 1 x t 1 x t x t+1 x T ˆ " i = probability of starting from i ˆ! i = P(s 1 = i x, ˆ ") = # i (1) Now we can compute the new estimates of the model parameters. 124 62

Parameter estimation:
$\hat{p}_{ij} = \frac{\text{expected number of transitions from } i \text{ to } j}{\text{expected number of transitions out of state } i}$

Parameter estimation:
$\hat{p}_{ij} = \frac{\sum_{t=1}^{T-1} P(s_t = i, s_{t+1} = j \mid x, \hat{\theta})}{\sum_{t=1}^{T-1} P(s_t = i \mid x, \hat{\theta})} = \frac{\sum_{t=1}^{T-1} \xi_{ij}(t)}{\sum_{t=1}^{T-1} \gamma_i(t)}$

Parameter estimation:
$\hat{b}_i(o_k) = \frac{\text{expected number of emissions of } o_k \text{ in state } i}{\text{total number of emissions in state } i}$

Parameter estimation:
$\hat{b}_i(o_k) = \frac{\sum_{t=1}^{T} P(s_t = i, x_t = o_k \mid x, \hat{\theta})}{\sum_{t=1}^{T} P(s_t = i \mid x, \hat{\theta})} = \frac{\sum_{t=1}^{T} P(s_t = i \mid x, \hat{\theta})\, \delta(x_t = o_k)}{\sum_{t=1}^{T} P(s_t = i \mid x, \hat{\theta})} = \frac{\sum_{t:\, x_t = o_k} \gamma_i(t)}{\sum_{t=1}^{T} \gamma_i(t)}$

Parameter estimation: the following two steps are iterated until convergence:
1. recompute the forward and backward variables $\alpha$ and $\beta$;
2. recompute the parameter estimates for $\Pi$, $P$, $B$.

Parameter estimation: it can be shown that this iterative algorithm increases the likelihood at each iteration.
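A minimal Python sketch of one such re-estimation step (it reuses the forward and backward functions sketched earlier, follows the update formulas above, and all names are mine, not from the slides):

```python
import numpy as np

def baum_welch_step(pi, P, B, obs, forward, backward):
    """One EM (Baum-Welch) re-estimation step for a single sequence."""
    T, n = len(obs), len(pi)
    alpha, beta = forward(pi, P, B, obs), backward(P, B, obs)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)          # gamma_i(t)
    xi = np.zeros((T - 1, n, n))
    for t in range(T - 1):
        # xi_ij(t) = alpha_i(t) p_ij b_j(x_{t+1}) beta_j(t+1) / P(x)
        xi[t] = alpha[t][:, None] * P * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi[t] /= xi[t].sum()
    new_pi = gamma[0]                                  # pi_i = gamma_i(1)
    new_P = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):                        # emission estimates
        mask = np.array(obs) == k
        new_B[:, k] = gamma[mask].sum(axis=0) / gamma.sum(axis=0)
    return new_pi, new_P, new_B
```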

HMM applications: generating parameters for n-gram models; tagging speech; speech recognition; bioinformatics sequence modeling.

HMM applications, part-of-speech tagging: "The representative put chairs on the table" can be tagged AT NN VD NNS IN AT NN or AT JJ NN VZ IN AT NN. Some tags: AT: article; NN: singular or mass noun; VD: verb, past tense; NNS: plural noun; IN: preposition; JJ: adjective.

HMM applications, bioinformatics: Durbin et al., Biological Sequence Analysis, Cambridge University Press. Several applications, e.g., proteins: from the primary structure (ATCPLELLLD...), infer the secondary structure (HHHC...).