CSE 586, Spring 2015, Computer Vision II: Hidden Markov Model and Kalman Filter

Recall: Modeling Time Series

State-Space Model: you have a Markov chain of latent (unobserved) states, and each state generates an observation.

[figure: chain of hidden states x_1, x_2, ..., x_n, x_n+1, each emitting an observation o_1, o_2, ..., o_n, o_n+1]

Modeling Time Series

P(x1,x2,x3,x4,..., o1,o2,o3,o4,...) = P(x1) P(o1|x1) P(x2|x1) P(o2|x2) P(x3|x2) P(o3|x3) P(x4|x3) P(o4|x4) ...

Examples of state-space models: the hidden Markov model and the Kalman filter.

Hidden Markov Models

Note: a good background reference is L.R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proceedings of the IEEE, Vol. 77, No. 2, pp. 257-286, 1989.

Markov Chain

Note: the picture on this slide is a state transition diagram, not a graphical model!

- Set of states, e.g. {S1, S2, S3, S4, S5}
- Table of transition probabilities, where a_ij = P(S_j | S_i) is the probability of going from state S_i to state S_j:

    a11 a12 a13 a14 a15
    a21 a22 a23 a24 a25
    a31 a32 a33 a34 a35
    a41 a42 a43 a44 a45
    a51 a52 a53 a54 a55

- Probability of starting in each state, e.g. π_1, π_2, π_3, π_4, π_5, where π_i = P(S_i)

Computation: the probability of an ordered sequence S2 S1 S5 S4 is

P(S2) P(S1|S2) P(S5|S1) P(S4|S5) = π_2 a_21 a_15 a_54
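To make the sequence-probability computation concrete, here is a minimal Python sketch (not from the slides). The 5-state transition matrix and initial probabilities are illustrative placeholders, and states are indexed from 0.

```python
# Probability of an ordered state sequence under a Markov chain:
# P(S2, S1, S5, S4) = pi_2 * a_21 * a_15 * a_54
# States are 0-indexed below, so S1 -> 0, ..., S5 -> 4.

# Illustrative (made-up) parameters for a 5-state chain.
pi = [0.2, 0.3, 0.1, 0.15, 0.25]          # pi_i = P(start in S_i)
A = [[0.2] * 5 for _ in range(5)]         # a_ij = P(S_j | S_i); each row sums to 1

def markov_sequence_prob(states, pi, A):
    """P(s_1, ..., s_T) = pi[s_1] * prod_t a[s_{t-1}][s_t]."""
    prob = pi[states[0]]
    for prev, cur in zip(states, states[1:]):
        prob *= A[prev][cur]
    return prob

# The sequence S2 S1 S5 S4 from the slide (0-based indices 1, 0, 4, 3).
print(markov_sequence_prob([1, 0, 4, 3], pi, A))
```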

Hidden Markov Model

An HMM adds an observation process on top of the Markov chain; each state emits a symbol from {v1, v2, v3}.

- Set of states, e.g. {S1, S2, S3, S4, S5}
- Table of transition probabilities, a_ij = P(S_j | S_i)
- Probability of starting in each state, π_i = P(S_i)
- Set of observable symbols, e.g. {v1, v2, v3}
- Probability of seeing each symbol in a given state, where b_ik = P(v_k | S_i) is the probability of seeing v_k in state S_i:

    b11 b12 b13
    b21 b22 b23
    b31 b32 b33
    b41 b42 b43
    b51 b52 b53

Computation: the probability of the ordered sequence (S2,v3) (S1,v1) (S5,v2) (S4,v1) is

P(S2,S1,S5,S4, v3,v1,v2,v1)
  = P(S2) P(v3|S2) P(S1|S2) P(v1|S1) P(S5|S1) P(v2|S5) P(S4|S5) P(v1|S4)
  = π_2 b_23 a_21 b_11 a_15 b_52 a_54 b_41
  = (π_2 a_21 a_15 a_54) (b_23 b_11 b_52 b_41)
  = P(S2,S1,S5,S4) P(v3,v1,v2,v1 | S2,S1,S5,S4)

that is, the joint probability factors into (probability of the state sequence) times (probability of the observations given the state sequence).

HMM as Graphical Model

Given an HMM with states S1, S2, ..., Sn, symbols v1, v2, ..., vk, transition probabilities A = {a_ij}, initial probabilities Π = {π_i}, and observation probabilities B = {b_ik}:

- Let the state random variables be X1, X2, X3, X4, ..., with Xt being the state at time t. The allowable values of Xt (a discrete random variable) are {S1, S2, ..., Sn}.
- Let the observed random variables be O1, O2, O3, ..., with Ot observed at time t. The allowable values of Ot (a discrete random variable) are {v1, v2, ..., vk}.

[figure: graphical model with a Markov chain of hidden nodes X1 -> X2 -> X3 -> X4 (initial distribution Π, transitions A), each connected through B to an observation node O1, O2, O3, O4. The Oi are observed variables; the Xt are hidden (latent) variables.]

Verify: what is P(X1,X2,X3,X4, O1,O2,O3,O4)?

P(X1,X2,X3,X4, O1,O2,O3,O4) = P(X1) P(O1|X1) P(X2|X1) P(O2|X2) P(X3|X2) P(O3|X3) P(X4|X3) P(O4|X4)
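A similar sketch (again with made-up placeholder parameters) for the joint probability of a paired state/observation sequence, mirroring the factorization into (state sequence probability) times (observation probabilities given the states) shown above.

```python
# Joint probability of a paired state/observation sequence for an HMM:
# P(S2,S1,S5,S4, v3,v1,v2,v1) = pi_2 b_23 * a_21 b_11 * a_15 b_52 * a_54 b_41
# Indices are 0-based; parameters are illustrative placeholders.

pi = [0.2, 0.3, 0.1, 0.15, 0.25]            # pi_i = P(S_i)
A = [[0.2] * 5 for _ in range(5)]           # a_ij = P(S_j | S_i)
B = [[0.5, 0.3, 0.2] for _ in range(5)]     # b_ik = P(v_k | S_i)

def hmm_joint_prob(states, obs, pi, A, B):
    """P(states, obs) = pi[s1] b[s1][o1] * prod_t a[s_{t-1}][s_t] b[s_t][o_t]."""
    prob = pi[states[0]] * B[states[0]][obs[0]]
    for t in range(1, len(states)):
        prob *= A[states[t - 1]][states[t]] * B[states[t]][obs[t]]
    return prob

# (S2,v3) (S1,v1) (S5,v2) (S4,v1)  ->  states [1,0,4,3], symbols [2,0,1,0]
print(hmm_joint_prob([1, 0, 4, 3], [2, 0, 1, 0], pi, A, B))
```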

Three Computations for HMMs (from the Rabiner tutorial)

HMM Problem 1

What is the likelihood of observing a sequence, e.g. O1, O2, O3? Note that there are multiple ways a given sequence could be emitted, involving different sequences of hidden states.

One (inefficient) way to compute the answer: generate all sequences of three states Sx, Sy, Sz and compute the joint P(Sx,Sy,Sz, O1,O2,O3), which we know how to do. Summing over all state sequences gives the marginal we want. The drawback is that with N states there are N^3 such length-3 sequences (and N^T for a length-T observation sequence).

The better (efficient) solution is based on message passing / belief propagation. This is the sum-product procedure of message passing; it is called the forward-backward procedure in HMM terminology.

HMM Problem 2

What is the most likely sequence of hidden states given that we have observed O1, O2, O3? Again, there is an inefficient way based on explicitly generating the exponential number of possible state sequences, and a more efficient way using message passing.

Define Φ_t(j) = max over X1,...,X(t-1) of P(X1, ..., X(t-1), Xt = Sj, O1, ..., Ot), the probability of the best state sequence ending in state Sj at time t. Then Φ_t(j) = [max_i Φ_(t-1)(i) a_ij] b_j(Ot), and the most likely state sequence is read off by keeping track of which i attains each max.

This is the max-product procedure of message passing; it is called the Viterbi algorithm in HMM terminology (see the code sketch at the end of this section).

Viterbi, under the hood

1) Build a state-space trellis: one column of state nodes S1, ..., S4 per observation time O1, O2, O3, O4.
2) Add a source node B and a sink node E.
3) Take the given observations.
4) Assign weights to the edges based on the observations: edges out of the source carry P(S_i); edges between successive columns carry P(O_t | S_i) P(S_j | S_i); edges into the sink carry P(O_4 | S_i).

Then do multistage dynamic programming to find the maximum-product path. (Note: in some implementations, each edge is instead weighted by -log(w_i) of the weights shown here, so that standard minimum-length-path DP can be performed.)

HMM Problem 3

Given a training set of observation sequences {{O1,O2,...}, {O1,O2,...}, {O1,O2,...}}, how do we determine the transition probabilities a_ij, the initial state probabilities π_i, and the observation probabilities b_ik?

This is a learning problem, and it is the hardest problem of the three. We assume the topology of the HMM (the number and connectivity of the states) is known; otherwise it is even harder. Two popular approaches:

- Segmental K-means algorithm
- Baum-Welch algorithm (EM-based)

Recall the clustering lectures from earlier in the semester!
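A compact sketch of the forward algorithm (Problem 1) and the Viterbi algorithm (Problem 2) for a small discrete HMM; the two-state parameters below are made up purely for illustration.

```python
# Forward algorithm (Problem 1) and Viterbi (Problem 2) for a discrete HMM.
# pi, A, B are illustrative placeholders; obs is a list of 0-based symbol indices.

def forward_likelihood(obs, pi, A, B):
    """P(O_1..O_T), summing over hidden paths in O(T * N^2) time."""
    N = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]              # alpha_1(i)
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]                               # alpha_t(j)
    return sum(alpha)

def viterbi(obs, pi, A, B):
    """Most likely hidden state sequence (max-product dynamic programming)."""
    N = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(N)]
    backptr = []
    for o in obs[1:]:
        new_delta, ptrs = [], []
        for j in range(N):
            best_i = max(range(N), key=lambda i: delta[i] * A[i][j])
            ptrs.append(best_i)
            new_delta.append(delta[best_i] * A[best_i][j] * B[j][o])
        delta, backptr = new_delta, backptr + [ptrs]
    # Trace back the maximum-product path.
    path = [max(range(N), key=lambda j: delta[j])]
    for ptrs in reversed(backptr):
        path.append(ptrs[path[-1]])
    return list(reversed(path))

pi = [0.5, 0.5]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
obs = [0, 1, 0]
print(forward_likelihood(obs, pi, A, B))   # P(O1,O2,O3)
print(viterbi(obs, pi, A, B))              # most likely state indices
```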

Segmental K-Means

Note: this is essentially k-means clustering!

Note: This part assumes continuous-valued observations are output. For a discrete set of observation symbols, we could just do histograms here.

Note: For a discrete set of observation symbols, we could use histograms, normalized to sum to 1, as estimates of the probability mass function of outputting each symbol. Really?
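A minimal sketch of that normalized-histogram idea, assuming the observations have already been hard-assigned to states (e.g. by a Viterbi / segmental k-means pass); the function name, the uniform fallback for unvisited states, and the example data are illustrative assumptions, not from the slides.

```python
from collections import Counter

def estimate_emission_probs(assigned_states, obs_symbols, num_states, num_symbols):
    """b_ik = (# times symbol v_k was emitted while in state S_i) / (# visits to S_i),
    i.e. a per-state histogram normalized to sum to 1."""
    counts = [Counter() for _ in range(num_states)]
    for s, o in zip(assigned_states, obs_symbols):
        counts[s][o] += 1
    B = []
    for s in range(num_states):
        total = sum(counts[s].values())
        if total == 0:
            B.append([1.0 / num_symbols] * num_symbols)   # unvisited state: uniform fallback
        else:
            B.append([counts[s][k] / total for k in range(num_symbols)])
    return B

# Example: state assignments from a Viterbi pass, paired with observed symbol indices.
print(estimate_emission_probs([0, 0, 1, 1, 1], [2, 2, 0, 1, 0], num_states=2, num_symbols=3))
```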

Baum-Welch Algorithm

Same idea as segmental k-means, but with fractional assignments of observations to states. The basic idea is to use EM instead of k-means as a way of assessing ownership weights before computing the observation statistics at each state.

Kalman Filter

Note: a good background reference is "An Introduction to the Kalman Filter," Greg Welch and Gary Bishop, University of North Carolina at Chapel Hill, Dept. of Computer Science, Tech Report 95-041. Welch also maintains an excellent website about all things Kalman: http://www.cs.unc.edu/~welch/kalman/

Kalman Filter for Tracking

P(x1,x2,x3,x4,..., y1,y2,y3,y4,...) = P(x1) P(y1|x1) P(x2|x1) P(y2|x2) P(x3|x2) P(y3|x3) P(x4|x3) P(y4|x4) ...

[figure: chain of hidden states x_1, x_2, ..., each emitting a measurement y_1, y_2, ..., y_n, y_n+1]

What we typically want to compute for tracking applications is P(x_n | y_1, y_2, ..., y_n).

Linear Dynamical Systems

Assume:
- The next state is a linear function of the current state plus zero-mean Gaussian noise.
- Each observation is a linear function of the current state plus zero-mean Gaussian noise.
- The initial prior distribution on the first state is Gaussian.

=> All distributions remain Gaussian!
=> Means and covariances can be estimated over time by a Kalman filter.

Kalman Filter Derivation (in 1D)

Assume linear motion and measurement models; in standard 1D form these are x_k = a x_{k-1} + w_k with w_k ~ N(0, σ_w^2), and y_k = c x_k + v_k with v_k ~ N(0, σ_v^2). The motion and measurement models imply Gaussian conditional distributions P(x_k | x_{k-1}) and P(y_k | x_k).
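A small simulation of this assumed 1D linear-Gaussian model, just to make the two noise sources explicit; the gains a, c and the noise variances q, r (standing in for σ_w^2 and σ_v^2) are placeholder values.

```python
import random

# 1D linear-Gaussian state-space model assumed above:
#   motion:      x_k = a * x_{k-1} + w_k,   w_k ~ N(0, q)
#   measurement: y_k = c * x_k     + v_k,   v_k ~ N(0, r)
a, c = 1.0, 1.0      # state transition and measurement gains (placeholders)
q, r = 0.1, 0.5      # process and measurement noise variances (placeholders)

def simulate(T, x0=0.0):
    """Generate T steps of true states and noisy measurements."""
    xs, ys = [], []
    x = x0
    for _ in range(T):
        x = a * x + random.gauss(0.0, q ** 0.5)     # motion model
        y = c * x + random.gauss(0.0, r ** 0.5)     # measurement model
        xs.append(x)
        ys.append(y)
    return xs, ys

true_states, measurements = simulate(20)
```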

Kalman Filter Derivation: Strategy

What is P(x_k | y_1, ..., y_k)? The derivation alternates two steps.

Step 1: Motion Prediction. Combine P(x_{k-1} | y_1,...,y_{k-1}) and P(x_k | x_{k-1}) to compute P(x_k | y_1,...,y_{k-1}). The product is a quadratic form in x_{k-1} and x_k, so it is a joint Gaussian distribution over x_{k-1} and x_k; integrating over x_{k-1} yields the marginal distribution P(x_k | y_1,...,y_{k-1}).

Step 2: Data Correction. Combine P(x_k | y_1,...,y_{k-1}) and P(y_k | x_k) to compute P(x_k | y_1,...,y_k). Through some algebra (completing the square), we find that the result is again a Gaussian in x_k. This becomes the Gaussian prior for the next iteration of motion prediction and data correction, and thus we continue propagating forward into subsequent time steps.
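A runnable 1D sketch of the resulting filter, alternating motion prediction and data correction as in the strategy above; the default parameters and the example measurement list are placeholders (the measurements could equally come from the simulation sketch earlier).

```python
def kalman_1d(ys, mu0=0.0, var0=1.0, a=1.0, c=1.0, q=0.1, r=0.5):
    """One pass of the 1D Kalman filter: alternate motion prediction and data
    correction. Returns the filtered means of P(x_k | y_1..y_k) at each step."""
    mu, var = mu0, var0                               # Gaussian prior on x_1
    filtered = []
    for y in ys:
        # Step 1: motion prediction -> P(x_k | y_1..y_{k-1})
        mu_pred = a * mu
        var_pred = a * a * var + q
        # Step 2: data correction   -> P(x_k | y_1..y_k)
        K = var_pred * c / (c * c * var_pred + r)     # Kalman gain
        mu = mu_pred + K * (y - c * mu_pred)
        var = (1.0 - K * c) * var_pred
        filtered.append(mu)
    return filtered

measurements = [0.9, 1.1, 2.0, 2.2, 3.1]   # e.g. noisy y_k values (placeholder data)
print(kalman_1d(measurements))
```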