Structure Learning in Sequential Data
1 Structure Learning in Sequential Data. Liam Stewart, Richard Zemel.
2 Motivation
3 Structure Learning
Structure: patterns and relationships...
- between labels and observations,
- between labels.
Basic components:
- Observations X = {x_1, ..., x_T : x_i ∈ R^d}
- Labels Y = {y_1, ..., y_T : y_i ∈ Y}
Goal: learn a mapping from observations to labels.
4 Why is it difficult?
If sequences are considered as a whole:
- Observations have indeterminate dimensionality,
- The number of joint classifications of labels is exponential in T.
If the items are considered separately:
- Information about label structures is lost.
How do we learn a mapping while maintaining tractability and exploiting structure?
5 Outline 1. Probabilistic models for structure learning 2. Experimental results 3. Conclusions and future work
6 Probabilistic Models
Choices:
- Generative vs. conditional.
- Directed vs. undirected model.
- Use latent states?
- Connectivity?
Generative models express the conditional distribution P(Y|X) in terms of the joint distribution P(Y,X) and require a model of the observations P(X). We focus on conditional models.
7 Conditional Models
These model the conditional distribution P(Y|X) directly and make extensive use of features: functions of the observations. Features can be overlapping and non-independent. Examples:
- Regular expression: [A-Z]
- Category: is a name
- Exact match: Morgan
Assumption: raw observations have been pre-processed by a set of feature functions.
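As a concrete illustration (not from the slides), here is a minimal sketch of the three kinds of feature functions; the vocabulary and all function names are hypothetical:

```python
import re

# Illustrative token-level feature functions of the three kinds above.
# Each maps a raw token to a binary value; a pre-processing pass stacks
# them into the feature vector x_t for each position t.
NAMES = {"Morgan", "Smith"}  # hypothetical category vocabulary

def capitalized(token):   # regular-expression feature, e.g. [A-Z]
    return 1 if re.match(r"[A-Z]", token) else 0

def is_name(token):       # category feature: is a name
    return 1 if token in NAMES else 0

def exact_morgan(token):  # exact-match feature: "Morgan"
    return 1 if token == "Morgan" else 0

def featurize(tokens):
    feats = (capitalized, is_name, exact_morgan)
    return [[f(tok) for f in feats] for tok in tokens]

print(featurize(["Morgan", "bought", "Shares"]))
# [[1, 1, 1], [0, 0, 0], [1, 0, 0]]
```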
8 Models
We implemented:
- logistic regression (LR),
- the maximum entropy Markov model (MEMM) and a version with observation-independent transitions (the IMEMM),
- the conditional random field (CRF) and a version with observation-independent edge potentials (the ICRF),
- the input output hidden Markov model (IOHMM),
- the hidden random field (HRF),
- the RBM-CRF template model.
We report results for the LR, CRF, IOHMM, HRF, and RBM-CRF models.
9 Conditional Models
- Fully observed chain structures: CRF
- Latent state: IOHMM, HRF
- Template models: RBM-CRF
10 CRF
[Figure: chain-structured graph over labels y_0, y_1, ..., y_T with observations x_1, ..., x_T.]
Conditional random field [Lafferty, McCallum, Pereira 2001]. It is the undirected version of the maximum entropy Markov model (MEMM).
p(y|X) = (1/Z(X)) \prod_{t=1}^{T} \phi(y_t, y_{t-1}, x_t) = (1/Z(X)) \prod_{t=1}^{T} \exp\{\theta_{y_t, y_{t-1}}^\top x_t\}
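A minimal sketch of this distribution, assuming a fixed dummy start label y_0 = 0 and log-domain arithmetic; the names and array shapes are illustrative, not from the slides:

```python
import numpy as np
from scipy.special import logsumexp

def crf_log_prob(theta, X, y):
    """log p(y|X) for a linear-chain CRF with log-potentials
    theta[y_t, y_{t-1}] . x_t; a dummy label 0 stands in for y_0.
    theta: (L, L, d), X: (T, d), y: int array of length T."""
    T = X.shape[0]
    logphi = np.einsum("jkd,td->tjk", theta, X)  # (T, L, L) log-potentials
    prev = np.concatenate(([0], y[:-1]))
    score = logphi[np.arange(T), y, prev].sum()  # unnormalized score of y
    alpha = logphi[0, :, 0]                      # forward messages, log domain
    for t in range(1, T):
        alpha = logsumexp(alpha[None, :] + logphi[t], axis=1)
    return score - logsumexp(alpha)              # subtract log Z(X)

rng = np.random.default_rng(0)
theta, X = rng.normal(size=(3, 3, 4)), rng.normal(size=(5, 4))
print(crf_log_prob(theta, X, np.array([1, 2, 0, 1, 1])))
```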
11 CRF
Most implementations use a second-order quasi-Newton optimizer (e.g., L-BFGS) to optimize the weights. Let m_{i,t,j} be one when y_t^{(i)} = j.
l(\theta; D) = \sum_{i=1}^{N} \log p(y^{(i)} | X^{(i)}) = \sum_{i=1}^{N} \sum_{t=1}^{T_i} \theta_{y_t^{(i)}, y_{t-1}^{(i)}}^\top x_t^{(i)} - \sum_{i=1}^{N} \log Z(X^{(i)})
\partial_{\theta_{jk}} l(\theta; D) = \sum_{i=1}^{N} \sum_{t=1}^{T_i} [m_{i,t,j} m_{i,t-1,k} - p(y_t = j, y_{t-1} = k | X^{(i)})] x_t^{(i)}
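The pairwise marginals in that gradient come from a forward-backward pass; here is a sketch under the same assumptions as the previous snippet (dummy start label 0, log-domain messages):

```python
import numpy as np
from scipy.special import logsumexp

def crf_grad(theta, X, y):
    """d log p(y|X) / d theta via forward-backward pairwise marginals,
    matching the slide's formula; dummy label 0 stands in for y_0."""
    T = X.shape[0]; L = theta.shape[0]
    logphi = np.einsum("jkd,td->tjk", theta, X)
    alpha = np.full((T, L), -np.inf); beta = np.zeros((T, L))
    alpha[0] = logphi[0, :, 0]
    for t in range(1, T):                 # forward pass
        alpha[t] = logsumexp(alpha[t-1][None, :] + logphi[t], axis=1)
    for t in range(T - 2, -1, -1):        # backward pass
        beta[t] = logsumexp(beta[t+1][:, None] + logphi[t+1], axis=0)
    logZ = logsumexp(alpha[-1])
    prev = np.concatenate(([0], y[:-1]))
    grad = np.zeros_like(theta)
    for t in range(T):
        if t == 0:                        # y_{-1} is the dummy label 0
            p = np.zeros((L, L)); p[:, 0] = np.exp(alpha[0] + beta[0] - logZ)
        else:                             # p(y_t = j, y_{t-1} = k | X)
            p = np.exp(alpha[t-1][None, :] + logphi[t] + beta[t][:, None] - logZ)
        m = np.zeros((L, L)); m[y[t], prev[t]] = 1.0
        grad += (m - p)[:, :, None] * X[t]  # empirical minus expected counts
    return grad
```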
12 Conditional Models
- Fully observed chain structures: CRF
- Latent state: IOHMM, HRF
- Template models: RBM-CRF
13 IOHMM
[Figure: chain of latent states h_0, h_1, ..., h_T with labels y_1, ..., y_T and observations x_1, ..., x_T.]
Input output HMM [Bengio, Frasconi 1995]. It is an HMM where the transition and emission distributions are conditional on the observations. A set of latent random variables H that form a chain structure is used to maintain state information.
p(y|X) = \sum_H \prod_{t=1}^{T} p(h_t | h_{t-1}, x_t) p(y_t | h_t, x_t)
p(h_t = j | h_{t-1} = k, x_t) \propto \exp\{\lambda_{jk}^\top x_t\}
p(y_t = j | h_t = k, x_t) \propto \exp\{\theta_{jk}^\top x_t\}
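A sketch of computing p(y|X) by pushing the sum over H through the chain with a forward recursion; the fixed initial state h_0 = 0 and the array shapes are assumptions:

```python
import numpy as np
from scipy.special import logsumexp

def iohmm_log_prob(lam, theta, X, y):
    """log p(y|X) for the IOHMM, summing over the hidden chain with a
    forward recursion. lam: (K, K, d) transition weights, theta: (L, K, d)
    emission weights; the initial hidden state h_0 is fixed to 0."""
    T = X.shape[0]
    ltr = np.einsum("jkd,td->tjk", lam, X)        # log p(h_t=j | h_{t-1}=k, x_t)
    ltr -= logsumexp(ltr, axis=1, keepdims=True)  # softmax over j
    lem = np.einsum("jkd,td->tjk", theta, X)      # log p(y_t=j | h_t=k, x_t)
    lem -= logsumexp(lem, axis=1, keepdims=True)
    alpha = ltr[0, :, 0] + lem[0, y[0], :]        # over h_1, absorbing y_1
    for t in range(1, T):
        alpha = logsumexp(alpha[None, :] + ltr[t], axis=1) + lem[t, y[t], :]
    return logsumexp(alpha)
```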
14 IOHMM
The EM algorithm is used to train the IOHMM. In the E step, we need to calculate the posterior distributions p(h_t^{(i)} = j | Y^{(i)}, X^{(i)}) and p(h_t^{(i)} = j, h_{t-1}^{(i)} = k | Y^{(i)}, X^{(i)}). The M step updates:
1. the transition weights,
2. the emission weights.
Both of these updates are weighted logistic regression problems, as sketched below.
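For instance, the emission update: a sketch of the gradient of the expected complete-data log-likelihood, which has exactly the form of a logistic regression weighted by the E-step posteriors gamma[t, k] (an assumed name for the single-node posterior above):

```python
import numpy as np
from scipy.special import logsumexp

def emission_mstep_grad(theta, X, y, gamma):
    """Gradient for the emission weights theta[j, k]: each example
    (x_t, y_t) enters once per hidden state k, weighted by the E-step
    posterior gamma[t, k] = p(h_t = k | Y, X). theta: (L, K, d)."""
    T = X.shape[0]; L = theta.shape[0]
    logits = np.einsum("jkd,td->tjk", theta, X)
    p = np.exp(logits - logsumexp(logits, axis=1, keepdims=True))
    m = np.zeros((T, L)); m[np.arange(T), y] = 1.0  # one-hot targets
    # grad[j,k,:] = sum_t gamma[t,k] * (1{y_t=j} - p(y_t=j|h_t=k,x_t)) * x_t
    return np.einsum("tk,tjk,td->jkd", gamma, m[:, :, None] - p, X)
```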
15 HRF
[Figure: same chain structure as the IOHMM, but undirected.]
Hidden random field [Kakade, Teh, Roweis 2002]. It is the undirected equivalent of the IOHMM.
p(y|X) = \sum_H p(y, H | X) = (1/Z(X)) \sum_H \prod_{t=1}^{T} \phi(h_t, h_{t-1}, x_t) \phi(y_t, h_t, x_t) = (1/Z(X)) \sum_H \prod_{t=1}^{T} \exp\{\lambda_{h_t, h_{t-1}}^\top x_t + \theta_{y_t, h_t}^\top x_t\}
Like the IOHMM, training uses the EM algorithm. The M step involves optimizing a fully-observed CRF.
16 Conditional Models
- Fully observed chain structures: CRF
- Latent state: IOHMM, HRF
- Template models: RBM-CRF
17 Features
CRF models are fully specified with respect to label configurations. Adding higher-order features results in an exponential increase in model complexity.
Idea: learn higher-order structures from the data using label features.
Label feature: a parameterized feature on a group of label variables. The weights can be adjusted to change what the label feature looks for.
18 RBM-CRF
Restricted Boltzmann machine CRF [He, Zemel, Carreira-Perpinan 2004]. A label feature is a binary random variable connected to J labels.
Define label features through groups: g = (n_g, o_g, {W_{g,n}}_{n=1}^{n_g}, b_g)
- n_g: number of label features in the group.
- o_g: vector of offsets that specifies the connectivity.
Instantiate a group at time t (replication r): g(t) = (n_g, o_g(t), {W_{g,n}}_{n=1}^{n_g}, b_g), with o_g(t) = o_g + t.
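A small sketch of the rollout o_g(t) = o_g + t; keeping only replications that fall fully inside the sequence is an assumption about boundary handling:

```python
def roll_out(group, T):
    """Instantiate a group at every valid time step. group is
    (n_g, offsets); replication r at time t connects its n_g label
    features to the labels at positions offsets + t within [0, T)."""
    n_g, offsets = group
    reps = []
    for t in range(T):
        pos = [o + t for o in offsets]
        if all(0 <= p < T for p in pos):  # keep fully in-range copies only
            reps.append((t, pos))
    return reps

# A group whose features look at pairs of adjacent labels, rolled out
# over a length-5 sequence: replications at t = 0..3.
print(roll_out((10, [0, 1]), 5))
# [(0, [0, 1]), (1, [1, 2]), (2, [2, 3]), (3, [3, 4])]
```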
19 RBM-CRF: Example
[Figure: two groups rolled out over labels y_1, ..., y_4. Group f has two label features per replication (h_{f,1,r}, h_{f,2,r}) with offsets 0:1, giving three replications; group g has one label feature per replication (h_{g,1,r}) with a wider offset vector, giving two replications.]
20 RBM-CRF
An RBM-CRF is just a collection of groups. When rolled out across a sequence, the model has the form of a restricted Boltzmann machine (RBM).
p_g(Y, H) \propto \exp\{\sum_r \sum_n [b_{g,n} + \sum_k w_{g,n,k} l_k] h_{g,n,r}\} = \prod_r \prod_n \exp\{[b_{g,n} + \sum_k w_{g,n,k} l_k] h_{g,n,r}\} = \prod_r \prod_n p_{g,n,r}(h_{g,n,r}, Y)
21 RBM-CRF: Inference and Training
Exact inference is hard because of the loopy structure. However, we can exploit the bipartite nature of the graph to implement a block Gibbs sampler. RBM-CRF models are products of experts, so we can use contrastive divergence to train them.
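A generic sketch of one block-Gibbs sweep inside a CD-1 update for a plain binary RBM; the actual RBM-CRF additionally conditions on the observations and constrains each label position to a one-of-L encoding, which this sketch omits:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(W, b, c, L0, rng, lr=0.1):
    """One CD-1 step for a binary RBM over label encodings L (visible,
    shape (N, V)) and label features H (hidden, shape (N, K)). The
    bipartite graph lets us sample all of H given L, then all of L
    given H, in two blocks."""
    ph0 = sigmoid(L0 @ W + b)                         # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pl1 = sigmoid(h0 @ W.T + c)                       # one block-Gibbs sweep
    L1 = (rng.random(pl1.shape) < pl1).astype(float)
    ph1 = sigmoid(L1 @ W + b)
    n = L0.shape[0]
    W += lr * (L0.T @ ph0 - L1.T @ ph1) / n           # data minus reconstruction
    b += lr * (ph0 - ph1).mean(axis=0)
    c += lr * (L0 - L1).mean(axis=0)
    return W, b, c
```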
22 RBM-CRF: Observations
There are several ways to incorporate observations into RBM-CRF models:
1. Use a pre-trained classifier (e.g., logistic regression, CRF).
2. Train a classifier at the same time as the RBM-CRF.
3. Incorporate observations directly into the parameterization of the label feature:
p_g(Y, H | X) \propto \exp\{\sum_r \sum_n [b_{g,n} + \sum_j \theta_{g,n,j} x_j + \sum_k w_{g,n,k} l_k] h_{g,n,r}\}
23 Outline 1. Probabilistic models for structure learning 2. Experimental results 3. Conclusions and future work
24 Evaluation Metrics
With respect to label l, let:
- A be the number of true positives, B the number of false negatives,
- C be the number of false positives, D the number of true negatives.
F1 score: F1 = 2A / (2A + B + C)
Accuracy: number of items labeled correctly divided by the total number of items in a sequence.
Instance accuracy: percentage of sequences labeled entirely correctly.
We use Viterbi decoding for the CRF, IOHMM, and HRF and maximum marginals for logistic regression and the RBM-CRF. A sketch of these metrics follows.
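A sketch of the three metrics as defined above, assuming equal-length true and predicted sequences:

```python
def metrics(true_seqs, pred_seqs, label):
    """Per-label F1 plus the two accuracies defined above; A, B, C are
    the slide's true-positive, false-negative, and false-positive counts."""
    A = B = C = correct = total = exact = 0
    for ts, ps in zip(true_seqs, pred_seqs):
        A += sum(t == label == p for t, p in zip(ts, ps))
        B += sum(t == label != p for t, p in zip(ts, ps))
        C += sum(p == label != t for t, p in zip(ts, ps))
        correct += sum(t == p for t, p in zip(ts, ps))
        total += len(ts)
        exact += int(list(ts) == list(ps))
    f1 = 2 * A / (2 * A + B + C) if (2 * A + B + C) else 0.0
    return f1, correct / total, exact / len(true_seqs)

print(metrics([["A", "O", "A"]], [["A", "A", "A"]], "A"))
# (0.8, 0.666..., 0.0)
```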
25 Toy Problem 1
100 training instances and 100 test instances, each of length 50. Models require memory/knowledge of structures to classify observation 5. RBM-CRF models look at groups of six label variables. IOHMM/HRF have 3 states.
[Table: mapping from observations 1-5 to labels among C, D, E, Q, S; observation 5 maps to D or E depending on the preceding structure.]
26 Toy Problem 1
[Table: Avg. F1, Avg. Acc., and Inst. Acc. for LR/CRF, IOHMM, HRF, RBM(1), RBM(2), and RBM(3); numeric values not recoverable.]
27 Toy Problem 1
[Figure: learned label-feature weights over the labels C, D, E, Q for each model.]
28 Toy Problem 2
Three labels, the third being O; five label structures, separated by O. Each label has a distribution over observations, but the interior of the larger structures is a mixture distribution. 100 training instances, 1000 test instances. RBM(1) has 9 pair-wise label features; RBM(2) has 9 pair-wise features and 3 features that look at groups of 8 label variables. Five-fold cross-validation was used to choose the number of states in the IOHMM and the HRF (8).
29 Toy Problem 2
[Table: Avg. F1, Avg. Acc., and Inst. Acc. for LR, CRF, IOHMM, HRF, RBM(1), and RBM(2); numeric values not recoverable.]
30 Toy Problem 3
From [Kakade, Teh, Roweis 2002]. Investigates variable term memory. The CRF, IOHMM, and HRF can model the data well but achieve suboptimal performance. RBM-CRF models with label features conditional on the input can improve upon the performance of the CRF but are not able to model the data perfectly because they maintain no state.
31 Cora References
Consists of 500 bibliography entries from research papers; 350 are used for training and 150 for testing. Labels: author, book title, date, editor, institution, journal, location, note, pages, publisher, tech, title, and volume. Entries are tokenized by whitespace. Each token is processed by 4191 features: 3 * (19 regular expressions + categorical vocabulary features). The IOHMM and HRF took too long to train.
32 Cora References
Name | Groups | Observations
RBM-LR(2) | (10, 0:4), (10, 0:9), (10, 0:19) | Pre-trained LR
RBM-CRF(2) | (10, 0:4), (10, 0:9), (10, 0:19) | Pre-trained CRF
RBM(3) | (15, 0:1), (20, 0:2), (5, 0:9), (5, 0:19) | LR
Notation: (number of nodes, offsets).
33 Cora References
[Table: Avg. F1, Avg. Acc., and Inst. Acc. for LR, RBM-LR(2), CRF*, CRF, RBM-CRF(2), and RBM(3); numeric values not recoverable.]
CRF*: CRF trained by [Peng, McCallum 2005].
34 Outline 1. Probabilistic models for structure learning 2. Experimental results 3. Conclusions and future work
35 Conclusions
We evaluated several methods for doing sequence labeling, including latent state models and parameterized template models. IOHMMs and HRFs can be more expressive than CRFs, but it can be difficult to pick a good number of latent states, and training takes a long time and is subject to local optima. Template models may improve performance, but it is easy to pick suboptimal architectures. They do not maintain state, so they may not be appropriate for all tasks.
36 Future Work
RBM-CRF:
- More experiments on real data are required.
- Feature induction to learn the architecture of the model.
- Extension to a hierarchical model.
- Integration of the RBM-CRF with one or more IOHMM/HRF models in a product model.
IOHMM/HRF:
- Investigation of training issues and methods.
All:
- Put them in the Bayesian framework.
37 end.
38 Toy Problem 1
[Table: mean training time (s) and standard deviation for LR, MEMM, IMEMM, CRF, ICRF, IOHMM, HRF, RBM(1), RBM(2), and RBM(6); numeric values not recoverable.]
39
[Figure: evolution of learned label-feature weights over training iterations and replications for RBM(1), RBM(2), and RBM(3).]
40 Toy Problem 2
[Figure: five-fold cross-validation error versus number of latent states.]
41 Toy Problem 2
[Table: mean training time (s) and standard deviation for LR, MEMM, IMEMM, CRF, ICRF, IOHMM, HRF, RBM(1), and RBM(2); numeric values not recoverable.]
42 Toy Problem 2
[Figure: learned pair-wise label features 1-9 over the three labels for RBM(1).]
43 Toy Problem 2
[Figure: learned pair-wise label features 1-9 for RBM(2).]
44 Toy Problem 2
[Figure: learned higher-order label features 1-3 for RBM(2).]
45 Toy Problem 2
[Figure: training error versus number of hidden states for the IOHMM and HRF on Toy Problem 2.]
46 Toy Problem 2
[Figure: test error versus number of hidden states for the IOHMM and HRF on Toy Problem 2.]
47 Toy Problem 3
From [Kakade, Teh, Roweis 2002]. The label of the first I is always 1. The CRF can only model the 0/1/R observations. No RBM-CRF can model 0/1/R/I, as no state is maintained.
[Table: per-observation labeling rules for observations 0, 1, R, /, and I; the label of an I is the inverse of the label of the last I.]
48 Toy Problem 3
[Figure: group architectures for RBM(1) (one group with offsets -1:0) and RBM(2) (groups 1 through 10).]
49 Toy Problem 3
[Table: Avg. F1, Avg. Acc., and Inst. Acc. for LR, CRF, IOHMM, HRF, RBM(1), and RBM(2), plus per-observation error rates (including R and I); numeric values not recoverable.]