Sequential Supervised Learning


Sequential Supervised Learning

Many Application Problems Require Sequential Learning: Part-of-speech Tagging, Information Extraction from the Web, Text-to-speech Mapping

Part-of-speech Tagging. Given an English sentence, can we assign a part of speech to each word? "Do you want fries with that?" => <verb pron verb noun prep pron>

Information Extraction from the Web. Example HTML source: <dl><dt><b>Srinivasan Seshan</b> (Carnegie Mellon University) <dt><a href= ><i>Making Virtual Worlds Real</i></a> <dt>Tuesday, June, 00 <dd>:00 PM, Sieg <dd>Research Seminar. Corresponding token labels: * * * name name * * affiliation affiliation affiliation * * * * title title title title * * * date date date date * time time * location location * event-type event-type

Text-to-speech Mapping. Example: photograph => /f-Ot@graf-/

Sequential Supervised Learning (SSL). Given: a set of training examples of the form (X_i, Y_i), where X_i = <x_{i,1}, ..., x_{i,T_i}> and Y_i = <y_{i,1}, ..., y_{i,T_i}> are sequences of length T_i. Find: a function f for predicting new sequences: Y = f(X).

Examples of Sequential Supervised Learning
Domain: Part-of-speech Tagging; Input X_i: sequence of words; Output Y_i: sequence of parts of speech
Domain: Information Extraction; Input X_i: sequence of tokens; Output Y_i: sequence of field labels {name, ...}
Domain: Text-to-speech Mapping; Input X_i: sequence of letters; Output Y_i: sequence of phonemes

Two Kinds of Relationships. [Figure: chain of labels y_1, y_2, y_3 over inputs x_1, x_2, x_3.] Vertical relationships between the x_t's and y_t's. Example: "Friday" is usually a date. Horizontal relationships among the y_t's. Example: name is usually followed by affiliation. SSL can (and should) exploit both kinds of information.

Existing Methods. Hacks: sliding windows, recurrent sliding windows. Hidden Markov models: joint distribution P(X, Y). Conditional Random Fields: conditional distribution P(Y | X). Discriminant methods (HM-SVMs, max-margin Markov networks, voted perceptrons): discriminant function f(Y; X).

Sliding Windows. [Figure: a window slides over the sentence "Do you want fries with that"; each window position is classified independently, producing the labels verb, pron, verb, noun, prep, pron.]

Properties of Sliding Windows. Converts SSL to ordinary supervised learning. Only captures the relationship between (part of) X and y_t; does not explicitly model relations among the y_t's. Assumes each window is independent.
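To make the reduction to ordinary supervised learning concrete, here is a minimal sketch (not from the original slides) of turning one labeled sequence into independent windowed examples; the window half-width w and the padding token are illustrative choices.

```python
# Minimal sketch: convert one labeled sequence into ordinary supervised
# examples with a sliding window of half-width w. Positions past the ends
# are padded with a special token. Names are illustrative assumptions.
def sliding_window_examples(xs, ys, w=1, pad="<pad>"):
    """xs: list of inputs, ys: list of labels (same length)."""
    padded = [pad] * w + list(xs) + [pad] * w
    examples = []
    for t in range(len(xs)):
        window = tuple(padded[t:t + 2 * w + 1])  # x_{t-w}, ..., x_{t+w}
        examples.append((window, ys[t]))         # one ordinary (features, label) pair
    return examples

# Example: POS tagging with a window of one word on each side.
sent = ["Do", "you", "want", "fries", "with", "that"]
tags = ["verb", "pron", "verb", "noun", "prep", "pron"]
for features, label in sliding_window_examples(sent, tags, w=1):
    print(features, "->", label)
```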

Recurrent Sliding Windows. [Figure: the same sliding windows over "Do you want fries with that", but each window also receives the label predicted at the previous position (verb, pron, verb, noun, prep) as an extra input feature.]

Recurrent Sliding Windows. Key idea: include y_t as an input feature when computing y_{t+1}. During training: use the correct value of y_t, or train iteratively (especially recurrent neural networks). During evaluation: use the predicted value of y_t.

Properties of Recurrent Sliding Windows. Captures relationships among the y's, but only in one direction! Results on text-to-speech:
Method: sliding window; Direction: none; Words: .5%; Letters: 69.6%
Method: recurrent s.w.; Direction: left-right; Words: 7.0%; Letters: 67.9%
Method: recurrent s.w.; Direction: right-left; Words: .%; Letters: 7.%

Hidden Markov Models. Generalization of Naïve Bayes to SSL. [Figure: chain of labels y_1, ..., y_5 with observations x_1, ..., x_5.] Parameters: P(y_1); P(y_t | y_{t-1}), assumed the same for all t; P(x_t | y_t) = P(x_{t,1} | y_t) P(x_{t,2} | y_t) ... P(x_{t,n} | y_t), assumed the same for all t.

Making Predictions with HMMs. Two possible goals: argmax_Y P(Y | X): find the most likely sequence of labels Y given the input sequence X. argmax_{y_t} P(y_t | X) for all t: find the most likely label y_t at each time t given the entire input sequence X.

Finding Most Likely Label Sequence: The Trellis. [Trellis figure: start node s, one column of candidate labels per word of "Do you want fries sir?", final node f.] Every label sequence corresponds to a path through the trellis graph. The probability of a label sequence is proportional to P(y_1) P(x_1 | y_1) P(y_2 | y_1) P(x_2 | y_2) ... P(y_T | y_{T-1}) P(x_T | y_T).

Converting to a Shortest Path Problem. [Same trellis figure.] max_{y_1,...,y_T} P(y_1) P(x_1 | y_1) P(y_2 | y_1) P(x_2 | y_2) ... P(y_T | y_{T-1}) P(x_T | y_T) = min_{y_1,...,y_T} -log[P(y_1) P(x_1 | y_1)] - log[P(y_2 | y_1) P(x_2 | y_2)] - ... - log[P(y_T | y_{T-1}) P(x_T | y_T)]: a shortest path through the graph, with edge cost = -log[P(y_t | y_{t-1}) P(x_t | y_t)].

Finding Most Likely Label Sequence: The Viterbi Algorithm. [Trellis figure.] Step t of the Viterbi algorithm computes the possible successors of each state y_{t-1} and computes the total path length for each edge.

Finding Most Likely Label Sequence: The Viterbi Algorithm. [Trellis figure.] Each node y_t = k stores the cost µ(k) of the shortest path that reaches it from s and the predecessor class y_{t-1} = k* that achieves this cost: k* = argmin_{y_{t-1}} -log[P(y_t = k | y_{t-1}) P(x_t | y_t = k)] + µ(y_{t-1}); µ(k) = min_{y_{t-1}} -log[P(y_t = k | y_{t-1}) P(x_t | y_t = k)] + µ(y_{t-1}).

Finding Most Likely Label Sequence: The Viterbi Algorithm. [Animation over the same trellis, column by column: compute the successors of the current column, then compute and store the shortest incoming edge at each node; repeat for every word until the final node f, where the single best incoming edge is computed.]

Finding Most Likely Label Sequence: The Viterbi Algorithm. [Trellis figure.] Now trace back along the best incoming edges to recover the predicted Y sequence: verb pronoun verb noun noun.
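The whole dynamic program and traceback fit in a few lines. Here is a minimal sketch in the negative-log (shortest-path) form described above; the model interface (log-probability tables and an emission function) is an assumption for illustration, not from the slides.

```python
import math

# Minimal Viterbi sketch. log_init[k] and log_trans[(k_prev, k)] are log
# probabilities; log_emit(x, k) returns log P(x | y = k). Names are illustrative.
def viterbi(xs, labels, log_init, log_trans, log_emit):
    T = len(xs)
    cost = [{} for _ in range(T)]   # cost[t][k] = -log prob of best path ending in k
    back = [{} for _ in range(T)]   # back[t][k] = best predecessor label
    for k in labels:
        cost[0][k] = -(log_init[k] + log_emit(xs[0], k))
        back[0][k] = None
    for t in range(1, T):
        for k in labels:
            best_prev, best_cost = None, math.inf
            for kp in labels:
                c = cost[t - 1][kp] - (log_trans[(kp, k)] + log_emit(xs[t], k))
                if c < best_cost:
                    best_prev, best_cost = kp, c
            cost[t][k], back[t][k] = best_cost, best_prev
    # Trace back from the cheapest final state.
    k = min(cost[T - 1], key=cost[T - 1].get)
    path = [k]
    for t in range(T - 1, 0, -1):
        k = back[t][k]
        path.append(k)
    return list(reversed(path))
```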

Finding the Most Likely Label at time t: P(y_t | X). [Trellis figure.] P(y_t = k | X) = (probability of reaching y_t = k from the start) * (probability of getting from y_t = k to the finish).

Finding the most likely class at each time t. Goal: compute P(y_t | x_1, ..., x_T)
∝ Σ_{y_{1:t-1}} Σ_{y_{t+1:T}} P(y_1) P(x_1 | y_1) P(y_2 | y_1) P(x_2 | y_2) ... P(y_T | y_{T-1}) P(x_T | y_T)
= [Σ_{y_{1:t-1}} P(y_1) P(x_1 | y_1) P(y_2 | y_1) P(x_2 | y_2) ... P(y_t | y_{t-1})] P(x_t | y_t) [Σ_{y_{t+1:T}} P(y_{t+1} | y_t) P(x_{t+1} | y_{t+1}) ... P(y_T | y_{T-1}) P(x_T | y_T)].
The sums can be pushed inside the products:
Σ_{y_{t-1}} [ ... Σ_{y_2} [Σ_{y_1} P(y_1) P(x_1 | y_1) P(y_2 | y_1)] P(x_2 | y_2) P(y_3 | y_2) ... ] P(x_t | y_t) for the left factor, and
Σ_{y_{t+1}} [P(y_{t+1} | y_t) P(x_{t+1} | y_{t+1}) ... Σ_{y_T} [P(y_T | y_{T-1}) P(x_T | y_T)] ... ] for the right factor.

Forward-Backward Algorithm. α_t(y_t) = Σ_{y_{t-1}} P(y_t | y_{t-1}) P(x_t | y_t) α_{t-1}(y_{t-1}). This is the sum over the arcs coming into y_t = k; it is computed forward along the sequence and stored in the trellis. β_t(y_t) = Σ_{y_{t+1}} P(y_{t+1} | y_t) P(x_{t+1} | y_{t+1}) β_{t+1}(y_{t+1}). It is computed backward along the sequence and stored in the trellis. P(y_t | X) = α_t(y_t) β_t(y_t) / [Σ_k α_t(k) β_t(k)].
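A minimal sketch of these recurrences, assuming the HMM is supplied as probability tables and an emission function (illustrative names, not from the slides):

```python
# Minimal forward-backward sketch matching the recurrences above.
# init[k] and trans[(k_prev, k)] are probabilities; emit(x, k) returns P(x | y=k).
def forward_backward(xs, labels, init, trans, emit):
    """Returns P(y_t = k | X) for every t and k."""
    T = len(xs)
    alpha = [{} for _ in range(T)]
    beta = [{} for _ in range(T)]
    for k in labels:                                   # alpha_1(k) = P(y_1=k) P(x_1 | k)
        alpha[0][k] = init[k] * emit(xs[0], k)
    for t in range(1, T):                              # forward pass
        for k in labels:
            alpha[t][k] = emit(xs[t], k) * sum(
                trans[(kp, k)] * alpha[t - 1][kp] for kp in labels)
    for k in labels:                                   # beta_T(k) = 1
        beta[T - 1][k] = 1.0
    for t in range(T - 2, -1, -1):                     # backward pass
        for k in labels:
            beta[t][k] = sum(
                trans[(k, kn)] * emit(xs[t + 1], kn) * beta[t + 1][kn] for kn in labels)
    posteriors = []
    for t in range(T):                                 # normalize alpha_t * beta_t
        z = sum(alpha[t][k] * beta[t][k] for k in labels)
        posteriors.append({k: alpha[t][k] * beta[t][k] / z for k in labels})
    return posteriors
```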

Training Hidden Markov Models. If the inputs and outputs are fully observed, this is extremely easy: P(y_1 = k) = [# examples with y_1 = k] / m; P(y_t = k | y_{t-1} = k') = [# of k' -> k transitions] / [# of times y_{t-1} = k']; P(x_j = v | y = k) = [# times y = k and x_j = v] / [# times y = k]. Should apply Laplace corrections to these estimates.
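For concreteness, a minimal sketch of fully supervised estimation by counting with Laplace smoothing; the data layout and names are assumptions for illustration:

```python
from collections import Counter, defaultdict

# Minimal sketch: estimate HMM parameters from fully observed sequences by
# counting, with Laplace smoothing alpha. sequences is a list of (xs, ys) pairs.
def train_hmm(sequences, labels, vocab, alpha=1.0):
    init, trans, emit = Counter(), Counter(), defaultdict(Counter)
    label_count = Counter()
    for xs, ys in sequences:
        init[ys[0]] += 1
        for t, (x, y) in enumerate(zip(xs, ys)):
            label_count[y] += 1
            emit[y][x] += 1
            if t > 0:
                trans[(ys[t - 1], y)] += 1
    m = len(sequences)
    K, V = len(labels), len(vocab)
    p_init = {k: (init[k] + alpha) / (m + alpha * K) for k in labels}
    # Denominator counts all occurrences of k' (a standard simplification here).
    p_trans = {(kp, k): (trans[(kp, k)] + alpha) / (label_count[kp] + alpha * K)
               for kp in labels for k in labels}
    p_emit = {(k, v): (emit[k][v] + alpha) / (label_count[k] + alpha * V)
              for k in labels for v in vocab}
    return p_init, p_trans, p_emit
```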

Conditional Random Fields. [Figure: chain of labels y_1, y_2, y_3, each connected to the inputs x_1, x_2, x_3.] The y_t's form a Markov random field conditioned on X: P(Y | X). Lafferty, McCallum, & Pereira (2001).

Markov Random Fields. Graph G = (V, E). Each vertex v ∈ V represents a random variable y_v. Each edge represents a direct probabilistic dependency. P(Y) = (1/Z) exp[Σ_c Ψ_c(c(Y))], where c indexes the cliques in the graph, Ψ_c is a potential function, and c(Y) selects the random variables participating in clique c.

A Simple MRF. [Figure: chain y_1 - y_2 - y_3.] Cliques: singletons {y_1}, {y_2}, {y_3}; pairs (edges) {y_1, y_2}, {y_2, y_3}. P(<y_1, y_2, y_3>) = (1/Z) exp[Ψ_1(y_1) + Ψ_2(y_2) + Ψ_3(y_3) + Ψ_12(y_1, y_2) + Ψ_23(y_2, y_3)].

CRF Potential Functions are Conditioned on X. [Figure: chain y_1, y_2, y_3 over inputs x_1, x_2, x_3.] Ψ_t(y_t, X): how compatible is y_t with X? Ψ_{t,t-1}(y_t, y_{t-1}, X): how compatible is a transition from y_{t-1} to y_t with X?

CRF Potentials are Log-Linear Models. Ψ_t(y_t, X) = Σ_b β_b g_b(y_t, X) and Ψ_{t,t+1}(y_t, y_{t+1}, X) = Σ_a λ_a f_a(y_t, y_{t+1}, X), where the g_b and f_a are user-defined boolean functions ("features"). Example: g = [x_t = "o" and y_t = /@/]. We will lump them together as Ψ_t(y_t, y_{t+1}, X) = Σ_a λ_a f_a(y_t, y_{t+1}, X).
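To make the feature representation concrete, here is a minimal sketch of boolean features and the resulting unnormalized log-linear score of a label sequence; the particular features and weights are illustrative, not from the slides.

```python
# Minimal sketch: boolean CRF features f_a(y_prev, y, X, t) and the
# unnormalized score sum_t sum_a lambda_a f_a(y_{t-1}, y_t, X).
# Feature definitions and weights below are illustrative assumptions.
def f_word_is_o_and_label_schwa(y_prev, y, xs, t):
    return 1.0 if xs[t] == "o" and y == "@" else 0.0

def f_transition_f_to_schwa(y_prev, y, xs, t):
    return 1.0 if y_prev == "f" and y == "@" else 0.0

features = [f_word_is_o_and_label_schwa, f_transition_f_to_schwa]
weights = [1.3, 0.7]   # the lambda_a, normally learned by gradient descent

def sequence_score(xs, ys, features, weights):
    """Unnormalized log score; exponentiate and divide by Z to get P(Y | X)."""
    total = 0.0
    for t in range(len(xs)):
        y_prev = ys[t - 1] if t > 0 else None
        total += sum(w * f(y_prev, ys[t], xs, t) for f, w in zip(features, weights))
    return total
```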

Making Predictions with CRFs Viterbi and Forward-Backward algorithms can be applied exactly as for HMMs

Training CRFs. Let θ = {β_1, β_2, ..., λ_1, λ_2, ...} be all of our parameters. Let F_θ be our CRF, so F_θ(Y, X) = P(Y | X). Define the loss function L(Y, F_θ(Y, X)) to be the negative log likelihood: L(Y, F_θ(Y, X)) = -log F_θ(Y, X). Goal: find θ to minimize the loss (maximize likelihood). Algorithm: gradient descent.

Gradient Computation
g_q = ∂ log P(Y | X) / ∂λ_q
    = ∂/∂λ_q log [ (∏_t exp Ψ_t(y_t, y_{t-1}, X)) / Z ]
    = ∂/∂λ_q [ Σ_t Ψ_t(y_t, y_{t-1}, X) - log Z ]
    = ∂/∂λ_q [ Σ_t Σ_a λ_a f_a(y_t, y_{t-1}, X) ] - ∂ log Z / ∂λ_q
    = Σ_t f_q(y_t, y_{t-1}, X) - ∂ log Z / ∂λ_q

Gradient of Z
∂ log Z / ∂λ_q = (1/Z) ∂Z/∂λ_q
    = (1/Z) Σ_{Y'} ∂/∂λ_q exp[ Σ_t Ψ_t(y'_t, y'_{t-1}, X) ]
    = (1/Z) Σ_{Y'} exp[ Σ_t Ψ_t(y'_t, y'_{t-1}, X) ] ∂/∂λ_q Σ_t Ψ_t(y'_t, y'_{t-1}, X)
    = Σ_{Y'} (1/Z) exp[ Σ_t Ψ_t(y'_t, y'_{t-1}, X) ] Σ_t Σ_a ∂/∂λ_q λ_a f_a(y'_t, y'_{t-1}, X)
    = Σ_{Y'} P(Y' | X) Σ_t f_q(y'_t, y'_{t-1}, X)

Gradient Computation
g_q = Σ_t f_q(y_t, y_{t-1}, X) - Σ_{Y'} P(Y' | X) Σ_t f_q(y'_t, y'_{t-1}, X):
the number of times feature q is true minus the expected number of times feature q is true. This can be computed via the forward-backward algorithm. First, apply forward-backward to compute P(y_{t-1}, y_t | X):
P(y_{t-1}, y_t | X) = α_{t-1}(y_{t-1}) exp[Ψ_t(y_t, y_{t-1}, X)] β_t(y_t) / Z, where Z = Σ_{y_{t-1}} Σ_{y_t} α_{t-1}(y_{t-1}) exp[Ψ_t(y_t, y_{t-1}, X)] β_t(y_t).
Then compute the gradient with respect to each λ_q:
g_q = Σ_t f_q(y_t, y_{t-1}, X) - Σ_t Σ_{y_{t-1}} Σ_{y_t} P(y_{t-1}, y_t | X) f_q(y_t, y_{t-1}, X)
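As a concrete illustration, a minimal sketch of the "observed counts minus expected counts" gradient for one sequence, assuming the pairwise marginals P(y_{t-1}, y_t | X) have already been computed by forward-backward (all names are illustrative):

```python
# Minimal sketch: CRF gradient for one sequence as observed feature counts
# minus expected feature counts. pair_marginals[t][(kp, k)] is assumed to be
# P(y_{t-1}=kp, y_t=k | X); features are f_a(y_prev, y, xs, t) as sketched above.
def crf_gradient(xs, ys, labels, features, pair_marginals):
    grad = [0.0] * len(features)
    for q, f in enumerate(features):
        # Observed counts along the true label sequence.
        for t in range(1, len(xs)):
            grad[q] += f(ys[t - 1], ys[t], xs, t)
        # Expected counts under the model's pairwise marginals.
        for t in range(1, len(xs)):
            for kp in labels:
                for k in labels:
                    grad[q] -= pair_marginals[t][(kp, k)] * f(kp, k, xs, t)
    return grad  # take a gradient *ascent* step on the log likelihood
```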

Discriminative Methods. Learn a discriminant function to which the Viterbi algorithm can be applied: just get the right answer. Methods: averaged perceptron (Collins), hidden Markov SVMs (Altun et al.), max-margin Markov nets (Taskar et al.).

Collins Perceptron Method. If we ignore the global normalizer in the CRF, the score for a label sequence Y given an input sequence X is score(Y) = Σ_t Σ_a λ_a f_a(y_t, y_{t-1}, X). Collins' approach is to adjust the weights λ_a so that the correct label sequence gets the highest score according to the Viterbi algorithm.

Sequence Perceptron Algorithm. Initialize weights λ_a = 0. For l = 1, ..., L do: for each training example (X_i, Y_i), apply the Viterbi algorithm to find the path Ŷ with the highest score; for all a, update λ_a according to λ_a := λ_a + Σ_t [f_a(y_t, y_{t-1}, X) - f_a(ŷ_t, ŷ_{t-1}, X)]. This compares the Viterbi path to the correct path. Note that no update is made if the Viterbi path is correct.
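A minimal sketch of this training loop, reusing the illustrative feature functions from the CRF sketch and assuming a viterbi_decode helper that returns the highest-scoring sequence under the current weights:

```python
# Minimal sketch of the sequence (structured) perceptron. viterbi_decode is an
# assumed helper returning the best label sequence for the current weights.
def feature_counts(xs, ys, features):
    counts = [0.0] * len(features)
    for t in range(1, len(xs)):
        for q, f in enumerate(features):
            counts[q] += f(ys[t - 1], ys[t], xs, t)
    return counts

def train_sequence_perceptron(data, labels, features, viterbi_decode, L=10):
    weights = [0.0] * len(features)
    for _ in range(L):
        for xs, ys in data:
            y_hat = viterbi_decode(xs, labels, features, weights)
            if y_hat != list(ys):                 # no update if the Viterbi path is correct
                true_c = feature_counts(xs, ys, features)
                pred_c = feature_counts(xs, y_hat, features)
                weights = [w + tc - pc for w, tc, pc in zip(weights, true_c, pred_c)]
    return weights
```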

Averaged Perceptron. Let λ_a^{l,i} be the value of λ_a after processing training example i in iteration l. Define λ_a* = the average value of λ_a = (1/(LN)) Σ_{l,i} λ_a^{l,i}. Use these averaged weights in the final classifier.

Collins: Part-of-speech Tagging with Averaged Sequence Perceptron. Without averaging: .68% error, 0 iterations. With averaging: .9% error, 0 iterations.

Hidden Markov SVM. Define a kernel between two input values x and x': k(x, x'). Define a kernel between (X, Y) and (X', Y') as follows: K((X, Y), (X', Y')) = Σ_{s,t} I[y_{s-1} = y'_{t-1} & y_s = y'_t] + I[y_s = y'_t] k(x_s, x'_t): the number of (y_{t-1}, y_t) transitions that they share, plus the number of matching labels (weighted by similarity between the x values).
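A minimal sketch of this joint sequence kernel; the input kernel k is assumed to be supplied (for instance, a dot product over feature vectors):

```python
# Minimal sketch of the joint sequence kernel described above: shared label
# transitions plus label matches weighted by an input kernel k(x, x').
def sequence_kernel(X1, Y1, X2, Y2, k):
    total = 0.0
    for s in range(len(Y1)):
        for t in range(len(Y2)):
            if s > 0 and t > 0 and Y1[s - 1] == Y2[t - 1] and Y1[s] == Y2[t]:
                total += 1.0                     # shared (y_{t-1}, y_t) transition
            if Y1[s] == Y2[t]:
                total += k(X1[s], X2[t])         # matching labels, weighted by k(x_s, x'_t)
    return total
```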

Dual Form of Linear Classifier. Score(Y | X) = Σ_j Σ_a α_j(Y^a) K((X_j, Y^a), (X, Y)), where a indexes support-vector label sequences Y^a. The learning algorithm finds the set of Y^a label sequences and the weight values α_j(Y^a).

Dual Perceptron Algorithm. Initialize α_j = 0. For l from 1 to L do: for i from 1 to N do: Ŷ = argmax_Y Score(Y | X_i); if Ŷ ≠ Y_i then α_i(Y_i) = α_i(Y_i) + 1 and α_i(Ŷ) = α_i(Ŷ) - 1.

Hidden Markov SVM Algorithm. For all i, initialize S_i = {Y_i} (the set of support-vector sequences for i) and α_i(Y) = 0 for all Y in S_i. For l from 1 to L do: for i from 1 to N do: Ŷ = argmax_{Y ≠ Y_i} Score(Y | X_i); if Score(Y_i | X_i) < Score(Ŷ | X_i), add Ŷ to S_i, solve a quadratic program to optimize the α_i(Y) for all Y in S_i so as to maximize the margin between Y_i and all of the other Y's in S_i, and if α_i(Y) = 0, delete Y from S_i.

Altun et al. comparison

Maximum Margin Markov Networks. Define an SVM-like optimization problem to maximize the per-time-step margin. Define ΔF(X_i, Y_i, Ŷ) = F(X_i, Y_i) - F(X_i, Ŷ) and Δ(Y_i, Ŷ) = Σ_t I[ŷ_t ≠ y_{i,t}]. M3N SVM formulation: min ||w||^2 + C Σ_i ξ_i subject to w · ΔF(X_i, Y_i, Ŷ) ≥ Δ(Y_i, Ŷ) - ξ_i for all Ŷ, for all i.

Dual Form. Maximize Σ_i Σ_Ŷ α_i(Ŷ) Δ(Y_i, Ŷ) - ½ Σ_{i,Ŷ} Σ_{j,Ŷ'} α_i(Ŷ) α_j(Ŷ') [ΔF(X_i, Y_i, Ŷ) · ΔF(X_j, Y_j, Ŷ')] subject to Σ_Ŷ α_i(Ŷ) = C for all i, and α_i(Ŷ) ≥ 0 for all i, for all Ŷ. Note that there are exponentially many Ŷ label sequences.

Converting to a Polynomial-Sized Formulation. Note the constraints: Σ_Ŷ α_i(Ŷ) = C for all i, and α_i(Ŷ) ≥ 0 for all i, for all Ŷ. These imply that for each i, the α_i(Ŷ) values are proportional to a probability distribution: Q(Ŷ | X_i) = α_i(Ŷ) / C. Because the MRF is a simple chain, this distribution can be factored into local distributions: Q(Ŷ | X_i) = ∏_t Q(ŷ_{t-1}, ŷ_t | X_i). Let µ_i(ŷ_{t-1}, ŷ_t) be the unnormalized version of Q.

Reformulated Dual Form. max Σ_i Σ_t Σ_{ŷ_t} µ_i(ŷ_t) I[ŷ_t ≠ y_{i,t}] - ½ Σ_{i,j} Σ_{t,s} Σ_{ŷ_{t-1},ŷ_t} Σ_{ŷ'_{s-1},ŷ'_s} µ_i(ŷ_{t-1}, ŷ_t) µ_j(ŷ'_{s-1}, ŷ'_s) [ΔF(ŷ_{t-1}, ŷ_t, X_i) · ΔF(ŷ'_{s-1}, ŷ'_s, X_j)] subject to Σ_{ŷ_{t-1}} µ_i(ŷ_{t-1}, ŷ_t) = µ_i(ŷ_t), Σ_{ŷ_t} µ_i(ŷ_t) = C, and µ_i(ŷ_{t-1}, ŷ_t) ≥ 0.

Variables in the Dual Form. µ_i(k, k') for each training example i and each pair of possible class labels k, k': O(NK^2). µ_i(k) for each training example i and possible class label k: O(NK). Polynomial!

Taskar et al. comparison: Handwriting Recognition. [Results figure.] log-reg: logistic regression sliding window; CRF: conditional random field; mSVM: multiclass SVM sliding window; M^3N: max-margin Markov net.

Current State of the Art. Discriminative methods give the best results, but it is not clear whether they scale: published results all involve small numbers of training examples and very long training times. Work is continuing on making CRFs fast and practical: new methods for training CRFs, potentially extendable to discriminative methods.