Statistical Methods for NLP


Statistical Methods for NLP
Part I: Markov Random Fields
Part II: Equations to Implementation
Sameer Maskey
Week 14, April 20, 2010

Announcement
Next class: project presentations. 10 mins total time, strictly timed, with 2 mins for questions. Please come to my office hours to load your presentation onto my laptop, or come 15 mins early to class.

Markov Random Fields
Let x_i represent the cost of a shirt in shop i of a flea market. Shirt prices in flea market shops may be affected by the shops' proximity to each other. We can take account of such dependency with a potential function θ_ij(x_i, x_j).

Markov Random Fields
Stronger dependency ~ larger value of θ_ij(x_i, x_j).
Joint distribution over the variables:
P(x_1,\ldots,x_n) = \frac{\exp\big(\sum_{(i,j)\in E} \theta_{ij}(x_i,x_j)\big)}{\sum_{x_1,\ldots,x_n} \exp\big(\sum_{(i,j)\in E} \theta_{ij}(x_i,x_j)\big)}
Global normalization in the denominator.
Seems familiar? Log-linear models, feature functions?

Markov Random Fields (cont'd)
Rewriting in terms of factors:
P(x_1,\ldots,x_n) = \frac{1}{Z(\theta)} \exp\big(\sum_{(i,j)\in E} \theta_{ij}(x_i,x_j)\big) = \frac{1}{Z(\theta)} \prod_{(i,j)\in E} \exp(\theta_{ij}(x_i,x_j)) = \frac{1}{Z(\theta)} \prod_{(i,j)\in E} \psi_{ij}(x_i,x_j)
Potential functions: positive functions over groups of variables.
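To make the potentials ψ_ij and the global normalizer Z(θ) concrete, here is a minimal Perl sketch (not from the slides; the chain structure and potential values are invented) that computes the joint probability of a three-node binary chain by brute-force enumeration:

#!/usr/bin/perl
use strict;
use warnings;

# Toy chain MRF x1 - x2 - x3 over binary variables, one pairwise potential
# per edge.  The potential values are made up for illustration:
# larger value = stronger preference for that value combination.
my %psi12 = ( "0,0" => 2.0, "0,1" => 0.5, "1,0" => 0.5, "1,1" => 2.0 );
my %psi23 = ( "0,0" => 1.0, "0,1" => 3.0, "1,0" => 3.0, "1,1" => 1.0 );

# Unnormalized score of a full assignment: product of the edge potentials
sub score {
    my ($x1, $x2, $x3) = @_;
    return $psi12{"$x1,$x2"} * $psi23{"$x2,$x3"};
}

# Global normalization constant Z: sum of scores over every assignment
my $Z = 0;
for my $x1 (0, 1) {
    for my $x2 (0, 1) {
        for my $x3 (0, 1) {
            $Z += score($x1, $x2, $x3);
        }
    }
}

# Joint probability of one configuration
printf "Z = %.2f\n", $Z;
printf "P(x1=0,x2=0,x3=1) = %.4f\n", score(0, 0, 1) / $Z;

Brute-force enumeration of Z is exponential in the number of variables, which is why the inference algorithms later in the lecture matter.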

Potential Functions
A potential is just a table: every combination of the nodes' values is an entry. Marginalizing the potential entails collapsing the table along one dimension, e.g. marginalize over B, or marginalize over A. [Slide shows a potential table over A and B and its two marginalizations.]

Factorization
P(x_1,x_2,x_3) = \frac{1}{Z(\theta)} \psi(x_1,x_2)\,\psi(x_2,x_3)
Each of these terms is over a clique, in fact a maximal clique. Clique: a set of nodes such that there is an edge between every pair of nodes in the set.
Represent the global configuration as a product of local potentials. The Hammersley-Clifford theorem tells us how to represent a graph that is consistent with a given distribution.

Graph Separation
We liked Bayesian Networks because they represent conditional independence well; conditional independence is governed by the d-separation criterion in a Bayes Net. Markov Random Fields have an even simpler representation of independence properties: just check graph reachability (separation).

Directed vs. Undirected Graphs

Converting Directed to Undirected Graphs
Additional links are added between nodes that share a child (moralization).

Directed vs. Undirected
The distribution has not changed, but the graph representation has.

Factor Graphs
We saw that directed graphical models represent a given distribution, and we also saw how to represent the same distribution with undirected models. Factor graphs are another way to represent the same distribution: the graph has variable nodes and factor nodes, the variables involved in a factor are connected to that factor node, and the number of factor nodes equals the number of factors.

Graph Representation
We saw that we can represent distributions with three different types of graphs: directed acyclic graphs, undirected graphs, and factor graphs. Depending on the problem, one type of graph may be favored over another.

Graphical Models in NLP
Markov Random Fields for term dependencies [Metzler and Croft, 05]
Conditional Random Fields for shallow parsing [Sha and Pereira, 03]
Bayesian Networks for speech summarization [Maskey and Hirschberg, 03]
Dependency parsing with belief propagation [Smith and Eisner, 08]

Inference in Graphical Models
Weights in the network make local assertions about how two nodes are related; an inference algorithm turns these local assertions into global assertions about relations between nodes [Jordan, 02]. There are many inference algorithms; a popular one is the Junction Tree algorithm.

Inference
Inference in Naïve Bayes? Given a graph and a probability function P(x_1,x_2,\ldots,x_n), we want to compute
P(X_h \mid X_e) = \frac{P(X_h, X_e)}{P(X_e)}
which requires the marginal
P(X_e) = \sum_{X_h} P(X_h, X_e)
Computation of marginals is needed in both directed and undirected graphs.

Inference: Exploit Graph Structure
We can compute marginals by summing over all other variables, but brute force is computationally expensive and not efficient. A better algorithm: pass messages in the graph and enforce consistency among the messages.

Inference on a Chain *Next 4 slides are from Bishop book resource [1] 18

Inference on a Chain 19

Inference on a Chain 20

Inference on a Chain 21

Inference on a Chain
To compute local marginals:
Compute and store all forward messages \mu_\alpha(x_n).
Compute and store all backward messages \mu_\beta(x_n).
Compute Z at any node x_m.
Compute p(x_n) = \frac{1}{Z}\mu_\alpha(x_n)\mu_\beta(x_n) for all variables required.
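As an illustration, here is a minimal Perl sketch of this forward/backward message passing on a small binary chain (not from the slides; the chain length and pairwise potential values are invented):

#!/usr/bin/perl
use strict;
use warnings;

# Sum-product on a chain of $N binary variables with identical pairwise
# potentials (values invented for illustration).
my $N   = 4;
my @psi = ( [ 2.0, 0.5 ], [ 0.5, 2.0 ] );   # psi[a][b] for neighbouring values a, b

# Forward messages: alpha[n][x], with alpha[0][x] = 1
my @alpha = ( [ 1, 1 ] );
for my $n (1 .. $N - 1) {
    for my $x (0, 1) {
        my $sum = 0;
        $sum += $alpha[ $n - 1 ][$_] * $psi[$_][$x] for (0, 1);
        $alpha[$n][$x] = $sum;
    }
}

# Backward messages: beta[N-1][x] = 1
my @beta;
$beta[ $N - 1 ] = [ 1, 1 ];
for (my $n = $N - 2; $n >= 0; $n--) {
    for my $x (0, 1) {
        my $sum = 0;
        $sum += $psi[$x][$_] * $beta[ $n + 1 ][$_] for (0, 1);
        $beta[$n][$x] = $sum;
    }
}

# Local marginal at any node: p(x_n) = alpha * beta / Z
my $node = 2;
my $Z = 0;
$Z += $alpha[$node][$_] * $beta[$node][$_] for (0, 1);
for my $x (0, 1) {
    printf "p(x%d=%d) = %.4f\n", $node + 1, $x,
           $alpha[$node][$x] * $beta[$node][$x] / $Z;
}

Z comes out the same whichever node it is computed at, which is a useful sanity check.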

Graphical Model for Dependency Parsing
Raw sentence: He reckons the current account deficit will narrow to only 1.8 billion in September.
Part-of-speech tagging gives the POS-tagged sentence: PRP VBZ DT JJ NN NN MD VB TO RB CD CD IN NNP .
Word dependency parsing gives the word dependency parsed sentence, with labels such as SUBJ, MOD, SPEC, COMP, S-COMP, ROOT.
(slide from Smith and Eisner, 08 [2])

Dependency Parsing with Belief Propagation [Smith and Eisner, 08]
Candidate dependency links are represented as nodes of a factor graph; constraints are added to make the result a legal tree (no loops), and belief propagation finds the preferred links.

Graphical Models Summary
Distributions can be represented with graphs (directed, undirected, factor graphs). Inference on the graph can be done with message passing algorithms. Graphical models are gaining more attention in the NLP community.

Statistical Methods for NLP
Part II: Equations to Implementation

Topics We Covered
Text Mining, Text Categorization, NLP -- ML, Linear Models of Regression, Linear Methods of Classification, Support Vector Machines, Information Extraction, Topic and Document Clustering, Machine Translation, Language Modeling, Speech-to-Speech Translation, Hidden Markov Models, Maximum Entropy Models, Conditional Random Fields, K-means, Expectation Maximization, Viterbi Search, Graphical Models

Scoring Unstructured Text
Not all Amazon reviewers rate the product; some just write a review, so we may have to infer the rating from the review text. Patterns exist in unstructured text, and some of these patterns can be exploited to discover knowledge. (Example on the slide: a camera review from Amazon.)

Linear Regression
Minimize the empirical loss (predicted vs. original Y) over the parameters θ. Given our N training data points we can build the X and Y matrices and perform the matrix operations to obtain θ. For any new test point, plug its x values (features) into our regression function with the best θ values we have.

Implementation of Multiple Linear Regression
Given our N training data points we can build the X and Y matrices and perform the matrix operations. You can use MATLAB or write your own matrix multiplication implementation to get the θ matrix. For any new test point, plug its x values (features) into the regression function with the best θ values we have.

Regression Pseudocode
Load X1, ..., XN
Load Y1, ..., YN
Build X matrix (N x K) and Y vector (N x 1)
theta = (X^T X)^{-1} X^T Y
Test: for a new x, predict y = x theta
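As a runnable illustration, here is a sketch of the same recipe restricted to a single feature plus an intercept, so the normal equations reduce to scalar arithmetic (the training pairs are invented):

#!/usr/bin/perl
use strict;
use warnings;

# Least-squares fit y = theta0 + theta1 * x for one feature, so the
# normal equations reduce to scalar arithmetic.  Toy training pairs.
my @x = (1, 2, 3, 4, 5);
my @y = (2.1, 3.9, 6.2, 8.1, 9.8);
my $n = scalar(@x);

# Means of x and y
my ($sum_x, $sum_y) = (0, 0);
for my $i (0 .. $n - 1) { $sum_x += $x[$i]; $sum_y += $y[$i]; }
my ($mean_x, $mean_y) = ($sum_x / $n, $sum_y / $n);

# theta1 = covariance(x,y) / variance(x); theta0 = mean_y - theta1 * mean_x
my ($sxy, $sxx) = (0, 0);
for my $i (0 .. $n - 1) {
    $sxy += ($x[$i] - $mean_x) * ($y[$i] - $mean_y);
    $sxx += ($x[$i] - $mean_x) ** 2;
}
my $theta1 = $sxy / $sxx;
my $theta0 = $mean_y - $theta1 * $mean_x;

# Prediction for a new test point: plug x into the fitted function
printf "theta = (%.3f, %.3f), prediction at x=6: %.3f\n",
       $theta0, $theta1, $theta0 + $theta1 * 6;

With more than one feature you need a real matrix inverse, or a package such as MATLAB as the previous slide suggests.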

Text Classification
Each document is a vector of feature values with a class label:
1905.5  26    78.9  9.9  12.3  16.8  47.6 ...  -> Diabetes Journal
2000    25    85    9    15    20    45   ...  -> Hepatitis Journal
1802.1  40.3  90.4  4    10.0  4.2   1.3  ...  -> Diabetes?

Perceptron
We want to find a function that produces the least training error:
R_n(w) = \frac{1}{n}\sum_{i=1}^{n} Loss(y_i, f(x_i; w))

Perceptron Pseudocode
Load X1, ..., XN
Load Y1, ..., YN
for (i = 1 to N)
    if (y(i) * (x(i) . w) <= 0)
        w = w + y(i) * x(i)
    end
end
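The same loop as runnable Perl on a toy 2-D dataset (data invented; a constant 1 is appended to each input so the bias lives inside w):

#!/usr/bin/perl
use strict;
use warnings;

# Perceptron on 2-D points with labels +1/-1 (toy data).
# Each input gets a trailing constant 1 so w carries the bias term.
my @X = ( [ 1, 2, 1 ], [ 2, 3, 1 ], [ -1, -2, 1 ], [ -2, -1, 1 ] );
my @Y = ( 1, 1, -1, -1 );
my @w = ( 0, 0, 0 );

for my $epoch (1 .. 10) {
    for my $i (0 .. $#X) {
        # dot product w . x
        my $score = 0;
        $score += $w[$_] * $X[$i][$_] for 0 .. $#w;
        # mistake-driven update: only change w on a misclassification
        if ($Y[$i] * $score <= 0) {
            $w[$_] += $Y[$i] * $X[$i][$_] for 0 .. $#w;
        }
    }
}
printf "w = (%.1f, %.1f, %.1f)\n", @w;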

Naïve Bayes Classifier for Text
P(Y_k, X_1, X_2, \ldots, X_N) = P(Y_k)\,\prod_i P(X_i \mid Y_k)
P(Y_k) is the prior probability of the class; P(X_i | Y_k) is the conditional probability of a feature given the class. Here N is the number of words in the document, not to be confused with the total vocabulary size.

Naïve Bayes Classifier for Text
Given the training data, what are the parameters to be estimated?
P(Y): Diabetes: 0.8, Hepatitis: 0.2
P(X | Y_1): the: 0.001, diabetic: 0.02, blood: 0.0015, sugar: 0.02, weight: 0.018
P(X | Y_2): the: 0.001, diabetic: 0.0001, water: 0.0118, fever: 0.01, weight: 0.008

Naïve Bayes Pseudocode
foreach class C
    totalCount = 0
    for j = 1 to V
        totalCount = totalCount + count(wj in C)
    end
    for j = 1 to V
        C.WJProb = (count(wj in C) + delta) / (totalCount + delta * V)
    end
end
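The pseudocode covers estimation; prediction is an argmax over classes of the log prior plus the summed log conditionals. A minimal Perl sketch, with model numbers invented in the spirit of the previous slide (the unseen-word fallback is an assumption, not part of the slide):

#!/usr/bin/perl
use strict;
use warnings;

# Score a new document with an already-estimated Naive Bayes model.
# Priors and smoothed word probabilities are toy numbers.
my %prior = ( Diabetes => 0.8, Hepatitis => 0.2 );
my %cond  = (
    Diabetes  => { the => 0.001, diabetic => 0.02,   blood => 0.0015, sugar => 0.02 },
    Hepatitis => { the => 0.001, diabetic => 0.0001, water => 0.0118, fever => 0.01 },
);
my $unseen = 1e-6;    # fallback probability for words missing from a class

my @doc = qw(the diabetic blood sugar);

# argmax over classes of log P(Y) + sum_i log P(X_i | Y)
my ($best_class, $best_score);
for my $class (sort keys %prior) {
    my $score = log($prior{$class});
    for my $word (@doc) {
        my $p = $cond{$class}{$word} // $unseen;
        $score += log($p);
    }
    printf "%s: log score %.3f\n", $class, $score;
    ($best_class, $best_score) = ($class, $score)
        if !defined $best_score || $score > $best_score;
}
print "Predicted class: $best_class\n";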

SVM: Maximizing Margin with Constraints
We can combine the two inequalities to get y_i(w^T x_i + b) - 1 \ge 0 \;\forall i.
Problem formulation:
Minimize \frac{\|w\|^2}{2}
Subject to y_i(w^T x_i + b) - 1 \ge 0 \;\forall i

Dual Problem
Solve the dual problem instead:
Maximize J(\alpha) = \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{n}\alpha_i\alpha_j y_i y_j (x_i \cdot x_j)
subject to \alpha_i \ge 0 \;\forall i and \sum_{i=1}^{n}\alpha_i y_i = 0

SVM Solution
\hat{w} = \sum_{i=1}^{n}\hat{\alpha}_i y_i x_i
The solution is a linear combination of weighted training examples. It is sparse: the weights are zero for non-support vectors, so the decision function is \sum_{i \in SV}\alpha_i y_i (x_i \cdot x) + b.

Sequential Minimal Optimization (SMO) Algorithm
The weights are just linear combinations of the training vectors weighted by the alphas, but we still have not answered how we get the alphas. Coordinate ascent:
do until converged
    select a pair alpha(i) and alpha(j)
    reoptimize W(alpha) with respect to alpha(i) and alpha(j), holding all other alphas constant
done

Information Extraction Tasks
Named Entity Identification
Relation Extraction
Coreference Resolution
Term Extraction
Lexical Disambiguation
Event Detection and Classification

Classifiers for Information Extraction
Sometimes the extracted information is a sequence, e.g. extract the parts of speech for a given sentence:
This/DET is/VB a/AA good/ADJ book/NN
What kind of classifier may work well for this kind of sequence classification?

Classification with Memory
P(NN | book, prev=ADJ) = 0.80
P(VB | book, prev=ADJ) = 0.04
P(VB | book, prev=ADV) = 0.10
[Slide figure: a chain of class nodes C over observations X1 ... XN, e.g. "good book".]
The class C(t) depends on the current observation X(t) and on the previous classification state C(t-1). C(t) can be a POS tag, document class, or word class; X(t) can be text-based features.

HMM Example

Forward Algorithm: Computing alphas

Forward Algorithm Perl Code
sub Forward {
    # forward_prob_matrix[state][time] = P(o_1 .. o_t, state at time t)
    for (my $t = 0; $t < @obs; $t++) {
        # id of the current observation symbol
        my $cur_obs = $obs[$t];
        my $obs_id  = $obs_vocab_id{$cur_obs};
        for (my $j = 0; $j < $num_states; $j++) {
            # probability of state j producing the given observation
            my $obs_prob = $obs_matrix[$j][$obs_id];
            my $tot_forward_prob_for_cur_dest = 0;
            if ($t == 0) {
                # initialization: start transition prob * obs prob
                my $start_trans_prob = $start_state_matrix[$j];
                $tot_forward_prob_for_cur_dest = $start_trans_prob * $obs_prob;
            }
            else {
                # sum across transitions from all source states i ending at j:
                # previous forward prob * transition prob * obs prob
                for (my $i = 0; $i < $num_states; $i++) {
                    my $trans_prob = $trans_matrix[$i][$j];
                    $tot_forward_prob_for_cur_dest +=
                        $forward_prob_matrix[$i][$t - 1] * $trans_prob * $obs_prob;
                }
            }
            # pass the forward prob on to the next iteration
            $forward_prob_matrix[$j][$t] = $tot_forward_prob_for_cur_dest;
        }
    }
}

Viterbi Algorithm
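The slide works through a Viterbi trellis by hand. As a complement, here is a self-contained Perl sketch of the recursion for a toy two-state HMM (all probabilities invented): it mirrors the forward computation above but takes a max instead of a sum and keeps back-pointers:

#!/usr/bin/perl
use strict;
use warnings;

# Viterbi decoding for a toy 2-state HMM (all probabilities invented).
my $num_states = 2;
my @start      = (0.6, 0.4);
my @trans      = ( [ 0.7, 0.3 ], [ 0.4, 0.6 ] );     # trans[i][j]
my %vocab_id   = ( a => 0, b => 1 );
my @emit       = ( [ 0.9, 0.1 ], [ 0.2, 0.8 ] );     # emit[state][obs_id]
my @obs        = qw(a b b a);

my (@v, @backptr);   # v[t][j]: best score of any path ending in state j at time t
for my $t (0 .. $#obs) {
    my $o = $vocab_id{ $obs[$t] };
    for my $j (0 .. $num_states - 1) {
        if ($t == 0) {
            $v[0][$j] = $start[$j] * $emit[$j][$o];
        }
        else {
            # max over predecessor states i, remembering the argmax
            my ($best, $arg) = (-1, 0);
            for my $i (0 .. $num_states - 1) {
                my $score = $v[ $t - 1 ][$i] * $trans[$i][$j] * $emit[$j][$o];
                ($best, $arg) = ($score, $i) if $score > $best;
            }
            ($v[$t][$j], $backptr[$t][$j]) = ($best, $arg);
        }
    }
}

# Follow back-pointers from the best final state to recover the path
my ($best, $state) = (-1, 0);
for my $j (0 .. $num_states - 1) {
    ($best, $state) = ($v[$#obs][$j], $j) if $v[$#obs][$j] > $best;
}
my @path = ($state);
for (my $t = $#obs; $t > 0; $t--) {
    $state = $backptr[$t][$state];
    unshift @path, $state;
}
print "Best state sequence: @path\n";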

Problem 3: Forward-Backward Algorithm
Consider the transition from state i to j, tr_ij. Let p_t(tr_ij, X) be the probability that tr_ij is taken at time t and the complete output is X:
p_t(tr_ij, X) = \alpha_{t-1}(i)\, a_{ij}\, b_{ij}(x_t)\, \beta_t(j)

Problem 3: F-B algorithm (cont'd)
p_t(tr_ij, X) = \alpha_{t-1}(i)\, a_{ij}\, b_{ij}(x_t)\, \beta_t(j), where:
\alpha_{t-1}(i) = Pr(state = i, x_1 \ldots x_{t-1}): probability of being in state i having produced x_1 ... x_{t-1}
a_{ij}: transition probability from state i to j
b_{ij}(x_t): probability of output symbol x_t along transition ij
\beta_t(j) = Pr(x_{t+1} \ldots x_T \mid state = j): probability of producing x_{t+1} ... x_T given you are in state j

Estimating Transition and Emission Probabilities
a_{ij} = \frac{count(i \to j)}{\sum_{q \in Q} count(i \to q)}
\hat{a}_{ij} = \frac{\text{expected number of transitions from state } i \text{ to } j}{\text{expected number of transitions from state } i}
\hat{b}_j(x_t) = \frac{\text{expected number of times in state } j \text{ observing symbol } x_t}{\text{expected number of times in state } j}

Problem 3: F-B algorithm
Compute the α's. Since the model is forced to end at state 3, α_T(3) = 0.008632 = Pr(X).
[Slide shows the α trellis over states 1-3 and times 0-4 for the observation sequence φ, a, ab, aba, abaa.]

Problem 3: F-B algorithm (cont'd)
Compute the β's.
[Slide shows the corresponding β trellis over states 1-3 and times 0-4 for the same observation sequence.]

Problem 3: F-B algorithm (cont'd)
Compute the counts (the a posteriori probability of each transition):
c_t(tr_ij \mid X) = \alpha_{t-1}(i)\, a_{ij}\, b_{ij}(x_t)\, \beta_t(j) / Pr(X)
[Slide fills in the resulting transition counts on the trellis, e.g. 0.167 x 0.0625 x 0.333 x 0.5 / 0.008632.]

Backward Algorithm Perl Code
sub Backward {
    my (@obs) = @_;
    # we need to start at the end; indices go up to scalar(@obs) - 1
    my $end_t = scalar(@obs) - 1;
    # back_prob_matrix[state][time] = P(o_{t+1} .. o_T | state at time t)
    for (my $t = $end_t; $t >= 0; $t--) {
        for (my $i = 0; $i < $num_states; $i++) {
            my $tot_back_prob_for_cur_src = 0;
            if ($t == $end_t) {
                # initialization: probability of ending the sequence from state i
                $tot_back_prob_for_cur_src = $end_state_matrix[$i];
            }
            else {
                # observation emitted at time t+1 and its id
                my $next_obs = $obs[$t + 1];
                my $obs_id   = $obs_vocab_id{$next_obs};
                # sum across transitions to all destination states j:
                # transition prob * obs prob * next backward prob
                for (my $j = 0; $j < $num_states; $j++) {
                    my $trans_prob = $trans_matrix[$i][$j];
                    my $obs_prob   = $obs_matrix[$j][$obs_id];
                    $tot_back_prob_for_cur_src +=
                        $back_prob_matrix[$j][$t + 1] * $trans_prob * $obs_prob;
                }
            }
            $back_prob_matrix[$i][$t] = $tot_back_prob_for_cur_src;
        }
    }
}

Finding Maximum Likelihood of our Conditional Models (Multinomial Logistic Regression)
P(C \mid D, \lambda) = \prod_{(c,d) \in (C,D)} p(c \mid d, \lambda)
\log P(C \mid D, \lambda) = \sum_{(c,d) \in (C,D)} \log p(c \mid d, \lambda) = \sum_{(c,d) \in (C,D)} \log \frac{\exp\sum_i \lambda_i f_i(c,d)}{\sum_{c'} \exp\sum_i \lambda_i f_i(c',d)}

Maximizing Conditional Log Likelihood
\log P(C \mid D, \lambda) = \sum_{(c,d) \in (C,D)} \log \exp\sum_i \lambda_i f_i(c,d) - \sum_{(c,d) \in (C,D)} \log \sum_{c'} \exp\sum_i \lambda_i f_i(c',d)
Taking the derivative and setting it to zero:
\frac{\partial \log P(C \mid D, \lambda)}{\partial \lambda_i} = \sum_{(c,d) \in (C,D)} f_i(c,d) - \sum_{(c,d) \in (C,D)} \sum_{c'} P(c' \mid d, \lambda) f_i(c',d)
The first term is the empirical count of f_i; the second is the predicted count. Optimal parameters are obtained when the empirical expectation equals the predicted expectation.
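A minimal Perl sketch of the empirical-minus-predicted-count gradient and one gradient-ascent step, using class-word indicator features (the documents, feature scheme, and learning rate are invented for illustration):

#!/usr/bin/perl
use strict;
use warnings;

# One gradient step for a multinomial logistic regression (maxent) model
# with simple class-word indicator features; toy data.
my @classes = qw(Diabetes Hepatitis);
my @data = (
    { words => [qw(blood sugar weight)], label => 'Diabetes'  },
    { words => [qw(fever water)],        label => 'Hepatitis' },
);
my %w;               # weights, keyed by "class|word"
my $rate = 0.1;      # learning rate

my %grad;            # empirical count minus predicted count, per feature
for my $d (@data) {
    # unnormalized scores and softmax over classes: p(c | d, lambda)
    my %score;
    my $Z = 0;
    for my $c (@classes) {
        my $s = 0;
        $s += ($w{"$c|$_"} // 0) for @{ $d->{words} };
        $score{$c} = exp($s);
        $Z += $score{$c};
    }
    for my $c (@classes) {
        my $p = $score{$c} / $Z;
        for my $word (@{ $d->{words} }) {
            # empirical count (feature fires with the gold label)
            # minus predicted count (feature fires in proportion to p(c|d))
            my $empirical = ($c eq $d->{label}) ? 1 : 0;
            $grad{"$c|$word"} = ($grad{"$c|$word"} // 0) + $empirical - $p;
        }
    }
}

# gradient ascent on the conditional log-likelihood
$w{$_} = ($w{$_} // 0) + $rate * $grad{$_} for keys %grad;
printf "%-22s %6.3f\n", $_, $w{$_} for sort keys %w;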

Finding Model Parameters
We saw that the optimal parameters are obtained when the empirical expectation of each feature equals its predicted expectation: we are finding the maximum-entropy model satisfying the constraints E_{\tilde{p}}(f_j) = E_p(f_j) for all features f_j. Hence finding the parameters of the maximum entropy model amounts to maximizing the conditional log-likelihood and solving it, e.g. with:
Conjugate gradient
Quasi-Newton methods
Simple iterative scaling: features are non-negative (indicator functions are non-negative); add a slack feature f_{m+1}(d,c) = M - \sum_{j=1}^{m} f_j(d,c), where M = \max_{i,c}\sum_{j=1}^{m} f_j(d_i,c)

How to Cluster Documents with No Labeled Data?
Treat cluster IDs or class labels as hidden variables and maximize the likelihood of the unlabeled data. We cannot simply count for MLE, as we do not know which point belongs to which class, so we use an iterative algorithm such as K-means or EM.

Document Clustering with K-means
We can estimate the parameters with a two-step iterative process on the objective J = \sum_n \sum_k r_{nk}\|x_n - \mu_k\|^2:
Step 1: minimize J with respect to r_{nk}, keeping \mu_k fixed.
Step 2: minimize J with respect to \mu_k, keeping r_{nk} fixed.

Step 1
Keep \mu_k fixed; minimize J with respect to r_{nk}. Optimize for each n separately by choosing the k that gives the minimum \|x_n - \mu_k\|^2:
r_{nk} = 1 if k = \arg\min_j \|x_n - \mu_j\|^2, and 0 otherwise.
Assign each data point to the closest cluster: a hard cluster-assignment decision.

Step 2
Keep r_{nk} fixed; minimize J with respect to \mu_k. J is quadratic in \mu_k, so minimize by setting the derivative with respect to \mu_k to zero:
\mu_k = \frac{\sum_n r_{nk} x_n}{\sum_n r_{nk}}
Take all the points assigned to cluster k and re-estimate the mean for cluster k.
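Both steps in one minimal runnable Perl sketch (toy 2-D points and hand-picked initial means; in practice the two steps are repeated until the assignments stop changing):

#!/usr/bin/perl
use strict;
use warnings;

# One K-means iteration (assignment step, then mean update) on toy 2-D
# points with K=2 and hand-picked initial means.
my @points = ( [ 1.0, 1.1 ], [ 0.9, 0.8 ], [ 5.0, 5.2 ], [ 5.1, 4.9 ] );
my @mu     = ( [ 0.0, 0.0 ], [ 6.0, 6.0 ] );

# squared Euclidean distance between two points (array refs)
sub sqdist {
    my ($p, $q) = @_;
    my $d = 0;
    $d += ($p->[$_] - $q->[$_]) ** 2 for 0 .. $#$p;
    return $d;
}

# Step 1: assign each point to its nearest mean (hard assignment r_nk)
my @assign;
for my $n (0 .. $#points) {
    my ($best_k, $best_d) = (0, sqdist($points[$n], $mu[0]));
    for my $k (1 .. $#mu) {
        my $d = sqdist($points[$n], $mu[$k]);
        ($best_k, $best_d) = ($k, $d) if $d < $best_d;
    }
    $assign[$n] = $best_k;
}

# Step 2: re-estimate each mean as the average of its assigned points
for my $k (0 .. $#mu) {
    my @members = grep { $assign[$_] == $k } 0 .. $#points;
    next unless @members;                 # keep the old mean if the cluster is empty
    for my $dim (0 .. $#{ $mu[$k] }) {
        my $sum = 0;
        $sum += $points[$_][$dim] for @members;
        $mu[$k][$dim] = $sum / scalar(@members);
    }
    printf "mu_%d = (%.2f, %.2f)\n", $k, @{ $mu[$k] };
}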

Expectation Maximization for Gaussian Mixture Models
E-step: \gamma(z_{nk}) = E(z_{nk} \mid x_n) = p(z_k = 1 \mid x_n)
\gamma(z_{nk}) = \frac{\pi_k \mathcal{N}(x_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K}\pi_j \mathcal{N}(x_n \mid \mu_j, \Sigma_j)}

Estimating Parameters
M-step:
\mu_k = \frac{1}{N_k}\sum_{n=1}^{N}\gamma(z_{nk}) x_n
\Sigma_k = \frac{1}{N_k}\sum_{n=1}^{N}\gamma(z_{nk})(x_n - \mu_k)(x_n - \mu_k)^T
\pi_k = \frac{N_k}{N}, where N_k = \sum_{n=1}^{N}\gamma(z_{nk})
Iterate until convergence of the log likelihood:
\log p(X \mid \pi, \mu, \Sigma) = \sum_{n=1}^{N}\log\Big(\sum_{k=1}^{K}\pi_k \mathcal{N}(x_n \mid \mu_k, \Sigma_k)\Big)
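Here are both steps for a one-dimensional mixture of two Gaussians as a minimal runnable Perl sketch (toy data; in 1-D the covariance matrices of the slide reduce to scalar variances):

#!/usr/bin/perl
use strict;
use warnings;

# EM for a 1-D mixture of two Gaussians on toy data.
my @x   = (1.0, 1.2, 0.8, 5.0, 5.3, 4.8);
my @pi  = (0.5, 0.5);
my @mu  = (0.0, 6.0);
my @var = (1.0, 1.0);

# Gaussian density N(x | mu, var)
sub gauss {
    my ($x, $mu, $var) = @_;
    return exp( -($x - $mu) ** 2 / (2 * $var) ) / sqrt(2 * 3.14159265358979 * $var);
}

for my $iter (1 .. 20) {
    # E-step: responsibilities gamma[n][k]
    my @gamma;
    for my $n (0 .. $#x) {
        my $denom = 0;
        $denom += $pi[$_] * gauss($x[$n], $mu[$_], $var[$_]) for 0 .. $#mu;
        $gamma[$n][$_] = $pi[$_] * gauss($x[$n], $mu[$_], $var[$_]) / $denom for 0 .. $#mu;
    }
    # M-step: re-estimate means, variances and mixing weights
    for my $k (0 .. $#mu) {
        my ($Nk, $sum) = (0, 0);
        for my $n (0 .. $#x) {
            $Nk  += $gamma[$n][$k];
            $sum += $gamma[$n][$k] * $x[$n];
        }
        $mu[$k] = $sum / $Nk;
        my $sq = 0;
        $sq += $gamma[$_][$k] * ($x[$_] - $mu[$k]) ** 2 for 0 .. $#x;
        $var[$k] = $sq / $Nk;
        $pi[$k]  = $Nk / scalar(@x);
    }
}
printf "component %d: pi=%.2f mu=%.2f var=%.3f\n", $_, $pi[$_], $mu[$_], $var[$_]
    for 0 .. $#mu;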

Maximum Entropy Markov Model
MEMM inference: \hat{T} = \arg\max_T P(T \mid W) = \arg\max_T \prod_i P(t_i \mid t_{i-1}, w_i)
HMM inference: \hat{T} = \arg\max_T P(T \mid W) = \arg\max_T P(W \mid T)\,P(T) = \arg\max_T \prod_i P(w_i \mid t_i)\,P(t_i \mid t_{i-1})

Transition Matrix Estimation
MEMM inference: \hat{T} = \arg\max_T P(T \mid W) = \arg\max_T \prod_i P(t_i \mid t_{i-1}, w_i)
The transition depends on the previous state and on the features, and the features do not have to be just word identity; they can be any feature functions. If q are states and o are observations we get
P(q_i \mid q_{i-1}, o_i) = \frac{1}{Z(o, q')}\exp\Big(\sum_i w_i f_i(o, q)\Big)

Viterbi in HMM vs. MEMM
HMM decoding: v_t(j) = \max_{i=1}^{N} v_{t-1}(i)\,P(s_j \mid s_i)\,P(o_t \mid s_j), \quad 1 \le j \le N,\; 1 < t \le T
MEMM decoding: v_t(j) = \max_{i=1}^{N} v_{t-1}(i)\,P(s_j \mid s_i, o_t), \quad 1 \le j \le N,\; 1 < t \le T
The transition distribution is computed in a maximum entropy framework but keeps the Markov assumption on states, hence the name MEMM.

Conditional Random Field
P(y \mid x; w) = \frac{\exp\big(\sum_i \sum_j w_j f_j(y_{i-1}, y_i, x, i)\big)}{\sum_{y' \in Y}\exp\big(\sum_i \sum_j w_j f_j(y'_{i-1}, y'_i, x, i)\big)}
The outer sum runs over positions i in the sequence, the inner sum over feature functions f_j with weights w_j; the denominator sums over all possible label sequences y'. A feature function can access all of the observation x. The model is log-linear in the feature functions.

Inference in Linear Chain CRF
We saw how to do inference in HMMs and MEMMs; in a CRF we can still do Viterbi dynamic-programming-based inference (the denominator does not matter for the argmax):
y^* = \arg\max_y p(y \mid x; w) = \arg\max_y \sum_j w_j f_j(x,y) = \arg\max_y \sum_j w_j \sum_i f_j(y_{i-1}, y_i, x, i) = \arg\max_y \sum_i g_i(y_{i-1}, y_i)
where g_i(y_{i-1}, y_i) = \sum_j w_j f_j(y_{i-1}, y_i, x, i); the x and i arguments of f_j are dropped in the definition of g_i. Each g_i is different: it depends on w, x and i.

Computing Expected Counts
P(y \mid x; w) = \frac{\exp\big(\sum_i \sum_j w_j f_j(y_{i-1}, y_i, x, i)\big)}{\sum_{y' \in Y}\exp\big(\sum_i \sum_j w_j f_j(y'_{i-1}, y'_i, x, i)\big)}
For training we need to compute the denominator (the partition function Z(x)).
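The denominator can be computed with a forward pass over the position-wise scores g_i from the previous slide instead of enumerating label sequences. A minimal Perl sketch with random scores standing in for \sum_j w_j f_j (the labels, sequence length, and scores are invented):

#!/usr/bin/perl
use strict;
use warnings;

# Computing the CRF denominator Z(x) with a forward pass over the
# position-wise scores g_i(y_{i-1}, y_i).  The g scores are random
# stand-ins; in a real model g_i = sum_j w_j * f_j(y_{i-1}, y_i, x, i).
my @labels = qw(B I O);
my $T      = 4;                       # sequence length
srand(7);
my %g;                                # g{position}{prev}{cur}
for my $i (1 .. $T) {
    for my $prev ('START', @labels) {
        $g{$i}{$prev}{$_} = rand() - 0.5 for @labels;
    }
}

# Forward recursion: alpha[i]{y} = sum_{y'} alpha[i-1]{y'} * exp(g_i(y', y))
my %alpha;
$alpha{1}{$_} = exp($g{1}{'START'}{$_}) for @labels;
for my $i (2 .. $T) {
    for my $y (@labels) {
        my $sum = 0;
        $sum += $alpha{ $i - 1 }{$_} * exp($g{$i}{$_}{$y}) for @labels;
        $alpha{$i}{$y} = $sum;
    }
}

# Z(x) is the sum of the final forward values
my $Z = 0;
$Z += $alpha{$T}{$_} for @labels;
printf "Z(x) = %.4f\n", $Z;

The forward pass costs O(T x |labels|^2), whereas brute-force enumeration of the label sequences would cost |labels|^T.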

Optimization: Stochastic Gradient Ascent
for all training pairs (x, y)
    for all j
        compute E_{y' \sim p(y' \mid x; w)}[f_j(x, y')]
        w_j := w_j + \alpha\big(f_j(x,y) - E_{y' \sim p(y' \mid x; w)}[f_j(x, y')]\big)
    end for
end for

References
[1] Christopher Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
[2] David Smith and Jason Eisner, Dependency Parsing by Belief Propagation, Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 145-156, Honolulu, October 2008.