Expectation Maximization
Subhransu Maji
CMPSCI 689: Machine Learning
14 April 2015
Motivation

Suppose you are building a naive Bayes spam classifier. After you are done, your boss tells you that there is no money to label the data. You have a probabilistic model that assumes labeled data, but you don't have any labels. Can you still do something?

Amazingly, you can:
- Treat the labels as hidden variables and try to learn them simultaneously along with the parameters of the model.

Expectation Maximization (EM):
- A broad family of algorithms for solving hidden-variable problems.
- In today's lecture we will derive EM algorithms for clustering and naive Bayes classification, and learn why EM works.
Gaussian mixture model for clustering

Suppose the data comes from a Gaussian mixture model (GMM): there are K clusters, and the data from cluster k is drawn from a Gaussian with mean \mu_k and variance \sigma_k^2. We will assume that the data comes with labels (we will soon remove this assumption).

Generative story of the data:
- For each example n = 1, 2, ..., N:
  - Choose a label y_n \sim \mathrm{Mult}(\theta_1, \theta_2, \ldots, \theta_K)
  - Choose an example x_n \sim \mathcal{N}(\mu_{y_n}, \sigma_{y_n}^2)

Likelihood of the data:

p(D) = \prod_{n=1}^{N} p(y_n)\, p(x_n \mid y_n)
     = \prod_{n=1}^{N} \theta_{y_n}\, \mathcal{N}(x_n; \mu_{y_n}, \sigma_{y_n}^2)
     = \prod_{n=1}^{N} \theta_{y_n} \left(2\pi\sigma_{y_n}^2\right)^{-D/2} \exp\left(-\frac{\|x_n - \mu_{y_n}\|^2}{2\sigma_{y_n}^2}\right)
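To make the generative story concrete, here is a minimal sketch in Python/NumPy that samples 1-D data from a GMM; the particular weights, means, and variances are made-up values for illustration.

```python
# A minimal sketch of the GMM generative story (1-D case, so D = 1).
# The mixing weights theta, means mu, and variances sigma2 below are
# hypothetical values chosen just for this example.
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.5, 0.3, 0.2])    # mixing weights, non-negative, sum to 1
mu = np.array([-2.0, 0.0, 3.0])      # per-cluster means
sigma2 = np.array([0.5, 1.0, 0.25])  # per-cluster variances

N = 1000
y = rng.choice(len(theta), size=N, p=theta)  # y_n ~ Mult(theta)
x = rng.normal(mu[y], np.sqrt(sigma2[y]))    # x_n ~ N(mu_{y_n}, sigma2_{y_n})
```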
GMM: known labels

Likelihood of the data:

p(D) = \prod_{n=1}^{N} \theta_{y_n} \left(2\pi\sigma_{y_n}^2\right)^{-D/2} \exp\left(-\frac{\|x_n - \mu_{y_n}\|^2}{2\sigma_{y_n}^2}\right)

If you knew the labels y_n, then the maximum-likelihood estimates of the parameters are easy:

\theta_k = \frac{1}{N} \sum_n [y_n = k]    // fraction of examples with label k

\mu_k = \frac{\sum_n [y_n = k]\, x_n}{\sum_n [y_n = k]}    // mean of all the examples with label k

\sigma_k^2 = \frac{\sum_n [y_n = k]\, \|x_n - \mu_k\|^2}{\sum_n [y_n = k]}    // variance of all the examples with label k
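A small sketch of these known-label estimates, assuming 1-D data in a NumPy array x and integer labels y in {0, ..., K-1}; the function name mle_known_labels is ours:

```python
# Maximum-likelihood estimates when the labels y are observed
# (continuing the 1-D example sampled above).
import numpy as np

def mle_known_labels(x, y, K):
    theta = np.zeros(K); mu = np.zeros(K); sigma2 = np.zeros(K)
    for k in range(K):
        mask = (y == k)
        theta[k] = mask.mean()                        # fraction with label k
        mu[k] = x[mask].mean()                        # mean of examples with label k
        sigma2[k] = ((x[mask] - mu[k]) ** 2).mean()   # variance of those examples
    return theta, mu, sigma2
```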
GMM: unknown labels

Now suppose you didn't have the labels y_n. Analogous to k-means, one solution is to iterate: start by guessing the parameters, and then repeat two steps:
- Estimate the labels given the parameters
- Estimate the parameters given the labels

In k-means we assigned each point to a single cluster, also called a hard assignment (point 10 goes to cluster 2). In expectation maximization (EM) we will use a soft assignment (point 10 goes half to cluster 2 and half to cluster 5).

Let's define a random variable z_n = [z_{n,1}, z_{n,2}, \ldots, z_{n,K}] to denote the assignment vector for the n-th point:
- Hard assignment: exactly one z_{n,k} is 1, the rest are 0
- Soft assignment: the z_{n,k} are non-negative and sum to 1
GMM: parameter estimation

Formally, z_{n,k} is the probability that the n-th point goes to cluster k:

z_{n,k} = p(y_n = k \mid x_n) = \frac{p(y_n = k, x_n)}{p(x_n)} \propto p(y_n = k)\, p(x_n \mid y_n = k) = \theta_k\, \mathcal{N}(x_n; \mu_k, \sigma_k^2)

Given a set of parameters (\theta_k, \mu_k, \sigma_k^2), z_{n,k} is easy to compute. Given z_{n,k}, we can update the parameters (\theta_k, \mu_k, \sigma_k^2) as:

\theta_k = \frac{1}{N} \sum_n z_{n,k}    // fraction of examples with label k

\mu_k = \frac{\sum_n z_{n,k}\, x_n}{\sum_n z_{n,k}}    // mean of all the fractional examples with label k

\sigma_k^2 = \frac{\sum_n z_{n,k}\, \|x_n - \mu_k\|^2}{\sum_n z_{n,k}}    // variance of all the fractional examples with label k
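Putting the two steps together, a minimal EM sketch for a 1-D GMM. Initializing the means by picking K random data points is one common heuristic, not the only choice; the function names are ours:

```python
# EM sketch for a 1-D GMM: the E-step computes soft assignments z[n, k],
# the M-step re-estimates (theta, mu, sigma2) from the fractional counts.
import numpy as np

def gaussian_pdf(x, mu, sigma2):
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def em_gmm(x, K, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.full(K, 1.0 / K)
    mu = rng.choice(x, size=K, replace=False)  # random initialization
    sigma2 = np.full(K, x.var())
    for _ in range(iters):
        # E-step: z[n, k] proportional to theta_k * N(x_n; mu_k, sigma2_k)
        z = theta * gaussian_pdf(x[:, None], mu, sigma2)
        z /= z.sum(axis=1, keepdims=True)
        # M-step: weighted versions of the known-label estimates
        Nk = z.sum(axis=0)
        theta = Nk / len(x)
        mu = (z * x[:, None]).sum(axis=0) / Nk
        sigma2 = (z * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return theta, mu, sigma2
```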
GMM: example

We have replaced the indicator variable [y_n = k] with p(y_n = k \mid x_n), which is the expectation of [y_n = k]. This is our guess of the labels. Just like k-means, EM is susceptible to local optima.

Clustering example (k-means vs. GMM): http://nbviewer.ipython.org/github/nicta/mlss/tree/master/clustering/
The EM framework

We have data with observations x_n and hidden variables y_n, and would like to estimate the parameters \theta.

The likelihood of the data and hidden variables:

p(D) = \prod_n p(x_n, y_n \mid \theta)

Only the x_n are known, so we can compute the data likelihood by marginalizing out the y_n:

p(D \mid \theta) = \prod_n \sum_{y_n} p(x_n, y_n \mid \theta)

Parameter estimation by maximizing the log-likelihood:

\theta_{ML} = \arg\max_\theta \sum_n \log \sum_{y_n} p(x_n, y_n \mid \theta)

This is hard to maximize since the sum is inside the log.
Jensen's inequality

Given a concave function f and a set of weights \lambda_i \ge 0 with \sum_i \lambda_i = 1, Jensen's inequality states that

f\left(\sum_i \lambda_i x_i\right) \ge \sum_i \lambda_i f(x_i)

This is a direct consequence of concavity: f(ax + by) \ge a f(x) + b f(y) when a \ge 0, b \ge 0, a + b = 1.

[Figure: a concave curve with a chord from (x, f(x)) to (y, f(y)); the function value f(ax + by) lies above the chord value a f(x) + b f(y).]
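A quick numeric sanity check of the inequality for the concave function f = log, with made-up weights and points:

```python
# Jensen's inequality for f = log:
#   log(sum_i lambda_i * x_i) >= sum_i lambda_i * log(x_i)
import numpy as np

lam = np.array([0.2, 0.5, 0.3])  # non-negative weights summing to 1
x = np.array([1.0, 4.0, 9.0])
lhs = np.log(np.dot(lam, x))     # log(4.9) ~= 1.589
rhs = np.dot(lam, np.log(x))     # ~= 1.352
assert lhs >= rhs
```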
The EM framework

Construct a lower bound on the log-likelihood using Jensen's inequality:

L(\theta) = \sum_n \log \sum_{y_n} p(x_n, y_n \mid \theta)
          = \sum_n \log \sum_{y_n} q(y_n) \frac{p(x_n, y_n \mid \theta)}{q(y_n)}
          \ge \sum_n \sum_{y_n} q(y_n) \log \frac{p(x_n, y_n \mid \theta)}{q(y_n)}    // Jensen's inequality, with f = log, \lambda_i = q(y_n), x_i = p(x_n, y_n \mid \theta)/q(y_n)
          = \sum_n \sum_{y_n} \left[ q(y_n) \log p(x_n, y_n \mid \theta) - q(y_n) \log q(y_n) \right]
          = \hat{L}(\theta)

Maximize the lower bound (the second term is independent of \theta):

\arg\max_\theta \sum_n \sum_{y_n} q(y_n) \log p(x_n, y_n \mid \theta)
Lower bound illustrated

Maximizing the lower bound increases the value of the original function if the lower bound touches the function at the current value.

[Figure: the log-likelihood L(\theta) and the lower bound \hat{L}(\theta), which touches L at \theta^t; maximizing \hat{L} moves the estimate from \theta^t to \theta^{t+1}, where \hat{L}(\theta^{t+1}) \ge \hat{L}(\theta^t).]
An optimal lower bound

Any choice of the probability distribution q(y_n) is valid as long as the lower bound touches the function at the current estimate of \theta, i.e., L(\theta^t) = \hat{L}(\theta^t).

We can then pick the optimal q(y_n) by maximizing the lower bound:

\arg\max_q \sum_{y_n} \left[ q(y_n) \log p(x_n, y_n \mid \theta) - q(y_n) \log q(y_n) \right]

This gives us q(y_n) = p(y_n \mid x_n, \theta^t). Proof: use Lagrange multipliers with the sum-to-one constraint (sketched below).

This is the distribution of the hidden variables conditioned on the data and the current estimate of the parameters. This is exactly what we computed in the GMM example.
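A sketch of the Lagrange-multiplier argument for a single example n, with the sum-to-one constraint from the slide:

```latex
% Maximize F(q) = \sum_y [ q(y) \log p(x_n, y \mid \theta) - q(y) \log q(y) ]
% subject to \sum_y q(y) = 1.
\begin{align*}
\Lambda(q, \lambda) &= \sum_{y} \big[ q(y) \log p(x_n, y \mid \theta) - q(y) \log q(y) \big]
                      + \lambda \Big( \sum_{y} q(y) - 1 \Big) \\
\frac{\partial \Lambda}{\partial q(y)} &= \log p(x_n, y \mid \theta) - \log q(y) - 1 + \lambda = 0 \\
\Rightarrow\; q(y) &\propto p(x_n, y \mid \theta)
  \;\Rightarrow\; q(y) = \frac{p(x_n, y \mid \theta)}{\sum_{y'} p(x_n, y' \mid \theta)} = p(y \mid x_n, \theta).
\end{align*}
```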
The EM algorithm

We have data with observations x_n and hidden variables y_n, and would like to estimate the parameters \theta of the distribution p(x \mid \theta).

EM algorithm:
- Initialize the parameters \theta randomly.
- Iterate between the following two steps:
  - E step: compute the probability distribution over the hidden variables: q(y_n) = p(y_n \mid x_n, \theta)
  - M step: maximize the lower bound: \theta = \arg\max_\theta \sum_n \sum_{y_n} q(y_n) \log p(x_n, y_n \mid \theta)

The EM algorithm is a great candidate when the M-step can be done easily but p(x \mid \theta) cannot be easily optimized over \theta. E.g., for GMMs it was easy to compute the means and variances given the memberships.
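One way to see the guarantee in practice is to monitor the marginal log-likelihood, which EM never decreases across iterations. A sketch for the 1-D GMM, assuming the same parameterization as the EM sketch above (gaussian_pdf is repeated here so the snippet stands alone):

```python
# The quantity EM is guaranteed not to decrease:
#   log p(x | theta) = sum_n log sum_k theta_k N(x_n; mu_k, sigma2_k)
import numpy as np

def gaussian_pdf(x, mu, sigma2):
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def gmm_log_likelihood(x, theta, mu, sigma2):
    per_point = (theta * gaussian_pdf(x[:, None], mu, sigma2)).sum(axis=1)
    return np.log(per_point).sum()
```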
Naive Bayes: revisited

Consider the binary prediction problem. Let the data be distributed according to a probability distribution:

p_\theta(y, \mathbf{x}) = p_\theta(y, x_1, x_2, \ldots, x_D)

We can simplify this using the chain rule of probability:

p_\theta(y, \mathbf{x}) = p_\theta(y)\, p_\theta(x_1 \mid y)\, p_\theta(x_2 \mid x_1, y) \cdots p_\theta(x_D \mid x_1, x_2, \ldots, x_{D-1}, y)
                        = p_\theta(y) \prod_{d=1}^{D} p_\theta(x_d \mid x_1, x_2, \ldots, x_{d-1}, y)

Naive Bayes assumption:

p_\theta(x_d \mid x_{d'}, y) = p_\theta(x_d \mid y), \quad \forall d' \ne d

E.g., the words "free" and "money" are independent given spam.
Naive Bayes: a simple case

Case: binary labels and binary features.

p_\theta(y) = \mathrm{Bernoulli}(\theta_0)
p_\theta(x_d \mid y = +1) = \mathrm{Bernoulli}(\theta_d^+)
p_\theta(x_d \mid y = -1) = \mathrm{Bernoulli}(\theta_d^-)

Probability of the data (1 + 2D parameters):

p_\theta(y, \mathbf{x}) = p_\theta(y) \prod_{d=1}^{D} p_\theta(x_d \mid y)
  = \theta_0^{[y=+1]} (1 - \theta_0)^{[y=-1]}
    \times \prod_{d=1}^{D} (\theta_d^+)^{[x_d=1, y=+1]} (1 - \theta_d^+)^{[x_d=0, y=+1]}    // label +1
    \times \prod_{d=1}^{D} (\theta_d^-)^{[x_d=1, y=-1]} (1 - \theta_d^-)^{[x_d=0, y=-1]}    // label -1
Naive Bayes: parameter estimation

Given data, we can estimate the parameters by maximizing the data likelihood. The maximum-likelihood estimates are:

\hat{\theta}_0 = \frac{\sum_n [y_n = +1]}{N}    // fraction of the data with label +1

\hat{\theta}_d^+ = \frac{\sum_n [x_{d,n} = 1, y_n = +1]}{\sum_n [y_n = +1]}    // fraction of the instances with x_d = 1 among label +1

\hat{\theta}_d^- = \frac{\sum_n [x_{d,n} = 1, y_n = -1]}{\sum_n [y_n = -1]}    // fraction of the instances with x_d = 1 among label -1
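A sketch of these counts, assuming x is an (N, D) binary NumPy matrix and y holds +1/-1 labels; the function name nb_mle is ours:

```python
# Maximum-likelihood estimates for binary naive Bayes with observed labels.
import numpy as np

def nb_mle(x, y):
    pos, neg = (y == 1), (y == -1)
    theta0 = pos.mean()              # fraction of the data with label +1
    theta_pos = x[pos].mean(axis=0)  # p(x_d = 1 | y = +1), per feature d
    theta_neg = x[neg].mean(axis=0)  # p(x_d = 1 | y = -1), per feature d
    return theta0, theta_pos, theta_neg
```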
Naive Bayes: EM

Now suppose you don't have the labels y_n.

Initialize the parameters \theta randomly, then iterate:

E step: compute the distribution over the hidden variables q(y_n):

q(y_n = +1) = p(y_n = +1 \mid x_n, \theta) \propto \theta_0 \prod_{d=1}^{D} (\theta_d^+)^{[x_{d,n}=1]} (1 - \theta_d^+)^{[x_{d,n}=0]}

M step: estimate \theta given the guesses:

\theta_0 = \frac{\sum_n q(y_n = +1)}{N}    // fraction of the data with label +1

\theta_d^+ = \frac{\sum_n [x_{d,n} = 1]\, q(y_n = +1)}{\sum_n q(y_n = +1)}    // fraction of the instances with x_d = 1 among +1

\theta_d^- = \frac{\sum_n [x_{d,n} = 1]\, q(y_n = -1)}{\sum_n q(y_n = -1)}    // fraction of the instances with x_d = 1 among -1
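A minimal sketch of the whole loop, with the E-step done in log space for numerical stability. The uniform random initialization, the clipping, and the function name nb_em are our assumptions, not part of the slides:

```python
# EM sketch for naive Bayes without labels: the E-step computes q(y_n = +1)
# by normalizing the two joint likelihoods, the M-step reuses the ML
# formulas with fractional counts.
import numpy as np

def nb_em(x, iters=50, seed=0):
    N, D = x.shape
    rng = np.random.default_rng(seed)
    theta0 = 0.5
    theta_pos = rng.uniform(0.25, 0.75, size=D)
    theta_neg = rng.uniform(0.25, 0.75, size=D)
    for _ in range(iters):
        # E-step: log p(y = +1, x_n) and log p(y = -1, x_n)
        log_pos = np.log(theta0) + (x * np.log(theta_pos)
                  + (1 - x) * np.log(1 - theta_pos)).sum(axis=1)
        log_neg = np.log(1 - theta0) + (x * np.log(theta_neg)
                  + (1 - x) * np.log(1 - theta_neg)).sum(axis=1)
        q = 1.0 / (1.0 + np.exp(log_neg - log_pos))  # q(y_n = +1)
        # M-step: fractional-count versions of the ML estimates,
        # clipped away from 0/1 to keep the logs finite.
        theta0 = q.mean()
        theta_pos = np.clip((q[:, None] * x).sum(axis=0) / q.sum(),
                            1e-6, 1 - 1e-6)
        theta_neg = np.clip(((1 - q)[:, None] * x).sum(axis=0) / (1 - q).sum(),
                            1e-6, 1 - 1e-6)
    return theta0, theta_pos, theta_neg
```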
Summary

Expectation maximization:
- A general technique to estimate parameters of probabilistic models when some observations are hidden.
- EM iterates between estimating the hidden variables and optimizing the parameters given the hidden variables.
- EM can be seen as maximizing a lower bound of the data log-likelihood; we used Jensen's inequality to switch the log-sum to a sum-log.

EM can be used for learning:
- mixtures of distributions for clustering, e.g., GMMs
- parameters of hidden Markov models (next lecture)
- topic models in NLP
- probabilistic PCA
- ...
Slides credit

Some of the slides are based on the CIML book by Hal Daumé III. The figure for the EM lower bound is based on https://cxwangyi.wordpress.com/2008/11/. The k-means vs. GMM clustering example is from http://nbviewer.ipython.org/github/nicta/mlss/tree/master/clustering/