Study Notes on Latent Dirichlet Allocation


Xugang Ye

1. Model Framework

A word is an element of a dictionary {1, ..., V}. A document is represented by a sequence of N words, w = (w_1, ..., w_N), with each w_n in {1, ..., V}. A corpus is a collection of M documents, D = {w^(1), ..., w^(M)}.

[Figure: graphical representation of the two-level model; a Dirichlet mixture prior with hyperparameters {λ, q} generates a V-dimensional word-frequency belief θ, which generates the words of the document w = (w_1, ..., w_N); a Poisson prior ξ governs the document length N.]

[Figure: graphical representation of LDA; a Dirichlet prior α generates a k-dimensional topic belief θ, a topic z_n is drawn from the discrete distribution θ for each word position, and the word w_n is drawn from the word distribution of topic z_n; a Poisson prior ξ governs the document length N.]

In the two-level generative model, the probability of seeing an N-word document w is parameterized by the hyperparameters {λ, q} of the Dirichlet mixture prior, where λ = (λ_1, ..., λ_L) contains the mixture weights and q = (q^(1), ..., q^(L)) contains the V-dimensional pseudo-count vectors. The probability can be evaluated by conditioning on the word-frequency belief θ as follows:

p(w | λ, q) = ∫ p(w | θ) p(θ | λ, q) dθ
            = Σ_l λ_l ∫ p(w | θ) p(θ | q^(l)) dθ
            = Σ_l λ_l [ Γ(Σ_j q_j^(l)) / Π_j Γ(q_j^(l)) ] [ Π_j Γ(q_j^(l) + n_j) / Γ(Σ_j q_j^(l) + N) ],   (1)

which is an explicit formula, where λ_l is the l-th mixture weight, q_j^(l) is the pseudo-count of the j-th keyword in the l-th component, n_j is the count of the j-th keyword in w, N = Σ_j n_j, and Γ stands for the gamma function.
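As a quick check of (1), the following is a minimal sketch (not part of the original notes) that evaluates the formula with scipy's log-gamma function; the mixture weights and pseudo-count vectors in the example are hypothetical.

    import numpy as np
    from scipy.special import gammaln

    def log_dirichlet_multinomial(counts, q):
        """ln of one mixture component of (1): probability of a word sequence with
        keyword counts n_j under a Dirichlet prior with pseudo-counts q_j."""
        counts, q = np.asarray(counts, float), np.asarray(q, float)
        return (gammaln(q.sum()) - gammaln(q.sum() + counts.sum())
                + np.sum(gammaln(q + counts) - gammaln(q)))

    def two_level_doc_probability(counts, weights, Q):
        """Evaluate (1): p(w | lambda, q) = sum_l lambda_l * DirichletMultinomial(n; q^(l))."""
        return sum(lam * np.exp(log_dirichlet_multinomial(counts, q))
                   for lam, q in zip(weights, Q))

    # hypothetical example: V = 4 keywords, L = 2 Dirichlet components
    counts = [3, 0, 1, 2]                                # n_j, keyword counts of the document
    weights = [0.6, 0.4]                                 # mixture weights lambda_l
    Q = [[1.0, 1.0, 1.0, 1.0], [0.5, 2.0, 0.5, 2.0]]     # pseudo-count vectors q^(l)
    print(two_level_doc_probability(counts, weights, Q))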

The two-level model does not explicitly consider topics; each Dirichlet component may represent a mix of topics. Latent Dirichlet allocation (LDA) addresses the topic issue directly by incorporating a distribution of words over topics, p(w_n | z_n, β), where the matrix β = [β_ij] parameterizes this distribution and β_ij stands for the probability of seeing the j-th keyword under the i-th topic. LDA is a three-level generative model in which a topic level sits between the word level and the belief level, and in LDA θ becomes the topic belief. LDA assumes that whenever a keyword is observed there is an associated topic, hence an N-word document w is associated with a topic sequence z = (z_1, ..., z_N) of length N. By conditioning on (θ, z) and using conditional independence, the parameterized probability p(w | α, β) of seeing w can be written as

p(w | α, β) = ∫ Σ_z p(w, z, θ | α, β) dθ = ∫ Σ_z p(w, z | θ, β) p(θ | α) dθ.   (2)

By the chain rule and conditional independence, we have

p(w, z | θ, β) = p(w_1, ..., w_N, z_1, ..., z_N | θ, β)
             = p(w_N | z_N, β) p(w_1, ..., w_{N-1}, z_1, ..., z_N | θ, β)
             = ... = Π_{n=1}^N p(w_n | z_n, β) p(z_n | θ).   (3)

Plugging (3) into (2) yields

p(w | α, β) = ∫ Σ_z Π_{n=1}^N p(w_n | z_n, β) p(z_n | θ) p(θ | α) dθ.   (4)

By conditional independence across word positions, the sum over the topic sequence z factorizes:

Σ_z Π_{n=1}^N p(w_n | z_n, β) p(z_n | θ) = Π_{n=1}^N Σ_{z_n} p(w_n | z_n, β) p(z_n | θ).   (5)

Plugging (5) into (4) and exchanging the order of integration and summation yields

p(w | α, β) = ∫ Π_{n=1}^N Σ_{z_n} p(w_n | z_n, β) p(z_n | θ) p(θ | α) dθ.   (6)

Note that

Σ_{z_n} p(w_n | z_n, β) p(z_n | θ) = Σ_{i=1}^k p(w_n | z_n = i, β) p(z_n = i | θ) = Σ_{i=1}^k θ_i β_{i,w_n}.   (7)

Plugging (7) into (6) yields the final simplified form

p(w | α, β) = ∫ Π_{n=1}^N ( Σ_{i=1}^k θ_i β_{i,w_n} ) p(θ | α) dθ,   (8)

which unfortunately has no explicit formula (the integral cannot be removed because θ is latent). Noting that Π_{n=1}^N Σ_i θ_i β_{i,w_n} = Π_{j=1}^V ( Σ_i θ_i β_{ij} )^{n_j}, we also have

p(w | α, β) = ∫ Π_{j=1}^V ( Σ_{i=1}^k θ_i β_{ij} )^{n_j} p(θ | α) dθ,   (9)

which can be understood as a form of the two-level model. But in LDA there is no direct arc from θ to w; the generation has to go through the intermediate latent topic level and apply the distribution of words over topics.

The derivation of the probability (8) shows that each document can exhibit multiple topics and each word can also be associated with multiple topics. This flexibility gives LDA strong power to model a large collection of documents, compared with older models such as the unigram and mixture-of-unigrams models. Compared with pLSI, LDA has a natural way of assigning probability to a previously unseen document. To estimate α, β from a corpus D = {w^(1), ..., w^(M)}, one needs to maximize the log-likelihood of the data:

ℓ(α, β) = Σ_{d=1}^M ln p(w^(d) | α, β),   (10)

which is computationally intractable. However, the variational inference method can provide a computationally tractable lower bound.

Given a set of estimated parameters {α, β}, one way to test it is to calculate the perplexity of a test set of documents D' = {w^(1), ..., w^(M')}. The perplexity has the form

perplexity(D') = exp( - Σ_{d=1}^{M'} ln p(w^(d) | α, β) / Σ_{d=1}^{M'} N_d ),   (11)

where N_d is the total number of keywords in the d-th document. In order to evaluate (11), one has to compute ln p(w^(d) | α, β) for each d. Let f(θ) = Π_n Σ_i θ_i β_{i,w_n}. By (8), we have ln p(w | α, β) = ln ∫ f(θ) p(θ | α) dθ = ln E_α[ f(θ) ]. An estimator is ln( (1/S) Σ_{s=1}^S f(θ^(s)) ), where the samples θ^(1), ..., θ^(S) are drawn from the Dirichlet distribution Dir(α).
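The Monte Carlo estimator above is easy to implement. Below is a minimal sketch (mine, not from the notes), assuming documents are given as arrays of word ids and β has no zero entries; the function names and toy parameters are hypothetical.

    import numpy as np

    def log_doc_likelihood_mc(doc, alpha, beta, n_samples=2000, seed=0):
        """Monte Carlo estimate of ln p(w | alpha, beta) based on (8):
        draw theta ~ Dir(alpha) and average f(theta) = prod_n sum_i theta_i * beta[i, w_n]."""
        rng = np.random.default_rng(seed)
        thetas = rng.dirichlet(alpha, size=n_samples)          # S x k samples of theta
        log_f = np.log(thetas @ beta[:, doc]).sum(axis=1)      # ln f(theta^(s)) for each sample
        m = log_f.max()                                        # average in log space for stability
        return m + np.log(np.exp(log_f - m).mean())

    def perplexity(docs, alpha, beta):
        """Perplexity (11) of a list of documents (each a list of word ids)."""
        total_ll = sum(log_doc_likelihood_mc(np.asarray(d), alpha, beta) for d in docs)
        total_words = sum(len(d) for d in docs)
        return np.exp(-total_ll / total_words)

    # hypothetical example: k = 2 topics, V = 3 keywords
    alpha = np.array([1.0, 1.0])
    beta = np.array([[0.7, 0.2, 0.1],
                     [0.1, 0.3, 0.6]])
    print(perplexity([[0, 0, 2], [1, 2, 2, 0]], alpha, beta))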

2. Variational Inference

The key to the variational inference method is to apply Jensen's inequality to obtain a lower bound on ln p(w | α, β). Let q(z, θ) be a joint probability density function over (z, θ). Applying the idea of importance sampling yields

ln p(w | α, β) = ln ∫ Σ_z p(w, z, θ | α, β) dθ
             = ln ∫ Σ_z q(z, θ) [ p(w, z, θ | α, β) / q(z, θ) ] dθ
             ≥ ∫ Σ_z q(z, θ) ln [ p(w, z, θ | α, β) / q(z, θ) ] dθ =: L(q; α, β),   (12)

where the last step is by Jensen's inequality. Note that

L(q; α, β) = ∫ Σ_z q(z, θ) ln [ p(z, θ | w, α, β) p(w | α, β) / q(z, θ) ] dθ
           = ∫ Σ_z q(z, θ) ln [ p(z, θ | w, α, β) / q(z, θ) ] dθ + ln p(w | α, β)
           = -KL( q(z, θ) || p(z, θ | w, α, β) ) + ln p(w | α, β).

Hence,

ln p(w | α, β) = L(q; α, β) + KL( q(z, θ) || p(z, θ | w, α, β) ),   (13)

which says that ln p(w | α, β) is the sum of the variational lower bound and the KL distance between the variational posterior and the true posterior. In the variational method, one wants to find an L(q; α, β) that is as close to ln p(w | α, β) as possible, and this is equivalent to finding a (parameterized) q(z, θ) that has as small a KL distance to p(z, θ | w, α, β) as possible. Using the fact (factorization by conditional independence) that

p(w, z, θ | α, β) = p(w | z, β) p(z | θ) p(θ | α),   (14)

we have

L(q; α, β) = E_q[ ln p(w | z, β) ] + E_q[ ln p(z | θ) ] + E_q[ ln p(θ | α) ] - E_q[ ln q(z, θ) ].   (15)

Now, let q(z, θ) be parameterized:

[Figure: graphical representation of the variational distribution; the Dirichlet parameter γ generates θ, and each topic z_n is drawn independently with multinomial parameter φ^(n).]

q(z, θ | γ, φ^(1), ..., φ^(N)) = q(θ | γ) Π_{n=1}^N q(z_n | φ^(n)),   (16)

where γ is a Dirichlet parameter and each φ^(n) is a multinomial parameter. Applying this variational distribution to (15), and writing φ_{ni} = q(z_n = i | φ^(n)) and E_q[ln θ_i] = Ψ(γ_i) - Ψ(Σ_{i'} γ_{i'}) (the standard Dirichlet expectation), the expansion of each term is

E_q[ ln p(w | z, β) ] = Σ_{n=1}^N Σ_{i=1}^k φ_{ni} ln β_{i,w_n},   (17)

E_q[ ln p(z | θ) ] = Σ_{n=1}^N Σ_{i=1}^k φ_{ni} E_q[ln θ_i] = Σ_{n=1}^N Σ_{i=1}^k φ_{ni} ( Ψ(γ_i) - Ψ(Σ_{i'} γ_{i'}) ),   (18)

E_q[ ln p(θ | α) ] = ln Γ(Σ_i α_i) - Σ_i ln Γ(α_i) + Σ_i (α_i - 1) E_q[ln θ_i]
                  = ln Γ(Σ_i α_i) - Σ_i ln Γ(α_i) + Σ_i (α_i - 1)( Ψ(γ_i) - Ψ(Σ_{i'} γ_{i'}) ),   (19)

E_q[ ln q(z, θ) ] = E_q[ ln q(θ | γ) ] + Σ_{n=1}^N E_q[ ln q(z_n | φ^(n)) ], with
E_q[ ln q(θ | γ) ] = ln Γ(Σ_i γ_i) - Σ_i ln Γ(γ_i) + Σ_i (γ_i - 1)( Ψ(γ_i) - Ψ(Σ_{i'} γ_{i'}) ),   (20)

Σ_{n=1}^N E_q[ ln q(z_n | φ^(n)) ] = Σ_{n=1}^N Σ_{i=1}^k φ_{ni} ln φ_{ni},   (21)

where Ψ is the digamma function. Therefore,

L(q; α, β) = L(γ, φ^(1), ..., φ^(N); α, β)
  = Σ_n Σ_i φ_{ni} ln β_{i,w_n}
  + Σ_n Σ_i φ_{ni} ( Ψ(γ_i) - Ψ(Σ_{i'} γ_{i'}) )
  + ln Γ(Σ_i α_i) - Σ_i ln Γ(α_i) + Σ_i (α_i - 1)( Ψ(γ_i) - Ψ(Σ_{i'} γ_{i'}) )
  - ln Γ(Σ_i γ_i) + Σ_i ln Γ(γ_i) - Σ_i (γ_i - 1)( Ψ(γ_i) - Ψ(Σ_{i'} γ_{i'}) )
  - Σ_n Σ_i φ_{ni} ln φ_{ni}.   (22)

3. EM Algorithm

The purpose of the EM algorithm is to maximize (10) by maximizing its variational lower bound. Since ln p(w^(d) | α, β) ≥ L(γ^(d), φ^(d); α, β) for every document d, we have

Σ_{d=1}^M ln p(w^(d) | α, β) ≥ Σ_{d=1}^M L(γ^(d), φ^(d); α, β) =: L(Γ, Φ; α, β),   (23)

which is the overall variational lower bound, where Φ = [φ_{ni}^(d)] is a tensor and Γ = [γ_i^(d)] is a matrix. The idea of the EM algorithm is to start from an initial (α, β) and iteratively improve the estimate via the following alternating E-step and M-step:

E-step: Given (α, β), find (Γ, Φ) to maximize L(Γ, Φ; α, β).
M-step: Given the (Γ, Φ) found in the E-step, find (α, β) to maximize L(Γ, Φ; α, β).

The two steps are repeated until the value of L(Γ, Φ; α, β) converges. The update equations can be derived (using Lagrange multipliers) from the overall variational lower bound

L(Γ, Φ; α, β) = Σ_{d=1}^M L(γ^(d), φ^(d); α, β)
  = Σ_d [ Σ_n Σ_i φ_{ni}^(d) ln β_{i,w_n^(d)}
        + Σ_n Σ_i φ_{ni}^(d) ( Ψ(γ_i^(d)) - Ψ(Σ_{i'} γ_{i'}^(d)) )
        + ln Γ(Σ_i α_i) - Σ_i ln Γ(α_i) + Σ_i (α_i - 1)( Ψ(γ_i^(d)) - Ψ(Σ_{i'} γ_{i'}^(d)) )
        - ln Γ(Σ_i γ_i^(d)) + Σ_i ln Γ(γ_i^(d)) - Σ_i (γ_i^(d) - 1)( Ψ(γ_i^(d)) - Ψ(Σ_{i'} γ_{i'}^(d)) )
        - Σ_n Σ_i φ_{ni}^(d) ln φ_{ni}^(d) ]   (24)

and the constraints Σ_{j=1}^V β_{ij} = 1 for i = 1, ..., k, and Σ_{i=1}^k φ_{ni}^(d) = 1 for n = 1, ..., N_d and d = 1, ..., M. In the E-step, the update equations for φ and γ are

φ_{ni}^(d) ∝ β_{i,w_n^(d)} exp( Ψ(γ_i^(d)) - Ψ(Σ_{i'} γ_{i'}^(d)) ),   (25)

γ_i^(d) = α_i + Σ_{n=1}^{N_d} φ_{ni}^(d),   (26)

and for the variational bound to converge one alternates between these two equations. Once Γ and Φ are obtained in the E-step, the update equation for β can be determined using Lagrange multipliers; it is

β_{ij} ∝ Σ_{d=1}^M Σ_{n=1}^{N_d} φ_{ni}^(d) 1( w_n^(d) = j ),   (27)

where 1(·) is the indicator function. Finally, there is no analytical form of the update equation for α in the M-step; Newton's method can be employed to obtain the updated α by maximizing

Σ_{d=1}^M [ ln Γ(Σ_i α_i) - Σ_i ln Γ(α_i) + Σ_i (α_i - 1)( Ψ(γ_i^(d)) - Ψ(Σ_{i'} γ_{i'}^(d)) ) ].   (28)

Once the EM algorithm converges, it can return two target distributions: β, which is the word distribution over topics, and γ^(d) / Σ_i γ_i^(d), which is the topic distribution of document d.
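The E-step updates (25) and (26) and the M-step update (27) are simple enough to sketch in code. The following is a minimal sketch (mine, not from the notes), assuming documents are lists of word ids, β has no zero entries, and α is held fixed (the Newton update (28) is omitted); all function names are hypothetical.

    import numpy as np
    from scipy.special import digamma

    def e_step(doc, alpha, beta, n_iter=50):
        """Variational E-step for one document: alternate the updates (25) and (26)."""
        k, N = len(alpha), len(doc)
        phi = np.full((N, k), 1.0 / k)                 # phi[n, i] = q(z_n = i)
        gamma = alpha + N / k                          # initial Dirichlet parameter
        for _ in range(n_iter):
            # (25): phi_{ni} proportional to beta[i, w_n] * exp(digamma(gamma_i));
            # the constant digamma(sum gamma) term cancels after normalization
            log_phi = np.log(beta[:, doc].T) + digamma(gamma)
            log_phi -= log_phi.max(axis=1, keepdims=True)
            phi = np.exp(log_phi)
            phi /= phi.sum(axis=1, keepdims=True)
            # (26): gamma_i = alpha_i + sum_n phi_{ni}
            gamma = alpha + phi.sum(axis=0)
        return gamma, phi

    def m_step_beta(docs, phis, k, V):
        """M-step update (27): beta[i, j] proportional to the phi mass placed on word j."""
        beta = np.zeros((k, V))
        for doc, phi in zip(docs, phis):
            for n, w in enumerate(doc):
                beta[:, w] += phi[n]
        return beta / beta.sum(axis=1, keepdims=True)

Alternating e_step over all documents with m_step_beta until the bound (24) stops improving gives the EM loop described above.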

4. Gibbs Sampling

Given the data D = {w^(1), ..., w^(M)}, let Z = {z^(1), ..., z^(M)} be the corresponding latent variables such that z^(d) is associated with w^(d). The central quantity of the Gibbs sampling methods for learning an LDA model is the full conditional posterior distribution p(z_n^(d) | Z \ z_n^(d), D). Note that

p(z_n^(d) | Z \ z_n^(d), D) = p(Z, D) / p(Z \ z_n^(d), D).   (29)

Most Gibbs sampling methods also put a Dirichlet prior, with parameter η, on the rows of β, so that (29) is parameterized as

p(z_n^(d) | Z \ z_n^(d), D, α, η) = p(Z, D | α, η) / p(Z \ z_n^(d), D | α, η).   (30)

The numerator of the right-hand side of (30) is

p(Z, D | α, η) = p(D | Z, η) p(Z | α).   (31)

To evaluate (31), one may refer to the conditional independence represented by the following graph:

[Figure: graphical representation of LDA for a corpus; for each document d, a Dirichlet prior α generates the k-dimensional topic belief θ^(d), the topics z_n^(d) are drawn from the discrete distribution θ^(d), and the words w_n^(d) are drawn from the topic-specific word distributions, whose rows carry Dirichlet priors with parameter η; a Poisson prior ξ governs the document lengths.]

By the conditional independence of the documents given Θ = {θ^(1), ..., θ^(M)}, we have

p(Z | α) = ∫ p(Z | Θ) p(Θ | α) dΘ
        = Π_{d=1}^M ∫ p(z^(d) | θ^(d)) p(θ^(d) | α) dθ^(d)
        = Π_{d=1}^M ∫ Π_{i=1}^k (θ_i^(d))^{c_i^(d)} p(θ^(d) | α) dθ^(d)
        = Π_{d=1}^M Β(c^(d) + α) / Β(α),   (32)

where c_i^(d) is the count of topic i in z^(d) (that is, in document d) and Β is the multivariate Beta function. Similarly, by the conditional independence of the words given (Z, β),

p(D | Z, η) = ∫ p(D | Z, β) p(β | η) dβ
           = ∫ Π_{i=1}^k Π_{j=1}^V β_{ij}^{m_{ij}} p(β | η) dβ
           = Π_{i=1}^k Β(m_i + η) / Β(η),   (33)

where m_{ij} is the count of topic i being assigned to word j in all documents and m_i = (m_{i1}, ..., m_{iV}). Plugging (32) and (33) into (31) yields

p(Z, D | α, η) = Π_{d=1}^M [ Β(c^(d) + α) / Β(α) ] Π_{i=1}^k [ Β(m_i + η) / Β(η) ].   (34)

For the denominator of the right-hand side of (30), note that

p(Z \ z_n^(d), D | α, η) = p(w_n^(d) | Z \ z_n^(d), D \ w_n^(d), α, η) p(Z \ z_n^(d), D \ w_n^(d) | α, η)
                        = p(w_n^(d) | Z \ z_n^(d), D \ w_n^(d), α, η) Π_{d'=1}^M [ Β(c^(d') + α) / Β(α) ] Π_{i=1}^k [ Β(m_i + η) / Β(η) ],   (35)

where the counts c^(d') and m_i on the right-hand side exclude the assignment at position (d, n), and the first factor does not depend on z_n^(d). Dividing (34) by (35) and cancelling the common Beta factors (using Γ(x + 1) = x Γ(x)) reduces (30) to

p(z_n^(d) = i | Z \ z_n^(d), D, α, η) ∝ ( c_i^(d) + α_i ) ( m_{i,w_n^(d)} + η_{w_n^(d)} ) / ( Σ_{j=1}^V m_{ij} + Σ_{j=1}^V η_j ),   (36)

where all counts on the right-hand side are computed with the current assignment z_n^(d) excluded. Given this full conditional posterior, the Gibbs sampling procedure is as straightforward as follows:

Step 0: Initialize Z, α, η.
Step 1: Iteratively update Z by drawing each z_n^(d) according to (36).
Step 2: Update α, η by maximizing the joint likelihood function (34). Go to Step 1.

The procedure above is generic. Step 1 usually requires a huge number of iterations. The objective function in Step 2 is tractable compared with (10), and there is no need for a variational lower bound.
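A minimal sketch (mine, not from the notes) of Steps 0 and 1 of this procedure, with α and η held fixed rather than re-estimated in Step 2; documents are assumed to be lists of word ids, and the two estimators returned at the end correspond to the posterior-mean formulas given in the next paragraph.

    import numpy as np

    def gibbs_lda(docs, k, V, alpha, eta, n_iter=500, seed=0):
        """Collapsed Gibbs sampler for LDA: repeatedly redraw each z_n^(d) from (36)."""
        rng = np.random.default_rng(seed)
        c = np.zeros((len(docs), k))                     # c[d, i]: count of topic i in document d
        m = np.zeros((k, V))                             # m[i, j]: count of topic i assigned to word j
        z = [rng.integers(k, size=len(doc)) for doc in docs]   # Step 0: random initialization
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                c[d, z[d][n]] += 1
                m[z[d][n], w] += 1
        for _ in range(n_iter):                          # Step 1
            for d, doc in enumerate(docs):
                for n, w in enumerate(doc):
                    i = z[d][n]
                    c[d, i] -= 1; m[i, w] -= 1           # exclude the current assignment
                    p = (c[d] + alpha) * (m[:, w] + eta[w]) / (m.sum(axis=1) + eta.sum())  # (36)
                    i = rng.choice(k, p=p / p.sum())
                    z[d][n] = i
                    c[d, i] += 1; m[i, w] += 1
        beta_hat = (m + eta) / (m + eta).sum(axis=1, keepdims=True)       # word distribution per topic
        theta_hat = (c + alpha) / (c + alpha).sum(axis=1, keepdims=True)  # topic distribution per document
        return z, beta_hat, theta_hat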

One can use Newton's method to maximize the log of this likelihood function. Once the chain has passed its burn-in phase, the two target distributions can be estimated as

p(w = j | z = i) ≈ ( m_{ij} + η_j ) / ( Σ_{j'} m_{ij'} + Σ_{j'} η_{j'} ),

which represents the word distribution over topic i, and

p(z = i | d) ≈ ( c_i^(d) + α_i ) / ( Σ_{i'} c_{i'}^(d) + Σ_{i'} α_{i'} ),

which represents the topic distribution in document d.

As a special case, when β is fixed, the full conditional posterior (36) reduces to

p(z_n^(d) = i | Z \ z_n^(d), D, α, β) ∝ β_{i,w_n^(d)} ( c_i^(d) + α_i ),   (37)

where, again, the count c_i^(d) excludes the current assignment z_n^(d). In fact, by reducing (34) and (35) for fixed β, we have

p(Z, D | α, β) = p(D | Z, β) p(Z | α) = Π_{d=1}^M Π_{n=1}^{N_d} β_{z_n^(d), w_n^(d)} Π_{d=1}^M Β(c^(d) + α) / Β(α),   (38)

p(Z \ z_n^(d), D | α, β) = p(w_n^(d) | Z \ z_n^(d), D \ w_n^(d), α, β) Π_{(d',n') ≠ (d,n)} β_{z_{n'}^(d'), w_{n'}^(d')} Π_{d'=1}^M Β(c^(d') + α) / Β(α),   (39)

with the counts in (39) excluding position (d, n), and dividing (38) by (39) yields (37).

Compared with the variational inference method, which is deterministic, Gibbs sampling is stochastic (in Step 1); hence it is easier for the Gibbs sampling iterations to jump out of local optima. Moreover, it is easier for the Gibbs sampling algorithm to incorporate the uncertainty about β. The variational inference method (together with its EM algorithm) introduced in Sections 2 and 3 treats β as a fixed parameter to be estimated, whereas it is easy for the Gibbs sampling algorithm to relax this assumption (as the full conditional posterior formula (36) shows); and when β is fixed, the full conditional posterior formula takes the simpler form (37). Another advantage is that the Gibbs sampling algorithm relaxes the assumption that documents are independent: in the variational inference method the likelihood of seeing the corpus is the product of the likelihoods of seeing the individual documents, whereas no such assumption is made in the inference based on the Gibbs sampling method.

Given the hyperparameters α, η, Step 1 of the Gibbs sampling procedure can be applied to a test corpus D' = {w^(1), ..., w^(M')} to calculate the perplexity. The post-burn-in samples of multiple runs can be averaged to obtain estimates of β and Θ = {θ^(1), ..., θ^(M')}. Given β, Θ, the joint likelihood of seeing D' is simply expressed as

p(D' | β, Θ) = Π_{d=1}^{M'} Π_{n=1}^{N_d} Σ_{i=1}^k θ_i^(d) β_{i,w_n^(d)} = Π_{d=1}^{M'} Π_{j=1}^V ( Σ_{i=1}^k θ_i^(d) β_{ij} )^{n_j^(d)},   (40)

where n_j^(d) is the number of occurrences of the j-th keyword in document d. This joint likelihood of the corpus leads to a new perplexity formula,

perplexity(D') = exp( - Σ_{d=1}^{M'} Σ_{j=1}^V n_j^(d) ln( Σ_{i=1}^k θ_i^(d) β_{ij} ) / Σ_{d=1}^{M'} N_d ).   (41)

In particular, N_d = Σ_{j=1}^V n_j^(d). Although this perplexity formula assumes independence between documents, it does not deal with the integral over θ. However, the estimation method is still based on sampling: to obtain the expected perplexity, multiple runs of drawing z^(d) are required for obtaining the expected θ^(d) corresponding to the target corpus.

There is a very elegant toy example for testing any implementation of an LDA learning algorithm. Suppose there are 10 topics and 25 words, and the ground-truth topic-to-word distribution β (a 10 x 25 matrix) is represented by the following figure.

[Figure: the ground truth of β (10 x 25 matrix)]

Also suppose α = (1, ..., 1), which means each topic is expected to have an equal chance of appearing. After generating 2000 documents, each of length 100, it is expected that the ground truth of β can be recovered from the artificially generated data. The following figure shows that it is well recovered by the Gibbs sampling procedure.

[Figure: Gibbs sampling results for learning β]
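A minimal sketch (mine, not from the notes) of generating such a synthetic corpus, assuming the ground-truth β is available as a 10 x 25 row-stochastic array (the figure itself is not reproduced here, so a random β is used as a stand-in); documents are given the fixed length 100 as in the text.

    import numpy as np

    def generate_corpus(beta, alpha, n_docs=2000, doc_len=100, seed=0):
        """Generate documents from the LDA generative process:
        theta ~ Dir(alpha); for each position, z ~ Discrete(theta) and w ~ Discrete(beta[z])."""
        rng = np.random.default_rng(seed)
        k, V = beta.shape
        docs = []
        for _ in range(n_docs):
            theta = rng.dirichlet(alpha)
            z = rng.choice(k, size=doc_len, p=theta)
            docs.append(np.array([rng.choice(V, p=beta[i]) for i in z]))
        return docs

    # stand-in for the 10 x 25 ground truth shown in the figure
    rng = np.random.default_rng(1)
    beta_true = rng.dirichlet(np.ones(25), size=10)
    corpus = generate_corpus(beta_true, alpha=np.ones(10))
    # feeding `corpus` to the gibbs_lda sketch above with k=10, V=25 and comparing the returned
    # beta_hat against beta_true (up to a permutation of the topics) reproduces the test.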

5. Application

5.1 Classification

For the classification purpose, suppose an LDA model (α^(c), β^(c)) is trained from the sub-corpus labeled with class c. Then for an unlabeled document w, the posterior probability that it belongs to class c is

p(c | w) = p(w | α^(c), β^(c)) p(c) / Σ_{c'} p(w | α^(c'), β^(c')) p(c'),   (42)

where p(w | α^(c), β^(c)) is the c-th conditional likelihood, which can be evaluated according to (8), and p(c) is the proportion of documents labeled c. Practically, due to the small scale of the joint probability of the keyword sequence of a document, only ln p(w | α^(c), β^(c)) is available for each c. Let a_c = ln p(w | α^(c), β^(c)) + ln p(c) and a* = max_c a_c; then

p(c | w) = exp( a_c - a* ) / Σ_{c'} exp( a_{c'} - a* ),   (43)

which is the usual formula for avoiding numerical overflow.

Since the size of the dictionary can be very large, the problem of feature selection arises. An ingenious idea that David Blei proposed is to take all the labeled and unlabeled documents as a single corpus and apply the variational inference method for the purpose of feature selection. Note that the objective function is L(Γ, Φ; α, β) = Σ_d L(γ^(d), φ^(d); α, β). Upon the termination of the algorithm, we have a γ^(d) for each d, and γ^(d) is a document-specific parameter vector of the Dirichlet posterior of the topic belief. Since γ^(d) has much lower dimension than w^(d), we can use γ^(d) as the new representation of the document. Compared with the usual feature selection methods like mutual information and the χ² statistic, which are basically greedy selection methods that represent each feature by a single word, each feature generated by LDA's variational inference is represented by a topic that is related to many words. This implies that an LDA feature vector of k dimensions carries more information than a selection of k words. Without the worry about which keyword is informative for assigning a document to a class, the LDA features are theoretically superior. Moreover, this feature selection idea can also be implemented when the Gibbs sampling algorithm is used. That is, the Gibbs sampling algorithm can be applied to all the labeled and unlabeled documents. Consequently, for each document d, we can estimate the expected topic composition as

θ_i^(d) ≈ ( α_i + c_i^(d) ) / Σ_{i'} ( α_{i'} + c_{i'}^(d) ),

where c_i^(d) is counted from the z^(d) generated by the Gibbs sampling. We can use this vector as the new dimension-reduced representation of the document for the purpose of document classification.

Sometimes it is interesting to estimate p(c | z = i), where z is a topic hidden in the unlabeled documents. By conditioning on the unlabeled document, we have

p(c | z = i) = Σ_w p(c, w | z = i) = Σ_w p(c | w) p(w | z = i) = Σ_w p(c | w) p(z = i | w) p(w) / p(z = i).   (44)

Practically there are only a limited number of unlabeled documents, say D_u = {w^(1), ..., w^(M)}, but one estimator of (44) can be built as

p(c | z = i) ≈ Σ_{d=1}^M p(c | w^(d)) p(z = i | w^(d)) (1/M) / p(z = i),   (45)

where 1/M plays the role of p(w^(d)), p(c | w^(d)) can be computed using (43), and p(z = i | w^(d)) can be computed from the fitted model for document w^(d) (for example, as its posterior expected proportion of topic i). One may need special care, working with ln p(c | w^(d)), ln p(z = i | w^(d)) and ln p(z = i) and exponentiating differences of logarithms, in order to avoid numerical overflow. Another method for computing (45) is to apply the Gibbs sampling to infer the latent topic sequence z^(d) for w^(d). Given z^(d), p(z = i | w^(d)) can be estimated as the proportion of topic i in z^(d), and p(z = i) can then be estimated as the proportion of topic i in z^(1), ..., z^(M). Since p(z = i) = Σ_d p(z = i | w^(d)) p(w^(d)), one estimator of p(z = i) is (1/M) Σ_d p(z = i | w^(d)), which yields

p(c | z = i) ≈ Σ_{d=1}^M p(c | w^(d)) p(z = i | w^(d)) / Σ_{d=1}^M p(z = i | w^(d)),   (46)

which is the weighted average of p(c | w^(d)) over w^(1), ..., w^(M). Model averaging over the post-burn-in phase of the Gibbs sampling procedure may be needed to obtain an averaged estimate of p(z = i | w^(d)).
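A minimal sketch (mine) of the class-posterior computation (42) and (43) in log space; the per-class log-likelihoods would come from evaluating (8), for example with the Monte Carlo estimator sketched in Section 1, and the numbers below are purely hypothetical.

    import numpy as np

    def class_posterior(log_likelihoods, class_priors):
        """Compute p(c | w) via (42)-(43) with the log-sum-exp trick.
        log_likelihoods[c] = ln p(w | alpha^(c), beta^(c)); class_priors[c] = p(c)."""
        a = np.asarray(log_likelihoods) + np.log(class_priors)
        a -= a.max()                       # subtract a* = max_c a_c to avoid overflow
        p = np.exp(a)
        return p / p.sum()

    # hypothetical example with three classes
    print(class_posterior([-1040.2, -1035.7, -1051.3], [0.5, 0.3, 0.2]))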

5.2 Similarity

The similarity between two documents w^(r) and w^(s) can be measured by the distance between E(θ^(r)) and E(θ^(s)), each of which is an expected topic composition. Given the LDA hyperparameters {α, η}, they are evaluated as

θ_i^(r) ≈ ( α_i + c_i^(r) ) / Σ_{i'} ( α_{i'} + c_{i'}^(r) )   and   θ_i^(s) ≈ ( α_i + c_i^(s) ) / Σ_{i'} ( α_{i'} + c_{i'}^(s) ),

where the topic counts c^(r) and c^(s) are obtained from the topic assignments z^(r) and z^(s) generated by Gibbs sampling. One can then calculate the similarity measure between the two compositions. If the variational inference method is used, one can instead calculate the similarity measure between γ^(r) / Σ_i γ_i^(r) and γ^(s) / Σ_i γ_i^(s).
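A small sketch (mine, not from the notes) of this comparison; the notes do not fix a particular distance, so the Hellinger distance between the two expected topic compositions is used here as one illustrative choice.

    import numpy as np

    def expected_topic_composition(topic_counts, alpha):
        """Expected topic composition of a document: (alpha_i + c_i) / sum_i' (alpha_i' + c_i')."""
        v = np.asarray(topic_counts, float) + np.asarray(alpha, float)
        return v / v.sum()

    def hellinger_distance(p, q):
        """Hellinger distance between two topic compositions (0 means identical)."""
        return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

    # hypothetical example: k = 3 topics, alpha = (1, 1, 1), counts from Gibbs-sampled assignments
    theta_r = expected_topic_composition([40, 5, 5], np.ones(3))
    theta_s = expected_topic_composition([10, 30, 10], np.ones(3))
    print(hellinger_distance(theta_r, theta_s))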

5.3 Supervised LDA

[Figure: graphical representation of sLDA; a Dirichlet prior α generates the k-dimensional topic belief θ, the topics z_n are drawn from the discrete distribution θ, the words w_n are drawn from the topic-specific word distributions (with Dirichlet priors), the response value y depends on the topic assignments, and a Poisson prior ξ governs the document length N.]

In supervised LDA (sLDA), each document w = (w_1, ..., w_N) is paired with a response value y, and the joint likelihood is obtained by conditioning on (θ, z) as before:

p(w, y | α, β, a, σ²) = ∫ Σ_z p(y | z, a, σ²) p(w, z | θ, β) p(θ | α) dθ
                     = ∫ Σ_z p(y | z, a, σ²) Π_{n=1}^N p(w_n | z_n, β) p(z_n | θ) p(θ | α) dθ,   (47)

where the response is modeled as a normal linear regression on the topic assignments,

y | z, a, σ² ~ N( a^T z̄, σ² ),   (48)

where z̄ is the counts vector of z normalized by N (so z̄_i is the proportion of topic i in z) and a is the regression coefficient vector. The posterior over the latent variables is

p(θ, z | w, y, α, β, a, σ²) = p(θ, z, w, y | α, β, a, σ²) / p(w, y | α, β, a, σ²).   (49)

It is not quite clear how this method (by David Blei and Jon McAuliffe, 2007) is superior to the method of first finding the LDA feature representations and then doing the regression. Also, compared with (8), the summation Σ_z in (47) cannot be simplified. However, the experimental results reported in the 2007 paper show better results than the regression using LDA features and the Lasso.

(to be continued ...)