Bayesian Paragraph Vectors

Geng Ji (1), Robert Bamler (2), Erik B. Sudderth (1), and Stephan Mandt (2)
(1) Department of Computer Science, UC Irvine, {gji1, sudderth}@uci.edu
(2) Disney Research, firstname.lastname@disneyresearch.com

Abstract

Word2vec (Mikolov et al., 2013b) has proven to be successful in natural language processing by capturing the semantic relationships between different words. Built on top of single-word embeddings, paragraph vectors (Le and Mikolov, 2014) find fixed-length representations for pieces of text with arbitrary lengths, such as documents, paragraphs, and sentences. In this work, we propose a novel interpretation for neural-network-based paragraph vectors by developing an unsupervised generative model whose maximum likelihood solution corresponds to traditional paragraph vectors. This probabilistic formulation allows us to go beyond point estimates of parameters and to perform Bayesian posterior inference. We find that the entropy of paragraph vectors decreases with the length of documents, and that information about posterior uncertainty improves performance in supervised learning tasks such as sentiment analysis and paraphrase detection.

1 Introduction

Paragraph vectors (Le and Mikolov, 2014) are a recent method for embedding pieces of natural language text as fixed-length, real-valued vectors. Extending the word2vec framework (Mikolov et al., 2013b), paragraph vectors are typically presented as neural language models, and compute a single vector representation for each paragraph. Unlike word embeddings, paragraph vectors are not shared across the entire corpus, but are instead local to each paragraph. When interpreted as latent variables, they should have higher uncertainty when the paragraphs are short.

Recently, Barkan (2017) proposed a probabilistic view of word2vec that has motivated research on combining word2vec with other priors (Bamler and Mandt, 2017). Inspired by this progress, we extend paragraph vectors to a probabilistic model. Our model may be specified via modern inference tools like Edward (Tran et al., 2016), which makes it easy to experiment with different inference algorithms. The experiments in Sec. 4 confirm the intuition that paragraph vectors have higher posterior uncertainty when paragraphs are short, and we show that explicitly modeling this uncertainty improves performance in supervised prediction tasks.

2 Related work

Paragraph embeddings are built on top of word embeddings, a set of dimensionality reduction tools that map words from a large vocabulary to a dense vector representation. Most word embedding methods learn a point estimate for each embedding vector (Mikolov et al., 2013a,b; Mnih and Kavukcuoglu, 2013; Goldberg and Levy, 2014; Pennington et al., 2014). Barkan (2017) pointed out that the skip-gram model with negative sampling, also known as word2vec (Mikolov et al., 2013b), admits a Bayesian interpretation. The Bayesian skip-gram model allows uncertainty to be taken into account in a principled way, and lays the basis for our proposed Bayesian paragraph vector model.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Left: Bayesian word embeddings by Barkan (2017). Right: Bayesian paragraph vectors.

Many tasks in natural language processing require fixed-length features for text passages of variable length, such as sentences, paragraphs, or documents (in this paper, we treat these three terms interchangeably). Generalizing embeddings of single words, several methods have been proposed to find dense vector representations of paragraphs (Le and Mikolov, 2014; Kiros et al., 2015; Wieting et al., 2015; Palangi et al., 2016; Pagliardini et al., 2017). Since paragraph embeddings are local to short pieces of text, we expect them to have high posterior uncertainty if the paragraphs are short. In this work, we incorporate the idea of paragraph vectors (Le and Mikolov, 2014) into the Bayesian skip-gram model in order to coherently infer the uncertainty associated with paragraph vectors.

3 Method

In Sec. 3.1, we summarize the Bayesian skip-gram model on which our model is based. We then present our Bayesian paragraph model in Sec. 3.2, and discuss two inference methods in Sec. 3.3.

3.1 Bayesian skip-gram model

The Bayesian skip-gram model (Barkan, 2017) is a probabilistic interpretation of word2vec (Mikolov et al., 2013b). The left part of Figure 1 shows the generative process. For each word $i$ in the vocabulary, the model draws a latent word embedding vector $U_i \in \mathbb{R}^E$ and a latent context embedding vector $V_i \in \mathbb{R}^E$ from a Gaussian prior $\mathcal{N}(0, \lambda^2 I)$. Here, $E$ is the embedding dimension and $\lambda$ is a hyperparameter. The model then constructs $N_{\mathrm{pairs}}$ labeled pairs of words following a two-step process. First, a proposal pair of words $(i, j)$ is drawn from a uniform distribution over the vocabulary. Then, the model assigns to the proposal pair a binary label $z_{ij} \sim \mathrm{Bern}(\sigma(U_i^\top V_j))$, where $\sigma(x) = 1/(1 + e^{-x})$ is the sigmoid function. The pairs with label $z_{ij} = 1$ form the so-called positive examples, and are assumed to correspond to occurrences of the word $i$ in the context of word $j$ somewhere in the corpus. The so-called negative examples with label $z_{ij} = 0$ do not correspond to any observation in the corpus. When training the model, we resort to the heuristics proposed by Mikolov et al. (2013b) to create artificial evidence for the negative examples (see Section 3.2 below).

3.2 Bayesian paragraph vectors

Bayesian paragraph vectors (BPV) are a direct extension of the Bayesian skip-gram model. The right part of Figure 1 shows the generative process. In addition to global word and context embeddings $U$ and $V$, the model draws a paragraph vector $d_n \sim \mathcal{N}(0, \phi^2 I)$ for each of the $N_{\mathrm{docs}}$ documents in the corpus. Following Le and Mikolov (2014), we add $d_n$ to the context vector $V_j$ when we classify a given pair of words $(i, j)$ as a positive or a negative example. Thus, the likelihood that a word pair $(i, j)$ in document $n$ has label $z_{n,ij} \in \{0, 1\}$ is

    p(z_{n,ij} \mid U_i, V_j, d_n) = \sigma\big(U_i^\top (V_j + d_n)\big)^{z_{n,ij}} \, \sigma\big(-U_i^\top (V_j + d_n)\big)^{1 - z_{n,ij}}.   (1)

We collect evidence for the positive examples $X_n^+$ in each document $n$ by forming pairs of words $(w_{n,t}, w_{n,t+\tau})$. Here, $w_{n,t}$ is the word class of the $t$-th token, $t$ runs over all tokens in document $n$, $\tau$ runs from $-c$ to $c$ where $c$ is a small context window size, and we exclude $\tau = 0$.
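As an illustration of how the positive evidence is gathered, the following sketch extracts skip-gram pairs from a document given as a list of integer word ids; the helper function and its name are ours, not part of the paper's released code.

def positive_pairs(tokens, c=4):
    # Collect pairs (w_{n,t}, w_{n,t+tau}) for all offsets 0 < |tau| <= c.
    pairs = []
    for t, w in enumerate(tokens):
        for tau in range(-c, c + 1):
            if tau == 0:
                continue
            if 0 <= t + tau < len(tokens):
                pairs.append((w, tokens[t + tau]))
    return pairs

# Toy example: positive evidence X_n^+ for a five-token document of word ids.
X_n_plus = positive_pairs([3, 17, 8, 42, 8], c=2)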

Negative examples are not observed in the corpus. Following Mikolov et al. (2013b), we construct artificial evidence $X_n^-$ for the negative pairs by sampling from the noise distribution $P(i, j) \propto f(i)^{3/4} f(j)^{3/4}$, where $f$ is the empirical unigram frequency across the training corpus. The log-likelihood of the entire data is thus

    \log p(X^+, X^- \mid U, V, d) = \sum_n \Big[ \sum_{(i,j) \in X_n^+} \log \sigma\big(U_i^\top (V_j + d_n)\big) + \sum_{(i,j) \in X_n^-} \log \sigma\big(-U_i^\top (V_j + d_n)\big) \Big].   (2)

In the limit $d_n \to 0$, Eq. (2) reduces to the negative loss function of word2vec. BPV can be easily specified in Edward, a Python library for probabilistic modeling and inference (Tran et al., 2016):

import tensorflow as tf
from edward.models import Bernoulli, Normal

# W: vocabulary size, E: embedding dimension; lam and phi are the prior scales.
U = Normal(loc=tf.zeros((W, E), dtype=tf.float32), scale=lam)
V = Normal(loc=tf.zeros((W, E), dtype=tf.float32), scale=lam)
d_n = Normal(loc=tf.zeros(E, dtype=tf.float32), scale=phi)

# Look up the embeddings of the word pairs (i, j) collected for document n.
u_n = tf.nn.embedding_lookup(U, indices_n_i)
v_n = tf.nn.embedding_lookup(V, indices_n_j)
z_n = Bernoulli(logits=tf.reduce_sum(u_n * (v_n + d_n), axis=1))

3.3 MAP and black box variational inference

The BPV model has global and local latent variables. We expect the posterior of the global variables to be peaked, and therefore approximate the global word embedding matrices $U$ and $V$ via point estimates. We expect a broader posterior distribution for the local paragraph vectors $d_n$, and therefore use variational inference (VI) (Blei et al., 2016) to fit the posterior over $d_n$ with a fully factorized Gaussian distribution. We split inference into two stages: in the first stage, we point estimate all parameters; in the second stage, we fix $U$ and $V$ and only perform VI for the paragraph vectors.

In the first stage, our goal is to train the global variables via stochastic gradient descent, where every minibatch contains a single document $n$ and a fixed set of negative examples $X_n^-$. We first maximize the joint probability $p(X_n^+, X_n^-, U, V, d_n)$ with respect to the paragraph vector $d_n$. As this local optimization is noise free, it converges quickly under a constant learning rate. Then, we perform a single gradient step for the global variables $U$ and $V$. This gradient is noisy due to the minibatch sampling and the stochastic generation of negative examples, so a decreasing learning rate is used. Finally, we reinitialize $d_n$ and proceed to the next document. Optimizing $d_n$ in a nested loop before each update step saves memory, since we only need to keep track of one document vector at a time.

In the second stage, we fit a variational distribution for the paragraph vectors while holding $U$ and $V$ fixed. We use black box VI (Ranganath et al., 2014) with reparameterization gradients (Kingma and Welling, 2014; Rezende et al., 2014), as provided by the Edward library. This time, we generate new negative examples in each update step to avoid overfitting. The stochastic optimization is again performed with a decreasing learning rate. We also compute a separate maximum a posteriori (MAP) estimate of the paragraph vectors to serve as a baseline for the downstream classification tasks.
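A rough sketch of the second stage, continuing the Edward model above, fits a fully factorized Gaussian to $d_n$ with Edward's generic KLqp routine, which uses reparameterization gradients for Gaussian variational families. The names U_map, V_map, and labels_n are placeholders we introduce for the stage-one point estimates and the binary labels of document $n$; the exact call pattern is an assumption rather than the authors' released code.

import edward as ed
import tensorflow as tf
from edward.models import Normal

# Fully factorized Gaussian variational family for the paragraph vector d_n.
qd_n = Normal(loc=tf.Variable(tf.zeros(E)),
              scale=tf.nn.softplus(tf.Variable(tf.zeros(E))))

# Hold U and V at their stage-one point estimates (U_map, V_map) and condition
# on the labels of the positive and freshly sampled negative pairs (labels_n).
inference = ed.KLqp({d_n: qd_n}, data={z_n: labels_n, U: U_map, V: V_map})
inference.run(n_iter=500)

# After convergence, qd_n.loc and qd_n.scale hold the variational mean and
# standard deviation that serve as features in Sec. 4.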
4 Experiments

Paragraph vectors are often used as input features for supervised learning in natural language processing (Le and Mikolov, 2014; Kiros et al., 2015; Palangi et al., 2016). In this section, we apply BPV to two binary classification tasks: sentiment analysis and paraphrase detection. We find that the posterior uncertainty of BPV decreases as the length of paragraphs grows. We also find that concatenating the variational mean and standard deviation features inferred by VI improves classification accuracy compared to MAP point estimates of the paragraph embeddings.

4.1 Sentiment analysis

We use the IMDB dataset (Maas et al., 2011) for sentiment analysis. It contains 100k movie reviews, split into 75k training points (25k labeled, 50k unlabeled) and 25k labeled test points. Positive and negative labels are balanced in both labeled subsets, and typical reviews consist of several sentences. As our algorithm is unsupervised, we run the inference algorithms described in Sec. 3.3 on all the training data, and then train a logistic regression classifier using only the paragraph vectors of the labeled training data. We use the most frequent 10k words as the vocabulary, and set the context window size to $c = 4$, the embedding dimension to $E = 100$, the prior hyperparameters to $\lambda = \phi = 1$, and the number of negative examples per document equal to the average number of positive pairs over all documents. The feature vectors $x_n$ for the classifier are the point estimates of the paragraph vectors $d_n$ for MAP, and the concatenation of the variational mean and standard deviation of $d_n$ for VI.
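To make the feature construction concrete, the sketch below assembles both feature sets and trains the classifier with scikit-learn. The arrays d_map, d_mean, d_std, their test-set counterparts, and the label vectors are hypothetical NumPy arrays holding the inferred quantities for the labeled documents, not outputs of the paper's released code.

import numpy as np
from sklearn.linear_model import LogisticRegression

# MAP features: the point estimates themselves, shape (num_docs, E).
X_map = d_map

# VI features: posterior mean concatenated with standard deviation, shape (num_docs, 2 * E).
X_vi = np.concatenate([d_mean, d_std], axis=1)

clf = LogisticRegression()
clf.fit(X_vi, y_train)                 # swap in X_map for the MAP baseline
accuracy = clf.score(X_vi_test, y_test)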

Figure 2: Entropy of paragraph vectors as a function of the number of words in each document. Left: movie reviews in the IMDB dataset. Right: news clips in the MSR dataset. In the left plot, a log scale is used for the horizontal axis to account for the wide range of document lengths.

Table 1 shows the test accuracy of the two inference methods. VI outperforms MAP since it takes posterior uncertainty in the paragraph embeddings into account. Fig. 2 (left) shows the entropy of the paragraph vectors, computed using the posterior variance obtained from VI. As the document length grows, the entropy decreases, which makes intuitive sense since longer reviews can be more specific.

Table 1: Classification accuracy (%) of MAP and variational inference

  Task (dataset)               MAP     VI
  Sentiment analysis (IMDB)    86.88   86.96
  Paraphrase detection (MSR)   69.97   70.96

4.2 Paraphrase detection

We also test the discriminative power of BPV on the Microsoft Research Paraphrase Corpus (Dolan et al., 2004). Each data point consists of two sentences extracted from news sources on the web, and the goal is to predict whether they are paraphrases of each other. The training set contains 4076 sentence pairs, of which 2753 are paraphrases, and the test set contains 1725 pairs, of which 1147 are paraphrases. We use the same hyperparameters as in the sentiment analysis task, except that we include all words appearing more than once in the vocabulary, because this dataset is much smaller. After finding the paragraph vectors, we train the classifier following Kiros et al. (2015), where features are constructed by concatenating the component-wise product $x_n \circ x_{n'}$ and the absolute difference $|x_n - x_{n'}|$ of each pair of feature vectors $x_n$ and $x_{n'}$. The classification results in Table 1 show that VI again outperforms MAP. The relationship between entropy and document length shown in Fig. 2 (right) is also similar to that of the IMDB dataset.
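The entropies plotted in Fig. 2 presumably follow from the fully factorized Gaussian variational posterior, whose differential entropy has a closed form in terms of the per-dimension standard deviations. A minimal sketch, with d_std a hypothetical array holding the variational standard deviations of one paragraph vector:

import numpy as np

def gaussian_entropy(d_std):
    # Differential entropy of a diagonal Gaussian: sum_k 0.5 * log(2 * pi * e * sigma_k^2).
    return 0.5 * np.sum(np.log(2.0 * np.pi * np.e * d_std ** 2))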

5 Discussion

We proposed Bayesian paragraph vectors, a generative model of paragraph embeddings. We treated the paragraph vectors, the model's local latent variables, in a Bayesian way because we expected high uncertainty, especially for short documents. Our experiments confirmed this intuition, and showed that knowledge of the posterior uncertainty improves the performance of downstream supervised tasks. In addition to MAP and VI, we experimented with Hamiltonian Monte Carlo (HMC) inference, but our preliminary results showed worse performance; we plan to investigate further. A possible reason might be that we had to use a fixed set of negative examples for each document when generating HMC samples, which may result in overfitting to the noise. Finally, we believe that more sophisticated models of document embeddings would also benefit from a Bayesian treatment of the local variables.

References

Bamler, R. and Mandt, S. (2017). Dynamic word embeddings. In ICML.
Barkan, O. (2017). Bayesian neural word embedding. In AAAI.
Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. (2016). Variational inference: A review for statisticians. arXiv:1601.00670.
Dolan, B., Quirk, C., and Brockett, C. (2004). Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In ACL.
Goldberg, Y. and Levy, O. (2014). word2vec explained: Deriving Mikolov et al.'s negative-sampling word-embedding method. arXiv:1402.3722.
Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In ICLR.
Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Urtasun, R., Torralba, A., and Fidler, S. (2015). Skip-thought vectors. In NIPS.
Le, Q. and Mikolov, T. (2014). Distributed representations of sentences and documents. In ICML.
Maas, A. L., Daly, R. E., Pham, P. T., Huang, D., Ng, A. Y., and Potts, C. (2011). Learning word vectors for sentiment analysis. In ACL.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. arXiv:1301.3781.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In NIPS.
Mnih, A. and Kavukcuoglu, K. (2013). Learning word embeddings efficiently with noise-contrastive estimation. In NIPS.
Pagliardini, M., Gupta, P., and Jaggi, M. (2017). Unsupervised learning of sentence embeddings using compositional n-gram features. arXiv:1703.02507.
Palangi, H., Deng, L., Shen, Y., Gao, J., He, X., Chen, J., Song, X., and Ward, R. (2016). Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval. TASLP, 24(4):694-707.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In EMNLP.
Ranganath, R., Gerrish, S., and Blei, D. M. (2014). Black box variational inference. In AISTATS.
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In ICML.
Tran, D., Kucukelbir, A., Dieng, A. B., Rudolph, M., Liang, D., and Blei, D. M. (2016). Edward: A library for probabilistic modeling, inference, and criticism. arXiv:1610.09787.
Wieting, J., Bansal, M., Gimpel, K., Livescu, K., and Roth, D. (2015). From paraphrase database to compositional paraphrase model and back. TACL, 3:345-358.