Word Representations via Gaussian Embedding. Luke Vilnis, Andrew McCallum. University of Massachusetts Amherst
1 Word Representations via Gaussian Embedding. Luke Vilnis, Andrew McCallum. University of Massachusetts Amherst
2 Vector word embeddings. [Figure: 2-D embedding with word clusters such as teacher / chef / astronaut / composer / person and road / street / lane / boulevard.] Applications: Low-Level NLP [Turian et al. 2010, Collobert et al. 2011]; Named Entity Extraction [Passos et al. 2014]; Machine Translation [Kalchbrenner & Blunsom 2013, Cho et al. 2014]; Question Answering [Weston et al. 2015]
3 Vector word embeddings. What's missing? - Breadth - Asymmetry [Figure: point embeddings for composer and person]
4 Gaussian word embeddings. Advantages: - Breadth - Asymmetry [Figure: Gaussian ellipses for famous person, Bach, composer]
5 Gaussian word embeddings. Advantages: - Breadth - Asymmetry. For each word i: a vector $v_i$
6 Gaussian word embeddings. Advantages: - Breadth - Asymmetry. For each word i: a Gaussian density $\mathcal{N}(x; \mu_i, \Sigma_i)$
7 Gaussian word embeddings. Advantages: - Breadth: covariance matrix - Asymmetry. For each word i: $\mathcal{N}(x; \mu_i, \Sigma_i)$
8 Gaussian word embeddings. Advantages: - Breadth: covariance matrix - Asymmetry. For each word i: $\log \mathcal{N}(x; \mu_i, \Sigma_i) \propto -\log \det(\Sigma_i) - (\mu_i - x)^\top \Sigma_i^{-1} (\mu_i - x)$
9 Same, annotated: the first term is a logarithmic penalty on volume due to normalization; the second is the Mahalanobis distance measured by $\Sigma_i$.
10 Gaussian word embeddings. Advantages: - Breadth: covariance matrix - Asymmetry: KL-divergence. (Same density and annotations as above.)
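To make the two terms concrete, here is a minimal numpy sketch (illustrative values only, not the paper's code) that evaluates the diagonal-Gaussian log density and its log-det and Mahalanobis pieces:

```python
# Minimal numpy sketch (invented values, not the paper's code): the log-density
# of a diagonal Gaussian word embedding, split into the log-det "volume"
# penalty and the Mahalanobis distance term.
import numpy as np

d = 4
mu_i = np.array([0.2, -0.1, 0.5, 0.0])   # mean of word i
var_i = np.array([0.5, 1.0, 2.0, 0.3])   # diagonal of Sigma_i
x = np.zeros(d)                           # a point, e.g. another word's mean

log_det = np.sum(np.log(var_i))                    # log det(Sigma_i)
mahalanobis = np.sum((mu_i - x) ** 2 / var_i)      # (mu_i - x)^T Sigma_i^{-1} (mu_i - x)
log_density = -0.5 * (log_det + mahalanobis + d * np.log(2 * np.pi))

print(log_det, mahalanobis, log_density)
```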
11 Gaussian word embeddings. Advantages: - Breadth: covariance matrix - Asymmetry: KL-divergence. $KL(\mathcal{N}_i \,\|\, \mathcal{N}_j) = \int_x \mathcal{N}(x; \mu_i, \Sigma_i) \log \frac{\mathcal{N}(x; \mu_i, \Sigma_i)}{\mathcal{N}(x; \mu_j, \Sigma_j)}\, dx$
12 Same, with an illustration of the KL divergence between two Gaussians [figure from Wikipedia].
13 Closed form: $KL(\mathcal{N}_i \,\|\, \mathcal{N}_j) \propto \mathrm{tr}(\Sigma_i^{-1} \Sigma_j) + (\mu_i - \mu_j)^\top \Sigma_i^{-1} (\mu_i - \mu_j) - \log \frac{\det(\Sigma_i)}{\det(\Sigma_j)}$
14 Same, annotated: trace term: directions of variance should be aligned, $\Sigma_i$ should be large and $\Sigma_j$ small; Mahalanobis term: distance between means is small as measured by $\Sigma_i$; log-det term: logarithmic penalty on volume due to normalization.
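The slide's closed form is an instance of the standard KL divergence between Gaussians; a small numpy sketch (with invented means and diagonal variances) shows the asymmetry that the embeddings exploit:

```python
# Minimal numpy sketch (invented values): closed-form KL between diagonal
# Gaussians, KL(N_i || N_j). The asymmetry is the point: a narrow "Bach"
# sits inside a broad "composer" cheaply, but not the other way around.
import numpy as np

def kl_diag(mu_i, var_i, mu_j, var_j):
    d = len(mu_i)
    return 0.5 * (np.sum(var_i / var_j)
                  + np.sum((mu_j - mu_i) ** 2 / var_j)
                  - d
                  + np.sum(np.log(var_j) - np.log(var_i)))

mu_bach, var_bach = np.array([1.0, 0.5]), np.array([0.1, 0.1])   # specific, narrow
mu_comp, var_comp = np.array([0.8, 0.6]), np.array([2.0, 2.0])   # general, broad

print(kl_diag(mu_bach, var_bach, mu_comp, var_comp))  # small: Bach fits inside composer
print(kl_diag(mu_comp, var_comp, mu_bach, var_bach))  # much larger: the reverse does not hold
```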
15 Learning vector embeddings, e.g. [Mikolov et al. 2013]. Training text: "German musician and composer of the Baroque"
16 Learning vector embeddings: score word pairs with an energy $E(\text{word}_i, \text{word}_j) = \langle v_i, v_j \rangle$
17 Positive pair drawn from the context window: (composer, musician)
18 Negative pair formed with a random word from the dictionary: (composer, random word)
19 ... for example (composer, banana)
20 Training objective: $E(\text{composer}, \text{musician}) > E(\text{composer}, \text{banana})$, with $E(\text{word}_i, \text{word}_j) = \langle v_i, v_j \rangle$
21 The inner product written as a sum over coordinates: $E(\text{word}_i, \text{word}_j) = \sum_k v_i(k)\, v_j(k)$
22 ... and, viewing each embedding as a function over k, as an integral: $E(\text{word}_i, \text{word}_j) = \int_k v_i(k)\, v_j(k)\, dk$
23 Same ranking objective, with the integral form of the energy (the bridge to density embeddings)
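As a toy illustration of the ranking objective (the vectors here are invented for the example, not trained):

```python
# Tiny numpy sketch (made-up vectors): the inner-product energy ranks a true
# context word above a random one, E(composer, musician) > E(composer, banana).
import numpy as np

v = {"composer": np.array([0.9, 0.1, 0.4]),
     "musician": np.array([0.8, 0.2, 0.3]),
     "banana":   np.array([-0.5, 0.7, 0.0])}

def energy(i, j):
    return float(v[i] @ v[j])   # <v_i, v_j> = sum_k v_i(k) v_j(k)

print(energy("composer", "musician"), ">", energy("composer", "banana"))
```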
24 Learning Gaussian embeddings: same ranking objective, $E(\text{composer}, \text{musician}) > E(\text{composer}, \text{banana})$, but with a density-based energy.
25 Energy: the probability product kernel [PPK, Jebara et al. 2003], $E(\text{word}_i, \text{word}_j) = \int_x \mathcal{N}(x; \mu_i, \Sigma_i)\, \mathcal{N}(x; \mu_j, \Sigma_j)\, dx$
26 ... which has a closed form: $\int_x \mathcal{N}(x; \mu_i, \Sigma_i)\, \mathcal{N}(x; \mu_j, \Sigma_j)\, dx = \mathcal{N}(0; \mu_i - \mu_j, \Sigma_i + \Sigma_j)$
27 ... so $\log E(\text{word}_i, \text{word}_j) \propto -\log \det(\Sigma_i + \Sigma_j) - (\mu_i - \mu_j)^\top (\Sigma_i + \Sigma_j)^{-1} (\mu_i - \mu_j)$
28 Same, annotated: the first term is the log-volume of the ellipse; the second is the Mahalanobis distance between the means.
29 Learning Gaussian embeddings: $E(\text{word}_i, \text{word}_j) = \int_x \mathcal{N}(x; \mu_i, \Sigma_i)\, \mathcal{N}(x; \mu_j, \Sigma_j)\, dx = \mathcal{N}(0; \mu_i - \mu_j, \Sigma_i + \Sigma_j)$ [PPK, Jebara et al. 2003]
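A minimal numpy sketch of the closed-form log PPK energy for diagonal covariances, assuming placeholder means and variances rather than trained embeddings:

```python
# Minimal numpy sketch (placeholder values): the log PPK energy between two
# diagonal Gaussian embeddings, log N(0; mu_i - mu_j, Sigma_i + Sigma_j).
import numpy as np

def log_ppk_energy(mu_i, var_i, mu_j, var_j):
    var = var_i + var_j                        # diagonal of Sigma_i + Sigma_j
    diff = mu_i - mu_j
    d = len(mu_i)
    return -0.5 * (np.sum(np.log(var))         # log-volume of the ellipse
                   + np.sum(diff ** 2 / var)   # Mahalanobis distance between means
                   + d * np.log(2 * np.pi))

mu_a, var_a = np.array([0.3, -0.2]), np.array([0.4, 0.6])
mu_b, var_b = np.array([0.1,  0.0]), np.array([0.5, 0.2])
print(log_ppk_energy(mu_a, var_a, mu_b, var_b))
```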
30 Learning Gaussian embeddings: enforce $E(\text{composer}, \text{musician}) > E(\text{composer}, \text{banana})$ with a max-margin loss, $\text{Loss}_{\text{PPK}}(w, c_{\text{pos}}, c_{\text{neg}}) = \max(0,\, m - E_{\text{PPK}}(w, c_{\text{pos}}) + E_{\text{PPK}}(w, c_{\text{neg}}))$
31 Learning Gaussian embeddings: $\text{Loss}_{\text{KL}}(w, c_{\text{pos}}, c_{\text{neg}}) = \max(0,\, m + KL(c_{\text{pos}} \,\|\, w) - KL(c_{\text{neg}} \,\|\, w))$ (asymmetric supervision)
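Written out as code, the two hinge losses look like this; `energy` and `kl` are assumed to be callables such as the PPK and KL sketches above, and the margin m is a hyperparameter:

```python
# Minimal sketch (not the paper's code) of the two ranking losses from the
# slides; `energy` and `kl` stand for E_PPK and KL computed on Gaussian
# embeddings, m is the margin hyperparameter.
def loss_ppk(energy, w, c_pos, c_neg, m=1.0):
    # want E(w, c_pos) to exceed E(w, c_neg) by at least m
    return max(0.0, m - energy(w, c_pos) + energy(w, c_neg))

def loss_kl(kl, w, c_pos, c_neg, m=1.0):
    # asymmetric supervision: want KL(c_pos || w) at least m below KL(c_neg || w)
    return max(0.0, m + kl(c_pos, w) - kl(c_neg, w))
```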
32 Related work: asymmetric, sparse, distributional [Baroni et al. 2012]; dense can be better [Baroni et al. 2014]; symmetric, dense [Bengio et al. 2003, Mikolov et al. 2013, many others]; Bayesian matrix factorization [Salakhutdinov & Mnih 2008]; (mixture) density networks [Bishop 1994]; Gaussian process neural nets [Damianou & Lawrence 2013]
33 Experimental results Synthetic hierarchies Entailment Word similarity tasks Scientific key phrase finding
34 Experimental results Synthetic hierarchies Entailment Word similarity tasks Scientific key phrase finding
35 Synthetic hierarchy. Train data: child-parent edges from a tree of parent and child nodes [figure]. Objective: $KL(v_{\text{child}} \,\|\, v_{\text{parent}})$
36 Synthetic hierarchy. Train data vs. learned model [figure]: the KL objective accurately learns all containments.
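A hedged PyTorch sketch of the idea (not the authors' implementation): fit diagonal Gaussians to a toy tree by driving down KL(child ‖ parent), here with a margin against randomly sampled negatives; node names, the negative sampling, and hyperparameters are all made up for illustration:

```python
# Minimal PyTorch sketch (assumptions: toy tree, random negatives, made-up
# hyperparameters) of learning diagonal Gaussian embeddings so that each
# child has small KL(child || parent).
import torch

edges = [("animal", "entity"), ("dog", "animal"), ("cat", "animal"),
         ("plant", "entity"), ("tree", "plant")]
nodes = sorted({n for e in edges for n in e})
idx = {n: i for i, n in enumerate(nodes)}
d, margin = 5, 1.0

mu = torch.nn.Parameter(0.1 * torch.randn(len(nodes), d))
log_var = torch.nn.Parameter(torch.zeros(len(nodes), d))  # diagonal covariance

def kl_diag(i, j):
    """KL( N_i || N_j ) for diagonal Gaussians, standard closed form."""
    var_i, var_j = log_var[i].exp(), log_var[j].exp()
    return 0.5 * ((var_i / var_j).sum()
                  + ((mu[j] - mu[i]) ** 2 / var_j).sum()
                  - d
                  + (log_var[j] - log_var[i]).sum())

opt = torch.optim.Adam([mu, log_var], lr=0.05)
for step in range(500):
    loss = torch.zeros(())
    for child, parent in edges:
        neg = nodes[torch.randint(len(nodes), (1,)).item()]  # random negative "parent"
        pos_kl = kl_diag(idx[child], idx[parent])
        neg_kl = kl_diag(idx[child], idx[neg])
        loss = loss + torch.clamp(margin + pos_kl - neg_kl, min=0.0)
    opt.zero_grad(); loss.backward(); opt.step()

print(kl_diag(idx["dog"], idx["animal"]).item(),   # should end up small
      kl_diag(idx["animal"], idx["dog"]).item())   # should stay larger
```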
37 Experimental results Synthetic hierarchies Entailment Word similarity tasks Scientific key phrase finding
38 Experimental results Synthetic hierarchies Entailment Word similarity tasks Scientific key phrase finding
39 Entailment. Binary labeled dataset of entailment pairs [Baroni et al. 2012]. Positive (+): adrenaline is-a neurotransmitter; archbishop is-a clergyman; horse is-a mammal; pizza is-a food.
40 Negative (-), no relation: aircrew is-not-a playlist; bamboo is-not-a bear.
41 Negative (-), reversed: food is-not-a pizza; molecule is-not-a carbohydrate; gathering is-not-a seminar.
42 Entailment. Model: diagonal (D) and spherical (S) variances. Train: ~1B tokens of Wikipedia + 3B tokens of newswire. Evaluate: optimal F1 operating point, average precision.
43 Entailment. Model: diagonal (D) and spherical (S) variances. Train: ~1B tokens of Wikipedia + 3B tokens of newswire. Evaluate: optimal F1 operating point, average precision.
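One way to compute these two metrics from entailment scores, sketched with scikit-learn on invented labels and scores (e.g. score = -KL(specific ‖ general)); this is an assumed setup, not the paper's evaluation script:

```python
# Hedged sketch: best-F1 operating point and average precision for a set of
# scored entailment pairs. Labels and scores below are invented.
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])                    # 1 = entailment holds
scores = np.array([2.1, 1.7, 0.2, 1.1, 0.4, -0.3, 0.9, 0.5])   # e.g. -KL(child || parent)

precision, recall, thresholds = precision_recall_curve(labels, scores)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
best = np.argmax(f1)
print("best F1:", f1[best])
print("average precision:", average_precision_score(labels, scores))
```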
44 Experimental results Synthetic hierarchies Entailment Word similarity tasks Scientific key phrase finding
45 Experimental results Synthetic hierarchies Entailment Word similarity tasks Scientific key phrase finding
46 Symmetric word similarity. Word similarity tasks (e.g. WordSim-353): human-rated pairs such as (money, bank, 8.5), (psychology, Freud, 8.21), (media, radio, 7.42), (drug, abuse, 6.85), (Mars, scientist, 5.63), (cup, object, 3.69), (professor, cucumber, 0.31). Evaluate: Spearman's ρ.
47 Symmetric word similarity. [Results chart comparing skip-gram against Gaussian embeddings: sphere (μ), sphere (μ, Σ), diagonal (μ), diagonal (μ, Σ).]
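Evaluation on these datasets reduces to a rank correlation; a minimal sketch with scipy, using invented stand-in mean vectors and cosine similarity of the means as the model score:

```python
# Minimal sketch (assumed setup, invented data): Spearman's rho between model
# similarities and human ratings on word pairs.
import numpy as np
from scipy.stats import spearmanr

human = {("money", "bank"): 8.5, ("media", "radio"): 7.42,
         ("cup", "object"): 3.69, ("professor", "cucumber"): 0.31}
mu = {w: np.random.randn(50) for pair in human for w in pair}  # stand-in means

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

model_scores = [cosine(mu[a], mu[b]) for (a, b) in human]
gold_scores = list(human.values())
rho, _ = spearmanr(model_scores, gold_scores)
print("Spearman's rho:", rho)
```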
48 Experimental results Synthetic hierarchies Entailment Word similarity tasks Scientific key phrase finding
49 Experimental results Synthetic hierarchies Entailment Word similarity tasks Scientific key phrase finding
50 Scientific key-phrase finding. [Figure: embedded key-phrases such as physics, computational fluid dynamics, Navier-Stokes equations, partial differential equations.]
51 Scientific key-phrase finding What makes a good key-phrase? - High frequency - Predictive
52 Scientific key-phrase finding. What makes a good key-phrase? - High frequency - Predictive

Phrases | Frequent? | Predictive?
conventional wisdom suggests, pre-defined categories | No | No
paper describes, experimental results | Yes | No
EXPTIME complete, autocorrelation function | No | Yes
operational semantics, regular languages | Yes | Yes
53 Scientific key-phrase finding. Sample key-phrases from scientific paper abstracts. Frequent, predictive: linear matrix inequality; satisfiability problem; encryption schemes; sparse matrix; vector spaces. Rare, uninformative: exploratory study; theoretical basis; major contributions; hot topic.
54 Thank you! Conclusion: introduced Gaussian word embeddings: - Capture asymmetry - Capture broadness of meaning and uncertainty - Expressive, dense, distributed representation - Scalable learning: 4 billion tokens, 1 core, 8 hours. Future work: - Multi-peaked, unnormalized, non-Gaussian densities - Relations, documents, semantic frames - Non-NLP domains for density representations