Semi-Supervised Learning

Semi-Supervised Learning: getting more for less in natural language processing and beyond. Xiaojin (Jerry) Zhu, School of Computer Science, Carnegie Mellon University.

Semi-supervised learning. Many human language tasks involve classification, and classifiers need labeled data to train; labeled data are scarce while unlabeled data are abundant, yet traditional classifiers cannot use unlabeled data. My interest (semi-supervised learning): develop classification methods that can use both labeled and unlabeled data for human language processing.

Motivating example 1: speech recognition. Classify sounds into words. We need sound / word transcript pairs (labeled data) to train, and they are hard to obtain: transcription is slow, about 400 times real-time at the phonetic level and 10 times real-time at the word level. Unlabeled data, sounds alone, are easy to get: record the radio, or a call center. Are the unlabeled data useful?

Motivating example 2: parsing. Classify sentences into parse trees, e.g. a parse of "I saw a falcon with a telescope" with nodes S, NP, VP, PP, PN, VB, DET, N, P. We need sentence / parse tree pairs (a treebank, labeled data) to train, and treebanks are hard to construct (Hwa, 2003): English, 40,000 sentences, 5 years; Chinese, 4,000 sentences, 2 years. Unlabeled data, sentences without annotation, are very easy to get. Useful?

Motivating example 3: personalized news. Recommend news that interests you, beyond keywords; classify news articles as interesting or not. We need article / preference pairs to train, but a typical user is unlikely to label many articles, while unlabeled news articles are abundant. Useful?

The message: unlabeled data can improve classification.

Why unlabeled data might help. Example: classify astronomy vs. travel articles. Articles are represented by content-word occurrence vectors, and article similarity is measured by content-word overlap. [Figure: articles d1-d4 represented over astronomy words (asteroid, bright, comet, year, zodiac, ...) and travel words (airport, bike, camp, yellowstone, zion).]

Why labeled data alone might fail. [Figure: the labeled articles d1 and d2 share no content words with the test articles d3 and d4: no overlap!] This tends to happen when labeled data is scarce.

Unlabeled data are stepping stones. [Figure: unlabeled articles d5-d9 bridge the vocabulary gap between d1, d2 and d3, d4.] We observe direct similarity from features (d1 ~ d5, d5 ~ d6, etc.), assume that similar features imply the same label, and labels propagate via the unlabeled articles, an indirect similarity.

Unlabeled data are stepping stones. Arrange the l labeled and u unlabeled (= test) points in a graph. Nodes: the n = l + u points. Edges: the direct similarity W_ij, e.g. the number of overlapping words (in general, a decreasing function of the distance ‖x_i − x_j‖). We want to infer indirect similarity, taking all paths into account.
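
As a concrete illustration of this graph construction, here is a minimal numpy sketch that builds a k-nearest-neighbour similarity graph with Gaussian weights on Euclidean distance. The function name, the Gaussian weight, and the parameters k and sigma are illustrative assumptions; the talk's text example uses word-overlap counts instead.

```python
import numpy as np

def similarity_graph(X, k=10, sigma=1.0):
    """Build a symmetric kNN similarity graph W from feature vectors X (n x d).

    Weights are a decreasing function of distance, here exp(-||xi - xj||^2 / sigma^2).
    """
    n = X.shape[0]
    # pairwise squared Euclidean distances
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / sigma ** 2)
    np.fill_diagonal(W, 0.0)
    # keep only the k strongest neighbours of each node, then symmetrize
    keep = np.zeros_like(W, dtype=bool)
    for i in range(n):
        keep[i, np.argsort(-W[i])[:k]] = True
    return np.where(keep | keep.T, W, 0.0)
```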

One way to use labeled and unlabeled data (Zhu and Ghahramani, 2002). Input: the n×n graph weights W and labels Y_l ∈ {0,1}^l. Create the matrix P_ij = W_ij / Σ_k W_ik. Repeat until f converges: clamp the labeled data, f_l = Y_l; propagate, f ← P f. f converges to a unique solution, the harmonic function, with 0 ≤ f ≤ 1 (soft labels).
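
A minimal sketch of the propagation loop just described, assuming binary labels, a connected graph, and that the labeled points come first in W; names and defaults are illustrative.

```python
import numpy as np

def label_propagation(W, Y_l, l, tol=1e-6, max_iter=10000):
    """Label propagation (Zhu & Ghahramani, 2002): clamp, propagate, repeat.

    W:   (n, n) symmetric weights, the l labeled points ordered first.
    Y_l: (l,) labels in {0, 1}.
    Returns soft labels f in [0, 1] for all n points (the harmonic function).
    """
    P = W / W.sum(axis=1, keepdims=True)   # P_ij = W_ij / sum_k W_ik
    f = np.zeros(W.shape[0])
    f[:l] = Y_l                            # clamp labeled data: f_l = Y_l
    for _ in range(max_iter):
        f_new = P @ f                      # propagate: f <- P f
        f_new[:l] = Y_l                    # re-clamp the labeled points
        if np.abs(f_new - f).max() < tol:  # stop once f has converged
            break
        f = f_new
    return f
```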

An electric network interpretation (Zhu, Ghahramani and Lafferty, ICML 2003). The harmonic function f is the voltage at the nodes when the edges are resistors with R_ij = 1/W_ij and a 1-volt battery connects the labeled nodes (+1 volt for label 1, 0 volts for label 0). Indirect similarity: two nodes have similar voltage if many paths exist between them.

Closed form solution for the harmonic function. Define the diagonal degree matrix D, D_ii = Σ_j W_ij, and the graph Laplacian Δ = D − W. Then f_u = −Δ_uu^{-1} Δ_ul Y_l. Δ is the graph version of the continuous Laplacian operator ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z². Harmonic: Δf = 0 with Dirichlet boundary conditions on the labeled data; current in-flow = out-flow at any node (Kirchhoff's law); each unlabeled point is the average of its neighbors, f_u(i) = Σ_{j~i} W_ij f(j) / Σ_{j~i} W_ij.
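
The closed form can be computed directly; a sketch under the same labeled-first ordering assumption, using a linear solve rather than an explicit inverse.

```python
import numpy as np

def harmonic_function(W, Y_l, l):
    """Closed-form harmonic function f_u = -Δ_uu^{-1} Δ_ul Y_l (sketch)."""
    D = np.diag(W.sum(axis=1))
    L = D - W                              # graph Laplacian Δ = D - W
    f_u = -np.linalg.solve(L[l:, l:], L[l:, :l] @ Y_l)
    return f_u                             # soft labels on the unlabeled points
```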

Text categorization with harmonic functions. [Figure: accuracy on baseball vs. hockey, PC vs. Mac, and religion vs. atheism.] 50 labeled articles, about 2000 unlabeled articles, 10-NN graph.

Digit recognition with harmonic functions. [Figure: two digit images that are not similar under pixel-wise Euclidean distance, but are indirectly similar through stepping-stone images.]

Digit recognition with harmonic functions. [Figure: accuracy on 1 vs. 2 and on all 10 digits.] 50 labeled images, about 4000 unlabeled images, 10-NN graph.

Practical concerns about harmonic functions: does it scale? The closed form involves a matrix inversion, f_u = −Δ_uu^{-1} Δ_ul Y_l, which is O(u³); think of millions of crawled web pages. Solution 1: use iterative methods, such as the label propagation algorithm (slow), loopy belief propagation, or conjugate gradient. Solution 2: reduce the problem size, either by using a random small unlabeled subset (Delalleau et al., 2005) or with harmonic mixtures.
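
For the iterative route, one plausible sketch is to hand the same linear system Δ_uu f_u = −Δ_ul Y_l to a sparse conjugate-gradient solver; scipy is an assumed tool here, not something the talk prescribes.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def harmonic_function_cg(W_sparse, Y_l, l):
    """Solve Δ_uu f_u = -Δ_ul Y_l by conjugate gradients; no u x u inverse is formed."""
    W = sp.csr_matrix(W_sparse)
    d = np.asarray(W.sum(axis=1)).ravel()
    L = (sp.diags(d) - W).tocsr()          # sparse graph Laplacian
    b = -(L[l:, :l] @ Y_l)                 # right-hand side -Δ_ul Y_l
    f_u, info = cg(L[l:, l:], b)           # iterative solve on the unlabeled block
    return f_u
```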

Harmonic mixtures (Zhu and Lafferty, 2005). Fit the unlabeled data with a mixture model, e.g. Gaussian mixtures for images or multinomial mixtures for documents, using EM or other methods; M mixture components, here M = 30 while u ≈ 1000. Learn soft labels for the mixture components, not for the unlabeled points.

Harmonic mixtures learn labels for the mixture components. Assume mixture component labels λ_1, ..., λ_M; the labels on the unlabeled points are determined by the mixture model. The mixture model defines responsibilities R, with R_im = p(m | x_i) and f(i) = Σ_{m=1}^M R_im λ_m. Learn λ such that f is closest to harmonic: a convex optimization with the closed-form solution λ = −(R^T Δ_uu R)^{-1} R^T Δ_ul Y_l.
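
A sketch of this closed form, assuming the responsibilities R (u × M) have already been obtained from an EM fit of the mixture (not shown); names and the labeled-first ordering are assumptions.

```python
import numpy as np

def harmonic_mixture_labels(W, R, Y_l, l):
    """Component labels λ = -(R^T Δ_uu R)^{-1} R^T Δ_ul Y_l, and f_u = R λ (sketch)."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    A = R.T @ L[l:, l:] @ R                # M x M system instead of u x u
    lam = -np.linalg.solve(A, R.T @ (L[l:, :l] @ Y_l))
    return lam, R @ lam                    # component labels and point labels f_u
```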

Harmonic mixtures. [Figure: the mixture component labels λ follow the graph.]

Harmonic mixtures: computational savings. Computation on the unlabeled data: harmonic mixtures compute f_u = −R (R^T Δ_uu R)^{-1} R^T Δ_ul Y_l, inverting an M×M matrix, whereas the original harmonic function computes f_u = −Δ_uu^{-1} Δ_ul Y_l, inverting a u×u matrix. Harmonic mixtures cost O(M³), much cheaper than O(u³); harmonic mixtures can handle large problems.

From harmonic functions to kernels. Are harmonic functions too specialized? I will show you the kernel behind the harmonic function. Kernels are a general, important concept in machine learning: early work on the foundations of kernels goes back to Wahba et al. in the 1970s, and kernels are used in many learning algorithms, e.g. support vector machines. On a graph, a kernel is a symmetric, positive semi-definite n×n matrix. I will then give you an even better kernel.

The kernel behind harmonic functions: K = Δ^{-1}, the covariance matrix, where K_ij is an indirect similarity. The direct similarity W_ij may be small, but K_ij will be large if there are many paths between i and j. K can be used with many kernel machines: K + support vector machine = semi-supervised SVM.
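
A sketch of plugging this graph kernel into an off-the-shelf SVM via a precomputed Gram matrix; scikit-learn is an assumed tool, and the pseudo-inverse is used because Δ is singular (the constant vector has eigenvalue 0), a detail the slide does not spell out.

```python
import numpy as np
from sklearn.svm import SVC

def graph_kernel_svm(W, Y_l, labeled_idx, unlabeled_idx):
    """Semi-supervised SVM: graph kernel K = Δ^{-1} (here a pseudo-inverse) + SVC."""
    D = np.diag(W.sum(axis=1))
    K = np.linalg.pinv(D - W)              # K_ij: indirect similarity over all paths
    svm = SVC(kernel='precomputed')
    svm.fit(K[np.ix_(labeled_idx, labeled_idx)], Y_l)
    return svm.predict(K[np.ix_(unlabeled_idx, labeled_idx)])
```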

Kernels should encourage smooth eigenvectors. Graph spectrum: Δ = Σ_{k=1}^n λ_k φ_k φ_k^T. A small eigenvalue means a smooth eigenvector, since Σ_{i~j} W_ij (φ_k(i) − φ_k(j))² = λ_k. Smooth eigenvectors are good for semi-supervised learning, so kernels should give smooth eigenvectors large weights: the Laplacian is Δ = Σ_k λ_k φ_k φ_k^T, and the harmonic kernel is K = Δ^{-1} = Σ_k (1/λ_k) φ_k φ_k^T.

General semi-supervised kernels. Δ^{-1} is not the only semi-supervised kernel, and may not be the best. General principle for creating semi-supervised kernels: K = Σ_i r(λ_i) φ_i φ_i^T, where r(λ) should be large when λ is small, to encourage smooth eigenvectors. Specific choices of r() lead to known kernels: the harmonic function kernel, r(λ) = 1/λ; the diffusion kernel, r(λ) = exp(−σ² λ); the random walk kernel, r(λ) = (α − λ)^p. Is there a best r()? Yes, as measured by kernel alignment.
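
A sketch of building K = Σ_i r(λ_i) φ_i φ_i^T from the Laplacian's eigendecomposition; the particular transforms and their parameter values below are illustrative.

```python
import numpy as np

def spectral_kernel(L, r):
    """K = Σ_i r(λ_i) φ_i φ_i^T for a spectral transform r (sketch)."""
    lam, phi = np.linalg.eigh(L)           # eigenvalues λ_i (ascending), columns φ_i
    return (phi * r(lam)) @ phi.T

# transforms named on the slide; clipping the zero eigenvalue of the constant
# eigenvector in the harmonic case is an implementation assumption
harmonic_r    = lambda lam: np.where(lam > 1e-10, 1.0 / np.maximum(lam, 1e-10), 0.0)
diffusion_r   = lambda lam, sigma=1.0: np.exp(-sigma**2 * lam)
random_walk_r = lambda lam, alpha=2.0, p=2: (alpha - lam)**p
```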

Alignment measures kernel quality. It scores a kernel by its alignment to the labeled data Y_l: align(K, Y_l) = ⟨K_ll, Y_l Y_l^T⟩ / (‖K_ll‖ ‖Y_l Y_l^T‖), an extension of the cosine angle between vectors. High alignment is related to good generalization performance, and it leads to a convex optimization problem.
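
A sketch of the alignment score itself; the ±1 label convention for Y_l Y_l^T is an assumption (the standard one for kernel-target alignment).

```python
import numpy as np

def alignment(K_ll, y_l):
    """align(K, Y_l) = <K_ll, Y_l Y_l^T> / (||K_ll||_F ||Y_l Y_l^T||_F) (sketch)."""
    T = np.outer(y_l, y_l)                 # target matrix from labels in {-1, +1}
    return (K_ll * T).sum() / (np.linalg.norm(K_ll) * np.linalg.norm(T))
```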

Finding the best kernel (Zhu, Kandola, Ghahramani and Lafferty, NIPS 2004): the order-constrained semi-supervised kernel. max_r align(K, Y_l) subject to K = Σ_i r_i φ_i φ_i^T and r_1 ≥ ... ≥ r_n ≥ 0. The order constraints r_1 ≥ ... ≥ r_n encourage smoothness; the problem is a convex optimization, and r is nonparametric.
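
One plausible encoding of this problem as a convex program, using cvxpy (a tool not named in the talk): since alignment is scale-invariant, the Frobenius norm of K_ll is fixed and the inner product with the target matrix is maximized. The normalization choice and variable names are assumptions, so treat this as a sketch rather than the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

def order_constrained_kernel(L, y_l, l):
    """Maximize <K_ll, Y_l Y_l^T> over K = Σ_i r_i φ_i φ_i^T with r_1 >= ... >= r_n >= 0."""
    lam, phi = np.linalg.eigh(L)           # ascending eigenvalues: φ_1 is smoothest
    T = np.outer(y_l, y_l)                 # labels assumed in {-1, +1}
    r = cp.Variable(L.shape[0])
    Phi_l = phi[:l, :]                     # eigenvector entries on the labeled points
    K_ll = Phi_l @ cp.diag(r) @ Phi_l.T    # labeled block of K, affine in r
    constraints = [r[:-1] >= r[1:],        # order constraints (encourage smoothness)
                   r[-1] >= 0,
                   cp.norm(K_ll, 'fro') <= 1]   # fix the scale; alignment is scale-invariant
    cp.Problem(cp.Maximize(cp.sum(cp.multiply(K_ll, T))), constraints).solve()
    return (phi * r.value) @ phi.T         # the full order-constrained kernel
```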

The order constrained kernel improves alignment and accuracy. Text categorization (religion vs. atheism), 50 labeled and 2000 unlabeled articles.

kernel              order   harmonic   RBF
alignment           0.31    0.17       0.04
accuracy (SVM, %)   84.5    80.4       69.3

We now have good kernels for semi-supervised learning.

Other research (1): graph hyperparameter learning. What if we don't know W_ij? Set up hyperparameters, W_ij = exp(−Σ_d (x_id − x_jd)² / α_d), and learn α with Bayesian evidence maximization. [Figure: the average 7, the average 9, and the learned α.]

Other research (2): sequences and other structured data (Lafferty, Zhu and Liu, 2004). What if x_1, ..., x_n form sequences, as in speech, natural language processing, biosequence analysis, etc.? Conditional random fields (CRFs) model y given x; kernel CRFs; and kernel CRFs + semi-supervised kernels.

Other research (3): active learning (Zhu, Lafferty and Ghahramani, 2003b). What if the computer can ask for labels? Smart queries are not necessarily the most ambiguous points. [Figure: a small graph with labeled nodes 1 and 0, a point a with f = 0.5 and a point B with f = 0.4.] Active learning + semi-supervised learning, with a fast algorithm.

Related work in semi-supervised learning, based on different assumptions:

method              assumption                    references
graph-based         similar feature, same label   this talk; mincuts (Blum and Chawla, 2001); normalized Laplacian (Zhou et al., 2003); regularization (Belkin et al., 2004)
mixture model, EM   generative model              (Nigam et al., 2000)
transductive SVM    low density separation        (Joachims, 1999)
co-training         feature split                 (Blum and Mitchell, 1998)

Semi-supervised learning has so far received relatively little attention in the statistics literature.

Summary. Unlabeled data can improve classification. The methods have reached the stage where we can apply them to real-world human language tasks.

Future plans. Application to human language tasks: document categorization, information extraction, text learning from the WWW, sentiment and polarity analysis, social networks, and speech recognition (acoustic and language modeling). Continued research on semi-supervised learning: structured data, ranking, clustering, and exploring different assumptions.

Future plans. Explore novel machine learning approaches for text mixed with other modalities, e.g. images; speech and multimodal user interfaces; and bioinformatics collaboration.

Thank You

References

M. Belkin, I. Matveeva and P. Niyogi. Regularization and Semi-supervised Learning on Large Graphs. COLT 2004.
A. Blum and S. Chawla. Learning from Labeled and Unlabeled Data using Graph Mincuts. ICML 2001.
A. Blum and T. Mitchell. Combining Labeled and Unlabeled Data with Co-training. COLT 1998.
O. Delalleau, Y. Bengio and N. Le Roux. Efficient Non-Parametric Function Induction in Semi-Supervised Learning. AISTATS 2005.
R. Hwa. A Continuum of Bootstrapping Methods for Parsing Natural Languages. 2003.
T. Joachims. Transductive Inference for Text Classification using Support Vector Machines. ICML 1999.
K. Nigam, A. McCallum, S. Thrun and T. Mitchell. Text Classification from Labeled and Unlabeled Documents using EM. Machine Learning, 2000.
D. Zhou, O. Bousquet, T. N. Lal, J. Weston and B. Schölkopf. Learning with Local and Global Consistency. NIPS 2003.
X. Zhu and J. Lafferty. Harmonic Mixtures. Submitted, 2005.
X. Zhu, J. Kandola, Z. Ghahramani and J. Lafferty. Nonparametric Transforms of Graph Kernels for Semi-Supervised Learning. NIPS 2004.
J. Lafferty, X. Zhu and Y. Liu. Kernel Conditional Random Fields: Representation and Clique Selection. ICML 2004.
X. Zhu, Z. Ghahramani and J. Lafferty. Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. ICML 2003.
X. Zhu, J. Lafferty and Z. Ghahramani. Combining Active Learning and Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. ICML 2003 workshop.
X. Zhu and Z. Ghahramani. Learning from Labeled and Unlabeled Data with Label Propagation. Technical Report CMU-CALD-02-106, 2002.

The probabilistic model behind harmonic functions. Define a random field p(f) ∝ exp(−E(f)) with energy E(f) = Σ_{i~j} W_ij (f_i − f_j)² = f^T Δ f; low energy means good label propagation. [Figure: three example labelings of a small graph with energies E(f) = 4, 2, 1.] If f ∈ {0, 1} is discrete, this is a standard Markov random field (Boltzmann machine) and inference is hard.
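
A tiny numerical check of the energy identity just stated (an illustration, not from the talk): the explicit edge sum and the quadratic form f^T Δ f coincide.

```python
import numpy as np

def energy(W, f):
    """E(f) = Σ_{i~j} W_ij (f_i - f_j)^2 = f^T Δ f (each edge counted once)."""
    L = np.diag(W.sum(axis=1)) - W
    quad = f @ L @ f
    # explicit edge sum; the double sum over ordered pairs visits each edge twice,
    # hence the factor 1/2
    edge_sum = 0.5 * np.sum(W * (f[:, None] - f[None, :]) ** 2)
    assert np.isclose(quad, edge_sum)
    return quad
```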

The probabilistic model behind harmonic functions: Gaussian random fields (Zhu, Ghahramani and Lafferty, ICML 2003). With the continuous relaxation f ∈ R we get a Gaussian random field: p(f) ∝ exp(−E(f)) = exp(−f^T Δ f) is an n-dimensional Gaussian with precision (inverse covariance) matrix Δ. Harmonic functions are the mean of the Gaussian random field, and Gaussian random fields are Gaussian processes on finite data.

A random walk interpretation of harmonic functions. The harmonic function is f_i = P(hit a node labeled 1 | start from i), for a random walk that moves from node i to node j with probability P_ij and stops when it hits a labeled node. Indirect similarity: two nodes are similar if their random walks have the same fate. [Figure: a walk from node i absorbed at a node labeled 1 or a node labeled 0.]
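
A Monte-Carlo sketch of this interpretation (illustrative only, not an algorithm from the talk): estimate the probability of absorption at a 1-labeled node by simulating walks.

```python
import numpy as np

def hit_probability(W, labels, start, n_walks=2000, seed=0):
    """Estimate f_i = P(hit a node labeled 1 before a node labeled 0 | start at i).

    labels: array with 0.0 / 1.0 on labeled nodes and np.nan on unlabeled nodes.
    """
    rng = np.random.default_rng(seed)
    P = W / W.sum(axis=1, keepdims=True)   # transition probabilities P_ij
    hits = 0.0
    for _ in range(n_walks):
        i = start
        while np.isnan(labels[i]):         # keep walking until a labeled node is hit
            i = rng.choice(len(labels), p=P[i])
        hits += labels[i]                  # 1 if absorbed at a 1-labeled node
    return hits / n_walks
```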

Graph spectrum: Δ = Σ_{i=1}^n λ_i φ_i φ_i^T.

Learning component labels with EM. [Figure: labels for the mixture components do not follow the graph (Nigam et al., 2000).]