Is Manifold Learning for Toy Data only?


1 Is Manifold Learning for Toy Data only? Marina Meilă, University of Washington. MMDS Workshop 2016

2 Outline: What is non-linear dimension reduction?; Metric Manifold Learning; Estimating the Riemannian metric; Riemannian Relaxation; Scalable manifold learning (megaman); An application to scientific data

3 When to do (non-linear) dimension reduction: high-dimensional data p ∈ R^D (e.g. D = 64 × 64) can be described by a small number d of continuous parameters. Usually, large sample size n.

4 When to do (non-linear) dimension reduction. Why? To save space and computation: n × D data matrix → n × s, s ≪ D. To understand the data better: preserve large scale features, suppress fine scale features. To use it afterwards in (prediction) tasks.

5 When to do (non-linear) dimension reduction. Why? To save space and computation: n × D data matrix → n × s, s ≪ D. To understand the data better: preserve large scale features, suppress fine scale features. To use it afterwards in (prediction) tasks.

6 When to do (non-linear) dimension reduction. (Figure: Richard Powell, The Hertzsprung-Russell Diagram, CC BY-SA 2.5.) Why? To save space and computation: n × D data matrix → n × s, s ≪ D. To understand the data better: preserve large scale features, suppress fine scale features. To use it afterwards in (prediction) tasks.

7 How? Brief intro to manifold learning algorithms. Input: data p_1, ..., p_n ∈ R^D, embedding dimension m, neighborhood/scale parameter ε.

8 How? Brief intro to manifold learning algorithms. Input: data p_1, ..., p_n ∈ R^D, embedding dimension m, neighborhood/scale parameter ε. Construct neighborhood graph: p, p' neighbors iff ‖p − p'‖ ≤ ε.

9 How? Brief intro to manifold learning algorithms. Input: data p_1, ..., p_n ∈ R^D, embedding dimension m, neighborhood/scale parameter ε. Construct neighborhood graph: p, p' neighbors iff ‖p − p'‖ ≤ ε. Construct an n × n matrix whose leading eigenvectors are the coordinates φ(p_{1:n}).

10 How? Brief intro to manifold learning algorithms. Input: data p_1, ..., p_n ∈ R^D, embedding dimension m, neighborhood/scale parameter ε. Construct neighborhood graph: p, p' neighbors iff ‖p − p'‖ ≤ ε. Construct an n × n matrix whose leading eigenvectors are the coordinates φ(p_{1:n}). Laplacian Eigenmaps [Belkin & Niyogi 02]: Construct similarity matrix S = [S_{pp'}]_{p,p' ∈ D} with S_{pp'} = exp(−‖p − p'‖²/ε) if p, p' neighbors (0 otherwise). Construct Laplacian matrix L = I − T^{-1}S with T = diag(S·1). Calculate φ_1, ..., φ_m = eigenvectors of L (smallest eigenvalues). Coordinates of p ∈ D are (φ_1(p), ..., φ_m(p)).
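A minimal dense sketch of these Laplacian Eigenmaps steps in python (numpy/scipy). It assumes the neighborhood radius and the kernel bandwidth are both set by the single illustrative parameter eps, and that eps is large enough for the graph to be connected; a production implementation would use sparse matrices and approximate neighbor search, as discussed later in the talk.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, m=2, eps=1.0):
    """Dense sketch of the steps above.  X: (n, D) data; returns (n, m) coordinates."""
    D2 = cdist(X, X, "sqeuclidean")         # pairwise squared distances
    S = np.exp(-D2 / eps)                   # Gaussian similarities
    S[np.sqrt(D2) > eps] = 0.0              # keep only eps-neighbors (radius = bandwidth here)
    T = S.sum(axis=1)                       # degrees, T = diag(S 1)
    # The random-walk Laplacian L = I - T^{-1} S is not symmetric, so solve the
    # equivalent generalized symmetric problem (T - S) v = lambda T v instead.
    vals, vecs = eigh(np.diag(T) - S, np.diag(T))
    # eigenvalues come back in ascending order; drop the trivial constant eigenvector
    return vecs[:, 1:m + 1]
```

scikit-learn's sklearn.manifold.SpectralEmbedding provides a maintained implementation of the same idea.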

11 How? Brief intro to manifold learning algorithms. Input: data p_1, ..., p_n ∈ R^D, embedding dimension m, neighborhood/scale parameter ε. Construct neighborhood graph: p, p' neighbors iff ‖p − p'‖ ≤ ε. Construct an n × n matrix whose leading eigenvectors are the coordinates φ(p_{1:n}). Isomap [Tenenbaum, de Silva & Langford 00]: Find all shortest paths in the neighborhood graph, construct the matrix of squared distances M = [distance²_{pp'}]; use M and Multi-Dimensional Scaling (MDS) to obtain m-dimensional coordinates for p ∈ D.
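A comparable sketch of Isomap, using scipy's graph shortest paths followed by classical MDS on the squared graph distances; eps is again a hypothetical neighborhood radius, and the neighborhood graph is assumed connected.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def isomap(X, m=2, eps=1.0):
    """Dense sketch: graph shortest paths + classical MDS.  X: (n, D) data."""
    D = cdist(X, X)                          # Euclidean distances
    W = np.where(D <= eps, D, np.inf)        # keep only edges within the eps-neighborhood
    G = shortest_path(W, method="D")         # all-pairs graph ("geodesic") distances
    M = G ** 2                               # matrix of squared distances
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ M @ J                     # classical MDS Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:m]         # m largest eigenpairs
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```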

12 A toy example (the Swiss Roll with a hole): points in D = 3 dimensions; same points reparametrized in 2D.

13 A toy example (the Swiss Roll with a hole): points in D = 3 dimensions; same points reparametrized in 2D. Panels: Input, Desired output.

14 Embedding in 2 dimensions by different manifold learning algorithms (Input shown for comparison).

15 How to evaluate the results objectively? Which of these embeddings are correct? If several are correct, how do we reconcile them? If not correct, what failed? Algorithms: Multidimensional Scaling (MDS), Principal Components (PCA), Isomap, Locally Linear Embedding (LLE), Hessian Eigenmaps (HE), Laplacian Eigenmaps (LE), Diffusion Maps (DM).

16 How to evaluate the results objectively? Which of these embeddings are correct? If several are correct, how do we reconcile them? If not correct, what failed? What if we have real data? Spectrum of a galaxy. Source: SDSS, Jake VanderPlas.

17 Outline: What is non-linear dimension reduction?; Metric Manifold Learning; Estimating the Riemannian metric; Riemannian Relaxation; Scalable manifold learning (megaman); An application to scientific data

18 Outline: What is non-linear dimension reduction?; Metric Manifold Learning; Estimating the Riemannian metric; Riemannian Relaxation; Scalable manifold learning (megaman); An application to scientific data

19 Preserving topology vs. preserving (intrinsic) geometry. Algorithm maps data p ∈ R^D → φ(p) = x ∈ R^m. Mapping φ: M → φ(M) is a diffeomorphism: preserves topology; often satisfied by embedding algorithms. Mapping φ preserves: distances along curves in M, angles between curves in M, areas, volumes.

20 Preserving topology vs. preserving (intrinsic) geometry. Algorithm maps data p ∈ R^D → φ(p) = x ∈ R^m. Mapping φ: M → φ(M) is a diffeomorphism: preserves topology; often satisfied by embedding algorithms. Mapping φ preserves: distances along curves in M, angles between curves in M, areas, volumes ... i.e. φ is an isometry. For most algorithms, in most cases, φ is not an isometry. Panels: preserves topology; preserves topology + intrinsic geometry.

21 Previously known results in geometric recovery. Positive results: Nash's Theorem: isometric embedding is possible; algorithm based on Nash's theorem (isometric embedding for very low d) [Verma 11]; Isomap recovers (only) flat manifolds isometrically; consistency results for the Laplacian and its eigenvectors [Hein & al 07, Coifman & Lafon 06, Ting & al 10, Gine & Koltchinskii 06] imply isometric recovery for LE, DM in special situations. Negative results: obvious negative examples; no affine recovery for normalized Laplacian algorithms [Goldberg & al 08]; sampling density distorts the geometry for LE [Coifman & Lafon 06].

22 Our approach: Metric Manifold Learning [Perrault-Joncas, M 10]. Given: mapping φ that preserves topology (true in many cases). Objective: augment φ with geometric information g so that (φ, g) preserves the geometry. (Joint work with Dominique Perrault-Joncas.)

23 Our approach: Metric Manifold Learning [Perrault-Joncas, M 10]. Given: mapping φ that preserves topology (true in many cases). Objective: augment φ with geometric information g so that (φ, g) preserves the geometry. g is the Riemannian metric. (Joint work with Dominique Perrault-Joncas.)

24 The Riemannian metric g. Mathematically: M = (smooth) manifold, p a point on M, T_pM = tangent subspace at p, g = Riemannian metric on M. g defines an inner product on T_pM: ⟨v, w⟩ = v^T g_p w for v, w ∈ T_pM and for p ∈ M. g is a symmetric and positive definite tensor field, also called the first fundamental form; (M, g) is a Riemannian manifold. Computationally: at each point p ∈ M, g_p is a positive definite matrix of rank d.
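In code the inner product defined by g_p is just a quadratic form; a tiny numpy illustration with a made-up positive definite g_p at one point:

```python
import numpy as np

# Hypothetical metric at a point p: any symmetric positive definite d x d matrix.
g_p = np.array([[2.0, 0.5],
                [0.5, 1.0]])
v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])

inner = v @ g_p @ w                     # <v, w> = v^T g_p w
len_v = np.sqrt(v @ g_p @ v)            # length of v as measured by g_p
len_w = np.sqrt(w @ g_p @ w)
cos_angle = inner / (len_v * len_w)     # angle between v and w under g_p
print(inner, len_v, cos_angle)
```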

25 All geometric quantities on M involve g. Volume element on the manifold: Vol(W) = ∫_W √det(g) dx^1 ... dx^d.

26 All geometric quantities on M involve g. Volume element on the manifold: Vol(W) = ∫_W √det(g) dx^1 ... dx^d. Length of curve c: l(c) = ∫_a^b √( Σ_{ij} g_{ij} (dx^i/dt)(dx^j/dt) ) dt.

27 All geometric quantities on M involve g. Volume element on the manifold: Vol(W) = ∫_W √det(g) dx^1 ... dx^d. Length of curve c: l(c) = ∫_a^b √( Σ_{ij} g_{ij} (dx^i/dt)(dx^j/dt) ) dt. Under a change of parametrization, g changes in a way that leaves geometric quantities invariant.
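A numeric sketch of the curve-length formula, discretizing the integral in a case where the answer is known: the equator of the unit sphere in spherical coordinates, where the round metric is diagonal.

```python
import numpy as np

# Curve with a known length: the equator of the unit sphere in spherical
# coordinates x = (theta, phi), parametrized as c(t) = (pi/2, t), t in [0, pi].
# In these coordinates the round metric is g(x) = diag(1, sin(theta)^2).
def g(x):
    theta, _ = x
    return np.diag([1.0, np.sin(theta) ** 2])

t = np.linspace(0.0, np.pi, 2001)
curve = np.column_stack([np.full_like(t, np.pi / 2), t])

# Discretize l(c) = int_a^b sqrt( sum_ij g_ij (dx^i/dt)(dx^j/dt) ) dt as a sum over segments.
length = 0.0
for k in range(len(t) - 1):
    dx = curve[k + 1] - curve[k]
    length += np.sqrt(dx @ g(curve[k]) @ dx)

print(length)   # ~ pi, the true length of the equator
```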

28 All geometric quantities on M involve g. Volume element on the manifold: Vol(W) = ∫_W √det(g) dx^1 ... dx^d. Length of curve c: l(c) = ∫_a^b √( Σ_{ij} g_{ij} (dx^i/dt)(dx^j/dt) ) dt. Under a change of parametrization, g changes in a way that leaves geometric quantities invariant. Current algorithms: estimate M. This talk: estimate g along with M (and in the same coordinates).

29 Problem formulation. Given: data set D = {p_1, ..., p_n} sampled from a manifold M ⊂ R^D; embedding {x_i = φ(p_i), p_i ∈ D} by e.g. LLE, Isomap, LE, ... Estimate G_i ∈ R^{m×m}, the (pushforward) Riemannian metric for p_i ∈ D in the embedding coordinates. The embedding {x_{1:n}, G_{1:n}} will preserve the geometry of the original data.

30 g for Sculpture Faces: n = 698 gray images of faces in D = 64 × 64 dimensions; head moves up/down and right/left. LTSA algorithm.

31 Isomap, LTSA, Laplacian Eigenmaps.

32 Calculating distances in the manifold M. Geodesic distance = shortest path on M; should be invariant to coordinate changes. Panels: Original, Isomap, Laplacian Eigenmaps.

33 Calculating distances in the manifold M (true distance d = 1.57)

Embedding          ‖f(p) − f(p')‖   Shortest path d_G   Metric d̂   Rel. error
Original data            ...               ...             ...         ... %
Isomap (s = ...)         ...               ...             ...         ... %
LTSA (s = ...)           ...               ...             ...         ... %
LE (s = ...)             ...               ...             ...         ... %
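One way a metric-corrected distance like the d̂ column can be computed, sketched here as an illustration of the idea rather than as the talk's exact procedure: sum segment lengths of the embedded shortest path, measuring each segment with the estimated metric G at that point (the path coordinates path_x and per-point metrics path_G are assumed given, e.g. from a shortest-path computation).

```python
import numpy as np

def metric_path_length(path_x, path_G):
    """Length of a discrete path x_0, ..., x_K in embedding coordinates, with each
    segment measured by the estimated metric G_k at its starting point."""
    length = 0.0
    for k in range(len(path_x) - 1):
        dx = path_x[k + 1] - path_x[k]
        length += np.sqrt(dx @ path_G[k] @ dx)
    return length
```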

34 Relation between g and Δ = Laplace-Beltrami operator on M. Proposition 1 (Differential geometric fact): Δf = √det(h) Σ_l ∂/∂x^l ( (1/√det(h)) Σ_k h^{lk} ∂f/∂x^k ), where h = g^{-1} (matrix inverse).

35 Relation between g and Δ = Laplace-Beltrami operator on M. Δ = div grad: on C²(R^d), Δf = Σ_j ∂²f/∂x_j²; on a weighted graph with similarity matrix S and t_p = Σ_{p'} S_{pp'}, Δ = diag{t_p} − S. Proposition 1 (Differential geometric fact): Δf = √det(h) Σ_l ∂/∂x^l ( (1/√det(h)) Σ_k h^{lk} ∂f/∂x^k ), where h = g^{-1} (matrix inverse).

36 Estimation of g. Proposition 2 (Main Result 1): Let Δ be the Laplace-Beltrami operator on M. Then h^{ij}(p) = ½ Δ[ (φ_i − φ_i(p)) (φ_j − φ_j(p)) ] |_{φ_i(p), φ_j(p)}, where h = g^{-1} (matrix inverse) and i, j = 1, 2, ..., m are embedding dimensions.

37 Algorithm to Estimate Riemannian metric g (Main Result 2). Given dataset D: 1. Preprocessing (construct neighborhood graph, ...). 2. Find an embedding φ of D into R^m. 3. Estimate the discretized Laplace-Beltrami operator L ∈ R^{n×n}. 4. Estimate H_p ≈ G_p^{-1} and G_p = H_p^† for all p ∈ D. Output (φ_p, G_p) for all p.

38 Algorithm to Estimate Riemannian metric g (Main Result 2). Given dataset D:
1. Preprocessing (construct neighborhood graph, ...)
2. Find an embedding φ of D into R^m
3. Estimate the discretized Laplace-Beltrami operator L
4. Estimate H_p ≈ G_p^{-1} and G_p = H_p^† for all p:
4.1 For i, j = 1 : m, H^{ij} = ½ [ L(φ_i · φ_j) − φ_i · (L φ_j) − φ_j · (L φ_i) ], where X · Y denotes the elementwise product of two vectors X, Y ∈ R^n
4.2 For p ∈ D, H_p = [H_p^{ij}]_{ij} and G_p = H_p^†
Output (φ_p, G_p) for all p
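A direct numpy sketch of step 4, taking a discretized Laplacian L and embedding coordinates Phi as given (their construction, and the sign/scaling conventions that make L approximate the continuous operator, are the job of the preceding steps):

```python
import numpy as np

def estimate_metric(L, Phi):
    """Sketch of step 4.  L: (n, n) discretized Laplace-Beltrami operator (sign and
    scaling conventions must match the continuous operator); Phi: (n, m) embedding
    coordinates, column i holding phi_i at the n data points.  Returns G: (n, m, m)."""
    n, m = Phi.shape
    H = np.zeros((n, m, m))
    LPhi = L @ Phi                               # L applied to every coordinate function
    for i in range(m):
        for j in range(m):
            # H^{ij} = 1/2 [ L(phi_i * phi_j) - phi_i * (L phi_j) - phi_j * (L phi_i) ]
            H[:, i, j] = 0.5 * (L @ (Phi[:, i] * Phi[:, j])
                                - Phi[:, i] * LPhi[:, j]
                                - Phi[:, j] * LPhi[:, i])
    # G_p is the (pseudo)inverse of H_p, which has rank d <= m in general
    G = np.array([np.linalg.pinv(H[p]) for p in range(n)])
    return G
```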

39 Metric Manifold Learning summary. Metric Manifold Learning = estimating the (pushforward) Riemannian metric G_i along with the embedding coordinates x_i. Why useful: Measures local distortion induced by any embedding algorithm (G_i = I_d when no distortion at p_i). Corrects distortion: integrating with the local volume/length units based on G_i; Riemannian Relaxation (coming next). Algorithm-independent geometry-preserving method: outputs of different algorithms on the same data are comparable.

40 Outline: What is non-linear dimension reduction?; Metric Manifold Learning; Estimating the Riemannian metric; Riemannian Relaxation; Scalable manifold learning (megaman); An application to scientific data

41 Riemannian Relaxation: sometimes we can dispense with g. Idea: if the embedding is isometric, then the push-forward metric is the identity matrix I_d.

42 Riemannian Relaxation: sometimes we can dispense with g. Idea: if the embedding is isometric, then the push-forward metric is the identity matrix I_d. Idea, formalized: measure distortion by loss = Σ_{i=1}^n ‖G_i − I_d‖², where G_i is the Riemannian metric estimate at point i and I_d is the identity matrix. Iteratively change the embedding x_{1:n} to minimize the loss.

43 Riemannian Relaxation: sometimes we can dispense with g. Idea: if the embedding is isometric, then the push-forward metric is the identity matrix I_d. Idea, formalized: measure distortion by loss = Σ_{i=1}^n ‖G_i − I_d‖², where G_i is the Riemannian metric estimate at point i and I_d is the identity matrix. Iteratively change the embedding x_{1:n} to minimize the loss. More details: the loss is non-convex; ‖·‖ is derived from the operator norm; extends to s > d embeddings with loss = Σ_{i=1}^n ‖G_i − U_i U_i^T‖²; extensions to principal curves and surfaces [Ozertem, Erdogmus 11], subsampling, non-uniform sampling densities.

44 Riemannian Relaxation: sometimes we can dispense with g. Idea: if the embedding is isometric, then the push-forward metric is the identity matrix I_d. Idea, formalized: measure distortion by loss = Σ_{i=1}^n ‖G_i − I_d‖², where G_i is the Riemannian metric estimate at point i and I_d is the identity matrix. Iteratively change the embedding x_{1:n} to minimize the loss. More details: the loss is non-convex; ‖·‖ is derived from the operator norm; extends to s > d embeddings with loss = Σ_{i=1}^n ‖G_i − U_i U_i^T‖²; extensions to principal curves and surfaces [Ozertem, Erdogmus 11], subsampling, non-uniform sampling densities. Implementation: initialization with e.g. Laplacian Eigenmaps; projected gradient descent to a (local) optimum.
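A minimal sketch of the distortion loss, using the Frobenius norm for simplicity instead of the operator-norm-derived loss mentioned above, and reusing the estimate_metric sketch from the metric-estimation slide:

```python
import numpy as np

def distortion_loss(L, X):
    """loss = sum_i ||G_i - I_d||^2 for embedding X and discrete Laplacian L.
    Frobenius norm used for simplicity (the talk's loss is derived from the operator
    norm); estimate_metric is the sketch from the metric-estimation slide."""
    n, d = X.shape
    G = estimate_metric(L, X)                     # (n, d, d) metric estimates
    I = np.eye(d)
    return sum(np.linalg.norm(G[i] - I, "fro") ** 2 for i in range(n))

# Riemannian Relaxation: initialize X (e.g. with Laplacian Eigenmaps), then take
# (projected) gradient steps on X to decrease distortion_loss(L, X).
```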

45 Riemannian Relaxation of a deformed sphere. Panels: target, initialization, algorithm output.

46 Riemannian Relaxation of a deformed sphere. Panels: target, initialization, algorithm output. Mean-squared error and loss vs. noise amplitude.

47 Outline: What is non-linear dimension reduction?; Metric Manifold Learning; Estimating the Riemannian metric; Riemannian Relaxation; Scalable manifold learning (megaman); An application to scientific data

48 Outline: What is non-linear dimension reduction?; Metric Manifold Learning; Estimating the Riemannian metric; Riemannian Relaxation; Scalable manifold learning (megaman); An application to scientific data

49 Scaling: statistical viewpoint. Rates of convergence as n → ∞. Assume data sampled from a manifold M with intrinsic dimension d; M and the sampling distribution are well behaved; the kernel bandwidth decreases slowly with n. Known rates: the graph Laplacian, its eigenvectors, and the manifold itself (minimax) are all estimated at polynomial rates in n whose exponents degrade with the intrinsic dimension d [Singer 06, Wang 15, Genovese et al. 12].

50 Scaling: statistical viewpoint. Rates of convergence as n → ∞. Assume data sampled from a manifold M with intrinsic dimension d; M and the sampling distribution are well behaved; the kernel bandwidth decreases slowly with n. Known rates: the graph Laplacian, its eigenvectors, and the manifold itself (minimax) are all estimated at polynomial rates in n whose exponents degrade with the intrinsic dimension d [Singer 06, Wang 15, Genovese et al. 12]. Estimating M and Δ accurately requires big data.

51 Scaling: computational viewpoint. Laplacian Eigenmaps revisited: 1. Construct similarity matrix S = [S_{pp'}]_{p,p' ∈ D} with S_{pp'} = exp(−‖p − p'‖²/ε) if p, p' neighbors (0 otherwise). 2. Construct Laplacian matrix L = I − T^{-1}S with T = diag(S·1). 3. Calculate φ_1, ..., φ_m = eigenvectors of L (smallest eigenvalues). 4. Coordinates of p ∈ D are (φ_1(p), ..., φ_m(p)).

52 Scaling: computational viewpoint. Laplacian Eigenmaps revisited: 1. Construct similarity matrix S = [S_{pp'}]_{p,p' ∈ D} with S_{pp'} = exp(−‖p − p'‖²/ε) if p, p' neighbors (0 otherwise). 2. Construct Laplacian matrix L = I − T^{-1}S with T = diag(S·1). 3. Calculate φ_1, ..., φ_m = eigenvectors of L (smallest eigenvalues). 4. Coordinates of p ∈ D are (φ_1(p), ..., φ_m(p)). Bottleneck: nearest neighbor search in high dimensions.

53 Scaling: computational viewpoint. Laplacian Eigenmaps revisited: 1. Construct similarity matrix S = [S_{pp'}]_{p,p' ∈ D} with S_{pp'} = exp(−‖p − p'‖²/ε) if p, p' neighbors (0 otherwise). 2. Construct Laplacian matrix L = I − T^{-1}S with T = diag(S·1). 3. Calculate φ_1, ..., φ_m = eigenvectors of L (smallest eigenvalues). 4. Coordinates of p ∈ D are (φ_1(p), ..., φ_m(p)). Bottlenecks: nearest neighbor search in high dimensions; sparse matrix-vector multiplication.

54 Scaling: computational viewpoint. Laplacian Eigenmaps revisited: 1. Construct similarity matrix S = [S_{pp'}]_{p,p' ∈ D} with S_{pp'} = exp(−‖p − p'‖²/ε) if p, p' neighbors (0 otherwise). 2. Construct Laplacian matrix L = I − T^{-1}S with T = diag(S·1). 3. Calculate φ_1, ..., φ_m = eigenvectors of L (smallest eigenvalues). 4. Coordinates of p ∈ D are (φ_1(p), ..., φ_m(p)). Bottlenecks: nearest neighbor search in high dimensions; sparse matrix-vector multiplication; principal eigenvectors of a sparse, symmetric, (well conditioned) matrix.
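A sparse version of the same pipeline, sketched with scikit-learn's exact k-NN graph (FLANN-style approximate search is the large-scale counterpart) and scipy's sparse symmetric eigensolver; the parameter choices are illustrative and the graph is assumed connected.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def sparse_laplacian_eigenmaps(X, m=2, k=15, eps=1.0):
    # 1. Sparse similarity matrix from a k-NN graph (exact here; FLANN-style
    #    approximate search is what makes this step scale).
    D = kneighbors_graph(X, n_neighbors=k, mode="distance")
    D = D.maximum(D.T)                                  # symmetrize the graph
    S = D.copy()
    S.data = np.exp(-S.data ** 2 / eps)                 # Gaussian weights on edges
    # 2. Symmetrically normalized Laplacian (keeps the matrix symmetric for eigsh).
    t = np.asarray(S.sum(axis=1)).ravel()
    Tinv_sqrt = sparse.diags(1.0 / np.sqrt(t))
    L_sym = sparse.identity(len(t)) - Tinv_sqrt @ S @ Tinv_sqrt
    # 3. A few smallest eigenpairs of a sparse symmetric matrix; at scale one would
    #    use shift-invert or lobpcg with an algebraic-multigrid preconditioner.
    vals, vecs = eigsh(L_sym, k=m + 1, which="SM")
    # 4. Coordinates: convert back to random-walk eigenvectors, drop the trivial one.
    return Tinv_sqrt @ vecs[:, 1:m + 1]
```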

55 Manifold Learning with millions of points: megaman. With James McQueen, Jake VanderPlas, Jerry Zhang, Grace Telford. Implemented in python, compatible with scikit-learn. Designed for performance: sparse representation as default; incorporates the state-of-the-art FLANN package (Fast Approximate Nearest Neighbor search); uses amg, lobpcg fast sparse eigensolvers for SDP matrices; exposes/caches intermediate states (e.g. data set index, distances, Laplacian, eigenvectors).
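megaman follows scikit-learn conventions; as a concrete point of reference for that interface, the corresponding call with scikit-learn's own SpectralEmbedding (shown here because its API is stable and is the one megaman is designed to be compatible with) looks like this:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

X = np.random.rand(2000, 50)        # placeholder for a real high-dimensional data set

# k-NN graph -> graph Laplacian -> smallest eigenvectors, as in the previous slides.
embedder = SpectralEmbedding(n_components=2, affinity="nearest_neighbors", n_neighbors=15)
Y = embedder.fit_transform(X)       # (2000, 2) embedding coordinates
```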

56 James McQueen, Jake VanderPlas, Jerry Zhang, Grace Telford.

57 Scalable Manifold Learning in python with megaman. English words and phrases taken from Google news (3,000,000 phrases originally represented in 300 dimensions by the Deep Neural Network word2vec [Mikolov et al]). Main sample of galaxy spectra from the Sloan Digital Sky Survey (675,000 spectra originally in 3750 dimensions); preprocessed by Jake VanderPlas, figure by Grace Telford.

58 Scalable Manifold Learning in python with megaman. English words and phrases taken from Google news (3,000,000 phrases originally represented in 300 dimensions by the Deep Neural Network word2vec [Mikolov et al]). Main sample of galaxy spectra from the Sloan Digital Sky Survey (675,000 spectra originally in 3750 dimensions); preprocessed by Jake VanderPlas, figure by Grace Telford.

59 Currently: on a single core, embeds all data, all data in memory. Near future: Nyström extension, lazy evaluations, multiple charts. Next: gigaman? Scalable geometric/statistical tasks (search for optimal ε, Riemannian Relaxation, semi-supervised learning, clustering).

60 Outline: What is non-linear dimension reduction?; Metric Manifold Learning; Estimating the Riemannian metric; Riemannian Relaxation; Scalable manifold learning (megaman); An application to scientific data

61 Manifold learning for SDSS Spectra of Galaxies (more in the next talk!). Main sample of galaxy spectra from the Sloan Digital Sky Survey (675,000 spectra originally in 3750 dimensions); data curated by Grace Telford, noise removal by Jake VanderPlas.

62 Choosing the embedding dimension

63 Embedding into 3 dimensions

64 Same embedding: only high density regions; another viewpoint. How distorted is this embedding?

65 How distorted is this embedding? (Ellipses represent G_i^{-1}.)

66 Riemannian Relaxation along principal curves: find principal curves.

67 Riemannian Relaxation along principal curves. Points near the principal curves, colored by log_10 ‖G_i‖ (0 means no distortion).

68 Riemannian Relaxation along principal curves. Points near the principal curves, colored by log_10 ‖G_i‖, after Riemannian Relaxation (0 means no distortion).

69 Riemannian Relaxation along principal curves All data after Riemannian Relaxation

70 Manifold learning for sciences and engineering

71 Manifold learning is for toy data and toy problems

72 Manifold learning is for toy data and toy problems. Manifold learning should be like PCA: tractable, automatic, a first step in the data processing pipeline.

73 Manifold learning is for toy data and toy problems. Manifold learning should be like PCA: tractable, automatic, a first step in the data processing pipeline. Metric Manifold Learning / megaman: use any ML algorithm, estimate the distortion by g and correct it (on demand); tractable for millions of data points; (in progress) implementing quantitative validation procedures (topology preservation, choice of ε); future: port classification, regression, clustering to the manifold setting.

74 Manifold Learning for engineering and the sciences: scientific discovery by quantitative/statistical data analysis; manifold learning as preprocessing for other tasks.

75 Manifold Learning for engineering and the sciences: scientific discovery by quantitative/statistical data analysis; manifold learning as preprocessing for other tasks.

76 Thank you
