Dimension reduction methods: Algorithms and Applications Yousef Saad Department of Computer Science and Engineering University of Minnesota


1 Dimension reduction methods: Algorithms and Applications Yousef Saad Department of Computer Science and Engineering University of Minnesota NASCA 2018, Kalamata July 6, 2018

2 Introduction, background, and motivation. Common goal of data mining methods: to extract meaningful information or patterns from data. This very broad area includes: data analysis, machine learning, pattern recognition, information retrieval, ... Main tools used: linear algebra; graph theory; approximation theory; optimization; ... In this talk: emphasis on dimension reduction techniques and the interrelations between techniques.

3 Introduction: a few factoids. We live in an era increasingly shaped by DATA: enormous volumes of bytes created every day; most of the data on the internet was created in the last few years; billions of internet users; millions of Google searches worldwide per minute (5.2 B/day); 15.2 million text messages worldwide per minute. A mixed blessing: opportunities & big challenges. The trend is re-shaping & energizing many research areas, including numerical linear algebra.

4 A huge potential: Health sciences

5 Recommending books or movies: recommender systems

6 A few sample (classes of) problems: Classification: benign vs. malignant, dangerous vs. safe, face recognition, digit recognition. Matrix completion: recommender systems. Projection-type methods: PCA, LSI, clustering, Eigenmaps, LLE, Isomap, ... Problems for graphs/networks: PageRank, analysis of graphs (node centrality, ...). Problems from computational statistics [Trace(inv(Cov)), Log det(A), Log Likelihood, ...].

7 A common tool: Dimension reduction. [Cartoon: "Dimension Reduction PLEASE!" (picture modified from an external source).]

8 Major tool of Data Mining: Dimension reduction. The goal is not so much to reduce size (& cost) but to: reduce noise and redundancy in the data before performing a task [e.g., classification as in digit/face recognition]; discover important features or parameters. The problem: given X = [x_1, ..., x_n] ∈ R^{m×n}, find a low-dimensional representation Y = [y_1, ..., y_n] ∈ R^{d×n} of X. Achieved by a mapping Φ : x ∈ R^m → y ∈ R^d such that Φ(x_i) = y_i, i = 1, ..., n.

9 [Diagram: the m×n data matrix X with column x_i is mapped to the d×n matrix Y with column y_i.] Φ may be linear: y_j = W^T x_j for all j, i.e., Y = W^T X, or nonlinear (implicit). The mapping Φ may be required to: preserve proximity? maximize variance? preserve a certain graph?

10 Basics: Principal Component Analysis (PCA). In Principal Component Analysis, W is computed to maximize the variance of the projected data:
max_{W ∈ R^{m×d}, W^T W = I} Σ_{i=1}^n ‖ y_i − (1/n) Σ_{j=1}^n y_j ‖_2^2 , with y_i = W^T x_i.
Leads to maximizing Tr[ W^T (X − μ e^T)(X − μ e^T)^T W ], where μ = (1/n) Σ_{i=1}^n x_i.
Solution: W = { d dominant eigenvectors } of the covariance matrix, i.e., the set of d dominant left singular vectors of X̄ = X − μ e^T.

11 SVD: X̄ = U Σ V^T, with U^T U = I, V^T V = I, Σ = Diag(σ_1, σ_2, ...). Optimal W = U_d, the matrix of the first d columns of U. This W also minimizes the reconstruction error: Σ_i ‖x_i − W W^T x_i‖^2 = Σ_i ‖x_i − W y_i‖^2. In some methods the recentering to zero is not done, i.e., X̄ is replaced by X.
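
A minimal numpy sketch of the PCA computation described above (function and variable names are mine, not from the slides): center the columns of X, take the d dominant left singular vectors as W, and project.

import numpy as np

def pca(X, d):
    # X is m x n with one data point per column; returns W (m x d) and Y = W^T Xbar (d x n)
    mu = X.mean(axis=1, keepdims=True)          # centroid mu
    Xbar = X - mu                               # recentered data Xbar = X - mu e^T
    U, S, Vt = np.linalg.svd(Xbar, full_matrices=False)
    W = U[:, :d]                                # d dominant left singular vectors
    Y = W.T @ Xbar                              # low-dimensional representation
    return W, Y, mu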

12 Unsupervised learning: methods that do not exploit labeled data. Example of digits: perform a 2-D projection; images of the same digit tend to cluster (more or less); such 2-D representations are popular for visualization. One can also try to find natural clusters in data, e.g., in materials. Basic clustering technique: K-means. [Figures: 2-D PCA projection of digit images; sketch of material classes: superconductors, multiferroics, superhard, catalytic, photovoltaic, ferromagnetic, thermoelectric.]

13 Example: Digit images (a random sample of 30)

14 [Figure: 2-D reductions of the digit images with four methods; panels titled "PCA digits", "LLE digits", "K-PCA digits", and "ONPP digits".]

15 DIMENSION REDUCTION EXAMPLE: INFORMATION RETRIEVAL

16 Application: Information Retrieval. Given: a collection of documents (the columns of a matrix A) and a query vector q. Representation: an m × n term-by-document matrix A; the query q is a (sparse) vector in R^m (a "pseudo-document"). Problem: find a column of A that best matches q. Vector space model: use cos(A(:, j), q), j = 1 : n, which requires the computation of A^T q. Literal matching is ineffective.

17 Common approach: Dimension reduction (SVD). LSI: replace A by a low-rank approximation [from the SVD]: A = U Σ V^T → A_k = U_k Σ_k V_k^T. Replace the similarity vector s = A^T q by s_k = A_k^T q. Main issues: 1) computational cost, 2) updates. Idea: replace A_k by A φ(A^T A), where φ is a filter function. Consider the step function (Heaviside): φ(x) = 0 for 0 ≤ x < σ_k^2 and φ(x) = 1 for σ_k^2 ≤ x ≤ σ_1^2. This would yield the same result as the truncated SVD, but it is not practical.
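
A small Python/numpy sketch of the two retrieval schemes above, assuming a dense term-by-document matrix A and query q (names and the normalization are mine; a real system would use sparse matrices):

import numpy as np

def cosine_scores(A, q):
    # vector space model: cos(A(:, j), q) for all documents j
    s = A.T @ q
    return s / (np.linalg.norm(A, axis=0) * np.linalg.norm(q) + 1e-12)

def lsi_scores(A, q, k):
    # LSI: replace A by its rank-k truncated SVD A_k = U_k S_k V_k^T, then s_k = A_k^T q
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    Ak = (U[:, :k] * S[:k]) @ Vt[:k, :]
    return Ak.T @ q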

18 Use of polynomial filters. Solution: use a polynomial approximation to φ. Note: s^T = q^T A φ(A^T A) requires only matrix-vector products. Ideal for situations where the data must be explored only once or a small number of times. Details skipped; see: E. Kokiopoulou and YS, "Polynomial Filtering in Latent Semantic Indexing for Information Retrieval", ACM SIGIR.
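
To illustrate why this needs only matrix-vector products, here is a hedged sketch (not the algorithm of the cited paper, which constructs a specific polynomial approximation of the step function): given coefficients c_0, ..., c_p of some filter polynomial φ, compute s = φ(A^T A) A^T q by repeated applications of A and A^T.

import numpy as np

def filtered_scores(A, q, coeffs):
    # s = phi(A^T A) A^T q with phi(x) = c0 + c1 x + ... + cp x^p, using only mat-vecs
    v = A.T @ q                      # A^T q
    s = np.zeros_like(v)
    power = v.copy()                 # (A^T A)^j A^T q, starting at j = 0
    for c in coeffs:
        s += c * power
        power = A.T @ (A @ power)    # one more application of A^T A
    return s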

19 IR: Use of the Lanczos algorithm (J. Chen, YS '09). The Lanczos algorithm is a projection method on the Krylov subspace Span{v, Av, ..., A^{m−1} v}. One can compute singular vectors with Lanczos and use them in LSI. Better: use the Lanczos vectors directly for the projection. K. Blom and A. Ruhe [SIMAX, vol. 26, 2005] perform a Lanczos run for each query [expensive]. Proposed: one Lanczos run with a random initial vector, then use the Lanczos vectors in place of the singular vectors. In summary: results comparable to those of the SVD at a much lower cost.
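
A sketch of one natural way to realize this in Python/numpy (my reading of the idea, not the exact algorithm of the papers cited): run m steps of symmetric Lanczos on A^T A from a random vector and use the resulting orthonormal Lanczos basis Q in place of the singular vectors, e.g., s ≈ Q Q^T (A^T q).

import numpy as np

def lanczos_basis(A, m, seed=0):
    # m steps of symmetric Lanczos on M = A^T A, applied matrix-free (only A @ x and A.T @ x)
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Q = np.zeros((n, m))
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev, beta = np.zeros(n), 0.0
    for j in range(m):
        Q[:, j] = q
        w = A.T @ (A @ q) - beta * q_prev
        alpha = q @ w
        w = w - alpha * q
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization (simple, not cheap)
        beta = np.linalg.norm(w)
        if beta < 1e-12:                            # invariant subspace reached: stop early
            return Q[:, :j + 1]
        q_prev, q = q, w / beta
    return Q

# approximate similarity scores: s = Q @ (Q.T @ (A.T @ q_query))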

20 Supervised learning: we now have data that is labeled. Example (health sciences): malignant vs. non-malignant. Example (materials): photovoltaic, hard, conductor, ... Example (digit recognition): digits 0, 1, ..., 9. [Figure: labeled data points a-g grouped into classes.]

21 Supervised learning (continued): the same labeled data, now with an unlabeled test point to be classified. [Figure: the same labeled points a-g plus a query point marked "?".]

22 Supervised learning: classification. Best illustration: the written-digit recognition example. Given: a set of labeled samples (the training set) and an (unlabeled) test image. Problem: find the label of the test image. [Figure: training data labeled Digit 0, Digit 1, Digit 2, ..., Digit 9, and test data labeled "Digit ?", with a dimension reduction step in between.] Roughly speaking: we seek a dimension reduction such that recognition is more effective in the low-dimensional space.

23 Basic method: K-nearest neighbors (KNN) classification. Idea of a voting system: compute the distances between the test sample and the training samples; get the k nearest neighbors (here k = 8); the predominant class among these k items is assigned to the test sample (the point marked "?" here).
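
A minimal Python sketch of this voting rule (columns of X_train are the training samples, as in the slides; names are mine):

import numpy as np
from collections import Counter

def knn_classify(X_train, labels, x_test, k=8):
    # distances from the test sample to every training sample (one per column)
    d = np.linalg.norm(X_train - x_test[:, None], axis=0)
    nearest = np.argsort(d)[:k]                      # indices of the k nearest neighbors
    return Counter(np.asarray(labels)[nearest]).most_common(1)[0][0]   # majority vote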

24 Supervised learning: Linear classification. Linear classifiers: find a hyperplane which best separates the data in classes A and B. Example of application: distinguish between SPAM and non-SPAM e-mails. [Figure: a linear classifier separating two classes.] Note: the world is non-linear. Often this is combined with kernels, which amounts to changing the inner product.

25 A harder case. [Figure: "Spectral Bisection (PDDP)" on a data set that is not linearly separable.] Use kernels to transform the data.

26 [Figure: "Projection with Kernels"; the data transformed with a Gaussian kernel (value of σ^2 shown on the slide).]

27 GRAPH-BASED TECHNIQUES

28 Graph-based methods. Start with a graph of the data, e.g., the graph of k nearest neighbors (k-NN graph). Want: perform a projection which preserves the graph in some sense. Define a graph Laplacean L = D − W, e.g., with w_ij = 1 if j ∈ Adj(i) and 0 otherwise, and D = diag(d_ii), d_ii = Σ_{j ≠ i} w_ij, where Adj(i) is the neighborhood of i (excluding i).
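
A small numpy sketch (brute force, suitable only for small n) of building the symmetrized k-NN graph and the Laplacean L = D − W with 0/1 weights, as defined above:

import numpy as np

def knn_laplacian(X, k):
    # X is m x n, one data point per column
    n = X.shape[1]
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]   # k nearest neighbors, excluding i itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                  # symmetrize the directed k-NN relation
    L = np.diag(W.sum(axis=1)) - W
    return W, L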

29 A side note: Graph partitioning. If x is a vector of signs (±1), then x^T L x = 4 × (number of edge cuts), where an edge cut is a pair (i, j) with x_i ≠ x_j. Consequence: this can be used for partitioning graphs, or for clustering [take p = sign(u_2), where u_2 is the eigenvector associated with the 2nd smallest eigenvalue].
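
The resulting partitioning recipe, as a short Python sketch (dense eigensolver, illustration only):

import numpy as np

def spectral_bipartition(L):
    # split by the sign of the eigenvector u2 of the 2nd smallest eigenvalue of L
    vals, vecs = np.linalg.eigh(L)   # eigenvalues returned in ascending order
    p = np.sign(vecs[:, 1])
    p[p == 0] = 1.0                  # put exact zeros on an arbitrary side
    return p                         # +1 / -1 label per node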

30 A few properties of graph Laplacean matrices. Let L be any matrix such that L = D − W, with D = diag(d_i), w_ij ≥ 0, and d_i = Σ_{j ≠ i} w_ij.
Property 1: for any x ∈ R^n: x^T L x = (1/2) Σ_{i,j} w_ij (x_i − x_j)^2.
Property 2 (generalization): for any Y ∈ R^{d×n}: Tr[Y L Y^T] = (1/2) Σ_{i,j} w_ij ‖y_i − y_j‖^2.

31 Property 3: for the particular L = I − (1/n) 1 1^T : X L X^T = X̄ X̄^T = n × [sample covariance matrix]. [Proof: 1) L is a projector: L^T L = L^2 = L, and 2) X L = X̄.] Hence PCA is equivalent to maximizing Σ_{i,j} ‖y_i − y_j‖^2.
Property 4 (graph partitioning): if x is a vector of signs (±1) and an edge cut is a pair (i, j) with x_i ≠ x_j, then x^T L x = 4 × (number of edge cuts). Can be used for partitioning graphs, or for clustering [take p = sign(u_2), where u_2 is the eigenvector associated with the 2nd smallest eigenvalue].

32 Example: the Laplacean eigenmaps approach. Laplacean Eigenmaps [Belkin-Niyogi '01] minimizes
F(Y) = Σ_{i,j=1}^n w_ij ‖y_i − y_j‖^2 subject to Y D Y^T = I.
Motivation: if ‖x_i − x_j‖ is small (original data), we want ‖y_i − y_j‖ to be small as well (low-dimensional data). The original data is used only indirectly, through its graph. Leads to an n × n sparse eigenvalue problem [in sample space].

33 The problem translates to:
min_{Y ∈ R^{d×n}, Y D Y^T = I} Tr[ Y (D − W) Y^T ].
Solution (sort eigenvalues increasingly): (D − W) u_i = λ_i D u_i; the rows of Y are u_1^T, ..., u_d^T. Note: one can assume D = I; this amounts to rescaling the data. The problem then becomes (I − W) u_i = λ_i u_i, i = 1, ..., d.
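
A compact sketch of the resulting computation (dense solver; assumes the graph is connected, so that D is positive definite and only the first eigenvector is trivial):

import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(W, d):
    Dv = W.sum(axis=1)
    L = np.diag(Dv) - W
    vals, vecs = eigh(L, np.diag(Dv))   # generalized problem (D - W) u = lambda D u
    Y = vecs[:, 1:d + 1].T              # skip the constant eigenvector (lambda = 0)
    return Y                            # d x n embedding, one column per data point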

34 Why the smallest eigenvalues here vs. the largest for PCA? Intuition: the graph Laplacean and the "unit" Laplacean I − (1/n) 1 1^T are very different: one involves a sparse graph (more like a discretized differential operator), the other a dense graph (more like a discretized integral operator). They should be treated as inverses of each other. This viewpoint is confirmed by what we learn from the kernel approach.

35 Locally Linear Embedding (Roweis-Saul '00). LLE is very similar to Eigenmaps. Main differences: 1) the graph Laplacean matrix is replaced by an affinity graph, and 2) the objective function is changed.
1. Graph: each x_i is written as a convex combination of its k nearest neighbors: x_i ≈ Σ_j w_ij x_j, with Σ_{j ∈ N_i} w_ij = 1. The optimal weights are computed (a local calculation) by minimizing ‖x_i − Σ_j w_ij x_j‖ for i = 1, ..., n.

36 2. Mapping: the y_i's should obey the same affinity as the x_i's. Minimize Σ_i ‖y_i − Σ_j w_ij y_j‖^2 subject to Y 1 = 0, Y Y^T = I.
Solution: (I − W^T)(I − W) u_i = λ_i u_i; y_i = u_i^T. The matrix (I − W^T)(I − W) replaces the graph Laplacean of eigenmaps.
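
A self-contained Python sketch of the two LLE steps above (brute-force neighbor search, dense solvers; the small regularization of the local Gram matrix is a standard practical fix, not from the slides):

import numpy as np

def lle(X, k, d, reg=1e-3):
    # X is m x n with one data point per column
    m, n = X.shape
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]
        Z = X[:, nbrs] - X[:, [i]]                 # neighbors shifted to x_i
        G = Z.T @ Z                                # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)         # regularize (needed when k > m)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                   # affinity weights, summing to 1
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1].T                      # skip the constant eigenvector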

37 Locality Preserving Projections (He-Niyogi '03). LPP is a linear dimensionality reduction technique. Recall the setting: we want V ∈ R^{m×d} and Y = V^T X. [Diagram: the m×n matrix X is mapped by V^T to the d×n matrix Y.] LPP starts with the same neighborhood graph as Eigenmaps: L = D − W is the graph Laplacean, with D = diag({Σ_j w_ij}).

38 The optimization problem is to solve
min_{Y ∈ R^{d×n}, Y D Y^T = I} Σ_{i,j} w_ij ‖y_i − y_j‖^2, with Y = V^T X.
Difference with eigenmaps: Y is a projection of the data X. Solution (sort eigenvalues increasingly): X L X^T v_i = λ_i X D X^T v_i, and y_{i,:} = v_i^T X. Note: essentially the same method appears in [Koren-Carmel '04], called weighted PCA [viewed from the angle of improving PCA].
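
A short sketch of the LPP computation (dense generalized eigensolver; the tiny shift of the right-hand side matrix is a practical safeguard I added, not part of the method):

import numpy as np
from scipy.linalg import eigh

def lpp(X, W, d):
    # X is m x n, W an affinity (weight) matrix on the n data points
    Dv = W.sum(axis=1)
    L = np.diag(Dv) - W
    A = X @ L @ X.T
    B = X @ np.diag(Dv) @ X.T
    B += 1e-9 * np.trace(B) * np.eye(X.shape[0])   # guard against a singular B
    vals, vecs = eigh(A, B)                        # X L X^T v = lambda X D X^T v
    V = vecs[:, :d]                                # d smallest eigenvalues
    return V, V.T @ X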

39 ONPP (Kokiopoulou and YS '05): Orthogonal Neighborhood Preserving Projections. A linear (orthogonal) version of LLE, obtained by writing Y in the form Y = V^T X. Same graph as LLE. Objective: preserve the affinity graph (as in LLE), but with the constraint Y = V^T X. Problem solved to obtain the mapping:
min_{V, V^T V = I} Tr[ V^T X (I − W^T)(I − W) X^T V ].
In short: in LLE, replace Y by V^T X.
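
And the corresponding ONPP sketch, reusing the LLE affinity matrix W computed in the earlier sketch (illustration only):

import numpy as np

def onpp(X, W, d):
    # V = eigenvectors of X (I - W)^T (I - W) X^T for the d smallest eigenvalues
    n = X.shape[1]
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(X @ M @ X.T)
    V = vecs[:, :d]                  # orthonormal: V^T V = I
    return V, V.T @ X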

40 Implicit vs. explicit mappings. In PCA the mapping Φ from the high-dimensional space (R^m) to the low-dimensional space (R^d) is explicitly known: y = Φ(x) = V^T x. In Eigenmaps and LLE the mapping φ is implicit and complex: we only know its values at the sample points, y_i = φ(x_i), i = 1, ..., n. It is difficult to get φ(x) for an arbitrary x not in the sample, which is inconvenient for classification: the out-of-sample extension problem.

41 DEMO

42 K-nearest neighbor graphs. Nearest-neighbor graphs are needed in: data mining, manifold learning, robot motion planning, computer graphics, ... Given: a set of n data points X = {x_1, ..., x_n} (the vertices) and a proximity measure between two data points x_i and x_j, measured by a quantity ρ(x_i, x_j). Want: for each point x_i, a list of the nearest neighbors of x_i (edges between x_i and these nodes). (We will make the definition less vague shortly.)

43 Building a nearest neighbor graph. Problem: build a nearest-neighbor graph from given data. [Figure: a point cloud ("Data") and the resulting graph ("Graph").] Will demonstrate the power of the Lanczos algorithm combined with a divide and conquer approach.

44 Two types of nearest neighbor graphs are often used: the ɛ-graph, whose edges consist of the pairs (x_i, x_j) such that ρ(x_i, x_j) ≤ ɛ, and the kNN graph, in which the nodes adjacent to x_i are the k nodes x_ℓ with the smallest distances ρ(x_i, x_ℓ). The ɛ-graph is undirected and geometrically motivated; issues: 1) it may result in disconnected components, 2) how to choose ɛ? kNN graphs are directed in general (this can be trivially fixed) and are especially useful in practice.

45 Divide and conquer KNN: key ingredient. The key ingredient is spectral bisection. Let the data matrix be X = [x_1, ..., x_n] ∈ R^{d×n}, each column a data point. Center the data: X̂ = [x̂_1, ..., x̂_n] = X − c e^T, where c is the centroid and e = ones(n, 1) (matlab notation). Goal: split X̂ into halves using a hyperplane. Method: Principal Direction Divisive Partitioning (D. Boley '98). Idea: use the largest singular triplet (σ, u, v) of X̂, with u^T X̂ = σ v^T.

46 The hyperplane is defined by ⟨u, x⟩ = 0, i.e., it splits the set of data points into the two subsets X_+ = {x_i : u^T x̂_i ≥ 0} and X_− = {x_i : u^T x̂_i < 0}. [Figure: the hyperplane with the direction u and the "+" and "−" sides.] Note that u^T x̂_i = u^T X̂ e_i = σ v^T e_i.

47 Equivalently, X_+ = {x_i : v_i ≥ 0} and X_− = {x_i : v_i < 0}, where v_i is the i-th entry of v. In practice, this criterion is replaced by X_+ = {x_i : v_i ≥ med(v)} and X_− = {x_i : v_i < med(v)}, where med(v) is the median of the entries of v. For the largest singular triplet (σ, u, v) of X̂: use the Golub-Kahan-Lanczos algorithm, or Lanczos applied to X̂ X̂^T or X̂^T X̂. Cost (assuming s Lanczos steps): O(n · d · s); usually d is very small.
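
A compact Python sketch of this spectral-bisection step (scipy's svds stands in for Golub-Kahan-Lanczos; names are mine):

import numpy as np
from scipy.sparse.linalg import svds

def spectral_bisection(X):
    # X is d x n, one data point per column; returns the index sets X_+ and X_-
    c = X.mean(axis=1, keepdims=True)
    Xhat = X - c                              # centered data
    u, s, vt = svds(Xhat, k=1)                # largest singular triplet (sigma, u, v)
    v = vt.ravel()
    med = np.median(v)
    return np.where(v >= med)[0], np.where(v < med)[0]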

48 Two divide and conquer algorithms. Overlap method: divide the current set into two overlapping subsets X_1, X_2. Glue method: divide the current set into two disjoint subsets X_1, X_2, plus a third set X_3 called the gluing set. [Figure: the hyperplane splits the data into X_1 and X_2 (overlap method), or into X_1, X_3, X_2 (glue method).]

49 The Overlap Method. Divide the current set X into two overlapping subsets:
X_1 = {x_i : v_i ≥ −h_α(S_v)} and X_2 = {x_i : v_i < h_α(S_v)},
where S_v = { |v_i| : i = 1, 2, ..., n } and h_α(·) is a function that returns an element larger than (100α)% of those in S_v. Rationale: to ensure that the two subsets overlap by (100α)% of the data, i.e., |X_1 ∩ X_2| ≈ α |X|.

50 The Glue Method. Divide the set X into two disjoint subsets X_1 and X_2 together with a gluing subset X_3:
X_1 ∪ X_2 = X, X_1 ∩ X_2 = ∅, X_1 ∩ X_3 ≠ ∅, X_2 ∩ X_3 ≠ ∅.
Criterion used for splitting:
X_1 = {x_i : v_i ≥ 0}, X_2 = {x_i : v_i < 0}, X_3 = {x_i : −h_α(S_v) ≤ v_i < h_α(S_v)}.
Note: the gluing subset X_3 here is just the intersection of the sets X_1, X_2 of the overlap method.

51 Approximate kNN Graph Construction: The Overlap Method
function G = kNN-Overlap(X, k, α)
  if |X| < n_k then
    G ← kNN-BruteForce(X, k)
  else
    (X_1, X_2) ← Divide-Overlap(X, α)
    G_1 ← kNN-Overlap(X_1, k, α)
    G_2 ← kNN-Overlap(X_2, k, α)
    G ← Conquer(G_1, G_2)
    Refine(G)
  end if
end function

52 Approximate kNN Graph Construction: The Glue Method
function G = kNN-Glue(X, k, α)
  if |X| < n_k then
    G ← kNN-BruteForce(X, k)
  else
    (X_1, X_2, X_3) ← Divide-Glue(X, α)
    G_1 ← kNN-Glue(X_1, k, α)
    G_2 ← kNN-Glue(X_2, k, α)
    G_3 ← kNN-Glue(X_3, k, α)
    G ← Conquer(G_1, G_2, G_3)
    Refine(G)
  end if
end function
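
A runnable Python sketch of the glue recursion (heavily simplified: the divide step uses a dense SVD instead of Lanczos, the Refine step is omitted, and the brute-force threshold nmin and all names are illustrative):

import numpy as np

def knn_bruteforce(X, idx, k, G):
    # exact k-NN among the points whose global indices are in idx
    Xs = X[:, idx]
    D2 = ((Xs[:, :, None] - Xs[:, None, :]) ** 2).sum(axis=0)
    for a, i in enumerate(idx):
        for b in np.argsort(D2[a])[1:k + 1]:
            G[i].add(idx[b])

def knn_glue(X, idx, k, alpha, G, nmin=50):
    if len(idx) < nmin:
        knn_bruteforce(X, idx, k, G)
        return
    Xs = X[:, idx]
    U, S, Vt = np.linalg.svd(Xs - Xs.mean(axis=1, keepdims=True), full_matrices=False)
    v = Vt[0]                                   # right singular vector of sigma_1
    h = np.quantile(np.abs(v), alpha)           # h_alpha(S_v)
    X1, X2 = idx[v >= 0], idx[v < 0]
    X3 = idx[(v >= -h) & (v < h)]               # gluing set
    if len(X1) == 0 or len(X2) == 0:            # degenerate split: fall back to brute force
        knn_bruteforce(X, idx, k, G)
        return
    for part in (X1, X2, X3):
        knn_glue(X, part, k, alpha, G, nmin)

# usage: G = {i: set() for i in range(n)}; knn_glue(X, np.arange(n), k, 0.1, G);
# the conquer step then keeps, for each i, the k closest points collected in G[i].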

53 Theorem: the time complexity of the overlap method is T_o(n) = Θ(d n^{t_o}), where t_o = log_{2/(1+α)} 2 = 1 / (1 − log_2(1 + α)).
Theorem: the time complexity of the glue method is T_g(n) = Θ(d n^{t_g} / α), where t_g is the solution of the equation 2/2^t + α^t = 1.
Example: when α = 0.1, then t_o ≈ 1.16 while t_g ≈ 1.12.

54 Multilevel techniques in brief. Divide and conquer paradigms, as well as multilevel methods in the sense of domain decomposition. Main principle: it is very costly to do an SVD [or Lanczos] on the whole set; why not find a smaller set on which to do the analysis, without too much loss? Tools used: graph coarsening, divide and conquer. For text data we use hypergraphs.

55 Multilevel Dimension Reduction. Main idea: coarsen for a few levels; use the resulting data set X_r to find a projector P from R^m to R^d; P can then be used to project the original data or new data. [Diagram: the graph of X (m × n) is coarsened level by level down to a last level X_r (m × n_r); the projector computed on X_r yields Y_r (d × n_r) and is applied to the full data to give Y (d × n).] Gain: dimension reduction is done with a much smaller set. Hope: not much loss compared to using the whole data.

56 Making it work: use of hypergraphs for sparse data. [Figure: a sparse m × n data matrix A (nonzero entries marked *) and the corresponding hypergraph H = (V, E); the columns a, b, c, d, e are the vertices, and nets such as "net d" and "net e" group the columns that share a nonzero in a given row.] Left: a (sparse) data set of n entries in R^m represented by a matrix A ∈ R^{m×n}. Right: the corresponding hypergraph H = (V, E), with vertex set V representing the columns of A.

57 Hypergraph coarsening uses column matching, similar to a common technique used in graph partitioning. Compute the (nonzero) inner product ⟨a^(i), a^(j)⟩ between two vertices i and j, i.e., the i-th and j-th columns of A. Note: ⟨a^(i), a^(j)⟩ = ‖a^(i)‖ ‖a^(j)‖ cos θ_ij.
Modification 1: with a parameter 0 < ɛ < 1, match two vertices, i.e., columns, only if the angle between them satisfies tan θ_ij ≤ ɛ.
Modification 2: scale the coarsened columns. If i and j are matched and ‖a^(i)‖_0 ≥ ‖a^(j)‖_0, replace a^(i) and a^(j) by the single column c^(ℓ) = sqrt(1 + cos^2 θ_ij) · a^(i).
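
A hedged Python sketch of one coarsening level (greedy O(n^2) matching on angles; the choice of which matched column to keep is simplified relative to the slide, and all names are mine):

import numpy as np

def coarsen_columns(A, eps):
    m, n = A.shape
    norms = np.linalg.norm(A, axis=0)
    done = np.zeros(n, dtype=bool)
    cols = []
    # tan(theta) <= eps  is equivalent to  cos(theta) >= 1 / sqrt(1 + eps^2)
    cos_min = 1.0 / np.sqrt(1.0 + eps ** 2)
    for i in range(n):
        if done[i]:
            continue
        best, best_cos = -1, 0.0
        for j in range(i + 1, n):
            if done[j] or norms[i] == 0 or norms[j] == 0:
                continue
            c = abs(A[:, i] @ A[:, j]) / (norms[i] * norms[j])
            if c > best_cos:
                best, best_cos = j, c
        if best >= 0 and best_cos >= cos_min:
            done[i] = done[best] = True
            cols.append(np.sqrt(1.0 + best_cos ** 2) * A[:, i])   # scaled merged column
        else:
            done[i] = True
            cols.append(A[:, i])                                  # unmatched column kept as is
    return np.column_stack(cols)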

58 Call C the coarsened matrix obtained from A using the approach just described.
Lemma: let C ∈ R^{m×c} be the coarsened matrix obtained by one level of coarsening of A ∈ R^{m×n}, with columns a^(i) and a^(j) matched only if tan θ_ij ≤ ɛ. Then
| x^T A A^T x − x^T C C^T x | ≤ 3 ɛ ‖A‖_F^2
for any x ∈ R^m with ‖x‖_2 = 1. A very simple bound on Rayleigh quotients, valid for any x; it implies some bounds on singular values and norms (skipped).

59 Tests: comparing singular values. [Figure: singular values 20 to 50 and their approximations ("Exact", "power it1", "coarsen", "coars+zs", "rand") for the datasets CRANFIELD (left), MEDLINE (middle), and TIME (right).]

60 Low-rank approximation: coarsening, random sampling, and random sampling + coarsening. Error measures: Err1 = ‖A − H_k H_k^T A‖_F and Err2 = (1/k) Σ_{i=1}^k |σ̂_i − σ_i| / σ_i. [Table: for each dataset (Kohonen, aft, FA, chipcool, brainpc, scfxm1-2b, thermomechtc, Webbase-1M) the sizes n, k, c and the errors Err1, Err2 for coarsening and for random sampling; the numerical entries did not survive the transcription.]
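
The two error measures can be computed as follows (a sketch only; it does not reproduce the experiments or the lost table entries):

import numpy as np

def lowrank_errors(A, C, k):
    # H_k = k dominant left singular vectors of the reduced matrix C (coarsened or sampled)
    Uc, Sc, _ = np.linalg.svd(C, full_matrices=False)
    Hk = Uc[:, :k]
    err1 = np.linalg.norm(A - Hk @ (Hk.T @ A), 'fro')
    sigma = np.linalg.svd(A, compute_uv=False)[:k]        # exact singular values (for Err2 only)
    err2 = np.mean(np.abs(Sc[:k] - sigma) / sigma)
    return err1, err2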

61 Conclusion. *Many* interesting new matrix problems arise in areas that involve the effective mining of data. Among the most pressing issues is that of reducing computational cost [SVD, SDP, ..., too costly]. Many online resources are available. Huge potential in areas like materials science, though inertia has to be overcome. To a researcher in computational linear algebra: a tsunami of change in the types of problems, algorithms, frameworks, culture, ... But change should be welcome.

62 "When one door closes, another opens; but we often look so long and so regretfully upon the closed door that we do not see the one which has opened for us." Alexander Graham Bell. In the words of Lao Tzu: "If you do not change directions, you may end up where you are heading." Thank you! Visit my web-site.
