Uncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization


1 Uncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization. Haiping Lu (1), K. N. Plataniotis (1), A. N. Venetsanopoulos (1,2). (1) Department of Electrical & Computer Engineering, University of Toronto; (2) Ryerson University. The 25th International Conference on Machine Learning (ICML 2008).

2 Outline. 1. Motivation. 2. The Proposed UMPCA Algorithm: Tensor-to-Vector Projection; Uncorrelated Multilinear PCA (UMPCA). 3. Experimental Evaluations: Experimental Setup; Experimental Results. 4. Conclusions.

3 Tensorial Data. Tensor: a multidimensional array, generalizing the vector (first-order) and the matrix (second-order); a tensor with N modes is an Nth-order tensor. Wide range of applications: images, video sequences, streaming and mining data.

4 Dimensionality Reduction Problem. Tensor objects are usually high-dimensional, leading to the curse of dimensionality: they are computationally expensive to handle, and many classifiers perform poorly in high-dimensional spaces given a small number of training samples. A class of tensor objects is typically highly constrained to a subspace, a manifold of intrinsically low dimension. Dimensionality reduction (feature extraction): a transformation to a low-dimensional space that retains most of the underlying structure.


6 Focus: Unsupervised Dimensionality Reduction with PCA. Linear method: principal component analysis (PCA). It produces uncorrelated features and retains as much of the variation as possible. However, reshaping tensors into vectors incurs high computational and memory demand and breaks the natural structure of the original data. Multilinear methods: feature extraction directly from tensors. Tensor rank-one decomposition (TROD) [Shashua & Levin, 2001]. Two-dimensional PCA (2DPCA) [Yang et al., 2004]. Generalized low rank approximation of matrices (GLRAM) [Ye, 2005] & generalized PCA (GPCA) [Ye et al., 2004]. Concurrent subspaces analysis (CSA) [Xu et al., 2005]. Multilinear PCA (MPCA) [Lu et al., 2008].

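To make the vectorization issue concrete, here is a minimal sketch (hypothetical sizes, plain NumPy, not the authors' code) contrasting PCA on vectorized image tensors with keeping the tensor structure; the scatter matrix of the vectorized data grows quadratically with the number of pixels, while mode-wise methods only ever form small mode-wise matrices.

```python
import numpy as np

# Hypothetical data: M = 100 grayscale images of size 32 x 32 (second-order tensors).
M, I1, I2 = 100, 32, 32
rng = np.random.default_rng(0)
X = rng.standard_normal((M, I1, I2))

# Linear PCA: reshape each tensor into a vector of length I1*I2 = 1024.
X_vec = X.reshape(M, -1)                    # (M, 1024); spatial structure is lost
X_vec_c = X_vec - X_vec.mean(axis=0)
cov = X_vec_c.T @ X_vec_c                   # 1024 x 1024 scatter matrix
eigvals, eigvecs = np.linalg.eigh(cov)
top_pc = eigvecs[:, -1]                     # first principal component (length 1024)

# A multilinear method instead works with small mode-wise quantities,
# e.g. one projection vector of length I1 = 32 and one of length I2 = 32,
# so only 32 x 32 scatter matrices are ever formed.
print(cov.shape, (I1, I1), (I2, I2))
```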

8 Question to be Answered. Uncorrelated features result in minimum redundancy, ensure linear independence among features, and simplify the classification task. Question: can we extract uncorrelated features directly from tensor objects in an unsupervised way?


10 Notations. Vector: lowercase boldface, e.g., x. Matrix: uppercase boldface, e.g., U. Tensor: calligraphic letter, e.g., A. Subscript p: the feature index. Subscript m: the training-sample index. Superscript (n): the n-mode. Superscript T: the transpose. Operator $\times_n$: the n-mode multiplication.

11 Elementary Multilinear Projection (EMP). An EMP projects a tensor to a scalar through N unit projection vectors, one per mode: $y = \mathcal{X} \times_1 \mathbf{u}^{(1)T} \times_2 \mathbf{u}^{(2)T} \cdots \times_N \mathbf{u}^{(N)T}$.

12 Tensor-to-Vector Projection (TVP). A TVP consists of P EMPs and projects a tensor to a P-dimensional vector: $\mathbf{y} = \mathcal{X} \times_{n=1}^{N} \{\mathbf{u}_p^{(n)T}, n = 1, \ldots, N\}_{p=1}^{P}$.
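As an illustration, here is a minimal NumPy sketch of an EMP and a TVP under the notation above (the helper names `emp` and `tvp` and the random projection vectors are my own, not from the slides):

```python
import numpy as np

def emp(X, us):
    """Elementary multilinear projection: contract tensor X with one unit
    vector per mode, yielding the scalar y = X x_1 u1^T x_2 ... x_N uN^T."""
    y = X
    for u in us:
        # Contract the current first mode with the corresponding vector.
        y = np.tensordot(u, y, axes=([0], [0]))
    return float(y)

def tvp(X, emps):
    """Tensor-to-vector projection: stack the outputs of P EMPs."""
    return np.array([emp(X, us) for us in emps])

# Hypothetical third-order tensor sample (N = 3) and P = 2 EMPs.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 6, 7))
emps = []
for _ in range(2):
    us = [rng.standard_normal(d) for d in X.shape]
    us = [u / np.linalg.norm(u) for u in us]    # unit-norm projection vectors
    emps.append(us)

print(tvp(X, emps))    # a 2-dimensional feature vector
```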


14 The UMPCA Problem Formulation.
Input: tensorial training samples $\{\mathcal{X}_m \in \mathbb{R}^{I_1 \times \cdots \times I_N}, m = 1, \ldots, M\}$ and the desired subspace dimensionality $P$.
Objective: $\{\mathbf{u}_p^{(n)T}, n = 1, \ldots, N\} = \arg\max S_{T_p}^{y}$, where $S_{T_p}^{y} = \sum_{m=1}^{M} (y_{m_p} - \bar{y}_p)^2$,
subject to $\mathbf{u}_p^{(n)T} \mathbf{u}_p^{(n)} = 1$ and $\frac{\mathbf{g}_p^T \mathbf{g}_q}{\|\mathbf{g}_p\| \, \|\mathbf{g}_q\|} = \delta_{pq}$, $p, q = 1, \ldots, P$,
where $\mathbf{g}_p$ is the $p$th coordinate vector, with $\mathbf{g}_p(m) = y_{m_p} = \mathcal{X}_m \times_{n=1}^{N} \{\mathbf{u}_p^{(n)T}, n = 1, \ldots, N\}$.
Output: the TVP $\{\mathbf{u}_p^{(n)T}, n = 1, \ldots, N\}_{p=1}^{P}$ satisfying the objective criterion.

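To make the objective and the zero-correlation constraint concrete, a small self-contained sketch (my own helper names, hypothetical data) that computes the captured variance $S_{T_p}^{y}$ of a coordinate vector and the normalized correlation between two coordinate vectors:

```python
import numpy as np

def captured_variance(g_p):
    """Total scatter S^y_{T_p} = sum_m (y_{m,p} - mean_p)^2 of the p-th coordinate vector."""
    return float(np.sum((g_p - g_p.mean()) ** 2))

def correlation(g_p, g_q):
    """Normalized inner product g_p^T g_q / (||g_p|| ||g_q||); UMPCA requires this to be delta_pq."""
    return float(g_p @ g_q / (np.linalg.norm(g_p) * np.linalg.norm(g_q)))

# Hypothetical coordinate vectors for M = 8 training samples.
rng = np.random.default_rng(1)
g1 = rng.standard_normal(8)
g2 = rng.standard_normal(8)
print(captured_variance(g1), correlation(g1, g2))
```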

19 The Approach of Successive Maximization.
1. Determine the first EMP $\{\mathbf{u}_1^{(n)T}, n = 1, \ldots, N\}$ by maximizing $S_{T_1}^{y}$ without any constraint.
2. Determine the second EMP $\{\mathbf{u}_2^{(n)T}, n = 1, \ldots, N\}$ by maximizing $S_{T_2}^{y}$ subject to the constraint $\mathbf{g}_2^T \mathbf{g}_1 = 0$.
3. Determine the third EMP $\{\mathbf{u}_3^{(n)T}, n = 1, \ldots, N\}$ by maximizing $S_{T_3}^{y}$ subject to the constraints $\mathbf{g}_3^T \mathbf{g}_1 = 0$ and $\mathbf{g}_3^T \mathbf{g}_2 = 0$, and so on for $p = 4, \ldots, P$.

20 The Alternating Projection Method.
The need for an iterative solution: simultaneous determination of the N sets of projection vectors is infeasible. Alternating projection: solve for one set with all the other sets fixed and iterate (as in alternating least squares).
The alternating projection method, for mode $n^*$:
1. Assume that $\{\mathbf{u}_p^{(n)}, n \neq n^*\}$ is given.
2. Project $\{\mathcal{X}_m\}$ in these $N - 1$ modes to the partial multilinear projections $\{\tilde{\mathbf{y}}_{m_p}^{(n^*)}\}$.
3. Determine the $\mathbf{u}_p^{(n^*)}$ that projects $\{\tilde{\mathbf{y}}_{m_p}^{(n^*)}\}$ onto a line with maximal variance, subject to the zero-correlation constraint: a PCA with input $\{\tilde{\mathbf{y}}_{m_p}^{(n^*)}\}$ and total scatter matrix $S_{T_p}^{(n^*)}$.

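A minimal sketch of one alternating-projection step for the first EMP ($p = 1$), in plain NumPy with hypothetical sizes; it computes the partial projections $\tilde{\mathbf{y}}_{m_1}^{(n^*)}$ with the other modes' vectors fixed, forms the mode-$n^*$ total scatter matrix, and takes its leading eigenvector:

```python
import numpy as np

def partial_projection(X, us, n_star):
    """Project tensor X in all modes except n_star, leaving a vector of length I_{n_star}."""
    y = X
    # Contract higher modes first so the remaining axis indices stay valid.
    for n in reversed(range(X.ndim)):
        if n != n_star:
            y = np.tensordot(y, us[n], axes=([n], [0]))
    return y

# Hypothetical training set: M third-order tensors of size 5 x 6 x 7.
rng = np.random.default_rng(0)
M, dims = 20, (5, 6, 7)
Xs = [rng.standard_normal(dims) for _ in range(M)]

# Fixed (initialized) projection vectors for all modes; update mode n_star = 1.
us = [np.ones(d) / np.sqrt(d) for d in dims]
n_star = 1

Y = np.stack([partial_projection(X, us, n_star) for X in Xs])   # (M, I_{n_star})
Y_c = Y - Y.mean(axis=0)
S_T = Y_c.T @ Y_c                                               # mode-n* total scatter matrix

# For p = 1 there is no correlation constraint: take the leading unit eigenvector.
eigvals, eigvecs = np.linalg.eigh(S_T)
us[n_star] = eigvecs[:, -1]
print(us[n_star].shape, eigvals[-1])
```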

22 The UMPCA Solution.
For $p = 1$: $\mathbf{u}_1^{(n^*)}$ is the unit eigenvector of $S_{T_1}^{(n^*)}$ associated with the largest eigenvalue.
For $p > 1$:
1. Let $\tilde{\mathbf{Y}}_p^{(n^*)} = [\tilde{\mathbf{y}}_{1_p}^{(n^*)}, \tilde{\mathbf{y}}_{2_p}^{(n^*)}, \ldots, \tilde{\mathbf{y}}_{M_p}^{(n^*)}]$.
2. A reformulation: $\mathbf{u}_p^{(n^*)} = \arg\max \mathbf{u}_p^{(n^*)T} S_{T_p}^{(n^*)} \mathbf{u}_p^{(n^*)}$, subject to $\mathbf{u}_p^{(n^*)T} \mathbf{u}_p^{(n^*)} = 1$ and $\mathbf{u}_p^{(n^*)T} \tilde{\mathbf{Y}}_p^{(n^*)} \mathbf{g}_q = 0$, $q = 1, \ldots, p - 1$.


26 Solution for p > 1.
Theorem. The solution to the UMPCA problem (for $p > 1$) is the (unit-length) eigenvector corresponding to the largest eigenvalue of the following eigenvalue problem:
$\Psi_p^{(n^*)} S_{T_p}^{(n^*)} \mathbf{u} = \lambda \mathbf{u}$, where
$\Psi_p^{(n^*)} = \mathbf{I}_{I_{n^*}} - \tilde{\mathbf{Y}}_p^{(n^*)} \mathbf{G}_{p-1} \Phi_p^{-1} \mathbf{G}_{p-1}^{T} \tilde{\mathbf{Y}}_p^{(n^*)T}$,
$\Phi_p = \mathbf{G}_{p-1}^{T} \tilde{\mathbf{Y}}_p^{(n^*)T} \tilde{\mathbf{Y}}_p^{(n^*)} \mathbf{G}_{p-1}$,
$\mathbf{G}_{p-1} = [\mathbf{g}_1\ \mathbf{g}_2 \cdots \mathbf{g}_{p-1}] \in \mathbb{R}^{M \times (p-1)}$,
and $\mathbf{I}_{I_{n^*}}$ is an identity matrix of size $I_{n^*} \times I_{n^*}$.

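A minimal NumPy sketch of the theorem for a single mode update (hypothetical sizes; `Y_tilde` stands for $\tilde{\mathbf{Y}}_p^{(n^*)}$ of size $I_{n^*} \times M$ and `G_prev` for $\mathbf{G}_{p-1}$; a sketch under these assumptions, not the authors' code):

```python
import numpy as np

def constrained_max_variance_direction(S_T, Y_tilde, G_prev):
    """Largest-eigenvalue eigenvector of Psi @ S_T, where Psi enforces
    u^T Y_tilde g_q = 0 for the previously extracted coordinate vectors g_q."""
    I_n = S_T.shape[0]
    YG = Y_tilde @ G_prev                               # I_n x (p-1)
    Phi = G_prev.T @ Y_tilde.T @ Y_tilde @ G_prev       # (p-1) x (p-1)
    Psi = np.eye(I_n) - YG @ np.linalg.inv(Phi) @ YG.T
    # Psi @ S_T is not symmetric in general, so use a general eigen-solver.
    eigvals, eigvecs = np.linalg.eig(Psi @ S_T)
    k = int(np.argmax(eigvals.real))
    u = eigvecs[:, k].real
    return u / np.linalg.norm(u)                        # unit-length solution

# Hypothetical quantities: mode dimension I_n = 6, M = 20 samples, p - 1 = 2 previous features.
rng = np.random.default_rng(0)
Y_tilde = rng.standard_normal((6, 20))
Yc = Y_tilde - Y_tilde.mean(axis=1, keepdims=True)
S_T = Yc @ Yc.T                                         # mode-wise total scatter matrix
G_prev = rng.standard_normal((20, 2))
u = constrained_max_variance_direction(S_T, Y_tilde, G_prev)
print(u.shape, np.round(u @ (Y_tilde @ G_prev), 6))     # constraint check: approximately zero
```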


29 Experimental Setup. Data: a subset of the FERET face database. Maximum pose variation: 15 degrees. Minimum number of face images per subject: face images from 70 subjects. Preprocessing: manually cropped and aligned, normalized to pixels, 256 gray levels. Classification: nearest neighbor classifier with the Euclidean distance measure. Performance measure: rank-1 identification rate.

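For reference, a minimal sketch (my own helper, not from the slides) of rank-1 identification with a nearest-neighbor classifier under the Euclidean distance, applied to already-extracted feature vectors:

```python
import numpy as np

def rank1_identification_rate(gallery, gallery_ids, probes, probe_ids):
    """Assign each probe the identity of its Euclidean nearest neighbor in the
    gallery; return the fraction of probes identified correctly."""
    # Pairwise squared Euclidean distances (probes x gallery).
    d2 = ((probes[:, None, :] - gallery[None, :, :]) ** 2).sum(axis=2)
    nearest = gallery_ids[np.argmin(d2, axis=1)]
    return float(np.mean(nearest == probe_ids))

# Hypothetical 10-dimensional feature vectors for a toy gallery/probe split.
rng = np.random.default_rng(0)
gallery = rng.standard_normal((70, 10))            # one training sample per subject (L = 1)
gallery_ids = np.arange(70)
probes = gallery + 0.1 * rng.standard_normal((70, 10))
probe_ids = np.arange(70)
print(rank1_identification_rate(gallery, gallery_ids, probes, probe_ids))
```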


33 Recognition Results for L = 1. L: the number of training samples per subject, so L = 1 is an extremely small sample size scenario. UMPCA outperforms the other three methods.

34 Recognition Results for L = 7. UMPCA outperforms the other three methods.

35 Examination of Variation Captured. (Plots of captured variation for L = 1 and L = 7.) The variation captured by UMPCA is much lower (due to the zero-correlation constraint and the TVP). Features capturing too little variation contribute little to recognition.

36 Examination of Correlations Among Features. (Plots of feature correlations for L = 1 and L = 7.) PCA and UMPCA: uncorrelated features. MPCA and TROD: correlated features.

37 Summary. UMPCA: uncorrelated feature extraction directly from tensor objects through a TVP. The solution to UMPCA: successive variance maximization and the alternating projection method. Evaluation: UMPCA outperforms PCA, MPCA and TROD in an unsupervised face recognition task, especially at lower dimensions.

38 Future Work. Application to other unsupervised learning tasks, e.g., clustering. Investigation of design issues: initialization, projection order and termination. Combination of UMPCA features with PCA, MPCA or TROD features.

39 Backup Slides: Basic Operations.
n-mode product: $(\mathcal{A} \times_n \mathbf{U})(i_1, \ldots, i_{n-1}, j_n, i_{n+1}, \ldots, i_N) = \sum_{i_n} \mathcal{A}(i_1, \ldots, i_N) \, \mathbf{U}(j_n, i_n)$.
Scalar product: $\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1} \cdots \sum_{i_N} \mathcal{A}(i_1, \ldots, i_N) \, \mathcal{B}(i_1, \ldots, i_N)$.
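A minimal NumPy illustration of these two operations for a third-order tensor (hypothetical sizes; `mode_n_product` is my own helper name):

```python
import numpy as np

def mode_n_product(A, U, n):
    """n-mode product A x_n U: contract mode n of tensor A with the columns of
    matrix U, placing the new index (rows of U) back in position n."""
    B = np.tensordot(A, U, axes=([n], [1]))   # the new axis appears last
    return np.moveaxis(B, -1, n)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
U = rng.standard_normal((2, 4))               # maps the second mode (size 4) to size 2

B = mode_n_product(A, U, n=1)                 # n = 1 is the second mode (0-indexed)
print(B.shape)                                # (3, 2, 5)

# Scalar product <A, A2>: sum over all entries of the elementwise product.
A2 = rng.standard_normal((3, 4, 5))
print(float(np.sum(A * A2)))
```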

40 Backup Slides: Design Issues in UMPCA. Initialization: the all-ones vector 1, normalized to unit length. Projection order: from mode 1 to mode N. Termination: a maximum number of iterations K.
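Finally, a compact end-to-end sketch combining the pieces above under these design choices (plain NumPy, toy data, my own function name `umpca`; a sketch of the procedure under the stated assumptions, not the authors' implementation):

```python
import numpy as np

def umpca(Xs, P, K=5):
    """Sketch of UMPCA: successive variance maximization solved by alternating
    projection. Xs has shape (M, I_1, ..., I_N); returns the list of P EMPs
    (one list of mode vectors per feature) and the M x P feature matrix."""
    M, dims = Xs.shape[0], Xs.shape[1:]
    N = len(dims)

    def project_all_but(X, us, n_star):
        y = X
        for n in reversed(range(N)):            # contract higher modes first
            if n != n_star:
                y = np.tensordot(y, us[n], axes=([n], [0]))
        return y                                # vector of length I_{n_star}

    emps, G = [], np.zeros((M, 0))
    for p in range(P):
        us = [np.ones(d) / np.sqrt(d) for d in dims]    # init: normalized all-ones vectors
        for _ in range(K):                              # terminate after K iterations
            for n_star in range(N):                     # projection order: mode 1 to mode N
                Yt = np.stack([project_all_but(X, us, n_star) for X in Xs], axis=1)
                Yc = Yt - Yt.mean(axis=1, keepdims=True)
                S_T = Yc @ Yc.T                         # mode-n* total scatter matrix
                if p == 0:
                    Mmat = S_T                          # unconstrained variance maximization
                else:                                   # zero correlation with g_1, ..., g_{p-1}
                    YG = Yt @ G
                    Psi = np.eye(dims[n_star]) - YG @ np.linalg.inv(YG.T @ YG) @ YG.T
                    Mmat = Psi @ S_T
                w, V = np.linalg.eig(Mmat)
                u = V[:, int(np.argmax(w.real))].real
                us[n_star] = u / np.linalg.norm(u)
        g_p = np.array([float(project_all_but(X, us, 0) @ us[0]) for X in Xs])
        G = np.column_stack([G, g_p])
        emps.append(us)
    return emps, G

# Toy run on hypothetical third-order tensor samples of size 8 x 6 x 4.
rng = np.random.default_rng(0)
Xs = rng.standard_normal((30, 8, 6, 4))
emps, G = umpca(Xs, P=3)
corr = G.T @ G / np.outer(np.linalg.norm(G, axis=0), np.linalg.norm(G, axis=0))
print(np.round(corr, 3))    # approximately the identity: the features are uncorrelated
```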
